Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
612084 | 1 | null | null | 2 | 18 | >
Consider independent observations $\{(y_i, x_{1i}, x_{2i}) : 1 \le i \le n\}$ from the regression model $y_i = \beta_1 x_{1i} + \beta_2 x_{2i} + e_i$, $i = 1, \ldots, n$, where $x_{1i}$ and $x_{2i}$ are scalar covariates, $\beta_1$ and $\beta_2$ are unknown scalar coefficients, and $e_i$ are uncorrelated errors with mean $0$ and variance $\sigma^2 > 0$. Instead of using the correct model, we obtain an estimate $\hat{\beta}_1$ of $\beta_1$ by minimizing $\sum_{i=1}^n (y_i -\beta_1 x_{1i})^2$. Find the bias and mean squared error of $\hat{\beta}_1$.
By minimizing $\sum_{i=1}^n (y_i -\beta_1 x_{1i})^2$ I found that $\hat{\beta}_1=\frac{\sum_{i=1}^n y_i x_{1i}}{\sum_{i=1}^n x_{1i}^2}$. Now, the bias of $\hat{\beta}_1$ is $E(\hat{\beta}_1)-\beta_1$. I'm not sure how to find $E\left(\frac{\sum_{i=1}^n y_i x_{1i}}{\sum_{i=1}^n x_{1i}^2}\right)$. Can someone please suggest any reading material to learn how to find this, or provide hints?
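As a hint in code form: substitute the true model for $y_i$ inside the estimator and take expectations term by term (with the covariates treated as fixed). A quick simulation, my own sketch with made-up covariates and coefficients, illustrates what the expectation works out to:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)  # hypothetical covariates, correlated
beta1, beta2 = 1.0, 2.0             # hypothetical true coefficients

# Treating the covariates as fixed, E[b1_hat] = beta1 + beta2 * S12 / S11,
# with S12 = sum(x1 * x2) and S11 = sum(x1^2): the omitted-variable bias
expected = beta1 + beta2 * (x1 @ x2) / (x1 @ x1)

estimates = []
for _ in range(5000):
    y = beta1 * x1 + beta2 * x2 + rng.normal(size=n)
    estimates.append((x1 @ y) / (x1 @ x1))  # b1_hat from the misspecified fit

print(np.mean(estimates), expected)  # the two agree closely
```

The bias is then the second term of `expected` minus nothing, i.e. $\beta_2 \sum_i x_{1i}x_{2i} / \sum_i x_{1i}^2$.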
| Linear Regression Restricted minimization | CC BY-SA 4.0 | null | 2023-04-06T07:47:57.793 | 2023-04-08T08:14:13.200 | 2023-04-08T08:14:13.200 | 339153 | 339153 | [
"regression"
] |
612085 | 2 | null | 611808 | 1 | null |
- The hierarchical structure is mainly for the "fixed" effects, but you do need to consider it when it comes to an autocorrelation structure. With both correlation in gamm() and rho in bam(), unless told otherwise, the model will assume a single long time series.
This behaviour may be what you want, but you may, for example in your case, want to nest the correlation structure within day rather than just the temporal ordering of the samples. Such a nesting would say that the autocorrelation structure operates at the lowest grouping level (within day) — you can't opt to nest it within year when you have sub-daily level data unless you create a variable that orders the samples at the sub-daily level within year.
With correlation you would use corAR1(form = ~ t | year) to indicate you want an AR(1) nested within year, where t orders the observations within year. To get this nested within site you just need to augment the right-hand side of the formula. One way to do this is to create a variable year_site = interaction(year, site, drop = TRUE) in your data and then modify the formula to be corAR1(form = ~ t | year_site). There may be more direct ways to do this within the formula but all the examples I have seen use a single grouping factor. With rho you need to create a logical vector that is FALSE everywhere, except at the first observation of each "series" that you want the correlation structure to be nested within.
Importantly, regardless of how you specify the AR(1), you are estimating a single parameter $\rho$ that operates within each level of the grouping variable (and equivalently for other ARMA terms). In that sense, the parameter is global: it describes a common autocorrelation structure within each level of the hierarchy.
- brms works essentially the same way. See https://paul-buerkner.github.io/brms/reference/autocor-terms.html and then the individual correlation functions linked from that page which document arguments such as time (for ordering the observations) and gr for the grouping factor.
| null | CC BY-SA 4.0 | null | 2023-04-06T07:49:15.950 | 2023-04-06T07:49:15.950 | null | null | 1390 | null |
612086 | 1 | null | null | 1 | 86 | How do I compute the conditional variance of a sum of three normally distributed random variables given two other random variables? Assume pairwise correlations exist and the following marginal distributions:
$X\sim \mathcal N(\mu_X,\sigma_X^2)$;
$Y\sim \mathcal N(\mu_Y,\sigma_Y^2)$;
$Z\sim \mathcal N(\mu_Z,\sigma_Z^2)$;
$\theta_1\sim \mathcal N(\mu_{\theta_1},\sigma_{\theta_1}^2)$;
$\theta_2\sim \mathcal N(\mu_{\theta_2},\sigma_{\theta_2}^2)$;
The multidimensional linear projection theorem can be directly applied to find the conditional mean and variance of random variable/partition of random variables A given random variable/partition of random variables B, but what if A is a combination of three random variables? I am new to the community, so please do not close the question without detailing what else is needed. Thanks.
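For what it's worth, the projection theorem does apply directly here if you write $S = X+Y+Z = a^\top V$ with $a = (1,1,1,0,0)^\top$, so that $\operatorname{Var}[S \mid \theta_1,\theta_2] = a^\top\Sigma a - \Sigma_{S\theta}\Sigma_{\theta\theta}^{-1}\Sigma_{\theta S}$. A numerical sketch with a made-up covariance matrix (not from the question) checks this against the residual variance of the best linear predictor:

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical positive-definite 5x5 covariance for (X, Y, Z, theta1, theta2)
A = rng.normal(size=(5, 5))
Sigma = A @ A.T + np.eye(5)

a = np.array([1.0, 1.0, 1.0, 0.0, 0.0])  # picks out S = X + Y + Z
idx = [3, 4]                             # positions of theta1, theta2

var_S = a @ Sigma @ a
cov_St = a @ Sigma[:, idx]               # Cov(S, (theta1, theta2))
Sigma_tt = Sigma[np.ix_(idx, idx)]

# Projection theorem: Var[S | theta] = Var(S) - Cov * Sigma_tt^{-1} * Cov'
cond_var = var_S - cov_St @ np.linalg.solve(Sigma_tt, cov_St)

# Monte Carlo check: for joint normals the conditional variance equals the
# residual variance of the linear regression of S on (theta1, theta2)
draws = rng.normal(size=(200_000, 5)) @ np.linalg.cholesky(Sigma).T
S = draws[:, :3].sum(axis=1)
T = np.column_stack([np.ones(len(S)), draws[:, 3:]])
resid = S - T @ np.linalg.lstsq(T, S, rcond=None)[0]
print(cond_var, resid.var())  # these agree
```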
| If $(X,Y,Z,\theta_1,\theta_2)$ $\sim \mathcal N_5(\mu,\Sigma)$, what is $\text{Var}[X+Y+Z|\theta_1,\theta_2]$? | CC BY-SA 4.0 | null | 2023-04-06T07:50:55.893 | 2023-04-06T14:09:15.303 | 2023-04-06T14:09:15.303 | 20519 | 384956 | [
"self-study",
"variance",
"conditional-probability",
"covariance",
"multivariate-normal-distribution"
] |
612087 | 2 | null | 612077 | 1 | null | Consider the dataset below. Let `y` be the fraud indicator and `x1`, `x2`, `x3` the other categories.
```
y x1 x2 x3
Y Y N N
N Y N N
N Y N N
Y N Y N
N N Y N
N N Y N
Y N N Y
N N N Y
N N N Y
```
As you can see, globally one third (about 33%) of the cases are fraudulent, and within each level of the `xN` categories one third of the cases are fraudulent as well. However, notice that for each `xN` category a different observation is fraudulent. So finding the fraudulent case via `x1` is different from finding it via `x2` or `x3`. They are definitely not the same, even though the probabilities are the same.
In this example, `y` is independent of the `xN`s, so it doesn't really matter, but in a real dataset such variables can have predictive power when taken together with the rest of the data. For example, say you sell insurance and your premium and non-premium customer categories have roughly the same percentage of fraud. A way of predicting fraud in the premium category won't help for the non-premium category, and vice versa; you would need both. If you ignored the categories, you wouldn't account for the different kinds of fraud in the two groups.
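A tiny script (plain Python, rebuilding the table above as parallel lists) confirms the equal fraud rates overall and within each category:

```python
# The toy dataset from the answer as parallel lists, one character per row
y  = list("YNNYNNYNN")
x1 = list("YYYNNNNNN")
x2 = list("NNNYYYNNN")
x3 = list("NNNNNNYYY")

def fraud_rate(rows):
    """Fraction of 'Y' labels in a sequence."""
    return sum(v == "Y" for v in rows) / len(rows)

def rate_given(x):
    """Fraud rate among the rows where the category indicator x is 'Y'."""
    sel = [yi for yi, xi in zip(y, x) if xi == "Y"]
    return fraud_rate(sel)

overall = fraud_rate(y)
print(overall, [rate_given(x) for x in (x1, x2, x3)])  # all equal to 1/3
```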
| null | CC BY-SA 4.0 | null | 2023-04-06T08:14:43.753 | 2023-04-06T08:31:32.267 | 2023-04-06T08:31:32.267 | 35989 | 35989 | null |
612089 | 1 | null | null | 4 | 165 | I'd like to estimate an integral of the following form using Monte Carlo method:
$$ \int_{t_1}^{t_2} g(t) \left[ \int_{- \infty}^{\infty} f(t, u) du \right] ^\gamma dt$$
In case of $\gamma$ being a positive integer (say, $2$) I can rewrite it as follows:
$$ \int_{t_1}^{t_2} \int_{- \infty}^{\infty} \int_{- \infty}^{\infty} g(t) f(t, u_1) f(t, u_2) du_1 du_2 dt$$
That is, I can simply take two independent unbiased estimates of the inner integral and multiply them. My question is: could something similar be done in case of fractional $\gamma$?
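To make the integer case concrete, here is a numerical sketch (my own made-up integrand, chosen so the inner integral is known exactly) showing that the product of two independent unbiased estimates is unbiased for the square:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical integrand: f(u) = c * phi(u) with phi the standard normal
# pdf, so the inner integral equals c exactly; we estimate it unbiasedly
# by importance sampling from a N(0, 2) proposal.
c, s = 0.7, np.sqrt(2.0)

def phi(u, sd=1.0):
    return np.exp(-0.5 * (u / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def inner_estimate():
    u = rng.normal(scale=s)
    return c * phi(u) / phi(u, s)   # unbiased for c

# gamma = 2: multiply two *independent* unbiased estimates
prods = [inner_estimate() * inner_estimate() for _ in range(100_000)]
print(np.mean(prods), c ** 2)  # both near 0.49
```

Squaring a single estimate instead would be biased upward by its variance, which is why the two factors must be independent.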
| Unbiased estimator of an integral raised to a power | CC BY-SA 4.0 | null | 2023-04-06T08:54:11.520 | 2023-04-07T13:35:13.283 | 2023-04-07T13:35:13.283 | 345280 | 345280 | [
"monte-carlo",
"numerical-integration"
] |
612090 | 1 | null | null | 2 | 34 | I am teaching myself GAMs. I have been looking for posts that help with finding and understanding the smooth functions, i.e., the basis functions, of a GAM fitted with `mgcv`. From a post I learned that I can use `smoothCon` to construct the basis functions without explicitly fitting a model. However, when I did so, I observed that the output of `smoothCon` and the basis obtained from the fitted model don't match.
Here is a reproducible example of what I did:
```
library(mgcv)
set.seed(1)
x.1=runif(80)
x.2=runif(80)
fx.1=sin(x.1*.2)
fx.2=cos(x.2)
fx=fx.1+fx.2
y=rbinom(80,1,fx/(1+exp(fx)))
gam.m= gam(y~s(x.1,bs="cr",k=4)+s(x.2,bs="cr",k=4), family=binomial, method="REML")
```
Now, I extracted basis functions
```
smooth.mgcv= predict.gam(gam.m,type="lpmatrix")
smooth.x.1= smooth.mgcv[,2:4] #basis functions of x.1#
```
Then, from `smoothCon`:
```
smooth.create=smoothCon(s(x.1,bs='cr',k=4),data=data.frame(x.1=x.1),knots=NULL)
```
These two give different outputs. How do these basis functions work?
| Why smoothCon gives different estimates of smooths than mgcv? | CC BY-SA 4.0 | null | 2023-04-06T09:00:31.353 | 2023-04-06T13:15:21.003 | 2023-04-06T09:12:35.600 | null | null | [
"generalized-additive-model",
"mgcv",
"smoothing"
] |
612091 | 1 | null | null | 0 | 24 | Suppose we have a recorded percentage (a statistic) for a population. If we take random samples, we might not observe that percentage until, in the extreme, we exhaust the population.
E.g., a box with $500{,}000$ balls, of which $25{,}000$ ($5\%$) are red and the rest are white.
My question is: what is the minimum/maximum sample size at which we can expect to see the $5\%$ of red balls that we know exists in the larger population? And not specifically for $500{,}000$, but for any population larger than $50{,}000$.
Are there any online tools/tables available that can provide these sizes for any population size $n$?
A margin of error is fine too, as long as it is configurable/reported.
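For illustration (a sketch I added; it does not resolve the min/max framing itself), the chance that a simple random sample even contains one red ball can be computed directly for the box example:

```python
N, K = 500_000, 25_000  # population size and number of red balls (5%)

def p_at_least_one_red(n):
    """Chance that a simple random sample of size n (without replacement)
    contains at least one red ball: 1 minus the chance all draws are white."""
    p_none = 1.0
    for i in range(n):
        p_none *= (N - K - i) / (N - i)
    return 1.0 - p_none

for n in (10, 50, 100):
    print(n, round(p_at_least_one_red(n), 3))  # approx. 0.401, 0.923, 0.994
```

Because $N$ is large relative to $n$, these values are almost identical to the with-replacement figure $1 - 0.95^n$.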
| Tools for reporting sample size that contains the percentage of the population | CC BY-SA 4.0 | null | 2023-04-06T09:26:13.533 | 2023-04-06T09:26:13.533 | null | null | 385062 | [
"sample-size",
"sample",
"population",
"percentage",
"population-attributable-fraction"
] |
612093 | 1 | null | null | 1 | 71 | I have made several models (RF, XGB and GLM) to predict a binary outcome, and they all achieved an AUC of approximately 0.8 and Brier scores of 0.1-0.15.
The test set is fairly small (n = 350), with n = 50 cases having the outcome.
I am trying to create calibration plots in RStudio and I am getting results that I don't understand.
At first I tried predtools, classifierplots and runway as in the example below and got results that all looked like the plot below:
Model:
```
RF_model <- randomForest(outcome ~ ., data = TRAIN_data)
RF_prediction$pred <- predict(RF_model, TEST_data, type = "prob")[,"no"]
```
and for the calibration plot (with the "probably" package):
```
RF_prediction %>% cal_plot_breaks (outcome, pred)
```
[](https://i.stack.imgur.com/m6BTU.png)
The sudden dive towards zero looked wrong to me.
I tried searching for more information and, after reading the excellent [Introduction to Reliability Diagrams for Probability Calibration](https://towardsdatascience.com/introduction-to-reliability-diagrams-for-probability-calibration-ed785b3f5d44), I realized I was probably using an incorrect data format and tried using relative frequencies instead. This created a nice S-shaped curve that looked too perfect and uniform to be believed.
[](https://i.stack.imgur.com/Pc3gS.png)
Finally I found ([Create calibration plot in R with vectors of predicted and observed values](https://stats.stackexchange.com/questions/606263/create-calibration-plot-in-r-with-vectors-of-predicted-and-observed-values)) on this site and ended up with the curves below after using the rms package and following syntax:
```
RF_model <- randomForest(outcome ~ ., data = TRAIN_data)
TEST_data$pred <- predict(RF_model, TEST_data, type = "prob")[,"no"]
plot <- val.prob(TEST_data$pred, TEST_data$outcome)
```
with the curve for the RF model as above:
[](https://i.stack.imgur.com/u5jYH.png)
together with the curves for the other two models:
[](https://i.stack.imgur.com/VReEe.png)
I need help understanding the following:
- Do the syntax and the plots seem correct?
- How do I interpret the way the curves “stop” at different predicted probabilities?
- How do I remove the annoying “overall” legend next to the curves? I need to write the names of the models there instead!
(I managed to get rid of all the statistics text with “logistic.cal = FALSE, statloc = FALSE” in the val.prob command and “flag = 0” in the plot.)
and on a more general note
- I have seen the terms reliability diagram and calibration plot used interchangeably. Are they the same thing with different names, or is there some subtle difference that is lost on me?
| Creating and interpreting calibration plots for several models with a binary outcome | CC BY-SA 4.0 | null | 2023-04-06T09:54:44.477 | 2023-04-07T11:40:53.427 | 2023-04-07T11:40:53.427 | 22047 | 385064 | [
"r",
"random-forest",
"reliability",
"calibration",
"scoring-rules"
] |
612097 | 2 | null | 503824 | 1 | null | Quaternions are a type of hypercomplex number that can be used to represent rotations in 3D space, and quaternion neural networks (QNNs) have been proposed as a way to better model the non-linearities of 3D data. However, in the context of chest X-rays, which are 2D images, it's not clear that QNNs would provide any benefit over traditional CNN architectures.
Regarding your question about RGB versus grayscale images, the decision depends on the specific details of your problem. Grayscale images have only one channel (i.e., intensity) per pixel, while RGB images have three channels (red, green, and blue) per pixel. Converting RGB to grayscale will result in a loss of information about color, but it may simplify the model and make it easier to train.
In general, if color information is relevant to the problem at hand, then using RGB images may be better. For example, if the presence of certain colors is indicative of the presence or absence of pneumonia, then RGB images could help the model learn these relationships. However, if color is not relevant or the model is struggling to learn from the RGB images, then grayscale images may be a better choice.
It's important to note that in some cases, converting RGB to grayscale can introduce its own biases and distortions, so it's always a good idea to experiment with both approaches and evaluate the performance of the model on a held-out test set.
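For reference, a common way to do the RGB-to-grayscale conversion mentioned above is a weighted sum of the channels; the weights below are the standard ITU-R BT.601 luminosity coefficients, though other conventions exist:

```python
import numpy as np

rng = np.random.default_rng(4)
rgb = rng.integers(0, 256, size=(8, 8, 3)).astype(float)  # toy "image"

# ITU-R BT.601 luminosity weights for R, G, B
weights = np.array([0.299, 0.587, 0.114])
gray = rgb @ weights  # collapses the channel axis
print(rgb.shape, "->", gray.shape)  # (8, 8, 3) -> (8, 8)
```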
| null | CC BY-SA 4.0 | null | 2023-04-06T10:14:13.017 | 2023-04-06T10:14:13.017 | null | null | 385069 | null |
612098 | 1 | 612106 | null | 0 | 50 | I am reading this [blog post](https://lilianweng.github.io/posts/2021-07-11-diffusion-models/#speed-up-diffusion-model-sampling) where the author talks about diffusion models. Let's keep diffusion out of the conversation for now. The author showcased that we can parameterize a Gaussian distribution by a desired standard deviation sigma as shown below:
[](https://i.stack.imgur.com/VIvXs.png)
I didn't get how the sigma value is inserted here. Can someone please elaborate this in detail?
| Parameterizing a Gaussian distribution | CC BY-SA 4.0 | null | 2023-04-06T10:36:30.477 | 2023-04-07T05:59:27.190 | null | null | 100976 | [
"normal-distribution",
"markov-process",
"gaussian-process",
"parameterization",
"diffusion"
] |
612099 | 1 | null | null | 1 | 17 | I'm currently researching a moderation model in SPSS with gender as the moderator. When I run PROCESS model 1 (moderation) with the option “only continuous variables that define products”, the result is: interaction effect not significant and main effect not significant. When running the model with “all variables that define products”, the result is: interaction effect not significant but main effect significant. How can that be? I think the best option is “only continuous variables” because of the categorical moderator. I hope someone can help me with the choice between these two options. Thanks in advance!
| Center variables PROCESS Moderation model SPSS | CC BY-SA 4.0 | null | 2023-04-06T10:39:53.503 | 2023-04-06T10:39:53.503 | null | null | 385071 | [
"interaction",
"interpretation",
"spss",
"main-effects"
] |
612100 | 1 | null | null | 0 | 46 | I have a maximum likelihood problem in which I estimate two parameters. The likelihood function is that of an exponential distribution with mean $\lambda$, and the parameters are parameters of this $\lambda$. In the expression below, $L$ is the number of observations. Each observation $l$ is a vector with $N$ points, so the data $\mathbf{Z}$ form an $L \times N$ matrix. The mean $\lambda$ at each of the $N$ points is known in functional form with parameters $\Theta$.
$$ \log\left(p(\mathbf{Z}|\Theta)\right) = - \sum_{i = 1}^{N} \left( L \log\left( \lambda_i(\Theta) \right) + \frac{\sum_{l = 1}^{L} Z_{l, i}}{\lambda_i(\Theta)} \right) $$
The estimator is given by the following optimization problem.
$$ \hat{\Theta} = \arg\max_{\Theta} \log\left(p(\mathbf{Z}|\Theta)\right) $$
As this is a highly non-linear problem, I can't find closed-form equations for the estimator, so I solve it numerically using Newton's method. For this reason, I don't know for certain whether the estimator is biased. From the results, all I can say is that it appears to be asymptotically unbiased; for low values of $N$ and $L$, the estimator has large bias.
The entries of the Fisher information matrix for this likelihood looks like the following.
$$ I_{m, n} = \mathbb{E}\left[ \left( \frac{\partial \log\left(p(\mathbf{Z}|\Theta)\right) }{\partial \theta_m} \right) \left( \frac{\partial \log\left(p(\mathbf{Z}|\Theta)\right) }{\partial \theta_n} \right) \right] $$
That simplifies to,
$$ I_{m, n} = \sum_{i = 1}^{N} \frac{L}{(\lambda_i(\Theta))^2} \frac{\partial \lambda_i(\Theta)}{\partial \theta_m} \frac{\partial \lambda_i(\Theta)}{\partial \theta_n} $$
Then, I define the CRLB like the following.
$$ \mathbb{V}\left[ \hat{\theta_m} \right] \geq I^{-1}_{m, m} $$
The problem is that the variance of my estimator (especially for one of the parameters) falls below the CRLB when $N$ is low and $L$ is high; it starts dipping below the CRLB as I increase $L$. For higher $N$ and $L$, the variance remains above the CRLB.
How can I find a more suitable lower bound for this problem? I checked the variance bound for a biased estimator; however, it requires the derivative of the bias with respect to the parameter, $b^{\prime}(\Theta)$, which I don't have in functional form.
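As a sanity check of the Fisher-information formula above (my own sketch, not the asker's actual model): in the degenerate case $\lambda_i(\Theta) = \theta$ for all $i$, the MLE is the grand sample mean, the formula reduces to $I = NL/\theta^2$, and the CRLB is attained exactly:

```python
import numpy as np

rng = np.random.default_rng(3)

# Degenerate case lambda_i(Theta) = theta for every i: the MLE is the
# grand sample mean of N*L exponential draws, and the CRLB is exact.
theta, N, L, reps = 2.0, 5, 50, 20_000

# I = sum_i L / lambda_i^2 * (dlambda_i/dtheta)^2 = N * L / theta^2
crlb = theta ** 2 / (N * L)

mles = rng.exponential(theta, size=(reps, L, N)).mean(axis=(1, 2))
print(mles.var(), crlb)  # both ~0.016
```

If the variance of the actual estimator dips below a bound like this, one usual suspect is finite-sample bias (as the question notes), since the CRLB as stated applies to unbiased estimators.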
| How to find the Cramer-Rao bound for a biased estimator when the bias is not known in closed form | CC BY-SA 4.0 | null | 2023-04-06T10:50:10.177 | 2023-04-06T22:31:56.773 | 2023-04-06T22:31:56.773 | 805 | 327104 | [
"maximum-likelihood",
"cramer-rao"
] |
612102 | 1 | null | null | 0 | 59 | When the response variable is binary and our interest is in how the probability of the outcome is associated with covariates, we use logistic regression.
But I specifically want to know about logistic generalized additive models (gam).
Suppose I have data with a binary response. How can I tell whether I need a linear logistic model or a logistic GAM?
For other response types, I can make a scatter plot of the response vs. a covariate and, if I see a nonlinear pattern, use a GAM. With a binary response, I don't have that option.
(Adding the `mgcv` tag, as anyone using it may have insight into this issue.)
| How to know when a generalized additive model need to be used for binary data? | CC BY-SA 4.0 | null | 2023-04-06T10:54:53.637 | 2023-04-06T14:15:25.437 | 2023-04-06T14:15:25.437 | 362671 | null | [
"logistic",
"linear",
"generalized-additive-model",
"mgcv"
] |
612103 | 2 | null | 611695 | 0 | null | In a binary situation, either the predicted probability is above the threshold and corresponds to a categorical prediction of $1$, or the predicted probability is below the threshold, which means that the predicted probability of the $0$ category is above the threshold and corresponds to a categorical prediction of $0$. (Use some randomization rule for when the prediction is right on the threshold, which could be its own question.)
When you have three or more categories, it can be that none of the categories exceed the threshold. For instance, if you want to pick the category corresponding to a predicted probability exceeding $0.5$, it can be that each category is predicted with a probability no greater than $0.4$, such as $(0.4, 0.3, 0.2, 0.1)$ for a four-category problem. If I am going to use a threshold-based classification system, I would consider this a "no prediction" prediction, and my confusion matrix would not be square but would have another prediction category of "no prediction" in addition to the true labels.
The idea behind the software would be to get the raw model outputs, determine which category exceeds the threshold, and pick that category as the prediction. Since this is not a software page, I will leave the exact Python implementation to the OP, though a now-deleted answer seems to have a good idea.
```
Y_pred_proba = knn.predict_proba(X_test)[:,1]
Y_pred = (Y_pred_proba > threshold).astype(int)
```
(If several categories exceed the threshold, I am not sure what I would do.)
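A hedged sketch of this idea for the multiclass case (the function name and the `-1` "no prediction" marker are my own choices, not from any particular library):

```python
import numpy as np

def threshold_predict(proba, threshold=0.5, no_pred=-1):
    """Pick the class whose predicted probability exceeds the threshold;
    return `no_pred` when no class does (the 'no prediction' category)."""
    proba = np.asarray(proba)
    best = proba.argmax(axis=1)
    ok = proba[np.arange(len(proba)), best] > threshold
    return np.where(ok, best, no_pred)

print(threshold_predict([[0.4, 0.3, 0.2, 0.1], [0.7, 0.1, 0.1, 0.1]]))
# first row -> no prediction (-1), second row -> class 0
```

With a threshold above 0.5 at most one class can exceed it, so picking the argmax first sidesteps ties.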
| null | CC BY-SA 4.0 | null | 2023-04-06T11:23:57.277 | 2023-04-06T11:23:57.277 | null | null | 247274 | null |
612104 | 1 | null | null | 1 | 15 | For a data matrix $X$ of dimension $n \times p$ where $p > n$ and a corresponding label vector $y$ of dimension $n$, the standard least squares problem is underdetermined ($X^TX$ is singular, so the usual fit $\hat{\beta} = (X^TX)^{-1}X^Ty$ does not exist). A typical approach is to use the Moore-Penrose inverse to find the min-norm solution, $\hat{\beta}^{\text{mn}} = X^T(XX^T)^{-1}y$. Specifically, this solves $\min_{\hat{\beta}} \|\hat{\beta}\|^2_2$ subject to $X \hat{\beta} = y$.
My question is if the min-norm solution described above for a data matrix $X$ is equivalent to solving standard least squares for some $n \times n$ basis of $X$? In other words, if we call this basis $B$ could we solve $\hat{\beta}^{\text{basis}} = B^T(BB^T)^{-1}y$ such that $\hat{\beta}^{\text{basis}}$ and $\hat{\beta}^{\text{mn}}$ are equivalent even for a test point not in $X$?
| Does min-norm least squares solve regular least squares in some basis? | CC BY-SA 4.0 | null | 2023-04-06T11:28:22.943 | 2023-04-06T11:28:22.943 | null | null | 213434 | [
"regression",
"least-squares",
"linear-model",
"dimensionality-reduction",
"high-dimensional"
] |
612106 | 2 | null | 612098 | 1 | null | Let us assume an AR(1) model: for $t\ge 1$
$$x_t=\sqrt{\alpha}x_{t-1}+\sqrt{1-\alpha}\epsilon_t\qquad\epsilon_t\sim\mathcal N(0,1)$$
Then, for $0<\sigma^2<1-\alpha$,
$$\sqrt{1-\alpha}\epsilon_t\quad\text{and}\quad\sqrt{1-\alpha-\sigma^2}\epsilon_t^1+\sigma\epsilon_t^2$$
have the same distribution when $\epsilon_t,\epsilon_t^1,\epsilon_t^2$ are iid $\mathcal N(0,1)$.
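A quick numerical check of this equality in distribution (both sides are zero-mean normal, so matching variances suffices; the values of $\alpha$ and $\sigma^2$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, sigma2 = 0.6, 0.2   # illustrative values with 0 < sigma2 < 1 - alpha
m = 1_000_000

a = np.sqrt(1 - alpha) * rng.normal(size=m)
b = (np.sqrt(1 - alpha - sigma2) * rng.normal(size=m)
     + np.sqrt(sigma2) * rng.normal(size=m))
print(a.var(), b.var())  # both ~ 1 - alpha = 0.4
```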
I am however puzzled by the decomposition proposed in the [reproduced entry](https://lilianweng.github.io/posts/2021-07-11-diffusion-models/#speed-up-diffusion-model-sampling):
$$x_{t-1}=\sqrt{\alpha_{t-1}}x_{0}+\underbrace{\sqrt{1-\alpha_{t-1}-\sigma^2_t}\epsilon_t+\sigma_t\epsilon}_{\displaystyle\sqrt{1-\alpha_{t-1}}\epsilon_{t-1}}\qquad\epsilon,\epsilon_t\sim\mathcal N(0,1)$$
because this turns $\epsilon_t$ and $\epsilon_{t-1}$ into correlated Normal variates, contrary to the initial assumption.
| null | CC BY-SA 4.0 | null | 2023-04-06T11:38:57.100 | 2023-04-07T05:59:27.190 | 2023-04-07T05:59:27.190 | 7224 | 7224 | null |
612107 | 1 | 612454 | null | 3 | 120 | As far as I know [[source]](https://stats.stackexchange.com/questions/419393/how-to-find-t-value-without-data),
$$t_{\widehat{\beta}} = \frac{\widehat{\beta}}{\widehat{SE_{\beta}}}.$$
It means the sign of the t-value should be the same as the sign of beta.
In [Table S1](https://ars.els-cdn.com/content/image/1-s2.0-S2451902218301587-mmc1.pdf#page=16) of [Shen (2018)](https://doi.org/10.1016%2Fj.bpsc.2018.06.007), the signs are different. Why? Did I miss something?
I have noticed the passage referenced by user @utobi before. But
(1) The first row in Table S1 (N17_N15, Beta: 0.054, SE: 0.016, t.value: -3.403, Valence of connection: +) clearly does not follow the pattern.
(2) More importantly, a comparison of Figure 1 and Table S1 indicates that the beta values in Table S1 already denote connection strength, so there is no need to multiply by the sign again. For example,
Table S1: N24_N4 -0.066 (sign of mean value connection is negative)
Figure 1:
[](https://i.stack.imgur.com/xKLmh.png)
Note the caption: “Red lines are the connections where strength was positively associated with cognitive performance, and blue lines denote negative associations with cognitive performance”
(3) The t-value is used to calculate the p-value, and their relationship is symmetric around 0, so changing the sign would change nothing.
[](https://i.stack.imgur.com/RkW2j.png)
Also, if we interpret "Valence of connection" and "95% CI of value of connection" in Tables S1-S3 as the sign and CI of the connection values, they should be the same across the three tables. However, they sometimes agree and sometimes do not.
Table S1
N45_N15 + 1.233 1.291
Table S2
N45-N15 + 1.233 1.291
---
Table S1
N17_N15 + 1.215 1.275
Table S2
N17-N15 - -0.825 -0.784
---
Table S1
N24_N4 - -1.136 -1.075
Table S2
N24-N4 + 0.588 0.651
| Relationship between beta and t-value in Shen (2018) | CC BY-SA 4.0 | null | 2023-04-06T11:55:57.273 | 2023-04-11T09:26:07.940 | 2023-04-11T08:32:01.487 | 169706 | 169706 | [
"t-test",
"partial-correlation",
"neuroimaging"
] |
612108 | 2 | null | 611695 | 0 | null | You can divide the predicted probabilities by the class frequencies to create class-frequency-adjusted scores; your categorical prediction is then the class with the highest score. This gives you the same result as the threshold method proposed by Dave, but can be used even if multiple classes exceed the threshold.
E.g., your class frequencies are 0.8, 0.1, 0.1, and you get predictions for a sample 0.7, 0.14, 0.16; your adjusted score will be 0.875, 1.4, 1.6, and so the third class is your categorical prediction.
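A minimal sketch of the adjustment with those numbers:

```python
import numpy as np

freq = np.array([0.8, 0.1, 0.1])    # class frequencies
pred = np.array([0.7, 0.14, 0.16])  # predicted probabilities for one sample

score = pred / freq                 # frequency-adjusted scores
print(score, score.argmax())        # [0.875 1.4 1.6] -> class index 2
```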
That being said, there are many problems with thresholding and classification, and in general it is not recommended (at least on this site); you will find several threads about it here. Mainly, with thresholded predictions it is much harder to detect whether one model is better than another, thresholding usually ignores the actual costs of false positive and false negative predictions, and the estimated probabilities may be much more helpful to the actual user of the model than discrete predictions.
| null | CC BY-SA 4.0 | null | 2023-04-06T12:05:22.357 | 2023-04-06T12:05:22.357 | null | null | 53084 | null |
612109 | 2 | null | 612089 | 4 | null | There seems to be a [Bernoulli factory](https://peteroupc.github.io/bernoulli.html#1_1__x___y____lambda) for solving this problem, namely producing unbiased estimates of the powered inner integral. Let us assume
- $0<\gamma<1$,
- $0<\varrho=\int f(t,x)\,\text dx<1$ (removing the dependence on $t$ to simplify notations).
Simulating a [Bernoulli variate](https://peteroupc.github.io/bernoulli.html#1_1__x___y____lambda) with probability and expectation $\varrho^{\gamma}$ can then be achieved by the following scheme, provided by Peter Occil (and attributed to [Mendo, 2019](https://arxiv.org/abs/1612.08923)):
```
Repeat the following process, until this algorithm returns a value x:
i. Set k=1
ii. Generate a Bernoulli B(ϱ) variate z; if z=1, return x=1
iii. Else, with probability γ/k, return x=0
iv. Else, set k=k+1 and return to ii.
```
An R rendering of the above
```
bf <- function(p = .5, g = 1) {
  k <- 1
  repeat {
    if (rbinom(1, 1, p) == 1) return(1)      # step ii: Bernoulli(p) success
    if (rbinom(1, 1, g / k) == 1) return(0)  # step iii: stop w.p. g/k
    k <- k + 1                               # step iv
  }
}
```
Each time $\varrho$ is called, it can furthermore be replaced with an unbiased and independent estimator.
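For readers who prefer Python, here is a port of the same scheme (my own sketch, following the algorithm steps above, not code from Mendo's paper), with a Monte Carlo check that the factory's mean is indeed $\varrho^\gamma$:

```python
import random

def bernoulli_power(p, gamma, rng=random.random):
    """Return 1 with probability p**gamma for 0 < gamma < 1,
    using only Bernoulli(p) draws (Mendo-style factory)."""
    k = 1
    while True:
        if rng() < p:           # step ii: Bernoulli(p) success
            return 1
        if rng() < gamma / k:   # step iii: stop with probability gamma/k
            return 0
        k += 1                  # step iv

random.seed(0)
p, gamma = 0.3, 0.5
est = sum(bernoulli_power(p, gamma) for _ in range(200_000)) / 200_000
print(est, p ** gamma)  # both close to 0.548
```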
| null | CC BY-SA 4.0 | null | 2023-04-06T12:10:40.710 | 2023-04-07T09:48:30.527 | 2023-04-07T09:48:30.527 | 7224 | 7224 | null |
612110 | 1 | null | null | 0 | 16 | I have (panel) data about posts published by a group of users on a social media platform before and after those users received a treatment. Each of their posts is assigned to a subject (6 subjects overall). I want to test if the distribution of posts between subjects changed after the treatment. What test would be best suited for this?
| What is the appropriate test for distributions before and after a treatment? | CC BY-SA 4.0 | null | 2023-04-06T12:18:57.817 | 2023-04-12T17:31:49.850 | 2023-04-12T17:31:49.850 | 385084 | 385084 | [
"hypothesis-testing",
"distributions",
"statistical-significance"
] |
612111 | 2 | null | 612074 | 0 | null | So it looks like you basically want a summary statistic for three values not influenced by outliers. I suggest using the median value of the three. Not only is this simple and easy to understand, also you can probably hardly do better here, as you don't have enough data to make any more sophisticated assessment of what's an outlier. Also it's as much in line with the procedure for four values as you can get.
| null | CC BY-SA 4.0 | null | 2023-04-06T12:22:12.627 | 2023-04-06T12:22:12.627 | null | null | 247165 | null |
612112 | 2 | null | 612029 | 9 | null | G-methods: the collection of 'general' methods for dealing with time-varying confounding developed by James Robins. These include g-formula, inverse probability weighting of marginal structural models, augmented inverse probability weighting, and g-estimation of structural nested models.
To help define the remainder of the terms, I am going to define some notation. Let $Y_i^a$ be the potential outcome under action $a$ (the outcome person $i$ would have if they did $a$), $Y$ be the observed outcome, $A$ be the observed action taken, and $W$ be a set of covariates deemed to be confounding variables. Finally, let $V$ be a single variable from $W$. For simplicity, I will only talk about the outcome at a single time, but everything here generalizes to multiple times (i.e., settings with time-varying confounding).
G-formula: The approach used to express $E[Y^a]$ in terms of the observed data by an outcome process. In this setting, the g-formula is
$$E[Y^a] = \sum_{w} E[Y | A=a, W=w] \Pr(W=w)$$
Here, we have written the unobservable quantity we want to estimate (the marginal mean of the potential outcomes, which we can't see) in terms of $W,A,Y$ (observables). The g-formula relies on modeling the outcome process (i.e., $Y$ as a function of $A,W$) but it does not tell us how to do this.
G-computation: G-computation can be thought of as the algorithm to practically implement the g-formula. It says to fit some model for $Y$ given $A,W$ then using that model predict everyone's outcomes if their $A=a$, and then take the mean of those predicted potential outcomes. G-formula and g-computation are often used interchangeably in the literature, so something to be wary of.
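A minimal numerical sketch of g-computation in the single time-point setting (the data-generating process is hypothetical, and the "outcome model" here is just the saturated empirical mean in each $(a, w)$ cell):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Hypothetical data: binary confounder W, action A, binary outcome Y,
# with A depending on W and Y depending on both A and W
W = rng.binomial(1, 0.4, n)
A = rng.binomial(1, 0.2 + 0.4 * W)
Y = rng.binomial(1, 0.1 + 0.3 * A + 0.2 * W)

# g-formula: E[Y^a] = sum_w E[Y | A=a, W=w] Pr(W=w)
def g_formula(a):
    return sum(Y[(A == a) & (W == w)].mean() * (W == w).mean()
               for w in (0, 1))

print(g_formula(1) - g_formula(0))  # close to the true risk difference 0.3
```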
G-estimation: G-estimation is a separate method. More formally it is 'g-estimation of structural nested models'. So, we need to talk about structural nested models first.
Structural nested models (SNM): SNM are outcome models meant to handle time-varying effect measure modification (something that is technically difficult to define). But they are pretty straightforward in the single time-point setting. The following is an additive SNM:
$$E[Y^a | A=a, V] - E[Y^0 | A=a, V] = \alpha_1 a + \alpha_2 a V$$
This model has two parameters, the effect of $a$ when $V=0$ and the effect of $a$ when $V=1$ (if $V$ is binary). So, the structural nested model is a model we might assume for how the additive effect of $A$ on $Y$ varies by $V$.
Now back to g-estimation. G-estimation is the process we use to estimate the parameters ($\alpha_1,\alpha_2$) of the SNM. It is important to note that the estimand, or parameter of interest, varies between g-formula and g-estimation. Therefore, they should not be confused with each other.
Marginal structural models (MSM): To help contextualize SNM, we can contrast them with MSM. The 'marginal' in MSM refers to the fact that our model is marginal (i.e., not conditional on $W$). 'Structural' refers to the fact that the model includes potential outcomes. 'Model' indicates we are using some model. An example of an MSM would be
$$E[Y^a] = \beta_0 + \beta_1 a$$
Note that this is the same estimand as the g-formula. One way to estimate the parameters of a MSM is using inverse probability weighting (there are also other ways).
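A minimal IPW sketch for this MSM, using a hypothetical one-confounder data-generating process with a known propensity score:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200_000

# Hypothetical data with a single binary confounder W
W = rng.binomial(1, 0.4, n)
A = rng.binomial(1, 0.2 + 0.4 * W)            # treatment depends on W
Y = rng.binomial(1, 0.1 + 0.3 * A + 0.2 * W)  # outcome depends on A and W

ps = 0.2 + 0.4 * W                            # propensity score (known here)
wts = np.where(A == 1, 1 / ps, 1 / (1 - ps))  # inverse probability weights

# Weighted (Hajek) means estimate the MSM parameters:
# E[Y^0] = beta_0 and E[Y^1] = beta_0 + beta_1
ey1 = np.average(Y[A == 1], weights=wts[A == 1])
ey0 = np.average(Y[A == 0], weights=wts[A == 0])
print(ey1 - ey0)  # close to the true beta_1 = 0.3
```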
We can also consider MSM that capture effect measure modification (termed 'faux MSM' since we are no longer marginal). The following is an example
$$E[Y^a] = \beta_0 + \beta_1 a + \beta_2 V + \beta_3 a V$$
Note that this model includes 4 parameters, unlike the SNM. So, if we interpret the parameters of the MSM, we need to refer to them as conditional on $V$. This is not the case for the SNM (it only has two parameters, neither of which is a main effect for $V$). This difference is what makes SNMs capable of capturing time-varying effect measure modification, whereas MSMs are not.
To see these different methods in the context of time-varying confounding, I would recommend the following [paper](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6074945/). The examples there should further clarify the distinctions between these terms.
| null | CC BY-SA 4.0 | null | 2023-04-06T12:28:55.907 | 2023-04-06T15:49:47.313 | 2023-04-06T15:49:47.313 | 247479 | 247479 | null |
612113 | 2 | null | 612102 | 1 | null | A logistic GLM will fit linear functions of covariates on the scale of the link function (linear predictor), whereas a GAM would fit smooth functions on this scale. If you aren't sure if you need a GAM, you could just fit the GAM, check if the size of the basis expansion for each smooth was sufficiently large (via `k.check()` or `gam.check()`), and rely on the smoothing parameters to shrink away unnecessary wiggliness.
If you want a formal test for the necessity or otherwise of the wiggly bits of the smooth over a purely linear effect, you can do this with a modification to the null space for the default thin plate spline basis.
```
# pseudo code
m <- gam(y ~ x + s(x, m = c(2, 0), bs = "tp"),
data = foo, method = "REML", family = binomial())
```
where we fit a linear term in `x` plus a smooth function of `x`, but we have modified the basis for the smooth so that it no longer includes linear functions in the span of the basis. This modification is done by `m = c(2, 0)`, which indicates we want the usual second order derivative penalty but with a 0 size null space (the span of functions that aren't affected by the penalty because they have 0 second derivative). With this specification, the output from `summary()` will give a test for the necessity of the wiggliness provided by the smooth over the linear effect estimated by the linear term.
| null | CC BY-SA 4.0 | null | 2023-04-06T12:33:23.927 | 2023-04-06T12:33:23.927 | null | null | 1390 | null |
612114 | 2 | null | 498238 | 1 | null | >
Can a model still be "overfit" if it is hitting 99.9% on the hidden test set on Kaggle (ie, 30,000 rows withheld on Kaggle by instructor)?
Consider a situation with a strong class imbalance, where $99.95\%$ of the observations belong to one class. In such a situation, your accuracy, after going through all kinds of trouble to learn and implement fancy machine learning methods, [is worse than some jerk would get by predicting the majority category every time](https://stats.stackexchange.com/a/595958/247274). In this case, your model performance turns out to be quite poor, despite what appears to be a sky-high accuracy score, so pointing out, "Look at how good my holdout performance is! No overfitting here!" does not work.
It might be that you could achieve $99.97\%$ training accuracy and $99.96\%$ holdout accuracy with a simpler model, which would indicate overfitting according to a pretty standard definition where in-sample performance is improved at the expense of out-of-sample performance.
Despite the [flaws](https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models) of accuracy as a performance metric, I agree that $99.9\%$ on holdout data at least sounds impressive (though, depending on the prevalence, it might be poor), and if you have reason to believe that $99.9\%$ accuracy really is good performance for this task, you might not care that you could achieve $99.92\%$ with a simpler model that has a worse in-sample score, despite the overfitting present in your model. (Whether or not you should be interested in the accuracy score is a separate matter, one addressed in the above link.)
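To make the imbalance point concrete, here is a toy calculation (numbers invented) showing how the trivial majority-class predictor already reaches a sky-high accuracy:

```python
# With 99.95% of labels in one class, always predicting the majority
# class already "achieves" 99.95% accuracy.
n = 100_000
n_minority = 50  # 0.05% of cases are the rare class
labels = [0] * (n - n_minority) + [1] * n_minority

majority_predictions = [0] * n  # the "jerk" classifier: always predict 0
accuracy = sum(p == y for p, y in zip(majority_predictions, labels)) / n
print(accuracy)  # 0.9995
```

Any model whose holdout accuracy is below this baseline is doing worse than no model at all, no matter how impressive the number looks in isolation.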
| null | CC BY-SA 4.0 | null | 2023-04-06T12:33:25.837 | 2023-05-30T23:50:12.403 | 2023-05-30T23:50:12.403 | 247274 | 247274 | null |
612115 | 1 | null | null | 0 | 9 | I am interested in identifying the labor market impact of COVID-19 border closures in border regions in Europe. I wanted to use the Synthetic Control method to compare monthly employment development of each border region after the closure of the borders with the employment development of synthetically generated controls that approximate the counterfactual situation. However, if I was to take a border region (e.g., Ticino, Switzerland), which regions would really be a fair donor pool? I was thinking to create a donor pool from Swiss regions that are not on the border, but that have similar employment levels and mobility restrictions prior to (and during) the first border closure (March-June 2020). Yet, since all regions are impacted by border closure and they are also experiencing mobility restrictions, I am concerned with such an approach. Is there a different way to causally approach this?
If I had more disaggregated data, I think it would be interesting to calculate a measure of distance to the border and perhaps use it as some sort of continuous treatment measure and do a difference-in-difference. However, it is difficult to find monthly (or smaller unit) data at smaller geographic units.
If I am unclear in any way, please let me know and I will clarify. Any help is much appreciated - thanks in advance!
| Choosing a method to estimate the effect of covid border closures on border regions | CC BY-SA 4.0 | null | 2023-04-06T13:01:46.850 | 2023-04-06T13:01:46.850 | null | null | 385088 | [
"difference-in-difference",
"control-group",
"covid-19",
"synthetic-controls"
] |
612117 | 1 | null | null | 0 | 14 |
### Description of background
- Consider a 2d random walk with drift:
$$X(t) = \sum_{k=1}^t X_k \\
Y(t) = \sum_{k=1}^t Y_k$$
where each $X_k$ and $Y_k$ are independently exponentially distributed with rate $\lambda = 0.5$.
- Let's define $Y^\star$ as $Y(t^\star)$, where $t^\star$ is the time when $X(t)$ passes some boundary line.
This looks as follows:
[](https://i.stack.imgur.com/Ba2My.png)
The curve added to the histogram is the density of a non-central chi-squared distribution.
A motivation for this random walk is in the question: [Compound Poisson Distribution with sum of exponential random variables](https://stats.stackexchange.com/questions/273902/)
### Question
My question is whether there is an intuitive explanation for why $Y^\star$ is non-central chi-squared distributed (with 2 degrees of freedom and non-centrality parameter equal to the boundary value).
### Code for plot
```
### simulation parameters (not defined in the original code; example values)
n = 5000      ### number of simulated walks
m = 200       ### steps per walk
bound = 10    ### location of the vertical boundary line
### prepare empty plot
plot(-10,-10, xlim = c(0,30), ylim = c(0,30), xlab = "x(t)", ylab = "y(t)", main = "random walks")
### prepare empty variable
ystar = rep(0,n)
### compute random walks and add them to the plot
### also, compute ystar
for (i in 1:n) {
x = cumsum(rexp(m,0.5))
y = cumsum(rexp(m,0.5))
lines(x,y, col = rgb(0,0,0,0.01))
hit = min(which(x>bound))
ystar[i] = c(0,y)[hit]
}
lines(c(bound,bound),c(-100,100), col = 2) ### add boundary line
### create histogram
hist(ystar, freq = F, breaks = seq(-0.5,max(ystar)+1.5,0.5), xlim = c(0,40), main = "histogram of y*", xlab = "y*")
### add a curve for the non-central chi-squared distribution
zs = seq(0,50,0.1)
lines(zs,dchisq(zs,0,bound))
```
| Intuition behind occurence of non central chi squared distribution in conditional coordinates of a random walk | CC BY-SA 4.0 | null | 2023-04-06T13:15:17.850 | 2023-04-06T13:15:17.850 | null | null | 164061 | [
"distributions",
"intuition",
"random-walk"
] |
612118 | 2 | null | 612090 | 0 | null | `smoothCon()` only creates the basis functions for the requested smooth, and by default it doesn't apply identifiability constraints. `gam()` estimates the coefficients for each basis function.
The specific source of the difference here is most likely due to the first feature; by default `absorb.cons` is set to `FALSE` in `smoothCon()`, which means that the constant function (in this case) is in the span of the basis. If you apply the same identifiability constraint that was applied when fitting via `gam()`, the basis functions evaluated at the same data point should match whether you use `"lpmatrix"` in the `predict()` call or `smoothCon()`.
More correctly, you should use `PredictMat()` to evaluate the basis functions at specific locations.
Here's an example using some utilities from my gratia package to make this a little easier
```
library("mgcv")
library("gratia")
# simulate data
df <- data_sim("eg1", n = 2000, dist = "binary", seed = 42)
m <- gam(y ~ s(x0, bs = "cr") + s(x1, bs = "cr") + s(x2, bs = "cr") +
s(x3, bs = "cr"),
data = df, family = binomial, method = "REML")
# generate data to evaluate the basis functions at
ds <- data_slice(m, x0 = evenly(x0, n = 10))
# create the basis expansion as `gam()` would do it
sm <- smoothCon(s(x0, bs = "cr"), data = df, absorb.cons = TRUE)[[1]]
# evaluate this basis at the 10 evenly spaced values of x0
p1 <- PredictMat(sm, data = ds)
# compute the lp matrix at the same set of points
p2 <- predict(m, newdata = ds, type = "lpmatrix")
# pop off the intercept to aid comparison
p2 <- p2[, -1]
# compare
identical(p1[, 1], unname(p2[, 1]))
```
Extracting the smooth model matrix directly from the object created by `smoothCon()` also works.
Also, don't use `predict.gam()`; this function is an S3 method and you should really only be calling `predict()`.
| null | CC BY-SA 4.0 | null | 2023-04-06T13:15:21.003 | 2023-04-06T13:15:21.003 | null | null | 1390 | null |
612119 | 2 | null | 22434 | 1 | null | Claus Wilke's 2019 book ["Fundamentals of Data Visualization"](https://clauswilke.com/dataviz/) is another possible "modern successor." The book's preprint is still freely available online.
Like Tukey's EDA, Wilke's book is focused on exploring your data using graphs while keeping in mind the things that matter to statisticians: thinking in terms of distributions, thinking about precision & uncertainty in our estimates, thinking about bias-variance tradeoffs when smoothing a trend or choosing a histogram bin size, and so on.
Wilke assumes you'll be making your graphs on the computer and provides the code for all his graphs (mostly in R's `ggplot2`) on GitHub. But the book itself is written in a software-agnostic way: the text is about best practices, not about how to implement them in a specific software tool. There's a brief chapter on choosing the right viz software tool for your needs.
He also concisely introduces concepts like Wilkinson's Grammar of Graphics; recommends best practices in line with folks like Cleveland and Tufte; and discusses how to make effective graphics for communication, not just exploration. Wilke's book does not break new ground on these fronts (unlike the Tukey or Cleveland books mentioned in other answers), but rather does a great job of distilling it and putting it all in one place, illustrated with good/bad/ugly examples using real datasets. It's become my go-to book for introducing data visualization to statisticians.
| null | CC BY-SA 4.0 | null | 2023-04-06T13:26:10.017 | 2023-04-06T13:26:10.017 | null | null | 17414 | null |
612120 | 2 | null | 612086 | 2 | null | Denote $(X, Y, Z)$ and $(\theta_1, \theta_2)$ by $X_1$ and $X_2$, and partition the mean vector $\mu$ and the covariance matrix $\Sigma$ accordingly to as follows:
\begin{align*}
\mu = \begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix}, \quad
\Sigma = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22}
\end{bmatrix}.
\end{align*}
Then by the [conditional distribution property](https://stats.stackexchange.com/questions/612101/how-to-apply-the-linear-projection-theorem-in-three-dimensions) of the multivariate normal distribution, the conditional variance of $X_1$ given $X_2$ is $\bar{\Sigma} = \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$ (this result is also cited in another question [by yourself](https://stats.stackexchange.com/questions/612101/how-to-apply-the-linear-projection-theorem-in-three-dimensions), though in a more verbose way). Now write $X + Y + Z = e'X_1$, where $e = (1, 1, 1)'$. [It then follows that](https://en.wikipedia.org/wiki/Covariance_matrix#Which_matrices_are_covariance_matrices?):
\begin{align}
& \operatorname{Var}(X + Y + Z|\theta_1, \theta_2) \\
=& \operatorname{Var}(e'X_1|X_2) \\
=& e'\operatorname{Var}(X_1|X_2)e \\
=& e'\bar{\Sigma}e.
\end{align}
Of course, you can proceed to expand $e'\bar{\Sigma}e$ by finishing tedious matrix operations to express the final result in terms of $\sigma_X^2, \ldots, \sigma_{\theta_2}^2$ and their pairwise correlations $\rho_{ij}$. To me, I think $e'\bar{\Sigma}e$ is a more succinct and equally clear result, so I will be satisfied to stop here.
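If you do want a number, $e'\bar{\Sigma}e$ is easy to evaluate numerically. The following sketch (Python used for illustration) computes it for an invented $5\times 5$ covariance matrix ordered $(X, Y, Z, \theta_1, \theta_2)$; the values are not from the question:

```python
# Invented symmetric positive-definite covariance matrix, order (X, Y, Z, t1, t2)
Sigma = [
    [4.0, 1.0, 0.5, 0.8, 0.2],
    [1.0, 3.0, 0.4, 0.3, 0.6],
    [0.5, 0.4, 2.0, 0.1, 0.5],
    [0.8, 0.3, 0.1, 2.0, 0.2],
    [0.2, 0.6, 0.5, 0.2, 2.0],
]

S11 = [row[:3] for row in Sigma[:3]]   # Var(X1)
S12 = [row[3:] for row in Sigma[:3]]   # Cov(X1, X2)
S21 = [row[:3] for row in Sigma[3:]]   # Cov(X2, X1)
S22 = [row[3:] for row in Sigma[3:]]   # Var(X2)

# Invert the 2x2 block S22 analytically
a, b, c, d = S22[0][0], S22[0][1], S22[1][0], S22[1][1]
det = a * d - b * c
S22_inv = [[d / det, -b / det], [-c / det, a / det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Conditional covariance: Sbar = S11 - S12 S22^{-1} S21
M = matmul(matmul(S12, S22_inv), S21)
Sbar = [[S11[i][j] - M[i][j] for j in range(3)] for i in range(3)]

# Var(X + Y + Z | t1, t2) = e' Sbar e with e = (1, 1, 1)'
cond_var = sum(Sbar[i][j] for i in range(3) for j in range(3))
print(round(cond_var, 4))
```

As a sanity check, the result is always at most $e'\Sigma_{11}e$ (the unconditional variance of the sum), since $\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$ is positive semi-definite.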
| null | CC BY-SA 4.0 | null | 2023-04-06T13:29:51.797 | 2023-04-06T13:29:51.797 | null | null | 20519 | null |
612121 | 2 | null | 481110 | 1 | null | A special case where propensity score matching alone may produce biased estimates is pre/post or difference-in-differences analysis.
When matching on the continuous or integer count outcome variable $Y_{pre}$ in the baseline period (e.g., total healthcare expenditures or number of inpatient visits during the 12 month pre-intervention period) regression to the mean (RTM) bias may be present if the baseline treatment/control distributions are significantly different.
As a corrective measure to avoid RTM bias, conduct an ANCOVA regression in the propensity score matched data including the baseline outcome and treatment indicator as regressors and outcome = $Y_{post}$.
| null | CC BY-SA 4.0 | null | 2023-04-06T13:31:15.883 | 2023-04-06T13:31:15.883 | null | null | 13634 | null |
612122 | 2 | null | 302484 | 0 | null | You are alluding to some kind of backward stepwise regression. While such a modeling strategy [has its issues](https://www.stata.com/support/faqs/statistics/stepwise-regression-problems/), stepwise selection of variables, when you use proper validation techniques, seems like it can [be competitive](https://stats.stackexchange.com/questions/594106/how-competitive-is-stepwise-regression-when-it-comes-to-pure-prediction) when it comes to pure prediction problems.
A few issues come to mind.
- Once you do one step of variable removal, the p-values in subsequent regressions no longer mean what they are supposed to, as those p-values are calculated as if the variable selection had not occurred.
- If you measure performance with adjusted $R^2$, your value is biased high if you penalize according to the number of features in the final model instead of the number of candidate features.
- If you eliminate insignificant variables and fit a second model on the remaining variables, some of those might now be insignificant. Do you then fit a third model on only the significant features of this second model? What if that third model also has insignificant features?
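The first bullet is easy to demonstrate by simulation. In the sketch below (Python, pure-noise data, with a normal-approximation p-value helper written just for this example), the "best" of ten irrelevant predictors comes out "significant" far more often than the nominal 5%:

```python
import math
import random

random.seed(42)

def approx_p_value(x, y):
    """Two-sided p-value for zero correlation, normal approximation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)
    z = r * math.sqrt(n)  # approximately N(0, 1) under the null
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n, n_predictors, reps = 200, 10, 500
false_positives = 0
for _ in range(reps):
    y = [random.gauss(0, 1) for _ in range(n)]
    # screen 10 pure-noise predictors and keep the most "significant"
    best_p = min(
        approx_p_value([random.gauss(0, 1) for _ in range(n)], y)
        for _ in range(n_predictors)
    )
    if best_p < 0.05:
        false_positives += 1

print(false_positives / reps)  # roughly 1 - 0.95**10 ~ 0.40, not 0.05
```

Reading off the selected variable's p-value as if no screening had happened is therefore badly anti-conservative.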
| null | CC BY-SA 4.0 | null | 2023-04-06T13:37:25.247 | 2023-04-06T13:37:25.247 | null | null | 247274 | null |
612123 | 1 | 612126 | null | 1 | 86 | Let $X_i$ be an iid sample from $X\sim N(\mu,\sigma^2)$. I try to find the generalized likelihood ratio test of $H_0: \sigma^2=\sigma_0^2$ v.s. $H_1: \sigma^2\neq \sigma_0^2$ with $\mu$ unknown.
---
My work:
I try to find the likelihood ratio statistic:
\begin{align}
\lambda(x) &= \frac{\sup_{\theta=\theta_0}L(\theta\mid X)}{\sup_{\theta\neq\theta_0}L(\theta\mid X)}
\end{align}
For the global MLE case, I know that
$$
\sup_{\theta\neq\theta_0}L(\theta\mid X)=L(\hat{\mu},\hat{\sigma}^2)=(\frac{1}{\sqrt{2\pi\hat{\sigma}^2}})^{n} \exp[-\frac{1}{2\hat{\sigma}^2}\sum_{i=1}^n (X_i-\bar{X})^2]
$$
where $\hat{\mu}$ is the sample mean and $\hat{\sigma}^2=\frac{1}{n}\sum_i (X_i-\bar{X})^2$.
But for the restricted MLE, I am a little bit confused. Since $\theta=\theta_0$ means $(\mu,\sigma^2)=(\mu, \sigma_0^2)$, then
$$
\sup_{\theta=\theta_0}L(\theta\mid X)=\sup L(\mu_0, \sigma_0^2)?
$$
Is $\mu_0=\bar{X}$ and $\sigma_0^2=\frac{1}{n}\sum_i (X_i-\bar{X})^2$? So this will be the same as in the global case...
| The generalized likelihood ratio test of $H_0: \sigma^2=\sigma_0^2$ v.s. $H_1: \sigma^2\neq \sigma_0^2$ with $\mu$ unknown | CC BY-SA 4.0 | null | 2023-04-06T13:41:25.020 | 2023-04-06T19:18:10.323 | 2023-04-06T19:18:10.323 | 56940 | 334918 | [
"hypothesis-testing",
"self-study",
"mathematical-statistics"
] |
612124 | 1 | null | null | 0 | 27 | I have run a Kruskal Wallis test in my dataset and have a significant overall result (p<<0.001). However, since the medians for my categories are the same, I can't work out the direction of the trends. Post hoc tests have revealed which groups are different, but I'm not sure how to work which groups are affected more by my treatments than the others - is there any way to work this out?
Thank you in advance!
| Kruskal-Wallis Test - medians are same but result is significant? | CC BY-SA 4.0 | null | 2023-04-06T13:47:26.637 | 2023-04-06T13:47:26.637 | null | null | 384825 | [
"r",
"hypothesis-testing",
"median",
"kruskal-wallis-test"
] |
612125 | 1 | null | null | 0 | 27 | I'm running a bayesian paired t-test using JASP. In the software, it says that one of the assumptions is that "The difference scores are normally distributed in the population". I'm a little confused as to why this is true since the prior is a Cauchy distribution, not normal. Please let me know if you know why!
Thanks!
| Do bayesian paired t-tests assume normality? | CC BY-SA 4.0 | null | 2023-04-06T13:50:52.527 | 2023-04-06T14:46:11.437 | null | null | 381151 | [
"bayesian",
"t-test",
"paired-data"
] |
612126 | 2 | null | 612123 | 4 | null | No, under $H_0$ you know that $\sigma^2$ equals $\sigma^2_0$ thus you don't have to estimate $\sigma^2$ but only $\mu$. Thus under $H_0$, the maximum likelihood is
$$
\sup_{\theta=\theta_0}L(\theta\mid X)= L(\bar X, \sigma_0^2\mid X).
$$
Don't get distracted by the fact that the estimators of $\mu$ in the two hypotheses coincide. It is the estimator of $\theta = (\mu,\sigma^2)$ that matters here.
As per request, the likelihood ratio is
\begin{align}
\lambda(X) &= \frac{\sup_{\theta=\theta_0}L(\theta\mid X)}{\sup_{\theta\neq\theta_0}L(\theta\mid X)} = \frac{L(\bar X, \sigma_0^2|X)}{L(\bar X, \hat\sigma^2|X)}\\
& = \left(\frac{\hat\sigma^2}{\sigma_0^2}\right)^{n/2}\exp\left(-\frac{n\hat\sigma^2}{2\sigma_0^2} + \frac{n}{2}\right).
\end{align}
The rejection region has "shape"
$$
\left\{(X_1,\ldots X_n): \left(\frac{\hat\sigma^2}{\sigma_0^2}\right)^{n/2}\exp\left(-\frac{n\hat\sigma^2}{2\sigma_0^2}\right) \leq \exp(-n/2)\right\}.
$$
After some algebra, you will find that the likelihood ratio test of level $\alpha$ is to reject $H_0$
whenever
$$
T_n >\chi_{n-1, 1-\alpha/2}^2\,\text{ or } T_n <\chi^2_{n-1,\alpha/2},
$$
where $\chi_{n-1, p}^2$ denotes the $p$th quantile of the $\chi_{n-1}^2$ distribution and $T_n = \frac{n\hat\sigma^2}{\sigma_0^2} \sim \chi_{n-1}^2$.
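As a sanity check on this result (not part of the derivation), a quick simulation under $H_0$ shows that $T_n$ behaves like a $\chi^2_{n-1}$ variable, whose mean is $n-1$ and variance is $2(n-1)$ (Python used for illustration):

```python
import random
import statistics

random.seed(7)

# Simulate T_n = n * sigma_hat^2 / sigma0^2 under H0: sigma^2 = sigma0^2
mu, sigma0, n, reps = 3.0, 2.0, 10, 20_000
t_vals = []
for _ in range(reps):
    x = [random.gauss(mu, sigma0) for _ in range(n)]
    xbar = sum(x) / n
    sigma_hat2 = sum((xi - xbar) ** 2 for xi in x) / n  # MLE of the variance
    t_vals.append(n * sigma_hat2 / sigma0 ** 2)

print(statistics.mean(t_vals))      # should be close to n - 1 = 9
print(statistics.variance(t_vals))  # should be close to 2(n - 1) = 18
```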
| null | CC BY-SA 4.0 | null | 2023-04-06T13:50:57.020 | 2023-04-06T19:16:05.150 | 2023-04-06T19:16:05.150 | 56940 | 56940 | null |
612127 | 1 | null | null | 3 | 19 | If we have a dataset $D := f(x_i, y_i)^n_{i=1}$ where $x_i = [x_{i_1}, x_{i_2}, ... , x_{i_p}]^T$ is a p-dimensional
predictor and $y_i \in R$ is the response to $x_i$.
Now, shall we select as our parameter vector $\beta = [\beta_1, ... , \beta_p]$ and choose our model to
be $\hat{y}_i = \beta^Tx_i$
or select as our parameter vector $\beta^{'} = [1, \beta_{1}, \ldots, \beta_{p}]$ and choose our model to be $\hat{y}_i = \beta^{'T}x^{'}_i$ where $x^{'}_i = [1, x_{i_1}, \ldots, x_{i_p}]^T$.
I think that padding a 1 as the first element of the $\beta$ vector is wrong; this would fix the bias term at 1 all the time, which should not be the case.
| How to choose appropriate parameter vector for linear regression? | CC BY-SA 4.0 | null | 2023-04-06T14:06:19.073 | 2023-04-06T14:06:19.073 | null | null | 385091 | [
"regression",
"linear-model",
"predictor"
] |
612128 | 1 | null | null | 1 | 8 | I measured changes in the appearance of chronic wounds. There are four levels of this variable that can be ordered from the worst to best as necrotic < sloughy < granulating < epithelialising. There are proportions of the set of patients with corresponding levels such as:
[](https://i.stack.imgur.com/RKQRy.png)
The wounds were measured at the baseline on the whole set of patients (n = 57), after 2 weeks (n = 56), 4 wks (n = 54), 6 wks (n = 45), ... 12 wks (n = 18). The patients healed throughout the study, and therefore n decreases over time. How can I test whether the variable at a particular time improved relative to the baseline?
| compare time-dependent proportions to baseline | CC BY-SA 4.0 | null | 2023-04-06T14:07:36.590 | 2023-04-06T14:12:44.543 | 2023-04-06T14:12:44.543 | 385090 | 385090 | [
"time-series",
"proportion",
"non-independent"
] |
612129 | 1 | null | null | 1 | 13 | I need to analyse one dataset composed of two experiments. The goal is to identify sick individuals (animals) based on different parameters. These parameters have been collected on different farms (one time per farm), during two data recollection campaigns.
I have very limited information on one of the campaigns, and I have some reason to believe that the method used on some farms (but I don't know which) may have biased the results.
I identified two problems :
- Sick animals are expected to have extreme values in most of the parameters, and are rare. So if I search for outliers, I will probably find only sick animals.
- It is completely normal if some farms have more sick animals than others (they are just more at risk)
I need to get an idea of whether there is bias on some farms. I know it sounds tricky, but does someone have a solution for this?
| How to detect bias between two experiments | CC BY-SA 4.0 | null | 2023-04-06T14:09:39.777 | 2023-04-06T14:09:39.777 | null | null | 384253 | [
"r",
"bias"
] |
612130 | 2 | null | 611822 | 1 | null | Unless a function is designed to handle censored observations, it won't handle censored observations as coded via the `Surv()` function in R. The `nlmer()` function isn't designed to handle censored observations.
It seems that this particular problem can be solved by a log transform of the hypothesized power function:
$$\log(aN^b)=\log a+b\log N .$$
That transformation sets the log of the known $N$ as the predictor variable, provides a form linear in the unknown coefficient $b$, and allows for random intercepts in the form of $\log a$. You then can use your choice of software for modeling random effects in survival models, summarized for example in the R [Survival Task View](https://cran.r-project.org/web/views/Survival.html).
I suspect that more complicated hypothesized non-linear functions would benefit from a Bayesian survival model, but that doesn't seem to be needed here.
| null | CC BY-SA 4.0 | null | 2023-04-06T14:10:07.173 | 2023-04-06T14:10:07.173 | null | null | 28500 | null |
612131 | 1 | null | null | 1 | 135 | Given the constants $\{a,b,c,d,e,f$}, I want to compute the conditional mean $\text{E}[Z|S_1,S_2]$ and the conditional variance $\text{Var}[Z|S_1,S_2]$, with:
$Z=a+bX_1+cX_2+dY_1+eY_2+fY_3$
Is the following true?
$\text{E}[Z|S_1,S_2]=a+b\text{E}[X_1|S_1,S_2]+c\text{E}[X_2|S_1,S_2]$
and
$\text{Var}[Z|S_1,S_2]=b^2\text{Var}[X_1|S_1,S_2]+c^2\text{Var}[X_2|S_1,S_2]+d^2\sigma_{Y_1}^2+e^2\sigma_{Y_2}^2+f^2\sigma_{Y_3}^2+bc\text{Cov}[X_1,X_2|S_1,S_2]+2de\text{Cov}[Y_1,Y_2]+2df\text{Cov}[Y_1,Y_3]+2ef\text{Cov}[Y_2,Y_3]$
where $\text{Cov}[X_1,X_2|S_1,S_2]=\text{E}[X_1X_2|S_1,S_2]-\text{E}[X_1|S_1,S_2]\text{E}[X_2|S_1,S_2]$
Assume $S_1=X_1+\epsilon_{X_1}, S_2=X_2+\epsilon_{X_2}$ (where $\epsilon_{X_1}\sim \mathcal N(0,\sigma_{\epsilon_{X_1}}^2)$ and $\epsilon_{X_2}\sim \mathcal N(0,\sigma_{\epsilon_{X_2}}^2)$) and the following joint distributions:
$\begin{pmatrix}
X_1 \\
X_2
\end{pmatrix}$ $\sim \mathcal N$ $\bigg(\begin{pmatrix}
\mu_{X_1} \\
\mu_{X_2}
\end{pmatrix}, \begin{pmatrix}
\sigma_{X_1}^2 & \rho_{X_1X_2}\sigma_{X_1}\sigma_{X_2}\\
* & \sigma_{X_2}^2
\end{pmatrix}\bigg)$
$\begin{pmatrix}
Y_1 \\
Y_2 \\
Y_3
\end{pmatrix}$ $\sim \mathcal N$ $\Bigg(\begin{pmatrix}
0 \\
0 \\
0
\end{pmatrix}, \begin{pmatrix}
\sigma_{Y_1}^2 & \rho_{Y_1Y_2}\sigma_{Y_1}\sigma_{Y_2} & \rho_{Y_1Y_3}\sigma_{Y_1}\sigma_{Y_3}\\
* & \sigma_{Y_2}^2 & \rho_{Y_2Y_3}\sigma_{Y_2}\sigma_{Y_3}\\
* & * & \sigma_{Y_3}^2
\end{pmatrix}\Bigg)$
Assume also that $(X_1,X_2)$ and $(S_1,S_2)$ are independent from $(Y_1,Y_2,Y_3)$.
| If $X$ $\sim \mathcal N_2(\mu_X,\Sigma_X)$, $Y$ $\sim \mathcal N_3(\mu_Y,\Sigma_Y)$, are these the conditional mean and variance? | CC BY-SA 4.0 | 0 | 2023-04-06T14:10:21.263 | 2023-04-12T17:21:58.973 | 2023-04-12T17:21:58.973 | 384956 | 384956 | [
"self-study",
"variance",
"conditional-probability",
"covariance",
"multivariate-normal-distribution"
] |
612132 | 2 | null | 608240 | 0 | null | It depends on what you want to do with that prediction.
If you just want to know if there is information about some event in the predictors or compare models/predictors etc., then don't threshold and just analyze the raw predicted probabilities.
If your task is just to build a model, then build one that outputs probabilities and let whoever uses the model threshold the probabilities as they wish.
If you need to make a discrete action, then fit a model that predicts probabilities and then decide on the action based on a cost-benefit analysis. If a probability of an event is 0.2, it doesn't matter if the class frequency is 1/1 or 1/10000. What matters is only the cost of action/inaction and the cost of false positive/false negative predictions.
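A minimal sketch of that cost-benefit logic (the costs here are invented):

```python
def should_act(p_event, cost_false_positive, cost_false_negative):
    """Return True if acting is the lower-expected-cost decision.

    Expected cost of acting:      (1 - p) * cost_false_positive
    Expected cost of not acting:  p * cost_false_negative
    """
    return (1 - p_event) * cost_false_positive < p_event * cost_false_negative

# With p = 0.2 it does not matter whether the base rate was 1/2 or
# 1/10000; only the costs matter:
print(should_act(0.2, cost_false_positive=1, cost_false_negative=10))  # True
print(should_act(0.2, cost_false_positive=10, cost_false_negative=1))  # False
```

Equivalently, act whenever $p$ exceeds the threshold $c_{FP}/(c_{FP}+c_{FN})$; the class frequencies never enter the decision.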
If you know that, as you said, "The chance of it happening is up from ultra-super-duper-unlikely to ultra-unlikely," you can make a lot of money if you can bet on it repeatedly (and a lot of people do), but in medicine it probably doesn't matter.
| null | CC BY-SA 4.0 | null | 2023-04-06T14:30:03.570 | 2023-04-06T14:30:03.570 | null | null | 53084 | null |
612133 | 2 | null | 612125 | 1 | null | Yes, it does. The prior distribution is about where you think the possible values of the parameter (the difference in means) can be. Normality is about the shape of the data distribution in the population or about the distribution of residuals. You can have uniform/half-Cauchy/horse-shoe prior, but that doesn't mean the data are uniform/half-Cauchy/horse-shoe distributed.
In Bayesian analysis (depending on the software), you can change the distribution of residuals from normal to something else, so your Bayesian test doesn't need to assume a normal distribution, but by default, it does, and that is something other than the prior distribution.
This can be done with frequentist/likelihood-based analysis too, but it is less common.
| null | CC BY-SA 4.0 | null | 2023-04-06T14:46:11.437 | 2023-04-06T14:46:11.437 | null | null | 53084 | null |
612134 | 1 | null | null | 0 | 12 | If I were to estimate a poisson regression model with robust variance (or a logbin or cox regression model) to obtain prevalence ratios, could I interpret a continuous variable predictor?
So for example, say I regressed ADHD diagnosis on beck depression inventory (continuous scale measure), controlling for sex and age. I typically see that in these cases, people will dichotomize the scale measure into groups, select one as a reference group, and then report prevalence ratios relative to the reference group.
However, if I included the scale measure as continuous, could I interpret the results as...individuals who are one point higher on the continuous measure (depression inventory) are 1.5 times as likely to have ADHD? And are there publications or references where this is done?
I understand that people typically dichotomize the variables to create reference groups for comparison, but it seems like it would also make sense to not break up a continuous variable.
| Continuous variables and prevalence ratios interpretation | CC BY-SA 4.0 | null | 2023-04-06T14:49:43.180 | 2023-04-06T14:51:16.043 | 2023-04-06T14:51:16.043 | 362671 | 300821 | [
"references",
"continuous-data",
"prevalence"
] |
612135 | 1 | null | null | 0 | 42 | I am considering the model:
$$
y_t = \beta_0\left(\Pi_{i=0}^{K}x_{i,t}^{\beta_i}\right)\left(\Pi_{j = K+1}^{L}e^{\beta_{j}x_{j,t}}\right)
$$
where we want to have a multiplicative effect for some variables and a linear effect for the others. My question is the following: in this model, how can we interpret and estimate the coefficients? I mean we cannot keep the interpretation from the simple linear regression model, where the coefficient $\beta_i$ represents the impact of the explanatory variable $x_{i,t}$ on the explained variable $y_t$.
The advantage of such a model is that it separates the explanatory variables into two sets, so that the model realistically yields $y_t = 0$ whenever one of the fundamental explanatory variables satisfies $x_{i,t} = 0$; there is thus a kind of "interaction" between variables.
Thank you a lot
| Multiplicative linear model | CC BY-SA 4.0 | null | 2023-04-06T14:56:14.673 | 2023-04-06T14:56:14.673 | null | null | 375362 | [
"regression",
"nonlinear-regression",
"nonlinear"
] |
612136 | 2 | null | 612057 | 1 | null | Weights are not applied to individual variables. They are applied to the whole sample once estimated. So question 1 doesn't make sense. Instead of using `svyglm()`, let's use `lm()`, which has a simpler interface. Running `lm(Y ~ treat, data = data, weights = weights)`, which fits a weighted least squares regression, and looking at the coefficient on `treat` is the same as computing the weighted difference in outcome means
```
with(data,
weighted.mean(Y[treat == 1], weights[treat == 1]) -
weighted.mean(Y[treat == 0], weights[treat == 0]))
```
They are two ways of doing the same thing. This is called the Hajek estimator of the treatment effect. We use the former because it is more straightforward to compute standard errors. Using `svyglm()` does the same thing but uses a different interface. (Note the standard errors from `lm()` are incorrect and need to be adjusted, but the ones from `svyglm()` are approximately correct.)
In question 2, you ask why we would further adjust for covariates after weighting and why the estimate changes. We further adjust for covariates for two reasons: 1) to reduce bias due to remaining imbalance, i.e., when the weights don't exactly balance the covariate means, which is always the case for standard IPTW (though not for all methods; entropy balancing, for example, does exactly balance the means), and 2) to increase the precision of the effect estimate (decrease the standard error) by explaining variability in the outcome. If you exactly balance your covariate means, then it doesn't matter whether you include covariates in the outcome model or not; the estimate will be the same, which is demonstrated in [Hainmueller (2012)](https://doi.org/10.1093/pan/mpr025), who proposes entropy balancing.
For question 3, a rate is a mean. It is the frequency divided by the sample size. We can compute a weighted rate for a binary variable by computing the weighted mean of that variable. It doesn't matter whether the weights were designed to balance a variable or not; you can still compute a weighted mean using those weights. That is, to compute the weighted death rate under control in the weighted sample, you just run
```
with(data,
weighted.mean(Y[treat == 0], weights[treat == 0]))
```
This is also equal to the intercept in the weighted least squares model for the outcome when no covariates are included. So the weights are the IPTW weights, the only weights that are being estimated.
---
It might be that the ease of using the software is obfuscating some of the details for you. I recommend doing everything manually so you understand what each step is actually doing. For example, estimate the propensity scores (PS) manually:
```
ps.fit <- glm(treat ~ age + educ + married + re74, data = lalonde,
family = binomial)
ps <- ps.fit$fitted.values
```
You can see we get one PS for each unit. Now, compute the IPTW ATT weights from the PS using the formula:
```
weights <- ifelse(lalonde$treat == 1, 1, ps / (1 - ps))
```
You can see we get one weight for each unit. Hopefully this makes it clear that we don't have weights for variables; we have one weight for each unit, and this set of weights balances the covariates and is used in the outcome model to estimate the treatment effect. The reason we use it in the outcome model is that it balances the covariates.
We can assess balance using the weights by computing the weighted difference in proportion or the standardized mean difference after weighting:
```
# Weighted difference in proportion for `married`
with(lalonde,
weighted.mean(married[treat == 1], weights[treat == 1]) -
weighted.mean(married[treat == 0], weights[treat == 0]))
# Weighted SMD for `age`
with(lalonde,
(weighted.mean(age[treat == 1], weights[treat == 1]) -
weighted.mean(age[treat == 0], weights[treat == 0])) /
sd(age[treat == 1]))
```
You should see that these align with the `bal.tab()` output.
Finally, if balance is acceptable, we can compute the weighted outcome means and their difference, or use `lm()` or `svyglm()` to estimate the treatment effect as a coefficient in the outcome regression model:
```
# Weighted outcome mean under control
m0 <- with(lalonde, weighted.mean(re78[treat == 0], weights[treat == 0]))
# Weighted outcome mean under treatment
m1 <- with(lalonde, weighted.mean(re78[treat == 1], weights[treat == 1]))
# Difference in weighted means: the treatment effect estimate
m1 - m0
# Using linear regression to estimate treatment effect
lm(re78 ~ treat, data = lalonde, weights = weights) |>
coef()
```
The first step of estimating the propensity scores and weights is done by `weightit()`, and the second step is done by `bal.tab()`. But it is important to run these analyses yourself manually to understand where these values are coming from. Hopefully that elucidates the method for you.
| null | CC BY-SA 4.0 | null | 2023-04-06T15:02:04.993 | 2023-04-06T15:02:04.993 | null | null | 116195 | null |
612137 | 1 | null | null | 0 | 5 | I applied the permutation test to my data to test whether they are inhomogeneous, as presented on p. 689 of the spatstat book for the bronze filter data example. To do so, I unmarked my points and ran the two tests (I also divided my area into 6 quadrats: 2 of 20x150m and 4 of 15x150m -> total area 100x150m). I also compared the behaviour of `Kscaled` and `Kinhom` graphically, and both had practically the same behaviour. My local tests gave `locTest(T=1.3437, p-value=0.225)` and `corrTest(T = 2.3059, p-value = 0.052)`, which suggests that my overall data are more or less `homogeneous`.
Although I have `unmarked` my data to do the analysis, as in the example, I have many marks (sp, and many functional traits). My question is, should I apply the permutation test for each `mark` type? In the case of categorical, for each `level`? Or does the general test alone suffice for the assumption of homogeneity?
| Testing if the pattern is inhomogeneous in spatstat | CC BY-SA 4.0 | null | 2023-04-06T15:06:42.570 | 2023-04-06T15:06:42.570 | null | null | 385095 | [
"r",
"ripley-k",
"spatstat"
] |
612138 | 1 | null | null | 0 | 15 | I have done a field experiment with a treatment group and a control group, but the sample sizes of the control and treatment groups are unequal (22,000 vs. 63,000) due to some restrictions (i.e., the firm needs to ensure that the proportion of observations in the treatment group is at some pre-determined level). I want to study whether the treatment significantly influences the outcome variable, and before that, I need to do a randomness check. So I am wondering whether the unequal sample sizes will influence the randomness check and the regression results.
| Do the unequal sample sizes between treatment group and control group of a randomized experiment influence the randomness check and regression? | CC BY-SA 4.0 | null | 2023-04-06T15:07:04.540 | 2023-04-06T15:07:04.540 | null | null | 304442 | [
"regression",
"statistical-significance",
"treatment-effect",
"randomness",
"randomized-tests"
] |
612139 | 1 | null | null | 2 | 42 | Say I have a set of measurement values $y_\text{m} = (y_{\text{m},1}, \dots y_{\text{m},N}) $, and compare these with some ground truth $y = (y_1, \dots y_N)$. Then, if I understood correctly, I can estimate the (sample) variance of the error $(y_\text{m} - y)$ in my measurement values as
\begin{equation}
\sigma^2 = \frac{\sum_{i=1}^N (y_{\text{m},i} - y_i)^2}{N-1}
\end{equation}
(and, if I'm working with a fit instead of ground truth, this changes to the mean square error, with $N-2$ in the denominator).
According to the explanation [here](https://online.stat.psu.edu/stat415/book/export/html/810), with a sufficiently well-behaved error, I could even compute a confidence interval for $\sigma^2$.
Now, if I'm interested in the relative error, expressed as a percentage, $100 \times (y_\text{m} - y)/y$, how do these estimates for $\sigma^2$ and MSE change? And can I still compute a confidence interval in that case?
EDIT: Sorry, I just realised that in the ground truth case the variance doesn't change, because ground truth is not a stochastic variable. So my question only pertains to the MSE case.
EDIT 2: Doing a further search based on @Demetri Pananos's comments, it looks like the answers [here](https://stats.stackexchange.com/q/30309), [here](https://stats.stackexchange.com/a/19580), and [here](https://stats.stackexchange.com/a/21888) may help me get started
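To make the quantities concrete, here is a small sketch (with made-up numbers, added purely for illustration) of the absolute-error variance estimate above and the naive percentage analogue I have in mind; I am unsure whether the percentage version is statistically justified, which is the point of the question:

```r
set.seed(42)
y  <- runif(50, 10, 20)            # made-up "ground truth"
ym <- y + rnorm(50, sd = 0.5)      # made-up measurements
N  <- length(y)
# absolute-error estimate, as in the formula above
sigma2_abs <- sum((ym - y)^2) / (N - 1)
# naive percentage analogue: same formula applied to 100 * (ym - y) / y
rel <- 100 * (ym - y) / y
sigma2_rel <- sum(rel^2) / (N - 1)
c(sigma2_abs, sigma2_rel)
```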
| How to compute sample variance and/or mean square error as a percentage? | CC BY-SA 4.0 | null | 2023-04-06T15:08:27.230 | 2023-04-11T17:20:27.590 | 2023-04-11T17:20:27.590 | 322995 | 322995 | [
"regression",
"variance",
"error",
"percentage"
] |
612141 | 1 | null | null | 0 | 25 | If I were estimating the treatment effect in a linear regression model. Assuming there isn't a risk of collider bias and multicollinearity, would it be acceptable to add in as many covariates as possible to control for the remaining variation unexplained by the treatment effect variable? Or would the risk of including variables that are spuriously correlated invalidate any inference made on the adjusted treatment effect variable?
Also, in a non-linear model, because of non-collapsibility, is it in fact preferred to do this to control for all unexplained variation that biases the treatment effect variable from the lack of an error term?
Any help would be really appreciated, thanks!
| If we aren't interested in interpretation of control variables in a regression model, is spurious correlation still problematic | CC BY-SA 4.0 | null | 2023-04-06T15:19:43.600 | 2023-04-06T15:37:55.297 | null | null | 211127 | [
"multiple-regression",
"modeling",
"feature-selection",
"model"
] |
612142 | 2 | null | 611988 | 1 | null | There are two ways to interpret this question: (1) how to generate Beta variates as ratios $Z = X/(X+Y)$ of independent random variables $X$ and $Y$ and (2) what the distribution of such a ratio is when $X$ and $Y$ have Exponential distributions. Because (1) is well-known ([use Gamma distributions](https://en.wikipedia.org/wiki/Beta_distribution#Derived_from_other_distributions)), and because the second question is explicitly asked in the comment thread, I will address the second one here.
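As a brief aside on (1) — a sketch added here for completeness, using the standard fact that if $X$ and $Y$ are independent Gamma variates with shapes $a$ and $b$ and a common scale, then $X/(X+Y) \sim \operatorname{Beta}(a,b)$:

```r
set.seed(1)
n <- 1e5; a <- 2; b <- 5
x <- rgamma(n, shape = a)          # X ~ Gamma(a), unit scale
y <- rgamma(n, shape = b)          # Y ~ Gamma(b), unit scale
z <- x / (x + y)                   # should follow Beta(a, b)
mean(z)                            # close to a / (a + b) = 2/7
```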
In the following I will exploit rules of differentiation (Chain, Product, Sum, and Quotient results) along with the fact that
$$\int_0^\infty e^{-\theta x}\,\mathrm dx = \frac{1}{\theta}\tag{*}$$
for any positive constant $\theta.$ This is almost all you will need to know of Calculus.
---
Recall that the Exponential distribution family describes positive random variables with densities expressible as
$$f(x,\alpha) = \alpha e^{-\alpha x}$$
for positive arguments $x.$ The parameter $\alpha \gt 0$ is usually called the rate. Its reciprocal $1/\alpha$ is a scale parameter. The survival function of this distribution is obtained by recalling that the exponential function equals its own derivative, whence
$$G(x,\alpha) = \Pr(X \gt x) = e^{-\alpha x}.$$
Given two rates $\alpha,\beta$ for $X$ and $Y$, the difficulties we face are (i) the denominator $X+Y$ has a [hypoexponential distribution](https://stats.stackexchange.com/a/412855/919), not an Exponential one (unless $\alpha=\beta$); and (ii) the numerator $X$ and denominator $X+Y$ are not independent. But we can readily find the distribution of $Z$ directly by noting $0\le Z \le 1$ and choosing an arbitrary $\theta$ to compute the distribution function of $Z$ using $(*)$ at the last step:
$$\begin{aligned}
F_Z(z;\alpha,\beta) = \Pr(Z\le z) &= \Pr\left(Y \ge \frac{1-z}{z}X\right) = E\left[G\left(\frac{1-z}{z}X, \beta\right)\right]\\
&= \int_0^\infty G\left(\frac{1-z}{z}x, \beta\right)\,f(x,\alpha)\,\mathrm dx\\
& = \int_0^\infty e^{-\beta ((1-z)/z)x}\,\alpha e^{-\alpha x} \,\mathrm d x\\
&=\frac{\alpha z}{\alpha z + (1-z)\beta}.
\end{aligned}$$
This fully answers all questions about $Z.$ In particular,
- The density of $Z$ on the interval $(0,1)$ is the derivative $$f_Z(z;\alpha,\beta) = \frac{\mathrm d}{\mathrm d z} F_Z(z;\alpha,\beta) = \frac{\alpha\beta}{\left(\alpha z + \beta(1-z)\right)^2}.$$
- The expectation is the integral of $1-F_Z$ from $0$ to $1,$ equal to $$E[Z;\alpha,\beta] = \frac{\beta \left(\alpha \log \left(\frac{\alpha}{\beta}\right)+ \beta - \alpha \right)}{(\alpha -\beta )^2}.$$
(This calculation requires a bit more expertise in integral Calculus, but is straightforward.)
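(As a quick numerical check of this expectation formula — an addition to the derivation above — with $\alpha = 3$ and $\beta = 1$ the closed form gives $(3\log 3 - 2)/4 \approx 0.324,$ which a Monte Carlo average reproduces:)

```r
set.seed(17)
n <- 1e6; a <- 3; b <- 1
x <- rexp(n, a); y <- rexp(n, b)
mc <- mean(x / (x + y))                          # Monte Carlo estimate of E[Z]
cf <- b * (a * log(a / b) + b - a) / (a - b)^2   # closed form, about 0.3240
c(mc, cf)
```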
For the example $\alpha=3,$ $\beta=1$ I used `R` to generate a million iid realizations of independent $(X,Y),$ computed $Z$ for each of them, and plotted the histogram (gray) and the density $f_Z(z;3,1)$ on top of that:
```
n <- 1e6; a <- 3; b <- 1
x <- rexp(n, a); y <- rexp(n, b)
hist(x / (x + y), freq = FALSE); curve(a * b / (b * (1-x) + a * x)^2, add = TRUE)
```
[](https://i.stack.imgur.com/kvOVe.png)
The agreement supports the correctness of this analysis.
BTW, it looks like the same approach will produce a closed-form solution when $X$ and $Y$ have any Gamma distributions, even with different shape parameters. The CDF will be a rational function of $z/(1-z).$
| null | CC BY-SA 4.0 | null | 2023-04-06T15:35:04.653 | 2023-04-06T19:46:23.883 | 2023-04-06T19:46:23.883 | 919 | 919 | null |
612143 | 2 | null | 612141 | 0 | null | Each variable you include costs you a degree of freedom, which makes your tests less powerful and your standard errors wider, and the whole fit less stable: partly, as you mentioned, because of spurious correlations, but also simply because you are fitting more parameters.
| null | CC BY-SA 4.0 | null | 2023-04-06T15:37:55.297 | 2023-04-06T15:37:55.297 | null | null | 53084 | null |
612144 | 1 | null | null | 0 | 21 | Can anyone explain the reasoning behind the sample-size rule of thumb of $20\times (p+q)$ observations? Here $p$ is the number of parameters in the final model and $q$ is the number of parameters that may have been examined but discarded along the way.
Any credible reference is appreciated.
| How does sample size selection happen which is generally used as $20\times (p+q)$? | CC BY-SA 4.0 | null | 2023-04-06T15:39:54.740 | 2023-04-06T15:43:23.727 | 2023-04-06T15:43:23.727 | 362671 | 385054 | [
"machine-learning",
"references",
"sample-size",
"learning"
] |
612145 | 1 | null | null | 2 | 42 | Below you can view the univariate sales dataset for a particular product with 3 promotional interventions/campaigns (highlighted in grey and green); each campaign ran for 14 days. My main objective is to eventually measure the post-period uplift of all 3 campaigns separately, but to keep complexity at a minimum, I figured it might be better to focus this question on analysing only 1 campaign correctly and then just apply the logic to the other two campaigns afterwards. The promotion campaign I want to analyse with the `causalimpact`-library is highlighted in green (campaign #2).
[](https://i.stack.imgur.com/h5Rxv.png)
Questions:
- How long should the pre-period and post-period be to analyze promo campaign #2 in the best possible way?
- More specifically, which of the 4 options is the best to use (if at all)?
Note that:
- Option 1 considers shorter time-frames, but excludes possible adverse effects due to previous campaigns being run.
- If I do consider using longer pre-period ranges (Options 3 & 4), should I include the data as is, or should I recreate a baseline by handling the outliers and promo spikes the same way as one would before building a forecasting model with, for example, the FB Prophet library, by either:
  - Dropping the data for the outlier & promo campaign 1, and creating a new baseline for the pre-period?
  - Setting the data for the outlier & promo campaign 1 to zero, and creating a new baseline for the pre-period?
  - Setting the data for the outlier & promo campaign 1 to none, and creating a new baseline for the pre-period?
  - Using another approach, one which I have not considered above?
Thanks in advance!
| How to handle previous interventions within the pre-period of the same time series using causalimpact? | CC BY-SA 4.0 | null | 2023-04-06T15:42:51.837 | 2023-05-02T07:17:49.887 | 2023-05-02T07:17:49.887 | 53690 | 385025 | [
"causality",
"intervention-analysis",
"causalimpact"
] |
612147 | 1 | null | null | 1 | 14 | I am a bit lost on the correct approach to this statistical problem. Let me explain the context: I have a cohort population with longitudinal measurements over a period of several years; a subsample of this population has been wearing an accelerometer to measure physical activity (considered the gold standard), and all of the other subjects have been answering a questionnaire regarding physical activity.
My goal is to predict or set a rule to approach the accelerometer using the questionnaire data and the set of predictors (like socio-economical variables, etc...) of each participant.
The outcome variable is a score of Physical Activity derived from both Accelerometer or Questionnaire. Any ideas on how to assess this problem in a cross sectional or longitudinal form?
I was thinking about multivariate linear mixed models, but I feel like some option of training on the questionnaire data and validating against the accelerometer could also be a good option.
Anything will be very welcome!
| Approach of a Golden Standard Measurement with questionnaire data | CC BY-SA 4.0 | null | 2023-04-06T16:25:41.863 | 2023-04-06T20:39:10.363 | null | null | 350241 | [
"time-series",
"multiple-regression",
"reliability",
"validity"
] |
612148 | 2 | null | 612012 | 2 | null | I suggest you just do it; boxplots look fine, Shapiro-Wilk does not have to be non-significant, the data should just look approximately normal. You can use ANOVA for unequal variances to deal with heteroskedasticity. The difference in variances between groups might not be just a nuisance but also something worth writing a paper about.
Although I think that ANOVA is enough, you can find a version of ANCOVA for unequal variances or just a heteroskedastic/robust linear model, to deal with gender. If you want to go non-parametric, you can use the Kruskal-Wallis test, or a permutation test, neuroimagers love permutation tests. See Winkler et al. 2014 Neuroimage [https://www.sciencedirect.com/science/article/pii/S1053811914000913](https://www.sciencedirect.com/science/article/pii/S1053811914000913), where they also discuss heteroscedasticity and nuisance variable, so either you can put gender in your model or shuffle within gender. There is also an accompanying (Matlab?) package for it.
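To illustrate the general idea (a minimal sketch with toy data, not the neuroimaging-specific machinery of the Winkler et al. paper), a permutation test simply recomputes the test statistic under shuffled group labels:

```r
set.seed(123)
g <- factor(rep(c("A", "B", "C"), each = 20))     # three groups of 20
y <- rnorm(60, mean = c(0, 0, 1)[as.integer(g)])  # group C shifted by 1 SD
obs <- anova(lm(y ~ g))[1, "F value"]             # observed F statistic
perm <- replicate(2000,
  anova(lm(sample(y) ~ g))[1, "F value"])         # F under shuffled labels
p_perm <- mean(perm >= obs)                       # permutation p-value
p_perm
```

To shuffle within gender, as mentioned above, one would permute the outcome separately inside each gender stratum instead of using a single global `sample(y)`.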
| null | CC BY-SA 4.0 | null | 2023-04-06T16:29:12.053 | 2023-04-06T18:07:28.220 | 2023-04-06T18:07:28.220 | 22047 | 53084 | null |
612149 | 2 | null | 285618 | 2 | null | It seems like you have a proposed model $\mathbb E[y\vert X] = \beta_0 + \beta_1x + \beta_2x^2$, for some $\beta_0,\beta_1,\beta_2$. Then you want to know if there are $\delta_0, \delta_1, \delta_2\in\mathbb R$ such that $(\beta_0 + \delta_0) + (\beta_1 + \delta_1)x + (\beta_2 + \delta_2)x^2$ provides a better fit. Therefore, your hypothesis test, regardless of the mechanics of how you test, would be as follows.
$$
H_0: \delta_0 = \delta_1 = \delta_2 = 0\\
H_a: \text{The null hypothesis is false, and at least one of the }\delta_j\text{ is nonzero}.
$$
If any of $\delta_0$, $\delta_1$, and $\delta_2$ are nonzero, then your hypothesized values for the intercept, slope, and quadratic are not all correct.
$$
y = (\beta_0 + \delta_0) + (\beta_1 + \delta_1)x + (\beta_2 + \delta_2)x^2 + \epsilon
\\\bigg\Updownarrow\\
y - \beta_0 - \beta_1x-\beta_2x^2 = \delta_0 + \delta_1x+\delta_2x^2 + \epsilon
$$
Therefore, I would say to treat $y - \beta_0 - \beta_1x-\beta_2x^2$ as your dependent variable and fit a usual regression on the $x$ variable and its square.$^{\dagger}$ I would then do a test of nested linear models, comparing this model with a regression of the new $y$ on nothing, not even an intercept. After all, if the original coefficients are correct, there should be no difference.
An F-test of nested linear models would be a way to test this.
A quick simulation suggests that this is at least not a terrible way to proceed. The result of this simulation is that the p-values are more-or-less distributed $U(0,1)$ when the correct coefficients are subtracted (to form `y_new` in the language of this code), yet the test has considerable power to reject when the wrong coefficients are subtracted, the latter of which corresponds to the situation where the original coefficients you have assumed are not correct.
```
library(ggplot2)
set.seed(2023)
N <- 1000
R <- 10000
x <- runif(N)
Ey <- 2 + 2*x + 2*x^2
ps1 <- ps2 <- rep(NA, R)
for(i in 1:R){
# Add some noise to the conditional expected value of y
#
y <- Ey + rnorm(N)
# Do the subtraction
#
y_new <- y - 2 - 2*x - 2*x^2
# Regress using the new y-variable
#
L1 <- lm(y_new ~ x + I(x^2))
# Now regress on nothing
#
L0 <- lm(y_new ~ 0)
# Do the F-test of nested models
#
f_test <- anova(L1, L0)
# Save the p-value from the overall F-test
#
ps1[i] <- f_test$`Pr(>F)`[2]
# Now do it but assume the wrong parameters to subtract
# Do the subtraction
#
y_new <- y - 1.9 - (2.1)*x - (1.7)*x^2
# Regress using the new y-variable
#
L1 <- lm(y_new ~ x + I(x^2))
# Now regress on nothing
#
L0 <- lm(y_new ~ 0)
# Do the F-test of nested models
#
f_test <- anova(L1, L0)
# Save the p-value from the overall F-test
#
ps2[i] <- f_test$`Pr(>F)`[2]
}
d1 <- data.frame(
pvalue = c(ps1, ps2),
CDF = ecdf(ps1)(c(ps1, ps2)),
Null = "True"
)
d2 <- data.frame(
pvalue = c(ps1, ps2),
CDF = ecdf(ps2)(c(ps1, ps2)),
Null = "False"
)
d <- rbind(d1, d2)
ggplot(d, aes(x = pvalue, y = CDF, col = Null)) +
geom_line() +
geom_point() +
geom_abline(slope = 1, intercept = 0) +
theme(legend.position="bottom")
```
[](https://i.stack.imgur.com/LZolU.png)
$^{\dagger}$ One thought could be to use orthogonal polynomials for better numerical stability. In that case, I would use the orthogonal polynomials from the beginning.
| null | CC BY-SA 4.0 | null | 2023-04-06T16:29:43.697 | 2023-04-08T14:34:52.320 | 2023-04-08T14:34:52.320 | 247274 | 247274 | null |
612150 | 1 | null | null | 0 | 18 | I have one question about linear regression and Multiple Regression Analysis:
After I've obtained the model's equation, I want to use it to predict how the Y changes due to one variable while all the other predictors are kept fixed at certain values.
So I fix all the predictors except one to certain values (X1=free, X2=a, X3=b, X4=c).
Normally I would proceed to plot a Y vs X1 graph, but I received the advice to plot (Y + the residuals obtained during the calculation of the model) vs X1 instead.
I don't understand what is the purpose of this advice. Do you have any clues?
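If the advice refers to what is usually called a partial-residual (component-plus-residual) plot — I am not certain that is what was meant — a small self-contained sketch with invented data would look like this:

```r
set.seed(1)
# invented data standing in for the real predictors
x1 <- rnorm(100); x2 <- rnorm(100); x3 <- rnorm(100); x4 <- rnorm(100)
y  <- 1 + 2 * x1 - x2 + 0.5 * x3 + rnorm(100)
fit <- lm(y ~ x1 + x2 + x3 + x4)
# partial residuals for x1: fitted contribution of x1 plus the residuals
pr1 <- coef(fit)["x1"] * x1 + resid(fit)
plot(x1, pr1)   # the slope of this cloud is exactly coef(fit)["x1"]
# base R can draw the same plot directly:
termplot(fit, terms = "x1", partial.resid = TRUE)
```

The point of adding the residuals is that the plot then shows both the estimated relationship and the scatter around it, instead of a deterministic line.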
| Regression, Y+residual | CC BY-SA 4.0 | null | 2023-04-06T16:35:40.443 | 2023-04-06T16:37:21.160 | 2023-04-06T16:37:21.160 | 362671 | 385097 | [
"regression",
"residuals"
] |
612151 | 2 | null | 611385 | 1 | null | Consider the case where $a = 1$ and $X_0 \sim U(1, 2)$; then with probability $1$, $S_0 > 1$, and so $E[\tau] = 0$. Now keep $a = 1$ and replace $X_0$ with a normal distribution with the same mean and variance as $U(1, 2)$, which is $N(\mu = 1.5, \sigma = \sqrt{1/12})$. Now $E[\tau] \ge P(X_0 \in [-1, 1]) > 0$.
From this we can conclude that $E[\tau]$ depends on more than just $a, \mu, \sigma$ and you can't give an exact solution.
Now being given $\mu$ and $\sigma$ makes me think that you can construct an upper bound using Chebyshev's inequality ([https://en.wikipedia.org/wiki/Chebyshev%27s_inequality](https://en.wikipedia.org/wiki/Chebyshev%27s_inequality)). I'm admittedly not entirely sure how.
I also wrote a small simulation for a less clear-cut case in R. Maybe it will help you make some further investigations:
```
n <- 10000 # number of simulations
a <- 1.5
# mu = 1 sig = 1
taus_normal <- replicate(n, {
i <- 0
S <- rnorm(1, 1, 1)
while (abs(S) <= a) {
i <- i+1
S <- S + ifelse(i %% 2 == 0, 1, -1) * rnorm(1, 1, 1)
}
return(i)
})
# lambda = 1, therefore mu = 1 sig = 1
taus_poisson<- replicate(n, {
i <- 0
S <- rpois(1, 1)
while (abs(S) <= a) {
#print(paste("i:", i, "S:", S))
i <- i+1
S <- S + ifelse(i %% 2 == 0, 1, -1) * rpois(1, 1)
}
return(i)
})
mean(taus_normal)
mean(taus_poisson)
```
| null | CC BY-SA 4.0 | null | 2023-04-06T16:39:03.653 | 2023-04-06T16:39:03.653 | null | null | 341520 | null |
612153 | 2 | null | 384384 | 0 | null | To give you more of a visual answer, this is what a KDE plot of draws with mean = [0, 0] and cov = [[1, 0], [0, 1]] looks like in seaborn:
Code:
```
import pandas as pd
import numpy as np
import seaborn as sns; sns.set()
import matplotlib.pyplot as plt
df = pd.DataFrame(np.random.multivariate_normal(mean=[0, 0], cov=[[1,0],[0,1]], size=10_000), columns=["x", "y"])
plt.figure(figsize=(15,10))
plt.title("3D Diagonal Gaussian")
sns.kdeplot(data=df, x="x", y="y")
```
[](https://i.stack.imgur.com/M9Xq7.png)
| null | CC BY-SA 4.0 | null | 2023-04-06T16:42:10.733 | 2023-04-06T16:42:10.733 | null | null | 385104 | null |
612154 | 1 | null | null | 0 | 36 | I wonder if anyone can help me to decide what steps to undertake in making a mixed-effects model in `R`.
My data consist of 5 treatments whose effects are measured at 10 consecutive time points. All subjects (20) receive 4 of the 5 possible treatments in a block-randomized fashion. I want to construct a model in `lme4`. How would I go about this?
| I need some guidance in making a mixed effects model in r | CC BY-SA 4.0 | null | 2023-04-06T16:42:48.263 | 2023-04-06T19:22:36.377 | 2023-04-06T19:22:36.377 | 56940 | 385105 | [
"mixed-model",
"lme4-nlme"
] |
612155 | 1 | null | null | 0 | 43 | I have N patients. Each one of them performed T tasks, and each task is performed M times. For each task, I get some measurements from which I extract a vector of features. I therefore have T*M vectors per patient.
Is it possible to cluster the data by patient? I want to gather patients with similar task profiles into G groups (where G will probably be 2).
I saw clustering methods like hierarchical clustering, but it seems to me that they treat every feature vector as a separate object to be clustered, rather than respecting the fact that the T*M vectors of a patient belong together.
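One way this is often handled (a sketch under the assumption that each patient can be summarized by the mean of their T*M feature vectors; medians or other summaries would work too) is to build one row per patient and then cluster those rows:

```r
set.seed(7)
n_pat <- 10; n_task <- 3; n_rep <- 4; n_feat <- 5
# toy data: one row per (patient, task, repetition), with n_feat features
X <- data.frame(patient = rep(1:n_pat, each = n_task * n_rep),
                matrix(rnorm(n_pat * n_task * n_rep * n_feat), ncol = n_feat))
# one summary row per patient: the mean of their n_task * n_rep vectors
profiles <- aggregate(. ~ patient, data = X, FUN = mean)
d  <- dist(scale(profiles[, -1]))        # distances between patients
hc <- hclust(d, method = "ward.D2")      # hierarchical clustering of patients
groups <- cutree(hc, k = 2)              # cut into G = 2 groups of patients
table(groups)
```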
| Clustering of known clusters: how to cluster by patient? | CC BY-SA 4.0 | null | 2023-04-06T16:48:28.820 | 2023-04-09T13:30:08.767 | 2023-04-09T13:30:08.767 | 301511 | 301511 | [
"machine-learning",
"clustering",
"hierarchical-clustering"
] |
612157 | 1 | null | null | 0 | 18 | I have a histogram A that holds a distribution describing kinematics. Now I want to approximate that distribution by other distributions that were made under simplified assumptions. Then I want to quantify which estimation gave the smallest relative error when compared to the true one. Let's consider one of these simplified distributions and call it histogram B.
Two questions:
- I assume the relative error that I make when using histogram B would simply be, for bin i:
$ \text{rel. err}_{i} = \text{Abs}(\text{bincontent}_\text{A,i}-\text{bincontent}_\text{B,i})/\text{bincontent}_\text{A,i}$
Is that correct?
- How would I quantify which assumption (histogram B, C, or D) has the lowest relative error when considering all of the bins, some of which are wider than others? Taking the mean of the relative errors over all bins is not accurate due to the different bin sizes. Would I have to use a $\chi^2$-test and then compare the $\chi^2/\text{DOF}$ of histograms B, C, and D? Or would I better compare the p-value? Or is there another way?
| Relative Error and Chi-Squared-Test | CC BY-SA 4.0 | null | 2023-04-06T16:56:00.213 | 2023-04-06T17:02:23.937 | 2023-04-06T17:02:23.937 | 362671 | 385106 | [
"error",
"histogram"
] |
612158 | 1 | null | null | 0 | 20 | I have a year's worth of electricity power data at 15-minute intervals, joined with weather data and one-hot time-of-week dummy variables.
Is using a train/test split an okay approach for validating the model? I am attempting to predict electricity usage with explanatory variables like weather and time-of-week dummies.
For starters, I weeded out a bunch of dummy variables with OLS regression in statsmodels and then attempted to fit the model with XGBoost. Would anyone have some tips for a better approach to fitting time-series data, validating the ML model, and then using regression to predict electricity? Some of my Python code for the ML training process:
```
# shuffle the DataFrame rows
df2 = df2.sample(frac=1)
train, test = train_test_split(df2, test_size=0.2)
regressor = XGBRegressor()
X_train = np.array(train.drop(['total_main_kw'], axis=1))
y_train = np.array(train['total_main_kw'])
X_test = np.array(test.drop(['total_main_kw'], axis=1))
y_test = np.array(test['total_main_kw'])
regressor.fit(X_train, y_train)
predicted_kw_xgboost = regressor.predict(X_test)
y_test_df = pd.DataFrame({'test_power':y_test})
y_test_df['predicted_kw_xgboost'] = predicted_kw_xgboost
y_test_df.plot(figsize=(25,8))
```
This will plot the trained model predicting the test dataset, but I have not done any verification of whether the data are stationary:
[](https://i.stack.imgur.com/rY2q1.png)
```
mse = mean_squared_error(y_test_df['test_power'], y_test_df['predicted_kw_xgboost'])
print("MEAN SQUARED ERROR: ",mse)
print("ROOT MEAN SQUARED ERROR: ",round(np.sqrt(mse),3)," IN kW")
MEAN SQUARED ERROR: 4.188126272978789
ROOT MEAN SQUARED ERROR: 2.046 IN kW
```
Thanks for any tips; still learning in this area.
| validating ML model in regression | CC BY-SA 4.0 | null | 2023-04-06T16:58:12.383 | 2023-04-06T16:58:12.383 | null | null | 223662 | [
"regression",
"machine-learning",
"time-series",
"python",
"boosting"
] |
612159 | 1 | null | null | 1 | 22 | I have been taught that if an estimator is unbiased, then its convergence in probability can be proven by showing that its variance tends to 0 as the sample size grows to infinity. However, considering the definition of the variance and the fact that the estimator is unbiased, this would also prove mean-square convergence, which is a stronger form of convergence. I was therefore wondering whether the two concepts are equivalent when the estimator is unbiased, or whether the method I outlined above is just a sufficient but not necessary condition for convergence in probability of an unbiased estimator. Basically, is it possible to define an unbiased estimator whose variance does not tend to 0, but which nonetheless converges in probability?
| If an estimator for a parameter is unbiased, is it necessary for its variance to tend towards 0 for it to be consistent? | CC BY-SA 4.0 | null | 2023-04-06T17:01:42.327 | 2023-04-06T17:01:42.327 | null | null | 367248 | [
"convergence",
"unbiased-estimator"
] |
612160 | 1 | null | null | 0 | 26 | Let's say we have two categorical variables the first with categories $j = 1,..., J$ and the other with categories $k = 1,...,K$. Often in Bayesian hierarchical linear regression, we might have a model specification like the following,
$$ Y_{ijk} = \alpha + \beta_j + \gamma_k + \epsilon_i $$
$$ \beta_j \sim N(\mu_\beta, \sigma_\beta) \text{ for } j = 1,...,J$$
$$ \gamma_k \sim N(\mu_\gamma, \sigma_\gamma) \text{ for } k = 1,...,K$$
with some priors on $\epsilon, \mu_\beta, \mu_\gamma, \sigma_\beta,$ and $\sigma_\gamma$.
However, in an OLS linear regression setting we would always drop a category and treat it as a baseline (aka "dummy encoding"). Something like,
$$ Y_{ijk} = \alpha + \beta_j + \gamma_k + \epsilon_i $$
$$ \beta_1 = 0$$
$$ \gamma_1 = 0$$
I understand that the hierarchical prior (and even non-hierarchical priors) can resolve identifiability issues, so it's less important to drop a category in the Bayesian context. I also understand the interpretability benefits of not having a baseline category. However, it seems to me that it would be easier for our MCMC sampling to converge if we dropped a category and set $\mu_\beta = 0$ and $\mu_\gamma = 0$.
If my primary concern is model convergence, should I be using dummy encoding? Why don't I typically see dummy encoding in a Bayesian regression context particularly in hierarchical models?
| Why don't we typically drop a category as a baseline in Bayesian hierarchical linear regression? | CC BY-SA 4.0 | null | 2023-04-06T17:02:38.750 | 2023-04-06T17:02:38.750 | null | null | 23801 | [
"bayesian",
"categorical-data",
"convergence",
"categorical-encoding",
"hierarchical-bayesian"
] |
612161 | 1 | null | null | 2 | 252 | The context is about the use of a given model deviance (often referred to as “Residual deviance” in R) and that of its “Null deviance” to calculate D2, the deviance explained for models with non-normal error.
This metric (D2) is in some sense analogous to the coefficient of determination (R2) of a linear model, providing an approximate idea of the variation explained by your model, although it is probably better described as a value reflecting how close the model's fit is to perfect when compared to a saturated model, see:
[https://bookdown.org/egarpor/PM-UC3M/glm-deviance.html](https://bookdown.org/egarpor/PM-UC3M/glm-deviance.html)
See also p. 166-167 of Guisan & Zimmermann (2000) for the calculation of D2 and its related adjusted version:
[https://www.wsl.ch/staff/niklaus.zimmermann/papers/ecomod135_147.pdf](https://www.wsl.ch/staff/niklaus.zimmermann/papers/ecomod135_147.pdf)
With most packages, such as "stats" and its glm() function, "MASS" and glm.nb(), and "mgcv" and gam(), those two deviance-related values can easily be obtained with "deviance" and "null.deviance" (preceded by a dollar symbol - see further below). The gam() function actually also provides D2 directly in its output.
For the glmmTMB package and its glmmTMB() function, I haven't been able to extract either the residual deviance or the null deviance to calculate D2.
Does anyone have an idea of how to achieve this?
Here's an example of what I'm looking for, using real count data from a sample of walleyes through a monitoring program. The idea is to extract the deviance-related information for the same model using each function/package, fitting them with the same error structure (NB2, link=log), and see how they compare, especially in contrast to glmmTMB (from which I have not been able to extract such information).
Walleye catch curve example:
From the "descending limb of a catch curve" in fisheries, one can model the rate at which counts decrease with age to estimate the instantaneous mortality (Z: the absolute value of the age coefficient) on the log scale from a sample of randomly-captured fish. The age-frequencies data are as follow:
```
age<-seq(1,15,by=1)
count<-c(151,56,117,10,12,21,8,2,2,1,2,0,1,1,2)
walleye<-data.frame(age,count)
```
These data are over-dispersed (variance > mean) and, as such, the Poisson family distribution is inadequate. Using the glm.nb() function of the "MASS" package allows modelling this extra variance, as follows:
```
summary(m.walleye.nb2<-glm.nb(count~age,data=walleye))
Call:
glm.nb(formula = count ~ age, data = walleye, init.theta = 3.114212171,
link = log)
Deviance Residuals:
Min 1Q Median 3Q Max
-1.6522 -0.8761 -0.2121 0.5188 1.8561
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 5.21804 0.36336 14.361 < 2e-16 ***
age -0.42792 0.05395 -7.931 2.17e-15 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for Negative Binomial(3.1142) family taken to be 1)
Null deviance: 109.841 on 14 degrees of freedom
Residual deviance: 15.903 on 13 degrees of freedom
AIC: 95.102
Number of Fisher Scoring iterations: 1
Theta: 3.11
Std. Err.: 1.58
2 x log-likelihood: -89.102
```
Note that both the Null deviance (109.841) and the Residual deviance (15.903) are provided directly in the output and already tell me that the predicted values of the considered model adjust pretty well to the observed (count) data given the large difference between the Null and Residual deviances.
These two deviance-related values can also be extracted directly as follows:
```
m.walleye.nb2$null.deviance
[1] 109.8413
m.walleye.nb2$deviance
[1] 15.90327
```
Using the hnp() function of the hnp package ([Moral et al. 2017](https://www.jstatsoft.org/article/view/v081i10)), I can first tell that the Pearson residuals of my model behave as expected given the distributional assumptions of this Poisson extension, i.e., the negative binomial type II (nb2). The model is thus adequate (i.e., goodness-of-fit), but I'm also interested in obtaining a "calibration" metric to get an idea of how well the predictions adjust to the observed data. That's where D2 is useful, more from an "explanatory power" than an "adequacy" perspective:
```
D2<-100*(1-m.walleye.nb2$deviance/m.walleye.nb2$null.deviance)
D2
[1] 85.52159
```
This quite high value is not unexpected, as counts decrease with age (walleyes die as they age), and despite the high variation observed in these age-frequency data, the .nb2 model captures most of the signal for the central tendency and its associated variance.
If I use the gam() function of the mgcv package, I can run the exact same model by not using a smoothing function s() and by specifying the argument method="ML" (as REML would otherwise be used by default):
```
library(mgcv)
summary(m.walleye.nb2.GAM<-gam(
count~age,family=nb(theta=NULL),method="ML",data=walleye))
Family: Negative Binomial(3.114)
Link function: log
Formula:
count ~ age
Parametric coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 5.21804 0.36336 14.361 < 2e-16 ***
age -0.42792 0.05395 -7.931 2.17e-15 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
R-sq.(adj) = 0.774 Deviance explained = 85.5%
-ML = 44.551 Scale est. = 1 n = 15
```
Note that the parameter estimates are identical to those obtained with glm.nb() of the "MASS" package. The Deviance explained provided in the output corresponds to the one calculated for the previous .nb2 model. If we calculate D2 by hand for this nb2.GAM model:
```
D2<-100*(1-m.walleye.nb2.GAM$deviance/m.walleye.nb2.GAM$null.deviance)
D2
[1] 85.52159
```
Identical.
Now, fitting the same model in glmmTMB using .NB2 instead of .nb2 to differentiate it from the one obtained with glm.nb(), we get:
```
library(glmmTMB)
summary(m.walleye.NB2<-glmmTMB(count~age,family=nbinom2,data=walleye))
Family: nbinom2 ( log )
Formula: count ~ age
Data: walleye
AIC BIC logLik deviance df.resid
95.1 97.2 -44.6 89.1 12
Dispersion parameter for nbinom2 family (): 3.11
Conditional model:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 5.21804 0.34947 14.931 < 2e-16 ***
age -0.42792 0.05485 -7.802 6.09e-15 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
The parameter estimates are again identical to those of .nb2 and .nb2.GAM, and the deviance provided is 89.1.
All models have the same log-likelihood:
```
logLik(m.walleye.nb2)
'log Lik.' -44.55119 (df=3)
logLik(m.walleye.nb2.GAM)
'log Lik.' -44.55119 (df=3)
logLik(m.walleye.NB2)
'log Lik.' -44.55119 (df=3)
```
As indicated by @Ben Bolker, the deviance reported in glmmTMB is:
```
deviance<-(-2*-44.55119)
deviance
[1] 89.10238
```
The "deviance" here is obviously the same for all three models (89.10238) and is used, for instance, in the calculation of the AIC = -2(log-likelihood) + 2K, where K is the number of parameters (df=3 from above):
```
89.10238+(2*3)
[1] 95.10238
```
In the end, the more specific question would be:
How can I extract the deviance-related information (null and residual deviances) from a glmmTMB object for the computation of D2 in such a simple model (one predictor, fixed effect)?
Although not of prime importance, this metric nonetheless helps to provide an idea of how good your model is at explaining the variation in your data, knowing that a top-ranking, adequate model may have low explanatory power while still being useful. As such, knowing this information is desirable, sometimes to tone down a statement related to the model predictions. I suspect Simon Wood did not include D2 in the gam() function output of his "mgcv" package without good reason.
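For what it's worth, one workaround I have considered (a sketch only, not verified to match the deviance definition used by glm.nb/gam) is to refit an intercept-only glmmTMB model with update() and compare the -2 × log-likelihood values. Note that this yields a likelihood-ratio-based (McFadden-style) analogue rather than the saturated-model-based deviance used for D2 above:

```r
# Sketch (assumption: update() and logLik() behave for glmmTMB as for glm):
# compare the fitted model against an intercept-only glmmTMB model.
m.walleye.NB2.null <- update(m.walleye.NB2, . ~ 1)
dev.full <- -2 * as.numeric(logLik(m.walleye.NB2))
dev.null <- -2 * as.numeric(logLik(m.walleye.NB2.null))
100 * (1 - dev.full / dev.null)  # McFadden-style; not necessarily equal to D2
```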
| How to extract the residual and null deviances from a glmmTMB object (to calculate D2, the deviance explained)? | CC BY-SA 4.0 | null | 2023-04-06T17:16:28.280 | 2023-04-24T11:58:49.330 | 2023-04-24T01:10:18.723 | 338493 | 338493 | [
"generalized-linear-model",
"deviance",
"glmmtmb"
] |
612163 | 2 | null | 380848 | 0 | null | In the example you have given where effect sizes are reported for boys and girs separately within a single study, this is a type of hierarchical effects. Another type of hierarchical effects (i.e., the typical one) is that research group conducts multiple studies based on independent samples and each study reports one effect estimate. If I understand correctly, the two types can be treated as equivalent and modelled in the three-level structure.
| null | CC BY-SA 4.0 | null | 2023-04-06T17:26:50.657 | 2023-04-06T17:26:50.657 | null | null | 323047 | null |
612166 | 2 | null | 1315 | 1 | null | Perhaps the distribution is multimodal, i.e. the distributions could be a sum of different conditions and hence multiple distributions, e.g. such as a system that is idle vs. a system that is doing heavy single file downloading vs. a system that is streaming requiring regular intervals of high data throughput. The time spaces between pings may also be important, and some Nyquist-limited information may be lost in inadequate sampling.
There is an additional post here about summing multiple exponential distributions. I could not comment on nor upvote that one, so I made this separate response. The point about multiple hops is very valid, and I would first try analyzing pings to the next immediate node.
| null | CC BY-SA 4.0 | null | 2023-04-06T17:36:01.430 | 2023-04-06T17:42:33.837 | 2023-04-06T17:42:33.837 | 233806 | 233806 | null |
612169 | 1 | null | null | 1 | 31 | Sometimes, in a meta-analysis, it happens that studies include more than one independent sample and multiple component outcomes are reported for each sample. Which model would be more reasonable to use, three-level or four-level meta-analysis?
I have read the two nice papers “[Three-level meta-analysis of dependent effect sizes](https://pubmed.ncbi.nlm.nih.gov/23055166/)” and “[Meta-analysis of multiple outcomes: a multilevel approach](https://pubmed.ncbi.nlm.nih.gov/25361866/)” by Prof. Wim Van den Noortgate, et al. In three-level meta-analysis, for dependences like multiple outcomes from the same study (i.e., correlated effects), the dependence was modelled as hierarchical effects, that is, the three-level model for correlated effects was: level 1-sample, level 2-outcome, level 3-study, whereas for hierarchical effects, level 1-sample, level 2-sub-study, level 3-study.
My question is: if the two types of dependence occur in the same meta-analysis, would it be considered reasonable to conduct a four-level meta-analysis, i.e., level 1–sample, level 2–outcome, level 3–sub-study, and level 4–study? Or to conduct a three-level meta-analysis, i.e., level 1–sample, level 2–outcome/sub-study, and level 3–study? Would there be a general requirement for the number of units at each level? If using the three-level model, it seems difficult to interpret the variance estimate at level 2.
| Three-level or four-level meta-analysis? | CC BY-SA 4.0 | null | 2023-04-06T17:44:50.917 | 2023-04-06T17:44:50.917 | null | null | 323047 | [
"multilevel-analysis",
"meta-analysis"
] |
612170 | 1 | null | null | 10 | 324 | Is it possible to include mixed effects into a random forest model in R? I know about the `lmer` (from `lme4`) and `randomForest` (from randomForest) functions but it would be nice if I could combine the two in some way, if that makes sense.
I'm afraid that if I don't include random effects in my random forest model, it will be incorrect.
| Mixed effects in Random forest (in R) | CC BY-SA 4.0 | null | 2023-04-06T18:05:52.430 | 2023-04-06T21:00:56.923 | 2023-04-06T19:41:28.143 | 56940 | 385110 | [
"r",
"mixed-model",
"random-forest",
"meta-analysis"
] |
612171 | 2 | null | 411676 | 2 | null | The McFadden pseudo $R^2$ takes the stance that the extension of the usual $R^2$ to logistic regression should be $1-\left(\dfrac{L_1}{L_0}\right)$, where $L_1$ is the log-likelihood of your model and $L_0$ is the log-likelihood of an intercept-only model that predicts the mean every time (see, for instance, [UCLA](https://stats.oarc.ucla.edu/other/mult-pkg/faq/general/faq-what-are-pseudo-r-squareds/) on this). If you apply this idea to a Gaussian likelihood instead of binomial, you wind up with the following.
$$
1-\left(\dfrac{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\bar y
\right)^2
}\right)
$$
The numerator is the log-likelihood of your model, and the denominator is the log-likelihood of an intercept-only model that always predicts the mean $\bar y$.
The above expression winds up being equal to $\text{corr}(y, \hat y)^2$ in the OLS linear regression case (with an intercept). Further, the expression above represents a comparison of the likelihood (or mean squared error) of your model relative to that of a baseline model, and if you cannot beat the baseline model on a metric of interest, that suggests trouble. Predicting $\bar y$ every time makes sense as a baseline model because, for a model that is supposed to predict conditional means, what better naïve prediction of the conditional mean than the overall mean?
There are issues with summarizing model performance with just one value, as is suggested in the comments, and Cross Validated has [an answer](https://stats.stackexchange.com/a/13317/247274) showing why $R^2$ can be high despite a clearly incorrect model. However, $
1-\left(\dfrac{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\bar y
\right)^2
}\right)
$ is a totally reasonable statistic to calculate, and you can do it for your model.
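For concreteness, here is a minimal `R` sketch of that statistic; for an OLS fit with an intercept it reproduces the usual $R^2$:

```r
# 1 - SSE/SST computed from a model's predictions
fit <- lm(mpg ~ wt, data = mtcars)
y <- mtcars$mpg
yhat <- fitted(fit)
pseudo_R2 <- 1 - sum((y - yhat)^2) / sum((y - mean(y))^2)
all.equal(pseudo_R2, summary(fit)$r.squared)  # TRUE in the OLS case
```

The three lines computing `pseudo_R2` work unchanged for any model that produces predictions `yhat`, which is the point of the statistic.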
| null | CC BY-SA 4.0 | null | 2023-04-06T18:25:15.270 | 2023-04-06T18:25:15.270 | null | null | 247274 | null |
612172 | 1 | 612196 | null | 2 | 54 | The `wilcox.test` function in R calculates the pseudomedian and a confidence interval when `conf.int=TRUE`.
In this question, [Wilcoxon signed rank test - help on interpretation of pseudo median](https://stats.stackexchange.com/questions/404971/wilcoxon-signed-rank-test-help-on-interpretation-of-pseudo-median) for example, there is a description of the calculation of the pseudomedian, but I don't understand why the confidence interval cannot be based simply on the median, rather than on the median of the pairwise means.
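For reference, my understanding is that the pseudomedian reported by `wilcox.test` is the Hodges-Lehmann estimator, i.e., the median of all pairwise Walsh averages $(x_i + x_j)/2$ for $i \le j$. A small sketch (the two values should agree up to numerical tolerance):

```r
set.seed(42)
x <- rnorm(20, mean = 1)
w <- outer(x, x, "+") / 2                       # all pairwise means (Walsh averages)
pseudo <- median(w[lower.tri(w, diag = TRUE)])  # keep each pair i <= j once
pseudo
wilcox.test(x, conf.int = TRUE)$estimate        # "(pseudo)median" from wilcox.test
```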
| Why is the pseudomedian better than the median in a Wilcoxon test? | CC BY-SA 4.0 | null | 2023-04-06T18:25:16.427 | 2023-04-06T23:54:08.127 | 2023-04-06T23:54:08.127 | 22047 | 285902 | [
"r",
"wilcoxon-mann-whitney-test",
"median",
"wilcoxon-signed-rank"
] |
612174 | 1 | null | null | 0 | 34 | I have been tasked with evaluating hospital length of stay (LOS) in two groups of patients using the Cox proportional hazards model. One group of patients received a medication, the other did not. Hospital discharge constitutes a failure, there is no censoring (all patients are eventually discharged).
The hazards for the two groups of patients were not proportional, though, so Cox is no longer the correct approach. I am not sure how to decide on the appropriate model (accelerated failure-time (AFT) model or multiplicative/proportional hazards (PH) model) or the appropriate survival distribution (exponential, Weibull, etc.) to use for the model.
Any tips for how to find the appropriate model and distribution?
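So far, the only concrete approach I have found is to fit several candidate parametric models and compare them with an information criterion, along the lines of the sketch below (using the `survival` package's `lung` data as a stand-in for my own data, with `sex` standing in for the medication group):

```r
library(survival)
# Fit candidate AFT distributions and compare by AIC (lower is better)
dists <- c("exponential", "weibull", "lognormal", "loglogistic")
fits <- lapply(dists, function(d)
  survreg(Surv(time, status) ~ sex, data = lung, dist = d))
setNames(sapply(fits, AIC), dists)
```

I gather that plotting the fitted parametric survival curves against the Kaplan-Meier estimates for each group would complement this, but I am not sure whether an AIC comparison alone is considered sufficient.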
| How do I choose correct model (AFT or PH) & distribution when Cox proportional hazards is not appropriate? | CC BY-SA 4.0 | null | 2023-04-06T18:40:07.623 | 2023-04-07T14:26:32.100 | null | null | 385112 | [
"survival",
"cox-model",
"proportional-hazards",
"accelerated-failure-time"
] |
612175 | 1 | 612214 | null | 2 | 51 | During the fine-tunning of a DistilBert model, I tried two optimizers (with different parameter sets) on the same dataset.
Here are the results:
```
- AdamW: train loss (0.21), val loss (0.33), accuracy (0.88)
- SGD: train loss (0.35), val loss (0.35), accuracy (0.87)
```
I read that if:
```
- train loss > val loss: the model is under-fitted
- train loss == val loss: the model is well-fitted
- train loss < val loss: the model is over-fitted
```
So I would say that the model trained with AdamW is over-fitted, but on the other hand it is (slightly) better.
Should I prefer a well-fitted model with slightly worse validation results, or should I focus only on the best validation results?
| Does the best model necessarily have the best results on validation set? | CC BY-SA 4.0 | null | 2023-04-06T18:46:38.520 | 2023-04-07T09:39:33.067 | 2023-04-07T06:44:44.533 | 382857 | 382857 | [
"overfitting",
"fitting"
] |
612177 | 1 | 612181 | null | 2 | 69 | Binary cross entropy is normally used in situations where the "true" result or label is one of two values (hence "binary"), typically encoded as 0 and 1.
However, the documentation for PyTorch's [binary_cross_entropy](https://pytorch.org/docs/stable/generated/torch.nn.functional.binary_cross_entropy.html) function has the following:
>
target (Tensor) – Tensor of the same shape as input with values between 0 and 1.
(In this context "target" is the "true" result/label.)
The "between" seems rather odd. It's not "either 0 or 1, just with a real-valued type", but explicitly between. Further digging reveals this to be [deliberate](https://github.com/pytorch/pytorch/issues/2272#issuecomment-319766578) on the part of the PyTorch programmers. (Though I can't seem to find out why.)
Granted, given the definition of BCE $( y\log x + (1-y) \log (1-x) )$ it's certainly possible to compute things with target values that aren't strictly {0, 1}, but I'm not sure what the potential use of such a situation is.
Under what sort of situations would one potentially compute the binary cross entropy with target values which are intermediate? What would a class label of 0.75 actually mean, philosophically speaking?
| Meaning of non-{0,1} labels in binary cross entropy? | CC BY-SA 4.0 | null | 2023-04-06T19:08:20.153 | 2023-04-06T19:42:51.360 | null | null | 69382 | [
"cross-entropy"
] |
612178 | 1 | 612260 | null | 1 | 60 | I will show what I'm doing it in R to make sure if I'm doing it correctly
This is my dataset which I'm using for analysis
```
dput(df2)
structure(list(TCGA_ID = c("TCGA-AB-2965", "TCGA-AB-2881", "TCGA-AB-2834",
"TCGA-AB-2818", "TCGA-AB-2898", "TCGA-AB-2956", "TCGA-AB-2994",
"TCGA-AB-2920", "TCGA-AB-3009", "TCGA-AB-2892", "TCGA-AB-2814",
"TCGA-AB-2891", "TCGA-AB-2991", "TCGA-AB-2875", "TCGA-AB-2805",
"TCGA-AB-3007", "TCGA-AB-2884", "TCGA-AB-2975", "TCGA-AB-2946",
"TCGA-AB-2932", "TCGA-AB-2979", "TCGA-AB-2917", "TCGA-AB-2828",
"TCGA-AB-2867", "TCGA-AB-2815", "TCGA-AB-2839", "TCGA-AB-2928",
"TCGA-AB-2980", "TCGA-AB-2873", "TCGA-AB-2853", "TCGA-AB-2976",
"TCGA-AB-2877", "TCGA-AB-3001", "TCGA-AB-3012", "TCGA-AB-2940",
"TCGA-AB-2992", "TCGA-AB-2806", "TCGA-AB-2995", "TCGA-AB-2847",
"TCGA-AB-2842", "TCGA-AB-2858", "TCGA-AB-2987", "TCGA-AB-2856",
"TCGA-AB-2916", "TCGA-AB-2901", "TCGA-AB-2844", "TCGA-AB-2808",
"TCGA-AB-2955", "TCGA-AB-2820", "TCGA-AB-2811", "TCGA-AB-2835",
"TCGA-AB-2930", "TCGA-AB-2845", "TCGA-AB-2893", "TCGA-AB-2942",
"TCGA-AB-2921", "TCGA-AB-2988", "TCGA-AB-3002", "TCGA-AB-2925",
"TCGA-AB-2943", "TCGA-AB-2959", "TCGA-AB-2933", "TCGA-AB-2939",
"TCGA-AB-2866", "TCGA-AB-2813", "TCGA-AB-2896", "TCGA-AB-3008",
"TCGA-AB-2950", "TCGA-AB-2819", "TCGA-AB-2895", "TCGA-AB-2830",
"TCGA-AB-2812", "TCGA-AB-2918", "TCGA-AB-2915", "TCGA-AB-2869",
"TCGA-AB-2948", "TCGA-AB-2931", "TCGA-AB-2924", "TCGA-AB-2935",
"TCGA-AB-2836", "TCGA-AB-2970", "TCGA-AB-2900", "TCGA-AB-2936",
"TCGA-AB-2934", "TCGA-AB-2952", "TCGA-AB-2927", "TCGA-AB-2817",
"TCGA-AB-2949", "TCGA-AB-2914", "TCGA-AB-2996", "TCGA-AB-2885",
"TCGA-AB-2882", "TCGA-AB-2825", "TCGA-AB-2823", "TCGA-AB-2888",
"TCGA-AB-2919", "TCGA-AB-2890", "TCGA-AB-2984", "TCGA-AB-2897",
"TCGA-AB-2865", "TCGA-AB-2983", "TCGA-AB-2841"), turqoise_module = c("High",
"Low", "High", "High", "Low", "High", "Low", "High", "Low", "Low",
"High", "High", "Low", "Low", "High", "Low", "High", "Low", "Low",
"Low", "Low", "Low", "Low", "High", "Low", "High", "High", "Low",
"Low", "High", "High", "Low", "Low", "Low", "Low", "Low", "Low",
"Low", "High", "High", "Low", "High", "High", "High", "Low",
"High", "Low", "Low", "High", "High", "Low", "High", "Low", "High",
"Low", "High", "High", "Low", "High", "High", "Low", "High",
"Low", "High", "High", "High", "Low", "Low", "Low", "High", "High",
"High", "High", "High", "High", "Low", "High", "Low", "High",
"High", "High", "High", "Low", "High", "High", "High", "Low",
"Low", "Low", "Low", "High", "Low", "High", "Low", "Low", "Low",
"High", "Low", "Low", "High", "High", "Low"), OS_MONTHS = c(11.3,
29.7, 7.7, 10.2, 36.1, 5.7, 83.5, 11.8, 19, 59.3, 26.3, 21.5,
88.3, 27.7, 18.5, 75.8, 24.8, 34, 24.4, 42.1, 47, 57.3, 99.9,
5.2, 26.3, 16.3, 4, 47.5, 32.7, 3.1, 30, 41.4, 76.2, 86.6, 55.4,
56.3, 30.6, 73.6, 52.7, 0.3, 19.2, 6.3, 5.3, 45.3, 10.5, 3.9,
118.1, 16.4, 0.3, 8.2, 77.3, 7.1, 9.3, 6.6, 43.5, 8.1, 0.8, 46.8,
7.9, 4.2, 15.4, 4.6, 36.9, 5.5, 1.3, 7.5, 27, 40.3, 95.6, 5.7,
8.1, 11.5, 7.4, 0.5, 27.1, 18.1, 0.1, 26, 1.6, 17, 10.7, 6.3,
13.8, 6.6, 1.9, 2.4, 9.3, 32.6, 48.3, 73, 7, 11, 7.5, 0.2, 33.5,
26.8, 0.5, 71.3, 30.5, 2.3, 11.2, 46.5), Status = c(1L, 0L, 1L,
1L, 0L, 1L, 0L, 1L, 1L, 0L, 1L, 1L, 0L, 0L, 1L, 0L, 1L, 1L, 0L,
0L, 0L, 0L, 0L, 1L, 1L, 1L, 1L, 0L, 0L, 1L, 1L, 0L, 0L, 0L, 1L,
1L, 1L, 0L, 1L, 1L, 1L, 1L, 1L, 0L, 0L, 1L, 0L, 1L, 1L, 1L, 0L,
1L, 1L, 1L, 0L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 0L, 1L, 1L, 1L, 1L,
0L, 0L, 1L, 1L, 1L, 1L, 1L, 0L, 1L, 1L, 0L, 1L, 1L, 1L, 1L, 0L,
0L, 1L, 1L, 1L, 0L, 0L, 0L, 1L, 1L, 1L, 1L, 0L, 0L, 1L, 0L, 0L,
1L, 1L, 1L)), row.names = c(NA, -102L), class = "data.frame")
```
Here, the turqoise_module was obtained based on a signature of 31 genes, which I was able to get following the explanation [here](https://stats.stackexchange.com/questions/610125/using-multiple-genes-building-gene-signature-and-survival-analysis/610284?noredirect=1#comment1133081_610284)
Now what I do next is this
```
fit1 = survfit(Surv(OS_MONTHS, Status)~ turqoise_module, data=df2)
fit1
out1 = survdiff(Surv(OS_MONTHS, Status)~ turqoise_module, data=df2)
out1
```
The output of the above model fitting is this
```
fit1
Call: survfit(formula = Surv(OS_MONTHS, Status) ~ turqoise_module,
data = df2)
n events median 0.95LCL 0.95UCL
turqoise_module=High 51 48 7.1 5.7 8.2
turqoise_module=Low 51 17 NA 55.4 NA
> out1
Call:
survdiff(formula = Surv(OS_MONTHS, Status) ~ turqoise_module,
data = df2)
N Observed Expected (O-E)^2/E (O-E)^2/V
turqoise_module=High 51 48 18.4 47.7 74.7
turqoise_module=Low 51 17 46.6 18.8 74.7
Chisq= 74.7 on 1 degrees of freedom, p= <2e-16
```
When I plot them using this code
```
ggsurvplot(fit1,
pval = TRUE, conf.int = TRUE,data = df2,
risk.table = TRUE, # Add risk table
risk.table.col = "strata", # Change risk table color by groups
linetype = "strata", # Change line type by groups
surv.median.line = "hv", # Specify median survival
ggtheme = theme_bw(base_size = fsize),
palette = c("#990000", "#000099","green","black"))
```
I get this output [](https://i.stack.imgur.com/SReOX.png)
Now, to obtain the turqoise_module variable on which I based the High/Low categorization, I used this code
```
df1 <- x_te[,gene]
df1 <- as.data.frame(df1)
df1$LSC22score <- rowSums(
sweep(x = df1,
MARGIN = 2,
STATS = beta1,
FUN = `*`)
)
df1 <- df1 %>% dplyr::mutate(turqoise_module =
dplyr::case_when(
LSC22score > median(LSC22score) ~ "High",
LSC22score < median(LSC22score) ~ "Low"
))
```
Now my question is: when I look at the `out1` output and the generated figure, there is a bit of confusion due to my lack of conceptual clarity.
>
The expected number of deaths in the high turqoise module group is 18.4, while the observed number of deaths is 48, indicating that there are more deaths in this group than expected. The opposite is true for the low turqoise module group, where the observed number of deaths is 17 while the expected number is 46.6, suggesting that there are fewer deaths in this group than expected.
But when I look at the figure, I thought that patients categorized as Low survive for a longer period of time.
Any suggestion or help on how to interpret this, or on where I am going wrong, would be really appreciated.
| Interpreting survival analysis plot using genes as predictor | CC BY-SA 4.0 | null | 2023-04-06T19:10:37.537 | 2023-04-07T15:32:19.223 | null | null | 334559 | [
"survival",
"lasso"
] |
612180 | 2 | null | 612170 | 11 | null | The current main popular implementation of Random Forests (RF) (i.e. the `randomForest` package) is available only for univariate (continuous or discrete) responses. On the other hand, mixed models are inherently multivariate models, that is models that deal with vector-valued responses. Fortunately, extensions of RF for multivariate responses, in particular for handling longitudinal data, do exist.
`LongituRF` is one of the `R` packages that implement Random Forests for longitudinal data of which I am aware. A lot more information can be found in [this](https://doi.org/10.1093/bib/bbad002) recent review paper on Random Forests for longitudinal data.
Related posts:

- [How can I include random effects (or repeated measures) into a randomForest](https://stats.stackexchange.com/q/103730/56940)
- [How to deal with hierarchical / nested data in machine learning](https://stats.stackexchange.com/q/221358/56940)
- [Random forest for binary panel data](https://stats.stackexchange.com/q/156600/56940)
| null | CC BY-SA 4.0 | null | 2023-04-06T19:40:24.667 | 2023-04-06T21:00:56.923 | 2023-04-06T21:00:56.923 | 56940 | 56940 | null |
612181 | 2 | null | 612177 | 2 | null | This is about something that the ML community has taken to calling “soft labels”.
Think of the original zero or one labels as a probability distribution. These place all the probability mass on one outcome or the other. By smoothing the labels, we can ascribe fractional certainty to the outcomes, and the model can fit to these smoothed values instead of 1.0 and 0.0. An observed benefit is that it avoids the saturation problems attested in the sigmoid and tanh functions.
Aside from the empirical benefit to label smoothing, sometimes you want to explicitly model the prior uncertainty in the label. If you have noisy labels for your data, or if your data are aggregations of multiple trials with different outcomes, then there is inherent uncertainty in what the correct label for a given instance is. You can interpret the number as a probability, leveraging whatever philosophical stance you take toward the meanings of those.
| null | CC BY-SA 4.0 | null | 2023-04-06T19:42:51.360 | 2023-04-06T19:42:51.360 | null | null | 155836 | null |
612182 | 2 | null | 411676 | 0 | null | I am unsure of the reason behind the idea of fitting a GLM with family=gaussian(link=log), as my understanding is that an identity link should be use in such case. I may be wrong here, but I've never seen that before.
If the standardized residuals from your model when using family=gaussian(link=identity) are not sufficiently normally distributed according to a Shapiro-Wilk test or else, assuming that you're dealing with a response variable on a continuous scale, then maybe you should consider performing a quantile regression instead (for instance: package quantreg or qgam), using tau=0.5 to model the median which will often be close to the mean, despite both being different measures of the central tendency you're looking at?
Or maybe you are analyzing count data, such as how many fish were caught in a net, which can only be 0, 1, 2, ...? In such a case (i.e., a discrete variable), the Poisson family distribution, which relies on link=log, or one of its extensions such as the negative binomial type II (NB2), which also requires link=log, should be used instead. I would advise against using any log-transformation in an attempt to meet parametric test assumptions, whether for a continuous variable or for count data. See for instance:
[O'Hara and Kotze 2010 Methods in Ecology and Evolution](https://besjournals.onlinelibrary.wiley.com/doi/epdf/10.1111/j.2041-210X.2010.00021.x)
[Steel et al. 2013 Ecosphere](https://esajournals.onlinelibrary.wiley.com/doi/epdf/10.1890/ES13-00160.1)
With a normal error and an adequate model (homogeneity of variance/homoscedasticity is respected too, independent observations, no multicollinearity), one may use R2 to get an idea of the variation explained by the model, but as indicated by @whuber, it should not be used to assess the adequacy of a model. For a non-normal error, the equivalent is D2, the deviance explained, but again this metric should be used to get an idea of how close (or far) the predicted values are from those of a perfect (saturated) model.
For both linear and log-linear models, I would recommend to consider the hnp package when it comes to adequacy assessment, see Moral et al. (2017):
[https://www.jstatsoft.org/article/view/v081i10](https://www.jstatsoft.org/article/view/v081i10)
Briefly, given the distributional assumptions (error structure used) for a model, a simulated envelope that relies on half-normal plots (i.e., hnp) allows one to quantify the behaviour of the Pearson residuals extracted from the considered model, of which about 95% should be found within this envelope for an adequate model. By repeating such simulations 10 or 100 times, one can get a mean percentage of residuals found outside the envelope to categorize the fit. This makes it possible to test many error structures such as Gaussian, Poisson, NB2, quasi-Poisson and others, and it can also be adapted with "helper functions" for less-common cases such as the generalized Poisson, as well as for Generalized Additive Models.
For the family=gaussian case, one would also need to check for heteroscedasticity. The Breusch-Pagan test was developed for such a case and can be performed with bptest() of the lmtest package (but apparently just for linear models) or, better, with the check_heteroscedasticity() function of the performance package, which will perform such a test for a GLM. For Poisson and NB2, for instance, the variance is modelled as a function of the mean, so that does not apply. Using hnp is the best way, in my opinion, to make sure that, for instance, the variance is a quadratic function of the mean (NB2) rather than a linear function of it (NB1) when over-dispersed data are being modelled. If both NB2 and NB1 are adequate, then an information-theoretic approach will help to identify the one that best adjusts to the data with the AICc or another information criterion.
In the end, to be useful for statistical inferences, your model should be adequate and ideally explain a decent part of the observed variation in your response variable. This being said, I'd rather use an adequate model with a low R2 than an inadequate model with a high R2.
| null | CC BY-SA 4.0 | null | 2023-04-06T19:45:20.863 | 2023-04-06T19:45:20.863 | null | null | 338493 | null |
612183 | 1 | null | null | 2 | 11 | Analysts often use Rubin's rule (RR) to obtain a pooled estimate of a popular quantity from multiple (imputed) datasets. While popular statistical software (such as the [R survey package](https://r-survey.r-forge.r-project.org/survey/html/with.svyimputationList.html) or Stata's [mi](https://www.stata.com/meeting/canada18/slides/canada18_Wells.pdf)) will apply RR to any set of inputs, this may lead to invalid inferences when the underlying assumptions are violated. For example, RR assumes "congenial" sources of input. Usually this means that the missing data are modeled drawn from some multivariate or joint distribution that also features the analysis model.
In many cases, this assumption is unjustified. For example, multiple imputation with chained equations (MICE) is a noncongenial imputation model with possibly-flexible functions of predictors of missingness. Many machine learning methods also try to predict missingness with added noise, but their sampling distributions are unknown and they are non-congenial. Most notably, in cases where complex (e.g., stratified, clustered) sampling is employed, clustering errors with the Horvitz-Thompson estimator basically violates all of the underlying assumptions - post-hoc adjustments to covariance matrices do not lend themselves to congeniality. Of course, congeniality is often not enough!
How then might one combine multiple imputations of such data for valid inferences? Specifically, how should one pool estimates from multiply-imputed data with complex sampling designs to ensure consistency (at a minimum) and efficiency/unbiasedness (at best)?
I found Bartlett and Hughes (2020) propose some options but am not sure if there is a more analytical result to rely on, or whether the bootstrap they recommend is valid for complex samples.
# Relevant Readings
- Bartlett JW, Hughes RA. Bootstrap inference for multiple imputation under uncongeniality and misspecification. Statistical Methods in Medical Research. 2020;29(12):3533-3546.
- Meng, Xiao-Li. “Multiple-Imputation Inferences with Uncongenial Sources of Input.” Statistical Science, vol. 9, no. 4, 1994, pp. 538–58.
- Jared S. Murray. "Multiple Imputation: A Review of Practical and Theoretical Findings." Statist. Sci. 33 (2) 142 - 159, May 2018.
| How to pool estimates from multiply-imputed datasets with complex sampling designs? | CC BY-SA 4.0 | null | 2023-04-06T19:58:33.330 | 2023-04-06T19:58:33.330 | null | null | 120828 | [
"sampling",
"survey",
"data-imputation",
"multiple-imputation",
"mice"
] |
612184 | 1 | null | null | 2 | 28 | [This answer](https://stats.stackexchange.com/questions/502277/general-recipe-for-finding-unbiased-or-consistent-estimator) gives an answer for discrete distributions referencing Halmos (1946). However, I am looking for a more general result. For this, the references provided in the previous answer only dealt with creating non-negative estimators from already existing unbiased estimators.
[Bickel and Lehmann (1969)](https://projecteuclid.org/journals/annals-of-mathematical-statistics/volume-40/issue-5/Unbiased-Estimation-in-Convex-Families/10.1214/aoms/1177697370.full) give a characterization of unbiasedly estimable functionals of distributions that are absolutely continuous with respect to a measure $\mu$. However, the characterization is not constructive. Is there a constructive version of this result (Lemma 4.3, and thus Theorem 4.2)? After a lot of searching, nothing came up.
If there is not, how feasible is deriving a general construction?
| Construction of Estimators for Unbiasedly Estimable Functionals | CC BY-SA 4.0 | null | 2023-04-06T19:59:16.377 | 2023-04-07T02:32:57.117 | 2023-04-07T02:32:57.117 | 362671 | 187852 | [
"mathematical-statistics",
"estimation",
"unbiased-estimator"
] |
612185 | 1 | 612186 | null | 4 | 135 | I am to analyze a set of economic variables, taken from multiple countries, and recorded across time. This is certainly a panel dataset.
If I'm not mistaken, the pooled OLS, fixed and random effects models are linear
[](https://i.stack.imgur.com/acUJG.png)
while polynomial regression models seem to focus on a single variable
[](https://i.stack.imgur.com/xmoQF.png)
Is there such a thing as a polynomial panel regression model? I am having a very hard time finding any explanation online or in textbooks about this particular topic.
Thank you in advance for any help, all possible indications as to where to learn about this topic are welcome.
| Is there such a thing as polynomial multivariate panel regression? | CC BY-SA 4.0 | null | 2023-04-06T19:59:23.170 | 2023-04-06T20:13:15.283 | 2023-04-06T20:13:15.283 | 56940 | 385115 | [
"regression",
"mixed-model",
"multiple-regression",
"panel-data",
"polynomial"
] |
612186 | 2 | null | 612185 | 7 | null | Adding polynomial terms may help at capturing possible non-linear effects in the covariates, thus you can add such terms in a random-effects model just as you do in the classical linear regression. There may be tons of examples of applications using polynomial terms in a random-effects model, one that I can recall right now is Pinheiro and Bates (2000) Mixed-Effects Models in S and S-Plus, Springer, Sect. 1.5.
With random-effects models, you could even estimate more complicated nonlinear relationships between a response and a covariate; the details with an `R` implementation can be found in the aforementioned book, in Chapters 6, 7 and 8.
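As a self-contained illustration (not from the book), here is a numpy-only sketch of a panel model with a quadratic term, using a fixed-effects ("within") estimator rather than the random-effects models discussed above, chosen only because it can be written out by hand in a few lines; the simulated data and variable names are my own:

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_per = 20, 30
g = np.repeat(np.arange(n_groups), n_per)           # country index
x = rng.uniform(-2, 2, size=n_groups * n_per)
u = rng.normal(scale=0.5, size=n_groups)            # country-level intercepts
y = 1.0 + 2.0 * x - 0.7 * x**2 + u[g] + rng.normal(scale=0.3, size=len(x))

def within(v, g):
    """Demean v within each group (the 'within' transformation)."""
    means = np.bincount(g, weights=v) / np.bincount(g)
    return v - means[g]

# the polynomial term enters exactly like any other regressor
Z = np.column_stack([within(x, g), within(x**2, g)])
beta, *_ = np.linalg.lstsq(Z, within(y, g), rcond=None)
# beta recovers approximately [2.0, -0.7], the linear and quadratic coefficients
```

The same quadratic term would enter a random-effects fit in exactly the same way, e.g. `lme4::lmer(y ~ x + I(x^2) + (1 | g))` in R.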
| null | CC BY-SA 4.0 | null | 2023-04-06T20:12:07.440 | 2023-04-06T20:12:07.440 | null | null | 56940 | null |
612187 | 2 | null | 391225 | 3 | null | The OP is not asking about the model's outputs, they are asking about how to use non-binary labels as inputs (e.g. with label smoothing), which `multi:softmax` and `multi:softprob` do not support.
For that you have a couple options. You could specify a custom loss function: [https://xgboost.readthedocs.io/en/stable/tutorials/custom_metric_obj.html](https://xgboost.readthedocs.io/en/stable/tutorials/custom_metric_obj.html)
Or, you can continue using the given multi class objective, create multiple rows for each non-zero class label, and weight them accordingly. As suggested here: [https://stackoverflow.com/questions/66432364/training-xgboost-with-soft-labels](https://stackoverflow.com/questions/66432364/training-xgboost-with-soft-labels)
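As an illustration of the second option, a minimal numpy sketch of the row expansion (the function and array names are my own, not from the linked answer):

```python
import numpy as np

def expand_soft_labels(X, P, eps=1e-8):
    """Turn soft labels P (n, k) into hard-labelled rows with instance weights.

    Each sample is repeated once per class whose soft probability exceeds eps;
    the soft probability becomes that row's weight.
    """
    rows, labels, weights = [], [], []
    for i in range(X.shape[0]):
        for c in range(P.shape[1]):
            if P[i, c] > eps:
                rows.append(X[i])
                labels.append(c)
                weights.append(P[i, c])
    return np.array(rows), np.array(labels), np.array(weights)

# two samples, three classes, with smoothed labels
X = np.array([[0.1, 0.2], [0.3, 0.4]])
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.1, 0.8]])
Xr, yr, wr = expand_soft_labels(X, P)
# Xr has 2 + 3 = 5 rows; the weights for each original sample sum to 1
```

The expanded arrays could then be fed to XGBoost in the usual way, e.g. via `xgboost.DMatrix(Xr, label=yr, weight=wr)` with a `multi:softprob` objective.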
| null | CC BY-SA 4.0 | null | 2023-04-06T20:22:32.710 | 2023-04-06T20:22:32.710 | null | null | 385117 | null |
612188 | 2 | null | 612147 | 1 | null | It's not clear to me exactly what you are trying to do.
You need to define the gold standard so that you can compare some other measure to it.
You mention longitudinal data - does that mean that the gold standard is change in the activity or the level of activity? If it's change, that's unusual, but OK. If it's level, I'm not sure why having longitudinal data matters.
What's the effect of socioeconomic predictors? If they predict activity they probably shouldn't be part of the gold standard; that's asking for bias.
Typically, you're aiming for accuracy. You want a (relatively) simple measure of agreement that tells you how close your measurement is (likely to be) to the gold standard. I don't see how linear mixed models would help with that. (But I might have misunderstood your problem.)
| null | CC BY-SA 4.0 | null | 2023-04-06T20:39:10.363 | 2023-04-06T20:39:10.363 | null | null | 17072 | null |
612190 | 1 | null | null | 2 | 51 | I am analyzing count data (number of negative mental health symptoms). I ran a one-sample KS test in SPSS and the sig. is <0.001 for both a Poisson distribution and a normal distribution - indicating that the data follow neither. There are no zero values whatsoever (values range from 1-9). What other options do I have for a regression analysis?
| Issue with Poisson regression | CC BY-SA 4.0 | null | 2023-04-06T22:07:07.837 | 2023-04-06T23:09:03.493 | null | null | 385122 | [
"regression",
"poisson-distribution",
"negative-binomial-distribution"
] |
612191 | 1 | null | null | 0 | 9 | I am running a MANCOVA where each participant received 8 versions of a phishing email that combine 3 factors (A, B, C, AB, AC, BC, ABC, Control). I am measuring the level of risk (4 levels). My covariates are age, gender, and duration in the survey. I would like to know how to create a dummy variable that represents the factor manipulation in my model, but since all participants received all manipulations, I am having some challenges defining my IV.
| How to create dummy variable for within subjects? | CC BY-SA 4.0 | null | 2023-04-06T22:22:31.767 | 2023-04-06T22:22:31.767 | null | null | 382816 | [
"categorical-encoding",
"mancova"
] |
612192 | 2 | null | 610924 | 1 | null | Note: I do not intend to accept my own answer, just wanted to provide info from an interesting relevant reference I just came across, which may interest people viewing this question.
The paper
N. Razali and Y.B. Wah (2011), ["Power comparisons of Shapiro–Wilk, Kolmogorov–Smirnov, Lilliefors and Anderson–Darling tests"](https://www.researchgate.net/publication/267205556), Journal of Statistical Modeling and Analytics, 2 (1): 21–33
Shows
>
"Power comparisons of these four tests were obtained via Monte Carlo simulation of sample data generated from alternative distributions that follow symmetric and asymmetric distributions.
...
Results show that Shapiro-Wilk test is the most powerful normality test, followed by Anderson-Darling test, Lilliefors test and Kolmogorov-Smirnov test. However, the power of all four tests is still low for small sample size."
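As a rough illustration of the paper's ranking (the simulation settings here are my own, not the paper's): the power of Shapiro-Wilk against a skewed alternative can be compared with a naive Kolmogorov-Smirnov test that plugs in the sample mean and standard deviation (which makes the KS test conservative, much as in practice when parameters are unknown):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_sim, alpha = 30, 300, 0.05
rej_sw = rej_ks = 0
for _ in range(n_sim):
    x = rng.exponential(size=n)          # a skewed (non-normal) alternative
    if stats.shapiro(x).pvalue < alpha:
        rej_sw += 1
    # naive KS: compare to a normal with the sample's own mean and sd
    if stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).pvalue < alpha:
        rej_ks += 1
power_sw, power_ks = rej_sw / n_sim, rej_ks / n_sim
# Shapiro-Wilk rejects far more often than the naive KS test here
```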
| null | CC BY-SA 4.0 | null | 2023-04-06T22:35:24.530 | 2023-04-06T22:35:24.530 | null | null | 366449 | null |
612193 | 2 | null | 612190 | 3 | null | There are a number of problems indicated by your post that should be discussed before getting to other options; specifically, your approach to choosing between the normal and Poisson (or to ruling both out) has some serious problems.
- Clearly a count can't be normal; one is discrete and the other continuous, so a test of a discrete variable being normal makes no sense $-$ you already know $H_0$ is false for an absolute certainty, so that hypothesis is pointless to test.
- You're looking at the marginal distribution, when the distributional model in the GLM is about the conditional distribution, so the other test (of it being Poisson) is also useless.
- A goodness of fit test of a distributional assumption is pretty much unhelpful even if you could test the right thing, since simple parametric assumptions are essentially never true in practice and a large enough sample would always lead you to reject a perfectly suitable approximation.
So, for example, it can't be exactly Poisson either (there's a finite list of possible symptoms, for one thing). But just because they can't be the exact model doesn't mean you should rule them out.
The question to ask of the data would better be "Is the normal a reasonable approximation to use?" or "Is the Poisson a reasonable approximation to use?" which you don't answer with a goodness of fit test.
- A bigger issue is likely to be the way that they have different models for the spread, and (assuming you chose canonical link functions), also for the conditional mean. These two are typically much more important than the choice of which of the two conditional distributions you use.
- Potential endpoint issues: if the lower limit of 1 is because you only include people who would have at least one symptom in your sample (rather than just by happenstance, where a $0$ might have occurred but happened not to), then you'd be dealing with a truncated distribution.
If the value 9 was a hard upper bound (you only considered 9 symptoms) you should probably consider a model that reflects that bound. A binomial is probably unsuitable (since the distinct symptoms are not equally probable), but a quasi-binomial might not be a bad choice.
There are additional issues still (like the problem of using the same data to choose a model and then performing inference conditionally on that selected model), but I won't get into those details right now.
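To make points 1 and 3 concrete, a small simulation (the distribution and sample size are my own choices): a normality test applied to a large sample of counts rejects with near-certainty, even though a normal curve may approximate the histogram quite well:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.poisson(lam=5, size=20_000)      # discrete counts, roughly bell-shaped
stat, p = stats.normaltest(x)            # D'Agostino-Pearson omnibus test
# p is essentially zero: H0 (exact normality) is rejected decisively,
# despite the normal being a passable working approximation here
```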
| null | CC BY-SA 4.0 | null | 2023-04-06T22:37:40.010 | 2023-04-06T23:09:03.493 | 2023-04-06T23:09:03.493 | 805 | 805 | null |
612194 | 1 | null | null | 0 | 18 | I'm currently trying to have a look at the effect of renewable energy consumption and innovation on economic growth. One technique I was looking at for showing this was running an Oaxaca-Blinder decomposition between two different countries. I was wondering if anyone could help me with two questions I had regarding this.
- If comparing average growth rates between two countries in a certain time span, is it better to use GDP growth rates, or take the first difference of lngdp? I was thinking the first would be better because it would be in percentage terms.
- As long as I use sufficient control variables, is it feasible to run an Oaxaca-Blinder decomposition in macroeconomics? I am not able to find much existing literature on this topic, and so I was wondering if anyone had advice on this.
| Oaxaca-Blinder in Macroeconomics? | CC BY-SA 4.0 | null | 2023-04-06T22:39:50.917 | 2023-04-07T02:43:45.567 | 2023-04-07T02:43:45.567 | 362671 | 384488 | [
"macroeconomics",
"decomposition",
"misspecification",
"blinder-oaxaca"
] |
612195 | 2 | null | 608951 | 2 | null | In brief, no, you should not drop $\mu_0$ from the model (even if you've mean-centered). The main reason that comes to mind is that the estimation process is still estimating a grand mean along with all the other parameters. And if you drop the grand mean from the model, you free up one more degree of freedom that probably should not be free.
If you mean center your overall sample, then you've introduced an additional source of sampling error into the estimated group means (as it is unlikely your sample grand mean is the same as the population grand mean). And this is why it is best to leave the degrees of freedom for the model at their correct values.
Another way to think about this is that your statement about $\mu_0=0$ is incorrect. You don't know what the population grand mean is; you only know that it will estimate to zero: $\hat{\mu}_0=0$.
As for your second query, I personally would use a uniform prior or a normal distribution (relying on the prior you have for the variance). However, as with any prior, this really depends on the context and the justification for your beliefs about that parameter.
| null | CC BY-SA 4.0 | null | 2023-04-06T22:40:23.310 | 2023-04-06T22:40:23.310 | null | null | 199063 | null |
612196 | 2 | null | 612172 | 1 | null | I assume you are asking about the Wilcoxon signed-rank test.
So consider a single sample location problem where $X_1,X_2,\ldots,X_n$ are i.i.d with distribution $F(x-\theta)$, under the assumption that $F$ is continuous, $\theta$ is the unique population median and $F(\cdot)$ is symmetric about $0$.
In this setup, the sample pseudomedian is the [Hodges-Lehmann estimator](https://en.wikipedia.org/wiki/Hodges%E2%80%93Lehmann_estimator) of $\theta$. It is a consistent and median-unbiased estimator. Note that the [population pseudomedian](https://en.wikipedia.org/wiki/Pseudomedian) coincides with the population median because $F$ is symmetric.
The Wilcoxon signed-rank statistic for testing $H_0:\theta=0$ is
$$T_n=\sum_{i=1}^n I(X_i>0)R_i^+\,,$$
where $R_i^+$ is the rank of $|X_i|$ among $\{|X_1|,|X_2|,\ldots,|X_n|\}$.
This can be [rewritten](https://stats.stackexchange.com/q/215889/119261) in the form
$$T_n=\sum_{1\le i\le j\le n}I\left(\frac{X_i+X_j}{2}>0\right)$$
In other words, $$T_n=\sum_{k=1}^{m}I(Z_k>0)\,,$$
where $Z_1=X_1,\,Z_2=\frac{X_1+X_2}{2},\,Z_3=\frac{X_1+X_3}{2},\,\ldots,Z_m=X_n$ with $m=n+\binom{n}{2}=\frac{n(n+1)}{2}$.
Suppose the alternative hypothesis is $H_1:\theta>0$, so that a right-tailed test based on $T_n$ is appropriate.
Now, under $H_0$, distribution of $T_n=T_n(X_1,\ldots,X_n)$ is symmetric about $E_{H_0}(T_n)=\frac{n+\binom{n}{2}}{2}=\frac{n(n+1)}{4}$.
So, under $\theta$, distribution of $T_n(X_1-\theta,\ldots,X_n-\theta)$ is also symmetric about $\frac{n(n+1)}{4}$.
While constructing the Hodges-Lehmann estimator of location, we estimate $\theta$ by $\hat\theta$ such that
$$T_n(X_1-\hat\theta,\ldots,X_n-\hat\theta)\approx \frac{n(n+1)}{4}$$
It can be shown that this leads to the sample pseudomedian
$$\hat\theta=\operatorname*{med}_{1\le i\le j\le n}\left\{\frac{X_i+X_j}{2}\right\}$$
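The equivalence of the two forms of $T_n$, and the resulting estimator, are easy to check numerically; a small sketch assuming no ties (as guaranteed by the continuity of $F$):

```python
import numpy as np
from itertools import combinations_with_replacement

def walsh_averages(x):
    """All pairwise means (X_i + X_j)/2 over 1 <= i <= j <= n."""
    return np.array([(a + b) / 2.0
                     for a, b in combinations_with_replacement(x, 2)])

def signed_rank_T(x):
    """Wilcoxon signed-rank statistic: sum of ranks of |X_i| over X_i > 0
    (assumes no ties, as under a continuous F)."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(np.abs(x))
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    return ranks[x > 0].sum()

x = np.array([1.0, -2.0, 3.0])
T_direct = signed_rank_T(x)              # rank form: 1 + 3 = 4
T_walsh = np.sum(walsh_averages(x) > 0)  # Walsh-average form, also 4
hl = np.median(walsh_averages(x))        # sample pseudomedian (Hodges-Lehmann)
```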
| null | CC BY-SA 4.0 | null | 2023-04-06T22:44:26.530 | 2023-04-06T22:44:26.530 | null | null | 119261 | null |
612197 | 1 | 612262 | null | 2 | 50 | I have some doubts about how to recognize if there are extreme weights after balancing my population with inverse probability treatment weighting.
For instance, let's look at these results [code at the end of the post] - I know that age is not perfectly balanced but it doesn't matter as it is just an example:
[](https://i.stack.imgur.com/laU34.png)
M.0.Adj = Weighted mean-weighted rate for the non-treated population / SD.0.Adj = Standard Deviation in non-treated / M.1.Adj = Weighted mean-weighted rate for the treated population / SD.1.Adj = Standard Deviation in treated / Diff.adj = Standardized Mean Difference / V.Ratio.Adj = The ratio of the variances of the two groups after adjusting
Moreover, these are a density plot with the propensity scores and a histogram with weights I made:
[](https://i.stack.imgur.com/qSIvN.png)
[](https://i.stack.imgur.com/FoxG9.png)
This is an example of the balance achieved (I don't know if it is useful in this context):
[](https://i.stack.imgur.com/k777q.png)
What do I have to look at in order to know if there are extreme weights? Do I have to look at the plots? How can I know if I balanced correctly and there are no problems caused by extreme weights, so that I don't have to take further action to correct them (e.g. trimming)? I don't know how to "recognize" the extreme weights.
For those who prefer to have the code:
```
library(cobalt)
library(WeightIt)
library(dplyr)
data("lalonde", package = "cobalt")
W.out <- weightit(treat ~ age + educ + race + married + nodegree + re74 + re75,
data = lalonde, estimand = "ATT", method = "ps")
lalonde <- lalonde %>% mutate(weights = W.out$weights)
lalonde <- lalonde %>% mutate(ps = W.out$ps)
summary(W.out)
bal.tab(W.out, stats = c("m", "v"), thresholds = c(m = .10), disp=c("means", "sds"))
library(ggplot2)
ggplot(lalonde, aes(x = ps, fill = as.factor(treat))) +
geom_density(alpha = 0.5, colour = "grey50") +
geom_rug() +
scale_x_log10(breaks = c(1, 5, 10, 20, 40)) +
ggtitle("Distribution of propensity scores")
library(weights)
wtd.hist(W.out$weights)
```
| Recognize extreme weights in inverse probability treatment weighting | CC BY-SA 4.0 | null | 2023-04-06T23:04:25.193 | 2023-04-07T14:50:42.027 | 2023-04-06T23:20:44.730 | 384938 | 384938 | [
"r",
"propensity-scores",
"weights",
"weighted-mean",
"weighted-data"
] |
612198 | 1 | null | null | 1 | 24 | I have a set of 5000 samples with 24 features, and I'm running a grid search using scikit-learn to find optimal values for C, epsilon and gamma in SVR. In total I'm testing 90 different hyperparameter configurations with 5-fold cross validation. I also included verbose = 10 to get feedback on what's happening. For the first 50 or so hyperparameter configurations the total training time was a couple of seconds and the scores were pretty bad. For the next few combinations training time escalated to 10 minutes or so and the scores got better. Now, evaluating model 64/90, total training time is taking around 200 minutes. I've already set n_jobs = -1 and pre_dispatch = '1*n_jobs', and checked in Task Manager that I didn't have any memory problems. I believe the problem comes when training with higher values of C (C = 100).
A similar thing happened when evaluating a Gaussian process regressor on the same data set. This time, considering only two different kernels in a grid search, the worst-performing kernel took under a minute to train and the better kernel took around 2 hours.
What is producing this behaviour, and can I do something about it? Should I just reduce the number of hyperparameter configurations? Also, are these times reasonable for a 5000-sample data set?
| Increased training time during gridsearch for Support Vector Regression | CC BY-SA 4.0 | null | 2023-04-06T23:10:58.483 | 2023-04-06T23:10:58.483 | null | null | 385124 | [
"regression",
"machine-learning",
"cross-validation",
"svm",
"scikit-learn"
] |
612201 | 1 | null | null | 0 | 16 | Given a Model I ANOVA where both factors are fixed and cell sizes are equal, how do I mathematically formulate the $H_0$ for testing the interaction effect? Let's say I have 3 levels of factor A (rows) and 3 levels of factor B (columns).
I denote my interaction term in my model as $\gamma$
This is my take on my question:
$\gamma_{11}=\gamma_{12}=\gamma_{13}=\gamma_{21}=\gamma_{22}=\gamma_{23}=\gamma_{31}=\gamma_{32}=\gamma_{33}$
Does this seem right?
| Formulate mathematically H0 testing for interaction in type 1 ANOVA | CC BY-SA 4.0 | null | 2023-04-07T00:07:38.887 | 2023-04-07T02:36:05.857 | 2023-04-07T02:36:05.857 | 362671 | 368624 | [
"hypothesis-testing",
"anova"
] |
612202 | 1 | null | null | 0 | 46 | I am stuck in the diagnostics of my model, and I am looking for advice on what to do.
My data frame goes like this
```
$ ID : Factor w/ 15 levels "Buzinza","Kabukojo",..: 1 1 1 1 1 1 1 1 1 1 ...
$ Period : Factor w/ 3 levels "After","Before",..: 1 3 1 1 2 1 1 2 3 1 ...
$ Class : Factor w/ 3 levels "Adult female",..: 1 1 1 1 1 1 1 1 1 1 ...
$ no.trans: num 4 5 6 4 15 4 17 22 2 23 ...
```
Nbinom had a better AIC than Poisson (which was overdispersed)
```
model_nb <- glmmTMB(no.trans ~ Period * Class + (1 | ID),
family = nbinom2(),
data = periods)
```
Normality and overdispersion were ok.
Vif was not sensible so I calculated the variance of predictor variables and the matrix seemed ok.
But ` plot(fitted(model_nb), resid)` showed vertical clustered lines.
[](https://i.stack.imgur.com/zJFfc.png)
DHARMa flagged 12 outliers in 3465 observations for the non-transformed data and 6 outliers for the square-root-transformed data.
I tried also lme4 package.
I have only those variables so I cannot complicate the model further. And it is already quite simple.
Not sure what to do next; any advice would be very welcome!
Cheers,
| Clustered vertical lines in the plot(model, residuals) | CC BY-SA 4.0 | null | 2023-04-07T00:34:40.350 | 2023-04-10T10:20:35.110 | null | null | 300055 | [
"data-visualization",
"residuals",
"model",
"glmmtmb"
] |
612204 | 1 | null | null | 0 | 25 | Let's say we have two Heckman selection models. Can we correlate the residuals from their outcome parts (since they might have different lengths)? If yes, how?
| Can we correlate the residuals from their outcome parts in a Heckman model? | CC BY-SA 4.0 | null | 2023-04-07T01:50:18.277 | 2023-04-07T15:33:26.310 | 2023-04-07T15:33:26.310 | 225266 | 225266 | [
"heckman"
] |
612205 | 1 | null | null | 1 | 34 | I am estimating the population mean of the 2023 value of cars from a stratified sample. The value of the cars is right skewed on visual inspection, and some basic diagnostics indicate normality assumptions are violated. I need to calculate the 95%CI of the calculated population mean. My first thought was to use accelerated bootstrap, however, after a bit of research, I can’t seem to find a package in R that calculates this for stratified samples. Before I go trying to code this from scratch, is there an alternative non-parametric approach in R to calculating confidence intervals in skewed stratified samples?
| Non-parametric bootstrap for 95%CI calculation in stratified sample in R | CC BY-SA 4.0 | null | 2023-04-07T02:01:59.993 | 2023-04-07T02:41:18.190 | null | null | 385129 | [
"r",
"confidence-interval",
"bootstrap",
"skewness",
"weighted-sampling"
] |
612206 | 1 | null | null | 1 | 25 | I have multiple discrete time Markov processes. They each consist of the same 12 categorical states. I want to model how the probability of each of these states varies over time across each process. To do this, I fit a multinomial logistic regression model with this formula:
State t+1 ~ State t + Time
My data is organized like so:
|Subject |Time |State t |State t+1 |
|-------|----|-------|---------|
There are multiple subjects in this dataset. Each subject has its own Markov process that is indexed by time at regular intervals from t=2 to t=600. In other words, each subject has its own series of state transitions, and each state transition for each subject has a corresponding time point that ranges from t=2 to t=600.
The multinomial logistic regression model seemed to have worked. It appears as if its fitted values can be used to represent how the probability of each state varies over time. However, I'm unsure if I fit this model correctly. For instance, I defined time as an integer rather than as, say, an ordered categorical variable. I'm not sure if this was the correct choice.
But, overall, I'd like to know whether I used this model appropriately.
| How to model time-varying probability in Markov process with multinomial logistic regression? | CC BY-SA 4.0 | null | 2023-04-07T02:24:01.433 | 2023-04-07T03:00:02.260 | 2023-04-07T03:00:02.260 | 331670 | 331670 | [
"r",
"markov-process",
"multinomial-logit"
] |
612207 | 2 | null | 612205 | 1 | null | You don’t need normality or symmetry to estimate a population mean’s sampling variance, though those things are helpful.
There are several bootstrap methods that have been developed for complex samples (i.e., samples selected with stratified sampling, cluster sampling, unequal selection probabilities, etc.)
The R packages ‘svrep’ and ‘survey’ implement several of them. The ‘svrep’ documentation describes how to implement a number of bootstrap methods for complex surveys and explains how to pick one that’s appropriate for your sample.
[https://cran.r-project.org/web/packages/svrep/vignettes/bootstrap-replicates.html](https://cran.r-project.org/web/packages/svrep/vignettes/bootstrap-replicates.html)
The following paper offers an excellent overview of the many bootstrap methods developed for complex samples.
[https://projecteuclid.org/journals/statistics-surveys/volume-10/issue-none/A-survey-of-bootstrap-methods-in-finite-population-sampling/10.1214/16-SS113.full](https://projecteuclid.org/journals/statistics-surveys/volume-10/issue-none/A-survey-of-bootstrap-methods-in-finite-population-sampling/10.1214/16-SS113.full)
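If a hand-rolled approach is acceptable for exploration, the core idea of a within-stratum resampling bootstrap can be sketched as follows (numpy, with made-up data and stratum population shares; this naive version ignores finite-population corrections and unequal selection probabilities, which the packages above handle properly):

```python
import numpy as np

rng = np.random.default_rng(0)
# made-up stratified sample of car values: {stratum: (values, population share)}
strata = {"small": (rng.lognormal(9.0, 0.8, 200), 0.6),
          "large": (rng.lognormal(10.5, 0.8, 100), 0.4)}

def stratified_mean(samples):
    """Population mean estimate: share-weighted sum of stratum means."""
    return sum(share * s.mean() for s, share in samples.values())

est = stratified_mean(strata)

# resample independently *within* each stratum, preserving stratum sizes
boot = np.empty(2000)
for b in range(2000):
    resampled = {k: (rng.choice(s, size=len(s), replace=True), share)
                 for k, (s, share) in strata.items()}
    boot[b] = stratified_mean(resampled)

lo, hi = np.percentile(boot, [2.5, 97.5])   # percentile 95% CI
```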
| null | CC BY-SA 4.0 | null | 2023-04-07T02:26:39.083 | 2023-04-07T02:41:18.190 | 2023-04-07T02:41:18.190 | 94994 | 94994 | null |