Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
609616 | 2 | null | 609613 | 1 | null | Both are correct, if there's just a single coefficient involved.
"[D]ividing the squared coefficient estimate by its estimated variance" gives a statistic evaluated against a chi-square distribution with 1 degree of freedom. That's just the square of the z-statistic in your display, which is evaluated against a standard normal distribution. $(-3.176)^2=10.09$
As a chi-square distribution with 1 degree of freedom is the distribution of a squared standard normal, inference is identical regardless of your definition.
The overall Wald test in a model with multiple coefficients is a joint test of the hypothesis that all coefficients equal 0. With more than 1 coefficient, that can't be done with a z-test; the more general chi-square form is used, with an appropriate number of degrees of freedom.
A Wald test can also be used to evaluate subsets of coefficients, for example all those associated with a multi-level categorical predictor or for a predictor along with all of its interactions. Search this site for "chunk test" for more details.
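
To see the equivalence numerically, here is a quick sketch in Python using the z value from the question:

```python
import numpy as np
from scipy.stats import norm, chi2

z = -3.176                      # z-statistic for the single coefficient
w = z**2                        # Wald chi-square statistic, 1 degree of freedom

p_z = 2 * norm.sf(abs(z))       # two-sided p-value from the standard normal
p_chi2 = chi2.sf(w, df=1)       # p-value from the chi-square with 1 df

print(w)            # ~10.09
print(p_z, p_chi2)  # the two p-values agree
```

Because a chi-square variable with 1 df is a squared standard normal, the two p-values match to numerical precision.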
| null | CC BY-SA 4.0 | null | 2023-03-15T21:29:07.570 | 2023-03-15T21:29:07.570 | null | null | 28500 | null |
609617 | 1 | null | null | 0 | 59 | I have implemented a beta regression and am a little confused on how I should interpret the coefficients of my model. For context, both my independent variables and dependent variable are expressed in percentage form, ranging from [0, 1]. The only exception is one independent variable which takes the binary value of 0 or 1. Also, I used a logit link. Does anybody mind sharing how I could interpret the coefficients here in this beta regression? I've never worked with a dataset like this before; any help would be appreciated!
| Interpreting coefficients of beta regression | CC BY-SA 4.0 | null | 2023-03-15T20:19:56.747 | 2023-03-17T14:52:23.783 | 2023-03-17T14:52:23.783 | 11887 | 383317 | [
"r",
"regression",
"beta-regression"
] |
609618 | 2 | null | 609166 | 0 | null | After studying Tim's answer, I realised that I could understand this best via code.
The following code outputs the likelihood of a particular mean and variance combination, for a normal distribution and some generated data.
```
import numpy as np
from scipy.stats import norm

RNG = np.random.default_rng(seed=0)
X = RNG.choice(20, 30)

def likelihood(pMean, pStdDev):
    # construct the probability density function for the model this particular
    # likelihood function will use, using the supplied parameters
    gaussianPDF = norm.pdf(X, loc=pMean, scale=pStdDev)
    # the likelihood is the product of the PDF evaluated at each data point
    productOfPDFs = np.prod(gaussianPDF)
    return productOfPDFs

mean = np.mean(X)
stdDev = np.std(X)
lh = likelihood(mean, stdDev)
print('The likelihood of mean', mean, 'and stdDev', stdDev, 'for the data is', lh)
```
When I ran it I got
[](https://i.stack.imgur.com/Z8QVA.png)
This is for a normal distribution.
I see that the likelihood is the product of probability density values, one for each data point.
The particular probability density function is based on a distribution model, evaluated at data assumed to be distributed according to that model.
To get the likelihood of some parameters we evaluate the likelihood function for the particular distribution and data, using the parameters we are interested in.
Thus,
>
The likelihood of the parameters of a model is given by the joint probability density of the data, as modelled using those parameters.
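
One practical caveat to the code above (my addition): a product of many small densities underflows quickly, so in practice one usually sums log-densities instead. A minimal sketch:

```python
import numpy as np
from scipy.stats import norm

RNG = np.random.default_rng(seed=0)
X = RNG.choice(20, 30)

def log_likelihood(pMean, pStdDev):
    # sum of log-densities; equals the log of the product of densities
    return np.sum(norm.logpdf(X, loc=pMean, scale=pStdDev))

ll = log_likelihood(np.mean(X), np.std(X))
print(ll)
```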
| null | CC BY-SA 4.0 | null | 2023-03-15T21:39:31.947 | 2023-03-16T16:56:20.310 | 2023-03-16T16:56:20.310 | 284610 | 284610 | null |
609619 | 2 | null | 609524 | 0 | null | You have to be careful when you say "`anova()` function," as even in R that can have different meanings depending on the type of model and package.
For your evaluation of the single `B:C` interaction coefficient, analysis of deviance would best be a nested comparison of the first model you show, including that interaction term, against the second one with the same coefficients but with the `B:C` interaction omitted. You then evaluate the p-value against your pre-specified $\alpha$ cutoff.
According to the help page for `anova.glm()`, if you instead specify a single model then you get a sequential term-by-term analysis. That might lead to different apparent "significance" results if you [change the order of predictors](https://stats.stackexchange.com/q/13241/28500) in the model.
The second part of the question is harder. You can compare different models with respect to goodness of fit, adjusted for the number of predictors in the model. In your example, the nested `anova()` just described can tell you whether adding the `B:C` interaction improves the last model in a way that justifies including that interaction. The p-value serves that purpose "at significance level $\alpha$."
If models being compared don't involve nested sets of predictors you can't use `anova()` for comparisons. Some suggest using measures like the [Akaike Information Criterion](https://stats.stackexchange.com/q/116935/28500) in that case, but that's not universally accepted and there isn't a "significance level" for that.
A general goodness-of-fit test is to evaluate how well the modeling process works on multiple bootstrapped samples of the data. Again, though, there's no "significance level" for that. You have to gauge, based on your understanding of the subject matter, whether the model is good enough for your purposes.
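
To illustrate the nested-comparison idea, here is a sketch in Python using ordinary linear models on simulated data (rather than `anova.glm()`'s analysis of deviance; the nested-F principle is the same):

```python
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(1)
n = 200
B = rng.normal(size=n)
C = rng.normal(size=n)
y = 1.0 + 0.5 * B + 0.5 * C + 0.3 * B * C + rng.normal(size=n)

def rss(design):
    # residual sum of squares from a least-squares fit
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return np.sum((y - design @ beta) ** 2)

ones = np.ones(n)
full = np.column_stack([ones, B, C, B * C])   # includes the B:C interaction
reduced = np.column_stack([ones, B, C])       # same terms, interaction omitted

# F-test comparing nested models: does adding B:C reduce the RSS enough?
df1 = full.shape[1] - reduced.shape[1]
df2 = n - full.shape[1]
F = ((rss(reduced) - rss(full)) / df1) / (rss(full) / df2)
p = f.sf(F, df1, df2)
print(F, p)
```

The p-value is then compared against the pre-specified $\alpha$, exactly as with the nested `anova()` call.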
| null | CC BY-SA 4.0 | null | 2023-03-15T22:15:56.853 | 2023-03-15T22:15:56.853 | null | null | 28500 | null |
609620 | 1 | null | null | 2 | 15 | I was wondering if someone can shed some light on which cross-validation method should I, in general, use more often: k-fold cross-validation or repeated random sub-sampling validation.
From [Wikipedia](https://en.wikipedia.org/wiki/Cross-validation_(statistics)), I understood that k-fold cross-validation partitions the data, that is, every data point appears in the validation dataset exactly once (and in the training dataset for the other k-1 folds). Repeated random sub-sampling validation, on the other hand, allows some data points to appear in the training and/or validation dataset multiple times across repeats, and some data points not to appear at all. Basically, in repeated random sub-sampling validation there is no partitioning; randomness picks out the samples.
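
The bookkeeping difference between the two schemes can be sketched with simulated indices (Python, my illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 3
idx = rng.permutation(n)

# k-fold: the k validation folds partition the data
kfold_counts = np.zeros(n, dtype=int)
for fold in np.array_split(idx, k):
    kfold_counts[fold] += 1
print(kfold_counts)  # every point is validated exactly once

# repeated random sub-sampling: a fresh random split on every repeat
rss_counts = np.zeros(n, dtype=int)
for _ in range(k):
    val = rng.choice(n, size=n // k, replace=False)
    rss_counts[val] += 1
print(rss_counts)  # counts vary: a point may be validated zero or several times
```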
If someone can explain some common situations where I'd prefer one over the other and why, I'd appreciate it a lot.
| In practice, should I use k-fold cross-validation or repeated random sub-sampling validation as my default choice of evaluating the model performance? | CC BY-SA 4.0 | null | 2023-03-15T22:18:55.837 | 2023-03-15T22:18:55.837 | null | null | 298358 | [
"machine-learning",
"cross-validation",
"monte-carlo",
"model-evaluation",
"subsampling"
] |
609621 | 2 | null | 109832 | 0 | null | Especially in complicated settings, the exact definition of $R^2$ is not clear. The definition in the question is close to the definition I like and that `sklearn` uses (apart from a [slight disagreement](https://stats.stackexchange.com/questions/590199/how-to-motivate-the-definition-of-r2-in-sklearn-metrics-r2-score) I have with the package), which I give below.
$$
R^2=1-\left(\dfrac{
\overset{N}{\underset{j=1}{\sum}}\left(
t_j-o_j
\right)^2
}{
\overset{N}{\underset{j=1}{\sum}}\left(
\bar o - o_j
\right)^2
}\right)
$$
Here, I am taking the $t_j$ to be the predictions made by the trained model, the $o_j$ to be the observed values, and $\bar o$ to be the mean of all observed outcomes.
The key point of getting $R^2<0$, however, is that one of the following must be true.
- You are using a definition based on squaring the Pearson correlation between the predicted and observed values, and a mistake in your code has caused a real number to square to a negative number.
- You are using some formula like the one I gave or the one in the original question, and the numerator exceeds the denominator. Since this is the only one of the two that is a statistics issue, it is the one I will address.
(I suppose you could be doing something with complex numbers, but let's set aside that possibility, as it is not routine and probably not what you're doing.)
Since we are dealing with real numbers, the only way for the formula I gave or that is in the original question to give a value less than zero is if the fraction numerator exceeds the denominator. In both of our equations, the numerator is easy to identify as the sum of the squared residuals.
The denominator of the equation I gave is also a sum of squared residuals, just not the residuals of our model. It is the sum of squares of a model that always predicts the mean value, regardless of the feature values. This can be regarded as a baseline "must beat" model. That is, if your model is not better at predicting the conditional mean than a model that always predicts the pooled mean (which is, in many regards, a sensible guess for the conditional mean if you are naïve about how the features relate to the outcome), your model is not so valuable. Dividing the model sum of squared residuals by this naïve sum of squared residuals compares the two, and if the model sum of squared residuals is higher, the fraction will exceed $1$, leading to the entire equation being below zero.
That is, when the $R^2$ calculation I wrote above is less than zero, you have a model that is outperformed by always guessing the same value, so your model is probably pretty bad. While we like to make good models, it is valuable to learn that a model is bad.
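
A tiny numerical sketch (Python) of a model that loses to the mean benchmark:

```python
import numpy as np

o = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # observed values
t = np.array([5.0, 4.0, 3.0, 2.0, 1.0])   # predictions from a terrible model

ss_res = np.sum((t - o) ** 2)              # model's sum of squared residuals
ss_mean = np.sum((o.mean() - o) ** 2)      # residuals of always-predict-the-mean

r2 = 1 - ss_res / ss_mean
print(r2)  # -3.0: worse than guessing the mean every time
```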
For the equation given in the original question, the fraction numerator is the same sum of squared residuals. The denominator, I claim, is also the same, at least if the data are standardized to have a mean of zero and a variance of one, as is quite common. Then $\overset{N}{\underset{j=1}{\sum}} o_j^2 = \overset{N}{\underset{j=1}{\sum}}\left(0 - o_j\right)^2 = \overset{N}{\underset{j=1}{\sum}}\left(\bar o - o_j\right)^2$, since $\bar o=0$ for the standardized values of $o$, showing the two equations to be equal. With the equations being equal, the same argument applies about what it means to get a negative value.
| null | CC BY-SA 4.0 | null | 2023-03-15T22:33:36.610 | 2023-03-15T22:33:36.610 | null | null | 247274 | null |
609622 | 2 | null | 609160 | 0 | null | I suggest that you use one model with all the predictors. Here are two good reasons:
- If you have a model with one predictor, you only explain the effects of that one predictor, and any unexplained effects, including the effects due to other predictors, go into the residual variation. If all the predictors that have effects are in one model, then the residual errors will be much smaller, the SEs of your estimates will be smaller, and your statistical tests will be more powerful.
- Some of those predictors may interact with one another. If so, your one-predictor models will produce misleading estimates. You should try models with interactions and simplify from there if some interactions have high p values in an anova test (say, using car::Anova()).
| null | CC BY-SA 4.0 | null | 2023-03-15T22:38:52.727 | 2023-03-15T22:38:52.727 | null | null | 52554 | null |
609623 | 1 | null | null | 2 | 43 | I am in the context of an observational study, but let's take as an example a randomized control trial studying the effect of treatment $T$ on outcome $Y$.
A difference-in-means test indicated no change in $E[Y]$. However, you notice that $Var[Y \mid T=1]$ is much higher than $Var[Y \mid T=0]$. a) How do you test for $Var[Y \mid T=1] > Var[Y \mid T=0]$? b) Is there a way to quantify this difference (as in "$T$ caused an increase in $Y$'s variance by x amount")?
| Effect of treatment on outcome variance | CC BY-SA 4.0 | null | 2023-03-15T22:57:06.763 | 2023-03-15T23:31:55.183 | null | null | 350397 | [
"variance",
"causality",
"treatment-effect"
] |
609624 | 1 | null | null | 0 | 14 | I am setting up a questionnaire for a lab experiment to measure support for four competing policies. Very importantly, I want to know the public's order of preference. I am torn on how best to operationalize my DV (policy preferences) since this is going to have repercussions on my statistical power needs.
Setup of the lab experiment: control; treatment condition 1; treatment condition 2. I plan to use ANOVA for my statistical analysis.
Four individual questions: this is the most straightforward option. I ask how much they like each policy from least to most. I can then calculate the popularity of each policy and calculate aggregate preferences. This sounds the most straightforward process but I am afraid that respondents are going to 'straightline' or lose sight of their ordered preferences.
Ranked choice: just one question where respondents rank the four policy options from most to least favorite. Straightline and order preference concerns begone, but statistical analysis more convoluted.
I have two questions.
- Statistical analysis for ranked choice DV: can ANOVA compute differences in ranked choices between the three groups? If not ANOVA, then what?
- Statistical power: which of the two DV operationalizations would demand the highest statistical power? Remember, this is a lab experiment so there is a logistical premium on keeping recruitment levels low.
Thank you,
| Best DV operationalization for statistical power | CC BY-SA 4.0 | null | 2023-03-15T23:02:39.737 | 2023-03-15T23:02:39.737 | null | null | 318236 | [
"statistical-significance",
"anova",
"experiment-design",
"statistical-power",
"dependent-variable"
] |
609625 | 2 | null | 130661 | 0 | null | Transformation of variables is a good option when that linearizes the problem. That procedure can be used to increase the correlations, reduce the residuals, and decrease the number of parameters needed to produce a good fit to the data.
For example, $\ln Y=a_0+a_1\ln X_1+a_2\ln X_2\to Y=e^{a_0}X_1^{a_1}X_2^{a_2}$ might be vastly superior to $Y=a_0+a_1X_1+a_2X_2$. A hint as to what to do is often provided by examination of the data or its residuals. For example, one might see fan-shaped heteroscedasticity in the residuals, as in this [paper](https://www.researchgate.net/publication/6707938_An_improved_method_for_determining_renal_sufficiency_using_volume_of_distribution_and_weight_from_bolus_Tc-99m-DTPA_two_blood_sample_paediatric_data)
[](https://i.stack.imgur.com/Hc3at.png)
That particular type of log-log transform may be of interest. More generally, there are lots of transforms to consider: taking square roots, exponentiation, taking reciprocals, and so forth. Another indication of how one should treat the data comes from considering the physics of the problem. For example, if a regression problem cannot take negative $Y$-values, the relationship may not be linear, as a line eventually takes negative values.
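
A minimal sketch (Python, simulated noise-free data for clarity) of recovering power-law parameters via the log-log transform:

```python
import numpy as np

rng = np.random.default_rng(0)
X1 = rng.uniform(1, 10, size=200)
Y = 2.5 * X1 ** 1.7                 # Y = e^{a0} * X1^{a1} with e^{a0}=2.5, a1=1.7

# fit ln Y = a0 + a1 ln X1 by ordinary least squares
a1, a0 = np.polyfit(np.log(X1), np.log(Y), deg=1)
print(np.exp(a0), a1)  # recovers ~2.5 and ~1.7
```

With noisy data the recovered parameters would only be approximate, but the fit remains linear in the transformed variables.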
| null | CC BY-SA 4.0 | null | 2023-03-15T23:07:39.830 | 2023-04-25T10:07:01.947 | 2023-04-25T10:07:01.947 | 53580 | 99274 | null |
609626 | 2 | null | 609623 | 1 | null | While we often use F-testing for testing differences in means (e.g., ANOVA), the F-test is actually a test of variances that methods like ANOVA use cleverly to investigate differences in means.
Therefore, the first thought might be to use an F-test of the two variances. This can be implemented in R software, for instance, using `var.test`.
Unfortunately, the F-test lacks robustness to deviations from normality. The JBStatistics channel on YouTube has a [video](https://www.youtube.com/watch?v=4Hr56qUkohM&list=PLvxOuBpazmsMNIgaarUNmvs70sAjiPeVM&index=10) showing this, and it might be fun to come up with your own simulations to show this.
A more robust alternative is the Ansari-Bradley test, implemented in R through `ansari.test`. Technically, this is not quite a variance test, but it tends to do a good job and could be worth a read.
If you want to get into a more general setting where you find the variance conditional on multiple covariates, [this](https://stats.stackexchange.com/q/585033/247274) question of mine is asking the same and has yet to get the kind of resolution I had hoped to get.
For quantifying the effect size, I find it natural to talk about the ratio of the two variances, rather than the difference. It makes sense to me to say that one distribution has twice or half the variance of another, and this ratio is part of what is calculated in the F-test.
Finally, establishing causality is likely to encounter the same kind of bugaboos that occur when it comes to establishing causality in a regression that estimates conditional means. This is good, because people who do causal inference already have tools to do so (e.g., instrumental variables), yet the estimation is different (estimating a conditional variance instead of a conditional mean), so the theoretical motivation in the causal inference may be more difficult, and the techniques may not be as well established with easy availability in software (e.g., the analogue to instrumental variables when conditional variances are being estimated).
| null | CC BY-SA 4.0 | null | 2023-03-15T23:10:42.293 | 2023-03-15T23:10:42.293 | null | null | 247274 | null |
609628 | 2 | null | 609623 | 1 | null | If the data for each group are independent and identically distributed according to a normal distribution, you could conduct a two-sample F-test to determine whether $\mathbb{V}(Y\mid T=1) > \mathbb{V}(Y\mid T=0)$. The F-statistic is $F=\frac{s^2_{1}}{s^2_{0}}$, where $s^2_{1}$ and $s^2_{0}$ are the sample variances for groups $T=1$ and $T=0$. Then compare it to the critical value $F_{\alpha;\,n_1-1,n_0-1}$. You can reject the null hypothesis and conclude $\mathbb{V}(Y\mid T=1) > \mathbb{V}(Y\mid T=0)$ if $F>F_{\alpha;\,n_1-1, n_0-1}$.
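
A sketch of this F-test in Python (one-sided, on simulated data where the treated group genuinely has larger variance):

```python
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(0)
y0 = rng.normal(0, 1.0, size=40)   # control group, T = 0
y1 = rng.normal(0, 2.0, size=50)   # treated group, T = 1 (larger variance)

s0 = np.var(y0, ddof=1)
s1 = np.var(y1, ddof=1)

F = s1 / s0                                      # F-statistic
p = f.sf(F, dfn=len(y1) - 1, dfd=len(y0) - 1)    # one-sided p-value
print(F, p)
```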
| null | CC BY-SA 4.0 | null | 2023-03-15T23:31:55.183 | 2023-03-15T23:31:55.183 | null | null | 383333 | null |
609629 | 1 | 609778 | null | 1 | 74 | I've been working on building a random forest model using h2o.ai in R for climate data. I know that there is some issue, either with my understanding of random forests, my code, or my dataset. However, I'm not sure exactly what is causing my model to have a very high MSE and low percent variance explained. My apologies in advance if I've overlooked something very simple. I have spent much time reading and testing but haven't improved.
So far I've tried: adjusting the parameters, reducing the number of correlated predictors, and checking formulas and input data for outliers and normality. Based on what I've researched, random forests have been used for similar data in the past. I am using 70 rows in total with a 0.7 split. The entire dataset I've created is 12M rows; to create this subset I have taken the mean for 70 regions. I have tested on the entire dataset with no significant change. Here is my code, header and current results:
##Code##
```
#Load required packages
library(corrplot)
library(rsample)
library(h2o)
library(dplyr)
#Check data
head(dNBR_model)
#summary(dNBR_model)
#correlation matrix
corNBR <- cor(dNBR_model)
dNBRcor <- cor.mtest(dNBR_model, conf.level = 0.95)
corrplot(corNBR, p.mat = dNBRcor$p, type = "upper", order = "hclust", insig='blank', addCoef.col ='black', tl.col = "black", tl.srt = 45)
#Run RF model
set.seed(561)
dNBR_split <- initial_split(dNBR_model, prop = .7)
dNBR_train <- training(dNBR_split)
dNBR_test <- testing(dNBR_split)
y <- "dNBR"
x <- setdiff(names(dNBR_train), y)
#initialize h2o
h2o.init(max_mem_size='50G')
#convert train to h2o
train.h2o <- as.h2o(dNBR_train)
dNBR_test.h2o <- as.h2o(dNBR_test)
testDRF <- h2o.randomForest(x, y, ntrees = 500, max_depth = 15, min_rows = 1, mtries = 7, nbins = 20, sample_rate = 0.75000, training_frame = train.h2o, validation_frame = dNBR_test.h2o)
testperf <- h2o.performance(testDRF)
summary(testDRF)
#percent variance explained
VE = ((1 - h2o.mse(testDRF))/(h2o.var(train.h2o$dNBR)))*100
print(VE)
RMSE = h2o.mse(testDRF) %>% sqrt()
PRMSE = (RMSE/(mean(dNBR_test$dNBR)))*100
print(PRMSE)
h2o.varimp_plot(testDRF)
varimp <- h2o.varimp(testDRF)
h2o.residual_analysis_plot(model = testDRF, newdata = dNBR_test.h2o)
```
##Results##
```
** Reported on validation data. **
MSE: 4034.157
RMSE: 63.51502
MAE: 47.38234
RMSLE: 0.1174528
Mean Residual Deviance : 4034.157
% variance explained: -167.545
```
[](https://i.stack.imgur.com/wvjSB.png)
##Header##
[](https://i.stack.imgur.com/ByMoU.png)
| Identifying root cause of very poor Random Forest model | CC BY-SA 4.0 | null | 2023-03-16T00:04:49.180 | 2023-03-17T08:23:42.933 | 2023-03-16T00:37:55.880 | 383336 | 383336 | [
"r",
"regression",
"random-forest",
"model-evaluation",
"h2o"
] |
609631 | 1 | null | null | 2 | 131 | Let’s say that my survey has 2 sections that I want to find the correlation between, they each have 5 questions with each question being a 5-point likert scale. What do I do after? What are the steps to doing this in Excel? I can't seem to find any good guides on how to do correlation with likert scale variables. I was also told that I should test for normality and check for linearity with a scatter plot before I begin with correlation, but I can't find a guide about it for likert scale variables either.
| How to use Spearman’s correlation with two likert scales? | CC BY-SA 4.0 | null | 2023-03-16T01:09:22.210 | 2023-03-16T03:34:24.200 | 2023-03-16T03:34:24.200 | 383340 | 383340 | [
"correlation",
"likert",
"spearman-rho"
] |
609632 | 2 | null | 584285 | 0 | null | I will echo the other answer here and say that running multiple separate t-tests is not a good practice (Brown, 1990). In any case, I wanted to provide a shorter alternative answer. One could simply use a Welch t-test without really caring much about the distributional attributes associated with your data. Unlike the Student t-test, the Welch t-test is defined so:
$$
t = \frac{\bar{X_1}-\bar{X_2}}{\sqrt{\frac{s^2_1}{n_1}+\frac{s^2_2}{n_2}}}
$$
where $\bar{X}$ is the mean of a sample, $s^2$ is the variance of a sample, and $n$ is the sample size. The reason the Welch t-test is different is that it doesn't unnecessarily pool the variance like a Student t-test. This means you don't have to concern yourself with the equality of variance between the groups. That said, some have remarked that one should carefully select between the nonparametric Mann-Whitney U-test and the Welch t-test. However, simulations have shown that Welch pretty much does the trick in all cases, with only minor losses in power in extreme cases (Delacre et al., 2017). So even with fairly extreme departures from normality, the Welch test basically covers your bases, and you don't really have to concern yourself with the normality assumptions of the test to begin with.
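
The statistic and the Welch-Satterthwaite degrees of freedom can be computed directly from the formula above. A sketch in Python (the arithmetic is the same as in R), checked against `scipy`:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x1 = rng.normal(0.0, 1.0, size=40)
x2 = rng.normal(0.5, 3.0, size=60)   # different variance and sample size

v1, v2 = np.var(x1, ddof=1), np.var(x2, ddof=1)
n1, n2 = len(x1), len(x2)

# Welch t-statistic: unpooled variances in the denominator
t = (x1.mean() - x2.mean()) / np.sqrt(v1 / n1 + v2 / n2)
# Welch-Satterthwaite degrees of freedom
df = (v1 / n1 + v2 / n2) ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))

res = stats.ttest_ind(x1, x2, equal_var=False)   # scipy's Welch version
print(t, res.statistic)  # the manual and library statistics match
```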
The Welch is also fairly easy to accomplish using R. Here I quickly simulate some data and run the `t.test` function, which by default uses the Welch version:
```
#### Sim Data ####
group <- factor(
rbinom(n=1000,size=1,prob=.5),
labels = c("Ctrl","Trt")
)
outcome <- rnorm(n=1000)
#### Welch ####
t.test(outcome ~ group)
```
Results shown below:
```
Welch Two Sample t-test
data: outcome by group
t = -1.2624, df = 993.49, p-value = 0.2071
alternative hypothesis: true difference in means between group Ctrl and group Trt is not equal to 0
95 percent confidence interval:
-0.2105298 0.0456953
sample estimates:
mean in group Ctrl mean in group Trt
-0.06989047 0.01252676
```
Leaving the arguments of normality aside, I think using the Welch by default is generally good practice because it simply negates a lot of the issues associated with using the standard Student t-test.
#### Citations
- Brown, J. D. (1990). The use of multiple t-tests in language research. TESOL Quarterly, 24(4), 770–773. https://doi.org/10.2307/3587135
- Delacre, M., Lakens, D., & Leys, C. (2017). Why psychologists should by default use Welch’s t-test instead of Student’s t-test. International Review of Social Psychology, 30(1), 92. https://doi.org/10.5334/irsp.82
| null | CC BY-SA 4.0 | null | 2023-03-16T01:14:08.167 | 2023-03-16T02:03:35.227 | 2023-03-16T02:03:35.227 | 345611 | 345611 | null |
609633 | 1 | null | null | 2 | 38 | I may be missing something obvious, but is there a python package that can reliably do density estimation of a PDF in high dimensions (e.g. 512)? I know of scipy's `gaussian_kde` but KDE methods work poorly in high dimensions. There is a lot of literature on various ways to get high dimensional density estimation to work, but is there a good, efficient implemented method somewhere that someone knows of?
| Package for Multidimensional Density Estimation | CC BY-SA 4.0 | null | 2023-03-16T03:54:10.567 | 2023-03-16T04:11:51.320 | 2023-03-16T04:11:51.320 | 362671 | 382378 | [
"machine-learning",
"probability",
"distributions",
"density-function",
"density-estimation"
] |
609634 | 2 | null | 609617 | 0 | null | You interpret a logit-link beta regression output in the same way that you would interpret a logit-link logistic regression. We are modelling the expectation of the Beta-distributed random variable $Y$ via a logit link. Your model is something like:
$$
\text{logit} ( E[Y] ) = b_0 + b_1x_1 + b_2x_2 + \cdots
$$
Let's say that $x_1$ is your $[0,1]$ proportional predictor, and $x_2$ is your binary predictor.
Interpretation of $x_1$: it's probably not useful to do the usual "an increase in $x_1$ by 1 results in a ..." interpretation. You may want to say something like "an increase in $x_1$ by 1 percentage point (i.e. 0.01) results in an increase in the odds by a factor of $\exp(0.01 \times b_1)$."
Interpretation of $x_2$: basically a standard interpretation: "$x_2 = 1$ (or whatever the class label is) corresponds to an increase in the odds by a factor of $\exp(b_2)$".
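
A quick arithmetic sketch of these interpretations, with hypothetical coefficient values:

```python
import math

b1 = 2.0   # hypothetical coefficient on the proportional predictor x1
b2 = -0.4  # hypothetical coefficient on the binary predictor x2

# a 1-percentage-point (0.01) increase in x1 multiplies the odds E[Y]/(1-E[Y]) by:
factor_x1 = math.exp(0.01 * b1)
# switching x2 from 0 to 1 multiplies the odds by:
factor_x2 = math.exp(b2)

print(factor_x1)  # ~1.02
print(factor_x2)  # ~0.67
```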
| null | CC BY-SA 4.0 | null | 2023-03-16T04:01:21.097 | 2023-03-16T04:01:21.097 | null | null | 369002 | null |
609635 | 1 | null | null | 0 | 33 | I am conducting a nonparametric MANOVA to see the effect of Group on each of the four binary response variables (DV1, DV2, DV3, DV4) after controlling for a covariate Cov1 (One-way MANCOVA) using vegan Package in R.
However, I am confused about how to proceed with post-hoc tests to identify the specific differences between A and B for each of the DVs after finding an omnibus significant test result. I am aware that many examples and papers in the literature follow nonparametric MANOVAs with nonparametric ANOVAs, i.e. Kruskal-Wallis, followed by t-tests or Mann-Whitney U tests, but their dependent variables are continuous/numeric measures, so it makes sense to use KW or MW tests for them.
My DV is strictly binary (1= present vs. 0= absent), so I am wondering if Fisher's exact test (because some cells are n < 5) or chi-square test with Bonferroni corrections would be appropriate, but I am not sure.
I would like any advice on this and papers/empirical examples on the internet would be greatly appreciated. I am in psychology and use R/SPSS/JASP.
My sample data (ID: Subject ID, Group: 2 levels- A and B, 4 DVs measured on a binary scale either present or absent, Cov1: covariate measured on a continuous scale):
```
df <- data.frame (ID = c ("1", "2", "3", "4", "5", "6", "7", "8", "9", "10"),
Group = c("A","A","A","A","A","B","B","B","B","B"),
DV1 = c(0,0,0,0,1, 1,1,1,1,1),
DV2 = c(1,1,1,1,0, 0,0,0,0,1),
DV3 = c(0,0,0,0,1, 0,1,0,0,1),
DV4 = c(0,1,0,0,0, 1,1,1,1,1),
Cov1= c(1.25, 2.42, 2.56, 1.05, 2.56, 2.02, 3.8, 2.9, 3.2, 3.7))
```
Centering covariate Cov1
```
df$Cov1 <- scale(df$Cov1, center=TRUE, scale=FALSE)
```
Run nonparametric MANOVA with a covariate (MANCOVA) using vegan Package in R
```
library(vegan)
Y <- df[, c("DV1", "DV2", "DV3", "DV4")]
npMANOVA <- adonis2(Y ~ df$Group + df$Cov1, method = "euclidean", permutations = 1000)
```
Output tells us that there is a main effect of Group (p = 0.006993).
```
Permutation test for adonis under reduced model
Terms added sequentially (first to last)
Permutation: free
Number of permutations: 1000
adonis2(formula = Y ~ df$Group + df$Cov1, permutations = 1000, method = "euclidean")
Df SumOfSqs R2 F Pr(>F)
df$Group 1 4.2000 0.44681 6.8597 0.006993 **
df$Cov1 1 0.9141 0.09725 1.4930 0.250749
Residual 7 4.2859 0.45595
Total 9 9.4000 1.00000
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
| What post-hoc steps to take after nonparametric MANOVA with binary dependent outcome variables? | CC BY-SA 4.0 | null | 2023-03-16T04:28:59.087 | 2023-03-16T04:31:36.923 | 2023-03-16T04:31:36.923 | 291699 | 291699 | [
"r",
"nonparametric",
"binary-data",
"post-hoc",
"manova"
] |
609636 | 2 | null | 609106 | 1 | null | Are you finding $\mathrm{SE}(\widehat{\lambda})$ and $\mathrm{SE}(\widehat{\lambda}\log{\widehat{\lambda}})$ instead? Let $g(\widehat{\lambda})=\widehat{\lambda}\log\widehat{\lambda}$. By delta method, the variance of $g(\widehat{\lambda})$ is
\begin{eqnarray*}
\mathbb{V}(g(\widehat{\lambda}))&\approx&(g'(\lambda))^2\mathbb{V}(\widehat{\lambda})\\
&=&(1+\log\lambda)^2\frac{\lambda^2}{n}
\end{eqnarray*}
Then, $\mathrm{SE}(\widehat{\lambda}\log\widehat{\lambda})=\frac{\lambda(1+\log\lambda)}{\sqrt{n}}$.
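
The approximation can be checked by simulation. A sketch in Python, assuming an exponential model with rate $\lambda$ so that $\widehat{\lambda}=1/\bar{X}$ and $\mathbb{V}(\widehat{\lambda})\approx\lambda^2/n$:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n, reps = 2.0, 2000, 5000

# MLE of the rate in each replication, then the transformed estimator
samples = rng.exponential(scale=1 / lam, size=(reps, n))
lam_hat = 1 / samples.mean(axis=1)
g = lam_hat * np.log(lam_hat)

se_empirical = g.std(ddof=1)
se_delta = lam * (1 + np.log(lam)) / np.sqrt(n)
print(se_empirical, se_delta)  # the two standard errors agree closely
```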
| null | CC BY-SA 4.0 | null | 2023-03-16T04:35:19.527 | 2023-03-16T04:35:19.527 | null | null | 383333 | null |
609637 | 1 | null | null | 7 | 118 | Apologies if the question is unsuitable for this site. Please direct me to the appropriate place, and I will take this down.
I am a statistician, and I have been struggling to find a meaning to the existence of statistics as a discipline in today's world where everyone cares about big black box models applied to big datasets. Statistics traditionally has been based on generative models, assumed some structure in the data and has developed methods to extract structure and do inference.
However, today people just care about prediction. Nobody cares about inference, and perhaps rightly so, because inference always necessitates a generative framework. The models we study today are extremely complicated and it's not clear if there is any hope for theory.
Time and again, we have heard statements like statistics is the least important part of data science. It is kind of painful to hear this as I have been trained as a statistician, and I wonder what is the way forward.
Do you struggle with this? What are your views on this? What advice would you give to a budding statistician given the trends you observe today?
| How can a statistician be relevant today? | CC BY-SA 4.0 | null | 2023-03-16T04:36:06.367 | 2023-03-16T07:49:53.537 | null | null | 59485 | [
"machine-learning",
"inference"
] |
609638 | 2 | null | 609637 | 8 | null | Statisticians often work in some form of consultation. As you said, many people need to validate a certain hypothesis, in science, medicine, etc. A statistician can analyze the data, but more importantly set up the proper experimental design. Much of science is based on the ability to do certain inferences. As long as there is a need for science there will be a need for statisticians to assist the scientists in that process.
There is also the educational part to statistics. People who work in data analysis often need help from people who understand statistics better. You may argue that the algorithms do not require a lot of math, and while that might be true to an extent, those algorithms are impossible to understand without a background in math. It is always easy to look at the final answer and not how one reached that answer. Many of the libraries are used by people who have very limited statistical background. If you want to help those people, and educate them in how those libraries work, or possibly edit them for specific purposes, then you will need a strong statistics background.
| null | CC BY-SA 4.0 | null | 2023-03-16T04:47:18.700 | 2023-03-16T07:49:53.537 | 2023-03-16T07:49:53.537 | 22047 | 68480 | null |
609639 | 1 | 609644 | null | 4 | 64 | Suppose $\text{supp}(X)\subseteq \mathbb{R}_{\geq 1}.$ Can we say $$\text{Cov}(X,\log X)\geq 0?$$
On one hand, we can say by monotonicity of log and Jensen's inequality that $$X\geq E[X]\implies \log X\geq \log E[X]\geq E[\log X].\quad (1)$$
Now if it also holds that $$\log X\geq E[\log X]\implies X\geq E[X]\quad (2)$$
then $\text{sign}(X-E[X])=\text{sign}(\log X-E[\log X])$ and we are done, but I don't think $(2)$ necessarily holds.
| Sign of Correlation between $X$ and $\log X$ | CC BY-SA 4.0 | null | 2023-03-16T05:04:51.163 | 2023-03-16T06:25:16.350 | 2023-03-16T05:37:17.437 | 342032 | 342032 | [
"correlation",
"expected-value",
"covariance"
] |
609640 | 2 | null | 609637 | 3 | null | The role of the statistician continues to be essential. I would say that there is a shortage of people with good statistical thinking. Testing hypotheses is required in any rigorous scientific decision process.
Novel health interventions (such as drugs against cancer) could not be developed without the support of a statistician in clinical trials.
Big tech companies also conduct A/B testing whenever they want to implement a new feature. A/B testing is, in essence, a clinical trial that evaluates whether a feature improves a performance metric.
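As a rough illustration of what such a test looks like in code, here is a pooled two-proportion z-test, one common way to analyze an A/B test; all counts below are made up for illustration:

```python
import math

# Pooled two-proportion z-test, a common A/B-test analysis; the
# conversion counts and sample sizes here are purely illustrative.
def ab_test_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = ab_test_z(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(round(z, 2))  # about 2.83, "significant" at the usual 5% level
```

Whether such a difference matters in practice is, of course, a separate question from statistical significance.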
More recently, the literature on prediction models (whether based on black boxes or classical regression methods) has advocated that clinical trials be conducted to evaluate the impact on clinical outcomes of implementing these predictive models in a clinical setting. So ML scientists interact with statisticians to design trials and test hypotheses.
At the end of the day, science only moves forward through the formulation of hypotheses, data collection, and the testing of hypotheses.
| null | CC BY-SA 4.0 | null | 2023-03-16T05:09:03.280 | 2023-03-16T05:09:03.280 | null | null | 30855 | null |
609641 | 1 | null | null | 0 | 21 | I took some data analysis courses and did a few projects on my own, and I feel that I am far from the community of data analysts: I miss new features when they are released, new articles, questions, etc.
So I want to know the best resources I can follow to grow my knowledge and level up my skills, as well as some communities or forums where I can interact more with people in my field.
I appreciate your help so much
| Data Analysis communities and learning resources | CC BY-SA 4.0 | null | 2023-03-16T05:33:54.623 | 2023-03-16T05:33:54.623 | null | null | 383313 | [
"references"
] |
609644 | 2 | null | 609639 | 4 | null | Your sign requirement does not necessarily hold, but it's still possible to prove the result using an alternative method. Since $x \log x$ is [convex](https://math.stackexchange.com/questions/594300/why-is-x-logx-convex) and $\log x$ is concave (over the stipulated range), [Jensen's inequality](https://en.wikipedia.org/wiki/Jensen%27s_inequality) gives:
$$\begin{align}
\mathbb{E}(X) \log(\mathbb{E}(X)) &\leqslant \mathbb{E}(X\log X), \\[6pt]
\log(\mathbb{E}(X)) &\geqslant \mathbb{E}(\log X). \\[6pt]
\end{align}$$
Applying each of these inequalities (in order) we get:
$$\begin{align}
\mathbb{Cov}(X,\log X)
&= \mathbb{E}(X \log X) - \mathbb{E}(X) \mathbb{E}(\log X) \\[6pt]
&\geqslant \mathbb{E}(X) \log(\mathbb{E}(X)) - \mathbb{E}(X) \mathbb{E}(\log X) \\[6pt]
&\geqslant \mathbb{E}(X) \mathbb{E}(\log X) - \mathbb{E}(X) \mathbb{E}(\log X) \\[6pt]
&= 0. \\[6pt]
\end{align}$$
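As a quick numerical sanity check of the result (not part of the proof; the shifted-exponential sample below is just one arbitrary choice with support in $[1,\infty)$):

```python
import math
import random

random.seed(0)
# X with support in [1, infinity): X = 1 + Exponential(1) is an arbitrary choice.
xs = [1.0 + random.expovariate(1.0) for _ in range(100_000)]
lxs = [math.log(x) for x in xs]

def mean(v):
    return sum(v) / len(v)

# Cov(X, log X) = E[X log X] - E[X] E[log X]
cov = mean([x * l for x, l in zip(xs, lxs)]) - mean(xs) * mean(lxs)
print(cov > 0)  # True, consistent with the inequality
```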
| null | CC BY-SA 4.0 | null | 2023-03-16T06:04:05.003 | 2023-03-16T06:25:16.350 | 2023-03-16T06:25:16.350 | 362671 | 173082 | null |
609645 | 1 | null | null | 0 | 26 | I am working with a mixed linear model where I have several groups, each with a different number of repeated measures. I fit a separate model for each group, but I am facing an issue when it comes to using data from participants with lower counts of repeated measures.
For example, if I fit a model for a group with five repeated measures, I cannot utilize the data from participants who have only three repeated measures. This leads to a loss of valuable information from those participants. Is there a method or approach to handle this issue in mixed linear models, so that I can include all participants and make use of the data with varying numbers of repeated measures across groups?
Another aspect I would like to address is the analysis of errors in the model. I have calculated the absolute error for each observation in the test set. I am wondering if it would be appropriate to use an ANOVA or t-test to check for potential differences in the error for a specific value in a particular feature. Is this a valid approach, or should I consider alternative methods for assessing differences in errors across feature values?
Any guidance or references to best practices in this area would be greatly appreciated.
| Handling Variable Repeated Measures and Error Analysis in Mixed Linear Models | CC BY-SA 4.0 | null | 2023-03-16T06:21:32.000 | 2023-03-16T06:21:32.000 | null | null | 383350 | [
"mixed-model",
"anova",
"t-test",
"repeated-measures"
] |
609646 | 1 | null | null | 0 | 38 | My ultimate goal is a way to evaluate a group of "m" covariance matrices (all size n*n) so I can pick an arbitrary one and calculate "this one is tighter than the average covariance matrix by a quantifiable amount" so I can select the top XX% and weight them according to how good they are.
One approach would be to describe the mean and (co)variance of the set of m covariance matrices, maybe by incorporating an additional dimension:
- the "mean" covariance would be n*n computed by taking the mean in the new dimension.
- extend normal covariance matrix calculation by computing the covariance matrix in the new dimension for each row and column, then multiply the values where the "grid" of new covariance matrices "collide" to form an n*n*n "covariance cube."
- or I could reshape the n*n covariance matrices to an n^2*m array where basically each column is the "flattened" version of each covariance matrix. Then I can use standard tools to compute the mean (n^2*1) and covariance (n^2*n^2)...maybe these are even the same values I'd get making a cube??
Alternatively, my mentor suggested just taking the eigenvalues of each covariance matrix and summing them to give an idea of how "tight" each one is and then compare those scalars against each other. Doesn't seem as rigorous but it might work.
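One practical note on that suggestion: the sum of the eigenvalues of a covariance matrix equals its trace, so this scalar "tightness" score needs no eigendecomposition. A minimal sketch (the 2x2 matrices below are made up):

```python
# Trace = sum of eigenvalues, used here as a scalar "tightness" score.
def trace(m):
    return sum(m[i][i] for i in range(len(m)))

covs = [[[2.0, 0.3], [0.3, 1.0]],
        [[0.5, 0.1], [0.1, 0.4]],
        [[3.0, 0.0], [0.0, 3.0]]]

# Rank the matrices from tightest (smallest total variance) to loosest.
ranked = sorted(range(len(covs)), key=lambda i: trace(covs[i]))
print(ranked)  # [1, 0, 2]
```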
For background, I am investigating a nonlinear estimation problem where I want to compare the quality of a hypothesis to the set of all the hypotheses that I am investigating (where each hypothesis quality is described by an n*n covariance matrix). Ideally this could extend one dimension further which could result in a hypercube or n^3*n^3 covariance matrix? In the case of the eigenvalues it scales very easily.
Thank you!
| Evaluate relative quality of covariance matrix relative to a set | CC BY-SA 4.0 | null | 2023-03-16T06:25:37.137 | 2023-03-16T20:06:45.670 | 2023-03-16T20:06:45.670 | 383352 | 383352 | [
"multivariate-analysis",
"covariance",
"covariance-matrix",
"multidimensional-scaling"
] |
609647 | 2 | null | 492319 | 2 | null |
#### A simple "rule of thumb" can be derived from the conjugate normal model
As you correctly point out, the simplest way to examine this is to consider an archetypal model form like the normal model to derive an appropriate "rule of thumb". To this end, let's consider the Bayesian normal model using the conjugate prior for the mean parameter $\mu$ and with a fixed precision parameter $\lambda$. ([Precision](https://en.wikipedia.org/wiki/Precision_(statistics)) is the inverse of the variance if you haven't seen this before.) This model can be written formally as:
$$\begin{align}
X_1,...,X_n | \mu, \lambda &\sim \text{IID N} \Big( \mu, \frac{1}{\lambda} \Big), \\[10pt]
\mu | \lambda &\sim \text{N} \Big( \mu_0, \frac{1}{\tau_0} \Big). \\[6pt]
\end{align}$$
Under this model the posterior distribution for $\mu$ (conditional on $\lambda$) is:
$$p(\mu|\mathbf{x}, \lambda)
= \text{N} \Big( \mu_n, \frac{1}{\tau_n} \Big),$$
with posterior mean and precision given respectively by:
$$\mu_n = \frac{\tau_0 \mu_0 + n \lambda \bar{x}_n}{\tau_0 + n \lambda}
\quad \quad \quad \quad \quad
\tau_n = \tau_0 + n \lambda.$$
Now, since our interest is in the prior-to-posterior change in variance, we are only interested in the latter equation for the precision parameter. As you can see from that equation, there is a simple rule for updating the precision: the posterior precision of the mean parameter is the prior precision plus $n$ lots of the sampling precision. (You can easily get the corresponding rule for the prior-to-posterior variance, but it is not as simple.) This means that ---within the normal model--- the posterior precision is always higher than the prior precision and so the corresponding posterior variance is always lower than the prior variance.
This gives you a "rule of thumb" that holds exactly in the normal model but would only be an approximation (at best) in other models. It is not always the case that the posterior variance is lower than the prior variance, so be careful with how widely you consider this "rule" to apply. It will tend to apply reasonably well in cases where the sampling distribution and prior distribution are both unimodal and close to the normal distribution.
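The update rule can be sketched in a few lines of Python; the prior settings and data below are illustrative, not from any particular problem:

```python
# Conjugate normal update with known precision lam: the posterior
# precision is the prior precision plus n times the sampling precision.
def posterior(mu0, tau0, lam, data):
    n = len(data)
    xbar = sum(data) / n
    tau_n = tau0 + n * lam                         # precision update
    mu_n = (tau0 * mu0 + n * lam * xbar) / tau_n   # precision-weighted mean
    return mu_n, tau_n

mu_n, tau_n = posterior(mu0=0.0, tau0=1.0, lam=4.0, data=[0.9, 1.1, 1.3])
print(tau_n)  # 1 + 3*4 = 13.0, so posterior variance 1/13 < prior variance 1
```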
| null | CC BY-SA 4.0 | null | 2023-03-16T06:26:32.343 | 2023-03-16T06:26:32.343 | null | null | 173082 | null |
609648 | 1 | 609650 | null | 3 | 84 | This question is a follow up to [this](https://stats.stackexchange.com/questions/609639/sign-of-correlation-between-x-and-log-x#609644) question.
Suppose $f$ is strictly increasing. Can we say
$$\text{Cov}(X,f(X))\geq 0?$$
Ben's answer on the aforementioned [linked post](https://stats.stackexchange.com/questions/609639/sign-of-correlation-between-x-and-log-x#609644) can be extended to show the result holds for $f(x)$ concave and $g(x):=xf(x)$ convex. [This](https://stats.stackexchange.com/a/289187/342032) post seems to suggest the desired inequality for the general case using a pictorial interpretation of covariance as expected signed area, but a formal proof would be delightful.
| Sign of Correlation between $X$ and $f(X)$ for strictly monotonic $f$ | CC BY-SA 4.0 | null | 2023-03-16T06:51:21.973 | 2023-03-16T13:41:32.213 | 2023-03-16T12:29:42.047 | 342032 | 342032 | [
"correlation",
"expected-value",
"covariance",
"inequality"
] |
609649 | 1 | null | null | 0 | 18 | [](https://i.stack.imgur.com/92wiQ.png)
[](https://i.stack.imgur.com/3iqFA.png)
The following is code for the optimization, during which I also calculate the validation (test) error. But I am not sure whether it is a good graph or whether there are problems with my code. Please give me some suggestions.
code
```
colnames(data_plot)[3] <- "CellType"
n <- nrow(data_plot)
set.seed(12345)
id <- sample(1:n,floor(n * 0.5))
data_train <- data_plot[id,]
data_test <- data_plot[-id,]
data_train$CellType <- ifelse(data_train$CellType == "T-cell",1,-1)
data_test$CellType <- ifelse(data_test$CellType == "T-cell",1,-1)
data_train_p <- as.matrix(data_train[,-3])
data_test_p <- as.matrix(data_test[,-3])
lossfun <- function(theta,X, Y){
result <- mean(log(1 + exp(-Y * (X %*%theta))))
return(result)
}
cost <- function(theta){
loss_train <- lossfun(theta,X = data_train_p,Y = data_train$CellType)
loss_test <- lossfun(theta,X= data_test_p,Y = data_test$CellType)
num <<- num + 1
training_cost[num] <<- loss_train
test_cost[num] <<- loss_test
return(loss_train)
}
num <- 0
training_cost <- NULL
test_cost <- NULL
theta_initial <- rep(0,2)
optimum_theta <- optim(par = theta_initial,fn = cost,method = "BFGS",control=list(maxit=20))
iteration <- 1:num
data_plot <- data.frame(iteration,training_cost,test_cost)
data_plot <- reshape2::melt(data_plot,id.var = "iteration")
library(ggplot2)
ggplot(data_plot,aes(x=iteration,y= value,color= variable)) + geom_point()
```
| Is it a good graph to show the relationship between the training error and test error after using the optim function? | CC BY-SA 4.0 | null | 2023-03-16T06:53:34.057 | 2023-03-16T08:40:02.787 | 2023-03-16T08:40:02.787 | 110833 | 383356 | [
"r",
"classification",
"optimization",
"validation"
] |
609650 | 2 | null | 609648 | 7 | null | Let $f$ be strictly increasing. Then
$$\operatorname{Cov}(X, f(X)) =\mathbb E[Xf(X) ]-\mathbb E[X]\mathbb E[f(X) ]=\mathbb E[(X-\mathbb E[X])(f(X) -f(\mathbb E[X]))].\tag 1\label 1$$
Now $$X\gtreqless\mathbb E[X]\implies f(X) \gtreqless f(\mathbb E[X])\tag 2.\label 2$$
Use both $\eqref 1,\eqref 2,$ to check your claimed result.
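A quick Monte Carlo check (not a proof) with a few arbitrary strictly increasing functions; the standard-normal sample is an illustrative choice:

```python
import math
import random

random.seed(1)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]

def cov(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

# Each strictly increasing f should give Cov(X, f(X)) >= 0.
increasing = [lambda x: x ** 3, math.atan, lambda x: 2 * x + 5]
results = [cov(xs, [f(x) for x in xs]) for f in increasing]
print(all(r >= 0 for r in results))  # True
```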
| null | CC BY-SA 4.0 | null | 2023-03-16T07:14:00.217 | 2023-03-16T07:14:00.217 | null | null | 362671 | null |
609652 | 1 | null | null | 2 | 104 | Could you provide a formal definition of the p-value? Or do you have any good source for one?
For a day, I've been searching for a formal definition of the p-value, but I haven't found one yet. The majority of statistics books are mathematically non-rigorous, and most of them don't define the p-value formally. I found the two books below that are mathematically rigorous, but their definitions are also ambiguous.
For example, on p. 127 of "Mathematical Statistics, 2nd edition" by Jun Shao:
>
It is good practice to determine not only whether $H_0$ is rejected or
accepted for a given $\alpha$ and a chosen test $T_\alpha$, but also
the smallest possible level of significance at which $H_0$ would be
rejected for the computed $T_{\alpha}(x)$, i.e. $\hat{\alpha} = \mathrm{inf}\left\{ \alpha \in (0, 1) : T_\alpha(x) = 1 \right\}$. Such an $\hat{\alpha}$, which depends on $x$ and the chosen test and is a
statistic, is called the p-value for the test $T_{\alpha}$.
However, In this book, $T_{\alpha}$ is not defined before the above statement.
In another book, on p. 63 of "Testing Statistical Hypotheses, 3rd edition" by E.L. Lehmann:
>
... When this is the case, it is good practice to determine not only
whether the hypothesis is accepted or rejected at the given
significance level, but also to determine the smallest significance
level, or more formally
$$ \hat{p} = \hat{p}(X) = \mathrm{inf}\left\{ \alpha : X \in S_{\alpha} \right\} $$
at which the hypothesis would be rejected for the given observation.
This number, the so-called p-value, gives an idea of how strongly
the data contradict the hypothesis.
But unfortunately I couldn't find the definition of $S_{\alpha}$, so it is the same situation as with Shao's book.
| Formal definition of p-value | CC BY-SA 4.0 | null | 2023-03-16T07:41:31.940 | 2023-03-16T12:45:04.860 | 2023-03-16T08:58:14.953 | 362671 | 310702 | [
"hypothesis-testing",
"mathematical-statistics",
"statistical-significance",
"p-value",
"references"
] |
609653 | 1 | null | null | 1 | 28 | I'm performing survival analysis on time to drop out of a certain program. However, the censoring of each case depends heavily on the length of the program. For example, some programs only last 3 months, while some can last up to 18 months. Programs of length 6, 9, and 12 months account for 80% of the observations. Participants are censored only if they have not dropped out by the end of the program; therefore, the time of censoring depends greatly on program length.
I would like to ask for best practices on creating an appropriate estimation when censoring depends on one of the covariates. So far these are the options that I came up with
- Create one estimator for all observations, but it feels odd when a person joining a 3-month program would have a survival function of 18 months
- Create one estimator for each group of program length, but some groups only have a small number of observations, so I'm not sure if it can produce a good estimator. The large number of estimators can also be a problem
- Bin the programs by length, then create one estimator for each bin. This seems more sensible but I'm not sure if there's any caveat.
| How to deal with survival analysis when censoring time depends on a covariate | CC BY-SA 4.0 | null | 2023-03-16T07:49:37.593 | 2023-03-18T15:52:18.587 | null | null | 383361 | [
"survival",
"censoring",
"stratification"
] |
609654 | 2 | null | 529951 | 1 | null | I think reward decay decreases with each time step, whereas the discount factor is a fixed number for the whole episode.
| null | CC BY-SA 4.0 | null | 2023-03-16T07:58:03.490 | 2023-03-16T07:58:03.490 | null | null | 383362 | null |
609655 | 1 | null | null | 0 | 9 | Every test (Covid test, verdict in court, pharmaceutical quality control instrument, ...) is subject to statistical variation. Sometimes a Covid test will be false-positive, sometimes an innocent person gets sentenced. And sometimes a sick person has a false-negative result.
For sentencing a person to jail, the false-positive rate needs to be minimized, therefore the burden is to provide evidence for guilt. Is it the same for Covid tests? I guess there it is better to have a person stay at home even if not infected, just to stay on the safe side. However, false-negative should be reduced not to infect others.
Is there a general way of thinking about it? When should a test be designed for high true positive rate and when for true negative rate? Does one go at the expense of the other?
| When are high true positive rate and when high true negative rate favoured? | CC BY-SA 4.0 | null | 2023-03-16T08:02:15.597 | 2023-03-16T08:02:15.597 | null | null | 52669 | [
"hypothesis-testing"
] |
609656 | 2 | null | 609652 | 6 | null | Lehmann is talking about a nested sequence of critical regions $\langle S_\alpha\rangle$ with the index being the size of the corresponding test. This is due to the fact that he needs to find the smallest significance level.
Shao is also using the same concept; however instead of defining the $p$-value in terms of $S_\alpha, $ the author used the critical function (using Lehmann terminology) in that $T_\alpha(\mathbf x) =1\implies \mathbf x\in S_\alpha.$
Note both are talking about non-randomized test procedures.
Generalizing to the randomized case is not difficult either. Lehmann explains further that in that case, one can resort to the nested tests $\langle \varphi_\alpha\rangle.$
References have been provided in my answer [here](https://stats.stackexchange.com/a/595369/362671); see Ben's [post](https://stats.stackexchange.com/a/561866/362671) for how the nested argument emanated from imposing a certain evidentiary order relation. For a brief philosophical take, check my post [here](https://stats.stackexchange.com/a/597828/362671).
| null | CC BY-SA 4.0 | null | 2023-03-16T08:05:25.570 | 2023-03-16T08:05:25.570 | null | null | 362671 | null |
609658 | 2 | null | 609652 | 4 | null | >
But unfortunately I couldn't find the definition of $S_{\alpha}$
The p-value is not defined in an unambiguous way
"The probability to get, given the null hypothesis, an effect-size equal to or larger than the observed effect-size"
The culprit is that 'effect-size' is not unambiguously defined. It depends on arbitrary choices.
For the same experiment different methods can compute different p-values (depending on different methods to define $S_\alpha$, different definitions of effect size).
So, indirectly, the p-value has no unambiguous formal definition. It is more of a concept than a rigorous mathematical construction. Statistics is more than objective mathematical formulas.
For more specific cases like a particular hypothesis test, the p-value can be expressed in an unambiguous formal way. E.g. for a one sided z-test with $H_0: \mu \leq 0$ and $H_a: \mu > 0$ the p-value is defined as $p = 1-\Phi(z)$ where $\Phi$ is the cumulative distribution function of the standard normal distribution.
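For that one-sided z-test, the p-value is a one-liner; a stdlib-only Python sketch (the function name is mine), using $\Phi(z) = (1 + \operatorname{erf}(z/\sqrt{2}))/2$:

```python
import math

# p-value of a one-sided z-test: p = 1 - Phi(z), with the standard
# normal CDF written via the error function.
def p_value_one_sided(z):
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(round(p_value_one_sided(1.6449), 3))  # 0.05, the usual cutoff
print(p_value_one_sided(0.0))               # 0.5
```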
---
>
may I ask you if you can expand the point For the same experiment different methods can compute different p-values?
For many observations with only a single dimension, there is often a natural order and most approaches agree, and if they disagree then it is not because they have a different view of effect-size, but because the methods might be approximations of the actual p-value and are not exact methods to compute the p-value. Yet, a typical difference is the difference between one-sided and two-sided tests (example: [Why does $\mu > 0$ (or even $\mu > \epsilon$) "seem easier” to substantiate than $\mu \neq 0$?](https://stats.stackexchange.com/questions/548235/))
When the data is multivariate then what is and what is not extreme becomes even more ambiguous than just the difference between one-sided and two-sided. One has to draw regions. An example of a difference occurs here:
- Surprising behavior of the power of Fisher exact test (permutation tests)
The image shows results from two binomial distributed variables $X,Y \sim B(500,p)$ with two different tests for $H_0:p=0.5$
- R Tukey HSD Anova: Anova significant, Tukey not?
How can I get a significant overall ANOVA but no significant pairwise differences with Tukey's procedure?
Comparing two, or more, independent paired t-tests
The image shows difference in rejection regions for the hypothesis $H_0: \mu_1=\mu_2=\mu_3$ based on the observed t-values of individual comparison between the three possible pairs.
- Which statistical analysis should I perform if the data sets are not normally distributed?
This compares a Mann-Whitney U test versus t-test (which both test equality of distributions, but one does this by testing the relative dominance of distributions $P(X<Y) = P(Y>X)$ and the other by testing the equality of means $\mu_X = \mu_Y$)
| null | CC BY-SA 4.0 | null | 2023-03-16T08:35:35.640 | 2023-03-16T12:45:04.860 | 2023-03-16T12:45:04.860 | 164061 | 164061 | null |
609661 | 1 | null | null | 0 | 14 | I have a dataset of platelet count measurements, between 0 and 20, and bleeding outcome (yes/no)
Each patient has multiple measurements (5-30)
I would like to test for association between platelet count and bleeding.
I have done a Box-Tidwell test, and the relationship between platelet count and the log-odds of the outcome is not linear.
I have tried binning the platelet count in intervals of 5 (0-5, 6-10,11-15,16-20) and the outcome is seen below:
```
bin bleeding n prop logprop
[0,5] 103 278 0.37050360 -0.9928921
(5,10] 96 696 0.13793103 -1.9810015
(10,15] 67 771 0.08690013 -2.4429958
(15,20] 36 418 0.08612440 -2.4519625
```
There seems to be a non-linear relationship between platelet count and bleeding outcome, but how do I best go about analyzing this?
Thanks in advance
| Help needed with model selection | CC BY-SA 4.0 | null | 2023-03-16T09:18:10.700 | 2023-03-16T09:18:10.700 | null | null | 383366 | [
"regression",
"model"
] |
609662 | 2 | null | 609371 | 2 | null | Let $g(X_1, ..., X_n) = \Vert \hat{f_n} - f \Vert_1$, then
$$
|g(X_1, ..., X_k, ..., X_n) - g(X_1, ..., X'_k, ..., X_n)| = \left|\Vert\hat{f_n} - f \Vert_1 - \Vert\hat{f'_n} - f \Vert_1\right| \\
\leq \Vert\hat{f_n} - \hat{f'_n} \Vert_1 \qquad \text{(triangle inequality)}\\
=\int_{-\infty}^\infty \frac{1}{nh}\left|K\left(\frac{x-X_k}{h}\right)-K\left(\frac{x-X'_k}{h}\right)\right|dx\\
\leq \frac{1}{nh}\int_{-\infty}^\infty K\left(\frac{x-X_k}{h}\right)dx + \frac{1}{nh}\int_{-\infty}^\infty K\left(\frac{x-X'_k}{h}\right)dx\\
= \frac{2}{n}. \qquad \text{(substitution, since each kernel integrates to } h \text{ after the change of variables)}
$$
Thus $g$ satisfies the bounded-difference condition with constants $c_i = 2/n$, and McDiarmid's inequality gives
$$\mathbb{P}\left(\left|\Vert\hat{f_n} - f \Vert_1 - \mathbb{E}\Vert\hat{f_n} - f \Vert_1\right| \geq t\right) \leq 2\exp \left(-\frac{2t^2}{n (2/n)^2}\right) = 2e^{-\frac{nt^2}{2}}$$
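The $2/n$ bounded-difference constant can be checked numerically; the sketch below uses an arbitrary Gaussian kernel, bandwidth, and sample, swaps one observation, and integrates the $L_1$ difference between the two kernel density estimates on a grid:

```python
import math
import random

random.seed(2)
n, h = 50, 0.3
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = xs[:]
ys[0] = 3.0  # replace one coordinate, as in the bounded-difference setup

def kde(pts, x):
    # Gaussian-kernel density estimate evaluated at a single point x.
    return sum(math.exp(-0.5 * ((x - p) / h) ** 2) / math.sqrt(2 * math.pi)
               for p in pts) / (n * h)

# Riemann-sum approximation of the L1 distance between the two KDEs.
grid = [i * 0.01 for i in range(-1000, 1001)]
l1 = sum(abs(kde(xs, x) - kde(ys, x)) * 0.01 for x in grid)
print(l1 <= 2 / n + 1e-3)  # True: changing one point moves the L1 error by at most 2/n
```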
| null | CC BY-SA 4.0 | null | 2023-03-16T09:34:32.783 | 2023-03-16T09:34:32.783 | null | null | 383368 | null |
609663 | 1 | null | null | 1 | 15 | I am currently using the R package PSweight to do propensity score analysis with 2 different methods: IPW and matching.
I first used it to build a propensity score based on the IPW method to get the average treatment effect (ATE), but then I also tried the matching method, and I am wondering whether I should choose one over the other, or whether it is a good idea to show the results of both methods.
Additionally, I am not exactly sure how the matching is done. For example, if I have 35 patients in my treated group and 65 in my control group, will it compare them to the 35 closest matches out of the control group? If so, only 70 patients would be involved in the effect results instead of the 100 for IPW.
| Understanding the matching propensity score | CC BY-SA 4.0 | null | 2023-03-16T09:34:36.450 | 2023-03-16T09:58:41.147 | 2023-03-16T09:58:41.147 | 377392 | 377392 | [
"r",
"propensity-scores",
"matching"
] |
609664 | 1 | null | null | 1 | 14 | I have a time series broken down by day, and there are gaps in it that I have marked in red:
[](https://i.stack.imgur.com/XnpGs.png)
the distribution there is not normal
[](https://i.stack.imgur.com/Nzr0L.png)
How do we approach modeling a system that will look for anomalies here if they appear?
Because if you difference the series as I did above and try to set limits, for example using a Z-score or the IQR, it is obvious that typical, normal growth will be flagged as anomalous. I don't know how to approach this problem, what methods to use, or where to look; please advise.
[](https://i.stack.imgur.com/CJcnQ.png)
| how to find anomalies for a non-normal distribution with seasonality? | CC BY-SA 4.0 | null | 2023-03-16T09:36:54.347 | 2023-03-16T09:36:54.347 | null | null | 383365 | [
"outliers",
"anomaly-detection"
] |
609665 | 2 | null | 598075 | 1 | null |
# Standardization for causal inference is a tricky business
This is an amazing question because it raises an unresolved issue at the heart of causal inference: What is a causally meaningful level of representation? The representation includes choosing variables and, as in your case, a suitable measurement scale. Standardization amounts to assuming that a re-scaled version of the measured data is more suitable. This may be justifiable for predictive tasks, but it becomes a lot more tricky in the context of causal inference.
The scale of variables may contain useful information but it could also distort your results.
## Information in the data scale
From a statistical perspective, you are right in observing that standardization would (in linear regression) affect only coefficient magnitudes, but not statistical significance.
So if your method of causal inference only relies on significance, this is not a problem in the first place.
However, the data scale (if it is known, as could be the case for example with count data) may hold useful information about the relationship between variables. This fact has been used for example by the [winning submission to a competition on finding causal links](http://proceedings.mlr.press/v123/weichwald20a/weichwald20a.pdf).
## The promise and pitfalls of scale-sensitivity
At the same time, attempting to use information in the data scale may change results in unintended ways. If a suitable data scale is not known, variables with scale-dependent properties such as high variance may come to dominate results (as would be the case e.g. for penalized regression).
A [recent work on synthetic data from causal models](https://proceedings.neurips.cc/paper/2021/file/e987eff4a7c7b7e580d659feb6f60c1a-Paper.pdf) shows that many models produce data with strong scale patterns. These patterns are shown to dominate the performance of algorithms estimating causal structure and influence between variables. This might be a good thing if there was information in the data scale, but in the real world that is hard to be sure of. After all, many scales are arbitrary and who knows whether we should measure in meters or millimeters, pounds or kilogram, bitcoin or USD, and so on.
## Take-away
Whether or not you should standardize depends on your method and your domain. If you are convinced that you know the right scale and there may be information in the raw coefficients, then using the original scale may give you additional insights. If you don't know the right scale and/or are not trying to use it, standardization is a good idea to reduce the risk that the data scale changes results in unintended ways.
| null | CC BY-SA 4.0 | null | 2023-03-16T09:56:37.087 | 2023-03-16T11:53:58.530 | 2023-03-16T11:53:58.530 | 250702 | 250702 | null |
609666 | 1 | 609695 | null | 0 | 26 | I'm working on a quantile regression model where I look at how imports (impi,t) of intermittent electricity (inti,t), such as wind and solar, from country i in period t impact day-ahead prices in Norway in period t. My current model can be simplified to this:
pricet ~ impi,t + inti,t + impi,t*inti,t
However, I'm not entirely certain about the implications of modelling it this way. For instance, imports can have a "main effect" on price, but intermittent generation should only have an effect on price if it is imported. When I look at the coefficients, it seems that inti,t is a lot more significant than the interaction, which is likely due to the imported flows (in MWh) being quite small. But in theory, that variable shouldn't be that significant because it can't have an isolated main effect on price. Could I model this in a better way? Or am I interpreting the three coefficients wrong?
If for example
impi,t = 0.1
inti,t = -0.01
impi,t*inti,t = -0.000005
Should I interpret it as $-0.01 - 0.000005 + 0.1$ being the effect of importing intermittent generation from that country? All variables are continuous.
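One way to see the role of the interaction (a sketch, not a full answer): with an interaction term, the marginal effect of int on price is $\beta_{int} + \beta_{imp \times int} \cdot imp$, evaluated at a chosen import level, rather than the sum of all three coefficients. Using the illustrative coefficients above:

```python
# Marginal effect of int given its interaction with imp; the coefficient
# values are the illustrative ones quoted in the question.
b_imp, b_int, b_inter = 0.1, -0.01, -0.000005

def marginal_effect_int(imp):
    # d(price)/d(int) at a given level of imports
    return b_int + b_inter * imp

print(marginal_effect_int(0.0))    # -0.01 when imports are zero
print(marginal_effect_int(100.0))  # about -0.0105 at 100 MWh of imports
```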
| Interpreting regression and variable set-up | CC BY-SA 4.0 | null | 2023-03-16T09:58:32.157 | 2023-03-16T14:50:32.303 | 2023-03-16T10:07:56.373 | 383228 | 383228 | [
"r",
"regression",
"interaction",
"panel-data",
"variable"
] |
609667 | 1 | null | null | 1 | 13 | I have 4 experimental groups/conditions and 5 measurement times for each group/condition. Each participant only took part in one of the 4 conditions. In total there are 27 participants, and each condition has around 6 participants.
In my data set, several participants are missing a value during one of the 5 measurement times. So there are several cases (rows) where the participant does not have 5 measurement times but only 4 for example. The missing values are completely random and exist in all conditions.
My problem is the following: because RM-ANOVA does case-wise deletion, I end up with around 6 fewer cases, which severely impacts my results.
What I am wondering is whether it is feasible to impute the missing data, using regression for example. And how many values can I impute before it is no longer reasonable?
| Imputing missing values for a RM-ANOVA | CC BY-SA 4.0 | null | 2023-03-16T10:01:10.577 | 2023-03-21T20:33:23.900 | 2023-03-21T20:09:00.790 | 11887 | 383330 | [
"anova",
"repeated-measures",
"missing-data",
"data-imputation"
] |
609669 | 2 | null | 608527 | 0 | null | If you use the same random seed but replace `25` with `2500` and `50` with `5000` for the numbers, then you get the expected coefficient and hazard ratio:
```
coxph(Surv(time, event) ~ treatment, data)
# Call:
# coxph(formula = Surv(time, event) ~ treatment, data = data)
#
# coef exp(coef) se(coef) z p
# treatment -0.05108 0.95021 0.02830 -1.805 0.0711
#
# Likelihood ratio test=3.26 on 1 df, p=0.07115
# n= 5000, number of events= 5000
```
The hazard ratio you specified is very close to 1. Even with 5000 events, this larger random sample doesn't find a "statistically significant" difference under the usual p < 0.05 criterion.
The result from the sample of 50 that you took was well within sampling error. As a quick check, take 999 random samples of 25 treatment and 25 control cases from these 5000. Look at the distribution of coefficient estimates.
```
set.seed(20230316)
c999 <- double(999)
for(sample in 1:999) {
cSample <- data[sample(1:2500, 25, replace=TRUE),];
tSample <- data[sample(2501:5000, 25, replace=TRUE),];
c999[sample] <- coef(coxph(Surv(time, event) ~ treatment, data = rbind(cSample,tSample)))
}
```
95% of the coefficient estimates are between
```
c999[order(c999)][25]
# [1] -0.6704489
c999[order(c999)][975]
# [1] 0.5502292
```
Your particular value is well within those limits.
```
ecdf(c999)(-0.52)
#[1] 0.05505506
```
With your small sample size of 50, there was an even smaller coefficient estimate in more than 5% of samples from these 5000.
| null | CC BY-SA 4.0 | null | 2023-03-16T10:19:28.070 | 2023-03-16T10:46:41.973 | 2023-03-16T10:46:41.973 | 28500 | 28500 | null |
609670 | 2 | null | 608127 | 2 | null | In "The use of multiple measurements in taxonomic problems" Fisher asked the question
>
What linear function of the four measurements $$X=\lambda_1x_1 + \lambda_2x_2 + \lambda_3x_3 + \lambda_4x_4 $$
will maximize the ratio of the difference between the specific means to the standard deviations within species?
So you can see LDA as finding the linear combination of the measured variables that maximizes the F-ratio in an ANOVA test.
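As a small sketch of that criterion (toy Gaussian data and plain NumPy, my own illustration): the Fisher direction is $w \propto S_W^{-1}(\mu_1-\mu_0)$, the within-class-scatter-whitened difference of class means.

```python
import numpy as np

def fisher_direction(X0, X1):
    # w ∝ Sw^{-1} (mu1 - mu0): the linear combination maximizing
    # between-class separation relative to within-class scatter
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
X0 = rng.normal(loc=[0, 0], size=(500, 2))   # class 0
X1 = rng.normal(loc=[5, 0], size=(500, 2))   # class 1, shifted along the first axis
w = fisher_direction(X0, X1)                 # should point (roughly) along the first axis
```

Projecting the data onto `w` then gives the one-dimensional representation referred to below.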
[](https://i.stack.imgur.com/Yi5UG.png)
>
If there are two classes then the LDA draws one hyperplane and projects the data onto this hyperplane in such a way as to maximize the separation of the two categories. This hyperplane is created according to the two criteria considered simultaneously:
Maximizing the distance between the means of two classes;
Minimizing the variation between each category.
The quote is more or less correct.
It is more precisely about maximizing the ratio of the 'distance between the means of two classes' to 'the variation within each category'.
The projection is onto a line, not onto a hyperplane. Although the line defines a range of hyperplanes that can be used in categorisation.
>
LDA can be viewed as a special case of the Bayes classifier.
If you estimate the distributions as multivariate normal distributions with the same covariance, then the Bayes classifier is a hyperplane perpendicular to the LDA-axis.
An example of the hyperplane is below (which also shows the hyperplane for qda, which relaxes the assumption of equal covariance matrices)
[](https://i.stack.imgur.com/rTnVG.png)
| null | CC BY-SA 4.0 | null | 2023-03-16T10:24:04.180 | 2023-03-16T10:30:29.113 | 2023-03-16T10:30:29.113 | 164061 | 164061 | null |
609671 | 1 | null | null | 0 | 35 | I am currently attempting to use the ART method for analysing some non-normal self-report Likert data using repeated measures ANOVA. I ran everything on R following these guides rigorously:
[https://cran.r-project.org/web/packages/ARTool/readme/README.html](https://cran.r-project.org/web/packages/ARTool/readme/README.html)
[https://cran.r-project.org/web/packages/ARTool/ARTool.pdf](https://cran.r-project.org/web/packages/ARTool/ARTool.pdf)
[https://rcompanion.org/handbook/F_16.html](https://rcompanion.org/handbook/F_16.html)
However, there seemed to be a discrepancy in the degrees of freedom, F and p-values, but mean square residuals are identical. We had 30 participants and 4 conditions (2x2 design), so 120 observations. When running the 2x2 RM ANOVA we would expect (1, 29) df. When running the code in the guide (for example):
```
m <- art(response ~ Variable1 * Variable2 + Error(Participant), data=myData)
```
or
```
m <- art(response ~ Variable1 * Variable2 + (1|Participant), data=myData)
```
and then
```
anova(m)
```
We get df of (1, 87) for some reason – the appropriateness of the alignment was also verified in the summary (Step 2 in the guide), so I am not sure why.
I found another general example of an RM code online (but not in any official guides) here [Coding repeated measures 2x2 ANOVA for aligned rank transformed data](https://stats.stackexchange.com/questions/267842/coding-repeated-measures-2x2-anova-for-aligned-rank-transformed-data) with a slight alteration:
```
m <- art(response ~ Variable1 * Variable2 + Error(Participant/(Variable1 * Variable2)), data=myData)
anova(m)
```
This actually gives the appropriate degrees of freedom (1, 29) but changes the raw sum of squares residuals (but not mean square residuals) and results slightly.
Does anyone know why this is, and which specification might be appropriate?
| Using aligned rank transform (ART) for two-way repeated measures ANOVA, two different codes from official guides give different results | CC BY-SA 4.0 | null | 2023-03-16T10:30:58.893 | 2023-03-16T10:49:50.493 | 2023-03-16T10:49:50.493 | 362671 | 383224 | [
"repeated-measures"
] |
609673 | 1 | null | null | 1 | 22 | I have conducted a very large survey (n>10,000). I asked all participants a series of demographic questions (gender, ethnicity, educational background etc - all categorical). I then asked all participants further questions where again all is categorical, for example "what will you do? a, b, c, d, e (1)
Afterwards,I ask all participants a series of 15 items in a Likert Scale format, from "Not at all Important" to "Very Important" (5 options). (2)
My goal is to carry out a sub-analysis to see if there is a trend within demographic subgroups for both (1) and (2). Forgive me if I sound ignorant, but I was thinking that for (1) a simple chi-squared is sufficient? I know that shows whether there is a significant relationship, but is there any way of finding the correlation between 2 categorical variables?
For (2), I have searched extensively and come up with ANOVA, Kruskal Wallis, Mann-Whitney etc. What should I use? And is there an example anywhere?
Thank you so much for your help! Really appreciate it
| Likert Scale - Statistical Analysis? | CC BY-SA 4.0 | null | 2023-03-16T10:46:59.377 | 2023-03-16T10:51:51.180 | 2023-03-16T10:51:51.180 | 383374 | 383374 | [
"anova",
"wilcoxon-mann-whitney-test",
"likert",
"kruskal-wallis-test"
] |
609674 | 2 | null | 609585 | 0 | null | Ok, so after checking also with the original authors - in the 3rd equation ($\partial \mathcal l/\partial \mu_\mathcal B$) the 2nd term does indeed simplify to 0.
The 4th equation ($\partial \mathcal l/\partial x_i$) however is not summed by itself but is multiplied as part of an outer product with the inputs to the layer - so when computing the gradient of the entire batch, we will have a vector of size $(N,)$ per neuron, or $(N,k)$ matrix per layer if we have $k$ neurons. To compute the downstream $\partial\mathcal l/\partial W $ we will do an outer product $a\cdot \partial\mathcal l/\partial x$ (where $a$ are the activations from the last layer / input to the current layer, and $\cdot$ is an outer product) [this is just one (simple) way to present it, we could also use tensors instead]. It's true that if we sum the matrix across the rows / batch it will sum to 0, but this is not what we do for the gradients.
Also note that in the 4th equation, $\partial \mathcal l/\partial \sigma^2_\mathcal B, \partial \mathcal l/\partial \mu_\mathcal B$ are of size $(,k)$ and need to be broadcast to the size of the matrix $(N,k)$ when computing the gradient for the entire batch.
Update: note that the 4th equation can be simplified further if we plug in the $\partial \mathcal l/\partial \sigma^2_\mathcal B, \partial \mathcal l/\partial \mu_\mathcal B$ derivatives, and also replace some terms with $\hat x$:
$$ \frac{\partial \mathcal L}{\partial x} = \frac{1}{n \sqrt {\sigma^2+\epsilon}}[n\frac{\partial \mathcal L}{\partial \hat x} - 1^T\frac{\partial \mathcal L}{\partial \hat x}-\hat x(1^T\frac{\partial \mathcal L}{\partial \hat x}\hat x)]\\
$$
You can check the full derivation on this YouTube video I made [here](https://www.youtube.com/watch?v=Y23QgQGAGJQ).
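For what it's worth, the simplified equation can also be sanity-checked numerically. Here is a minimal NumPy sketch (my own illustration, assuming $\gamma = 1$ and a loss defined directly on $\hat x$), compared against a finite-difference gradient:

```python
import numpy as np

def bn_forward(x, eps=1e-5):
    # per-feature normalization over the batch axis
    var = x.var(axis=0)
    xhat = (x - x.mean(axis=0)) / np.sqrt(var + eps)
    return xhat, var

def bn_backward(dxhat, xhat, var, eps=1e-5):
    # the simplified dL/dx from the equation above
    n = dxhat.shape[0]
    return (n * dxhat - dxhat.sum(axis=0)
            - xhat * (dxhat * xhat).sum(axis=0)) / (n * np.sqrt(var + eps))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))
w = rng.normal(size=(8, 3))        # loss L = sum(w * xhat), so dL/dxhat = w
xhat, var = bn_forward(x)
dx = bn_backward(w, xhat, var)

# finite-difference check of dL/dx, element by element
num = np.zeros_like(x)
h = 1e-6
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        xp = x.copy(); xp[i, j] += h
        xm = x.copy(); xm[i, j] -= h
        num[i, j] = ((w * bn_forward(xp)[0]).sum()
                     - (w * bn_forward(xm)[0]).sum()) / (2 * h)
```

The analytic `dx` and the numeric `num` should agree to within finite-difference error.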
| null | CC BY-SA 4.0 | null | 2023-03-16T11:08:59.087 | 2023-03-19T12:27:11.760 | 2023-03-19T12:27:11.760 | 117705 | 117705 | null |
609675 | 1 | null | null | 0 | 19 | I am self-learning about structural time series, and for me the best way to understand topic is to simulate the data myself. I want to simulate a time series of local level model with seasonal components:
$$y_t = \mu_t + \gamma_t + v_t$$
$$ \mu_t = \mu_{t-1} +w_t$$
Where both disturbance terms are normally distributed. My code to simulate the time series is:
```
require(bsts)
set.seed(1234)
y2 <- c()
mu2 <- c(0)
seasons <- seq(-5,5,length.out =12)/5
for (i in 2:200) {
w1 <- rnorm(1,0,0.1)
v <- rnorm(1,0,0.1)
mu2 <- c(mu2, mu2[i-1] +w1)
y2 <- c(y2,mu2[i]+seasons[i%%12-1] +v)
}
```
I intentionally put really strong seasonality and small disturbance variances, so the model could capture it easier:
```
ss <- AddLocalLevel(list(), y2)
ss <- AddSeasonal(ss, y2, nseasons = 12)
model <- bsts(y2 ,state.specification = ss,niter = 3000)
pr <- predict(model,horizon = 30,burn = 100)
plot(pr)
```
Unfortunately, the predictions don't seem to capture the seasonality at all:
[](https://i.stack.imgur.com/UTWYy.png)
What did I do wrong? Did I poorly simulate the time series? Or did I mis-specify the model? Or both?
| BSTS package cannot capture seasons in simulated data | CC BY-SA 4.0 | null | 2023-03-16T11:27:47.257 | 2023-03-16T11:27:47.257 | null | null | 243578 | [
"r",
"time-series",
"seasonality",
"state-space-models",
"bsts"
] |
609677 | 1 | null | null | 0 | 17 | Assume I have a dependent variable (Y) that is the sum of 5 independent variables (x1, x2, ..., x5) and with no error.
I can subset my data into three groups with a grouping independent variable (g).
The three groups have different results on Y and I need to check which of the 5 independent variables is the most important (i.e. affects the sum the most) in making a difference between groups. For instance, different groups could have very similar values in x1-x4, and very different values in x5, and x5 would be the variable making the difference.
I can't use the standardized coefficients of the linear model because it has a perfect fit (no error term).
Also I was thinking of using the variation of each variable between the groups, but it could happen that the variable with the highest variability contributes very little to the total sum. So it wouldn't detect the variable causing the difference between groups in the dependent variable.
Can you recommend a method to measure variable importance under these conditions?
| Variable importance calculation in perfect fitted model (Y = x1+x2+x3) - which method? | CC BY-SA 4.0 | null | 2023-03-16T11:36:15.160 | 2023-03-16T11:57:35.410 | 2023-03-16T11:57:35.410 | 353633 | 353633 | [
"multiple-regression",
"overfitting",
"importance"
] |
609680 | 1 | 610652 | null | 3 | 64 | I have a regression which has treatment and control groups.
I want to test how the treatment group (binary variable) reacts during crises periods compared to control group.
The crisis variable is a summation of several crisis dummy variables whose maximum value is two (taking only values of 0, 1 and 2), as two crises overlap in some periods of my sample.
Thus, this crisis variable shows a non-zero value only if the time is during the crisis periods.
I interact this crisis variable with the treatment group.
If so, can I still call this a difference-in-differences regression?
I assume the crisis impact overlaps if two crises happen in the same period which is a necessary assumption I need to keep.
| Can I still call this as a difference-in-difference analysis? | CC BY-SA 4.0 | null | 2023-03-16T12:12:33.890 | 2023-03-24T21:16:12.610 | 2023-03-24T17:34:33.083 | 40447 | 40447 | [
"regression",
"difference-in-difference",
"treatment-effect",
"pre-post-comparison"
] |
609681 | 2 | null | 511617 | 1 | null | I am with you that the test statistics are useful largely because they imply (or at least suggest) p-values, so if you already have the p-value, working backward to get the test statistic does not make much sense. You have some sense of what p-value constitutes statistical significance (such as the venerable $p<0.05$). Compare your p-value to that threshold. If your calculation of what constitutes statistical significance is in terms of a threshold for the test statistic, that calculation came by using some $\alpha$ threshold for the p-value (such as $\vert t\vert>2$ to indicate significance at the $0.05$-level), so you can use that $\alpha$ to make sense of your p-value and determine its significance.
Whether or not such dichotomous thinking about significant/insignificant is good for science is a separate issue, and most statisticians would say that it is not. However, if that is how you want to proceed, you have the needed information in the p-value.
EDIT
[An exception is if you are using instrumental variables, where estimator bias appears to be a decreasing function of the $F$-statistic.](https://stats.stackexchange.com/a/610836/247274)
| null | CC BY-SA 4.0 | null | 2023-03-16T12:28:09.970 | 2023-03-29T20:35:08.073 | 2023-03-29T20:35:08.073 | 247274 | 247274 | null |
609682 | 1 | null | null | 0 | 13 | I am trying to write the regression generated from the below function in R:
```
Arima(Dependent, order=c(1,0,0), xreg=x1, seasonal=list(order=c(1,0,0), period=3), include.mean=FALSE)
```
The estimated parameters are:
`ar1 --> -0.79`
`sar1 --> 0.74`
`x1 --> -0.06`
I am trying to match the fitted values generated in R with the manual reperformance of the model using the estimated parameters. Could you please assist?
| Translate R output for regression with SARIMA errors to an equation | CC BY-SA 4.0 | null | 2023-03-16T12:35:39.227 | 2023-03-16T13:38:27.403 | 2023-03-16T13:38:27.403 | 53690 | 383384 | [
"r",
"arima",
"seasonality"
] |
609683 | 1 | 609728 | null | 3 | 110 | I ran an ordinal regression in R using polr(), and have checked the proportional odds assumptions with brant() from the brant package. I am very new to ordinal regression and I am a bit confused about how to interpret the output of the polr model.
My dependent variable is an ordered factor with 11 levels, and I have a series of independent variables, some continuous and some discrete.
Could anyone refer me to a relatively easy to understand guide to interpreting the outputs of polr() (coefficients, fit, etc)?
I have been following [this guide](https://stats.oarc.ucla.edu/r/faq/ologit-coefficients/), as suggested in the answer to a similar question. I have obtained the p values, CIs, and odds ratios. However, I am a bit confused about how to interpret them, as, unlike the example linked above, my DV has more than two levels.
The answer to [this question on CV](https://stats.stackexchange.com/questions/490913/interpreting-coefficients-from-ordinal-regression-r-polr-function) was also quite useful, but the independent variable in the example is binary and I was wondering how the interpretation would translate with, say, a continuous variable.
In short, how would I translate statements such as: "For every one unit increase in student’s GPA the odds of being more likely to apply (very or somewhat likely versus unlikely) is multiplied 1.85 times (i.e., increases 85%), holding constant all other variables" (from the 1st example above), but with a dependent variable with many levels?
| Interpreting ordinal regression output in R polr() | CC BY-SA 4.0 | null | 2023-03-16T12:40:38.590 | 2023-03-17T11:44:06.213 | null | null | 382486 | [
"r",
"model",
"ordinal-data",
"ordered-logit",
"polr"
] |
609684 | 2 | null | 538752 | 0 | null | There are some difficulties when it comes to answering this. For instance, how probable is it that the true coefficient is zero? If the true coefficient almost certainly is not zero, then there is an argument that the probability of a suprious significant coefficient is zero, since every rejection of the null hypothesis $H_0: \beta_i=0$ is correct. This is part of what the comment by whuber means about how the p-value depends on both the estimate and the true value. This is kind of blending frequentist and Bayesian thinking, yes, but the question phrasing seems to invite that.
My take on this interview question is that it is a test of whether or not you know how p-values behave under the null hypothesis. If you repeatedly test when the null hypothesis really is true, tests should give a $U(0,1)$ distribution of p-values. I demonstrate below in a simulation where $\mu=0$ is true, and the p-values give the desired uniform distribution on $(0,1)$.
```
library(ggplot2)
set.seed(2023)
N <- 10
R <- 10000
p <- rep(NA, R)
for (i in 1:R){
x <- rnorm(N, 0, 1)
p[i] <- t.test(x, mu = 0)$p.value
}
d <- data.frame(
p_value = p,
CDF = ecdf(p)(p)
)
ggplot(d, aes(x = p_value, y = CDF)) +
geom_point() +
geom_abline(slope = 1, intercept = 0)
```
[](https://i.stack.imgur.com/pKXGQ.png)
With that in mind, the distribution under the null hypothesis of p-values is $U(0,1)$, and if $X\sim U(0,1)$, then $P(X<x)=x$. Consequently, $P(X<0.05)=0.05$, and there is a $5\%$ chance of observing a suprious significant coefficient. If you want to consider multiple coefficients, assume independence, and assume all coefficients to truly equal zero, then I agree with your calculation of $1-0.95^p$, where $p$ is the number of coefficients. However, I think the discussion of the uniform distribution of p-values under the null hypothesis is the key part of this.
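As a quick numeric illustration of the $1-0.95^p$ calculation (sketched in Python for brevity, although the simulation above uses R — the arithmetic is the same):

```python
# family-wise chance of at least one spurious "significant" coefficient,
# assuming p independent tests that are all truly null
fwer = {p: 1 - 0.95 ** p for p in (1, 5, 10, 20)}
for p, prob in fwer.items():
    print(p, round(prob, 4))
```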
| null | CC BY-SA 4.0 | null | 2023-03-16T12:46:44.807 | 2023-03-16T12:46:44.807 | null | null | 247274 | null |
609685 | 1 | 609806 | null | 1 | 39 | I'm trying to figure out under what conditions one would make 'unconditional = FALSE' (in plot.gam and gratia::draw), because in my case 'unconditional = TRUE' shrunk the uncertainty bands around my smoothing parameters - which I take as a good thing. This is probably not always the case, but is there an intuitive explanation for what this does and when to use it? When would we treat the smoothing parameters as fixed?
From ?plot.gam:
>
...if TRUE then the smoothing parameter uncertainty corrected
covariance matrix is used to compute uncertainty bands, if available.
Otherwise the bands treat the smoothing parameters as fixed.
gratia::draw(m,
select = c(3,4),
unconditional = FALSE)
[](https://i.stack.imgur.com/rPpq0.png)
gratia::draw(m,
select = c(3,4),
unconditional = TRUE)
[](https://i.stack.imgur.com/YBRKS.png)
| When to use 'unconditional = FALSE' in plot.gam() | CC BY-SA 4.0 | null | 2023-03-16T12:49:52.480 | 2023-03-17T14:46:26.347 | 2023-03-16T12:55:20.087 | 337106 | 337106 | [
"covariance-matrix",
"generalized-additive-model",
"uncertainty"
] |
609686 | 1 | null | null | 0 | 29 | Let us assume that we are given $m$ iid samples from an unknown discrete distribution over $[k]$. Let's also assume that we are interested in a distributional property that is label-invariant. Let us define the fingerprint $f$ (also known as collision statistics or histogram of histogram) of the $m$ samples as a vector whose $i$th coordinate equals the number of elements which appear exactly $i$ times in the sample. As an example, $f[1]$, the number of elements which appears exactly once in the sample is likely to be maximized for the uniform distribution. Uniformity is one example of a label-invariant distribution property that remains invariant under relabeling of elements. Entropy and support size are other properties of distributions that also remain invariant under element relabeling.
The question that I am interested in is the following: Let $P$ and $Q$ be two discrete distributions having total variation distance at most $\epsilon$, for some small $\epsilon>0$. What can we say about the fingerprints of samples of size $m$ from these two distributions? How close would the respective fingerprints be? What would be an appropriate notion of distance between the fingerprints?
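For concreteness, the fingerprint as defined above can be computed like this (a small Python sketch of my own, just to fix notation):

```python
from collections import Counter

def fingerprint(samples):
    counts = Counter(samples)            # how often each element appears
    mult = Counter(counts.values())      # how many elements appear exactly i times
    m = len(samples)
    return [mult.get(i, 0) for i in range(1, m + 1)]

f = fingerprint(["a", "a", "b", "c"])    # two singletons, one pair -> [2, 1, 0, 0]
```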
Thanks,
| Comparison of Fingerprints of Discrete Distributions | CC BY-SA 4.0 | null | 2023-03-16T12:52:59.053 | 2023-03-16T17:50:07.357 | 2023-03-16T17:50:07.357 | 300233 | 300233 | [
"distributions",
"distance"
] |
609687 | 1 | null | null | 0 | 54 | I have a dataset of 40 patients which receive 4 different treatments, group 1, is placebo, group 2, drug A, group 3, drug B and group 4, drug A+B. Only one dose from each drug and combination. I would like to use Bliss independence model to test for synergy of drugs A and B. My response variable is tumor growth from baseline, so values are ranging between 20-1500. I first caluculated tumor inhibition rate for each group (1-mean(tumor growth for group i)/(mean growth for placebo)), i = (A,B,A+B). Next I calculated the expected tumor inhibition for drug A+B as: E(AB)=E(A)+E(B)-E(A)*E(B), where E(A) and E(B) are the observed tumor inhibition rates which I calculated earlier. And finally I calculated Bliss independence index as BII=(O(AB)-E(AB))/(1-E(AB)), where O(AB) is the observed tumor inhibition rate for A+B. I get positive value for this BII, so apparently the combination is synergic, but is there any statistical way to test for this? I would like to get some confidence interval or p-value to test how significantly the combination is synergic. Or is it possible if we have only one dose? Would it make sense to use e.g. bootstrap method to create variance for the BII value? Any other method than Bliss also would work.
| Test for synergy using Bliss independence model for two drugs | CC BY-SA 4.0 | null | 2023-03-16T12:53:53.600 | 2023-03-17T16:14:15.050 | 2023-03-17T16:14:15.050 | 271652 | 271652 | [
"anova",
"variance",
"p-value",
"bootstrap",
"sas"
] |
609689 | 1 | null | null | 2 | 73 | I have a general question about running multiple pairwise comparisons. I did some genetic work in grad school, where we (I believe) utilized a statistical test (Student's T-test?) followed by a FDR-adjustment. This seemed to work fairly well at limiting our gathering of false-positives. However, in my current role, in some cases, I end up using Tukey HSD for multiple comparisons (it's especially useful when I wish to categorize groups). As I understand, Tukey HSD also accounts for multiple comparisons.
My question is, when trying to determine what statistical test to perform, is there a good way to know when to use something like a Benjamini-Hochberg control of FDR, and when to perform a test like Tukey HSD? Is there a rule-of-thumb based on how many pairwise comparisons you wish to make, and is one method more sensitive than the other (able to detect differences better, but more-likely to detect false-positives)?
Thanks!
| Pairwise comparison; Tukey HSD or FDR | CC BY-SA 4.0 | null | 2023-03-16T13:31:54.103 | 2023-03-16T13:31:54.103 | null | null | 383391 | [
"false-discovery-rate",
"tukey-hsd-test"
] |
609691 | 2 | null | 609559 | 2 | null | I am the main developer of HiClass.
Apparently, from the URL you linked you are using a third-party package from globality-corp, which is developed by someone else. This is the correct link for [HiClass](https://github.com/scikit-learn-contrib/hiclass), which is currently hosted on scikit-learn-contrib.
Answering your questions:
1. Is there a handy way to retrieve just the predicted classes (e.g. predicting just the "Bike" class instead of the hierarchical passway ["Root", "non-Motorised", "Bike"]?
Unfortunately there is not an implementation to only return the leaf nodes at the moment, since the assumption we had with hierarchical classification is that all levels are important for the prediction. Please, correct me if I am wrong, but if you are only interested in the leaf nodes wouldn't flat classification be more appropriate for your use-case? I would need to update the algorithm to accommodate for that and it will probably take some time. I imagine an easier alternative would be for you to do some post-processing to return the last level that is not empty.
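That post-processing could look something like this (my own sketch, not part of HiClass):

```python
def leaf_labels(pred_rows):
    # keep the last non-empty level of each predicted path
    return [next(lbl for lbl in reversed(list(row)) if lbl != "")
            for row in pred_rows]

leaves = leaf_labels([["Root", "non-Motorised", "Bike", ""],
                      ["Root", "Motorised", "Public", "Bus"]])
# leaves == ["Bike", "Bus"]
```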
2. Given that my classes have different tree levels (Walk & Bike -> level-2, others level-3), how then should the hierarchy of the Walk & Bike classes be specified (["Root", "non-Motorised", "Bike"] or ["Root", "non-Motorised", "Bike", " "]) considering the statement I quoted above?
It is fine to provide the labels with nested lists, i.e., just as ["Root", "non-Motorised", "Bike"] and HiClass will add empty levels as necessary.
Edit: I think it would be useful to link this example from the gallery of examples to complement my answer to question 2: [training with different number of levels](https://hiclass.readthedocs.io/en/latest/auto_examples/plot_empty_levels.html)
For future questions, please feel free to open an issue on GitHub or send me an email since I only read stack exchange occasionally.
Best regards,
Fabio
| null | CC BY-SA 4.0 | null | 2023-03-16T14:03:07.590 | 2023-03-16T14:04:57.097 | 2023-03-16T14:04:57.097 | 343396 | 343396 | null |
609692 | 1 | 609706 | null | 1 | 42 | Can someone explain to me what's going on in the following?
Suppose we have data with constant dependent variable:
```
set.seed(42)
dat <- data.frame('y' = rep(10, 100), 'x' = rnorm(100, mean=10))
```
A bivariate regression model with intercept has perfect fit and constant residuals, as expected:
```
m1 <- lm(y ~ x, data = dat)
summary(m1)
Call:
lm(formula = y ~ x, data = dat)
Residuals:
Min 1Q Median 3Q Max
0 0 0 0 0
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 10 0 Inf <2e-16 ***
x 0 0 NaN NaN
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0 on 98 degrees of freedom
Multiple R-squared: NaN, Adjusted R-squared: NaN
F-statistic: NaN on 1 and 98 DF, p-value: NA
Warning message:
In summary.lm(m1) : essentially perfect fit: summary may be unreliable
```
However, a model with the same data but without intercept does neither have perfect fit nor constant residuals:
```
m2 <- lm(y ~ 0 + x, data = dat)
summary(m2)
Call:
lm(formula = y ~ 0 + x, data = dat)
Residuals:
Min 1Q Median 3Q Max
-2.11758 -0.51485 0.04904 0.74581 3.08951
Coefficients:
Estimate Std. Error t value Pr(>|t|)
x 0.98624 0.01024 96.34 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.032 on 99 degrees of freedom
Multiple R-squared: 0.9894, Adjusted R-squared: 0.9893
F-statistic: 9282 on 1 and 99 DF, p-value: < 2.2e-16
```
Centering the predictor restores constant residuals but not perfect fit:
```
dat2 <- dat
dat2$x <- scale(dat2$x, scale = F)
m3 <- lm(y ~ 0 + x, data = dat2)
summary(m3)
Call:
lm(formula = y ~ 0 + x, data = dat2)
Residuals:
Min 1Q Median 3Q Max
10 10 10 10 10
Coefficients:
Estimate Std. Error t value Pr(>|t|)
x -8.401e-15 9.700e-01 0 1
Residual standard error: 10.05 on 99 degrees of freedom
Multiple R-squared: 6.475e-31, Adjusted R-squared: -0.0101
F-statistic: 6.41e-29 on 1 and 99 DF, p-value: 1
```
```
| Regression analysis with constant dependent variable | CC BY-SA 4.0 | null | 2023-03-16T14:22:41.963 | 2023-03-17T09:13:08.593 | 2023-03-17T09:13:08.593 | 290944 | 290944 | [
"r",
"regression",
"regression-coefficients",
"intercept",
"centering"
] |
609693 | 2 | null | 609399 | 1 | null | Don't.
As @SextusEmpiricus said in a comment, "The inclusion of the people that didn't receive rehabilitation into the analysis (with a time t=0) is wrong, because this is interpreted as those patients having received rehabilitation."
The best way to show a "survival" curve for duration of rehabilitation is to limit the curve to those who actually received rehabilitation and state that clearly in your report. Also report the fraction of those in the study who received rehabilitation. That way the reader won't be misled.
| null | CC BY-SA 4.0 | null | 2023-03-16T14:33:23.797 | 2023-03-16T14:33:23.797 | null | null | 28500 | null |
609694 | 1 | null | null | 0 | 36 | I'm new to time series analysis and I would like to know if any of you has any suggestion on website to check out about this topic.
| Do you have any suggestion for a website regarding time series analysis? | CC BY-SA 4.0 | null | 2023-03-16T14:45:30.090 | 2023-03-16T15:20:00.287 | 2023-03-16T15:20:00.287 | 362671 | 377525 | [
"time-series",
"references"
] |
609695 | 2 | null | 609666 | 0 | null | I suspect that your observation:
>
When I look at the coefficients it seems that the inti,t is a lot more significant than the interaction, which is likely due to the imported flows (in MWh) being quite small. But in theory, that variable shouldn't be that significant because it can't have an isolated main effect on price.
comes from the attempt to interpret a so-called "main effect" for a predictor involved in an interaction. That can easily lead to confusion.
With the interaction term, the coefficient for inti,t represents its association with outcome when inpi,t = 0. The "significance" of that "main effect" coefficient is for a difference of the estimate from 0 when inpi,t = 0. If you centered the inpi,t values, the coefficient for inti,t would change, and thus its difference from a value of 0. There's a simple explanation on [this page](https://stats.stackexchange.com/q/417029/28500).
With an interaction there is no simple interpretation of a "main effect." Thus you can't really compare the "significance" of that coefficient against the "significance" of the interaction coefficient, as you seem to be trying to do. Evaluate the model overall, not the individual "main effect" coefficients.
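A small simulation makes the centering point concrete (plain NumPy with hypothetical data): centering the interacting predictor leaves the interaction coefficient unchanged, but shifts the "main effect" of its partner.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(loc=5, size=n)          # non-zero mean, like raw flow values
y = 1 + 2 * x1 + 3 * x2 + 0.5 * x1 * x2 + rng.normal(scale=0.1, size=n)

def fit(x2v):
    # OLS with an intercept, both predictors, and their interaction
    X = np.column_stack([np.ones(n), x1, x2v, x1 * x2v])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_raw = fit(x2)               # x1 coefficient estimated near 2
b_cen = fit(x2 - x2.mean())   # x1 coefficient shifts to ~ 2 + 0.5 * mean(x2)
```

The interaction coefficient (`b_raw[3]` vs `b_cen[3]`) is identical up to numerical error, while the "main effect" of `x1` changes by exactly `b_raw[3] * mean(x2)` — which is why its "significance" is not interpretable in isolation.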
| null | CC BY-SA 4.0 | null | 2023-03-16T14:50:32.303 | 2023-03-16T14:50:32.303 | null | null | 28500 | null |
609696 | 2 | null | 609476 | 0 | null | The magnitude of a regression coefficient is related to the measurement scale of the associated continuous predictor. A change of 1 unit in outcome per millimeter change in a predictor is equivalent to a change of $10^{6}$ outcome units per kilometer change in the same predictor.
Without more details about the model it's hard to say if something else is going on in your case. But don't worry about the coefficient magnitudes per se, if they make sense in the overall context of the model.
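The unit dependence is easy to demonstrate (a throwaway NumPy example, not your model): converting the predictor from millimeters to kilometers multiplies the slope by $10^6$ while the fit itself is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
x_mm = rng.uniform(0, 1000, size=100)          # predictor measured in millimeters
y = 3.0 + 0.02 * x_mm + rng.normal(size=100)

slope_mm = np.polyfit(x_mm, y, 1)[0]
slope_km = np.polyfit(x_mm / 1e6, y, 1)[0]     # same predictor in kilometers
ratio = slope_km / slope_mm                    # ≈ 1e6
```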
| null | CC BY-SA 4.0 | null | 2023-03-16T15:00:25.643 | 2023-03-16T15:00:25.643 | null | null | 28500 | null |
609697 | 1 | 610063 | null | 1 | 33 | My motivation is to produce carcasses of animals in an ecosystem.
- The animals have discrete sizes in kg (75, 216, 700, 2500, 5000, 8500, 25000).
- I also have the estimated percentage each animal contributes to the
total ecosystem based on scaling relationships (49.3,
36.8,6,6.7,0.6,0.4,0.2)
- Finally, I also have the average amount of kg the system should support (1752kg)
So I want to have a discrete distribution that uses the first two bits of data to produce a system with 1752kg of animals on average. So it's fine that sometimes it would overshoot or undershoot the average.
[This answer](https://stats.stackexchange.com/questions/67911/how-to-sample-from-a-discrete-distribution) gets some of the way there but my average value seems to make a difference.
I ultimately want to implement this in NetLogo code but I'm familiar with R if people want to illustrate their answers that way.
| How to sample from a distribution with discrete variables with a known average? | CC BY-SA 4.0 | null | 2023-03-16T15:01:51.063 | 2023-03-20T14:22:06.917 | 2023-03-20T14:14:57.340 | 35989 | 318475 | [
"distributions",
"sampling",
"simulation",
"discrete-data"
] |
609698 | 1 | null | null | 0 | 11 | Consider $X, Y$ and suppose you have some i.i.d. observations of $Y$ and $X + \text{do}(Y)$ with observation noise $\epsilon_0$ and $\epsilon_1$ (Gaussian). So the observations of the latter should not influence out estimate of $Y$.
Is there some nice way of expressing this when doing the analysis (I've made this problem very simple, but consider doing regression or Bayesian updates)?
So you have some log-likelihood like this (for example)
$$
J = \frac{(\hat{Z}_0 - Y)^2}{\epsilon_0^2} + \frac{(\hat{Z}_1 - \text{do}(Y) - X)^2}{\epsilon_1^2}
$$
Are there any tricks to using the DAG matrix to effectively do a stop gradient?
This post seems related? [Mathematical notation for suppressing differentiation](https://stats.stackexchange.com/questions/519522/mathematical-notation-for-suppressing-differentiation)
Another way of asking this question is suppose you have a standard algorithm for dealing with the non-causal situation here (consider a filter or online mean) is there a simple way besides recursively solving the system, to change that solution into a causal one?
| How to modifiy regression or update equations to handle causal do-calculus type statements? | CC BY-SA 4.0 | null | 2023-03-16T15:02:28.127 | 2023-03-16T15:02:28.127 | null | null | 13610 | [
"causal-diagram"
] |
609701 | 2 | null | 370372 | 0 | null | Your analysis is fully confounded by indication and lacks a proper control. Some potential solutions may be to consider an external control from a different datasource, or depending on the nature of the response (presuming there are appropriate biomarker data on the indication) you may consider marginal structural models.
| null | CC BY-SA 4.0 | null | 2023-03-16T15:14:49.677 | 2023-03-16T15:14:49.677 | null | null | 8013 | null |
609702 | 2 | null | 609305 | 3 | null | There's a risk here of circular logic, related to the problem of [survivorship bias](https://en.wikipedia.org/wiki/Survivorship_bias). The pattern of treatment use might be due to the probability of treatment success rather than the other way around, as you would like to infer.
For an extreme example, say that a child has a mild asthma attack that might well have resolved on its own. The child takes just one emergency treatment, and the attack ends. With many cases like that, under your interpretation of patterns you might be tempted to say that a single emergency treatment is the "best" pattern.
Similar problems might arise with any attempts to interpret differences in patterns of treatment as a function of eventual treatment success. Discuss these issues carefully with colleagues who understand the subject matter well.
| null | CC BY-SA 4.0 | null | 2023-03-16T15:21:59.257 | 2023-03-16T15:21:59.257 | null | null | 28500 | null |
609704 | 1 | null | null | 0 | 19 | How to prove the equation of E-step in EM-Algorithm?[](https://i.stack.imgur.com/CIEBf.jpg)
| How to prove the equation of E-step in EM-Algorithm? | CC BY-SA 4.0 | null | 2023-03-16T16:00:02.283 | 2023-03-16T16:03:43.100 | 2023-03-16T16:03:43.100 | 56940 | 383404 | [
"self-study",
"expectation-maximization"
] |
609705 | 1 | null | null | 0 | 7 | I would appreciate your expertise. Firstly, I am not a statistician, so I kindly ask you as a statistician. The problem I am facing:
I am conducting a retrospective longitudinal analysis of patient records. Consider a group with an index date for a particular disease event of interest. Patients are followed for N months until they are lost to follow-up, or an outcome, death, occurs.
For our statistical models, we collect patient characteristics found during a 12-month baseline before the index date as covariates. There are several covariates to consider: exposure to a drug, and comorbidity indications within that baseline period for life-long conditions such as Crohn's disease.
It has been suggested that using comorbidity (disease) data found during a 1-year baseline could result in immortal time bias. I am struggling to see how that is the case. I understand immortal time bias in terms of when the presence of a covariate is measured during the follow-up time, e.g., exposure to a drug, because that will divide populations into those who must live long enough to experience that exposure. But I can't grasp how covariates before an index date will introduce immortal time bias.
I would be grateful if someone could explain this in simple terms, and then how to go about adjusting the timelines to account for immortal time bias.
Many thanks
| How to account for immortal time bias in retrospective longitudinal studies from covariates during a baseline period? | CC BY-SA 4.0 | null | 2023-03-16T16:16:20.047 | 2023-03-16T16:16:20.047 | null | null | 231723 | [
"panel-data",
"observational-study"
] |
609706 | 2 | null | 609692 | 1 | null | Algebraically, it's quite straightforward to show why this is the case, but here's a visual explanation. This is a scatterplot of your data:
[](https://i.stack.imgur.com/A0hen.png)
You want to fit the model $y_i = \alpha + \beta x_i + \varepsilon_i$, $i=1,\ldots,100$. Your predictor of $y_i$ is $\hat{y}_i=\hat\alpha+\hat\beta x_i$, a straight line. In your particular case, you have $y_i=10$ for all $i$.
- In the model with intercept, you can get a perfect fit ($y_i=\hat{y}_i$ for all $i$) by setting $\hat\alpha=10$ and $\hat\beta=0$, a horizontal line. The residuals are $y_i-\hat{y}_i=0$.
[](https://i.stack.imgur.com/R14Ww.png)
- In the model without an intercept ($\alpha=0$), you're forcing the line to pass through the origin. There is no value of $\hat\beta$ that gives you a perfect fit, as that would require $\hat\beta x_i=10$ for all $i$; this is not possible, as the $x_i$ are all different. The residuals from this model (vertical distances) are $y_i-\hat{y}_i=y_i-\hat\beta x_i$, so they're all different.
[](https://i.stack.imgur.com/eVMSM.png)
- If you center the predictor, you still can't get a perfect fit, but the line of best fit is horizontal* again (so the residuals are equal).
[](https://i.stack.imgur.com/9tDRg.png)
*in fact exactly horizontal here: centering makes $\sum_i (x_i-\bar{x})=0$, so with constant $y_i=10$ the no-intercept slope is $\hat\beta=10\sum_i(x_i-\bar{x})\big/\sum_i(x_i-\bar{x})^2=0$.
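These three cases are easy to verify numerically in R (the $x$ values below are arbitrary):
```
set.seed(1)
x <- runif(100)
y <- rep(10, 100)

r1 <- resid(lm(y ~ x))                   # with intercept: residuals all ~ 0
r2 <- resid(lm(y ~ x - 1))               # through the origin: residuals differ
r3 <- resid(lm(y ~ I(x - mean(x)) - 1))  # centered, no intercept: residuals equal

range(r1)  # numerically zero
range(r2)  # clearly spread out
range(r3)  # all (numerically) equal
```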
| null | CC BY-SA 4.0 | null | 2023-03-16T16:17:17.430 | 2023-03-16T16:17:17.430 | null | null | 238285 | null |
609707 | 2 | null | 585223 | 2 | null | This just in... Try using the logistf package instead of `glm`:
```
library(logistf)
> mf <- logistf(
+ surv ~ treat ,
+ family = binomial(link = "logit"),
+ data = d
+ )
> emmeans(mf, specs = ~ treat, type = "response")
treat prob SE df lower.CL upper.CL
A 0.908 0.0413 235 0.789 0.963
B 0.929 0.0368 235 0.813 0.975
C 0.969 0.0246 235 0.861 0.994
Control 0.663 0.0675 235 0.521 0.781
D 0.990 0.0144 235 0.855 0.999
Confidence level used: 0.95
Intervals are back-transformed from the logit scale
```
### Note:
As this is written, `emmeans` support has just now been added to a branch of the logistf package, and it will take a while to reach CRAN. But in the meantime, you can use `qdrg()` instead as follows:
```
> mf.rg <- emmeans::qdrg(object = mf, data = d, link = "logit")
> emmeans(mf.rg, specs = ~ treat, type = "response")
(same output)
```
| null | CC BY-SA 4.0 | null | 2023-03-16T16:19:43.220 | 2023-03-16T16:19:43.220 | null | null | 52554 | null |
609709 | 2 | null | 609489 | 1 | null | With so many [parameterizations of Weibull models](https://stats.stackexchange.com/q/508139/28500) in survival analysis, you first need to identify the parameterization that has been used.
According to the [BaSTA manual](https://cran.r-project.org/web/packages/BaSTA/BaSTA.pdf), its Weibull parameterization is what Wikipedia calls the [second alternative](https://en.wikipedia.org/wiki/Weibull_distribution#Second_alternative). The BaSTA $b_0$ is the Wikipedia shape parameter $k$ and its $b_1$ is Wikipedia's rate parameter $\beta$. The rate parameter is the inverse of the scale parameter $\lambda$ in Wikipedia's "standard parameterization."
The survival function in terms of $b_0$ and $b_1$ is:
$$S(x) = \exp(-(b_1 x)^{b_0}).$$
The discussion of [Parametric Survival Models](https://grodri.github.io/survival/ParametricSurvival.pdf) by Germán Rodríguez puts this into a form that might be more readily interpreted. For a Weibull model parameterized this way, the distribution of log-survival times can be written:
$$\log X = -\log b_1 + \frac{W}{b_0} ,$$
where $W$ is a standard minimum extreme value distribution. In that form, $b_1$ is related to a location of the distribution in log time, and $b_0$ is related to the inverse of the width of the distribution.
For your results, that location in time is earlier for males, but the width of the distribution is nominally wider for males ($b_0$ is smaller). Eventually, with that wider distribution, the tail of the distribution for males overlaps the distribution for females.
That said, I'd be careful in interpreting these results too closely. The standard errors of the $b_0$ values for males and females overlap, so one might argue that there isn't much evidence for a difference in distribution widths. I suspect that there is also covariance in the estimates of $b_1$ and $b_0$, complicating their separate interpretation.
Although it's possible to model $b_0$ as a function of covariate values (male/female), a frequent practice is only to model $-\log b_1$ as a function of covariates and assume a shared $b_0$ value for all cases. That might explain your data just as well, leading to a simple interpretation in terms of either accelerated failure times or proportional hazards for a Weibull model.
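To make the crossing of the two distributions concrete, here is a small R sketch using the survival function above, with hypothetical parameter values (not the estimates from the question):
```
surv <- function(x, b0, b1) exp(-(b1 * x)^b0)

x <- seq(0.01, 20, length.out = 200)
S_female <- surv(x, b0 = 2.0, b1 = 0.10)  # larger b0: narrower distribution
S_male   <- surv(x, b0 = 1.2, b1 = 0.12)  # smaller b0, larger b1: earlier but wider

surv(5,  b0 = 2.0, b1 = 0.10) > surv(5,  b0 = 1.2, b1 = 0.12)  # TRUE: males die faster early
surv(20, b0 = 2.0, b1 = 0.10) < surv(20, b0 = 1.2, b1 = 0.12)  # TRUE: the male tail ends up on top
```
`matplot(x, cbind(S_female, S_male), type = "l")` shows the two curves crossing.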
| null | CC BY-SA 4.0 | null | 2023-03-16T16:38:31.077 | 2023-03-16T17:17:40.467 | 2023-03-16T17:17:40.467 | 28500 | 28500 | null |
609710 | 1 | null | null | 0 | 59 | From various sources and readings, I have learned that the Nickell bias associated with employing fixed effects with a lagged dependent variable is small when T is large. I understand the notion intuitively, but I am unable to comprehend the underlying mathematics of the consistency result. I would appreciate it if someone could explain it to me. Thanks!
| Nickell Bias in dynamic Fixed Effect model for large T | CC BY-SA 4.0 | null | 2023-03-16T16:47:15.297 | 2023-03-16T16:47:15.297 | null | null | 367779 | [
"panel-data",
"fixed-effects-model",
"bias",
"dynamic-regression"
] |
609711 | 1 | null | null | 1 | 58 | I am struggling to understand two different quantile regression specifications and the assumptions of conditional quantile independence and full independence. In the first specification, suppose we have the location-scale model:
\begin{equation}
y_{i}=\beta_{0}+x_{i}^{'}\beta+(\delta_{0}+\delta_{1}x_{i}^{'})\epsilon_{i}
\end{equation}
So that $Var(\epsilon_{i}|x)$ depends on $x_{i}$. Our identification conditions in the linear quantile regression model restrict $Q_{\tau}(\epsilon_{i}|x_{i})=0$, so how is it possible in this model, that changes in $x_{i}$ can have any impact on any percentile of the conditional distribution?
In a more general, random coefficient model:
\begin{equation}
y_{i}=x_{i}^{'}\beta(\epsilon_{i}), \quad \epsilon_{i}\sim Unif(0,1),
\end{equation}
where we assume $\epsilon$ is independent from $x_{i}$. So how can there be any interesting marginal effects of the covariates on the conditional distribution if we assume full independence?
My conclusion is that when these independence assumptions are valid, quantile regression is useful only for its "robustness-to-outliers" property. When there is endogeneity, quantile regression becomes much more interesting, but requires different identification conditions such as those in IVQR (Chernozhukov and Hansen, 2005) or control functions (Lee, 2007 or Imbens and Newey, 2009) especially when a model such as the second equation represents a structural relationship between $y$ and $x$. Is this reasoning correct?
| Quantile Regression and Independent Errors | CC BY-SA 4.0 | null | 2023-03-16T16:48:24.447 | 2023-03-16T16:48:24.447 | null | null | 383405 | [
"econometrics",
"quantile-regression",
"endogeneity"
] |
609712 | 2 | null | 609609 | 3 | null | Not an entire solution, but some thoughts I have about this situation.
(Repeated) cross validation has (at least) two different sources of variance:
- variance uncertainty due to case sampling (I'll use n for the number of statistically independent cases), and
- variance due to model instability.
Looking at more surrogate models (`n_repeats * k`) will reduce the part of the variance uncertainty on the final estimate that is due to model instability. But the total number of tested cases stays the same (`n`) after the first complete run. That part of the variance uncertainty can only be reduced by more cases, more surrogate models cannot possibly help.
---
There's a further consideration: cross validation estimates for generalization error are often used as approximation of the generalization error of the model trained on all cases. This is the case when the task at hand is building a model on the data set at hand for application/production use. As opposed to comparing the performance of training algorithms for the given type of data (in that case, there's the problem that only part of the relevant variance components can be assessed by cross validation experiments - see Y. Bengio, Y. Grandvalet, No Unbiased Estimator of the Variance of K-Fold Cross-Validation, J. Mach. Learn. Res. 5 (2004) 1089–1105.)
For the production-use scenario, we say that the variation we observe between our estimate and any single surrogate model can serve as an approximation of the variance, due to training instability, between the average performance at `n` and the single model we then train on the full data set. That means that while the relevant variance uncertainty due to the finite number of tested cases goes down with $\frac{1}{n}$, no such reduction can be claimed for the model-instability part of the variance.
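As a rough sketch for a proportion-type metric (error rate on $n$ tested cases), the variance of the repeated-CV estimate decomposes approximately as
$$\operatorname{Var}(\hat p)\;\approx\;\underbrace{\frac{p(1-p)}{n}}_{\text{finite test set}}\;+\;\underbrace{\frac{\sigma^2_\text{instability}}{n_\text{rep}\,k}}_{\text{averaged over surrogate models}},$$
keeping in mind that the surrogate models are not truly independent, so this is only a heuristic. More repeats shrink the second term only; the first term is fixed by the $n$ available cases, and in the production-use scenario the full $\sigma^2_\text{instability}$ re-enters for the single model trained on all cases.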
| null | CC BY-SA 4.0 | null | 2023-03-16T16:53:02.940 | 2023-03-16T16:53:02.940 | null | null | 4598 | null |
609713 | 1 | 609779 | null | 0 | 73 | The Aalen model assumes that the cumulative hazard H(t) for a subject can be expressed as a(t) + X B(t), where a(t) is a time-dependent intercept term, X is the vector of covariates for the subject (possibly time-dependent), and B(t) is a time-dependent matrix of coefficients.
My intuition was that coefficient plots from `plot.aareg` produce cumulative hazards for each explanatory variable and intercept (or time-dependent baseline hazard). But should the cumulative hazard function be non-decreasing?
Anyhow, is there any way to get the survival function from this model?
[](https://i.stack.imgur.com/fX2EK.png)
```
library(survival)
library(tidyr)
library(dplyr)
lfit <- aareg(Surv(time, status) ~ sex , data=lung,
nmin=1)
plot(lfit)
tibble(t=lfit$times, coef=lfit$coefficient[,'Intercept']) %>%
group_by(t) %>%
summarise(baseline_coef_t=sum(coef)) %>%
mutate(baseline_cumhaz=cumsum(baseline_coef_t),
baseline_survival=exp(-baseline_cumhaz))
```
[](https://i.stack.imgur.com/WT5Dh.png)
| How get survival function estimates from Aalen's additive regression? | CC BY-SA 4.0 | null | 2023-03-16T16:57:20.347 | 2023-03-17T19:33:05.420 | null | null | 14729 | [
"r",
"self-study",
"survival"
] |
609714 | 1 | null | null | 0 | 27 | I have a dataset (n=290) that I need to run an EFA. I was hoping to also run a CFA but I'm unsure if I have enough data to do a split sample approach. Please advise.
| Split Sample for EFA and CFA with Smaller Dataset | CC BY-SA 4.0 | null | 2023-03-16T16:58:25.567 | 2023-03-16T16:58:25.567 | null | null | 383410 | [
"sample-size",
"structural-equation-modeling",
"confirmatory-factor"
] |
609715 | 2 | null | 462760 | 2 | null | Quick Take: [it turns out that the two are equivalent](https://stats.stackexchange.com/a/612574/247274), so it does not matter which you use as long as you are clear about what the terms mean and what numbers you input into the equations.
Let's break down what the terms mean in each equation.
$$
\text{Logistic Loss}\\
\dfrac{1}{N}\overset{N}{\underset{i=1}{\sum}}
\log\left(1 + \exp(-y_i w^Tx_i)\right)
$$
(This is the full "logistic loss"; the equation given in the question is each observation's contribution to the loss, of which the mean is then taken.)
$N$ is the sample size.
$y_i\in\{-1,+1\}$ is the $i$th true value.
$w^T$ is the transposed parameter vector estimate of the logistic regression.
$x_i$ is the $i$th feature vector (your vector of predictors).
Note that $w^Tx_i$ is the predicted value of the logistic regression on the log-odds scale (so before applying the inverse link function to convert to probability). After all, a generalized linear model is $g(\mathbb E[y\vert X=x_i])=w^Tx_i$.
Therefore, the logistic loss will be useful if you have coded your categories as $\pm1$. The predicted values you input into the loss function along with these $\pm1$-coded categories are the log-odds.
$$
\text{Log Loss}\\
-\dfrac{1}{N}\overset{N}{\underset{i=1}{\sum}}\left[
y_i \log(p(y_i)) + (1 - y_i)\log(1 - p(y_i))
\right]
$$
$N$ is the sample size.
$y_i\in\{0, 1\}$ is the $i$th true value.
$p(y_i)$ is the predicted probability that observation $i$ belongs to category $1$. This is the predicted value of the logistic regression on the probability scale, so applying the inverse of the log-odds logistic regression link function to the linear predictor of the logistic regression.
$$
p(y_i) = \dfrac{1}{
1 + \exp(-w^Tx_i)
}\\
\Big\Updownarrow\\
w^Tx_i = \log\left(
\dfrac{
p(y_i)
}{
1 - p(y_i)
}
\right)
$$
This "log" form of the loss function makes sense when the categories are coded as $0$ and $1$ instead of $\pm1$ and when you have predicted probabilities.
That you can convert easily between the $\{0,1\}$ and $\{-1,+1\}$ categorical encodings and between the log-odds and probabilities means that you are free to use whichever you like. Just keep track of what goes into which equation. For instance, do not mix together the predicted log-odds and $\{0,1\}$ encoding.
If you want to use log-odds and $\{-1,+1\}$ encoding, use the "logistic" form of the loss function. If you want to use probability and $\{0,1\}$ encoding, use the "log" form of the loss function.
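For completeness, the equivalence can be verified case by case. Write $z_i = w^Tx_i$ and $p(y_i) = 1/(1+\exp(-z_i))$. Then
$$y_i = 1\ (\pm\text{-coding: } +1):\quad -\log(p(y_i)) = \log\left(1 + \exp(-z_i)\right),$$
$$y_i = 0\ (\pm\text{-coding: } -1):\quad -\log(1 - p(y_i)) = \log\left(1 + \exp(z_i)\right),$$
and in both cases the right-hand side is exactly $\log\left(1 + \exp(-y_i^{\pm} z_i)\right)$, so the two losses agree observation by observation.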
EDIT
A simulation is not a proof, but it did give me a good feeling to see that, in the below simulation that calculates loss values each way for a range of (over $25000$) possible parameter values for the logistic regression, the two loss functions give the same loss value if the correct arguments are passed to each function.
```
set.seed(2023)
library(ggplot2)
N <- 100
x <- runif(N, 0, 1)
z <- 4*x - 2
p <- 1/(1 + exp(-z))
y01 <- rbinom(N, 1, p)
y_pm <- 2 * y01 - 1
b0s <- seq(-4, 0, 0.025)
b1s <- seq(2, 6, 0.025)
log_losses <- logistic_losses <- rep(NA, length(b0s) * length(b1s))
log_loss <- function(p, y){
return(
-mean(
(y) * log(p)
+
(1 - y) * log(1-p)
)
)
}
logistic_loss <- function(logodds, y){
return(
mean(
log(
1 + exp(
-y * logodds
)
)
)
)
}
counter <- 1
for (i in 1:length(b0s)){
print(i)
intercept <- b0s[i]
for (j in 1:length(b1s)){
slope <- b1s[j]
log_odds <- intercept + slope*x
probability <- 1/(1 + exp(-log_odds))
log_losses[counter] <- log_loss(probability, y01)
logistic_losses[counter] <- logistic_loss(log_odds, y_pm)
counter <- counter + 1
}
}
L <- lm(log_losses ~ logistic_losses)
d <- data.frame(
log_loss = log_losses,
logistic_loss = logistic_losses
)
ggplot(d, aes(x = logistic_loss, y = log_losses)) +
geom_point() +
geom_abline(slope = 1, intercept = 0)
summary(L)
```
```
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -5.384e-15 3.348e-17 -1.608e+02 <2e-16 ***
logistic_losses 1.000e+00 4.422e-17 2.261e+16 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.189e-15 on 25919 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: 1
F-statistic: 5.114e+32 on 1 and 25919 DF, p-value: < 2.2e-16
```
[](https://i.stack.imgur.com/5f2qN.png)
Indeed, the differences between the two calculations all are on the order of $10^{-16}$, if not smaller.
```
summary(abs(logistic_losses - log_losses))
```
```
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.000e+00 0.000e+00 0.000e+00 2.115e-17 0.000e+00 6.661e-16
```
| null | CC BY-SA 4.0 | null | 2023-03-16T17:01:40.740 | 2023-04-20T11:18:06.183 | 2023-04-20T11:18:06.183 | 247274 | 247274 | null |
609717 | 1 | null | null | 1 | 14 | Take the following regressions:
- y1 = a1+ d1*y3+ lag(Y)B1 + XG1 + u1
- y2 = a2 + d2*y3+ lag(Y)B2 + XG2 + u2
- y3 = a3 + d3*y3+ lag(Y)B3 + XG3 + u3
where Y = [y1, y2, y3] is a time-ordered matrix of data, terms "a" are intercepts, X is a matrix of covariates, B and G are vectors of coefficients, and u are error terms. lag(.) is the one-period lag operator.
Give the solution to the coefficient matrix [a, d, B, G] such that d3 = 0
I seem to have stumped chatgpt :D
Is this even feasible using stacked OLS (probably with some clever weighting?), or would I need to use a constrained optimizer?
| vector autoregression where one of the dependent variable's current values determines the other dependent variables | CC BY-SA 4.0 | null | 2023-03-16T17:14:31.183 | 2023-03-16T17:14:31.183 | null | null | 17359 | [
"time-series",
"econometrics",
"structural-equation-modeling",
"vector-autoregression"
] |
609719 | 1 | null | null | 0 | 23 | Consider the following toy panel data:
```
N <- 3
M <- 3
NM <- N*M
set.seed(123)
x <- runif(NM)
a <- rep(rnorm(N,sd=0.2),each=M)
l <- rep(rnorm(N,sd=0.2),each=M)
u <- rnorm(NM,sd=0.2)
y <- 1+a+l+x+u
i <- rep(1:3,each=3)
m <- rep(1:3,3)
d <- pdata.frame(data.frame(i,m,x,y));d[1:2,]
```
where `N` is the cross-sectional sample size and `M` is the time-series sample size. I would like to estimate by hand the output for random effect that I can get from R `plm` package:
```
plm(y~x, data=d, model="random",effect="individual")
Model Formula: y ~ x
Coefficients:
(Intercept) x
1.25031 0.86436
plm(y~x, data=d, model="random",effect="time")
Model Formula: y ~ x
Coefficients:
(Intercept) x
1.31581 0.75367
```
Here is my attempt:
```
unit_means <- aggregate(d[c("x", "y")], by = list(d$i), mean)
colnames(unit_means) <- c("i", "avg_x_unit", "avg_y_unit")
d$x_centered <- d$x - merge(d, unit_means)[, "avg_x_unit"]
d$y_centered <- d$y - merge(d, unit_means)[, "avg_y_unit"]
n_i <- tapply(d$i, d$i, length)
s_y_i <- tapply(d$y_centered^2, d$i, sum)
s_x_i <- tapply(d$x_centered^2, d$i, sum)
x_bar_i <- tapply(d$x_centered, d$i, sum) / n_i
y_bar_i <- tapply(d$y_centered, d$i, sum) / n_i
s_xy_i <- tapply(d$x_centered * d$y_centered, d$i, sum)
var_u <- (s_y_i - (s_xy_i^2 / s_x_i)) / (sum(n_i) - length(n_i))
b <- sum(s_xy_i - x_bar_i * sum(y_bar_i * n_i)) / sum(s_x_i - x_bar_i^2 * n_i)
a <- mean(d$y) - b * mean(d$x)
s_e <- sqrt((sum(d$y_centered^2) - b^2 * sum(d$x_centered^2) - var_u * sum(n_i)) / (sum(n_i) - length(n_i)))
c(a = a, b = b, var_u = var_u, s_e = s_e)
a b var_u.1 var_u.2 var_u.3 s_e.1 s_e.2
1.2463890312 0.8709813635 0.0008106072 0.0045818490 0.0085573429 0.2040443626 0.1896766704
s_e.3
0.1732454860
```
My results are quite close to, but not identical with, those from the first `plm` call. Why? And what about the second?
| How to get Random Effects Estmates by hand on R? | CC BY-SA 4.0 | null | 2023-03-16T17:21:08.950 | 2023-03-16T17:21:08.950 | null | null | 296201 | [
"r",
"mixed-model",
"estimation",
"panel-data",
"generalized-least-squares"
] |
609720 | 1 | null | null | 0 | 8 | At the moment I'm working on CRTs for policy. Normally, we would get data in a way that we can easily find the strata using cvcrand package in R (see below for example):
[](https://i.stack.imgur.com/ITGoQ.png)
However, my current project has very volatile characteristics which move week on week - meaning that for each county, we have weekly data where the variables we have to use to create the strata change quite a bit. If I try to use any of the previous analyses, they assign different strata to the same county. I've thought about taking an average of the last X weeks, but I'm not sure if that's the correct approach.
Any ideas would be appreciated!
Thanks
| Clustering Randomised Trials with multiple observations | CC BY-SA 4.0 | null | 2023-03-16T17:32:36.387 | 2023-03-16T17:32:36.387 | null | null | 136481 | [
"randomized-tests"
] |
609722 | 2 | null | 580049 | 0 | null | It is not so unusual for in-sample and out-of-sample data to have differences in the class ratios, just by flukes of randomly sampling to allocate observations to the in-sample and out-of-sample data (unless you make a point to stratify in order to maintain the class ratio). However, your difference is so great that I have to think something is inherently different about the second time period that makes it much more likely to have the first outcome than it is in the first period. You do not have to have many observations to find this difference to be statistically significant.
Consequently, by training in period $1$ and testing in period $2$, you are testing in an inherently different situation.
The reason you have better classification accuracy in the second period than the first period is likely due to the imbalance in the first period leading the model to predict probabilities that are on the low side, probably quite a bit below $0.5$. When you make predictions for the second period, your predictions will still tend to be below a probability of $0.5$, so when you apply a threshold of $0.5$, those get rounded to category $0$, which is much more common in this time period, so you are more likely to get the right answer.
(Or maybe your majority class is coded as $1$, and your predicted probabilities tend to be quite a bit higher than $0.5$. Analogous logic applies.)
If you evaluate the probabilistic predictions in the two time periods however, such as with log loss, Brier score, or even a ROC curve, you are likely to have better performance on the training data from period one than the testing data from period two. I would consider the stronger performance in terms of accuracy to be something of a mirage.
If you have some way to predict the drift of the [prior probability](https://stats.stackexchange.com/a/583115/247274) in each time period, you could perhaps [calibrate](https://stats.stackexchange.com/a/558950/247274) your out-of-sample probabilities to reflect the class ratio in that period (instead of, falsely, assuming the class ratio to remain constant). If you have this ability, however, you might be more inclined to use those determinants of the prior probability in a time period as features in your classifier or probability prediction model.
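One standard form of such a prior-shift correction, valid under the assumption that only the prior changes while the class-conditional feature distributions stay fixed, rescales the predicted odds via Bayes' rule:
$$p' = \frac{r\,p}{r\,p + (1 - p)}, \qquad r = \frac{\pi'/(1-\pi')}{\pi/(1-\pi)},$$
where $p$ is the predicted probability under the training-period prior $\pi$ and $p'$ is the calibrated probability under the new-period prior $\pi'$.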
If you do not have the ability to model how the prior probability in each time period changes, yet it does, then you are modeling a nonstationary process with no information about the dynamics. Of course your performance will be poor. (Again, your higher accuracy in the second time period is a mirage. If you evaluate the probabilities directly, you are likely to find worse performance in period two than in period one.)
| null | CC BY-SA 4.0 | null | 2023-03-16T17:43:33.293 | 2023-03-16T17:53:28.450 | 2023-03-16T17:53:28.450 | 247274 | 247274 | null |
609723 | 1 | null | null | 1 | 91 | I am going through the [introduction](https://xgboost.readthedocs.io/en/stable/tutorials/model.html) to XGBoost page, and there is a section where they derive the optimal value of the leaf node, for a given tree structure.
To quote the specific section,
>
In this equation, $w_j$ are independent with respect to each other, the form $G_j w_j + \frac{1}{2} (H_j + \lambda) w_j^2$ is quadratic and the best $w_j$ for a given structure $q(x)$ and the best objective reduction we can get is:
$$ w^*_j = -\frac{G_j}{H_j + \lambda}$$....
For context, $G_j w_j + \frac{1}{2} (H_j + \lambda) w_j^2$ is the summand of the objective, $G_j$ is the sum of sample gradients in leaf $j$ WRT leaf values evaluated at existing trees, $H_j$ is the sum of Hessian terms in leaf $j$, $w_j$ is the leaf value, and $\lambda$ is the L2-regularization constant.
Clearly $w^*_j$ is the stationary point of $w_j$ in the objective.
Now, the second derivative of the objective is $H_j + \lambda$. But a stationary point is only a minimum if the second derivative is positive. In this case, their equation for $w^*_j$ is only a minimizer of the objective if $H_j > 0$ (let's ignore $\lambda$ for simplicity here). If $H_j$ is negative, or zero, then $w^*_j$ may, in fact, be a maximizer or a saddle point.
I find that the second derivative of most typical objective functions is strictly positive. The common examples that come to mind are MSE ($1$), cross-entropy ($p (1 - p)$ where $p = \sigma(y)$), Poisson objective ($\frac{t}{y^2}$ where $t$ is the target). Which is a happy coincidence. But, as soon as we introduce a custom objective with a 2nd derivative that can be negative or zero, then XGBoost can no longer handle this. Is this true?
The reason I am asking this is that I am currently trying to train a model with a custom objective whose Hessian is sometimes $<0$, and I am getting the wrong behavior in very obvious cases. I even contrived a dataset where the leaf values should all be pushed to $-\infty$, yet they are instead pushed to $+\infty$. One "hacky" solution I found, which actually worked, is to replace the Hessian with its absolute value, though I can't theoretically justify why it works.
Is XGBoost unable to handle custom objectives with negative second derivatives? Or am I fundamentally misunderstanding something?
| Can XGBoost handle a custom objective where the 2nd derivative can be negative? | CC BY-SA 4.0 | null | 2023-03-16T17:44:43.577 | 2023-03-20T15:20:50.013 | 2023-03-20T15:20:50.013 | 73531 | 73531 | [
"machine-learning",
"optimization",
"boosting",
"cart",
"loss-functions"
] |
609724 | 1 | null | null | 0 | 48 | Suppose two people want to play a game in which person A
has probability 2/3 of winning. However, the only thing that they have is a
fair coin which they can flip as many times as they want. They wish to find
a method that requires only a finite number of coin flips.
- Give one method to use the coins to simulate an experiment with probability 2/3 of success. The number of flips needed can be random, but
it must be finite with probability one.
- Suppose K < $\infty$. Explain why there is no method such that with
probability one we flip the coin at most K times.
- Repeat the last exercise with 2/3 replaced by 1/π.
My answers:
It seems like author is asking about a probability problem.
Here's one way to simulate an experiment with a probability of 2/3 success using a fair coin:
- Flip the coin twice. If it comes up heads both times, then the experiment is considered a success (with probability 1/4). If it comes up tails both times, then the experiment is considered a failure (with probability 1/4). If it comes up one head and one tail (in any order), then start over and flip the coin twice again. This process can be repeated until either two heads or two tails come up.
- The reason why there is no method such that with probability one we flip the coin at most K times is because no matter how large K is, there is always a non-zero probability that we will need to flip the coin more than K times before getting either two heads or two tails.
- To simulate an experiment with a probability of 1/π success using a fair coin, you could use a similar method as above but with different stopping conditions based on the binary representation of π.
Explanation of how to simulate an experiment with 1/π success:
Here's one way to simulate an experiment with a probability of 1/π success using a fair coin:
- Write out the binary representation of π: 11.00100100001111110110...
- Flip the coin once to determine whether the first digit after the binary point is 0 or 1. If it's 0 and the coin comes up heads, then the experiment is considered a success (with probability 1/2). If it's 1 and the coin comes up tails, then the experiment is considered a failure (with probability 1/2).
- If neither of these conditions is met (i.e., if it's 0 and the coin comes up tails or if it's 1 and the coin comes up heads), then flip the coin again to determine whether the second digit after the binary point is 0 or 1 and repeat this process until either a success or failure condition is met.
This method will eventually result in either a success or failure with probability one because π has an infinite number of digits in its binary representation.
Credit goes to Microsoft new Bing chatgpt.
| Stochastic Calculus: probability | CC BY-SA 4.0 | null | 2023-03-16T17:46:16.133 | 2023-03-17T03:30:26.800 | 2023-03-17T03:30:26.800 | 72126 | 72126 | [
"probability",
"stochastic-calculus"
] |
609726 | 1 | null | null | 7 | 480 | Say I have a questionnaire with 5 questions about anxiety. For each question, their response is rated a 1 or 0. Their total anxiety score is the sum, so an integer between 0 and 5.
Now I would like to run a regression model for explanatory purposes to see the effects of predictors on anxiety level. This data isn't continuous and is bounded, so linear regression doesn't seem like a good fit.
What alternative would you suggest? I want to use the right model, but also do not want to over-complicate the interpretation of the findings (this is for psych research). The way the outcome is calculated appears to fit a binomial (the questions are the trials, 1s are successes), but I have never seen binomial regression used for something like this.
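A minimal R sketch of that aggregated-binomial idea, on simulated data (it assumes the five items are exchangeable and conditionally independent given the predictors; all variable names are made up):
```
set.seed(1)
n <- 200
stress <- rnorm(n)                      # hypothetical predictor
p <- plogis(-0.5 + 0.8 * stress)        # per-item endorsement probability
score <- rbinom(n, size = 5, prob = p)  # anxiety score, an integer in 0..5

fit <- glm(cbind(score, 5 - score) ~ stress, family = binomial)
coef(fit)  # log-odds of endorsing an item, per unit of the predictor
```
The two-column `cbind(successes, failures)` response is how base R's `glm` fits a binomial with more than one trial per observation.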
| What type of regression to use when outcome is integers from 0 to 5 | CC BY-SA 4.0 | null | 2023-03-16T17:59:53.293 | 2023-03-17T11:03:55.287 | 2023-03-17T11:03:15.357 | 22047 | 368419 | [
"regression",
"generalized-linear-model",
"linear-model",
"binomial-distribution"
] |
609727 | 1 | 612574 | null | 12 | 565 | [This](https://stats.stackexchange.com/q/462760/247274) question discusses two equivalent ways to express the canonical loss function for a logistic regression, depending on if you code the categories as $\{0,1\}$ or $\{-1,+1\}$. In the following, let $x_i$ be the $i$th feature vector, $w$ be the parameter vector for the logistic regression, $N$ be the sample size, and $p(y_i)$ be the predicted probability of membership to category $1$.
$$
\text{Logistic Loss}\\
\dfrac{1}{N}\overset{N}{\underset{i=1}{\sum}}
\log\left(1 + \exp(-y_i w^Tx_i)\right)\\
y_i\in\{-1,+1\}
$$
$$
\text{Log Loss}\\
-\dfrac{1}{N}\overset{N}{\underset{i=1}{\sum}}\left[
y_i \log(p(y_i)) + (1 - y_i)\log(1 - p(y_i))
\right]\\
y_i\in\{0, 1\}
$$
What is the algebra showing these two formulations to be equivalent? Not even the [proposed duplicate to the first link](https://stats.stackexchange.com/questions/229645/why-there-are-two-different-logistic-loss-formulation-notations) really shows why the two must give the same loss value, and while both [this](https://stats.stackexchange.com/q/250937/247274) and [this](https://stats.stackexchange.com/questions/340546/likelihood-function-for-binomial-distribution-with-label-1-and-1/453938#453938) are close, neither quite explicitly shows that $\text{Logistic Loss} = \text{Log Loss}$. I would like to see a chain of equal expressions like $\text{Logistic Loss} =\dots = \text{Log Loss}$.
| What is the algebra showing the logistic and log loss to be equivalent? | CC BY-SA 4.0 | null | 2023-03-16T18:06:34.700 | 2023-04-11T18:23:28.547 | 2023-03-16T18:12:04.797 | 247274 | 247274 | [
"regression",
"machine-learning",
"probability",
"classification",
"loss-functions"
] |
609728 | 2 | null | 609683 | 2 | null | I find it easier to think about this model using the latent-variable representation.
For simplicity, let's assume that your model has only one independent variable (GPA), and that the dependent variable is ordinal with 5 categories (instead of 11).
Suppose that every student has a score, $Z_i$, that is on a continuous scale. The model equation is
$$
Z_i = \underbrace{\beta x_i}_{\eta_i} + \epsilon_i
$$
where $x_i$ is the student's GPA, and $\epsilon_i \sim \mathrm{Logistic}(0,1)$. We thus have $Z_i \sim \mathrm{Logistic}(\eta_i,1)$, where the linear predictor $\eta_i$ is the location. Here are the density functions for the scores of two students, $Z_1$ and $Z_2$:
[](https://i.stack.imgur.com/GfR7g.png)
You don't get to observe the value of $Z_i$ directly (it's a latent variable). Instead, imagine that you have cutpoints $\zeta_1<\zeta_2<\zeta_3<\zeta_4$ along the horizontal axis, and you only know which group $Z_i$ falls into.
[](https://i.stack.imgur.com/u4CbB.png)
This is your dependent variable, $Y_i$. Specifically, you observe:
$$
Y_i = \begin{cases} 1 & \text{if } Z_i<\zeta_1 \\
2 & \text{if }\zeta_1<Z_i<\zeta_2 \\
3 & \text{if }\zeta_2<Z_i<\zeta_3 \\
4 & \text{if }\zeta_3<Z_i<\zeta_4 \\
5 & \text{if }\zeta_4 < Z_i \end{cases}
$$
The plots below show that $P(Y_1=2)>P(Y_2=2)$, as $Z_1$ is likelier than $Z_2$ to fall between these two cutpoints.
[](https://i.stack.imgur.com/093a3.png)
[](https://i.stack.imgur.com/Eska3.png)
Notice that some cutpoints are closer together, so those categories will not be observed as often.
We have:
\begin{align*}
P(Y=2) &= P(\zeta_1<Z<\zeta_2) \\
& = P(\zeta_1<\eta + \epsilon<\zeta_2) \\
& = P(\zeta_1 - \eta< \epsilon<\zeta_2- \eta) \\
& = \sigma(\zeta_2- \eta) - \sigma(\zeta_1 - \eta)
\end{align*}
where $\sigma$ is the CDF of the $\mathrm{Logistic}(0,1)$ distribution, i.e. the inverse of the logit function.
Notice that we have
$$
P(Y>k)=\sigma(\eta-\zeta_k) \Leftrightarrow \mathrm{logit}(P(Y>k))=\eta - \zeta_k\,.
$$
A one-unit increase in GPA changes the linear predictor from $\eta$ to $\eta+\beta$, so the log-odds of a student falling in a category higher than $k$ (for any $k$) increase by $\beta$.
It's also helpful to interpret the parameters by comparing individual cases, e.g., the probability of each value of $Y$ for a student who is average on every predictor, versus one who is the same but with a GPA that is one point higher.
| null | CC BY-SA 4.0 | null | 2023-03-16T18:17:54.820 | 2023-03-17T11:44:06.213 | 2023-03-17T11:44:06.213 | 238285 | 238285 | null |
609729 | 1 | 609735 | null | 2 | 33 | Suppose the linear model is $y = \beta x + \epsilon$, where $x \sim \mathcal{N}(0, 1)$ and $\epsilon \sim \mathcal{N}(0, s^2)$. If we only observe the sign of each output $y_i$, and the number of observations is large, how can we estimate $\beta$?
| Only observing sign of the output of a linear model under Gaussian assumption | CC BY-SA 4.0 | null | 2023-03-16T18:22:28.477 | 2023-03-16T19:52:31.233 | 2023-03-16T19:08:43.810 | 247274 | 153648 | [
"regression",
"normal-distribution",
"estimation",
"linear-model",
"regression-coefficients"
] |
609731 | 1 | 609850 | null | 0 | 65 | I have conducted a CFA for a one-factor measurement model and then proceeded to do multigroup CFAs to test for measurement invariance across gender and age (binary variable, median split). Testing for measurement invariance proceeds as would be expected for gender, but I am getting some strange results when testing for scalar invariance on age. Specifically, the fit indices all show improved fit from the less restrictive model to the more restrictive model. What might be the issue?
## Details
CFA is done with the cfa() function in lavaan version 0.6-13 and the fit objects are compared with the compareFit() function in semTools version 0.5-6.
The lavaan model syntax:
```
ghmodel <- 'harm =~ Q5_col + Q6_col + Q7_col + Q8_col + Q9_col + Q10_col + Q11_col'
```
The Qx_col variables have 5 ordinal response alternatives coded with the levels 0, 1, 2, 3, 4 in the data frame "gh".
The general CFA model:
```
ghfit <- cfa(ghmodel, data = gh, ordered = TRUE)
```
The binary age variable is coded like this:
```
gh <- gh %>%
mutate(ageBinary = case_when(Age < 38 ~ 0,
Age >= 38 ~ 1))
```
Now for the two multigroup models for age and the comparison:
```
ghfitAge2 <- cfa(ghmodel, ordered = TRUE, data = gh, group = "ageBinary",
group.equal = "loadings")
ghfitAge3 <- cfa(ghmodel, ordered = TRUE, data = gh, group = "ageBinary",
group.equal = c("loadings", "intercepts"))
comp <- compareFit(ghfitAge2, ghfitAge3)
summary(comp)
```
At the compareFit() stage I get the following warning message:
>
Warning message:
In (function (object, ..., method = "default", A.method = "delta", :
lavaan WARNING:
Some restricted models fit better than less restricted models;
either these models are not nested, or the less restricted model
failed to reach a global optimum. Smallest difference =
-25.2639971395378
Now the models are nested as can be seen from the code and the less restricted model did converge.
Here is some of the output which shows reduced chi-square and higher CFI in the more restrictive ghfitAge3:
|Model |Df |Chisq |Chisq diff |Df diff |Pr(>Chisq) |cfi |cfi diff |
|-----|--|-----|----------|-------|----------|---|--------|
|ghfitAge2 |34 |192.61 | | | |0.9988652 | |
|ghfitAge3 |54 |167.35 |-39.497 |20 |1 |0.9991890 |0.0003238 |
PS: I am aware of the similar post [MultiGroup Factor Analysis CFI gets better as model gets more restricted](https://stats.stackexchange.com/questions/290911/multigroup-factor-analysis-cfi-gets-better-as-model-gets-more-restricted)
The poster has experienced improved CFI like here but makes no mention of the chi-square. The accepted answer explains that the increase in CFI could be because the chi-square statistic may increase at a slower rate than the degrees of freedom earned by imposing constraints. However, the chi-square is decreasing in my case so I don't see how that can be the answer here.
Might it be due to some downstream effect of using the `ordered = TRUE` specification in the cfa() function? I have attempted to rerun these models with regular ML, and then the more restrictive model shows reduced fit as expected. However, I do not want to go with ML because the fit of the overall model becomes much worse.
| Comparing multigroup CFAs: Improved fit with more restrictions | CC BY-SA 4.0 | null | 2023-03-16T18:36:47.540 | 2023-03-17T21:22:38.547 | null | null | 311885 | [
"chi-squared-test",
"confirmatory-factor",
"lavaan",
"scale-invariance"
] |
609733 | 1 | null | null | 0 | 8 | I am having a difficult time interpreting this model. I am investigating whether hypertension is a risk factor for having low birthweight babies.
I ran a multivariable logistic regression model with the following covariates: hypertension, sex, age group, and ethnicity.
Based on the statistical output, hypertension seems to be a significant predictor of low birthweight, but I am struggling to see the association in the results. The residual vs. fitted plots ([https://i.stack.imgur.com/4L9Vd.png](https://i.stack.imgur.com/4L9Vd.png)) are going in opposite directions, which leads me to believe that there is a large amount of error in the data. Similarly, the Cook's distance graph ([https://i.stack.imgur.com/wmSMq.png](https://i.stack.imgur.com/wmSMq.png)) also shows various amounts of error. Is my interpretation correct?
[](https://i.stack.imgur.com/4L9Vd.png)
[](https://i.stack.imgur.com/SD6RH.png)
[](https://i.stack.imgur.com/wmSMq.png)
| Interpretation of residual vs fitted & cooks distance plots | CC BY-SA 4.0 | null | 2023-03-16T18:55:26.737 | 2023-03-16T18:55:26.737 | null | null | 383422 | [
"logistic",
"residuals",
"cooks-distance"
] |
609734 | 1 | null | null | 0 | 156 | I am trying to write some code to automatically detect whether a time series is seasonal. I have been looking into using the Kruskal-Wallis test, as there are a few examples of this being useful online, e.g. [here](https://stats.stackexchange.com/a/310262/383421).
Basically, you would perform this test by breaking your time series into groups (say, yearly) and performing the Kruskal-Wallis test on all of these groups to see if they are likely to have been sampled from the same distribution. The idea is that if the data is seasonal then each year (or month, or whatever) should have the same mean. However, there seem to be two fundamental flaws.
- Most importantly, stationary data will also have the same mean each year (by definition) even if it is non-seasonal.
- The Kruskal-Wallis test has the null hypothesis that all groups were sampled from the same distribution. The idea for seasonality detection is the following: if the null hypothesis is true when the time series is broken into groups of a certain lag, then the data is probably seasonal for that given lag. However, this strikes me as the opposite of what we want. The null hypothesis should be that the data is not seasonal, and we should only reject it if the data "looks seasonal enough".
Am I misunderstanding something, or is this a reasonable argument to dismiss the Kruskal-Wallis test for seasonality detection?
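For concreteness, here is a small Python sketch of the month-grouping variant from the linked answer (the series, noise level, and seed are made up); note that in this orientation, rejecting the null is what suggests seasonality:

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
months = np.tile(np.arange(12), 10)      # 10 years of monthly observations

seasonal = np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.3, 120)
stationary = rng.normal(0, 0.3, 120)     # same mean every month, no season

def kw_pvalue(series):
    # Pool observations by calendar month across years and test the groups
    groups = [series[months == m] for m in range(12)]
    return kruskal(*groups).pvalue

print(kw_pvalue(seasonal))     # tiny: the monthly groups clearly differ
print(kw_pvalue(stationary))   # typically large: no monthly structure
```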
| Using Kruskal-Wallis to Detect Seasonality in Time Series | CC BY-SA 4.0 | null | 2023-03-16T19:07:38.880 | 2023-04-04T05:18:43.737 | 2023-03-16T22:17:26.553 | 805 | 383421 | [
"time-series",
"seasonality",
"kruskal-wallis-test"
] |
609735 | 2 | null | 609729 | 1 | null | [Probit regression operates under the assumption that there is some latent linear model that has Gaussian errors, but we only see the signs.](https://en.wikipedia.org/wiki/Probit_model) This is exactly the situation described in the question, so probit regression sounds like the way to go.
A simulation shows this to be pretty slick and to give estimated coefficients that are close to the OLS coefficients and close to the true values.
```
set.seed(2023)
N <- 10000
x <- rnorm(N, 0, 1)
b0 <- -1
b1 <- 1
y <- b0 + b1*x + rnorm(N)
# Set indicator for positivity of y
#
z <- y > 0
# Fit a linear model to the original x and y
#
L <- lm(y ~ x)
# Fit a probit model to x and the indicator for the sign of y
#
P <- glm(z ~ x, family = binomial(link = "probit"))
summary(L)$coef[, 1]
summary(P)$coef[, 1]
```
```
> summary(L)$coef[, 1]
(Intercept) x
-0.9939896 1.0080528
> summary(P)$coef[, 1]
(Intercept) x
-0.9853826 0.9783042
```
EDIT
Worth a mention is that the standard errors on the probit model are higher. Since there is less information available to the probit model (the `z <- y > 0` line is lossy compression of the `y` variable), I suspect that this is not just a fluke. Consequently, if you have the original `y`, it seems that you can get tighter coefficient estimates by using the linear model, even though you can do something reasonable if you only have the lossy-compressed `z <- y>0`.
| null | CC BY-SA 4.0 | null | 2023-03-16T19:08:17.150 | 2023-03-16T19:52:31.233 | 2023-03-16T19:52:31.233 | 247274 | 247274 | null |
609736 | 1 | null | null | 0 | 12 | My goal is to assess the interaction between 2 compounds. The data did not meet the assumptions of a two-way ANOVA, and I understand there is no direct nonparametric alternative for testing the interaction. I am less interested in comparing between the subjects. Is it correct to use the Friedman test to compare the mean ranks between the related groups, to understand the effect on only one subject?
I have 2 compounds, A and B, tested on 4 subjects.
To do so I used 2 concentrations of A (A1, A2) and 3 of B (B1, B2, B3), with 3 repetitions.
12 treatments including the control (no added substances):
C, B1, B2, B3
A1, A1B1, A1B2, A1B3
A2, A2B1, A2B2, A2B3
| Friedman on one subject? | CC BY-SA 4.0 | null | 2023-03-16T19:15:57.543 | 2023-03-16T19:15:57.543 | null | null | 383420 | [
"hypothesis-testing",
"spss",
"paired-data",
"friedman-test"
] |
609739 | 1 | null | null | 3 | 96 | I am trying to teach myself calculus for machine learning and have been reading the [book](https://archive.org/details/spivak-m.-calculus-2008/mode/1up) by Spivak, but it is too rigorous and needs a lot of time to finish.
As far as I am concerned, Calculus is only used for optimization in machine learning.
I am seeking some good calculus resources that are apt for machine learning research but less rigorous than Spivak.
Edited: After some research, I plan to go through the MIT OpenCourseWare courses on single-variable calculus, multivariable calculus, and differential equations. What book should I read to study calculus alongside these lectures?
| What are some good calculus resources relevant for Machine learning researcher aspirant? | CC BY-SA 4.0 | null | 2023-03-16T19:40:08.903 | 2023-03-17T07:51:07.670 | 2023-03-17T07:51:07.670 | 362671 | 356068 | [
"machine-learning",
"references",
"calculus"
] |
609740 | 1 | null | null | 0 | 24 | I am trying to fully standardize a linear mixed model which I am fitting in R using "lme4". However, I am not really sure if I am doing it correctly. The model consists of a continuous outcome variable (reaction time), a predictor (condition, a factor with three levels) and a varying intercept by participant.
```
RT ~ condition + (1|participant)
```
The predictor condition is coded using (inverse) Helmert contrasts, so I am comparing conditions 1 and 2, and condition 3 against the mean of the first two conditions. Here is the code:
```
inv_helmert <- matrix(c(-.5, .5, 0, -1/3, -1/3, 2/3), ncol = 2)
contrasts(q_lm$condition) <- inv_helmert
f1 <- lmer(RT ~ condition + (1|participant), data=q_lm)
summary(f1)
Estimate Std. Error df t value Pr(>|t|)
(Intercept) 2.9767 0.1556 57.0000 19.136 <2e-16 ***
DiffCond1/2 0.2414 0.1415 288.0000 1.706 0.089 .
DiffCond3/1,2 1.8776 0.1225 288.0000 15.326 <2e-16 ***
```
I now want to standardize the regression to interpret it as effect size. Hence, I z-standardized the dependent variable using `scale(..., center=TRUE, scale=TRUE)`. Which leads to the following result:
```
Estimate Std. Error df t value Pr(>|t|)
(Intercept) -3.530e-15 8.776e-02 5.700e+01 0.000 1.000
DiffCond1/2 1.362e-01 7.981e-02 2.880e+02 1.706 0.089 .
DiffCond3/1,2 1.059e+00 6.911e-02 2.880e+02 15.326 <2e-16 ***
```
I understand that this is not correctly standardized, as standardized coefficients should usually be < 1. So probably I also have to standardize the independent variable condition.
So I tried two other things, which lead to the same result:
- Using the MuMIn::std.coef(f1, partial.sd=TRUE) function.
- Using a solution I found here (orthonormalizing the Helmert contrasts): [https://stats.stackexchange.com/questions/392173/how-to-calculate-standardized-orthogonal-contrast-coding-in-r](https://stats.stackexchange.com/questions/392173/how-to-calculate-standardized-orthogonal-contrast-coding-in-r)
```
library(far)
ex <- as.factor(c("COND12", "Cond1/2", "Cond3/1,2"))
EC <- cbind(c(1, 1, 1), inv_helmert)
SOCC <- orthonormalization(EC)*3^(1/2)
SOCC <- SOCC[, (2:3)]
contrasts(ex) <- SOCC
contrasts(q_lm$condition) <- SOCC
```
Both methods yield the same results:
```
Estimate Std. Error df t value Pr(>|t|)
(Intercept) 2.97672 0.15555 57.00000 19.136 <2e-16 ***
DiffCond1/2 0.09854 0.05775 288.00000 1.706 0.089 .
DiffCond3/1,2 0.88510 0.05775 288.00000 15.326 <2e-16 ***
```
Four questions:
- Are these regressions now fully standardized?
- Why do these two things lead to the same result?
- In case they are correctly standardized: Why is this achieved by orthonormalization of the contrasts?
- Why do I also have to standardize the nominal predictor condition?
| Standardize linear regression with single contrast coded predictor | CC BY-SA 4.0 | null | 2023-03-16T19:40:13.200 | 2023-03-19T19:14:32.730 | 2023-03-19T19:14:32.730 | 11887 | 309425 | [
"r",
"mixed-model",
"lme4-nlme",
"regression-coefficients",
"standardization"
] |
609741 | 2 | null | 609727 | 16 | null | Consider the case when $y_i = -1$ in the logistic loss and $y_i = 0$ in the log loss. The summand in the logistic loss becomes $$\log\left(1 + \exp(w^Tx_i)\right)$$ and the summand in the log loss becomes $$-\log(1 - p(y_i = 0))$$
Using the following equivalence given in your answer [here](https://stats.stackexchange.com/a/609715/296197)
$$
p(y_i) = \dfrac{1}{
1 + \exp(-w^Tx_i)
}\\
\Big\Updownarrow\\
w^Tx_i = \log\left(
\dfrac{
p(y_i)
}{
1 - p(y_i)
}
\right)
$$
We can re-write the summand in the logistic loss as
\begin{align}
\log\left(1 + \exp(w^Tx_i)\right) &= \log\left(1 + \exp\left(\log\left(
\dfrac{
p(y_i=-1)
}{
1 - p(y_i=-1)
}
\right)\right)\right) \\
&= \log\left(1+ \frac{p(y_i=-1)}{1-p(y_i=-1)}\right) \\
&= \log\left(\frac{1-p(y_i=-1)}{1-p(y_i=-1)} + \frac{p(y_i=-1)}{1-p(y_i=-1)}\right) \\
&= \log\left(\frac{1}{1-p(y_i=-1)}\right) \\
&= -\log\left(1-p(y_i=-1)\right) \\
\end{align}
Assuming that $p(y_i = -1)$ for the logistic loss is equivalent to $p(y_i = 0)$ for the log loss, the summand for the logistic loss (when $y_i = -1$) is equivalent to the summand for the log loss (when $y_i = 0$). The case when $y_i = 1$ in the logistic loss and $y_i = 1$ in the log loss can be shown in a similar way.
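As a quick numerical sanity check of the chain above, here is a Python sketch with randomly generated $w$, $X$, and labels:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 100, 3
X = rng.normal(size=(N, d))
w = rng.normal(size=d)
y01 = rng.integers(0, 2, size=N)   # labels coded {0, 1}
ypm = 2 * y01 - 1                  # the same labels coded {-1, +1}

p = 1 / (1 + np.exp(-X @ w))       # predicted p(y_i = 1)

log_loss = -np.mean(y01 * np.log(p) + (1 - y01) * np.log(1 - p))
logistic_loss = np.mean(np.log(1 + np.exp(-ypm * (X @ w))))

print(log_loss, logistic_loss)     # identical up to floating point
```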
| null | CC BY-SA 4.0 | null | 2023-03-16T19:53:21.700 | 2023-03-16T21:42:15.000 | 2023-03-16T21:42:15.000 | 296197 | 296197 | null |
609743 | 2 | null | 609739 | 2 | null | It's not a book and not addressing optimization, but one of the best resources to self-learn calculus are the [lectures by Gilbert Strang](https://youtube.com/playlist?list=PLBE9407EA64E2C318) that were recorded and are available on YouTube. He also wrote a great handbook.
If you would find Strang a little bit too hard, I'll recommend starting with the [Khan Academy](https://www.khanacademy.org/) lectures.
| null | CC BY-SA 4.0 | null | 2023-03-16T20:39:46.330 | 2023-03-16T20:39:46.330 | null | null | 35989 | null |