| Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
608643 | 1 | null | null | 0 | 22 | I am dealing with a dataset whose inputs behave differently, and I am confused because I cannot find any trend or pattern in the data. I checked for seasonality and found none. I searched the literature for handling data without trend or seasonality, but I do not know the proper treatment for prediction and interpretation of the results. What is the solution when the data look like this?
I also plotted a heatmap of the correlation matrix, and to me it says that the variables are uncorrelated. Am I right about these two conclusions? If so, what should be done as further steps?
If you think the figure should not be pasted into the question, please do not downvote; just let me know and I promise to fix it.
Thanks.
[](https://i.stack.imgur.com/7NUxk.png)
| Modeling a time series without trend or seasonality | CC BY-SA 4.0 | null | 2023-03-07T14:42:08.100 | 2023-03-07T17:52:41.257 | 2023-03-07T17:52:41.257 | 53690 | 318338 | [
"correlation",
"python",
"dataset",
"seasonality"
] |
608644 | 2 | null | 608599 | 1 | null | >
Does that mean that there is never really a way to retrieve a prespecified OR from a single simulated database?
Yes, you will not be able to exactly recover the prespecified ORs used to simulate the data when you fit a logistic regression model because, as you say, the data-generating process includes randomness.
As the sample size of your simulated data set increases, the fitted ORs (the estimates of your prespecified ORs) will get closer to the prespecified values. The confidence intervals for your estimated ORs show how much uncertainty there is in those estimates.
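As an illustrative sketch of that convergence (Python, with a minimal hand-rolled Newton fit standing in for a packaged logistic regression; the helper `fit_logistic` is not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logistic(X, y, iters=25):
    # Minimal Newton-Raphson fit of a logistic regression (illustrative helper).
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1 - p))[:, None])     # observed information
        beta += np.linalg.solve(H, X.T @ (y - p))  # Newton step on the score
    return beta

true_beta = np.array([-1.0, 0.5])  # prespecified intercept and log-OR
for n in (500, 500_000):
    x = rng.binomial(1, 0.5, n)
    X = np.column_stack([np.ones(n), x])
    y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))
    est = fit_logistic(X, y)
    print(n, np.round(est, 3))  # estimates tighten around (-1, 0.5) as n grows
```

At n = 500 the estimates wander noticeably; at n = 500,000 they sit very close to the prespecified values, which is exactly the behavior described above.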
| null | CC BY-SA 4.0 | null | 2023-03-07T14:43:29.253 | 2023-03-07T14:43:29.253 | null | null | 122545 | null |
608646 | 1 | 608912 | null | 1 | 159 | I have built this model in R:
```
library(emmeans)
library(lmerTest)
basdai_age = lmer(basdai ~ 1 + gender + time + age +
gender*time + time*age + gender*age +
gender*time*age + (1|ID) + (1|country), data = dat, REML = F, control=lmerControl(optimizer="bobyqa"))
```
In this model, I am predicting the treatment response basdai (scale 0 to 100), using gender (male/female), time (0, 0.5, 1 and 2 years, time as a categorical variable), age (18-90 years), and the two-way and three-way interactions of these variables. The linear mixed model takes into account correlations within subjects and subjects within countries.
I used the following code to plot the relationship between sex, time and age:
```
emmip(basdai_age,~ gender ~ time | age, at = list(age = c(20, 40, 60, 80)), plotit= TRUE)
```
This gives me the following plot (blue = female, red = male):
[](https://i.stack.imgur.com/Qe2R0.png)
The plot looks good, although it may be confusing for viewers to see that time is not plotted at the 1.5 year mark since it is not a measured category. Moreover, it would be helpful to add labels at the top of each subplot indicating the age group being displayed (e.g. "20 years", "40 years", etc.) to make it more clear.
Therefore I have tried to extract the data from the `emmip()` function by using `plotit = FALSE`, and then to build the plot to my liking. However, I am struggling to calculate the confidence intervals of the estimated marginal means. I asked ChatGPT for help and it provided a formula, but the CIs seem too small.
Here is what I have tried so far:
```
library(ggplot2)
library(dplyr)    # for mutate()
library(cowplot)  # for theme_cowplot()
df_basdai_age <- emmip(basdai_age,~ gender ~ time | age, at = list(age = c(20, 40, 60, 80)), facetlab = "label_both", plotit= FALSE)
df_basdai_age$age = as.factor(df_basdai_age$age)
levels(df_basdai_age$age) <- c("20 years", "40 years", "60 years", "80 years")
df_basdai_age$time = as.numeric(as.character(df_basdai_age$time))
df_basdai_age <- mutate(df_basdai_age, ymin = yvar - qt(0.975, df)*(SE/sqrt(df)))
df_basdai_age <- mutate(df_basdai_age, ymax = yvar + qt(0.975, df)*(SE/sqrt(df)))
#Plot the graph
p <-
ggplot(df_basdai_age, aes(x = time, y = yvar, color = tvar)) +
geom_line(size = 1) +
geom_point() +
geom_errorbar(aes(ymin=ymin, ymax=ymax), width=.05) +
labs(x = "Time (years)", y = "") +
theme_cowplot() +
facet_wrap(~ age, nrow = 2) + theme(legend.position="none")
# show plot
p
```
This is the resulting plot:
[](https://i.stack.imgur.com/8dcBr.png)
Here is the complete dataset:
```
structure(list(gender = structure(c(1L, 2L, 1L, 2L, 1L, 2L, 1L,
2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L,
2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L), levels = c("Male", "Female"
), class = "factor"), time = c(0, 0, 0, 0, 0, 0, 0, 0, 0.5, 0.5,
0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2,
2, 2, 2, 2, 2), age = structure(c(1L, 1L, 2L, 2L, 3L, 3L, 4L,
4L, 1L, 1L, 2L, 2L, 3L, 3L, 4L, 4L, 1L, 1L, 2L, 2L, 3L, 3L, 4L,
4L, 1L, 1L, 2L, 2L, 3L, 3L, 4L, 4L), levels = c("20 years", "40 years",
"60 years", "80 years"), class = "factor"), yvar = c(53.2222132545591,
55.644671238144, 53.979760945038, 58.0003674613788, 54.737308635517,
60.3560636846136, 55.4948563259959, 62.7117599078483, 18.8785653284777,
27.3140001897482, 25.9035670388476, 33.6311637386693, 32.9285687492176,
39.9483272875904, 39.9535704595875, 46.2654908365116, 17.1280381612022,
25.6617577361121, 23.8414745169542, 31.7675512043731, 30.5549108727061,
37.8733446726342, 37.268347228458, 43.9791381408953, 15.8476044974251,
25.0206920526655, 23.029545900933, 30.9838114430519, 30.2114873044409,
36.9469308334382, 37.3934287079488, 42.9100502238246), SE = c(1.48085388286971,
1.54315837900391, 1.41163641601991, 1.42678010979953, 1.4630053643472,
1.49892114510165, 1.62355480278176, 1.73623936576435, 1.47800196744245,
1.54052855366631, 1.41101870318731, 1.42580231659991, 1.46095633844491,
1.4978877623696, 1.61701847301061, 1.73363627458762, 1.48984952535246,
1.58248569215204, 1.41398647326204, 1.43575219607306, 1.47296775812952,
1.52128548685686, 1.65241653887761, 1.80638768204503, 1.51339032203046,
1.65536350490358, 1.41906539783371, 1.45202624858626, 1.4940999289118,
1.56932417176622, 1.71642484529658, 1.95023658100379), df = c(19.1992529938252,
22.636661796014, 15.8557024206515, 16.5459552649169, 18.2937320570655,
20.157114181664, 27.7433329713143, 36.2839788594381, 19.051888835057,
22.483155277679, 15.828184146664, 16.5012376120077, 18.1918644929504,
20.10278950016, 27.3001430927299, 36.0701453085591, 19.669221418473,
25.0336574427524, 15.9616355502659, 16.9666087914172, 18.7980095965655,
21.3885230873732, 29.7715565509346, 42.514368203954, 20.9429853491516,
29.973619949575, 16.1930278685159, 17.7496566912749, 19.9012388133609,
24.2214091721269, 34.6602465546453, 57.7544903350949), tvar = structure(c(1L,
2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L,
2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L), levels = c("Male",
"Female"), class = "factor"), xvar = structure(c(1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L,
3L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L), levels = c("0",
"0.5", "1", "2"), class = "factor"), ymin = c(52.5153436652014,
54.9731205471491, 53.2276742716765, 57.258776742029, 54.0195058924355,
59.6599914019999, 54.8631940181502, 62.1273437446853, 18.1699669372756,
26.6410506766869, 25.1510488995009, 32.8889200571732, 32.2094835380351,
39.2516759650496, 39.3188965541991, 45.6801042113467, 16.4265441774652,
25.0104024242387, 23.0910485329519, 31.0320361368517, 29.8433247363632,
37.1900266089296, 36.649658499789, 43.4202483189333, 15.1597661980371,
24.4031685566235, 22.2826943822545, 30.2589939247067, 29.5126365026472,
36.2891347676435, 36.8013488320248, 42.3963183501654), ymax = c(53.9290828439169,
56.316221929139, 54.7318476183996, 58.7419581807286, 55.4551113785984,
61.0521359672273, 56.1265186338416, 63.2961760710114, 19.5871637196797,
27.9869497028096, 26.6560851781943, 34.3734074201655, 33.6476539604,
40.6449786101312, 40.5882443649759, 46.8508774616764, 17.8295321449393,
26.3131130479854, 24.5919005009564, 32.5030662718946, 31.2664970090489,
38.5566627363388, 37.8870359571269, 44.5380279628573, 16.5354427968132,
25.6382155487075, 23.7763974196115, 31.7086289613971, 30.9103381062346,
37.6047268992329, 37.9855085838727, 43.4237820974838)), estName = "yvar", pri.vars = c("gender",
"time", "age"), adjust = "none", side = 0, delta = 0, type = "link", mesg = "Degrees-of-freedom method: satterthwaite", row.names = c(1L,
2L, 9L, 10L, 17L, 18L, 25L, 26L, 3L, 4L, 11L, 12L, 19L, 20L,
27L, 28L, 5L, 6L, 13L, 14L, 21L, 22L, 29L, 30L, 7L, 8L, 15L,
16L, 23L, 24L, 31L, 32L), labs = list(xlab = "Levels of time",
ylab = "Linear prediction", tlab = "gender"), vars = list(
byvars = "age", tvars = "gender"), class = "data.frame")
```
Question: If I have the standard error, the marginal mean, and the degrees of freedom, how do I calculate the `ymin` and `ymax` for a marginal mean of a linear mixed model? Is the formula used here correct?
| Calculating Confidence intervals of marginal means for a linear mixed model using emmeans package | CC BY-SA 4.0 | null | 2023-03-07T14:50:08.760 | 2023-03-09T18:23:18.767 | 2023-03-07T14:59:39.610 | 335198 | 335198 | [
"regression",
"mixed-model",
"confidence-interval",
"lsmeans",
"ggplot2"
] |
608647 | 2 | null | 608599 | 1 | null | You can never get exactly the prespecified odds-ratio, but you can get arbitrarily close to it by setting the number of `1`s and `0`s in the outcome column for each cell in the design to match the proportion predicted by your intended odds-ratios (rounded to the nearest whole number). This eliminates any mismatch due to random simulation. The mismatch due to rounding shrinks as `n_per_cell` increases.
```
library(tidyverse)
b0 = -1
b1 = .5
b2 = -2
n_per_cell = 1000 # Number of repetitions per cell
# There are 2 (x1) * 2 (x2) * 2 (outcomes) possible cells in this design
count_df = expand_grid(
x1 = c(0, 1),
x2 = c(0, 1),
outcome = c(0, 1),
n_total = n_per_cell
) %>%
mutate(log_odds_outcome = b0 + b1 * x1 + b2 * x2,
p_outcome = plogis(log_odds_outcome),
n = ifelse(outcome == 1,
round(p_outcome * n_total),
round((1 - p_outcome) * n_total)))
count_df
#> # A tibble: 8 × 7
#> x1 x2 outcome n_total log_odds_outcome p_outcome n
#> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 0 0 0 1000 -1 0.269 731
#> 2 0 0 1 1000 -1 0.269 269
#> 3 0 1 0 1000 -3 0.0474 953
#> 4 0 1 1 1000 -3 0.0474 47
#> 5 1 0 0 1000 -0.5 0.378 622
#> 6 1 0 1 1000 -0.5 0.378 378
#> 7 1 1 0 1000 -2.5 0.0759 924
#> 8 1 1 1 1000 -2.5 0.0759 76
# Expand the counts out into a full data frame
df = count_df %>%
select(x1, x2, outcome, n) %>%
uncount(n)
head(df)
#> # A tibble: 6 × 3
#> x1 x2 outcome
#> <dbl> <dbl> <dbl>
#> 1 0 0 0
#> 2 0 0 0
#> 3 0 0 0
#> 4 0 0 0
#> 5 0 0 0
#> 6 0 0 0
# Look at cell means
df %>% group_by(x1, x2) %>% summarise(mean(outcome))
#> `summarise()` has grouped output by 'x1'. You can override using the `.groups`
#> argument.
#> # A tibble: 4 × 3
#> # Groups: x1 [2]
#> x1 x2 `mean(outcome)`
#> <dbl> <dbl> <dbl>
#> 1 0 0 0.269
#> 2 0 1 0.0470
#> 3 1 0 0.378
#> 4 1 1 0.0760
# Confirm log-odds-ratios are close to [-1, .5, -2]
m = glm(outcome ~ x1 + x2, data = df, family = binomial)
summary(m)
#>
#> Call:
#> glm(formula = outcome ~ x1 + x2, family = binomial, data = df)
#>
#> Deviance Residuals:
#> Min 1Q Median 3Q Max
#> -0.9748 -0.7913 -0.3970 -0.3110 2.4711
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> (Intercept) -1.00079 0.06726 -14.88 < 2e-16 ***
#> x1 0.50366 0.08624 5.84 5.21e-09 ***
#> x2 -2.00390 0.10507 -19.07 < 2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for binomial family taken to be 1)
#>
#> Null deviance: 3918.6 on 3999 degrees of freedom
#> Residual deviance: 3407.6 on 3997 degrees of freedom
#> AIC: 3413.6
#>
#> Number of Fisher Scoring iterations: 5
```
Created on 2023-03-07 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1)
| null | CC BY-SA 4.0 | null | 2023-03-07T14:53:36.133 | 2023-03-07T14:53:36.133 | null | null | 42952 | null |
608648 | 2 | null | 436154 | 2 | null | Suppose that the training process is working & the loss is decreasing during an epoch. If this is the case, then we know that the average loss at the beginning of the epoch will be larger than the average loss at the end of the epoch. This means you have to make a choice:
1. If you store all of the minibatch losses computed during the epoch and then average them, that average will be biased upward, toward the loss value at the beginning of the epoch. How much this matters depends on the gap between the loss at the beginning of the epoch and the loss at the end: the closer together they are, the smaller the bias.
2. If you discard the minibatch losses computed during the epoch and recompute the loss for all samples at the end, then you increase the computational cost of each epoch, because you pass the training data through the model twice.
You'll have to decide which one is the best fit for your needs. If you have a tight budget, (1) might make more sense. If you have a great need to precisely measure the training loss, then (2) might be better.
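As a toy numeric illustration of the bias in option (1) (plain Python; the geometric decay is a made-up stand-in for within-epoch improvement):

```python
# Hypothetical within-epoch loss curve: loss shrinks batch by batch.
batch_losses = [2.0 * 0.9 ** i for i in range(50)]

avg_of_batches = sum(batch_losses) / len(batch_losses)  # option (1): average the stored losses
end_of_epoch = batch_losses[-1]                         # ~ option (2): loss recomputed at the end

print(avg_of_batches, end_of_epoch)
# The stored-loss average sits well above the end-of-epoch loss,
# i.e. it is biased upward exactly as described.
```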
| null | CC BY-SA 4.0 | null | 2023-03-07T15:18:43.667 | 2023-03-07T15:18:43.667 | null | null | 22311 | null |
608650 | 2 | null | 607035 | 2 | null | You have not actually tested the 3-way interaction as you claim. Rather, you looked at the 8 separate coefficients, chose the one that was most significant, and compared it to the 0.05 cutoff, which ignores the issue of multiplicity.
To formally test the presence of a 3-way interaction, I would suggest a likelihood ratio test from `library(lmtest)`.
```
mod1 = lmer(pga ~ 1 + gender + time + smoking_status +
gender*time + time*smoking_status + gender*smoking_status +
gender*time*smoking_status + (1|ID) + (1|country), data = dat, REML = F, control=lmerControl(optimizer="bobyqa"))
mod2 = lmer(pga ~ 1 + gender + time + smoking_status +
gender*time + time*smoking_status + gender*smoking_status +
+ (1|ID) + (1|country), data = dat, REML = F, control=lmerControl(optimizer="bobyqa"))
lrtest(mod1, mod2)
```
| null | CC BY-SA 4.0 | null | 2023-03-07T15:29:31.567 | 2023-03-09T17:12:07.867 | 2023-03-09T17:12:07.867 | 21054 | 8013 | null |
608651 | 1 | null | null | 0 | 15 | I've been struggling to find a description of an approach to a situation like mine, or any R packages that seem capable of handling my needs. I have a large dataset (~80,000 rows) where my dependent variable is 1 or 0, describing the presence/absence of a feature of individual animals observed.
There are two components to the non-independence of each observation: observations can come from the same location (a geographic random intercept, characterized by a label for the county the observation comes from), and observations can come from the same species (nested within genus and family). This results in binomial data, since we often have multiple 1/0 observations for a given location and a given species.
I need to model how independent (environmental) variables associated with location influence the probability of 1 vs 0 in the dependent variable, while accounting for the non-independence of observations from the same location and observations from the same species - I expect species to differ in the proportion of 1 vs 0.
Importantly, I cannot collapse observations from the same location into a single value for each location (e.g., 0.70 = proportion of 1's), because there is a critical independent variable (year) that differs across observations from the same location. In other words, I need to retain the individual observations so I can model the observation-level effect in addition to the location-level effect.
A simplified version of the data structure is below to help clarify the issue:
|Y |Location |Species |Environmental X |Time |
|-|--------|-------|---------------|----|
|1 |County A |Species 1 |7 |1975 |
|0 |County A |Species 1 |7 |1983 |
|1 |County B |Species 1 |4 |1967 |
|0 |County C |Species 1 |5.8 |1952 |
|1 |County A |Species 2 |7 |1975 |
|1 |County D |Species 2 |9 |1995 |
|0 |County D |Species 2 |9 |1946 |
|0 |County D |Species 2 |9 |1968 |
|1 |County E |Species 3 |2.7 |1998 |
The independent variable Time should have effects on probability of 1 vs 0 for each observation, observations from the same location are non-independent, and observations for the same species are non-independent (and should be nested in a phylogenetic tree).
Every solution I can find doesn't seem to permit an analysis that contains all these features. I'd appreciate any advice on approaches I should look at that would best be suited for my situation.
| Ideas for binomial generalized linear mixed model with both phylogeny and random intercepts? | CC BY-SA 4.0 | null | 2023-03-07T15:35:06.810 | 2023-03-07T19:03:03.300 | null | null | 253365 | [
"mixed-model",
"nested-data",
"ecology",
"phylogeny"
] |
608652 | 1 | 608900 | null | 1 | 20 | I have a dataset for a group of 66,000 subjects diagnosed with a dangerous condition, and the time it takes for death to occur (the “event”) or not occur (survival, or “censored”). I am pursuing survival analysis using R. However, what if I am not interested in time-to-event, but just want to study the end-of-period death/survival rate, with the objective of deriving a probability distribution of death at year 5 for groups of subjects facing the same diagnosis? I'd like to explore other statistical options where time to event is not a consideration. Is this a matter of choosing a distribution (such as the binomial, since in this case the subjects either live or die) and running simulations using my group's parameters?
In the case of my study group:
- Death rate (over 5 years) is 70.5%
- SDEV (population or sample) is 0.4562 based on calculating SDEV on patient status at study period end where 1 = death and 0 = alive or “censored”
- SERR is 0.0018
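As a quick consistency check, these summary statistics follow from treating each subject's end-of-period status as a Bernoulli draw with p = 0.705 (an illustrative Python calculation, not from the original post):

```python
from math import sqrt

n, p = 66_000, 0.705            # subjects and observed 5-year death rate
sd = sqrt(p * (1 - p))          # SD of a single 0/1 death indicator
se = sd / sqrt(n)               # standard error of the observed rate
print(round(sd, 4), round(se, 4))  # close to the 0.4562 and 0.0018 quoted above
```

(The tiny difference in the SD is consistent with using the sample rather than the population formula.)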
| Which are the statistical methodologies to consider when examining study group death rates but without considering time to death? | CC BY-SA 4.0 | null | 2023-03-07T15:35:59.210 | 2023-03-09T16:51:53.440 | null | null | 378347 | [
"r",
"survival",
"binomial-distribution",
"simulation",
"monte-carlo"
] |
608653 | 1 | 608672 | null | 2 | 110 | How to compute $$\int_{-k}^{k}F(x)dx$$
where $F(x)$ is the cumulative distribution function of continuous random variable $X$ which has symmetric pdf about $x=0$ and $k>0$.
| Integral of cdf of a symmetric random variable | CC BY-SA 4.0 | null | 2023-03-07T15:47:46.647 | 2023-03-09T16:00:12.297 | 2023-03-07T18:13:35.987 | 60613 | 382603 | [
"probability",
"density-function",
"cumulative-distribution-function"
] |
608654 | 2 | null | 436154 | 3 | null | To unbiasedly estimate a model's training loss at the end of an epoch, do exactly what you do to estimate its validation loss at the end of an epoch: set the model in evaluation mode (to disable training-only computations like dropout), and apply the model to the sample. Not all of the training sample has to be used here; a random subsample should do just fine.
This procedure is less biased than the typical procedure you mentioned:
>
sum the [training] loss of each batch and divide by the number of batches analyzed for getting the loss of the current epoch
The typical procedure saves time because there are no extra model calls; it just sums what was already computed. But it's a biased estimator of training loss because:
- See Sycorax's answer
- Training-only computations were applied to get the loss for each training batch, since (presumably) this loss is recycled from the forward pass during backpropagation. Applying dropout, for example, causes the loss to be overestimated. Other training-only computations, e.g., feeding the model the true rather than the predicted sequence elements as in a "teacher forcing" architecture for language translation (e.g., Figure 10.6 of [1]), cause the training loss to be underestimated.
I personally prefer to compute an unbiased estimate of training loss and error because it's insightful to see how training and validation loss and error compare between different models. One can iterate a model more easily by understanding how much certain interventions affect its bias and variance.
The typical procedure is fine if your only goal is to sanity check that optimization is working, i.e., training loss consistently goes down. Training error doesn't have to be estimated at all to select the best performing model. Only validation error needs to be estimated.
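As a framework-free illustration of the dropout point above (a numpy toy, not any particular framework's API: a "prediction" averages 100 unit outputs, and inverted dropout with keep probability 0.5 inflates its variance and hence the measured MSE):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "model": each prediction is the mean of 100 unit activations near 1.0.
units = rng.normal(1.0, 0.2, size=(10_000, 100))
target = 1.0

pred_eval = units.mean(axis=1)                    # eval mode: every unit active
mask = rng.binomial(1, 0.5, size=units.shape)     # train mode: inverted dropout, keep prob 0.5
pred_train = (units * mask / 0.5).mean(axis=1)

mse_eval = np.mean((pred_eval - target) ** 2)
mse_train = np.mean((pred_train - target) ** 2)
print(mse_eval, mse_train)  # dropout noise makes the train-mode loss markedly larger
```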
## References
- Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.
| null | CC BY-SA 4.0 | null | 2023-03-07T15:56:53.660 | 2023-03-07T20:04:52.217 | 2023-03-07T20:04:52.217 | 337906 | 337906 | null |
608656 | 1 | null | null | 0 | 22 | Someone has given me a positive integer (5).
It’s actually a forecast of the mean number of hurricanes for this summer, so we can think of it as an estimate of the mean of a Poisson distribution.
But I know they rounded it, so I know the actual number was in [4.5, 5.5).
So I’m going to sample randomly from that interval, by way of incorporating the uncertainty that they removed by rounding it.
But with what distribution should I sample?
One might say uniform, but that doesn’t seem right.
For instance, if someone gives me 1, it's surely more likely to have come from below 1 than above 1. Perhaps I should use a log scale somehow; that's often the answer to these things.
I vaguely remember something about how more numbers have 1 as their leading digit than any other (Benford's law), which seems like a similar thing.
Any thoughts anyone?
Thanks
| I want to deround (or unround) a non-negative integer back to being a non-negative real number, using statistics | CC BY-SA 4.0 | null | 2023-03-07T16:00:54.700 | 2023-03-07T16:00:54.700 | null | null | 331423 | [
"rounding"
] |
608657 | 2 | null | 608528 | 2 | null | The `residuals.survreg()` function in the R [survival package](https://cran.r-project.org/package=survival) allows for 9 different types of residuals. The default "response" residuals, the differences between observed and predicted survival times, are what you seem to have in mind.
If there are no censored observation times, then you certainly can use such residuals to evaluate models. A survival model is then just a particular type of parametric model. For some parametric forms it might be better to evaluate residuals on a correspondingly transformed scale, as illustrated in [this answer](https://stats.stackexchange.com/a/561486/28500), but the basic idea of using residuals between predictions and observations, in some form, to evaluate models is fine when there is no censoring.
Censoring is an issue for most survival models in practice, however, and there's no reliable way to evaluate residuals for censored observations. You suggest avoiding that problem by restricting analysis of such residuals to uncensored observations. The problem is that you are then throwing away the information provided by the censored observations, which do contribute to the likelihood calculations.
The principles explained in [this post](https://www.fharrell.com/post/addvalue/) might be of interest. Censored observations make evaluation of explained variation in outcomes unreliable. Likelihood-based methods like AIC take into account all of the observations and thus provide the most efficient and reliable use of the data to evaluate models.
| null | CC BY-SA 4.0 | null | 2023-03-07T16:05:50.000 | 2023-03-07T16:05:50.000 | null | null | 28500 | null |
608658 | 1 | null | null | 1 | 161 | Good evening everyone,
I have a question regarding the ARIMA and SARIMA statistical models, which build predictions of future values from past values.
My question is this: should the data that these models take as input be normalized and scaled, as is the case for datasets fed into a machine learning model?
Should categorical variables be transformed via one-hot-encoding?
How should date features be handled?
| ARIMA or SARIMA scale and normalize data | CC BY-SA 4.0 | null | 2023-03-07T16:08:49.803 | 2023-03-07T17:49:24.040 | 2023-03-07T17:49:24.040 | 53690 | 364824 | [
"forecasting",
"arima",
"normalization",
"categorical-encoding",
"feature-scaling"
] |
608659 | 1 | null | null | 0 | 46 | Consider a random polynomial $p(z)=\sum_0^n A_i z^i$, where $A_i$, $i=0,1,2,\ldots,n$, are iid uniform variables on the interval $(0,1)$. I want to show that the probability of the root with minimum modulus lying outside the unit disc approaches 0 as the degree grows; that is, the probability that all roots lie outside the unit disc approaches zero as $n$ goes to infinity.
My attempt: We observe that the probability of $A_0 \geq A_1 \geq \cdots \geq A_n$ is $\frac{1}{(n+1)!}$, so by the Eneström–Kakeya theorem the probability that all roots lie outside the unit disc is at least $\frac{1}{(n+1)!}$ for an $n$th-degree polynomial. Unfortunately, for large $n$ this, though true, is devoid of any useful information. I try a second approach by considering the product of the roots:
$$|z_1.z_2 \cdots.z_n|=|A_0/A_n|$$
If $\xi$ is the modulus of the root with minimum modulus, we have $$ \xi^n \leq \left|\frac{A_0}{A_n}\right|$$
Denoting by $W$ the random variable $\left|\frac{A_0}{A_n}\right|^{1/n}$, so that $\xi \leq W$, we have for any $w$:
$$P(\xi \leq w) \geq P(W \leq w)=F_W(w),$$
$F_W$ being the CDF of $W$
This leads to $$ P(\xi >w) \leq 1-F_W(w)$$
Now if, for $w>1$, the RHS approaches 0, then we are done. But I don't know how to show that.
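As an illustrative Monte Carlo sanity check of the claim (a Python sketch using `numpy.roots`; the helper name is made up, and since the $A_i$ are iid the coefficient ordering passed to `roots` is immaterial):

```python
import numpy as np

rng = np.random.default_rng(0)

def p_all_roots_outside(n, trials=2000):
    # Estimate P(min |root| > 1) for a degree-n polynomial with iid U(0,1) coefficients.
    hits = 0
    for _ in range(trials):
        coeffs = rng.uniform(0, 1, n + 1)     # highest-degree coefficient first for np.roots
        if np.abs(np.roots(coeffs)).min() > 1:
            hits += 1
    return hits / trials

for n in (2, 5, 10, 20):
    print(n, p_all_roots_outside(n))  # the estimate shrinks toward 0 as n grows
```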
| Probability of Roots outside the unit disc | CC BY-SA 4.0 | null | 2023-03-07T16:09:49.453 | 2023-03-07T17:28:09.190 | 2023-03-07T17:28:09.190 | 20519 | 295633 | [
"probability-inequalities",
"complex-numbers"
] |
608660 | 1 | null | null | 1 | 44 | This seems like it should be a really simple thing, so I apologize if this has been answered elsewhere, but I've browsed dozens of existing questions and none seem to fit exactly.
Suppose I have two cohort groups, control and target. I apply a treatment to the target and want to use DiD to compare conversion deltas between the two groups, pre- and post-treatment.
I believe the following R code correctly shows that, in our instance, the effect is truly non-zero (i.e., we reject the null hypothesis). However, because it's a binomial GLM, the confidence intervals are for the coefficients on the log-odds scale, not for the effect itself:
```
Converted <- c(2526,2500,2818,2268)
Offered <- c(3992,3956,4484,3996)
Cohort <- factor(c("Target","Target","Control","Control"))
Period <- factor(c(0,1,0,1))
ConvRate <- Converted / Offered
fit <- glm(ConvRate~Cohort*Period, family=binomial, weights=Offered)
summary(fit)
confint(fit)
```
(Summarized) Output:
```
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.52560 0.03090 17.007 < 2e-16 ***
CohortTarget 0.01850 0.04509 0.410 0.681633
Period1 -0.25367 0.04444 -5.708 1.14e-08 ***
CohortTarget:Period1 0.25017 0.06434 3.888 0.000101 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 5.1064e+01 on 3 degrees of freedom
Residual deviance: -7.9270e-13 on 0 degrees of freedom
AIC: 42.851
Number of Fisher Scoring iterations: 2
Waiting for profiling to be done...
2.5 % 97.5 %
(Intercept) 0.4651823 0.5863361
CohortTarget -0.0698583 0.1069002
Period1 -0.3407991 -0.1665982
CohortTarget:Period1 0.1240757 0.3762874
```
In order to get the actual effect size, I ran this through `lm()`, but it refuses to provide confidence intervals:
```
lfit <- lm(ConvRate~Cohort*Period, weights=Offered)
summary(lfit)
confint(lfit)
```
Output:
```
Call:
lm(formula = ConvRate ~ Cohort * Period, weights = Offered)
Weighted Residuals:
ALL 4 residuals are 0: no residual degrees of freedom!
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.628457 NA NA NA
CohortTarget 0.004309 NA NA NA
Period1 -0.060889 NA NA NA
CohortTarget:Period1 0.060075 NA NA NA
Residual standard error: NaN on 0 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: NaN
F-statistic: NaN on 3 and 0 DF, p-value: NA
2.5 % 97.5 %
(Intercept) NaN NaN
CohortTarget NaN NaN
Period1 NaN NaN
CohortTarget:Period1 NaN NaN
Warning message:
In qt(a, object$df.residual) : NaNs produced
```
A similar issue results with `family=gaussian` when using GLM again:
```
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.628457 NA NA NA
CohortTarget 0.004309 NA NA NA
Period1 -0.060889 NA NA NA
CohortTarget:Period1 0.060075 NA NA NA
(Dispersion parameter for gaussian family taken to be NaN)
Null deviance: 1.2194e+01 on 3 degrees of freedom
Residual deviance: 2.0249e-28 on 0 degrees of freedom
AIC: -272.54
Number of Fisher Scoring iterations: 1
Waiting for profiling to be done...
Error in while ((step <- step + 1) < maxsteps && abs(z) < zmax) { :
missing value where TRUE/FALSE needed
Calls: confint -> confint.glm -> profile -> profile.glm
In addition: Warning message:
In qf(1 - alpha, 1, n - p) : NaNs produced
Execution halted
```
I'm wondering how I can get a confidence interval for the effect size. The difference in differences is ~6 percentage points, but I need a statistically sound interval around that estimate.
| How to determine effect size confidence interval in a difference-of-differences A/B test | CC BY-SA 4.0 | null | 2023-03-07T16:14:28.353 | 2023-03-07T22:43:01.943 | null | null | 382606 | [
"r",
"effect-size",
"difference-in-difference"
] |
608661 | 1 | null | null | 0 | 69 | I'm learning about machine learning and I have two questions about NLP.
- Consider a dataset with many texts. Should I split it into train/test sets before or after using `CountVectorizer`? I ask because, if I use `fit_transform(train_set)` and then `transform(test_set)`, words in the test set that are not present in the training set will not be part of the bag of words.
- Consider a dataset with many texts. Should I split it into train/test sets before or after using `TfidfVectorizer`? As before, if I use `fit_transform(train_set)` and then `transform(test_set)`, terms in the test set that are not present in the training set will not be part of the bag of words. Moreover, the IDF part of the equation could cause data leakage if I use `fit_transform(dataset)` and split into train/test afterwards, because it would compute term frequencies over the whole corpus and apply them to the test set.
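For concreteness, a minimal scikit-learn sketch of the fit-on-train, transform-on-test pattern both questions describe (the toy sentences are invented):

```python
from sklearn.feature_extraction.text import CountVectorizer

train_texts = ["the cat sat", "the dog ran"]
test_texts = ["the bird flew"]            # "bird" and "flew" never seen in training

vec = CountVectorizer()
X_train = vec.fit_transform(train_texts)  # vocabulary comes from the training split only
X_test = vec.transform(test_texts)        # unseen test words are silently dropped

print(sorted(vec.vocabulary_))            # no 'bird', no 'flew'
print(X_test.toarray())                   # only the column for 'the' is nonzero
```

The same discipline applies to `TfidfVectorizer`: fitting on the training split only means the IDF weights are computed without any test-set statistics, which avoids the leakage concern in the second bullet.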
| Train/Test split before or after CountVectorizer and TfidfVectorizer? | CC BY-SA 4.0 | null | 2023-03-07T16:19:40.763 | 2023-03-07T16:19:40.763 | null | null | 369891 | [
"machine-learning",
"train-test-split",
"bag-of-words"
] |
608662 | 1 | null | null | 0 | 34 | Why are the standard errors from `gamlss` so much lower than the standard errors from `lmer`?
```
library(lme4)
library(lmerTest)
library(gamlss)
fit <- lmer(Reaction ~ Days + (1|Subject), data = sleepstudy)
summary(fit)
# Linear mixed model fit by REML. t-tests use Satterthwaite's method ['lmerModLmerTest']
# Formula: Reaction ~ Days + (1 | Subject)
# Data: sleepstudy
#
# REML criterion at convergence: 1786.5
#
# Scaled residuals:
# Min 1Q Median 3Q Max
# -3.2257 -0.5529 0.0109 0.5188 4.2506
#
# Random effects:
# Groups Name Variance Std.Dev.
# Subject (Intercept) 1378.2 37.12
# Residual 960.5 30.99
# Number of obs: 180, groups: Subject, 18
#
# Fixed effects:
# Estimate Std. Error df t value Pr(>|t|)
# (Intercept) 251.4051 9.7467 22.8102 25.79 <2e-16 ***
# Days 10.4673 0.8042 161.0000 13.02 <2e-16 ***
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# Correlation of Fixed Effects:
# (Intr)
# Days -0.371
fit <- gamlss(Reaction ~ Days + re(random = ~ 1|Subject),
data = sleepstudy,
family = "NO")
summary(fit)
# ******************************************************************
# Family: c("NO", "Normal")
#
# Call: gamlss(formula = Reaction ~ Days + re(random = ~1 |
# Subject), family = "NO", data = sleepstudy, opt = "optim")
#
# Fitting method: RS()
#
# ------------------------------------------------------------------
# Mu link function: identity
# Mu Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 251.4051 4.0759 61.68 <2e-16 ***
# Days 10.4673 0.7635 13.71 <2e-16 ***
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# ------------------------------------------------------------------
# Sigma link function: log
# Sigma Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 3.3817 0.0527 64.16 <2e-16 ***
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# ------------------------------------------------------------------
# NOTE: Additive smoothing terms exist in the formulas:
# i) Std. Error for smoothers are for the linear effect only.
# ii) Std. Error for the linear terms maybe are not accurate.
# ------------------------------------------------------------------
# No. of observations in the fit: 180
# Degrees of Freedom for the fit: 18.76598
# Residual Deg. of Freedom: 161.234
# at cycle: 2
#
# Global Deviance: 1728.238
# AIC: 1765.77
# SBC: 1825.689
# ******************************************************************
```
| Standard errors in GAMLSS vs LMER | CC BY-SA 4.0 | null | 2023-03-07T16:20:09.107 | 2023-03-07T16:36:10.650 | 2023-03-07T16:36:10.650 | 30855 | 30855 | [
"lme4-nlme",
"gamlss"
] |
608663 | 1 | null | null | 2 | 116 | I want to run a simulation study and I need to simulate data from the [Conway–Maxwell–Poisson distribution](https://en.wikipedia.org/wiki/Conway%E2%80%93Maxwell%E2%80%93Poisson_distribution).
However, the normalizing constant of the probability mass function is not available in closed form, so I do not know how to simulate from this distribution.
- Is there a method that I can use to simulate from this distribution?
- Is there any R package that implements this simulation?
| How to simulate from the Conway–Maxwell–Poisson distribution? | CC BY-SA 4.0 | null | 2023-03-07T16:34:04.087 | 2023-03-08T17:59:17.303 | null | null | 382608 | [
"r",
"simulation",
"conway-maxwell-poisson-distribution"
] |
608664 | 2 | null | 608564 | 0 | null | Update: I figured it out! Posting as an answer in case anyone else has this question. This is just an application of orthogonality properties of OLS.
For simplicity, assume $D$ consists of a single variable and $W$ contains a constant term.
Let $\hat\gamma$ be the vector of coefficients from regressing $D$ on $W$, and let $\hat\delta$ be the vector of coefficients from regressing $Y$ on $W$, and let $\langle \cdot, \cdot \rangle$ be the sample inner product $\langle E,F \rangle = \dfrac{1}{n} \sum E_iF_i$.
We then have
$$\hat{\beta} = \dfrac{\langle D - \hat \gamma W, Y - \hat \delta W \rangle }{|| D - \hat \gamma W ||^2 }$$
and
$$\tilde{\beta} = \dfrac{\langle D - \tilde\gamma W, Y - \tilde\delta W \rangle }{|| D - \tilde\gamma W ||^2 }$$
By the orthogonality properties of OLS, we have
$$|| D - \tilde \gamma W||^2 = || D - \hat \gamma W||^2 + ||(\tilde \gamma - \hat \gamma)W||^2$$
and
$$\langle D - \tilde \gamma W, Y - \tilde \delta W \rangle = \langle D - \hat \gamma W, Y - \hat \delta W \rangle + \langle (\hat\gamma - \tilde\gamma)W, (\hat\delta - \tilde\delta)W \rangle$$
So
$$\hat\beta - \tilde\beta = \dfrac{\langle D - \hat \gamma W, Y - \hat \delta W \rangle||(\tilde \gamma - \hat \gamma)W||^2 - || D - \tilde \gamma W||^2 \langle (\hat\gamma - \tilde\gamma)W, (\hat\delta - \tilde\delta)W \rangle}{\left( ||D - \tilde \gamma W||^2 + ||(\tilde \gamma - \hat \gamma)W||^2\right) ||D - \tilde \gamma W||^2} $$
But $\langle D - \hat \gamma W, Y - \hat \delta W \rangle$ converges in probability to the covariance of $\tilde D$ and $\tilde Y$, and $|| D - \tilde \gamma W||^2$ converges in probability to the variance of $D,$ so the result follows because $|| (\hat \gamma - \tilde \gamma)W||$ and $||(\hat \delta - \tilde \delta) W||$ are both $O(n^{-1/2}).$
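A quick numerical check of the underlying identity (my addition; plain Python, with $W$ consisting of an intercept and one covariate): the coefficient on $D$ from the full regression equals the coefficient from regressing residualized $Y$ on residualized $D$.

```python
import random

def ols(X, y):
    # Solve the normal equations (X'X) b = X'y by Gauss-Jordan elimination
    # with partial pivoting (tiny helper, avoids external libraries).
    n, k = len(X), len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(k)] for r in range(k)]
    b = [sum(X[i][r] * y[i] for i in range(n)) for r in range(k)]
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(k):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[r][k] / M[r][r] for r in range(k)]

random.seed(0)
n = 2000
w = [random.gauss(0, 1) for _ in range(n)]
d = [0.5 + 0.8 * wi + random.gauss(0, 1) for wi in w]
y = [2.0 * di - 1.0 * wi + random.gauss(0, 1) for di, wi in zip(d, w)]

# Coefficient on d from the full multiple regression of y on (1, d, w):
beta_full = ols([[1.0, di, wi] for di, wi in zip(d, w)], y)[1]

# Residualize d and y on (1, w), then regress residual on residual:
gam = ols([[1.0, wi] for wi in w], d)
delt = ols([[1.0, wi] for wi in w], y)
d_res = [di - gam[0] - gam[1] * wi for di, wi in zip(d, w)]
y_res = [yi - delt[0] - delt[1] * wi for yi, wi in zip(y, w)]
beta_fwl = sum(dr * yr for dr, yr in zip(d_res, y_res)) / sum(dr * dr for dr in d_res)

print(beta_full, beta_fwl)  # identical up to floating-point error
```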
| null | CC BY-SA 4.0 | null | 2023-03-07T16:45:09.667 | 2023-03-07T16:45:09.667 | null | null | 382529 | null |
608665 | 1 | null | null | 0 | 13 | I am creating a model and analyzing the relationships between the variables using partial least squares structural equation modeling. I have some questions regarding this method:
- What do negative values of the path coefficient indicate?
- Is it possible to construct a regression equation using the results of the structural equation?
- How can the results of the structural equation be used practically?
| partial least squares structural equation modeling | CC BY-SA 4.0 | null | 2023-03-07T16:54:40.847 | 2023-03-07T19:40:46.090 | 2023-03-07T19:40:46.090 | 53690 | 382609 | [
"structural-equation-modeling",
"partial-least-squares"
] |
608666 | 1 | null | null | 1 | 35 | I am trying to understand how to apply SPRT to the following case.
Random variable $X$ is exponentially distributed according to $f(x, \theta)$. Test the following hypotheses
$$
H_0 : \theta \ge \theta_0
$$
$$
H_1 : \theta < \theta_0
$$
However, every example I have found of SPRT uses hypotheses of the form $H_0:\theta=\theta_0$ and $H_1:\theta=\theta_1$, which leads to the use of the ratio of two probability density functions $f_0=f(x,\theta_0)$ and $f_1=f(x,\theta_1)$.
I cannot figure out how the hypotheses that I want to test can be applied to SPRT.
| Understanding the Sequential Probability Ratio Test | CC BY-SA 4.0 | null | 2023-03-07T17:09:08.323 | 2023-03-07T17:24:23.737 | 2023-03-07T17:24:23.737 | 362671 | 382610 | [
"probability",
"hypothesis-testing",
"sequential-analysis"
] |
608667 | 2 | null | 524561 | 1 | null | The Chow test is a bit different from other regression hypothesis tests.
Much of the time, such testing involves nested models, where one model drops out of the other by setting some coefficients equal to zero. When you test those and “accept” the null hypothesis (more on that in a moment), you go with the simpler model, as there is no statistical evidence that the extra coefficients are nonzero.
With the Chow test, you are testing if the coefficients in two regression models are equal, since you got them with different data. An example could be testing if the coefficients for dogs are the same as the coefficients for cats. The interpretation of accepting the null hypothesis is that the coefficients are not different, so combine the two data sets and fit away, since the dog and cat groups are not different.
Consequently, accepting the null hypothesis means that you go fit a third regression where you combine your two data sets and treat them as equivalent, since you accept the notion that they lead to equal regression coefficients.
However, you need to be careful with the idea of accepting the null hypothesis, because failing to reject and accepting the null hypothesis are not the same. In particular, there are multiple reasons why it might be difficult to detect a difference, with a small sample size and multicollinearity being possible culprits. If you have minimal power to detect a difference, then failing to find a difference is not impressive.
A way to think about it is this: if you report to someone that you found nothing, they would be quite reasonable to ask how hard you looked for a difference. Power quantifies how hard you looked for a difference between the regression coefficients.
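For concreteness, here is a sketch (my addition, written in plain Python) of how the Chow statistic itself is computed in the simple-regression case — fit each group separately, fit the pooled data, and compare the residual sums of squares:

```python
import random

def sse_simple(x, y):
    # Residual sum of squares from a simple OLS fit of y on (1, x).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))

random.seed(1)
n1 = n2 = 200
x1 = [random.gauss(0, 1) for _ in range(n1)]
y1 = [1.0 + 2.0 * xi + random.gauss(0, 1) for xi in x1]   # "cats": slope 2.0
x2 = [random.gauss(0, 1) for _ in range(n2)]
y2 = [1.0 + 0.5 * xi + random.gauss(0, 1) for xi in x2]   # "dogs": slope 0.5

k = 2  # parameters per regression (intercept + slope)
s_pool = sse_simple(x1 + x2, y1 + y2)                     # restricted (pooled) fit
s_split = sse_simple(x1, y1) + sse_simple(x2, y2)         # unrestricted (separate) fits
F = ((s_pool - s_split) / k) / (s_split / (n1 + n2 - 2 * k))
print(F)  # a large F rejects equality of the two groups' coefficients
```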
| null | CC BY-SA 4.0 | null | 2023-03-07T17:32:10.060 | 2023-03-07T17:32:10.060 | null | null | 247274 | null |
608668 | 2 | null | 608658 | 0 | null | ARIMA/SARIMA does not require any scaling. (Nor is it the case that "any dataset that goes as input to a Machine Learning model" needs to be scaled.)
(S)ARIMA can't deal with any predictors at all, so your question about encoding of categorical predictors is moot. There is (S)ARIMAX, or regression with (S)ARIMA errors. [These are different things.](https://robjhyndman.com/hyndsight/arimax/) If you fit such a model, categorical predictors will usually be encoded one-hot, but honestly, it does not matter a lot. Your software should do this under the hood.
(S)ARIMA expects data with a predefined seasonal frequency, i.e., number of periods per season. As such, there is no notion of "dates". You might have a vector of observations with a frequency of 7, and that might be daily data with weekly seasonality. Or you might have a frequency of 4, and your data might be quarterly with yearly seasonality - or it might be aggregates per six-hour-buckets with daily seasonality! It doesn't matter, and (S)ARIMA won't care. All it cares about is the frequency.
([Note that (S)ARIMA has major problems with "long" seasonality, e.g., daily data with yearly seasonality.](https://robjhyndman.com/hyndsight/longseasonality/) Daily data with weekly seasonality is fine, monthly data with yearly seasonality too.)
| null | CC BY-SA 4.0 | null | 2023-03-07T17:33:38.880 | 2023-03-07T17:33:38.880 | null | null | 1352 | null |
608669 | 1 | null | null | 0 | 14 | I am using multiple linear regression to work out whether the glycaemic variables influence my outcome while holding body weight and menstrual cycle constant.
Here's the thing, for these glycaemic variables, I've measured several related factors, such as glucose peak and glucose nadir, which are highly correlated. So, I can't include all of these in one multiple linear regression. I'm considering running several multiple linear regressions to see if each factor affects my outcome. However, I wonder if it would increase the type 1 error if I performed so many calculations.
Or, what should I do if I want to explore the influence of each glycaemic factor on my outcome?
Many thanks.
| how to decide which one you are going to include in multiple linear regression if you have multiple related variables link to one predictor? | CC BY-SA 4.0 | null | 2023-03-07T17:37:06.063 | 2023-03-07T17:37:06.063 | null | null | 380060 | [
"regression"
] |
608670 | 1 | null | null | 1 | 56 | I am new to statistics and a beginner in DLNM. I am trying to understand the different functions of DLNM.
In the 'crosspred' of DLNM, the exponentiated regression coefficient from the Poisson models, exp(fit), is the relative risk/rate ratio (RR), while with GAM, exp(fit) is at the original scale of the response variable.
| distributed lag non-linear models, DLNM r package | CC BY-SA 4.0 | null | 2023-03-07T17:50:25.030 | 2023-03-07T17:50:25.030 | null | null | 382614 | [
"generalized-linear-model",
"biostatistics",
"poisson-regression",
"generalized-additive-model",
"relative-risk"
] |
608672 | 2 | null | 608653 | 5 | null | Essentially translating whuber's comment into analysis and using point symmetry of the cdf around $(0,1/2)$, $F(k)=1-F(-k)$ or $F(-k)=1-F(k)$,
$$
\begin{align*}
\int_{-k}^{k}F(x)dx&=\int_{-k}^{0}F(x)dx+\int_{0}^{k}F(x)dx\\
&=\int_{0}^{k}[1-F(x)]dx+\int_{0}^{k}F(x)dx\\
&=\int_{0}^{k}1dx\\
&=k
\end{align*}
$$
Two examples:
```
k <- 1
> integrate(pnorm, -k, k)$value
[1] 1
> integrate(punif, -k, k, min=-4,max=4)$value
[1] 1
```
My initial, more clumsy solution:
Also, by symmetry, a (say) convex part of the cdf between $-k$ and $0$ will be offset by a concave part between $0$ and $k$ (the areas between the red and light-blue lines in the plot below are equal), so the integral equals the area of a trapezoid on $[-k, k]$: a rectangle of height $1-F(k)$ plus a triangle of height $F(k)-(1-F(k))=2F(k)-1$. All in all, the area is
$$
2k(1-F(k))+2k\frac{2F(k)-1}{2}=k
$$
Schematically:
[](https://i.stack.imgur.com/5Ap1o.jpg)
```
stddev <- .75
x <- seq(-2, 2,by=0.01)
plot(x, pnorm(x, sd=stddev), type="l", lwd=2, col="lightblue")
segments(k, 0, k, pnorm(k, sd=stddev),lty=2)
segments(-k, 0, -k, 1-pnorm(k, sd=stddev),lty=2)
segments(-k, 1-pnorm(k, sd=stddev), k, pnorm(k, sd=stddev), lty=1, lwd=2, col="red")
abline(v=0, lty=2)
segments(-k, 1-pnorm(k, sd=stddev), k, 1-pnorm(k, sd=stddev),lty=2)
segments(0, pnorm(k, sd=stddev), k, pnorm(k, sd=stddev),lty=2)
text(-0.1, pnorm(k, sd=stddev), "F(k)")
text(-k-.25, 1-pnorm(k, sd=stddev), "1-F(k)")
```
| null | CC BY-SA 4.0 | null | 2023-03-07T18:03:45.877 | 2023-03-09T16:00:12.297 | 2023-03-09T16:00:12.297 | 67799 | 67799 | null |
608673 | 1 | null | null | 0 | 78 | So I'm a 3rd-year undergraduate doing my thesis in football score models right now. In my thesis I want to include a proof of what the link function for the Poisson distribution is and why it relates the mean to our linear predictors. I'm almost there, but there is one part that most literature seems to gloss over.
So we have our linear predictor $\eta = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip}$ and our natural parameter $\theta$. Now I know why, if $\eta = \theta$, our link function is canonical, but my question is: how do we prove that $\eta = \theta$ in the case of the Poisson distribution? Most literature just states that the canonical link sets this equality, or that we can assume this equality for distributions that are members of the exponential family, but doesn't actually prove why. Can anyone help?
| How to prove the Poisson link function is a canonical link function? | CC BY-SA 4.0 | null | 2023-03-07T18:05:35.397 | 2023-03-07T18:50:05.343 | null | null | 382617 | [
"generalized-linear-model",
"poisson-distribution",
"exponential-family",
"poisson-process",
"link-function"
] |
608674 | 1 | 608708 | null | 4 | 340 | Sample $(X_1, X_2,\ldots, X_n)^T\sim{\mathcal{N}(\textbf{0}, \Sigma)}$. What is the expected cross-sectional variance of $(X_1, X_2, \ldots, X_n)^T$? In other words, if
$$
S^2 = \frac{1}{n}\sum_{k = 1}^n \left(X_k - \bar{X}\right)^2\qquad\text{and}\qquad \bar{X} = \frac{1}{n}\sum_{k = 1}^n X_k,
$$
what is $E[S^2]$?
As an example, I'll show the relatively trivial two dimensional case. Suppose
$$(X_1, X_2)^T\sim{\mathcal{N}\left(\textbf{0}, \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2\end{pmatrix}\right)}.
$$
Then the cross-sectional mean is
$$
\bar{X} = \frac{X_1+X_2}{2}.
$$
Using the cross-sectional mean, the cross-sectional variance is
$$
S^2 = \frac{1}{2}\left(X_1 - \bar{X}\right)^2 + \frac{1}{2}\left(X_2 - \bar{X}\right)^2 = \left(\frac{X_1 - X_2}{2}\right)^2
$$
Hence, the expected cross-sectional variance is
$$
E\left[S^2\right] = E\left[\left(\frac{X_1 - X_2}{2}\right)^2\right] = \frac{\sigma_1^2 - 2\rho\sigma_1\sigma_2+\sigma_2^2}{4}.
$$
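A Monte Carlo sanity check of this two-dimensional formula (an illustrative sketch, plain Python, using the Cholesky construction of correlated normals):

```python
import math, random

# Check E[S^2] = (s1^2 - 2*rho*s1*s2 + s2^2)/4 for the 2-D case by simulation.
random.seed(7)
s1, s2, rho = 1.0, 2.0, 0.5
n = 200_000
total = 0.0
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1 = s1 * z1
    x2 = s2 * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)  # Cholesky construction
    total += ((x1 - x2) / 2) ** 2                         # cross-sectional variance S^2
mc = total / n
exact = (s1 ** 2 - 2 * rho * s1 * s2 + s2 ** 2) / 4       # = 0.75 here
print(mc, exact)
```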
| Variance Among Coordinates of Multivariate Normal | CC BY-SA 4.0 | null | 2023-03-07T18:15:13.773 | 2023-03-10T15:02:26.993 | 2023-03-10T15:02:26.993 | 382612 | 382612 | [
"variance",
"multivariate-normal-distribution"
] |
608675 | 2 | null | 493299 | 0 | null | Simple linear regression is a special case of multiple linear regression that only has one feature ($x$ variable). Consequently, any theorem that applies to multiple linear regression must apply to simple linear regression, so, yes, the Gauss-Markov assumptions are the same.
It then becomes an issue of how to translate the multiple linear regression notation into simple linear regression. The answer is that your model matrix just has a column of $1$s for the intercept and then one column for your lone feature. Depending on how you want to express the assumptions, you might want to write the matrix as $n\times 2$, or you might just want to say that you have feature $X_1$ and that’s all.
| null | CC BY-SA 4.0 | null | 2023-03-07T18:19:35.877 | 2023-03-07T18:19:35.877 | null | null | 247274 | null |
608676 | 2 | null | 491450 | 1 | null | It is a fact from calculus that $\arg\min$ does not change when we apply an increasing function like multiplication by a positive number. For instance, $f(x)=x^2$ and $f(x)=7x^2$ have the same minimizer, $x=0$.
Consequently, the sum of squared errors (technically residuals) and the mean of squared residuals have extremely similar behavior: whatever regression parameters or predictions minimize one also minimize the other.
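A tiny numerical illustration of this invariance — minimizing SSE and MSE over the same grid of candidate slopes returns the same slope:

```python
# Toy data and a grid search over candidate slopes for y = b*x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.1, 1.9, 3.2, 3.9]
slopes = [i / 1000 for i in range(2001)]  # candidate slopes in [0, 2]

def sse(b):
    return sum((y - b * x) ** 2 for x, y in zip(xs, ys))

best_sse = min(slopes, key=sse)                          # minimize the sum
best_mse = min(slopes, key=lambda b: sse(b) / len(xs))   # minimize the mean
print(best_sse, best_mse)  # the same minimizer
```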
The reason why people would divide by the sample size is to keep a large sample size from giving an enormous number. The typical squared residual could be quite small, but the sum of a million or billion small numbers might wind up being quite large. Additionally, there is a relationship between the mean squared error (again, technically residuals) and the variance of the residuals in that the mean squared error formula is exactly the “population” variance of the residuals.
For instance, imagine your boss asking you about model performance. Wouldn’t you prefer to report, and your boss find more useful, information about how large a typical residual is, rather than the sum of all of the squared residuals?
| null | CC BY-SA 4.0 | null | 2023-03-07T18:29:24.423 | 2023-03-07T18:29:24.423 | null | null | 247274 | null |
608679 | 2 | null | 491174 | 1 | null | Yes, these refer to the same equation, with the possible exception being multiplication by a positive number. For a sample size of $N$, predictions $\hat p_i\in[0,1]$, and true values $y_i\in\{0,1\}$, the log loss is:
$$
-\dfrac{1}{N}\overset{N}{\underset{i=1}{\sum}}\left[
y_i\log(\hat p_i) + (1-y_i)\log(1-\hat p_i)
\right]
$$
(It is possible that some will not multiply by the $\frac{1}{N}$. This doesn’t matter for optimization, since multiplying by a positive number does not change the values giving the optimum, but watch out for it when it comes to reporting the model performance. Whatever software you use should document what it does, though I would assume $\frac{1}{N}$ if the documentation does not mention anything.)
It is a convention that $0\times\log(0)$ is taken to be $0$, should the model make a probability prediction of $0$ or $1$. However, $1\times\log(0)$ diverges, so the loss becomes infinite — hence the extremely harsh penalty for confident but incorrect predictions.
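A direct implementation of the formula (with the common clipping trick, which I'm adding here, to avoid evaluating $\log(0)$ in floating point):

```python
import math

def log_loss(y, p, eps=1e-15):
    # Mean negative log-likelihood; eps-clipping keeps log() finite.
    total = 0.0
    for yi, pi in zip(y, p):
        pi = min(max(pi, eps), 1 - eps)
        total += yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
    return -total / len(y)

# Two predictions, both "right" with probability 0.9:
print(log_loss([1, 0], [0.9, 0.1]))  # = -log(0.9) ≈ 0.1054
```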
| null | CC BY-SA 4.0 | null | 2023-03-07T18:47:49.080 | 2023-03-07T18:47:49.080 | null | null | 247274 | null |
608680 | 2 | null | 608673 | 2 | null | In general, if the conditional mass/density function for a single observation is of the form
$$
f_{Y_i|\mathbf{X}_i}(y_i|\mathbf{x}_i)=h(y_i, \phi) \exp\left\{\frac{\theta_iy_i-b(\theta_i)}{\tau(\phi)} \right\}
$$
where $\theta_i$ depends on $\mathbf{x}_i$, we say that the distribution belongs to an exponential dispersion family. The parameters $\theta_i$ and $\phi$ are location and scale parameters.
We can then show that the mean function is
$$
\mu(\mathbf{x}_i):=\mathbb{E}(Y_i|\mathbf{X}_i=\mathbf{x}_i)=b'(\theta_i)
$$
the derivative of the function $b$.
If we then take the location parameter to be a linear function of the explanatory variables, $\theta_i = \beta_0 + \beta_1 x_{i1}+\ldots + \beta_p x_{ip} = \mathbf{x}_i \mathbf{\beta}$, we have
$$
\mu(\mathbf{x}_i)=b'(\mathbf{x}_i \mathbf{\beta}) \Rightarrow b'^{-1}(\mu(\mathbf{x}_i)) = \mathbf{x}_i \mathbf{\beta}
$$
Thus, $b'^{-1}$ is a natural choice of link function and is known as the canonical link.
Try writing the Poisson mass function in this form and you can verify the result for this particular case.
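Sketching that exercise for the Poisson case: with mean $\mu_i$,
$$
f_{Y_i}(y_i)=\frac{e^{-\mu_i}\mu_i^{y_i}}{y_i!}=\frac{1}{y_i!}\exp\left\{y_i\log\mu_i-\mu_i\right\}
$$
which matches the exponential dispersion form with $\theta_i=\log\mu_i$, $b(\theta_i)=e^{\theta_i}$, $\tau(\phi)=1$, and $h(y_i,\phi)=1/y_i!$. Then $b'(\theta_i)=e^{\theta_i}=\mu_i$, so the canonical link is $b'^{-1}(\mu_i)=\log\mu_i$ — the familiar log link.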
| null | CC BY-SA 4.0 | null | 2023-03-07T18:50:05.343 | 2023-03-07T18:50:05.343 | null | null | 238285 | null |
608681 | 2 | null | 490785 | 0 | null | Those features are in the model, just with their coefficients set to zero. Consequently, the software is being quite reasonable in expecting those variables in the data frame for which you want to make predictions.
I see three options.
- Supply those variables, knowing the values will not affect the predictions (since the coefficients are zero).
- Make up values for those variables (such as all zero), since their values do not affect the predictions. This could be useful if you need to do complex data wrangling to access those values, resulting in either slow performance or irritation to the programmer. This could be useful, too, if you stop collecting data on the variables that do not survive the LASSO estimation.
- Do the vector multiplication on your own, outside of the usual prediction method. This will involve matrix multiplication of the data frame with the “surviving” features times the vector of nonzero LASSO-estimated regression coefficients. You’re always allowed to say that the model is something like $\hat y_i=\hat\beta_0+0x_{i1}+\hat\beta_2x_{i2}=\hat\beta_0+\hat\beta_2x_{i2}$.
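A sketch of that third option (the feature names and coefficient values below are hypothetical, purely for illustration):

```python
# Predict using only the features whose LASSO coefficients survived.
intercept = 1.5
coef = {"x2": 0.8, "x5": -0.3}   # nonzero coefficients only (made-up values)

def predict(row):
    # row is a mapping from feature name to value; dropped features are ignored.
    return intercept + sum(b * row[name] for name, b in coef.items())

print(predict({"x2": 2.0, "x5": 1.0}))  # 1.5 + 0.8*2.0 - 0.3*1.0 = 2.8 (up to float rounding)
```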
| null | CC BY-SA 4.0 | null | 2023-03-07T18:56:14.860 | 2023-03-07T18:56:14.860 | null | null | 247274 | null |
608682 | 1 | null | null | 0 | 30 | Goal: I am trying to compare the proportion of participants who choose one of the combined options (A-B and B-A) in questions where they see A-First vs questions where they see B-First.
Data: I have data that looks like the following table. Each participant answers 3 questions. For each question, they have 3 choices: A, B, and either A-B or B-A. When A-First is 1 and B-first is 0, they see option A-B. When A-First is 0 and B-First is 1, they see option B-A.
```
ParticipantID Question A-First B-First Choice
1 1 1 0 A
1 2 0 1 A
1 3 0 1 B-A
2 1 1 0 B
2 2 1 0 A
2 3 0 1 B-A
3 1 0 1 B
3 2 1 0 A-B
3 3 1 0 B
```
Considerations: I was considering using a Chi-Squared test, but my sense is this would not be appropriate here because there is overlap in the two groups being compared. Depending on the question (1, 2, or 3), each participant is sometimes in the A-First group and sometimes in the B-First group.
Would a McNemar's test be appropriate here? Or something else? This study design differs from the other examples I've been able to find online, such as the example on the [McNemar's test Wikipedia page.](https://en.wikipedia.org/wiki/McNemar%27s_test)
| Chi-square test, McNemar test, or something else for comparing proportions of overlapping groups | CC BY-SA 4.0 | null | 2023-03-07T18:57:09.027 | 2023-03-07T18:57:09.027 | null | null | 194797 | [
"chi-squared-test",
"proportion",
"percentage",
"mcnemar-test"
] |
608683 | 2 | null | 608651 | 0 | null | This seems like a problem for a phylogenetic linear mixed-effects model. Have you tried `glmmTMB` or Bayesian alternatives, like `brms`? I'd build a binomial GLMM with repeated measures (= multiple observations within species), including country as a random intercept, and then include a phylogenetic VCV to account for phylogenetic autocorrelation (if your data set has > 30 species); if not, I'd just include species as a crossed random effect, given that your data shows the same species in different countries. For the Bayesian alternative, take a look at [this vignette](https://cran.r-project.org/web/packages/brms/vignettes/brms_phylogenetics.html). For glmmTMB, look [here](https://cran.r-project.org/web/packages/glmmTMB/vignettes/covstruct.html).
Also, make sure you diagnose your model for zero inflation and correct for that, if needed. I'd use `DHARMa` [in this case](https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html#zero-inflation-k-inflation-or-deficits).
| null | CC BY-SA 4.0 | null | 2023-03-07T19:03:03.300 | 2023-03-07T19:03:03.300 | null | null | 103642 | null |
608684 | 1 | 608885 | null | 0 | 49 | Consider a general problem where we try to model an output variable $Y$ with several independent variables $X_1$, $X_2$, $X_3$, etc. that are binary or continuous. From a previous study, we know that the values of the continuous variable $X_1$ are affected by a binary variable $Z$, but $Z$ has no effect on the output. How should I model this in R?
- Y ~ X1:Z + X2 + X3
- Y ~ X1:Z + X1 + X2 + X3
- Y ~ X1:Z + Z + X1 + X2 + X3
Here is my concrete example as it might help : $X_1$, $X_2$, $X_3$ are features extracted from medical imaging data such as for each patient the mean or the maximum of the values in a region of interest. $Y$ could be either a binary output describing if the tumor is aggressive or not, or survival data such as overall survival. $Z$ is a binary variable that describes if the patient has got a premedication before the image acquisition. We know from a previous study that if the premedication is given to a patient we will observe higher $X_1$ values than in the absence of premedication.
My instinct tells me to use option 1 because $Z$ has no impact on $Y$. It only depends on whether the premedication was given, which depends on the date of the image acquisition (the protocol changed over time), so we can assume, in my opinion, that this is random. But from what I read in an [older post](https://stats.stackexchange.com/questions/11009/including-the-interaction-but-not-the-main-effects-in-a-model), it doesn't sound like a good idea to omit the main effect term.
| Should main effect of an interaction term with no relation with output be included? | CC BY-SA 4.0 | null | 2023-03-07T19:34:10.823 | 2023-03-09T14:32:14.410 | null | null | 382619 | [
"r",
"regression",
"interaction"
] |
608685 | 2 | null | 180572 | 2 | null | This calls for a multinomial logistic regression, possibly with polynomial contrasts to account for the fact that the predictor is ordinal (it may or may not be better to simply treat the 3 levels as categorical).
To account for the non-independence (repeated measures), you would probably need to add a random effect (intercept and perhaps slope).
| null | CC BY-SA 4.0 | null | 2023-03-07T19:38:08.797 | 2023-03-07T19:38:08.797 | null | null | 121522 | null |
608686 | 1 | 608692 | null | 3 | 142 | I was recently at a seminar where a statistician used chi-squared divided by the degrees of freedom (DF) as an assessment of model performance in a logistic regression. They suggested that having a $\frac{\chi^2}{DF}$ close to 1 suggested good performance in the model.
I am having some difficulty googling more information around this measure of goodness of fit and was hoping to get some guidance or resources.
One source I found stated the following:
>
"A $\chi^2$ statistic with k degrees of freedom, d.f., is the sum of the squares of k random unit-normal deviates. Therefore its expected value is k, and its model variance is 2k. This provides the convenient feature that the expected value of a mean-square statistic, i.e., a $\chi^2$ statistic divided by its d.f. is 1." ~ https://www.rasch.org/rmt/rmt171n.htm
And there is some mentioned of dividing test statistics by degrees of freedom in [SAS documentation guidelines](https://support.sas.com/resources/papers/proceedings14/1485-2014.pdf). But I am having difficulty finding a more thorough and satisfactory answer.
Why does having $\frac{\chi^2}{DF}$ close to 1 suggest good model performance (goodness-of-fit)?
Related but different questions on $\frac{\chi^2}{DF}$ are found [here](https://math.stackexchange.com/questions/3988423/what-is-the-distribution-for-a-chi-squired-variable-divided-by-its-degrees-of-fr) and [here](https://stats.stackexchange.com/questions/452691/divide-a-chi-squared-distribution-by-its-degrees-of-freedom).
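As a quick empirical illustration of the quoted fact (my own toy simulation, not from the sources above): a chi-squared draw with $k$ degrees of freedom is a sum of $k$ squared standard normals, so $\chi^2/k$ averages to about 1 with variance about $2/k$.

```python
import random

random.seed(3)
k, reps = 10, 20_000
vals = []
for _ in range(reps):
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(k))  # chi-squared with k d.f.
    vals.append(chi2 / k)                                  # mean-square statistic
mean = sum(vals) / reps
var = sum((v - mean) ** 2 for v in vals) / reps
print(mean, var)  # mean near 1, variance near 2/k = 0.2
```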
| Why is chi-squared divided by degrees of freedom a measure of model goodness of fit? | CC BY-SA 4.0 | null | 2023-03-07T19:50:00.290 | 2023-03-07T21:35:08.990 | 2023-03-07T21:30:18.980 | 302557 | 302557 | [
"logistic",
"chi-squared-test",
"goodness-of-fit"
] |
608687 | 1 | null | null | 1 | 17 | When we have categorical features in a regression (say a generalized linear model for now), it is typical to let one category be subsumed by the intercept and then code binary indicator variables for each of the remaining categories, so “cat” could be subsumed by the intercept, “dog” would be coded as $(1,0)$, and “horse” would be coded as $(0,1)$.
If these were ordered categories (e.g., small/medium/large), such an encoding would lose the order. A remedy I have seen is to build up to the highest level, so the lowest level is subsumed by the intercept, then the second-lowest level has one $1$, the next-lowest level has two $1$s, and so on: medium would be $(1,0)$, and then large would be $(1,1)$.
This seems to take care of the issue. We get an indicator for each level, but we build up to the higher levels to enforce an order. The “large” size had its indicator turned on, yes, but it also turns on an indicator for the lower size(s).
Is this the standard way to approach ordinal encoding? I do believe I have read about it (perhaps on Cross Validated). What would be the drawbacks to this kind of encoding m, and what remedies to those drawbacks exist?
| Encoding ordinal categories as features | CC BY-SA 4.0 | null | 2023-03-07T19:50:06.393 | 2023-03-07T19:50:06.393 | null | null | 247274 | [
"regression",
"generalized-linear-model",
"ordinal-data",
"feature-engineering"
] |
608688 | 1 | null | null | 0 | 18 | I have weight as the response variable, which is continuous, and 3 explanatory variables: Parity (count), Age (continuous), and Sex (factor).
I want to fit a GLM and use an analysis of deviance to select the best model from all plausible candidate models.
The models are nested.
How do I go about this? Assume that the assumptions are met, since normality is not necessary.
Please help me.
| How to fit a nested glm model using deviance for selectio? | CC BY-SA 4.0 | null | 2023-03-07T19:52:07.463 | 2023-03-13T18:47:27.600 | 2023-03-13T18:47:27.600 | 11887 | 337832 | [
"r",
"generalized-linear-model",
"deviance",
"nested-models"
] |
608689 | 1 | 608690 | null | 2 | 144 | I just recently started working with logistic regression, and I'm struggling with the interpretation of the results.
Say I have brain disease (BD) as an outcome and gestational age (GE) as an explanatory variable. The OR is 0.99. I have used R to calculate this:
```
gl1 <- glm(BD~GE, data, family = "binomial")
```
How do I know what my reference group is in this case? Does the model pick a reference for me? Individuals with the disease are coded as 1, and individuals with no disease as 0. The clinical theory is that individuals with a low GE are more likely to get BD.
But this doesn't make sense to me in this case. Every time I Google this issue, I get: "every increase of GE is associated with 0.99 times the odds of BD". But shouldn't it be the opposite? That is, every decrease of GE is associated with 0.99 times the odds of BD?
| Odds ratio from logistic regression isn't negative when it should be | CC BY-SA 4.0 | null | 2023-03-07T19:52:28.297 | 2023-03-07T20:15:00.157 | 2023-03-07T20:06:30.493 | 7290 | 382622 | [
"r",
"logistic",
"generalized-linear-model",
"interpretation",
"odds-ratio"
] |
608690 | 2 | null | 608689 | 8 | null | It may help to read more about how odds work. A place to start might be: [Interpretation of simple predictions to odds ratios in logistic regression](https://stats.stackexchange.com/q/34636/7290).
To answer your specific question, $0.99$ is a negative relationship. Every time you increase gestational age by $1$ unit (week?) the odds of having a brain disease is multiplied by $0.99$. That's less than $1.0$, so the odds are decreasing. To illustrate, let's imagine the odds of brain disease for babies born at $30$ weeks is $1.0$. Now, let's see what happens to the odds of brain disease as babies are born later (less early):
```
week odds(BD)
[1,] 30 1.0000000
[2,] 31 0.9900000
[3,] 32 0.9801000
[4,] 33 0.9702990
[5,] 34 0.9605960
[6,] 35 0.9509900
[7,] 36 0.9414801
[8,] 37 0.9320653
[9,] 38 0.9227447
[10,] 39 0.9135172
[11,] 40 0.9043821
```
The odds are in fact decreasing.
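That table is just repeated multiplication by the odds ratio; for example:

```python
# Multiply the (hypothetical) week-30 odds by OR = 0.99 once per extra week.
base_odds, odds_ratio = 1.0, 0.99
odds = {week: base_odds * odds_ratio ** (week - 30) for week in range(30, 41)}
print(odds[40])  # 0.99**10 ≈ 0.9043821, matching the last row of the table
```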
| null | CC BY-SA 4.0 | null | 2023-03-07T20:08:15.827 | 2023-03-07T20:15:00.157 | 2023-03-07T20:15:00.157 | 7290 | 7290 | null |
608691 | 1 | null | null | 1 | 53 | I recently came across a setup of differences-in-differences which I have previously wondered about, but had not seen before in a published study. Consider the following setting:
Outcome: measured at the monthly level (e.g., employment rate) across multiple units (e.g., states). Data is available across multiple years.
Treatment: policy that impacted all the units in year T' after month t' (e.g., the pandemic in march 2020).
Diff-in-diff design: we will use the monthly outcome data in years before T' as a control group. That is we assume in the absence of the intervention, the monthly outcome data follows the same trend in year T' as say the previous year. So our DD estimate will be the pre- and post- monthly outcome change in year T' relative to say the previous year T'-1 where the policy was never introduced.
So far I have found two papers published in economic journals that have used this approach:
- Metcalfe et al. (2011, The Economic Journal) to study the impact of the 9/11 attacks on subjective well-being.
- Brodeur et al. (2021, Journal of Public Economics) to study the impacts of COVID-19 stay-at-home mandate on mental health outcomes.
My question is simply whether this type of difference-in-difference design is well known, or if it has a special name. It differs from the conventional design which leverages variation in treatment status across units post-treatment. I found this Stata list post where someone wanted to use this type of design but was being criticized as being wrong by multiple active stata list posters:
[https://www.statalist.org/forums/forum/general-stata-discussion/general/1651271-difference-in-differences-and-panel-data](https://www.statalist.org/forums/forum/general-stata-discussion/general/1651271-difference-in-differences-and-panel-data)
The design introduced above personally seems intuitive to me. But I wasn't sure if there is some well known criticism against this type of design? I don't recall ever seeing it an econometrics textbook.
| Difference-in-differences with monthly data where the previous untreated year is used as the control group | CC BY-SA 4.0 | null | 2023-03-07T20:11:56.573 | 2023-03-07T21:21:58.767 | 2023-03-07T21:21:58.767 | 153998 | 153998 | [
"regression",
"panel-data",
"difference-in-difference",
"economics"
] |
608692 | 2 | null | 608686 | 2 | null | A couple preliminary points:
First, as gung notes, model performance is not the same as goodness of fit. The latter is one aspect of the former - all else being equal, a model that fits the data better is probably a better-performing model - but not the whole picture.
Second, when people say $\chi^2/DF \approx 1$ is an indicator of good model fit, what does $\chi^2$ mean and where did it come from? The answer is that, to assess goodness of fit, we often look at the residual sum of squares (RSS):
$$RSS = \sum_{i=1}^n r_i^2 = \sum_{i=1}^n (\hat{y}_i - y_i)^2$$
where $y_i$ is the actual and $\hat{y}_i$ is the model predicted value. Under certain conditions, the RSS is exactly or approximately $\chi^2$ distributed. This property of the RSS allows us to assess the quality of the model fit against a theoretical expectation.
# Linear Model with Gaussian Errors
To see how we can use the RSS and $\chi^2$ to assess goodness of fit, let's now ask: if we were to happen upon a good model, how big would we expect the RSS to be? For example, suppose we have $n = 1,000,000$ data points that fit the standard linear model perfectly:
$$y_i = ax_i + b + e_i$$
with $e_i$ i.i.d. standard normal. Suppose we're very lucky to have an "oracle", a magic algorithm which always finds the exact true values of $a$ and $b$ regardless of the noise in the data. Then the residuals of our oracular "fit" would be precisely
$$r_i = y_i - ax_i - b = e_i$$
and the RSS would be precisely $\sum_i e_i^2$. Since $\chi^2$ is by definition the sum of squared i.i.d. standard normal random variables, our RSS is $\chi^2$-distributed with $k = n = 1,000,000$ degrees of freedom.* Because $k$ is so large in our example, $\chi^2$ is also very close to normally distributed with mean $k$ and variance $2k$. This means the RSS is approximately normal with mean $k=1,000,000$ and standard deviation $(2k)^{1/2} \approx 1414$, and RSS/k is also approximately normal with mean 1 and standard deviation $(2k)^{1/2}/k \approx 0.0014$. With RSS/k having such a small standard deviation, for it to deviate even 0.01 from its expected value of 1 would be more than seven standard deviations away from the expected value, and therefore extremely unlikely. Thus, the realized value of RSS/k is almost certainly within the interval 0.99-1.01.
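If you want to see that concentration for yourself, here is a quick simulation (my illustration, not part of the original argument) of the oracle scenario:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 1_000_000
e = rng.standard_normal(k)       # the "oracle" residuals: exactly the noise terms
rss_over_k = np.sum(e**2) / k    # approximately N(1, 2/k), i.e. sd of about 0.0014

print(rss_over_k)  # almost certainly between 0.99 and 1.01
```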
This example suggests that, if we have lots of data and are able to fit a perfect model, RSS/k should be very close to 1. In the real world, models rarely fit perfectly, and a model that fails this test is not necessarily bad. But it can be a useful diagnostic - if it's not close to 1, it may be worth plotting the residuals or doing other forms of digging in.
*In this thought experiment, we have an oracle which gives us the true value of $a$ and $b$ regardless of the noise in the data. In practice, one must make an adjustment for the fact that the estimated $a$ and $b$ are not exactly right and are influenced by the noise in the data, which makes the RSS lower than it would be if we had the oracle. That's why when we do a real fit, we have to subtract off the number of parameters to obtain the degrees of freedom, and we get $k = n-2$ rather than $k = n$ for a two-parameter linear fit like this example.
# Logistic and other models
The theoretical results used in the argument above are available in more general situations. Actually, any time you (1) fit a model to i.i.d. data with maximum likelihood, (2) have a lot of data, and (3) you know the structure of the true model, the residual sum of squares will have an approximate $\chi^2$ distribution. This follows from more [general results](https://www.stat.berkeley.edu/%7Ebartlett/courses/2013spring-stat210b/notes/26notes.pdf) for likelihood ratio tests.
| null | CC BY-SA 4.0 | null | 2023-03-07T20:19:03.033 | 2023-03-07T21:35:08.990 | 2023-03-07T21:35:08.990 | 11646 | 11646 | null |
608693 | 1 | null | null | 0 | 25 | I have a dataset where the first 6 columns correspond to binary entries referring to sick/not sick and 2 additional columns with age and a specific score (dim of the dataset 60x8). I need to generate a correlation matrix from the data. My dataset is pretty small (60 observations) and it doesnt follow a normal distribution. Can I use Kendall's correlation to compute the coefficients for binary and continuous variables? Google keeps telling me that I should use point-biserial correlation for this, but it seems to follow Pearson's correlation which is suited for normally distributed data - sadly not my case.
Im thankful for any critic and tips.
I tried Kendall's vs Spearman's correlation and concluded that Kendall's is better suited due to the small dataset size.
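For what it's worth, Kendall's tau between a binary and a continuous column is mechanically straightforward to compute; this is just an illustrative snippet with made-up data, not a recommendation over the point-biserial approach:

```python
from scipy.stats import kendalltau

sick = [0, 0, 0, 1, 1, 1]               # binary indicator (made-up data)
score = [1.2, 2.5, 3.1, 4.0, 5.5, 6.7]  # continuous variable (made-up data)

tau, p_value = kendalltau(sick, score)  # tau-b, which handles the ties in `sick`
print(tau)  # about 0.775 for this perfectly separated toy example
```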
| Can I use Kendall's correlation to determine the correlation between continuous and binary variables? | CC BY-SA 4.0 | null | 2023-03-07T20:19:34.923 | 2023-03-07T20:19:34.923 | null | null | 382623 | [
"correlation",
"binary-data",
"continuous-data",
"computational-statistics"
] |
608694 | 1 | 608747 | null | 11 | 5811 | I've encountered an interesting discussion at work on interpretation of precision (confusion matrix) within a machine learning model. The interpretation of precision is where there is a difference of opinion so my description centers a little bit simplified around precision.
The problem discussed with some numbers for reference:
Suppose we have a machine learning model A for binary classification. The training dataset has 100,000 datapoints. The data is quite imbalanced: 5% is classified as 1, 95% as 0. The model is tested on another (unseen) dataset of 50,000 datapoints. For evaluation, a confusion matrix is made to evaluate model A. Precision (TP/(TP+FP)) = 30%. From here on there is a difference in view between the data scientists (both camps are intelligent people).
Group 1: We think the model is useful. While precision is low (30%), it is quite higher than random (5%). Therefore the model has some value. We can use the output the model generates and can expect the model to pinpoint datapoints which on average have a 30% probability of being a 1.
Group 2: You can not use this model. Precision needs to be at minimum 70-80% for a model to be useful. The point is, precision only measures the model, and not the underlying data. Therefore one can only use models with a minimum of 70-80% precision. Balanced or imbalanced data doesn't matter.
I myself find myself more in group 1. So if someone can explain why group 2 is right (if they are right) I would be happy.
More context: We produced a model to pinpoint locations where inspectors can find objects with an error. In the past (all the datapoints), inspectors found an error in, on average, 5% of their visits. Model A had a precision of 30%. So the reasoning of group 1 is that if inspectors only go to locations pinpointed by the model, they will on average find more errors than the historic 5%. I should add that only about 500 visits per year will be done, and false negatives have no associated costs. So any precision gain above 5% would be good.
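To make group 1's reasoning concrete, here is a back-of-the-envelope sketch (using the numbers quoted above; the variable names are mine):

```python
visits = 500
base_rate = 0.05        # historic hit rate of inspectors
model_precision = 0.30  # hit rate at model-flagged locations

hits_without_model = visits * base_rate      # about 25 expected errors found
hits_with_model = visits * model_precision   # about 150 expected errors found
gain = hits_with_model - hits_without_model
print(gain)  # about 125 extra errors found per year, on average
```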
| My machine learning model has precision of 30%. Can this model be useful? | CC BY-SA 4.0 | null | 2023-03-07T20:32:14.807 | 2023-03-10T16:30:38.987 | 2023-03-08T16:44:51.700 | -1 | 382624 | [
"machine-learning",
"classification",
"supervised-learning",
"precision-recall",
"confusion-matrix"
] |
608695 | 1 | null | null | 0 | 14 | Consider observations $y\in \mathbb{R}^T$. The data generating process is defined in terms of the innovations:
$$
\begin{align}
y_t &= \varepsilon_t + \sum_{p=1}^P L^p \theta_p \varepsilon_t\\
\varepsilon_t &\sim N\left(0,\sigma^2_{\varepsilon}\right)
\end{align}
$$
where $\theta \in \mathbb{R}^P$ are the coefficients for an invertible moving average process of order $P$. Let $\theta_0=1.0$. Then I believe the autocovariance matrix $\Omega$ for an $MA(P)$ process is:
$$
\begin{align}
\omega_{ij} &=
\begin{cases}
\sigma^2_\varepsilon \sum_{p=|i-j|}^P\theta_p \theta_{p-|i-j|} & |i-j| \le P\\
0 & otherwise
\end{cases}
\end{align}
$$
Denote the first $P$ innovations $\varepsilon^-\equiv \left[\varepsilon_0,\varepsilon_{-1},...,\varepsilon_{1-P}\right]$. How can I construct the full likelihood function with parameters $\Theta\equiv\left\{\varepsilon^-,\theta,\sigma^2_\varepsilon\right\}$?
If I didn't care about the initial innovations, I could use $L(\theta,\sigma^2_\varepsilon)=\left(2\pi\right)^{-T/2}\operatorname{Det}(\Omega)^{-1/2}\exp\left(-\frac{1}{2}y'\Omega^{-1} y\right)$, but I do care about them.
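As an aside, the banded autocovariance matrix $\Omega$ defined above is straightforward to assemble; this numpy sketch (my own, checked against the MA(1) special case) just implements the two cases of $\omega_{ij}$:

```python
import numpy as np

def ma_autocov(theta, sigma2, T):
    """Banded autocovariance matrix of an MA(P) process with theta_0 = 1."""
    th = np.concatenate(([1.0], np.asarray(theta, dtype=float)))  # theta_0 .. theta_P
    P = len(th) - 1
    omega = np.zeros((T, T))
    for i in range(T):
        for j in range(T):
            lag = abs(i - j)
            if lag <= P:
                # sigma^2 * sum_{p=lag}^{P} theta_p * theta_{p-lag}
                omega[i, j] = sigma2 * np.dot(th[lag:], th[:P + 1 - lag])
    return omega

# MA(1) check: gamma_0 = sigma^2 (1 + theta_1^2), gamma_1 = sigma^2 theta_1
om = ma_autocov([0.5], sigma2=2.0, T=4)
```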
| What is the full likelihood for a normally distributed MA(P) process where the initial innovations are included in the parameter set? | CC BY-SA 4.0 | null | 2023-03-07T20:32:26.533 | 2023-03-07T20:32:26.533 | null | null | 153224 | [
"time-series",
"likelihood"
] |
608696 | 1 | 609053 | null | -2 | 75 | Could you please help me to prove the following equation:
$$E(x^{-1})=\int_{0}^{\infty}M_{x}(-t)dt$$
Where $M_{x}(-t)$ is the moment-generating function.
I think the following equation will be useful:
$$x^{-1}=\int_{-\infty}^{0}e^{ux}du$$
for $x>0$
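Before proving it, the identity can be sanity-checked numerically. For example (my choice, not from the question), take $X \sim \text{Gamma}(a, 1)$ with $a = 3$, for which $E(X^{-1}) = 1/(a-1) = 1/2$ and $M_X(-t) = (1+t)^{-a}$:

```python
from scipy.integrate import quad

a = 3.0

def mgf_neg(t):
    # M_X(-t) for X ~ Gamma(a, 1): M_X(s) = (1 - s)^(-a), so M_X(-t) = (1 + t)^(-a)
    return (1.0 + t) ** (-a)

integral, abs_err = quad(mgf_neg, 0.0, float("inf"))
print(integral)  # about 0.5, matching E(1/X) = 1/(a - 1) for a = 3
```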
| Evaluating $E(x^{-1}). $ | CC BY-SA 4.0 | null | 2023-03-07T20:37:49.343 | 2023-03-11T03:31:13.783 | 2023-03-11T03:30:44.843 | 362671 | 382603 | [
"self-study",
"expected-value",
"moment-generating-function"
] |
608697 | 1 | null | null | 2 | 62 | I have a question similar to the ones described in the forum here ([http://www.sthda.com/english/wiki/chi-square-goodness-of-fit-test-in-r](http://www.sthda.com/english/wiki/chi-square-goodness-of-fit-test-in-r)).
I want to know whether one sex is more common than the other at a study site. Unfortunately, I am dealing with a very cryptic species, and my sample sizes for each sex in a particular year are very small (male = 5, female = 3).
If I were to perform a chi-squared GOF test, my code would be as follows:
```
sex.ratio = chisq.test(c(5,3), p = c(0.5,0.5))
```
Is there an alternative to the Chi-squared GOF Test that would be appropriate to run on this dataset with very small sample sizes?
| Alternative to chi-squared goodness-of-fit test for very small sample size | CC BY-SA 4.0 | null | 2023-03-07T20:54:41.473 | 2023-03-07T23:09:29.233 | 2023-03-07T22:17:31.233 | 164936 | 315533 | [
"chi-squared-test"
] |
608698 | 2 | null | 608694 | 3 | null | 5% would not be the performance of the model making predictions “at random” or if you just classified everything as 1’s.
I'm not sure if comparing it to the base rate makes sense here. It means comparing to the most primitive alternative possible. Why not try some other simple model (decision tree, logistic regression, $k$NN) as a benchmark?
Moreover, there's nothing magical about “70-80% precision”. For some problems, this would not be achievable, but for others way too low. Those numbers are arbitrary and there's no reason whatsoever to aim at them.
| null | CC BY-SA 4.0 | null | 2023-03-07T20:58:24.437 | 2023-03-09T09:57:56.050 | 2023-03-09T09:57:56.050 | 35989 | 35989 | null |
608699 | 1 | null | null | 0 | 41 | I have a question that probably will sound stupid, but I am approaching myself to statistic concepts. I saw that the multivariate standard normal has a distribution characterized by the parameters $0$ (that is, a vector of 0s) and $I$ as a matrix of variance-covariance. I understand the fact that each marginal has mean 0 and variance 1 (because they are standard normal) but why they are uncorrelated? I saw that 2 standard normal can have an association. Is there any way to prove it?
| Uncorrelation of multivariate standard normal | CC BY-SA 4.0 | null | 2023-03-07T21:01:43.597 | 2023-03-08T10:54:16.823 | 2023-03-08T10:54:16.823 | 362798 | 362798 | [
"machine-learning",
"normal-distribution",
"variance",
"multivariate-analysis"
] |
608701 | 2 | null | 608694 | 17 | null | I would say neither group is entirely correct. The question is what do you want to do with the model, and what will happen for positive or negative model predictions?
There are screening tests used in medical practice that have precision (positive predictive value) that low -- mammograms for breast cancer, prostate-specific antigen for prostate cancer. The positive predictive value of low-dose spiral CT for detecting lung cancer in smokers is way lower than 30%.
What these have in common is that you really want to detect cases, so you care much more about sensitivity (recall) than precision. The benefit of a true positive is much more than 3 times the cost of a false positive.
So, what you need to know to decide is the two costs ('losses' in decision theory). You can then work out the expected loss from using the algorithm and from not using it, and see which is lower.
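As a toy illustration of that expected-loss comparison (all costs invented for the sketch):

```python
# Hypothetical per-case losses/benefits: all numbers here are made up.
cost_fp = 1.0     # cost of acting on a false positive
gain_tp = 10.0    # benefit of catching a true positive

precision = 0.30
n_flagged = 100   # cases flagged positive by the model

tp = precision * n_flagged        # expected true positives among the flags
fp = (1 - precision) * n_flagged  # expected false positives among the flags
net_benefit = tp * gain_tp - fp * cost_fp
print(net_benefit)  # positive here, so acting on the flags beats ignoring them
```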
| null | CC BY-SA 4.0 | null | 2023-03-07T21:03:55.287 | 2023-03-07T21:03:55.287 | null | null | 249135 | null |
608704 | 1 | null | null | 0 | 35 | I am a bit confused by the hazard ratios from the flexsurvspline model (Royston-Parmar). For a categorical variable (say X_1 with two levels, (a) and (b)) with time-varying coefficients, I can specify the model as
```
flexsurvspline(Surv(time, event) ~ X_1+X_2+X_3,k=2, data = data, scale="hazard",anc=list(gamma1=~X_1, gamma2 = ~ X_1, gamma3 = ~ X_1))
```
where gamma1,2, and 3 are the ancillary parameters (see:[https://cran.r-project.org/web/packages/flexsurv/flexsurv.pdf](https://cran.r-project.org/web/packages/flexsurv/flexsurv.pdf)). Now, in the model summary, as expected, I will get the hazard ratios for the three ancillary parameters as gamma1(X_1(a) vs X_1(b)),gamma2(X_1(a) vs X_1(b)),gamma3(X_1(a) vs X_1(b)).
My confusion arises from the timepoints used by the model to get the three hazard ratios. As we have two knots (which by default are at 33% and 67% quantiles of log(time)), do they play any role in the timepoint selection for the hazard ratios?
| Interpreting hazard ratios in Royston-Parmar model (gammax, flexsurv::flexsurvspline) | CC BY-SA 4.0 | null | 2023-03-07T21:14:15.820 | 2023-03-08T20:01:55.007 | 2023-03-08T20:01:55.007 | 40888 | 40888 | [
"r",
"regression",
"survival",
"cox-model",
"parametric"
] |
608706 | 2 | null | 608674 | 7 | null | Based on your comment, you question can be clarified as follows:
>
If $(X_1, \ldots, X_n) \sim N_n(0, \Sigma)$, what is the expected value of the sample variance (before bias correction)
\begin{align}
S^2 = \frac{1}{n}\sum_{i = 1}^n(X_i - \bar{X})^2?
\end{align}
There are many solutions to this classical problem. One that I like most is to express $S^2$ as a quadratic form of $\mathbf{X} = (X_1, \ldots, X_n)$, then apply the [quadratic form expectation formula](https://en.wikipedia.org/wiki/Quadratic_form_(statistics)#Expectation). As an exercise, you can verify that
\begin{align}
nS^2 = \mathbf{X}^T\Lambda\mathbf{X}, \tag{1}
\end{align}
where $\Lambda = I_{(n)} - n^{-1}ee^T$ and $e$ is an $n$-long column vector of all ones. It then follows by the formula in the above link that
\begin{align}
E[S^2] =\frac{1}{n}\operatorname{tr}(\Lambda\Sigma)
=\frac{1}{n}\operatorname{tr}(\Sigma) - \frac{1}{n^2}e^T\Sigma e. \tag{2}
\end{align}
With the representation $(1)$ and the Gaussian assumption, you can even ask for the variance of $S^2$, which follows from the [quadratic form variance formula in Gaussian case](https://en.wikipedia.org/wiki/Quadratic_form_(statistics)#Variance_in_the_Gaussian_case):
\begin{align}
\operatorname{Var}(S^2) =\frac{2}{n^2}\operatorname{tr}(\Lambda\Sigma\Lambda\Sigma)
=\frac{2}{n^2}\operatorname{tr}(\Sigma^2) -
\frac{4}{n^3}e^T\Sigma^2e
+ \frac{2}{n^4}(e^T\Sigma e)^2. \tag{3}
\end{align}
Note that $(2)$ holds for any random vector with mean vector $0$ and covariance matrix $\Sigma$. $(3)$ additionally requires that the distribution to be Gaussian. For similar, yet more technical problems, see [this thread](https://stats.stackexchange.com/questions/598863/distribution-of-mathbfx-top-mathbba-mathbfx-mathbfb-top-mathbf/598946#598946) and [this thread](https://stats.stackexchange.com/questions/598231/expected-squared-dot-product-between-iid-gaussian-vectors/598379#598379).
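Both sides of $(2)$ are easy to verify numerically; the following check (my addition) confirms that the two equivalent trace expressions agree for an arbitrary covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
Sigma = A @ A.T                  # an arbitrary valid covariance matrix
e = np.ones((n, 1))
Lam = np.eye(n) - e @ e.T / n    # the centering matrix Lambda

lhs = np.trace(Lam @ Sigma) / n
rhs = np.trace(Sigma) / n - (e.T @ Sigma @ e).item() / n**2
print(lhs, rhs)  # the two expressions agree
```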
| null | CC BY-SA 4.0 | null | 2023-03-07T21:23:56.600 | 2023-03-07T22:02:22.363 | 2023-03-07T22:02:22.363 | 20519 | 20519 | null |
608707 | 1 | null | null | 0 | 6 | Since the linear softmargin SVM problem is linear regression with hinge loss and regularization, does it suffer from some of the same issues? Could anyone help me understand, or point me to articles, discussing the following issues:
- multicollinearity
- heteroskedasticity
- autocorrelation in residuals
| Linear Softmargin SVM Requirements | CC BY-SA 4.0 | null | 2023-03-07T21:27:05.563 | 2023-03-07T21:27:05.563 | null | null | 286173 | [
"regression",
"svm"
] |
608708 | 2 | null | 608674 | 6 | null | The "variance" in the question refers to the formula given by
$$V(X_1,X_2,\ldots, X_n) = \frac{1}{n}\sum_{i=1}^n X_i^2 - \left(\frac{1}{n}\sum_{i=1}^n X_i\right)^2.$$
This is a homogeneous quadratic form with coefficients $v_{ij}$ given by
$$V(X_1, X_2, \ldots, X_n) = \sum_{i,j=1}^n v_{ij}\, X_iX_j.$$
By inspecting the formula for $V$ it is evident that
$$v_{ii} = \frac{1}{n} - \frac{1}{n^2}$$
(one term from each of the right-hand parts of the formula) and for $i\ne j$
$$v_{ij} = -\frac{1}{n^2}$$
(the cross-products appearing solely in the right-hand term of the formula).
Because the expectations of all the $X_i$ are zero, $\Sigma_{ij} = \operatorname{Cov}(X_i,X_j) = E[X_iX_j].$ Consequently, linearity of expectation gives us the answer, which for convenience I will express in various equivalent forms:
$$\begin{aligned}
E\left[V(X_1,X_2,\ldots,X_n)\right] &= \sum_{ij} v_{ij} E[X_iX_j] \\
&= \sum_{ij} v_{ij}\Sigma_{ij} \\
&= \frac{1}{n}\sum_{i} \Sigma_{ii} -\frac{1}{n^2}\sum_{i,j} \Sigma_{ij} \\
&= \frac{1}{n}\operatorname{tr}(\Sigma) - \frac{1}{n^2}\mathbf 1^\prime \Sigma \mathbf 1.
\end{aligned}$$
FWIW, the second line is (a) the dot product of $V$ and $\Sigma$ considered as vectors of length $d^2$ and (b) according to the rules of matrix multiplication, can be expressed as $\operatorname{tr}(V^\prime\Sigma)=\operatorname{tr}(\Sigma^\prime V) = \operatorname{tr}(\Sigma V)$ etc. because $\Sigma$ is symmetric. ($V$ can always be made symmetric by replacing every $v_{ij}$ by $(v_{ij}+v_{ji})/2.$) It's helpful to have seen such matrix expressions before so you can understand what they really mean and where they come from.
---
The same technique serves to find the expectation of any form in a set of variables, whether homogenous or not, of any degree. For quadratic forms like $V$ all we need to know is $\Sigma$ (it doesn't matter that the distribution is multivariate Normal). For forms of higher degree, we need to know the higher multivariate moments.
| null | CC BY-SA 4.0 | null | 2023-03-07T21:33:22.343 | 2023-03-07T21:39:39.717 | 2023-03-07T21:39:39.717 | 919 | 919 | null |
608709 | 1 | null | null | 2 | 52 | So I got this basic concept of deterministic and stochastic trend.
$$\begin{align*}\text{Deterministic trend (DT)} & : y_t = \beta t +\varepsilon_t\\
\text{Stochastic trend (ST)} & : y_t = \beta +y_{t-1} + \varepsilon_t\\\end{align*}$$
But when I'm using actual data for an ARIMA model, is it possible to identify precisely whether my series has a significant DT or ST? Well, I assume that it can't have both, right?
So my data is fitted by `auto.arima` in ARIMA(0,1,0) with drift, and the Mann-Kendall test rejected $H_0$, so the series has a significant trend, but how am I gonna tell if this is deterministic or stochastic? is there a test or something that I can check?
And what about a ARIMA(0,2,0) with no significant trend? Does it mean that the series has neither deterministic nor stochastic trend?
| how to confirm a trend is deterministic or stochastic? | CC BY-SA 4.0 | null | 2023-03-07T21:37:42.170 | 2023-03-08T07:29:23.690 | 2023-03-08T07:29:23.690 | 44269 | 379702 | [
"time-series",
"arima",
"trend",
"unit-root"
] |
608710 | 1 | null | null | 0 | 22 | I hope you're doing well.
How do we use this expression of generalised partial correlation coefficient? [](https://i.stack.imgur.com/HgI5n.png)
I struggle with putting it into work in practical examples of multiple regression models.
We have the following model : y = B1+ B2x2 + B3x3 + B4x4 + B5x5 + mu
Say we want to find the expression of r14.32, the 2nd-order partial correlation coefficient between y (1 stands for y here) and the exogenous variable x4, while partialling out the effects of x3 and x2. How can I express it? How about r15.234, the third-order PCC? Or r14.5, the first-order PCC?
| The generalisation of the expression of partial correlation coefficient | CC BY-SA 4.0 | null | 2023-03-07T22:23:18.107 | 2023-03-13T18:07:59.737 | 2023-03-13T18:07:59.737 | 11887 | 381025 | [
"correlation",
"multiple-regression",
"econometrics",
"partial-correlation"
] |
608711 | 1 | null | null | 0 | 69 | I am comparing two treatment groups and I have 3 different types of outcomes. The dependent variable, treatment, is time dependent and so I conducted time dependent cox regression for each of the three outcomes.
However, for one of the outcomes, one treatment group had no events. Of course, the cox regression output for the estimated effect of that group will be −∞, and hence the hazard ratio in this scenario can't be interpreted.
I saw in another [thread](https://stats.stackexchange.com/questions/124821/dealing-with-no-events-in-one-treatment-group-survival-analysis) that the log rank test can be used in a scenario where one of the treatment groups has no events. However, my understanding is that while this may be a valid option for variables that are not time dependent (constant treatment value throughout study period), the standard log rank test can't be done in this scenario. Am I wrong? I tried doing this in R and the output had different estimates for each event strata.
Is there another way to compare the two groups?
I can clarify further in case the question isn't clear.
P.S. I have used the terms time dependent/varying interchangeably in this post.
| How to compare groups when one treatment group has no events and the treatment variable is time varying - survival analysis | CC BY-SA 4.0 | null | 2023-03-07T22:33:26.570 | 2023-03-09T13:44:49.087 | null | null | 377613 | [
"survival",
"cox-model",
"time-varying-covariate",
"logrank-test"
] |
608712 | 2 | null | 608660 | 1 | null | As I noted in the comments, the problem is that you're not providing the full dataset to the regression function.
For a toy example that involves only 20 observations
```
period <- c(rep(0, 10), rep(1, 10))
treatment <- c(rep(1, 5), rep(0, 5), rep(1, 5), rep(0, 5))
converted <- c(1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0)
dat <- data.frame(period, treatment, converted)
head(dat)
summary(lm(converted ~ period * treatment, data = dat))
```
Now, you get a standard error for the interaction because it's possible to estimate the variance.
| null | CC BY-SA 4.0 | null | 2023-03-07T22:43:01.943 | 2023-03-07T22:43:01.943 | null | null | 266571 | null |
608713 | 2 | null | 608699 | 1 | null | It follows pretty directly from the various definitions of the terms.
The off-diagonal elements of the variance-covariance matrix are the covariances and the diagonal elements are variances. When the diagonal elements are all 1's, the off-diagonal elements are then correlations (see the definition of correlation in terms of covariance and variances.)
With an identity variance-covariance matrix, $I$, the diagonals are indeed 1, and the off-diagonals (which are all 0) will then be correlations. i.e. if the variance-covariance matrix is $I$, all the variables are uncorrelated. With a standard multivariate normal, the variance-covariance matrix is defined to be $I$. That's the whole thing, in effect, just looking at two definitions.
If any of the off-diagonal elements were nonzero, then the distribution would still be multivariate normal, but the variance-covariance matrix wouldn't be $I$ and then it's not multivariate standard normal. Further, if all the off-diagonals are 0 but not all the diagonals are 1, then the variance-covariance matrix is again not $I$ (it is multivariate normal and uncorrelated, but the margins are not standard normal) and it's therefore not multivariate standard normal.
[It's also possible to have a distribution where the margins are standard normal and the variables are all uncorrelated but the distribution is not multivariate normal -- that is, to have the variables related by some non-Gaussian copula that leaves the variables uncorrelated but not independent.]
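This is also easy to see empirically: sampling from $N(0, I)$ gives sample correlations near zero (an illustrative numpy check, not part of the argument above):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.standard_normal((100_000, 3))  # draws from the trivariate standard normal
R = np.corrcoef(X, rowvar=False)       # sample correlation matrix

print(np.round(R, 2))  # very close to the 3x3 identity matrix
```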
| null | CC BY-SA 4.0 | null | 2023-03-07T22:44:49.087 | 2023-03-07T22:52:33.183 | 2023-03-07T22:52:33.183 | 805 | 805 | null |
608714 | 2 | null | 608697 | 5 | null | Because the counts are small, you might use an exact binomial test:
```
binom.test(5, 8)
```
or a chi-square test with Monte Carlo simulation
```
chisq.test(c(5, 3), simulate.p.value=TRUE, B=10000)
```
However, think about the meaning of the p-value in this case. It's analogous to flipping a coin 8 times (with the null hypothesis being that the coin is fair, with a 50/50 probability of getting heads). If you flip the coin 8 times, would it be that surprising to get 5 heads even if the coin is fair?
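That intuition checks out: here is the exact two-sided p-value of `binom.test(5, 8)` computed by hand (a sketch in Python, using what I believe is R's small-probability definition of the two-sided test):

```python
from math import comb

n, k = 8, 5
pmf = [comb(n, i) * 0.5**n for i in range(n + 1)]
# Sum the probabilities of all outcomes at least as unlikely as the observed one.
p_value = sum(p for p in pmf if p <= pmf[k] + 1e-12)
print(p_value)  # about 0.727: 5 heads in 8 fair flips is not surprising at all
```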
| null | CC BY-SA 4.0 | null | 2023-03-07T23:09:29.233 | 2023-03-07T23:09:29.233 | null | null | 166526 | null |
608715 | 2 | null | 608653 | 5 | null | Words are superfluous:
[](https://i.stack.imgur.com/g2Pfx.png)
... but sadly I need more than 22 characters.
| null | CC BY-SA 4.0 | null | 2023-03-07T23:31:04.847 | 2023-03-08T00:36:34.083 | 2023-03-08T00:36:34.083 | 805 | 805 | null |
608716 | 1 | null | null | 0 | 27 | I am using a multinomial logistic regression of one nominal variable with many categories against a number of independent variables. I would like to test for differences between the coefficients for different categories of the dependent variable, including for differences between categories other than the reference category.
However, I am wondering how to correctly control the family-wise error rate for multiple comparisons, across multiple pairs of categories and across multiple independent variables. It seems like most software packages (I am using `nnet` in R, for example) do not control family-wise error, and regardless, they only support comparisons against the reference category.
I could use a Holm-Bonferroni adjustment, but is there anything more powerful for a multinomial logistic regression? I am imagining a test analogous to a Tukey test but for regression coefficients instead of means.
| Multinomial logistic regression - controlling family-wise error for pairwise comparisons? | CC BY-SA 4.0 | null | 2023-03-08T00:09:05.200 | 2023-03-13T18:10:22.453 | 2023-03-13T18:10:22.453 | 11887 | 366178 | [
"multiple-comparisons",
"categorical-encoding",
"multinomial-logit",
"familywise-error",
"nnet"
] |
608717 | 1 | null | null | 0 | 11 | Given the following layer that multiplies $v_a$ with the output of $tanh$ activation function:
$$
\large e_{ij} = v_a^\top \tanh{\left(W_a s_{i-1} + U_a h_j\right)}
$$
I am not sure if the dimensions below are correct though. Given that $W_a \in \mathbb{R}^{n\times m}$, $U_a \in \mathbb{R}^{n \times m}$, and $v_a \in \mathbb{R}^m$ are the weight matrices and $n$ is the hidden state size, the output of $\tanh$ should be in $\mathbb{R}^{n}$ after the matrix multiplications. But $v_a \in \mathbb{R}^m$, so how can we take the product of a vector in $\mathbb{R}^m$ with a vector in $\mathbb{R}^n$? I guess the dimensions are wrong.
[](https://i.stack.imgur.com/X3V1C.png)
| Dimensions of a Neural Network Layer | CC BY-SA 4.0 | null | 2023-03-08T00:31:03.470 | 2023-03-08T00:31:03.470 | null | null | 309731 | [
"neural-networks",
"dimensions"
] |
608720 | 1 | null | null | 0 | 17 | I am working on a competition on kaggle. The competition is a classification problem. I tried to extract 2 new features(engineer features) from the data.
The accuracy on the validation data increased but the test accuracy decreased on the leaderboard.
Also, the training accuracy is very close to the validation accuracy which means there is no overfitting, right?
For example I tired logistic regression with cross validation and XGboost, Here are the result:
logistic regression.
```
validation micro f1 score: 0.694
training micro f1 score: 0.680
```
XGboost
```
validation micro f1 score: 0.7379
training micro f1 score: 0.7385
test micro f1 score on the leaderboard: .711
```
Why does making new features increase the validation accuracy but decrease the test accuracy?
Should they not go up or down together?
| Why does making new features increase the validation accuracy but decrease the test accuracy? | CC BY-SA 4.0 | null | 2023-03-08T01:28:40.190 | 2023-03-08T01:28:40.190 | null | null | 116480 | [
"machine-learning",
"feature-selection",
"overfitting",
"feature-engineering"
] |
608721 | 1 | 610593 | null | 2 | 193 | Wondering if someone can help clarify my intuition on this. Say you have a continuous covariate and a binary grouping variable and you introduce an interaction term between the two in a basic regression model (continuous outcome). Say the interaction term from the model output is not statistically significant, but suggests a 'fanning out' of the two regression slopes (the slope for one group increases at a faster rate than the slope for the second group). Would it be reasonable in this case, if one were to estimate marginal means at increasing values of the continuous covariate, that the contrast of those means may become statistically significant at some point? (or am I indeed misunderstanding some very basic concepts here).
The reason I ask is that I helped with an 'inflection-point' (linear spline) analysis. i.e. created a change-in-slope variable equal to zero below the hypothesised inflection point or the variable minus the inflection point for values above this (fairly standard change-in-slope coding). I then ran a model with both the original covariate and the change-in-slope variable together. The coefficient for the original covariate giving the slope until the inflection point and the other giving the change in slope thereafter. In a sense I consider this an interaction term.
The aim of this analysis was to see what 'effect' a particular life event had on an outcome (so in a sense a longitudinal analysis whereby the main covariate is a measure of time). After the model I used emmeans to estimate the difference in the model predicted outcome at specified time points, under the actual scenario that the life event changed the trajectory of the outcome, vs the counterfactual/hypothetical scenario that the trajectory prior to the life event was allowed to continue.
The slopes fan out, but no matter what post life-event time I estimate marginal means at, the p values remain the same as that of the model interaction. Is this just due to the way the model is parameterised? I'd like a better understanding of why.
P.S. I can try and provide some dummy data as an example if that would be helpful...
edit - 10/03/2023
I am adding some code to produce some dummy data relating to this question to hopefully assist in clarifying the question.
```
library(simstudy)
library(tidyverse)
library(ggplot2)
library(emmeans)
rm(list = ls())
# The following code creates some fake data for an interaction effect between a continuous (age) and binary (sex) variable on a continuous outcome (BP)
set.seed(581345)
def <- defData(varname = "male", dist = "binary", formula = .5 , id = "cid")
def <- defData(def, varname = "age", dist = "normal", formula = "20 + 20*male", variance = 20) # make males 10 yrs older on average
def <- defData(def, varname = "BP", dist = "normal", formula = "70 + 0.9*male*(age-25)", variance = 50) # make males BP 0.9 times their (age-25) higher on average
dtstudy <- genData(50, def)
dtstudy$male <- factor(dtstudy$male)
# Plot
ggplot(data = dtstudy, aes(x = age, y = BP, group = male)) +
geom_point(aes(color = male), size = 3, position = position_jitter(w = 0.2)) +
geom_smooth(aes(color = male), method = "lm", linewidth = 1, fullrange = F, se = F) +
theme_bw(base_size = 20) +
xlab("Age") + ylab("BP") +
guides(color = guide_legend(title = "Male")) +
scale_x_continuous(breaks = seq(0,60,10), limits = c(0,60)) +
scale_y_continuous(breaks = seq(0,120,20), limits = c(0,120))
# Model
mod <- lm(BP ~ male * age, data = dtstudy)
summary(mod)
# Marginal Means
(emms <- emmeans(mod, ~ male + age, at = list(age = c(0, 1, 20, 40, 60, 61))))
custom <- list(`Sex diff at age = 0` = c(-1,1,0,0,0,0,0,0,0,0,0,0),
`Sex diff at age = 1` = c(0,0,-1,1,0,0,0,0,0,0,0,0),
`Sex diff at age = 20` = c(0,0,0,0,-1,1,0,0,0,0,0,0),
`Sex diff at age = 40` = c(0,0,0,0,0,0,-1,1,0,0,0,0),
`Sex diff at age = 60` = c(0,0,0,0,0,0,0,0,-1,1,0,0),
`Sex diff at age = 61` = c(0,0,0,0,0,0,0,0,0,0,-1,1))
contrast(emms, custom) |>
summary(infer = T)
# While the interaction from the model is not statistically significant, contrasts of the average difference in the outcome do become statistically significant.
# But the interaction effect is really the diff in diffs
# Diff from age = 0 to age = 1
`Sex diff at age = 0` = c(-1,1,0,0,0,0,0,0,0,0,0,0)
`Sex diff at age = 1` = c(0,0,-1,1,0,0,0,0,0,0,0,0)
contrast(emms, method = list(`Sex diff age 1 - age 0` = `Sex diff at age = 1`-`Sex diff at age = 0`))
# Diff from age = 60 to age = 61
`Sex diff at age = 60` = c(0,0,0,0,0,0,0,0,-1,1,0,0)
`Sex diff at age = 61` = c(0,0,0,0,0,0,0,0,0,0,-1,1)
contrast(emms, method = list(`Sex diff age 61 - age 60` = `Sex diff at age = 61`-`Sex diff at age = 60`))
# Diff from age = 0 to age = 60
`Sex diff at age = 0` = c(-1,1,0,0,0,0,0,0,0,0,0,0)
`Sex diff at age = 60` = c(0,0,0,0,0,0,0,0,-1,1,0,0)
contrast(emms, method = list(`Sex diff age 60 - age 0` = `Sex diff at age = 60`-`Sex diff at age = 0`))
# So the p value for the interaction remains the same no matter what diff in diff we take.
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# Now lets ignore sex completely and using the same data imagine there is an obvious change in slope at about 25 years because there has been some hypothetical life event that occurs in all 25 year olds
# Plot
ggplot(data = dtstudy, aes(x = age, y = BP)) +
geom_point(color = "cornflowerblue", size = 3, position = position_jitter(w = 0.2)) +
geom_smooth(color = "cornflowerblue", method = "loess", linewidth = 1, fullrange = F, se = F) +
theme_bw(base_size = 20) +
xlab("Age") + ylab("BP") +
scale_x_continuous(breaks = seq(0,60,10), limits = c(0,60)) +
scale_y_continuous(breaks = seq(0,120,20), limits = c(0,120))
# So we create a change-in-slope variable (linear spline) to model this
dtstudy <- dtstudy |>
mutate(age_slope_change = ifelse(age <= 25, 0, age - 25))
# Model
mod2 <- lm(BP ~ age + age_slope_change, data = dtstudy)
summary(mod2)
# Plot
# The red line represents the counterfactual slope, had the life-event not occurred
dtstudy$pred <- predict(mod2)
ggplot(data = dtstudy, aes(x = age, y = BP)) +
geom_point(color = "cornflowerblue", size = 3, position = position_jitter(w = 0.2)) +
geom_line(aes(x = age, y = pred), linewidth = 1, color = "cornflowerblue") +
geom_segment(aes(x = 25, xend = 45,
y = coef(mod2)[1] + coef(mod2)[2] * 25,
yend = coef(mod2)[1] + coef(mod2)[2] * 45),
linewidth = 1, color = "red", linetype = "dashed") +
theme_bw(base_size = 20) +
xlab("Age") + ylab("BP") +
scale_x_continuous(breaks = seq(0,60,10), limits = c(0,60)) +
scale_y_continuous(breaks = seq(0,120,20), limits = c(0,120))
# Diff in BP between actual and counterfactual BP at age = 30 years
(emms2 <- emmeans(mod2, ~ age + age_slope_change, at = list(age = c(30), age_slope_change = c(0,5)))) |>
summary(infer = T)
emms2 |>
pairs(reverse = T)
# Diff in BP between actual and counterfactual BP at age = 40 years
(emms2 <- emmeans(mod2, ~ age + age_slope_change, at = list(age = c(40), age_slope_change = c(0,15)))) |>
summary(infer = T)
emms2 |>
pairs(reverse = T)
# So these are really just estimating interaction effects at whatever age we choose? The p value is the interaction p value.
# Is there a way to setup emmeans to calculate contrasts as in lines 28-37 above? I am guessing not as there is no corresponding factor variable in the model - the interaction is all we can calculate...
```
| Linear spline and 'interaction' p value | CC BY-SA 4.0 | null | 2023-03-08T03:18:42.747 | 2023-03-27T15:04:00.293 | 2023-03-10T00:42:52.793 | 108644 | 108644 | [
"regression",
"splines",
"lsmeans"
] |
608724 | 1 | null | null | 1 | 24 | I have a set of vectors $x_i \in\mathbb{R}^m$ and a similarity function f that quantifies how similar $x_i,x_j$ are. Unfortunately, calculating f takes a lot of time.
I want to use some neural architecture A s.t.:
- $A(x_i)\in\mathbb{R}^n$ where $n \ll m$
- The Euclidean distance between $A(x_i),A(x_j)$ will output similar results to f.
When I say similar results, I mean that if $A(x_i),A(x_j), A(x_k)$ are embedded in the vector space in such a way that $A(x_i),A(x_j)$ are very close to each other but $A(x_k)$ is very far from both, then $f(x_i,x_j)<f(x_i,x_k)$ and $f(x_i,x_j)<f(x_j,x_k)$
What kind of loss should I use for this task?
Edit:
The data I have is unlabeled
| Dimensionality reduction that preserves non-trivial similarity | CC BY-SA 4.0 | null | 2023-03-08T03:47:28.380 | 2023-03-08T04:02:59.187 | 2023-03-08T04:02:59.187 | 187391 | 187391 | [
"neural-networks",
"dimensionality-reduction",
"similarities",
"embeddings"
] |
608725 | 2 | null | 116423 | 1 | null | It is better to first compute the linear Gaussian fit of your data:
```
fit1 <- gamV(y ~ x1 + x2 + x3..., family = gaussian, data = GAM, aViz = list(nsim = 50))
```
and then contrast the AIC scores yielded by the linear and non-linear fits:
```
AIC(fit1, fit2)
```
Now you can report that the fit which attained the lower AIC score performed better.
| null | CC BY-SA 4.0 | null | 2023-03-08T04:12:05.940 | 2023-03-08T04:12:05.940 | null | null | 382643 | null |
608726 | 2 | null | 608499 | 0 | null | The problem is that you need to be pooling the underlying `lm` models, not the ANOVA summaries of the models. The statistical theory and programs for multiple imputation are built around regression modeling, so we need to use that structure. ANOVA is equivalent to a linear regression using a single categorical variable, so the resulting analysis is equivalent to what you want to do.
The output won't be the same as what you're probably used to; the usual sum of squares decomposition doesn't really make sense after multiple imputation. Instead you have a couple options for doing a hypothesis test/summary. The first option is to go all-in on the regression modeling approach and look at each group/category separately. This method will define one of the groups as the "reference group" and the p-values will correspond to hypothesis tests that the average in that group is the same as the average in the reference group. In this example data `reg` has 5 levels (north, east, west, south, city) and north is the reference level. The p-values below show whether each of the other 4 regions has an average height that is statistically different than the north region. You can change the reference level to whatever you want.
```
library(mice)
# Set seed for reproducibility
set.seed(1234)
# Use built-in dataset
dat <- mice::boys
# Make imputations
imps <- mice(dat)
# Fit lm models
fits <- with(imps, lm(hgt ~ reg))
# Pool models
pooled <- pool(fits)
# Get results
summary(pooled)
#> term estimate std.error statistic df p.value
#> 1 (Intercept) 149.62394 5.158947 29.002809 689.4577 0.0000000000
#> 2 regeast -16.26098 6.294533 -2.583350 728.6964 0.0099782919
#> 3 regwest -20.68640 5.954958 -3.473811 710.4351 0.0005442446
#> 4 regsouth -22.92516 6.171241 -3.714838 667.8366 0.0002203387
#> 5 regcity -25.36682 7.461717 -3.399596 721.9398 0.0007119482
```
The above procedure outputs many different p-values, each corresponding to a pair-wise comparison. If you want to perform an "omnibus" test of whether the entire set of `reg` variables is significant overall, you can use a "D1", "D2", "D3" test. If you want details see this textbook by the `mice` creator, [https://stefvanbuuren.name/fimd/sec-multiparameter.html](https://stefvanbuuren.name/fimd/sec-multiparameter.html)
The author recommends the D1 test in general. In the case when you only have a single independent variable in the `lm()` models it's quite easy to perform this test.
```
D1(fits)
#> test statistic df1 df2 dfcom p.value riv
#> 1 ~~ 2 4.311034 4 736.3446 743 0.00187468 0.008875879
```
This p-value corresponds to the test that the intercept-only model is equally good as the model with `reg`. The p-value is small, so we conclude that `reg` significantly affects the `hgt` variable.
If you're performing an ANCOVA with other variables in the model it's a little more complicated to perform the D1 test, but still quite manageable.
| null | CC BY-SA 4.0 | null | 2023-03-08T04:31:29.133 | 2023-03-08T04:31:29.133 | null | null | 282433 | null |
608728 | 1 | null | null | 0 | 18 | I would like some help proving this expression of partial correlation in terms of Student's t:
r(yx.efd.....p)² = t²/[t²+n-k-1]
Can you please direct me to a source explaining how to go about this? I just don't know how. I'm not a spammer or anything. This question was previously deleted, can I understand why?
Many thanks,
Edit : Since my previous question hasn't been answered, now I'm looking for sources or books to be able to answer it.
| Partial correlation coefficient in terms of Student's t | CC BY-SA 4.0 | null | 2023-03-08T06:33:11.710 | 2023-03-08T06:40:49.427 | 2023-03-08T06:40:49.427 | 381025 | 381025 | [
"regression",
"self-study",
"t-test",
"econometrics",
"partial-correlation"
] |
608729 | 2 | null | 590587 | 0 | null | The evaluation metric you described is similar to the concept of "Cumulative Gain" (CG) in Information Retrieval. CG measures the quality of a ranked list of items by summing up the relevance scores of the items in the list.
In your example, the true top k items have a cumulative gain of 19, and the predicted top k items have a cumulative gain of 17 and 16 for the first and second predictions, respectively. By dividing the predicted cumulative gain by the true cumulative gain, you can get a value that indicates the effectiveness of the prediction.
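As a concrete illustration (a small Python sketch; the individual relevance scores below are made-up values chosen so that they sum to the totals from the example above):

```python
def cumulative_gain(relevances):
    # CG is simply the sum of the relevance scores of the ranked items
    return sum(relevances)

true_top_k = [10, 5, 4]   # hypothetical true relevances, CG = 19
pred_1 = [10, 4, 3]       # CG = 17
pred_2 = [10, 5, 1]       # CG = 16

for pred in (pred_1, pred_2):
    ratio = cumulative_gain(pred) / cumulative_gain(true_top_k)
    print(round(ratio, 3))
# 0.895
# 0.842
```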
It's worth noting that there are several variants of CG, such as Normalized Cumulative Gain (NCG), Discounted Cumulative Gain (DCG), and Normalized Discounted Cumulative Gain (NDCG). These variants differ in how they weight the relevance scores of the items in the list, but the basic idea is the same.
I hope this helps! Let me know if you have any other questions.
| null | CC BY-SA 4.0 | null | 2023-03-08T06:37:47.470 | 2023-03-08T06:37:47.470 | null | null | 382650 | null |
608730 | 1 | null | null | 0 | 46 | Assume that $Y_1,\dots, Y_n$ follow Binomial distributions with success probabilities $p(d_i)$. Assume that the pmf of $Y_i$ is:
$$
f(p_i,Y_i)=\binom{n_i}{y_i}p_i^{y_i}(1-p_i)^{n_i-y_i}
$$
Assume that a model of $p(d)$ is
$$
\operatorname{logit} p(d)=\log\frac{p(d)}{1-p(d)}=\alpha+\beta d.
$$
Question: I am confused about what is the Fisher information matrix?
I know the definition: the Fisher information of $\theta=(\alpha, \beta)$ is
$$
I(\theta)=E\left[\left(\frac{\partial \log f(x;\theta)}{\partial \theta}\right)^2\right]
$$
But I am not sure if I need to plug in the whole data sample $Y_1,\dots, Y_n$. I mean
$$
I(\theta)=E\left[\left(\frac{\partial \log \prod_{i=1}^n f(x_i;\theta)}{\partial \theta}\right)^2\right]
$$
---
My work:
Let $\theta=[\alpha, \beta]^\top$ and $z=[1,d]^\top$.
The score function is
$$
S(\theta)=\sum_{i=1}^n\left[y_i-n_i \frac{e^{z_i^T\theta}}{1+e^{z_i^T\theta}}\right]z_i
$$
and the Hessian matrix is
$$
H(\theta)=-\sum n_ip_i(1-p_i)z_iz_i^\top.
$$
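As a numerical sanity check of the expressions above (a Python sketch; the data values $d_i$, $n_i$ and the parameters are made-up), the negative Hessian of the full-sample log-likelihood indeed equals $\sum_i n_ip_i(1-p_i)z_iz_i^\top$; since this does not depend on $y$, it coincides with its expectation, the Fisher information:

```python
import numpy as np

# made-up data and parameters
d = np.array([0.5, 1.0, 1.5, 2.0])
n = np.array([10, 12, 9, 15])
theta0 = np.array([-0.3, 0.8])             # (alpha, beta)
Z = np.column_stack([np.ones_like(d), d])  # rows are z_i = (1, d_i)

def loglik(theta, y):
    eta = Z @ theta
    p = 1.0 / (1.0 + np.exp(-eta))
    return np.sum(y * np.log(p) + (n - y) * np.log(1.0 - p))

# analytic information: sum_i n_i p_i (1 - p_i) z_i z_i^T
p = 1.0 / (1.0 + np.exp(-(Z @ theta0)))
I_analytic = (Z * (n * p * (1.0 - p))[:, None]).T @ Z

# numerical Hessian by central differences (y enters the log-likelihood
# linearly, so the Hessian is the same for any y; use expected counts)
y = n * p
h = 1e-4
H = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        tpp = theta0.copy(); tpp[i] += h; tpp[j] += h
        tpm = theta0.copy(); tpm[i] += h; tpm[j] -= h
        tmp = theta0.copy(); tmp[i] -= h; tmp[j] += h
        tmm = theta0.copy(); tmm[i] -= h; tmm[j] -= h
        H[i, j] = (loglik(tpp, y) - loglik(tpm, y)
                   - loglik(tmp, y) + loglik(tmm, y)) / (4.0 * h * h)

print(np.allclose(-H, I_analytic, rtol=1e-4))  # True
```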
| What is the Fisher information matrix in the logit model? | CC BY-SA 4.0 | null | 2023-03-08T06:56:01.157 | 2023-03-13T18:16:20.983 | 2023-03-08T09:37:09.657 | 362671 | 334918 | [
"logistic",
"mathematical-statistics",
"estimation",
"fisher-information"
] |
608732 | 2 | null | 256456 | 1 | null | If you know that the data is normally distributed, you can infer it given the lower and upper quantiles.
```
norm_from_quantiles = function(lower, upper, p = 0.25) {
mu = mean(c(lower, upper))
sigma = (lower - mu) / qnorm(p)
list(mu = mu, sigma = sigma)
}
```
Here, `p` and `1-p` are the quantiles of `lower` and `upper` so `p = 0.25` is quartiles while `p = 0.1` would mean that `lower` and `upper` are 10% and 90% quantiles respectively.
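The same idea can be sanity-checked in Python's standard library (an illustrative sketch, not part of the original answer; `NormalDist.inv_cdf` plays the role of `qnorm`): taking the quartiles of a normal with known parameters and feeding them back recovers those parameters:

```python
from statistics import NormalDist

def norm_from_quantiles(lower, upper, p=0.25):
    # mirror of the R function above: the mean of the two quantiles gives mu,
    # and the distance from lower to mu, scaled by z_p, gives sigma
    mu = (lower + upper) / 2
    sigma = (lower - mu) / NormalDist().inv_cdf(p)
    return mu, sigma

true = NormalDist(mu=10, sigma=2)
lower, upper = true.inv_cdf(0.25), true.inv_cdf(0.75)
mu, sigma = norm_from_quantiles(lower, upper)
print(round(mu, 6), round(sigma, 6))  # 10.0 2.0
```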
| null | CC BY-SA 4.0 | null | 2023-03-08T07:35:50.010 | 2023-03-10T09:04:10.727 | 2023-03-10T09:04:10.727 | 17459 | 17459 | null |
608733 | 1 | null | null | 2 | 25 | I am currently faced with the question. Do I include the variable XY reflectively, formatively or as a scale value in the model?
Theoretically, there is a lot to be said for a formative measurement model, since it is primarily knowledge that is being asked about.
Now, the scale was determined by the authors using a factor analysis.
Can I simply decide to take the model formatively, even if the author may not see it that way? Or does the factor analysis performed imply that the author assumes a reflective measurement model? Or would one also, if one were to assume a formative measurement model, have carried out an exploratory factor analysis beforehand? Probably not, right?
| Does the factor analysis performed imply that the author assumes a reflective measurement model? | CC BY-SA 4.0 | null | 2023-03-08T07:37:10.397 | 2023-03-09T08:03:39.153 | 2023-03-09T08:03:39.153 | 380073 | 380073 | [
"structural-equation-modeling"
] |
608734 | 2 | null | 608733 | 0 | null | A reflective (not "reflexive") model requires that scale items are correlated with each other, since they're affected by the same underlying factor. In a formative model, there's no need for items to be correlated with each other, for instance you might have a risk score factor that's made up of several uncorrelated items that ask about different contributing risks.
[](https://pbs.twimg.com/media/FJE4uHxXIAQNqX1?format=png&name=large)
| null | CC BY-SA 4.0 | null | 2023-03-08T07:56:52.150 | 2023-03-08T07:56:52.150 | null | null | 42952 | null |
608735 | 1 | null | null | 0 | 23 | I want to compare two sets of discrete data (0 to infinity, but realistically around 450) that don't follow a normal distribution but a Pareto distribution, so supposedly I cannot use Student's t-test. My intention is to detect if they come from different distributions or not.
That data has a N of 500-1000 for one group and around 50-100 for the other group that I want to test.
I've read that with an N so high, normality is not that important, but I'm not sure at which sample size that becomes true. I've also read about Wilcoxon (also the rank-sum variant) or chi-square being a possibility, but I have found this discussion a bit confusing.
What would be the most appropriate test for this?
| t-test for discrete data with a Pareto distribution | CC BY-SA 4.0 | null | 2023-03-08T08:34:27.247 | 2023-03-13T18:17:38.820 | 2023-03-13T18:17:38.820 | 11887 | 155514 | [
"hypothesis-testing",
"t-test",
"wilcoxon-mann-whitney-test",
"pareto-distribution"
] |
608736 | 2 | null | 421964 | 1 | null | Disclaimer: I'm not a native speaker and I learned about cubic splines many years ago. Taking this into account:
Both of your proposals, starting with
>
Knots are where different cubic polynomials are joined
are clear to me, although I didn't know the technical meaning of "jolt". But, the way you state it has a touch -- at least to my ear -- of implying that knots are some properties of the curve; that we first construct the curve and then find the knots on it based on the properties you list. It is like saying
"An extreme is where the first derivative is zero".
There is a causal implication here: First we have a curve (which might follow e.g. from the laws of physics) and on that curve we find the extremes.
Since for splines it's the other way round -- we first choose the knots, and then construct the curve around them, -- I'd start from the cubic segments and explain that points where they are joined are called "knots". Something like:
>
(These) different cubic polynomial segments are joined together to produce a (visually) smooth curve. In order to achieve this, cubic splines enforce three levels of continuity: the function, its slope, and its acceleration or second derivative (slope of the slope) do not change at the joints. Only the jolt -- the third derivative of the curve -- may abruptly change. These joints are called "knots".
| null | CC BY-SA 4.0 | null | 2023-03-08T08:39:22.607 | 2023-03-08T08:56:28.007 | 2023-03-08T08:56:28.007 | 169343 | 169343 | null |
608737 | 1 | 608752 | null | 0 | 125 | In the Survival vignette entitled "Spline terms in a Cox model"
[https://cran.r-project.org/web/packages/survival/vignettes/splines.pdf](https://cran.r-project.org/web/packages/survival/vignettes/splines.pdf)
on page 3 there is this graph:
[](https://i.stack.imgur.com/viiFU.png)
The plot is showing the effect of Age which was fitted as a spline term in a Cox model. In this plot, the centering was set at age = 50, so the y axis is relative to that (hence the y axis = 1 at x = 50).
My question: Can we also call the Y axis here Hazard Ratio? This seems sensible to me as the plot is showing the exponentiated difference (i.e. ratio) between y at x=50 and y at all other x values.
| Presenting spline terms from a Cox model | CC BY-SA 4.0 | null | 2023-03-08T08:39:31.640 | 2023-03-08T11:13:22.737 | null | null | 167591 | [
"regression",
"survival",
"cox-model",
"splines",
"hazard"
] |
608738 | 1 | 608925 | null | 1 | 48 | In my stats class, we are talking about the 2 proportion $Z$-test, which compares two sample proportions. The test statistic is given as:
$$
Z = \frac{\hat{p_1}-\hat{p_2}}{\sqrt{\hat{p_c}(1-\hat{p_c})\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}}
$$
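(For concreteness, the statistic itself is straightforward to compute; the counts below are made-up numbers for illustration:)

```python
from math import sqrt

x1, n1 = 45, 100            # successes / trials in group 1
x2, n2 = 30, 100            # successes / trials in group 2
p1, p2 = x1 / n1, x2 / n2
pc = (x1 + x2) / (n1 + n2)  # pooled proportion

z = (p1 - p2) / sqrt(pc * (1 - pc) * (1 / n1 + 1 / n2))
print(round(z, 2))  # 2.19
```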
My question is, is there a good way to visualize this operation?
I can understand what the 1-proportion $Z$ interval does, since it basically just standardizes the sample proportion distribution, but I struggle to understand what is going on here. It seems at first glance to subtract the two distributions of sample proportions, but on second thought that probably isn't the case.
(The visualization could be similar to the diagram for visualizing type I & II errors and power shown below.)
[](https://i.stack.imgur.com/aReEH.jpg)
| Visualizing how the 2 sample Z test works | CC BY-SA 4.0 | null | 2023-03-08T08:52:00.240 | 2023-03-09T19:57:45.803 | 2023-03-09T19:41:06.420 | 7290 | 382657 | [
"data-visualization",
"statistical-power",
"proportion",
"z-test",
"type-i-and-ii-errors"
] |
608739 | 1 | null | null | 0 | 27 | Let us say we are tasked with setting (average/list) prices that are likely to convert for heterogeneous products (e.g. used cars of all shapes and sizes - made-up example!). Let us also say that we only use one feature, miles per gallon, for target encoding with respect to the target: the prices achieved in sales during the last 12 months. We simply bin the feature using quartiles and calculate the median prices per quartile.
When we use a fancy model to price a new car to be sold, we feed the miles-per-gallon quartile's retrospective average to the model. As we use data from the last 12 months' actual sales during scoring - would this be data leakage, or is it acceptable? I guess for fitting the model you would only use the training data's quartile values (at least during optimisation - such as feature selection, interaction effect inclusion, or hyperparameter optimisation if we deal with a machine learning model)?
| When does target encoding lead to overfitting | CC BY-SA 4.0 | null | 2023-03-08T08:53:32.513 | 2023-03-12T16:26:57.823 | 2023-03-12T16:26:57.823 | 11887 | 6412 | [
"regression",
"feature-selection",
"overfitting",
"feature-engineering",
"hyperparameter"
] |
608740 | 1 | 608749 | null | 8 | 363 | Why do we need normality test if the sample size is large enough and hence, the distribution of the sample mean is approximately normal based on central limit theorem?
| Why do we need normality test if we already have CLT? | CC BY-SA 4.0 | null | 2023-03-08T09:04:06.440 | 2023-03-09T22:35:18.207 | 2023-03-08T11:13:10.943 | 362671 | 159516 | [
"normal-distribution",
"chi-squared-test",
"central-limit-theorem"
] |
608741 | 2 | null | 608694 | 8 | null | This primarily depends on how the model is supposed to be used. From your context it seems you have an alternative test which has an almost perfect classification rate but is very expensive to use, because it essentially consists of sending a qualified human to do a manual check. You want to use your model (which in comparison is extremely cheap) to decide where to perform the human tests.
If I understand you correctly, then a) your goal is to find as many instances evaluated by a human as 1 as possible (implying instances evaluated as 0 are not valuable) and b) if some instances that are 1 are not checked by a human because the model thinks they are unlikely candidates, this is not a problem because the number of human checks is very limited anyway.
If these assumptions are correct, then any model that has a better than 5% chance of finding instances qualified as 1 is useful, and your model with a 30% hit rate will increase the number of instances qualified as 1 sixfold, so it is a major improvement compared to not using the model.
There are of course plenty of other situations where such a model would be useless or even actively bad if applied. It just depends on what you want to do with the model.
| null | CC BY-SA 4.0 | null | 2023-03-08T09:07:27.260 | 2023-03-09T07:11:17.687 | 2023-03-09T07:11:17.687 | 181468 | 181468 | null |
608742 | 2 | null | 608411 | 2 | null | Yes, it is exactly as you say. LMs (and machine translation models, too) start with a randomly initialized embedding matrix, which is learned via standard backpropagation. It is typically not implemented via multiplying one-hot vectors, which would waste memory. All deep learning frameworks have embedding layers that do an index-based lookup.
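A minimal sketch of that equivalence (Python/NumPy; sizes and values are arbitrary, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, emb_dim = 10, 4

# randomly initialized embedding matrix (learned via backprop in practice)
E = rng.normal(size=(vocab_size, emb_dim))

token_ids = np.array([3, 1, 3])            # a toy "sentence" of token indices
looked_up = E[token_ids]                   # index-based lookup, shape (3, 4)

# mathematically equivalent, but memory-wasteful, one-hot multiplication
one_hot = np.eye(vocab_size)[token_ids]    # shape (3, 10)
via_matmul = one_hot @ E

print(np.allclose(looked_up, via_matmul))  # True
```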
| null | CC BY-SA 4.0 | null | 2023-03-08T09:10:12.380 | 2023-03-08T09:10:12.380 | null | null | 249611 | null |
608743 | 1 | 609400 | null | 1 | 63 | I would like to determine a partial correlation from fixed effects in my linear mixed models.
If I ran, for example, a model $A_{ij} = \beta_0 + \beta_1X + \beta_2Y + \beta_3Z + u_i + \epsilon_{ij}$,
can I calculate a partial correlation coefficient for $\beta_1$ to $A$ using the formula
$r= \beta_1 \cdot (\operatorname{var}(\beta_1) / \operatorname{sd}(A))$ ?
| Can I calculate the correlation coefficient from the beta of fixed effects in a linear mixed model? | CC BY-SA 4.0 | null | 2023-03-08T09:21:45.097 | 2023-03-14T12:34:07.357 | 2023-03-14T12:34:07.357 | 345611 | 382664 | [
"regression",
"mixed-model",
"effect-size",
"partial-correlation"
] |
608744 | 1 | null | null | 0 | 34 | How can I obtain the variance-covariance matrix of the estimated coefficients, $\hat{\beta}$, which result from some GLM (solved, for example, using IRLS)? Is there a reference you can direct me to?
In R the function is `vcov`; however, its documentation does not detail the computation. Bonus points if there is an easy way to obtain it using H2O.
| Finding the Variance-Covariance matrix of IRLS output | CC BY-SA 4.0 | null | 2023-03-08T09:22:28.123 | 2023-03-08T10:49:04.887 | 2023-03-08T10:49:04.887 | 73117 | 73117 | [
"r",
"generalized-linear-model",
"variance",
"covariance",
"h2o"
] |
608745 | 1 | null | null | 0 | 39 | I am using R and have based my analysis on the book by Hyndman, R.J., & Athanasopoulos, G. (2021) Forecasting: principles and practice, 3rd edition, OTexts using the fpp3 package ([https://otexts.com/fpp3/](https://otexts.com/fpp3/))
The aim is to estimate the number of averted cases after the implementation of an intervention. I have fitted an ARIMA model to the following time series of yearly incidence data, which after log-transformation is stationary based on the KPSS test (function unitroot_kpss). Using the ARIMA function from the same package to automatically decide on the best model based on AICc (also using stepwise=FALSE for a more extensive search), I get the model ARIMA(5,0,0) as the best fit. A residual check shows that there is no residual autocorrelation left; the Ljung-Box test also verifies that the residuals are white noise.
As can be seen in the graph, the prediction intervals of the median yearly incidence are simply too wide, making it pointless to estimate uncertainty around the predicted averted cases. The intervals are of course the same for the mean as well. What I have considered, and would like a second opinion on, is to estimate the median incidence for the whole post-intervention period instead of by year, and use the prediction interval around that to estimate uncertainty around the averted cases. In this way we do lose the yearly predictions, but I can't think of an alternative when they are as non-informative as in this case.
I have done this by simulating 10000 future paths from the fitted ARIMA(5,0,0) model using bootstrap residuals, estimating the median incidence for the whole post-intervention period for each of them, and finally the median of all medians, in the following way:
```
futures <- arima500 |>
generate(times = 10000, h = 49, set.seed(123),bootstrap = TRUE,
point_forecast = list(.median = median)) |>
as_tibble() |>
group_by(.rep) |>
summarise(.sim = median(.sim)) |>
summarise(total = distributional::dist_sample(list(.sim)))
future_median <- futures |>
mutate(
median = median(total),
pi80 = hilo(total, 80),
pi95 = hilo(total, 95)
)
```
Inspiration for this came from the section "Prediction intervals from bootstrapped residuals" in ([https://otexts.com/fpp3/prediction-intervals.html](https://otexts.com/fpp3/prediction-intervals.html))
and ([https://otexts.com/fpp3/aggregates.html](https://otexts.com/fpp3/aggregates.html)) where the setup in that example was to aggregate monthly data to get yearly estimates.
My questions are:
- Is this approach statistically correct?
- If so, is it more correct to estimate the median of medians (median = median(total) in the code above) or the mean of medians instead? Is it simply a matter of choice?
- I haven't worked a lot with arima models and the order of the chosen model (5th order autoregressive model) was a bit surprising. I am more used to seeing lower order models. Is there something to be considered there? Models selected by the ARIMA function will not contain roots outside or close to the unit circle so that part is already taken care of.
- Finally, how well are ARIMA models suited for long-horizon forecasting as in this case? Should other models be preferred instead?
[](https://i.stack.imgur.com/3WCtF.png)
| Prediction interval of the future median based on a series of future samples | CC BY-SA 4.0 | null | 2023-03-08T09:45:21.033 | 2023-03-08T09:45:21.033 | null | null | 25032 | [
"time-series",
"forecasting",
"arima",
"bootstrap",
"simulation"
] |
608746 | 1 | null | null | 0 | 25 | Why is "standard deviation" not defined as $\frac{\sum|X-\mu|}{N}$, with $\mu$ being the mean, instead of the formula we know today?
This could be considered a philosophical question until you try to connect the word "standard" with the actual formula, or to explain (the meaning of) it to others.
Why go to squared space and back (via the square root) when you could compute a mean of absolute deviations directly?
| Why standard deviation is based on squared value? | CC BY-SA 4.0 | null | 2023-03-08T10:10:54.133 | 2023-03-08T10:20:21.077 | null | null | 382666 | [
"standard-deviation",
"history"
] |
608747 | 2 | null | 608694 | 30 | null | As [Dave argues](https://stats.stackexchange.com/questions/608694/my-machine-learning-model-has-precision-of-30-can-this-model-be-usefull#comment1129789_608694), if "false negatives have no associated costs", then your best course of action would be not to classify anything as positive, i.e., as 1. You inspect nothing at all, you incur zero cost, and all your inspectors can do something different (or be fired).
Yes, of course this makes no sense. Which is because it makes no sense to claim that false negatives incur no costs. They do, it's the cost of uncaught errors.
This is an example of why precision, sensitivity etc. are all [as misleading as accuracy as evaluation metrics](https://stats.stackexchange.com/q/312780/1352), especially (but [not only](https://stats.stackexchange.com/a/368979/1352)) in "unbalanced" situations.
What I would strongly recommend you do is scrap "hard" classifications, use probabilistic classifications instead and separate the decision aspect from the statistical modeling aspect. The decisions (whether to inspect or not) should not only be driven by the probabilistic classification, [but also by the cost structure](https://stats.stackexchange.com/a/312124/1352).
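As a toy sketch of how the decision can be coupled to the cost structure (all numbers and the simplified cost model are my own assumptions, not from the linked answers):

```python
# Inspect an item whenever the expected cost of skipping it exceeds the
# cost of inspecting it: p * cost_missed_error > cost_inspection.
cost_missed_error = 50.0   # assumed cost of an uncaught faulty item
cost_inspection = 5.0      # assumed cost of one human inspection

threshold = cost_inspection / cost_missed_error   # here: 0.1

predicted_probs = [0.02, 0.08, 0.15, 0.60]        # probabilistic classifier output
decisions = [p > threshold for p in predicted_probs]
print(decisions)  # [False, False, True, True]
```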
| null | CC BY-SA 4.0 | null | 2023-03-08T10:38:02.020 | 2023-03-08T10:38:02.020 | null | null | 1352 | null |
608749 | 2 | null | 608740 | 8 | null | 1. The CLT certainly doesn't solve all problems. For example:
(a) There are distributions for which the CLT doesn't hold.
Here's an example:
[](https://i.stack.imgur.com/kne9Vm.png)
This density is a mixture of a symmetric 4-parameter beta and a $t_2$. There's a normal distribution that looks visually pretty close to it (e.g. in the way that a Kolmogorov-Smirnov statistic measures distance, largest absolute difference in cdf), but this distribution does not have a finite variance, and the ordinary central limit theorem fails to hold for this seemingly unremarkable-looking case.
(While it is close in the stated sense to a normal distribution, I did not spend time getting it as close as possible; there are examples that look even closer to a normal, indeed it can be as close as you like.)
(b) There are distributions for which the CLT does hold, but for which even averages of a million observations are not really close to a normal.
There's an example discussed [here](https://stats.stackexchange.com/a/238566/805), a lognormal distribution with sufficiently large shape parameter ($\sigma$).
That number ($n= 10^6$) can be pushed up, beyond any fixed value. That is, there's really no "large enough" that's sufficient to make the distribution of standardized sums or means 'close to normal' for every distribution among the set of distributions for which the CLT does hold.
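A quick simulation illustrates the lognormal case (the shape parameter and sample sizes here are picked arbitrarily for speed, not taken from the linked answer): even though the CLT applies to this distribution, the sample means remain strongly right-skewed.

```python
import random
import statistics

# Draw sample means from a lognormal with a large shape parameter
# (sigma = 3) and measure the skewness of their distribution:
# still far from normal at n = 500, even though the CLT holds.
random.seed(1)

def sample_mean(n, sigma):
    return statistics.fmean(random.lognormvariate(0.0, sigma) for _ in range(n))

means = [sample_mean(500, 3.0) for _ in range(1000)]

# Moment-based sample skewness; approximately 0 for a normal distribution.
m = statistics.fmean(means)
s = statistics.pstdev(means)
skew = statistics.fmean(((x - m) / s) ** 3 for x in means)
print(f"sample skewness of the means: {skew:.1f}")
```

Pushing sigma higher makes the skewness worse, so no fixed n is "large enough" across the whole family.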
(c) There are tests that assume normality but which do not involve means. In those cases the CLT isn't necessarily of any direct relevance.
2. None of this is an argument for using formal tests of assumptions. That doesn't really answer the right question.
There's a nice discussion of that point (in relation to testing normality) in Harvey Motulsky's answer [here](https://stats.stackexchange.com/a/2501/805). Much more could be said about testing but that's perhaps not the direct issue here, so I won't labor the point further.
| null | CC BY-SA 4.0 | null | 2023-03-08T11:05:06.357 | 2023-03-09T22:35:18.207 | 2023-03-09T22:35:18.207 | 805 | 805 | null |
608750 | 1 | null | null | 1 | 19 | I know about the bias variance tradeoff such that with higher variance we have less bias and vice versa. But what is the relationship of this tradeoff? Is it just an exact inverse proportionality i.e $bias\propto 1/variance$?
Second part of my question is what factors cause high variance in our model? I know that multicolinearity and high dimensionality ($p>n$) increase the variance. Are there any other factors?
| What is the exact relationship between variance and bias in regression? | CC BY-SA 4.0 | null | 2023-03-08T11:10:56.387 | 2023-03-08T11:10:56.387 | null | null | 362605 | [
"regression",
"variance",
"bias"
] |
608751 | 2 | null | 608694 | 2 | null | As hashed over a bit in the comments. The usefulness of the model here is going to hinge heavily on how likely it is that the distribution of your future data is to match the training/validation data.
I think it would be a serious mistake to assume they will match unless you can think of a very obvious reason why they wouldn't. I would assume there will be differences and test that hypothesis. If the generation of this data is stationary and highly likely to remain similar, then the model is probably useful, but you should make an attempt to quantify this. For example, if your current data was gathered at different times, separate it into the earliest and latest subsets, check model performance within each subgroup, and see whether there are time-based trends in the data itself (look for time-, geography-, or population-based trends in a wide range of statistics: means, medians, maxes, mins, curl, etc.) or whether there is any other potential reason the data might change (e.g. new professional guidelines). I wouldn't recommend assuming the model will generalize, even with good-looking generalization on train/test/validation sets, until you get a new real-world independent validation set. And even then the model might work just for your company with exactly how you gather data right now; if you outsource the data gathering, any sort of new semi-systemic idiosyncrasy might throw a serious wrench in the works. Likewise, it may not work at another branch office, etc.
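A minimal sketch of that earliest-vs-latest check, with invented records; any sizeable gap between the two halves is a warning sign that the data-generating process is drifting.

```python
# Toy temporal stability check: split labelled examples into the earliest
# and latest halves by timestamp and compare a simple performance metric
# in each half. Records and field layout are invented for illustration.

def accuracy(pairs):
    return sum(y == yhat for _, y, yhat in pairs) / len(pairs)

records = [  # (timestamp, true_label, model_prediction)
    (1, 1, 1), (2, 0, 0), (3, 1, 1), (4, 0, 1),
    (5, 1, 0), (6, 0, 1), (7, 1, 0), (8, 0, 0),
]
records.sort(key=lambda r: r[0])
half = len(records) // 2
early, late = records[:half], records[half:]

print(accuracy(early), accuracy(late))  # a big gap is a red flag
```

The same split-and-compare idea works for any metric, and for geography- or population-based splits instead of time.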
This sort of issue is a plague in precision medicine, in part because the large whole-genome and exome data sets and GWAS studies are so biased towards white Europeans, but when you want to go to clinical application you're suddenly not treating only white Europeans. Then there are false positives... you can have a SNP associated with ancestry look associated with disease, only for the link to turn out to be socioeconomic rather than genetic, etc. This is in part why ML hasn't obliterated much simpler statistical tests in that field. One of the rationales some have offered for including more diverse populations in GWAS is merely to reduce the false positives showing up for the currently available data. I try not to read that too cynically.
Also, I'm aware of various attempts to use more recent ML methods (deep learning, gradient boosted trees, xgboost) for imputation in this field, but none of them have broken into mainstream use despite very flattering initial papers. Largely because when they are applied to new independent data they don't perform better than the HMMs and often quite a bit worse.
When group B says they want to see much better performance, the actual threshold is arbitrary, but I think the sentiment is that they expect some loss of utility due to data differences and, unless the initial performance was strong, expect it is likely for the edge the model has to evaporate or even be harmful.
Edit: I read now in the comments that you say it is a logistic regression. Usually people tend not to say "my machine learning model" when it's a logistic regression, even though it can indeed be considered machine learning. You are certainly less likely to have some of the issues described above when using simpler models like that, but it can still happen. I do lean towards it being useful, but it can still be worth validating the common assumptions (e.g. stationarity).
| null | CC BY-SA 4.0 | null | 2023-03-08T11:13:15.963 | 2023-03-08T11:22:28.237 | 2023-03-08T11:22:28.237 | 207145 | 207145 | null |
608752 | 2 | null | 608737 | 1 | null | Yes, because if you look at the code, you will see that the term (which represents the coefficient by age) in the plot function is encased in `exp()`. And in this case the reference value for the Hazard Ratio (HR) is set to be the age 50 (this is explained in the text of the vignette).
When you use splines representing HRs you always need to set a reference value to compare the rest of covariate values to. In this case the HR line represents the value of the ratio for each age over age 50.
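A small sketch of what the plotted curve represents, with a made-up smooth function standing in for the fitted spline term (only differences in the log-hazard term matter for the ratio):

```python
import math

# The plotted curve is exp(f(age) - f(age_ref)). f below is a
# hypothetical log-hazard shape, not the actual fitted spline.

def f(age):
    return 0.0005 * (age - 40) ** 2   # invented log-hazard term

def hazard_ratio(age, ref=50):
    return math.exp(f(age) - f(ref))

print(hazard_ratio(50))               # exactly 1.0 at the reference age
print(round(hazard_ratio(70), 3))     # HR of a 70-year-old vs a 50-year-old
```

Changing `ref` shifts the whole curve up or down but leaves its shape intact, which is why the choice of reference value is a presentation decision, not a modeling one.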
I'll give more details if you are missing something.
| null | CC BY-SA 4.0 | null | 2023-03-08T11:13:22.737 | 2023-03-08T11:13:22.737 | null | null | 113233 | null |
608753 | 2 | null | 608694 | 7 | null |
## As with most things, neither of the two polar opinions is wholly correct
If a model can select an area needed for inspection better than random chance then for a singular inspection run you're using your inspector's time more usefully.
As an example, let's say your inspectors are checking welds on a pipeline. A classification model for error/no error may pick up early signs of corrosion but not spot a more subtle error like porosity or an undercut. In the long term, dependence on the model to inform inspectors could mean other error types get inspected less.
I'd recommend trying to look into your false negatives here, I know you say they incur no cost but you've got inspectors for a reason. Is there a particular type of defect that your model isn't picking up? Is there more data you can bring in to better account for the other error types?
tldr;
Better than random sounds like it would be more effective, but if your model is blind to an error type it could be increasing the likelihood that that error goes unnoticed (without knowing the specific situation we can't say more). At the same time 70-80% is just a number picked from thin air. A lot is down to your application.
| null | CC BY-SA 4.0 | null | 2023-03-08T11:23:05.823 | 2023-03-08T11:23:05.823 | null | null | 136450 | null |
608754 | 1 | null | null | 0 | 9 | why does the effect of a control variable on the coefficient of interest depends on how the control variable is measured?
The following DiD model is estimated:
Y = beta_1 * Treat + beta_2 * Post + beta_3 * (Treat * Post) + Control_Var
Y is the standardised student test score, Treat is a treatment dummy variable (0,1), and Post indicates whether the observation is pre- or post-reform (1,0).
If the control variable (here, the number of students eligible for free or reduced lunch) is measured in absolute numbers or univariately standardised, beta_3 is no longer statistically significant.
If I use the percentage of students eligible for FRL instead, beta_3 stays statistically significant.
Why is this the case? And what does this say about my diff-in-diffs estimation?
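To make one piece of this concrete, here is a simulated check (all data invented): a pure linear rescaling of a control, such as dividing a count by a constant, leaves the other OLS coefficients, including the interaction, unchanged. A percentage, by contrast, divides by a denominator that varies across observations, so it is a genuinely different variable.

```python
import random

# Toy check: multiplying a control by a constant does not change the other
# OLS coefficients; counts -> percentages is NOT such a rescaling.

def ols(X, y):
    """Solve (X'X) b = X'y by Gauss-Jordan elimination with pivoting."""
    k = len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(len(X))) for b in range(k)]
         + [sum(X[i][a] * y[i] for i in range(len(X)))] for a in range(k)]
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(k):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [A[r][j] - f * A[c][j] for j in range(k + 1)]
    return [A[c][k] / A[c][c] for c in range(k)]

random.seed(0)
n = 400
treat = [random.random() < 0.5 for _ in range(n)]
post = [random.random() < 0.5 for _ in range(n)]
frl_count = [random.randint(10, 200) for _ in range(n)]
y = [1.0 * t + 0.5 * p + 0.8 * (t and p) + 0.01 * c + random.gauss(0, 1)
     for t, p, c in zip(treat, post, frl_count)]

def design(control):
    return [[1.0, float(t), float(p), float(t and p), float(c)]
            for t, p, c in zip(treat, post, control)]

b_raw = ols(design(frl_count), y)                       # count as-is
b_scaled = ols(design([c / 100 for c in frl_count]), y)  # pure rescaling

print(round(b_raw[3], 6), round(b_scaled[3], 6))  # interaction terms match
```

So if swapping counts for percentages changes the significance of beta_3, the two specifications are controlling for different things, not just different units.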
| Does the scale of a control variable matter? | CC BY-SA 4.0 | null | 2023-03-08T11:39:44.103 | 2023-03-08T11:39:44.103 | null | null | 380647 | [
"statistical-significance",
"difference-in-difference",
"controlling-for-a-variable",
"scale-parameter"
] |
608755 | 1 | null | null | 0 | 9 | I have been working with logistic regression, and I have used it to test an association between an outcome and exposure in a random cohort group.
The outcome is cancer and the exposure is smoking.
```
cancer ~ smoking
```
The OR I get from this is 20.
Then I add more cases to the model (adjusting for more cases), to get a stronger association and narrower CIs. I know that this will lead to an ascertainment bias: I will have a bigger proportion of cases (more smokers), so it will no longer be a random cohort group. I did this:
```
cancer ~ smoking + more_smokers
```
I was expecting that the OR would increase a bit, but it did the opposite: I got an OR of 17 instead, though I also got much narrower CIs, as expected.
What would be the interpretation for the decreasing OR?
| Decreasing Odds ratio when adding more cases | CC BY-SA 4.0 | null | 2023-03-08T11:51:07.787 | 2023-03-08T11:51:07.787 | null | null | 382622 | [
"logistic"
] |
608757 | 1 | 608762 | null | 3 | 136 | Here is a simple example in R using the `iris` dataset:
```
summary(lm(Sepal.Length ~ Species, data=iris))
```
Output is:
```
Call:
lm(formula = Sepal.Length ~ Species, data = iris)
Residuals:
Min 1Q Median 3Q Max
-1.6880 -0.3285 -0.0060 0.3120 1.3120
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.0060 0.0728 68.762 < 2e-16 ***
Speciesversicolor 0.9300 0.1030 9.033 8.77e-16 ***
Speciesvirginica 1.5820 0.1030 15.366 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.5148 on 147 degrees of freedom
Multiple R-squared: 0.6187, Adjusted R-squared: 0.6135
F-statistic: 119.3 on 2 and 147 DF, p-value: < 2.2e-16
```
Why is the Std. Error for Speciesversicolor the same as the Std. Error for Speciesvirginica (`0.1030`)? I cannot get my head around that...
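A small simulation with three equal groups of 50 (invented data standing in for iris) suggests where the equality might come from: if each dummy coefficient is a group mean minus the reference mean, its standard error is the pooled residual SD times sqrt(1/n_ref + 1/n_group), and equal group sizes would force all dummy SEs to coincide.

```python
import math
import random

# Simulated stand-in for the balanced iris design: three groups of 50
# (group means and SD are invented). The dummy SE formula below only
# depends on the pooled residual SD and the two group sizes.
random.seed(2)
groups = {g: [random.gauss(mu, 0.5) for _ in range(50)]
          for g, mu in [("setosa", 5.0), ("versicolor", 5.9), ("virginica", 6.6)]}

means = {g: sum(v) / len(v) for g, v in groups.items()}
sse = sum((x - means[g]) ** 2 for g, v in groups.items() for x in v)
s = math.sqrt(sse / (150 - 3))                       # residual SD on 147 df

se = {g: s * math.sqrt(1 / 50 + 1 / 50) for g in ("versicolor", "virginica")}
print(se["versicolor"] == se["virginica"])            # True: same group sizes
```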
| Linear regression with categorical variable: why are standard errors all the same? | CC BY-SA 4.0 | null | 2023-03-08T11:53:43.653 | 2023-03-08T16:35:22.730 | null | null | 164356 | [
"r",
"regression"
] |
608758 | 1 | null | null | 0 | 18 | I am currently doing a nested cross-validation for lasso regression to determine the best lambda value.
I am using the mlr3 package with the documentation from [https://mlr3book.mlr-org.com/optimization.html#sec-nested-resampling](https://mlr3book.mlr-org.com/optimization.html#sec-nested-resampling)
In the example, the hyperparameters selected and tested on the outer fold were:
```
iteration cost gamma classif.ce
1 -11.512925 -11.512925 0.4567227
1 -11.512925 5.756463 0.4567227
1 -5.756463 -11.512925 0.4567227
1 0.000000 5.756463 0.4567227
1 5.756463 -11.512925 0.2747899
...
3 0.000000 -11.512925 0.4678571
3 0.000000 -5.756463 0.2151261
3 0.000000 5.756463 0.4678571
3 5.756463 0.000000 0.4678571
3 11.512925 -5.756463 0.1941176
```
So how do I determine the best hyperparameters (cost and gamma in this case) for my final model?
| Which hyper parameter to select after nested cross validation? | CC BY-SA 4.0 | null | 2023-03-08T11:57:51.610 | 2023-03-08T12:08:06.313 | 2023-03-08T12:08:06.313 | 347386 | 347386 | [
"machine-learning",
"hyperparameter"
] |
608759 | 1 | null | null | 0 | 53 | I want to get a deepened understanding of random effects models and how they compare to fixed effects models. While there are some great answers already given by the community the following question remains unclear to me:
Broadly speaking, one could say that before values are analyzed across different units of analysis, each value in a fixed effects model has the unit-specific mean of the respective variable subtracted from it, right? Thus, all variation in a fixed effects model is relative to the unit of analysis's baseline level. Consequently, applying fixed effects models to data with little within-unit variation is not useful.
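The demeaning step described above can be sketched as follows (data invented):

```python
from collections import defaultdict

# Minimal within ("demeaning") transformation: subtract each unit's own
# mean from its observations before pooling across units.

obs = [  # (unit, value)
    ("A", 10.0), ("A", 12.0), ("A", 11.0),
    ("B", 20.0), ("B", 19.0), ("B", 21.0),
]

totals, counts = defaultdict(float), defaultdict(int)
for unit, v in obs:
    totals[unit] += v
    counts[unit] += 1
unit_mean = {u: totals[u] / counts[u] for u in totals}

within = [(u, v - unit_mean[u]) for u, v in obs]
print(within)  # all remaining variation is relative to each unit's own mean
```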
I read in several publications that random-effects models are a better option in such settings. So do random-effects models, like fixed-effects models, consider within variation before they analyse variation between units?
| Within and Between Variation in fixed and random effects models | CC BY-SA 4.0 | null | 2023-03-08T12:09:23.653 | 2023-03-09T08:57:03.443 | null | null | 305206 | [
"mathematical-statistics",
"mixed-model",
"panel-data",
"fixed-effects-model"
] |
608760 | 1 | null | null | 0 | 20 | I have some time series floating point distance measurements $x_n$ with corresponding integer time rounded to the nearest seconds $t_n$. If I calculate velocity (eg. forward difference), there is large variation about the likely true value due to the loss of precision in time. How can I calculate a plausible set of adjusted times ($T_n=t_n+e_n; -0.5 \leq e_n< 0.5$) such that the acceleration is minimized (approximately) between successive data points?
Is this a standard type of problem, with a standard solution (e.g. in SciPy)? A possible approach might be to calculate the rolling mean of velocity and derive the necessary adjustments $e_n$ from it.
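As a rough sketch of one possible route (coordinate descent over a grid of candidate offsets rather than the rolling-mean idea; toy data, no SciPy required):

```python
# Choose per-sample offsets e_n in [-0.5, 0.5) that minimise the sum of
# squared changes in velocity (a proxy for acceleration), by simple
# coordinate descent over a grid of candidates. Illustration only.

def objective(x, t, e):
    T = [ti + ei for ti, ei in zip(t, e)]
    v = [(x[i + 1] - x[i]) / (T[i + 1] - T[i]) for i in range(len(x) - 1)]
    return sum((v[i + 1] - v[i]) ** 2 for i in range(len(v) - 1))

def fit_offsets(x, t, sweeps=5, grid=21):
    e = [0.0] * len(t)
    candidates = [-0.5 + k / (grid - 1) * 0.999 for k in range(grid)]
    for _ in range(sweeps):
        for n in range(len(t)):
            e[n] = min(candidates,
                       key=lambda c: objective(x, t, e[:n] + [c] + e[n + 1:]))
    return e

# Truth: x = 2 * time, sampled at non-integer times, then times rounded.
true_t = [0.3, 1.4, 2.1, 3.4, 4.2, 5.3]
x = [2 * ti for ti in true_t]
t = [round(ti) for ti in true_t]

e = fit_offsets(x, t)
print(objective(x, t, [0.0] * len(t)), "->", objective(x, t, e))
```

Note the solution is only identified up to a common shift of all offsets, since the objective depends on time differences alone; a proper least-squares formulation with bound constraints (e.g. via `scipy.optimize.minimize`) would be the more standard way to do this at scale.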
| Estimating lost precision in time series data | CC BY-SA 4.0 | null | 2023-03-08T12:13:01.833 | 2023-03-08T12:32:03.947 | 2023-03-08T12:32:03.947 | 362671 | 382673 | [
"optimization",
"data-preprocessing",
"scipy",
"rounding"
] |