Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
611610 | 1 | null | null | 1 | 20 | I am running models, and I am learning how to use LIME to explain them. I trained a random forest on data that has 988 rows and 5000 columns. However, I am getting an error that says `Error: All permutations have no similarity to the original observation. Try setting bin_continuous to TRUE and/or increase kernel_size`. I don't understand this, and I would appreciate edits to my code below. (Disclaimer: this is a homework question.)
This is my attempt below.
```
library(lime)
explainer_caret <- lime(training, model_train)
pdf('lime_1_6.pdf')
explanation <- explain(testing[1:6, ], explainer_caret,
labels="positive",
n_permutations=5,
dist_fun="manhattan",
kernel_width = 3,
n_features = 10)
dev.off()
```
To try to fix this, I tried one of the suggestions, using [https://goodekat.github.io/LIME-research-journals/journals/02-understanding_lime/02-understanding_lime.html](https://goodekat.github.io/LIME-research-journals/journals/02-understanding_lime/02-understanding_lime.html) as a guide:
```
explainer_caret <- lime(training, model_train,
preprocess = NULL, bin_continuous = TRUE,
n_bins = 4, quantile_bins = TRUE)
pdf('lime_1_6.pdf')
explanation <- explain(testing[1:6, ], explainer_caret,
labels="positive",
n_permutations=5,
dist_fun="manhattan",
kernel_width = 3,
n_features = 10)
dev.off()
```
I still get this `Error: All permutations have no similarity to the original observation. Try setting bin_continuous to TRUE and/or increase kernel_size`
| Using LIME to interpret model predictions in R | CC BY-SA 4.0 | null | 2023-04-02T22:24:23.160 | 2023-04-02T22:24:23.160 | null | null | 378400 | [
"r",
"regression",
"machine-learning",
"classification",
"lime"
] |
611611 | 1 | 611612 | null | 2 | 39 | I am studying Elliptically Symmetric Distributions, and someone recommended to me Symmetric Multivariate and Related Distributions by Fang (1990), which is the book I am reading for that purpose.
I know that if $X\sim N(\mu,\sigma^2)$, then $|X|$ follows a folded normal distribution. So, I wonder if there exists some generalization. For that, let $X\sim \operatorname{EC}_1(\mu,\sigma^2,\psi)$, what is the distribution of $|X|$?
I thought of using the definition of the PDF of $X\sim \operatorname{EC}_1(\mu,\sigma^2,\psi)$:
$$f(x)=\frac{1}{\sigma}\psi\left(\frac{(x-\mu)^2}{\sigma^2}\right)$$
But I think this is the wrong approach. Do you know what the distribution of $|X|$ is, so that I know what I should try to prove? A reference to an article that discusses this would also be really helpful.
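By analogy with the folded normal, my (unverified) guess is that the density of $|X|$ simply folds the two branches of the elliptical density together:
$$f_{|X|}(x)=\frac{1}{\sigma}\left[\psi\left(\frac{(x-\mu)^2}{\sigma^2}\right)+\psi\left(\frac{(x+\mu)^2}{\sigma^2}\right)\right],\qquad x\ge 0,$$
since $f_{|X|}(x)=f_X(x)+f_X(-x)$ for a continuous $X$. But I do not know whether this family has a name or has been studied.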
| Distribution of absolute value of Elliptically Symmetric Random Variable | CC BY-SA 4.0 | null | 2023-04-02T22:30:15.417 | 2023-04-25T09:26:03.610 | 2023-04-25T09:26:03.610 | 56940 | 384779 | [
"distributions",
"random-variable",
"elliptical-distribution"
] |
611612 | 2 | null | 611611 | 3 | null | To my knowledge, this is still under active research; I am not aware of a generalization to elliptical distributions.
The latest article that I know of is [On Moments of Folded and Truncated Multivariate Normal Distributions](https://www.cesarerobotti.com/wp-content/uploads/2019/04/JCGS-KR.pdf), which states in the conclusion:
>
Generalizing the results to multivariate elliptical distributions
requires a lot more work. Although the product moments of multivariate
elliptical distributions can be obtained from the product moments of
multivariate normal distributions (see, for example, Berkane and
Bentler (1986) and Maruyama and Seo (2003)), it is not clear how to
obtain product moments of folded and truncated multivariate elliptical
distributions. We leave this topic for future research.
| null | CC BY-SA 4.0 | null | 2023-04-02T22:56:32.930 | 2023-04-25T08:47:07.800 | 2023-04-25T08:47:07.800 | 53580 | 53580 | null |
611613 | 1 | null | null | 0 | 15 | I have a hidden Markov model.
$X_n = e^{-b\,dt}\, X_{n-1} + \sigma_v V_v$
$Y_n = X_n + \sigma_w V_w$
where $b$ is the model parameter and $\sigma_v$, $\sigma_w$ are the standard deviations of the Gaussian noise terms $V_v$ and $V_w$. How do I find the conditional probability of $b$ given $X_n$ and $Y_n$ for $n \in \{1,2,\dots, k\}$?
I think I need to use the log-normal as the $\log(\sigma)$ returns a negative value. Can anyone provide a detailed derivation?
| Derive Gibbs sampler for exponential state space model | CC BY-SA 4.0 | null | 2023-04-02T23:07:36.550 | 2023-04-02T23:44:29.040 | 2023-04-02T23:44:29.040 | 44269 | 384781 | [
"bayesian",
"markov-chain-montecarlo",
"gibbs"
] |
611615 | 1 | null | null | 0 | 38 | As I was watching a video explaining how MDS works, the narrator mentioned that PCA is equivalent to MDS when Euclidean distances are used. I got confused as to how that's the case.
My guess is that it has to do with SVD but I don't know how to properly prove it. Thanks in advance!
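As a sanity check of the claim (not a proof), here is a small numerical experiment in Python I put together: classical MDS double-centers the squared Euclidean distance matrix and eigendecomposes it, and the resulting coordinates match the PCA scores up to column signs.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
Xc = X - X.mean(axis=0)

# PCA scores via SVD of the centered data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pca_scores = U * s

# classical MDS: double-center the squared Euclidean distance matrix
D2 = ((Xc[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
J = np.eye(6) - np.ones((6, 6)) / 6
B = -0.5 * J @ D2 @ J                 # equals the Gram matrix Xc Xc^T
w, V = np.linalg.eigh(B)
idx = np.argsort(w)[::-1]             # sort eigenvalues descending
mds_scores = V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

# coordinates agree up to the sign of each column
match = np.allclose(np.abs(pca_scores), np.abs(mds_scores[:, :3]), atol=1e-6)
print(match)  # → True
```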
| Proof that PCA is equivalent to MDS when using Euclidean distances | CC BY-SA 4.0 | null | 2023-04-02T23:49:12.663 | 2023-04-02T23:49:12.663 | null | null | 351043 | [
"pca",
"dimensionality-reduction",
"multidimensional-scaling"
] |
611616 | 1 | null | null | 1 | 12 | I need to run an analysis in spss, please could you advise me on that matter?
I give my examples below; please correct me:
- three groups: control, body positivity, body neutrality,
- measure: self esteem, body image, social comparison
hypotheses:
H1: Participants who view body neutrality messaging will have higher scores on a measure of self-esteem and higher scores on a measure of positive body image than participants who view body positivity messaging.
ANOVA 2x2 (BN/BP x Selfesteem/Body image)
H2: Participants who view body positivity and participants who view body neutrality messaging will have higher scores on a measure of self-esteem and positive body image than participants who view non-body-related images.
ANOVA 2x2 (BP/non-body-related x Selfesteem/Body image)
ANOVA 2x2 (BN/non-body-related x Selfesteem/Body image)
H3: Amongst participants who viewed body neutrality messaging, there will be no correlation between scores on a measure of positive body image and scores on a measure of social comparison
correlation for Body Neutrality: body image x social comparison
H4: Amongst participants who viewed body positivity messaging, there will be a positive correlation between scores on a measure of positive body image and scores on a measure of social comparison.
correlation for Body Positivity: body image x social comparison
Or should I use a multiple correlation?
| Ascertaining an appropriate test for this scenario | CC BY-SA 4.0 | null | 2023-04-02T23:59:43.817 | 2023-04-03T02:08:12.483 | 2023-04-03T02:08:12.483 | 362671 | 384785 | [
"spss",
"dataset"
] |
611618 | 1 | null | null | 0 | 11 | The `esoph` dataset included in the R core installation has the following structure:
```
head(esoph)
agegp alcgp tobgp ncases ncontrols
1 25-34 0-39g/day 0-9g/day 0 40
2 25-34 0-39g/day 10-19 0 10
3 25-34 0-39g/day 20-29 0 6
4 25-34 0-39g/day 30+ 0 5
5 25-34 40-79 0-9g/day 0 27
6 25-34 40-79 10-19 0 7
str(esoph)
'data.frame': 88 obs. of 5 variables:
$ agegp : Ord.factor w/ 6 levels "25-34"<"35-44"<..: 1 1 1 1 1 1 1 1 1 1 ...
$ alcgp : Ord.factor w/ 4 levels "0-39g/day"<"40-79"<..: 1 1 1 1 2 2 2 2 3 3 ...
$ tobgp : Ord.factor w/ 4 levels "0-9g/day"<"10-19"<..: 1 2 3 4 1 2 3 4 1 2 ...
$ ncases : num 0 0 0 0 0 0 0 0 0 0 ...
$ ncontrols: num 40 10 6 5 27 7 4 7 2 1 ...
```
I managed to build two binomial GLMs with ordered and unordered factor predictors as follows:
```
mod1 <- glm(cbind(ncases,ncontrols) ~ agegp + alcgp+ tobgp, esoph, family=binomial)
summary(mod1)
Call:
glm(formula = cbind(ncases, ncontrols) ~ agegp + alcgp + tobgp,
family = binomial, data = esoph)
Deviance Residuals:
Min 1Q Median 3Q Max
-1.9507 -0.7376 -0.2438 0.6130 2.4127
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.19039 0.20737 -5.740 9.44e-09 ***
agegp.L 3.99663 0.69389 5.760 8.42e-09 ***
agegp.Q -1.65741 0.62115 -2.668 0.00762 **
agegp.C 0.11094 0.46815 0.237 0.81267
agegp^4 0.07892 0.32463 0.243 0.80792
agegp^5 -0.26219 0.21337 -1.229 0.21915
alcgp.L 2.53899 0.26385 9.623 < 2e-16 ***
alcgp.Q 0.09376 0.22419 0.418 0.67578
alcgp.C 0.43930 0.18347 2.394 0.01665 *
tobgp.L 1.11749 0.24014 4.653 3.26e-06 ***
tobgp.Q 0.34516 0.22414 1.540 0.12358
tobgp.C 0.31692 0.21091 1.503 0.13294
mod2 <- glm(cbind(ncases,ncontrols) ~ factor(agegp, ordered=F) +
factor(alcgp, ordered=F) + factor(tobgp, ordered=F),
esoph, family=binomial)
summary(mod2)
Call:
glm(formula = cbind(ncases, ncontrols) ~ factor(agegp, ordered = F) +
factor(alcgp, ordered = F) + factor(tobgp, ordered = F),
family = binomial, data = esoph)
Deviance Residuals:
Min 1Q Median 3Q Max
-1.9507 -0.7376 -0.2438 0.6130 2.4127
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -6.8954 1.0859 -6.350 2.16e-10 ***
factor(agegp, ordered = F)35-44 1.9809 1.1041 1.794 0.072786 .
factor(agegp, ordered = F)45-54 3.7763 1.0680 3.536 0.000407 ***
factor(agegp, ordered = F)55-64 4.3352 1.0650 4.070 4.69e-05 ***
factor(agegp, ordered = F)65-74 4.8964 1.0764 4.549 5.39e-06 ***
factor(agegp, ordered = F)75+ 4.8265 1.1213 4.304 1.67e-05 ***
factor(alcgp, ordered = F)40-79 1.4346 0.2501 5.737 9.63e-09 ***
factor(alcgp, ordered = F)80-119 1.9807 0.2848 6.956 3.51e-12 ***
factor(alcgp, ordered = F)120+ 3.6029 0.3850 9.357 < 2e-16 ***
factor(tobgp, ordered = F)10-19 0.4381 0.2283 1.919 0.055039 .
factor(tobgp, ordered = F)20-29 0.5126 0.2730 1.878 0.060398 .
factor(tobgp, ordered = F)30+ 1.6410 0.3441 4.769 1.85e-06 ***
```
However, after googling and searching on Stack Overflow, I still couldn't find relevant information for the following questions:
- How can a GLM with `family=binomial` accept two response variables in the form of `cbind(ncases, ncontrols)`? What does the model actually try to predict?
- What do the characters L, Q, C, ^4, ^5 in agegp.L, agegp.Q, ..., agegp^5 from the output of summary(mod1) mean?
- Why are the coefficients for variables agegp, alcgp and tobgp different depending on whether those variables are ordered/unordered factors?
| Understanding the process of building a binomial GLM with factored predictors | CC BY-SA 4.0 | null | 2023-04-03T00:12:22.143 | 2023-04-03T00:12:22.143 | null | null | 163750 | [
"r",
"generalized-linear-model",
"categorical-data",
"ordinal-data"
] |
611620 | 2 | null | 459545 | 2 | null | There may be a mistake in the formula from @oszkar's answer. You can see this by modifying `b` from the example above. The code
```
a <- c(2, 1, 2, 1, 2)
b <- c(1, 2, -1, -2, 1)
n <- length(a)
c_0 <- abs(1 / n * sum((a - mean(a)) * (b - mean(b))))
for (t in -3:3) {
if (t <= 0) {
c_t <- 1 / n * sum((a[1:(n + t)] - mean(a)) * (b[(1 - t):n] - mean(b)))
} else {
c_t <- 1 / n * sum((a[(1 + t):n] - mean(a)) * (b[1:(n - t)] - mean(b)))
}
r_t <- c_t / c_0
print(r_t)
}
```
yields
```
[1] -3.4
[1] 2.9
[1] 0.2
[1] 1
[1] 0.2
[1] -3.1
[1] 0.6
```
A correlation outside of [-1, 1] does not make sense, which suggests the formula is incorrect. The `ccf` function in R:
```
ccf(a, b, 3, plot = F)$acf
```
gives
```
, , 1
[,1]
[1,] -0.37777778
[2,] 0.32222222
[3,] 0.02222222
[4,] 0.11111111
[5,] 0.02222222
[6,] -0.34444444
[7,] 0.06666667
```
The form for $r_{ij}(t)$ used in R is actually
$$r_{ij}(t)=\dfrac{c_{ij}(t)}{\sqrt{\sigma_i^2\sigma_j^2}}$$
where
$$\sigma_i^2=\frac{1}{n} \sum_{s=1}^{n}\left[X_i(s)-\overline{X}_i\right]^2.$$
That is, the denominator is the square root of the product of the "population variance" of the two time series. With this formula you can see that the results below match with the results from `ccf`.
```
a <- c(2, 1, 2, 1, 2)
b <- c(1, 2, -1, -2, 1)
n <- length(a)
c_0 <- sqrt(sum((a - mean(a)) ^ 2 / n) * sum((b - mean(b)) ^ 2 / n))
for (t in -3:3) {
if (t <= 0) {
c_t <- 1 / n * sum((a[1:(n + t)] - mean(a)) * (b[(1 - t):n] - mean(b)))
} else {
c_t <- 1 / n * sum((a[(1 + t):n] - mean(a)) * (b[1:(n - t)] - mean(b)))
}
r_t <- c_t / c_0
print(r_t)
}
```
gives
```
[1] -0.3777778
[1] 0.3222222
[1] 0.02222222
[1] 0.1111111
[1] 0.02222222
[1] -0.3444444
[1] 0.06666667
```
| null | CC BY-SA 4.0 | null | 2023-04-03T00:39:13.880 | 2023-04-03T03:20:17.070 | 2023-04-03T03:20:17.070 | 308929 | 308929 | null |
611621 | 2 | null | 340786 | 1 | null | I want to expand upon / comment on the other answers.
First, note that if you are using the noncentral t-distribution, you need to specify your equivalence bounds as a standardized mean difference $\delta$, since the noncentrality parameter will be $\delta\sqrt{n}$. If you are using TOST (i.e. central t-distribution), you need to specify it on the raw scale. In either case, if the equivalence bound is not on the right scale, you'll have to estimate it using the sample standard deviation, which will affect the error rates of your test.
Second, I do not follow David's claim that using the noncentral t-distribution yields more power than TOST. It seems to me that (for a given sample size) this depends on the equivalence bounds (on which the shape of the noncentral t depends).
Here is the result of a simulation:
- x-axis shows the population effect size, y-axis is power
- solid curves are for TOST, dashed curves for noncentral t
- the colors correspond to different equivalence bounds of 0.5 (navy), 1 (purple), 1.5 (red), 2 (orange)
- vertical dotted lines show the corresponding equivalence bounds
- horizontal dotted line is at $\alpha=0.05$
- note that you can see David's result in the plot: the dashed navy line crosses the point (0,0.17) and the solid navy line (0,0)
[](https://i.stack.imgur.com/kgqA6.png)
In this simulation, equivalence bounds are specified as a standardized mean difference. For the TOST, the raw bounds are estimated using the sample SD. Hence, the max false positive error rate is not actually equal to $\alpha$ (solid curves do not necessarily go through where the dotted lines cross). However, if I didn't make an error somewhere, power for the noncentral t is not necessarily higher than for TOST.
Code:
```
# sample size
n <- 10
# point null on raw scale
mu0 <- 0
# population sd
sigma <- 1.5
# equivalence bounds (magnitudes)
delta_eq <- c(0.5, 1, 1.5, 2)
# population ES = (mu-mu0)/sigma
delta <- seq(-max(delta_eq)*1.1, max(delta_eq)*1.1, length.out=200)
colors <- c("navy", "purple", "red", "orange")
par(mfrow=c(1,1))
nreps <- 10000
power.ump <- matrix(nrow = length(delta), ncol = length(delta_eq))
power.tost <- matrix(nrow = length(delta), ncol = length(delta_eq))
for (i in seq_along(delta)) {
# population mean
mu <- mu0 + delta[i]*sigma
# each col hold p-values for a given delta_eq
p.tost <- matrix(nrow = nreps, ncol = length(delta_eq))
p.ump <- matrix(nrow = nreps, ncol = length(delta_eq))
for (j in 1:nreps) {
# sample
x <- rnorm(n, mu, sigma)
m <- mean(x)
s <- sd(x)
# UMP
# non-centrality parameters (note: do not depend on s)
ncp.lower <- -delta_eq * sqrt(n)
ncp.upper <- +delta_eq * sqrt(n)
# observed t-statistic (two-sided test)
t <- abs((m-mu0) * sqrt(n) / s)
# p-value UMP
p.ump.lower <- pt(-t, n-1, ncp.lower, lower.tail=FALSE) - pt(t, n-1, ncp.lower, lower.tail=FALSE)
p.ump.upper <- pt(+t, n-1, ncp.upper, lower.tail=TRUE) - pt(-t, n-1, ncp.upper, lower.tail=TRUE)
p.ump[j, ] <- pmax(p.ump.lower, p.ump.upper)
# TOST
# lower and upper equivalence bounds TOST (raw scale, estimated using s)
mu0.tost.lower <- mu0 - delta_eq * s
mu0.tost.upper <- mu0 + delta_eq * s
# observed t-statistic TOST
t.tost.lower <- (m - mu0.tost.lower) * sqrt(n) / s
t.tost.upper <- (m - mu0.tost.upper) * sqrt(n) / s
# p-value tost
p.tost.lower <- pt(t.tost.lower, n-1, ncp=0, lower.tail=FALSE)
p.tost.upper <- pt(t.tost.upper, n-1, ncp=0, lower.tail=TRUE)
p.tost[j, ] <- pmax(p.tost.lower, p.tost.upper)
}
power.ump[i, ] <- colMeans(p.ump < 0.05)
power.tost[i, ] <- colMeans(p.tost < 0.05)
}
plot(0, 0, type="n", xlim=range(delta), ylim=c(0,1), xlab=expression(delta), ylab="Power")
for (i in seq_along(delta_eq)) {
lines(delta, power.tost[ ,i], lty="solid", col=colors[i])
lines(delta, power.ump[ ,i], lty="dashed", col=colors[i])
abline(v = -delta_eq[i], col=colors[i], lty="dotted")
abline(v = delta_eq[i], col=colors[i], lty="dotted")
}
abline(h=0.05, lty="dotted")
```
| null | CC BY-SA 4.0 | null | 2023-04-03T00:47:06.463 | 2023-04-03T00:47:06.463 | null | null | 349912 | null |
611622 | 1 | 611631 | null | 2 | 118 | Suppose we have a one-sample binomial proportions problem with a small number of trials (say 5–10 trials). I want to conduct inference and come up with a p-value for testing $p\neq p_0$ for some $p_0$ prescribed proportion. What are some ways to do this? Is there a Fisher's Exact test analogue for one-sample? What about re-randomization tests?
| How to conduct inference on binomial proportions with small sample sizes? | CC BY-SA 4.0 | null | 2023-04-03T01:01:24.493 | 2023-04-04T04:05:49.453 | 2023-04-03T23:24:43.247 | 44269 | 108150 | [
"inference",
"proportion"
] |
611623 | 2 | null | 314428 | 1 | null | Think you have it backwards on sigma squared. The "beta" of the GARCH model is the coefficient of historical variance.
| null | CC BY-SA 4.0 | null | 2023-04-03T01:08:32.743 | 2023-04-03T01:08:32.743 | null | null | 384787 | null |
611624 | 2 | null | 611541 | 0 | null | I haven't used BIC, so there may be some differences, but I have used AIC a lot. I will generally report (some variant of) `AICcmodavg::aictab` results. Exactly what you report will depend on your audience, e.g. different journals may have different reporting requirements, or you may want a very brief summary if it's for a non-technical audience.
An example from the help page for `aictab` gives
```
> aictab(cand.set = Cand.models, modnames = Modnames, sort = TRUE)
Model selection based on AICc:
K AICc Delta_AICc AICcWt Cum.Wt LL
mod 2 9 89.98 0.00 0.8 0.8 -35.18
mod 1 7 92.77 2.79 0.2 1.0 -38.89
mod 5 6 115.52 25.54 0.0 1.0 -51.39
mod 4 5 121.02 31.04 0.0 1.0 -55.25
mod 3 4 138.14 48.16 0.0 1.0 -64.90
```
Where
- the row name is a description of the model,
- K is the number of parameters,
- AICc is the small-sample AIC (you'd use BIC here),
- Delta_AICc is the difference between the minimum AICc and that particular model's AICc
- AICcWt is the "model probability" out of the candidate set
- CumWt is the cumulative sum of the model probabilities
- LL is the log-likelihood of the model
| null | CC BY-SA 4.0 | null | 2023-04-03T01:09:11.113 | 2023-04-03T01:09:11.113 | null | null | 369002 | null |
611625 | 1 | null | null | 1 | 45 | Suppose our dependent variable Y is TRUE vs FALSE, and our independent variable X takes the values GREEN, YELLOW, and RED. We performed a logistic regression of Y ~ X. I wonder whether, and how, the trained logistic regression can be used to answer the following questions:
- What is the odds ratio (and p-value) of TRUE vs FALSE for subjects whose X equals to GREEN.
- What is the odds ratio (and p-value) of TRUE vs FALSE for subjects whose X equals to YELLOW.
- What is the odds ratio (and p-value) of TRUE vs FALSE for subjects whose X equals to RED.
| post-hoc analysis for logistic regression? | CC BY-SA 4.0 | null | 2023-04-03T01:20:25.607 | 2023-04-03T02:54:00.987 | 2023-04-03T02:54:00.987 | 11887 | 95083 | [
"regression",
"logistic",
"post-hoc"
] |
611626 | 2 | null | 611625 | 0 | null | It is very possible. Using 0/1 to indicate False/True, $x=1$ to indicate red, and $w=1$ to indicate green, the model is
$$ \operatorname{logit}(p) = \beta_0 + \beta_1 x + \beta_2 w$$
Here, we have set the reference group to be yellow. Note the following
- $\beta_0$ is the log odds for the yellow category, and hence $\exp(\beta_0)$ is the odds for yellow.
- $\beta_0 + \beta_1$ is the log odds for the red group. The odds for red is then $\exp(\beta_0 + \beta_1) = e^{\beta_0} \cdot e^{\beta_1}$
- A similar argument can be made for green.
As for p-values, this depends on what the null hypothesis is. Typical logistic regression output will report p values for the Wald test that $\beta_j$ is 0. If you edit your question to be more specific about the associated null for the p value in question, I can be more precise.
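As a purely numerical sketch (the coefficient values below are made up, not fitted to any data), the group-wise odds follow directly from the identities above:

```python
from math import exp

# hypothetical fitted coefficients (yellow = reference group)
b0, b_red, b_green = -0.5, 1.2, 0.3

odds = {
    "yellow": exp(b0),
    "red":    exp(b0 + b_red),
    "green":  exp(b0 + b_green),
}
# each group's probability of TRUE follows from its odds
probs = {k: v / (1 + v) for k, v in odds.items()}

# identical, illustrating exp(b0 + b1) = e^{b0} * e^{b1}
print(odds["red"], exp(b0) * exp(b_red))
```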
| null | CC BY-SA 4.0 | null | 2023-04-03T01:28:29.990 | 2023-04-03T01:28:29.990 | null | null | 111259 | null |
611627 | 2 | null | 611170 | 1 | null | Brier score might not be the statistic of interest for a particular task. In that case, Brier score would not be appropriate for comparing a logistic regression and a random forest, but that is because Brier score simply is not the right value to calculate, rather than anything specific to how the values the Brier score evaluates are calculated or estimated.
However, if Brier score is what interests you, do calculate it. As long as the inputs to the score are appropriate (have an interpretation as probabilities, so not the log-odds output of a logistic regression or the predicted category that you can get by prediction methods in random forest software), go for it.
If there is an objection to doing this because random forests often give probability values that lack calibration and the Brier score will penalize this, that seems like a feature, not a bug, of Brier score. (Or maybe you don’t care about calibration, but then Brier score should not be your statistic of interest.)
If there is an objection to calculating the Brier score of a model because the model was not optimized well (mentioned in the comments), that seems like an admission that the model is not very good. If a model is making poor predictions (in terms of Brier score) because it was not optimized well, the key part of that to me is that the model is making poor predictions.
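For reference, the score itself is trivial to compute from predicted probabilities and 0/1 outcomes; a minimal sketch:

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared difference between predicted probabilities and
    binary outcomes; lower is better."""
    probs, outcomes = np.asarray(probs, float), np.asarray(outcomes, float)
    return np.mean((probs - outcomes) ** 2)

print(brier_score([0.9, 0.1, 0.8], [1, 0, 1]))  # ≈ 0.02
```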
| null | CC BY-SA 4.0 | null | 2023-04-03T02:24:58.330 | 2023-04-03T02:24:58.330 | null | null | 247274 | null |
611628 | 2 | null | 611566 | 1 | null | I suspect that reading the full dataset from disk at each node is inevitable. The purpose of storing a histogram-like data structure in RAM is to avoid repeated passes.
- In order to select the best split point (whether arbitrary or constrained to a fixed set of proposed values) for a given feature, we need to process the points in the order determined by values of the feature. Unfortunately, the training dataset is not stored on disk in any particular order.
- Even if the data were sorted by some feature's values, the order would be different for other features.
- Even if we could access the data in the required order for each feature, and even if such random access reads from disk were efficient, we would still do multiple traversals of the disk-stored training set, because the maximisation of the gain requires a separate traversal for each feature.
The "approximate algorithm" separates reading from disk and traversing the in-memory histogram-like structure. The speed-up would be roughly proportional to the number of features (if disk operations are slow).
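To make the idea concrete, here is a toy sketch of histogram-based split finding (my own simplification, not XGBoost's actual code): one pass over the data fills per-feature gradient histograms, after which every candidate split is scored from the histograms alone, without touching the raw data again.

```python
import numpy as np

def best_split_from_histograms(X, grad, n_bins=4):
    """One pass over the data builds per-feature histograms of gradient
    sums; split candidates are then scored from the histograms alone."""
    n, p = X.shape
    best = (None, None, -np.inf)  # (feature, threshold, score)
    for j in range(p):
        # quantile-based bin edges, as in the approximate algorithm
        edges = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
        bins = np.searchsorted(edges, X[:, j])          # single data pass
        g = np.bincount(bins, weights=grad, minlength=n_bins)
        c = np.bincount(bins, minlength=n_bins).astype(float)
        gl = cl = 0.0
        for b in range(n_bins - 1):                     # histogram-only pass
            gl += g[b]; cl += c[b]
            gr, cr = grad.sum() - gl, n - cl
            if cl == 0 or cr == 0:
                continue
            score = gl**2 / cl + gr**2 / cr             # simplified gain
            if score > best[2]:
                best = (j, edges[b], score)
    return best

# toy data: feature 1 perfectly orders the gradients, feature 0 is constant
n = 16
X = np.zeros((n, 2))
X[:, 1] = np.arange(n)
grad = np.where(np.arange(n) < 8, -1.0, 1.0)
print(best_split_from_histograms(X, grad))  # best split on feature 1 at 7.5
```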
| null | CC BY-SA 4.0 | null | 2023-04-03T02:30:33.593 | 2023-04-22T03:34:52.923 | 2023-04-22T03:34:52.923 | 254326 | 254326 | null |
611629 | 2 | null | 588998 | 1 | null | My understanding of this question is that you have a model that distinguishes between pictures of dogs, alligators, and magpies, and you want it to come back and tell you, “I dunno,” when you feed it a picture of the Empire State Building.
I see two options.
- Use the continuous outputs of your multi-class model. If you show a picture of a skyscraper to the animal classifier, I would hope for it to give approximately equal probabilities of each category.
- Train a multi-label model that can give low probabilities of all categories. If you feed a picture of a skyscraper to the animal classifier, I would hope that it would give low probabilities for every animal.
Both of these would, at least roughly, correspond to your second idea. As far as determining the threshold at which you would classify as “other” instead of one of your categories, that depends on the cost of misclassification.
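A minimal sketch of the first option (the function name and the threshold value are mine, and the right threshold depends on those misclassification costs):

```python
import numpy as np

def predict_with_other(probs, classes, threshold=0.5):
    """Flag inputs as 'other' when the classifier is not confident
    in any known class (the threshold is arbitrary; tune it to your costs)."""
    probs = np.asarray(probs)
    best = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= threshold
    return [classes[i] if ok else "other" for i, ok in zip(best, confident)]

classes = ["dog", "alligator", "magpie"]
print(predict_with_other([[0.90, 0.05, 0.05],
                          [0.34, 0.33, 0.33]], classes))
# → ['dog', 'other']
```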
The trouble with the first idea is that there is basically no limit to what would constitute the “other” category. A skyscraper is not one of those three animals, but neither is a Corvette. Do you then train the model on an “other” category that contains both skyscrapers and cars? Then what about motorcycles, airplanes, or birthday cakes? By calling all of these the same category, you are telling the model that these are the same, yet they are not.
| null | CC BY-SA 4.0 | null | 2023-04-03T02:40:43.780 | 2023-04-03T02:40:43.780 | null | null | 247274 | null |
611631 | 2 | null | 611622 | 3 | null | Just use the binomial distribution directly. Evaluating one-sided p-values is just a matter of binomial tail probabilities. If you observe $y$ successes out of $n$ trials and you want to test $H_0$: $p=p_0$ vs $H_a$: $p>p_0$ then the one-sided p-value is $P(Y \ge y)$ where $Y$ follows the Bin($n$, $p_0$) binomial distribution.
The one-sided p-value of $H_0$ vs $H_a$: $p<p_0$ would be $P(Y \le y)$.
For example, suppose $n=8$, you observe $y=7$ and $p_0=0.4$.
Then the one-sided upper-tail p-value is
```
> y <- 7
> n <- 8
> p0 <- 0.4
> pbinom(y-0.5, size=n, prob=p0, lower.tail=FALSE)
[1] 0.00851968
```
which is 0.0085.
This is an exact binomial-test p-value.
Two-sided tests of $H_0$: $p=p_0$ vs $H_a$: $p\ne p_0$ are a little more complicated because there are several competing ways to construct the rejection region when the null distribution is asymmetric.
If $p_0=0.5$, then the null distribution is symmetric and you can simply multiply the smaller of the two one-sided p-values by 2.
If the null distribution is not symmetric then you can still
multiply the smallest one-sided p-value by 2 to get a valid two-sided p-value, an approach that is simple and easy but not the most popular as it is somewhat conservative.
The method that is most closely analogous to Fisher's exact test is to take the sum of all the binomial probabilities that are less than or equal to $P(Y=y)$ given that $p=p_0$.
My function `exactTest` implements four different ways to compute the two-sided p-value (see [https://rdrr.io/bioc/edgeR/man/exactTest.html](https://rdrr.io/bioc/edgeR/man/exactTest.html)).
The method of doubling the smallest one-sided p-value is called "doubletail" and the method of adding up the smallest probabilities is called "smallp".
Each method corresponds to a different rejection region.
If $p_0=0.5$, so the null distribution is symmetric, then all four methods are the same.
My function assumes negative binomial counts but reduces to binomial tests when `dispersion=0`.
The binomial test is also implemented in the `binom.test` function in R.
It uses the "smallp" method for two-sided probabilities, which is the same method used by Fisher's exact test to construct a two-sided rejection region.
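To make the two constructions concrete, here is a small Python sketch (function names are mine) for the example above. For $n=8$, $y=7$, $p_0=0.4$ the "smallp" two-sided p-value happens to equal the one-sided 0.0085, because no left-tail outcome has probability as small as $P(Y=7)$, while "doubletail" gives twice that.

```python
from math import comb

def pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def p_smallp(y, n, p0):
    # sum every outcome probability no larger than that of the observed y
    probs = [pmf(k, n, p0) for k in range(n + 1)]
    # the small relative tolerance guards against floating-point ties
    return sum(p for p in probs if p <= probs[y] * (1 + 1e-7))

def p_doubletail(y, n, p0):
    # double the smaller of the two one-sided tail probabilities
    lower = sum(pmf(k, n, p0) for k in range(y + 1))
    upper = sum(pmf(k, n, p0) for k in range(y, n + 1))
    return min(1.0, 2 * min(lower, upper))

print(round(p_smallp(7, 8, 0.4), 6), round(p_doubletail(7, 8, 0.4), 6))
# → 0.00852 0.017039
```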
There have been many related answers on this forum:
- Exact Binomial Test p-values? (explains the "smallp" method)
- How to properly calculate the exact p-value for binomial hypothesis testing (e.g. in Excel)?
- p-values different for binomial test vs. proportions test. Which to report?
- How do you calculate an exact two-tailed P-value using binomial distribution? (AdamO asserts the "doubletail" method. whuber asserts there are at least five different reasonable ways to define the two-sided rejection region.)
- Why are p-values from the binomial test in R non-monotonic in trials?
- Understanding 2-sided p-values in a binomial distribution
Finally, note that the binomial test is exactly equivalent to a re-randomization test. If you conduct a one-sided randomization test infinitely many times, you'll just get the one-sided binomial p-values.
| null | CC BY-SA 4.0 | null | 2023-04-03T03:48:39.463 | 2023-04-04T04:05:49.453 | 2023-04-04T04:05:49.453 | 129321 | 129321 | null |
611632 | 2 | null | 610597 | 0 | null | >
One concern is, since users' span of activity do not universally cover the observed periods that the assumption of parallel trends is violated - or would that only be the case if the span of activity differs between the groups?
Neither is necessarily true.
I would imagine coverage is limited at the edges of your panel, but I can't verify this without more information. In a perfect world, you want lots of observations around the intervention months. In many studies, it's common to observe wider confidence intervals on the period-specific effects as we progress through time. Oftentimes this is because users may attrit as we move farther and farther away from the immediate intervention month.
I'm still curious about a few things. How many users subscribe late? What proportion have no pre-intervention usage data? Must a subscriber show some usage before they receive the intervention? These questions aren't being posed to engender confusion. Rather, you should start thinking about how much coverage is going to be a concern in your study. That being said, in a setting with over 20,000 users, I wouldn't worry about it too much.
>
Another concern (but from my understanding it should be handled by the fixed effects approach) is that a user's group assignment is to some degree influenced by BaseUsageScore.
The individual fixed effects already adjust for users' base usage scores.
>
Should I be trying to control for this disparity outside of the model above?
The individual fixed effects adjust for all time-constant attributes of the individual users. This allows for the selection of users into treatment on the basis of time-invariant characteristics (e.g., race, gender, height, etc.). Age is a constant change regressor and won't offer you much traction, especially with less than two years of data. You also won't be able to hold apart age from any other factor that is changing at a constant rate as well, a point already made quite elegantly [here](https://stats.stackexchange.com/questions/149288/variable-with-constant-difference-in-time-fixed-effect-estimator). On the other hand, grouping users (e.g., age cohorts) and estimating heterogeneous treatment effects by age may be worthwhile, but outside the scope of your question.
And note that even though you're "adjusting for" stable user demographics (even those you haven't thought of), this does not absolve you from demonstrating parallel outcome paths before the intervention. Though average "level" differences in any one month are allowable, you still must convincingly argue that the level differences in base scores do not affect trends over time. Put differently, while average usage between the treatment and control group may be higher or lower in any one month, their evolution through time (i.e., month-over-month) is what really matters. The group trajectories over time should be reasonably similar.
| null | CC BY-SA 4.0 | null | 2023-04-03T04:40:02.850 | 2023-04-03T04:40:02.850 | null | null | 246835 | null |
611633 | 2 | null | 611622 | 2 | null | Rather than doing a hypothesis test, it may be more informative to derive a confidence interval for the probability parameter using the [Wilson score interval](https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval#Wilson_score_interval). You can do this using the `CONF.prop` function in the [stat.extend package](https://CRAN.R-project.org/package=stat.extend). Here is an example where we generate a 95% confidence interval a small amount of data:
```
stat.extend::CONF.prop(alpha = 0.05, n = 10, sample.prop = 4/10)
Confidence Interval (CI)
95.00% CI for proportion parameter for infinite population
Interval uses 10 binary data points with sample proportion = 0.4000
[0.168180329706236, 0.687326230266342]
```
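If you are not working in R, the interval is also easy to compute directly from the Wilson score formula; a short Python sketch (function name is mine) reproduces the bounds above:

```python
from math import sqrt
from statistics import NormalDist

def wilson_ci(p_hat, n, alpha=0.05):
    # Wilson score interval for a binomial proportion
    z = NormalDist().inv_cdf(1 - alpha / 2)
    center = p_hat + z * z / (2 * n)
    half = z * sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (center - half) / denom, (center + half) / denom

lo, hi = wilson_ci(4 / 10, 10)
print(round(lo, 4), round(hi, 4))  # → 0.1682 0.6873
```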
| null | CC BY-SA 4.0 | null | 2023-04-03T04:48:33.833 | 2023-04-03T04:48:33.833 | null | null | 173082 | null |
611634 | 1 | null | null | 0 | 32 | Assume I have some time series data $Y=[y_1, y_2, y_3, ...]$ that for convenience we'll assume is evenly spaced in time and one dimensional. Note that we have no reason to believe $Y$ is particularly seasonal or follows some easily graspable model. There are tags on the data that provide context (e.g. "set thermostat to 80" shortly before a rise in temperature) but these tags are not really parseable algorithmically.
We assume some window of interest with start location $a$ and size $\delta_i$: $$Z_{a}=[y_a,y_{a+1},...y_{a+\delta_i}]$$
I'm interested in the other possible locations $Z_{b}$ within $Y$ that most resemble $Z_{a}$, subject to some arbitrary definition of "resemble". We'll calculate this resemblance with a function $D(Z_a,Z_b)$. $D$ could be a Euclidean distance or something fancier, but we can assume that it meets the standard axioms for a distance metric (positivity, symmetry, triangle inequality, etc.).
The ideal output is a list of the $m$ best values of $b$ that optimize $D$ (of which $a \approx b$ is likely to top the list), and thus the $m$ places in the time series that look the most like our original window of interest. The goal is to then look at the tags around those similar windows to glean an understanding of the tags (e.g. "Someone messing with the thermostat precedes every time the temperature sits at 80 for 12 hours." as a trivial example.)
- Is there a name for this kind of analysis?
- The obvious brute force approach is to compute $D$ for all possible start locations and then pick the best; this is assumedly roughly $O(n \delta_i)$, so it could be worse, but are there any clever tricks one could implement as the dataset gets larger?
- How should one handle overlap between similar but adjacent windows? That is, knowing that the three best windows start at $a$, $a+1$, and $a-1$ is not particularly helpful and may hide a previous similarity that is a still good but less exact match and occurs far away from $a$.
- I should note that we're not interested in predicting the future values of the time-series; that's an entirely separate and much larger problem.
One thought I had related to 2 and 3 was to do the brute force analysis and thus obtain the 1D vector of all distances $\boldsymbol{D}$. The relevant locations are then the minima of $\boldsymbol{D}$ and can also be seen visually in a plot of $\boldsymbol{D}$.
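To make the brute-force idea and the overlap handling concrete, here is a rough Python sketch of what I have in mind. The exclusion-zone heuristic (suppressing any candidate start within half a window length of an already-chosen match) is just an assumption for illustration, not a settled choice:

```python
import numpy as np

def top_matches(y, a, delta, m=3, excl=None):
    """Brute-force search for the m window starts b whose window
    Z_b = y[b:b+delta] is closest (Euclidean D) to Z_a = y[a:a+delta],
    suppressing starts too close to an already-chosen match."""
    if excl is None:
        excl = delta // 2  # heuristic exclusion radius: half a window length
    target = y[a:a + delta]
    starts = range(len(y) - delta + 1)
    dists = np.array([np.linalg.norm(y[b:b + delta] - target) for b in starts])
    chosen = []
    for b in np.argsort(dists):  # best (smallest distance) first
        if all(abs(b - c) > excl for c in chosen):
            chosen.append(int(b))
        if len(chosen) == m:
            break
    return chosen

# Toy data: a noisy sine wave, so each window recurs roughly once per period
rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.05 * rng.standard_normal(2000)
best = top_matches(y, a=100, delta=50, m=3)  # best[0] is a itself; the rest lie ~one period apart
```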
| Find states similar to present in past time-series data | CC BY-SA 4.0 | null | 2023-04-03T05:08:45.600 | 2023-04-03T05:08:45.600 | null | null | 367164 | [
"time-series",
"optimization",
"feature-selection"
] |
611635 | 2 | null | 588998 | 2 | null | This is an important and surprisingly difficult question. Two of the reasons why it's hard:
- One way that classifiers work is by constructing directions in feature space that separate the classes you are interested in. If you train it on dogs and horses, it might learn that horses are (a) bigger and (b) tend to be less hairy. Along that direction in feature space you find buses. A bus is even bigger and less hairy than a horse, so it is super-horse-like and your classifier will be extremely confident that it's a horse.
- Another way that classifiers work is by taking the test data and finding 'nearby' points in the training set. You might say that a test point should be classified as 'other' if there are no nearby points. This works when you have low-dimensional data. For high-dimensional data, though, there are no points that are 'nearby' on all variables. If you have a 'nearby' decision rule, it has to work by ignoring/downweighting many of the variables, so a new test point can appear 'nearby' even if it's very different on those ignored/downweighted variables.
As further evidence that it's a hard problem, people overgeneralise learned decision rules. They don't mistake a skyscraper for a dog, but they do (for example) learn the features that distinguish edible and inedible mushrooms in their home region and then move somewhere else and tragically fail to recognise that the distinguishing features are different in their new home.
| null | CC BY-SA 4.0 | null | 2023-04-03T06:37:37.187 | 2023-04-03T06:37:37.187 | null | null | 249135 | null |
611636 | 2 | null | 611106 | 1 | null | No, even when we assume a single data generating process and even when it's a very simple data generating process, it's quite possible for predictive models to be very different from models estimating the effect of a specific variable.
Suppose we have a very simple data generating process
```
x~Bernoulli(0.5)
z<- x+N(0,1)
y<- z+N(0,1)
```
where all the arrows represent truly causal relationships. Under this data generating process, `y` is independent of `x` given `z`, so the best predictive model is `y~z`. If we put `x` in this model it would have a coefficient of 0. `x` is not predictive.
However, if we want to estimate the effect of `x` on `y`, this effect is not zero. Individuals with `x==1` have higher `y` than those with `x==0`, by 1 unit on average, and this is causal. We need to fit the model `y~x` and not the model `y~z` or `y~x+z` to estimate the effect of `x` on `y`.
If you constructed a model describing the full data-generating process, you could derive both the best predictive model `y~z` and the best model for the effect of `x`, `y~x` from it. These models are different, and they would be different whether or not you knew the exact data-generating process.
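A quick simulation of this data generating process (a Python/numpy sketch, with OLS fit via least squares) makes the point concrete: the coefficient on `x` in `y ~ x + z` comes out near 0, while the coefficient in `y ~ x` comes out near 1, the causal effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x = rng.binomial(1, 0.5, n).astype(float)  # x ~ Bernoulli(0.5)
z = x + rng.standard_normal(n)             # z <- x + N(0,1)
y = z + rng.standard_normal(n)             # y <- z + N(0,1)

def slopes(y, *cols):
    """OLS with intercept; returns the slope coefficients."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

coef_x_given_z = slopes(y, x, z)[0]  # x coefficient in y ~ x + z: near 0
coef_x_alone = slopes(y, x)[0]       # x coefficient in y ~ x: near 1
```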
| null | CC BY-SA 4.0 | null | 2023-04-03T06:53:23.197 | 2023-04-03T07:06:57.427 | 2023-04-03T07:06:57.427 | 249135 | 249135 | null |
611637 | 2 | null | 273902 | 2 | null | This comes a bit late, but I've just spent some time on this question, so here's a summary of the way the reasoning goes, based on the previously proposed answers:
- the sum of $n$ i.i.d. exponentially distributed variables is distributed as
$f_0 = \delta(x)$, $f_n = \frac{\alpha^n x^{n-1} e^{-\alpha x}}{(n-1)!}$ (in which case it is a gamma distribution).
- then the probability of there being $n$ contributions in the total sum is given by the Poisson distribution of $n$, $w_n = e^{-\lambda} \frac{\lambda^n}{n!}$.
So, making a mix of the original post and parts of the different previous answers, we can make up an expression for the total pdf as:
$$
f(x) = e^{-\lambda} \left[ \delta(x) + \sum_{n=1}^\infty \frac{\lambda^n}{n!} \frac{\alpha^n x^{n-1} e^{-\alpha x}}{(n-1)!}\right],
$$
or, shifting the indices around:
$$
f(x) = e^{-\lambda} \left[ \delta(x) + e^{-\alpha x} \lambda \alpha \sum_{n=0}^\infty \frac{(\lambda \alpha x)^n}{(n+1)! n!}\right].
$$
Where I contribute is by referencing Eq. 9.6.10 of Abramowitz & Stegun, which gives an expression for the "hard series" mentioned by the OP in terms of modified Bessel functions as
$$
I_\nu(z) = (z/2)^\nu \sum_{k=0}^\infty \frac{(z^2/4)^k}{k! \Gamma(\nu+k+1)},
$$
from which we finally get
$$
f(x) = e^{-\lambda} \left[ \delta(x) + e^{-\alpha x} \sqrt{\frac{\alpha \lambda}{x}} \mathrm{I_1} (2 \sqrt{\alpha \lambda x})\right].
$$
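As a numerical sanity check on the final expression (a Python sketch, assuming scipy is available for $I_1$ and quadrature): the continuous part should integrate to $1 - e^{-\lambda}$, so together with the point mass $e^{-\lambda}$ at zero the total probability is 1.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i1

lam, alpha = 2.0, 1.5  # arbitrary example parameters

def f_cont(x):
    # Continuous part of f(x): exp(-lam - alpha*x) * sqrt(alpha*lam/x) * I_1(2*sqrt(alpha*lam*x))
    return np.exp(-lam - alpha * x) * np.sqrt(alpha * lam / x) * i1(2.0 * np.sqrt(alpha * lam * x))

mass_cont, _ = quad(f_cont, 0, np.inf)
total_mass = np.exp(-lam) + mass_cont  # point mass at zero plus continuous part; should be 1
```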
| null | CC BY-SA 4.0 | null | 2023-04-03T06:54:59.247 | 2023-04-03T06:54:59.247 | null | null | 384801 | null |
611638 | 1 | null | null | 1 | 28 | I am looking at a fertility dataset and have the following variables:
- kids is 1 if a mom had more than 2 kids, 0 otherwise.
- multi2nd is 1 if the 2nd and 3rd children are twins, 0 otherwise.
- age is the age of mom.
- agebirth is the age of mom at time of the birth of her first child.
- blackm is 1 if mom is black, 0 otherwise.
I then regressed (OLS) `kids` against `multi2nd`, `age`, `agebirth`, `blackm`. The coefficient I got for `multi2nd` was around 0.7. However, I am wondering why this isn't 1, since if we have `multi2nd` for an individual, wouldn't it necessarily imply `kids`=1?
| Why does regression of an outcome variable against a binary predictor variable where 1 implies the outcome, result in coefficients less than 1? | CC BY-SA 4.0 | null | 2023-04-03T07:23:54.273 | 2023-04-03T09:19:07.037 | null | null | 108150 | [
"regression"
] |
611639 | 1 | null | null | 0 | 49 | I am having trouble resolving what is happening in my large dataset. The issue is illustrated in this simple example data:
```
library(performance)
library(lme4)
data("iris")
test1 <- lmer(Sepal.Width ~ Petal.Width + (1|Species), data = iris, REML = FALSE)
test1
performance(test1)
performance_cv(test1, method = "k_fold", k = 5, stack = FALSE)
```
Results of the last two lines:
```
> performance(test1)
# Indices of model performance
AIC | AICc | BIC | R2 (cond.) | R2 (marg.) | ICC | RMSE | Sigma
---------------------------------------------------------------------------
91.941 | 92.217 | 103.983 | 0.910 | 0.318 | 0.869 | 0.297 | 0.300
> performance_cv(test1, method = "k_fold", k = 5, stack = FALSE)
# Cross-validation performance (5-fold method)
MSE | MSE_SE | RMSE | RMSE_SE | R2 | R2_SE
----------------------------------------------
0.095 | 0.036 | 0.3 | 0.059 | 0.46 | 0.24
```
`performance()` on mixed models fit with `lmer()` defaults to using the Nakagawa R2, where we get a conditional R2 (for both random and fixed effects) and a marginal R2 (for fixed effects only). All good; as I understand it, the Nakagawa R2 is heavily preferred for most mixed modelling, please correct me if I'm wrong.
In my own data, I am getting some strange (to me) results in the cross validation. Here we see the `"CV" R2 (0.46)`. I want to be sure I am interpreting this properly and that `performance_cv` is also using the Nakagawa methodology.
I have read that cross validation is only "valid" on the fixed effects. Is this true and why?
Therefore, the `"CV" R2 (0.46)` presented should be compared to the `R2 (marg.) 0.318`?
What should I be looking out for if the `"CV" R2` is much bigger or much smaller than the original models `R2`, with regard to over-fitting or under-fitting models?
`performance_cv()` is coded [here](https://github.com/easystats/performance/blob/7ca35e3001f091d468f9203a84e5ae5ee02c5131/R/performance_cv.R), however I am no expert when it comes to how these functions inherit classes (specifically `lmer()`), so I could be wrong here, but I can't see if it is using Nakagawa methods (it doesn't appear to be, to me), and/or am unsure if that matters, so any help there is greatly appreciated.
| K-fold cross validation using Nakagawa-R2 for lmer() mixed model issues | CC BY-SA 4.0 | null | 2023-04-03T07:23:56.373 | 2023-04-03T07:23:56.373 | null | null | 331950 | [
"r",
"cross-validation",
"lme4-nlme",
"r-squared"
] |
611640 | 2 | null | 611106 | 0 | null | First of all, I would argue that for making predictions in machine learning, in many cases you don't really need to care about the data-generating process. If you are using something like $k$NN regression, the only thing that you're doing is predicting the mean of the most similar datapoints in the training data; it doesn't make any assumptions about the data-generating process. The same applies to most other machine learning methods.
Moreover, keep in mind that the idea of a data-generating process is just an abstraction helping us to formalize the statistical model for the particular data. There is no "the" data-generating process. No data was "generated" from something like Gaussian (or any other) distribution because there is no such thing in nature as Gaussian distribution, it's a mathematical concept.
For the purpose of inference, we want our model to be simple, so it is easy to interpret. For making predictions, we want it to be accurate, possibly at the price of being less interpretable. Neither of the models is single correct, they are [both wrong, but each is useful in its own way](https://stats.stackexchange.com/questions/57407/what-is-the-meaning-of-all-models-are-wrong-but-some-are-useful).
| null | CC BY-SA 4.0 | null | 2023-04-03T07:50:02.127 | 2023-04-03T07:50:02.127 | null | null | 35989 | null |
611641 | 1 | null | null | 1 | 32 | If I am testing whether there are any significant differences in the number of damaged apples between apple types, what kind of statistical test should I use?
There are 5 types of apples with different sample sizes.
|Apple Type |Number of observed apples |Number of damaged apples |
|----------|-------------------------|------------------------|
|Granny Smith |2500 |233 |
|Fuji |2000 |135 |
|Golden Delicious |1500 |68 |
|Honey Crisp |1200 |48 |
|Gala |950 |14 |
*The data in the table has not actually been researched. It was created just as an example.
I would really appreciate it if somebody could tell me what kind of test I should use and why.
| Question about testing significance | CC BY-SA 4.0 | null | 2023-04-03T08:46:52.133 | 2023-04-03T09:29:52.683 | null | null | 384744 | [
"statistical-significance"
] |
611643 | 1 | null | null | 1 | 26 | The Capital Asset Pricing model proposes that,
$$
R_i=R_f+\beta(R_m-R_f)
$$
where $R_i$ is the return of the i-th asset, $R_f$ is the risk-free rate and $R_m$ is the Market returns. $\beta$ is generally estimated using simple linear regression, but the covariates are random variables which violates the assumptions of regression.
| If returns are random variables, then how can CAPM be posed as a simple linear regression? | CC BY-SA 4.0 | null | 2023-04-03T09:00:39.770 | 2023-04-03T09:17:16.660 | null | null | 355078 | [
"regression",
"finance"
] |
611644 | 1 | null | null | 0 | 8 | I embedded two IATs into one Qualtrics survey. Once I had recruited all participants, I separated the data into two CSV files and tried to calculate the D scores from the two individual CSV files using the IATGEN analysis tool. However, it only calculated the D score for the second IAT, but not the first. I haven't altered the data in any way; I just deleted the data from the second IAT. Can anyone help with suggestions as to what I might be doing wrong? I have used two IATs in one survey before and successfully calculated both sets of D scores using the same method.
| How do I analyse D score from 2 IATs embedded within a qualtrics survey using IATGEN | CC BY-SA 4.0 | null | 2023-04-03T09:14:42.013 | 2023-04-03T09:14:42.013 | null | null | 211553 | [
"dataset"
] |
611645 | 2 | null | 611643 | 1 | null | First, the CAPM does not state that. It states that (using your notation)
$$
\mathbb{E}(R_i)=R_f+\beta(\mathbb{E}(R_m)-R_f).
$$
Second,
$$
\beta:=\frac{ \text{Cov}(R_i,R_m) }{ \text{Var}(R_m) }
$$
is well defined when $R_i$ and $R_m$ are random variables, as long as $\text{Var}(R_m)$ exists. Also, having a random covariate is not a violation of regression assumptions in general. There is no single set of regression assumptions. There are different sets for obtaining different properties of different estimators. [This](https://stats.stackexchange.com/questions/16381) is a related thread that goes into more detail.
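In fact, the OLS slope in a regression of $R_i$ on $R_m$ (with an intercept) is exactly the sample analogue of that ratio. A small numpy sketch with simulated, entirely hypothetical returns:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000
r_m = 0.01 + 0.04 * rng.standard_normal(n)               # simulated market returns
r_i = 0.002 + 1.3 * r_m + 0.02 * rng.standard_normal(n)  # asset with true beta 1.3

# Sample analogue of beta = Cov(R_i, R_m) / Var(R_m)
beta_ratio = np.cov(r_i, r_m, ddof=1)[0, 1] / np.var(r_m, ddof=1)

# OLS slope of R_i on R_m (with intercept) -- identical to the ratio above
X = np.column_stack([np.ones(n), r_m])
beta_ols = np.linalg.lstsq(X, r_i, rcond=None)[0][1]
```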
| null | CC BY-SA 4.0 | null | 2023-04-03T09:17:16.660 | 2023-04-03T09:17:16.660 | null | null | 53690 | null |
611646 | 2 | null | 611638 | 1 | null | I kind of get what you mean: if the second and third children are twins, then the mother must have had more than two children. However, the mother also has nonzero values for the other features. She is not zero years old. She was not zero years old when she gave birth. She might be Black. Since all of these features could contribute to the outcome when the mother did not have twins, their coefficients are probably nonzero. Consequently, if the “twins” coefficient were one, then turning on the “twins” variable (setting it to one) would make the model overestimate the outcome. Thus, the coefficient estimation lowers the “twins” coefficient, allowing for more accurate predictions.
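A small simulation illustrates the mechanism (a Python/numpy sketch under made-up assumptions: kids is certain given twins, and otherwise becomes more likely with the mother's age). Even though twins logically imply `kids = 1`, the OLS coefficient on the twins dummy lands strictly below 1, because the age coefficient already accounts for part of the predicted probability:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
age = rng.uniform(20, 45, n)                       # mother's age
multi2nd = rng.binomial(1, 0.02, n).astype(float)  # rare twins indicator
# kids = 1 whenever there are twins; otherwise more likely for older mothers
p_kids = 0.01 * (age - 20)
kids = np.where(multi2nd == 1, 1.0, rng.binomial(1, p_kids))

X = np.column_stack([np.ones(n), multi2nd, age])
beta, *_ = np.linalg.lstsq(X, kids, rcond=None)
coef_twins = beta[1]  # strictly between 0 and 1, not 1
```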
| null | CC BY-SA 4.0 | null | 2023-04-03T09:19:07.037 | 2023-04-03T09:19:07.037 | null | null | 247274 | null |
611647 | 1 | null | null | 0 | 10 | I'm working on a fraud dataset that comes with a column for ID. I can remove this column manually, but plotting its distribution made me wonder if there's an automatic way to remove a column like this. See a histogram of the data below:
[](https://i.stack.imgur.com/J6Tfm.png)
I feel like you could find these completely unique, or nearly completely unique columns by checking the variance. Is this standard practice or are there better ways to automatically remove columns like ID?
| How do you automatically check for feature columns like ID? | CC BY-SA 4.0 | null | 2023-04-03T09:28:12.957 | 2023-04-03T09:28:12.957 | null | null | 363857 | [
"variance",
"feature-selection",
"filter"
] |
611648 | 2 | null | 611641 | 0 | null | If I understand your question correctly, you would like to test whether the number of damaged apples depends on the apple type or whether all apple types produce the same proportion of damaged apples. If we suppose that the number of damaged apples is independent of the apple type and that you have the same number of apples of each type, then each damaged apple has a chance of $1/( number \ of \ apple \ types ) = \frac{1}{5}$ of being in one of the categories Granny Smith, Fuji, Golden Delicious, Honey Crisp and Gala. Let us denote these categories $1$, $2$, $3$, $4$ and $5$ for respectively Granny Smith, Fuji, Golden Delicious, Honey Crisp and Gala for simplicity. From here, you can state the null hypothesis for your problem:
- $H_0$ : for a damaged apple you have a probability of $\frac{1}{5}$ that the apple belongs to the $i$-th apple type,
while your alternative hypothesis is
- $H_1$ : for a damaged apple you have a probability different of $\frac{1}{5}$ that the apple belongs to the $i$-th apple type.
Now you have to compute the empirical probability that a damaged apple belongs to each of the apple types, and you calculate it from your data. Because we supposed at the beginning that we have the same number of apples of each type at our disposal, we can do a 'scaling' operation on your data to simulate this as follows:
- Number of damaged apple of category 1: 233/2500*1000 = 93.2,
- Number of damaged apple of category 2: 135/2000*1000 = 67.5,
- Number of damaged apple of category 3: 68/1500*1000 = 45.3,
- Number of damaged apple of category 4: 48/1200*1000 = 40.0,
- Number of damaged apple of category 5: 14/950*1000 = 14.7.
So, in this case, the total number of damaged apples is $93.2+67.5+45.3+40+14.7 = 260.7$. Then we have as empirical probability for each apple types:
- $\hat{p}_1 = 93.2/260.7 = 0.358$ for type 1,
- $\hat{p}_2 = 67.5/260.7 = 0.259$ for type 2,
- $\hat{p}_3 = 45.3/260.7 = 0.174$ for type 3,
- $\hat{p}_4 = 40.0/260.7 = 0.153$ for type 4,
- $\hat{p}_5 = 14.7/260.7 = 0.056$ for type 5,
and we will now conduct a chi-squared goodness-of-fit test to determine whether the probabilities $\hat{p}_1,\hat{p}_2,\hat{p}_3,\hat{p}_4$ and $\hat{p}_5$ differ from the hypothesized probabilities $p_1,p_2,p_3,p_4$ and $p_5$, all equal to $\frac{1}{5}$.
The corresponding R-code to lead this test is given below.
```
observed_damaged_apples <- c(93.2,67.5,45.3,40.0,14.7)
expect_prob = c(1/5,1/5,1/5,1/5,1/5)
test <- chisq.test(observed_damaged_apples, p = expect_prob)
```
and the result is
```
Chi-squared test for given probabilities
data: observed_damaged_apples
X-squared = 67.468, df = 4, p-value = 7.768e-14
```
As you can see, the p-value is near $0$, and if you are testing at a significance level of $5\%$, the null hypothesis is rejected.
| null | CC BY-SA 4.0 | null | 2023-04-03T09:29:52.683 | 2023-04-03T09:29:52.683 | null | null | 383929 | null |
611649 | 2 | null | 76602 | 0 | null | If you calculate $R^2$ as the squared Pearson correlation between observed ($y$) and predicted outcomes ($\hat y$), $R^2=\left(
\text{corr}\left(
y, \hat y
\right)
\right)^2$, there are some major issues. In particular, $\left(\text{corr}\left(
y, a+b\hat y
\right)
\right)^2=\left(\text{corr}\left(
y, \hat y
\right)
\right)^2$ for $a\in\mathbb R$ and $b\ne 0$, meaning that you can shift the predictions up and down with $a$ or scale them with $b$, and this calculation will not catch that. As an example, $y=(1,2,3)$ and $\hat y=(-105,-205,-305)$ have a perfect squared correlation of $1$ between them, yet the predictions $\hat y$ of $y$ are clearly terrible. Consequently, this probably is not the calculation you want to use.
An alternative calculation, equal to the squared Pearson correlation in OLS linear regression, is below.
$$
R^2=1-\left(\dfrac{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\bar y
\right)^2
}\right)
$$
The numerator is equal to $n\times(RMSE)^2$, so this is just a monotonic (decreasing) function of the RMSE…
…if the denominator is constant, which is unlikely when you are doing cross validation. Therefore, this $R^2$ involves more than just the performance of your model, introducing a complexity that is difficult to resolve (especially when you consider that [it is not clear what calculation to perform when it comes to holdout data](https://stats.stackexchange.com/a/609612/247274)). Thus, if you are getting acceptable $RMSE$ values, especially on holdout data, that should be a signal that your model is performing the way you want it to, regardless of what is happening with the ambiguous $R^2$ value you calculate.
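The toy example above is easy to verify numerically (a quick Python/numpy sketch): the squared correlation between $y$ and $\hat y$ is a perfect 1, while the sum-of-squares version of $R^2$ is enormously negative, flagging the terrible predictions.

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0])
y_hat = np.array([-105.0, -205.0, -305.0])  # terrible predictions, perfectly (anti-)correlated with y

r2_corr = np.corrcoef(y, y_hat)[0, 1] ** 2                          # squared Pearson correlation
r2_ss = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)  # sum-of-squares R^2
print(r2_corr, r2_ss)  # approximately 1.0 and -74473.5
```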
| null | CC BY-SA 4.0 | null | 2023-04-03T09:41:46.900 | 2023-04-03T09:41:46.900 | null | null | 247274 | null |
611650 | 1 | null | null | 6 | 143 | Disclaimer: I am not a statistician, but a computer scientist working a lot with probability distributions. I am asking this question to help me understand how statistical frameworks and terminology can be applied formally to the reasoning below.
Consider the following example, e.g. borrowed from a [tweet](https://twitter.com/ESYudkowsky/status/1455930656991559681) of E. Yudkowski:
>
Suppose you think you're 80% likely to have left your laptop power adapter somewhere inside a case with 4 otherwise-identical compartments. You check 3 compartments without finding your adapter. What's the probability that the adapter is inside the remaining compartment?
There are two intuitive answers to this question:
(A) 80%. If I was 80% sure the adapter is in the case before, and I rule out 3 compartments, then the entire 80% suspicion now falls to the fourth;
(B) 50%. Compartment 4 and "compartment 5" (everywhere else) are equally likely.
It is straightforward to formally obtain answer (B) using conditional probability. Let the binary random variable $X$ indicate whether the adapter is in the case or not, $P(X=1)=0.8$. If $X=1$, choose $Y$ uniformly in $\{1,\ldots,4\}$, otherwise let $Y=5$. Define the indicator variable $I = [Y>3]$. Then
$$P(Y=4|I=1) = 0.5$$
I found it much harder to give a mathematical explanation for answer (A), i.e. the 80%. My question is: How to formalize that line of reasoning?
Here is one way to do it: If we "fix" the value $X=x$, then the conditional probabilities agree with the conclusions of reasoner A: $P(Y=4|I=1,X=1) = 1$ and $P(Y=5|I=1,X=0)=1$. Now, if we marginalize out $X$, we obtain as desired
$$\sum_{x \in \lbrace 0,1\rbrace} P(Y=4|I=1, X=x)P(X=x) = 0.8$$
Is there a name or notation for this mode of inference? It reminds me of [Jeffrey's update](https://arxiv.org/pdf/2112.14045.pdf) where a variable is similarly "fixed" during inference. I'm not very familiar with causal inference, but is it possible to describe reasoning (A) in those terms, e.g. do-calculus? I'm happy about any pointers or bits of terminology. Thanks
| How to formalize the following intutive reasoning (Bayes' rule)? | CC BY-SA 4.0 | null | 2023-04-03T09:42:15.970 | 2023-04-14T02:50:43.740 | 2023-04-14T02:50:43.740 | 173082 | 384805 | [
"self-study",
"references",
"causality"
] |
611651 | 2 | null | 50550 | 0 | null | If recall is $1$, then the model is catching all of the positive cases. $P(\hat y=1\vert y=1)=1$.
If precision is $1$, then every case the model flags as positive actually belongs to the positive category. $P(y=1\vert \hat y=1)=1$.
If both of these equal one, then your model is catching all of the positive cases without classifying a negative case as positive. This means that all positive and negative cases are classified correctly: $100\%$ accuracy.
Perfect accuracy is an extremely high standard that might be possible on easy data, but this suggests overfitting to me. Such performance is just a little bit too good. Even MNIST classifiers make the occasional mistake.
| null | CC BY-SA 4.0 | null | 2023-04-03T09:47:44.360 | 2023-04-03T09:47:44.360 | null | null | 247274 | null |
611652 | 2 | null | 199717 | 0 | null | Sure, I suppose that you can just throw a huge number of possible features at the model and penalize model complexity using regularization. This seems like a variant of the [MARS/EARTH](https://en.wikipedia.org/wiki/Multivariate_adaptive_regression_spline) approach that also throws a huge number of possible nonlinear basis functions at the model and figures out which stick.
A big drawback to this is that feature selection is notoriously unstable, and if you do some kind of cross validation or bootstrap, you are likely to find different features being selected, depending on the exact data. What, then, would be the “correct” features for modeling?
The typical suggested resource for learning how to include the kind of flexibility you desire is the textbook [Regression Modeling Strategies by Frank Harrell](https://rads.stackoverflow.com/amzn/click/com/3319194240) of Vanderbilt University. The gist is that Harrell, who is an active contributor to Cross Validated whose [profile](https://stats.stackexchange.com/users/4253/frank-harrell) you can search for posts on related topics, advocates for flexible models using splines that discover the nonlinear relationships, rather than fitting a particular functional form like a logarithm or a square root.
| null | CC BY-SA 4.0 | null | 2023-04-03T10:06:48.073 | 2023-04-03T10:06:48.073 | null | null | 247274 | null |
611653 | 1 | null | null | 1 | 56 | I have a question about the implementation of Cox Time Varying Regression model, that I need to perform to understand the impact of co-variates on my survival prediction.
I found an example here ([https://lifelines.readthedocs.io/en/latest/Time%20varying%20survival%20regression.html](https://lifelines.readthedocs.io/en/latest/Time%20varying%20survival%20regression.html)) but I still have a doubt.
My dataset contains some patients (so rows of the dataframe with the same ID) that are not studied continuously during the observation time, showing some "gaps"; so, for example, I have info like that:
- ID: 1 , start: 0 , stop: 50 , event : 0
- ID: 1 , start: 60 , stop: 80 , event : 0
- ID: 1 , start: 80 , stop: 100 , event : 1
So, as you can see, the first two time intervals are not consecutive but have a gap (from 50 to 60). Do you know whether I can include these patients in my analysis or I have to remove them? I have this doubt because I've never seen this "situation" in the examples I found online.
Thank you in advance.
| Cox Time Varying Regression model - python lifelines - start and stop variables | CC BY-SA 4.0 | null | 2023-04-03T10:14:09.560 | 2023-04-03T12:34:33.593 | null | null | 384810 | [
"survival",
"cox-model",
"time-varying-covariate",
"lifelines"
] |
611654 | 1 | null | null | 0 | 11 | I am trying to understand why my data is not showing a full S-curve. Is it because the predictor does not do a good job of predicting fellow = 1, or simply because few observations with fellow = 1 score within the top end of c_ns2 (the x-axis variable)?
[](https://i.stack.imgur.com/QWSi1.png)
| logistic regression graph - understanding data | CC BY-SA 4.0 | null | 2023-04-03T10:24:22.263 | 2023-04-03T10:24:22.263 | null | null | 383343 | [
"regression",
"logistic",
"sigmoid-curve"
] |
611655 | 1 | null | null | 0 | 58 | I have a question regarding 2SLS and interaction-terms between the predicted endogenous variable and a set of exogenous variables (second stage).
I want to calculate the effect of an endogenous variable (X) on the dependent variable (Y) in interaction with a set of exogenous variables (X2 + X3 + X4). Normally, without an interaction, I would use an exogenous instrumental variable (Z) that explains a portion of the variance in X and isolate this variance using a two-stage least squares model (2SLS). Can I still do this if I want to include an interaction term in the second-stage model? AND: What would the code look like in R? Because I am not sure if I have to include the interaction also on the left side of the calculation, or only in the second stage. Thanks for any help!
My code (R-Studio) for the 2SLS-Model would be like this:
```
first_stage <- plm(X ~ Z * (X1 + X2 + X3), data = data, model = "within", effect = "individual")
X_hat <- predict(first_stage, type = "response")
data <- cbind.data.frame(data, X_hat)
data <- pdata.frame(data, index = c("country", "year"))
second_stage <- plm(Y ~ X_hat * (X1 + X2 + X3), data = data, model = "within", effect = "individual")
```
| Can I have an interaction-term between the predicted endogenous variable and other exogenous variables in the second-stage model of a 2SLS? | CC BY-SA 4.0 | null | 2023-04-03T10:40:04.043 | 2023-04-03T14:20:21.000 | 2023-04-03T14:20:21.000 | 384671 | 384671 | [
"r",
"instrumental-variables",
"2sls"
] |
611656 | 1 | null | null | 0 | 20 | I am trying to build an index with and without using Principal Components Analysis.
I have read that the sign of the PCA based index is arbitrary. All the PCA can do is give me a direction. I can scale that direction with any scalar $ \alpha $ and it will still remain the eigendirection. For example we may take $ \alpha=-1 $.
For example, please see [here](https://stats.stackexchange.com/questions/269428/reverse-the-sign-of-pca)
Due to this, the index based on PCA may be high OR low when the other index is high.
So may I do this :
- Chosen.sign = sign(correlation(index without using PCA,index while using PCA))
- Multiply the index built by using PCA by the sign in 1.
This way the index using PCA will always be high when the index build without using PCA is high.
Here is a real life example:
We wish to build an index for financial stress in the money market.
- We compute realised volatility of the interbank rate and the spread between the interbank rate and the government bond yield.
We build a variance equal weighted index = $ (s_1 + s_2)/2$.
Here $ s_1 $ = z-score of realised volatility of the interbank rate
and $ s_2 $ = z-score of the spread between the interbank rate and the govt bond yield
We know from "physical reasoning" that when this index is high the stress in the market is high.
- We compute a PCA based index from realised volatility and the spread. This could be high OR low when the stress in market is high.
- I used the correlation between 1 and 2 to decide the sign of 2 as mentioned in the first paragraph.
Can I then conclude that the stress in the market is high/low when the PCA-based index is high/low?
Is there any mistake in what I have done?
May I use this method to avoid flipping direction by the PCA based index when new data is added each month?
| Yet another attempt at deciding the sign of a PCA based index | CC BY-SA 4.0 | null | 2023-04-03T10:44:20.600 | 2023-04-03T13:46:52.030 | null | null | 121994 | [
"pca",
"finance"
] |
611657 | 2 | null | 611650 | 2 | null | The issue here is that your prior does not fully specify the joint distribution of the relevant events at issue. If you let $\mathscr{E}_1,\mathscr{E}_2,\mathscr{E}_3,\mathscr{E}_4$ denote the individual events that each of the four respective compartments contains the laptop (at most one of which can be true), then all you have specified is the prior probability:
$$\pi \equiv \mathbb{P}(\mathscr{E}_1 \cup \mathscr{E}_2 \cup \mathscr{E}_3 \cup \mathscr{E}_4).$$
This is not a full specification of prior probabilities for each of the possible outcomes. However, suppose you are further willing to specify the prior probability of finding the laptop in a particular compartment if it is in any of the compartments. We will denote these probabilities as:
$$\phi_i \equiv \mathbb{P}(\mathscr{E}_i | \mathscr{E}_1 \cup \mathscr{E}_2 \cup \mathscr{E}_3 \cup \mathscr{E}_4).$$
Now, if you observe that compartments 1-3 are empty then your posterior probability that the laptop is in the remaining compartment is:
$$\begin{align}
\mathbb{P}(\mathscr{E}_4 | \bar{\mathscr{E}}_1 \cap \bar{\mathscr{E}}_2 \cap \bar{\mathscr{E}}_3)
&= 1 - \mathbb{P}(\bar{\mathscr{E}}_4 | \bar{\mathscr{E}}_1 \cap \bar{\mathscr{E}}_2 \cap \bar{\mathscr{E}}_3) \\[14pt]
&= 1 - \frac{\mathbb{P}(\bar{\mathscr{E}}_1 \cap \bar{\mathscr{E}}_2 \cap \bar{\mathscr{E}}_3 \cap \bar{\mathscr{E}}_4)}{\mathbb{P}(\bar{\mathscr{E}}_1 \cap \bar{\mathscr{E}}_2 \cap \bar{\mathscr{E}}_3)} \\[6pt]
&= 1 - \frac{1 - \mathbb{P}(\mathscr{E}_1 \cup \mathscr{E}_2 \cup \mathscr{E}_3 \cup \mathscr{E}_4)}{1 - \mathbb{P}(\mathscr{E}_1 \cup \mathscr{E}_2 \cup \mathscr{E}_3)} \\[6pt]
&= 1 - \frac{1 - (\phi_1 + \phi_2 + \phi_3 + \phi_4) \pi}{1 - (\phi_1 + \phi_2 + \phi_3) \pi} \\[6pt]
&= \frac{\phi_4 \pi}{1 - (\phi_1 + \phi_2 + \phi_3) \pi}. \\[6pt]
\end{align}$$
In your particular specification of the problem you have $\pi = 0.8$ which means that the resulting posterior probability that the laptop is in the remaining compartment can be anywhere from zero up to 80%. (The latter result is obtained by taking $\phi_1 = \phi_2 = \phi_3 = 0$ and $\phi_4 = 1$.) Suppose alternatively that you are of the view that if the laptop is in one of the compartments then it is equally likely to be in any of them. This is reflected by choosing prior probabilities $\phi_1 = \phi_2 = \phi_3 = \phi_4 = \tfrac{1}{4}$ which then leads to the posterior probability 50%.
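As a quick numerical check of the final formula (a hedged sketch; the helper function name is ours, not from any package), the two cases discussed above can be computed directly:

```python
# Posterior that the laptop is in compartment 4 given compartments 1-3 are empty:
# P(E4 | not E1, not E2, not E3) = phi4 * pi / (1 - (phi1 + phi2 + phi3) * pi)

def posterior_last_compartment(pi, phis):
    """pi = prior prob. the laptop is in some compartment;
    phis = (phi1, phi2, phi3, phi4), conditional placement probs summing to 1."""
    assert abs(sum(phis) - 1) < 1e-12
    phi1, phi2, phi3, phi4 = phis
    return phi4 * pi / (1 - (phi1 + phi2 + phi3) * pi)

pi = 0.8
print(posterior_last_compartment(pi, (0, 0, 0, 1)))              # -> 0.8 (the upper bound)
print(posterior_last_compartment(pi, (0.25, 0.25, 0.25, 0.25)))  # ~ 0.5 (equal phis)
```

Setting all $\phi_i$ equal reproduces the 50% posterior derived above, and concentrating all prior mass on compartment 4 reproduces the 80% upper bound.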
| null | CC BY-SA 4.0 | null | 2023-04-03T11:00:30.320 | 2023-04-03T11:00:30.320 | null | null | 173082 | null |
611658 | 2 | null | 324665 | 0 | null | In the section regarding partial-RDA, Legendre & Legendre ([2012](https://www.elsevier.com/books/numerical-ecology/legendre/978-0-444-53868-0)) state that (p. 606):
>
Hypotheses may also be of the analysis-of-variance type, involving either a single classification criterion (factor) or several factors and their interactions, each recoded as dummy variables.
Borcard et al. ([2011](https://link.springer.com/book/10.1007/978-1-4419-7976-6)) furthermore indicate that (p. 155):
>
An RDA produces min[p, m, n − 1] canonical axes, where n is the number of objects and m is the number of degrees of freedom of the model (number of numeric explanatory variables, including levels of factors if qualitative explanatory variables are included; a factor with k classes requires (k − 1) dummy variables for coding, so there are (k − 1) degrees of freedom for this factor).
However, if you're using `R`, note that some RDA-related procedures and functions do not allow categorical variables. For instance, still according to Borcard et al. (2011), the `packfor::forward.sel()` function does not allow factors. You'd thus better do some reading before proceeding with your analyses, in order to know what is allowed by which function or software.
So globally yes, you can usually include categorical explanatory variables in an RDA as long as you use dummy coding. There are also [various solutions](https://stats.stackexchange.com/questions/5774/can-principal-component-analysis-be-applied-to-datasets-containing-a-mix-of-cont) for including such variables in PCAs.
| null | CC BY-SA 4.0 | null | 2023-04-03T11:13:03.747 | 2023-04-03T11:13:03.747 | null | null | 203941 | null |
611659 | 1 | null | null | 3 | 1164 | I am working on a project where I am evaluating different machine learning models to be used as scoring functions during in-silico docking. It is a regression problem where the 3D structure data of a protein bound to a ligand is used to predict the binding affinity of the complex. I have a training set of ~3600 examples and a test set of 209 examples. The test set is chosen to be representative and diverse (more details on how exactly this works can be found in [1]). The test and training set are disjoint. The goal of the test set is to evaluate the performance of the model on a diverse set of unseen examples. I am testing out many different regression models (MARS, kNN regression, support vector regression and random forest regression). I used a grid search with 10-fold CV to tune hyperparameters on the training set, and with the optimal hyperparameters, I trained the model on the same training set. Following this, I tested and compared model performance on the testing set.
How good or bad of a practice is it to tune hyperparameters and train the model on the same set? What is the consequence of doing this?
References:
[1] Ashtawy HM, Mahapatra NR. A Comparative Assessment of Predictive Accuracies of Conventional and Machine Learning Scoring Functions for Protein-Ligand Binding Affinity Prediction. IEEE/ACM Trans Comput Biol Bioinform. 2015 Mar-Apr;12(2):335-47. doi: 10.1109/TCBB.2014.2351824. PMID: 26357221.
| Is it a bad practice to learn hyperparameters from the training data set? | CC BY-SA 4.0 | null | 2023-04-03T11:17:48.777 | 2023-04-04T19:38:37.170 | 2023-04-04T10:09:23.520 | 384812 | 384812 | [
"regression",
"cross-validation",
"hyperparameter",
"train-test-split"
] |
611662 | 1 | null | null | 6 | 61 | The soup analogy is,
>
You only need a single spoon to sample the soup, provided it is well stirred.
It has been used several times here [Sampling distributions of sample means](https://stats.stackexchange.com/questions/146873/sampling-distributions-of-sample-means/146879) and [What is your favorite layman's explanation for a difficult statistical concept?](https://stats.stackexchange.com/questions/155/what-is-your-favorite-laymans-explanation-for-a-difficult-statistical-concept/50854). It is also referenced on other websites, for example [Soup analogy](https://www.linkedin.com/pulse/soup-analogy-catherine-chambers). It has a corollary that "if the soup is not well stirred, a single spoonful may not be representative."
What is the origin of this aphorism? The linked answer above finds it in Behar, R., Grima, P., & Marco-Almagro, L. (2012). [Twenty Five Analogies for Explaining Statistical Concepts. The American Statistician](https://www.tandfonline.com/doi/abs/10.1080/00031305.2012.752408). But that paper surely isn't the origin of the saying. I remember it from a university lecture in the 1990s, and even then it was treated as a proverbial saying. Who first said this, and roughly when? What was the original quote?
| Who created the "soup analogy" for sampling | CC BY-SA 4.0 | null | 2023-04-03T11:28:20.080 | 2023-04-07T17:06:28.153 | null | null | 147572 | [
"survey-sampling",
"teaching",
"communication"
] |
611663 | 2 | null | 611659 | 15 | null | It's completely inappropriate to pick hyperparameters from the test set, if it's meant to be a test set, because it makes the results on the test set unreliable. I.e. even if the test set is perfectly representative of what the model would be used on in real-life, there's no longer any reason to believe the test set performance would also be seen when the model is used in practice. A set of data used in this way is not a test set (terms used for it would e.g. be "validation set").
How badly the performance evaluation is wrong depends heavily on the details of each case. With an enormously large test set to which one could not possibly overfit, it may be less of a problem, but 209 examples is a really small number, so the potential for issues is large. The other thing is how heavily one can tune the models. E.g. just tuning the L2-penalty of a ridge regression is probably less dangerous than tuning many parameters (e.g. regularization parameters, data pre-processing choices, feature sets to include, etc.). At some point the number of hyperparameters could be so large that one could almost achieve a perfect fit to the set used for hyperparameter tuning.
It is much more typical to tune hyperparameters and select models based on cross-validation and/or a validation set. Comparing models on the test set could make sense if your study is primarily about comparing the models. If the purpose is to then select one model and say what its performance is, you end up overestimating its performance by selecting the best model out of several models. An unbiased estimate of its performance could then come from a new test set.
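The overestimation from selecting the best of several models on a small test set is easy to demonstrate by simulation (a hedged sketch with invented numbers: every candidate "model" below is, by construction, equally unskilled, with true accuracy 0.5 on a 209-example test set, yet the best observed accuracy exceeds 0.5):

```python
import random

random.seed(0)

n_test = 209         # size of the test set, as in the question
n_models = 50        # number of tuned model variants compared on it
true_accuracy = 0.5  # every model is, in truth, equally (un)skilled

# Each model's observed test-set accuracy is just binomial noise around 0.5.
observed = [
    sum(random.random() < true_accuracy for _ in range(n_test)) / n_test
    for _ in range(n_models)
]

best = max(observed)
print(f"best observed accuracy: {best:.3f} "
      f"(true accuracy of every model: {true_accuracy})")
```

The maximum over many noisy evaluations is biased upward, which is exactly why the winning model's test-set score is no longer an unbiased estimate of its real-world performance.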
| null | CC BY-SA 4.0 | null | 2023-04-03T11:30:41.783 | 2023-04-03T11:30:41.783 | null | null | 86652 | null |
611664 | 1 | 611671 | null | 1 | 36 | I am conducting a retrospective cohort study to determine the association between receiving medicine X with death during the first 100 days of therapy for patients with leukemia, from 2010-2020. Starting in 2015, all patients began receiving medicine X on Day 10 (t=10) of therapy to prevent infection, whereas nobody received it before. So, I have two cohorts to compare: those receiving medicine X, and those who didn’t.
Assuming era-dependent confounders are controlled for (e.g., quality of supportive care in each era, etc.), I’d like to determine the effect of medicine X on survival using a Cox PH model, but I am faced with the problem that many patients (10% of total cohort, 20% of events) die before day 10 and therefore die before they can receive medicine X (the exposure). As expected, this survivorship bias contributes to a large association between survival to 100 days and medicine X.
What strategy do you recommend to approach the issue of survivorship bias here, allowing for basic limitations to retrospective, non-randomized studies? Given that patients aren’t at risk of the event (death given medication X status) until they actually receive medication X, my first instinct is to simply left-truncate the data and describe the effect of medicine X as the hazard of death given survival to Day 10 (when they receive medicine X), assuming that confounders between those receiving medication X and those surviving are controlled for. Are there any other approaches I should consider? I considered a time-varying exposure, but I think it is inappropriate here given the specifics of medication X and the specific disease here, as the effect of medication X is expected to be quite different in days 1-10 than it would be in days >10, and I’m not interested in that question right now.
Thanks
| Approach to removing survivorship bias | CC BY-SA 4.0 | null | 2023-04-03T11:59:05.830 | 2023-04-03T13:32:34.327 | null | null | 384819 | [
"survival",
"bias"
] |
611666 | 2 | null | 611525 | 1 | null | If there are any events (state transitions) in the data set during the last shared time interval, then the "last rows" of individuals who don't have the event during that interval should not be "discarded." They still contribute to the information for building the model and are not "discarded" by the modeling process, either.
Consider a simple 2-state alive/dead transition model. The discrete-time model for that last time interval is a binomial regression of a transition to death versus no transition. You need to keep track of the number of no-transition cases to evaluate the probability of a transition during that last time interval. The argument extends to multinomial regression for multi-state models.
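To make the discrete-time setup concrete, here is a hedged sketch (the function and field names are ours) of expanding per-subject follow-up records into the person-period rows on which such a binomial regression would be fit — one row per subject per interval at risk, with a 0/1 transition indicator:

```python
def person_period(subjects):
    """Expand (id, last_interval, event) records into person-period rows.

    `last_interval` is the 1-based interval in which the subject had the
    event (event=1) or was right-censored (event=0).
    """
    rows = []
    for sid, last_interval, event in subjects:
        for t in range(1, last_interval + 1):
            # The transition indicator is 1 only in the event interval.
            rows.append({"id": sid, "interval": t,
                         "event": int(event and t == last_interval)})
    return rows

# Subject 1 dies in interval 3; subject 2 is censored after interval 2.
rows = person_period([(1, 3, 1), (2, 2, 0)])
for r in rows:
    print(r)
```

The censored subject contributes only no-transition rows, which is exactly the information needed to estimate the per-interval transition probabilities; those rows are not "discarded."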
If there are no events at all during that last time interval, then there is no hazard of an event during that time interval. You might have a situation like that in the [answer you cite in a comment](https://stats.stackexchange.com/q/579564/28500). If each discrete-time interval is modeled with a separate intercept for a baseline hazard, then data from the last time interval will provide no information for event probabilities during that interval. Those last-interval data points might nevertheless contribute some information for models that fit the baseline hazard over time to a smoothed form. With a parametric model, such cases with a right-censored transition time provide a [likelihood contribution](https://stats.stackexchange.com/a/530456/28500) proportional to the survival curve up to the right-censoring time.
| null | CC BY-SA 4.0 | null | 2023-04-03T12:04:47.037 | 2023-04-03T12:04:47.037 | null | null | 28500 | null |
611669 | 2 | null | 255276 | 0 | null | Normalizing should be performed depending on the reference value. So, for the forecasting studies, since we are trying to approach the real value, you may consider dividing by the real value.
$$\textrm{RMSE} = \sqrt{1/n\sum(y - y_i)^2/n},~ i = 1,\ldots,n $$
$$\textrm{NRMSE} = \textrm{RMSE}/y $$
Keep in mind that if you have only one sample then RMSE would be a wrong choice. Let's say the real value is 80, and the approximation is 60. If you apply RMSE, it will give you the difference between those values, not the percentage error. That is:
$$\textrm{RMSE} = \sqrt{(80-60)^2/1}= 20.$$
However, $\textrm{NRMSE}$ gives you the error as a percentage:
$$\textrm{NRMSE} = 20/80 = 1/4 = 25\%. $$
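A minimal sketch of the arithmetic above (the function names are ours):

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error over paired observations."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def nrmse(y_true, y_pred, reference):
    # Normalise by the reference (here: the real value being approximated).
    return rmse(y_true, y_pred) / reference

# Single-sample example from the text: real value 80, approximation 60.
print(rmse([80], [60]))       # -> 20.0: an absolute difference, not a percentage
print(nrmse([80], [60], 80))  # -> 0.25, i.e. 25%
```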
| null | CC BY-SA 4.0 | null | 2023-04-03T12:34:29.977 | 2023-04-03T13:09:15.007 | 2023-04-03T13:09:15.007 | 362671 | 384822 | null |
611670 | 2 | null | 611653 | 0 | null | If there can be at most one event per individual in a Cox model, then the gaps don't matter. Each time interval is treated as left-truncated at the lower end and, if there is no event, right-censored at the upper end. For such a model, you don't even need to keep track of patient IDs. In that case, as the R [vignette on time-dependent survival models](https://cran.r-project.org/web/packages/survival/vignettes/timedep.pdf) explains in Section 2:
>
this representation is simply a programming trick. The likelihood equations at any time point use only one copy of any subject, the program picks out the correct row of data at each time.
Thus an individual provides information during the time intervals for which you have data, and no information during the gaps.
The situation is different if there can be more than one event per individual in a Cox model, or in fully parametric models.
Be very careful, however, in building and interpreting models with time-varying covariate values. There is a big risk of [survivorship bias](https://en.wikipedia.org/wiki/Survivorship_bias); the `lifelines` package thus won't allow predictions from such a model.
| null | CC BY-SA 4.0 | null | 2023-04-03T12:34:33.593 | 2023-04-03T12:34:33.593 | null | null | 28500 | null |
611671 | 2 | null | 611664 | 0 | null | The easiest thing to defend is to model survival as a function of `X` starting from day 10. If no one ever received `X` prior to day 10, then any model of survival as a function of `X` that involves event times prior to day 10 violates causality: you can't look forward to a later predictor value to explain earlier survival, as you note. Left truncation at day 10, which evaluates subsequent survival conditional upon surviving 10 days, is an appropriate approach.
If you do include `X` as a time-varying covariate and model from day 0, then the intervals containing `X` will be interpreted as left-truncated prior to the administration of `X` in any event. One might argue that including time points before day 10 will help evaluate how well your model is handling "era-dependent confounders," but you would need to consult with colleagues to see if that actually makes sense in your situation and if it would be defensible in publication. If you choose that route, be very clear that no coefficient involving `X` provides information on `X` prior to the left-truncation time.
Even then, there are potentially big problems. As I understand the scenario, the use of `X` seems completely tied to the calendar date of treatment. It's not clear how well you can disentangle `X` from other "era-dependent confounders," and you might end up just having to compare pre- and post-2015 survival while noting that use of `X` might have played an important role. Perhaps a smoothed, continuous model of calendar date as a predictor will show an appropriate jump at 2015 when `X` came into use, which could strengthen your case.
| null | CC BY-SA 4.0 | null | 2023-04-03T12:48:09.123 | 2023-04-03T13:32:34.327 | 2023-04-03T13:32:34.327 | 28500 | 28500 | null |
611672 | 1 | null | null | 0 | 18 | For my master's thesis I've gotten pretty stuck with the analyses. I have 5 categorical treatments (predator model presentations) and I've measured how birds respond to them at 3 different time points (baseline, during model presentation, response). I've collected data on 3 different behaviours during these time points and I want to know whether there is an effect of predator model on the time spent performing these three behaviours over the different time points (ex. they spend a significantly high proportion of time preening in response to predator model 5 during the response compared to the baseline).
I've so far conducted some Friedman's tests, but I would have to do 5 treatments x 3 response behaviours = 15 Friedman's tests, which takes up a huge amount of my word count. Plus, I have a lot of missing data (NAs) so some of my Friedman's tests return p-value=NA.
| Analysing multiple categorical variables at different time points for several dependent variables in R? | CC BY-SA 4.0 | null | 2023-04-03T13:01:48.703 | 2023-04-03T13:04:55.237 | 2023-04-03T13:04:55.237 | 362671 | 384825 | [
"r",
"hypothesis-testing",
"friedman-test"
] |
611673 | 1 | null | null | 0 | 16 | I am calculating elasticities per income quintiles. I have 10,000 observations of elasticities calculated for each quintile, so a dataset of 5*10,000.
I would like to test whether the elasticities are statistically different across income groups.
How should I do this?
Can I calculate the mean over all samples and use the Wald-test to see if the mean for each individual quintile is statistically different from the overall mean?
Any other ideas?
| Statistical difference per group | CC BY-SA 4.0 | null | 2023-04-03T13:06:12.203 | 2023-04-03T13:06:12.203 | null | null | 201272 | [
"wald-test"
] |
611674 | 1 | null | null | 0 | 34 | For SPSS is rating knowledge of a nutrient on a scale of 1-10 ordinal or continuous scale variable ?
In my questionnaire I stated that 1=poor knowledge of the nutrient while 10=excellent knowledge of the nutrient
| Rating knowledge on a scale 1-10 | CC BY-SA 4.0 | null | 2023-04-03T13:14:45.033 | 2023-04-03T13:21:27.620 | 2023-04-03T13:21:27.620 | 384827 | 384827 | [
"spss",
"ordinal-data",
"continuous-data"
] |
611675 | 2 | null | 611582 | 5 | null | The answer to the question:
>
where does $\mathcal{N}(m(X),k(X,X))$ come from ?
relies on two elementary pieces of information:
(1) Understanding how marginal posterior probabilities can be derived from Bayes' rule. Consider for example a model with two sets of unknown parameters $\theta$ and $\phi$, but with data $y$ that depends only on $\theta$, that is $P(y|\theta,\phi) = P(y|\theta)$. Applying Bayes' rule we have
$$ P(\theta,\phi | y ) \propto P(y|\theta) \pi(\theta,\phi)$$
Where $\pi(\theta,\phi)$ denotes the joint prior of $\theta$ and $\phi$.
Now we can integrate both sides with respect to $\phi$ to obtain the marginal posterior probability of $\theta$:
$$ P(\theta| y ) \equiv \int d\phi P(\theta,\phi | y ) \propto P(y|\theta) \int d\phi\pi(\theta,\phi) \equiv P(y|\theta) \pi(\theta)$$
Where $\pi(\theta) \equiv \int d\phi\pi(\theta,\phi)$ is the marginal prior of $\theta$.
In other words, the marginal posterior can simply be obtained by applying Bayes' rule to the marginal prior, when the likelihood depends only on a subset of the parameters:
$$ P(\theta| y ) \propto P(y|\theta) \pi(\theta).$$
(2) Understanding the marginal distributions of a Gaussian process: this in fact follows directly from the very [Definition](https://en.wikipedia.org/wiki/Gaussian_process#Definition) of a Gaussian process: we say that a stochastic process $\{f(x)|x \in \mathcal{X}\}$ is Gaussian if and only if every finite subset $(f(x_1),f(x_2),...,f(x_n))$ has a marginal multivariate normal distribution. The notation $f \sim \mathcal{GP}(m,k)$ is therefore equivalent to $(f(x_1),f(x_2),...,f(x_n)) \sim \mathcal N(\mathbf{m}, \mathbf{K})$ where $\mathbf{m} = (m(x_1),m(x_2),...,m(x_n))$ and $\mathbf{K}_{ij} = k(x_i,x_j)$.
Combining the above two pieces of information leads us to the answer in a straightforward manner. In GP Regression, we are estimating an unknown function $f$, that is, a quantity that has an infinite number of degrees of freedom. (If you don't like thinking about infinite quantities, you can instead just think about a very large number $N$, e.g. by restricting $f$ to some grid points. This doesn't change anything about the logic).
The observed data $y_i$ only depend on the finite subset $\{f(x_i)|i=1,...,n\}$. If we would like to find the marginal posterior of some other subset, e.g. $\{f(x_i)|i=n+1,...,m+n\}$, then we only need to know the marginal prior of $\mathbf{f}=(f(x_1),f(x_2),...,f(x_{n+m}))$, which is by definition multivariate normal. This is how we arrive at
$$ P(\mathbf{f} | \{y_i\} ) \propto \Pi_{i=1}^n P(y_i|f(x_i)) \times \mathcal N(\mathbf f | \mathbf{m} , \mathbf{K})$$
where $\mathbf{m}$ and $\mathbf K$ are derived from the mean and covariance functions $m(x)$ and $k(x,x')$ as explained above.
Notice that, if $P(y_i|f(x_i))$ is normal, then the marginal posterior probability of $\mathbf{f}$ is also multivariate normal, and this is true for any subset $\mathbf{f}$. But this is exactly the requirement for the posterior distribution of the entire function $f$ to be a Gaussian process itself! We have therefore established that in GP regression, the posterior probability is also a GP.
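The Gaussian-likelihood case can be checked with a tiny numerical sketch (the kernel, noise level, and data are invented; with only two training points the required 2×2 matrix inverse can be written out by hand). With near-zero observation noise, the posterior mean interpolates the training observations, as the conjugacy argument above implies:

```python
import math

def k(x, xp):
    # Squared-exponential covariance function (an arbitrary choice here).
    return math.exp(-0.5 * (x - xp) ** 2)

X, y = [0.0, 1.0], [1.0, 2.0]   # two training inputs and observations
noise = 1e-8                    # tiny observation noise variance

# Gram matrix K + noise * I for the two training points.
a = k(X[0], X[0]) + noise
b = k(X[0], X[1])
c = k(X[1], X[1]) + noise
det = a * c - b * b
Kinv = [[c / det, -b / det], [-b / det, a / det]]  # explicit 2x2 inverse

def posterior_mean(x_star):
    # m(x*) = k(x*, X) @ K^{-1} @ y, assuming a zero prior mean function.
    ks = [k(x_star, X[0]), k(x_star, X[1])]
    alpha = [Kinv[0][0] * y[0] + Kinv[0][1] * y[1],
             Kinv[1][0] * y[0] + Kinv[1][1] * y[1]]
    return ks[0] * alpha[0] + ks[1] * alpha[1]

print(posterior_mean(0.0))  # ~ 1.0: interpolates the first observation
print(posterior_mean(1.0))  # ~ 2.0: interpolates the second observation
```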
| null | CC BY-SA 4.0 | null | 2023-04-03T13:32:54.700 | 2023-04-03T13:32:54.700 | null | null | 348492 | null |
611676 | 2 | null | 482089 | 0 | null |
#### Without cycles
The "period" parameter does not play a role if the data does not have any cycles. This short example cannot have a seasonality in it. Thus, the answer to the question is that there is no answer.
#### With cycles, and with observations at regular intervals
The "period" parameter is the number of observations in a seasonal cycle. For example, if you have daily observations and weekly seasonality, the period is 7.
#### With cycles, and with observations at irregular intervals
Not asked in this question, but if the data had seasonality and, at the same time, irregular intervals, there are questions dealing with this:
- Trend in irregular time series data
- How to analyse irregular time-series in R
Thanks go to the remarks of user [Scortchi](https://stats.stackexchange.com/users/17230/scortchi-reinstate-monica).
| null | CC BY-SA 4.0 | null | 2023-04-03T13:36:43.747 | 2023-04-03T13:36:43.747 | null | null | 287262 | null |
611677 | 2 | null | 593737 | 0 | null | In the ESS guidelines on seasonal adjustment (p. 6), the definition of seasonality states that
>
Usual seasonal fluctuations mean those movements which recur with similar intensity in the same season each year and which, on the basis of the past movements of the time series in question and under normal circumstances, can be expected to recur.
So by this definition, infra-annual periodic effects are not considered seasonal. Yet, in experimentally adjusted daily time series, weekly and monthly recurring effects are adjusted for (e.g. [German truck toll data](https://www.destatis.de/EN/Service/EXDAT/Datensaetze/truck-toll-mileage.html)). This shows that the distinction between seasonal and infra-annual cyclical effects is often not made in practice. That the length of the month varies is taken care of in many applications (e.g. [Ollech, 2022](https://www.degruyter.com/document/doi/10.1515/jtse-2020-0028/html)). The same is true for the length of the year, of course, as we sometimes have 366 instead of 365 days.
| null | CC BY-SA 4.0 | null | 2023-04-03T14:07:11.337 | 2023-04-03T14:07:11.337 | null | null | 384832 | null |
611680 | 1 | null | null | 0 | 39 | Let $\mathbf{X}=\{X_1, X_2, \ldots, X_n\}\sim\mathcal{N}(\mu, \sigma^2)$ and $\mu$ is known. Does $\frac{\sigma^2_{\mathbf{MLE}}-\sigma^2}{\mathbf{se}(\sigma_{\mathbf{MLE}}^2)}$ is asymptotically normal and why?
I know $$\sigma_{\mathbf{MLE}}^2=\frac{1}{n}\sum_{i=1}^{n}\left[X_i-\mu\right]^2$$
and, since $\mu$ is known, $\frac{n\sigma_{\mathbf{MLE}}^2}{\sigma^2}\sim\chi^2(n)$.
| Is $\frac{\sigma^2_{\mathbf{MLE}}-\sigma^2}{\mathbf{se}(\sigma_{\mathbf{MLE}}^2)}$ asymptotically normal? | CC BY-SA 4.0 | null | 2023-04-03T14:26:16.047 | 2023-04-03T14:48:45.257 | 2023-04-03T14:48:45.257 | 384636 | 384636 | [
"maximum-likelihood",
"likelihood",
"estimators"
] |
611681 | 1 | null | null | 1 | 50 | In reviewing my notes about making causal inferences under the selection on the observables identification strategy, I reviewed some pieces that make critiques against contemporary strategies in observational settings that use regression adjustment. I'll use [Samii 2016](https://www.journals.uchicago.edu/doi/abs/10.1086/686690) as an example although the arguments made in this paper are fairly widespread from what I can observe.
Samii (2016) critiques conventional regression-based studies on numerous grounds:
- Pseudo-generality problem: users of regression adjustment rarely make clear the weights generated from their regression analysis, using descriptive statements about the spatial-temporal coverage of their data set to implicitly suggest that their regression models weight the data similarly.
- Manipulating regression model specifications to achieve a favorable result (I would also throw in here the critique that regression is sensitive to functional form specifications).
- Poor control variable selection: Many users of regression adjustment may include "bad controls" (Cinelli et al. 2022)
- Evaluating multiple treatment effects in a single regression equation
- Barring an experiment, causal inference is difficult
Here is where my question begins. I was taught that matching strategies and inverse probability weighting (IPW) are natural "upgrades" over standard regression adjustment strategies. Certainly, I think explaining the logic of these methods is easier than explaining how regression weights are generated, but what methodological advantages, specifically for making causal inferences do matching/weighting hold over regression adjustment? It seems to me that, for the five points made above, matching/weighting are on an equal footing with standard regression adjustment if bad practices in standard regression adjustment studies are addressed.
For example:
- Users of regression adjustment can make their weights generated from a regression model clear and transparent. With that being said, users of matching/IPW can also fail to be transparent concerning the cases kept post-matching and IPW-generated weights.
- One can manipulate the variables chosen to match on/generate propensity scores on to achieve a favorable outcome.
- Likewise, one can choose "bad" variables to match on/generate propensity scores from.
- Users of regression adjustment do not inherently have to follow the practice of attempting to evaluate multiple treatment effects in a single regression model.
- Matching/IPW suffer from the potential of unobserved confounding just as much as regression adjustment does.
In summary, I feel as if most of the critiques against regression adjustment are levied towards the culture of regression adjustment rather than the method itself. However, I completely acknowledge that I may be confused on an important point here. I acknowledge the value of matching/IPW as alternative strategies under selection on the observables, but I fail to articulate why these methods should perform any better than regression adjustment at estimating causal effects (assuming the use of regression adjustment has shed some of its associated bad practices).
| How do matching/weighting outperform regression adjustment for making causal inferences? | CC BY-SA 4.0 | null | 2023-04-03T14:37:25.673 | 2023-04-03T15:15:07.030 | null | null | 360805 | [
"regression",
"causality",
"propensity-scores",
"matching"
] |
611682 | 2 | null | 611583 | 1 | null | Some observations: your code for AMOC is overwriting the three different methods, as the object names are the same. The final method you fit is a change in the mean and variance of a Normal distribution, which you plot. There are no plots of the PELT output, which fits multiple changepoints.
More importantly, all the models you fit are for (flat) mean or variance shifts. There is clearly a trend in your data. AMOC places a changepoint almost at the centre because, when you fit a flat mean before and a flat mean after the change, this is the best place to put a single change. If you look at the corresponding PELT plot, you will likely see a step function with a changepoint every few observations to capture the increasing trend.
See the following two papers for descriptions of common pitfalls and a worked-through data example for constructing changepoint methods.
[1] Good Practices and Common Pitfalls in Climate Time Series Changepoint Techniques: A Review [https://arxiv.org/abs/2212.02674](https://arxiv.org/abs/2212.02674) these points don't just apply to climate series
[2] Changepoint Detection: An Analysis of the Central England Temperature Series [https://arxiv.org/abs/2106.12180](https://arxiv.org/abs/2106.12180) — a walkthrough with discussion
| null | CC BY-SA 4.0 | null | 2023-04-03T14:47:31.950 | 2023-04-03T14:47:31.950 | null | null | 13409 | null |
611683 | 2 | null | 437477 | 0 | null | There is a problem with your index. It is not dimensionless. The first term of the index is dimensionless, but the second term has the dimension (1/X). Thus it is not only not dimensionless, but it is also non-homogeneous. Please check if it is correct.
| null | CC BY-SA 4.0 | null | 2023-04-03T14:48:07.927 | 2023-04-03T14:48:07.927 | null | null | 226618 | null |
611684 | 2 | null | 151943 | 0 | null | The `EnvCpt` R package can fit changepoint models with trends (and other features such as autocorrelation which can be present in this sort of data). You can set a minimum segment length which could be equal to the minimum time between repairs (or slightly smaller may give better results), if there are multiple potential candidates for a "repair point" then it will choose the "best" from a statistical criteria optimization perspective.
The code doesn't restrict to negative gradients but you could modify it to do so.
[https://cran.r-project.org/web/packages/EnvCpt/index.html](https://cran.r-project.org/web/packages/EnvCpt/index.html)
| null | CC BY-SA 4.0 | null | 2023-04-03T14:55:27.850 | 2023-04-03T14:55:27.850 | null | null | 13409 | null |
611686 | 1 | null | null | 2 | 44 | Suppose I am performing a propensity score matched analysis using the `MatchIt` package in R, following the example reported here: [https://kosukeimai.github.io/MatchIt/articles/estimating-effects.html](https://kosukeimai.github.io/MatchIt/articles/estimating-effects.html)
```
library("MatchIt")
#Create dataset 'd' as reported at the bottom of the example----
gen_X <- function(n) {
X <- matrix(rnorm(9 * n), nrow = n, ncol = 9)
X[,5] <- as.numeric(X[,5] < .5)
X
}
#~20% treated
gen_A <- function(X) {
LP_A <- - 1.2 + log(2)*X[,1] - log(1.5)*X[,2] + log(2)*X[,4] - log(2.4)*X[,5] + log(2)*X[,7] - log(1.5)*X[,8]
P_A <- plogis(LP_A)
rbinom(nrow(X), 1, P_A)
}
# Continuous outcome
gen_Y_C <- function(A, X) {
2*A + 2*X[,1] + 2*X[,2] + 2*X[,3] + 1*X[,4] + 2*X[,5] + 1*X[,6] + rnorm(length(A), 0, 5)
}
gen_Y_B <- function(A, X) {
LP_B <- -2 + log(2.4)*A + log(2)*X[,1] + log(2)*X[,2] + log(2)*X[,3] + log(1.5)*X[,4] + log(2.4)*X[,5] + log(1.5)*X[,6]
P_B <- plogis(LP_B)
rbinom(length(A), 1, P_B)
}
gen_Y_S <- function(A, X) {
LP_S <- -2 + log(2.4)*A + log(2)*X[,1] + log(2)*X[,2] + log(2)*X[,3] + log(1.5)*X[,4] + log(2.4)*X[,5] + log(1.5)*X[,6]
sqrt(-log(runif(length(A)))*2e4*exp(-LP_S))
}
set.seed(19599)
n <- 2000
X <- gen_X(n)
A <- gen_A(X)
Y_C <- gen_Y_C(A, X)
Y_B <- gen_Y_B(A, X)
Y_S <- gen_Y_S(A, X)
d <- data.frame(A, X, Y_C, Y_B, Y_S)
#Create a toy column for an example treatment variable
set.seed(1145)
d$treatment <- sample.int(2, length(d$A), replace=TRUE)
d$treatment <- factor(d$treatment, levels=c(1,2), labels=c("Treatment A","Treatment B"))
#Matching----
mF <- matchit(A ~ X1 + X2 + X3 + X4 + X5 +
X6 + X7 + X8 + X9, data = d,
method = "full", estimand = "ATT")
md <- match.data(mF)
```
Now suppose I have another variable, namely `treatment`, and that I want to perform a Cox-regression analysis to investigate whether there is an interaction between the effect of `treatment` and the `A` variable on the risk of the outcome (i.e. whether the effect of `treatment` differs across the levels of `A` after matching). I read the "Moderation Analysis" section in the page linked above, but I am not sure whether the following is applicable in that context or formally correct, so I am asking.
My attempt would be:
```
library("survival")
coxph(Surv(Y_S) ~ A*treatment, data = md, robust = TRUE,
weights = weights, cluster = subclass)
```
The output is the following:
```
Call:
coxph(formula = Surv(Y_S) ~ A * treatment, data = md, weights = weights,
robust = TRUE, cluster = subclass)
coef exp(coef) se(coef) robust se z p
A 0.52453 1.68967 0.07818 0.11519 4.553 5.28e-06
treatmentTreatment B 0.17025 1.18560 0.05142 0.10742 1.585 0.113
A:treatmentTreatment B -0.17577 0.83881 0.10831 0.16196 -1.085 0.278
Likelihood ratio test=67.88 on 3 df, p=1.211e-14
n= 2000, number of events= 2000
```
Now the questions:
- Question 1: Is this approach to model the interaction treatment*A correct? Can I model an interaction after I have done the matching (obviously without matching for the treatment variable)?
- Question 2: Can I interpret the coefficients reported in the output as the conditional effect of treatment on the outcome, and interpret the interaction as I would do in a 'standard' non-matched Cox-regression model?
| How to model an interaction in a propensity-score matched dataset | CC BY-SA 4.0 | null | 2023-04-03T15:09:22.993 | 2023-04-03T15:09:22.993 | null | null | 122916 | [
"regression",
"survival",
"cox-model",
"propensity-scores",
"matching"
] |
611687 | 1 | null | null | 0 | 11 | I am trying to model a Kalman Filter for an IMU (inertial measurement unit) with the method described by [Zhou](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1304870) (2004)
and [Filippeschi](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5492902/) (2017, pp.11-12).
In this method, the state vector is:
$$
X = \begin{bmatrix}g_x\\g_y\\g_z\\m_x\\m_y\\m_z\end{bmatrix}
$$
Where $g$ is the gravity vector and $m$ is the magnetic vector.
The process model is as follows:
$$
X_{j+1}=(\dot X_jT+I_6)X_{j}+w_j
$$
Where $\dot X_j$ is a vector containing the rates of change of $g$ and $m$, $T$ is the sample time, $I_6$ is the 6x6 identity matrix, and $w_j$ is the process noise.
$\dot X_j$ can be obtained from the raw measurements of the sensors.
$$
\dot X = \begin{bmatrix}\dot g\\ \dot m\end{bmatrix}
$$
$$
\dot g = g \times w
$$
$$
\dot m = m \times w
$$
Where g is the accelerometer output, m is the magnetometer output, and w is the gyroscope output.
The measurement model is simple:
$$
Z_j = X_j + \delta_j
$$
With $\delta_j$ being the white measurement noise.
I am not very familiar with Kalman filters, but from what I have gathered, there are five main equations:
- State extrapolation
- Covariance extrapolation
- State update
- Covariance update
- Kalman gain
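In generic textbook notation (this is my own sketch, not the specific model from the papers), my understanding is that one filter cycle combines these five equations roughly as follows, assuming a linear measurement model $Z_j = HX_j + \delta_j$:

```python
import numpy as np

def kf_step(x, P, A, Q, z, H, R):
    """One generic Kalman filter cycle (textbook form, not the paper's model)."""
    # State and covariance extrapolation (predict)
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Kalman gain
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # State and covariance update
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

In the measurement model of the paper, $Z_j = X_j + \delta_j$, so $H$ would simply be $I_6$.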
My issue comes from the fact that most literature has the state extrapolation equation as follows (I'm omitting the control vector):
$$
X_{j+1} = AX_j + w_j
$$
However, I am confused about deriving the state extrapolation and covariance extrapolation equations, since the process model doesn't really have a constant state transition matrix. Can I simply substitute $A$ with $(\dot X_jT+I_6)$?
For instance, this would mean the state extrapolation equation ends up like this:
$$
X_{j+1} = \dot X_jTX_j + X_j
$$
And perhaps the covariance extrapolation like this (which I have no idea how to solve):
$$
P_{j+1} = (\dot X_jT+I_6)P_j(\dot X_jT+I_6)^T + Q
$$
If you don't understand the process model, it is best to read the papers linked at the top; the model above is taken verbatim from them, I didn't just make it up.
| Deriving Kalman Filter equations | CC BY-SA 4.0 | null | 2023-04-03T15:13:00.593 | 2023-04-03T15:13:00.593 | null | null | 384759 | [
"kalman-filter"
] |
611688 | 2 | null | 611681 | 1 | null | This is an interesting question, which was discussed several times here and elsewhere.
I can link some reference which I found very useful when dealing with this question: [A nice paper on the topic](https://www.sciencedirect.com/science/article/pii/S073510971637036X), and a very interesting [thread](https://discourse.datamethods.org/t/propensity-score-matching-vs-multivariable-regression/2319) on datamethods. Some other will certainly give more details on the topic but I think this is a very good starting point.
Overall I tend to support the idea that a well constructed (and perhaps validated) covariate-adjusted regression model performs well in most cases.
| null | CC BY-SA 4.0 | null | 2023-04-03T15:15:07.030 | 2023-04-03T15:15:07.030 | null | null | 122916 | null |
611689 | 1 | null | null | 0 | 32 | I wrote the following code in Python. First I generate some data:
```
import numpy as np
import statsmodels.api as sm

# generate data
x1 = np.random.normal(10, 5, 1000)
x2 = np.random.normal(20, 5, 1000)
error = np.random.normal(0, 1, 1000)
y = np.exp(3*x1 + 6*x2) + error
X = np.stack((x1,x2)).T
```
Then I pre-allocate some variables:
```
weights = np.ones(np.shape(X)[1])
learningRate = 0.0001
numberOfIterations = 10000
```
I (try to) do maximum likelihood estimation:
```
for _ in range(0, numberOfIterations):
# the derivative of the log likelihood for the Poisson distribution
gradient = -1 * len(y) + 1/weights * np.sum(X, axis=0)
# gradient ascent
weights = weights + learningRate * gradient
```
Then I try to verify the results with statsmodels:
```
# check outcome
check = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(check.params)
print(weights)
>>>
[10.20840077 20.03072152]
[2.99 6.00]
```
As can be seen, my outcome does not match the output from `statsmodels`.
After analysis, I think I estimated the parameter of the Poisson distribution $(X \sim Poi(\mu))$, for which the sample mean is the ML estimator. Indeed, it can be seen (since I generated the data) that the population means are 10 and 20. Furthermore, the sample means are exactly equal to the MLE outcome. Therefore I am convinced my code is correct, but it is not what I want to calculate. I want to calculate the beta coefficients using MLE:
$$
log(\mu) = \beta_1 x_1 + \beta_2 x_2
\\
\mu = E(y)
\\
\mu \sim Poi(\lambda)
$$
So I want to estimate $b_1^{MLE}$ and $b_2^{MLE}$, but instead I got $\hat{\lambda}_{MLE}$.
How do I get $b_1^{MLE}$ and $b_2^{MLE}$?
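For what it's worth, here is a sketch of what I now think the update should be. This is my own attempt on freshly simulated data (with small coefficients, so that `np.exp` stays finite, unlike my original `y = np.exp(3*x1 + 6*x2)`), and it is not taken from any reference: for a Poisson GLM with log link, the gradient of the log-likelihood with respect to the coefficients is $X^\top(y - e^{X\beta})$, so:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack((rng.normal(0, 0.5, n), rng.normal(0, 0.5, n)))
beta_true = np.array([0.3, 0.6])
y = rng.poisson(np.exp(X @ beta_true))  # Poisson response with log link

beta = np.zeros(2)
learning_rate = 1e-4
for _ in range(20_000):
    mu = np.exp(X @ beta)
    gradient = X.T @ (y - mu)  # score of the Poisson log-likelihood w.r.t. beta
    beta = beta + learning_rate * gradient
```

With this parameterization, the loop recovers coefficients close to `beta_true`, but I am not sure whether this is the standard way to set it up.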
| How to implement GLM regression from scratch? | CC BY-SA 4.0 | null | 2023-04-03T15:22:56.477 | 2023-04-12T11:31:16.433 | 2023-04-12T11:31:16.433 | 219554 | 219554 | [
"regression",
"python",
"generalized-linear-model",
"statsmodels"
] |
611690 | 1 | null | null | 0 | 36 | I'm performing hyperparameter tuning for a classifier. After I finish, I update the hyperparameter search space and tune again. I repeat this process a few times. In addition to the validation set used for hyperparameter optimization, I use a separate test set for the final evaluation.
Is it a good approach, or can it lead to overfitting?
I'm asking about the general case rather than my specific one, but for context: my hyperparameter tuning method is Optuna, and the classifier is CatBoost. In addition, in each Optuna trial, when fitting CatBoost, I use early stopping on the same validation set.
The relevant code is attached below.
```
def objective(self, trial):
    params, p = get_catboost_params(trial, self.categorical_features)
    clf = CatBoostClassifier(verbose=200, random_seed=42, eval_metric='AUC',
                             auto_class_weights='Balanced', **params)
    clf.fit(self.X_train, self.y_train, eval_set=(self.X_val, self.y_val),
            early_stopping_rounds=p, cat_features=self.categorical_features)
    y_val_prob = clf.predict_proba(self.X_val)[:, 1]
    auc_score = roc_auc_score(self.y_val, y_val_prob)
    return auc_score

def optimize_hyperparameters(self):
    study = optuna.create_study(direction="maximize")
    study.optimize(self.objective, n_trials=self.n_trials)
    self.best_params = study.best_params
    trials_params = self.get_trial_params(study)
    trials_params.to_csv("optuna_result.csv")
```
| Can repeated hyperparameter tuning lead to overfitting? | CC BY-SA 4.0 | null | 2023-04-03T15:25:42.113 | 2023-04-03T15:55:31.987 | 2023-04-03T15:55:31.987 | 276238 | 276238 | [
"classification",
"predictive-models",
"overfitting",
"validation",
"hyperparameter"
] |
611692 | 1 | null | null | 0 | 17 | How do I compute partial autocorrelation values for a time series of categorical (nominal-scale) data?
C. H. Weiss ([2008](https://www.hsu-hh.de/mathstat/wp-content/uploads/sites/781/2017/10/Folien_09_11_1.pdf), [2018](https://onlinelibrary.wiley.com/doi/book/10.1002/9781119097013):chapter 6) provides very clear descriptions of how simple autocorrelation measures, such as Cramer's V or Cohen's $\kappa$, can be computed to measure the degree of serial dependence in a categorical time series. However, no equivalently basic (algorithmic) procedure is provided for the partial measures.
I am looking for a series of steps that someone with little background in time-series analysis can follow and implement.
| Partial autocorrelation with categorical data | CC BY-SA 4.0 | null | 2023-04-03T15:32:30.170 | 2023-04-03T15:32:30.170 | null | null | 137333 | [
"time-series",
"categorical-data",
"autocorrelation",
"autoregressive",
"partial-correlation"
] |
611694 | 1 | null | null | 0 | 24 | I am currently reading about stochastic processes and Brownian Motion.
When books use notation such as $E[X_t] = 0$ and $Var[X_t] = t$, this is considered over sample paths.
However, when we consider the distribution of the increment $X_t - X_s \sim \mathcal{N}(0,|t-s|)$ what random variable is this exactly?
Is it:
- The increment at a fixed time over possible paths i.e. the set $ [ X(t,\omega_k) - X(s,\omega_k) ]_k $ for fixed $t,s$
- The increment over a particular path, but varying the times at which the increments are taken, i.e. the set $ [ X(t+\Delta,\omega) - X(t,\omega) ]_{t} $ for fixed $\Delta,\omega$
Does one of these imply the other perhaps? I have tested it empirically and it seems to be true in both cases.
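For example, my empirical check of the first reading (fixed times $s,t$, varying paths $\omega$) looks roughly like this, where the grid resolution and the particular times are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, dt = 20_000, 100, 0.01

# Simulate many Brownian paths on a grid: X(k*dt) is a cumulative sum of N(0, dt) increments
steps = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = steps.cumsum(axis=1)

# Increment at fixed times t and s, varying the path omega
t_idx, s_idx = 80, 30            # t = 0.81, s = 0.31 on this grid
diff = paths[:, t_idx] - paths[:, s_idx]
# Sample variance should be close to |t - s| = 0.50
```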
| Stochastic Process Notation / Brownian Increments | CC BY-SA 4.0 | null | 2023-04-03T15:53:42.977 | 2023-04-03T15:57:56.217 | 2023-04-03T15:57:56.217 | 365983 | 365983 | [
"probability",
"stochastic-processes",
"brownian-motion",
"stochastic-calculus"
] |
611695 | 1 | null | null | 3 | 110 | I am facing a multiclass classification problem where I have 4 classes and one of them dominates over the others. I use a KNN classification model and the majority of the instances are being classified as the majority class. I used the `weights = 'distance'` parameter and it did improve things, but not as much as I expected. I know that adjusting the classification thresholds of each class can improve the classification of the classes with fewer instances, but I don't know how to do it. My code is this:
```
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
from sklearn import metrics
import numpy as np
from sklearn.model_selection import cross_val_score
from scipy.spatial.distance import braycurtis
from sklearn.metrics import confusion_matrix
df_X = pd.read_csv('df_data.csv')
df_Y = pd.read_csv('df_Class.csv')
X_train, X_test, Y_train, Y_test = train_test_split(df_X, df_Y, random_state=42)
knn = KNeighborsClassifier(n_neighbors = 5, metric = braycurtis, weights = 'distance')
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
acc_score = accuracy_score(Y_test, Y_pred)
print("Acierto de KNN en la partición de test:", acc_score)
print(metrics.classification_report(Y_test,Y_pred))
m_confusion = confusion_matrix(Y_test, Y_pred)
print(m_confusion)
```
and my results are these:
```
precision recall f1-score support
1 0.64 0.39 0.48 244
2 0.77 0.49 0.60 371
3 0.56 0.95 0.71 626
4 0.64 0.34 0.44 408
accuracy 0.61 1649
macro avg 0.65 0.54 0.56 1649
weighted avg 0.64 0.61 0.58 1649
[[ 94 4 126 20]
[ 21 182 127 41]
[ 10 6 592 18]
[ 23 43 204 138]]
```
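One idea I found is to rescale the `predict_proba` outputs by per-class thresholds before taking the argmax, but I am not sure whether this is the right approach (the probabilities and thresholds below are made up purely for illustration):

```python
import numpy as np

# Hypothetical predicted probabilities for 3 samples over the 4 classes
proba = np.array([[0.30, 0.20, 0.40, 0.10],
                  [0.10, 0.15, 0.55, 0.20],
                  [0.25, 0.25, 0.30, 0.20]])

# Higher threshold for the majority class (class 3, column index 2)
thresholds = np.array([0.25, 0.25, 0.50, 0.25])

# Plain argmax always favours the majority class here...
default_pred = proba.argmax(axis=1)
# ...while dividing by the per-class thresholds handicaps it
adjusted_pred = (proba / thresholds).argmax(axis=1)
```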
Thank you very much!
| How to adjust the classification thresholds in a multiclass classification problem? | CC BY-SA 4.0 | null | 2023-04-03T15:59:38.317 | 2023-04-06T12:05:22.357 | null | null | 384840 | [
"machine-learning",
"classification",
"python",
"scikit-learn",
"k-nearest-neighbour"
] |
611696 | 1 | null | null | 1 | 19 | We have a shuffled, standard 52 card deck with 4 Aces and 4 twos. I am confused about the probability of seeing an ace before seeing a two.
By symmetry, it seems obvious that the probability should be $0.5$, since the probability of seeing one before the other is equal.
But when I think of it this way: given the order of the aces in a shuffled deck, each "two" card has an equal chance of being placed in any of 5 "buckets":
[](https://i.stack.imgur.com/twUIQ.png)
So the probability of seeing an ace first is the probability that all "two"s land in the latter 4 of the buckets. And since each one has an equal probability of landing in each bucket, we have $P = (4/5)^4 = 0.4096$. But that's different from the intuitive answer of $0.5$ given by symmetry! So what went wrong?
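To check, I ran a quick simulation (deck composition hard-coded, with "x" standing in for the 44 cards that are neither aces nor twos):

```python
import random

def ace_before_two(n_trials=200_000, seed=0):
    """Estimate P(first ace appears before first two) in a shuffled 52-card deck."""
    rng = random.Random(seed)
    deck = ["A"] * 4 + ["2"] * 4 + ["x"] * 44
    wins = 0
    for _ in range(n_trials):
        rng.shuffle(deck)
        for card in deck:
            if card == "A":
                wins += 1
                break
            if card == "2":
                break
    return wins / n_trials
```

The estimate comes out near $0.5$, which supports the symmetry argument rather than my bucket calculation.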
| Probability of seeing an ace before seeing a two in a 52 card deck? | CC BY-SA 4.0 | null | 2023-04-03T16:06:47.267 | 2023-04-03T16:06:47.267 | null | null | 313996 | [
"probability",
"combinatorics"
] |
611697 | 1 | 611716 | null | 0 | 30 | I performed time-dependent covariate analyses with Cox regression in R, but I wonder whether the interpretation is the same as for a Cox model with fixed covariates.
Context: in my analysis, the physical activity score index was measured 5 months after the lung cancer remission period and then 5 more months after that, i.e. 10 months after remission. Some people relapsed with lung cancer between 5 and 10 months after remission, and others did not. I created my dataframe with the time intervals so that the physical activity index score on the second line (i.e. at 10 months) changes. My baseline is 5 months, so the start time `t` is 0 days.
Here is the HR that I obtained with the score index as a continuous variable: 0.65 (0.80-1.30)
Despite my research, I don't know whether I can interpret it as in a classic Cox model or not.
| interpreting HR with time dependent covariates cox regression | CC BY-SA 4.0 | null | 2023-04-03T16:15:18.843 | 2023-04-03T20:38:02.803 | 2023-04-03T17:15:56.387 | 28500 | 380006 | [
"interpretation",
"cox-model",
"hazard",
"time-varying-covariate",
"proportional-hazards"
] |
611698 | 1 | null | null | 0 | 18 | Question: Is there a known, exact expression for the Bayes factor between two multivariate normal hypotheses?
Let $H_1$ and $H_2$ be two subsets of $R^d$ with normal priors $\pi(\mu|H_j)$. The sets $H_j$ represent two hypotheses for the unknown mean $\mu$.
The Bayes factor is defined as $BF=P(X|H_1)/P(X|H_2)$, where $X=(X_1,\ldots,X_n)$ is an iid sample from a multivariate normal $N(\mu,\Sigma)$. Each $X_i$ is $d$-dimensional. In this expression,
$$
P(X|H_j)
= \int P(X|\mu,H_j)\pi(\mu|H_j)\,d\mu.
$$
A common example of $H_j$ is $H_1 : \mu_1=0$ and $H_2=R^d$, so we are testing if the first coordinate of the mean is zero. It is clear that this is just the integral of a normal PDF, so it can be calculated analytically.
In principle I could do this myself, but it seems quite tedious, and I just need to know the eventual formula. I have not been able to find a reference for this, which is surprising given how standard this is.
Clarification: For general $H_j$, this is probably intractable, but I only need the formula when $H_j$ are defined by coordinate zeroes as in the example.
| Exact computation of Bayes factor for multivariate normal | CC BY-SA 4.0 | null | 2023-04-03T16:21:28.993 | 2023-04-03T16:25:03.787 | 2023-04-03T16:25:03.787 | 384846 | 384846 | [
"bayesian",
"references",
"model-selection",
"computational-statistics",
"bic"
] |
611699 | 1 | 611784 | null | 1 | 36 | I know that P(Y=y|do(X=x)) is different from P(Y=y|X=x) in that the former is an interventional probability where the intervention is applied to the entire population, whereas the latter is a straightforward conditional probability where we simply condition on X taking on value x. But how would we read something like P(Y=y|do(X=x), M=m)? Would it be read as the probability of "Y taking on value y given that we intervene on X (with respect to Y) to set X=x and given that we restrict the population to where M takes on value M=m"?
If this were true, we could, conceptually speaking, first intervene on the entire population to set X=x and then restrict the population to M=m, or the other way around, i.e., first restrict the population to M=m and then intervene on this sub-set to set X=x. Is this true?
| What do we exactly mean by P(Y=y|do(X=x), M=m)? | CC BY-SA 4.0 | null | 2023-04-03T16:29:51.567 | 2023-04-04T09:08:46.637 | null | null | 384845 | [
"conditional-probability",
"causality",
"intervention-analysis"
] |
611700 | 1 | null | null | 2 | 132 | I ran the linear regression model below, and using the performance package in R I checked whether the distribution of the residuals is normal. The performance package suggests I should be using a Cauchy distribution for the errors. Searching stats.stackexchange and Google, it isn't clear how to do this. How can I model the data below using a Cauchy distribution for the errors?
```
library(performance)
c(7L, 50L, 12L, 20L, 6L, 12L, 30L, 3L, 21L, 43L, 42L, 35L, 18L, 6L, 23L, 16L, 8L, 43L, 10L, 24L, 19L, 30L, 13L, 9L, 6L, 17L, 46L, 14L, 8L, 25L, 16L, 9L, 28L, 11L, 3L, 28L, 38L, 37L, 6L, 25L, 27L, 24L, 5L, 1L, 9L, 4L, 14L, 22L, 0L, 11L, 17L, 1L, 5L, 37L, 52L, 16L, 2L, 0L, 12L, 13L, 2L, 16L, 8L, 2L, 3L, 15L, 23L, 24L, 1L, 18L, 17L, 18L, 3L, 40L, 2L, 32L, 24L, 17L, 1L, 2L, 3L, 30L, 17L, 5L, 33L, 15L, 19L, 20L, 3L, 0L, 2L, 2L, 8L, 18L, 7L, 3L, 18L, 0L, 17L, 20L) -> dependent.var
c(4.66666666666667, 75, 28, 6, 1.83, 38.36, 80, 0, 14, 107, 137, 94.75, 36, 10.8666666666667, 44, 27, 32, 86, 52.8333333333333, 108, 76.5, 54, 26, 23.75, 11.75, 33.2133333333333, 100, 58, 50, 94, 32.25, 16, 33.75, 29.25, 7.75, 100, 98, 58.45, 4.58, 56, 59, 73.4166666666667, 6.16666666666667, 1, 53.79, 41.95, 43.25, 70.5, 0, 10, 3.25, 0, 14, 98, 112, 35, 0.25, 16.25, 30.83, 68, 1.25, 30.25, 13.25, 11.1, 1.5, 41, 45.17, 40, 6, 52.8566666666666, 43, 41, 3, 131, 0, 45.67, 74, 25.4166666666667, 0.25, 4.75, 14.58, 2.75694444444444, 32, 0, 92.25, 34, 66, 14, 1.75, 1.5, 1, 21.53, 4.08333333333333, 44.07, 55.9, 12, 20, 12.5, 48.1333333333333, 24.03) -> independent.var
lm(dependent.var ~ independent.var) -> model
check_distribution(model)
# # Distribution of Model Family
#
# Predicted Distribution of Residuals
#
# Distribution Probability
# cauchy 53%
# normal 41%
# chi 6%
#
# Predicted Distribution of Response
#
# Distribution Probability
# neg. binomial (zero-infl.) 59%
# beta-binomial 34%
# half-cauchy 3%
```
| Linear regression with Cauchy distribution for errors | CC BY-SA 4.0 | null | 2023-04-03T16:29:54.957 | 2023-04-07T22:13:41.953 | null | null | 12492 | [
"r",
"regression",
"linear-model",
"residuals",
"cauchy-distribution"
] |
611703 | 1 | null | null | 0 | 28 | To visualize p-values or confidence intervals, the t-distribution is sometimes rescaled using the sample standard deviation and then centered at a certain value.
To be more specific, consider drawing a random sample of size $n$ from a normal distribution with mean $\mu$. The random variables $M$ and $S$ denote the sample mean and sample standard deviation. Then $T=\frac{(M-\mu)\sqrt{n}}{S}$ has a t-distribution with $n-1$ degrees of freedom.
Let $m$ and $s$ denote the observed sample mean and standard deviation, and let $\mu_{L}$ and $\mu_U$ denote the limits of a 95% confidence interval for $\mu$. The confidence limits can be visualized by rescaling and relocating the t-distribution via $\frac{Ts}{\sqrt{n}}+\mu_L$ and $\frac{Ts}{\sqrt{n}}+\mu_U$.
Below is an example with $n=10$, $m \approx -0.46$ (solid vertical line), $s \approx 1.00$, $\mu_L\approx-1.18$ (left vertical dashed line), $\mu_U\approx0.25$ (right vertical dashed line). The area under the left curve above the vertical solid line is $0.025$, as is the area under the right curve below the vertical solid line.
Do these distributions have any meaningful interpretation?
[](https://i.stack.imgur.com/ICYx1.png)
(My intuition says no, and it might be more appropriate to stay in the "standardized mean difference space" when visualizing this, since then you could say "assuming $\mu=\mu_0$, this is how the quantity $\frac{\bar{x}-\mu}{s}$ would be distributed in the long run, etc.)
Code to reproduce the figure:
```
set.seed(12)
n <- 10
x <- rnorm(n)
m <- mean(x)
s <- sd(x)
print(m)
print(s)
ci <- m + qt(c(0.025, 0.975), n-1) * s/sqrt(n)
xvec <- seq(-4, 4, 0.05)
dens <- dt(xvec, n-1) * sqrt(n)/s
lower_xvec <- xvec*s/sqrt(n) + ci[1]
upper_xvec <- xvec*s/sqrt(n) + ci[2]
par(mfrow=c(1,1))
plot(0, 0, type="n", xlim=c(-1,1)*max(abs(lower_xvec), abs(upper_xvec)), ylim=(c(0, max(dens))), xlab="", ylab="")
abline(v=m)
abline(v=ci[1], lty="dashed")
abline(v=ci[2], lty="dashed")
lines(lower_xvec, dens)
lines(upper_xvec, dens)
```
| Any meaningful interpretation of the t-distribution when rescaled using the sample SD? | CC BY-SA 4.0 | null | 2023-04-03T17:14:46.817 | 2023-04-03T20:00:32.267 | 2023-04-03T20:00:32.267 | 349912 | 349912 | [
"mathematical-statistics",
"confidence-interval",
"p-value",
"t-distribution",
"frequentist"
] |
611704 | 1 | null | null | 0 | 6 | Generalised linear models assume the response to belong to exponential dispersion model family with density (or probability mass function) of the form
$$
p(y \mid \theta, \phi) = \exp \left \{ \frac{y \theta - b(\theta)}{a(\phi)} + c(y, \phi)\right \} \,.
$$
Every resource I've consulted invariably assumes $a(\phi)$ to have the form $\phi / w$ for some known weight $w$. Indeed, this can be verified for all the common distributions that are usually encountered, e.g. normal, Poisson, binomial. This raises the question: are there any instances of GLMs where this is not the case? If so, can you provide an example and show how the likelihood equations are affected? If not, why not adjust the model to reflect this in the first place, i.e.
$$
p(y \mid \theta, \phi) = \exp \left \{ \frac{w(y \theta - b(\theta))}{\phi} + c(y, \phi)\right \} \,.
$$
in place of the vacuously general $a(\phi)$?
| General form of the dispersion function for GLMs | CC BY-SA 4.0 | null | 2023-04-03T17:19:30.197 | 2023-04-03T17:19:30.197 | null | null | 304924 | [
"generalized-linear-model",
"exponential-family"
] |
611705 | 1 | null | null | 0 | 14 | I have a dataset with around 1M records, but in real life there should be more. After a first processing step for a specific task, I got 50K records. Then I processed those again for a specific attribute value (continuous) and got around 7K records. I was thinking of a one-sample t-test, considering the population is unknown. Is that okay?
Also, what should my sample size be in this case? All 7K records (that I finally had for the particular attribute), or 1000 of them?
(I read in some online sources that when the population is large, it should still be around 1000)[https://tools4dev.org/resources/how-to-choose-a-sample-size/](https://tools4dev.org/resources/how-to-choose-a-sample-size/)
[https://www.qualtrics.com/experience-management/research/determine-sample-size/](https://www.qualtrics.com/experience-management/research/determine-sample-size/)
I intend to test whether the population mean is greater than the sample mean for that particular attribute.
I am a newbie in this field and would appreciate any kind of help from the community. TIA
| What should be the significance test when population size is unknown(know that it's more than 1M) and sample size is greater than 30? | CC BY-SA 4.0 | null | 2023-04-03T17:32:32.103 | 2023-04-03T17:32:32.103 | null | null | 383118 | [
"statistical-significance",
"t-test",
"sample-size",
"population"
] |
611706 | 1 | null | null | 1 | 22 | I have been handed interval-censored data, where left censoring is the limit of detection and right censoring is saturation of the assay. How do I estimate the means and the standard errors of the means for these data?
| mean and MSE of Interval censored data | CC BY-SA 4.0 | null | 2023-04-03T17:47:16.387 | 2023-04-03T17:47:16.387 | null | null | 28141 | [
"censoring",
"interval-censoring"
] |
611707 | 1 | null | null | 8 | 883 | Is there a known symmetric distribution with a finite 1st moment but undefined or infinite moments of order greater than 1?
| An example of a SYMMETRIC distribution with finite mean but infinite/undefined variance? | CC BY-SA 4.0 | null | 2023-04-03T17:48:43.347 | 2023-04-04T07:26:20.060 | 2023-04-03T17:55:12.473 | 200268 | 200268 | [
"distributions"
] |
611708 | 1 | null | null | 0 | 22 | Say you're fitting a generalized linear model where the response variable is weight and there are several factors: height, sex, vegetarian, country where the person lives, etc. Now say you expect there to be an interaction between height and country. Does the way you account for this in the model depend on how many levels there are for the country factor? For instance, if you only took measurements for people from three different countries, would you account for it as an interaction term, whereas if you took measurements for people from 30 different countries, you'd account for it as a random intercept?
| Are random effects just interaction factors with many levels? | CC BY-SA 4.0 | null | 2023-04-03T17:51:34.953 | 2023-04-03T17:51:34.953 | null | null | 176182 | [
"regression",
"mixed-model",
"generalized-linear-model",
"interaction"
] |
611709 | 2 | null | 611707 | 19 | null | A t-distribution with a small degrees of freedom parameter satisfies your requirements. While having one degree of freedom results in a Cauchy distribution that even lacks an expected value, $t_{\nu}$ for $\nu\in(1,2]$ has a mean of zero but infinite variance while also being symmetric about its mean.
| null | CC BY-SA 4.0 | null | 2023-04-03T17:53:30.317 | 2023-04-03T17:53:30.317 | null | null | 247274 | null |
611710 | 2 | null | 349741 | 0 | null | If you expand your horizons, you can see that there are several more distributions which can have left tails. For example, the 4-parameter Stable, BetaPert, Weibull, Triangle, Beta (mentioned above), and Generalized Extreme Value (GEV). The GEV distribution shown is also known as the smallest value or minimum value extreme distribution.
[](https://i.stack.imgur.com/pwLRK.jpg)
| null | CC BY-SA 4.0 | null | 2023-04-03T18:13:18.147 | 2023-04-04T03:08:24.170 | 2023-04-04T03:08:24.170 | 377184 | 377184 | null |
611711 | 1 | null | null | 0 | 29 | I recently read a paper in which the authors ran a survey experiment that had 4 groups, including a control group. The author writes the following: "The treatment about better responding to economic crises has no systematic effect on exchange rate policy preferences. In addition, the estimated effect from T1 is larger than T2 (F-test βT1 > βT2, p-value = .06)."
What I don't understand is, why would the authors use an F-test instead of a one-tailed t-test? Is the result going to be the same? If not, what are some advantages on doing this? Lastly, when they say F-test, does that mean ANOVA?
| Can the F-test be used when comparing 2 groups? Why not use the t-test? | CC BY-SA 4.0 | null | 2023-04-03T18:20:12.513 | 2023-04-03T18:59:57.120 | 2023-04-03T18:52:40.443 | 7290 | 355204 | [
"t-test",
"f-test"
] |
611713 | 1 | null | null | 1 | 34 | I was wondering how you would find the Yule-Walker equations for an AR(2) (or, really, any AR(P)) stationary time-series model, where there is a "missing" term. For example:
$$ X_t - \phi\frac{1}{4}X_{t-2} = \epsilon_t $$
Where $\epsilon_t \sim WN(0,\sigma^2)$. The "missing" term is the $X_{t-1}$ term.
Writing the Yule-Walker, my attempt was:
\begin{align*}
\gamma(0) - \phi\frac{1}{4}\gamma(2) &= \sigma^2 \\\\
\gamma(1) - \phi\frac{1}{4}\gamma(1) &= 0 \\\\
\gamma(2) - \phi\frac{1}{4}\gamma(0) &= 0
\end{align*}
But I was unsure if this was correct or not, since we don't end up with an n x n matrix (and our book does not provide an example of anything different - undergrad introduction to time-series).
| Writing Yule-Walker Equations for AR(2) with Missing Term | CC BY-SA 4.0 | null | 2023-04-03T18:48:41.227 | 2023-04-03T18:59:38.860 | 2023-04-03T18:59:38.860 | 53690 | 384854 | [
"time-series",
"estimation",
"autoregressive"
] |
611714 | 1 | null | null | 0 | 29 | I have made stock price predictions using two different valuation models, the Dividend Discount Model and the Discounted Free Cash Flow Model.
I have made these predictions on two different stock markets, the Swedish and the Danish market.
I have used data for 2010-2017 and thereby made a prediction on the stock price for 2018 for each of the companies on the two markets.
There are two things that I want to test:
- Can any of the valuation models predict stock prices?
- Is one of the models better at predicting stock prices on either the Danish or Swedish market?
For my first test, I have considered using simple linear regression to test each of the two valuation models against the actual stock price. However, I am unsure whether this is an appropriate method, or whether there exists a better way to test this.
For my second test, I have considered using a two-way ANOVA; however, I am also unsure whether this is appropriate.
Do you have any suggestions for which statistical model to use for the two things I want to test?
| How to compare stock price prediction with actual stock price? | CC BY-SA 4.0 | null | 2023-04-03T18:59:30.023 | 2023-04-03T23:58:12.743 | 2023-04-03T19:26:03.323 | 384857 | 384857 | [
"regression",
"forecasting",
"predictive-models"
] |
611715 | 2 | null | 611711 | 1 | null |
- The "F-test" and the "ANOVA test" can be considered synonyms. (There are also the ANOVA model and the ANOVA table, for example.)
- You can conduct an F-test on just 2 groups. The F-statistic will be the square of the two-sided t-statistic. The p-values from the two methods should be identical, although it's possible that there will be rounding differences at enough decimal places down due to different computational algorithms that the software uses.
- The F-test does not allow for a directional test in the sense that a one-tailed t-test does. So if there are only 2 groups and a directional test is desired, a t-test would make more sense.
- I find the author's quoted phrasing a little awkward, but they seem to be saying that there is no effect, since p=.06 (i.e., >.05). If so, that is a (common) fallacy. You cannot conclude there is no effect just because the result is not statistically significant.
| null | CC BY-SA 4.0 | null | 2023-04-03T18:59:57.120 | 2023-04-03T18:59:57.120 | null | null | 7290 | null |
611716 | 2 | null | 611697 | 0 | null | A big difficulty here is that even a "classical" Cox analysis with all predictor values specified at `time = 0` makes a very specific assumption about the association between predictor variables and outcome. It assumes that the only thing that matters is the current values of the predictor variables at each event time.
To fit the model, at each event time the values of the predictor variables of the case having the event are compared against those of all the cases at risk. There is nothing included about the past history of a predictor variable, unless that past history is somehow incorporated into a new predictor variable. If a predictor's value should change from its initial value during the course of the study, then the model will use an incorrect value for it in calculations at all subsequent event times.
To answer your question most directly: insofar as those assumptions of the Cox model are met, then an interpretation of a hazard ratio with respect to current predictor values is valid.
The problem is that situations with time-varying covariates and data sets like yours often don't meet those assumptions.
When time-varying covariates are included in a model, difficulties are compounded. The fact that you have a predictor value available at some time means that you already know that the individual is alive at that time. In your case, I suspect that there were deaths, not just relapses, before 10 months.
In your data, if there were any large changes in the activity index between 5 and 10 months, then it's likely that the value was different from the value at 5 months for a good deal of the intervening time. Thus all calculations based on events during that time period between 5 and 10 months could be erroneous, not just those for the cases with incorrect index values. Similarly, if the activity index is changing over time, it's likely that many calculations based on events after 10 months are also in error if they use the values at 10 months. There are ways to do joint modeling of covariate values along with survival outcomes, but I don't know how well they would work with only 2 time points for your index.
There's also a problem with the direction of causality. If someone at 5 months was feeling ill due to an impending but not yet clinically detectable relapse, such an individual would probably score low on the physical activity score. In that situation, the association of a low physical activity score with faster relapse might be due to the clinical biology leading to the relapse, rather than the other way around.
Thus it will be difficult to give a reliable interpretation of the model results. It would have to be very carefully stated, something like "the hazard associated with the most recently observed activity index was..." Even that type of statement would not deal with potential miscalculations due to changes in the index during intermediate times, or the problem of the direction of causality.
| null | CC BY-SA 4.0 | null | 2023-04-03T19:04:02.297 | 2023-04-03T20:38:02.803 | 2023-04-03T20:38:02.803 | 28500 | 28500 | null |
611717 | 1 | null | null | 1 | 88 | Cross-posting from [stackoverflow](https://stackoverflow.com/questions/75922590/mixed-models-in-nlme-package-using-lme-nested-data-structure-error-in-mo)
I am working with a dataset having the following structure (download data [here](https://drive.google.com/file/d/1VD-efm10TlAzd1xnsybhrE-qX2ODQb5K/view?usp=share_link)).
[](https://i.stack.imgur.com/fpWrg.png)
The variable resp is a physiological response measured once for every subject during the study. In the study, each subject was observed for 5 days at 10 time points. The variables vala, valb, valc, vald are the observed values that indicate the percentage of time a subject spent performing a particular activity (I am not using the activity variables directly, but a transformed version of them; the transformation is performed using the R package compositions). Grp (5 groups), sbjt (10 subjects), dy (5 days), tm (10 time points) are factors.
```
library(compositions) # for the function ilr()
sim[10:12] <- ilr(sim[6:9]) # transforming the activity variables
```
I am interested in fitting a mixed-effects model with grp as fixed effect (because I am specifically interested in the groups that I have selected for the participants) and sbjt, dy, tm as random effects (because I am not specifically interested in the subjects that I have chosen and wish to generalize to the study population; day and time are random because different subjects were observed at different days and different times). The times are nested within days, which in turn, are nested within subjects (tm within dy within sbjt). I believe the model is appropriate for understanding the amount of variance my activity variables and random effect variables are contributing to my resp variable.
In nlme::lme(), I have specified my model as follows with subject autocorrelation:
```
library(nlme) # for the function lme()
model <- lme(resp ~ V1 + V2 + V3 + grp, random = ~ 1 | Sbjt/dy/tm, correlation = corAR1(), method = "REML", data = sim) # V1, V2, V3 are the transformed activity variables
# Error in solve.default(estimates[dimE[1] - (p:1), dimE[2] - (p:1), drop = FALSE]) :
system is computationally singular: reciprocal condition number = 1.24056e-17
```
I have the following questions about my model:
- Is it possible to fit a mixed-effects model with only 10 response values (another post seemed to suggest that running a mixed-model is not appropriate in such situations)? If possible, how do I get the above model to run? If not, what other models should I use instead?
- Could someone explain what the above error is telling me and how to correct it?
- If the 3-level nesting in my model is too complex, how do I simplify it so that my model runs?
I am also open to any other suggestions about the best way to model my data in nlme::lme() since I would like to specify subject autocorrelation. Thank you for your time and help! I greatly appreciate it!
| mixed-models in "nlme" package using lme() - nested data structure - error in model fitting - system is computationally singular | CC BY-SA 4.0 | null | 2023-04-03T19:04:59.883 | 2023-04-04T21:43:51.953 | null | null | 374417 | [
"mixed-model",
"lme4-nlme",
"autocorrelation",
"nested-data"
] |
611718 | 2 | null | 611700 | 2 | null | I don't really like this approach to determining the model family; I think the family should be set a priori rather than chosen by the machine-learning procedure this package uses, which cites no methodological papers. But that isn't the question.
You could use the function [heavy::heavyLm()](https://rdrr.io/cran/heavy/man/heavyLm.html) with a t-distribution family and manually set the degrees of freedom to 1. (Note that this package is not on CRAN: [https://github.com/faosorios/heavy](https://github.com/faosorios/heavy))
Update: apparently the package also has a built-in Cauchy family for `heavyLm()`, so you could use that as well. The two should work out the same (to a meaningful level of precision, anyway).
| null | CC BY-SA 4.0 | null | 2023-04-03T19:05:09.487 | 2023-04-04T12:31:27.953 | 2023-04-04T12:31:27.953 | 288048 | 288048 | null |
611719 | 1 | null | null | 1 | 63 | A comment to [this](https://stats.stackexchange.com/q/611700/247274) question suggests that the OLS estimate of linear model parameters is unbiased, even when the error term is Cauchy. Given that Cauchy distributions lack an expected value, I am skeptical that the parameter estimates have an expected value (though I am on board with their expected value being the true parameter values if they do have an expected value).
Do the OLS parameter estimates even have an expected value when the error term is Cauchy? If so, is the bias equal to zero?
$$
y_i = \beta_0 + \beta_1x_{i, 1} + \dots + \beta_px_{i, p} +\epsilon_i\\
\epsilon_1,\dots,\epsilon_n\overset{iid}{\sim}t_1\\
\implies\\
\mathbb E\left[
(X^TX)^{-1}X^Ty
\right] = \beta\\?
$$
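For intuition, a simulation sketch (an illustration of the setup above, not a proof either way): $\hat\beta - \beta = (X^TX)^{-1}X^T\epsilon$ is a fixed linear combination of iid Cauchy variables and hence itself Cauchy-distributed, so the simulated slope estimates are median-centered at the truth but wildly heavy-tailed:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, beta0, beta1 = 50, 5000, 1.0, 2.0
x = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])

estimates = np.empty(reps)
for r in range(reps):
    eps = rng.standard_cauchy(n)               # t_1 errors
    y = beta0 + beta1 * x + eps
    bhat, *_ = np.linalg.lstsq(X, y, rcond=None)
    estimates[r] = bhat[1]

# Median-centered at the true slope, but with extreme outliers,
# as expected for a Cauchy sampling distribution (no finite mean).
print(np.median(estimates), np.max(np.abs(estimates - beta1)))
```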
| OLS with $iid$ Cauchy errors still unbiased? | CC BY-SA 4.0 | null | 2023-04-03T19:19:20.957 | 2023-04-03T19:19:20.957 | null | null | 247274 | [
"regression",
"least-squares",
"linear",
"bias",
"cauchy-distribution"
] |
611720 | 1 | null | null | 2 | 37 | I am running a logistic regression. Treatment is a factor with 3 levels. I am assuming that the intercept represents one of these 3 levels (the negative control in this case). Is there a reason the equation is splitting the predictor variable like this?
```
Call:
glm(formula = propgfp ~ treatment, family = quasibinomial, data = frass4glm)
Deviance Residuals:
Min 1Q Median 3Q Max
-2.7631 -0.0001 0.6031 0.6964 0.8529
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -19.87 3390.23 -0.006 0.995
treatmentFrass 22.64 3390.23 0.007 0.995
treatmentPositive 22.64 3390.23 0.007 0.995
(Dispersion parameter for quasibinomial family taken to be 1.324805)
Null deviance: 100.322 on 34 degrees of freedom
Residual deviance: 25.863 on 32 degrees of freedom
AIC: NA
```
| Why is my predictor variable being split up in a logistic regression? | CC BY-SA 4.0 | null | 2023-04-03T19:28:25.193 | 2023-04-03T19:34:57.963 | null | null | 384860 | [
"r",
"logistic"
] |
611721 | 2 | null | 611720 | 1 | null | Your variable has three levels. One is subsumed by the intercept. One is the binary variable `treatmentFrass` that is $1$ when the treatment is `Frass` and zero otherwise. One is the binary `treatmentPositive` that is $1$ when the treatment is `Positive` and zero otherwise. This way, only the intercept is active when the treatment is neither `Frass` nor `Positive`; the parameter on `treatmentFrass` gives you the estimated difference in the outcome between the omitted category (that is subsumed by the intercept) and the `Frass` group; and the parameter on `treatmentPositive` gives you the estimated difference in the outcome between the omitted category (that is subsumed by the intercept) and the `Positive` group.
This is the same reason a three-level factor in an ANOVA regression is split into two indicator variables, with the third level subsumed by the intercept.
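The dummy expansion described above can be inspected directly. A minimal sketch (Python/pandas for illustration; in R, `model.matrix()` on the fitted model shows the same thing):

```python
import pandas as pd

treatment = pd.Categorical(
    ["Negative", "Frass", "Positive", "Frass", "Negative"],
    categories=["Negative", "Frass", "Positive"],  # first level = reference
)
# drop_first=True mimics R's treatment contrasts: the reference level
# ("Negative") is absorbed into the intercept, leaving one 0/1 column
# per remaining level.
dummies = pd.get_dummies(treatment, drop_first=True)
print(dummies)
```

The "Negative" rows are all zeros, so only the intercept is active for them, exactly as in the regression output above.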
| null | CC BY-SA 4.0 | null | 2023-04-03T19:34:57.963 | 2023-04-03T19:34:57.963 | null | null | 247274 | null |
611722 | 2 | null | 601523 | 1 | null | According to [Wikipedia](http://en.wikipedia.org/wiki/Concomitant_(statistics)): "In statistics, the concept of a concomitant, also called the induced order statistic, arises when one sorts the members of a random sample according to corresponding values of another random sample.
"Let $(X_i, Y_i), i = 1, . . ., n$ be a random sample from a bivariate distribution. If the sample is ordered by the $X_i$, then the $Y$-variate associated with $X_{r:n}$ will be denoted by $Y_{[r:n]}$ and termed the concomitant of the $r$th order statistic."
| null | CC BY-SA 4.0 | null | 2023-04-03T19:51:10.750 | 2023-04-03T19:51:10.750 | null | null | 298128 | null |
611723 | 1 | null | null | 3 | 144 | I have a Markov kernel $Q$ from which I would like to generate proposals for the Metropolis-Hastings algorithm. The problem is: When the proposal is accepted, the "internal state" of $Q$ changes. This means that if proposals $y_1,\ldots,y_n$ are accepted, the internal state of $Q$ depends on $y_1,\ldots,y_n$. I know, this means that we cannot use $Q$ as a proposal kernel for the Metropolis-Hastings algorithm.
However, my simple solution to that problem is the following: Before the first sample is accepted, the state of $Q$ does not change. Now, I simply run the Metropolis-Hastings algorithm with proposal kernel $Q$ until the first proposal is accepted. Then I stop. Then I start the Metropolis-Hastings algorithm again, but with the different proposal kernel given by the modified kernel $Q$.
Is this process still guaranteed to work? Are the accepted samples distributed according to the target density after a sufficient long period of time?
EDIT:
I think we can describe the algorithm I've got in mind as follows:
- Let $E$ be the state space and $Q_k$ be a Markov kernel with source $E^k$ and target $E$
- Start with any $x_0\in E$
- Run Metropolis-Hastings with initial state $x_0$ and proposal kernel $Q_1$ for a single iteration
- Let $y_1\in E$ denote the proposed sample and $x_1\in E$ the state after the iteration (so $x_1=y_1$ if the proposal was accepted and $x_1=x_0$ otherwise)
- Now run Metropolis-Hastings with initial state $x_1$ and proposal kernel $Q_2(x_0,y_1,\;\cdot\;)$ (remark: I'm unsure whether it wouldn't be better to replace this with $Q_2(x_0,x_1,\;\cdot\;)$)
- and so on ...
It would be interesting to know whether - under certain assumptions on $Q_1,Q_2,\ldots$ - the samples $x_b,x_{b+1},\ldots$ are still distributed according to the target density for sufficiently large $b$.
## EDIT 2
You can assume that $Q_k(x_1,\ldots,x_k;\;\cdot\;)$ has density $$q_k(x_1,\ldots,x_k;\;\cdot\;):=e^{-\beta f_k(x_1,\ldots,x_k;\;\cdot\;)}$$ with respect to the Lebesgue measure on $[0,1)^d$, where $f_k(x_1,\ldots,x_k;\;\cdot\;)$ is nonnegative. Also: The $Q_i$ are constructed in a way so that at the last iteration $k_{\text{max}}$, we have $f_{k_{\text{max}}}(x_1,\ldots,x_{k_{\text{max}}})=0$. So, the sequence $Q_1,\ldots,Q_{k_{\text{max}}}$ somehow converges; maybe this is enough to show that everything works.
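To make the scheme concrete, here is a toy sketch of my own (symmetric Gaussian random-walk kernels standing in for $Q_1,Q_2,\ldots$, with the proposal scale advancing each time a proposal is accepted, and a standard-normal target). It only illustrates the mechanics; it does not settle the validity question:

```python
import numpy as np

rng = np.random.default_rng(42)

def target_logpdf(x):
    # toy target: standard normal (up to a constant)
    return -0.5 * x**2

def staged_mh(n_iter, scales):
    """Metropolis-Hastings where the symmetric proposal scale changes
    each time a proposal is accepted, mimicking a kernel whose internal
    state advances on acceptance."""
    x = 0.0
    k = 0                      # index of the current kernel Q_k
    chain = np.empty(n_iter)
    for i in range(n_iter):
        y = x + scales[min(k, len(scales) - 1)] * rng.standard_normal()
        # symmetric proposal, so the acceptance ratio is just pi(y)/pi(x)
        if np.log(rng.uniform()) < target_logpdf(y) - target_logpdf(x):
            x = y
            k += 1             # accepted: the kernel's internal state advances
        chain[i] = x
    return chain

chain = staged_mh(20000, scales=[2.0, 1.5, 1.0, 0.8])
print(chain.mean(), chain.std())   # should be near 0 and 1 for this toy case
```

Here the kernel stops changing after a few acceptances, so after that point the chain is an ordinary Metropolis-Hastings chain; the interesting question above is precisely what can be said during and after the adaptive phase in general.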
| Running Metropolis-Hastings algorithm with changing proposal kernel; each time the kernel is changing starting the algorithm afresh. Does it work? | CC BY-SA 4.0 | null | 2023-04-03T20:00:09.523 | 2023-05-07T15:23:29.057 | 2023-05-06T15:41:52.367 | 222528 | 222528 | [
"sampling",
"markov-chain-montecarlo",
"markov-process",
"metropolis-hastings"
] |
611724 | 1 | null | null | 3 | 60 | I have fitted a Bayesian GARCH(1,1) model with Student $t$ innovations to some time series data, $X_1,...,X_n$, and now want to estimate Value-at-Risk (VaR) (i.e., 5% quantiles) at each time $t=1,\ldots,n$. To do so, I'm using `bayesGARCH` in `R`, which returns an `mcmc` object from which I can obtain samples from the posterior distribution of the coefficients $\theta$, which in turn determine $\textrm{Var}(X_t)$.
More formally, I want to estimate the quantiles of a random variable $X_t$ which is constructed as follows:
$$ X_t = \sigma_t(\theta) Z_t $$
$$ \theta \sim F $$
$$ Z_t \sim \sqrt{\frac{\nu-2}{\nu}} ~ t_\nu, $$
where $F$ is unknown but I have $M$ samples from it. ($t_\nu$ is scaled so that $\textrm{Var}(Z_t) = 1$ and hence $\textrm{Var}(X_t) = \sigma_t^2$.) Given $\theta$, I can readily compute $\sigma_t$.
If $\sigma_t$ were fixed, then clearly $X_t \sim \sigma_t \sqrt{\frac{\nu-2}{\nu}} t_\nu$. The stochasticity of $\sigma_t$ complicates estimating the quantiles of $X_t$.
I was thinking of 2 approaches:
- For $j=1,...,M$, simulate $X_j \sim \sigma_t(\theta_j) \sqrt{\frac{\nu-2}{\nu}} t_\nu$ (perhaps multiple times). Then, take $\widehat{\textrm{VaR}}$ as the empirical 5% quantile of $X_1,...X_M$.
- For $j=1,...,M$, let $\widehat{\textrm{VaR}}_j$ be the 5% quantile of $\sigma_t(\theta_j) \sqrt{\frac{\nu-2}{\nu}} t_\nu$. Then, take $\widehat{\textrm{VaR}} = \frac{1}{M} \sum_j \widehat{\textrm{VaR}}_j$.
My question is: which of the two approaches are valid? Does there exist a more appropriate method?
To me, approach (1) seems reasonable, yet perhaps inefficient. Approach (2) might be invalid, but I couldn't say why. Indeed, these two approaches yield quite different results, especially for more extreme quantiles.
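A toy sketch of the two approaches (my own illustration: a lognormal stand-in for the posterior draws of $\sigma_t(\theta_j)$, which in practice would come from the `bayesGARCH` MCMC output):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nu = 5.0
scale_t = np.sqrt((nu - 2.0) / nu)   # so that Var(Z_t) = 1

# Hypothetical stand-in for M posterior draws of sigma_t(theta_j)
M = 5000
sigmas = rng.lognormal(mean=0.0, sigma=0.2, size=M)

# Approach 1: simulate X_j = sigma_j * Z_j, take the empirical 5% quantile
z = rng.standard_t(nu, size=M) * scale_t
var1 = np.quantile(sigmas * z, 0.05)

# Approach 2: average the per-draw 5% quantiles
q05 = stats.t.ppf(0.05, df=nu) * scale_t
var2 = np.mean(sigmas * q05)

print(var1, var2)   # the two estimates differ
```

Approach (1) targets the quantile of the posterior-predictive mixture, while approach (2) averages conditional quantiles; the gap between them grows with the posterior spread of $\sigma_t$ and with how extreme the quantile is.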
| VaR from Bayesian GARCH / Quantile Estimation | CC BY-SA 4.0 | null | 2023-04-03T20:03:23.810 | 2023-04-03T20:05:11.703 | 2023-04-03T20:05:11.703 | 360713 | 360713 | [
"bayesian",
"stochastic-processes",
"garch",
"quantiles",
"volatility"
] |
611725 | 2 | null | 594595 | 1 | null | In the interest of having an answer to your question, here's a derivation of an expression for the mean residual life in terms of an incomplete gamma function.
Use a substitution $u = (\frac{t}{\sigma})^a$. Then $du = \frac{a}{\sigma} (\frac{t}{\sigma})^{a-1}dt$.
Since $\frac{t}{\sigma} = u^{1/a}$, this gives us $du = \frac{a}{\sigma} (u^{1/a})^{a-1}dt = \frac{a}{\sigma} u^{1-1/a}dt$, so that $dt = \frac{\sigma}{a} u^{1/a-1} du$. Then
\begin{align*}mrl(x) &= e^{{(x/\sigma)}^a} \int_{(x/\sigma)^a}^{\infty} \frac{\sigma}{a} u^{1/a-1} e^{-u} \, du \\
&= \frac{\sigma}{a} e^{{(x/\sigma)}^a} \Gamma\left(\frac{1}{a}, \left(\frac{x}{\sigma}\right)^a \right),
\end{align*}
where $\Gamma(s,x)$ is the [upper incomplete gamma function](https://en.wikipedia.org/wiki/Incomplete_gamma_function).
This can be rewritten slightly by using a [recurrence property](https://en.wikipedia.org/wiki/Incomplete_gamma_function#Properties) of the upper incomplete gamma function:
\begin{align*}
mrl(x) &= \sigma e^{{(x/\sigma)}^a} \left(\Gamma\left(1 + \frac{1}{a}, \left(\frac{x}{\sigma}\right)^a \right) - \left(\left(\frac{x}{\sigma}\right)^a\right)^{1/a} e^{-{(x/\sigma)}^a} \right) \\
&= \sigma e^{{(x/\sigma)}^a} \Gamma\left(1 + \frac{1}{a}, \left(\frac{x}{\sigma}\right)^a \right) - x.
\end{align*}
The reader can decide which is simpler.
Neither of these expressions is closed-form in the usual sense, as noted in a comment above. However, incomplete gamma functions are [implemented in some programming languages like Python](https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.gammaincc.html#scipy.special.gammaincc). Thus if your interest is numerical you could evaluate the MRL using one of these two expressions and calling that special function rather than using a more general numerical integration procedure.
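A numerical sanity check of the first expression (a Python sketch; note that SciPy's `gammaincc` is the regularized upper incomplete gamma, so it must be multiplied by `gamma(1/a)` to recover $\Gamma(1/a,\cdot)$):

```python
import numpy as np
from scipy.special import gamma, gammaincc
from scipy.integrate import quad

def mrl_closed(x, a, sigma):
    # (sigma/a) * exp((x/sigma)^a) * Gamma(1/a, (x/sigma)^a)
    z = (x / sigma) ** a
    return (sigma / a) * np.exp(z) * gamma(1.0 / a) * gammaincc(1.0 / a, z)

def mrl_numeric(x, a, sigma):
    # mrl(x) = ∫_x^∞ S(t) dt / S(x) for the Weibull survival function S
    surv = lambda t: np.exp(-((t / sigma) ** a))
    integral, _ = quad(surv, x, np.inf)
    return integral / surv(x)

a, sigma, x = 1.7, 2.0, 1.3
print(mrl_closed(x, a, sigma), mrl_numeric(x, a, sigma))
# At x = 0 the MRL reduces to the Weibull mean, sigma * Gamma(1 + 1/a)
print(mrl_closed(0.0, a, sigma), sigma * gamma(1.0 + 1.0 / a))
```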
| null | CC BY-SA 4.0 | null | 2023-04-03T20:05:55.393 | 2023-04-03T20:05:55.393 | null | null | 2195 | null |
611727 | 1 | null | null | 0 | 18 | I have allele frequencies from two independent studies
|Study |Number Sample |Allele 1 Freq |Allele 2 Freq |Allele 3 Freq |
|-----|-------------|-------------|-------------|--------------|
|A |20 |.45 |.475 |.075 |
|B |34 |.72 |.28 |NA |
The weighted average of each allele frequency is as follows
```
Allele 1 Freq avg: .62
Allele 2 Freq avg: .355
Allele 3 Freq avg: .075
```
The estimated diplotype frequency using hardy Weinberg equilibrium is:
```
Allele 1/Allele 2: 2 * .62 * .355 = 0.4402
Allele 1/Allele 3: 2 * .62 * .075 = 0.093
Allele 2/Allele 3: 2 * .355 * .075 = 0.05325
```
Lets say that I know that:
- Allele 1/Allele 2 and Allele 1/Allele 3 are the only known diplotypes that can produce phenotype x
- Allele 2/Allele 3 is the only diplotype known to produce phenotype y
Thus I estimate that the frequency of `phenotype x` is `0.4402 + 0.093 = 0.5332` and the frequency of `phenotype y` is `0.05325`.
What I wonder is if I can compute the standard deviation of the phenotype estimates. Usually I would estimate this by `√[p(1-p)/n]` where `n` is sample size and `p` is the population proportion, but I think it is a little weird here because of the missing data, not sure what value to use for `n` or if I can even do this. Any help is appreciated.
| Estimated standard deviation of a phenotype prevalence that is derived from several studies | CC BY-SA 4.0 | null | 2023-04-03T20:24:48.553 | 2023-04-08T20:02:10.260 | null | null | 302882 | [
"standard-deviation",
"bioinformatics",
"genetics"
] |
611728 | 2 | null | 606996 | 0 | null | It is, so long as a useful $R^2$ metric can be chosen. One such metric is the deviance $R^2$ (Cameron & Windmeijer, 1996).
I provide an example with the recommended metric below.
```
> MASS::glm.nb(cyl ~ am + drat + wt, data = mtcars)
Call: MASS::glm.nb(formula = cyl ~ am + drat + wt, data = mtcars, init.theta = 575132.5129,
link = log)
Coefficients:
(Intercept) am drat wt
2.04291 0.09733 -0.23111 0.16930
Degrees of Freedom: 31 Total (i.e. Null); 28 Residual
Null Deviance: 16.57
Residual Deviance: 5.988 AIC: 132.6
Warning messages:
1: In theta.ml(Y, mu, sum(w), w, limit = control$maxit, trace = control$trace > :
iteration limit reached
2: In theta.ml(Y, mu, sum(w), w, limit = control$maxit, trace = control$trace > :
iteration limit reached
> domir::domir(
cyl ~ am + drat + wt,
\(fml) {
MASS::glm.nb(fml, data = mtcars) |>
performance::r2_kullback(adjust = FALSE)
}
)
Overall Value: 0.6387408
General Dominance Values:
General Dominance Standardized Ranks
am 0.09366876 0.1466460 3
drat 0.22793619 0.3568524 2
wt 0.31713586 0.4965016 1
Conditional Dominance Values:
Subset Size: 1 Subset Size: 2 Subset Size: 3
am 0.2694899 0.0006728092 0.01084360
drat 0.4824463 0.1349402416 0.06642206
wt 0.5719815 0.2241399125 0.15528618
Complete Dominance Designations:
Dmnated?am Dmnated?drat Dmnated?wt
Dmnates?am NA FALSE FALSE
Dmnates?drat TRUE NA FALSE
Dmnates?wt TRUE TRUE NA
```
Reference
Cameron, A. C., & Windmeijer, F. A. (1996). R-squared measures for count data regression models with applications to health-care utilization. Journal of Business & Economic Statistics, 14(2), 209-220.
| null | CC BY-SA 4.0 | null | 2023-04-03T20:32:57.957 | 2023-04-03T20:32:57.957 | null | null | 203199 | null |