Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
610492 | 1 | null | null | 2 | 37 | I've been learning Gillespie's algorithm to simulate continuous time Markov chains. I understand how the algorithm is derived from the reaction probability density function
$P(\tau, \mu)\,d\tau$ = probability, given the state at time $t$, that the next event is $\mu$ and that it occurs in the time interval $(t + \tau, t + \tau + d\tau)$
What I don't get is how this guarantees that we're sampling from the "master equation" or the Kolmogorov forward equations. Why is it that sampling with this probability gives us trajectories that follow the distribution obtained by solving the forward equations?
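For concreteness, here is a minimal sketch of Gillespie's direct method for a toy birth-death chain (my own illustrative example, not from any particular reference): the waiting time $\tau$ is drawn from an exponential with rate equal to the total propensity, and the event $\mu$ is chosen with probability proportional to its propensity, which is exactly how $P(\tau, \mu)$ factorizes.

```python
import random

def gillespie_birth_death(k_birth=1.0, k_death=0.1, n0=0, t_max=50.0):
    """Minimal direct-method SSA for a birth-death chain:
    X -> X+1 at rate k_birth, X -> X-1 at rate k_death * X."""
    t, n = 0.0, n0
    times, states = [t], [n]
    while t < t_max:
        a1 = k_birth            # birth propensity
        a2 = k_death * n        # death propensity
        a0 = a1 + a2            # total propensity
        if a0 == 0.0:
            break
        # tau ~ Exponential(a0): time to the next event
        t += random.expovariate(a0)
        # pick which event fires, with probability proportional to its propensity
        if random.random() * a0 < a1:
            n += 1
        else:
            n -= 1
        times.append(t)
        states.append(n)
    return times, states
```

Averaging many such trajectories at a fixed time then recovers, up to Monte Carlo error, the marginal distribution that the master/forward equations evolve.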
| Gillespie's Algorithm's Connection to Kolmogorov's Forward Equations | CC BY-SA 4.0 | null | 2023-03-23T17:39:04.007 | 2023-03-23T23:20:41.713 | 2023-03-23T23:20:41.713 | 383970 | 383970 | [
"simulation",
"markov-chain-montecarlo",
"stochastic-processes",
"markov-process"
] |
610493 | 2 | null | 610460 | 0 | null | I managed it by modifying the data manually and then running the `VARselect` function on the relevant lag in loops from 1 to 11. It seems to work, although it is done in a rather inelegant way.
I would appreciate any criticism.
```
# testing for optimal lag length (d*) based on y
# using the AIC (aim to min. AIC)
AIC <- matrix(NA, ncol = 4, nrow = 0) %>% as.data.frame()
colnames(AIC) = c("month", "feature", "lag", "AIC")
for (i in 1:12) {
x = 1
while (x <= ncol(df_all)-3) {
feature = colnames(df_all[3+x])
print(paste0("feature = ", feature))
for(d in 1:11){
# choose 1 X at a time and convert to TS
test_d <- df_all[ ,c(1:3, 3+x)] %>%
dplyr::arrange(date) %>%
# set X d months backward, to get lagged effects
dplyr::rename_with(~'feature', .cols = 4) %>%
dplyr::mutate(feature = lag(feature, n = d)) %>%
dplyr::mutate(year = year(date), .after = month) %>%
dplyr::select(-date) %>%
drop_na()
test_d <- test_d %>%
# leave only prices of month i in the data
dplyr::filter(month == i) %>%
ts(start = test_d$year[1], frequency = 12)
# Lag selection (with no constant, no trend)
lag_d <- vars::VARselect(y = test_d, lag.max = d, type = "none")
AIC[nrow(AIC)+1,] <- c(i, feature, d, lag_d$criteria[1])
}
x = x+1
}
}
AIC <- AIC %>%
dplyr::filter(AIC != "-Inf") %>%
dplyr::mutate(AIC = as.numeric(AIC)) %>%
dplyr::group_by(month, feature) %>%
dplyr::slice_min(AIC, n = 1)
```
| null | CC BY-SA 4.0 | null | 2023-03-23T17:52:27.560 | 2023-03-24T09:53:37.690 | 2023-03-24T09:53:37.690 | 300110 | 300110 | null |
610500 | 2 | null | 584689 | 1 | null | Yes, you can use the Shapiro-Wilk (SW) test with 1200 observations.
The initial study of this test was done for samples of fewer than 50 data points. However, over time there have been improvements that allow the test to be used with up to 5000 observations. It is important to check in your software's documentation whether it uses the original or the improved version of the SW test. Generally, current software uses the improved version.
Here are some references:
- https://link.springer.com/referenceworkentry/10.1007/978-3-642-04898-2_421
- https://link.springer.com/article/10.1007/BF01891203
- https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.shapiro.html#scipy.stats.shapiro
- http://lib.stat.cmu.edu/apstat/R94
- http://lib.stat.cmu.edu/apstat/
- https://www.tandfonline.com/doi/full/10.1080/00949655.2010.520163
| null | CC BY-SA 4.0 | null | 2023-03-23T19:12:54.620 | 2023-03-23T19:12:54.620 | null | null | 346197 | null |
610501 | 1 | 610516 | null | 1 | 69 | I am going to try to simplify the context of my analysis into an apples/oranges scenario.
I have 30 baskets, each with 10 fruits. The 10 fruits are made up of a number of apples and oranges.
Each fruit also has a number of dots on them.
I want to run a correlation between the proportion of apples in the baskets and the proportion of dots on the apples. For example, Basket 1 has 30% apples and the apples have 40% of the dots on the fruits in the basket. Basket 2 has 60% apples and the apples have 20% of the dots on the fruits in the basket, and so on. I want to also run the same correlation with the oranges.
I want to see whether there is a relationship between the number of apples in the baskets and the number of dots on the apples across the 30 baskets. If I run the regression with just the proportion of apples, I get a different correlation than when running a correlation with the oranges. I was wondering what kind of regression would be appropriate for comparing paired/linked proportional data like this?
| Regression Analysis for Proportional Data | CC BY-SA 4.0 | null | 2023-03-23T19:24:37.623 | 2023-03-24T09:24:35.170 | null | null | 383981 | [
"regression",
"correlation",
"proportion",
"paired-data",
"percentage"
] |
610502 | 1 | null | null | 2 | 24 | Is there any literature exploring convergence guarantees of the MICE imputation method for missing data? In practice, the method seems to work pretty reliably with different regressors, but I can't seem to find any theoretical justification for its relatively good performance, or a characterization of the datasets on which it is likely to fail.
| Theoretical Results for MICE Imputation | CC BY-SA 4.0 | null | 2023-03-23T19:41:12.650 | 2023-03-23T19:41:12.650 | null | null | 383982 | [
"multivariate-analysis",
"missing-data",
"data-imputation",
"multiple-imputation"
] |
610503 | 1 | null | null | 1 | 12 | I have datasets that may contain multiple patterns. The data need to be classified based on the groupings in the data. The figure below shows an example of the data; the circles around the data are manually drawn to indicate groups.
Any help to do this type of data classification using Matlab? Thanks.
[](https://i.stack.imgur.com/RXU69.png)
| Identifying groups/patterns in a random dataset using Matlab | CC BY-SA 4.0 | null | 2023-03-23T19:44:32.327 | 2023-03-23T19:44:32.327 | null | null | 383983 | [
"classification",
"matlab",
"pattern-recognition"
] |
610504 | 2 | null | 133901 | 1 | null | See [https://projecteuclid.org/journals/annals-of-mathematical-statistics/volume-19/issue-1/Approximate-Weights/10.1214/aoms/1177730297.full?tab=ArticleLinkCited](https://projecteuclid.org/journals/annals-of-mathematical-statistics/volume-19/issue-1/Approximate-Weights/10.1214/aoms/1177730297.full?tab=ArticleLinkCited) or the Casella&Berger's "Statistical Inference". And here is a type-up.
$
\newcommand{\bracket}[1]{\left ( #1 \right )}
\newcommand{\curly}[1]{\left \{ #1 \right \} }
\newcommand{\squared}[1]{\left [ #1 \right ]}
$
Let us view the given $a_i$'s as follows:
fix $\lambda \in [0,1]$, given $\{t_i\}_{i=1}^k \subset [-1,1]$, put $b_i = a_i^*(1+\lambda t_i)$
and $a_i = \bracket{\sum_{j=1}^k b_j}^{-1}b_i$.
In this format, the conditions $a_i \geqslant 0$ and $\sum_{i=1}^k a_i = 1$ are satisfied
while the sequence $\{a_i\}_{i=1}^k$ remains arbitrarily given.
We have, remarking that variance is additive in case of zero covariance:
\begin{equation*}
\begin{split}
& \text{Var} \bracket{ \bracket{\sum_{j=1}^kb_j}^{-1} \sum_{i=1}^k b_i W_i}
= \bracket{\sum_{j=1}^kb_j}^{-2} \sum_{i=1}^k b_i^2\sigma_i^2
\\&= \bracket{\sum_{j=1}^ka_j^*(1+\lambda t_j)}^{-2} \sum_{i=1}^k (a_i^*)^2 (1+\lambda t_i)^2 \sigma_i^2
\\& = \bracket{\sum_{j=1}^ka_j^*(1+\lambda t_j)}^{-2} \sum_{i=1}^k \bracket{\sum_{j=1}^k \frac{1}{\sigma_j^2}}^{-2}
\frac{1}{\sigma_i^4} (1+\lambda t_i)^2 \sigma_i^2
\\& = \bracket{\sum_{j=1}^ka_j^*(1+\lambda t_j)}^{-2} \bracket{\sum_{j=1}^k \frac{1}{\sigma_j^2}}^{-1}
\sum_{i=1}^k a_i^* (1+\lambda t_i)^2
\\& = \bracket{\sum_{j=1}^ka_j^*(1+\lambda t_j)}^{-2} \bracket{\sum_{j=1}^k \frac{1}{\sigma_j^2}}^{-1}
\squared{1+2\lambda \sum_{i=1}^ka_i^*t_i + \lambda^2 \sum_{i=1}^k a_i^*t_i^2}
\\& = \bracket{\sum_{j=1}^ka_j^*(1+\lambda t_j)}^{-2} \bracket{\sum_{j=1}^k \frac{1}{\sigma_j^2}}^{-1}
\squared{\bracket{1+ \lambda \sum_{j=1}^k a_j^*t_j}^2 + \lambda^2 \sum_{j=1}^k a_j^*t_j^2
- \lambda^2 \bracket{\sum_{j=1}^k a_j^*t_j}^2}
\\& = \bracket{\sum_{j=1}^k \frac{1}{\sigma_j^2}}^{-1} \curly{1+
\bracket{\sum_{j=1}^ka_j^*(1+\lambda t_j)}^{-2}\lambda^2\squared{\sum_{j=1}^k a_j^*t_j^2
- \bracket{\sum_{j=1}^k a_j^*t_j}^2}}
\end{split}
\end{equation*}
Put $T = \sum_{j=1}^k a_j^*t_j$. We now impose the constraint:
\begin{equation*}
\sum_{j=1}^k a_j^* t_j^2 \leqslant 1
\end{equation*}
We have proved that:
\begin{equation*}
\text{Var}(W) = \text{Var} \bracket{ \bracket{\sum_{j=1}^kb_j}^{-1} \sum_{i=1}^k b_i W_i} \leqslant
\text{Var}(W^*) \squared{1+ \frac{\lambda^2(1-T^2)}{(1+\lambda T)^2}}
\end{equation*}
If $f(T) = \frac{1-T^2}{(1+\lambda T)^2}$, then the equation $f'(T)=0$ has a unique solution $T=-\lambda$.
Since $f(0) = 1 < \frac{1}{1-\lambda^2} = f(-\lambda)$, the point $T=-\lambda$ is a maximum. Therefore, plugging in $T=-\lambda$ yields:
\begin{equation*}
\text{Var}(W) \leqslant \frac{\text{Var}(W^*)}{1-\lambda^2}
\end{equation*}
It is left to show that $\lambda$ defined in the problem is in the range $[0,1]$.
But directly solving for $\lambda$ in the given condition will yield:
\begin{equation*}
\lambda = \frac{b_{\max}-b_{\min}}{b_{\max}+b_{\min}}
\end{equation*}
This completes the proof.
| null | CC BY-SA 4.0 | null | 2023-03-23T19:46:17.660 | 2023-03-23T23:34:38.137 | 2023-03-23T23:34:38.137 | 345771 | 345771 | null |
610506 | 2 | null | 610404 | 0 | null | Some points:
- The model formulation will depend on the specific questions of interest (i.e., which main effects and interactions to include).
- If you have missing data in your outcome variable `target`, it is preferable to use a linear mixed model.
- Working with differences (i.e., target at post minus the target at pre) is, generally, considered suboptimal. It is better to use an ANCOVA model (i.e., linear regression), in which you put as the outcome the target at post, and you include as a covariate the target at pre.
| null | CC BY-SA 4.0 | null | 2023-03-23T19:51:54.193 | 2023-03-26T19:09:29.210 | 2023-03-26T19:09:29.210 | 219012 | 219012 | null |
610507 | 1 | null | null | 0 | 24 | We assume we have an interval $I=[a,b]$. We define $C(I)$ to be the set of continuous functions on $I$. We further define the set of one-hidden layer neural networks
$$NN(H,\theta)=\left\{ f_{\theta}=\sum_{i=1}^{H}a_{i}\phi(w_{i}\cdot x+b_{i}) |a_{i},w_{i},b_{i}\in\mathbb{R}\right\}$$
with activation function $\phi$ which could be a sigmoid function or $\tanh$ or ReLU and we define the parameter vector $\theta=(a,w,b)\in \mathbb{R}^{3H}$.
We define the activation points as $K=\{x_1,\dots,x_{H}|w_{i} \cdot x_{i}+b_{i}=0 \quad \forall i \in 1,\dots ,H\}$
Let $\|\cdot\|$ be the supremum norm. The approximation error is given by
$$d(f_{\theta},f)=\sup_{f \in C(I)}\inf_{\theta \in \mathbb{R}^{3H}}\|f_{\theta}-f\|$$
and can e.g. be bounded by
$d(f_{\theta},f) \leq \frac{5}{2}\omega(f,\frac{b-a}{H})=\varepsilon$
([2](https://www.sciencedirect.com/science/article/abs/pii/S0925231207002986)) where $\omega$ is the modulus of continuity. There are other bounds.
---
My question is the following: How is $\max(K),\min(K)$ related to $a$ and $b$ and $\varepsilon$?
E.g. looking at the following picture.
[](https://i.stack.imgur.com/rSF5g.png)
We can approximate the constant function with a ReLU network and place the activation point as far away from $a$ as we want. But this is very pathological; in general we need the nonlinearity that comes with the activation point to fit the nonlinearity of the function. Hence I would assume that $|\min(K)-a| \approx \frac{\varepsilon}{H}$. But this is not a proof.
| Universal approximation related to activation regions | CC BY-SA 4.0 | null | 2023-03-23T20:06:47.780 | 2023-03-23T20:12:31.230 | 2023-03-23T20:12:31.230 | 375493 | 375493 | [
"regression",
"machine-learning",
"neural-networks",
"approximation"
] |
610508 | 2 | null | 573457 | 1 | null | We're discussing a similar question in my work team, comparing healthcare interventions with variable post-period durations.
If you're normalizing the outcomes in the pre- and post-periods as a monthly rate (vs. simply summing the $y$ values in each period regardless of time length) then yes, a pre-post comparison may be conducted if you're reasonably certain the parallel trends assumption is valid.
However, a few points to consider:
- Are seasonal effects important in your study? If they are, best to use 12 month pre- and post-periods (or a multiple of 12) for all subjects.
- Computing outcome rates based on variable time periods per subject reduces the precision of your difference-in-differences estimate (5 months representing fewer observations than 12 months, increasing variability of measurements).
| null | CC BY-SA 4.0 | null | 2023-03-23T20:12:13.540 | 2023-03-23T20:12:13.540 | null | null | 13634 | null |
610509 | 2 | null | 481979 | 0 | null | Some people might use $r^2$ to denote the Pearson correlation between the predictions and true values, $r^2 = \left(\text{corr}\left(\hat y, y\right)\right)^2$, while using $R^2$ to mean the formula you gave.
There are serious problems with just squaring the Pearson correlation between the predictions and true values. I go through some of the math [here](https://datascience.stackexchange.com/a/114457/73930) and show some images [here](https://stats.stackexchange.com/a/584562/247274). In that regard, the $r^2$ is problematic and can miss that your predictions are terrible. A reason people use it is that $r^2$ and $R^2$ coincide for OLS linear regression with an intercept, which is an extremely common regression technique that almost anyone who uses statistics has learned.
If you take $R^2 = 1 - \dfrac{SS_{res}}{SS_{total}}$ as you have, then $R^2$ is just a transformation of the sum of squared residuals, and the sum of squared residuals is an extremely reasonable way to assess a regression model.
A common complaint about $R^2$ is that it can be driven ([close to](https://stats.stackexchange.com/a/591849/247274)) a perfect $R^2=1$ by overfitting to the training data. This is valid, but do the algebra: that corresponds to $SS_{res}=0$ (or close to zero if $R^2=1$ is impossible). Thus, such a complaint is also a complaint about $SS_{res}$ $\big($ditto for $MSE=SS_{res}/N$ and $RMSE = \sqrt{MSE}$$\big)$.
A way to keep your modeling honest is to check performance on some holdout data. There is [disagreement](https://stats.stackexchange.com/q/590199/247274) about what constitutes an out-of-sample $R^2$ calculation, though if you pick one that is a transformation, such as the $
R^2_{\text{out-of-sample, Dave}}
$ that I discuss in the link, you are evaluating the out-of-sample sum of squared residuals, perhaps in a way that gives some context to the value as a comparison to the performance of a baseline model.
Evaluating the out-of-sample sum of squared residuals, or some function of it like $R^2$, $MSE=\dfrac{SS_{Res}}{N}$ (for $N$ predictions), or $RMSE = \sqrt{MSE}$, makes total sense, whether the model is linear or not. Other measures of performance might make more sense (e.g., mean absolute error, $MAE = \overset{N}{\underset{i=1}{\sum}}\left\vert y_i - \hat y_i\right\vert$), but that is a separate conversation. If what you find interesting is a measure of square loss, $SS_{res}$, $MSE$, $RMSE$, and $R^2 = 1 - \frac{SS_{res}}{SS_{total}}$ are transformations of each other and, in some sense, convey the same information.
| null | CC BY-SA 4.0 | null | 2023-03-23T20:19:55.863 | 2023-04-20T10:54:35.517 | 2023-04-20T10:54:35.517 | 247274 | 247274 | null |
610510 | 1 | null | null | 0 | 20 | Below you can find the problem that I am trying to understand. My main difficulty is understanding where the reparameterization they propose comes from. Alternatively, I have tried to connect it with the constrained-LS KKT solution, but I do not see the connection.
$$
\begin{aligned}
& \mathbf{r}_t=\left(\begin{array}{l:l}
\mathbf{1} & \mathbf{B}_i
\end{array}\right) \mathbf{f}_{m i, t}+\boldsymbol{\varepsilon}_t, \quad t=1,2, \cdots, T \\
& =\mathbf{B}_{m i} \mathbf{f}_{m i, t}+\boldsymbol{\varepsilon}_t \\
&
\end{aligned}
$$
where $\mathbf{B}_{m i}$ is an $N \times(K+1)$ matrix with $\mathbf{B}_i$ an $N \times K$ matrix of $1^{\prime} s$ and $0^{\prime} s$ representing $K$ industry sectors, with each stock belonging to one and only one sector over the given time interval, and
$$
\mathbf{f}_{m i, t}=\left(f_{0, t}, f_{1, t},, f_{2, t}, \cdots, f_{K, t}\right)^{\prime} .
$$
As such, the matrix $\mathbf{B}_{mi}$ is rank-deficient, with rank $K$ instead of $K+1$. Consequently, using LS to fit the above model for each time period does not lead to a unique solution. One solution is to impose a constraint:
$$
f_{1, t}+f_{2, t}+\cdots+f_{K, t}=0 .
$$
This can be accomplished with the reparameterization (this is where I don't understand)
$$
\mathbf{f}_{m i, t}=\mathbf{R}_{m i} \mathbf{g}_{m i, t}
$$
$$
\mathbf{r}_t=\mathbf{B}_{m i} \mathbf{R}_{m i} \mathbf{g}_{m i, t}+\boldsymbol{\varepsilon}_t
$$
where
$$
\mathbf{R}_{m i}=\left(\begin{array}{c}
\mathbf{I}_K \\
\boldsymbol{a}^{\prime}
\end{array}\right) \sim(K+1) \times K
$$
$$
\boldsymbol{a}^{\prime}=(0,-1,-1, \cdots,-1) \sim 1 \times K
$$
$$
\mathbf{g}_{m i, t}=\left(g_{1, t}, g_{2, t}, \cdots, g_{K, t}\right)^{\prime} .
$$
The model (9) now has a unique least-squares solution $\hat{\mathbf{g}}_t$, and it is easy to check that (8) insures that the constraint (7) is satisfied.
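To see concretely why the reparameterization enforces (7): $\mathbf{f}_{mi} = \mathbf{R}_{mi}\,\mathbf{g}_{mi}$ copies $g$ into $(f_0, \dots, f_{K-1})$ and sets $f_K = -(g_2 + \cdots + g_K)$, so $f_1 + \cdots + f_K$ telescopes to zero for any $g$. A quick numerical check (toy values of my own choosing):

```python
def f_from_g(g):
    """Apply R_mi to g: R_mi stacks I_K on top of a' = (0, -1, ..., -1),
    so f = (g_1, ..., g_K, -(g_2 + ... + g_K))."""
    return list(g) + [-sum(g[1:])]

# f = (f_0, f_1, ..., f_K); the constraint (7) involves f_1, ..., f_K only
f = f_from_g([1.0, 2.0, 3.0])   # K = 3
# sum(f[1:]) is 2 + 3 - 5 = 0, i.e. constraint (7) holds automatically
```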
| Constraint Linear regression in finance | CC BY-SA 4.0 | null | 2023-03-23T20:22:17.200 | 2023-03-23T20:22:17.200 | null | null | 271843 | [
"linear-model"
] |
610512 | 1 | null | null | 0 | 30 | Can I have three levels of randomization and three error terms even when the replications are produced under the same whole-plot treatments in a split-split-plot design? For example, three reps are included under tillage method T1 as the whole plot, with 4 fertilizer subplots and 3 sowing-time sub-subplots; three reps under tillage method T2 as the whole plot, with 4 fertilizer subplots and 3 sowing-time sub-subplots; and so on.
| Split-split-plot design: Analysis and error terms | CC BY-SA 4.0 | null | 2023-03-23T20:48:20.877 | 2023-03-24T00:21:10.597 | 2023-03-24T00:21:10.597 | 11887 | 383989 | [
"anova",
"experiment-design",
"split-plot"
] |
610513 | 2 | null | 444210 | 0 | null | It depends on how you define $R^2$, and there are several legitimate definitions, with the two below being the most reasonable to me.
$$
R^2 =\left(\text{corr}\left(\hat y, y\right)\right)^2\\
R^2=1-\left(\dfrac{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\bar y
\right)^2
}\right)
$$
$R^2 =\left(\text{corr}\left(\hat y, y\right)\right)^2$ will not care about bias. If you predict one unit too high every time, the correlation is perfect. Quoting [an answer of mine from Data Science](https://datascience.stackexchange.com/a/114457/73930), for any real $a$ and positive $b$, $
\left(\text{corr}\left(\hat y, y\right)\right)
=
\left(\text{corr}\left(a + b\hat y, y\right)\right)
$. In fact, for nonzero $b$, $
\left(\text{corr}\left(\hat y, y\right)\right)^2
=
\left(\text{corr}\left(a + b\hat y, y\right)\right)^2
$.
Concretely, if your true values are $y=(1,2,4,6,5)$, $\hat y=(12, 13, 15, 17, 16)$ makes for terrible predictions but does have a perfect correlation with $y$. (In this example, $a=11$ and $b=1$.)
For the second formula, the numerator is just a function of the mean squared error. Since the denominator is a function of the data and not of any particular model, regard the denominator as a constant (which it is for any given data set). Then the second formula is just a decreasing function of the mean squared error. Since, all else equal, mean squared error increases when bias magnitude increases ($MSE = \text{bias}^2 + \text{var}$), you can regard that equation for $R^2$ as moving in the opposite direction of bias magnitude, yes. As bias magnitude decreases, $R^2$ increases, and as bias magnitude increases, $R^2$ decreases (assuming equal variance in both cases). This makes sense to me. Holding the variance equal, higher bias magnitude means a worse fit, which should correspond to a lower $R^2$.
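A quick pure-Python check of the claims above, using the concrete example $y=(1,2,4,6,5)$, $\hat y=(12,13,15,17,16)$:

```python
import math

def pearson(u, v):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

def r2_score(y, yhat):
    """R^2 = 1 - SS_res / SS_tot."""
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1 - ss_res / ss_tot

y    = [1, 2, 4, 6, 5]
yhat = [12, 13, 15, 17, 16]   # every prediction is 11 units too high
# pearson(y, yhat) is 1 (perfect correlation, bias invisible),
# while r2_score(y, yhat) is strongly negative (bias heavily penalized)
```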
| null | CC BY-SA 4.0 | null | 2023-03-23T20:53:34.030 | 2023-03-23T20:53:34.030 | null | null | 247274 | null |
610514 | 2 | null | 610350 | 1 | null | You have to be careful with the residuals estimated by projection. Your example (without what you call "within residuals," given that there are no replicates within an `id` under the same condition) shows this nicely.
With your data and model:
```
length(model.pr[[5]][,"Residuals"])
# [1] 40
```
But there are patterns within those residuals. Each individual has 4 residual values, in pairs of pairs: all 4 have the same absolute values, with 2 positive and 2 negative. For example, for `id=1`:
```
model.pr[[5]][1:4,"Residuals"]
# 1 2 3 4
# -0.6504411 0.6504411 0.6504411 -0.6504411
```
That doesn't seem like an appropriate set of data to submit to a Shapiro-Wilk test, even if you think that normality testing isn't [essentially useless](https://stats.stackexchange.com/questions/2492/is-normality-testing-essentially-useless). A truly normal data set of 40 shouldn't have only 10 unique absolute values.
In Section 10.2 of [Venables and Ripley](https://link.springer.com/book/10.1007/978-0-387-21706-2), which discusses these matters, the residuals from projections are only used for a `qqnorm()` plot, not for a formal normality test. For a plot of residuals versus fitted, they further recommend using the `fitted()` and `studres()` from the model itself, at the last stratum. In your model there are only 10 such values, one for each individual:
```
length(studres(model[[5]]))
# [1] 10
```
The recommended diagnostic plots for your model would be based on:
```
plot(fitted(model[[5]]),studres(model[[5]]))
qqnorm(model.pr[[5]][,"Residuals"])
```
[This page](https://stats.stackexchange.com/a/485603/28500) discusses some other issues in evaluating this type of model. Note that your data lead to a singular fit with a 0 estimate for random-intercept variance when modeled with `lmer()`.
| null | CC BY-SA 4.0 | null | 2023-03-23T20:56:41.523 | 2023-03-23T21:05:14.913 | 2023-03-23T21:05:14.913 | 28500 | 28500 | null |
610515 | 1 | null | null | 0 | 68 | A while ago I read (probably on stata forums) that FE can be on a lower level than SE clusters, but not vice versa. For example, firm/household FE with industry/state SE are good, but doing state FE with household SE would be wrong.
Intuitively it seems to make sense. If treatment assignment is on a household level, shouldn't household FE be the first choice? But it seems that there are papers in different fields that use higher level FE with lower level SE. What am I missing?
| Levels of clustered standard errors and fixed effects | CC BY-SA 4.0 | null | 2023-03-23T21:50:02.683 | 2023-03-24T10:14:50.747 | 2023-03-23T21:50:36.543 | 321435 | 321435 | [
"standard-error",
"fixed-effects-model"
] |
610516 | 2 | null | 610501 | 0 | null | Your direction of using a linear regression is appealing even though it has an obvious limitation (its predictions are not limited to $[0,1]$). We can adapt it, though, to fit your analysis.
>
I want to see whether there is a relationship between the number of apples in the baskets and the number of dots on the apples across the 30 baskets
So, assume we want to estimate the apples regression: that is, to find the relationship between the number of apples in the basket (or the proportion of apples in each basket) and the proportion of dots assigned to them (out of all the dots on all the fruits in the basket). Let's specify the task as estimating the total number of dots, so we can use a Poisson regression model, which is a form of GLM.
>
Poisson Regression models are best used for modeling events where the outcomes are counts. Or, more specifically, count data: discrete data with non-negative integer values that count something, like the number of times an event occurs during a given timeframe or the number of people in line at the grocery store.
Count data can also be expressed as rate data, since the number of times an event occurs within a timeframe can be expressed as a raw count (i.e. "In a day, we eat three meals") or as a rate ("We eat at a rate of 0.125 meals per hour"). [here]
I won't delve into Poisson regression, as the link I attached includes a better and clearer introduction. Instead, let's discuss the modeling part. You want to specify a regression model that estimates the proportion of dots assigned to apples in the basket. Therefore, $y_i$ is this proportion for basket $i$, calculated as $\dfrac{N^{apples}_{i}}{N^{apples}_{i}+N^{oranges}_{i}}$, where $N$ is the number of dots.
Next, you want as a predictor the number of apples in basket $i$, $X_{i}$. I don't think we want to add a per-basket fixed effect, because we aren't interested in the average number of dots per apple within the basket, but we definitely want to adjust our standard errors to allow for correlation between fruits in the basket. Luckily, R's regression machinery is quite general and allows for clustered regression errors; see the documentation [here](https://search.r-project.org/CRAN/refmans/miceadds/html/lm.cluster.html).
| null | CC BY-SA 4.0 | null | 2023-03-23T22:02:05.170 | 2023-03-23T22:02:05.170 | null | null | 285927 | null |
610517 | 1 | null | null | 1 | 25 | It is shown [here](https://math.stackexchange.com/questions/335306/why-are-additional-constraint-and-penalty-term-equivalent-in-ridge-regression) that the optimization of a loss function with regularization,
$$\text{argmin}_b L(X,b) + c ||b||_p \phantom{aaaaaaaaaaaaaaaaaaaaaaaa} (*)$$
is equivalent to the constrained optimization
$$\text{argmin}_b L(X,b), \phantom{aaa}\text{under constraint} ||b||_p \leq r \phantom{aaaaaaaa} (**)$$
where $r \geq ||b_{opt}||_p$ and $b_{opt}$ solves $(*)$ via the Lagrangian.
This implies that, when presented with the optimization $(*)$, one has to solve it (find $b_{opt}$) in order to write down the constraint radius $r$ in $(**)$.
When presented with $(*)$ (when ready to train an ML model in particular), I would like to know before solving for the optimal parameters if we can determine $r$, the constrained version $(**)$. My goal is to limit my search space before doing the original optimization $(*)$.
Can this be done? If it depends on $p$, the norm used, I am most interested in $p= 1,2$.
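As a toy illustration of the mapping from $(*)$ to $(**)$, here is a 1-D sketch. I use squared loss with the squared penalty $c\,b^2$ (the usual ridge form, a deliberate simplification of the $c\|b\|_p$ written above), and the data values are my own. It shows concretely that the radius $r = \|b_{opt}\|$ only becomes available after $(*)$ is solved:

```python
def ridge_1d(x, y, c):
    """Closed-form minimiser of sum((y_i - b*x_i)^2) + c*b^2 (1-D ridge).
    Setting the derivative to zero gives b = sum(x*y) / (sum(x^2) + c)."""
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / (sxx + c)

x, y = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
b_unpenalized = ridge_1d(x, y, 0.0)    # ordinary least squares: b = 2
b_pen = ridge_1d(x, y, 14.0)           # penalized: shrunk toward zero
r = abs(b_pen)                         # the radius for the equivalent (**)
```

Only after computing `b_pen` do we learn the value of `r` that makes $(**)$ match $(*)$, which is the crux of the question.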
| Can we convert the optimization of a loss function with regularization to the Lagrangian, constrained optimization *before* solving the optimization? | CC BY-SA 4.0 | null | 2023-03-23T22:16:11.283 | 2023-03-23T22:16:11.283 | null | null | 92660 | [
"regularization",
"loss-functions",
"lagrange-multipliers",
"constrained-optimization"
] |
610518 | 1 | null | null | 1 | 42 | The usual assumption of i.i.d. is made with such frequency that it risks being applied to things like the difference of correlations for highly dependent (correlated) variables. To calculate dependent correlation probability, [Steiger 1980](https://psycnet.apa.org/record/1980-08757-001), [pdf](http://www.psychmike.com/Steiger.pdf), [online calculator](http://www.psychmike.com/dependent_correlations.php) is frequently cited (5181$\times$ on Google Scholar as of today). The same is true for receiver operating characteristic curves; that is, the assumption of independent ROC curves risks being made for highly correlated variables, with the result that significant differences can be falsely discarded. Is there an equivalent probability calculation method, analogous to Steiger 1980, for assessing the significance of the difference between two dependent ROC curves, i.e., curves so far from independent that even slight differences are significant?
I came across a post in which odds ratios for dependent ROC curves were discussed; I would like a probability calculator, if there is one, please.
| Calculate ROC probability for highly correlated variables | CC BY-SA 4.0 | null | 2023-03-23T22:21:38.503 | 2023-03-27T07:50:42.700 | null | null | 99274 | [
"probability",
"roc",
"non-independent"
] |
610519 | 1 | null | null | 0 | 6 | Suppose one is testing multiple hypotheses and, for a given FDR, computes Benjamini-Hochberg adjusted p-values. Furthermore, consider the classifier defined by $f(p; t) = \{\text{True if } p \ge t; \text{else False}\}$ where $p$ is the p-value from the statistical test and the classes are "null hypothesis is rejected" and "not rejected".
My question is: at the adjusted p-value threshold corresponding to the FDR, would we expect the classifier's false positive rate to equal the FDR?
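For reference, a minimal pure-Python sketch of the Benjamini-Hochberg step-up adjustment referred to above (function and variable names are my own):

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values: sort ascending, scale the
    p-value at rank i by m/i, then enforce monotonicity from the top."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    running_min = 1.0
    # walk from the largest p-value down, tracking the running minimum
    for rank_from_end, i in enumerate(reversed(order)):
        rank = m - rank_from_end            # 1-based rank of p-value i
        running_min = min(running_min, pvals[i] * m / rank)
        adj[i] = running_min
    return adj
```

Rejecting hypotheses whose adjusted p-value falls below the target FDR level is equivalent to the usual BH step-up rule; note that the FDR it controls (expected fraction of false discoveries among discoveries) is a different quantity from a classifier's false positive rate (false positives among true nulls).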
| Question about false discovery rate with Benjamini-Hochberg-adjusted p-values and classifier false positive rate | CC BY-SA 4.0 | null | 2023-03-23T22:21:40.220 | 2023-03-23T22:21:40.220 | null | null | 49798 | [
"statistical-significance",
"multiple-comparisons"
] |
610520 | 1 | null | null | 2 | 78 | Let's say we want to compare two probabilities $p_1$ and $p_2$, not necessarily referred to the same population. For example, $p_1$ may be the probability of getting a certain disease conditioned on having been vaccinated, and $p_2$ the probability of getting the disease for non-vaccinated people.
Common measures to compare two probabilities are their risk difference $p_1-p_2$, relative risk $p_1/p_2$ and odds ratio $p_1(1-p_2)/(p_2(1-p_1))$.
Is there any setting or application field where the measure $p_1/(p_1+p_2)$ is used? I see it as a "normalized" version of $p_1/p_2$: it is approximately equal to $p_1/p_2$ when $p_1$ is significantly smaller than $p_2$, with the advantage that it is guaranteed to be bounded between $0$ and $1$, which can enable certain estimation techniques.
Any ideas or pointers would be appreciated.
| Use of $p_1/(p_1+p_2)$ to compare two probabilities $p_1$, $p_2$? | CC BY-SA 4.0 | null | 2023-03-23T22:38:22.600 | 2023-03-24T15:15:05.947 | 2023-03-23T22:45:48.457 | 28285 | 28285 | [
"probability",
"terminology",
"odds-ratio",
"relative-risk",
"risk-difference"
] |
610521 | 1 | null | null | 1 | 38 | What transformation, preferably an invertible one, accomplishes this? Thank you in advance.
I have come across one transformation. If X is our random variable, then `Y=sign(lambda)*X+exp(X*lambda)`. Unfortunately, this transformation doesn't add skew to just a single tail as I would like.
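A quick sketch of that transformation applied to standard-normal draws, with a plain moment-based skewness estimate (helper functions are my own; this just illustrates the behavior, not a recommendation):

```python
import math
import random

def transform(x, lam):
    """The transformation mentioned above: Y = sign(lambda)*X + exp(lambda*X)."""
    sign = (lam > 0) - (lam < 0)
    return sign * x + math.exp(lam * x)

def sample_skewness(xs):
    """Moment-based skewness: third central moment over variance^(3/2)."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((v - m) ** 2 for v in xs) / n
    s3 = sum((v - m) ** 3 for v in xs) / n
    return s3 / s2 ** 1.5

random.seed(1)
z = [random.gauss(0, 1) for _ in range(20000)]   # roughly symmetric input
y = [transform(v, 0.5) for v in z]               # strongly right-skewed output
```

The `exp` term inflates the right tail, so the output is clearly right-skewed; as noted in the question, though, it also reshapes the left tail rather than leaving it untouched.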
| How can I add skew to any distribution (i.e. standard normal) in one direction? | CC BY-SA 4.0 | null | 2023-03-23T22:40:56.683 | 2023-04-14T08:21:46.720 | null | null | 383993 | [
"distributions",
"data-transformation"
] |
610522 | 1 | null | null | 0 | 66 | I have two dependent variables that I will use in two separate models (namely math_score & language_score) and one independent variable (relative_power) that I will use in both models using Stata 14. The two models have the same sample size. I suspect there's endogeneity in relative_power so I add two instruments to address this problem using ivregress 2sls command. When I tried to test the endogeneity with the estat endog command, I get different results for relative_power.
For the model with math_score, the Durbin & Wu-Hausman test statistic is significant, indicating the independent variable relative_power is indeed endogenous, but for language_score the test statistic is not significant, meaning relative_power is exogenous.
Does this happen often? What should I conclude about the possible endogeneity of relative_power?
I have consulted with my Professor about this but they said it's best to look at previous studies and since previous studies claim that the independent variable that I used was endogenous they said I should treat it as such and keep using IV regression for the model with language_score.
Is it wise to use 2SLS regression even though the independent variable is not endogenous based on the estat endog test? I am a novice and have not encountered something like this before. Any insight would be appreciated.
| Can an explanatory variable be both endogenous and exogenous? | CC BY-SA 4.0 | null | 2023-03-23T23:01:12.327 | 2023-03-27T22:38:15.130 | 2023-03-26T08:49:43.890 | 22047 | 382735 | [
"stata",
"predictor",
"instrumental-variables",
"endogeneity"
] |
610523 | 2 | null | 610520 | 1 | null | You can certainly invent any number of metrics to compare two proportions. Since you mention the risk difference, risk ratio, and odds ratio, which are common comparisons of independent samples, most would assume you are speaking about a two-sample test of proportions.
Other measures of association for binary endpoints include:
- number needed to treat: $NNT=1/(p_2−p_1)$.
- Population attributable fraction, roughly given as $\frac{n_1/(n_1+n_2)\,(RR-1)}{1+n_1/(n_1+n_2)\,(RR-1)}$
In a two sample test, your metric $\theta = p_1/(p_1+p_2)$ could be interpreted as the probability that a randomly selected "case" belongs to group A, and so if $\theta > 0.5$, there's an excess in group A. Not meaningful for case-control studies without weighting.
As an estimator, $\theta$ is almost certain to underperform the MLE, that is, logistic regression and inference on the log odds ratio, which is the natural parameter of a binomial random variable. The lack of boundedness is what makes logistic regression so powerful.
If the events are complementary, as @whuber points out, then the test has a conditional interpretation and you can use conditional probability laws to identify their respective "estimators" and tests.
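As a concrete illustration, here is a short Python sketch (all counts hypothetical) computing these measures for two independent samples:

```python
# hypothetical 2x2 data: events / totals in groups 1 and 2
e1, n1 = 30, 100
e2, n2 = 15, 100
p1, p2 = e1 / n1, e2 / n2

rd = p1 - p2                                    # risk difference
rr = p1 / p2                                    # risk ratio
odds_ratio = (p1 / (1 - p1)) / (p2 / (1 - p2))  # odds ratio
nnt = 1 / abs(p2 - p1)                          # number needed to treat
theta = p1 / (p1 + p2)                          # the metric from the question

print(rd, rr, odds_ratio, nnt, theta)
```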
| null | CC BY-SA 4.0 | null | 2023-03-23T23:05:27.650 | 2023-03-24T15:15:05.947 | 2023-03-24T15:15:05.947 | 8013 | 8013 | null |
610524 | 1 | null | null | 0 | 27 | I have a linear mixed model with a random effect. The associated `R` code is
```
lmer(log_wmh ~ 1 + viscode * dx_official + (1 | PTID) + AGE + PTGENDER + PTEDUCAT)
```
What would be the most appropriate way to write the equation out for this model? I was a bit confused and would appreciate any clarification. Also, any insight on how to explain the error term in the resulting equation? I would appreciate any help. Thanks!
Edit: would it be like this?
```
Y_{it} = \beta_{0} + \beta_1(\text{viscode}_{it}) + \beta_2(\text{diagnosis}_{i}) + \beta_3(\text{age}_{i}) + \beta_4(\text{gender}_{i}) + \beta_5(\text{education}_{i}) + \alpha_{i} + \epsilon_{it}
```
| Linear Mixed Model in R - Equation | CC BY-SA 4.0 | null | 2023-03-23T23:49:27.687 | 2023-03-24T01:28:08.560 | 2023-03-24T01:28:08.560 | 383995 | 383995 | [
"r",
"mixed-model",
"panel-data",
"linear",
"model"
] |
610527 | 1 | 610531 | null | 2 | 27 | $X_1,\ldots,X_n, i.i.d.$
$p=P\left(X>0\right),\theta=p\left(1-p\right) $
How can I find the U-statistics of $\theta$?
| Find the U-statistics for a parameter | CC BY-SA 4.0 | null | 2023-03-24T00:08:19.473 | 2023-03-24T01:21:33.127 | 2023-03-24T00:39:42.100 | 805 | 383997 | [
"nonparametric",
"u-statistics"
] |
610528 | 1 | null | null | 4 | 45 | I'd like to run analyses using Rao-Scott Chi-squared test using svychisq to account for survey weights. It seems that statistic="Chisq" gives the Rao-Scott version, however, now that I'm looking at the documentation deeper, apparently there are other options that seems to accommodate low cell size (because statistic="Chisq" gave me a warning that the approximation may be incorrect) such as statistic="adjWald". I'm wondering whether adjWald also make adjustments on the basis of weight.
| svychisq with statistic="Chisq" vs. statistic="adjWald" | CC BY-SA 4.0 | null | 2023-03-24T00:26:22.137 | 2023-04-16T07:47:02.980 | 2023-03-29T18:37:37.877 | 11887 | 383998 | [
"r",
"chi-squared-test",
"survey",
"survey-weights",
"wald-test"
] |
610529 | 2 | null | 610524 | 1 | null | Assuming you have a sample of $n$ units
\begin{align*}
\log(y_{i}) = \alpha + \beta^\top \mathbf{X}_i + \delta_{g(i)}W_{g(i)} + \epsilon_i,\quad \epsilon_i\sim N(0,\sigma_\epsilon^2),\quad \delta_g\sim N(\delta,\sigma_\delta^2)
\end{align*}
where $y_i$ is $\texttt{wmh}_i$, $\mathbf{X}_i=(\texttt{viscode}_i\times\texttt{dx_official}_i,\texttt{age}_i,\texttt{PTGENDER}_i,\texttt{PTEDUCAT}_i)^\top$, and $W_{g(i)}$ is $\texttt{PTID}_i$. Finally, $\epsilon_i$ and $\delta_g$ are assumed to be independent.
Source: [official manual](https://cran.r-project.org/web/packages/lme4/vignettes/lmer.pdf), Equation 2 and Table 2.
| null | CC BY-SA 4.0 | null | 2023-03-24T01:00:08.620 | 2023-03-24T01:06:59.140 | 2023-03-24T01:06:59.140 | 135461 | 135461 | null |
610530 | 2 | null | 236223 | 1 | null | Question:
>
In short, how can I implement [empirical] partial dependence plots for tree based models without resorting to the training data?
Answer: You can't.
---
I agree with Xiangyu Zheng's [answer](https://stats.stackexchange.com/a/473255) and expand it. I further argue that Jerome H. Friedman himself made a mistake in his 1999/2001 paper "Greedy Function Approximation: A Gradient Boosting Machine" ([preprint pdf](https://jerryfriedman.su.domains/ftp/trebst.pdf)). If he has an account on StackExchange, please ping him, I would highly appreciate his comment.
This paper is the source of "the single traversal weight allocation algorithm", which was mentioned in other answers (with references to scikit-learn and [http://nicolas-hug.com/blog/pdps](http://nicolas-hug.com/blog/pdps)). Friedman writes [I changed notation to match the question]:
>
For regression trees based on single-variable splits, however, the partial dependence of $f(X)$ on a specified target variable subset $X_S$ is straightforward to evaluate given only the tree, without reference to the data itself. For a specific set of values for the variables $X_S$, a weighted traversal of the tree is performed. At the root of the tree, a weight value of $1$ is assigned. For each nonterminal node visited, if its split variable is in the target subset $X_S$, the appropriate left or right daughter node is visited and the weight is not modified. If the node’s split variable is a member of the complement subset $X_C$, then both daughters are visited and the current weight is multiplied by the fraction of training observations that went left or right, respectively, at that node.
Each terminal node visited during the traversal is assigned the
current value of the weight. When the tree traversal is complete, the
value of $f_S(X_S)$ is the corresponding weighted average of the
$f(X)$ values over those terminal nodes visited during the tree
traversal.
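A minimal Python sketch of this weighted traversal (the dict-based tree encoding, field names, and toy numbers are my own, not from the paper or any library):

```python
def partial_dependence(node, x_s, target_features):
    """Friedman-style single-traversal estimate of f_S(x_S) at one point x_s."""
    if 'value' in node:                          # leaf: contributes weight * value
        return node['value']
    j = node['feature']
    if j in target_features:                     # split on a target variable: follow one branch
        child = node['left'] if x_s[j] <= node['threshold'] else node['right']
        return partial_dependence(child, x_s, target_features)
    # split on a complement variable: visit both branches, weighted by training fractions
    n = node['n_left'] + node['n_right']
    return (node['n_left'] / n) * partial_dependence(node['left'], x_s, target_features) \
         + (node['n_right'] / n) * partial_dependence(node['right'], x_s, target_features)

# toy three-leaf tree (values and counts hypothetical)
tree = {'feature': 1, 'threshold': 0.0, 'n_left': 2, 'n_right': 4,
        'left': {'value': 1.0},
        'right': {'feature': 0, 'threshold': 0.5, 'n_left': 2, 'n_right': 2,
                  'left': {'value': 2.0},
                  'right': {'value': 3.0}}}

pd_value = partial_dependence(tree, x_s={0: 0.3}, target_features={0})
print(pd_value)   # (2/6) * 1.0 + (4/6) * 2.0 = 5/3
```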
However, this algorithm does not correctly calculate the estimated partial dependence $\bar{f}_s(x_S) = \frac{1}{N}\sum_{i=1}^N{f(x_S,x^{(i)}_{C})}$. We see that the algorithm only uses the number of training observations in each split (equivalently, at each node), but the partial dependence function depends on more than that. Formally, I claim:
Claim. It is possible to modify the training data in such a way that the inputs to the algorithm (the decision tree and the number of training-set datapoints at each node) remains the same, yet the partial dependence function changes.
Let's first look at how the partial dependence is calculated. For this, I crudely adapt a picture from Nicolas Hug's blog. Here each terminal node corresponds to an unsplit rectangular region. The training samples $x^{(i)}=(x^{(i)}_S,x^{(i)}_C)$ (red lozenges) are replaced with their orthogonal projections onto $X_S=x_S$, i.e. $(x_S,x^{(i)}_C)$ (green plusses). Then the values of $f$ assigned to these projected points and averaged:
$$\bar{f}_s(x_S) = \frac{1}{N}\sum_{i=1}^N{f(x_S,x^{(i)}_{C})} = \frac{1}{6}(2 v_G + v_I + 3 v_H).$$
[](https://i.stack.imgur.com/BV6NM.png)
Now let us move one datapoint (red lozenge) as shown, in such a way that the datapoint remains within the same region, but its projection moves from one region to another. The empirical partial dependence function has clearly changed:
$$\bar{f}_s(x_S) = \frac{1}{N}\sum_{i=1}^N{f(x_S,x^{(i)}_{C})} = \frac{1}{6}(2 v_G + 2 v_I + 2 v_H).$$
[](https://i.stack.imgur.com/yl2RI.png)
Of course, one could argue that if we move training points like this, there is a good chance that we learn a completely different tree if we run the training again (recall that decision trees are unstable learners). But this is not the point. The point is that the proposed algorithm does not have a solid foundation (none was provided). Anyway, the partial dependence value the algorithm returns happens to match our second exact (empirical) calculations: $$ \text{algorithm}(x_S) = \frac{1}{3} v_G + \frac{1}{3} v_I + \frac{1}{3} v_H$$
[](https://i.stack.imgur.com/vI4ow.png)
Further, I again agree with Xiangyu Zheng, who writes in their answer:
>
The fast way is more like averaging on the conditional distribution of the other covariates (not the strictly defined conditional distribution, but in the tree structure).
Intuitively, "conditional distribution" $p(x_c|x_s)$ provided by the tree structure should not be too far from its estimate that is suitable for computing $E[f(X_S,X_C)|X_S=x_s]$:
- where $f(x_S,x_C)$ changes fast, the decision tree is incentivised to make additional splits and create smaller regions;
- where $f(x_S,x_C)$ changes slowly, inaccuracies in $p(x_c|x_s)$ will probably cancel out and yield a reasonable average value.
Ironically, both in the paper and in the ESL book, Friedman elaborates on how $E[f(x_S,X_C)]$ is different from $E[f(X_S,X_C)|X_S=x_s]$, but provides an algorithm that claims to compute the former while seemingly approximating the latter.
| null | CC BY-SA 4.0 | null | 2023-03-24T01:21:32.267 | 2023-03-24T01:40:42.473 | 2023-03-24T01:40:42.473 | 254326 | 254326 | null |
610531 | 2 | null | 610527 | 1 | null | A $U-$statistic that is unbiased for $\theta=p(1-p)$ would be
\begin{align*}
\binom{n}{2}^{-1}\sum_{i<j}Y_i(1-Y_j),
\end{align*}
where $Y_i:=\mathbb{1}(X_i>0)$. Note that $\mathbb{E}[Y_i]=\mathbb{E}[\mathbb{1}(X_i>0)]=\mathbb{P}(X_i>0)=p.$ To see unbiasedness, by the fact that $\{Y_i\}_{i=1}^n$ is an i.i.d. sample, we have that
\begin{align*}
\mathbb{E}\left[\binom{n}{2}^{-1}\sum_{i<j}Y_i(1-Y_j)\right] &= \binom{n}{2}^{-1}\sum_{i<j}\mathbb{E}\left[Y_i(1-Y_j)\right]\\
&=\binom{n}{2}^{-1}\sum_{i<j}\mathbb{E}\left[Y_i\right]\mathbb{E}\left[(1-Y_j)\right] \\
&=\binom{n}{2}^{-1}\binom{n}{2}p(1-p)=\theta.
\end{align*}
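A quick simulation check of the unbiasedness shown above (the sampling distribution here is my own choice; $X\sim\text{Uniform}(-0.3, 0.7)$ gives $p=0.7$ and $\theta=0.21$):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def u_stat(x):
    # U-statistic with kernel Y_i (1 - Y_j) over all pairs i < j
    y = (x > 0).astype(float)
    pairs = list(combinations(range(len(y)), 2))
    return sum(y[i] * (1 - y[j]) for i, j in pairs) / len(pairs)

n, reps = 20, 2000
mean_est = np.mean([u_stat(rng.uniform(-0.3, 0.7, size=n)) for _ in range(reps)])
print(mean_est)   # close to theta = 0.7 * 0.3 = 0.21
```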
| null | CC BY-SA 4.0 | null | 2023-03-24T01:21:33.127 | 2023-03-24T01:21:33.127 | null | null | 135461 | null |
610533 | 1 | null | null | 0 | 17 | Good evening everyone. I have a question about time series. At work I was given a challenge to work with a dataset containing a very short time series of 365 days, and I am using the [darts](https://unit8co.github.io/darts/) framework with a RandomForest model. I was instructed to use the same time-series data for both training and testing. This is my first time-series project; until now I have mostly worked with classification, where the data is split into separate training and test sets, so I don't know whether it is correct to use the same training set for testing as well. But since I was asked to use the training data for both, I want to know whether, with time series, there is a way to make predictions from the initial date of the training set up to its last date, like going back in time. I would like guidance on how to do this correctly.
In the Darts library there is this example
```
import pandas as pd
from darts import TimeSeries
from darts.models import ExponentialSmoothing
# Read a pandas DataFrame
df = pd.read_csv("AirPassengers.csv", delimiter=",")
# Create a TimeSeries, specifying the time and value columns
series = TimeSeries.from_dataframe(df, "Month", "#Passengers")
# Set aside the last 36 months as a validation series
train, val = series[:-36], series[-36:]
model = ExponentialSmoothing()
model.fit(train)
prediction = model.predict(len(val), num_samples=1000)
import matplotlib.pyplot as plt
series.plot()
prediction.plot(label="forecast", low_quantile=0.05, high_quantile=0.95)
plt.legend()
```
In the framework example, the series is split into training and validation sets, but I don't want to do that; I would use the training set for both training and testing.
However, the model's prediction is made for the future. I don't know whether it is possible to check this prediction starting from the same date on which the training set starts.
| Make predictions with Time Series, starting from the date that the training set starts | CC BY-SA 4.0 | null | 2023-03-24T01:39:45.467 | 2023-03-29T18:35:45.493 | 2023-03-29T18:35:45.493 | 11887 | 373067 | [
"time-series",
"python",
"predictive-models",
"train-test-split"
] |
610534 | 2 | null | 390779 | 1 | null | Can I have three levels of randomization and three error terms even when the replications are produced under the same whole-plot treatments in a split-split-plot design? For example, three reps are included under the T1 tillage method as a whole plot with 4 fertilizer subplots and 3 sowing-time sub-subplots; 3 reps under the T2 tillage method as a whole plot with 4 fertilizer subplots and 3 sowing-time sub-subplots; and so on.
| null | CC BY-SA 4.0 | null | 2023-03-24T02:00:00.520 | 2023-03-24T02:00:00.520 | null | null | 383989 | null |
610535 | 1 | null | null | 0 | 34 | I am a graduate student in imaging genetics. When I have a statistics question, I search online and can often find answers. However, I cannot cite user-generated content. Can you suggest some reference books that contain the formulas for common statistical methods?
For example, I have recently read [an answer about the formula for calculating the SE of a predictor variable (x)](https://stats.stackexchange.com/questions/236437/how-to-compute-the-standard-error-of-a-predictor-variable). But I cannot find the formula in books like the [Handbook of Regression Analysis](https://www.google.com.sg/books/edition/Handbook_of_Regression_Analysis/X95obhB6RQcC?hl=zh-CN&gbpv=1&dq=since%20values%20farther%20from%20the%20centroid%20are%20harder%20to%20predict%20as%20precisely.%20Specifically,%20for%20a%20simple%20regression,%20the%20estimated%20standard%20error%20of%20a%20predicted%20value&pg=PA1973&printsec=frontcover) (I can only see the formula for the SE of y).
| Statistics reference books (handbooks) contain all the formulas and easily understood for non-math people | CC BY-SA 4.0 | null | 2023-03-24T02:06:37.650 | 2023-03-24T02:25:45.467 | 2023-03-24T02:25:45.467 | 169706 | 169706 | [
"references"
] |
610537 | 2 | null | 610452 | 5 | null | Generally, starting the graph above zero is preferable, as it allows the viewer to more easily digest the information, which is the point of the graph. You do have to be careful to not mislead the viewer into such misperceptions as that twice the distance from the bottom means twice the value. You can emphasize that it doesn't start at zero, such as by having [squiggly lines](https://tex.stackexchange.com/questions/79269/how-to-show-the-data-does-not-start-at-zero-symbol-on-a-pgfplot-graph) . Depending on the context, you might want to just change the variable from "survival" to something like "attrition", and show 1-survival; going from 90% to 95% is a small percentage change in survival, but half the attrition (going from 10% to 5%), so the question arises as to which is more pertinent.
| null | CC BY-SA 4.0 | null | 2023-03-24T03:18:50.623 | 2023-03-24T03:18:50.623 | null | null | 179204 | null |
610538 | 2 | null | 489390 | 0 | null | A regression model can be seen as a function: you input feature values, and the model outputs a fitted value (prediction). Like any other function, you can pick an output value and solve for the input feature value(s) that give such a value, if any exist.
Your function might be bounded from above, but it might not be. Consequently, it need not make sense to talk about a maximum. However, if there is a limit to the mix of feature values that are within your budget, then you have a constrained optimization problem where you want to find the feature value(s) maximizing the regression output, subject to the constraint that the cost of those feature values is within the budget, and this makes sense whether your regression has an upper bound or not.
Formally, let $p_i$ be the price of feature $i$, and let $B$ be the budget.
$$
\underset{(x_1,\dots,x_k)\in\mathbb R^k}{\arg\max} \left\{\hat\beta_0+
\hat\beta_1x_1+\dots+\hat\beta_kx_k\,\Big\vert\,
p_1x_1 +\dots + p_kx_k \le B
\right\}
$$
In words, this is saying that you want to find the feature values such that the regression output is maximized without those feature values exceeding the available budget. This is related to the Walrasian or Marshallian demand in economics.
The major techniques for solving a problem like this are the Karush-Kuhn-Tucker theorem and Lagrange multipliers. There must be Python packages that implement this kind of constrained optimization.
Note that the uncertainty in estimating the regression coefficients could impact your results, as is mentioned [here](https://stats.stackexchange.com/a/476497/247274). It might be that your coefficient estimates are so imprecise (wide confidence intervals) that, if you solve the constrained optimization problem for many reasonable combinations of regression coefficients, your optimal bundles of feature values bounce all over the place. I might be interested in how the constrained maximum varies upon bootstrapping, fitting a regression to the bootstrap sample, and solving the constrained optimization according to the bootstrap coefficients. If these values tend to give values that are acceptable, that would be reassuring. If the values are all over the place and often are too low, that is concerning.
In economics, a related idea to the optimization discussed above, utility maximization, is expenditure minimization: what is the least amount that must be spent in order to achieve a utility of at least the required amount. If you have a target in mind for the sentiment score, you can solve a similar (but different) constrained optimization problem to figure out the combination of features that minimizes the amount you must spend. (Can you figure out the set for which you find the $\min$ or $\arg\min?$) This is related to Hicksian demand in economics.
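Because the fitted regression is linear in the features, the budget-constrained maximization above is a linear program. A minimal sketch with `scipy.optimize.linprog` (coefficients, prices, and budget are all hypothetical, and features are assumed nonnegative so the problem is bounded):

```python
import numpy as np
from scipy.optimize import linprog

beta = np.array([0.5, 1.2, 0.3])    # hypothetical fitted slopes beta_1..beta_k
prices = np.array([2.0, 5.0, 1.0])  # hypothetical price per unit of each feature
budget = 10.0

# linprog minimizes, so negate the objective to maximize beta' x
res = linprog(c=-beta,
              A_ub=prices.reshape(1, -1), b_ub=[budget],
              bounds=[(0, None)] * len(beta))

x_opt = res.x
# with one budget constraint, the whole budget goes to the best value-per-cost feature:
print(x_opt, beta @ x_opt)          # x = [0, 0, 10], objective 3.0
```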
| null | CC BY-SA 4.0 | null | 2023-03-24T03:29:36.820 | 2023-03-24T03:42:50.980 | 2023-03-24T03:42:50.980 | 247274 | 247274 | null |
610539 | 1 | null | null | 2 | 17 | For instance, if I want to compare how well various model specifications cross validate but some models converged with default parameters such the number of iterations, chains, and the treedepth, and others needed adjusting before they converged, are all the model's estimates and validity still comparable? I'm not asking about uniformity of priors between models -- only the number of iterations, chains, the treedepth.
| In Bayesian regression, is uniformity of parameters such as iterations, chains, and treedepth required for models to be comparable? | CC BY-SA 4.0 | null | 2023-03-24T03:47:27.227 | 2023-03-24T03:47:27.227 | null | null | 315405 | [
"bayesian"
] |
610541 | 1 | 613273 | null | 5 | 284 | I want to estimate a staggered difference-in-difference (DD) with continuous treatment. The data looks something like this:
$$
\begin{array}{ccc}
Individual & year & CT_{i,t} \\
\hline
1 & 2000 & 0 \\
1 & 2001 & 0 \\
1 & 2002 & 0 \\
1 & 2003 & 0 \\
1 & 2004 & 0.3 \\
1 & 2005 & 0.4 \\
1 & 2006 & 0.42 \\
1 & 2007 & 0.2 \\
1 & 2008 & 0 \\
1 & 2009 & 0 \\
\hline
2 & 2000 & 0 \\
2 & 2001 & 0 \\
2 & 2002 & 0 \\
2 & 2003 & 0 \\
2 & 2004 & 0 \\
2 & 2005 & 0 \\
2 & 2006 & 0 \\
2 & 2007 & 0 \\
2 & 2008 & 0 \\
2 & 2009 & 0 \\
\hline
3 & 2000 & 0.1 \\
3 & 2001 & 0.1 \\
3 & 2002 & 0.1 \\
3 & 2003 & 0.5 \\
3 & 2004 & 0.6 \\
3 & 2005 & 0.4 \\
3 & 2006 & 0.2 \\
3 & 2007 & 0.1 \\
3 & 2008 & 0.3 \\
3 & 2009 & 0.1 \\
\hline
4 & 2000 & 0.3 \\
4 & 2001 & 0.2 \\
4 & 2002 & 0.4 \\
4 & 2003 & 0.2 \\
4 & 2004 & 0.3 \\
4 & 2005 & 0.5 \\
4 & 2006 & 0.1 \\
4 & 2007 & 0.12 \\
4 & 2008 & 0.13 \\
4 & 2009 & 0.14 \\
\hline
\end{array}
$$
The generalized DD equation can be specified as follows:
$$
y_{i,t} = \gamma_i + \lambda_t + \delta CT_{i,t} + \epsilon_{i, t}, \cdots (1)
$$
where $i$ denotes some individual and $t$ for year. $\gamma_i$ are individual fixed effects, and $\lambda_t$ are year fixed effects. $CT_{i,t}$ is a continuous treatment variable that measures individual $i$'s exposure to some "shock" in year $t$. Each of the individuals only experiences one treatment, i.e., individual 1 in 2004, individual 2 never, individual 3 in 2003, and individual 4 in 2006. For example:
Consider individual 1, his exposure to the shock is 0 until treatment occurs in year 2004, where his exposure to the shock has an intensity of 0.3. In the year after treatment, his exposure becomes 0.4, then 0.42, then 0.2 and dies out in 2008.
Individual 2 is "never treated".
Individual 3 has a constant exposure until he becomes treated in 2003, where his exposure jumps to 0.5. As a result of this treatment, his exposure then fluctuates around until the end of the sample period 2009.
Individual 4 has a fluctuating exposure until he becomes treated in 2006, where his exposure falls to 0.1, then fluctuates until the end of the sample period 2009.
My first question is whether Equation (1) is an appropriate generalized DD equation that I can use to estimate the "treatment" effect?
My second question is how can I estimate a dynamic period by period coefficient version of Equation (1)? For example, in the usual case where treatment is staggered but binary (and where the treatment variable is 0 in the "pre treatment" period), one can easily estimate a dynamic version by using period by period dummy variables such as shown [here](https://stats.stackexchange.com/questions/526787/how-to-plot-the-graph-or-perform-a-formal-test-of-parallel-trends-for-generalize?rq=1). However, how can I do that here? The continuous treatment variable (CT) is not always 0 in the pre-treatment period, nor does it take on a constant value post-treatment either.
EDIT: Some more information on the "treatment". Each treatment is the introduction of a new regulation. Each individual is treated based on how much he spends in a given year. If he spends more, his intensity of treatment (i.e., exposure to the regulatory shock) is higher, if he spends less, his intensity of treatment (i.e., exposure to the regulatory shock) is less, CT is bounded between 0 and 1. The first regulation occurred before the start of the sample period in 1992. This affected ALL individuals at the same time, but then after 1992, for each individual, a "newer" version of the regulation came into effect, but the introduction is staggered for each individual. The difference between the "newer" regulation and the initial one in 1992 is that the amount of money one spends translates into a different amount of treatment intensity. For example, if someone spends \$1 under the 1992 regulation, then the treatment intensity, say, takes a value of 0.1, but under the newer regulation, \$1 may translate into only 0.01 (these are just hypothetical values I made up to illustrate the difference in the regulation). Let me explain in detail how CT varies for each individual:
For individual 1, under the 1992 regulation, he spends nothing in 2000, 2001, 2002, and 2003, thus his CT is 0. For him, the new regulation is enforced in 2004, he happens to spend some money in 2004, spends a different amount of money in 2005, etc. That's why his CT fluctuates from 2004 to 2007. He spends nothing in 2008 and 2009, so his CT is 0.
Individual 2 never spends anything throughout the entire sample period, so his CT is always 0.
Individual 3 spends a constant amount of money in each of the years 2000, 2001, and 2002, so under the 1992 regulation, his CT is always 0.1. It does not fluctuate because he spends the same amount in each of these three years. But the newer regulation comes into effect for him in 2003. He spends varying amounts of money until the end of the sample period, that's why his CT fluctuates after 2003.
Individual 4 spends a varying amount of money each year from 2000 to 2005. Under the 1992 regulation, his CT fluctuates around. But the newer regulation for him comes into effect in 2006. Again he spends a varying amount of money until the end of the sample period, so his CT fluctuates until 2009.
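For reference, Equation (1) above corresponds to a standard two-way fixed-effects regression, which can be estimated directly. A minimal sketch on simulated data (all numbers hypothetical, and dummy-variable estimation is just one of several equivalent approaches) using `statsmodels`:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
delta = 0.8                                    # "true" treatment effect (made up)
individuals, years = range(50), range(2000, 2010)
gamma = rng.normal(size=len(individuals))      # individual fixed effects
lam = rng.normal(size=len(years))              # year fixed effects

rows = []
for i in individuals:
    for k, t in enumerate(years):
        ct = rng.uniform(0, 1)                 # continuous treatment intensity in [0, 1]
        y = gamma[i] + lam[k] + delta * ct + rng.normal(scale=0.1)
        rows.append({'id': i, 'year': t, 'CT': ct, 'y': y})
df = pd.DataFrame(rows)

# Equation (1): two-way fixed effects via individual and year dummies
res = smf.ols('y ~ CT + C(id) + C(year)', data=df).fit()
print(res.params['CT'])                        # close to the true delta = 0.8
```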
| Staggered difference-in-difference with continuous treatment | CC BY-SA 4.0 | null | 2023-03-24T04:37:45.713 | 2023-04-18T08:20:06.620 | 2023-04-18T08:16:24.760 | 246835 | 29021 | [
"time-series",
"econometrics",
"difference-in-difference",
"treatment-effect",
"generalized-did"
] |
610542 | 1 | null | null | 3 | 197 | Large language models, such as GPT-3, BLOOM, etc., can generate open-ended text. Say I want to prompt these models to answer a question. How can I compare the semantic similarity of the answer a model provides with a reference answer?
Eg, if the prompt is, `'What should I eat for breakfast?'`, an example response that BLOOM will output is `'when should i eat lunch? etc.). Is there any way to get some suggestions from microsoft?'`
Say the reference answer to this question is `'You should eat a full English breakfast'`. How can I measure how different the BLOOM response is to the reference answer, in terms of its content?
Currently, I am encoding the text into BERT embeddings, and then computing cosine similarity between the output text and reference text. But the cosine similarity scores are quite low, because, as you can see, the BLOOM answer is not really on task. Should I be finetuning the BLOOM model somehow before trying to test semantic similarity?
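The cosine-similarity step can be sketched as follows, with toy vectors standing in for the BERT sentence embeddings:

```python
import numpy as np

def cosine_similarity(u, v):
    # cosine of the angle between two embedding vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# toy stand-ins for embeddings of the generated and reference answers
emb_generated = np.array([0.1, 0.9, 0.2])
emb_reference = np.array([0.2, 0.8, 0.1])

sim = cosine_similarity(emb_generated, emb_reference)
print(sim)   # close to 1 for semantically similar texts
```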
The original BLOOM/GPT-3 models did quite well on SQuAD, but these models do not seem to answer questions very well without fine-tuning.
| How to compare the semantic similarity of text generated by large language models (GPT-3, BLOOM etc) to reference text? | CC BY-SA 4.0 | null | 2023-03-24T04:42:26.633 | 2023-04-16T03:16:20.300 | null | null | 379430 | [
"natural-language",
"word-embeddings",
"latent-semantic-analysis",
"text-generation",
"gpt"
] |
610543 | 1 | null | null | 1 | 11 | I have two sets of moment conditions: one is an IV moment condition with $N$ observations, but the second moment condition only has $N_1$ observations, with $N_1<N$.
How can I build the covariance matrix? I would appreciate any replies!
| How to build the covariance matrix with different weighted moments via GMM | CC BY-SA 4.0 | null | 2023-03-24T04:52:30.117 | 2023-03-24T04:52:30.117 | null | null | 384005 | [
"covariance-matrix",
"method-of-moments",
"generalized-moments"
] |
610544 | 1 | 610585 | null | 0 | 64 | I apologize if I am asking a trivial question; this is something I used to do easily in college, but now I have forgotten it.
I would like to know how to estimate the expected value and variance of a mixture of Poisson distributions.
$$0.3\,\frac{2^x e^{-2}}{x!} \;+\; 0.45\,\frac{3^x e^{-3}}{x!} \;+\; 0.25\,\frac{0.5^x\, e^{-0.5}}{x!}$$
[](https://i.stack.imgur.com/Sc3Rm.png)
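For reference, a mixture's moments follow from the component moments (standard mixture identities, not from the post): $E[X]=\sum_k w_k\lambda_k$ and, since $E[X^2]=\lambda+\lambda^2$ for a Poisson, $\operatorname{Var}(X)=\sum_k w_k(\lambda_k+\lambda_k^2)-E[X]^2$. A quick numerical check:

```python
weights = [0.3, 0.45, 0.25]
rates = [2.0, 3.0, 0.5]          # Poisson rates of the three components

mean = sum(w * r for w, r in zip(weights, rates))            # E[X]
ex2 = sum(w * (r + r**2) for w, r in zip(weights, rates))    # E[X^2]
var = ex2 - mean**2

print(mean, var)   # 2.075 and 3.081875
```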
| Expected value and variance of p | CC BY-SA 4.0 | null | 2023-03-24T05:55:36.460 | 2023-03-24T13:41:25.510 | 2023-03-24T09:28:25.763 | 110833 | 356687 | [
"variance",
"expected-value"
] |
610545 | 5 | null | null | 0 | null | null | CC BY-SA 4.0 | null | 2023-03-24T06:36:42.300 | 2023-03-24T06:36:42.300 | 2023-03-24T06:36:42.300 | 1352 | 1352 | null | |
610546 | 4 | null | null | 0 | null | ChatGPT is a Large Language Model (LLM). Don't use this tag to ask for clarification of "advice" given by ChatGPT, but only for questions about its statistical underpinnings. Note that "advice" given by ChatGPT is frequently wrong and misleading! | null | CC BY-SA 4.0 | null | 2023-03-24T06:36:42.300 | 2023-03-24T06:36:42.300 | 2023-03-24T06:36:42.300 | 1352 | 1352 | null |
610547 | 5 | null | null | 0 | null | null | CC BY-SA 4.0 | null | 2023-03-24T06:40:30.670 | 2023-03-24T06:40:30.670 | 2023-03-24T06:40:30.670 | 1352 | 1352 | null | |
610548 | 4 | null | null | 0 | null | Large Language Models (LLMs) are pretrained models that will probabilistically generate natural language texts. The underlying model is typically a Deep Learning one. Examples include GPT models. | null | CC BY-SA 4.0 | null | 2023-03-24T06:40:30.670 | 2023-03-24T06:40:30.670 | 2023-03-24T06:40:30.670 | 1352 | 1352 | null |
610549 | 1 | null | null | 0 | 30 | I'm a scientist but not a professional mathematician and in [this question](https://math.stackexchange.com/questions/4663958/possible-typo-in-a-text-using-fourier-transform-properties/4663990), I asked about a possible typographical error in an [article on round-off error](https://doi.org/10.1021/ac50057a033) that I've been reading in the journal Analytical Chemistry.
The results presented in that article are said to rely on three properties of [characteristic functions](https://en.wikipedia.org/wiki/Characteristic_function_(probability_theory)) (cf). Specifically, the paper says (column 1, p. 1142) the following (in which I have corrected the typo).
>
Three general properties of characteristic functions are:
(1) If $g_1$ is the c.f. of $f_1$ and $g_2$ is the c.f. of $f_2$ then $g_1 - g_2$ is the c.f. of $f_1 - f_2$.
(2) If $g(t)$ is the c.f. of $f(z)$ and $z$ is displaced by the fixed value $z_0$ so that the PDF is $f(z-z_0)$, the c.f. of $f(z-z_0)$ is $g(t)\, \mathrm{exp}(i z_0 t).$
(3) If $g(t)$ is the c.f. of $f(z)$, then the c.f. of $\int_{-\infty}^z f(w)\,\mathrm dw$ is $-g(t)/(it)$.
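Property (2), the shift theorem familiar from Fourier analysis, is easy to verify numerically. A small sketch for a standard normal density (the numerical setup is mine, not from the paper):

```python
import numpy as np
from scipy.integrate import quad

def cf_numeric(pdf, t):
    # g(t) = integral of exp(i t z) f(z) dz, computed as separate real and imaginary parts
    re, _ = quad(lambda z: np.cos(t * z) * pdf(z), -np.inf, np.inf)
    im, _ = quad(lambda z: np.sin(t * z) * pdf(z), -np.inf, np.inf)
    return re + 1j * im

norm_pdf = lambda z: np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
z0, t = 1.5, 0.7
g = cf_numeric(norm_pdf, t)                            # c.f. of f(z)
g_shifted = cf_numeric(lambda z: norm_pdf(z - z0), t)  # c.f. of f(z - z0)
print(np.isclose(g_shifted, g * np.exp(1j * z0 * t)))  # True: the shift property
```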
Are there well recognized names for each of the listed properties?
| Name of a property of characteristic functions (Fourier transforms) | CC BY-SA 4.0 | null | 2023-03-24T06:44:08.410 | 2023-03-24T09:29:03.823 | 2023-03-24T09:29:03.823 | 61108 | 61108 | [
"terminology",
"characteristic-function"
] |
610550 | 1 | 610552 | null | 1 | 12 | If my p-values for all variables are not significant, does looking at/discussing the direction of variables even make sense?
| If coefficients insignificant, direction of relationship between variables also insignificant? | CC BY-SA 4.0 | null | 2023-03-24T07:33:36.753 | 2023-03-24T08:34:49.890 | 2023-03-24T08:06:41.587 | 383803 | 383803 | [
"multiple-regression",
"p-value"
] |
610551 | 1 | 610845 | null | 3 | 95 | What is the theoretical justification for using the square root of the sample size as the weight when combining z-scores in a meta-analysis?
Is this because the variance of the z-score is proportional to 1/n, where n is the sample size, so the inverse variance is proportional to n?
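For concreteness, the weighting being asked about is Stouffer's weighted Z-method: with weights $w_i$, the combined statistic is $Z = \sum_i w_i z_i / \sqrt{\sum_i w_i^2}$, and choosing $w_i=\sqrt{n_i}$ gives $\sum_i w_i^2 = \sum_i n_i$. A minimal sketch (study values are hypothetical):

```python
import numpy as np

def stouffer(z_scores, sample_sizes):
    # weighted Stouffer combination with w_i = sqrt(n_i)
    z = np.asarray(z_scores, dtype=float)
    w = np.sqrt(np.asarray(sample_sizes, dtype=float))
    return float(np.sum(w * z) / np.sqrt(np.sum(w**2)))

z_comb = stouffer([1.5, 2.0, -0.5], [100, 50, 25])
print(z_comb)   # about 2.014
```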
| Using sample sizes when combining z-scores | CC BY-SA 4.0 | null | 2023-03-24T07:43:45.577 | 2023-03-28T08:35:51.683 | 2023-03-27T07:36:36.680 | 144600 | 169706 | [
"meta-analysis",
"weights",
"z-score"
] |
610552 | 2 | null | 610550 | 1 | null | Generally speaking, no. If you fail to reject the null hypothesis (p > 0.05), you cannot be sure that the effect, and hence its direction, is not just noise or randomness in your sample. However, in some settings (such as small-sample-size domains) people argue that the p-value is not as relevant as long as the effect size is large enough. But this is only the case in very specific scenarios and requires careful consideration.
| null | CC BY-SA 4.0 | null | 2023-03-24T08:34:49.890 | 2023-03-24T08:34:49.890 | null | null | 220466 | null |
610554 | 2 | null | 610521 | 0 | null | You can use the `scipy` function `skewnorm`, or, as its documentation shows, implement it yourself by multiplying the CDF with the PDF, where `a` is the skewness factor:
`skewnorm.pdf(x, a) = 2 * norm.pdf(x) * norm.cdf(a*x)`
The complete documentation is found here:
[https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.skewnorm.html](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.skewnorm.html)
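A quick check that the identity above holds in `scipy.stats`:

```python
import numpy as np
from scipy.stats import norm, skewnorm

a = 4.0                        # skewness parameter; a > 0 skews the right tail
x = np.linspace(-3, 3, 101)

lhs = skewnorm.pdf(x, a)
rhs = 2 * norm.pdf(x) * norm.cdf(a * x)
print(np.allclose(lhs, rhs))   # True
```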
| null | CC BY-SA 4.0 | null | 2023-03-24T08:48:24.397 | 2023-04-14T08:21:46.720 | 2023-04-14T08:21:46.720 | 220466 | 220466 | null |
610556 | 1 | null | null | 2 | 6 | I'm working on a project involving large and evolving datasets and I'm interested in using incremental, online, or continuous learning algorithms in R. Can anyone recommend well-maintained R packages for implementing these algorithms? Additionally, any resources (tutorials, blog posts, papers, etc.) explaining their usage would be greatly appreciated.
Looking forward to your suggestions!
| Seeking R Packages and Resources for Incremental/Online/Continuous Learning Algorithms | CC BY-SA 4.0 | null | 2023-03-24T08:52:24.980 | 2023-03-24T08:52:24.980 | null | null | 384019 | [
"r"
] |
610557 | 1 | null | null | 1 | 29 | I don't have a lot of experience with multilevel models and Mplus, so I'm unsure if my approach is appropriate.
I would therefore be very grateful for any help and feedback!
I have data from an experiment.
3 groups of participants each went through 4 trials in which they watched instructional videos. After each trial, their state motivation was measured using 4 items. At the end of the experiment, all videos were rated together using one scale and their trait motivation (4 items) was measured as a control variable.
I would now like to examine the relationship between the video ratings and the state-motivation ratings. At the same time, I would like to control for group membership and trait motivation.
To do this, I have specified a multilevel model in which the ratings after each trial (level 1) are nested within the individual subjects (level 2). In particular, I am interested in the cross-level influence of video rating (level 2) on state motivation (level 1).
At level 1 I have a latent dependent variable (state-motivation) with 4 indicators. Additionally, I want to control for the video number.
On level 2 I have the video rating as an independent variable and group membership and trait motivation as covariates.
The data is in long format.
Here is the corresponding Mplus syntax:
```
USEVARIABLE = ID !individual
NIS !videorating
StMoIt1-StMoIt4 !Item 1-4 of the state motivation
TrMoIN1-TrMoIN4 !Item 1-4 of the trait motivation
video !number of the video
high mid; ! group membership as dummy coding
CLUSTER = ID;
WITHIN = video;
BETWEEN = TrMoIN1 TrMoIN2 TrMoIN3 TrMoIN4 NIS high mid;
MISSING=;
DEFINE:
high = 0;
IF (group == 3) then high = 1;
low = 0;
IF (group == 1) then low = 1;
mid = 0;
IF (group == 2) then mid = 1;
ANALYSIS: TYPE= TWOLEVEL;
MODEL:
%WITHIN%
StMot BY StMoIt1-StMoIt4*(p1-p4);
StMot ON video;
%BETWEEN%
StMotB BY StMoIt1-StMoIt4*(p1-p4);
TrMot BY TrMoIN1-TrMoIN4;
NIS ON high mid;
StMotB ON NIS TrMot;
```
Is this multilevel model an appropriate approach to answer my question?
If not, could you point me in the right direction?
I have also thought about a growth model, however I am not interested in changes over time/videos and the state motivation is very stable across videos.
I am also particularly unsure about using long-format data in this.
Thank you very much!
| Cross-level influence in multilevel model for longitudinal data with time-invariant outcome | CC BY-SA 4.0 | null | 2023-03-24T09:00:32.947 | 2023-03-24T10:37:23.683 | 2023-03-24T10:37:23.683 | 384018 | 384018 | [
"panel-data",
"multilevel-analysis",
"latent-variable",
"mplus"
] |
610558 | 2 | null | 610501 | 1 | null | The main difference between the two is in their assumption about the distribution of $y$. Poisson regression assumes that $y$ is the result of a count, but it can also handle rates. Beta regression assumes that $y$ follows a Beta distribution. In Poisson regression (log link), the coefficients represent the logarithm of the ratio of mean counts for a one-unit increase in the predictor variable. In Beta regression with the usual logit link, the coefficients represent the log odds ratio of the mean proportion for a one-unit increase in the predictor variable. It sounds like both may be valid in your case.
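As a tiny numeric illustration of the log-link interpretation (the coefficients below are made up, not from any fitted model): with $\log \mu = \beta_0 + \beta_1 x$, a one-unit increase in $x$ multiplies the mean count by $e^{\beta_1}$:

```python
import math

b0, b1 = 0.5, 0.3  # hypothetical Poisson-regression coefficients

def mean_count(x):
    # log link: log(mu) = b0 + b1 * x
    return math.exp(b0 + b1 * x)

ratio = mean_count(3.0) / mean_count(2.0)
print(round(ratio, 4), round(math.exp(b1), 4))  # → 1.3499 1.3499
```

The same multiplicative reading carries over to the Beta-regression side on its own link scale.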
Regarding your second question:
>
What if I wanted to run a regression to see whether the dominant fruit in the basket also has the most dots? So for instance, in some baskets, apples dominate, while in other baskets, oranges dominate. So instead of apples vs. oranges, I would instead look at dominant vs. non-dominant fruit. –
In that case, you may want to reparametrize your regression. Your current definition isn't clear, as you did not define what is dominant nor what "has the most dots" mean. Do you mean that
- if there are more oranges than apples in a basket, THEN oranges on average have more dots? or
- if there are more oranges than apples in a basket, THEN it is more likely that the fruit with the most dots in the basket is an orange?
| null | CC BY-SA 4.0 | null | 2023-03-24T09:24:35.170 | 2023-03-24T09:24:35.170 | null | null | 285927 | null |
610560 | 2 | null | 219605 | 1 | null | I have used the auto-arima function to obtain the parameters of the best model (p, d, q); I would also like to have the RMSE value for each order (p, d, q).
Could you please help me find the RMSE value corresponding to each AIC value?
I would like to illustrate the overfitting engendered by the model with the best RMSE.
Thanks in advance.
| null | CC BY-SA 4.0 | null | 2023-03-24T09:59:11.947 | 2023-03-24T09:59:11.947 | null | null | 384027 | null |
610562 | 2 | null | 610522 | 1 | null | In real life, a variable is either endogenous or exogenous. It can't be both, since the definition of either of those terms is the exact opposite of the other one.
If you are trying to assess whether a variable is endogenous or not using a statistical test, you may be uncertain whether it is endogenous or exogenous. Similarly, doing two different tests might give you two different results as here (if that never happened, they wouldn't be different tests!)
If you are uncertain, it is normally best to treat the variable as endogenous. If you do an exogenous analysis on a variable that could be endogenous, it is usually useless, and readers will not trust it. Conversely, if you do an endogenous analysis on a variable which was exogenous, it is usually still valid (though it may not be as well powered).
| null | CC BY-SA 4.0 | null | 2023-03-24T10:07:51.510 | 2023-03-24T10:07:51.510 | null | null | 129051 | null |
610563 | 2 | null | 610515 | 0 | null | You can always have fixed effects at any level higher than all of the clustering / random effects.
Imagine that you fit a regression to a single state, with random effects or clustering at the household level. That is certainly a valid regression.
Now imagine that you do the same for each state. The regression constants have now become fixed effects for each state! (Okay, you have to estimate the variance, and that becomes a pooled estimate rather than being estimated separately in each regression, but that doesn't change whether it's valid.)
| null | CC BY-SA 4.0 | null | 2023-03-24T10:14:50.747 | 2023-03-24T10:14:50.747 | null | null | 129051 | null |
610564 | 1 | null | null | 0 | 35 | I am solving the problem of detecting swallowing and non-swallowing events from audio. I labelled the data using the Praat software by marking the swallowing and non-swallowing events. I trained the model using LibSVM with a balanced dataset of 1841 instances of each class and tested with a non-balanced dataset of 369 non-swallow and 548 swallow events. I performed a grid search for the optimal $(C,\gamma)$ and found it to be $(2,0.25)$, with a 5-fold cross-validation accuracy of 90%, over a total of 29 recordings. But when I tested the model on the test data, I obtained 60% accuracy. I also tested on balanced data, but the result is still not acceptable. If the model does not generalize or is overfit, why is the cross-validation accuracy 90%? Any ideas? How can I fix the problem?
| High Cross Validation but low test accuracy on LibSVM | CC BY-SA 4.0 | null | 2023-03-24T10:22:30.110 | 2023-03-24T11:16:53.827 | null | null | 382772 | [
"cross-validation",
"svm",
"accuracy",
"supervised-learning",
"libsvm"
] |
610565 | 2 | null | 610518 | 2 | null | I think that your premise, that the i.i.d. assumption is often made when comparing dependent ROC curves, is unfounded. The seminal papers by Hanley and DeLong have explicit terms to account for the covariance between the variables.
DeLong expresses the covariance of two correlated ROC curves as:
[](https://i.stack.imgur.com/joekj.png)
Hanley calculates a critical ratio $z$ which takes the correlation $r$ between the two variables into account.
[](https://i.stack.imgur.com/DThG3.png)
These papers are highly cited and are the basis for all such comparisons to date. They do not make the i.i.d. assumption; they explicitly account for the dependence.
References:
- Elisabeth R. DeLong, David M. DeLong and Daniel L. Clarke-Pearson (1988) "Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach". Biometrics 44, 837-845.
- James A. Hanley and Barbara J. McNeil (1982) "The meaning and use of the area under a receiver operating characteristic (ROC) curve". Radiology 143, 29-36.
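For a quick numerical sense of Hanley's approach, here is a stdlib-Python sketch of the critical ratio for two correlated areas (all input values below are hypothetical):

```python
import math

def hanley_z(a1, a2, se1, se2, r):
    """Critical ratio for two correlated AUCs:
    z = (A1 - A2) / sqrt(SE1^2 + SE2^2 - 2 * r * SE1 * SE2)."""
    return (a1 - a2) / math.sqrt(se1 ** 2 + se2 ** 2 - 2.0 * r * se1 * se2)

# Two AUCs estimated on the same cases, with correlation r between them
z = hanley_z(0.85, 0.80, 0.03, 0.035, r=0.4)
print(round(z, 2))  # → 1.39
```

Setting $r=0$ recovers the independent-samples comparison, which makes explicit that the correlation term is precisely what relaxes the i.i.d. assumption.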
| null | CC BY-SA 4.0 | null | 2023-03-24T10:48:30.603 | 2023-03-26T08:46:51.077 | 2023-03-26T08:46:51.077 | 36682 | 36682 | null |
610566 | 1 | 610576 | null | 0 | 25 | I am currently working with the rugarch package to forecast the EU-ETS price. While I get reasonable results for the in-sample volatility, the forecast of the time series does not look correct at all:
[](https://i.stack.imgur.com/kfzEE.png)
Is this because of the low ar1 and ar2 parameter estimates? If so, is there a way to overcome this problem? I have daily observations (n=3,000)
```
-----------------------------------
GARCH Model : eGARCH(1,1)
Mean Model : ARFIMA(2,0,0)
Distribution : sstd
Optimal Parameters
------------------------------------
Estimate Std. Error t value Pr(>|t|)
mu 0.000703 0.000282 2.4935 0.012651
ar1 -0.037111 0.014955 -2.4815 0.013084
ar2 -0.027603 0.011804 -2.3385 0.019361
omega -0.165146 0.020888 -7.9061 0.000000
alpha1 -0.035044 0.010961 -3.1971 0.001388
beta1 0.977160 0.002873 340.1154 0.000000
gamma1 0.212031 0.020791 10.1981 0.000000
skew 0.978063 0.021619 45.2417 0.000000
shape 5.448983 0.471307 11.5614 0.000000
```
Looking forward to your advice.... Thanks!
| rugarch: Forecast result does not show any AR structure | CC BY-SA 4.0 | null | 2023-03-24T10:49:13.727 | 2023-03-24T11:42:16.407 | 2023-03-24T11:36:23.033 | 53690 | 384028 | [
"r",
"time-series",
"forecasting",
"autoregressive",
"garch"
] |
610567 | 1 | 610584 | null | 2 | 85 | I’m trying to understand the interpretation of interaction terms, specifically in the context of GEE models. I’m familiar with them in conditional models, and am comfortable with marginal effects in the absence of an interaction, but the two together is causing me some trouble. I have illustrated an example below.
Take, for example, a longitudinal study looking at the effect of a drug (treatment) vs a placebo (placebo) on weight in kg (weight, continuous), and how the effect of treatment varies with time in years (time, continuous). If the GEE output was:
Variable | Coefficient (95% CI)
_cons | 70 (50,90)
treatment | -5 (-9,-1)
time | 1.1 (1.05,1.15)
treatment x time | 0.9 (-0.1,1.9)
Without the interaction, I understand that the treatment coefficient would be interpreted as the marginal effect of treatment on weight, as would that for a 1-year increase in time. I am, however, at a loss at to the marginal interpretation of time, treatment or the interaction between them. Any help would be greatly appreciated! Also, I am aware that conditional = marginal in the context of linear models but this is more to demonstrate the point before moving onto more complicated models.
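To make my confusion concrete, the fitted mean implied by the table above is

$$\operatorname{E}[\text{weight}] = 70 - 5\cdot\text{treatment} + 1.1\cdot\text{time} + 0.9\cdot(\text{treatment}\times\text{time}),$$

so, reading off the linear predictor, the treatment-placebo difference at time $t$ is $-5 + 0.9t$, and the slope of time is $1.1$ under placebo versus $1.1 + 0.9 = 2.0$ under treatment. What I am unsure about is whether these quantities can still be read as marginal (population-averaged) effects in the presence of the interaction.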
Unfortunately I couldn’t find any questions covering this on the site
| GEE interaction interpretation | CC BY-SA 4.0 | null | 2023-03-24T11:00:50.057 | 2023-03-24T13:30:10.633 | null | null | 384030 | [
"interaction",
"generalized-estimating-equations"
] |
610568 | 1 | null | null | 1 | 56 | I am a biologist with a very limited statistics/mathematics background who frequently encounters the same problem when trying to analyse a specific type of data, and would finally like to begin to understand how to approach this properly. I will try to explain the basis of the experiment and then describe the output I'd like;
We're looking at survival of two different strains (S1 and S2) of the same bacterial species on a range of surfaces. Each strain is placed on a surface and at specific timepoints (say t-1hr) the number of viable cells is determined - this count is divided by the initial inoculum (t-0hr) to produce a ratio of viable cells. This is repeated in triplicate for each strain - so my first question is:
>
a) what is the most accurate way to find the mean and estimate the error of the calculated ratios for each time point?
Let's say x = t-0hr and y = t-1hr
Replicate 1: x = 6x10^7 y = 4x10^6 ratio = 0.0667
Replicate 2: x = 3x10^7 y = 1x10^6 ratio = 0.0333
Replicate 3: x = 9x10^6 y = 9x10^4 ratio = 0.01
I want a mean of the ratios ± some estimate of the variance and I feel like simply adding those ratios together and dividing by 3 is incorrect.
Once I have a mean and error for S1 and S2 for each timepoint, I then want to examine whether these means are significantly different between strains, ideally at each timepoint - in essence, is one strain better at surviving on that surface after a given time. How would I then go about that?
I feel like this is extremely basic but I have spent a lot of time trying to find the correct answer in these and other forums and the answers are always far too in-depth for me to even slightly understand and require quite a substantial amount of prior knowledge regarding mathematics.
Many thanks in advance for your time!
Chris
| Calculating mean and estimating variance of ratios | CC BY-SA 4.0 | null | 2023-03-24T11:04:49.127 | 2023-03-24T22:10:15.867 | 2023-03-24T15:20:05.207 | 384032 | 384032 | [
"variance",
"ratio"
] |
610569 | 1 | null | null | 0 | 29 | I would like to use a sample size method proposed by Alonzo et al.*
More specifically, I would like to use their sample size method for matched screen-positive studies (section 6 in the paper). I have replicated their example in R. Below is a screenshot of their equation, and then follows my code in R:
[](https://i.stack.imgur.com/iAm8v.png)
Here implemented in R:
```
DR_a <- 0.6 # detection rate for test a: given in the paper
DR_b <- 0.8 # detection rate for test b: given in the paper
DDR <- 0.32 # P(Y_a=1, Y_b=1, D=1): given in the paper.
Z <- qnorm(p=.10, lower.tail=FALSE) # to have power 1-beta = 0.9:
a <- 1-sqrt(1-0.05) #significance level a* = 1-sqrt(1-0.05)
Z2 <- qnorm(p=a, lower.tail=FALSE)
delta <-1
y <- 1.2 #rTPR_A:B
((Z+Z2)/(log(y)/sigma))^2 * ((1+y)*DR_b - 2*DDR) / (y*(DR_b^2))
```
Now, I would like to move on to using it for my data.
Delta = 1 was used in the equation above, since this is a superiority design. Changing to a non-inferiority design, I would simply set delta below 1, preferably 0.90.
I assume that changing the study design from matched to unmatched just results in a doubling of needed participants, i.e. n*2.
Does this sound valid?
- Alonzo, T. A., Pepe, M. S., & Moskowitz, C. S. (2002). Sample size calculations for comparative studies of medical tests for detecting presence of disease. Statistics in medicine, 21(6), 835-852.
| Sample size calculation cf. Alonzo et al | CC BY-SA 4.0 | null | 2023-03-24T11:15:09.503 | 2023-03-29T18:33:29.850 | 2023-03-29T18:33:29.850 | 11887 | 322537 | [
"sample-size"
] |
610570 | 2 | null | 610564 | 0 | null | I think you have two possibilities:
- Sampling error: this means your samples don't represent the population very well. In this case you'll need to add more samples in the zones where your model is weak, or perhaps delete some bad samples to make the distribution of the reference response more normal.
- Overfitting: this means you'll need model regularisation (you have used the cost 'l2 penalty', but maybe it's not enough); search until you find a regularisation approach you can apply to your case.
I think you should try the first one; it is the more promising route.
Good luck
| null | CC BY-SA 4.0 | null | 2023-03-24T11:15:54.237 | 2023-03-24T11:16:53.827 | 2023-03-24T11:16:53.827 | 384029 | 384029 | null |
610571 | 2 | null | 572369 | 1 | null |
### In short
The Kullback-Leibler divergence is the expectation value of the log-odds of two distributions
$$D_{KL}(A || B) = \textbf{E}_A\left[\log \left(\frac{P_A(x)}{P_B(x)} \right) \right]$$
or for continuous distributions
$$D_{KL}(A || B) = \textbf{E}_A\left[\log \left(\frac{f_A(x)}{f_B(x)} \right) \right]$$
When you transform both $A$ and $B$ in the same way with a one-to-one function, then the events in both transformed and non-transformed representations are exchangeable while the log-odds for specific events remain the same, and that is why the divergence remains the same.
---
### Intuitive graphical view
The transformations used with normalizing flows are invertible, which requires that they are [one-to-one functions](https://en.wikipedia.org/wiki/Bijection). This means that the volume under the curve of transformed elements remains fixed.
For example in the graph below the transformation is the quantile function of the standard normal which transforms the space $[0,1]$ to the line $\mathbb{R}$. The events $0.18<x<0.22$ and $0.45<x<0.49$ transform to respectively $-0.915<u<-0.772$ and $-0.126<u<-0.025$.
What you see is that the areas under the curves relating to these events change in width and height, but the total area remains the same.
In addition, the ratio of the heights of the two curves remains the same! The transformation changes the heights of the two distributions in the same way, so the log-odds of events don't change.
[](https://i.stack.imgur.com/PeYMt.png)
The computation of the divergence will be an integral expressing a weighted sum/average of the log-odds $\log \left(\frac{f_A(x)}{f_B(x)}\right)$. For each contribution/weight of probability $f_A(x) dx$ you have an equivalent size weight $g_A(u) du$ with the same log-odds.
$$ \begin{array}{}
\int_{x \in \mathcal{X}} \log \left(\frac{f_A(x)}{f_B(x)}\right) f_A(x) dx &=& \int_{x \in \mathcal{X}} \text{LO}_X(x) f_A(x) dx \\
&=& \int_{u \in \mathcal{U}} \text{LO}_U(u) f_A(x(u)) \frac{dx(u)}{du} du \\
&=& \int_{u \in \mathcal{U}} \text{LO}_U(u) g_A(u) du &=& \int_{u \in \mathcal{U}} \log \left(\frac{g_A(u)}{g_B(u)}\right) g_A(u) du \\
\end{array}$$
where $x(u)$ is the value of $x$ when we apply the reverse transformation to $u$.
### One-to-one
An important requirement is that the transformation is one-to-one. This ensures that the transformation of the distribution density in terms of $x$ to the distribution density in terms of $u$ can be written as
$$g(u) = f(x(u)) \left|\frac{\text{d}x(u)}{\text{d}u}\right| $$
If multiple values of $x$ transform to a single $u$ then we would get a sum over all those values of $x$ that transform to $u$
$$g(u) = \sum_{i} f(x_i(u)) \left|\frac{\text{d}x_i(u)}{\text{d}u}\right|$$
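A quick Monte-Carlo sketch of this invariance (standard library only; the particular distributions are my choice for illustration): take $A=\mathcal N(0,1)$ and $B=\mathcal N(1,1)$ and push both through the one-to-one map $u = e^x$, giving two log-normals. The pointwise log-odds are unchanged because the Jacobian factor $1/u$ cancels, so the estimated divergence is the same; the true value is $(\mu_A-\mu_B)^2/2 = 0.5$.

```python
import math
import random

def log_normal_pdf(x, mu):
    # log-density of N(mu, 1)
    return -0.5 * (x - mu) ** 2 - 0.5 * math.log(2.0 * math.pi)

def log_lognormal_pdf(u, mu):
    # log-density of exp(N(mu, 1)) for u > 0; the -log(u) term is the Jacobian
    return log_normal_pdf(math.log(u), mu) - math.log(u)

random.seed(0)
lr_x, lr_u = [], []
for _ in range(200_000):
    x = random.gauss(0.0, 1.0)   # sample from A
    u = math.exp(x)              # the same event, transformed
    lr_x.append(log_normal_pdf(x, 0.0) - log_normal_pdf(x, 1.0))
    lr_u.append(log_lognormal_pdf(u, 0.0) - log_lognormal_pdf(u, 1.0))

kl_x = sum(lr_x) / len(lr_x)   # Monte-Carlo estimate of D_KL(A || B)
kl_u = sum(lr_u) / len(lr_u)   # same estimate in the transformed space
print(abs(kl_x - kl_u) < 1e-9, round(kl_x, 1))  # → True 0.5
```

This is exactly the exchange of weights in the integral above: each contribution $f_A(x)\,dx$ maps to an equal contribution $g_A(u)\,du$ carrying the same log-odds.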
| null | CC BY-SA 4.0 | null | 2023-03-24T11:30:38.093 | 2023-03-24T12:01:26.447 | 2023-03-24T12:01:26.447 | 164061 | 164061 | null |
610572 | 1 | null | null | 0 | 50 | I'm trying to use Scipy's [scipy.optimise.curve_fit](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html#scipy.optimize.curve_fit) to calculate the parameters for a non-linear least squares fit of my data to a function. I have 2-3 response values for each value of my explanatory variable. Each of these individual response values is assumed to be errorless.
I've done a fair bit of forum searching online but I still need help understanding what I should be specifying for the `absolute_sigma` and `sigma` arguments and what effect these have on the output variance-covariance matrix, as I need the matrix for calculating confidence intervals for my fitted parameters and confidence bands for my function via the Delta method.
Any advice would be very much appreciated!
| Understanding variance calculations in scipy.optimise.curve_fit (Python) | CC BY-SA 4.0 | null | 2023-03-24T11:32:13.470 | 2023-03-24T11:32:13.470 | null | null | 384035 | [
"python",
"confidence-interval",
"variance",
"nonlinear-regression",
"scipy"
] |
610574 | 1 | null | null | 0 | 119 | I am a beginner in ML modelling. I am working on the Santander Customer Satisfaction Prediction competition, where the evaluation metric is AUC. The dataset has 370 features, a 0/1 target variable TARGET, and around 75K rows.
I tried the logistic regression algorithm, which gave me an AUC score of 0.76 on the train and test sets and approximately 0.74 on submission (test.csv dataset).
I tried the XGBoost algorithm to improve the score and built a base model which gave an AUC score of around 0.94 on the training, validation and test datasets. But on submission (test.csv) I'm getting a very low score of approximately 0.55. If the model were overfitting, then I believe I should get a similarly low score on the validation and test sets as well. But that is not happening here.
The only feature engineering step I did before splitting the data into validation and test sets was counting the number of zero-valued features in a data point. I don't think there is data leakage because of this.
### Code details:
adding zeros column:
```
df.insert(1,'zeros', (df == 0).astype('int64').sum(axis=1))
```
train, val and test split:
```
x_train, x_test, y_train, y_test = train_test_split(features_df, target, train_size=0.75, random_state=100, shuffle=True)
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)
print('-'*150)
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, train_size=0.80, random_state=100, shuffle=True)
print(x_train.shape, y_train.shape)
print(x_val.shape, y_val.shape)
```
Hyperparameters for xgboost algorithm:
```
xgb_model = XGBClassifier(learning_rate=0.2, n_estimators=37, max_depth=5,
min_child_weight=2, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27)
```
[Kaggle notebook link](https://www.kaggle.com/code/sownikturaga96/santander-prediction-using-xgboost)
| XGBoost is giving an excellent AUC score on train, validation and test datasets (0.94) but a very low public score (0.5) after submission | CC BY-SA 4.0 | null | 2023-03-24T11:37:51.170 | 2023-04-26T06:35:43.900 | 2023-04-26T06:35:43.900 | 376732 | 376732 | [
"boosting",
"auc"
] |
610575 | 1 | null | null | 1 | 11 | I have extracted some triples from text using REBEL an information extraction technic. How do I automatically establish where these triples fit into my pre-existing ontology? How can I use the ontologies schema?
| How can I use an ontology schema to classify triples? | CC BY-SA 4.0 | null | 2023-03-24T11:40:20.327 | 2023-03-24T15:23:29.863 | 2023-03-24T15:23:29.863 | 359569 | 359569 | [
"classification",
"natural-language",
"knowledge-discovery"
] |
610576 | 2 | null | 610566 | 0 | null | Yes, your forecast is most likely almost flat because of the low `ar1` and `ar2` parameter estimates. But is that a reason for concern? There are [many threads](https://stats.stackexchange.com/search?q=flat+forecast) on Cross Validated asking about flat forecasts. The answers in these threads explain why they are not a problem. In [one of them](https://stats.stackexchange.com/questions/378817/auto-arima-forecasting-same-value-continuously-for-future-part-in-r/378826#378826), Stephan Kolassa writes:
>
If you are concerned that the forecast does not reproduce the variability in your historical data: don't be. Forecasting models attempt to disentangle the signal from the noise and only extrapolate the signal, because the noise is - by definition - not forecastable. Therefore, any forecast will look smoother than the original time series.
| null | CC BY-SA 4.0 | null | 2023-03-24T11:42:16.407 | 2023-03-24T11:42:16.407 | null | null | 53690 | null |
610577 | 1 | null | null | 0 | 13 | I have a sample (n=200) and the results of two in silico prediction tests. The results are continuous and very similar, but I want to confirm this statistically.
I have run a Shapiro-Wilk test to see if the samples follow a normal distribution; the p-value for one sample was 1.73e-25, so they do not follow a normal distribution.
What is the best approach to do this?
| What is the best test to compare samples with continuous results when there is no normality | CC BY-SA 4.0 | null | 2023-03-24T11:50:58.730 | 2023-03-24T11:50:58.730 | null | null | 378571 | [
"t-test",
"shapiro-wilk-test"
] |
610578 | 2 | null | 548162 | 0 | null | QUICK
There is an interesting comment about what makes sense for a definition of $SSR$. However, given the definitions used in the code, if $SSR > SST$, then $\overset{N}{\underset{i=1}{\sum}}\Big[ (y_i - \hat{y_i})(\hat{y_i} - \bar{y}) \Big]<0$, since $SST := \overset{N}{\underset{i=1}{\sum}} ( y_i-\bar{y})^2 = SSE + SSR + 2\overset{N}{\underset{i=1}{\sum}}\Big[ (y_i - \hat{y_i})(\hat{y_i} - \bar{y}) \Big]$ and $SSE\ge 0$.
LONGER
Your predicted and observed values might look like they correlate when you look at a plot, but that does not mean they agree. If they agree, they should be about equal and conform to the line $y=\hat y$ (slope of one, intercept of zero). Let's take a look.
```
library(ggplot2)
d <- data.frame(
yobs = c(29.08,21.8371611111111,41.1785861111111,
60.5846,42.8531777777778,35.6931861111111,15.1174416666667,
10.9228777777778,17.6561777777778,29.2195694444444,
4.48469166666667,24.2387083333333,57.5354805555556,29.4075305555556,
26.7835888888889,28.9258111111111,37.1471972222222,
30.5934277777778,9.22973333333333,57.0615833333333,25.5308722222222,
40.429725,11.9677777777778,24.6323805555556,43.5893833333333,
25.0586194444444,21.5084305555556,28.5317944444444,
17.2729027777778,63.3144833333333,18.7004027777778,15.7129944444444,
15.6565138888889,27.4428777777778,55.2504027777778,
33.6584277777778,10.0764861111111,0.956327777777778,
30.4974416666667,40.2348166666667,12.0094138888889,16.0595388888889,
6.70388888888889,61.6930861111111,45.5002555555556,
34.9412638888889),
ypred = c(37.9778265746194,20.4344267726767,
24.2583278821139,81.3820676947289,35.9664230956281,48.2550410428931,
13.1322244321762,11.2277223100893,17.3847974374533,
36.2654061390013,13.6891124226893,36.93587791295,42.4778772806932,
60.4805857896792,50.8097811774078,31.2983753184525,
39.4901787588643,36.0489111859141,5.16132056902304,67.6280256177873,
46.6873141264554,56.9305336644725,17.1904930898903,
17.8447406631152,81.8167881348895,21.6446504197869,17.2125579607197,
27.8854475743327,25.6223558489715,39.1097052984601,
14.3303635195841,8.3085889213573,14.7616830600331,29.6236752760362,
36.4710794579997,32.1294471109381,21.9208933069802,
8.17174771983545,30.3954470923862,25.2201086957305,13.7007923212405,
16.2708330581924,11.7006605896811,71.8768937208489,
77.2434241984382,30.0205384313346))
ggplot(d, aes(x = ypred, y = yobs)) +
geom_point() +
geom_abline(
slope = 1,
intercept = 0,
col = 'red'
)
```
[](https://i.stack.imgur.com/5YGLP.png)
You're right; that identity line seems to fit the data decently. Now let's do a regression with the true and predicted values and plot the regression line.
```
library(ggplot2)
d <- data.frame(
yobs = c(29.08,21.8371611111111,41.1785861111111,
60.5846,42.8531777777778,35.6931861111111,15.1174416666667,
10.9228777777778,17.6561777777778,29.2195694444444,
4.48469166666667,24.2387083333333,57.5354805555556,29.4075305555556,
26.7835888888889,28.9258111111111,37.1471972222222,
30.5934277777778,9.22973333333333,57.0615833333333,25.5308722222222,
40.429725,11.9677777777778,24.6323805555556,43.5893833333333,
25.0586194444444,21.5084305555556,28.5317944444444,
17.2729027777778,63.3144833333333,18.7004027777778,15.7129944444444,
15.6565138888889,27.4428777777778,55.2504027777778,
33.6584277777778,10.0764861111111,0.956327777777778,
30.4974416666667,40.2348166666667,12.0094138888889,16.0595388888889,
6.70388888888889,61.6930861111111,45.5002555555556,
34.9412638888889),
ypred = c(37.9778265746194,20.4344267726767,
24.2583278821139,81.3820676947289,35.9664230956281,48.2550410428931,
13.1322244321762,11.2277223100893,17.3847974374533,
36.2654061390013,13.6891124226893,36.93587791295,42.4778772806932,
60.4805857896792,50.8097811774078,31.2983753184525,
39.4901787588643,36.0489111859141,5.16132056902304,67.6280256177873,
46.6873141264554,56.9305336644725,17.1904930898903,
17.8447406631152,81.8167881348895,21.6446504197869,17.2125579607197,
27.8854475743327,25.6223558489715,39.1097052984601,
14.3303635195841,8.3085889213573,14.7616830600331,29.6236752760362,
36.4710794579997,32.1294471109381,21.9208933069802,
8.17174771983545,30.3954470923862,25.2201086957305,13.7007923212405,
16.2708330581924,11.7006605896811,71.8768937208489,
77.2434241984382,30.0205384313346))
L <- lm(d$yobs ~ d$ypred)
ggplot(d, aes(x = ypred, y = yobs)) +
geom_point() +
geom_abline(
slope = summary(L)$coef[2, 1],
intercept = summary(L)$coef[1, 1],
col = 'blue'
)
```
[](https://i.stack.imgur.com/iw8vW.png)
That fit is not amazing, but it does look better to me, particularly when you consider that it is the vertical distance, not the perpendicular distance, that is considered. In particular, the points to the right of the plot have the blue (regression) line passing near them in the image, while the red (identity) line is off the chart, far to the right, in the first plot.
Consequently, the points in your data frame do not really conform to the identity line.
This is why, when you get into complicated situations, how you calculate $R^2$ matters. In the case of OLS linear regressions with an intercept, two common calculations agree.
$$
R^2 =\left(\text{corr}\left(\hat y, y\right)\right)^2\\
R^2=1-\left(\dfrac{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\bar y
\right)^2
}\right) = 1-\dfrac{SSE}{SST}
$$
When you get into more complicated situations, these do not agree.
```
SST <- sum((mean(d$yobs)-d$yobs)^2)
SSE <- sum((d$yobs-d$ypred)^2)
SSR <- sum((d$ypred-mean(d$yobs))^2)
cor(d$yobs, d$ypred) # 0.7665833
1 - SSE/SST # 0.2931399
```
As you inferred from looking at the plot and thinking that the line of best fit (blue regression line) did have a decent fit to the points, there is a fairly strong correlation between the predictions and true values: $\approx 0.77$. However, when you calculate according to the equation that divides `SSE` by `SST`, you get a much weaker result of $\approx 0.29$, suggesting that your observed and predicted values do not agree to the extent that the plot may at first suggest.
In the extreme, you can have silly results where $y=(1,2,3)$ and $\hat y = (101, 102, 103)$ have perfect correlation yet disagree terribly. I give [plots here](https://stats.stackexchange.com/a/584562/247274) that have perfect squared correlation between predictions and observations, yet the predictions are terrible. Consequently, I do not believe the correlation between predicted and true values to be a useful measure of model performance (though it might give insight into how to correct your predictions, such as subtracting $100$ every time in the above example, so I would not totally write off $\left(\text{corr}\left(\hat y, y\right)\right)^2$). I would go with $1-SSE/SST$, where the above example (and linked plots) will give an awful value less than zero to flag the predictions as not aligning with the true values.
Overall, there is no issue with your numbers or calculations.
To address what it means for $SSR$ to exceed $SST$, let's look at the decomposition of the total sum of squares, which I have copied from [another answer](https://stats.stackexchange.com/a/551916/247274) of mine.
$$ y_i-\bar{y} = (y_i - \hat{y_i} + \hat{y_i} - \bar{y}) = (y_i - \hat{y_i}) + (\hat{y_i} - \bar{y}) $$
$$( y_i-\bar{y})^2 = \Big[ (y_i - \hat{y_i}) + (\hat{y_i} - \bar{y}) \Big]^2 =
(y_i - \hat{y_i})^2 + (\hat{y_i} - \bar{y})^2 + 2(y_i - \hat{y_i})(\hat{y_i} - \bar{y})
$$
$$SSTotal := \overset{N}{\underset{i=1}{\sum}} ( y_i-\bar{y})^2 = \overset{N}{\underset{i=1}{\sum}}(y_i - \hat{y_i})^2 + \overset{N}{\underset{i=1}{\sum}}(\hat{y_i} - \bar{y})^2 + 2\overset{N}{\underset{i=1}{\sum}}\Big[ (y_i - \hat{y_i})(\hat{y_i} - \bar{y}) \Big]$$
$$ :=SSE + SSR + Other $$
We know that $SSE\ge0$. Thus, if $SSR>SST$, then $Other < 0$ for the two sides of the equation to be equal. As the algebra says must be the case, this is true, and the `SST` is equal to the sum of `SSE`, `SSR`, and `Other` term.
```
# cross-product term from the decomposition above
Other <- 2 * sum((d$yobs - d$ypred) * (d$ypred - mean(d$yobs)))
Other # -15576.2
SSE + SSR + Other # 11600.41
SST # 11600.41
```
| null | CC BY-SA 4.0 | null | 2023-03-24T11:54:05.153 | 2023-03-24T13:45:29.153 | 2023-03-24T13:45:29.153 | 247274 | 247274 | null |
610580 | 1 | null | null | 0 | 23 | I have a dataframe with 4 columns, corresponding to different emotions extracted with ML methods from one video. The dataframe describes one person; rows represent moments in time. Now I'm trying to mark the dataframe as insignificant if the detected emotions are almost constant.
So, my H0 is: "any of the emotions is non-stationary".
Now I perform the Augmented Dickey-Fuller test (with null hypothesis "the series is non-stationary") on every series and obtain p-values (code in Python given below for reference, together with a df sample and major stats). I end up having 4 p-values.
How should I combine those? I can formulate H0 as "any of the emotions is non-stationary with alpha=0.01" and reject it if all p-values obtained are below 0.01, but this does not sound statistically correct, because from the global point of view I'm checking the whole dataset and not separate series. With my original H0 formulation I have to combine 4 p-values into one somehow. I suppose Fisher's method does not apply here, because it was developed for independent tests of the same null hypothesis, and my case is different.
Here's the dataframe sample and some code (please ignore "neutral" column because it's linearly dependent on 4 others - they add up to 1):
```
>>> from statsmodels.tsa.stattools import adfuller
>>> import pandas as pd
>>> df = pd.read_csv(...).dropna() # Insignificant df
>>> df2 = pd.read_csv(...).dropna() # Significant df
>>> df.head()
Unnamed: 0 time angry scared happy neutral sad
8 8 1.6 0.046867 0.030736 0.042334 0.814897 0.065166
9 9 1.8 0.051908 0.038864 0.045297 0.791667 0.072264
10 10 2.0 0.042280 0.033831 0.055860 0.797632 0.070397
11 11 2.2 0.064768 0.017584 0.040667 0.841843 0.035137
12 12 2.4 0.070928 0.022191 0.033639 0.825483 0.047759
>>> df[cols].describe()
angry scared happy sad
count 272.000000 272.000000 272.000000 272.000000
mean 0.047902 0.026313 0.053351 0.058907
std 0.014715 0.009156 0.022048 0.017035
min 0.017400 0.010842 0.011263 0.024328
25% 0.037608 0.019508 0.039247 0.047019
50% 0.046432 0.024267 0.049815 0.055466
75% 0.055815 0.031104 0.064834 0.068593
max 0.142180 0.055817 0.222235 0.139777
>>> df2.head()
Unnamed: 0 time angry scared happy neutral sad
8 8 1.6 0.009712 0.004793 0.202277 0.758015 0.025203
9 9 1.8 0.008974 0.005125 0.112469 0.841718 0.031714
10 10 2.0 0.007013 0.004551 0.221301 0.738514 0.028622
11 11 2.2 0.005347 0.002529 0.224950 0.746677 0.020497
12 12 2.4 0.004089 0.002009 0.315020 0.665884 0.012998
>>> df2[cols].describe()
angry scared happy sad
count 272.000000 272.000000 272.000000 272.000000
mean 0.006801 0.003985 0.403518 0.025887
std 0.004960 0.003326 0.340374 0.024425
min 0.000014 0.000007 0.014501 0.000036
25% 0.003299 0.001642 0.097946 0.010492
50% 0.005878 0.003536 0.288607 0.021846
75% 0.009877 0.005776 0.671007 0.033472
max 0.024246 0.021502 0.998112 0.180877
>>> for col in cols: print(adfuller(df[col])[1])
...
1.7074987958656052e-16
2.0059524498747396e-07
9.910010274093592e-09
2.6496740053033137e-19
>>> for col in cols: print(adfuller(df2[col])[1])
...
0.08205085843597149
0.02944711896782458
0.319362154099021
0.009455210475492137
```
In the code above I want to recognize `df` as insignificant (all p-values are below 0.01) and `df2` as significant (some p-values are over 0.01). What is the right thing to do? `all(p_values < 0.01)` seems statistically questionable, and Fisher's method seems inapplicable. I don't need Python code, only a mathematical explanation or a link to a good source for an applicable algorithm.
| Combine p-values of several independent samples for combined hypothesis | CC BY-SA 4.0 | null | 2023-03-24T12:11:29.427 | 2023-03-24T12:11:29.427 | null | null | 384039 | [
"hypothesis-testing",
"p-value",
"multiple-comparisons",
"combining-p-values"
] |
610582 | 1 | null | null | 0 | 19 | I am conducting a mediation analysis (model 4) in SPSS using the PROCESS macro by Hayes.
I have cross-sectional survey data and want to test the following relation:
Trait perfectionism (X) -> Emotion Dysregulation (M) -> Sound sensitivity (Y). I want to add one covariate in the analysis: a diagnosis of OCPD. My rationale behind this is that both OCPD and perfectionism are strongly related and therefore, as a covariate, such a diagnosis might have an effect on Y.
When adding it to the model, it has no statistical significance and the effect is negligible. Therefore, does it make sense to retain it as a covariate if ONLY 12 participants (out of 145) have reported a diagnosis of OCPD?
| Can someone please help with covariates? | CC BY-SA 4.0 | null | 2023-03-24T12:28:17.087 | 2023-03-24T12:28:17.087 | null | null | 380432 | [
"spss"
] |
610584 | 2 | null | 610567 | 2 | null | The individual coefficients here have the same interpretation (or difficulty in interpretation) as in conditional models with interactions or in generalized models. The differences are the interpretation as marginal instead of conditional associations with outcome for GEEs and, for generalized models, interpretations in terms of the link function associating the linear predictor with outcome. The problem in all model types with interactions is that the individual coefficients for predictors require care in interpretation.
The trick is to remember that everything ends up being coded as numeric, with the following general form for a 2-way interaction between predictors `x` and `z` with respect to an outcome `y` in a generalized linear model with link function `g()`:
$$g(y)=\beta_0' + \beta_1'x+\beta_2'z +\beta_3' xz.$$
Thus the individual coefficient for an interacting predictor represents its association with outcome when the other predictor is coded at 0. The interaction coefficient is the extra association with outcome when neither is coded at 0.
In this case, if `x` is treatment, it presumably has value 0 for placebo and 1 for treated. Your `treatment` coefficient is thus the estimated association of treatment with outcome when `time = 0`. That might not have a simple interpretation on its own. With your single linear term for `time`, it's a linear extrapolation to `time = 0` from your other time points, in the overall context of the model. If `time = 0` is the start of treatment, then a non-zero `treatment` coefficient might represent a difference in baseline outcome values between treatment groups or a mis-specified model of the association of `time` with outcome.
The `time` coefficient is the linear association between outcome and time for the placebo (`treatment = 0`). The interaction coefficient is the extra linear association of time with outcome with `treatment = 1` and, conversely, the extra association of `treatment` with outcome for each extra unit of `time`.
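To make that coefficient arithmetic concrete, here is a minimal sketch on made-up, noiseless data (ordinary least squares in Python rather than a GEE, but the reading of the coefficients is identical):

```python
import numpy as np

# hypothetical layout: 2 groups (placebo/treated) x 6 time points
time = np.tile(np.arange(6.0), 2)
trt = np.repeat([0.0, 1.0], 6)

# generate outcomes from known coefficients (no noise, for clarity):
# intercept 10, placebo slope 1, treatment offset of 2 at time 0,
# extra slope of 0.5 for the treated group
y = 10.0 + 1.0 * time + 2.0 * trt + 0.5 * time * trt

X = np.column_stack([np.ones_like(time), time, trt, time * trt])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# coef[2] is the treatment difference at time = 0,
# coef[1] is the slope for trt = 0, and
# coef[1] + coef[3] is the slope for the treated group
print(np.round(coef, 6))
```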
In this type of study you typically need to model time more flexibly than with a single simple linear term. Chapter 7 of Frank Harrell's [Regression Modeling Strategies](https://hbiostat.org/rmsc/long.html) covers longitudinal data modeling in some detail. It's mostly from the perspective of a different marginal modeling approach, generalized least squares, but the principles hold for other longitudinal models. The chapter also includes a useful summary table of the strengths and weaknesses of different modeling approaches.
| null | CC BY-SA 4.0 | null | 2023-03-24T13:30:10.633 | 2023-03-24T13:30:10.633 | null | null | 28500 | null |
610585 | 2 | null | 610544 | 2 | null | Suppose $\xi$ is a random variable such that $\mathbb{P}(\xi = k) = 0.3\frac{2^ke^{-2}}{k!} + 0.45\frac{3^ke^{-3}}{k!} + 0.25\frac{\frac{1}{2^k}e^{-\frac{1}{2}}}{k!}$ and $ran(\xi)=\{0,1,2,3,4,5...\}$.
So, by the definition of mathematical expectation you have:
$\mathbb{E}\xi = \sum^{+\infty}_{k=0}k\cdot\mathbb{P}(\xi = k) =\\ \sum^{+\infty}_{k=0}k(0.3\frac{2^ke^{-2}}{k!} + 0.45\frac{3^ke^{-3}}{k!} + 0.25\frac{\frac{1}{2^k}e^{-\frac{1}{2}}}{k!})$
To proceed with this sum you just need to recall that for any real $z$, $e^z = \sum^{+\infty}_{k=0}\frac{z^k}{k!}$, and we will try to make this sum resemble that expression.
$\sum^{+\infty}_{k=0}k(0.3\frac{2^ke^{-2}}{k!} + 0.45\frac{3^ke^{-3}}{k!} + 0.25\frac{\frac{1}{2^k}e^{-\frac{1}{2}}}{k!}) = \\ \sum^{+\infty}_{k=1}0.3\frac{2^ke^{-2}}{(k-1)!} + \sum^{+\infty}_{k=1}0.45\frac{3^ke^{-3}}{(k-1)!} + \sum^{+\infty}_{k=1}0.25\frac{\frac{1}{2^k}e^{-\frac{1}{2}}}{(k-1)!} = \\= 0.3\cdot e^{-2}\cdot 2\sum^{+\infty}_{k=1}\frac{2^{k-1}}{(k-1)!} + 0.45\cdot e^{-3} \cdot 3\sum^{+\infty}_{k=1}\frac{3^{k-1}}{(k-1)!} + \\ + \ 0.25\cdot e^{-\frac{1}{2}} \cdot \frac{1}{2}\sum^{+\infty}_{k=1}\frac{\frac{1}{2^{k-1}}}{(k-1)!} = 0.3\cdot e^{-2}\cdot 2 \cdot e^{2} + 0.45\cdot e^{-3} \cdot 3 \cdot e^{3} + 0.25\cdot e^{-\frac{1}{2}} \cdot \frac{1}{2} \cdot e^{\frac{1}{2}} = 0.6 + 1.35 + 0.125 = 2.075$.
So, $\mathbb{E}\xi = 2.075$.
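As a sanity check, the mean of a mixture is the weighted mean of its components, here $0.3\cdot 2 + 0.45\cdot 3 + 0.25\cdot \frac{1}{2} = 2.075$, which agrees with the sum above. A short numerical check (Python sketch, truncating the infinite sum):

```python
from math import exp, factorial

def pmf(k):
    # mixture of three Poisson pmfs with rates 2, 3 and 1/2
    return (0.30 * 2.0**k * exp(-2.0) / factorial(k)
            + 0.45 * 3.0**k * exp(-3.0) / factorial(k)
            + 0.25 * 0.5**k * exp(-0.5) / factorial(k))

# terms beyond k = 100 are numerically negligible
mean = sum(k * pmf(k) for k in range(101))
print(mean)  # ~2.075
```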
To find variance, you just need to use this formula:
$\mathbb{V}ar\xi = \mathbb{E}[\xi^2] - (\mathbb{E}\xi)^2$, so you need to find $\mathbb{E}[\xi^2]$.
I hope you now understand how to find $\mathbb{E}[\xi^2]$, but I will give you a trick that is frequently used to find the second moment of a Poisson random variable.
$\mathbb{E}[\xi^2] = \sum^{+\infty}_{k=0}k^2\cdot\mathbb{P}(\xi = k) =\\ = \sum^{+\infty}_{k=0}(k^2+0)\cdot\mathbb{P}(\xi = k) = \sum^{+\infty}_{k=0}(k^2-k+k)\cdot\mathbb{P}(\xi = k) = \sum^{+\infty}_{k=0}(k^2-k)\cdot\mathbb{P}(\xi = k) + \sum^{+\infty}_{k=0}k\cdot\mathbb{P}(\xi = k) = \sum^{+\infty}_{k=0}(k^2-k)\cdot\mathbb{P}(\xi = k) + \mathbb{E}\xi$.
Now to find the first sum you need to use the same method as I used to find $\mathbb{E}\xi$ and you will be done.
| null | CC BY-SA 4.0 | null | 2023-03-24T13:41:25.510 | 2023-03-24T13:41:25.510 | null | null | 378446 | null |
610586 | 1 | 610595 | null | 1 | 32 | I'm studying the cases in which the endogeneity problem arises in OLS regression.
Suppose we have the following population equation:
$y=\beta_0 +\beta_1 x_1 + ... + \beta_k x_k + \gamma q + \epsilon$
and say $E(\epsilon | x,q)=0$, such that: $E(y|x,q)=\beta_0 +\beta_1 x_1 + ... + \beta_k x_k + \gamma q$
Suppose $q$ is unobserved and so it goes into the error term, thus your population equation reads as
$y=\beta_0 +\beta_1 x_1 + ... + \beta_k x_k + \nu$ , where $\nu=\gamma q + \epsilon$
Then, the slides say, nothing is lost by assuming that $E(q)=0$, because an intercept is included in the basic equation, so that $E(\nu)=0$.
Why is it fine to assume that $E(q)=0$ when an intercept is included in the basic equation?
| Omitted variable problem | CC BY-SA 4.0 | null | 2023-03-24T13:57:34.597 | 2023-03-24T15:45:17.423 | null | null | 365936 | [
"least-squares",
"econometrics",
"endogeneity",
"omitted-variable-bias"
] |
610587 | 2 | null | 399755 | 0 | null | It's not a hyperparameter. If you pick some value $k$ for the dimension of the result, it simply means that you would be ignoring the remaining components. Training the model multiple times for the different values of $k$ would give the same results for all the components $<k$. So this is not something to tune in the regular sense. What you would do is to fit PCA to the data and then see how much variability is explained for different values of $k$ for the fitted model.
| null | CC BY-SA 4.0 | null | 2023-03-24T14:27:35.133 | 2023-03-24T14:27:35.133 | null | null | 35989 | null |
610588 | 1 | null | null | 0 | 53 | I have been reading about SEM and latent variables. Now, it seems that usually a latent variable is a function of observed manifest variables. But is there a SEM where a latent variable is a predictor of an outcome? So in my example, we have outcomes (y1, y2), observed independent variables (x1, x2), one latent l1, and two errors (e1, e2), over the same individuals.
y1 = x1 + l1 + e1
y2 = x2 + l1 + e2
Would that work in lavaan?
| SEM and latent variables | CC BY-SA 4.0 | null | 2023-03-24T14:29:03.233 | 2023-03-25T15:51:42.467 | null | null | 13132 | [
"structural-equation-modeling"
] |
610589 | 2 | null | 547343 | 1 | null | As zihao gong already mentioned [in his comment](https://stats.stackexchange.com/questions/547343/value-from-brown-forsythe-test-using-r-is-different-to-if-done-manually-trying#comment1129405_547343), the test in the `onewaytests` package is not the right one to perform here.
Instead, you could use `bftest` from the `ALSM` package, which gives you the following result that corresponds to the solution from your textbook:
```
lm_mod <- lm(residuals ~ factor_fitted, dat)
ALSM::bftest(lm_mod, group = dat$factor_fitted)
# t.value P.Value alpha df
# [1,] 0.5520951 0.5824418 0.05 79
```
I used the dataset that you provided at the end of your question as `dat`.
| null | CC BY-SA 4.0 | null | 2023-03-24T14:35:44.103 | 2023-03-24T14:35:44.103 | null | null | 384051 | null |
610591 | 1 | null | null | 0 | 41 | I have run a binomial generalised linear mixed model (GLMM) via the lme4 package in R and obtained the results. In the paper, I wrote this for the result: b = 2.23, SE = 0.59, p < 0.01.
However, I'm not sure if I need to report an odds ratio for this. If so, can you please tell me how to do it in R, and how I should add the odds ratio to the findings?
| Do I need to find an odds ratio for the result of a binomial GLMM? | CC BY-SA 4.0 | null | 2023-03-24T14:48:10.763 | 2023-03-25T18:43:42.567 | null | null | 40023 | [
"binomial-distribution",
"glmm"
] |
610592 | 1 | 610601 | null | 4 | 178 | I appreciate that similar questions have been asked and answered to this, but I think that my case is substantively different as the specific interpretation of the coefficient is not relevant.
I have a set of choice experiment results that I am using to calculate the 'Valuation of Travel Time' for the participants. The standard process for doing this is to fit a logistic regression model to the result of whether someone accepts a specific level of compensation for a specific travel time. You can then divide the time coefficient by the compensation one to generate the valuation of travel time (don't worry if it doesn't make sense). Hence for the below model:
|term |estimate |std.error |z value |p.value |
|----|--------|---------|-------|-------|
|(Intercept) |1.256319853 |0.1096819372 |11.45420919 |2.24E-30 |
|time |0.1015354957 |0.003527728861 |-28.78211442 |3.59E-182 |
|comp |0.1532430136 |0.005337402709 |28.7111582 |2.77E-181 |
The 'VTT' is (approximately) 0.102/0.153, which is £0.67 per minute, or £40 per hour.
I want to provide a 95% confidence interval for this valuation and I've been searching for the correct method. My instinct is that it's just:
upper limit: (time_coef + 1.96 * time_SE) / (comp_coef - 1.96 * comp_SE)
lower limit: (time_coef - 1.96 * time_SE) / (comp_coef + 1.96 * comp_SE)
Without a detailed understanding of VTT methodology, does this look correct?
Edit - following useful feedback, I can include the covariance table if anyone else is able to use this data to help me validate that I am using the correct method.
| |(Intercept) |time |comp |
|--|-----------|----|----|
|(Intercept) |1.203013e-02 |-1.265604e-04 |9.785909e-06 |
|time |-1.265604e-04 |1.244487e-05 |-1.437472e-05 |
|comp |9.785909e-06 |-1.437472e-05 |2.848787e-05 |
| Providing 95% confidence intervals for VTT calculation using logistic regression coefficients | CC BY-SA 4.0 | null | 2023-03-24T14:50:50.323 | 2023-03-24T15:51:02.193 | 2023-03-24T15:41:57.063 | 56940 | 384052 | [
"r",
"logistic",
"multiple-regression",
"confidence-interval",
"generalized-linear-model"
] |
610593 | 2 | null | 608721 | 1 | null | I think you're just accidentally confusing yourself by using slightly different methods. To answer your question - yes, in an interaction, if you compare the marginal means at different values (also called the 'pick-a-point' approach), you will eventually reach a point where there's significance (though you may be at a point beyond where your data actually go!). The common alternative is the Johnson-Neyman approach (with multiple comparison correction), which gives you the range of values over which the marginal means are significantly different.
Now, I'm just going to repeat your inflection point analysis using a slightly different approach:
```
## Use the chngpt library to find the inflection point
> library(chngpt)
> fit1=chngpt::chngptm (formula.1=BP~1, formula.2=~age, data = dtstudy, type="M01", ncpus = 1,
+ family="gaussian")
> summary(fit1)
Change point model threshold.type: hinge
Coefficients:
est Std. Error* (lower upper) p.value*
(Intercept) 69.531116 1.780648 66.0708537 73.050993 0.00000000
(age-chngpt)+ 1.118825 0.497638 0.6927669 2.643508 0.02455891
Threshold:
est Std. Error (lower upper)
27.711777 3.655929 22.217950 36.549191
>
> # Inflection point is at age = 27.7
> # create a new dummy variable that splits age at the inflection point
>
> dtstudy$age_bin = 0
> dtstudy$age_bin[dtstudy$age>fit1$coefficients[3]] = 1
> dtstudy$age_bin = as.factor(dtstudy$age_bin)
>
> # plot the result
>
> ggplot(data = dtstudy, aes(x = age, y = BP,color =age_bin )) +
+ geom_point( size = 3, position = position_jitter(w = 0.2)) +
+ geom_smooth(inherit.aes = F,aes(x = age, y = BP),color="black", method='lm',formula ='y ~ poly(x,2)') +
+
+ geom_smooth(aes(x = age, y = BP), method='lm',formula ='y ~ x') +
+
+ theme_bw(base_size = 20) +
+ xlab("Age") + ylab("BP") +
+ scale_x_continuous(breaks = seq(0,60,10), limits = c(0,60)) +
+ scale_y_continuous(breaks = seq(0,120,20), limits = c(0,120))
>
```
[](https://i.stack.imgur.com/gX4pB.png)
```
>
> # repeat the marginal means analysis - we eventually reach significance
>
> mod2 = lm(BP ~ age * age_bin,data=dtstudy)
> summary(mod2)
Call:
lm(formula = BP ~ age * age_bin, data = dtstudy)
Residuals:
Min 1Q Median 3Q Max
-13.6644 -5.9880 -0.2751 4.7991 17.0088
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 82.4360 6.7773 12.164 5.63e-16 ***
age -0.6893 0.3461 -1.991 0.052398 .
age_bin1 -39.8296 13.7691 -2.893 0.005818 **
age:age_bin1 1.7065 0.4667 3.656 0.000656 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 7.538 on 46 degrees of freedom
Multiple R-squared: 0.4994, Adjusted R-squared: 0.4668
F-statistic: 15.3 on 3 and 46 DF, p-value: 4.854e-07
> # Marginal Means
> (emms <- emmeans(mod, ~ male + age, at = list(age = c(0, 1, 20, 40, 60, 61))))
male age emmean SE df lower.CL upper.CL
0 0 73.5 5.96 46 61.5 85.5
1 0 51.2 17.00 46 17.0 85.4
0 1 73.3 5.69 46 61.8 84.7
1 1 52.0 16.57 46 18.7 85.4
0 20 69.7 1.55 46 66.6 72.8
1 20 67.2 8.42 46 50.2 84.1
0 40 66.0 5.81 46 54.3 77.7
1 40 83.1 1.73 46 79.7 86.6
0 60 62.2 11.39 46 39.3 85.1
1 60 99.1 9.24 46 80.5 117.7
0 61 62.0 11.67 46 38.5 85.5
1 61 99.9 9.67 46 80.4 119.3
Confidence level used: 0.95
> custom <- list(`Sex diff at age = 0` = c(-1,1,0,0,0,0,0,0,0,0,0,0),
+ `Sex diff at age = 1` = c(0,0,-1,1,0,0,0,0,0,0,0,0),
+ `Sex diff at age = 20` = c(0,0,0,0,-1,1,0,0,0,0,0,0),
+ `Sex diff at age = 40` = c(0,0,0,0,0,0,-1,1,0,0,0,0),
+ `Sex diff at age = 60` = c(0,0,0,0,0,0,0,0,-1,1,0,0),
+ `Sex diff at age = 61` = c(0,0,0,0,0,0,0,0,0,0,-1,1))
> contrast(emms, custom) |>
+ summary(infer = T)
contrast estimate SE df lower.CL upper.CL t.ratio p.value
Sex diff at age = 0 -22.25 18.02 46 -58.52 14.0 -1.235 0.2231
Sex diff at age = 1 -21.27 17.52 46 -56.53 14.0 -1.214 0.2310
Sex diff at age = 20 -2.54 8.56 46 -19.77 14.7 -0.297 0.7681
Sex diff at age = 40 17.17 6.06 46 4.97 29.4 2.832 0.0068
Sex diff at age = 60 36.88 14.66 46 7.37 66.4 2.515 0.0155
Sex diff at age = 61 37.87 15.15 46 7.37 68.4 2.499 0.0161
Confidence level used: 0.95
>
```
Edit: To answer a follow-up question - aren't the interaction and linear-spline models essentially identical? They are similar, but the linear-spline is missing a key piece. The linear-spline term, called `age_slope_change` in the original question, is equivalent to an interaction between `age_bin` and `age` (after centering `age` on the inflection point). Recall that an interaction model is generally of the form:
$Y = \beta _1X + \beta _2M +\beta _3XM + \epsilon$
This makes it clear that what is missing from the linear spline model is the term $\beta _2M$ - i.e., `age_bin`. That is, it does not model differences in the average outcome between the two groups. So if your question is "Does the slope change after the inflection point?" then the linear spline model is actually prone to false-positives!
| null | CC BY-SA 4.0 | null | 2023-03-24T15:01:34.113 | 2023-03-27T15:04:00.293 | 2023-03-27T15:04:00.293 | 288142 | 288142 | null |
610594 | 2 | null | 610592 | 5 | null | No, that's not correct. You don't have the necessary information in the output, you would additionally need the information on the covariance matrix of the regression coefficients (with the standard errors, you have the square root of the diagonal entries of that matrix). Luckily, if you have that, you can use something like the delta method, which there are existing packages that will handle the details for you, e.g. in R the [deltaMethod function from the car package](https://search.r-project.org/CRAN/refmans/car/html/deltaMethod.html). That's a useful search term to look for/the background information given there would be a good starting point.
| null | CC BY-SA 4.0 | null | 2023-03-24T15:09:43.740 | 2023-03-24T15:09:43.740 | null | null | 86652 | null |
610595 | 2 | null | 610586 | 2 | null | Start with
$$y=\beta_0 +\beta_1 x_1 + ... + \beta_k x_k + \gamma q +\epsilon.$$
Say that the mean value of $q$ is $\bar q$. Then centering $q$ around its mean gives $q_c=q-\bar q$. Substitute into the above and collect constant terms:
$$y=(\beta_0 + \gamma \bar q)+\beta_1 x_1 + ... + \beta_k x_k + \gamma q_c + \epsilon.$$
Any offset of the unobserved $q$ in this situation will be included in the intercept of a model that's based on the observed predictors. It won't affect the estimates of the coefficients for the observed predictors $x_i$, or the bias in the coefficient for any $x_i$ correlated with the unobserved $q$.
Two warnings. First, omitting an intercept in such a model will lead to problems. Second, omitted-variable bias can be more of a problem in other types of models, as explained [here](https://stats.stackexchange.com/q/113766/28500) for a probit model. In OLS there is no bias in the coefficient for an observed predictor uncorrelated with the unobserved predictor. In models without an error term like $\epsilon$ in OLS to capture excess heterogeneity resulting from $q$, an unobserved/unmodeled predictor can lead to bias in coefficients for all included predictors.
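A quick simulation sketch of the centering argument (Python, with made-up numbers): the unobserved $q$ has mean $3$ and is independent of $x$, so the fitted intercept absorbs $\gamma \bar q$ while the slope on $x$ stays unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# true model: y = 1 + 2*x + 0.5*q + eps, with q unobserved
x = rng.normal(size=n)
q = rng.normal(loc=3.0, scale=1.0, size=n)  # E(q) = 3, independent of x
y = 1.0 + 2.0 * x + 0.5 * q + rng.normal(scale=0.1, size=n)

# regress y on x only (q goes into the error term)
X = np.column_stack([np.ones(n), x])
(b0, b1), *_ = np.linalg.lstsq(X, y, rcond=None)

print(b0)  # close to 2.5 = 1 + 0.5 * E(q): intercept absorbs gamma * q-bar
print(b1)  # close to 2.0: no bias, since q is uncorrelated with x
```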
| null | CC BY-SA 4.0 | null | 2023-03-24T15:10:29.257 | 2023-03-24T15:45:17.423 | 2023-03-24T15:45:17.423 | 28500 | 28500 | null |
610596 | 1 | null | null | 2 | 28 | I am using acceptance-rejection sampling to sample random variable $x$ according to distribution $f(x)$. The steps I followed are
- First generated uniformly distributed random variable $x$ from 0 to $x_{max}$
- Generated a second random number $u$ between 0 and $f_{max}$.
- Then checked the condition if $u < f(x)$, If this condition is satisfied then I accept $x$ otherwise reject and repeat the above two steps.
In the first step I have generated $x$ following a uniform distribution. Is it possible to sample a random variable $x$ that already has a distribution which is not uniform? That is, $x$ already has a distribution function $g(x)$, but I want to sample those $x$ that will follow my desired distribution function $f(x)$.
Example: The initial random variable $x$ has a distribution function as shown in the figure (which is not uniform).
[](https://i.stack.imgur.com/9FLu8.png)
My target distribution looks like the figure below.
[](https://i.stack.imgur.com/nKIb2.png)
Now, is it possible to sample $x$ from the first figure such that it follows the target distribution function (second figure)? If yes, then how can I do this? Should I use a method other than acceptance-rejection sampling?
| Sampling from a distribution function $g_{x}$ that will follow $f_{x}$ | CC BY-SA 4.0 | null | 2023-03-24T15:17:05.923 | 2023-03-24T15:48:16.227 | 2023-03-24T15:41:45.800 | 35989 | 384041 | [
"distributions",
"random-variable",
"markov-chain-montecarlo",
"random-generation"
] |
610597 | 1 | 611632 | null | 1 | 115 | I have some 23 months of longitudinal data for around 22,000 users. Approximately 2,000 of those users received an intervention during that period - with the intervention occurring during any one of those 23 months. I have mocked up what this data looks like below:
Period represents the usage for that month the user is active; the demographics (Sex, Age, and BaseUsageScore) are static throughout the dataset for a particular user; Trt_Group represents group assignment; and Trt_Month indicates whether that period falls after treatment. Usage is the outcome variable.
|UserID |Period |Usage ($) |Sex |Age |BaseUsageScore |Trt_Group |Trt_Month |
|------|------|---------|---|---|--------------|---------|---------|
|56A783 |2021-Jan |50 |M |52 |0.5 |1 |0 |
|56A783 |2021-Feb |80 |M |52 |0.5 |1 |0 |
|56A783 |2021-Mar |100 |M |52 |0.5 |1 |0 |
|56A783 |2021-Apr |75 |M |52 |0.5 |1 |1 |
|56A783 |2021-May |0 |M |52 |0.5 |1 |1 |
|30Z790 |2021-Jan |65 |F |33 |0.2 |0 |0 |
|30Z790 |2021-Feb |30 |F |33 |0.2 |0 |0 |
|30Z790 |2021-March |0 |F |33 |0.2 |0 |0 |
|310X17 |2022-June |100 |M |80 |0.8 |1 |0 |
|310X17 |2022-July |124 |M |80 |0.8 |1 |1 |
|310X17 |2022-Aug |186 |M |80 |0.8 |1 |1 |
|310X17 |2022-May |184 |M |80 |0.8 |1 |1 |
Some other things to know about the dataset:
- A user may become active and then de-activate at any point in that 23 month period (though almost all have a continuous string of periods). This coverage is similar across treatment and control.
- The control group is not demographically representative of the treatment group (younger with less males with lower BaseUsage scores), but I believe these individual differences are handled by the fixed effects model.
I am new to DiD, but with what I learned so far I've been toying with the `plm` package. Below is how I set up my model - since `Trt_Month` is always 0 for the control group I did not include the `Trt_Month*Trt_Group` interaction term:
```
plm(Usage ~ Trt_Month, index = c("UserID", "Period"), method = "within", effect = "twoways", data = df)
```
One concern is that, since users' spans of activity do not universally cover the observed periods, the assumption of parallel trends may be violated - or would that only be the case if the spans of activity differ between the groups?
Another concern (but from my understanding it should be handled by the fixed effects approach) is that a user's group assignment is to some degree influenced by `BaseUsageScore`. Should I be trying to control for this disparity outside of the model above?
Please let me know if any of this needs more clarification or detail.
| Generalized Difference-In-Difference with join in and drop outs across span of time periods | CC BY-SA 4.0 | null | 2023-03-24T15:19:28.760 | 2023-04-03T04:40:02.850 | 2023-03-27T02:26:14.867 | 246835 | 383889 | [
"panel-data",
"causality",
"fixed-effects-model",
"difference-in-difference",
"plm"
] |
610598 | 2 | null | 610588 | 1 | null | The outcome variable in SEM most definitely can be a latent variable. A latent variable is just a bunch of observed variables together to represent a construct (usually a theory). I would familiarize with factor analysis first, before you dive into SEM. SEM is used when you want to use a latent variable (a factor) in a path analysis, usually (said with caution here) to attempt to show causality.
Yes, it can be a predictor. A latent variable can used anywhere in a SEM model.
| null | CC BY-SA 4.0 | null | 2023-03-24T15:29:47.790 | 2023-03-25T15:51:42.467 | 2023-03-25T15:51:42.467 | 383476 | 383476 | null |
610601 | 2 | null | 610592 | 6 | null | Expanding over Bjorn's answer, you are trying to perform statistical inference on the ratio of two regression coefficients and since this is a nonlinear function you have to use additional tools. The most accessible tool is the [delta method](https://en.wikipedia.org/wiki/Delta_method); other possible procedures may be higher-order asymptotics (e.g. Brazzale et al. Applied Asymptotics: Case Studies in Small-Sample Statistics, Cambridge, 2007).
`R` implementations for the delta method and higher-order asymptotics can be found in the `car` and `likelihoodAsy` (see also `hoa`) packages respectively.
Here is an example of the delta method using the `car` package. Note that in order to use the delta method you'll need the full estimated covariance matrix of the estimator of your regression coefficients. The function `deltaMethod` automatically takes care of this.
```
library(car)
# create a fictitius binary variable
binary_time <- ifelse(Transact$time <= 5000, 1, 0)
# run binary logistic regression
m1 <- glm(binary_time ~ t1 + t2, data = Transact,
family = "binomial")
# get the CI by the delta method
deltaMethod(m1, "b1/b2", parameterNames= paste("b", 0:2, sep=""))
```
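For the numbers in this question specifically, the delta-method variance of a ratio can also be written out by hand: $\operatorname{Var}(\hat\beta_1/\hat\beta_2) \approx (\hat\beta_1/\hat\beta_2)^2\left[\frac{\operatorname{Var}(\hat\beta_1)}{\hat\beta_1^2} + \frac{\operatorname{Var}(\hat\beta_2)}{\hat\beta_2^2} - \frac{2\operatorname{Cov}(\hat\beta_1,\hat\beta_2)}{\hat\beta_1\hat\beta_2}\right]$. Here is a sketch (in Python, but the arithmetic is trivial in any language) plugging in the estimates and the `time`/`comp` covariance entries from the question's tables:

```python
from math import sqrt

# estimates and (co)variances taken from the question's tables
b_time, b_comp = 0.1015354957, 0.1532430136
var_time, var_comp = 1.244487e-05, 2.848787e-05
cov_tc = -1.437472e-05  # Cov(time, comp)

vtt = b_time / b_comp
var_vtt = vtt**2 * (var_time / b_time**2
                    + var_comp / b_comp**2
                    - 2.0 * cov_tc / (b_time * b_comp))
se = sqrt(var_vtt)
lo, hi = vtt - 1.96 * se, vtt + 1.96 * se
print(vtt, lo, hi)  # roughly 0.66 [0.58, 0.75] pounds per minute
```

Note that ignoring the covariance term would understate the uncertainty here, because the negative covariance between the `time` and `comp` coefficients inflates the variance of the ratio.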
| null | CC BY-SA 4.0 | null | 2023-03-24T15:37:12.783 | 2023-03-24T15:51:02.193 | 2023-03-24T15:51:02.193 | 56940 | 56940 | null |
610602 | 2 | null | 610596 | 3 | null | Sure you can. For example, you can use the [independent](https://stats.stackexchange.com/questions/234767/convergence-of-the-independent-metropolis-hastings-algorithm) variant of the [Metropolis-Hasting algorithm](https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm).
- Generate $x'$ from the distribution $g$,
- Take
$$
x_{t+1} = \cases{
x' \quad \text{with probability} \; \min\Big( \frac{f(x') g(x_t)}{f(x_t) g(x')}, \, 1\Big) \\
x_t \quad \text{otherwise}
}
$$
See Christian P. Robert and George Casella Monte Carlo Statistical Methods, p. 276. Notice however that this would not work if $f$ and $g$ have different domains and would not be very efficient if the two distributions are very different from each other.
There may be even more efficient algorithms (see [mcmc](/questions/tagged/mcmc)) if you wouldn't insist on generating the samples independently from $g$.
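A minimal sketch of the independence sampler in Python (the target $f$ is taken to be a Beta(2, 5) density and the proposal $g$ a uniform on $(0, 1)$; both are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

def f(x):
    # unnormalized Beta(2, 5) density; the normalizing constant
    # cancels in the acceptance ratio
    return x * (1.0 - x) ** 4

# proposal g = Uniform(0, 1), so g(x_t) / g(x') = 1 and the
# acceptance ratio reduces to f(x') / f(x_t)
n = 50_000
x = 0.5
samples = np.empty(n)
for t in range(n):
    x_prop = rng.uniform()
    if rng.uniform() < min(f(x_prop) / f(x), 1.0):
        x = x_prop
    samples[t] = x

print(samples.mean())  # should be close to the Beta(2, 5) mean, 2/7
```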
| null | CC BY-SA 4.0 | null | 2023-03-24T15:39:37.433 | 2023-03-24T15:48:16.227 | 2023-03-24T15:48:16.227 | 35989 | 35989 | null |
610606 | 2 | null | 610377 | 1 | null | Did you try what was suggested in the error message?
>
Perhaps a 'data' or 'params' argument is needed
```
emmeans::emmeans(
incidence.gam,
~ age,
cov.reduce = FALSE,
type = "response",
data = data
)
```
... and please note that `interval = "confidence"` has no effect, as that is not an argument for `emmeans()` or `ref_grid()`.
| null | CC BY-SA 4.0 | null | 2023-03-24T16:02:47.473 | 2023-03-24T16:02:47.473 | null | null | 52554 | null |
610607 | 2 | null | 610375 | 2 | null | The identical arrows is not at all surprising. That's because `emmeans()` summarises your model, not the data, and your model is additive. You have no interaction effects, which implies that the effects of one factor are the same regardless of the level of any other factors. If you expect different comparisons, you need to fit a model that has interactions.
### Revised answer
The above is not a correct answer because I did not read the OP carefully enough and misinterpreted it.
The question is about why all the comparison arrows all have the same endpoints. This I think I can explain. First, the method and algorithm, which is a bit ad hoc, is explained in the ["xplanations" vignette](https://cran.r-project.org/web/packages/emmeans/vignettes/xplanations.html#arrows). Briefly, we try to get each pair of comparison arrows to overlap by the same fraction as the (adjusted) confidence interval for the difference of those means overlaps the origin. But in this example, at least after multiplicity adjustments, no two means are anywhere near significantly different; that is, all the adjusted confidence intervals overlap the origin, so all the arrows should overlap.
But second, for practical display reasons, there is really no reason to extend a comparison arrow any farther left than the lowest mean, nor any farther right than the largest mean: If two arrows are going to overlap, they will have started overlapping somewhere between those extremes. So it appears that all of the arrows we computed would have extended out of this range, and so they all got truncated to values slightly beyond the min and the max.
| null | CC BY-SA 4.0 | null | 2023-03-24T16:06:42.687 | 2023-03-27T04:07:17.873 | 2023-03-27T04:07:17.873 | 52554 | 52554 | null |
610608 | 1 | 610611 | null | 1 | 28 | Kaiser's rule suggests the number of principal components to be included in an analysis by looking at eigenvalues. If I'm given standard deviations only, instead of eigenvalues, can I still somehow use the Kaiser rule?
This question was asked in a multivariate exam.
| Can I apply Kaiser Rule without knowing the eigenvalues? | CC BY-SA 4.0 | null | 2023-03-24T16:14:41.580 | 2023-03-24T16:42:26.703 | 2023-03-24T16:42:26.703 | 56940 | 377525 | [
"self-study",
"pca",
"multivariate-analysis",
"factor-analysis",
"eigenvalues"
] |
610609 | 2 | null | 610286 | 2 | null | It appears that your model is appropriate. But you can't have your cake and eat it too. If you have 8 populations, and those are the sampling units for habitats (I suppose 4 populations per habitat?), then that's exactly like having 8 subjects in your study, 4 per treatment. You don't have much data for discerning differences between habitats, regardless of what your visual impressions might be.
| null | CC BY-SA 4.0 | null | 2023-03-24T16:16:14.130 | 2023-03-24T16:16:14.130 | null | null | 52554 | null |
610610 | 1 | null | null | 2 | 94 | This is a dummy dataframe resembling my real-life data:
```
structure(list(cond = c("WT", "WT", "WT", "WT", "WT", "WT", "WT",
"WT", "WT", "WT", "WT", "WT", "WT", "WT", "WT", "WT", "WT", "WT",
"WT", "WT", "WT", "WT", "WT", "WT", "KO", "KO", "KO", "KO", "KO",
"KO", "KO", "KO", "KO", "KO", "KO", "KO", "KO", "KO", "KO", "KO",
"KO", "KO", "KO", "KO"), class = c("N", "N", "N", "N", "N", "N",
"Y", "Y", "Y", "Y", "N", "N", "N", "N", "Y", "Y", "N", "Y", "N",
"N", "Y", "N", "Y", "N", "N", "N", "Y", "Y", "Y", "N", "N", "Y",
"N", "N", "N", "Y", "Y", "N", "N", "N", "N", "N", "N", "N"),
lattice = c(72.4394527831179, 70.1486049154573, 71.2024262282001,
70.095774734531, 73.1687587160835, 73.4725521658284, 71.1213324059112,
69.4426450566097, 67.7407461727878, 67.3598397689386, 69.5170866395342,
68.751790570905, 73.2734806165999, 72.0386374169852, 70.293510845974,
68.9576642114016, 69.4472846093111, 70.8520303262601, 69.967844969872,
69.7957750105144, 76.3165495002798, 70.8237308152673, 70.5087804854601,
70.0768856496865, 49.4569953395058, 52.0898768763027, 44.3112116723351,
53.0069841435797, 49.6755152863985, 50.3014101181505, 49.0856479592249,
48.3511098818039, 50.0812079766985, 50.4035212282794, 54.0992908724316,
43.4055868143946, 50.1834254159389, 54.7298925145524, 55.1516389972744,
51.4685454381875, 52.253317158648, 52.8558395390657, 51.5377616093217,
57.7792154694597)), row.names = c(NA, -44L), class = "data.frame")
```
These are two experimental conditions ("WT" & "KO"). In each condition, observations might be classified as "Y" or "N" depending on whether the organism exhibits some measured trait or not.
I would like to compare those 2 groups (experimental conditions) and to infer whether there is a statistically significant difference in the number of observations classified as "Y" between the groups (and, if so, whether there are more or fewer "Y"s in "KO" than in "WT").
I do not know what type of statistical test would be more appropriate for this task: Fisher's, Chi-squared, etc.
Info: the "lattice" is another feature of the dataset, I am comparing this parameter between conditions using the Wilcoxon rank test. For this question it might be ignored. I just decided to show an entire structure of df, including this column.
EDITS:
- the experiment does not have fixed marginals.
- there was a time window, in which the data were collected.
- conditions are independent (those were cell-sorting experiments).
- afaik, this cannot be addressed with McNemar's test, which is applicable to dependent samples, so the question was incorrectly marked as a duplicate.
| Most appropriate statistical test for count data (2x2 contingency) | CC BY-SA 4.0 | null | 2023-03-24T16:17:15.670 | 2023-05-15T12:55:04.697 | 2023-04-06T11:57:00.787 | 384060 | 384060 | [
"r",
"hypothesis-testing"
] |
610611 | 2 | null | 610608 | 1 | null | As you know, the variances of the principal components (PCs) are given by the eigenvalues of the covariance matrix, so the PCs' standard deviations are just the square roots of those eigenvalues. Since the square root is monotone, an eigenvalue exceeds 1 exactly when the corresponding standard deviation exceeds 1, so you can safely apply the Kaiser rule to the standard deviations (retain components with SD > 1), or simply square them to recover the eigenvalues.
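A quick numeric sketch of this equivalence (in Python rather than R, with made-up standard deviations such as `prcomp()` might report):

```python
import numpy as np

# Hypothetical PC standard deviations (not from any real dataset).
sdev = np.array([1.58, 1.10, 0.95, 0.60, 0.40])

# Squaring the standard deviations recovers the eigenvalues (PC variances).
eigenvalues = sdev ** 2

# Kaiser rule: retain components with eigenvalue > 1; since the square root
# is monotone, this is the same as retaining components with sdev > 1.
keep = eigenvalues > 1
assert (keep == (sdev > 1)).all()
print(keep)  # [ True  True False False False]
```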
| null | CC BY-SA 4.0 | null | 2023-03-24T16:18:41.473 | 2023-03-24T16:18:41.473 | null | null | 56940 | null |
610612 | 2 | null | 610005 | 1 | null | You should add `type = "response"` to your `emmeans()` call. That will back-transform the estimates and display standard errors and confidence limits.
I guess I would try to dissuade you from trying to report standard deviations of the data on the response scale. Your model uses a log response, so those are not the SDs you used in your analysis.
| null | CC BY-SA 4.0 | null | 2023-03-24T16:21:22.650 | 2023-03-24T16:21:22.650 | null | null | 52554 | null |
610614 | 1 | null | null | 0 | 13 | I have calculated the true positive rate (TPR) for two tests. I want to report the % difference for test b compared to test a. This should be very simple; however, I have gotten myself confused as to how to report it.
Example:
TPR for test a = 38/110 = 34.5%.
TPR for test b = 45/110 = 40.9%.
Which (if any) of the following statements are correct?
1. 40.9%-34.5% = 6.4%: “Test b showed a higher proportion of detected diseases by 7 (+6.4%)”.
2. (45/38)/38 = 3.1%: “Test b showed a higher proportion of detected diseases by 7 (+3.1%)”.
3. 100 * |38-45| / ((38+45)/2) = 16.9%: “Test b showed a higher proportion of detected diseases by 7 (+16.9%)”.
I hope you can help.
| Formulation of percentage difference | CC BY-SA 4.0 | null | 2023-03-24T16:23:15.343 | 2023-03-24T16:40:54.150 | 2023-03-24T16:30:42.683 | 322537 | 322537 | [
"percentage"
] |
610615 | 1 | null | null | 0 | 67 | Dear statistics fans,
I am absolutely clueless in an analysis and hope you can help because I have nobody else to ask right now and got totally lost in four statistics books and a zillion tutorials over the past five weeks.
[](https://i.stack.imgur.com/NMBJi.png)
And here is an overview of what I am trying to do (also see toy graph for visual aid):
- I try modelling a continuous response variable as a function of a whole bunch of predictors along a land use gradient (i.e. a succession of typical anthropogenic land use classes like agriculture, meadow... to unused, protected landscape)
- I got observation plots in five land uses (A-E in the graph) along that land use gradient, replicated in two different ecosystems; the ecosystems got equal numbers of observations, only some land uses are underrepresented/unbalanced
- there is some spatial autocorrelation, as within ecosystems and land uses the observation plots are clumped in the landscape
- the two ecosystems generally have a different baseline level in the response var (orange vs darkred solid line), but the overall pattern is the same; however, when plotting histograms of the response var, its distribution always looks bimodal (red solid line in small inset)
- I would like to include the two ecosystems as a fixed factor (same for the five land uses) as I am interested in their difference, not seeking generalization across these
- the predictors along the land use gradient are of two types: i) the ones that follow the same trend all across the gradient, some increasing, others decreasing (blue dashed lines), and ii) the ones that only have a non-zero trend across half of the gradient and are often zeros or close to zero in the other half of the gradient (green dotted lines)
What has happened so far:
- a predecessor of mine tried modelling the whole gradient with linear models, but got no significant results because the response var is decreasing to either end
- hence I set out and separated the dataset into two halves (using the land use ‘C’ in both sets) and tried modelling this with linear and linear mixed models; it is a pity though, as that separation also drastically decreases the number of observations in each subset
- also, this complicates matters insofar as, in order to interpret the gradient as a whole, I was advised to use the same set of predictors in both models, even though some of them have a lot of zeros or near-zero values in 2 out of 3 land use locations of a half gradient
- anyways, I tried starting with data visualizations and plotting, and ran ANOVAs between land uses, which looked promising
- followed by extensive correlation checks and PCAs of all predictors, compiled some nice predictor sets that have no correlation greater |0.75| (although I had to drop some nice predictor vars in the process, unfortunately)
- I started out with lm() and stepAIC(), then added interactions for a few hand-picked ecologically meaningful term combinations
- then moved over to lmer() to include the nestedness of land use within ecosystem, but then I got confused about whether that is correct, as it means both factors are introduced in the random part of the model, while I wanted to keep them as fixed factors ... ?
- then I tried lme() and stepAIC() and also doodled around with a variance structure to allow for different variances per ecosystem and land use, but I ain’t sure what I am doing there or whether I am doing it correctly, as the same factors I used the variance structure for are also my main fixed effects of interest
- then I took the book by Zuur & Ieno (Mixed effects models and extensions in ecology with R, 2009) and tried to work through their stepwise model construction process with gls() and lme(), but checking residual plots of the models is not really fun, I believe partly because of the zeros in the predictors, and some oddballs like one particular ecosystem-land use combination having all residuals collapse to zero variance
- I then tried standardizing and transforming the hell out of predictors and even response variables, but nothing really helps much, plus I bet you people know a way nicer method than that
My Goal:
- get together a data analysis in R that adequately deals with the data I have (n<100 observations), preferably picturing the whole gradient as it is an ecologically meaningful real-life gradient
- find a proper way of identifying the most important predictors and their effect sizes on my response variable, in such a way that the outcomes are still interpretable in ecologically meaningful terms (e.g. no four-way interactions please etc)
- I need to get this done with my limited understanding of statistical mathematics, yet I am eager to learn and understand new things (just not all at once)
I would be really grateful if one of you could point me to the right kind of analysis, as right now I am not even sure what I should look for and in which order I should implement a zillion bits and pieces of advice into one coherent analysis. I feel like I get a lot of suggestions, yet lack the understanding to meaningfully put them together, and am often not sure if adding more things just messes everything else up.
Thank you so very much in advance!
| Nice data - ugly residuals - too many puzzle pieces | CC BY-SA 4.0 | null | 2023-03-24T16:08:34.057 | 2023-03-27T08:04:11.750 | 2023-03-27T08:04:11.750 | 384233 | 384233 | [
"r",
"regression"
] |
610616 | 2 | null | 610614 | 1 | null | There are a couple of popular contrasts for binary data. I'll illuminate two of them you seem to be getting at.
## Risk Difference
The risk difference is literally the difference in rates between groups. If $p_1 = 38/110$ and $p_2 = 45/110$ then the risk difference is
$$ p_2 - p_1 \approx 0.064 $$
I would say something like "Group 2 had a risk 6.4 percentage points higher than group 1". The use of percentage points makes it clear you're talking about an absolute difference.
## Risk Ratio
The risk ratio is literally the ratio between risks, i.e.
$$ \dfrac{p_2}{p_1} = \dfrac{45/110}{38/110} \approx 1.18 $$
I would say something like "The rate in group 2 was 18% higher than in group 1 in relative terms". Here, the word "relative" communicates that you're talking about a difference relative to group 1.
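As a quick numeric check (plain Python arithmetic, just mirroring the fractions above):

```python
p1 = 38 / 110  # TPR of test a
p2 = 45 / 110  # TPR of test b

risk_difference = p2 - p1  # absolute difference, in probability units
risk_ratio = p2 / p1       # relative difference

print(round(risk_difference, 3))  # 0.064, i.e. 6.4 percentage points
print(round(risk_ratio, 3))       # 1.184, i.e. roughly an 18% relative increase
```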
| null | CC BY-SA 4.0 | null | 2023-03-24T16:38:09.833 | 2023-03-24T16:38:09.833 | null | null | 111259 | null |
610617 | 2 | null | 610614 | 1 | null | This isn't as simple as you might have hoped. Any of those choices is subject to misinterpretation by your readers, precisely because of the difficulty you have in making this choice.
Expressing values in percents and percentage changes too often ends up being confusing rather than clarifying. See for example this [post by Frank Harrell](https://www.fharrell.com/post/percent/).
The simplest solution and the one least likely to lead to confusion: let the results speak for themselves, and avoid any invocation of percents. For example, "TPR for test `a` was 0.34. TPR for test `b` was 0.41." It would be best to include error estimates around those point estimates.
| null | CC BY-SA 4.0 | null | 2023-03-24T16:40:54.150 | 2023-03-24T16:40:54.150 | null | null | 28500 | null |
610618 | 2 | null | 610591 | 0 | null | You definitely should report the odds ratio; logits are often difficult to interpret. It's been a while since I've done one, but try (with lme4 loaded) either A or B.
A.)
```
fixed <- fixef(ModelName)
confintfixed <- confint(ModelName, parm = "beta_", method = "Wald")
OR <- exp(cbind(fixed, confintfixed))
OR
```
B.)
```
parameters::parameters(ModelName, exponentiate = TRUE, details = TRUE)
```
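Whichever route you take in R, the back-transformation itself is just exponentiating the log-odds estimate and its Wald confidence limits. A language-agnostic sketch (shown in Python, with made-up numbers rather than output from any fitted model):

```python
import math

# Illustrative values only: a log-odds estimate and its standard error.
beta, se = 0.85, 0.30
z = 1.959963984540054  # 97.5% standard-normal quantile

odds_ratio = math.exp(beta)
ci = (math.exp(beta - z * se), math.exp(beta + z * se))

print(round(odds_ratio, 2))  # 2.34
print(tuple(round(v, 2) for v in ci))  # (1.3, 4.21)
```

Exponentiating the interval endpoints (rather than exponentiating the estimate and then adding/subtracting) is what keeps the interval valid on the odds-ratio scale.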
Other info below
[https://www.rdocumentation.org/packages/lme4/versions/1.1-31/topics/glmer](https://www.rdocumentation.org/packages/lme4/versions/1.1-31/topics/glmer)
^ lme4 documentation for glmer()
[https://cran.r-project.org/web/packages/sjPlot/vignettes/plot_model_estimates.html](https://cran.r-project.org/web/packages/sjPlot/vignettes/plot_model_estimates.html)
^ cool way to visualize odds
| null | CC BY-SA 4.0 | null | 2023-03-24T16:42:15.400 | 2023-03-24T16:42:15.400 | null | null | 383476 | null |
610619 | 1 | null | null | 0 | 25 | Based on my theoretical arguments, I have two competing mediators (M1 and M2), each of which possibly mediates the relationship between my independent (X) and dependent (Y) variable. I implemented two separate mediation analyses using the lavaan package in R with bootstrapping technique for the same dataset: one with the mediator M1 (X>M1>Y), and one with the mediator M2 (X>M2>Y). The coefficient estimate for the mediating effect (i.e. the indirect effect a1*b1) of M2 (β=0.199) is higher than that of M1 (β=0.170). How can I find out if the mediating effects of these two mediators differ in a statistically significant way? Through which method or procedure can I assess this?
I have tried to find some information on comparing the coefficients of different mediators for the same statistical model, but what I could find related only to comparing the coefficients of different independent variables for the same or different models, not mediators. Although a very similar question was asked some time ago on this platform, surprisingly it has not been answered.
As can be seen from the R codes, my model is actually a moderated mediation model, but at the moment I am only interested in the formal comparison of the mediation effects, not the effects of moderated mediation.
I would very much appreciate if someone could help me resolve this issue. Thanks a lot in advance.
Here's my code:
```
# For Mediator1 (M1)
ModMediation1 <- ' M1 ~ a1*RD + a2*PD + a3*RDXPD + a4*FA +
a5*FS + a6*FG + a7*T
Perf ~ b1*M1 + b2*FA + b3*FS + b4*FG + b5*T +
c2*PD + c3*RDXPD
# indirect effect
IndEff := a1*b1
# index of moderated mediation
IndModMed := a3*b1
'
ModMediation1_fit <- lavaan::sem(ModMediation1, data = p2_df, se = "bootstrap", bootstrap = 10000)
summary(ModMediation1_fit, fit.measures = TRUE, rsq = TRUE, standardized = TRUE, ci = TRUE)
parameterestimates(ModMediation1_fit, boot.ci.type = "bca.simple", standardized = TRUE, level = 0.95)
# For Mediator2 (M2)
ModMediation2 <- ' M2 ~ a1*RD + a2*PD + a3*RDXPD + a4*FA +
a5*FS + a6*FG + a7*T
Perf ~ b1*M2 + b2*FA + b3*FS + b4*FG + b5*T +
c2*PD + c3*RDXPD
# indirect effect
IndEff := a1*b1
# index of moderated mediation
IndModMed := a3*b1
'
ModMediation2_fit <- lavaan::sem(ModMediation2, data = p2_df, se = "bootstrap", bootstrap = 10000)
summary(ModMediation2_fit, fit.measures = TRUE, rsq = TRUE, standardized = TRUE, ci = TRUE)
parameterestimates(ModMediation2_fit, boot.ci.type = "bca.simple", standardized = TRUE, level = 0.95)
```
PS: I posted this question some time ago on StackOverflow, which was apparently not the right platform.
| comparison of two different mediators for the models with the same DV and IVs | CC BY-SA 4.0 | null | 2023-03-24T17:07:42.663 | 2023-03-24T17:07:42.663 | null | null | 384050 | [
"r",
"model-comparison",
"mediation",
"lavaan"
] |