Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
609872 | 1 | 609897 | null | 2 | 84 | I do a survival regression on some time-to-event data (vehicle breakdowns) with some covariates (essentially the age of the vehicle and some boolean variables for vehicle type). I must admit that I am not so sure about the quality of the data and whether they result from multiple processes instead of one.
For the implementation, I use the Python package `lifelines` and the [CoxPH](https://lifelines.readthedocs.io/en/latest/fitters/regression/CoxPHFitter.html) fitter in particular.
This is the distribution of survival times in my data (for both censored and non-censored points!):
[](https://i.stack.imgur.com/XF0Wv.png)
Here is the model summary:
```
<lifelines.CoxPHFitter: fitted with 2081 total observations, 354 right-censored observations>
duration col = 'duration'
event col = 'observed'
cluster col = 'vehicle'
robust variance = True
baseline estimation = breslow
number of observations = 2081
number of events observed = 1727
partial log-likelihood = -11593.48
time fit was run = 2023-03-18 06:32:51 UTC
---
coef exp(coef) se(coef) coef lower 95% coef upper 95% exp(coef) lower 95% exp(coef) upper 95%
covariate
year_of_manufacture 0.05 1.06 0.01 0.03 0.08 1.03 1.08
N7411 0.90 2.46 0.18 0.54 1.26 1.72 3.52
N7412 1.25 3.50 0.18 0.91 1.60 2.48 4.94
N7413 1.13 3.11 0.17 0.80 1.46 2.24 4.31
N7460 -0.47 0.63 0.33 -1.12 0.19 0.33 1.21
N7440 0.76 2.14 0.20 0.38 1.14 1.46 3.13
cmp to z p -log2(p)
covariate
year_of_manufacture 0.00 4.15 <0.005 14.86
N7411 0.00 4.95 <0.005 20.33
N7412 0.00 7.11 <0.005 39.68
N7413 0.00 6.76 <0.005 36.06
N7460 0.00 -1.39 0.16 2.61
N7440 0.00 3.89 <0.005 13.28
---
Concordance = 0.56
Partial AIC = 23198.95
log-likelihood ratio test = 163.41 on 6 df
-log2(p) of ll-ratio test = 106.14
```
This is how the modeled baseline hazard looks:
[](https://i.stack.imgur.com/stP6S.png)
I now have some questions:
- When I plot the deviance residuals of the model, a very clear pattern is visible:
(Red dots indicate censored values, blue dots indicate events).
I assume that this pattern indicates that the model is missing some crucial feature which allows it to explain the visible non-linear relation between explanatory and target variable. And it seems to me that the model is underestimating hazard for low durations and overestimating it for large durations (please correct me if I am wrong).
- But why can't the model adapt the hazard function to this slope, as the baseline hazard is non-parametric and thus not bound to any specific distribution of survival times?
- Besides adding other covariates: what could I do to improve the model?
- Is my interpretation correct that the model says all features except N7460 have a significant effect on the outcome, and that a concordance value of 0.56 tells us the model does only slightly better than random guessing?
| Residuals pattern in Cox proportional hazards model | CC BY-SA 4.0 | null | 2023-03-18T07:24:36.417 | 2023-03-21T16:25:27.460 | null | null | 185947 | [
"regression",
"python",
"survival",
"residuals",
"cox-model"
] |
609874 | 1 | null | null | 0 | 33 | I have a dataset of hourly day-ahead electricity prices and hourly forecasted day-ahead demand (from governmental agency) in the Norwegian price area NO2. I am trying to use the forecasted day-ahead demand to predict the next 24-hour electricity prices for each day (cross validation using a rolling window) with a linear model. However, I am having trouble implementing the exogenous predictor (day-ahead demand) in this model without getting an error.
I have tried lagging the predictor, which looks like this:
```
# rolling window
NO2 <- NO2 %>%
filter(datetime > as.POSIXct("2023-01-01 00:00:00"))
NO2_stretch <- NO2 %>%
stretch_tsibble(.init = 48, .step = 24) %>% # initial training set of size 48 (two days), step = 24 as the training set should expand one day for each forecast iteration
filter(.id !=max(.id)) # remove the last one as there will be no more observations to test
NO2_stretch # the "id" column corresponds to which cross-validation iteration we are in
# Estimate the linear models for each window
fit_cv <- NO2_stretch %>%
model(TSLM(no2_price ~ lag(no2_load, 24) + season())) # get a forecast for each day with an increasing training set
# can pipe this rolling window into forecast in the usual way to produce one step ahead forecasts from all models
fc_cv <- fit_cv %>%
forecast(h="1 day")
# Cross-validated accuracy
fc_cv %>% accuracy(NO2_actual)
```
The forecast function gives me this error:
```
Error: Problem with `mutate()` input `TSLM(no2_price ~ no2_load + season())`.
x object 'no2_load' not found
Unable to compute required variables from provided `new_data`.
Does your model require extra variables to produce forecasts?
ℹ Input `TSLM(no2_price ~ no2_load + season())` is `(function (object, ...) ...`.
```
Any help on this would be highly appreciated, let me know if I can clarify this further.
| Forecasting day-ahead electricity prices using a linear time series regression | CC BY-SA 4.0 | null | 2023-03-18T09:07:22.040 | 2023-03-22T00:23:46.750 | 2023-03-18T09:13:57.513 | 362671 | 382774 | [
"forecasting",
"cross-validation",
"linear-model"
] |
609875 | 2 | null | 126110 | 0 | null | $\require{cancel}$
$$\operatorname{Cov}(\theta^\ast)=E[{\theta^\ast}{\theta^\ast}']-E[{\theta^\ast}]E[{\theta^\ast}]'$$
$$E[{\theta^\ast}]E[{\theta^\ast}]'=\theta \theta'$$
$$y = X\theta+u$$
$$E[{\theta^\ast}{\theta^\ast}']=E[((X'WX)^{−1}X'Wy)((X'WX)^{−1}X'Wy)']=\\
E[(X'WX)^{−1}X'Wyy'WX(X'WX)^{−1}]=E[(X'WX)^{−1}X'W(X\theta+u)(\theta'X'+u')WX(X'WX)^{−1}]\\
=E[(X'WX)^{−1}X'W(X\theta\theta'X'+u\theta'X'+X\theta u'+uu')WX(X'WX)^{−1}]=
E[\cancel{(X'WX)^{−1}(X'WX)}\theta\theta'\cancel{(X'WX)(X'WX)^{−1}}]+\\
\underbrace{E[(X'WX)^{−1}X'W(u\theta'X')WX(X'WX)^{−1}]}_{=0}+\\
\underbrace{E[(X'WX)^{−1}X'W(X\theta u')WX(X'WX)^{−1}]}_{=0}+\\
E[(X'WX)^{−1}X'W(uu')WX(X'WX)^{−1}]\\
=\theta \theta' +(X'WX)^{−1}X'W\overbrace{\Sigma}^{E[uu']} WX(X'WX)^{−1}
$$
Hence
$$\operatorname{Cov}(\theta^\ast)=\underbrace{\cancel{\theta \theta'} +(X'WX)^{−1}X'W\Sigma WX(X'WX)^{−1}}_{E[{\theta^\ast}{\theta^\ast}']}-
\underbrace{\cancel{\theta \theta'}}_{E[{\theta^\ast}]E[{\theta^\ast}]'}\\
=(X'WX)^{−1}X'W\Sigma WX(X'WX)^{−1}$$
Then
$$\operatorname{Var}(a'\theta^\ast)=E[a'\theta^\ast{\theta^\ast}'a]-E[a'\theta^\ast]E[{\theta^\ast}'a]\\
=a'E[\theta^\ast{\theta^\ast}']a - a'E[\theta^\ast]E[\theta^\ast]'a\\
= a'\left(E[\theta^\ast{\theta^\ast}'] - E[\theta^\ast]E[\theta^\ast]'\right)a\\
=a'\operatorname{Cov}(\theta^\ast)a$$
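The sandwich form can also be verified numerically. Below is a small Monte-Carlo sketch in Python (shapes and the heteroscedastic $\Sigma$ are illustrative; $X$ and $W$ are treated as fixed):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
theta = np.array([1.0, -2.0, 0.5])
sd = rng.uniform(0.5, 2.0, size=n)      # heteroscedastic error scales
Sigma = np.diag(sd ** 2)                # E[uu']
W = np.diag(1.0 / sd)                   # an (arbitrary) weight matrix

A = np.linalg.inv(X.T @ W @ X) @ X.T @ W   # theta* = A y
cov_formula = A @ Sigma @ A.T              # (X'WX)^{-1} X'W Sigma WX (X'WX)^{-1}

# Monte-Carlo estimate of Cov(theta*) from repeated draws of u
estimates = np.array([A @ (X @ theta + rng.normal(scale=sd))
                      for _ in range(5000)])
cov_mc = np.cov(estimates.T)
```

The two matrices agree up to Monte-Carlo error, and $\theta\theta'$ drops out exactly as in the derivation.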
---
If you can use the results of the unweighted version, the result can be obtained directly by recognizing that the weighted estimator is just the unweighted least-squares estimator applied to the transformed variables $W^{1/2}X$ and $W^{1/2}y$.
| null | CC BY-SA 4.0 | null | 2023-03-18T09:39:05.817 | 2023-03-22T13:25:04.487 | 2023-03-22T13:25:04.487 | 60613 | 60613 | null |
609876 | 2 | null | 609829 | 3 | null | PCA just comes down to using the eigendecomposition of the (empirical) covariance matrix of the data.
The full eigendecomposition of the covariance matrix results in a set of eigenvectors and corresponding eigenvalues, which can be interpreted as the variance along these eigenvectors.
Because these eigenvectors are orthogonal, they can be used to create a rotation matrix that rotates the data so that the basis aligns with the direction of maximum variance.
For some people, this may be easier to understand with a few lines of code:
```
import numpy as np
data = np.random.randn(256, 2) # generate some random 2D data
covariance = np.cov(data, rowvar=False) # compute covariance matrix
variances, rotation = np.linalg.eigh(covariance) # eigendecomposition
pcs = (data - data.mean(0)) @ rotation # compute principal components
```
In other words, the principal components are just a rotated version of the (centred) data.
This also means that the (centred) data can simply be reconstructed by rotating back the principal components: `pcs @ rotation.T`.
As a result, a "full" PCA (not sure if this is the correct term) is perfectly reversible.
This is how PCA is typically used in the context of pre-processing data.
After all, this rotation should typically make it easier to find important features.
Typically, the principal components are additionally whitened (scaled to unit variance) using the eigenvalues of the decomposition (`variances` in the code).
It is also possible to whiten the data after the rotation and then rotate it back, which gives rise to ZCA pre-processing.
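Continuing in the same spirit as the snippet above, the two whitening variants can be sketched in a few lines (the correlated toy data here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.0], [1.2, 0.5]])  # correlated 2D data
centred = data - data.mean(0)

variances, rotation = np.linalg.eigh(np.cov(data, rowvar=False))  # eigendecomposition
pca_white = (centred @ rotation) / np.sqrt(variances)  # rotate, then scale to unit variance
zca_white = pca_white @ rotation.T                     # rotate back: ZCA whitening
```

Both versions have identity covariance; ZCA additionally keeps each whitened dimension aligned with the corresponding original axis.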
I can highly recommend this [answer to a related question](https://stats.stackexchange.com/a/117463/95000) for further reading (and some nice figures).
When using PCA for dimensionality reduction, you would only use a subset of the columns in the rotation matrix.
This obviously leads to loss of information.
However, the columns of the rotation matrix can effectively be used to transform the data back to the original space.
After all, using only a few columns corresponds to setting principal components to zero (i.e. dropping information).
If the total variance corresponding to these dropped dimensions is low enough, a reasonable reconstruction (of the centred data) is typically possible.
Again, in code:
```
data = np.random.randn(256, 784) # generate some random high-D data
covariance = np.cov(data, rowvar=False) # compute covariance matrix
_, rotation = np.linalg.eigh(covariance) # eigendecomposition
reduced_pcs = (data - data.mean(0)) @ rotation[:, -70:] # keep the last 70 PCs (largest variance)
reconstruction = reduced_pcs @ rotation[:, -70:].T # reconstruction of (centred) data
```
Note that the reconstruction of the random data in this snippet of code is not going to work well.
However, you should get some reasonable results if you plug in e.g. some MNIST data.
TL;DR: PCA is often used to pre-process data (make it nicer to work with) and can actually often be transformed back to the original input space (i.e. is not necessarily abstract).
| null | CC BY-SA 4.0 | null | 2023-03-18T09:49:13.160 | 2023-03-18T09:49:13.160 | null | null | 95000 | null |
609878 | 1 | null | null | 1 | 38 | From Meyer's Introductory Probability and Statistical Applications, 2nd ed:
Suppose that the dimensions, $X$ and $Y$, of a rectangular metal plate may be considered to be independent continuous random variables with the following pdfs:
$$X: g(x) = \begin{cases}
x - 1, & 1 < x \leq 2 \\
-x + 3, & 2 < x < 3 \\
0, & \text{elsewhere}
\end{cases}$$
$$Y: h(y) = \begin{cases}
\frac{1}{2}, & 2 < y < 4 \\
0, & \text{elsewhere}
\end{cases}$$
Find the pdf of the area of the plate, $A = XY$.
My attempt: for two independent continuous random variables $X, Y$ with pdfs $g(x)$ and $h(y)$, we know that if $A = XY$, then $p(a) = \int^{+\infty}_{-\infty} g(x) h \Big( \frac{a}{x} \Big) \Big| \frac{1}{x} \Big| \ dx$. To find the integration bounds, I took the first interval $1 < x \leq 2$ and derived, using $a = xy$ and $2 < y < 4$:
\begin{align}
\implies 2 < &\frac{a}{x} < 4 \\
\implies 2x < &a < 4x \\
\end{align}
$$\implies 1 < x < \frac{a}{2} \quad \text{and} \quad \frac{a}{4} < x < 2$$
Similarly, I apply the same procedure to the $2 < x < 3$ portion of the pdf of $X$ and derive the following bounds:
$$2 < x < \frac{a}{2} \quad \text{and} \quad \frac{a}{4} < x < 3$$
Using the aforementioned result to derive each segment of the piecewise pdf of $A$, I get:
\begin{align}
\int^{a/2}_1 g(x) h \Big( \frac{a}{x} \Big) \frac{1}{x} \ dx &= \int^{a/2}_1 (x-1)(1/2)(1/x) \ dx \\
&= \frac{a-2}{4} - \frac{1}{2} \ln \frac{a}{2}, \quad (2 < a < 4) \\
\int^{2}_{a/4} g(x) h \Big( \frac{a}{x} \Big) \frac{1}{x} \ dx &= \int^{2}_{a/4} (x-1)(1/2)(1/x) \ dx \\
&= \frac{8-a}{8} + \frac{1}{2} \ln \frac{a}{8}, \quad (4 < a < 8) \\
\int^{a/2}_{2} g(x) h \Big( \frac{a}{x} \Big) \frac{1}{x} \ dx &= \int^{a/2}_2 (-x + 3)(1/2)(1/x) \ dx \\
&= \frac{4-a}{4} + \frac{3}{2} \ln \frac{a}{4}, \quad (4 < a < 6) \\
\int^{3}_{a/4} g(x) h \Big( \frac{a}{x} \Big) \frac{1}{x} \ dx &= \int^{3}_{a/4} (-x + 3)(1/2)(1/x) \ dx \\
&= \frac{a-12}{8} + \frac{3}{2} \ln \frac{12}{a}, \quad (8 < a < 12)
\end{align}
Therefore, the pdf $p(a)$ may be defined as:
$$p(a) = \begin{cases}
\frac{a-2}{4} - \frac{1}{2} \ln \frac{a}{2}, &(2 < a < 4) \\
\frac{16-3a}{8} + \ln \frac{a^2}{16 \sqrt{2}}, &(4 < a < 6) \\
\frac{8-a}{8} + \frac{1}{2} \ln \frac{a}{8}, &(6 < a < 8) \\
\frac{a-12}{8} + \frac{3}{2} \ln \frac{12}{a}, &(8 < a < 12)
\end{cases}$$
However, integrating each segment over their respective bounds fails to yield unity. Graphically, it appears that there is an issue with the segment on $6 < a < 8$. My question: what is the issue?
[](https://i.stack.imgur.com/IezMS.png)
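For reference, here is a numerical sanity check of the product-density formula itself (independent of my piecewise algebra), simulating $X$ from the triangular density $g$ and $Y$ from $h$; grid sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.triangular(1, 2, 3, size=100_000)  # samples from g (triangular on (1, 3))
y = rng.uniform(2, 4, size=100_000)        # samples from h (uniform on (2, 4))
a_samples = x * y

def g(x):
    return np.where((x > 1) & (x <= 2), x - 1,
                    np.where((x > 2) & (x < 3), 3 - x, 0.0))

def h(y):
    return np.where((y > 2) & (y < 4), 0.5, 0.0)

def p(a):
    # p(a) = ∫ g(x) h(a/x) (1/x) dx, evaluated by a Riemann sum over the support of g
    xs = np.linspace(1, 3, 20_001)[1:-1]
    dx = xs[1] - xs[0]
    return np.sum(g(xs) * h(a / xs) / xs) * dx

grid = np.linspace(2, 12, 401)
pdf = np.array([p(a) for a in grid])
total = np.sum(pdf) * (grid[1] - grid[0])  # should be close to 1
```

Comparing `pdf` against a histogram of `a_samples` shows where a piecewise segment goes wrong.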
| 2-dimensional functions of random variables with piecewise densities | CC BY-SA 4.0 | null | 2023-03-18T10:03:03.493 | 2023-03-18T10:49:31.663 | 2023-03-18T10:49:31.663 | 362671 | 382637 | [
"probability",
"density-function",
"piecewise-pdf"
] |
609879 | 2 | null | 609458 | 1 | null | The likelihood function is a function of all components of $\theta$. What you call $l(\theta_i)$ is in fact a slice of the likelihood function taken by fixing all other components to their MLE's, i.e. $l(\theta_i,\theta_0=\hat\theta_0,\theta_1=\hat\theta_1,...)$.
Based on the relation to the asymptotic distribution of the MLE, this corresponds to a conditional distribution of $\theta_i$.
The profile likelihood, on the other hand, takes a slice along a principal axis (in the Gaussian case), which corresponds to the marginal distribution of $\theta_i$.
The figures in [this post](https://stackoverflow.com/questions/21867156/bivariate-normal-with-marginal-and-conditional-densities) can help you visualize it and understand the difference between conditional and marginal distributions.
Note that, since you don't know the true values of $\theta$, the conditional distribution does not represent the uncertainty on $\theta_i$. Also note that the conditional distribution will always be narrower than the marginal one.
| null | CC BY-SA 4.0 | null | 2023-03-18T10:07:01.303 | 2023-03-18T10:07:01.303 | null | null | 348492 | null |
609880 | 2 | null | 598897 | 2 | null | If the distribution shape changes over time, this means that observations are not distributed independently and identically (the main issue may be dependence, or non-identity, or both). Now if data are not independently and identically distributed, a histogram or kernel density estimator applied to the observations will not normally estimate "the" distribution, as there is no well defined single one. (Exception are data from a stationary process, but this runs counter to "shape changes over time"; at best, the process may at some point go into stationarity, in which case earlier observations should not be used for distribution estimation.)
So this question would require an analysis of what goes on over time, which cannot be seen from the histogram.
| null | CC BY-SA 4.0 | null | 2023-03-18T10:43:51.960 | 2023-03-18T10:43:51.960 | null | null | 247165 | null |
609881 | 2 | null | 609817 | 0 | null | If you want a data driven clustering, k-means looks promising in the sense that it will produce clusters with similar within-cluster variance, which may make sense in your application. The problem of imbalanced cluster sizes may come from a very skew distribution of values, and k-means may produce something more to your liking if you transform the data first, say, by taking logs or square roots. As written in the comments, it all depends on what you really need in this situation. There is no uniquely "true" clustering that could be found reliably from data. Different clusterings are possible on the same data and background knowledge regarding the meaning of the data and aim of clustering is required to decide between them.
| null | CC BY-SA 4.0 | null | 2023-03-18T10:54:44.277 | 2023-03-18T10:54:44.277 | null | null | 247165 | null |
609882 | 1 | null | null | 0 | 38 | We're interested in whether a lipreading training improves audiovisual speech perception. Participants that either received lipreading training or not completed a speech comprehension task at two time points. Speech was presented in an audio-only and an audiovisual condition. Each participant completed both audio-only and audiovisual trials at both time points. For each trial participants' response was rated as correct or incorrect. The following variables are of interest:
- condition (factor with two levels: audio-only vs audiovisual)
- session (factor with two levels: pre-training vs post-training)
- group (factor with two levels: lipreading training vs no training)
We want to model the threeway interaction between condition, session and group as this depicts the potential training effect.
We used a set of 300 sentences with each subject hearing all sentences. However, because the assignment was randomised, each sentence was presented in each condition, session and group (across all subjects). This means that participants and sentences are crossed.
What would the model with the maximal random effects structure look like? Using lme4 I would fit a GLMM with a logistic link function and a random slope for each within-unit predictor (i.e., within participant and within sentence). Using the formula notation in R, I would fit the following model:
```
response ~ condition * session * group + (condition * session | participant) + (condition * session * group | sentence)
```
Would you say that the model specification is correct?
And what would the mathematical formula for such a model look like?
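To make the last question concrete, here is my tentative mathematical write-up of that model (notation is my own: participant $j$, sentence $k$; please correct it if it is wrong):

$$
y_{jk} \sim \operatorname{Bernoulli}(p_{jk}), \qquad
\operatorname{logit}(p_{jk}) = \mathbf{x}_{jk}^\top \boldsymbol\beta + \mathbf{z}_{jk}^\top \mathbf{u}_j + \mathbf{w}_{jk}^\top \mathbf{v}_k,
$$

where $\mathbf{x}_{jk}$ encodes the fixed effects (condition, session, group and their interactions), $\mathbf{z}_{jk}$ the by-participant terms (intercept, condition, session and their interaction) with $\mathbf{u}_j \sim N(\mathbf{0}, \boldsymbol\Sigma_u)$, and $\mathbf{w}_{jk}$ the by-sentence terms (intercept plus all fixed-effect terms) with $\mathbf{v}_k \sim N(\mathbf{0}, \boldsymbol\Sigma_v)$, the $\mathbf{u}_j$ and $\mathbf{v}_k$ being crossed.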
| GLMM with crossed random effects | CC BY-SA 4.0 | null | 2023-03-18T11:13:10.300 | 2023-03-18T11:40:57.233 | 2023-03-18T11:40:57.233 | 367023 | 367023 | [
"mixed-model",
"lme4-nlme",
"glmm"
] |
609883 | 1 | null | null | 1 | 20 | I'm trying to approximate an unknown distribution by a truncated Edgeworth series, with cumulants/central moments estimated from a large sample.
I notice though that I am getting negative tail probability densities. What would be the restrictions on my estimates of the sample central moments to guarantee positive density? Is there an accessible paper or set of lecture notes treating this problem?
| Restrictions on sample cumulants/moments for truncated Edgeworth expansion | CC BY-SA 4.0 | null | 2023-03-18T11:43:19.790 | 2023-03-18T11:43:19.790 | null | null | 256959 | [
"distributions",
"approximation"
] |
609885 | 1 | null | null | 3 | 72 | I'm currently working with multivariate GARCH representations of time-series for financial data using the `rmgarch` R package. This package in turn uses the well-known `rugarch` package to fit the 'marginal' GARCH processes.
Right now I'm evaluating the goodness-of-fit and parameter estimates when using different conditional distributions for the variance innovations. It seems that the (skewed) Student's $t$ and skewed normal fit my data better as they present heavier-than-normal tails, skew, and excess kurtosis. In addition, the parameter estimates seem to become more significant (particularly the estimate for $\alpha_1$) when using non-normal distributions.
Typically, I would analyse the standardised residuals of my model (available in `"fit object"@mfit$stdresid` for a DCC-fitted model), and use for example a QQ plot against the theoretical distribution to examine the behaviour, along with an ACF plot to examine autocorrelations. This is easily done when assuming normal innovations as the standard `qqnorm` in R does the trick, however I'm less sure how to do this procedure for the skewed normal and (skewed) $t$ distributions.
Is there a good way to produce these kinds of QQ-plots in R? The GARCH package(s) will readily output estimates for skew and shape, but I'm not sure how to incorporate these into QQ-plots.
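For reference, the generic recipe I have in mind is to push plotting positions through the quantile function of the fitted distribution and plot them against the sorted standardised residuals. A minimal Python/scipy sketch with a plain Student's $t$ (I assume the skewed variants would need their own quantile function, e.g. rugarch's `qdist` on the R side):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
df = 6
resid = stats.t.rvs(df, size=1000, random_state=rng)  # stand-in for standardised residuals

n = len(resid)
probs = (np.arange(1, n + 1) - 0.5) / n  # plotting positions
theo = stats.t.ppf(probs, df)            # theoretical quantiles of the fitted distribution
emp = np.sort(resid)                     # empirical quantiles

# A QQ plot is then just a scatter of `theo` vs `emp`;
# points near the 45-degree line indicate a good distributional fit.
```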
Also, as a side question: currently I'm fitting three time series. Two of them seem to prefer a Student's $t$ distribution, and one a skewed normal. For the joint innovations, however, I'm currently using a multivariate normal distribution. Does this make sense from a mathematical perspective? As you might tell, I'm relatively inexperienced with time-series models, but this seems akin to assuming a joint normal distribution with non-normal marginal distributions, which seems odd.
Thanks for any help! :)
| Evaluating goodness-of-fit for GARCH models in R with QQ-plots (rugarch package) | CC BY-SA 4.0 | null | 2023-03-18T11:51:53.603 | 2023-03-18T20:15:51.883 | 2023-03-18T20:15:51.883 | 53690 | 383523 | [
"r",
"time-series",
"goodness-of-fit",
"garch",
"qq-plot"
] |
609886 | 1 | null | null | 0 | 28 | I have a data set with two variables (var1 and var2) and two binary variables Sex (1,2) and Time (1,2). Therefore, I have four groups.
I have calculated pearson correlations and R2 for the four groups. For example:
```
Data %>%
group_by(Sex,Time) %>%
summarize(cor=cor(var1, var2, use="complete.obs"))
```
Results:
group1: 0.06;
group2: 0.17;
group3: 0.24;
group4: 0.26
As the sample size is huge, I would guess the differences are significant. However, I would like to know whether the differences among these correlations are statistically significant.
I would also like to compare R2s for the regression models (so that I can add control variables).
```
lm1 <- lm(Var1~Var2+Control, data=Dat_group1)
lm2 <- lm(Var1~Var2+Control, data=Dat_group2)
lm3 <- lm(Var1~Var2+Control, data=Dat_group3)
lm4 <- lm(Var1~Var2+Control, data=Dat_group4)
```
Is there any way to test whether the differences in these correlations (or in the regression models' R²s) are statistically significant?
| Compare R2 and Pearson correlation in different models | CC BY-SA 4.0 | null | 2023-03-18T12:27:45.927 | 2023-03-18T13:09:48.117 | 2023-03-18T13:09:48.117 | 362671 | 383524 | [
"r",
"correlation",
"group-differences",
"lm"
] |
609887 | 2 | null | 609871 | 4 | null | All the fitting methods in `gam()`/`bam()` return some form of smoothness selection score:
- GCV
- UBRE (AIC)
- GACV
- REML score
- ML score
- ...
and this is what is displayed in the summary output.
Their main utility is as a quick way to compare models (in the same way you might compare models based on AIC); the lower the score the "better" the model. Note that for ML or REML scores, these are not the log-likelihood of the data evaluated at the MLEs of the model parameters: you'll get slightly different values from `logLik()`:
```
r$> logLik(fit.1)
'log Lik.' -457.0239 (df=6.872103)
r$> logLik(fit.2)
'log Lik.' -456.9475 (df=7.106575)
```
and I'm not exactly sure why (probably the REML/ML scores aren't including certain other parameters that are in the log likelihood of the data).
As to your specific example, the model using thin plate regression splines (`fit.2`) fits slightly better (it has a slightly lower negative restricted log-likelihood score), but uses a bit more wiggliness than the CRS model (`fit.1`). But the difference is negligible and certainly not worth bothering about in this case.
```
library("gratia")
compare_smooths(fit.1, fit.2) |>
draw()
```
[](https://i.stack.imgur.com/6tpuT.png)
| null | CC BY-SA 4.0 | null | 2023-03-18T12:28:03.663 | 2023-03-18T12:28:03.663 | null | null | 1390 | null |
609888 | 2 | null | 605680 | 1 | null | This question describes an interesting experiment: can sea turtle distinguish between the colors red and blue? And since the OP has provided all the data, let's look at it.
There is a panel for each turtle, with turtles whose reinforced color is red on top. For each color (row), I've ordered the turtles by speed of learning (more about this below). The time $t$ (in days) ranges from 0 to 9; it simplifies the math a bit to start at $t$ = 0 rather than $t$ = 1. And each point indicates the number of successes out of 10 trials: the number of times the turtle pushed the paddle with its reinforced color and got a reward.
[](https://i.stack.imgur.com/SlDfb.png)
The position of the reinforced color (left or right paddle) was randomized during each trial. So we can model the number of successes as $Y \sim \operatorname{Binomial}(10, p_t)$. Let's also assume that the probability of success $p_t$ is a function of time and two learning parameters, $\alpha$ and $\beta$:
$$
\begin{aligned}
p_t(\alpha,\beta) &= \alpha + (1 - \alpha)\left(1 - \exp(-\beta t)\right) \quad \alpha\in(0,1), \beta > 0
\end{aligned}
$$
This equation looks a bit complex but the meaning of $\alpha$ and $\beta$ is easy to explain: initially, at time $t$ = 0, the probability of success is $\alpha$, and over the next 9 days it approaches 1 at rate $\beta$. Prior to the experiment, each turtle received training with its reinforced color alone, so we expect the $\alpha$ parameter is ≥ 0.5 for all of them. The $\beta$ parameter is the speed of learning.
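The parameterization is easy to sanity-check in code (Python here purely for illustration; the fitting code at the end is in R):

```python
import numpy as np

def p(t, alpha, beta):
    # success probability: equals alpha at t = 0 and approaches 1 at rate beta
    return alpha + (1 - alpha) * (1 - np.exp(-beta * t))
```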
Here are the MLEs (maximum likelihood estimates) of $\alpha$ and $\beta$ for each turtle. I obtained these by minimizing the negative log-likelihood of $\operatorname{Binomial}(10, p_t)$. R code is attached at the end.
```
#> id color α.mle β.mle
#> Green (1) red 0.779 1.36
#> Hawksbill red 0.517 0.272
#> Green (2) red 0.522 0.249
#> Green (5) blue 0.735 0.778
#> Green (4) blue 0.623 0.657
#> Green (3) blue 0.528 0.314
```
There is variability in the initial probabilities $\alpha$, with Green (1) and Green (5) doing very well from the start. There is even more variability in the speed of learning $\beta$. This is not a formal hypothesis test but the exploratory analysis suggests that either (a) there is no difference in the speed of learning when the reinforced color is blue vs red, or (b) the difference is small compared to the biological variability between turtles.
[](https://i.stack.imgur.com/uMqPq.png)
---
```
library("nloptr")
library("tidyverse") # for the tibble/dplyr verbs (tibble, filter, bind_rows) used below
minimize <- function(x0, func, lb = NULL, ub = NULL) {
opts <- list(
"algorithm" = "NLOPT_LN_SBPLX",
"xtol_rel" = 1.0e-6
)
a <- nloptr(x0, func, lb = lb, ub = ub, opts = opts)
list(x = a$solution, fx = a$objective)
}
turtles <- c(
# red
"Green (1)", "Hawksbill", "Green (2)",
# blue
"Green (5)", "Green (4)", "Green (3)"
)
data <- load_data() # helper defined elsewhere; returns a tibble with columns id, color, successes
params.mle <- tibble(
id = character(),
color = character(),
α.mle = numeric(),
β.mle = numeric()
)
for (i in seq(turtles)) {
id <- turtles[i]
x <- 0:9
n <- 10
subset <- data %>%
filter(.data$id == {{ id }})
y <- subset$successes
col <- subset$color[1]
nll <- function(param) {
a <- param[1]
b <- param[2]
prob <- a + (1 - a) * (1 - exp(-b * x))
-sum(dbinom(y, n, prob, log = TRUE))
}
soln <- minimize(c(0.5, 1), nll, lb = c(0, 0), ub = c(1, 3))
a <- soln$x[1]
b <- soln$x[2]
params.mle <- params.mle %>%
bind_rows(
list(
id = id, color = col,
α.mle = a, β.mle = b
)
)
}
params.mle
```
| null | CC BY-SA 4.0 | null | 2023-03-18T12:59:24.493 | 2023-03-18T12:59:24.493 | null | null | 237901 | null |
609890 | 2 | null | 608921 | 1 | null | Referring to “the” MSE is probably a mistake, since there are reasonable arguments for multiple calculations (an $n$ denominator and and $n−p$ denominator both make sense). I would want to define explicitly what I mean if there is any ambiguity about it.
However, the R decision to call $\hat\sigma$ the standard error makes no sense to me, because standard errors are associated with parameters being estimated. To which parameter does $\hat\sigma$ correspond? (I don’t have an answer, which is why I have yet to see why R calls $\hat\sigma$ a residual standard error.)
I do not really have a clean name for $\hat\sigma=\sqrt{
\frac{
\sum\left(
y_i-\hat y_i
\right)^2
}{
n-p}
}$. While the expression inside the square root is unbiased for error variance (assuming fairly typical assumptions like the Gauss-Markov conditions), Jensen’s inequality means that $\hat\sigma$ is biased for the error standard deviation, so “unbiased error standard deviation” is not correct.
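Whatever one calls it, the quantity itself is easy to pin down numerically. Here is a minimal Python sketch (illustrative data) of the statistic R reports:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 3  # n observations, p estimated coefficients (incl. intercept)
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([1.0, 2.0, -0.5])
y = X @ beta + rng.normal(scale=0.7, size=n)  # true error SD = 0.7

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = np.sum((y - X @ beta_hat) ** 2)
sigma_hat = np.sqrt(rss / (n - p))  # what R's summary.lm() labels "residual standard error"
```

With these settings `sigma_hat` lands near the true error SD of 0.7, though by Jensen's inequality it is slightly biased downward as an estimator of the standard deviation.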
EDIT
[The R developers appear to know that residual standard "error" is a misnomer and lament that it crept into the documentation for functions to such an extent that correcting this becomes difficult.](https://stats.stackexchange.com/a/613571/247274)
| null | CC BY-SA 4.0 | null | 2023-03-18T13:15:06.573 | 2023-04-20T19:28:33.007 | 2023-04-20T19:28:33.007 | 247274 | 247274 | null |
609891 | 2 | null | 515593 | 3 | null | Your example is not a Directed Acyclic Graph (DAG) because it has a cycle in its bi-directed edge. So in terms of causal graphical models, I do not think there is a name for such a structure because it contradicts the underlying causal DAG assumption.
That said, it might make sense to consider common causes of "Exposure" and "Variable", or find other ways of expressing your set-up in a graph that does not rely on a bi-directed edge.
If you really do believe there is a bidirectional effect that cannot be resolved either through a different model, or by arguing things like "the effect in one direction will take very long to show so it does not matter", then it seems the DAG modeling assumptions are simply violated. I should say that this would not be unusual: they are very strong assumptions. There is also some theory on cyclical causal graphs. I am not sure how applicable such approaches would be to your case, but it could be an idea to look into it.
| null | CC BY-SA 4.0 | null | 2023-03-18T13:43:40.193 | 2023-03-18T15:37:17.507 | 2023-03-18T15:37:17.507 | 250702 | 250702 | null |
609892 | 1 | null | null | 1 | 42 | I know that for fitting a random forest model in a training set, you create many bootstrap samples with the training data set and fit a decision tree to each bootstrapped training set. Then aggregate predictions from every tree to get the single prediction.
But, the question is, for testing, can you please inform me how it works in an unseen set? I don't think you can create many bootstrap samples with the training data. Do you create bootstrap samples with the test data?
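To check my understanding in code (a toy sketch where the bootstrap-sample mean stands in for a fitted tree): the bootstrap happens only on the training data at fit time; at prediction time every fitted "tree" sees the same, un-resampled test inputs, and their outputs are averaged.

```python
import numpy as np

rng = np.random.default_rng(0)
train_y = rng.normal(loc=5.0, scale=1.0, size=200)  # toy training targets
n_test = 10                                         # unseen test points, never resampled

preds = []
for _ in range(500):
    boot = rng.choice(train_y, size=train_y.size, replace=True)  # bootstrap TRAINING data
    fitted_value = boot.mean()                   # stand-in for one fitted tree
    preds.append(np.full(n_test, fitted_value))  # predict on the fixed test set
ensemble_pred = np.mean(preds, axis=0)           # aggregate across "trees"
```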
Thanks in advance for your help
| Bootstrapping for testing set | CC BY-SA 4.0 | null | 2023-03-18T13:47:12.547 | 2023-03-18T13:48:26.213 | 2023-03-18T13:48:26.213 | 382257 | 382257 | [
"random-forest",
"bootstrap"
] |
609893 | 1 | null | null | 1 | 82 | Random Fourier Features (RFFs) were introduced by A. Rahimi and B. Recht in their 2007 publication [Random Features for Large-Scale Kernel Machines](https://people.eecs.berkeley.edu/%7Ebrecht/papers/07.rah.rec.nips.pdf). RFFs are based on Bochner's theorem, which applies to stationary kernels on Euclidean spaces i.e. any positive semi-definite function $k$ satisfying
$$
k(x, y) = \kappa(||x - y||), ~\forall~x, y~ \in \mathbb{R}^d.
$$
Bochner's theorem states that such a function can be written as a Fourier transform :
$$
k(x, y) = \int_{\mathbb{R}^d} \exp{\left[i \omega^T (x - y)\right]}\mathbb{P}(d\omega),
$$
where $\mathbb{P}$ is a probability measure. This is an expectation with respect to $\mathbb{P}$ and we can therefore approximate $k$ via Monte-Carlo based on a sample drawn from $\mathbb{P}$.
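To fix notation, the stationary construction can be sketched numerically; below is a minimal Monte-Carlo check for the RBF kernel with unit lengthscale (whose spectral measure $\mathbb{P}$ is standard normal; the sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 2, 2000                         # input dimension, number of random features
x, y = rng.normal(size=d), rng.normal(size=d)

omega = rng.normal(size=(D, d))        # frequencies drawn from P = N(0, I)
b = rng.uniform(0, 2 * np.pi, size=D)  # random phases

def phi(z):
    # random Fourier feature map: E[phi(x) . phi(y)] = k(x, y)
    return np.sqrt(2.0 / D) * np.cos(omega @ z + b)

k_exact = np.exp(-0.5 * np.sum((x - y) ** 2))  # RBF kernel, lengthscale 1
k_approx = phi(x) @ phi(y)                     # Monte-Carlo approximation
```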
It turns out that Bochner's theorem can be extended to non-stationary kernels on Euclidean spaces as well. This is known as Yaglom's theorem or the Yaglom-Genton theorem. A non-stationary kernel can therefore also be approximated via a Monte Carlo scheme. This is what is done in a 2017 paper by J-F. Ton et al., [Spatial Mapping with Gaussian Processes and Nonstationary Fourier Features](https://arxiv.org/abs/1711.05615). I do not understand the first steps of the derivation of the Monte Carlo scheme in that paper.
For some context, Yaglom's theorem says that a non-stationary kernel $k$ on $\mathbb{R}^d$ can be written as
$$
k(x, y) = \int_{\mathbb{R}^d \times \mathbb{R}^d} \exp{\left[i \left( \omega_1 ^T x - \omega_2 ^T y \right) \right]} \mu(d\omega_1, d\omega_2),
$$
where $\mu$ is the Lebesgue-Stieltjes measure associated with some $f$, which is a positive semi-definite function with bounded variation. This is the same as
$$
k(x, y) = \int_{\mathbb{R}^d \times \mathbb{R}^d} \exp{\left[i \left( \omega_1 ^T x - \omega_2 ^T y \right) \right]} f(\omega_1, \omega_2)d\omega_1 d\omega_2,
$$
where $f$ is as above.
The authors first claim that $f$ can be written as
$$
f(\omega_1, \omega_2) = g(\omega_1, \omega_2) + g(\omega_2, \omega_1) + g(\omega_1, \omega_1) + g(\omega_2, \omega_2),
$$
where $g$ is some density. It is not clear to me why there should exist a density $g$ satisfying that.
Further, the authors then plug that form for $f$ into the integral. That should lead to
$$
k(x, y) = \frac{1}{4} \int_{\mathbb{R}^d \times \mathbb{R}^d} \exp{\left[i \left( \omega_1 ^T x - \omega_2 ^T y \right) \right]} \left( g(\omega_1, \omega_2) + g(\omega_2, \omega_1) + g(\omega_1, \omega_1) + g(\omega_2, \omega_2) \right) d\omega_1 d\omega_2.
$$
The authors obtain instead
$$
k(x, y) = \frac{1}{4} \int_{\mathbb{R}^d \times \mathbb{R}^d} \left(\exp{\left[i \left( \omega_1 ^T x - \omega_2 ^T y \right) \right]} + \exp{\left[i \left( \omega_2 ^T x - \omega_1 ^T y \right) \right]} + \exp{\left[i \omega_1 ^T \left(x - y \right) \right]} + \exp{\left[i \omega_2 ^T \left(x - y \right) \right]} \right) \mu(d\omega_1, d\omega_2).
$$
I do not understand how they get to that expression. Could someone unblock me?
| Non-stationary Random Fourier Features | CC BY-SA 4.0 | null | 2023-03-18T14:01:35.620 | 2023-04-02T14:57:26.293 | null | null | 383233 | [
"machine-learning",
"gaussian-process",
"kernel-trick",
"approximation",
"fourier-transform"
] |
609894 | 1 | 610729 | null | 1 | 49 | I was reading an extract from the book "Regression and Other Stories", and in chapter 9 the author distinguishes between three cases:
"After fitting a regression, $y = a + bx + error$, we can use it to predict a new data point, or a set of
new data points, with predictors $x_{new}$. We can make three sorts of predictions, corresponding to increasing levels of uncertainty:
- The point prediction, $\hat{a}+\hat{b}x_{new}$: Based on the fitted model, this is the best point estimate of the average value of y for new data points with this new value of x. We use $\hat a$ and $\hat b$ here because the point prediction ignores uncertainty.
- The linear predictor with uncertainty, $a + bx_{new}$, propagating the inferential uncertainty in $(a, b)$: This represents the distribution of uncertainty about the expected or average value of y for new data points with predictors $x_{new}$
- The predictive distribution for a new observation, $a + bx_{new} + error$: This represents uncertainty about a new observation y with predictors $x_{new}$.
and then it makes the example
" For example, consider a study in which blood pressure, $y$, is predicted from the dose, $x$, of a drug.
For any given $x_{new}$, the point prediction is the best estimate of the average blood pressure in the
population, conditional on dose $x_{new}$; the linear predictor is the modeled average blood pressure of
people with dose $x_{new}$ in the population, with uncertainty corresponding to inferential uncertainty
in the coefficients $a$ and $b$; and the predictive distribution represents the blood pressure of a single
person drawn at random from this population, under the model conditional on the specified
value of $x_{new}$.
As sample size approaches infinity, the coefficients a and b are estimated more and more precisely,
and the uncertainty in the linear predictor approaches zero, but the uncertainty in the predictive
distribution for a new observation does not approach zero; it approaches the residual standard
deviation $\sigma$"
But honestly I am not sure I understood the difference between 2) and 3).
Suppose I have a model $y= f(x,a,b,c)$ that depends on a predictor variable $x$ and three other coefficients $a,b,c$, and I sample the posterior distribution with emcee or another software package and find the best-fit coefficients $\hat a, \hat b, \hat c$.
If:
- I substitute the values of $a,b,c$ with the best-fit coefficients at a point $x_{new}$, I get the point prediction.
Case 2 should correspond to finding the uncertainty on the model's expected value of $y_{pred}$ at $x_{new}$, and should correspond to this procedure (I think):
- fix $x= x_{new}$ and
- take the samples of $a,b,c$ and
- collect all the $f(x_{new},a,b,c)$
- compute the mean and the variance to get the distribution of uncertainty about the model's average value of $y$.
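To make my case-2 procedure concrete, here is a toy sketch with made-up posterior samples for a simple quadratic $f$ (all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
x_new = 2.0

# Fake posterior samples of (a, b, c) for f(x, a, b, c) = a + b*x + c*x**2.
a = rng.normal(1.0, 0.1, size=4000)
b = rng.normal(0.5, 0.05, size=4000)
c = rng.normal(0.0, 0.02, size=4000)

f_samples = a + b * x_new + c * x_new ** 2   # collect f(x_new, a, b, c) over the samples
mean_pred = f_samples.mean()                 # average value of y at x_new
sd_pred = f_samples.std()                    # uncertainty about that average

print(mean_pred, sd_pred)
```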
But what is case 3, and what is the procedure? And what does it mean "under the model conditional on the specified value of $x_{new}$"?
| difference between the linear predictor with uncertainty and predictive distribution for a new observation | CC BY-SA 4.0 | null | 2023-03-18T14:49:33.207 | 2023-03-25T21:23:41.280 | null | null | 275569 | [
"bayesian",
"predictive-models",
"inference"
] |
609895 | 1 | null | null | 2 | 21 | I want to compare X between different groups "material".
Material is a categorical variable with four groups A,B,C,D.
patient_ID is a subject specific identifier -> I use this as a random effect.
```
lmer(X~ material + (1|patient_ID), data)
```
Now my question/problem:
Numbers of samples vary between the four groups:
A: 1 per patient_ID
B: 0-1 per patient_ID
C: 0-2 per patient_ID
D: 0-2 per patient_ID
(Edit: if more than 1 sample was taken, it was a repeated measure -> those 2 samples are expected to be very similar and are not independent)
"Missing" data should be no problem in an LMM, but in C and D, when 2 samples exist, those are not independent as they are just a "repeated measurement". How do I handle this, or does my model already account for this because of the random effect patient_ID?
| LMM repeated measures (dependent) accounted for by my model? | CC BY-SA 4.0 | null | 2023-03-18T14:59:26.587 | 2023-03-18T17:04:09.880 | 2023-03-18T16:31:13.967 | 362671 | 377867 | [
"lme4-nlme"
] |
609896 | 2 | null | 605600 | 3 | null | The blood biomarker concentration measurements are censored because of the upper detection limit (at 2,500 judging from the plots).
You can use ordinal regression (aka proportional odds regression) to model the data. You can also do a Wilcoxon rank-sum test to compare populations 1 and 2 or the Kruskal-Wallis test to compare more than two populations. But with the tests you cannot include additional covariates (such as region) and in any case ordinal regression generalizes these tests.
Alternatively, you can use [tobit regression](https://en.wikipedia.org/wiki/Tobit_model).
Keep in mind that the censoring implies some loss of information. For example, you can't learn much about the upper quantiles of the blood biomarker in population 2, other than that they are ≥ 2,500. Similarly, you cannot do precise inference (say a 95% two-sided confidence interval) for the population 2 mean.
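To illustrate that loss of information, here is a small simulated sketch (made-up lognormal biomarker values censored at 2,500): treating censored values as if they were exactly 2,500 biases the sample mean downward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated biomarker concentrations with a heavy right tail (parameters invented).
true_values = rng.lognormal(mean=7.0, sigma=1.0, size=100_000)
true_mean = true_values.mean()

# Upper detection limit: values above 2,500 are only known to be >= 2,500.
observed = np.minimum(true_values, 2500)
naive_mean = observed.mean()

print(true_mean, naive_mean)  # the naive mean understates the true mean
```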
Here are some relevant threads/links:
- Biostatistics for Biomedical Research Sections 7.3.1 and 7.6.2 analyze data censored at an upper detection limit of 2,500, just as in your case.
- Tobit models and Ordinal logistic regression by the UCLA Stats Help.
- What regression approach would suit zero-inflated data that is censored at a fuzzy threshold?
- Explanation of a censored regression model
Or search for [censored regression](https://stats.stackexchange.com/search?q=censor%2A+regression) on Cross Validated.
| null | CC BY-SA 4.0 | null | 2023-03-18T15:24:09.520 | 2023-03-18T15:24:09.520 | null | null | 237901 | null |
609897 | 2 | null | 609872 | 2 | null | The plots that you show aren't very useful for evaluating a Cox model. Intuition from models like ordinary least squares often doesn't extend to survival modeling.
The baseline hazard over time is necessarily 0 except at observed event times, as the Cox-model hazard is exactly 0 between event times. The lines connecting the points in your first plot are thus, at best, misleading. If you want to show the baseline hazard then show the cumulative hazard, a step function. Remember, however, that the baseline hazard isn't even a part of the Cox model itself. It's something you can extract from the data after you fit the model. It doesn't evaluate the fit of the model.
The [discussion of goodness-of-fit measures for survival models in lifelines](https://lifelines.readthedocs.io/en/latest/Survival%20Regression.html#goodness-of-fit) doesn't include deviance residuals, for good reason. Those coming new to survival modeling often hope to use deviance residuals (or the martingale residuals to which they are related) in the ways that one uses residuals in least-squares modeling. For many purposes you can't, as [Therneau and Grambsch](https://www.springer.com/us/book/9780387987842) explain in Chapter 4. For example, in Section 4.2.3, they note that "martingale residuals and the fitted values are negatively correlated, frequently strongly so," which is related to the plot you show.
With time-fixed covariates, the deviance residuals $d_i$ are close to a normalized difference between the observed $(N_i)$ and predicted $(\hat E_i)$ numbers of events for an individual at the event or censoring time $t_i$ (Therneau and Grambsch, Section 4.3):
$$d_i \approx \frac{N_i(t_i)-\hat E_i(t_i)}{\sqrt {\hat{E_i}(t_i)}}. $$
The numerator is the martingale residual process evaluated at the event or censoring time. The estimated expected number of events for individual $i$ with time-fixed covariate values $X_i$ and corresponding coefficient estimates $\hat \beta$ is a non-decreasing function of time (see Chapter 4 of Therneau and Grambsch):
$$\hat{E_i}(t) = \int_0^t Y_i(s) e^{X_i \hat\beta} d \hat{\Lambda}_0(s),$$
where $Y_i(s)$ is 1 while the individual is at risk (0 otherwise) and $\hat{\Lambda}_0(s)$ is the estimated baseline cumulative hazard over time.
$\hat{E}_i(t)$ is thus related to the estimated baseline cumulative hazard, which has a step increase at each event time over the entire data sample. That has two implications.
First, $\hat{E}_i(t)$ increases by $e^{X_i \hat\beta} d \hat{\Lambda}_0(s)$ at each event time $s$ while at risk. For any set of individuals with the same set of covariate values, the deviance residuals thus necessarily decrease with increasing times to events (or to right censoring).
Second, as the baseline cumulative hazard estimate extends out until the last event in the entire data set, it's quite possible for $\hat{E}_i(t)$ to exceed 1 at a late enough time, if $e^{X_i \hat\beta}$ is large enough.
Put another way, a survival model is of a distribution of event times as a function of covariate values, typically a pretty wide distribution. Some individuals with the same covariate values are going to have event times earlier than "expected" and others will have later event times. That (plus the fact that the deviance residual can't exceed 0 if the event time is censored) is all that your plot of deviance residuals against observation time demonstrates.
Plots of martingale residuals against values of a covariate can be useful to evaluate a model's functional form for a continuous covariate; see [this page](https://stats.stackexchange.com/a/362553/28500). What can work more directly than starting from martingale residuals is to use a flexible regression spline to fit a continuous predictor like `year_of_manufacture` and let the data tell you an approximate functional form.
Finally, the concordance doesn't evaluate "average values" in the way that you seem to think. It's the fraction of pairs of cases for which the observed and model-predicted event orders agree (among case pairs that can be evaluated). It's a measure of discrimination among cases, not calibration. In your case, only 0.56 of pairs had agreement between observed and predicted event orders, close to the 0.5 you would expect by chance.
| null | CC BY-SA 4.0 | null | 2023-03-18T15:24:52.900 | 2023-03-21T16:25:27.460 | 2023-03-21T16:25:27.460 | 28500 | 28500 | null |
609898 | 2 | null | 605600 | 1 | null | Possibly you might figure out some distribution and apply a censored estimation method.
But, in this case, a single regressor (the groups), it might be easier to perform a [permutation test](https://en.m.wikipedia.org/wiki/Permutation_test).
| null | CC BY-SA 4.0 | null | 2023-03-18T15:43:13.873 | 2023-03-18T15:43:13.873 | null | null | 164061 | null |
609900 | 2 | null | 609829 | 4 | null | It is absolutely not clear, obvious, or agreed-upon what "[intelligence](https://en.wikipedia.org/wiki/Intelligence)" is in humans (or, for that matter, in nonhuman animals). What is clear is that people's performance on a variety of tasks involving mental processes (a very general term) is highly correlated. If you speak five languages, then chances are that your performance on standard [matrix tests](https://en.wikipedia.org/wiki/Raven%27s_Progressive_Matrices) is also above average.
The probably currently best accepted "theory of intelligence" essentially boils down to performing a PCA on the result of multiple tests of mental processes and calling the first principal component [the g factor](https://en.wikipedia.org/wiki/G_factor_in_non-humans). After you have done this, you start arguing with other psychologists about what precisely you are measuring, whether there truly is an underlying trait "intelligence", and what later principal components are measuring.
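As a toy sketch of that construction (entirely simulated data): generate scores on several tests that all load on one latent trait, run PCA, and the first principal component soaks up most of the shared variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 2000, 6

# Simulated scores on 6 mental tests, all loading on one latent trait "g".
g = rng.normal(size=(n_people, 1))
scores = g + 0.5 * rng.normal(size=(n_people, n_tests))

# PCA = eigendecomposition of the correlation matrix of the test scores.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]        # eigenvalues, largest first
share_first_pc = eigvals[0] / eigvals.sum()     # variance explained by the "g factor"

print(round(share_first_pc, 2))
```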
This construction gets more interesting and relevant as artificial intelligence (or, [cough, not](https://stats.meta.stackexchange.com/q/6422/1352)) gets better and better.
| null | CC BY-SA 4.0 | null | 2023-03-18T15:50:46.280 | 2023-03-18T15:50:46.280 | null | null | 1352 | null |
609901 | 2 | null | 609653 | 0 | null | It sounds like you need to engage in some exploratory data analysis.
The choice of model will depend on your understanding of the factors that contribute to dropout. If the rate of dropout isn't a function of program length per se (except that you can't drop out of a program after it's ended), you might be able to put all of the data together in a single model, even though some interpretations might seem odd (precisely because you can't drop out of a program after it's ended). That might work well if most dropouts are early in time regardless of program length. An alternative would be to express the time-to-dropout as a fraction of the program length. Including the program length as a covariate would be wise in either case.
This might also be handled with a cure model, where you explicitly model the failure to have an event along with the time-to-event for those who do.
| null | CC BY-SA 4.0 | null | 2023-03-18T15:52:18.587 | 2023-03-18T15:52:18.587 | null | null | 28500 | null |
609902 | 1 | null | null | 0 | 20 | I am a meteorologist and I regularly hear of POD (we call it probability of detection) $ POD = \frac{TP}{TP+FN} $ as well as FAR (False Alarm Rate) $ FAR = \frac{FP}{FP+TN}$. But I recently had a colleague ask what would happen if we took the ratio of these two properties when we get these for our weather forecasts.
After some Googling, I found out that this ratio is known as the [positive likelihood ratio](https://en.wikipedia.org/wiki/Receiver_operating_characteristic), but the problem I am having with conceptualizing this property is that most of the available information is about medicine or diagnostic testing, not so much about prediction/forecasting.
So I am just wondering if someone could help me understand this property a bit more by describing, or providing resources about, how it gives insight into the skill of a forecast (whether for weather, machine learning, etc.).
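For concreteness, here is how I currently compute the ratio on a hypothetical 2x2 verification table (all counts are made up): PLR = POD / FAR, i.e., how many times more likely an alarm is when the event actually occurs than when it does not.

```python
# Hypothetical 2x2 verification table for "rain tomorrow" forecasts.
TP, FN = 80, 20   # event occurred: forecast yes / forecast no
FP, TN = 30, 170  # event did not occur: forecast yes / forecast no

pod = TP / (TP + FN)        # probability of detection (hit rate)
far = FP / (FP + TN)        # false alarm rate (probability of false detection)
plr = pod / far             # positive likelihood ratio

print(pod, far, plr)        # 0.8, 0.15, ~5.33
```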
| What does positive likelihood ratio mean outside of medicine? | CC BY-SA 4.0 | null | 2023-03-18T15:55:19.700 | 2023-03-18T15:55:19.700 | null | null | 371103 | [
"probability",
"likelihood-ratio",
"false-positive-rate",
"true-positive-rate"
] |
609903 | 2 | null | 609895 | 1 | null | Yes, the linear mixed model accounts for the dependence between two measurements taken from the same patient.
Say patient $i$ has two measurements taken at material/site B. Let's denote these by $x_{i,B_1}$ and $x_{i,B_2}$, respectively. Under the LMM `X ~ material + (1|ID)`, the covariance between these two measurements is:
$$
\begin{aligned}
\operatorname{Cov}\left\{x_{i,B_1}, x_{i,B_2}\right\}
&= \operatorname{Cov}\left\{\mu_B + \eta_i + \epsilon_{i,B_1}, \mu_B + \eta_i + \epsilon_{i,B_2} \right\}
= \operatorname{Cov}\left\{\eta_i + \epsilon_{i,B_1}, \eta_i + \epsilon_{i,B_2} \right\} \\
&= \sigma^2_\eta
\end{aligned}
$$
where $\mu_B$ is the fixed effect of B, $\eta_i$ is the random effect of patient $i$ and $\epsilon_{i,B_j}$ are measurement errors.
The random effects $\eta_i$ are similar to measurement errors in the sense that the $\eta_i$s are iid $\operatorname{Normal}(0, \sigma^2_\eta)$ while the errors are iid $\operatorname{Normal}(0, \sigma^2)$. Since all measurements of patient $i$ share the random component $\eta_i$, they are correlated.
Compare this with the covariance between measurements of two different patients $i$ and $j$ at site B:
$$
\begin{aligned}
\operatorname{Cov}\left\{x_{i,B_1}, x_{j,B_1}\right\}
&= \operatorname{Cov}\left\{\mu_B + \eta_i + \epsilon_{i,B_1}, \mu_B + \eta_j + \epsilon_{j,B_1} \right\}
= \operatorname{Cov}\left\{\eta_i + \epsilon_{i,B_1}, \eta_j + \epsilon_{j,B_1} \right\} \\
&= 0
\end{aligned}
$$
because the random effects $\eta_i$ and $\eta_j$ are independent, the errors are independent and the $\eta$s and $\epsilon$s are independent between each other.
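A quick simulation sketch (with made-up variance components) illustrates this: the empirical covariance of two same-patient measurements is close to $\sigma^2_\eta$, while that of measurements from different patients is near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients = 200_000
sigma_eta, sigma = 2.0, 1.0   # random-effect SD and error SD (invented values)

eta = rng.normal(0, sigma_eta, size=n_patients)      # patient random effects
x1 = eta + rng.normal(0, sigma, size=n_patients)     # measurement 1 at site B
x2 = eta + rng.normal(0, sigma, size=n_patients)     # measurement 2 at site B

within = np.cov(x1, x2)[0, 1]                        # same patient: ~ sigma_eta**2 = 4
between = np.cov(x1, np.roll(x2, 1))[0, 1]           # different patients: ~ 0

print(round(within, 2), round(between, 2))
```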
PS: An alternative to the linear mixed model (LMM) is generalized least squares (GLS). With GLS you specify the variance/covariance structure explicitly. For example, you might want to let each material/site have a different variance while multiple measurements taken from the same patients are correlated as above. In R you can fit GLS with [nlme::gls](https://www.rdocumentation.org/packages/nlme/versions/3.1-162/topics/gls).
| null | CC BY-SA 4.0 | null | 2023-03-18T16:50:17.817 | 2023-03-18T17:04:09.880 | 2023-03-18T17:04:09.880 | 237901 | 237901 | null |
609904 | 2 | null | 137190 | 0 | null | The mode of the posterior $\arg \max p(\theta|x)$ does not always coincide with the mean of the posterior $\int_{\Theta} \theta p(\theta|x) \mathrm{d}\theta$. It does in some cases, like a Gaussian posterior, but generally it does not. Depending on your field, the latter is called expected a-posteriori (EAP), or simply posterior expectation.
| null | CC BY-SA 4.0 | null | 2023-03-18T17:07:31.073 | 2023-03-18T17:07:31.073 | null | null | 71679 | null |
609905 | 1 | null | null | 0 | 17 | In a random forest, each bootstrap sample contains on average 63.2% of the distinct training observations. So around 1/3 of the training set is not going to be in a given bootstrap sample; that 1/3 is disregarded for now.
The observations left out of a bootstrap sample are its out-of-bag observations: they are not used when building the decision tree grown on that sample. Each tree has its own out-of-bag observations.
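To check the 63.2% figure behind my understanding, here is a small simulation sketch (the sizes are arbitrary): drawing $n$ indices with replacement leaves each observation out with probability $(1-1/n)^n \approx 36.8\%$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000         # training-set size (arbitrary)
n_trees = 500    # number of bootstrap samples, one per tree

in_bag_fracs = []
for _ in range(n_trees):
    sample = rng.integers(0, n, size=n)            # one bootstrap sample of indices
    in_bag_fracs.append(len(np.unique(sample)) / n)

print(round(float(np.mean(in_bag_fracs)), 3))      # ~0.632; the rest is out-of-bag
```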
Can you please confirm this, or correct me if I am wrong? Thank you.
| random forest, sample size, and out-of-bag observation | CC BY-SA 4.0 | null | 2023-03-18T17:10:39.697 | 2023-03-18T17:10:39.697 | null | null | 382257 | [
"random-forest",
"bootstrap"
] |
609906 | 2 | null | 606693 | 0 | null | This is reasonably straightforward, if you can assume that the probability of a chick hatching at any given egg size is independent of the chicken that laid the egg, the sizes of other eggs laid by the chicken, etc. In that case, you have a single probability $p_j$ for the probability of a live chick hatch from an egg in size group $j$.
For egg-laying chicken $i$, the estimated number of live-hatched chicks $C_i$ would then be the sum of the products of those probabilities times its number of eggs $N_{ij}$ in each size group $j$:
$$C_i = \sum_{j=11}^{j=22} p_j N_{ij}.$$
That suggests a simple linear multiple regression of total chick counts as the outcome versus its number of eggs in each size group (12 predictors). Your first data display is already set up for that. The coefficient for each size group $j$ would be the estimated probability of a live hatch at that size.
This might ideally be handled in a way that respected the count-based outcome values, perhaps a Poisson regression but with an identity link instead of the default log link. With this large a dataset and about an 80:1 ratio of observations to estimated coefficients, however, a simple linear regression might be good enough.
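As a minimal simulated sketch of that regression (hatch probabilities and egg counts are made up): regressing total chick counts on the per-size-group egg counts with no intercept recovers the per-group hatch probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hens, n_groups = 2000, 12
p_true = np.linspace(0.3, 0.9, n_groups)   # invented hatch probability per size group

# N[i, j]: eggs laid by hen i in size group j; C[i]: her total live-hatched chicks.
N = rng.integers(0, 10, size=(n_hens, n_groups))
C = np.array([rng.binomial(N[i], p_true).sum() for i in range(n_hens)])

# Linear least squares with no intercept: C ≈ N @ p.
p_hat, *_ = np.linalg.lstsq(N, C, rcond=None)
print(np.round(p_hat, 2))
```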
| null | CC BY-SA 4.0 | null | 2023-03-18T17:21:09.480 | 2023-03-18T17:21:09.480 | null | null | 28500 | null |
609908 | 2 | null | 609537 | 3 | null | As @Dave indicated in the comments above, pROC attempts to detect if the positive group displays higher or lower values of the predictor than the control group. This can be controlled with the [direction](https://www.rdocumentation.org/packages/pROC//topics/roc) argument.
Caret doesn't do such a detection, and will assume that negative observations have lower values.
It turns out, when you reverse the direction of comparisons, you replace SE and SP with NPV and PPV, respectively.
Another potentially confounding factor is: which group is the negative, and which one is the positive? The two packages have a slightly different logic to figure this out, which can lead to similar discrepancies. In pROC this can be controlled with the `levels` argument, and in caret with [positive](https://www.rdocumentation.org/packages/caret/versions/6.0-93/topics/confusionMatrix).
| null | CC BY-SA 4.0 | null | 2023-03-18T17:34:14.733 | 2023-03-18T17:34:14.733 | null | null | 36682 | null |
609910 | 1 | 609921 | null | 0 | 37 | I am following a Coursera course, Introduction to Statistics, and I got lost in the explanation of linear regression. I know it is a simple subject, but I do not get how the instructor applies the linear regression formula to solve an example case.
According to the slides the instructor says that the regression line is obtained by the following formula:
$$\hat{y}=a+bx $$ given that:
$b=r\frac{s_{y}}{s_{x}}$ and $a=\bar{y}-b\bar{x}$
So I could plug the values of a and b into the equation and obtain the following:
$$\hat{y}=\bar{y}-b\bar{x}+r\frac{s_{y}}{s_{x}}x$$
The example the instructor mentions is the following:
The case is to predict the final exam score of a student who scored 41 in the midterm, the data available is: $\overline{midterm}=49.5,\overline{final}=69.1,s_{mid}=10.2,s_{final}=11.8,r=0.67$
So the instructor then says that the score of 41 is 8.5 points below $\overline{midterm}$; we can call this $diffmidterm$. Then he calculates the following: $$\frac{diffmidterm}{s_{mid}}=\frac{8.5}{10.2}=0.83$$
I suppose that the last term could be considered as the standard deviation of that score, am I right?
Then he plugs all the values into an equation that is not explicitly given, but which I suppose is derived from the equation of the regression line; so far I think the equation he uses is:
$$\hat{y}=\overline{final}-r*s_{mid}*s_{final}$$
$$=69.1-0.67*0.83*11.8=62.5$$
The minus sign is because the value of 8.5 is below the mean and the final answer would be 62.5 as our prediction. However, I do not see how this equation $\hat{y}=\overline{final}-r*s_{mid}*s_{final}$ relates to $\hat{y}=\bar{y}-b\bar{x}+r\frac{s_{y}}{s_{x}}x$.
What detail am I missing in this solution?
| How was applied the formula of linear regression in this case? | CC BY-SA 4.0 | null | 2023-03-18T17:42:32.390 | 2023-03-18T20:04:16.100 | null | null | 69395 | [
"regression"
] |
609912 | 2 | null | 601849 | 1 | null | From the description in a comment:
>
between each timepoints (ex: Ja2015 - Jul2016) I have ~200 individuals experiencing a change in area (growth rates), new individuals showing up in the population (recruitment events; binomial), and individuals found in previous timepoints have disappeared and are no longer present (mortality; binomial)
it seems that you have data on individuals all evaluated at the same points in time while they are alive. That's a type of panel data. (If each individual can have its own observation-time intervals then you might need more sophisticated methods to handle the interval censoring.)
In terms of survival per se, you have a setup for a fairly standard discrete-time survival model. You format the data such that you have, for each time interval, one row of data for each individual who was at risk of death at the start of the interval. That long data format simplifies analysis.
Each row can contain the calendar start date of the interval to handle chronological time as a predictor, the duration of the interval to handle the different durations of intervals, and values during the interval for covariates like "heat wave present" (or maybe even better, some continuous measure of heat stress). I don't know enough about corals to say for certain, but I suspect that you might need to include an individual's age at the start of each time period as a covariate, or maybe its size as a proxy for that.
The event/outcome binary marker for each individual and time interval is set to 1 if the individual died during the time interval, to 0 otherwise. That 0 outcome value also is used for an individual lost to follow up for reasons other than death during an observation period ("right censoring" in survival analysis). This data format also allows you to include an individual newly born or otherwise added to the study, for time intervals starting with the first time the individual is observed.
Then you do a binomial regression over the entire data set with appropriately modeled covariates (calendar date, duration of observation interval, age, size, environmental variables...). A logistic regression is one way to do this, but a [complementary log-log link](https://stats.stackexchange.com/q/429266/28500) instead of the logit link used for logistic regression is more closely related to proportional-hazards survival models.
You might model births with a Poisson or other regression model for counts; with the default log link, you would include the log-duration of each time interval as an [offset](https://stats.stackexchange.com/q/175349/28500) to account for the differences in duration of observation intervals.
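As a minimal sketch of the long data format described above (toy data with hypothetical corals and intervals): one row per individual per interval at risk, with interval covariates and a 0/1 death indicator that then feeds the binomial regression.

```python
import pandas as pd

# Toy panel: each row = one individual at risk during one observation interval.
rows = [
    # id, interval_start, duration_mo, heat_wave, age_yr, died
    ("c1", "2015-01", 18, 0, 3.00, 0),
    ("c1", "2016-07", 15, 1, 4.50, 1),   # c1 died during the 2016-07 interval
    ("c2", "2015-01", 18, 0, 1.00, 0),
    ("c2", "2016-07", 15, 1, 2.50, 0),
    ("c2", "2017-10", 12, 0, 3.75, 0),   # c2 still alive at last follow-up
    ("c3", "2016-07", 15, 1, 0.50, 0),   # c3 recruited in 2016 (late entry)
    ("c3", "2017-10", 12, 0, 1.75, 1),
]
long = pd.DataFrame(rows, columns=[
    "id", "interval_start", "duration_mo", "heat_wave", "age_yr", "died"])

# `died` is the binary outcome for a binomial GLM (logit or complementary log-log
# link), with duration, calendar time, age and environment as covariates.
print(long)
```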
| null | CC BY-SA 4.0 | null | 2023-03-18T18:11:50.283 | 2023-03-18T20:31:28.510 | 2023-03-18T20:31:28.510 | 28500 | 28500 | null |
609914 | 1 | null | null | 1 | 51 | My study involves a dependent variable measuring reading times (minimum value = 0.3) and two categorical variables (y = "quick" or "slow"; t = "cute" or "ugly") that are used to define conditions. The design is nested.
I have determined that my dependent variable fits an inverse gamma distribution very well. However, I encountered issues with the homogeneity-of-variances assumption and the normality of residuals when running models using `glmer`, and the model predictions were also a bad fit. As a result, I decided to use a `brm` model instead.
[](https://i.stack.imgur.com/e7Dwn.png)
During the process, I received a warning message that read `gamma_lpdf: Inverse scale parameter[1] is -xxx, but must be >0!` and `Initialization between (-2, 2) failed after 100 attempts.` I attempted to address the issue by setting `init=0.1`, but it did not work. Although using the `log` link instead of the `inverse` link resolved the issue, I want to use the inverse link. Therefore, I am wondering how I can instruct the model to only use positive values during initialization. The model prediction looks good, but I don't want to use it if it gives these warnings.
```
install.packages("VGAM")
library(VGAM)
set.seed(123)
df <- data.frame(x = rinvgauss(100, mean = 1.07, shape = 2.28, dispersion = 0.11), # dependent variable: reading time
id = rep(1:100, each = 60), # participant's id
speed_frame = rep_len(rep(c(-1, 1), each = 60), 100), # between-subjects factor
beauty = rep_len(c(rep(-1, 30), rep(1, 30)), 100), # within-subjects factor
stimuli_id =rep(1:60, times = 100))
for (i in 2:5) {
df[, i] <- scale(df[, i], center = TRUE, scale = FALSE)
}
brm(x ~ speed_frame + beauty + speed_frame:beauty + (1 + speed_frame | id) + (1 + speed_frame + beauty | stimuli_id), data=df, family=Gamma(link = "inverse"), backend = "cmdstanr")
```
| bayesian problem using inverse gamma: negative initial values | CC BY-SA 4.0 | null | 2023-03-18T18:31:09.590 | 2023-03-18T22:04:32.717 | 2023-03-18T22:04:32.717 | 372242 | 372242 | [
"bayesian",
"gamma-distribution",
"inverse-gamma-distribution",
"brms"
] |
609916 | 1 | null | null | 1 | 31 | I'm presently evaluating the position of individuals of 3 populations of an animal (according to their sex) as a function of the environmental factors (12) present in their habitat. To detect which environmental factors have the most impact, I'm using PCA in R.
I have standardized and centered my data and chosen the PCs that have eigenvalues > 1. I obtained 4 "significant" PCs.
My next step was to define my PCs by determining the number of factors they represent. To determine that, I evaluated the contribution (%) of each factor on a scree plot, with one scree plot for each PC, so I have 4 of them. The factors retained have a contribution greater than 1/12 (i.e., 1 divided by the total number of factors).
When evaluating these PCs, I notice that the same variables are retained for more than one PC; for example, temperature is retained in both PC 2 and PC 3.
As PCA is supposed to put together features/variables and create new features that are uncorrelated, I am wondering if the fact that temperature is repeated twice causes a problem?
From reading posts on CV and other websites, I found different possibilities:
- Leave it as it is and mention that more than one variable is repeating
- Do Factor analysis or clustering
- Consider the highest percentage of contribution
But I'm not sure what to do.
Any guidance would be helpful
Thank you!
| duplicated variables for different components | CC BY-SA 4.0 | null | 2023-03-18T19:18:51.973 | 2023-06-03T08:02:18.127 | 2023-06-03T08:02:18.127 | 3277 | 383536 | [
"r",
"pca"
] |
609919 | 1 | null | null | 1 | 31 | This is an exercise from the probability book by Ross. This is not homework.
[](https://i.stack.imgur.com/gj5EF.jpg)
Using conditional probability and the distribution of sum of two geometric random variables, the probability comes out to be $\frac{1}{(n-1)}$.
But I am not able to understand how can the same probability be deduced from just the hint without all the computations.
I am not sure, but it seems to have to do with each of the previous $(n-1)$ tosses being equally likely to be the time of the first head. Is that correct?
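A quick simulation sketch (with an arbitrary coin bias) is consistent with the $\frac{1}{n-1}$ answer: conditioning on the second head occurring at toss $n$, the first head's time looks uniform over $\{1,\dots,n-1\}$.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 0.3, 6                 # arbitrary coin bias; condition on 2nd head at toss n
counts = np.zeros(n - 1)
trials = 0

for _ in range(100_000):
    flips = rng.random(50) < p                # a run of 50 biased coin flips
    heads = np.flatnonzero(flips) + 1         # 1-based toss indices of the heads
    if len(heads) >= 2 and heads[1] == n:     # keep runs whose 2nd head is at toss n
        counts[heads[0] - 1] += 1
        trials += 1

print(np.round(counts / trials, 3))           # each entry should be near 1/(n-1) = 0.2
```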
| Probability mass function of time of first head | CC BY-SA 4.0 | null | 2023-03-18T19:45:58.357 | 2023-03-19T07:28:39.583 | 2023-03-19T07:28:39.583 | 383537 | 383537 | [
"probability",
"self-study",
"conditional-probability",
"geometric-distribution"
] |
609920 | 2 | null | 609885 | 1 | null | To assess the distributional assumption of a GARCH model, you can look at the probability integral transform (PIT) of the standardized residuals. It can be obtained by `pit(fit)` where `fit` is a fitted model object from `rugarch`. If the model is a good approximation of the data, the PIT should be roughly Uniform[0,1]. If you want to do a QQ plot, you can use the `qnorm` function to go from uniform to standard normal and then use the `qqnorm` function for the QQ plot. See p. 41-43 of the `rugarch` [vignette](https://cran.r-project.org/web/packages/rugarch/vignettes/Introduction_to_the_rugarch_package.pdf) for a more detailed discussion.
```
library(rugarch)
data(dmbp) # example data
spec=ugarchspec()
fit=ugarchfit(data=dmbp[,1], spec=spec)
u=pit(fit) # avoid reusing the function name `pit` for the result
hist(u) # this should be approximately Uniform[0,1]
z=qnorm(u)
hist(z) # this should be approximately standard normal
qqnorm(z); abline(a=0, b=1) # this is your QQ plot
```
---
As to your side question, a multivariate normal distribution does not make sense if the marginals are not normal. A possible fix for that is to use a copula-GARCH model (available in `rmgarch`). It does not yield DCC-type dynamics, though.
On the other hand, I guess it should be possible to have different distributional assumptions for the different marginals in a DCC model in `rmgarch` (I have by now forgotten how these things are implemented there). After all, the DCC structure is built on top of the univariate GARCH models; it takes them as the starting point. The estimation might also be carried out in two steps: first the marginals and then the DCC part. So the "multivariate normal assumption" might be used for fitting the DCC structure, and it might act as a normal copula without affecting the marginals. But I am not sure about this.
| null | CC BY-SA 4.0 | null | 2023-03-18T20:03:26.280 | 2023-03-18T20:11:25.873 | 2023-03-18T20:11:25.873 | 53690 | 53690 | null |
609921 | 2 | null | 609910 | 2 | null | The equation $\widehat{y}=\overline{\text{final}}-rs_\mathrm{mid}s_\mathrm{final}$ is wrong. The correct equation should be
\begin{eqnarray*}
\widehat{y}&=&\overline{\mathrm{final}}-r\frac{\overline{\mathrm{mid}}-x_\mathrm{new}}{s_\mathrm{mid}}s_\mathrm{final}
\end{eqnarray*}
where $x_\mathrm{new}$ is the new midterm score of the student. This equation is equivalent to $\widehat{y}=a+bx_\mathrm{new}$ where $b=r\frac{s_\mathrm{final}}{s_\mathrm{mid}}$ and $a=\overline{\text{final}}-b\times\overline{\text{mid}}$. Consider the above equation,
\begin{eqnarray*}
\widehat{y}&=&\overline{\mathrm{final}}-r\frac{\overline{\mathrm{mid}}-x_\mathrm{new}}{s_\mathrm{mid}}s_\mathrm{final}\\
&=&\overline{\text{final}}-r\frac{s_\mathrm{final}}{s_\mathrm{mid}}(\overline{\text{mid}}-x_\mathrm{new})\\
&=& \underbrace{\overline{\text{final}}-r\frac{s_\mathrm{final}}{s_\mathrm{mid}}\overline{\text{mid}}}_{a}+\underbrace{r\frac{s_\mathrm{final}}{s_\mathrm{mid}}}_{b}x_\mathrm{new}
\end{eqnarray*}
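As a quick numerical check of the equivalence, here is a small sketch; the summary statistics are made up purely for illustration.

```python
# Hypothetical summary statistics (illustrative values only)
mid_mean, final_mean = 70.0, 75.0   # means of midterm and final scores
s_mid, s_final = 10.0, 8.0          # standard deviations
r = 0.6                             # correlation between midterm and final
x_new = 85.0                        # a new student's midterm score

# Slope and intercept of the least-squares line
b = r * s_final / s_mid
a = final_mean - b * mid_mean

# The prediction written in the two equivalent forms above
y_hat_line = a + b * x_new
y_hat_shift = final_mean - r * (mid_mean - x_new) / s_mid * s_final

print(y_hat_line, y_hat_shift)  # both give the same prediction
```

Both expressions give 82.2 here, confirming they are the same line.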
| null | CC BY-SA 4.0 | null | 2023-03-18T20:04:16.100 | 2023-03-18T20:04:16.100 | null | null | 383333 | null |
609924 | 1 | null | null | 2 | 80 | I am fitting quite a few binomial/multinomial models for my thesis. After the emmeans statement, I used the contrast statement to compare the emmeans of the different groups. But I have started to think about a few things:
- My data is sampled such that the groups don't have equal sizes
- Therefore the SE is not equal for each estimate. If my reasoning is correct, the variance between the two groups is not the same. I haven't found how the contrast test works, but if it actually just performs a t-test, then the assumptions for a t-test are violated. How can I adjust for this?
Or am I making things more difficult than they actually are?
Example:
I want to compare the distribution of 4 age classes between two methods, HD and DD; HD contains 322 individuals and DD 1711 individuals. The null hypothesis is that there is no difference in age distribution between the two methods. The ANOVA test on the model was significant. Therefore I want to know which age classes differ the most between the two methods.
The piece of code (Age class consists of the 0, 1, 2, and 3+ age classes; Method consists of the HD and DD groups):
```
a <- multinom(`Age class` ~ `Method` , data = goodyears)
emmeans = emmeans(a,~ `Age class` |`Method`, mode = "prob")
z =contrast(emmeans, "pairwise", simple = "each", combine = TRUE, adjust = "mvt")
```
For example, the emmeans for the 1y age class in the HD group is 0.32 (95%CL [0.29,0.34]) and the emmeans for the 1y age class in the DD group is 0.24 (95%CL [0.18,0.30]). The contrast test is insignificant (logical as the intervals overlap). Is there a way to deal with the rather large difference in group size? Statistically the smaller group size caused a larger confidence interval which made the effect less significant. Or do I better accept the result as it is, and state in a discussion that the insignificance of the effect might be due to the limited sample size of one group?
| Comparing emmeans values multinom model | CC BY-SA 4.0 | null | 2023-03-18T21:53:54.240 | 2023-03-19T18:14:11.933 | 2023-03-19T15:42:51.833 | 382882 | 382882 | [
"r",
"multinomial-distribution",
"post-hoc",
"contrasts",
"lsmeans"
] |
609925 | 1 | null | null | -1 | 61 | From my readings (Wikipedia, books and YouTube videos), I gather that there are means of observations (the frequency-weighted values of the random variables), and also means of different samples ("means of means").
I have read all the basic math and solved some problems. However, the formula for the standard error, defined as
>
The standard deviation for the normal distribution of the samples' means
escapes my understanding.
The formula is
$$ s_m = s_o / N ^ {1/2} $$
where $s_m$ is the mean of samples' means, and $s_o$ is the mean of observations. Where does this come from exactly?
| Standard Error of a Sample Means | CC BY-SA 4.0 | null | 2023-03-18T21:54:24.013 | 2023-03-19T21:07:41.920 | null | null | 379919 | [
"normal-distribution"
] |
609926 | 1 | null | null | 2 | 110 | I have two vectors, A and B which I want to compare using the MWU test. Both vectors have the size of 995 with A having a mean and standard deviation of 10.50050 and 2.82287, respectively. The mean and std for B are 10.19397 and 2.87137. The histogram and KDE for the two vectors are as follows.
[](https://i.stack.imgur.com/KJTK8.png)
All of this would tell me that the distributions of the vectors are very similar; however, MWU (as implemented in Python SciPy) returns a p-value of 0.01764, which I deem relatively low.
Could someone please explain what key concept I'm missing?
Thank you.
| Mann-Whitney U test returning small p value | CC BY-SA 4.0 | null | 2023-03-18T22:22:39.013 | 2023-03-19T16:51:13.993 | null | null | 383545 | [
"hypothesis-testing",
"python",
"wilcoxon-mann-whitney-test",
"scipy"
] |
609927 | 2 | null | 609925 | 1 | null | Using the properties (1) $Var(aX) = a^2 Var(X)$, and (2) $Var(X+Y)=Var(X)+Var(Y)$ for independent $X$ and $Y$, it is easy to demonstrate that (writing $\bar{X}$ for the sample mean)
$$Var(\bar{X})=Var\left(\frac{1}{n}\sum_{i=1}^nX_i\right)=\frac{1}{n^2}\sum_{i=1}^nVar(X_i)$$
Since the $X_i$ are samples from the same distribution, $Var(X_i)=\sigma^2$, so
$$Var(\bar{X})=\frac{1}{n^2}\sum_{i=1}^n\sigma^2=\frac{\sigma^2}{n}$$
Translating into finite sample estimates:
$$s_m^2=\frac{s_o^2}{n}$$
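A quick Monte Carlo check of this identity (an illustrative sketch; the values of $\sigma$ and $n$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n, reps = 2.0, 25, 20000  # illustrative values

# Draw many samples of size n and record each sample mean
means = rng.normal(0.0, sigma, size=(reps, n)).mean(axis=1)

empirical_se = means.std()            # spread of the sample means
theoretical_se = sigma / np.sqrt(n)   # sigma / sqrt(n)
print(empirical_se, theoretical_se)   # both close to 0.4
```

The empirical spread of the sample means matches $\sigma/\sqrt{n}$ closely.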
| null | CC BY-SA 4.0 | null | 2023-03-18T22:34:55.263 | 2023-03-19T21:07:41.920 | 2023-03-19T21:07:41.920 | 60613 | 60613 | null |
609928 | 2 | null | 609925 | 2 | null |
- The terms 'mean of observations' and 'mean of different samples' are a bit confusing. Generally, both observations and samples also refer to the realisation of a random variable.
- We rarely use $s_{.}$ to denote the mean of something. For the formula you mentioned, $s_m$ is the standard deviation of the sample mean and $s_o$ is the standard deviation of the Gaussian random variable (not necessary to be Gaussian).
- I think you are confused by the term 'standard error'. Standard error of the sample mean is just the standard deviation of the sample mean. When we measure the deviation of an estimator, we usually replace 'standard deviation' with 'standard error'.
Back to your question, where does the formula come from exactly? Suppose $X_1,...,X_N$ follow a probability distribution $F(x)$ with common variance $\sigma^2$. The standard error of $\overline{X}$ is
\begin{eqnarray*}
\newcommand{\ind}{\perp\!\!\!\!\perp}
\mathrm{SE}(\overline{X}) &=& \sqrt{\mathbb{V}(\overline{X})}\\
&=& \sqrt{\mathbb{V}(\frac{\sum_{i=1}^{N}X_i}{N})}\\
&=& \sqrt{\frac{1}{N^2}\mathbb{V}(\sum_{i=1}^{N}X_i)}\\
&\overset{X_i\,\ind\,X_j}{=}& \sqrt{\frac{1}{N^2}\sum_{i=1}^{N}\mathbb{V}(X_i)}\\
&=&\sqrt{\frac{1}{N^2}N\sigma^2}\\
&=& \frac{\sigma}{\sqrt{N}}
\end{eqnarray*}
| null | CC BY-SA 4.0 | null | 2023-03-18T23:05:56.660 | 2023-03-18T23:05:56.660 | null | null | 383333 | null |
609929 | 1 | null | null | 0 | 25 | I'd like to ask about middle of the discussion in [this answer](https://stats.stackexchange.com/a/210119/364080). Writing things out in reverse, $\mathrm{Prob}(F_X(X) \leq y) = \mathrm{Prob}(X \leq \mathrm{inf}\{x: F_X(x) \geq y\})$. Why is $\mathrm{inf}\{x: F_X(x) \geq F_X(X)\} = X$ (the left-hand sides of the 'less than or equal to' signs)? Wouldn't there still be an issue where there exists $x_0$ greater than the infimum but $F_X$ is the same for both?
([This answer](https://stats.stackexchange.com/a/435983/364080) composes the CDF with its inverse - the "These definitions imply..." bit. But I think that's easier than composing the CDF's inverse with the CDF.)
| The probability integral transform when the CDF is non-decreasing | CC BY-SA 4.0 | null | 2023-03-18T23:49:57.990 | 2023-03-18T23:49:57.990 | null | null | 364080 | [
"probability",
"distributions",
"quantiles",
"cumulative-distribution-function"
] |
609930 | 2 | null | 345765 | 0 | null | This is "multiple instance learning"; the [wikipedia page](https://en.m.wikipedia.org/wiki/Multiple_instance_learning) has a good introduction.
Your examples sound like they don't conform to the [standard assumption](https://en.m.wikipedia.org/wiki/Multiple_instance_learning#Assumptions), so you'll need some "metadata-based" algorithm. At the simplest, you engineer metadata features from the bags and fit a traditional model to the result. More complex algorithms try to automatically find good metadata definitions; neural networks seem like a natural candidate for that, but I'm not familiar enough to give approaches. Still, armed with the search phrase "multi-instance", you can find e.g. [https://arxiv.org/abs/2202.11132](https://arxiv.org/abs/2202.11132)
| null | CC BY-SA 4.0 | null | 2023-03-18T23:51:47.740 | 2023-03-18T23:51:47.740 | null | null | 232706 | null |
609932 | 2 | null | 72893 | 9 | null | Rob Hyndman's answer isn't technically correct - [Granger and Morris (1976)](http://www.jstor.org/stable/2345178) didn't show exactly that. In fact, for $AR(p)$ process $X_t$ and $AR(q)$ process $Y_t$,
\begin{align}
\phi_X(B)X_t &= \epsilon_t\\
\phi_Y(B)Y_t &= \eta_t\\
\end{align}
where $B$ is the backshift operator, we have
\begin{align}
Z_t &= X_t + Y_t\\
&= \phi_X^{-1}(B)\epsilon_t + \phi_Y^{-1}(B)\eta_t\\
\phi_X(B)\phi_Y(B)Z_t &= \phi_Y(B)\epsilon_t + \phi_X(B)\eta_t
\end{align}
The left hand side polynomial is of order $p+q$ and the right hand side has autocovariance zero at lags above $\max(p,q)$, thus in general $Z_t \sim ARMA(p+q,\max(p,q))$ - however as Granger and Morris (1976) point out this is not necessarily the case. In fact, strictly speaking we have
\begin{align}
AR(p)+AR(q) = ARMA(x,y),\text{ }\text{ }\text{ }x\leq p+q, y\leq\max(p,q)
\end{align}
For example (p. 249), in the case of repeated AR polynomial roots,
\begin{align}
(1-\alpha_1B)X_t &= \epsilon_t,\text{ }\text{ }\text{ }\text{i.e., }X_t\sim AR(1)\\
(1-\alpha_1B)(1-\alpha_2B)Y_t &= \eta_t,\text{ }\text{ }\text{ }\text{i.e., }Y_t\sim AR(2)
\end{align}
We have
\begin{align}
Z_t &= X_t + Y_t\\
(1-\alpha_1B)(1-\alpha_2B)Z_t &= (1-\alpha_2B)\epsilon_t + \eta_t
\end{align}
Then $Z_t\sim ARMA(2,1)$, i.e., $x<p+q$ and $y<\max(p,q)$.
Likewise (p. 249), on the MA side, for
\begin{align}
(1-\alpha B)X_t &= \epsilon_t,\text{ }\text{ }\text{ }\text{i.e., }X_t\sim AR(1)\\
(1+\alpha B)Y_t &= \eta_t,\text{ }\text{ }\text{ }\text{i.e., }Y_t\sim AR(1)
\end{align}
We have
\begin{align}
Z_t &= X_t + Y_t\\
(1-\alpha B)(1+\alpha B)Z_t &= (1+\alpha B)\epsilon_t + (1-\alpha B)\eta_t
\end{align}
Denote the right hand side
\begin{align}
\zeta_t = \epsilon_t + \alpha \epsilon_{t-1} + \eta_t -\alpha \eta_{t-1}
\end{align}
If $\text{var}(\epsilon)=\text{var}(\eta)$, we have
\begin{align}
E[\zeta_t \zeta_{t-k}]=0,\text{ }\text{ }\text{ }\forall k>0
\end{align}
Then $Z_t \sim ARMA(2,0)$, i.e., $y<\max(p,q)$.
However as Granger and Morris (1976, p. 250) highlight, '[i]t would be highly coincidental if the "true" series [i.e., $X_t$] and observational error series [i.e., $Y_t$] obeyed models having common roots, apart possibly from the root unity, or that the parameters of these models should be such that the cancelling out of terms produces a value of $y$ less than the maximum possible.'
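A quick simulation (my own sketch, not from Granger and Morris) illustrates the cancellation in the last example: with equal noise variances, the combined moving-average part $\zeta_t$ has zero autocorrelation, so no MA term survives and the sum is a pure AR(2).

```python
import numpy as np

rng = np.random.default_rng(42)
n, alpha = 200_000, 0.7  # arbitrary illustrative choices

# Equal-variance white noises driving the two AR(1) processes
eps = rng.normal(size=n + 1)
eta = rng.normal(size=n + 1)

# The combined "MA part": zeta_t = eps_t + a*eps_{t-1} + eta_t - a*eta_{t-1}
zeta = eps[1:] + alpha * eps[:-1] + eta[1:] - alpha * eta[:-1]

# Lag-1 sample autocorrelation: the +a and -a terms cancel in expectation
zc = zeta - zeta.mean()
acf1 = (zc[1:] * zc[:-1]).mean() / zc.var()
print(acf1)  # near zero
```

With unequal noise variances the cancellation fails and $\zeta_t$ shows genuine lag-1 autocorrelation.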
| null | CC BY-SA 4.0 | null | 2023-03-19T00:54:52.760 | 2023-03-24T23:35:49.157 | 2023-03-24T23:35:49.157 | 249850 | 249850 | null |
609933 | 1 | null | null | 2 | 199 | Is there a way to increase the number of dimensions through feature transformation in machine learning? If so what are the techniques involved?
| What are the methods to increase the dimension of a feature space? | CC BY-SA 4.0 | null | 2023-03-19T01:42:58.370 | 2023-03-19T17:04:35.923 | 2023-03-19T17:04:35.923 | 22311 | 383553 | [
"machine-learning",
"feature-engineering"
] |
609934 | 2 | null | 609829 | 1 | null | Lots of what we're trying to get at with dimensionality reduction, whether linear or nonlinear, is abstraction of the data away from a raw space and towards a manifold or embedding in which complex phenomena can be described with fewer, more meaningful parameters. PCA is arguably the simplest of these techniques; it will help you reduce correlated variables to a single common dimension. Implicit linear correlations in your dataset aside, lots of things in the real world are expressed as linear combinations of a set of latent variables. For example, you can use SVD (a generalization of PCA) in natural language because documents that contain words like "cat" are also likely to contain words like "pet" in rough proportion. So in a bag-of-words or tf-idf model, the correlation gets called out.
Aside from dimensionality reduction (or rather why it works for dimensionality reduction), PCA/SVD attempt to maximally explain the overall variance of the data using fewer components. You can think of this as "stretching" each component apart from the others as much as possible while keeping the component itself coherent (and this is actually explicitly how more advanced dimensionality reduction techniques such as maximum variance unfolding work). That makes it very useful at teasing apart how important each linear factor is within a dataset. That turns out to be particularly useful in recommendation engines over graphs, like the one that won the Netflix prize.
If PCA isn't useful today, it isn't because it inherently lacks usefulness, but because we have more powerful nonlinear techniques that have supplanted it.
| null | CC BY-SA 4.0 | null | 2023-03-19T02:18:47.213 | 2023-03-19T02:18:47.213 | null | null | 383554 | null |
609936 | 2 | null | 609933 | 4 | null | Square them.
Log them.
Multiply them.
Multiply their logs.
Log their products.
Take Fourier or wavelet transforms of time series data.
Any function of your set of features is a possible additional feature to include instead of or along with the original features. Some such functions will be more useful than others, yes, but pretty much any function of your features included as an additional feature will increase the dimension of your feature space.
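As a sketch of how such expansions look in practice (plain NumPy on synthetic data; libraries such as scikit-learn's `PolynomialFeatures` automate part of this):

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.abs(rng.normal(size=(100, 3))) + 0.1  # 100 rows, 3 positive features

# Hand-rolled expansions of the kinds listed above
squares = X ** 2
logs = np.log(X)                             # requires positive features
products = np.column_stack([X[:, i] * X[:, j]
                            for i in range(3) for j in range(i + 1, 3)])

X_expanded = np.column_stack([X, squares, logs, products])
print(X_expanded.shape)  # (100, 12): 3 original + 3 + 3 + 3 pairwise products
```

The feature space grows from 3 to 12 dimensions; whether the extra dimensions help is a modeling question, not a mechanical one.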
| null | CC BY-SA 4.0 | null | 2023-03-19T03:04:28.677 | 2023-03-19T03:04:28.677 | null | null | 247274 | null |
609937 | 1 | null | null | 1 | 27 | Say, I have a function containing a random variable such as $ f(X)$, where $X $ is the random variable. how can I take derivative with respect to the variance of $X$?
| Taking derivative of a function containing random variable wrt the variance of that variable | CC BY-SA 4.0 | null | 2023-03-19T03:52:51.190 | 2023-03-19T04:43:53.920 | 2023-03-19T04:43:53.920 | 362671 | 383555 | [
"probability",
"distributions",
"variance",
"derivative"
] |
609938 | 1 | null | null | 1 | 20 | [](https://i.stack.imgur.com/bQDAj.png)
I have data which I plotted to visualize. I can't seem to fit any sort of sensible curve to this plot, and I wondered how to approach this task.
| How do I fit an equation to this butterfly curve? | CC BY-SA 4.0 | null | 2023-03-19T03:59:38.687 | 2023-03-19T04:06:06.390 | 2023-03-19T04:06:06.390 | 69508 | 383556 | [
"regression",
"machine-learning"
] |
609939 | 1 | null | null | 0 | 3 | If clusters vary greatly in size, then it seems that the form $f = (1 - n/N)$ is not always a good choice. Usually in this form $n$ is the number of clusters sampled and $N$ is the pop cluster total.
This treats all $n$ equally.
Is there a modified version of the fpc for variable size clusters? Something like:
$(1 - (n/N + m/M)/2)$ where $m$ is the number of clusters sampled from pop $M$ and $n$ is the number of ssus sampled from $N$ total ssus.
| How is the finite population of unequal sized clusters applied in one-stage sampling? | CC BY-SA 4.0 | null | 2023-03-19T04:51:52.467 | 2023-03-19T04:51:52.467 | null | null | 43080 | [
"survey",
"population",
"survey-sampling",
"cluster-sample",
"finite-population"
] |
609941 | 1 | null | null | 0 | 36 | I have a question about performing multiple linear regression preferably on SPSS 27.
I had 3 dependent variables on teachers' intention to remain on the job. I took the mean of the scores (measured on a 7-point Likert scale) for the dependent variables. But I am stuck, especially with regard to performing the regression, as I have 25 predictor/independent variables. I watched some YouTube videos, but they did not help me much. I would appreciate it if somebody could explain to me, in detail, how to proceed. I am mostly self-taught and do not have a go-to person, so any kind of help here online, in a simple, detailed manner, would help me learn. Thanks in advance.
[](https://i.stack.imgur.com/20YrZ.png)
| Performing multiple linear regression on SPSS when there are many predictor variables | CC BY-SA 4.0 | null | 2023-03-19T06:40:38.563 | 2023-03-19T08:54:15.607 | 2023-03-19T08:54:15.607 | 383563 | 383563 | [
"regression",
"spss"
] |
609942 | 2 | null | 609345 | 1 | null | Neural networks are universal approximators, so in principle they can approximate any function. But for this to really work, you would likely need a huge number of weights, a lot of data, and a very long training time. All the new kinds of neural networks and other improvements in this area aim to make them smaller, work with realistically sized datasets, and train in a reasonable time. So it is theoretically possible, but not practical.
The difference is between the model figuring everything by itself vs us giving it a reasonable starting point. We have different ways of making life easier for the machine learning algorithms: using [feature engineering](https://stats.stackexchange.com/a/350238/35989), initializing the weights cleverly, using informative priors for the parameters, or using architectures that already force some informed way of using the data as in this case. Machine learning to a great degree is about finding those shortcuts.
As a side comment, notice that the same applies to the brains of humans and other animals: we do not start with a set of neurons and need to figure out how to use them all by ourselves. We start with a pre-built architecture and innate knowledge, with many of the reflexes and basic “functionalities” available to us at birth. Starting from a tabula rasa would be really ambitious.
| null | CC BY-SA 4.0 | null | 2023-03-19T07:06:56.903 | 2023-03-19T08:12:30.167 | 2023-03-19T08:12:30.167 | 35989 | 35989 | null |
609943 | 1 | null | null | 0 | 16 | Study question: build a model that predicts if a patient would benefit from treatment A vs treatment B.
Outcome: numerical survey score at 24 months
Other variables: demographics, potentially imaging.
Issue: I want to risk-stratify and use something like Cox regression to create a nomogram (below a cutoff score, recommend treatment A; above the cutoff, recommend treatment B). However, our outcome measure is improvement (i.e., we don't have an adverse outcome for survival analysis), so I guess there is no “risk” per se. Any way around this? Or should I turn to SVM/random forests (but then this wouldn't allow me to risk-stratify)?
| What kind of predictive model can I use to recommend one intervention vs another? | CC BY-SA 4.0 | null | 2023-03-19T07:44:46.023 | 2023-03-24T00:49:21.680 | 2023-03-24T00:49:21.680 | 11887 | 383564 | [
"machine-learning",
"multiple-regression",
"predictive-models",
"medicine"
] |
609945 | 2 | null | 609926 | 1 | null |
- With many measurements (like 995), small differences may already become significant. A small effect size can still be significant.
Also, a t-test will show a significant difference: the means differ by about $0.3$ and the t-statistic is around $2.4$, giving a p-value of around $0.0166$.
Are smaller p-values more convincing?
Why is "statistically significant" not enough?
- Btw, the Mann-Whitney U test is not the same as a t-test and can be significant, even when the means are the same (https://stats.stackexchange.com/a/470512/).
The MWU test is testing whether $P(X>Y) = P(Y>X)$. This is the case when, in a PP-plot, the curve divides the plane into two equal areas. So, in relation to an MWU test, a PP-plot might be a better way to visualize the difference than two histograms.
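To make the first point concrete, here is a NumPy sketch (the samples are simulated to mimic the question's summary statistics, not the actual data; in practice SciPy's `mannwhitneyu` with its tie correction would be used). The estimated $P(X>Y)$ is only slightly above $1/2$, a small effect, yet with $n=995$ per group the normal-approximation p-value can still be small.

```python
import math
import numpy as np

rng = np.random.default_rng(7)
n = m = 995
# Hypothetical samples shaped like the question's means and SDs
x = rng.normal(10.50, 2.82, n)
y = rng.normal(10.19, 2.87, m)

# U statistic: number of (x, y) pairs with x > y (ties count 1/2)
diff = x[:, None] - y[None, :]
U = (diff > 0).sum() + 0.5 * (diff == 0).sum()

# P(X > Y): only slightly above 1/2, i.e. a small effect size
p_x_gt_y = U / (n * m)

# ... yet the normal-approximation two-sided p-value can be small
mu_U = n * m / 2
sd_U = math.sqrt(n * m * (n + m + 1) / 12)
z = (U - mu_U) / sd_U
p_value = math.erfc(abs(z) / math.sqrt(2))
print(p_x_gt_y, p_value)
```

Significance and effect size answer different questions: the test only says the small shift is unlikely under the null, not that it is large.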
| null | CC BY-SA 4.0 | null | 2023-03-19T08:06:34.563 | 2023-03-19T08:25:10.147 | 2023-03-19T08:25:10.147 | 164061 | 164061 | null |
609946 | 1 | 609972 | null | 0 | 65 | I have data on the days on which the greening of trees happens across America in 2015, along with meteorological and topography data, etc. I want to predict the day of greening with a linear mixed-effects model, using the meteorological and topography data as fixed effects and the US states as the random effect. I have looked into how to fit a linear mixed-effects model in R and tried to do so, but the output looks strange. I have looked at many examples and [this](https://stats.stackexchange.com/questions/13166/rs-lmer-cheat-sheet) 'cheat sheet' on how to specify linear mixed-effects models in R.
Right now, when I for example plot the relationship between relative humidity and day of greening I get strong positive relationships in many states which makes a lot of sense in this study.
[](https://i.stack.imgur.com/92TJn.png)
But, in my overall linear mixed-effect model I get a negative estimate for relative humidity, while a simple linear regression generates a positive estimate for relative humidity as predictor. The models are below:
```
# The number 15 represents the year 2015
# doy: day of the year on which greening occurs
# postC15: carbon content in the vegetation
# postMC15: mean carbon content in the state
# postPAR15: photosynthetically active radiation
# postElev15: elevation
# postPR15: precipitation
# postRH15: relative humidity
# postTEMP15: air temperature
# postAsp15: aspect
lm.doy <- lm(doy ~ postC15+postMC15+postPAR15+postElev15+postPR15+postRH15+postTEMP15+postAsp15, data = df)
lmm.doy <- lme(doy ~ postC15+postMC15+postPAR15+postElev15+postPR15+postRH15+postTEMP15+postAsp15, data = postSM, random = ~1+postC15+postMC15+postPAR15+postElev15+postPR15+postRH15+postTEMP15+postAsp15|states, method = 'ML', control = lmeControl(opt = "optim", msMaxIter=1000, maxIter = 1000, msMaxEval = 1000))
# Below is the summary of both:
summary(lm.doy)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -2.549e-16 1.286e-02 0.000 1.000000
postC15 -3.567e-02 1.636e-02 -4.626 3.83e-06 ***
postMC15 4.209e-02 2.225e-02 3.689 0.000228 ***
postPAR15 -5.008e-02 3.016e-02 -1.660 0.096960 .
postElev15 6.690e-02 1.588e-02 2.323 0.020217 *
postPR15 7.233e-02 1.871e-02 2.797 0.005178 **
postRH15 8.407e-02 2.556e-02 3.289 0.001012 **
postTEMP15 3.635e-01 2.043e-02 8.004 1.52e-15 ***
postAsp15 -3.278e-01 1.477e-02 -8.655 < 2e-16 ***
summary(lmm.doy)
Value Std.Error DF t-value p-value
(Intercept) -0.2376308 0.10220873 4500 -1.248727 0.2118
postC15 -0.1541045 0.01906032 4500 -2.313944 0.0207
postMC15 -0.1168753 0.02265520 4500 -1.186277 0.2356
postPAR15 -0.2255873 0.03026647 4500 -3.818988 0.0001
postElev15 -0.1322979 0.01637984 4500 -1.361303 0.1735
postPR15 -0.1411053 0.01815212 4500 -1.713592 0.0867
postRH15 -0.1729018 0.03824127 4500 -1.644868 0.1001
postTEMP15 0.1462955 0.04477048 4500 0.810701 0.4176
postAsp15 0.1557790 0.01890807 4500 2.421137 0.0155
```
I am not sure that I set up my linear mixed-effect model correctly. In simple terms, I thought by defining:
```
random = ~1+postC15+postMC15+postPAR15+postElev15+postPR15+postRH15+postTEMP15+postAsp15|states
```
I more or less fit a small linear regression for each state and eventually 'sum' them up into one global estimate of the intercept and slope for each individual variable. Or should I basically do the following:
```
doy ~ (1|states) + postC15+postMC15+postPAR15+postElev15+postPR15+postRH15+postTEMP15+postAsp15 + (0+postC15+postMC15+postPAR15+postElev15+postPR15+postRH15+postTEMP15+postAsp15|states)
```
To get what I want, i.e. the effect of each and every variable for each and every state?
How do I set up a linear mixed-effects model in R to predict the day of greening for each individual state from meteorological and topographical data?
| How to correctly set up my mixed-effect model? | CC BY-SA 4.0 | null | 2023-03-19T08:07:34.173 | 2023-03-19T15:19:24.490 | 2023-03-19T14:03:10.033 | 290826 | 290826 | [
"r",
"regression",
"mixed-model",
"linear-model"
] |
609947 | 1 | null | null | 2 | 10 | I need to rank comments according to a text. Both are represented by vector representations from Sentence Transformers. Each example consists of one text and 5 comments, with a score from 0 to 4 for each comment.
I need advice on how to design the network architecture, in particular the inputs, outputs, and loss function.
At the moment I have not found anything better than solving a classification problem and ranking by the predicted probabilities.
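For concreteness, here is a sketch of a listwise alternative I am considering (ListNet-style; plain NumPy, with the 5 comments per text treated as one training example). The scores would come from the network on top of the Sentence Transformer embeddings; both scores and the 0-4 labels are mapped through a softmax and compared with cross-entropy.

```python
import numpy as np

def listnet_loss(scores, relevance):
    """Listwise softmax cross-entropy between predicted scores
    and a target distribution derived from relevance labels."""
    p_pred = np.exp(scores - scores.max())
    p_pred /= p_pred.sum()
    p_true = np.exp(relevance - relevance.max())
    p_true /= p_true.sum()
    return -(p_true * np.log(p_pred + 1e-12)).sum()

relevance = np.array([4.0, 2.0, 0.0, 1.0, 3.0])  # labels for 5 comments

good = listnet_loss(np.array([4.1, 2.2, 0.1, 0.9, 2.8]), relevance)
bad = listnet_loss(np.array([0.0, 1.0, 4.0, 3.0, 2.0]), relevance)
print(good, bad)  # well-ordered scores give the lower loss
```

This loss rewards getting the whole ordering right rather than each score in isolation, which matches the ranking objective better than per-comment classification.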
| I need to create a neural network architecture for learning to rank task | CC BY-SA 4.0 | null | 2023-03-19T09:21:57.707 | 2023-03-19T09:21:57.707 | null | null | 314363 | [
"neural-networks",
"loss-functions",
"ranking"
] |
609948 | 1 | null | null | 3 | 151 |
- How do I interpret a negative interaction coefficient in multiple regression, with negative coefficients for the main effects?
- When I run the model without the interaction, the relationship is positive. Does anyone have any ideas why this is happening?
The variables in my model are all continuous. I'm testing the interaction between school climate (x) and activism predicting school belonging (y). I added grade level to the first block with my deviation scores for school climate and activism. Variables were centered before creating interaction term.
| How to interpret regression negative interaction coefficient | CC BY-SA 4.0 | null | 2023-03-19T09:33:09.213 | 2023-03-19T18:18:28.733 | 2023-03-19T09:40:54.980 | 339145 | 339145 | [
"regression",
"multiple-regression",
"interaction"
] |
609950 | 1 | null | null | 1 | 20 | In my research, for example, trunk is the endogenous independent variable of interest and headroom is the instrumental variable for trunk. Both headroom and trunk are continuous variables.
Let's assume the basic IV regression is as follows:
```
sysuse auto,clear
ivreg2 price displacement (trunk=headroom), robust
```
My question is: I want to further study the effect of trunk on price at different levels of trunk. How should I instrument for the bins of trunk?
In OLS, I can create 3 bins of trunk according to whether the value of trunk is high/middle/low; in Stata:
```
sysuse auto.dta, clear
sum trunk, detail
gen trunk_high=(trunk>=`r(p75)')
gen trunk_mid=(trunk>`r(p25)' & trunk<`r(p75)')
gen trunk_low=(trunk<=`r(p25)')
reg price trunk_high trunk_mid trunk_low displacement, robust
```
How can I instrument for trunk_high, trunk_mid, and trunk_low?
Or I can just use the predicted value from the first stage (but then how can I calculate the standard errors?):
```
sysuse auto.dta, clear
sum trunk, detail
gen trunk_high=(trunk>=`r(p75)')
gen trunk_mid=(trunk>`r(p25)' & trunk<`r(p75)')
gen trunk_low=(trunk<=`r(p25)')
reg trunk displacement
predict trunk_hat
sum trunk_hat, detail
gen trunk_hat_high=(trunk_hat>=`r(p75)')
gen trunk_hat_mid=(trunk_hat>`r(p25)' & trunk_hat<`r(p75)')
gen trunk_hat_low=(trunk_hat<=`r(p25)')
reg price trunk_hat_high trunk_hat_mid trunk_hat_low displacement, robust
```
| instrument variable for bins of endogenous variable | CC BY-SA 4.0 | null | 2023-03-19T09:41:11.553 | 2023-03-19T09:41:11.553 | null | null | 383573 | [
"categorical-data",
"econometrics",
"stata",
"instrumental-variables"
] |
609952 | 1 | 609954 | null | 2 | 64 | WLLN tells us that if $X_1,...,X_n$ are iid, with $X_1$ having finite mean $\mu$, then their sample average converges in probability to $\mu$.
Suppose instead we know that $X_1,...,X_n$ are iid and their sample average converges in probability to a constant. Can we argue that $X_1$ has a finite mean and hence that such a constant must be the mean of $X_1$?
| If sample average converges in an iid sample, must it converge to the mean? | CC BY-SA 4.0 | null | 2023-03-19T10:22:29.087 | 2023-03-20T01:52:51.600 | 2023-03-20T01:52:51.600 | 342032 | 342032 | [
"convergence",
"asymptotics",
"law-of-large-numbers"
] |
609953 | 2 | null | 447844 | 1 | null |
## Additive noise models are an assumption
Additive noise models (ANM) express an assumption about the functional form of causal relationships. In your question, you say that "`cancer <- f(smoking) + noise` should be pretty similar to `smoking <- f(cancer) + noise`". In a way that's exactly what the ANM assumption is saying – they are pretty similar, except that in the causal direction the noise is independent.
ANMs postulate that in the true causal model, noise is added to the effect, which leads to an asymmetry that can be exploited to learn the causal structure.
## An intuition about additive noise models
Now you might say "why would the ANM assumption apply?"
One motivation is to introduce stochasticity. Without noise of some form, the causal relationships would be completely deterministic, which would be hard to justify.
Seeing that we need some stochasticity and having decided to use a noise term, why would it be additive? The intuition here is that the noise is some independent variation of the effect variable that does not interact with the values of the cause. Such variation could be caused by other variables that we have decided to view as outside our causal system but that still independently have some (usually weak) effect on our system variables.
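To illustrate the asymmetry this assumption creates, here is a small simulation sketch (my own illustration, not from the question): with $y = x^3 + \text{noise}$, a flexible regression leaves residuals that look independent of the predictor only in the causal direction. The `dependence` function is a crude rank-correlation heuristic standing in for a proper independence test such as HSIC.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
x = rng.normal(size=n)
y = x ** 3 + rng.normal(size=n)  # additive noise in the causal direction

def dependence(pred, resid):
    # Crude heuristic (not a real independence test such as HSIC):
    # does the residual spread vary with the predictor's magnitude?
    r1 = np.argsort(np.argsort(pred ** 2))   # ranks of squared predictor
    r2 = np.argsort(np.argsort(resid ** 2))  # ranks of squared residuals
    return abs(np.corrcoef(r1, r2)[0, 1])

# Fit a cubic polynomial in both directions and compare the residuals
fwd_resid = y - np.polyval(np.polyfit(x, y, 3), x)  # causal direction
bwd_resid = x - np.polyval(np.polyfit(y, x, 3), y)  # anti-causal direction

print(dependence(x, fwd_resid), dependence(y, bwd_resid))
```

In the causal direction the residuals are essentially the independent noise; fitted backwards, the residual spread varies systematically with the predictor, which is the footprint ANM-based causal discovery methods exploit.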
| null | CC BY-SA 4.0 | null | 2023-03-19T10:24:34.453 | 2023-03-19T12:06:26.710 | 2023-03-19T12:06:26.710 | 250702 | 250702 | null |
609954 | 2 | null | 609952 | 3 | null | For counter-examples, you might want to consider cases where the weak law of large numbers applies but the strong law of large numbers does not. These cases must have $E[X_1]$ undefined.
For example, adapting [the second example in Wikipedia](https://en.wikipedia.org/wiki/Law_of_large_numbers#Differences_between_the_weak_law_and_the_strong_law),
- suppose $\mathbb P\left(X_1=\frac{(-2)^n}{n} \right)=\frac1{2^n}$ for positive integer $n$
- this does not have a mean since $\sum\limits_{n=1}^\infty \frac{(-2)^n}{n} \frac1{2^n}=\sum\limits_{n=1}^\infty \frac{(-1)^n}{n}$ does not converge absolutely
- but the sum does converge conditionally to $-\log_e(2)\approx -0.693$ and the sample averages converge in probability to this
In this example, you need a large sample to see much convergence. For example with $10^4$ simulations each with sample sizes of $10^2$ (red), $10^3$ (green) and $10^4$ (blue), the following R code
```
Xbar <- function(cases){
Y <- rgeom(cases, 1/2) + 1 # R's geometric distribution starts at 0
X <- (-2)^Y / Y
mean(X)
}
set.seed(2023)
sims4 <- replicate(10^4, Xbar(10^4))
plot(density(sims4, from=-2, to=1), col="blue")
sims3 <- replicate(10^4, Xbar(10^3))
lines(density(sims3, from=-2, to=1), col="green")
sims2 <- replicate(10^4, Xbar(10^2))
lines(density(sims2, from=-2, to=1), col="red")
abline(v = -log(2))
```
shows the increasing concentration of the sample mean as the sample size increases
[](https://i.stack.imgur.com/zpXPO.png)
| null | CC BY-SA 4.0 | null | 2023-03-19T10:51:23.150 | 2023-03-19T11:14:45.500 | 2023-03-19T11:14:45.500 | 2958 | 2958 | null |
609955 | 2 | null | 609829 | 8 | null | I use three examples in my lectures to illustrate what PCA can do (click the links for pointers to the slides). They're chosen to show how useful it is in general data science practice and how powerful it can be (especially given that it's just a linear transformation).
## example 1: bone analysis
Imagine you're a paleontologist, and you find a shoulder bone. Because of years of training, you recognize immediately that this is a Hominid bone (which is very rare) and not a Chimp bone (which is common).
How do you report that fact? "It's a Hominid bone because I say so", isn't very scientific.
One way would be to use PCA. You measure a bunch of features on the bone and on some similar bones, and you plot the first two principal components. Here's one example of such a plot [[1](https://i.stack.imgur.com/1iPgl.png)].
[](https://i.stack.imgur.com/1iPgl.png)
The plot very clearly shows what your trained eyes told you in the first place: the bones of Hominids cluster together, far away from the more common bones of humans, chimps and other apes.
You can even draw a line backwards through the known evolutionary path, to get a hypothesis where humans come from. Turns out chimps and bonobos are a better candidate than gorillas and gibbons.
## example 2: DNA
Take about 1300 people in Europe, sequence their DNA, and check about half a million markers. This gives you a dataset with 1300 instances, and 500 thousand features. Apply PCA, and plot it by the first two principal components. Now color the points by where the subject is from. Here's the result [[2](https://i.stack.imgur.com/NItOP.jpg)].
[](https://i.stack.imgur.com/NItOP.jpg)
The plot reveals that the first two principal components provide a blurry picture of the geographical distribution.
I admit it's hard to say what this is useful for, but it certainly illustrates the power of the method.
## example 3: eigenfaces
Take a set of images of people's faces and flatten them into high-dimensional vectors. Run PCA.
Again, you will see clusters for certain meaningful concepts. But here, we can do something else cool. We can take the n-th principal component, and nudge one of the examples in our data a little in that direction.
Here is the result for the first couple of principal components:
[](https://i.stack.imgur.com/9WddD.jpg)
[1](https://i.stack.imgur.com/1iPgl.png) Fossil hominin shoulders support an African ape-like last common ancestor of humans and chimpanzees. Nathan M. Young, Terence D. Capellini, Neil T. Roach and Zeresenay Alemseged [http://www.pnas.org/content/112/38/11829](http://www.pnas.org/content/112/38/11829)
[2](https://i.stack.imgur.com/NItOP.jpg) Novembre, J., Johnson, T., Bryc, K., Kutalik, Z., Boyko, A. R., Auton, A., ... & Bustamante, C. D. (2008). Genes mirror geography within Europe. Nature, 456(7218), 98-101. [https://www.nature.com/articles/nature07331](https://www.nature.com/articles/nature07331)
| null | CC BY-SA 4.0 | null | 2023-03-19T11:07:41.247 | 2023-03-19T11:07:41.247 | null | null | 20085 | null |
609956 | 1 | 610299 | null | 0 | 37 | We have cancer medical registry data, including information on date of diagnosis, treatment, and follow-up, e.g. date of death. However, we only know the type of treatment received by each person. We have no information on WHEN they received treatment, nor dose, frequency etc. Whilst this is not optimal, I guess this is better than not knowing at all. However, we are uncertain how to model such treatment variables in e.g. a Cox model. It seems to me that a fixed-in-time covariate is not appropriate (e.g. treatment may begin well after time zero). Nor can we properly use a time-varying covariate, as this is precisely the information we lack. Running a logistic regression model is one thing we thought of, but that seems wasteful since we do know time to outcome. Any advice would be great!
Update
Only information on primary cancer and primary treatment is known. We don't have any information on subsequent treatment if recurrence occurred.
We have the date of initial diagnosis of cancer (in situ) and also dates for diagnosis of invasive cancer and death (if either occurred).
Suppose we want to use a Cox model for time FROM in situ diagnosis to event (e.g. invasive cancer or death from cancer); we are unsure on where to start time zero in our situation. If we set time zero equal to the in situ diagnosis date (unique to each person), then clearly nobody at that time has actually had any treatment yet!
Or we could make some assumptions about the likely timeframes in which treatment USUALLY occurs following diagnosis and set time zero to after that? I guess I was thinking that this might lose power, and also different treatments are likely to start at different times... e.g. maybe chemo, then radio etc., making it very confusing. I'm thinking along the lines of the comment from @EdM on this question:
[Cox model: advice on constructing time varying exposure to drugs](https://stats.stackexchange.com/questions/604036/cox-model-advice-on-constructing-time-varying-exposure-to-drugs/604292?noredirect=1#comment1129303_604292)
where the comment says...
Any fixed-in-time covariate should have a value that was in place at time=0 in the survival analysis.
Despite this statistical requirement, might it be "valid" to still set time zero equal to the in situ diagnosis date and think of it as: the patient will at some time over follow-up experience treatment x, y, z etc., even if they have not yet experienced any treatment at all at time zero? And then model these treatment variables as just ordinary main effects (i.e. fixed in time rather than time-varying) in the Cox model?
| Cox model: how to model treatment variable when timing is unknown | CC BY-SA 4.0 | null | 2023-03-19T11:07:45.537 | 2023-03-22T11:38:24.563 | 2023-03-22T11:02:19.820 | 167591 | 167591 | [
"logistic",
"cox-model",
"time-varying-covariate",
"rms"
] |
609957 | 1 | null | null | 0 | 9 | If I understand it correctly, in nested CV we have 2 layers, K1 and K2.
In the inner loop (K2), we choose the best model and its hyperparameters; in the outer loop (K1), we do model assessment.
How fair is it to estimate the generalization error as an average of the errors of the K1 outer models? Because we have K1 different models, it's not the same model we are evaluating K1 different times.
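For concreteness, here is a minimal sketch of the procedure described above (not part of the original question) — entirely hypothetical data, with polynomial degree playing the role of the hyperparameter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y depends linearly on x, plus noise with variance 0.25
x = rng.normal(size=120)
y = 2.0 * x + rng.normal(scale=0.5, size=120)

def fold_indices(n, k):
    """Split indices 0..n-1 into k roughly equal folds."""
    return np.array_split(np.arange(n), k)

def cv_mse(x_tr, y_tr, degree, k):
    """k-fold CV estimate of MSE for a polynomial fit of the given degree."""
    errs = []
    for te in fold_indices(len(x_tr), k):
        tr = np.setdiff1d(np.arange(len(x_tr)), te)
        coefs = np.polyfit(x_tr[tr], y_tr[tr], degree)
        errs.append(np.mean((np.polyval(coefs, x_tr[te]) - y_tr[te]) ** 2))
    return np.mean(errs)

outer_errs = []
for te in fold_indices(len(x), 5):                 # outer loop: K1 = 5 folds
    tr = np.setdiff1d(np.arange(len(x)), te)
    # Inner loop (K2 = 3): model selection using the outer-training data only
    best_deg = min((1, 2, 3), key=lambda d: cv_mse(x[tr], y[tr], d, 3))
    coefs = np.polyfit(x[tr], y[tr], best_deg)     # refit on all outer-training data
    # Model assessment on the held-out outer fold
    outer_errs.append(np.mean((np.polyval(coefs, x[te]) - y[te]) ** 2))

print(np.mean(outer_errs))  # generalization-error estimate (roughly the noise variance)
```

Averaging `outer_errs` estimates the error of the whole selection-plus-fitting procedure, which is exactly the subtlety the question raises.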
| Question model selection & assessment in Nested CV | CC BY-SA 4.0 | null | 2023-03-19T11:08:25.423 | 2023-03-19T11:28:15.610 | 2023-03-19T11:28:15.610 | 362671 | 383578 | [
"machine-learning"
] |
609958 | 1 | 610187 | null | 2 | 124 | Hastie, et al in “Elements of Statistical Learning” describe a particular DGP(Data Generating Process) or causal model on page 13.
>
The training data in each class (2 classes total) came from a mixture of 10 low variance Gaussian distributions, with individual means themselves distributed as Gaussian. A mixture of Gaussians is best described in terms of the generative model. One first generates a discrete variable which determines which of the component Gaussians to use, and then generates an observation from the chosen density.
Having learned most of what I know about DAG’s from Pearl’s Primer book, which does not include a mixture as an example, I am just not sure what a DAG would look like in this case.
John Kruschke, in his book "Doing Bayesian Data Analysis" offers an attractive alternative to DAGs, which show a sketch of the distribution indexed by its parameters. See [here](http://doingbayesiandataanalysis.blogspot.com/2013/10/diagrams-for-hierarchical-models-we.html) and [here](http://doingbayesiandataanalysis.blogspot.com/2012/05/graphical-model-diagrams-in-doing.html). So if you prefer to do a DBDA style plot over the DAG, that works too.
| A graph that helps understand DGP of a mixture of Gaussians (DAG, Kruschke's DBDA style graph, etc.)? | CC BY-SA 4.0 | null | 2023-03-19T11:32:37.643 | 2023-03-21T15:28:28.237 | 2023-03-21T15:28:28.237 | 198058 | 198058 | [
"dag",
"causal-diagram"
] |
609959 | 1 | null | null | 1 | 14 | I built an ensemble regression model (Random Forest + KNN + SVM) to predict biomass based on environmental conditions (biomass is strictly positive but continuous).
I now would like to use this model to predict total global biomass. To get the total biomass, I predict the biomass for each grid on a global map and then sum the numbers.
Now my question is, how do I add errors to this estimate? I have calculated RMSE and MAE. But both RMSE and MAE are larger than the mean. Simply adding error bars based on MAE value would thus create negative lower limits which is not reasonable since biomass has to be strictly positive.
| Which error metric should I use for summed estimates? | CC BY-SA 4.0 | null | 2023-03-19T11:46:55.600 | 2023-03-19T11:46:55.600 | null | null | 383579 | [
"machine-learning",
"prediction-interval"
] |
609960 | 1 | 610338 | null | 1 | 27 | I'm running a multivariate growth model with two variables measured across 3 waves using lavaan/growth function.
Here's the model specification I'm using, with non-uniform time differences. Two variables here are "cyn" and "cms".
```
model.cms.cynic <- " icyn =~ 1*T1_Cynic + 1*T2_Cynic + 1*T3_Cynic
scyn=~ 0*T1_Cynic + 1*T2_Cynic + 5.8*T3_Cynic
icms =~ 1*T1_CMS + 1*T2_CMS + 1*T3_CMS
scms =~ 0*T1_CMS + 1*T2_CMS + 5.8*T3_CMS
scyn ~ icms
scms ~ icyn"
```
I was interested in looking at 1) associations between the two intercepts, 2) associations between the two slopes, and 3) whether an intercept for one variable predicts the slope of another variable.
My first question is whether the last two lines of my code adequately test my third research question.
My second question pertains to the results I got for the third question in tandem with the associations I found for the first two questions.
The results showed that the intercept of cms ("icms") negatively predicted the slope of cyn ("scyn"). That said, because I also observed that icms and icyn were positively associated, I'm not sure whether predicting scyn from icms needs to also control for the intercept of cyn (icyn), since
- higher icms predicted higher icyn,
- higher icyn would restrict the growth of cyn (=scyn) due to ceiling effect, which would
- contribute to the observed negative coefficient of icms predicting scyn (which could be considered as an artifact).
That is, I'm not sure whether I SHOULD specify the last two lines as follows:
```
scyn ~ icms + icyn
scms ~ icyn + icms
```
Is this reasoning correct, or unfounded? Does the growth curve model (based on my specification) take that ceiling effect into account already?
| Regressing slope on intercept in multivariate growth curve model | CC BY-SA 4.0 | null | 2023-03-19T11:52:28.000 | 2023-03-22T16:59:32.140 | null | null | 169238 | [
"lavaan",
"growth-model"
] |
609961 | 1 | null | null | 1 | 45 | I have a reflective-formative higher-order construct that combines seven reflective first-order constructs. I validated this higher-order construct in a study using the PLS-SEM method in which all seven subconstructs showed significant positive path coefficients with this higher-order construct.
In my new study, I included this higher-order construct as an independent variable with other endogenous four reflective constructs as dependent variables. When I run the PLS-SEM algorithm with this model, two of the previously positive path coefficients between the seven subconstructs and the higher-order construct become negative.
Could anyone please explain to me what could be the reason? I am using the same data as the previous study, so how can I get consistent results?
Any help would be greatly appreciated.
| Reflective-formative higher-order construct path issue in PLS-SEM | CC BY-SA 4.0 | null | 2023-03-19T11:56:34.630 | 2023-03-19T11:56:34.630 | null | null | 383580 | [
"structural-equation-modeling",
"partial-least-squares"
] |
609963 | 1 | null | null | 7 | 800 | I am having trouble understanding why the Central Limit Theorem (CLT) is applicable in A/B testing. As a beginner in statistics, I am trying to grasp the intuition behind it.
The CLT states that as we draw random samples from a population, the distribution of their means tends towards a normal distribution. However, in A/B testing, we only draw two samples, and their distribution is not necessarily guaranteed to be normal. Nonetheless, the difference between these two samples is guaranteed to be a normal distribution. My question is, how is the difference between these two samples constructed, and why is it guaranteed to be a normal distribution?
| Why is the Central Limit applicable in A/B testing? | CC BY-SA 4.0 | null | 2023-03-19T12:30:36.683 | 2023-03-20T04:50:10.513 | null | null | 383582 | [
"inference",
"experiment-design",
"central-limit-theorem",
"random-allocation"
] |
609964 | 2 | null | 609963 | 6 | null | >
we only draw two samples
You can consider a sample of size $n$ as $n$ samples of size $1$.
The outcome can be seen as a sum of $n$ independent [Bernoulli distributed variables](https://en.m.wikipedia.org/wiki/Bernoulli_distribution) (if the people in the sample are independent), also known as a [binomial distribution](https://en.m.wikipedia.org/wiki/Binomial_distribution).
>
the distribution of their means tends towards a normal distribution
The central limit theorem states that, in the limit of infinite sample size, the distribution of the normalised sum approaches a normal distribution. In practice this is used to argue that a finite sample will also approximately follow a normal distribution.
In the special case of a binomial distribution we can also use the [De Moivre–Laplace theorem](https://en.m.wikipedia.org/wiki/De_Moivre%E2%80%93Laplace_theorem) to argue that the distribution is approximately normally distributed.
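As an illustration (not part of the original answer, with made-up conversion rates of 0.10 and 0.12 and a hypothetical per-arm sample size), a short simulation shows the difference of two sample proportions concentrating around the true difference:

```python
import random
import statistics

random.seed(42)

def sample_prop(p, n):
    """Sample proportion: mean of n independent Bernoulli(p) draws."""
    return sum(random.random() < p for _ in range(n)) / n

p_a, p_b, n = 0.10, 0.12, 500   # hypothetical rates and per-arm sample size

# Repeatedly draw both arms and record the difference in sample proportions
diffs = [sample_prop(p_b, n) - sample_prop(p_a, n) for _ in range(2000)]

print(statistics.mean(diffs))   # close to the true difference 0.02
print(statistics.stdev(diffs))  # close to sqrt(p_a(1-p_a)/n + p_b(1-p_b)/n) ~ 0.0198
```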
Related: [Statistics question: Why is the standard error, which is calculated from 1 sample, a good approximation for the spread of many hypothetical means?](https://stats.stackexchange.com/questions/549501/)
| null | CC BY-SA 4.0 | null | 2023-03-19T12:42:47.427 | 2023-03-19T22:15:10.527 | 2023-03-19T22:15:10.527 | 164061 | 164061 | null |
609965 | 1 | null | null | 0 | 28 | Just as the title imply, I am searching for an error evaluation metric with the following characteristics:
- able to handle cases where actual value is 0
- can evaluate different units/scales
- isn't disproportionately affected by outliers
My intention is to find a complementary evaluation metric to sMAPE.
| Evaluation metric which is able to handle cases where actual value is 0, different units and isn't disproportionately affected by outliers | CC BY-SA 4.0 | null | 2023-03-19T12:50:31.367 | 2023-03-19T12:50:31.367 | null | null | 320876 | [
"inference",
"error",
"model-evaluation",
"measurement-error"
] |
609967 | 1 | null | null | 0 | 17 | I have performed a decoding task (50% theoretical chance level) with data from 36 individuals. 21 out of 36 yielded above chance level decoding accuracies as measured via non-parametric permutation tests for each individual.
What is the statistical test I should apply to see whether 21/36 is a statistically significant ratio when compared to chance level (i.e. How to look at group-level statistical significance with this 21/36 ratio that I obtained from individual-level stats)?
| Going from Individual-Level Significance to Group-Level Significance Analysis Against Chance | CC BY-SA 4.0 | null | 2023-03-19T13:55:07.560 | 2023-03-19T14:55:11.157 | 2023-03-19T14:55:11.157 | 383589 | 383589 | [
"distributions",
"classification",
"statistical-significance",
"permutation-test"
] |
609969 | 2 | null | 609948 | 4 | null | If the coefficients of your main effects are flipping directions after you've added an interaction term, and variables were mean-centered prior to constructing the interaction term (and you're using the mean-centered variables in the regression), then probably what's happening is that your variables are skewed. Interaction terms are uncorrelated with the constituent variables only when variables are normally distributed. So, you should interpret the interaction coefficient relative to the main effect coefficients from the model without the interaction term. It can also help to plot the interaction.
| null | CC BY-SA 4.0 | null | 2023-03-19T14:52:12.517 | 2023-03-19T14:52:12.517 | null | null | 288142 | null |
609970 | 1 | null | null | 7 | 656 | I'm confused about the way L1 & L2 pop-up in what seem different roles in the same play:
- Regularization - penalty for the cost function, L1 as Lasso & L2 as Ridge
- Cost/Loss Function - L1 as MAE (Mean Absolute Error) and L2 as MSE (Mean Square Error)
Are [1] and [2] the same thing? or are these two completely separate practices sharing the same names? (if relevant) what are the similarities and differences between the two?
| L1 & L2 double role in Regularization and Cost functions? | CC BY-SA 4.0 | null | 2023-03-19T15:00:30.437 | 2023-03-28T15:39:39.037 | 2023-03-28T15:39:39.037 | 247274 | 383593 | [
"regression",
"machine-learning",
"regularization",
"loss-functions",
"norm"
] |
609972 | 2 | null | 609946 | 2 | null | As [@shawn-hemelstrand](https://stats.stackexchange.com/users/345611/shawn-hemelstrand) commented, I am pretty surprised that this code converged for you. `nlme::lme()` will sometimes allow users to fit models that they do not have enough data for. If you are new to mixed models, I would strongly recommend starting with `lme4::lmer()` (also using the `lmerTest` library). For example, I might first fit this model as:
```
lmm.doy.lmer <- lmerTest::lmer(doy ~
postC15+postMC15+postPAR15+postElev15+postPR15+postRH15+postTEMP15+postAsp15 +
(1|states),
data = postSM)
```
I think the broader question is, what is the best way to analyze this data? You have non-independent measurements within-state. But there's more! The states themselves are non-independent. Some states are closer than others, and their measurements are therefore going to be more similar. I also imagine that your measurements from within each state relate to different geographic regions - counties maybe? So the same applies. Basically, you have a massive spatial auto-correlation issue (your observations are non-independent in a structured fashion). You'll need to use an analysis that specifically accounts for spatial auto-correlation. See [here](https://walker-data.com/census-r/modeling-us-census-data.html) for a decent primer.
| null | CC BY-SA 4.0 | null | 2023-03-19T15:19:24.490 | 2023-03-19T15:19:24.490 | null | null | 288142 | null |
609973 | 2 | null | 609933 | 0 | null | A feature transformation can be "learned" by fitting a neural network. For example, if the original features $x \in \mathbb{R}^d$ are mapped to one of $k$ classes, the mapping may be modeled using a neural network $f$ with one hidden layer (omitting bias parameters):
$$
\begin{equation*}
f(x; W_1, W_2) = \sigma(W_2 \text{relu}(W_1x))
\end{equation*}
\\
\text{where } W_1 \in \mathbb{R}^{h \times d}, W_2 \in \mathbb{R}^{k \times h}.
$$
You may very well set $h$ to be greater than $d$. After fitting $f$ on a bunch of training data, the function
$$
\begin{equation*}
g(x; W_1) = W_1x
\end{equation*}
$$
outputs a vector in $\mathbb{R}^{h}$ which may encode information about how a given $x$ relates to the $k$ classes. This information may be treated as a more abstract or "transferrable" representation of $x$ for future tasks.
[Word2vec](https://en.wikipedia.org/wiki/Word2vec) is a concrete example of this idea: an $x$ is a "one-hot" vector indicating which word in the vocabulary it is, and a neural network is learned to (literally) assign any word to a vector. This vector abstractly represents something about the "meaning" of that word. (In a literal sense, a Word2vec model doesn't necessarily increase the dimensionality of the original one-hot vector. But it illustrates the point that transformations to a vector space with any dimension can be learned.) These vectors can be used to numerically encode wordy features in external datasets, or to initialize the first layer of a language model.
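As a minimal numeric sketch of $f$ and $g$ (weights here are random rather than trained, purely to illustrate the shapes involved; $\sigma$ is taken to be the softmax):

```python
import numpy as np

rng = np.random.default_rng(0)

d, h, k = 4, 8, 3             # input dim, hidden dim, number of classes
W1 = rng.normal(size=(h, d))  # in practice, fitted on training data
W2 = rng.normal(size=(k, h))

def f(x):
    """Full classifier: softmax(W2 relu(W1 x))."""
    z = np.maximum(W1 @ x, 0.0)
    logits = W2 @ z
    e = np.exp(logits - logits.max())
    return e / e.sum()

def g(x):
    """Learned feature transformation: the hidden-layer representation W1 x."""
    return W1 @ x

x = rng.normal(size=d)
print(f(x).shape, g(x).shape)  # (3,) (8,) — g maps R^4 into the larger R^8
```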
| null | CC BY-SA 4.0 | null | 2023-03-19T15:33:22.843 | 2023-03-19T16:01:40.243 | 2023-03-19T16:01:40.243 | 337906 | 337906 | null |
609974 | 2 | null | 609924 | 1 | null | The `emmeans` package does not require equal sample sizes, so to that extent you are worrying too much. It uses the variance-covariance matrix of coefficient estimates to compare different scenarios after the initial model has been fit.
Based on that matrix, the formula for a [weighted sum of correlated variables](https://en.wikipedia.org/wiki/Variance#Weighted_sum_of_variables) provides the variance estimate for any sum/difference of coefficient estimates involved in a comparison of interest. The assumption for significance testing is generally that the coefficient estimates follow a multivariate normal or multivariate t distribution, an assumption that underlies much statistical analysis.
The sample size enters through its effects on those variance/covariance estimates. A coefficient variance estimate typically decreases with the number of observations, so in your case there is probably more precision in estimates for the `DD` than in the `HD` class.
Where you might be making things harder for yourself is in the number of comparisons you perform. Results in `emmeans` are corrected for [multiple comparisons](https://en.wikipedia.org/wiki/Multiple_comparisons_problem). The more comparisons you perform, the harder it is to show that any single comparison is "significant." I'm not sure how many comparisons are provided by your call to the `contrast()` function, but I don't think that you need to perform more than an `HD` versus `DD` comparison within each of the 4 age classes to accomplish what you want, or 4 comparisons total.
Finally, don't jump to an assumption that overlap of confidence intervals necessarily means no significant difference between groups. See [this answer](https://stats.stackexchange.com/a/18259/28500) for a more nuanced explanation.
| null | CC BY-SA 4.0 | null | 2023-03-19T15:55:48.573 | 2023-03-19T15:55:48.573 | null | null | 28500 | null |
609975 | 1 | null | null | 0 | 19 | What steps can I take to correct violations of the linearity assumption in a multiple linear regression analysis conducted in Python on a dataset with 79 independent features, including 22 continuous, 28 nominal, and 29 ordinal variables, where most of the continuous variables have low correlation with the dependent variable?
Things I have tried: I have attempted applying a non-linear transformation to the dependent variable y, and also adding 2nd-degree polynomial terms for the offending variables (continuous variables with low linearity, i.e. a correlation coefficient below 0.5). However, both attempts separately have failed to produce a linear relationship between the independent variables and the dependent variable, which I checked using scatter plots and numerically using Pearson's correlation coefficient.
I also checked the residuals vs fitted plot on the validation set, but the graph contained many outlier points.
I've been stuck on this problem for quite some time now and genuinely don't know how to tackle it as a self-learner; I don't have any mentors or colleagues to turn to, so if anyone has any ideas on how to address this problem, I would be grateful for your input.
| Challenges with achieving linearity in multiple linear regression analysis | CC BY-SA 4.0 | null | 2023-03-19T15:58:23.547 | 2023-03-24T00:47:56.290 | 2023-03-24T00:47:56.290 | 11887 | 266930 | [
"multiple-regression",
"linear",
"assumptions",
"continuous-data",
"linearity"
] |
609976 | 2 | null | 609924 | 1 | null | >
Is there a way to deal with the rather large difference in group size? Statistically the smaller group size caused a larger confidence interval which made the effect less significant.
You have less information (= fewer samples) about the HD group and about the distribution of age classes within this group. So you should expect to estimate the HD parameters with less accuracy (= wider confidence intervals). To get more precise estimates, you would have to collect more data.
That said, you may be over-adjusting by computing more comparisons than you are actually interested in.
For illustration purposes, I've generated some fake data according to your specifications. There are two methods, DD and HD, and four age classes 0, 1, 2 and 3+.
Here is the output of `contrast` as you use it:
```
emm <- emmeans(fit, ~ `Age class` | `Method`, mode = "prob")
contrast(emm, "pairwise", simple = "each", combine = TRUE, adjust = "mvt")
#> Method Age class contrast estimate SE df t.ratio p.value
#> DD . 0 - 1 -0.03507 0.0170 6 -2.065 0.4246
#> DD . 0 - 2 -0.02747 0.0169 6 -1.630 0.6360
#> DD . 0 - (3+) -0.01870 0.0167 6 -1.119 0.8811
#> DD . 1 - 2 0.00760 0.0175 6 0.435 0.9986
#> DD . 1 - (3+) 0.01636 0.0173 6 0.945 0.9377
#> DD . 2 - (3+) 0.00877 0.0172 6 0.510 0.9968
#> HD . 0 - 1 0.05280 0.0477 6 1.106 0.8861
#> HD . 0 - 2 0.22360 0.0400 6 5.590 0.0106
#> HD . 0 - (3+) 0.30125 0.0351 6 8.581 0.0011
#> HD . 1 - 2 0.17080 0.0387 6 4.409 0.0332
#> HD . 1 - (3+) 0.24845 0.0340 6 7.299 0.0028
#> HD . 2 - (3+) 0.07765 0.0283 6 2.743 0.2039
#> . 0 DD - HD -0.16472 0.0291 6 -5.666 0.0101
#> . 1 DD - HD -0.07686 0.0285 6 -2.697 0.2156
#> . 2 DD - HD 0.08635 0.0235 6 3.677 0.0720
#> . 3+ DD - HD 0.15523 0.0193 6 8.054 0.0017
#>
#> P value adjustment: mvt method for 16 tests
```
Why adjust for all possible 16 tests? If you are not interested in the contrast between 1y age class in the HD group and the 2y age class in the DD group and the rest of the comparisons across both ages and methods, you are over-adjusting.
Here is how to compare the two methods at each age class.
```
# NB: Switch the order in the `condition on` statement.
emm <- emmeans(fit, ~ `Method` | `Age class`, mode = "prob")
contrast(emm, "pairwise", adjust = "mvt")
#> Age class = 0:
#> contrast estimate SE df t.ratio p.value
#> DD - HD -0.1647 0.0291 6 -5.666 0.0013
#>
#> Age class = 1:
#> contrast estimate SE df t.ratio p.value
#> DD - HD -0.0769 0.0285 6 -2.697 0.0357
#>
#> Age class = 2:
#> contrast estimate SE df t.ratio p.value
#> DD - HD 0.0863 0.0235 6 3.677 0.0104
#>
#> Age class = 3+:
#> contrast estimate SE df t.ratio p.value
#> DD - HD 0.1552 0.0193 6 8.054 0.0002
```
Note that now we get no statement about number of adjustments. Since for each age class emmeans calculates a single pairwise comparison, it applies no adjustment to the p-values.
The question of if and how to adjust for multiple comparisons of interest is trickier than the fact that we shouldn't calculate and adjust for comparisons of no interest. Since you are studying the difference in age distribution between the two methods (rather than, say, focusing on babies and toddlers separately), I think you should do multiple comparisons adjustment.
So finally here is how to compare the two methods at each age class and adjust for making four comparisons across the four age classes.
```
emm %>%
contrast("pairwise", by = "Age class") %>%
summary(by = NULL, adjust = "mvt")
#> contrast Age class estimate SE df t.ratio p.value
#> DD - HD 0 -0.1647 0.0291 6 -5.666 0.0038
#> DD - HD 1 -0.0769 0.0285 6 -2.697 0.1054
#> DD - HD 2 0.0863 0.0235 6 3.677 0.0324
#> DD - HD 3+ 0.1552 0.0193 6 8.054 0.0006
#>
#> P value adjustment: mvt method for 4 tests
```
| null | CC BY-SA 4.0 | null | 2023-03-19T15:59:04.760 | 2023-03-19T18:14:11.933 | 2023-03-19T18:14:11.933 | 237901 | 237901 | null |
609977 | 1 | null | null | 0 | 23 | While training a linear regression model, it is advised to standardize the input features using the mean and std deviation of the train input features.
```
x = (x - x_mean)/ x_std_dev
```
However, when we want to run the model on some examples, do we use the same mean and std deviation used during training, or the mean and std deviation of the batch of test examples we feed the model? Which one is better and why?
```
x = (x - x_mean_train) / x_std_dev_train
```
vs
```
x = (x - x_mean_test_batch) / x_std_dev_batch
```
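A small numeric sketch of the two options with made-up numbers (it does not by itself settle which is preferable):

```python
import statistics

# Hypothetical feature values
x_train = [2.0, 4.0, 6.0, 8.0]
x_test = [3.0, 9.0]

mu_tr, sd_tr = statistics.mean(x_train), statistics.pstdev(x_train)
mu_te, sd_te = statistics.mean(x_test), statistics.pstdev(x_test)

# Option 1: standardize test data with the training statistics
z_train_stats = [(v - mu_tr) / sd_tr for v in x_test]

# Option 2: standardize test data with its own batch statistics
z_batch_stats = [(v - mu_te) / sd_te for v in x_test]

print(z_train_stats)
print(z_batch_stats)
```

Note the two options generally produce different standardized values for the same test points, which is exactly the choice at issue.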
| What mean and variance should we use for inferring results using linear regression? | CC BY-SA 4.0 | null | 2023-03-19T16:07:04.640 | 2023-03-19T16:07:04.640 | null | null | 383597 | [
"regression",
"machine-learning",
"mathematical-statistics"
] |
609979 | 1 | null | null | 0 | 26 | My question is specific and concerns transport modeling. I am trying to find a way to estimate price elasticities of travel demand from a 4-step transport model. With this model it is possible to evaluate different transport-related policies, and it will give, among other things, origin-destination matrices as a result. I plan to model road tolls of different magnitudes and see how the number of car trips is reduced -> using these data I can run the following regression equation:
$Y = a + bX$, where $X$ is the road toll and $Y$ is the number of trips, from which I can get the elasticity of travel demand by car with respect to road tolls, which is $bX/Y$.
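A toy check with made-up coefficients, using the standard point-elasticity formula $(\mathrm{d}Y/\mathrm{d}X)\cdot(X/Y)$:

```python
# Hypothetical fitted demand line: trips = a + b * toll
a, b = 1000.0, -50.0

X = 4.0            # toll level at which to evaluate the elasticity
Y = a + b * X      # predicted number of trips at that toll (= 800)

elasticity = b * X / Y
print(elasticity)  # -0.25: a 1% toll increase -> roughly 0.25% fewer trips
```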
My question is: is it a correct way to do so? In the literature I could only find how to estimate such elasticities from raw travel data (e.g. data from transport operators on number of trips and prices). In this case there are two main approaches:
- linear regressions of some form
- aggregate logit models based on discrete choice theory
Although a linear regression is a reasonable method for raw travel data, I wonder if I can apply it to the results obtained from a travel demand model, given that the model itself can be based on discrete choice theory (mode choice module).
Would it then make sense to derive the elasticities directly from the utility functions used in the model? Or given that the results of the model are obtained after the network assignment, there is some value-added of running the model and using its OD outputs?
In short, my question is what would be the best way to derive demand elasticities from modeled data?
Thank you in advance! I would greatly appreciate your hints!
P.S. I am not sure this is the right Stack Exchange site to ask this question, but I didn't find a better one. Sorry if it is not.
| Estimating price elasticities of demand from four step transport model | CC BY-SA 4.0 | null | 2023-03-19T16:18:16.573 | 2023-03-20T11:59:39.920 | 2023-03-20T11:59:39.920 | 383602 | 383602 | [
"regression",
"estimation",
"modeling",
"elasticity",
"transportation"
] |
609980 | 1 | 609981 | null | 1 | 57 | When looking at implementations of VAE's online, specifically the KL divergence loss, the formula used is:
$$ KL\hspace{1mm} Loss = -\frac{1}{2}(1+\log{\sigma^2}-\mu^2-\sigma^2) $$
or some variation of it. In the code accompanying the paper I am currently reading, the KL loss is calculated using entropy and cross entropy under the equality:
$$ KL\hspace{1mm}Loss = Cross\hspace{1mm}Entropy - Entropy $$
with:
$$ Cross\hspace{1mm}Entropy = \frac{1}{2}(\mu^2 + \sigma^2) + \log{\sqrt{2\pi}} $$
and
$$ Entropy = \frac{1}{2}(\log{\sigma^2}+\log{2\pi e}) $$
This confuses me greatly, as subtracting the entropy from the cross entropy does not yield the conventional formula for the KL Loss mentioned above. Where do these entropy and cross entropy formulas come from and why do they not satisfy the KL divergence equality? Are there assumptions being made here that I am unaware of?
| Calculating KL divergence with entropy and cross entropy for VAEs | CC BY-SA 4.0 | null | 2023-03-19T16:18:19.373 | 2023-03-19T17:04:00.013 | 2023-03-19T17:04:00.013 | 60613 | 382888 | [
"machine-learning",
"autoencoders",
"kullback-leibler",
"variational-bayes",
"cross-entropy"
] |
609981 | 2 | null | 609980 | 2 | null | The Kullback–Leibler divergence is also called relative entropy.
It is easy to see that:
\begin{align}
D_{KL}(P\|Q)
&=\int p(x)\log\left(\frac{p(x)}{q(x)}\right)dx\\
&=\underbrace{\int p(x)\log\left(\frac{1}{q(x)}\right)dx}_{H(P,Q)} - \underbrace{\int p(x)\log\left(\frac{1}{p(x)}\right)dx}_{H(P)},
\end{align}
respectively, the cross-entropy between $P,Q$ and the entropy of $P$.
---
Going back to your example, we have:
\begin{cases}
H(P,Q) = \frac{1}{2}(\mu^2 + \sigma^2) + \log{\sqrt{2\pi}}\\
H(P)= \frac{1}{2}(\log{\sigma^2}+\log{2\pi e})
\end{cases}
\begin{align}
D_{KL}(P\|Q)
&=H(P,Q) - H(P)\\
&=\frac{1}{2}(\mu^2 + \sigma^2) + \log{\sqrt{2\pi}}-\frac{1}{2}(\log{\sigma^2}+\log{2\pi e})\\
&=\frac{1}{2}(\mu^2 + \sigma^2 + 2\log{\sqrt{2\pi}} - \log{\sigma^2}-\log{2\pi e})\\
&=\frac{1}{2}\left(- \log{\sigma^2} + \mu^2 + \sigma^2 + \log\left(\frac{
2\pi}{2\pi e}\right)\right)\\
&=\frac{1}{2}\left(- \log{\sigma^2} + \mu^2 + \sigma^2 - 1\right)\\
&=\frac{-1}{2}\left(1 + \log{\sigma^2} - \mu^2 - \sigma^2\right)
\end{align}
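A quick numerical check of this identity (a sketch; the parameter values are arbitrary, with $P = N(\mu, \sigma^2)$ and $Q$ the standard normal, as in the VAE setting):

```python
import numpy as np

mu, sigma = 0.7, 1.3   # arbitrary example values for P = N(mu, sigma^2)

# Cross-entropy H(P, Q) and entropy H(P), as given above
cross_entropy = 0.5 * (mu**2 + sigma**2) + np.log(np.sqrt(2 * np.pi))
entropy = 0.5 * (np.log(sigma**2) + np.log(2 * np.pi * np.e))

# Conventional closed-form VAE KL term
kl_closed_form = -0.5 * (1 + np.log(sigma**2) - mu**2 - sigma**2)

# The two agree, confirming the derivation
assert np.isclose(cross_entropy - entropy, kl_closed_form)
```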
| null | CC BY-SA 4.0 | null | 2023-03-19T16:39:52.653 | 2023-03-19T17:03:29.803 | 2023-03-19T17:03:29.803 | 60613 | 60613 | null |
609982 | 2 | null | 609933 | 0 | null | There are lots of different ways. A classic example is polynomial expansions - you take all powers of your input variables ([https://en.wikipedia.org/wiki/Stone%E2%80%93Weierstrass_theorem](https://en.wikipedia.org/wiki/Stone%E2%80%93Weierstrass_theorem) from 1885)
Fourier series ([https://en.wikipedia.org/wiki/Fourier_series](https://en.wikipedia.org/wiki/Fourier_series))
These are roughly equivalent to the [https://en.wikipedia.org/wiki/Universal_approximation_theorem](https://en.wikipedia.org/wiki/Universal_approximation_theorem) of neural networks.
Note that being able to represent a function doesn't mean you have "learnt" the function.
eg the parity function: 1 if the (integer) number is odd, 0 if it is even.
This can be represented on any fixed interval, eg [0,100], by a suitable polynomial. However, there is no generalisation: eg train on [0,50], test on [51,100]. Similarly with tree-based methods (trees/random forest/xgboost etc).
| null | CC BY-SA 4.0 | null | 2023-03-19T16:45:18.083 | 2023-03-19T16:45:18.083 | null | null | 27556 | null |
609983 | 2 | null | 609139 | 0 | null | With your data, the `isSingular` warning simply means that the data don't provide enough information to distinguish the variance among random intercepts from a value of 0. For evaluating whether there is some systematic difference between the chosen sites (`Type=1`) and the nearby "random" sites (`Type=0`), related to site characteristics, that probably doesn't matter a lot. You would get the same point estimates for coefficients whether or not you include the random intercepts.
A potentially bigger problem is in your modeling of the continuous predictors. For example, your model assumes a linear association between the log-odds of being `Type=1` and the percentage of `Canopy_Cover`. Almost all of your `Canopy_Cover` values are close to 0, however, with less than 5% of observations at over 50% cover. Does the implicit simple linear association make sense in that context? Similarly, the `X100cm_Cover` values include a relatively small number of very large values.
A flexible fit of the continuous predictors would be highly preferable. The problem you face is that you only have 47 cases in the minority class, so you are already in risk of overfitting with your model that estimates 6 coefficients beyond the intercept. To avoid overfitting in a binary regression, you typically need about 15 members of the minority class per estimated coefficient.
Two alternative solutions come to mind. One is simply to evaluate the within-site paired differences between the `Type=1` and the `Type=0` locations for each characteristic, probably with a non-parametric approach. Another is to use a highly flexible model, with a slow learning rate to minimize overfitting, to identify the site characteristics that are most strongly associated with differences between `Type=1` and `Type=0`. That could be, for example, a generalized additive model or a boosted tree.
| null | CC BY-SA 4.0 | null | 2023-03-19T16:46:40.700 | 2023-03-19T16:46:40.700 | null | null | 28500 | null |
609984 | 2 | null | 609926 | 1 | null | One thing I'd like to add to the answer by @SextusEmpiricus is that a useful statistic to report when using an Wilcoxon-Mann-Whitney test is one that reports the probability that an observation in one group being greater than an observation in the other group.
There are several variants of this statistic. Two that report the probability directly are Vargha and Delaney's A and Grissom and Kim's probability of superiority.
These can be transformed to a -1 to 1 scale, like an r value. Examples are the Glass rank biserial coefficient and Cliff’s delta.
I don't know if any of these are available in Python, but they are relatively easy to compute.
My suspicion is that you will find these effect size statistics suggest there is just a small difference in the stochastic dominance between these two distributions.
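For example, here is a minimal NumPy sketch of two of these statistics (the function names are my own; this is an illustration, not a reference implementation):

```python
import numpy as np

def vargha_delaney_A(x, y):
    """Estimate P(X > Y) + 0.5 * P(X == Y) from two samples."""
    x, y = np.asarray(x), np.asarray(y)
    greater = (x[:, None] > y[None, :]).sum()
    ties = (x[:, None] == y[None, :]).sum()
    return (greater + 0.5 * ties) / (len(x) * len(y))

def cliffs_delta(x, y):
    """Rescale A to the [-1, 1] range: delta = 2A - 1."""
    return 2 * vargha_delaney_A(x, y) - 1

# Identical samples: no stochastic dominance, so A = 0.5 and delta = 0
x = [1, 2, 3, 4]
y = [1, 2, 3, 4]
assert vargha_delaney_A(x, y) == 0.5
assert cliffs_delta(x, y) == 0.0
```

A value of A near 0.5 (delta near 0) is what I would expect to see here if the difference in stochastic dominance is small.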
| null | CC BY-SA 4.0 | null | 2023-03-19T16:51:13.993 | 2023-03-19T16:51:13.993 | null | null | 166526 | null |
609986 | 1 | null | null | 0 | 18 | I have calculated around 300 Bayesian linear regression models using rstanarm in R; I would really not like to rerun them because of run-time issues. Now I wonder whether I can/should adjust the confidence intervals for multiple-comparison issues. Is there a way to do so? Shall I use a different HDI width - so that when it is now 95%, I can use a 99.9983% HDI?
| How can I easily adapt the highest density interval for multiple-comparisons? | CC BY-SA 4.0 | null | 2023-03-19T17:20:53.083 | 2023-03-19T17:20:53.083 | null | null | 376793 | [
"bayesian",
"statistical-significance",
"multiple-comparisons"
] |
609987 | 1 | null | null | 0 | 41 | I have my dataset with different mutations as unit of analysis. These mutations belong to 5 different classes. Also, I have collected, 9 features about these mutations. In other words I have 12 columns:
- First column: mutation ID
- Second column: Mutation class
- Third to eleventh columns: features about these mutations (at the individual level)
- Twelfth column: drug resistant/susceptible, a binary column.
In addition, I have done a survey of experts, asking them the probability of resistance given each mutation class.
Now, I want to model drug resistance as a function of these 9 features using a Bayesian hierarchical model. I want to take into account those mutation classes and the probabilities from the experts as prior information. I don't know whether a Bayesian hierarchical model is the right approach. If it is, how can I parameterize my model? I want to write this up in my methods section.
Thank you!
| Multilevel (Hierarchical) Bayesian Model in R | CC BY-SA 4.0 | null | 2023-03-19T17:35:37.803 | 2023-03-19T17:35:37.803 | null | null | 376043 | [
"r",
"probability",
"bayesian",
"hierarchical-bayesian",
"naive-bayes"
] |
609988 | 2 | null | 609948 | 4 | null | Don't waste much time thinking about the "main effect" coefficients when there's an interaction involved. The "main effect" coefficients only hold when the interacting continuous predictor has a value of 0. Furthermore, recall that the product of a negative "main" effect and a negative "interaction" coefficient is positive.
[This page](https://stats.stackexchange.com/q/80050/28500) discusses a similar situation, with a positive interaction instead.
Focus instead on model predictions (with associated error estimates) for specific scenarios of interest. Whether or not you first center the predictors, you will get the correct results for such predictions from the model.
| null | CC BY-SA 4.0 | null | 2023-03-19T18:18:28.733 | 2023-03-19T18:18:28.733 | null | null | 28500 | null |
609990 | 1 | null | null | 0 | 16 | Throughout this post:
- $t$ is an index for the $t$-th outcome variable, and there are a total of $T$ outcomes
- $n$ is an index for the $n$-th individual, and there are a total of $N$ individuals,
- $C_{t,k}$ denotes the $k$-th cutpoint parameter for outcome variable $t$, which has $K_{t}$ categories.
- We are including $- \infty$ and $+ \infty$ in the cutpoint parameter set, therefore we have a total of $K_{t} + 1$ cutpoints for each outcome variable $t$.
- Column vectors are denoted by underlining (e.g. $\underline{X}$) and row vectors by $\underline{X}'$.
- Matrices are denoted by bold font (e.g. $\mathbf{\Omega}$).
- $\Phi$ denotes the CDF of a standard normal and $\phi$ the PDF
For each individual $n$, we have an observed ordinal outcome row vector of length $T$ such that $\underline{Y}_{n}'$ = $\left[Y_{n,1} ~ \ldots ~ Y_{n,T} \right]$ so there are a total of $NT$ observed data points.
For binary outcomes there are only two categories - $K_t$ = 2 - $Y_{n,t}$ is either 1 for a negative result or 2 for a positive result - and each cutpoint parameter is either $C_{t,0}$ = - $\infty$, $C_{t,1}$ = 0, or $C_{t,2}$ = $+ \infty$.
$\mathbf{X_n}'$ is the design matrix and $\underline{\beta^{*}}$ is the full covariate vector of length $Tp$ - which is formed by stacking all of the individual covariate vectors $\underline{\beta}_{t}$ for each of the $T$ outcomes on top of one another.
Latent data
We augment each observed data vector $\underline{Y}_{n}'$ with a continuous latent vector $\underline{Z}_{n}'$ = $\left[Z_{n,1} ~ \ldots ~ Z_{n,T} \right]$ which has a truncated multivariate normal distribution with mean vector $\mathbf{X_n}' \underline{\beta^{*}}$ and a $T$-dimensional variance-covariance matrix $\mathbf{\Omega}$ equal to a correlation matrix (for identifiability), truncated in the interval $ I\left[\underline{C}_{y_{n} - 1}, \underline{C}_{y_{n}} \right] $:
\begin{equation}
\begin{aligned}
\underline{Z}_{n}' \sim \text{multi_normal}\left(\mathbf{X_n}' \underline{\beta^{*}}, \mathbf{\Omega} \right)
\cdot
I\left[\underline{C}_{y_{n} - 1}, \underline{C}_{y_{n}} \right]
\end{aligned}
\label{augmented_data}
\end{equation}
The relationship between the observed ordinal data vector $\underline{Y}_{n}'$ and the augmented latent data vector $\underline{Z}_{n}'$ is:
${Y}_{n,t} = k$ $\implies$ ${C}_{y_{n} - 1, t} < {Z}_{n,t}' < {C}_{y_{n}, t}$
We can sample the latent data from each univariate conditional distribution (note that $-t$ means "all but the $t$-th"):
\begin{equation}
\begin{aligned}
{Z}_{n,t} | \underline{Z}_{n,-t} &\sim \text{univariate_normal} \left(\mu_{n,t}, \sigma_{n,t} \right)
\cdot
I\left[\underline{C}_{y_{n,t} - 1}, \underline{C}_{y_{n,t}} \right]
\end{aligned}
\label{MVN_full_conditionals}
\end{equation}
Where:
$
\mu_{n,t} =
\mathbf{X_n}' \underline\beta +
\underline\Omega_{t, -t} \cdot
\boldsymbol\Omega^{-1}_{-t, -t} \cdot
\left( \underline{Z}_{-t} - X_{n}' \underline\beta_{-t} \right)
$, and
$
\sigma^{2}_{n,t} =
\Omega_{t, t} -
\underline\Omega_{t, -t} \cdot
\boldsymbol\Omega^{-1}_{-t, -t} \cdot
\underline\Omega'_{t, -t}
$
Log-likelihood and conditional log-posterior distribution of log-differences
The **log-likelihood** for each individual $n$ is expressed as:
\begin{equation}
\begin{aligned}
LL\left( \text{Parameters} | \underline{Y}_{n}' \right) &=
\sum_{t=1}^{T}
\text{log}\left[
\Phi\left( \frac{C_{y_n} - \mu_{n,t}}{\sigma_{n,t}} \right) -
\Phi\left( \frac{C_{y_n - 1} - \mu_{n,t}}{\sigma_{n,t}} \right)
\right]
\end{aligned}
\label{log_likelihood}
\end{equation}
We can write the full conditional log-density of the cutpoint parameters as (assuming uniform prior for now to simplify the calculations) :
\begin{equation}
\begin{aligned}
\log\left(\pi \left( \underline{C_t} | \underline{\beta}, \mathbf{\Omega}, \underline{Z}_{-t} \right)\right) & \propto
\sum_{n | Y_n = 1}
\text{log}\left[
\Phi\left( \frac{C_{t,1} - \mu_{n,t}}{\sigma_{n,t}} \right)
\right]
\\ &+
\sum_{n | Y_n = 2}
\text{log}\left[
\Phi\left( \frac{C_{t,2} - \mu_{n,t}}{\sigma_{n,t}} \right) -
\Phi\left( \frac{C_{t,1} - \mu_{n,t}}{\sigma_{n,t}} \right)
\right]
\\ &\vdots
\\ &+
\sum_{n | Y_n = K}
\text{log}\left[
1 -
\Phi\left( \frac{C_{t,K-1} - \mu_{n,t}}{\sigma_{n,t}} \right)
\right]
\end{aligned}
\end{equation}
In order to use a gradient-based algorithm such as MALA we need to re-parameterise the cutpoints so that they are unconstrained. We can do this by re-parameterising them as log-differences:
$
\delta_{t,k} = \log(C_{t,k} - C_{t,k-1}) ~ \text{(for k = 2,... , K-1, so K-2 log-differences for each outcome)}
$
i.e. for k = 2,... , K-1:
$
C_{t,k} = C_{t,1} + \sum_{k'=2}^{k} \exp(\delta_{t,k'}) = \sum_{k'=2}^{k} \exp(\delta_{t,k'}) ~ \text{(as fixing first cutpoint to 0)}
$
And the log absolute determinant of the Jacobian adjustment to account for the transformation is:
$
\log(|J|) = \sum_{k=2}^{K-1} \delta_{t,k}
$
Therefore, the full conditional posterior density of the log-differences is given as the sum of the log-posterior of the cutpoints and the log absolute determinant of the Jacobian:
$
\log\left(\pi \left( \underline{\delta_t} | \underline{\beta}, \mathbf{\Omega}, \underline{Z}_{-t} \right)\right) =
\log\left(\pi \left( \underline{C_t} | \underline{\beta}, \mathbf{\Omega}, \underline{Z}_{-t} \right)\right) +
\sum_{k=2}^{K-1} \delta_{t,k}
$
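As a sanity check of this mapping (a sketch, assuming the first cutpoint is fixed to 0 as above):

```python
import numpy as np

def cutpoints_from_log_diffs(delta):
    """Map unconstrained log-differences delta_{t,2..K-1} to strictly
    increasing cutpoints C_{t,1} < ... < C_{t,K-1}, with C_{t,1} fixed to 0."""
    C1 = 0.0
    return np.concatenate(([C1], C1 + np.cumsum(np.exp(delta))))

def log_abs_det_jacobian(delta):
    """log|J| of the transformation is just the sum of the log-differences."""
    return np.sum(delta)

delta = np.array([-0.5, 0.2, 1.0])      # K - 2 = 3 log-differences, so K = 5
C = cutpoints_from_log_diffs(delta)
assert np.all(np.diff(C) > 0)           # cutpoints are strictly increasing
assert np.isclose(log_abs_det_jacobian(delta), delta.sum())
```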
Attempt at calculating gradient vector (first-order partial derivatives)
I want to calculate the partial derivatives of the log-posterior of the log-differences w.r.t each log difference, i.e.:
$
\frac{ \partial \log\left(\pi \left( \underline{\delta_t} | \underline{\beta}, \mathbf{\Omega}, \underline{Z}_{-t} \right)\right) }{ \partial {\delta_{t, k}} } =
\frac{ \partial \log\left(\pi \left( \underline{C_t} | \underline{\beta}, \mathbf{\Omega}, \underline{Z}_{-t} \right)\right) }{ \partial {\delta_{t, k}} } +
\frac{ \partial }{ \partial {\delta_{t, k}} } \sum_{k=2}^{K-1} \delta_{t,k}
$
Note that the second term of the partial derivatives (i.e. the derivatives of the Jacobian adjustment) will vanish.
The last log-difference ($\delta_{t, K-1}$) is only included in the expression for $C_{t, K-1}$ so the partial derivative w.r.t this parameter is given by:
\begin{equation}
\begin{aligned}
\frac{ \partial \log\left(\pi \left( \underline{\delta_t} | \underline{\beta}, \mathbf{\Omega}, \underline{Z}_{-t} \right)\right) }{ \partial {\delta_{t, K-1}} }
\\
%%%%%%%% term 1
&= \sum_{n | Y_n = K} {\left[- \phi\left( \cfrac{C_{t,K-1} - \mu_{n,t}}{\sigma_{n,t}} \right) \cdot \text{exp}(\delta_{t, K-1}) / \sigma_{n,t} \right]} \Big/ {\left[ 1 - \Phi\left( \cfrac{C_{t,K-1} - \mu_{n,t}}{\sigma_{n,t}} \right)\right]}
\\
%%%%%%%% term 2
&+ \sum_{n | Y_n = K-1} \cfrac{\left[ \phi\left( \cfrac{C_{t,K-1} - \mu_{n,t}}{\sigma_{n,t}} \right) \cdot \text{exp}(\delta_{t, K-1}) / \sigma_{n,t} -
\Phi\left( \cfrac{C_{t,K-2} - \mu_{n,t}}{\sigma_{n,t}} \right) \right]}{\left[
\Phi\left( \cfrac{C_{t,K-1} - \mu_{n,t}}{\sigma_{n,t}} \right) -
\Phi\left( \cfrac{C_{t,K-2} - \mu_{n,t}}{\sigma_{n,t}} \right)
\right]}
\end{aligned}
\end{equation}
The second-to-last log-difference ($\delta_{t, K-2}$) is included in the expression for both $C_{t, K-1}$ and $C_{t, K-2}$, so the partial derivative w.r.t this parameter is given by:
\begin{equation}
\begin{aligned}
\frac{ \partial \log\left(\pi \left( \underline{\delta_t} | \underline{\beta}, \mathbf{\Omega}, \underline{Z}_{-t} \right)\right) }{ \partial {\delta_{t, K-2}} }
\\
\\
%%%%% term 1
&= \sum_{n | Y_n = K} {\left[- \phi\left( \cfrac{C_{t,K-1} - \mu_{n,t}}{\sigma_{n,t}} \right) \cdot \text{exp}(\delta_{t, K-2}) / \sigma_{n,t} \right]} \Big/ {\left[ 1 - \Phi\left( \cfrac{C_{t,K-1} - \mu_{n,t}}{\sigma_{n,t}} \right)\right]}
\\
%%%%%%% term 2
&+ \sum_{n | Y_n = K-1} \cfrac{\left[ \phi\left( \cfrac{C_{t,K-1} - \mu_{n,t}}{\sigma_{n,t}} \right) \cdot \text{exp}(\delta_{t, K-2}) / \sigma_{n,t} -
\phi\left( \cfrac{C_{t,K-2} - \mu_{n,t}}{\sigma_{n,t}} \right) \cdot \text{exp}(\delta_{t, K-2}) / \sigma_{n,t} \right]}{\left[
\Phi\left( \cfrac{C_{t,K-1} - \mu_{n,t}}{\sigma_{n,t}} \right) -
\Phi\left( \cfrac{C_{t,K-2} - \mu_{n,t}}{\sigma_{n,t}} \right)
\right]}
\\
%%%%%%% term 3
&+ \sum_{n | Y_n = K-2} \cfrac{\left[ \phi\left( \cfrac{C_{t,K-2} - \mu_{n,t}}{\sigma_{n,t}} \right) \cdot \text{exp}(\delta_{t, K-2}) / \sigma_{n,t} -
\Phi\left( \cfrac{C_{t,K-3} - \mu_{n,t}}{\sigma_{n,t}} \right) \right]}{\left[
\Phi\left( \cfrac{C_{t,K-2} - \mu_{n,t}}{\sigma_{n,t}} \right) -
\Phi\left( \cfrac{C_{t,K-3} - \mu_{n,t}}{\sigma_{n,t}} \right)
\right]}
\\
\end{aligned}
\end{equation}
And we carry on like this for the remaining derivatives, so the partial derivative w.r.t ($\delta_{t, K-3}$) will have 4 sums, ($\delta_{t, K-4}$) will have 5 sums, etc.
However, the results I am getting do not match results obtained via automatic differentiation (done in C++ using the Stan math library). The reason I am calculating derivatives manually is that I am looking at implementing RHMC, which requires third-order derivatives, and from what I have heard autodiff tends to be less efficient at higher-order derivatives.
Is there an error somewhere in the derivation?
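For reference, this is the kind of generic central-finite-difference check I am comparing my manual gradients against (a sketch; the quadratic test function here is just for illustration):

```python
import numpy as np

def finite_diff_grad(f, x, eps=1e-6):
    """Central-difference approximation to the gradient of f at x."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return grad

# Example with a known gradient: f(x) = sum(x^2), so grad f = 2x
f = lambda v: np.sum(np.asarray(v) ** 2)
x0 = np.array([1.0, -2.0, 0.5])
assert np.allclose(finite_diff_grad(f, x0), 2 * x0, atol=1e-4)
```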
| Derivation of first-order derivatives (gradient vector) for multivariate ordinal regression cutpoints log-posterior | CC BY-SA 4.0 | null | 2023-03-19T18:29:36.533 | 2023-03-19T21:16:37.690 | 2023-03-19T21:16:37.690 | 377094 | 377094 | [
"regression",
"bayesian",
"multivariate-analysis",
"ordinal-data"
] |
609991 | 2 | null | 609970 | 4 | null | no, they refer to two different things:
- prior over parameters (that's your belief of how the parameters should be distributed)
- assumption on the "noise" of the measurements (given an observation, what's the distribution that you think describes the noise)
MAE corresponds to a Laplacian likelihood and an L1 penalty to a Laplacian prior; MSE corresponds to a Gaussian likelihood and an L2 penalty to a Gaussian prior. Both connections are derived using the maximum log-likelihood (or, with priors, MAP) principle.
| null | CC BY-SA 4.0 | null | 2023-03-19T18:30:02.543 | 2023-03-20T10:56:00.403 | 2023-03-20T10:56:00.403 | 346940 | 346940 | null |
609993 | 1 | null | null | 0 | 48 | I am attempting, via linear regression, to model a dataset.I've tried various transformations on the response/ and predictors, as well as WLS but the assumptions are not met. I'm looking for the possible existence of significance in variables, so from my understanding the normality of the residuals is needed to perform the requisite significance tests. There doesn't appear to be any multicollinearity issues.
The response is a continuous variable that measure the total amount element T (arbitrary label) in soil, and the predictors are the plots of land where the soil is, weather data such as precipitation in inches, temperature etc, and the amount of treatment injected into the soil. The data is collected over ~50 months, each row of the dataset is the average monthly value of the predictors, and there are ~10 plots of land (so ~50 rows per plot of land). Please find below, images of the qqplot and the residual vs. fitted plot.
The clustering of data in the residuals vs. fitted plot suggest to me that maybe another predictor is uncaptured driving the deviation from the assumptions but I am not 100% sure. Any and all suggestions are appreciated.
[](https://i.stack.imgur.com/zQtdC.png)
[](https://i.stack.imgur.com/UGnUH.png)
PS: There are outliers in this dataset, so that may be an issue but there is no reason to suspect that they are a result of data misentry. Hope this explanation is clear, will answer any questions in the comments. Thank you all in advance.
| Can't fix non-normality and heteroskedasticity | CC BY-SA 4.0 | null | 2023-03-19T18:49:07.583 | 2023-03-19T18:49:07.583 | null | null | 383608 | [
"regression",
"multiple-regression",
"heteroscedasticity",
"normality-assumption"
] |
609994 | 1 | null | null | 0 | 18 | My study aims to determine the causal relationship between a subset of predictors (consisting of 13 variables) and one binary response variable. We have unbalanced data because the number of samples at each response level is unequal (n1=636, n2=100). I search extensively on Google and also on stat.overflows, but the advice is too complicated for my understanding.
- It appears that weighted logit regression can be used. As the objective of this study is causal relationship and not prediction, I was curious how the class_weight should be determined. Do I need to defined the class_weight in training data set? Any references that include R code?
- Should I use one of the following strategies instead? 1. Bayesian logistic regression versus 2. Penalized logistic regression
- As I have 13 predictors, do you think the sample size would be enough to avoid overfitting and to have reliable estimation?
| need some advice regarding finding appropriate statistical method. (weighted, Bayesian, penalized) logistic regression | CC BY-SA 4.0 | null | 2023-03-19T18:52:26.243 | 2023-03-19T22:09:54.977 | 2023-03-19T22:09:54.977 | 362671 | 383609 | [
"regression"
] |
609995 | 2 | null | 609495 | 1 | null | As a counterexample, just take:
$$k=1,\quad X \text{ uniform on } [0,1],\quad \varepsilon \text{ uniform on } [0,2],\quad Y=X+\varepsilon$$
$$E[X\mid Y=y]=\begin{cases}
\frac{y}{2} & if\quad 0<y<1 \\
\frac{1}{2} & if\quad 1<y<2 \\
\frac{y-1}{2} & if\quad 2<y<3 \\
\end{cases}$$
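(A quick Monte Carlo check of the flat middle piece, as a sketch:)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
X = rng.uniform(0, 1, n)
Y = X + rng.uniform(0, 2, n)   # Y = X + eps with eps uniform on [0, 2]

# For 1 < y < 2 the conditional mean E[X | Y = y] is constant at 1/2
mask = np.abs(Y - 1.5) < 0.01
assert abs(X[mask].mean() - 0.5) < 0.02
```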
| null | CC BY-SA 4.0 | null | 2023-03-19T19:21:22.410 | 2023-03-20T07:04:26.343 | 2023-03-20T07:04:26.343 | 376154 | 376154 | null |
609997 | 2 | null | 609963 | 10 | null | Suppose you have two populations: A and B.
You draw samples from A: $a_1,a_2,...,a_n$
You draw samples from B: $b_1,b_2,...,b_n$
The actual values of $a$'s and $b$'s are just the numbers $0$'s and $1$'s, which represent success/failures.
Now let $\alpha$ and $\beta$ denote the averages of these samples, i.e.
$$ \alpha = \frac{a_1+a_2+...+a_n}{n} \text{ and }\beta = \frac{b_1 + b_2 + ... + b_n}{n} $$
These quantities $\alpha,\beta$ represent the proportion of successes that you see in each population. For example, suppose your samples for A included a total of 30 times where $a=1$ and 70 times where $a=0$. Then $\alpha = .3$, and so you estimate that the success rate for population A is roughly 30 percent.
You can apply the CLT to $\alpha$ and $\beta$ since they are means of samples from a population. You are correct that the $a$'s and $b$'s are not normally distributed. But the moment you start taking their means over large samples, those means become approximately normally distributed.
As a follow-up question you ask "why is their difference normally distributed?" Their difference is given by $\alpha - \beta$. It is a well-known theorem in probability that if $\alpha$ and $\beta$ are normally distributed, and they are independent, then $\alpha - \beta$ is also normally distributed.
Do you need help determining the $\mu$ and $\sigma$ parameters for their difference?
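In case it helps, here is a sketch of the standard normal-approximation calculation for the difference in proportions (the numbers are made up; note the usual two-proportion z-test pools the proportions under the null, which this sketch does not):

```python
import numpy as np

n = 1000                    # observations per group (made-up example)
alpha, beta = 0.30, 0.25    # observed success proportions in A and B

# Normal approximation: alpha - beta ~ N(p_A - p_B, p_A(1-p_A)/n + p_B(1-p_B)/n),
# with the sample proportions plugged in for p_A and p_B
diff = alpha - beta
se = np.sqrt(alpha * (1 - alpha) / n + beta * (1 - beta) / n)
z = diff / se               # unpooled z-statistic for H0: p_A = p_B

assert 2.4 < z < 2.6        # here z comes out to about 2.5
```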
| null | CC BY-SA 4.0 | null | 2023-03-19T19:38:17.640 | 2023-03-19T19:38:17.640 | null | null | 68480 | null |
609998 | 2 | null | 609970 | 10 | null | They are distinct notions.
Point #2 refers to the usual kind of loss function. The first example almost anyone who studies statistics or data analysis of any kind sees is the square loss in ordinary least squares linear regression: add up the squared residuals.
$$
L(y,\hat y)=\overset{n}{\underset{i=1}{\sum}}\left(
y_i - \hat y_i
\right)^2
$$
Another viable loss function is to add up the absolute residuals.
$$
L(y,\hat y)=\overset{n}{\underset{i=1}{\sum}}\left\vert
y_i - \hat y_i
\right\vert
$$
Each of these can be expressed in terms of $p$-norms.
$$
L(y,\hat y)=\overset{n}{\underset{i=1}{\sum}}\left(
y_i - \hat y_i
\right)^2=\vert\vert y-\hat y\vert\vert_2^2\\
L(y,\hat y)=\overset{n}{\underset{i=1}{\sum}}\left\vert
y_i - \hat y_i
\right\vert = \vert\vert y-\hat y\vert\vert_1
$$
Consequently, it is reasonable to refer to these as $\ell_2$ and $\ell_1$ loss, respectively.
The penalization from the regularization in point #1 is separate. First, there is not necessarily a need to include penalization, so it might be that you just find the regression parameters that lead to predictions $\hat y_i$ giving the minimal $\ell_p$ loss, and this is exactly what ordinary least squares estimation does for $\ell_2$ loss. However, there are various reasons why the best loss value might not be desirable. Regularization is a way of sacrificing the training loss value in order to improve some other facet of performance, a major example being to sacrifice the in-sample fit of a machine learning model to quell overfitting and improve out-of-sample performance.
You can mix-and-match loss functions and regularization to your heart's content. For instance, ridge regression uses square loss with an added penalty term that involves the squared $\ell_2$ norm of the regression parameter vector.
$$
L_{\text{ridge}}=\vert\vert y-\hat y\vert\vert^2_2 + \lambda\vert\vert\hat\beta\vert\vert_2^2
$$
LASSO regression uses square loss with a penalty term that uses the $\ell_1$ norm of the parameter vector.
$$
L_{\text{LASSO}}=\vert\vert y-\hat y\vert\vert^2_2 + \lambda\vert\vert\hat\beta\vert\vert_1
$$
Elastic net uses both types of penalty.
$$
L_{\text{Elastic Net}}=\vert\vert y-\hat y\vert\vert^2_2 + \lambda_1\vert\vert\hat\beta\vert\vert_1+ \lambda_2\vert\vert\hat\beta\vert\vert_2^2
$$
Finally, while I do not see this approach discussed much, you could use $\ell_1$ loss with either penalty or even both.
$$
L_{\text{Other}}=\vert\vert y-\hat y\vert\vert_1 + \lambda_1\vert\vert\hat\beta\vert\vert_1+ \lambda_2\vert\vert\hat\beta\vert\vert_2^2
$$
(The $\lambda$ parameters control how much of a penalty there is for having large coefficients in the parameter vector. It is common to tune these using cross validation.)
Getting to other types of models, nothing stops you from using $\ell_1$ or $\ell_2$ penalization (or both) with, say, logistic regression and its associated "log loss".
$$
L_{\text{Log}}=-\overset{n}{\underset{i=1}{\sum}}\left(
y_i\log(\hat y_i) + (1 - y_i)\log(1 - \hat y_i)
\right)\\
L_{\text{Penalized Log}}=-\overset{n}{\underset{i=1}{\sum}}\left(
y_i\log(\hat y_i) + (1 - y_i)\log(1 - \hat y_i)
\right)+ \lambda_1\vert\vert\hat\beta\vert\vert_1+ \lambda_2\vert\vert\hat\beta\vert\vert_2^2
$$
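These objectives are simple to compute directly; a NumPy sketch (illustrative only, using the squared $\ell_2$ penalty as in ridge):

```python
import numpy as np

def sq_loss(y, yhat):
    return np.sum((y - yhat) ** 2)        # ||y - yhat||_2^2

def abs_loss(y, yhat):
    return np.sum(np.abs(y - yhat))       # ||y - yhat||_1

def elastic_net_loss(y, yhat, beta, lam1, lam2):
    # square loss plus l1 and squared-l2 penalties on the coefficients
    return sq_loss(y, yhat) + lam1 * np.sum(np.abs(beta)) + lam2 * np.sum(beta ** 2)

y = np.array([1.0, 2.0, 3.0])
yhat = np.array([1.5, 2.0, 2.0])
beta = np.array([0.5, -1.0])

assert np.isclose(sq_loss(y, yhat), 1.25)
assert np.isclose(abs_loss(y, yhat), 1.5)
# lam1 = lam2 = 0 recovers the plain square loss
assert np.isclose(elastic_net_loss(y, yhat, beta, 0.0, 0.0), 1.25)
```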
| null | CC BY-SA 4.0 | null | 2023-03-19T19:45:55.197 | 2023-03-26T02:35:51.097 | 2023-03-26T02:35:51.097 | 247274 | 247274 | null |
610000 | 2 | null | 609963 | 2 | null | >
The CLT states that as we draw random samples from a population, the distribution of their means tends towards a normal distribution.
Not quite, though maybe you have a book that says something vague and not particularly accurate like this. Such vagueness of language has misled you about what it is even referring to.
It's not the number of samples, but the number of observations in a sample. Sample sizes are very large in typical A/B tests, but see the later discussion, which explains why that might not be sufficient for the sort of variables commonly used in many A/B tests. First let's look at what the CLT says, or at least get a bit closer to a formal statement of it.
In particular (for a 'classical' CLT in mean-form), if $\bar{X}_n$ (n=1,2,3...) is a sequence of sample means of $n$ independent and identically distributed observations from a population with finite mean and variance ($\mu$ and $\sigma^2$), and $Z_n = \frac{\bar{X}_n-\mu}{\sigma/\sqrt{n}}$ is the standardized mean, then in the limit as $n\to\infty$ the (cumulative) distribution function, $F_n(z)$ of $Z_n$ converges to the standard normal cdf, $\Phi(z)$.
(Conveniently, this theorem - relating as it does to distribution functions - is in a form that is potentially relevant to evaluating tail probabilities.)
This would suggest as sample sizes become very large, the distribution function $F_n$ of a statistic $Z_n$ should eventually become close to that of a standard normal distribution. The CLT itself doesn't tell you how large that might need to be; it only talks about what happens in the limit as $n$ goes to infinity. In some situations (even when the CLT holds), that might need to be very large indeed.
In particular, you might consider very skewed distributions (which are very common in calculations like click-through-rates or purchase rates or whatever, and also in effectively continuous quantities such as time or money spent on a site) and see that sample means can sometimes remain clearly non-normal even when sample sizes are getting into the thousands.
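(A quick simulation along these lines, as a sketch: with a heavily skewed lognormal population, the sample means at $n = 1000$ are still visibly right-skewed.)

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000        # observations per sample
reps = 5000     # number of simulated sample means

# Heavily right-skewed population: lognormal with sigma = 2
means = rng.lognormal(mean=0.0, sigma=2.0, size=(reps, n)).mean(axis=1)

# Sample skewness of the distribution of the sample mean
m, s = means.mean(), means.std()
skewness = np.mean(((means - m) / s) ** 3)
assert skewness > 0.5   # still far from the symmetry of a normal
```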
>
Nonetheless, the difference between these two samples is guaranteed to be a normal distribution.
I presume you intend "difference between sample means" there (otherwise, where does the CLT come in? 'difference in samples' is not a test statistic), but either way, this is wrong. Indeed if the original distributions were non-normal you can guarantee that the distributions of the sample means (and of their difference, given the usual assumptions) is not actually normal. However, it might in practice get quite close $-$ close enough that the normal will yield perfectly reasonable answers, except perhaps in the extreme tail $-$ if only the sample size were large enough. Large enough, that is, given the particular situation you're in and your particular sense of what might be close enough.
>
My question is, how is the difference between these two samples constructed, and why is it guaranteed to be a normal distribution?
Look to the specific statistic you're using in the test. You don't mention which it is, and in A/B testing there's at least two distinct situations that are commonly involved and many more which might be possible. For both those common cases the statistic at least has a numerator with the form of a difference of means.
However, the CLT alone is not sufficient for the whole statistic in either case. A suitable argument for a t-test (e.g. if you were testing say, time or money spent at a site) or a z-test (as an approximation of a binomial test of proportions) would require additional argument, since the denominator is also a random variable. Such an argument is possible (e.g. by invoking Slutsky's theorem).
| null | CC BY-SA 4.0 | null | 2023-03-19T21:22:32.547 | 2023-03-19T23:03:08.050 | 2023-03-19T23:03:08.050 | 805 | 805 | null |
610001 | 1 | null | null | 0 | 15 | In this [paper](https://www.researchgate.net/publication/339768301_Enriching_Variety_of_Layer-Wise_Learning_Information_by_Gradient_Combination?enrichId=rgreq-c327e4bef7d84e5b2e0eb0d463ce162d-XXX&enrichSource=Y292ZXJQYWdlOzMzOTc2ODMwMTtBUzo5MTQxOTI5NzQ0NzkzNjdAMTU5NDk3MTk5ODg0NA%3D%3D&el=1_x_2&_esc=publicationCoverPdf) named, "Enriching Variety of Layer-Wise Learning Information by Gradient Combination", the authors used the term gradient timestamp that I did not see before in other papers. They provided an example in Fig. 5 in their paper below for ResNet network:
[](https://i.stack.imgur.com/fAzTs.png)
Questions:
- I am trying to understand the term "Gradient Timestamp" and what it mean, please.
- In the figure above, if $G_1$ refers to the weights of layer 1 added through the residual connection to the following Layer 3 and Layer 5 of the network, then why the author did not also add $G_3$ to Layer 5 since there is also a residual connection as I see from Layer 3 to Layer 5, please? If these paths between layers refer to paths during the backpropagation phase, then it would make more sense I think.
| Understanding the Term Gradient Timestamp in ResNet | CC BY-SA 4.0 | null | 2023-03-19T21:23:36.350 | 2023-03-19T21:23:36.350 | null | null | 309731 | [
"neural-networks"
] |