| Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
613071 | 2 | null | 161225 | 1 | null | To find standard errors of random effects for lmer(), use `library(merDeriv); sqrt(diag(vcov(lmer(), full = TRUE)))`. Another approach, mentioned at [https://stackoverflow.com/questions/31694812](https://stackoverflow.com/questions/31694812), is `library(arm); se.ranef(lmer())`. If you use nlme::lme() instead, see the answer at [https://stackoverflow.com/a/76025033/20653759](https://stackoverflow.com/a/76025033/20653759) for standard errors of the variances of random effects, computed using the Fisher information matrix from the package lmeInfo.
According to the comment [Estimates of the variance of the variance component of a mixed effects model](https://stats.stackexchange.com/questions/161225/estimates-of-the-variance-of-the-variance-component-of-a-mixed-effects-model/613071#comment727705_161225), your question seems to be instead whether the variability differs between groups of states. Then, reporting standard errors of random effects' standard deviations or variances may not help. Instead, consider a likelihood ratio test between models estimated by REML. The state effects on variability can be captured by either (1) the residual structure or (2) the random effects of another grouping level UNDER each state. This should be done in nlme::lme(), as lmer() does not allow such specifications.
If the initial model is `lme(sbp ~ age * sex, random = ~ 1 | state)`, following approach (1) leads to `lme(sbp ~ age * sex, random = ~ 1 | state, weights = varIdent(form = ~ 1 | state))`, so that the residual standard error (sigma) is allowed to differ by a ratio relative to a reference state's. Then compare these two models using anova() to test H0: the error variance is the same among states. Approach (2) requires an additional level of hierarchy, such as repeated measurements on the same patients, leading to `lme(sbp ~ age * sex, random = list(patient = pdDiag(~ 0 + state)))` or simply `lme(sbp ~ age * sex, random = ~ 0 + state | patient)`, where the standard deviation of random intercepts by patient is allowed to vary by state. Although "0 +" in the formula appears to omit intercepts, the random intercepts are fully contained within the state factor levels. Comparing it via anova() with a more restrictive model, `lme(sbp ~ age * sex, random = list(patient = pdIdent(~ 1)))` or simply `lme(sbp ~ age * sex, random = ~ 1 | patient)`, where the standard deviation of random intercepts by patient is homogeneous among states, tests H0: the random effect variance of patient-specific intercepts is the same among states.
Note that Approaches (1) and (2) address different questions. It appears that the clarification in Alexis's comment points to Approach (1).
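As a rough sketch of Approach (1) with simulated data (the variables `state`, `age`, and `sbp` and all numbers below are made up for illustration, since I don't have your data):

```
#### Sketch of Approach (1) with simulated data ####
library(nlme)
set.seed(1)
d <- data.frame(state = gl(4, 25, labels = paste0("s", 1:4)),
                age   = runif(100, 30, 70))
## state-specific intercept shifts, plus a residual SD that differs by state
d$sbp <- 120 + 0.3 * d$age + rep(c(-3, 1, 3, -1), each = 25) +
  rnorm(100, sd = rep(c(5, 5, 10, 10), each = 25))
## homoscedastic vs. heteroscedastic residuals, both fitted by REML
m0 <- lme(sbp ~ age, random = ~ 1 | state, data = d)
m1 <- lme(sbp ~ age, random = ~ 1 | state, data = d,
          weights = varIdent(form = ~ 1 | state))
## LRT of H0: the error variance is the same among states
anova(m0, m1)
```

A small p-value from anova() here favours the heteroscedastic model, i.e. the error variance differs among states.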
| null | CC BY-SA 4.0 | null | 2023-04-15T22:27:19.667 | 2023-04-16T02:31:49.377 | 2023-04-16T02:31:49.377 | 284766 | 284766 | null |
613072 | 1 | null | null | 7 | 215 | By "permissible" (for lack of a better term) I mean models which, despite a "flat" (improper) prior (i.e., $\int_{\Theta} p(\theta) d \theta = + \infty$), nevertheless produce a proper posterior (i.e., $\int_{\Theta} p(\theta|\mathbf{x}) d \theta = 1$). Under those circumstances, the likelihood does the heavy lifting, and [the MAP will equal the MLE](https://stats.stackexchange.com/q/140168).
But there are of course many models whose [likelihood does not have a (unique) maximum](https://stats.stackexchange.com/q/16758) and usually require some additional constraints to be estimable. So intuition says that for those models, a flat prior would produce an improper posterior. Is that the case, and if not, what characterizes the exceptions?
EDIT: Having looked through the archives some more, I noticed that [a related question was answered here](https://stats.stackexchange.com/q/97768).
| Is the class of models for which the MLE exists also the one for which flat priors are permissible? | CC BY-SA 4.0 | null | 2023-04-15T23:38:20.503 | 2023-04-16T08:51:09.800 | 2023-04-16T04:32:56.687 | 71679 | 71679 | [
"bayesian",
"maximum-likelihood",
"identifiability",
"improper-prior"
] |
613073 | 2 | null | 593525 | 0 | null | To test whether the intercept differs among 50 states, consider using a likelihood ratio test between one model without state (as either a random or a fixed effect) and another model with random intercepts by state, as illustrated at [https://rpubs.com/DKCH2020/578881](https://rpubs.com/DKCH2020/578881).
If you do pairwise comparisons between each state pair, I agree with Roland's comment at [How to retrieve standard errors from random effects in nlme?](https://stats.stackexchange.com/questions/593525/how-to-retrieve-standard-errors-from-random-effects-in-nlme#comment1099575_593525) that it requires modeling state as a fixed effect, meaning that there will be 50 - 1 = 49 coefficients estimated, each representing the difference in V1 between a state and a reference state. As a result, there could be 50*49/2 = 1225 pairs of differences, which makes the results hard to digest. Instead, you can designate one single state (e.g. your major study area) as the reference to reduce the number of pairwise comparisons to 49. In that case, you can use adjusted CIs to visualize the state comparisons. See Wright, T., Klein, M., & Wieczorek, J. (2019). A primer on visualizations for comparing populations, including the issue of overlapping confidence intervals. The American Statistician, 73(2), 165–178. [https://doi.org/10.1080/00031305.2017.1392359](https://doi.org/10.1080/00031305.2017.1392359).
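A base-R sketch of the single-reference approach (the data, the state names, and the choice of "CA" as reference are all illustrative):

```
#### Reference-state contrasts with adjusted CIs (illustrative) ####
set.seed(1)
d <- data.frame(state = gl(4, 30, labels = c("CA", "NY", "TX", "WA")),
                V1    = rnorm(120, mean = rep(c(0, 0.2, 0.5, 0.1), each = 30)))
## relevel() sets the reference; each remaining state gets one coefficient
fit <- lm(V1 ~ relevel(state, ref = "CA"), data = d)
## Bonferroni-adjusted CIs for the 3 differences versus CA
k <- nlevels(d$state) - 1
confint(fit, level = 1 - 0.05 / k)[-1, ]
```

With 50 states, `k` would be 49 and the same code applies.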
To retrieve standard errors of standard deviations or variances of random effects, see [https://stats.oarc.ucla.edu/r/faq/how-can-i-calculate-standard-errors-for-variance-components-from-mixed-models/](https://stats.oarc.ucla.edu/r/faq/how-can-i-calculate-standard-errors-for-variance-components-from-mixed-models/). Because standard deviations and variances of random effects have skewed sampling distributions, however, standard errors for these statistics can be misleading. Instead, reporting profile confidence intervals is encouraged. See [https://github.com/lme4/lme4/issues/497](https://github.com/lme4/lme4/issues/497).
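For an nlme fit specifically, `intervals()` returns approximate confidence limits for the random-effect standard deviations and the residual SD. A minimal sketch using the `Orthodont` data that ships with nlme (for lme4, `confint(fitted_model, method = "profile")` is the profile-CI analogue):

```
library(nlme)
## classic growth-curve example shipped with nlme
fit <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont)
## approximate 95% limits for the random-intercept SD and residual SD
intervals(fit, which = "var-cov")
```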
| null | CC BY-SA 4.0 | null | 2023-04-15T23:54:49.743 | 2023-04-15T23:54:49.743 | null | null | 284766 | null |
613075 | 1 | null | null | 0 | 19 | I’m planning a study to collect number of weeks our participants are employed each month (during 1 year time period) for an interrupted time series analysis. Thus, the dependent variable is limited to values 0, 1, 2, 3, or 4 weeks. Should I be concerned about using a Poisson distribution for such a limited range of values? Would it be better to collect employment for each week (or biweekly) as yes/no and use a logistic regression? Thanks!!
| Poisson regression with limited range DV | CC BY-SA 4.0 | null | 2023-04-16T00:54:50.507 | 2023-04-16T00:54:50.507 | null | null | 385797 | [
"time-series",
"poisson-regression"
] |
613077 | 1 | null | null | 0 | 18 | E.g. I have a time series data from day 1 up to day T.
I want to compare models with Yt~Xt, with
- equal weight of all data set
- put more weight (like exponential decay) for the day more close to T
Can I use AIC or BIC to compare these models?
| Can I use AIC or BIC to compare models fitted to the same data, but different weighting? | CC BY-SA 4.0 | null | 2023-04-16T01:14:36.113 | 2023-04-16T01:14:36.113 | null | null | 315687 | [
"time-series",
"model-comparison",
"sample-weighting"
] |
613078 | 1 | null | null | 3 | 54 | I was recently asked the question, "In ridge regression $\hat{y}=X(X^\top X+\lambda I)^{-1} X^\top y$, why might the correlation $\mathrm{corr}(\hat{y},y)$ between the predicted values ($\hat{y}$) and actual values ($y$) remain constant as $\lambda$ varies?" My initial thought is that this could be due to the eigenvalues of the $X^\top X$ matrix being roughly equal. However, I'm not certain if this is the only factor at play or if there are other potential reasons for this phenomenon. Thank you so much!
| Explaining Constant $\mathrm{corr}(\hat{y},y)$ in Ridge Regression as $\lambda$ Varies | CC BY-SA 4.0 | null | 2023-04-16T01:27:38.213 | 2023-04-16T15:25:55.023 | 2023-04-16T15:25:55.023 | 145991 | 145991 | [
"regression",
"correlation",
"ridge-regression"
] |
613079 | 2 | null | 613032 | 1 | null | Let's say we have a time variable $t$ such as the one below, with ten time points:
$$
t = [1,2,3,4,5,6,7,8,9,10]
$$
If we estimate this as a continuous variable, the regression summarizes the time trend with a single coefficient. If we include it as a factor, it effectively splits up the predictions, in turn giving you a unique coefficient for each individual time point. This will naturally change the significance of the coefficients in regressions. As an example with some simulated data below, I have created a 10-point time predictor and a normally distributed outcome variable $y$. Then I fit the data with $t$ as numeric data:
```
#### Sim Data ####
set.seed(123)
t <- rep(seq(1:10),10)
y <- t*.01 + rnorm(n=100, sd = .1)
#### Fit Numeric Regression ####
fit.c <- lm(y ~ t)
summary(fit.c)
```
The summary of the regression looks like this:
```
Call:
lm(formula = y ~ t)
Residuals:
Min 1Q Median 3Q Max
-0.234666 -0.059717 -0.001804 0.058016 0.210123
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.000725 0.019797 0.037 0.970861
t 0.011512 0.003191 3.608 0.000488 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.09164 on 98 degrees of freedom
Multiple R-squared: 0.1173, Adjusted R-squared: 0.1083
F-statistic: 13.02 on 1 and 98 DF, p-value: 0.0004877
```
You can see that, because I created a minor association between $t$ and $y$, the coefficient is significant. However, if I fit that same data as a factor:
```
#### Fit Factor Regression ####
fit.f <- lm(y ~ factor(t))
summary(fit.f)
```
The summary now has 10 coefficients, each with their own significance value:
```
Call:
lm(formula = y ~ factor(t))
Residuals:
Min 1Q Median 3Q Max
-0.208248 -0.064316 0.002569 0.065867 0.195329
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.01469 0.02946 0.499 0.6193
factor(t)2 -0.01966 0.04166 -0.472 0.6380
factor(t)3 0.02592 0.04166 0.622 0.5353
factor(t)4 0.04688 0.04166 1.125 0.2634
factor(t)5 0.03664 0.04166 0.879 0.3815
factor(t)6 0.08489 0.04166 2.038 0.0445 *
factor(t)7 0.09378 0.04166 2.251 0.0268 *
factor(t)8 0.04309 0.04166 1.034 0.3037
factor(t)9 0.07547 0.04166 1.812 0.0734 .
factor(t)10 0.10652 0.04166 2.557 0.0122 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.09315 on 90 degrees of freedom
Multiple R-squared: 0.1624, Adjusted R-squared: 0.07865
F-statistic: 1.939 on 9 and 90 DF, p-value: 0.05615
```
Because this is a categorical regression, it is only comparing each time point to the original reference criterion of Time 1. If we plot the data and draw the regression line based off the continuous fit:
[](https://i.stack.imgur.com/msgfX.png)
You can see that this specific regression is predicting an overall single upward trend. However, because the categorical regression is comparing each time point individually to the reference criterion (Time 1), only the time points which are greatly different from it are flagged as significant. You can see, for example, why Time 10 is significant: its data points are on average greater than Time 1's, whereas Time 3 has generally the same distribution of data as Time 1 and is thus non-significant.
On that note, it should be clear what you are doing with the data in this case. If you are just interested in fitting longitudinal data over a limited number of times of special interest, then fitting them as factors would be preferred, because then you could explain which time points had an actual effect on the response. However, if you have several time points and only want to know what the general trend is, it would be better to fit the data as numeric. Keep in mind there is more to read on this topic, but that would be my advice.
One last thing...you may have noticed my data in particular was somewhat nonlinear. This is relatively common with time-based data. I would check to see that your regression isn't fitting a linear trend to a nonlinear one. If it is, you may need to consider nonlinear methods if you do in fact treat the data as numeric.
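For instance, reusing the simulated data from above, a partial F-test of an added quadratic term is one quick (though certainly not the only) way to check for such nonlinearity:

```
#### Check Linear vs Quadratic Fit ####
set.seed(123)
t <- rep(seq(1:10), 10)
y <- t * .01 + rnorm(n = 100, sd = .1)
fit.c <- lm(y ~ t)
fit.q <- lm(y ~ t + I(t^2))  # adds a curvature term
anova(fit.c, fit.q)          # a small p-value would favour the quadratic
```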
| null | CC BY-SA 4.0 | null | 2023-04-16T01:52:16.193 | 2023-04-16T02:08:29.513 | 2023-04-16T02:08:29.513 | 345611 | 345611 | null |
613081 | 2 | null | 612874 | 0 | null | First, generalized additive mixed models (GAMMs) are just extensions of generalized additive models (GAMs), so the interpretation of their respective odds ratios is pretty much the same, though of course you are adding in the extra information about random effects. That isn't all that important for your question though.
Remember that odds ratios in generalized linear models (GLMs) are based on single coefficients. A GAM fit, in contrast, is based on the estimation of multiple coefficients using a spline, and this consequently alters how odds ratios (ORs) can be interpreted. As an example, if the data are fit with an almost parabolic association between the predictor and the outcome, the odds ratios on either side of the peak would have opposite interpretations.
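To make this concrete with a small simulated sketch (none of the numbers below come from your model; `mgcv` ships with R): the OR between two values of a smooth predictor is just the exponentiated difference of the linear predictors at those values.

```
#### OR From a GAM by Hand (simulated) ####
library(mgcv)
set.seed(1)
x  <- runif(300)
pr <- plogis(-1 + 3 * x - 2 * x^2)  # curved log-odds in x
yb <- rbinom(300, 1, pr)
fit <- gam(yb ~ s(x), family = binomial)
## log-odds at x = 0.2 and x = 0.6; exp of the difference is the OR
lp <- predict(fit, newdata = data.frame(x = c(0.2, 0.6)), type = "link")
exp(lp[2] - lp[1])
```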
To show this in action, one can use the `oddsratio` package on a given GAM to check a specific region of the regression line and its associated OR. I have used the example given on the [oddsratio package vignette page](https://pat-s.github.io/oddsratio/articles/oddsratio.html) so you can check through the documentation yourself. Below, a GAM is fitted from the `data_gam` data in the `oddsratio` package. We use the `or_gam` function by specifying a specific range of values and obtaining its associated OR:
```
#### Load Libraries ####
library(oddsratio)
library(tidyverse)
#### Fit GAM ####
fit_gam <- mgcv::gam(y ~ s(x0) + s(I(x1^2)) + s(x2) + offset(x3) + x4,
data = data_gam
)
#### Check OR for "Cut" ####
or_gam(
data = data_gam, model = fit_gam, pred = "x2",
values = c(0.099, 0.198)
)
```
This area of the regression has a considerable OR: moving from .099 to .198 is associated with an OR of 23. We can also plot these sections to visualize what they are saying. You first create a `ggplot`-based object with `plot_gam`, then create an OR-based object with `or_gam`, then layer them on top of each other to create a plot with the region of interest.
```
#### Create Plot Object ####
plot_object <- plot_gam(fit_gam, pred = "x2", title = "Predictor 'x2'")
#### Create OR Object ####
or_object <- or_gam(
data = data_gam, model = fit_gam,
pred = "x2", values = c(0.099, 0.198)
)
#### Insert OR Object into Plot Object ####
plot <- insert_or(plot_object, or_object,
or_yloc = 3,
values_xloc = 0.05, arrow_length = 0.02,
arrow_col = "red"
)
#### Generate OR Plot ####
plot +
theme_minimal()
```
[](https://i.stack.imgur.com/XtOu9.png)
The OR associated with this range makes sense...there is a pretty linear positive trend here. If we instead fit it to a region with a negative trend:
```
#### Create OR Object ####
or_object <- or_gam(
data = data_gam, model = fit_gam,
pred = "x2", values = c(0.3, 0.4)
)
#### Insert OR Object into Plot Object ####
plot <- insert_or(plot_object, or_object,
or_yloc = 3,
values_xloc = 0.05, arrow_length = 0.02,
arrow_col = "red"
)
#### Generate OR Plot ####
plot +
theme_minimal()
```
We get an almost negligible OR:
[](https://i.stack.imgur.com/WcbaS.png)
So to summarize, the OR is dependent on where you are looking, but in general it can be interpreted in a similar way.
| null | CC BY-SA 4.0 | null | 2023-04-16T03:13:28.493 | 2023-04-16T03:18:31.360 | 2023-04-16T03:18:31.360 | 345611 | 345611 | null |
613082 | 2 | null | 613078 | 1 | null | When you have a ridge in the parameter space, that ridge is a set of values where the fit hardly changes anywhere along it (the $\hat{y}$ values are essentially the same along that ridge even though $\hat{\beta}$ changes).
It's $\hat{\beta}$ that's not well-determined if you don't regularize, while the fit is nearly fixed.
Since that's the exact case where you want to use ridge regression, a collection of more or less regularized fits (larger or smaller $\lambda$ values) will still end up very near to the top of that ridge, and so have almost the same fit -- i.e. almost the same $\hat{y}$ values.
Which is to say, when you really want to use ridge regression (because you have an ill-determined system leading to a ridge in log-likelihood - or in -SSE - in the beta-space), that's exactly when you should expect to see the fit hardly change as you change $\lambda$.
Since the fit is barely changing, the correlation is nearly constant.
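A small numerical sketch of this (two nearly collinear predictors; everything simulated):

```
## two almost collinear columns -> a ridge in beta-space
set.seed(1)
n  <- 100
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 1e-3)
X  <- cbind(x1, x2)
y  <- x1 + rnorm(n)

ridge_cor <- function(lambda) {
  beta <- solve(crossprod(X) + lambda * diag(2), crossprod(X, y))
  cor(drop(X %*% beta), y)
}
## the fitted values barely move as lambda varies, so the
## correlations come out nearly identical
sapply(c(0.01, 0.1, 1, 10), ridge_cor)
```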
| null | CC BY-SA 4.0 | null | 2023-04-16T03:16:38.263 | 2023-04-16T08:29:35.960 | 2023-04-16T08:29:35.960 | 805 | 805 | null |
613083 | 2 | null | 613072 | 5 | null | No, these are somewhat different problems. If you have an improper flat prior and you don't have a unique MLE, you will often not have a unique posterior mode, so neither MLE nor MAP estimation will be useful without some additional thought/constraints. But you can easily have a proper posterior.
Some examples:
- Mixture models, where there is non-identifiability because of relabelling. There will still be relabelling in the posterior, but the posterior will be proper as long as the mixing probabilities are bounded away from zero
- 'Flat' or nearly flat regions in the likelihood: if you have a $2\times 2$ table where you only observe the margins, the odds ratio is non-identifiable and the likelihood is nearly flat over some range of values. Given a flat prior, you'd get a flat posterior over that range. However, the flat range will typically be bounded so that the posterior is proper.
- It's quite possible to have non-identifiability with bounded parameter spaces, so even a flat posterior would be proper. Suppose $Y\sim \mathrm{Binomial}(1,p_1)$ and you have a flat prior over $[0,1]\times[0,1]$ for $(p_1,p_2)$. The posterior for $p_2$ (about which you have no data) will still be flat, but it will not be improper.
Conversely, you can get an improper posterior without non-identifiability. Hobert and Casella discuss this for linear mixed models [here](https://www.jstor.org/stable/2291572). They don't explicitly use flat priors, but their improper priors could be regarded as flat for some transformed parameter.
One situation where you can get an improper posterior from non-identifiability is when the likelihood is flat on an unbounded subspace of the parameter space. Suppose you have a model $Y\sim N(\alpha+\beta,1)$. The data only tell you about $\alpha+\beta$, and your posterior for $\alpha-\beta$ will be flat if the prior is flat.
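A quick numerical illustration of that last case: with one observation $y = 0$ and a flat prior on $(\alpha, \beta)$, the unnormalised posterior mass over growing squares $[-M, M]^2$ never converges (it grows roughly linearly in $M$):

```
## unnormalised posterior mass of N(alpha + beta, 1), y = 0, flat prior
post_mass <- function(M, n = 400) {
  g <- seq(-M, M, length.out = n)
  h <- g[2] - g[1]
  grid <- expand.grid(a = g, b = g)
  ## Riemann sum of the likelihood over the square [-M, M]^2
  sum(dnorm(0, mean = grid$a + grid$b, sd = 1)) * h^2
}
sapply(c(5, 10, 20), post_mass)  # keeps growing: the posterior is improper
```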
| null | CC BY-SA 4.0 | null | 2023-04-16T03:57:52.297 | 2023-04-16T03:57:52.297 | null | null | 249135 | null |
613085 | 2 | null | 274777 | 0 | null | Since your original model `lmer(var ~ cond + (1 | blocks) + (1 + cond | sub), data = data)` allows a random intercept by `sub`, a random slope of `cond` by `sub`, and a correlation between these two random effects, the equivalent model following DuCorey's specification is
```
lme(var ~ cond, data = data, random = list(
one1 = pdIdent(~ blocks - 1),
one2 = pdLogChol(~ sub + cond - 1)))
```
Here `pdLogChol()` (or `pdSymm()`) allows the variance-covariance matrix of the random effects to be freely estimated, unlike `pdIdent()`, which does not allow the random effects of variables specified in the formula to be correlated and requires each to have the same variance. The resulting model has a first-level grouping `one1` and a second-level grouping `one2`, so to model correlation of error terms within groups, use a specification like `correlation = corCAR1(form = ~ cond | one1/one2/sub)`.
I do not think that using one1 and one2 separately is necessary. Using the same pseudo-variable name like `one` for both grouping factors should be sufficient. Then the model would be:
```
lme(var ~ cond, data = data, random = list(
one = pdIdent(~ blocks - 1),
one = pdLogChol(~ sub + cond - 1)),
correlation = corCAR1(form = ~ cond | one/one/sub))
```
| null | CC BY-SA 4.0 | null | 2023-04-16T04:14:45.073 | 2023-04-16T09:02:32.170 | 2023-04-16T09:02:32.170 | 284766 | 284766 | null |
613086 | 2 | null | 346945 | 0 | null | Well, if the features are categorical, the regression answer above doesn't really hold. In that case, how many splits you can do on one feature depends on its number of categories. If each feature has only two categories, then the OP would be correct.
| null | CC BY-SA 4.0 | null | 2023-04-16T04:55:56.770 | 2023-04-16T04:55:56.770 | null | null | 349817 | null |
613089 | 1 | null | null | 3 | 62 | I'm new to econometrics and learning about fixed effects. If I understood it correctly, we are literally including dummy variables for every observation. For example, I have panel data on firms, and if I include firm fixed effects, that means I will have a dummy variable for each firm. The question I have is about degrees of freedom. If we include dummy variables for each of the n firms, then the model has n dummy variables + some covariates, so does that mean the number of parameters exceeds the number of observations? Then how could we run the model if so? What am I missing here?
| Clarifying degrees of freedom in fixed effects model | CC BY-SA 4.0 | null | 2023-04-16T06:42:40.293 | 2023-04-16T08:25:50.783 | 2023-04-16T07:07:14.913 | 35989 | 355204 | [
"multicollinearity",
"fixed-effects-model",
"degrees-of-freedom"
] |
613090 | 1 | null | null | 0 | 13 | I am attempting to make a dependent variable that combines 4 survey questions, all with binary responses. I am unsure of the best way to go about doing that; potentially through some type of index.
Originally, I thought a logit model would be appropriate, but I am second-guessing that, since the D.V. will no longer be binary once combined.
This is clearly a hypothetical example but identical to what I am attempting to do:
Attempting to make a group of people who go to fast food chains.
D.V. would contain 4 questions:
- Do you eat McDonalds fast food? (0 = No/ 1= Yes)
- Do you eat Wendys fast food? (0 = No/ 1= Yes)
- Do you eat Arbys fast food? (0 = No/ 1= Yes)
- Do you eat Burger King fast food? (0 = No/ 1= Yes)
This would end up getting tested with different control variables that are either categorical or binary.
Thoughts?
Referred here from StackOverflow [original post](https://stackoverflow.com/questions/76019431/dependent-variable-with-multiple-survey-questions).
| Dependent Variable Containing Multiple Binary Variables and Specifying Appropriate Model Type | CC BY-SA 4.0 | null | 2023-04-16T06:52:45.513 | 2023-04-17T14:24:19.770 | 2023-04-17T14:24:19.770 | 385740 | 385740 | [
"r",
"survey",
"methodology"
] |
613092 | 2 | null | 613089 | 1 | null | If you have $n$ observations where each is a separate firm and you use $n$ dummy variables to identify the firms, then yes, combined with other variables you would have too many parameters. But if you have only one row of data per firm, it doesn't make any sense to have a dummy variable per firm. If you only had the dummies, your model would find a separate intercept for each firm, i.e. it would be
$$
y_i = \beta_i + \varepsilon_i
$$
so the only thing it needs to do to return predictions with the least error is to set $\hat{y}_i = y_i$, that is, to return the data itself.
If you have fewer firms than the number of observations, but still only a few data points per firm, you have a similar problem: you don't have enough data to reliably estimate the parameters for each firm.
So if one of those is your scenario, then you cannot include dummies for each firm in your model. To do this, you need multiple data points per firm.
It shows that indeed if you have more parameters than datapoints, the model cannot be fitted (see also [multicollinearity](/questions/tagged/multicollinearity)). You could use a regularized model in such a scenario though.
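A tiny demonstration of the saturated case (one dummy per observation, with made-up data):

```
## one dummy per row: the model is saturated and just reproduces the data
set.seed(1)
y    <- rnorm(5)
firm <- factor(1:5)
fit  <- lm(y ~ 0 + firm)
all.equal(unname(coef(fit)), y)  # TRUE: each "firm effect" is just y_i
sum(residuals(fit)^2)            # essentially zero, no residual df left
```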
| null | CC BY-SA 4.0 | null | 2023-04-16T07:01:02.223 | 2023-04-16T07:06:13.310 | 2023-04-16T07:06:13.310 | 35989 | 35989 | null |
613093 | 2 | null | 610528 | 0 | null | Yes, these are all design-based tests that use the sampling weights. The Rao-Scott tests have been shown to have better control of Type I error than the Wald and `adjWald` tests when the number of design degrees of freedom is small, but they may have lower power.
| null | CC BY-SA 4.0 | null | 2023-04-16T07:47:02.980 | 2023-04-16T07:47:02.980 | null | null | 249135 | null |
613094 | 2 | null | 585856 | 0 | null | I will try to answer this question based on what I understand from the article Bates et al. (2015), which is the same document as the lmer vignette cited in the question. The goal here is to better understand what the sum of squares for the fixed effect $i$ means, which is given by:
$$
\mathit{SS}_i = \hat{\beta}^\top \boldsymbol{R}_i^\top \boldsymbol{R}_i \hat{\beta}
$$
First of all, as stated on page 33 of the paper, $\boldsymbol{R}_i$ contains the rows of $\boldsymbol{R}_X$ associated with the $i$th fixed-effects term. The matrix $\boldsymbol{R}_X$ first appears in Eq. 18, on page 14. It is part of the Cholesky decomposition of the system matrix that appears in Eq. 17. This latter matrix comes from the minimization of the penalized weighted residual sum-of-squares (Eq. 14), when the parameters $\boldsymbol{\theta}$ (the covariance parameter vector, associated with the random effects) are fixed.
Eq. 18 (and, also, Eq. 50) indicates how the matrix $\boldsymbol{R}_X$ is computed:
$$
\boldsymbol{R}_X^\top\boldsymbol{R}_X = \boldsymbol{X}^\top\boldsymbol{W}\boldsymbol{X} - \boldsymbol{R}_{ZX}^\top\boldsymbol{R}_{ZX}
$$
We will see later in this answer how this matrix $\boldsymbol{R}_X$ can be related to the sum of squares associated with the fixed effects. We will also discuss later the meaning of the other two matrices ($\boldsymbol{X}$ and $\boldsymbol{R}_{ZX}$) that appear in the equation above.
Before getting to that, let us first consider a linear fixed-effects model, in which the response vector $\mathcal{Y}$ is normally distributed and whose mean value has a linear dependence on the fixed effects, represented by the $p$-dimensional coefficient vector $\boldsymbol{\beta}$:
$$
\mathcal{Y} \sim \mathcal{N}(\boldsymbol{X}\boldsymbol{\beta}, \sigma^2)
$$
Here, $\boldsymbol{X}$ is the $n\times p$ model matrix ($n$ is the number of observations, or the length of $\mathcal{Y}$). Note that this is similar to Eq. 1 of the paper, but I am considering $\boldsymbol{o}$ (the vector of known prior offset terms) to be equal to zero and $\boldsymbol{W}$ (the diagonal matrix of known prior weights) to be equal to the identity matrix, for sake of simplicity.
Let us make things concrete with an example in R. The code below prepares the data that will be used to fit both fixed-effects and mixed-effects models:
```
### number of levels of fixed effect x
p <- 2
### number of groups
q <- 4
### number of repetitions per level of x and per group
n <- 2
### Values for x
x <- rep(c(-1, 1), each = q * n)
### Groups
g <- gl(q, n, n * p * q)
### Fixed effects
fe.i <- 0.1
fe.x <- 0.5
### Random effects
set.seed(12345678)
re.i <- rnorm(q)
re.i <- 0.1 *(re.i - mean(re.i)) / sd(re.i)
re.x <- rnorm(q)
re.x <- 0.2 *(re.x - mean(re.x)) / sd(re.x)
### Noise
noise <- rnorm(n * p * q, sd = 0.3)
### Dependent variable
y <- fe.i + fe.x * x + re.i [g] + re.x [g] * x + noise
### Data frame
dat <- data.frame(x = x, y = y, g = g)
```
Note that the variable `x` is continuous, and its values were chosen such that the intercept and slope columns of the model matrix are orthogonal (the values of `x` sum to zero).
The fixed-effects linear model described above is fitted, in R, as follows:
```
m1 <- lm(y ~ x, dat)
```
The fitted coefficients $\hat{\boldsymbol{\beta}}$ can be extracted as:
```
m1.beta <- coefficients(m1)
```
whose values are:
```
> m1.beta
(Intercept) x
0.1492887 0.4151967
```
The model matrix $\boldsymbol{X}$ can be composed as:
```
X.m1 <- cbind(rep(1, n * p * q), x)
```
Assuming that:
$$
\boldsymbol{y}_{\mathrm{obs}} = \boldsymbol{X}\hat{\boldsymbol{\beta}} + \boldsymbol{\varepsilon}
$$
where $\boldsymbol{\varepsilon}$ is the vector of residuals with mean equal to zero and standard deviation $\sigma$, the total sum of squares $\mathit{SS}_\mathrm{total}$ can be related to the sum of squares due to the fixed coefficients $\mathit{SS}_\mathrm{fixef}$ and to the sum of squares of the residuals $\mathit{SS}_\mathrm{resid}$:
$$
\mathit{SS}_\mathrm{total} = \mathit{SS}_\mathrm{fixef} + \mathit{SS}_\mathrm{resid}
$$
where:
$$
\begin{align}
\mathit{SS}_\mathrm{total} &= \boldsymbol{y}_{\mathrm{obs}}^\top\boldsymbol{y}_{\mathrm{obs}}\\
\mathit{SS}_\mathrm{fixef} &= \hat{\boldsymbol{\beta}}^\top\boldsymbol{X}^\top\boldsymbol{X}\hat{\boldsymbol{\beta}}\\
\mathit{SS}_\mathrm{resid} &= \boldsymbol{\varepsilon}^\top\boldsymbol{\varepsilon}
\end{align}
$$
These values can be computed as follows:
```
SS.fixef <- t(m1.beta) %*% t(X.m1) %*% X.m1 * m1.beta
SS.m1.resid <- sum(residuals(m1) ^ 2)
```
and are visualized with the following code:
```
> cat(sprintf("SS.fixef: %f (intercept), %f (slope)\nSS.resid: %f\n",
+ SS.fixef[1], SS.fixef[2], SS.m1.resid))
SS.fixef: 0.356594 (intercept), 2.758212 (slope)
SS.resid: 2.141890
```
Note that, when computing `SS.fixef`, the last multiplication is scalar (`*`) instead of matrix (`%*%`), such that the sums of squares for both intercept and slope are computed separately.
The total sum of squares is computed as follows:
```
SS.total <- t(dat$y) %*% dat$y
```
and we can verify that the equation $\mathit{SS}_\mathrm{total} = \mathit{SS}_\mathrm{fixef} + \mathit{SS}_\mathrm{resid}$ holds:
```
> cat(sprintf("SS.total: %f\nSS.fixef + SS.m1.resid: %f\n",
+ SS.total, sum(SS.fixef) + SS.m1.resid))
SS.total: 5.256697
SS.fixef + SS.m1.resid: 5.256697
```
The values for the sums of squares for the slope (dependent on variable `x`) and for the residuals, which are respectively 2.75821 and 2.1419, are identical to the ones obtained by calling the `anova` function on model `m1`:
```
> anova(m1)
Analysis of Variance Table
Response: y
Df Sum Sq Mean Sq F value Pr(>F)
x 1 2.7582 2.75821 18.029 0.0008145 ***
Residuals 14 2.1419 0.15299
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
Let us now investigate the mixed-effects model and how it deals with the same data. According to Bates et al. (2015), the conditional distribution of $\mathcal{Y}$ given $\mathcal{B} = \boldsymbol{b}$ has the form:
$$
(\mathcal{Y}|\mathcal{B} = \boldsymbol{b}) \sim \mathcal{N}(\boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{Z}\boldsymbol{b}, \sigma^2)
$$
where $\boldsymbol{Z}$ is the $n \times q$ model matrix for the $q$-dimensional vector-valued random-effects variable, $\mathcal{B}$, whose value is fixed at $\boldsymbol{b}$. The unconditional distribution of $\mathcal{B}$ is also multivariate normal with mean zero and a $q \times q$ variance-covariance matrix $\boldsymbol{\Sigma}$. Again, the matrix of weights $\boldsymbol{W}$ is ignored here for the sake of simplicity.
Note that in the data defined above (included in the data frame `dat`), there is a grouping factor `g`, which has four levels. For each one of these levels, there are two measurements for $x = -1$ and another two measurements for $x = 1$. The data has been generated such that random effects on both intercept (with size `re.i`) and slope (with size `re.x`) are induced. With 16 observations in total, it is possible to fit a model containing random effects for both intercept and slope, avoiding singularities in the fit.
This is a graphical representation of the data:
```
png(file = "data.png", width = 500, height = 470)
par(mar = c (5, 4, 0.1, 0.1))
cols <- c("red", "blue", "gold", "green4")
plot(dat$y, las = 1, pch = 19, col = cols[dat$g],
ylab = "Y", cex = 1.5, xaxt = "n")
axis(1, at = seq (1, nrow (dat)))
legend("topleft", ins = 0.05, col = cols, pch = 19,
legend = sapply (seq (1, q), function (x) sprintf ("group %d", x)))
dummy <- dev.off()
```
[](https://i.stack.imgur.com/K7hAK.png)
The horizontal axis is just the row index of data frame `dat`. The value of $x$ for the first 8 points is $-1$ and the value of $x$ for the last 8 points is $1$.
We fit a mixed-effects model with two fixed factors (for intercept and slope) and with two random factors, specified by the formula element `(x || g)`, which is equivalent to `(1 | g) + (0 + x | g)` (i.e., we do not consider a correlation between random effects for intercept and slope):
```
library(lme4)
m2 <- lmer(y ~ x + (x || g), dat)
```
The fitted fixed coefficients $\hat{\boldsymbol{\beta}}$ can be extracted from the `m2` object with this code:
```
m2.beta <- fixef(m2)
```
They are identical to the coefficients fitted to the fixed-effects model `m1`:
```
> cat(sprintf("FE model: intercept = %f, slope = %f\nME model: intercept = %f, slope = %f\n",
+ m1.beta[1], m1.beta[2], m2.beta[1], m2.beta[2]))
FE model: intercept = 0.149289, slope = 0.415197
ME model: intercept = 0.149289, slope = 0.415197
```
The model matrix $\boldsymbol{X}$ for `m2` can be extracted with the following code:
```
X.m2 <- getME(m2, "X")
XtX <- t(X.m2) %*% X.m2
```
It is identical to the model matrix for `m1`:
```
> X.m1
x
[1,] 1 -1
[2,] 1 -1
[3,] 1 -1
[4,] 1 -1
[5,] 1 -1
[6,] 1 -1
[7,] 1 -1
[8,] 1 -1
[9,] 1 1
[10,] 1 1
[11,] 1 1
[12,] 1 1
[13,] 1 1
[14,] 1 1
[15,] 1 1
[16,] 1 1
> X.m2
(Intercept) x
1 1 -1
2 1 -1
3 1 -1
4 1 -1
5 1 -1
6 1 -1
7 1 -1
8 1 -1
9 1 1
10 1 1
11 1 1
12 1 1
13 1 1
14 1 1
15 1 1
16 1 1
attr(,"assign")
[1] 0 1
attr(,"msgScaleX")
character(0)
```
We can easily see that the values of $\hat{\boldsymbol{\beta}}^\top\boldsymbol{X}^\top\boldsymbol{X}\hat{\boldsymbol{\beta}}$
are the same for `m1` and `m2`:
```
> t(m1.beta) %*% t(X.m1) %*% X.m1 %*% m1.beta
[,1]
[1,] 3.114806
> t(m2.beta) %*% XtX %*% m2.beta
[,1]
[1,] 3.114806
```
Remember that this was the value of the sum of squares associated with the fixed factors in the fixed-effects model. This is not the case for the mixed-effects model, and we now reach the important point of this answer. As we have seen at the beginning of my text, $\boldsymbol{X}^\top\boldsymbol{X}$ is the sum of two terms (ignoring the $\boldsymbol{W}$ matrix, for the sake of simplicity):
$$
\boldsymbol{X}^\top\boldsymbol{X} = \boldsymbol{R}_X^\top\boldsymbol{R}_X + \boldsymbol{R}_{ZX}^\top\boldsymbol{R}_{ZX}
$$
Pre-multiplying by $\hat{\boldsymbol{\beta}}^\top$ and post-multiplying by $\hat{\boldsymbol{\beta}}$ both sides of the equation above:
$$
\hat{\boldsymbol{\beta}}^\top\boldsymbol{X}^\top\boldsymbol{X}\hat{\boldsymbol{\beta}} = \hat{\boldsymbol{\beta}}^\top\boldsymbol{R}_X^\top\boldsymbol{R}_X\hat{\boldsymbol{\beta}} + \hat{\boldsymbol{\beta}}^\top\boldsymbol{R}_{ZX}^\top\boldsymbol{R}_{ZX}\hat{\boldsymbol{\beta}}
$$
gives us an expression for the sums of squares that are related to the fixed coefficients of the model. The left-hand side is the sum of squares of the fixed-effects model fitted to the same data. We see that this value is partitioned into two terms:
- the term $\hat{\boldsymbol{\beta}}^\top\boldsymbol{R}_X^\top\boldsymbol{R}_X\hat{\boldsymbol{\beta}}$ that is related to the fixed-effects coefficients per se,
- and the term $\hat{\boldsymbol{\beta}}^\top\boldsymbol{R}_{ZX}^\top\boldsymbol{R}_{ZX}\hat{\boldsymbol{\beta}}$ that appears because we are fitting a mixed-effects linear model to the data. Its value depends on the model matrix for the random effects $\boldsymbol{Z}$ and on the values of the parameters $\boldsymbol{\theta}$ of the variance-covariance matrix $\boldsymbol{\Sigma}_\boldsymbol{\theta}$. It is computed during the algebraic phase of the penalized least squares algorithm (Eq. 49).
This is why one should only consider the term $\hat{\boldsymbol{\beta}}^\top\boldsymbol{R}_X^\top\boldsymbol{R}_X\hat{\boldsymbol{\beta}}$ when computing the $F$ statistics for testing the fixed effects. One further indication that this must be the case comes from the fact that the covariance matrix of the fixed-effects coefficients is given by (Eq. 54 of Bates et al. 2015):
$$
\mathrm{Var}_{\boldsymbol{\theta}, \sigma}(\hat{\boldsymbol{\beta}}) = \sigma^2 \boldsymbol{R}_X^{-1} (\boldsymbol{R}_X^\top)^{-1}
$$
and depends only on $\boldsymbol{R}_X$, besides $\sigma$.
The total sum of squares for the mixed-effects model can then be partitioned into four terms:
$$
\mathit{SS}_\mathrm{total} = \mathit{SS}_{RX} + \mathit{SS}_{RZX} + \mathit{SS}_\mathrm{ranef} + \mathit{SS}_\mathrm{resid}
$$
where:
$$
\begin{align}
\mathit{SS}_{RX} &= \hat{\boldsymbol{\beta}}^\top\boldsymbol{R}_X^\top\boldsymbol{R}_X\hat{\boldsymbol{\beta}}\\
\mathit{SS}_{RZX} &= \hat{\boldsymbol{\beta}}^\top\boldsymbol{R}_{ZX}^\top\boldsymbol{R}_{ZX}\hat{\boldsymbol{\beta}}\\
\end{align}
$$
The term $\mathit{SS}_\mathrm{resid}$ is related only to the residual error (the value of $\sigma$) and the term $\mathit{SS}_\mathrm{ranef}$ is related only to the random effects (the values in the variance-covariance matrix of the random effects $\boldsymbol{\Sigma}_\boldsymbol{\theta}$).
In our example, this is how these values are computed:
```
RX <- getME(m2, "RX")
RZX <- getME(m2, "RZX")
RXtRX <- t(RX) %*% RX
RZXtRZX <- t(RZX) %*% RZX
SS.RX <- t(m2.beta) %*% RXtRX * m2.beta
SS.RZX <- t(m2.beta) %*% RZXtRZX %*% m2.beta
SS.m2.resid <- sum((dat$y - predict(m2)) ^ 2)
SS.ranef <- sum((dat$y - predict(m2, re.form=NA)) ^ 2) - SS.m2.resid
```
As we did previously, the last multiplication in the computation of `SS.RX` is scalar (`*`) instead of matrix (`%*%`), so that we can get separate sums of squares for the intercept and the slope (this is another way of computing $\mathit{SS}_i$ with $\boldsymbol{R}_i$, as stated at the beginning of this answer).
We can verify that all terms sum up to the value of the total sum of squares ($\mathit{SS}_{\mathrm{total}}$):
```
> cat(sprintf("SS.RX: intercept = %f, slope = %f\nSS.RZX: %f\nSS.ranef: %f\nSS.m2.resid: %f\n\nSS.RX + SS.RZX + SS.ranef + SS.m2.resid: %f\nSS.total: %f\n",
+ SS.RX[1], SS.RX[2], SS.RZX, SS.ranef, SS.m2.resid,
+ sum(SS.RX) + SS.RZX + SS.ranef + SS.m2.resid,
+ SS.total))
SS.RX: intercept = 0.070573, slope = 2.181704
SS.RZX: 0.862529
SS.ranef: 1.270335
SS.m2.resid: 0.871556

SS.RX + SS.RZX + SS.ranef + SS.m2.resid: 5.256697
SS.total: 5.256697
```
As it should be, the value for the sum of squares for the fixed-effect slope (`SS.RX[2]`) is exactly the same as the value that appears when `anova` is invoked for `m2`:
```
> anova(m2)
Analysis of Variance Table
npar Sum Sq Mean Sq F value
x 1 2.1817 2.1817 27.452
```
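Note that lme4's `anova` table reports no denominator degrees of freedom; the $F$ value is the mean square divided by the estimated residual variance, so the numbers above imply:
$$
F = \frac{\mathit{MS}_x}{\hat{\sigma}^2} = \frac{2.1817}{\hat{\sigma}^2} = 27.452 \quad\Rightarrow\quad \hat{\sigma}^2 \approx 0.0795
$$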
So, to summarize, the value of the sum of squares associated with the coefficients of a mixed-effects model is related to the sum of squares associated with the coefficients of the equivalent fixed-effects model fitted to the same data, in the sense that the former is a part of the latter. This can be visualized in the figure below, where the widths of the rectangles are proportional to the respective sums of squares (see the Appendix for the code that generates part of this figure).
[](https://i.stack.imgur.com/ngAnj.png)
In this figure, the pink and the cyan rectangles correspond to the sums of squares that are used to compute the $F$ statistics that appear in the ANOVA tables, for both fixed-effects and mixed-effects models.
References
Bates D, Mächler M, Bolker B, Walker S (2015) Fitting linear mixed-effects models using lme4. J Stat Softw 67(1). DOI: [10.18637/jss.v067.i01](http://dx.doi.org/10.18637/jss.v067.i01).
Appendix
This is the code for generating part of the last figure. The generated SVG file was further edited in Inkscape for adding the labels.
```
svg(file = "ss.svg", width = 7, height = 4)
plot(NA, NA, type = "n", bty = "n", xaxt = "n", yaxt = "n",
xlab = "", ylab = "", xlim = c(0, SS.total), ylim = c(0, 2.5))
y <- c(0, 0, 1, 1, 0)
x <- c(0, 1, 1, 0, 0)
o <- 0
for (s in c (SS.fixef[1], SS.fixef[2], SS.m1.resid)) {
polygon(o + x * s, y + 1.5)
o <- o + s
}
o <- 0
for (s in c (SS.RX[1], SS.RX[2], SS.RZX, SS.ranef, SS.m2.resid)) {
polygon(o + x * s, y)
o <- o + s
}
dummy <- dev.off()
```
| null | CC BY-SA 4.0 | null | 2023-04-16T07:53:39.347 | 2023-04-16T07:53:39.347 | null | null | 385308 | null |
613096 | 1 | null | null | 1 | 55 | I am trying to replicate Section 4.1. of a paper "On the Heterogeneous Effects of Sanctions on Trade and Welfare: Evidence from the Sanctions on Iran and a New Database" by Felbermayr et al. (2020). This paper tries to see the effect of various sanctions on trade flows, using the Global Sanctions Database (a big database of trade sanctions at different time periods).
Following the paper, I conducted a regression where I want to see the effect of sanctions against Iran on the trade flows of the sanctioning country.
Thus, I created a large number of dummy variables that work this way:
COUNTRY_IRN => Takes the value 1 if the sanction in question is a sanction of "Country" (replace with any country) on Iranian imports.
IRN_COUNTRY => Takes the value 1 if the sanction in question is a sanction of "Country" (replace with any country) on Iranian exports.
I have such dummies for 13 different countries, making 26 dummies in total. When I run my regression (my other independent variables are dummies that identify complete/partial trade sanctions, as well as importation and exportation sanctions towards Iran, plus a set of time-invariant controls for country pairs), I get these results:
[](https://i.stack.imgur.com/YJMr3.png)
I used a Poisson Pseudo Maximum Likelihood regression model with country-pair fixed effects, time-varying exporting-country dummies and time-varying destination-country dummy variables. The command I used is:
```
ppmlhdfe tradeflow_comtrade_o TRADE_SANCT_COMPL TRADE_SANCT_PARTL OTHER_SANCT SANCT_IRAN_EXP SANCT_IRAN_IMP IRN_USA USA_IRN IRN_CAN CAN_IRN IRN_AUS AUS_IRN IRN_CHE CHE_IRN IRN_CHN CHN_IRN IRN_TUR TUR_IRN IRN_BRA BRA_IRN IRN_ARE ARE_IRN IRN_RUS RUS_IRN IRN_IND IND_IRN IRN_ZAF ZAF_IRN IRN_JPN JPN_IRN IRN_SGP SGP_IRN log_dist comlang_off sibling_ever contig RTA, a(CountryPairs Sanctioning_time_fixed Sanctioned_time_fixed)
```
As you can see, all country-specific dummies, save for the USA, have been omitted from the regression because of collinearity (as stated by Stata), and I cannot find out why. Is it linked to the fixed effects I used? Is it because all of these dummies are mutually exclusive? What can I do to get coefficients for all these variables, without Stata omitting them?
After some testing around, I think it might come from my use of country-pair fixed effects. Do you think my guess is right? If so, what can I do to work around this? Since the paper used country-pair fixed effects, I have to use them as well.
| Dummy variable coefficients are getting automatically omitted by Stata : what to do to keep them? | CC BY-SA 4.0 | null | 2023-04-16T08:05:43.720 | 2023-04-16T11:09:19.510 | null | null | 382870 | [
"stata",
"categorical-encoding"
] |
613097 | 2 | null | 613072 | 3 | null | If the prior is uniform then
$$f(\theta|x) = \frac{\mathcal{L}(\theta,x)}{\int_{\vartheta \in \Theta} \mathcal{L}(\vartheta,x)\, d\vartheta}$$
And this is a proper distribution when the integral of the likelihood function in the denominator is finite.
A simple example where this fails is when, for a particular observation $x$, the likelihood stays above some positive value over an infinite range of the parameter. For example, consider a Poisson distribution $X \sim Poisson(\lambda=1/\theta)$ and the observation $x=0$. Then the likelihood equals $\mathcal{L}(\theta,0) = e^{-1/\theta}$, and we need to compute the integral $\int_0^\infty e^{-1/\theta} d\theta$, which diverges and has no finite value.
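The divergence can be made explicit: since $e^{-1/\theta} \geq e^{-1}$ for all $\theta \geq 1$, we have
$$
\int_0^\infty e^{-1/\theta}\, d\theta \;\geq\; \int_1^\infty e^{-1}\, d\theta = \infty.
$$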
| null | CC BY-SA 4.0 | null | 2023-04-16T08:51:09.800 | 2023-04-16T08:51:09.800 | null | null | 164061 | null |
613098 | 1 | null | null | 1 | 33 | I want to use an ANOVA for my analysis (2x3 design). I can decide if I can safely use parametric tests. The two samples results: Shapiro-Wilk p<.001) and Q-Q plots don't seem to be normally distributed. Should I use non-parametric tests?
I want to test the relationship between an ordinal variable (high/low and high/medium/low) and a discrete variable (score of a questionnaire) using an ANOVA (2x3 design).
Each sample has 427 participants.
[](https://i.stack.imgur.com/NzGS5.jpg)[](https://i.stack.imgur.com/me3cW.jpg)
| Q-Q plots and normality: Can I use ANOVA? | CC BY-SA 4.0 | null | 2023-04-16T09:22:24.283 | 2023-04-16T09:38:16.360 | 2023-04-16T09:38:16.360 | 22047 | 385790 | [
"normal-distribution",
"anova",
"qq-plot",
"psychology"
] |
613100 | 1 | null | null | 1 | 9 | I want to find input demand elasticities using a cost function. Input quantities and input prices are available for individual farmers for 5 food crops and 5 years (2015-2019). But farmers may vary across the time. I wanted to increase the sample size and then I pool the data of 5years then the total number of observations are 1200. My supervisor advised me to consider the 5 crops as one crop since theses crops are mostly showed similar in cost of production.
Could I perform pool OLS If I take real values of the data? I didn’t notice empirical evidence to follow.
| pooled cross section data | CC BY-SA 4.0 | null | 2023-04-16T09:51:40.620 | 2023-04-16T09:51:40.620 | null | null | 364992 | [
"least-squares",
"survey",
"cross-section",
"elasticity"
] |
613101 | 1 | 613231 | null | 3 | 53 | Is there a family of distributions that resemble the normal distribution (symmetric, spanning all real numbers, and approximately bell-shaped) but have lighter tails than normal distribution?
I'm looking for a suitable prior for a psychological trait, but I want to penalize outliers more strictly than the normal distribution does.
---
Updated with additional technical details:
The school organises written entrance examinations. The test has many parallel versions, so it will be scored with an IRT model using anchor items. The Bayesian parameter estimation is used for many reasons.
My employer has a requirement that the score a student receives on the test must be an integer between 0 and 100. Therefore, I linearly transform the parameter estimate $\theta$, round, and winsorize the values that lie outside the interval $[0,100]$.
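As an aside, the transformation described above can be sketched as follows; the scale (10) and shift (50) used here are illustrative assumptions, not the values actually used:

```python
import numpy as np

def score(theta, scale=10.0, shift=50.0):
    """Linearly transform theta, round, and winsorize to [0, 100]."""
    raw = np.rint(scale * np.asarray(theta) + shift)
    return np.clip(raw, 0, 100).astype(int)

print(score([-6.0, -1.3, 0.0, 2.71, 8.0]))  # values outside [0, 100] are winsorized
```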
A histogram of the resulting score may look like this (not real data - simulated ideal case):
[](https://i.stack.imgur.com/k1jGe.png)
What I don't like about this solution is that the vast majority of the values (about 95%) are concentrated in the interval $[25,75]$ - so half of the range is practically unused.
If I adjust the standard deviation of the data higher, then the solution changes like this:
[](https://i.stack.imgur.com/qb0vA.png)
Again I am not satisfied, this time because of the cumulation of values at both extremes.
I was thinking (and this is why I asked the question) that it might help if the estimate of the $\theta$ parameter came from a prior that has lighter tails. The data would then not contain so many outliers that leave the desired interval.
Ideally, I would imagine something like this:
[](https://i.stack.imgur.com/CMbnX.png)
| Light tailed symmetric distribution | CC BY-SA 4.0 | null | 2023-04-16T10:20:40.927 | 2023-04-18T12:42:05.563 | 2023-04-17T15:12:38.973 | 96600 | 96600 | [
"distributions",
"bayesian",
"normal-distribution",
"prior"
] |
613102 | 1 | null | null | 0 | 24 | I am doing text classifcation and using SciKit Learn's Chi-square test to select features. Reading about the Chi-square test, a word being present in a text should be just as predictive of a class as it being absent, since considering considering one or the other just re-labels the columns. But SKlearn gives me different results.
In the Minimal Reproducible Example below, the word `present` perfectly predicts a class among imbalanced classes. For simplicity, I set a word's absence explicitly, with the word `absent`. SciPy gives the same results for `present` and `absent`. SKlearn gives different results, and they are both different from SciPy:
```
import numpy as np
import pandas as pd
import sklearn.feature_selection
import sklearn.feature_extraction
import scipy.stats
def generate_data(size=100, class_in_focus=1):
# Generate data.
np.random.seed(42)
classes = [1, 2, 3, 4]
probs = [0.1, 0.2, 0.3, 0.4]
df = pd.DataFrame({"class": np.random.choice(
a=classes,
size=size,
p=probs
)})
df["text"] = "" * df.shape[0]
# Generate one word that perfectly predicts a class,
# and another whose absence perfectly predicts a class.
df.loc[df["class"] == class_in_focus, "text"] = "present"
df.loc[df["class"] != class_in_focus, "text"] = "absent"
return df
def chi2(y, x):
"""Selects features with a chi-square test.
"""
vec = sklearn.feature_extraction.text.TfidfVectorizer(
#min_df=5,
#max_df=0.6,
binary=True,
use_idf=True
)
# Extract the terms.
x_vec = vec.fit_transform(x)
terms = vec.get_feature_names_out()
x_01 = (x_vec > 0).astype(int).toarray()
sklearn_features = None
print("SciPy chi-2 p-values:")
for category in np.unique(y):
chi2, p = sklearn.feature_selection.chi2(x_01, y==category)
for n in range(len(terms)):
xtab = pd.crosstab(x_01[:,n], y==category)
sp_chi2_val, sp_chi2_p, sp_chi2_dof, sp_chi2_exp = scipy.stats.chi2_contingency(xtab)
print("%.6f : p-value for '%s' and category %d" % (sp_chi2_p, terms[n], category))
new_df = pd.DataFrame(
{"terms":terms, "p-value":p, "y":category}
)
if None is sklearn_features:
sklearn_features = new_df
else:
sklearn_features = pd.concat([sklearn_features, new_df])
print("SkLearn p-values:")
print(sklearn_features)
#return sklearn_features["terms"].unique().tolist()
df = generate_data(class_in_focus=3)
chi2(df["class"], df["text"])
```
The result is:
```
SciPy chi-2 p-values:
0.032127 : p-value for 'absent' and category 1
0.032127 : p-value for 'present' and category 1
0.002490 : p-value for 'absent' and category 2
0.002490 : p-value for 'present' and category 2
0.000000 : p-value for 'absent' and category 3
0.000000 : p-value for 'present' and category 3
0.000003 : p-value for 'absent' and category 4
0.000003 : p-value for 'present' and category 4
SkLearn p-values:
terms p-value y
0 absent 1.833879e-01 1
1 present 3.737299e-02 1
0 absent 7.598796e-02 2
1 present 5.495042e-03 2
0 absent 7.237830e-08 3
1 present 3.572249e-17 3
0 absent 8.350924e-03 4
1 present 3.676005e-05 4
```
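For what it's worth, the relabelling argument can be verified directly for the full 2×2 contingency-table test: `scipy.stats.chi2_contingency` is invariant to swapping the "present" and "absent" rows (this is a generic check with made-up counts, not my data):

```python
import numpy as np
import scipy.stats

# 2x2 table: rows = word present / absent, columns = in class / not in class.
table = np.array([[9, 1],
                  [5, 85]])

chi2_a, p_a, _, _ = scipy.stats.chi2_contingency(table)
# Swapping the rows corresponds to testing "absent" instead of "present".
chi2_b, p_b, _, _ = scipy.stats.chi2_contingency(table[::-1])

print(np.isclose(chi2_a, chi2_b), np.isclose(p_a, p_b))  # True True
```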
What is SKLearn doing in the Chi-square test exactly, and why is it different from SciPy and textbook statistics?
| Why does sklearn's Chi2 test return different results when a feature is present or absent? | CC BY-SA 4.0 | null | 2023-04-16T10:28:54.613 | 2023-04-16T10:28:54.613 | null | null | 241968 | [
"python",
"chi-squared-test",
"scikit-learn",
"scipy"
] |
613103 | 2 | null | 613096 | 1 | null | >
As you can see, all country-specific dummies, save for the USA, have been omitted from the regression because of collinearity (as stated by Stata)
If Stata tells you that you have collinearity, it means that a linear combination of one or several other regressors takes exactly the same values as the dummy variable. As a consequence, a unique set of coefficients cannot be computed.
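As a generic illustration (with simulated data, not the Stata dataset from the question): if a dummy can be written exactly as a combination of other regressors, the design matrix is rank-deficient and the normal equations have no unique solution:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
pair_fe = rng.integers(0, 2, n).astype(float)  # stand-in for an absorbed fixed-effect dummy
sanction = pair_fe.copy()                      # a sanction dummy that duplicates it exactly

X = np.column_stack([np.ones(n), pair_fe, sanction])
# Rank 2, not 3: one column is redundant, so the coefficients are not identified.
print(np.linalg.matrix_rank(X))  # 2
```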
| null | CC BY-SA 4.0 | null | 2023-04-16T11:09:19.510 | 2023-04-16T11:09:19.510 | null | null | 164061 | null |
613104 | 1 | null | null | 0 | 11 | I am planning a learning experiment where I will analyze choices over time (trial), and I have no experience with this kind of data and would appreciate some starting help. Simplified, I expect the data to come in this format.
[](https://i.stack.imgur.com/Dj0pF.png)
Each participant must choose an action (count) for each trial for ten trials, and I want to code 1 if that is the best action or 0 if it is not. I will have two experimental groups (A and B). I expect one experimental group to choose better strategies over time. My goal is to test this prediction (or the probability of the data, given the distribution under the null). The participants will be nested within schools, and I expect each school to vary in how participants start and select actions over time. How can I analyze this data? I guess I have some choices, e.g., a hierarchical GAM or a hierarchical generalized linear model (GLM), but I don't know what's best.
If I use hierarchical GLM, I guess I could model it the following way:
```
glmer(choice ~ trial + group + trial:group + (1 + trial | participant) + (1 + trial | school), family = binomial)
```
If I use hierarchical GAM, I guess I could model it the following way:
```
gam(choice ~ group + s(trial, by = group) + s(trial, participant, m = 1, bs = "fs"), family = binomial)
```
I wonder if this is a good starting point, and I would like to hear your comments.
| Appropriate statistical model for analyzing longitudinal choice data? | CC BY-SA 4.0 | null | 2023-04-16T11:13:02.873 | 2023-04-16T11:13:02.873 | null | null | 298551 | [
"generalized-linear-model",
"panel-data",
"proportion",
"count-data",
"generalized-additive-model"
] |
613105 | 1 | 613107 | null | 0 | 14 | When I first downloaded R, every time I ran a t.test or wilcox.test, the output would include not only the p-value but also the mean of the two groups being compared, class-interval etc. I have tried using the summary function but it doesn't provide any extra detail. Now all I get is the following output only:
Wilcoxon rank sum test with continuity correction
data: Change_SIT2 by Exercise condition
W = 1449, p-value = 0.4718
alternative hypothesis: true location shift is not equal to 0
Did I mess up my settings?
How can I get the additional information I used to get?
| Issues with my output. Did I mess up my RStudio settings? | CC BY-SA 4.0 | null | 2023-04-16T11:16:21.107 | 2023-04-16T11:37:37.657 | null | null | 374911 | [
"r",
"p-value"
] |
613106 | 1 | null | null | 1 | 19 | What exactly are the standardized and unstandardized canonical correlation coefficients and what is the difference between them?
| Standardized and unstandardized canonical correlation coefficients | CC BY-SA 4.0 | null | 2023-04-16T11:37:33.303 | 2023-06-02T05:41:46.243 | 2023-06-02T05:41:46.243 | 121522 | 358071 | [
"standardization",
"canonical-correlation"
] |
613107 | 2 | null | 613105 | 0 | null | This looks like the standard output of wilcox.test, nothing wrong there. It sounds like you're missing the t-test output, probably earlier you ran both and now for some reason you only ran wilcox.test. If you consciously run and print both, chances are you get what you're asking for from the t-test output.
| null | CC BY-SA 4.0 | null | 2023-04-16T11:37:37.657 | 2023-04-16T11:37:37.657 | null | null | 247165 | null |
613108 | 1 | null | null | 2 | 36 | [](https://i.stack.imgur.com/hu2Fo.png)
I have been confused by problem (9.3). I think it is reasonable that, when adding more predictors to the regression, the SSE becomes smaller. Yet, I don't know how to prove it.
Besides, problem 10.1 seems to imply that (9.3) is a chi-square random variable, but I still have no idea how to prove it.
Please give me some hint, and thank you very much!
| How to solve the following two problems of linear regression? | CC BY-SA 4.0 | null | 2023-04-16T11:40:56.653 | 2023-04-16T11:40:56.653 | null | null | 385824 | [
"multiple-regression"
] |
613109 | 1 | null | null | 1 | 13 | I really appreciate your help with my data.
I am using the mlogit package in R for a random utility model: [https://www.jstatsoft.org/article/view/v095i11](https://www.jstatsoft.org/article/view/v095i11).
The dependent variable "choice" of provinces depends on the alternative-specific variables logPCI, logopenness, logseaport3, loglabour3 and logunem1, and on 4 other variables: logMAinside1, logMAoutside1, logSAinside1, logSAoutside1.
I have a question regarding the 4 variables logMAinside1, logMAoutside1, logSAinside1 and logSAoutside1. Are they alternative-specific variables or individual-specific variables? Firms belonging to the same "sector" in the same province have the same values of these 4 variables (for example, firm id 108582716 and firm id 108594870 in the sample dataset). Firms belonging to the same sector but in different provinces have different values of these four variables. In other words, the values of these four variables vary by sector and by province (alternative). How can I use the separator to account for this information? I read the manual of the mlogit package in R but do not know how to apply it to my data. Am I correct when I use the separator (|) like this?
ml.mydata<- mlogit(choice ~ logPCI + logopenness + logseaport3 + loglabour3 + logunem1| logMAinside1 + logMAoutside1 + logSAinside1 + logSAoutside1, mydata, reflevel = "Hanoi_city").
Here is my sample data set.
```
structure(
list(
firm_id = c(
108582716,
108582716,
108582716,
108594870,
108594870,
108594870,
108595095,
108595095,
108595095,
108605233,
108605233,
108605233,
201761149,
201761149,
201761149,
201762784,
201762784,
201762784,
201764559,
201764559,
201764559
),
alt = c(
"Haiphong_city",
"Phutho_province",
"Hanoi_city",
"Haiphong_city",
"Phutho_province",
"Hanoi_city",
"Haiphong_city",
"Phutho_province",
"Hanoi_city",
"Haiphong_city",
"Phutho_province",
"Hanoi_city",
"Hanoi_city",
"Haiphong_city",
"Phutho_province",
"Haiphong_city",
"Hanoi_city",
"Phutho_province",
"Haiphong_city",
"Hanoi_city",
"Phutho_province"
),
choice = c(0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1,
0, 0, 1, 0, 0),
year_operation = c(
2019,
2019,
2019,
2019,
2019,
2019,
2019,
2019,
2019,
2019,
2019,
2019,
2017,
2017,
2017,
2017,
2017,
2017,
2017,
2017,
2017
),
sector = c(
"SA",
"SA",
"SA",
"SA",
"SA",
"SA",
"SB",
"SB",
"SB",
"SM",
"SM",
"SM",
"SA",
"SA",
"SA",
"SN",
"SN",
"SN",
"SD",
"SD",
"SD"
),
technology = c(
"Low",
"Low",
"Low",
"Low",
"Low",
"Low",
"Low",
"Low",
"Low",
"High",
"High",
"High",
"Low",
"Low",
"Low",
"High",
"High",
"High",
"Low",
"Low",
"Low"
),
logMAinside1 = c(
8.2224178,
8.2224178,
8.2224178,
8.2224178,
8.2224178,
8.2224178,
8.0093489,
8.0093489,
8.0093489,
8.2393103,
8.2393103,
8.2393103,
6.7310853,
6.7310853,
6.7310853,
8.9314318,
8.9314318,
8.9314318,
7.2551007,
7.2551007,
7.2551007
),
logMAoutside1 = c(
8.1201506,
8.1201506,
8.1201506,
8.1201506,
8.1201506,
8.1201506,
8.2819128,
8.2819128,
8.2819128,
9.70644,
9.70644,
9.70644,
8.0759678,
8.0759678,
8.0759678,
8.5643978,
8.5643978,
8.5643978,
7.8877192,
7.8877192,
7.8877192
),
logSAinside1 = c(
7.7692056,
7.7692056,
7.7692056,
7.7692056,
7.7692056,
7.7692056,
7.93539,
7.93539,
7.93539,
11.101022,
11.101022,
11.101022,
6.561851,
6.561851,
6.561851,
11.221056,
11.221056,
11.221056,
9.6826878,
9.6826878,
9.6826878
),
logSAoutside1 = c(
7.8650966,
7.8650966,
7.8650966,
7.8650966,
7.8650966,
7.8650966,
8.2279396,
8.2279396,
8.2279396,
9.4778214,
9.4778214,
9.4778214,
7.7405944,
7.7405944,
7.7405944,
8.3962173,
8.3962173,
8.3962173,
8.0652342,
8.0652342,
8.0652342
),
logseaport3 = c(
1.4350846,
5.3082676,
4.8202815,
1.4350846,
5.3082676,
4.8202815,
1.4350846,
5.3082676,
4.8202815,
1.4350846,
5.3082676,
4.8202815,
4.8202815,
1.4350846,
5.3082676,
1.4350846,
4.8202815,
5.3082676,
1.4350846,
4.8202815,
5.3082676
),
logpop = c(
7.161622,
5.9839363,
7.7137847,
7.161622,
5.9839363,
7.7137847,
7.161622,
5.9839363,
7.7137847,
7.161622,
5.9839363,
7.7137847,
7.6879973,
7.145196,
5.9687076,
7.145196,
7.6879973,
5.9687076,
7.145196,
7.6879973,
5.9687076
),
logopenness = c(
5.5514636,
4.7599664,
4.702239,
5.5514636,
4.7599664,
4.702239,
5.5514636,
4.7599664,
4.702239,
5.5514636,
4.7599664,
4.702239,
4.8924828,
5.1715345,
4.6572061,
5.1715345,
4.8924828,
4.6572061,
5.1715345,
4.8924828,
4.6572061
),
logPCI = c(
4.230186,
4.1826606,
4.2312036,
4.230186,
4.1826606,
4.2312036,
4.230186,
4.1826606,
4.2312036,
4.230186,
4.1826606,
4.2312036,
4.1066027,
4.0960097,
4.0707345,
4.0960097,
4.1066027,
4.0707345,
4.0960097,
4.1066027,
4.0707345
),
loglabour3 = c(
8.9138956,
8.7382259,
9.1257057,
8.9138956,
8.7382259,
9.1257057,
8.9138956,
8.7382259,
9.1257057,
8.9138956,
8.7382259,
9.1257057,
8.9457035,
8.8061895,
8.6069441,
8.8061895,
8.9457035,
8.6069441,
8.8061895,
8.9457035,
8.6069441
),
logunem1 = c(
5.857933,
5.2470241,
5.2983174,
5.857933,
5.2470241,
5.2983174,
5.857933,
5.2470241,
5.2983174,
5.857933,
5.2470241,
5.2983174,
5.7365723,
5.8289456,
5.2983174,
5.8289456,
5.7365723,
5.2983174,
5.8289456,
5.7365723,
5.2983174
),
logroad = c(
8.8258247,
6.5759125,
8.9816189,
8.8258247,
6.5759125,
8.9816189,
8.8258247,
6.5759125,
8.9816189,
8.8258247,
6.5759125,
8.9816189,
8.830514,
8.5009623,
6.4081993,
8.5009623,
8.830514,
6.4081993,
8.5009623,
8.830514,
6.4081993
),
group = c(
"Key_North",
"Key_North",
"Key_North",
"Key_North",
"Key_North",
"Key_North",
"Key_North",
"Key_North",
"Key_North",
"Key_North",
"Key_North",
"Key_North",
"Key_North",
"Key_North",
"Key_North",
"Key_North",
"Key_North",
"Key_North",
"Key_North",
"Key_North",
"Key_North"
)
),
row.names = c(NA,-21L),
spec = structure(list(
cols = list(
firm_id = structure(list(), class = c("collector_double",
"collector")),
alt = structure(list(), class = c("collector_character",
"collector")),
choice = structure(list(), class = c("collector_double",
"collector")),
year_operation = structure(list(), class = c("collector_double",
"collector")),
sector = structure(list(), class = c("collector_character",
"collector")),
technology = structure(list(), class = c("collector_character",
"collector")),
logMAinside1 = structure(list(), class = c("collector_double",
"collector")),
logMAoutside1 = structure(list(), class = c("collector_double",
"collector")),
logSAinside1 = structure(list(), class = c("collector_double",
"collector")),
logSAoutside1 = structure(list(), class = c("collector_double",
"collector")),
logseaport3 = structure(list(), class = c("collector_double",
"collector")),
logpop = structure(list(), class = c("collector_double",
"collector")),
logopenness = structure(list(), class = c("collector_double",
"collector")),
logPCI = structure(list(), class = c("collector_double",
"collector")),
loglabour3 = structure(list(), class = c("collector_double",
"collector")),
logunem1 = structure(list(), class = c("collector_double",
"collector")),
logroad = structure(list(), class = c("collector_double",
"collector")),
group = structure(list(), class = c("collector_character",
"collector"))
),
default = structure(list(), class = c("collector_guess",
"collector")),
delim = ","
), class = "col_spec"),
class = c("spec_tbl_df",
"tbl_df", "tbl", "data.frame")
)
```
| How to put separator among variables in R mlogit package | CC BY-SA 4.0 | null | 2023-04-16T11:45:21.567 | 2023-04-16T11:50:22.597 | 2023-04-16T11:50:22.597 | 385823 | 385823 | [
"mlogit"
] |
613110 | 1 | 613113 | null | 0 | 52 | My statistics teacher told us the following asymptotic result:
$X \sim N(0,1) $
$$ P(X > u) \underset{u \rightarrow +\infty}{\sim} \frac{1}{u} \exp\left(-\frac{u^2}{2}\right). $$
Do you know how to demonstrate this (or whether there is a mistake)?
Thank you.
| Asymptotic equivalence of the survival function of a standard Gaussian | CC BY-SA 4.0 | null | 2023-04-16T12:22:32.720 | 2023-04-16T13:18:12.947 | 2023-04-16T12:49:18.123 | 20519 | 385825 | [
"probability",
"mathematical-statistics",
"survival",
"probability-inequalities"
] |
613111 | 1 | null | null | 2 | 69 | I am running a linear mixed effects model for time series data using R-INLA. My response variable is normally distributed. The model has a random intercept, and temporal autocorrelation between data points is modelled with a first order autoregressive structure. The parameter estimates from the model match our hypotheses. However, when I check diagnostic plots for the model, the residuals vs fitted values plot has a very clear pattern. Other diagnostic plots (like the QQ plot, etc.) look fine. I’m assuming that this residual vs fitted plot is telling me that there is a non-linear relationship between my response variable and my fixed effects variables. But I am not sure how to fix this. I was wondering if you could please give me some suggestions.
Here is the plot:
[](https://i.stack.imgur.com/iE1AX.png)
Here is the code for the model:
```
m_inla1 <- inla(y~ x1 +
x2+
x3+
x4+
x5+
f(group, model = "ar1"),
data = dat,
control.compute = list(waic = TRUE, dic = TRUE, cpo = TRUE, return.marginals.predictor=TRUE))
```
Where y is a continuous variable. Four of the covariates (x1, x2, x3, and x4) are also continuous variables. These have been standardised using the r function ‘scale’, which standardises each value by subtracting the mean and dividing by the standard deviation. The final covariate (x5) is a binary categorical variable that takes the value 0 or 1. The variable ‘group’ is the random intercept and has 48 levels.
Thanks so much!
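To illustrate the suspicion about nonlinearity (a toy sketch of my own, unrelated to INLA): fitting a straight line to a quadratic truth produces exactly the kind of systematic residual-vs-fitted pattern described.

```python
import numpy as np

x = np.linspace(0, 1, 101)
y = x**2                          # true relationship is quadratic

b, a = np.polyfit(x, y, 1)        # mis-specified linear fit: y ~ a + b*x
fitted = a + b * x
resid = y - fitted

# U-shaped pattern: residuals positive at the extremes, negative in the middle
print(resid[0] > 0, resid[50] < 0, resid[-1] > 0)   # True True True
```

A common fix is to add polynomial or spline terms for the offending covariate and then re-check the plot.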
| Understanding residual vs fit plot for mixed effect model with AR1 structure | CC BY-SA 4.0 | null | 2023-04-16T12:44:06.117 | 2023-04-17T08:52:10.703 | 2023-04-17T06:17:07.440 | 385831 | 385831 | [
"mixed-model",
"residuals",
"autocorrelation",
"glmm",
"diagnostic"
] |
613113 | 2 | null | 613110 | 1 | null | The expression you gave is close (you missed the factor $\frac{1}{\sqrt{2\pi}}$) to the correct asymptotic behavior of the survival function of standard normal distribution:
\begin{align*}
1 - \Phi(x) \sim \frac{1}{x}\varphi(x), \tag{1}
\end{align*}
where $\Phi$ and $\varphi$ are the CDF and PDF of a standard normal random variable, respectively.
$(1)$ is a corollary of the inequality (for fixed $x > 0$)
\begin{align*}
(x^{-1} - x^{-3})\varphi(x) < 1 - \Phi(x) < x^{-1}\varphi(x). \tag{2}
\end{align*}
To prove $(2)$, notice the obvious inequality:
\begin{align*}
(1 - 3t^{-4})\varphi(t) < \varphi(t) < (1 + t^{-2})\varphi(t), \; t > x. \tag{3}
\end{align*}
Integrating the three expressions above from $x$ to $\infty$ yields $(2)$ -- note that each term in $(2)$ is the primitive function of the corresponding term in $(3)$ times $-1$.
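A quick numeric check of $(1)$ and $(2)$ (a sketch in Python with scipy, not part of the derivation itself):

```python
import numpy as np
from scipy.stats import norm

u = 10.0
tail = norm.sf(u)                       # survival function 1 - Phi(u)
upper = norm.pdf(u) / u                 # x^{-1} phi(x), upper bound in (2)
lower = (1/u - 1/u**3) * norm.pdf(u)    # (x^{-1} - x^{-3}) phi(x), lower bound in (2)

print(lower < tail < upper)             # True: inequality (2) holds
print(tail / upper)                     # just below 1 (between 0.99 and 1), illustrating (1)
```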
| null | CC BY-SA 4.0 | null | 2023-04-16T13:01:21.937 | 2023-04-16T13:18:12.947 | 2023-04-16T13:18:12.947 | 20519 | 20519 | null |
613114 | 1 | null | null | 0 | 15 | I know that I should be able to work this out but I am struggling. My dependent variable is a log variable (call it y) and my independent variable is a proportion that takes a value between 0 and 1 (call it x).
I am struggling to interpret what Beta on the independent variable means for the log y variable.
Any help would be appreciated. Thanks
| Interpretation of proportion coefficient | CC BY-SA 4.0 | null | 2023-04-16T13:35:03.747 | 2023-04-16T13:35:03.747 | null | null | 385833 | [
"regression",
"interpretation",
"proportion",
"log-linear"
] |
613115 | 1 | null | null | 0 | 73 | I'm doing a random effects model. I have a dummy per treatment, which would be "Message", "MessageTax", "Tax" and "Donation" which refers to whether or not the experiment included a donation. The data comes from 2 similar experiments, but one was with a donation and the other was not, so the Message, Tax and MessageTax treatments are in both experiments.
This appears in Stata when I run this regression:
xtreg q Mensaje MensajeImpuesto Impuesto Donacion Donacion##Mensaje Donacion##MensajeImpuesto Donacion##Impuesto,re
How can I fix it?
[](https://i.stack.imgur.com/gSaCe.png)
| Why are my variables being omitted by Stata? | CC BY-SA 4.0 | null | 2023-04-16T14:12:48.540 | 2023-04-23T11:38:31.530 | 2023-04-23T11:38:31.530 | 385836 | 385836 | [
"regression",
"mixed-model",
"interaction",
"stata",
"multicollinearity"
] |
613116 | 1 | null | null | 5 | 136 | Let $X$ be a non-negative random variable with finite variance. It is obvious that its MGF $E[e^{-\lambda(X-E[X])}]$ exists for $\lambda > 0$.
How to prove that $E[e^{-\lambda(X-E[X])}] \le \exp(\lambda^2 E[X^2]/2)$ for $\lambda > 0$?
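One standard route (a sketch of a well-known argument, not taken from the question): combine the pointwise inequality $e^{-u} \le 1 - u + u^2/2$ for $u \ge 0$ with $1 + v \le e^{v}$.

\begin{align*}
&\text{For } u \ge 0:\quad e^{-u} \le 1 - u + \tfrac{u^2}{2},
\quad\text{since } f(u)=1-u+\tfrac{u^2}{2}-e^{-u} \text{ satisfies } f(0)=f'(0)=0,\ f''(u)=1-e^{-u}\ge 0.\\
&\text{With } u=\lambda X \ge 0 \text{ (here we use } X \ge 0\text{):}\quad
E[e^{-\lambda X}] \le 1 - \lambda E[X] + \tfrac{\lambda^2 E[X^2]}{2}
\le e^{-\lambda E[X] + \lambda^2 E[X^2]/2},\\
&\text{using } 1+v \le e^{v}. \text{ Multiplying both sides by } e^{\lambda E[X]} \text{ gives}\quad
E[e^{-\lambda(X-E[X])}] \le e^{\lambda^2 E[X^2]/2}.
\end{align*}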
| Upper Bound of MGF for a non-negative random variable with bounded variance | CC BY-SA 4.0 | null | 2023-04-16T14:23:03.653 | 2023-05-18T13:17:53.160 | null | null | 281197 | [
"probability",
"probability-inequalities",
"moment-generating-function"
] |
613117 | 1 | null | null | 9 | 805 | Say I am trying to run a logistic regression on an individual's choice between objects A and B based on some vector of characteristics.
My dataset, however, only happens to have the proportion of people who chose A and the proportion of people who chose B in different countries and a vector of aggregate characteristics of the individuals in that country. Is there a way to identify the effects of these characteristics on an individual's choice?
| How can I make inferences about individuals from aggregated data? | CC BY-SA 4.0 | null | 2023-04-16T14:25:07.720 | 2023-04-17T12:59:45.000 | 2023-04-17T12:59:45.000 | 509 | 385839 | [
"logistic",
"choice-modeling"
] |
613119 | 2 | null | 600115 | 1 | null | A two-sided t-test of one regression coefficient is equivalent to an F-test of nested models where one model contains the variable with that coefficient and the other model contains the same variables except for that one. In some sense, whenever you run a two-sided t-test of one regression coefficient, you are doing an F-test, even if you think of it in terms of a t-test.
I assume that Casella/Berger shows the equivalence of t-stats and F-stats somewhere.
Casella, George, and Roger L. Berger. Statistical inference. Cengage Learning, 2021.
| null | CC BY-SA 4.0 | null | 2023-04-16T14:38:18.197 | 2023-04-16T14:38:18.197 | null | null | 247274 | null |
613120 | 2 | null | 600115 | 2 | null | No, such a test will not always be used.
- There are for sure situations in which it is crystal clear from background knowledge even before seeing the data that the involved explanatory variables are connected to the response. In such situations a precise strength of effect (and quantifying uncertainty) is of interest, but a test of a null hypothesis of "no relation" isn't of interest as rejection will not be informative beyond what is already known.
- There is a considerable number of statisticians these days who oppose the use of statistical hypothesis tests and the notion of significance quite generally. These people may advocate Bayesian methods and may not run F-tests.
- Model assumptions behind the F-test may be violated to the extent that it should not be trusted. Bootstrap/resampling, nonparametric, or robust techniques may be used instead.
| null | CC BY-SA 4.0 | null | 2023-04-16T14:47:37.597 | 2023-04-16T14:47:37.597 | null | null | 247165 | null |
613121 | 1 | null | null | 0 | 18 | If I use growth rate of Y as the dependent variable in an ARDL model, do predictors need to be converted into growth rates, too?
| If the dependent variable in an ARDL model is a growth rate, must predictors also be growth rates? | CC BY-SA 4.0 | null | 2023-04-16T14:50:09.233 | 2023-04-16T15:24:46.683 | 2023-04-16T15:24:46.683 | 53690 | 383188 | [
"data-transformation",
"ardl"
] |
613122 | 1 | null | null | -1 | 35 | I have a classification problem, with a target categorical variable. Is a string variable that involves some sort of classification: good, medium and bad.
Is there some algorithm or technique (involving ordinal label encoding, for example) that takes advantage of the 'sorted' target? Could it be an approach to convert de target to integer with ordinal encoding, use a regression algorithm, and later 'round' the results to get the categorical predictions?
 | How to encode labels for an ordered categorical target? Is there an algorithm for this type of problem? | CC BY-SA 4.0 | null | 2023-04-16T14:56:42.137 | 2023-04-21T19:22:47.343 | 2023-04-16T17:37:25.097 | 22311 | 381118 | [
"regression",
"machine-learning",
"logistic",
"ordered-logit"
] |
613124 | 1 | null | null | 0 | 18 | Suppose $r$ is the sample Pearson correlation coefficient estimator of two random variables $X$ and $Y$, while $rho$ is the population correlation coefficient. From various sources, e.g., [Is the sample correlation coefficient an unbiased estimator of the population correlation coefficient?](https://stats.stackexchange.com/questions/220961/is-the-sample-correlation-coefficient-an-unbiased-estimator-of-the-population-co), we know that, if $X$ and $Y$ obey the bi-variate Gaussian random variables with population correlation coefficient $\rho$, the bias of the sample estimator is
$$
E[r]=\rho\left[ 1-\dfrac{1-\rho^2}{2N}+O(1/N^2) \right].
$$
Then what is the sampling bias of the estimator $r^2$?
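A Monte Carlo sketch (my own addition, not from the cited sources) illustrating the downward bias of $r$ for bivariate Gaussian data; the bias of $r^2$ can be estimated the same way:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, N, reps = 0.5, 10, 200_000

# Draw `reps` bivariate-normal samples of size N with correlation rho
x = rng.standard_normal((reps, N))
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal((reps, N))

xc = x - x.mean(axis=1, keepdims=True)
yc = y - y.mean(axis=1, keepdims=True)
r = (xc * yc).sum(axis=1) / np.sqrt((xc**2).sum(axis=1) * (yc**2).sum(axis=1))

print(r.mean())                        # below rho: downward bias of r
print(rho * (1 - rho**2) / (2 * N))    # leading-order bias term, ~0.019
print((r**2).mean() - rho**2)          # empirical bias of r^2
```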
| What is the sampling bias in sample Pearson correlation coefficient squared? | CC BY-SA 4.0 | null | 2023-04-16T15:00:58.127 | 2023-04-16T15:00:58.127 | null | null | 41145 | [
"correlation",
"sampling",
"bias"
] |
613125 | 1 | null | null | 0 | 8 | As part of a student project, I'm working on explaining this paper (link [https://www.jstor.org/stable/43305585](https://www.jstor.org/stable/43305585) ) "A simple test for random effects in regression models". For context, it provides a way to test if a random effect is present by testing (in a LMM or GLMM) whether we can reject the null hypothesis that its variance is zero.
For this test, it's necessary to obtain the distribution of the random effects $b$ given the fixed effects, i.e., we need to know $f(b|\beta)$. In the LMM case, this is all derived in detail. I'm having trouble working out the GLMM case, where the author quickly states that the test still works, alluding to an asymptotic distribution of the MAP $b$. I'm only aware of the well-known asymptotic normal distribution of the MLE. In the GLMM case, the MAP estimate differs from the MLE. Is there a corresponding asymptotic distribution for MAPs?
I also have a (hopefully related) question:
If I have the large sample result (from "Generalized Additive Models", Simon Wood)
$$ \mathcal{B}|y\sim N(\hat{\mathcal{B}}, \Sigma)$$ for some matrix $\Sigma$ where $\mathcal{B}^T=(\beta^T,b^T)$ and $\hat{\mathcal{B}}$ is the MAP estimate, is there a way to "invert" this result analytically to obtain the distribution of $\hat{\mathcal{B}}$ (in particular $\hat{b}$)?
| Results on the variance of the MAP estimate of the random effects in a GLMM? | CC BY-SA 4.0 | null | 2023-04-16T15:16:06.263 | 2023-04-16T15:16:06.263 | null | null | 371599 | [
"mixed-model",
"generalized-linear-model",
"glmm",
"asymptotics"
] |
613126 | 1 | null | null | 0 | 11 | I have been trying to create a neural network from scratch.
I have been trying to calculate the gradients of the weights and biases of the neural network by watching videos and reading papers, but still have been unable to get a good grasp on it.
z is the weighted sum of the nodes of the previous layer plus the bias of the node.
a is the sigmoid of z
I believe I am having trouble with the w(L-number) terms in the formula, because I think there should be more w(L-number) factors; it does not work otherwise.
I have added some pictures for clarification:
[](https://i.stack.imgur.com/z6yoT.jpg)
[](https://i.stack.imgur.com/6mwWh.jpg)
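A minimal single-layer sketch (my own, following the z = Wx + b, a = sigmoid(z) notation above), with a finite-difference check so each gradient formula can be verified numerically. For deeper networks the same dz propagates backwards through dz_{l-1} = (W_l^T dz_l) * a_{l-1}(1 - a_{l-1}), which is where the extra w(L-number) factors enter.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(W, b, x, y):
    a = sigmoid(W @ x + b)
    return 0.5 * np.sum((a - y) ** 2)

def grads(W, b, x, y):
    z = W @ x + b
    a = sigmoid(z)
    dz = (a - y) * a * (1 - a)   # dL/da * da/dz, using sigmoid'(z) = a(1-a)
    return np.outer(dz, x), dz   # dL/dW, dL/db

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
x, y = rng.normal(size=4), rng.normal(size=3)

dW, db = grads(W, b, x, y)

# Finite-difference check of one weight entry
eps = 1e-6
Wp = W.copy(); Wp[1, 2] += eps
num = (loss(Wp, b, x, y) - loss(W, b, x, y)) / eps
print(abs(num - dW[1, 2]) < 1e-4)   # True: analytic and numeric gradients agree
```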
| How do you find the gradients of weights and biases in neural network during back propagation? | CC BY-SA 4.0 | null | 2023-04-16T15:16:12.697 | 2023-04-16T15:16:12.697 | null | null | 385838 | [
"gradient-descent",
"backpropagation",
"gradient"
] |
613127 | 2 | null | 613121 | 0 | null | From the statistical perspective, there is no such requirement. From the subject-matter perspective, such a requirement may or may not make sense.
| null | CC BY-SA 4.0 | null | 2023-04-16T15:22:25.320 | 2023-04-16T15:22:25.320 | null | null | 53690 | null |
613128 | 1 | null | null | 6 | 1158 | Let's say part of the mainstream believes that a drug X is more effective than drug Y. Another part of the mainstream believes that drug x is less effective than drug Y. A scientist appears who wants to challenge it and show that the drug X is equivalent to the drug Y, generating the same result and the same side effects, since both act exactly by the same mechanism.
I know the example seems a little forced, but it is just to provoke the classic definition that always aligns the alternative hypothesis with the research hypothesis.
In this case, should the equality (expressed as a small interval) be the alternative hypothesis, and therefore the difference (outside of the cited small interval) be the null hypothesis?
Previously, [Gavin Simpson](https://stats.stackexchange.com/users/1390/gavin-simpson) argued in his answer to [that question](https://stats.stackexchange.com/questions/13797/which-one-is-the-null-hypothesis-conflict-between-science-theory-logic-and-sta) that equality could be used in the alternative hypothesis, but I would like to see more opinions.
>
In statistics there are tests of equivalence as well as the more
common test the Null and decide if sufficient evidence against it. The
equivalence test turn this on its head and posits that effects are
different as the Null and we determine if there is sufficient evidence
against this Null.
I'm not clear on your drug example. If the response is a
value/indicator of the effect, then an effect of 0 would indicate not
effective. One would set that as the Null and evaluate the evidence
against this. If the effect is sufficiently different from zero we
would conclude that the no-effectiveness hypothesis is inconsistent
with the data. A two-tailed test would count sufficiently negative
values of effect as evidence against the Null. A one tailed test, the
effect is positive and sufficiently different from zero, might be a
more interesting test.
If you want to test if the effect is 0, then we'd need to flip this
around and use an equivalence test where the H0 is the effect is not
equal to zero, and the alternative is that H1 = the effect = 0. That
would evaluate the evidence against the idea that effect was different
from 0.
ChatGPT has agreed with the cited user, and gave me the following answer to the same question (Don't be upset with me. I quote this answer because it raised reflections on me):
>
Yes, in this case, the alternative hypothesis would be that the two
drugs, X and Y, are equivalent in terms of effectiveness and side
effects. The null hypothesis would be that there is a significant
difference between the two drugs in terms of effectiveness or side
effects. The scientist's aim would be to gather evidence that supports
the alternative hypothesis and rejects the null hypothesis.
It's worth noting that the null hypothesis is often stated as the
opposite of the research hypothesis in order to provide a clear
statement of what the scientist is trying to disprove. However, in
this case, the null hypothesis does not necessarily contradict the
mainstream beliefs that you mentioned. It simply states that there is
a difference between the drugs, whereas the mainstream beliefs
disagree on the direction of that difference.
| Should the alternative hypothesis always be the research hypothesis? | CC BY-SA 4.0 | null | 2023-04-16T15:25:59.947 | 2023-04-27T00:36:44.347 | 2023-04-17T12:07:02.517 | 207671 | 207671 | [
"hypothesis-testing"
] |
613129 | 1 | null | null | 1 | 10 | I am running a regression of y on four variables, two of which are binary variables and two of which are discrete variables which range from 0 to 10. For context, y was data collected from an experiment, and the values of the four independent variables are provided as information to the participants.
I find that the coefficient of the binary variables are larger than the coefficient for the discrete variables. I am wondering if this is simply due to a technicality or whether I should consider this a finding.
I am considering creating "fake variables" to check whether it is really a technicality. I have tried creating a fake variable from the discrete variables by halving the values and then adding noise. I do see an increase in the coefficient, but this way of checking seems quite arbitrary.
Is there a way to check using simulation whether the coefficients are due to the experiment design? I cannot run this experiment again.
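One quick way to see the pure scale effect (a sketch of my own, not your experiment): rescaling a predictor rescales its coefficient inversely, with the fit otherwise unchanged.

```python
import numpy as np

x = np.arange(11, dtype=float)   # a 0-10 discrete predictor
y = 3.0 * x + 1.0                # deterministic outcome for illustration

slope_raw = np.polyfit(x, y, 1)[0]             # coefficient on the 0-10 scale
slope_rescaled = np.polyfit(x / 10, y, 1)[0]   # same variable on a 0-1 scale

print(slope_raw)        # 3.0
print(slope_rescaled)   # 30.0 -- ten times larger, same underlying effect
```

Comparing a binary dummy with a 0-10 variable has the same flavor: the dummy's coefficient is the effect of a full 0-to-1 swing, while the discrete variable's coefficient is per unit out of ten.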
| How do I create a simulated variable to check whether coefficient size arises from technicality? | CC BY-SA 4.0 | null | 2023-04-16T15:36:44.750 | 2023-04-16T15:36:44.750 | null | null | 371338 | [
"least-squares",
"experiment-design"
] |
613131 | 1 | 613141 | null | 0 | 75 | In various literature, it is stated that when 2 or more variables are highly correlated it causes problems in your PCA analysis. However, in the literature that I've read, it is not stated how high the correlation needs to be in order to be too high.
The solution is to remove variables that are highly correlated. My question is: how much correlation is high enough to cause problems in a PCA analysis, in other words, when is correlation too high?
EDIT: Asking in relation to ordination methods/ multivariate statistical analysis where the objective is to find directions of most change in the data/relations between variables.
| When are variables too highly correlated to be included in PCA? | CC BY-SA 4.0 | null | 2023-04-16T16:01:32.910 | 2023-04-16T20:06:36.943 | 2023-04-16T18:06:33.900 | 71814 | 71814 | [
"correlation",
"pca",
"multivariate-analysis"
] |
613132 | 1 | null | null | 1 | 28 | I have a Dataset that represents if in some date a bus company made any kind of traffic infraction, and how many infractions were made. This is a view of the dataset where i have the Companies, the date, how many km were traveled by the buses of that company, how many infractions were made and if in that day that company made infractions (1 if yes, 0 if not)
[](https://i.stack.imgur.com/2LPO1.png)
I'm wondering if it makes sense to use Cox survival analysis in this kind of scenario to estimate the probability, over days, weeks and months, that a company will make an infraction, and which companies have the highest probability of it. Cox models need a survival time (time until event), but it's not as if infractions accumulate from time 0; they don't depend on how much time has passed since time 0. It's more like a pattern on certain dates.
| Cox survival statistic on Infractions made by bus companies | CC BY-SA 4.0 | null | 2023-04-16T16:06:24.310 | 2023-04-17T21:02:07.707 | 2023-04-16T22:29:40.907 | 385843 | 385843 | [
"probability",
"python",
"predictive-models",
"cox-model",
"algorithms"
] |
613133 | 1 | null | null | 0 | 11 | I’m trying to optimize a matrix by subsetting rows and then performing a calculation on the subsetted rows. The calculation is representing each of the columns exactly once and having as few duplicates as possible while also including as many rows as possible.
To be clear, the parameters to optimize are the following:
- p = Number of columns detected (More is better)
- q = Number of duplicate rows (Less is better)
- r = Penalty on including a duplicate row to increase p
Here is a simple example where there is a clear answer where rows [2,3,4] will be chosen (row 0 is dropped because row 2 is better):
```
A = [
[1,0,0,0,0],
[0,0,0,0,0],
[1,0,1,0,0],
[0,1,0,1,0],
[0,0,0,0,1],
]
```
p = 5 (all columns are represented)
q = 0 (no duplicates)
There will not always be a perfect combination, and sometimes it will need to accept a penalty (include a duplicate, increasing q) in order to increase the number of columns represented (p). Weighting this trade-off is important, which is done by r.
```
B = [
[1,1,0,1,0],
[0,0,0,0,0],
[1,0,0,0,1],
[0,0,1,0,0],
[0,0,0,0,1],
]
```
Best combination is [0,2,3]
p = 5 (all columns detected)
q = 1 (1 duplicate)
Lastly, there will sometimes be 2 or more subsets that have the best combination so it should include all best subsets:
```
C = [
[1,0,0,1,0],
[0,1,1,0,1],
[0,1,0,0,1],
[1,0,0,1,0],
[0,0,0,0,1],
]
```
In this one, I spot a few good options: [0,1], [1,3]
Other than doing bruteforce of all combinations of rows, can someone help me understand how to start implement this while leveraging any of the algorithms in Scikit-Learn, SciPy, NumPy, or similar in Python?
More specifically, what algorithms can I use to optimize the rows in a NumPy array that maximizes `p`, minimize `q` weighted by `r` (e.g., `score = p - q*r`)?
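Before reaching for library optimizers, a brute-force sketch of the score you describe (`score = p - q*r`) makes the objective concrete; for larger matrices you would swap this for a greedy or integer-programming approach (e.g. `scipy.optimize.milp` in recent SciPy). Counting `q` as extra coverings of already-covered columns is my assumption about your definition.

```python
from itertools import combinations

import numpy as np

def score_subset(A, rows, r):
    sub = A[list(rows)]
    col_hits = sub.sum(axis=0)       # how many chosen rows cover each column
    p = int((col_hits > 0).sum())    # columns represented at least once
    q = int(col_hits.sum() - p)      # extra (duplicate) coverings
    return p - q * r

def best_subsets(A, r=0.5):
    n = A.shape[0]
    scored = {}
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            scored[rows] = score_subset(A, rows, r)
    best = max(scored.values())
    return best, [rows for rows, s in scored.items() if s == best]

A = np.array([[1,0,0,0,0],
              [0,0,0,0,0],
              [1,0,1,0,0],
              [0,1,0,1,0],
              [0,0,0,0,1]])
print(best_subsets(A))   # score 5; (2, 3, 4) is among the winners
```

Note that the all-zero row ties (adding it changes neither p nor q), so you may also want a tie-break, e.g. preferring fewer rows among equal-score subsets.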
| How to subset a binary matrix to maximize the number of unique elements while minimizing duplicates? | CC BY-SA 4.0 | null | 2023-04-16T16:08:41.930 | 2023-04-17T01:25:17.027 | 2023-04-17T01:25:17.027 | 92493 | 92493 | [
"optimization",
"binary-data",
"regularization",
"matrix",
"numpy"
] |
613134 | 1 | null | null | 0 | 21 | I thought I understood the intuition behind censoring but I need to be perfectly clear about one thing which confuses me.
In my course book I only have the example of censoring from below:
P(y=1) = P(X'B + u > c) --> after normalizing so that c = 0 --> P(X'B + u > 0) --> P(u > -X'B) = by symmetry = P(X'B > u) = F(X'B)
where F(X'B) is the CDF.
When censoring from above, how exactly are these steps written out? Is the following correct?
P(y=1) = P(X'B + u < c) --> P(X'B + u < 0) = P(-u > X'B) = P(X'B > u) = F(X'B)
But is this correct? Shouldnt it be that when censoring from above we get P(y=1) = 1 - F(X'B)?
| Censored below and above | CC BY-SA 4.0 | null | 2023-04-16T16:09:35.860 | 2023-04-16T16:09:35.860 | null | null | 377742 | [
"censoring"
] |
613135 | 1 | null | null | 0 | 58 | I am aware this is very basic, and I think I may be overcomplicating it but can someone please confirm what the best thing to do is?
I am looking at female representation within special advisers in UK Government, specifically to see whether women are reaching the highest bands of seniority (which is PB4 in the data).
Here is a summary of the data for 2022:
[Female Representation Between Bands](https://i.stack.imgur.com/BrnIj.png)
However, I want to look at what % of the total cohort of women is in each band compared to men, and then determine whether the difference is statistically significant.
[% of cohort in each category](https://i.stack.imgur.com/RrPZy.png)
So for PB4, 3% of the total cohort of women make it to PB4 as opposed to 6% of all men, which is a relative difference of 100% - now I want to know what test I can use to check whether that difference is statistically significant?
Thank you!
---
Before I refined my research question to looking at percentages, I tried a Fisher's exact test (as my data fails normality checks on Q-Q plots) and it came up as not statistically significant.
But I am unsure if I did something wrong, because how can a difference of 100% not be significant?
| What statistical test do I use to check percentage differences? | CC BY-SA 4.0 | null | 2023-04-16T16:08:16.917 | 2023-04-16T22:55:34.580 | null | null | 385844 | [
"r"
] |
613136 | 2 | null | 613117 | 7 | null | This is mostly an expanded version of my comment. It's not possible to go backwards from aggregated to individual data. Indeed, suppose variable $\bar x$ for country $i$, takes on value $4$. Assume there are three individuals $x_1,x_2$ and $x_3$. Then for the three individuals, we could either choose $x_1=2, x_2= 6, x_3 = 4$ or $x_1=6, x_2=2, x_3=4$ or any other conforming partition. So we would end up with a dataset that doesn't necessarily coincide with the original one.
| null | CC BY-SA 4.0 | null | 2023-04-16T16:38:52.783 | 2023-04-16T17:43:41.507 | 2023-04-16T17:43:41.507 | 22311 | 56940 | null |
613137 | 1 | null | null | 3 | 29 | I'm new to econometrics and recently learned about fixed effects. If I have a cross sectional data, is it possible to include fixed effects? What seemed strange is that the number of dummy variables will be exactly the same as the number of observations in this case and there will be some covariates which means the number of parameters exceed the observations. So is it correct that it's impossible?
| Can we include fixed effects in cross sectional data? | CC BY-SA 4.0 | null | 2023-04-16T16:59:21.290 | 2023-04-17T23:07:10.317 | null | null | 355204 | [
"fixed-effects-model",
"cross-section"
] |
613138 | 1 | null | null | 0 | 12 | I've heard that the results from regressions with first differences and fixed effects yield the same results. If so, why would someone choose one over the other? In fact, I see most people using fixed effects. Is there a reason why?
| First differences vs fixed effects | CC BY-SA 4.0 | null | 2023-04-16T17:05:52.540 | 2023-04-16T17:05:52.540 | null | null | 355204 | [
"fixed-effects-model"
] |
613139 | 1 | 613451 | null | 3 | 88 | In a within-subjects design (n = 30), I had planned to run equivalence tests with the equivalence bounds set at the effect size of Cohen's d 0.35. This is partly based on previous research and partly due to my plan to infer practical (and not just theoretical) significance from the analysis.
It was unforeseen that the data would have a non-normal distribution of residuals, and therefore I will rely on equivalence tests using Wilcoxon signed-rank tests. Now I run into the issue of what my equivalence bounds should be.
Conversion of effect sizes
- Does it make sense to convert my intended Cohen's d into Wilcoxon r or rank biserial-correlation (rrb)? Naively, I thought I could find a midpoint between "small" and "medium" for either Wilcoxon r or rrb, but I've just realized that the size interpretation heuristics are less well-defined for non-parametric effect sizes. What's a valid way to convert from d to r or rrb?
Visualisation
- What's a good way to graph the equivalence test results from Wilcoxon signed rank tests? I don't think it's a valid approach to visually inspect whether 90% confidence intervals fall within the equivalence bounds, even if I plot the CIs for median of differences? Or would it be better practice (or clearer scientific communication) to plot the CIs in terms of rank biserial-correlation instead?
| Convert Cohen's d to nonparametric effect size and use as equivalence bounds | CC BY-SA 4.0 | null | 2023-04-16T17:12:28.467 | 2023-04-19T13:02:37.540 | 2023-04-19T10:01:35.243 | 54123 | 54123 | [
"nonparametric",
"effect-size",
"equivalence"
] |
613140 | 2 | null | 425257 | 0 | null | I think this variance is with respect to the sampled datasets.
Suppose we have two sampled datasets, i.e., A and B, and 5 predictors. With A, the first predictor is selected. However, with B, the first predictor may not be selected.
Thus, the model as a whole has a high variance.
| null | CC BY-SA 4.0 | null | 2023-04-16T17:26:03.697 | 2023-04-16T17:26:03.697 | null | null | 328961 | null |
613141 | 2 | null | 613131 | 2 | null | PCA by itself is just an algebraic transformation. Highly but imperfectly correlated variables do not cause trouble for it. They might cause trouble in an inferential statistical analysis that follows the PCA, but that is another step of the process.
| null | CC BY-SA 4.0 | null | 2023-04-16T17:26:58.837 | 2023-04-16T20:06:36.943 | 2023-04-16T20:06:36.943 | 53690 | 53690 | null |
613142 | 1 | null | null | 1 | 27 | When we create kernel densities we could use different kernels. Here I create an example with Gaussian, Rectangular and Triangular kernel:
[](https://i.stack.imgur.com/CrA2T.png)
When we check the start and end points of the distributions, they are all the same. We can see that the Gaussian and Triangular kernels curve smoothly to the edges, but the Rectangular kernel hits 0 density earlier. So I was wondering: why are the tails of the Rectangular kernel not cut off at the point where it hits 0 density? What is the purpose of showing the flat tail lines?
---
Code to create plot with example data:
```
set.seed(7)
values = runif(100, 0, 1)
par(mfrow=c(2,2))
plot(density(values,
kernel = "gaussian"),
main = "Gaussian", col = "red")
plot(density(values,
kernel = "rectangular"),
main = "Rectangular", col = "blue")
plot(density(values,
kernel = "triangular"),
main = "Triangular", col = "green")
plot(density(values,
kernel = "gaussian"), col = "red", main = "All")
lines(density(values, kernel = "rectangular"), col = "blue")
lines(density(values, kernel = "triangular"), col = "green")
```
| Why is Rectangular density kernel not cut off at tails? | CC BY-SA 4.0 | null | 2023-04-16T18:02:42.130 | 2023-04-16T18:02:42.130 | null | null | 323003 | [
"distributions",
"kernel-smoothing",
"density-estimation"
] |
613143 | 2 | null | 613117 | 12 | null | It sounds like this problem is an example of ecological inference.
[Gary King](https://gking.harvard.edu/) provides this definition in lecture slides: "[Advanced Quantitative Research Methodology Lecture
Notes: Ecological Inference](https://gking.harvard.edu/files/seitlk1p.pdf)" January 28, 2012
>
Ecological Inference is the process of using aggregate (i.e.,
“ecological”) data to infer discrete individual-level relationships of
interest when individual-level data are not available.
Gary King cautions that, if you can avoid making ecological inference, you'll be much better off! The reason ecological inference is risky is [as Utobi states](https://stats.stackexchange.com/a/613136/22311): the problem is under-specified.
That said, some problems are intractable without attempting ecological inference. For instance, ballots are secret, so estimating how voting totals (an aggregate in a geographic area) break down along demographic attributes (a quality of an individual) is a problem for ecological inference. Gary King is a proponent of the tomography method for ecological inference.
Gary King. A Solution to the Ecological Inference Problem: Reconstructing Individual Behavior from Aggregate Data.
Princeton University Press, 1997.
| null | CC BY-SA 4.0 | null | 2023-04-16T18:06:31.560 | 2023-04-17T03:14:29.323 | 2023-04-17T03:14:29.323 | 252490 | 22311 | null |
613144 | 1 | null | null | 1 | 43 | I am trying to build an auto-suggestion logic which looks at currently selected items and recommends a list of items to select the next item from.
I can formulate the problem thus:
Given a set of points S = {p1,p2,p3....,pn} for the selected items and another set of points M = {a1,a2,...,am} for the remaining items, I am looking to find the point in M that is "closest to the set" S. The distance metric could be euclidean, cosine or custom (based on different situations). For a specific case, let us only consider cosine similarity. Each point has some coordinates that I can use to compute similarity/distance.
It appears to me that a few things I can try are:
- Find a representative point in the set S (that representative point may not necessarily exist in S), e.g. the centroid of the points in S, and then find the nearest point from M to the centroid
- Try some form of hierarchical clustering (it is not necessary that if I apply hierarchical clustering on the union of S and M, points in S will neatly form a cluster)
- Create a dataset with S union M as rows and add a column with labels 1 and 0 for points from S and M respectively. Fit a classification model and find the point (belonging to M) that the highest probability for class 1
Can I please have some thoughts on what could be the best way to solve this problem, not necessarily limited to the three ideas that I have mentioned above.
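A sketch of your first idea (centroid plus cosine similarity), with data and names of my own choosing:

```python
import numpy as np

def next_suggestion(S, M):
    """Return the index into M of the candidate most cosine-similar
    to the centroid of the selected points S, plus all similarities."""
    centroid = S.mean(axis=0)
    centroid = centroid / np.linalg.norm(centroid)
    Mn = M / np.linalg.norm(M, axis=1, keepdims=True)
    sims = Mn @ centroid             # cosine similarity of each candidate
    return int(np.argmax(sims)), sims

S = np.array([[1.0, 0.1], [0.9, 0.2]])               # selected items
M = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])   # remaining candidates
idx, sims = next_suggestion(S, M)
print(idx)   # 0: the candidate pointing the same way as the selected set
```

If S is multi-modal, a single centroid can be misleading; then taking the maximum similarity over the individual points of S (single-linkage style), or your classifier idea, may serve better.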
| Finding the nearest point to a given set of points | CC BY-SA 4.0 | null | 2023-04-16T18:25:53.783 | 2023-04-21T14:47:26.680 | 2023-04-21T14:34:21.130 | 140378 | 140378 | [
"classification",
"similarities",
"hierarchical-clustering"
] |
613145 | 2 | null | 613128 | 3 | null | [@Dave](https://stats.stackexchange.com/users/247274/dave) gave me a light about the question and told me about the equivalence test, explained [here](https://www.integral-concepts.com/wp-content/media/What-is-Equivalence-Testing-and-When-Should-We-Use-It.pdf).
The hypothesis test for equivalence can be written as follows:
H0: The difference between the two group means is outside the equivalence interval
H1: The difference between the two group means is inside the equivalence interval
However, I have studied this question deeply over the last few days in many textbooks and other sources. There is no consensus among statisticians on this issue. I found several divergent opinions.
The [divergence](https://stats.stackexchange.com/questions/18988/do-null-and-alternative-hypotheses-have-to-be-exhaustive-or-not) concerns mainly the debate over whether the alternative hypothesis is always complementary to the null hypothesis. I, for my part, find it much more logical to consider the null and alternative hypotheses as perfectly complementary. [This](https://www.andrews.edu/%7Ecalkins/math/edrm611/edrm08.htm), [this](https://faculty.washington.edu/gloftus/Downloads/Loftus.NullHypothesis.2010.pdf) and [this](https://med.stanford.edu/stepup/research-scientific/_jcr_content/main/panel_builder/panel_0/tabs/tab_main_panel_builder_panel_0_tabs_1/download/file.res/Hyp%20test_Ps%20and%20errors.pptx) (sources linked to universities) agree with me.
As stated in the linked discussion, there can be a gray zone that is included in neither the alternative hypothesis nor the null hypothesis. In this case, rejecting the null hypothesis no longer serves conceptually to highlight the alternative hypothesis; it may merely mean that we are in the gray zone. For me there is no point in thinking that way.
However, almost all experts agree that the null hypothesis should always contain the '=' operator ("<=", "=" or ">="). See, for instance, [here](https://pressbooks-dev.oer.hawaii.edu/introductorystatistics/chapter/null-and-alternative-hypotheses/), [here](http://www.csun.edu/%7Emr31841/documents/lecture8_000.pdf) and [here](https://www.tcc.fl.edu/media/divisions/learning-commons/resources-by-subject/math/statistics/The-Null-and-the-Alternative-Hypotheses.pdf); these sources, linked to universities, adopt this line of thought.
And I understand why: always having '=' in the null hypothesis creates a certain standardization of the statistical methodologies that could reject this equality. It would be very confusing to have to deal with an alternative hypothesis that could contain equality.
We can then take the alternative hypothesis to be the exhaustive complement of the null hypothesis (">", "≠" or "<", respectively).
Sometimes the researcher believes in the alternative hypothesis, sometimes they don't.
This applies to the hypothetical situation that I've created, if we suppose the research hypothesis does not necessarily align with the researcher's beliefs.
In this context, the alternative hypothesis would be the hypothesis that "challenges" the null hypothesis, in the sense of finding out whether the study has statistical evidence regarded as sufficient to reject it.
Addendum: 2 textbooks as an example
1) Understandable Statistics - Brase, Brase -10th edition - 2012
>
...Any hypothesis that differs from the null hypothesis is called an
alternate hypothesis...
pg 411
>
In statistical testing, the null hypothesis H0 always contains the
equals symbol.
pg. 412
It agrees with these two claims (the null hypothesis should contain '=', and the null and alternative hypotheses are complementary).
2) Statistics - James McClave, Terry Sincich - 13th edition - 2018
>
...While alternative hypotheses are always specified as strict
inequalities, such as μ < 2,400, μ > 2,400, or μ ≠ 2,400, null
hypotheses are usually specified as equalities, such as μ = 2,400.
Even when the null hypothesis is an inequality, such as μ ≤ 2,400, we
specify H0: μ = 2,400, reasoning that if sufficient evidence exists to
show that Ha:μ > 2,400 is true when tested against H0: m = 2,400,
then surely sufficient evidence exists to reject μ < 2,400 as well...
pg 403
It agrees that `H0` should contain the '=' operator, but works with a "hole" in the hypothesis space. However, looking at the text, it's clear that this hole is treated as irrelevant.
| null | CC BY-SA 4.0 | null | 2023-04-16T18:45:10.120 | 2023-04-27T00:36:44.347 | 2023-04-27T00:36:44.347 | 207671 | 207671 | null |
613148 | 1 | null | null | 1 | 34 | If I have X and Y datasets, I will apply few-shot meta-learning by training the model on the X dataset and meta-testing on the Y dataset. When comparing my results with other state-of-the-art methods, can I use their pre-trained models and test them on the Y dataset, or do I have to train a model using their method on the X dataset and then test it on the Y dataset? I ask because the pre-trained models of the other methods are trained on a different dataset, not the X dataset.
I will compare my method with other few-shot learning methods for research purposes, but I don't know the right way. Thanks
| How to compare AI model with other models for research purposes | CC BY-SA 4.0 | null | 2023-04-16T19:17:24.270 | 2023-04-16T19:17:24.270 | null | null | 380942 | [
"machine-learning",
"model-evaluation",
"model-comparison",
"research-design",
"maml"
] |
613149 | 1 | null | null | 1 | 44 | I would like to run a Cross-Level Interaction in MPlus with a Level-2 Moderator and fixed slopes. I do not center the level 1 predictor because I use latent aggregation.
Does anyone know of any good code on Cross-Level-Interaction with only fixed slopes?
| MPlus Cross-Level-Interaction | CC BY-SA 4.0 | null | 2023-04-16T19:45:57.983 | 2023-04-17T11:37:18.713 | 2023-04-17T11:37:18.713 | 199063 | 385854 | [
"mplus"
] |
613150 | 1 | 613401 | null | 1 | 74 | Sometimes I have numerical columns that are composed of two unique values. For example, a value from the set $\{0.1, 5.4\}$ in every cell, or $\{-1, 0\}$ in every cell. I typically scale these columns with the rest of the numerical columns.
However, before my scaling step I perform an encoding step where I look for categorical columns with two unique values and encode with a label encoder.
This step could easily capture my two-unique-value numerical columns too, which would save me a scaling step.
However, I'm worried I'll lose important information. Is it safe to label-encode numerical columns with only two unique values?
| Do you lose information when you encode numerical columns with two values? | CC BY-SA 4.0 | null | 2023-04-16T19:53:02.183 | 2023-04-19T23:20:29.917 | 2023-04-19T19:36:29.313 | 363857 | 363857 | [
"binary-data",
"categorical-encoding",
"feature-scaling",
"scale-invariance"
] |
613151 | 1 | null | null | 1 | 13 | I am analyzing longitudinal data from high school students measured across 4 waves (2010, 2011, 2012, 2013). Of course not all high school students were Freshmen in 2010 so I have missing data for those who were Sophomores and up, for example here are those we have data for (x):
| |2010 |2011 |2012 |2013 |
|---|----|----|----|----|
|Freshmen |x |x |x |x |
|Sophomore |- |x |x |x |
|Jr |- |- |x |x |
|Sr |- |- |- |x |
Would these data be considered MAR or MNAR? I have missingness and the missingness is directly tied to a variable I have (grade level at Wave 1).
My next question is does running my model using REML fully resolve concerns related to this missingness? I have heard this but I cannot find an article supporting that it is so. Ideally a research article I could cite to alleviate reviewer concerns. For additional context, this is a longitudinal multilevel model.
In all, my questions are:
- Are my data MAR or MNAR?
- If they are MAR, does running my MLM using REML resolve this problem?
- If they are MNAR, does running my MLM using REML resolve this problem?
- Also, if they are MAR and running my MLM using REML does resolve the problem, do I have to include grade at Wave 1 in my model as a predictor?
Feel free to share if you have additional concerns I am overlooking. Thank you so much!
| I am trying to understand if my data are MAR and if using REML is a sufficient solution to dealing with this | CC BY-SA 4.0 | null | 2023-04-16T20:27:04.193 | 2023-04-16T20:27:04.193 | null | null | 385858 | [
"panel-data",
"multilevel-analysis",
"missing-data",
"reml"
] |
613152 | 1 | 613399 | null | 2 | 114 | Suppose I have a dataset where 100 patients have the disease (e.g. information such as height, smoking, weight, age, disease status) and 10000 patients do not have the disease (i.e. class imbalance).
I am interested in using Logistic Regression to try and understand what patient characteristics appear to influence the odds of having the disease. As noted, there are significantly more patients without the disease than with it.
I fear that fitting a Logistic Regression on the entire dataset might partly invalidate the results, as patients without the disease will have more influence on the model estimates. To potentially mitigate this problem, I am thinking of using Propensity Score Matching to select 100 patients who do not have the disease, chosen such that each has an "approximate analog" in the disease set. As a result, I will have a dataset with only 200 patients, and the ratio of disease to non-disease will be balanced.
I had the following question: by using this Propensity Score Matching approach, I will end up discarding lots of information corresponding to the non-diseased patients, and as a result might forfeit large amounts of valuable information that might be beneficial to the model. However, by including this information, I fear I risk "flooding" the model with too much information corresponding to the non-diseased patients and suppressing information belonging to the diseased patients.
In general - can Propensity Score Matching be used to mitigate problems/biases associated with class imbalance when fitting regression models to such types of problems?
Notes:
- https://www.stat.berkeley.edu/~freedman/weight.pdf
| Using Propensity Score Matching to Reduce "Class Imbalance" Biases? | CC BY-SA 4.0 | null | 2023-04-16T20:32:04.080 | 2023-04-19T01:42:22.390 | 2023-04-19T01:42:22.390 | 77179 | 77179 | [
"regression",
"logistic",
"unbalanced-classes",
"propensity-scores",
"matching"
] |
613153 | 2 | null | 613135 | 0 | null | I'm not exactly sure what your goal is, but if you want to know whether or not females make up 50% of the advisers in a particular band, I would suggest a binomial test `binom.test(x,n,p)`.
So for the PB4 band it's `binom.test(2, (2+5), 0.5)`: 2 'successes' (x), 2+5=7 'trials' (n) and 0.5 'probability of success' (p), i.e. assuming there are 50% women. The test gives a p-value of about 0.45. So, although 5 is more than twice 2, it's not significant, but that's not surprising, as you only have 7 'trials'.
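For reference, the exact p-value quoted above can be verified directly (this just restates the call already shown):

```r
# Exact two-sided binomial test: 2 women among 7 advisers, H0: p = 0.5
binom.test(x = 2, n = 7, p = 0.5)$p.value
# By symmetry this equals 2 * pbinom(2, 7, 0.5) = 0.453125
```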
P.S.: As you didn't give too many details about your approach with the Fisher exact test, I'm not sure what you did, but if you did something like:
```
cohort <- matrix(c(2,16,27,8,5,22,46,10),nrow=4)
fisher.test(cohort)
```
you wouldn't end up with a test that looks at whether 3% women in PB4 is less than 6% men, but rather whether the distribution of women and men on the four bands is different, which according to the test is not the case.
| null | CC BY-SA 4.0 | null | 2023-04-16T20:41:14.670 | 2023-04-16T20:41:14.670 | null | null | 383278 | null |
613155 | 1 | null | null | 2 | 76 | So I was going through this [paper](https://arxiv.org/abs/2011.06225) and under Uncertainty modeling it says [](https://i.stack.imgur.com/RhVZl.png)
So I tried deriving it on my own and I got
$p(\omega | X, Y) = \frac{p(Y | X, \omega) \cdot p(X,\omega)}{p(Y | X) \cdot P(X)}$
I am not quite sure how they removed the term $p(X,\omega)$ in the numerator and the $P(X)$ term in the denominator.
Another doubt is regarding how they arrived at the integral
$p(y^* \mid x^*, X, Y) = \int p(y^* \mid x^*, \omega) \, p(\omega \mid X, Y) \, d\omega$
Is there some proof for it, or is it based on intuition?
| Regarding the bayes rule derivation of posterior distribution, $p(\omega|x,y),$ for a given dataset $D$ over $\omega.$ | CC BY-SA 4.0 | null | 2023-04-16T21:28:02.793 | 2023-04-24T19:25:23.343 | 2023-04-24T19:25:23.343 | 375558 | 375558 | [
"probability",
"bayesian",
"mathematical-statistics",
"posterior",
"bayesian-network"
] |
613156 | 2 | null | 613128 | 8 | null | I would say that the "alternative hypothesis" is usually NOT a "proposed hypothesis".
You do not define "proposed hypothesis" and it is not a common phrase. Presumably you mean that it is either a statistical hypothesis or it is a scientific hypothesis. They are usually quite different things.
A scientific hypothesis usually concerns something to do with the true state of the real world, whereas a statistical hypothesis concerns only conditions within a statistical model. It is very common for the real world to be more complicated and less well-defined than a statistical model, and so inferences regarding a statistical hypothesis will need to be thoughtfully extrapolated to become relevant to a scientific hypothesis.
For your example a scientific hypothesis concerning the two drugs in question might be something like 'drug x can be substituted for drug y without any noticeable change in results experienced by the patients'. A relevant statistical hypothesis would be much more restricted along the lines of 'drug x and drug y have similar potencies' or that 'drug x and drug y have similar durations of action' or maybe 'doses of drug x and drug y can be found where they have similar effects'. Of course, the required degree of similarity and the assays used for evaluation of the statistical hypothesis will have to be defined. Apart from the enormous differences in scope of the scientific and potential statistical hypotheses, the first may require several or all of the others to be true.
If you want to know if a hypothesis is a statistical hypothesis then if it concerns the value of a parameter within a statistical model or can be restated as being about a parameter value, then it is.
Now, the "alternative hypothesis". For the hypothesis testing framework there are two things that are commonly called 'alternative hypotheses'. The first is an arbitrary effect size that is used in the pre-data calculation of test power (usually for sample size determination). That alternative hypothesis is ONLY relevant before the data are in hand. Once you have the data the arbitrarily specified effect size loses its relevance because the observed effect size is known. When you perform the hypothesis test the effective alternative becomes nothing more than 'not the null'.
It is a bad mistake to assume that a rejection of the null hypothesis in a hypothesis test leads to the acceptance of the pre-data alternative hypothesis, and it is just about as bad to assume that it leads to the acceptance of the observed effect size as a true hypothesis.
Of course, the hypothesis test framework is not the only statistical approach, and I would argue, it is not even the most relevant to the majority of scientific endeavours. If you use a likelihood ratio test then you can compare the data support for two specified parameter values within the statistical model and that means that you can do the same within a Bayesian framework.
| null | CC BY-SA 4.0 | null | 2023-04-16T21:31:36.717 | 2023-04-16T21:31:36.717 | null | null | 1679 | null |
613157 | 1 | null | null | 0 | 15 | Say you're training an NN and have different groups of samples, say number of groups is `ngroups`.
Each group has a different number of samples, say `nsamples1`, `nsamples2`, etc.
The number of samples per batch is `batch_size` and you want samples from each group to show up at least once (on average) during each batch, with replacement.
This means that groups with a smaller number of samples will get oversampled and seen multiple times per epoch, but at least each group will have equal representation during training.
How would you choose the sampling weights (probability of pulling a sample for a given batch size)?
My specific scenario is this:
I have 3 groups:
- Number of samples = 50
- Number of samples = 2000
- Number of samples = 3000
with `batch_size = 4`.
How do I select the sampling weights so that a sample from each group shows up at least once (on average) for each batch?
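One common scheme (a sketch, not necessarily the only reasonable choice) is inverse-frequency weighting: give every group the same total probability mass, so each group's expected count per batch is `batch_size / ngroups`, here 4/3, which satisfies the at-least-once-on-average requirement:

```r
nsamples <- c(50, 2000, 3000)  # samples per group
ngroups <- length(nsamples)
batch_size <- 4

# Each group gets total mass 1/ngroups, split evenly among its members,
# so small groups are oversampled relative to their size.
w_per_sample <- rep(1 / (ngroups * nsamples), times = nsamples)

# Expected appearances of any one group in a batch (with replacement):
batch_size / ngroups  # 4/3 >= 1 for every group
```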
| Importance sampling weights for NN training | CC BY-SA 4.0 | null | 2023-04-16T21:32:01.873 | 2023-04-16T21:32:01.873 | null | null | 364334 | [
"neural-networks",
"sampling",
"gradient-descent",
"importance-sampling"
] |
613158 | 1 | null | null | 1 | 32 | I recently learned how to test for concurvity on a GAMM I constructed, but I'm confused about how to test if 2 factors (2 categorical parametric terms), or 1 factor and a smooth term, are too closely related to be in the same model. The function `mgcv.helper::vif.gam()` was helpful, but only seems to recognize continuous variables.
Is there a way to test how individual factor variables are related to other terms in a model? The `performance::check_concurvity()` function looks promising too, but it also seems to lump all parametric variables into one term. Is concurvity the wrong word?
| Concurvity involving categorical variables (?) | CC BY-SA 4.0 | null | 2023-04-16T21:34:38.250 | 2023-04-17T02:02:07.890 | 2023-04-17T02:01:32.847 | 345611 | 337106 | [
"regression",
"categorical-data",
"generalized-additive-model",
"diagnostic",
"concurvity"
] |
613160 | 1 | null | null | 0 | 17 | Would this be a problem? I built a gene signature (genes selected via lasso regression) and tested it on my train, test, and validation sets. I do observe a significant difference in survival when I take those genes as predictors. However, when I look at the differences across my clinical groups, which in my case are the subtypes, I see differences for only a few groups in my validation set, whereas for the datasets from which I derived the signature there is a fair amount of difference.
Now my question is: would that be problematic, i.e. there are differences in survival based on those genes, but not much difference in expression between the clinical groups?
Any suggestion or help would be really appreciated.
| Difference in survival but not much in terms of expression | CC BY-SA 4.0 | null | 2023-04-16T22:04:11.107 | 2023-04-16T22:04:11.107 | null | null | 334559 | [
"survival"
] |
613162 | 2 | null | 613135 | 1 | null | A q-q plot would not be the correct method to examine the distributions here: these are grouped count data, whereas q-q plots are designed for continuous variables. A Fisher exact test would be acceptable, but I think it would not address your main question regarding the decreasing proportion of women at higher levels of influence, because it is not a test of trend. What you want is a test of trend.
My eyeball assessment is that these data are pretty weak for showing a significant trend: that is not the direction shown in the middle two bands, where most of the counts lie (there's actually evidence in the other direction), and furthermore the numbers at the highest band are so small that no test is likely to find significance. 2 out of 7 versus 5 out of 7 is not strong evidence, despite your impression that the contrast is clear. Now I'm going to retire briefly and apply some R code to the task of building a trend test.
Here's one sort of trend test appropriate for this question implemented in Poisson regression:
```
summary(glm(V1 ~ PB+offset(log(V1+V2)), data=cohort.df, fam="poisson"))
Call:
glm(formula = V1 ~ PB + offset(log(V1 + V2)), family = "poisson",
data = cohort.df)
Deviance Residuals:
1 2 3 4
-0.3462 0.4316 -0.3278 0.2194
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.84681 0.43475 -1.948 0.0514 .
PB -0.04269 0.18527 -0.230 0.8177
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 0.51500 on 3 degrees of freedom
Residual deviance: 0.46172 on 2 degrees of freedom
AIC: 20.774
Number of Fisher Scoring iterations: 4
```
So there is an estimated negative coefficient for the ratio of women to band subtotals, but the strength of the association is very weak and chance-like. You could also have done a Cochran–Armitage trend test, and I'm guessing that you could probably find an R package implementing that named test. There's a good vignette on handling count data in R by Zeileis, Kleiber, and Jackman: [https://cran.r-project.org/web/packages/pscl/vignettes/countreg.pdf](https://cran.r-project.org/web/packages/pscl/vignettes/countreg.pdf)
You could also consult: "R (and S-PLUS) Manual to Accompany Agresti’s Categorical Data Analysis (2002) 2 nd edition" by Laura A. Thompson, 2008©. I got my copy many years ago but it still appears available: [https://www.stat.purdue.edu/~zhanghao/MAS/handout/R%20Manual%20to%20Agresti%E2%80%99s%20Categorical%20Data%20Analysis.pdf](https://www.stat.purdue.edu/%7Ezhanghao/MAS/handout/R%20Manual%20to%20Agresti%E2%80%99s%20Categorical%20Data%20Analysis.pdf)
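Regarding the Cochran–Armitage suggestion: base R already ships a chi-squared test for trend in proportions, `stats::prop.trend.test()`, which is essentially the two-sided Cochran–Armitage test. A sketch using the same counts as the `cohort` matrix above (women per band over band totals):

```r
women <- c(2, 16, 27, 8)           # female advisers per band
totals <- women + c(5, 22, 46, 10) # female + male per band
prop.trend.test(women, totals)     # default scores 1:4 for the bands
```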
| null | CC BY-SA 4.0 | null | 2023-04-16T22:24:43.207 | 2023-04-16T22:55:34.580 | 2023-04-16T22:55:34.580 | 2129 | 2129 | null |
613163 | 2 | null | 613116 | 1 | null | Here is a proof adapted from [John Duchi's lecture notes](http://cs229.stanford.edu/extra-notes/hoeffding.pdf), but it's off by a factor of $2$ if you don't mind :
We will assume throughout that $X$ has an MGF defined on a neighborhood of $0$, such that all of its moments exist (as [finite variance is not sufficient](https://stats.stackexchange.com/q/32706/305654)).
Let $X'$ be an independent copy of $X$, i.e. $X'$ has the same distribution as $X$ while being independent of $X$. We have
$$E[e^{-\lambda(X-E[X])}] = E_X[e^{-\lambda(X-E_{X'}[X'])}] \le E_XE_{X'}[e^{-\lambda(X-X')}] $$
Where we have applied Jensen's inequality to $e^{\lambda E_{X'}[X']}:= f(E_{X'}[X']) $.
Now for convenience let $Y:= X-X'$. The trick is to notice that $Y$ and $-Y$ are identically distributed, which implies that $Y$ and $S\cdot Y$ are identically distributed when $S$ is an independent [Rademacher random variable](https://en.wikipedia.org/wiki/Rademacher_distribution), i.e. when $S$ only takes value $+1$ and $-1$ with equal probability.
This implies that
$$\begin{align*}
E_Y[e^{-\lambda Y}] &= E_YE_S[e^{-\lambda S\cdot Y}] \\
&= E_Y[E_S[e^{-\lambda S\cdot Y}\mid Y]]\\
&= E_Y[\cosh(-\lambda Y)] =E_Y[\cosh(\lambda Y)]\end{align*}$$
Now assume that $Y$ is supported on some interval $[-a,a]$. By Taylor's theorem, we know that for all $Y(\omega)$, there exists $\xi\in (0,Y)\cup (Y,0)$ such that
$$\cosh(\lambda Y) = 1 + \frac{\lambda^2Y^2}{2} + \frac{\lambda^3Y^3}{6}\sinh(\lambda\xi(Y)) \le 1 + \frac{\lambda^2Y^2}{2} + \frac{\lambda^3Y^3}{6}\sinh(\lambda a)$$
Because $Y$ is symmetric, we have that $E[Y^3]=0$, hence taking expectation we find
$$E_Y[\cosh(\lambda Y)] \le 1 + \frac{\lambda^2}{2}E_Y[Y^2]\le \exp\left(\frac{\lambda^2}{2}E_Y[Y^2]\right) $$
Where we have used the well known $1 + x \leq e^{x} $. All that is left is to observe that $E_Y[Y^2] = E_Y[X^2 + (X')^2 - 2XX'] = 2E[X^2] - 2E[X]^2 $ and it follows that
$$E[e^{-\lambda(X-E[X])}] \le \exp\left(\lambda^2 \operatorname{Var}(X)\right)$$
Which is the desired result up to a factor $2$.
---
Now in the general case where $Y$ is not compactly supported, you can construct a sequence
$$Y_n = (-n)\vee(Y\wedge n) \in [-n,n]$$
which is bounded and converges pointwise to $Y$. It then follows from Fatou's lemma that
$$\begin{align*}
E_Y[\cosh(\lambda Y)]&\le \lim\inf_n E_{Y_n}[\cosh(\lambda Y_n)] \\
&\le \lim\inf_n \exp\left(\frac{\lambda^2}{2}E_{Y_n}[Y_n^2]\right)\\
&\le\exp\left(\frac{\lambda^2}{2}E_Y[Y^2]\right)\end{align*}.$$
From which the same conclusion follows.
| null | CC BY-SA 4.0 | null | 2023-04-16T22:31:01.950 | 2023-04-17T06:59:59.667 | 2023-04-17T06:59:59.667 | 305654 | 305654 | null |
613164 | 2 | null | 613027 | 4 | null | Update: to fix a code error
Your version of the signed rank test is probably not what you want, because the rank transformation doesn't account for weights. This means the null hypothesis of the test depends on the sampling design; part of the point of design-based inference is to avoid this.
It would make more sense to compute the ranks the way `survey::svyranktest` does, as estimates of population ranks (there's a Biometrika paper cited in the references). As far as I know, no-one has actually derived the null sampling distribution, but it's a reasonable guess that `svyglm` -- or, equivalently, `svymean` -- would produce something asymptotically correct. It does for the two-sample case.
For the sign test, you don't need ranks (of course). The test asks whether the (population) proportions of positive and negative signs are equal, or in the paired case, whether the population proportions of the first and the second number being larger are equal.
```
> svymean(~I((api00>api99)-1/2), dclus2)
mean SE
I((api00 > api99) - 1/2) 0.34354 0.0318
> pnorm(abs(.34/0.0318),lower.tail=FALSE)*2
[1] 1.111631e-26
```
Assuming no missing values, the procedure for a signed-rank test of a variable `y` is
```
w<-weights(design, "sampling")
ii <-order(abs(y))
r<-numeric(length(y))
r[ii]<-ave(cumsum(w[ii]) - w[ii]/2, factor(y[ii]))
signedr<-r*sign(y)
svymean(signedr, design)
```
So, in the same example
```
> w=weights(dclus2, "sampling")
> dclus2<- update(dclus2,y=api00-api99)
> y<-model.frame(dclus2)$y
> ii<-order(abs(y))
> r<-numeric(nrow(dclus2))
> r[ii]<-ave(cumsum(w[ii]) - w[ii]/2, factor(y[ii]))
> signedr<-r*sign(y)
> svymean(signedr, dclus2)
mean SE
[1,] 2172.8 224.57
> pnorm(abs(2172.8/224.57),lower.tail=FALSE)*2
[1] 3.836651e-22
```
It's not clear that the signed-rank test is very useful with weights -- it is not exact and need not be very outlier-resistant.
It would be reasonably straightforward to make these into functions that computed R `htest` objects -- it's just a matter of setting all the right names and attributes.
| null | CC BY-SA 4.0 | null | 2023-04-16T23:13:38.350 | 2023-04-18T01:35:28.217 | 2023-04-18T01:35:28.217 | 249135 | 249135 | null |
613165 | 1 | null | null | 0 | 18 | I have a NN with a single output scalar. I want this scalar to tend towards positive infinity if some of the inputs take on certain values. How can I guarantee this without adding training data?
| How to bound neural network output? | CC BY-SA 4.0 | null | 2023-04-16T23:44:37.947 | 2023-04-16T23:44:37.947 | null | null | 364334 | [
"neural-networks",
"bounds"
] |
613166 | 1 | null | null | 0 | 15 | I have a NN with a single output scalar. I want this scalar to tend towards positive infinity if some of the inputs take on certain values. How can I guarantee this without adding training data?
| How to bound neural network output? | CC BY-SA 4.0 | null | 2023-04-16T23:44:37.943 | 2023-04-16T23:44:37.943 | null | null | 364334 | [
"neural-networks",
"bounds"
] |
613167 | 2 | null | 612940 | 2 | null | In the first scenario, C is a descendant of a mediator, so contrary to previous answers, yes it would be harmful to include if your interest is in estimating the total effect of X on Y. See DAG 12 here:
[http://causality.cs.ucla.edu/blog/index.php/2019/08/14/a-crash-course-in-good-and-bad-control/](http://causality.cs.ucla.edu/blog/index.php/2019/08/14/a-crash-course-in-good-and-bad-control/)
Agree with earlier answers that there's no available strategy to identify the effect of C on Y given this DAG is correct.
| null | CC BY-SA 4.0 | null | 2023-04-16T23:45:08.783 | 2023-04-16T23:45:08.783 | null | null | 106580 | null |
613168 | 1 | null | null | 1 | 12 | I just want to calculate the percent change in the y variable that results from a 10% reduction in the x variable. Is there code for this in R? For example, if x = particulate matter in air and y = air pollution level, I just want to see how much the air pollution level changes (in percent) if I reduce particulate matter by 10%.
| calculate the percent change in y variable as a result of reduction in x variable by 10% | CC BY-SA 4.0 | null | 2023-04-16T23:57:15.593 | 2023-04-16T23:57:15.593 | null | null | 385865 | [
"regression"
] |
613169 | 1 | null | null | 0 | 77 | Can you please explain why this statement holds?
a distribution where the variance is proportional to the mean will better tolerate larger prediction errors occurring with larger predictions than one whose variance is independent of the mean.
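As a small numerical illustration of the quoted statement (my own sketch, not from the quoted source): when the variance is proportional to the mean, a raw error of fixed size becomes less surprising, in standardized units, as the prediction grows:

```r
mu <- c(2, 20, 200)  # three prediction levels

# Under a variance-proportional-to-mean model (e.g. Poisson), the
# standard deviation grows with the mean:
sqrt(mu)        # about 1.41, 4.47, 14.14

# So a raw prediction error of 10, expressed in standardized
# (Pearson) units, shrinks as the prediction gets larger:
10 / sqrt(mu)   # about 7.07, 2.24, 0.71
```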
| prediction error in GLM | CC BY-SA 4.0 | null | 2023-04-17T00:09:18.330 | 2023-04-24T03:32:59.487 | null | null | 382257 | [
"generalized-linear-model",
"predictive-models",
"error"
] |
613170 | 1 | null | null | 2 | 35 | I'm sure there is an obvious answer to this question, but I'd like to better understand the difference between p-values from t-tests and from type 3 ANOVA F-tests for binary variables in mixed-effects models, as implemented in `lmerTest`. In a standard linear model, my understanding is that these should be the same. For instance, using the `ham` dataset included in `lmerTest`:
```
library(lme4)
library(lmerTest)
library(car)
fm1 <- lm(Informed.liking ~ Gender + Information * Product, data=ham)
```
The p-values for Gender and Information in these models are the same, as expected (output cut to just the relevant parts).
```
summary(fm1)
```
```
Coefficients:
Estimate Std. Error t value Pr(>|t|)
Gender2 -0.2443 0.1791 -1.364 0.1731
Information2 0.1605 0.3582 0.448 0.6543
```
```
Anova(fm1, type = 3)
```
```
Anova Table (Type III tests)
Response: Informed.liking
Sum Sq Df F value Pr(>F)
Gender 9.7 1 1.8601 0.173090
Information 1.0 1 0.2008 0.654260
```
However, running the same thing with a mixed effect model yields different p-values.
```
fm2 <- lmer(Informed.liking ~ Gender + Information * Product + (1 | Consumer), data=ham)
```
With the result
```
summary(fm2)
```
```
Fixed effects:
Estimate Std. Error df t value Pr(>|t|)
Gender2 -0.2443 0.2606 79.0000 -0.938 0.3514
Information2 0.1605 0.3288 560.0000 0.488 0.6256
```
```
anova(fm2, type = 3)
```
```
Type III Analysis of Variance Table with Satterthwaite's method
Sum Sq Mean Sq NumDF DenDF F value Pr(>F)
Gender 3.848 3.8480 1 79 0.8789 0.3513501
Information 6.520 6.5201 1 560 1.4893 0.2228402
```
My understanding is that these are using the same Satterthwaite approximation. Doesn't that imply that these p-values should be the same?
Edit:
Thanks to Sal Magnifico below for pointing out the need for Sum contrast coding in `car`. This seems to solve the problem -- presumably `lmerTest` somehow does this automatically when calling a type 3 ANOVA. With contrast coding, the summary now matches the ANOVA for variables with 1 df. Here is the full reprex output including the Product terms:
```
library(reprex)
library(lme4)
#> Warning: package 'lme4' was built under R version 4.1.2
#> Loading required package: Matrix
library(lmerTest)
#>
#> Attaching package: 'lmerTest'
#> The following object is masked from 'package:lme4':
#>
#> lmer
#> The following object is masked from 'package:stats':
#>
#> step
library(car)
#> Loading required package: carData
fm1 <- lm(Informed.liking ~ Gender + Information * Product, data=ham,
contrasts=list(Gender=contr.sum, Information=contr.sum, Product=contr.sum))
summary(fm1)
#>
#> Call:
#> lm(formula = Informed.liking ~ Gender + Information * Product,
#> data = ham, contrasts = list(Gender = contr.sum, Information = contr.sum,
#> Product = contr.sum))
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -5.4293 -1.7035 0.2471 1.8150 4.2224
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 5.73152 0.08956 64.000 < 2e-16 ***
#> Gender1 0.12214 0.08956 1.364 0.1731
#> Information1 -0.10031 0.08955 -1.120 0.2631
#> Product1 0.07562 0.15510 0.488 0.6261
#> Product2 -0.62809 0.15510 -4.049 5.76e-05 ***
#> Product3 0.35957 0.15510 2.318 0.0208 *
#> Information1:Product1 0.02006 0.15510 0.129 0.8971
#> Information1:Product2 -0.10340 0.15510 -0.667 0.5053
#> Information1:Product3 -0.11574 0.15510 -0.746 0.4558
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 2.28 on 639 degrees of freedom
#> Multiple R-squared: 0.03442, Adjusted R-squared: 0.02234
#> F-statistic: 2.848 on 8 and 639 DF, p-value: 0.004085
Anova(fm1, type = 3)
#> Anova Table (Type III tests)
#>
#> Response: Informed.liking
#> Sum Sq Df F value Pr(>F)
#> (Intercept) 21283.7 1 4095.9447 < 2.2e-16 ***
#> Gender 9.7 1 1.8601 0.1730900
#> Information 6.5 1 1.2548 0.2630677
#> Product 91.8 3 5.8893 0.0005733 ***
#> Information:Product 10.4 3 0.6663 0.5729402
#> Residuals 3320.4 639
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
fm2 <- lmer(Informed.liking ~ Gender + Information * Product + (1 | Consumer), data=ham)
summary(fm2)
#> Linear mixed model fit by REML. t-tests use Satterthwaite's method [
#> lmerModLmerTest]
#> Formula: Informed.liking ~ Gender + Information * Product + (1 | Consumer)
#> Data: ham
#>
#> REML criterion at convergence: 2869.9
#>
#> Scaled residuals:
#> Min 1Q Median 3Q Max
#> -2.49290 -0.69971 0.09928 0.74897 2.69673
#>
#> Random effects:
#> Groups Name Variance Std.Dev.
#> Consumer (Intercept) 0.8274 0.9096
#> Residual 4.3780 2.0924
#> Number of obs: 648, groups: Consumer, 81
#>
#> Fixed effects:
#> Estimate Std. Error df t value Pr(>|t|)
#> (Intercept) 5.8490 0.2843 358.4421 20.574 <2e-16 ***
#> Gender2 -0.2443 0.2606 79.0000 -0.938 0.3514
#> Information2 0.1605 0.3288 560.0000 0.488 0.6256
#> Product2 -0.8272 0.3288 560.0000 -2.516 0.0122 *
#> Product3 0.1481 0.3288 560.0000 0.451 0.6525
#> Product4 0.2963 0.3288 560.0000 0.901 0.3679
#> Information2:Product2 0.2469 0.4650 560.0000 0.531 0.5956
#> Information2:Product3 0.2716 0.4650 560.0000 0.584 0.5594
#> Information2:Product4 -0.3580 0.4650 560.0000 -0.770 0.4416
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Correlation of Fixed Effects:
#> (Intr) Gendr2 Infrm2 Prdct2 Prdct3 Prdct4 In2:P2 In2:P3
#> Gender2 -0.453
#> Informatin2 -0.578 0.000
#> Product2 -0.578 0.000 0.500
#> Product3 -0.578 0.000 0.500 0.500
#> Product4 -0.578 0.000 0.500 0.500 0.500
#> Infrmtn2:P2 0.409 0.000 -0.707 -0.707 -0.354 -0.354
#> Infrmtn2:P3 0.409 0.000 -0.707 -0.354 -0.707 -0.354 0.500
#> Infrmtn2:P4 0.409 0.000 -0.707 -0.354 -0.354 -0.707 0.500 0.500
anova(fm2, type = 3)
#> Type III Analysis of Variance Table with Satterthwaite's method
#> Sum Sq Mean Sq NumDF DenDF F value Pr(>F)
#> Gender 3.848 3.8480 1 79 0.8789 0.3513501
#> Information 6.520 6.5201 1 560 1.4893 0.2228402
#> Product 91.807 30.6024 3 560 6.9901 0.0001271 ***
#> Information:Product 10.387 3.4624 3 560 0.7909 0.4992920
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
fm3 <- lmer(Informed.liking ~ Gender + Information * Product + (1 | Consumer), data=ham,
contrasts=list(Gender=contr.sum, Information=contr.sum, Product=contr.sum))
summary(fm3)
#> Linear mixed model fit by REML. t-tests use Satterthwaite's method [
#> lmerModLmerTest]
#> Formula: Informed.liking ~ Gender + Information * Product + (1 | Consumer)
#> Data: ham
#>
#> REML criterion at convergence: 2882.4
#>
#> Scaled residuals:
#> Min 1Q Median 3Q Max
#> -2.49290 -0.69971 0.09928 0.74897 2.69673
#>
#> Random effects:
#> Groups Name Variance Std.Dev.
#> Consumer (Intercept) 0.8274 0.9096
#> Residual 4.3780 2.0924
#> Number of obs: 648, groups: Consumer, 81
#>
#> Fixed effects:
#> Estimate Std. Error df t value Pr(>|t|)
#> (Intercept) 5.73152 0.13028 79.00000 43.993 < 2e-16 ***
#> Gender1 0.12214 0.13028 79.00000 0.938 0.3514
#> Information1 -0.10031 0.08220 560.00000 -1.220 0.2228
#> Product1 0.07562 0.14237 560.00000 0.531 0.5955
#> Product2 -0.62809 0.14237 560.00000 -4.412 1.23e-05 ***
#> Product3 0.35957 0.14237 560.00000 2.526 0.0118 *
#> Information1:Product1 0.02006 0.14237 560.00000 0.141 0.8880
#> Information1:Product2 -0.10340 0.14237 560.00000 -0.726 0.4680
#> Information1:Product3 -0.11574 0.14237 560.00000 -0.813 0.4166
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Correlation of Fixed Effects:
#> (Intr) Gendr1 Infrm1 Prdct1 Prdct2 Prdct3 In1:P1 In1:P2
#> Gender1 -0.012
#> Informatin1 0.000 0.000
#> Product1 0.000 0.000 0.000
#> Product2 0.000 0.000 0.000 -0.333
#> Product3 0.000 0.000 0.000 -0.333 -0.333
#> Infrmtn1:P1 0.000 0.000 0.000 0.000 0.000 0.000
#> Infrmtn1:P2 0.000 0.000 0.000 0.000 0.000 0.000 -0.333
#> Infrmtn1:P3 0.000 0.000 0.000 0.000 0.000 0.000 -0.333 -0.333
anova(fm3, type = 3)
#> Type III Analysis of Variance Table with Satterthwaite's method
#> Sum Sq Mean Sq NumDF DenDF F value Pr(>F)
#> Gender 3.848 3.8480 1 79 0.8789 0.3513501
#> Information 6.520 6.5201 1 560 1.4893 0.2228402
#> Product 91.807 30.6024 3 560 6.9901 0.0001271 ***
#> Information:Product 10.387 3.4624 3 560 0.7909 0.4992920
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
Created on 2023-04-17 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1)
| Type-3 ANOVA vs. t-test for binary variables in mixed effect model | CC BY-SA 4.0 | null | 2023-04-17T00:24:23.350 | 2023-04-17T12:35:03.843 | 2023-04-17T12:35:03.843 | 385867 | 385867 | [
"mixed-model",
"anova",
"lme4-nlme"
] |
613172 | 1 | null | null | 1 | 21 | Problem setup: Suppose $X_1, \ldots, X_n$ is an i.i.d. sample from $F_X$ (CDF), and $Y_1, \ldots, Y_n$ is another i.i.d. sample from $F_Y$ (also CDF). In addition, $h(z_1, \ldots, z_n)$ is a real-valued symmetric function. Let $H_X = h(X_1, \ldots, X_n)$ and $H_Y = h(Y_1, \ldots, Y_n)$. Denote $D_{KS}(X, Y)$ as the Kolmogorov-Smirnov distance between $X \sim F_X$ and $Y \sim F_Y$, that is,
$$D_{KS}(X, Y) = \sup_{z \in \mathbb R} |F_X(z) - F_Y(z)|.$$
Question: How should I use $D_{KS}(X, Y)$ to bound $D_{KS}(H_X, H_Y)$, i.e., find a function $g$ such that $D_{KS}(H_X, H_Y) \leq g(D_{KS}(X, Y))$? One can make assumptions on $F_X$, $F_Y$, and $h$, but they should be as minimal as possible (for example, compact support of $F_X$, $h$ being Lipschitz, etc.).
My intuition is that if $X$ and $Y$ come from two close distributions (in terms of KS distance), then so should $H_X$ and $H_Y$, but I couldn't find a way to make this into a formal statement.
Any help or ideas would be much appreciated!
| Upper bound on Kolmogorov-Smirnov distance after some transformation $h$ | CC BY-SA 4.0 | null | 2023-04-17T03:15:13.693 | 2023-04-17T07:17:57.927 | null | null | 324867 | [
"distributions",
"distance",
"kolmogorov-smirnov-test"
] |
613173 | 2 | null | 181346 | 0 | null | >
This transition probability varies with time and is correlated with the observation features.
Another option is to use a plain old [factor graph](https://en.wikipedia.org/wiki/Factor_graph), which is a generalization of a hidden Markov model. You can model the domain knowledge that drives the changing transition probability as a random variable attached to a shared factor.
| null | CC BY-SA 4.0 | null | 2023-04-17T03:22:52.997 | 2023-04-17T03:22:52.997 | null | null | 380291 | null |
613174 | 1 | null | null | 1 | 70 | Background
Suppose we want to examine the viral load of individuals on drug therapy. In the study design there are two groups: group one receives no treatment (or a placebo) and serves as the control, while group two receives the antiviral treatment. We compare the difference in average reduction of viral load between the two groups.
Thus, let $x_1,...,x_n$ be the control group and $y_1,..., y_n$ be the group on antiviral therapy. With an equal number of observations $n$ in each group, we assume the control group is distributed as $\mathrm{N}(\mu, \sigma^2)$ and those on the antiviral therapy as $\mathrm{N}(\mu + \delta, \sigma^2)$. Here $\delta$ denotes any additional reduction from the antiviral treatment, while $\mu$ denotes the mean reduction in the absence of any treatment.
The hypotheses proposed are
\begin{align}
H_{0}: \delta = 0 \quad \text{and} \quad H_{1}:\delta \neq 0
\end{align}
Question
What is the log-likelihood function $\ell(\mu, \delta, z)$ if $z = \sigma^2$? Hence, determine the MLE for each parameter.
Approach
Consider the two normal distributions and their probability density functions for $x_i$ (no treatment) and $y_i$ (treatment) with equal variance $\sigma^2 = z$
\begin{align*}
f_X(x_i) &= \frac{1}{\sigma\sqrt{2\pi}} \mathrm{exp}\left\{-\frac{\left(x_i - \mu\right)^2}{2\sigma^2}\right\} = \frac{1}{\sqrt{2\pi z}} \mathrm{exp}\left\{-\frac{\left(x_i - \mu\right)^2}{2z}\right\} \\
f_Y(y_i) &= \frac{1}{\sigma\sqrt{2\pi}} \mathrm{exp}\left\{-\frac{\left(y_i - (\mu+\delta)\right)^2}{2\sigma^2}\right\} = \frac{1}{\sqrt{2\pi z}} \mathrm{exp}\left\{-\frac{\left(y_i - (\mu+\delta)\right)^2}{2z}\right\}
\end{align*}
The joint likelihood function would then be
\begin{align*}
L(\mu,\delta,z) = \prod_{i = 1}^{n}f_{XY}(x_i,y_i) = \prod_{i = 1}^{n}f_X(x_i) \cdot \prod_{i = 1}^{n} f_Y(y_i)
\end{align*}
Taking the log of the likelihood function would result in the following
\begin{align}
\ell(\mu,\delta,z|\mathbf{y}) = \sum_{i=1}^{n}\log f(x_i|\mu,z) + \sum_{i=1}^{n}\log f(y_i|\mu+\delta,z)
\end{align}
The resulting log-likelihood function would be
\begin{align}
\ell(\mu,\delta,z|\mathbf{x},\mathbf{y}) = -\frac{n}{2}\log(2\pi z) - \frac{1}{2z}\sum_{i=1}^{n}(x_i-\mu)^2 -\frac{n}{2}\log(2\pi z) - \frac{1}{2z}\sum_{i=1}^{n}(y_i-(\mu+\delta))^2
\end{align}
Help
I understand the next step in calculating the MLE is to take the partial derivative with respect to each parameter $\mu, \delta, z$. However, I am having trouble working out how to proceed. I am also unsure whether the approach I have taken to find the joint likelihood function is correct. Any help would be appreciated. Thank you in advance!
| Maximum likelihood estimation for joint probability function | CC BY-SA 4.0 | null | 2023-04-17T04:00:34.860 | 2023-04-17T11:55:36.427 | 2023-04-17T11:55:36.427 | 376744 | 376744 | [
"self-study",
"mathematical-statistics",
"maximum-likelihood",
"joint-distribution"
] |
613175 | 1 | null | null | 0 | 3 | I am currently conducting a longitudinal secondary data analysis. Due to the nature of the data that has previously been collected and my proposed analysis (Growth Mixture Modelling), I am needing to appropriately align scores from the Child Behaviour Checklist with the Brief Infant-Toddler Social and Emotional Assessment. Currently, I have been recommended methods which produce a single binary variable however I am wanting to standardise the scores between these measures to yield continuous variables instead of categorical (and look to safeguard the variance in the data). I am wondering whether any researcher has had experience in aligning scores from these specific measures in their own research or are aware of studies which have done this? If so, I will be interested to view/learn and/or even discuss such methods. Thank you in advance.
| Aligning scores between the Child Behaviour Checklist (CBCL) and The Brief Infant-Toddler Social and Emotional Assessment (BITSEA) | CC BY-SA 4.0 | null | 2023-04-17T06:54:06.783 | 2023-04-17T06:54:06.783 | null | null | 385882 | [
"psychometrics",
"growth-mixture-model"
] |
613176 | 1 | null | null | 0 | 13 | I need to calculate within-subjects' confidence intervals in R for a plot I am creating. Does anyone know how to construct either the Loftus-Masson confidence intervals or the Cousineau-Morey confidence intervals in R?
I have been trying to find out how to do this for hours.
Thank you
| How to compute within-subjects confidence intervals (Loftus-Masson or Cousineau-Morey)? | CC BY-SA 4.0 | null | 2023-04-17T06:59:41.333 | 2023-04-17T06:59:41.333 | null | null | 371659 | [
"r",
"confidence-interval"
] |
613177 | 2 | null | 613111 | 1 | null | This pattern of data points that seem to be aligned on a straight line resembles the pattern that might occur with the [Yule-Simpson effect](https://en.wikipedia.org/wiki/Simpson%27s_paradox).
The effect within the groups is different from the main effect.
[](https://i.stack.imgur.com/cjr6q.png)
```
### create data
m = 10
n = 10
z = rnorm(m)
z = rep(z,each=n)
x = rnorm(m*n,z)
y = 3*z-x
### plot data
plot(x,y, main = "x versus y")
### add linear fit
mod = lm(y~x)
lines(x,predict(mod))
### plot residuals
plot(predict(mod),y-predict(mod), main = "prediction versus residual")
```
| null | CC BY-SA 4.0 | null | 2023-04-17T07:08:35.517 | 2023-04-17T08:52:10.703 | 2023-04-17T08:52:10.703 | 164061 | 164061 | null |
613178 | 2 | null | 613172 | 0 | null | If you think of $h$ as a test statistic, then you should be able to get substantial differences between the distributions of $h({\mathbf X})$ and $h({\mathbf X})$ when the K-S distance between the distributions of $X$ and $Y$ is of order $n^{-1/2}$.
For example, take $X\sim N(0,1)$ and $Y\sim N(\mu,1)$ and $h({\mathbf X})= n^{-1/2}\sum_i X_i$. Then $h(X)\sim N(0,1)$ and $h(Y)\sim N(n^{1/2}\mu,1)$, which is not small if $\mu$ is of order $n^{-1/2}$ or larger.
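A quick numeric illustration of this example (my own sketch, not part of the original answer), using the exact normal CDFs evaluated on a grid:

```r
n  <- 10000
mu <- 1 / sqrt(n)   # shift of order n^(-1/2)
z  <- seq(-5, 5, by = 0.001)

# K-S distance between X ~ N(0,1) and Y ~ N(mu,1): small, roughly mu/sqrt(2*pi)
d_xy <- max(abs(pnorm(z) - pnorm(z, mean = mu)))

# K-S distance between h(X) ~ N(0,1) and h(Y) ~ N(sqrt(n)*mu, 1) = N(1,1): large
d_h <- max(abs(pnorm(z) - pnorm(z, mean = sqrt(n) * mu)))

c(d_xy, d_h)   # about 0.004 versus about 0.38
```

So a distance of order $n^{-1/2}$ between the input distributions does not constrain the distance between the distributions of the statistics.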
| null | CC BY-SA 4.0 | null | 2023-04-17T07:17:57.927 | 2023-04-17T07:17:57.927 | null | null | 249135 | null |
613180 | 1 | null | null | 1 | 27 | I am building a boosted decision trees classification model, where the input variables vary smoothly with time.
The problem is that the predictions are always biased toward the most recent entries. I understand that this happens because these are the points in feature space closest to the target points. If I don't use the $n$ last entries, then the predictions are still biased toward the last entries that are used.
How can I avoid this bias and "shake things up"? I would expect a correct model to pick up relations between variables and use them, rather than blindly looking for the nearest points in feature space.
The solution could be to use another ML method, but some quick tries show that the problem isn't easy to get rid of, in general. So, any suggestions, either within BDT or with another algorithm, are appreciated.
| Prediction biased by closest hyperpoints | CC BY-SA 4.0 | null | 2023-04-17T07:26:50.350 | 2023-04-17T08:09:47.677 | null | null | 156277 | [
"machine-learning",
"random-forest",
"bias"
] |
613181 | 2 | null | 2358 | 1 | null | There is no difference. This is because the maximum likelihood solution of the parameters of the joint problem $Y = W^T φ(x)$ with K target variables decouples to K independent regression problems, assuming a conditional distribution of the target vector to be an isotropic Gaussian of the form $p(t|φ(x),W, β) = N (t|W^T φ(x), β^{-1} I)$. Refer to section '3.1.5 Multiple outputs' from the book 'Pattern Recognition and Machine Learning', Bishop for details.
| null | CC BY-SA 4.0 | null | 2023-04-17T07:51:07.397 | 2023-04-17T13:41:02.513 | 2023-04-17T13:41:02.513 | 369961 | 369961 | null |
613182 | 2 | null | 520015 | 0 | null | You could find useful the following link "Comparing trends and exogenous variables in SARIMAX, ARIMA and AutoReg"
The article addresses details on the statsmodels implementation of SARIMAX and ARIMA models, specifically when using exog variables, there are code snippets and interesting sections as how "Reconstructing residuals, fitted values and forecasts in SARIMAX and ARIMA"
[https://www.statsmodels.org/dev/examples/notebooks/generated/statespace_sarimax_faq.html](https://www.statsmodels.org/dev/examples/notebooks/generated/statespace_sarimax_faq.html)
| null | CC BY-SA 4.0 | null | 2023-04-17T08:03:45.493 | 2023-04-17T08:03:45.493 | null | null | 365405 | null |
613183 | 2 | null | 613180 | 1 | null | A couple of things that you could do:
- Use a rolling window approach, limiting the training data to a fixed window of the most recent data points;
- Add input variables that capture temporal relationships between your input variables and the target variable. Lagged variables can capture the impact of past values on the current value, and rolling averages can smooth out the impact of recent data.
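For instance, a minimal base-R sketch of building lagged and rolling-average features (the series and window length below are made up):

```r
set.seed(42)
y <- cumsum(rnorm(100))   # toy time-ordered series

# Frame with lag-1, lag-2 and a 3-step rolling mean of strictly past values
d <- data.frame(
  y     = y[4:100],
  lag1  = y[3:99],
  lag2  = y[2:98],
  roll3 = (y[1:97] + y[2:98] + y[3:99]) / 3
)
head(d)
```

Any model (boosted trees included) can then be trained on `d`, so that the temporal structure enters through the features rather than through proximity in time.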
| null | CC BY-SA 4.0 | null | 2023-04-17T08:09:47.677 | 2023-04-17T08:09:47.677 | null | null | 385885 | null |
613184 | 1 | null | null | 0 | 10 | I am working on brain tumor data and I would like to be able to separate the uptake of a radioactive tracer in the tumors, depending on the tumor grade. Simplified explanation: Grade 2 tumors have the lowest uptake, grade 3 has higher uptake and grade 4 has highest uptake in general. ROC-analysis gives me a threshold of <1.99 (AUC:0.91, CI:0.82-1.00) for grade 2 tumors and a threshold of >4.53 (AUC:0.89, CI:0.76-1.00) for grade 4 tumors. But how do I handle the grade 3 tumors? Does this imply that I can use an interval of 1.99-4.53 to define grade 3 tumors? If so, can I also calculate AUC and CI for this interval? Best, Anna
| Finding optimal threshold values between groups using ROC-analyses | CC BY-SA 4.0 | null | 2023-04-17T08:13:10.480 | 2023-06-02T05:41:06.403 | 2023-06-02T05:41:06.403 | 121522 | 385887 | [
"roc",
"threshold"
] |
613185 | 1 | null | null | 0 | 14 | I am doing a segmentation task. There are 5 annotators to annotate the images. However, the segmentation target is a small object whose boundary is hard to delineate. Therefore, the differences of the same images between annotators are large, resulting in a low dice coefficient in evaluating the annotator differences.
Is there a more appropriate (or easier) evaluation metric for small targets whose boundary is hard to tell?
| Considering large annotator differences, is there an appropriate evaluation metric for segmentation? | CC BY-SA 4.0 | null | 2023-04-17T08:37:57.460 | 2023-04-17T08:37:57.460 | null | null | 356444 | [
"machine-learning",
"neural-networks",
"model-evaluation",
"image-segmentation"
] |
613186 | 1 | null | null | 0 | 26 | I want to run a logistic regression for a binary target
$$
ER = \frac{1}{1 + e^{-\left(\theta_1 S + \theta_0\right)}}
$$
The goal is to get values for $\theta$. $S$ is known: it comes from an earlier logistic regression, and now I want to use $\theta$ to calibrate the model.
To optimise this we can use cross entropy loss $L$ as logistic regression normally does.
$$
L(\theta) = -\sum_{n} \left[ y_n \log\left(ER(\theta)\right) + (1-y_n)\log\left(1-ER(\theta)\right) \right]
$$
In addition, I want to add a target $ER_{target}$
$$
ER_{observed} = \frac{D}{D + N}
$$
$D$ is number of successes, $N$ is number of failures.
$$
ER_{target} = \frac{D}{D + w\times N}
$$
$w$ is a weighting for failures. $w$ is between $(0, 1)$
$w$ can also be written as
$$
w = \left( \frac{1}{ER_{target}}-1\right)\frac{ER_{observed}}{1- ER_{observed}}
$$
The reformulation of $w$ is fine, I checked it.
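That check can be reproduced with a small numeric example (the counts and target rate below are made up):

```r
D <- 40; N <- 960            # successes and failures (made-up counts)
ER_observed <- D / (D + N)   # 0.04
ER_target <- 0.10            # desired rate (assumption)

w <- (1 / ER_target - 1) * ER_observed / (1 - ER_observed)   # 0.375

# Down-weighting the failures by w recovers the target rate exactly
stopifnot(isTRUE(all.equal(D / (D + w * N), ER_target)))
```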
To incorporate $w$, the cross-entropy loss can be rewritten as
$$
L(\theta, w) = -\sum_{n} \left[ y_n \log\left(ER(\theta)\right) + w\,(1-y_n)\log\left(1-ER(\theta)\right) \right]
$$
Why is this reformulation of the cross-entropy loss valid? It feels odd that $w$ affects the failures but not the successes. Is it correct? How do you arrive at it? Is it just an approximation that works for a small number of successes?
| Logistic regression with target rate | CC BY-SA 4.0 | null | 2023-04-17T08:39:56.360 | 2023-04-17T08:39:56.360 | null | null | 161138 | [
"logistic",
"calibration",
"cross-entropy"
] |
613188 | 2 | null | 613155 | 0 | null | I would say that the logic is the following:
$$
p(\omega \lvert X,Y) = \frac{p(Y\lvert X, \omega)p(X, \omega)}{\int d\omega \, p(Y\lvert X, \omega)p(X, \omega)}
$$
Now, because the data $X$ and the parameters $\omega$ are independent of each other, you can use $p(X, \omega) = p(X)p(\omega)$. You do not integrate over $X$ because the inputs are treated as fixed rather than random variables; the factor $p(X)$ is then a constant that cancels between the numerator and the denominator.
Therefore we are left with:
$$
p(\omega \lvert X,Y) = \frac{p(Y\lvert X, \omega)p(\omega)}{\int d\omega \, p(Y\lvert X, \omega)p(\omega)} \equiv \frac{p(Y\lvert X, \omega)p(\omega)}{ p(Y\lvert X)}
$$
Hope someone more knowledgeable can help :)
| null | CC BY-SA 4.0 | null | 2023-04-17T08:49:53.827 | 2023-04-17T08:49:53.827 | null | null | 310850 | null |
613189 | 1 | null | null | 0 | 22 | I am running a PGLS (Phylogenetic Generalized Least Squares) model selection using the library 'MuMIn' and its functions, see below the code:
```
# Perform PGLS function
modelo <- pgls(ExtinctionRisk ~ PC1 + PC2 + PC3 + Island_Endemic + Volancy + Habitat + New_Migration + Foraging , data = cdat, lambda = 'ML')
dd1 <- dredge(modelo)
subset(dd1, delta < 4)
# Model average models with delta AICc < 4
model.avg(dd1, subset = delta < 4)
#or as a 95% confidence set:
model.avg(dd1, subset = cumsum(weight) <= .95) # get averaged coefficients
#'Best' model
x <- summary(get.models(dd1, 1)[[1]])
```
I am not sure what the difference is between using delta AICc < 4 and the rule of thumb (delta < 2).
| Difference between AICc delta<2 and AICc delta<4 in model.average | CC BY-SA 4.0 | null | 2023-04-17T09:02:12.487 | 2023-04-17T12:27:36.343 | 2023-04-17T12:27:36.343 | 220466 | 385889 | [
"r",
"aic",
"phylogeny",
"mumin"
] |
613190 | 2 | null | 397896 | 0 | null | I cannot comment for the low reputation. Anyway, as seen as I have read about Image Classification it is not a common practice, especially for large databases. The most common preprocessing operations are transforming the range between [0, 1] or [-1,1] (like in ResNet50V2) and resizing the images to all the same width and height to use a batch size different than 1.
| null | CC BY-SA 4.0 | null | 2023-04-17T09:41:23.963 | 2023-04-17T09:41:23.963 | null | null | 379875 | null |
613191 | 1 | null | null | 0 | 12 | Suppose that we have a time series $y_1,...,y_n$. Additionally, consider that this time series as a seasonal component of period 12 and a linear trend. To make our time series stationary we apply the following transformations:
$y_t^{'}=\{y_{t}-y_{t-12}\}$
$y_{t}^{*}= \{y_{t}^{'}- y_{t-1}^{'}\}$
Now, $y_t^{*}$ is expected to be stationary. My question is: if we train a model on $y_t^{*}$ and predict $y_{t+1}^{*},...,y_{t+H}^{*}$, how can we recover the original scale? Notice that we are predicting the differences. My guess is that we should store the last regular and seasonal lags of the observed series and then cumulatively sum the predicted differences back. Am I right?
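To make my guess concrete, here is a sketch of the inversion on observed data, using base R's `diff()`/`diffinv()` (the simulated series and its parameters are my own assumptions):

```r
set.seed(1)
t <- 1:120
# Simulated series with a linear trend and a period-12 seasonal component
y <- 10 + 0.3 * t + sin(2 * pi * t / 12) + rnorm(120, sd = 0.2)

ys <- diff(y, lag = 12)   # seasonal difference: y'_t = y_t - y_{t-12}
yd <- diff(ys, lag = 1)   # first difference:    y*_t = y'_t - y'_{t-1}

# Invert by cumulatively summing back, seeded with the stored initial values
ys_rec <- diffinv(yd, lag = 1, xi = ys[1])         # recovers y'
y_rec  <- diffinv(ys_rec, lag = 12, xi = y[1:12])  # recovers y
stopifnot(isTRUE(all.equal(as.numeric(y_rec), y)))
```

For forecasts, I imagine the same recursion would apply with the predicted $y^{*}$ values appended, seeded by the last observed $y'$ and the last 12 observed $y$ values.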
| Recovering original scale time series | CC BY-SA 4.0 | null | 2023-04-17T09:44:48.170 | 2023-04-17T09:44:48.170 | null | null | 346061 | [
"time-series"
] |