Id stringlengths 1 6 | PostTypeId stringclasses 7 values | AcceptedAnswerId stringlengths 1 6 ⌀ | ParentId stringlengths 1 6 ⌀ | Score stringlengths 1 4 | ViewCount stringlengths 1 7 ⌀ | Body stringlengths 0 38.7k | Title stringlengths 15 150 ⌀ | ContentLicense stringclasses 3 values | FavoriteCount stringclasses 3 values | CreationDate stringlengths 23 23 | LastActivityDate stringlengths 23 23 | LastEditDate stringlengths 23 23 ⌀ | LastEditorUserId stringlengths 1 6 ⌀ | OwnerUserId stringlengths 1 6 ⌀ | Tags list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
612208 | 1 | null | null | 0 | 9 | I have an interesting medical problem where the cost of failure, and the trade-off between precision and recall, is much higher than in typical ML systems I've worked with before.
The precision of one class needs to be so high that we actually measure it in false positives per day, which should be 0.01 or less. In terms of precision, that is one false positive in five million cases, i.e. 99.99998%.
To get enough test data to have an expected value of one failure, there needs to be 100 days of data. Does that also mean there needs to be 3,000 days of data to say anything with confidence about the precision metric (30x that figure)?
3,000 days of data is 600x how much training data there is currently.
However, the training data focuses on edge cases; it is not representative of a typical day. A typical day consists mostly of easy cases, with maybe one interesting case per day. In that sense, we already have well over 3,000 days' worth of expected interesting cases.
What are some best practices for getting confidence in deploying the model short of actually collecting the raw amount of real world data needed to definitively hit a metric?
It seems estimating how many interesting cases are hit per day and estimating what the 3,000 day metric would be is the way to go, but I can’t convince the owners of it. Are there any similar problems in the literature to point to and learn from?
Also, having a test set that is orders of magnitude bigger than the training set feels weird. I'm not sure it's necessarily wrong per se, but it is definitely expensive and counterintuitively out of proportion with a typical real-world ML test.
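For what it's worth, the zero-failure sample-size arithmetic can be sketched with an exact binomial ("rule of three" style) bound. The numbers below are illustrative, assuming roughly five million negative cases per day as described above:

```python
import math

def negatives_needed(max_fp_rate, confidence=0.95):
    """Smallest number of negative cases, with zero observed false
    positives, needed to bound the FP rate below max_fp_rate at the
    given confidence. With zero failures in n trials, the claim
    'FP rate < max_fp_rate' holds at the stated confidence once
    (1 - max_fp_rate)**n <= 1 - confidence.
    """
    return math.ceil(math.log(1 - confidence) / math.log(1 - max_fp_rate))

# Target from the question: ~0.01 false positives/day with about
# five million cases/day, i.e. an FP rate around 2e-9 per case.
n_cases = negatives_needed(2e-9)
days_of_data = n_cases / 5e6  # assuming 5 million cases per day
```

At 95% confidence this gives roughly three times the "one expected failure" amount of data, which is where the classical rule of three ($n \approx 3/p$) comes from.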
| Minimum test set size for a real world high precision application | CC BY-SA 4.0 | null | 2023-04-07T04:01:42.173 | 2023-04-07T04:01:42.173 | null | null | 20580 | [
"machine-learning",
"precision"
] |
612209 | 1 | null | null | 0 | 19 | In Minitab, we can obtain the sample size/power result with only the maximum difference between the group means instead of ALL the group means (as seen in the picture below). How is this done? What are the formulas for the calculation of the sum of squares?
Also, I guess R can do this too. However, it seems that functions such as `power.anova.test` and `pwr.anova.test` both require ALL the group means. Does someone know what Minitab does behind the scenes, and how we can do this in R?
[](https://i.stack.imgur.com/ul8ag.png)
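For reference, my understanding (an assumption, not verified against Minitab's documentation) is that such tools use a "least favourable" configuration: two means separated by the stated maximum difference and the remaining $k-2$ means at their midpoint, which minimises the between-group sum of squares ($\sum_i(\mu_i-\bar\mu)^2 = \Delta^2/2$) and therefore yields conservative power. A sketch of that arithmetic (in Python rather than R, but the noncentral-F calculation carries over directly):

```python
import math
from scipy.stats import f as f_dist, ncf

def anova_power_from_max_diff(max_diff, sigma, k, n_per_group, alpha=0.05):
    """One-way ANOVA power assuming the least favourable layout:
    two group means at +/- max_diff/2, the other k-2 at the centre,
    so sum_i (mu_i - mu_bar)^2 = max_diff**2 / 2.
    """
    lam = n_per_group * (max_diff**2 / 2) / sigma**2  # noncentrality
    df1, df2 = k - 1, k * (n_per_group - 1)
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return ncf.sf(f_crit, df1, df2, lam)
```

In R, one could build the corresponding vector of means (two at the extremes, the rest at the centre) and pass `var(means)` as `between.var` to `power.anova.test`.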
| How is power/sample size calculation performed in R with only the maximum difference between means available? | CC BY-SA 4.0 | null | 2023-04-07T04:27:25.150 | 2023-04-07T04:27:25.150 | null | null | 253207 | [
"anova"
] |
612212 | 1 | null | null | 0 | 17 | I have a list of students from different classes, so far I have used a linear regression to create predicted future test scores based on multiple criteria such as attendance, previous test scores etc.
I would now like to use these predicted scores to work out the probability of each student finishing 1st, 2nd, 3rd in their class etc. One concern I have is that all the classes are different sizes.
I am very new to this, so any advice on the most efficient way to work out probabilities based on predicted scores would be greatly appreciated.
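One straightforward approach, sketched below, is Monte Carlo: treat each predicted score as the centre of a noise distribution (here Gaussian with a common residual standard deviation, both of which are assumptions rather than anything stated above), simulate many hypothetical outcomes per class, and count how often each student lands in each rank. Different class sizes are handled naturally because each class is simulated on its own.

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_probabilities(pred_scores, resid_sd, n_sims=10000):
    """Monte Carlo estimate of P(student finishes 1st, 2nd, ...) within
    one class. pred_scores: predicted scores for the students in a class;
    resid_sd: residual std. dev. of the regression (uncertainty around
    each prediction). Returns probs[i, r] = P(student i gets rank r),
    where rank 0 is the highest score.
    """
    pred = np.asarray(pred_scores, dtype=float)
    k = len(pred)
    sims = pred + rng.normal(0, resid_sd, size=(n_sims, k))
    order = np.argsort(-sims, axis=1)   # rank 0 = best simulated score
    probs = np.zeros((k, k))
    for r in range(k):
        idx, counts = np.unique(order[:, r], return_counts=True)
        probs[idx, r] = counts / n_sims
    return probs
```

Each row of `probs` sums to 1 (every student gets some rank), and column `r` gives the distribution over who takes rank `r`.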
Thanks in advance
| Probabilities of ranking positions based on predicted scores | CC BY-SA 4.0 | null | 2023-04-07T05:27:18.120 | 2023-04-07T05:27:18.120 | null | null | 380548 | [
"regression",
"probability",
"distributions",
"linear",
"ranks"
] |
612213 | 1 | null | null | 0 | 6 | So I'm trying to use a weighted binary cross-entropy loss, and I'm trying to calculate the weights for each class. I have 14 classes in the target variable.
I'm using the following function for calculating the weights.
```
import numpy as np

def compute_class_freqs(labels):
    """
    Args:
        labels (np.array): matrix of labels, size (num_examples, num_classes)
    Returns:
        positive_frequencies (np.array): array of positive frequencies for
            each class, size (num_classes)
        negative_frequencies (np.array): array of negative frequencies for
            each class, size (num_classes)
    """
    N = len(labels)
    # sum over examples (axis=0) so the result has one entry per class,
    # as promised by the docstring; axis=1 would give one per example
    positive_frequencies = np.sum(labels, axis=0) / N
    negative_frequencies = 1 - positive_frequencies
    return positive_frequencies, negative_frequencies
```
I'm taking BATCH_SIZE as 32. The total data size is 112121, which is obviously not a multiple of 32 (or of any power of 2), so some samples are left over in a final batch of fewer than 32. Hence I get broadcasting errors from numpy.
Should I ignore the last batch when calculating the weights, or would dropping those samples meaningfully change the result?
I'm using TensorFlow 2.11.0 with Python 3.9.16.
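One way to sidestep the ragged last batch entirely, assuming the weights only need to be computed once per training run (an assumption about the setup), is to calculate them over the full label matrix before batching. Note the `axis=0`: summing over examples yields one frequency per class, which is the shape a per-class weighted BCE needs.

```python
import numpy as np

def compute_class_weights(labels):
    """Per-class weights from the FULL (num_examples, num_classes)
    label matrix, computed once before batching, so the size of the
    last batch never matters.
    """
    labels = np.asarray(labels, dtype=float)
    n_examples = labels.shape[0]
    pos_freq = labels.sum(axis=0) / n_examples   # shape: (num_classes,)
    neg_freq = 1.0 - pos_freq
    # weight each term by the opposite class frequency so the positive
    # and negative contributions to the loss balance out per class
    return neg_freq, pos_freq   # (w_pos, w_neg)
```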
| Should I ignore the left out samples or insert padding during batch processing data | CC BY-SA 4.0 | null | 2023-04-07T06:39:23.330 | 2023-04-07T06:39:23.330 | null | null | 385140 | [
"python",
"loss-functions",
"tensorflow",
"numpy"
] |
612214 | 2 | null | 612175 | 1 | null | A training loss which is lower than the validation loss doesn't necessarily indicate an overfitted model. Overfitting should be a concern when the validation loss stops improving or actually deteriorates with further model training. There are a number of reasons why training loss might be persistently lower than validation loss and in fact this is typically the case.
| null | CC BY-SA 4.0 | null | 2023-04-07T06:54:23.123 | 2023-04-07T06:54:23.123 | null | null | 211876 | null |
612215 | 2 | null | 611177 | 0 | null | There are 4 possible treatment combinations in this case. You can do a priori contrasts between specific treatment combinations or a pairwise post hoc to compare all of them.
No, you do not just do t-tests, because you will inflate your Type I error. Pairwise procedures adjust Type I error rates, e.g., Tukey's HSD.
| null | CC BY-SA 4.0 | null | 2023-04-07T07:01:30.303 | 2023-04-07T07:01:30.303 | null | null | 202808 | null |
612216 | 2 | null | 611954 | 0 | null | An interaction between `Cov` and `Time` means that the relationship between `Cov` and `Y` changes across time. It doesn't mean that the value of `Cov` for each person has to change across time. For example, maybe having a gene for high cholesterol (`Cov`) doesn't have a strong relationship with risk of heart attack (`Y`) when someone is young, but it does have a strong relationship when someone is old. This is a time-varying effect of a time-invariant predictor.
Whether you should include a term like this in your model is a substantive question. You are essentially asking "what is the right model?". We don't know what the right model is, which is why we do statistics in the first place. You can always include it and see if the coefficient is different from 0, but whether it is or isn't might not mean anything substantively (e.g., if the variable is confounded or mediated, in which case the coefficient may not represent a meaningful effect).
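To make the "time-varying effect of a time-invariant predictor" concrete, here is a minimal simulated sketch (plain OLS on long-format data rather than a proper longitudinal model, purely to show the mechanics; all numbers are made up). `Cov` is constant within person, yet its coefficient grows with `Time`, and the `Cov × Time` interaction recovers that growth:

```python
import numpy as np

rng = np.random.default_rng(1)

# Time-invariant predictor (e.g., a gene indicator) with a time-varying
# effect: the slope of Cov on Y grows with Time even though each
# person's Cov never changes.
n_people, n_times = 200, 5
cov = rng.integers(0, 2, n_people).astype(float)     # fixed per person
time = np.tile(np.arange(n_times, dtype=float), n_people)
cov_long = np.repeat(cov, n_times)
y = (1.0 + 0.2 * time + (0.1 + 0.5 * time) * cov_long
     + rng.normal(0, 1, n_people * n_times))

# OLS with a Cov x Time interaction recovers the time-varying slope
X = np.column_stack([np.ones_like(y), cov_long, time, cov_long * time])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[3] estimates the 0.5 interaction: extra effect of Cov per unit Time
```

In a mixed-model formula interface this corresponds to the `Y ~ Cov * Time + (1 | id)`-style interaction term the question is asking about.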
| null | CC BY-SA 4.0 | null | 2023-04-07T07:10:50.847 | 2023-04-07T07:10:50.847 | null | null | 116195 | null |
612220 | 2 | null | 611582 | 1 | null | >
where does ${\mathcal{N}}\left( {m\left( X \right),k\left( {X,X} \right)} \right)$ come from?
This appears as the prior
$$f(X)|X,k \sim {\mathcal{N}}\left( {m\left( X \right),k\left( {X,X} \right)} \right),$$
which describes a distribution over functions $f(X)$.
It occurs in your Bayes rule
$$ p\left( {\left. {f , \sigma , m , k ,{\rm M} , {\rm K}} \right|X , Y} \right) \propto \\
{\sigma ^{ - n}}\prod\limits_{i = 1}^n {{e^{ - \frac{{{{\left( {{y_i} - {x_i}} \right)}^2}}}{{2{\sigma ^2}}}}}} \underbrace{{\text{GP}}\left( {m\left( x \right),k\left( {x,x'} \right)} \right)}_{{\mathcal{N}}\left( {m\left( X \right),k\left( {X,X} \right)} \right)}p\left( {\left. m \right|{\rm M}} \right)p\left( {\left. k \right|{\rm K}} \right)p\left( {\rm M} \right)p\left( {\rm K} \right)p\left( \sigma \right) \\
$$
| null | CC BY-SA 4.0 | null | 2023-04-07T08:14:46.217 | 2023-04-07T10:33:46.207 | 2023-04-07T10:33:46.207 | 164061 | 164061 | null |
612221 | 1 | null | null | 4 | 45 | I want to test a claim that the population probability is at least 0.9, and I'm confused about how I should set up my alternative hypothesis.
Should it be H0 : p=0.9 vs H1: p<0.9 or H0: p=0.9 vs H1: p>0.9 ?
| what should be chosen as the alternative hypothesis | CC BY-SA 4.0 | null | 2023-04-07T08:28:20.490 | 2023-04-07T10:01:43.313 | null | null | 358099 | [
"hypothesis-testing",
"statistical-significance",
"inference"
] |
612222 | 1 | 612230 | null | 4 | 473 | I am studying this [source](http://www.sthda.com/english/wiki/one-way-anova-test-in-r#check-anova-assumptions-test-validity) about the one-way ANOVA test in R. We know that the ANOVA test assumes that the data are normally distributed and that the variance across groups is homogeneous. The source claims that we can check this with some diagnostic plots. In the part *Check the homogeneity of variance assumption*, they say that the residuals versus fits plot can be used to check the homogeneity of variances:
>
The residuals versus fits plot can be used to check the homogeneity of
variances.
In the plot below, there is no evident relationships between residuals
and fitted values (the mean of each groups), which is good. So, we can
assume the homogeneity of variances.
But it is not explained how we can see this from the plot. Is it because of the distribution of the points, or because of the red line? So here is some reproducible code with the plot they are talking about:
```
library(ggpubr)
#> Loading required package: ggplot2
my_data <- PlantGrowth
my_data$group <- ordered(my_data$group,
levels = c("ctrl", "trt1", "trt2"))
# Compute the analysis of variance
res.aov <- aov(weight ~ group, data = my_data)
# Summary of the analysis
summary(res.aov)
#> Df Sum Sq Mean Sq F value Pr(>F)
#> group 2 3.766 1.8832 4.846 0.0159 *
#> Residuals 27 10.492 0.3886
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# 1. Homogeneity of variances
plot(res.aov, 1)
```

Created on 2023-04-07 with [reprex v2.0.2](https://reprex.tidyverse.org)
So I was wondering if anyone could please explain how to interpret this plot and why this could tell us something about the homogeneity of variance assumption?
| Check the homogeneity of variance assumption by residuals against fitted values | CC BY-SA 4.0 | null | 2023-04-07T08:40:04.150 | 2023-04-08T14:54:37.740 | null | null | 323003 | [
"self-study",
"anova",
"variance",
"heteroscedasticity",
"assumptions"
] |
612223 | 2 | null | 469973 | 1 | null | You need the variance of only those stores on which you are conducting the test. It is important to check that A and B are similar before the experiment (an A/A test). In general, move from stores down to users and conversions, i.e., to the target metric the test is aimed at.
| null | CC BY-SA 4.0 | null | 2023-04-07T08:50:30.020 | 2023-04-07T08:50:30.020 | null | null | 385153 | null |
612224 | 2 | null | 418136 | 1 | null | Our team developed a practical approach to estimate binary classification metrics on positive-unlabelled datasets using the prior probability of the positive class. We apply this approach to adjust the confusion matrix and calculate standard metrics such as accuracy, precision, and recall to evaluate the model's performance. This method allows for a more accurate evaluation of models on positive-unlabelled datasets, where the positive class is underrepresented. We validated our approach using a synthetic binary classification dataset and compared the results with the corresponding metrics computed on the original labelled version. The full description is [A Practical Approach to Evaluating Positive-Unlabeled (PU) Classifiers in Business Analytics](https://medium.com/towards-data-science/a-practical-approach-to-evaluating-positive-unlabeled-pu-classifiers-in-real-world-business-66e074bb192f)
| null | CC BY-SA 4.0 | null | 2023-04-07T08:55:53.767 | 2023-04-07T08:55:53.767 | null | null | 363056 | null |
612225 | 1 | null | null | 1 | 58 | I have fitted two robust linear mixed effects models, `null.model` and `full.model`, with same random-effects term, `(1 | id)`, to a data set using `robustlmm::rlmer`. These models only differ by a predictor `x`:
```
null.model <- rlmer(y ~ (1 | id), data)
full.model <- rlmer(y ~ x + (1 | id), data)
```
I have chosen to fit robust LMMs over LMMs (provided by `lme4::lmer` or `lmerTest::lmer`) since I saw that the residuals (of LMMs) were not following the straight line at the ends when I plotted Q-Q plots against a standard Gaussian distribution.
I am now facing a problem of comparing these two models. [The manual for robustlmm](https://cran.r-project.org/web/packages/robustlmm/robustlmm.pdf) says
>
... the log likelihood is not defined for the robust estimates returned by rlmer.
So, I can't perform a likelihood ratio test on the lines of `anova(null.model, full.model)`. Is there any way to compare these two models? Am I missing any important statistical assumptions that I should be aware of before I think of comparing these models?
| Model comparison for robust linear mixed effects models | CC BY-SA 4.0 | null | 2023-04-07T09:00:22.783 | 2023-04-07T10:10:28.863 | 2023-04-07T10:10:28.863 | 298817 | 298817 | [
"mixed-model",
"lme4-nlme",
"model-selection",
"model-comparison"
] |
612226 | 1 | null | null | 2 | 20 | I'm following this [note](https://www.stats.ox.ac.uk/%7Erebeschi/teaching/AFoL/20/material/lecture15.pdf) to learn about deriving an upper bound of the UCB algorithm on the Stochastic Multi-Armed Bandit Problem. In particular, the proof of Lemma 15.6 there connotes that we can apply Hoeffding's inequality to
$$ Pr(\frac{1}{N_{t,a}}\sum_{s=1}^t Z_{s,a} I_{A_s=a} - \mu_a \geq \epsilon ) $$
where $Z_{s,a}$'s are iid rewards with mean $\mu_a$ and are independent of other rewards, $A_s$ is the action we make at time $s$ which is in general a function of the past rewards and past actions, $N_{t,a}=\sum_{s=1}^t I_{A_s=a}$.
Of course, a natural starting point is to condition on $N_{t,a}=n$, so that inside the probability there are $n$ iid random variables. However, I cannot proceed because the $Z$'s are clearly not independent of $N_{t,a}$ (as $N_{t,a}$ depends on $(A_s)_{s=1,...,t}$ and each $A_s$ depends on $(Z_{i,a})_{i=1,...,s-1,\, a\in A}$; in particular, UCB specifies $A_{s+1}= \arg\max_a U_{s,a}:= \frac{1}{N_{s,a}}\sum_{i=1}^s Z_{i,a} I_{A_i=a} + \sqrt{\frac{\log(s)}{2N_{s,a}}}$).
Similarly, online notes [here](https://www.cs.cornell.edu/courses/cs6783/2021fa/lec25.pdf) (page 2, Lemma 1) and [here](https://courses.cs.washington.edu/courses/cse599i/18wi/resources/lecture3/lecture3.pdf) (page 8, Theorem 4) all directly claim that we can apply Hoeffding's inequality to the above inequality without justification.
I'm aware that there are other methods to derive an upper bound for UCB, but I'd really like to confirm if this approach is a dead end.
| Application of Hoeffding's inequality on the Stochastic Multi-Armed Bandit Problem | CC BY-SA 4.0 | null | 2023-04-07T09:32:53.563 | 2023-04-07T09:56:25.253 | 2023-04-07T09:56:25.253 | 385123 | 385123 | [
"machine-learning",
"probability",
"mathematical-statistics",
"probability-inequalities"
] |
612227 | 2 | null | 612175 | 2 | null | First of all, what you are describing are not KPIs. KPIs are usually [business metrics for the problem you are trying to solve](https://medium.com/mercadolibre-tech/no-machine-learning-kpis-first-935e8ca0e4a9). They do not have to do anything with the machine learning metrics. Using those terms interchangeably would be confusing for many people. If indeed you had a key performance indicator, it would be a key metric for your project, so it would be decisive by itself.
Second, your definitions of underfitting and overfitting are not correct. Metrics are real-valued, so there is literally zero probability that training and test metrics would be equal. As @Estacionario noticed in their answer, test metrics would usually be worse than the training metrics because they are calculated on unseen data. We are talking about underfitting or overfitting if those differences are significant (there is no formal threshold) and/or based on other criteria.
Finally, consider a more extreme case, where you have two models: the first one has 50% train and 50% test accuracy, while the other has 90% train and 80% test accuracy. Which would you choose? The consistently poor one does not sound like a great choice.
| null | CC BY-SA 4.0 | null | 2023-04-07T09:39:33.067 | 2023-04-07T09:39:33.067 | null | null | 35989 | null |
612228 | 1 | null | null | 1 | 25 | I am asked in a homework question to prove asymptotic normality for the generalized method of moments estimator. The assumptions (which i think are necessary to solve this particular subproblem) given in the theorem are
- $ (Z_i)_{i \in \mathbb{N}}$ is a sequence of i.i.d. random variables.
- $g(z|\theta)$ is continuously differentiable wrt. $\theta$ in a neighborhood $\mathcal{N}$ of $\theta_0\in Int(\Theta)$ ($g$ is a moment restriction function, $\theta_0$ is the true parameter, and $\Theta\subset\mathbb{R}^k$ is the parameter space)
- $\mathbb{E}[\sup_{\theta\in \mathcal{N}}||g(Z_i|\theta)||]<\infty $
In the concluding argument of the proof I need to show that $G_n(\theta):=\frac{1}{n}\sum_{i=1}^{n}\partial_\theta g(Z_i|\theta)$ converges uniformly to $G(\theta) = \mathbb{E}[\partial _\theta g(Z_i|\theta)]$, i.e. $$\sup_{\theta\in\mathcal{N}} ||G_n(\theta) - G(\theta)||\stackrel{P}{\rightarrow}0$$ It is hinted at that the convergence follows from conditions 2 and 3. I have also snooped around various stack exchanges and gotten a hunch that the Borel-Cantelli lemma might be helpful. But at this point I am truly lost.
Any help would be greatly appreciated!
(If you feel like you would need more information on the problem, please let me know)
| Proving uniform convergence of moment restriction score function in GMM asymptotic normality proof | CC BY-SA 4.0 | null | 2023-04-07T09:41:19.120 | 2023-04-15T20:18:14.800 | 2023-04-07T09:42:27.960 | 385155 | 385155 | [
"econometrics",
"convergence",
"asymptotics",
"generalized-moments"
] |
612229 | 2 | null | 612221 | 0 | null | X claims: "The probability $p$ is at least 0.9". If you test $H_0:\ p\ge 0.9$ against the alternative $p<0.9$, it means that you will reject X's claim if there is clear evidence against it. For example, if you observe relative frequency $\hat p=0.87$, X's claim will not be rejected, as the result is compatible with $p=0.9$ despite estimating below it, unless the sample size is very large (in which case even a small deviation from the $H_0$ will come out significant).
If you test $H_0:\ p\le 0.9$ against $p>0.9$, you demand significant "proof" that $p$ cannot be 0.9 or smaller. With $\hat p=0.87$ you won't reject $H_0$ and therefore can't say you have evidence that $p>0.9$. In fact this (rather obviously) doesn't even give you evidence for $p\ge 0.9$, and neither does, say, $\hat p=0.92$ in case your sample size is not very large.
Chances are in the given situation you want the first option, however you need to think through the consequences and the background of the situation. Do you want to let X get away with their claim if the evidence isn't clearly against it, or do you actually demand that X "statistically proves" their claim, excluding the possibility that $p<0.9$ with small error probability?
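As a small numerical illustration of the two directions (87 successes out of 100 trials is a made-up example), scipy's exact binomial test can be run both ways:

```python
from scipy.stats import binomtest

k, n = 87, 100  # hypothetical data: 87 successes in 100 trials

# Option 1: X's claim is the null; look for evidence AGAINST p >= 0.9.
res_against = binomtest(k, n, p=0.9, alternative='less')

# Option 2: demand that the data PROVE the claim; H1 is p > 0.9.
res_for = binomtest(k, n, p=0.9, alternative='greater')
```

With these numbers the first test does not reject (so the claim survives), while the second gives no support at all for $p>0.9$, matching the distinction drawn above.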
| null | CC BY-SA 4.0 | null | 2023-04-07T09:53:34.680 | 2023-04-07T09:53:34.680 | null | null | 247165 | null |
612230 | 2 | null | 612222 | 7 | null | It's a good question, because in practice a great deal depends on experience rather than exact rules, and how is one to judge with little or no experience, and who counts as experienced or expert, and will experts always agree? (They won't.)
Even more depends on knowing that equal variances are an ideal condition, not a binding essential such that tiny deviations are fatal. It's a hobby-horse of mine, although not an original point, that the almost universal use of the term assumption in these statistical contexts is not especially helpful. In logic and pure mathematics, a failure of assumptions can be utterly fatal to the validity of an argument. In applied mathematics, including statistics, a failure of "assumptions" has to be judged pragmatically, because just about every application is an approximation. We would often be better off talking about ideal conditions, a phrase intended to march with a realisation that real data are usually messy and imperfect, especially when compared with fantasy or brand-name distributions. (Ironically, or otherwise, one of the most important ideal conditions, independence in some sense, is rarely discussed or checked for.)
There are some slightly more precise guidelines that I would add.
- As a starting point, unequal variances that need attention tend to leap out at you from a plot, sometimes phrased in terms of a pattern hitting you between the eyes. If you're in doubt, you can usually assume there isn't a real problem.
- Perhaps contradicting #1, but that's typical of any advice: you can't always trust a graph. An appearance of greater or lesser variability can sometimes arise from differences in group size. A large group is more likely to include values from the tails of a conditional distribution than a small group. Hence if in doubt, calculate the variances to check, either for pre-defined groups as here or in some other appropriate manner.
- Is there a better model within reach? is the important associated question. For example, if variability of residuals seemed to increase with fitted or predicted values, I might wonder about working on a transformed scale, say by taking logarithms or (even better) using a generalized linear model with a logarithmic link. You stick with a model if you can't think of a better one, or more positively, change to a better model if you can see one. (Trying another model and finding that it isn't better, indeed possibly worse, and being able to report that, is very good practice in my view. Some people get queasy about choosing a model after exploration of the data or initial analysis. The view that you must think up a model in advance rather limits the scope for learning from data. Where did the model come from any way?)
- Your data are unlikely to be absolutely unique or unprecedented. What do people do in your journal literature? More generally, what do you know, as a scientist or other subject-matter expert, about how big or how small values may be, including whether there are limits to counted or measured values?
- More work, but not much so with decent software, is to simulate with similar sample sizes from a set-up with homoscedastic errors and see how different do results look in a portfolio of fake datasets? People new to statistics often underestimate how much variability there is in small samples, even if the underlying process is close to ideal. This example qualifies as a very small dataset by most standards.
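The simulation idea in the last point can be sketched quickly. The snippet below (Python rather than the question's R, purely for illustration) generates thousands of fake datasets with truly equal variances at the PlantGrowth sample sizes, three groups of ten, and looks at how unequal the group standard deviations can appear by chance alone:

```python
import numpy as np

rng = np.random.default_rng(42)

# How unequal can group SDs *look* when variances are truly equal?
# Mimic the PlantGrowth layout: 3 groups of 10 observations.
k, n, n_fake = 3, 10, 5000
sds = rng.normal(0, 1, size=(n_fake, k, n)).std(axis=2, ddof=1)
ratio = sds.max(axis=1) / sds.min(axis=1)   # largest / smallest group SD

typical = np.median(ratio)
```

Ratios of largest to smallest group SD well above 1 turn out to be routine under perfect homoscedasticity at these sample sizes, which is one more reason a plot like the one in the question should not cause alarm on its own.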
| null | CC BY-SA 4.0 | null | 2023-04-07T09:55:27.450 | 2023-04-07T10:21:56.797 | 2023-04-07T10:21:56.797 | 22047 | 22047 | null |
612231 | 1 | 612235 | null | 1 | 32 | I had planned to perform a 3 x 2 repeated measures ANOVA before I realized that all the variables are distributed in a bimodal, U-shaped distribution where 0 and 1 are the modes. The high occurrence of 0 and 1 are meaningful to the analysis and therefore it may be inappropriate to transform the data.
Why is the variable bounded?
The outcome variable is a gaze behavior. Zero indicates the absence of a behavior, and 1 indicates that the behavior is strong.
What values are within the variable?
There are 0 and 1, and lots of values in between.
Considering that 1) the data violate ANOVA assumptions of normality of residuals; 2) I prefer not to transform the data; 3) sample size is small (n = 30, within-subjects), I am now planning to proceed with a non-parametric permutation test.
Would my understanding be correct?
[](https://i.stack.imgur.com/3Db60.png)
| Performing ANOVA on bounded variables between 0 and 1 | CC BY-SA 4.0 | null | 2023-04-07T09:56:59.020 | 2023-04-07T17:39:47.323 | 2023-04-07T17:36:32.787 | 54123 | 54123 | [
"anova",
"nonparametric"
] |
612232 | 2 | null | 612222 | 6 | null | The answer by @NickCox is excellent. I add that the shown plot on its own in my view doesn't raise any concern, as any difference in variances is not strikingly clear and one could imagine changing just 1-3 observations here by a bit (extreme outliers should make you worry) so that variances would look about as homogeneous as it gets, i.e., one could easily imagine this to be generated from a model with homogeneous variances with a bit of random variation.
| null | CC BY-SA 4.0 | null | 2023-04-07T10:00:49.080 | 2023-04-07T10:00:49.080 | null | null | 247165 | null |
612233 | 2 | null | 612221 | 0 | null | What you wrote can be interpreted in multiple ways, so the answer depends on your research question. Example:
If you have an old medicine that gives at least 90% of people adverse effects, and you want to show that your new medicine is better, then you should test H0 p=0.9 vs one sided H1 p<0.9
If you have an old medicine that treats 90% of people, and you want to show that your new medicine is better than that, then you should test H0: p=0.9 vs one-sided H1: p>0.9.
| null | CC BY-SA 4.0 | null | 2023-04-07T10:01:43.313 | 2023-04-07T10:01:43.313 | null | null | 53084 | null |
612234 | 1 | null | null | 0 | 13 | I am training an LSTM where I have sales data from 20 different individuals over the past 10 years.
Now, I read this brilliant answer: [How to train LSTM model on multiple time series data?](https://stats.stackexchange.com/questions/305863/how-to-train-lstm-model-on-multiple-time-series-data)
But, due to domain knowledge / prior analysis, I believe that only “recent” timesteps are useful in predicting a future timestep, let’s say, the previous 1 year.
Now, what do I do with all of this historical data? Do I just not use it?
Can I just create more subsequences out of these longer sequences and train the LSTM on these as well?
Or do I have to input the whole sequence for each individual?
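The subsequence idea above can be sketched as a sliding window over each individual's history: every long series becomes many (window, next-step) training pairs, so the historical data is still used even though the model only ever sees the recent window. A minimal sketch for a univariate series (window length and shapes are illustrative):

```python
import numpy as np

def make_subsequences(series, window):
    """Slice one long per-individual series into overlapping fixed-length
    training pairs: X[i] = series[i : i+window], y[i] = series[i+window].
    All historical data becomes extra training examples while the model
    itself only sees `window` recent timesteps.
    """
    series = np.asarray(series)
    X = np.lib.stride_tricks.sliding_window_view(series, window)[:-1]
    y = series[window:]
    return X, y
```

The same slicing is applied to each of the 20 individuals separately, and the resulting (X, y) pairs are pooled for training.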
| Shortening LSTM Sequences | CC BY-SA 4.0 | null | 2023-04-07T10:19:12.923 | 2023-04-07T10:19:12.923 | null | null | 292642 | [
"neural-networks",
"lstm",
"recurrent-neural-network"
] |
612235 | 2 | null | 612231 | 1 | null |
- Yes, the data are not suitable for ANOVA for the reasons you've mentioned
- Transformation won't help, because you will still have a hard limit at some value, so your data won't be normal anyway.
- A Monte Carlo permutation test, or some rank-based nonparametric test, should be fine.
- You might also use parametric generalized linear models with an appropriate link function, maybe a beta regression.
- If your data are like that because of censoring, then you might need to do something else, but I don't know much about that.
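A minimal sketch of the Monte Carlo permutation idea, for a single within-subject contrast (a full 3 × 2 repeated-measures design needs a permutation scheme that respects the whole design, which this sketch does not attempt), is to sign-flip the within-subject differences:

```python
import numpy as np

rng = np.random.default_rng(0)

def paired_permutation_test(a, b, n_perm=10000):
    """Monte Carlo permutation (sign-flip) test for a within-subject
    comparison of two conditions. Under H0 the within-subject difference
    is symmetric about 0, so each subject's difference can have its sign
    flipped at random. Returns a two-sided p-value for the mean difference.
    """
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    observed = abs(d.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_perm, d.size))
    perm_means = np.abs((signs * d).mean(axis=1))
    return (1 + np.sum(perm_means >= observed)) / (n_perm + 1)
```

Because only the signs of the observed differences are resampled, no distributional shape is assumed, so the bimodal, bounded outcome described in the question is not a problem.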
| null | CC BY-SA 4.0 | null | 2023-04-07T10:31:06.773 | 2023-04-07T17:39:47.323 | 2023-04-07T17:39:47.323 | 22047 | 53084 | null |
612236 | 2 | null | 610393 | 0 | null | As far as the definition is concerned, there is no problem in defining the mutual information between two (vector) random variables that take values in different spaces. The mutual information between two random variables $X \in \mathcal{X}$ and $Y \in \mathcal{Y}$ with joint distribution $p_{X,Y}$ and marginals $p_X$ and $p_Y$, respectively, is
defined as
$$
\begin{align}
I(X;Y)
&= \sum_{x \in \mathcal{X}}\sum_{y \in \mathcal{Y}} p_{X,Y}(x,y) \log \frac{p_{X,Y}(x,y)}{p_X(x)p_Y(y)}\\
&=H(X)+H(Y)-H(X,Y).
\end{align}
$$
If you estimate the distributions $p_{X,Y}$, $p_X$, $p_Y$, you should be able to compute a (naive?) estimation for the mutual information.
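A naive plug-in estimate from a table of joint counts might look like the following sketch; note that $\mathcal{X}$ and $\mathcal{Y}$ may have different sizes, so the table simply is not square:

```python
import numpy as np

def mutual_information(joint_counts):
    """Plug-in estimate of I(X;Y) in nats from a contingency table of
    joint counts (rows: values of X, columns: values of Y; the table
    need not be square).
    """
    p_xy = np.asarray(joint_counts, dtype=float)
    p_xy = p_xy / p_xy.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0   # 0 * log 0 contributes nothing
    return float(np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x * p_y)[mask])))
```

This plug-in estimator is biased upward in small samples; bias corrections (e.g., Miller–Madow) exist but are beyond this sketch.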
| null | CC BY-SA 4.0 | null | 2023-04-07T10:52:40.513 | 2023-04-07T10:52:40.513 | null | null | 384237 | null |
612237 | 1 | null | null | 1 | 41 | The formulation of the conditional density is:
$$ f_{Y|X}(y|x) = \frac{f_{X,Y}(x,y)}{f_X(x)}. $$
I need to estimate this density from data, and it is prohibitively time-consuming to calculate the joint density (I have tens of variables). However, for fixed values of $x$, it is easy to estimate the conditional density directly (without calculating the joint). I also have access to the distribution function of $X$ and its inverse, $F_X$ and $F_X^{-1}$. My approach to estimate the conditional density is then to calculate this:
$$ \hat{f}_{Y|X}(y|x) \overset{?}{=} \frac{\sum_i f_{Y|X}\left(y \mid x=F_X^{-1}(i)\right)}{NF}, $$
where $i$ takes a range of quantiles (for example $n$ evenly spaced points between 0 and 1) and NF is a normalization factor to ensure the density integrates to 1.
I admit not having any solid understanding of why I did this, but if someone can at least point me to some resources, it would be greatly appreciated.
Extra non-relevant note: the densities in question are actually copula-densities, but it should not alter any of the equations.
| Efficient estimation of conditional probability density | CC BY-SA 4.0 | null | 2023-04-07T10:58:04.697 | 2023-04-07T10:59:30.703 | 2023-04-07T10:59:30.703 | 382301 | 382301 | [
"estimation",
"dataset",
"density-function",
"conditional"
] |
612238 | 1 | null | null | 0 | 13 | Experts,
I fitted a GAM with the mgcv package containing a significant independent variable: the number of days since the first action. The maximum number of days was 40.
The dependent variable is the action of frogs. My question: do you recommend `predict.gam` or a time-series function for predicting the frogs' action patterns? Thanks a lot for your advice!
| Time series function or predict.gam function for predictions with a fitted GAM model? | CC BY-SA 4.0 | null | 2023-04-07T11:01:26.140 | 2023-04-07T11:01:26.140 | null | null | 385166 | [
"regression",
"time-series",
"multivariate-analysis",
"simulation",
"mgcv"
] |
612239 | 1 | null | null | 4 | 187 | Is there a theoretical upper limit to the number of parameters that be estimated with maximum likelihood estimation? My understanding is no, but that if you have too many parameters it may not be possible to find one set of parameters that uniquely optimizes the log-likelihood.
Assuming the above is correct, practically speaking, how can I determine if my model has too many parameters? Are there tests or guidelines regarding how many observed data points I need relative to my number of parameters?
| Is there a theoretical maximum to the number of parameters that can be estimated with maximum likelihood estimation? | CC BY-SA 4.0 | null | 2023-04-07T11:04:55.907 | 2023-04-07T11:22:04.100 | null | null | 385165 | [
"maximum-likelihood"
] |
612240 | 2 | null | 612239 | 5 | null | Theoretical? No. Let’s look at an example where the number of parameters is unbounded.
If we assume $iid$ Gaussian error terms in a linear model, OLS coincides with maximum likelihood estimation. If there are more observations than regression parameters, then we have a model matrix $X$ that has full column rank. Consequently, $(X^TX)^{-1}$ exists, and the usual $\hat\beta_{OLS}=(X^TX)^{-1}X^Ty$ exists and is equivalent to the maximum likelihood estimate.
Since there is no theoretical limit to the number of observations, there is no theoretical limit to the number of parameters that can be estimated.
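A quick numerical check of this equivalence with an arbitrarily large model (sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# With iid Gaussian errors, the OLS solution (X'X)^{-1} X'y is the MLE,
# however many columns X has, as long as n > p and X has full column rank.
n, p = 500, 50                       # 50 parameters, more observations
X = rng.normal(size=(n, p))
beta_true = rng.normal(size=p)
y = X @ beta_true + rng.normal(size=n)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
```

Growing `p` is fine as long as `n` grows with it, which mirrors the point above: the number of observations, not any theoretical ceiling, is what limits the number of estimable parameters.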
Moving beyond this example, you are correct that having many parameters can result in the MLE not being unique, yes, but that does not keep MLEs from existing.
| null | CC BY-SA 4.0 | null | 2023-04-07T11:22:04.100 | 2023-04-07T11:22:04.100 | null | null | 247274 | null |
612242 | 1 | null | null | 2 | 34 | I want to run robustness tests for my model, for example by reducing the sample to heavily concentrated groups, or by running a different regression (probit, etc.). But how do I ascertain that my results are robust? Is it sufficient that my key explanatory variables have the same sign as in the original model, that the magnitude of the coefficients is similar, and that they are significant? Is it necessary that the coefficients are exactly the same?
Thank you
| How to determine if my model is robust? Should the coefficients be same? | CC BY-SA 4.0 | null | 2023-04-07T11:58:16.590 | 2023-04-07T11:58:16.590 | null | null | 369093 | [
"logistic",
"least-squares",
"regression-coefficients",
"post-hoc",
"robust"
] |
612243 | 1 | null | null | 1 | 97 | Some background on my problem - Let us consider a discrete memoryless channel (DMC) $W_{Y|X}$ from Alice to Bob. A DMC is a conditional probability distribution over the random variable $Y$ given input random variable $X$. We wish to send a uniformly randomly chosen message from a set of size $\mathcal{M}$ over this channel with average transmission error at most $\varepsilon$. To achieve this, one has an encoder on Alice's side that takes the message as input and outputs a random variable $X$. $X$ is the input to the channel which outputs random variable $Y$. A decoder on Bob's side takes random variable $Y$ as input and outputs a message. A transmission error has occurred in the communication protocol if the input and output messages are different and otherwise, Alice and Bob have successfully sent a message using $W$.
Now consider $n$ i.i.d. copies of the given DMC and denote it by $W_{Y|X}^{\otimes n}$ for $n\in\mathbb{N}$. We ask what the maximum communication rate can be using the DMC $W_{Y|X}^{\otimes n}$. As before, we have an encoder that takes a uniformly random message from a set $\mathcal{M(n)}$ and outputs an $n$-bit string $X^n$. Let $p_{X^n}$ be the distribution over the $n$-bit strings $X^n$ when we encode a uniformly random message. $p_{X^n}$ must be invariant under any permutation of the $n$ positions since we have $n$ i.i.d. copies of the same channel. As we increase $n$, we add more i.i.d. copies of $W_{Y|X}$. Hence, the permutation invariance of our capacity-achieving $p_{X^n}$ holds for any choice of $n\in\mathbb{N}$.
For $k\in\mathbb{N}$, let us consider i.i.d. distributions $q_{X^k}^i = \prod\limits_{j=1}^kq^i_{X_j}$ for any choice of distribution $q^i$. Does there always exist a convex combination of such i.i.d. distributions such that
$$p_{X^k} = \sum_{i}\mu(i)q_{X^k}^i$$
where $\mu(i)$ is some measure that assigns a weight to each element of our convex combination? I am not sure if this claim is a special case of the de Finetti theorem (see Theorem 2 of [these notes](https://people.eecs.berkeley.edu/%7Ejordan/courses/260-spring10/lectures/lecture1.pdf)).
The point that I am confused about is whether my $p_{X^k}$ can be thought of as extendible for $n>k$ to allow me to invoke the de Finetti theorem.
| Is this a valid statement according to the de Finetti theorem? | CC BY-SA 4.0 | null | 2023-04-07T12:10:07.167 | 2023-04-10T15:08:25.553 | 2023-04-10T15:08:25.553 | 110901 | 110901 | [
"bayesian",
"random-variable",
"exchangeability",
"permutation"
] |
612245 | 1 | null | null | 0 | 19 | I'm struggling a bit with intuition behind resampling tests for difference in means.
I've two samples s1 and s2 of size n1 and n2 respectively. Population parameters are unknown. I'd like to know if the means of the two populations from which the samples came are different.
The Permutation approach for this I've read to be:
- Note the observed difference in mean between s1 and s2.
- Combine both s1 and s2 into a single large pool
- Draw n1 samples and compute their mean. Compute the mean of the remaining n2 samples and take the difference between the two. Note this difference down.
- repeat the above a large number of times, thus getting a distribution for difference in means.
- Test the observed difference in mean in step 1 against the distribution in step 4 for significance.
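The steps above can be sketched in a few lines (toy data; all numbers are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
s1 = rng.normal(0.0, 1.0, size=30)   # sample 1
s2 = rng.normal(0.5, 1.0, size=40)   # sample 2
observed = s1.mean() - s2.mean()     # step 1: observed difference in means

pool = np.concatenate([s1, s2])      # step 2: combine into one pool
n1 = len(s1)

diffs = []
for _ in range(5000):                # steps 3-4: build the null distribution
    perm = rng.permutation(pool)
    diffs.append(perm[:n1].mean() - perm[n1:].mean())
diffs = np.array(diffs)

# step 5: two-sided p-value against the permutation distribution
p_value = np.mean(np.abs(diffs) >= np.abs(observed))
print(round(p_value, 3))
```

This is just the mechanical procedure from the list, to make the question about step 4's null distribution concrete.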
My problem is - The NULL distribution generated in step 4, wouldn't that be wrong if the samples were actually from populations that did have different means? What we get in step 4 seems to be a distribution of the difference in means when the two populations are combined, without any relevance to whether the means were the same. How can this be used as the null distribution - where the null hypothesis is supposed to be that the two populations have the same mean?
I have the same question for bootstrap as well (i.e., In step 4, if we draw samples of size n1 and n2 with replacement)
Thanks!
| Resample test for difference in means - null distribution is it right? | CC BY-SA 4.0 | null | 2023-04-07T12:23:19.803 | 2023-04-07T12:23:19.803 | null | null | 385172 | [
"hypothesis-testing",
"statistical-significance",
"inference",
"permutation-test",
"resampling"
] |
612246 | 2 | null | 318174 | 0 | null | For dependent observations, transform the individual data NOT the differences. Then you can back-transform the mean difference to give the ratio of the geometric means. If the paired data are before and after measurements, this is interpreted as the ratio of the two geometric mean measurements.
---
Reference: Oxford Handbook of Medical Statistics, 2nd edition, page 332.
| null | CC BY-SA 4.0 | null | 2023-04-07T12:31:30.510 | 2023-04-07T12:31:30.510 | null | null | 362674 | null |
612247 | 1 | null | null | 0 | 40 | I am trying to simulate data coming from a joint model of longitudinal and survival data. Basically, my thought process is this.
- I need to define a maximum follow-up time, F.
- I need to define coefficients ($\boldsymbol{\beta}$, $\sigma_t$), generate $X$ from some distribution and $W$ from some distributions, and define $\alpha$ and $\gamma$.
- I need to use the max follow-up time to solve for integrals using the uniroot function.
For each individual i, I generate survival probabilities from the uniform distribution.
\begin{equation} \label{eq:uniroot1}
S(t|W, X) \sim U(0,1)
\end{equation}
For each individual, I am trying to solve:
\begin{equation}
\int_0^F h(u|\textbf{W}, m_i(u)) \partial u + \log U(0,1) = 0
\end{equation}
where I define here $\textbf{W}$ are baseline covariates and $m_i(u)$ is the time-varying covariate that is essentially "longitudinal marker" without the error term defined as follows:
\begin{equation}
y_i(t) = m_i(t) + \epsilon_i(t) \\
m_i(t) = (\beta_0 + b_{i0}) + (\beta_1 + b_{i1})t + (\beta_2 X_{2i}) + \beta_3X_{2i}t \\ \end{equation}
And the form of the Weibull PH is as follows:
\begin{equation} \label{eq:weibullPH}
(\sigma_t \exp(\alpha m_i(t) +\boldsymbol{\gamma}^T\textbf{W}_i))t^{\sigma_t-1}\\
\end{equation}
Once I solve this equation, I can just generate C from Uniform(0, Max.FollowUpTime)to perform uniform censoring.
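For concreteness, here is one hedged sketch of the root-finding step described above (inverse-transform sampling with numerical integration); every parameter value below is an arbitrary placeholder, not a recommendation, and the covariate terms of $m_i(t)$ are folded into a single intercept/slope pair for one individual:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

rng = np.random.default_rng(1)
F = 10.0                       # maximum follow-up time (placeholder)
sigma_t, alpha = 1.5, 0.2      # Weibull shape, association parameter
gamma, W = 0.3, 1.0            # baseline covariate effect and value

def m(t):                      # one individual's m_i(t), covariates folded in
    return 0.5 - 0.1 * t

def hazard(t):
    return sigma_t * np.exp(alpha * m(t) + gamma * W) * t ** (sigma_t - 1)

def cum_hazard(t):
    return quad(hazard, 0.0, t)[0]

def draw_time():
    u = rng.uniform()
    f = lambda t: cum_hazard(t) + np.log(u)
    if f(F) < 0:               # H(F) < -log(u): no event by F
        return F, 0            # administratively censored at F
    return brentq(f, 1e-8, F), 1

times_status = [draw_time() for _ in range(20)]
print(all(0 < t <= F for t, _ in times_status))
```

The if-branch also answers part of question B in passing: any draw whose cumulative hazard at F stays below $-\log U$ cannot produce a root in $(0, F]$, so F directly controls how much administrative censoring you get.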
My questions are:
A. Does this seem like the right way to get survival times?
B. How do I define the maximum follow-up time such that it is a reasonable upper bound survival time for the values I defined in point 2.
Thank you for any pointers!
| How to simulate right-censored Weibull PH survival times with time-varying covariate? | CC BY-SA 4.0 | null | 2023-04-07T12:34:15.290 | 2023-04-07T20:44:43.927 | 2023-04-07T20:44:43.927 | 28500 | 58910 | [
"survival",
"simulation",
"weibull-distribution"
] |
612250 | 1 | 612901 | null | 1 | 29 | I am currently working with a MLR model comprising 1 numeric/continuous predictor variable (x1), several nominal categorical variables (x2 ... xi), and an interaction term between the continuous variable and one of the categorical variables (x1*x2). I wish to plot the relationship between y and x1 on a 2d scatterplot (ideally in ggplot2) with a line of best fit and confidence intervals. I believe that this is possible in theory, so long as the categorical variables are fixed at a pre-selected/reference level for plotting purposes. My understanding is that the model terms associated with the different levels of the categorical variables will just shift the intercept up/down the y-axis, and not fundamentally change the nature of the relationship between y and x1 (i.e., the slope). However, I have not been able to work out how to do generate such a plot in practice.
A reprex of a similar (toy) model is provided below:
```
# Generate data.frame
df<-
data.frame(
"y"=c(32,27,29,41,26,23,35,36,35,32,29,30,40,27,38,21,31,26,26,34,41,29,26,24),
"x1"=c(28,32,36,40,44,48,52,56,60,64,68,72,72,68,64,60,56,52,48,44,40,36,32,28),
"x2"=c("M","F"),
"x3"=c("A","B","C"),
"x4"=c("I","II","III","IV")
)
df$x2<-
as.factor(df$x2)
df$x3<-
as.factor(df$x3)
df$x4<-
as.factor(df$x4)
# Generate MLR model
lm<-
lm(
y~
x1*x2+
x3+
x4,
data=df
)
summary(lm)
```
summary(lm) reads as follows:
[](https://i.stack.imgur.com/HmPW3.jpg)
Imagine that I wish to plot the relationship between y and x1 on a 2d scatterplot, and use x2=M, x3=A, and x4=III as the reference levels at which I wish to fix these covariates. How would I do so? I have tried manually calculating the predicted values for each of the data points if they were associated with these reference levels, and plotting them all, like so:
```
# Manually generate predictions
df$fixed<-
32.424825+ # intercept
df$x1*-0.003497+ # term for x1
1*-4.525641+ # term for x2=M
1*0+ # term for x3=A
1*-3.666667+ # term for x4=III
1*(df$x1*0.153846) # x1*x2 interaction term when x2=M
# Plot df$fixed vs df$x1
library(ggplot2)
p<-
ggplot(
data=df,
mapping=aes(
x=x1,
y=fixed
)
)+
geom_point()+
geom_smooth(
method="lm",
se=T
)
p
```
This approach has not worked for me, specifically as i) I wish to show the confidence intervals around the line of best fit, and ii) I have a very large dataset (~500k observations). In essence, I think that what I am doing here is passing ~500k fitted values to ggplot2, and then asking it to plot the line of best fit and associated confidence intervals. Unsurprisingly, there is essentially no uncertainty--given that the values are fitted, and that there are so many of them--so the confidence intervals are arbitrarily small. This would thus not appear to be the correct approach.
Is anyone aware of a method/package/function where I can plot y~x1 (ideally in ggplot2 graphics) with x2 .... xi held constant, in a way that still shows the uncertainty in the data (i.e., with confidence intervals)?
Thank you very much.
| Holding covariates constant to plot MLR model on 2d scatterplot in R | CC BY-SA 4.0 | null | 2023-04-07T13:39:56.093 | 2023-04-14T08:39:37.247 | 2023-04-14T08:39:37.247 | 347134 | 347134 | [
"r",
"multiple-regression",
"linear-model",
"ggplot2",
"marginal-effect"
] |
612251 | 2 | null | 612079 | 0 | null | >
In statistics classes I have been advised to do quick t-test before running the regression analysis to find out if the two groups (male and female) differ in the DV.
You probably wouldn't get that recommendation from many of those who frequently visit this site. Any use of an outcome to decide on the structure of a model violates the assumptions for later significance testing, in a way that's difficult to control for. There was no need to do that test. Unless the design was nicely balanced between genders with respect to other outcome-associated predictors, such a test might even lead you astray.
Your single model with the interaction term contains all the information you need, and it might even contain information that there is an association of gender with outcome. You say:
>
gender did not significantly predict DV
but I wonder if that assessment is only based on the reported single regression coefficient for `gender`. With `gender` involved in an interaction, that coefficient will be for the situation when all of the interacting predictors are at 0 or reference levels. That often is a situation of no practical importance. Furthermore, the reported p-value will test whether the coefficient for `gender` under that situation is different from 0. That single coefficient is not a test of the overall significance of `gender`.
To evaluate the overall significance of `gender`, consider a likelihood-ratio test of two models: your full model and the same model completely without `gender` as a predictor. Alternatively, do a Wald test of all the coefficients involving `gender` in your full model, as performed for example by the `Anova()` function in the R [car package](https://cran.r-project.org/package=car).
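The answer points to R tooling; purely as a hedged illustration, the same full-versus-reduced likelihood-ratio comparison can be sketched with statsmodels on simulated data (all variable names and coefficient values below are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),   # hypothetical 0/1 coding
    "x": rng.normal(size=n),
})
df["y"] = 1.0 + 0.5 * df.x + 0.8 * df.gender + 0.6 * df.gender * df.x \
          + rng.normal(size=n)

full = smf.ols("y ~ x * gender", data=df).fit()
reduced = smf.ols("y ~ x", data=df).fit()   # gender removed entirely

# Likelihood-ratio test of all terms involving gender (main + interaction)
lr_stat = 2 * (full.llf - reduced.llf)
df_diff = full.df_model - reduced.df_model  # 2 extra terms
p = chi2.sf(lr_stat, df_diff)
print(p < 0.05)
```

The point of the sketch: the overall test compares models with and without every `gender` term at once, rather than inspecting the single main-effect coefficient.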
| null | CC BY-SA 4.0 | null | 2023-04-07T13:41:32.163 | 2023-04-07T13:41:32.163 | null | null | 28500 | null |
612252 | 1 | null | null | 0 | 14 | Suppose I have model $M$ generating data $Y=\beta_0+\beta_1X+\beta_2Z+\beta_3W$ with all $\beta$'s known. Instead of using model $M$, I used misspecified models $M':Y=\beta'_0+\beta'_1X+\beta'_2Z$, $M'':Y=\beta'_0+\beta'_1X+\beta'_3W$ and went on testing hypothesis $H_0:\beta'_1=0$ vs $H_1:\beta'_1\neq 0$ with level $\alpha$ for $M'$ and $M''$.
$Q1:$ Should this hypothesis testing inversion give a coverage of $1-\alpha$ in general? The inversion of testing to get confidence interval requires model specification correctness assumption. If the model is correctly specified, then $coverage+level$ would be 1. If not, I could not see why it should even be the case. Level $\alpha$ is always fixed number, but coverage could be $0$ due to bias.
$Q2:$ Should I even compare efficiency between $M'$ and $M''$ for estimating $\beta'_1$? It could be possible that one of the models having efficiency$>1$. This does not fit into unbiased estimator's Cramer-Rao bound context. What does it mean to even compare $M'$ and $M''$'s $\beta'_1$'s efficiency or confidence interval width here?
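As a purely numerical illustration of the concern raised in Q1 (all values below are arbitrary choices, not from the question), a small Monte Carlo run shows that when the omitted $W$ is correlated with $X$, the nominal 95% interval for the slope on $X$ from the misspecified model can fail to cover the true $\beta_1$ almost always:

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps, b1, b3 = 200, 500, 1.0, 1.0
covered = 0
for _ in range(reps):
    x = rng.normal(size=n)
    w = 0.8 * x + 0.6 * rng.normal(size=n)   # omitted variable, correlated with x
    y = b1 * x + b3 * w + rng.normal(size=n)
    # Fit the misspecified model y ~ x (W omitted), classical OLS intervals
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    lo, hi = beta[1] - 1.96 * se, beta[1] + 1.96 * se
    covered += (lo <= b1 <= hi)
coverage = covered / reps
print(coverage)
```

Here the omitted-variable bias is roughly $b_3 \cdot 0.8$, many standard errors wide, so the interval essentially never covers $\beta_1$ even though the test's level stays fixed, which is exactly the coverage/level asymmetry the question describes.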
| misspecified models coverage and efficiency | CC BY-SA 4.0 | null | 2023-04-07T13:56:49.670 | 2023-04-07T13:56:49.670 | null | null | 79469 | [
"hypothesis-testing",
"model",
"efficiency",
"misspecification",
"coverage-probability"
] |
612253 | 2 | null | 611946 | 0 | null | In a model with interactions, you cannot study the main effects separately from the interactions. So instead of the two questions: (a) How the two main effects (Field & Distance) affect abundance? (b) How the Field × Distance interaction affects abundance, you can study (a) differences in abundance between fields at a given distance, and (b) differences in abundance between distances in each field type.
This means that rather than looking at the estimated model parameters (the coefficients you get with the `summary` function in R), it's more constructive to compare estimated marginal means, ie. E{Abundance | Field, Distance}.
It helps to illustrate this suggestion with an example. So I'll simulate data under the experimental setup described in the question. Here are the parameters I'm going to use. (I have no idea whether the chosen parameter values are at all realistic.) The rate is the average abundance for a given Field × Distance combination, ie. E{Abundance | Field, Distance}. Note that Field and Distance interact for Field = "grassland" and "production" but not for Field = "not sown".
```
params <- tribble(
~Field_Type, ~distance, ~rate,
"not sown", "50m", 0.2,
"not sown", "75m", 0.2,
"not sown", "150m", 0.2,
"not sown", "200m", 0.2,
"grassland", "50m", 1,
"grassland", "75m", 1,
"grassland", "150m", 1,
"grassland", "200m", 3,
"production", "50m", 1,
"production", "75m", 2,
"production", "150m", 2,
"production", "200m", 4
)
```
The mock dataset is generated from a Poisson model with known parameters, so I don't need to check that the Poisson model is appropriate for the data. In their analysis, the OP should verify that the Poisson model is a reasonable fit to the actual data; this is an important and non-trivial step.
Next we fit the Poisson generalized linear mixed model (GLMM). I include a random sector effect, `(1|LS)`, but not a random sampling round effect. I omit `(1|sampling.round)` because the description "the three different time points when the data was collected" suggests to me that data was collected at three different times but not necessarily at exactly the same three times for all combinations of Field × Distance. Also, it simplifies the simulation a bit. In any case, the structure of the random effects makes no difference for how we analyze & compare the fixed effects in the model.
```
model1 <- glmmTMB(
Abundace ~ Field_Type + distance + Field_Type * distance + (1 | LS),
family = poisson(link = "log"),
data = OWL6
)
```
Once we fit the model, we can easily get the estimated model coefficients. It's not obvious how these coefficients are related to the rates E{Y | Field, Distance}. Interpreting the coefficients is challenging because (a) the coefficients depend on the parametrization: "not sown" and "50m" are the reference Field & Distance levels and don't appear in the summary table; (b) there are interactions, so we cannot vary a main effect while "keeping all other predictors fixed"; and (c) since this is a generalized model with a $\log$ link, the coefficients are on the log scale, $\log\text{rate}$, not on the original abundance scale.
```
summary(model1)
#> Estimate Std. Error z value Pr(>|z|)
#> (Intercept) -1.4253 0.2776 -5.134 2.84e-07 ***
#> Field_Typegrassland 1.4053 0.3095 4.541 5.61e-06 ***
#> Field_Typeproduction 1.4424 0.3084 4.677 2.91e-06 ***
#> distance75m -0.3677 0.4336 -0.848 0.396429
#> distance150m 0.2076 0.3734 0.556 0.578151
#> distance200m -0.1671 0.4097 -0.408 0.683440
#> Field_Typegrassland:distance75m 0.5083 0.4725 1.076 0.282068
#> Field_Typeproduction:distance75m 1.0966 0.4637 2.365 0.018028 *
#> Field_Typegrassland:distance150m -0.2861 0.4227 -0.677 0.498520
#> Field_Typeproduction:distance150m 0.5808 0.4073 1.426 0.153833
#> Field_Typegrassland:distance200m 1.2337 0.4395 2.807 0.005003 **
#> Field_Typeproduction:distance200m 1.5396 0.4366 3.526 0.000421 ***
```
We can avoid all these challenges by interpreting the estimated marginal means instead of the model coefficients. That is, interpret the estimated rates E{Y | Field, Distance} instead of the estimated coefficients $\widehat{\beta}$s. I'll use the [emmeans](https://cran.r-project.org/web/packages/emmeans/index.html) package to do the calculations.
First let's look at the estimated rates:
```
emm <- emmeans(model1, c("Field_Type", "distance"), type = "response")
emm
#> Field_Type distance rate SE df asymp.LCL asymp.UCL
#> not sown 50m 0.240 0.0668 Inf 0.1395 0.414
#> grassland 50m 0.980 0.1352 Inf 0.7481 1.285
#> production 50m 1.017 0.1377 Inf 0.7801 1.326
#> not sown 75m 0.166 0.0555 Inf 0.0866 0.320
#> grassland 75m 1.128 0.1451 Inf 0.8768 1.452
#> production 75m 2.108 0.1992 Inf 1.7520 2.537
#> not sown 150m 0.296 0.0741 Inf 0.1812 0.483
#> grassland 150m 0.906 0.1300 Inf 0.6842 1.200
#> production 150m 2.238 0.2053 Inf 1.8696 2.679
#> not sown 200m 0.203 0.0614 Inf 0.1126 0.368
#> grassland 200m 2.848 0.2322 Inf 2.4276 3.342
#> production 200m 4.013 0.2770 Inf 3.5057 4.595
#>
#> Confidence level used: 0.95
#> Intervals are back-transformed from the log scale
```
Since this is a simulation, we know the true rates, so let's plot the true rates (indicated by ×'s) alongside the estimated rates.

In practice, we don't know the true rates. Instead we can compare rates between field types for each distances, and between distances for each field type. That's a lot of pairwise comparisons because there are 3 field types and 4 distances.
```
contrast(emm, "pairwise", by = "distance", adjust = "mvt")
#> distance = 50m:
#> contrast ratio SE df null z.ratio p.value
#> not sown / grassland 0.2453 0.0759 Inf 1 -4.541 <.0001
#> not sown / production 0.2364 0.0729 Inf 1 -4.677 <.0001
#> grassland / production 0.9636 0.1855 Inf 1 -0.192 0.9793
#>
#> distance = 75m:
#> contrast ratio SE df null z.ratio p.value
#> not sown / grassland 0.1475 0.0527 Inf 1 -5.359 <.0001
#> not sown / production 0.0789 0.0273 Inf 1 -7.333 <.0001
#> grassland / production 0.5351 0.0849 Inf 1 -3.942 0.0002
#>
#> distance = 150m:
#> contrast ratio SE df null z.ratio p.value
#> not sown / grassland 0.3265 0.0940 Inf 1 -3.887 0.0003
#> not sown / production 0.1322 0.0352 Inf 1 -7.606 <.0001
#> grassland / production 0.4050 0.0686 Inf 1 -5.339 <.0001
#>
#> distance = 200m:
#> contrast ratio SE df null z.ratio p.value
#> not sown / grassland 0.0714 0.0223 Inf 1 -8.456 <.0001
#> not sown / production 0.0507 0.0157 Inf 1 -9.649 <.0001
#> grassland / production 0.7097 0.0748 Inf 1 -3.255 0.0028
#>
#> P value adjustment: mvt method for 3 tests
#> Tests are performed on the log scale
```
```
contrast(emm, "revpairwise", by = "Field_Type", adjust = "mvt")
#> Field_Type = not sown:
#> contrast ratio SE df null z.ratio p.value
#> 75m / 50m 0.692 0.300 Inf 1 -0.848 0.8307
#> 150m / 50m 1.231 0.460 Inf 1 0.556 0.9446
#> 150m / 75m 1.778 0.741 Inf 1 1.381 0.5100
#> 200m / 50m 0.846 0.347 Inf 1 -0.408 0.9770
#> 200m / 75m 1.222 0.549 Inf 1 0.446 0.9702
#> 200m / 150m 0.688 0.269 Inf 1 -0.957 0.7731
#>
#> Field_Type = grassland:
#> contrast ratio SE df null z.ratio p.value
#> 75m / 50m 1.151 0.216 Inf 1 0.749 0.8749
#> 150m / 50m 0.925 0.183 Inf 1 -0.396 0.9785
#> 150m / 75m 0.803 0.154 Inf 1 -1.142 0.6587
#> 200m / 50m 2.906 0.463 Inf 1 6.698 <.0001
#> 200m / 75m 2.525 0.382 Inf 1 6.121 <.0001
#> 200m / 150m 3.143 0.515 Inf 1 6.982 <.0001
#>
#> Field_Type = production:
#> contrast ratio SE df null z.ratio p.value
#> 75m / 50m 2.073 0.340 Inf 1 4.440 <.0001
#> 150m / 50m 2.200 0.358 Inf 1 4.848 <.0001
#> 150m / 75m 1.061 0.139 Inf 1 0.457 0.9676
#> 200m / 50m 3.945 0.596 Inf 1 9.092 <.0001
#> 200m / 75m 1.904 0.220 Inf 1 5.565 <.0001
#> 200m / 150m 1.793 0.203 Inf 1 5.148 <.0001
#>
#> P value adjustment: mvt method for 6 tests
#> Tests are performed on the log scale
```
Finally, the distances are ordered (though the model ignores this fact), so we might not be interested in all possible pairwise distance comparisons. Here is how to look at "successive" distance pairs only; since we make fewer comparisons, we apply a smaller multiple-comparison adjustment. This is reasonable if we believe (a priori) that the relationship between abundance rate and distance is monotonic.
```
contrast(emm,
method = list(
"75m - 50m" = c(-1, 1, 0, 0),
"150m - 75m" = c(0, -1, 1, 0),
"200m - 150m" = c(0, 0, -1, 1)
),
by = "Field_Type",
adjust = "mvt"
)
#> Field_Type = not sown:
#> contrast ratio SE df null z.ratio p.value
#> 75m / 50m 0.692 0.300 Inf 1 -0.848 0.7341
#> 150m / 75m 1.778 0.741 Inf 1 1.381 0.3805
#> 200m / 150m 0.688 0.269 Inf 1 -0.957 0.6607
#>
#> Field_Type = grassland:
#> contrast ratio SE df null z.ratio p.value
#> 75m / 50m 1.151 0.216 Inf 1 0.749 0.7859
#> 150m / 75m 0.803 0.154 Inf 1 -1.142 0.5217
#> 200m / 150m 3.143 0.515 Inf 1 6.982 <.0001
#>
#> Field_Type = production:
#> contrast ratio SE df null z.ratio p.value
#> 75m / 50m 2.073 0.340 Inf 1 4.440 <.0001
#> 150m / 75m 1.061 0.139 Inf 1 0.457 0.9429
#> 200m / 150m 1.793 0.203 Inf 1 5.148 <.0001
#>
#> P value adjustment: mvt method for 3 tests
#> Tests are performed on the log scale
```
---
R code to simulate a dataset and reproduce the analysis:
```
library("emmeans")
library("glmmTMB")
library("tidyverse")
set.seed(1234)
params <- tribble(
~Field_Type, ~distance, ~rate,
"not sown", "50m", 0.2,
"not sown", "75m", 0.2,
"not sown", "150m", 0.2,
"not sown", "200m", 0.2,
"grassland", "50m", 1,
"grassland", "75m", 1,
"grassland", "150m", 1,
"grassland", "200m", 3,
"production", "50m", 1,
"production", "75m", 2,
"production", "150m", 2,
"production", "200m", 4
)
params <- params %>%
mutate(
Field_Type = factor(Field_Type,
levels = c("not sown", "grassland", "production")
),
distance = factor(distance,
levels = c("50m", "75m", "150m", "200m")
)
)
params
OWL6 <-
crossing(
params,
LS = 1:18
) %>%
mutate(
re = rnorm(n(), sd = 0.1)
) %>%
crossing(
sampling.round = 1:3
) %>%
mutate(
Abundace = rpois(n(), exp(log(rate) + re))
)
model1 <- glmmTMB(
Abundace ~ Field_Type + distance + Field_Type * distance + (1 | LS),
family = poisson(link = "log"),
data = OWL6
)
summary(model1, "fixed")
emm <- emmeans(model1, c("Field_Type", "distance"), type = "response")
emm
contrast(emm, "pairwise", by = "distance", adjust = "mvt")
contrast(emm, "revpairwise", by = "Field_Type", adjust = "mvt")
contrast(emm,
method = list(
"75m - 50m" = c(-1, 1, 0, 0),
"150m - 75m" = c(0, -1, 1, 0),
"200m - 150m" = c(0, 0, -1, 1)
),
by = "Field_Type",
adjust = "mvt"
)
as_tibble(emm) %>%
ggplot(
aes(distance, rate,
group = Field_Type,
color = Field_Type
)
) +
geom_point(
aes(distance, rate, group = Field_Type),
position = position_nudge(x = 0.1),
data = params,
inherit.aes = FALSE,
shape = 4
) +
geom_pointrange(
aes(
ymin = asymp.LCL,
ymax = asymp.UCL
),
size = 0.1
) +
facet_grid(
~Field_Type
) +
theme(
legend.position = "none"
)
```
| null | CC BY-SA 4.0 | null | 2023-04-07T14:06:13.917 | 2023-04-07T14:06:13.917 | null | null | 237901 | null |
612254 | 2 | null | 463870 | 0 | null | I think it works as follows: you set eval_set equal to the set on which you are going to evaluate the model at that moment. That means that if you are testing whether one XGBoost model is better than another, eval_set would be the validation set, i.e. the set you split off for the hyperparameter search.
In nested cross-validation, for example, the inner fold would play the role of the "internal test set", and when you evaluate the performance of that model, you would use the "external test set" as eval_set.
It's like giving XGBoost a hint: with some unseen data available, it has a signal for stopping before it trains too much on the training data, whatever that unseen data happens to be, validation or test data.
For a hyperparameter search, it's as if you are telling all the different models generated by the search: "okay, you all keep training until you stop improving on this validation set."
| null | CC BY-SA 4.0 | null | 2023-04-07T14:08:02.723 | 2023-04-07T14:08:02.723 | null | null | 378073 | null |
612255 | 1 | null | null | 0 | 46 | Say i have a dataset with groups that i want to use for a Regression problem that looks like the following where feature1 is the group column:
```
idx: [0,1,2,3,4,5]
feature1: [1,1,2,2,3,3]
feature2: [6,7,8,9,4,5]
target: [9,8,4,3,2,6]
```
How do I split this properly without any data leakage? I've read that you need to split the data by groups such that the groups in train do not appear in test. But doesn't that mean that if I use the group feature as a categorical feature, then that feature in the test set will be completely unseen? How do I tackle this problem? Can I split the data randomly?
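For reference, the group-wise split described above can be sketched with scikit-learn's GroupShuffleSplit, using the toy arrays from the question; this just illustrates the mechanics of a group-disjoint split, not a recommendation for either strategy:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

feature1 = np.array([1, 1, 2, 2, 3, 3])        # the group column
feature2 = np.array([6, 7, 8, 9, 4, 5])
target = np.array([9, 8, 4, 3, 2, 6])

X = np.column_stack([feature1, feature2])
# Hold out exactly one group for the test side
gss = GroupShuffleSplit(n_splits=1, test_size=1, random_state=0)
train_idx, test_idx = next(gss.split(X, target, groups=feature1))

# No group appears on both sides of the split:
print(set(feature1[train_idx]) & set(feature1[test_idx]))  # set()
```

The empty intersection is guaranteed by construction: GroupShuffleSplit shuffles whole groups, so the group feature in the test set is, as the question suspects, entirely unseen during training.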
| How to deal with groups when splitting a data into train and test? | CC BY-SA 4.0 | null | 2023-04-07T14:09:50.223 | 2023-04-07T14:16:46.223 | 2023-04-07T14:16:46.223 | 385179 | 385179 | [
"regression",
"machine-learning",
"categorical-data",
"dataset",
"train-test-split"
] |
612256 | 1 | 612292 | null | 4 | 41 | I am struggling to understand a certain inequality based on the regression $L_2$-error of a regression function estimate.
The setting is that of random forests for regression.
- Let $\Theta = \{ \Theta_{1}, \dots, \Theta_{M} \}$ be the (iid) random variables that capture the randomness that goes into constructing the individual trees.
- Let $m(x) = \mathbb{E}\left[ Y ~|~ X=x \right]$ be the (true, unknown) regression function that we want to estimate with the random forest.
- Assume that trees in the forest are fully grown, i.e. each cell in a tree contains exactly one of the points subsampled/bootstrapped for construction of the tree. Consequently, we can write the regression function estimate of the forest as $m_{n}(X) = \sum_{i=1}^n W_{ni}(X)Y_{i}$
where $W_{ni}(x) = \mathbb{E}_{\Theta}\left[\mathbb{1}_{x_{i}\in A_{n}(x, \Theta_{j})}\right]$
and $A_n(x, \Theta)$ is the cell of $x$ in a tree generated via $\Theta$.
Now, the inequality in question is the following. It's from the proof of Theorem 2 in [Scornet2015](https://projecteuclid.org/journals/annals-of-statistics/volume-43/issue-4/Consistency-of-random-forests/10.1214/15-AOS1321.full).
$$
\mathbb{E}\left[m_{n}(X) - m(X) \right]^2 \leq
2 \mathbb{E}\left[ \sum_{i=1}^n W_{ni}(X)(Y_{i}-m(X_{i})) \right]^2 +
2 \mathbb{E}\left[ \sum_{i=1}^n W_{ni}(X)(m(X_{i})-m(X)) \right]^2
$$
My first question is: Why is that? I have tried applying the basic textbook error decompositions but am not getting anywhere.
My second question is: In the publication, the authors refer to the first term as the "estimation error" and the second as the "approximation error". This does not quite fit with my current understanding of these terms:
- Estimation error: Error of selected function as compared to best possible choice from hypothesis class
- Approximation error: Error of best possible from hypothesis class as compared to true regression function
Getting an intuition on the second question is probably more important to me.
| Bound for $L_2$-error of random forest estimate | CC BY-SA 4.0 | null | 2023-04-07T14:15:39.307 | 2023-04-10T17:09:33.310 | null | null | 178468 | [
"random-forest",
"expected-value",
"error"
] |
612257 | 2 | null | 612174 | 0 | null | If proportional hazards (PH) doesn't hold, then neither a Weibull nor its special case of an exponential survival model will work directly, because each implicitly assumes PH (at least with the way that covariates are usually included in a Weibull model). See [this page](https://stats.stackexchange.com/q/492263/28500).
If there isn't an obvious choice of a distribution based on your understanding of the subject matter, then the usual solution is to try several approaches until you find one that adequately fits the data. You should, however, explain your evaluations of the various approaches to your readers, as use of the outcomes to choose the structure of a model violates the assumptions for things like p-values.
Chapter 18 of Frank Harrell's [Regression Modeling Strategies](https://hbiostat.org/rmsc/parsurv.html) covers parametric survival modeling, showing ways to evaluate the quality of the fit with different choices of distributions. The R [flexsurv package](https://cran.r-project.org/package=flexsurv) provides for parametric modeling under a wide set of survival distributions, including user-defined distributions.
You might instead consider a Cox model with time-varying coefficients, as explained in the [vignette on time dependence](https://cran.r-project.org/web/packages/survival/vignettes/timedep.pdf) in the R [survival package](https://cran.r-project.org/package=survival). That relaxes the PH assumption, allowing the hazard ratios to change as a defined function of time. An additive instead of multiplicative hazard model might also be helpful, as implemented for example by the `aareg()` function in that package.
| null | CC BY-SA 4.0 | null | 2023-04-07T14:26:32.100 | 2023-04-07T14:26:32.100 | null | null | 28500 | null |
612259 | 2 | null | 493382 | 1 | null | None of them will give a meaningful result.
Your attributes are not comparable. One decibel is not one eye-color difference.
Results depend entirely on how you preprocess the data, and you can get pretty much any result you want (or did not want...).
Try to phrase your problem as an equation first. Do not try to solve it by trying out algorithms without a plan. You will see that your problem is not well specified, you do not know what you are solving.
| null | CC BY-SA 4.0 | null | 2023-04-07T14:33:26.357 | 2023-04-07T14:33:26.357 | null | null | 18215 | null |
612260 | 2 | null | 612178 | 1 | null | Yes, those in the `turquoise_low` group survive longer, as the plot shows and as you interpret the plot. I think that the confusion is from the way that the log-rank test performed by `survdiff()` is explained.
The "expected" number of deaths for the log-rank test in each `turquoise` group in your quotation is the number expected if there were no survival difference between the groups. That's the null hypothesis for the test. The calculation is a bit tricky in detail, outlined [here](https://stats.stackexchange.com/a/577140/28500).
So if there are more deaths in one group and fewer in the other than "expected" based on the null hypothesis, that's evidence for a survival difference between the groups. That's what these data show.
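As a toy numeric sketch of that calculation (hypothetical counts, not your data): at each event time, a group's "expected" deaths under the null are the total deaths at that time multiplied by the group's share of the risk set.

```python
# Hypothetical event times, each as (deaths_A, deaths_B, at_risk_A, at_risk_B)
events = [(1, 0, 10, 10), (1, 1, 9, 10), (0, 1, 8, 9)]

exp_A = exp_B = obs_A = obs_B = 0.0
for d_a, d_b, n_a, n_b in events:
    d, n = d_a + d_b, n_a + n_b        # total deaths and total at risk
    exp_A += d * n_a / n               # expected deaths under the null
    exp_B += d * n_b / n
    obs_A += d_a
    obs_B += d_b

print(obs_A, round(exp_A, 3))  # 2.0 1.918: more deaths in A than "expected"
```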
| null | CC BY-SA 4.0 | null | 2023-04-07T14:39:57.580 | 2023-04-07T15:32:19.223 | 2023-04-07T15:32:19.223 | 28500 | 28500 | null |
612261 | 2 | null | 611805 | 3 | null | Probably none of them is "correct", because of your data.
- there is no elbow; this is pretty much the expected behavior on random data.
- all Silhouette scores for k > 2 are very low, so none of these results is good
- C-H seems to max out at 6; why do you choose 4?
In particular when the methods disagree and do not give clear indications, this usually means that simply none of the results is good!
See my preprint:
>
Schubert, Erich. "Stop using the elbow criterion for k-means and how to choose the number of clusters instead." arXiv preprint arXiv:2212.12189 (2022). https://arxiv.org/abs/2212.12189
and pay particular attention to the section titled "The true challenges of k-means" and Figure 4 (because the earlier results are on easy data sets). You do not need to choose k if k-means cannot solve your problem - have you considered that your data does not contain k-means-type clusters? Be open to the answer being "k-means cannot cluster this data set well".
As you are using PCA, beware that PCA may even destroy some signal. Plot the data. If you cannot identify clusters in your plot, k-means probably cannot, either.
| null | CC BY-SA 4.0 | null | 2023-04-07T14:47:07.437 | 2023-04-10T18:44:42.353 | 2023-04-10T18:44:42.353 | 18215 | 18215 | null |
612262 | 2 | null | 612197 | 3 | null | The problem with extreme weights is that they yield high variability in the weights which decreases the effective sample size. You don't have to check for extreme weights; you just need to check for an unacceptably low effective sample size. In this case, the ESS for the control group decreased by quite a lot. You might wonder why that is. One answer could be a few extreme weights that dramatically increase the variance of the weights. Looking at the summary of weights and their histograms, it seems this could be the case.
The output of `summary(W.out)` displays the ESS and information on the largest weights. You can see that the largest weights are between 3 and 4, but their values are quite similar. These values do not seem too extreme, though they are clearly quite a bit larger than the average control group weight of ~.44.
You can use `plot(summary(W.out))` to directly plot a histogram of the weights. The output looks like the following:
[](https://i.stack.imgur.com/9bldc.jpg)
It's pretty clear that most control weights are quite small and there are a few weights that are relatively large, which is likely causing the decrease in ESS. There is no individual unit with an extreme weight, but rather a cluster of units with unusually high weights. You can see if trimming (i.e., winsorizing) the weights makes a difference using `trim()`; I find that trimming the weights to anywhere between the 85th and 95th percentile improves the ESS without dramatically worsening balance.
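For intuition, the ESS here can be computed directly from the weights with the standard Kish formula (I believe this is what `WeightIt` reports, but treat the exact implementation as an assumption); the numbers below are illustrative, not from your data.

```python
import numpy as np

def ess(w):
    """Kish effective sample size: (sum of weights)^2 / sum of squared weights."""
    w = np.asarray(w, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()

# With equal weights, the ESS equals the number of units:
print(ess(np.ones(100)))  # 100.0

# A cluster of relatively large weights drags the ESS well below n,
# even though no single weight is extreme:
w = np.concatenate([np.full(90, 0.44), np.full(10, 3.5)])
print(round(ess(w), 1))
```

This is why trimming the top percentiles of the weights can recover a good deal of ESS at a modest cost in balance.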
---
I appreciate you wanting to practice your coding skills to generate the plots, but `cobalt` and `WeightIt` provide utilities for making those plots. Instead of using `weights::wtd.hist()`, you can just use `plot(summary(W.out))` as I mentioned above. Also, `hist()` would have sufficed; a histogram of weights is not the same thing as a weighted histogram, which is what `weights::wtd.hist()` displays. You didn't even use the `weights` argument, which is the only way that differs from `hist()`. I'm not sure why your histogram has values greater than 20; are you sure you are using the right code to generate that plot? To plot the distribution of propensity scores, just use `cobalt::bal.plot()`, e.g., `bal.plot(W.out, "prop.score", which = "both")`.
| null | CC BY-SA 4.0 | null | 2023-04-07T14:50:42.027 | 2023-04-07T14:50:42.027 | null | null | 116195 | null |
612263 | 1 | null | null | 0 | 21 | Let’s assume I am simulating data under a given model, and using MCMC with said data to estimate a (known) model parameter. Let’s assume I do this thousands of times. The results I obtain show that the median of the posterior distribution for this parameter is always smaller than the true value, yet the 95% highest posterior density interval generally contains the true value.
Is the estimator biased? A biased estimator is one whose expected value systematically differs from the true value. If the posterior median is taken as the point estimate, then the estimator appears biased, as I always obtain smaller values than the true one (never larger). However, across ~75% of replicates, the true value still falls within the 95% HPD interval.
Any thoughts?
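To make the distinction concrete, here is a toy sketch (purely hypothetical numbers, not my actual model) of an estimator that is biased downward while its interval still covers the truth most of the time:

```python
import random
import statistics

random.seed(0)
true_theta = 1.0
reps = 2000
biases, covered = [], 0
for _ in range(reps):
    # Stand-in for "posterior median from one simulated data set":
    # systematically 0.1 too small, plus noise.
    est = true_theta - 0.1 + random.gauss(0, 0.2)
    lo, hi = est - 0.45, est + 0.45          # stand-in interval
    biases.append(est - true_theta)
    covered += (lo <= true_theta <= hi)

print(statistics.mean(biases))   # close to -0.1: biased downward
print(covered / reps)            # yet the interval usually contains the truth
```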
| Is an MCMC estimator biased if the confidence interval contains the true parameter value? | CC BY-SA 4.0 | null | 2023-04-07T14:59:48.017 | 2023-04-07T14:59:48.017 | null | null | 385181 | [
"bayesian",
"mathematical-statistics",
"statistical-significance",
"descriptive-statistics",
"markov-chain-montecarlo"
] |
612264 | 1 | null | null | 0 | 27 | I'm trying to derive the influence function of the estimand $\Psi$
$$\Psi(P) = P(Y > y | X = x)$$
Following tutorials for deriving the influence function of the average treatment effect [here](https://arxiv.org/pdf/1903.01706.pdf). Has anyone seen this derived anywhere?
| Influence function of conditional quantile | CC BY-SA 4.0 | null | 2023-04-07T15:01:44.373 | 2023-04-07T15:01:44.373 | null | null | 385180 | [
"functional-data-analysis",
"influence-function"
] |
612265 | 2 | null | 611776 | 0 | null | The total effect is the sum of the direct and indirect paths. There is no conflict, as this is indeed the case in your example. You have evidence to claim the indirect effect is different from 0. You can't say much about the total or direct effects because they are nonsignificant. Nonsignificant doesn't mean equal to 0. It means that if the effect were equal to 0, you would often see results like the ones you saw. Your results are compatible with the following patterns:
- A positive indirect effect, a negative direct effect, and 0 total effect, indicating two compensatory paths (e.g., in a system designed for stability, where one path offsets the other to minimize the total effect of the main predictor)
- A positive indirect effect, a 0 direct effect, and a positive total effect, indicating full mediation by the mediator
- A positive indirect effect, a positive direct effect, and a positive total effect, indicating partial mediation by the mediator
There may be other combinations as well. The point is that you cannot distinguish among them from your data. You may say, "my direct effect was negative, so how does the third option make sense?" The direct effect was negative in your sample, but the confidence interval for the direct effect (which you did not display) contains positive values, meaning the true direct effect could be positive. Again, you simply don't have enough information to determine that, but that doesn't mean there is any conflict in your results.
| null | CC BY-SA 4.0 | null | 2023-04-07T15:01:54.673 | 2023-04-07T15:01:54.673 | null | null | 116195 | null |
612266 | 1 | 612271 | null | 1 | 21 | I am conducting a meta-analysis with a skewed distribution. To address this issue, I transformed the "marker" data into a log-scale, "ln marker".
I obtained the (geometric) mean and standard deviation of ln marker from one article.
My goal is to find the 95% CI of ln marker. To do this, I transformed the geometric mean and geometric standard deviation back into log-scale, so they became the mean and standard deviation of ln marker again. I then used the formula "mean +/- 1.96 SD/sqrt(N)" to calculate the 95% CI of ln marker, assuming the normal distribution by log transformation.
However, I am unsure if this is the right way to get the 95% CI of ln marker with geometric mean and SD.
| Getting (geometric) 95% CI from geometric mean and geometric SD (after log-transformation) | CC BY-SA 4.0 | null | 2023-04-07T15:15:01.240 | 2023-04-07T15:41:28.430 | 2023-04-07T15:40:45.730 | 362671 | 385139 | [
"confidence-interval",
"mean",
"lognormal-distribution",
"logarithm"
] |
612268 | 1 | null | null | 0 | 76 | I am working with a bunch of different GAMs, with land cover and remote sensing predictor variables derived at different scales, and comparing models within scale. I am not interested in spatial and temporal autocorrelation as an explanatory variable in my model, but it does need to be accounted for (checked with Moran's I). I have done that using:
```
te(long, lat, year, d = c(2, 1), bs = c("ds", "tp"))
```
For the models within scale I am setting an upper bound for k of my spatial temporal variable, checking that "null" model and then fixing it for subsequent models that include landscape and remote sensing variables so they can be compared to my null model. At each scale I have a different size data set due to some restriction parameters that reduce the data size at smaller scales.
My question is whether the k values for the spatial and temporal variable should be set across scales or whether it's acceptable to set them by assessing the null model at each scale?
When I set it at each scale I'm worried how much it shifts the other variables (especially at the smallest scale that has the smallest data set). At 30 and 20km when given a flexible k max of (10,5) the null model suggests a spatial temporal edf of 34, ~k of c(7,5), but at 10 km it suggests an edf of ~14 pushing down my k to (5,3)- this makes some of my other predictor variables look very different at this scale than the others. I have a feeling that shifting the year k value across scales is bad and that the lat/lon k may be more flexible, but I can't find anything about this. The lat/long k can go to a minimum of 5, which would give 24 edf- c(5,5). Or is it best to just keep it at c(7,5) like the others?
I am not directly comparing (using AIC) models across scales, but I would like them to be roughly comparable.
| GAM setting k for spatial autocorrelation across scale | CC BY-SA 4.0 | null | 2023-04-07T15:39:05.717 | 2023-04-07T23:28:54.890 | 2023-04-07T23:28:54.890 | 354914 | 354914 | [
"generalized-additive-model",
"model-comparison",
"mgcv",
"spatio-temporal",
"basis-function"
] |
612269 | 1 | 612511 | null | 0 | 32 | In a [book](https://leanpub.com/biostatmethods) about Biostatistics, I found this example to calculate expected value:
>
Consider the following hypothetical example of a lung cancer study in which all patients start in phase 1, transition into phase 2, and die at the end of phase 2. Unfortunately, but inevitably, all people die. Biostatistics is often concerned with studying approaches that could prolong or improve life. We assume the length of phase 1 is random and is well modeled by an exponential distribution with mean of five years. Similarly, the length of phase 2 is random and can be modeled by a Gamma distribution with parameters α = 5 and β = 4. Suppose that a new drug that can be administered at the beginning of phase 1 increases 3 times the length of phase 1 and 1.5 times the length of phase 2. Consider a person who today is healthy, is diagnosed with phase 1 lung cancer in 2 years, and is immediately administered the new treatment. We would like to calculate the expected value of the survival time for this person. Denote by X the time from entering in phase 1 to entering phase 2 and by Y the time from entering phase 2 to death without taking treatment. Thus, the total survival time is 2 + 3X + 1.5Y and the expected total survival time, in years, is
E(2 + 3X + 1.5Y) = 2 + 3E(X) + 1.5E(Y) = 2 + 3×3 + 1.5×5/4 = 12.875.
What I don't understand is why, in the last equation, E[X] is set to 3 when in my opinion it should be 5, because the length of phase 1 (for patients not taking the drug) has an exponential distribution with a mean of five years.
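For reference, here is a quick Monte Carlo check of my reading, i.e. E[X] = 5 and, taking β = 4 as a rate parameter, E[Y] = 5/4 (both are assumptions, since the book does not spell out its parameterization):

```python
import random

random.seed(1)
N = 100_000
total = 0.0
for _ in range(N):
    x = random.expovariate(1 / 5)        # Exponential with mean 5
    y = random.gammavariate(5, 1 / 4)    # Gamma(shape=5, scale=1/4), mean 5/4
    total += 2 + 3 * x + 1.5 * y

print(total / N)  # close to 2 + 3*5 + 1.5*(5/4) = 18.875, not 12.875
```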
I'm also a bit confused with this sentence:
>
Consider a person who today is healthy, is diagnosed with phase 1 lung cancer in 2 years, and is immediately administered the new treatment
Does that mean that the person will enter phase 1 in two years, while today they are healthy? It sounds strange to me that medical advancement could predict, for someone who is healthy today, a future disease two years from now.
| Calculate expected survival time for two stage survival | CC BY-SA 4.0 | null | 2023-04-07T15:39:51.560 | 2023-04-10T18:14:18.133 | 2023-04-10T16:35:37.907 | 383728 | 383728 | [
"survival",
"expected-value",
"biostatistics"
] |
612270 | 1 | 612635 | null | 1 | 67 | I understand that the x-intercept can be calculated using $y = mx + b$ for a linear model. I am unsure if this is statistically appropriate for a mixed model with count data, given that counts cannot be negative and there are random effects to consider. I have seen examples of x-intercept calculations for count data with simple linear regressions, but I'm unsure if this method can be extended to mixed models.
Here is my model:
```
mod_6 <-
glmmTMB(total_count ~ mean_temp + (1|month) + (1|spread_event),
family = nbinom1, data = dat_nc_ncb)
summary(mod_6)
```
Here is the output.
```
Family: nbinom1 ( log )
Formula: total_count ~ mean_ws + (1 | month) + (1 | spread_event)
Data: dat_nc_ncb
AIC BIC logLik deviance df.resid
1399.1 1415.6 -694.5 1389.1 194
Random effects:
Conditional model:
Groups Name Variance Std.Dev.
month (Intercept) 0.3671 0.6059
spread_event (Intercept) 0.3279 0.5726
Number of obs: 199, groups: month, 10; spread_event, 26
Dispersion parameter for nbinom1 family (): 177
Conditional model:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 3.4928 0.3515 9.936 <2e-16 ***
mean_ws -1.1099 0.5126 -2.165 0.0304 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
Is it statistically accurate if I extract the fixed effects coefficients using `coefficients <- fixef(mod_6)`, identify the coefficient for the intercept using `intercept <- coefficients[1]`, extract the slope using `slope <- coefficients[2]`, and finally compute the x-intercept using `x_intercept <- -intercept/slope`?
Or would it be more appropriate to use a simple `glm` with the `quasipoisson` family, and then calculate the x-intercept? That way, I won't have to worry about random effects.
## Details about the experiment
I left my potted plants out in the field for a week, took them back to the glasshouse, and counted the number of infected leaves per plant after two weeks. Plants become infected under ideal temperature conditions.
## Analysis goal
I need to find lower temperature thresholds. More details can be found in figures 1-4 [here](http://uspest.org/wea/Boxwood_blight_risk_model_summaryV21.pdf), but the basic idea is that we want to find the temperature at which no disease was observed (the `lower temperature threshold for disease`). Since the goal is to find thresholds, I am happy to let go of the random effects if this allows me to calculate the x-intercept for `mean_temp`.
| Is it possible to calculate x-intercept from a mixed model? | CC BY-SA 4.0 | null | 2023-04-07T15:41:23.257 | 2023-04-12T01:27:22.390 | 2023-04-11T22:32:25.783 | 346283 | 346283 | [
"regression",
"mixed-model",
"glmm",
"intercept",
"glmmtmb"
] |
612271 | 2 | null | 612266 | 1 | null | Yes, that approach seems reasonable. Two notes.
- Your equation for the CI assumes a large sample size. You really should use the critical value from the t distribution (which accounts for sample size) and not 1.96 (which is correct only for large samples).
- Your CI is in the log scale. Back-transform both confidence limits to get a confidence interval in the scale of the original data.
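Combining both notes, a small numeric sketch (hypothetical log-scale summary statistics; the t critical value for df = 24 is hard-coded from tables):

```python
import math

log_mean = 1.20    # hypothetical mean of ln(marker)
log_sd = 0.50      # hypothetical SD of ln(marker)
n = 25
t_crit = 2.064     # t quantile for 97.5%, df = n - 1 = 24 (vs. 1.96 for large n)

half_width = t_crit * log_sd / math.sqrt(n)
lo, hi = log_mean - half_width, log_mean + half_width

# Back-transform both limits: a CI for the geometric mean on the raw scale.
geo_mean = math.exp(log_mean)
ci = (math.exp(lo), math.exp(hi))
print(geo_mean, ci)
```

Note that the back-transformed interval is asymmetric around the geometric mean, which is expected.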
| null | CC BY-SA 4.0 | null | 2023-04-07T15:41:28.430 | 2023-04-07T15:41:28.430 | null | null | 25 | null |
612272 | 1 | null | null | 0 | 22 | From a data set, I found a sensitivity (.82) and a specificity (.88) for a diagnostic test, based on an n=257 sample. However, I wonder whether I can generalize these numbers.
I thought this was very much the same question as inferring a success rate in a binary process (k successes in n trials) using a binomial distribution. But I doubt that running two Bayesian parameter estimations (for sensitivity and specificity respectively) is really the best way to go. Does anyone have an idea how best to approach this issue, or is it maybe just unnecessary?
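For concreteness, this is the kind of per-parameter estimation I had in mind: with a uniform Beta(1, 1) prior and k successes out of n trials, the posterior is Beta(k + 1, n − k + 1). The split of the 257 subjects into diseased/healthy below is made up for illustration; the real counts would come from the study's 2×2 table.

```python
import math

def beta_posterior_summary(k, n):
    """Posterior mean and SD for a proportion under a uniform Beta(1,1) prior,
    i.e. posterior Beta(k + 1, n - k + 1)."""
    a, b = k + 1, n - k + 1
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, math.sqrt(var)

# Hypothetical split: 100 truly diseased, 157 truly healthy.
sens_mean, sens_sd = beta_posterior_summary(k=82, n=100)    # 82/100 = .82
spec_mean, spec_sd = beta_posterior_summary(k=138, n=157)   # 138/157 ≈ .88
print(round(sens_mean, 3), round(sens_sd, 3))
print(round(spec_mean, 3), round(spec_sd, 3))
```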
| Bayesian statistics: Inferring a true value for test sensitivity and specificity | CC BY-SA 4.0 | null | 2023-04-07T15:45:42.103 | 2023-04-07T15:45:42.103 | null | null | 385184 | [
"bayesian",
"accuracy",
"diagnostic"
] |
612273 | 1 | null | null | 2 | 49 | Does $P(B|A) = 0$ with $P(A) \neq 0$ mean $A \cap B = \varnothing$?
I think I already have an answer, but I'm not sure it's correct.
I would say no, because we can consider a variable $X \sim U(0,1)$, described by a continuous uniform distribution, where, for example,
$A$ is "$0.2 \leq X \leq 0.9$" and $B$ is "$X = 0.5$".
This way we have
\begin{align}
P(B) = 0, P(A) > 0, A \cap B = B \neq \varnothing,
\end{align}
but $P(B | A) = P(A \cap B) / P(A) = P(B) / P(A) = 0 / P(A) = 0$.
| Does conditional probability that equals zero imply the events are disjoint | CC BY-SA 4.0 | null | 2023-04-07T15:46:01.210 | 2023-04-07T20:50:53.450 | 2023-04-07T20:50:53.450 | 20519 | 385186 | [
"probability",
"distributions",
"conditional-probability",
"uniform-distribution"
] |
612274 | 1 | 612281 | null | 0 | 92 | I'm trying to prove that the 2nd order polynomial kernel, $K(x_i, x_j) = (x_i^Tx_j + 1)^2$ is a valid kernel which satisfies the following conditions:
- K is symmetric, that is, $K(x_i, x_j) = K(x_j, x_i)$.
- K is positive semi-definite, that is, $\forall v \space\space v^TKv \geq 0.$
We can actually prove that second-order polynomial kernel function is a valid kernel by deriving the corresponding transformation function $\phi(x) = [1, \sqrt{2}x_1, ..., \sqrt{2}x_d, x_1x_1, x_1x_2, ..., x_1x_d, x_2x_1, ...x_dx_d]^T$ where $d$ is the number of features (dimensionality). But I do want to prove that two conditions listed above holds for the given kernel function.
My attempts:
- Symmetry is rather straightforward:
$$(x_i^Tx_j + 1)^2 = x_i^Tx_jx_i^Tx_j + 2x_i^Tx_j + 1 = A \in \mathbb{R}$$
$$(x_j^Tx_i + 1)^2 = x_j^Tx_ix_j^Tx_i + 2x_j^Tx_i + 1 = B \in \mathbb{R}$$
It can be observed that $A^T = B$, and since they are scalars, $A = A^T = B \implies A = B$.
- For the second condition, my attempt is as follows:
$$v^TK = [\sum_{i=1}^{n}(x_i^Tx_1 + 1)^2 v_i \space\space ... \space\space \sum_{i=1}^{n}(x_i^Tx_n + 1)^2 v_i] \\
v^TKv = \sum_{j=1}^{n}\left(\sum_{i=1}^{n}(x_i^Tx_j + 1)^2 v_i\right) v_j$$$$
v^TKv = \sum_{j=1}^{n}\sum_{i=1}^{n}(x_i^Tx_j + 1)^2 v_i v_j$$
Now I proceed with expanding the term $(x_i^Tx_j + 1)^2$:
$$v^TKv = \sum_{j=1}^{n}\sum_{i=1}^{n}(x_i^Tx_jx_i^Tx_j + 2x_i^Tx_j + 1) v_i v_j $$$$ = \sum_{j=1}^{n}\sum_{i=1}^{n}x_i^Tx_jx_i^Tx_jv_i v_j + 2x_i^Tx_jv_i v_j + v_i v_j$$
After this point, I don't know how to proceed. I feel like I have to use double sum property:
$$\sum_{i=1}^{n}\sum_{j=1}^{n}a_ib_j = \sum_{i=1}^{n}a_i \cdot \sum_{i=1}^{n}b_i$$
But I can eliminate only the term with $v_iv_j$.
$$v^TKv = (\sum_{j=1}^{n}v_j\sum_{i=1}^{n}v_i) + 2(\sum_{i=1}^{n}\sum_{j=1}^{n}x_i^Tx_jv_iv_j) + \sum_{i=1}^{n}\sum_{j=1}^{n}x_i^Tx_jx_i^Tx_jv_i v_j$$
$$ =(\sum_{i=1}^{n}v_i)^2 +2(\sum_{i=1}^{n}\sum_{j=1}^{n}x_i^Tx_jv_iv_j) + \sum_{i=1}^{n}\sum_{j=1}^{n}x_i^Tx_jx_i^Tx_jv_i v_j$$
The first term is greater than or equal to zero, so it poses no problem. But for the rest, I cannot come up with any simplification.
I have two questions:
- How should I proceed further at this point?
- How can one prove that any polynomial kernel with degree $p$ is PSD using this approach?
Thank you for your time.
| Prove that 2nd order polynomial kernel is positive semi-definite | CC BY-SA 4.0 | null | 2023-04-07T15:55:49.313 | 2023-04-09T00:22:29.313 | 2023-04-07T16:03:45.010 | 385190 | 385190 | [
"machine-learning",
"self-study",
"kernel-trick",
"linear-algebra"
] |
612276 | 1 | null | null | 0 | 20 | I'm looking at 4 pairs of independent variables and one dependent variable. The correlation analyses revealed no significant correlation, although it was positive, between all independent variables and the dependent variable. Is it bad that my correlation coefficients were not significant?
The regression analyses revealed significant coefficients, but they were quite low, all between .100-.200. I calculated R squared, which gives me very low explained variance, from 7% to less than 1%. What does this mean? Is my analysis just not useful?
The independent variables were 4 different factors of 2 different marketing strategies, and the dependent variable was purchasing decisions. So I was thinking maybe the explained variance is low because purchasing decisions are influenced by many other things apart from those factors of the marketing strategies, or even marketing strategies in general, e.g. also influenced by price, product type/quality, etc. Not sure if this makes sense, though.
NOTE: My study aims to compare the 2 marketing strategies, so I'm not even sure that variance is that important in my situation. I am looking to see which marketing strategy is the most effective, so should I just focus on which one has the higher coefficients?
| correlation and regression - low variance | CC BY-SA 4.0 | null | 2023-04-07T16:13:16.077 | 2023-04-07T16:22:24.717 | 2023-04-07T16:22:24.717 | 362671 | 385191 | [
"regression",
"correlation",
"variance"
] |
612277 | 2 | null | 610263 | 0 | null | The package documentation did indeed have an explanation for how the plot was calculated: "Centered: A vector of quoted variable names that are to be mean-centered. If "all", all non-focal predictors are centered. You may instead pass a character vector of variables to center. User can also use "none" to base all predictions on variables set at 0. The response variable, pred, modx, and mod2 variables are never centered." (my emphasis)
[https://interactions.jacob-long.com/reference/cat_plot.html#ref-usage](https://interactions.jacob-long.com/reference/cat_plot.html#ref-usage)
| null | CC BY-SA 4.0 | null | 2023-04-07T16:16:23.290 | 2023-04-07T16:16:23.290 | null | null | 368313 | null |
612278 | 2 | null | 69235 | 1 | null | This is an old question, and the previous answers were very good, but I will try to answer it, to get a clearer picture. Maybe that can help someone.
Let's get the data and the contrast matrix:
```
hsb2 = read.table('https://stats.idre.ucla.edu/stat/data/hsb2.csv', header=T, sep=",")
hsb2$race.f = factor(hsb2$race, labels=c("Hispanic", "Asian", "African-Am", "Caucasian"))
mat = matrix(c(1/4, 1/4, 1/4, 1/4, 1, 0, -1, 0, -1/2, 1, 0, -1/2, -1/2, -1/2, 1/2, 1/2), ncol = 4)
mat
[,1] [,2] [,3] [,4]
[1,] 0.25 1 -0.5 -0.5
[2,] 0.25 0 1.0 -0.5
[3,] 0.25 -1 0.0 0.5
[4,] 0.25 0 -0.5 0.5
```
The actual contrasts that you want is:
```
C <- t(mat)
C
[,1] [,2] [,3] [,4]
[1,] 0.25 0.25 0.25 0.25
[2,] 1.00 0.00 -1.00 0.00
[3,] -0.50 1.00 0.00 -0.50
[4,] -0.50 -0.50 0.50 0.50
```
And the contrasts are:
\begin{equation}
C\mu = \begin{pmatrix}
\phantom{..} 1/4 & 1/4 & 1/4 & 1/4 \\
1 & 0 & -1 & 0\\
-1/2 & 1 & 0 & -1/2\\
-1/2 & -1/2 & 1/2 & 1/2\\
\end{pmatrix}\
\begin{pmatrix}\mu1 \\\mu2 \\\mu3 \\\mu4 \end{pmatrix}
\end{equation}
If we calculate them on the data:
```
means <- with(hsb2, tapply(X = write, INDEX = race.f, FUN = mean))
C %*% means
[,1]
[1,] 51.678376
[2,] -1.741667
[3,] 7.743247
[4,] -1.101580
```
Which are the same values given on the website, using their notation:
```
mymat = solve(t(mat))
summary(lm(write ~ race.f, hsb2, contrasts = list(race.f= mymat[,2:4])))
Call:
lm(formula = write ~ race.f, data = hsb2, contrasts = list(race.f = mymat[,
2:4]))
Residuals:
Min 1Q Median 3Q Max
-23.0552 -5.4583 0.9724 7.0000 18.8000
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 51.6784 0.9821 52.619 < 2e-16 ***
race.f1 -1.7417 2.7325 -0.637 0.52461
race.f2 7.7432 2.8972 2.673 0.00816 **
race.f3 -1.1016 1.9642 -0.561 0.57556
```
So now we know why we take the transpose. But why do we take the inverse?
The model we can use is:
\begin{equation}
y \ =\ X\mu \ +\ \epsilon \
\end{equation}
with $X$ the design matrix for the cell means model and $\mu$ the vector of means.
We can evaluate the means based on this model, using the least squares method:
\begin{equation}
\hat{\mu} =(X^{\prime }X)^{-1}\ X^{\prime }y\\
\end{equation}
Now, we constructed the contrast matrix so that C is square and full rank, and we can take its inverse $C^{-1}$, and insert them in the model equation:
\begin{equation}
y \ =\ X\mu \ +\ \epsilon \ = \ X I\mu \ +\ \epsilon \ =\ X \ (C^{-1}C)\ \ \mu \ +\ \epsilon = \ (X C^{-1}) \ (C \mu) \ + \epsilon
\end{equation}
That means that we can take $XC^{-1}$ as the modified design matrix, to evaluate the contrasts $C\mu$ using the least squares method. In this case, if we name the modified design matrix $X_{1} = XC^{-1}$:
\begin{equation}
\hat{C\mu} = (X_{1}^{'}X_{1})^{-1}X_{1}^{'}y
\end{equation}
So we use the inverse of the contrast matrix to evaluate the actual contrasts.
To see that it works in R:
```
X <- model.matrix( ~ hsb2$race.f + 0) # model matrix for the cell means model
X1 <- X %*% solve(C) # this is the modified model matrix, using the inverse.
```
And we solve by the method of least squares or by lm().
```
# least squares equations:
solve ( t(X1) %*% X1 ) %*% t(X1) %*% hsb2$write
[,1]
[1,] 51.678376
[2,] -1.741667
[3,] 7.743247
[4,] -1.101580
# lm() with modified design matrix:
summary(lm(write ~ X1 + 0, data= hsb2))
Call:
lm(formula = write ~ X1 + 0, data = hsb2)
Residuals:
Min 1Q Median 3Q Max
-23.0552 -5.4583 0.9724 7.0000 18.8000
Coefficients:
Estimate Std. Error t value Pr(>|t|)
X11 51.6784 0.9821 52.619 < 2e-16 ***
X12 -1.7417 2.7325 -0.637 0.52461
X13 7.7432 2.8972 2.673 0.00816 **
X14 -1.1016 1.9642 -0.561 0.57556
# lm() with contrast argument:
summary(lm(write ~race.f, data= hsb2, contrasts = list(race.f= MASS::ginv(C[-1,]))))
Call:
lm(formula = write ~ race.f, data = hsb2, contrasts = list(race.f = MASS::ginv(C[-1,
])))
Residuals:
Min 1Q Median 3Q Max
-23.0552 -5.4583 0.9724 7.0000 18.8000
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 51.6784 0.9821 52.619 < 2e-16 ***
race.f1 -1.7417 2.7325 -0.637 0.52461
race.f2 7.7432 2.8972 2.673 0.00816 **
race.f3 -1.1016 1.9642 -0.561 0.57556
```
Which is the same result as above.
Here we generated C so that it is invertible. In the last call, we provided the pseudoinverse of the contrast matrix without the intercept row. lm() adds the intercept column to generate $C^{-1}$. We could have used `contrasts = list(race.f = solve(C)[, -1])`.
For pre-defined contrasts, it's the same thing. They are provided without the intercept term in lm(), but it is added internally and the contrasts are evaluated using the least squares method using the modified design matrix.
For example, if we use
```
contr.treatment(4)
2 3 4
1 0 0 0
2 1 0 0
3 0 1 0
4 0 0 1
```
The actual contrasts that are evaluated are:
```
C <- solve(cbind(1, contr.treatment(4)))
1 0 0 0
-1 1 0 0
-1 0 1 0
-1 0 0 1
```
And we can evaluate the contrasts as before:
```
X1 <- X %*% solve(C)
# least squares equations:
solve ( t(X1) %*% X1 ) %*% t(X1) %*% hsb2$write
[,1]
46.458333
2 11.541667
3 1.741667
4 7.596839
# lm() with modified design matrix:
summary(lm(write ~ X1 +0, data= hsb2))
Call:
lm(formula = write ~ X1 + 0, data = hsb2)
Residuals:
Min 1Q Median 3Q Max
-23.0552 -5.4583 0.9724 7.0000 18.8000
Coefficients:
Estimate Std. Error t value Pr(>|t|)
X1 46.458 1.842 25.218 < 2e-16 ***
X12 11.542 3.286 3.512 0.000552 ***
X13 1.742 2.732 0.637 0.524613
X14 7.597 1.989 3.820 0.000179 ***
# lm with pre-defined contrasts:
summary(lm(write ~race.f, data= hsb2, contrasts = list(race.f = contr.treatment(4))))
Call:
lm(formula = write ~ race.f, data = hsb2, contrasts = list(race.f = contr.treatment(4)))
Residuals:
Min 1Q Median 3Q Max
-23.0552 -5.4583 0.9724 7.0000 18.8000
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 46.458 1.842 25.218 < 2e-16 ***
race.f2 11.542 3.286 3.512 0.000552 ***
race.f3 1.742 2.732 0.637 0.524613
race.f4 7.597 1.989 3.820 0.000179 ***
```
And indeed:
```
C %*% means
[,1]
46.458333
2 11.541667
3 1.741667
4 7.596839
```
| null | CC BY-SA 4.0 | null | 2023-04-07T16:28:51.933 | 2023-04-07T16:36:01.933 | 2023-04-07T16:36:01.933 | 383873 | 383873 | null |
612279 | 1 | null | null | 0 | 11 | I will be administering an early literacy assessment to preschoolers at 2 time points in the preschool year. I want to be able to examine the growth from point A to point B. I also wanted to administer a social-emotional assessment at the two time points and assess growth there as well. Lastly, I wanted to examine the correlation between the two.
I had initially thought of a growth model but realize two time points may not be appropriate for this type of model. Any suggestions?
| What statistical analyses would you use to analyze the change across two time points, months apart? | CC BY-SA 4.0 | null | 2023-04-07T16:34:02.030 | 2023-04-08T14:38:03.850 | 2023-04-08T14:38:03.850 | 11887 | 385194 | [
"pre-post-comparison",
"growth-model"
] |
612280 | 1 | null | null | 1 | 26 | I would like to ask a question about obtaining a standard error (SE) from the 95% confidence interval (CI) formula after log-transformation.
As you may know, in a normal distribution with a significance level of p<0.05, the 95% CI can be expressed as mean +/- 1.96*SE. To obtain the SE, I tried two methods:
Please note that SE, mean, upper limit, and lower limit are in log-scale in this case :)
1. SE = (upper limit - mean) / 1.96 (or SE = (mean-lower limit) / 1.96)
2. SE = (upper limit - lower limit) / 3.92
I expected these methods to give the same result, but in many cases, they did not. I suspect that this may be due to the log-transformation or a non-normal distribution. However, I still need to obtain the SE for conducting a meta-analysis.
Which of these methods is better for obtaining the SE in this case?
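For what it's worth, a quick numeric check with made-up values shows exactly when the two methods disagree: whenever the interval is not symmetric about the mean, method 1 gives two different answers, and method 2 returns their average.

```python
# Hypothetical log-scale summary: mean and an asymmetric 95% CI.
mean, lo, hi = 1.00, 0.60, 1.50

se_upper = (hi - mean) / 1.96        # method 1, upper limit
se_lower = (mean - lo) / 1.96        # method 1, lower limit
se_width = (hi - lo) / 3.92          # method 2

print(round(se_upper, 4), round(se_lower, 4), round(se_width, 4))
# Method 2 is always the average of the two method-1 values:
print(abs(se_width - (se_upper + se_lower) / 2) < 1e-12)  # True
```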
| Obtaining a SE from 95% CI in log-transformation | CC BY-SA 4.0 | null | 2023-04-07T16:34:17.360 | 2023-04-08T09:15:30.097 | 2023-04-08T09:15:30.097 | 385139 | 385139 | [
"confidence-interval",
"standard-error",
"meta-analysis"
] |
612281 | 2 | null | 612274 | 2 | null | Note that $K$ is the [Hadamard product](https://en.wikipedia.org/wiki/Hadamard_product_(matrices)#Definition) of the matrix $M = (m_{ij})$ with itself, where $m_{ij} = 1 + x_i^Tx_j$, which can be written as (assuming the input vector $x_i$ is a $p \times 1$ column vector):
\begin{align}
M = ee^T + XX^T,
\end{align}
where
\begin{align}
e = \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1\end{bmatrix} \in \mathbb{R}^{n \times 1}, \quad
X = \begin{bmatrix} x_1^T \\ x_2^T \\ \vdots \\ x_n^T \end{bmatrix}
\in \mathbb{R}^{n \times p}.
\end{align}
Hence for any $v \in \mathbb{R}^{n \times 1}$, we have
\begin{align}
v^TMv = v^Tee^Tv + v^TXX^Tv = (v^Te)^2 + (X^Tv)^T(X^Tv) \geq 0,
\end{align}
showing $M$ is positive semi-definite (PSD).
Now the result follows from $K = M \circ M$ and [Schur product theorem](https://en.wikipedia.org/wiki/Schur_product_theorem):
>
If two matrices $M_1$ and $M_2$ are PSD, then their Hadamard product $M_1 \circ M_2$ is also PSD.
---
Your attempt should also work, if assisted with some common "trace tricks" for dealing with quadratic forms (this is essentially the trick used by the first proof in the Schur product theorem link).
The second term $\sum_{i = 1}^n\sum_{j = 1}^n v_ix_i^Tx_jv_j$ is actually the squared Euclidean norm of the vector $v_1x_1 + \cdots + v_nx_n$, hence it is nonnegative.
To prove $\sum_{i = 1}^n\sum_{j = 1}^n v_i(x_i^Tx_j)^2v_j \geq 0$, denote the order $p$ matrix $\sum_k v_kx_kx_k^T$ by $A$, which is clearly symmetric. By the linearity of the trace operator $\operatorname{tr}$ and its property $\operatorname{tr}(M_1M_2) = \operatorname{tr}(M_2M_1)$, we have
\begin{align}
& \sum_{i = 1}^n\sum_{j = 1}^n v_i(x_i^Tx_j)^2v_j \\
=& \sum_{i = 1}^n\sum_{j = 1}^n v_ix_i^Tx_jx_i^Tx_jv_j \\
=& \sum_{i = 1}^n\sum_{j = 1}^n v_ix_i^Tx_jx_j^Tx_iv_j \\
=& \sum_{i = 1}^nv_ix_i^TAx_i \\
=& \sum_{i = 1}^n\operatorname{tr}(v_ix_i^TAx_i) \\
=& \sum_{i = 1}^n\operatorname{tr}(v_iAx_ix_i^T) \\
=& \operatorname{tr}\left(A\sum_{i = 1}^nv_ix_ix_i^T\right) \\
=& \operatorname{tr}(A^2) = \operatorname{tr}(A^TA) \geq 0.
\end{align}
This completes the proof.
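As a quick numerical sanity check of the result (separate from the proof; the dimensions and random data below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 3
X = rng.normal(size=(n, p))        # rows are the x_i

M = 1.0 + X @ X.T                  # m_ij = 1 + x_i^T x_j, i.e. M = ee^T + XX^T
K = M * M                          # Hadamard product M ∘ M, so k_ij = (1 + x_i^T x_j)^2

# All eigenvalues of K should be nonnegative (up to floating-point error)
eigvals = np.linalg.eigvalsh(K)
assert eigvals.min() > -1e-10
```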
| null | CC BY-SA 4.0 | null | 2023-04-07T16:38:15.693 | 2023-04-09T00:22:29.313 | 2023-04-09T00:22:29.313 | 20519 | 20519 | null |
612282 | 1 | null | null | 1 | 15 | I have merged data to create an event study for the treated population. This treatment happens in 4 batches for some university students in certain cohorts (when they turn 19 in 2005 and are at university when the treatment commenced - thus are treated. But this treatment doesn’t happen for a majority of students and for a variety of cohorts. I have tried running an event study but run into issues of colinearity, so I think i may turn to matching, as the treatment happens (especially the first treatment for higher tier universities). so it may be better to match on all characteristics bar treatment. Is this a fitting way to avoid colinearity in event studies (matching on observables) so that the treatment effect is found via matching. Furthermore, what type of matching would be best here (there are more control than there are treat but obviously some universities are different) so exact matching, nearest neighbour matching, propensity score or fancy genetic matching? the cohorts i have are different birth years and thus attended university at different years but this isn’t exactly specified in the data just estimated.
| Matching, event study and cohort data | CC BY-SA 4.0 | null | 2023-04-07T16:39:35.750 | 2023-04-07T16:39:35.750 | null | null | 385193 | [
"matching",
"generalized-did"
] |
612283 | 1 | null | null | 2 | 44 | The Bonferroni correction seems to be quite controversial. But I read again and again that it should be used for multiple tests. But what exactly are multiple tests? If I have three different data sets in the same study and run only one t-test on each data set, is that a multiple test and do I have to apply a Bonferroni correction?
Or am I only talking about multiple testing if I have one data set and do three tests on the same data set?
I find the statements when it comes to the Bonferroni correction very unclear and would be grateful for your expertise.
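For concreteness, my understanding of the mechanics (as opposed to when to apply them) is that each p-value is multiplied by the number of tests $m$ and capped at 1; a minimal sketch with made-up p-values:

```python
pvals = [0.01, 0.04, 0.50]                      # made-up p-values from m = 3 tests
m = len(pvals)
adjusted = [min(p * m, 1.0) for p in pvals]     # Bonferroni: multiply by m, cap at 1
print([round(a, 2) for a in adjusted])          # [0.03, 0.12, 1.0]
```

Equivalently, the raw p-values can be kept as-is and compared against $\alpha/m$ instead of $\alpha$.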
| Bonferroni correction: What exactly is meant by "multiple tests"? | CC BY-SA 4.0 | null | 2023-04-07T16:52:49.433 | 2023-04-07T21:37:48.097 | 2023-04-07T19:40:33.827 | 805 | 231746 | [
"hypothesis-testing",
"bonferroni",
"type-i-and-ii-errors"
] |
612284 | 2 | null | 611662 | 2 | null | This saying is credit to George Gallup. It dates from before 1941, though I've not been able to find a primary source. It seems likely that he used the analogy multiple times.
For example the [Ottawa Citizen writes:](https://news.google.com/newspapers?nid=2194&dat=19411127&id=l-0uAAAAIBAJ&sjid=VtsFAAAAIBAJ&pg=4887,5489739&hl=en)
>
When a cook want to taste the soup to see how it is coming he doesn't have to drink the whole boilerful, nor does he take a spoonful from the top then a bit from the middle and some from the bottom. He stirs the whole cauldron thouroughly, then he stirs it some more, then he tastes it.
This doesn't claim to be an exact quote, but seems to be indicative of how Gallup made the analogy. It is presented in the context of George Gallup, but not as a quote. Given the early date of this article, it is possible that it was, in fact, Gregory Clark who came up with the idea. But given the range of other sources pointing to Gallup, one can surmise that Gallup had used the analogy either in his interview with Clark, or that it was in the background reading on Gallup that Clark did before the interview.
As an aside - this article is dated Nov 27th 1941, 10 days before the USA joined the second world war. Look to the top right of the page for a short story on how "Pearl Harbour would be in grave danger of sabotage, if the US become involve in a war in the Pacific". It is one column tucked away on page 18. The lead story that day was "Siege of Tobruk Broken".
| null | CC BY-SA 4.0 | null | 2023-04-07T17:06:28.153 | 2023-04-07T17:06:28.153 | null | null | 147572 | null |
612286 | 1 | null | null | 2 | 97 | I'm new to probability theory.
Let's say that I have the following situation:
Three identical boxes have different collections of doughnuts in them. The box on the
left ($L$) has 2 plain ($p$) doughnuts, 3 maple ($m$) doughnuts, and 5 chocolate ($c$) doughnuts. The box in the
middle ($M$) has 2 plain doughnuts, 3 maple doughnuts, and 5 chocolate doughnuts. The box on
the right ($R$) has 3 plain doughnuts, 4 maple doughnuts, and 6 chocolate doughnuts. You grab
a box at random and without looking inside you grab a doughnut from that box.
Background: I gave this problem on an Elementary Statistics exam with the question "What is the probability of selecting a plain doughnut?" Since any box and any doughnut inside a box could be selected from the information given in the problem, my answer was $7/33\approx 0.2121$, which is the number of plain doughnuts divided by the total number of doughnuts. One of my students used the rule of total probability: If we let event $A=\{p\}$, event $B=\{L\}$, event $C=\{M\}$, and event $D=\{R\}$, then (I think)
\begin{align*}
P(A)&=P(A|B)P(B)+P(A|C)P(C)+P(A|D)P(D) \\
&=(2/10)(1/3)+(2/10)(1/3)+(3/13)(1/3)\\
&=41/195\approx 0.2103
\end{align*}
That's assuming that $\{B,C,D\}$ is even a legitimate partition here. [Roussas (A First Course in Mathematical Statistics, 1973) defines a partition as a set $A_i\in U$ such that $A_i \cap A_j = \emptyset, i\neq j$ and $\sum_i A_i=\Omega$ where $U$ is a $\sigma$-field for some probability space $(\Omega,U,P)$.]
Now that the background has been established, I'm curious to know which one of us is correct and why. The problem is that I don't know enough about probability theory to come to a good answer. I don't want the problem solved for me, but the two questions below will help me in my process.
I have started to answer the question from what I do know. My questions are below.
I'm letting $\Omega_B=\{L,M,R\}$ represent the outcome space for the boxes. We'll keep the events $B,C,D$ as defined above. I'm also letting $\Omega_d=\{p,m,c\}$ be the outcome space for the doughnuts.
I have two questions:
- I'm trying to represent in notation "the probability that we select a plain doughnut given that we can choose any box". I'm going with $P(A|\Omega_B)$ but I'm not sure that using $\Omega_B$ would be the correct way to represent that. I could also try $P(A|B,C,D)$, but I'm not sure that that would make much sense either.
- I would also like to represent in notation the "probability that we select a plain doughnut given that we select any box and any doughnut. I'm going with $P(A|\Omega_B\cap \Omega_d)$ here, but, again, I'm not sure how you would represent this here.
If what I am doing is sound, great! If not, how would this situation be approached?
Let me know if there are still ambiguities in my question. Thanks!
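For what it's worth, both candidate answers can be computed exactly from the two sampling schemes described above. A short sketch using Python's `fractions` module, with the box contents as given in the problem:

```python
from fractions import Fraction

# plain (p), maple (m), chocolate (c) counts per box
boxes = {"L": {"p": 2, "m": 3, "c": 5},
         "M": {"p": 2, "m": 3, "c": 5},
         "R": {"p": 3, "m": 4, "c": 6}}

# Two-stage scheme: pick a box uniformly, then a doughnut uniformly from that box
p_two_stage = sum(Fraction(1, 3) * Fraction(b["p"], sum(b.values()))
                  for b in boxes.values())
print(p_two_stage)   # 41/195

# Pooled scheme: pick one doughnut uniformly from all 33 doughnuts
total_plain = sum(b["p"] for b in boxes.values())
total = sum(sum(b.values()) for b in boxes.values())
p_pooled = Fraction(total_plain, total)
print(p_pooled)      # 7/33
```

The two schemes give different answers precisely because the right box holds 13 doughnuts rather than 10, so pooling all doughnuts does not weight the boxes equally.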
| Does my use of the notation $P(\cdot|\Omega)$ make sense? | CC BY-SA 4.0 | null | 2023-04-07T17:30:12.270 | 2023-04-08T22:06:48.123 | 2023-04-08T16:43:22.907 | 385196 | 385196 | [
"probability",
"terminology"
] |
612287 | 1 | null | null | 0 | 33 | My dataset consists of temperature measurements from thermocouples as shown in the figure.[](https://i.stack.imgur.com/lsJnu.png)
I use several models like Long Short-Term Memory (LSTM) and GRU in order to predict future values of these thermocouples. Only past measurements of the thermocouples are taken into consideration in my model, no other variables. My RMSE and Mean Absolute Error values for all the models are acceptable. However, the residual errors do not follow a normal distribution. The distribution of the errors is bimodal, where the major mode is a Gaussian distribution and the minor mode is a lognormal distribution. Is it a requirement for a time series forecasting model to have residual errors that follow a normal distribution? Does the non-normal distribution of the residual errors show that we cannot trust the model?
Thank you in advance!
| Non-normal distribution of residual errors in time series forecasting | CC BY-SA 4.0 | null | 2023-04-07T18:04:12.330 | 2023-04-07T18:04:12.330 | null | null | 385198 | [
"time-series",
"distributions",
"normal-distribution",
"forecasting",
"residuals"
] |
612288 | 1 | null | null | 0 | 15 | I need to test the relationship between 3 variables. The problem is, one variable is ordinal and the other two are nominal (more precisely, both are from a dichotomous scale). I have already looked for this, but it seems to me that there is no suitable type of analysis. The best way currently seems to me to use a logistic regression, but the ordinal variable does not fit in there.
Do you have any ideas which I can search for or a possible solution?
| Analysis for relationship between ordinal and nominal variables | CC BY-SA 4.0 | null | 2023-04-07T18:30:09.890 | 2023-04-07T18:30:09.890 | null | null | 385200 | [
"regression",
"hypothesis-testing",
"statistical-significance",
"inference",
"descriptive-statistics"
] |
612290 | 1 | 612293 | null | 2 | 129 | In a simple neural network, having more nodes on an input layer that on the next layer performs a compression or dimension reduction similar to what PCA does. The fewer nodes encode in a combination some kind of information that is in the previous layer.
While the forward computation is structurally similar to PCA , the weights form a matrix, it is not equivalent. That is, an autoencoder reduces dimensions as does PCA, but there is no gurantee of orthogonality or correspondence to eignevalues.
Is there an activation function and loss function for one layer (or larger more complicated architecture and choice of activation and loss functions and backprop alternative) that does converge to the PCA coefficients?
That is, is there some way to get a weight matrix that is orthogonal -and- the next level of nodes correspond to the eigensystem (sortable by eigenvalues)?
The motivation is to 'do everything' with a neural network architecture rather than use processes outside of the NN model. This way one could remove collinearity for modeling non-linear subspaces.
| PCA via a Neural Network | CC BY-SA 4.0 | null | 2023-04-07T19:19:24.073 | 2023-04-10T16:31:34.503 | 2023-04-10T16:31:34.503 | 3186 | 3186 | [
"neural-networks",
"pca",
"autoencoders"
] |
612291 | 1 | null | null | 1 | 17 | Say I'm running Metropolis-Hastings with target density $p$. What I would like to do is divide the space $E$, on which $p$ is defined, into a disjoint union $E=\bigcup_iE_i$ and run a separate instance of Metropolis-Hastings inside each stratum $E_i$ (for example, since I can come up with a proposal kernel specifically designed for $E_i$).
Now, if I know nothing about $p$, how do I know what a smart choice for the stratification into the $E_i$ is? Intuitively, we somehow want $E=\bigcup_iE_i$ to be a decomposition of $p$ into its "modes", but what does that even mean in general (maybe that $p$ "does not vary too much inside each $E_i$"?)?
Is there maybe some trial-and-error mechanism to detect the modes?
To make things simpler, please assume that $E=[0,1)^2$.
---
EDIT: Relevant articles on the web are:
- Stratification as a general variance reduction method for Markov chain Monte Carlo: https://arxiv.org/pdf/1705.08445.pdf
- Slides corresponding to the paper: https://icerm.brown.edu/materials/Slides/tw19-2-hire/Computing_Rare_Event_Probabilities_by_Stratified_Markov_Chain_Monte_Carlo_]_Brian_Van_Koten,_University_of_Massachusetts-_Amherst.pdf
- Xi'an's blog post about this: https://xianblog.wordpress.com/2020/12/03/stratified-mcmc/
However, I don't know how I would use this in my case $E=[0,1)^2$ and an arbitrary $p$.
| How should we stratify the space for Metropolis-Hastings? | CC BY-SA 4.0 | null | 2023-04-07T19:22:45.603 | 2023-04-07T19:37:34.590 | 2023-04-07T19:37:34.590 | 222528 | 222528 | [
"markov-chain-montecarlo",
"metropolis-hastings",
"stratification"
] |
612292 | 2 | null | 612256 | 3 | null | For your first question, the trick is to write $m(X) =\sum_{i=1}^n W_{ni}(X)m(X)$, which then gives :
$$\begin{align*}
\mathbb{E}\left[m_{n}(X) - m(X) \right]^2 &= \mathbb{E}\left[\sum_{i=1}^n W_{ni}(X)(Y_{i} - m(X)) \right]^2\\
&= \mathbb{E}\left[\sum_{i=1}^n W_{ni}(X)(Y_{i} - m(X_i) + m(X_i) - m(X)) \right]^2\\
\end{align*} $$
Now we write $a_i := W_{ni}(X)(Y_i - m(X_i))$, $b_i:=W_{ni}(X)(m(X_i) - m(X))$, and using the inequality $\left(\sum_{1\le i\le n} a_i + b_i\right)^2 \le 2\left(\sum_{1\le i\le n} a_i\right)^2 + 2\left(\sum_{1\le i\le n} b_i\right)^2 $ (which follows directly from the [well known](https://math.stackexchange.com/questions/2168244) $(a+b)^2 \le 2a^2 + 2b^2 $ applied with $a = \sum_i a_i$ and $b = \sum_i b_i$), we immediately get
$$\begin{align*}
\mathbb{E}\left[m_{n}(X) - m(X) \right]^2 &=\mathbb{E}\left[\sum_{i=1}^n a_i + b_i \right]^2 \\
&\le 2\mathbb{E}\left[\sum_{i=1}^n a_i \right]^2 + 2\mathbb{E}\left[\sum_{i=1}^n b_i\right]^2
\end{align*} $$
As desired.
For your second question, I agree with your definitions of "estimation error" and "approximation error". I am not very familiar with regression trees and the notations used in the paper, but let me nonetheless try to explain why they named these two terms as they did :
- The term $I_n := \mathbb{E}\left[\sum_{i=1}^n a_i \right]^2$ represents the $L^2$ error between the estimator $m_n := \sum_{i=1}^n W_{ni}(X) Y_i$ and the "best possible tree" $m_{best} := \sum_{i=1}^n W_{ni}(X) m(X_i)$ (I admit that I'm not 100% sure that $m_{best}$ is indeed the best tree, I strongly suspect it to be true though). That indeed corresponds to the definition of the estimation error between our estimator and the best possible estimator in the given hypothesis class.
- The term $J_n := \mathbb{E}\left[\sum_{i=1}^n b_i \right]^2$ corresponds to the $L^2$ error between the estimator $m_{best}$ as defined above and the actual regression function $m:= \mathbb E[Y\mid X=\cdot]$. Again, admitting that $m_{best}$ is the best estimator in our hypothesis class, this corresponds exactly to the definition of approximation error you gave.
| null | CC BY-SA 4.0 | null | 2023-04-07T19:49:39.480 | 2023-04-10T17:09:33.310 | 2023-04-10T17:09:33.310 | 305654 | 305654 | null |
612293 | 2 | null | 612290 | 2 | null | Just doing PCA inside of a neural network is not much of a stretch, since the most naïve implementation will simply employ gradient updates to compute the $QR$ algorithm for the covariance matrix.
It's well-known that [autoencoders](/questions/tagged/autoencoders) find a rank $k$ approximation to the data. And in the particular sense of minimizing certain metrics of reconstruction error, this approximation is optimal. However, there is no guarantee that the estimated matrices will be orthogonal, nor that they will correspond to components that maximize variance; indeed, we would expect the matrices merely to span the rank $k$ PCA solution, simply because the generic autoencoder optimization task does not impose these constraints.
- What're the differences between PCA and autoencoder?
- Evaluating an autoencoder: possible approaches?
But modern neural network libraries implement methods to introduce orthogonality constraints to matrices. For example, PyTorch does this with [parameterizations](https://pytorch.org/tutorials/intermediate/parametrizations.html), with a [specific method for orthogonality constraints](https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.orthogonal.html). We can likewise use parameterizations to do things like enforce that a weight matrix is triangular (for instance, by using a binary mask).
Finally, the [$QR$ algorithm](https://en.wikipedia.org/wiki/QR_algorithm) is a method to estimate the eigenvalues of a square matrix. PCA is a decomposition of the covariance matrix, which is square.
This is somewhat roundabout, and I don't believe implementing this homebrew PCA method is a good solution. I expect there to be superior methods to finding a nice low-rank representation of the data, even if the data are large.
Moreover, a further refinement would work on the data matrix directly, instead of the covariance matrix.
Incidentally, [PyTorch also implements SVD](https://pytorch.org/docs/stable/generated/torch.svd.html) (which can be used to do PCA: [Relationship between SVD and PCA. How to use SVD to perform PCA?](https://stats.stackexchange.com/questions/134282/relationship-between-svd-and-pca-how-to-use-svd-to-perform-pca/134283#134283)).
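As a concrete illustration of the SVD-to-PCA correspondence (in NumPy rather than PyTorch, just to keep the sketch self-contained; the data here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))  # correlated toy data
Xc = X - X.mean(axis=0)                                  # center the columns

# PCA via SVD of the centered data matrix
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components_svd = Vt                                      # rows are principal axes

# PCA via eigendecomposition of the covariance matrix
evals, evecs = np.linalg.eigh(Xc.T @ Xc / (len(Xc) - 1))
components_eig = evecs[:, ::-1].T                        # sort by decreasing eigenvalue

# The two sets of principal axes agree up to sign
assert np.allclose(np.abs(components_svd), np.abs(components_eig))
```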
---
This leaves much to be desired; for instance, one of the main strengths of neural networks is that they can achieve state-of-the-art results by streaming batches of data, instead of requiring access to all of the data at once. And the sketch above assumes that you're forming the covariance matrix directly, instead of batches of raw data.
These papers outline several different methods to use neural networks to estimate PCA in more sophisticated ways. As I have time, I'll expand this answer to summarize the key points.
Fyfe, Colin. "A neural network for PCA and beyond." Neural Processing Letters 6 (1997): 33-41.
Migenda N, Möller R, Schenck W (2021) Adaptive dimensionality reduction for neural network-based online principal component analysis. PLoS ONE 16(3): e0248896. [https://doi.org/10.1371/journal.pone.0248896](https://doi.org/10.1371/journal.pone.0248896)
Du, Ke-Lin, and Madisetti NS Swamy. Neural networks and statistical learning. Springer Science & Business Media, 2013.
Kong, Xiangyu, Changhua Hu, and Zhansheng Duan. Principal component analysis networks and algorithms. Singapore: Springer Singapore, 2017.
Bartecki, K. (2012). Neural Network-Based PCA: An Application to Approximation of a Distributed Parameter System. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds) Artificial Intelligence and Soft Computing. ICAISC 2012. Lecture Notes in Computer Science(), vol 7267. Springer, Berlin, Heidelberg. [https://doi.org/10.1007/978-3-642-29347-4_1](https://doi.org/10.1007/978-3-642-29347-4_1)
P. Pandey, A. Chakraborty and G. C. Nandi, "Efficient Neural Network Based Principal Component Analysis Algorithm," 2018 Conference on Information and Communication Technology (CICT), Jabalpur, India, 2018, pp. 1-5, doi: 10.1109/INFOCOMTECH.2018.8722348.
| null | CC BY-SA 4.0 | null | 2023-04-07T19:58:05.200 | 2023-04-10T16:05:35.460 | 2023-04-10T16:05:35.460 | 22311 | 22311 | null |
612294 | 1 | null | null | 0 | 8 | Say I want the probability of at least 1 event occurring out of a series of `n` events each with differing but known probabilities from observing past behavior `p_0, p_1, ... p_n`. Generally, if I assume all of these events are independent, I can get that for some share of the events via 1 - the product of all `(1 - p_i)` in my probability pool.
But now let's say closer inspection of the original events from which I derived by probabilities shows serial correlation was strong. For instance, if an event happened at step `t` in the event chain, it was more likely to happen at `t + 1`. If I then re-asked (say for some new, similar experiment), what's the probability of at least one event, it doesn't make sense that I can use the basic probability formulation above, correct? Should I account for that serial correlation in some way, and if so, how so?
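To make the question concrete, here is a minimal sketch of the simplest form of serial dependence I can think of, a two-state first-order Markov chain; the transition probabilities `q0` and `q1` are made-up values:

```python
q0 = 0.05   # P(event at t | no event at t-1), assumed
q1 = 0.50   # P(event at t | event at t-1), assumed; q1 > q0 gives positive serial correlation
n = 10

# Stationary marginal probability of an event
pi = q0 / (1 - q1 + q0)

# Under the chain, "no event ever" means staying in the no-event state every step
p_any_chain = 1 - (1 - pi) * (1 - q0) ** (n - 1)

# Naive independence formula using the same marginal probability
p_any_indep = 1 - (1 - pi) ** n

print(p_any_chain, p_any_indep)
```

If I've set this up right, positive serial correlation clusters events together, so for the same marginal probability the chance of at least one event is lower than the independence formula suggests.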
| How to account for serial correlation in probability of at least one event? | CC BY-SA 4.0 | null | 2023-04-07T20:13:08.190 | 2023-04-07T20:13:08.190 | null | null | 260763 | [
"probability",
"mathematical-statistics",
"autocorrelation"
] |
612295 | 1 | null | null | 0 | 10 | I am struggling to interpret the location parameters in generalised partial credit models.
Say you have location parameters $a_1$, $a_2$ and $a_3$. My professor said that for the item to be accepted, they must be monotonically ordered, i.e. $a_1 < a_2 < a_3$, which I think has to do with the monotonicity assumption of such models; however, I am not sure.
| How to interpret Location parameters in Generalised Partial Credit Model? | CC BY-SA 4.0 | null | 2023-04-07T20:20:55.603 | 2023-04-07T20:20:55.603 | null | null | 369789 | [
"item-response-theory"
] |
612296 | 1 | null | null | 1 | 36 | Let’s assume there is a variable $z$ that can be modeled as:
$$z=f(x)+g(y)+ε$$
Where $f$ and $g$ are unknown functions greater than zero, $x$ and $y$ are independent variables and $ε$ is random noise. $y$ is totally unknown, but I have data pairs $(x,z)$ and I’m trying to identify the $x$ value at which $z$ is “considerably” influenced by $f(x)$. "considerably influenced" means something like finding $x_1$ at which the expected value of $z$ change certain predefined amount. That is, finding $x_1$ such that:
$$E[z|x = x_1] - E[z|x = x_0] > C$$
for some predefined reference point $x_0$ (which can be some low value of $x$) and threshold $C$.
I can assume that $f(x)$ is monotonically increasing. An example of the relationship between $x$ and $z$ could be this:
[](https://i.stack.imgur.com/MoDsD.png)
Qualitatively, this point might be somewhere above $x=20$, but I'd like to have a formal approach using well defined and repeatable criteria. I was thinking of fitting a model $\hat{z}=h(x)$ and finding $x_1$ at which $Δh=h(x_1)-h(x_{0})=C$, so the problem can be easily solved if $x_{0}$ and $C$ are defined.
My question is what type of model would be suitable and whether the heteroscedasticity of the data makes this approach invalid.
Any suggestions and criticisms of my approach are also appreciated.
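To make my proposed approach concrete, here is a crude sketch on simulated data that fits a binned, monotonicity-enforced estimate $h$ and locates the first $x$ where $h(x)-h(x_0)>C$; the data-generating function and all constants are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 40, 500)
z = 0.05 * np.maximum(x - 20, 0) ** 1.5 + rng.normal(0, 0.5, 500)  # toy (x, z) pairs

# Crude monotone fit: bin means, then enforce monotonicity with a running maximum
bins = np.linspace(0, 40, 21)                  # 20 equal-width bins
idx = np.digitize(x, bins) - 1
means = np.array([z[idx == i].mean() for i in range(20)])
h = np.maximum.accumulate(means)               # nondecreasing estimate of E[z | x]

# First bin whose estimated increase over the reference bin exceeds C
x0_bin, C = 0, 1.0
crossing = np.argmax(h - h[x0_bin] > C)
print("threshold located near x =", bins[crossing])
```

I suspect a proper version would replace the bin means with isotonic regression or a monotone spline and attach uncertainty to the crossing point (e.g. by bootstrapping), which is part of what I'm asking about.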
| Best model for inference with non-linear data | CC BY-SA 4.0 | null | 2023-04-07T20:31:13.517 | 2023-04-07T22:45:13.817 | 2023-04-07T22:45:13.817 | 376797 | 376797 | [
"regression",
"machine-learning",
"inference"
] |
612297 | 2 | null | 612247 | 0 | null | Question A. Although I haven't compared every detail, it looks like your approach is the same as that used by the R [simsurv package](https://cran.r-project.org/package=simsurv). See the [vignette](https://cran.r-project.org/web/packages/simsurv/vignettes/simsurv_technical.html) on the technical background to the package, and the [section of a vignette](https://cran.r-project.org/web/packages/simsurv/vignettes/simsurv_usage.html#example-4-simulating-under-a-joint-model-for-longitudinal-and-survival-data) dealing specifically with joint modeling. I don't know that you need to define a maximum follow-up time, but that's certainly allowed for. My sense is that the numerical methods used by the package avoids some of the problems with evaluating the cumulative hazard out to long times, but I don't have experience with that.
Question B. I think that the choice of maximum follow-up time will be based on either (1) your understanding of the underlying subject matter, related to the maximum time likely to be found in practice, or (2) trial and error on a data set that covers the ranges of predictors and coefficients that you have in mind.
| null | CC BY-SA 4.0 | null | 2023-04-07T20:39:07.917 | 2023-04-07T20:39:07.917 | null | null | 28500 | null |
612300 | 2 | null | 612283 | 1 | null | The answer for almost all questions that I see here regarding multiple comparison 'corrections' such as Bonferroni is that the desirability of their application depends on things that are usually not mentioned in the question! That means that any really accurate and balanced answer has to be very long. I will not make this long enough, but will point you to my best attempt long-form answer: [A Reckless Guide to P-values : Local Evidence, Global Errors](https://link.springer.com/chapter/10.1007/164_2019_286)
What is the nature of your study and what are your inferential objectives? Is the study a preliminary one that might be thought of as 'hypothesis generating', or is it intended to be a standalone 'definitive' account? You might be interested in the evidential meaning of the data more than the long run error rate consequences of your statistical procedures.
The controversy that you mention might well be a consequence of people being unwilling to imagine that not every user of statistical approaches share their particular purposes and circumstances.
Are the null hypotheses of the several tests the same, or related, or independent? Are any of the data shared across tests?
'Corrections' for multiplicity always come at the cost of reduced power. In other words, they trade off type II errors for extra protection against a category of type I errors. Given your inferential objectives, is that trade-off going to render your designed balance of false positive and false negative errors undesirable? Did you design that balance with the 'correction' in mind? Did you design that balance at all, or are you relying on the arbitrary p<0.05?
| null | CC BY-SA 4.0 | null | 2023-04-07T21:37:48.097 | 2023-04-07T21:37:48.097 | null | null | 1679 | null |
612301 | 1 | null | null | 0 | 21 | I have a question about bootstrapping correlated values from grouped data. The context is using Census data grouped by region, $R$ (tract or block group). Each region has a list of estimated values $V$: median income, population in poverty, age 18 to 64, etc. Each estimate has a margin of error, $MOE$. This $MOE$ is easily translated to a std error for a 90% confidence interval.
When I use this data, I would like to sample from the regional values according to the distribution of the errors. If I create an error term $e(v,r)$ independently for each region $r$ and value $v$ then I am assuming independence of the $V$'s when they are very likely to be correlated.
How should I set up a sampling system to include the correlation patterns within the regions? Is there a common name for this process?
TIA.
| Bootstrapping from Census data | CC BY-SA 4.0 | null | 2023-04-07T21:40:10.933 | 2023-04-07T21:40:10.933 | null | null | 21827 | [
"bootstrap",
"census"
] |
612302 | 1 | null | null | 1 | 51 | I am currently doing the statistical analysis for my master thesis.
In my experiment, I have 3 species that I expose to Erosion treatments.
After every treatment, I see what percentage of each species fall under one of 11 damage categories as can be seen in the picture:
[](https://i.stack.imgur.com/rDEID.png)
I collected how big the angle compared to standing upright was. If the number was 5 for example, it was heavily leaning to the right. Angle_cat is just the damage category, sorry for not naming it properly. For the percentages, I looked at all shoots of the same plant species after an erosion treatment and calculated how much percentage of the plant fell under this damage category. So in the picture, all the percentages of Phrag (which is reed) and an erosion of 0 cm amount to 1. So for the Phrag 2.5 cm erosion, you can see that the percentage of species with an angle of -3 decreased.
The problem I currently have is that for each Erosion;Species combination, I have multiple damage categories. So if I do glm(Percentage~Erosion*Species), all the percentages would be treated like results of different experiments under the same treatment, instead of parts of one result. Is there a way I can look at the difference between the different Erosion;Species;damage category combinations with a glm, or do I need a different method?
Another way of representing this data is like this (if this helps):
[](https://i.stack.imgur.com/S9ZhY.png)
I wish you a nice easter,
Sam
| glm with nested treatments | CC BY-SA 4.0 | null | 2023-04-07T21:40:49.397 | 2023-04-08T20:17:30.367 | 2023-04-08T20:17:30.367 | 378046 | 378046 | [
"r",
"regression",
"generalized-linear-model",
"nested-data"
] |
612303 | 2 | null | 611857 | 3 | null | Don't try to reconcile the two Cox-model fitting functions that way. It will only make your head hurt. (At least it makes mine hurt.)
A Cox model only defines linear predictors and associated hazards relative to a reference scenario. After the model is fit, you can then generate a baseline hazard over time for that reference scenario. That baseline hazard is then used to generate survival predictions based on differences of covariate values from the reference scenario.
I think about that baseline hazard as functionally equivalent to the intercept in a linear regression model. If you change the reference values used for a predictor, then the intercept in a linear regression model will change and the baseline hazard function in a Cox model will change.
The `survival` package uses the overall means of the covariate values as the reference scenario. The `rms` package stores the linear-predictor values as differences from the mean linear predictor value. For your case, that gives:
```
exp(fitCPH2$linear.predictor[[1]])
# [1] 1.013916
```
which is the value that you found. If you specify `ref.zero=TRUE` then the result is relative to the "Adjust to" values in the `datadist`:
```
dd$limits["Adjust to",c("fin","age","prio")]
# fin age prio
# Adjust to no 23 2
Predict(fitCPH2,fin="no",age=27,prio=3,fun=exp,type = "predictions",ref.zero =TRUE,conf.int=0.95,digits = 4)
# fin age prio yhat lower upper
# 1 no 27 3 0.8424147 0.7071496 1.003554
```
which is closer to what you found with `coxph` but still not the same, as the "Adjust to" values in `rms` aren't the same as the mean values used by `survival`.
```
fitCPH$means
# finyes age prio
# 0.000000 24.597222 2.983796
```
Within each package, everything will work OK in terms of predictions. But it takes some work to try to get them to use the same reference values. The end of the help page for `datadist` shows how to do it if you really need to. You need to change the "Adjust to" values, redefine the `datadist` option to the new version, `update()` the original fit, and specify `ref.zero=TRUE` in the call to `Predict()`:
```
dd$limits["Adjust to","age"] <- fitCPH$means[["age"]]
dd$limits["Adjust to","prio"] <- fitCPH$means[["prio"]]
options(datadist="dd")
fitCPH2adj <- update(fitCPH2)
Predict(fitCPH2adj,fin="no",age=27,prio=3,fun=exp,type = "predictions",ref.zero =TRUE,conf.int=0.95,digits = 4)
# fin age prio yhat lower upper
# 1 no 27 3 0.85244 0.7726731 0.9404416
```
But then those confidence intervals are for the difference from the new reference values, which might not be what you want. In particular, the width of the confidence interval is 0 at the new reference values:
```
Predict(fitCPH2adj,fin="no",age= fitCPH$means[["age"]],prio= fitCPH$means[["prio"]],fun=exp,type = "predictions",ref.zero =TRUE,conf.int=0.95,digits = 4)
# fin age prio yhat lower upper
# 1 no 24.59722 2.983796 1 1 1
```
I can't think of anything that you can do with a `coxph` model that you can't with a `cph` model, provided that you follow the necessary syntax changes (like using `strat()` instead of `strata()` for defining strata). It's much simpler and less error-prone to work within a single package for such things.
| null | CC BY-SA 4.0 | null | 2023-04-07T22:04:22.137 | 2023-04-08T07:19:09.663 | 2023-04-08T07:19:09.663 | 28500 | 28500 | null |
612304 | 1 | null | null | 1 | 17 | Edvard, the evaluator in sample B, does not know Richard, the target subject in sample A. However, the two, independently, give the same answer/Likert value (1-5) to 30% of the questionnaire items. It results in an artificial inflation of the correlation between the evaluations. How can I reduce it?
| How can I reduce correlation between two independent variables? | CC BY-SA 4.0 | null | 2023-04-07T23:00:53.193 | 2023-04-07T23:00:53.193 | null | null | 385213 | [
"regression",
"correlation",
"variance",
"variability",
"variable"
] |
612305 | 1 | null | null | 2 | 113 | I've been trying to learn how to do this analysis but I can't find any information that sheds light on my case and I can't figure out what to do from Hayes' book. I would really appreciate it if someone could help me out.
For my undergraduate thesis, I'm examining how emotion regulation predicts resilience and whether age and gender moderate this relationship. The Emotion Regulation Questionnaire measures two strategies: Cognitive Reappraisal and Expressive Suppression. This means I have two predictors and two moderators (one of which, gender, is dichotomous).
What model should I use in Process? And considering that only one predictor can be entered, do I run the analysis twice? And would running multiple analyses lead to Type I and Type II errors, requiring a Bonferroni or Hochberg correction?
Any advice would be greatly appreciated.
| Process Macro SPSS-Moderation Analysis with 2 Predictors and 2 moderators | CC BY-SA 4.0 | null | 2023-04-07T23:05:35.907 | 2023-04-13T01:26:00.033 | 2023-04-07T23:06:48.137 | 385207 | 385207 | [
"interaction"
] |
612306 | 1 | null | null | 0 | 10 | I've been told that the following ACF graphics basically show that both series are stationary, but I didn't really understand why.
Is it that if both the autocorrelation and partial autocorrelation gradually decay to 0, then the series is stationary? Or is it a lot more complicated than this?
[](https://i.stack.imgur.com/ZvHRr.png)
[](https://i.stack.imgur.com/FFPr8.png)
| how to identify stationarity of a series by interpreting acf and pacf graphics | CC BY-SA 4.0 | null | 2023-04-07T23:41:18.553 | 2023-04-07T23:41:18.553 | null | null | 379702 | [
"stationarity",
"acf-pacf"
] |
612307 | 1 | null | null | 0 | 27 | I am trying to implement the Schwartz-Smith (2000) commodity pricing model from the paper [Short-term variations and long-term dynamics in commodity prices](https://www.jstor.org/stable/pdf/2661607.pdf?casa_token=cJE8xBT2xhsAAAAA:ctIUoAiO9MmnyS90cu29CpPyZa0_rIsMQneJz1z5DML1tc5WHdl0FfyVNKPAzhLZAZKleSD0mi4Ss9HO5HjVvADxH8Eq5YjDQfTnf_fn0lEa62ZGS-Cm)
The model is estimated using the Kalman Filter, where the state space is described by the following two equations:
$$x_t = c + G x_{t-1} + \omega_t$$
$$y_t = d_t + F_t x_t + v_t,$$
where $x_t = [\chi_t, \xi_t]^\top$ is a 2 x 1 vector of the state variables and $y_t = [\ln F_{T_1}, \ldots, \ln F_{T_n}]^\top$ is an n x 1 vector of the log futures prices at time $t$ with maturities $T_i$ for $i = 1, \ldots, n$.
$c$, $G$, $d$, and $F$ are the state space parameters with appropriate dimensions.
$\omega_t$ is a 2 x 1 vector of disturbances with $E[\omega_t] = 0$ and covariance matrix $\operatorname{Var}[\omega_t] = W$.
$v_t$ is an n x 1 vector of disturbances with $E[v_t] = 0$ and covariance matrix $\operatorname{Var}[v_t] = V$.
All of the state space parameters except $V$ (that is, $c$, $G$, $W$, $d$, and $F$) are functions of seven underlying constant parameters (as well as the time increments and the maturities):
$$\theta = (\kappa, \sigma_\chi, \mu_\xi, \sigma_\xi, \rho_{\xi\chi}, \lambda_\chi, \mu_\xi^*),$$
whose nature is described in the paper.
$V$ is assumed to be a diagonal matrix whose elements are $s_i^2$ for $i = 1, \ldots, n$.
I am trying to estimate these parameters with the Expectation-Maximization (EM) algorithm. For that I am using the pykalman Python package to calculate the loglikelihood of the observations, then maximize the loglikelihood with respect to $\theta$ and the elements of V, using scipy.optimize. I then reestimate the Kalman Filter with the new parameter estimates; this is repeated until convergence.
The problem with this approach is that I am only getting the end result of the estimation and no descriptive statistics. In the original paper, next to the estimated parameter values, the standard errors are also provided but the method with which they are calculated is not described. How can I go about obtaining the values of these standard errors?
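One common way to obtain such standard errors (a standard maximum-likelihood device, though not necessarily the one used in the original paper) is to invert the numerically computed Hessian of the negative log-likelihood at the optimum (the observed information matrix) and take square roots of its diagonal. Below is a minimal, stdlib-only sketch using a toy two-parameter Gaussian likelihood; in the real application `neg_loglik` would wrap the pykalman log-likelihood of the state space model:

```python
import math, random

def neg_loglik(params, data):
    # Toy example: iid Normal(mu, sigma) negative log-likelihood.
    # In the real application this would run the Kalman filter and
    # return minus the innovation log-likelihood for `params`.
    mu, sigma = params
    n = len(data)
    return (n * math.log(sigma) + 0.5 * n * math.log(2 * math.pi)
            + sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2))

def hessian(f, params, data, h=1e-4):
    # Numerical Hessian by central finite differences.
    p = list(params)
    k = len(p)
    H = [[0.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(k):
            pp = list(p); pm = list(p); mp = list(p); mm = list(p)
            pp[i] += h; pp[j] += h
            pm[i] += h; pm[j] -= h
            mp[i] -= h; mp[j] += h
            mm[i] -= h; mm[j] -= h
            H[i][j] = (f(pp, data) - f(pm, data)
                       - f(mp, data) + f(mm, data)) / (4 * h * h)
    return H

def std_errors_2x2(H):
    # Invert a 2x2 Hessian and take sqrt of the diagonal of the inverse.
    a, b, c, d = H[0][0], H[0][1], H[1][0], H[1][1]
    det = a * d - b * c
    return [math.sqrt(d / det), math.sqrt(a / det)]

random.seed(0)
data = [random.gauss(5.0, 2.0) for _ in range(2000)]
# MLEs for the toy model: sample mean and (biased) sample sd.
mu_hat = sum(data) / len(data)
sigma_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in data) / len(data))
se = std_errors_2x2(hessian(neg_loglik, [mu_hat, sigma_hat], data))
# Asymptotically, se[0] should be close to sigma_hat / sqrt(n).
```

For the seven parameters in $\theta$ plus the diagonal of $V$, the same finite-difference Hessian can be built at the converged estimates, with a general matrix inverse (e.g. `numpy.linalg.inv`) replacing the hand-rolled 2x2 inverse.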
| How do I calculate the standard error of Kalman Filter parameter estimates? | CC BY-SA 4.0 | null | 2023-04-08T00:21:56.090 | 2023-04-08T00:21:56.090 | null | null | 385211 | [
"time-series",
"python",
"standard-error",
"kalman-filter",
"state-space-models"
] |
612308 | 1 | 616483 | null | 1 | 34 | I have a fairly elaborate Directed Acyclic Graph (DAG) for the analysis that I am running, but I am presenting a simplified example here to clarify a few things.
Here is a DAG from dagitty.net:
[](https://i.stack.imgur.com/kZfjH.jpg)
- According to the graph, I only need to adjust for A in order to close
the back door path and to identify the total causal effect of
Treatment on Outcome. In other words, the minimal adjustment set for
this diagram is just A.
- Conversely, if I were to condition on C, the pathway
Treatment -> C -> Outcome would be biased because C is on the front door path between the Treatment and the
Outcome, so C should be left out from a regression model OR else B would also need to be conditioned on to close the formed back door path.
My question is about variables like B, adjustment for which is not strictly necessary (assuming C stays unadjusted for). Adjusting for/conditioning on B, or leaving it out completely, is seemingly inconsequential for the total causal effect of Treatment on Outcome. In this case, what are the implications, benefits or drawbacks of including B-type variables in my regression models? Would I not gain precision or explanatory power in the model by including it as a control, rather than optionally leaving it out?
| Adjusting for variables outside of minimal adjustment set for total causal effect in a DAG | CC BY-SA 4.0 | null | 2023-04-08T00:27:43.213 | 2023-05-21T14:22:48.533 | null | null | 171851 | [
"regression",
"model-selection",
"dag",
"causal-diagram"
] |
612309 | 1 | null | null | 1 | 19 | I am iteratively solving a stochastic equation by generating a random field and using the resulting generation to move toward an equilibrium. I know that the system converges but I want to use an appropriate stopping criteria.
At equilibrium, I know that if I continue to iterate, the mean change over many iterations will be 0 at every node $x$ in the field. My null hypothesis $H_0$ at equilibrium is that the change in value at each point $\Delta X_{i}$ will follow a normal distribution with mean 0 and unknown variance.
Let's say I have $N$ nodes. Therefore, if I keep track of the changes over the previous $T$ time steps, I can compute a Student's t test statistic for each node
$K_i = \frac{\langle \Delta X_i \rangle_T}{\sigma_{i,T} /\sqrt{T}}$, for $i = 1,2,...,N$
where
$\langle \Delta X_i \rangle_T = \frac{1}{T}\sum_{t=1}^T\Delta x_{i,t}$
and
$\sigma_{i,T}^2 = \frac{1}{T-1}\sum_{t=1}^{T}(\Delta x_{i,t}-\langle \Delta X_i \rangle_T)^2 $.
This is where I am a little uncertain. I can check each node individually for acceptance (say, rejecting the null hypothesis at the $\alpha = 0.05$ significance level). My plan for deciding whether the whole system is at equilibrium is to treat each node as a Bernoulli trial. By inverting the cumulative binomial distribution with $N$ trials and success probability $\alpha = 0.05$, I can compute $R$ such that $P(R_o \le R) \ge 0.95$ (0.95, or any probability I want to prescribe), where $R_o$ is the number of rejections under the null. Then I can say that if the actual number of observed rejections satisfies $R_{o,actual} > R$, I am not yet at equilibrium.
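The binomial inversion described above can be sketched directly; a minimal stdlib-only illustration (the node count `N = 1000` and the 0.95 target are example values):

```python
from math import comb

def binom_cdf(k, n, p):
    # P(X <= k) for X ~ Binomial(n, p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def rejection_threshold(n_nodes, alpha, target=0.95):
    # Smallest R with P(R_o <= R) >= target when every node satisfies H0.
    r = 0
    while binom_cdf(r, n_nodes, alpha) < target:
        r += 1
    return r

R = rejection_threshold(1000, 0.05)
# The threshold shrinks as alpha shrinks, so the scheme does depend on alpha:
R_small = rejection_threshold(1000, 0.01)
```

With $N = 1000$ and $\alpha = 0.05$, $R$ lands a few binomial standard deviations above the expected $\alpha N = 50$ rejections.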
Does this make sense? One area where I'm still confused: this scheme seems to be independent of the value of $\alpha$. Is that correct? Is there a different test that would be more appropriate?
| Convergence criteria for random field | CC BY-SA 4.0 | null | 2023-04-08T00:43:29.020 | 2023-04-08T10:14:33.847 | 2023-04-08T10:14:33.847 | 385215 | 385215 | [
"hypothesis-testing",
"convergence",
"random-field"
] |
612311 | 1 | null | null | 1 | 30 | I am using bootstrapping to calculate confidence intervals for a risk ratio. In some of the bootstrapped samples, there are no observations of one of the outcomes, so the risk ratio takes the form value/0. Thus, when I try to calculate the SE, I get a NaN value in R, and thus an Inf upper CI.
How should I calculate the standard error and confidence intervals with a risk ratio? I have a very small sample within strata (and need to calculate the risk ratio in the strata.)
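To illustrate, here is a minimal stdlib-only sketch of the failure mode together with one common workaround, a 0.5 continuity correction added to each cell of the 2x2 table (whether that correction is defensible for your strata is a judgment call, and the numbers below are made-up example data):

```python
import random

def risk_ratio(exposed, unexposed, correction=0.0):
    # exposed / unexposed: lists of 0/1 outcomes for one stratum.
    a = sum(exposed) + correction
    b = sum(unexposed) + correction
    risk1 = a / (len(exposed) + 2 * correction)
    risk0 = b / (len(unexposed) + 2 * correction)
    return float("inf") if risk0 == 0 else risk1 / risk0

random.seed(1)
exposed = [1] * 3 + [0] * 12    # small stratum: 3/15 events
unexposed = [1] * 1 + [0] * 14  # 1/15 events

raw, corrected = [], []
for _ in range(2000):
    e = random.choices(exposed, k=len(exposed))
    u = random.choices(unexposed, k=len(unexposed))
    raw.append(risk_ratio(e, u))
    corrected.append(risk_ratio(e, u, correction=0.5))

# Many raw resamples have zero events in the unexposed arm -> inf RR;
# the corrected version stays finite in every resample.
n_inf = sum(1 for r in raw if r == float("inf"))
```

Percentile intervals from `corrected` are then finite; an alternative is to bootstrap log risk ratios and drop or cap degenerate resamples, which changes the interpretation slightly.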
| Bootstrap produces inf confidence interval | CC BY-SA 4.0 | null | 2023-04-08T01:04:15.113 | 2023-04-08T03:07:18.890 | 2023-04-08T03:07:18.890 | 11887 | 349481 | [
"r",
"confidence-interval",
"bootstrap",
"standard-error",
"relative-risk"
] |
612312 | 1 | null | null | 0 | 41 | I'm trying to fit a Generalized Additive Model to a daily time series. My goal is to produce a short-term forecast of my city's gas demand. I have data since 2015, including weather information (minimum and maximum temperatures). Since the data are daily and span many years, there is double seasonality: yearly and weekly.
Here is an image of the historical data through the years. We can see that every year the demand rises during the winter months (May, June, July, August, September):
[](https://i.stack.imgur.com/2ePJ7.png)
And here is an image of the data weekly. We can see that during the end of the week the demand decreases:
[](https://i.stack.imgur.com/iCCPn.png)
The real question is the following: what is the right syntax for taking these two seasonalities into account when fitting a gam() model from the mgcv package in RStudio?
I know that it should be something like this:
```
library(mgcv)
gam_1 <- gam(gas_demand ~ s(x1, bs = "cr", k = 7) +
s(x2, bs = "ps", k = 365),
data = df,
family = gaussian)
#This is just an example, x1 and x2 have not been defined, they
#are supposed to be the covariates of the model
#The daily data should be stored in the data frame 'df'
#The response variable that I'm trying to forecast is 'gas_demand'
```
Should I create a column in my data frame that goes from 1 to 7, depending on the day of the week of the observation to take into account the weekly seasonality?
And for the yearly seasonality, should I create a column with values from 1 to 365 (depending on the day of the year)? Or a column with values from 1 to 12, depending on the month of the year? I'm not sure what would be the right way to do it.
And my final question: which type of basis function is recommended for each type of seasonality?
I'm really desperate for a response since I'm struggling to find examples that work with this exact type of data. Thanks in advance! :)
| How to fit a GAM with double seasonality to a daily time series? (mgcv package) | CC BY-SA 4.0 | null | 2023-04-08T01:17:19.957 | 2023-04-08T14:06:29.593 | 2023-04-08T14:06:29.593 | 375274 | 375274 | [
"r",
"time-series",
"forecasting",
"generalized-additive-model",
"multiple-seasonalities"
] |
612313 | 1 | 612321 | null | 1 | 37 | The leaps library's regsubsets function returns an object containing the drop in BIC of each subset model relative to the intercept model.
However, it is different from what is calculated manually.
For example, using the mtcars dataset
Reproducible code:
```
library(leaps)
# Stepwise selection
stepwise = regsubsets(mpg~., data=mtcars, method="seqrep", nvmax=10)
# Plot results to see best subset from stepwise selection is of size 3
plot(stepwise, scale="bic")
# Optimal subset size is 3
which.min(summary(stepwise)$bic)
# The variables from subset size 3 are cyl, hp and wt
coef(stepwise, 3)
# Minimum (optimal) BIC value drop from intercept model is -45.41594
min(summary(stepwise)$bic)
# Manual calculation i.e. BIC of model of 3 variables - BIC of intercept
# Gives -48.88168
BIC(lm(mpg~cyl+hp+wt, data=mtcars))-BIC(lm(mpg~1, data=mtcars))
```
Why is there such a difference (-48.9 vs -45.4)?
Roughly an absolute difference of 3.5
Refer to this [post](https://stats.stackexchange.com/questions/87468/why-do-i-get-different-bic-values-when-i-use-regsubsets-and-lm-in-r) for BIC display values of regsubset summary, notice the use of word "about" and the discrepancies in manual and regsubset summary can be seen there too. (also around a difference of 3.5)
| BIC drop in regsubset summary different from manual calculation in R | CC BY-SA 4.0 | null | 2023-04-08T01:37:16.790 | 2023-04-08T06:17:34.750 | 2023-04-08T01:45:35.837 | 373321 | 373321 | [
"r",
"regression",
"model-selection",
"bic"
] |
612314 | 2 | null | 612222 | 4 | null | I agree that @NickCox's answer is an excellent answer to the general question: "how do I use graphical summaries to evaluate the degree to which my data violate the modeling assumptions I am using?"
However, I would quibble with a couple of the more specific assertions in the [source material](http://www.sthda.com/english/wiki/one-way-anova-test-in-r#check-anova-assumptions-test-validity).
## 1. scale-location plots are (much!) better than residual vs. fitted plots for assessing heteroscedasticity
Here's what we get when we ask for `plot(res.aov, which = 3)` (see `?plot.lm` for more details):
[](https://i.stack.imgur.com/wlvX0.png)
This shows the square root of the absolute value of the standardized residuals vs. the fitted values. The red line gives a reasonable visual indication of the trend in the variability (i.e., in this case the variance decreases slightly as the fitted values increase).
- the most important aspect of the S-L plot is its use of the absolute value, which allows you to judge trend in the data rather than the degree of variation (which is harder to judge by eye, and in particular can be misleading if the fitted values are unevenly distributed; this is the same idea as @NickCox's point #2).
- the scale-location plot uses standardized residuals ($(y - \hat y)/(\hat\sigma \sqrt{1-h})$, where $h$ is the diagonal of the hat matrix), which correct the residuals so that they have equal variances if the data are actually homoscedastic.
- the square-root is used to reduce the skewness of the distribution of the transformed residuals: from ?plot.lm,
>
$\sqrt{|E|}$ is much less skewed than $|E|$ for Gaussian zero-mean $E$
## 2. the numbered points are "outliers" only in the loosest sense
If you look at `?plot.lm` you'll see that there is an argument `id.n = 3`, corresponding to
>
id.n: number of points to be labelled in each plot, starting with
the most extreme.
In other words, the most extreme three residuals will always be labeled, regardless of whether they would be considered unusually extreme under the model assumptions. (Defining outliers is a messy subject in any case; if you want to find outliers, your simplest/default strategy would be using `plot(res.aov)` with `which` equal to 4, or 6, which will show you [Cook's distance](https://stats.stackexchange.com/questions/22161/how-to-read-cooks-distance-plots).)
| null | CC BY-SA 4.0 | null | 2023-04-08T01:56:37.333 | 2023-04-08T14:54:37.740 | 2023-04-08T14:54:37.740 | 2126 | 2126 | null |
612315 | 1 | null | null | 1 | 18 | I want to investigate whether there is a significant difference in eigenvector centrality between two groups. The sample sizes of the two groups are over 9000 and 300, respectively, a large difference. I used the Mann-Whitney U non-parametric test, and I want to know whether the result of the non-parametric test is reliable.
| When the sample sizes of two datasets are greatly different, is the result of non-parametric testing still reliable? | CC BY-SA 4.0 | null | 2023-04-08T03:37:32.223 | 2023-04-08T03:37:32.223 | null | null | 382194 | [
"nonparametric",
"sample"
] |
612319 | 1 | 612320 | null | 3 | 69 | The setting is $A\in \mathbb{R}^{n*n}$ with each entry being i.i.d. bounded r.v. in $[a,b]$. The question is to prove $\Vert A\Vert_2$ is sub-Gaussian.
Intuitively I thought since $\{A_{ij}\}_{i,j=1,...,n}$ is bounded, then
$$\Vert A \Vert_2 = \sup_{\Vert v \Vert = 1} \vert v^TA^TAv\vert = \sup_{\Vert v \Vert = 1}\vert\sum_{i,j}v_iv_j(\sum_k A_{ki}A_{kj})\vert\leq \max(a^2,b^2)$$
Then $\Vert A\Vert_2$ is bounded so that it is sub-Gaussian. Is there any problem in the above process?
| Spectral norm of matrices of i.i.d. bounded r.v. is sub-Gaussian | CC BY-SA 4.0 | null | 2023-04-08T04:41:05.083 | 2023-04-11T03:27:58.300 | null | null | 383159 | [
"probability-inequalities",
"subgaussianity"
] |
612320 | 2 | null | 612319 | 3 | null | Yes, there's a problem.
Suppose $v_i$ is $1$ for $i=1$ and 0 otherwise. Then
$$\sum_{i,j} v_iv_j\left(\sum_k A_{ki}A_{kj}\right)=\sum_k A_{k1}A_{k1}=\sum_k A_{k1}^2$$
and this is only bounded above by $\max (na^2, nb^2)$. Looking for a deterministic bound won't work; it doesn't take advantage of the randomness.
Next, simple computer experiments show that $\|A\|_2$ is large when $n$ is large, so we should expect even the probabilistic bounds to depend on $n$. Also, there are two definitions of sub-Gaussian out there, one requiring zero mean and one not. We must be using the one that doesn't, since $\|A\|_2$ clearly doesn't have zero mean.
Given the bounds $[a,b]$ and no other information, we're presumably supposed to use Hoeffding's inequality. To reduce the number of cases to consider, I'll assume $0<a$. An obvious iid sum to apply the inequality to is the Frobenius norm $\|A\|_F^2=\sum_{ij} A_{ij}^2$.
By Hoeffding's inequality
$$P(|\|A\|_F^2-E[\|A\|_F^2]|>t)\leq 2\exp\frac{-2t^2}{n^2(b^2-a^2)^2}$$
so $\|A\|_F^2$ is sub-Gaussian and so $\left\|\|A\|_F\right\|_{\psi_2}$ is finite, where
$$\|X\|_{\psi_2}=\inf\left\{C>0: E[ \exp(|X/C|^2)]\leq 2 \right\}$$
Now $0\leq\|A\|_2\leq \|A\|_F$ implies $\|A\|_2$ also has finite $\psi_2$ norm and is sub-Gaussian.
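As a numerical sanity check of the chain $0\leq\|A\|_2\leq\|A\|_F$ (and of the failure of the deterministic bound from the question), here is a pure-Python sketch that approximates $\|A\|_2$ by power iteration on $A^TA$; the bounds and matrix size are example values:

```python
import math, random

def matmul_vec(A, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def spectral_norm(A, iters=200):
    # Power iteration on A^T A: converges to the largest singular
    # value of A for a generic starting vector.
    n = len(A[0])
    v = [random.random() for _ in range(n)]
    At = transpose(A)
    for _ in range(iters):
        w = matmul_vec(At, matmul_vec(A, v))
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    w = matmul_vec(A, v)
    return math.sqrt(sum(x * x for x in w))

random.seed(0)
a, b, n = 0.5, 1.5, 30  # example bounds with 0 < a
A = [[random.uniform(a, b) for _ in range(n)] for _ in range(n)]

norm2 = spectral_norm(A)
normF = math.sqrt(sum(x * x for row in A for x in row))
# 0 <= ||A||_2 <= ||A||_F holds, while the deterministic bound
# max(a^2, b^2) from the question fails: ||A||_2 grows with n.
```

For entries bounded in $[0.5, 1.5]$ with $n = 30$, the computed $\|A\|_2$ is roughly $n$ times the mean entry, far above $\max(a^2,b^2)$.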
| null | CC BY-SA 4.0 | null | 2023-04-08T05:37:30.877 | 2023-04-11T03:27:58.300 | 2023-04-11T03:27:58.300 | 249135 | 249135 | null |
612321 | 2 | null | 612313 | 0 | null | I'm not sure why you think the `$bic` component is the drop in BIC from the intercept-only model. Perhaps that would be better, but it isn't, and it isn't documented to be.
You can see that the BIC differences from `summary` do match manual calculation for all the models where `summary` computes the BIC, e.g.
```
> summary(stepwise)$bic[3]-summary(stepwise)$bic[1]
[1] -7.621329
> BIC(lm(mpg~cyl+hp+wt, data=mtcars))-BIC(lm(mpg~wt, data=mtcars))
[1] -7.621329
> summary(stepwise)$bic[3]-summary(stepwise)$bic[2]
[1] -10.20327
> BIC(lm(mpg~cyl+hp+wt, data=mtcars))-BIC(lm(mpg~cyl+disp, data=mtcars))
[1] -10.20327
```
If you want to calculate BIC differences for this intercept-only model it takes a bit of work to make `regsubsets` fit it. You need to specify `intercept=FALSE`, supply a column of 1s, and then increase `nvmax` enough that the model with just the intercept is chosen. Once I do that, I get
```
mtcars$one<-1
stepwise = regsubsets(mpg~., data=mtcars, method="seqrep", nvmax=10, intercept=FALSE,nbest=5)
coef(stepwise,14)
coef(stepwise,4)
summary(stepwise)$bic[14]-summary(stepwise)$bic[4]
```
Alternatively, the BIC that `summary.stepwise` would compute for the intercept-only model with `intercept=TRUE` if it computed one is $\log n$, so
```
> log(nrow(mtcars))
> min(summary(stepwise)$bic)-log(nrow(mtcars))
[1] -48.88168
```
| null | CC BY-SA 4.0 | null | 2023-04-08T06:17:34.750 | 2023-04-08T06:17:34.750 | null | null | 249135 | null |
612322 | 1 | null | null | 0 | 9 | If I have the following regression model in gamlss (though this may be a general concept):
Y ~ var1
where Y ~ Beta and var1 = (g1, g2, g3), with g1 as the reference level: what does it mean when the intercept is significant on its own, and when another level's coefficient is significant as well? If the intercept is significant, does that mean the outcome is associated with the reference group?
| Categorical variable coefficients interpretation | CC BY-SA 4.0 | null | 2023-04-08T06:40:12.163 | 2023-04-08T06:40:12.163 | null | null | 8089 | [
"intercept",
"gamlss"
] |
612323 | 2 | null | 610231 | 2 | null | Suppose the control data matrix is $Z$ and, for tidiness, suppose that each column has mean zero. Let the control correlation matrix be $R$ and assume it is not singular. Let $R^{-1/2}$ be a square root of the inverse of $R$. The matrix $R^{-1/2}Z$ has uncorrelated columns; its correlation matrix is the identity, $R^{-1/2}RR^{-1/2}$.
Now let $Y$ be the case data matrix, also with each column centred at zero. The matrix $R^{-1/2}Y$ would have uncorrelated columns if the correlations were the same, so its correlation matrix is a summary of the 'extra' correlations in the cases.
Note that if we write $S$ for the correlation matrix of $Y$, $R^{-1/2}SR^{-1/2}$ is symmetric with non-negative eigenvalues and so is suitable for PCA. Or you could just do singular-value decomposition on $R^{-1/2}Y$.
(I think I'd actually prefer doing this with covariances rather than correlations, so that differences in variance between the two groups don't look like differences in correlation)
| null | CC BY-SA 4.0 | null | 2023-04-08T06:41:33.950 | 2023-04-08T06:41:33.950 | null | null | 249135 | null |
612324 | 2 | null | 597401 | 0 | null | If you are looking at the univariate shift
If the feature is not encoded, the only thing you can do is check whether the category is in the training data. How it affects the specific model you are using will then differ, e.g. between a NN and a decision-tree-based model.
If the feature is encoded, then you have to look at how your encoder handles unseen categories.
If you are looking at the general shift
I would expect that it also affects the rest of the covariates, so you need to take a wider look at the data.
Being very biased, I will suggest my own work:
- Explanation Shift: How did the distribution shift affect the model? https://arxiv.org/pdf/2303.08081.pdf
- Whose implementation you can check on the skshift python package https://skshift.readthedocs.io/en/latest/
In the related work section, you can find other approaches.
Would love some feedback to see if it works for your case
| null | CC BY-SA 4.0 | null | 2023-04-08T07:09:33.397 | 2023-04-08T07:09:33.397 | null | null | 270023 | null |
612325 | 1 | 612328 | null | 0 | 21 | Is it possible to use intention-to-treat principle in randomized studies with crossover design? Was it ever used in such studies?
| Intention to treat principle in randomized crossover trials | CC BY-SA 4.0 | null | 2023-04-08T07:10:33.567 | 2023-04-08T23:13:25.287 | null | null | 80704 | [
"crossover-study"
] |
612326 | 2 | null | 593575 | 0 | null | Let's say you have $D_1=\{X_{tr},y_{tr}\}\sim P(X,Y)$ and $D_2=\{X_{ood}\}\sim Q(X,Y)$, where $P\neq Q$. If this is your situation, then in theory it's impossible to estimate model performance on $Q$. The distance between $P(X)$ and $Q(X)$ can be very big, but that does not necessarily imply that the performance will drop.
You need either to characterize the type of shift, have a causal graph, or have some labeled OOD data $Q(Y)$.
If you want to measure how much the model changed then, being very biased, I will suggest my own work:
- Explanation Shift: How did the distribution shift affect the model? https://arxiv.org/pdf/2303.08081.pdf
- Whose implementation you can check on the skshift python package https://skshift.readthedocs.io/en/latest/
In the related work section, you can find other approaches.
Would love some feedback to see if it works for your case
| null | CC BY-SA 4.0 | null | 2023-04-08T07:17:34.620 | 2023-04-08T07:17:34.620 | null | null | 270023 | null |
612328 | 2 | null | 612325 | 2 | null | Yes, all the time. In a crossover trial, the intention-to-treat principle says that participants should be analysed according to the treatment sequence they were assigned to at randomisation
| null | CC BY-SA 4.0 | null | 2023-04-08T07:54:47.850 | 2023-04-08T23:13:25.287 | 2023-04-08T23:13:25.287 | 249135 | 249135 | null |