Id stringlengths 1 6 | PostTypeId stringclasses 7 values | AcceptedAnswerId stringlengths 1 6 ⌀ | ParentId stringlengths 1 6 ⌀ | Score stringlengths 1 4 | ViewCount stringlengths 1 7 ⌀ | Body stringlengths 0 38.7k | Title stringlengths 15 150 ⌀ | ContentLicense stringclasses 3 values | FavoriteCount stringclasses 3 values | CreationDate stringlengths 23 23 | LastActivityDate stringlengths 23 23 | LastEditDate stringlengths 23 23 ⌀ | LastEditorUserId stringlengths 1 6 ⌀ | OwnerUserId stringlengths 1 6 ⌀ | Tags list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
610262 | 2 | null | 610226 | 1 | null | Yes, the ROC curves can be the same.
The easiest examples are trivial: a model with perfect separation will consist of the left and top segments, regardless of the data size or balance; and a model with constant probability predictions produces just the diagonal line.
More interesting curves are possible though: it requires the two values of $N$ (resp. $P$) to have nontrivial gcd. See my answer to [Smallest possible difference between AUC of two ranker](https://stats.stackexchange.com/q/593871/232706) for details, but in brief: the false positive rate and true positive rate have denominators $N$ and $P$ respectively, so the allowed points in ROC space lie on an $(N+1)\times(P+1)$ grid. From there, it follows that the two grids coincide on a grid of size $(\operatorname{gcd}(N_1, N_2)+1)\times(\operatorname{gcd}(P_1, P_2)+1)$. Choosing any set of points on that common grid (in a monotone fashion) yields a valid ROC curve, and from there generating rank-orderings to achieve the necessary TPRs and FPRs is doable.
For example, take one dataset with $(N, P) = (9, 5)$ and another with $(N, P) = (6, 10)$. Here are their available points in ROC space; the first in blue, the second in orange, with points lying on both grids appearing gray:
[](https://i.stack.imgur.com/OYt9S.png)
We can pick the points $\{(0,0), (\frac13, \frac35), (\frac23, 1), (1,1)\}$, all on the shared $4\times6$ grid, creating this ROC curve:
[](https://i.stack.imgur.com/XfHGh.png)
And working backwards into the two datasets+models:
- model 1 gives three negative and three positive examples prediction 0.75; three negatives and two positives prediction 0.5; and three negatives prediction 0.25.
- model 2 gives two negatives and six positives prediction 0.9; two negatives and four positives prediction 0.5; two negatives prediction 0.1.
You can check that these both produce the above ROC curve.
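A quick numerical check (in Python, for convenience): the score assignments are exactly the ones in the two bullet points above, and `roc_points` is just a small helper written for this illustration.

```python
def roc_points(scores, labels):
    """ROC points swept from the highest threshold down (labels: 1 = positive)."""
    P = sum(labels)
    N = len(labels) - P
    pts = [(0.0, 0.0)]
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        pts.append((fp / N, tp / P))
    return pts

# model 1: (N, P) = (9, 5)
s1 = [0.75] * 6 + [0.5] * 5 + [0.25] * 3
y1 = [0] * 3 + [1] * 3 + [0] * 3 + [1] * 2 + [0] * 3
# model 2: (N, P) = (6, 10)
s2 = [0.9] * 8 + [0.5] * 6 + [0.1] * 2
y2 = [0] * 2 + [1] * 6 + [0] * 2 + [1] * 4 + [0] * 2

print(roc_points(s1, y1))  # (0,0), (1/3, 3/5), (2/3, 1), (1,1)
print(roc_points(s2, y2))  # same four points
```

Both calls print the same four ROC points claimed above.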
| null | CC BY-SA 4.0 | null | 2023-03-22T03:16:00.680 | 2023-03-24T21:28:40.843 | 2023-03-24T21:28:40.843 | 232706 | 232706 | null |
610263 | 1 | null | null | 0 | 59 | I have a multilevel model with one significant interaction and several covariates. I understand the results from the summary fairly well, but I'm a bit stumped by the output in the visualization. Here is the output from the model:
[](https://i.stack.imgur.com/uXh4W.png)
I used the cat_plot function in the interactions package in R to create the visualization below. The generated values are not exactly what I would expect. The DV is a continuous variable, and the two variables involved in the interaction are categorical. To plot variables like this, I believe I was taught that I can plug in values for each of the variables in question and add up the coefficients, but I get a much larger value than what is indicated in the plot. For example, if I want to calculate the value of someone who received the intervention and was in middle school, I would add up .025 + .043 -.021. This gives me .047 for when intervention =1 and grade_level=middle. The value for this calculation in the plot is just above .02. I'm obviously missing something here or very misguided. Can anyone give me some insight into how the visualization is generated from the model output? TIA
```
cat_plot(intensity_lme_math_grade_inter_no_year,
         pred = Intervention, modx = grade_level,
         geom = "line", vary.lty = TRUE,
         x.label = "Intervention", y.label = "Score",
         legend.main = "Grade Band")
```
[](https://i.stack.imgur.com/G8zG4.png)
| How are interactions calculated in a visualization using the cat_plot function from the interactions package in R? | CC BY-SA 4.0 | null | 2023-03-22T03:31:00.183 | 2023-04-07T16:16:23.290 | null | null | 368313 | [
"r",
"data-visualization",
"interaction"
] |
610265 | 1 | 610273 | null | 0 | 82 | Let $X_1,..., X_n$ be iid sample from the Poisson distribution with parameter $\lambda$. Find the UMVUE of $\lambda + \lambda^2$.
I know $T := \sum\limits_{i=1}^n X_i$ is complete and sufficient for $\lambda$. Also, $T/n$ is the MLE of $\lambda$. By the invariance principle of the MLE, $T/n + T^2 / n^2$ is the MLE for $\lambda + \lambda^2$.
I tried following the approach for computing UMVUE of $\lambda^3$ from an [earlier post](https://stats.stackexchange.com/questions/143086/finding-a-umvue-for-a-specific-function), but could not compute the expectation that gives $\lambda + \lambda^2$.
| Poisson: finding UMVUE for $\lambda + \lambda^2$ | CC BY-SA 4.0 | null | 2023-03-22T03:56:09.950 | 2023-03-22T05:12:30.410 | null | null | 334241 | [
"umvue",
"rao-blackwell",
"cramer-rao"
] |
610266 | 2 | null | 280367 | 0 | null | It is possible to solve this through Bayes Theorem or by using a Confusion Matrix. Both will arrive at the same equation.
When you say the test is 99.99% correct we will assume P(Detected | Bomb) = P(Not Detected | No Bomb) = 99.99%. If you use the confusion matrix we will have the following:
- Sensitivity = P(Detected | Bomb) = TP / (TP + FN) = 99.99%
- Specificity = P(Not Detected | No Bomb) = TN / (TN + FP) = 99.99%
Here is the solution using both Bayes Theorem and Confusion Matrix
## Using Confusion Matrix
|Detector |Bomb |No Bomb |Total |
|--------|----|-------|-----|
|Detected |True Positive (TP) |False Positive (FP) |Total Detected |
|Not Detected |False Negative (FN) |True Negative (TN) |Total Not Detected |
|Total |Total Bombs |Total No Bombs |Total People |
What we know:
- Total People = 10,000
- Total Bombs = TP + FN = 1/10,000 = 0.01%
- Total No Bombs = (TN + FP) = 9,999/10000 = 99.99%
- Total = TP + FP + FN + TN = 10,000/10,000 = 100%
- Sensitivity = TP / (TP + FN) = TP / Total Bombs = 99.99%
- Specificity = TN / (TN + FP) = TN / Total No Bombs = 99.99%
We will be able to populate all the values of the confusion matrix as follows:
- TP = Sensitivity x Total Bombs = 99.99% x 0.01% = 0.0099%
- TN = Specificity x Total No Bombs = 99.99% x 99.99% = 99.98%
- FP = (1 - Specificity) x Total No Bombs = 0.01% x 99.99% = 0.0099%
- FN = (1 - Sensitivity) x Total Bombs = 0.01% x 0.01% = 0.000001%
|Detector |Bomb |No Bomb |Total |
|--------|----|-------|-----|
|Detected |99.99% x 0.01% = 0.0099% |0.01% x 99.99% = 0.0099% |~0.02% |
|Not Detected |0.01% x 0.01% = 0.000001% |99.99% x 99.99% = 99.98% |99.98% |
|Total |0.01% |99.99% |100% |
The original question is what is the probability that when it went off, the person actually had a bomb?
This is TP / (TP + FP). Since FP = TP, this equals TP / (2 x TP) = 1/2 = 50%.
## Using Bayes Theorem
We have the following formula:
P(Bomb | Detected) = (P(Detected | Bomb) x P(Bomb)) / P(Detected)
We know the following
- P(Detected | Bomb) = 99.99%
- P(Bomb) = 0.01%
P(Detected) = P(Detected | Bomb) x P(Bomb) + P(Detected | No Bomb) x P(No Bomb)
- P(Detected | No Bomb) = 1 - P(Not Detected | No Bomb) = 1 - 99.99% = 0.01%
- P(No Bomb) = 1 - P(Bomb) = 1 - 0.01% = 99.99%
P(Detected) = 99.99% x 0.01% + 0.01% x 99.99% = 2 x (99.99% x 0.01%)
We are solving for: P(Bomb | Detected) = (P(Detected | Bomb) x P(Bomb)) / P(Detected)
Substituting all the values:
P(Bomb | Detected) = (P(Detected | Bomb) x P(Bomb)) / P(Detected)
P(Bomb | Detected) = (99.99% x 0.01%) / (2 x (99.99% x 0.01%)) = 1/2 = 50%
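As a sanity check, the same computation in a few lines of Python (variable names are mine, just for illustration):

```python
p_bomb = 1 / 10_000   # P(Bomb)
sens = 0.9999         # P(Detected | Bomb)
spec = 0.9999         # P(Not Detected | No Bomb)

# total probability of a detection
p_detected = sens * p_bomb + (1 - spec) * (1 - p_bomb)
# Bayes theorem
posterior = sens * p_bomb / p_detected
print(posterior)  # ~ 0.5
```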
Hope this helps
| null | CC BY-SA 4.0 | null | 2023-03-22T04:17:46.177 | 2023-03-22T04:17:46.177 | null | null | 324876 | null |
610267 | 1 | null | null | 0 | 43 | >
Let $\lbrace \mathbb{P}_\theta \rbrace_{\theta\in \Theta}, \Theta \subset \mathbb{R}$, be an identifiable parametric family of distributions with common support, where card$(\Theta)\geq 2$. Consider the family of estimators $\Delta = \lbrace \delta(\textbf{X}): \mathbb{E}_\theta \delta^2 <\infty, \theta \in \Theta \rbrace$ and the loss function $L(\theta,a)=(\theta-a)^2$. Prove that there does not exist an estimator $\delta(\textbf{X})$ for which $R(\theta,\delta)=0,\theta \in \Theta$.
So I suppose that there exists an estimator $\delta(\textbf{X})$ for which $R(\theta,\delta)=0,\theta \in \Theta$. Since the loss function is non-negative, it follows that $\delta(\textbf{X}) = \theta$ almost surely. I don't know if this is true, because I think that $\delta(\textbf{X})$ cannot take many values. If yes, then what can I do next to derive the contradiction? I want to show that for all $x$ in $\Omega$, $f(x;\theta_1)=f(x;\theta_2)$, so that it contradicts the definition of an identifiable parametric family, but I am not sure how.
| Prove that there does not exist an estimator for which the risk is $0$ | CC BY-SA 4.0 | null | 2023-03-22T04:23:28.193 | 2023-03-22T04:23:28.193 | null | null | 383816 | [
"mathematical-statistics",
"inference",
"loss-functions"
] |
610270 | 1 | null | null | 0 | 26 | Just wanted to confirm that my understanding of Bayes is correct:
People brought 200 chocolate chip cookies and 100 oatmeal raisin cookies. Total = 300 cookies.
You also know that 150 (out of 200) of the chocolate chip cookies have M&Ms, and
25 (out of 100) of the oatmeal raisin cookies have M&Ms.
Now if you see a cookie in front of you and it has M&M's.
Using Bayes Theorem, what is the probability of it being oatmeal raisin?
---
My thought process:
A = Has M&Ms
B = Oatmeal Raisin
P(B|A) = ( P(A|B)P(B) )/ P(A)
P(oatmeal raisin | Has M&Ms)
= P(Has M&Ms | oatmeal raisin) * P(oatmeal raisin) / P(Has M&Ms)
P(Has M&Ms) = (150+25)/300 = 175/300
P(oatmeal raisin) = 100/300 = 1/3
P(Has M&Ms | oatmeal raisin) = 25/100 = 1/4
= (1/4)*(1/3) / (175/300) = 1/7 ≈ 0.143, so about a 14.3% chance the cookie is oatmeal raisin given it has M&Ms?
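The arithmetic can be checked mechanically, e.g. in Python:

```python
p_raisin = 100 / 300          # P(oatmeal raisin)
p_mm_given_raisin = 25 / 100  # P(Has M&Ms | oatmeal raisin)
p_mm = 175 / 300              # P(Has M&Ms)

# Bayes theorem
posterior = p_mm_given_raisin * p_raisin / p_mm
print(posterior)  # 1/7 = 0.142857...
```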
| Is my Bayes Theorem answer correct? | CC BY-SA 4.0 | null | 2023-03-22T04:32:47.650 | 2023-03-22T17:38:06.767 | 2023-03-22T17:38:06.767 | 361781 | 361781 | [
"bayesian",
"conditional-probability"
] |
610271 | 1 | 610390 | null | 6 | 141 | Suppose we have two independent and identically distributed random variables $X$ and $Y$, both following the standard normal distribution. We take $1,000,000$ sample pairs and want to determine the correlation between $X$ and $Y$ in the pairs where $X + Y < 0$.
I have performed a simulation-based approach and my results suggest that the correlation between $X$ and $Y$ in these pairs is less than $0$. However, I am wondering if there is a mathematical way to show this result.
Any insights and mathematical arguments would be appreciated. Thank you!
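For reference, here is a minimal version of the kind of simulation described, in plain Python (the variable names and sample size are my own choices):

```python
import random

random.seed(0)
# i.i.d. standard normal pairs
pairs = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200_000)]
# keep only the pairs with X + Y < 0
kept = [(x, y) for x, y in pairs if x + y < 0]

# sample correlation of the kept pairs
mx = sum(x for x, _ in kept) / len(kept)
my = sum(y for _, y in kept) / len(kept)
cov = sum((x - mx) * (y - my) for x, y in kept) / len(kept)
sx = (sum((x - mx) ** 2 for x, _ in kept) / len(kept)) ** 0.5
sy = (sum((y - my) ** 2 for _, y in kept) / len(kept)) ** 0.5
corr = cov / (sx * sy)
print(corr)  # clearly negative, around -0.47
```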
| How to determine the correlation between two normal random variables conditioned on their sum being negative? | CC BY-SA 4.0 | null | 2023-03-22T04:39:36.777 | 2023-03-23T19:58:26.643 | 2023-03-23T00:41:39.643 | 145991 | 145991 | [
"correlation",
"normal-distribution"
] |
610273 | 2 | null | 610265 | 1 | null | There is no need to bring MLE into the discussion, all you need is $T \sim \text{Poisson}(n\lambda)$ hence
\begin{align}
E(T) = n\lambda, \; E(T^2) = n\lambda + n^2\lambda^2.
\end{align}
From this it is easy to see that $n^{-2}T^2 + (n^{-1} - n^{-2})T$ is an unbiased estimator of $\lambda + \lambda^2$. Now use the Lehmann-Scheffé theorem to conclude.
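As a sanity check (not part of the proof), a quick simulation of the estimator's mean in Python; the Poisson sampler is Knuth's method, written out only to keep the snippet dependency-free:

```python
import math
import random

random.seed(1)
lam, n, reps = 2.0, 10, 100_000

def poisson(lam):
    # Knuth's method; fine for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

total = 0.0
for _ in range(reps):
    T = sum(poisson(lam) for _ in range(n))
    total += T**2 / n**2 + (1 / n - 1 / n**2) * T
print(total / reps)  # close to lam + lam**2 = 6
```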
| null | CC BY-SA 4.0 | null | 2023-03-22T05:12:30.410 | 2023-03-22T05:12:30.410 | null | null | 20519 | null |
610274 | 1 | null | null | 3 | 247 | Learning Bayesian decision theory (specifically in Machine Learning) recently, couldn't figure out what do the posterior possibility $P(c|x)$ and the prior possibility $P(x|c)$ mean exactly.
Anybody knows what $x$ and $c$ represent exactly in the possibility formula?
(all I know is c stand for class and x stand for samples. But I suppose they have some different meaning in the formula, perhaps represent some events?)
| prior & posterior probability in Bayesian Decision Theory | CC BY-SA 4.0 | null | 2023-03-22T05:19:05.183 | 2023-03-22T08:47:36.520 | 2023-03-22T06:41:53.480 | 35989 | 383068 | [
"machine-learning",
"bayesian",
"naive-bayes"
] |
610275 | 1 | null | null | 0 | 37 | I have a half fraction factorial experiment for four factors, meaning I have eight experiments that I conduct. I expect that some of the responses are quadratic and not linear, thus I included centre points in my experiments, which introduces a ninth (zeroth) experiment. Indeed, quadratic responses seem possible from boxplots. Five repeats for all experiments were run, with the eight experiments in random order. The zeroth experiment, the centre points, have five repeats at the start of the runs, five repeats in the middle and five repeats at the end. This is to observe variation trends in operator skill or machine conditions during the whole experiment.
I am trying to run ANOVA in R to interpret the results. As far as I understand from a theoretical point of view, I should be able to run ANOVA on the eight experiments sorted by factor, but also on the lack of fit based on the centre points. A lack of fit would indicate that I have to do response surface modelling to find the optimum, and perhaps include a few more experiments in order to get the additional regression coefficients. I however have no idea how to implement this in R.
The textbook I use (Montgomery, Design and Analysis of Experiments) gives a rudimentary test for lack of fit, where the average of the factorial point responses is compared with the average of the centre point responses. Mine is off by almost 10%. It also gives an explanation of how the lack of fit can be calculated by hand. This explanation is attached [here](https://drive.google.com/file/d/1YeoKWfiwSCAksnAGIajAbqyQb_GSKtJN/view?usp=sharing) as a pdf.
I can run the ANOVA on the four factors with no problem.
```
summary(aov(t_σ[16:55] ~ A[16:55] + B[16:55] + C[16:55] + D[16:55], data))
```
which yields
```
Df Sum Sq Mean Sq F value Pr(>F)
A[16:55] 1 66.3 66.3 2.733 0.1072
B[16:55] 1 1.5 1.5 0.062 0.8044
C[16:55] 1 692.6 692.6 28.544 5.68e-06 ***
D[16:55] 1 89.1 89.1 3.674 0.0635 .
Residuals 35 849.2 24.3
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
indicating that only Factor C has a significant effect. However, I have no idea how to add the lack of fit as an additional row in the ANOVA table based on the centre points. Is this possible, and can anyone advise on how I should do this? If not, is there another way to get the lack of fit in a separate operation?
Attached is my raw data. 'ekspnr' indicates the experiment number, where 0 indicates the centre point experiment. The data frame has been sorted according to experiment number, so that all the centre points are from 0 to 15, and the rest are from 16 to 55. 't_σ' is the response, and the A, B, C and D refers to the factors.
```
ekspnr t_σ A B C D
0 0.0 27.951699 0.30 205.0 50.0 6.0
1 0.0 28.009995 0.30 205.0 50.0 6.0
2 0.0 27.939666 0.30 205.0 50.0 6.0
3 0.0 24.949808 0.30 205.0 50.0 6.0
4 0.0 25.536769 0.30 205.0 50.0 6.0
5 0.0 23.334088 0.30 205.0 50.0 6.0
6 0.0 24.637399 0.30 205.0 50.0 6.0
7 0.0 22.460906 0.30 205.0 50.0 6.0
8 0.0 20.429309 0.30 205.0 50.0 6.0
9 0.0 25.034200 0.30 205.0 50.0 6.0
10 0.0 21.047813 0.30 205.0 50.0 6.0
11 0.0 22.043904 0.30 205.0 50.0 6.0
12 0.0 22.177695 0.30 205.0 50.0 6.0
13 0.0 21.854369 0.30 205.0 50.0 6.0
14 0.0 19.923816 0.30 205.0 50.0 6.0
15 1.0 20.746215 0.18 190.0 10.0 2.0
16 1.0 18.999751 0.18 190.0 10.0 2.0
17 1.0 18.619490 0.18 190.0 10.0 2.0
18 1.0 18.820066 0.18 190.0 10.0 2.0
19 1.0 19.194736 0.18 190.0 10.0 2.0
20 2.0 19.088337 0.42 190.0 10.0 10.0
21 2.0 18.797644 0.42 190.0 10.0 10.0
22 2.0 18.377256 0.42 190.0 10.0 10.0
23 2.0 19.097554 0.42 190.0 10.0 10.0
24 2.0 17.815757 0.42 190.0 10.0 10.0
25 3.0 10.011159 0.18 220.0 10.0 10.0
26 3.0 13.538536 0.18 220.0 10.0 10.0
27 3.0 12.768532 0.18 220.0 10.0 10.0
28 3.0 13.711670 0.18 220.0 10.0 10.0
29 3.0 10.657853 0.18 220.0 10.0 10.0
30 4.0 22.674432 0.42 220.0 10.0 2.0
31 4.0 16.326696 0.42 220.0 10.0 2.0
32 4.0 21.479866 0.42 220.0 10.0 2.0
33 4.0 25.216749 0.42 220.0 10.0 2.0
34 4.0 21.089515 0.42 220.0 10.0 2.0
35 5.0 24.565803 0.18 190.0 90.0 10.0
36 5.0 29.021329 0.18 190.0 90.0 10.0
37 5.0 32.300469 0.18 190.0 90.0 10.0
38 5.0 32.673953 0.18 190.0 90.0 10.0
39 5.0 25.842299 0.18 190.0 90.0 10.0
40 6.0 20.402979 0.42 190.0 90.0 2.0
41 6.0 21.260363 0.42 190.0 90.0 2.0
42 6.0 21.916816 0.42 190.0 90.0 2.0
43 6.0 23.040326 0.42 190.0 90.0 2.0
44 6.0 15.786039 0.42 190.0 90.0 2.0
45 7.0 40.205180 0.18 220.0 90.0 2.0
46 7.0 31.956339 0.18 220.0 90.0 2.0
47 7.0 31.159483 0.18 220.0 90.0 2.0
48 7.0 32.386282 0.18 220.0 90.0 2.0
49 7.0 28.827811 0.18 220.0 90.0 2.0
50 8.0 23.726401 0.42 220.0 90.0 10.0
51 8.0 20.733421 0.42 220.0 90.0 10.0
52 8.0 28.205647 0.42 220.0 90.0 10.0
53 8.0 18.459101 0.42 220.0 90.0 10.0
54 8.0 21.004829 0.42 220.0 90.0 10.0
```
The only function I could find was pureErrorAnova() from the package alr3. However, when completing an example from my textbook (Montgomery, Design and Analysis of Experiments), this function does not yield the correct answers. The LoF and pure error have the wrong degrees of freedom, and their values are vastly different from the textbook. Moreover, the LoF is larger than the pure error in the table produced by the function, whereas it should be the other way around according to the textbook. The example refers to an earlier one. I attach them as pdfs [here](https://drive.google.com/file/d/1kjuo7S0v9Ul8pL940PZYuY17jnSix-mV/view?usp=sharing) and [here](https://drive.google.com/file/d/1KfEcbYkvCvAUzaXQFABUQdzdcDQFch4u/view?usp=sharing). Note how the LoF differs from simply blocking the centre points as a separate block in a normal ANOVA table.
| Is there a function in R to give the lack of fit and pure error of an ANOVA completed on a half 2^k factorial with centre points? | CC BY-SA 4.0 | null | 2023-03-22T05:29:53.980 | 2023-03-28T05:32:09.337 | 2023-03-28T05:32:09.337 | 383821 | 383821 | [
"r",
"anova",
"experiment-design",
"goodness-of-fit",
"fractional-factorial"
] |
610277 | 1 | null | null | 0 | 28 | i am plotting here the graphs of the autocorrelation plot of my data done with python prior and after making the differentiation
[](https://i.stack.imgur.com/quipK.png)
[](https://i.stack.imgur.com/F6pBz.png)
[](https://i.stack.imgur.com/VaFvD.png)
[](https://i.stack.imgur.com/QoRhV.png)
I do not understand the two trends of the plots after differencing, because apart from lag 0 nothing clear is defined. I also tried the augmented Dickey-Fuller test. Before differencing the p-value is 0.58, so non-stationary behaviour, and up to here it is OK. After differencing the p-value is of the order of 10^-12, so I should expect a stationary time series, yet the autocorrelation and partial autocorrelation remain so strange and it is not clear which lag/lags are relevant.
I also want to fit my data to an ARIMA model.
Can you help me?
| ACF and PACF graph interpretation problems | CC BY-SA 4.0 | null | 2023-03-22T06:10:07.127 | 2023-03-22T06:43:35.027 | 2023-03-22T06:43:35.027 | 53690 | 383823 | [
"arima",
"model-selection",
"acf-pacf",
"differencing"
] |
610279 | 2 | null | 299322 | 0 | null | The features vector can be combined to an image by -
- Adjusting the features shape by using tf.reshape and tf.tile
- Combining the features and image by performing concatenation, add (as described in Research document) or other merge operators
Here is a code example for creating a Custom Keras Layer that merge features and image, by using tile and concatenation -
```
class FeatureConcatLayer(tf.keras.layers.Layer):
    def build(self, input_shape):
        self.image_shape = input_shape[0][1:]
        self.num_features = input_shape[1][1]

    def call(self, inputs):
        image, features = inputs
        features = tf.reshape(features, (-1, 1, 1, self.num_features))
        features = tf.tile(features, [1, self.image_shape[0], self.image_shape[1], 1])
        return tf.concat([image, features], axis=-1)
```
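If it helps to see the tile-and-concatenate semantics outside of Keras, here is the same shape logic sketched in NumPy (shapes chosen arbitrarily for illustration):

```python
import numpy as np

batch, h, w, c, f = 2, 4, 4, 3, 5
image = np.zeros((batch, h, w, c))
features = np.arange(batch * f, dtype=float).reshape(batch, f)

# each feature vector is repeated at every spatial position,
# then concatenated onto the channel axis
tiled = np.broadcast_to(features[:, None, None, :], (batch, h, w, f))
merged = np.concatenate([image, tiled], axis=-1)
print(merged.shape)  # (2, 4, 4, 8)
```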
| null | CC BY-SA 4.0 | null | 2023-03-22T06:40:21.340 | 2023-03-22T06:40:21.340 | null | null | 383822 | null |
610280 | 2 | null | 610274 | 3 | null | $x$ and $c$ in Bayes theorem are [random variables](https://stats.stackexchange.com/questions/50/what-is-meant-by-a-random-variable/54894#54894). Any random variables. Bayes's theorem is about being able to flip sides of the conditional distribution from $P(x|c)$ to $P(c|x)$ or the other way around. They could be events, e.g. “probability that it rains ($x$) given that it’s cloudy ($c$), $P(x|c)$”, in [naive Bayes](https://stats.stackexchange.com/questions/314623/naive-bayes-likelihood) algorithm $x$ is a feature of the model and $c$ is the predicted class, in a classical Bayesian model $x$ would be your data and $c$ the parameter of the model.
| null | CC BY-SA 4.0 | null | 2023-03-22T06:40:29.980 | 2023-03-22T07:24:44.830 | 2023-03-22T07:24:44.830 | 35989 | 35989 | null |
610281 | 1 | null | null | 0 | 71 | There are many tutorials/packages in Python to detect anomalies in time-series given that the time-series is numerical.
Currently, I have a time-series that is categorical, i.e. the time-series data said that, at time XXX the event AAA occurred.
I want to detect anomalies in this data. For instance, if too many events BBB occurred in a short period of time...
Could you give me some starting point (if it is in Python, it's great).
Many thanks
| Anomaly detection in time-series with categorical data | CC BY-SA 4.0 | null | 2023-03-22T07:43:52.340 | 2023-03-22T22:01:20.860 | null | null | 91530 | [
"time-series",
"anomaly-detection"
] |
610282 | 2 | null | 610274 | 0 | null | In a very simplified way:
Posterior probability = $P(\gamma | D)$ = probability that your parameter (or vector of parameters) $\gamma$ is equal to the value you've sampled given your dataset $D$.
Prior = arbitrary guess of the value of $\gamma$ based on an expert knowledge (or ignorance for uninformative prior)
However, a question you don't ask is what the "likelihood" in the Bayesian formula means: basically, it is the reverse conditional of the posterior probability.
Likelihood = $P(D|\gamma)$ = probability that your data is observed for the given values of the parameter(s) $\gamma$
| null | CC BY-SA 4.0 | null | 2023-03-22T08:05:14.273 | 2023-03-22T08:05:14.273 | null | null | 302006 | null |
610283 | 2 | null | 610274 | 2 | null | Let us give an example that makes this as simple as possible.
Suppose your samples are all taken from a Bernoulli distribution, i.e. these are "binary samples", just $1$'s and $0$'s. Let those samples be denoted by: $x_1,x_2,...,x_n$. Here each $x_k$ is equal to either $1$ or $0$. Let $\mathbf{x}$ denote the entire vector of all those samples, so $\mathbf{x} = (x_1,x_2,...,x_n)$.
Now, since these samples come from a Bernoulli distribution, there is an unknown parameter $c$ which represents the "success rate". The number/parameter $c$ is unknown, and it is supposed to represent how often the samples display $1$. Therefore, if $c=.9$ then we expect to see a lot of $1$'s in the vector $\mathbf{x}$, and if $c=.1$ then we expect to see mostly zeros instead.
Let us say, for example, that our sample vector $\mathbf{x}$ consists of $70$ observations of "$1$" and $30$ observations of "$0$". It is reasonable to guess that $c = \frac{70}{100} = .7$. However, because of random fluctuations in the data it could happen that the true value is $c=.65$ and the data just got a bit lucky and generated a few more $1$'s.
The "posterior distribution", which you denote as $P(c|\mathbf{x})$ is supposed to quantify your uncertainty about the value of the $c$ parameter. In the example we are using, where $\mathbf{x}$ has 70 successes and 30 failures, it can be shown (perhaps, you can show this yourself!), that the posterior distribution looks like this,
[](https://i.stack.imgur.com/PEYTg.png)
From this picture you can see that the most reasonable choice for $c$ is $0.7$; it could also be $0.8$, but that is much less likely, and by the time we reach $0.9$ it becomes very unreasonable. Instead of saying "more reasonable", "less reasonable", etc., we make the language precise so there is no confusion about what we mean, and the posterior distribution is what quantifies your uncertainty about the likely values of the unknown parameter $c$.
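For the curious: assuming a uniform (flat) prior, the posterior here is a Beta(71, 31) density, which appears to be what the plot shows; it can be evaluated directly:

```python
import math

# Beta(71, 31) posterior from a uniform prior with 70 successes / 30 failures
a, b = 71, 31

def beta_pdf(c):
    # log of the Beta normalising constant, via lgamma for stability
    log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(c) + (b - 1) * math.log(1 - c) - log_B)

for c in (0.6, 0.7, 0.8, 0.9):
    print(c, beta_pdf(c))  # the density peaks at c = 0.7 and falls off quickly
```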
| null | CC BY-SA 4.0 | null | 2023-03-22T08:47:36.520 | 2023-03-22T08:47:36.520 | null | null | 68480 | null |
610284 | 2 | null | 610125 | 3 | null | Not an answer, but too long for a comment.
First of all, please specify the dependency on `{tibble}` by `library(tibble)` or `tibble::rownames_to_column("Patient")`.
Just to make things clear: according to the linked paper, "LSC17" is not a patient but a score calculated as follows (cp. Methods):
>
For each gene, the probeset with the highest average GE in the training data was selected to represent that gene. To extract a core subset of genes from among the 43 that were more highly expressed in LSC + cell fractions that best explained patient outcomes in the training cohort, we used a linear regression technique based on the LASSO algorithm as implemented in the glmnet 1.9-8 R package16,17 , while enabling leave-one-out cross-validation to fit a Cox regression model. A minimal subset of 17 genes was selected whose weighted combined GE (LSC17 score) was highly correlated to survival outcomes in the training cohort.
Assuming you have done this, and `gene1`, ... `gene18` (`gene17` in the paper) is your result, the subsequent step is described as (cp. Methods)
>
The LSC17 score is calculated for each patient as a linear combination of GE of these 17 genes weighted by regression coefficients that were estimated from the training data as follows: LSC17 score = (DNMT3B × 0.0874) + (ZBTB46 × −0.0347) + (NYNRIN × 0.00865) + (ARHGAP22 × −0.0138) + (LAPTM4B × 0.00582) + (MMRN1 × 0.0258) + (DPYSL3 × 0.0284) + (KIAA0125 × 0.0196) + (CDK6 × −0.0704) + (CPXM1 × −0.0258) + (SOCS2 × 0.0271) + (SMIM24 × −0.0226) + (EMP1 × 0.0146) + (NGFRAP1 × 0.0465) + (CD34 × 0.0338) + (AKR1C3 × −0.0402) + (GPR56 × 0.0501).
Since I am not sure what your training data is or should be, I demonstrate the next step based on the provided data. Note, you (probably) need to perform a regression on the training data as mentioned.
If `gene1`, ... `gene5` is an appropriate choice/shortcut/..., and further assuming that every row corresponds to a patient, you can compute `LSC17score` as follows
```
# data handling
df <- as.data.frame(mat2[, -1])
# lets create some toy regression coefficients
# set.seed(032223)
# beta <- round(runif(n = 5, min = -.25, max = .25), 2) / 10
beta <- c(0.009, -0.004, -0.004, -0.007, 0.009)
# linear combination
df$LSC17score <- rowSums(
sweep(x = df[, c(paste0("gene", 1:5))],
MARGIN = 2,
STATS = beta,
FUN = `*`)
)
```
Transforming continuous data to categorical data means a loss of information:
```
# c := categorical
df$LSC17c <- cut(x = df$LSC17score,
breaks = 3,
labels = c("FiveLow", "FiveStandard", "FiveHigh")
)
```
A glimpse of the result:
```
> head(df[, c("LSC17score", "LSC17c")], n = 7)
LSC17score LSC17c
1 0.07762944 FiveHigh
2 -0.53205310 FiveStandard
3 -0.39538182 FiveStandard
4 0.19058862 FiveHigh
5 0.10449223 FiveHigh
6 -0.34134344 FiveStandard
7 -0.97711100 FiveLow
```
Maybe renaming to `LSC5score` and `LSC5c` is good practice, since we only use `gene1`, ..., `gene5`.
| null | CC BY-SA 4.0 | null | 2023-03-22T09:12:33.577 | 2023-05-28T22:28:00.193 | 2023-05-28T22:28:00.193 | 333892 | 333892 | null |
610285 | 1 | null | null | 1 | 49 | >
Very deep models involve the composition of several functions or
layers. The gradient tells how to update each parameter, under the
assumption that the other layers do not change. In practice, we update
all of the layers simultaneously.
— Page 313, [Deep Learning](https://www.deeplearningbook.org/contents/optimization.html#:%7E:text=models.-,Very,simultaneously.,-When), 2016.
Do we violate this assumption in practice? If so, what are the consequences of this violation? One consequence is that we cannot guarantee that updating all parameters in a single step will move us in the direction of the steepest descent. Even if we have superb data in great amounts, the loss function is convex and the single gradient step is calculated based on all the samples. Is that correct? This is because simultaneously updating all parameters does not take into account their dependence on each other, correct?
| Neural network parameters dependency vs gradient descent | CC BY-SA 4.0 | null | 2023-03-22T09:20:56.627 | 2023-04-01T12:11:22.663 | 2023-04-01T12:11:22.663 | 347904 | 347904 | [
"machine-learning",
"neural-networks",
"optimization",
"gradient-descent",
"gradient"
] |
610286 | 1 | null | null | 2 | 32 | I built a linear mixed effect model with nlme with Body length, Habitat and Sex as fixed effects. Body length is added for body size correction while the other two both have two levels. In the random part of the model Population was used as a random effect (with 8 levels). To account for the violation of heterogeneity I used the VarIdent function. The model is:
Model<- lme(A~ Body_length+Sex X Habitat, random= ~ 1|Population, method="REML", weights = varIdent(form = ~1|PopulationSex))*
As populations are clearly linked to habitats each population can only belong to one Habitat level. As a result of the connection between Habitat and Population (I think) the DF in case of Habitat is 6 while in case of Sex it is 500. To my knowledge low DF results in less chance to detect significant result and also increases the inaccuracy of the estimation. The model was followed by a contrast analysis.
When visually inspecting the emmeans it seems like the Standard Errors estimated are very high and are almost the same for each Habitat X Sex. Also Sex and Habitat seems to have simiar effect on the response variable.
I tried to check the data by building an identical GLS model (without the random effect): Habitat also became significant, had the same DF as Sex, and the Standard Error estimates became lower and differed between groups. Therefore the random effect, due to its few levels (as suggested by some authors), seems to cause this discrepancy. At the same time, leaving the Population effect out of the model would be incorrect, as populations clearly differ from each other and individuals within a population are more alike. So my questions would be:
Is the random effect specified correctly? Is there a way to fix my model?
If not could you please recommend other type models to be used with this kind of data?
| How to deal with potentially too few levels of the random effect | CC BY-SA 4.0 | null | 2023-03-22T09:21:15.873 | 2023-03-24T16:16:14.130 | 2023-03-22T11:02:40.173 | 383826 | 383826 | [
"mixed-model",
"degrees-of-freedom",
"contrasts"
] |
610287 | 1 | null | null | 1 | 27 | I have successfully developed an image classifier using Deep Learning, in particular I have used a ResNet50V2 network with fine tuning transfer learning. I built up a database from the available imagine and then I tune the parameters. I split the database in train and test sets and using crossvalidation to investigate the hyperparameters. It works fine and I am happy. Now, my supervisor said that in production the database will be increased in time. That is, after the action of a machine, the camera will takes some photos and then they will be added in the database. My supervisor asks if the database can be retrained after one or more addition to the database. Sure, it is possible, but the hyperparameters should be redefined and the training should be done on the entire database. It is a heavy time consuming process. Am I correct? Are there smarter ways to tackle this problem?
| Continuously retraining a model on an increasing database | CC BY-SA 4.0 | null | 2023-03-22T09:35:29.310 | 2023-04-14T10:40:03.877 | null | null | 379875 | [
"neural-networks",
"computer-vision"
] |
610288 | 2 | null | 610174 | 5 | null | >
Are these ideas sensible? Or is there a standard way of calculating KL Divergence that I haven't been able to find yet?
The standard way to compute the (symmetric) Kullback-Leibler divergence is to apply the formula
$$\sum_{x \in \mathcal{X}_P} P(x) \log\left( \frac{P(x)}{Q(x)} \right) + \sum_{x \in \mathcal{X}_Q} Q(x) \log\left( \frac{Q(x)}{P(x)} \right) $$
where $P$ and $Q$ are the probabilities of the events $x$, and the sums are taken over the spaces of events with non-zero probability ($\mathcal{X}_P$ and $\mathcal{X}_Q$ respectively).
Or an equivalent for densities
$$\int_{x \in \mathcal{X}_P} f(x) \log\left( \frac{f(x)}{g(x)} \right) \, dx + \int_{x \in \mathcal{X}_Q} g(x) \log\left( \frac{g(x)}{f(x)} \right) \, dx $$
The non-standard step is how you obtain the distributions for your data.
- 1. Binning. This is the simple way to do it. It is especially known to be used in creating histograms. All the typical complications/problems with histograms also apply here. If you have very low numbers in the bins then the method does not work well. And you may even get bins with zero probability such that the divergence becomes infinite. See the below example with two samples from a standard normal distribution
- 1a. Kernel smoother. An alternative to binning and histograms is to fit some kernel smoother to the data. With the vanilla `density` smoother from R, the image above would become.
For this example the divergence is 0.1081095.
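As a concrete illustration of the binning idea (a minimal sketch in Python rather than R; the bin count and the add-one smoothing are arbitrary choices of mine, the smoothing being one simple way to avoid the infinite-divergence problem mentioned above):

```python
import math
import random

random.seed(1)
x = [random.gauss(0, 1) for _ in range(100)]
y = [random.gauss(0, 1) for _ in range(100)]

# Shared bins over the pooled range of both samples.
lo, hi, nbins = min(x + y), max(x + y), 10
width = (hi - lo) / nbins

def bin_probs(sample):
    counts = [0] * nbins
    for v in sample:
        i = min(int((v - lo) / width), nbins - 1)  # clamp the maximum into the last bin
        counts[i] += 1
    n = len(sample)
    # Add-one smoothing so that no bin has zero probability
    # (a zero bin would make the divergence infinite).
    return [(c + 1) / (n + nbins) for c in counts]

p, q = bin_probs(x), bin_probs(y)
kl_sym = sum(pi * math.log(pi / qi) + qi * math.log(qi / pi)
             for pi, qi in zip(p, q))
print(kl_sym)  # non-negative, since the smoothed bin probabilities are proper distributions
```

All the usual histogram caveats apply: the result depends on the number of bins and on the smoothing constant.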
- Using class probabilities. This can work as well. One way to do this is to use a nearest-neighbours algorithm that computes the ratio $P(x)/Q(x)$ and to sum this over all points.
For the example this could go like the following in R code
```
### generate data
set.seed(1)
x = rnorm(100)
y = rnorm(100)
z = c(x,y)

### compute 15 nearest neighbours
M = outer(z,z, "-")    # matrix with pairwise differences
M = abs(M)             # absolute distances
M2 = apply(M,1,order)  # ids of neighbours, ordered by closeness

## compute probabilities
p2 = colSums(M2[1:15,]>100)  # how many of the 15 nearest are from class y
odds1 = p2/(15-p2)           # odds

## divergence
sum(log(1/odds1[1:100]))/100 +
sum(log(odds1[101:200]))/100
```
This gives a divergence of 0.2482063
I made a quick and dirty computation of the nearest neighbours; possibly there is some ready-to-use function with an algorithm that can be applied here. The basic principle is clear.
A problem here is that some probabilities $P$ and $Q$ might become zero and the divergence becomes infinite. The kernel smoother does not have this disadvantage and is similar to a nearest neighbours classification.
- Fit a distribution. In some problems you could fit some parametric distribution.
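For instance (a Python sketch; the normal model and the sample parameters here are illustrative assumptions): fit a mean and standard deviation to each sample and use the closed-form KL divergence between two univariate Gaussians, $KL(P\|Q) = \log(\sigma_q/\sigma_p) + (\sigma_p^2 + (\mu_p-\mu_q)^2)/(2\sigma_q^2) - 1/2$.

```python
import math
import random

random.seed(2)
x = [random.gauss(0.0, 1.0) for _ in range(500)]
y = [random.gauss(0.5, 1.5) for _ in range(500)]

def fit_normal(sample):
    # Method-of-moments fit: sample mean and (unbiased) standard deviation.
    n = len(sample)
    mu = sum(sample) / n
    var = sum((v - mu) ** 2 for v in sample) / (n - 1)
    return mu, math.sqrt(var)

def kl_normal(p, q):
    # Closed-form KL(P || Q) for two univariate Gaussians, given as (mean, sd) pairs.
    (mp, sp), (mq, sq) = p, q
    return math.log(sq / sp) + (sp**2 + (mp - mq)**2) / (2 * sq**2) - 0.5

p, q = fit_normal(x), fit_normal(y)
sym_kl = kl_normal(p, q) + kl_normal(q, p)
print(sym_kl)
```

This route avoids zero-probability problems entirely, but of course it only makes sense if the parametric family is a reasonable model for both samples.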
| null | CC BY-SA 4.0 | null | 2023-03-22T09:35:50.313 | 2023-03-22T09:35:50.313 | null | null | 164061 | null |
610291 | 2 | null | 610174 | 4 | null | This sounds like an [xy problem](https://en.wikipedia.org/wiki/XY_problem): you are asking the wrong question, the fundamental issue is identifying why the model is performing worse (problem X in wikipedia), instead you are asking about developing metrics to calculate difference between two multidimensional data sets(problem Y in wikipedia).
As commenters have pointed out the difference in KL divergence is hard to interpret.
Having found out the actual question is identifying why the model is performing worse, then the direct issue with KL (apart from complexities in calculating it), is that there is no link to the dependent variable. eg if only one variable impacts the model performance, then KL divergence across all the variables will not be very discriminatory.
So what I am suggesting you do is analyse the distribution of your unthresholded model predictions - this links inputs more directly to model performance - eg what proportion of inputs were assigned an output of 30%-35% (on a 0-100% scale). These are the "same" inputs in terms of your model. So look at how this distribution has changed over time. Since this is 1-dimensional it's easier to analyse and visualise.
This will give you an idea if the input data distribution has changed over time. (and you could even use KL divergence on it if you so wished, but I suspect it won't be useful vs just a regular histogram)
An alternative explanation is not that the independent data distribution has changed, but the input-output relationship has changed.
you can analyse this using a probability calibration curve, see eg [why-model-calibration-matters-and-how.html](https://www.unofficialgoogledatascience.com/2021/04/why-model-calibration-matters-and-how.html). I am not advocating you add a calibration step to your model (as in that article) though, just create a plot showing the true probability for each binned output value (eg 30-35% etc). Compare the plot you get when the model was performing well vs when the model was performing worse. [It doesn't really matter whether the original dataset was well calibrated, just whether there is a difference to the new data set.]
In the ideal case, either the predicted value distribution changes, but calibration curve stays same or vice versa - to allow you to identify which of these is the reason. Obviously, chances are that both have changed!
For some metrics you could even analyse the relative impact of each (if you can adjust your metric to take as input the probability, the binned prediction output, and the actual probability for that bin).
| null | CC BY-SA 4.0 | null | 2023-03-22T09:54:57.187 | 2023-03-23T09:07:14.077 | 2023-03-23T09:07:14.077 | 27556 | 27556 | null |
610292 | 1 | 610306 | null | 3 | 53 | I am currently working through [Scornet2015 - Consistency of Random Forests](https://projecteuclid.org/journals/annals-of-statistics/volume-43/issue-4/Consistency-of-random-forests/10.1214/15-AOS1321.full).
I'm having trouble understanding a specific inequality that is used in the proofs without further explanation. I am assuming it is something rather general and not immediately related to the topic at hand.
As far as I can see, the inequality boils down to the following (I'll quote the full statements below).
$$
\mathbb{E}[X] \leq \xi + u \mathbb{P}[X > \xi]
$$
where $u$ is such that $X \leq u$.
Why is that? I've looked at various basic tools such as Markov or Chebyshev inequalities as well as inequalities for tail probabilities but none seems to apply here.
## First application
This considers (something like) the estimation error of a truncated estimate.
$$\begin{aligned}
& \mathbb{E}\left[\sup _{\substack{f \in \mathcal{F}_n(\Theta) \\
\|f\|_{\infty} \leq \beta_n}} \left|\frac{1}{a_n} \sum_{i=1}^{a_n}\left[f\left(\mathbf{X}_i\right)-Y_{i, L}\right]^2-\mathbb{E}\left[f(\mathbf{X})-Y_L\right]^2\right|\right] \\
& \quad \leq \xi+2\left(\beta_n+L\right)^2 \mathbb{P}\left[\sup _{\substack{f \in \mathcal{F}_n(\Theta) \\
\|f\|_{\infty} \leq \beta_n}}\left|\frac{1}{a_n} \sum_{i=1}^{a_n}\left[f\left(\mathbf{X}_i\right)-Y_{i, L}\right]^2-\mathbb{E}\left[f(\mathbf{X})-Y_L\right]^2\right|>\xi\right]
\end{aligned}
$$
Earlier, it is established that
$$\sup _{\substack{f \in \mathcal{F}_n(\Theta) \\\|f\|_{\infty} \leq \beta_n}}\left|\frac{1}{a_n} \sum_{i \in \mathcal{I}_{n, \Theta}}\left[f\left(\mathbf{X}_i\right)-Y_{i, L}\right]^2-\mathbb{E}\left[f(\mathbf{X})-Y_L\right]^2\right| \leq 2\left(\beta_n+L\right)^2$$
## Second application
This considers the variation of the estimate $m$ in cells $A_{n}(\mathbf{X}, \Theta)$ of the random forest. The variation is defined as
$\Delta(m, A)=\sup _{\mathbf{x}, \mathbf{x}^{\prime} \in A}\left|m(\mathbf{x})-m\left(\mathbf{x}^{\prime}\right)\right|$ and thus upper-bounded by the supremum norm $\|f\|_{\infty}:=\sup _{x \in[0,1]}|f(x)|$.
$$
\begin{aligned}
\mathbb{E}\left[\Delta\left(m, A_n(\mathbf{X}, \Theta)\right)\right]^2
& \leq \xi^2+4\|m\|_{\infty}^2 \mathbb{P}\left[\Delta\left(m, A_n(\mathbf{X}, \Theta)\right)>\xi\right]
\end{aligned}
$$
| Inequality relating expected value and tail probability | CC BY-SA 4.0 | null | 2023-03-22T09:57:04.910 | 2023-03-22T12:32:23.767 | 2023-03-22T11:54:24.570 | 178468 | 178468 | [
"random-forest",
"probability-inequalities"
] |
610293 | 1 | null | null | 1 | 22 | How can I deduce the posterior probability, up to a multiplicative coefficient, and how can I determine, without integral calculus, the normalization coefficient?
| Posteriori - How to determine the normalization coefficient without integrating the marginal | CC BY-SA 4.0 | null | 2023-03-22T09:57:12.673 | 2023-03-22T09:57:12.673 | null | null | 383795 | [
"probability",
"bayesian",
"likelihood",
"posterior"
] |
610294 | 1 | null | null | 0 | 28 | If we look at the [table of distributions in the exponential family](https://en.wikipedia.org/wiki/Exponential_family#Table_of_distributions), we will see some sufficient statistics have $\log(x)$, which means we have put constraints on $\mathbb{E}[\log(X)]$ when formulating these distributions as maximum entropy models.
Why is $\mathbb{E}[\log(X)]$ important? And what properties (essentially, of gamma distributions) do we expect from this constraint?
| Why do we want to constrain E[ln(x)] in some maximum entropy models? | CC BY-SA 4.0 | null | 2023-03-22T10:29:28.050 | 2023-03-22T10:29:28.050 | null | null | 4864 | [
"gamma-distribution",
"exponential-family",
"maximum-entropy"
] |
610296 | 1 | null | null | 1 | 41 | I have a dataset of 393 people and I am running binary logit regression. The goal of this regression is to examine which predictors are significant in predicting the dependent variable.
The dependent variable is 1 = not taking a ride with public transport after the price increase
0 = taking public transport even if the price increases. In the data 82 people said 1, 311 people said 0
The reason I am doing this regression is to answer whether different groups of people based on independent variables like (age, gender, income, etc..) and on different types of trips (based on distance, train vs bus, etc.) have different elasticities.
The model is not meant to predict.
However, when I test the predictive power of the model it gives me this table
[](https://i.stack.imgur.com/Y6vRI.png)
meaning a very high specificity of over 81%, but a very low sensitivity of 15.85%.
Overall, almost 80% of observations are correctly predicted.
If my goal is to explore whether the groups are different from each other, is this test even relevant? If so, how would you interpret those results?
If not, are there any other tests I should run, or is examining whether each individual explanatory variable is significant enough?
Thank you in advance!
| Binary logit regression - specificity vs sensitivity | CC BY-SA 4.0 | null | 2023-03-22T11:02:56.577 | 2023-03-22T11:02:56.577 | null | null | 383838 | [
"r",
"regression",
"logistic",
"predictive-models",
"econometrics"
] |
610297 | 1 | null | null | 1 | 20 | I have multiple independent variables and my dependent variable. These are observed at an annual basis. I want to know how much the independent ones affect the dependent. What I am wondering most about is the time. Because the effect on the dependent variable is likely to be lagged, so that the Y in say year 2010 is dependent on what happened in 2009 too ...
Is it possible to incorporate lag in Y when I have multiple independent variables ...? Is this a time series problem?? Maybe I could use time as an independent variable?
What I am not sure about is what kind of model to apply to this problem ...
Edit: the Y variable is energy production in each year, and the X variables are price of energy and different policies such as kvotas and taxes for the respective years :) sample size is 10 years
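Mechanically, incorporating the lag just means adding $y_{t-1}$ as another regressor; a minimal Python sketch with hypothetical numbers (a one-regressor OLS fit of $y_t = \beta_0 + \beta_1 y_{t-1}$; note that one observation is lost to the lag, and with ~10 annual points inference will be weak):

```python
# Hypothetical annual series (my own illustrative numbers, not real data).
y = [10.0, 11.2, 12.1, 12.8, 13.9, 14.5, 15.8, 16.2, 17.5, 18.1]

y_lag = y[:-1]   # y_{t-1}
y_cur = y[1:]    # y_t  (one observation lost to the lag)

# Ordinary least squares for a single regressor: b1 = cov(x, y) / var(x).
n = len(y_cur)
mx = sum(y_lag) / n
my = sum(y_cur) / n
b1 = (sum((a - mx) * (b - my) for a, b in zip(y_lag, y_cur))
      / sum((a - mx) ** 2 for a in y_lag))
b0 = my - b1 * mx
print(b0, b1)
```

With more regressors you would simply add the lagged column alongside the X variables in whatever OLS routine you use; the same idea carries over.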
| What model to use when multiple independent variables and annual data | CC BY-SA 4.0 | null | 2023-03-22T11:06:35.503 | 2023-03-27T18:27:06.417 | 2023-03-27T18:27:06.417 | 383188 | 383188 | [
"time-series",
"multiple-regression"
] |
610298 | 1 | null | null | 0 | 12 | Basically I have data from two inventories (which have 7-10 questions on a likert scale). They were both administered at 4 different times. I am using GALMj package in jamovi to run a mixed model because I have some other categorical factors to consider.
They cover overlapping (but not identical constructs). I am trying figure out how I can sequentially add individual combinations of questions from Inventory A, to see which attributes explain the majority of variance in Inventory B.
Is there a way to automate (i.e. sequentially add or subtract each covariate until you find the best fit?)
| How can I sequentially test 10+continuous covariates (individual questions in an inventory) in my model (repeated measures) | CC BY-SA 4.0 | null | 2023-03-22T11:09:16.030 | 2023-04-05T17:13:01.090 | null | null | 383839 | [
"regression",
"mixed-model"
] |
610299 | 2 | null | 609956 | 1 | null | Usual practice is to define `time = 0` in studies of primary cancer with this type of data as the date of diagnosis. That's typically also the date of most interest to the patient. For example, in the [Clinical Data Resource for The Cancer Genome Atlas](https://doi.org/10.1016/j.cell.2018.02.052), the Methods state:
>
OS [overall survival] is the period from the date of diagnosis until the date of death from any cause.
Although the therapy is not yet defined at that date, there is no serious problem in using the ultimate choice of primary therapy as a fixed-in-time predictor in a Cox survival model. Therapy typically begins within a few weeks of diagnosis, a period during which there are usually few deaths. In particular for a Cox model, absolute times don't matter, only their ordering in time. Unless there are many early deaths, the results will be interpretable as the post-diagnosis survival of patients who received (or at least were assigned to) each type of therapy. That interpretation then includes the delays typically involved in choosing and providing each type of therapy.
In some circumstances and depending on how data are coded, you might want to omit very early deaths when the definition of therapy received might be ambiguous. For example, a patient might have received primary surgery with a recommendation for adjuvant radiotherapy, but have died from surgery complications before the course of radiation was finished. Depending on the specific question you are asking and your knowledge of the subject matter, you might want to omit such individuals with unexpectedly early deaths from the study, while clearly explaining that choice in your report.
What's complicated in comparing responses to therapy this way is that the choice of therapy typically is a function of clinical characteristics, like tumor size and spread to lymph nodes, that themselves are associated with outcome. Those problems pose much more difficulty in interpretation than the choice of `time = 0`.
| null | CC BY-SA 4.0 | null | 2023-03-22T11:38:24.563 | 2023-03-22T11:38:24.563 | null | null | 28500 | null |
610300 | 1 | 610356 | null | 4 | 112 | Let $W$ be a random variable valued in $L^2[0,1]$ (an infinite dimensional function space). Take $W=\{W(t), t\in[0,1]\}$ on $[0,1]$.
$$W(t)=\sum_{i=1}^\infty e_i(t) N_i, \quad \forall t\in [0,1]$$
where for all $i\in\mathbb N$,
$$e_i(t)=\sqrt{2}\sin\left((i-1/2)\pi t\right),$$
$$N_i \sim N(0,\lambda_i), \quad \lambda_i=[(i-1/2)\pi]^{-2}, \{N_i\} \text{ independent}.$$
QUESTION: How can I generate realisations of this process? Each value $W(t)$ is the limit of a series. Should I truncate the indices of the summation to $\{1,\dotsc, k\}$, where $k$ is a fixed "large" integer?
This process is well known and there should be a simple way to perform simulations.
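Truncating at a fixed large $k$ is indeed the standard approach (the series given is the Karhunen-Loève expansion of standard Brownian motion on $[0,1]$). A minimal Python sketch under that assumption, where $k = 200$ and the grid of 101 time points are arbitrary choices:

```python
import math
import random

random.seed(0)

def simulate_w(k=200, npts=101):
    """Simulate one path of W on npts equally spaced points in [0, 1],
    truncating the series at k terms."""
    ts = [j / (npts - 1) for j in range(npts)]
    # N_i ~ N(0, lambda_i) with lambda_i = ((i - 1/2) * pi)^(-2),
    # i.e. standard deviation 1 / ((i - 1/2) * pi).
    coeffs = [random.gauss(0.0, 1.0 / ((i - 0.5) * math.pi))
              for i in range(1, k + 1)]
    path = [sum(c * math.sqrt(2.0) * math.sin((i - 0.5) * math.pi * t)
                for i, c in enumerate(coeffs, start=1))
            for t in ts]
    return ts, path

ts, path = simulate_w()
print(path[0], path[-1])  # path[0] is exactly 0, since every e_i(0) = 0
```

Since $\lambda_i = O(i^{-2})$, the omitted tail has total variance $\sum_{i>k}\lambda_i = O(1/k)$, so the truncation error is easy to control by increasing $k$.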
| Simulating a process | CC BY-SA 4.0 | null | 2023-03-22T11:55:38.277 | 2023-04-09T12:10:52.757 | 2023-04-09T12:10:52.757 | 180540 | 180540 | [
"simulation",
"stochastic-processes"
] |
610301 | 1 | null | null | 1 | 69 | In `?lme4::ranef`, it is stated:
>
condVar: a logical argument indicating if the conditional variance-covariance matrices of the random effects should be added as an attribute.
If condVar is TRUE, each data frame has an attribute called "postVar".
If there is a single random-effects term for a given grouping factor, this attribute is a three-dimensional array with symmetric faces; each face contains the variance-covariance matrix for a particular level of the grouping factor.
While in `?nlme::getVarCov`
>
Extract the variance-covariance matrix from a fitted model, such as a mixed-effects model.
type: For models fit by lme() the type argument specifies the type of variance-covariance matrix, either "random.effects" for the random-effects variance-covariance (the default) or "conditional" for the conditional. variance-covariance of the responses or "marginal" for the marginal variance-covariance of the responses.
Then I try
```
library("nlme")
library("lme4")
fm1 <- lme(distance ~ age, data = Orthodont, random = ~ 1 | Subject)
lfm1 <- lmer(distance ~ age + (1 | Subject), data = Orthodont)
getVarCov(fm1, individuals = "F01", type = "conditional")
Subject F01
Conditional variance covariance matrix
1 2 3 4
1 2.0495 0.0000 0.0000 0.0000
2 0.0000 2.0495 0.0000 0.0000
3 0.0000 0.0000 2.0495 0.0000
4 0.0000 0.0000 0.0000 2.0495
Standard Deviations: 1.4316 1.4316 1.4316 1.4316
# Looking at ranef(lfm1) I see subject 'F01' is the 20th
attr(ranef(lfm1)[["Subject"]], "postVar")[, , 20]
[1] 0.4596965
```
Why is there this difference? Is it possible to get the lme4 var-cov matrix from an nlme model?
| Difference of the conditional variance-covariance matrices between lme4 and nlme | CC BY-SA 4.0 | null | 2023-03-22T12:00:34.233 | 2023-03-22T20:20:47.593 | 2023-03-22T20:04:40.740 | 219012 | 212097 | [
"r",
"mixed-model",
"lme4-nlme",
"covariance-matrix"
] |
610302 | 1 | 610348 | null | 3 | 108 | I am having trouble with a step of a proof in the book Statistical Estimation Asymptotic Theory by Ibragimov and Has'minskii.
Lemma 2.1: Let $T=T(X_1, ..., X_n)$ be an arbitrary statistic with $\mathbb{E}_\theta|T|<\infty$. Then
$$
\mathbb{E}_\theta[T | X_2 - X_1, ..., X_n - X_1] = \int_{R^k}T(X_1 + \theta - u, ..., X_n+\theta-u)\frac{\Pi_1^nf(X_j-u)}{\int_{R^k}f(X_j - v)dv}du.
$$
Following is the snapshot from the book:
[](https://i.imgur.com/wN3dnnB.jpg)
[](https://i.stack.imgur.com/7LdDx.jpg)
The proof begins with: Denote by $I$ the right hand side of the equation above. $I$ is a function of $X_2-X_1$, ..., $X_n-X_1$ only. Therefore it is sufficient to prove that for any statistic $z$ of the form $z(X_2-X_1, ..., X_n-X_1)$ the equality $\mathbb{E}_\theta[zI] = \mathbb{E}[zT]$ is valid.
I cannot see the jump between the final equality, $\mathbb{E}_\theta[zI] = \mathbb{E}[zT]$, and the main statement of the lemma. How does one imply the other?
## Edit: Adding definition of $\bf W$
The other missing definition is that $\bf W$ satisfies
- ${\bf W}(u;v) = w(u-v)$
- $w(u)$ is defined and nonnegative, $w(0)=0$, and $w$ is continuous at $u=0$ but is not identically $0$
- $w(u) = w(-u)$
- the sets $\{ u:w(u) < c \}$ are convex for all $c>0$
| Derivation of the Pitman estimator | CC BY-SA 4.0 | null | 2023-03-22T12:08:05.867 | 2023-03-23T15:15:53.123 | 2023-03-22T17:50:28.487 | 383843 | 383843 | [
"probability",
"mathematical-statistics",
"inference",
"conditional-expectation"
] |
610303 | 1 | null | null | 0 | 20 | I was wondering if I'm missing something as I have never seen one factor x as part of two slopes in a (generalized linear mixed/) random effects model like this: y ~ (x | id) + (x | item). Can that be a sensible thing to do or will this model have issues or contains some illogical assumption? I'd appreciate any insight.
In case background is useful: We did an experiment and are interessted in the effect of the factor "block of trials" (x) (e.g. if repsonses stem from the 1st, 2nd, ... or last block). The effect of that factor could differ by participant (e.g. some may get better quickly others much slower or don't improve at all) and by items used (e.g. if some specific items are presented in the 2nd block, participants response may be more accurate as compared to if some other items are presented in the 2nd block, which may be more difficult (but we didn't do any pilot study on that, so we don't know about item difficulty). Items are balanced throught the experimental blocks. So the factor is between-subject, but within-item factor right? For the random slope for item, we'd expect an interaction with an experimental condition z, so the acutal model looks like that: y ~ (x | id) + (x*z | item). Can that model work and be interpretable?
| Random slopes in mixed models: 2 random slopes for 1 factor? | CC BY-SA 4.0 | null | 2023-03-22T12:17:15.127 | 2023-03-30T08:15:30.417 | 2023-03-30T08:15:30.417 | 337648 | 337648 | [
"regression",
"mixed-model",
"generalized-linear-model",
"glmm"
] |
610304 | 1 | null | null | 0 | 38 | In a 3x3 ANOVA (2 IV - each with 3 levels)
is the interaction comparing 9 means or is it comparing the 2 main effects means?
| In a 3x3 ANOVA (2 IV - each with 3 levels) interaction explained | CC BY-SA 4.0 | null | 2023-03-22T12:21:24.477 | 2023-03-22T16:28:27.200 | null | null | 381123 | [
"anova"
] |
610305 | 2 | null | 304959 | 0 | null | At least two concerns come to mind.
- If you do not model with a multinomial distribution, you lose an obvious interpretation in terms of the probability of category membership. You don't get predictions that are exactly $1$, $2$, or $3$, so what do you do with a prediction like $1.2?$ Sure, you could round to the nearest integer, but this presents a few problems. First, the rounding means that the space of predictions that get mapped to categories $1$ and $3$ are $(-\infty, 1.5)$ and $[2.5,+\infty)$, respectively, yet the space of predictions that get mapped to category $2$ is $[1.5,2.5]$. You might be willing to make some distribution assumptions to alleviate this concern, but it sure seems like there is a lot more opportunity to be in the zones for categories $1$ and $3$ than to be in the zone for category $2$. Second, there is no sense of how costly the mistakes are. If you really need to be sure that a case is in category $2$ in order to classify that way, you might not want to round to $2$ unless the prediction is in the interval $(1.95, 2.2)$. Calibrated probabilities of class membership like you would be pursuing with a multinomial regression approach give a much more natural interpretation of the predictions.
- It seems that, at low values of $X$, category $1$ is most likely; then category $2$ is most likely for medium values of $X$; finally, category $3$ is most likely for high values of $X$. (Maybe the order is reversed.) It is convenient that your categories were coded this way, but they just as easily could have been coded in a different order, which would wreck this monotonic relationship and not be detected as well by a linear model. Sure, you could use nonlinear basis functions like polynomials or splines, but imagine doing this when you have ten or a hundred categories that you have to put in the right order for each feature. That is just for one feature. If you have multiple features, you have to get the order right on all of them, and this is not a given. It might be that, on one feature, high values correspond with category $3$, medium values correspond with category $2$, and low values correspond with category $1$, yet another feature has high values that correspond with category $3$, medium values that correspond with category $1$, and low values that correspond with category $2$. No matter how you order the category labels, they will be incorrect for one of those features (and I will venture a guess that you have more than just two features).
If you can validate your ability to consistently get quality performance, perhaps it makes business sense to go with your modeling approach. However, given that multinomial modeling is not especially difficult to implement, the above caveats are worth considering.
| null | CC BY-SA 4.0 | null | 2023-03-22T12:29:29.187 | 2023-03-22T12:29:29.187 | null | null | 247274 | null |
610306 | 2 | null | 610292 | 1 | null | Let $\Omega$ be the sample space and $X: \Omega \to \mathbb{R}$ a random variable. Assume $X < u$, i.e. $X(\omega) < u$ for all $\omega \in \Omega$.
Then
$$
\begin{align}
\mathbb{E}[X] &= \sum_{\omega \in \Omega} \mathbb{P}(\omega) X(\omega) \\
&= \sum_{\substack{\omega \in \Omega \\ X(\omega) \leq \xi}} \mathbb{P}(\omega ) \underbrace{X(\omega)}_{\leq \xi} +
\sum_{\substack{\omega \in \Omega \\ X(\omega) > \xi}}
\mathbb{P}(\omega) \underbrace{X(\omega)}_{\leq u} \\
& \leq \xi ~ \underbrace{
\mathbb{P}(X \leq \xi)
}_{\leq 1} + u ~ \mathbb{P}(X > \xi) \\
& \leq \xi + u ~\mathbb{P}(X > \xi)
\end{align}
$$
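As a quick numerical sanity check of the inequality (a Python sketch with an illustrative bounded variable, $X \sim \mathrm{Uniform}[0, u]$; not part of the proof):

```python
import random

random.seed(0)

# Bounded random variable: X uniform on [0, u], so X <= u and E[X] = u / 2.
u = 2.0
xs = [random.uniform(0.0, u) for _ in range(100_000)]

def check(xi):
    """Empirically verify E[X] <= xi + u * P(X > xi)."""
    mean_x = sum(xs) / len(xs)
    tail = sum(1 for x in xs if x > xi) / len(xs)  # empirical P(X > xi)
    return mean_x <= xi + u * tail

# The bound should hold for any nonnegative threshold xi.
assert all(check(xi) for xi in [0.0, 0.5, 1.0, 1.5, 2.0])
print("inequality holds")
```

For the uniform case the right-hand side equals $\xi + u(1 - \xi/u) = u$ for every $\xi \in [0, u]$, so the bound is loose but always valid, which matches the role it plays in the proofs.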
| null | CC BY-SA 4.0 | null | 2023-03-22T12:32:23.767 | 2023-03-22T12:32:23.767 | null | null | 178468 | null |
610307 | 2 | null | 610224 | 1 | null | >
why did LASSO chose multiple features that gives higher error rather than only keeping a single or fewer features that gives a lower error?
You told the regression to minimize the LASSO loss and then evaluated it on a different criterion.
Setting aside numerical issues (LASSO lacks a closed-form solution, after all), minimizing a loss function is literal: such estimation finds the parameters that give the smallest value of that particular loss function. There is no guarantee about another loss function; that would make all loss functions equivalent. It might turn out that the solution giving a smaller value for one loss function also gives the smaller value for another loss function, but minimizing the loss function only guarantees the smallest value for that particular loss function.
| null | CC BY-SA 4.0 | null | 2023-03-22T12:38:06.830 | 2023-03-22T12:38:06.830 | null | null | 247274 | null |
610308 | 1 | null | null | 0 | 21 | I am currently designing an acceptability judgment experiment that contains one variable with two levels. The experiment will be a within-subject design. There will be multiple sets of target sentences. Each set contains two sentences (each sentence is a condition) that are minimal pairs for informants to judge. Do I need to perform a latin square to prevent the informants from seeing both of the two sentences of a minimal pair?
For example, suppose that I have 8 sets, then each informant will see only 4 sentences for each condition instead of all of the 8 sentences if the sets of sentences are counterbalanced.
The materials and the literature that I read tend to suggest that if the experiment is a 2x2 design with 2 within-subject variables and different item sets, the trials in each set of sentences need to be counterbalanced using Latin Square. For example, if there are 8 sets of sentences, and each set contains four sentences for the four conditions, after a Latin Square treatment, each participant will only see two sentences for each condition. So I am not sure whether it is necessary to do so if I only have one variable with two levels.
Thanks in advance for any comments.
| Do I need to use Latin Square when I only have one variable with two levels? | CC BY-SA 4.0 | null | 2023-03-22T12:38:06.923 | 2023-03-22T12:38:06.923 | null | null | 383846 | [
"mixed-model",
"experiment-design",
"latin-square"
] |
610309 | 2 | null | 304959 | 1 | null | It might be more or less fine, or you might get absolute garbage.
It REALLY depends on what values "1" "2" and "3" in your dependent variable represent.
An OLS/linear regression tells you the association between a one unit increase in each independent variable and an "increase" in the dependent variable.
So the question you have to ask is "does it make sense to talk about the dependent variable 'increasing' or not?"
If the DV is a measure of socio economic status and 1,2, and 3 stand for "poor" "middle class" and "rich" then it makes sense to ask what factors are associated with having a "higher" value. So running a linear regression/OLS will do an OK job of that. Technically you are violating a bunch of other OLS assumptions, and if you are worried about that you could run an ordered logit. But OLS will probably give you the correct "story" in terms of signs and significance values.
However, if the DV is a measure of employment status and 1, 2 and 3 stand for "unemployed," "employed," and "retired," then it doesn't make any sense at all to ask what factors are associated with having a "higher" value. Running an OLS model (or even an ordered logit) on this dependent variable will give you garbage. To analyze this variable you need to either use a multinomial logit model, or recode the variable in some way (e.g. to a binary variable of some sort).
The model itself is blind to the distinction between these two types of variables, so there is no "test" you could run to sort it out. You need to use your own background knowledge about what Y is measuring.
And just as an aside - the measurement level of the INDEPENDENT variables are irrelevant when it comes to what type of model to choose. No matter whether you are running a binary logit, OLS, multinomial logit, or some other kind of model you've never heard of, the rules about what sorts of independent variables you can or can not include are the same.
| null | CC BY-SA 4.0 | null | 2023-03-22T12:41:53.810 | 2023-03-22T12:41:53.810 | null | null | 291159 | null |
610310 | 1 | null | null | 0 | 9 | I'm trying a DiD by comparing adjacent districts which lie on two sides of state border, before and after a policy. The scheme is following: state A introduced a policy in year t, while state B (adjacent state) did not. Now, state A has a district p (say) which shares a border with districts q and r of state B. The idea is comparing outcomes between p and q,r before and after period t. So district p after t is treatment group, while q,r form control group.
It might also happen that two treatment districts in state A share border with same control district in state B and I'm not sure what to do in such situation
I have come across such a design earlier but currently I'm at a loss and can't think of any paper that uses this design so that I may look it up.
I want to know how to match these treatment and control districts and would appreciate any help. Please reply on this thread if you feel this question needs more clarification and I'll update with any relevant information.
Thanks!
| Difference-in-difference with border areas: how to create indicator for matched area? | CC BY-SA 4.0 | null | 2023-03-22T13:12:39.630 | 2023-03-22T13:12:39.630 | null | null | 383851 | [
"difference-in-difference",
"research-design"
] |
610311 | 1 | null | null | 0 | 20 | I am trying to build a regression model, where I want to test an assumption that $y_t = \beta_0 + \beta_1 \cdot y_{t-1} + \epsilon$, in other words that the previous observation could be used as a covariate. This post address that question, [Inclusion of lagged dependent variable in regression](https://stats.stackexchange.com/questions/52458/inclusion-of-lagged-dependent-variable-in-regression), where the suggestion is to compare the correlation of the dependent variable with a lagged version of itself.
Further, my data is not approximately normal. In the book "Time Series Analysis with applications in R (Cryer & Chan)" a suggestion is to compare the transformation of the dependant variable $y_t$ with $y_t - y_{t-1}$ and $log(y_t)-log(y_{t-1})$, and see if that yields better results. There it's also suggested that the inverse covariance matrix is more suitable to find any lagged dependency as it isolates the effects by conditioning.
But I'm struggling to put the pieces together, given that I plan on using $y_{t-1}$ as a covariate, how should I proceed? Should I generate matrices for each case ($y_t$, $y_t-y_{t-1}$ and $log(y_t) - log(y_{t-1})$)? How do I compare them if so?
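One concrete way to compare the three candidate transformations is to compute the lag-1 correlation of each (a Python sketch with a hypothetical positive, multiplicative-growth series; in practice you would plug in your own data):

```python
import math
import random

random.seed(3)

# Hypothetical positive series with multiplicative noise (a random walk in logs).
y = [100.0]
for _ in range(199):
    y.append(y[-1] * math.exp(random.gauss(0.01, 0.05)))

def lag1_corr(series):
    """Pearson correlation between the series and its lag-1 version."""
    a, b = series[:-1], series[1:]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / math.sqrt(va * vb)

diffs = [b - a for a, b in zip(y[:-1], y[1:])]
log_diffs = [math.log(b) - math.log(a) for a, b in zip(y[:-1], y[1:])]

for name, s in [("levels", y), ("differences", diffs), ("log-differences", log_diffs)]:
    print(name, round(lag1_corr(s), 3))
```

For a series like this, the levels show a very high lag-1 correlation (so $y_{t-1}$ is a strong covariate) while the log-differences are close to uncorrelated; the comparison tells you which representation leaves the least serial dependence to model.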
| Correlation-, covariance- and inverse covariance matrix of $y_t$, $y_t - y_{t-1}$ and $log(y_t) - log(y_{t-1})$ | CC BY-SA 4.0 | null | 2023-03-22T13:16:26.110 | 2023-03-22T13:16:26.110 | null | null | 320876 | [
"regression",
"machine-learning",
"autocorrelation",
"covariance-matrix",
"lags"
] |
610312 | 1 | 610321 | null | 0 | 79 | As I posted [here](https://stats.stackexchange.com/questions/610039/how-should-interactions-be-modeled-in-mixed-models) in reference of which model fits better the assumption, I reached the conclusion that this model is the better:
lme(variable ~ time:group + group + time, random = ~ 1 | subject)
omitting the random slope because I only have 2 time points, following the advice from LuckyPal.
My experiment is based on multiple individuals organized in 3 groups under 3 different treatments with measurements in pre- and post-intervention. The goal is to observe significant difference between groups in several variables.
The output of my model varies according to which intervention group (the variable grup_int) I set as the reference level:
```
genes_long_gapdh$grup_int <- relevel(genes_long_gapdh$grup_int, "X"), X = 1, 2 or 3
```
As I said, the output varies according to the reference group. The values I have are like this (I omit the df (minimum of 89), the effect (all fixed), the std. error and the statistic):
```
#Group 1 as reference level
var group term estimate p.value
ppara NA (Intercept) 3,6772 0,0000
ppara NA grup_int2 0,0723 0,7516
ppara NA grup_int3 -0,0979 0,6614
ppara NA time 0,0243 0,6893
ppara NA time:grup_int2 -0,0232 0,8004
ppara NA time:grup_int3 -0,0235 0,7901
ppard NA (Intercept) 0,8672 0,0000
ppard NA grup_int2 0,3188 0,1225
ppard NA grup_int3 -0,1764 0,3771
ppard NA time -0,0409 0,4727
ppard NA time:grup_int2 -0,1242 0,1425
ppard NA time:grup_int3 0,0305 0,7092
#Group 2 as reference level
var group term estimate p.value
ppara NA (Intercept) 3,7495 0,0000
ppara NA grup_int1 -0,0723 0,7516
ppara NA grup_int3 -0,1702 0,4707
ppara NA time 0,0012 0,9863
ppara NA time:grup_int1 0,0232 0,8004
ppara NA time:grup_int3 -0,0003 0,9971
ppard NA (Intercept) 1,1860 0,0000
ppard NA grup_int1 -0,3188 0,1225
ppard NA grup_int3 -0,4952 0,0188
ppard NA time -0,1651 0,0088
ppard NA time:grup_int1 0,1242 0,1425
ppard NA time:grup_int3 0,1546 0,0720
#Group 3 as ref level
var term estimate p.value
ppara (Intercept) 3,579282373 4,05E-43
ppara grup_int1 0,097936121 0,661358561
ppara grup_int2 0,170195822 0,470658635
ppara time 0,000834153 0,989590284
ppara time:grup_int1 0,02348789 0,790110789
ppara time:grup_int2 0,000337136 0,997128189
ppard (Intercept) 0,690819735 3,62E-06
ppard grup_int1 0,176353734 0,377121561
ppard grup_int2 0,49519246 0,018796677
ppard time -0,01045055 0,858293253
ppard time:grup_int1 -0,030457613 0,709158076
ppard time:grup_int2 -0,154610426 0,072025362
```
As you can see:
- What does the term time stand for? It changes according to the reference level, but is this term meant to measure differences from the other 2 groups, like an overall p-value? It doesn't seem like that.
- Are the interaction terms time:grup_intX one-to-one group comparisons, like a "two-sample t-test" within the linear mixed model (with the pertinent adjustments)? That is, does the p-value shown come from a direct comparison between the two groups?
- Is it possible to obtain an overall p-value comparing all three groups' interaction with time?
| Interpretation of interaction terms with time in linear mixed-effects model | CC BY-SA 4.0 | null | 2023-03-22T13:30:51.057 | 2023-03-22T17:26:13.070 | null | null | 339186 | [
"mixed-model",
"lme4-nlme",
"interaction"
] |
610314 | 1 | null | null | 0 | 21 | I'm starting to have doubts regarding my approach to bootstrapping a linear model on my rare-events data. On repeated sampling, sometimes not all factor levels of a variable are represented, and the fitted model then doesn't return coefficients for the missing levels. Because of the unequal lengths, this causes problems when trying to create a matrix/data frame from which I can compute means / confidence intervals for the obtained coefficients.
My example shows a generalised linear model, because this is what I am using in real life, but the same problem would happen with a "simple" linear regression `lm`. My real data has approximately 2000 events in ~90000 observations, and the problem occurs only with specific variables that have many factor levels. I decided to leave those problematic variables out of the equation, as they are luckily also not very relevant, but I wondered about the general question.
```
foo <- data.frame(event = c(rep(T, 98), rep(F, 2)),
v = letters[1:2])
foo$v[1] <- "e"
mod <- glm(event ~ v, data = foo, family = "binomial")
bootstrap <- function(mod, n_boot = 2) {
set.seed(1)
array_coef <- replicate(n_boot, expr = {
bdat <- foo[sample(nrow(foo), size = 40, replace = TRUE), ]
bfit <- update(mod, data = bdat) ## refit with new data
coef(bfit)
})
array_coef
}
## of course showing all factor levels
coef(mod)
#> (Intercept) vb ve
#> 3.87120101 0.02061929 13.69486748
## not always showing all coefficients, I assume because the non-represented factor levels are being dropped
bootstrap(mod, n_boot = 2)
#> [[1]]
#> (Intercept) vb ve
#> 2.656606e+01 4.415089e-06 2.046100e-22
#>
#> [[2]]
#> (Intercept) vb
#> 21.56607 -18.62163
```
Created on 2023-03-22 with [reprex v2.0.2](https://reprex.tidyverse.org)
| Is it appropriate to bootstrap a linear model on rare event data when not always all factor levels of independent variables are present? | CC BY-SA 4.0 | null | 2023-03-22T13:48:38.857 | 2023-03-22T13:48:38.857 | null | null | 221162 | [
"r",
"regression",
"generalized-linear-model"
] |
610315 | 1 | null | null | 1 | 21 | Given time series data $\{X_t\}_{t = 0}^\infty$, what do its moving average and moving standard deviation estimate when there is no assumption that $\mathbb{E}[X_t] = \text{const}, \forall t$? Suppose that $X_t\sim\mathcal{N}(\mu_t, \sigma^2)$; then its $N$-step MA is $N^{-1}\sum_{s = 0}^{N - 1}X_{t - s} \sim \mathcal{N}(N^{-1}\sum_{s = 0}^{N - 1}\mu_{t - s}, Var[N^{-1}\sum_{s = 0}^{N - 1}X_{t - s}])$. It seems that the MA cannot be directly used to estimate the mean of $X_t$ with no additional assumption on $\mu_t$. I was wondering whether there are any other statistics, like the MA, that can be used to estimate $\mu_t$ given $\mu_s, s\in\{0,\cdots,t-1\}$ when we assume that $\mu_t = f(t;\mu_{t-1}, \mu_{t-2},\cdots,\mu_0)$ ($f$ is not trivial)?
Thank you!
| A question about an estimation using moving average and moving standard deviation? | CC BY-SA 4.0 | null | 2023-03-22T13:54:37.850 | 2023-03-22T16:09:04.527 | 2023-03-22T16:09:04.527 | 53690 | 340156 | [
"time-series",
"mathematical-statistics",
"moving-average",
"moving-window"
] |
610316 | 1 | null | null | 0 | 10 | I am trying to predict a time series for each `rg` (generic data), and since I have many rg, I obviously can't create a model for each one. My idea is to use rg as a covariate, as in the following example:
```
#> # A tibble: 9 × 3
#> data rg y
#> <dbl> <chr> <dbl>
#> 1 1 a 0.711
#> 2 1 b 1.73
#> 3 2 c -2.23
#> 4 2 d 0.741
#> 5 3 a 0.759
#> 6 4 c -0.538
#> 7 4 d 1.41
#> 8 2 a -0.376
#> 9 1 d -0.689
```
The problem is that the data is repeated, because we have all the y's for each rg on each date. If I want to predict a fixed window of time (say 4 months into the future), I can treat this problem as a regression and shift y by 4 months. But what if I don't want a fixed prediction window? Suppose you want the forecast 2 months from now, or 7, and so on. How should I model this problem? Any tips?
- Tip: I know that I can pivot the table and leave each row for a date equal to a traditional time series, but the idea is to supply rg as a covariate and return $\hat{y}$
| Time Series with observations to every level of categorical variable | CC BY-SA 4.0 | null | 2023-03-22T13:59:51.540 | 2023-03-22T13:59:51.540 | null | null | 312438 | [
"machine-learning",
"time-series",
"econometrics"
] |
610317 | 1 | null | null | 0 | 15 | I conducted an experiment with insects to find out whether one odour is preferred over another. The insects had the option of moving towards one of the odours (one of the stimulus zones) or staying in the middle (neutral zone). I recorded their position (stimulus zone 1 / stimulus zone 2 / neutral zone) ten times at intervals of three minutes each. The time was chosen so that they had enough time to slowly move from one stimulus zone to the other. After repeating the experiment several times (insects were not reused), I got a table like this:
|stimulus zone 1 |stimulus zone 2 |neutral zone |
|---------------|---------------|------------|
|6 |2 |2 |
|4 |2 |4 |
|4 |4 |2 |
|2 |0 |8 |
|4 |2 |4 |
|3 |3 |4 |
|6 |2 |2 |
|4 |2 |4 |
|5 |2 |3 |
|5 |2 |3 |
I used a Wilcoxon test for paired samples in R to analyse the data, as I found it in several publications:
```
wilcox.test(stimulus_zone_1, stimulus_zone_2, paired = TRUE)
```
But I was very unsure if this was really the best option to analyse the data. Any recommendations or thoughts?
| Dual choice experiment - Is the paired sample Wilcoxon signed rank test the most appropriate? | CC BY-SA 4.0 | null | 2023-03-22T14:02:07.710 | 2023-03-22T14:02:07.710 | null | null | 383278 | [
"r",
"paired-data",
"wilcoxon-signed-rank",
"choice-modeling"
] |
610318 | 2 | null | 610300 | 2 | null | While not based on the KL theorem you state, Wiener processes can be approximated/simulated as follows, using that the Wiener process is the continuous-time limit of a (scaled) random walk
```
n <- 5000
u <- rnorm(n)
W <- 1/sqrt(n)*cumsum(u)
```
| null | CC BY-SA 4.0 | null | 2023-03-22T14:05:15.243 | 2023-03-22T14:05:15.243 | null | null | 67799 | null |
610319 | 2 | null | 6127 | 1 | null | One more example: MKinfer::perm.t.test(). It's quite fast.
I don't know which of the above it matches, because none of the previous answers set a seed, so the results are not directly comparable. I use a seed of 1000.
```
> set.seed(1000)
> MKinfer::perm.t.test(x1, y1)
Permutation Welch Two Sample t-test
data: x1 and y1
(Monte-Carlo) permutation p-value = 0.007
permutation difference of means (SE) = 28.1 (10.7)
95 percent (Monte-Carlo) permutation percentile confidence interval:
7.35 49.15
Results without permutation:
t = 3, df = 22, p-value = 0.009
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
7.67 48.63
sample estimates:
mean of x mean of y
95.1 67.0
```
and for the second data:
```
> set.seed(1000)
> MKinfer::perm.t.test(DV~IV, alternative="greater")
Permutation Welch Two Sample t-test
data: DV by IV
(Monte-Carlo) permutation p-value = 0.004
permutation difference of means (SE) = 28.1 (10.7)
95 percent (Monte-Carlo) permutation percentile confidence interval:
10.5 Inf
Results without permutation:
t = 3, df = 22, p-value = 0.005
alternative hypothesis: true difference in means is greater than 0
95 percent confidence interval:
11.2 Inf
sample estimates:
mean in group A mean in group B
95.1 67.0
```
| null | CC BY-SA 4.0 | null | 2023-03-22T14:18:41.363 | 2023-03-22T14:18:41.363 | null | null | 383859 | null |
610320 | 2 | null | 610132 | 1 | null | This question seems to indicate that the question-writer has a misconception about GLMs. In particular, GLMs deal with the conditional distribution of the outcome; however, you are being asked this question about the marginal distribution.
With that out of the way, three possibilities come to mind.
- Poisson
- Negative Binomial
- Binomial
Poisson is kind of the first thought for count data, meaning that the Poisson distribution gets on the list. However, the Poisson distribution is restrictive in that the mean and variance are equal. Negative binomial is a method to loosen that restriction. Depending on the level of sophistication of the class, I could believe either of these to be the full-credit answer: Poisson if the assignment just wants you to identify this as a count model, and negative binomial if the assignment wants you to identify this as a count model but also identify Poisson as restrictive.
However, my guess is binomial, and my justification is that there seems to be a cap on how high the count is. A binomial distribution with eight trials has its maximum value capped at eight, which is your maximum, while Poisson and negative binomial have no upper bound. If the conditional distribution were Poisson or negative binomial, I might expect to see some kind of outlier-type of point where there is a stray high value like twelve, which a binomial distribution on eight trials does not allow.
Nonetheless, maybe the upper bound of eight is just because of the particular feature values, and with different feature values, counts like twelve would be possible (which would not be true for a binomial distribution with eight trials). Just looking at the marginal distribution of $Y$, we simply cannot distinguish between what happens because of the feature values and what happens because of the conditional distribution.
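One way to see the bounded-vs-unbounded distinction concretely is a quick simulation sketch (the success probability `p` here is an arbitrary placeholder, not estimated from your data): a Binomial(8, p) draw can never exceed 8, while a Poisson draw with the same mean occasionally does.

```r
set.seed(1)
p <- 0.4

## Binomial on 8 trials: hard cap at 8, even over 100,000 draws
max(rbinom(1e5, size = 8, prob = p))

## Poisson with the same mean (8 * p): a small but nonzero fraction of
## draws lands above 8, which the binomial cannot produce
mean(rpois(1e5, lambda = 8 * p) > 8)
```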
| null | CC BY-SA 4.0 | null | 2023-03-22T14:19:11.850 | 2023-03-22T14:19:11.850 | null | null | 247274 | null |
610321 | 2 | null | 610312 | 2 | null | There are two frequent sources of confusion here related to your first question.
For one, with a multi-level categorical predictor under the default treatment/dummy coding in R, the reports of coefficients for non-reference levels are for differences from the reference category. The p-value reports are for the significance of the difference of each coefficient from 0. Thus, even without an interaction, the coefficients and their individual "significance" depends on the choice of reference.
For the other, when a predictor is involved in an interaction its individual coefficient represents its association with outcome when its interacting predictors are at reference or 0 values. So for your individual "time" coefficient (evidently modeled as linearly associated with outcome), you find what is expected: its value is its estimated association with outcome in whatever group you have chosen as the reference. Similarly, the "group" coefficients are their differences from the reference group in associations with outcome when `time = 0`. If you [re-centered your time values those "group" coefficients would change](https://stats.stackexchange.com/q/65898/28500).
With respect to your second question, the p-values for all coefficients, including for interaction terms, are based on the coefficient estimates and their variances. That's a t-test, but the coefficient estimates and variances are based on all of the data, not just a comparison among specific groups. See [this page](https://stats.stackexchange.com/q/68151/28500) for an explanation in ordinary least squares; the principle is similar in your mixed model, although there is dispute about the choice of the appropriate [number of degrees of freedom](https://stats.stackexchange.com/a/147032/28500). In particular, there is a single estimate of residual error, based on all the data, that is used to estimate the coefficient variances.
With respect to your third question, some flavor of an `anova()` function can be used to evaluate all terms involving a single predictor, or any set of predictors, or all interaction terms. For example, you can use a likelihood-ratio-based `anova()` to compare a model without the interactions of interest to one with the interactions. For any combination of coefficients, you can use a Wald ["chunk" test](https://stats.stackexchange.com/q/27429/28500) based on the variance-covariance matrix of the coefficient estimates. The `Anova()` function in the R [car package](https://cran.r-project.org/package=car) is a convenient way to perform such tests on all coefficients involving each individual predictor and the combinations of interaction terms.
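A minimal sketch of the likelihood-ratio approach, reusing the names from your question (`genes_long_gapdh`, `variable`, `grup_int`, `subject` are assumed to be your data frame and columns):

```r
library(nlme)
library(car)

## Fit with method = "ML" (not the REML default) so the likelihood-ratio
## test between different fixed-effects specifications is valid
fit_full  <- lme(variable ~ time * grup_int, random = ~ 1 | subject,
                 data = genes_long_gapdh, method = "ML")
fit_noint <- lme(variable ~ time + grup_int, random = ~ 1 | subject,
                 data = genes_long_gapdh, method = "ML")

## Single overall p-value for the time-by-group interaction (2 df)
anova(fit_noint, fit_full)

## Alternatively, Wald chi-square tests for each predictor and for the
## interaction as a whole
Anova(fit_full)
```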
| null | CC BY-SA 4.0 | null | 2023-03-22T14:22:59.050 | 2023-03-22T14:22:59.050 | null | null | 28500 | null |
610322 | 2 | null | 610263 | 0 | null | You would also have to add in the `(Intercept)` value to your calculation to get the type of estimate that you seek. The `(Intercept)` value holds the key here, as its value of 0.031 represents a situation in which `Intervention = 0` and `grade_level = "elementary"`. Yet the plot presents a value on the order of -0.025.
I suspect that the discrepancy has to do with how the plotting software is handling the data values associated with other predictors (the `perc_` coefficients), each of which is a necessarily non-negative value with a negative regression coefficient. The software is presumably using some average over the data in your model or is otherwise centering some aspect of your data. Software-specific questions are off-topic here, so you might enquire of the package author if that's not clear from the manual.
| null | CC BY-SA 4.0 | null | 2023-03-22T14:45:15.570 | 2023-03-22T14:45:15.570 | null | null | 28500 | null |
610323 | 2 | null | 610214 | 0 | null | In general, you shouldn't remove outliers unless you know that the data point is wrong or impossible or otherwise objectionable in reality.
Using methods to detect outliers is useful to find data that may require some double-checking or additional scrutiny.
However, in this case, that one data point appears to be very far outside the rest of the data set (with n > 300).
One approach is to use a robust method that won't be as affected by this one point.
In this case, your model is simple enough that I'm sure you can see what effect that one point has on the results. I mean by examining simple plots of the dependent variable vs. each independent variable.
I can't make a conclusion without seeing all the data, and understanding what it is that you are trying to model (and why !). But I think if your conclusion is that this one point should be removed so as not to bias the results (in the colloquial sense), that may be a reasonable approach. One large, expensive house in the data set may influence the results to suggest that sqft has a large effect on price, when it doesn't for the rest of the 300 + data points.
If you choose to remove this point, be sure to state this in your results and explain why. It may simply be that you remove the one observation with sqft > X because it's not representative of the rest of the population.
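As a small illustration of the robust alternative mentioned above (a sketch with made-up data; `MASS::rlm()` down-weights extreme observations rather than deleting them):

```r
library(MASS)

set.seed(1)
## 300 ordinary houses plus one very large, expensive one
sqft  <- c(rnorm(300, 1500, 300), 9000)
price <- c(100 + 0.2 * sqft[1:300] + rnorm(300, 0, 50), 5000)

coef(lm(price ~ sqft))    # ordinary least squares: pulled by the one point
coef(rlm(price ~ sqft))   # robust M-estimation: much less affected
```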
| null | CC BY-SA 4.0 | null | 2023-03-22T15:01:23.460 | 2023-03-22T15:01:23.460 | null | null | 166526 | null |
610324 | 1 | null | null | 0 | 15 | I am using a control function approach with a probit model for the selection equation and a fractional model for the outcome equation.
From the selection equation, I calculate the generalized residuals and add them as an independent variable to the outcome equation. For the generalized residuals, I use the formula from Wooldridge (2015, p. 428). This formula uses both the estimated parameters and the observed (true) values.
Now my question is: how to calculate the generalized residuals when making predictions for a new dataset, for which the true values are not observed?
Reference: Wooldridge, J. M. (2015). Control Function Methods in Applied Econometrics. The Journal of Human Resources, 50(2), 420-445, DOI: [https://doi.org/10.3368/jhr.50.2.420](https://doi.org/10.3368/jhr.50.2.420).
(I have also posted this question on [Statalist](https://www.statalist.org/forums/forum/general-stata-discussion/general/1706245-making-predictions-for-a-new-dataset-using-the-control-function-approach), but no luck so far.)
| Making predictions for a new dataset using the control function approach | CC BY-SA 4.0 | null | 2023-03-22T15:09:29.497 | 2023-03-22T15:31:36.020 | 2023-03-22T15:31:36.020 | 362671 | 301809 | [
"predictive-models",
"residuals",
"two-step-estimation"
] |
610325 | 1 | null | null | 1 | 24 | I have a distance matrix between pairs of graphs computed with graph edit distance. I also have a group or class label for each graph, and each graph is assigned a target value (a real number). I want to use knn to predict this target value as a regression problem. How can I use both the distance matrix and the class labels at the same time? Thanks!
| How to use Graph Edit Distances and the graph-level features at the same time | CC BY-SA 4.0 | null | 2023-03-22T15:16:24.630 | 2023-03-22T15:16:24.630 | null | null | 195367 | [
"machine-learning",
"k-nearest-neighbour"
] |
610326 | 2 | null | 575737 | 2 | null | How you perform and calculate such a statistic depends on what you want to learn by from it.
My belief about $R^2$ is that it is a comparison of how your model performs (in terms of square loss) vs how a naïve model performs when it just predicts the mean every time. With this in mind, there are two possibilities for calculating a subgroup $R^2$.
- Calculate the usual $R^2=1-\left(\dfrac{
\overset{N_{group}}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{N_{group}}{\underset{i=1}{\sum}}\left(
y_i-\bar y
\right)^2
}\right)
$, limited to the $N_{group}$ points in the group.
- Calculate $R^2=1-\left(\dfrac{
\overset{N_{group}}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{N_{group}}{\underset{i=1}{\sum}}\left(
y_i-\bar y_{group}
\right)^2
}\right)
$, using the mean of just that particular group.
Since you know the group membership, it seems legitimate do use the second formula, which will tell you how your model performs on that one group compared to how you would do if you predicted the mean of that group every time.
When you use the `sklearn` implementation like you do, I believe that you get this second statistic. [There is an issue about using the in-sample vs out-of-sample mean in the sklearn implementation](https://stats.stackexchange.com/q/590199/247274), but these values are (hopefully) quite close and will give similar results.
Your results tell you that, while the model does a better job of predicting (in terms of square loss) than predicting the same mean every time, for some groups, you would be better off predicting the mean of the group than you would if you used your model predictions.
I will venture a guess that, if you run a regression on just the group indicator variables (giving a model that predicts the group mean), you will have lower out-of-sample MSE than your existing model has. If you have many more instances of group $C$ and $D$ that have positive "grouped" $R^2$ than of the other groups, then this might not hold, but if the groups are roughly balanced, this is my prediction. You seem to do a better job of predicting by using the group means than by using your model predictions.
(If you take the stance that $R^2$ measures the proportion of explained variance, by limiting your analysis to just one group, you are cutting down the variance, so of course a smaller proportion of the variance is explained. There are issues about this, since such an interpretation of $R^2$ need not apply, but this might give you an intuition about why your grouped $R^2$ values are lower than your overall $R^2$ value.)
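A small sketch of the two formulas on simulated data (the vectors `y`, `y_hat`, and `group` are hypothetical stand-ins for your outcome, predictions, and group labels):

```r
set.seed(42)
group <- rep(c("A", "B"), each = 50)
y     <- rnorm(100) + ifelse(group == "A", 0, 3)
y_hat <- y + rnorm(100, 0, 0.5)   # pretend model predictions

## R^2 within a subgroup against an arbitrary baseline prediction
r2_subgroup <- function(y, y_hat, baseline) {
  1 - sum((y - y_hat)^2) / sum((y - baseline)^2)
}

for (g in unique(group)) {
  idx <- group == g
  cat(g,
      "vs overall mean:", r2_subgroup(y[idx], y_hat[idx], mean(y)),
      "| vs group mean:",  r2_subgroup(y[idx], y_hat[idx], mean(y[idx])),
      "\n")
}
```

The first column corresponds to formula 1 (overall mean as the naïve baseline) and the second to formula 2 (the group's own mean); the two can differ substantially when the group means are far apart.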
| null | CC BY-SA 4.0 | null | 2023-03-22T15:19:28.730 | 2023-03-22T15:32:59.477 | 2023-03-22T15:32:59.477 | 247274 | 247274 | null |
610327 | 1 | null | null | 1 | 15 | I have a set of aggregate grade data and want to know if it is possible to determine if there are groups of samples of similar grade fraction using R studio.
Some of the data:
[](https://i.stack.imgur.com/EgqI2.jpg)
Column A contains sample labels (A1 contains 'Sample' as a title)
Row 1 B1 to K1 contain sieve sizes in micrometers corresponding to sieve sizes 32mm to 75um
B2 to K34 contain percentage of sample passing each sieve size for each of the samples.
- Is it actually possible to see if the samples fall into similar groups using statistical methods?
- Assuming 'YES': has anyone ever actually achieved this in RStudio (or do I need to head back over to Stack Overflow and ask the lovely people there for more help)?
Edit: A taster of what the data looks like plotted on grade curve.. To my eye it looks like A and B could be similar, G appears to be different to all, the rest seem to be similar to each other... OR is it just pattern matching when there is no numerical substance!
[](https://i.stack.imgur.com/Pqu8q.jpg)
| Can comparison between curves be achieved statistically to group into similar and different sets? | CC BY-SA 4.0 | null | 2023-03-22T15:32:30.067 | 2023-03-22T18:09:24.843 | 2023-03-22T18:09:24.843 | 383861 | 383861 | [
"multiple-comparisons",
"equivalence",
"curves"
] |
610328 | 2 | null | 610304 | 0 | null | It can be useful to understand that the interaction from a 2-way ANOVA is equivalent to a comparison between two regression models. Here's an example (one factor has 3 levels, the other has 2 levels):
```
# Load in data
> install.packages("palmerpenguins")
> library(palmerpenguins)
# Run a standard 2-way anova - there's a significant interaction
> summary(aov(body_mass_g ~species*sex,data=penguins))
Df Sum Sq Mean Sq F value Pr(>F)
species 2 145190219 72595110 758.358 < 2e-16 ***
sex 1 37090262 37090262 387.460 < 2e-16 ***
species:sex 2 1676557 838278 8.757 0.000197 ***
Residuals 327 31302628 95727
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
11 observations deleted due to missingness
```
This interaction result is the same one I would get if I compared a regression with no interaction (`model1`) to a regression with an interaction (`model2`).
```
> model1 = lm(body_mass_g ~species+sex,data=penguins)
> model2 = lm(body_mass_g ~species+sex+species:sex,data=penguins)
> anova(model1,model2)
Analysis of Variance Table
Model 1: body_mass_g ~ species + sex
Model 2: body_mass_g ~ species + sex + species:sex
Res.Df RSS Df Sum of Sq F Pr(>F)
1 329 32979185
2 327 31302628 2 1676557 8.757 0.0001973 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
Looking deeper at `model2`, we see that there are 2 interactions added here:
```
> summary(model2)
Call:
lm(formula = body_mass_g ~ species + sex + species:sex, data = penguins)
Residuals:
Min 1Q Median 3Q Max
-827.21 -213.97 11.03 206.51 861.03
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3368.84 36.21 93.030 < 2e-16 ***
speciesChinstrap 158.37 64.24 2.465 0.01420 *
speciesGentoo 1310.91 54.42 24.088 < 2e-16 ***
sexmale 674.66 51.21 13.174 < 2e-16 ***
speciesChinstrap:sexmale -262.89 90.85 -2.894 0.00406 **
speciesGentoo:sexmale 130.44 76.44 1.706 0.08886 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 309.4 on 327 degrees of freedom
(11 observations deleted due to missingness)
Multiple R-squared: 0.8546, Adjusted R-squared: 0.8524
F-statistic: 384.3 on 5 and 327 DF, p-value: < 2.2e-16
```
So our anova interaction result is how much better the model fits when all the interactions are added to the linear regression.
| null | CC BY-SA 4.0 | null | 2023-03-22T15:44:57.763 | 2023-03-22T16:26:32.583 | 2023-03-22T16:26:32.583 | 288142 | 288142 | null |
610329 | 1 | null | null | 0 | 37 | I need some help with the model I created, please see the pic.
As you can see, there are quite a few mediators in the model.
[](https://i.stack.imgur.com/UDQYo.png)
Below is the indirect and total effects I'm getting. However, I'm wondering how should I report it in APA format in a research paper.
[](https://i.stack.imgur.com/gVmvS.png)
In addition, is there a way to tease the indirect effects apart to know the mediation effect one at a time?
The last question: why was my specific indirect effect 1 insignificant in terms of p-value, while the CI actually didn't include 0 (see below)? Should I consider it significant or not?
[](https://i.stack.imgur.com/iWhP1.png)
Thank you for helping out!
| SEM using Mplus: Indirect Effects Interpretation and how to report | CC BY-SA 4.0 | null | 2023-03-22T15:51:16.230 | 2023-03-22T16:07:46.457 | 2023-03-22T16:07:46.457 | 383865 | 383865 | [
"structural-equation-modeling"
] |
610330 | 1 | 610331 | null | 0 | 19 | The classifier I'm using has 3 possible label outputs - POSITIVE, NEGATIVE or UNKNOWN. For training data, the labels are only POSITIVE and NEGATIVE.
What is the best way to handle evaluating the classifier output? I want to preserve the UNKNOWN label in general since I don't want low-confidence labels, but I want to also minimize the amount of UNKNOWNs while preserving precision/recall.
| Evaluating classifier with 2 labels and 'unknown' label | CC BY-SA 4.0 | null | 2023-03-22T15:53:55.783 | 2023-03-22T15:59:28.550 | null | null | 154998 | [
"classification",
"multilabel",
"labeling"
] |
610331 | 2 | null | 610330 | 1 | null | Use a probabilistic classifier. Anything that gets a low predicted probability of belonging to the target class gets labeled NEGATIVE. Anything with a high predicted probability gets labeled POSITIVE. Anything in between is UNKNOWN. Adjust the two thresholds involved as necessary to optimize your KPIs.
Note that Precision and Recall suffer from the exact same issues as Accuracy: [Why is accuracy not the best measure for assessing classification models?](https://stats.stackexchange.com/q/312780/1352)
You may want to take a look at [this answer about classification thresholds](https://stats.stackexchange.com/a/312124/1352).
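A minimal sketch of the two-threshold idea (the cutoffs 0.3 and 0.7 are arbitrary placeholders to be tuned against your KPIs, not recommended values):

```r
## Map predicted probabilities to the three labels
classify <- function(prob, lo = 0.3, hi = 0.7) {
  ifelse(prob >= hi, "POSITIVE",
         ifelse(prob <= lo, "NEGATIVE", "UNKNOWN"))
}

classify(c(0.05, 0.5, 0.95))
#> "NEGATIVE" "UNKNOWN" "POSITIVE"
```

Sweeping `lo` and `hi` over a validation set lets you trade off the UNKNOWN rate against the quality of the POSITIVE/NEGATIVE labels.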
| null | CC BY-SA 4.0 | null | 2023-03-22T15:59:28.550 | 2023-03-22T15:59:28.550 | null | null | 1352 | null |
610332 | 2 | null | 78354 | 1 | null | I wanted to add some more basic information to the previous (great) responses, and clarify a little (also for myself) how contrast coding works in R, and why we need to calculate the inverse of the contrast coding matrix to understand which comparisons are performed.
I'll start with the description of the linear model and contrasts in terms of matrix algebra, and then go through an example in R.
The cell means model for ANOVA is:
\begin{equation}
y = X\mu + \epsilon = X\begin{pmatrix} \mu1 \\\mu2 \\\mu3 \\\mu4 \end{pmatrix} + \epsilon
\end{equation}
With X as the design matrix and u as the vector of means. An example is this, where we have 4 groups coded in each column:
\begin{equation}
X=\begin{pmatrix}
1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 \\
\end{pmatrix}
\end{equation}
In this case, we can estimate the means by the least square method, using the equations:
\begin{equation}
\hat{\mu} =(X^{\prime }X)^{-1}\ X^{\prime }y\\
\end{equation}
This is all good, but let's imagine we want specific comparisons rather than the means, like differences in means compared to a reference group.
In the case of 4 groups, we could express this as a matrix C of comparisons, multiplied by the vector of means:
\begin{equation}
C\mu = \begin{pmatrix}
\phantom{..} 1 & 0 & 0 &0 \\
-1 & 1 & 0 & 0\\
-1 & 0 & 1 & 0\\
-1 & 0 & 0 & 1\\
\end{pmatrix}\
\begin{pmatrix}\mu1 \\\mu2 \\\mu3 \\\mu4 \end{pmatrix}
= \begin{pmatrix} \mu1 \\\mu2-\mu1 \\\mu3-\mu1 \\\mu4-\mu1 \end{pmatrix}
\end{equation}
The first group serves as reference, and we calculate the deviations from it. The matrix C serves to describe the comparisons, it is the contrast matrix.
Technically here these are not contrasts, because the sum in each row should be zero by definition, but that will serve our purpose, and this is the matrix referred to in the contr.treatment() function in R (its inverse, see below).
The matrix C defines the contrasts.
We want to evaluate contrasts from the data, in the context of the same model.
We note that:
\begin{equation}
y \ =\ X\mu \ +\ \epsilon \ =\ XI\mu \ +\ \epsilon \ =\ X \ (C^{-1}C)\ \ \mu \ +\ \epsilon = \ (X C^{-1}) \ (C \mu) \ + \epsilon
\end{equation}
Therefore we can use the first term in parentheses to evaluate the second term (our comparisons), using the least squares method, just as we did for the original equation above.
This is why we use the inverse of the contrast matrix C, and it needs to be square and full rank in this case.
We use the least square method to evaluate the contrasts, with the same equation as above, using the modified design matrix:
\begin{equation}
(X C^{-1})
\end{equation}
And we evaluate:
\begin{equation}
C\mu
\end{equation}
using the method of least squares.
The coefficients for this model can be evaluated as before using least squares, replacing the original design matrix by the new one.
Or naming $X_{1} = (X C^{-1})$ the modified design matrix:
\begin{equation}
\widehat{C\mu} = (X_{1}^{\prime}X_{1})^{-1}X_{1}^{\prime}y = C\hat{\mu} =
\begin{pmatrix} \hat{\mu}_1 \\ \hat{\mu}_2-\hat{\mu}_1 \\ \hat{\mu}_3-\hat{\mu}_1 \\ \hat{\mu}_4-\hat{\mu}_1 \end{pmatrix}
\end{equation}
Using the modified design matrix (built with the inverse of the contrast matrix) and the least squares method, we evaluate the desired contrasts.
Of course, to get the original contrast matrix, we need to invert the contrast coding matrix used in R.
Let's try and make it work on an example in R:
```
x <- rnorm(20,7,2) + 7
y <- rnorm(20,7,2)
z <- rnorm(20,7,2) + 15
t <- rnorm(20,7,2) + 10
df <- data.frame(Score=c(x,y,z,t), Group = c(rep("A",20),rep("B",20),rep("C",20),rep("D",20)))
df$Group <- as.factor(df$Group)
head(df)
Score Group
1 12.83886 A
2 11.49714 A
3 16.27147 A
4 11.84989 A
5 16.00455 A
6 13.78611 A
```
We have four teams A, B, C, D and the scores of each individual.
Let's make the design matrix X for the cell means model:
```
X <- model.matrix(~Group + 0, data= df)
colnames(X) <- c("A", "B", "C", "D")
head(X)
A B C D
[1,] 1 0 0 0
[2,] 1 0 0 0
[3,] 1 0 0 0
[4,] 1 0 0 0
[5,] 1 0 0 0
[6,] 1 0 0 0
```
We can find the means of each group by the least squares equation
\begin{equation}
\hat{\mu} =(X^{\prime }X)^{-1}\ X^{\prime }y\\
\end{equation}
in R:
```
solve( t(X) %*% X) %*% t(X) %*% df$Score
[,1]
A 14.189628
B 7.021692
C 21.668745
D 17.595326
with(df, tapply(X= Score, FUN = mean, INDEX = Group))
A B C D
14.189628 7.021692 21.668745 17.595326
```
But we want comparisons of means to the first group (treatment contrasts). We use the matrix C of contrasts defined earlier.
Based on what was said before, what we really want is the inverse of C, to evaluate the contrasts.
R has a built-in function for this, called contr.treatment(), where we specify the number of factor levels.
We build the inverse of C, the contrast coding matrix, this way:
```
cbind(1, contr.treatment(4) )
2 3 4
1 1 0 0 0
2 1 1 0 0
3 1 0 1 0
4 1 0 0 1
```
If we invert this matrix, we get C, the comparisons we want:
```
solve(cbind(1, contr.treatment(4)))
1 0 0 0
-1 1 0 0
-1 0 1 0
-1 0 0 1
```
Now we construct the modified design matrix for the model:
```
X1 <- X %*% cbind(1, contr.treatment(4) )
colnames(X1) <- unique(levels(df$Group))
```
And we solve for the contrasts, either by plugging the modified design matrix into the least squares equation, or using the lm() function:
```
# least square equation
solve(t(X1) %*% X1) %*% t(X1) %*% df$Score
[,1]
A 14.189628
B -7.167936
C 7.479117
D 3.405698
# lm with modified design matrix
summary( lm(formula = Score ~ 0 + X1 , data = df) )
Call:
lm(formula = Score ~ 0 + X1, data = df)
Residuals:
Min 1Q Median 3Q Max
-3.5834 -1.2433 -0.1077 1.3763 4.5317
Coefficients:
Estimate Std. Error t value Pr(>|t|)
X1A 14.1896 0.3851 36.845 < 2e-16 ***
X1B -7.1679 0.5446 -13.161 < 2e-16 ***
X1C 7.4791 0.5446 13.732 < 2e-16 ***
X1D 3.4057 0.5446 6.253 2.16e-08 ***
# lm with built-in treatment contrasts
summary( lm(formula = Score ~ Group , data = df, contrasts = list(Group = "contr.treatment")) )
Call:
lm(formula = Score ~ Group, data = df, contrasts = list(Group = "contr.treatment"))
Residuals:
Min 1Q Median 3Q Max
-3.5834 -1.2433 -0.1077 1.3763 4.5317
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 14.1896 0.3851 36.845 < 2e-16 ***
GroupB -7.1679 0.5446 -13.161 < 2e-16 ***
GroupC 7.4791 0.5446 13.732 < 2e-16 ***
GroupD 3.4057 0.5446 6.253 2.16e-08 ***
```
We get the mean of the first group and the deviations for the others, as defined in the contrast matrix C.
We can define any type of contrast in this way, either using the built-in functions contr.treatment(), contr.sum() etc., or by specifying which comparisons we want. For its contrasts argument, lm() expects the inverse of C without the intercept column, solve(C)[,-1]; it adds the intercept column back to generate $C^{-1}$ and uses that to build the modified design matrix.
There are many refinements on this scheme (orthogonal contrasts, more complex contrasts, not full rank design matrix etc), but this is the gist of it (cf also here for reference: [https://cran.r-project.org/web/packages/codingMatrices/vignettes/codingMatrices.pdf](https://cran.r-project.org/web/packages/codingMatrices/vignettes/codingMatrices.pdf)).
| null | CC BY-SA 4.0 | null | 2023-03-22T16:00:24.290 | 2023-04-08T21:48:43.623 | 2023-04-08T21:48:43.623 | 383873 | 383867 | null |
610333 | 1 | null | null | 2 | 22 | In the sciences very often we are testing for the presence of a predicted relationship, for example, that two variables are positively correlated ($\beta > 0$). Under traditional frequentist testing, one might set the null hypothesis to be $H_0: \beta = 0$, with a compound alternative hypothesis of $H_{1,a}: \beta > 0$ and $H_{1,b}: \beta < 0$.
If one finds strong support for $H_{1,b}$ (that the slope is negative) is it correct to say that this is "evidence against $H_{1,a}$"? Typically we only state that we find evidence against the null hypothesis, but in the case of a directional test it seems intuitive that if you find strong evidence for a negative slope, then this should also count as strong evidence against a positive slope, not simply against the null of $H_0: \beta = 0$.
So is it correct to use the term "evidence against" one of the other compound alternative hypotheses? Otherwise it seems to imply that no one can ever find evidence against a presupposed hypothesis, despite evidence to the contrary. Or is it that my null and alternative hypotheses are not formed correctly?
| When to use the term "evidence against" when testing a directional hypotheses | CC BY-SA 4.0 | null | 2023-03-22T16:20:39.263 | 2023-03-22T16:51:14.833 | 2023-03-22T16:51:14.833 | 35989 | 46382 | [
"hypothesis-testing",
"inference"
] |
610334 | 2 | null | 610304 | 0 | null | Your situation is described by a response variable $Y$ regressed on two categorical variables, $X, W$ each having 3 levels. In a $3 \times 3$ model without interaction, the linear model is as follows:
$$ E_1[Y|X,W] = \beta_0 + \beta_1 I(X=2) + \beta_2 I(X=3) + \beta_3 I(W=2) + \beta_4 I(W=3)$$
This is a 5-parameter model and it is not saturated: there are 9 distinct possible mean responses, but only 5 parameters. The model without interaction predicts the cell means as follows:
\begin{array}{ccc}
X & W & E_1[Y] \\ \hline
1 & 1 & \beta_0 \\
1 & 2 & \beta_0 + \beta_3\\
1 & 3 & \beta_0 + \beta_4\\
2 & 1 & \beta_0 +\beta_1\\
2 & 2 & \beta_0 +\beta_1+ \beta_3\\
2 & 3 & \beta_0 +\beta_1+ \beta_4\\
3 & 1 & \beta_0 +\beta_2\\
3 & 2 & \beta_0 +\beta_2+ \beta_3\\
3 & 3 & \beta_0 +\beta_2+ \beta_4\\
\end{array}
Because the number of parameters is smaller than the number of distinct means we know that the model is not saturated, and thus some constraint is imposed on this model. If you tell your software to "add an interaction" you have a new model:
$$ \begin{array}{rl}E_2[Y|X,W] &= \beta_0 + \beta_1 I(X=2) + \beta_2 I(X=3) + \beta_3 I(W=2) + \beta_4 I(W=3) + \\
& \gamma_1 I(X=2, W=2) +
\gamma_2 I(X=2, W=3) + \\
& \gamma_3 I(X=3, W=2) +
\gamma_4 I(X=3, W=3)\\
\end{array}
$$
This nine parameter model is saturated and each predicted $\hat{Y}$ corresponds to the group mean $Y$. That is:
\begin{array}{ccc}
X & W & E_2[Y] \\ \hline
1 & 1 & \beta_0 \\
1 & 2 & \beta_0 + \beta_3\\
1 & 3 & \beta_0 + \beta_4\\
2 & 1 & \beta_0 +\beta_1\\
2 & 2 & \beta_0 +\beta_1+ \beta_3 +\gamma_1\\
2 & 3 & \beta_0 +\beta_1+ \beta_4+\gamma_2\\
3 & 1 & \beta_0 +\beta_2\\
3 & 2 & \beta_0 +\beta_2+ \beta_3 + \gamma_3\\
3 & 3 & \beta_0 +\beta_2+ \beta_4 + \gamma_4\\
\end{array}
When you get the regression output you typically see a hypothesis test for each interaction term, compared to the 0.05 level by default. Each $\gamma_i$ term compares 4 means. For instance, the null hypothesis $\gamma_1 = 0$ states that the 2-vs-1 effects of X and W are additive, that is $\mu_{X=2, W=2} - \mu_{X=1, W=1} = \beta_1 + \beta_3$, or equivalently $\mu_{X=2, W=2} - \mu_{X=2, W=1} - \mu_{X=1, W=2} + \mu_{X=1, W=1} = 0$; $\gamma_1$ is statistically significant when the data contradict this additivity.
However, the global test of significance for model 1 vs model 2 is a 4 degree of freedom test that compares all 9 mean levels.
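As a quick numerical illustration (a Python/NumPy sketch with simulated data; the effect sizes are made up), in the saturated model the fitted $\gamma_1$ is exactly the double difference of the four cell means:

```
import numpy as np

rng = np.random.default_rng(1)

# 3 replicates in each of the 9 (X, W) cells
X_lvl = np.repeat([1, 2, 3], 9)
W_lvl = np.tile(np.repeat([1, 2, 3], 3), 3)
y = rng.normal(5 + X_lvl + 2 * W_lvl + ((X_lvl == 2) & (W_lvl == 2)), 1.0)

# saturated dummy-coded design: 9 columns, 9 parameters
D = np.column_stack([
    np.ones_like(y),
    X_lvl == 2, X_lvl == 3, W_lvl == 2, W_lvl == 3,
    (X_lvl == 2) & (W_lvl == 2), (X_lvl == 2) & (W_lvl == 3),
    (X_lvl == 3) & (W_lvl == 2), (X_lvl == 3) & (W_lvl == 3),
]).astype(float)
beta = np.linalg.lstsq(D, y, rcond=None)[0]

def cell_mean(x, w):
    return y[(X_lvl == x) & (W_lvl == w)].mean()

# gamma_1 (coefficient 5) equals the double difference of cell means
gamma1 = beta[5]
double_diff = (cell_mean(2, 2) - cell_mean(2, 1)
               - cell_mean(1, 2) + cell_mean(1, 1))
```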
| null | CC BY-SA 4.0 | null | 2023-03-22T16:28:27.200 | 2023-03-22T16:28:27.200 | null | null | 8013 | null |
610335 | 2 | null | 190843 | 0 | null | What about when the dependent variable Y is a discrete response rather than a continuous one, say for a classification model where Y is either 0 or 1? Can we still use Sobol's method to gain insights into model sensitivities?
| null | CC BY-SA 4.0 | null | 2023-03-22T16:30:02.113 | 2023-03-22T16:30:02.113 | null | null | 383869 | null |
610336 | 1 | null | null | 0 | 52 | I have the results of a dual choice test to evaluate oviposition preference. In the experimental arena, the same insect was offered two plants (A and B) simultaneously and the number of eggs laid on each plant was counted. So the assumption of independence would not be fulfilled, and it seems more correct to me to use a binomial distribution than a Poisson distribution. The problem appears when I want to use a binomial distribution, because it would have no explanatory variable. The variable would be the proportion of eggs laid on one of the plants (number of eggs laid on A / total eggs laid on A + B) against... nothing! Because the explanatory variable (type of plant: A or B) is contained within the response variable. These are two sides of the same coin.
Would it be correct to write the binomial model like this? or is it better to use a chi-square distribution? I have ten replicates.
```
m <- glm(eggsproportion ~ 1, family = binomial, weights = total, data = eggs)
```
Thanks!
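To make concrete what the intercept-only model estimates: it boils down to one overall proportion and its logit. A rough sketch of that arithmetic (in Python, with made-up counts; note this pools the arenas and ignores between-arena overdispersion, which a quasi-binomial or mixed model would address):

```
import math

# hypothetical replicate counts: eggs laid on plant A and plant B in 10 arenas
eggs_a = [12, 9, 15, 7, 11, 14, 8, 10, 13, 9]
eggs_b = [5, 8, 6, 9, 4, 7, 10, 6, 5, 8]

a = sum(eggs_a)
n = a + sum(eggs_b)
p_hat = a / n                        # pooled proportion of eggs laid on A

# the intercept of the intercept-only binomial GLM is logit(p_hat)
beta0 = math.log(p_hat / (1 - p_hat))
se = math.sqrt(1 / a + 1 / (n - a))  # Wald standard error on the logit scale
ci = (beta0 - 1.96 * se, beta0 + 1.96 * se)  # excludes 0 iff p != 0.5 (approx.)
```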
| Can a binomial distribution be made without an explanatory variable? | CC BY-SA 4.0 | null | 2023-03-22T16:55:00.837 | 2023-03-22T22:35:58.967 | 2023-03-22T22:04:01.493 | 383871 | 383871 | [
"r",
"generalized-linear-model",
"binomial-distribution"
] |
610337 | 1 | null | null | 1 | 32 | I'm doing a meta-analysis on the mortality of heart failure patients, and I'm using hazard ratios reported in the studies for the meta-analysis in the REVMAN software.
All of the hazard ratios and confidence intervals of the studies were entered without a problem, except for one study for which the program can't accept the hazard ratio and its confidence interval, as in the attached photo. The reported value in the paper is hazard ratio 0.31, 95% CI 0.005-.74, p value = 0.0016.
[](https://i.stack.imgur.com/T20Op.png)
| Meta analysis of generic inverse variance | CC BY-SA 4.0 | null | 2023-03-22T16:55:45.433 | 2023-03-29T01:16:41.380 | 2023-03-28T22:30:42.833 | 11887 | 383870 | [
"confidence-interval",
"meta-analysis",
"mortality"
] |
610338 | 2 | null | 609960 | 1 | null | Two points, which are sort of tangential.
- Why are you using 5.8? Do you have a strong reason for that value, or could you let the slope be non-linear by freeing that estimate?
- Regressing slope on intercept (or intercept on intercept, or slope on slope) is a bit weird. Intercept is indicated by three time points - including the final one. Slope is indicated by three time points, including the first one. When you regress slope on intercept you are saying that time 3 (intercept) has a predictive effect on time 1 (slope), but time 3 happened after time 1, so this requires an effect which goes backwards in time. For this reason, it is much more common to correlate slopes and intercepts.
If you use the second approach,
```
scyn ~ icms + icyn
scms ~ icyn + icms
```
Your structural model is saturated anyway.
| null | CC BY-SA 4.0 | null | 2023-03-22T16:59:32.140 | 2023-03-22T16:59:32.140 | null | null | 17072 | null |
610339 | 1 | 610481 | null | 3 | 81 | I'm building a predictive model with potentially multiple predictors. To that end, I try different, nested models, each with one more predictor than the previous one and compare their AICs. The AIC falls with each new predictor, but very slowly after the second one. Since the AIC is itself a random variable, I worry that a formally better model, where the AIC is lower by less than 0.5% than the previous one, is not truly better, but just a random effect.
So I thought I'd compare the models by bootstrapping. There are at least two ways I can think of:
- For each set of predictors, generate 1000 (or whatever) different bootstrap datasets, fit a model on each dataset and record its AIC. Plot the distribution of AICs over different set of predictors ('Full model' corresponds to 'AWFST' in the boxplot):
[](https://i.stack.imgur.com/ZXUHu.png)
[](https://i.stack.imgur.com/cjuBd.png)
Or:
- For each set of predictors, train the model on the full dataset. Generate 1000 different bootstrap datasets, use each model to make predictions on each dataset and record its log-loss. Plot the distribution of log-losses over different set of predictors:
[](https://i.stack.imgur.com/hCiva.png)
[](https://i.stack.imgur.com/gARqD.png)
For better comparison, the same random seed was used in both approaches. As you can see, the results are quite similar, but not quite identical. Does any of the approaches make sense and, if yes, is one 'better' than the other? If not, where am I making a mistake?
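To make approach 1 concrete, here is a minimal sketch (Python/NumPy, Gaussian linear models and simulated data standing in for the actual dataset and model): refit each nested candidate on every bootstrap resample and record its AIC.

```
import numpy as np

rng = np.random.default_rng(2)
n = 200
X = rng.normal(size=(n, 3))
# third column is pure noise, so the 2-predictor model is "true"
y = 1.0 + 2.0 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=n)

def aic(Xp, yv):
    """Gaussian AIC up to an additive constant: n*log(RSS/n) + 2*k."""
    D = np.column_stack([np.ones(len(yv)), Xp])
    beta, *_ = np.linalg.lstsq(D, yv, rcond=None)
    rss = np.sum((yv - D @ beta) ** 2)
    return len(yv) * np.log(rss / len(yv)) + 2 * (D.shape[1] + 1)

# approach 1: refit all candidate models on every bootstrap resample
boot_aics = {p: [] for p in (1, 2, 3)}
for _ in range(300):
    idx = rng.integers(0, n, n)               # bootstrap resample of rows
    for p in (1, 2, 3):
        boot_aics[p].append(aic(X[idx, :p], y[idx]))

medians = {p: np.median(v) for p, v in boot_aics.items()}
```

The distribution of each model's bootstrap AICs can then be boxplotted exactly as above; overlapping boxes for the 2- and 3-predictor models would suggest the extra predictor's improvement is within resampling noise.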
| Selecting the model by bootstrapping: AIC vs. log-loss? | CC BY-SA 4.0 | null | 2023-03-22T17:02:32.387 | 2023-04-14T08:18:54.973 | 2023-03-22T17:39:32.167 | 169343 | 169343 | [
"model-selection",
"bootstrap",
"aic",
"log-loss"
] |
610341 | 1 | null | null | 4 | 185 | I want to generate a standard normal distributed time-series. In addition the ACF of my timeseries should match a desired ACF. I have given lags 1 to 30 with the corresponding ACF-values. For further processing, the CDF of the timeseries should be the standard normal CDF.
When I try filtering a standard normal distributed timeseries with my ACF, the result does not have the desired attributes. Neither is the ACF like the desired ACF, nor is the CDF unchanged.
I tried the approach from jblood94 from [this](https://stats.stackexchange.com/questions/176722/creating-random-variable-with-certain-auto-correlation-in-r/606314#606314) and [this](https://stats.stackexchange.com/questions/606259/how-to-generate-uniform-distributed-samples-with-given-auto-correlation-function) thread. But this approach only works for very specific ACF values.
When I insert my desired ACF, the approach does not work.
Here is my desired ACF:
[](https://i.stack.imgur.com/Ye3tU.jpg)
The model for my ACF:
```
lags = 0:30
a = 0.9999
b = -0.07197
acf = a*exp(b.*lags)
```
I assume that the problem is the high correlation at the beginning.
The timeseries needs to be standard normal distributed, for the next steps in my program.
The following Matlab code implements the approach. But as you can see, it only works if we set a minimum of 45 lags. Why is this so?
In addition, I have to adjust the initial STD of my samples. How can I calculate the initial STD so that it is 1 after the filtering?
```
%% init
clear;
close all;
clc;
set(groot, 'DefaultAxesLineWidth', 2);
set(groot, 'DefaultLineLineWidth', 2);
set(groot, 'DefaultAxesFontSize', 14);
%% define desired auto-correlation function
lagsDesired = 0:45; % if lag is <45 the approach fails
a = 0.999;
b = -0.07197;
acfDesired = a * exp(b .* lagsDesired);
%% generate normal distributed samples
numSamples = 1e6;
% sigma increases during filtering (How can i calculate the factor?)
sigmaFactor = 0.35;
x = sigmaFactor * randn(numSamples, 1);
%% predefine indices
m = length(acfDesired);
i1 = sequence((m-1):-1:1);
i2 = sequence((m-1):-1:1, 'from', 2:m);
i3 = cumsum((m-1):-1:1);
%% initial values for adjustment
w = acfDesired;
temp = cumsum(w(i1) .* w(i2));
a = [1, diff([0 temp(i3)])/sum(w.^2)];
%% iterative adjustment of the filter weights
while (max(abs(acfDesired - a)) >= 2e-3) %sqrt(eps))
max(abs(acfDesired - a))
temp = cumsum(w(i1) .* w(i2));
a = [1, diff([0 temp(i3)])/sum(w.^2)];
w = w .* (acfDesired./a);
end
%% filter normal samples with weights
y = filter(w, 1, x);
%% transform normal to uniform samples via standard normal cdf
uniRandCorr = normcdf(y);
%% plot
figure;
[Fy, xy] = ecdf(y);
[Fn, xn] = ecdf(randn(numSamples, 1));
plot(xy, Fy);
hold on;
plot(xn, Fn);
l = legend({'ECDF of samples' , 'Desired ECDF'}, 'Location', 'best');
title(l, 'Legend');
title('CDF');
xlabel('Value');
ylabel('CDF');
grid on;
figure;
[acf, lags] = autocorr(uniRandCorr, 'NumLags', 60);
plot(lags, acf);
hold on;
plot(lagsDesired, acfDesired);
l = legend({'ACF of samples' , 'Desired ACF'}, 'Location', 'best');
title(l, 'Legend');
title('ACF');
xlabel('Lag');
ylabel('ACF');
grid on;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [seq] = sequence(nvec, varargin)
% [seq] = sequence(nvec, varargin)
%
% DESCRIPTION:
% R-Functions translated to MATLAB
% Creates a vector of sequences
% R: A Language and Environment for Statistical Computing Reference Index
% Version 4.2.3 Page 505 ff
%
% EXAMPLE:
% -
%
% INPUT:
% nvec : Input vector
% from : Starting value(s)
% by : Increment of the sequence(s)
%
% OUTPUT:
% seq : Generated sequence
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Parse inputs
iParser = inputParser;
addRequired(iParser, 'nvec', ...
@(x) isnumeric(x) && all(x>=0));
addOptional(iParser, 'from', ones(length(nvec), 1), ...
@(x) isnumeric(x) && (isscalar(x) || length(x)==length(nvec)));
addOptional(iParser, 'by', ones(length(nvec), 1), ...
@(x) isnumeric(x) && (isscalar(x) || length(x)==length(nvec)));
parse(iParser, nvec, varargin{:});
if isscalar(iParser.Results.from)
from = ones(length(nvec))*iParser.Results.from;
else
from = iParser.Results.from;
end
if isscalar(iParser.Results.by)
by = ones(length(nvec))*iParser.Results.by;
else
by = iParser.Results.by;
end
seq = [];
for i=1:length(nvec)
seq = [seq, linspace(from(i), from(i) + (by(i)*(nvec(i)-1)), nvec(i))];
end
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
```
This gives the following output:
[](https://i.stack.imgur.com/5pcvr.jpg)
[](https://i.stack.imgur.com/OBfWL.jpg)
As you can see, the result is correct. BUT it only works with a minimum of 45 lags, AND if I change the model of the ACF I have to adapt this parameter. In addition, I have no idea how to calculate the sigma of the Gaussian samples so that the sigma AFTER the filtering is 1 (standard). I know that the standard deviation changes during filtering, but I have not found a formula for this yet.
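For reference on the standard-deviation part: an FIR filter applied to white noise gives Var(y) = Var(x) * sum(w^2), so the required input standard deviation is 1/sqrt(sum(w^2)). A quick check (sketched in Python/NumPy as a stand-in for the Matlab code, using the raw desired-ACF weights rather than the iteratively adjusted ones):

```
import numpy as np

rng = np.random.default_rng(3)

# stand-in for the filter weights: here the raw desired ACF over 46 lags
w = 0.999 * np.exp(-0.07197 * np.arange(46))
x = rng.normal(0.0, 1.0, 500_000)      # unit-variance white noise

y = np.convolve(x, w)[: len(x)]        # causal FIR filtering, like filter(w, 1, x)

# Var(y) = Var(x) * sum(w^2), so to end up with Std(y) = 1
# the input standard deviation must be 1 / sqrt(sum(w^2))
predicted_sd = np.sqrt(np.sum(w ** 2))
sigma_factor = 1.0 / predicted_sd      # about 0.37 for these weights
```

With the final (adjusted) weights plugged in instead, this factor should land close to the sigmaFactor = 0.35 found by trial above.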
| How to generate a standard normal distributed time-series with a given ACF | CC BY-SA 4.0 | null | 2023-03-22T17:05:52.823 | 2023-03-28T17:12:20.260 | 2023-03-23T17:29:04.577 | 380589 | 380589 | [
"time-series",
"normal-distribution",
"autocorrelation",
"random-generation"
] |
610342 | 1 | 610425 | null | 11 | 1162 | It seems to me like the concepts of incorporating prior beliefs about parameters VERSUS viewing parameters as latent random variables are two VERY separate concepts, and yet I've found that they're often confounded and treated as one. Can't we have prior beliefs about what the most likely value of a parameter is without actually believing that the parameters are "random variables" i.e they do have a single "true" value, but there's just uncertainty about what that value is. The fact that we have "prior knowledge" about what the value of the parameter might be and that we encode that via a prior probability distribution doesn't seem to me to exclude the possibility that that thing we're talking about (the "parameter") is a fixed, albeit latent value.
I've taken a course in Bayesian statistics, and have read online about Bayesian analysis, but I actually still do not fully understand this.
EDIT: Just to be a bit more concrete, suppose we conduct an analysis “as a frequentist,” and infer the value of some parameter. However, because we have some prior knowledge about the parameter, we use a prior that reflects our uncertainty about the value of the parameter. But just because we used a prior doesn’t require us to assume that the parameter we’re inferring is inherently random. The resulting probability distribution of the inferred parameter is simply arising because of uncertainty, not inherent randomness in the parameter. We still believe, as frequentists, that the parameter has a fixed, latent value. So are we being Bayesian, or frequentist, or a combination of both in this type of scenario?
| Is the "Bayesian approach" about prior beliefs, viewing parameters as random variables, or both? | CC BY-SA 4.0 | null | 2023-03-22T17:18:00.017 | 2023-03-23T18:41:29.543 | 2023-03-22T20:52:24.910 | 283319 | 283319 | [
"bayesian"
] |
610343 | 2 | null | 610312 | 0 | null | This is what your effects look like. You have three groups and two time points, giving 2x3=6 values. In the graph the two time points are shown on the x-axis and the three groups are shown using lines with different colours.
[](https://i.stack.imgur.com/wqd39.png)
| null | CC BY-SA 4.0 | null | 2023-03-22T17:26:13.070 | 2023-03-22T17:26:13.070 | null | null | 164061 | null |
610344 | 2 | null | 610046 | 2 | null | There are a few things going on here. I don't use the `fda` package, and highly software-specific questions are off topic here, but there seems to be some confusion about how splines are implemented.
Here is a single set of `AUC` versus `Day` data that seem similar to yours. When things aren't working, start with a single simple example before you make things more complicated.
```
aucData
# Day AUC
# 1 0 0.00
# 2 1 0.10
# 3 2 0.30
# 4 3 0.35
# 5 4 0.40
# 6 5 0.38
# 7 6 0.45
# 8 7 0.42
```
With only 8 data points for each combination of "Model" and "Drug" you will be limited in the quality of any fit. Even the simple models below with only 3 or 4 coefficients might well be over-fitting the data. Be very wary of over-interpretation.
One problem (maybe the biggest) with your work so far is that functions like `bs()` or your `bspline_fun()` only define a model matrix for the predictor that can be used to fit a regression spline. They don't fit anything directly. Here's an example with a single interior knot at the median `Day` value (as far as I might be comfortable going with only 8 observations).
```
library(splines)
bs(aucData$Day,knots=3.5)
# 1 2 3 4
# [1,] 0.000000000 0.00000000 0.000000000 0.000000000
# [2,] 0.530612245 0.09912536 0.005830904 0.000000000
# [3,] 0.571428571 0.30320700 0.046647230 0.000000000
# [4,] 0.367346939 0.47230321 0.157434402 0.000000000
# [5,] 0.157434402 0.47230321 0.367346939 0.002915452
# [6,] 0.046647230 0.30320700 0.571428571 0.078717201
# [7,] 0.005830904 0.09912536 0.530612245 0.364431487
# [8,] 0.000000000 0.00000000 0.000000000 1.000000000
## reports of attributes omitted
```
This matrix shows how each of the 8 `Day` values (rows) are represented in terms of the 4 predictors (columns) for which coefficients would be reported in a model. You then have to fit the `AUC` values against this basis representation of the `Day` values:
```
bs1 <- lm(AUC~bs(Day,knots=3.5),data=aucData)
```
The smoothing spline works a bit differently. It places a knot at each data point and then penalizes the wiggliness of the curve to take into account the number of data points. Here's the default for these data.
```
ss1 <- with(aucData,smooth.spline(Day,AUC))
ss1
# Call:
# smooth.spline(x = Day, y = AUC)
#
# Smoothing Parameter spar= 0.4433949 lambda= 0.002846502 (14 iterations)
# Equivalent Degrees of Freedom (Df): 3.748625
# Penalized Criterion (RSS): 0.005042357
# GCV: 0.002231849
```
In this case, either seems to represent the data adequately.
```
plot(AUC~Day,data=aucData,type="l",bty="n")
newDays <- seq(0,7,length.out=100)
lines(newDays,predict(bs1,newdata=list(Day=newDays)),col="red")
lines(predict(ss1,newDays),col="blue") ## predict() works a bit differently here
legend("topleft",legend="black, raw data\nred, bs\nblue, ss",bty="n")
```
[](https://i.stack.imgur.com/jf18d.png)
I suspect that some of your data time courses are wigglier than this, so that you get wigglier curves that look troubling. If, as I suspect, the data represent some type of ideally non-decreasing growth curve that is subject to some measurement error, the spline fits in those cases might not make much sense at all. You might see a hint of that in the red curve for the `bs` fit, which starts pointing upward at the last `Day`. Don't try to inspect all of your curves at once, as you seem to be doing.
If there's a general theoretical form that might be expected to describe the data, you might be better off with a nonlinear fit of an appropriate parametric model. For example, a logistic growth curve can often represent such data; there's a built-in `SSlogis()` function that can simplify the fitting:
```
nlsLogis <- nls(AUC~SSlogis(Day,Asym,xmid,scale),data=aucData)
nlsLogis
# Nonlinear regression model
# model: AUC ~ SSlogis(Day, Asym, xmid, scale)
# data: aucData
# Asym xmid scale
# 0.4078 1.5654 0.5153
# residual sum-of-squares: 0.004432
```
Here `Asym` is the upper asymptote; `xmid` is the `Day` value at which you reach half of that, and `scale` sets the steepness of the curve. You can display that if you choose (not shown here) via:
```
lines(newDays,predict(nlsLogis,newdata=list(Day=newDays)))
```
If this logistic growth model is appropriate, this provides a simple closed-form function for the AUC against time, with readily interpretable coefficients (unlike those in spline fits).
$$\text{AUC} = \frac{\text{Asym}}{1+\exp\left((\text{xmid}-\text{Day})/\text{scale}\right)}$$
That would seem ideal for "functional data analysis," if that functional form is appropriate.
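As a small sanity check on this closed form (plain Python, plugging in the fitted coefficients reported above): at Day = xmid the curve sits at exactly half of Asym, and far past the data it levels off at the asymptote.

```
import math

# coefficients reported by the nls fit above
Asym, xmid, scale = 0.4078, 1.5654, 0.5153

def auc(day):
    return Asym / (1.0 + math.exp((xmid - day) / scale))

half = auc(xmid)        # exactly Asym / 2 at the midpoint
plateau = auc(50.0)     # far past the data: essentially the asymptote
```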
| null | CC BY-SA 4.0 | null | 2023-03-22T17:28:29.207 | 2023-03-22T17:28:29.207 | null | null | 28500 | null |
610345 | 1 | null | null | 0 | 79 | I am required to project all the points of the PCA onto the PC1 axis and then onto the PC2 axis, to see whether there is good separation between the points. Through this I have to determine which dimension can best help to identify the malicious events. Being a beginner in R, I am quite confused about what I am required to do or how to go about it. Kindly advise me on this problem (I have plotted the PCA score plot, loading plot, and biplot of the first two PCs).
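For reference, the PC scores are exactly these projections, so "projecting onto PC1" just means taking the first score column (in R, prcomp(...)$x[,1] and $x[,2]). A sketch of the underlying algebra (Python/NumPy, made-up data):

```
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(4, 4))
X = rng.normal(size=(50, 4)) @ A       # made-up correlated data, 50 points

Xc = X - X.mean(axis=0)                # center, as prcomp() does by default
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

scores = Xc @ Vt.T                     # column j = projection of all points on PC j+1
pc1 = scores[:, 0]                     # every point projected onto the PC1 axis
pc2 = scores[:, 1]                     # ... and onto the PC2 axis
```

Plotting pc1 (and then pc2) as a one-dimensional strip, coloured by event label, shows how well each single dimension separates the classes.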
| Plotting all the points of PCA to only one PCA axis, first PC1 and then on the PC2 | CC BY-SA 4.0 | null | 2023-03-22T17:39:16.560 | 2023-03-22T17:39:16.560 | null | null | 383876 | [
"machine-learning",
"pca",
"scatterplot",
"ggplot2",
"biplot"
] |
610346 | 1 | null | null | 0 | 39 | I am new to machine learning. For classification, I want to do k-fold cross-validation instead of separating training and test datasets with hold-out. I know that I can use the `caret` package in R for this. Something occurred to me: in k-fold cross-validation, there is no single held-out test set as in the hold-out method. In this case, there are two ways. The first is to implement k-fold cross-validation manually, building the training and test sets with a for loop. The second is to train on the whole dataset with the `caret` package via the `trainControl` parameter. In the second way, we make the predictions over the whole training set to construct a confusion matrix. How can I tell if there is over-fitting, since there is no separate test dataset? Also, do you think the first way or the second way would be sounder?
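For reference, the "manual" variant amounts to nothing more than partitioning the row indices; a minimal sketch of the per-fold bookkeeping (in Python/NumPy for brevity, with the model fit itself left as a placeholder comment):

```
import numpy as np

rng = np.random.default_rng(4)
n, k = 100, 5

idx = rng.permutation(n)            # shuffle the row indices once
folds = np.array_split(idx, k)      # k disjoint test sets covering every row

for i in range(k):
    test_idx = folds[i]
    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
    # fit the classifier on rows train_idx, evaluate on rows test_idx,
    # and accumulate the held-out predictions for a pooled confusion matrix
    assert len(set(train_idx) & set(test_idx)) == 0   # no leakage
```

Because every row is predicted exactly once while held out, the pooled confusion matrix is built only from out-of-sample predictions, which is what guards against over-fitting.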
| Classification with K-fold cross validation | CC BY-SA 4.0 | null | 2023-03-22T17:41:00.673 | 2023-03-29T01:37:16.647 | 2023-03-29T01:37:16.647 | 11887 | 383874 | [
"r",
"classification",
"cross-validation",
"caret"
] |
610347 | 1 | null | null | 0 | 37 | How to test if there's a statistically significant difference between two counts
Hi! I'm struggling to test if two ratios on the same row of my dataset are significantly different from each other. I need to do this test for as many rows are in my dataset. The ratios, a and b, are in two separate columns of my data, and I'm trying only to plot points that have a statistically significant difference between them.
What I've done is take the difference between each set of values (a - b) and calculated the percentile rank of each difference relative to its column using scipy.stats.percentileofscore. I've made a histogram of this data. But I'm not sure how to proceed now. The histogram doesn't look normal. I want to just take the values above 95 and below 5 percent on the histogram, but since my distribution isn't normal I don't think that'd be correct!
What's difficult here is that I'm analyzing data from an experiment I really should have done in replicates, rather than just once. And comparing between counts, not averages, confuses me.
Any guidance would be greatly appreciated. Thanks so much :)
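For what it's worth, one common single-replicate approach is to test each row's pair of counts against a 50:50 split with a binomial test; a normal-approximation sketch (plain Python, made-up counts):

```
import math

def two_count_ztest(a, b):
    """Normal-approximation binomial test of H0: both rates are equal,
    i.e. a ~ Binomial(a + b, 1/2). Continuity correction omitted."""
    n = a + b
    z = (a - n / 2) / math.sqrt(n / 4)
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

z, p = two_count_ztest(60, 40)   # hypothetical counts from one row
```

Applying this per row and then correcting for multiple testing (e.g. Benjamini-Hochberg) avoids relying on the empirical percentile cutoffs, which, as noted, are hard to justify when the difference distribution is not normal.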
| How to test if there's a statistically significant difference between two counts, based on relative percentile? | CC BY-SA 4.0 | null | 2023-03-22T17:48:50.927 | 2023-03-22T20:29:25.947 | null | null | 383878 | [
"hypothesis-testing",
"statistical-significance",
"confidence-interval"
] |
610348 | 2 | null | 610302 | 2 | null | In (2.4), $I\ $ is a function of $(X_2 - X_1, \ldots, X_n - X_1)$ by a mere change of variable in the integral, namely
\begin{align} I &=\int_{\mathbb R^k}T(X_1 + \theta - u, ..., X_n+\theta-u)\frac{\prod_1^nf(X_j-u)}{\int_{\mathbb R^k}\prod_1^nf(X_j - v)dv}du\\ &=\int_{\mathbb R^k}T(\theta - \nu, ..., X_n-X_1+\theta-\nu)\frac{\prod_1^nf(X_j-X_1-\nu)}{\int_{\mathbb R^k}\prod_1^nf(X_j -X_1 - v)dv}d\nu
\end{align}
The identity that constitutes the lemma then follows from
$$\mathbb{E}_\theta[z(X_2-X_1,\ldots)I(X_2-X_1,\ldots)] = \mathbb{E}_\theta[z(X_2-X_1,\ldots)T(X_1,X_2,\ldots)]$$
because (i) $I$ is a function of $(X_2-X_1,\ldots)$ only and (ii) it satisfies the [orthogonal projection property associated with conditional expectations in measure theory](https://en.wikipedia.org/wiki/Conditional_expectation#Conditional_expectation_with_respect_to_a_sub-%CF%83-algebra). For instance, here is the very first definition found in Steve Lalley's U of Chicago [course notes](http://galton.uchicago.edu/%7Elalley/Courses/383/ConditionalExpectation.pdf):
>
Definition 1. Let $(Ω, \mathfrak F , P )$ be a probability space and let $\mathfrak S$ be a
σ−algebra contained in $\mathfrak F$. For any real random variable $X ∈ L^2(Ω,\mathfrak F ,
> P )$, define $\mathbb E (X | \mathfrak S )$ to be the orthogonal projection of $X$ onto the
closed subspace $L^2(Ω,\mathfrak S , P )$.
The property$$\mathbb{E}_\theta\big[z(X_2-X_1,\ldots)\big\{I(X_2-X_1,\ldots) -T(X_1,X_2,\ldots)\big\}\big]=0$$expresses the $L^2$ orthogonality between measurable functions of $(X_2-X_1,\ldots)$ and $I-T$.
| null | CC BY-SA 4.0 | null | 2023-03-22T17:54:14.267 | 2023-03-23T15:15:53.123 | 2023-03-23T15:15:53.123 | 354041 | 7224 | null |
610349 | 2 | null | 544749 | 0 | null | I also struggled with the statement pointed out by the poster: by sufficiency, in the form of the Rao-Blackwell theorem, we need consider only unbiased estimators of zero based on Y, where Y is complete and sufficient in this context. The authors didn't give much detail explaining why we should only check whether the proposed unbiased estimator is uncorrelated with unbiased estimators of zero based on a complete and sufficient statistic. After some searching, I find that an unbiased estimator uncorrelated with every unbiased estimator of zero based on the complete and sufficient statistic is in fact uncorrelated with all unbiased estimators of zero.
For example, let $h(\tilde{T})$ be an unbiased estimator of $g(\theta)$ based on $\tilde{T}$ which is complete and sufficient. Let $U$ be an unbiased estimator of zero. $U_t$ be unbiased estimator of zero based on $\tilde{T}$. Then the following shows what I claimed above:
Let $\phi_t = h(\tilde{T})+cU$ where $c$ is a constant. Then $\phi_t$ is an unbiased estimator of $g(\theta)$ because:
E($\phi_t$) = E($h(\tilde{T})+cU$) = E($h(\tilde{T})$) + $c$E($U$) = $g(\theta)$ + 0 = $g(\theta)$
Now we want to know whether $h(\tilde{T})$ can be improved by adding random noise (in this case $U$). Since both $h(\tilde{T})$ and $\phi_t$ are unbiased estimators, to decide which one is better we compare their variances:
Var($\phi_t$)= Var($h(\tilde{T})+cU$)=Var($h(\tilde{T})$)+$c^2$Var($U$)+2$c$Cov($h(\tilde{T}),U$)
If Cov($h(\tilde{T}),U$)=0, this implies Var($\phi_t$) $\geq$ Var($h(\tilde{T})$) for every $c$.
- Note: if Cov($h(\tilde{T}),U$) $\neq$ 0, then choosing $c$ with sign opposite to the covariance and small enough magnitude makes Var($\phi_t$) $\lt$ Var($h(\tilde{T})$)
- Cov($h(\tilde{T}),U$) =E($h(\tilde{T})U$) because E($U$)=0
Here $U$ is an arbitrary unbiased estimator of zero. The poster asks why it is sufficient that $h(\tilde{T})$ is uncorrelated with $U_t$ when applying Theorem 7.3.20. I hope the following answers the question:
Cov($h(\tilde{T}),U$) =E($h(\tilde{T})U$) = E{E[$h(\tilde{T})U|\tilde{T}$]}=E{$h(\tilde{T})E[U|\tilde{T}]$}=Cov($h(\tilde{T}),E[U|\tilde{T}]$)
where $E[U|\tilde{T}]$ is an unbiased estimator of zero based on $\tilde{T}$ (by the tower property, E{E[$U|\tilde{T}$]}=E($U$)=0)
So if $h(\tilde{T})$ is uncorrelated with $E[U|\tilde{T}]$, then $h(\tilde{T})$ is uncorrelated with $U$.
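For concreteness, here is a small brute-force check of the identity above (a hypothetical Bernoulli example of my own, not from the original question): with $X_1, X_2, X_3$ iid Bernoulli($p$), $\tilde{T} = \sum_i X_i$ is complete and sufficient, $h(\tilde{T}) = \tilde{T}/3$ is unbiased for $p$, and $U = X_1 - X_2$ is an unbiased estimator of zero. Enumerating all outcomes confirms that Cov($h(\tilde{T}),U$) and Cov($h(\tilde{T}),E[U|\tilde{T}]$) are both zero.

```python
from itertools import product

p = 0.3
outcomes = list(product([0, 1], repeat=3))

def prob(x):
    return p ** sum(x) * (1 - p) ** (3 - sum(x))

def E(f):
    # exact expectation by enumerating all 2^3 outcomes
    return sum(f(x) * prob(x) for x in outcomes)

def h(x):   # h(T) = T/3, unbiased for p, a function of T = sum(x)
    return sum(x) / 3

def U(x):   # an unbiased estimator of zero
    return x[0] - x[1]

assert abs(E(U)) < 1e-12
cov_hU = E(lambda x: h(x) * U(x)) - E(h) * E(U)

def U_given_T(x):
    # E[U | T = sum(x)], conditioning on the sufficient statistic
    t = sum(x)
    same_t = [z for z in outcomes if sum(z) == t]
    w = sum(prob(z) for z in same_t)
    return sum(U(z) * prob(z) for z in same_t) / w

cov_h_UT = E(lambda x: h(x) * U_given_T(x)) - E(h) * E(U_given_T)
assert abs(cov_hU) < 1e-12 and abs(cov_h_UT) < 1e-12
```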
This is my very first post on StackExchange. Please comment if I made any mistake.
| null | CC BY-SA 4.0 | null | 2023-03-22T17:56:26.533 | 2023-03-22T17:56:26.533 | null | null | 355884 | null |
610350 | 1 | 610514 | null | 0 | 40 | I was wondering how to assess residual normality of a repeated measures ANOVA. In some threads, users refer to Venables and Ripley ("Residuals in multistratum analyses: Projections") and recommend extracting residuals using the `proj()` function. Is there any argument against applying statistical normality tests, like the Shapiro-Wilk test?
Or would a mixed model (`lmer` from `lme4` package) be the better way?
Reproducible example:
```
library(MASS)
set.seed(123)
data<- data.frame(id = factor(rep(1:10, each = 4)),
cond1 = factor(rep(c("a", "b"), 20)),
cond2 = factor(rep(rep(c("x", "y"), each = 2), 10)),
Y = rnorm(40, 5, 2))
model<- aov(Y ~ cond1*cond2 +
Error(id/(cond1*cond2)), data = data)
model.pr <- proj(model)
shapiro.test(model.pr[[5]][, "Residuals"])
```
| Testing assumptions for repeated measures ANOVA | CC BY-SA 4.0 | null | 2023-03-22T18:11:08.263 | 2023-03-23T21:05:14.913 | null | null | 277811 | [
"anova",
"lme4-nlme",
"repeated-measures",
"heteroscedasticity",
"normality-assumption"
] |
610351 | 1 | null | null | 0 | 31 | I have been working with a data file in R that contains two primary categorical variables: study location (`study`, 19 levels), which is a nuisance variable, and race (4 levels), which is the outcome of interest. There are other variables in the model (age), but as they do not change I don't think they affect my question.
I was originally told to run a logistic regression model where both study and race were dummy coded. E.g. for race:
```
white 0 0 0
black 1 0 0
latino 0 1 0
asian 0 0 1
```
The results for whether race was significantly different from white were then interpreted from the regression summary of the beta coefficients and their p-values. However, isn't the interpretation of the `raceBlack` coefficient, for example, the marginal difference between white and black at the reference level of the site variable? I then recoded the site variable to effects coding using `contrasts(data$site) = contr.sum(n)`, yielding, as an example if n=4:
```
[,1] [,2] [,3]
site1 1 0 0
site2 0 1 0
site3 0 0 1
site4 -1 -1 -1
```
This resulted in an expected change to the intercept, but both the estimates and p-values for the race coefficients (still dummy coded) did not change. I thought the new interpretation of the raceBlack coefficient would be, "the difference between white and black at the average of site location." Have I done something incorrectly or does my thinking need correcting?
Thank you for your help.
| Interpreting regression coefficients with partial dummy vs. effects coding and multiple factors | CC BY-SA 4.0 | null | 2023-03-22T18:15:08.953 | 2023-03-22T19:12:58.820 | null | null | 149739 | [
"regression",
"interpretation",
"categorical-encoding"
] |
610355 | 2 | null | 610351 | 0 | null | The correct interpretation of the dummy-coded `raceBlack` coefficient is the associated difference in outcome from the reference level of `race` when all other predictors are held constant. The particular way the other predictors are coded doesn't matter if `race` isn't involved in an interaction with any of them.
Things are more complicated when there are interactions. Then the individual coefficient for any predictor can be affected by re-leveling, re-coding, or re-centering a predictor with which it interacts. That's not the case here.
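A quick numerical illustration of this point (made-up data and a hand-rolled least-squares solver, so nothing here depends on any particular statistics package): switching a second two-level factor from dummy (0/1) to sum-to-zero (-1/+1) coding changes the intercept and that factor's own coefficient, but leaves the coefficient of the other factor untouched, because there is no interaction term.

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for a small linear system
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def ols(X, y):
    # ordinary least squares via the normal equations X'X b = X'y
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)

# two observations per cell of a 2x2 factorial (A, B), additive model only
cells = [(0, 0, 1.0), (0, 0, 1.2), (0, 1, 4.0), (0, 1, 4.2),
         (1, 0, 3.0), (1, 0, 3.4), (1, 1, 6.1), (1, 1, 6.3)]
y = [v for _, _, v in cells]
X_dummy = [[1, a, b] for a, b, _ in cells]        # B coded 0/1
X_eff = [[1, a, 2 * b - 1] for a, b, _ in cells]  # B coded -1/+1

b_dummy = ols(X_dummy, y)
b_eff = ols(X_eff, y)
print([round(v, 6) for v in b_dummy])  # [1.1, 2.1, 3.0]
print([round(v, 6) for v in b_eff])    # [2.6, 2.1, 1.5]
```

The coefficient on `A` (2.1) is identical under both codings; only the intercept and the recoded factor's own coefficient change.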
As an aside: if `location` is just a nuisance variable, you might be better off treating it with a random intercept in a mixed model instead of trying to estimate 18 coefficients for its 19 levels.
| null | CC BY-SA 4.0 | null | 2023-03-22T19:12:58.820 | 2023-03-22T19:12:58.820 | null | null | 28500 | null |
610356 | 2 | null | 610300 | 9 | null | One approach is to use the theorem. It is an orthogonal expansion designed to give the best possible approximation (in a least squares sense) whenever you truncate it after finitely many terms.
Notice that since
$$1 = \operatorname{Var}(W(1)) = \sum_{i=1}^\infty e_i(1)^2\operatorname{Var}(N_i) = \sum_{i=1}^\infty 2 \lambda_i,$$
the $\lambda_i$ sum to $1/2.$ (You can verify this analytically by relating the sum of the $\lambda_i$ to $\zeta(2) = \pi^2/6.$)
Thus, to approximate the process with a least squares error smaller than $\epsilon,$ find the first $n$ for which
$$1 - \epsilon \le 2\sum_{i=1}^n \lambda_i,$$
generate $n$ iid Normal values $Z_1,Z_2,\ldots, Z_n,$ and set $N_i = \sqrt{\lambda_i} Z_i.$ Roughly, to improve the precision from $\epsilon$ to $\epsilon/C$ (with $C\ge 1$) you will need $C$ times as many terms. (An excellent approximation for $n$ is $0.2/\epsilon.$)
Here is an example of one realization of this process using a common sequence of realized values of $N_i$ with varying levels of precision:
[](https://i.stack.imgur.com/bTbtI.png)
The advantages of this approach are clear: with limited computing time you can generate an approximation of the walk and, given additional time, you can dynamically improve it to supply detail, as illustrated by the progression in this illustration. Moreover, you can zoom in at will merely by keeping the same realized values of the $N_i$ and computing the sine functions within arbitrarily narrow intervals. (The further you zoom in, though, the more accurate your approximation will need to be.)
The code in `R` is particularly simple once you have computed the $\lambda_i:$ one line to generate the $N_i$ and another line to compute the sum. See the block of code below following "One realization of the process." It finds $n$ by a simple search, doubling the search length until $n$ is found or the problem is getting too large. Generally you would be fine just rounding $0.2/\epsilon$ up to an integer.
```
n.max <- 1e4
par(mfrow = c(2,2))
for (eps in c(1e-1, 1e-2, 1e-3, 1e-4, 1e-5)) {
#
# Search for `n`.
#
n <- 1
lambda <- 8/pi^2
remainder <- 1 - lambda
while(remainder > eps && n < n.max) {
i <- seq_len(n) + n
n <- n + length(i)
l <- 2/(pi * (i - 1/2))^2
lambda <- c(lambda, l)
remainder <- remainder - sum(l)
}
if (n > n.max) {
warning("Intended precision not attained with ", n.max, " terms.")
break
}
Lambda <- cumsum(lambda)
n <- which.max(Lambda >= 1 - eps)
lambda <- lambda[1:n]
Lambda <- Lambda[1:n]
#
# One realization of the process.
#
set.seed(17)
N <- rnorm(n) * sqrt(lambda)
W <- function(t, N) {
sqrt(2) * colSums(outer(seq_along(N), t, \(i,x) sin((i - 1/2) * pi * x) * N))
}
curve(W(t, N), 0, 1, xname = "t", n = 1201,
main = bquote(paste(epsilon == .(eps), ", ", n == .(n))))
}
par(mfrow = c(1,1))
```
| null | CC BY-SA 4.0 | null | 2023-03-22T19:28:19.847 | 2023-03-23T02:34:10.537 | 2023-03-23T02:34:10.537 | 919 | 919 | null |
610357 | 2 | null | 610301 | 2 | null | The difference is explained by the fact that these are two different variances. In particular, the model both `lme()` and `lmer()` fit in this case is
$$\left\{
\begin{array}{l}
\texttt{distance}_{ij} = \beta_0 + \beta_1 \texttt{age}_{ij} + b_{i0} + \varepsilon_{ij}\\\\
b_{i0} \sim \mathcal N(0, \sigma_b^2), \quad \varepsilon_{ij} \sim \mathcal N(0, \sigma^2)
\end{array}
\right.$$
Individual `F01` has four measurements, and conditional on the random intercept, these measurements are independent with variance $\sigma^2$. The conditional model is given in the first line of the equation above. This is what you get from the call `getVarCov(fm1, individuals = "F01", type = "conditional")`.
The `postVar` component that you get from the call to `lme4::ranef(...)` is the posterior variance for the random effect, i.e., $\mbox{var}(b_{i0} \mid \texttt{distance}_{ij})$. I don't know if you can get this from an `lme` model.
| null | CC BY-SA 4.0 | null | 2023-03-22T20:20:47.593 | 2023-03-22T20:20:47.593 | null | null | 219012 | null |
610358 | 2 | null | 610347 | 0 | null | This is a simple test of proportions. Basically, you'd like to see if DNA reads appear at different rates between sample types, knowing that the sample types may have different numbers of total reads.
To do this, you can run a chi-squared or Fisher test on a 2x2 contingency table - on one axis you have "Condition 1" and "Condition 2", and on the other, you have "Read in Gene" versus "Read Not in Gene". For every query gene, fill in the table counting the number of reads you saw for each gene in each condition, as well as the remaining number of reads that didn't get assigned to that gene for each condition. The chi squared or Fisher test will tell you if a different proportion of reads get assigned to the gene between the two conditions.
As perhaps a simpler analogy for this problem, it's the same as rolling two dice many times and trying to determine whether any of the numbers appear at significantly different rates.
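As a sketch with made-up counts (the numbers are purely illustrative), the chi-squared statistic for such a 2x2 table can be computed directly:

```python
# rows: condition 1, condition 2; columns: reads in gene, reads not in gene
a, b = 30, 970   # condition 1: 30 reads in the gene out of 1000
c, d = 60, 940   # condition 2: 60 reads in the gene out of 1000
n = a + b + c + d

# chi-squared statistic for a 2x2 table (1 degree of freedom, no continuity correction)
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
print(round(chi2, 2))  # 10.47 -- compare with 3.84, the 5% critical value for 1 df
```

Here the proportion of reads assigned to the gene differs significantly between the two conditions.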
| null | CC BY-SA 4.0 | null | 2023-03-22T20:29:25.947 | 2023-03-22T20:29:25.947 | null | null | 76825 | null |
610359 | 1 | null | null | 0 | 30 | Is anyone aware of an approximation to the density function for the studentized range distribution [https://en.wikipedia.org/wiki/Studentized_range_distribution](https://en.wikipedia.org/wiki/Studentized_range_distribution) ? I've found a fast approximation for quantiles made in excel [https://www.ars.usda.gov/ARSUserFiles/60540520/RapidCalculationOfQ.pdf](https://www.ars.usda.gov/ARSUserFiles/60540520/RapidCalculationOfQ.pdf)
And I've found Fortran code for the CDF and another function in the jStat library for cdf values; however I'm curious if there are any coarse approximations for the pdf that are out there - this is to assist in fast plotting of the density curve- it's just a visualization and doesn't need to be too precise.
I appreciate any suggestions anyone has. Thanks!
| approximating the density of the studentized range distribution | CC BY-SA 4.0 | null | 2023-03-22T20:32:35.383 | 2023-03-22T20:32:35.383 | null | null | 213506 | [
"density-function",
"approximation",
"tukey-hsd-test"
] |
610360 | 1 | null | null | 0 | 25 | I have the following probability distribution function given by:
\begin{equation}
\label{eq:function}
f(x) = \frac{4a}{x^5} \exp \left[ {- \frac{a}{x^4}} \right] \quad \quad 0 \leq x < \infty \quad \text{and} \quad a>0
\end{equation}
for which I have derived the following Quartiles, Moments and Estimates for the parameter $a$.
Quartiles:
Let $F_X(z) = \exp \left[ {-\frac{a}{z^4}} \right]$.
1st Quartile:
Solve for $F_X(n_0) = 0.25$
\begin{equation}
\implies n_0 = \left[ - \frac{a}{\log(0.25)}\right] ^\frac{1}{4} \approx (0.721348\,a)^\frac{1}{4}
\end{equation}
2nd Quartile:
Solve for $F_X(n_0) = 0.5$
\begin{equation}
\implies n_0 = \left[ - \frac{a}{\log(0.5)}\right] ^\frac{1}{4} \approx (1.442695\,a)^\frac{1}{4}
\end{equation}
3rd Quartile:
Solve for $F_X(n_0) = 0.75$
\begin{equation}
\implies n_0 = \left[ - \frac{a}{\log(0.75)}\right] ^\frac{1}{4} \approx (3.476059\,a)^\frac{1}{4}
\end{equation}
Moments:
\begin{equation}
\mathbb{E} \left[X \right] = a^\frac{1}{4} \cdot \Gamma \left( \frac{3}{4} \right)
\end{equation}
\begin{equation}
\mathbb{E} \left[X^2 \right] = \left( a \pi \right) ^\frac{1}{2}
\end{equation}
\begin{equation}
\mathbb{E} \left[X^3 \right] = 4a^\frac{3}{4} \cdot \Gamma \left( \frac{5}{4} \right)
\end{equation}
\begin{equation}
\mathbb{E} \left[X^4 \right] = \int_{0}^{\infty} \frac{4a}{x} \exp \left[ {- \frac{a}{x^4}} \right] \, dx
\end{equation}
where the integral diverges on $[0,\infty)$ (the integrand behaves like $4a/x$ for large $x$), hence the 4th moment does not exist.
Estimators Derived:
Estimator 1: Using 1st Moment
\begin{equation}
\implies \hat{a}_{MM1} = \left[ \frac{\widehat{\mathbb{E} \left[X \right]}}{\Gamma \left( \frac{3}{4} \right)}\right]^4 = \left[ \frac{\bar{X}}{\Gamma \left( \frac{3}{4} \right)} \right]^4 = \left[ \frac{\frac{1}{n} \sum_{i=1}^n X_i}{\Gamma \left( \frac{3}{4} \right)} \right]^4
\end{equation}
Estimator 2: Using 2nd Moment
\begin{equation}
\label{eq:Estimator_a_MoM}
\implies \hat{a}_{MM2} = \frac{\left[ \widehat{\mathbb{E} \left[X^2 \right]} \right]^2}{\pi} = \frac{\left[ \frac{1}{n} \sum_{i=1}^n X_i^2 \right]^2}{\pi}
\end{equation}
Estimator 3: Using the Median (2nd Quartile)
\begin{equation}
\label{eq:Estimator_a_Med}
\implies \hat{a}_{MED} = - \left( n_0 \right)^4 \cdot \log(0.5)
\end{equation}
where $n_0$ is the sample median.
My issue arises in finding the Bias and Variance of each of the above estimators as I am having trouble deriving these. In taking the Expected Value of each estimator to derive its Bias, I am getting confused because of the large powers, and the same applies for the variance. Any help in obtaining these is greatly appreciated.
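If the exact expectations prove intractable, one pragmatic fallback is to estimate the bias and variance by Monte Carlo. The sketch below (Python; the choices of $a$, sample size and replication count are arbitrary assumptions of mine) draws samples via the inverse CDF $x = (-a/\ln u)^{1/4}$ and evaluates $\hat{a}_{MM1}$ empirically:

```python
import math
import random

random.seed(0)
a_true = 2.0
n, reps = 2000, 200
g34 = math.gamma(0.75)  # Gamma(3/4)

def sample(a, size):
    # inverse CDF: F(x) = exp(-a/x^4)  =>  x = (-a / ln u)^(1/4), u ~ Uniform(0,1)
    # (random.random() lies in [0, 1); drawing exactly 0 has negligible probability)
    return [(-a / math.log(random.random())) ** 0.25 for _ in range(size)]

estimates = []
for _ in range(reps):
    x = sample(a_true, n)
    xbar = sum(x) / n
    estimates.append((xbar / g34) ** 4)  # \hat{a}_{MM1}

mean_est = sum(estimates) / reps
bias_hat = mean_est - a_true
var_hat = sum((e - mean_est) ** 2 for e in estimates) / (reps - 1)
print(f"estimated bias: {bias_hat:.4f}, estimated variance: {var_hat:.4f}")
```

The same loop with $\hat{a}_{MM2}$ or $\hat{a}_{MED}$ lets you compare the three estimators numerically even without closed-form expressions.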
| Deriving Properties of Estimators (Bias and Variance) | CC BY-SA 4.0 | null | 2023-03-22T20:37:26.867 | 2023-03-22T20:37:26.867 | null | null | 360499 | [
"inference",
"expected-value",
"estimators",
"unbiased-estimator"
] |
610362 | 1 | null | null | 0 | 44 | I am trying to evaluate how a surgical intervention (insertion of a spinal fusion cage), resulting in distance changes between two vertebral bodies, led to new symptoms (yes/no) in some patients but not others. This procedure is usually done to alleviate pain from a small intervertebral disc space, so the change in distance and position between the vertebrae is important. The thing is that some distances between the vertebrae increase and others decrease as a result of the fusion surgery, and all distance changes across the two vertebrae are associated, as they come from the same individual.
I collected several anatomical point-to-point distances between vertebra (front body, rear body, side body, left nerve canal, etc.) a- pre-surgery, immediately after cage placement (b- intra-surgical), and up to a month later (c- post-surgery). Then I separated the data into one-segment surgeries (lumbar 4- lumbar 5) and two-segment (lumbar 4-lumbar 5-sacral 1) surgeries.
At the moment I am trying to figure out the one-segment data first. We want to analyze the data in a pretest-posttest style by comparing “intra- minus pre- changes" and "post- minus pre- changes", so testing average differences of measurements across segments. However, since the positional changes of these vertebra relative to one another is associated across all measurements, I thought creating a composite score of "distance change" would be the best way to: 1) isolate which specific anatomical distance change(s) may possibly be related to new symptoms, and 2) establish a distance change “threshold” at which new post-surgical symptoms begin to appear.
As part of the above approach, I thought to first create Z-scores for each for each individual’s (subjects 1, 2, n) measurement type/column from the whole surgical population (including yes and no new symptoms), then take the average of each measurement type/column across all vertebral space measurements to create a composite score, per Song et al (2013) for the pre-, intra- and post- measurements. Then I want to standardize the composite scores via weighted averaging. Since I do not have previous results, I figured I could use principal component analysis. The publication says that “other principal components (the technique yields more than one principal component) may be constructed to capture other dimensions in the data”. I am assuming that would let me check whether there is more than one anatomical distance measurement that provides a sufficiently good explanatory power for the new symptoms?
I thought about doing this: After obtaining the standardized composite score of all vertebral measurements for each of the three time periods, I can test those individuals that had new symptoms against the composite score, in order to evaluate which specific “pre-surgical to intra-surgical” and “pre-surgical to post-surgical” anatomical distances changed significantly across time, to accomplish the first underlined goal above. Does my plan make sense?
For the second “threshold” goal, what is the best way to assess the absolute measurement changes in the “intra- minus pre- changes" and "post- minus pre- comparisons"?
A third and final concern is – the above approach, I figured may work for the 1-segment surgeries (across two vertebra), but how should I build up the analysis for 2-segment surgeries (across three vertebra)?
I had looked at a related post from a few years ago: [Composite Scores and Standardized Composite Scores t test](https://stats.stackexchange.com/questions/89614/composite-scores-and-standardized-composite-scores-t-test/430436#430436) - but the data there seem to be categorical (surveys).
Any and all feedback is welcomed.
References:
Song et al 2013 - [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5459482/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5459482/)
| Creating a standardized composite "score" made up of multiple continuous, dependent variables for analysis in SPSS | CC BY-SA 4.0 | null | 2023-03-22T20:59:28.657 | 2023-03-22T20:59:28.657 | null | null | 383316 | [
"pca",
"z-score",
"pre-post-comparison"
] |
610363 | 1 | null | null | 7 | 324 | When running a multiple regression analysis, why do we not need to correct the p-values for the amount of predictors in the model?
```
summary(lm(mpg ~ disp + hp + drat + wt + gear, data=mtcars))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 27.733774 6.596345 4.204 0.000274 ***
disp 0.007368 0.011805 0.624 0.537998
hp -0.041216 0.014317 -2.879 0.007882 **
drat 1.244929 1.490168 0.835 0.411088
wt -3.406208 1.090465 -3.124 0.004351 **
gear 0.863456 1.110779 0.777 0.443972
```
Since each predictor is tested for significance individually, why is it ok to report these results without correcting for the amount of tests?
| Correcting p-value in multiple regression | CC BY-SA 4.0 | null | 2023-03-22T21:25:43.953 | 2023-03-23T14:40:34.167 | 2023-03-23T14:40:34.167 | 509 | 212831 | [
"multiple-regression"
] |
610364 | 1 | null | null | 1 | 41 | There is [this question](https://stats.stackexchange.com/q/328225/209974) on the F-1 score, asking why we compute the harmonic mean of precision and recall rather than its arithmetic mean. There were good arguments in the answers in favor of the harmonic mean, in particular that it is suited to take the average of ratios and drops to zero whenever one of the other does.
Which raises the question: why is the harmonic mean of sensitivity and specificity not a thing (to my knowledge)? They are both ratios, and the same fine arguments could apply.
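Nothing prevents computing it, of course; here is a minimal sketch (the confusion-matrix counts are made up for illustration):

```python
def harmonic_mean(a, b):
    # drops to zero whenever either component is zero, like the F1 score
    return 2 * a * b / (a + b) if a + b > 0 else 0.0

tp, fn, tn, fp = 8, 2, 5, 5      # illustrative counts
sensitivity = tp / (tp + fn)     # 0.8
specificity = tn / (tn + fp)     # 0.5
score = harmonic_mean(sensitivity, specificity)
print(round(score, 4))  # 0.6154
```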
| Why don't we use the harmonic mean of sensitivity and specificity? | CC BY-SA 4.0 | null | 2023-03-22T21:28:27.967 | 2023-03-23T12:04:17.467 | null | null | 209974 | [
"precision-recall",
"sensitivity-specificity",
"f1"
] |
610365 | 1 | null | null | 0 | 17 | Marcos López de Prado, who wrote the book "Advances in Financial Machine Learning", described a cross-validation technique ("purging" and "embargo") in which leakage from serial correlation is reduced by deleting a few observations before and after the OOS (out-of-sample) dataset. For example, with in-sample (IS) and out-of-sample (OOS) folds laid out as:
```
2015  2016  2017  2018  2019
 IS    OOS   IS    IS    IS
```
we would remove, say, two months at the end of 2015 and four months at the start of 2017 and then perform the in-sample and OOS testing.
My thoughts are:
- There will still be autocorrelation, or some form of temporal memory, left in the data.
- Creating holes in the continuous time series would not help us build a strategy, since we are breaking the links between adjacent observations.
I know the alternative to this is walk-forward (rolling) CV, but do you agree with the points above?
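For context, here is a minimal sketch of what purging and embargoing do to a single split's training set (the function and parameter names are my own illustration, not code from the book):

```python
def purged_train_indices(n, test_start, test_end, purge=0, embargo=0):
    """Training indices for one split: drop the test block [test_start, test_end),
    `purge` observations immediately before it, and `embargo` observations
    immediately after it."""
    before = range(0, max(0, test_start - purge))
    after = range(min(n, test_end + embargo), n)
    return list(before) + list(after)

# 12 periods, test fold covers indices 4..7, purge 1 before, embargo 2 after
print(purged_train_indices(12, test_start=4, test_end=8, purge=1, embargo=2))
# [0, 1, 2, 10, 11]
```

As the question suspects, this only discards the observations nearest the boundary; it does nothing about serial dependence within the remaining training data.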
| Does embargo and purging completely remove autocorrelation? | CC BY-SA 4.0 | null | 2023-03-22T21:35:30.847 | 2023-03-28T22:33:45.283 | 2023-03-28T22:33:45.283 | 11887 | 383898 | [
"machine-learning",
"cross-validation",
"descriptive-statistics",
"autocorrelation",
"finance"
] |
610366 | 1 | 610389 | null | 2 | 93 | Carlos Cinelli in this great post [https://stats.stackexchange.com/a/384460/198058](https://stats.stackexchange.com/a/384460/198058) gives an example of 3 different Data Generating Processes/Causal Models giving rise to the same joint distribution $(X,Y)$. Below is a snapshot of the 3 models taken from his post.
[](https://i.stack.imgur.com/ypkw5.png)
I was able to work out that for each of the 3 models $P(X)=P(Y)=N(0,1)$ but there is still quite a bit of work left to do to show that in these 3 models $(X,Y)$ have the same joint distribution.
In model 1: $P(X,Y)=P(X)*P(Y|X)$ but what is $P(Y|X)$? In model 2: $P(X,Y)=P(Y)*P(X|Y)$ but what is $P(X|Y)$? Model 3 is even more complicated. I am hoping someone can explain all the steps we go through in order to show that these 3 models have the same joint distribution.
| How to derive the joint distribution in these 3 models? | CC BY-SA 4.0 | null | 2023-03-22T21:39:38.600 | 2023-03-25T02:16:43.377 | 2023-03-22T21:47:57.250 | 198058 | 198058 | [
"causality",
"joint-distribution"
] |
610368 | 1 | null | null | 1 | 27 | If $X$ and $Y$ are independent binomial random variables with identical parameters $n$ and $p$, show
analytically that the conditional distribution of $X$, given that $X + Y = m$, is the hypergeometric distribution.
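(Not a substitute for the analytic derivation, but a quick numerical sanity check: computing $P(X = k \mid X + Y = m)$ directly from the binomial pmfs and comparing it with the hypergeometric pmf shows that $p$ cancels; the values of $n$ and $m$ below are arbitrary.)

```python
from math import comb

n, m = 10, 7  # X, Y ~ Binomial(n, p), independent; condition on X + Y = m

def cond_pmf(k, p):
    # P(X = k | X + Y = m), computed directly from the binomial pmfs
    def binom_pmf(j):
        return comb(n, j) * p ** j * (1 - p) ** (n - j)
    num = binom_pmf(k) * binom_pmf(m - k)
    den = sum(binom_pmf(j) * binom_pmf(m - j)
              for j in range(max(0, m - n), min(n, m) + 1))
    return num / den

for p in (0.2, 0.5, 0.9):
    for k in range(max(0, m - n), min(n, m) + 1):
        hyper = comb(n, k) * comb(n, m - k) / comb(2 * n, m)
        assert abs(cond_pmf(k, p) - hyper) < 1e-12  # p has cancelled
```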
| Conditional probability given conditional probabilities | CC BY-SA 4.0 | null | 2023-03-22T21:55:29.757 | 2023-03-23T01:51:24.847 | 2023-03-23T01:51:24.847 | 20519 | 383900 | [
"self-study",
"random-variable",
"conditional-probability",
"independence",
"hypergeometric-distribution"
] |
610369 | 2 | null | 610281 | 0 | null | What you have there seems to be an [uneven/unequally/irregularly spaced time series](https://en.wikipedia.org/wiki/Unevenly_spaced_time_series), consisting of discrete (in time) events.
The most straightforward approach for anomalies in number-of-events would be to transform the data into a regular time-series, and use a general-purpose anomaly detection model.
```
import pandas
events = pandas.DataFrame.from_records([
# B normally happens once per day
{ 'time': '2023-03-22T18:22:00', 'event': 'B' },
{ 'time': '2023-03-23T18:22:00', 'event': 'B' },
{ 'time': '2023-03-24T18:22:00', 'event': 'B' },
# skipped a day
#{ 'time': '2023-03-25T18:22', 'event': 'B' },
# many times in one day
{ 'time': '2023-03-26T16:22:00', 'event': 'B' },
{ 'time': '2023-03-26T18:22:00', 'event': 'B' },
{ 'time': '2023-03-26T20:22:00', 'event': 'B' },
# A once per week
{ 'time': '2023-03-26T20:22:00', 'event': 'A' },
])
events['time'] = pandas.to_datetime(events['time'])
print(events)
# Transform to regular time-series with counts
time_bins = '1d'
regular = events.set_index('time').groupby('event').resample(time_bins).count().rename(columns={'event': 'count'})
print(regular)
# Transform into one column per event
data = regular.reset_index().pivot(index='time', columns='event', values='count').add_suffix('_count')
data = data.fillna(0.0)
print(data)
# Can do further feature engineering here, like split time into weekday/time-of-day
# and then pass to a standard Anomaly Detection method, such as IsolationForest
```
Should print
```
time event
0 2023-03-22 18:22:00 B
1 2023-03-23 18:22:00 B
2 2023-03-24 18:22:00 B
3 2023-03-26 16:22:00 B
4 2023-03-26 18:22:00 B
5 2023-03-26 20:22:00 B
6 2023-03-26 20:22:00 A
count
event time
A 2023-03-26 1
B 2023-03-22 1
2023-03-23 1
2023-03-24 1
2023-03-25 0
2023-03-26 3
event A_count B_count
time
2023-03-22 0.0 1.0
2023-03-23 0.0 1.0
2023-03-24 0.0 1.0
2023-03-25 0.0 0.0
2023-03-26 1.0 3.0
```
| null | CC BY-SA 4.0 | null | 2023-03-22T22:01:20.860 | 2023-03-22T22:01:20.860 | null | null | 201327 | null |
610370 | 1 | 616640 | null | 4 | 73 | I like how Pyro from Uber is structured, that it builds on PyTorch, and how many features it brings. PyMC looks OK as well (but would not be my favorite with regard to syntax and longevity!)
Do you have a book you would highlight for learning probabilistic programming, variational inference, ...?
I found books using pymc
- https://github.com/BayesianModelingandComputationInPython/BookCode_Edition1 and
- https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
And the book "Variational Methods for Machine Learning with Applications to Deep Networks" with ~140 pages seems rather short (and the table of contents did not convince me yet).
Do you have suggestions? Should I just go with one of the PyMC books? (Although PyMC3 will die sooner or later because it is based on Theano, which is deprecated; I would prefer a PyTorch- or JAX-based probabilistic programming language.)
| Book suggestion probabilistic programming | CC BY-SA 4.0 | null | 2023-03-22T22:11:06.083 | 2023-05-23T06:03:05.950 | 2023-03-23T00:34:31.130 | 11887 | 298651 | [
"references",
"probabilistic-programming"
] |
610371 | 2 | null | 610336 | 0 | null | In general, you can use the GLM as a way of doing the one-sample hypothesis test for a proportion.
```
set.seed(1)
y <- rbinom(100, 1, 0.4) ## the elusive biased coin realization
f <- glm(y ~ 1, family=binomial)
summary(f)
```
Gives:
```
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.2819 0.2020 -1.395 0.163
```
Where the `(Intercept)` is the log-odds of response. In other words, if $p=0.5$, then $\log\left(p/(1-p)\right) = 0$. So, at the 0.05 level I would not reject the null hypothesis that my coin is fair.
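In fact this intercept-only fit can be reproduced by hand, which makes the connection explicit: the MLE of the intercept is just the sample log-odds, and its standard error comes from the Fisher information $n\hat p(1-\hat p)$:

```python
import math

heads, n = 43, 100                             # 43 successes, as in the simulation above
p_hat = heads / n
intercept = math.log(p_hat / (1 - p_hat))      # MLE of the log-odds
se = math.sqrt(1 / (n * p_hat * (1 - p_hat)))  # 1 / sqrt(Fisher information)
z = intercept / se
print(round(intercept, 4), round(se, 4), round(z, 3))
# -0.2819 0.202 -1.395  -- matching the glm() output above
```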
Note to test any proportion other than 0.5, you would need to calculate and add an offset to the model. For instance, if the null were that p=0.3, then the log odds of p is -0.84, and setting the offset to this value for each result gives a hypothesis test for p!=0.3 in the intercept term.
Anyway, you could compare this result to:
```
prop.test(sum(y), 100, correct = F)
```
Which gives nearly the same result:
```
1-sample proportions test without continuity correction
data: sum(y) out of 100, null probability 0.5
X-squared = 1.96, df = 1, p-value = 0.1615
alternative hypothesis: true p is not equal to 0.5
95 percent confidence interval:
0.3373330 0.5278461
sample estimates:
p
0.43
```
Of course, the important bits are a) that you have pivoted from your planned analysis due to a suspicion of non-independent data and b) a binomial analysis won't actually handle dependence in data. The interesting bit is that the analysis is essentially the same as the Poisson! The "log-linear modeling" approach has known connections to logistic regression. Consider our biased coin experiment:
```
t <- as.data.frame(table(y))
```
gives a tabular result:
```
y Freq
1 0 57
2 1 43
```
and fitting the Poisson model gives you the exact same result.
```
> summary(glm( Freq ~ y, family=poisson, data=t))
Call:
glm(formula = Freq ~ y, family = poisson, data = t)
Deviance Residuals:
[1] 0 0
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 4.0431 0.1325 30.524 <2e-16 ***
y1 -0.2819 0.2020 -1.395 0.163
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 1.9665e+00 on 1 degrees of freedom
Residual deviance: -6.4393e-15 on 0 degrees of freedom
AIC: 15.487
Number of Fisher Scoring iterations: 2
```
In other words, I still have $p=0.163$ for whether I get "more heads" or "more tails" in the count model. Conditioning on the overall count of data is already handled by adjusting for the intercept in the Poisson model.
>
> In the experimental arena, the same insect was offered simultaneously two plants (A and B) and the number of eggs laid on each plant was counted. So the assumption of independence would not be fulfilled
Unfortunately, this is not a description of dependent data at all! By "insect" I assume you mean several specimens of a particular species of insect, and if you are thinking that when subject A lays on plant A then subject B is more likely to lay on plant B, there is really no way to handle this in the analysis; it's a feature of the design. The preference for plant A may really be a result of there being only one other plant to lay on, and if there are more eggs on A, you can still declare there's a preference for "A"; it's just subject to the conditions of the study.
| null | CC BY-SA 4.0 | null | 2023-03-22T22:23:17.617 | 2023-03-22T22:35:58.967 | 2023-03-22T22:35:58.967 | 8013 | 8013 | null |
610372 | 1 | null | null | 1 | 26 | I have modeled a count variable for a study that occurs over 3.5 years. The regression is a mixed effects ZINB with a random intercept for county and an offset for population. It was suggested that I add a dummy variable for the year but when I do, then it creates a problem with the quantile deviations. Here is the plot (ZINB without year as a predictor variable) [](https://i.stack.imgur.com/WUAst.jpg). I was a bit concerned about the lowest quantile but this is the best I've been able to get. When I added a year, this happened. This is the ZINB with year as factor variable. [](https://i.stack.imgur.com/ZpdWY.jpg). I did also try year as both numeric and factor variable. It would not converge with year as a random intercept, presumably due to the relatively short duration of the study (I read it should be >5 years to use as a random intercept). Based on these residuals, plotted with DHARMa, I have a preference for the first one but wonder how much to weigh these plots in my decision.
| Is it ok to NOT include year as a dummy variable in a random intercept ZINB regression? Problem with residuals plot from DHARMa | CC BY-SA 4.0 | null | 2023-03-22T22:23:33.733 | 2023-05-12T22:32:20.477 | 2023-03-29T01:40:56.153 | 11887 | 205125 | [
"mixed-model",
"panel-data",
"residuals"
] |
610373 | 1 | null | null | 1 | 26 | The [answer to "Do we have to tune the number of trees in a random forest?"](https://stats.stackexchange.com/a/348246/228809) suggests using as large a number of trees in a forest as possible. Is there a rule-of-thumb for choosing this "large enough number of trees" a priori? Perhaps as a function of the number of observations and number of explanatory variables?
| Determining the number of trees in a random forest model a priori fitting | CC BY-SA 4.0 | null | 2023-03-22T22:30:40.137 | 2023-03-22T22:30:40.137 | null | null | 228809 | [
"random-forest",
"hyperparameter",
"law-of-large-numbers"
] |
610374 | 1 | null | null | 1 | 11 | The policy was implemented all over the country and applies to everyone. Can I do difference in difference looking at yearly total emissions data of the country before the policy was implemented vs after the policy was implemented? without using a control group
| I want to determine the impact of a policy on carbon emissions in a single country by doing difference in difference but don't have a control group | CC BY-SA 4.0 | null | 2023-03-22T22:50:15.443 | 2023-03-22T22:50:15.443 | null | null | 383903 | [
"regression",
"difference-in-difference",
"control-group"
] |
610375 | 1 | 610607 | null | 1 | 80 | My response is a ratio (length of a discoloration disease in the plant divided by the height of the plant), so it always lies in (0, 1). Actually, sometimes it could be [0, 1), i.e., including zeros.
As I have some zeros, I transformed them accordingly with `betareg` [vignette](https://cran.r-project.org/web/packages/betareg/vignettes/betareg.pdf):
>
a useful transformation in practice is (y · (n − 1) + 0.5)/n where n is the sample size (Smithson and Verkuilen 2006).
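In code, the vignette's transformation reads as follows (a minimal sketch; `squeeze01` is my own name for the helper):

```r
# Smithson & Verkuilen (2006): squeeze [0, 1] into the open interval (0, 1)
# so that exact zeros (and ones) become admissible for beta regression.
squeeze01 <- function(y, n = length(y)) {
  (y * (n - 1) + 0.5) / n
}

squeeze01(c(0, 0.25, 1))
# → 0.1666667 0.3333333 0.8333333
```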
By the way, this is the (transformed) response distribution:
```
summary(data_nonna$tpercent_discoloration)
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.0003636 0.0380720 0.1726520 0.2158268 0.3462657 0.9408556
```
My beta regression model is as follows:
```
library(betareg)
library(emmeans)
b1 <- betareg(
tpercent_discoloration ~ cv + zone_line + severity,
link = 'logit',
data = data_nonna
)
```
where `cv` is a factor of different cultivars (18 levels), `zone_line` is a factor of 0 or 1, and `severity` is another factor with levels 1, 2, 3, 4, or 5, so all my predictors are factors.
As one of my objectives is to compare the cultivar levels, i.e., which one has the lowest percentage of discoloration, which one has the highest, etc., and to check whether they differ, I started to use `emmeans`.
Firstly I plotted the comparisons and saw that all the arrows were identical (the same lower and upper values for every `cv`), so I switched to `plotit = F` to look at the data being plotted.
Here are the results:
```
emm_betareg <- emmeans(b1, specs = 'cv', type = 'response')
comps <- plot(emm_betareg, CIs = F, comparisons = T, plotit = F)
comps
cv the.emmean SE df asymp.LCL asymp.UCL pri.fac lcmpl rcmpl
CPLRC5007 0.1584522 0.02699608 Inf 0.10554081 0.2113635 CPLRC5007 0.1051907 0.2190178
CPLRC5663 0.1485528 0.02561763 Inf 0.09834319 0.1987625 CPLRC5663 0.1051907 0.2190178
DK4866 0.1864413 0.02767189 Inf 0.13220539 0.2406772 DK4866 0.1051907 0.2190178
DT97-4290 0.1613726 0.02739499 Inf 0.10767936 0.2150657 DT97-4290 0.1051907 0.2190178
Exp1_Stine39LA02 0.1157786 0.02077353 Inf 0.07506324 0.1564940 Exp1_Stine39LA02 0.1051907 0.2190178
Exp2_XC3810 0.1503152 0.02642073 Inf 0.09853150 0.2020988 Exp2_XC3810 0.1051907 0.2190178
Jack 0.1073797 0.02001452 Inf 0.06815195 0.1466074 Jack NA 0.2190178
JTN-4307 0.1835275 0.03059703 Inf 0.12355838 0.2434965 JTN-4307 0.1051907 0.2190178
JTN-5208 0.1099293 0.02082648 Inf 0.06911017 0.1507485 JTN-5208 0.1051907 0.2190178
JTN-5308 0.1290918 0.02378328 Inf 0.08247745 0.1757062 JTN-5308 0.1051907 0.2190178
K07-1544 0.1690793 0.02827543 Inf 0.11366044 0.2244981 K07-1544 0.1051907 0.2190178
LS980358 0.1231427 0.02113839 Inf 0.08171223 0.1645732 LS980358 0.1051907 0.2190178
MorsoyRT5388N 0.1901965 0.03489674 Inf 0.12180018 0.2585929 MorsoyRT5388N 0.1051907 0.2190178
NKBrandS39-A3 0.1328391 0.02656402 Inf 0.08077458 0.1849036 NKBrandS39-A3 0.1051907 0.2190178
Osage 0.1519922 0.02967803 Inf 0.09382435 0.2101601 Osage 0.1051907 0.2190178
Pharaoh 0.2168288 0.04980157 Inf 0.11921949 0.3144381 Pharaoh 0.1051907 NA
R01581F 0.1949003 0.03568743 Inf 0.12495428 0.2648464 R01581F 0.1051907 0.2190178
Spencer 0.1163156 0.02318274 Inf 0.07087824 0.1617529 Spencer 0.1051907 0.2190178
Results are averaged over the levels of: zone_line, severity
Confidence level used: 0.95
```
As you can see, both `lcmpl` and `rcmpl` are the same for each `cv` level.
We also have two NAs, which, as I understand, is because there is no need to compare the lowest level to the left or the highest level to the right.
Also, the results say:
>
Results are averaged over the levels of: zone_line, severity
so I thought to check the contingency table for the predictors:
```
table(data_nonna$cv, data_nonna$zone_line)
0 1
CPLRC5007 48 22
CPLRC5663 72 8
DK4866 67 13
DT97-4290 75 5
Exp1_Stine39LA02 67 13
Exp2_XC3810 78 2
Jack 79 1
JTN-4307 61 8
JTN-5208 79 1
JTN-5308 78 1
K07-1544 80 0
LS980358 69 10
MorsoyRT5388N 78 2
NKBrandS39-A3 74 5
Osage 74 5
Pharaoh 38 2
R01581F 79 1
Spencer 72 8
```
```
table(data_nonna$cv, data_nonna$severity)
1 2 3 4 5
CPLRC5007 2 6 17 40 5
CPLRC5663 3 6 23 41 7
DK4866 2 5 26 34 13
DT97-4290 0 10 29 36 5
Exp1_Stine39LA02 4 5 28 40 3
Exp2_XC3810 3 1 35 31 10
Jack 0 3 14 47 16
JTN-4307 2 9 14 37 7
JTN-5208 3 15 27 30 5
JTN-5308 4 10 24 37 4
K07-1544 0 5 29 38 8
LS980358 2 5 16 35 21
MorsoyRT5388N 1 9 24 38 8
NKBrandS39-A3 3 8 24 33 11
Osage 1 6 23 39 10
Pharaoh 1 5 5 18 11
R01581F 5 8 12 28 27
Spencer 2 4 18 45 11
```
We have different values, so I expected to have different "arrows" for each `cv`.
Sorry if I'm making a silly mistake, but I don't know what is going on.
Any idea what could be causing this problem?
| emmeans for betareg is giving me identical arrow ranges when plotting with comparisons = T | CC BY-SA 4.0 | null | 2023-03-22T23:17:12.060 | 2023-03-27T04:07:17.873 | null | null | 252638 | [
"lsmeans",
"beta-regression"
] |
610377 | 1 | null | null | 1 | 72 | I have run a gam regression on data that looks like the following:
|age |frequency |person_years |
|---|---------|------------|
|10 |1 |12796.5 |
|12 |2 |13049.5 |
|13 |5 |13220.0 |
|14 |13 |13313.0 |
|15 |27 |13516.5 |
|16 |18 |13778.5 |
using the following gam function from mgcv
```
incidence.gam <- gam(
frequency ~ offset(log(person_years)) + s(age),
data = data,
family = poisson(link = "log")
)
```
I would like to create a prediction of frequency for all ages using emmeans. I have tried the following
```
emmeans::emmeans(
incidence.gam,
~ age,
cov.reduce = FALSE,
type = "response",
interval = "confidence"
) %>% broom::tidy(conf.int = TRUE)
```
However, I get the following error and am at a loss as to how to solve it.
```
Error in `[.data.frame`(tbl, , vars, drop = FALSE) :
undefined columns selected
Error in (function (object, at, cov.reduce = mean, cov.keep = get_emm_option("cov.keep"), :
Perhaps a 'data' or 'params' argument is needed
```
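The second message itself hints at a possible fix; a sketch (untested against this exact dataset) is to hand `emmeans` the fitting data explicitly:

```r
library(emmeans)

# Passing the fitting data explicitly lets emmeans rebuild the
# reference grid when it cannot recover the data from the gam fit.
emmeans(
  incidence.gam,
  ~ age,
  cov.reduce = FALSE,
  data = data,       # same data frame used to fit the model
  type = "response"
)
```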
| Can emmeans be used to predict an outcome using a gam model? | CC BY-SA 4.0 | null | 2023-03-22T23:48:38.850 | 2023-03-24T16:02:47.473 | 2023-03-22T23:53:02.947 | 383905 | 383905 | [
"r",
"predictive-models",
"poisson-regression",
"mgcv",
"lsmeans"
] |
610378 | 1 | null | null | 1 | 28 | I conducted a 2x1 chi-square test and need to report it in APA style. There are 280 people in the sample. Do I deduct the df from the sample size, e.g., X2(1, N = 279)?
Thank you
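For what it's worth, the df for a 2x1 (goodness-of-fit) chi-square is k − 1 = 1 regardless of N, and the N in the APA report is the full sample size, not N minus the df. A sketch with made-up counts (not the study's data):

```r
# Hypothetical counts summing to N = 280; the report would be
# X2(1, N = 280) = ..., not N = 279.
obs <- c(150, 130)
chisq.test(obs, p = c(0.5, 0.5))
```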
| Chi square reporting df for 2x1 | CC BY-SA 4.0 | null | 2023-03-22T23:51:58.510 | 2023-03-23T00:08:48.817 | null | null | 211553 | [
"chi-squared-test",
"degrees-of-freedom"
] |