Columns: Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
610867
2
null
34578
0
null
There are also countless other methods to estimate survival functions in special scenarios. For example, there are multiple methods to adjust survival functions for confounders ([https://onlinelibrary.wiley.com/doi/full/10.1002/sim.9681](https://onlinelibrary.wiley.com/doi/full/10.1002/sim.9681)). You may also use a Bayesian approach ([https://pubmed.ncbi.nlm.nih.gov/34548947/](https://pubmed.ncbi.nlm.nih.gov/34548947/)), and there are extensions for time-varying variables ([https://pubmed.ncbi.nlm.nih.gov/12459796/](https://pubmed.ncbi.nlm.nih.gov/12459796/)), etc. Which method to use depends almost entirely on your situation.
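For the baseline (unadjusted) case, the usual starting point is the Kaplan-Meier product-limit estimator; a minimal NumPy sketch (a hypothetical helper for illustration, not from any of the linked papers):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate of the survival function.

    times  : observed times (event or censoring)
    events : 1 if the event occurred at that time, 0 if censored
    Returns the distinct event times and S(t) just after each one.
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    event_times = np.unique(times[events == 1])
    s, surv = 1.0, []
    for t in event_times:
        n_at_risk = np.sum(times >= t)                # still under observation at t
        d = np.sum((times == t) & (events == 1))      # events occurring at t
        s *= 1.0 - d / n_at_risk                      # product-limit update
        surv.append(s)
    return event_times, np.array(surv)

# toy data: three subjects, all experiencing the event
t, s = kaplan_meier([1, 2, 3], [1, 1, 1])
print(s)  # ≈ [0.667, 0.333, 0.0]
```

In practice a dedicated library handles ties, censoring, and confidence bands for you; the sketch only shows the core product-limit idea.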
null
CC BY-SA 4.0
null
2023-03-27T11:03:13.677
2023-03-27T11:03:13.677
null
null
305737
null
610869
2
null
610804
1
null
Suppose you created a dataframe in which every row represents a person, columns represent foods, and each value is the frequency of consumption. Then, in mathematical terms, every row is a vector, and you can compute the cosine of the angle between two rows - in other words, cosine similarity. This returns a continuous value, but I am not sure whether it represents what you intend. Of course, you will need to encode the data as numbers first.
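A minimal sketch of the idea in Python (hypothetical food-frequency rows; plain NumPy rather than a dedicated library):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two frequency vectors."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# hypothetical rows: two people's consumption frequencies for four foods
person_a = [3, 0, 1, 2]
person_b = [6, 0, 2, 4]  # same proportions, double the amounts

print(cosine_similarity(person_a, person_b))  # ≈ 1.0: identical "direction" of preferences
```

Note that cosine similarity ignores overall magnitude, so two people with proportionally identical diets score 1 even if one eats twice as much; whether that is desirable depends on what you want the measure to capture.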
null
CC BY-SA 4.0
null
2023-03-27T11:54:15.393
2023-03-27T11:54:15.393
null
null
362382
null
610870
1
null
null
0
50
I have a continuous dependent variable and one categorical independent variable (with three levels). For the three levels, I have 42, 33, and 45 observations respectively, so, in short, I have unequal sample sizes. I want to run a linear model, but I am unsure how to account for the three levels' different sample sizes. Is using weights in a linear model the right approach? If so, how do I assign them properly?
Linear model with different sample size
CC BY-SA 4.0
null
2023-03-27T12:00:25.360
2023-03-29T23:50:25.100
2023-03-29T23:46:46.377
11887
365972
[ "r", "regression", "sample-size", "linear", "model" ]
610871
1
null
null
2
32
I have two decision trees that give each individual a rating based on the same 4 parameters (two quantitative and one qualitative). They are supposed to be exactly the same, but give different results (which indicates some error). I have the thresholds for one of them, but the ones for the other decision tree remain unknown. I need to understand what happened to these results, and it would really help me to know the differences between the two models. I have access to both the inputs and outputs of the models. I already tried to recreate the decision trees using `rpart` in R, but it didn't succeed. ``` rpart(Model_output ~ Parameter1 + Parameter2 + Parameter3 + Parameter4, data=data, method="class", control=rpart.control(cp=0)) ``` (I chose the full tree option because I assumed it would be simple, since the initial model is supposed to be simple.) Can someone help me?
How to identify differences between two decision trees
CC BY-SA 4.0
null
2023-03-27T12:04:33.490
2023-03-31T23:55:37.177
2023-03-31T23:55:37.177
11887
384253
[ "r", "clustering", "cart" ]
610872
1
null
null
1
34
I have 4 groups, each with counts for 7 days:

```
Days G1 G2 G3 G4
Day1 85 16 32 92
Day2 23  9 11 33
Day3 41  2 21 27
Day4 19  6  6 15
Day5  6  1  3  7
Day6 28  5  2  6
Day7 66 11 15 27
```

I would like to compare the groups by the proportion of values across the days. Is there any statistical test I can use to test, for instance, whether the proportion of values distributed over the 7 days differs statistically between `G1` and `G2`? Here is the data in `dput` format:

```
structure(list(days = c("Day1", "Day2", "Day3", "Day4", "Day5", "Day6", "Day7"),
               G1 = c(85L, 23L, 41L, 19L, 6L, 28L, 66L),
               G2 = c(16L, 9L, 2L, 6L, 1L, 5L, 11L),
               G3 = c(32L, 11L, 21L, 6L, 3L, 2L, 15L),
               G4 = c(92L, 33L, 27L, 15L, 7L, 6L, 27L)),
          class = "data.frame", row.names = c(NA, -7L))
```
Compare proportion of value between groups in R
CC BY-SA 4.0
null
2023-03-27T12:04:34.540
2023-03-29T23:45:41.190
2023-03-29T23:42:20.733
11887
197361
[ "r", "statistical-significance", "count-data" ]
610874
1
610879
null
1
102
I am trying to run a Friedman Test on some data in R. I would like to measure a 3-way interaction with a repeated measure. However, I keep getting a persistent error message when I run the code and cannot figure out what the issue is. Below is a description of my data with a `dput` for reproducibility: - cl_conc (response variable) - soil_type (explanatory variable 1; 4 levels) - treatment (explanatory variable 2; 4 levels) - days (repeated measure; 3 levels) -core_id (subject variable; 48 levels) ``` leach.fried <- friedman.test(cl_conc ~ soil_type*treatment*days | core_id, data = leach2) ``` When I run the code I get this error but am unsure what exactly it means after researching it ``` Error in friedman.test.formula(cl_conc ~ soil_type * treatment * days | : incorrect specification for 'formula' ``` Here is my data: ``` leach2 <- structure( list( core = c( "MS", "MS", "MS", "ML", "ML", "ML", "MK", "MK", "MK", "MC", "MC", "MC", "FS", "FS", "FS", "FL", "FL", "FL", "FK", "FK", "FK", "FC", "FC", "FC", "MS", "MS", "MS", "ML", "ML", "ML", "MK", "MK", "MK", "MC", "MC", "MC", "FS", "FS", "FS", "FL", "FL", "FL", "FK", "FK", "FK", "FC", "FC", "FC", "MS", "MS", "MS", "ML", "ML", "ML", "MK", "MK", "MK", "MC", "MC", "MC", "FS", "FS", "FS", "FL", "FL", "FL", "FK", "FK", "FK", "FC", "FC", "FC", "CS", "CL", "CK", "CC", "PS", "PL", "PK", "PC", "CS", "CL", "CK", "CC", "PS", "PL", "PK", "PC", "CS", "CL", "CK", "CC", "PS", "PL", "PK", "PC", "CS", "CL", "CK", "CC", "PS", "PL", "PK", "PC", "CS", "CL", "CK", "CC", "PS", "PL", "PK", "PC", "CS", "CL", "CK", "CC", "PS", "PL", "PK", "PC", "CS", "CK", "CL", "CC", "PS", "PL", "PK", "PC", "CS", "CL", "CK", "CC", "PS", "PL", "PK", "PC", "CS", "CL", "CK", "CC", "PS", "PL", "PK", "PC" ), core_id = c( "MS1", "MS1", "MS1", "ML1", "ML1", "ML1", "MK1", "MK1", "MK1", "MC1", "MC1", "MC1", "FS1", "FS1", "FS1", "FL1", "FL1", "FL1", "FK1", "FK1", "FK1", "FC1", "FC1", "FC1", "MS2", "MS2", "MS2", "ML2", "ML2", "ML2", "MK2", "MK2", "MK2", "MC2", "MC2", 
"MC2", "FS2", "FS2", "FS2", "FL2", "FL2", "FL2", "FK2", "FK2", "FK2", "FC2", "FC2", "FC2", "MS3", "MS3", "MS3", "ML3", "ML3", "ML3", "MK3", "MK3", "MK3", "MC3", "MC3", "MC3", "FS3", "FS3", "FS3", "FL3", "FL3", "FL3", "FK3", "FK3", "FK3", "FC3", "FC3", "FC3", "CS1", "CL1", "CK1", "CC1", "PS1", "PL1", "PK1", "PC1", "CS2", "CL2", "CK2", "CC2", "PS2", "PL2", "PK2", "PC2", "CS3", "CL3", "CK3", "CC3", "PS3", "PL3", "PK3", "PC3", "CS1", "CL1", "CK1", "CC1", "PS1", "PL1", "PK1", "PC1", "CS2", "CL2", "CK2", "CC2", "PS2", "PL2", "PK2", "PC2", "CS3", "CL3", "CK3", "CC3", "PS3", "PL3", "PK3", "PC3", "CS1", "CK1", "CL1", "CC1", "PS1", "PL1", "PK1", "PC1", "CS2", "CL2", "CK2", "CC2", "PS2", "PL2", "PK2", "PC2", "CS3", "CL3", "CK3", "CC3", "PS3", "PL3", "PK3", "PC3" ), soil_type = c( "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "ESL", "ESL", "ESL", "ESL", "FAH", "FAH", "FAH", "FAH", "ESL", "ESL", "ESL", "ESL", "FAH", "FAH", "FAH", "FAH", "ESL", "ESL", "ESL", "ESL", "FAH", "FAH", "FAH", "FAH", "ESL", "ESL", "ESL", "ESL", "FAH", "FAH", "FAH", "FAH", "ESL", "ESL", "ESL", "ESL", "FAH", "FAH", "FAH", "FAH", "ESL", "ESL", "ESL", "ESL", "FAH", "FAH", "FAH", "FAH", "ESL", "ESL", "ESL", "ESL", "FAH", "FAH", "FAH", "FAH", "ESL", "ESL", "ESL", "ESL", "FAH", "FAH", "FAH", "FAH", "ESL", "ESL", "ESL", "ESL", "FAH", "FAH", "FAH", "FAH" ), treatment = c( "Tl", "Tl", "Tl", "SM", "SM", "SM", "KCl", "KCl", "KCl", "Control", "Control", "Control", "Tl", "Tl", "Tl", "SM", "SM", "SM", "KCl", "KCl", "KCl", "Control", 
"Control", "Control", "Tl", "Tl", "Tl", "SM", "SM", "SM", "KCl", "KCl", "KCl", "Control", "Control", "Control", "Tl", "Tl", "Tl", "SM", "SM", "SM", "KCl", "KCl", "KCl", "Control", "Control", "Control", "Tl", "Tl", "Tl", "SM", "SM", "SM", "KCl", "KCl", "KCl", "Control", "Control", "Control", "Tl", "Tl", "Tl", "SM", "SM", "SM", "KCl", "KCl", "KCl", "Control", "Control", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "KCl", "SM", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control" ), days = c( 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L ), cl_conc = c( 18.1, 18.1, 17.4, 77.1, 81.4, 66.8, 19.4, 22.3, 36.9, 1.9, 1.2, 0.6, 27.8, 28.3, 28.3, 107.8, 150.3, 94.6, 84.8, 53.4, 51.9, 9.1, 4.25, 1.9, 19.8, 20.7, 20.5, 102, 56.7, 47.4, 33.4, 15.3, 19.9, 2, 1.2, 0.8, 37.1, 39.8, 34.8, 81.9, 67.5, 56, 41.1, 38.3, 30.9, 12.4, 6, 3.1, 27.8, 27.8, 24.9, 79.7, 65.5, 55.2, 13.5, 20.4, 24.7, 1.6, 1.2, 0.7, 42.7, 40.5, 30.1, 121.2, 73.6, 38, 
53, 38.5, 22.3, 4.7, 1.9, 0.85, 46.5, 52.6, 32.9, 2.8, 45.1, 1.3, 51.2, 2.6, 47.59251129, 68.3, 38.8, 5.4, 34.1, 66.7, 23.51266468, 0.6, 34.2, 55.7, 23.8, 5, 42.1, 47.9, 44.3, 0.8, 56.23151874, 81.2, 36.1, 1.6, 36.3, 48.2, 35.6, 1.5, 44.8, 80.9, 34.66600908, 3.1, 33.3, 81.5, 20.2, 0.4, 40.1, 66.8, 24.5, 3.6, 39, 68.2, 36, 0.303367677, 31.1, 23.2, 75.7, 0.6, 26.2, 45.3, 21.3, 0.6, 33.76030379, 47.5, 20.5, 1.1, 28.6, 65.9, 18.9, 0.2, 30.2, 65.5, 23.3, 2.7, 23.9, 64, 24.7, 0.1 ), cl_load = c( 0.058825, 0.0543, 0.0609, 0.26985, 0.26455, 0.2171, 0.0582, 0.0669, 0.119925, 0.0057, 0.0036, 0.00195, 0.09035, 0.0849, 0.0849, 0.3773, 0.4509, 0.3311, 0.2756, 0.1602, 0.1557, 0.03185, 0.010625, 0.006175, 0.06435, 0.07245, 0.07175, 0.3315, 0.19845, 0.1659, 0.10855, 0.05355, 0.064675, 0.0065, 0.0039, 0.0026, 0.1113, 0.1393, 0.1044, 0.266175, 0.185625, 0.168, 0.113025, 0.105325, 0.0927, 0.0372, 0.018, 0.010075, 0.09035, 0.09035, 0.0747, 0.2391, 0.22925, 0.1656, 0.0405, 0.0663, 0.06175, 0.0048, 0.0036, 0.0021, 0.1281, 0.131625, 0.097825, 0.3939, 0.2392, 0.114, 0.17225, 0.125125, 0.0669, 0.015275, 0.006175, 0.0023375, 0.11625, 0.14465, 0.0987, 0.0084, 0.0902, 0.0039, 0.1152, 0.00715, 0.118981278, 0.2049, 0.1164, 0.0135, 0.093775, 0.2001, 0.064659828, 0.0018, 0.0855, 0.1671, 0.06545, 0.0125, 0.094725, 0.131725, 0.099675, 0.0024, 0.177129284, 0.2436, 0.1083, 0.0048, 0.1089, 0.15665, 0.1068, 0.0045, 0.1344, 0.2427, 0.11266453, 0.010075, 0.0999, 0.2445, 0.0606, 0.0012, 0.1203, 0.2004, 0.0735, 0.0126, 0.117, 0.2046, 0.117, 0.000985945, 0.0933, 0.0696, 0.2268, 0.0018, 0.0917, 0.15855, 0.0639, 0.00195, 0.101280911, 0.1425, 0.0615, 0.003025, 0.0858, 0.214175, 0.0567, 0.00065, 0.1057, 0.212875, 0.075725, 0.0081, 0.0717, 0.192, 0.0741, 3e-04 ) ), row.names = c( 2L, 3L, 4L, 6L, 7L, 8L, 10L, 11L, 12L, 14L, 15L, 16L, 18L, 19L, 20L, 22L, 23L, 24L, 26L, 27L, 28L, 30L, 31L, 32L, 34L, 35L, 36L, 38L, 39L, 40L, 42L, 43L, 44L, 46L, 47L, 48L, 50L, 51L, 52L, 54L, 55L, 56L, 58L, 59L, 60L, 62L, 63L, 64L, 
66L, 67L, 68L, 70L, 71L, 72L, 74L, 75L, 76L, 78L, 79L, 80L, 82L, 83L, 84L, 86L, 87L, 88L, 90L, 91L, 92L, 94L, 95L, 96L, 121L, 122L, 123L, 124L, 125L, 126L, 127L, 128L, 129L, 130L, 131L, 132L, 133L, 134L, 135L, 136L, 137L, 138L, 139L, 140L, 141L, 142L, 143L, 144L, 145L, 146L, 147L, 148L, 149L, 150L, 151L, 152L, 153L, 154L, 155L, 156L, 157L, 158L, 159L, 160L, 161L, 162L, 163L, 164L, 165L, 166L, 167L, 168L, 169L, 170L, 171L, 172L, 173L, 174L, 175L, 176L, 177L, 178L, 179L, 180L, 181L, 182L, 183L, 184L, 185L, 186L, 187L, 188L, 189L, 190L, 191L, 192L ), class = "data.frame" ) ```
Error when conducting a Friedman Test in R
CC BY-SA 4.0
null
2023-03-27T12:47:24.307
2023-03-27T13:05:36.760
2023-03-27T13:05:36.760
56940
382821
[ "r", "anova", "repeated-measures", "interaction", "friedman-test" ]
610876
2
null
597623
1
null
This can be found, e.g., in Calin and Udriste's [Geometric Modeling in Probability and Statistics](https://link.springer.com/book/10.1007/978-3-319-07779-6) (Proposition 1.6.3). Let us denote $p_\xi(x) := p(x;\xi)$. As you noted, $\int \partial_i p_\xi(x)~\mathrm{d}x = \partial_i \int p_\xi(x)~\mathrm{d}x = \partial_i(1) = 0$. With that, we can write $$ \int p_\xi(x) \partial_i \log p_\xi(x)~\mathrm{d}x = \int \partial_i p_\xi(x)~\mathrm{d}x = 0. $$ Differentiating this expression again, now with respect to $\xi^j$, gives $$ \begin{align*} &\partial_j \int p_\xi(x) \partial_i \log p_\xi(x)~\mathrm{d}x = 0\\ \iff &\int \left(\partial_j p_\xi(x)\right) \left(\partial_i \log p_\xi(x)\right)~\mathrm{d}x + \int p_\xi(x) \left(\partial_j\partial_i \log p_\xi(x)\right)~\mathrm{d}x = 0\\ \iff &\int \left(p_\xi(x) \partial_j \log p_\xi(x)\right) \left(\partial_i \log p_\xi(x)\right)~\mathrm{d}x + \int p_\xi(x) \left(\partial_j\partial_i \log p_\xi(x)\right)~\mathrm{d}x = 0\\ \iff &g_{ij}(\xi) = - E_\xi \left[\partial_j\partial_i \log p_\xi(x)\right]. \end{align*} $$
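As a quick numerical sanity check of this identity (a sketch, not part of the proof): for $p_\xi = N(\xi, 1)$ we have $\partial_\xi^2 \log p_\xi(x) = -1$ for every $x$, so $-E_\xi\left[\partial_\xi^2 \log p_\xi\right] = 1 = 1/\sigma^2$, the Fisher information.

```python
import math

def log_pdf(x, mu, sigma=1.0):
    """Log-density of N(mu, sigma^2) at x."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def d2_dmu2_log_pdf(x, mu, h=1e-3):
    """Central finite difference of log p w.r.t. the parameter mu."""
    return (log_pdf(x, mu + h) - 2 * log_pdf(x, mu) + log_pdf(x, mu - h)) / h**2

# -d^2/dmu^2 log p = 1/sigma^2 = 1 at every x, so no expectation over x is even needed here
val = -d2_dmu2_log_pdf(x=0.7, mu=0.2)
print(val)  # ≈ 1.0
```

The finite difference is exact up to rounding here because $\log p_\xi(x)$ is quadratic in $\mu$.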
null
CC BY-SA 4.0
null
2023-03-27T12:50:53.437
2023-03-27T12:52:58.297
2023-03-27T12:52:58.297
384237
384237
null
610879
2
null
610874
1
null
You have to construct the interaction yourself before calling the test, i.e. ``` leach2$inter <- interaction(as.factor(leach2$soil_type), as.factor(leach2$treatment), as.factor(leach2$days)) > table(leach2$inter) ESL.Control.4 FAH.Control.4 NCL .Control.4 WSL.Control.4 ESL.KCl.4 3 3 3 3 3 FAH.KCl.4 NCL .KCl.4 WSL.KCl.4 ESL.SM.4 FAH.SM.4 3 3 3 3 3 NCL .SM.4 WSL.SM.4 ESL.Tl.4 FAH.Tl.4 NCL .Tl.4 3 3 3 3 3 WSL.Tl.4 ESL.Control.11 FAH.Control.11 NCL .Control.11 WSL.Control.11 3 3 3 3 3 ESL.KCl.11 FAH.KCl.11 NCL .KCl.11 WSL.KCl.11 ESL.SM.11 3 3 3 3 3 FAH.SM.11 NCL .SM.11 WSL.SM.11 ESL.Tl.11 FAH.Tl.11 3 3 3 3 3 NCL .Tl.11 WSL.Tl.11 ESL.Control.18 FAH.Control.18 NCL .Control.18 3 3 3 3 3 WSL.Control.18 ESL.KCl.18 FAH.KCl.18 NCL .KCl.18 WSL.KCl.18 3 3 3 3 3 ESL.SM.18 FAH.SM.18 NCL .SM.18 WSL.SM.18 ESL.Tl.18 3 3 3 3 3 FAH.Tl.18 NCL .Tl.18 WSL.Tl.18 3 3 3 ``` However, it seems that you cannot apply the Friedman test to your data since you don't have exactly one observation in the response for each combination of levels of groups and blocks: ``` leach.fried <- friedman.test(cl_conc ~ inter|core_id, data = leach2) Error in friedman.test.default(mf[[1L]], mf[[2L]], mf[[3L]]) : not an unreplicated complete block design ```
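The same one-observation-per-cell requirement appears in other implementations too; for instance, a toy sketch with SciPy's `friedmanchisquare` (made-up numbers), where each argument is one treatment and position within it is the block:

```python
from scipy.stats import friedmanchisquare

# unreplicated complete block design: each list is one treatment,
# position i within a list is block i (exactly one observation per cell)
t1 = [8.0, 6.1, 7.2, 5.5, 9.0]
t2 = [7.1, 5.9, 6.8, 5.0, 8.2]
t3 = [6.0, 5.0, 6.1, 4.2, 7.5]

stat, p = friedmanchisquare(t1, t2, t3)
print(stat, p)  # treatment 1 ranks highest in every block, so the statistic is large
```

With three observations per cell, as in the question's data, the layout above cannot be constructed, which is exactly what the R error is saying.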
null
CC BY-SA 4.0
null
2023-03-27T13:01:28.633
2023-03-27T13:01:28.633
null
null
56940
null
610882
1
null
null
0
21
Suppose $X_1, X_2,\ldots, X_n$ are i.i.d. observations from a multivariate normal distribution $N(\mu,\Sigma)$ where $\Sigma$ is known. Furthermore, assume that $R$ is a given matrix and $r$ is a given vector. Use the likelihood ratio procedure to produce a test statistic for $$H_0 : R\mu = r\quad\text{vs}\quad H_1 : R\mu \neq r.$$ Give explicit formulae for the test statistic and the critical values. I am studying completely different things, but this assignment is part of a seminar which is only for some additional points. I would be very grateful if someone could explain the answer to me step by step.
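A sketch of the standard route (not a full derivation): under $H_0$, $R\bar X \sim N_q\!\left(r,\; R\Sigma R^\top/n\right)$ with $q = \operatorname{rank}(R)$, and maximizing the likelihood under the constraint $R\mu = r$ via a Lagrange multiplier yields, after simplification,

$$ -2\log\Lambda = n\,(R\bar X - r)^\top \left(R\Sigma R^\top\right)^{-1} (R\bar X - r) \;\sim\; \chi^2_q \quad \text{under } H_0, $$

so one rejects at level $\alpha$ when the statistic exceeds $\chi^2_{q,\,1-\alpha}$. Because $\Sigma$ is known, the distribution is exactly (not just asymptotically) chi-squared.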
Subspace test for multivariate normal distribution
CC BY-SA 4.0
null
2023-03-27T13:13:26.587
2023-03-27T13:43:21.540
2023-03-27T13:18:08.070
56940
384248
[ "hypothesis-testing", "matrix", "multivariate-normal-distribution", "likelihood-ratio", "lagrange-multipliers" ]
610884
1
611253
null
1
59
I implement a GARCH-DCC model in Python, for number of asset = 2. My implementation is the following : ``` def garch_dcc_specification( self, eps_last: Optional[np.ndarray], cond_var_last: Optional[np.ndarray], q_last_t: Optional[np.ndarray], ) -> SpecResult: if eps_last is None: eps_last = np.zeros(self.n) if q_last_t is None: q_last_t = np.zeros((self.n, self.n)) epsilon_square_last = np.array([eps_last_i ** 2 for eps_last_i in eps_last]) # first, evaluate the garch cond. variance. # (garch_alpha, beta and omega are 1D arrays) cond_var_t = (self.garch_omega + self.garch_alpha * epsilon_square_last + self.garch_beta * (cond_var_last if cond_var_last is not None else np.zeros(self.n))) d_t = np.diag([math.sqrt(v) for v in cond_var_t]) if cond_var_last is not None: d_last_t = np.diag([math.sqrt(v) for v in cond_var_last]) v_last_t = inv(d_last_t).dot(eps_last) else: v_last_t = np.zeros(self.n) # DCC specification for the conditional correlation # note: DO NOT DO v[t - 1].dot(v[t - 1].transpose()) : since v[t - 1] is a 1D array, result would be a number q_t = (self.dcc_r * (1 - self.dcc_alpha - self.dcc_beta) + self.dcc_alpha * v_last_t.reshape(1, -1).transpose().dot(v_last_t.reshape(1, -1)) + self.dcc_beta * q_last_t) # standardize q to get a real correlation matrix r_t = np.zeros((self.n, self.n)) for i in range(self.n): for j in range(self.n): r_t[i][j] = q_t[i][j] / math.sqrt(q_t[i][i] * q_t[j][j]) # transforms to a variance-covariance matrix by incorporing the cond variances h_t = d_t.dot(r_t).dot(d_t) return GarchDccParams.SpecResult(cond_var = cond_var_t, q = q_t, h = h_t) def generate_innovations(self, length: int) -> np.ndarray: innovations = np.zeros((length + 1, self.n)) spec_res: List[GarchDccParams.SpecResult] = [] for t in range(0, length + 1): spec_res.append(self.garch_dcc_specification( eps_last = innovations[t - 1] if t != 0 else None, cond_var_last = spec_res[t - 1].cond_var if t != 0 else None, q_last_t = spec_res[t - 1].q if t != 0 else None, )) 
innovations[t] = np.random.multivariate_normal(np.zeros(self.n), spec_res[t].h) ``` To check my implementation, I verify that the empirical Pearson correlation coefficient of `generate_innovations()` (computed with `np.corrcoef`) is equal to the correlation coefficient in the input `self.dcc_r` matrix, which should be the unconditional correlation of the overall generated innovations, if I understand correctly. When running with constant variance in the GARCH (garch alpha and beta = 0), I get an empirical Pearson coefficient equal to the one I set in the input `self.dcc_r`. However, when the conditional variance is moving (garch alpha and beta > 0), I do not get the same coefficient: I always get a smaller empirical correlation coefficient than the one expected from `dcc_r`. For example, when running with 10000 points and an input `dcc_r` correlation coefficient of 0.9, I get an empirical unconditional correlation in my generated innovations of around 0.75. PS: To simplify, I set dcc_alpha and dcc_beta to 0 so only the dcc_r matrix is taken into account (we have a GARCH-CCC model instead of a GARCH-DCC; the conditional correlation is always the same). The "problem" (if it is one) still occurs when GARCH alpha/beta > 0. Is this normal?
GARCH CCC/DCC : empirical correlation coefficient different than the one in input CCC matrix
CC BY-SA 4.0
null
2023-03-27T13:29:01.330
2023-03-30T12:10:07.540
2023-03-27T19:02:17.803
372184
372184
[ "garch" ]
610885
1
610888
null
6
281
I am trying to run a Friedman Test in `R` with a repeated measure, however, my data do not qualify as an unreplicated complete block design. I am wondering what alternative test I can run as a 3-way repeated measures ANOVA and Friedman test are not appropriate. I am interested in knowing the interaction that soil_type, treatment, and days have on the cl_conc of my subjects. There are 48 subjects that were tested over a period of 3 days (4, 11, and 18). Description of my data: - cl_conc (response variable) - soil_type (explanatory variable 1; 4 levels) - treatment (explanatory variable 2; 4 levels) - days (repeated measure; 3 levels) - core_id (subject variable; 48 levels) Data for reproducibility: ``` leach2 <- structure( list( core = c( "MS", "MS", "MS", "ML", "ML", "ML", "MK", "MK", "MK", "MC", "MC", "MC", "FS", "FS", "FS", "FL", "FL", "FL", "FK", "FK", "FK", "FC", "FC", "FC", "MS", "MS", "MS", "ML", "ML", "ML", "MK", "MK", "MK", "MC", "MC", "MC", "FS", "FS", "FS", "FL", "FL", "FL", "FK", "FK", "FK", "FC", "FC", "FC", "MS", "MS", "MS", "ML", "ML", "ML", "MK", "MK", "MK", "MC", "MC", "MC", "FS", "FS", "FS", "FL", "FL", "FL", "FK", "FK", "FK", "FC", "FC", "FC", "CS", "CL", "CK", "CC", "PS", "PL", "PK", "PC", "CS", "CL", "CK", "CC", "PS", "PL", "PK", "PC", "CS", "CL", "CK", "CC", "PS", "PL", "PK", "PC", "CS", "CL", "CK", "CC", "PS", "PL", "PK", "PC", "CS", "CL", "CK", "CC", "PS", "PL", "PK", "PC", "CS", "CL", "CK", "CC", "PS", "PL", "PK", "PC", "CS", "CK", "CL", "CC", "PS", "PL", "PK", "PC", "CS", "CL", "CK", "CC", "PS", "PL", "PK", "PC", "CS", "CL", "CK", "CC", "PS", "PL", "PK", "PC" ), core_id = c( "MS1", "MS1", "MS1", "ML1", "ML1", "ML1", "MK1", "MK1", "MK1", "MC1", "MC1", "MC1", "FS1", "FS1", "FS1", "FL1", "FL1", "FL1", "FK1", "FK1", "FK1", "FC1", "FC1", "FC1", "MS2", "MS2", "MS2", "ML2", "ML2", "ML2", "MK2", "MK2", "MK2", "MC2", "MC2", "MC2", "FS2", "FS2", "FS2", "FL2", "FL2", "FL2", "FK2", "FK2", "FK2", "FC2", "FC2", "FC2", "MS3", "MS3", "MS3", "ML3", "ML3", 
"ML3", "MK3", "MK3", "MK3", "MC3", "MC3", "MC3", "FS3", "FS3", "FS3", "FL3", "FL3", "FL3", "FK3", "FK3", "FK3", "FC3", "FC3", "FC3", "CS1", "CL1", "CK1", "CC1", "PS1", "PL1", "PK1", "PC1", "CS2", "CL2", "CK2", "CC2", "PS2", "PL2", "PK2", "PC2", "CS3", "CL3", "CK3", "CC3", "PS3", "PL3", "PK3", "PC3", "CS1", "CL1", "CK1", "CC1", "PS1", "PL1", "PK1", "PC1", "CS2", "CL2", "CK2", "CC2", "PS2", "PL2", "PK2", "PC2", "CS3", "CL3", "CK3", "CC3", "PS3", "PL3", "PK3", "PC3", "CS1", "CK1", "CL1", "CC1", "PS1", "PL1", "PK1", "PC1", "CS2", "CL2", "CK2", "CC2", "PS2", "PL2", "PK2", "PC2", "CS3", "CL3", "CK3", "CC3", "PS3", "PL3", "PK3", "PC3" ), soil_type = c( "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "WSL", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "NCL ", "ESL", "ESL", "ESL", "ESL", "FAH", "FAH", "FAH", "FAH", "ESL", "ESL", "ESL", "ESL", "FAH", "FAH", "FAH", "FAH", "ESL", "ESL", "ESL", "ESL", "FAH", "FAH", "FAH", "FAH", "ESL", "ESL", "ESL", "ESL", "FAH", "FAH", "FAH", "FAH", "ESL", "ESL", "ESL", "ESL", "FAH", "FAH", "FAH", "FAH", "ESL", "ESL", "ESL", "ESL", "FAH", "FAH", "FAH", "FAH", "ESL", "ESL", "ESL", "ESL", "FAH", "FAH", "FAH", "FAH", "ESL", "ESL", "ESL", "ESL", "FAH", "FAH", "FAH", "FAH", "ESL", "ESL", "ESL", "ESL", "FAH", "FAH", "FAH", "FAH" ), treatment = c( "Tl", "Tl", "Tl", "SM", "SM", "SM", "KCl", "KCl", "KCl", "Control", "Control", "Control", "Tl", "Tl", "Tl", "SM", "SM", "SM", "KCl", "KCl", "KCl", "Control", "Control", "Control", "Tl", "Tl", "Tl", "SM", "SM", "SM", "KCl", "KCl", "KCl", "Control", "Control", "Control", "Tl", "Tl", 
"Tl", "SM", "SM", "SM", "KCl", "KCl", "KCl", "Control", "Control", "Control", "Tl", "Tl", "Tl", "SM", "SM", "SM", "KCl", "KCl", "KCl", "Control", "Control", "Control", "Tl", "Tl", "Tl", "SM", "SM", "SM", "KCl", "KCl", "KCl", "Control", "Control", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "KCl", "SM", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control", "Tl", "SM", "KCl", "Control" ), days = c( 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 11L, 18L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L ), cl_conc = c( 18.1, 18.1, 17.4, 77.1, 81.4, 66.8, 19.4, 22.3, 36.9, 1.9, 1.2, 0.6, 27.8, 28.3, 28.3, 107.8, 150.3, 94.6, 84.8, 53.4, 51.9, 9.1, 4.25, 1.9, 19.8, 20.7, 20.5, 102, 56.7, 47.4, 33.4, 15.3, 19.9, 2, 1.2, 0.8, 37.1, 39.8, 34.8, 81.9, 67.5, 56, 41.1, 38.3, 30.9, 12.4, 6, 3.1, 27.8, 27.8, 24.9, 79.7, 65.5, 55.2, 13.5, 20.4, 24.7, 1.6, 1.2, 0.7, 42.7, 40.5, 30.1, 121.2, 73.6, 38, 53, 38.5, 22.3, 4.7, 1.9, 0.85, 46.5, 52.6, 32.9, 2.8, 45.1, 1.3, 51.2, 2.6, 47.59251129, 68.3, 38.8, 5.4, 34.1, 66.7, 
23.51266468, 0.6, 34.2, 55.7, 23.8, 5, 42.1, 47.9, 44.3, 0.8, 56.23151874, 81.2, 36.1, 1.6, 36.3, 48.2, 35.6, 1.5, 44.8, 80.9, 34.66600908, 3.1, 33.3, 81.5, 20.2, 0.4, 40.1, 66.8, 24.5, 3.6, 39, 68.2, 36, 0.303367677, 31.1, 23.2, 75.7, 0.6, 26.2, 45.3, 21.3, 0.6, 33.76030379, 47.5, 20.5, 1.1, 28.6, 65.9, 18.9, 0.2, 30.2, 65.5, 23.3, 2.7, 23.9, 64, 24.7, 0.1 ), cl_load = c( 0.058825, 0.0543, 0.0609, 0.26985, 0.26455, 0.2171, 0.0582, 0.0669, 0.119925, 0.0057, 0.0036, 0.00195, 0.09035, 0.0849, 0.0849, 0.3773, 0.4509, 0.3311, 0.2756, 0.1602, 0.1557, 0.03185, 0.010625, 0.006175, 0.06435, 0.07245, 0.07175, 0.3315, 0.19845, 0.1659, 0.10855, 0.05355, 0.064675, 0.0065, 0.0039, 0.0026, 0.1113, 0.1393, 0.1044, 0.266175, 0.185625, 0.168, 0.113025, 0.105325, 0.0927, 0.0372, 0.018, 0.010075, 0.09035, 0.09035, 0.0747, 0.2391, 0.22925, 0.1656, 0.0405, 0.0663, 0.06175, 0.0048, 0.0036, 0.0021, 0.1281, 0.131625, 0.097825, 0.3939, 0.2392, 0.114, 0.17225, 0.125125, 0.0669, 0.015275, 0.006175, 0.0023375, 0.11625, 0.14465, 0.0987, 0.0084, 0.0902, 0.0039, 0.1152, 0.00715, 0.118981278, 0.2049, 0.1164, 0.0135, 0.093775, 0.2001, 0.064659828, 0.0018, 0.0855, 0.1671, 0.06545, 0.0125, 0.094725, 0.131725, 0.099675, 0.0024, 0.177129284, 0.2436, 0.1083, 0.0048, 0.1089, 0.15665, 0.1068, 0.0045, 0.1344, 0.2427, 0.11266453, 0.010075, 0.0999, 0.2445, 0.0606, 0.0012, 0.1203, 0.2004, 0.0735, 0.0126, 0.117, 0.2046, 0.117, 0.000985945, 0.0933, 0.0696, 0.2268, 0.0018, 0.0917, 0.15855, 0.0639, 0.00195, 0.101280911, 0.1425, 0.0615, 0.003025, 0.0858, 0.214175, 0.0567, 0.00065, 0.1057, 0.212875, 0.075725, 0.0081, 0.0717, 0.192, 0.0741, 3e-04 ) ), row.names = c( 2L, 3L, 4L, 6L, 7L, 8L, 10L, 11L, 12L, 14L, 15L, 16L, 18L, 19L, 20L, 22L, 23L, 24L, 26L, 27L, 28L, 30L, 31L, 32L, 34L, 35L, 36L, 38L, 39L, 40L, 42L, 43L, 44L, 46L, 47L, 48L, 50L, 51L, 52L, 54L, 55L, 56L, 58L, 59L, 60L, 62L, 63L, 64L, 66L, 67L, 68L, 70L, 71L, 72L, 74L, 75L, 76L, 78L, 79L, 80L, 82L, 83L, 84L, 86L, 87L, 88L, 90L, 91L, 92L, 94L, 95L, 96L, 
121L, 122L, 123L, 124L, 125L, 126L, 127L, 128L, 129L, 130L, 131L, 132L, 133L, 134L, 135L, 136L, 137L, 138L, 139L, 140L, 141L, 142L, 143L, 144L, 145L, 146L, 147L, 148L, 149L, 150L, 151L, 152L, 153L, 154L, 155L, 156L, 157L, 158L, 159L, 160L, 161L, 162L, 163L, 164L, 165L, 166L, 167L, 168L, 169L, 170L, 171L, 172L, 173L, 174L, 175L, 176L, 177L, 178L, 179L, 180L, 181L, 182L, 183L, 184L, 185L, 186L, 187L, 188L, 189L, 190L, 191L, 192L ), class = "data.frame" ) ```
Alternative to Friedman Test in R
CC BY-SA 4.0
null
2023-03-27T13:30:51.613
2023-03-28T12:48:59.323
2023-03-28T08:34:08.917
171783
382821
[ "r", "anova", "repeated-measures", "nonparametric", "friedman-test" ]
610886
2
null
523733
0
null
The way that I think of regression is that we want to make accurate predictions of some variable of interest ($Y$). If we just have measurements of that variable, all we can use is that variable, and we cannot explain its variability. If we measure some determinants ($X$) of that variable, however, then it might be that some of the variability in $Y$ can be explained by the fact that $X$ is not constant. That is, some of the reason that $Y$ varies is because $X$ varies. As an example, consider predicting the height of a human. If all you know is that the subject is a human, all you have to go by is the overall distribution of human heights, and you cannot explain any of the variability in human heights. However, we know some determinants of human height. Age is a big one: as humans grow up, they get taller. That is, part of the reason why there is such variation in human height is because a major determinant of human height, age, has variability. Of course human heights will vary if this age determinant of human height varies! When you consider "variance" as the mathematical measure of the colloquial term "variation", this leads to using a phrase like "the proportion of variance in our variable of interest that is explained by the variance in some observed determinant(s) of that variable of interest." This is enough for me. If you want to get into the math, perhaps you can argue that $R^2$ is related to the squared correlation between the outcome $y$ and predictor $x$, so the denominator of that correlation contains the standard deviation of $x$ that gets squared to the variance of $x$. Another thought could be to use the fact that $\left(y - (mx + b)\right)^2$ can be expanded to involve a square of $x$, which would be related to the variance of $x$.
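To connect the two views concretely, here is a small sketch (toy numbers, simple regression with an intercept) showing that the proportion of variance explained equals the squared correlation between $x$ and $y$:

```python
import numpy as np

# toy data with a near-linear trend
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

# least-squares slope and intercept
m, b = np.polyfit(x, y, 1)
resid = y - (m * x + b)

# R^2 as 1 - SS_res / SS_tot, and as the squared correlation
r2_from_ss = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
r2_from_corr = np.corrcoef(x, y)[0, 1] ** 2

print(r2_from_ss, r2_from_corr)  # the two quantities agree
```

The equality holds exactly for simple linear regression with an intercept; with multiple predictors, $R^2$ is instead the squared correlation between $y$ and the fitted values.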
null
CC BY-SA 4.0
null
2023-03-27T13:41:27.713
2023-03-27T13:41:27.713
null
null
247274
null
610888
2
null
610885
10
null
The tests you cited are not appropriate due to the presence of repeated measures. The common way to deal with repeated measures is via mixed-effects linear models (here via `lme` from the `nlme` package). I'm considering here the most general model, borrowing from one of your earlier posts. ``` > leach_lme <- lme(fixed = cl_conc ~ soil_type*treatment*days, + random =~1|core_id, data = leach2, + method = "ML") > anova(leach_lme) numDF denDF F-value p-value (Intercept) 1 80 677.4590 <.0001 soil_type 3 32 6.1510 0.0020 treatment 3 32 109.4603 <.0001 days 1 80 17.3933 0.0001 soil_type:treatment 9 32 2.2588 0.0436 soil_type:days 3 80 3.1330 0.0301 treatment:days 3 80 1.0310 0.3834 soil_type:treatment:days 9 80 3.9676 0.0003 ``` As you can see, the three-way interaction is significant, and so is the two-way interaction `soil_type:days`, etc. A linear mixed-effects model is a parametric model, so as usual in the context of a linear model, one needs to check that the residuals are well-behaved. As per the request in the comments, here is a quick residual check. ``` plot(leach_lme) ``` [](https://i.stack.imgur.com/SLmQX.png) The message here is that the residuals may be heteroscedastic. Now let's log-transform the response. ``` leach_lme2 <- lme(fixed = log(cl_conc) ~ soil_type*treatment*days, random =~1|core_id, data = leach2, method = "ML") plot(leach_lme2) ``` [](https://i.stack.imgur.com/pChsS.png) Apart from a single observation which appears to be far from the bulk of the data, the residuals look fine to me, i.e. homoscedastic. The QQ-plot (not shown here, but you can plot it yourself via `qqnorm(leach_lme2)`) doesn't seem that bad either. P.S. In this answer, I treated `days` as a numerical variable. To treat it as a factor, as you seem to be interested in (thanks Sal Mangiacifo for pointing it out), use `leach2$days <- factor(leach2$days)` and redo the analyses. The output will be slightly different from the one shown above; there will be two additional parameters to be estimated.
null
CC BY-SA 4.0
null
2023-03-27T13:44:36.740
2023-03-28T12:48:59.323
2023-03-28T12:48:59.323
56940
56940
null
610889
1
610893
null
2
23
Wondering if anyone can help. I’m trying to compare two regression models with one predictor to see which best describes the data. Model one is a linear model (y = ax + b) with R2 = .036, F = 3.047, p = .084 Model two is a reciprocal quadratic model (y = a(1/x)2 + b(1/x) + c) with R2 = .072, F = 3.128, p = .045 As you can see, neither fit the data that well, although model two is just about significant. As model one approached significance, and had fewer parameters, I used Akaike’s Information Criterion to compare the two models, with AIC suggesting that model one is more likely to be correct. I’m a little confused as to how I should interpret this. Should I consider that model one is more likely to represent the data, even though it is not significant, or should I consider model two as a better fit because it has a larger R2 and is significant? Any help is appreciated!
Regression model comparison query - AIC suggesting a non-significant model is better than an alternative significant model
CC BY-SA 4.0
null
2023-03-27T13:55:30.917
2023-03-27T14:23:50.090
2023-03-27T14:23:50.090
56940
184580
[ "regression", "multiple-regression", "linear-model", "model-comparison" ]
610890
1
null
null
2
71
I ran a non-linear regression using scipy's curve_fit. I fitted the data using an exponential function $y=a \ e^{bx}$ and calculated the confidence intervals. This is what I get: [](https://i.stack.imgur.com/X6Abs.png) However, when I fit the data using a 3-parameter exponential function $y=a \ e^{bx} + c$, I get much wider confidence intervals: [](https://i.stack.imgur.com/sgfTf.png) Why is that? I then calculated the root mean squared error to see how good is the fit. These are the RMSE I get when I use a 2-parameter exponential function: - Blue fit: 0.077 - Orange fit: 0.097 - Green fit: 0.091 I know having a low RMSE is good, but aren't these too low? Am I missing something? Thank you in advance! UPDATE: I tried a different approach to get confidence intervals: bootstrapping. This is what I get with a 2-parameter exponential function: [](https://i.stack.imgur.com/JPTHJ.png) And for a 3-parameter exponential function: [](https://i.stack.imgur.com/2Eo4d.png) I wrote in the below image the values of the function parameters and the 95% confidence intervals for those, for both the "old" method and the bootstrapping one: [](https://i.stack.imgur.com/uaCkk.png) This is what happens for a quadratic function $y=a \ x^2$: [](https://i.stack.imgur.com/ykZua.png) Here you can see a comparison of the quadratic function's confidence interval obtained with the "old" and with the bootstrapping method: [](https://i.stack.imgur.com/1MrA0.png) In this case, the confidence intervals closely match! Besides doing a visual inspection, I also checked the values of the confidence intervals for the parameter $a$ for both the methods and they are indeed close. 
I therefore assume that the difference between the "old" and the bootstrap confidence intervals in the 2-parameter and 3-parameter exponential cases arises because the "old" method ignored the covariance between the parameters: the intervals were computed from the square roots of the diagonal entries of the covariance matrix, so the off-diagonal terms were never taken into account.
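For what it's worth, the two interval constructions described in the update can be compared side by side in a few lines. This is a sketch with synthetic data (the function, noise level, and bootstrap settings are my own assumptions, not the original data):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
model = lambda x, a, b: a * np.exp(b * x)  # 2-parameter exponential
x = np.linspace(0, 1, 50)
y = model(x, 1.0, 2.0) + rng.normal(0, 0.1, x.size)

# "Old" method: per-parameter 95% CIs from the diagonal of pcov
popt, pcov = curve_fit(model, x, y, p0=[1, 1])
se = np.sqrt(np.diag(pcov))
ci_old = np.column_stack([popt - 1.96 * se, popt + 1.96 * se])

# Bootstrap: refit on resampled (x, y) pairs, take percentile intervals
boot = []
for _ in range(300):
    idx = rng.integers(0, x.size, x.size)
    p, _ = curve_fit(model, x[idx], y[idx], p0=popt, maxfev=10000)
    boot.append(p)
ci_boot = np.percentile(boot, [2.5, 97.5], axis=0).T
```

The per-parameter intervals from the diagonal discard the off-diagonal structure of `pcov`, whereas the bootstrap propagates the full joint variability of the estimates, which is consistent with the discrepancy described above.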
Non-linear regression: confidence intervals and root mean squared error
CC BY-SA 4.0
null
2023-03-27T13:56:30.730
2023-03-28T09:46:12.190
2023-03-28T09:46:12.190
383746
383746
[ "confidence-interval", "error" ]
610891
1
null
null
1
13
I am supposed to investigate viscosity-related phenomena in water flow through a horizontal pipe, connected to a water tank at one end, and with a clear opening on the other. Mounted in the middle is a perpendicularly protruding vertical manometer tube with height markings. Depending on the pressure induced by internal forces in the fluid, the water level in the manometer tube will change. The problem is (and it's indeed what I'm supposed to observe) that under certain conditions, the flow will chaotically go back and forth between being turbulent and laminar, which causes the water level in the manometer tube to constantly fluctuate and never settle on one value that could be used in the further analysis. What would be a good estimate of the 'mean water level' and its error? When quantifying the error in measuring / estimating a variable, I usually refer to its variance (since it's straightforward to estimate reliably and to model its propagation), but that is always when I can take many measurements, each with a clearly defined value. In this case, the water level will have changed before I could even read it. It never settles, and doesn't oscillate in any nice, predictable way (harmonic etc.). The most I can do without explicitly recording it and working with the footage is to note the bounds which constrain the fluctuations. I found that other people who previously worked on this assignment simply used $\sigma = $ upper bound - lower bound (and got away with it), but I would like to use a more proper method, and be able to justify it, so that I can give a reasonable estimate of the variance of the variables derived from the water level (mostly by multiplying it and taking its powers).
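One defensible option when only the bounds are observable is to model the instantaneous level as roughly uniform between them, which gives $\sigma = (U-L)/\sqrt{12}$ rather than $\sigma = U-L$. A sketch of this (the numbers are made up, and the uniform assumption is exactly that — an assumption that should be justified from the observed fluctuations):

```python
import math

lower, upper = 12.0, 18.0           # observed bounds of the fluctuation (hypothetical)
mean_level = (lower + upper) / 2    # point estimate: midpoint of the bounds
sigma = (upper - lower) / math.sqrt(12)  # sd of a Uniform(lower, upper) variable

# First-order propagation to a derived quantity, e.g. h**2:
# var(h^2) ~ (d(h^2)/dh)^2 * var(h) = (2*mean)^2 * sigma^2
var_h2 = (2 * mean_level) ** 2 * sigma ** 2
```

If the fluctuations cluster near the middle rather than spreading evenly, a different within-bounds distribution (e.g. triangular, $\sigma=(U-L)/\sqrt{24}$) would give a smaller variance; either way the result is far smaller than taking the full range as $\sigma$.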
How to estimate variance when measuring fluctuating variable that never settles?
CC BY-SA 4.0
null
2023-03-27T14:01:39.130
2023-03-31T23:53:31.420
2023-03-31T23:53:31.420
11887
300263
[ "variance", "measurement-error", "measurement" ]
610893
2
null
610889
2
null
In both cases, the omnibus $F$-tests are at the margins of the significance level and the $R^2$ values are pretty small. The message IMO is that those models are doing a pretty bad job. However, if you really have to choose one, then, since the models are not nested and have a different number of parameters, I would choose on the basis of the AIC (or BIC if that matters). In this case, there is a discrepancy of AICs equal to 86.615 in favour of the simpler model. Thus the AIC suggests picking the first model.
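To make the comparison concrete, for least-squares fits with Gaussian errors the AIC can be computed (up to an additive constant) as $\text{AIC} = n\log(\text{RSS}/n) + 2k$. A generic sketch with illustrative data (not the OP's), fitting both model forms as linear-in-parameters least squares:

```python
import numpy as np

def aic_ls(rss, n, k):
    """AIC (up to a constant) for a least-squares fit with k estimated parameters."""
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(1)
x = np.linspace(1, 10, 80)
y = 2 * x + rng.normal(0, 3, x.size)   # toy data, linear by construction

# Model 1: y = a*x + b ; Model 2: y = a*(1/x)^2 + b*(1/x) + c
X1 = np.column_stack([x, np.ones_like(x)])
X2 = np.column_stack([x**-2.0, 1 / x, np.ones_like(x)])
rss1 = np.sum((y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]) ** 2)
rss2 = np.sum((y - X2 @ np.linalg.lstsq(X2, y, rcond=None)[0]) ** 2)
print(aic_ls(rss1, x.size, 2), aic_ls(rss2, x.size, 3))
```

The model with the smaller AIC is preferred; the `2k` term is what penalizes the extra parameter of the reciprocal-quadratic form.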
null
CC BY-SA 4.0
null
2023-03-27T14:20:29.660
2023-03-27T14:20:29.660
null
null
56940
null
610894
1
610899
null
2
102
Thank you for any help. I am looking at the interaction of time with plasma biomarkers in the brain. Here is the code that I ran: ``` test = lmer(precuneus.dvr ~ Cage*time + sex*time + race_binary*time + NFLz*time + (1|idno), data = testDFlong30, na.action = na.omit) ``` `precuneus.dvr`: is measured in this unit called DVR, `race`: 0 = White and 1 = non-White, `sex`: 0 = Female and 1 = male, `time`: time between visits for each participant for a brain scan. My output is here in the image below. How would I interpret the results I see and especially the ones with a time interaction?[](https://i.stack.imgur.com/aLus1.png)
Interpretation of linear mixed effects model for longitudinal data with lmer
CC BY-SA 4.0
null
2023-03-27T14:34:45.853
2023-03-28T18:22:22.470
2023-03-28T18:22:22.470
22311
384264
[ "r", "regression", "mixed-model", "lme4-nlme" ]
610896
2
null
610815
1
null
Your data structure and model outline seem correct to me. I think the problem lies in the SPSS syntax you are using. You mentioned you put time and intervention in as repeated measures (I assume this happens in SPSS Mixed model dialogue). But you say you are using them as fixed effects, and not as random effects (which seems reasonable to me), so there is no need to put them into the repeated box, and actually, if you also have them as fixed effects, you can't use them as random effects (I think this is what produces the error). Just use participant id as a random factor (put it into the "Subjects" box in the SPSS Mixed models first dialogue box, and then specify the id-related random intercept in the random dialogue box by moving id to the right and ticking the intercept box). I think the following syntax would also achieve the above. You may want to change some of the estimation details, I used the defaults, but I believe this is the model structure you are trying to fit (edited to add I used SPSS 29.0): ``` MIXED outcome BY Intervention Time /CRITERIA=DFMETHOD(SATTERTHWAITE) CIN(95) MXITER(100) MXSTEP(10) SCORING(1) SINGULAR(0.000000000001) HCONVERGE(0.00000001, RELATIVE) LCONVERGE(0, ABSOLUTE) PCONVERGE(0, ABSOLUTE) /FIXED=Intervention Time Intervention*Time | SSTYPE(3) /METHOD=REML /PRINT=SOLUTION /RANDOM=INTERCEPT | SUBJECT(id) COVTYPE(VC) /EMMEANS=TABLES(Intervention*Time) . ```
null
CC BY-SA 4.0
null
2023-03-27T14:55:41.710
2023-03-27T18:24:06.217
2023-03-27T18:24:06.217
357710
357710
null
610897
1
610903
null
1
68
I've begun working with estimating confounder-adjusted survival curves using the `adjustedCurves` package in R and I need help interpreting results. Image A at the bottom shows a simple Kaplan-Meier survival plot for the data, where I only model the OCL density variable (only 2 states for this variable, values of 200 or 300). I use the `survival` and `survminer` packages and my code for fitting and plotting the model is: ``` fit <- survfit(Surv(mos, status) ~ OCLRng, data = survDF) ggsurvplot(fit, pval = TRUE, conf.int = TRUE, risk.table = TRUE, # Add risk table risk.table.col = "strata", # Change risk table color by groups linetype = "strata", # Change line type by groups surv.median.line = "hv", # Specify median survival ggtheme = theme_bw(), # Change ggplot2 theme palette = c("#E7B800", "#2E9FDF")) ``` The K-M plot makes intuitive sense in that I expect OCL=300 to have higher survival probability than OCL=200 from experience and from other data stratifications. Image B at the bottom shows the confounder-adjusted survival curves, using the `adjustedCurves` package, where I set `group` = the same OCL variable used above, and introduce other variables of Channels, Node and sRng. My code for fitting and plotting this is: ``` survDFMod <- survDF %>% mutate(group = as.factor(OCLRng)) outcomeSurvDF <- survival::coxph(Surv(mos, status) ~ Channels + Node + sRng + group, data= survDFMod, x = TRUE) adjSurvDF <- adjustedsurv( data = survDFMod, variable = "group", ev_time = "mos", event = "status", method = "direct", outcome_model = outcomeSurvDF, conf_int = TRUE, na.action = "na.omit" ) plot(adjSurvDF, conf_int=TRUE, linetype=TRUE, legend.position = "top") ``` Intuitively and in plain language, what is the confounder-adjusted survival curve in Image B below telling me? And why conceptually could it be that the relationship has reversed from what is shown in the Image A K-M survival curve, where the OCL = 300 appears to have a lower survival probability than OCL = 200? 
[](https://i.stack.imgur.com/4W380.png) Edit to include insights from Denzo's and EdM's responses: From reviewing the materials referenced by Denzo and from EdM's comments, and in particular the discussion on confounders in [Confounder - definition](https://stats.stackexchange.com/questions/59369/confounder-definition), I believe my OCL variable falls into the following category (where the Z variable illustrated below is similar to my OCL variable): [](https://i.stack.imgur.com/1lNNr.png)
How to interpret the results from estimating the confounder-adjusted survival curves when running the adjustedCurves package?
CC BY-SA 4.0
null
2023-03-27T15:06:38.077
2023-03-28T08:24:46.853
2023-03-28T08:24:46.853
378347
378347
[ "r", "survival", "cox-model", "confounding" ]
610898
1
null
null
1
9
I am analyzing 30-day mortality associated with two treatments. I am planning to use IPTW. I would like to account for time in my propensity scores. The study inclusion dates span 15 years, so I would like to account for date of hospital admission (i.e. study entry date) in my score as I think medical practice changes over time could be influencing the propensity to receive one treatment vs. the other. What is the best way to account for this in my model?
Accounting for study entry date in logistic regression for generating propensity scores
CC BY-SA 4.0
null
2023-03-27T15:17:58.643
2023-03-27T15:17:58.643
null
null
384268
[ "regression", "propensity-scores", "time-varying-covariate" ]
610899
2
null
610894
5
null
Start with `time` and `sex`. Ignoring the other variables for now, your line of best fit is ``` precuneus.dvr = 1.148 + 0.00222 * time + 0.03073 * sex + 0.00417 * time * sex + ... ``` The more time between visits, the higher the predicted value of `precuneus.dvr`, but the line for female participants (`sex = 0`) is ``` 1.148 + 0.00222 * time ``` whereas for male participants (`sex = 1`) it is ``` (1.148 + 0.03073) + (0.00222 + 0.00417) * time ``` i.e. the two lines have different intercepts and different slopes. Males start from a higher level and the line is steeper (bigger effect as `time` increases). You can interpret the interaction of `race_binary` and `time` in a similar way. The variables `Cage` and `NFLz` are not binary, so we interpret their interaction with `time` a little differently, but the idea is the same. For example, the older the participant, the steeper the slope of `time`. Centering the variable the way you did (so that `Cage = 0` corresponds to average) is a sensible idea, as we can then interpret the value of `2.224e-02` as the slope of `time` for a participant of average age.
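Plugging the coefficients in makes the interaction tangible. A quick sketch using the estimates quoted above, with all other covariates held at their reference/centered values (an illustration, not a re-fit of the model):

```python
# Coefficients quoted above (intercept, time, sex, time:sex)
b0, b_time, b_sex, b_inter = 1.148, 0.00222, 0.03073, 0.00417

def predict(time, sex):
    # prediction for precuneus.dvr, other covariates held at 0
    return b0 + b_time * time + b_sex * sex + b_inter * time * sex

slope_female = predict(1, 0) - predict(0, 0)  # slope of time for sex = 0
slope_male = predict(1, 1) - predict(0, 1)    # slope of time for sex = 1
```

`slope_female` comes out to 0.00222 and `slope_male` to 0.00222 + 0.00417, i.e. the two lines differ in both intercept and slope, exactly as described.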
null
CC BY-SA 4.0
null
2023-03-27T15:30:58.843
2023-03-27T15:30:58.843
null
null
238285
null
610900
2
null
610897
1
null
What this is telling you is that it's often unwise to evaluate a single predictor by itself in a clinical survival model. In this situation, I suspect that the `OCL` group is associated with some of the other variables that you have included in the covariate-adjusted model, and that those other variables are the ones more directly associated with outcome. In the simple Kaplan-Meier analysis, the `OCL` variable was serving as a type of proxy for those other variables. It has little if anything to add to information about survival once you take those other variables into account. For best results, include as many outcome-associated predictors as you can, without overfitting, in a survival model. That's particularly true when there's a specific new predictor in which you are interested; you want to make sure that new predictor adds something useful to what's already known clinically.
null
CC BY-SA 4.0
null
2023-03-27T15:35:29.293
2023-03-27T15:35:29.293
null
null
28500
null
610901
1
null
null
0
83
I am modeling time-series data (30 measurements) at the individual-level by a grouping factor (5 levels) and I have the following model specification from a generalized additive model (GAM): ``` bam(response ~ s(Time) + s(Time, fac, bs = "fs", m = 1)) ``` My specific question concerns what the intercept term in such a model indicates. Here ([GAM in R; an intercept term](https://stats.stackexchange.com/questions/562967/gam-in-r-an-intercept-term)), Simpson indicated that with a smooth term, the intercept is the mean of the response. My concrete question is: With the inclusion of the factor smooth, what does the intercept represent, exactly? It doesn't appear to be the mean of the response, but close to (though not exactly) the mean of the means of the factor levels of the response variable. How can we interpret the intercept in such a model?
Intercept term of a GAM with smooth and factor smooth
CC BY-SA 4.0
null
2023-03-27T15:56:41.927
2023-04-01T04:44:07.640
2023-04-01T04:35:03.627
345611
357147
[ "regression", "time-series", "generalized-additive-model", "mgcv", "intercept" ]
610902
1
null
null
0
13
Assume that we have a matrix $X$ and we want to do a Singular Value Decomposition (SVD) of $X$. First we need to find the average of $X$ row-wise, called $\mu$ `mu = mean(X, 2)` Then we need to center $X$ $$X = X - \mu$$ Then we do the SVD: $$USV^{T} = X$$ When using PCA, should I always project the centered matrix $X$ by using the transpose of the eigenvectors in $U$, e.g. $$W = U(:, 1:components)$$ $$Y = W'*X$$ Is that what we want to achieve when it comes to PCA? What does $Y$ tell us? Why are we doing this projection?
Do we always want to do projection when we using PCA with SVD?
CC BY-SA 4.0
null
2023-03-27T16:30:32.907
2023-03-27T16:30:32.907
null
null
275488
[ "pca", "svd" ]
610903
2
null
610897
1
null
I believe this question has more to do with understanding what "confounding" really is than it has to do with survival curves. Your question is a very valid one: why is the adjusted estimate different from the un-adjusted estimate? To understand this, you need to understand what it is you are really trying to estimate. What exactly is the quantity you are looking for? The `adjustedCurves` package is designed mainly to estimate the survival curve that would have been observed if every individual in `data` had been set to a specific value of the target variable by external intervention. This is called a counterfactual quantity. The best way to estimate this quantity would be to perform a large, high quality randomized controlled trial. Since this is often unfeasible, methods that can adjust for confounding in a different way have been proposed. Some of those are implemented in the `adjustedCurves` package. Those methods assume that you have identified a set of confounders that has the property that if you adjust for all of those confounders in an appropriate fashion, the true counterfactual quantity of interest may be estimated in an unbiased way. Note that the counterfactual interpretation of the results produced by the `adjustedCurves` package I gave above is only correct if you have such a sufficient adjustment set. But how can you identify a set of such confounders? And what even is a confounder? Those questions have been discussed in great detail. Judea Pearl has done some great work on this. You may also find some first information about this here: [Confounder - definition](https://stats.stackexchange.com/questions/59369/confounder-definition) I recommend you to read some causal inference literature first and get into the details of the estimation process afterwards. "The Book of Why" by Judea Pearl is a great place to start, as it does not require any previous knowledge and does not contain crazy mathematics. 
As a final note, I would like to point out that you should not artificially categorise variables, such as the `OCLRng` variable, because that may lead to loss of statistical power or even bias. There are ways to visualize the (causal) effect of a continuous variable on a time-to-event outcome that closely resemble Kaplan-Meier curves, which can also be adjusted for confounding variables. Information on that can be found in another publication of mine: [https://arxiv.org/abs/2208.04644](https://arxiv.org/abs/2208.04644) which also comes with an associated R-package [https://cran.r-project.org/package=contsurvplot](https://cran.r-project.org/package=contsurvplot)
null
CC BY-SA 4.0
null
2023-03-27T16:43:42.237
2023-03-27T16:43:42.237
null
null
305737
null
610905
2
null
71357
0
null
One simple answer: because it's biased. A simple example, estimate the upper bound of a $\text{Uniform}(0, \theta)$ random variable. Here, I take 1,000 bootstrap samples of a $n=10$ random sample, and calculate the MLE for each BS subsample and average them together. The relative bias is 5%! ``` set.seed(123) out <- replicate(1000, { n <- 10 u <- runif(n, 0, 3) mle <- max(u)*(n+1)/n bsm <- mean(replicate(1000, { max(sample(u, replace=T))*(n+1)/n })) c(mle, bsm) }) b <- hist(out[1, ], col=c1 <- rgb(0.5, 0.5, 0.5, 0.5), breaks=pretty(out, 20), xlab='Estimate') hist(out[2, ], breaks = b$breaks, col=c2 <- rgb(0.5, 0.5, 0, 0.50), add=T) ``` [](https://i.stack.imgur.com/ZibZ1.png)
null
CC BY-SA 4.0
null
2023-03-27T16:59:08.453
2023-03-27T16:59:08.453
null
null
8013
null
610907
1
611264
null
0
38
I am prototyping a pipeline on the [FSDD dataset](https://github.com/Jakobovski/free-spoken-digit-dataset) (audio/10-class classification); the audio data are loaded with librosa, 0-padded/trimmed to 0.5 sec (4000-dimensioned numpy vectors) each and converted to mel-spectrograms with a 512 frame size, 256 hop-size and 80 mel banks. That yields mel spectrograms with an (80,16) shape. I wanted to run a model that utilizes the temporal aspect of the data, therefore I am using LSTMs with keras. From tutorials (e.g [https://machinelearningmastery.com/understanding-simple-recurrent-neural-networks-in-keras/](https://machinelearningmastery.com/understanding-simple-recurrent-neural-networks-in-keras/)) I have seen that keras reads inputs for RNNs like so: (batch_size, time_steps, features). Therefore, I created a dataloader with the transposed mel-spectrograms to follow that read pattern. Essentially, as I understand it, by feeding a 2D array to a keras RNN, rows correspond to timesteps and columns to features.
I am running a really basic LSTM on the data: ``` IN_SHAPE = (16,80) model = keras.Sequential() model.add(layers.Input(shape=IN_SHAPE)) model.add(layers.LSTM(128)) model.add(layers.Dense(100, activation='relu')) model.add(layers.Dense(10, activation=tf.keras.activations.softmax)) model.summary() model.compile( optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=[tf.keras.metrics.SparseCategoricalAccuracy()], ) history = model.fit( train_set, epochs=N_EPOCHS, validation_data=val_set ) ``` It seems to be underfitting a lot (I have tried different learning rates and adding subsequent LSTM layers) and, most peculiarly, for both training and validation the accuracy fluctuates among the same few values; below I list the training accuracies from the printed history as evidence: [0.10355556011199951, 0.09955555945634842, 0.1137777790427208, 0.1088888868689537, 0.09022222459316254, 0.10711111128330231, 0.1088888868689537, 0.10488889366388321, 0.10355556011199951, 0.109333336353302, 0.10533333569765091, 0.10311111062765121, 0.1088888868689537, 0.10355556011199951 ...] - Firstly, I was wondering whether conceptually my understanding of how the keras RNN reads the transposed mel-spectrograms is right/wrong. - Secondly, I was also wondering whether the results are bad because RNNs and sequence models in general do not model spectrograms/multidimensional data well.
RNN/LSTM networks on spectrograms underfitting massively - is the CNN encoder a prerequisite?
CC BY-SA 4.0
null
2023-03-27T17:21:42.650
2023-03-30T14:05:50.953
null
null
240802
[ "lstm", "tensorflow", "keras", "recurrent-neural-network", "audio" ]
610908
1
null
null
2
24
I am trying to find conditions for the root-n consistency of a generic L1-penalized M-estimator in a fixed p setting. I was able to find those for L1-penalized likelihood and regression (Fan, Li (2001)) and for my case but with diverging p (Negahban et al., 2012). In the latter in particular they state that in the fixed p setting there are standard techniques for proving consistency, but they don’t cite anyone on this. Could anyone give me some pointers on where to find these techniques? Thanks!
root-n consistency of penalized M-estimator with fixed p
CC BY-SA 4.0
null
2023-03-27T17:28:33.360
2023-03-27T17:28:33.360
null
null
332295
[ "lasso", "regularization", "consistency", "m-estimation" ]
610911
1
611008
null
7
119
I have a set of points $P_i$ which are described by an angle $\theta_i$ and a magnitude $r_i$. $\theta_i$ follows a Uniform distribution $(\theta_i \sim U(0, 2\pi))$ and $r_i$ follows a chi-k distribution $r_i \sim \chi_k$. [](https://i.stack.imgur.com/8EKdk.png) Is there any way of describing this distribution using a multivariate distribution in cartesian coordinates? If I use a bivariate gaussian, I get a circular distribution. How can I take into account the hole in the middle?
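For intuition, the distribution described above can be sampled directly and transformed to Cartesian coordinates. A sketch (the choice k = 3 and the sample size are arbitrary; a chi-distributed draw is obtained as the square root of a chi-square draw):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 10_000, 3
theta = rng.uniform(0, 2 * np.pi, n)   # angle ~ Uniform(0, 2*pi)
r = np.sqrt(rng.chisquare(k, n))       # chi_k = sqrt of a chi-square_k variable
x, y = r * np.cos(theta), r * np.sin(theta)
# The joint (x, y) density is radially symmetric, proportional to
# f_chi(r) / r ~ r^(k-2) * exp(-r^2/2) near the origin, so for k > 2
# it vanishes at (0, 0) -- which is exactly the hole in the plot.
```

This also shows why a bivariate Gaussian cannot reproduce the hole: its density is maximal at the center, whereas here the radial profile pushes mass away from the origin whenever $k > 2$.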
Donut-like Distribution in Cartesian Coordinates
CC BY-SA 4.0
null
2023-03-27T18:05:31.127
2023-03-28T14:19:58.093
2023-03-28T14:11:15.783
380090
380090
[ "distributions", "normal-distribution", "uniform-distribution", "circular-statistics", "chi-distribution" ]
610912
1
610956
null
2
98
I have read through the emmeans "Basics of EMMs" vignette, "Working with messy data" vignette, and this [Stack Overflow post](https://stackoverflow.com/questions/66748520/what-is-the-difference-between-weights-cell-and-weights-proportional-in-r-pa) but am still having a hard time knowing which 'weights' argument to use when factor levels have uneven numbers of observations. I am using binomial GAMs with mgcv in R to model binary species presence on a hydrophone against environmental (chlorophyll-a concentration, sea-surface temperature, sea-level, vessel presence) and categorical temporal covariates (season and photoperiod): `M <- gam(Species ~ s(Chla,bs="ts") + s(SST,bs="ts") + s(SLEV,bs="ts") + Season + Photoperiod + Vessel, data=Hydrophone, family='binomial', method = "ML")` The categorical groups do not have an even number of observations (i.e. dawn/dusk photoperiods are shorter than night/day, fewer observations were gathered in summer than other season, clips with vessel presence/absence are uneven) and I am hoping to account for this when running pairwise comparisons in `contrast` from the `emmeans` library on my model factors. Specifically, I don't want small groups to have less weight than large groups; they should be proportional to the total number of observations per group. I am primarily torn between using `weights="proportional"` and `weights="cells"`. The description for `weights="proportional"` in the `emmeans` help file makes it sound ideal since it uses "Weight in proportion to the frequencies (in the original data) of the factor combinations that are averaged over." However, when I run `weights="proportional"` in `contrasts` I get the same results as when I run `weights="equal"`, which doesn't seem right. `weights="cells"` uses "Weight according to the frequencies of the cells being averaged" but this is in regards to the reference grid constructed by `emmeans` and I am unsure if those frequencies are the same as in the original dataset? 
Using `weights="cells"` in `contrast` gives different results than `weights="proportional"` and `weights="equal"`. To summarize, my question is which weighting argument accounts for unbalanced factors (i.e. different number of observations per factor level) when conducting pairwise comparisons using `contrast` from the `emmeans` R library? Sorry for the long post but I wanted to provide adequate context.
emmeans weights for unbalanced groups/factors?
CC BY-SA 4.0
null
2023-03-27T18:12:41.623
2023-03-28T03:15:37.290
null
null
383007
[ "r", "multiple-comparisons", "unbalanced-classes", "generalized-additive-model", "lsmeans" ]
610913
1
null
null
1
58
I am trying to implement [Attention Is All You Need](https://arxiv.org/abs/1706.03762) paper from scratch in PyTorch. So far, I implemented the Scaled Dot-Product Attention layer and the Multi-Head Attention layer. As I began to write the code for the Encoder, I am facing a question I have not yet found an answer to: How do I go from embeddings to queries, keys and values in the Transformer? As you can see from Figure 1 in the paper below, embeddings enter the Encoder and then somehow they turn into queries, keys and values which enter the Multi-Head Attention layer. I do not know how to get the queries, keys and values from the embeddings. The way I implemented it, my Multi-Head Attention layer expects queries, keys and values in its `forward` method (the method for the forward pass in PyTorch). Maybe it should expect something else? I am a bit confused and I'm hoping someone can clarify what happens here. I looked at various resources online (various Cross Validated answers and Medium articles (such as Illustrated Attention)), but couldn't find a clear answer to this question. [](https://i.stack.imgur.com/Hyt5e.png)
How do I go from embeddings to queries, keys and values in the Transformer model?
CC BY-SA 4.0
null
2023-03-27T18:18:15.457
2023-03-27T18:19:15.567
2023-03-27T18:19:15.567
384280
384280
[ "machine-learning", "natural-language", "transformers", "attention", "embeddings" ]
610915
1
null
null
0
9
I have $5$ groups containing $30$ people. Every week, a person in a group plays a person in the same group at a game (so $15$ games in total for the entire group, but only one per person). This goes on for $29$ weeks, until everyone has played each other. There is no mixing between groups here. The game is a computer game, where they compete against one another for $60$ minutes, to see who collects the most gold. Let's say that the amount of gold collected by a player is given by a Poisson distribution. My dataset comprises an entry for each player for each game, with Group ID, User ID, Date, Gold Taken, and a few other predictors regarding the individual themselves. Based on their historical games, I am trying to predict the amount of Gold Taken for an individual in their next game. I am not sure how to tackle such a problem, where the amount of Gold Taken also depends on their future opponent, who is also an observation in my data set. I feel the only way to do this is by looking at the previous games of the individual in question, and also the next opponent, and seeing how they performed, but I'm not sure how to formulate this. Let's consider individual $i$ playing against individual $j$ during week $t$. I could look at individual $i$'s running average of Gold Taken, and then look at individual $j$'s running average of Gold Taken Against, and use this in a Poisson regression? This seems to lend itself somewhat to mixed-effects regression, since the data is naturally hierarchical in nature (groups/individuals), with repeated observations from the same individuals and so on. But I'm not sure how I'd incorporate the following into it: - Exponential Smoothing / Weight observations by time. The recent performances should be weighted higher; some exponential decay function of time. - Interaction between observations in the dataset? Person $i$ versus Person $j$. Do I need to run separate regressions for each? Any help would be appreciated
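On the first bullet point, the exponential time-weighting can be implemented directly as a weighted running mean of past gold counts, which then becomes a covariate in the regression. A sketch (the half-life and the data are placeholders, not a recommendation):

```python
import numpy as np

def ewma(values, half_life=4.0):
    """Exponentially weighted mean of past games: recent games weigh most."""
    values = np.asarray(values, dtype=float)
    ages = np.arange(len(values) - 1, -1, -1)  # age 0 = most recent game
    w = 0.5 ** (ages / half_life)              # weight halves every `half_life` games
    return np.sum(w * values) / np.sum(w)

gold_history = [40, 42, 55, 60]   # hypothetical gold taken by player i in past games
print(ewma(gold_history))
```

The same function applied to opponent $j$'s "Gold Taken Against" history gives the second covariate described above; the half-life (or decay rate) could itself be chosen by cross-validation.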
Mixed Effect Regression and Time Series Data
CC BY-SA 4.0
null
2023-03-27T18:42:44.967
2023-03-27T18:42:44.967
null
null
292642
[ "regression", "time-series", "mixed-model", "poisson-regression", "exponential-smoothing" ]
610916
2
null
610890
2
null
It looks like your design is ill-conditioned for your particular estimates (note that since it's a non-linear least squares fit, the conditioning will depend on your current estimates). If you can get a covariance matrix of your parameter estimates, you'll likely find that at least two of a, b and c are strongly correlated. An SVD can also help with this if you can get the Jacobian: small singular values suggest an ill-conditioned problem, and the columns of one of the orthogonal matrices help you find the offending parameters.
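The SVD check looks like this in practice. A sketch with a made-up design (the Jacobian columns are the partial derivatives of $a e^{bx}+c$ with respect to $(a, b, c)$, evaluated at hypothetical current estimates):

```python
import numpy as np

x = np.linspace(0, 1, 50)
a, b, c = 1.0, 0.1, 0.0   # hypothetical current estimates

# Jacobian of a*exp(b*x) + c with respect to (a, b, c)
J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x), np.ones_like(x)])
s = np.linalg.svd(J, compute_uv=False)
print(s[0] / s[-1])       # condition number; large => ill-conditioned
# For small b, exp(b*x) ~ 1 + b*x, so the three columns are nearly
# linearly dependent: a, b and c cannot be separated well by the data,
# which is what inflates the confidence intervals of the 3-parameter fit.
```

With the 2-parameter model the constant column disappears and the remaining two columns are much less collinear, consistent with its tighter intervals.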
null
CC BY-SA 4.0
null
2023-03-27T18:48:33.740
2023-03-27T18:48:33.740
null
null
190524
null
610918
1
null
null
0
26
I'm currently running some simple linear regressions to compare predicted animal movements (time spent in a movement/behaviour) with ground truth information of movement. I have transformed the raw data for the regression analysis to get statistics such as model R2. Question - is it ok to plot the raw data with a fitted regression line for visual representation given that the raw data were transformed for the actual analysis? As often happens, the raw data are more interpretable than the transformed data. I see a similar question here but I don't think it addresses this: [Plotting raw data, but running statistics on log-transformed data](https://stats.stackexchange.com/questions/115804/plotting-raw-data-but-running-statistics-on-log-transformed-data)
Plotting transformed data
CC BY-SA 4.0
null
2023-03-27T19:08:20.757
2023-03-27T21:04:36.613
2023-03-27T21:04:36.613
91669
91669
[ "regression", "data-transformation" ]
610919
1
null
null
4
48
The noncentral F-distribution is used frequently in communication areas. In one of the applications, I need to compute the sum of two i.i.d. random variables having a singly non-central F-distribution with parameters 1 (d.o.f. for the numerator), $N-1$ (d.o.f. for the denominator) and $\lambda$ (non-centrality parameter of the numerator). Is there any standard result on the sum of these two random variables, or is there any approximation result? A straightforward approach, i.e., the convolution formula, is tough to apply and doesn't seem to yield a closed-form solution.
Sum of two i.i.d R.V having singly non-central F distribution
CC BY-SA 4.0
null
2023-03-27T19:18:18.980
2023-03-27T21:59:59.590
null
null
384284
[ "probability", "distributions", "random-variable", "f-distribution" ]
610920
2
null
609139
0
null
Okay, I was able to figure it out with help from a colleague. The package `glmmTMB` fits a conditional model that was made for a dataset similar to this and makes sure to pair the sites while still looking at selection. Thank you for the feedback everyone!
null
CC BY-SA 4.0
null
2023-03-27T19:27:18.670
2023-03-27T19:27:18.670
null
null
382991
null
610921
2
null
610820
4
null
A more general result is as follows. Let $X, Y$ be random variables, with $E(X) = \mu_1, E(Y) = \mu_2$, $\text{var}(X) = \sigma_1^2$, $\text{var}(Y) = \sigma_2^2$ and $\text{cov}(X,Y) = \sigma_{12}$. Then, for any reals $n,m$, $$E(nX + mY) = nE(X)+mE(Y) = n\mu_1+m\mu_2,$$ $$ \text{var}(nX) = n^2\text{var}(X) = n^2 \sigma_1^2, $$ $$\text{cov}(nX, mY) = n\cdot m\cdot\text{cov}({X,Y}) = nm\sigma_{12},$$ and \begin{align*} E[(nX + mY)^2] &= E(n^2 X^2 + 2nmXY + m^2Y^2)\\ & = n^2E(X^2) + 2nmE(XY) + m^2 E(Y^2)\\ & = n^2(\sigma_1^2+\mu_1^2) + 2nm(\sigma_{12}+\mu_1\mu_2) + m^2 (\sigma_2^2+\mu_2^2),\\ \end{align*} thus \begin{align*} \text{var}(nX+mY) &= E[(nX+mY-E(nX+mY))^2]\\ &=E\left\{(nX+mY)^2 - 2(nX+mY)E(nX+mY) + [E(nX+mY)]^2\right\}\\ &= E[(nX + mY)^2] - [E(nX+mY)]^2\\ &= \ldots \text{ (check!) }\\ &= n^2\sigma_1^2 + m^2\sigma_2^2 +2nm\sigma_{12}. \end{align*} On the other hand, by the same token, you can show that $$\text{var}(nX-mY) = n^2\sigma_1^2 + m^2\sigma_{2}^2 - 2nm\sigma_{12}.$$
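A quick numerical check of the final identity (the constants and the joint normal distribution below are arbitrary choices, used only to fix the moments):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 2.0, -3.0
mean = [1.0, 2.0]
cov = [[4.0, 1.5],
       [1.5, 9.0]]  # sigma_1^2 = 4, sigma_2^2 = 9, sigma_12 = 1.5
X, Y = rng.multivariate_normal(mean, cov, size=1_000_000).T

empirical = np.var(n * X + m * Y)
theoretical = n**2 * cov[0][0] + m**2 * cov[1][1] + 2 * n * m * cov[0][1]
print(empirical, theoretical)  # theoretical = 16 + 81 - 18 = 79
```

The simulated variance agrees with $n^2\sigma_1^2 + m^2\sigma_2^2 + 2nm\sigma_{12}$ to sampling error.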
null
CC BY-SA 4.0
null
2023-03-27T19:35:55.787
2023-03-27T19:35:55.787
null
null
56940
null
610922
1
610999
null
3
40
I'm learning about variance reduction for Monte Carlo methods and I am confused about how to calculate the "estimation error" of a given method. My question is how should I interpret "Estimation Error" given the context below? In my textbook, there is the following setup: We have a Random Variable ${X}$ with a given distribution. Our goal is to approximate ${E[h(X)]}$ via Monte Carlo and calculate the "Estimation Error" of our estimate. First Approach Using Simple Random Sample (naive) Monte Carlo, we draw a simple random sample of size ${N}$ from the target distribution of ${X}$ (in the textbook example this is a normal distribution and we use an R function to get the random draws). We then approximate ${E[h(X)]}$ via ${\Sigma_i \frac{h(X_i)}{N}}$. Then the textbook gives the "Estimation Error" as ${\sqrt{\frac{s^2(h(\dot{X}))}{N}}}$, where ${\dot{X}}$ is the vector of sample points and ${s^2(*)}$ is the sample variance function. To me this makes sense. Because we are sampling directly from the distribution of ${X}$, variation within a sample will be the same as the variation across samples. So, ${\hat{Var}(\hat{E}[h(X)])}$ can be approximated with just one sample. Second Approach (Stratified Sampling) In place of Simple Random Sampling, we now divide the distribution of ${X}$ into ${N}$ strata and take a single sample point ${X_i}$ from each stratum. The sample points are chosen uniformly within the limits of each stratum. (For example, if the target distribution were uniform between 0 and 10 and we divided the space into 5 strata, then the second sample point would be taken uniformly between 2 and 4.) The strata are chosen by evenly dividing the interval ${(0,1)}$ into ${N}$ segments, then computing ${F^{-1}(p_i)}$ where ${p_i}$ are the end points of the ${N}$ segments. 
This makes ${P(X_i \in A_i)}$ equal to ${1/N}$ for each stratum ${A_i}$. The textbook says to use the same formula from the first approach for both the estimator of ${E[h(X)]}$ and the "estimation error" of the second approach. To me it makes sense that, under stratified sampling, ${\hat{E}[h(X)]}$ is still ${\Sigma_i \frac{h(X_i)}{N}}$. This is because we've chosen the strata in such a way that ${P(X_i\in A_i) = \frac{1}{N}}$ and we only have a single sample point from each stratum. However, using the same "estimation error" calculation doesn't make sense to me. The variance within a particular stratified sample (one run of the experiment), ${s^2(h(\dot{X}))}$, must be quite a bit wider than the variance across multiple samples, ${s^2(\hat{E}[h(X)])}$, because stratification ensures our samples spread the whole distribution each time; so ${h(\dot{X})}$ will be very wide in a single sample. However, ${\dot{X}}$ won't jiggle around much across samples, so we will get very nearly the same value for ${\hat{E}[h(X)]}$ in each run of the experiment (i.e. the "estimation error" doesn't seem to capture the variation of the estimator). Is the book misusing "estimation error", or am I missing something?
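A small simulation makes this concern concrete. Everything below is invented for illustration (Uniform(0,1) target, ${h(x)=x^2}$, so ${E[h(X)]=1/3}$); it is not the textbook's example:

```python
import random
import statistics

random.seed(1)
N, R = 50, 2000                    # sample size / number of replications
h = lambda x: x * x                # E[h(U)] = 1/3 for U ~ Uniform(0, 1)

def naive():
    # simple random sample of size N
    return statistics.fmean(h(random.random()) for _ in range(N))

def stratified():
    # one uniform draw inside each of the N equal-probability strata
    return statistics.fmean(h((i + random.random()) / N) for i in range(N))

sd_naive = statistics.stdev(naive() for _ in range(R))
sd_strat = statistics.stdev(stratified() for _ in range(R))
print(sd_naive, sd_strat)
```

In runs of this sketch, the across-replication spread of the stratified estimator comes out far smaller than the within-sample ${s(h(\dot{X}))/\sqrt{N}}$ formula would suggest, which is exactly the intuition above.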
Estimation Error Calculation
CC BY-SA 4.0
null
2023-03-27T19:58:59.250
2023-03-29T14:02:48.937
null
null
252129
[ "variance", "optimization", "monte-carlo", "estimators" ]
610923
2
null
100245
0
null
See [https://stat.ethz.ch/R-manual/R-devel/library/mgcv/html/missing.data.html](https://stat.ethz.ch/R-manual/R-devel/library/mgcv/html/missing.data.html) Or enter `help(mgcv::missing.data)` in an R session. An approach that can be effective, with sample code on the help page, is then to ``` substitute a simple random effects model in which the by variable mechanism is used to set s(x) to zero for any missing x, while a Gaussian random effect is then substituted for the ‘missing’ s(x). ``` Factors are required, one for each variable that has missing values, for use as missing value indicators, in each case with as many levels as there are missing values. The NAs are replaced in each case, by the mean for the relevant variable.
null
CC BY-SA 4.0
null
2023-03-27T20:29:06.463
2023-03-27T20:29:06.463
null
null
63726
null
610924
1
null
null
12
231
Given a reference distribution and an unknown sample, we need some statistical test to determine if the unknown sample came from the reference (one-sample test), or given two samples to determine whether they are realizations from the same distribution. A very popular test for both is the [one-sample Kolmogorov-Smirnov test, or two-sample K-S test](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test). However, it is well-known that the Kolmogorov-Smirnov test is most sensitive at the median location of the distribution, and much less sensitive when there are differences in the tails, which in many cases is the more interesting region (see this assertion in the link for Kuiper's test below). There are other tests which are equally sensitive in the tails as at the median, such as the [Anderson-Darling test](https://en.wikipedia.org/wiki/Anderson%E2%80%93Darling_test), or [Kuiper's test](https://en.wikipedia.org/wiki/Kuiper%27s_test), which also has the additional advantage of being applicable to cyclical shifts. Other alternative tests: - a statistical test using the Kullback-Leibler divergence, for example: Arizono and Ohta (1989), "A Test for Normality Based on Kullback-Leibler Information". - a statistical test using the Wasserstein distance, for example: Wang, Gao, Xie (2020), "Two-sample Test using Projected Wasserstein Distance: Breaking the Curse of Dimensionality". My question is: What are the advantages (if any) of the Kolmogorov-Smirnov test over other tests, such as Anderson-Darling, Kuiper's, and other alternatives? Please feel free to recommend any other test not mentioned in the question. Please clarify objective quantitative advantages. (An answer could be, perhaps, "KS has no objective quantitative advantages besides popularity and mind-share", but then please clarify the objective quantitative advantages of any other test you recommend and why.)
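For reference on the mechanics: the K-S and Kuiper statistics aggregate the same ECDF differences in different ways. A minimal pure-Python sketch with toy data (statistics only, no p-values):

```python
import bisect

def ecdf_diffs(x, y):
    # signed ECDF differences F_x(t) - F_y(t) at every pooled sample point
    xs, ys = sorted(x), sorted(y)
    out = []
    for t in sorted(set(x) | set(y)):
        fx = bisect.bisect_right(xs, t) / len(xs)
        fy = bisect.bisect_right(ys, t) / len(ys)
        out.append(fx - fy)
    return out

def ks_stat(x, y):
    # Kolmogorov-Smirnov: the single largest gap, wherever it occurs
    return max(abs(d) for d in ecdf_diffs(x, y))

def kuiper_stat(x, y):
    # Kuiper: D+ + D-, so gaps in both directions contribute
    d = ecdf_diffs(x, y)
    return max(max(d), 0.0) + max(-min(d), 0.0)

print(ks_stat([1, 2, 3], [4, 5, 6]), kuiper_stat([1, 2, 3], [4, 5, 6]))  # 1.0 1.0
```

On these toy inputs the two coincide; the difference shows up when the samples disagree in both tails, where Kuiper's $D^+ + D^-$ keeps both contributions while K-S keeps only the single largest gap.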
What are the advantages (if any) of the Kolmogorov-Smirnov test over other tests?
CC BY-SA 4.0
null
2023-03-27T20:35:33.040
2023-04-06T22:35:24.530
2023-03-28T08:02:44.090
366449
366449
[ "kolmogorov-smirnov-test", "kullback-leibler", "anderson-darling-test" ]
610925
1
null
null
0
24
I have a question about modelling but I don't even know what kind of model to start looking into. I have panel data with multiple waves. The response variable I'm interested in is a categorical variable (let's call it `status`). This variable has three categories: `willing to be vaccinated`, `unwilling to be vaccinated` and `vaccinated`. The problem is that the `vaccinated` group is getting larger with each wave as people get vaccinated over time. I'm not sure how to model this type of data structure, or if that's even possible. Would very much appreciate some guidance towards papers/textbooks/resources that help with modelling these data. Thank you.
Time series with categorical response variable
CC BY-SA 4.0
null
2023-03-27T20:53:40.137
2023-03-27T20:53:40.137
null
null
219593
[ "categorical-data", "survival", "panel-data", "time-varying-covariate" ]
610926
1
null
null
0
27
We are working with some time series data that look like the following: [](https://i.stack.imgur.com/NdNSd.png) We have three individuals (a1, a2, a3) and then several observations at regular time intervals. Here, we have 12 time intervals. At each interval, we have one coded behavioral variable (A, B, C or D). What we would like to investigate is how "synchronous" or "asynchronous" i) each pair of individuals (i.e. a1/a2; a1/a3; a2/a3) and ii) all three individuals are in a given behavior. For instance, from this dummy data it appears that for the behavior "A", a1 and a2 are commonly co-exhibiting that behavioral code. I was thinking that perhaps some form of simple ratio of times coexpressing versus times not, given the relative base rate of each individual showing each behavior, might be the way to go forward. I am interested in any ideas of how to measure this form of 'synchrony'. (I'm also aware that synchrony may not be the most appropriate word here, but I'm unsure what is better.) In terms of the actual data, we could have as many as 40 replicates of these data for each group of 3 IDs. Therefore, ideally any method would be able to compare data across multiple observations, but allowing for breaks (i.e. the 1st observation of the 2nd replicate is not the 13th observation).
Identifying synchrony across individual time series
CC BY-SA 4.0
null
2023-03-27T20:57:55.213
2023-03-27T20:57:55.213
null
null
49482
[ "time-series", "correlation", "sequence-analysis", "sequential-analysis" ]
610927
1
null
null
0
6
Let's say we have yearly end-of-year ratings (y) and the corresponding features (X). If you had to analyse temporal dependencies/influences of some of those features, how would you do that? What are possible methods to analyse that? All I could think of is, e.g., making an ordered probit model and analysing how the feature importances change over the years (e.g. with SHAP values). What would you do? Please share all your thoughts.
How to analyse temporal influences of Features
CC BY-SA 4.0
null
2023-03-27T21:19:24.157
2023-03-27T21:19:24.157
null
null
384290
[ "time-series", "panel-data", "feature-engineering", "importance", "ordered-probit" ]
610928
1
null
null
0
13
I have a set of 100 data point pairs, representing the estimated $(x_{\mathrm{est}},y_{\mathrm{est}})$ position coordinates of a physical object, calculated based on sensor range measurements. Knowing the exact ground-truth position coordinates $(x_{\mathrm{true}},y_{\mathrm{true}})$ of the object, what is a good choice of error metric $(e_{x}=x_{\mathrm{true}}-x_{\mathrm{est}},e_{y}=y_{\mathrm{true}}-y_{\mathrm{est}}$, or $\big\|e_{\{\cdot\}}\big\|)$, and how to visually represent (in a plot) the error distribution of the calculated position to perform an error analysis? Thanks for any idea you can offer.
Error analysis of a measured physical quantity
CC BY-SA 4.0
null
2023-03-27T21:19:39.560
2023-03-27T23:10:32.563
2023-03-27T23:10:32.563
314475
314475
[ "distributions", "measurement-error" ]
610929
1
null
null
0
20
If the normality-of-residuals assumption is violated for an LMM, is it appropriate to instead use Spearman/Kendall correlations of each predictor and the residuals to replace the t-test of predictor significance? The data is longitudinal and I still want to keep the random effects, hence why I'm thinking of applying the non-parametric test to the residuals (this will be equivalent to testing the predictors' significance in explaining the leftover variance not explained by random effects). Thinking this is a sufficient proxy; obviously I could also go with GEE or a permutation test, but I want to know if a non-parametric test is good. If this just makes no sense, feel free to be brutal and suggest more sensible approaches. This is part of the final analysis for my master's program. PS. I have tried multiple transformations -- nothing works. The data has a predictor clustered around a value, and many suspicious values. I spoke to my advisor and will address this in the analysis, but I still need a sanity check for the question at hand.
Replacing T-test with spearman or kendall correlation
CC BY-SA 4.0
null
2023-03-27T21:25:55.570
2023-03-27T21:27:31.420
2023-03-27T21:27:31.420
383608
383608
[ "t-test" ]
610930
1
null
null
0
17
In general, I deal with multiclass classification problems with unbalanced datasets. In these cases, I generally adopt accuracy, plus macro and weighted averages of precision, recall and F-measure. Now I'm dealing with a balanced dataset. In this case, is accuracy enough? Why?
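To make the doubt concrete: with perfectly balanced classes, accuracy coincides with macro-averaged recall, yet two classifiers with identical accuracy can still behave very differently per class, which macro precision exposes. A toy sketch with invented counts:

```python
# Two confusion matrices on the same balanced 3-class data
# (rows = true class, columns = predicted class; counts invented).
cm_a = [[60, 20, 20],
        [20, 60, 20],
        [20, 20, 60]]
cm_b = [[100,  0,  0],
        [  0, 80, 20],
        [ 40, 60,  0]]   # class 2 is never predicted correctly

def accuracy(cm):
    return sum(cm[i][i] for i in range(3)) / sum(map(sum, cm))

def macro_precision(cm):
    col = lambda j: sum(row[j] for row in cm)
    return sum(cm[j][j] / col(j) for j in range(3)) / 3

print(accuracy(cm_a), accuracy(cm_b))                # identical: 0.6 and 0.6
print(macro_precision(cm_a), macro_precision(cm_b))  # very different
```

Both classifiers score the same accuracy, but the second one never gets class 2 right, which only shows up in the per-class metrics.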
Is accuracy enough for evaluating models in balanced multiclass problems?
CC BY-SA 4.0
null
2023-03-27T21:36:59.157
2023-03-27T21:36:59.157
null
null
219084
[ "machine-learning", "classification" ]
610931
1
null
null
1
14
An ROC curve is plotted with (1-False Positive Rate) on the X-axis and the True Positive Rate on the Y-axis. However, the way in which each point of the curve is plotted is by first picking a cut-off probability value below which all samples are classified as negative and above which all are positive. And using this decision threshold, we get one point (TPRpoint 1, 1-FPRpoint 1). Doing this for multiple decision thresholds/cut-off probabilities, we can generate a 2-Dimensional ROC curve. Does this mean that the true independent value is the cut-off probability and not the FPR as looking at the X-axis would suggest?
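To illustrate what I mean, here is a minimal sketch (scores and labels invented) where each cut-off produces exactly one point of the curve:

```python
# Invented scores and labels; each cut-off t yields exactly one (FPR, TPR)
# point, so the curve is parameterized by the threshold, not by FPR itself.
scores = [0.1, 0.3, 0.35, 0.6, 0.8, 0.9]
labels = [0,   0,   1,    0,   1,   1]
P = sum(labels)
N = len(labels) - P

def roc_point(t):
    tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
    return fp / N, tp / P  # (FPR, TPR)

curve = [roc_point(t) for t in (0.0, 0.2, 0.5, 0.7, 1.0)]
print(curve)  # runs from (1, 1) at t=0 down to (0, 0) at t=1
```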
What is the independent value in an roc curve?
CC BY-SA 4.0
null
2023-03-27T21:43:30.983
2023-03-27T21:43:30.983
null
null
372593
[ "correlation", "roc", "dependent-variable" ]
610932
1
null
null
0
18
I read that you have to follow this: maximizing the difference between the nodes and minimizing the difference within the nodes I know that SSE_parent should be bigger than SSE_child because you want your SSE child to be lower as you move down to the terminal nodes. But, at the same time, you want SSE within the nodes to be low: creation of leaves involved minimizing squared differences of actual and assigned values. So, I guess it's a balancing act between the two "rules" Do you agree with this at a high level? Also, when you have a long right tail, your target variable produces more high valued leaves. It means the actual value and the predicted value is nothing but the average outcome of all cases at that node. Can you please confirm as well Thanks!
Decision tree for splitting a node
CC BY-SA 4.0
null
2023-03-27T21:51:39.597
2023-03-27T22:20:36.230
2023-03-27T22:20:36.230
382257
382257
[ "cart" ]
610933
2
null
610919
0
null
I think you end up having a non-standard distribution. My (tentative) approach would be the following. Let $W_{j}\sim \mathrm{F}_{1,N-1,\lambda},j=1,2$. First, we know that $\widetilde{T}_j\sim \mathrm{t}_{N-1,\lambda},\widetilde{T}_j:=\sqrt{W_j},j=1,2,$ where $\mathrm{t}_{\nu,\lambda}$ is the non-central $\mathrm{t}$ distribution with $\nu$ degrees of freedom and non-centrality parameter $\lambda$. Second, I would consider the centralized version of the two random variables above. Namely, $$T_j = \widetilde{T}_j-\frac{\lambda}{\sqrt{V/(N-1)}},\quad V\sim \chi_{N-1}^2.$$ Finally, [this paper](https://arxiv.org/abs/0906.3037) provides a formula for the density of the sum of two independent $d$-dimensional Student-$t$ random vectors. In your case $d=1$. If you can work with $(T_1,T_2)$ instead of $(W_1,W_2)$ you are done. Otherwise, you simply transform the density for $T_1+T_2$ given in the paper into the density of $W_1+W_2$ using the relationships mentioned above.
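Whatever closed form you end up with, it is easy to sanity-check by simulation, since $\mathrm{F}_{1,N-1,\lambda}$ is the ratio of a noncentral $\chi^2_1(\lambda)$ (a normal shifted by $\sqrt{\lambda}$, squared) to an independent $\chi^2_{N-1}/(N-1)$. A sketch with toy values ($N=10$, $\lambda=2$, both invented):

```python
import random
import statistics

random.seed(0)
N, lam = 10, 2.0          # toy degrees of freedom and noncentrality
delta = lam ** 0.5

def ncf_draw():
    # F(1, N-1, lam) = noncentral chi^2_1(lam) / (chi^2_{N-1} / (N-1))
    num = (random.gauss(0, 1) + delta) ** 2
    den = sum(random.gauss(0, 1) ** 2 for _ in range(N - 1)) / (N - 1)
    return num / den

sums = [ncf_draw() + ncf_draw() for _ in range(100_000)]
# The mean of one F(1, N-1, lam) draw is (N-1)(1+lam)/(N-3) = 27/7,
# so the sum's mean should be about 2 * 27/7 ~ 7.71.
print(statistics.fmean(sums))
```

A histogram of `sums` can then be compared against any candidate analytic density.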
null
CC BY-SA 4.0
null
2023-03-27T21:59:59.590
2023-03-27T21:59:59.590
null
null
135461
null
610935
2
null
396932
0
null
The simple answer is that the instrument only needs to be independent of the outcome error after controlling for covariates, since the control variables will absorb any correlated variation. Considering the example you mentioned, we are interested in the effect of wages on education. The first and second stages of 2SLS will be: $$Wage= \pi_{0}Area+\pi_{n}Controls_{n}+\nu$$ $$Education= \beta_{0}\hat{Wage}+\beta_{n}Controls_{n}+\epsilon$$ where $\hat{Wage}$ is the fitted value of $Wage$ from the first stage. The two conditions (exclusion restriction and relevance) that need to be satisfied to validate the instrument are: $$Cov(\epsilon, Area|Controls)=0\ (1)$$ $$Corr(Wage, Area|Controls)\neq0\ (2)$$ Note here that both assumptions are conditional on Controls. In real practice, equation (1) could also be required without the conditioning, which is harder to achieve.
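A numeric sketch of this two-stage logic (all variable names, coefficients, and the data-generating process below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
area = rng.normal(size=n)                    # instrument
ctrl = rng.normal(size=n)                    # observed control
u = rng.normal(size=n)                       # unobserved confounder
wage = area + 0.5 * ctrl + u + rng.normal(size=n)
educ = 2.0 * wage + 0.3 * ctrl - u + rng.normal(size=n)  # true effect = 2

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols(np.column_stack([np.ones(n), wage, ctrl]), educ)[1]  # biased by u

Z = np.column_stack([np.ones(n), area, ctrl])
wage_hat = Z @ ols(Z, wage)                  # first stage: fitted wages
beta = ols(np.column_stack([np.ones(n), wage_hat, ctrl]), educ)[1]
print(naive, beta)  # naive OLS is pulled away from 2; 2SLS recovers ~2
```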
null
CC BY-SA 4.0
null
2023-03-27T22:06:49.733
2023-03-27T22:06:49.733
null
null
384291
null
610936
2
null
591584
0
null
2SLS is appropriate for binary endogenous variables and binary outcome variables. In fact, it is a common method for estimating the effects of binary endogenous variables on binary or continuous outcome variables. Bivariate Probit (Biprobit) is also a viable alternative for estimating the effect of a binary endogenous variable on binary outcome variables. Biprobit can handle binary endogenous variables and binary outcomes, but it assumes that the errors in both the endogenous and outcome equations are jointly normally distributed. If this assumption is not met, then Biprobit may not be the best option. In your case, since you have continuous instruments and binary endogenous variables and binary outcome variables, 2SLS can be used to estimate the causal effect of sanitation on child health. You can use the binary outcome variables as dependent variables in the second stage regression. It is important to ensure that the instruments used in the analysis satisfy the 2SLS assumptions, particularly the exogeneity and relevance conditions. Additionally, you should also consider other potential confounding variables that could influence the relationship between sanitation and child health, and include them in the analysis as control variables.
null
CC BY-SA 4.0
null
2023-03-27T22:11:39.643
2023-03-27T22:11:39.643
null
null
384291
null
610937
1
null
null
0
7
Well, I'm not very into statistics and I am facing a problem regarding the dependence between two variables in my experiments. Context: I have two variables in my experiments, SNR and score. I know empirically that a higher SNR gives a better score. However, it is not always true, because I have some good scores at low SNR. However, it seems like I might be wrong and that correlation does not hold, but I am not confident stating that. [](https://i.stack.imgur.com/sw5xc.png) Here is the plot with the two variables plotted. A slight upward trend can be seen in the picture, but it is not very clear... What do you suggest? Does it make sense that they are correlated, or does the data suggest otherwise? Thanks!
Correlation Study with two variables with a non linear trend
CC BY-SA 4.0
null
2023-03-27T22:31:48.547
2023-03-27T22:33:39.207
2023-03-27T22:33:39.207
365263
365263
[ "regression", "correlation", "dependent-variable" ]
610938
1
null
null
0
14
I am analyzing a study comparing the weight loss of different lures over 37 days. We have seven lure types: 0mmBottle, 1mmBottle, 1.5mmBottle, 3mmBottle ,6mmBottle, 2mmSachet, and 6mmSachet. There are five replicates within each lure type. Each lure was weighed daily to determine its weight loss, and the weight loss is the dependent variable that I will be comparing. Is it appropriate to analyze these results with a repeated measures anova? Would a TukeyHSD test be a solid choice to conduct post hoc tests?
Repeated measures anova with replication in R
CC BY-SA 4.0
null
2023-03-27T22:33:06.887
2023-03-29T22:58:18.327
2023-03-29T22:58:18.327
11887
384293
[ "anova", "repeated-measures" ]
610940
1
615545
null
3
150
Fractional factorial designs (FFDs) and Response Surface Methodology (RSM) are both approaches to extracting some information about how multiple interacting factors affect a response variable. Their value lies in the fact that they involve much less time and investment to perform than the classic (and easier-to-understand) full-factorial experiment. FFDs and RSM are uncommon in my field and I'm trying to understand some basics about them. Are FFDs mainly thought of as a component of RSM? If so, what does RSM look like without FFDs? And how are FFDs used if not as part of RSM? I'd appreciate good references about this as well.
How do Fractional Factorial Designs relate to 'Response Surface Methodology'?
CC BY-SA 4.0
null
2023-03-27T22:37:43.080
2023-05-30T12:23:54.913
2023-05-11T11:26:16.183
121522
121522
[ "interaction", "experiment-design", "fractional-factorial", "response-surface-methodology" ]
610941
2
null
610522
1
null
It is possible that a variable is both exogenous and endogenous depending on the independent variable and treatment effect you are estimating. In your situation, it is possible that relative_power is exogenous to language_score while being endogenous towards math_score. When we say endogenous and exogenous, we mean one variable with respect to another. For example, in the classic example of education affecting wages, education is endogenous with respect to wages, but may not be with other variables such as ice cream price. Ultimately, it will still depend on the context that whether you have a good reason for the exogeneity of the independent variable.
null
CC BY-SA 4.0
null
2023-03-27T22:38:15.130
2023-03-27T22:38:15.130
null
null
384291
null
610942
1
null
null
0
81
I am training a Full model (logistic regression) and a few different models (LASSO, Elastic net, CART, random forest) to predict a certain clinical outcome. I split my data into training and test sets based on time (temporal validation), with the latest data used for testing. Train set - 2580 observations (incidence rate - 8.8%) Test set - 980 observations (incidence rate - 5.5%) I used AUC and calibration slope to evaluate model performance, computed based on 10-fold cross-validation for the training set. Then I computed the AUC and calibration slope for the test set. [](https://i.stack.imgur.com/vObcX.png) My concern is that the AUC values in cross-validation are much lower than the AUC values in the test set. I tried several options to find the reason for this. I checked the distributions of the variables between the train and test sets and found no differences. What is the reason for this; is there something I'm missing here? Any idea or suggestion is appreciated.
AUC values of training and cross-validation are lower than AUC values of test set
CC BY-SA 4.0
null
2023-03-27T22:43:45.347
2023-03-27T22:43:45.347
null
null
124115
[ "machine-learning", "predictive-models", "validation", "auc", "calibration" ]
610943
1
null
null
0
42
I am analyzing the effect of covid rates and covid lockdown levels on murder and suicide rates for the 50 most populous counties in the U.S. I am not sure what to do to analyze the question in aggregate terms- I have transformed the data into panel data: I have 36 months and 50 counties, so 1800 cells of data for suicide and murder. I also have the covid rates in panel data format as well- what kind of variable could I use for an instrumental variable for a fixed effects regression? or what kind of regression would work better for this question? Thanks!
What kind of tests should I run for my master's thesis?
CC BY-SA 4.0
null
2023-03-27T22:53:12.127
2023-03-28T18:21:20.427
null
null
384296
[ "regression", "econometrics", "stata", "instrumental-variables", "effects" ]
610945
1
610946
null
3
153
I am running a binary logistic regression analysis in R software using `rms` package. Due to the small sample size (n=96), I used all data as training data. Given that I don't have test data, what method do you suggest for evaluating/validating the model?
How to evaluate/validate a binary logistic regression model using training data?
CC BY-SA 4.0
null
2023-03-27T23:30:24.333
2023-03-27T23:54:48.057
2023-03-27T23:37:01.807
379762
379762
[ "r", "regression", "logistic", "cross-validation", "model-evaluation" ]
610946
2
null
610945
5
null
The author of the `rms` package often recommends a bootstrap procedure instead of using an explicit holdout set. The idea is that, when you have a holdout set, you deprive the model of precious training data. Perhaps this is not such a big deal when you have billions of samples, but you do not. However, if you just fit to the training data and go with whatever parameter estimates you get, you have no sense of whether you have overfit. Enter the bootstrap. The idea is to train the model on all of the data. Then evaluate on your metric of choice, say log loss or Brier score. Then you select a bootstrap sample of your data, train on that sample, and apply the trained model to the entire data set. Evaluate this model using the same evaluation metric, and compare the performance of this bootstrap-trained model to the performance of the model trained on all observations. Repeat, repeat, repeat. You now have a sense of by how much you have overfit and whether that amount is acceptable. The function `rms::validate` will be your friend for this.
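Language aside, the optimism-bootstrap idea can be sketched in a few lines. The data here are invented (pure-noise linear regression, so the true $R^2$ is 0) and the model is ordinary least squares rather than a logistic fit, purely to keep the sketch self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 96, 10
X = rng.normal(size=(n, p))
y = rng.normal(size=n)            # pure noise: the true R^2 is 0

def fit(X, y):
    Xd = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(Xd, y, rcond=None)[0]

def rsq(beta, X, y):
    Xd = np.column_stack([np.ones(len(X)), X])
    resid = y - Xd @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

apparent = rsq(fit(X, y), X, y)   # optimistic: fit and evaluated on all data
B, optimism = 200, 0.0
for _ in range(B):
    idx = rng.integers(0, n, n)   # bootstrap resample
    b = fit(X[idx], y[idx])
    # performance on the resample minus performance on the original data
    optimism += (rsq(b, X[idx], y[idx]) - rsq(b, X, y)) / B
corrected = apparent - optimism
print(apparent, corrected)        # the correction strips the optimism
```

The gap between `apparent` and `corrected` is the estimate of by how much you have overfit.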
null
CC BY-SA 4.0
null
2023-03-27T23:49:18.847
2023-03-27T23:54:48.057
2023-03-27T23:54:48.057
247274
247274
null
610947
1
null
null
3
50
I've been trying to implement and extend some results from the papers ["Always Valid Inference"](https://arxiv.org/abs/1512.04922) and ["Peeking at A/B Tests"](http://library.usc.edu.ph/ACM/KKD%202017/pdfs/p1517.pdf). The authors provide a closed form expression of the "mixture" likelihood ratio for a two-sided alternative hypothesis (i.e., where $\Theta$ can be any real number). [](https://i.stack.imgur.com/mearC.png) [](https://i.stack.imgur.com/s4jD8.png) I'm interested in developing a closed form expression for the mixture likelihood ratio for one-sided alternatives (i.e., where $\Theta$ can only be positive or negative). The authors did not provide a derivation, so here's my attempt ## Solving the integral - Assume data from cohort $A$ and cohort $B$ arrive in pairs $Z_n = (A_n, B_n)$ - $h(\theta) = \text{Normal}(0, \tau^2)$ - $f(X_i|\mu=\theta) = \text{Normal}(\theta, \sigma^2)$ - $f(X_i|\mu=\theta_0) = \text{Normal}(\theta_0, \sigma^2)$ $$\Lambda_T = \int_{\theta \in \Theta} h(\theta) \prod_{m=1}^T \frac{f(Z_m|\mu=\theta)}{f(Z_m|\mu=\theta_0)}d\theta $$ $$ =\int_{\theta \in \Theta} \frac{1}{\sqrt{2\pi\tau^2}}e^{\frac{-1}{2}\big(\frac{\theta - 0}{\tau}\big)^2} \prod_{m=1}^T \frac{\frac{1}{\sqrt{2\pi\sigma^2}} e^{\frac{-1}{2}\big(\frac{Z_m - \theta}{\sigma}\big)^2}}{\frac{1}{\sqrt{2\pi\sigma^2}} e^{\frac{-1}{2}\big(\frac{Z_m - \theta_0}{\sigma}\big)^2}} d\theta $$ $$ = \frac{1}{\sqrt{2\pi\tau^2}} \int_{\theta \in \Theta} \prod_{m=1}^T \Big[ \frac{e^{\frac{-1}{2}\big(\frac{Z_m - \theta}{\sigma}\big)^2}}{ e^{\frac{-1}{2}\big(\frac{Z_m - \theta_0}{\sigma}\big)^2}}\Big] e^{\frac{-1}{2}\big(\frac{\theta - 0}{\tau}\big)^2}d\theta $$ $$ = \frac{1}{\sqrt{2\pi\tau^2}} \int_{\theta \in \Theta} \prod_{m=1}^T \Big[ \exp \Big( \frac{-1}{2}\big((\frac{Z_m - \theta}{\sigma})^2 - (\frac{Z_m - \theta_0}{\sigma})^2\big)\Big) \Big] \exp \Big( \frac{-1}{2}\big(\frac{\theta - 0}{\tau}\big)^2 \Big)d\theta $$ $$ = \frac{1}{\sqrt{2\pi\tau^2}} \int_{\theta \in \Theta} \exp \Big[ 
\sum_{m=1}^T \Big( \frac{-1}{2}\big((\frac{Z_m - \theta}{\sigma})^2 - (\frac{Z_m - \theta_0}{\sigma})^2\big) \Big) + \frac{-1}{2}\big(\frac{\theta - 0}{\tau}\big)^2 \Big]d\theta $$ $$ = \frac{1}{\sqrt{2\pi\tau^2}} \int_{\theta \in \Theta} \exp \Big[ \frac{-1}{2} \big[\sum_{m=1}^T \Big(\big((\frac{Z_m - \theta}{\sigma})^2 - (\frac{Z_m - \theta_0}{\sigma})^2\big) \Big) + \big(\frac{\theta - 0}{\tau}\big)^2 \big]\Big]d\theta $$ $$ = \frac{1}{\sqrt{2\pi\tau^2}} \int_{\theta \in \Theta} \exp \Big[ \frac{-1}{2} \big[ \sum_{m=1}^T \Big(\frac{Z_m^2 -2Z_m\theta + \theta^2}{\sigma^2} - \frac{Z_m^2 -2Z_m\theta_0 + \theta_0^2}{\sigma^2}\big) \Big) + \big(\frac{\theta - 0}{\tau}\big)^2 \big] \Big]d\theta $$ $$ = \frac{1}{\sqrt{2\pi\tau^2}} \int_{\theta \in \Theta} \exp \Big[ \frac{-1}{2} \big[ \big(\frac{\sum_{m=1}^T Z_m^2}{\sigma^2} + \frac{-2\theta \sum_{m=1}^T Z_m + T\theta^2}{\sigma^2} \big) - \big( \frac{\sum_{m=1}^T Z_m^2}{\sigma^2} + \frac{ -2\theta_0 \sum_{m=1}^T Z_m + T\theta_0^2}{\sigma^2}\big)\big) + \frac{\theta^2}{\tau^2} \big] \Big]d\theta $$ $$ = \frac{1}{\sqrt{2\pi\tau^2}} \int_{\theta \in \Theta} \exp \Big[ \frac{-1}{2} \big[ \big( \frac{-2\theta \sum_{m=1}^T Z_m + T\theta^2}{\sigma^2} \big) - \big( \frac{ -2\theta_0 \sum_{m=1}^T Z_m + T\theta_0^2}{\sigma^2}\big)\big) + \frac{\theta^2}{\tau^2} \big] \Big]d\theta $$ $$ = \frac{1}{\sqrt{2\pi\tau^2}} \int_{\theta \in \Theta} \exp \Big[\frac{\theta \sum_{m=1}^T Z_m}{\sigma^2} - \frac{T\theta^2}{2\sigma^2} - \frac{\theta_0 \sum_{m=1}^T Z_m}{\sigma^2} + \frac{T\theta_0^2}{2\sigma^2} - \frac{\theta^2}{2\tau^2} \Big]d\theta $$ $$ = \frac{1}{\sqrt{2\pi\tau^2}} \int_{\theta \in \Theta} \exp \Big[ -\theta^2\Big(\frac{T}{2\sigma^2} + \frac{1}{2\tau^2}\Big) + \theta \Big( \frac{\sum_{m=1}^T Z_m}{\sigma^2} \Big) + \Big(\frac{-\theta_0 \sum_{m=1}^T Z_m}{\sigma^2}+ \frac{T\theta_0^2}{2\sigma^2} \Big)\Big]d\theta $$ At this point, we can let - $a=\frac{T}{2\sigma^2} + \frac{1}{2\tau^2} = \frac{T\tau^2 + 
\sigma^2}{2\sigma^2\tau^2}$ - $b= \frac{\sum_{m=1}^T Z_m}{\sigma^2}$ - $c=\frac{-\theta_0 \sum_{m=1}^T Z_m}{\sigma^2}+ \frac{T\theta_0^2}{2\sigma^2} = \frac{-2\theta_0 \sum_{m=1}^T Z_m + T\theta_0^2}{2\sigma^2}$ If $\Theta = \mathbb{R}$ $$ \frac{1}{\sqrt{2\pi\tau^2}} \int_{\theta \in \Theta} e^{-a\theta^2 + b\theta + c} d\theta = \frac{1}{\sqrt{2\pi\tau^2}} \sqrt{\frac{\pi}{a}} \exp \Big(c + \frac{b^2}{4a} \Big) $$ - If $\Theta = +\mathbb{R}$, the integral is the same, but with an added factor $\text{erf}\big(\frac{b}{2\sqrt{a}}\big) + 1$. - If $\Theta = -\mathbb{R}$, the integral is the same, but with an added factor $\text{erfc}\big(\frac{b}{2\sqrt{a}}\big)$ These are great results! It means I can simply multiply the two-sided result by a correction factor to find the one-sided result. ## Sanity checking my values for $a$, $b$, and $c$ Going back to assuming $\Theta = \mathbb{R}$, here's an attempt to match the literature $$ \frac{1}{\sqrt{2\pi\tau^2}} \sqrt{\frac{\pi}{a}} \exp \Big(c + \frac{b^2}{4a} \Big) $$ $$ = \frac{1}{\sqrt{2\pi\tau^2}} \sqrt{\frac{\pi}{\big(\frac{T\tau^2 + \sigma^2}{2\sigma^2\tau^2}\big)}} \exp \Big(\big(\frac{-2\theta_0 \sum_{m=1}^T Z_m + T\theta_0^2}{2\sigma^2}\big) + \frac{\big(\frac{\sum_{m=1}^T Z_m}{\sigma^2}\big)^2}{4\big(\frac{T\tau^2 + \sigma^2}{2\sigma^2\tau^2}\big)} \Big) $$ $$ = \frac{1}{\sqrt{2\tau^2}} \sqrt{\frac{1}{\big(\frac{T\tau^2 + \sigma^2}{2\sigma^2\tau^2}\big)}} \exp \Big(\big(\frac{-2\theta_0 \sum_{m=1}^T Z_m + T\theta_0^2}{2\sigma^2}\big) + \frac{\big(\frac{\sum_{m=1}^T Z_m}{\sigma^2}\big)^2}{4\big(\frac{T\tau^2 + \sigma^2}{2\sigma^2\tau^2}\big)} \Big) $$ $$ = \frac{1}{\sqrt{2\tau^2}} \sqrt{\big(\frac{2\sigma^2\tau^2}{T\tau^2 + \sigma^2}\big)} \exp \Big(\big(\frac{-2\theta_0 \sum_{m=1}^T Z_m + T\theta_0^2}{2\sigma^2}\big) + \frac{\big(\frac{\sum_{m=1}^T Z_m}{\sigma^2}\big)^2}{4\big(\frac{T\tau^2 + \sigma^2}{2\sigma^2\tau^2}\big)} \Big) $$ $$ = \sqrt{\frac{\sigma^2}{T\tau^2 + \sigma^2}} \exp 
\Big(\big(\frac{-2\theta_0 \sum_{m=1}^T Z_m + T\theta_0^2}{2\sigma^2}\big) + \frac{\big(\frac{\sum_{m=1}^T Z_m}{\sigma^2}\big)^2}{4\big(\frac{T\tau^2 + \sigma^2}{2\sigma^2\tau^2}\big)} \Big) $$ $$ = \sqrt{\frac{\sigma^2}{T\tau^2 + \sigma^2}} \exp \Big(\big(\frac{-2\theta_0 \sum_{m=1}^T Z_m + T\theta_0^2}{2\sigma^2}\big) + \frac{\big(\frac{\sum_{m=1}^T Z_m}{\sigma^2}\big)^2}{2\big(\frac{T\tau^2 + \sigma^2}{\sigma^2\tau^2}\big)} \Big) $$ $$ = \sqrt{\frac{\sigma^2}{T\tau^2 + \sigma^2}} \exp \Big(\big(\frac{-2\theta_0 \sum_{m=1}^T Z_m + T\theta_0^2}{2\sigma^2}\big) + \frac{\big(\sum_{m=1}^T Z_m\big)^2}{\sigma^4} \frac{\sigma^2\tau^2}{2\big(T\tau^2 + \sigma^2\big)} \Big) $$ $$ = \sqrt{\frac{\sigma^2}{T\tau^2 + \sigma^2}} \exp \Big(\big(\frac{-2\theta_0 \sum_{m=1}^T Z_m + T\theta_0^2}{2\sigma^2}\big) + \frac{\big(\sum_{m=1}^T Z_m\big)^2 \tau^2}{2\sigma^2\big(T\tau^2 + \sigma^2\big)} \Big) $$ $$ = \sqrt{\frac{\sigma^2}{T\tau^2 + \sigma^2}} \exp \Big(\big(\frac{-2\theta_0T\bar{Z}_T + T\theta_0^2}{2\sigma^2}\big) + \frac{\big(T\bar{Z}_T\big)^2 \tau^2}{2 \sigma^2\big(T\tau^2 + \sigma^2\big)} \Big) $$ $$ = \sqrt{\frac{\sigma^2}{T\tau^2 + \sigma^2}} \exp \Big(\frac{\big((-2\theta_0T\bar{Z}_T + T\theta_0^2) (T\tau^2 + \sigma^2)\big) + (T\bar{Z}_T)^2 \tau^2}{2\sigma^2(T\tau^2 + \sigma^2)} \Big) $$ $$ = \sqrt{\frac{\sigma^2}{T\tau^2 + \sigma^2}} \exp \Big(\frac{ -2\theta_0(T^2\tau^2)\bar{Z}_T + (T^2\tau^2)\theta_0^2 + \sigma^2\big(-2\theta_0T\bar{Z}_T + T\theta_0^2\big) + (T\bar{Z}_T)^2 \tau^2}{2\sigma^2(T\tau^2 + \sigma^2)} \Big) $$ $$ = \sqrt{\frac{\sigma^2}{T\tau^2 + \sigma^2}} \exp \Big(\frac{ -2\theta_0(T^2\tau^2)\bar{Z}_T + (T^2\tau^2)\theta_0^2 + \sigma^2\big(-2\theta_0T\bar{Z}_T + T\theta_0^2\big) + (\bar{Z}_T)^2 T^2\tau^2}{2\sigma^2(T\tau^2 + \sigma^2)} \Big) $$ $$ = \sqrt{\frac{\sigma^2}{T\tau^2 + \sigma^2}} \exp \Big(\frac{T^2\tau^2\Big((\bar{Z}_T)^2 -2\theta_0\bar{Z}_T + \theta_0^2\Big) + \sigma^2\big(-2\theta_0T\bar{Z}_T + T\theta_0^2\big)}{2\sigma^2(T\tau^2 + 
\sigma^2)} \Big) $$ $$ = \sqrt{\frac{\sigma^2}{T\tau^2 + \sigma^2}} \exp \Big(\frac{T^2\tau^2\Big(\bar{Z}_T - \theta_0\Big)^2 + \sigma^2\big(-2\theta_0T\bar{Z}_T + T\theta_0^2\big)}{2\sigma^2(T\tau^2 + \sigma^2)} \Big) $$ $$ = \sqrt{\frac{\sigma^2}{T\tau^2 + \sigma^2}} \exp \Big(\frac{T^2\tau^2\Big(\bar{Z}_T - \theta_0\Big)^2}{2\sigma^2(T\tau^2 + \sigma^2)} + \frac{\sigma^2\big(-2\theta_0T\bar{Z}_T + T\theta_0^2\big)}{2\sigma^2(T\tau^2 + \sigma^2)} \Big) $$ ## Problem For the general case where $\theta_0 \neq 0$, then $\frac{\sigma^2\big(-2\theta_0T\bar{Z}_T + T\theta_0^2\big)}{2\sigma^2(T\tau^2 + \sigma^2)}$ does not equal 0, and so the result in the paper doesn't match what I have here. It's possible I have a mistake somewhere, but I have PORED over this derivation for literally hours and haven't found the solution. I have tried using SymPy to check my work, but I'm finding it finicky.
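As a numeric sanity check of the basic Gaussian integral identity used above (a plain-Python sketch instead of SymPy; the quadrature window and grid size are arbitrary choices of mine):

```python
import math

def gaussian_integral(a, b, c, lo=-50.0, hi=50.0, n=200_000):
    """Numerically integrate exp(-a*t^2 + b*t + c) with the trapezoid rule.

    The integrand decays to (numerical) zero well before the endpoints,
    so the trapezoid rule is extremely accurate here.
    """
    h = (hi - lo) / n
    s = 0.5 * (math.exp(-a * lo * lo + b * lo + c) + math.exp(-a * hi * hi + b * hi + c))
    s += sum(math.exp(-a * t * t + b * t + c) for t in (lo + i * h for i in range(1, n)))
    return s * h

a, b, c = 1.3, 0.7, -0.2
closed_form = math.sqrt(math.pi / a) * math.exp(c + b * b / (4 * a))
print(gaussian_integral(a, b, c), closed_form)  # the two values should agree closely
```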
Can't reproduce closed form expression from sequential testing
CC BY-SA 4.0
null
2023-03-27T23:51:01.203
2023-03-29T14:06:52.020
null
null
221331
[ "sequential-analysis", "reproducible-research", "checking" ]
610949
2
null
65558
2
null
Normalization means that you put your data in a particular range, often $0$ to $1$. If you have coded your binary variable with $0$ and $1$, you already have this property and do not need to do anything. If you use a different type of coding of your binary variable, [such as $\pm 1$](https://stats.stackexchange.com/q/609727/247274), then the usual $\dfrac{x_i - \min(x)}{\max(x) - \min(x)}$ should work. If you have not coded your categories with numbers, your software does it under the hood, and the documentation should say how.
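As a minimal sketch of the rescaling formula above (plain Python; the `minmax` helper name is just for illustration):

```python
def minmax(xs):
    """Rescale values to [0, 1] via (x - min) / (max - min)."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

coded = [-1, 1, 1, -1]   # binary variable coded as +/-1
print(minmax(coded))     # [0.0, 1.0, 1.0, 0.0] -- back to 0/1 coding
```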
null
CC BY-SA 4.0
null
2023-03-28T00:07:43.240
2023-05-12T20:46:37.433
2023-05-12T20:46:37.433
247274
247274
null
610950
1
null
null
1
27
I am dealing with a problem where count data has to be modelled with either a binomial or a hypergeometric distribution. I have done an extensive literature review, and it seems that an occurrence equal to 0 (count equal to 0) does not provide any information to the posterior distribution of a potential beta distribution for the probability of occurrence. I would like to know: - What is the mathematical justification for this? - If the real proportion distribution is close to 0, why would count = 0 not bring any information to the posterior? I consider that it does provide information, since in this case 0 is a plausible and likely value. Regards, Juan
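To make the question concrete, here is the conjugate update as I understand it (plain Python; the Beta(α, β) parameterisation and the uniform prior are my assumptions). Even with zero successes, the β parameter changes, which is why I think count = 0 is informative:

```python
def beta_update(alpha, beta, k, n):
    """Beta-binomial conjugate update after observing k successes in n trials."""
    return alpha + k, beta + n - k

prior = (1, 1)                          # uniform Beta(1, 1) prior
post = beta_update(*prior, k=0, n=10)   # zero occurrences in 10 trials
print(post)                             # (1, 11): mass shifts toward small probabilities
```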
Beta Binomial Distribution - Updating beta with 0 occurrences
CC BY-SA 4.0
null
2023-03-28T00:12:00.473
2023-03-28T01:22:14.733
2023-03-28T01:22:14.733
22311
384249
[ "bayesian", "zero-inflation", "beta-binomial-distribution" ]
610951
1
null
null
0
10
I'm trying to standardize independent variables to determine their individual effect on the dependent variable. I have about 40 numeric, continuous independent variables, and around 10 ordinal independent variables. The dependent variable is ordinal. All of the ordinal independent variables have approximately the same scale (1-10). As does the dependent variable. Currently using python code in SQL Server 2019 with Machine Learning Services. Using the sklearn.linear_model.LogisticRegression classifier, with multi_class parameter set to OVR and using the liblinear solver with predict_proba to predict the probability estimates of each class. I currently get the coefficients of each independent variable, however, as I have not standardized the independent variables, I cannot really compare their individual effect on the dependent variable. Should I just standardize all the variables, numeric and ordinal, by subtracting the mean and then dividing by the standard deviation of that variable, or is there a better method you would suggest when dealing with mixed ordinal and numeric independent variables? I assume that once standardized, I can then interpret each coefficient to mean that an increase of 1 standard deviation in the independent variable will increase the dependent variable by the coefficient value.
Standardizing ordinal independent variables, to compare with numerical independent variables, as to their effect on the ordinal dependent variable
CC BY-SA 4.0
null
2023-03-28T00:35:41.997
2023-03-28T00:35:41.997
null
null
301076
[ "regression", "regression-coefficients", "ordinal-data", "standardization", "variable" ]
610953
2
null
242084
0
null
RMSE is nice because it relates to the Brier score, which is just a term sometimes used for square loss in classification settings. Depending on how $R^2$ is calculated, it might or might not relate to the RMSE. I would calculate $R^2$ in such a situation by comparing Brier score of your model to the Brier score that predicts the [prior probability](https://stats.stackexchange.com/a/583115/247274) for each category every time, and I would take the stance I discuss [here](https://stats.stackexchange.com/questions/590199/how-to-motivate-the-definition-of-r2-in-sklearn-metrics-r2-score) when it comes to an out-of-sample $R^2$. However, not everyone calculates $R^2$ the same way, and there are [serious flaws](https://datascience.stackexchange.com/a/114457/73930) to just calculating the correlation between the predictions and the observed outcomes. (I also do not know how that would work for when there are multiple categories.) Perhaps even better than such an approach is to compare using the log-likelihoods. This would be akin to McFadden’s $R^2$. UCLA has a [nice page](https://stats.oarc.ucla.edu/other/mult-pkg/faq/general/faq-what-are-pseudo-r-squareds/) discussing metrics for logistic regression. Since a neural network classifier is, in some regards, just an amplified logistic regression, there is useful content there (especially when degrees of freedom are not considered, since that could be tough to calculate for a neural network machine learning approach). The last two on the page have some flaws, but I do like [my interpretation](https://stats.stackexchange.com/questions/605818/how-to-interpret-the-ucla-adjusted-count-logistic-regression-pseudo-r2/605819#605819) of their adjusted count, though I [take issue](https://stats.stackexchange.com/questions/605450/is-the-proportion-classified-correctly-a-reasonable-analogue-of-r2-for-a-clas) with their assertion that “count” (what they call classification accuracy) is a reasonable analogue to the usual $R^2$.
null
CC BY-SA 4.0
null
2023-03-28T01:29:40.603
2023-03-28T01:29:40.603
null
null
247274
null
610954
1
null
null
0
21
I read that you have to follow this rule: maximize the difference between the nodes and minimize the difference within the nodes. I know that SSE_parent should be bigger than SSE_child, because you want the child SSE to be lower as you move down to the terminal nodes. But at the same time you want the SSE within the nodes to be low: creating leaves involves minimizing the squared differences between actual and assigned values. So I guess it's a balancing act between the two "rules". Do you agree with this at a high level? Also, when your target variable has a long right tail, it produces more high-valued leaves. The predicted value at a node is nothing but the average outcome of all cases at that node. Can you please confirm this as well?
Decision tree and how to make a split
CC BY-SA 4.0
null
2023-03-28T02:15:03.113
2023-04-03T04:56:41.183
2023-04-03T04:56:41.183
11887
382257
[ "cart" ]
610955
1
null
null
0
14
I have a sequence $S$ of characters generated from a finite alphabet $\Sigma$. Let's assume all characters are equally probable within $S$. I have a frequent pattern mining algorithm that, for simplicity, let's assume it requires only two input parameters (apart from the sequence $S$): - the (fixed) length of the patterns to be mined, $l_{\min}$. - the minimum support, $\sup_{\min}$. i.e. the minimum number of times a given pattern must appear to be considered frequent. Is it possible to relate the input parameters of the algorithm (i.e. $l_{\min}$ and $\sup_{\min}$) to the p-value of a certain appropriate test to ensure that the mined patterns have not been results of mere chance? I would like for example to replace the $\sup_{\min}$ input parameter with this p-value. And what is the appropriate test for this? In other words, I am looking for how $l_{\min}$ and $\sup_{\min}$ relate to the mentioned p-value and to the size of $\Sigma$.
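To sketch the kind of calculation I have in mind (plain Python; treating the $n - l + 1$ match positions as independent is a simplifying assumption of mine, since it ignores overlaps): under the null that characters are i.i.d. uniform over $\Sigma$, a fixed pattern of length $l$ matches at a given position with probability $|\Sigma|^{-l}$, and a one-sided binomial tail then gives a p-value for observing support $\geq \sup_{\min}$:

```python
from math import comb

def pattern_pvalue(n, alphabet_size, l, sup_min):
    """P(count >= sup_min) for a fixed length-l pattern in an i.i.d. uniform
    sequence of length n, treating the n - l + 1 positions as independent."""
    positions = n - l + 1
    p = alphabet_size ** -l
    return sum(comb(positions, k) * p**k * (1 - p) ** (positions - k)
               for k in range(sup_min, positions + 1))

# e.g. a 3-mer over a 4-letter alphabet in a sequence of length 1000
print(pattern_pvalue(n=1000, alphabet_size=4, l=3, sup_min=30))
```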
Pattern mining: is this pattern the result of mere chance?
CC BY-SA 4.0
null
2023-03-28T02:25:28.010
2023-03-28T02:51:32.883
2023-03-28T02:51:32.883
251472
251472
[ "hypothesis-testing", "p-value", "pattern-recognition", "sequential-pattern-mining" ]
610956
2
null
610912
1
null
The issue is not accounting for unbalanced data, but what kind of inference you want to make. Are you trying to characterize a population from which you sampled, or did you vary those factor levels experimentally? The extremes are `"equal"` (the default), which is best for estimating the effects of one factor while holding the rest fixed in an experimental mode; and `"cells"`, which in a linear model, basically reproduces the raw marginal means of the data but is least useful for quantifying effects and most vulnerable to confounding with effects of other factors. You might want to also consider counterfactuals (look that up in the [vignette index](https://cran.r-project.org/web/packages/emmeans/vignettes/vignette-topics.html#c)) which allow for certain kinds of causal inferences.
null
CC BY-SA 4.0
null
2023-03-28T03:15:37.290
2023-03-28T03:15:37.290
null
null
52554
null
610957
1
null
null
0
38
I am intrigued by the discussion of [scoring-rules](/questions/tagged/scoring-rules) yet I am left wondering about its practical implementation; I hope this thread can ameliorate that for me and ideally others. Tabling the issue of the forecast combination puzzle, I am interested in the application of scoring rules with respect to stacking methods, discussed here: [Ensemble classifiers trained using different sets of features](https://stats.stackexchange.com/questions/583930/ensemble-classifiers-trained-using-different-sets-of-features) Let's say we have a trivial example of the following: two models trained on different features and we are convinced that stacking would be a great idea. Instead of training yet another model, we can use scoring rules -- but how? What do our next steps look like? Are the steps the same even if we have two different models trained on different features (assuming we end up with predicted probabilities after training with both models)? Separately but relatedly -- what should I be doing after getting predicted probabilities to use my scoring function properly/in practice? ``` # a MWE to get started import numpy as np np.set_printoptions(suppress=True) from sklearn import datasets from sklearn.linear_model import LogisticRegression from sklearn.metrics import log_loss # load data iris = datasets.load_iris() # first model; first 2 features m1 = LogisticRegression(multi_class='multinomial', solver='newton-cg').fit(iris.data[:, :2], iris.target) # preds m1_preds = np.round(m1.predict_proba(iris.data[:, :2]), decimals=4) # second model; last 2 features m2 = LogisticRegression(multi_class='multinomial', solver='newton-cg').fit(iris.data[:, 2:], iris.target) # preds m2_preds = np.round(m2.predict_proba(iris.data[:, 2:]), decimals=4) # gen scores m1_score = log_loss(iris.target, m1_preds) m2_score = log_loss(iris.target, m2_preds) ```
Practically implementing scoring rules
CC BY-SA 4.0
null
2023-03-28T03:25:42.180
2023-03-28T05:56:32.947
null
null
121052
[ "machine-learning", "classification", "scoring-rules" ]
610959
2
null
507751
0
null
With Bernoulli randomization, there is a small chance we end up with all units being randomized to the same group. When that happens, the difference-in-means estimator is undefined. In the event that we have some units in both groups, the difference-in-means estimator is defined and is unbiased. This can be seen using iterated expectation first conditioning on the random vector of assigned treatments, and then marginalizing over it.
null
CC BY-SA 4.0
null
2023-03-28T04:21:18.490
2023-03-28T04:21:18.490
null
null
203700
null
610960
1
610966
null
1
63
I have a prediction task at hand, and I'm deciding on how to sample my data and train a model with no look-ahead bias. Given a time series $Z$, my task is to build a simple predictor of size $m$ (think of a causal autoregression $AR(m)$, or anything else), that predicts the immediate next value in the time series. I'd like to then build the data matrix $\textbf{X}$ and the ground truth output vector $\textbf {y}$, and solve the system $\textbf{X} \textbf{w} = \textbf{y}$, in which $\textbf w$ are my model parameters as a vector. Here are my questions: - Is it OK to have a sample (x,y pair) whose input ($x$-part) overlaps with the label ($y$-part) of another one? For example, let's say $m=5$ and assume one of my training samples (i.e., one of $\textbf X$'s rows) is $x_1 = Z[0:5]$ with the corresponding label $y_1 = Z[6]$. Wouldn't it be a look-ahead bias in my model if I have another sample like $x_2 = Z[3:8]$ with $y_2 = Z[9]$, whose input includes the true label of the first sample? - Is it considered the look-ahead bias? - How can I avoid this to ensure my model is not informed by the labels at all? As pointed out in the comments, this is a textbook classical problem and is likely discussed in many references. I appreciate it if you also share these texts with me and future readers.
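To make the overlap concrete, here is a minimal sketch of how I build $\textbf{X}$ and $\textbf{y}$ (plain Python; the sliding-window layout is the standard one, and the toy series is made up):

```python
def make_xy(z, m):
    """Build rows x_i = z[i:i+m] with labels y_i = z[i+m] (predict the next value)."""
    X = [z[i:i + m] for i in range(len(z) - m)]
    y = [z[i + m] for i in range(len(z) - m)]
    return X, y

Z = list(range(10))
X, y = make_xy(Z, m=5)
print(X[0], y[0])   # [0, 1, 2, 3, 4] 5
print(X[3], y[3])   # [3, 4, 5, 6, 7] 8  -- this input overlaps earlier labels
```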
Do I have look-ahead bias?
CC BY-SA 4.0
null
2023-03-28T05:13:31.973
2023-03-28T07:43:12.037
2023-03-28T07:38:46.563
226624
226624
[ "bias", "autoregressive" ]
610961
1
null
null
5
91
Let's suppose that data is collected for clinics across the state. The clinics are located in different counties, but also some of the clinics are owned by large healthcare systems that are located in different counties. This data doesn't fit the typical design of system nested within a county. The data is collected for the specific clinic. The outcomes are collected for each clinic and the sociodemographic information is available for the county level. What is of interest is the association between the sociodemographic information on the county level and the outcomes collected at the clinic. I'm thinking that I could create a new random intercept for each clinic (ClinicID) so that each one can be nested within the county. In the table below there are 8 clusters, nested in 5 counties. But I have not accounted for any characteristics of the health care system. Is there another way I could also account for clustering of the health systems? Would I add another random effect? I am still figuring out the specification for the random variable in glmmTMB, but I think it would be (1 | CountyID) + (1 | ClinicID) because each ClinicID is unique. Also, where can I find information about population offsets when nesting random variables? I just want to be sure I'm using the right number-- I think it would be for the clinic, not the county. I'm an R user and relatively new to these multi-level regressions. My apologies for such a basic question and thank you in advance for any help! Edit: I think that I can just add a random intercept also for the SystemID, and then ClinicID is nested in the CountyID: (1 | SystemID) + (1 | CountyID/ClinicID). I read somewhere you can add these things for the pseudoeffect (for example, gender). I'm not totally sure how it relates to offsets. But the problem is that the demographic info is on the county level, so there's no variation within the county cluster if I do it that way. There are 15 health systems, 27 counties, 36 clinics.
Thousands of observations are available per clinic. Most counties (19) have just one health system. But 7 counties have two health systems and 1 county has three health systems. On the flip side, ten health systems are just in one county, and there are five health systems that are large, operating in 2,3,5,7, and 9 different counties, respectively. Example: |SystemID |CountyID |ClinicID | |--------|--------|--------| |A |1 |A1 | |A |2 |A2 | |A |3 |A3 | |A |4 |A4 | |B |1 |B1 | |B |2 |B2 | |C |5 |C5 | |C |4 |C4 | Side note: Unfortunately, I'm modeling zero-inflated data with glmmTMB and the [wrapper mentioned for multiple membership specification of random effects is only for lme4](https://stats.stackexchange.com/questions/487039/multiple-membership-vs-crossed-random-effects/487047#487047). But also I don't think this is multiple membership because according to the answer [here](https://stats.stackexchange.com/questions/487039/multiple-membership-vs-crossed-random-effects/487047#487047) "So to give a definition of multiple membership, I would say this occurs when the lowest level units "belong" to more than one upper-level unit." In my case, we just have more than one random variable, and each clinic can only belong to one level of each of the two random variables. New update: I realized that I need covariates for the lowest level or else there won't be variation within the county clusters. So I've been working on collecting that data.
Nested mixed effects model- am I missing an additional random effect?
CC BY-SA 4.0
null
2023-03-28T05:13:39.200
2023-03-30T19:39:36.527
2023-03-30T19:39:36.527
205125
205125
[ "multilevel-analysis", "multiple-membership" ]
610962
2
null
610957
1
null
How you combine different model forecasts is your own decision and has nothing to do with scoring rules, just like it wouldn't have anything to do with accuracy. You obtain multiple forecasts (whether they be probabilistic forecasts or "classifications"), you then combine them somehow (for example, a simple average), and then you use these with your scoring rule (or classification metric) of choice. So in a sense it's the same as using only one model's results; it is your job (and a separate task) to combine multiple forecasts.
null
CC BY-SA 4.0
null
2023-03-28T05:56:32.947
2023-03-28T05:56:32.947
null
null
143489
null
610963
1
null
null
0
8
I want a brief explanation of the difference between Linear Discriminant Analysis and Linear Regression. Isn't it the same process? I heard the difference is that LDA is used when it comes to multidimensional targets/categories. Is there any difference other than that?
Difference between LDA and LR?
CC BY-SA 4.0
null
2023-03-28T06:29:15.697
2023-03-28T06:29:15.697
null
null
384302
[ "regression", "pca", "linear-model", "linear", "java" ]
610964
2
null
519249
0
null
I think I found out how to run such a regression in practice. One can use statsmodels RLM without cluster robust errors to get the weights assigned to each observation and then plug these weights into statsmodels WLS, which does support cluster robust standard errors. Since clustering only affects the significance but not the coefficients, the weights from the RLM regression without clustering should be correct.
null
CC BY-SA 4.0
null
2023-03-28T06:35:49.940
2023-03-28T06:35:49.940
null
null
219299
null
610965
1
610995
null
7
277
I want to calculate the [Kullback–Leibler divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) [](https://i.stack.imgur.com/Z0Fpe.png) between two [multivariate $t$](https://en.wikipedia.org/wiki/Multivariate_t-distribution) distributions with different degrees of freedom (say $\nu_1$ and $\nu_2$), but same location and scale matrix, for arbitrary dimensions, [](https://i.stack.imgur.com/A2zJE.png) How can I deal with the multivariate integral in the KL divergence? Is there a closed-form solution?
Kullback–Leibler divergence between two multivariate t distributions with different degrees of freedom?
CC BY-SA 4.0
null
2023-03-28T07:19:19.007
2023-03-28T13:33:50.290
2023-03-28T07:25:51.667
384306
384306
[ "kullback-leibler", "t-distribution", "multivariate-distribution" ]
610966
2
null
610960
2
null
> Is it OK to have another sample whose input ($x$-part) overlaps with the label of the previous sample $y_1$ (for example $x_2 = Z[3:8]$ with $y_2 = Z[9]$)? This is what usually happens when you fit a time-series model that predicts the present given the past. > Is it considered the look-ahead bias? Not really. It would be if you had a model that used $Z[4:9]$ to predict $Z[3]$; in such a case you could not use the model to make a forecast, because to make the prediction you would need to know the future. But here you predict the future from the past, so you don't look ahead. Another example of this kind of bias would be if you had as a feature something like "weekly average", so that a prediction made on Monday would use the already-known weekly average. That could give overly optimistic training-time metrics, while such a model would not be usable for forecasting because you would not know the weekly average for the future. The same applies to all other features that would be calculated using data "from the future", but again, it does not apply to using only historical data to make a prediction. > How can I avoid this to ensure my model is not informed by the labels at all? Say that you have ten years of data, from 2012 to 2022. If you trained on 2012-2020 to predict 2021-2022, but then used the model to make a forecast for 2023, you would not have trained on the most up-to-date data, which is likely the most relevant for predicting the near future. Time-series models try to use all the data efficiently to avoid problems like this. If you want to train a model that does something like "predict tomorrow given today", you can't avoid this overlap, but there is also no reason why you would need to avoid it.
null
CC BY-SA 4.0
null
2023-03-28T07:43:12.037
2023-03-28T07:43:12.037
null
null
35989
null
610967
2
null
610709
4
null
Consolidation of my comments: - You stop the first time you have one more Heads than Tails, so you are correct that the probability you stop after an even number of flips is $0$. - So let's consider the probability of stopping after $2k+1$ flips. After $2k$ flips you want the same number of Heads and Tails, without ever having had more Heads than Tails, and then finally have an extra Heads. The number of ways of doing this is the number of Dyck words, the Catalan number $\frac{1}{k+1} \binom{2k}k$. - If the probability of Heads is $p$ and Tails is $1-p$, since you need $k+1$ Heads and $k$ Tails, the probability of this is $$\mathbb P(X=2k+1)= \frac1{k+1}{2k \choose k}p^{k+1}(1-p)^k$$ - If $p \ge \frac12$ then the probability of ever stopping is $\sum\limits_{k=0}^\infty\frac1{k+1}{2k \choose k}p^{k+1}(1-p)^k = 1$, so the expected number of flips needed to finish is $$\mathbb E[X]=\sum\limits_{k=0}^\infty\frac{2k+1}{k+1}{2k \choose k}p^{k+1}(1-p)^k = \frac{1}{2p-1}$$ - With $p=\frac35$ this gives $\mathbb E[X]=5$, as suggested by the simulations. - With $p=\frac12$ this gives $\mathbb E[X]=+\infty$, so you almost surely finish in finite time but with an infinite expected time. - With $p<\frac12$, the probability of ever stopping is $\frac p{1-p} <1$ and the probability of never stopping is $\frac{1-2p}{1-p}>0$, so there is no expectation. Curiously, if you condition on stopping in finite time, the conditional expectation is $\mathbb E[X \mid X < \infty]=\frac{1}{|2p-1|}$, similar to the result when $p > \frac12$. For example with $p=\frac25$, the probability of ever stopping is $\frac23$ and the expectation of the number of flips conditioned on stopping is again $5$.
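A quick simulation (plain Python; the seed and replication count are arbitrary) consistent with $\mathbb E[X]=5$ for $p=\frac35$:

```python
import random

def flips_until_one_more_head(p, rng):
    """Number of flips until Heads leads Tails by exactly one."""
    lead, n = 0, 0
    while lead < 1:
        lead += 1 if rng.random() < p else -1
        n += 1
    return n

rng = random.Random(0)
reps = 100_000
mean = sum(flips_until_one_more_head(0.6, rng) for _ in range(reps)) / reps
print(mean)  # close to 1 / (2p - 1) = 5
```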
null
CC BY-SA 4.0
null
2023-03-28T07:54:30.003
2023-03-28T07:54:30.003
null
null
2958
null
610968
1
null
null
1
16
I am not that much experienced in R and I want to run a CFA. The code is as follows: ``` higher <- ' # First Order co =~ co1 + co2 + co4 + co6 + co7 + co9 + co10 + co11 ia =~ ia1 + ia2 + ia3 + ia4 + ia5 + ia7 dc =~ dc1 + dc2 + dc3 + dc4 + dc5 + dc8 + dc9 capm =~ capm1 + capm2 + capm3 + capm4 + capm5 # Second Order factor2 =~ co + ia + dc # Higher Order factorh =~ factor2 + capm ' #calculation fithigher <- cfa(higher, data=semdata, estimator="ml", bounds = T) summary(fithigher, standardized=T, fit.measure=T, rsquare = TRUE) semPaths(fithigher, "std", intercepts=FALSE, residuals = F) ``` Unfortunately, after fithigher <- cfa(...) the following warning is produced: ``` Warning message: In lav_model_vcov(lavmodel = lavmodel, lavsamplestats = lavsamplestats, : lavaan WARNING: Could not compute standard errors! The information matrix could not be inverted. This may be a symptom that the model is not identified. ``` Can anybody help me to solve this issue? Thank you very much in advance. The results of summary() are as follows: [](https://i.stack.imgur.com/kJjDe.png) [](https://i.stack.imgur.com/BBnUH.png) [](https://i.stack.imgur.com/odHkY.png)
CFA with lavaan: Standard errors not computable
CC BY-SA 4.0
null
2023-03-28T07:58:44.217
2023-03-28T08:54:22.580
2023-03-28T08:54:22.580
384307
384307
[ "r", "standard-error", "confirmatory-factor", "lavaan", "identifiability" ]
610969
1
null
null
0
18
I'm currently running some research about factors explaining pro-environmental behaviours (PEB) at work. To do so, I used a questionnaire (which you can find in this [article](https://www.sciencedirect.com/science/article/pii/S0959652614007914?via%3Dihub)) which I translated into French. To sum it up, the factors tested are attitudes towards PEB, information needs, personal values, sensitivity to the environment, the intention to adopt a PEB, social and subjective norms at work, perceived behavioral control, situational factors, and support from management. Some actual behaviors are also measured. The scales used are mainly Likert scales going from 1 ("strongly disagree") to 5 ("totally agree"). I don't have that many answers for the moment (only 4...), but I wanted to try and start making some statistics out of it, to anticipate... and I'm struggling a little bit. So I made an Excel sheet to compute the values I already got. And to anticipate, I also computed the means of my factors and behaviours at work, in case I needed them. With all that, I thought about running a linear regression in order to know the weight of these factors in the adoption of PEB. I use either JASP or jamovi... and I really don't know which values I have to enter for the factors. Does it have to be ALL the questions, or only the questions about one factor, or the means of the factors...? Thanks in advance for your answers, it would be a pleasure to exchange with all of you.
Statistics out of a survey
CC BY-SA 4.0
null
2023-03-28T08:16:17.673
2023-03-29T07:33:57.560
2023-03-29T07:33:57.560
384308
384308
[ "regression", "survey", "jasp" ]
610970
2
null
610755
0
null
I am not aware of a software implementation of the DCC model with an external regressor in the DCC equation. A simpler solution might be to obtain the fitted conditional correlations and covariances from a vanilla DCC model and then regress them on your external variable of interest. This is not nearly as elegant a solution, but much easier to implement.
null
CC BY-SA 4.0
null
2023-03-28T08:22:21.937
2023-03-28T08:22:21.937
null
null
53690
null
610971
1
null
null
1
33
A simulation output for each microscopic travel demand model and two further macroscopic MDCEV models - transformed versions of the microscopic model - are given for a base scenario. Moreover, three additional outputs are given for a specific forecast scenario with varied model inputs (e.g., reducing public transportation travel times, expanding carsharing and bikesharing services). Every output is considered on an aggregated level, namely on modal splits (share of trips with bike, car, as a pedestrian, etc.). Now, we want to analyze and compare the predictive strength of the MDCEV models with regard to the microscopic model, which serves as a reference. Therefore, we want to compare the modal shares for the forecast scenario as well as the modal shifts (differences between each share of the base scenario and the forecast scenario with varied inputs). Which statistical model serves best?
Comparing Sensitivities of Microscopic and Macroscopic Models (Travel Demand)
CC BY-SA 4.0
null
2023-03-28T08:27:32.747
2023-03-29T22:51:52.517
2023-03-29T22:51:52.517
11887
321991
[ "model-comparison", "sensitivity-analysis", "method-comparison" ]
610972
1
null
null
0
37
A model of the form $y=a\cdot x^b$ can be linearly fitted by taking logs on both sides - giving $\ln(y)=\ln(a)+b\cdot\ln(x)$, where $\ln(y)$ is regressed against $\ln(x)$. This is a standard textbook approach. What if an additive linear term were included - e.g. $y=a\cdot x_1^b + c\cdot x_2$ - how would you fit this for $a$, $b$, and $c$? Taking logs doesn't seem to do the trick; what alternative methods are there?
Fitting a power law model with an additional linear term
CC BY-SA 4.0
null
2023-03-28T08:28:24.503
2023-03-28T08:53:35.613
null
null
351756
[ "regression", "python", "fitting", "curve-fitting", "power-law" ]
610973
1
null
null
0
16
Is `theta_j` just the jth element of `theta_head_k` in 3.23? [](https://i.stack.imgur.com/onIPR.png) After the kth iteration of the whole algorithm (algorithm 5), I get the new kth estimate of my parameter vector `theta`. All other things that depend on this estimate are written as `...(theta_head_k)`, but in the shooting algorithm (3.23) there are `x(theta_head_k)` and `theta_j`. Is `theta_j` just the jth element of `theta_head_k`? That would be somewhat inconsistent with the notation in the rest of the paper. Or am I missing something here? The algorithm is shown here: [](https://i.stack.imgur.com/uxymJ.png) and it starts with defining `theta_head_k`
What variables do I use for this shooting algorithm?
CC BY-SA 4.0
null
2023-03-28T08:37:18.540
2023-03-28T08:55:49.373
2023-03-28T08:55:49.373
362671
384310
[ "optimization", "lasso", "coordinate-descent" ]
610974
2
null
610798
0
null
Your first step should always be to plot. Here is a plot of when your occurrences happened, a histogram of the waiting times, and a time series plot of the waiting times: [](https://i.stack.imgur.com/qJTvD.png) The first thing that jumps out at us is that the last waiting time was enormously long, corresponding to the single spike at the far right of the histogram - this last waiting time was 660 seconds, all the others no more than 137 seconds. The second observation (see the right-hand panel) is that waiting times apparently were already increasing before the very last occurrence. The variance of the waiting times was also increasing. It seems to me like the first thing you should do is to investigate just what caused these two effects. Understanding this should inform your subsequent analysis and prediction. Did something change over time, and if so, is the change persistent, or will matters revert to the previous state, or something else? You could then try modeling the waiting times with an appropriate distribution. The exponential distribution is commonly used for that, but since your data are discrete, you might want to model them as negative binomial (which describes the waiting times in a [Bernoulli process](https://en.wikipedia.org/wiki/Bernoulli_process)). You can include predictors using negative binomial regression, see the textbook Negative Binomial Regression by Hilbe. Alternatively, you could use standard time series forecasting techniques to forecast the likely next waiting time. For instance, you could fit an Exponential Smoothing model to the sequence of waiting times and forecast that out, which will give you a forecast of 188 seconds for the expected next waiting time, along with prediction intervals. 
You can even simulate from this series to get a probability density for when the next occurrence will happen: [](https://i.stack.imgur.com/oeXUP.png) However, do note that understanding your data is definitely more important than finding the most sophisticated model! R code: ``` occurrences <- c(25,26,34,56,72,76,76,86,88,88,106,148,151,151,161,195,200, 214,215,215,220,231,243,245,247,257,263,265,288,295,314,339,342,342,353, 368,407,413,436,447,469,470,472,505,513,557,566,598,609,623,663, 676,683,687,776,789,850,875,921,1058,1078,1167,1255,1292,1952) waiting_times <- diff(occurrences) par(mfrow=c(1,3),las=1) plot(occurrences,rep(1,length(occurrences)),type="h",lwd=2, yaxt="n",ylab="",xlab="Second",main="Occurrences",ylim=c(0,1.3)) hist(waiting_times,breaks=seq(-0.5,max(waiting_times)+0.5)) plot(waiting_times,type="l") library(forecast) model <- ets(waiting_times) forecast(model,h=1) set.seed(1) sims <- replicate(10000,simulate(model,nsim=1,bootstrap=TRUE)) table_sims <- hist(sims,breaks=seq(floor(min(sims)),ceiling(max(sims))),plot=FALSE) plot(occurrences,rep(1,length(occurrences)),type="h",lwd=2, yaxt="n",ylab="",xlab="Second",main="Occurrences with forecast", ylim=c(0,1.3),xlim=c(0,max(occurrences)+max(sims))) points(max(occurrences)+table_sims$mids,table_sims$density, type="h",col="grey",lwd=3) ```
null
CC BY-SA 4.0
null
2023-03-28T08:37:59.163
2023-03-28T08:37:59.163
null
null
1352
null
610975
1
null
null
0
27
I have a collection of text documents. I would like to find the smallest set of words such that searching by those words allows discovering each document. It is quite natural to describe this data as a matrix, where columns are unique words and rows give the number of times a word occurs in a given document. What would be an efficient way of discovering the smallest set? If I'm not mistaken, finding an exact solution requires testing all possible combinations, which is obviously infeasible even with a small number of unique words. So how could I arrive at "good" solutions (a set of words that describes most of the variation)? Would something like PCA make sense here?
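To make the problem concrete, here is a toy sketch in plain Python (the three-document corpus and its words are made up purely for illustration) of what I mean by "discovering each document": every document should contain at least one of the chosen words. The brute-force search below is exactly the combinatorial blow-up I am worried about for real data:

```python
from itertools import combinations

# Tiny made-up corpus: document -> set of words it contains.
docs = {
    "d1": {"apple", "banana"},
    "d2": {"banana", "cherry"},
    "d3": {"cherry", "date"},
}
vocab = set().union(*docs.values())

def covers(words):
    """True if every document contains at least one of the given words."""
    return all(doc & set(words) for doc in docs.values())

# Exact answer by brute force over ever-larger candidate word sets.
for size in range(1, len(vocab) + 1):
    solutions = [c for c in combinations(sorted(vocab), size) if covers(c)]
    if solutions:
        print(size, solutions)
        break
```

For this toy corpus the smallest covering set has two words (e.g. `('banana', 'cherry')`), but the number of candidate sets grows combinatorially with vocabulary size, which is what makes the exact search hopeless for a real corpus.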
Smallest set of words contained in a number of documents
CC BY-SA 4.0
null
2023-03-28T08:40:32.857
2023-03-28T14:51:25.030
null
null
75022
[ "pca", "dataset", "natural-language" ]
610976
2
null
610972
1
null
This is a job for something like nonlinear least squares, e.g., `nls` in `R`. Here is a snippet. ``` nl.eq <- function(x1, x2, a, b, d) { (a*x1^b+d*x2) } n <- 10000 x1 <- runif(n) x2 <- rnorm(n) a <- 2 b <- 3 d <- 4 y <- a*x1^b + d*x2 + rnorm(n) nlsreg <- nls(y ~ nl.eq(x1, x2, a, b, d), start = list(a = 1, b = 1, d = 1)) summary(nlsreg) ```
null
CC BY-SA 4.0
null
2023-03-28T08:53:35.613
2023-03-28T08:53:35.613
null
null
67799
null
610977
1
null
null
2
31
We aim to draw maximum information from CEP-logistics experts to set up a utility function for delivery-chain choices for each parcel. The choices / dependent variables are specific transport chains, e.g., delivery only with a light commercial vehicle (LCV), LCV with a transfer to cargo bikes, tram with a transfer to cargo bikes - in total, six options. The independent variables are parcel size, the distance of the recipient, travel times, emission values, etc. Now, we want to estimate the parameters. What is statistically the best method if we can only get 5-10 responses from experts? Something like the "Analytic Hierarchy Process", maybe? Thank you for your responses.
Draw Maximum Information from Experts to set up Utility Function
CC BY-SA 4.0
null
2023-03-28T09:07:56.167
2023-03-28T09:07:56.167
null
null
321991
[ "experiment-design", "interview" ]
610978
1
611002
null
1
43
The locally unbiased (l.u.) estimator $\hat{\theta}\left( x \right)$, where $x$ denotes the experiment result, is the estimator that satisfies (see Eq. (5) of [this paper](https://arxiv.org/abs/2001.11742) for the multiparameter case) $$\sum_x{p\left( x|\theta =\varphi \right) \hat{\theta}\left( x \right)}=\varphi ,\quad \sum_x{\partial _{\theta}p\left( x|\theta \right) |_{\theta =\varphi}\hat{\theta}\left( x \right)}=1$$ where I have used $\varphi$ to stand for the true value. This is a weaker version of global unbiasedness, i.e., unbiasedness at every possible true value of $\theta$. The motivation for the l.u. estimator is that there might be cases in which a globally unbiased estimator does not exist. So I want to know: is there a specific example where we cannot find a globally unbiased estimator but we can find a l.u. estimator?
Example when globally unbiased estimator does not exist while locally unbiased estimator exists?
CC BY-SA 4.0
null
2023-03-28T09:28:25.597
2023-03-28T13:55:22.793
null
null
336322
[ "estimation", "estimators", "unbiased-estimator" ]
610979
1
null
null
1
20
What is the sample size calculation for a discrete outcome (count variable)? For example, I want to design a trial with a placebo and a treatment. The outcome is the number of days of treatment: 14 days for placebo and 10 days for treatment (a reduction of 4 days of treatment). Which formula should I use to calculate the sample size for this trial? I usually use the book "Sample Sizes for Clinical, Laboratory and Epidemiology Studies" (Machin et al., 2018) for sample size calculations, but I do not find any example for this type of data.
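For what it's worth, if the count outcome could be treated as Poisson, the usual normal-approximation reasoning (the variance of the difference of two sample means is $(\lambda_1+\lambda_2)/n$ per group) would give the sketch below — but I am not sure a Poisson model is appropriate for days of treatment, which is why I am asking:

```python
import math
from statistics import NormalDist

def n_per_group_poisson(lam1, lam2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for comparing two Poisson means:
    n = (z_{1-alpha/2} + z_{1-beta})^2 * (lam1 + lam2) / (lam1 - lam2)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = NormalDist().inv_cdf(power)          # z for the desired power
    return math.ceil((z_a + z_b) ** 2 * (lam1 + lam2) / (lam1 - lam2) ** 2)

# 14 days (placebo) vs 10 days (treatment), alpha = 0.05, power = 80%
print(n_per_group_poisson(14, 10))  # 12 per group under these assumptions
```

With $\lambda_1 = 14$ and $\lambda_2 = 10$ this gives roughly 12 per group — but whether the Poisson variance assumption fits this outcome is exactly what I am unsure about.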
What is the sample size calculation for a discrete outcome (count variable)?
CC BY-SA 4.0
null
2023-03-28T09:41:16.257
2023-03-28T09:41:16.257
null
null
250007
[ "sample-size", "clinical-trials" ]
610980
1
610985
null
4
245
I am seeing some numbers which I can't understand... basically, I am performing a PERMANOVA test using the `adonis2` function from the [vegan](https://github.com/vegandevs/vegan) package, and what I see is a very significant p-value with a very low F-statistic, and I was wondering how this is possible. This is what I have: ``` Permutation test for adonis under reduced model Terms added sequentially (first to last) Permutation: free Number of permutations: 999 adonis2(formula = transposed_taxa ~ Treatment, data = adonis_meta, permutations = permuts, method = "bray", by = "terms") Df SumOfSqs R2 F Pr(>F) Treatment 1 0.3506 0.08074 1.581 0.002 ** Residual 18 3.9912 0.91926 Total 19 4.3418 1.00000 ``` I also tried an online [F distribution calculator](https://www.statology.org/f-distribution-calculator/) and, if I did the computations correctly (i.e., entered the proper numbers of degrees of freedom), the calculator gives me a non-significant p-value for the F-statistic I get from the test. I also tried the other way around, i.e., I provided the p-value and got a very high F-statistic. However, I may be completely wrong and/or this calculator may not apply to a PERMANOVA test. What am I missing here?
EDIT: this code provide the results i am seeing: ``` # load library library("vegan") # set seed set.seed(131) # build metadata meta <- structure(list(treatment = structure(c(1L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 2L), levels = c("treat", "ctrl"), class = "factor"), SampleID = c("smpl_001", "smpl_002", "smpl_003", "smpl_004", "smpl_005", "smpl_006", "smpl_007", "smpl_008", "smpl_009", "smpl_010", "smpl_011", "smpl_012", "smpl_013", "smpl_014", "smpl_015", "smpl_016", "smpl_017", "smpl_018", "smpl_019", "smpl_020")), row.names = c("smpl_001", "smpl_002", "smpl_003", "smpl_004", "smpl_005", "smpl_006", "smpl_007", "smpl_008", "smpl_009", "smpl_010", "smpl_011", "smpl_012", "smpl_013", "smpl_014", "smpl_015", "smpl_016", "smpl_017", "smpl_018", "smpl_019", "smpl_020"), class = "data.frame") # build distance matrix with bray curtis metric dist_mat <- structure(c(0.724637600797927, 0.810151211442847, 0.677040935047858, 0.684990809226583, 0.773841077947218, 0.730965541771736, 0.721636429281786, 0.670062302186762, 0.735219755374301, 0.726420817041101, 0.713890852756624, 0.672481163496018, 0.770155244262034, 0.736284810905502, 0.725613337053673, 0.729027437249588, 0.684174177335026, 0.731597842534973, 0.744984703810319, 0.624016277905062, 0.716705596289758, 0.642368044368606, 0.660041089097984, 0.683430554390529, 0.637694222022575, 0.696747540033531, 0.678073194220625, 0.606862380345106, 0.57055508520052, 0.551073859282405, 0.685299854417393, 0.576578503650073, 0.70888030875411, 0.753358012196228, 0.559701576529528, 0.628139583570118, 0.616139151118536, 0.673897628859827, 0.688365913838432, 0.600801291804287, 0.653870227670454, 0.713403074119784, 0.622562333387564, 0.628100195801016, 0.638223638807926, 0.621583052281211, 0.639684018803079, 0.658541917209453, 0.62143045183579, 0.711481887009044, 0.753101948589314, 0.652118731984756, 0.669405500109936, 0.631233748354645, 0.681137745516244, 0.75990843296923, 0.704545253550062, 
0.733485020719486, 0.70159943122252, 0.687665942056175, 0.71194111729798, 0.687556422198947, 0.672450715128387, 0.729058498800823, 0.777450338280718, 0.724047236746885, 0.693695657946689, 0.649024860632527, 0.743978052380142, 0.690306665100079, 0.66738824968164, 0.666456720997265, 0.736939480231912, 0.701398808286334, 0.666061916019718, 0.673314770321561, 0.678582532793769, 0.601722977395924, 0.694594001795866, 0.666715778630456, 0.733905602972688, 0.708928297345911, 0.660957906863241, 0.688380488856927, 0.66749857777161, 0.638562693219007, 0.709099622247953, 0.661582073250648, 0.633361258020319, 0.64395516014329, 0.649660296892919, 0.693266403675858, 0.619593981263679, 0.610426150945898, 0.680254103567928, 0.704351725342505, 0.655961485324122, 0.682236566670637, 0.60440942570617, 0.717559929520578, 0.678396363326736, 0.662793617648454, 0.657884773482801, 0.611508752657517, 0.695249293387878, 0.641413832130993, 0.631289777942896, 0.76873495257605, 0.7214801270069, 0.648605405437042, 0.638675714854417, 0.690476980816459, 0.624196684143274, 0.716828273978603, 0.620623296876388, 0.734305024372298, 0.696968360773382, 0.731531545670681, 0.590046390437714, 0.656457660565246, 0.767672873743218, 0.662583670754548, 0.741384141808377, 0.632153877463622, 0.686203048119573, 0.681849943901682, 0.679030715444911, 0.695081943961127, 0.670973436524669, 0.655707226050489, 0.592390953713221, 0.719296118412277, 0.681242617504174, 0.709011062976595, 0.67783231497305, 0.628507609160785, 0.698992117288368, 0.692043273990139, 0.635338402719872, 0.657103639492005, 0.716464174187625, 0.615559649585827, 0.695082938920572, 0.667734702508218, 0.616496859490065, 0.642779258886188, 0.653646025092285, 0.631693957514602, 0.611457903279869, 0.688517163734204, 0.737259708862597, 0.603583398496329, 0.656242818753164, 0.597835522574495, 0.594058163092095, 0.670135094778774, 0.621802926897363, 0.744151393706153, 0.723831207989047, 0.563513462547226, 0.628881872570669, 0.62794706034554, 
0.676684493053249, 0.622600780389654, 0.698874934198522, 0.7223868940598, 0.552933805235236, 0.664090363014779, 0.611138112893922, 0.596969123444625, 0.73702614785897, 0.66350703692813, 0.648073310354135, 0.611886898022537, 0.698582583703285, 0.683497738003825, 0.753238008973899, 0.637095874502176, 0.625844086528647, 0.629241237689765, 0.732814733043977, 0.641962575290711, 0.769763636147583, 0.711868946785738, 0.701186509994886, 0.689603853801978, 0.766601551308245, 0.634042239970803, 0.624152094184768, 0.701782607184278), maxdist = 1, Size = 20L, Labels = c("smpl_001", "smpl_002", "smpl_003", "smpl_004", "smpl_005", "smpl_006", "smpl_007", "smpl_008", "smpl_009", "smpl_010", "smpl_011", "smpl_012", "smpl_013", "smpl_014", "smpl_015", "smpl_016", "smpl_017", "smpl_018", "smpl_019", "smpl_020"), Diag = FALSE, Upper = FALSE, method = "bray", class = "dist") # run test print(adonis2(dist_mat~treatment, data=meta, permutations=999, by="terms")) ```
Why am I seeing a low F-statistic, but a very significant p-value in permanova?
CC BY-SA 4.0
null
2023-03-28T09:58:35.323
2023-03-29T08:07:57.877
2023-03-29T08:07:57.877
114511
114511
[ "statistical-significance", "p-value", "permutation-test", "f-statistic", "vegan" ]
610981
1
null
null
0
111
Assume that we have a high-dimensional data with a few samples. We want to select a minimum set of best features from this dataset using LightGBM feature importance. This is because of an external restriction that we need to limit the number of features that are used in the final model. We want to select features using LightGBM feature importance vectors. I see [this question](https://stats.stackexchange.com/questions/454633/boruta-feature-selection-method) about applying Boruta before LASSO for feature selection. In the comments, someone referred to [this question](https://stats.stackexchange.com/questions/164048/can-a-random-forest-be-used-for-feature-selection-in-multiple-linear-regression/164068#164068) that shows that features that are important for a non-linear model (such as the random forest applied in Boruta) may not be important for a linear model like LASSO. What about if we use LightGBM as the second step of feature selection after running Boruta? Can this have any benefits compared to just running LGBM without running Boruta?
Boruta followed by LightGBM for feature selection
CC BY-SA 4.0
null
2023-03-28T10:17:43.063
2023-03-28T10:55:18.270
2023-03-28T10:37:00.487
384317
384317
[ "machine-learning", "feature-selection", "lightgbm", "boruta" ]
610982
1
null
null
0
67
I'm estimating a Panel ARDL model. Generally, the cumulative dynamic multiplier is estimated as follows (eq. (1)): \begin{equation} \frac{\sum_{i=0}^{m}\beta_{i}}{1-\sum_{j=1}^{n}\phi_{j}} \end{equation} where $\beta_{i}$ are the coefficients of the independent variable with $m$ lags and $\phi_{j}$ are the autoregressive coefficients with $n$ lags. In my model, the dependent variable consists of growth rates, and addition/cumulation makes no sense here. If, for example, $y_{t} = 0.1$ and on the following day $y_{t+1} = -0.1$, then with addition (as applied in the usual cumulative multiplier calculation) the cumulative effect would be zero. In the case of growth rates, we must instead calculate $(1+0.1)(1-0.1)$, whereby the initial value is not reached again after two periods. The accumulation or addition of growth rates, as used in the cumulative dynamic multiplier of the above equation, is not appropriate because growth rates must be combined multiplicatively over time (eq. (2)): \begin{equation} y_{t+p}=y_{t}(1+g)(1+g_{1})(1+g_{2})...(1+g_{p}) \end{equation} Are the following thoughts of mine correct?
By considering the autoregressive effects, for Growth Rates the Long-Run Effect with 10 Lags from both exogeneous and endogeneous variable should be estimated with \begin{equation} = t_{0}*t_{1}*t_{2}*t_{3}*t_{4}*t_{5}*t_{6}*t_{7}*t_{8}*t_{9}*t_{10} = \prod_{p=0}^{i}t_{p} \end{equation} whereby $t_{p}$ is estimated with \begin{equation} t_{0} = (1+\beta_{t})\\ t_{1} = (1+\beta_{t-1})*((t_{0}-1)\phi_{1}+1)\\ t_{2} = (1+\beta_{t-2})*((t_{1}-1)\phi_{1}+1)*((t_{0}-1)\phi_{2}+1)\\ t_{3} = (1+\beta_{t-3})*((t_{2}-1)\phi_{1}+1)*((t_{1}-1)\phi_{2}+1)*((t_{0}-1)\phi_{3}+1)\\ t_{4} = (1+\beta_{t-4})*((t_{3}-1)\phi_{1}+1)*((t_{2}-1)\phi_{2}+1)*((t_{1}-1)\phi_{3}+1)*((t_{0}-1)\phi_{4}+1)\\ t_{5} = (1+\beta_{t-5})*\prod_{n=0}^{4}((t_{n}-1)\phi_{|n-4|}+1)\\ t_{6} = (1+\beta_{t-6})*\prod_{n=0}^{5}((t_{n}-1)\phi_{|n-5|}+1)\\ t_{7} = (1+\beta_{t-7})*\prod_{n=0}^{6}((t_{n}-1)\phi_{|n-6|}+1)\\ t_{8} = (1+\beta_{t-8})*\prod_{n=0}^{7}((t_{n}-1)\phi_{|n-7|}+1)\\ t_{9} = (1+\beta_{t-9})*\prod_{n=0}^{8}((t_{n}-1)\phi_{|n-8|}+1)\\ t_{10} = (1+\beta_{t-10})*\prod_{n=0}^{9}((t_{n}-1)\phi_{|n-9|}+1)\\ \end{equation} which is the same as the recursive equation: \begin{equation} t_{p} = (1+\beta_{p})*\prod_{q=0}^{p-1}((t_{q}-1)\phi_{|q-p|}+1) \end{equation} where $\beta_{p}$ are the estimated coefficients of the exogeneous variable with $p$ lags. $\phi_{q}$ are the estimated AR coefficients of the endogeneous variable with $q$ lags. Unfortunately, I haven't found anything in the literature on this, since everyone actually uses the ARDL ECM model for growth rates. Now I am wondering how best to calculate the long-run effects: - common cum. dyn. multiplier eq. (1) -> problem: simple addition at growth rates actually not correct. - equation (2) -> easy to understand, but AR part is not considered. - screenshot equation (3) -> correct? do you see any problems with this?
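To make the recursion above concrete, here is a small numerical sketch in plain Python (the coefficient values are made up purely for illustration, not estimates from any actual model):

```python
def growth_multipliers(beta, phi):
    """Recursive multipliers t_p = (1 + beta_p) * prod_{q<p} ((t_q - 1) * phi_{p-q} + 1).

    beta[p]: coefficient of the exogenous variable at lag p.
    phi[q]:  AR coefficient at lag q (index 0 is unused; AR lags start at 1).
    Lags beyond the AR order contribute a factor of 1 (phi = 0) and are skipped."""
    t = []
    for p in range(len(beta)):
        val = 1 + beta[p]
        for q in range(p):
            lag = p - q  # equals |q - p| since q < p
            if lag < len(phi):
                val *= (t[q] - 1) * phi[lag] + 1
        t.append(val)
    return t

beta = [0.02, 0.01, 0.005]  # hypothetical beta_0, beta_1, beta_2
phi = [None, 0.3, 0.1]      # hypothetical AR(2): phi_1 = 0.3, phi_2 = 0.1

t = growth_multipliers(beta, phi)
long_run = 1.0
for v in t:
    long_run *= v  # multiplicative long-run effect, as in the product above
print(t, long_run)
```

This reproduces $t_{0} = 1+\beta_{0}$, $t_{1} = (1+\beta_{1})((t_{0}-1)\phi_{1}+1)$, and so on, and then multiplies the $t_{p}$ together for the long-run effect.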
How to estimate cumulative dynamic multiplier in ARDL model with growth rates
CC BY-SA 4.0
null
2023-03-28T10:22:23.880
2023-03-31T18:01:13.263
2023-03-31T18:01:13.263
384316
384316
[ "time-series", "econometrics", "autoregressive", "ardl" ]
610983
1
null
null
0
19
I recently learned that in statistics, the Wald confidence interval (CI) creates a CI by approximating the binomial distribution as a Gaussian. The intuition behind this approximation seems reasonable because, when the sample size is large, the binomial distribution can be well approximated by the normal distribution. Now, what I want to do is state a formal guarantee about the confidence interval of the binomial parameter when using the normal approximation. To this end, I want to ask if the following statement is true: > Let $\mathbb{P}_{q, m}$ be the probability distribution of a binomial distribution with mean $q$ and $m$ samples. Let $C_{\alpha, m}$ be a Wald $(1-\alpha)$ confidence interval, that is, $$ C_{\alpha, m} = \left[ \hat{q} - Q_{\alpha/2}\sqrt{\frac{\hat{q}(1 - \hat{q})}{m}}, \hat{q} + Q_{\alpha/2}\sqrt{\frac{\hat{q}(1 - \hat{q})}{m}} \right] $$ where $\hat{q}$ is an estimate of $q$ using $m$ samples. Note that this $C_{\alpha, m}$ is derived by normal approximation. Is it then guaranteed that, for any $q \in (0, 1)$, $$ \lim_{m \to \infty} \mathbb{P}_{q, m}(q \in C_{\alpha, m}) \leq 1 -\alpha $$
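As a sanity check on my own intuition, here is a quick Monte Carlo sketch in plain Python of the empirical coverage of the Wald interval; the trial counts and the fixed $z$-value for $\alpha = 0.05$ are my own choices for illustration:

```python
import math
import random

def wald_coverage(q, m, n_trials=5000, seed=42):
    """Fraction of simulated Binomial(m, q) samples whose Wald 95% CI covers q."""
    rng = random.Random(seed)
    z = 1.959963984540054  # Q_{alpha/2} for alpha = 0.05
    hits = 0
    for _ in range(n_trials):
        successes = sum(rng.random() < q for _ in range(m))
        qhat = successes / m
        half = z * math.sqrt(qhat * (1 - qhat) / m)
        if qhat - half <= q <= qhat + half:
            hits += 1
    return hits / n_trials

# Coverage tends toward 1 - alpha = 0.95 as m grows (for fixed q in (0, 1)),
# though for small m the Wald interval is known to undercover.
for m in (10, 100, 1000):
    print(m, wald_coverage(0.3, m))
```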
Formal guarantee for confidence interval of binomial parameter using normal approximation
CC BY-SA 4.0
null
2023-03-28T10:32:46.630
2023-03-28T10:41:58.207
2023-03-28T10:41:58.207
310702
310702
[ "statistical-significance", "confidence-interval", "binomial-distribution", "normality-assumption", "central-limit-theorem" ]
610985
2
null
610980
6
null
There is a reason why these tests are based on permutations instead of the nominal distribution of the F-statistic: the nominal F-distribution is not correct in this setting, so we must find the empirical distribution under the null model from permutations. You can have a look at the permutation distribution of F using the function `permustats` to extract the permuted (randomized) values, and `densityplot` and other support functions to display them, e.g. `densityplot(permustats(<yourmodel>))`.
null
CC BY-SA 4.0
null
2023-03-28T10:37:11.927
2023-03-28T10:37:11.927
null
null
340028
null
610986
2
null
610775
1
null
With a single random uniform, $u_i$, from each participant, select $$\arg\max_i\bigg(\frac{\log(u_i)}{w_i}\bigg)$$ Or equivalently, $$\arg\max_i\Big(u_i^{1/w_i}\Big)$$ Testing with a quick simulation in R: ``` n <- 6L w <- 1:n tabulate(max.col(t(matrix(log(runif(1e6L*n)), n)/w)), n)/1e6 #> [1] 0.047665 0.094986 0.142365 0.189983 0.238822 0.286179 tabulate(max.col(t(matrix(runif(1e6L*n), n)^(1/w))), n)/1e6 #> [1] 0.047389 0.095431 0.142469 0.191077 0.238128 0.285506 w/sum(w) #> [1] 0.04761905 0.09523810 0.14285714 0.19047619 0.23809524 0.28571429 ``` (see [https://en.wikipedia.org/wiki/Weibull_distribution#Reparametrization_tricks](https://en.wikipedia.org/wiki/Weibull_distribution#Reparametrization_tricks))
null
CC BY-SA 4.0
null
2023-03-28T10:43:42.377
2023-03-28T13:00:30.350
2023-03-28T13:00:30.350
214015
214015
null
610987
2
null
610981
0
null
- Feature selection algorithms like Boruta don't guarantee that you pick "universally the best" features. Each algorithm picks some definition of what it means by importance and uses some procedure for finding important features; depending on the choice of algorithm, or its parameters, different features may be picked. Put differently, the features picked by Boruta (or another algorithm) are not necessarily the "best": a different algorithm could pick different features and lead to better performance, depending on the problem. - The same applies to feature importance. There is usually more than one way of calculating importances, and they can give you different results. - Extracting a subset of "best" features and using them for a new model will not necessarily lead to a model as good as the model using all the features. Different features may need different hyperparameters, etc. So both approaches can give you different results and neither is guaranteed to be the best. Probably the best you can do is try both and compare the results.
null
CC BY-SA 4.0
null
2023-03-28T10:55:18.270
2023-03-28T10:55:18.270
null
null
35989
null
610989
1
610990
null
2
37
I want to perform a Cox regression analysis for the dependent variable of overall survival in years. I want to use a categorical explanatory variable that has 3 levels. Let's say, for the sake of simplicity, it's 3 types of mutually exclusive radiological findings, like: - Good CT-Scan - Neutral CT-Scan - Bad CT-Scan If my understanding is correct, then for a given predictor the hazard in one group would be expected to be a constant proportion of the hazard in another group. So let's just say we pick Level 1 = Good CT-Scan to be the reference (the level held "constant"), and then I get hazard ratios for "Neutral" and "Bad" CT-Scans when I run the Cox regression analysis in R. My question is then: does it matter which level we choose to be the reference? How would I choose that level? R does this automatically, but I wonder if it makes any difference to switch. Any help is appreciated.
Cox Regression: Using categorical explanatory variables, does it matter which level is constant?
CC BY-SA 4.0
null
2023-03-28T11:54:02.983
2023-03-28T12:01:39.317
null
null
384315
[ "cox-model" ]
610990
2
null
610989
3
null
In some sense it makes no difference in a frequentist partial maximum likelihood analysis, i.e., any comparison between the categories you make afterwards will not change. On the other hand, it makes sense to parameterize things so that you directly get the comparisons you are most interested in. If you do a Bayesian version of the analysis, it might be easier to specify prior distributions (that reflect the assumptions you want to make) in some particular parameterization. Note also that specifying a reference category is just one of several parameterizations you could choose (you could, e.g., also use the average of the categories and the difference from category 1 to 2, as well as the difference from 2 to 3, or many other options). It just does not matter too much in a frequentist setting, for the reasons indicated above.
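A tiny numeric sketch of this invariance (the log hazard ratios below are made-up numbers, not estimates from any model): changing the reference level just shifts all coefficients by a constant, so any between-category contrast is unchanged.

```python
# Hypothetical log hazard ratios with "Good" as the reference level.
coefs_ref_good = {"Good": 0.0, "Neutral": 0.4, "Bad": 0.9}

# Reparameterize to "Neutral" as reference: subtract its coefficient everywhere.
shift = coefs_ref_good["Neutral"]
coefs_ref_neutral = {k: v - shift for k, v in coefs_ref_good.items()}

# Any between-category contrast is identical under either parameterization.
contrast_good = coefs_ref_good["Bad"] - coefs_ref_good["Good"]
contrast_neutral = coefs_ref_neutral["Bad"] - coefs_ref_neutral["Good"]
print(contrast_good, contrast_neutral)  # both 0.9
```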
null
CC BY-SA 4.0
null
2023-03-28T12:01:39.317
2023-03-28T12:01:39.317
null
null
86652
null
610991
1
null
null
0
50
If you Google Images "auto-correlation", you will find many formulae of this form (from [here](https://docs.oracle.com/cd/E57185_01/CBREG/ch06s03s03s03.html)): [](https://i.stack.imgur.com/UKQW6.png) I've implemented this, and with the sequence { 65, 66, 67, 68, 69, 70, 71, 72 }, $r_k = 0.625$ for a lag of $k=1$. That matches exactly with [this](https://www.easycalculation.com/statistics/autocorrelation.php) online calculator and [this](https://scicoding.com/4-ways-of-calculating-autocorrelation-in-python/) demo (if you use their example sequence instead). So my coding is perfect :-) Yet LibreOffice Calc tells me the correlation is 1.0. So does Python's Pandas. And the classic interpretation of Pearson is that two sequences increasing monotonically (by +1) should have a correlation of +1.0. So why the difference, please?
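For concreteness, here is a minimal sketch (plain Python, no libraries) of the two computations that produce the two different numbers. The first uses a single overall mean and variance for the whole series, while the second (Pearson on the lagged pairs) gives each of the two shifted series its own mean and standard deviation:

```python
# Contrast the two computations for x = 65..72 at lag k = 1.
x = [65, 66, 67, 68, 69, 70, 71, 72]
k = 1
n = len(x)
mean = sum(x) / n

# (a) "Statistical" autocorrelation: one overall mean and variance.
num = sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k))
den = sum((v - mean) ** 2 for v in x)
r_k = num / den  # 0.625

# (b) Pearson correlation of the series against its lagged copy,
# each sub-series with its own mean and standard deviation.
a, b = x[:-k], x[k:]
ma, mb = sum(a) / len(a), sum(b) / len(b)
cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
sa = sum((ai - ma) ** 2 for ai in a) ** 0.5
sb = sum((bi - mb) ** 2 for bi in b) ** 0.5
pearson = cov / (sa * sb)  # 1.0

print(r_k, pearson)
```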
Why does there appear to be different types of Pearson's autocorrelation?
CC BY-SA 4.0
null
2023-03-28T12:22:18.737
2023-03-29T04:13:32.650
2023-03-29T04:13:32.650
11887
74762
[ "correlation", "autocorrelation", "pearson-r" ]