Dataset fields: Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
608280
1
null
null
0
12
I want to measure the effect of reporting CO2 emissions on stock returns. To do this, I am thinking of identifying natural experiments/events to study this effect. My plan is to first identify stock pairs which exhibit similar returns over a period of time and do not report CO2 emissions. Then at some point in time, one of the two stocks starts to report CO2 emissions, whereas the other one does not. This way, I have a treated and a control stock. For this pair of stocks, I could now run a DID regression to measure the difference in returns after the treatment happened. One pair alone is not sufficient to draw any conclusion, of course, so I want to do the same with multiple pairs to see if I find a consistent effect and to measure its average magnitude. Now I am unsure how I would calculate the average effect across the DID regressions of the different pairs. Can I just take the average of the DID effects? How would I then calculate the standard error of this average effect? Is this even the right approach to begin with, or can you propose something else?
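For the narrow question of averaging pair-level estimates, here is a minimal Python sketch of two common aggregations: a simple mean with its standard error, and an inverse-variance (fixed-effect meta-analysis) weighted average. The numbers are made up for illustration, and a pooled regression with pair fixed effects and clustered standard errors would be the more standard alternative.

```python
import numpy as np

# Hypothetical per-pair DID estimates and their standard errors
# (these numbers are invented for illustration)
did_effects = np.array([0.8, 1.2, 0.5, 1.0, 0.9])
did_ses = np.array([0.4, 0.5, 0.3, 0.6, 0.4])

# Simple average with the usual standard error of the mean
avg_effect = did_effects.mean()
avg_se = did_effects.std(ddof=1) / np.sqrt(len(did_effects))

# Inverse-variance (fixed-effect meta-analysis) weighted average
w = 1.0 / did_ses**2
ivw_effect = np.sum(w * did_effects) / np.sum(w)
ivw_se = np.sqrt(1.0 / np.sum(w))
```

The inverse-variance version down-weights noisy pairs and always has a smaller standard error than the noisiest individual pair.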
Difference in differences regression with multiple control/treatment pairs and time periods
CC BY-SA 4.0
null
2023-03-03T13:39:31.930
2023-03-03T13:39:31.930
null
null
382302
[ "regression", "time-series", "difference-in-difference" ]
608281
1
null
null
0
18
Suppose I have panel data with 2 time periods (pre- and post-event). How would I test the parallel-trends assumption under a DDD framework?
How would you test for parallel trends in a panel data set of 2 time periods under DDD framework?
CC BY-SA 4.0
null
2023-03-03T13:43:06.060
2023-03-03T13:43:06.060
null
null
380206
[ "difference-in-difference", "trend", "parallel-analysis" ]
608282
2
null
608152
2
null
According to your comments, it seems like you want to model some variable that has a periodic time dependency. When you model this as a linear function, then subtracting the time still leaves a linear function. The model below $$y = a + bt + \epsilon - t$$ is just equivalent to the linear models $$y = a + b^\prime t + \epsilon \quad \text{with $b^\prime = b-1$} $$ or $$y^\prime = a + bt + \epsilon \quad \text{with $y^\prime = y+t$} $$

What you probably want to do instead is one of the following:

- Include some periodic variables, like Fourier terms.
- Alternatively, use a time variable modulo the period (e.g. the time of day). For example, time stamps like t = 1.7 days, t = 2.7 days, t = 3.7 days will all become t = 0.7.

The image below shows an example with a time variable on the x-axis that runs from t=0 to t=7. On the left we have the variable $y$ plotted as a function of $t$. On the right we have the variable $y$ plotted as a function of $t$ modulo $1$ (the colour coding of the points is kept the same).
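Both suggestions are easy to sketch with a toy signal of period 1 day (the data below are made up; the thread is language-agnostic, so the sketch is in Python):

```python
import numpy as np

t = np.linspace(0, 7, 500)           # time in days
y = np.sin(2 * np.pi * t) + 0.1 * t  # toy daily-periodic signal with a linear drift

# Option 1: fold time onto one period ("time of day")
t_mod = t % 1.0

# Option 2: Fourier terms as regressors for a period of 1 day
X = np.column_stack([
    np.ones_like(t), t,
    np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
```

Since the toy signal lies exactly in the span of the Fourier regressors plus the trend, the fit is exact here; with real data one would add more harmonics as needed.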
null
CC BY-SA 4.0
null
2023-03-03T13:53:54.643
2023-03-03T13:53:54.643
null
null
164061
null
608283
1
null
null
0
13
Let's say I am running a survey where I want respondents to answer a yes/no question. With probability `t` the respondents answer truthfully. If they answer truthfully, they answer "yes" with probability `p1`. If they decide to lie, they answer "yes" with probability `p2`. I would like to know the variance of the process describing whether a person answers "yes". My understanding is that if `T~bernoulli(t)`, `X~bernoulli(p1)`, `Y~bernoulli(p2)`, then I am looking for `Var(T*X + (1-T)*Y)`. Is that right? If yes, how can I express the variance in terms of `t`, `p1`, `p2`? Moreover, if I had a ratio of two such variables, what would its variance be? Note: This is not a self-study question, though I understand it might sound a bit textbookish.
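One useful observation: `T*X + (1-T)*Y` only takes the values 0 and 1, so it is itself Bernoulli with success probability `q = t*p1 + (1-t)*p2` and hence variance `q*(1-q)`. A quick Monte Carlo check in Python (parameter values are arbitrary):

```python
import numpy as np

t, p1, p2 = 0.7, 0.6, 0.2

# W = T*X + (1-T)*Y takes values in {0, 1}, so W is Bernoulli with
# q = P(W = 1) = t*p1 + (1 - t)*p2, hence Var(W) = q*(1 - q).
q = t * p1 + (1 - t) * p2
var_closed = q * (1 - q)

# Monte Carlo check
rng = np.random.default_rng(0)
n = 1_000_000
T = rng.random(n) < t
X = rng.random(n) < p1
Y = rng.random(n) < p2
W = np.where(T, X, Y)
var_mc = W.var()
```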
How to calculate a variance of "2-stage" bernoulli trials?
CC BY-SA 4.0
null
2023-03-03T14:03:34.030
2023-03-03T14:03:34.030
null
null
137200
[ "variance", "bernoulli-distribution" ]
608284
2
null
361150
0
null
More experiments and research indicate that a better method would be to do the statistical calculations (like standard deviation and mean) with the decibel values, then convert them to percentage values for comparison using the following formula

```
% = (10^(dBm/10))-1
```

for the dB/dBc/dBm (power) group. Use /20 for dBV/dBrc values. (And there are others for acoustics, etc.) This yields a value in the range of -100% to +$\infty$%.

- 0 dB = 1 = 0%
- 3 dB = 2 = 100%, -3 dB = 1/2 = -50%
- 6 dB = 4 = 300%, -6 dB = 1/4 = -75%
- 10 dB = 10 = 900%, -10 dB = 1/10 = -90%

The rationale for this is:

- The linear and decibel scales are almost the same from 1 to 10 (0 dB = 1, 10 dB = 10). The farther from the 0:1 'center point', the more the two diverge (30 dB = 1,000; -30 dB = 0.001).
- Decibel values are ratios, as are % values. dB is 10·log(P1/P2); dBm is 10·log(P1/1 mW). Converting a group of data values that are far removed from the 0:1 'center' but local to each other into percentage values allows them to be used with/compared to other groups as if they were all in the same -1 to +1 range.
- It also creates unitless values.
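The conversion is a one-liner; here is a sketch in Python. Note that 3 dB is actually a power factor of about 1.995, so the "100%" in the table is the usual engineering rounding.

```python
def db_power_to_percent(db):
    """Convert a power-ratio decibel value (dB/dBc/dBm group) to a percent change."""
    return (10 ** (db / 10) - 1) * 100

def db_voltage_to_percent(db):
    """Convert a voltage/amplitude decibel value (dBV/dBrc group) to a percent change."""
    return (10 ** (db / 20) - 1) * 100
```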
null
CC BY-SA 4.0
null
2023-03-03T14:34:01.483
2023-03-16T13:31:12.357
2023-03-16T13:31:12.357
374825
374825
null
608285
1
null
null
0
37
My goal is to create a visualization of the strength of McFadden's $R^2$ for a (multinomial) logistic regression, where McFadden's $R^2$ is $1-\dfrac{LL(M_1)}{LL(M_0)}$, involving the ratio of the log-likelihood of the fitted model ($M_1$) to that of an intercept-only model ($M_0$), analogous to $R^2$ in linear regression. If this were a linear regression, a good way to visualize the strength of the $R^2$ would be to plot the predicted and true values. For a multinomial logistic regression, this seems problematic. My idea is to simulate $2\times2$ data that will have a particular McFadden's $R^2$. Then I will put the counts in an array (a confusion matrix of sorts). Then I will plot a $2\times2$ grid with the four squares colored (probably in greyscale) according to their counts. If there are dramatically different colors, that would signal high predictive performance, while colors that are hard to distinguish would indicate more pedestrian performance. I can simulate $2\times2$ data quite easily.

```
set.seed(2023)
N <- 100
x <- rbinom(N, 1, 0.5)
z <- 2*x - 1
pr <- 1/(1 + exp(-z))
y <- rbinom(N, 1, pr)
```

However, I do not have any obvious way to control the relationship between `x` and `y` in terms of McFadden's $R^2$. What statistic governs this relationship? Can it come through some kind of copula between the marginal Bernoulli distributions? I don't yet know exactly what I want to do with my visualization, but I want to figure out how to make it so I can assess how useful it is.
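One useful fact: with a single binary predictor the logistic model is saturated, so McFadden's $R^2$ can be computed directly from the $2\times2$ cell counts, which gives a handle for tuning the simulated relationship. A Python sketch with made-up counts:

```python
import numpy as np

# Hypothetical 2x2 counts: rows are x = 0, 1; columns are y = 0, 1
counts = np.array([[40, 10],
                   [15, 35]], dtype=float)

n = counts.sum()
p_bar = counts[:, 1].sum() / n            # overall P(y = 1)
p_x = counts[:, 1] / counts.sum(axis=1)   # P(y = 1 | x), the saturated-model MLE

# Null (intercept-only) log-likelihood
ll0 = counts[:, 1].sum() * np.log(p_bar) + counts[:, 0].sum() * np.log(1 - p_bar)
# Fitted log-likelihood (saturated, since x is binary)
ll1 = np.sum(counts[:, 1] * np.log(p_x) + counts[:, 0] * np.log(1 - p_x))

mcfadden_r2 = 1 - ll1 / ll0
```

To hit a target $R^2$ one could then search numerically over the log-odds difference between the two $x$ groups.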
statistics linking McFadden's $R^2$ to the relationship between two binary variables, akin to correlation (Copula with Bernoulli margins?)
CC BY-SA 4.0
null
2023-03-03T14:42:28.943
2023-04-07T12:31:03.740
2023-04-07T12:31:03.740
247274
247274
[ "regression", "logistic", "data-visualization", "copula", "pseudo-r-squared" ]
608288
1
null
null
0
11
I want to compare the predictions of two theories A and B to some experimental data. The theories predict some quantity $y$ as a function of some other quantity $x$ by some analytical formula $y_A(x)$ and $y_B(x)$. Importantly, there is no fitting parameter. I have a series of experimental data $\{X_i,Y_i\}_{i=1,\ldots,N}$ with uncertainty $\{\Delta X_i,\Delta Y_i\}_{i=1,\ldots,N}$. If there was only uncertainty on $Y$, I would (naively) compute a Mean Square Deviation $MSD$ for each theory as $$ MSD_A = \sum_i\left(\frac{Y_i-y_A(X_i)}{\Delta Y_i}\right)^2. $$ and similarly for theory B. First of all, I am not sure this would be the proper thing to do in this case. Second, what is the proper way in presence of additional uncertainty on $X$? (If it simplifies the discussion, one can use the fact that in the present case, I typically have that $\Delta X/X\ll \Delta Y/Y$.)
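One standard recipe for folding the $x$-uncertainty into such a deviation measure is the "effective variance" method: replace $\Delta Y_i^2$ by $\Delta Y_i^2 + \left(y^\prime(X_i)\,\Delta X_i\right)^2$, using the slope of the theory curve to propagate the $x$-error. A rough sketch in Python; the theory curve, data, and uncertainties below are invented for illustration:

```python
import numpy as np

def chi2_effective(x, y, dx, dy, f, eps=1e-6):
    """Chi-square-like deviation with x-uncertainty folded in via the
    'effective variance' recipe: var_i = dy_i^2 + (f'(x_i) * dx_i)^2."""
    slope = (f(x + eps) - f(x - eps)) / (2 * eps)  # numerical derivative of the theory
    var_eff = dy**2 + (slope * dx)**2
    return np.sum((y - f(x))**2 / var_eff)

# Toy check against a made-up linear "theory"
f_theory = lambda x: 2.0 * x + 1.0
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.1, 2.9, 5.2])
dx = np.array([0.05, 0.05, 0.05])
dy = np.array([0.1, 0.1, 0.1])
chi2 = chi2_effective(x, y, dx, dy, f_theory)
```

Since $\Delta X/X \ll \Delta Y/Y$ in your case, the slope term will usually be a small correction, which you can verify by comparing the statistic with and without it.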
Comparison between theory and experiment using Mean Square Deviation
CC BY-SA 4.0
null
2023-03-03T15:28:21.543
2023-03-03T15:28:21.543
null
null
220335
[ "statistical-significance" ]
608289
1
null
null
3
45
I am a complete novice at this but am just trying to have a go. I have a dataset with the number of birds in my garden each day for one month, in June 2020 and June 2021. I would like to see if there is a significant difference between the two months' totals. As far as I can work out, a two-tailed t-test with equal variances would work for this (however, please correct me if I'm wrong). H0: there is no significant difference between the number of birds visiting my garden in June 2020 and June 2021. Please could someone help write the code for this in `R`? [](https://i.stack.imgur.com/8QDfJ.png)
Confused! Two tailed t-test in r with equal variance
CC BY-SA 4.0
null
2023-03-03T15:29:00.857
2023-03-03T15:47:56.287
2023-03-03T15:38:19.660
56940
382311
[ "r", "hypothesis-testing", "ecology" ]
608290
2
null
608289
2
null
In `R` you can simply do (thanks @COOLSerdash for the remark!)

```
t.test(Jun20, Jun21, var.equal = TRUE)
```

but there is a fundamental problem with this approach. Your variables are counts and, what's worse, with quite small numbers. They surely violate the normality assumption behind the `t.test`. I would suggest going with a two-sample Poisson test (or, even better, a negative binomial test), in which you assume the two variables are independently Poisson-distributed with possibly different parameter values.
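As a cross-check of the two-sample Poisson idea (in Python, since the thread uses R): conditional on the grand total, the first month's total is Binomial$(n_1+n_2, 1/2)$ under the null of equal rates and equal observation windows, which gives an exact test. The monthly totals below are made up:

```python
from math import comb

def poisson_two_sample_p(n1, n2):
    """Exact two-sided test of equal Poisson rates (equal exposure):
    conditional on the total, n1 ~ Binomial(n1 + n2, 1/2) under H0."""
    n = n1 + n2
    center = n / 2
    obs_dev = abs(n1 - center)
    # Sum the probabilities of all outcomes at least as extreme as observed
    p = sum(comb(n, k) for k in range(n + 1) if abs(k - center) >= obs_dev - 1e-12)
    return p / 2**n

# e.g. a June-2020 total of 40 birds vs a June-2021 total of 60 (invented totals)
p_value = poisson_two_sample_p(40, 60)
```

In R the equivalent would be a call to `poisson.test` with the two totals.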
null
CC BY-SA 4.0
null
2023-03-03T15:35:31.923
2023-03-03T15:47:56.287
2023-03-03T15:47:56.287
56940
56940
null
608291
1
null
null
0
9
I used PRROC a long time ago to do statistical analysis. I remember that I could generate the ROC curve with the sensitivity and specificity of the main thresholds in the plot. However, reading the [manual](https://cran.r-project.org/web/packages/PRROC/PRROC.pdf), I can't find a way to do this. My code:

```
#!/usr/bin/env Rscript
library("data.table")
# install.packages("PRROC", repos = "http://cran.us.r-project.org")
library("PRROC")
library("ggplot2")
library("dplyr")

set.seed(12)

class1 <- fread("/mainfs/wrgl/manuel/spliceAI/test/data/spliceAI_class1.csv")
nrow(class1)

class0 <- fread("/mainfs/wrgl/manuel/spliceAI/test/data/spliceAI_class0.csv")
nrow(class0)
class0 <- class0 %>% dplyr::sample_n(nrow(class1))
nrow(class0)

roc <- roc.curve(
  scores.class0 = class1$Max,
  scores.class1 = class0$Max,
  curve = TRUE
)

pdf("/mainfs/wrgl/manuel/spliceAI/test/data/roc_balance.pdf")
plot(roc)
dev.off()
```

And the output I get so far: [](https://i.stack.imgur.com/ZfSdg.png)
Does PRROC generate the sensitivity and specificity when generating the ROC curve?
CC BY-SA 4.0
null
2023-03-03T15:35:48.787
2023-03-03T15:35:48.787
null
null
378571
[ "roc" ]
608292
1
null
null
2
25
Consider the bivariate probability distribution of $(X,Y)$. The Spearman correlation coefficient of $X$ and $Y$ is given by $$\rho_{X,Y} = \frac{\operatorname{Cov}(R(X), R(Y))}{\sqrt{\operatorname{Var}(R(X))\operatorname{Var}(R(Y))}},$$ where $R$ is the ranking function. Let $f$ and $g$ be two measurable functions, and define $U:=f(X)$ and $V:=g(Y)$. In general, $U$ and $X$, as well as $V$ and $Y$, are not independent anymore. Now suppose we estimated $\rho_{X,Y}$ and $\rho_{U,V}$ based on an observed iid sample of $X$, $Y$, $U$, and $V$. The estimators $\hat\rho_{X,Y}$ and $\hat\rho_{U,V}$ are random variables, and I want to compute the covariance $\operatorname{Cov}(\hat\rho_{X,Y},\hat\rho_{U,V})$. Clearly, if $f$ and $g$ are both monotone, $\hat\rho_{X,Y} = \hat\rho_{U,V}$ and the covariance equals the variance of $\hat\rho_{X,Y}$. I don't need to describe the covariance in terms of $f$ and $g$; I would already be very happy with a formula for the covariance of two dependent Spearman correlation coefficients.
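Lacking a closed form, one option is to estimate the covariance with a pairs bootstrap: resample rows, compute both coefficients on each resample, and take the empirical covariance. A Python sketch on simulated data (the `spearman` helper uses average ranks so that ties created by resampling are handled):

```python
import numpy as np

def avg_ranks(a):
    # Average ranks (1-based), handling ties via the searchsorted trick
    s = np.sort(a)
    return (np.searchsorted(s, a, side="left")
            + np.searchsorted(s, a, side="right") + 1) / 2

def spearman(x, y):
    return np.corrcoef(avg_ranks(x), avg_ranks(y))[0, 1]

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = x + rng.normal(size=n)
u = x**2        # f(x), non-monotone
v = np.exp(y)   # g(y), monotone

# Pairs bootstrap: resample rows, recompute both rank correlations
B = 500
stats = np.empty((B, 2))
for b in range(B):
    idx = rng.integers(0, n, n)
    stats[b, 0] = spearman(x[idx], y[idx])
    stats[b, 1] = spearman(u[idx], v[idx])

cov_boot = np.cov(stats, rowvar=False)[0, 1]
```

This gives a numeric answer for any specific $(f, g)$, and it also lets you sanity-check limiting cases (e.g. both transforms monotone should push the bootstrap covariance toward the variance of a single coefficient).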
Covariance of sample Spearman correlation coefficients
CC BY-SA 4.0
null
2023-03-03T15:47:19.660
2023-03-03T15:47:19.660
null
null
359647
[ "covariance", "spearman-rho" ]
608293
1
null
null
0
91
Below I have an example data set in which I'm modeling the time-to-event given two fixed-effect factors `a` and `b`, and a random-effect `id` representing a specific system. In the data, `y` is my response and it can be censored, `complete == 0`. Since there is censoring I'm using survival modeling specifically `survival::survreg`. I simulated the data assuming a normal response but just for simplicity. I want to know if system-to-system variability is significant hence I specifically included it as a term in `mod`. (Side note: I believe I need a parametric model because ultimately I want to find $Pr(time<X | a, b)$. My plan would be to design an experiment and specifically test if system-to-system variability is significant because in general I don't know another way to simply estimate the system-to-system variability within the model and then specifically test if it is above something I would care about. Even if I did know how to test that hypothesis, I'm not sure how I would do that in a setting where I have censoring, since these models don't appear to support random effects/frailty.) I don't actually care about predicting time-to-event for these particular systems because they're just a random selection from a larger population. So, after testing my hypothesis, I would like to specify a model that allows me to predict time-to-event for any random system, and add the variability due to `id` back into the estimate for the model scale parameter. At first I thought I could just remove `id` and the estimate for scale would adjust appropriately, but then I realized I was probably not accounting for dependence between observations, and so would be biasing the scale estimate to be too low (this is my `mod2`). Then I realized `survreg` has a `cluster` argument and based on my interpretation of that I could add `id` there to account for dependence between observations. But, there doesn't appear to be any difference between `mod2` and `mod3`. 
So, I'm not sure I'm appropriately estimating variance that includes system-to-system variance, and accounting for dependence between observations. Ultimately, I want to know if system-to-system variability is significant (above some value). Is there a better way to approach that? And, I want to create a model that allows me to predict $Pr(time<X | a, b)$ for any given system, so variability would need to include system-to-system variability and account for dependence. Is `mod3` already doing that, and if not how would I go about doing this? ``` df <- structure(list(a = structure(c(1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L), .Label = c("1", "2"), class = "factor"), b = structure(c(1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L), .Label = c("1", "2"), class = "factor"), id = structure(c(1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 5L, 5L, 5L, 5L, 6L, 6L, 6L, 6L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 5L, 5L, 5L, 5L, 6L, 6L, 6L, 6L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 5L, 5L, 5L, 5L, 6L, 6L, 6L, 6L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 
3L, 4L, 4L, 4L, 4L, 5L, 5L, 5L, 5L, 6L, 6L, 6L, 6L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 5L, 5L, 5L, 5L, 6L, 6L, 6L, 6L), .Label = c("1", "2", "3", "4", "5", "6"), class = "factor"), y = structure(c(3.26929742382298, 4.53114357864273, 4.60748705769383, 3.66367202748024, 3.60339975174583, 5.62915408515664, 3.64624718709751, 1.42962146166766, 1.30650049618921, 5.61594670417012, 3.56114991343261, 2.90426493230704, 5.33081428734092, 4.4468917639601, 3.80591195313923, 3.59352602872529, 6, 5.02854403654424, 1.63266901770139, 4.84643518333852, 4.64646233097725, 3.79696218021276, 1.52452686442184, 2.31343035760782, 5.12691234029955, 5.83801932729357, 3.34581840898551, 3.38150741733905, 3.98077477160601, 1.29589299723746, 2.43899048753085, 1.77004740191484, 4.53958023188868, 5.57858838493128, 4.77331394964177, 2.68176493257508, 3.53501745110821, 4.26050503442007, 3.75005476932345, 1.18192642948869, 4.55545382605327, 5.4635695047017, 5.76949594826958, 3.74817724327111, 6, 4.62781426082135, 2.8126158503946, 3.54233497740739, 2.80595535587881, 5.94953555255156, 6, 2.24916778530024, 2.49528039300557, 4.21606177364399, 2.85890042665913, 5.96390897323753, 4.5696903228557, 4.96315937063162, 3.67055356512646, 1.81364051363185, 4.19824842058448, 0.507622027976411, 5.1133922636573, 2.44493997863081, 6, 5.62996085484595, 3.55177691511373, 4.83278609173004, 3.32950899280437, 2.1502053399121, 4.16782479354695, 2.3657176304431, 4.50165802744744, 3.9115119862275, 2.61571858071789, 1.94699690077225, 3.85871440979462, 5.12861282734017, 0.776132131835725, 3.252401613923, 5.12797588377574, 5.52320008287, 3.17227444150401, 3.48357854182989, 4.31570815747131, 2.40127992482549, 4.72686168028773, 3.95566389485539, 6, 6, 5.45442619984441, 2.0018082488094, 3.87075731891542, 2.19373651792322, 3.02055448561179, 2.10010542443457, 4.56317380971635, 2.43361752717133, 3.73704315860611, 1.81812303412177, 6, 4.43654354650618, 4.42674367672321, 2.93776036921988, 6, 
2.97494564603199, 2.93608323141461, 5.07697368476795, 2.93901544134742, 2.90398885591002, 2.32584807714999, 1.73507066849021, 5.19802660703162, 5.65797905839819, 4.35070083809204, 3.15776036832507, 6, 3.70878632745045, 3.46132064520567, 2.88036932845041), .Dim = c(120L, 1L), .Dimnames = list(c("1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", "42", "43", "44", "45", "46", "47", "48", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59", "60", "61", "62", "63", "64", "65", "66", "67", "68", "69", "70", "71", "72", "73", "74", "75", "76", "77", "78", "79", "80", "81", "82", "83", "84", "85", "86", "87", "88", "89", "90", "91", "92", "93", "94", "95", "96", "97", "98", "99", "100", "101", "102", "103", "104", "105", "106", "107", "108", "109", "110", "111", "112", "113", "114", "115", "116", "117", "118", "119", "120"), NULL)), complete = c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1)), class = "data.frame", row.names = c(NA, -120L )) library(survival) mod <- survreg(Surv(y, complete) ~ a + b + id, data = df, dist = "gaussian") mod2 <- survreg(Surv(y, complete) ~ a + b, data = df, dist = "gaussian") mod3 <- survreg(Surv(y, complete) ~ a + b, cluster = id, data = df, dist = "gaussian") anova(mod) # Analysis of Deviance Table # # distribution with link # # Response: Surv(y, cens) # # Scale estimated # # Terms added sequentially (first to last) # Df Deviance Resid. Df -2*LL Pr(>Chi) # NULL 118 424.98 # mode 1 3.5883 117 421.39 0.058188 . 
# concentration 1 19.6154 116 401.77 9.47e-06 *** # id 5 18.8474 111 382.92 0.002052 ** # --- # Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 mod$scale # 1.248854 mod2$scale # 1.350591 mod3$scale # 1.350591 ```
Testing and Incorporating Random Effects in Survival Regression
CC BY-SA 4.0
null
2023-03-03T15:55:01.860
2023-03-05T18:07:59.947
2023-03-03T18:06:38.030
64093
64093
[ "mixed-model", "survival", "repeated-measures" ]
608294
2
null
608142
2
null
This is a partial answer and partially a request for clarification. If I understand correctly,

- $a$ is a constant.
- $X_i$ is Bernoulli with $\Pr(X_i=1)=p_i.$
- $Y_i$ is Bernoulli with $\Pr(Y_i=1)=q_i.$ (I changed the names of the parameters slightly to avoid cascading subscripts.)
- $Z_i$ has a Normal$(\mu_i,\sigma)$ distribution.
- All the variables are independent. (If not, we don't have enough information to solve this problem.)

To evaluate the distribution of $W_i=a X_i(1 - Y_i(1-Z_i)),$ ignore $a$ for the moment, because all it does is scale everything, and set $U_i = X_i(1 - Y_i(1-Z_i)).$ We will apply definitions and basic properties, beginning by noting that $(X_i,Y_i)$ is certain to be one of just four values, as presented in this table.

$$\begin{array}{rrlll} & \Pr & X_i & Y_i & U_i\\ \hline & (1-p_i)(1-q_i) & 0 & 0 & 0\\ & (1-p_i)q_i & 0 & 1 & 0\\ & p_i(1-q_i) & 1 & 0 & 1\\ & p_iq_i & 1 & 1 & Z_i\\ \hline \end{array}$$

This evidently is a mixture of an atom at $0$ with probability $(1-p_i)(1-q_i)+(1-p_i)q_i = 1-p_i,$ an atom at $1$ with probability $p_i(1-q_i),$ and, with probability $p_iq_i,$ the Normal variable $Z_i.$ The calculation of its moments is immediate, so that in particular $$E[U_i] = (1-p_i)(0) + p_i(1-q_i)(1) + p_iq_iE[Z_i] = p_i(1-q_i) + p_iq_i\mu_i$$ and, because $E[Z_i^2] = \operatorname{Var}(Z_i) + E[Z_i]^2 = \sigma^2 + \mu_i^2,$ $$E[U_i^2] = \cdots = p_i(1-q_i) + p_iq_i\left(\sigma^2 + \mu_i^2\right).$$ You can use the equation $$\operatorname{Var}(W_i) = \operatorname{Var}(aU_i) = a^2 \operatorname{Var}(U_i) = a^2\left(E[U_i^2] - E[U_i]^2\right).$$ At this point it's unclear what you want, because in your code the parameters are themselves random variables. Are you looking for the unconditional means and variances or, perhaps, for the variances conditional on any specific realization of these parameters, or (based on your bootstrap code) the conditional variance of the mean of the $W_i$?
In any case, you can easily continue this calculation by employing basic properties of expectations and variance.
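A quick Monte Carlo check of the moment formulas above, with made-up parameter values (in Python):

```python
import numpy as np

p, q, mu, sigma = 0.6, 0.3, 2.0, 0.5  # invented parameter values

rng = np.random.default_rng(42)
n = 1_000_000
X = rng.random(n) < p
Y = rng.random(n) < q
Z = rng.normal(mu, sigma, n)
U = X * (1 - Y * (1 - Z))  # the mixture variable (a = 1)

# Closed-form moments: E[U] = p(1-q) + p*q*mu, E[U^2] = p(1-q) + p*q*(sigma^2 + mu^2)
EU_formula = p * (1 - q) + p * q * mu
EU2_formula = p * (1 - q) + p * q * (sigma**2 + mu**2)
var_formula = EU2_formula - EU_formula**2
```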
null
CC BY-SA 4.0
null
2023-03-03T15:56:20.923
2023-03-03T15:56:20.923
null
null
919
null
608295
1
null
null
0
17
I was wondering if you can suggest a statistical test that can be applied to this sort of dataset. I am staying away from machine learning techniques, as the dataset is small and, due to the aggregation, the data might not benefit from ML techniques. I have users, the list of stores each user visits, the number of items they buy from each store, and the total number of items returned by the user. The problem would be much easier if I knew which items the user returned to which stores, but unfortunately I don't have that information; I only know the total number of items returned by the user. This is what the data looks like.

|userID|items bought at store A|items bought at store B|items bought at store C|total items returned|
|------|-----------------------|-----------------------|-----------------------|--------------------|
|UserA |10 |5 |3 |6 |
|UserB |15 |4 |4 |8 |

I understand that this is not the best / most accurate data, and there might not be a suitable test, but can we infer something about the stores and the total items returned by the user in a case like this? If we assume a return rate per store, like 20%, can we say that for user A 2 items were returned to store A, 1 to store B, and so on? Thank you
correct type of statistical correlation test for aggregated dataset
CC BY-SA 4.0
null
2023-03-03T16:06:26.887
2023-03-03T16:13:52.020
2023-03-03T16:13:52.020
362671
360656
[ "hypothesis-testing" ]
608296
2
null
608251
2
null
Calling the [Wolfram integrator](https://www.wolframalpha.com/input?i2d=true&i=Integrate%5BPower%5Bx%2Ca%5Dexp%5C%2840%29-Divide%5BPower%5B%5C%2840%29x-b%5C%2841%29%2C2%5D%2C2%5D%5C%2841%29%2C%7Bx%2C0%2C%E2%88%9E%7D%5D), one gets
$$\int_0^\infty x^a \exp\left(-\tfrac{1}{2}(x - b)^2\right) \mathrm{d}x = 2^{(a - 1)/2} \left[\sqrt{2}\, b\, \Gamma\!\left(\tfrac{a}{2} + 1\right) {}_1F_1\!\left(\tfrac{1 - a}{2}, \tfrac{3}{2}, -\tfrac{b^2}{2}\right) + \Gamma\!\left(\tfrac{a + 1}{2}\right) {}_1F_1\!\left(-\tfrac{a}{2}, \tfrac{1}{2}, -\tfrac{b^2}{2}\right)\right]$$
for $a>-1$, where ${}_1F_1$ denotes the confluent hypergeometric function. Since the posterior is proportional to the above integrand, its normalisation constant can thus be found this way, but the end result is not much more informative than keeping an unspecified constant.
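For what it's worth, the closed form can be checked against direct quadrature, assuming SciPy's `hyp1f1` and `quad` are available:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyp1f1

def lhs(a, b):
    # Direct numerical integration of the left-hand side
    val, _ = quad(lambda x: x**a * np.exp(-0.5 * (x - b) ** 2), 0, np.inf)
    return val

def rhs(a, b):
    # Closed form reported by the Wolfram integrator
    return 2 ** ((a - 1) / 2) * (
        np.sqrt(2) * b * gamma(a / 2 + 1) * hyp1f1((1 - a) / 2, 1.5, -b**2 / 2)
        + gamma((a + 1) / 2) * hyp1f1(-a / 2, 0.5, -b**2 / 2)
    )
```

The two agree to quadrature precision for a range of $(a, b)$ values with $a > -1$.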
null
CC BY-SA 4.0
null
2023-03-03T16:10:28.660
2023-03-03T16:10:28.660
null
null
7224
null
608297
1
608639
null
5
165
I want to report the CI for metrics like F1 and AUC. I'm a bit confused about when it's better to bootstrap it and when to use a formula. For F1 there are several works that estimate the variance and derive a (symmetric) confidence interval, like [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8936911](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8936911). Similarly, for AUC there is the DeLong method, among others. However, for all these metrics we could bootstrap: sample with replacement, compute the metric for every sample, and then take percentiles of the resulting distribution, such as the 2.5% and 97.5% percentiles. Which of these approaches is expected to be better? I guess an issue with the bootstrap method is that it's slow to compute the metric thousands of times. Is that the only reason there are works deriving a closed-form formula for the CI?
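For reference, the percentile-bootstrap recipe described above is only a few lines. A Python sketch for F1 on synthetic labels (the data and the "classifier" are made up):

```python
import numpy as np

def f1_score(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return 2 * tp / (2 * tp + fp + fn)

rng = np.random.default_rng(0)
n = 500
y_true = rng.integers(0, 2, n)
# a made-up classifier that is right about 80% of the time
y_pred = np.where(rng.random(n) < 0.8, y_true, 1 - y_true)

# Percentile bootstrap: resample (truth, prediction) pairs with replacement
B = 2000
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    boot[b] = f1_score(y_true[idx], y_pred[idx])

ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
point = f1_score(y_true, y_pred)
```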
Confidence intervals for accuracy metrics: pros and cons of the bootstrap method compared to variance estimation
CC BY-SA 4.0
null
2023-03-03T16:21:22.310
2023-03-10T21:33:40.393
null
null
382313
[ "confidence-interval", "bootstrap" ]
608299
1
null
null
0
19
My goal is to understand how random effects are handled within a mixed-effects linear model. Given this model:

```
library(lme4)
library(dplyr)

sleepstudy <- sleepstudy %>%
  as_tibble() %>%
  mutate(Subject = as.factor(Subject))

model <- lmer(Reaction ~ Days + (1|Subject), data = sleepstudy)
```

where `Subject` is included as a random effect: how exactly are the random effects calculated, on a conceptual level? Is the data from each Subject fit separately (as would happen in `model <- lm(Reaction ~ Days + Subject)`) and are the random effects then calculated as deviations of each group's coefficients (e.g. intercepts) from the main intercept? Is this main intercept the one calculated for the fixed effect, or something else? My understanding is that these random effects (deviations) are then used to define the variance of a common distribution (often a normal distribution with $\mu = 0$). In turn, this common distribution is used to estimate new random effects: this is done by randomly sampling the distribution and assigning each Subject a sampled intercept. Is this correct? It's relatively difficult for me to envision the process without plots, or to know what data is being used where.
Conceptually understanding random effects in a mixed effects linear model
CC BY-SA 4.0
null
2023-03-03T16:46:44.073
2023-03-03T16:46:44.073
null
null
289211
[ "mixed-model", "lm" ]
608300
1
null
null
0
50
My dependent variable is a count variable that takes on the value 0 in most cases (80%), so I am applying a negative binomial regression model. Depending on the exact model specification, my independent variable of interest is significant at 5% or 1%. My model has a total of 16 predictor variables and no interaction term included. It has year and industry fixed effects. Each observation corresponds to a merger & acquisition announced by a US company on some day between 2009 and 2018. For each observation (i.e., each deal), my dependent variable measures the number of press releases the acquiring company published from one day before to one day after the acquisition was announced. So ultimately my model examines what determines the number of press releases published by an acquiring firm around the date on which the firm announced an M&A.

Most papers I have read so far take the natural log of a control variable that I also included in my model (market value of the company). This makes sense to me, as the variable usually is skewed to the right. The non-transformed version of the variable is highly significant (1%) and doesn't do much to my independent variable of interest in terms of significance. However, if I include the logged version of the control, my independent variable becomes highly insignificant. I am not trying to push my data towards significance, but I want to understand whether this could be a perfectly normal thing or whether it indicates some sort of problem.

Please find below the current Stata output of my model. To improve readability I left out the estimates for the fixed-effects dummies. CNS is my variable of interest that becomes highly insignificant when the variable `Acq_Size_MV42` is logged.

```
Negative binomial regression                 Number of obs =    888
                                             Wald chi2(34) =      .
Dispersion: mean                             Prob > chi2   =      .
Log pseudolikelihood = -577.04358            Pseudo R2     = 0.0919

---------------------------------------------------------------------------------
                |               Robust
  IM_Offsetting | Coefficient  std. err.      z    P>|z|    [95% conf. interval]
----------------+----------------------------------------------------------------
     CEO_tenure |   .025844    .0138651    1.86   0.062    -.0013312    .0530192
        CEO_Age |   .020748    .0121236    1.71   0.087    -.0030137    .0445097
     CEO_Gender |  -.4480861   .3675936   -1.22   0.223    -1.168556    .2723841
     Acq_MA_exp |  -.0114638   .0218293   -0.53   0.599    -.0542485    .0313209
     Deal_Value |  -6.54e-11   2.87e-11   -2.28   0.023    -1.22e-10   -9.14e-12
   Deal_AllCash |  -.2332634   .2853195   -0.82   0.414    -.7924794    .3259526
     Deal_Stock |   .1867257   .310457     0.60   0.548    -.4217589    .7952103
   Targ_Listing |   .0456927   .056514     0.81   0.419    -.0650726    .156458
       FF12_Div |   .4330456   .1427309    3.03   0.002     .1532981    .712793
  Acq_Size_MV42 |   8.18e-12   2.08e-12    3.93   0.000     4.10e-12    1.23e-11
    Acq_Lev_WWU |   .0272248   .0347519    0.78   0.433    -.0408876    .0953373
Acq_TobinsQ_WWU |   .1054541   .068864     1.53   0.126    -.0295169    .2404251
        Acq_FCF |   3.457239   1.930572    1.79   0.073    -.3266118    7.241091
  Acq_Cash_hold |   .2302485   .5423197    0.42   0.671    -.8326787    1.293176
        Acq_ROA |  -4.107877   1.856848   -2.21   0.027    -7.747232   -.4685227
            CNS |   .2634295   .1097225    2.40   0.016     .0483773    .4784816
----------------+----------------------------------------------------------------
       /lnalpha |  -.7166174   .4239483                    -1.547541    .1143059
----------------+----------------------------------------------------------------
          alpha |   .4884015   .207057                      .2127706    1.121095
---------------------------------------------------------------------------------
```
Negative binomial regression: Coefficient gets insignificant when control variable is logged
CC BY-SA 4.0
null
2023-03-03T16:51:52.410
2023-04-23T19:05:57.590
2023-03-03T17:54:32.767
260814
260814
[ "regression", "negative-binomial-distribution" ]
608301
2
null
20452
1
null
For illustration I assume a two-dimensional ANOVA model specified by `y ~ A * B`.

## Type I ANOVA

|Line term in ANOVA table|Hypothesis from model|Hypothesis to model|
|------------------------|---------------------|-------------------|
|A  |y ~ A  |y ~ 1  |
|B  |y ~ A+B|y ~ A  |
|A:B|y ~ A*B|y ~ A+B|

The from-model of every line is the to-model of the line below. The to-model is the from-model without the line term.

## Type II ANOVA

|Line term in ANOVA table|Hypothesis from model|Hypothesis to model|
|------------------------|---------------------|-------------------|
|A  |y ~ A+B|y ~ B  |
|B  |y ~ A+B|y ~ A  |
|A:B|y ~ A*B|y ~ A+B|

The from-model is the full model without all interactions involving the line term. The to-model is the from-model without the line term. This means that the from-model in line B is the full model `A*B`, but without `A:B` — that is, `A+B`. The to-model is then `A+B` without B — that is, `A`.

## Type III ANOVA

In the Type III ANOVA, interactions are parameterized such that they are orthogonal to all lower-level interactions. As a consequence it is meaningful to remove a main term from a model even though an interaction involving that term is still present in the model formula. R doesn't have a good formula notation for this, so I define `o(A,B)` as the part of the interaction A:B that is orthogonal to both A and B.

|Line term in ANOVA table|Hypothesis from model|Hypothesis to model|
|------------------------|---------------------|-------------------|
|A  |y ~ A*B|y ~ B + o(A,B)|
|B  |y ~ A*B|y ~ A + o(A,B)|
|A:B|y ~ A*B|y ~ A+B|

The from-model is always the full model. The to-model is the from-model without the line term (but keeping all higher-order orthogonal components of the interactions).
null
CC BY-SA 4.0
null
2023-03-03T16:58:50.637
2023-03-03T16:58:50.637
null
null
89277
null
608302
2
null
608164
0
null
I found the solution to this problem in [Hansen (1996)](https://www.jstor.org/stable/2171789). In this case, the parameters $(b_1,b_2)$ are [nuisance parameters](https://en.wikipedia.org/wiki/Nuisance_parameter#:%7E:text=In%20statistics%2C%20a%20nuisance%20parameter,of%20the%20location%E2%80%93scale%20family): they are relevant for hypothesis testing but become undefined under the null $\mathcal{H}_0$. The author suggests an empirical procedure to obtain likelihood-ratio test statistics from simulations given different values of the nuisance parameters. The test statistic then follows a $\chi^2$-type distribution whose form depends on the case under study. I hope this helps!
null
CC BY-SA 4.0
null
2023-03-03T17:18:14.250
2023-03-03T17:18:14.250
null
null
139901
null
608304
2
null
304386
0
null
A two-player zero-sum game can indeed be modeled using negative discount factors. A positive/negative discount factor means the next player to act is the same/opposite player. This works because a negative discount factor turns maximization into minimization and vice versa. [DeepMind](https://en.wikipedia.org/wiki/DeepMind)'s [MCTS](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search) library [mctx](https://github.com/deepmind/mctx) [uses this trick](https://github.com/deepmind/mctx/issues/24#issuecomment-1193281828).
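A toy sketch of the sign trick (illustrative only, not code from mctx): a negamax search that only ever maximizes, applying a "discount" of -1 to child values, reproduces explicit minimax on the same tree. The tree encoding and payoffs below are made up for the example.

```python
def minimax(node):
    """Explicit minimax: alternate max and min levels."""
    if isinstance(node, (int, float)):
        return node
    kind, children = node
    vals = [minimax(c) for c in children]
    return max(vals) if kind == "max" else min(vals)

def negamax(node, color=1):
    """Maximize at every level; the -1 'discount' applied to child values
    is what turns the opponent's maximization into our minimization."""
    if isinstance(node, (int, float)):
        return color * node  # leaf payoffs are stored from player 1's view
    _, children = node       # note: the max/min label is never consulted
    return max(-negamax(c, -color) for c in children)

# Player 1 (max) moves first, player 2 (min) replies; leaves are payoffs.
tree = ("max", [("min", [3, 5]), ("min", [2, 9])])
```

`negamax` never looks at whose turn it is; the alternating sign carries that information, which is exactly what a fixed discount factor of -1 does in the MCTS backup.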
null
CC BY-SA 4.0
null
2023-03-03T17:45:57.077
2023-03-03T17:45:57.077
null
null
82547
null
608306
2
null
608174
5
null
Adapting this from [Bauwens & Lubrano (1999)](https://doi.org/10.1093/0198292112.003.0016), the part of the statistical procedure that "breaks down" in the presence of unit roots is asymptotic normality of the (OLS) estimator. For a model as simple as $$ y_{t} = \rho y_{t-1} + \epsilon_t $$ the asymptotic distribution of $\hat{\rho}_{OLS}$ is $\sqrt{T}(\hat{\rho}_{OLS}-\rho) \to N(0,1-\rho^2)$ if $|\rho| <1$, but $$ T(\hat{\rho}_{OLS}-\rho) \to \frac{1}{2} \frac{w(1)^{2} - 1}{\int_{0}^{1} w(r)^{2} \mathrm{d}r} \quad \quad \text{if } \rho =1.$$ where $w(\cdot)$ is a [Wiener process](https://stats.stackexchange.com/q/410260). So in the presence of a unit root, the OLS estimator converges much faster (i.e., it is [superconsistent](https://stats.stackexchange.com/q/446242)) but to a random quantity instead of a constant. As a practical matter, any hypothesis test involving $\rho$ will [require special tables](https://stats.stackexchange.com/q/326748).
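The two convergence rates can be seen in a quick simulation (stdlib Python, purely illustrative): with no intercept, the OLS estimator $\hat{\rho} = \sum y_{t-1} y_t / \sum y_{t-1}^2$ sits far closer to the true value in the unit-root case than in the stationary case at the same sample size.

```python
import random

def ols_rho(y):
    """OLS estimate of rho in y_t = rho * y_{t-1} + eps_t (no intercept)."""
    num = sum(y[t - 1] * y[t] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

def simulate_ar1(rho, T, rng):
    y = [0.0]
    for _ in range(T):
        y.append(rho * y[-1] + rng.gauss(0.0, 1.0))
    return y

rng = random.Random(42)
T = 20_000
rho_stat = ols_rho(simulate_ar1(0.8, T, rng))  # stationary: error ~ 1/sqrt(T)
rho_unit = ols_rho(simulate_ar1(1.0, T, rng))  # unit root: error ~ 1/T
```

The point of the superconsistency result is not that estimation is easier under a unit root, but that $T(\hat{\rho}-1)$ converges to the nonstandard (Dickey-Fuller-type) limit above, so the usual normal-based tests are invalid.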
null
CC BY-SA 4.0
null
2023-03-03T18:15:51.917
2023-03-04T18:12:23.113
2023-03-04T18:12:23.113
71679
71679
null
608307
1
null
null
4
187
### Problem In several applications in surveys, it would be helpful to be able to generate a set of $R$ $n$-dimensional variates with the following properties: - Has mean vector $1$ - Has a specified variance-covariance matrix ($\Sigma$, a positive semi-definite matrix, with rank $\leq R$) - Is nonnegative ### Some First Thoughts about Solutions The first two requirements are easy to satisfy. We can just draw from a multivariate Normal distribution, as discussed here: [Generating data with a given sample covariance matrix](https://stats.stackexchange.com/q/120179/94994) But if we add the additional requirement that all of the variates are nonnegative, then a multivariate Normal distribution won't work. In the applications I have in mind (described below), typically the diagonal of $\Sigma$ has a few entries which are $1$ or even as large as $1.5$, so a multivariate Normal will easily generate negative values. In this application, it doesn't matter at all what the skew or kurtosis are, and it doesn't matter whether the random variates come from a particular distribution. All that matters is that they're nonnegative, have mean vector $1$, and have the specified covariance matrix (ideally, it would have the exact specified sample covariance matrix, but it would be OK if it just had the specified covariance matrix in expectation). The multivariate lognormal and Gamma distributions are nonnegative and [seem like fairly natural options](https://stats.stackexchange.com/questions/439381/generate-multivariate-log-normal-variables-with-given-covariance-and-mean), except that there are constraints on their precise shape (due to their density functions) which mean that they will often not be able to attain the desired variance-covariance. So these parametric distributions, at least, seem unnecessarily limiting. In low dimensions, one can generate random variates and then "fix them up" to satisfy constraints, at least approximately. 
This StackExchange question gets at this kind of approach, but it doesn't really cover the constraint of nonnegativity and is focused on a univariate case. [How to simulate data that satisfy specific constraints such as having specific mean and standard deviation?](https://stats.stackexchange.com/questions/30303/how-to-simulate-data-that-satisfy-specific-constraints-such-as-having-specific-m/71441#71441) ### Motivating Application In survey statistics, resampling methods such as the jackknife or bootstrap are typically implemented by generating replicate weights, which are random variates with a specific mean vector and covariance matrix. To be concrete, if one has a dataset of size $n$, then one generates $R$ sets of replicate weights, which can be represented as a matrix of dimension $R \times n$. In general, we want these replicate weights to be nonnegative, since the weights will frequently be used in statistical procedures that require nonnegative weights; and even for many procedures that allow negative weights, the software implementations won't allow them because they're just unexpected. This R package vignette describes the generation of replicate weights for the "generalized survey bootstrap", and describes how multivariate normal distributions are used to generate replicate weights that have mean vector $1$ and a specified variance-covariance matrix, but which can sometimes be negative. [https://cran.r-project.org/web/packages/svrep/vignettes/bootstrap-replicates.html#forming-adjustment-factors](https://cran.r-project.org/web/packages/svrep/vignettes/bootstrap-replicates.html#forming-adjustment-factors) It also describes a rescaling adjustment that makes the random variates nonnegative and with mean vector $1$, but unfortunately that rescaling increases the variance-covariance by a constant (which is a problem).
Generate nonnegative variates with mean 1 and specified variance-covariance
CC BY-SA 4.0
null
2023-03-03T18:20:26.537
2023-03-15T14:35:47.763
2023-03-14T01:42:17.963
94994
94994
[ "simulation", "covariance-matrix", "copula", "multivariate-distribution", "survey-weights" ]
608308
1
null
null
-2
15
I have difficulty understanding Appendix B2 of [this article](https://arxiv.org/pdf/2210.10837.pdf) (you can go directly to Appendix B2). I'm trying to understand why the authors rewrote the expectation over the distribution $x,y \mid a$ as a composition of expectations over $x \mid a$ and $y \mid x,a$, as follows: $$\mathbb{E}_{x,y\mid a}[f(x) - y] = \mathbb{E}_{x\mid a}\!\left[\mathbb{E}_{y\mid x,a}[f(x)] - \mathbb{E}_{y\mid x,a}[y]\right]$$ I tried to use Bayes' rule to find the link between $P(x,y\mid a)$, $P(x\mid a)$, and $P(y\mid x,a)$, but can't find it. Can anyone help with this? Thanks.
Rewriting an expectation as a composition of conditional expectations
CC BY-SA 4.0
0
2023-03-03T18:40:50.010
2023-03-03T18:40:50.010
null
null
329039
[ "conditional-expectation" ]
608309
1
null
null
0
20
I've got a set of variables $(x,y)$ and a corresponding linear regression model, for which I should perform the Goldfeld–Quandt test in order to check for heteroscedasticity. I performed the test and had to reject the null hypothesis, which is homoscedasticity. Then, I fitted two weighted regressions to my initial model so that I could hopefully get rid of heteroscedasticity -- the first one assumes that variation of the error term increases proportionally to $x^2$, while the second one is built assuming that the error term is proportional to $x$. Now, I have to use the Goldfeld–Quandt test again for my new weighted models, which is the part that's confusing me. As far as I understand, in order to perform the test, I should put my independent variable(s) in increasing order, since I'm checking to see whether variation increases from the first segment to the last. But when performing the test for weighted regression, should I do the same? For example, should I order by $\frac{1}{x}$ in the case of the $x^2$ weighted regression? Thank you for your answers. I've added screenshots of my sheet so that it's clearer what I'm asking. [](https://i.stack.imgur.com/Y4aZy.png) [](https://i.stack.imgur.com/f6bk9.png) [](https://i.stack.imgur.com/92CEa.png)
Using the Goldfeld–Quandt test for weighted models
CC BY-SA 4.0
null
2023-03-03T19:11:56.043
2023-03-03T19:13:47.230
2023-03-03T19:13:47.230
375028
375028
[ "regression", "econometrics", "weighted-regression", "wls" ]
608310
1
null
null
0
63
Suppose we have a set A that we split into multiple disjoint subsets a_i. We only have access to the a_i sets; is there a way to compute the ECDF of A without looking at A itself? For example, if we have an ECDF_i on each a_i and we average them, do we have guaranteed convergence? We cannot just combine all the subsamples, for memory reasons; the only thing we can do is compute the ECDF of each subsample. So instead of the k samples of subset a_i we keep 101 values (the ECDF evaluated at 0%, 1%, ..., 100%), which drastically reduces the memory needed: a subset can have a size of 1,000,000, and we compress everything into about a hundred numbers. Now, given these values for each subset, how do we infer the ECDF of A, which is the union of the a_i? By "average" here I mean vertically (if we stack the ECDFs vertically): F_A(0%) would be avg(F_{a_i}(0%)), and so on for each X%. I actually ran some simulations to find out, and it seems to work. I chose this metric (MAPE) because, in the case of the ECDF, the 100% percentile is by definition the largest, so an error in the 100% percentile would inflate the whole error even though it is an error at only one point of the function. For example, an error at the 100% percentile of approximately 400 vs. a true value of 800 should be treated the same way as an error of approximately 40 vs. a true value of 80 at the 40% percentile, because we want the function to converge at each point; we may even tolerate divergence at the 100% percentile. [](https://i.stack.imgur.com/zzS8r.png) - avg_mape is the average error when we only averaged the ECDFs - weighted_avg_mape is the average error when we averaged the ECDFs using weights proportional to the size of each subset a_i I tried different underlying distributions with different parameters (sigma, mu, lambda, ...).
The imbalance here refers to how uneven the partitioning (from A into the a_i) is done; you can see it as inversely proportional to the entropy: with maximum entropy all ratios are even, and with the lowest entropy you get something like 90%, 1%, 9%. But we already know that the plain average will dilute the 100% percentile (we average the true 100% value with smaller ones) and inflate the minimum, and nearby percentiles behave similarly, as shown in this figure: [](https://i.stack.imgur.com/u8rZm.png) What we would like to do is bend the curve a little at the extremities (up on the right and down on the left). I came up with the following function to do it (a kind of generalized formula for the average): $$F(L, n) = \sum_i l_i^{n+1} \Big/ \sum_i l_i^{n}$$ where L is a list of positive numbers. As n goes to $+\infty$, F returns the maximum of L; as n goes to $-\infty$, F returns the minimum of L; and for n = 0, F returns the average of L. Here is how the function acts on a list while varying n: [](https://i.stack.imgur.com/lwQIn.png) We can of course also combine this with the weights coming from the sizes of the subsets a_i. Now, instead of a weighted average taken vertically, we use F(L_{X%}, n), where L_{X%} is the list containing all the F_{a_i}(X%) values (for example, all the F_{a_i}(10%)). We let n tend toward $+\infty$ when X is 100%, toward $-\infty$ when X is 0%, set n = 0 when X is 50%, and vary n continuously from $-\infty$ to $+\infty$ for the values in between. Here is an example of the result after bending the extremities: [](https://i.stack.imgur.com/WNii7.png) The following is an evaluation of the different approaches: [](https://i.stack.imgur.com/thv92.png) The problem is that I cannot come up with a consistent proof of convergence for this approach, even though it sounds intuitive. How accurate is all of this?
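The generalized average $F(L, n)$ described above is easy to write down directly. This stdlib sketch only checks the stated limiting behaviour (mean at n = 0, pulled toward the max as n grows, toward the min as n becomes very negative); it is not a convergence proof.

```python
def F(L, n):
    """Generalized average from the question: sum(l^(n+1)) / sum(l^n).
    n = 0 gives the plain mean; large positive/negative n approaches
    the maximum/minimum of L."""
    num = sum(l ** (n + 1) for l in L)
    den = sum(l ** n for l in L)
    return num / den

L = [1.0, 2.0, 3.0, 4.0]
f_mean = F(L, 0)    # plain average of L
f_max = F(L, 50)    # dominated by the largest element
f_min = F(L, -50)   # dominated by the smallest element
```

A size-weighted variant would use `sum(w*l**(n+1)) / sum(w*l**n)` with weights proportional to the subset sizes, matching the weighted average mentioned in the question.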
Averaging ECDFs vertically: proof of convergence
CC BY-SA 4.0
null
2023-03-03T20:29:45.263
2023-03-05T17:56:43.727
2023-03-05T17:56:43.727
362671
359570
[ "machine-learning", "mathematical-statistics", "convergence", "law-of-large-numbers", "empirical-cumulative-distr-fn" ]
608311
1
null
null
1
17
I have a sample of 20 drugs, each described by about 30 parameters. I would like to run a "uniqueness" test to see whether one of the drugs is unique in any way relative to the other 19 drugs. Does anyone know how I could go about doing this? Thank you for any help.
Test of uniqueness
CC BY-SA 4.0
null
2023-03-03T20:51:31.150
2023-03-03T20:51:31.150
null
null
364121
[ "inference", "small-sample" ]
608312
1
608388
null
3
36
I have come across the term "component-wise" in the literature, and I am curious whether it implies that a model performs variable selection - and, conversely, whether a model that is not component-wise does not perform variable selection. More specifically, I am currently exploring boosted trees, specifically those introduced by Friedman in his 2001 paper, "Greedy function approximation: A gradient boosting machine." Could one say it is a variable selection model, or equivalently, component-wise boosting? I appreciate any insights or clarifications on this topic. Thank you.
Does boosting being component-wise imply variable selection?
CC BY-SA 4.0
null
2023-03-03T20:55:38.283
2023-03-05T00:07:07.627
2023-03-03T22:36:59.620
382332
382332
[ "r", "machine-learning", "boosting", "cart", "adaboost" ]
608313
1
608336
null
1
154
I am quite a newbie in R and even more so in Bayesian regression. I have fit a `stan_glm` binomial model with 1689 observations, 12 variables and two interaction terms. All predictors are categorical. One of the main predictors suffers from quasi-complete separation. [](https://i.stack.imgur.com/nwJKC.png) I also followed the procedures detailed in Gelman, Hill and Vehtari (2020) to fit the model. One of the diagnostic tools introduced in the book is the binned residuals plot. To produce a binned residuals plot, I tried `performance::binned_residuals` and `arm::binnedplot`. The two plots are identical. Unfortunately, it turned out that only 32% of the residuals fall within the error bound, with the rest on the two extremes, outside the bounds. [](https://i.stack.imgur.com/67uwo.jpg) I don't know if it is due to the problem of separation or other issues in the model. I stumbled across this post and someone recommended the package DHARMa. [Interpreting a binned residual plot in logistic regression](https://stats.stackexchange.com/questions/99274/interpreting-a-binned-residual-plot-in-logistic-regression) And so, I tried it. When I ran ``` simulationOutput <- simulateResiduals(fittedModel = fit.final, plot=F, integerResponse = NULL) ``` This warning message pops up: ``` Warning message: In checkModel(fittedModel) : DHARMa: fittedModel not in class of supported models. Absolutely no guarantee that this will work! ``` I continued running the following code: ``` residuals(simulationOutput) plot(simulationOutput) ``` And the plots came out perfect. [](https://i.stack.imgur.com/jpsmd.jpg) My questions are thus: (i) What is the problem with the binned residuals plot? Anyway to fix it? (ii) Is the use of DHARMa residuals warranted and appropriate despite the warning? ============ UPDATE ============ Thank you, Shawn, for the extra efforts in labelling the steps. 
Thank you, Shawn, for the extra effort in labelling the steps. Following your advice, I plugged my model into `createDHARMa` directly, after transforming the response from factors into integers. [](https://i.stack.imgur.com/nCuWX.png)

```
DHARMaRes = createDHARMa(
  simulatedResponse = t(posterior_predict(fit.final)),
  observedResponse = x$adj_pla1,
  fittedPredictedResponse = apply(t(posterior_predict(fit.final)), 1, median),
  integerResponse = T
)
plot(DHARMaRes)
```

This is the plot I got. Using the median (recommended for Bayesian models) in `fittedPredictedResponse` returns a boxplot. [](https://i.stack.imgur.com/94ksr.jpg) If I use the mean, I get the usual scatter plots. [](https://i.stack.imgur.com/iUmpp.jpg) In any case, can I use either of the plots? (Do excuse me for this rather dumb question. Please bear with me.)
DHARMa residuals plot vs. binned residuals using stan_glm object
CC BY-SA 4.0
null
2023-03-03T21:04:31.780
2023-03-06T12:10:28.350
2023-03-04T17:26:47.053
382333
382333
[ "bayesian", "logistic", "mixed-model", "residuals", "separation" ]
608315
1
null
null
1
25
Let's say a variable fits a ratio measurement level and is also normally distributed (for example, height). Why is it possible to do a linear transformation on such a variable if the only transformation which is allowed on ratio measurements is multiplication or division?
Linear transformation on a normal distribution
CC BY-SA 4.0
null
2023-03-03T21:59:11.413
2023-03-04T02:57:50.610
2023-03-04T02:57:50.610
345611
382334
[ "statistical-significance", "data-transformation", "standard-deviation", "descriptive-statistics" ]
608316
1
null
null
0
59
The canonical way to do the test is to perform the spherical harmonic transform of the empirical distribution and then check that the power spectrum decays, but this is presumably fairly expensive. Is there some more efficient way (I am currently most interested in $S^2$ in $\mathbb{R}^3,$ but certainly something that scales reasonably as the dimension goes up would be good (the spherical harmonic method does not))?
Testing whether a set of points on the unit sphere is uniformly distributed
CC BY-SA 4.0
null
2023-03-03T22:34:01.437
2023-03-05T21:13:56.590
2023-03-05T21:13:56.590
11887
55548
[ "hypothesis-testing", "goodness-of-fit", "uniform-distribution", "computational-statistics" ]
608317
2
null
464874
1
null
You are right: the random walk with no drift has mean zero, or the starting value if such a value is given. As you said, the random walk is just the cumulative sum of i.i.d. random variables $\epsilon_t$ with mean zero and variance $\sigma^2$. As such, if $y_t$ is a random walk then it can be written as $y_t = \sum_{i=1}^t \epsilon_i$, so in fact $y_t\sim N(0,t\sigma^2)$. In your case, if $y_0$ is fixed then $y_t \sim N(y_0,t\sigma^2)$. Note that even though the process has the same mean throughout, the variance depends on time, so it is not even weakly stationary. A process being nonstationary does not mean you cannot take expectations or apply other operators, just that the (unconditional) distribution changes over time.
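A quick stdlib simulation of the claim that $y_t \sim N(y_0, t\sigma^2)$ (here with $y_0 = 0$): across many simulated paths, the endpoint mean stays near zero while the endpoint variance grows like $t\sigma^2$.

```python
import random
from statistics import mean, pvariance

rng = random.Random(0)
sigma, T, n_paths = 1.0, 100, 5_000

def random_walk_endpoint(T):
    y = 0.0
    for _ in range(T):
        y += rng.gauss(0.0, sigma)  # sum of i.i.d. N(0, sigma^2) increments
    return y

endpoints = [random_walk_endpoint(T) for _ in range(n_paths)]
m, v = mean(endpoints), pvariance(endpoints)  # near 0 and near T * sigma^2
```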
null
CC BY-SA 4.0
null
2023-03-03T23:02:02.443
2023-03-03T23:02:02.443
null
null
221201
null
608318
2
null
608315
1
null
- A variable won't be both ratio and exactly normal (at least not for strictly non-negative variables like height). - Leaving aside the issue of normality: Lets say I have a ratio variable like $X_i$ = "number of hours worked on day $i$", and the corresponding daily pay is "\$10 + \$30 per hour" (perhaps the ten dollars is some meal allowance or travel allowance or something, it's not important). Then the daily pay amount is a linear transformation of the hours worked and Stevens' "rule" about transformation of a ratio variable in that situation is plainly no help to us - a linear transformation is not only possible in that situation, it's inherent in the definition of the pay amount variable. You might find my answer [here](https://stats.stackexchange.com/a/106400/805) on issues with Stevens' typology of scale of some interest.
null
CC BY-SA 4.0
null
2023-03-03T23:58:32.340
2023-03-03T23:58:32.340
null
null
805
null
608320
1
null
null
1
14
Let's say I have a time series, I take first differences, and I train a model to output predicted 95% quantiles of these first differences at future time horizons. If this were an ordinary point forecast rather than a quantile forecast, I could convert back to forecasts of the actual values simply by cumulating the forecasted differences. But with 95% quantile forecasts of the first differences, I cannot just cumulate these values to get 95% quantiles of the actuals. So how would I generate the 95% quantile of the actuals forecast when I have 95% quantiles of the first differences?
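To make the issue concrete: even for two independent standard normal first differences, the 95% quantile of their sum is the quantile of a $N(0, 2)$, not twice the one-step quantile. A stdlib check:

```python
from statistics import NormalDist
import math

# Suppose the first differences at horizons 1 and 2 are independent N(0, 1).
z95 = NormalDist().inv_cdf(0.95)     # 95% quantile of a single difference
naive = 2 * z95                       # cumulating the per-step quantiles
# The 2-step cumulated level is N(0, 2), so its true 95% quantile is:
true_q = NormalDist(0.0, math.sqrt(2.0)).inv_cdf(0.95)
```

Here `naive` is about 3.29 while `true_q` is about 2.33, so summing per-horizon quantiles overstates the level quantile; the size of the gap depends on the dependence structure of the cumulated path.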
How to generate quantile forecasts from first differences?
CC BY-SA 4.0
null
2023-03-04T00:40:30.200
2023-03-04T07:49:15.490
2023-03-04T07:49:15.490
53690
380720
[ "time-series", "forecasting", "quantiles", "quantile-regression", "differencing" ]
608321
1
null
null
0
16
This is a conceptual question about local average treatment effects (LATE). I ran an experiment that included three treatment groups and a control group. We measured our outcome variable at t1 and t2 to see if the treatment made a difference. We included an attention check in our study, and roughly ~500 people failed it. We conducted the analysis by removing all individuals who failed the attention check, and it was suggested we calculate the local average treatment effect in our study. Conceptually, I'm wondering whether we can run a local average treatment effect analysis with our data. What I've read assigns individuals a binary status: treated or not treated. This would make sense if we only had one treatment group and a control group, but we have two additional treatment groups. Would we run three separate sets of analyses, comparing the control group to each treatment group based on their treatment outcomes? Thanks for all your help!
Can I Calculate a Local Average Treatment Effect (LATE) With Three Treatment Groups?
CC BY-SA 4.0
null
2023-03-04T00:51:56.720
2023-03-04T00:51:56.720
null
null
212008
[ "time-series", "treatment-effect", "treatment" ]
608322
1
null
null
0
52
Given: $$A\sim \mathrm N(\mu_1, \sigma_1)$$ $$B \sim \mathrm N(\mu_2, \sigma_2)$$ $$C \sim \mathrm N(\mu_3, \sigma_3)$$ $$X_1 = \alpha_1 \cdot A + \beta_1 \cdot B + \gamma_1 \cdot C$$ $$X_2 = \alpha_2 \cdot A + \beta_2 \cdot B + \gamma_2 \cdot C$$ $$Y = A$$ $A$, $B$, and $C$ have known pairwise covariances/correlations. How can I calculate the percent of the variance of $Y$ that is explained by $X_1$ and $X_2$? I can currently generate random samples for A, B, and C and then run a regression to find the r-squared, but I was hoping for a closed-form solution. Or alternatively, what amount of the variance of $Y$ is unique, that is, not explained by $X_1$ and $X_2$? Edit: I had simplified my problem too much, and jbowman correctly pointed out that all the variance is explained if X1 and X2 are linear combinations of just two random variables, so I added a third random variable.
Percent variance explained from linear combination of normal variables
CC BY-SA 4.0
null
2023-03-04T00:55:19.407
2023-03-04T06:02:21.033
2023-03-04T06:02:21.033
178600
178600
[ "normal-distribution", "variance", "covariance", "r-squared" ]
608323
1
null
null
0
48
I'm working on a modeling problem where I can define a score function that looks a lot like a binomial likelihood, but the model isn't really binomial. I'd like to use profile likelihood to estimate confidence intervals and covariance for two parameters of interest, among many other nuisance parameters. But I don't have a real likelihood and I don't know how to estimate the variance in this score function. And so I don't know how to construct a normalized Gaussian approximation that I can treat like a profile likelihood for confidence intervals. Advice? [Edited to provide more detail]. I'm doing a many-parameter fit of a complex model to a bunch of different binary measurements. The data come from an infectious disease transmission study, where we have prevalence measurements on a few different days from the start of the study in a few different cohorts. The data are thus $\{Y_t^C,N_t^C\}$ for day $t$ in cohort $C$: positive outcomes (infected) on each day $Y_t$, and total number sampled each day $N_t$. We model the prevalence for each cohort and time as a probability $p_t^C(\theta)$ defined by model parameters $\theta$. If the data and model predictions were independent, then it would be reasonable to model the log-likelihood as a sum of binomials, $\log L(\theta|\{Y_t^C,N_t^C\}) \propto \displaystyle \sum_C\sum_t \left(Y_t^C \log\left(p_t^C(\theta)\right) + (N_t^C-Y_t^C)\log\left(1-p_t^C(\theta)\right) \right)$ Since the data and model predictions aren't independent at each time point, this isn't the right likelihood (which is intractable to write down). But it is a good score function, so I'm using it as the score in an optimizer to find best-fit parameters $\hat{\theta}$. Knowing everything here is at best approximate, I'd like to estimate approximate confidence intervals with some kind of plausible statistical interpretation, even if only asymptotically under an assumption about how the score maps onto a real likelihood. [Edited with updated thinking.]
If I knew how to estimate the variance for this score, I could assume normality, and then calculate a two-parameter confidence ellipse in the standard way for profile likelihood intervals. But I don't know how to estimate the variance of the score. Only two of the parameters are of primary scientific interest, and the model is expensive to evaluate, and so I think this is a good use case for profile likelihood. I think if I'm willing to accept that as a likelihood, then I can just do the standard thing of sweeping over the two parameters, and defining the confidence ellipse with a likelihood ratio test and two degrees of freedom. (NB: my earlier text assumed I needed a variance estimator, but this score isn't approximating a normal, but rather is a binomial approximation with the wrong covariance structure to the real likelihood, and all the relevant terms are evaluated here already.) I feel like I might be missing something obvious, like asking the optimizer to return a hessian or something and inverting that. I'm using `optim` in R but could also use the `stats4::mle` wrapper if it'll make my life easier. Advice?
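The composite score itself is cheap to code up. In this stdlib sketch, everything except the form of the sum is hypothetical: `p_model`, its single parameter, and the toy data are placeholders standing in for the real transmission model.

```python
import math

def binomial_score(theta, data, p):
    """Composite binomial-style log-score: sum over (cohort, day) cells of
    Y*log(p) + (N - Y)*log(1 - p), with p = p(theta, cohort, t)."""
    total = 0.0
    for cohort, t, Y, N in data:
        pr = p(theta, cohort, t)
        total += Y * math.log(pr) + (N - Y) * math.log(1.0 - pr)
    return total

# Hypothetical stand-in for the model prediction p_t^C(theta).
def p_model(theta, cohort, t):
    return 1.0 / (1.0 + math.exp(-theta * t))

# Toy (cohort, day, Y, N) cells, purely illustrative.
toy_data = [("A", 1, 3, 10), ("A", 2, 6, 10), ("B", 1, 2, 10)]
score = binomial_score(0.2, toy_data, p_model)
```

For curvature, note that in R `optim(..., hessian = TRUE)` returns a numerically differentiated Hessian of the objective at the optimum; with a composite (mis-specified) likelihood like this one, treating its inverse as a covariance is only a rough approximation, since the usual sandwich-type correction for the wrong covariance structure is missing.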
Approximating profile likelihood confidence intervals when I only have a score function and not a likelihood
CC BY-SA 4.0
null
2023-03-04T00:56:00.870
2023-03-10T19:00:50.263
2023-03-10T19:00:50.263
11887
252994
[ "confidence-interval", "modeling", "quasi-likelihood", "profile-likelihood" ]
608324
1
null
null
1
13
I have a dataset from a psychological experiment in which each participant rates faces of 5 different emotions presented randomly. So the emotion category is a within-subject variable and the DV is reaction time. For the between-subject part, I have two grouping variables: having gene X and having condition Y. I would normally go with a 2x2x5 repeated-measures ANOVA. However, I am not interested in the differences between emotion categories, so no within-subject contrast is of interest. Rather, I am interested in whether there is a main effect of X or Y, or their interaction, for each emotion type. Moreover, SPSS does not produce a 2x2 post-hoc table for repeated-measures ANOVA. So, I would like to know whether running separate univariate 2x2 ANOVAs for each emotion category is feasible here. Something like: gene x condition ANOVA on the reaction time for happiness; gene x condition ANOVA on the reaction time for sadness; gene x condition ANOVA on the reaction time for fear; ... Am I violating/missing something by not using repeated measures? Do I need to make some corrections? Also, any suggestion for an alternative approach is more than welcome.
Repeated Measures ANOVA vs multiple Univariate ANOVAs
CC BY-SA 4.0
null
2023-03-04T01:18:18.790
2023-03-04T01:18:18.790
null
null
218005
[ "anova", "repeated-measures" ]
608325
1
null
null
0
19
I am estimating the mean of a highly skewed sampling distribution. I have a full sample of 80% of the population and a small partial sample of the remaining 20%. I'm using the bootstrap to estimate the statistic on the 20% stratum and to derive confidence intervals. I somehow need to combine the statistic from the fully sampled 80% stratum with the partially sampled 20% stratum, even when the confidence intervals are asymmetric. I'm not sure where to start. Does anyone have a pointer or suggestion?
How to combine two strata, where one is fully sampled and the other partially sampled, when the sampling statistic is highly skewed
CC BY-SA 4.0
null
2023-03-04T01:25:27.933
2023-03-04T01:25:27.933
null
null
43080
[ "confidence-interval", "sampling", "bootstrap", "stratification" ]
608326
1
null
null
0
27
Here's a signal processing/information theory problem that I've encountered in a software engineering context: Say I have a logging utility in my application that I use for recording timestamped diagnostic logs. I want to add tracking of some performance metrics to this log. Moreover, I want to do this intelligently, so as not to explode the size of the log files. A convenient way to do a form of lossy compression would be to only record values when it appears that something "interesting" is happening - as judged, for example, by looking at the output of a high-pass filter applied to the value. It's clear that the success of such an implementation depends critically on the design of the filter. How can we design an optimal filter (in a rate-distortion sense), given some knowledge (e.g. moment structure) of the stream we are trying to compress?
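As a lower bound on sophistication, here is a stdlib "send-on-delta" sketch of the record-only-when-interesting idea; a rate-distortion-optimal design would replace the fixed threshold with a filter tuned to the stream's moment structure, as described above. All names and values are made up.

```python
class DeltaLogger:
    """Record a (timestamp, value) sample only when the value has moved
    more than `delta` away from the last recorded value."""

    def __init__(self, delta):
        self.delta = delta
        self.last = None
        self.records = []

    def observe(self, timestamp, value):
        # The first observation, and any large move, gets recorded.
        if self.last is None or abs(value - self.last) > self.delta:
            self.records.append((timestamp, value))
            self.last = value

log = DeltaLogger(delta=0.5)
for t, v in enumerate([0.0, 0.1, 0.2, 1.0, 1.1, 3.0]):
    log.observe(t, v)
# Only the "interesting" moves survive: (0, 0.0), (3, 1.0), (5, 3.0)
```

The threshold plays the role of the distortion budget: larger `delta` means fewer records but a coarser reconstruction of the stream.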
Optimal variable-time sampling of a real-time data stream
CC BY-SA 4.0
null
2023-03-04T01:27:42.043
2023-03-04T04:31:59.557
2023-03-04T04:31:59.557
137774
137774
[ "stochastic-processes", "information-theory", "compression" ]
608327
1
null
null
1
36
I am trying to find the "best" model for a binomial GLMER using AIC in R. Here is the model formula: Survival ~ Age + Group + Age:Group + Age^2 + Age^3 + (1|Year) + (1|Individual) where Year and Individual account for temporal differences and repeated measures, respectively. I am trying to determine whether Age:Group, Age^2, and/or Age^3 are good predictors. However, I am running into convergence problems with some models when I calculate AIC values using the MuMIn package. The convergence problems appear to be caused by one or the other (or both) of the random effects. Thus, is it appropriate to use GLMERs at all to estimate the "best" model? Thank you.
GLMER and AIC: convergence problems
CC BY-SA 4.0
null
2023-03-04T03:27:32.707
2023-03-04T03:59:56.017
2023-03-04T03:59:56.017
382345
382345
[ "lme4-nlme", "convergence", "aic" ]
608328
2
null
149611
1
null
Because you get the multivariate Student t as a normal vector divided by (the square root of a scaled) chi-squared random variable. So, of course, the histogram of the entries of the vector will be normal, but for every simulation it will be a different normal, depending on the value the chi-squared random variable takes. If you normalize, the chi-squared factor cancels and you are left with a standard normal every time.
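A stdlib sketch of the construction: one multivariate $t_\nu$ draw is a normal vector divided by the square root of an independent $\chi^2_\nu/\nu$ factor that is shared across all entries, so multiplying back by that factor recovers the normal draw exactly.

```python
import math
import random

rng = random.Random(1)
nu, dim = 5, 4

z = [rng.gauss(0.0, 1.0) for _ in range(dim)]          # N(0, I) vector
w = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(nu))   # chi-squared(nu) draw
s = math.sqrt(w / nu)                                  # shared scale factor

t = [zi / s for zi in z]     # one draw from a multivariate t(nu)
back = [ti * s for ti in t]  # normalizing cancels the shared chi-squared factor
```

Conditionally on `s`, the entries of `t` are i.i.d. $N(0, 1/s^2)$: a normal with a randomly varying scale, which is exactly why pooling many draws gives heavy tails.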
null
CC BY-SA 4.0
null
2023-03-04T03:38:03.787
2023-03-04T03:38:03.787
null
null
382346
null
608329
2
null
608322
2
null
All of it is. - Multiply $X_1$ by $c = \beta_2 / \beta_1$ to get: $$cX_1 = c\alpha_1A + \beta_2B$$ - Now subtract $X_2$ from this: $$cX_1 - X_2 = (c\alpha_1-\alpha_2)A$$ - Now divide both sides by $c\alpha_1-\alpha_2$ to get: $${cX_1-X_2 \over c\alpha_1-\alpha_2} = A$$ So... $$Y = {c\over c\alpha_1-\alpha_2}X_1 -{1\over c\alpha_1 - \alpha_2}X_2$$ with no error left over.
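A quick numerical check of the algebra above for the original two-variable version of the question (the coefficient values are arbitrary, chosen so that $c\alpha_1 - \alpha_2 \neq 0$):

```python
import random

rng = random.Random(7)
a1, b1 = 2.0, 3.0    # alpha_1, beta_1 (arbitrary illustrative values)
a2, b2 = -1.0, 4.0   # alpha_2, beta_2 (arbitrary illustrative values)
c = b2 / b1          # the multiplier used in the derivation

max_err = 0.0
for _ in range(100):
    A, B = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    X1 = a1 * A + b1 * B
    X2 = a2 * A + b2 * B
    A_recovered = (c * X1 - X2) / (c * a1 - a2)  # the derived formula
    max_err = max(max_err, abs(A_recovered - A))
```

The reconstruction is exact up to floating-point rounding, confirming that $Y = A$ is a deterministic function of $X_1$ and $X_2$ in the two-variable case.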
null
CC BY-SA 4.0
null
2023-03-04T03:45:05.407
2023-03-04T03:45:05.407
null
null
7555
null
608330
2
null
606769
0
null
I think I figured it out. The KL is with respect to the distribution of the actions given a stochastic policy, so it makes sense that in the InstructGPT paper the expectation is w.r.t. $(x, y)$. In the PPO paper, the expectation w.r.t. the distribution of actions is implicit in the 'KL' function, and the outer expectation averages across an entire episode. Since in the InstructGPT formulation there is only one step per episode (more akin to a bandit setting), this is not needed.
null
CC BY-SA 4.0
null
2023-03-04T05:00:02.573
2023-03-04T05:00:02.573
null
null
272776
null
608331
2
null
326253
0
null
The way I think about it is this: $$P(A|B)=\frac{P(A \cap B)}{P(B)}$$ The key is to understand what each term means and what division by $P(B)$ means in this case (it's a little more confusing because we are dividing by a number which is $\le 1$). Also, do not think about this using a Venn diagram. As far as I know, this formula has no clear meaning that can be visualized with a Venn diagram, and as another answer mentioned, Venn diagrams are mostly used to describe sets and intersections, not probabilities. Let's start with the numerator. $$P(A\cap B)$$ At first I was also confused as to why this is not the entire formula for conditional probability, because both $P(A|B)$ and $P(A \cap B)$ describe probabilities in which both $A$ and $B$ occur. But things become clear when you consider more events (i.e., C, D, E, etc.). When you say $P(A \cap B)$, you consider the entire sample space, when what you need is $A$ with some relation to only $B$, disregarding other events! What we need from conditional probability is to help us establish a "what is the likeliness of $A$ considering $B$ has occurred" relation between two events. This leads us to the role of the denominator. To me, $P(B)$ was the confusing part of the formula, so let's consider the $P(B)$ in the formula as a separate term $\frac{1}{P(B)}$. If we take $P(B)$ to be the chance of rolling a 1 on a six-sided die, then $$P(B)=\frac{1}{6}\implies \frac{1}{P(B)}=6$$ This is the same as the average number of die rolls that we need to roll a one. In other words, to "probabilistically guarantee" an event $B$, we repeat the experiment $\frac{1}{P(B)}$ times ($P(B)\cdot \frac{1}{P(B)}=1$). Therefore, $$P(A \cap B)\cdot \frac{1}{P(B)}$$ is the same as asking for the probability of both $A$ and $B$ occurring, assuming $B$ is guaranteed. NOTE: I am an undergrad student currently taking an Introduction to Probability course. If I commit a Math or Statistics crime, please forgive me.
Also disregard the fact that I am answering this question 5 years later.
null
CC BY-SA 4.0
null
2023-03-04T05:19:15.357
2023-03-06T16:13:43.980
2023-03-06T16:13:43.980
382344
382344
null
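The formula in the answer above can be verified by brute-force counting on a fair die; the events $A$ ("roll is even") and $B$ ("roll is at least 4") are arbitrary illustrative choices, not from the original answer:

```python
from fractions import Fraction

outcomes = set(range(1, 7))   # fair six-sided die
A = {2, 4, 6}                 # "roll is even" (illustrative event)
B = {4, 5, 6}                 # "roll is at least 4" (illustrative event)

def P(event):
    # probability of an event under the uniform distribution on the die
    return Fraction(len(event & outcomes), len(outcomes))

# P(A|B) via the formula ...
lhs = P(A & B) / P(B)
# ... and via direct counting inside the restricted sample space B
rhs = Fraction(len(A & B), len(B))

print(lhs, rhs)
```

Dividing by $P(B)$ is exactly the step that shrinks the sample space from all six outcomes down to the three outcomes in $B$.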
608332
1
null
null
0
55
In my problem, I have a condition in which I need to compute the joint distribution of two dependent distributions. The first distribution is normal and the second one is a beta distribution. How can I get the joint distribution function of these two distributions? Any help would be appreciated. Update Actually, I am testing my hypothesis on the same data using both the Wilcoxon and Kolmogorov–Smirnov (KS) tests. (Each of these tests captures one aspect of the differences, and I have to detect both.) I need to somehow get one single statistic out of these two statistics (at the moment I am choosing the minimum of the two, but I know that is not correct). I searched a lot through the published literature and found that the Wilcoxon statistic can be approximated using a normal distribution and the KS statistic using a beta distribution. Now, I assume my case is: P(A U B) = P(A)+P(B)-P(A $\cap$ B), where P(A) refers to the probability obtained from the normal distribution and P(B) refers to the p-value from the beta distribution. So what I need is the last part, to complete this puzzle. I hope this is clearer now. Thanks a lot.
How can I compute the joint distribution function of normal distribution and beta distribution?
CC BY-SA 4.0
null
2023-03-04T05:25:59.923
2023-03-06T18:23:30.460
2023-03-06T15:44:53.973
365295
365295
[ "mathematical-statistics", "normal-distribution", "joint-distribution", "beta-distribution" ]
608333
2
null
608332
3
null
I presume your intent is that the marginal distributions are beta and normal respectively, rather than say one conditional and one marginal or both conditional. Specifying the marginal distributions does not specify the joint distribution. There's an infinite variety of joint distributions with those marginal distributions, so your question doesn't narrow things down enough to give any single answer. You would need to explain how they are dependent, in some detail. At the very least, to make much progress you'd need to explain what you're trying to achieve (again, in some detail). Indeed you can unify the specification of dependence structure for any set of continuous marginal distributions by transforming each margin to a uniform, and then looking at the joint distribution of those marginally uniform variates. This is called a copula. There are [many posts](https://stats.stackexchange.com/questions/tagged/copula) on site about copulas. There's also a [Wikipedia article](https://en.wikipedia.org/wiki/Copula_%28probability_theory%29) on them. As implied by my earlier comments, there's an infinite variety of copulas. There are many popular families of copulas that might very easily be used. It might help if you explained more about what sort of joint behavior you want to be able to model. (You refer to "my problem" but say nothing of its nature, which doesn't leave much to go on.) Here are two examples - plots of samples of 1000 pairs of values from joint distributions between a standard normal (Y) and two different betas (W is a beta($\frac12,\frac12$) and V is a beta($2,1$), which is triangular in shape). The two copulas these joint distributions are based on are very different in form, but it's likely that neither of those choices would be useful for your purpose; they're just chosen to illustrate that there's a very wide range of possibilities. [](https://i.stack.imgur.com/4gGmH.png)
null
CC BY-SA 4.0
null
2023-03-04T05:43:19.760
2023-03-06T18:23:30.460
2023-03-06T18:23:30.460
805
805
null
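As a concrete instance of the copula construction sketched in the answer above, here is a standard-library Python example coupling a standard normal margin with a Beta(1/2, 1/2) margin through a Gaussian copula. The correlation $\rho = 0.8$ is an arbitrary choice, and Beta(1/2, 1/2) is used because its quantile function has the closed form $\sin^2(\pi u/2)$:

```python
import math
import random
from statistics import NormalDist

random.seed(42)
nd = NormalDist()
rho = 0.8   # copula correlation -- an arbitrary illustrative choice
n = 5000

ys, ws = [], []
for _ in range(n):
    # correlated standard normals (Gaussian copula on the latent scale)
    z1 = random.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
    u2 = nd.cdf(z2)                        # probability integral transform -> Uniform(0,1)
    w = math.sin(math.pi * u2 / 2) ** 2    # Beta(1/2, 1/2) quantile function
    ys.append(z1)                          # marginally N(0, 1)
    ws.append(w)                           # marginally Beta(1/2, 1/2)

# dependence survives even though the margins are a normal and a beta
my = sum(ys) / n
mw = sum(ws) / n
cov = sum((y - my) * (w - mw) for y, w in zip(ys, ws)) / n
sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
sw = math.sqrt(sum((w - mw) ** 2 for w in ws) / n)
corr = cov / (sy * sw)
print(round(corr, 2), round(mw, 2))
```

Swapping $\rho$, or the quantile function, changes the dependence or the margin independently of one another - which is exactly the point of the copula decomposition.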
608334
1
608337
null
2
65
I was reading about `tidymodels` and got confused by the `mlp()` function's description in the help section. From the R help file: > mlp() defines a multilayer perceptron model (a.k.a. a single layer, feed-forward neural network). This function can fit classification and regression models. I am confused about how a multilayer perceptron model can be a single layer, feed-forward neural network. Could someone enlighten me here?
question about mlp() in tidymodels
CC BY-SA 4.0
null
2023-03-04T06:12:16.857
2023-03-04T07:49:01.590
2023-03-04T07:07:54.923
362671
382349
[ "r", "machine-learning", "neural-networks" ]
608335
2
null
608334
0
null
I agree that the explanation is confusing. The a.k.a. section refers to "perceptron model", not "multilayer perceptron model". You can sense this from the tidymodels definition of [MLP](https://parsnip.tidymodels.org/reference/mlp.html): ``` mlp( mode = "unknown", engine = "nnet", hidden_units = NULL, penalty = NULL, dropout = NULL, epochs = NULL, activation = NULL, learn_rate = NULL ) ``` It's a complete model, not a single layer, because we would not talk about the number of epochs, the penalty (and usually the learning rate) for a single layer of a neural network.
null
CC BY-SA 4.0
null
2023-03-04T06:42:15.750
2023-03-04T06:42:15.750
null
null
204068
null
608336
2
null
608313
0
null
#### Bayesian DHARMA Residuals I recommend reading through this specific section in the [DHARMA package vignette](https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html#importing-external-simulations-e.g.-from-bayesian-software-or-unsupported-packages) to understand why it is saying this, along with the [supplementary vignette](https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMaForBayesians.html) they mention: [](https://i.stack.imgur.com/l2nEE.png) The package creator has some code below that section for modeling with Bayesian data, which I've slightly modified to label what's going on and to include a random seed for reproducibility: ``` #### Load DHARMa Library and Set Random Seed #### library(DHARMa) set.seed(123) #### Create Test Data and Fit to GLM #### testData <- createData(sampleSize = 200, overdispersion = 0.5, family = poisson()) fittedModel <- glm(observedResponse ~ Environment1, family = "poisson", data = testData) #### Create Simulation Function #### simulatePoissonGLM <- function(fittedModel, n){ pred = predict(fittedModel, type = "response") nObs = length(pred) sim = matrix(nrow = nObs, ncol = n) for(i in 1:n) sim[,i] = rpois(nObs, pred) return(sim) } #### Use FUnction for Fitted Model #### sim <- simulatePoissonGLM(fittedModel, 100) #### Create DHARMa Residuals #### DHARMaRes <- createDHARMa(simulatedResponse = sim, observedResponse = testData$observedResponse, fittedPredictedResponse = predict(fittedModel), integerResponse = T) #### Plot Them #### plot(DHARMaRes, quantreg = F) ``` You can see the residuals plotted below: [](https://i.stack.imgur.com/oBeO5.png) For your specific case, I think you just have to include your model into the `createDHARMa` function and then this will use your residuals in the way you prescribe, rather than using the `simulateResiduals` function typically used. 
#### Binned Residual Plot As for the binned residual plot, notice this section in the same vignette: > One reason why GL(M)Ms residuals are harder to interpret is that the expected distribution of the data (aka predictive distribution) changes with the fitted values. Reweighting with the expected dispersion, as done in Pearson residuals, or using deviance residuals, helps to some extent, but it does not lead to visually homogenous residuals, even if the model is correctly specified. As a result, standard residual plots, when interpreted in the same way as for linear models, seem to show all kind of problems, such as non-normality, heteroscedasticity, even if the model is correctly specified. Questions on the R mailing lists and forums show that practitioners are regularly confused about whether such patterns in GL(M)M residuals are a problem or not. Basically your standard residual plots can be severely inaccurate, and I know from personal experience this is the case.
null
CC BY-SA 4.0
null
2023-03-04T06:51:40.653
2023-03-04T06:51:40.653
null
null
345611
null
608337
2
null
608334
0
null
It's a model with a single hidden layer, plus the input and output layers, with all nodes connected from one layer to the next. That's the simplest case of a multilayer perceptron (see e.g. [Wikipedia](https://en.wikipedia.org/wiki/Multilayer_perceptron)). For a long time there was a tendency in statistics to consider only the single-hidden-layer case of neural networks, until deep convolutional networks such as AlexNet showed that more layers could really matter in practice. For example, even the second edition of Elements of Statistical Learning (from 2008), by decidedly computation-friendly authors, restricts itself to a single hidden layer.
null
CC BY-SA 4.0
null
2023-03-04T07:49:01.590
2023-03-04T07:49:01.590
null
null
249135
null
608338
2
null
304788
2
null
The main idea in empirical Bayes estimation is to use the observed data to estimate the prior distribution of the parameters, and then use this prior distribution to update the posterior distribution of the parameters based on the observed data. Let $X_i|\mu\sim N(\mu, \sigma^2)$ and $\mu \sim N(\mu_0, \tau^2).$ Let's find $f(X)$ by computing the mean and the variance (law of total expectation, law of total variance): $E(X) = E_\mu[E(X|\mu)] = E_\mu(\mu)=\mu_0$ $Var(X) = E_\mu[V(X|\mu)]+V_\mu[E(X|\mu)] = E_\mu(\sigma^2)+V_\mu(\mu) = \sigma^2+\tau^2$ So we know that $X_i \sim N(\mu_0, \;\sigma^2+\tau^2)$. Now we can use the method of moments (or MLE) for estimation of $\tau ^2$: $E(X^2) = V(X)+E^2(X) = \sigma^2+\tau^2 +\mu_0^2$ $\frac{1}{n}\sum X_i^2 = \sigma^2+\tau^2+\mu_0^2$ $\rightarrow \widehat{\tau^2}=\frac{1}{n}\sum X_i^2-\sigma^2-\mu_0^2$ If $\sigma ^2$ is unknown, you can use the sample variance. Note that $\tau^2$ can't be negative, so $\widehat{\tau^2}=\max(\frac{1}{n}\sum X_i^2-\sigma^2-\mu_0^2,\,0)$. Actually, in this case, you will get the same answer if you solve with the MLE method instead of MME.
null
CC BY-SA 4.0
null
2023-03-04T08:04:31.330
2023-03-05T13:14:22.977
2023-03-05T13:14:22.977
324968
324968
null
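The method-of-moments estimator derived above is easy to sanity-check by simulation; a stdlib-Python sketch with made-up parameter values $\mu_0 = 2$, $\sigma^2 = 1$, $\tau^2 = 4$ (not from the original answer):

```python
import random
from statistics import fmean

random.seed(7)
mu0, sigma2, tau2 = 2.0, 1.0, 4.0   # mu0 and sigma^2 treated as known; tau^2 to recover
n = 200_000

# draw mu ~ N(mu0, tau^2), then X | mu ~ N(mu, sigma^2);
# marginally X ~ N(mu0, sigma^2 + tau^2)
xs = [random.gauss(random.gauss(mu0, tau2 ** 0.5), sigma2 ** 0.5) for _ in range(n)]

m2 = fmean(x * x for x in xs)                 # (1/n) * sum X_i^2
tau2_hat = max(m2 - sigma2 - mu0 ** 2, 0.0)   # method-of-moments estimate, clipped at 0
print(round(tau2_hat, 2))
```

With this sample size the estimate lands close to the true $\tau^2 = 4$; the `max(..., 0)` clip matters when the true $\tau^2$ is near zero.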
608339
1
null
null
0
20
I have an experiment where we obtained blood tests (glucose, hemoglobin, leukocytes…) from different patients who were in hospital with a certain infectious disease. Depending on the day the patient was admitted to the hospital, there is a probability that the patient had a particular strain of the disease (strain A, B, or C). For example, between 1st March 2022 and 15th March, the probabilities of strains A, B, and C were 0.9, 0.05, and 0.05 respectively. So a patient admitted during that period would most probably be infected by strain A. Given this, I would like to test whether certain numeric variables (hemoglobin, leukocytes…) differ depending on the probability of having a strain. How would you model this situation given these probabilities? Thank you
Test for numeric variable when groups are probabilities
CC BY-SA 4.0
null
2023-03-04T08:58:20.040
2023-03-04T16:48:03.507
2023-03-04T16:48:03.507
326551
326551
[ "conditional-probability" ]
608340
1
608349
null
2
38
Hypothetical scenario. Let's say I'm running 6 separate ANCOVAs. My independent variable in each is field of study (3 groups: science, humanities, and business). For each ANCOVA my dependent variable is a different measure of IQ measured on a continuous scale (e.g., verbal score, math score, reading score, etc.). In each ANCOVA there are two covariates: years of education (continuous variable) and sex (categorical variable). Let's say that I'm checking the assumptions for each of the 6 ANCOVAs to determine whether running an ANCOVA is appropriate, and that in 4 of the 6 cases I determine that it is not: in one instance I violate the homogeneity of regression assumption, in another I violate the homoscedasticity assumption (based on plots of the standardized residuals against the predicted values), in another I violate the assumption that the residuals are normally distributed for each level of the independent variable, and in a fourth I determine, based on past literature, that my continuous covariate likely won't be linearly related to my dependent variable at one or more levels of my independent variable. I looked online for a nonparametric test I can run in place of ANCOVA and I found something called Quade's test, and I found some instruction on how to run it in SPSS. However, I can't find anywhere online what the assumptions are for the test. So I'm wondering what the assumptions are for Quade's test, so I can determine whether I can run it in any or all of the four instances described above in which I violated one of the assumptions for ANCOVA. Thanks much in advance! FBH
Question about running Quade's test instead of ANCOVA
CC BY-SA 4.0
null
2023-03-04T09:33:52.280
2023-03-04T14:33:23.590
null
null
128883
[ "nonparametric", "ancova" ]
608341
1
null
null
1
16
I'm reading a [meta-analysis](https://journals.sagepub.com/doi/pdf/10.1177/070674371305800702) about how well cognitive-behavioral therapy (CBT) works for treating depression in adults. The study uses a statistic called Hedges' g to measure the treatment's effectiveness. I'm learning about this statistic for the first time, and as I understand it, it's inversely proportional to the weighted average of the standard deviations in the compared groups. So if people in these groups are much more similar to one another than is the case in the general population (I presume they have depression, which would correspond to being on the left tail of well-being; I assume a depression score corresponds to well-being/happiness on the low end well enough that you can roughly translate between the two, at least as a first approximation), a relatively small change might result in a big Hedges' g. This would suggest that Hedges' g = 0.53 might mean that even if the change is significant in terms of improving depression, when you look at the results in terms of a change in well-being, the change is pretty small (i.e., a common outcome could be that you might not be as depressed, but you'd still be relatively sad). I think the above is complicated by the fact that measures of depression are not sensitive to improvements beyond the "not depressed" threshold - you can't have a score lower than 0 (no symptoms). I doubt that, given Hedges' g = 0.53, this would change my analysis very much, because unless a big portion of patients had low scores to begin with, moving 0.53 standard deviations shouldn't get them close to 0 symptoms, I think. My questions: - is my reasoning right? - is there a way to find what the difference was in absolute terms, or at least some proxies, to get an idea of how big the improvements are in practical terms?
How to find/understand magnitude of difference (meta-analysis reports Hedge's g)
CC BY-SA 4.0
null
2023-03-04T10:01:56.883
2023-03-09T04:18:54.427
null
null
382364
[ "meta-analysis", "effect-size" ]
608342
1
608450
null
0
68
In regression models, a (descriptive) indicator of an interaction effect can usually be plotted with the potential interaction variable (e.g., gender) on the x-axis and the outcome variable on the y-axis – if the outcome is continuous. Otherwise, a 2×2 table should give a hint of interaction effects. In Cox models, however, I'm a bit confused about whether I should check the time-until-event variable or the number-of-events variable itself to descriptively see whether an interaction effect occurs. I know the underlying formula involves both time until event and occurrence of events: $$S(t|x)=\exp(-H(t|x))$$ with $H(t|x)$ the cumulative hazard. But what is the "more important variable"? Consider e.g. a case where 100/100 male participants have an event, while only 75/100 female participants have an event. However, all of the male participants have the event later than the female participants. Would there be a significant difference, and in which direction (assuming power is high enough)? When looking at the formula, it seems that it's time, but it's somehow confusing to understand. So my questions would be: - if I want to descriptively look at an interaction effect, should I plot the moderator (e.g., gender) against time until event, make a 2×2 table with moderator and number of events, or both? - What would be the result of a Cox regression with the example I gave above?
Interaction effects in a Cox model
CC BY-SA 4.0
null
2023-03-04T10:16:03.113
2023-03-05T17:32:53.127
2023-03-05T17:32:53.127
44269
379768
[ "regression", "interaction", "cox-model" ]
608343
2
null
267485
1
null
The fact that the $(y - x)^2$ is convex in $x$ does not mean that $(y - f(x))^2$ is convex in $x$ in general. See [this answer](https://ai.stackexchange.com/questions/11979/how-is-it-possible-that-the-mse-used-to-train-neural-networks-with-gradient-desc) on AISE. So yes, you can have local minima that are not global minima.
null
CC BY-SA 4.0
null
2023-03-04T10:49:22.203
2023-03-04T10:49:22.203
null
null
260320
null
608344
1
null
null
0
7
I'm trying to analyze head rotation (Euler angles on the x, y, z axes) over time in order to predict motion sickness scores (from 0 to ~15). I have 12000 points (per axis) per individual and 35 individuals. I would like to use these different time series to predict a single measure of motion sickness score (at least based on one axis, but preferably with all 3). I'm very new to time series, and everything I find is about forecasting the future of the time series itself, not predicting another (cross-sectional) variable. What method should I use? Thanks
How to use inter-individual time series (panel) to predict cross sectional data?
CC BY-SA 4.0
null
2023-03-04T10:51:52.073
2023-03-04T10:51:52.073
null
null
382365
[ "time-series", "predictive-models", "cross-section" ]
608345
1
null
null
0
35
I ran ANCOVA with dependent variable IQ, independent variable field of study (3 groups: science, humanities, business), and two covariates (age and sex). I see that my result is significant (i.e., there is at least one significant difference between my 3 groups when it comes to IQ, after adjusting for age and sex). I also had my software (SPSS) calculate an effect size (partial eta-squared). So then I look at my post hoc comparisons. I see that one of the three comparisons is significant (science vs business). So I'll report this difference in the paper I'm writing. But I wonder: rather than reporting the effect size for the full model, wouldn't it be more informative to report an effect size for the one significant comparison (i.e. effect size for the science vs business comparison)? Are effect sizes for individual post-hoc tests typically reported in papers? If so, how can I go about calculating the effect size for the significant post hoc test I have? There doesn't seem to be an option in SPSS to calculate effect sizes for post hoc comparisons, and so I need to figure out how to do it by hand. Thanks!
Calculating effect sizes for individual post hoc comparisons in ANCOVA
CC BY-SA 4.0
null
2023-03-04T11:42:09.817
2023-03-04T11:53:22.300
2023-03-04T11:53:22.300
128883
128883
[ "effect-size", "post-hoc", "ancova" ]
608346
1
608352
null
0
84
I have the following dataframe: [https://drive.google.com/file/d/1IxwI52nIdolzL9wzbxiDmu5NGR5eoukX/view?usp=sharing](https://drive.google.com/file/d/1IxwI52nIdolzL9wzbxiDmu5NGR5eoukX/view?usp=sharing) I'm wondering what the best statistical analysis would be to investigate the relationship between the size of the colony perimeters and bacterial strains, combined with treatment variation. The hypothesis is that the perimeter of the gamma strain is greater than that of the alpha or beta strain in the presence of a chemoattractant at 0 mM, 1 mM, and 10 mM succinate. I want to take into account any batch effect across experiments, along with random effects from bacterial variation. I initially just applied boxplots and investigated significance with Wilcoxon rank tests; however, that didn't take into account any batch effects or random variation. Any advice on a transformation I could apply that would account for this? Or should I be looking at it more from a linear mixed model perspective? If I run the ComBat function, for example, I consistently get this error: Error in dat[, batch == batch_level] : (subscript) logical subscript too long
Best statistical practise to take into account batch effects and biological variation
CC BY-SA 4.0
null
2023-03-04T12:37:54.953
2023-03-04T19:50:42.147
2023-03-04T13:51:07.337
381157
381157
[ "r", "batch-normalization" ]
608347
1
608607
null
1
69
I have (x,y) values where x are integers ranging from 0 to 843, and there is only one y value per x value. Then I have a set S of (x,y) values. If an element of S is selected, its y value is added to the y value of the corresponding x in the (x,y) values above. I want to select a subset of S that makes the sum of all y values close to N while keeping the y values distributed flat (i.e., the (x,y) graph looking like a straight line when seen at a larger scale). I tried maximizing the variance, but that tends to make only the y values at the beginning and the end big. So I wonder if there's another measure I could use, or whether I should add other metrics alongside variance to achieve my goal. Thank you!
Is there a measure other than variance to know dispersion?
CC BY-SA 4.0
null
2023-03-04T12:50:43.940
2023-03-07T05:07:08.200
null
null
319408
[ "distributions", "python", "variance" ]
608349
2
null
608340
1
null
The Quade test works only for data arranged in an unreplicated complete block design. (The same is true for Friedman's test.) It is a rank-based test but, if my understanding is correct, assumes that the data are at least interval in nature. There may be more general nonparametric approaches, like aligned ranks transformation ANOVA, that may be appropriate for your situation. Also, you might investigate whether a generalized linear model would be appropriate. For example, for your situation, Gamma regression may be a contender.
null
CC BY-SA 4.0
null
2023-03-04T14:33:23.590
2023-03-04T14:33:23.590
null
null
166526
null
608350
1
null
null
0
11
Sorry in advance for the basic question, but I was not able to find an accurate answer to my question by browsing the forums. I have a dataset containing three groups of samples (group A, group B, and group C). In each group, I have three conditions (treatment X, treatment Y, and control). For each sample, I have calculated a specific score, and now I want to test whether treatments significantly affect the score. A within-group pairwise comparison is not feasible due to small sample sizes (three samples for each condition). So I am trying to compare the score in different conditions (e.g., treatment X vs control) in the samples across the groups while accounting for any effects the different groups might have on the score. I would appreciate it if anyone could guide me on how to approach this problem.
How to assess if a continuous variable is statistically different between two conditions when there is a confounding factor?
CC BY-SA 4.0
null
2023-03-04T15:08:16.113
2023-03-04T15:08:16.113
null
null
382374
[ "hypothesis-testing", "statistical-significance", "descriptive-statistics" ]
608352
2
null
608346
0
null
First, the non-stats question: The error you're seeing is because ComBat is designed for experiments with multiple outcomes. You only have one (perimeter), so it's throwing an error at a step when it expects a matrix, but all it's seeing is a vector. Now the stats bit: A mixed effect model is typically how one would ask this question. Now, the fact that you are using ComBat suggests that you expect that strains will differ in more than just their means (i.e., that there's heteroskedasticity). If this is a concern, you could use a [double-hglm](https://cran.r-project.org/web/packages/dhglm/) to adjust to strain differences in variance as well. An example of how to test your question with a standard mixed effect model: ``` library(lmerTest) mod1=lmer(Perimeter..cm. ~ Treatment + (1|Biological.samples),data=p.data) anova(mod1) Type III Analysis of Variance Table with Satterthwaite's method Sum Sq Mean Sq NumDF DenDF F value Pr(>F) Treatment 506.58 253.29 2 92.685 27.85 3.377e-10 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 summary(mod1) Linear mixed model fit by REML. t-tests use Satterthwaite's method ['lmerModLmerTest'] Formula: Perimeter..cm. ~ Treatment + (1 | Biological.samples) Data: p.data REML criterion at convergence: 824.1 Scaled residuals: Min 1Q Median 3Q Max -2.3275 -0.5361 -0.1112 0.3942 3.3652 Random effects: Groups Name Variance Std.Dev. Biological.samples (Intercept) 22.085 4.699 Residual 9.095 3.016 Number of obs: 144, groups: Biological.samples, 50 Fixed effects: Estimate Std. Error df t value Pr(>|t|) (Intercept) 6.2674 0.7978 74.1016 7.856 2.41e-11 *** Treatment1 mM succinate 3.5914 0.6237 92.6853 5.758 1.10e-07 *** Treatment10 mM succinate 4.3605 0.6237 92.6853 6.991 4.13e-10 *** --- Signif. 
codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Correlation of Fixed Effects: (Intr) Tr1mMs Trtmnt1mMsc -0.391 Trtmnt10mMs -0.391 0.500 ``` Edit Something similar with `dhglm`, comparing the Alpha and Beta bacteria: ``` library(dhglm) p.data.temp = p.data %>% dplyr::filter(ï..Bacteria == "Alpha" | ï..Bacteria == "Beta") # typical mixed effect model model_mu<-DHGLMMODELING(Model="mean", LinPred=Perimeter..cm.~ ï..Bacteria +(1|Batch.No.)) # model variance model_phi<-DHGLMMODELING(Model="dispersion", LinPred=phi~ ï..Bacteria +(1|Batch.No.),RandDist="gaussian") #put it together res_dhglm1<-dhglmfit(RespDist="gamma",DataMain=p.data.temp, MeanModel=model_mu,DispersionModel=model_phi) Distribution of Main Response : "gamma" [1] "Estimates from the model(mu)" Perimeter..cm. ~ ï..Bacteria + (1 | Batch.No.) [1] "identity" Estimate Std. Error t-value (Intercept) 11.318 1.0068 11.242 ï..BacteriaBeta -7.348 0.9539 -7.703 [1] "Estimates for logarithm of lambda=var(u_mu)" [1] "gaussian" Estimate Std. Error t-value Batch.No. 0.5491 0.5473 1.003 [1] "Estimates from the model(phi)" phi ~ ï..Bacteria + (1 | Batch.No.) [1] "log" Estimate Std. Error t-value (Intercept) -1.5263 0.2534 -6.023 ï..BacteriaBeta -0.6285 0.2997 -2.097 [1] "Estimates for logarithm of var(u_phi)" Estimate Std. Error t-value Batch.No. -1.396 0.8561 -1.631 [1] "========== Likelihood Function Values and Condition AIC ==========" [,1] -2ML (-2 p_v(mu),v(phi) (h)) : 294.33754 -2RL (-2 p_beta(mu),v(mu),beta(phi),v(phi) (h)) : 292.21448 cAIC : 290.18455 Scaled Deviance : 51.32276 df : 51.32276 ``` These results suggest an effect of bacteria, and you could repeat the analysis without the Bacteria variable and use the ratio of the likelihoods to compute a p-value (or compare the cAIC). You can also use a simpler model for the variance (`model_phi<-DHGLMMODELING(Model="dispersion")`) to see whether accounting for differences in variance between batches really matters (it doesn't seem to in this case).
null
CC BY-SA 4.0
null
2023-03-04T15:17:46.293
2023-03-04T19:50:42.147
2023-03-04T19:50:42.147
288142
288142
null
608353
1
null
null
1
35
I have a Tobit model of the form: Yi* = latent variable; Yi = censored (observable) variable; Yi* = β0 + β1*Xi + ϵ, where ϵ is a random variable normally distributed with mean zero and variance σ2. To simplify notation I will use μi = β0 + β1*Xi. The relationship between Yi and Yi* can be written as: Yi = 1, if Yi* >= 1; Yi = Yi*, if 0 < Yi* < 1. So the latent variable is unknown when Yi = 1, and I would like to estimate it. I think point estimates for the latent variable can be calculated using the conditional mean, i.e., E[Yi*|Yi=1,μi,σ]. But, please, could someone help me with the exact formula to do this? Edit: In R, I'm using the AER package to calculate the Tobit model, the scale parameter σ, and the fitted values (μi): ``` tobit.model <- tobit(Y ~ X, right = 1) sigma <- tobit.model$scale ui <- fitted(tobit.model) ```
How to estimate latent variable in Tobit Model
CC BY-SA 4.0
null
2023-03-04T15:26:32.507
2023-03-04T17:02:37.563
2023-03-04T17:02:37.563
382375
382375
[ "latent-variable", "tobit-regression" ]
608354
1
null
null
0
60
How would you implement a dynamic treatment effect model for diff-in-diff in Stata? I tried it for my setup and the results make no sense. What I have done (2005 is the first treatment year): 1) centered the time variable --> k = year - 2004 --> so k = 0 is the reference point and omitted in my regression 2) created dummy variables for every year, for example the first control year (2002): gen T_2pre = 0, replace T_2pre = 1 if k == -2 --> the same for every year (except k = 0) 3) generated interactions of the treatment variable and the dummy variables from 2): gen Treat_T_2pre = Treat * T_2pre --> again for every year (except k = 0) 4) ran the regression: reg Y Treat Post Treat_T_2pre Treat_T_1pre Treat_T_1 ... where Treat_T_1 is the first year after treatment (= 2005). Where are my mistakes?
Dynamic Treatment Effects for Difference in Differences / treatment effect by year / longterm treatment effects
CC BY-SA 4.0
null
2023-03-04T15:43:36.133
2023-03-04T15:43:36.133
null
null
380647
[ "econometrics", "difference-in-difference" ]
608355
1
null
null
0
30
I'm training an autoencoder on a time series that consists of repeating patterns (because the same process is repeated again and again). If I then use this autoencoder to reconstruct another one of these patterns, I expect the reconstruction to be worse if the pattern is different from the ones it has been trained on. Is the fact that the time series consists of repeating patterns something that needs to be considered in any way for training or data preprocessing? I am currently using this on raw channels. Thank you.
Training Autoencoder on time series with repeating pattern
CC BY-SA 4.0
null
2023-03-04T15:58:04.890
2023-03-04T15:58:04.890
null
null
127037
[ "time-series", "autoencoders" ]
608356
1
null
null
0
25
Given a joint distribution of two variables $P(X,Y)$, is the CDF of the $Y$-marginal distribution equal to the $Y$-marginal of the CDF of $P(X,Y)$?
marginal cdf and cdf of marginals
CC BY-SA 4.0
null
2023-03-04T15:59:18.237
2023-03-04T18:00:14.907
2023-03-04T18:00:14.907
53690
275569
[ "mathematical-statistics", "joint-distribution", "definition", "marginal-distribution" ]
608357
1
null
null
2
31
Suppose $X$ and $Y$ are two independent folded normal variates. Define : $$W = \frac{X}{Y}$$ How can we find the cumulative distribution function of $W?$ I will be highly obliged for any suggestions/links/help
Ratio of two folded normal variates
CC BY-SA 4.0
null
2023-03-04T16:10:58.910
2023-03-04T16:30:30.050
2023-03-04T16:30:30.050
69508
295633
[ "distributions", "folded-normal-distribution" ]
608358
1
608479
null
2
54
I am modeling the seasonal occurrence of a species at different sites (count data). I am specifically trying to identify potential drivers of the seasonal pattern. To this end, I have selected a number of environmental variables and I am planning on using the `gam()` function in `mgcv` to fit hierarchical GAMs allowing variation of smoothers across sites. I am using the negative binomial distribution for count data. However, the range of the candidate predictor varies across groups (see Plot 1 below; y is the count response per day, x is the environmental variable, and facets represent sites). Plot 2 is a time series of this predictor across sites. Should I transform the predictor scale prior to fitting the model? Maybe by standardizing or normalizing the data (per site)? For some predictors, I may be able to remove a few isolated points, treating them as outliers (even if ecologically plausible) to reduce the range and fit the regression better for most of the sites. But for others, such as the one plotted below, I cannot discard points as they just reflect different dynamics of the predictor at different sites... [This thread](https://stats.stackexchange.com/questions/603326/transformation-of-predictors-for-generalized-additive-model) does not recommend scaling, while [this one](https://stats.stackexchange.com/questions/12715/should-quantitative-predictors-be-transformed-to-be-normally-distributed) answers a similar question on GLMMs. I am worried that leaving the data as they are now will affect the model by increasing the importance of one site (within a single predictor). Similarly, I wonder if such issues would arise among predictors (one variable weighing more in a model), as they are measured on different scales (e.g. chlorophyll concentration, day of the year, temperature...). On the other hand, normalizing the data erases the information on inter-site variability in environmental conditions. Are there common practices for such questions in GAMs?
Plot 1: [](https://i.stack.imgur.com/IqE6t.png) Plot 2: [](https://i.stack.imgur.com/SvmE2.png)
GAM: predictor range varying across groups. Should data be transformed?
CC BY-SA 4.0
null
2023-03-04T17:03:02.390
2023-03-06T01:40:01.930
2023-03-06T01:34:43.307
345611
247492
[ "data-transformation", "multilevel-analysis", "count-data", "generalized-additive-model", "mgcv" ]
608359
1
null
null
1
40
I have a dataset with patient data from a psychiatric hospital. Some patients had more than one stay, so the observations are not completely independent. I therefore used a multilevel model (in R with glmmTMB) with the patient cases as level 1 and the patients as level 2. The model contains random intercepts for patients. I used family = binomial because of the binary outcome variable. When trying to check the assumptions of such a model, I read that non-normality of random effects (random intercepts in my case) could be problematic. The histogram of my random intercepts definitely showed non-normality. I tried log-transforming the random intercepts and adding those as a predictor in the model. The fixed effects changed a bit and the diagnostics made with the DHARMa package looked better afterwards. Also the AIC decreased. But I'm not sure if this is a valid approach. Does someone have an opinion on that? In addition, even after a lot of research, I am not completely sure what assumptions are relevant for a logistic multilevel model and how to test them (I found different information in different sources). Thanks for any help.
(Normality) assumptions of logistic multilevel model (glmmTMB)
CC BY-SA 4.0
null
2023-03-04T17:08:46.833
2023-03-04T18:55:09.810
2023-03-04T18:55:09.810
382377
382377
[ "logistic", "multilevel-analysis", "normality-assumption", "assumptions", "glmmtmb" ]
608360
1
null
null
0
39
I need a gut check here... I'm trying to run an A/B test where the baseline metric is a conversion rate that is close to 1 - somewhere in the 80-85% range. And so when I run my sample size calculation - I believe I'm just doing a two-proportion z-test - the required sample size is very low, like a few thousand. This doesn't make much sense to me intuitively. I know that if the baseline rate is higher, the sample size will decrease just looking at the formula... but intuitively, I have a hard time understanding how we can gain significance with just a few thousand samples. We're working on ads, and a normal CTR ad campaign for us requires ~1M ads for sample size (where CTR is closer to 1%). Other details: using power = 80%, alpha = 0.05, and an MDE (absolute) of 5%, which I suppose is quite high and also contributes? The reason I chose an MDE of 5% is that stakeholders have said we're OK observing an increase or decrease in the rate of 5% (absolute, not relative). Any thoughts? Should I be using a different calculation?
A/B Testing Proportions - baseline rate is close to 1
CC BY-SA 4.0
null
2023-03-04T17:29:17.543
2023-03-06T04:05:08.843
null
null
326432
[ "hypothesis-testing", "statistical-significance", "experiment-design" ]
608362
1
null
null
1
15
I am writing a bachelor thesis on the evaluation of value-at-risk using GARCH models. To estimate the parameters for the GARCH models, I explained that we can do it with maximum likelihood as shown in the picture, and that $\theta$ can be the parameters we want to estimate, but my teacher wants to know what the maximum likelihood function is for the parameters of the models (GARCH, EGARCH, APARCH) and not only for the distributions. Can anyone help me understand this or give me some pointers? [](https://i.stack.imgur.com/yO3Ir.png)
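To make the question concrete, here is the only version I have been able to write down myself - a sketch for the simplest case, a Gaussian GARCH(1,1); EGARCH and APARCH would replace the variance recursion with their own. The parameters $\theta = (\omega, \alpha, \beta)$ enter the log-likelihood only through the recursively defined conditional variance:

$$\ell(\theta) = -\frac{1}{2}\sum_{t=1}^{T}\left[\log(2\pi) + \log \sigma_t^2(\theta) + \frac{\varepsilon_t^2}{\sigma_t^2(\theta)}\right], \qquad \sigma_t^2(\theta) = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2(\theta).$$

Is this the kind of "likelihood for the model parameters" that is being asked for?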
Log-likelihood function for GARCHs parameters
CC BY-SA 4.0
null
2023-03-04T17:42:44.090
2023-03-04T18:01:54.423
2023-03-04T18:01:54.423
53690
369107
[ "time-series", "likelihood", "garch" ]
608363
1
null
null
0
33
Problem: I have a set of samples from a continuous distribution (multivariate), call this set $W$. I have another set of samples from a different distribution $X$. I want to sample from $W$ (with replacement, of course), call this $V$, such that the PDF of $V$ is as close as possible to that of $X$ (from a KL divergence perspective). It is possible that some samples from $W$ exist in $X$, but that isn't a guarantee. I don't know any of the PDFs, so I am estimating them with kernel density estimation. Of course, I can sample one point at a time, add it to my set, do a KDE, determine its divergence from the KDE of $X$, and choose the point that gives the least divergence at each step. But that is basically exhaustive search, and I was wondering if there is a niftier method to accomplish this task. Any help is appreciated!
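For concreteness, here is the kind of baseline I would compare the greedy search against: plain importance resampling, where each point of $W$ is weighted by the estimated density ratio $\hat p_X(w)/\hat p_W(w)$ and $W$ is then resampled with those weights. A minimal 1-D sketch in Python - the bandwidth `h`, the toy distributions, and the sample sizes are all arbitrary choices of mine:

```python
import math
import random

def gaussian_kde(data, h):
    """Return a function estimating the density of `data` (Gaussian kernel, bandwidth h)."""
    norm = len(data) * h * math.sqrt(2 * math.pi)
    def pdf(x):
        return sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data) / norm
    return pdf

def importance_resample(W, X, k, h=0.3, seed=0):
    """Draw k points from W (with replacement) so their density approximates that of X."""
    p_X = gaussian_kde(X, h)
    p_W = gaussian_kde(W, h)
    weights = [p_X(w) / max(p_W(w), 1e-300) for w in W]  # guard against division by ~0
    return random.Random(seed).choices(W, weights=weights, k=k)

# Toy check: W is roughly uniform on [0, 4]; X is concentrated near 3.
rng = random.Random(1)
W = [rng.uniform(0, 4) for _ in range(500)]
X = [rng.gauss(3, 0.3) for _ in range(500)]
V = importance_resample(W, X, k=500)
print(sum(V) / len(V))  # the mean of V is pulled toward 3
```

This only uses points already in $W$, so it cannot do better than the support of $W$ allows, but it avoids the point-by-point KDE refits of the greedy approach.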
Sample from one distribution such that its PDF matches another distribution
CC BY-SA 4.0
null
2023-03-04T17:50:16.247
2023-03-04T17:50:16.247
null
null
382378
[ "sampling", "density-estimation", "divergence" ]
608364
1
null
null
0
53
I fitted three GAM models, m1, m2, and m3. I need to choose the best-fitting model among them using R-squared, AIC, and anova outputs. Here is the AIC/BIC criteria output:

```
models <- list(mod1 = m1,
               mod2 = m2,
               mod3 = m3)

map_df(models, glance, .id = "model") %>%
  arrange(AIC)
```

[](https://i.stack.imgur.com/kLlKL.png)

Here is the R-squared criteria output:

```
enframe(c(
  mod1 = summary(m1)$r.sq,
  mod2 = summary(m2)$r.sq,
  mod3 = summary(m3)$r.sq
)) %>%
  arrange(desc(value))

mod1 0.8698969
mod2 0.8678293
mod3 0.8657606
```

Although the difference is not substantial, both the R-squared and AIC/BIC criteria suggest choosing `m1`. However, I am not sure about the `anova` output. Which model is better based on the anova test (and how is it better)? Can I simply say that there is a significant difference among these models based on anova?

```
anova(m1, m2, m3, test = "Chisq")

  Resid. Df Resid. Dev        Df Deviance Pr(>Chi)
1    30.114    -97.190
2    29.743    -97.197  0.371471  0.00745   0.6164
3    29.798    -97.692 -0.055745  0.49477
```
Best fit GAM model selection using different criteria
CC BY-SA 4.0
null
2023-03-04T18:08:25.373
2023-03-04T18:08:25.373
null
null
346283
[ "regression", "hypothesis-testing", "inference", "generalized-additive-model" ]
608365
1
null
null
0
34
In the description of GPR in sklearn, it says:

```
normalize_y : bool, default=False
Whether the target values y are normalized, the mean and variance of the target values are set equal to 0 and 1 respectively. This is recommended for cases where zero-mean, unit-variance priors are used. Note that, in this implementation, the normalisation is reversed before the GP predictions are reported.
```

As a non-native speaker, I am not so sure about the meaning of this. I think it means that if your prior on y is zero-mean and unit-variance, you need to set it to `True`. Specifically, my case is that I am very sure my data are in [0, 1], while I am not sure whether they follow a Gaussian distribution. Do I need to set this option to `True`?
how to set normalize_y properly in sklearn GPR
CC BY-SA 4.0
null
2023-03-04T18:16:27.260
2023-03-04T18:16:27.260
null
null
281112
[ "machine-learning", "scikit-learn", "gaussian-process" ]
608366
2
null
608190
0
null
I believe I was correct in assuming that I can marginalize out the noise from the joint density of $Q_t$ and $\hat{Q}_t$: $$p\left(Q_t \middle| Y_t, \theta\right) = \int_{\mathbb{R}} p\left(Q_t, \hat{Q}_t\left(Y_t,\theta\right) = X \middle| Y_t,\theta\right)dX.$$ While it wasn't explicitly stated previously, the factorization $$p\left(Q_t, \hat{Q}_t\left(Y_t,\theta\right) \middle| Y_t,\theta\right) = p\left(Q_t \middle| \hat{Q}_t\left(Y_t, \theta\right), Y_t , \theta\right) \times p\left(\hat{Q}_t\left(Y_t,\theta\right) \middle| Y_t, \theta\right)$$ follows from the product rule of conditional probability. It should be noted that these integrals sometimes have closed-form solutions, for example when $p\left(Q_t \middle| \hat{Q}_t\left(Y_t, \theta\right), Y_t , \theta\right)$ is also Gaussian, which is the case encountered many times when a Gaussian prior is assumed in Bayesian statistics. In my specific example the conditional density is log-Gaussian, so numerical integration techniques such as Gaussian quadrature must be used.
null
CC BY-SA 4.0
null
2023-03-04T18:26:16.253
2023-03-04T18:26:16.253
null
null
98420
null
608367
2
null
459133
1
null
E[Y|X] is the regression line resulting from regressing Y on X. E[X|Y] is the regression line resulting from regressing X on Y. Just find the conditional pdf of X|Y and then take the expectation.
null
CC BY-SA 4.0
null
2023-03-04T18:33:53.557
2023-03-04T18:33:53.557
null
null
382381
null
608368
2
null
608360
2
null
Every time I have a concern about a power calculation, I simulate it. The function in R to compute the requisite sample size is `power.prop.test`.

``` r
p1 = 0.85
p2 = 1.05*0.85

N = ceiling(power.prop.test(p1=p1, p2=p2, power=0.8)$n)
N
#> [1] 974
```

Created on 2023-03-04 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1)

Ok, that seems low. Let's simulate it.

``` r
sims = replicate(1000, {
  x = rbinom(2, N, c(p2, p1))
  test = prop.test(x, rep(N, 2))
  (test$p.value < 0.05)
})
mean(sims)
#> [1] 0.778  # depending on your random seed
```

So it looks like this sample size is fine.
null
CC BY-SA 4.0
null
2023-03-04T18:44:14.593
2023-03-06T04:05:08.843
2023-03-06T04:05:08.843
111259
111259
null
608370
1
608390
null
8
141
Consider a kernel density estimate of a continuous, non-negative random variable defined over the unit circle with no discontinuity between 360 and 0 degrees. Unlike in the most common KDE implementations that use a Gaussian distribution (such as Seaborn's [kdeplot](https://seaborn.pydata.org/generated/seaborn.kdeplot.html#seaborn.kdeplot)), apparently distributions of a polar nature should use the [von Mises distribution](https://en.wikipedia.org/wiki/Von_Mises_distribution) for their kernel. If such a KDE is shown on a polar plot, I think there will be a kind of visual distortion introduced. The KDE's area-under-curve is significant because the integral of a KDE should be 1. In the polar plot, looking at two sectors where - the sector angle spans are the same, but - the first sector radius is double the second sector radius, the first sector will not have double the area; it will have more than double. The effect will be a visual bias where higher densities are over-emphasized when compared with lower densities, especially if the graph is drawn to be filled under the curve. I imagine that a way to correct for this would be a non-linear radius dimension where lower values are more spaced-out than higher-values. I have searched and cannot find example images where this has been done. My questions are: - Is this kind of visual bias commonly corrected-for when showing rendered polar plots? - I believe the expression that defines the radial corrective transformation is simply $r_i = \sqrt{i}$ . Does this seem correct? - Would this corrective transformation be valid in the context of a von Mises KDE?
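To illustrate point 2 with numbers (my own sanity check, not taken from any plotting library): the area of a thin sector of angle $d\theta$ at radius $r$ is $r^2\,d\theta/2$, so a linear radius makes the plotted area scale with the square of the density, while $r=\sqrt{f}$ makes it scale linearly with $f$:

```python
import math

def sector_area(radius, dtheta):
    """Area of a thin circular sector: (1/2) * r^2 * dtheta."""
    return 0.5 * radius ** 2 * dtheta

dtheta = 0.01
f1, f2 = 1.0, 2.0  # two density values, the second double the first

# Linear radius: doubling the density quadruples the plotted area.
ratio_linear = sector_area(f2, dtheta) / sector_area(f1, dtheta)

# Square-root radius: doubling the density doubles the plotted area.
ratio_sqrt = sector_area(math.sqrt(f2), dtheta) / sector_area(math.sqrt(f1), dtheta)

print(ratio_linear, ratio_sqrt)  # 4.0 and 2.0 (up to float rounding)
```

This is consistent with the $r_i = \sqrt{i}$ transformation proposed in the question, though it says nothing yet about whether it is conventional for von Mises KDEs.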
Radial axis transformation in polar kernel density estimate
CC BY-SA 4.0
null
2023-03-04T19:37:09.383
2023-03-05T00:12:24.030
2023-03-04T22:03:49.037
76901
76901
[ "data-visualization", "random-variable", "kernel-smoothing", "circular-statistics" ]
608372
1
608375
null
1
48
I have constructed a simple neural network model for a classification problem with 10 target classes, where an input (with some number of features) is to be classified into only one of the 10 classes. The input data that I am provided with is just features as numbers (and their target classes) - there is no context behind it, so nothing like "vitals of cancer patients" where certain incorrect classifications would need to be punished more than others. The final layer activation function is the softmax function, and the loss function used is categorical cross entropy. Also, the number of occurrences of each class is very similar across classes. It is very much a balanced dataset. The evaluation metrics I currently consider are:

- The loss function itself: categorical cross entropy.
- Accuracy score (because of course)
- Confusion matrix

So my question is: which evaluation metric is the most appropriate to use given no context whatsoever - just raw numbers and balanced classes? To add on to this question - say we were adding context behind the data provided, which types of context would warrant using each of these metrics? What would be the common use cases and drawbacks of these metrics? Are there alternate metrics that are advisable to look at for this class of problems (classification problems with multiple classes)?
Very balanced dataset and a multiclass classification problem, no context behind the inputs. Which evaluation metric to use?
CC BY-SA 4.0
null
2023-03-04T21:09:14.027
2023-03-04T21:27:41.500
null
null
382386
[ "neural-networks", "model-evaluation", "multi-class", "cross-entropy" ]
608373
2
null
608231
1
null
You probably want to treat your variable Age as an ordinal variable (ordered category). Since Employed is dichotomous, it could be treated as either nominal (a nominal category) or ordinal. Since you aren't interested in treating one variable as a dependent variable per se, you are looking for a test of association and the degree of association. A common way to do this is with Kendall correlation (particularly Kendall's tau-c, since the number of categories in each variable is different). Spearman's correlation will also work reasonably well, as will Kendall's tau-b. Another approach is the Cochran-Armitage test or the linear-by-linear test, which I suspect is the same as what you are calling the chi-square test of trend.
null
CC BY-SA 4.0
null
2023-03-04T21:13:22.483
2023-03-04T21:13:22.483
null
null
166526
null
608374
1
null
null
2
47
I'm thinking about a statistical problem and I would be grateful if you could give me some hints on how I could address it: I have data on n insurance companies, and for each insurance company I have:

- The number of buildings per postal code (here let's assume the vulnerability of each building is the same and that there are also no differences between the insurance companies). Each insurance company has buildings in more than one postal code (but not necessarily in each postal code).
- The annual loss for each insurance company over the last 10 years (only the total loss, unfortunately not the loss by postal code), normalized by the total number of buildings of that company.

From this data, I would like to estimate which postal codes are most prone to storm damage (the postal codes are not independent of each other; e.g., if storm losses are high in one postal code, then it's likely they are also increased in the neighbouring postal codes). I would assume that this is possible if n is sufficiently large. In this example, the number of postal codes is larger than the number of insurance companies. I don't yet have a clear idea of how to do it. I could imagine that it makes sense to reduce the number of postal codes; e.g., in the most extreme case I could reduce the number of regions to two large regions (e.g. a north region and a south region), and then it would be quite simple to estimate which of the two regions is more prone to storm loss. Is there an algorithm to do this in a more systematic way? E.g., an algorithm that automatically groups regions together?

Kind regards
Elsa

Edit: To make the problem a bit clearer: assume I have a 4 x 4 spatial grid and the values are a measure of the storminess (higher: increased storminess). In this example the storminess decreases from left to right. These are the values I don't know and would like to estimate/reconstruct from the data I have.
|1|2|3|4|
|-|-|-|-|
|0.6|0.4|0.3|0.0|
|0.7|0.5|0.3|0.1|
|0.7|0.4|0.3|0.1|
|0.9|0.7|0.4|0.2|

What I have is the distribution of buildings of various insurance companies. Here is the example for one insurance company:

|1|2|3|4|
|-|-|-|-|
|0.1|0.1|0.0|0.0|
|0.1|0.1|0.0|0.0|
|0.4|0.2|0.0|0.0|
|0.0|0.0|0.0|0.0|

This insurance company has most buildings in the left grid points. Further, I have the loss ($L$) of the company, which is calculated as $L = \sum S_i B_i$, where $S_i$ is the storminess at location $i$ and $B_i$ is the fraction of buildings at location $i$; i.e., for this example: 0.6x0.1 + 0.4x0.1 + 0.7x0.1 + 0.5x0.1 + 0.7x0.4 + 0.4x0.2 = 0.58. I have $L$ and $B$ for different insurance companies. How can I reconstruct $S$ (and at which resolution, 2x2 or 3x3?)
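To sketch why this should work when there are enough companies: each company's loss $L = \sum S_i B_i$ is one linear equation in the unknown vector $S$, so with at least as many companies as grid cells, $S$ can be recovered by least squares (or on a coarsened grid when companies are few). A plain-Python toy - the flattened 2x2 grid, the 10 hypothetical companies, and the noise-free losses are all made-up illustrative assumptions:

```python
import random

def solve_least_squares(B, L):
    """Solve min ||B s - L||^2 via the normal equations (B^T B) s = B^T L,
    using Gaussian elimination with partial pivoting."""
    p = len(B[0])
    A = [[sum(row[i] * row[j] for row in B) for j in range(p)] for i in range(p)]
    b = [sum(row[i] * l for row, l in zip(B, L)) for i in range(p)]
    for col in range(p):                      # forward elimination
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            m = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    s = [0.0] * p                             # back substitution
    for r in range(p - 1, -1, -1):
        s[r] = (b[r] - sum(A[r][c] * s[c] for c in range(r + 1, p))) / A[r][r]
    return s

rng = random.Random(0)
S_true = [0.6, 0.4, 0.7, 0.1]       # "storminess" on a flattened 2x2 grid (made up)
B = []                              # building fractions, one row per company
for _ in range(10):                 # 10 hypothetical insurance companies
    w = [rng.random() for _ in S_true]
    total = sum(w)
    B.append([x / total for x in w])
L = [sum(s * b for s, b in zip(S_true, row)) for row in B]  # noise-free losses
S_hat = solve_least_squares(B, L)
print([round(s, 3) for s in S_hat])  # should recover S_true up to float error
```

With noisy real losses, this becomes ordinary regression of $L$ on the $B$ columns; spatial smoothing across neighbouring cells would address the dependence mentioned in the question.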
Spatial analysis: How to find region most prone to storm loss when only have loss data for whole region?
CC BY-SA 4.0
null
2023-03-04T21:16:18.140
2023-03-05T01:50:21.933
2023-03-05T01:50:21.933
382388
382388
[ "regression", "correlation", "geostatistics" ]
608375
2
null
608372
0
null
The trouble with accuracy is that your model does not predict discrete classes. The neural network outputs values on a continuum that have a (granted, weak) interpretation as the probabilities of class membership. (I say it is weak because [neural networks tend to lack probability calibration](https://stats.stackexchange.com/questions/532813/how-much-of-neural-network-overconfidence-in-predictions-can-be-attributed-to-mo). That is, when they predict a probability of $p$, the event does not happen with probability $p$.) Consequently, your model has $0\%$ accuracy. Every prediction is at least a little bit incorrect. In order to get a positive accuracy score or a confusion matrix, you need to convert those continuous predictions into discrete categories, and this requires you to know the consequences of making incorrect decisions. I would argue that, if you do not know the consequences of the discrete decisions, you have no business making those decisions. All you should be doing is making accurate probability predictions. While you might be able to get an accurate model, even a high accuracy score could mislead someone into thinking your model does not make crucial mistakes. Mixing up two classes might be particularly disastrous, so much so that a user is willing to make sacrifices elsewhere (even leading to a less accurate model overall) in order to minimize how often such a mistake is made. The standard way to assess the probability predictions is through the crossentropy loss (“log loss” in some circles, much of Cross Validated being one such circle), which you can normalize using McFadden’s $R^2$. Brier score, which you can normalize using Efron’s $R^2$, could be a useful measure, too. 
Related Links [Why is accuracy not the best measure for assessing classification models?](https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models) [Academic reference on the drawbacks of accuracy, F1 score, sensitivity and/or specificity](https://stats.stackexchange.com/questions/603663/academic-reference-on-the-drawbacks-of-accuracy-f1-score-sensitivity-and-or-sp) [Damage Caused by Classification Accuracy and Other Discontinuous Improper Accuracy Scoring Rules](https://www.fharrell.com/post/class-damage/) [Classification vs. Prediction](https://www.fharrell.com/post/classification/)
null
CC BY-SA 4.0
null
2023-03-04T21:21:17.200
2023-03-04T21:27:41.500
2023-03-04T21:27:41.500
247274
247274
null
608376
1
null
null
1
45
I have data where each observation is a region within a country, and each observation falls into one of two large groups (each group makes up half the dataset). Each group corresponds to a seven-year funding period for a fiscal transfer program, and my dependent variable is the percentage of total allocations that a region was able to spend within a seven-year timeframe. For the first group, I have data on all seven years of the funding program, but for the second, I have data on just the first five years. Thus, for the observations in the first group, my dependent variable is fund absorption after seven years; for the second group, I can only get fund absorption after five years as a DV. As expected, the dependent variables for the second funding period are all much lower than for the first. Other than the fact that the DVs are measured differently and are calculated after different years, they are fairly comparable. Would I be able to run a single OLS regression with funding-period fixed effects, or should I run two separate regressions?
OLS regression with fixed effects when dependent variables are measured differently by group
CC BY-SA 4.0
null
2023-03-04T21:28:02.050
2023-03-04T21:34:07.903
2023-03-04T21:34:07.903
382389
382389
[ "regression", "least-squares", "fixed-effects-model" ]
608377
1
null
null
1
15
Consider an experimental setting where we observe outcomes, binary treatment, and some covariates $(Y,D,X)$ in Rubin's Potential Outcomes Framework. If the treatment assignment mechanism is completely random (e.g. flip a coin for each individual), then we may assume unconditional unconfoundedness $$ Y(1), Y(0) \perp D $$ An alternative way of randomization that is sometimes done to improve covariate balance, especially in small samples, is to match individuals based on some characteristics $X$, and then randomly assign each group (e.g. a pair) to treatment/control. In this case we may assume conditional unconfoundedness $$ Y(1), Y(0) \perp D | X $$ --- The situation I am considering is where the treatment $D$ is deterministically assigned to some individuals but random for others. As a simple example, let's suppose we are investigating the effect of a drug $D$ on health outcomes $Y$. We run an experiment on a group of individuals and observe covariates $X \in \{male, female\}$, but then only randomly assign females to treatment/control. The assignment mechanism is then $$ \mathbb{P}(D=1|X) = \begin{cases} 0.5 & \text{if } X = \text{female} \\ 0 & \text{if } X = \text{male} \end{cases} $$ so that $D$ is deterministic for males but random for females. The ATE for the whole population is unidentified here, but the ATT should be identified. However, are we able to assume conditional unconfoundedness?
Unconfoundedness with Semi-Deterministic Assignment
CC BY-SA 4.0
null
2023-03-04T21:49:06.607
2023-03-04T21:49:06.607
null
null
269723
[ "causality" ]
608378
1
608384
null
3
117
I have a plot of ROC curves for about 5 models. The curves are overlapping, as shown in the attached figure. [](https://i.stack.imgur.com/QD7Io.png) Is there a way to still call out the differences between these models in a research paper using a ROC curve, or do I present the AUC values in a metrics table? Note: when you break the axis and, for example, use a log scale, it is still overlapping because the data values are of the form below (where M1 = model 1 and M2 = model 2): ``` +--------+---------+------------+---------+ | FPR_M1 | TPR_M1 | FPR_M2 | TPR_M2 | +--------+---------+------------+---------+ | 0 | 0 | 0 | 0 | | 0 | 0.99452 | 0 | 0.93296 | | 0 | 0.99563 | 0 | 0.97548 | | 0 | 0.99728 | 0 | 0.98833 | | 0 | 0.99863 | 0 | 0.99995 | | 0 | 1 | 0 | 1 | | 1 | 1 | 3.70233E-5 | 1 | | | | 6.17055E-5 | 1 | | | | 8.63878E-5 | 1 | | | | 1.60434E-4 | 1 | | | | 2.34481E-4 | 1 | | | | 3.3321E-4 | 1 | | | | 4.07257E-4 | 1 | | | | 5.18327E-4 | 1 | | | | 7.15784E-4 | 1 | | | | 8.63878E-4 | 1 | | | | 0.00127 | 1 | | | | 0.00202 | 1 | | | | 0.00327 | 1 | | | | 0.00585 | 1 | | | | 0.01319 | 1 | | | | 0.05294 | 1 | | | | 1 | 1 | +--------+---------+------------+---------+ ```
Showing the difference between two models with similar AUC-ROC curves
CC BY-SA 4.0
null
2023-03-04T22:25:56.700
2023-03-04T23:11:52.927
null
null
142821
[ "data-visualization", "roc", "auc" ]
608379
1
null
null
1
47
I'm new to neural networks, and in almost everything I'm reading, the activation function recommended on the output layer follows a specific pattern: - If the network does binary classification (1 output node), use sigmoid - If the network does multiclass classification (>1 output nodes), use softmax - If the network does regression, don't use an activation function (linear) Which I completely understand - for example, binary classification is a probability and is never going above 1 or below 0, so of course it makes sense to use sigmoid. My question is though, when I'm doing regression, can't I just use the activation function that best fits my range of output values instead of using linear? For example, let's say I'm trying to predict the price of a stock - wouldn't a ReLU activation function make a lot more sense to use on the output layer over a linear activation function, since the price can never be negative? Or for another example, let's say I normalized my output values between 1 and -1 - Wouldn't I want to use a TanH activation on the output?
Neural Networks - Can I Use Any Activation for the Output Layer?
CC BY-SA 4.0
null
2023-03-04T22:26:19.717
2023-03-14T00:09:41.527
null
null
379408
[ "machine-learning", "neural-networks", "normalization", "supervised-learning", "artificial-intelligence" ]
608380
1
null
null
0
12
I am using `glmmTMB` in R to identify which weather variables (n=5) most influence annual bird counts (n=5 responses) from different monitoring sites. So far, I have performed (1) all-subsets model selection (n=32 models, main effects only) for each response variable based on AICc/Akaike weights, and (2) full-model averaging to obtain parameter estimates using `MuMIn::model.avg()`. --- My questions are: - Must I cross-validate the top-ranked GLMM if I am using model-averaging to obtain parameter estimates? It seems counter-intuitive to use model-averaging to obtain parameter estimates but then validate only the top-ranked model. I am also apprehensive to CV the top-ranked model due to a high amount of model uncertainty (no top-ranked model has Akaike weight > 0.26). - If necessary, how do I perform CV on the top-ranked model? I have found several code examples in this and other forums that specify a loop function to iteratively predict response values for each row/observation in the input data, but I cannot get any of them to work with my data/glmmTMB models (example 1, example 2) - Is it necessary to back-transform the model-averaged parameter estimates from my ZINB models if none are significant (p<0.05)? I assume that reporting back-transformed effect sizes from ZI models is only necessary when significant effects are identified, but maybe that is incorrect. - Is the performance::r2() function a reliable method to calculate marginal R2 for glmmTMB models? --- Thank you for any clarification!
Stuck on interpretation/validation of GLMM results
CC BY-SA 4.0
null
2023-03-04T22:44:24.797
2023-03-04T22:44:24.797
null
null
286723
[ "r", "cross-validation", "glmm", "glmmtmb", "model-averaging" ]
608381
1
null
null
1
18
This is the first time I'm asking a question here. I'm working on a field experiment that has been running for three years. The experiment was established before I started my studies, so I was not part of designing it. The initial plan was a split-plot design, but due to constraints they could not do that efficiently. The response (yield) is measured multiple times in a season, so I understand this could be analyzed as repeated measures. I've made a simplified picture below. The yellow blocks represent full irrigation and the blue blocks, deficit irrigation. The smaller plots with numbers are species of plants (treatments). The treatments are replicated twice in each block. We harvest the plants four times a season. My question is, can I analyze my data as a split-plot design? Will it be better to include harvest as a factor or as a repeated measure? Or should I analyze each harvest separately? Thanks for your answers. [](https://i.stack.imgur.com/gJBN4.png)
Is my design a split plot and what analysis will be appropriate?
CC BY-SA 4.0
null
2023-03-04T22:51:06.697
2023-03-04T22:51:06.697
null
null
381155
[ "mixed-model", "anova" ]
608382
2
null
567092
0
null
See this paper: [https://arxiv.org/abs/1706.08498](https://arxiv.org/abs/1706.08498) for a solution, which introduces a covering bound on matrix-matrix products together with an inductive technique to combine covers of layers.
null
CC BY-SA 4.0
null
2023-03-04T22:51:24.873
2023-03-04T22:51:24.873
null
null
382391
null
608383
1
null
null
0
29
I believe I managed to simulate critical values for a single sample with the following code in R.

```
n<-10 # number of trials
k<-1e5 # number of experiments
U<-matrix(runif(n*k),nrow=k) # generate experiments
j<-t(apply(U,1,rank)) # compute order vectors
D<-sqrt(n)*apply(pmax(abs((j-1)/n-U),abs(j/n-U)),1,max) # compute D_n
hist(D,freq=FALSE,breaks=100) # distribution of D_n
sprintf("0.95 quantile for simulated KS = %f", quantile(D,probs=.95))
```

However, I am having difficulty using the same method to simulate critical values for a two-sample KS test, and wonder if a good soul can spot my mistake.

```
n<-8
m<-2
k<-1e4
X<-matrix(runif(n*k),nrow=k)
Y<-matrix(runif(m*k),nrow=k)

supDiff<-function(A,B){
  Fn<-t(apply(A,1,rank))/ncol(A)
  Gn<-t(sapply(1:nrow(A),function(r) sapply(1:ncol(A),function(c) sum(B[r,]<=A[r,c]))))/ncol(B)
  D_right<-Fn-Gn
  D_left<-D_right-1/ncol(A)
  return(apply(pmax(abs(D_right),abs(D_left)),1,max))
}

D<-supDiff(X,Y)
D<-pmax(D,supDiff(Y,X))
hist(D,freq=FALSE,breaks=20)

c_alpha<-quantile(D,probs=.95)
den<-n*m
num<-c_alpha*n*m
sprintf("0.95 quantile without scaling = %.0f/%.0f", num,den)
sprintf("0.95 quantile with scaling = %f", c_alpha*sqrt(n*m/(n+m)))
```

The approach I have been taking relies on supDiff(A,B), which for each row computes the ECDF of A and compares it with the ECDF of B at the points of A; then the exercise is repeated with supDiff(B,A), and the maximum of both differences is taken... ... but obviously I am missing something, because all my critical values are always slightly below what they should be.
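For cross-checking my R numbers, here is an independent plain-Python reference for the same simulation (my own implementation, not a fix of the R code above). It sidesteps the left/right-limit bookkeeping by evaluating both ECDFs at every point of the pooled sample, which is sufficient for the two-sample statistic because both ECDFs only jump at those points:

```python
import random

def ks_two_sample(x, y):
    """Two-sample KS statistic: max over the pooled sample of |F_n - G_m|."""
    n, m = len(x), len(y)
    d = 0.0
    for t in sorted(set(x) | set(y)):
        fn = sum(v <= t for v in x) / n
        gm = sum(v <= t for v in y) / m
        d = max(d, abs(fn - gm))
    return d

# Simulated null distribution of D for n = 8, m = 2 under a common Uniform(0,1).
rng = random.Random(0)
n, m, k = 8, 2, 5000
D = sorted(
    ks_two_sample([rng.random() for _ in range(n)],
                  [rng.random() for _ in range(m)])
    for _ in range(k)
)
print(D[int(0.95 * k)])  # simulated 0.95 critical value (unscaled)
```

This brute-force version is slow for large samples, but it makes a trustworthy yardstick for the vectorized R code.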
How to simulate critical values for a two sample KS test?
CC BY-SA 4.0
null
2023-03-04T22:54:40.543
2023-03-04T22:54:40.543
null
null
354654
[ "goodness-of-fit", "kolmogorov-smirnov-test", "empirical-cumulative-distr-fn" ]
608384
2
null
608378
5
null
I do not think there is a reason to show these AUC-ROC curves. With AUC scores approximating $1$ all curves are going to look the same and convey the same information. Having a small one-row table will be more than enough (probably in the Appendix even). I would suggest using another metric/visualisation to communicate meaningful/any differences between model performance characteristics (if relevant). (And to point at the elephant in the room: AUC-ROC scores so close to $1$ will raise strong suspicions about overfitting the test set. I hope that this is properly addressed in the paper.)
null
CC BY-SA 4.0
null
2023-03-04T23:11:52.927
2023-03-04T23:11:52.927
null
null
11852
null
608385
1
null
null
0
16
This question has been asked elsewhere (e.g., [here](https://stats.stackexchange.com/questions/65781/how-many-observations-do-you-need-within-each-level-of-a-random-factor-to-fit-a)) but has not yet been adequately answered. I have a nested dataset of annual bird counts (n=5 response variables) across 12 years taken from 31 unique monitoring sites (total = 12x31 = 372 observations). I am attempting to use GLMMs to model the effect of certain weather variables on annual counts. I have received contradictory advice from statisticians about how many fixed effects (predictors) can be modelled based on my # observations without overfitting. Some have said that every predictor modelled requires at least 10 observations for each random effect level, meaning I could only possibly include 1 predictor variable per model. Does this rule of thumb stand in this case: that to include 5 predictor variables in a GLMM (say), I would need at least 50 observations per site?
Rule of thumb (10obs:1fixed effect) to avoid overfitting GLMMs
CC BY-SA 4.0
null
2023-03-04T23:26:54.443
2023-03-07T11:09:45.193
2023-03-07T11:09:45.193
219012
286723
[ "r", "mixed-model", "glmm", "overfitting" ]
608386
1
608425
null
1
83
I am a beginner in statistics and R. I want to run a binary logistic regression to understand (model) the factors affecting nest-site selection in a bird species. I have Presence/Absence (0,1) data and 13 predictors. I wanted to use the stepwise method before (using the `stepAIC` function in R), but luckily I found out on this site that it is an invalid method. So I fit the model with all the variables. But now I am confused about interpreting the results, because the p-values of all of the variables are more than 0.10. Unfortunately, I also cannot run a lasso method, because I have some categorical variables and am also using splines for some continuous variables. I tried to use the lasso method with the "`glmnet`" package in R, but I was unsuccessful. What method do you suggest for reporting significant variables?
How to identify significant variables in a binary logistic regression?
CC BY-SA 4.0
null
2023-03-04T23:39:26.993
2023-03-05T22:44:03.137
2023-03-05T22:44:03.137
379762
379762
[ "r", "regression", "logistic", "p-value", "splines" ]
608387
2
null
523573
1
null
Building on the answer by @Stephan and comment by user whuber. First, there are two versions of the [geometric distribution](https://en.wikipedia.org/wiki/Geometric_distribution); for now I use the one with support $0,1,2, \ldots$ which has moment generating function (mgf) given by $$ \DeclareMathOperator{\E}{\mathbb{E}} M_X(t)= \E e^{t X} =\frac{p}{1-(1-p)e^t} $$ which is valid for $ t < -\ln(1-p)$. That gives an expression for the expectation of $e^X$ by setting $t=1$: $$ \E e^X = M_X(1)=\frac{p}{1-(1-p)e^1} $$ but only for values of $p$ satisfying the restriction $1 < -\ln(1-p)$, that is $p > 1- e^{-1}$; when the probability is too small, the waiting time for the first success becomes too long and the expectation of its exponential becomes infinite. This is easy to see by calculating the expectation directly: $$ \E e^X =\sum_0^\infty e^k \cdot (1-p)^k p = p\sum_0^\infty [ e(1-p) ]^k $$ and when the bracketed expression becomes 1 or larger the sum is infinite.
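A quick numeric check of this formula (a hedged sketch in R; `rgeom` uses the same support convention $0,1,2,\ldots$, counting failures before the first success):

```
# Hedged sketch: Monte Carlo check of E[e^X] = p / (1 - (1-p) e)
# for the geometric distribution on 0, 1, 2, ...
set.seed(42)
p <- 0.95                      # must satisfy p > 1 - exp(-1) ~ 0.632
x <- rgeom(1e6, prob = p)
mc  <- mean(exp(x))            # Monte Carlo estimate
ana <- p / (1 - (1 - p) * exp(1))
c(monte_carlo = mc, analytic = ana)   # the two should agree closely
```

Note that $p$ here is deliberately chosen so that $(1-p)e^2 < 1$ as well, which keeps the variance of $e^X$ finite and the Monte Carlo estimate well behaved.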
null
CC BY-SA 4.0
null
2023-03-04T23:47:44.023
2023-03-05T02:42:49.887
2023-03-05T02:42:49.887
362671
11887
null
608388
2
null
608312
1
null
The fitting performed by the base learners during the training of a GBM indeed acts as an implicit variable selection; additionally, the component-wise nature of fitting a GBM does some variable selection too, but that is not the primary reason why variable selection happens. Let's clarify things a bit further: A GBM performs "component-wise" learning in the sense that each new individual base learner tries to counter-balance the mistakes of the previous iterations. This is very obvious in the case of [GAM](https://en.wikipedia.org/wiki/Generalized_additive_model)s where via the [back-fitting algorithm](https://en.wikipedia.org/wiki/Backfitting_algorithm) we have component-wise smoothing splines for one selected feature $x_j$ at a time, but it extends naturally to GBMs too. That said, GAMs do not perform any variable selection; they have some regularisation properties associated with the cost of the component-wise smoother but do not actually perform variable selection explicitly. A GAM might have a very flat smooth associated with a "useless" feature but that's about it. Some extensions of GAMs do perform variable selection (e.g. see Marra & Wood (2011) [Practical variable selection for generalized additive models](https://www.sciencedirect.com/science/article/abs/pii/S0167947311000491)) but that's an additional step in the fitting procedure. Returning now to the case of a GBM: the base learners are trees, so variable selection is performed in the sense that features that do not contribute to loss function reduction are not selected by the base learners themselves. Each individual tree performs a weak form of variable selection. In addition, the GBM itself regularises the contribution of each individual base learner based on the shrinkage/learning rate $\alpha$, further stopping a base learner's potential overfitting from harming the GBM's overall performance. 
Now, given we usually have dozens, if not hundreds, of base learners in our GBM, the tree ensemble as a whole indeed performs variable selection via regularisation in two complementary ways: 1. within a base learner and 2. across base learners when combining them. That said, it is not the component-wise training that primarily drives this but rather the fitting itself. (For example, random forests perform a similar "variable selection" procedure, [BART](https://arxiv.org/abs/0806.3286)s even more.)
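To illustrate (a hedged sketch assuming the CRAN `gbm` package; the data and settings here are made up for demonstration): features that never reduce the loss receive (near-)zero relative influence, i.e. they are implicitly deselected by the tree base learners.

```
# Hedged illustration: a pure-noise feature should get (near-)zero
# relative influence in the fitted GBM's importance summary.
library(gbm)
set.seed(7)
n <- 500
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n), noise = rnorm(n))
d$y <- 2 * d$x1 - d$x2 + rnorm(n, sd = 0.1)
fit <- gbm(y ~ ., data = d, distribution = "gaussian",
           n.trees = 200, interaction.depth = 2, shrinkage = 0.1)
summary(fit, plotit = FALSE)   # 'noise' should show near-zero rel.inf
```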
null
CC BY-SA 4.0
null
2023-03-05T00:07:07.627
2023-03-05T00:07:07.627
null
null
11852
null
608390
2
null
608370
7
null
Consider any density $f$ for the circular parameter $\theta.$ The relevant integrals are of the form $$\Pr(\mathcal A) = \int_\mathcal{A}f(\theta)\,\mathrm d\theta$$ where $\mathcal A\subset[0,2\pi)$ is any circular event. Ordinarily we would plot them in Cartesian coordinates, as in this example: [](https://i.stack.imgur.com/grwPO.png) Now, if you wish to represent these integrals as circular areas, perhaps you are thinking of plotting the graph of some related functions $g$ and $h$ in polar coordinates, given by the region $$\{(\theta, r)\mid g(\theta)\le r \le h(\theta);\ 0\le \theta\lt 2\pi\}.$$ The area on the plot itself therefore is $$\int_\mathcal{A}\int_{g(\theta)}^{h(\theta)} r\,\mathrm dr\,\mathrm d\theta = \int_{\mathcal A}\frac{h(\theta)^2 - g(\theta)^2}{2}\,\mathrm d\theta.$$ Consequently, if you pick any nonnegative functions for which $h(\theta)^2 - g(\theta)^2 = f(\theta)$ the right side works out to the desired probability. Two natural choices are $$(g(\theta), h(\theta)) = (0, \sqrt{2 f(\theta)}),$$ the "filled" version [](https://i.stack.imgur.com/6TE6W.png) and $$(g(\theta), h(\theta)) = (\sqrt{f(\theta)}/\lambda, \lambda\sqrt{f(\theta)})$$ where $\lambda = \sqrt{1 + \sqrt{2}},$ the "symmetric" version. [](https://i.stack.imgur.com/2iGQI.png) Other choices are possible. For instance, you could enclose everything within a disk provided $f$ is bounded.
null
CC BY-SA 4.0
null
2023-03-05T00:12:24.030
2023-03-05T00:12:24.030
null
null
919
null
608391
1
608413
null
4
110
I am using a retrospective data set that looks at cancer patients' progression-free survival, and trying to determine if there is a difference between the treatment group and the control group. The grade of cancer, though, is a confounding variable: patients with high-grade disease received treatment 80% of the time, and patients with low-grade disease received treatment 20% of the time. I am trying to compare progression-free survival rates using a Cox proportional hazards model between the treatment group and the control group, and control for confounding by disease grade (low vs. high). I am currently using inverse probability weighting to create ATE weights, and using these weights in the Cox proportional hazards model (using R). I was wondering if it would be just as effective to create a CPH model with two covariates, treatment group and disease grade, which I believe should also help control for confounding by disease grade. Will both of these methods accomplish the same thing, and is one method more accepted than the other? I have posted some example code below.
``` library('survival')

# Set the seed to ensure reproducibility
set.seed(1234)

# Create a vector of time ranging from 1 month to 5 years
time <- sort(c(runif(100, 1, 12), runif(100, 12, 60)))

# Create a vector of treatment group
tx <- rbinom(length(time), 1, 0.5)

# Create a vector of grade group
grade <- factor(ifelse(tx == 1,
                       sample(c(rep("high", 80), rep("low", 20)), length(time), replace = TRUE),
                       sample(c("high", "low"), length(time), replace = TRUE)))

# Create a vector of binary event representing disease progression
event <- rep(0, length(time))
event[grade == "high"] <- rbinom(sum(grade == "high"), 1, 0.8)
event[grade == "low"]  <- rbinom(sum(grade == "low"), 1, 0.2)

# Create the dataset by combining the vectors
dat <- data.frame(time, event, tx, grade)

# Generate propensity score weights
mod.glm <- glm(tx ~ grade, data = dat, family = binomial)
dat$ps <- predict(mod.glm, type = "response")
dat$weight <- ifelse(dat$tx == 1, 1/dat$ps, 1/(1 - dat$ps))

# Unadjusted CPH model
mod.cph <- coxph(Surv(time, event) ~ tx, data = dat)

# CPH model using IPW
mod.cph.ipw <- coxph(Surv(time, event) ~ tx, data = dat, weights = weight)

# Multivariate CPH model
mod.cph.multi <- coxph(Surv(time, event) ~ tx + grade, data = dat)
```
Inverse probability weighting for cox proportional hazards model to control for group mismatch vs multivariate cox model with confounder as covariate
CC BY-SA 4.0
null
2023-03-05T00:13:39.193
2023-03-05T07:41:26.150
null
null
382393
[ "survival", "cox-model", "propensity-scores" ]
608392
1
null
null
1
20
I am receiving this error message when running a mediation analysis in R using the mediation package: "Error in model.frame.default(Terms, newdata, na.action = na.action, xlev = object$xlevels) : factor Race has new levels 4, 5" I have omitted any NA's in my data, and this is only happening for the race and marital status categories that I have included as covariates. This does not happen when bootstrap is set to FALSE. I have double-checked that the levels for these categories are set and am unsure how to fix this error. Thank you in advance. These are my models:
m_model <- lm(M ~ X + c1 + c2 + c3 + c4 + c5 + c6 + c7 + c8 + c9 + c10, data = thesis)
y_model <- lm(Y ~ M + X + c1 + c2 + c3 + c4 + c5 + c6 + c7 + c8 + c9 + c10, data = thesis)
med.result <- mediate(m_model, y_model, treat = 'X', mediator = 'M', boot = TRUE, sims = 1000, covariates = c('c1', 'c2', 'c3', 'c4', 'c5', 'c6', 'c7', 'c8', 'c9', 'c10'))
When running mediation analysis with parametric bootstrap in R, receiving error on factor variables but not without bootstrap
CC BY-SA 4.0
null
2023-03-05T00:38:59.490
2023-03-05T00:38:59.490
null
null
382395
[ "r", "regression", "interaction", "mediation" ]