Id
stringlengths
1
6
PostTypeId
stringclasses
7 values
AcceptedAnswerId
stringlengths
1
6
ParentId
stringlengths
1
6
Score
stringlengths
1
4
ViewCount
stringlengths
1
7
Body
stringlengths
0
38.7k
Title
stringlengths
15
150
ContentLicense
stringclasses
3 values
FavoriteCount
stringclasses
3 values
CreationDate
stringlengths
23
23
LastActivityDate
stringlengths
23
23
LastEditDate
stringlengths
23
23
LastEditorUserId
stringlengths
1
6
OwnerUserId
stringlengths
1
6
Tags
list
611359
2
null
611349
2
null
If you check the tag [feature-scaling](/questions/tagged/feature-scaling), you'll learn about many benefits of scaling, though [not all algorithms](https://stats.stackexchange.com/questions/401079/which-machine-learning-algorithms-get-affected-by-feature-scaling) need it. To answer whether there are any downsides, consider what scaling is. Both standardization and normalization amount to subtracting something and dividing by something. Let's discuss those operations. To calculate a feature such as "number of years since the year 1995" you would subtract 1995 from the current date. You could alternatively create a different feature, "number of years since the year 1997", and the two would differ only in what you subtracted. If your algorithm broke depending on whether your baseline was 1995 or 1997, there would be something very wrong with it. The same applies to division. If your algorithm behaved differently when your variables were in meters vs kilometers, or minutes vs seconds, it wouldn't be something you could use to solve generic problems. The downside of scaling can be worse interpretability (though in some cases it is the other way around), but in general we don't want algorithms to be sensitive to something like the scale of the features. Finally, keep in mind that there are models that accept only certain kinds of features (e.g. only binary features in vanilla [LCA](https://en.wikipedia.org/wiki/Latent_class_model)), where obviously you couldn't use scaled features. That said, you should not mindlessly apply any feature transformation "as a default", even if it is harmless. If you did, sooner or later you would regret it, because it would add unnecessary complication to the code, accidentally introduce bugs, slow the code down, or lead to other unanticipated problems. Every such default behavior in software has a history of GitHub issues or e-mails from angry users for whom, in their specific case, it led to something they didn't want or expect.
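The baseline argument above can be checked numerically. Below is a minimal sketch (my own toy data, not from the question) showing that ordinary least squares gives identical predictions whether the feature is "years since 1995" or "years since 1997", because the intercept absorbs the shift:

```python
import numpy as np

# Toy illustration: OLS predictions are invariant to shifting a feature,
# because the fitted intercept and slope adjust to compensate.
rng = np.random.default_rng(0)
years = rng.integers(1995, 2020, size=50).astype(float)
y = 2.0 * (years - 1995) + rng.normal(0, 0.1, size=50)

def ols_predict(x, y, x_new):
    # Fit y = b0 + b1 * x by least squares, then predict at x_new.
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[0] + beta[1] * x_new

# "Years since 1995" vs "years since 1997": same prediction for 2005.
p1 = ols_predict(years - 1995, y, 2005.0 - 1995)
p2 = ols_predict(years - 1997, y, 2005.0 - 1997)
print(abs(p1 - p2))  # ~0 up to floating-point error
```

The same invariance argument applies to dividing by a constant (meters vs kilometers), since the slope rescales to compensate.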
null
CC BY-SA 4.0
null
2023-03-31T09:17:40.597
2023-03-31T09:23:47.187
2023-03-31T09:23:47.187
35989
35989
null
611360
2
null
611357
3
null
$|E[X_n] - a| = o(1)$ tells you very little about the behaviour of the distribution of $X_n$ apart from its mean. For example - if independent $Y_j \sim \mathcal N\left(\frac{1}{j^2}, j^2\right)$ and $X_n=\sum\limits_{j=1}^n Y_j$ - then you have $\left|E[X_n] - \frac{\pi^2}6\right| \le \frac1n = o(1)$, meaning $E[X_n]$ converges to $\frac{\pi^2}6$ - but $X_n$ does not converge to a constant, as it has a normal distribution with increasing variance of $\frac16n(n+1)(2n+1)$
null
CC BY-SA 4.0
null
2023-03-31T09:33:44.630
2023-03-31T09:33:44.630
null
null
2958
null
611361
1
null
null
0
21
Imagine the following dataset, where 1 means the person sometimes buys this product and 0 means the person never buys it:

```
         Noodles  Rice  Pickles ... Chewinggum  Money spent
Person 1    1      0      1           1            1000
Person 2    0      0      0           1             150
Person 3    1      1      1           1             750
...
Person n    1      0      1           0             895
```

Of course this is a dummy dataset. In my real dataset I have very little data: only about 70-150 persons with 2000 features. Therefore "normal" regression approaches to predict "Money spent" directly did not work out so well. My new approach is to predict the differences in "Money spent". I take the difference of Person 1 and Person 2, the difference of Person 1 and Person 3, and so on throughout the entire set. I end up with a dataset of $n^2-n$ rows. The first rows would look like this:

```
Person 1  Person 2    1   0   1   0    850
Person 1  Person 3    0  -1   0   0    250
```

I now want to create a model to predict how much more or how much less money Person X spends than Person Y. I want to create a 75% train / 25% test split. The question now is:

- A) Do I split the dataset into train and test before computing the differences? This would lead to a training set of $(0.75n)^2 - 0.75n$ rows.
- B) Or do I compute the differences first and create the train/test split afterwards? This would result in a training set of $(n^2 - n) \cdot 0.75$ rows.

My problem: suppose Person 23 is the only one with value 1 for a rare feature such as "Peanuts". With approach A) the model has no clue what to do with a difference in "Peanuts", because nobody else differs there. With approach B) I sense that somehow too much information about Person 23 is still in the training data and leaks into the test results. I hope you understand what I am talking about, and I would like to read your comments on this. Maybe you can also provide me some additional information about modeling differences in general, if this is a common approach.
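To make option A concrete, here is a small sketch (person names and feature values invented) that splits persons first and only then forms the pairwise differences, so no person appears on both sides:

```python
from itertools import permutations  # ordered pairs give n^2 - n rows

# Hypothetical mini-dataset: features and "money spent" per person.
persons = {
    "P1": ([1, 0, 1, 1], 1000),
    "P2": ([0, 0, 0, 1], 150),
    "P3": ([1, 1, 1, 1], 750),
    "P4": ([1, 0, 1, 0], 895),
}

def pairwise_differences(ids):
    # Build one row per ordered pair: feature differences and target difference.
    rows = []
    for a, b in permutations(ids, 2):
        xa, ya = persons[a]
        xb, yb = persons[b]
        rows.append(([u - v for u, v in zip(xa, xb)], ya - yb))
    return rows

train_ids, test_ids = ["P1", "P2", "P3"], ["P4"]
train = pairwise_differences(train_ids)  # 3^2 - 3 = 6 rows
test = pairwise_differences(test_ids)    # a single person yields 0 rows, so
                                         # the test split needs >= 2 persons
print(len(train), len(test))
```

This corresponds to the $(0.75n)^2 - 0.75n$ arithmetic in option A; it also shows why the held-out side must contain enough persons to form pairs at all.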
Train Test leakage
CC BY-SA 4.0
null
2023-03-31T09:55:53.920
2023-03-31T15:15:12.140
null
null
383288
[ "regression", "difference-in-difference", "group-differences", "differences", "train-test-split" ]
611362
1
null
null
1
30
I'm trying to use either `pracma::quadprog` or `quadprog::solve.QP` in R to solve a ridge regression, which can be written as a constrained optimization: $$\begin{aligned} \text{minimize} \ \frac12|| y - X \beta||_2^2 \\ \text{s.t.} \ ||\beta||_2^2 \leq t \end{aligned}$$ Both functions can solve $$\begin{aligned} \text{minimize} \ \frac12x^TQx+d^Tx \\ \text{s.t.} \ Ax \leq b \end{aligned}$$ I think I can let $Q = X^TX$ and $d = -X^Ty$, so the objective of ridge regression can be used in either function. But I'm wondering how I could transform $||\beta||_2^2 \leq t$ into $Ax \leq b$, as the former is quadratic?
Quadratic Programming for ridge regression
CC BY-SA 4.0
null
2023-03-31T10:09:37.197
2023-03-31T10:09:37.197
null
null
355736
[ "r", "regression", "optimization", "ridge-regression" ]
611363
2
null
611349
2
null
Your updated question references applying `StandardScaler` to everything. According to the documentation, by default this centers each feature (by subtracting the mean) and then rescales it to unit variance. Clearly, scaling features independently like this can change your analysis (compare a scaled vs. unscaled PCA, for example), which might be unintended. And the centering might cause some modelling not to work correctly (e.g. it will introduce negative values, which will fail if a log transformation or square root is indicated).
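A minimal numpy sketch of what the default standardization does (this is the computation, not scikit-learn's API), illustrating why a subsequent log or square-root transform would fail:

```python
import numpy as np

# Center to mean 0 and rescale to unit variance, as StandardScaler does
# by default for each feature.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
z = (x - x.mean()) / x.std()

print(z.mean())       # ~0 after centering
print(z.std())        # 1 after rescaling
print((z < 0).any())  # True: negative values appear, so np.log(z) is undefined
```

Any transform requiring non-negative inputs must therefore be applied before centering, or the centering step must be skipped.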
null
CC BY-SA 4.0
null
2023-03-31T10:11:30.930
2023-03-31T10:43:20.030
2023-03-31T10:43:20.030
68149
68149
null
611364
1
null
null
0
22
Consider $X_n$ a random variable that converges in distribution to $X$, and $R_n = o_p(1)$. Then, by Slutsky's theorem, $X_n + R_n \rightarrow_d X$. I have already proved that for a continuous, nonlinear, unbounded function $f$, $$\lim_{n \rightarrow \infty}E[f(X_n)] = \lim_{n \rightarrow \infty}E[f(X_n + R_n)] = E[f(X)]$$ and that $$\lim_{n \rightarrow \infty}E[f^2(X_n)] = \lim_{n \rightarrow \infty}E[f^2(X_n + R_n)] = E[f^2(X)].$$ Now I would like to prove that $$E[f(X_n + R_n)f(X_n)] \rightarrow E[f^2(X)].$$ I cannot use the Cauchy-Schwarz inequality, since I am not interested in an inequality but need an "=". Does anyone have an idea how to proceed? My idea would be to bound the product as $$f(X_n + R_n)f(X_n + R_n) \leq f(X_n + R_n)f(X_n) \leq f(X_n)f(X_n).$$
Convergence of joint moments: r.v. + $o_p(1)$
CC BY-SA 4.0
null
2023-03-31T10:38:47.233
2023-03-31T11:00:43.660
2023-03-31T11:00:43.660
365245
365245
[ "probability", "self-study", "convergence", "joint-distribution", "slutsky-theorem" ]
611365
2
null
34096
0
null
Update in 2023: If you are used to Python, there is a scikit-learn clone for time series called sktime, which has appropriate methods for this problem: [https://sktime-backup.readthedocs.io/en/v0.15.1/api_reference/clustering.html](https://sktime-backup.readthedocs.io/en/v0.15.1/api_reference/clustering.html) In general, we should mention that time-series clustering problems are well known, and clustering time series is indeed possible. To see how it works, I would visit this repo, which also links a paper from 2022. In particular, k-medoids with distance-based clustering seems to work well for time-series problems: [https://github.com/sktime/distance-based-time-series-clustering](https://github.com/sktime/distance-based-time-series-clustering) For those who do not want to visit the GitHub repo, here is the paper directly: [https://arxiv.org/pdf/2205.15181.pdf](https://arxiv.org/pdf/2205.15181.pdf)
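To show the idea without any library, here is a self-contained toy of distance-based time-series clustering (a basic DTW distance plus a naive k-medoids loop). This is NOT sktime's API and the initialisation/update rules are simplified for illustration; see the linked docs for the real thing:

```python
from itertools import product

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming DTW distance."""
    INF = float("inf")
    D = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    D[0][0] = 0.0
    for i, j in product(range(1, len(a) + 1), range(1, len(b) + 1)):
        cost = abs(a[i - 1] - b[j - 1])
        D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[len(a)][len(b)]

def k_medoids(series, k, iters=10):
    n = len(series)
    dist = [[dtw(s, t) for t in series] for s in series]
    medoids = [0]  # farthest-point initialisation
    while len(medoids) < k:
        medoids.append(max(range(n),
                           key=lambda i: min(dist[i][m] for m in medoids)))
    labels = [0] * n
    for _ in range(iters):
        # Assign each series to its nearest medoid, then recompute medoids.
        labels = [min(range(k), key=lambda c: dist[i][medoids[c]])
                  for i in range(n)]
        for c in range(k):
            members = [i for i in range(n) if labels[i] == c]
            if members:
                medoids[c] = min(members,
                                 key=lambda j: sum(dist[j][i] for i in members))
    return labels

# Two low, bump-shaped series and two high, flat-ish series.
series = [[0, 1, 2, 1, 0], [0, 0, 1, 2, 1], [9, 9, 8, 9, 9], [8, 9, 9, 9, 8]]
print(k_medoids(series, k=2))
```

DTW tolerates the time shift between the first two series, so they cluster together despite not being aligned pointwise.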
null
CC BY-SA 4.0
null
2023-03-31T10:44:31.230
2023-03-31T10:44:31.230
null
null
313093
null
611367
2
null
127501
0
null
This sounds like k-nearest neighbors (kNN) with Mahalanobis distance as the distance metric used to determine the nearest neighbors. In mathematical analysis and topology, a "metric" is a function that inputs two points and outputs a non-negative number. There are formal properties a metric function must satisfy, but the key is that it behaves the way we think distances should behave. Consequently, when we talk about the "nearest" neighbors in kNN, it is on us to determine the sense in which points are "near". Mahalanobis distance accounts for the covariance matrix $S$ of your data, and the metric function giving the Mahalanobis distance between $x$ and $y$ is $d_M(x,y)=\sqrt{ (x-y)^TS^{-1}(x-y) }$. To find the nearest neighbors of a point $x$, calculate the distances between $x$ and all other points. Then pick the $k$ points that have the smallest values of $d_M$, which have a reasonable interpretation as being "nearest" to $x$. There isn't really any training. You just apply the Mahalanobis distance, get the nearest neighbors, and use those to make your prediction, which sounds like what your visitor said.
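A minimal numpy sketch of the procedure just described (synthetic data; the query point and sample size are arbitrary): estimate $S$ from the data, then rank neighbours of a query point by $d_M$:

```python
import numpy as np

# Synthetic correlated 2-D data to estimate the covariance matrix from.
rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0], [[2.0, 1.2], [1.2, 1.0]], size=200)
S_inv = np.linalg.inv(np.cov(X, rowvar=False))

def mahalanobis_knn(x, X, S_inv, k):
    # Squared Mahalanobis distance (x - y)^T S^{-1} (x - y) to every point;
    # ranking by squared distance is the same as ranking by distance.
    diff = X - x
    d2 = np.einsum("ij,jk,ik->i", diff, S_inv, diff)
    return np.argsort(d2)[:k]  # indices of the k nearest points

neighbours = mahalanobis_knn(np.array([0.5, 0.5]), X, S_inv, k=5)
print(neighbours)
```

A prediction would then aggregate the targets of those `k` indices (majority vote for classification, mean for regression), exactly as in ordinary kNN.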
null
CC BY-SA 4.0
null
2023-03-31T10:57:52.970
2023-03-31T10:57:52.970
null
null
247274
null
611368
1
null
null
0
47
It's me again! Today I'm asking about an ARMA(2,2) and how to compute all the different autocovariances, etc. I found the formula for an ARMA(1,1); however, I'm having some problems. Here is the text of the exercise I'm trying to solve. Consider the following $\operatorname{ARMA}(2,2)$ model, $$ Y_t=1.3 Y_{t-1}-0.4 Y_{t-2}+e_t-1.2 e_{t-1}+0.2 e_{t-2} $$ where $e_t \sim$ i.i.d. $\mathscr{N}(0,1)$ (a) Rewrite the model using the lag operators. (b) Express conditions under which $Y_t$ is covariance-stationary. (c) Compute the mean, variance and autocovariance of $Y_t$ under the assumption that $Y_t$ is covariance-stationary. (d) Derive the infinite moving-average representation of the process. Under which conditions can we obtain such a representation? I'm trying to follow this answer for an ARMA(1,1); however, I don't think I'm on the right track: [https://math.stackexchange.com/questions/1265466/the-autocovariance-function-of-arma1-1](https://math.stackexchange.com/questions/1265466/the-autocovariance-function-of-arma1-1) Can someone help me with these questions? I was trying to rewrite the whole equation as an $MA(\infty)$ but can't really get it right (like in the link above). Maybe someone can give me the general formula for the autocovariances and autocorrelations and then I can compute everything else? Thanks everyone!
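As a numeric cross-check (not a substitute for the derivation the exercise asks for), the standard $\psi$-weight recursion gives the $MA(\infty)$ coefficients for this model, and truncated sums of $\psi_j\psi_{j+k}$ approximate the autocovariances:

```python
import numpy as np

# psi-weights of the MA(infinity) representation of
# Y_t = 1.3 Y_{t-1} - 0.4 Y_{t-2} + e_t - 1.2 e_{t-1} + 0.2 e_{t-2}:
# psi_0 = 1, psi_1 = phi1 + theta1, psi_2 = phi1 psi_1 + phi2 + theta2,
# psi_j = phi1 psi_{j-1} + phi2 psi_{j-2} for j >= 3.
phi = [1.3, -0.4]
theta = [-1.2, 0.2]
sigma2 = 1.0

# Stationarity (part b): roots of 1 - 1.3 z + 0.4 z^2 outside the unit circle.
roots = np.roots([0.4, -1.3, 1.0])  # here: z = 2 and z = 1.25
print(roots)

J = 200  # truncation point; psi_j decays geometrically for stationary models
psi = [1.0, phi[0] + theta[0]]
psi.append(phi[0] * psi[1] + phi[1] * psi[0] + theta[1])
for j in range(3, J):
    psi.append(phi[0] * psi[-1] + phi[1] * psi[-2])
psi = np.array(psi)

# gamma_k = sigma^2 * sum_j psi_j psi_{j+k} (truncated at J terms).
gamma = [sigma2 * np.sum(psi[: J - k] * psi[k:]) for k in range(4)]
print(gamma)  # gamma_0 = Var(Y_t), then autocovariances at lags 1, 2, 3
```

Both AR roots exceed 1 in modulus, so the process is covariance-stationary and the $MA(\infty)$ representation in part (d) exists; the mean is 0 since there is no constant term.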
Autocovariances of ARMA(2,2)
CC BY-SA 4.0
null
2023-03-30T14:47:05.267
2023-04-01T00:44:45.927
2023-04-01T00:44:45.927
11887
380993
[ "time-series", "variance", "arima" ]
611369
2
null
568749
1
null
It does not make sense to me to compare these, as they evaluate different data. The AUC evaluates the outputs of your model. "But so does the $F_1$ score," you protest? False. The data used to calculate the $F_1$ score are the model outputs after an additional function has transformed them into hard classifications. This is done by applying a threshold and classifying as positive above that threshold and negative below. Consequently, if your $F_1$ is poor, it might be that your model outputs are trash, or it might be that they are fine and this threshold function is not a good one. Maybe you need a higher or lower threshold than the software default. My suspicion is that you can find a threshold for the second model that gives a higher $F_1$, since the ROC AUC indicates better ability to distinguish between the categories. Note that all of this threshold business is of debatable value. The direct outputs of models contain rich information that you destroy when you apply a threshold. For instance, [there might be more worthwhile decisions than categories](https://stats.stackexchange.com/a/469059/247274), and we have an [academic reference on the drawbacks of accuracy, F1 score, sensitivity and/or specificity](https://stats.stackexchange.com/q/603663/247274), all of which are threshold-based statistics. Thresholds have their uses but do not necessarily need to be part of your modeling.
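A small pure-python illustration of the threshold point (the scores and labels here are made up, not the asker's data): a model whose scores rank the classes perfectly can still have a mediocre $F_1$ at the default 0.5 cut-off:

```python
def f1_at(scores, labels, threshold):
    # F1 = 2*TP / (2*TP + FP + FN) for a given hard-classification threshold.
    preds = [s >= threshold for s in scores]
    tp = sum(p and l for p, l in zip(preds, labels))
    fp = sum(p and not l for p, l in zip(preds, labels))
    fn = sum((not p) and l for p, l in zip(preds, labels))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# Perfectly ranked scores, but most positives sit below 0.5.
scores = [0.10, 0.15, 0.20, 0.30, 0.35, 0.40, 0.42, 0.45, 0.48, 0.60]
labels = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

default_f1 = f1_at(scores, labels, 0.5)
best_f1 = max(f1_at(scores, labels, t) for t in scores)
print(default_f1, best_f1)  # 1/3 at the 0.5 default, 1.0 at threshold 0.40
```

Sweeping candidate thresholds (here just the observed scores) recovers the $F_1$ that the ranking quality, i.e. the AUC, suggests is attainable.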
null
CC BY-SA 4.0
null
2023-03-31T11:19:50.347
2023-03-31T23:04:21.380
2023-03-31T23:04:21.380
247274
247274
null
611371
1
null
null
0
34
I am conducting a prediction exercise using the LASSO probit model, with a dummy on the LHS and several interacted variables on the RHS. My question is: can I meaningfully estimate the average marginal effect of my individual predictors if I also use the penalised coefficients for prediction? If yes, could you please recommend ways of implementing this in Stata or R? Currently, I am running the following in Stata:

```
lasso probit y i.X1##i.X2##c.X3 if sample==1
predict y_hat
local vars=e(allvars_sel)
probit y `vars' if sample==1
margins, dydx(*) post
```

But I am not sure if this is correct, given that the second probit estimation doesn't use the penalised coefficients.
Estimating Average Marginal Effects from LASSO
CC BY-SA 4.0
null
2023-03-31T11:49:03.127
2023-03-31T12:19:09.503
2023-03-31T12:19:09.503
53690
309751
[ "predictive-models", "lasso", "probit" ]
611372
1
null
null
1
41
I have 8 means referring to 8 groups. Since I have the size of each group, I can calculate the mean of the means. How can I calculate the standard deviation of this mean of the means?
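To make the question concrete, here is a hedged sketch of one common recipe, assuming independent groups and known within-group SDs (all numbers below are invented; note the 8 means and sizes alone are not enough — the group SDs are also required):

```python
import math

# Hypothetical group summaries: means, sizes, and within-group SDs.
means = [10.0, 12.0, 9.5, 11.0, 10.5, 12.5, 9.0, 11.5]
sizes = [30, 25, 40, 35, 20, 30, 45, 25]
sds   = [2.0, 2.5, 1.8, 2.2, 2.1, 2.4, 1.9, 2.3]

N = sum(sizes)
# Grand mean, weighting each group mean by its size.
grand_mean = sum(n * m for n, m in zip(sizes, means)) / N
# Var(grand mean) = sum_i (n_i/N)^2 * s_i^2 / n_i = sum_i n_i s_i^2 / N^2.
se = math.sqrt(sum(n * s**2 for n, s in zip(sizes, sds))) / N
print(grand_mean, se)
```

Whether this standard error is the quantity of interest depends on whether the goal is the uncertainty of the grand mean or the spread of the group means themselves.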
The standard deviation of the mean of the means
CC BY-SA 4.0
null
2023-03-31T11:51:41.567
2023-03-31T12:28:24.830
2023-03-31T12:28:24.830
345611
14270
[ "variance", "mean", "standard-deviation", "descriptive-statistics", "sample" ]
611373
1
null
null
0
12
I have a question about a mixed model I'm running. The formula is the following:

```
DV ~ Combined * Sex * Arousal + (1 + Combined | Subject) + (1 | Item)
```

My question concerns the factor "Combined". I created it by combining two separate factors, Emotion (anger, happiness) and Channel (semantics, prosody, semantics + prosody), plus the common baseline for both factors, neutral. The result is a factor with seven levels: angry prosody, happy prosody, angry semantics, happy semantics, angry prosody + semantics, happy prosody + semantics, neutral. I created this combined factor because I'm not interested in the effects of the single factors "Emotion" or "Channel", but in their interaction. Moreover, I tried to run the model with the separate factors

```
DV ~ Emotion * Channel * Sex * Arousal + (1 + Combined | Subject) + (1 | Item)
```

but I ran into several convergence problems. Do you think it is correct to combine the factors in this way?
Combining two fixed factors in one factor
CC BY-SA 4.0
null
2023-03-31T11:56:12.890
2023-03-31T12:01:39.537
2023-03-31T12:01:39.537
345611
384594
[ "regression", "mixed-model", "lme4-nlme", "glmm", "crossed-random-effects" ]
611374
2
null
611057
3
null
The VAE is a direct implementation of the ELBO, a lower bound on the log likelihood of the data, which makes optimising its training objective a provably valid approach. Any deviation from that would need to be considered carefully, as you may otherwise have no idea what optimising the new objective would achieve, i.e. what the model would learn.

- Aim: train a model of the data $p_\theta(x)$ to fit the true data distribution $p(x)$, where the model is defined in terms of a latent variable $z$: $p_\theta(x) = \int p_\theta(x|z)p_\theta(z) dz$
- Approach: minimise the KL divergence between $p_\theta(x)$ and $p(x)$, i.e. find $\text{argmin}_{\theta} \int_x p(x)\log\tfrac{p(x)}{p_\theta(x)} = \text{argmax}_\theta \int_x p(x)\log p_\theta(x)$, i.e. maximising the cross entropy (RHS) is equivalent to minimising the KL divergence (LHS). It is typically intractable to take gradients w.r.t. $\theta$ to maximise this, so a lower bound is maximised instead: $\int_x p(x)\log p_\theta(x) \geq \int_x p(x)\int_z q(z|x)\{\log p_\theta(x|z) - \log\tfrac{q(z|x)}{p_\theta(z)}\}\quad$ [= the ELBO in "VAE form"]
- While the terms of this expression can be thought of intuitively as reconstruction loss + regulariser, this is just intuition; the fact is you want $p_\theta(x)$ to learn $p(x)$, which maximising the ELBO guarantees. If you "hack around with it", that guarantee may be lost.
- The proposed alternative "regulariser" may (i) happen to equate to $\log\tfrac{q(z|x)}{p_\theta(z)}$ for particular choices of those distributions, in which case you are baking in those assumptions (which may be good or bad depending on the data distribution); or (ii) to some extent approximate particular distributional assumptions and "seem to work" (e.g. on a given test set), but, given it is an approximation (and so "wrong" in places), there may be regions in the data space where the VAE performs poorly.
null
CC BY-SA 4.0
null
2023-03-31T12:26:47.603
2023-04-07T11:15:10.370
2023-04-07T11:15:10.370
307905
307905
null
611376
1
611380
null
0
47
Below is my ADF test for regression residuals. I use alpha = 5%. My interpretation is: my residuals are non-stationary, as my model does not take into account the 4 lags suggested by the ADF test. In other words, my residuals would be stationary only if I took those 4 lags into consideration in the model. Is this correct? [](https://i.stack.imgur.com/eXUKB.png)
ADF Interpretation
CC BY-SA 4.0
null
2023-03-31T12:31:22.753
2023-03-31T13:33:26.970
2023-03-31T13:33:26.970
53690
70598
[ "interpretation", "lags", "augmented-dickey-fuller" ]
611377
2
null
428902
0
null
Take a simple case: the data $x$ is a mixture of Gaussians, generated by picking a cluster index $z$ (from a categorical distribution $p(z)$) and then sampling from the Gaussian of that cluster, $p(x|z)$. So $x$ is defined by nature as $p(x)=\sum_z p(x|z)p(z)$.

- If you observe samples from $p(x)$ and want to model the distribution, the most accurate model will be one that reflects the true generative process (which may not be unique), i.e. that models the unobserved $z$ as a latent variable.
- You could fit another model: one big Gaussian, which would be a terrible fit; or a very flexible autoregressive model (e.g. a normalising flow [1]); but these are approximations and will not do better than fitting a latent variable model that reflects/respects the generative process. They also do not reveal the "natural structure" of the data (see the next point).
- The latent variables may be of interest in themselves. For example, in the mixture-of-Gaussians case, inferring $z$ identifies the cluster of each data point, which may be semantically meaningful. The same would be true for a more complex mixture distribution, and inference may identify images of the same object or words of the same sentiment. In physics, a latent variable might represent a certain property to be estimated.

[1] A Family of Nonparametric Density Estimation Algorithms; Tabak & Turner
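The generative process in the first paragraph can be sketched in a few lines (component means, SDs, and weights below are made up for illustration):

```python
import numpy as np

# Ancestral sampling from p(x) = sum_z p(x|z) p(z): first draw the latent
# cluster index z from a categorical p(z), then x from that cluster's Gaussian.
rng = np.random.default_rng(0)
weights = [0.3, 0.7]
mus, sigmas = [-5.0, 5.0], [1.0, 1.0]

z = rng.choice(2, size=5000, p=weights)              # latent cluster indices
x = rng.normal(np.take(mus, z), np.take(sigmas, z))  # observed data

# The marginal p(x) is bimodal; one big Gaussian would fit it terribly.
print(x[z == 0].mean(), x[z == 1].mean())
```

In real data only `x` is observed; recovering `z` (e.g. by EM for a Gaussian mixture) is exactly the latent-variable inference discussed above.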
null
CC BY-SA 4.0
null
2023-03-31T12:55:37.327
2023-03-31T12:55:37.327
null
null
307905
null
611378
2
null
611143
3
null
You could remove the blockade by hacking the definition of the Poisson family and allowing mean values of zero

```
fam = poisson(link="identity")
fam$validmu = function(mu) {
  # all(is.finite(mu)) && all(mu > 0)
  all(is.finite(mu)) && all(mu >= 0)
}
glm(y~0+x, family=fam, start = 0.3)
```

but this will just give a new error

```
Error in glm.fit(x = c(1, 1, 3, 0, 1, 3, 2, 1, 2, 1, 1, 0, 5, 6, 1, 3,  :
  0s in V(mu)
Calls: glm -> eval -> eval -> glm.fit
Execution halted
```

The problem is that the iteratively reweighted least squares procedure performed by the glm function divides by the current estimate of the variance. See [https://www.jstor.org/stable/2344614](https://www.jstor.org/stable/2344614)

> The solution of the maximum likelihood equations is equivalent to an iterative weighted least-squares procedure with a weight function $$w = (d\mu/dY)^2/V$$ and a modified dependent variable (the working probit of probit analysis) $$y = Y + (z-\mu)/(d\mu/dY)$$

For Poisson regression, $V$, the variance of the distribution at the current estimated mean $\mu$, is equal to $V = \mu$. So you get a division by zero if your starting values estimate $\mu = 0$, which happens at the point $x=0$.

---

In your case you can exclude these points from the regression, since for $x=0$ your model predicts $y=0$ anyway, no matter what the estimates of the model coefficients are. So if, aside from the above hack with the valid values for mu, we also add weights to get rid of the cases where $x=0$, then you can run the glm function

```
Call:  glm(formula = y ~ 0 + x, family = fam, weights = 1 * (x > 0),
    start = 0.3)

Coefficients:
     x
0.1858

Degrees of Freedom: 36 Total (i.e. Null);  35 Residual
Null Deviance:      Inf
Residual Deviance: 34.88        AIC: 68.36
```
null
CC BY-SA 4.0
null
2023-03-31T13:11:13.707
2023-03-31T13:11:13.707
null
null
164061
null
611380
2
null
611376
0
null
Such details can usually be found in the software documentation. You have not shown which software produced the output, so we cannot be 100% sure. However, we can guess that the ADF test specification used 4 lags* and found a $p$-value of just under 0.05. Thus you can reject $H_0$ of the presence of a unit root in favour of stationarity at the 5% significance level. *The DF test does not include any lags, but the ADF does: A stands for "augmented", and that means the inclusion of some lags.
null
CC BY-SA 4.0
null
2023-03-31T13:33:17.543
2023-03-31T13:33:17.543
null
null
53690
null
611381
1
null
null
4
381
Can anyone explain the linearity of expectation in an intuitive way? I have been trying to understand this for far too long now. Please don't use any equations and such; try to use real-world examples or simple tools such as couples, red cards and black cards, etc. For example, if there are 4 red cards and 5 black cards, the expected number of alternating adjacent pairs would be equal to $$E[\text{alternating first pair} + \text{alternating second pair} + \dotsm + \text{alternating 8th pair}],$$ even though the first pair and second pair should be dependent. Why? Please don't link to external articles and such; I really would like an intuitive explanation, and it seems like one hasn't been provided before. I have a feeling linearity holds for this kind of example because the next outcome is symmetric in each case, i.e. if the first card is black, the second red, and the third black, it is symmetric in a way to the case where the first card is red, the second black, and the third red, but I'm not sure how this relates to linearity.
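The card example can be checked by brute force. The enumeration below confirms that the expected number of alternating adjacent pairs equals the sum of the per-pair probabilities, $8 \cdot \frac{2\cdot 4\cdot 5}{9\cdot 8} = \frac{40}{9}$, even though the pair events are dependent:

```python
from itertools import permutations

# Exhaustively average the number of alternating adjacent pairs over all
# 9! equally likely orderings of 4 red and 5 black cards.
deck = "RRRRBBBBB"
total = count = 0
for order in permutations(deck):
    total += 1
    count += sum(order[i] != order[i + 1] for i in range(8))

print(count / total, 40 / 9)  # the two numbers agree exactly
```

Linearity of expectation is what lets the per-pair probabilities simply add up; the dependence between pairs affects the variance of the count, not its expectation.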
Provide an intuitive example of the linearity of expectation
CC BY-SA 4.0
null
2023-03-31T13:35:36.927
2023-04-01T20:37:18.133
2023-03-31T17:01:06.017
11887
262122
[ "expected-value", "conditional-expectation", "intuition", "linearity" ]
611382
1
null
null
0
47
I'm working on my master's, developing a machine-learning model to predict classes of biomedical images from a microscope. These images are

- collected separately from each patient. For example, I have ~70 cases and each contains ~50 images (a small dataset);
- collected under different conditions. For example, images in each case were taken at different lighting intensities.

The second point was intentional, to reflect actual use in medical healthcare. In the current state, I have only measured the performance of my model (i.e. accuracy and AUC) using cross-validation. However, I have no idea how I should split my dataset to form a test set, as the performance of the model will depend on which cases are selected. For example, if the selected cases in the training set were collected under conditions similar to the cases in the test set, I would surely get high accuracy, which may not happen in real use. Thus, is it enough to report the result from cross-validation without including the result from a held-out test set? A possible solution is collecting more data to use as a test set. However, that would take a long time in the current state, so I would like to consider other methods if possible. Thank you for reading.
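For reference, the case-wise split I have in mind looks like this sketch (case counts and image ids below are hypothetical): whole patients are assigned to train or test, so images from one patient never appear on both sides:

```python
import random

# Hypothetical image ids: 10 cases (patients), 5 images each.
images = [(case_id, f"img_{case_id}_{i}")
          for case_id in range(10) for i in range(5)]

# Shuffle the *cases*, not the images, and hold out 20% of cases.
cases = sorted({c for c, _ in images})
random.Random(42).shuffle(cases)
test_cases = set(cases[: len(cases) // 5])

train = [img for c, img in images if c not in test_cases]
test = [img for c, img in images if c in test_cases]
print(len(train), len(test))  # 40 and 10 with 10 cases of 5 images each
```

An image-level split would instead leak patient- and lighting-specific information between train and test, inflating the reported accuracy.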
Is it sufficient to report only the result of cross validation in research paper?
CC BY-SA 4.0
null
2023-03-31T13:50:21.663
2023-03-31T13:50:21.663
null
null
384596
[ "machine-learning", "neural-networks", "cross-validation", "model-evaluation", "train-test-split" ]
611383
2
null
590701
2
null
TL;DR to "how does maximizing the ELBO lead to a good/correct posterior predictive distribution $p(y|x,D)$?" The initial formulation for making new predictions is correct: essentially you have a model $p(y|x,w)$ to predict the distribution of $y$ for a given new $x$, which requires parameters $w$. Which parameters to use? You can

- use a point estimate $w^*$ learned from the data $D$ (e.g. the maximum likelihood estimate (MLE) or maximum a posteriori (MAP) estimate). Predictions are given by $p(y|x,D) \approx p(y|x,w^*)$;
- or take a Bayesian approach by averaging over all possible values of $w$ weighted by $p(w|D)$, their probability given the observed data (a.k.a. the posterior), learned from the data (e.g. by maximising the ELBO). Predictions are given by $p(y|x,D) = \int_w p(y|x,w)p(w|D)$.

Maximising the ELBO implicitly minimises $KL[q(w|D)\|p(w|D)]$ (see below). The ELBO is maximal when the KL term equals 0, meaning that the approximate posterior $q(w|D)$ equals the true posterior $p(w|D)$ (which is intractable to compute directly).

---

Longer answer: Your definition of the ELBO isn't quite right. You want to approximate the posterior $p(w|D)$ with $q(w)$ (whether this is denoted $q(w|D)$ or $q(w)$ doesn't particularly matter, but the former makes clearer that it approximates the posterior, not a prior over $w$, $p(w)$, so I use $q(w|D)$ below).

- To fit $q(w|D)$ to $p(w|D)$, typically an intractable distribution, as you say, you maximise a lower bound (the ELBO) on the log likelihood of the data, where $p(D) = \int_w p(D|w)p(w)$.
- The ELBO is derived: $\log p(D) = \int_w q(w|D)\log \tfrac{p(D,w)}{p(w|D)} \quad= \int_w q(w|D)\log \tfrac{p(D|w)p(w)}{q(w|D)}\tfrac{q(w|D)}{p(w|D)}$ $= \int_w q(w|D)\log p(D|w) - \int_w q(w|D)\log \tfrac{q(w|D)}{p(w)} + \int_w q(w|D)\log \tfrac{q(w|D)}{p(w|D)}$ $\geq \int_w q(w|D)\log p(D|w) - \int_w q(w|D)\log \tfrac{q(w|D)}{p(w)} $ $= \mathbb{E}_{q(w|D)}[\log p(D|w)] - d_{KL}[q(w|D)\|p(w)]$

The difference to what you wrote is in the KL term. In particular, the ELBO doesn't feature the true posterior $p(w|D)$, as you don't know it (if you did, you wouldn't be trying to approximate it). The dropped term leading to the inequality "gap" (making this a lower bound) is the KL divergence between the approximate and true posterior, which is (implicitly) minimised as the ELBO is maximised. Subject to how well the various probabilities are modelled, the ELBO is fully maximised when the approximate posterior matches the true posterior (i.e. the KL term between the approximate and true posteriors goes to zero, hence the inequality "gap" disappears and you have an equality). So maximising the ELBO is an elaborate indirect way of getting the approximate posterior $q(w|D)$ to fit the true posterior $p(w|D)$, which can then be used for predictions etc.

- It requires (i) an assumed prior over the parameters $p(w)$, (ii) the data likelihood for a given parameter choice $p(D|w)$ to be computable, and (iii) a parametric family of distributions $q(w|D)$.
- Here, the likelihood $p(D|w) = \prod_{(x,y)\in D} p(y|x,w)$ is the same likelihood function as used for making predictions "at test time" (as you refer to at the outset).
- Given sufficient data, the posterior $q(w|D)$ may concentrate on a particular value $w^*$ (the maximum a posteriori estimate), allowing $w^*$ to be plugged directly into $p(y|x,w)$ as a good approximation to integrating over the whole posterior distribution.
null
CC BY-SA 4.0
null
2023-03-31T14:13:02.127
2023-04-07T11:17:40.617
2023-04-07T11:17:40.617
307905
307905
null
611384
2
null
611358
2
null
In computer science, the idea you describe is called the [divide and conquer](https://en.wikipedia.org/wiki/Divide-and-conquer_algorithm) approach. While often very successful, here it fails, because you have to take all cross-correlations between the prior $n-1$ points and the new data point into account. So nothing is gained in general by doing it sequentially or by subdividing the problem in any other way.
null
CC BY-SA 4.0
null
2023-03-31T14:21:12.290
2023-03-31T14:21:12.290
null
null
8298
null
611385
1
null
null
4
122
I find myself trying to solve a peculiar stopping-time problem. Let $\{X_i\}$ be a set of continuous random variables of a stochastic process, each with finite mean $\mu$ and standard deviation $\sigma$. I am interested in finding the expected value of the stopping time $\tau$ given by: $$\tau = \inf_n \left\{ \left| S_n \right| := \left| \sum_{i=0}^n (-1)^i\ X_i \right|>a\right\}$$ What is then $\mathbb{E}[\tau]$? Attempts: My first attempt was based on noticing that we can let $Z_i := X_{2i+1}-X_{2i}$ so that the sum over $Z_i$ is indeed a discrete-time martingale. This way, we can apply the optional stopping theorem to obtain $\mathbb{E}[S^{Z}_\tau] = \mathbb{E}[S^{Z}_0] = \mathbb{E}[Z_0] = 0$. The fact that this result equals $0$ prevents us from applying Wald's theorem directly. Nevertheless, I am afraid that this way we are "missing information", in the sense that we are restricting $\tau$ to even values, which could potentially lead to a wrong result. (Side question: why does none of the results I find about stopping-time problems depend on the standard deviation $\sigma$ of $X_i$?) I am relatively new to statistics and am trying to learn all these new concepts on the way (first time working with martingales and the optional stopping theorem), so I apologize if there is any misconception in what I did. Edit: typo in the definition of $Z_i$. Edit 2: in my second attempt, I found an extension of Wald's second equality to independent but not identically distributed variables, which states: $$\mathbb{E}[S_\tau^2] = \mathbb{E}\left[\sum^\tau_{i=0}\sigma_i^2\right]$$ Since in this particular case $\sigma$ is the same for all $X_i$: $$\mathbb{E}\left[\tau\right] = \frac{1}{\sigma^2}\mathbb{E}[S_\tau^2]$$ Now, what strategy can I follow in order to obtain $\mathbb{E}[S_\tau^2]$?
Intuitively, $$ \mathbb{E}[S_\tau^2] = \int_{-\infty}^{-a} S_\tau^2\ f(S_\tau)\ dS_\tau + \int_{a}^{\infty} S_\tau^2\ f(S_\tau)\ dS_\tau = 2\int_{a}^{\infty} S_\tau^2\ f(S_\tau)\ dS_\tau,$$ where $f(S_\tau)$ is the probability density function (and the last equality assumes symmetry). How can I solve this without knowing the probability distribution of $X_i$? Is there any alternative approach?
Stopping time with alternating sign random variables
CC BY-SA 4.0
null
2023-03-31T14:21:46.323
2023-04-06T16:39:03.653
2023-04-03T18:04:16.970
360009
360009
[ "central-limit-theorem", "martingale", "optimal-stopping", "stopping-time" ]
611386
1
null
null
0
8
I am trying to do a psychological network analysis, and I aim to do the analysis at the dimension level (summing all the items belonging to a dimension). However, I've found that one dimension of the scale has a very low Cronbach's alpha (only 0.51), while the other dimensions of the scale are acceptable. I've tried dropping some poorly behaved items, and Cronbach's alpha did not improve. So can I delete this dimension? I ask because I know the network might change dramatically if the variables within it are changed.
Can I delete a dimension of a scale if it has very low internal consistency when doing psychological network analysis?
CC BY-SA 4.0
null
2023-03-31T14:32:02.320
2023-03-31T14:32:02.320
null
null
264753
[ "reliability", "consistency", "networks", "partial-correlation", "cronbachs-alpha" ]
611388
2
null
611334
13
null
> "It seems to be that as n→∞, the event should be inevitable, which feels like a controversial word to use in probability."

Nothing controversial here, although sometimes we use "almost certain" instead of "inevitable".

### Zero-one laws

You might be interested in one of the so-called ["zero-one laws"](https://en.wikipedia.org/wiki/Zero%E2%80%93one_law), various theorems which all state that, under some particular conditions, a probability in the limit will always be either 0 (won't happen) or 1 (will happen), and cannot be an intermediate value. For instance, [Kolmogorov's zero-one law](https://en.wikipedia.org/wiki/Kolmogorov%27s_zero%E2%80%93one_law) states that if $X_0, X_1, X_2, \ldots, X_n, \ldots$ is an infinite family of random variables, and $E$ is an event which can be described in terms of the $X_n$ but is independent of any finite number of them, then either $P(E) = 0$ or $P(E) = 1$. The [Borel-Cantelli lemma](https://en.wikipedia.org/wiki/Borel%E2%80%93Cantelli_lemma) is another zero-one law which is perhaps simpler to understand. It states that if you have an infinite family of events $E_0, E_1, E_2, E_3, \ldots, E_n, \ldots$ such that the sum of probabilities $\sum_n P(E_n)$ is finite, then the probability that infinitely many of these events happen is 0. A converse version of the lemma, with the added assumption of independence, states that if the sum of probabilities is infinite, then the probability that infinitely many events happen is 1. For instance, imagine you roll an infinite number of dice with increasing numbers of faces, and ask for the probability that an infinite number of dice land on face $1$ (a die with $n$ faces has its faces numbered $1$ to $n$). Then, depending on the increasing sequence of the numbers of faces, the probability will be either $0$ or $1$; it cannot be anything in between.
- If you roll a die with 4 faces, then a die with 9 faces, then a die with 16 faces, then a die with 25 faces, etc, so that the $n$th die has $n^2$ faces, then the probability that infinitely many of these dice land on face $1$ is $0$; - If you roll a die with $4$ faces, and then a die with $5$ faces, then a die with $6$ faces, etc, so that the $n$th die has $n$ faces, then the probability that infinitely many of these dice land on face $1$ is $1$. If you can survive the dry formalism of measure theory, this text presents both Borel-Cantelli's lemma and Kolmogorov's zero-one law, and gives several examples of application: [McNamara, Kolmogorov's zero-one law with applications](https://math.uchicago.edu/%7Emay/REU2017/REUPapers/McNamara.pdf).
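The two cases can be checked numerically by comparing the partial sums of probabilities, which is exactly the quantity the Borel-Cantelli lemma looks at (a small sketch, not a proof):

```python
# Borel-Cantelli criterion for the two dice examples above: compare the
# partial sums of P(the n-th die lands on face 1).
N = 100_000

# Case 1: the n-th die has n^2 faces (4, 9, 16, 25, ...): sum of 1/n^2 converges.
convergent = sum(1 / (n * n) for n in range(2, N))

# Case 2: the dice have 4, 5, 6, ... faces: sum of 1/n diverges (harmonic series).
divergent = sum(1 / n for n in range(4, N))

print(f"n^2-faced dice: {convergent:.4f}")   # stays below pi^2/6 - 1, about 0.645
print(f"n-faced dice:   {divergent:.4f}")    # grows like log(N)
```

Doubling `N` barely moves the first sum but adds roughly $\log 2$ to the second, which is why the lemma assigns probability $0$ to "infinitely many ones" in the first case and (by the independence converse) $1$ in the second.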
null
CC BY-SA 4.0
null
2023-03-31T14:47:47.430
2023-03-31T14:59:43.577
2023-03-31T14:59:43.577
301319
301319
null
611390
1
null
null
0
14
I have two time series, $X_{t,A}$ and $X_{t,B}$, with an index (2015=100) as the unit of measure. To be more concrete, my two time series are indices of the level of employment in countries A and B. I want to sum the two. Of course, it does not make sense to compute $X_{t,A}+X_{t,B}$ directly. I do not have the time series of the level of employment for each country, but I do have the level of employment for one specific period (starting just before the beginning of the time series). I will call these $L_A$ and $L_B$. Can I compute the relative share of employment in each country to sum the two indices? Formally: $X_{t} = w_AX_{t,A} + w_BX_{t,B}$ with $w_A = \frac{L_A}{L_A+L_B}$ and $w_B = \frac{L_B}{L_A+L_B}$ I understand that this is an approximation, but does it make sense to you? Thanks in advance!
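The weighted combination above can be sketched numerically; all figures below are hypothetical:

```python
# Sketch of the proposed weighted aggregation, with made-up numbers.
# L_A, L_B: employment levels in the base period (hypothetical values).
L_A, L_B = 8_000_000, 2_000_000
w_A = L_A / (L_A + L_B)   # 0.8
w_B = L_B / (L_A + L_B)   # 0.2

# Employment indices (2015 = 100) for a few periods, hypothetical.
X_A = [100.0, 102.0, 105.0]
X_B = [100.0, 98.0, 101.0]

# Combined index: X_t = w_A * X_tA + w_B * X_tB
X = [w_A * a + w_B * b for a, b in zip(X_A, X_B)]
print(X)  # in the base period the combined index is also 100
```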
How to sum two indices of time series?
CC BY-SA 4.0
null
2023-03-31T14:56:31.260
2023-03-31T14:56:31.260
null
null
384608
[ "time-series", "descriptive-statistics" ]
611391
1
611399
null
1
53
Is it advisable to standardize/scale the data within each subset of the cross-validation, or is it enough to do it once on the full set? My question comes down to whether or not to include the scaling step in a pipeline. This:

```
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

scaler = StandardScaler()
algorithm = LogisticRegression(random_state=42)
pipe = make_pipeline(scaler, algorithm)
kfold = KFold(10, shuffle=True, random_state=42)
results = cross_val_score(pipe, features, target, cv=kfold)
```

Versus this:

```
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
features_scaled = scaler.fit_transform(features)
model = LogisticRegression(random_state=42)
kfold = KFold(10, shuffle=True, random_state=42)
results = cross_val_score(model, features_scaled, target, cv=kfold)
```

I don't include the data, as I think it is not necessary to reproduce the question. If there are many CV iterations (as in LOO) or the dataset is really big, wouldn't scaling in every fold be a waste of time? Thanks
Standardize data in cross validation
CC BY-SA 4.0
null
2023-03-31T15:02:07.840
2023-03-31T16:42:45.717
2023-03-31T16:42:45.717
359331
381118
[ "python", "cross-validation" ]
611392
2
null
611361
0
null
If you compute the differences before splitting the data, as described in option B, you risk introducing data leakage into your test set. Data leakage occurs when information from the test set is used to inform the model during training, leading to overly optimistic performance metrics. In this case, if a rare feature like "Peanuts" is only present in a few individuals, it may disproportionately influence the training data and result in overfitting. Hence, option A will be a better choice. Additionally, if possible, I suggest that you exclude the features with few observations.
null
CC BY-SA 4.0
null
2023-03-31T15:15:12.140
2023-03-31T15:15:12.140
null
null
384291
null
611394
2
null
605237
1
null
It is certainly possible for a fully connected layer to stumble upon the same solution, or even a better one - but its greater expressive power also reduces the probability that the algorithm will ever find them. Simply increasing the training time, even to infinity, does not guarantee that a neural net algorithm will find a good solution, since it could get trapped in local minima. A fully connected layer without engineered constraints like those discussed in this paper could, in fact, be prone to a wider range of such local minima. On p. 2 the paper even mentions a clear case in point: "By using causal convolutions, we make sure the model cannot violate the ordering in which we model the data..." A fully connected model lacking that constraint would have a greater capacity to violate those orders, leading to just one of several classes of suboptimal solutions. Carefully crafted features of modern neural nets, such as convolutional topologies, are designed to stack the deck against certain classes of poor solutions by baking domain knowledge right into the architecture. Without them, navigating to ideal solutions becomes more improbable and dependent on sheer luck in some cases.
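To illustrate the ordering constraint the paper describes, here is a minimal sketch of a 1-D causal convolution (not the paper's implementation): output at position $t$ can only depend on inputs at positions $\le t$.

```python
# Minimal sketch of a 1-D causal convolution: output[t] depends only on
# inputs at positions <= t, so the model cannot "look into the future".
def causal_conv1d(x, kernel):
    k = len(kernel)
    out = []
    for t in range(len(x)):
        acc = 0.0
        for j in range(k):
            i = t - j            # only current and past indices are used
            if i >= 0:
                acc += kernel[j] * x[i]
        out.append(acc)
    return out

x = [1.0, 2.0, 3.0, 4.0]
y = causal_conv1d(x, kernel=[0.5, 0.25])

# Changing a *future* input leaves earlier outputs untouched:
x2 = x[:2] + [99.0, 99.0]
y2 = causal_conv1d(x2, kernel=[0.5, 0.25])
print(y[:2] == y2[:2])  # prints True
```

A fully connected layer has no such structural guarantee; it would have to learn (or fail to learn) this constraint from data.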
null
CC BY-SA 4.0
null
2023-03-31T15:20:19.557
2023-03-31T15:20:19.557
null
null
54091
null
611395
1
null
null
0
17
Thanks in advance for the help! Our goal is to generate groups of participants based on their vowel systems. Their vowel systems contain two nested variables: - The vowel classes themselves (of which there are 14 types, each represented by a word containing the vowel type, e.g. "dress" and "thought") - The vowel coordinates, represented by 2 frequencies in the bark scale (a linearization of Hertz) We would like to run this through a divisive clustering analysis algorithm. If we change our data from the first format (with nested variables) to the second format (without nested variables), will we lose anything of significance? If so, what alternative method should we use? [](https://i.stack.imgur.com/qn19M.jpg)
How to group participants with nested variables (divisive clustering analysis)
CC BY-SA 4.0
null
2023-03-31T15:38:05.770
2023-03-31T15:38:05.770
null
null
384612
[ "mathematical-statistics", "clustering", "nested-data", "research-design" ]
611396
1
null
null
0
21
I'm reviewing the literature on change point detection, and if I understand the concepts correctly, this methodology identifies various points that don't belong (outliers, anomalies, etc.). However, I'm very interested in a sustained change in trend. For example, if a piecewise linear function described the underlying process, I would want the detected change point to mark the start of a fairly long line segment (nontrivial coverage of the domain in X), as opposed to a 'flash in the pan' sort of occurrence. Is change point detection the right methodology for my use case?
Change point detection for trends (as opposed to outliers)
CC BY-SA 4.0
null
2023-03-31T15:42:13.620
2023-03-31T19:14:11.440
2023-03-31T19:14:11.440
11887
288172
[ "time-series", "trend", "change-point" ]
611397
1
null
null
0
6
As part of my Master's dissertation, I am using photos and narratives to elicit a specific emotion. I have had 12 photos rated on various aspects of the emotion e.g. valence, arousal etc. I need pictures that are highest on some of these aspects and lowest on others. These photos should also have high inter-rater agreement for the ratings. How can I compare the photos on multiple aspects and select the best 8? Even if I calculate an agreement percentage for each photo there would be multiple agreement percentages for various aspects, how can I consolidate these and compare the photos meaningfully?
Inter-rater agreement for multiple factors for multiple pictures?
CC BY-SA 4.0
null
2023-03-31T16:00:32.107
2023-03-31T16:00:32.107
null
null
384615
[ "experiment-design", "psychology" ]
611398
2
null
611381
7
null
Linearity of expectation has everything to do with algebra. The concept is quite intuitive, though, because we often think in linear categories and we solve many linear equations in school. I am not sure if it is possible to answer your question without equations, but I'll try to make it intuitive nonetheless.

First of all, let's start with linearity. By ["linear"](https://math.stackexchange.com/questions/331460/concept-of-linearity) we mean here the $f(x) = a + b x$ functions. To understand the following equations you only need to remember that [multiplication is distributive](https://www.khanacademy.org/test-prep/praxis-math/praxis-math-lessons/praxis-math-algebra/a/gtp--praxis-math--article--algebraic-properties--lesson), so $cx + cy = c(x+y)$, and the definitions of things like the expected value and the joint distribution.

## Multiplication by a constant

For a random variable $X$ and a constant $c$, the following holds

$$ E[cX] = cE[X] $$

Example: According to some random source on the internet, the average height of women worldwide is 163 cm. This is the same as saying that the average height is 1.63 m, as 1 meter = 100 centimeters. Intuitively this is what we would expect, but to see why it holds, a little bit of algebra is needed. First, recall that for a discrete† random variable, the expected value is

$$ E[X] = \sum_x x\, p(x) $$

then we have

$$ \begin{align} E[cX] &= \sum_x c \, x \,p(x) \\ &= c \sum_x x \,p(x) \\ &= cE[X] \end{align} $$

## Adding a constant

$$ E[X + c] = E[X] + c $$

Example: according to another random internet source, the average annual salary in the US is \$53,490. Now imagine that the US decides to introduce [universal basic income](https://en.wikipedia.org/wiki/Universal_basic_income) and will give every citizen \$2,000 every month. How would the average income change? It would be \$53,490 + \$24,000 (12 months).
It's also quite intuitive: every person would get \$24,000 extra, so the total amount of money earned by US citizens would be their salaries plus \$24,000 times the number of citizens. To get an average from it, divide the total by the number of citizens. Everyone got the same extra amount, so on average everyone also got that much more money; hence the average income changes by exactly that amount.

$$ \begin{align} E[X + c] &= \sum_x (x + c) \, p(x) \\ &= \sum_x x \,p(x) + \sum_x c \, p(x) \\ &= \sum_x x \,p(x) + c \sum_x p(x) & \text{move } c \text{ outside of the summation}\\ &= \sum_x x \,p(x) + c & \text{because by definition } \sum_x p(x) = 1\\ &= E[X] + c \end{align} $$

In the universal basic income example, $\sum_x c$ would be equal to \$24,000 times the number of citizens, and $p(x) = 1/N$, where $N$ is the number of citizens. But as the math above shows, it would be the same if $p(x)$ differed for every $x$‡.

## Sum of two random variables

If $X$ and $Y$ are two random variables, then

$$ E[X + Y] = E[X] + E[Y] $$

Example: Let's say that you are interested in your average daily consumption of salt and sugar (daily totals in grams). You could calculate this by looking at your diet every day: for every meal, checking how much sugar it contained, adding it to the amount of salt it contained, summing up to daily totals, and then taking the average of those daily totals. Alternatively, you could create an Excel sheet where in one column you collect the amount of salt per meal and in another the amount of sugar, then calculate separate daily totals for consumed sugar and salt, calculate the averages of those, and sum the two averages. They would be the same; the order in which you sum them does not matter.
$$ \begin{align} E[X + Y] &= \sum_x \sum_y (x + y) \, p(x, y) \\ &= \sum_x \sum_y x \, p(x, y) + \sum_x \sum_y y \, p(x, y) \\ &= \sum_x x \sum_y p(x, y) + \sum_y y \sum_x p(x, y) \\ &= \sum_x x \, p(x) + \sum_y y \, p(y) & \text{by the law of total probability} \\ &= E[X] + E[Y] \end{align} $$

where $p(x, y)$ is the [joint distribution](https://en.wikipedia.org/wiki/Joint_probability_distribution) of $X$ and $Y$. This is possible by the [law of total probability](https://en.wikipedia.org/wiki/Law_of_total_probability), which tells us that summing over all possible values gives us the marginal distribution, $\sum_x p(x,y) = p(y)$.

Finally, keep in mind that the expected value is linear, but this is not the case for every statistic. For example, the [median isn't](https://stats.stackexchange.com/questions/232261/how-can-i-prove-that-the-median-is-a-nonlinear-function).

---

† The same results would hold for continuous variables if you replace things like $\sum_x x \, p(x)$ with $\int_x x \, p(x) \, dx$, because [integrals follow similar rules](https://www.mathsisfun.com/calculus/integration-rules.html) in this case.

‡ If the "every person" and "every day" examples don't appeal to you because the $p(x)$ probabilities they use are uniform, consider that you could group the people (or days) in the examples into some groups and calculate things per group. In such a case, $p(x)$ would not be $1/N$ anymore, but rather $n_i/N$, where $n_i$ is the number of people in each group. Calculating things with such groups [would be the same as with raw data](https://stats.stackexchange.com/questions/202752/how-do-you-fit-a-poisson-distribution-to-table-data/202754#202754) if all the people within the group are the same.
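The three properties above can also be checked numerically on a toy discrete distribution (all values made up for illustration):

```python
# Toy numerical check of linearity of expectation.
xs = [0, 1, 2]
px = [0.2, 0.5, 0.3]     # p(x), sums to 1
ys = [10, 20]
py = [0.4, 0.6]          # p(y); X and Y taken as independent for simplicity

def E(values, probs):
    """Expected value of a discrete random variable: sum of value * probability."""
    return sum(v * p for v, p in zip(values, probs))

c = 3.0
EX, EY = E(xs, px), E(ys, py)
E_cX = E([c * x for x in xs], px)                 # E[cX]
E_Xc = E([x + c for x in xs], px)                 # E[X + c]
E_XY = sum((x + y) * p1 * p2                      # E[X + Y] via the joint p(x)p(y)
           for x, p1 in zip(xs, px)
           for y, p2 in zip(ys, py))

ok = (abs(E_cX - c * EX) < 1e-12
      and abs(E_Xc - (EX + c)) < 1e-12
      and abs(E_XY - (EX + EY)) < 1e-12)
print(ok)  # prints True
```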
null
CC BY-SA 4.0
null
2023-03-31T16:06:54.097
2023-04-01T09:55:39.730
2023-04-01T09:55:39.730
35989
35989
null
611399
2
null
611391
3
null
If you standardize the data on the full set and then perform cross-validation, you are effectively leaking information from the validation set into the training set. This can lead to overfitting and overly optimistic performance estimates. I recommend fitting the scaler within each fold of the cross-validation, so that the validation data never influence the scaling parameters.
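To make the per-fold recipe concrete, here is a dependency-free sketch (hypothetical numbers) of what the pipeline approach does inside a single fold: fit the scaling statistics on the training part only, then apply them unchanged to the held-out part.

```python
# Sketch: standardize inside a fold by fitting mean/std on the training part
# only, then applying those training statistics to the held-out part.
def mean_std(values):
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return m, var ** 0.5

def scale(values, m, s):
    return [(v - m) / s for v in values]

data = [1.0, 2.0, 3.0, 4.0, 5.0, 100.0]   # last point held out for validation
train, valid = data[:-1], data[-1:]

m, s = mean_std(train)              # statistics from the training fold only
train_scaled = scale(train, m, s)
valid_scaled = scale(valid, m, s)   # validation never influences m or s

print(round(m, 2), round(s, 2))  # prints 3.0 1.41: unaffected by the held-out 100.0
```

This is what `make_pipeline(StandardScaler(), ...)` inside `cross_val_score` does for you automatically, fold by fold.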
null
CC BY-SA 4.0
null
2023-03-31T16:16:03.713
2023-03-31T16:16:03.713
null
null
23801
null
611400
1
611411
null
1
35
I'm new to mixed effect models, but I think I should use a crossed random effect model: V ~ A + (1|B) + (1|C) + (1|D) + (1|P). Could you please tell me if I'm right? I ran an experiment with 10 participants. Each participant had to complete several trials. My analyses include: - 1 continuous dependent variable V. - 5 categorical independent variables: A (2 categories), B (8 categories), C (2 categories), D (2 categories), P (the participant, 10 categories). The combination of these 5 variables allows each trial to be uniquely described (e.g., trial X had: A category 1, B category 6, C category 1, D category 2, P participant 10). To note, each participant had the same number of trials with each category of the variables B, C and D, BUT the number of trials with each of the two categories of the variable A differed between participants. I'm interested in answering the question: does A affect V? But I want to control for the potential effects B, C, D and P might have on V. Any help would be appreciated!
Am I right using this mixed effect model?
CC BY-SA 4.0
null
2023-03-31T16:36:29.103
2023-03-31T18:01:43.113
null
null
384522
[ "mixed-model" ]
611401
1
null
null
0
7
Investigating the economic impact of transport infrastructure investment in South Africa. This is mini-dissertation research in 2023.
Investigating the economic impact of transport infrastructure investment in South Africa
CC BY-SA 4.0
null
2023-03-31T16:40:06.170
2023-03-31T16:40:06.170
null
null
384617
[ "regression", "logistic", "forecasting", "economics", "environmental-data" ]
611402
1
null
null
0
12
I have a model that works well for predicting the target variable $Y$ at time $t$ based on the input variables $X$. However, I need to predict $Y$ at $t+1$, for which I don't have the data (the $X$ at $t+1$), so I decided to fill the void with Monte Carlo simulation. However, when feeding the simulated $X$ at $t+1$ into the model trained above, the performance deteriorates. I'm not sure which part could possibly go wrong and where to start my investigations. Please advise. Thank you!
Monte Carlo Simulation deteriorates model predictions?
CC BY-SA 4.0
null
2023-03-31T16:42:49.367
2023-03-31T17:43:22.667
2023-03-31T17:43:22.667
29617
331633
[ "monte-carlo" ]
611403
2
null
554481
1
null
I believe it is often the case that the rank of $X_i$ within the whole sample $X$ is denoted as $R_i$, i.e. (my notation), $$R_i=\operatorname{rank}_{X_i \in X}(X_i|X).$$ This often appears in the derivation of the Wilcoxon signed-rank and rank-sum tests, so those might be a good source for alternative notation as well.
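For concreteness, such within-sample ranks could be computed as follows (a sketch assuming no ties; Wilcoxon-type tests assign midranks when ties occur):

```python
# Sketch: the rank R_i of X_i within the whole sample X (no ties assumed).
def ranks(sample):
    # order[p] is the index of the (p+1)-th smallest value
    order = sorted(range(len(sample)), key=lambda i: sample[i])
    r = [0] * len(sample)
    for position, i in enumerate(order, start=1):
        r[i] = position
    return r

X = [3.2, 1.5, 4.8, 2.9]
print(ranks(X))  # prints [3, 1, 4, 2]
```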
null
CC BY-SA 4.0
null
2023-03-31T16:46:08.193
2023-03-31T16:46:08.193
null
null
60613
null
611404
2
null
611197
1
null
It looks like in your example the t-test interval is not adjusted for multiplicity. The quote in bold implies that a Bonferroni-adjusted t-test interval would be wider than TukeyHSD's; that's because Bonferroni's procedure is more crude. But indeed, without any adjustment at all, the t-test interval will be narrower than with the smart adjustment by HSD. So: raw t-test interval size < TukeyHSD's < Bonferroni-adjusted t-test interval.
null
CC BY-SA 4.0
null
2023-03-31T16:50:05.767
2023-03-31T16:50:05.767
null
null
9619
null
611405
2
null
611381
4
null
In general, the properties of the expected value for a discrete random variable arise directly from the properties of the [(weighted) arithmetic mean](https://en.wikipedia.org/wiki/Arithmetic_mean), because the expected value of a random variable is defined precisely as the weighted arithmetic mean of the possible outcomes of that random variable, weighted by the probabilities of those outcomes. The definition of the expected value for a continuous random variable is a natural extension of this principle, if you [understand integrals](https://www.youtube.com/watch?v=WUvTyaaNkzM&list=PL0-GT3co4r2wlh6UHTUeQsrf3mlS2lk6x) well enough. The extension is particularly interesting in the ["frequentist" conception of probability](https://en.wikipedia.org/wiki/Frequentist_probability) as long-run proportions of events under the hypothetical ability to re-run some data generating process an infinite number of times. So the question here amounts to: why is linearity of the arithmetic mean intuitive? Consider some real-world quantity with plenty of random variation, such as the body lengths of [snow crabs in the Bering Sea](https://www.cnn.com/2022/10/16/us/alaska-snow-crab-harvest-canceled-climate/index.html). Brushing aside all mathematical rigor, it is reasonable to "expect" that the body length of any random crab should be around the average body length of that type of crab in that region. So if you were to magically double all the body lengths of all the crabs, then naturally the average body length should double (linearity!), and therefore that the "expected" body length doubles accordingly. Similarly, you might be interested in the weights of luggage items being loaded onto an airplane, because you are interested in the total takeoff weight of the airplane. 
As with the crabs, if you double the weight of every item loaded on the plane, it is completely natural that you should double the average weight of those items, and therefore that you should double the "expected" weight of any one item. Furthermore, the total weight of all items is just the sum of the individual item weights. Here we can turn to the frequentist interpretation of probability for intuition about expected values. Imagine that there is some complicated and unobservable (and therefore random from our perspective) data-generating process by which the weight of each luggage item is chosen. Consider the average outcome of this data-generating process. The total weight in this average outcome is naturally the sum of the individual weights. Because this is just a sum, we can use the basic properties of the arithmetic mean and state that this average total weight should also be the sum of the individual luggage-weight averages. Finally, by the equivalence between arithmetic means and expected values, we should conclude that the expected value of the sum is the sum of the expected values. To extrapolate this reasoning to categorical data, it might help to remember that we can think about a categorical random variable as a collection of mutually exclusive binary indicator variables. More abstractly, we can think of categorical data as vectors in [barycentric space](https://en.wikipedia.org/wiki/Barycentric_coordinate_system). In both cases, we have extended our scenario from univariate to multivariate, so the interpretation of the "expectation" [must extend accordingly](https://math.stackexchange.com/a/2704238/117452) to mean the expected value of each element, and the intuition from above holds in that case.
null
CC BY-SA 4.0
null
2023-03-31T16:56:15.913
2023-03-31T19:42:02.383
2023-03-31T19:42:02.383
36229
36229
null
611406
1
null
null
0
12
Newbie to the confusion matrix here who ended up being very confused about potential evaluation metrics. I want to understand the detection skill that groups of human annotators possess when it comes to labeling certain non-trivial characteristics of words in statements. Only 10% of the words in a statement possess this characteristic, so the distribution is highly unbalanced. I mainly care about whether the annotators are getting those few instances "right" (a ground truth exists) but want to penalize random guessing. Initially, I planned to evaluate this detection skill using the F1 score. To my surprise, many of the annotators did not spot the characteristic at all in some of the statements (so TP = 0), which in turn makes the F1 score invalid. It seems like the F1 score does not really reflect how "bad" my human annotators actually were and hasn't been the best evaluation metric from the start. After reading some more I stumbled over the MCC and d', which admittedly I have never heard of before. As the positive class is much more important than the other, it seems like MCC is also not really appropriate. I also looked into proper scoring rules, but with humans there's no predicted probability. I am thinking about reporting TPR and d' but I'm not sure whether this is really the best solution. Any thoughts or relevant reads?
MCC? dPrime? Which evaluation metric for the skill of detecting highly unbalanced classes
CC BY-SA 4.0
null
2023-03-31T17:04:23.823
2023-03-31T17:04:23.823
null
null
384620
[ "classification", "binary-data", "unbalanced-classes", "metric" ]
611407
1
null
null
1
23
I want to compare the responses from three levels of one factor. I do lengthy and fairly expensive experiments, and cannot increase my sample size (often 3 measurement replicates on the same solution under different conditions). [](https://i.stack.imgur.com/CpYQS.png) My first thought was to use one-way ANOVA, but checking ANOVA's assumptions (normal sample distribution and equal variance), I found that my data don't really support its use. However, because of prior knowledge of the process that produces these samples, I know their population distributions should be normal and their variances equal. Can I therefore use ANOVA (and t-tests, for that matter), despite my sample indicating otherwise? Or would it be better to use a non-parametric test?
The use of ANOVA for small sample sizes with prior knowledge of the population
CC BY-SA 4.0
null
2023-03-31T17:10:42.360
2023-03-31T17:10:42.360
null
null
384613
[ "statistical-significance", "anova" ]
611408
1
null
null
1
133
I'm training a neural network on a regression problem. I wanted to compare between (1) Gaussian negative log likelihood (GNLL) loss (the output of the network is the mean and log variance) and (2) the MSE loss (the output of the network is the mean only). I noticed that when using MSE, the MSE loss gets close to zero. However, when I use GNLL loss, MSE decreases but stays large. Why is it difficult for the neural network to reduce MSE when using GNLL as a loss function?
Gaussian Negative log likelihood loss vs MSE
CC BY-SA 4.0
null
2023-03-31T17:34:12.937
2023-03-31T23:54:58.270
2023-03-31T23:54:58.270
17072
363945
[ "neural-networks", "optimization", "loss-functions", "mse" ]
611409
2
null
606258
1
null
Each feature of yours is a pixel. Are there any pixels that you really feel could be removed without sacrificing information? It might be that you know the content of interest is always in the middle, so you might be willing to crop the images to include only the middle $128\times128$ pixels, which would be a $75\%$ reduction in the features. If, however, you could have important information anywhere in the image, then you probably want to keep it. Thus, a reasonable stance on feature selection is not to do it. You risk sacrificing information that determines the outcome without an obvious upside. Sure, it is possible to overfit when there are many features, but it is possible to underfit when you leave out features. Especially in a situation where the interactions between features are likely to be the determinants of the image content, it sure seems like you would be sacrificing useful information by discarding pixels that might be relevant. Many approaches are possible to extract features from images that might be in a lower dimension than the pixels themselves. The comments mention HOG (Histogram of Oriented Gradients) and SIFT (Scale-invariant Feature Transform) as possibilities. Fourier and wavelet transformations are other possibilities. As far as whether or not your $73\%$ accuracy represents a good score, it is hard to say. If $80\%$ of the images belong to one category, then $73\%$ is quite poor. If the best accuracy anyone has achieved so far is $74\%$, then your performance seems pretty good.
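The center-crop idea mentioned above can be sketched as follows (a toy "image" as nested lists; a real pipeline would use an image library):

```python
# Sketch of the center-crop idea: keep only the middle region of each image.
def center_crop(image, size):
    """image: list of rows (lists of pixel values); size: target height/width."""
    h, w = len(image), len(image[0])
    top = (h - size) // 2
    left = (w - size) // 2
    return [row[left:left + size] for row in image[top:top + size]]

# A 4x4 "image"; cropping to 2x2 keeps 4 of 16 pixels, a 75% feature reduction,
# just like cropping 256x256 images down to the middle 128x128.
img = [[r * 4 + c for c in range(4)] for r in range(4)]
print(center_crop(img, 2))  # prints [[5, 6], [9, 10]]
```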
null
CC BY-SA 4.0
null
2023-03-31T17:43:48.830
2023-03-31T17:43:48.830
null
null
247274
null
611410
2
null
437572
3
null
Much of the apparent issue with class imbalance seems to come from using an improper accuracy performance metric that can give an impressive-looking $98\%$ when $99\%$ of the observations belong to one class, meaning that predicting that one class every time would yield higher accuracy. Consequently, the best move is probably to leave the data as they are and use better measures of performance, such as log-loss (negative binomial log likelihood) and Brier score (mean squared error). If you must fiddle with the data, an advantage that oversampling has over undersampling is that oversampling does not discard data. A scenario where undersampling can make sense is when it comes to data collection, which is the topic of the King and Zeng (2001) paper mentioned [here](https://stats.stackexchange.com/a/559317/247274). (There is a ton of good information in that link.) However, that paper assumes that you have yet to go through the trouble of collecting data. Once you have done so, I see minimal reason to discard data. Another scenario where it might make sense to undersample is [if you cannot fit the entire data set in memory.](https://stats.stackexchange.com/a/611191/247274) This would be true after the data collection, though the better remedy for this might be to go acquire better hardware.
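Minimal sketches of the two proper scoring rules mentioned above, evaluated on made-up predictions:

```python
import math

# Brier score: mean squared error between outcomes (0/1) and predicted probabilities.
def brier_score(y_true, p_pred):
    return sum((y - p) ** 2 for y, p in zip(y_true, p_pred)) / len(y_true)

# Log-loss: negative mean Bernoulli log likelihood.
def log_loss(y_true, p_pred):
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(y_true, p_pred)) / len(y_true)

y = [1, 0, 0, 0, 0]                  # imbalanced: one positive in five
p_blunt = [0.2] * 5                  # always predicts the base rate
p_sharp = [0.9, 0.1, 0.1, 0.1, 0.1]  # confident and mostly right

print(brier_score(y, p_blunt), brier_score(y, p_sharp))
print(log_loss(y, p_blunt), log_loss(y, p_sharp))
```

Both rules reward the sharper, honest probability forecast; unlike threshold-based accuracy, they never benefit from always predicting the majority class.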
null
CC BY-SA 4.0
null
2023-03-31T18:00:04.743
2023-03-31T18:00:04.743
null
null
247274
null
611411
2
null
611400
1
null
You're describing a [saturated random effects model](https://stats.stackexchange.com/questions/283/what-is-a-saturated-model). You'll need to remove at least one of the random effects. I agree that you should adjust for Participant. With so few trials per participant, you're probably better off controlling for your other independent variables as fixed effects.
null
CC BY-SA 4.0
null
2023-03-31T18:01:43.113
2023-03-31T18:01:43.113
null
null
288142
null
611412
2
null
263619
1
null
There should be a relationship between $\ell_2$ loss and accuracy. The $\ell_2$ loss measures the difference between the predicted probabilities of class membership and the actual class membership. As these predicted probabilities get closer to the actual class, the $\ell_2$ loss is going to decrease, and the probability of the true class having the highest probability will increase. In the extreme, when all predicted probabilities are either zero or one and assign the probability of one to the correct category, accuracy will be perfect, and the $\ell_2$ loss will be zero. However, you could tweak the probabilities to give only $0.7$ probability to the correct class with the remaining $0.3$ distributed between the other classes, and this would raise the $\ell_2$ loss without affecting the accuracy (since the correct category still has the highest probability), so $\ell_2$ and accuracy do not have to move in perfect unison. I like the analogy [here](https://stats.stackexchange.com/a/339993/247274) about sprinters having strong legs, yet leg strength not being a perfect proxy of sprinting speed.
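The $0.7$ example above can be made concrete (a small sketch; the probability vectors are made up):

```python
# Two predictive distributions with the same accuracy (argmax picks the right
# class in both) but different l2 loss.
def l2_loss(p_pred, onehot):
    return sum((p - y) ** 2 for p, y in zip(p_pred, onehot))

def predicted_class(p_pred):
    return max(range(len(p_pred)), key=lambda i: p_pred[i])

truth = [1, 0, 0]            # class 0 is correct (one-hot)
confident = [1.0, 0.0, 0.0]
hedged = [0.7, 0.2, 0.1]

print(predicted_class(confident), predicted_class(hedged))  # both 0: same accuracy
print(l2_loss(confident, truth), l2_loss(hedged, truth))    # 0.0 vs a positive loss
```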
null
CC BY-SA 4.0
null
2023-03-31T18:08:35.370
2023-03-31T21:07:04.587
2023-03-31T21:07:04.587
247274
247274
null
611413
1
null
null
1
30
I had this question from a quiz in my stats class: Which of the following is NOT a correct way to specify the assumptions needed for inference about the parameters of a Simple Linear Regression Model? A. The experimental units are randomly selected, the responses are Normally distributed, and there is a constant variance of the points around the line. B. The responses are random and Normally distributed with a mean of zero and constant variance. C. The errors are independent and normally distributed with a mean of zero and constant variance. D. $ε \sim \mbox{iid } N(0, σ)$ Now, of course the correct response my professor was looking for was B. But, I was under the impression that the error terms do not have to be normally distributed thus D should also be a correct answer?
Basic question about normality assumption for error terms in SLR
CC BY-SA 4.0
null
2023-03-31T18:10:36.730
2023-03-31T18:56:46.053
2023-03-31T18:54:37.950
53690
383128
[ "regression", "self-study", "normality-assumption" ]
611414
2
null
541101
0
null
This is the point of assessing probability calibration. If your predicted probabilities are calibrated, this means that an event predicted to happen with a probability of $p$ really happens with probability $p$. If you lack calibration, then the predicted probabilities are, in some sense, not telling the truth, and getting the truth out of a liar might be difficult. It is possible to apply techniques like isotonic regression to transform your predictions to have good calibration, but this requires additional modeling that might not be successful (e.g., overfitting can happen in this step, too). If you lack calibration, it is not a given that you can get it. Your comments mention that you want to discuss this just in terms of the probability instead of a regression, but $P(X = 1\vert p(X) = p)$ is a conditional expected value for a binary event, which is what regression does.
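As a rough sketch of what recalibration can look like (simulated data, not any particular application), scikit-learn's `IsotonicRegression` learns a monotone map from miscalibrated scores back toward observed event frequencies:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
p_true = rng.uniform(0.05, 0.95, size=5000)          # true event probabilities
y = (rng.uniform(size=p_true.size) < p_true) * 1.0   # observed binary outcomes

# A deliberately miscalibrated score: monotone in p_true but "lying" about it
p_raw = p_true ** 2

# Isotonic regression maps raw scores to empirical event frequencies
iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
p_cal = iso.fit_transform(p_raw, y)

# The recalibrated scores sit much closer to the truth than the raw ones
print(np.abs(p_raw - p_true).mean(), np.abs(p_cal - p_true).mean())
```

Note that because isotonic regression is itself fit to data, it can overfit; in practice it is fit on a held-out calibration set.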
null
CC BY-SA 4.0
null
2023-03-31T18:14:43.483
2023-04-01T15:24:50.973
2023-04-01T15:24:50.973
247274
247274
null
611415
2
null
611358
3
null
Unlike parametric models that are defined by a fixed number of parameters, a Gaussian process is defined by its mean and covariance functions. While it is common to use a prior where those functions have a simple closed form (such as a squared exponential covariance function), once you update the model with arbitrary data, those functions no longer have this simple form. The updated mean and covariance functions depend in some complicated way on all the previous data points, so when you perform sequential updating, each step has increasing complexity. Suppose for example that the covariance function after $n$ updates is $k^{(n)}(x,x')$, and you observe the next data point $x_{n+1}$. The updated covariance $k^{(n+1)}(x,x')$, as you pointed out, is $$k^{(n+1)}(x,x')=k^{(n)}(x,x') -k^{(n)}(x,x_{n+1})(\sigma^2 + k^{(n)}(x_{n+1},x_{n+1}))^{-1} k^{(n)}(x',x_{n+1})$$ Now evaluating this function at arbitrary points $x$ and $x'$ requires four different evaluations of $k^{(n)}(\cdot,\cdot)$, namely $k^{(n)}(x,x_{n+1})$, $k^{(n)}(x',x_{n+1})$, $k^{(n)}(x_{n+1},x_{n+1})$ and $k^{(n)}(x',x)$. But each of those function evaluations will itself require that you calculate the previous covariance $k^{(n-1)}(\cdot,\cdot)$ at another set of points; for example, to evaluate $k^{(n)}(x',x_{n+1})$ you will need $k^{(n-1)}(x_n,x_{n+1})$ and so on. This naïve recursive calculation will therefore require on the order of $4^n$ function evaluations, which is unlikely to be very efficient. Of course the sequential calculation will eventually give you exactly the same result as updating once with all the data, as those are mathematically equivalent.
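The equivalence can be checked numerically. Here is a small NumPy sketch (an arbitrary unit-scale RBF kernel and made-up inputs, not any particular library's API) where sequentially applying the rank-one covariance update above, restricted to a fixed set of points, reproduces the batch posterior covariance:

```python
import numpy as np

def rbf(a, b):
    # simple squared-exponential kernel with unit length scale and variance
    return np.exp(-0.5 * (np.asarray(a)[:, None] - np.asarray(b)[None, :]) ** 2)

sigma2 = 0.1                          # observation noise variance
X = np.array([0.3, 1.2, 2.5])         # training inputs, observed one at a time
xs = np.array([0.0, 1.0])             # query points

# Batch posterior covariance at the query points
K = rbf(X, X) + sigma2 * np.eye(X.size)
k_batch = rbf(xs, xs) - rbf(xs, X) @ np.linalg.solve(K, rbf(X, xs))

# Sequential updating: track the joint covariance over the query points and
# the future data points, then condition on one noisy observation at a time
pts = np.concatenate([xs, X])
C = rbf(pts, pts)
for i in range(X.size):
    j = xs.size + i                   # column index of the newly observed point
    C = C - np.outer(C[:, j], C[j, :]) / (sigma2 + C[j, j])
k_seq = C[:xs.size, :xs.size]

print(np.allclose(k_batch, k_seq))    # True
```

The trick that makes the sequential version tractable here is keeping the covariance only on a finite grid of points; to evaluate the posterior at a *new* point you would indeed face the recursion described above.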
null
CC BY-SA 4.0
null
2023-03-31T18:19:36.700
2023-04-01T12:41:56.403
2023-04-01T12:41:56.403
348492
348492
null
611416
1
null
null
1
14
Given a questionnaire whose questions have different numbers of response options, say some questions rated 1-2 and others rated 1-5, could I use polytomous models such as a partial credit model to study the questionnaire data?
Must questionnaire questions have the same number of options for IRT models to work?
CC BY-SA 4.0
null
2023-03-31T18:33:17.550
2023-03-31T18:33:17.550
null
null
369789
[ "item-response-theory", "mirt" ]
611417
1
null
null
4
233
I've read that ChatGPT will sometimes give different answers to the same prompt. In other words, there is an element of randomness. Where does this randomness come from? Is there some sort of component in transformers or in ChatGPT's implementation specifically that's like a variational auto-encoder where you can feed in a randomized feature vector as input to trigger different results? I also saw in OpenAI's InstructGPT paper [Training language models to follow instructions with human feedback](https://cdn.openai.com/papers/Training_language_models_to_follow_instructions_with_human_feedback.pdf), upon which ChatGPT is based, that there is an infographic (see below) saying that answers are sampled when given a prompt during the "reward model" training step. I assume that you would only be able to sample responses if there were an element of randomness somewhere. [](https://i.stack.imgur.com/zOQsw.png)
Source of randomness in ChatGPT
CC BY-SA 4.0
null
2023-03-31T18:38:33.413
2023-05-22T10:47:37.643
2023-03-31T18:49:58.260
8401
8401
[ "autoencoders", "chatgpt" ]
611418
1
null
null
2
113
I'm trying to calculate a 2x2 covariance matrix in Cartesian coordinates that represents the amount of uncertainty when rotating and translating a point in 2D space, $\Sigma = \begin{pmatrix} \sigma_{xx}&\sigma_{xy}\\ \sigma_{yx}&\sigma_{yy} \end{pmatrix}$ The position of a point in 2D space is represented by a vector, $p= \begin{pmatrix} x\\y \end{pmatrix}$ I then rotate and translate this point to a new position as follows, $p'=Rp + t$ where $R$ is a 2D rotation matrix given by, $R= \begin{pmatrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{pmatrix}$ and t is a translation vector given by, $t= \begin{pmatrix} t_x\\t_y \end{pmatrix}$ If I assume independent Gaussian noise on the translations $(t_x, t_y)$ and the rotation angle $\theta$, then I can perform a simple Monte Carlo simulation to see what the true uncertainty should look like, by passing many sampled points through the rotation and translation function. [](https://i.stack.imgur.com/IGswB.png) In the above image, the blue point is the original 2D point, the green point is the rotated and translated 2D point, and the small red dots are the samples with random noise applied to $t_x, t_y$ and $\theta$. As such, I know that my covariance matrix is a function of the amount of uncertainty in rotation and translation. However, I'm struggling to produce an analytical expression for this covariance matrix (the true distribution is nonlinear due to rotation, but I'm content with a linear representation of the covariance matrix that assumes small amounts of rotation) So my question is, given that I know the equations to move a point to a new position using rotation and translation, how do I use these equations to derive an analytical expression for the amount of uncertainty in this rotation and translation represented by a 2x2 covariance matrix in 2D Cartesian coordinates?
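The Monte Carlo check described above can be sketched in NumPy (the noise levels here are arbitrary stand-ins), alongside the standard first-order propagation $\Sigma \approx \sigma_\theta^2\,(R'(\theta)p)(R'(\theta)p)^\top + \sigma_t^2 I$, which is the kind of linearized answer I am content with:

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([2.0, 1.0])                  # original point
theta, tx, ty = 0.5, 1.0, -0.5            # nominal rotation and translation
s_theta, s_t = 0.05, 0.1                  # assumed noise standard deviations

# Monte Carlo: push noisy (theta, tx, ty) samples through the transform
n = 200_000
th = theta + s_theta * rng.standard_normal(n)
t = np.column_stack([tx + s_t * rng.standard_normal(n),
                     ty + s_t * rng.standard_normal(n)])
R = np.stack([np.cos(th), -np.sin(th),
              np.sin(th),  np.cos(th)], axis=1).reshape(n, 2, 2)
samples = (R @ p) + t
cov_mc = np.cov(samples.T)                # empirical 2x2 covariance

# First-order propagation: d(Rp + t)/dtheta = R'(theta) p, d(Rp + t)/dt = I
dRp = np.array([-np.sin(theta) * p[0] - np.cos(theta) * p[1],
                 np.cos(theta) * p[0] - np.sin(theta) * p[1]])
cov_lin = s_theta**2 * np.outer(dRp, dRp) + s_t**2 * np.eye(2)

print(np.max(np.abs(cov_mc - cov_lin)))   # small when s_theta is small
```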
Covariance matrix of rotation and translation applied to a point
CC BY-SA 4.0
null
2023-03-31T18:50:09.410
2023-04-24T12:49:42.650
null
null
384626
[ "normal-distribution", "covariance", "covariance-matrix", "multivariate-normal-distribution" ]
611419
2
null
611413
0
null
This is a strange question! What we call "ordinary least squares" (OLS) your prof calls "SLR" which is fine. Just like you learned about the reasonableness of the CLT when making inference on a sample proportion when $n$ is reasonably large (>30 is often quoted), regression models have a central limit theorem too. When $n$ is big, we can ignore the exact distribution of the residual. But if $n$ is small, regression is used as an exact procedure, and we are making finite sample inference (don't confuse this with "finite population"). As an early student, this is too subtle and can be confusing. The main point is this: choice B is the wrongest choice - you already know this, but here's why: we don't require the response to be normally distributed; it's the error term that has a normal distribution. The responses definitely don't need a zero mean, and any constant variance they have is a consequence of the error term having constant variance - or homoscedasticity, as we call it. Recall $Y = a + bX + e$. If $X$ and $e$ are normally distributed, it just so happens that their sum will be too. But if $X$ is sampled according to something different, I don't care what $Y$ looks like, $e$ needs to be normal.
null
CC BY-SA 4.0
null
2023-03-31T18:56:46.053
2023-03-31T18:56:46.053
null
null
8013
null
611420
1
null
null
0
7
I have some left-censored assay data. Several points in the response variable are below the assay's detection limit. I ran a Tobit model on the data, so far, so good. However, when I wanted to plot the raw data vs. predictions, the resulting prediction line looks wonky: it runs below all the non-censored data points. Is this to be expected? Should I try a different way to handle below-detection data points than Tobit? [](https://i.stack.imgur.com/xqwNN.jpg)
Predictions from parametric left-censored gaussian model (Tobit) look odd
CC BY-SA 4.0
null
2023-03-31T19:14:58.587
2023-03-31T19:14:58.587
null
null
28141
[ "censoring", "tobit-regression" ]
611421
2
null
610814
1
null
There are a couple of things that need to be addressed before getting into the $\gamma_{s(j)}$ issue, just to clear up some features of FEs. You mention that this is conceptual, so I don't want to assume you thought through every detail, necessarily, but there is something more fundamental that should be considered before looking at $\gamma_{s(j)}$. A really important part of this model is $\gamma_i$ (student FE). Including this means all time-invariant factors related to the students are already "controlled for". Whatever time-invariant differences (average level of grade over the panel) between students do exist are eliminated (students are now on the same expected overall "level"). Variables like race can't be in a model with $\gamma_i$. Race might be expected to generate different overall "levels" of grade, but it doesn't change within-person over the panel. The time-invariant "level" that might be captured by race is already captured by $\gamma_i$, and race would drop from the model due to collinearity. Just like race, county is also "controlled for" by $\gamma_i$, unless individuals are moving between counties (another can of worms). This is important, because you say: > I want the variation that drives the estimation of $\beta$ to be chiefly cross-sectional variation in county income... However, the cross-sectional effect of income "level" on grades among counties is already absorbed by $\gamma_i$. If we set aside class-subject for the moment, the interpretation of $\beta$ would be the expected change in a student's grade associated with a one-unit increase in county-level income (as a deviation from the mean county-level income for each student over the panel). $\gamma_i$ changes all interpretations to within-student effects. If between-student is absorbed and students stay in their counties, between-county is also absorbed in the student FE. Each student has many subjects during the panel, though, so that is NOT already absorbed by $\gamma_i$. 
Since we know that subjects have systematic variation in average "levels" or grading scales among all students, we know that a change from one subject to another will generate an expected change in their grade that is not related to $income_{c(i),t}$. Like we did with students (and thus, race, county etc), we can make all subjects "equal" in their time-invariant attributes by absorbing the effect of each subject on the "level" of grades in $\gamma_{s(j)}$. Without this, we don't know if the change came from income or from moving between class-subjects. Thus, $\gamma_{s(j)}$ is important to include and doesn't interfere with the cross-sectional effect of income, because that was already absorbed by $\gamma_i$. After including $\gamma_i$ and $\gamma_{s(j)}$ (and accounting for changes that affect all students in a semester with $\gamma_t$), the interpretation of $\beta$ is the expected increase in grades caused by a one-unit increase in $income_{c(i),t}$ after accounting for time-invariant differences in students (including their counties) and class-subjects (and overall trends). This model won't tell you about cross-sectional effects of county income. If you are interested in a mix of the within- and between- (cross-sectional) effects of county income, you can use a random effect for student, but you won't be able to say how much effect comes from variation within- versus between-students (and thus between counties). This is a fundamental issue in research with observational data. Another option is the "hybrid model" that isolates within and between effects. [This R package](https://panelr.jacob-long.com/) gives a nice explanation of that setup.
null
CC BY-SA 4.0
null
2023-03-31T19:15:05.603
2023-03-31T19:24:34.440
2023-03-31T19:24:34.440
186886
186886
null
611422
1
null
null
2
17
I have 2 separate Bayesian networks and I was hoping to maximize `Value` within the constraint on `Cost`. What is a good way to do that? Some notes about the model: - Cost is observed per unit, but there is a global constraint on Cost. - Value is not observed, so latent. - Function between Cost and Value does not exist. [](https://i.stack.imgur.com/CrosN.png)
Constrained optimization between two bayesian variables
CC BY-SA 4.0
null
2023-03-31T19:17:11.947
2023-03-31T19:17:11.947
null
null
384630
[ "optimization", "reinforcement-learning", "bayesian-network", "bayesian-optimization", "constrained-optimization" ]
611423
1
611498
null
3
40
I've seen the term "oversampling" used in a survey design methodology context and in a machine learning context (e.g. methods like SMOTE). I'm intrigued by the differences between the two. So far, here's what I understood: The purpose of oversampling in survey design is to reduce the variance for a target sub-population with a rare but interesting feature, and to cut costs related to sampling the population, as explained here: [https://ajph.aphapublications.org/doi/full/10.2105/AJPH.2017.303895](https://ajph.aphapublications.org/doi/full/10.2105/AJPH.2017.303895) Oversampling is planned beforehand, and is done when we collect the data. Weights are then applied to observations to account for the bias introduced by oversampling. In machine learning, the goal of oversampling is to improve prediction metrics (e.g. accuracy) for so-called imbalanced datasets. Oversampling is done when the data has already been collected, so it involves duplicating observations or creating synthetic observations. In this context, oversampling has been criticized as a solution to a non-problem (as many posts on this website explain well). For example accuracy may be simply the wrong indicator to look at, and other methods than oversampling may be applied to improve predictions. Am I correct as to the differences between the two contexts? Are there other major differences (or common points) I am missing? Bonus question: has oversampling in machine learning been originally inspired by oversampling as a survey design method? If so, why does it not use weights in the same way as in survey design? I don't have any particular practical problem related to any of this, my question is just out of curiosity, as it's (mildly) surprising to have the same term used in two apparently quite different contexts. So I'm afraid I cannot be really more specific than what I ask above. Thanks!
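To make sure I understand the ML sense of the word concretely, here is a toy sketch of random oversampling by duplication (made-up data; SMOTE would instead synthesize new points by interpolation):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # toy features
y = np.array([0] * 95 + [1] * 5)               # imbalanced binary labels

# Duplicate minority rows (sampled with replacement) until the classes balance
minority = np.flatnonzero(y == 1)
extra = rng.choice(minority, size=(y == 0).sum() - minority.size, replace=True)
X_bal = np.vstack([X, X[extra]])
y_bal = np.concatenate([y, y[extra]])

# Class counts go from 95 vs 5 to 95 vs 95
print(np.bincount(y), np.bincount(y_bal))
```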
What are the differences and common points, if any, between oversampling as a survey design method and oversampling in a machine learning context?
CC BY-SA 4.0
null
2023-03-31T19:24:05.553
2023-04-02T04:07:52.757
2023-03-31T19:31:42.967
384633
384633
[ "machine-learning", "unbalanced-classes", "survey-sampling", "oversampling" ]
611424
1
611436
null
1
73
Suppose that Walmart has 100 stores. It has a coupon for cereal, and it wants to know if the coupon increases cereal sales by a significant amount. Walmart puts the coupon on the cereal shelf in 10 stores; let's call this the treatment group. The other 90 stores do not get the coupon; let's call this the control group. There are all kinds of confounders that affect sales, like - the median income of the shoppers who visit the store - the number of shoppers who visit the store - the number of competing retailers near the store Because of these confounders, we have to rely on causal inference. I know that there are 2 common ways in causal inference to deal with situations like this: - controlling for confounders in a regression model - estimating propensity scores using the confounders, then matching each treatment store to a control store with similar propensity scores, and doing a paired t-test My question is about a possible 3rd method that I learned from casual conversations with other data scientists. Here are the steps. A. For all 100 stores, find the time series of cereal sales for 1 year before the introduction of the coupon. B. For each treatment store, find 5 control stores whose cereal sales histories match the treatment store's cereal sales history. C. Using the matched control stores, build some regression model to predict the counterfactual sales for the treatment stores (i.e. what their cereal sales would have been without the coupon). D. Calculate the average treatment effect on the treated (ATT) on the treatment stores based on their actual sales and their counterfactual sales. I am not fully clear on the validity behind this approach. Somehow, the matching process of the sales bypasses the confounding effects. (I don't understand this, and nobody can explain it to me properly. I'm posting this question to gain some clarity on it.) To the experts on causal inference: Is this approach correct? 
If so, how exactly does matching on the outcome variable (i.e. sales) overcome the confounding effects?
In causal inference, can you control for confounders by matching the treatment and control group based on the time series of the outcome variable?
CC BY-SA 4.0
null
2023-03-31T19:31:23.523
2023-04-05T16:07:21.430
null
null
269172
[ "time-series", "causality", "matching", "confounding" ]
611425
1
null
null
4
47
I want to model survival outcome as a response to stem length for a plant species. All individuals within plots were tagged and sampled in year 1, then re-found and sampled again in year 2, with an interval of one year between measurements. The variables include fate at year 2 (survival = 1, death = 0) and length (normally distributed, continuous variable with min 0 and max 224). This data was collected from a total of 16 plots within two distinct populations (8 plots per population). Each row in the data includes data from both surveys: `len` is year 1 length, `surv` indicates that the plant was alive or dead at the time of re-sampling. Additionally, the experimental design included a single treatment variable with two levels: fenced plot ($n=7$) or unfenced plot ($n=9$). I am using length at the time of the first survey `len` to predict fate `surv` over the following year. Data snippet: ``` Population Treatment Plot TagNum flw len flw_next len_next surv repr sc fenced nf11a 11 0 5 0 63 1 0 sc fenced nf11a 25 0 5 0 0 0 0 sc unfenced hp2a 549 0 6 0 0 0 0 fl fenced mpna 413 0 7 0 0 0 0 fl unfenced mpsb 920 0 7 0 0 0 0 fl unfenced nnb 315 0 9 0 19.2 1 0 ``` I would like to identify differences in the relationship between length and survival outcome at three levels: (H1) differences between populations, (H2) differences between treatments, and (H3) differences between plots within each population. I have specified the following two GLMMs using `lme4`: Model 1: ``` mod.Surv <- glmer(surv ~ len*Population + (1|Plot:Population), family='binomial', data=plant_data) ``` Model 2: ``` mod.Surv.int <- glmer(surv ~ Treatment*Population + (1|Plot:Population), family='binomial', data = plant_data) ``` Model 2 is included only to make inferences about the effect of the interaction between treatment and population on survival. 
I don't have the stats background to really interrogate these models, so I'd like to know whether 1) these models are correctly specified and 2) how to interpret their output given my hypotheses 3)how to specify a single, "better" model that includes effects of all three nested variables Output from `summary(mod.Surv)`: ``` Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) [glmerMod] Family: binomial ( logit ) Formula: surv ~ len * Population + (len | Plot:Population) Data: plant_data AIC BIC logLik deviance df.resid 356.0 388.9 -171.0 342.0 802 Scaled residuals: Min 1Q Median 3Q Max -14.7529 0.0719 0.1636 0.3058 0.8438 Random effects: Groups Name Variance Std.Dev. Corr Plot:Population (Intercept) 0.789138 0.88833 len 0.001607 0.04009 -0.87 Number of obs: 809, groups: Plot:Population, 16 Fixed effects: Estimate Std. Error z value Pr(>|z|) (Intercept) -0.42173 0.69625 -0.606 0.54470 len 0.10638 0.02905 3.662 0.00025 *** Populationsc 1.18589 0.85981 1.379 0.16782 len:Populationsc -0.04913 0.03292 -1.493 0.13555 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Correlation of Fixed Effects: (Intr) len Ppltns len -0.903 Populatinsc -0.731 0.637 len:Ppltnsc 0.692 -0.748 -0.877 ``` Output of `summary(mod.Surv.int)`: ``` Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) [glmerMod] Family: binomial ( logit ) Formula: surv ~ Treatment * Population + (1 | Plot:Population) Data: plant_data AIC BIC logLik deviance df.resid 401.4 424.8 -195.7 391.4 804 Scaled residuals: Min 1Q Median 3Q Max -4.7187 0.2172 0.2259 0.2941 0.3881 Random effects: Groups Name Variance Std.Dev. Plot:Population (Intercept) 0.1129 0.336 Number of obs: 809, groups: Plot:Population, 16 Fixed effects: Estimate Std. Error z value Pr(>|z|) (Intercept) 3.0219 0.3799 7.955 1.8e-15 *** Treatmentunfenced -0.7428 0.4955 -1.499 0.1338 Populationsc -0.6338 0.5219 -1.214 0.2246 Treatmentunfenced:Populationsc 1.2821 0.6886 1.862 0.0626 . 
--- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Correlation of Fixed Effects: (Intr) Trtmnt Ppltns Trtmntnfncd -0.752 Populatinsc -0.718 0.544 Trtmntnfn:P 0.555 -0.724 -0.763 ```
Help making inferences from interactions within nested GLMM?
CC BY-SA 4.0
null
2023-03-31T19:36:09.140
2023-04-02T08:20:12.177
2023-04-01T18:50:05.590
384627
384627
[ "regression", "mixed-model", "interaction", "glmm", "demography" ]
611426
1
null
null
1
50
I have a problem that I can't solve. Let $\mathbf{X}=\{X_1,X_2,\ldots,X_n\}\sim\mathrm{Uniform}(0,\theta)$ and we have $H_0:\theta=\theta_0$ and $H_1:\theta>\theta_0$. We reject the $H_0$ when $X_{(n)}>c$. Find the $\mathbf{p\textbf{-}value}$. I know that $\mathbf{p\textbf{-}value}=\mathbb{Pr}_{\theta_0}(T(\mathbf{X})>T(\mathbf{x}))$ where $\mathbf{x}$ is the observed value of $\mathbf{X}$, but I don't know what the observed value is here or how to find $T(\mathbf{x})$.
Definition p-value and find p-value in practice
CC BY-SA 4.0
null
2023-03-31T19:50:09.827
2023-03-31T20:01:45.233
2023-03-31T20:01:45.233
53690
384636
[ "hypothesis-testing", "p-value", "uniform-distribution", "extreme-value", "definition" ]
611427
1
null
null
4
37
I have what on the surface seems like a simple problem but I cannot figure out what is the appropriate test -- I am not a statistician... I am trying to determine an appropriate metric to stratify patients with regard to toxicity (essentially trying to find a constraint so we can avoid it during treatment). In group A, patients who met constraint A had a toxicity rate of 26% (5/19) vs 86% (12/14) for those who didn't. In group B, patients who met constraint B had a toxicity rate of 55% (6/11) vs 72% (55/76) for those who didn't. What test would I need to determine if constraint A was more effective at stratifying patients (or reducing the proportion of patients with toxicity) than constraint B? Thanks
Comparing counts (or percentages) between two groups...stuck
CC BY-SA 4.0
null
2023-03-31T20:13:19.650
2023-03-31T20:41:48.043
2023-03-31T20:14:16.283
384638
384638
[ "statistical-significance" ]
611429
1
null
null
0
17
Given I have a training dataset $(x_i=x_i^*+a_i,y_i=y_i^*+b_i)$ with independent errors $a_i$ and $b_i$ and I train a parametrizable model $\hat{y}_\theta(x_i)=y_i^*$. I know I could bootstrap the training dataset to get an estimate for the variance of the prediction $\hat{y}_\theta(x_i)$ which is essentially $Var_{\theta|{x_i,y_i}}(\hat{y}(\tilde{x}))$ (on an evaluation data point $\tilde{x}$). However, if there is an error in x then I should take into account the variance $Var_{a_i}(\hat{y}(\tilde{x}))\approx Var(a_i)\left(\frac{\partial\hat{y}}{\partial x_i}\Bigg|_{\tilde{x}}\right)^2$ (due to the well known propagation of error formulas) so that $Var(\hat{y}(\tilde{x}))\approx Var_{\theta|{x_i,y_i}}(\hat{y}(\tilde{x}))+Var_{a_i}(\hat{y}(\tilde{x}))$, where $Var_{a_i}(\hat{y}(\tilde{x}))$ just enters due to the errors in x. Would you agree with that reasoning or have some literature hint for me?
Uncertainty estimation with errors in x and y (error in variables model)
CC BY-SA 4.0
null
2023-03-31T20:39:53.487
2023-03-31T20:39:53.487
null
null
298651
[ "model", "error-propagation", "errors-in-variables" ]
611430
2
null
611427
1
null
This basically looks like a 2x2x2 table ([toxicity, non-toxicity] x [constraint, non-constraint] x [group A, group B]). [This question](https://stats.stackexchange.com/questions/423722/analysis-2x2x2-contingency-table) is similar. I'm not super familiar with this area (will delete this answer if a better one supersedes it), but you can do: - a chi-squared test with the expected values for the ([toxic, non-toxic] x [constraint, non-constraint]) combinations set equal to the average across groups - a binomial regression with glm(cbind(toxic, nontoxic) ~ constraint*group, family = binomial), testing the interaction for significance - an appropriate version of the Cochran-Mantel-Haenszel test? (Based on the Wikipedia page it looks like this doesn't quite do what you want - it tests the overall [toxicity x constraint] interaction across groups rather than comparing the groups ...) If you feel like digging in there's probably something in Agresti's Categorical Data Analysis that covers this case ...
null
CC BY-SA 4.0
null
2023-03-31T20:41:48.043
2023-03-31T20:41:48.043
null
null
2126
null
611431
2
null
611334
2
null
Low probability events are often not analysed based on direct data about the events. - High probability events: as a contrasting example, the number of customers in some shop (and getting a customer is a high probability event for a decent store) can be predicted by extrapolating previous observations. - Low probability events: Estimates for low probability events, some of which might never have been seen before, are based on theoretical predictions. For example, probabilities that a catastrophic meteor hits earth will not be based on previous observations of such events (which never happened, or if they did happen then we weren't there to observe it), but are based on calculations combining other information, like information about the number of meteors and computed probabilities of those meteors hitting earth. > What theories, papers, or books examine low-probability events, particularly as the number of trials approaches infinity? So the theories have little to do with the theory of statistics and much more with domain knowledge and whatever theories are available there that can help to make estimates/predictions.
null
CC BY-SA 4.0
null
2023-03-31T20:45:03.873
2023-04-01T09:41:30.947
2023-04-01T09:41:30.947
164061
164061
null
611432
1
null
null
0
24
So I have a dataset where I want to test if a categorical variable A (a = 1, 2, or 3) is going to affect the primary outcome of a test result Y (pass or not pass as a binary variable). Note that the same test subject can have the test multiple times at different times. The dataset looks like this: [](https://i.stack.imgur.com/yVx68.png) However, there is another dataset that contains the weight of the test subjects. Note that the weight was measured multiple times for the subjects. Sometimes the date when the weight was measured matches the test date, sometimes it does not match the test date from dataset 1. For example, for subject 1, you cannot match the measure date with the test date. Dataset 2 looks like this: [](https://i.stack.imgur.com/qle5c.png) In this case, how can I use the variable weight as a covariate in the regression model? How do I combine these two tables together?
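One approach I have considered (sketched here in pandas with hypothetical column names standing in for my actual ones) is a nearest-date merge per subject via `pandas.merge_asof`:

```python
import pandas as pd

# Hypothetical stand-ins for dataset 1 (tests) and dataset 2 (weights)
tests = pd.DataFrame({
    "subject": [1, 2, 1],
    "test_date": pd.to_datetime(["2020-01-10", "2020-02-01", "2020-03-05"]),
    "passed": [1, 1, 0],
})
weights = pd.DataFrame({
    "subject": [1, 2, 1],
    "measure_date": pd.to_datetime(["2020-01-02", "2020-02-01", "2020-03-20"]),
    "weight": [70.0, 55.0, 72.0],
})

# merge_asof requires both frames sorted on their time keys
tests = tests.sort_values("test_date")
weights = weights.sort_values("measure_date")

# For each test, take the weight measured closest in time for that subject
merged = pd.merge_asof(tests, weights,
                       left_on="test_date", right_on="measure_date",
                       by="subject", direction="nearest")
print(merged[["subject", "test_date", "weight"]])
```

Whether a weight measured after the test is acceptable is a substantive question; `direction="backward"` would restrict the match to earlier measurements.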
Doing logistic regression using different data sources?
CC BY-SA 4.0
null
2023-03-31T20:59:11.643
2023-03-31T20:59:11.643
null
null
313293
[ "regression", "logistic", "data-transformation", "dataset" ]
611435
1
null
null
1
16
I am struggling to analyze the correlation between two variables in my experiments. I know that when the SNR of the signal (the strength of the signal relative to the noise) is high, the algorithm gives a better score. However, from the SNR-score plot, I cannot see that relationship, or at least it is not so evident. What do you do when you have a correlation plot such as this? [](https://i.stack.imgur.com/wt2Kx.png) I can see two clusters, and the cluster below appears because some configuration of the experiment produced those results, but the rest seems to be a plateau rather than a linear relationship, and the R2 is almost 0. What kind of information could we extract from here? Thanks for the help.
Step type correlation plot: Need to understand correlation between 2 variables
CC BY-SA 4.0
null
2023-03-31T21:24:54.357
2023-03-31T21:54:52.410
2023-03-31T21:54:52.410
365263
365263
[ "correlation", "cross-correlation" ]
611436
2
null
611424
1
null
It seems like you are describing something like a difference-in-differences panel model with matching on pre-treatment trends. A common framework for this is an event study model. Sometimes it can involve matching. In general, we use matching to build a synthetic counterfactual population. You would try to match on the pre-treatment or time-invariant factors that are most likely to lead to selection or self-selection into treatment. If you can build a counterfactual that is sufficiently similar, you can proceed as if your matched sample is an equivalent to the treatment sample that never received treatment. The researcher can build the case that the matched sample is sufficiently similar by comparing the expected/mean values of important attributes, often through t-tests or similar. If there is no statistically significant difference on the important range of attributes, you have some evidence that your matched sample is a good counterfactual. You can never be totally certain you have identified all key elements, which is why it is not as reliable as a true experiment with randomly assigned treatment. If you are matching with panel data, stable attributes can be important, but you also need to examine whether your two groups have evidence of "parallel trends" during pre-treatment periods. This is probably what was meant when you heard about "sales history" in building a matched control sample. Two stores might be very similar in size, location, average sales, or a range of attributes, but if they were not exhibiting similar trends in sales before treatment, there is no reason to think they would continue to do it post-treatment, which means the control cases would not serve to show you how the treatment cases would have behaved in absence of the treatment. Event study models test for "non-parallel pre-treatment trends" by picking one time point, and testing whether the difference-in-differences of other pre-treatment time points are significant. 
They don't have to be on the same "level" or nominal value for this to be valid. They just have to continue or change in a similar way for each time-point up to the point in time when treatment is applied. It is extremely helpful to know whether treated cases had anticipation of the treatment, too. If treated cases have anticipation, the researcher needs to think about how this would alter pre-treatment levels. It might cause the researcher to reject a decent match sample or accept one that is inappropriate. This isn't perfect and can't eliminate selection effects as reliably as random treatment, but it helps build the case that the counterfactual group is appropriate. If you have a large pool of potential counterfactual cases, you might use matching on pre-treatment time trends (differences in values from one point to the next) to find better control cases. Finding a matched sample that is similar in both trend and nominal values ("level heterogeneity") is often difficult. If we have evidence of no non-parallel trends, we can deal with level heterogeneity by calculating ATT as $$ATT=(posttreatment - pretreatment)-(postcontrol - precontrol)$$ This way the ATT ignores the differences in expected nominal value that might exist between treatment and control. In event study models, it is more common to use unit (store) fixed effects that subtract each store's grand/panel mean from each time point, which forces all units/stores onto the same expected nominal value. The important thing for these models is they are imperfect compared with true random assignment in the research design (often not possible), but they are often the best we can do and are often pretty convincing if the researcher is careful and transparent about the data. [Here's a nice resource](https://mixtape.scunning.com/09-difference_in_differences). 
Edit adding answer to OP's useful follow-up question: In my opinion, you never have certainty that you are removing confounding effects in observational (non-experimental) data. You can do certain things that make confounding effects less likely and results more believable. It would be important to know whether you are matching on "levels" of pre-treatment sales (i.e. avg. values) or on trends of pre-treatment sales. The former adds something but is not enough to trust results (IMO). To me, the latter can be convincing, especially when you test for non-parallel trends in sales and eliminate "level" heterogeneity with store FEs (event study framework).
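The ATT arithmetic above amounts to a two-by-two difference of group means; a minimal sketch with made-up numbers (not from any real study):

```python
# Hypothetical mean outcomes (e.g. weekly sales); illustrative values only
pre_treatment, post_treatment = 100.0, 130.0   # treated stores
pre_control, post_control = 80.0, 95.0         # matched control stores

# Difference-in-differences estimate of the ATT: the level heterogeneity
# (100 vs 80) cancels because each group is differenced against its own
# pre-treatment mean
att = (post_treatment - pre_treatment) - (post_control - pre_control)
print(att)  # 15.0
```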
null
CC BY-SA 4.0
null
2023-03-31T21:40:18.027
2023-04-05T16:07:21.430
2023-04-05T16:07:21.430
186886
186886
null
611437
1
null
null
1
12
I want to create a model to predict a numeric improvement (e.g. reduction in a numeric parameter) after surgery. For that I want to choose the best predictors from a list of about 30 patient characteristics (numeric and categorical). My problem is that in my data set of about 250 patients almost no patient is a complete case. I tried to use a linear regression model and run a stepwise regression in R, which resulted in this error: `Error in step(lm_full, direction = "backward", scope = list(lower = lm_empty)): number of rows in use has changed: remove missing values?` If my understanding is correct, this is because of missing values. I can avoid this error by using only complete cases for all 30 predictors, which leaves me with a tiny dataset of only a few observations. My cases are about 90% complete, but when filtering for all 30 predictors, this leaves me with almost no data. I have thought of several possible ways to solve the problem of missing values, but I am unsure which is statistically best:

- Preselecting predictors using a univariate regression. I would only include predictors above a certain R2 or below a certain P value in my stepwise regression. By using fewer predictors in the full model of my stepwise regression, I would have more complete cases.
- Using multiple imputation to fill missing values, although I am not sure how to combine this with a stepwise regression.
- I read that mixed effects models are better than linear models for missing values. However, I am not sure how to use a mixed effects model to select predictors for an outcome model.

I am stuck with googling for answers and would be very grateful for some tips!
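As an aside on the multiple-imputation option mentioned above: the standard workflow is to impute several times, fit the model to each completed dataset, then pool estimates with Rubin's rules. A minimal pure-NumPy sketch on simulated data (the hot-deck draw below is a crude stand-in for a principled imputer such as R's mice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one predictor with roughly 10% missing values
n = 250
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
x_obs = x.copy()
x_obs[rng.random(n) < 0.1] = np.nan  # knock out some predictor values

M = 20  # number of imputations
slopes, variances = [], []
observed = x_obs[~np.isnan(x_obs)]
for _ in range(M):
    x_imp = x_obs.copy()
    miss = np.isnan(x_imp)
    # Crude hot-deck imputation: draw from the observed values
    x_imp[miss] = rng.choice(observed, size=miss.sum())
    X = np.column_stack([np.ones(n), x_imp])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    # Sampling variance of the slope from the usual OLS formula
    xtx_inv = np.linalg.inv(X.T @ X)
    slopes.append(beta[1])
    variances.append(resid @ resid / (n - 2) * xtx_inv[1, 1])

# Rubin's rules: pooled estimate, within- and between-imputation variance
q_bar = np.mean(slopes)
u_bar = np.mean(variances)
between = np.var(slopes, ddof=1)
total_var = u_bar + (1 + 1 / M) * between
print(q_bar, np.sqrt(total_var))
```

Note the pooled variance is inflated by the between-imputation component, which is the point of doing multiple rather than single imputation; combining this with stepwise selection is a separate (and contested) issue.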
Selecting Predictors for an Outcome Model from a data set with Missing Values
CC BY-SA 4.0
null
2023-03-31T21:49:45.077
2023-03-31T21:49:45.077
null
null
384641
[ "r", "regression", "mixed-model", "missing-data", "stepwise-regression" ]
611438
1
null
null
1
84
I am wondering what I am doing wrong with the parameterization of the OU process for day-ahead electricity prices on a weekly resolution. The fitted parameters do not produce satisfying results, as the simulated paths do not look as if they correctly reflect the volatility. For the calculation, I followed the papers by Dixit & Pindyck (1994) and Bastian-Pinto & Brandao (2010). Any tips on what I did wrong?
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
import datetime

# Calculate log returns
weekly_prices_new['log_prices'] = np.log(weekly_prices_new['Close'])
weekly_prices_new['log_returns'] = weekly_prices_new['log_prices'].diff()
weekly_prices_new.dropna(inplace=True)

delta_t = 1

# Linear regression for log returns with log(P_t-1) as X and log(P_t) - log(P_t-1) as y
X_log_prices = weekly_prices_new['log_prices'].shift(1).values[1:].reshape(-1, 1)
y_log_returns = weekly_prices_new['log_returns'].values[1:].reshape(-1, 1)
reg_log_returns = sm.OLS(y_log_returns, sm.add_constant(X_log_prices)).fit()

# Parameters for log returns
b_log_returns = reg_log_returns.params[1] + 1
a_log_returns = reg_log_returns.params[0]
std_e_log_returns = reg_log_returns.bse[1]

# Calculate OU parameters
eta_log_returns = -np.log(b_log_returns) / delta_t
sigma_log_returns = std_e_log_returns * np.sqrt((2 * np.log(b_log_returns)) / (((b_log_returns**2) - 1) * delta_t))
mu_log_returns = np.exp((a_log_returns/(1-b_log_returns))+(sigma_log_returns**2/(2*eta_log_returns)))

# Simulate the discretized OU process for log returns
N = 52*5

# Get the last date in the historical data
last_date = weekly_prices_new.index[-1]

# Generate a list of dates for the simulated prices
simulated_dates = [last_date + datetime.timedelta(weeks=x) for x in range(1, N+1)]

# Plot historical prices
plt.figure(figsize=(15, 8))
plt.plot(weekly_prices_new.index, weekly_prices_new['Close'], label='Historical Prices', linewidth=2)

num_paths = 10
for i in range(num_paths):
    P = np.zeros(N)
    random_numbers_log_returns = np.random.normal(0, 1, N)
    P[0] = weekly_prices_new['Close'].iloc[-1]
    for t in range(1, N):
        P[t] = np.exp(np.log(P[t-1]) * np.exp(-eta_log_returns * delta_t) + \
               (np.log(mu_log_returns) - (sigma_log_returns**2 / 2)) * (1 - np.exp(-eta_log_returns * delta_t)) + \
               sigma_log_returns * np.sqrt((1 - np.exp(-2 * eta_log_returns * delta_t)) / (2 * eta_log_returns)) * random_numbers_log_returns[t])
    plt.plot(simulated_dates, P, label=f'Simulated Path {i+1}', linestyle='--', alpha=0.6)

plt.title('Simulated and Historical Electricity Prices using Discretized OU Process')
plt.xlabel('Years')
plt.ylabel('Prices')
plt.legend()
plt.show()
```
[](https://i.stack.imgur.com/SWBIR.png)

Additionally, the fit of the regression looks correct and quite similar to the results of Bastian-Pinto & Brandao (2010).

[](https://i.stack.imgur.com/fsKCo.png)

---

EDIT: I tried to implement the process as suggested in the comments and used [this](https://math.stackexchange.com/questions/345773/how-the-ornstein-uhlenbeck-process-can-be-considered-as-the-continuous-time-anal) for guidance. Unfortunately, I still receive paths that differ significantly from the historical price behavior. 
The parameter estimation is now:
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

# Calculate log prices
weekly_prices_new['log_prices'] = np.log(weekly_prices_new['Close'])

# Drop the first row since it has NaN values for log_returns
weekly_prices_new.dropna(inplace=True)

delta_t = 1

# Linear regression for log(P_t) as X and log(P_t+1) as y
X_log_prices = weekly_prices_new['log_prices'].values[:-1].reshape(-1, 1)
y_log_prices = weekly_prices_new['log_prices'].shift(-1).values[:-1].reshape(-1, 1)
reg = sm.OLS(y_log_prices, sm.add_constant(X_log_prices)).fit()

# Parameters from linear regression
c = reg.params[0]
b = reg.params[1]
std = reg.bse[1]

# Calculate OU-Parameters
theta = -np.log(b) / delta_t
mu = c/(1-np.exp(-theta*delta_t))
sigma = std * np.sqrt((2*theta)/(1-np.exp(-2*theta*delta_t)))
#sigma = std * np.sqrt((2*theta)/(1-b**2))

def ou_process(mu, theta, sigma, delta_t, n_steps, n_paths, initial_log_price):
    log_paths = np.zeros((n_steps + 1, n_paths))
    log_paths[0] = initial_log_price

    # Generate random increments
    increments = np.random.normal(0, 1, size=(n_steps, n_paths))

    # Simulate the log paths
    for t in range(1, n_steps + 1):
        log_paths[t] = log_paths[t - 1] * np.exp(-theta * delta_t) + mu * (1 - np.exp(-theta * delta_t)) + sigma * np.sqrt((1 - np.exp(-2 * theta * delta_t)) / (2 * theta)) * increments[t - 1]

    # Convert log prices to normal prices
    paths = np.exp(log_paths)
    return paths

# Simulation parameters
n_steps = 52*5
n_paths = 4
initial_log_price = weekly_prices_new['log_prices'].values[-1]

# Simulate OU paths
paths = ou_process(mu, theta, sigma, delta_t, n_steps, n_paths, initial_log_price)
```
Nevertheless, the paths have not really changed. 
```
# Simulation parameters
n_steps = 52*5
n_paths = 4
initial_log_price = weekly_prices_new['log_prices'].values[-1]

# Simulate OU paths
paths = ou_process(mu, theta, sigma, delta_t, n_steps, n_paths, initial_log_price)

import pandas as pd

# Assuming weekly_prices_new has a DatetimeIndex
last_date = weekly_prices_new.index[-1]
date_range = pd.date_range(start=last_date, periods=n_steps+1, inclusive='left', freq='W')

# Create a DataFrame for the simulated paths with the new DateTimeIndex
simulated_paths = pd.DataFrame(paths, index=date_range)

# Plot the historical data and simulated paths
plt.figure(figsize=(10, 6))

# Plot historical data
plt.plot(weekly_prices_new.index, weekly_prices_new['Close'], label='Historical Data', linewidth=2)

# Plot simulated paths
for i in range(n_paths):
    plt.plot(simulated_paths.index, simulated_paths[i], linestyle='--')

plt.xlabel('Date')
plt.ylabel('Prices')
plt.title('Historical Data and Simulated OU Paths (Exact Discretization)')
plt.legend()
plt.show()
```
[](https://i.stack.imgur.com/3Mahp.png)
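One possible culprit worth checking (an observation, not a certainty about your data): `reg.bse[1]` is the standard error of the estimated slope, which shrinks with sample size, whereas the OU volatility formula in Dixit & Pindyck expects the standard deviation of the regression residuals. A self-contained sketch that recovers known parameters from a simulated path using the residual SD (simulated data, since `weekly_prices_new` is not available here):

```python
import numpy as np

rng = np.random.default_rng(42)

# Ground-truth OU parameters for the simulation
theta_true, mu_true, sigma_true, delta_t = 0.5, 1.0, 0.2, 1.0
n = 20000

# Exact discretization of the OU process
x = np.empty(n)
x[0] = mu_true
a = np.exp(-theta_true * delta_t)
noise_sd = sigma_true * np.sqrt((1 - a**2) / (2 * theta_true))
for t in range(1, n):
    x[t] = x[t - 1] * a + mu_true * (1 - a) + noise_sd * rng.standard_normal()

# OLS regression of x_{t+1} on x_t (np.polyfit returns slope first)
b, c = np.polyfit(x[:-1], x[1:], 1)
resid = x[1:] - (c + b * x[:-1])
resid_sd = resid.std(ddof=2)  # residual SD, *not* the SE of the slope

theta_hat = -np.log(b) / delta_t
mu_hat = c / (1 - b)
sigma_hat = resid_sd * np.sqrt(2 * theta_hat / (1 - b**2))
print(theta_hat, mu_hat, sigma_hat)  # close to 0.5, 1.0, 0.2
```

If the slope SE is used instead of `resid_sd`, the recovered sigma is far too small, which would produce exactly the kind of under-dispersed simulated paths shown in the plots.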
Ornstein-Uhlenbeck process fitting
CC BY-SA 4.0
null
2023-03-31T22:00:32.093
2023-04-13T14:53:29.620
2023-04-13T14:53:29.620
363237
363237
[ "time-series", "stochastic-processes" ]
611441
2
null
611261
1
null
Once again, it is enough to apply the definition of conditional expectation: \begin{gather*} \mathbb E\left[\left(\sum_{b\in\mathcal B}\frac{\mathbb E\big[A\cdot\mathbf 1_{B=b}\mid C\big]}{\mathbb P\big[B=b\mid C\big]}\mathbf 1_{B=b}\right)\cdot\mathbf 1_{B=\beta\,\cap\, C\in\mathcal C}\right]= \mathbb E\left[\frac{\mathbb E\big[A\cdot\mathbf 1_{B=\beta}\mid C\big]}{\mathbb P\big[B=\beta\mid C\big]}\mathbf 1_{B=\beta}\cdot\mathbf 1_{C\in\mathcal C}\right]=\\ \mathbb E\left[\frac{\mathbb E\big[A\cdot\mathbf 1_{B=\beta}\mid C\big]}{\mathbb P\big[B=\beta\mid C\big]}\mathbf 1_{C\in\mathcal C}\cdot\mathbb E\big[\mathbf 1_{B=\beta}\mid C\big]\right]= \mathbb E\Big[\mathbb E\big[A\cdot\mathbf 1_{B=\beta}\mid C\big]\cdot\mathbf 1_{C\in\mathcal C}\Big]=\\ \mathbb E\big[A\cdot\mathbf 1_{B=\beta}\cdot\mathbf 1_{C\in\mathcal C}\big]= \mathbb E\big[A\cdot\mathbf 1_{B=\beta\,\cap\, C\in\mathcal C}\big] \end{gather*}
null
CC BY-SA 4.0
null
2023-03-31T22:58:05.697
2023-03-31T22:58:05.697
null
null
376154
null
611442
1
null
null
0
44
$X_t-0.4X_{t-1}+0.03X_{t-2}=Z_t-0.4Z_{t-1}$ This process is causal and invertible. For the $MA(\infty)$ representation, I wrote it as $X_t=\frac{1-0.4B}{1-0.4B+0.03B^2}Z_t$, where $\psi(B)=\frac{1-0.4B}{1-0.4B+0.03B^2}$. So, $X_t = \psi(B)Z_t=\sum _{i=0}^{\infty }\:\psi _i\:Z_{t-i}$ Now, I want to find the acf, and this is where I get stuck. For an MA($\infty$) model, we have that $\gamma_X(h)=\sigma^2\sum _{i=0}^{\infty }\:\psi _i\psi_{i+|h|}$ I'm not quite sure how to compute the acvf or the acf here, because I'm not sure how exactly I get the values for each $\psi_i$.
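One practical route is to compute the $\psi_i$ numerically from the recursion $\psi_j - 0.4\psi_{j-1} + 0.03\psi_{j-2} = \theta_j$ (with $\theta_0 = 1$, $\theta_1 = -0.4$, and $\theta_j = 0$ for $j \ge 2$), then truncate the infinite sum for $\gamma_X(h)$; a sketch:

```python
import numpy as np

phi = [0.4, -0.03]   # AR side: X_t = 0.4 X_{t-1} - 0.03 X_{t-2} + ...
theta = [1.0, -0.4]  # MA polynomial coefficients (theta_0 = 1)

# psi_j = theta_j + phi_1 psi_{j-1} + phi_2 psi_{j-2}
n_lags = 200  # truncation point; the weights decay geometrically
psi = np.zeros(n_lags)
for j in range(n_lags):
    psi[j] = theta[j] if j < len(theta) else 0.0
    for i, ph in enumerate(phi, start=1):
        if j - i >= 0:
            psi[j] += ph * psi[j - i]

# ACVF of the MA(infinity) representation: gamma(h) = sigma^2 sum_i psi_i psi_{i+h}
sigma2 = 1.0
def gamma(h):
    return sigma2 * np.sum(psi[: n_lags - h] * psi[h:])

acf = [gamma(h) / gamma(0) for h in range(5)]
print(psi[:4])  # psi_0..psi_3 = 1, 0, -0.03, -0.012
print(acf)
```

The first few weights can be checked by hand: $\psi_0 = 1$, $\psi_1 = -0.4 + 0.4 = 0$, $\psi_2 = 0.4\psi_1 - 0.03\psi_0 = -0.03$.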
Convert ARMA(p,q) to MA$(\infty)$ and find ACF
CC BY-SA 4.0
null
2023-03-31T22:58:07.920
2023-03-31T22:58:07.920
null
null
365933
[ "time-series", "arima", "autocorrelation", "acf-pacf", "moving-average-model" ]
611443
1
611450
null
0
64
A stick of length 1 is split at a random position $B$ where $B$ ~ $U(0, 1)$. Let $X$ be the length of the longer stick. What is the CDF of $X$? This problem is challenging me. I know that the cdf of X is $F_X(x) = 2x-1$ when $0.5 \le x < 1$, and $0 \text{ when } x < 0.5 \text{ and } 1 \text{ when } x \ge 1$. My trouble starts when arriving at this answer. The way I am attempting to solve the problem is as follows: Let's first assume $0.5 \le x < 1$. Then, $F_X(x) = P(X\le x)$. Here, I know that the problem needs to be split into two cases, namely, when $B > 0.5$ and $B \le 0.5$, and the value of $X$ is either $B$ or $1 - B$ depending on the size of $B$. I figured that I would attempt to split the problem into the two cases as follows: $$P(X\le x) = P(B \le x |B>0.5) + P(1 - B \le x|B\le0.5). $$ The issue is that it gives the wrong answer when you attempt to use a conditional probability, and rather, we should be splitting the problem up like this: $$P(X\le x) = P(B \le x \cap B>0.5) + P(1 - B \le x \cap B\le0.5). $$ I just don't quite understand why using a conditional probability here is wrong. $X$ is only equal to $B$ when $B>0.5$, so I figured the first probability should be $P(X\le x | B > 0.5)$ because $X=B$ given that $B > 0.5$. Could someone give an intuitive explanation as to why it is incorrect to use conditional probability like I tried to do?
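As a sanity check of the stated CDF (not the requested intuition), a quick simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.random(200_000)        # break point, Uniform(0, 1)
X = np.maximum(B, 1 - B)       # length of the longer piece

# Empirical CDF at a few points vs. F_X(x) = 2x - 1 on [1/2, 1)
for x in (0.6, 0.75, 0.9):
    print(x, (X <= x).mean(), 2 * x - 1)
```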
A stick of length 1 is split at a random position $B\sim U(0, 1)$. Let $X$ be the length of the longer stick. What is the PDF of $X$?
CC BY-SA 4.0
null
2023-04-01T00:05:23.227
2023-04-01T04:45:54.053
2023-04-01T04:20:49.673
362671
298503
[ "probability", "distributions", "conditional" ]
611444
2
null
611443
1
null
It might be easier to reason by doing a change of variable $u = B-0.5$. Then $X = |u|+0.5$. And clearly $p(u = u_0) = p(u = -u_0)$. So you can just take the PDF for $u >0$, which is just the uniform distribution. It's not clear what you mean by "x", let alone what $F_x(x)$ means. How can you have the same variable be both a parameter and an argument?
null
CC BY-SA 4.0
null
2023-04-01T01:06:26.413
2023-04-01T01:59:13.480
2023-04-01T01:59:13.480
179204
179204
null
611445
1
611481
null
0
32
Based on the results table of the existing CoxPH model, is it possible to calculate the standard error when using the relevel function? Like this... when 2 vs. 1 (ref):

exp(coef): 0.96712/0.97700 = 0.9898874
se(coef): ??/?? = ??

```
> coxph(Surv(time, status) ~ Trt, data = data)
Call:
coxph(formula = Surv(time, status) ~ Trt, data = data)

         coef exp(coef) se(coef)      z     p
Trt1 -0.02327   0.97700  0.02451 -0.949 0.342
Trt2 -0.03343   0.96712  0.02856 -1.171 0.242

Likelihood ratio test=1.48 on 2 df, p=0.4777
n= 10000, number of events= 10000

> data$Trt <- relevel(data$Trt, ref = '1')
> coxph(Surv(time, status) ~ Trt, data = data)
Call:
coxph(formula = Surv(time, status) ~ Trt, data = data)

          coef exp(coef)  se(coef)         z         p
Trt0   0.02327   1.02354   0.02451     0.949     0.342
Trt2 "Unknown" "Unknown" "Unknown" "Unknown" "Unknown"

Likelihood ratio test=1.48 on 2 df, p=0.4777
n= 10000, number of events= 10000
```
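For reference, the quantity asked about is the standard error of a coefficient contrast, $\mathrm{se} = \sqrt{\mathrm{se}_1^2 + \mathrm{se}_2^2 - 2\,\mathrm{cov}_{12}}$, and the covariance term is not recoverable from the printed table (it lives in the fitted model's variance-covariance matrix, `vcov(fit)` in R). A sketch of the arithmetic with a purely hypothetical covariance value:

```python
import math

# Values from the printed coxph table
coef1, se1 = -0.02327, 0.02451  # Trt1 vs Trt0
coef2, se2 = -0.03343, 0.02856  # Trt2 vs Trt0

# cov12 is NOT in the table; this value is hypothetical, for illustration only
cov12 = 0.0003

coef_diff = coef2 - coef1                           # log-HR of Trt2 vs Trt1
se_diff = math.sqrt(se1**2 + se2**2 - 2 * cov12)    # SE of the contrast
print(math.exp(coef_diff), se_diff)
```

The hazard ratio itself is recoverable from the table (the ratio of the exp(coef) values); only the SE requires the covariance.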
is it possible to calculate the standard error when using the relevel function in CoxPH model?
CC BY-SA 4.0
null
2023-04-01T01:23:35.687
2023-04-01T15:15:51.920
null
null
180974
[ "r", "survival", "inference", "cox-model" ]
611447
1
611464
null
2
37
First, I don't believe this is a duplicate post even though this topic has been brought up a million times. If it is, please point me to the relevant post and I will remove this one. I am basically trying to understand what this post implicates in a practical sense: [Independent samples t-test: Do data really need to be normally distributed for large sample sizes?](https://stats.stackexchange.com/questions/204585/independent-samples-t-test-do-data-really-need-to-be-normally-distributed-for-l) To summarize, the idea that one can apply the t-test to any data, regardless of sampling distribution, if the sample size is large enough is a contentious one. The argument for this claim (to my understanding, please correct if wrong) is - As n becomes larger, the numerator of the test statistic approaches a normal distribution by CLT. Although this is an asymptotic result, in practice this more or less holds on most datasets. - The denominator of the test statistic will probably not be anything close to a chi-sq. However, this is ok. By Slutsky's theorem, the ratio as a whole (i.e. test statistic) will approach a normal. For large n, even though the test statistic is asymptotically normal, we can still treat it as t-distributed because the t is very close to a standard normal when the sample size and thus df is large. - The answer shows a nice Monte Carlo simulation on log normal data which shows that there is no effect on type I or II errors. In general, we want tests with high power conditioned on the type I error being controlled. However, type I error being controlled must also be conditioned on the test statistic being valid. So my first question is: Is the idea of blindly using the t-test on any large dataset problematic not because of anything to do with type I or type II error but rather just the validity of test statistic? I don't mean the validity of the test statistic distribution. Rather whether the test stat can be interpreted to begin with. 
One comment in the post I linked states that the SD is not a good measure of dispersion of skewed data, but I'd like to get a second opinion on that in the context of test stat validity. Can I just tell someone that the test is invalid even though the test has good power and controlled FP? How would you convince them/show this is a problem in an applied setting? My next question is: If the test statistic is valid, when does the t-test actually lose any practical power (or suffer inflated type I error)? This directly relates to a contradictory argument in the linked post which states a t-test loses significant power on skewed distributions. A simple way to answer this question would be to show a Monte Carlo simulation that "breaks" the t-test. However, I think this is very hard to do with large n. Can someone show an example where there is a practical loss of power or inflated type I error? Let's just say "practical" here is arbitrarily > 0.1. You can use anything you want to break the t-test as long as the sample size is over 200 and the data has a mean and finite variance. Imbalanced groups, discrete data, ratio data, skew, etc. are all fair game. Thanks!
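For reference, a minimal harness for the kind of simulation being requested (a sketch, not a full answer): both groups are drawn from the same log-normal with n = 200 each, so the rejection rate estimates the type I error; a normal critical value stands in for the t quantile, which is reasonable at these degrees of freedom:

```python
import numpy as np

rng = np.random.default_rng(1)

def welch_t(a, b):
    # Welch two-sample t statistic
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

n, n_sim, rejections = 200, 2000, 0
for _ in range(n_sim):
    a = rng.lognormal(mean=0.0, sigma=1.0, size=n)
    b = rng.lognormal(mean=0.0, sigma=1.0, size=n)
    if abs(welch_t(a, b)) > 1.96:  # normal approximation to the t critical value
        rejections += 1

print(rejections / n_sim)  # typically close to the nominal 0.05 here
```

Variations to probe the question: make the groups unequal in size and skew direction, or replace one group with a shifted distribution to estimate power instead of size.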
t-test on non normal data: type I/II error vs validity
CC BY-SA 4.0
null
2023-04-01T02:22:35.157
2023-04-01T11:09:55.810
null
null
261708
[ "t-test", "monte-carlo", "central-limit-theorem", "skewness", "large-data" ]
611448
2
null
611443
2
null
It is easier to compute the pdf of $X$ which has value $B$ when $B > \frac 12$ and value $1-B$ when $B < \frac 12$. Since both $B$ and $1-B$ are $U(0,1)$ random variables, the pdf of $X$ is easily determined to be $$f_X(x) = \begin{cases}2,& \frac 12 < x < 1,\\0, &\text{otherwise}\end{cases}$$ from which one gets that the CDF is $$F_X(x) = \begin{cases}2x-1,& \frac 12 < x < 1,\\0, &x \leq 0,\\ 1, & x\geq 1. \end{cases}$$
null
CC BY-SA 4.0
null
2023-04-01T03:02:14.203
2023-04-01T03:02:14.203
null
null
6633
null
611450
2
null
611443
2
null
Well, first of all, there is really no need to invoke any "conditional probability" argument. The CDF of $X$ can be directly evaluated as follows: By the setting, $X$ can be expressed as $X = \max(B, 1 - B)$, and $X \geq 1/2$. For $x \in [1/2, 1)$, we have \begin{align} F_X(x) &= P(\max(B, 1 - B) \leq x) = P(B \leq x, 1 - B \leq x) = P(1 - x \leq B \leq x) \\ &= 2x - 1. \end{align} Obviously, when $x < 1/2$, $F_X(x) = 0$; and when $x \geq 1$, $F_X(x) = 1$. Now, back to your question "why it is incorrect to use conditional probability like I tried to do". You seem to try to use the law of total probability, but failed to apply it correctly (in that you dropped out probabilities of conditioning events). Your "formula": $$P(X\le x) = P(B \le x |B>0.5) + P(1 - B \le x|B\le 0.5)$$ should be corrected as \begin{align} & P[X \leq x] = P[X \leq x|B > 0.5]\color{red}{P[B > 0.5]} + P[X \leq x|B \leq 0.5]\color{red}{P[B \leq 0.5]} \\ =& 0.5P[B \leq x|B > 0.5] + 0.5P[1 - B \leq x|B \leq 0.5] \\ =& 0.5 \times \frac{P[0.5 < B \leq x]}{P(B > 0.5)} + 0.5 \times \frac{P[1 - x \leq B \leq 0.5]}{P(B \leq 0.5)} \\ =& x - 0.5 + (0.5 - 1 + x) = 2x - 1, \end{align} which meets the expectation.
null
CC BY-SA 4.0
null
2023-04-01T04:38:41.653
2023-04-01T04:45:54.053
2023-04-01T04:45:54.053
20519
20519
null
611451
1
null
null
2
69
The data

Consider some simulated data:
```
x <- rep(c(0, 6, 13, 27, 37, 82, 119), 16)  # predictor
set.seed(10)
e <- rnorm(x, mean = 0, sd = 4)  # error
a <- 10  # intercept
b1 <- -0.2  # slopes
b2 <- -0.05
b3 <- -0.3
b4 <- -0.4
y <- a + c(b1, b2, b3, b4) * x + e  # linear response
set.seed(15)
sd <- rexp(y, rate = 1)  # measurement error
df <- data.frame(x = x, y = y, sd = sd,  # simulated data
                 t = rep(1:4, length.out = length(x)))
```
There are four treatments (`t`) that vary in their linear response (`y`) to `x` (`b1` to `b4`) but do not vary in their intercept (`a`). Each response observation comes with measurement error (`sd`). This can be visualised for clarity, with point-ranges indicating measurement error:
```
require(ggplot2)
ggplot(df) +
  geom_pointrange(aes(x = x, y = y, ymin = y - sd, ymax = y + sd)) +
  facet_grid(~t) +
  geom_hline(aes(yintercept = 0)) +
  theme_minimal()
```
Further consider an additional variable `m` which is causing missing data according to a predictable pattern:
```
m <- ifelse(y < 0, 1, 0)  # pattern of missing data
ym <- y  # response with missing data
ym[m == 1] <- NA
df$ym <- ym
sdm <- sd  # measurement error with missing data
sdm[m == 1] <- NA
df$sdm <- sdm
```
Whenever `y` crosses a given threshold (in this case 0), data become unavailable. So we have a combination of measurement error and non-random missing data.

The model

My preferred method of statistical inference is Bayesian and I am using the `ulam` function of the [rethinking package](https://github.com/rmcelreath/rethinking), which essentially acts as an R interface to Stan. The case of measurement error described above is fairly standard but the missing data case is peculiar and referred to as missing not at random. On page 515 of the second edition of his book Statistical rethinking, Richard McElreath describes this case:

> This type of missingness, in which the variable causes its own missing values, is the worst. 
> Unless you know the mechanism that produces the missingness [...], there is little hope. And if you do you [sic] know the mechanism, even then it might be hopeless. Sometimes measurement is all we have. We have to do it better.

This is pretty disheartening but I nonetheless tried implementing a combined measurement error and missing data model, assuming data are missing at random. `ulam` automatically imputes missing observations in the response variable, assuming that they are missing at random, so I simply wrote a measurement error model and left `ulam` to do the missing data imputation.
```
l <- as.list(df[-2:-3])  # ulam() prefers lists
l$N <- nrow(df)

require(rethinking)
mod <- ulam(
  alist(
    # hierarchical likelihood for measurement error
    ym ~ dnorm(mean = y, sd = sdm),
    vector[N]:y ~ dnorm(mean = mu, sd = sigma),
    # linear model
    mu <- a + b[t] * x,
    # priors
    a ~ dnorm(mean = 10, sd = 1),
    b[t] ~ dnorm(mean = -0.2, sd = 1),
    sigma ~ dexp(rate = 1)
  ),
  data = l
)
```
Missing observations in `ym` are imputed fine but the model doesn't know what to do with their accompanying missing observations in `sdm`. Here is the error message:
```
Found 40 NA values in ym and attempting imputation.
Found 40 NA values in sdm. Not sure what to do with these, and they might precent the model from running.
Compiling Stan program...
Error: Variable 'sdm' has NA values.
```
A complete cases version of the same model runs fine:
```
l <- as.list(df[-5:-6])
l$N <- nrow(df)

mod.cc <- ulam(
  alist(
    # hierarchical likelihood for measurement error
    y ~ dnorm(mean = ytrue, sd = sd),
    vector[N]:ytrue ~ dnorm(mean = mu, sd = sigma),
    # linear model
    mu <- a + b[t] * x,
    # priors
    a ~ dnorm(mean = 10, sd = 1),
    b[t] ~ dnorm(mean = -0.2, sd = 1),
    sigma ~ dexp(rate = 1)
  ),
  data = l, chains = 4, cores = 4
)

precis(mod.cc, depth = 2)  # summary of parameter predictions
```
Questions

- Is it possible to implement a combined Bayesian measurement error and missing data model in ulam or another R interface to Stan? 
- In the specific case of non-random missing data, is it possible to specify that imputed values ought to be below or above a certain threshold?
- Is there a way to model a combination of measurement error and missing data in both the explanatory and response variables?

I would appreciate a worked example with code, preferably R implemented in `ulam` but Stan is also welcome.
How to model a combination of measurement error and missing data in R and Stan
CC BY-SA 4.0
null
2023-04-01T05:19:19.920
2023-04-02T19:06:42.323
2023-04-01T05:27:22.663
303852
303852
[ "r", "bayesian", "missing-data", "measurement-error", "stan" ]
611452
1
null
null
5
52
The problem is stated as: > Suppose $X, Y$ are independent and $X \sim \mathcal{N}(\mu_1, 1), Y \sim \mathcal{N}(\mu_2, 1)$ with unknown parameters $\mu_1, \mu_2$. Prove that an unbiased estimator of $\theta=\min \left\{\mu_1, \mu_2\right\}$ does not exist. Here's my sketch: Let $p(\mu, x)$ denote the pdf of $\mathcal N(\mu,1)$. Suppose $T(X, Y)$ is an unbiased estimator. Then $$\int_{\mathbb R^2}T(x,y)p(\mu_1,x)p(\mu_2,y)\mathrm dx\mathrm dy=\min\{\mu_1,\mu_2\}.$$ Let $$q(\mu_2,x)=\int_{\mathbb R} T(x,y)p(\mu_2,y)\mathrm dy.$$ We have $$\int_{\mathbb R} p(\mu_1, x)q(\mu_2,x)\mathrm dx=\min\{\mu_1,\mu_2\}. (*)$$ Intuitively, since $\mu_1,\mu_2$ are separated on the left, it cannot be something related to the order of $\mu_1,\mu_2$. If this is true, then we are done. But I am stuck here and need some hints. There is a proof of the original problem mentioned by User1865345. But it uses the fact that $\min$ is not differentiable somewhere, which is not an essential property. We can slightly adjust the function to some pseudo-minimum, as pointed out by Henry. I think the unbiased estimator should still not exist, due to the intuition above. A specific question is: whether there exist $p,q$ that satisfy $(*)$ if we forget everything and just require $p,q$ to be integrable.
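As a side note, the difficulty is easy to see numerically: the naive plug-in estimator $\min(X, Y)$ is biased; for $\mu_1 = \mu_2 = 0$ one has $E[\min(X,Y)] = -1/\sqrt{\pi} \approx -0.564$, not $0$. A quick Monte Carlo illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
x = rng.normal(0.0, 1.0, n)  # mu1 = 0
y = rng.normal(0.0, 1.0, n)  # mu2 = 0

est = np.minimum(x, y)
# True min(mu1, mu2) is 0, but the plug-in estimator is biased downward:
# E[min(X, Y)] = -1/sqrt(pi) when mu1 = mu2 = 0 and sd = 1
print(est.mean(), -1 / np.sqrt(np.pi))
```

This of course only shows that one particular estimator is biased; the question is about ruling out all of them.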
No unbiased estimator of $\min\{\mu_1,\mu_2\}$
CC BY-SA 4.0
null
2023-04-01T05:44:42.240
2023-04-01T11:43:07.470
2023-04-01T11:43:07.470
384660
384660
[ "mathematical-statistics", "unbiased-estimator", "point-estimation" ]
611453
1
611487
null
0
38
According to [Burda et al (2015)](https://arxiv.org/pdf/1509.00519.pdf), the number of active units is computed from the criterion $\mathrm{Cov}_x\big(\mathbb E_{z \sim q_\phi(z|x)}[z_u]\big) > \delta$ for some particular delta. In the paper it is set to 0.02 empirically. But this only measures units in the vector z that are mostly constant. Another mode of prior collapse for a VAE is random units in z, i.e. units that are independent of the input x and that the decoder part of the VAE learns to ignore. Is there a way to measure it? One way I can think of is to set units in z to a random value and measure the effect on the decoded input. But this is a rather slow process for high-dimensional z. Is there a better way?
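For completeness, the Burda et al. active-units statistic itself is cheap to compute from the per-input posterior means; a sketch with fabricated encoder outputs (this measures the "mostly constant" failure mode, not the "random units" one asked about):

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, latent_dim = 1000, 5

# Fabricated posterior means E_{z ~ q(z|x)}[z] for each input x:
# dims 0-1 respond to the input, dims 2-4 are near-constant, i.e. collapsed
mu = np.zeros((n_inputs, latent_dim))
mu[:, 0] = rng.normal(size=n_inputs)
mu[:, 1] = 0.5 * rng.normal(size=n_inputs)
mu[:, 2:] = 0.001 * rng.normal(size=(n_inputs, 3))

delta = 0.02
activity = mu.var(axis=0)            # variance of the posterior mean across x, per unit
n_active = int((activity > delta).sum())
print(activity.round(4), n_active)   # expected: 2 active units
```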
VAE active units
CC BY-SA 4.0
null
2023-04-01T05:56:31.960
2023-04-15T20:17:51.460
null
null
75286
[ "machine-learning", "neural-networks", "autoencoders", "generative-models" ]
611454
2
null
603663
0
null
The KPI (Key Performance Indicator) depends on the requirements of the application. For some applications (i.e. those where a hard classification must be made and we know a-priori that the misclassification costs are equal, e.g. some handwritten character recognition tasks) accuracy is a completely reasonable performance metric and it would be a mistake to recommend avoiding it because it has problems as well as advantages. Similarly, for some applications (primarily information retrieval) where it is more natural to talk of the relative importance of precision and recall than of misclassification costs, then $F_1$ or more generally $F_\beta$ may be appropriate, especially where we need to make a decision ("do I read this article, or don't I?"). An important consideration is whether we need to make a decision. We may well implement the system using a probabilistic classifier, and then applying a threshold. However, if we need a decision, then the performance of the system depends on the setting of that threshold, so we should be using a performance metric that depends on the threshold, as we need to include the effects of the threshold on the performance of the system. The advice I would give is not to have a single KPI, but have a range of performance metrics that provide information on different aspects of classifier performance. I quite often use accuracy (to measure the quality of the decisions), or equivalently the expected risk where misclassification costs are unequal, the area under the receiver operating characteristic (to measure the ranking of samples) and the cross-entropy (or similar) to measure the calibration of the probability estimates. Basically, our job as statisticians is to understand the advantages and disadvantages of performance metrics so that we can select the appropriate metric(s) for the needs of the application. 
All metrics have advantages and disadvantages, and we shouldn't reject any of them a-priori because of their disadvantages if they have advantages or relevance for our application. I think the advantages and disadvantages are well covered in textbooks (even ML ones! ;o), so I would just use those. Also, as I have said elsewhere, we should make a distinction between performance estimation and model selection. They are not the same problem, and sometimes we should have different metrics for each task.
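To make the "range of metrics" suggestion concrete, here is a toy computation of all three in plain NumPy, so the metric definitions are explicit (library implementations such as scikit-learn's should give the same numbers on this example):

```python
import numpy as np

y_true = np.array([0, 0, 1, 1])
p_hat = np.array([0.1, 0.4, 0.35, 0.8])  # predicted P(y = 1)

# 1. Accuracy: quality of hard decisions at a 0.5 threshold
accuracy = ((p_hat >= 0.5).astype(int) == y_true).mean()

# 2. AUROC: fraction of (positive, negative) pairs ranked correctly
pos, neg = p_hat[y_true == 1], p_hat[y_true == 0]
pairs = pos[:, None] - neg[None, :]
auroc = np.mean(pairs > 0) + 0.5 * np.mean(pairs == 0)

# 3. Cross-entropy (log loss): calibration of the probability estimates
cross_entropy = -np.mean(y_true * np.log(p_hat) + (1 - y_true) * np.log(1 - p_hat))

print(accuracy, auroc, cross_entropy)
```

The three numbers deliberately answer different questions (decisions, ranking, calibration), which is why reporting only one of them can hide a weakness the application cares about.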
null
CC BY-SA 4.0
null
2023-04-01T06:10:57.400
2023-04-01T06:10:57.400
null
null
887
null
611455
2
null
177022
0
null
Let

- $Y_{i}$ be a random variable, $i=1\dots k$.
- $n\in \mathbb{N}$ be the number of trials.
- $y_{i}\in \mathbb{N}$ be the number of $i$-th events in a sequence of $n$ trials.
- $p_{i}\in [0,1]$ be the probability of the $i$-th event of each trial.

The probability mass function of the multinomial distribution is $\begin{equation} \displaystyle P(Y_{1}=y_{1},\dots,Y_{k}=y_{k}) =\frac{n!}{y_{1}!\dots y_{k}!}p_{1}^{y_{1}}\dots p_{k}^{y_{k}}, \end{equation}$ with $\displaystyle\sum^{k}_{i=1}y_{i}=n$ and $\displaystyle\sum^{k}_{i=1}p_{i}=1$.

The multinomial distribution is in the exponential family.

Proof. Using $y_{k}=n-\sum_{i=1}^{k-1}y_{i}$,
$\begin{align} \displaystyle P(Y_{1}=y_{1},\dots,Y_{k}=y_{k}) &=\frac{n!}{y_{1}!\dots y_{k}!}p_{1}^{y_{1}}\dots p_{k}^{y_{k}}\\ &=\left(\frac{n!}{y_{1}!\dots y_{k}!}\right)p_{1}^{y_{1}}\dots p_{k-1}^{y_{k-1}}\,p_{k}^{\,n-\sum_{i=1}^{k-1}y_{i}}\\ &=\left(\frac{n!}{y_{1}!\dots y_{k}!}\right)\exp\left[y_{1}\log p_{1}+\dots+y_{k-1}\log p_{k-1}+\left(n-\sum_{i=1}^{k-1}y_{i}\right)\log p_{k}\right]\\ &=\left(\frac{n!}{y_{1}!\dots y_{k}!}\right)\exp\left[y_{1}\log \left(\frac{p_{1}}{p_{k}}\right)+\dots+y_{k-1}\log\left(\frac{p_{k-1}}{p_{k}}\right)+n\log p_{k}\right]\\ &=b(y_{1},\dots ,y_{k})\exp\left[\eta^{T}T(y_{1},\dots ,y_{k})-a(\eta)\right] \end{align}$
, where
$\begin{align} \eta&= \begin{bmatrix} \log(p_{1}/p_{k}) \\ \log(p_{2}/p_{k}) \\ \vdots\\ \log(p_{k-1}/p_{k})\\ \end{bmatrix}\\ T(y_{1},\dots ,y_{k})&= \begin{bmatrix} y_{1} \\ y_{2} \\ \vdots\\ y_{k-1}\\ \end{bmatrix}\\ a(\eta)&=-n\log p_{k}\\ b(y_{1},\dots ,y_{k})&=\frac{n!}{y_{1}!\dots y_{k}!}. \end{align}$
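Since $\sum_{i}y_{i}=n$ forces $y_{k}=n-\sum_{i=1}^{k-1}y_{i}$, the log-partition term works out to $a(\eta)=-n\log p_{k}$; a quick numerical check that this exponential-family form reproduces the multinomial pmf:

```python
import math

n = 5
p = [0.2, 0.3, 0.5]
y = [1, 2, 2]  # counts, summing to n

# Direct multinomial pmf
b = math.factorial(n) // math.prod(math.factorial(yi) for yi in y)
direct = b * math.prod(pi**yi for pi, yi in zip(p, y))

# Exponential-family form with natural parameter eta_i = log(p_i / p_k),
# sufficient statistic T = (y_1, ..., y_{k-1}) and a(eta) = -n * log(p_k)
eta = [math.log(pi / p[-1]) for pi in p[:-1]]
a = -n * math.log(p[-1])
expfam = b * math.exp(sum(e * yi for e, yi in zip(eta, y[:-1])) - a)

print(direct, expfam)  # both 0.135 up to floating point
```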
null
CC BY-SA 4.0
null
2023-04-01T06:46:55.733
2023-04-01T06:46:55.733
null
null
384664
null
611458
1
null
null
0
25
I have a dataset of cars and want to model a variable called average daily driving distance. The variable was calculated before I received the dataset as: average daily driving distance = total driving distance in the days between the last two inspections / days between the last two inspections. However, I only have access to the quotient and do not know the total driving distance nor the number of days between inspections for each car. If I were modeling the total driving distance for each car during a certain period, a Tweedie GLM with a Tweedie index parameter between 1 and 2 could be appropriate, as the data is non-negative and continuous with exact zeros. The number of times a car is driven in a certain period could be described as the outcome of a Poisson process, and the distances of each ride could be seen as gamma-distributed (although not entirely accurately, since there is plausibly some dependence between car rides, e.g. going to work and back home, but perhaps these could be seen as one event). Since the quotient data is also non-negative and continuous with exact zeros, I am wondering if it would be legitimate to fit a Tweedie GLM to model the average daily driving distance. However, I am struggling with the interpretation of the model in Poisson and Gamma terms. Can a Tweedie GLM still be used in this case?
Appropriateness of Tweedie GLM for modeling average daily driving distance with unknown numerator and denominator
CC BY-SA 4.0
null
2023-04-01T09:13:53.387
2023-04-05T20:47:32.790
2023-04-05T20:47:32.790
11887
141256
[ "generalized-linear-model", "modeling", "poisson-distribution", "gamma-distribution", "tweedie-distribution" ]
611459
2
null
611381
2
null
Let's compute it with a smaller case such that we can write it out completely and look under the hood at what happens. Consider 2 red cards and 3 black cards. What are all possible cases? $$\begin{array}{cccc} \text{combinations} & \text{alternating pairs} & \text{total alternations}\\ & 1 \quad 2 \quad 3 \quad 4 & \\\hline\\ \text{rrbbb} & 0 \quad 1 \quad 0 \quad 0 & 1\\ \text{rbrbb} & 1 \quad 1 \quad 1 \quad 0 & 3\\ \text{rbbrb} & 1 \quad 0 \quad 1 \quad 1 & 3\\ \text{rbbbr} & 1 \quad 0 \quad 0 \quad 1 & 2\\ \text{brrbb} & 1 \quad 0 \quad 1 \quad 0 & 2\\ \text{brbrb} & 1 \quad 1 \quad 1 \quad 1 & 4\\ \text{brbbr} & 1 \quad 1 \quad 0 \quad 1 & 3 \\ \text{bbrrb} & 0 \quad 1 \quad 0 \quad 1 & 2 \\ \text{bbrbr} & 0 \quad 1 \quad 1 \quad 1 & 3 \\ \text{bbbrr} & 0 \quad 0 \quad 1 \quad 0 & 1 \\ \hline \text{sums} & 6\quad 6 \quad 6 \quad 6 & 24 \end{array}$$ The expectation is then computed as a sum where we multiply each case by its frequency/probability, which here is $\frac{1}{10}$ $$E[\text{alternations}] = \begin{array}{cr} (0+1+0+0) \times \frac{1}{10} &\\ (1+1+1+0) \times \frac{1}{10} &\\ (1+0+1+1) \times \frac{1}{10} &\\ (1+0+0+1) \times \frac{1}{10} &\\ (1+0+1+0) \times \frac{1}{10} &\\ (1+1+1+1) \times \frac{1}{10} &\\ (1+1+0+1) \times \frac{1}{10} &\\ (0+1+0+1) \times \frac{1}{10} &\\ (0+1+1+1) \times \frac{1}{10} &\\ (0+0+1+0) \times \frac{1}{10} & +\\ \hline 2.4 \end{array} = \begin{array}{cr} 0.1&\vphantom{ \frac{1}{10} }\\ 0.3&\vphantom{ \frac{1}{10} }\\ 0.3&\vphantom{ \frac{1}{10} }\\ 0.2&\vphantom{ \frac{1}{10} }\\ 0.2&\vphantom{ \frac{1}{10} }\\ 0.4&\vphantom{ \frac{1}{10} }\\ 0.3&\vphantom{ \frac{1}{10} }\\ 0.2&\vphantom{ \frac{1}{10} }\\ 0.3&\vphantom{ \frac{1}{10} }\\ 0.1&+\vphantom{ \frac{1}{10} }\\ \hline 2.4 \end{array}$$ But instead of computing all the terms in each row first, we can also compute the columns first $$E[\text{alternations}] = \begin{array}{cccccccccccr} &(&0&+&0.1&+&0&+&0&)&\\ &(&0.1&+&0.1&+&0.1&+&0&)&\\ 
&(&0.1&+&0&+&0.1&+&0.1&)&\\ &(&0.1&+&0&+&0&+&0.1&)&\\ &(&0.1&+&0&+&0.1&+&0&)&\\ &(&0.1&+&0.1&+&0.1&+&0.1&)&\\ &(&0.1&+&0.1&+&0&+&0.1&)&\\ &(&0&+&0.1&+&0&+&0.1&)&\\ &(&0&+&0.1&+&0.1&+&0.1&)&\\ &(&0&+&0&+&0.1&+&0&)&+\\ \hline &&&&&2.4 \end{array} = \begin{array}{cr} 0&\\ 0.1&\\ 0.1&\\ 0.1&\\ 0.1&\\ 0.1&\\ 0.1&\\ 0&\\ 0&\\ 0&\\ \hline 0.6 \end{array}+ \begin{array}{cr} 0.1&\\ 0.1&\\ 0&\\ 0&\\ 0&\\ 0.1&\\ 0.1&\\ 0.1&\\ 0.1&\\ 0&\\ \hline 0.6 \end{array}+ \begin{array}{cr} 0&\\ 0.1&\\ 0.1&\\ 0&\\ 0.1&\\ 0.1&\\ 0&\\ 0&\\ 0.1&\\ 0.1&\\ \hline 0.6 \end{array}+\begin{array}{cr} 0&\\ 0&\\ 0.1&\\ 0.1&\\ 0&\\ 0.1&\\ 0.1&\\ 0.1&\\ 0.1&\\ 0&\\ \hline 0.6 \end{array} \quad \begin{array}{cr} \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ = 2.4 \end{array} $$ --- The reason here is that we can change a product over a sum into a sum of products, like $(1+2+3)\times 4 = (1\times 4) + (2 \times 4) + (3 \times 4)$, and that means we get a matrix of terms that we can sum either rows first or columns first. This is the [distributive property](https://en.m.wikipedia.org/wiki/Distributive_property) of multiplication over summation.
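For completeness, the table can be reproduced by brute force; a short script enumerating all $\binom{5}{2}=10$ arrangements confirms the expectation of 2.4:

```python
from itertools import permutations

# all distinct orderings of 2 red (r) and 3 black (b) cards
arrangements = sorted(set(permutations("rrbbb")))

def alternations(seq):
    # number of adjacent pairs whose colours differ
    return sum(a != b for a, b in zip(seq, seq[1:]))

counts = [alternations(s) for s in arrangements]
expectation = sum(counts) / len(arrangements)
```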
null
CC BY-SA 4.0
null
2023-04-01T09:15:53.157
2023-04-01T09:27:20.523
2023-04-01T09:27:20.523
164061
164061
null
611460
1
611915
null
8
344
The [Ljung-Box](http://stat.wharton.upenn.edu/%7Esteele/Courses/956/Resource/TestingNormality/LjungBox.pdf) and [Box-Pierce](https://www.tandfonline.com/doi/abs/10.1080/01621459.1970.10481180) tests make use of the sample autocorrelation $$ r_k = \frac {\sum_{t=k+1}^n a_ta_{t-k}} {\sum_{t=1}^n a_t^2}$$ and the Ljung-Box test exploits the result that $$Var(r_k) = \frac {n-k}{n(n+2)}$$ Here, the lag order is $k$, $n$ is the length of the series, and $a_t$ is the (true) error, and thus not the residual of some preliminary estimation. In the original paper by Box and Pierce, we find [](https://i.stack.imgur.com/cEudS.png) My question: For those among us who cannot readily show this, how can we establish this result? Box and Pierce work under the assumption of (jointly independent) normal innovations $a_t$, so that exact variances are within reach. In particular, the distribution of the sample correlation coefficient should then provide a starting point, as discussed e.g. [here](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient#Using_the_exact_distribution), [What is the distribution of sample correlation coefficients between two uncorrelated normal variables?](https://stats.stackexchange.com/questions/191937/what-is-the-distribution-of-sample-correlation-coefficients-between-two-uncorrel) and [here](https://mathworld.wolfram.com/CorrelationCoefficientBivariateNormalDistribution.html). However, I have not been able to use these results to establish that of Box and Pierce, nor have I found any other proof. 
FWIW, a little simulation suggests that the stated result is indeed quite a bit more accurate than the $1/n$ approximation, with the difference, however, shrinking with $n$ as expected: [](https://i.stack.imgur.com/Qr2ox.png)

```
n1 <- 25   # small sample size
n2 <- 100  # larger sample size
max.lag <- 14
# sample autocorrelations at lags 1..max.lag of Gaussian white noise, 20000 replications each
autocorrs.small <- replicate(20000, acf(rnorm(n1), lag.max=max.lag, plot=F)$acf[-1])
autocorrs.large <- replicate(20000, acf(rnorm(n2), lag.max=max.lag, plot=F)$acf[-1])
# empirical variance per lag (points) against (n-k)/(n(n+2)) (solid) and 1/n (dashed)
plot(1:max.lag, apply(autocorrs.small, 1, var), ylim=c(0, 0.04), col="brown")
points(1:max.lag, apply(autocorrs.large, 1, var), col="green")
lines(1:max.lag, (n1-1:max.lag)/(n1*(n1+2)), lwd=2, col="brown")
segments(1, 1/n1, max.lag, 1/n1, lwd=2, col="brown", lty=2)
lines(1:max.lag, (n2-1:max.lag)/(n2*(n2+2)), lwd=2, col="green")
segments(1, 1/n2, max.lag, 1/n2, lwd=2, col="green", lty=2)
```

One difference that comes to mind is that the sample autocorrelation coefficient can only make use of $n-k$ observations in the numerator due to the lags, unlike the standard correlation coefficient. E.g., the last provided link would suggest $Var(r_k)=1/(n-1)$ under independence. Can anyone see further steps from here? Note: This question has been asked [before](https://stats.stackexchange.com/questions/413830/variance-of-autocorrelation), without answer so far.
Variance of sample autocorrelation (Ljung-Box)
CC BY-SA 4.0
null
2023-04-01T09:29:40.163
2023-05-08T03:49:49.953
2023-04-03T06:48:30.263
67799
67799
[ "time-series", "correlation", "variance", "autocorrelation" ]
611461
2
null
611073
2
null
Please study both my links in the comments. The [1st](https://stats.stackexchange.com/a/48859/3277) gives the LDA extraction algorithm formulas, and the [2nd](https://stats.stackexchange.com/a/83114/3277) (in my answer there) contains the demonstration of LDA on iris dataset according to such computations. I am not using R's term "scaling matrix", finding it a bit vague. Per the [algorithm](https://stats.stackexchange.com/a/48859/3277) expressed in my 1st link, LDA produces eigenvalues (which we recalculate into canonical correlations) and eigenvectors$^1$. Eigenvectors - we can multiply them by the constant $\sqrt{N-k}$ to obtain the (raw, unstandardized) discriminant coefficients $\bf C$ (called by you "unnormalized scaling matrix") which are the coefficients to compute discriminant scores with. Alternatively, we could normalize the eigenvectors to column SS=1, to obtain the matrix $\bf C_n$ (you call it "normalized scaling matrix") [usable](https://stats.stackexchange.com/a/22889/3277), for visual purposes, as the (oblique) rotation matrix of the variables into the discriminants. So, you may use either way to get the values (scores) of the discriminants ($\bf X$ being the centered analyzed variables): one via discriminat coefficients: $\bf XC$, another via normalized eigenvectors: $\bf XC_n$. The first set of scores have the property that their pooled within-class covariance matrix is the identity matrix. The second set of scores are the perpendicular projections of the data points onto the discriminants as axes drawn in the space of the variables. By both sets of scores, the discriminants are uncorrelated variates. But, as drawn in the space of the original variables, they are not orthogonal axes. By tradition, the first way to compute discr. scores is used most often by packages. [As far as I'm aware, R and Python do not follow the fast algorithm I described under my link. 
They probably use an equivalent, slower yet computationally more stable variant utilizing SVD instead of eigendecomposition. Again, I'm mentioning that issue in my linked answer.] --- $^1$ Because $\bf{S_w^{-1} S_b}$ of LDA isn't symmetric, its eigenvectors do not have to form an orthonormal matrix, unlike the eigenvectors of PCA.
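To make the first property concrete, here is a small numpy sketch on toy simulated data (not iris; the Cholesky route below is just one way to get eigenvectors of $\bf{S_w^{-1}S_b}$ normalized so that $\bf{V^{T}S_wV=I}$): multiplying by $\sqrt{N-k}$ then yields scores whose pooled within-class covariance is the identity.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data (hypothetical): 3 classes, 4 variables, 50 points per class
k, n_per, p = 3, 50, 4
centers = np.array([[0., 0, 0, 0], [2, 1, 0, 0], [0, 2, 1, 0]])
X = np.vstack([rng.normal(m, 1.0, size=(n_per, p)) for m in centers])
g = np.repeat(np.arange(k), n_per)
N = k * n_per
Xc = X - X.mean(axis=0)

# within-class (Sw) and between-class (Sb) scatter matrices
Sw = sum((X[g == j] - X[g == j].mean(0)).T @ (X[g == j] - X[g == j].mean(0))
         for j in range(k))
Sb = Xc.T @ Xc - Sw

# eigenvectors of Sw^{-1} Sb via a Cholesky trick, normalized so V' Sw V = I
L = np.linalg.cholesky(Sw)
Linv = np.linalg.inv(L)
evals, E = np.linalg.eigh(Linv @ Sb @ Linv.T)
V = Linv.T @ E[:, np.argsort(evals)[::-1][:k - 1]]

C = V * np.sqrt(N - k)   # raw discriminant coefficients
scores = Xc @ C          # discriminant scores

# pooled within-class covariance of the scores is the identity matrix
Sw_scores = sum((scores[g == j] - scores[g == j].mean(0)).T
                @ (scores[g == j] - scores[g == j].mean(0)) for j in range(k))
pooled = Sw_scores / (N - k)
```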
null
CC BY-SA 4.0
null
2023-04-01T09:35:40.920
2023-04-01T10:15:00.750
2023-04-01T10:15:00.750
3277
3277
null
611462
1
611496
null
1
21
I was doing a project where my target variable is a count variable, so naturally my mind went to Poisson regression. Upon further inspection, however, any identical set of my explanatory variables can result in multiple values of the target variable. Will my model work in this case, or should I apply something like an LSTM? I am new to ML and would like some guidance.
Is the count regression model valid in this use case?
CC BY-SA 4.0
null
2023-04-01T09:54:17.693
2023-04-01T18:06:00.673
null
null
384673
[ "multiple-regression", "count-data", "lstm" ]
611463
1
null
null
1
32
I am estimating a 2SLS model in R with panel data (n=800), where I want to include an interaction term as the instrumental variable in the first-stage model. My first question is: does that even make sense? The idea is that only the interaction term works as a good IV for X on Y. First-stage model: `first_stage <- plm(X ~ (Z(a) * Z(b)) + X1 + X2 + X3 + X4 + X5, data = data, model = "within", effect = "individual")` As I write above, I am particularly interested in removing "bad" variance of X on Y (second-stage model) by limiting it to the variance related to the interaction term (first-stage model). My second question concerns the second-stage model of my 2SLS model. Can I, building on the first-stage model, interact the predicted X (= X hat) with the rest of the independent variables? Second-stage model: `second_stage <- plm(Y ~ X_hat * (X1 + X2 + X3 + X4 + X5), data = data, model = "within", effect = "individual")` Here I want to gain insight into what effect the isolated variance of the predicted X (X_hat) has, in interaction with the other variables (X1-X5), on Y.
Two questions about Interaction-terms in IV-regressions (2SLS) in R
CC BY-SA 4.0
null
2023-04-01T09:58:57.060
2023-04-08T10:20:05.070
2023-04-08T10:20:05.070
56940
384671
[ "r", "panel-data", "2sls" ]
611464
2
null
611447
1
null
You are using the term "valid" in a binary manner, as in "either the t-test is valid or the t-test is invalid", which is problematic not only because there may be a grey area, but also because your definition of validity isn't really precise or objective ("can be interpreted", how could you distinguish whether it can or it can't?). Furthermore, you use it in a nonstandard manner. Most people would call a test "valid" if its type I and type II errors behave well, i.e., it keeps its level at least approximately and is unbiased, maybe with type II error somewhere close to the best you could achieve with the given information (which will not normally include knowledge of the underlying distribution). Also with this use of the term, the situation isn't that the test is either valid or invalid, as it's subjective and may depend on the application whether 4.47% is a good approximation to 5% or not, or whether anticonservativity as in 5.03% rejection rate can be tolerated. Addressing your first question, I actually appreciate your care for the interpretability of the test statistic. I do think that it is important to interpret the test statistic, because if data are non-normal and the actual distribution is unspecified, the test statistic implicitly defines the alternative against which the test will be unbiased, namely all distributions for which the test statistic becomes too large in absolute value with a larger probability than the nominal level, i.e., the test statistic defines what is actually distinguished by the test. The t-test statistic in my view is fairly easy to interpret: Is the mean far away from the hypothesised one compared to the appropriately scaled sample standard deviation? The issue is not so much whether we can interpret it (surely we can), but rather whether this interpretation properly reflects what we are interested in in a given application. 
This may indeed not be the case in situations in which the mean or the sample standard deviation do not reflect the characteristics of the distribution well in which we are interested. "Can I just tell someone that the test is invalid even though the test has good power and controlled FP? How would you convince them/show this is a problem in an applied setting?" The thing here is that (a) most people would use the term "valid" in such a way that it actually means "good power and controlled FP", so people wouldn't normally understand you. But also (b) note that power is not a fixed value but depends on the actual alternative against which you want to have good power. Now if the underlying distribution is non-normal, power against a normal alternative may not be what you're interested in. Maybe this is what you mean: Power against a normal alternative may be good, power against a non-normal alternative that is relevant in the given application may not be good. This will depend on what is actually relevant in your application, it needs to be specified for making a statement like this, and then it can be simulated. If this power with appropriately chosen non-normal alternative is good (we may be interested in a range of possible distributions really), I'd still think that the test should be called "valid". In other words, my message is that your concern about validity can be translated into a concern regarding power against non-normal alternatives! Note also that the definition of the test involves prominently both the test statistic and its distribution, as the latter determines when you reject, which is what a test really is about, so a test where the statistic is "valid" in your sense still requires an approximately correct distribution in order to work well, i.e., to be called "valid" by others. Regarding your second question, the probably clearest example for such situations are situations with gross outliers. 
Assume a distribution $P_{\mu,\sigma^2,\epsilon,x}=(1-\epsilon){\cal N}(\mu,\sigma^2)+\epsilon\delta_x$, where $\epsilon$ is small and fixed, say 0.01, and $\delta_x$ is the Dirac/one-point distribution in $x$. $x$ can be interpreted as outlier if far away from $\mu$. The expected value of this is $\mu^*=(1-\epsilon)\mu+\epsilon x$, and the variance is, if I'm not mistaken, $$ (1-\epsilon)((\mu-\mu^*)^2+\sigma^2)+\epsilon(x-\mu^*)^2=(1-\epsilon)(\epsilon(\mu-x)^2+\sigma^2). $$ If you consider $x\to\infty$, the numerator of the t-test statistic will be $O_P(x)$ and the denominator will be $O_P(x^2)$ (this is the essence of the example and will be true even if I got the variance slightly wrong because I did this quickly), meaning that for any arbitrarily large given $n$ you can find an $x$ that is so large that the denominator will dominate the numerator, or in other words, the absolute value of the t-statistic will be arbitrarily small with arbitrarily large probability, so that the test will almost never reject for any given value of $\mu$ (how large you have to choose the $x$ will depend on the $\mu, n, \epsilon$ for which you want to demonstrate that you can hardly ever reject). If you want to see a simulation, you should be able to make it up yourself from this description. Addressing some of the issues mentioned earlier: "the idea that one can apply the t-test to any data" - I don't like the use of "can" here, as you can for sure compute the t-test for all kinds of data whether it's any good or not. And the quality, i.e., type I and type II error, depending on the actual alternatives considered, see above, is a gradual thing, not a binary one. 
"in practice this more or less holds on most datasets" - this statement doesn't make much sense, as the theorem in question regards the assumed underlying distribution and $n\to\infty$, so it does not apply to a fixed dataset (note also that any theoretical probability distribution is a model and as such an idealisation; there isn't a well defined "true" underlying distribution given a dataset in reality). "The denominator of the test statistic will probably not be anything close to a chi-sq. However, this is ok." It may or may not be OK (and actually this isn't binary either), it'll depend on the model you think is appropriate for the underlying process. "Is the idea of blindly using the t-test on any large dataset problematic" - being blind is probably never a good idea; people recommend to check for skewness and outliers for a reason, and as shown before, outliers that lie sufficiently far out can destroy the power at arbitrarily large sample sizes. That said, I think that people who look at for example QQ-plots often tend to be overcritical regarding deviations from normality, as for large data the t-distribution of the test statistic is very often a good enough approximation for i.i.d. data; violations of independence are often far more critical (and the issue that significance for large data may occur too easily and one should really look at effect sizes and not just at p-values). Note also that I have discussed whether to use a normal or a t-distribution for non-normal data [here](https://stats.stackexchange.com/questions/590023/why-is-t-test-more-appropriate-than-z-test-for-non-normal-data/590039#590039).
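To see the outlier effect numerically, here is a rough simulation sketch (my parameter choices $n=200$, $\epsilon=0.01$, $\mu=1$, $x=10^6$ are arbitrary): with contamination the rejection rate drops far below the clean-data power, which is essentially 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def t_stat(x, mu0=0.0):
    # one-sample t statistic against mu0
    return (x.mean() - mu0) * np.sqrt(len(x)) / x.std(ddof=1)

n, eps, mu, x_out = 200, 0.01, 1.0, 1e6
crit = 1.972  # approximate two-sided 5% critical value of t_199

def reject_rate(contaminated, reps):
    hits = 0
    for _ in range(reps):
        x = rng.normal(mu, 1.0, n)
        if contaminated:
            x[rng.random(n) < eps] = x_out  # replace ~1% by the far-out value
        hits += abs(t_stat(x)) > crit
    return hits / reps

power_clean = reject_rate(False, reps=500)
power_contaminated = reject_rate(True, reps=2000)
```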
null
CC BY-SA 4.0
null
2023-04-01T11:09:55.810
2023-04-01T11:09:55.810
null
null
247165
null
611465
2
null
611358
0
null
The updating equations (adapted from [this answer](https://stats.stackexchange.com/a/520830/60613)) are: > $$K_{i+1}^{-1} = \begin{bmatrix} K_i & k_* \\ k_*^T & k_{**}\end{bmatrix}^{-1} = \begin{bmatrix}K_i^{-1} + S_i v_iv_i^T & -S_i v_i \\ -S_iv_i^T & S_i\end{bmatrix}$$ where > $$S_i := (k_{**} - k_*^Tv_i)^{-1}$$ The complexity of computing $S_i$ given $v_i$ is $\propto i$, while the cost of computing $v_i = K_i^{-1}k_*$ as well as the $v_iv_i^T$ product is $\propto i^2$. So the cost increases quadratically per iteration. It's easy to show that: $$\sum_{i=1}^n i^2=\frac{n^3}{3}+\frac{n^2}{2}+\frac{n}{6}$$ [1] jld ([https://stats.stackexchange.com/users/30005/jld](https://stats.stackexchange.com/users/30005/jld)), [Using Gaussian Processes to learn a function online](https://stats.stackexchange.com/a/520830/60613), URL (version: 2021-04-21): [https://stats.stackexchange.com/q/520830](https://stats.stackexchange.com/q/520830)
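For concreteness, here is a small numpy sketch (with an arbitrary squared-exponential kernel on a toy grid, not any particular GP library) that builds $K^{-1}$ incrementally via the block update above and checks it against a direct inverse:

```python
import numpy as np

def rbf(x, ell=0.5):
    # squared-exponential kernel matrix (an arbitrary illustrative choice)
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

x = np.linspace(-2.0, 2.0, 6)        # toy inputs
K_full = rbf(x) + 1e-10 * np.eye(6)  # tiny jitter for numerical stability

# incremental build: start from one point, add one point per iteration
K_inv = np.array([[1.0 / K_full[0, 0]]])
for i in range(1, 6):
    k_star = K_full[:i, i]           # cross-covariances with the new point
    k_ss = K_full[i, i]              # prior variance of the new point
    v = K_inv @ k_star               # O(i^2) step
    S = 1.0 / (k_ss - k_star @ v)    # scalar Schur complement, O(i)
    K_inv = np.block([
        [K_inv + S * np.outer(v, v), -S * v[:, None]],
        [-S * v[None, :],            np.array([[S]])],
    ])

direct = np.linalg.inv(K_full)
```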
null
CC BY-SA 4.0
null
2023-04-01T11:10:41.647
2023-04-01T11:10:41.647
null
null
60613
null
611467
1
611504
null
2
80
There is a quite old yet very good question about the [proper way for using rfImpute](https://stats.stackexchange.com/questions/226803/what-is-the-proper-way-to-use-rfimpute-imputation-by-random-forest-in-r), but to me the question raised by Doug7 (whether the target variable y gets used for the imputation of the features Xi, and whether that would be harmful later when trying to fit models to the new imputed data set) has not really been answered. The accepted answer by RNB points towards using the `mice` or `missForest` R packages for imputation instead of `randomForest`'s `rfImpute`, but the differences are that these packages

- build individual imputation models for each variable Xi, while rfImpute builds a single model for the whole dataset
- let you choose to exclude or include the target variable y as a variable for all these individual imputation models in mice and missForest, whereas you must include y in rfImpute

So I would like to raise the question of whether or not to use y during imputation again, but more generally and not limited to the use of `rfImpute`. The only study I could find about this is a [2017 study](https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-016-0281-5) which focuses more on whether or not to also impute missing values of y, but it also has some experiments with the potential to shed light on the question of whether or not to make use of y during imputation. 
Since I think there is an error for item #3 in [the table listing the experiments they made](https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-016-0281-5/tables/1), let me list the first 3 experiments relevant to my question again here as I understand them:

- complete case analysis as a reference (no multiple imputation [mi])
- no imputation of missing y, y not used in the mi model
- no imputation of missing y, y used in the mi model

So, in a nutshell, the question is: can you use strategy #3 from above (which would allow you to use the computationally much cheaper `rfImpute`), or should you never make use of y in an imputation model (which would force you to go with something a lot more expensive like `mice` or `missForest`) and go with #2 instead when your concern is a model fit on the imputed data set? My gut feeling told me not to use #3, but the study seems to show that when it comes to bias #3 clearly outperforms #2, and when it comes to error, at least for larger data sets, the same seems to be true. But this refers to the quality of the imputation, not to the impact on models fit on the new imputed data set! What are your thoughts on this? Thanks, Mark
Use the target variable during imputation?
CC BY-SA 4.0
null
2023-04-01T11:51:11.053
2023-04-01T20:02:16.553
2023-04-01T15:43:26.947
248138
248138
[ "r", "machine-learning", "missing-data", "data-imputation", "multiple-imputation" ]
611470
2
null
75024
1
null
This is right, and what you have in the end can be simplified, e.g.: $$ E_{X}\big[E_{Y|X}[(Y-f(X))^2|X]\big] \quad=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} [y-f(x)]^2 p_{X,Y}(x,y) \ dx\,dy \quad =E_{X,Y}[(Y-f(X))^2], $$ and manipulated: $$ =E_{Y}[Y^2] -2 E_{X,Y}[Y\,f(X)] + E_{X}[f(X)^2] . $$ The expectation notation is just ... notation, whereas the mathematical notation is more explicit/universal and the "safest" way to consider things. I don't believe the condition inside the expectation square brackets $E_{Y|X}[\cdot|X]$ is necessary or adds anything when the distribution is explicit in the subscript, i.e. $E_{Y|X}[g(X,Y)|X]=E_{Y|X}[g(X,Y)]$, whereas it would be necessary if the subscript were omitted (as is often the case): $E[g(X,Y)|X]\neq E[g(X,Y)]$, since the latter would typically be an expectation over $p(X,Y)$ and the former over $p(Y|X)$.
null
CC BY-SA 4.0
null
2023-04-01T12:57:07.223
2023-04-01T12:57:07.223
null
null
307905
null
611472
1
611474
null
1
20
I know that I have to use the same datasets and experimental settings. In this way, I don't have to run their code. What if I used different datasets? Is it OK to use their code from GitHub to get the accuracy for the specific dataset? If so, how do I find the best code that will run without many errors? I ask because I spent a lot of time running those codes, with many parts that I didn't understand and many errors that I could not fix. I am wondering what the easiest or most commonly used approach in research is.
Right way of comparing my model's result with state of the arts
CC BY-SA 4.0
null
2023-04-01T13:29:08.037
2023-04-01T13:46:06.143
null
null
380942
[ "machine-learning", "neural-networks", "dataset", "model-comparison", "research-design" ]
611473
1
null
null
1
39
I am performing classification on an imbalanced dataset (70% negatives). If a prediction is negative I take a specific action; otherwise, the opposite one. As both cases involve some costs, I want to assess how good the model is at detecting each class. To do so, I am calculating TP/(TP+FP) and TN/(TN+FN), along with the log-loss. Questions:

1. Is my intuition correct in judging these two metrics the most relevant for how reliable the model's predictions are?
2. The costs behind each decision are not constant, but they roughly follow different distributions, one for positives and one for negatives. Let's say two Gaussians, N(1,2) and N(1.5,3). How can I account for this?
combine specificity and
CC BY-SA 4.0
null
2023-04-01T13:39:50.300
2023-04-01T16:51:00.570
2023-04-01T16:19:04.047
105194
105194
[ "machine-learning", "model-evaluation", "roc", "precision-recall", "precision" ]
611474
2
null
611472
1
null
There is no easy way out of this. Most researchers share their code such that it can be run assuming an input format. If you want to use it with your data, you either need to adapt their code or adapt your data. The first way is usually more challenging because you need to understand their code.
null
CC BY-SA 4.0
null
2023-04-01T13:46:06.143
2023-04-01T13:46:06.143
null
null
204068
null
611475
1
null
null
1
52
I know the following influence function for an M-estimator: $IF(x_0,T,F_0)= $ $\frac{\psi(x_0)}{\mathbb{E}_{F_0}[\psi'(X)]}$ where $F_0$ is the centered model ($F_{\theta}(x)=F_0(x-\theta)$). I am interested in knowing the influence function for the general model: $IF(x,T,F_{\mu,\sigma})$ I know that it must be $IF(x,T,F_{\mu,\sigma})= $ $\frac{\psi(\frac{x-\mu}{\sigma})\sigma}{\mathbb{E}_{F_0}[\psi'(X)]}$ I don't know how to obtain that; any hint would be welcome. Edit: The influence function describes the effect of an infinitesimal contamination at the point $x$ on the estimate we are seeking, for a given distribution $F$: $$IF(x,T,F)=\lim_{t\to 0} \frac{T((1-t)F+t\Delta_x)-T(F)}{t}$$ A location M-estimator is defined as the solution in $t$ of the equation: $$\sum^{n}_{i} \psi(\frac{x_i-t}{s_n})=0$$ where $s_n$ is an equivariant scale estimator. We define $F_0$ as the centered model: $$F_{\mu,\sigma}(x)=F_0(\frac{x-\mu}{\sigma})$$ I know that the influence function, in the centered model, at a point $x_0$ for a location M-estimator is given by: $$IF(x_0,T,F_0)=\frac{\psi(x_0)}{\mathbb{E}_{F_0}[\psi'(X)]}$$ I am trying to prove that $$IF(x,T,F_{\mu,\sigma})=\frac{\psi(\frac{x-\mu}{\sigma})\sigma}{\mathbb{E}_{F_0}[\psi'(X)]}$$ Furthermore, I know that: $$s_n(X+b)=s_n(X)$$ $$s_n(aX)=a s_n(X)$$ Location M-estimators are equivariant: $$T(aX+b)=aT(X)+b$$ Any help would be really appreciated; I've been stuck on this for too long.
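Edit 2: Here is a sketch of the direction I am trying to make rigorous, using the equivariance above. Contaminating $F_{\mu,\sigma}$ at $x$ corresponds, after standardizing, to contaminating $F_0$ at $z=(x-\mu)/\sigma$, so affine equivariance of $T$ gives
$$T\big((1-t)F_{\mu,\sigma}+t\Delta_x\big)=\sigma\, T\big((1-t)F_0+t\Delta_{z}\big)+\mu,\qquad z=\frac{x-\mu}{\sigma}.$$
Differentiating at $t=0$ (the shift $\mu$ cancels in the difference quotient and the factor $\sigma$ passes through the limit) would yield
$$IF(x,T,F_{\mu,\sigma})=\sigma\, IF(z,T,F_0)=\frac{\sigma\,\psi\left(\frac{x-\mu}{\sigma}\right)}{\mathbb{E}_{F_0}[\psi'(X)]},$$
which is the claimed expression. Is this the intended argument?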
Influence Function of M-Estimator
CC BY-SA 4.0
null
2023-04-01T14:10:41.780
2023-04-03T08:59:51.410
2023-04-03T08:59:51.410
377943
377943
[ "probability", "distributions", "estimators", "m-estimation", "influence-function" ]
611476
2
null
163642
-1
null
As this is an optional assumption, you can ignore the normality of residuals and go ahead. In more sensitive cases, you can remove the outliers or replace them with the mean or another justified value to improve the situation.
null
CC BY-SA 4.0
null
2023-04-01T14:19:42.140
2023-04-01T14:19:42.140
null
null
384682
null
611478
1
null
null
2
34
I'm using the `matchit` package to create a propensity-score match. I'm trying to match control and treated patients at a 2:1 ratio in order to maximize the matched population and exclude as few patients as possible (13,000 patients available). I have found two possible ways and am wondering which one is better:

- method=nearest, ratio 2:1, min.control=1 and max.control=3. In this way I obtain a good matched population with a precise 2:1 ratio and a good sample size (almost 12,000 patients)
- method=nearest, caliper = 0.1 and a ratio of 3:1. In this way I still obtain a good match but a not completely balanced population.

Are both ways good options that avoid the risk of bias, or is one of the two superior to the other?
Propensity-Score Matching - what's the best choice when matching?
CC BY-SA 4.0
null
2023-04-01T14:43:05.020
2023-04-05T15:23:52.547
2023-04-05T15:23:52.547
362671
384684
[ "bias", "propensity-scores", "matching", "weights" ]
611479
2
null
611334
9
null
You may be interested in the [Blackett Review of High Impact, Low Probability Risks](https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/278526/12-519-blackett-review-high-impact-low-probability-risks.pdf) that was undertaken for the UK Government Office for Science. Not a technically heavy document, it gives much attention to risk communication: see particularly the work of David Spiegelhalter, Cambridge University, who was a member of the Blackett panel. "The Norm Chronicles" is a good light read, and [the micromort](https://obgyn.onlinelibrary.wiley.com/doi/10.1111/1471-0528.12663), a one in a million chance of death, useful for exploring low probability risks. Another vital consideration is the relationship between probability and impact: many HILP events can occur at different degrees of severity. Annex 7 of the Blackett Review sets out a typology of risk classes identified by Renn (2008): > Damocles. Risk sources that have a very high potential for damage but a very low probability of occurrence. e.g. technological risks such as nuclear energy and large-scale chemical facilities. Cyclops. Events where the probability of occurrence is largely uncertain, but the maximum damage can be estimated. e.g. natural events, such as floods and earthquakes. Pythia. Highly uncertain risks, where the probability of occurrence, the extent of damage and the way in which the damage manifests itself is unknown due to high complexity. e.g. human interventions in ecosystems and the greenhouse effect. Pandora. Characterised by both uncertainty in probability of occurrence and the extent of damage, and high persistency, hence the large area that is demarcated in the diagram for this risk type. e.g. organic pollutants and endocrine disruptors. Cassandra. Paradoxical in that probability of occurrence and extent of damage are known, but there is no imminent societal concern because damage will only occur in the future. 
There is a high degree of delay between the initial event and the impact of the damage. e.g. anthropogenic climate change. Medusa. Low probability and low damage events, which due to specific characteristics nonetheless cause considerable concern for people. Often a large number of people are affected by these risks, but harmful results cannot be proven scientifically. e.g. mobile phone usage and electromagnetic fields. See [Renn and Klinke, 2004](https://www.embopress.org/doi/full/10.1038/sj.embor.7400227) for more on how these were conceptualised - and named! [](https://i.stack.imgur.com/gz1kk.png) Having suitable language to describe low probability events and communicate the inherent uncertainty (a big theme of Renn's classification) is important, as it's hard to estimate either probability or impact of HILP events empirically! And communication, especially to policy-makers, is a vital part of the skillset of a statistician or scientist. You may find some of the examples mentioned in the quotation above provide useful jumping-off points. All these areas — climate change, air pollution, nuclear safety, natural disasters — have specialised sub-fields dealing with risk assessment. More examples of monitored and quantified threats, like pandemics and terrorism, appear in the [risk matrix](https://en.wikipedia.org/wiki/Risk_matrix) of the [UK National Risk Register, 2020](https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/952959/6.6920_CO_CCS_s_National_Risk_Register_2020_11-1-21-FINAL.pdf), which again splits out impact and likelihood: [](https://i.stack.imgur.com/KdmsL.png) Another example would be the impact hazard of near-Earth objects (NEOs). This is especially close to what you want as there's no doubt that, if unmitigated, the probability of a catastrophic event approaches one over time. [The Torino Impact Hazard Scale](https://en.wikipedia.org/wiki/Torino_scale) tries to balance probability and likely effect. 
[NASA's site](https://cneos.jpl.nasa.gov/sentry/torino_scale.html) on it doesn't contain much that Wikipedia doesn't, but gives a reference to Morrison et al. (2004) which may interest you. The scale originates with the work of Richard Binzel, e.g. [Binzel (2000)](https://www.sciencedirect.com/science/article/abs/pii/S0032063300000064).

While this scale applies to the risk presented by individual objects, you're more interested in the cumulative probability of a catastrophic impact in the long-term: this requires analysis of the geological record and of the current population of NEOs, corrected for observational bias (some types and sizes of object are more easily detected). Much of this material is set out in the [Report of the Task Force on Potentially Hazardous Near Earth Objects](https://space.nss.org/wp-content/uploads/2000-Report-Of-The-Task-Force-On-Potentially-Hazardous-Near-Earth-Objects-UK.pdf) by Atkinson et al. (2000). The task force was set up to advise the UK government, and provides the following cheerful table:

[](https://i.stack.imgur.com/pkcUj.png)

If we view the probability of a Tunguska-scale event in any given year as $1$ in $250$, so that on average it would occur once in $250$ years, then the probability of the Earth lasting a millennium without such a strike is as low as $\left(\frac {249} {250}\right)^{1000}\approx 1.8\%$, which is well approximated as $\exp(-\frac{1000}{250})$ using the [Poisson distribution](https://en.wikipedia.org/wiki/Poisson_distribution), as @Ben's answer says.

Although different fields face different problems and utilise different methods to estimate a heterogeneous bunch of (often highly uncertain) probabilities and impacts, there's an overarching bureaucratic approach to dealing with HILP events, into which Atkinson argues the NEO threat should be incorporated:

> Impacts from mid-sized Near Earth Objects are thus examples of an important class of events of low probability and high consequence.
There are well established criteria for assessing whether such risks are to be considered tolerable, even though they may be expected to occur only on time-scales of thousands, tens of thousands or even hundreds of thousands of years. These criteria have been developed from experience by organisations like the British Health and Safety Executive to show when action should be taken to reduce the risks. Flood protection, the safety of nuclear power stations, the storage of dangerous chemicals or of nuclear waste are all examples of situations in which rare failures may have major consequences for life or the environment. Once the risk is assessed, plans can be made to reduce it from the intolerable to the lowest reasonably practical levels taking account of the costs involved. If a quarter of the world’s population were at risk from the impact of an object of 1 kilometre diameter, then according to current safety standards in use in the United Kingdom, the risk of such casualty levels, even if occurring on average once every 100,000 years, would significantly exceed a tolerable level. If such risks were the responsibility of an operator of an industrial plant or other activity, then that operator would be required to take steps to reduce the risk to levels that were deemed tolerable.

One example of such guidance is the surprisingly readable report on [The Tolerability of Risk from Nuclear Power Stations (1992 revision)](https://www.onr.org.uk/documents/tolerability.pdf) from the UK Health and Safety Executive (HSE), commonly abbreviated to "TOR". TOR analyses what nuclear risks are acceptable by comparing them with other sources of risk (as, rather infamously,* did the U.S. [Rasmussen Report, WASH-1400](https://en.wikipedia.org/wiki/WASH-1400)) but also endeavoured "to consider the proposition that people feel greater aversion to death from radiation than from other causes, and that a major nuclear accident could have long term health effects."
TOR's quantitative approach to decision-making about risk evolved into HSE's "R2P2" framework set out in [Reducing risks, protecting people (2001)](https://www.hse.gov.uk/managing/theory/r2p2.pdf).

Something you'll often see in discussions of catastrophic risk is the [F-N diagram](https://risk-engineering.org/concept/Farmer-diagram), also known as Farmer's Diagram or Farmer Curve (after [Frank Farmer](https://en.wikipedia.org/wiki/F._R._Farmer) of the UK Atomic Energy Authority and Imperial College London). Here $N$ is the number of fatalities and $F$ the frequency of accidents causing $N$ or more fatalities, usually displayed logarithmically so events with very low probability, but potentially enormous consequences, can fit on the same scale as events which are orders of magnitude more probable but less lethal. The [Health and Safety Executive Research Report 073: Transport fatal accidents and FN-curves, 1967-2001](https://www.hse.gov.uk/research/rrpdf/rr073.pdf) has a good explanation and some example diagrams:

[](https://i.stack.imgur.com/Y4OFm.png)

What risk is to be deemed "tolerable"? One approach is to draw a "Farmer line", "limit line" or "criterion line" on the F-N diagram. The "Canvey criterion" and "Netherlands criterion" are commonly seen. The Canvey criterion is based on a major 1978-81 HSE study of the risks posed by the industrial installations on Canvey Island in the Thames estuary, where a $1$ in $5000$ chance per annum, i.e. annual probability of $2 \times 10^{-4}$, of a disaster causing $500$ fatalities was deemed politically "tolerable". This is plotted as the "Canvey point" on the F-N axes, and then extended on a risk-neutral basis. For example, the Canvey point is deemed equivalent to a probability of $10^{-4}$ of causing $1000$ fatalities, or a $10^{-3}$ chance of $100$ fatalities.
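A risk-neutral extension of the Canvey point holds the product $F \times N$ constant, so the tolerated frequency at any fatality level follows directly. A minimal sketch (the function name is my own, for illustration):

```python
# Risk-neutral extension of the Canvey point: F * N is held constant,
# so the tolerated frequency scales inversely with the number of fatalities.
CANVEY_F = 2e-4   # tolerated annual frequency at the Canvey point
CANVEY_N = 500    # fatalities at the Canvey point

def tolerated_frequency(n_fatalities):
    """Tolerated annual frequency of a disaster causing n_fatalities,
    extending the Canvey point risk-neutrally (slope -1 on log-log axes)."""
    return CANVEY_F * CANVEY_N / n_fatalities

for n in (100, 500, 1000):
    print(n, tolerated_frequency(n))
```

This reproduces the figures quoted above: $10^{-3}$ at $100$ fatalities and $10^{-4}$ at $1000$.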
TOR notes this latter figure roughly corresponds to the $1$ in $1000$ per year threshold for breaches of "temporary safe refuges", mandated to protect offshore installation workers from fire or explosion following the [Piper Alpha oil rig disaster](https://en.wikipedia.org/wiki/Piper_Alpha), on the assumption of a hundred workers on a platform and that, conservatively, a breach would be fatal to all. On logarithmic axes this produces a "Canvey line" with slope $-1$, as shown in Fig. D1 of TOR ("ALARP" is "[as low as reasonably practicable](https://en.wikipedia.org/wiki/ALARP)", the idea being that efforts should still be taken to reduce even tolerable risks, up to the point where the costs of doing so become prohibitive):

[](https://i.stack.imgur.com/tjXul.png)

HSE has been cautious about adopting limit lines in general, but R2P2 suggests a criterion ten times more conservative than the Canvey point: a $2 \times 10^{-4}$ annual probability of a disaster causing only fifty fatalities, instead of five hundred. Many publications show an "R2P2 line" through the R2P2 criterion, parallel to the Canvey line, but R2P2 doesn't specify a risk-neutral extension, however rational that might sound. In fact the Netherlands criterion is far more risk averse, with a slope of $-2$ indicating a particular aversion to high consequence incidents: if one catastrophic event would cause ten times as many fatalities as another, its tolerated probability is a hundred times lower. Farmer's original 1967 "boundary line as a criterion" had a slope of $-1.5$, but the horizontal axis was curies of iodine-131 released, not deaths.** HSE RR073 Fig.
4 compares various criteria with transport casualties, breaking out those train accident casualties which may have been prevented by upgrading [the train protection system](https://en.wikipedia.org/wiki/Train_Protection_%26_Warning_System):

[](https://i.stack.imgur.com/PQwAj.png)

A paper about landslide risk by [Sim, Lee and Wong (2022)](https://geoenvironmental-disasters.springeropen.com/articles/10.1186/s40677-022-00205-6) graphs many limit lines in use globally. More European criteria are shown in [Trbojevic (2005)](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=b1dfe910adb9fa841c7edf28bba97c727ea3cd8c) and [Jonkman, van Gelder and Vrijling (2003)](https://d2k0ddhflgrk1i.cloudfront.net/CiTG/Over%20faculteit/Afdelingen/Hydraulic%20Engineering/Hydraulic%20Structures%20and%20Flood%20Risk/staff/Jonkman_SN/JHM_jonkman.pdf): both papers also survey a wider range of quantitative risk measures and regulatory approaches beyond the F-N curve. [Rheinberger and Treich (2016)](https://publications.ut-capitole.fr/id/eprint/22377/1/wp_tse_635.pdf) take an economic perspective on attitudes to catastrophic risk, again looking at many possible regulatory criteria, and examining closely the case for society being "catastrophe averse". If you're interested in microeconomics or behavioural economics (e.g. Kahneman and Tversky's [prospect theory](https://en.wikipedia.org/wiki/Prospect_theory)) you'll find their paper valuable, especially for its lengthy bibliography.

Regulatory approaches based on limit lines on the F-N diagram aim to control the probability of disaster across the range of possible magnitudes of disasters, but don't limit the overall risk.
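A limit line of slope $-s$ on log-log axes, anchored at a point $(N_0, F_0)$, tolerates frequency $F_0 (N/N_0)^{-s}$ at fatality level $N$. The sketch below checks some candidate scenarios against a risk-neutral line (slope $-1$) and a Netherlands-style line (slope $-2$); the scenario numbers, and the choice to anchor both lines at the Canvey point, are my own illustrative assumptions:

```python
def limit_frequency(n, f0=2e-4, n0=500, slope=1.0):
    """Tolerated frequency at fatality level n, for a limit line of
    slope -slope on log-log axes anchored at the point (n0, f0)."""
    return f0 * (n / n0) ** (-slope)

# Hypothetical accident scenarios: (annual frequency, fatalities).
scenarios = [(1e-4, 300), (1e-5, 2000), (5e-4, 50)]

for f, n in scenarios:
    risk_neutral = f <= limit_frequency(n, slope=1.0)
    netherlands = f <= limit_frequency(n, slope=2.0)
    print(f"N={n}: tolerable on slope -1: {risk_neutral}, on slope -2: {netherlands}")

# Each point may pass individually, yet the total expected fatalities per year
# across all scenarios (the overall risk) are not capped by any boundary line.
expected_fatalities = sum(f * n for f, n in scenarios)
print(expected_fatalities)
```

Note how the steeper slope penalises the high-fatality scenario much more heavily, while the summed expected harm grows with every scenario added, whatever the line says about each one individually.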
We may identify several points on the F-N diagram representing credible accident scenarios at a nuclear power plant, whose individual probabilities are sufficiently low (for the harm each would cause) to be on the "tolerable" side of the limit line, yet feel this cluster of points collectively represents an intolerable risk. For that reason, and others, some prefer to use the [complementary cumulative distribution function (CCDF)](https://en.wikipedia.org/wiki/Cumulative_distribution_function#Complementary_cumulative_distribution_function_(tail_distribution)) of the harm. This is calculated as one minus the CDF, and represents the probability the harm exceeds a given value. When examining catastrophic risks, the wide range of magnitudes and probabilities makes it conventional to plot on log-log axes, so superficially this resembles an F-N diagram. As the size of the consequences tends to infinity, the CDF tends to one, the CCDF to zero, and log CCDF to negative infinity. In the example below, I highlight the level of harm that's exceeded with probability $0.25$. This might measure fatalities, dose of radiation released, property damage, or some other consequence.

[](https://i.stack.imgur.com/ZYFsy.png)

Canadian nuclear regulators took the approach of setting safety limits at various points on the CCDF curve: [Cox and Baybutt (1981)](http://www.primatech.com/images/docs/limit_lines_for_risk.pdf) compare this to Farmer's limit-line criteria. See also chapters 7 and 10 of [NUREG/CR-2040, a study of the implications of applying quantitative risk criteria in the licensing of nuclear power plants in the United States](https://www.nrc.gov/docs/ML0717/ML071700399.pdf) for the U.S. Nuclear Regulatory Commission (1981).

Note that the probability of an industrial disaster tends towards one not just over time but also as the number of facilities increases. Jonkman et al.
raise the point that each facility may meet risk tolerability criteria yet the national risk becomes intolerable. They propose setting a national limit, then subdividing it between facilities. NUREG/CR-2040 looks at this in Chapters 6 and 9. The authors distinguish risk criteria "per reactor-year" or "per site-year" for a specific plant, versus risk "per year" for the country as a whole. If national risk is to be limited to an agreed level, and site-specific risk criteria are imposed to achieve this, the appropriate way to distribute risk across plants is not obvious due to the heterogeneity of sites. The authors suggest tolerating a higher frequency of core melt accidents at older reactors (newer designs are expected to be safer, so arguably should face stricter criteria), or those in remote areas or with better mitigation systems (as a core melt at such sites should be less harmful).

The problem of the probability of catastrophe approaching $1$ in the long run can be solved by imposing risk criteria over long (even civilisational) time-scales then solving for the required annual criteria. One proposal examined in NUREG/CR-2040 (page 71) tries to restrict core melt frequencies on the basis of a 95% probability of no such accidents in the entire lifespan of the U.S. nuclear industry. Assuming we'll rely on fission technology for about three centuries with $300$ reactors active in an average year, then $100,000$ reactor-years seemed a reasonable guess. Write $\lambda_{\text{melt}}$ for the rate of core melts per reactor-year that needs to be achieved to limit the long-term risk to our tolerated level. Using the Poisson distribution, we solve

$$ \exp\left(-\lambda_\text{melt} \times 10^5\right) = 0.95 \\ \implies \lambda_\text{melt} = - \frac{\ln 0.95}{10^5} \approx 5 \times 10^{-7} \text{ per reactor-year,}$$

so we require reactors that experience core melts at a rate no more frequent than once per two million years or so.
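Both this core-melt criterion and the earlier Tunguska-scale calculation can be checked numerically, comparing the exact binomial answer with the Poisson approximation:

```python
import math

# Core melt: rate per reactor-year giving a 95% chance of zero accidents
# over an assumed 100,000 reactor-years of industry lifespan.
reactor_years = 1e5
lam_melt = -math.log(0.95) / reactor_years
print(lam_melt)  # ~5.1e-07 per reactor-year

# Tunguska-scale strike: 1-in-250 annual probability, over a millennium.
p_annual, years = 1 / 250, 1000
p_none_binomial = (1 - p_annual) ** years       # exact
p_none_poisson = math.exp(-p_annual * years)    # Poisson approximation
print(p_none_binomial, p_none_poisson)  # ~0.018 either way
```

As expected for rare events, the binomial and Poisson answers agree to within about one part in a hundred here.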
Due to the Poisson approximation for rare events, we get a very similar answer if we solve for the annual probability $p_\text{melt}$ at a given reactor, using the binomial distribution:

$$\Pr(\text{0 core melts}) = (1 - p_\text{melt})^{100,000} = 0.95 \\ \implies p_\text{melt} = 1 - \sqrt[100,000]{0.95} \approx 5 \times 10^{-7}.$$

How can regulators establish the probability of a given magnitude of disaster at a particular installation is tolerably tiny? Conversely, how can scientists estimate the potential casualties from a "once in 200 years" earthquake or flood? Events so rare lie beyond empirical observation. For natural disasters we might extrapolate the F-N curve for observed events (see Sim et al., 2022) or model the physics of catastrophic scenarios coupled to a statistical model of how likely each scenario is (e.g. NASA's PAIR model for asteroid risk, [Mathias et al., 2017](https://www.sciencedirect.com/science/article/pii/S0019103516307126)).

In engineering systems, particularly if hyper-reliability is needed, [probabilistic risk assessment (PRA)](https://en.wikipedia.org/wiki/Probabilistic_risk_assessment) can be used. For more on PRA in the nuclear industry, including [fault tree analysis](https://en.wikipedia.org/wiki/Fault_tree_analysis), the [U.S. NRC website includes current practice and a historic overview](https://www.nrc.gov/about-nrc/regulatory/risk-informed/pra.html), as does the [Canadian Nuclear Safety Commission](https://nuclearsafety.gc.ca/eng/resources/news-room/feature-articles/probabilistic-safety-assessment.cfm). An international comparison is [Use and Development of Probabilistic Safety Assessments at Nuclear Facilities (2019)](https://one.oecd.org/document/NEA/CSNI/R(2019)10/En/pdf) by the OECD's Nuclear Energy Agency.

---

### Footnotes

$(*)$ The Rasmussen Report's executive summary dramatically compared nuclear risks to other man-made and natural risks, including Fig. 1 below, to emphasise that the additional risk to the U.S.
population from a nuclear energy programme was negligible in comparison. It was criticised for failing to show the uncertainty of those risks, and ignoring harms other than fatalities (e.g. land contamination), particularly as later estimates of the probability of nuclear disaster were less optimistic. See [Frank von Hippel](https://en.wikipedia.org/wiki/Frank_N._von_Hippel)'s [1977 response in the Bulletin of the Atomic Scientists](https://sgs.princeton.edu/sites/default/files/2019-10/vonhippel-1977.pdf) and, for a very readable historical overview, [NUREG/KM-0010 (2016)](https://www.nrc.gov/reading-rm/doc-collections/nuregs/knowledge/km0010/index.html).

[](https://i.stack.imgur.com/5susz.png)

$(**)$ Farmer's 1967 paper is available at pages 303-318 of the [Proceedings on a symposium on the containment and siting of nuclear power plants held by the International Atomic Energy Agency in Vienna, 3-7 April, 1967](https://inis.iaea.org/collection/NCLCollectionStore/_Public/44/070/44070762.pdf). His colleague J. R. Beattie's paper on "Risks to the population and the individual from iodine releases" follows immediately as an appendix; Beattie converts Farmer's limit-line radiation releases into casualty figures, so the two papers together mark the genesis of the F-N boundary line approach. This is then followed by a lively symposium discussion. Regarding the slope of $-1.5$, Farmer explains "My final curve does not directly show an inverse relationship between hazard and consequence. I chose a steeper line which is entirely subjective." Farmer is wary of simplistically multiplying probabilities together, and due to lack of empirical data is especially cautious of claims the probability of catastrophe is low due to the improbability of passive safety measures being breached: "if credit of $1000$ or more is being claimed for a passive structure, can you really feel that the possibility of it being as effective as claimed is $999$ out of $1000$.
I do not know how we test or ensure that certain conditions will obtain $999$ times out of $1000$, and if we cannot test it, I think we should not claim such high reliability". He prefers to focus on things like components (for which reliability data is available) and minimising the probability of an incident occurring in the first place. Some participants welcome Farmer's probabilistic approach, others prefer the "maximum credible accident" (nowadays evolved into the [design-basis event](https://en.wikipedia.org/wiki/Design-basis_event)): Farmer dislikes this approach due to the broad range of accidents one might subjectively deem "credible", and the most catastrophic, but plausible, nuclear accident would clearly violate any reasonable safety criteria even if the reactor was sited in a rural area.

There's an interesting note of scepticism concerning low probability events from the French representative, F. de Vathaire:

> Applying the probability method consists of reasoning like actuaries in calculating insurance premiums, but it is questionable whether we have the right to apply insurance methods to nuclear hazard assessment. We must first of all possess sufficient knowledge of the probability of safety devices failing. ... I might add that the number of incidents which I have heard mentioned in France and other countries — incidents without serious radiological consequences, but which might have had them — is fairly impressive and suggests that the probability of failure under actual plant-operation conditions is fairly high, particularly due to human errors. On the other hand, it is large releases of fission products which constitute the only real safety problem and the corresponding probabilities are very low. What practical signification must be attached to events which occur only once every thousand or million years? Can they really be considered a possibility?
NUREG/KM-0010 recounts an extreme example from the early days of probabilistic risk assessment in the nuclear industry:

> ...the AEC [Atomic Energy Commission] contracted with Research Planning Corporation in California to create realistic probability estimates for a severe reactor accident. The results were disappointing. While Research Planning’s calculations were good, they were underestimates. Research Planning estimated the probability of a catastrophic accident to be between $10^{-8}$ to $10^{-16}$ occurrences per year. If the $10^{-16}$ estimate were true, that would mean a reactor might operate $700,000$ times longer than the currently assumed age of the universe before experiencing a major accident. The numbers were impossibly optimistic, and the error band was distressingly large. As Dr. Wellock recalled, "the AEC wisely looked at this and recognized that probabilities were not going to solve [the problems with] this report." At this time, the AEC understood that the large error in the obtained probabilities could be attributed to the uncertainty in estimating common-cause accidents.

Clearly we need caution if a tiny probability has been obtained from multiplying many failure probabilities together. Common-cause failures violate statistical independence and undermine the reliability gains of redundancy. [Jones (2012)](https://ntrs.nasa.gov/api/citations/20160005837/downloads/20160005837.pdf) gives an introduction to this topic in the context of the space industry, where NASA readopted PRA after the [1986 Challenger disaster](https://en.wikipedia.org/wiki/Space_Shuttle_Challenger_disaster).

### References

- Atkinson, H. H., Tickell, C. and Williams, D. A. (2000). Report of the Task Force on Potentially Hazardous Near Earth Objects. British National Space Centre.
- Beattie, J. R. (1967). Risks to the population and the individual from iodine releases.
In Proceedings of a Symposium on the Containment and Siting of Nuclear Power Plants, Vienna, April 3–7, 318-324. IAEA.
- Binzel, R. P. (2000). The Torino impact hazard scale. Planetary and Space Science, 48(4), 297-303.
- Cox, D. C., and Baybutt, P. (1982). Limit lines for risk. Nuclear Technology, 57(3), 320-330.
- Farmer, F. R. (1967). Siting criteria: A new approach. In Proceedings of a Symposium on the Containment and Siting of Nuclear Power Plants, Vienna, April 3–7, 303-329. IAEA.
- Health and Safety Executive. (1992). The tolerability of risk from nuclear power stations (Revised Edition). HSE Books. ISBN 978-0118863681.
- Health and Safety Executive. (2001). [Reducing risks, protecting people: HSE's decision-making process](https://www.hse.gov.uk/managing/theory/r2p2.pdf). HSE Books. ISBN 978-0717621514.
- Health and Safety Executive. (2003). Research Report 073, Transport fatal accidents and FN-curves: 1967-2001. HSE Books.
- Jones, H. (2012, July). Common cause failures and ultra reliability. In 42nd International Conference on Environmental Systems (p. 3602).
- Jonkman, S. N., Van Gelder, P. H. A. J. M., & Vrijling, J. K. (2003). An overview of quantitative risk measures for loss of life and economic damage. Journal of Hazardous Materials, 99(1), 1-30.
- Mathias, D. L., Wheeler, L. F., and Dotson, J. L. (2017). A probabilistic asteroid impact risk model: assessment of sub-300 m impacts. Icarus, 289, 106-119.
- Morrison, D., Chapman, C. R., Steel, D., and Binzel, R. P. (2004). Impacts and the Public: Communicating the Nature of the Impact Hazard. In Mitigation of Hazardous Comets and Asteroids (M. J. S. Belton, T. H. Morgan, N. H. Samarasinha and D. K. Yeomans, Eds.), Cambridge University Press.
- Renn, O. (2008). Risk Governance: Coping with Uncertainty in a Complex World. London: Earthscan. ISBN 978-1844072927.
- Renn, O., & Klinke, A. (2004). Systemic risks: a new challenge for risk management. EMBO Reports, 5(S1), S41-S46.
- Rheinberger, C.
M., and Treich, N. (2017). Attitudes toward catastrophe. Environmental and Resource Economics, 67, 609-636.
- Sim, K. B., Lee, M. L., & Wong, S. Y. (2022). A review of landslide acceptable risk and tolerable risk. Geoenvironmental Disasters, 9(1), 3.
- Spiegelhalter, D., & Blastland, M. (2013). The Norm Chronicles: Stories and numbers about danger. Profile Books. ISBN 978-1846686207. [Various editions internationally with slightly different titles, e.g. "danger" replaced by "risk" or "danger and death".]
- Spiegelhalter, D. J. (2014). The power of the MicroMort. BJOG: An International Journal of Obstetrics & Gynaecology, 121(6), 662-663.
- Trbojevic, V. M. (2005). Risk criteria in EU. In: Advances in safety and reliability — proceedings of European safety and reliability conference ESREL 2005, vol 2, pp 1945–1952.
- U.S. Nuclear Regulatory Commission. (1981). A study of the implications of applying quantitative risk criteria in the licensing of nuclear power plants in the United States. USNRC NUREG/CR-2040.
- U.S. Nuclear Regulatory Commission. (2016). WASH-1400 — The Reactor Safety Study — The Introduction of Risk Assessment to the Regulation of Nuclear Reactors. USNRC NUREG/KM-0010. [See also errata available at https://www.nrc.gov/docs/ML1626/ML16264A431.pdf]
- Von Hippel, F. N. (1977). Looking back on the Rasmussen Report. Bulletin of the Atomic Scientists, 33(2), 42-47.