Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
609124 | 1 | null | null | 0 | 10 | In order to compare the out-of-sample forecasting accuracy of two competing models, I am trying to implement the equal accuracy test proposed in this article: [http://www.timberlake-consultancy.com/slaurent/pdf/Handbook_volfor.pdf](http://www.timberlake-consultancy.com/slaurent/pdf/Handbook_volfor.pdf)
The difference between the two MSEs is rewritten as:
$$MSE(k,t) - MSE(j,t) = cov(D_t,S_t) + \bar{D}\bar{S},$$
where $D_t$ and $S_t$ are, respectively, the difference and the sum between model $k$ and model $j$ residuals. The authors state that the null hypothesis of equal accuracy can be specified as:
$$H_0: cov(D_t,S_t)=0 \cup \bar{D}=0,$$
which can be restated as:
$$H_0: \alpha = 0 \; \cup \; \beta=0,$$
where $\alpha$ and $\beta$ are the coefficients of the linear regression $D_t = \alpha + \beta (S_t - \bar{S}) + \epsilon_t.$ However, I do not understand the rationale behind $H_0$ and how I should proceed to implement the test in practice. The two MSEs should be equal only if $cov(D_t, S_t) = \bar{D}\bar{S}$ and not if $cov(D_t,S_t) = 0$ or $\bar{D}=0$.
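For what it's worth, here is a minimal sketch (my own, not from the article) of how the regression form of the test could be implemented, assuming you already have the two out-of-sample residual series. Note that forecast-comparison tests typically use HAC-robust standard errors for serially correlated forecast errors; this sketch uses a plain i.i.d. F test of the joint null $\alpha = \beta = 0$ for illustration only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
e_k = rng.normal(size=n)  # out-of-sample residuals of model k (simulated here)
e_j = rng.normal(size=n)  # out-of-sample residuals of model j (simulated here)

D = e_k - e_j             # difference of the residuals
S = e_k + e_j             # sum of the residuals
Sc = S - S.mean()         # demeaned sum, so that alpha_hat = mean(D)

# OLS of D on an intercept and the demeaned sum
X = np.column_stack([np.ones(n), Sc])
coef, rss1, *_ = np.linalg.lstsq(X, D, rcond=None)
rss1 = float(rss1[0])     # residual sum of squares, unrestricted
rss0 = float(D @ D)       # residual sum of squares under H0: alpha = beta = 0

# Joint F test of alpha = 0 and beta = 0 (2 restrictions)
F = ((rss0 - rss1) / 2) / (rss1 / (n - 2))
p_value = stats.f.sf(F, 2, n - 2)
print(F, p_value)
```

With simulated residuals of equal accuracy, as here, the p-value should be roughly uniform across repetitions.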
| Pairwise comparison test for out-of-sample MSEs | CC BY-SA 4.0 | null | 2023-03-11T17:53:44.813 | 2023-03-11T22:02:31.417 | 2023-03-11T22:02:31.417 | 313581 | 313581 | [
"hypothesis-testing",
"forecasting",
"model-comparison",
"mse"
] |
609127 | 1 | null | null | 0 | 34 | I am performing $K$-means clustering on a dataset consisting of $n$ observations and $d$ variables, and I'm trying to determine the optimal number of clusters. Is there a test that can determine the statistical significance of adding another cluster?
I have considered the $F$-test with the following $F$-statistic
$$ F = \frac{ \Big( \frac{WCSS_k-WCSS_{k+1}}{d(k+1)-dk} \Big)}{ \Big( \frac{WCSS_{k+1}}{n-d(k+1)} \Big)} = \frac{ \Big( \frac{WCSS_k-WCSS_{k+1}}{d} \Big)}{ \Big( \frac{WCSS_{k+1}}{n-dk-d} \Big)}$$
where $WCSS_i$ is the within-cluster sum of squares, or inertia, for the model containing $i$ clusters. I obtained the general formula for the $F$-statistic [here](https://en.wikipedia.org/wiki/F-test#:%7E:text=An%20F%2Dtest%20is%20any,which%20the%20data%20were%20sampled.) under "Regression Problems." In this case, I am treating inertia as a measure of error in the model, and $di$ is the number of parameters in the model with $i$ clusters because each of $i$ clusters has a $d$-dimensional mean vector at its center.
Of course, adding another cluster will often reduce the inertia and never increase it... just as adding another predictor to a linear regression model will often reduce the RSS. The question is whether the reduction observed by adding another cluster is the result of clustering noise or the result of modelling a real pattern in the data. I am assuming a statistically significant p-value would indicate the latter.
Any thoughts?
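In case it helps make the proposal concrete, here is a small sketch of the statistic as specified above, with numerator df $= d$ and denominator df $= n - d(k+1)$ from the general form (the inertia values are made up, and whether the F distribution is actually a valid reference distribution here is exactly what is in question):

```python
import numpy as np
from scipy import stats

def cluster_f_test(wcss_k, wcss_k1, n, d, k):
    """F statistic for moving from k to k+1 clusters, following the
    question's formula: df1 = d, df2 = n - d*(k+1)."""
    df1 = d
    df2 = n - d * (k + 1)
    F = ((wcss_k - wcss_k1) / df1) / (wcss_k1 / df2)
    return F, stats.f.sf(F, df1, df2)

# Hypothetical inertia values for k = 3 vs k = 4 clusters
F, p = cluster_f_test(wcss_k=500.0, wcss_k1=400.0, n=100, d=2, k=3)
print(F, p)
```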
| Statistical test for comparing number of clusters in data | CC BY-SA 4.0 | null | 2023-03-11T18:21:01.430 | 2023-03-11T18:21:01.430 | null | null | 382982 | [
"machine-learning",
"hypothesis-testing",
"distributions",
"statistical-significance",
"clustering"
] |
609128 | 2 | null | 399447 | 1 | null | $[\rm I]$ notes for $T(\mathbf X) $ to be sufficient for $\theta, $
$$ \mathbb P_\theta(\mathbf X=\mathbf x\mid T(\mathbf X) =T(\mathbf x))=\frac{p(\mathbf x\mid\theta)}{q(T(\mathbf x)\mid \theta)} \tag 1\label 1$$ should be constant as a function of $\theta.$
It is well-known that if $X\sim\mathrm N(\mu, \sigma^2), $ then [$|X|\sim\mathrm{FoldedN}(\mu, \sigma^2). $](https://www.randomservices.org/random/special/FoldedNormal.html)
Here $X\sim\mathrm N(0, \sigma^2), ~n=1.$ So, evaluating $\eqref 1$
\begin{align}\frac{p(x\mid\sigma^2)}{q(|x| \mid \sigma^2)}&=\frac{(2\pi\sigma^2)^{-1/2}\exp\left[-\frac{x^2}{2\sigma^2}\right]}{(2\pi\sigma^2)^{-1/2}\left(\exp\left[-\frac{x^2}{2\sigma^2}\right]+\exp\left[-\frac{x^2}{2\sigma^2}\right]\right)}\\&=\frac12\end{align}
shows $|X|$ is sufficient for $\sigma^2.$
---
## Reference:
$\rm [I]$ Statistical Inference, George Casella, Roger L. Berger, Wadsworth, $2002, $ sec. $6.2.1, $ p. $273.$
| null | CC BY-SA 4.0 | null | 2023-03-11T18:30:40.430 | 2023-03-11T18:30:40.430 | null | null | 362671 | null |
609129 | 1 | null | null | 1 | 25 | Assuming we have three random variables $X$, $Y$, and $Z$, and we want to estimate a least squares regression plane of the form $Z = a + bX + cY$. We do not know the individual observations, but we know all the means $\mu_X$,$\mu_Y$,$\mu_Z$, variances $\sigma_X^2$,$\sigma_Y^2$,$\sigma_Z^2$ covariances $\sigma_{XY}$,$\sigma_{XZ}$,$\sigma_{YZ}$ and correlations $\rho_{XY}$,$\rho_{XZ}$,$\rho_{YZ}$. Suppose we have estimated $a$, $b$, and $c$. How can we calculate the $R^2$ value given only this information?
| R squared for a regression plane without observations | CC BY-SA 4.0 | null | 2023-03-11T19:16:47.833 | 2023-03-11T19:16:47.833 | null | null | 339993 | [
"regression",
"least-squares",
"regression-coefficients"
] |
609130 | 2 | null | 107685 | 0 | null | I think "cumulative mass function" is correct, but it hasn't been widely adopted just yet. It makes sense to me as a more specific cumulative distribution function, a sibling to probability mass functions, similar to how cumulative density functions relate to probability density functions.
| null | CC BY-SA 4.0 | null | 2023-03-11T19:59:02.410 | 2023-03-11T19:59:02.410 | null | null | 11509 | null |
609131 | 2 | null | 569878 | 1 | null | Both approaches have minimal statistical motivation and seem to address [a non-problem](https://stats.stackexchange.com/questions/357466/are-unbalanced-datasets-problematic-and-how-does-oversampling-purport-to-he). (There is a very interesting case described in the answer to the linked question that relates to [King & Zeng (2001)](https://gking.harvard.edu/files/abs/0s-abs.shtml), but I would argue that to be an issue of experimental design, rather than of model evaluation.)
Yes, class imbalance poses problems to the classification accuracy metric in that a score of $97\%$ might sound great but actually be quite pitiful if you would get $99.9\%$ of the cases correct by classifying as the majority class every time. However, this strikes me as a drawback of classification accuracy as a performance metric, rather than of the reality of your problem having imbalanced classes. (I discuss [here](https://stats.stackexchange.com/questions/605450/is-the-proportion-classified-correctly-a-reasonable-analogue-of-r2-for-a-clas?noredirect=1&lq=1) and [here](https://stats.stackexchange.com/questions/605818/how-to-interpret-the-ucla-adjusted-count-logistic-regression-pseudo-r2) how to remedy accuracy scores to deal with being high yet pitiful.)
Most machine learning "classifiers" give outputs on a continuum, and every method I know that does not can be wrestled into giving such an output on a continuum (e.g., Platt scaling for SVMs). Consequently, when you refer to the classification accuracy of a machine learning model, you mean one of the following:
- Your model has (close to) $0\%$ accuracy, since dead-on predictions on the continuum are so unlikely.
- You are referring to the continuous predictions made by your model along with a decision rule that partitions that continuum into discrete buckets that make your classifications. The common decision rule is to classify predictions above $0.5$ as category $1$ and predictions below $0.5$ as category $0$.
When you do the second of the two, as many machine learning practitioners do and as happens when you call something like the `predict` method in software like `sklearn`, you are not actually evaluating the model. You are evaluating the model along with the decisions made using model outputs. However, especially if you give no thought to the decision rule, that decision rule might be terrible for your problem. Instead of fiddling with the data to remedy class imbalance issues, the first thought should be to change the decision rule. If you have $1000$:$1$ imbalance, maybe you want to predict as the minority class whenever the output (which often has an interpretation as a probability) is above $0.001$ instead of $0.5$. My logic for this is that the baseline rate of minority-class occurrence is one-in-a-thousand, so if you even have a one-in-two-hundred chance of being in the minority class, that is a sizeable deviation from the norm and might warrant consideration. I discuss this idea in my question [here](https://stats.stackexchange.com/questions/608240/binary-probability-models-considering-event-probability-above-the-prior-probabi).
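As an illustration of the point about moving the cutoff (a toy simulation of mine, not a real dataset), the same continuous outputs can look useless or useful depending only on the decision rule applied to them:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
base_rate = 0.001                        # 1000:1 imbalance, as in the example above
y = rng.binomial(1, base_rate, size=n)   # true class labels

# Hypothetical model outputs: probabilities are small everywhere,
# but systematically larger for the minority class
p_hat = np.where(y == 1,
                 rng.uniform(0.002, 0.020, size=n),
                 rng.uniform(0.000, 0.002, size=n))

# Share of minority cases caught under each decision rule
recall_default = (p_hat[y == 1] > 0.5).mean()     # software-default cutoff
recall_base = (p_hat[y == 1] > base_rate).mean()  # cutoff at the base rate

print(recall_default, recall_base)
```

Here the $0.5$ cutoff flags nothing at all, while the base-rate cutoff catches every minority case; the model did not change, only the decision rule did.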
The more sophisticated approach would be to consider the continuous outputs. In fact, doing so allows you to have more decisions than categories. It might be that your decision rule assigns predictions to category $0$ if the prediction is below $0.2$, assigns predictions to category $1$ if the prediction is above $0.9$, and gives an "I don't know" classification for predictions between $0.2$ and $0.9$, related to the idea presented [here](https://stats.stackexchange.com/a/469059/247274) about putting "suspected spam" in an email subject line, rather than sending the message to the spam folder or letting it through without such a tag. We want confident and accurate predictions, sure, but those need not be realistic, and part of your job (or someone's job) is to handle that ambiguity.
All of this is to say that class imbalance is not inherently a problem. [Good statistical methods like evaluating proper scoring rules, assessing the continuous outputs of models, and thinking in terms of event probability vs error cost handle class imbalance fine.](https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models/312787#312787)
Consequently, it does not make much sense to fiddle with your data (downsampling) or the model-fitting process (weighting) to skew the outputs a certain way. Low predicted probabilities of unlikely outcomes seem like a feature, not a bug, of machine learning outputs. Fiddling with the statistics in order to get on the correct side of a software-default cutoff of $0.5$ seems like a poor approach to modeling when you consider what you lose by doing so.
Plenty of blogs and even sources that might seem more credible will advocate for these poor statistical methods because the authors are unaware of the statistical subtleties. It's a shame that our field has to fight this noise.
(Finally, downsampling strikes me as the worst of all approaches. Not only are you trying to solve something that is not a problem, but you are sacrificing precious data in order to do so. While upsampling, synthesizing points (e.g., SMOTE), and weighting the loss function have their problems, at least they don't discard precious data.)
REFERENCE
King, Gary, and Langche Zeng. "Logistic regression in rare events data." Political analysis 9.2 (2001): 137-163.
| null | CC BY-SA 4.0 | null | 2023-03-11T20:00:12.730 | 2023-03-11T20:00:12.730 | null | null | 247274 | null |
609132 | 1 | 609150 | null | 0 | 31 | Andrew Gelman recommends placing weakly-informative $N(0, 1)$ priors on unknown parameters fitted in Stan and often does so in his own models. In Stan, the Normal distribution is parameterized by the mean and the standard deviation (not the variance).
I'm looking to fit a binary logistic GLM with at least 8 predictors. My model will also include interaction terms (though I am not sure how many, and whether they will be two-way, or higher order).
In one of Andrew Gelman's YouTube videos, he mentions that regression coefficients from logistic models are usually between -5 and 5. Thus, placing $N(0, 5)$ (where 5 is the standard deviation) priors on all coefficients seems sensible.
I'm wondering if it would be more logical to instead centre my priors on their MLEs, say from `glm()` in R. To me, this only makes sense in the presence of a large amount of data. When there is uncertainty due to limited data, wider priors should be preferred.
Is this a good strategy? What other recommendations are there?
| Centering prior distributions on MLE/OLS estimates | CC BY-SA 4.0 | null | 2023-03-11T20:07:09.313 | 2023-03-11T23:23:58.313 | 2023-03-11T22:48:42.070 | 175663 | 175663 | [
"r",
"logistic",
"generalized-linear-model",
"stan"
] |
609136 | 1 | null | null | 0 | 53 | Suppose that the discrete random variable $X_{n}$has a geometric distribution given by
$$f_{X_n}(x_n)=P_n{(1-P_n)}^{x_n}$$ where $$x_n={0,1,2,3,}$$ and $$P_n\ =\ \frac{\lambda}{n}$$ for $0<\lambda<n$. Find the limiting value of the moment-generating function of $Y_n= \frac{X_n}{n}$ as $n\rightarrow\infty$ and use this result to determine the asymptotic distribution of $Y_n$.
I found this question in a book titled Exercises and Solutions in Statistical Theory (Exercise 3.23) and really appreciate it if you can help me.
| Limiting value of the moment generating function | CC BY-SA 4.0 | null | 2023-03-11T20:39:47.550 | 2023-03-11T22:05:01.970 | 2023-03-11T22:05:01.970 | 362671 | 382603 | [
"probability",
"self-study",
"moment-generating-function",
"geometric-distribution"
] |
609137 | 2 | null | 158095 | 3 | null | Bravo on having an intuition that, knowing nothing else, predicting the mean of $y$ every time is the best you can do (at least assuming "best" to be measured in terms of squared deviations between observed and predicted values). I believe this to be a critical component of understanding what $R^2$ and its generalizations mean.
There are many equivalent ways of writing $R^2$ in the simple cases, such as in-sample for ordinary least squares linear regression. Using standard notation where $n$ is the sample size, $y_i$ are the observed values, $\hat y_i$ are the predicted values, and $\bar y$ is the usual mean of all $y_i$, the one that makes the most sense to me is the following:
$$
R^2=1-\left(\dfrac{
\overset{n}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{n}{\underset{i=1}{\sum}}\left(
y_i-\bar y
\right)^2
}\right)
$$
(For (in-sample) OLS linear regression, this turns out to be equal to the squared correlation between predicted and observed values, also equal to the squared correlation between the $x$ and $y$ variables in a simple linear regression.)
A slight modification of the notation gives a relationship to variance.
$$
R^2=1-\left(\dfrac{
\dfrac{1}{n}\overset{n}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\dfrac{1}{n}\overset{n}{\underset{i=1}{\sum}}\left(
y_i-\bar y
\right)^2
}\right)
$$
Since the $\dfrac{1}{n}$ terms in the numerator and denominator cancel out, this is equal to the earlier formula. Then the numerator and denominator are equal to the variances of the residuals and of the original data.
$$
R^2=1-\left(\dfrac{
\dfrac{1}{n}\overset{n}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\dfrac{1}{n}\overset{n}{\underset{i=1}{\sum}}\left(
y_i-\bar y
\right)^2
}\right)\\
= 1 - \left(
\dfrac{
\mathbb V\text{ar}\left(
Y - \hat Y
\right)
}{
\mathbb V\text{ar}\left(
Y
\right)
}
\right)
$$
Next, I will take some of my explanation in [another answer of mine](https://stats.stackexchange.com/questions/551915/interpreting-nonlinear-regression-r2).
$$ y_i-\bar{y} = (y_i - \hat{y_i} + \hat{y_i} - \bar{y}) = (y_i - \hat{y_i}) + (\hat{y_i} - \bar{y}) $$
$$( y_i-\bar{y})^2 = \Big[ (y_i - \hat{y_i}) + (\hat{y_i} - \bar{y}) \Big]^2 =
(y_i - \hat{y_i})^2 + (\hat{y_i} - \bar{y})^2 + 2(y_i - \hat{y_i})(\hat{y_i} - \bar{y})
$$
$$SSTotal := \sum_i ( y_i-\bar{y})^2 = \sum_i(y_i - \hat{y_i})^2 + \sum_i(\hat{y_i} - \bar{y})^2 + 2\sum_i\Big[ (y_i - \hat{y_i})(\hat{y_i} - \bar{y}) \Big]$$
$$ :=SSRes + SSReg + Other $$
Divide through by the sample size $n$ (or $n-1$) to get variance estimates.
In OLS linear regression, $Other$ drops to zero. Consequently, all of the variance in $Y$ is accounted for by the residual variance (unexplained) and regression variance (explained). We, therefore, can describe the proportion of total variance explained by the regression, which would be the variance explained by the regression model $(SSReg/n)$ divided by the total variance $(SSTotal/n)$.
$$
\dfrac{SSReg/n}{SSTotal/n} $$$$= \dfrac{SSReg}{SSTotal} $$$$= \dfrac{SSTotal -SSRes-Other}{SSTotal} $$$$= 1-\dfrac{SSRes}{SSTotal}$$$$=1-\left(\dfrac{
\overset{n}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{n}{\underset{i=1}{\sum}}\left(
y_i-\bar y
\right)^2
}\right)
$$
For an intuition, you observe some phenomenon and record the values produced. As you notice that they are not all equal, you begin to wonder why. Different starting conditions (values of the features) can account for some of that. As an example, consider why people are not all the same height. One reason for this is that not everyone is the same age, and people tend to get taller as they grow up. If you only consider adults (so age is a feature), you will have a much narrower range of heights than if you consider all people. If you start to consider genetics and lifestyle, you might be able to get a rather tight distribution of plausible heights, thus explaining much of the variation in the combined values of all human heights.
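A quick numerical check of the identities above (my own simulated data; any in-sample OLS fit would do): $R^2 = 1 - SSRes/SSTotal$ matches the squared correlation of observed and predicted values, and the $Other$ cross term vanishes up to floating-point noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)

# OLS fit of y on an intercept and x
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ coef

ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# For in-sample OLS, R^2 equals the squared correlation of y and y_hat,
# and the "Other" cross term is numerically zero
r2_corr = np.corrcoef(y, y_hat)[0, 1] ** 2
other = 2 * np.sum((y - y_hat) * (y_hat - y.mean()))
print(r2, r2_corr, other)
```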
| null | CC BY-SA 4.0 | null | 2023-03-11T20:46:55.397 | 2023-03-13T10:42:45.840 | 2023-03-13T10:42:45.840 | 247274 | 247274 | null |
609138 | 2 | null | 609093 | 2 | null | In the reflective measurement model, you have three covariances, and you are estimating three loading, so zero degrees of freedom.
In the formative measurement model you are estimating three 'loadings', and three covariances between the three measured variables, so 6 parameters, hence -3 df.
To identify the model, you need a path coming out of the 'Need for Support' latent to another variable. If you add some constraints, you will have zero degrees of freedom. But in that case the latent variable won't be doing anything - you can remove it and have the predictors pointing directly to the outcome, and you have a regular regression model.
Alternatively you could fix the covariances between the measured variables to be zero. This will almost certainly be wrong and make your model fit very poor.
To over-identify the model, you need two paths coming out of the latent variable.
| null | CC BY-SA 4.0 | null | 2023-03-11T21:02:39.010 | 2023-03-11T21:02:39.010 | null | null | 17072 | null |
609139 | 1 | null | null | 2 | 38 | I'm running into a problem after trying what I thought would be a simple analysis. I have 47 sites where I measured a variety of habitat characteristics (canopy cover, habitat type, percent of bare ground, elevation, etc.). The habitat type consists of 4 categorical variables, canopy cover is continuous, and then there are the percentages. For each site, I also measured the same characteristics at two random sites, 50 m away. My goal is to see if/how these characteristics are informing the site selection of the original site. Because I'm looking at fine-scale selection, I want the two randoms to be paired to the site. A portion of my data is available at: github.com/rlumkes/bedsite-data
I originally tried a mixed effects model with `SiteID` as the random effect, but received a singular fit warning. Type refers to site (1) or random (0).
```
bedsites.random <- glmer(Type ~ Habitat + Canopy_Cover +
X100cm_Cover + (1|BedsiteID),
family = binomial(link = "logit"),
data = bedsites)
boundary (singular) fit: see help('isSingular')
```
I surmised this was from only having one observation for each site, so I tried `clogit` for case-control studies in R, only to end up with this warning and huge beta estimates:
```
bed.mod <- clogit(Type ~ Habitat + Canopy_Cover + X100cm_Cover +
strata(BedsiteID), data = bedsite)
Warning message:
In coxexact.fit(X, Y, istrat, offset, init, control, weights = weights, :
Loglik converged before variable 1,2,4 ; beta may be infinite.
```
THEN someone told me to try a negative binomial, which resulted in this:
```
summary(m1 <- glm.nb(Type ~ Habitat + Canopy_Cover,
data = bedsite))
Call:
glm.nb(formula = Type ~ Habitat + Canopy_Cover, data = bedsite,
init.theta = 9353.492376, link = log)
Deviance Residuals:
Min 1Q Median 3Q Max
-1.1609 -0.7308 -0.7308 0.5796 1.0996
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.343735 0.408254 -3.291 0.000997 ***
HabitatCRP 0.069801 0.578916 0.121 0.904031
HabitatForest 0.079300 0.879014 0.090 0.928117
HabitatGrassland 0.023234 0.464190 0.050 0.960081
HabitatShrubs 0.702339 0.520194 1.350 0.176969
Canopy_Cover 0.011405 0.006117 1.865 0.062243 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for Negative Binomial(9353.492) family taken to be 1)
Null deviance: 103.266 on 140 degrees of freedom
Residual deviance: 95.877 on 135 degrees of freedom
AIC: 203.88
Number of Fisher Scoring iterations: 1
Theta: 9353
Std. Err.: 115219
Warning while fitting theta: iteration limit reached
2 x log-likelihood: -189.882
Warning messages:
1: In theta.ml(Y, mu, sum(w), w, limit = control$maxit, trace = control$trace > :
iteration limit reached
2: In theta.ml(Y, mu, sum(w), w, limit = control$maxit, trace = control$trace > :
iteration limit reached
```
I also tried running these models with only one set of randoms in case the two randoms per one observation was throwing it off, but I got the same warnings. I'm completely at a loss for how to analyze this. Any ideas?
| Best analysis for site selection with paired data | CC BY-SA 4.0 | null | 2023-03-11T21:02:46.343 | 2023-03-27T19:27:18.670 | 2023-03-17T03:17:20.357 | 11887 | 382991 | [
"r",
"mixed-model",
"biostatistics",
"paired-data",
"case-control-study"
] |
609140 | 1 | null | null | 0 | 59 | I was watching Veritasium's Would You Take This Bet? video. In a part of the video Derek asks people whether they would accept the bet in the case of each true guess for flipping the coin the person would win $10$ dollars and for each true guessing this $10 $ dollar will increase twofold as $10+20+40$... etc. But for each false guess the person betting will lose $10$ dollars. So in video he tells that probability of losing money for $100 $ times of guessing is $1/2300$. I tried to find the this probability by myself. I mean it is obvious the probability of losing money in these circumstances but I couldn't find the same conclusion as Derek.
So I tried to find the minimum number of true guesses that would leave the person with a loss in the end.
For $10$ true guesses: $10\times ((1-(2^7))/(1-2))=1270\rightarrow$ money gained; $10\times 93=930 \rightarrow$ money lost.
For $9$ true guesses: $10\times ((1-(2^6))/(1-2))=630 \rightarrow$ money gained; $10\times 94=940 \rightarrow$ money lost.
In order to lose money, the person has to be wrong on at least $6$ of his guesses.
```
pbinom(6,size = 100, prob = 0.5) = 1.00298e-21
```
This is the result that I found. Where did I make a mistake?
| What Is the Probability of Losing Money? | CC BY-SA 4.0 | null | 2023-03-11T21:19:12.967 | 2023-03-12T06:02:48.553 | 2023-03-12T06:02:48.553 | 362671 | 382989 | [
"r",
"probability",
"binomial-distribution"
] |
609141 | 1 | null | null | 1 | 74 | I'm using PSM with an Epanechnikov Kernel and a bandwith of 0.06.
I'm confused about which observations are matched. I thought it was (broadly) like a wheighted radius matching, where every control observation within a 0.06 propensity score distance from treatment unit T1 receives a weight and is matched.
But some sources say all control observations within the common support are matched. I also read that with kernel matching restricting the common support is especially important because all cotnrol units are matched.
So the question is, with an Epanechnikov Kernel and a bandwith of 0.06 and a treatment unit T1 with a propensity score of 0,5 would a control unit with a score of 0,57 be matched or not?
| PSM kernel matching and bandwidth: which observations are used | CC BY-SA 4.0 | null | 2023-03-11T21:29:40.397 | 2023-03-11T21:29:40.397 | null | null | 382993 | [
"kernel-smoothing",
"propensity-scores",
"matching"
] |
609142 | 2 | null | 609136 | 1 | null | What should be the approach here?
We know if $X\sim \textrm{Geom}(p), ~M_X(t) = p(1-qe^t)^{-1}.$ Also we know for a constant $c,~M_{cX}(t) = M_X(ct).$ Here $c $ should be $1/n.$
We can hope the limiting calculation could be implemented using elementary calculus techniques.
Can we now formally start to solve the problem? To see what happens? What could go wrong?
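If it helps to check your own work against, here is a sketch of where the substitution leads (verify each step yourself):

$$M_{Y_n}(t) = M_{X_n}(t/n) = \frac{\lambda/n}{1-\left(1-\frac{\lambda}{n}\right)e^{t/n}}.$$

Expanding $e^{t/n} = 1 + \frac tn + O(n^{-2}),$ the denominator becomes $\frac{\lambda - t}{n} + O(n^{-2}),$ so for $t<\lambda$

$$\lim_{n\to\infty} M_{Y_n}(t) = \frac{\lambda}{\lambda - t},$$

which one can recognize as the MGF of a familiar continuous distribution with rate parameter $\lambda.$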
| null | CC BY-SA 4.0 | null | 2023-03-11T21:30:34.910 | 2023-03-11T21:30:34.910 | null | null | 362671 | null |
609143 | 1 | null | null | 2 | 96 | Assume the following linear relationship:
$Y_i = \beta_0 + \beta_1 X_i + u_i$, where $Y_i$ is the dependent variable, $X_i$ a single independent variable and $u_i$ the error term.
According to Stock & Watson (Introduction to Econometrics; [Chapter 4](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwjBp6CBjdX9AhWEm4kEHbnHDQUQFnoECAwQAQ&url=https%3A%2F%2Fwww.ssc.wisc.edu%2F%7Emchinn%2Fstock-watson-econometrics-3e-lowres.pdf&usg=AOvVaw0OYfyN19JTuWUBfYQuuxO8)), the second least squares assumption is that data should be i.i.d.
[](https://i.stack.imgur.com/UqrSl.png)
- I understand what this means, and know that the data should be representative of the population. But I do not understand HOW exactly the violation of this assumption makes the OLS estimators biased/inconsistent. Is it just via the assumption of no serial correlation of the errors?
| Why is i.i.d. an OLS assumption? | CC BY-SA 4.0 | null | 2023-03-11T22:09:13.657 | 2023-03-12T03:17:46.307 | 2023-03-12T03:17:46.307 | 362671 | 208995 | [
"regression",
"least-squares",
"econometrics",
"linear-model",
"unbiased-estimator"
] |
609145 | 1 | null | null | 1 | 26 | I am analyzing an anomaly event of daily stock returns. For many different stocks (>1000), I do some analysis to identify a day where I observe this anomaly.
To evaluate the effect of this anomaly, I am comparing the distribution of returns during a fixed period (e.g. one year) before and after the event date. For example, I calculate t-statistic and p-value with a paired t-test, under the null hypothesis that the mean of returns (or other company metrics) will be equal before and after. I repeat this for every event, of which I have a few thousand.
I am looking for a concise way of reporting these test results, and to interpret them. My current approach is to report 1 percent and 5 percent quantiles of calculated p-values to show what proportion of events generates a significant change. Furthermore, I report min/max and quartiles of observed t-statistics to illuminate the spread.
a.) Is this an appropriate approach?
b.) Are there any implications that I am not considering right now?
c.) Is there a better way? For example, I found a suggestion to t-test whether the distribution of t-statistics matches the expectation, but I am not certain this makes sense.
Thanks in advance!
NB: I also run Kolmogorov-Smirnov tests on before and after returns to see if the distribution of returns changes. Here I am interested in the effect of the mean of distributions.
| Reporting or combining multiple t-test results | CC BY-SA 4.0 | null | 2023-03-11T22:45:57.590 | 2023-03-11T22:59:39.437 | 2023-03-11T22:59:39.437 | 52945 | 52945 | [
"t-test",
"p-value",
"meta-analysis"
] |
609146 | 1 | null | null | 0 | 197 | I am running a DiD regression to evaluate the effect of a policy and I want to check whether the parallel trend assumption holds in my data. However, the model includes control variables that can affect the trends before treatment. Thus, doing a visual check might not work : it is possible that the trends are not parallel because of changes in the control variables. What would be an appropriate way to test the parallel trend assumption in that case ?
Thank you !
| Test the parallel trend assumption in DiD regression with controls | CC BY-SA 4.0 | null | 2023-03-11T22:49:24.450 | 2023-03-11T23:28:55.740 | null | null | 382884 | [
"r",
"difference-in-difference",
"controlling-for-a-variable"
] |
609147 | 1 | null | null | 1 | 8 | Hi I'm trying to spatially analyse two point patterns in relation to each other: macrophage cells and tumour cells. The images I have generated coordinates from shows that the macrophages group towards the tumour cells but my graph from the imhomogenous L-cross type function shows the opposite. I don't know where I'm going wrong here. Attached is my results graph.
this is my code:
```
#this lets me wipe R's memory
rm(list=ls())
#this allows me to load the spatstat package
library(spatstat)
#this is me loading in my tumour data and plotting it
tumour.data <- read.table(file.choose(), header=TRUE)
summary(tumour.data)
#this is me turning my table into matrix
tumour.matrix <- data.matrix(tumour.data)
#ok it is now time for me to do the exact same for my macrophage data; load it
macrophage.data <- read.table(file.choose(), header=TRUE)
summary(macrophage.data)
#turn it into a matrix
macrophage.matrix <- data.matrix(macrophage.data)
#now i need to make my window
win <- owin(xrange=c(0,10000), yrange=c(0,10000), poly=NULL, mask=NULL,
unitname=NULL, xy=NULL)
plot(win)
#i need to convert my matrix into a point pattern so that spatstat can read it
p_tumour <- ppp(x = tumour.data$X, y = tumour.data$Y, window = win)
#convert my matrix into a point pattern so that spatstat can read it
p_macrophage <- ppp(x = macrophage.data$X, y = macrophage.data$Y, window = win)
#gotta combine these ppps now
comb <- superimpose("cancer" = p_tumour, "macrophage" = p_macrophage)
#let's define my r value for the radii that i want to look at
radii <- seq(0, 3, by=0.1)
#use cross-type l function
results <- Lcross.inhom(comb, "cancer", "macrophage", r = radii, correction = "Ripley")
#and the final touch
plot(results)
```
[](https://i.stack.imgur.com/AHCkG.png)
| Why is my L-cross type function not showing me the correct results? | CC BY-SA 4.0 | null | 2023-03-11T22:52:10.350 | 2023-03-12T12:33:22.557 | 2023-03-12T12:33:22.557 | 11852 | 382999 | [
"clustering",
"ripley-k",
"spatstat"
] |
609148 | 1 | null | null | 0 | 6 | I need advice on the logistic regression formula I'm working on.
The dependent variable is a binary variable (1 for good financial performance of the firm and 0 for bad performance). Because the purpose of my paper is to research the effect of managerial experience on firm performance, I gathered financial data on managers' current firm performance (the binary dependent variable) and on the same managers' previous firm's performance (the proposed independent variable - similarly a binary 1/0 variable for firm performance). The measured performance covers 4 years for both the new and the old firm. The only difference is that I use the average performance values for the previous firm and the maximum performance values at the end of year 4 for the new firm.
So basically I have the data for managers' old and new firm performance and want to see if the experience in the previous firm affects the performance in the new firm.
The proposed logistic regression formula is: Newperformance = a + b1 * oldperformance + b2* controls
Now my supervisor has said that there is an issue with such an approach and I should choose a different method, while I cannot see any issue with it.
Thanks
| Logistic regression new firm performance- old firm performance | CC BY-SA 4.0 | null | 2023-03-11T22:57:15.087 | 2023-03-11T22:57:15.087 | null | null | 347083 | [
"hypothesis-testing",
"logistic",
"mathematical-statistics",
"econometrics",
"binary-data"
] |
609149 | 1 | 609154 | null | 1 | 33 | I am not really sure how to describe it with the correct mathematical terms. If there are any questions, I will try to explain further.
I have several instances of a group of proportions. One could say I have several pie charts, where each pie chart has the same kind of pieces (classes) and the pie's pieces always sum to one.
Examples:
[](https://i.stack.imgur.com/nF4Xt.png)
[](https://i.stack.imgur.com/hDIWD.png)
I am searching for a method for comparing these distributions.
I have thought of a method but am not sure if that is scientific, it may already exist and have a proper name.
The method compares the corresponding slices/classes of the two distributions. For each class, the minimum of the two shares is computed, and these minima are summed. The result is always in the interval 0 to 1, with 1 meaning identical distributions and 0 meaning completely different distributions.
For the two examples above, the result of this similarity measure would be 0.75.
In pseudocode: (min(pie1_class1, pie2_class1) + min(pie1_class2, pie2_class2) + min(pie1_class3, pie2_class3))
Is there a name for such a similarity measure?
Is there a more appropriate similarity measure?
| Similarity measure for two discrete distributions (a group of proportions) | CC BY-SA 4.0 | null | 2023-03-11T23:20:39.077 | 2023-03-12T00:05:36.570 | null | null | 382998 | [
"proportion",
"similarities"
] |
609150 | 2 | null | 609132 | 0 | null |
#### That is a bad idea
If you use parameter estimates from your data to form your "prior" then it is not really a genuine prior distribution, since it is formed using the data. This then leads to a violation of Bayes' rule in the updating process, and means you are not using a genuine Bayesian model.
| null | CC BY-SA 4.0 | null | 2023-03-11T23:23:58.313 | 2023-03-11T23:23:58.313 | null | null | 173082 | null |
609153 | 1 | null | null | 0 | 98 | I want to calculate the sample size $n$ for a two-sided t-test with the following values:
- alpha = 0.05
- power = 0.8
- effect size = 0.8
This is the code I tried in Python:
```
n = statsmodels.stats.power.tt_ind_solve_power(effect_size=0.8, alpha=0.05, power=0.8)
print(n)
```
This is the code I tried in R:
```
result = power.t.test(sig.level=0.05, delta=0.8, power=0.8)
print(result["n"])
```
The result is 25.524. But I noticed this program does not give the same result:
```
from scipy.stats import norm
def calculate_n(alpha, power, effect_size):
t_alpha = norm.ppf(alpha / 2)
t_beta = norm.ppf(1 - power)
return 2 * ((t_alpha + t_beta) / effect_size)**2
n = calculate_n(alpha=0.05, power=0.8, effect_size=0.8)
print(n)
```
The result is 24.527. How should I calculate the sample size? I want to know the formula.
(postscript)
I tried this code, but power is wrong.
```
import scipy.stats as stats
def test_n(alpha, effect_size, n):
t_alpha = stats.t.ppf(1 - alpha / 2, df=n-1)
    # Cohen's d = effect_size = (mean1 - mean2) / s
# if std1 and std2 are 1, s is always 1.
# s = np.sqrt(((n1 - 1) * std1**2 + (n2 - 1) * std2**2) / ((n1 - 1) + (n2 - 1)))
power = stats.t.cdf(t_alpha, loc=effect_size, df=n-1)
return power
for n in range(2, 50):
power = test_n(alpha=0.05, effect_size=0.8, n=n)
print(power)
```
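(For reference: the 24.527 figure comes from the normal-approximation formula, while `power.t.test` and `tt_ind_solve_power` solve the exact equation based on the noncentral $t$ distribution. A sketch of that exact calculation, assuming `scipy.stats.nct`:)

```python
import numpy as np
from scipy.stats import nct, t

def power_two_sample(n, effect_size, alpha=0.05):
    # exact power of a two-sided two-sample t test, n subjects per group
    df = 2 * n - 2
    nc = effect_size * np.sqrt(n / 2)      # noncentrality parameter
    t_crit = t.ppf(1 - alpha / 2, df)
    return (1 - nct.cdf(t_crit, df, nc)) + nct.cdf(-t_crit, df, nc)

# smallest integer group size reaching 80% power
n_required = next(n for n in range(2, 100) if power_two_sample(n, 0.8) >= 0.8)
print(n_required)  # 26, consistent with the non-integer solution 25.524
```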
| How to calculate the sample size for a 2-sided t test? | CC BY-SA 4.0 | null | 2023-03-11T23:42:03.873 | 2023-03-12T14:32:22.620 | 2023-03-12T14:32:22.620 | 383000 | 383000 | [
"r",
"python",
"t-test",
"statistical-power",
"two-tailed-test"
] |
609154 | 2 | null | 609149 | 0 | null | This is one minus the Bray-Curtis dissimilarity between two compositions expressed as proportions.
See: [https://en.wikipedia.org/wiki/Bray%E2%80%93Curtis_dissimilarity](https://en.wikipedia.org/wiki/Bray%E2%80%93Curtis_dissimilarity)
This is a widely used measure in ecology. There are other ways to measure the similarity or dissimilarity between compositions but I don't know much about them.
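As an illustration (with two made-up compositions), the measure described in the question coincides with one minus the Bray-Curtis dissimilarity, which is available in SciPy:

```python
import numpy as np
from scipy.spatial.distance import braycurtis

# two hypothetical compositions, each summing to 1
p1 = np.array([0.50, 0.30, 0.20])
p2 = np.array([0.40, 0.25, 0.35])

min_overlap = np.minimum(p1, p2).sum()   # the measure from the question
similarity = 1 - braycurtis(p1, p2)      # one minus Bray-Curtis
print(min_overlap, similarity)           # both 0.85
```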
| null | CC BY-SA 4.0 | null | 2023-03-11T23:52:03.100 | 2023-03-12T00:05:36.570 | 2023-03-12T00:05:36.570 | 68149 | 68149 | null |
609155 | 1 | null | null | 1 | 14 | Let's assume that there exist a table ("Table A") with several rows and columns.
Each cell of the last column shows the sum of all values of the respective row, each cell of the last row the sum of all values of the respective column.
Let's make an example to be clear. The actual tables I have are about 19 rows by 7 columns (or similar size), therefore I need a solution that works with bigger tables.
|row |a |b |c (sum) |
|---|-|-|-------|
|r1 |6 |7 |13 |
|r2 |16 |12 |28 |
|r3 |19 |24 |43 |
|r4 (sum) |41 |43 |84 |
Now, let's assume all values are published in table B, but rounded to the nearest multiple of 5.
|row |a |b |c (rounded sum of original values |d (sum of rounded values) |
|---|-|-|---------------------------------|-------------------------|
|r1 |5 |5 |15 |10 |
|r2 |15 |10 |30 |25 |
|r3 |20 |25 |45 |45 |
|r4 (rounded sum of original values) |40 |45 |85 |85 |
|r5 (sum of rounded values) |40 |40 |90 |- |
Now the rounded sums of the original values and the sums of the rounded values may differ. In general, the rounded sum of the original values should be nearer to the true value than the sum of the rounded values.
I have only table B, and I want to make an estimate of the values of table A.
I want to exploit the fact that the true values are integers which deviate by at most 2 units from the rounded value, and that the rounded total sum in cell (r4,c), i.e. 85 in the example, is probably the best guess of the total sum (I note that I am working with tables that are about 19x7, so it is not simple to make a better guess).
|row |a |b |c (sum) |
|---|-|-|-------|
|r1 |x11 |x12 |x11+x12 |
|r2 |x21 |x22 |x21+x22 |
|r3 |x31 |x32 |x31+x32 |
|r4 (sum) |x11+x21+x31 |x12+x22+x32 |85? |
I suppose that a kind of algorithm for doing this must exist, but I have no idea of how it may be called. Actually I have no idea at all on how to approach the problem.
To calculate these values I can use R, but if it were possible to do it in Excel it would be better. I would be grateful for any suggestion on where to search for the solution of the problem, or the name of this kind of problem.
| Estimate of original value in a table rounded at the nearest multiple of 5 | CC BY-SA 4.0 | null | 2023-03-12T00:00:39.320 | 2023-03-17T03:16:23.150 | 2023-03-17T03:16:23.150 | 11887 | 334689 | [
"estimation",
"missing-data",
"tables"
] |
609156 | 2 | null | 609123 | 5 | null | As pointed out by @whuber, we can simply use the known formulas for conditional distributions in the multinormal, as given for instance at [Deriving the conditional distributions of a multivariate normal distribution](https://stats.stackexchange.com/questions/30588/deriving-the-conditional-distributions-of-a-multivariate-normal-distribution) (we will use notation from there). At first I doubted, since the full covariance matrix $\Sigma$ in this case is singular, but a close reading of the [proof](https://stats.stackexchange.com/a/30600/11887) by user macro shows that we only need that $\Sigma_{22}$ is non-singular. That $\Sigma$ itself is singular does not matter.
For this case, we can easily compute (details not given) that
$$
\Sigma =\begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\
\Sigma_{21} & \Sigma_{22}
\end{pmatrix}
=\sigma^2 \begin{pmatrix}I_n & 1_n \\
1_n^T & n \end{pmatrix}
$$
where $I_n$ is the $n \times n$ identity matrix, and $1_n$ is the column vector with $n$ $1$'s. Then using the formulas we find that the conditional distribution of $X$ given that $Y=y$, where $Y=\sum_1^n X_i$, is the n-dimensional multinormal distribution with mean
$$ \mu_{x|y} = 1_n \frac{y}n $$ that is, all the components have the same conditional expectation $y/n$, and covariance matrix
$$ \Sigma_{x|y} = \sigma^2 \left( I_n - 1_n 1_n^T / n \right) $$
Note that $1_n 1_n^T$ is an $n \times n$-matrix with all components 1.
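A quick numeric sanity check of these formulas (the values of $n$, $\sigma^2$ and $y$ below are chosen arbitrarily):

```python
import numpy as np

# check the conditional formulas for n = 4, sigma^2 = 2, y = 10
n, sigma2, y = 4, 2.0, 10.0
ones = np.ones((n, 1))

Sigma11 = sigma2 * np.eye(n)
Sigma12 = sigma2 * ones
Sigma22 = np.array([[sigma2 * n]])

# generic multinormal conditioning formulas (zero prior means)
mu_cond = (Sigma12 @ np.linalg.inv(Sigma22)) * y
Sigma_cond = Sigma11 - Sigma12 @ np.linalg.inv(Sigma22) @ Sigma12.T

print(mu_cond.ravel())  # all components equal y/n = 2.5
print(np.allclose(Sigma_cond,
                  sigma2 * (np.eye(n) - ones @ ones.T / n)))  # True
```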
| null | CC BY-SA 4.0 | null | 2023-03-12T00:21:29.427 | 2023-03-12T02:44:20.683 | 2023-03-12T02:44:20.683 | 362671 | 11887 | null |
609158 | 5 | null | null | 0 | null | null | CC BY-SA 4.0 | null | 2023-03-12T02:10:59.507 | 2023-03-12T02:10:59.507 | 2023-03-12T02:10:59.507 | 247274 | 247274 | null | |
609159 | 4 | null | null | 0 | null | For questions about when and why an r-squared-style calculation yields a value below zero. | null | CC BY-SA 4.0 | null | 2023-03-12T02:10:59.507 | 2023-03-12T02:10:59.507 | 2023-03-12T02:10:59.507 | 247274 | 247274 | null |
609160 | 1 | 609622 | null | 0 | 40 | I am running binomial GLMMs in R to determine whether species presence (binary) on a hydrophone is different between seasons (i.e. spring, summer, fall, and winter) and photoperiods (i.e. day, night, dawn, and dusk). My models include a temporal autocorrelation structure with `Group` given as a single value since I am using a single hydrophone. I understand that I can use the `multcomp` or `emmeans` packages to conduct pairwise comparisons on my model(s), but am unsure whether I should run separate GLMMs with a single predictor:
```
M1 <- glmmTMB(Presence ~ Photoperiod + ou(Time - 1|Group), data = df, family = binomial(link="logit"))
M2 <- glmmTMB(Presence ~ Season + ou(Time - 1|Group), data = df, family = binomial(link="logit"))
```
or a single model with both predictors:
```
M3 <- glmmTMB(Presence ~ Photoperiod + Season + ou(Time - 1|Group), data = df, family = binomial(link="logit"))
```
Using `car::Anova()`, both photoperiod and season have significant effects on presence in all three models. However, my `pairs(emmeans())` results are different enough to affect significance depending on whether I model the predictors together (`M3`) or separately (`M1` and `M2`). `M3` has a slightly lower AIC value than `M1` or `M2`.
Are there any ways to justify using two models with a single predictor vs a single model with multiple predictors if my goal is determining whether species presence differs between photoperiods and seasons? I also have a few environmental covariates (i.e. sea-surface temperature, chlorophyll concentration, and sea level) I was planning on putting in a separate model, but am now wondering if I should model them alongside season and photoperiod?
As you may be able to tell from this post, I am a bit of a modelling novice and so am partial to simpler methods so long as they don't lead to incorrect/misleading results.
| Multiple vs Single Predictor Variables for GLMM Pairwise Comparisons | CC BY-SA 4.0 | null | 2023-03-12T02:58:29.350 | 2023-03-15T22:38:52.727 | 2023-03-12T03:02:38.717 | 383007 | 383007 | [
"r",
"generalized-linear-model",
"model-selection",
"multiple-comparisons",
"lsmeans"
] |
609161 | 1 | null | null | 1 | 28 | I try to use the definition of sufficient statistic to prove that
>
Suppose that $X_1,\dots, X_n$ is an iid random sample from $X\sim \mathrm{Bernoulli}(p)$. Show that $T=\sum_{i=1}^n X_i$ is a sufficient statistics for $p$.
The definition of a sufficient statistic is that the conditional joint distribution of the sample $(X_1,\dots, X_n)$ given $T$ does not depend on $\theta$.
---
My work:
Note that
$$
P(X_1=x_1,\dots, X_n=x_n|T=t)=\frac{P(X_1=x_1,\dots, X_n=x_n, T=t)}{P(T=t)}
$$
I'm not sure if my numerator and denominator are correct.
I do not know how to evaluate the numerator part.
But the denominator involves the sum of iid Bernoulli random variables, which follows a Binomial distribution:
$$
P(T=t)=\binom{n}{t}p^t(1-p)^{n-t}
$$
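(As a sanity check of this denominator, one can enumerate all $0/1$ samples for arbitrary small values of $n$, $t$ and $p$:)

```python
import itertools
import math

p, n, t = 0.3, 5, 2
# total probability of all 0/1 sequences whose sum is t
total = sum(
    math.prod(p if x else 1 - p for x in xs)
    for xs in itertools.product([0, 1], repeat=n)
    if sum(xs) == t
)
binom_pmf = math.comb(n, t) * p**t * (1 - p) ** (n - t)
print(abs(total - binom_pmf) < 1e-12)  # True
```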
| Show that $T=\sum_{i=1}^n X_i$ is a sufficient statistic for $p$ | CC BY-SA 4.0 | null | 2023-03-12T04:37:10.953 | 2023-03-12T04:54:16.297 | 2023-03-12T04:54:16.297 | 362671 | 334918 | [
"self-study",
"mathematical-statistics",
"estimators",
"bernoulli-distribution",
"sufficient-statistics"
] |
609162 | 1 | null | null | 1 | 34 | Let $L(\theta;X)$ denote the log-likelihood of a model and I maximize the following to estimate $\theta$,
$$
\arg\max_{\theta}L(\theta;X)-\lambda\theta^2
$$
If $\lambda$ is 0, then the asymptotic distribution of $\theta$ is normal under suitable regularity conditions. What is the case when $\lambda$ isn't zero? I think there is a Bayesian interpretation of penalization, but I am looking for arguments without invoking any prior distribution.
| Asymptotic normality of penalized MLE? | CC BY-SA 4.0 | null | 2023-03-12T04:58:38.373 | 2023-03-17T03:14:50.120 | 2023-03-17T03:14:50.120 | 11887 | 266619 | [
"mathematical-statistics",
"maximum-likelihood",
"regularization",
"asymptotics"
] |
609163 | 1 | null | null | 2 | 24 | I conducted a survey of N=88 students.
I asked a question:
'What type of goal setting do you prefer?'
A: online
B: paper-based
C: no preference
Responses
A: 58
B: 9
C: 21
I will display this result in descriptive statistics, but I want to check the probability that this result in favour of online goal setting came about by chance. For this I believe I need to know the p-value.
What test should I use?
(Someone suggested an answer proposing a chi-squared test. Is this a good fit?)
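(If a chi-squared goodness-of-fit test against the null of equal preference is appropriate, it could be run like this, e.g. with SciPy's `chisquare`, which defaults to equal expected counts:)

```python
from scipy.stats import chisquare

observed = [58, 9, 21]  # A, B, C; expected 88/3 each under the null
stat, p = chisquare(observed)
print(stat, p)  # statistic around 44.5, p-value far below 0.05
```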
| How to calculate p-value for a survey question with three categorical answers options? | CC BY-SA 4.0 | null | 2023-03-12T05:40:22.603 | 2023-03-12T09:24:29.597 | 2023-03-12T09:24:29.597 | 382959 | 382959 | [
"probability",
"p-value"
] |
609164 | 2 | null | 609140 | 0 | null | You have not linked to the video, presumably [https://youtu.be/vBX-KulgJ1o?t=190](https://youtu.be/vBX-KulgJ1o?t=190) starting around 3:10 up to 4:40 but I do not think it quite says what you describe. The bet sizes do not change through the process.
Instead it is a fair coin with favourable bets (win $+20$, lose $-10$). So if you have $100$ bets, you will lose money if your side comes up $33$ or fewer times out of $100$ but win overall with your side coming up $34$ or more times, since $20 \times 33 -10 \times 67 =-10 <0$ while $20 \times 34 -10 \times 66 =+20 > 0$.
That makes the probability of losing overall $\sum \limits_{k=0}^{33} {100 \choose k} 2^{-100}$, which you can find in R with `pbinom(33,100,1/2)`, giving about $0.00043686$ or about $\frac{1}{2289}$.
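The same number can be checked in Python:

```python
from scipy.stats import binom

# probability of winning 33 or fewer of 100 fair-coin bets
p_lose = binom.cdf(33, 100, 0.5)
print(p_lose)  # about 0.00043686, i.e. roughly 1/2289
```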
| null | CC BY-SA 4.0 | null | 2023-03-12T05:44:18.597 | 2023-03-12T05:44:18.597 | null | null | 2958 | null |
609165 | 1 | null | null | 1 | 19 | I am studying the forward-backward algorithm following the Wikipedia [page](https://en.wikipedia.org/wiki/Forward%E2%80%93backward_algorithm). I have little background in statistics and have managed to understand (hopefully) most parts of the algorithm. However, the scaling factor introduced in the Forward probabilities section is confusing me.
The second-to-last equation says that the "product of scaling factors from each timestamp is the total probability for observing the given events irrespective of the final states". However, from my understanding, in order to scale the state vector, the scaling factor should be a vector that has the same dimension as the state vector, with every entry equal to the sum of all entries of the state vector. Thus the product of all scaling factors would be a vector whose entries are all the same, and the total probability calculated this way seems meaningless. I must have misunderstood something along the way but could not figure out what. Please correct me; any help is appreciated.
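(For context, in the usual scalar-scaling convention, e.g. Rabiner's tutorial, each scaling factor $c_t$ is a single number, the sum of the entries of the unnormalized forward vector, so the product of the $c_t$ is a scalar. A minimal sketch with made-up numbers:)

```python
import numpy as np

# a tiny 2-state HMM (all numbers are made up for illustration)
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])          # transition matrix
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # emission matrix, B[state, symbol]
pi = np.array([0.5, 0.5])           # initial distribution
obs = [0, 1, 0]

# forward pass with scaling: each c_t is a SCALAR, not a vector
alpha = pi * B[:, obs[0]]
c = [alpha.sum()]
alpha = alpha / c[-1]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]
    c.append(alpha.sum())
    alpha = alpha / c[-1]

# brute-force P(obs): sum over all state paths
p_brute = 0.0
for s0 in (0, 1):
    for s1 in (0, 1):
        for s2 in (0, 1):
            p_brute += (pi[s0] * B[s0, obs[0]] *
                        A[s0, s1] * B[s1, obs[1]] *
                        A[s1, s2] * B[s2, obs[2]])

print(np.prod(c), p_brute)  # the product of the scalars equals P(obs)
```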
[](https://i.stack.imgur.com/V0X5B.png)
| Scaling factor in forward-backward algorithm | CC BY-SA 4.0 | null | 2023-03-12T06:00:06.897 | 2023-03-12T06:04:18.000 | 2023-03-12T06:04:18.000 | 362671 | 370629 | [
"probability",
"bayesian",
"forward-backward"
] |
609166 | 1 | 609183 | null | 7 | 658 | I have been pretty confused about maximum likelihood as expressed by my question [here](https://stats.stackexchange.com/questions/503365/understanding-maximum-likelihood-estimation?noredirect=1#comment1129540_503365). But this question is not about MLE.
It occurs to me my confusion may have been because the likelihood function does not return likelihood. It returns probability density (if I understand correctly).
So why is it called the [likelihood function](https://en.wikipedia.org/wiki/Likelihood_function)?
Could it have been better named?
Is the meaning of "function" in statistics different from the meaning of "function" in programming?
[Update]
I am now understanding from Glen_b that the likelihood function does return likelihood.
My problem may be that I don't have an intuitive understanding of what likelihood is. Especially since it can be [infinity](https://stats.stackexchange.com/questions/4220/can-a-probability-distribution-value-exceeding-1-be-ok)
[Update]
Is likelihood a ratio? As per this picture in [Lakens' Coursera course: Improving your statistical inference Lecture 2.1](https://www.coursera.org/learn/statistical-inferences/lecture/8yZDk/likelihoods)
[](https://i.stack.imgur.com/Xijyx.png)
[Update]
On [this post's comment](https://stats.stackexchange.com/questions/28801/what-is-the-difference-between-priors-and-likelihood?rq=1#comment54311_28801), I see that "The likelihood is the joint density of the data given a parameter value".
Yet in Tim's answer it is the data that is "given" (If I interpret the vertical bar correctly)
| What is likelihood actually? | CC BY-SA 4.0 | null | 2023-03-12T06:35:18.303 | 2023-03-30T01:15:46.643 | 2023-03-14T07:08:10.497 | 362671 | 284610 | [
"mathematical-statistics",
"interpretation",
"terminology",
"likelihood"
] |
609168 | 1 | 609250 | null | 2 | 54 | I was wondering if it will violate any assumptions of linear mixed effects (LME) models if I were to include interaction terms between the covariates and IVs in my model.
For example, the model that I would like to specify would be:
```
model 1: lmer(DV ~ A / (B*C) + cov1 + cov2 + cov3 + cov4 +
B:cov1 + B:cov2 + B:cov3 + B:cov4 +
C:cov1 + C:cov2 + C:cov3 + C:cov4 + (1|ID), data=df)
```
| Interaction between IV and covariates in Linear Mixed Effects Model | CC BY-SA 4.0 | null | 2023-03-12T06:57:40.757 | 2023-03-13T13:43:23.100 | 2023-03-13T13:43:23.100 | 345611 | 379720 | [
"regression",
"mixed-model",
"interaction",
"linear-model",
"predictor"
] |
609169 | 1 | 609178 | null | 4 | 188 | I am training a vanilla 5-layer LSTM. My task is to compare two models: one without (baseline) and one with additional features (the compared model). However, I found that the compared model only surpasses the baseline under certain fine-tuning settings.
For example, when I set the learning rate to 0.01, the compared model wins, but when I set it to 0.005, the baseline wins. Tuning other hyperparameters also changes the outcome of the comparison.
Is it normal to have this kind of situation? How should I explain this?
| Model only exceeds baseline in a certain fine-tuning condition | CC BY-SA 4.0 | null | 2023-03-12T07:02:27.763 | 2023-03-12T09:51:57.047 | null | null | 382355 | [
"python",
"lstm",
"tuning"
] |
609170 | 1 | null | null | 0 | 9 | I'm about to run two ANOVA tests between 3 groups (A, B, C): one test on dependent variable X, and one on dependent variable Y. Of course, after I finish the ANOVA test on either X or Y, I would have to run a post hoc analysis, let's say Tukey's test. However, that creates a problem: multiple hypotheses. Should I apply a layer of Bonferroni correction for multiple comparisons (p-values divided by 6, the number of conducted tests)?
| Multiple comparisons testing for multiple post hoc tests | CC BY-SA 4.0 | null | 2023-03-12T07:10:53.877 | 2023-03-12T07:13:07.097 | 2023-03-12T07:13:07.097 | 374518 | 374518 | [
"hypothesis-testing",
"anova",
"multiple-comparisons",
"post-hoc"
] |
609171 | 1 | 609187 | null | 2 | 66 | In the regression model
$$y= x'\beta + u, \quad x = (1, x_2,...,x_K)$$
with
$$E[u |x]=0,$$
we know that it implies: $E [u] = 0$ and $cov(u,x_j)=0$, for $j=1,...,K$.
I think that the converse is not true. Thinking geometrically, they seem equivalent, but I can't come up with a counterexample!
Do you have a counterexample?
Thinking geometrically
Defining $\langle X,Y\rangle= E[XY]$ we have that if $X$ or $Y$ is such that $E[X]=0$ or $E[Y]=0$, we have:
$$\langle X,Y\rangle=cov(X,Y)$$
So, if $cov(u,x_j)=0$, I have orthogonality: $\langle u,x_j\rangle=0$ for all $j=1,...,k$. This seems to imply that the projection of $u$ onto $1,x_2,...,x_k$ is $0$, i.e. $E[u|x]=0$.
| Show that $E [u] = 0$ and $cov(u,x_j)=0$ does not imply $E[u|x]=0$ | CC BY-SA 4.0 | null | 2023-03-12T07:33:51.120 | 2023-03-12T12:04:42.687 | 2023-03-12T08:04:42.103 | 362671 | 373088 | [
"multiple-regression",
"least-squares",
"exogeneity"
] |
609172 | 1 | 609199 | null | 0 | 42 | I have a temperature-related daily time series. I plotted the time series and found that it has seasonal variations. Thus I differenced the series.
[](https://i.stack.imgur.com/aBbiC.png)
When I did Dickey-Fuller test for both temperature and differenced temperature using `ur.df`, the results say that both time series are stationary.
Dickey-Fuller test for differenced temperature:
```
y_tempmax<-diff(production$tempmax, lag = 1, differences =1 )
summary(ur.df(y_tempmax, lags=1, type='trend'))#drift
###############################################
# Augmented Dickey-Fuller Test Unit Root Test #
###############################################
Test regression trend
Call:
lm(formula = z.diff ~ z.lag.1 + 1 + tt + z.diff.lag)
Residuals:
Min 1Q Median 3Q Max
-11.1515 -0.9934 -0.0003 1.0565 9.1865
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.353e-02 7.477e-02 0.181 0.856
z.lag.1 -1.593e+00 3.436e-02 -46.375 <2e-16 ***
tt -8.498e-06 6.172e-05 -0.138 0.891
z.diff.lag 2.260e-01 2.131e-02 10.606 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.71 on 2092 degrees of freedom
Multiple R-squared: 0.6676, Adjusted R-squared: 0.6671
F-statistic: 1401 on 3 and 2092 DF, p-value: < 2.2e-16
Value of test-statistic is: -46.3754 716.8953 1075.342
Critical values for test statistics:
1pct 5pct 10pct
tau3 -3.96 -3.41 -3.12
phi2 6.09 4.68 4.03
phi3 8.27 6.25 5.34
```
Dickey-Fuller test for temperature:
```
summary(ur.df(production$tempmax, lags=1, type='trend'))#drift
###############################################
# Augmented Dickey-Fuller Test Unit Root Test #
###############################################
Test regression trend
Call:
lm(formula = z.diff ~ z.lag.1 + 1 + tt + z.diff.lag)
Residuals:
Min 1Q Median 3Q Max
-11.6277 -1.0295 -0.0107 1.0514 9.4926
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 6.259e+00 8.239e-01 7.596 4.56e-14 ***
z.lag.1 -6.946e-02 9.123e-03 -7.613 4.01e-14 ***
tt -4.207e-05 6.260e-05 -0.672 0.502
z.diff.lag -2.650e-01 2.107e-02 -12.576 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.731 on 2093 degrees of freedom
Multiple R-squared: 0.1143, Adjusted R-squared: 0.113
F-statistic: 90 on 3 and 2093 DF, p-value: < 2.2e-16
Value of test-statistic is: -7.6135 19.3307 28.9914
Critical values for test statistics:
1pct 5pct 10pct
tau3 -3.96 -3.41 -3.12
phi2 6.09 4.68 4.03
phi3 8.27 6.25 5.34
```
But when I draw ACF and PACF plot for temperature, I get this:
[](https://i.stack.imgur.com/IDkYu.png)
And for differenced temperature, I get this:
[](https://i.stack.imgur.com/EgXcu.png)
I also did `auto.arima` for original temperature series to find suitable model. I got the following results:
```
summary(a_tempmax)
Series: production$tempmax
ARIMA(2,0,1) with non-zero mean
Coefficients:
ar1 ar2 ma1 mean
1.2924 -0.3013 -0.7346 89.4272
s.e. 0.0404 0.0391 0.0301 1.0500
sigma^2 = 2.776: log likelihood = -4049.1
AIC=8108.2 AICc=8108.23 BIC=8136.44
Training set error measures:
ME RMSE MAE MPE MAPE MASE ACF1
Training set 0.006608083 1.664633 1.27077 -0.02703688 1.419712 0.9228368 0.003088886
```
Do I need to difference the original temperature series to make it stationary? Any suggestions will be appreciated.
| Modelling temperature: seasonality, stationarity, differencing | CC BY-SA 4.0 | null | 2023-03-12T07:43:13.653 | 2023-03-12T17:04:06.180 | 2023-03-12T17:01:51.537 | 53690 | 383013 | [
"time-series",
"stationarity",
"seasonality",
"acf-pacf",
"differencing"
] |
609174 | 2 | null | 609171 | 0 | null | How about:
\begin{align}
P(U=-1, X=0)& = 1/2\\
P(U=1, X=1) &= 1/4\\
P(U=1, X=-1) &= 1/4
\end{align}
| null | CC BY-SA 4.0 | null | 2023-03-12T08:41:43.003 | 2023-03-12T09:03:29.420 | 2023-03-12T09:03:29.420 | 362671 | 303650 | null |
609175 | 1 | null | null | 0 | 20 | We would like to properly set up the permutation design in `anova()` testing for our `rda()`.
Study design
We chose several areas, call them `study_area`, in which we established several same-size study plots, call them `study_plot`. In each `study_plot` we noted down abundance of bird species in two subsequent years (repeated measures), thus introducing a categorical variable `year` with two levels.
Modelling
We would like to investigate the effect of `year` on composition of bird community when the effect of `study_area` is "partialled-out", by running the following model:
```
rda(bird_spec ~ year + Condition(study_area))
```
To test the significance of the constrained axis, we would like to perform `anova()` accounting for the two consecutive years of bird census in each `study_plot`, where the plots were established in particular `study_area`s.
Would it be correct to set the permutation design as follows?
```
how(plots = Plots(strata = study_plot, type = "none"),
blocks = study_area)
```
| Permutation design for repeated measures via how() in RDA | CC BY-SA 4.0 | null | 2023-03-12T08:49:02.623 | 2023-03-17T03:14:03.543 | 2023-03-17T03:14:03.543 | 11887 | 367725 | [
"repeated-measures",
"experiment-design",
"permutation-test",
"vegan",
"redundancy-analysis"
] |
609177 | 1 | null | null | 0 | 25 | Suppose we have three features $x_i \sim N(0, 1)$ for $i=1,2,3$. We then use Bayesian linear regression with interpolant $f(x, w) = wX$, such that we model $y$ as $N(f(x, w), \beta)$, i.e., with a Gaussian likelihood. Then we set a zero-mean isotropic Gaussian prior on the parameters $w$ such that this prior is infinitely broad. We know that the posterior distribution in this case will be Gaussian, and so will the predictive distribution.
My question is: if we consider two models, one with $x_1$ and $x_2$, and another with $x_1$, $x_2$ and $x_3$, and we generate the predictive distribution for a single test point for each, can we say anything about the covariance between these two predictive distributions? What if $x_1$, $x_2$ and $x_3$ are known to be independent?
| Covariance between two posterior predictive distributions? | CC BY-SA 4.0 | null | 2023-03-12T09:41:01.750 | 2023-03-12T09:41:01.750 | null | null | 371362 | [
"regression",
"machine-learning",
"bayesian",
"predictive-models",
"posterior"
] |
609178 | 2 | null | 609169 | 5 | null | If your additional features are simply not highly predictive, then this can certainly happen. Not every additional predictor or more complex model necessarily improves accuracy. You may find this helpful: [How to know that your machine learning problem is hopeless?](https://stats.stackexchange.com/q/222179/1352) Also, take a look at [the bias-variance tradeoff](https://stats.stackexchange.com/a/237850/1352).
In addition, it may of course be that the precise hyperparameter you need for your focal model to outperform the baseline varies, too. A learning rate of 0.01 may mean that the focal model is better than the baseline on your particular test set. On another test set, the optimal learning rate may well be different. I would suggest you do some cross-validation to get an idea of how variable the optimal hyperparameter is - and to see how confident you can be that a hyperparameter you set to its optimal value in training and testing continues to perform well in production on yet newer data.
| null | CC BY-SA 4.0 | null | 2023-03-12T09:42:46.790 | 2023-03-12T09:42:46.790 | null | null | 1352 | null |
609179 | 2 | null | 609169 | 2 | null | Why would you expect the model to beat the baseline with any hyperparameters? One can easily imagine coming up with absurd hyperparameters for a model that could lead to arbitrarily bad results.
>
For example, I set up learning rate as 0.01, the compared model wins, but when I set up learning rate as 0.005, the baseline wins.
The learning rate is closely related to batch size and the number of epochs needed for training. When changing the learning rate, did you alter other parameters? Did you try training the model say 20x longer?
| null | CC BY-SA 4.0 | null | 2023-03-12T09:51:57.047 | 2023-03-12T09:51:57.047 | null | null | 35989 | null |
609180 | 2 | null | 609166 | 0 | null | I am not exactly sure I fully understand the question but I suspect it might come down to understanding what the likelihood function is measuring exactly.
Let $X$ be an (absolutely) continuous random variable and let $f_X(\cdot)$ denote its PDF, also known as the likelihood function. The value of $f_X(a)$ is not equal to $P(X = a)$; indeed, since $X$ is continuous, $P(X=a)=0$ for every real number $a$.
So then what is the interpretation of $f_X(a)$? It denotes the probability of landing inside an "infinitesimal neighborhood of $a$". To make this more precise, let $\varepsilon > 0$, then we can ask $P(|X-a|\leq \varepsilon)$. We thicken the point $a$ by a width of $2\varepsilon$. Then $f_X(a)$ is the limit of $\frac{P(|X-a|\leq \varepsilon)}{2\varepsilon}$ as we shrink $\varepsilon$.
For instance, if $f_X(0) = 2$, this means that if we drew a tiny neighborhood of thickness $\ell$, then the probability of landing inside that neighborhood is approximately equal to $2\ell$. This explains why likelihood can exceed the value of $1$ whereas the probability does not.
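To illustrate numerically (using SciPy's normal distribution): a density value can exceed $1$, while the probability of a small neighborhood, divided by its width, approaches that density:

```python
import numpy as np
from scipy.stats import norm

# the density of N(0, 0.1) at 0 exceeds 1 ...
f0 = norm.pdf(0, loc=0, scale=0.1)
print(f0)  # about 3.989

# ... yet P(|X| <= eps) / (2 * eps) approaches f0 as eps shrinks
eps = 1e-4
prob = norm.cdf(eps, scale=0.1) - norm.cdf(-eps, scale=0.1)
print(prob / (2 * eps))  # about 3.989 as well
```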
| null | CC BY-SA 4.0 | null | 2023-03-12T09:56:45.080 | 2023-03-12T09:56:45.080 | null | null | 68480 | null |
609181 | 1 | null | null | 0 | 48 | Consider the following latent variable model for a potential [Elo rating with additional player type](https://stats.stackexchange.com/questions/557688/elo-rating-with-additional-player-type).
Each player has a performance level $r$ and a playing type $\theta$. Assume that a game between two players can be modelled as follows
- Compute the difference in playing styles $$d_1 = \frac{\theta_1 - \theta_2}{2 \pi} \text{ modulo 1} \\d_2 = \frac{\theta_2 - \theta_1}{2 \pi} \text{ modulo 1}\\$$
- Draw two random variables from a normal distribution $$X_1 = N(r_1+f(d_1),1) \\ X_2 = N(r_2+f(d_2),1)$$ where in the example here $f(d) = 300(1-d)^6d^2$
- The player with the largest number wins.
The $r$ plays a role of a traditional ELO rating. The $\theta$ has an adjusting effect on the probability of outcomes. When an opponent is $1/3$ clockwise from you, then you have an advantage, when an opponent is $1/3$ counter-clockwise from you, then you have a disadvantage.
Below is an example of a simulation for 25 players, playing a round-robin tournament where each player sees another player 20 times (each player has 480 games).
[](https://i.stack.imgur.com/goIUU.png)
In the image you can see the effect of playing style. The scissors at the top with a score of 228 has an above-average performance $r$, but scores lower than several rocks at the bottom with lower $r$. This is because the rocks encounter many scissors, against which they have an advantage, whereas the scissors encounter relatively few papers.
---
Question: Say we have a matrix of results from players' games, can we estimate the underlying latent variables $r$ and $\theta$? (Where the $\theta$ has of course a free degree of freedom due to symmetry and we can only recover the relative $\theta$ values)
R code for generating example data:
```
set.seed(1)
### generate some players
### with random statistics
n = 25
r = rgamma(n,100,10)
angle = runif(n,0,360)
### function for outcome of a single game
game = function(x1, t1, x2, t2) {
  angle1 = ((t1 - t2) %% 360) / 360
  angle2 = ((t2 - t1) %% 360) / 360
  performance1 = rnorm(1, x1) + 300 * (1 - angle1)^6 * angle1^2
  performance2 = rnorm(1, x2) + 300 * (1 - angle2)^6 * angle2^2
  if (performance1 >= performance2) {
    return(1)
  } else {
    return(2)
  }
}
### play some games
M = matrix(rep(0, n * n), n)
for (i in 1:n) {
  for (j in 1:n) {
    if (i != j) {
      for (k in 1:10) {
        outcome = game(r[i], angle[i], r[j], angle[j])
        if (outcome == 1) {
          M[i, j] = M[i, j] + 1
        } else {
          M[j, i] = M[j, i] + 1
        }
      }
    }
  }
}
M[1:5,1:5]
```
The first 5 rows and columns of the resulting matrix will look like
```
[,1] [,2] [,3] [,4] [,5]
[1,] 0 15 0 19 10
[2,] 5 0 0 17 13
[3,] 20 20 0 19 20
[4,] 1 3 1 0 5
[5,] 10 7 0 15 0
```
| A multidimensional ELO rating with rock paper scissors playing styles: how to estimate | CC BY-SA 4.0 | null | 2023-03-12T10:04:30.733 | 2023-03-12T11:46:23.443 | 2023-03-12T11:46:23.443 | 164061 | 164061 | [
"estimation",
"algorithms",
"games",
"elo"
] |
609183 | 2 | null | 609166 | 12 | null | The likelihood function parametrized by a parameter $\theta$ in statistics is defined as
$$
\mathcal{L}(\theta \mid x) = f_{\theta}(x)
$$
where $f_{\theta}$ is the probability density or mass function with parameter $\theta$ and $x$ is the data.
If for some data $x$ you evaluate the function for the parameter $\theta$, we call the result the “likelihood” of $\theta$. There's no other “likelihood”, because this is how we define it.
As a code example, a Gaussian likelihood could be implemented in Python as below.
```
import numpy as np
from scipy.stats import norm

X = np.array([4.1, 5.7, 5.2])  # the observed data, fixed before the likelihood is defined

def likelihood(loc, scale):
    return np.prod(norm.pdf(X, loc=loc, scale=scale))
```
where [norm.pdf](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html) is the Gaussian probability density function and `np.prod` calculates the [product of the probability density values](https://stats.stackexchange.com/questions/211848/likelihood-why-multiply) returned for each value in the array `X`. Notice that `X` is not an argument of the function: it is fixed, and the only arguments of the likelihood function are the parameters (here `loc` and `scale`). What the function returns is the likelihood for the parameters passed as arguments. If you maximize this function, the result would be a [maximum likelihood](https://stats.stackexchange.com/questions/112451/maximum-likelihood-estimation-mle-in-layman-terms/137081#137081) estimate for the parameters.
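To make that last point concrete, here is a hedged sketch of maximizing the likelihood numerically (the data and starting values below are made up for illustration); minimizing the negative log-likelihood is used instead of maximizing the raw product, purely for numerical stability:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
X = rng.normal(loc=2.0, scale=1.5, size=1000)  # fixed observed data

def neg_log_likelihood(params):
    loc, scale = params
    if scale <= 0:
        return np.inf  # keep the search in the valid region
    # sum of log-densities instead of a product of densities, for stability
    return -np.sum(norm.logpdf(X, loc=loc, scale=scale))

res = minimize(neg_log_likelihood, x0=[0.0, 1.0], method="Nelder-Mead")
loc_hat, scale_hat = res.x  # close to the sample mean and (biased) sample sd
```

For the Gaussian case the maximizers have closed forms (the sample mean and the $1/n$ standard deviation), so the numerical result can be checked against them.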
>
Could it have been better named?
Maybe, but it wasn't. The same applies to all the other names in mathematics, or names in general. For example, “isomorphism” or “monoid” may also not be great names, but this is what we call them.
| null | CC BY-SA 4.0 | null | 2023-03-12T10:26:03.930 | 2023-03-14T10:00:14.493 | 2023-03-14T10:00:14.493 | 35989 | 35989 | null |
609184 | 1 | 609204 | null | 0 | 27 | As far as I remember, it is possible to convert a multinomial logit model into a binary logit model using restrictions on parameters.
For example, suppose we have three alternatives, say A, B, and C.
Then, the choice probability in a multinomial choice model is
$$\Pr[J=A|W]=\frac{1}{1+\exp(\alpha_b+W'\beta_b)+\exp(\alpha_c+W'\beta_c)}, \\[5pt]
\Pr[J=B|W]=\frac{\exp(\alpha_b+W'\beta_b)}{1+\exp(\alpha_b+W'\beta_b)+\exp(\alpha_c+W'\beta_c)}, \\[5pt]
\Pr[J=C|W]=\frac{\exp(\alpha_c+W'\beta_c)}{1+\exp(\alpha_b+W'\beta_b)+\exp(\alpha_c+W'\beta_c)}.$$
I think, if we impose the restriction, $\beta_b=\beta_c$, and define a new alternative combining alternatives B and C, this multinomial logit model becomes actually a binary logit model.
However, I cannot remember the details (i.e. how to derive the binary logit model).
Does anyone remember the details of this derivation?
| Converting a multinomial logit model into a binary logit model | CC BY-SA 4.0 | null | 2023-03-12T10:28:45.797 | 2023-03-12T17:51:24.373 | null | null | 375224 | [
"logistic",
"multinomial-logit"
] |
609186 | 2 | null | 609166 | 10 | null | There have been numerous responses including some to your very posts earlier and the present one too.
It should be reiterated that $\mathcal L(\theta\mid \mathbf x)$, or $\ell_\mathbf x(\theta)$ (to emphasize what the argument here is), has the same functional form as the corresponding density function of the distribution, but in the former what varies is the value of $\theta$ over the parameter space, given the observed sample value $\mathbf x.$ As has been noted earlier too, $\ell_\mathbf x(\theta)$, as a function of $\theta$, doesn't have to be a legitimate density function.
It returns a likelihood as codified in the Likelihood Principle, which basically says that two likelihood functions carry the same information about $\theta$ if they are proportional to one another. Stated more formally, if $E:=(\mathbf X, \theta,\{f_\theta(\mathbf x) \})$ is the experiment, then any conclusion about $\theta$ (measured by the evidence function $\textrm{EV}(E,\mathbf x)$) should depend on $E,~\mathbf x$ only via $\ell_\mathbf x(\theta).$ So, if $\ell_\mathbf x(\theta)=C(\mathbf x, \mathbf y)\, \ell_\mathbf y(\theta), ~\forall\theta\in\Theta,$ for two sample values $\mathbf x, \mathbf y$ (where $C(\mathbf x, \mathbf y)$ is independent of $\theta$), then the inference on $\theta$ based on either sample observation is equivalent.
Thus likelihood functions enable us to assess the "plausibility" of $\theta:$ if $\ell_\mathbf x(\theta_2) =c\,\ell_\mathbf x(\theta_1)$ for some $c>0,$ then $\theta_2$ is $c$ times as plausible as $\theta_1.$ By the likelihood principle, $\ell_\mathbf y(\theta_2) =c\,\ell_\mathbf y(\theta_1)$ for the sample value $\mathbf y$ as well, so $\theta_2$ remains $c$ times as plausible as $\theta_1$ irrespective of whether $\mathbf x$ or $\mathbf y$ is the realized observation of the sample.
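A classic numerical illustration of proportional likelihoods (a hedged sketch with made-up counts): observing 3 successes in a fixed run of 10 Bernoulli trials gives a binomial likelihood, while sampling until the 3rd success and seeing 7 failures along the way gives a negative binomial likelihood. The two are proportional in $\theta$, so by the likelihood principle they support identical inferences about $\theta$:

```python
import numpy as np
from scipy.stats import binom, nbinom

theta = np.linspace(0.05, 0.95, 19)  # grid of success probabilities

# Binomial experiment: 3 successes out of a fixed n = 10 trials
lik_binomial = binom.pmf(3, n=10, p=theta)

# Negative binomial experiment: 7 failures observed before the 3rd success
lik_negbinomial = nbinom.pmf(7, 3, theta)

ratio = lik_binomial / lik_negbinomial
# The ratio C(10,3)/C(9,2) = 120/36 = 10/3 does not depend on theta
```

The constant ratio is exactly the $C(\mathbf x, \mathbf y)$ of the statement above.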
---
Since the confusion still lingers in light of the likelihoods and priors, let me quote verbatim from $\rm [II]$ (to articulate the relationship of Bayes' theorem and likelihood function; emphasis mine):
>
[...] given the data $\mathbf y, ~p(\mathbf y\mid\boldsymbol\theta) $ in $$p(\boldsymbol\theta\mid\mathbf y) =cp(\mathbf y\mid\boldsymbol\theta) p(\boldsymbol\theta)$$ may be regarded as a function not of $\bf y$ but of $\boldsymbol\theta.$ When so regarded, following Fisher ($1922$), it is called the likelihood function of $\boldsymbol\theta$ for given $\mathbf y$ and can be written $l(\boldsymbol\theta\mid\mathbf y). $ We can thus write Bayes' formula as $$ p(\boldsymbol\theta\mid\mathbf y) =l(\boldsymbol\theta\mid\mathbf y)p(\boldsymbol\theta).$$ In other words, Bayes' theorem tells us that the probability distribution for $\boldsymbol \theta$ posterior to the data $\bf y$ is proportional to the product of the distribution for $\boldsymbol\theta$ prior to the data and the likelihood for $\boldsymbol\theta$ given $\mathbf y. $
---
## Reference:
$\rm [I]$ Statistical Inference, George Casella, Roger L. Berger, Wadsworth, $2002, $ sec. $6.3.1, $ pp. $290-291, ~293-294.$
$\rm [II]$ Bayesian Inference in Statistical Analysis, George E. P. Box, George C. Tiao, Wiley Classics, $1992, $ sec. $1.2.1, $ pp. $10-11.$
| null | CC BY-SA 4.0 | null | 2023-03-12T10:55:30.783 | 2023-03-14T07:37:22.637 | 2023-03-14T07:37:22.637 | 362671 | 362671 | null |
609187 | 2 | null | 609171 | 5 | null | One way to contrive counterexamples is to let $X$ follow some (non-degenerate) distribution symmetric about $0$ and choose $c \in \mathbb R_{>0}$ s.t. $U \mathrel{:=} X^b - c$ has expectation $0$ for an even number $b \neq 0$.
Then, symmetry of the distribution of $X$ about $0$ [implies](https://stats.stackexchange.com/a/582292/136579) $\mathrm{Cov}(X,U) = 0$, and we have $\mathbb E(U) = 0$ and $\mathbb E(U \,|\, X) = X^b - c$ by construction.
An easy example that comes to my mind is $X \sim \mathcal N(0,1)$ and $U \mathrel{:=} X^2 - 1$.
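A quick Monte Carlo sanity check of that example (a sketch; the sample size is arbitrary) confirms that the sample versions of $\mathbb E(U)$ and $\mathrm{Cov}(X,U)$ are near zero, even though $\mathbb E(U \mid X) = X^2 - 1$ is obviously not identically zero:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.standard_normal(1_000_000)
U = X**2 - 1  # mean zero by construction, since E[X^2] = 1

mean_U = U.mean()            # approximately 0
cov_XU = np.cov(X, U)[0, 1]  # approximately 0, because E[X^3] = 0 by symmetry
```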
| null | CC BY-SA 4.0 | null | 2023-03-12T11:59:38.373 | 2023-03-12T12:04:42.687 | 2023-03-12T12:04:42.687 | 136579 | 136579 | null |
609188 | 1 | 609211 | null | 6 | 214 | A proportional hazards model $\lambda_i(t|x_i) = \lambda_0(t)\exp\{x_i^T \beta\}$ can be approximated using a piece-wise constant hazard, where the duration is partitioned into intervals and the baseline hazard is constant within each interval. In [these](https://grodri.github.io/glms/notes/c7s4) notes, for example, it is shown that the death indicators $d_{ij}$ for individual $i$ and interval $j$, can be modeled as independent Poisson observations with means $\mu_{ij} = t_{ij} \lambda_{ij}$, with obvious notations.
While I can follow the proof, this still seems wrong to me, as a Poisson distribution places probability mass on values > 1, and yet $d_{ij}$ can only be 0 or 1. Intuitively, how can you model a random variable that only takes values 0 and 1 with a Poisson distribution? I understand that for small means, the Poisson [does converge](https://stats.stackexchange.com/a/394177/103007) to a Bernoulli distribution, but this approximation does not appear anywhere in the proof, and there are cases where the rate of death may be quite high.
| Piece-wise proportional hazards model as equivalent Poisson model | CC BY-SA 4.0 | null | 2023-03-12T12:24:54.080 | 2023-03-12T19:16:33.293 | null | null | 103007 | [
"survival",
"poisson-distribution",
"cox-model",
"poisson-regression"
] |
609189 | 1 | 609192 | null | 1 | 82 | I have 5 years 15 min interval of electricity demand time series with a datetime index and a target variable. Don't have any other data to use. I'm curious about your experiences. In general how far is it possible to accurately predict the 15 min electricity demand? Our client wants 1 year prediction, but as far as I know it is quite impossible. What is your experience?
| Time-series prediction of 15 min interval data | CC BY-SA 4.0 | null | 2023-03-12T14:10:46.507 | 2023-03-12T14:59:09.317 | 2023-03-12T14:59:09.317 | 1352 | 383027 | [
"time-series",
"forecasting",
"multiple-seasonalities"
] |
609190 | 1 | null | null | 0 | 17 | I have a problem I've been going over and over for weeks and I'm not sure what statistical test to use for my analysis. I'm planning to execute the analysis in R, so any demonstration of the relevant script will be highly appreciated.
I am conducting macroecological research on the expansion of an invasive species across a newly invaded area, using a dataset that describes the annual number of observations (for the course of 20 years) across different localities of the invaded region. The data is count data and not normally distributed (typical to species invasion chronology).
I am interested in comparing each pair of years within my dataset and checking whether the total number of observations (irrespective of location) was significantly different. the twist is- the dataset was compiled by multiple teams across different countries, using different surveying techniques and at uneven intervals. Some teams also used social media and public help to collect valuable data, yet in an uncalibrated manner.
Given these factors, I need to find a statistical test suitable for uneven sampling efforts that does not assume normality of the data and allows corrections for multiple comparisons.
Thank you
| What is the appropriate statistical test for count data with uneven groups and uneven sampling effort? | CC BY-SA 4.0 | null | 2023-03-12T14:48:40.903 | 2023-03-12T14:48:40.903 | null | null | 383028 | [
"r",
"count-data",
"ecology"
] |
609191 | 2 | null | 609112 | 0 | null | Okay I think $\widetilde{\mathbf{X}}$ can be interpreted as the likelihood function $L(\mu)$ after a partial-application of the plug-in principle to match the first moment of the distribution.
Here is the argument. We're starting from the probability model in CASE 1:
$$
\overline{\mathbf{X}}
= p(\overline{\mathbf{x}} |\mu,\sigma)
= Cg\big( (\overline{\mathbf{x}}-\mu)^2 \big)^e + \mu
= \widehat{\mathbf{se}}\cdot \mathcal{T}(\nu) + \mu
$$
where I have hidden the details of Student's t-distribution behind the constant $C$, function $g$, and exponent $e$.
We know $\mathbb{E}(\overline{\mathbf{X}}) = \mu$, and we have observed $\mathbf{Mean}(\mathbf{x}) = \overline{\mathbf{x}}$,
which is an estimate of the population mean $\mu$.
We want to update our model to match the observation $\overline{\mathbf{x}}$. Specifically we want to choose a model whose first moment matches the observed data. Currently $\mathbb{E}(\overline{\mathbf{X}}) = \mu$ but we want a new model such that
$\mathbb{E}(\widetilde{\mathbf{X}}) = \overline{\mathbf{x}}$,
which we can easily do if we define
$$
\widetilde{\mathbf{X}} = \overline{\mathbf{X}} \;\;\; + \overline{\mathbf{x}} -\mu,
$$
where we add the term $\overline{\mathbf{x}}-\mu$ to match the first moment of the model to the observed data. This horizontal shift doesn't affect the other part of the formula, since that part has zero mean.
We thus obtain the model:
$$
\widetilde{\mathbf{X}}
= p(\widetilde{\mathbf{x}} |\mu,\sigma)
= Cg\big( (\overline{\mathbf{x}}-\mu)^2 \big)^e + \mu \;\; + \overline{\mathbf{x}} - \mu
= \widehat{\mathbf{se}}\cdot \mathcal{T}(\nu) + \overline{\mathbf{x}}
$$
Let us now consider the function that defines the probability density of the random variable $\widetilde{\mathbf{X}}$ as a likelihood, meaning $\overline{\mathbf{x}}$ is a fixed quantity, and $\mu$ is the variable:
$$
L(\mu) = p(\widetilde{\mathbf{x}} |\mu,\sigma) = \widehat{\mathbf{se}}\cdot \mathcal{T}(\nu) + \overline{\mathbf{x}}.
$$
The fact that $L(\mu)$ happens to be well normalized is an accidental feature due to the symmetry of $g$; since we're in a frequentist context, we are not interpreting $L(\mu)$ as a pdf.
We can therefore understand the construction of the 90% confidence interval for the population mean, $[\bar{\mathbf{x}}+t_\ell \cdot \widehat{\mathbf{se}}, \bar{\mathbf{x}}+t_u \cdot \widehat{\mathbf{se}} ]$ as a procedure for selecting a range of $\mu$s that contains 0.9 proportion of the area under the likelihood function $L(\mu)$.
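That final construction can be sketched numerically (a hedged example with simulated data; sample size and parameters are arbitrary), picking the $t$ quantiles that bracket the central 90% of the area:

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(3)
x = rng.normal(loc=5, scale=2, size=30)  # simulated sample
n = len(x)

se_hat = x.std(ddof=1) / np.sqrt(n)       # estimated standard error of the mean
t_l, t_u = t.ppf([0.05, 0.95], df=n - 1)  # quantiles bounding the central 90%
ci = (x.mean() + t_l * se_hat, x.mean() + t_u * se_hat)
```

By symmetry of the $t$ distribution, $t_\ell = -t_u$, so the interval is centered on $\bar{\mathbf{x}}$.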
| null | CC BY-SA 4.0 | null | 2023-03-12T14:51:17.133 | 2023-03-12T14:51:17.133 | null | null | 62481 | null |
609192 | 2 | null | 609189 | 0 | null | It is absolutely possible to forecast out for one year, or for ten years, or for a hundred years. What is not possible is to achieve any desired accuracy. Achievable accuracy in general deteriorates with forecast horizon. If your client's accuracy expectations are realistic, everyone can be happy. If not, well, reality has a way of stubbornly resisting people's expectations.
One key aspect of forecasting electricity on this temporal granularity is the [multiple-seasonalities](/questions/tagged/multiple-seasonalities) involved. You have intra-daily patterns, but these differ between weekdays, with weekends being quite different from working days. There have been specialized methods proposed to deal with that that typically improve in terms of accuracy on simpler methods - but at great computational cost, which becomes especially relevant if you forecast not a single time series, but thousands.
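One common, comparatively cheap way to hand such multiple seasonalities to a model is to encode each cycle with Fourier terms used as regressors. The sketch below is illustrative rather than a complete forecasting pipeline; the periods 96 and 672 are the daily and weekly cycle lengths implied by 15-minute data:

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2023-01-01", periods=4 * 24 * 14, freq="15min")  # two weeks
t = np.arange(len(idx))

def fourier_terms(t, period, K):
    """First K sine/cosine pairs for one seasonal period."""
    return np.column_stack(
        [f(2 * np.pi * k * t / period)
         for k in range(1, K + 1) for f in (np.sin, np.cos)]
    )

# Daily (period 96) and weekly (period 672) cycles as regressor columns
X = np.hstack([fourier_terms(t, 96, K=3), fourier_terms(t, 672, K=2)])
```

The matrix `X` can then be fed as exogenous regressors to an ARIMA model, a GLM, or a gradient boosting model alongside any causal drivers.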
Electricity demand typically is driven by causal drivers beyond these multiple seasonalities. If you know that the Super Bowl is coming up, you know that during the quarter breaks, electricity consumption will peak, because roughly 100 million Americans simultaneously get another beer from the fridge, causing the compressor to start up to cool the fridge back down. Take a look at my answer to [How to know that your machine learning problem is hopeless?](https://stats.stackexchange.com/q/222179/1352), especially the water consumption time series - similar patterns happen to electricity. It's important to understand these drivers. And of course you can predict the temperature (which drives electricity consumption for heating and air conditioning) more easily for tomorrow than for one year from now.
You may want to look at dedicated energy forecasting competitions to get an idea of what tools are effective, like the [GEFcoms](https://en.wikipedia.org/wiki/Global_Energy_Forecasting_Competition).
(Blatant self-promotion: I plan on presenting an analysis on multiple seasonalities forecasting at this year's [ISF](https://isf.forecasters.org/), and one of my datasets is actually about electricity demand. [Our workshop on "Forecasting to meet demand"](https://isf.forecasters.org/wp-content/uploads/Forecasting-to-meet-demand_ISF23.pdf) is not explicitly geared towards electricity forecasting, but may still be relevant.)
| null | CC BY-SA 4.0 | null | 2023-03-12T14:58:52.797 | 2023-03-12T14:58:52.797 | null | null | 1352 | null |
609196 | 1 | null | null | 0 | 22 | I‘m conducting ROC analyses in order to assess the diagnostic accuracy (AUC, sensitivity and specificity for certain cut-offs) of multiple index tests (ordinal scaled measures). The goal is to compare the performance of the index tests, possibly with inferential stastics. In this field, there is no accepted gold standard. For now, binary expert judgements serve as a fuzzy reference standard against which the tests shall be compared to. These judgements can be viewed as indicators of a binary latent construct. One can then assume that these judgements contain some truth and some measurement error. To quantify this, we had a small subsample assessed by multiple experts, which allowed us to estimate the interrater reliability using Fleiss‘ Kappa. It’s not feasible to assess the whole sample by multiple experts, so we can’t conduct LCA or use similar approaches of composite reference standards.
The problem of fuzzy reference standards isn’t new. From what I understood when reading Phelps & Hutson (1995), fuzzy reference standards lead to underestimates of the AUC if the errors of the reference standard are uncorrelated with the index tests, but can lead to overestimates in case of correlations. I found some reviews that deal with fuzzy gold standards: Umemneku Chikere et al. (2019) and Walsh (2018). However, from what I have seen none of the proposed solutions that I found cover the topic from a reliability standpoint.
Do you know of any way to correct our estimates of AUC, sensitivity, specificity, PPV and NPV given the estimated reliability of the fuzzy reference standard (or at least estimate the expected variability by deriving confidence intervals)? I’m assuming the reference standards‘ errors to be uncorrelated with the index tests.
I’m especially interested in citable references and research papers. Note that I don’t have a firm background in mathematics and statistics, so I’m grateful if you point out any flaws in the argumentation.
References:
- Phelps CE, Hutson A. Estimating Diagnostic Test Accuracy Using a “Fuzzy Gold Standard.” Medical Decision Making. 1995;15(1):44-57. doi:10.1177/0272989X9501500108
- Walsh T. Fuzzy gold standards: Approaches to handling an imperfect reference standard. J Dent. 2018 Jul;74 Suppl 1:S47-S49. doi: 10.1016/j.jdent.2018.04.022. PMID: 29929589.
- Umemneku Chikere CM, Wilson K, Graziadio S, Vale L, Allen AJ (2019) Diagnostic test evaluation methodology: A systematic review of methods employed to evaluate diagnostic tests in the absence of gold standard – An update. PloS ONE 14(10): e0223832. https://doi.org/10.1371/journal.pone.0223832
| ROC analysis with a fuzzy reference standard with estimates of its reliability | CC BY-SA 4.0 | null | 2023-03-12T16:01:10.570 | 2023-03-12T16:01:10.570 | null | null | 383016 | [
"roc",
"auc",
"reliability",
"measurement-error",
"sensitivity-specificity"
] |
609199 | 2 | null | 609172 | 0 | null | Does temperature have a unit root? Probably not. Why would you difference a variable that does not have a unit root? Doing that introduces a unit-root moving-average component. Instead, model the seasonality in temperature deterministically using Fourier terms or dummy variables. You can include them in `auto.arima` via the argument `xreg`. For a more detailed explanation and for R code, see ["Forecasting with long seasonal periods"](https://robjhyndman.com/hyndsight/longseasonality/) by Rob J. Hyndman.
| null | CC BY-SA 4.0 | null | 2023-03-12T16:56:13.980 | 2023-03-12T17:04:06.180 | 2023-03-12T17:04:06.180 | 53690 | 53690 | null |
609200 | 2 | null | 596638 | 3 | null | I don't think the two programs are answering the same question.
In Prism, you specified that each row (data for each day) is a matched set, so a repeated measures analysis is performed. Because you specified a nonparametric test, it does the Friedman test with Dunn followup comparisons.
I am not very familiar with those R commands, but it doesn't look like you specified pairing or repeated measures. I think your R analysis is doing the Kruskal-Wallis nonparametric test (without pairing) with Dunn followup comparisons.
"Dunn's" test just means it corrects for multiple comparisons using what is often called the Bonferroni method (but it is more appropriate to attribute it to Dunn). Dunn's adjustment can be done for many kinds of analyses, including repeated measures (paired) or not.
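The Dunn/Bonferroni adjustment itself is simple: multiply each raw p-value by the number of comparisons and cap at 1. A minimal sketch with made-up p-values:

```python
import numpy as np

p_raw = np.array([0.004, 0.020, 0.310])      # hypothetical unadjusted p-values
p_adj = np.minimum(p_raw * len(p_raw), 1.0)  # Dunn/Bonferroni correction
```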
| null | CC BY-SA 4.0 | null | 2023-03-12T17:18:51.143 | 2023-03-12T17:18:51.143 | null | null | 25 | null |
609201 | 2 | null | 609188 | 4 | null | One way to think about this is that, within each time interval $j$, you are modeling the Poisson parameter $\mu_{ij}$ based on the time $t_{ij}$ to the first event within interval $j$ for individual $i$, where interval $j$ starts at time $\tau_{j-1}$ and ends at $\tau_j$.
Similarly to how individuals with censored event times don't contribute information beyond the last follow-up time, individual $i$ provides no information after the observed event time $(\tau_{j-1}+t_{ij})$--even if more than one event might be possible in principle. Put slightly differently, the derivation on that page shows that the likelihood of the data you have is equivalent to that of a Poisson model, except for a parameter-independent constant.
The extension of this approach to multiple independent individuals having the same baseline hazard then leads to a Poisson model for the total number of deaths during a shared time interval.
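The likelihood equivalence can also be checked numerically (a sketch with an arbitrary exposure time and event indicator): for $d \in \{0,1\}$, the Poisson log-likelihood with mean $\mu = t\lambda$ differs from the exponential-survival log-likelihood by the constant $d\log t$, so both are maximized at the same $\lambda$:

```python
import numpy as np

lam = np.linspace(0.1, 5.0, 50)  # candidate hazard rates
t_exp, d = 2.3, 1                # exposure time and 0/1 event indicator

# Exponential survival contribution: d*log(lam) - lam*t
ll_survival = d * np.log(lam) - lam * t_exp

# Poisson contribution with mean mu = t*lam (log d! = 0 when d is 0 or 1)
mu = t_exp * lam
ll_poisson = d * np.log(mu) - mu

diff = ll_poisson - ll_survival  # constant in lam: d * log(t_exp)
```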
| null | CC BY-SA 4.0 | null | 2023-03-12T17:31:31.757 | 2023-03-12T17:31:31.757 | null | null | 28500 | null |
609203 | 2 | null | 606483 | 7 | null | The ROC curve is the curve
$$f(t) = (\mathrm{FPR}(t), \mathrm{TPR}(t)),$$
for a threshold $t \in \mathbb R$.
You are proposing a new curve $g(t) = (\mathrm{FNR}(t), \mathrm{TNR}(t))$, but remember that $\mathrm{TPR} = 1 - \mathrm{FNR}$ and $\mathrm{FPR} = 1 - \mathrm{TNR}$, so that:
$$g(t) = (1-\mathrm{TPR}(t), 1-\mathrm{FPR}(t))$$
is simply a mirrored version of $f(t)$.
| null | CC BY-SA 4.0 | null | 2023-03-12T17:48:14.543 | 2023-03-12T18:11:54.933 | 2023-03-12T18:11:54.933 | 296197 | 60613 | null |
609204 | 2 | null | 609184 | 1 | null | Let
$$\alpha = -\log(e^{\alpha_b}+e^{\alpha_c})$$
and let
$$\beta = -\beta_b = -\beta_c.$$
Then we have
$$\exp(\alpha_b + W'\beta_b) + \exp(\alpha_c+W'\beta_c) =
(e^{\alpha_b}+e^{\alpha_c})\exp(-W'\beta) = \exp(-(\alpha+W'\beta))$$
giving
$$\text{Pr}[J=A\mid W] = \frac{1}{1+\exp(-(\alpha + W'\beta))}.$$
The other outcome is
$$\text{Pr}[J\in\{B,C\}\mid W] =
\frac{\exp(-(\alpha+W'\beta))}{1+\exp(-(\alpha + W'\beta))}.$$
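A quick numerical check of this reduction (all parameter values below are arbitrary, and a scalar covariate is used for simplicity):

```python
import numpy as np

alpha_b, alpha_c = 0.3, -0.7  # made-up intercepts
beta = 0.8                    # common slope, with beta_b = beta_c = -beta
w = 1.5                       # scalar covariate

# Multinomial form under the restriction beta_b = beta_c = -beta
denom = 1 + np.exp(alpha_b - w * beta) + np.exp(alpha_c - w * beta)
p_A_multinomial = 1 / denom

# Collapsed binary form with alpha = -log(exp(alpha_b) + exp(alpha_c))
alpha = -np.log(np.exp(alpha_b) + np.exp(alpha_c))
p_A_binary = 1 / (1 + np.exp(-(alpha + w * beta)))
```

The two probabilities agree, confirming that A-versus-{B, C} reduces to a binary logit.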
| null | CC BY-SA 4.0 | null | 2023-03-12T17:51:24.373 | 2023-03-12T17:51:24.373 | null | null | 383034 | null |
609205 | 2 | null | 606483 | 5 | null | Since there are just two categories, and the TNR and FNR are so related to the TPR and FPR, this sounds like a graph that would not contain new information. Let's see what happens in a simulation.
```
library(pROC)
library(ggplot2)
set.seed(2023)
N <- 1000
p <- rbeta(N, 1, 1) # Pick some event probabilities
y <- rbinom(N, 1, p) # Simulate events with those probabilities
r <- pROC::roc(y, p)
plot(r)
points(1 - r$specificities, 1 - r$sensitivities)
d0 <- data.frame(
x = 1 - r$specificities,
y = r$sensitivities,
Curve = "Normal ROC"
)
d1 <- data.frame(
x = 1 - (1 - r$specificities), # so just specificity
y = 1 - r$sensitivities,
Curve = "Negatives"
)
d <- rbind(d0, d1)
ggplot(d, aes(x = x, y = y, col = Curve)) +
geom_line() +
geom_abline(slope = 1, intercept = 0)
```
[](https://i.stack.imgur.com/QUKA4.png)
Perhaps you get something out of the pink plot that you do not get out of the usual ROC curve in blue. If you do (I do not), the formula for plotting the pink curve is straightforward and has as straightforward of an interpretation as the usual ROC curve (maybe explain to clients that you want such curve to tend toward the lower right instead of the upper left as evidence of good separation between the two groups).
The pink plot about negative predictions (as opposed to the usual ROC curve that deals with positive predictions) might be a useful visualization for a particular project. However, we are not majorly losing out on information by considering the usual ROC curve that deals with positive predictions. Each plot is just a mirror-image of the other, reflected about the line $y=x$.
| null | CC BY-SA 4.0 | null | 2023-03-12T18:00:42.633 | 2023-03-12T18:00:42.633 | null | null | 247274 | null |
609206 | 2 | null | 569062 | 2 | null | If you have no true positives or false positives, it means that your model only predicts negatives. Most machine learning models (such as logistic regressions and neural networks) work by making predictions about the probability of class membership.$^{\dagger}$ To get hard classifications, that predicted probability is compared to a threshold, typically $0.5$ as a software default, where cases are considered positive if their predictions are above the threshold and negative if their predictions are below the threshold.
Overall, it seems like your model is making predictions of low probabilities of being a positive case. This need not be bad behavior by the model! It might be that positive cases are always unlikely. It might be that you want to use a different threshold, perhaps something closer to (or equal to) the prior probability of the minority class (that is, the proportion of minority class members). That way, you get alerted when there is a particularly high probability of the minority class occurring, even if that probability is fairly small. This is related to the idea that I discuss [here](https://stats.stackexchange.com/questions/608240/binary-probability-models-considering-event-probability-above-the-prior-probabi). This is particularly common when the classes are imbalanced, such as in your case where you have almost $1000$ negative cases for every positive case, [since the prior probability of minority class membership is so low](https://stats.stackexchange.com/a/583115/247274).
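As a hedged illustration of the prior-based threshold (everything below is simulated; in practice, a real model's predicted probabilities would replace `p_hat`):

```python
import numpy as np

rng = np.random.default_rng(1)
p_true = rng.beta(0.1, 10.0, size=10_000)  # rare-event probabilities, ~1% positives
y = rng.binomial(1, p_true)                # observed labels
p_hat = p_true                             # stand-in for a calibrated model's output

prior = y.mean()                           # prevalence of the positive class
flag_default = (p_hat >= 0.5).astype(int)  # default threshold: flags almost nothing
flag_prior = (p_hat >= prior).astype(int)  # flags cases with elevated risk
```

With the default 0.5 cut-off almost no cases are flagged, while thresholding at the prevalence surfaces the cases whose risk is elevated relative to the base rate.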
However, it is not clear that you should be using a threshold of any kind, since the predicted probabilities are quite useful. Hopefully, some of the below links can convince you of this.
[Cross Validated: Why is accuracy not the best measure for assessing classification models?](https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models)
[Cross Validated: Are unbalanced datasets problematic, and (how) does oversampling (purport to) help?](https://stats.stackexchange.com/questions/357466/are-unbalanced-datasets-problematic-and-how-does-oversampling-purport-to-he)
[Cross Validated: Academic reference on the drawbacks of accuracy, F1 score, sensitivity and/or specificity](https://stats.stackexchange.com/questions/603663/academic-reference-on-the-drawbacks-of-accuracy-f1-score-sensitivity-and-or-sp)
[Cross Validated: Calculating the Brier or log score from the confusion matrix, or from accuracy, sensitivity, specificity, F1 score etc](https://stats.stackexchange.com/a/603869/247274)
[Cross Validated: Upweight minority class vs. downsample+upweight majority class?](https://stats.stackexchange.com/questions/569878/upweight-minority-class-vs-downsampleupweight-majority-class/609131#609131)
[Cross Validated Meta: Profusion of threads on imbalanced data - can we merge/deem canonical any?](https://stats.meta.stackexchange.com/questions/6349/profusion-of-threads-on-imbalanced-data-can-we-merge-deem-canonical-any)
[Frank Harrell's Blog: Classification vs. Prediction](https://hbiostat.org/blog/post/classification/index.html)
[Frank Harrell's Blog: Damage Caused by Classification Accuracy and Other Discontinuous Improper Accuracy Scoring Rules](https://hbiostat.org/blog/post/class-damage/index.html)
$^{\dagger}$ [Neural networks tend to be overconfident in their predictions](https://stats.stackexchange.com/questions/532805/probability-distribution-as-output-for-my-lstm), so if a prediction of $p$ does not correspond to an event probability of $p$, then it is debatable if the predictions have a probabilistic interpretation. However, the predictions are on a continuum that can be thresholded to make hard classifications (if needed). Also, because the predicted probabilities are useful, the overconfident predictions by neural networks can be seen as problematic.
| null | CC BY-SA 4.0 | null | 2023-03-12T18:23:02.420 | 2023-03-12T18:41:57.590 | 2023-03-12T18:41:57.590 | 247274 | 247274 | null |
609207 | 1 | null | null | 0 | 13 | I am using XLSTAT tool to get the ML results. For the classification and regression trees, I am getting the results given in the image [https://i.stack.imgur.com/fa8mw.png]. I am not sure about the interpretation of the test statistic and p-value given in the table. Can I rely on them to say whether the prediction model is significant. For example, because p-value is less than 0.05, the model is statistically significant predictor? If yes, my problem would be, what about the other nodes in the model. Should I take my conclusion based on the p-value for the main node (node 1) only?
Regarding classification and regression random forests, XLSTAT provides me with variable importance (mean decrease accuracy). Based on this number, I am not sure how to decide whether an independent variable is a good predictor. For example, what if that number is negative, or between 0 and 1, etc.? Is there a rule of thumb?
| Interpretation for p-value in classification and regression trees and variable importance in classification and regression random forest | CC BY-SA 4.0 | null | 2023-03-12T18:24:41.583 | 2023-03-12T18:24:41.583 | null | null | 382711 | [
"machine-learning",
"classification",
"p-value",
"random-forest",
"importance"
] |
609208 | 2 | null | 343234 | 0 | null | If you are looking for a visual and more intuitive explanation of what the PCA does and represents I recommend looking at this video: [https://youtu.be/TJdH6rPA-TI](https://youtu.be/TJdH6rPA-TI). It walks through an example using PCA on a music dataset.
| null | CC BY-SA 4.0 | null | 2023-03-12T19:04:43.210 | 2023-03-12T19:04:43.210 | null | null | 383038 | null |
609209 | 1 | null | null | 0 | 23 | For any given value of x, say $X$, the distribution of y is called a y-array. Which is of course the conditional distribution of y given that $x = X$, which looks like:
$$\bar{y}_X = E(y|X)$$
which will be a function of X, and vary with it. However, what does it mean to say - by considering the x-array for $y = Y$, we have
$$\bar{x}_Y = E(x|Y)$$
Would this be similar to the following r-expression?
```
df <- structure(list(Age = c(0.5, 0.5, 1, 1, 1, 4, 4, 4, 4.5, 4.5,
4.5, 5, 5, 5, 5.5, 6, 6), Maint = c(182L, 163L, 978L, 466L, 549L,
495L, 723L, 681L, 619L, 1049L, 1033L, 890L, 1522L, 1194L, 987L,
764L, 1373L)), class = "data.frame", row.names = c(NA, -17L))
#E(y|X)
lm(Maint ~ Age, data=df)
#E(x|Y)
lm(Age ~ Maint, data=df)
```
| Regression curves of y on x and of x on y | CC BY-SA 4.0 | null | 2023-03-12T19:12:51.290 | 2023-03-12T19:12:51.290 | null | null | 359583 | [
"r",
"regression",
"linear"
] |
609210 | 2 | null | 272417 | 0 | null | >
Should my bootstrap function return the test statistic calculated for each sample, or the estimate?
We can bootstrap both the coefficient estimates and the test statistics, but it would be better to bootstrap the $t$-statistics. If we take care how we calculate the $t$-statistic in each bootstrap sample, we increase the power of the test, as discussed in [Correct creation of the null distribution for bootstrapped p-values](https://stats.stackexchange.com/q/603540/237901).
>
Should I calculate the proportion of the test statistic/estimate above 0 or above the point estimate of the base model?
This is perhaps the most confusing step as the reasoning behind the p-value calculation is different depending on whether we bootstrap coefficient estimates or test statistics.
The bootstrap principle states that the bootstrap distribution of $\beta^*$ is close to the sampling distribution of $\hat{\beta}$, and that $\hat{\beta}$ itself is close to the true value $\beta$. This is helpful as we can construct confidence intervals for $\beta$. However, unless the true $\beta$ is indeed equal to 0, the bootstrap sample is not simulated under the null hypothesis $H_0:\beta = 0$. Instead [we can "invert" a confidence interval to compute a p-value](https://www.stat.umn.edu/geyer/5601/examp/tests.html); for example, $\operatorname{Pr}\left\{\beta^* \geq 0\right\}$ is the p-value for the one-sided right-tail test.
The bootstrap principle also states that the distribution of $t^* = (\beta^* - \hat{\beta}) / \operatorname{se}(\beta^*)$ is close to the distribution of $t = (\hat{\beta} - \beta) / \operatorname{se}(\hat{\beta})$. This is even more helpful because the $t$-statistic is (approximately) pivotal. A pivot is a random variable whose distribution doesn't depend on the parameters. In this case, the distribution of the $t$-statistic doesn't depend on the true value of $\beta$. So while $\operatorname{E}\hat{\beta} = 0$ under the null and $\operatorname{E}\hat{\beta}\neq 0$ under the alternative, the $t$-statistic has the same distribution under the null and under the alternative. The p-value for the one-sided right-tail test is $\operatorname{Pr}\left\{t^* \geq \hat{t}\right\}$ where the $t^*$s are the bootstrapped test statistics and $\hat{t}$ is the observed test statistic.
>
Should I multiply the result by 2 because the test is bilateral or use absolute values?
To report a two-sided p-value, calculate both tail area probabilities and multiply the smaller one (corresponds to "more extreme" situations) by 2.
I use the same example as @risingStar: a linear regression for US divorce rate as a function of six predictors + an intercept. @risingStar shows how to bootstrap the coefficient estimates (+1); I show how to bootstrap the $t$-statistics. The p-values for all but the last predictor, military, are pretty much the same with both methods.
```
bootstrap.summary(beta.hats, t.stats, p)
#> # A tibble: 7 × 4
#> Name Estimate `t value` `Pr(>|t|)`
#> <chr> <dbl> <dbl> <dbl>
#> 1 (Intercept) 380. 3.83 0.000200
#> 2 year -0.203 -3.81 0.000200
#> 3 unemployed -0.0493 -0.917 0.292
#> 4 femlab 0.808 7.03 0.000200
#> 5 marriage 0.150 6.29 0.000200
#> 6 birth -0.117 -7.96 0.000200
#> 7 military -0.0428 -3.12 0.00160
```
Aside: None of the p-values are exactly 0 because I use the bias-corrected formula for the p-values as described in [After bootstrapping regression analysis, all p-values are multiple of 0.001996](https://stats.stackexchange.com/q/488356/237901).
And finally I plot histograms of the bootstrap distributions of the coefficient estimate [left] and the test statistic [right] for military. These nicely illustrate the effect of "bootstrap pivoting".

---
R code to bootstrap p-values:
```
library("tidyverse")
data(divusa, package = "faraway")
model <- function(data) {
lm(divorce ~ ., data = data)
}
simulator <- function(data) {
rows <- sample(nrow(data), nrow(data), replace = TRUE)
data[rows, ]
}
estimator <- function(data) {
coefficients(model(data))
}
test <- function(data, b.test) {
fit <- model(data)
b <- coefficients(fit)
var <- diag(vcov(fit))
t <- (b - b.test) / sqrt(var)
t
}
pvalue <- function(t.star, t.hat, alternative = c("two.sided", "less", "greater")) {
alternative <- match.arg(alternative)
p.upper <- (sum(t.star >= t.hat) + 1) / (length(t.star) + 1)
p.lower <- (sum(t.star <= t.hat) + 1) / (length(t.star) + 1)
if (alternative == "greater") {
p.upper
} else if (alternative == "less") {
p.lower
} else {
# The two-tailed p-value is twice the smaller of the two one-tailed p-values.
2 * min(p.upper, p.lower)
}
}
bootstrap.summary <- function(b, t, p) {
tibble(
`Name` = names(b),
`Estimate` = b,
`t value` = t,
`Pr(>|t|)` = p
)
}
set.seed(1234)
B <- 10000
# These are the coefficient estimates, $\{ \hat{\beta}_i \}$ and the $t$ statistics, respectively.
# We can also get those with the `summary` function.
beta.hat <- estimator(divusa)
beta.hat
t.stat <- test(divusa, 0) # Calculate (beta.hat - 0) / se(beta.hat)
t.stat
# Bootstrap the coefficient estimates.
boot.estimate <- replicate(B, estimator(simulator(divusa)))
# Bootstrap the t statistics.
boot.statistic <- replicate(B, test(simulator(divusa), beta.hat)) # Calculate (beta.star - beta.hat) / se(beta.star)
# Bootstrapped p-values computed two ways:
p <- NULL
for (i in seq(beta.hat)) {
p <- c(p, pvalue(boot.estimate[i, ], 0))
}
bootstrap.summary(beta.hat, t.stat, p)
p <- NULL
for (i in seq(t.stat)) {
p <- c(p, pvalue(boot.statistic[i, ], t.stat[i]))
}
bootstrap.summary(beta.hat, t.stat, p)
# The 7th coefficient is the estimate for x = military
i <- 7
pvalue(boot.estimate[i, ], 0)
pvalue(boot.statistic[i, ], t.stat[i])
par(mfrow = c(1, 2))
hist(boot.estimate[i, ],
breaks = 50, freq = TRUE,
xlab = NULL, ylab = NULL,
main = paste0("Histogram of β* (x = ", names(beta.hat)[i], ")"),
font.main = 1
)
hist(boot.statistic[i, ],
breaks = 50, freq = TRUE,
xlab = NULL, ylab = NULL,
main = paste0("Histogram of t* (x = ", names(t.stat)[i], ")"),
font.main = 1
)
```
| null | CC BY-SA 4.0 | null | 2023-03-12T19:15:48.327 | 2023-03-12T19:15:48.327 | null | null | 237901 | null |
609211 | 2 | null | 609188 | 5 | null | It doesn't have a Poisson distribution, only a Poisson likelihood. That is, for any observed values $y$ of $Y$ the likelihood ratios $P(Y=y;\theta_1)/P(Y=y;\theta_0)$ are the same as they would be for a Poisson distribution (which is all you need for estimation and other parameter inference). The probabilities of values of $Y$ that you don't observe don't enter into the likelihood ratios, so they need not match.
The reason you have a Poisson likelihood is that you can model the data as produced by a stopped non-homogeneous Poisson process. The process for any individual stops when that individual first dies (or when they are censored). This sort of early stopping alters the sampling distribution of the data but doesn't alter the likelihood ratios.
A simpler example is binomial vs negative binomial: the likelihood ratios for a given set of data are the same even though the sampling distributions over sets of data are different. Here we have 5 successes and 2 failures in 7 trials, as binomial and negative binomial:
```
> dbinom(5,7,.5)/dbinom(5,7,.3)
[1] 6.561266
> dnbinom(2,5,.5)/dnbinom(2,5,.3)
[1] 6.561266
```
| null | CC BY-SA 4.0 | null | 2023-03-12T19:16:33.293 | 2023-03-12T19:16:33.293 | null | null | 249135 | null |
609212 | 1 | null | null | 0 | 37 | According to the theory, power is 1-beta, i.e., the probability of rejecting the null hypothesis when it is false. Power is therefore computed under the alternative hypothesis distribution.
The minimum detectable effect size (MDE) is the distance between the means of the two distributions, so the higher the MDE, the higher the power (which is good) and the lower the beta.
In this formula, they use the critical value as the power to calculate the MDE, and I am wondering how that is possible when they are different concepts.
[](https://i.stack.imgur.com/3HIML.png)
| Can you explain me how critical value can be included in this equation of the minimum detectable size? | CC BY-SA 4.0 | null | 2023-03-12T19:21:55.810 | 2023-03-12T19:21:55.810 | null | null | 369666 | [
"hypothesis-testing",
"statistical-power",
"effect-size"
] |
609213 | 2 | null | 111355 | 0 | null | There are several methods to get confidence intervals for multinomial proportions, and many of them are implemented in R's function `MultinomCI()` from the `DescTools` package:
```
> DescTools::MultinomCI(400 * c(0.10, 0.25, 0.35, 0.30), sides = "two.sided", method = "sisonglaz")
est lwr.ci upr.ci
[1,] 0.10 0.05 0.1549989
[2,] 0.25 0.20 0.3049989
[3,] 0.35 0.30 0.4049989
[4,] 0.30 0.25 0.3549989
```
For details, see [https://cran.r-project.org/web/packages/DescTools/DescTools.pdf](https://cran.r-project.org/web/packages/DescTools/DescTools.pdf)
| null | CC BY-SA 4.0 | null | 2023-03-12T19:25:14.613 | 2023-03-12T19:25:14.613 | null | null | 184252 | null |
609214 | 2 | null | 609029 | 5 | null | The R syntax is correct: `~cl1+houseid` specifies that `cl1` values identify sampling units at stage 1 (PSUs) and `houseid` values identify sampling units at stage 2. If instead you want to use combinations of two variables to identify PSUs, you need to use the `interaction` function to create a single variable with all combinations, eg, `~interaction(cl1, houseid)`. Specifying the formula backwards `~houseid+cl1` gives you `houseid` as the PSU (which, since they're nested, is the same as `interaction(houseid, cl1)`).
In SPSS, you have the same options. At stage 1, specify just the PSU (`cl1`), and at stage 2 specify just the stage 2 sampling unit (`houseid`). Or, since stage 2 doesn't matter for 'with-replacement' standard errors, just specify stage 1. If you specify two cluster variables at stage 1 (as you did) you get all combinations of them as the PSUs, which is wrong.
There's extra potential for confusion in the social sciences because multilevel modellers call the finest partition of the data 'level 1' and survey samplers call the coarsest partition 'stage 1'.
Finally, I note that R will tell you how many clusters you have if you just print the survey design object. Here's one of the built-in examples, constructed the right way around and constructed backwards. The right way around, it describes itself correctly as a 2-stage sampling design and gives the numbers of clusters. The wrong way around, it describes itself as 'independent sampling' because we've told it that the PSUs come from what's really the second-stage identifier (`snum`), which identifies individual records in the dataset.
```
> dclus2
2 - level Cluster Sampling design
With (40, 126) clusters.
dclus2<-svydesign(id=~dnum+snum, fpc=~fpc1+fpc2, data=apiclus2)
> dclus2a<-svydesign(id=~snum+dnum, data=apiclus2,weight=~pw)
> dclus2a
Independent Sampling design (with replacement)
svydesign(id = ~snum + dnum, data = apiclus2, weight = ~pw)
```
I assume you can also get that sort of information in SPSS, but I don't speak SPSS.
| null | CC BY-SA 4.0 | null | 2023-03-12T19:34:17.360 | 2023-03-12T19:34:17.360 | null | null | 249135 | null |
609215 | 1 | null | null | 0 | 26 | I have an observed effect in a 2x2 contingency table of smokers who suffer from alopecia. There appear to be more smokers suffering from alopecia in the sample.
I have performed Fisher's Exact test, and have a significant two-sided p-value and a highly significant one-sided p-value.
I'm unclear as to whether, and if so why, I can infer the direction of the effect from this and state it in my conclusion.
i.e. "this suggests that there is an increased occurrence among smokers in the population" instead of "there is an association between smoking status and alopecia in the population"
Thanks,
John
| Direction of Inference from Fisher's Exact Test | CC BY-SA 4.0 | null | 2023-03-12T19:35:29.113 | 2023-03-12T19:35:29.113 | null | null | 382950 | [
"hypothesis-testing",
"self-study",
"inference"
] |
609216 | 1 | null | null | 1 | 38 | Is there any theoretical work on how to measure posterior collapse?
One can measure decoder output, but it is not clear if the degradation (if any) happened due to posterior collapse or due to failing to match the data distribution. Therefore I'm interested in measuring "how informative latent variable z is". Thank you.
UPDATE
By "posterior collapse" I mean an event in which the signal from the input x to the posterior parameters is either too weak or too noisy, and as a result the decoder starts ignoring z samples drawn from the posterior $q_\theta(z_d | x)$. If z is too noisy, the decoder ignores it during x' generation. If z is too weak, we observe that $\mu$ and $\sigma$ become constant regardless of the input x.
| How to measure posterior collapse if any | CC BY-SA 4.0 | null | 2023-03-12T19:38:38.333 | 2023-03-12T20:42:40.833 | 2023-03-12T20:42:40.833 | 75286 | 75286 | [
"machine-learning",
"distributions",
"autoencoders",
"variational-inference"
] |
609217 | 1 | null | null | 0 | 10 | I am having some doubts about PAC learning. I understood the main idea, but with the explanation found in the book "Understanding Machine Learning" I couldn't follow some of the ideas. The author starts by defining some sets: H is the set of possible models, h a selected model, D the true (unseen) distribution of the data, $L_{D,f}$ the loss on the test distribution, $L_S$ the loss on the training sample, and $h_S$ the best model according to $L_S$.
[](https://i.stack.imgur.com/EWFSr.jpg)
[](https://i.stack.imgur.com/fgze9.jpg)
I understand that the set $D^m$ will show the distribution of the collected data, but how does this inequality work? (I think $D^m(M)$ will contain the distribution of several data sets that do not match the environment, since the model overfits these data.) That is, how does it compare the distribution between the two sets?
[](https://i.stack.imgur.com/piLy8.jpg)
And it keeps using D, which is my main difficulty in understanding. So if someone could explain the questions about D made above, giving a brief example of its form and how it arrives at the final result, I would appreciate it.
| Some doubts about the logic of PAC learning | CC BY-SA 4.0 | null | 2023-03-12T20:26:06.540 | 2023-03-12T20:26:06.540 | null | null | 383041 | [
"machine-learning"
] |
609219 | 1 | null | null | 0 | 22 | I'm running into a problem after trying what I thought would be a simple analysis. I have 47 sites where I measured a variety of habitat characteristics (canopy cover, habitat type, percent of bare ground, elevation, etc.). The habitat type consists of 4 categorical variables, canopy cover is continuous, and then there are the percentages. For each site, I also measured the same characteristics at two random sites, 50 m away. My goal is to see if/how these characteristics are informing the site selection of the original site. Because I'm looking at fine-scale selection, I want the two randoms to be paired to the site.
I originally tried a mixed effects model with SiteID as the random effect, but received a singular fit warning. Type refers to site (1) or random (0).
```
bedsites.random <- glmer(Type ~ Habitat + Canopy_Cover +
X100cm_Cover + (1|BedsiteID),
family = binomial(link = "logit"),
data = bedsites)
boundary (singular) fit: see help('isSingular')
```
I surmised this was from only having one observation for each site, so I tried clogit for case-control studies in R, only to end up with this warning and huge beta estimates:
```
bed.mod <- clogit(Type ~ Habitat + Canopy_Cover + X100cm_Cover +
+ strata(BedsiteID),
+ data = bedsite)
Warning message:
In coxexact.fit(X, Y, istrat, offset, init, control, weights = weights, :
Loglik converged before variable 1,2,4 ; beta may be infinite.
```
THEN someone told me to try a negative binomial, which resulted in this:
```
summary(m1 <- glm.nb(Type ~ Habitat + Canopy_Cover, data = bedsite))
Call:
glm.nb(formula = Type ~ Habitat + Canopy_Cover, data = bedsite,
init.theta = 9353.492376, link = log)
Deviance Residuals:
Min 1Q Median 3Q Max
-1.1609 -0.7308 -0.7308 0.5796 1.0996
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.343735 0.408254 -3.291 0.000997 ***
HabitatCRP 0.069801 0.578916 0.121 0.904031
HabitatForest 0.079300 0.879014 0.090 0.928117
HabitatGrassland 0.023234 0.464190 0.050 0.960081
HabitatShrubs 0.702339 0.520194 1.350 0.176969
Canopy_Cover 0.011405 0.006117 1.865 0.062243 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for Negative Binomial(9353.492) family taken to be 1)
Null deviance: 103.266 on 140 degrees of freedom
Residual deviance: 95.877 on 135 degrees of freedom
AIC: 203.88
Number of Fisher Scoring iterations: 1
Theta: 9353
Std. Err.: 115219
Warning while fitting theta: iteration limit reached
2 x log-likelihood: -189.882
Warning messages:
1: In theta.ml(Y, mu, sum(w), w, limit = control$maxit, trace = control$trace > :
iteration limit reached
2: In theta.ml(Y, mu, sum(w), w, limit = control$maxit, trace = control$trace > :
iteration limit reached
```
I also tried running these models with only one set of randoms in case the two randoms per observation were throwing it off, but I got the same warnings. I'm completely at a loss for how to analyze this. Any ideas?
| Using mixed effects models or clogit on paired data | CC BY-SA 4.0 | null | 2023-03-11T17:47:28.143 | 2023-03-13T03:07:03.180 | 2023-03-13T03:07:03.180 | 11887 | 382991 | [
"r",
"logistic",
"lme4-nlme"
] |
609220 | 1 | null | null | 1 | 43 | Inspired by Richard McElreath's "Full Luxury Bayes" in his [Statistical Rethinking course](https://youtu.be/F0N4b7K_iYQ?t=4553), I wanted to implement a "Full Luxury Bayesian Marginal Structural Model".
Briefly: MSMs estimate the average treatment effect in steps. First, you regress the (binary) treatment $A$ on confounders $X$; second, you compute the inverse probability weights $w_i=\frac{1}{\Pr[A=a_i|X=X_i]}$; and third, you regress the outcome on the treatment ($Y \sim 1+A$) weighted by $w$.
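For reference, a minimal numpy-only sketch of the classical (non-Bayesian) version of this procedure — the data and all names here are illustrative, not from my actual model, and the Hajek/Horvitz–Thompson weighted means stand in for the weighted outcome regression:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5000
x = rng.normal(size=N)
a = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.0 * x))))   # confounded treatment
y = 1.0 - 1.5 * x + 0.0 * a + rng.normal(size=N)          # true treatment effect = 0

# Step 1: logistic regression of A on X, fitted by Newton/IRLS.
Z = np.column_stack([np.ones(N), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-Z @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(Z.T @ (W[:, None] * Z), Z.T @ (a - p))
p_hat = 1 / (1 + np.exp(-Z @ beta))

# Step 2: inverse probability weights 1 / Pr[A = a_i | X_i].
w = 1 / np.where(a == 1, p_hat, 1 - p_hat)

# Step 3: weighted comparison of treated vs control means (Hajek estimator).
ate = (np.sum(w * a * y) / np.sum(w * a)
       - np.sum(w * (1 - a) * y) / np.sum(w * (1 - a)))
naive = y[a == 1].mean() - y[a == 0].mean()   # biased by the confounder
print(naive, ate)   # naive is far from 0; ate is close to the true effect of 0
```

This is the baseline I am comparing the joint Bayesian model against.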
I thought this was a classic case for this type of multiple submodels within a single "full" model, since there is one regression, a deterministic computation, and a second regression. Very similar to the example McElreath presents.
Unfortunately, when I experiment, I can't recover the true parameters, and I believe it's not a software bug (see details below) but an actual consequence of learning the propensity and outcome models jointly (hence posting here and not on Stack Overflow).
I wonder if someone could explain why this goes wrong.
---
The model that doesn't work:
```
import pymc as pm
with pm.Model() as msm_model:
# Treatment model:
intercept_a = pm.Normal("intercept_a", mu=0, sigma=2)
betas_a = pm.Normal("betas_a", mu=0, sigma=2, shape=X.shape[1])
mu_lin_a = pm.Deterministic(
"mu_lin_a", intercept_a + pm.math.dot(X, betas_a),
)
p_a1 = pm.Deterministic("p_a1", pm.math.sigmoid(mu_lin_a)) # Pr[A=1|X]
a_obs = pm.Bernoulli("a_obs", p_a1, observed=a)
p_a0 = pm.Deterministic("p_a0", 1-p_a1) # Pr[A=0|X]
p_a = pm.Deterministic("p_a", (a*p_a1) + (1-a)*p_a0) # Pr[A=a_i|X]
ipa = pm.Deterministic("ipa", 1/p_a)
# Outcome model (MSM):
intercept_y = pm.Normal("intercept_y", mu=0, sigma=3)
betas_y = pm.Normal("betas_y", mu=0, sigma=3)
sigma_y = pm.HalfNormal("sigma_y", sigma=3)
mu_lin_y = pm.Deterministic(
"mu_lin_y", intercept_y + betas_y*a,
)
# This is how to define a weighted regression in PyMC:
y_obs = pm.Potential(
"y_obs",
ipa * pm.logp(pm.Normal.dist(mu=mu_lin_y, sigma=sigma_y), y)
)
```
The reason I think there isn't a bug in this model is that I can make it work by changing two things separately:
- If I precompute the IP-weights beforehand using regular logistic regression (say, in statsmodels).
- If instead of a weighted outcome regression I compute the weighted average in each group (Horvitz–Thompson estimator).
Furthermore, when I do so the (averaged over chains) propensities (p_a1) suddenly match the ones I get from the non-Bayesian model, whereas they do not match in the "full" model.
This is why I think there's some inherent fault in the joint model which I don't understand.
---
Sample data:
```
import numpy as np
def generate_data(seed=0, N=1000, D=1, effect=0):
rng = np.random.default_rng(seed)
X = rng.normal(0, 1, size=(N, D))
beta_xa = rng.normal(2, 1, size=D) # 3.184
a_logit = 1 + X@beta_xa + rng.normal(0, 0, size=N)
a_propensity = 1 / (1 + np.exp(-a_logit))
a = rng.binomial(1, a_propensity)
beta_xy = rng.normal(-2, 1, size=D) # -1.418
y = 1 + X@beta_xy + a*effect + rng.normal(0, 1, size=N)
return X, a, y
```
| A Bayesian marginal structural model (IPW) in a single model | CC BY-SA 4.0 | null | 2023-03-12T20:53:36.910 | 2023-03-12T20:53:36.910 | null | null | 153005 | [
"bayesian",
"markov-chain-montecarlo",
"causality",
"pymc",
"marginal-model"
] |
609221 | 2 | null | 586857 | 1 | null | Good afternoon @Augustine. Looking at what you posted, the first step would be to set the date column as the index; after that, build a full date range from the initial date to the final date, using:
[pandas.date_range](https://pandas.pydata.org/docs/reference/api/pandas.date_range.html)
[pandas.DataFrame.reindex](https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.reindex.html)
Create a new date index and reindex each set against it; the existing indexes will be aligned to it, so no rows of either A or B are lost.
After doing this for the two sets, you will have a new date index that fills in any missing dates in this period for both sets.
To merge, you need an index based on dates. I usually use pandas merge, but that depends a lot on your data set: if it has a very high dimension, merging can take a lot of processing. In that case, try concat with axis 0 or 1 first, to see if that already meets your needs. If that doesn't work, try merge, which has more features for combining data. It's a little difficult to say which one would be better because I don't have anything to test with here.
[merge, concat, join](https://pandas.pydata.org/docs/dev/user_guide/merging.html)
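A minimal sketch of the idea (column names and dates are made up for illustration):

```python
import pandas as pd

a = pd.DataFrame({"date": pd.to_datetime(["2023-01-01", "2023-01-03"]),
                  "a": [1, 2]}).set_index("date")
b = pd.DataFrame({"date": pd.to_datetime(["2023-01-02", "2023-01-03"]),
                  "b": [10, 20]}).set_index("date")

# One shared daily calendar spanning both sets, so no dates are lost.
full = pd.date_range(min(a.index.min(), b.index.min()),
                     max(a.index.max(), b.index.max()), freq="D")
a = a.reindex(full)   # dates missing from a set become NaN rows
b = b.reindex(full)

merged = a.join(b)    # or pd.concat([a, b], axis=1), or pd.merge on the index
print(merged)
```

Rows with no observation in one set come out as NaN, which you can then fill or drop as needed.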
| null | CC BY-SA 4.0 | null | 2023-03-12T20:57:54.620 | 2023-03-12T20:58:29.383 | 2023-03-12T20:58:29.383 | 373067 | 373067 | null |
609222 | 2 | null | 206896 | 0 | null | Depending on the calculation, out-of-sample $R^2$ can be negative. In fact, for LASSO, even in-sample $R^2$ can be negative (again, depending on the calculation).
If you do the $R^2$ calculation by squaring the Pearson correlation between the predictions and true values, that is bounded below by zero (cannot be negative). However, I give below another common way to express $R^2$ that is likely used by your software.
$$
R^2=1-\left(\dfrac{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\bar y
\right)^2
}\right)
$$
In the OLS linear regression case, this turns out to be equivalent to squaring the Pearson correlation between the predicted and true values. Also, in the OLS simple linear regression case (just a slope and an intercept), the above equation is equivalent to the squared Pearson correlation of the outcome $y$ and lone feature $x$. However, [the above equation for $R^2$ allows for the usual interpretation as the "proportion of variance explained" by the regression](https://stats.stackexchange.com/questions/551915/interpreting-nonlinear-regression-r2).
All of this is to say that the above equation is a totally reasonable way of writing $R^2$.
In order for such a formula to give a negative number, the fraction numerator must exceed the denominator. Digging into the fraction, the numerator is the sum of squared residuals for your model, and the denominator is the sum of squared residuals for a model that predicts $\bar y$ every time, regardless of the feature/covariate values. Such a model can be regarded as a reasonable naïve baseline "must-beat" model: if you want to predict the conditional expected value and know nothing about how the features influence $y$, what better prediction than the mean of $y$ every time?
Consequently, when you get that formula to give a value less than zero, that is a signal that your predictions are doing worse in terms of square loss (sum of squared residuals) than your baseline, "must-beat" model. Given that you aim to predict financial returns that are notoriously difficult to predict, poor model performance is not surprising. Looking at your graph, you see that the green line of predictions is far away from the blue line of true values, consistent with poor model performance. A useful visualization might be a scatterplot of true and predicted values. I have [another answer](https://stats.stackexchange.com/a/584562/247274) where I show plots like this and why they can show strong correlation yet make terrible predictions. Depending on the mistakes your model makes, you might be able to calibrate the predictions (such as swapping negative and positive predicted returns, if the model consistently gets the wrong sign), though this warrants a separate question and answer, discussed to some extent in [a question of mine from about a year ago](https://stats.stackexchange.com/questions/565642/if-going-with-the-opposite-prediction-of-a-bad-predictor-gives-good-predictions) and the comment by Stephan Kolassa.
Overall, it seems that your LASSO model simply does a poor job of making predictions. This is disappointing, sure, but you want to catch performance like this before you deploy a model. After all, I would not want to trust my life's savings to an investment plan that uses a model that makes such poor predictions, and I will venture a guess that I am not alone in feeling that way!
(There is an issue when it comes to what the denominator should be when you do out-of-sample assessments, and [I disagree with typical software implementations, such as that of sklearn](https://stats.stackexchange.com/questions/590199/how-to-motivate-the-definition-of-r2-in-sklearn-metrics-r2-score); see below for the equations, noting that there is no disagreement for the in-sample case. Fortunately, however, my way of doing it that uses the in-sample $\bar y$ and the `sklearn` way of doing it with the out-of-sample $\bar y$ are likely to give similar denominators (since the in-sample and out-of-sample means should be fairly close, unless there is data drift (which is not so unusual), but that is a separate issue), so if you get $R^2<0$ one way, you are likely to get $R^2<0$ the other way.)
$$
R^2_{\text{out-of-sample, Dave}}=1-\left(\dfrac{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\bar y_{\text{in-sample}}
\right)^2
}\right)
$$
$$
R^2_{\text{out-of-sample, sklearn}}=1-\left(\dfrac{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\bar y_{\text{out-of-sample}}
\right)^2
}\right)
$$
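To illustrate the difference between the two calculations numerically, here is a small numpy sketch (simulated data; the specific numbers are illustrative): predictions that are strongly correlated with the truth but systematically wrong get a squared-correlation $R^2$ near 1 yet a negative sum-of-squares $R^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(size=200)
y_hat = -y + rng.normal(scale=0.1, size=200)   # strongly (anti-)correlated, terrible predictions

r2_corr = np.corrcoef(y, y_hat)[0, 1] ** 2                          # near 1
r2_ss = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)  # negative
print(r2_corr, r2_ss)
```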
| null | CC BY-SA 4.0 | null | 2023-03-12T21:05:47.810 | 2023-03-13T11:05:38.740 | 2023-03-13T11:05:38.740 | 247274 | 247274 | null |
609223 | 1 | null | null | 0 | 33 | Suppose I fit a generalised linear model with a binary response on multiple predictors with a logit link. When calculating the inverse of logit, I have the following exponential formula:
$$\frac{e^{\theta}}{1+e^{\theta}}$$
I have been taught that $\theta$ can be represented as $\beta_0+\beta_1x_i$. Suppose my predictor is a factor; would it make sense to do the following?
$$\frac{e^{\theta}}{1+e^{\theta}} = \frac{e^{\beta_0+\beta_1x_i}}{1+e^{\beta_0+\beta_1x_i}},$$
which, for $x_i = 0, 1, 2, \dots$, gives
$$\frac{e^{\beta_0}}{1+e^{\beta_0}}, \quad \frac{e^{\beta_0+\beta_1}}{1+e^{\beta_0+\beta_1}}, \quad \frac{e^{\beta_0+2\beta_1}}{1+e^{\beta_0+2\beta_1}}, \quad \dots$$
I cannot calculate the fitted values from this (or what is the appropriate method to compare fitted values from this?). Additionally, would it make sense to take the average of all x values in this case, to obtain a mean score?
| Logit inverse for binary data | CC BY-SA 4.0 | null | 2023-03-12T21:07:31.097 | 2023-03-13T16:52:50.210 | 2023-03-13T16:52:50.210 | 359583 | 359583 | [
"r",
"logistic",
"multiple-regression",
"generalized-linear-model"
] |
609224 | 1 | null | null | 0 | 16 | I have many sets of datapoints of the form $\{(x_1, y_1), (x_2,y_2), (x_3, y_3), \cdots, (x_n,y_n)\}$. Within each set, the y values have some (generally nonlinear) functional dependence on the x values, $y = f_i(x)$ (index $i$ runs over the number of sets of datapoints) in addition to noise. The $x$ values are not the same across sets, but are chosen roughly randomly on the same interval.
My question is whether there is a statistical method to find all of the groups of sets such that the data support $f_i(x) = f_j(x)$ for all $i$ and $j$ in that group. Alternatively, is there a statistical test to see whether two of the sets of datapoints have the same functional dependence?
| Statistical Test if Sets of Datapoints Share Functional Dependence | CC BY-SA 4.0 | null | 2023-03-12T21:14:39.610 | 2023-03-12T21:14:39.610 | null | null | 233938 | [
"machine-learning",
"hypothesis-testing"
] |
609225 | 1 | 609268 | null | 2 | 72 | I know that theoretically stacking multiple linear layers does not make any sense, as multiple affine transformations are no more powerful than a single one. However, I have heard many DL practitioners say that multiple linear layers speed up the learning of a network.
However, I found no documented case of this in the literature, nor any arXiv paper that I was able to find related to this... any insight/reference?
Some papers that reference this behavior:
- arxiv.org/pdf/1912.02975.pdf: To show that overparametrization alone is an important implicit regularizer in RL, LQR allows the use of linear policies (and consequently also allows stacking linear layers) without requiring a stochastic output such as discrete Gumbel-softmax or for the continuous case, a parametrized Gaussian. This is setting able to show that overparametrization alone can affect gradient dynamics, and is not a consequence of extra representation power due to additional non-linearities in the policy
- arxiv.org/pdf/1811.10495.pdf we expand each linear layer in a given compact network into a succession of multiple linear layers. Our experiments evidence that training such expanded networks, which can then be contracted back algebraically, yields better results than training the original compact networks, thus empirically confirming the benefits of over-parameterization.
however, none of those papers even tries to argue why this happens; they just observe this behavior.
| Does stacking multiple linear layer have some documented improvements? | CC BY-SA 4.0 | null | 2023-03-12T21:28:55.460 | 2023-03-13T10:03:55.997 | 2023-03-12T21:49:25.427 | 346940 | 346940 | [
"neural-networks",
"references",
"linear"
] |
609226 | 1 | null | null | 0 | 7 | I'm trying to work out which statistical test I need to use for my dissertation.
I am looking to see if the strength of the relationship between a candidate and an interviewer affects the likelihood of their application outcome. Questions were answered by candidates.
I have data from 3 sets of individuals, those who were:
- Successfully appointed
- Interviewed but unsuccessful
- Unsuccessful
All respondents answered multiple questions to give them a relationship score for 4 types of relationship:
- Mentoring
- Networking
- Sponsorship
- Social
These can also be added to give them an overall relationship score.
I've tried individual ANOVAs and regression, but as there are only 3 outcomes I can't seem to get anything to work.
Please help!
| What type of statistical test? | CC BY-SA 4.0 | null | 2023-03-12T21:30:26.480 | 2023-03-12T21:30:26.480 | null | null | 383048 | [
"statistical-significance"
] |
609227 | 1 | null | null | 0 | 8 | I am curious about whether you can make statistical inferences when you have nested data, and the way your independent variable is measured makes it only comparable within its own group.
For example, you want to know the relationship between a town's investment in recreation and the level of physical fitness of the people in the town. Your towns are nested within states, and each state has a different way of allocating public funds for things like recreation to towns, and each state classifies recreation a bit differently. So the level of recreation funding in Town A in State 1 includes Town A's spending on, say, the local public track, but the level of funding in Town B in State 2 does not include spending on running tracks. If we simply regress fitness on investment in recreation, then we could just be capturing the effect of different expenditure classification schemes. Spending on recreation is only comparable within states.
So instead of measuring gross per capita recreation spending in each town, let's say we measure the amount of revenue that each town spends on recreation per capita as a percentage of the entire state's spending on recreation. For simplicity's sake, let's say every town is the same size and has the same characteristics (although you could obviously use population weights if this weren't the case). Let's also say that you have reasonable evidence that cross-state variation in recreation spending [according to a common definition of recreation] is not very large; that is, most states spend a similar amount and the variation is mostly at the sub-state level. So a town in State 1 may have higher gross spending but a lower percentage than a town in State 2, since State 1 has a more expansive definition of recreation.
Could you run a random effects, fixed effects, or mixed model estimating the relationship between recreation investment as a percentage of total state rec. investment and overall fitness? I understand that you lose some insights, since you're no longer looking at gross spending. But my question stands: Is it possible to make inferences about the relationship between fitness and recreation spending with independent variables that are only comparable within the group they're nested in, or is it kind of impossible to draw any conclusions about this relationship?
| Running a regression model where the observations' independent variables are only comparable to other observations within their group | CC BY-SA 4.0 | null | 2023-03-12T21:40:29.090 | 2023-03-12T21:40:29.090 | null | null | 382389 | [
"mixed-model",
"econometrics",
"multilevel-analysis",
"fixed-effects-model"
] |
609228 | 1 | null | null | 1 | 26 | The plot below models a sample of departure delays of flights; the x-axis is minutes delayed. The mean is about 10 (red line) and the standard deviation is 36. The distribution has a very long tail (some flights are delayed by 800 minutes). The blue line is a fitted normal (mean=10, std=36) and the purple distribution is a shifted Poisson (such that it can handle negative values).
Both of them completely miss the data as the data has a long tail and is very skewed. How might I model the probability of flight delays best?
[](https://i.stack.imgur.com/pOMtL.png)
| Modelling a right-long-tailed distribution | CC BY-SA 4.0 | null | 2023-03-12T21:40:31.633 | 2023-03-12T21:40:31.633 | null | null | 109304 | [
"probability",
"distributions"
] |
609230 | 1 | null | null | 1 | 31 | My team is conducting propensity score matching with 1:1 nearest neighbor replacement for a case-control healthcare study.
While we're obtaining match rates of 80-90% with good covariate balance, we have noticed a handful of treatment subjects are matched to 20+ controls.
Is it acceptable to manually trim the number of matched records so that, for example, no treatment subject is matched to > 5 controls? This would be done following matching by ranking the PS matches and keeping the top matched pairs.
Or is it better to incorporate matching limits into the matching procedure itself rather than ex post adjustments?
My understanding is there's a bias-variance tradeoff when matching with or without replacement. We prefer matching with replacement to obtain less biased estimates of treatment effects. Trimming members would slightly increase bias but (ideally) reduce variance of our treatment effect estimates.
| Propensity score matching with replacement - OK to trim excess control group matches to same treatment subject? | CC BY-SA 4.0 | null | 2023-03-12T22:04:42.513 | 2023-03-12T22:04:42.513 | null | null | 13634 | [
"propensity-scores",
"matching",
"bias-variance-tradeoff"
] |
609231 | 1 | null | null | 0 | 20 | From an experiment where we recorded performances before and after medication, I have gathered data for two groups of patients (Disease A and Disease B) and now I wish to examine if there are any differences between them.
To determine which statistical test to use, I conducted a normality test (Shapiro-Wilk) on both datasets (before and after medication). The results indicate that in the before phase, most variables are not normally distributed, but in the after phase, over half of them are now normally distributed.
That could be because I have fewer than 50 observations, so high variability is expected. However, I am unsure about which test to proceed with. Specifically, should I use the Wilcoxon-Mann-Whitney test whenever a variable is not normally distributed, and the t-test when it is?
As I also want to report the effect size, can I include both Cohen's d and Cliff's Delta?
| checking normality for longitudinal data | CC BY-SA 4.0 | null | 2023-03-12T22:27:01.147 | 2023-03-12T22:27:01.147 | null | null | 375245 | [
"panel-data",
"normality-assumption",
"wilcoxon-mann-whitney-test",
"cohens-d"
] |
609232 | 1 | null | null | 0 | 73 | I am trying to recover the formula of my regression model. I built the polynomial regression model using `glmer(optionval ~ nt1 + nt2 + nt3 + section:(nt1+ nt2+ nt3)+(nt1-1|participant.id),...)`. The "nt"s are time-based natural polynomials generated using `poly(..., raw=TRUE)`, and "section" is a categorical factor consisting of 2 levels.
I got the model summary including estimates like below:
```
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.04591 0.03217 -1.427 0.154
nt1 2.96992 2.70780 1.097 0.273
nt2 61.03648 1.78963 34.106 < 2e-16 ***
nt3 -29.26076 1.60500 -18.231 < 2e-16 ***
nt1:section1 -6.31923 2.67106 -2.366 0.018 *
nt2:section1 7.57970 1.46541 5.172 2.31e-07 ***
nt3:section1 -10.13101 1.41280 -7.171 7.45e-13 ***
```
From what I know, I think these estimates are the βs of the model formula, but I am n̶o̶t̶ ̶s̶u̶r̶e̶ now sure whether the value for "section" is associated with the coding defined by `contrast()` (in my case, (-1, 1)). Anyway, based on my assumption, the formulas I recovered (61.03x^2-29.26x^3+6.32x-7.58x^2+10.13x^3; 61.03x^2-29.26x^3-6.32x-7.58x^2-10.13x^3) are different from the one visualized by averaging model predictions from `predict()`.
Stats pros on the platform please give me some ideas!
Update 1:
I managed to recover the formula for the model with no random effect using the procedure I described above (bold sentence; see Fig.1). However, when there is a random effect, the formula recovered directly using the summarized parameters does not produce a similar curve as the one produced by averaging individual `predict()` values (see Fig.2).
[](https://i.stack.imgur.com/Wev1d.png)
Fig.1 Fixed effect model. dashed lines are produced by recovered model formula. Solid lines are produced by averaging predict().
[](https://i.stack.imgur.com/9vejw.png)
Fig.2 Mixed effect model. dashed lines are produced by recovered model formula. Solid lines are produced by averaging predict().
I looked at `coef(model)` and found a different `nt1` for each sample, so I realized that the `predict()` values for each sample must be generated using individual coefficients instead of the overall model coefficients. Furthermore, the averaged `predict()` curve even has one more inflection point (3 inflection points) than the model (3rd order, so 2 inflection points) could possibly produce.
In this case, I doubt the possibility of recovering the formula for a mixed effect model.
| How to recover the formula of the polynomial regression model? | CC BY-SA 4.0 | null | 2023-03-13T00:28:19.260 | 2023-03-16T22:23:17.010 | 2023-03-16T22:23:17.010 | 321486 | 321486 | [
"r",
"regression",
"mixed-model",
"generalized-linear-model",
"polynomial"
] |
609233 | 2 | null | 312877 | 0 | null | Did you try [Bayesian Neural Network](https://arxiv.org/abs/1801.07710)? This can be built based on the [TensorFlow Probability](https://www.tensorflow.org/probability) library. Here I found a [blog post](https://medium.com/towards-data-science/bayesian-neural-networks-3-bayesian-cnn-6ecd842eeff3) that describes how to do it with CNN.
BNN can consider two types of uncertainties, namely: Epistemic Uncertainty and Aleatoric Uncertainty.
- Epistemic Uncertainty: This type of uncertainty occurs when your model lacks the necessary training data. So, the parameter estimation cannot be determined super confidently. Adding more data reduces this type of uncertainty.
- Aleatoric Uncertainty: This type of uncertainty is introduced directly from the training data. What if your data itself is noisy? It can be due to labeling errors, or the data can naturally contain uncertainty, for example: when you are working with sensor data, for the same input, different outputs can be generated naturally due to some imperfection. No matter how many data samples you add, you are still in this type of uncertainty. So we have to make the model aware of this as well.
According to [Wikipedia](https://en.wikipedia.org/wiki/Uncertainty_quantification#:%7E:text=In%20mathematics%2C%20uncertainty%20is%20often%20characterized%20in%20terms,sample%20drawn%20from%20a%20probability%20distribution%20will%20be.):
>
In mathematics, uncertainty is often characterized in terms of a probability distribution. From that perspective, epistemic uncertainty means not being certain what the relevant probability distribution is, and aleatoric uncertainty means not being certain what a random sample drawn from a probability distribution will be.
With the help of BNN, you can make your model aware of both of them, such that it knows when it doesn't know and doesn't give any wrong prediction with embarrassingly high confidence.
But like any other system, it is not perfect. Following are the fallbacks I experienced while working with it:
- You should not keep an extremely high hope. But at least you can hope for a better uncertainty estimation than a regular Non-Bayesian Neural Network.
- It is slower than its Non-Bayesian counterpart.
- I worked on text classification with this type of network. What I found is that this network is good at preventing high confidence for completely nonsensical utterances, but it struggles if you intentionally generate adversarial inputs. For text classification, for example, if you query the model with a valid but incomplete sentence (which, as humans, we understand is also OOD), the BNN sometimes fails.
- Some say you can approximate BNN with Dropout.
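Regarding that last point: the dropout approximation ("MC Dropout") keeps dropout turned on at prediction time and treats repeated stochastic forward passes as samples from an approximate posterior predictive distribution. Here is a minimal, library-free sketch with a made-up toy network (the weights, sizes, and dropout rate are arbitrary, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-hidden-layer network with fixed (pretend "trained") weights.
W1 = rng.normal(size=(1, 50)); b1 = np.zeros(50)
W2 = rng.normal(size=(50, 1)); b2 = np.zeros(1)

def forward(x, drop_p=0.5):
    h = np.maximum(0.0, x @ W1 + b1)      # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p   # dropout stays ON at inference time
    h = h * mask / (1.0 - drop_p)         # inverted-dropout rescaling
    return h @ W2 + b2

x = np.array([[0.3]])
samples = np.stack([forward(x) for _ in range(200)])  # 200 stochastic passes
mean, std = samples.mean(), samples.std()
# `std` serves as the (approximate) predictive uncertainty for this input.
```

The spread of the 200 passes (`std`) is the uncertainty estimate; in a real framework model you would get the same effect by keeping the dropout layers active during prediction instead of disabling them.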
| null | CC BY-SA 4.0 | null | 2023-03-13T01:33:55.387 | 2023-03-13T02:19:34.343 | 2023-03-13T02:19:34.343 | 245577 | 245577 | null |
609234 | 1 | 609247 | null | 12 | 2100 | I have hourly wind power data, and I computed its periodogram in R after making the series stationary. It gives me four seasonal patterns at periods of 24, 12, 08, and 06, as shown in the figure below.
Is it possible to have sub-daily seasonality in my wind power data?
The data is given as follows:
[](https://i.stack.imgur.com/OCvR6.png)
| Is it possible to have seasonality at 24, 12, 8 periods in hourly based wind power data? | CC BY-SA 4.0 | null | 2023-03-13T01:42:40.593 | 2023-03-13T14:21:08.943 | 2023-03-13T04:05:29.373 | 362671 | 377662 | [
"time-series",
"seasonality",
"fourier-transform"
] |
609237 | 1 | 609251 | null | 0 | 43 | I would like to perform multiclass segmentation on DRRs (digitally reconstructed radiographs) using a Unet network. For binary segmentation, Unet works well (Dice score over 0.93).
What I’m trying to do is multiclass segmentation to segment multiple bones. I looked the topic up on Kaggle and other sources, but those showed disappointing segmentation results.
I thought of ideas to start with:
- For each bone label, assign a separate Unet model to detect it.
- Use ensemble models consisting of multiple networks and when classifying a pixel, use majority voting method.
- Use only one model to segment multiple labels.
Among those ideas, which would be the best approach to do multiclass segmentation? I would appreciate any recommendations.
| Which would be a better approach for Multiclass segmentation? | CC BY-SA 4.0 | null | 2023-03-13T02:41:05.227 | 2023-03-13T08:17:10.927 | 2023-03-13T08:17:10.927 | 95000 | 383059 | [
"neural-networks",
"image-segmentation"
] |
609238 | 1 | 609258 | null | 5 | 941 | I tried looking for my specific question, but I only found partially related questions [here](https://stats.stackexchange.com/questions/509052/understanding-the-bayesian-grid-approximation-probabilities), [here](https://stats.stackexchange.com/questions/238288/bayesian-clt-with-grid-approximation), and [here](https://stats.stackexchange.com/questions/319839/posterior-grid-approximation). I think my question is much simpler than what was asked and answered in these queries. I'm working through Statistical Rethinking by McElreath and there is a portion where they explain grid approximation for Bayesian statistics. The author uses this code to generate and plot the data, though I've made some minor formatting changes so it's more readable:
```
#### Define Grid ####
p_grid <- seq( from=0 , to=1 , length.out=20 )
#### Define Prior ####
prior <- rep( 1 , 20 )
#### Compute Likelihood at Each Value in Grid ####
likelihood <- dbinom( 6 , size=9 , prob=p_grid )
#### Compute Product of Likelihood and Prior ####
unstd.posterior <- likelihood * prior
#### Standardize the Posterior (Sums to 1) ####
posterior <- unstd.posterior / sum(unstd.posterior)
#### Plot Grid Approximation ####
plot( p_grid ,
posterior ,
type="b" ,
xlab="probability of water" ,
ylab="posterior probability")
mtext("20 points")
```
This is the plot:
[](https://i.stack.imgur.com/5PPjT.png)
Conceptually, I get most of what is going on here. The code creates a flat prior, the likelihood of a result based on the given arguments, and the resulting posterior distribution. But my main question is what the grid here does. I know that `p_grid` is a sequence of 20 numbers from 0 to 1, but I don't quite understand why this is done.
| What is the "grid" in Bayesian grid approximations? | CC BY-SA 4.0 | null | 2023-03-13T02:51:39.053 | 2023-03-13T09:15:59.557 | null | null | 345611 | [
"r",
"probability",
"bayesian",
"grid-approximation"
] |
609239 | 5 | null | null | 0 | null | One of the conditioning techniques used for Bayesian inference is grid approximation. This takes a finite grid of values (say 100 values from 0 to 1) and approximates the continuous posterior distribution at those values. Some more information on this technique can be found at this link:
[https://pub.towardsai.net/bayesian-inference-how-grid-approximation-works-e2c79a516c49](https://pub.towardsai.net/bayesian-inference-how-grid-approximation-works-e2c79a516c49)
| null | CC BY-SA 4.0 | null | 2023-03-13T02:57:53.140 | 2023-03-13T08:16:41.487 | 2023-03-13T08:16:41.487 | 345611 | 345611 | null |
609240 | 4 | null | null | 0 | null | Grid approximation is a Bayesian statistical technique for approximating a continuous posterior distribution using a finite grid of values. This tag can be used with the `bayesian` tag. | null | CC BY-SA 4.0 | null | 2023-03-13T02:57:53.140 | 2023-03-13T08:16:50.063 | 2023-03-13T08:16:50.063 | 345611 | 345611 | null |
609241 | 1 | 609253 | null | 3 | 395 | I'm working on an unassessed course problem,
>
The file Pas-mile.txt contains the monthly numbers of passenger miles travelled on US airlines for each month between January 1960 and December 1977. Find an ARIMA model for the series, carrying out appropriate diagnostic checks.
I've put the data at the bottom of this post. Here's a plot of the time series.
[](https://i.stack.imgur.com/9Bvni.png)
I differenced the data seasonally and non-seasonally. I think I can pose my question using only one of these, say the seasonal one.
```
y <- ts(data)
x <- diff(y,lag=12,differences=1)
```
I visually inspected the acfs and pacfs as suggested by my course notes,
```
acf(x, lag.max=216)
acf(x, type='partial', lag.max=216)
```
and I also did a grid-search:
```
aic.df <- data.frame()
for (i in 1:12){
for (j in 1:12){
aic.df[i,j] <- AIC(arima(x, order=c(i-1,0,j-1)))
}
}
aic.df
```
The lowest AIC (I think) is 324.5148, for $\text{ARIMA}(10,1,8)_{12}$. I tried searching a larger grid but stopped because the computation was taking too long. I also got quite a lot of messages like
`Warning: possible convergence problem: optim gave code = 1` and `Warning: NaNs produced`.
This seems like quite a lot of parameters, and the non-seasonal differencing adds more. Writing the model out as $y_t=\dots$ would be quite cumbersome. If I had more computing power, perhaps I could have found a still better model with still more parameters. As I understand it, there's no risk of overfitting since the AIC takes parameter number into account. So what's the 'right' number of parameters?
Data
```
2.42 2.14 2.28 2.50 2.44 2.72 2.71 2.74 2.55 2.49 2.13 2.28
2.35 1.82 2.40 2.46 2.38 2.83 2.68 2.81 2.54 2.54 2.37 2.54
2.62 2.34 2.68 2.75 2.66 2.96 2.66 2.93 2.70 2.65 2.46 2.59
2.75 2.45 2.85 2.99 2.89 3.43 3.25 3.59 3.12 3.16 2.86 3.22
3.24 2.95 3.32 3.29 3.32 3.91 3.80 4.02 3.53 3.61 3.22 3.67
3.75 3.25 3.70 3.98 3.88 4.47 4.60 4.90 4.20 4.20 3.80 4.50
4.40 4.00 4.70 5.10 4.90 5.70 3.90 4.20 5.10 5.00 4.70 5.50
5.30 4.60 5.90 5.50 5.40 6.70 6.80 7.40 6.00 5.80 5.50 6.40
6.20 5.70 6.40 6.70 6.30 7.80 7.60 8.60 6.60 6.50 6.00 7.60
7.00 6.00 7.10 7.40 7.20 8.40 8.50 9.40 7.10 7.00 6.60 8.00
10.45 8.81 10.61 9.97 10.69 12.40 13.38 14.31 10.90 9.98 9.20 10.94
10.53 9.06 10.17 11.17 10.84 12.09 13.66 14.06 11.14 11.10 10.00 11.98
11.74 10.27 12.05 12.27 12.03 13.95 15.10 15.65 12.47 12.29 11.52 13.08
12.50 11.05 12.94 13.24 13.16 14.95 16.00 16.98 13.15 12.88 11.99 13.13
12.99 11.69 13.78 13.70 13.57 15.12 15.55 16.73 12.68 12.65 11.18 13.27
12.64 11.01 13.30 12.19 12.91 14.90 16.10 17.30 12.90 13.36 12.26 13.93
13.94 12.75 14.19 14.67 14.66 16.21 17.72 18.15 14.19 14.33 12.99 15.19
15.09 12.94 15.46 15.39 15.34 17.02 18.85 19.49 15.61 16.16 14.84 17.04
```
| What's the 'right' number of parameters for an ARIMA model? | CC BY-SA 4.0 | null | 2023-03-13T03:25:38.940 | 2023-03-13T12:40:12.647 | 2023-03-13T12:40:12.647 | 285236 | 285236 | [
"r",
"time-series",
"arima",
"model-selection",
"differencing"
] |
609242 | 1 | 609460 | null | 1 | 49 | Propensity score matching techniques can be assessed and compared with covariate balance metrics like the standardized mean difference (SMD).
However, SMDs don't account for varying matching rates.
For example, how would you compare Matching Technique A, which achieves 95% matched rate but low covariate balance (SMDs in 0.1 to 0.5 range), with Matching Technique B, which has a 70% matched rate and SMDs all ~0.0?
| Evaluating success of propensity score matching with single metric that accounts for both covariate balance and matching rate? | CC BY-SA 4.0 | null | 2023-03-13T03:46:49.833 | 2023-03-14T17:30:22.020 | null | null | 13634 | [
"propensity-scores",
"matching",
"standardized-mean-difference",
"covariate-balance"
] |
609243 | 2 | null | 609238 | 5 | null | Let me make up an example to make it easier to understand. It is not what the book says but it is the same idea and I think it will make it even easier.
---
Suppose you have independent samples $x_1,...,x_{20}$ from a $\textbf{Nor}(\mu,\sigma^2)$ distribution. You would like to draw/find the posterior distribution of the data with the priors (for example) $\mu\sim \textbf{Unif}(0,20)$ and $\sigma \sim \textbf{Exp}(0.5)$. Let us generate some fake data in R,
```
set.seed(2024)
data = rnorm(20, mean = 10, sd = 2)
```
The posterior distribution estimates $\mu$ and $\sigma$, i.e. it is a two-dimensional distribution. Let $f(\mu,\sigma)$ denote the likelihood/posterior of the data given by Bayes' theorem. Therefore,
$$ f(\mu,\sigma) = (\text{constant}) \times \prod_{k=1}^{20} f_X(x_k) g(\mu) h(\sigma) $$
Here $f_X(\cdot)$ is the PDF of $X\sim \textbf{Nor}(\mu,\sigma^2)$, $g(\cdot)$ is the PDF of $\mu$, and $h(\cdot)$ is the PDF of $\sigma$. Once we fix specific values of $\mu$ and $\sigma$, we can evaluate $f_X(x_k)$ to get a number.
Instead of using calculus we use a discrete approximation. We take $\mu$ and discretize it, say from $0$ to $20$, and then discretize $\sigma$ from, say, $0$ to $5$. Then we simply evaluate the value of the posterior at each of those points. Let us illustrate this with some code. So first we define this posterior function in R,
```
f = function(mu,sigma){
prod(dnorm(data, mean = mu, sd = sigma))*dunif(mu, min = 0, max = 20)*dexp(sigma, rate = 0.5)
}
```
Next we generate a discretization.
```
mu = seq( from = 0, to = 20, length.out = 30)
sigma = seq( from = 0, to = 5, length.out = 30)
```
Now we can evaluate the posterior at each of those points. We will store all of those combinations in a matrix of possibilities.
```
posterior = matrix(NA, nrow = 30, ncol = 30)
for(n in 1:30){
for(m in 1:30){
posterior[n,m] = f(mu[n],sigma[m])
}}
```
Now do not forget to normalize your posterior! Here is the code which will accomplish this:
```
mu.thiccness = mu[2] - mu[1]
sigma.thiccness = sigma[2] - sigma[1]
posterior = posterior/sum( posterior*mu.thiccness*sigma.thiccness )
```
Note: There is a mistake in "Statistical Rethinking": the author only sums the values; he did not take into account the thiccness of the grid.
Now you can display your posterior as a matrix,
```
View(posterior)
```
But even better is to visualize it. The posterior here can be visualized as a two-dimensional distribution, i.e. a surface. Here is some code to generate this picture.
```
persp( mu, sigma, posterior,
theta = 30, phi = 20, col = "red",
shade = 0.5, ticktype = "detailed" )
```
From this picture you can see that the posterior is peaked at its most likely estimate. Exactly what should happen.
| null | CC BY-SA 4.0 | null | 2023-03-13T06:16:04.340 | 2023-03-13T06:16:04.340 | null | null | 68480 | null |
609246 | 1 | null | null | 1 | 38 | While studying univariate time series analysis, I got curious about how I can apply the AIC metric to determine the appropriate order for an ARIMA model.
As far as I've investigated, in many texts ARIMA models assume the error terms to be white noise, not Gaussian white noise, which means that there is no distributional assumption on the error terms.
Then, how can we define AIC properly for such ARIMA models? Or is it implicitly assumed to be normal?
| AIC for ARIMA model | CC BY-SA 4.0 | null | 2023-03-13T07:01:26.770 | 2023-03-13T07:44:35.360 | 2023-03-13T07:44:35.360 | 1352 | 383074 | [
"arima",
"aic"
] |
609247 | 2 | null | 609234 | 13 | null | Many things are possible. ([Here is one possible data-generating process that might lead to such sub-daily seasonalities](https://xkcd.com/2737/), but no, I'm not really serious about that.)
I am not an expert in wind power generation as such, however, I find such a periodicity extremely unlikely. I understand that wind could have intra-daily patterns (differences between night and day), i.e., a seasonal period of 24 hours, and intra-yearly patterns (differences between summer and winter), i.e., a seasonal period of about 8766 hours. (These are examples of [multiple-seasonalities](/questions/tagged/multiple-seasonalities) - you may find [the tag wiki](https://stats.stackexchange.com/tags/multiple-seasonalities/info) helpful.)
However, other patterns simply don't make a lot of sense from a data generation point of view. Yes, the moon might have some very weak effect on the wind, perhaps via tides, so we might have intra-monthly patterns. I see no reason why the wind itself should have intra-weekly effects. Conversely, wind power generation might actually have such patterns: since electricity demand has weekly patterns, it might be that wind farms get cycled up or down with a certain weekly influence. But subdaily patterns at all these periodicities sound like an artifact or an error to me.
I have to admit that I am a forecaster and think like one. To a forecaster, the question is not so much whether a structure is present in a time series, but whether it helps us forecast the series better. As such, I naturally turn to papers about forecasting wind power, e.g., something connected with a good journal like [the International Journal of Forecasting](https://www.google.com/search?client=firefox-b-d&q=wind+power+international+journal+of+forecasting), and then I start skimming them for whether they report this kind of seasonality. [Here is a paper](https://www.mdpi.com/1996-1073/15/24/9403) that sampled in 10-minute buckets and found daily seasonality ($24\times 6 = 144$ periods per cycle) - nothing about the kinds of seasonality you found. [The GEFCom2014](http://dx.doi.org/10.1016/j.ijforecast.2016.02.001) did include a competition on wind power forecasting, but the summary paper does not go into details on what seasonalities were modeled by the winning methods. You might be able to get a better idea of what other people have found with a deeper literature search.
| null | CC BY-SA 4.0 | null | 2023-03-13T07:29:24.987 | 2023-03-13T07:29:24.987 | null | null | 1352 | null |
609248 | 2 | null | 609234 | 24 | null |
TLDR; The signal period is $24$ hours, even though your power spectrum indicates components with smaller periods. The components with periods of $6, 8, 12$ hrs also repeat themselves every $4 \times 6 = 24$, $3 \times 8 = 24$, and $2 \times 12 = 24$ hrs. So they all have, in a way, a common period of 24 hrs.
---
>
It gives me four seasonal patterns at periods of 24, 12, 08, and 06
This sounds like you get overtones. If your periodogram is a sort of Fourier spectrum, then this is not weird. It means that the daily pattern consists of more structure than just a single sine wave.
This doesn't mean that the period of the signal is smaller than 1 day. Below is an example signal constructed with several overtones (wavelengths smaller than 1 day), and you can see that the period of the signal is 1 day. The higher-frequency signals influence the shape but not the period of the signal.
[](https://i.stack.imgur.com/NPqI6.png)
The period of a function that is a sum of periodic functions is the [least common multiple](https://en.wikipedia.org/wiki/Least_common_multiple) of the periods of the functions in the sum.
```
### t is time for one week of data sampled every ten minutes
t = seq(0,7*24*60,10)
### some example measurement of data that depends on sin waves with multiple sub-daily periods
Td = (24*60)/(2*pi) ### daily period
y = 2 + sin((t+1000)/(Td)) + 0.4* sin((t+1200)/(Td/2)) + 0.1* sin((t+800)/(Td/3)) + 0.1* sin((t+1000)/(Td/4))
### plot
plot(t/24/60,y+rnorm(7*24*6,0,0.2),
type = "l", xlab = "time in days", ylab = "signal",
main = "example of signal with a daily period, but several overtones")
```
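As a quick numeric check of the least-common-multiple statement above, for the periods reported in the question (a Python one-liner here, since base R has no built-in lcm):

```python
import numpy as np

# Periods (in hours) found in the periodogram.
periods = [24, 12, 8, 6]

# The period of the summed signal is the least common multiple of the component periods.
print(np.lcm.reduce(periods))  # → 24
```

So the four spectral peaks are all consistent with a single 24-hour cycle.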
| null | CC BY-SA 4.0 | null | 2023-03-13T07:35:51.383 | 2023-03-13T14:21:08.943 | 2023-03-13T14:21:08.943 | 164061 | 164061 | null |
609249 | 2 | null | 609246 | 1 | null | That is a very good question.
Especially in the time series forecasting subdiscipline, people are frequently rather sloppy about their terminology, using the term "white noise" to refer to Gaussian white noise implicitly. [See here for an otherwise excellent textbook that does so.](https://otexts.com/fpp3/arima.html)
Conversely, people who look at ARIMA from a more "statistical" point of view tend to be a little more precise. For instance, [Shumway & Stoffer (2016)](https://link.springer.com/book/10.1007/978-3-319-52452-8) in section 3.1 on p. 78 explicitly refer to Gaussian white noise in their introduction to ARIMA processes. [Brockwell & Davis (2016)](https://link.springer.com/book/10.1007/978-3-319-29854-2) introduce ARIMA with general white noise innovations, but when they come to the AIC(c) criterion in section 5.2.2 on p. 151, they are careful to note the condition on the white noise to be Gaussian when they give the explicit formula.
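For reference, the corrected criterion that Brockwell & Davis state (their AICC for a causal ARMA($p,q$) model with Gaussian white noise; quoted from memory, so please check the book for the exact form) is
$$\text{AICC} = -2\ln L\big(\boldsymbol{\phi}, \boldsymbol{\theta}, S(\boldsymbol{\phi},\boldsymbol{\theta})/n\big) + \frac{2(p+q+1)\,n}{n-p-q-2},$$
where $L$ is the Gaussian likelihood of the data and $S(\boldsymbol{\phi},\boldsymbol{\theta})$ is the corresponding sum of squares. The point is that the Gaussianity assumption enters precisely through $L$: without a distributional assumption on the innovations there is no likelihood to plug into the criterion.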
Also, non-Gaussian ARIMA processes are very rarely studied. I would honestly not be able to recall anyone doing so (but then, I'm a forecaster, not a statistician, so my view of the field is biased). There are people looking at integer-valued INARMA processes, but these are different from "ARMA with non-Gaussian white noise innovations".
Bottom line: when the Gaussianity is not explicitly mentioned, but an AIC (or any other information criterion) is calculated, you can probably safely assume it is Gaussian. (If you are a reviewer of a paper, it would be good to ask the authors to be more precise in such a situation.)
| null | CC BY-SA 4.0 | null | 2023-03-13T07:42:31.990 | 2023-03-13T07:42:31.990 | null | null | 1352 | null |