612703
1
null
null
2
28
I have carried out a designed agricultural experiment with two treatments and recorded the effect on the abundance of a pest insect. The field experiment was divided into four blocks with two plots (replications) per treatment and block, resulting in 2 x 2 x 4 = 16 plots. Pest insects were counted per plant on the same 15 plants in a row in each of the replications. The pest insects originated from one or two nearby fields and are therefore not evenly distributed. The data look like this:

|Treatment |Block |Plant |Plot |Insects |
|---------|-----|-----|----|-------|
|A |1 |1 |1 |0 |
|A |1 |2 |1 |5 |
|A |1 |3 |1 |2 |
|... | | | | |
|B |4 |15 |16 |1 |

Since the counts of `insect` are Poisson distributed (or negative binomial, zero-inflated, generalized Poisson, zero-inflated Poisson ... whatever best fits the data), I was going to use a GLMM (`glmmTMB` in R). The latest book by Zuur and Ieno (2021) gives good guidance on choosing the best distribution, so I have no questions about this. The treatments are what I'm interested in, so `treatment` is an essential covariate in the model. The nested structure is accounted for by `+ (1 | block/plot)`. This leads to the model:

```
model <- glmmTMB(insect ~ treatment + (1 | block / plot), family = poisson, data = mydata)
```

But this still doesn't take into account the spatial correlation of the 15 adjacent plants. As suggested by A. F. Zuur et al. (2009) in 'Mixed Effects Models and Extensions in Ecology with R', p. 161 and following, I first made a bubble plot to get an idea of the spatial patterns. I used the standardized residuals from a simple model with no spatial/correlation structure

```
simple_model <- glm(insect ~ treatment, family = "poisson", data = mydata)
E <- rstandard(simple_model)
```

and the coordinates of the plants. In the bubble plot you can clearly see the 16 plots with the 15 plants in a row (the distance from plant 1 to plant 15 is about 5 m).
As the positive and negative residuals are grouped together, it looks to me like there is high spatial correlation. [](https://i.stack.imgur.com/7eeqA.jpg) I also produced a semi-variogram to see whether it would be appropriate to use one of the suggested correlation structures, such as `correlation = corGaus()` in `gls` (probably I would have to use `+ gau()` in `glmmTMB` for my purpose). As my understanding is that the spatial correlation occurs mainly within the 5 m plant rows, I also created a semi-variogram with `cutoff = 5`. [](https://i.stack.imgur.com/MXhnm.jpg) [](https://i.stack.imgur.com/HDRZc.jpg) In none of the variograms do I see any of the proposed correlation patterns (Gaussian, exponential, linear ...). Unfortunately, I have no idea how to proceed now. Does anyone have an idea how I can implement the correct correlation structure in my model?
Analysis of spatially correlated count data from a designed agricultural experiment
CC BY-SA 4.0
null
2023-04-12T15:37:25.467
2023-04-13T08:31:11.293
2023-04-13T08:31:11.293
383278
383278
[ "mixed-model", "generalized-linear-model", "count-data", "glmmtmb", "spatial-correlation" ]
612704
2
null
612505
5
null
Regarding the truth of a single experimental hypothesis, there are two cases:

- In the first case, we assume the null to be true; the false positive error rate $\alpha$ still applies. So given that experiment 1 was a false positive, the probability that experiment 2 is a false positive is still 0.05; any other value is the gambler's fallacy.
- In the second case, we assume the null to be false, but the precise value of the parameter or estimand is unknown. If the original study is well powered (say 0.8) and the follow-up is identical, then, if the assumptions are correct, the probability of the replicate showing significance is 0.8. If you update the confirmatory design based on the findings from the first study, it may be less likely to reproduce p < 0.05, because the initial study results are known to be favorable given that the hypothesis test was significant.

Your arithmetic presentation doesn't make sense in the context of a single hypothesis, because we cannot speak of a heterogeneous "truth" of the null, not without unnecessarily invoking a Bayesian approach.

Regarding a collection of hypotheses, such as a scientific body of evidence or a clinical trial repository, you can speak of a frequency distribution of false positives and true positives, assuming negative primary results are not published (a frequent problem of publication bias). The exact distribution depends on the quality of the science initially performed. So if there are 10,000 initial attempts at experiments and only 1,000 of these are feasible (the null is false, and the study is well controlled with 80% power), then there are 1000 * 0.80 = 800 true positives and 9000 * 0.05 = 450 false positives that gain publication. That is, the probability that any given publication is actually correct is 800/1250 = 64%. If we replicate all publications, the 800 true positives will "replicate p < 0.05" with 80% probability, so 640 true positive findings are confirmed. However, the 450 false positives will only replicate with 5% probability, so only about 22 false positives are confirmed. Overall, there will be a (640 + 22.5)/1250 ≈ 53% confirmation rate. We can outline the required parameters below:

$$ N(\text{Confirmed}) = N(\text{Studies with } \mathcal{H}_0 \text{ true}) \alpha^2 + N(\text{Studies with } \mathcal{H}_0 \text{ false}) \beta^2$$

where $\alpha$ is the false positive error rate and $\beta$ is the power (note that you can easily generalize this to studies with differing $\alpha$s and $\beta$s).
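The worked example above can be checked with a few lines of arithmetic (a minimal sketch that just replays the numbers from the text):

```python
# Worked example from the text: 10,000 initial experiments,
# 1,000 with a false null and 80% power, alpha = 0.05.
alpha, power = 0.05, 0.80
null_false, null_true = 1000, 9000

true_pos = null_false * power        # 800 published true positives
false_pos = null_true * alpha        # 450 published false positives
published = true_pos + false_pos     # 1250 publications in total

p_correct = true_pos / published     # probability a publication is correct
confirmed = true_pos * power + false_pos * alpha
confirm_rate = confirmed / published

print(p_correct)      # 0.64
print(confirm_rate)   # 0.53
```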
null
CC BY-SA 4.0
null
2023-04-12T15:37:49.327
2023-04-12T15:49:36.520
2023-04-12T15:49:36.520
8013
8013
null
612705
1
612712
null
0
59
For classification problems, I wonder why using different kinds of loss functions makes sense. In particular, it feels like the model being learned, $p(y|X)$, can always be thought of as a binomial or multinomial distribution. Consequently, we can always minimize cross-entropy, as it is equivalent to the maximum likelihood for binomial or multinomial distribution. Yet I do see that other forms of loss functions appear to be more effective than cross-entropy. For example, [polyloss](https://arxiv.org/abs/2204.12511) claims to outperform cross entropy and focal loss on a variety of classification tasks. My question is then why not always use cross entropy? Why can a different loss sometimes do better?
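The equivalence claimed above is easy to verify numerically; the sketch below (with made-up probabilities) shows that average cross-entropy against one-hot targets equals the negative mean log-likelihood of the labels under a categorical (multinomial with n = 1) model:

```python
import math

# Predicted class probabilities for 3 samples, with true labels y.
probs = [[0.7, 0.2, 0.1],
         [0.1, 0.8, 0.1],
         [0.3, 0.3, 0.4]]
y = [0, 1, 2]
onehot = [[1.0 if k == t else 0.0 for k in range(3)] for t in y]

# Cross-entropy against one-hot targets ...
ce = -sum(sum(o * math.log(p) for o, p in zip(oh, pr))
          for oh, pr in zip(onehot, probs)) / len(y)

# ... equals the negative mean log-likelihood of the observed labels.
nll = -sum(math.log(probs[i][y[i]]) for i in range(len(y))) / len(y)
```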
Why do we use different loss functions for classification?
CC BY-SA 4.0
null
2023-04-12T15:41:06.180
2023-04-12T17:01:12.047
2023-04-12T16:57:18.083
28942
28942
[ "classification", "loss-functions" ]
612706
2
null
605396
1
null
The paper "Simple Buehler-optimal confidence intervals on the average success probability of independent Bernoulli trials" by Bancal and Sekatski ([https://arxiv.org/pdf/2212.12558.pdf](https://arxiv.org/pdf/2212.12558.pdf)) would provide one solution. That paper studies the following problem: given $m$ independent but non-identical Bernoulli draws, produce a one-sided confidence interval for the average of the Bernoulli parameters. To apply this to your problem, consider that you have $m=7N$ independent Bernoulli draws in the form of $W\in \mathbb{R}^{N\times 7}$, each with $W_{ij}\sim \mathrm{Bernoulli}(q_{ij})$. In your special case we have that $q_{ij}=p_j$. The paper above gives you a way to put a confidence interval on what they call $\bar q = \sum_{i=1}^N \sum_{j=1}^7 q_{ij} / (7N) = \sum_{j=1}^7 p_{j} / 7$. That, in turn, implies a confidence interval on $\sum_{j=1}^7 p_j = 7\bar q$ (just multiply the confidence interval from the paper by 7). Note: it might be possible to use the fact that $q_{ij}=p_j$ to enable tighter confidence intervals; I'm not sure.
null
CC BY-SA 4.0
null
2023-04-12T15:47:57.870
2023-04-12T15:47:57.870
null
null
379030
null
612707
2
null
612690
1
null
In order to isolate a causal effect, we need the causal effect to be "identifiable." At a high level, assuming binary variables here, a causal effect is identifiable if we can express the treatment effect that we care about — in this case $P(Cancer(Drug = 1)) - P(Cancer(Drug = 0))$ — in terms of quantities computable from our observed data. There are a few conditions that need to be satisfied for our causal effect to be identifiable, but since you're asking about "what should I control for," the one that is most relevant is exchangeability/conditional exchangeability. Formally, for your setting, we'd express this as $Cancer(Drug) \perp Drug \mid L$ — conditioned on some set of confounders $L$, there is no dependence between the counterfactual value of "Cancer" and the observed treatment "Drug." "The hard part" is determining "what goes in $L$." Luckily, the "backdoor criterion" exists to determine which variables you need to control for in a given causal DAG in order to achieve (conditional) exchangeability. This criterion states that, given a causal DAG, you need to "block" all "paths" between treatment and outcome that aren't the treatment -> outcome arrow denoting the effect you're trying to estimate. You can think of a path in a DAG as a chain of arrows (ignoring the direction for now). To block a path, there needs to be either a "collider" ($\rightarrow X \leftarrow$, where $X$ is some placeholder variable) that we are not conditioning on (+ one other condition that I'll omit for simplicity), or we need to condition on a non-collider ($\rightarrow X \rightarrow$ or $\leftarrow X \rightarrow$). If you apply these conditions to your DAG, you'll see that, to achieve conditional exchangeability, we need to block the path $Drug \leftarrow Age \rightarrow Cancer$. Since $Age$ is a non-collider, we need to condition on it. We do not need to condition on $Area$, since it does not lie on a path between $Cancer$ and $Drug$. 
There may be settings or specific designs where you might condition on $Area$, but for identifying the causal effect of $Drug$ on $Cancer$, there is no need.

Further reading: My summary of the backdoor criterion is derived from [these lecture notes](https://jean997.github.io/BIOST_881_causal_inference/slides_2023/2_dags_confounding.html#32) (slides 27-48), which give a further overview of "what do I condition on." For further details, I'd recommend reading the first 3 chapters (approximately) of [What If?](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/); it's a fairly approachable textbook on causal inference.
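The need to condition on $Age$ (and not $Area$) can be illustrated with a toy simulation; the data-generating process and effect sizes below are invented for illustration, not taken from the question:

```python
import random

random.seed(0)
n = 200_000

# Toy data-generating process (numbers invented for illustration):
# Age confounds Drug -> Cancer; the true effect of Drug is +0.10
# on the risk scale, and Age raises baseline risk by +0.30.
naive_treated, naive_control = [], []
strata = {0: [[], []], 1: [[], []]}   # age -> [control, treated] outcomes
for _ in range(n):
    age = int(random.random() < 0.5)
    drug = int(random.random() < (0.7 if age else 0.3))  # older -> more treated
    cancer = int(random.random() < 0.10 * drug + 0.30 * age)
    (naive_treated if drug else naive_control).append(cancer)
    strata[age][drug].append(cancer)

mean = lambda xs: sum(xs) / len(xs)

# The unadjusted contrast mixes in the Age effect and is biased upward.
naive = mean(naive_treated) - mean(naive_control)

# Backdoor adjustment: stratify on Age, then average over P(Age).
adjusted = sum((mean(strata[a][1]) - mean(strata[a][0])) * 0.5 for a in (0, 1))
```

The adjusted estimate recovers the true +0.10, while the naive one does not; no adjustment for $Area$ is needed because it lies on no open backdoor path.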
null
CC BY-SA 4.0
null
2023-04-12T16:18:03.753
2023-04-12T16:18:03.753
null
null
263918
null
612708
2
null
554481
0
null
Interestingly, Wilcoxon's 1945 paper introducing the rank sum test and signed rank test did not include any such notation for the rank of $x_i$ as I ask about, and only described ranking in the text. Mann and Whitney (1947) similarly do not give a notation for the rank of $x_i$, but only use $T$ as "the sum of the ranks of the $y$'s in the ordered sequence of $x$'s and $y$'s." Some observations from notable literature on rank-based tests:

- In their 1952 paper motivating the Kruskal-Wallis test, Kruskal & Wallis used $R_i$ and $\overline{R}_i$ to represent the sum of ranks and the mean rank in the $i^{\text{th}}$ group respectively, but use no notation for the rank of the $i^{\text{th}}$ observation of $x$.
- Dunn's (1964) pairwise test, post hoc to rejecting the Kruskal-Wallis null hypothesis, used $T_i$ to indicate the rank of $x_i$.
- Conover and Iman, in their (1979) presentation of a more powerful alternative to Dunn's test, used $R_i$ for the rank of $x_i$ (with a different index to indicate the rank of a different variable, e.g., $R_j$ for the rank of $y_j$).
- Kornbrot (1990) uses $r(i)$ for the rank of the order statistic $x(i)$, and $t_a(i,j)$ as the rank of the differences $r(i) - r(j)$ in paired data (so $i=1$ corresponds to $j=1$).
- Conover's Practical Nonparametric Statistics uses the notation $R(X_i)$ for the rank of $X_i$ (e.g., in the rank sum test), and $R(X_{ij})$ for the rank of $X_i$ in group $j$ (i.e., in the Kruskal-Wallis test). Similarly, Gibbons and Chakraborti's Nonparametric Statistical Inference uses $r(X_i)$, but a different index for ranks in different groups (e.g., $r(X_i)$ vs $r(X_j)$).

Conclusion: It appears that "the rank of $x_i$" does not have a common and widely used notation in the originating texts of several of the most common rank-based tests, nor in two popular textbooks on nonparametric statistics and inference. Explicitly used notations include $R_i$, $T_i$, $R(X_i)$, and $r(X_i)$.

References

Conover, W. J., & Iman, R. L.
(1979). [On multiple-comparisons procedures](http://library.lanl.gov/cgi-bin/getfile?00209046.pdf) (Technical Report LA-7677-MS). Los Alamos Scientific Laboratory.

Conover, W. J. (1999). Practical Nonparametric Statistics (3rd ed.). Wiley.

Dunn, O. J. (1964). [Multiple Comparisons Using Rank Sums](https://www.tandfonline.com/doi/pdf/10.1080/00401706.1964.10490181). Technometrics, 6(3), 241–252.

Kornbrot, D. E. (1990). [The rank difference test: A new and meaningful alternative to the Wilcoxon signed ranks test for ordinal data](https://www.researchgate.net/profile/Diana-Kornbrot/publication/230264296_The_rank_difference_test_A_new_and_meaningful_alternative_to_the_Wilcoxon_signed_ranks_test_for_ordinal_data/links/5e7dda9ca6fdcc139c0902bf/The-rank-difference-test-A-new-and-meaningful-alternative-to-the-Wilcoxon-signed-ranks-test-for-ordinal-data.pdf). British Journal of Mathematical and Statistical Psychology, 43(2), 241–264.

Kruskal, W. H., & Wallis, W. A. (1952). [Use of ranks in one-criterion variance analysis](https://people.ucalgary.ca/%7Ejefox/Kruskal%20and%20Wallis%201952.pdf). Journal of the American Statistical Association, 47(260), 583–621.

Mann, H. B., & Whitney, D. R. (1947). [On a Test of Whether One of Two Random Variables Is Stochastically Larger Than the Other](http://webspace.ship.edu/pgmarr/Geo441/Readings/Mann%20and%20Whitney%201947%20-%20On%20a%20Test%20of%20Whether%20one%20of%20Two%20Random%20Variables%20is%20Stochastically%20Larger%20than%20the%20Other.pdf). Annals of Mathematical Statistics, 18, 50–60.

Wilcoxon, F. (1945). Individual comparisons by ranking methods. Biometrics Bulletin, 1(6), 80–83.
null
CC BY-SA 4.0
null
2023-04-12T16:29:07.643
2023-04-12T16:29:07.643
null
null
44269
null
612710
2
null
612673
3
null
$\DeclareMathOperator{\pl}{\operatorname{plim}}$ We have \begin{align}\pl \hat{\boldsymbol\beta}&= \boldsymbol \beta + \pl \left(\frac{\mathbf{X^\top X}}{n}\right)^{-1}\cdot\pl \left(\frac{\mathbf X^\top\boldsymbol \varepsilon}{n}\right)\tag 1, \label 1\end{align} The bone of contention could be $\pl \left(\frac{\mathbf{X^\top X}}{n}\right)=:\mathbf Q.$ When $\bf X$ is of full column rank, we can assume $\mathbb E\left[\mathbf x_i\mathbf x_i^\top\right] = \mathbf Q$, and the rest is what you asserted. The bare minimum or "very weak" assumptions that $\mathbf X$ should satisfy are the Grenander conditions. Observe that once $\lim_{n\to\infty}\lambda_\text{smallest}(\mathbf{X^\top X}) = \infty$, $\hat{\boldsymbol\beta}$ becomes consistent. (See [this relevant post of mine](https://stats.stackexchange.com/a/601365/362671).)

---
## Reference:

$\rm [I]$ Advanced Econometrics, Takeshi Amemiya, Harvard University Press, $1985,$ sec. $3.5,$ p. $95.$
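As a quick numeric illustration (a hedged sketch with made-up i.i.d. regressors, not taken from the reference): when $\operatorname{plim}(\mathbf X^\top \boldsymbol\varepsilon / n) = \mathbf 0$ and $\mathbf Q$ is nonsingular, the OLS estimate drifts toward the true $\boldsymbol\beta$ as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = np.array([2.0, -1.0])   # true coefficients (invented for the demo)

def ols_error(n):
    # i.i.d. regressors with exogenous noise: (X'X)/n converges to a
    # nonsingular Q and (X'e)/n to 0, so beta_hat is consistent.
    X = rng.normal(size=(n, 2))
    y = X @ beta + rng.normal(size=n)
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.max(np.abs(beta_hat - beta))

print(ols_error(100), ols_error(100_000))  # the error shrinks as n grows
```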
null
CC BY-SA 4.0
null
2023-04-12T16:32:03.823
2023-04-12T16:32:03.823
null
null
362671
null
612711
2
null
612505
4
null
One must distinguish between two cases:

- $P_1 < \alpha$: what is the probability that $P_2 < \alpha$?
- $P_1 = \alpha$: what is the probability that $P_2 < \alpha$?

Goodman treats the second case. For the second case the answer is plausibly 1/2, and it does not really depend on using p-values. Take any two statistics, say $S_1$ and $S_2$, and assume that nothing else is known. Then $P(S_2 < S_1) = P(S_1 < S_2)$. Of course, in practice, any Bayesian having observed a particular value for $S_1$ may think differently, but that would depend on something else being known or believed. There is a published commentary of mine in Statistics in Medicine, and the issue is also treated in Statistical Issues in Drug Development.

References

SENN, S. J. 2002. A comment on replication, p-values and evidence by S. N. Goodman, Statistics in Medicine 1992; 11:875-879. Statistics in Medicine, 21, 2437-44.

SENN, S. J. 2021. Statistical Issues in Drug Development, Chichester, John Wiley & Sons.
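The symmetry argument above is easy to check by simulation (a toy sketch; the normal distribution is an arbitrary choice, since any continuous distribution gives the same answer by exchangeability):

```python
import random

random.seed(1)

# Two i.i.d. draws of "the same statistic" from any continuous
# distribution: by symmetry, P(S2 < S1) = P(S1 < S2) = 1/2.
trials = 100_000
wins = sum(random.gauss(0, 1) < random.gauss(0, 1) for _ in range(trials))
print(wins / trials)  # close to 0.5
```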
null
CC BY-SA 4.0
null
2023-04-12T16:33:16.727
2023-04-17T21:20:52.160
2023-04-17T21:20:52.160
1679
305995
null
612712
2
null
612705
3
null
Here are a couple of situations where you may not want to use cross-entropy:

- Class imbalance: In situations where the number of samples in different classes is imbalanced, cross-entropy may not perform well. This is because cross-entropy puts more emphasis on correctly classifying the majority class, which can lead to poor performance on the minority class. In such cases, loss functions like focal loss or class-balanced loss can be more effective, as they help to address this issue.
- Addressing the model's limitations: Different loss functions can encourage the model to focus on different aspects of the data that are important for the task. For example, polyloss tries to address the limitations of softmax by encouraging the model to focus on the hardest examples in the dataset. This can help to improve the model's performance on difficult examples, which may not be well addressed by cross-entropy.
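As a concrete illustration of the first point, here is a minimal sketch of binary focal loss (following the $-(1-p_t)^\gamma \log p_t$ formulation; the $\gamma$ value and probabilities are illustrative):

```python
import math

def focal_loss(p_true, gamma=2.0):
    """Binary focal loss for the probability assigned to the true class.

    gamma = 0 recovers plain cross-entropy; larger gamma down-weights
    easy, well-classified examples (p_true near 1).
    """
    return -((1.0 - p_true) ** gamma) * math.log(p_true)

# An easy example (p = 0.95) is down-weighted far more than a hard
# one (p = 0.3), shifting the training signal toward hard examples.
easy_ce, easy_fl = focal_loss(0.95, 0.0), focal_loss(0.95, 2.0)
hard_ce, hard_fl = focal_loss(0.3, 0.0), focal_loss(0.3, 2.0)
```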
null
CC BY-SA 4.0
null
2023-04-12T17:01:12.047
2023-04-12T17:01:12.047
null
null
23801
null
612713
2
null
612640
2
null
I agree with the diagnoses in the comments. If you include `offset(log(offset.var))` in your model, then you are assuming that the mean observed counts are strictly proportional to `offset.var`. In principle, observations with `offset.var` equal to 0 are either impossible (if >0 counts are observed) or provide no information (because the predicted value and the observed value should both be zero, independent of the value of any other covariates). In the comments you say > originally there were zeros, but in order to calculate the logarithm of the offset.var, I added a small number (1e-09) to the value When you have a component in the model that isn't supposed to be zero, it is tempting to set zero values to "a very small value" on the assumption that will have the minimal effect on the model estimates; however, setting zero values to 1e-9 (for example) makes them extreme values on the log scale, making them into (artificially constructed) outliers. (This issue also arises when people are trying to log-transform continuous responses that contain zeros; when log-transforming counts, $\log(1+x)$ is a common (and natural) way to avoid the problem — we don't need to pick an arbitrary small value.) The [Gordian solution](https://en.wikipedia.org/wiki/Gordian_Knot) (Alexandrian solution?) to the problem of what to do with outliers is usually to fit the model twice, once with and once without the outliers, and see what difference it makes. (To prevent snooping, you should pre-specify which version of the model will be your primary result.) You can then report that "results with and without outliers were similar", or not, depending on the results. If these outliers do make a big difference, then you may need to think much more carefully about what you're going to do with them. For example, maybe values of 0 for the offset variable really mean "less than 1 day" (depending on your experimental design), in which case it might make sense to (crudely) add 0.5 to the values? 
If your exposure variable is discrete (days), it might make sense to add 1 instead. PS: adding `offset.var` to control dispersion is not ridiculous, but in this case I think it's probably trying to patch the original self-inflicted problem of setting the zero values ...
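The outlier problem described above is easy to see numerically (a language-agnostic toy illustration with made-up counts):

```python
import math

counts = [0, 1, 3, 10]

# Adding a tiny constant before logging turns the zeros into huge
# artificial outliers on the log scale ...
tiny = [math.log(c + 1e-9) for c in counts]    # first entry is about -20.7

# ... whereas log(1 + x) keeps zero counts at 0, on a scale
# comparable to the rest of the data.
stable = [math.log1p(c) for c in counts]       # first entry is exactly 0.0
```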
null
CC BY-SA 4.0
null
2023-04-12T17:15:19.040
2023-04-13T15:28:06.390
2023-04-13T15:28:06.390
2126
2126
null
612714
2
null
606173
0
null
Finally, the solution that I've found is to use the F-score: a metric that combines both precision and recall. [](https://i.stack.imgur.com/AoZ0d.png) In my case I just had to set a low value for the weight β: for example, β = 0.1 means that precision is considered 10 times as important as recall.
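The formula in the image is the standard $F_\beta$ score; a minimal sketch (the precision/recall values are made up):

```python
def f_beta(precision, recall, beta):
    # F_beta = (1 + beta^2) * P * R / (beta^2 * P + R);
    # beta < 1 weights precision more heavily, beta > 1 weights recall.
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

p, r = 0.9, 0.5
print(f_beta(p, r, 1.0))   # balanced F1
print(f_beta(p, r, 0.1))   # close to the precision of 0.9
```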
null
CC BY-SA 4.0
null
2023-04-12T17:15:23.483
2023-04-12T17:15:23.483
null
null
340956
null
612715
2
null
612683
5
null
I have upvoted Sycorax's answer as useful. Nevertheless, I think there is a serious issue with the question: [sklearn.linear_model.LogisticRegression.score](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression.score) returns the mean accuracy, not AUC-ROC. If we used `LR.predict_proba(df_1)[:,1]` to get the predicted probabilities, the AUC-ROC values in both the training and testing sets would be higher for the "perfect" logistic regression model than for XGBoost. For example, in the testing set, XGBoost's AUC-ROC is 0.9071 and the AUC-ROC from the logistic regression is 0.9167.
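AUC-ROC can be computed from predicted probabilities via the rank (Mann-Whitney) formulation; below is a self-contained sketch with toy scores (in practice one would pass `LR.predict_proba(X)[:, 1]` to `sklearn.metrics.roc_auc_score` instead):

```python
def auc_roc(y_true, scores):
    # Mann-Whitney formulation: AUC = P(score_pos > score_neg),
    # counting ties as 1/2.
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc_roc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```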
null
CC BY-SA 4.0
null
2023-04-12T17:38:29.660
2023-04-12T17:38:29.660
null
null
11852
null
612716
1
null
null
0
69
I have multiple explanatory variables and one dependent variable. Data for all are collected on an annual basis (time series) and depend on their t-1 values. Would ARDL (autoregressive distributed lag) be appropriate to apply? If so, some other questions regarding ARDL: Do all variables, both the Xs and Y, need to be I(1), or is it OK if some Xs are I(0)? I also read somewhere that non-stationary data can be used for ARDL? If I have non-stationary data and have to make it stationary, is it right that I can either use first differencing OR just include time as an independent variable? Thank you so much in advance!
ARDL non-stationary
CC BY-SA 4.0
null
2023-04-12T17:58:49.243
2023-04-12T17:58:49.243
null
null
383188
[ "stationarity", "autoregressive", "ardl" ]
612718
1
null
null
0
42
I have data which has additive Rician noise. I found the likelihood function for the Rice distribution in equation 2.3 here: [https://arxiv.org/pdf/1403.5065.pdf](https://arxiv.org/pdf/1403.5065.pdf) For my analysis, however, we take a signal (a vector of N points) that has Rician noise, and then we transform it in the following way (given in Python code, but let me know if someone would like me to write it out):

```
import numpy as np

def convert(S, C1, C2, C3, C4):
    # Normalize by the mean of the first 5 (baseline) points
    A = S / np.mean(S[0:5])
    E0 = np.exp(-C1 * C2)
    E = (1.0 - A + A * E0 - E0 * np.cos(C3)) / (
        1.0 - A * np.cos(C3) + A * E0 * np.cos(C3) - E0 * np.cos(C3))
    R = (-1 / C2) * np.log(E)
    transformed_signal = (R - C1) / C4
    return transformed_signal
```

Where C1-C4 are known constants. The first 5 points used here are the baseline of the signal (before it changes). The total number of points in the signal is ~1000. I then fit this transformed_signal to a model to extract 2 parameters of interest. My question is: how do I calculate the likelihood function for this transformed data? Does anyone know the procedure to achieve that?
Calculating log-likelihood from transformed Rician noise
CC BY-SA 4.0
null
2023-04-12T18:28:37.400
2023-04-12T20:06:27.090
2023-04-12T20:06:27.090
60403
60403
[ "likelihood" ]
612719
2
null
609541
0
null
One approach I've taken with sales data is to identify the range which was affected by the pandemic, remove the actuals, and create dummy variables within the range using an ARIMA model to interpolate as described here: [https://otexts.com/fpp3/missing-outliers.html](https://otexts.com/fpp3/missing-outliers.html)
null
CC BY-SA 4.0
null
2023-04-12T18:38:47.973
2023-04-12T18:38:47.973
null
null
260238
null
612720
1
null
null
0
18
I am looking for some guidance (published or otherwise) on performing PCA on left-censored environmental data (where only values above an instrument's detection limit are reported). Any help is appreciated.
Performing PCA on left-censored data (non-detects)
CC BY-SA 4.0
null
2023-04-12T18:47:23.483
2023-04-12T18:47:23.483
null
null
312335
[ "pca", "censoring" ]
612721
1
613016
null
2
200
### Overview

I want to perform a Bayesian model selection on many datasets and use these same datasets to determine the required parameter priors.

### Example Scenario: Coins

Suppose I have a collection of a thousand coins produced by a machine that randomly produces fair and loaded coins. The loaded coins are not identical, but their heads ratio $θ$ follows an unknown distribution $p(θ|\mathcal{M}_\text{loaded})$ obeying some constraints (see below). For each coin, I want to decide whether it's fair using Bayesian model selection with two models $\mathcal{M}_\text{loaded}$ and $\mathcal{M}_\text{fair}$. I know:

- For each coin: the number of heads from a hundred tosses (and thus an estimator $\hat{θ}$ for the heads ratio $θ$).
- Model priors $p_\text{fair}$ and $p_\text{loaded}$ with $0.1≤p_\text{fair}≤0.9$.
- The probability density $p(θ|\mathcal{M}_\text{loaded})$ of the heads ratio of the loaded coins obeys the following constraints: symmetric around ½, smooth, and not very far from a uniform distribution, say, $0.1 < p(θ|\mathcal{M}_\text{loaded}) < 10$ everywhere.

With all this given, the main information I am lacking is a prior for $p(θ|\mathcal{M}_\text{loaded})$. I estimate this by finding a suitable distribution and fitting it to my data for all coins, ignoring coins with $0.4<\hat{θ}<0.6$, since those have a decent chance of being fair. The rest of the Bayesian model selection is straightforward.

### Questions

- Is this procedure sound? I acknowledge that I use the same data twice. However, the data for a given coin has barely any impact on the parameter priors relevant to its model selection. (I could also exclude the data for the given coin when determining the priors for its analysis, doing a thousand fits instead of just one.)
- If yes, is there a name or reference for this approach?
- If no, is there a better way to determine parameter priors for $\mathcal{M}_\text{loaded}$?
I am particularly interested in ways that can be extended to a more complex model space as well as higher-dimensional and unbounded parameter spaces.
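Once $p(θ|\mathcal{M}_\text{loaded})$ has been estimated, the per-coin selection step is a standard marginal-likelihood comparison. A minimal sketch, assuming (purely for illustration) that the fitted loaded-coin distribution came out as a symmetric Beta$(a,a)$, which makes the loaded marginal likelihood beta-binomial:

```python
from math import comb, lgamma, exp, log

def log_beta(a, b):
    # log of the Beta function via log-gamma.
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_marginal_loaded(k, n, a):
    # Beta-binomial marginal likelihood under theta ~ Beta(a, a).
    return log(comb(n, k)) + log_beta(k + a, n - k + a) - log_beta(a, a)

def log_marginal_fair(k, n):
    # Binomial(n, 0.5) likelihood for a fair coin.
    return log(comb(n, k)) + n * log(0.5)

def posterior_fair(k, n=100, a=1.2, p_fair=0.5):
    # Combine the model priors with the two marginal likelihoods.
    lf = log_marginal_fair(k, n) + log(p_fair)
    ll = log_marginal_loaded(k, n, a) + log(1 - p_fair)
    return exp(lf) / (exp(lf) + exp(ll))

# 50/100 heads favors the fair model; 75/100 heads favors loaded.
print(posterior_fair(50), posterior_fair(75))
```

The values `a = 1.2` and `p_fair = 0.5` are illustrative stand-ins for the empirically fitted prior and the known model prior.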
Is Bayesian model selection with empirical parameter priors sound?
CC BY-SA 4.0
null
2023-04-12T18:53:45.830
2023-04-16T11:43:22.670
2023-04-16T11:43:22.670
36423
36423
[ "bayesian", "model-selection", "empirical-bayes" ]
612724
1
null
null
0
22
I have eight independent variables, and I am also thinking of adding a ninth variable for time. Is this too much? Are there any consequences? I have annual data for nine years.
Too many independent variables in ARDL?
CC BY-SA 4.0
null
2023-04-12T19:07:45.943
2023-04-13T09:43:42.837
2023-04-13T09:43:42.837
383188
383188
[ "time-series", "ardl" ]
612725
1
null
null
0
29
I want to perform LDA on my cohort, which is based on 140 individuals distributed across 3 groups. These individuals have undergone an analysis of 50 variables (gene expression), so my dataset is 137x51 (1 categorical variable + 50 numerical variables). I want to perform LDA and see how the individuals behave using the multiple predictors (in my case a set of genes). However, I am not sure how to deal with missing values in the dataset, and which method fits the LDA best. I lay out my doubts here to see if somebody has experience in this topic:

The mice package has multiple approaches to do it, with "pmm", "norm", or others. From my point of view, the missing values due to the non-amplification molecular process are missing completely at random, but it seems that this consideration introduces a bias, so they could be treated as missing at random and handled with multiple imputation. The thing is: should I construct blocks with the mice package according to the genes and the families they belong to, or should I leave the default option and let the imputation work? My data follow a normal distribution. Thanks in advance
Imputation process before LDA
CC BY-SA 4.0
null
2023-04-12T19:07:51.280
2023-04-18T14:51:11.480
2023-04-18T14:51:11.480
339186
339186
[ "r", "data-imputation", "caret", "discriminant-analysis", "mice" ]
612726
1
null
null
1
37
I have an ARMA(2,1) model of the following form, $$y_t=a_1y_{t-1}+a_2y_{t-2}+\epsilon_t+b_1\epsilon_{t-1}$$ Re-arranging and using lag operators: $$(1-a_1L-a_2L^2)y_t=(1+b_1L)\epsilon_t$$ solving for $y_t$ $$y_t=\frac{(1+b_1L)}{1-(a_1L+a_2L^2)}\epsilon_t$$ Using the definition of an infinite geometric series $$(1+b_1L)\sum^\infty_{j=0}(a_1L+a_2L^2)^j\epsilon_t$$ $$(1+b_1L)\sum^\infty_{j=0}(a_1\epsilon_{t-1}+a_2\epsilon_{t-2})^j$$ $$\sum^\infty_{j=0}(a_1\epsilon_{t-1}+a_2\epsilon_{t-2})^j+\sum^\infty_{j=0}(a_1b_1\epsilon_{t-1}+a_2b_1\epsilon_{t-2})^j$$ Using this solution to compute the variance: $$Var(y_t)=Var(\sum^\infty_{j=0}(a_1\epsilon_{t-1}+a_2\epsilon_{t-2})^j+\sum^\infty_{j=0}(a_1b_1\epsilon_{t-1}+a_2b_1\epsilon_{t-2})^j)$$ $$=\sum^\infty_{j=0}Var(a_1^j\epsilon_{t-1}^j+a_2^j\epsilon_{t-2}^j)+\sum^\infty_{j=0}Var(a_1^jb_1^j\epsilon_{t-1}^j+a_2^jb_1^j\epsilon_{t-2}^j)$$ $$=\sum^\infty_{j=0}(a_1^{2j}+a_2^{2j})Var(\epsilon_{t-1}^j+\epsilon_{t-2}^j)+\sum^\infty_{j=0}(a_1^{2j}b_1^{2j}+a_2^{2j}b_1^{2j})Var(\epsilon_{t-2}^j+\epsilon_{t-3}^j)$$ I think I must have made a mistake along the way, I don't know what to do with the term $Var(\epsilon_{t-1}^j+\epsilon_{t-2}^j)$ any help would be much appreciated.
ARMA(2,1) Solution and Variance
CC BY-SA 4.0
null
2023-04-12T19:25:37.297
2023-04-12T21:49:34.653
2023-04-12T21:49:34.653
300124
300124
[ "time-series", "variance", "arima" ]
612727
1
null
null
1
41
The form for [MSE](https://en.wikipedia.org/wiki/Mean_squared_error) for $N$ data points with scalar values $Y=[Y_1,Y_2,...,Y_N]$ is given by the formula: $$ MSE = \frac{1}{N} \sum_{i=1}^N (Y_i - \hat{Y}_i)^2 $$ As I see it, $d_i = Y_i - \hat{Y}_i$, where $d_i$ is the Euclidean distance between the actual and predicted values for the $i^{th}$ data point. Thus, extending this to higher dimensions, say $D$ dimensions, $Z=[\vec{Z_1},\vec{Z_2},...,\vec{Z_N}]$, the MSE should be: $$ MSE = \frac{1}{N} \sum_{i=1}^N d_i^2 = \frac{1}{N} \sum_{i=1}^N \|Z_i - \hat{Z}_i\|^2 = \frac{1}{N} \sum_{i=1}^N \sum_{j=1}^D (Z_{ij} - \hat{Z}_{ij})^2 $$ However, although I did not see any direct result which mentions this, it seems most implementations of MSE use a different formula (not too different from what I thought above): $$MSE' = \frac{1}{N} \sum_{i=1}^N \frac{1}{D} \sum_{j=1}^D (Z_{ij} - \hat{Z}_{ij})^2$$

- Is this $MSE'$ the correct form? If the MSE should provide the Mean Squared Error, where the error is measured by the Euclidean distance between the points, then why is this averaging over $D$ too?
- I do know that it doesn't make too much of a difference (a constant factor) if we use one of the measures consistently, but which one is standard? Is there a unique definition of MSE in these cases?
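The two conventions differ only by the constant factor $1/D$, as a quick check with toy arrays shows (averaging over every axis, as `np.mean` does here, is the $MSE'$ form):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(4, 3))       # N = 4 points in D = 3 dimensions
Z_hat = rng.normal(size=(4, 3))

sq = (Z - Z_hat) ** 2
mse = sq.sum(axis=1).mean()       # mean squared Euclidean distance (MSE)
mse_prime = sq.mean()             # also averages over the D axis (MSE')

assert np.isclose(mse, 3 * mse_prime)   # they differ exactly by the factor D
```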
Mean Squared Error (MSE) formula for data points in higher dimensions
CC BY-SA 4.0
null
2023-04-12T19:33:46.057
2023-04-13T05:53:25.637
2023-04-13T05:53:25.637
385551
385551
[ "error", "mse" ]
612728
1
null
null
2
39
I have a function that takes a few hundred parameters and returns a score I want to optimize for: it's a piece of software attempting to play a game against another player. The parameters partially determine the actions of the player and so have an effect on my final score. I would like to find a set of parameters that optimizes the likely outcome of the played game. I'm facing several difficulties:

- The game is chaotic, so except for the most sensitive parameters, most of the hundreds of parameters have only a small individual effect.
- The game is computationally heavy to run. I will likely only have around 10,000 datapoints I can gather with my limited computational resources. The only way I can even get to 10k datapoints is by running it in parallel; single-threaded approaches may not work for me.
- I don't have a derivative of my function.
- Parameters can be floats, integers or booleans. Some of the ints/floats may not currently have the right sign. Booleans tend to be the most impactful parameters, but I think these are mostly set right now.
- Some parameters can entirely shut down my player if brought outside of acceptable ranges. I do not always know these acceptable ranges.
- While I am adjusting parameters, I am also adjusting the software, which subtly or not so subtly changes the meaning and ideal value of some parameters.

Due to the difficulty, I am not expecting to find even a local maximum, let alone a global one. I am happy if I can get some of the most important parameters in the right order of magnitude without messing the less important parameters up too much. So far the best approach I've found and am currently using is:

- Randomly vary a subset of parameters by picking a value from a normal distribution around the currently selected best value.
(booleans are randomly flipped) - Play a game (selfplay), then store the used parameters and final score in a file - Collect datapoints from my last n games, for floats and integers calculate a Pearson correlation coefficient (p) for every parameter correlated with my score. Then adjust every parameter x by setting x = x + abs(p) * y * p, where y is a scaling factor. Booleans are flipped if p indicates I should - Occasionally manually adjust parameters based on what seems nonsensical. - I've alternated optimizing for different rating values, not just the score, but also whether my bot has won and other relevant game specific values such as how many gamepieces I own at the end This (clearly flawed) approach at least seems to make my parameters drift closer to their ideal on average. But, if I pick a low scaling factor y, my parameter convergence is way too slow. If I pick a high y, there's a lot of unintended drift. I often observe (regardless of y and n) that my performance score decrease after a optimization attempt. I've tried some other approaches such as machine learning (neural nets and random forest trees) for parameter optimization, but with little luck. There probably isn't enough data to prevent overfitting on my noisy data Are there better approaches I can use here to optimize my parameters?
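For reference, the correlation-based update rule described above (x = x + abs(p) * y * p) can be sketched in R roughly as follows; the function name and the data-frame layout are my assumptions, not from any library:

```r
# history: one row per game, one column per parameter plus a 'score' column
# x: named list of current parameter values; y: scaling factor
update_params <- function(history, x, y = 0.1) {
  for (nm in setdiff(names(history), "score")) {
    p <- cor(history[[nm]], history$score)  # Pearson correlation with the score
    if (!is.na(p)) x[[nm]] <- x[[nm]] + abs(p) * y * p
  }
  x
}
```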
What are effective methods to maximize an unknown noisy function?
CC BY-SA 4.0
null
2023-04-12T19:34:38.057
2023-04-22T16:14:22.620
2023-04-22T16:14:22.620
26948
385538
[ "machine-learning", "correlation", "optimization", "hyperparameter", "approximation" ]
612729
1
612916
null
0
58
I am conducting a study where I look at the interaction of 3 categorical variables and 1 continuous variable. However, I want to be able to see all the possible comparisons of these 4 variables. In the past, I have used `emmeans`, but I noticed that `emmeans` only takes the lowest and highest value of the continuous variable, which does not make sense in repeated measures, since it basically compares the lowest participant to the highest participant.

```
Linear mixed model fit by REML. t-tests use Satterthwaite's method ['lmerModLmerTest']
Formula: RT ~ Domain * ShiftType * TrialType * VA_k + (1 | Probe) + (1 | Story_order) + (1 | Subject)
   Data: bsmu[bsmu$ACC == 1, ]

REML criterion at convergence: 79528.9

Scaled residuals:
    Min      1Q  Median      3Q     Max
-2.2523 -0.4384 -0.1779  0.1482 12.8338

Random effects:
 Groups      Name        Variance Std.Dev.
 Probe       (Intercept)  169631   411.9
 Subject     (Intercept)  545749   738.7
 Story_order (Intercept)  101769   319.0
 Residual                3042405  1744.2
Number of obs: 4472, groups:  Probe, 380; Subject, 60; Story_order, 8

Fixed effects:
                                                Estimate Std. Error      df t value Pr(>|t|)
(Intercept)                                      2254.31     228.51   80.13   9.865 1.74e-15 ***
DomainL2                                          268.94     188.33 4375.18   1.428   0.1534
ShiftTypeNo Shift                                 -30.19     200.03 1992.88  -0.151   0.8800
ShiftTypeUnchanged                                244.29     201.03 1959.52   1.215   0.2244
TrialTypeSpace                                    482.29     206.74 2100.11   2.333   0.0198 *
VA_k                                              150.41     105.24  170.10   1.429   0.1548
DomainL2:ShiftTypeNo Shift                       -132.44     264.37 4376.51  -0.501   0.6164
DomainL2:ShiftTypeUnchanged                      -193.74     265.03 4372.25  -0.731   0.4648
DomainL2:TrialTypeSpace                          -300.61     276.90 4372.10  -1.086   0.2777
ShiftTypeNo Shift:TrialTypeSpace                  -78.89     289.34 2148.38  -0.273   0.7851
ShiftTypeUnchanged:TrialTypeSpace                -444.28     289.46 2117.47  -1.535   0.1250
DomainL2:VA_k                                     -97.90     101.58 4222.47  -0.964   0.3352
ShiftTypeNo Shift:VA_k                             56.42     101.62 4242.86   0.555   0.5788
ShiftTypeUnchanged:VA_k                           -82.56     100.25 4236.84  -0.824   0.4103
TrialTypeSpace:VA_k                                39.49     104.60 4268.38   0.377   0.7058
DomainL2:ShiftTypeNo Shift:TrialTypeSpace         198.60     385.45 4374.32   0.515   0.6064
DomainL2:ShiftTypeUnchanged:TrialTypeSpace        446.45     385.71 4379.42   1.157   0.2471
DomainL2:ShiftTypeNo Shift:VA_k                    53.32     142.70 4224.51   0.374   0.7087
DomainL2:ShiftTypeUnchanged:VA_k                  292.02     142.67 4209.58   2.047   0.0407 *
DomainL2:TrialTypeSpace:VA_k                      164.48     148.53 4217.14   1.107   0.2682
ShiftTypeNo Shift:TrialTypeSpace:VA_k            -101.93     148.10 4262.76  -0.688   0.4913
ShiftTypeUnchanged:TrialTypeSpace:VA_k             55.99     145.95 4256.12   0.384   0.7013
DomainL2:ShiftTypeNo Shift:TrialTypeSpace:VA_k   -152.50     208.23 4232.78  -0.732   0.4640
DomainL2:ShiftTypeUnchanged:TrialTypeSpace:VA_k  -422.48     206.97 4219.86  -2.041   0.0413 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Correlation matrix not shown by default, as p = 24 > 12.
Use print(x, correlation=TRUE) or vcov(x) if you need it
```
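For what it's worth, `emmeans` by default holds a continuous covariate at a single reference value, but you can request comparisons at specific values of the covariate via the `at` argument, or estimate the within-cell slopes with `emtrends`. A sketch, assuming the fitted model object is called `m` and the `VA_k` values shown are placeholders:

```r
library(emmeans)

# pairwise comparisons of the factor combinations at chosen values of VA_k
emm <- emmeans(m, ~ Domain * ShiftType * TrialType | VA_k,
               at = list(VA_k = c(0.5, 1, 1.5)))  # placeholder covariate values
pairs(emm)

# alternatively: the estimated slope of VA_k within each factor combination
emtrends(m, ~ Domain * ShiftType * TrialType, var = "VA_k")
```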
Unpacking interactions in LME4 with repeated measures design
CC BY-SA 4.0
null
2023-04-12T19:52:07.687
2023-04-14T12:29:03.513
null
null
363391
[ "lme4-nlme" ]
612730
1
null
null
1
24
I am working on a model to account for flood risk, based on three variables:

- Variable 1: drainage (float: 0–80)
- Variable 2: estimated population (float: 0–2,000)
- Variable 3: road network importance (float: 0–1)

All three variables are highly left-skewed, but they are not correlated. Supposing that the three variables are equally important for analyzing flood risk, is there a way I can combine them to create a score?
How to create a composite variable out of 3 non-correlated variables?
CC BY-SA 4.0
null
2023-04-12T19:57:59.757
2023-04-12T19:57:59.757
null
null
385555
[ "random-variable", "normalization", "standardization", "composite" ]
612731
2
null
612690
1
null
## Blocking the single confounder path is enough in this case

I assume you would like to estimate the effect of 'Drug' on 'Cancer' from some observational data. Your depiction of the graph under an intervention is correct. By choosing an adjustment set, you effectively try to transform the observational graph (left) into the interventional graph (right), insofar as the effect of 'Drug' on 'Cancer' is concerned. You are right to observe that the intervention removes two paths, while the adjustment set only seems to block one of them. This is because only the confounder path via 'Age' is relevant to the estimation of the effect of 'Drug' on 'Cancer'.

### Controlling for 'Age' closes all backdoor paths

A backdoor path is any path, regardless of direction, that would remain between your source and target node if all outgoing edges from your source node were removed. Such a path is closed when there is a collider on the path (not relevant for your setting), or if you control for one of the variables along the path. In your case, there is an open backdoor path between 'Drug', 'Age', and 'Cancer', with 'Age' being what is called a "confounding" variable. You can close it by adjusting for 'Age'. Since there are no more backdoor paths, you are done!

### Controlling for 'Area' is not necessary, but it will not hurt either

'Area' is simply a cause of 'Drug' without a special name for its position, as far as I am aware. It is not a descendant of 'Drug' and not involved in any backdoor paths. You could control for it, but do not need to, given your graph.
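As a cross-check, this backdoor analysis can be reproduced with the `dagitty` R package; a sketch, with the edge set being my reconstruction of the graph in the question:

```r
library(dagitty)

g <- dagitty("dag {
  Area -> Drug
  Age  -> Drug
  Age  -> Cancer
  Drug -> Cancer
}")

# enumerate minimal sufficient adjustment sets for the Drug -> Cancer effect
adjustmentSets(g, exposure = "Drug", outcome = "Cancer")
```

This should return the single minimal set { Age }, confirming that adjusting for 'Age' alone is sufficient.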
null
CC BY-SA 4.0
null
2023-04-12T20:22:17.763
2023-04-12T20:22:17.763
null
null
250702
null
612732
1
null
null
1
27
Tests exist to determine whether a distribution is normal, for example the Shapiro-Wilk test. I'm wondering how to determine whether I'm powered to detect that my distribution is non-normal (e.g., the null hypothesis is that the skew is 0, the alternative is that it is different from 0). I could of course run a simulation - I'm specifically interested in an analytic power analysis. Here is an example where my sample is too small (underpowered) to detect a significant effect, even though the population is skewed (the data is drawn from a gamma distribution with a skew of 2):

```
> set.seed(123)
> dat = rgamma(10, shape = 1)
> shapiro.test(dat)

	Shapiro-Wilk normality test

data:  dat
W = 0.94299, p-value = 0.5867
```

What sample size would I need to have 80% power to find a significant effect?
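Though the question asks for an analytic answer, for comparison the simulation-based power estimate mentioned above takes only a few lines; a sketch, using the gamma population from the example:

```r
set.seed(123)

# proportion of significant Shapiro-Wilk tests at sample size n,
# with data drawn from a gamma(shape = 1) population
power_shapiro <- function(n, nsim = 2000, shape = 1, alpha = 0.05) {
  mean(replicate(nsim, shapiro.test(rgamma(n, shape = shape))$p.value < alpha))
}

sapply(c(10, 25, 50, 100), power_shapiro)
```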
Power analysis to detect non-zero skew/kurtosis
CC BY-SA 4.0
null
2023-04-12T20:34:48.423
2023-04-13T01:03:40.943
2023-04-13T01:03:40.943
288142
288142
[ "statistical-power", "skewness", "kurtosis", "an" ]
612733
1
null
null
1
24
A literature search yielded no obvious answers, so I ask here whether there are any feasible methods to estimate the following. Suppose I have data $Y_i, \vec X_i$ indexed by $i = 1, \cdots, N$. Note that $Y_i$ represents a scalar binary outcome, and $\vec X_i$ a vector of predictors. I assume that my data are generated by the following, where $\mathbf{1}\left\{ . \right\}$ is the indicator function, $\varepsilon_i$ independent error, and $f(.)$ an unknown function: $$ Y_i = \mathbf{1}\left\{ f(\vec X_i) \geq 0 \right\} + \varepsilon_i $$ Are there any methods to estimate the function $f(.)$, or more likely a moment of the function $E[f(X_i)]$, possibly non-parametrically?
I am looking for a method to estimate a threshold function for binary outcome data
CC BY-SA 4.0
null
2023-04-12T20:41:50.737
2023-04-12T20:57:44.757
2023-04-12T20:57:44.757
385487
385487
[ "inference", "econometrics", "estimators" ]
612734
1
null
null
0
39
[Ranger documentation](https://cran.r-project.org/web/packages/ranger/ranger.pdf) states that if the importance mode is set to 'impurity', then the estimated measure is "...the variance of the responses for regression..." Could someone expand on this or provide a relevant publication? As a starting point for an answer, I'm assuming it is something like the sum of all the differences in response variance between nodes pre/post split where the feature of interest is used...maybe normalized by the number of trees?
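For concreteness, the impurity importance in question can be extracted like this; a minimal sketch on a built-in dataset:

```r
library(ranger)

fit <- ranger(Sepal.Length ~ ., data = iris,
              importance = "impurity", num.trees = 500)

# one value per predictor; larger means more variance reduction
# accumulated over that variable's splits
fit$variable.importance
```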
Clarification on variable importance (i.e., impurity mode) for Ranger random forest regression model
CC BY-SA 4.0
null
2023-04-12T21:03:27.520
2023-04-12T21:03:27.520
null
null
162599
[ "r", "random-forest", "importance" ]
612735
2
null
5268
2
null
Using exemplars, i.e. data points which best describe the dataset as a whole, should be a reasonable first step. The most common exemplar clustering method is the [Affinity Propagation](https://en.wikipedia.org/wiki/Affinity_propagation) (AP) methodology put forward by Frey & Dueck (2007), [Clustering by Passing Messages Between Data Points](https://utstat.toronto.edu/reid/sta414/frey-affinity.pdf); it is considered somewhat more robust to noise than standard $k$-means, but usually quite a bit slower too. AP allows us to "make these (dependency structures) explicitly visible" by looking at the fitted availability and responsibility matrices; roughly speaking, these matrices encode how suitable candidate instance $j$ is to be the cluster centre (i.e. overall exemplar) for point $i$, and how appropriate it would be for point $i$ to choose point $j$ as its exemplar, respectively. The R package [apcluster](https://cran.r-project.org/web/packages/apcluster/index.html) is actually much more faithful to the original MATLAB implementation of the algorithm than the Python `sklearn` implementation of [Affinity Propagation clustering](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.AffinityPropagation.html), so I would suggest familiarising oneself first with the R version.
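A minimal sketch of the R workflow (the matrix `x` is placeholder simulated data):

```r
library(apcluster)

set.seed(1)
x <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),
           matrix(rnorm(100, mean = 4), ncol = 2))

# negative squared Euclidean distances as the similarity measure
res <- apcluster(negDistMat(r = 2), x)

res@exemplars  # indices of the exemplar points
res@clusters   # cluster memberships
```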
null
CC BY-SA 4.0
null
2023-04-12T21:09:27.230
2023-04-12T21:09:27.230
null
null
11852
null
612736
1
null
null
0
11
So I'm running into an issue I can illustrate as follows. Let's say you have a shipment of fruit of various kinds, and you want to compare across shipments. You, for some reason, decide to Z-score these shipments. In the first shipment, you have 10 tons of Apples and 10 tons of Oranges. In the second shipment, you have 10 tons of Apples and 20 tons of Oranges. If you Z-score within shipments, it seems like you'll run into an issue: the score for Apples will end up negative, as it is now below the mean, whereas it is 0 in the original case. Despite the fact that the only change was the amount of Oranges, you'll get the false impression that Apples went down. Does this problem have a formal name? What are the approaches to preserve these kinds of changes? Are there alternatives to Z-scoring?
Z-scoring (or alternatives) while not creating artifacts
CC BY-SA 4.0
null
2023-04-12T21:27:49.940
2023-04-12T21:27:49.940
null
null
245715
[ "scikit-learn", "z-score" ]
612737
1
null
null
0
17
Suppose you sample N people with unequal probabilities from some superpopulation. Your sample contains W_sample, the probability with which each person was sampled from the superpopulation, and their outcome Y. For our purposes, let's assume W in the superpopulation is a draw from a Dirichlet distribution, so W is a probability distribution while W_sample is not and need not sum to 1. When confronted with unequal sample selection probabilities, Bayesians usually advise conditioning on the variables generating the sampling weight. This makes sense, as it renders the sampling design ignorable. However, it isn't clear what this would mean in this case - what should be conditioned on?
How to make sampling ignorable in Bayesian model with random (unequal) sampling probabilities?
CC BY-SA 4.0
null
2023-04-12T22:10:46.460
2023-04-12T22:10:46.460
null
null
120828
[ "probability", "bayesian", "sampling", "survey-sampling", "survey-weights" ]
612739
1
null
null
0
14
Let's say I have a range of formulations, each containing a different starting rate of water "x", and I want to test how fast each formula dries out over time (i.e. loss of water over time) and put this on a plot. Suppose the starting rate of x for each formulation is known, but there is no control over the starting rate of x. In order to make the regressions on the plot more comparable between formulations, is it appropriate to scale x for each formula to the same starting rate? For instance: x1: 20, x2: 35, x3: 22. To scale each resulting regression, I could just adjust each data point by ~57% for x2 and ~91% for x3, so that they all have a starting rate of x = 20, and scale the remainder of the data points by those same percentages for x2 and x3. Thank you! This is similar to a question asked [here](https://math.stackexchange.com/questions/2238615/percentage-scaling) (also on Stack Exchange!) in case my post doesn't make sense.
Scaling by percentage - is this appropriate given this situation?
CC BY-SA 4.0
null
2023-04-12T22:57:57.893
2023-06-02T05:42:02.100
2023-06-02T05:42:02.100
121522
255948
[ "regression", "inference", "percentage", "feature-scaling" ]
612740
1
null
null
1
15
A colleague recently presented results from a chi-squared test that used a Bayesian method for estimation. The results seemed promising, but when I looked up the main function `contingencyTableBF` for this, I was surprised to find that there is no method for inputting priors for estimation. As Bayesian methods typically require setting priors, often based on past data, I was a little confused as to why this is the case. The function has some documentation [here](https://www.rdocumentation.org/packages/BayesFactor/versions/0.9.12-4.3/topics/contingencyTableBF) and [here.](https://cran.r-project.org/web/packages/BayesFactor/vignettes/manual.html#ctables) This to me is somewhat confusing for a couple of reasons. First, the chi-squared test derives expected values by definition: $$ \chi^2 = \sum{\frac{(O_i-E_i)^2}{E_i}} $$ where $O$ is the observed value and $E$ is the expected value. In some ways, this expected value serves as a default prior...we expect each cell to have a given value already, and deviations from that should be apparent. However, it is also possible to manipulate the $E$ values here by simply replacing them with what the expected cells should be (inserting whatever value we expect instead of one derived from the number of cells). Given that is the case, how come this function does not allow one to input these expected counts directly? Second, it seems that the main method for altering estimation is the `sampleType` argument in this function. However, I read through each description and none of these arguments seem to indicate that one can manipulate the prior expected values in any way. The one argument that hints at this is `priorConcentration`, but this lacks a useful explanation in the RDocumentation. To summarize, my main question is this: How does one fit informative priors for Bayesian chi-squared tests in R? Is this even possible?
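For reference, the one prior-related argument the function does expose can be set like this; a sketch with made-up counts:

```r
library(BayesFactor)

tab <- matrix(c(10, 20, 15, 5), nrow = 2)  # hypothetical 2x2 contingency table

# priorConcentration is the Dirichlet concentration parameter: larger values
# concentrate the prior toward equal cell proportions; it does not encode
# specific expected counts
bf <- contingencyTableBF(tab, sampleType = "indepMulti",
                         fixedMargin = "rows", priorConcentration = 1)
bf
```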
Informative priors for Bayesian chi-squared test
CC BY-SA 4.0
null
2023-04-12T23:15:44.883
2023-04-12T23:15:44.883
null
null
345611
[ "r", "bayesian", "chi-squared-test", "prior", "uninformative-prior" ]
612741
1
null
null
4
454
Odds ratio, as the term itself suggests, refers to a ratio of odds. Hence, we need 2 events to compute an odds ratio. But in simple logistic regression, given that what we are interested in estimating is the relative likelihood of event A over the event of not A, why should we call it an odds "ratio" and not just "odds"? Perhaps is it just because odds with a denominator of 1 are called an odds ratio?
Odds "ratio" in logistic regression?
CC BY-SA 4.0
null
2023-04-12T23:30:47.097
2023-04-13T13:37:59.257
2023-04-13T07:43:01.553
53690
311012
[ "logistic", "terminology", "odds-ratio" ]
612742
2
null
612741
3
null
If you're talking about the value $\exp(\beta_0)$ from the logistic regression $$ Y_i \sim \textrm{Bernoulli}\left(P=(1+\exp(-\beta_0))^{-1}\right) $$ then you are absolutely right: (in my opinion) it should be called "odds", not "odds ratio", and (again in my opinion) people who call it an "odds ratio" are just being sloppy*. (See also [this answer](https://stats.stackexchange.com/a/92906/2126), which points out that in a non-simple logistic regression (i.e., with additional parameters/covariates), $\exp(\beta_0)$ is the odds in the baseline condition, when all covariates are equal to zero.)

---

* although perhaps harmlessly so
null
CC BY-SA 4.0
null
2023-04-12T23:40:55.810
2023-04-12T23:40:55.810
null
null
2126
null
612745
2
null
612739
1
null
This sounds like a survival analysis problem. Survival analysis involves analysing time-to-event data (traditionally, time until death in medical analyses, hence the name, but the event can be anything). In this case, your event is the drying of the formulation, and you have a covariate which is the amount of water in the formulation. A good place to start could be fitting a survival regression model with amount of water `x` as a predictor. Survival analysis is quite a flexible technique and allows for various distributions describing the time to event data, multiple predictors etc.
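To make that concrete, a parametric survival regression along these lines might look like the following; the data frame and column names are placeholders, not from the question:

```r
library(survival)

# dat: one row per formulation test, with
#   dry_time - time until the formulation dried out
#   dried    - 1 if drying was observed, 0 if censored
#   x        - starting amount of water
fit <- survreg(Surv(dry_time, dried) ~ x, data = dat, dist = "weibull")
summary(fit)
```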
null
CC BY-SA 4.0
null
2023-04-13T00:09:25.270
2023-04-13T00:09:25.270
null
null
369002
null
612746
2
null
612741
8
null
By "simple logistic regression," do you mean a logistic regression with one explanatory variable? $$\log(odds(x_i))=\log\left(\frac{p(x_i)}{1-p(x_i)}\right) = \beta_0 + \beta_1 x_i$$ We may be interested in estimating the odds for a certain $x_i$: $$\frac{\hat p(x_i)}{1-\hat p(x_i)}$$ Or just the probability of $y_i=1$ at that $x_i$: $$\hat p(x_i)$$ But the way I've always used "odds ratio" in logistic regression is regarding $\exp(\hat \beta_1)$. That's because

- $\hat\beta_1$ is the estimated (additive) increase in log-odds when $x_i$ increases by 1 unit, so
- $\exp(\hat\beta_1)$ is the estimated (multiplicative) increase in odds when $x_i$ increases by 1 unit, so
- $\exp(\hat\beta_1) = \frac{\widehat{odds}(x_i+1)}{\widehat{odds}(x_i)}$, so it's an odds ratio.

Let's say we are studying a disease which is more likely among older people, so $p(x_i)$ is the probability of having this disease at age $x_i$, and let's say the simple logistic model fits well. Then for every additional year of age, the log-odds go up additively by $\hat\beta_1$. So the odds for someone my age are $\exp(\hat\beta_1)$ times the odds for someone 1 year younger than me.
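This correspondence can be checked numerically with a small simulation sketch (all numbers here are made up):

```r
set.seed(1)
age <- rnorm(500, mean = 50, sd = 10)
disease <- rbinom(500, 1, plogis(-5 + 0.1 * age))  # true log-odds slope 0.1

m <- glm(disease ~ age, family = binomial)

# estimated odds ratio per additional year of age;
# should be in the vicinity of exp(0.1) ~ 1.105
exp(coef(m)["age"])
```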
null
CC BY-SA 4.0
null
2023-04-13T00:17:37.427
2023-04-13T13:37:59.257
2023-04-13T13:37:59.257
17414
17414
null
612747
1
null
null
1
41
Some papers I see take the uncertainty estimate of a prediction to be simply its softmax/sigmoid output, whereas [some papers](https://arxiv.org/abs/1506.02142) use techniques such as MC Dropout and calculate the variance across the predictions. The softmax function is typically used in machine learning models to convert a set of input values, often called logits or scores, into a set of output probabilities. These output probabilities can be interpreted as the model's confidence, but I have often heard that they cannot be used to gauge confidence, so other methods such as MC Dropout are used instead. Why is this the case? What causes the softmax to give high confidence even for predictions that are false? Is it because the softmax can be intuitively thought of as an ensemble of various activations of neurons, so some noise creeps in and makes the softmax produce wrong predictions confidently? And why wouldn't MC Dropout suffer from the same problem? During inference, the input image is passed through the network, and each neuron computes an activation value based on its weighted inputs. The weights are learned during training, so they are optimized to produce the correct output for a given input image. So, as input neurons are removed during MC Dropout, the pattern of activation will also change, which would lead to varied predictions; technically this should give high variance for all inputs, but that doesn't often happen.
Softmax Response vs MC Dropout for Uncertainty Estimation
CC BY-SA 4.0
null
2023-04-13T00:25:32.427
2023-04-20T20:01:36.483
2023-04-20T20:01:36.483
375558
385566
[ "probability", "neural-networks", "entropy", "uncertainty", "calibration" ]
612749
1
null
null
0
29
In the Mann-Kendall test there are 3 values: the p-value, the Z value and Kendall's Tau. What I want to know is the conceptual difference between Z and Kendall's Tau, as I read that both of them indicate a positive or negative relation between the variables.
The difference between Z and Kendalls Tau in Mann-Kendall
CC BY-SA 4.0
null
2023-04-13T01:00:46.573
2023-04-13T01:00:46.573
null
null
385570
[ "kendall-tau" ]
612750
1
null
null
0
22
I created a [CausalForest (from econml)](https://econml.azurewebsites.net/_autosummary/econml.grf.CausalForest.html) model to estimate non-binary outcomes (similar to daily sales amount) given covariates and a binary treatment. I evaluate the model using the following procedure:

- use the model, denoted $\tau(X, w)$, to estimate the effects for my dataset;
- rank and bin the estimated effects of the units/examples from high to low;
- for each bin $b$ of size $n_b$, compute its estimated effect as the mean of the model's estimates of all units in that bin, i.e. $\hat{\tau}_b = {1 \over n_b} \sum_{i \in b}{\tau}(X_i, w=1)$;
- for each bin $b$, compute its "empirical" effect by subtracting the mean outcome of the untreated from that of the treated, i.e. $\tau_b = E[Y_b(1)] - E[Y_b(0)]$.

I found that the effects estimated by the CF have rather small values (small mean effect and small variance) in every bin compared to the "empirical" effects of the corresponding bin. However, the estimates seem to be consistent, in the sense that bins with higher estimated effects also have higher empirical effects as computed above; only the magnitudes of the estimates are too small. Further, it does not seem to be an overfitting issue, since the same is observed for both the training set and validation/test sets. It is said that applying typical ML algorithms to causal effect estimation may [bias the estimated effects toward zero](https://matheusfacure.github.io/python-causality-handbook/22-Debiased-Orthogonal-Machine-Learning.html#more-econometrics-may-be-needed). Does the same happen to CausalForest? What would be the remedy?
How does Causal Forest make small/narrow effect estimates comparing to empirical data?
CC BY-SA 4.0
null
2023-04-13T01:23:53.373
2023-04-13T01:23:53.373
null
null
78081
[ "mixed-model", "econometrics", "causality" ]
612751
2
null
612305
0
null
You have four regressions (ERQ-CR x Age, ERQ-CR x Gender, ERQ-E x Age, ERQ-E x Gender). Run the moderation macro (model 1) four times, once for each regression. Then correct for multiple comparisons across the four regressions. (This is very easy to do in other software, like R, and also relatively easy to do in SPSS, though you have to make the interaction variables by hand, which can be tricky, depending on how long you've been using SPSS for).
null
CC BY-SA 4.0
null
2023-04-13T01:26:00.033
2023-04-13T01:26:00.033
null
null
288142
null
612752
1
null
null
0
15
I am trying to fit an RJAGS zero-inflated negative binomial model. The data I am using has 451 observations and only 12 of them have values different from 0, which means that 97% of my observations are 0. My objective is obtaining the posterior distribution of the probability of the data belonging to the non-structural zero part $\pi$, the expected value of the negative binomial part $\mu$, and the size parameter $r$. My data is distributed as: [](https://i.stack.imgur.com/n587Z.png) I have created a model in RJAGS with the following structure:

```
negativebinom <- "model {
  # Likelihood
  for (i in 1:length(Y)) {
    Y[i] ~ dnegbin(p1[i], r)
    p1[i] = r / (r + mu1[i])
    mu1[i] = z[i] * mu
    z[i] ~ dbern(pro)
  }
  log(mu) = eta
  pro = 1 - zero.prob
  logit(zero.prob) = theta
  theta ~ dnorm(2, 1)
  eta ~ dgamma(1.2, 0.7)
  r ~ dnorm(7.280611e+06, 1.430766e+04)
}"
```

Here $Y$ stands for the count variable, $r$ for the size parameter of the negative binomial, and $p1$ for the probability parameter of each observation, which depends on $\mu_1$. $\mu_1$ is the expected value of each observation, which depends on $z$, a Bernoulli variable modelling whether we are in the non-structural zero part or not. $z$ takes value 0 if $Y$ is 0 and 1 otherwise. However, I think I am forcing the model to consider all 0's part of the non-structural zeros, not leaving any chance for them to be generated by the negative binomial part. In fact, when computing the posterior distribution, the probability `zero.prob`, which should be the probability of a non-structural zero, matches the actual proportion of 0's in our data: [](https://i.stack.imgur.com/FOEX7.png) What should I modify in my model so that this probability models the non-structural zero probability and not the overall probability of being zero?
RJAGS - Zero Inflated Negative Binomial RJAGS
CC BY-SA 4.0
null
2023-04-13T01:46:07.183
2023-04-13T07:32:23.440
2023-04-13T07:32:23.440
362671
384249
[ "negative-binomial-distribution", "zero-inflation", "jags" ]
612753
2
null
612539
1
null
A difference in significance does not indicate a significant difference. Males differ from 0, females do not. BUT, this does not necessarily mean that males and females differ from each other. Here is an example plot where group 1 does not differ from 0, but group 2 does (95% CI does not overlap 0). However, the two groups are not significantly different from each other, as their respective 95% CIs overlap.[](https://i.stack.imgur.com/KYifM.png)
null
CC BY-SA 4.0
null
2023-04-13T01:49:59.710
2023-04-13T01:49:59.710
null
null
288142
null
612754
1
null
null
3
36
I am trying to generate evenly distributed particles in an $n$-dimensional flat torus or a periodic hypercube. I am not sure whether either of these approaches suffices. Can you suggest alternative methods for generating evenly distributed particles in this space, or how to correct either of these?

## First approach:

Sampling $\varphi = \arccos(1-2u)$ with $u \in U[0,1]$ for the azimuthal angle of the 3D unit sphere $\left( \mathcal{S}^{2} \subset \mathbb{R}^{3} \right)$ prevents accumulation of points near the poles, the polar angle being sampled as $\theta = 2\pi v$ with $v \in U[0,1]$. I was wondering if "copying" this behaviour by sampling $$\vec{x} = \{ 2 \cdot \arccos(1-2u_{i}): u_{i} \in U[0,1] \}_{i=1}^{n} \in [0,2 \pi]^{n}$$ would be enough to generate $n$ random points in an $n$-dimensional flat torus or hypercube, where each dimension has length $2\pi$. I have run a little simulation for 2D and 3D, and it looks like the corners (which represent the same point) aren't very likely to contain many particles, which is reasonable given the functional form of the $\arccos$.
## Another approach:

Considering the map into the torus $$\sigma: (x_1,x_2) \in [0,2\pi]^{2} \mapsto \left[ (1+\cos(x_1))\cdot \cos(x_2), (1+\cos(x_1))\cdot \sin(x_2), \sin(x_1) \right] \in \mathbb{T}^{2} \subset \mathbb{R}^{3}$$ Obtaining the differential volume element $\| \frac{\partial \sigma}{\partial x_1} \times \frac{\partial \sigma}{\partial x_2} \| = 1+\cos(x_1)$, which also gives the enclosed area $A = \int \int \| \frac{\partial \sigma}{\partial x_1} \times \frac{\partial \sigma}{\partial x_2} \| dx_1 dx_2 = (2\pi)^2$ Then, obtaining the marginal probability density functions $f(\cdot)$ and the cumulative probability functions $F(\cdot)$, $$f(x_1) =\frac{1}{(2\pi)^2} \int_{0}^{2\pi} 1+\cos(x_1) dx_2 = \frac{1+\cos(x_1)}{2\pi} \Longrightarrow \\ \Longrightarrow F(x_1) = \int_{0}^{x_1} f(x_1) dx_1 = \frac{x_1 + \sin(x_1)}{2\pi}$$ $$f(x_2) =\frac{1}{(2\pi)^2} \int_{0}^{2\pi} 1+\cos(x_1) dx_1 = \frac{1}{2\pi} \Longrightarrow \\ \Longrightarrow F(x_2) = \int_{0}^{x_2} f(x_2) dx_2 = \frac{x_2}{2\pi}$$ I tried to also sample $(x_1,x_2)$ from $u_1,u_2 \in U[0,1]$ by computing $x_1 = F^{-1}(u_1)$ (numerically solved) and $x_2 = F^{-1}(u_2)= 2\pi u_2$. Note that the inverse function makes sense since $F(\cdot)$ is monotone and bijective (injective and surjective) on $[0,1]$.
For higher dimensions ($n>2$) the mapping $\sigma = (\sigma_1, \dots, \sigma_n) $ would be modified accordingly: $$ \begin{array}{ll} \sigma_1(\vec{x}) &= \left[ 1+\cos(x_1) \right] \cos(x_2) \\ \sigma_2(\vec{x}) &= \left[ 1+\cos(x_1) \right] \sin(x_2) \cos(x_3) \\ \dots & \\ \sigma_{n-2}(\vec{x}) &= \left[ 1+\cos(x_1) \right] \sin(x_2) \cdot \dots \cdot \sin(x_{n-3}) \cos(x_{n-2}) \\ \sigma_{n-1}(\vec{x}) &= \left[ 1+\cos(x_1) \right] \sin(x_2) \cdot \dots \cdot \sin(x_{n-2}) \cos(x_{n-1}) \\ \sigma_{n}(\vec{x}) &= \left[ 1+\cos(x_1) \right] \sin(x_2) \cdot \dots \cdot \sin(x_{n-2}) \sin(x_{n-1}) \\ \end{array} $$

## Note:

I have used as reference [http://corysimon.github.io/articles/uniformdistn-on-sphere/](http://corysimon.github.io/articles/uniformdistn-on-sphere/)
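The inverse-CDF step for $F(x_1)$ described above has no closed form, but it can be solved numerically; a base-R sketch:

```r
# draw x1 with density (1 + cos(x1)) / (2*pi) by inverse-transform sampling,
# solving F(x1) = (x1 + sin(x1)) / (2*pi) = u numerically with uniroot
sample_x1 <- function(n) {
  u <- runif(n)
  vapply(u, function(ui)
    uniroot(function(x) (x + sin(x)) / (2 * pi) - ui,
            lower = 0, upper = 2 * pi)$root,
    numeric(1))
}

x1 <- sample_x1(1000)
x2 <- runif(1000, 0, 2 * pi)  # second coordinate is already uniform
```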
Generating uniformly distributed particles on a $n$-dimensional flat torus or periodic hypercube
CC BY-SA 4.0
null
2023-04-13T03:20:00.063
2023-04-13T13:14:27.270
2023-04-13T13:14:27.270
385569
385569
[ "self-study", "sampling", "simulation", "uniform-distribution", "random-generation" ]
612756
1
null
null
0
6
Data health for PLS modeling: I am working on manufacturing data that is fairly new (only 60 batches produced so far). The dataset has 60 observations of 150 variables, and I am building a PLS model to predict the final product quantity in kg that meets minimum specifications. After removing intermediate product measurements, redundant variables, and calculated variables to avoid collinearity, I am left with 60 observations of 110 variables. This PLS model has a predictability of only 23%, and more than 70% of the variables show huge variation in their data so far. My thought is that this process data is too early-stage and not sufficient to build a predictive PLS model, but I would like to get some expert opinions on this situation. Can I assume that adding more observations to this data will help the model? Is there any basic data health check I am missing for PLS modeling before I submit the outcomes to my manufacturing team? Thank you
PLS model on wide dataset with 60 samples and 120 variables
CC BY-SA 4.0
null
2023-04-13T04:13:32.753
2023-04-13T04:13:32.753
null
null
314613
[ "pca", "small-sample", "partial-least-squares" ]
612757
1
null
null
0
11
I have a business use case around adverse news detection. We have set up an experiment where we compare a human vs a bot, and we need to test it. The experiment results will come out something like this:

Case #, Human (0,1), Bot (0,1)
0 - No adverse news detected
1 - Adverse news detected

Some quirks about our case:

- H0 is that the human is better at detection, so there has to be sufficient evidence to disprove it.
- We do not have ground truth labels; in a way, the human is the ground truth.

What I need to know:

- How to test this hypothesis?
- What is the minimum sample size for this test?
Detection accuracy of human vs bot. Which test to use and how to determine required sample size
CC BY-SA 4.0
null
2023-04-13T05:14:44.193
2023-04-13T05:14:44.193
null
null
117353
[ "hypothesis-testing", "binary-data" ]
612758
1
null
null
1
19
Suppose I'm running a regression that looks something like $$\log(price)=\beta_0 + \beta_1\log(population)+\beta_2\log(population)^2.$$ I have found the residuals, grouped them according to the number of sellers in the observation's town, and calculated the mean residual for each group. Suppose the residuals are 0.05 for 1 seller, -0.01 for 2 sellers, and -0.02 for 3 sellers. I want to make a statement about the % markup over the average price for each group. Since these are logged residuals, can I just interpret the mean residual as the % markup (e.g. there is a 5% markup from the average price when there is 1 seller, -1% when there are 2, etc.)? Or do I need to use the mean log-price, and then calculate the % change using: $$100 \times \frac{(meanLogPrice+residual)-meanLogPrice}{meanLogPrice} = 100 \times \frac{residual}{meanLogPrice}? $$
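For what it's worth, here is a small numeric sketch (plain Python, with a made-up residual value) of the usual log-scale reading: a log residual $r$ corresponds to a $100(e^r - 1)\%$ deviation, which is close to $100r\%$ only because $r$ is small.

```python
import math

mean_log_residual = 0.05  # example value for the 1-seller group

# exact percent deviation implied by a log-scale residual
exact_pct = (math.exp(mean_log_residual) - 1) * 100

# the naive reading "0.05 log residual = 5% markup" is an approximation
naive_pct = mean_log_residual * 100

# the two agree only up to second-order terms in the residual
gap = exact_pct - naive_pct
```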
Interpreting a group's mean residuals when logged
CC BY-SA 4.0
null
2023-04-13T05:56:04.007
2023-04-13T06:32:52.603
2023-04-13T06:32:52.603
362671
385584
[ "residuals", "logarithm" ]
612759
1
612761
null
1
46
[](https://i.stack.imgur.com/nVYNV.png)

Screenshot from page 80 of the textbook "Introduction to Linear Regression Analysis", fifth edition, by Douglas C. Montgomery.

Let $X$ be an $n \times p$ matrix, $y$ and $\hat{y}$ be $n \times 1$ vectors, and $\hat{\beta}$ be a $p \times 1$ vector in the multiple linear regression model. From the matrix calculation, we can easily find $\hat{\beta} = (X'X)^{-1}X'y$ and $\hat{y}=X\hat{\beta}$. However, later in this chapter, when estimating $\sigma^{2}$ while calculating the residual sum of squares, the textbook says $X'X\hat{\beta} = X'y$, which seems to indicate $X\hat{\beta}=y$. But I don't understand why it is not $\hat{y}$. Can anyone answer this question?
Why is $X\hat{\beta}$ regarded as $y$ in multiple linear regression while estimating sigma square?
CC BY-SA 4.0
null
2023-04-13T06:32:52.097
2023-04-15T12:34:23.850
2023-04-13T06:52:40.280
362671
375779
[ "regression", "multiple-regression" ]
612761
2
null
612759
3
null
$X'X\hat{\beta} = X'X \left((X'X)^{-1}X'y\right)=\left(X'X (X'X)^{-1}\right)X'y=X'y$ This does not imply that $X\hat{\beta}=y$ though. In algebra, the statement $CA=CB$ only implies that $A=B$ if $C$ is invertible, and, in this case, $C=X'$ is not even (necessarily) a square matrix. What it does imply, however, is that $X'y = X'\hat y$, which is true in general, since $X'e = X'y-X'X\hat \beta=0$. --- Addendum: If $X$ is a square matrix with full-rank ($n=p$), then $y=\hat y$.
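A quick numeric check of the point above, using a simulated toy simple-regression fit (intercept plus one covariate, not data from the question): the residuals are orthogonal to every column of $X$, so $X'y = X'\hat y$, even though $\hat y \neq y$.

```python
import random

random.seed(0)
n = 10
x = [random.gauss(0, 1) for _ in range(n)]
y = [random.gauss(0, 1) for _ in range(n)]

# OLS for y = b0 + b1*x via the closed-form normal equations
xbar = sum(x) / n
ybar = sum(y) / n
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
     sum((xi - xbar) ** 2 for xi in x)
b0 = ybar - b1 * xbar

yhat = [b0 + b1 * xi for xi in x]
e = [yi - yh for yi, yh in zip(y, yhat)]

# X'e = 0: residuals orthogonal to both columns of X, hence X'y = X'yhat
assert abs(sum(e)) < 1e-9                                  # intercept column
assert abs(sum(xi * ei for xi, ei in zip(x, e))) < 1e-9    # slope column

# ...but yhat is not y itself: X is not square, so X'A = X'B does not force A = B
assert any(abs(ei) > 1e-6 for ei in e)
```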
null
CC BY-SA 4.0
null
2023-04-13T06:50:33.410
2023-04-15T12:34:23.850
2023-04-15T12:34:23.850
60613
60613
null
612762
1
null
null
1
8
I am using ANOVA and t-tests to compare wheat grain characteristics between states, agroecological zones, soils, etc. Most of this data consists of grain mineral concentrations, but some of it is ratios describing grain mineral bioavailability, for example a phosphorus fraction expressed as a percentage of total phosphorus. My question is: can I compare means if the means are generated from ratios, or do I have to transform the data first? I'm asking because my supervisor gave me this comment:

> I believe the estimates of mineral bioavailability as a molar ratio is that a ratio. Data that involves ratios should be transformed accordingly for proper comparison. There is no reporting on data transformation in the statistical analysis.
Necessary to transform ratios for ANOVA and T-Test?
CC BY-SA 4.0
null
2023-04-13T07:05:34.783
2023-04-13T07:16:35.037
2023-04-13T07:16:35.037
362671
374339
[ "anova", "t-test", "data-transformation", "ratio" ]
612763
1
null
null
1
48
I have a confusion related to the likelihood function. Suppose that a user's waiting time $W$ follows an exponential distribution with rate $\lambda$, and the prior of $\lambda$ follows Gamma($\alpha$, $\beta$). We have the information that after the user has waited for 5 min, he still needs to wait for another 10 min. I am confused: if I want to use this information to perform Bayesian updating on the waiting time, should it be $$P(\lambda|D)=P(X=5+10|X\geqslant 5) \times P(X\geqslant 5)\times P(\lambda)=P(X=5+10)\times P(\lambda)$$ or $$P(\lambda|D)=P(X=5+10|X\geqslant 5)\times P(\lambda)$$ or something else?
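For reference, here is a numeric check of the memorylessness property that sits behind my confusion (plain Python, with a hypothetical rate $\lambda = 0.2$): for an exponential waiting time, the conditional density of a total wait of 15 min given survival past 5 min equals the unconditional density of a fresh 10-min wait.

```python
import math

lam = 0.2  # hypothetical rate, per minute

def pdf(x, lam):
    """Exponential density f(x) = lam * exp(-lam * x)."""
    return lam * math.exp(-lam * x)

def sf(x, lam):
    """Exponential survival function P(X >= x) = exp(-lam * x)."""
    return math.exp(-lam * x)

# density of X = 15 conditional on X >= 5
conditional = pdf(15, lam) / sf(5, lam)

# memorylessness: equals the density of a fresh 10-minute wait
assert abs(conditional - pdf(10, lam)) < 1e-12
```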
Which likelihood function is correct?
CC BY-SA 4.0
null
2023-04-13T07:07:32.520
2023-04-19T20:21:31.063
2023-04-19T20:21:31.063
71679
327159
[ "probability", "bayesian", "likelihood", "queueing" ]
612764
1
null
null
0
13
I have carried out a designed agricultural experiment with two treatments and recorded the effect on the abundance of a pest insect on three dates. The field experiment was divided into four blocks with two plots (replications) per block, resulting in 2 x 2 x 4 = 16 plots. Pest insects were counted per plant on the same 15 plants in a row in each of the replications ([I wrote another post asking how to deal with this spatial correlation](https://stats.stackexchange.com/questions/612703/analysis-of-spatially-correlated-count-data-from-a-designed-agricultural-experim)). The pest insects originated from one or two nearby fields and are therefore not evenly distributed. The data looks like this:

|Treatment |Block |Plot |Date |Plant |Insects |
|---------|-----|----|----|-----|-------|
|A |1 |1 |2019-06-18 |1 |0 |
|A |1 |1 |2019-06-18 |2 |5 |
|A |1 |1 |2019-06-18 |3 |2 |
|... | | | | | |
|B |4 |16 |2019-07-10 |15 |1 |

I have three Date levels `"2019-06-18", "2019-06-25", "2019-07-10"`. I have already analysed the data date by date. I used the error structure suggested by Jones, Harden, Crawley (2022) "The R Book", chapter 13, for nested (hierarchical) structures:

```
single_date_model <- glmmTMB(insect ~ treatment + (1 | block/plot),
                             family = "poisson",
                             data = subset(mydata, DATE == "2019-06-25"))
```

However, I would like to analyse all dates at once. The above book suggests adding `+ (time | random)` to the model, which in my case should be `+ (cDATE | BLOCK / PLOT)` (I changed `DATE` to `cDATE` by `mydata$cDATE <- as.integer(mydata$DATE) - 18064` to get a continuous variable starting at 1). As an alternative, they suggest adding time as a fixed effect and comparing the models. These are the models:

```
t_random_model <- glmmTMB(insect ~ treatment + (fDATE | block / plot),
                          family = "poisson", data = mydata,
                          control = glmmTMBControl(optimizer = optim,
                                                   optArgs = list(method = "BFGS")))

t_fixed_model <- glmmTMB(insect ~ treatment + cDATE + (1 | block / plot),
                         family = "poisson", data = mydata,
                         control = glmmTMBControl(optimizer = optim,
                                                  optArgs = list(method = "BFGS")))
```

(I added `control=glmmTMBControl(optimizer=optim, optArgs=list(method="BFGS"))` to both, as suggested in the vignette [Troubleshooting with glmmTMB](https://cran.r-project.org/web/packages/glmmTMB/vignettes/troubleshooting.html), in order to avoid convergence problems.)

Comparing the models with `anova()` shows that the `t_random_model` is significantly better, with a far lower AIC. However, unlike my data with three irregular time points, the example in the book has five equidistant time points, and I'm not sure the way I did it is still valid.

I have also tried a model with an Ornstein-Uhlenbeck covariance structure from the vignette [Covariance structures with glmmTMB](https://cran.r-project.org/web/packages/glmmTMB/vignettes/covstruct.html), which is said to be able to handle irregular time points. For this model I prepared the data with `mydata$numDATE <- numFactor(mydata$cDATE)` and then ran:

```
t_corr_model <- glmmTMB(insect ~ treatment + ou(numDATE + 0 | block / plot),
                        data = mydata)
```

Comparing it with `anova()` says that it's a worse model. However, I'm not sure whether one model is nested within the other and whether it's valid to compare them.

Are all of these models valid ways to analyse the data? Is `anova()` the right way to find out which model is best, or should I go another way?
GLMM on temporally correlated count data from a designed agricultural experiment
CC BY-SA 4.0
null
2023-04-13T08:18:34.613
2023-04-13T08:32:18.103
2023-04-13T08:32:18.103
383278
383278
[ "mixed-model", "generalized-linear-model", "count-data", "time-varying-covariate", "glmmtmb" ]
612765
2
null
431966
1
null
Here's a simple counterexample (for discrete time). Let $X_t$ and $Z_t$ be iid standard Normal sequences. Let $\alpha_t$ be a sequence of numbers in $(-1,1)$. Define $Y_t=\alpha_t X_t+\beta_t Z_t$. Now

- $Y_t$ is independent for different times.
- The variance of $Y_t$ is $\alpha_t^2+\beta_t^2$, so given any $\alpha_t$ with $|\alpha_t|<1$ we can (and do) choose $\beta_t$ to make $Y_t$ standard Normal
- therefore $Y_t$ is weakly stationary: its distributions are all standard Normal
- But $\mathrm{cov}[X_t,Y_t]= \alpha_t$

Now consider the linear combination $Y_t-\alpha_t X_t=\beta_t Z_t$. This series is not weakly stationary because $\beta_t$ changes over time. The variance at time $t$ is $\beta_t^2$, which is not constant.

The condition you'd need for weak stationarity of linear combinations is that the pair $(X_t, Y_t)$ were individually weak-stationary and that their covariance was constant over time. You could say they were "jointly weak stationary", though I don't know whether this is standard terminology.

Two final notes: first, $X_t$ and $Y_t$ in this example are strongly stationary as well as weakly stationary. Second, $X_t$ and $Y_t$ are each uncorrelated over time, but that would be easy to change.
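The variance bookkeeping above can be checked numerically; here is a minimal sketch (plain Python, with two illustrative values of $\alpha_t$):

```python
import math

# two time points with different alpha_t; beta_t chosen so that Var(Y_t) = 1
alphas = [0.2, 0.9]
betas = [math.sqrt(1 - a ** 2) for a in alphas]

# Var(Y_t) = alpha_t^2 + beta_t^2 = 1 at every t, so Y_t is weakly stationary
for a, b in zip(alphas, betas):
    assert abs(a ** 2 + b ** 2 - 1) < 1e-12

# but Var(Y_t - alpha_t X_t) = beta_t^2 differs across t:
# the linear combination is not weakly stationary
assert abs(betas[0] ** 2 - betas[1] ** 2) > 0.5
```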
null
CC BY-SA 4.0
null
2023-04-13T08:22:37.180
2023-04-13T08:22:37.180
null
null
249135
null
612766
1
null
null
0
19
I am currently conducting a set of analyses examining the relationship between two predictors and an outcome. For example, the relationship between motivation (predictor 1), revision (predictor 2), and performance in an exam (outcome). I have reason to believe that predictor 2 (revision) may mediate the relationship between predictor 1 (motivation) and the outcome (exam performance). I have therefore run a mediation model and find evidence of full mediation after controlling for covariates. I am also interested in whether a model containing the predictor (motivation) and the mediator (revision) is more predictive of the outcome than the mediator alone. Can I obtain this from the mediation model, or would I need to conduct additional analyses to examine it (e.g. separate regression analyses including only the mediator (model 1) and then the mediator and the predictor (model 2), and then comparing these models)?
Does mediation tell you effect of predictor above that of the mediator?
CC BY-SA 4.0
null
2023-04-13T08:22:42.357
2023-04-13T14:32:31.767
null
null
385590
[ "regression", "multiple-regression", "predictor", "mediation" ]
612767
1
null
null
0
16
I have an experiment comprising a numerical dependent variable, say a feature such as growth_rate, and three independent factor variables describing where the samples were collected, i.e. locality; where the collected samples were grown, i.e. medium; and which taxonomic group they belong to, i.e. taxa. In addition, there is a variable adding some random noise, which will be treated as the random effect. What I need to test is the combined effect of taxa and medium (the interaction) while controlling for locality. A further complication is that I need to test all the possible combinations between taxa and medium. To solve this problem I am thinking about a model like:

`lme(growth_rate ~ medium*taxa + locality, block=random, data)`

but then how do I construct the contrast matrix for the `multcomp` function `glht`? I am reading [the vignette](https://cran.r-project.org/web/packages/multcomp/vignettes/multcomp-examples.pdf) for the multcomp package; I can understand the two-way ANOVA example and how it works, but I am unable to extend it to what I need. Specifically, I was looking at the two-way ANOVA part but I am still missing how to add the locality variable. I was also thinking about merging the `medium` and `taxa` variables into one, as mentioned in [this thread](https://stats.stackexchange.com/questions/5250/multiple-comparisons-on-a-mixed-effects-model), but I am not sure how to handle the fact that I also have the locality variable.

These are some data, to give an idea of what I am dealing with. I randomised the whole table, so it's not the real data.
``` structure(list(locality = c("L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L1", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L2", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3", "L3"), medium = c("M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3", "M1", "M2", "M3"), random = c("rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd9", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd6", "rnd6", "rnd6", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd7", "rnd8", "rnd8", "rnd8", "rnd8", "rnd8", "rnd8", "rnd6", "rnd6", "rnd6", "rnd6", "rnd6", "rnd6", "rnd6", "rnd6", "rnd6"), taxa = c("g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g1", "g2", "g2", "g2", "g2", "g2", "g2", "g2", "g2", "g2", "g2", "g3", "g3", "g3", "g2", "g2", "g2", "g2", "g2", "g2", "g2", "g2", "g2", "g2", "g3", "g3", "g3", "g3", "g3", "g3", "g3", "g3", "g3", "g3", "g3", "g3", "g3", "g3", "g3", "g3", "g2", "g2", "g2", "g2", "g2", "g2", "g1", "g1", "g1", "g3", 
"g3", "g2", "g2"), growth_rate = c(7L, 2L, 7L, 4L, 5L, 1L, 6L, 10L, 0L, 5L, 4L, 0L, 10L, 0L, 1L, 3L, 8L, 8L, 0L, 0L, 0L, 5L, 0L, 6L, 5L, 3L, 10L, 1L, 7L, 5L, 0L, 1L, 7L, 10L, 3L, 3L, 6L, 6L, 6L, 2L, 2L, 1L, 10L, 0L, 5L, 7L, 1L, 2L, 8L, 5L, 9L, 1L, 4L, 10L, 0L, 4L, 3L, 3L, 5L, 7L, 3L, 5L, 10L, 5L, 2L, 0L, 10L, 0L, 9L, 9L, 3L, 1L, 10L, 1L, 0L)), class = "data.frame", row.names = c(NA, -75L)) ```
How to design contrasts for a three way mixed effect model with interactions?
CC BY-SA 4.0
null
2023-04-13T08:41:25.603
2023-04-13T08:41:25.603
null
null
114511
[ "mixed-model", "multiple-regression", "multiple-comparisons" ]
612769
1
null
null
0
48
For my Master's thesis I have to fit a DCC-GARCH model to examine the correlation between real estate house prices and the stock market. I tested the data for normality (both series not normal) and stationarity (both not stationary), and ran a variance ratio test (significant). I took logs because of the non-normality and then took first differences. After this, the real estate house prices were still not stationary, so I took second differences, which resulted in stationary data. The stock market data was stationary after first differencing, but I think I need to take second differences as well in order to use it in the DCC-GARCH model. The code I used for the DCC model is:

```
# perform DCC
model1 = ugarchspec(mean.model = list(armaOrder = c(0,0)),
                    variance.model = list(garchOrder = c(1,1), model = "sGARCH"),
                    distribution.model = "norm")
modelspec = dccspec(uspec = multispec(replicate(2, model1)),
                    dccOrder = c(1,1), distribution = "mvnorm")
modelfit = dccfit(modelspec, data = data.frame(ts_nominal, ts_share))
modelfit
```

I'm not sure if I took the right steps to perform this analysis or if my code is even correct. Compared to other papers, I find it strange that only 3 parameters are significant and that `alpha1` for stocks and `dcca1` are almost equal to 1. Can anyone help me with this?

### Update: Further Research

I have proceeded by taking the log return of both the property price index and the stock price index. Then I used the `diff()` function, after which both time series are stationary. The results, however, are barely different from the results I showed above.

```
Distribution         :  mvnorm
Model                :  DCC(1,1)
No. Parameters       :  11
[VAR GARCH DCC UncQ] :  [0+8+2+1]
No. Series           :  2
No. Obs.             :  130
Log-Likelihood       :  502.7599
Av.Log-Likelihood    :  3.87

Optimal Parameters
-----------------------------------
                     Estimate  Std. Error      t value  Pr(>|t|)
[ts_prop].mu         0.000547    0.003082     0.177488  0.859125
[ts_prop].omega      0.000011    0.000072     0.156417  0.875704
[ts_prop].alpha1     0.349696    0.649448     0.538451  0.590266
[ts_prop].beta1      0.649304    0.387195     1.676944  0.093553
[ts_share].mu        0.000031    0.008641     0.003615  0.997116
[ts_share].omega     0.000004    0.000007     0.561343  0.574564
[ts_share].alpha1    0.000000    0.000673     0.000009  0.999993
[ts_share].beta1     0.999000    0.000882  1132.157095  0.000000
[Joint]dcca1         0.000000    0.000007     0.000353  0.999719
[Joint]dccb1         0.895265    0.119982     7.461638  0.000000

Information Criteria
---------------------
Akaike       -7.5655
Bayes        -7.3229
Shibata      -7.5784
Hannan-Quinn -7.4669

Elapsed time : 0.909425
```

The next step in our analysis is to apply linear regression to see which determinants (like the long-term interest rate) have a significant effect on the dynamic correlation between the property return series and the stock return series. For this, I extracted the dynamic correlations from the DCC-GARCH model, but as you can see in the graph 'fcor', these correlations all take essentially the same value of 0.133784, with minimal changes.

```
mod1 = lm(fcor ~ long)
> summary(mod1)

Call:
lm(formula = fcor ~ long)

Residuals:
             Min              1Q          Median              3Q             Max
-0.000000010205 -0.000000003962 -0.000000001022  0.000000004495  0.000000019842

Coefficients:
                   Estimate       Std. Error        t value             Pr(>|t|)
(Intercept) 0.1337839573855  0.0000000005038  265555616.311 < 0.0000000000000002 ***
long        0.0000000045540  0.0000000016445          2.769              0.00646 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.000000005664 on 128 degrees of freedom
Multiple R-squared: 0.05652,   Adjusted R-squared: 0.04915
F-statistic: 7.669 on 1 and 128 DF,  p-value: 0.006455
```

I also performed regressions on other determinants, all of which have a significant effect on the dynamic correlations. Can someone explain to me why the dynamic correlations barely change and why this affects the linear regression results?
[](https://i.stack.imgur.com/c83yF.png) [](https://i.stack.imgur.com/EUi7g.png) [](https://i.stack.imgur.com/bv2KP.png) [](https://i.stack.imgur.com/vTCZ9.png) [](https://i.stack.imgur.com/ZqeYT.png) [](https://i.stack.imgur.com/pv8hO.png)
Interpretation of DCC-GARCH model
CC BY-SA 4.0
null
2023-04-13T09:29:45.013
2023-04-23T09:41:38.630
2023-04-23T09:41:38.630
53690
385593
[ "time-series", "multivariate-analysis", "garch", "differencing" ]
612773
1
null
null
1
33
I'm trying to predict a financial feature (continuous) and there are two or more good regression models. Is it possible to combine multiple regression models? If so, what is this kind of method called?
How can I combine multiple regression models?
CC BY-SA 4.0
null
2023-04-13T10:15:54.977
2023-04-13T13:22:37.003
2023-04-13T13:22:37.003
53690
339581
[ "regression", "multiple-regression", "finance", "forecast-combination" ]
612774
1
612790
null
1
24
For example, I am interested in the development of specific brain regions over time. I have data, but it is single-time-point data; it is not longitudinal. However, the participants in the data range in age from 0 to 17, and I also have each participant's depression score. So, my research question is: is there a statistically significant difference in the developmental trajectories of certain brain regions between depressed children and healthy children? But since my data were collected at a single time point and are not longitudinal, I am concerned about whether I can use longitudinal data analysis methods to compare developmental trajectories between the two groups (although the sample size is sufficient and participants range from 0 to 17).
Statistical method to analyse developmental trajectories when data is not longitudinal
CC BY-SA 4.0
null
2023-04-13T10:26:46.280
2023-04-13T12:40:42.873
2023-04-13T11:26:07.827
220466
385598
[ "panel-data", "generalized-least-squares", "growth-mixture-model" ]
612775
1
613377
null
3
137
## Motivating question

I have a high-dimensional state space $\Omega \subseteq \mathbb R^n$ with an admissible subset $S\subseteq \Omega$, which is connected. I would like to draw a uniform random sample from $S$. In my application, it is easy to verify whether a state $\vec x$ is in $S$, but difficult to find a state in $S$ ad hoc. However, it is known that $\vec 0 \in S$.

## Solution idea

I think the problem should be relatively easy to solve with a Metropolis-Hastings algorithm.

1. Start at $\vec x_0 = \vec 0$. Set $i:=0$.
2. Set $i:=i+1$.
3. Randomly generate a close-by state $\vec x_i:= \vec X( \vec x_{i-1})$ based on $\vec x_{i-1}$.
4. Accept the step if $\vec x_i \in S$; else set $\vec x_i = \vec x_{i-1}$.
5. Add $\vec x_i$ to the sample.
6. Repeat the procedure from step 2.

We may need to throw away a large number of steps from the burn-in period.

## Question

I am wondering which properties the random variable $\vec X(\vec x)$ needs to have to lead to a uniform sample over the state space. Does the step distribution need to satisfy detailed balance? Why?

## Example

Draw a uniform sample from the unit disk in 2D and use the bivariate standard normal distribution centred at $\vec x$ for the steps, i.e., $\vec X(\vec x) \sim \vec N(\vec x, \underline{1})$. I think this should work, but I also think that biasing the steps towards the centre would lead to a different sample.

---

The question is so basic that I would expect to find lecture notes that help me, but so far I have been unsuccessful in finding something where the Metropolis algorithm is interpreted in a Bayesian framework with prior and posterior distributions. A corresponding reference might be perfectly fine as an answer.
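For concreteness, here is a minimal sketch of the scheme above for the unit-disk example (plain Python; the symmetric Gaussian proposal and the step size 0.5 are my assumptions):

```python
import random

random.seed(1)

def in_disk(x, y):
    return x * x + y * y <= 1.0  # membership test for the admissible set S

# random-walk Metropolis with a symmetric proposal: a proposal is accepted
# iff it stays in S; on rejection the current state is repeated in the sample
x, y = 0.0, 0.0
sample = []
for i in range(60_000):
    px, py = x + random.gauss(0, 0.5), y + random.gauss(0, 0.5)
    if in_disk(px, py):
        x, y = px, py
    if i >= 10_000:          # discard a burn-in period
        sample.append((x, y))

# sanity check: under uniformity on the disk, P(r <= 0.5) = 0.25
frac_inner = sum(1 for sx, sy in sample if sx * sx + sy * sy <= 0.25) / len(sample)
assert abs(frac_inner - 0.25) < 0.05
```

Note that the sketch relies on the proposal being symmetric; with a proposal biased towards the centre, the simple accept-if-in-$S$ rule would no longer target the uniform distribution.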
How to draw from a uniform distribution over a large state space via MCMC
CC BY-SA 4.0
null
2023-04-13T10:39:34.013
2023-05-25T01:51:54.797
2023-04-15T10:07:10.187
142696
142696
[ "bayesian", "sampling", "markov-chain-montecarlo", "metropolis-hastings" ]
612777
2
null
612773
1
null
I think what you are looking for is [Ensemble Learning](https://en.wikipedia.org/wiki/Ensemble_learning), where the predictions of multiple models are aggregated. One simple strategy is to use the average of the predictions. Additionally, people use the standard deviation of the different predicted values as a simple measure of the uncertainty.
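A minimal sketch of the averaging strategy (plain Python, with made-up predictions from three hypothetical models):

```python
# hypothetical predictions from three regression models for the same 3 cases
preds = [
    [102.0, 99.5, 110.2],   # model 1
    [100.5, 98.0, 112.0],   # model 2
    [101.0, 100.0, 111.0],  # model 3
]

n_models = len(preds)

# ensemble prediction: per-case average across models
ensemble = [sum(col) / n_models for col in zip(*preds)]

# per-case spread across models as a crude uncertainty measure
spread = [
    (sum((p - m) ** 2 for p in col) / n_models) ** 0.5
    for col, m in zip(zip(*preds), ensemble)
]
```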
null
CC BY-SA 4.0
null
2023-04-13T10:46:46.153
2023-04-13T10:46:46.153
null
null
220466
null
612778
1
null
null
-5
305
That’s a sequel to my previous question [Does Gaussian process functional regression fulfill the consistency condition?](https://stats.stackexchange.com/questions/611358/does-gaussian-process-functional-regression-fulfill-the-consistency-condition)

The conclusion was that:

- Gaussian process regression with i.i.d. Gaussian noise returns the same posterior Gaussian process for any partition of the data;
- ... but with completely different calculations/algorithms.

In particular, GP regression with a full $n$-update (i.e. the trivial partition) has $O\left( {{n^3}} \right)$ generic computational complexity, but GP regression with $n$ sequential $1$-updates (i.e. the atomic partition) has exponential computational complexity in $n$. That's the reason why we never do $n$ sequential $1$-updates but an $(n-1)$-update followed by a $1$-update in sequential/online learning, see e.g. Using Gaussian Processes to learn a function online.

Now, consider a Bayesian problem with data $D = \left( {{d_1},...,{d_n}} \right)$ and parameters $\Theta$:

$p\left( {\left. \Theta \right|D} \right) \propto p\left( {\left. D \right|\Theta } \right)p\left( \Theta \right)$

Proposition $1$: if the likelihood factorizes $p\left( {\left. D \right|\Theta } \right) = \prod\limits_{i = 1}^n {p\left( {\left. {{d_i}} \right|\Theta } \right)}$ and $\Theta$ is fixed once and for all, then the posterior calculations are exactly the same for any partition $D = \bigcup\limits_{j = 1}^p {{D_j}}$ of the data and any of its $p!$ permutations.

Proof: we have

$p\left( {\left. \Theta \right|D} \right) \propto p\left( {\left. D \right|\Theta } \right)p\left( \Theta \right) = \left( {p\left( {\left. {{D_p}} \right|\Theta } \right)...\underbrace {\left( {p\left( {\left. {{D_2}} \right|\Theta } \right)\underbrace {\left( {p\left( {\left. {{D_1}} \right|\Theta } \right)p\left( \Theta \right)} \right)}_{ \propto p\left( {\left. \Theta \right|{D_1}} \right)}} \right)}_{ \propto p\left( {\left. \Theta \right|{D_1},{D_2}} \right)}...} \right)$

Therefore, the only differences from one partition to another and from one permutation to another are the parentheses and the order of the products, which are immaterial by the associative and commutative properties of the product. QED.

Proposition 1 just says that the likelihood $\prod\limits_{i = 1}^n {p\left( {\left. {{d_i}} \right|\Theta } \right)}$ and the full posterior remain the same regardless of how the data are grouped together and of their order of arrival.

Corollary $1$: GP regression with i.i.d. Gaussian noise is not a Bayesian method.

Proof: We have ${d_i} = \left( {{x_i},{y_i}} \right)$ and for i.i.d. Gaussian noise the likelihood factorizes

$p\left( {\left. D \right|\Theta } \right) = \prod\limits_{i = 1}^n {p\left( {\left. {{x_i},{y_i}} \right|f,\sigma } \right) = } \prod\limits_{i = 1}^n {p\left( {\left. {{y_i}} \right|{x_i},f,\sigma } \right)p\left( {\left. {{x_i}} \right|f,\sigma } \right)} \propto \prod\limits_{i = 1}^n {p\left( {\left. {{y_i}} \right|{x_i},f,\sigma } \right)} \propto {\sigma ^{ - n}}\prod\limits_{i = 1}^n {{e^{ - \frac{{{{\left( {{y_i} - f\left( {{x_i}} \right)} \right)}^2}}}{{2{\sigma ^2}}}}}}$

Moreover, $\Theta$ is fixed once and for all: $\Theta = \left( {f,\sigma ,m,k,{\rm M},{\rm K}} \right)$, see [Is Gaussian process functional regression a truly Bayesian method (again)?](https://stats.stackexchange.com/questions/611582/is-gaussian-process-functional-regression-a-truly-bayesian-method-again). But the posterior calculations are not exactly the same from one partition/update scheme to another. QED.

In the same way, we have

Proposition $2$: if the likelihood factorizes and $\Theta$ is fixed once and for all, then Bayesian inference has $O(n)$ computational complexity.

Proof: Computing the prior $p\left( \Theta \right)$ has $O(1)$ computational complexity because it does not depend on $n$. Computing the likelihood $p\left( {\left. D \right|\Theta } \right) = \prod\limits_{i = 1}^n {p\left( {\left. {{d_i}} \right|\Theta } \right)}$ has $O(n)$ computational complexity. Computing the normalization constant $p\left( D \right) = \int {p\left( {\left. D \right|\Theta } \right)p\left( \Theta \right){\text{d}}\Theta }$ has $O(1)$ complexity because that's a $|\Theta|$-dimensional integral that has nothing to do with $n$ (moreover, we don't need to compute it, it cancels out by the Leibniz rule/Feynman trick). Therefore, computing the full posterior $p\left( {\left. \Theta \right|D} \right)$ has $O(n)$ computational complexity. Finally, drawing posterior inferences, taking Bayes estimators and computing credible intervals has $O(1)$ computational complexity because it involves $\left| \Theta \right|$-dimensional integrals whose complexity basically does not depend on $n$ (we just integrate different functions that depend on $n$, but the complexity of those integrals basically does not depend on $n$). All in all, Bayesian inference has $O(n)$ computational complexity. QED.

For one example of such a truly Bayesian $O(n)$ functional regression algorithm, see [Bayesian interpolation and deconvolution](https://bayes.wustl.edu/glb/deconvolution.pdf).

Corollary $2$: again, GP regression with i.i.d. Gaussian noise is not a Bayesian method.

Proof: GP regression does not have $O(n)$ computational complexity.

Is that correct please?
Is Gaussian process functional regression a Bayesian method (over again)?
CC BY-SA 4.0
null
2023-04-13T10:48:53.933
2023-04-19T12:24:00.390
2023-04-19T12:24:00.390
384580
384580
[ "bayesian", "gaussian-process" ]
612779
1
null
null
3
50
I ran a null binomial generalized additive model (GAM) using `mgcv` and it gives a negative deviance explained! As far as I know, deviance explained is an analogue of $R^2$, so it should be between 0 and 1. Is this negative deviance explained caused by an error in the package? If so, how can I manually estimate deviance explained? My code is given below:

```
library(mgcv)
x1 = rnorm(100)
x2 = rnorm(100)
y = rbinom(100, 1, 0.5)
Data = data.frame(y, x1, x2)
model = gam(y ~ 1, data = Data, family = binomial)
summary(model)$dev.expl
```

output:

```
[1] -2.050785e-16
```
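In case it matters, my understanding is that deviance explained is computed as $1 - \text{deviance}/\text{null deviance}$. A toy sketch of that calculation (plain Python, with made-up binary data) shows that for an intercept-only fit the two deviances coincide, giving exactly zero:

```python
import math

# toy binary outcomes; an intercept-only binomial fit predicts the overall mean
y = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
p_hat = sum(y) / len(y)

def binom_deviance(y, p):
    """-2 * log-likelihood for Bernoulli outcomes with fitted probabilities p."""
    return -2 * sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
                    for yi, pi in zip(y, p))

dev = binom_deviance(y, [p_hat] * len(y))       # model deviance
null_dev = binom_deviance(y, [p_hat] * len(y))  # null deviance: same fit here
dev_expl = 1 - dev / null_dev

assert dev_expl == 0.0
```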
Gam using mgcv is giving negative deviance explained
CC BY-SA 4.0
null
2023-04-13T10:52:33.217
2023-04-13T16:03:53.410
2023-04-13T11:12:00.757
247274
null
[ "error", "generalized-additive-model", "mgcv", "deviance", "negative-r-squared" ]
612780
1
null
null
2
41
Consider a simple regression model $Y_i = \alpha + \beta X_i + u_i, (i=1,...,n)$ where $(Y_i,X_i)$ is a random sample. Let $\hat{\beta}$ be the OLS estimator of $\beta$ and $\bar{X}$ be the sample mean of $X_i$ given by $\bar{X}=n^{-1} \sum_{i=1}^n X_i$. I'm trying to derive $Var((\hat{\beta}-\beta)\bar{X})$. If $X_i$ is fixed (nonrandom), it is easy to derive. But when $X_i$ is random, I have no idea how to derive this variance, since $X_i$ appears in both the numerator and the denominator of $\hat{\beta}$. Any comments/answers would be appreciated!
on the variance of sample mean times estimated coefficient
CC BY-SA 4.0
null
2023-04-13T10:54:19.673
2023-04-13T10:54:19.673
null
null
111064
[ "regression", "variance" ]
612782
2
null
612779
3
null
A number like `-2.050785e-16` is R’s way of telling you the answer is zero. When you fit an intercept-only model like you do here, the model really does explain zero percent of the deviance, so this is correct behavior. Getting a value with a minus sign out in front could be because the inner workings of the optimization algorithm are slightly different from what happens when the total deviance is calculated, but this is not so concerning to me.
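The same kind of floating-point residue is easy to reproduce. For example, an algebraically-zero quantity computed in a different order comes out on the order of $10^{-16}$ rather than exactly 0:

```python
# (0.1 + 0.2) - 0.3 is exactly zero in real arithmetic,
# but not in binary floating point
residue = (0.1 + 0.2) - 0.3

assert residue != 0.0          # not exactly zero...
assert abs(residue) < 1e-15    # ...but zero for all practical purposes
```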
null
CC BY-SA 4.0
null
2023-04-13T11:11:46.000
2023-04-13T16:03:53.410
2023-04-13T16:03:53.410
247274
247274
null
612783
1
612787
null
0
25
Suppose I have a likelihood maximisation problem $$ \hat{\theta} = \arg\max_\theta L_n(\theta;y) $$ where $\theta = [\theta_1, \theta_2, ...., \theta_k]^T$. What if I instead solved the maximisation problem leaving out one parameter, $$ \hat{\theta}_{-k} = \arg\max_{\theta_{-k}} L_n(\theta_{-k};y), $$ but looped over each value of $\theta_k$ and picked the specification with the highest likelihood? Would this be identical to estimating the problem jointly?
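To make the question concrete, here is a toy sketch of the grid/profile scheme I have in mind (plain Python, with a made-up concave log-likelihood whose inner maximisation is available in closed form; the grid values are my assumption):

```python
# toy concave log-likelihood in (theta, psi); its joint maximum is at (1.0, 2.0)
def loglik(theta, psi):
    return -(theta - 1) ** 2 - (psi - 2) ** 2 - 0.5 * (theta - 1) * (psi - 2)

def inner_max(psi):
    # for fixed psi, d/dtheta = 0 gives theta = 1 - 0.25 * (psi - 2)
    theta = 1 - 0.25 * (psi - 2)
    return theta, loglik(theta, psi)

# loop over a grid of psi values, maximise over theta each time,
# and keep the specification with the highest likelihood
grid = [i / 10 for i in range(0, 41)]  # 0.0, 0.1, ..., 4.0
best_psi = max(grid, key=lambda p: inner_max(p)[1])
best_theta = inner_max(best_psi)[0]

# in this toy case the scheme recovers the joint maximiser,
# because the grid happens to contain the true value 2.0
assert (best_theta, best_psi) == (1.0, 2.0)
```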
Two step maximum likelihood
CC BY-SA 4.0
null
2023-04-13T11:13:34.320
2023-04-13T12:15:08.067
null
null
172814
[ "maximum-likelihood", "two-step-estimation" ]
612785
1
null
null
2
35
I am developing a credit risk decisioning model, i.e. a model that assesses the risk of default of an incoming transactions and decides whether to accept it or not. Of course my dataset is imbalanced : the minority class (i.e. defaulted transactions) represents ~5% of my data. What I care about is discrimination power rather than good probabilities because I will use the model to make decisions given an acceptance rate target (e.g. I want to accept 90% of incoming transactions), not to make financial predictions (in which case well calibrated probabilities would be important). Because of that, I evaluate my model with ROC AUC (or PR AUC, I am still unsure which would be best). However, I saw that even though I evaluate my model with AUC, I should still keep binary:logistic as the objective function of my XGBClassifier. The reasons to me are unclear why, but one difficulty I can foresee with having an "AUC objective function" is that it's not possible to define a loss function (and let alone a differentiable one) that would give the "AUC loss" of one given sample as AUC is an aggregate loss rather than an individual one, and I understand that XGB needs an individual loss to compute the losses at the leaf level. Knowing that, it means that the only way to optimize my model for discrimination power is to use AUC as an evaluation metric in the process of hyperparameter optimization. I find that quite disappointing because as per my experience, hyper-param optimization is not a real game changer and usually only allows to earn a few basis points of AUC. Therefore my questions are : - Is it possible to re-define the objective function to optimize XGBClassifier for discrimination power rather than probabilities ? - If not, what are other ways to "boost" the discrimination power of my model besides hyper-parameter optimization ? - Conceptually, the ability to discriminate 2 samples is close to the task of "learning to rank". 
Therefore, is there a way to use XGBRanker for standard classification? Have you tried it? PS: I don't think it is important here, but mentioning it just in case: I actually only care about partial AUC: [https://en.wikipedia.org/wiki/Partial_Area_Under_the_ROC_Curve](https://en.wikipedia.org/wiki/Partial_Area_Under_the_ROC_Curve) because regions where the False Positive Rate is too high (say, above 20%) are not applicable in my case.
Optimizing XGBClassifier for discrimination power
CC BY-SA 4.0
null
2023-04-13T11:41:57.833
2023-04-13T12:46:27.240
null
null
385606
[ "classification", "boosting", "loss-functions", "unbalanced-classes", "ranking" ]
612786
1
null
null
0
93
I conducted a moderation analysis on repeated-measures data using the MEMORE macro for SPSS ([https://www.akmontoya.com/spss-and-sas-macros](https://www.akmontoya.com/spss-and-sas-macros)). However, I need standardized effect sizes but I haven't managed to figure it out and it's quite urgent. So each participant read 2 character descriptions about a healthy (Condition C1) and unhealthy (Condition C2) male (independent variable) and had to judge the likability (outcome). They also had to score their gender system justification beliefs once (moderator). MEMORE recalculates the outcome by taking a difference score of likability_C1 - likability_C2 at various levels of the moderator and then calculates a t-statistic to check significance. I got this output, with a mean-centered moderator: Conditional Effect of 'X' on Y at values of moderator(s) ``` SystemX Effect SE t p LLCI ULCI -1,2917 ,7257 ,0877 8,2786 ,0000 ,5535 ,8980 ,0000 ,4285 ,0620 6,9172 ,0000 ,3068 ,5503 1,2917 ,1314 ,0877 1,4984 ,1347 -,0409 ,3036 ``` How do I now get effect sizes?
How to calculate effect size of moderation analyses on repeated-measures data?
CC BY-SA 4.0
null
2023-04-13T11:52:13.073
2023-04-14T18:43:53.560
2023-04-13T11:53:56.583
378828
378828
[ "repeated-measures", "interaction", "spss", "effect-size" ]
612787
2
null
612783
2
null
Note that \begin{align} \max_{\theta} L_n(\theta;y) &= \max_{\theta_1,\dots,\theta_k} L_n(\theta_1,\dots,\theta_k;y) \\ &= \max_{\theta_1} \left[\max_{\theta_2}\left[\cdots \max_{\theta_k} L_n(\theta_1,\dots,\theta_k;y)\right]\right] \end{align} See [this answer](https://math.stackexchange.com/a/4417387/652310) for why the last equality is true. Any permutation of the $\max_{\theta_i}$ operators would work too.
null
CC BY-SA 4.0
null
2023-04-13T12:15:08.067
2023-04-13T12:15:08.067
null
null
296197
null
612788
1
612868
null
2
106
In the book Mathematical Methods for Physics and Engineering it is said that the likelihood function tends to a Gaussian (centred on the maximum-likelihood estimate) in the large sample limit. The way it is phrased makes it seem like they are saying this is due to the central limit theorem, but I am struggling to see how it is relevant. It relies on the random variable being a sum of a sequence of other random variables, which I don't think is the case here. I believe this is often misunderstood, for example in [this question](https://stats.stackexchange.com/questions/394768/how-is-it-possible-for-both-the-likelihood-and-log-likelihood-to-be-asymptotical), which I have several problems with. The arguments use the central limit theorem to find the distribution of the likelihood and show that it is asymptotically normal. However, we are not interested in its distribution as a random variable; we instead care about its functional form as the parameters are varied for given observed sample values. As an example of what I mean, suppose we draw $n$ sample values $x_i$ from a distribution $P(x|\tau)=(1/\tau)\exp(-x/\tau)$. The likelihood function is then $$L(\boldsymbol{x};\tau)=P(x_1|\tau)P(x_2|\tau)\dots P(x_n|\tau)=\frac{1}{\tau^n}\exp{\left[-\frac{\sum_i x_i}{\tau}\right]}.$$ Suppose we now evaluate this using the observed values of $x_i$ and consider it as a function of $\tau$. In general this will obviously be different every time, but the book says that in the limit $n\to\infty$, the function tends to a Gaussian with peak centred on the maximum likelihood estimate $\hat{\tau}$ and width inversely proportional to $\sqrt{n}$. Why should we expect this to be the case?
Does asymptotic normality of the likelihood function follow from the central limit theorem?
CC BY-SA 4.0
null
2023-04-13T12:15:37.653
2023-04-13T23:24:58.830
2023-04-13T21:39:00.040
290934
290934
[ "normal-distribution", "maximum-likelihood", "likelihood", "central-limit-theorem", "asymptotics" ]
612789
2
null
612699
5
null
Thanks for the valuable comments, which I try to summarize and put into a unifying framework in this answer. As pointed out by @whuber, my suggested formula violated one plausible axiom for a mean, namely that it should increase in all arguments. @whuber also suggested to base the formula on a more rigorous axiomatic ground by postulating a number of reasonable properties of the desired "mean". Hence I did a brief literature study, and it turned out that none less than Kolmogoroff (sic!) already did exactly that. In 1930, he postulated the following properties for a function $M:{\mathbb R}^n \to{\mathbb R}$ that represents a "regular mean": - $M$ is continuous and increasing in each variable. - $M$ is a symmetric function. - The mean of repeated data equals the repeated value. - The mean of a sample remains unchanged if a part of the sample is replaced by its corresponding mean Kolmogoroff proved that, if these conditions hold, the mean must be of the form $$M(x_1,\ldots,x_n) = f^{-1}\left(\frac{1}{n}\sum_{i=1}^n f(x_i)\right)$$ which the Wikipedia page cited by @COOLSerdash calls "generalized f-mean", and the French Wikipedia calls it "quasi-arithmetic mean" or "Kolmogoroff mean". > A. Kolmogoroff: "Sur la notion de la moyenne." Atti Reale Accademia Nazionale dei Lincei, vol. 12,‎ 1930, p. 388–391 When applying this to my particular problem of defining a mean that is greater than the arithmetic mean, it is sufficient that $f$ is a convex function, because for a convex function we have $$\frac{1}{2}\Big(f(x)+f(y)\Big) \geq f\left(\frac{x+y}{2}\right) \Rightarrow f^{-1}\left(\frac{1}{2}(f(x)+f(y))\right) \geq \frac{x+y}{2}$$ The Hölder mean of order $p$, which is the root-mean-square for $p=2$, as suggested by @BenBolker, is the special case $f(x)=x^p$. The choice $f(x)=e^x$, as suggested by @Henry, is yet another special case. For my use case, I have settled on the Hölder mean of order two.
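As a quick numeric sketch of the generalized f-mean (the function names below are mine, not from Kolmogoroff's paper): with a convex $f$, such as $x^2$ or $e^x$, the resulting mean is at least as large as the arithmetic mean, matching the argument above.

```python
import math

def f_mean(xs, f, f_inv):
    """Generalized f-mean (Kolmogoroff mean): f_inv of the average of f(x)."""
    return f_inv(sum(f(x) for x in xs) / len(xs))

xs = [1.0, 2.0, 3.0, 4.0]
arithmetic = sum(xs) / len(xs)                    # 2.5

# Hoelder mean of order 2 (root-mean-square): f(x) = x^2 is convex.
rms = f_mean(xs, lambda x: x * x, math.sqrt)

# f(x) = e^x, another convex choice, as suggested in the comments.
exp_mean = f_mean(xs, math.exp, math.log)

print(arithmetic, rms, exp_mean)
```

Both `rms` and `exp_mean` exceed `arithmetic` here, as the convexity inequality predicts.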
null
CC BY-SA 4.0
null
2023-04-13T12:18:02.027
2023-04-26T13:19:19.000
2023-04-26T13:19:19.000
244807
244807
null
612790
2
null
612774
0
null
A general way to evaluate changes over time is to model your measure of brain-region development flexibly as a function of time, for example with a regression spline, and include an interaction term between that and your measure of depression. Significance of the (set of) interaction terms indicates whether there is a difference associated with depression. That basic strategy is used for longitudinal data in generalized least squares, as outlined in Chapter 7 of Frank Harrell's [Regression Modeling Strategies](https://hbiostat.org/rmsc/long.html#modeling-within-subject-dependence). That strategy, however, is not restricted to longitudinal data; it can be used for anything measured as a function of time. To some extent this is even simpler than a longitudinal generalized least squares model, as for any one brain region there are no intra-individual correlations to take into account. The potential problem is that the variability in results among individuals might limit your ability to detect a true difference related to depression. That's similar to the possible difference in power between [2-sample and paired t-tests](https://stats.stackexchange.com/q/524445/28500). You don't say how many brain regions you are measuring, whether you have specific hypotheses about one or more brain regions, or whether you are just looking for any brain regions that you might find to differ. With multiple regions in the same individual, you should take correlations among regions within the same individual into account. If you are evaluating many regions, you need to adjust for [multiple comparisons](https://en.wikipedia.org/wiki/Multiple_comparisons_problem).
null
CC BY-SA 4.0
null
2023-04-13T12:40:42.873
2023-04-13T12:40:42.873
null
null
28500
null
612791
1
null
null
1
35
I am looking for ways of estimating or mitigating the risk of applying a classification model (say logistic regression for simplicity) in a certain population (the inference set) that is known to be different from the training population. We know that our metrics estimated in the test set are not directly applicable to the inference set since many of the features have different distributions. We are struggling to find ways of measuring how the model will be impacted and if/how the metrics measured can be translated to this inference set, or which actions we should take before doing so. One idea was to build a simple distance-based model to first filter cases that are close to the training set, but it ends up filtering out too many cases, so I am open to suggestions :) @Update I have tried the following procedure to get more understanding of my data: - Selected the N most important features of my model - Trained an IsolationForest with the inference set and predicted on the training set - Trained an IsolationForest with the training set and predicted on the inference set - Compared the % rejections between 2 and 3. The rejection rate is much lower on 3., which leads me to believe that the inference set is "contained" by the training set, despite the distributions being different. - In this scenario, it should be safe(ish) to apply my model Can you please criticize the approach above?
Applying classification model when training and inference populations are different
CC BY-SA 4.0
null
2023-04-13T12:41:42.603
2023-04-14T13:45:11.783
2023-04-14T13:45:11.783
385609
385609
[ "classification", "metric", "out-of-distribution" ]
612792
2
null
612785
0
null
- Yes, we can consider using binary:hinge as our objective so that we get 0/1 predictions. - Using a custom evaluation metric is straightforward in XGBoost (a custom objective function is a bit thornier, as it requires a Hessian); there is a nice worked example in the Custom Objective and Evaluation Metric section of the XGBoost documentation, so defining partial ROC AUC should be easy. Note that sklearn.metrics.roc_auc_score already has an argument (max_fpr) for partial AUC calculations, so that might save you even more coding. Just note that we need to set disable_default_eval_metric to True so that we primarily use our own metric. - Using rankers for fraud detection can definitely be done. Nevertheless, I have not seen it in Kaggle fraud-detection competitions yet, so I am not sure how easy it is to implement and whether it's worth the effort. We would have to reshape our data quite a bit. That said, there are some publications on this: for example, Viola et al. (2022) MetaAP: a meta-tree-based ranking algorithm optimizing the average precision from imbalanced data specifically uses XGBRanker as its baseline. Almendra (2013) Finding the needle: A risk-based ranking of product listings at online auction sites for non-delivery fraud prediction covers a similar area too.
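To make the partial-AUC point concrete, here is a minimal sketch (my illustration, with made-up labels and scores) of an evaluation metric built on sklearn.metrics.roc_auc_score's max_fpr argument. Recent XGBoost versions let you pass such a callable as the eval_metric in the sklearn wrapper, but check the docs of your version before relying on that.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def partial_auc(y_true, y_score, max_fpr=0.2):
    """Standardized partial ROC AUC over FPR in [0, max_fpr].

    roc_auc_score applies the McClish correction when max_fpr is set,
    so a perfect ranker still scores 1.0 and a random one 0.5.
    """
    return roc_auc_score(y_true, y_score, max_fpr=max_fpr)

# Toy example (made-up scores): a perfect ranker and a mediocre one.
y = np.array([0, 0, 0, 1, 1])
perfect = np.array([0.1, 0.2, 0.3, 0.8, 0.9])
noisy = np.array([0.4, 0.2, 0.7, 0.5, 0.6])

print(partial_auc(y, perfect))  # 1.0
print(partial_auc(y, noisy))
```

The 0.2 cutoff mirrors the "FPR above 20% is not applicable" constraint from the question; it is just a parameter here.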
null
CC BY-SA 4.0
null
2023-04-13T12:46:27.240
2023-04-13T12:46:27.240
null
null
11852
null
612793
1
null
null
0
24
I have an `lme` model with a significant categorical variable. I have recently been advised to test for autocorrelation in the residuals of that model separately for each level of that variable. The result is that some levels show autocorrelation and some don't, and I am not able to deal with this autocorrelation by specifying relevant correlation structures. I am not entirely sure whether it is really necessary to perform these tests independently. For illustration, I have attached a fairly random example, using the `sp::meuse` data set. Any opinions will be much appreciated! ``` easypackages::libraries("sp", "tidyverse", "gstat") data("meuse") mod <- lm(lead ~ soil*elev, data = meuse) summary(mod) meuse$E1 <- resid(mod) coordinates(meuse) <- c("x", "y") ###separate tests v1 <- variogram(E1 ~ x + y, data = meuse[meuse$soil == "1",]) %>% mutate(soil = "1") v1.fit <- fit.variogram(v1, vgm(psill = 8000, model = "Sph", range = 1500, nugget = 2000)) vario_line1 <- variogramLine(v1.fit, maxdist = 1800) %>% mutate(soil = "1") v2 <- variogram(E1 ~ x + y, data = meuse[meuse$soil == "2",]) %>% mutate(soil = "2") v2.fit <- fit.variogram(v2, vgm(psill = 3000, model = "Sph", range = 1200, nugget = 500)) vario_line2 <- variogramLine(v2.fit, maxdist = 1800) %>% mutate(soil = "2") v3 <- variogram(E1 ~ x + y, data = meuse[meuse$soil == "3",]) %>% mutate(soil = "3") v3.fit <- fit.variogram(v3, vgm(psill = 300, model = "Sph", range = 50, nugget = 10)) vario_line3 <- variogramLine(v3.fit, maxdist = 1800) %>% mutate(soil = "3") mrg_fit <- rbind(v1, v2, v3) mrg_line <- rbind(vario_line1, vario_line2, vario_line3) ggplot() + geom_point(aes(x = dist, y = gamma), data = mrg_fit) + geom_line(aes(x = dist, y = gamma), data = mrg_line, color = "blue") + facet_wrap(~soil, scales = "free_y") + labs(x = "Distance", y = "Semi-variogram") ``` [](https://i.stack.imgur.com/i8pyw.png) ``` ###combined test v <- variogram(E1 ~ x + y, data = meuse) v.fit <- fit.variogram(v, vgm(psill = 8000, model = 
"Sph", range = 1500, nugget = 2000)) vario_line <- variogramLine(v.fit, maxdist = 1800) %>% mutate(soil = "1") ggplot() + geom_point(aes(x = dist, y = gamma), data = v) + geom_line(aes(x = dist, y = gamma), data = vario_line, color = "blue") + labs(x = "Distance", y = "Semi-variogram") ``` [](https://i.stack.imgur.com/7vwoQ.png)
Testing for autocorrelation in model with categorical variable
CC BY-SA 4.0
null
2023-04-13T12:50:41.020
2023-04-13T13:00:01.437
2023-04-13T13:00:01.437
251270
251270
[ "r", "time-series", "mixed-model", "autocorrelation", "variogram" ]
612794
1
null
null
0
8
So, I need to do some exploratory data analysis and I picked MDS to figure out if there were trends in the data. The structure of my data looks like this: ``` $ Generation: int 2 2 2 2 2 2 2 2 2 2 ... $ Panel : chr "A" "A" "A" "A" ... $ Line : int 1 1 1 1 1 1 1 1 1 1 ... $ Rep : int 2 2 2 2 2 2 2 2 6 6 ... $ Sex : chr "F" "F" "F" "F" ... $ Size : num 1662 1720 1721 1778 1565 ... $ ILD12 : num 1930 1954 1947 1932 1915 ... $ ILD15 : num 1524 1567 1575 1539 1528 ... $ ILD18 : num 427 414 420 389 418 ... $ ILD23 : num 732 706 702 733 749 ... $ ILD25 : num 1380 1386 1383 1393 1391 ... $ ILD29 : num 1544 1584 1554 1568 1531 ... $ ILD37 : num 1586 1546 1575 1568 1611 ... $ ILD39 : num 2070 2060 2046 2061 2060 ... $ ILD46 : num 1515 1481 1498 1493 1532 ... $ ILD49 : num 1970 1973 1953 1971 1962 ... $ ILD57 : num 673 695 705 691 697 ... $ ILD58 : num 1117 1166 1172 1164 1127 ... $ ILD67 : num 192 194 188 196 178 ... $ ILD69 : num 611 644 623 642 585 ... $ ILD78 : num 522 552 531 545 497 ... $ ILD89 : num 97.5 99.2 97.9 99.9 96.9 ... ``` How would I deal with categorical data in my dataset if I am using R to analyse the data? I am using ggplot2 too - would I just fit a model first using `cmdscale` and then plot the x and y coordinates? So something like this: ``` ggplot(df, aes(x=x, y=y, color = Panel)) + geom_point() + ggtitle("Metric MDS Results") + labs(x="Coordinate 1", y="Coordinate 2") + theme_bw() ``` Am I correct to assume the `color` parameter in ggplot shows the similarity of categorical variable `Panel`?
What's the difference between metric multidimensional scaling and non-metric multidimensional scaling? And how to deal with categorical variables?
CC BY-SA 4.0
null
2023-04-13T13:12:06.573
2023-04-13T13:12:06.573
null
null
385611
[ "multidimensional-scaling" ]
612795
2
null
612686
2
null
Slightly adapting my answer in [Why do we need a VECM specification if the I(1) processes are cointegrated?](https://stats.stackexchange.com/questions/397644/why-do-we-need-a-vecm-specification-if-the-i1-processes-are-cointegrated/397665#397665), assume the following $AR(p)$ \begin{equation}\tag{1}\label{1} y_t = \alpha + \phi_1{y_{t-1}} + \ldots + \phi_p{y_{t - p}} + \epsilon_t \end{equation} Using lag operators we can write this as $$(1 - \phi_1{L^1} - \ldots - \phi_pL^p) \cdot y_t = \phi(L) y_t = \alpha + \epsilon_t$$ Define \begin{equation}\tag{2}\label{2} \rho \equiv \phi_1 + \phi_2 + \ldots + \phi_p \end{equation} and \begin{equation}\tag{3}\label{3} \zeta_s \equiv - [\phi_{s + 1} + \phi_{s + 2} + \ldots + \phi_p] \end{equation} Rewrite $1 - \phi_1{L^1} - \ldots - \phi_pL^p$ by adding and immediately subtracting the coefficients of order $j+1$ to $p$ on the lag operator of order $j$. We get $$ \begin{gathered} 1 - [(\phi_1 + \phi_2 + \ldots + \phi_p) - (\phi_2 + \phi_3 + \ldots + \phi_p)]L \hfill \\ - [(\phi_2 + \ldots + \phi_p) - (\phi_3 + \ldots + \phi_p)]L^2 \hfill \\ - \ldots - [(\phi_{p-1} + \phi_p) - \phi_p]L^{p-1} - \phi_pL^p \hfill \\ \end{gathered} $$ Using \eqref{2} and \eqref{3} yields $$1 - (\rho + \zeta_1)L - (\zeta_2 - \zeta_1)L^2 - \ldots - (\zeta_{p-1} - {\zeta_{p-2}})L^{p-1} - ( - \zeta_{p-1})L^p$$ Solving the terms in brackets gives \begin{equation}\tag{4}\label{4} 1 - \rho L - \zeta_1L - \zeta_2L^2+\zeta_1L^2 - \ldots - \zeta_{p-1}L^{p-1} + {\zeta_{p-2}}L^{p-1} + \zeta_{p-1}L^p \end{equation} The $\zeta_i$ appear both before the $i$th lag operator and, with reverse sign, before the $(i+1)$th lag operator. 
We can hence rewrite \eqref{4} as $$1 - \rho L - (\zeta_1L + \zeta_2L^2 + \ldots + \zeta_{p-1}L^{p-1})(1-L)$$ Hence, we have rewritten \eqref{1} as $$\left[ 1 - \rho L - \left( \zeta_1L + \zeta_2L^2 + \ldots + \zeta_{p-1}L^{p-1} \right)(1-L) \right]y_t = \alpha + \epsilon_t$$ Multiplying out the square brackets, using $\Delta=1-L$, applying the lag operators and rearranging yields $$y_t = \alpha + \rho y_{t-1} + \zeta_1\Delta y_{t-1} + \zeta_2\Delta y_{t-2} + \ldots + \zeta_{p-1}\Delta y_{t-p+1} + \epsilon_t$$ Subtract $y_{t-1}$ from either side to get \begin{equation}\tag{5}\label{5} \Delta y_t = \alpha + (\rho - 1)y_{t-1} + \zeta_1\Delta y_{t-1} + \zeta_2\Delta y_{t-2} + \ldots + \zeta_{p-1}\Delta y_{t-p+1} + \epsilon_t \end{equation}
null
CC BY-SA 4.0
null
2023-04-13T13:14:26.587
2023-04-13T13:22:31.397
2023-04-13T13:22:31.397
67799
67799
null
612796
1
null
null
1
45
I want to conduct a meta-analysis of single means. However, these means are restricted mean survival times (RMST) of cerebrospinal fluid shunts inserted to treat hydrocephalus. For this, I have digitized published survival curves with [https://apps.automeris.io/wpd/](https://apps.automeris.io/wpd/). Then I have extracted the individual patient data with the R package [IPDfromKM](https://cran.r-project.org/package=IPDfromKM). Finally, I have reconstructed the survival curve and calculated the RMST at 12 months as described here: [https://stackoverflow.com/questions/43173044/how-to-compute-the-mean-survival-time](https://stackoverflow.com/questions/43173044/how-to-compute-the-mean-survival-time). I now have the following RMST values and associated standard errors. ``` study <- c("study1", "study2", "study3", "study4", "study5") n_patients <- c(535, 209, 111, 599, 434) rmst_12 <- c(10.54759, 11.36175, 10.50244, 10.51183, 8.716552) se_12 <- c(0.1463532, 0.1439506, 0.3246873, 0.1471398, 0.2374582) > data study n_patients rmst_12 se_12 1 study1 535 10.54759 0.1463532 2 study2 209 11.36175 0.1439506 3 study3 111 10.50244 0.3246873 4 study4 599 10.51183 0.1471398 5 study5 434 8.716552 0.2374582 ``` Now I am performing the meta-analysis of my RMST. 
``` # Compute standard deviation (SD) from standard error (SE) ---- data$sd_12 <- data$se_12 * sqrt(data$n_patients) # Load the required package require(meta) # Compute the meta-analysis with the metamean function ---- mm_12 <- metamean(n = n_patients, mean = rmst_12, sd = sd_12, studlab = study, data = data, method.mean = "Luo", method.sd = "Shi", sm = 'MRAW', random = TRUE, warn = TRUE, prediction = TRUE, method.tau = "REML") > mm_12 Number of studies combined: k = 5 Number of observations: o = 1888 mean 95%-CI Common effect model 10.5757 [10.4247; 10.7268] Random effects model 10.3373 [ 9.4868; 11.1878] Prediction interval [ 7.0212; 13.6534] Quantifying heterogeneity: tau^2 = 0.8975 [0.2937; 7.7528]; tau = 0.9474 [0.5419; 2.7844] I^2 = 95.6% [92.3%; 97.5%]; H = 4.78 [3.61; 6.34] Test of heterogeneity: Q d.f. p-value 91.39 4 < 0.0001 Details on meta-analytical method: - Inverse variance method - Restricted maximum-likelihood estimator for tau^2 - Q-Profile method for confidence interval of tau^2 and tau - Prediction interval based on t-distribution (df = 3) - Untransformed (raw) means ``` Everything works fine, but I am wondering whether this is correct from a methodological point of view. Thank you in advance for your help. PS. I am unsure if this is the right place for this post. Charles
Restricted mean survival time meta-analysis
CC BY-SA 4.0
null
2023-04-13T13:15:19.793
2023-04-14T16:41:46.180
2023-04-14T16:25:41.900
28500
385612
[ "survival", "meta-analysis" ]
612798
2
null
612610
1
null
This can be modeled by regression, with some considerations relating to the correlations within individuals over time and the correlations among outcome measures within individuals. Start by considering a single outcome measure. You fit the outcome as a function of your controlled independent variable (the pH treatment), as a function of time, and with an interaction between treatment and time that allows for differences in trajectories over time depending on pH. You should allow for some flexibility in the association of the outcome with both pH and time, for example with regression splines. Section 2.4 of Harrell's [Regression Modeling Strategies](https://hbiostat.org/rmsc/genreg.html#sec-relax.linear) (RMS) discusses that aspect of the modeling. Even for a single outcome measure, you would also have to take the correlations among measurements within an individual into account. [Chapter 7 of RMS](https://hbiostat.org/rmsc/long.html) discusses ways to do that, with an emphasis on generalized least squares. Such data are also often analyzed with mixed models, treating the individuals as contributing random effects to baseline values (intercepts) or associations with treatment or time. The multiple outcome measurements (length, weight) are likely to be correlated within a single individual, which should be taken into account. There are several ways to handle such correlated outcome, with different strengths and weaknesses. You need to consider your goals and your understanding of the subject matter to decide how to proceed. You could, for example, combine each set of outcome measurements into a single value (for example, via principal-component analysis, or some other method established in your field of study). You could build separate models for each outcome and combine them in a way that takes the correlations into account. Or you could devise a model of all outcome measurements together. 
As an example of the latter, if you have complete data on all individuals and all measurements at all time points, you could consider the classic repeated-measures model described in Section 3 of Fox and Weisberg's [Appendix on Multivariate Linear Models](https://socialsciences.mcmaster.ca/jfox/Books/Companion/appendices/Appendix-Multivariate-Linear-Models.pdf). You put all data for each individual into a single row, with all of the measurements as responses in a multivariate linear model. Then you obtain results from the model by specifying the structure of the intra-individual design to post-modeling tests. Technically, what you have is multivariate (multiple-outcome) longitudinal (measured on the same individuals) data. With a set of shared time points at which all individuals were measured, this could also be called "panel data." Searches on those terms should provide more details about ways to proceed (although some people use "multivariate" to mean multiple predictors rather than multiple outcomes, so you need to be careful with that term). For example, Bandyopadhyay et al. provide "A review of multivariate longitudinal data analysis" in [Statistical Methods in Medical Research 2011; 20: 299–330](https://doi.org/10.1177/0962280209340191), which discusses the strengths and weaknesses of the three general approaches outlined above and different ways to implement them.
null
CC BY-SA 4.0
null
2023-04-13T13:51:24.317
2023-04-13T13:51:24.317
null
null
28500
null
612799
2
null
612791
0
null
I believe that you are right with regard to the concept of first identifying which samples can be considered to lie within your training distribution and which do not. Then you could simply assess how well the model works for samples considered inside the training distribution vs. samples outside of it. There is a whole research area concerned with identifying such samples, called Out-of-Distribution Detection. [Here](https://arxiv.org/abs/1802.04865) is an example of such work.
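The in-distribution check sketched in the question can be illustrated with scikit-learn's IsolationForest; the data below is synthetic, and the +1/-1 labels depend on the default contamination='auto' threshold.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 2))  # stand-in for the training features

# Two hypothetical inference samples: one near the training bulk,
# one clearly out of distribution.
X_infer = np.array([[0.0, 0.0],
                    [10.0, 10.0]])

# Fit on the training set, then flag inference samples:
# +1 = consistent with the training distribution, -1 = outlier.
iso = IsolationForest(random_state=0).fit(X_train)
print(iso.predict(X_infer))
```

The fraction of -1 predictions on the real inference set would be the "% rejections" figure from the question's update.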
null
CC BY-SA 4.0
null
2023-04-13T13:58:31.587
2023-04-13T13:58:31.587
null
null
220466
null
612800
1
613230
null
1
73
I'm working on Cox regression in my PhD research and I would like to know some references about applying the stratified-extended Cox regression model to real-life data. I'm interested in combining the two approaches, stratification and the extended Cox PH model, in a single model and not separately.
Stratified-Extended Cox regression modeling to deal with survival data with time-varying covariates
CC BY-SA 4.0
null
2023-04-13T14:03:36.953
2023-05-06T22:03:06.090
2023-04-13T14:10:13.557
362671
384654
[ "survival", "references", "cox-model", "stratification", "proportional-hazards" ]
612802
2
null
612766
1
null
The test of the direct effect is a test of whether adding the predictor to a model with just the mediator (and covariates) explains more variability in the outcome than the model with just the mediator (and covariates). The test statistic for the direct effect is the same test statistic for the change in $R^2$ or ANOVA test you would run to compare the two models. Given this, you already know your answer; if you have full mediation, that suggests the direct effect is negligible. If you want to claim full mediation, though, you should run an equivalence test to show that the direct effect is negligible. Just because the direct effect is nonsignificant doesn't mean it is equal to zero. A wide confidence interval for the direct effect could include meaningful nonzero values that would oppose the full mediation interpretation.
null
CC BY-SA 4.0
null
2023-04-13T14:32:31.767
2023-04-13T14:32:31.767
null
null
116195
null
612803
2
null
530462
1
null
If the $R_j$'s are known a-priori and $f(x)$ is genuinely equal to $\sum_j b_j I(x \in R_j)$ then this approach will basically work, under minor conditions (e.g., the number of observations in each $R_j$ tends to $\infty$, the error variance is finite and constant, and so forth). There are a couple of issues with this approach in practice: - In practice, the $R_j$'s are learned from the data, and so even if the model is correctly specified you need to take into account the fact that the $R_j$'s are estimated. It's far from trivial how to combine estimation of the $R_j$'s with the uncertainty quantification for $f(x)$ (even estimating the $R_j$'s in the first place is not easy, with CART being a particular estimator that need not be consistent). - One can lower their expectations a bit, and only require $f(x) \approx \sum_j b_j I(x \in R_j)$. Again, if the $R_j$'s are known, then the approach outlined is valid as a confidence interval for the modified parameter $\widetilde f(x) = E\{f(X) \mid X \in R(x)\}$ where $R(x)$ is the rectangle $R_j$ that $x$ belongs to. So you get a valid confidence interval for something, it's just that this something is not $f(x)$. - Even if the $R_j$'s are estimated, it still makes sense to talk about inference for $\widetilde f(x)$. That is, we can ask about intervals for $\widetilde f(x)$ for the particular $R_j$'s we've estimated. A simple (albeit inefficient) way to do something like this is to data-split, using a training set to estimate the tree and a validation set to compute the confidence interval. But, again, a big issue is that you don't get an interval for $f(x)$. I don't have citations on hand unfortunately (sorry) but I think most of the situation I laid out above is well-known to academics who work with decision trees. 
The much harder problem of making confidence intervals for random forests has received substantial interest in recent years, however; see e.g. [this work from Wager et al.](https://jmlr.org/papers/v15/wager14a.html). That might give some references to works that handle the much simpler problem you are interested in.
null
CC BY-SA 4.0
null
2023-04-13T14:32:46.240
2023-04-13T14:32:46.240
null
null
5339
null
612804
2
null
451018
0
null
Alternate answer for part 2: Let $X = \lim_{n\rightarrow\infty}X_n$. Then we have $$ \begin{align*} P(X=k) & = \lim_{n\rightarrow\infty}\,P(X_n=k) \\ & = \lim_{n\rightarrow\infty}\frac{1}{e^{\frac{1}{n}}n^{k}k!} = \begin{cases} 1; & k = 0 \\ 0; & \text{otherwise} \end{cases} \end{align*} $$ Now, using this definition of $X$, we get $$ \begin{align*} \lim_{n\rightarrow \infty} P(|nX_n - X| > \epsilon) & = \lim_{n\rightarrow \infty} P(|nX_n|>\epsilon) \\ & = \lim_{n\rightarrow\infty} P\left(X_n > \frac{\epsilon}{n}\right) \\ & = 1 - \lim_{n\rightarrow\infty} P\left(X_n \leq \frac{\epsilon}{n}\right) \\ & = 1 - 1 = 0, \end{align*} $$ since for large $n$ we have $\frac{\epsilon}{n} < 1$, so that $P\left(X_n \leq \frac{\epsilon}{n}\right) = P(X_n = 0) = e^{-\frac{1}{n}} \rightarrow 1$. QED!
null
CC BY-SA 4.0
null
2023-04-13T14:44:06.163
2023-04-13T14:44:06.163
null
null
248568
null
612805
1
612830
null
1
63
I have used `RandomForestClassifier` from Sklearn to solve a multiclass classification problem (12 classes in total). I get my `x` and `y` from a pandas dataframe. ``` label_bin = LabelBinarizer() unique_classes = np.unique(y) label_bin.fit_transform(unique_classes) x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=1) classifier_RF = RandomForestClassifier(n_estimators=100, criterion='entropy', min_samples_split=2, min_samples_leaf=1, random_state=1) y_train_one_hot = label_bin.transform(y_train) classifier_RF.fit(x_train, y_train_one_hot) y_pred = classifier_RF.predict(x_test) # Converting predictions to original form y_pred_orig = label_bin.inverse_transform(y_pred) ``` Here is a normalized Confusion Matrix obtained from `y_test` and `y_pred_orig`: [](https://i.stack.imgur.com/RssoF.png) And if I make predictions using `x_train`, like this: ``` y_train_pred = classifier_RF.predict(x_train) y_train_pred = label_bin.inverse_transform(y_train_pred) ``` and get a confusion matrix using `y_train` and `y_train_pred`, this is the result: [](https://i.stack.imgur.com/KcdZo.png) Question: By looking at both confusion matrices, can I confirm that my model is overfitting (can't generalize to new unseen data)? If this isn't proof enough, how can I be sure that overfitting is really happening (or not happening)? 
As additional info, this is the `classification_report`: ``` precision recall f1-score support 0 0.22 0.96 0.35 2863 1 0.84 0.17 0.29 1918 2 1.00 0.98 0.99 1987 3 0.97 0.02 0.04 2020 4 0.33 0.00 0.00 1928 5 0.97 0.49 0.65 1995 6 0.84 0.26 0.39 1951 7 0.98 0.99 0.99 1997 8 0.94 0.74 0.83 1987 9 0.99 0.99 0.99 1967 10 0.96 0.71 0.81 1985 11 0.79 0.32 0.46 1916 accuracy 0.57 24514 macro avg 0.82 0.55 0.57 24514 weighted avg 0.80 0.57 0.56 24514 ``` EDIT: Here is how I got the log loss values (not sure if those are the correct steps): ``` y_train_pred_prob = classifier_RF.predict_proba(x_train) y_train_pred_probb = np.concatenate([arr[:, 1].reshape(-1, 1) for arr in y_train_pred_prob], axis=1) log_loss_train = log_loss(y_train, y_train_pred_probb) y_test_pred_prob = classifier_RF.predict_proba(x_test) y_test_pred_probb = np.concatenate([arr[:, 1].reshape(-1, 1) for arr in y_test_pred_prob], axis=1) log_loss_test = log_loss(y_test, y_test_pred_probb) print('Logloss Train:', log_loss_train) print('Logloss Test:', log_loss_test) Logloss Train: 0.20318181358124715 Logloss Test: 0.9682325123617269 ```
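For reference, a common way to compute multiclass log loss without the LabelBinarizer detour is to fit on the integer labels directly, so that predict_proba returns a single (n_samples, n_classes) array that sklearn.metrics.log_loss accepts as-is. The dataset below is synthetic, not the one from the question.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

# Synthetic multiclass data standing in for the real features/labels.
X, y = make_classification(n_samples=2000, n_classes=3, n_informative=6,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

# Fit on the raw integer labels; no one-hot encoding needed.
clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)

# predict_proba is one (n_samples, n_classes) array, usable directly.
ll_train = log_loss(y_tr, clf.predict_proba(X_tr))
ll_test = log_loss(y_te, clf.predict_proba(X_te))
print(ll_train, ll_test)  # a large train/test gap is one sign of overfitting
```

Comparing the two log-loss values (rather than confusion matrices alone) gives a probability-based view of the train/test gap.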
Sign of Overfitting from a Confusion Matrix
CC BY-SA 4.0
null
2023-04-13T14:44:17.150
2023-04-13T17:21:30.523
2023-04-13T16:39:38.323
346317
346317
[ "python", "scikit-learn", "overfitting" ]
612808
1
612911
null
0
55
I got contrary results from my log-rank test (not significant) versus my Cox regression (significant) regarding the effect of my treatment variable, with the aim of hypothesis testing. That's no real wonder, considering that the Cox regression adjusts for 6 covariates other than my treatment. However, I'm confused about the proposition I can make. Without putting too much emphasis on statistical significance, is the proposition correct that the treatment has a significant effect on survival? I know that the log-rank test equals a univariate Cox regression (considering only treatment) and that it's criticized by experts ([The logrank test statistic is equivalent to the score of a Cox regression. Is there an advantage of using a logrank test over a Cox regression?](https://stats.stackexchange.com/questions/486806/the-logrank-test-statistic-is-equivalent-to-the-score-of-a-cox-regression-is-th)). Further, the log-rank test compares the survival curves, while the Cox regression models the relationship of the variables used to the survival time. But what is the implication if the aim is hypothesis testing?
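The score-test equivalence mentioned above can be made concrete. Below is a minimal hand-rolled two-group log-rank chi-square on toy data (not the questioner's data); for a single binary treatment covariate, this is the same quantity the Cox score test produces:

```python
import numpy as np

def logrank_chisq(time, event, group):
    """Two-group log-rank chi-square statistic. For a single binary
    covariate this equals the Cox partial-likelihood score test."""
    time, event, group = map(np.asarray, (time, event, group))
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()                                 # total at risk
        n1 = (at_risk & (group == 1)).sum()               # at risk in group 1
        d = ((time == t) & (event == 1)).sum()            # deaths at t
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n                      # observed - expected
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e ** 2 / var

# Toy data: group 1 survives systematically longer.
stat = logrank_chisq(time=[1, 2, 3, 6, 7, 8],
                     event=[1, 1, 1, 1, 1, 1],
                     group=[0, 0, 0, 1, 1, 1])
# Compare stat against a chi-square with 1 df (3.84 at the 5% level).
```

The adjusted Cox model answers a different (conditional) question than this unadjusted test, which is why their significance can disagree.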
Log-rank and cox regression showing contrary results
CC BY-SA 4.0
null
2023-04-13T15:06:44.877
2023-04-14T10:27:26.587
null
null
379768
[ "cox-model", "logrank-test" ]
612809
1
null
null
0
47
I have a dataset containing a fair amount of continuous and categorical variables. I one-hot encode these variables to be used in various machine learning algorithms. Let's presume a variable has n categories, which we one-hot encode into n columns. If we work with penalized models, we want to standardize all variables. However, when we standardize a one-hot encoded variable, for one variable, we get n standardized columns. Does this mean we are giving an advantage to categorical variables in terms of regularization, especially if a variable has many categories? This problem seems especially relevant when using KNN algorithms (not only for prediction but also imputation). Without standardization, the distances would be biased towards high-valued variables. However, the distances seem to become biased towards categorical data when we standardize, especially if the variable has many categories. If, say, we have a binary categorical variable with an equal number of samples in each category, after standardization 0's would be replaced with -1's, and 1's would remain 1's. Then, the Euclidean distance between two samples with a different categorical value ([-1, 1] and [1, -1]) would be 2*sqrt(2), whereas the distance between a standardized continuous variable would likely be closer to one or two standard deviations (1-2), presuming it's normally distributed (is that an important assumption in this case?). Following the above logic, a simple solution that comes to mind is dividing each one-hot encoded column by the number of categories. So, in the above example, the distance between two samples with a different binary category would be sqrt(2) ([-0.5, 0.5] and [0.5, -0.5]). That way, the total distance between samples seems to be more evenly distributed between variables, with less bias towards categorical variables with a large number of categories. 
Another solution that comes to mind is, instead of standardizing categorical variables, simply replacing the 0's with -1/n, and the 1's with 1/n. Naturally, simply treating all of these ideas as hyperparameters would likely be the "best" solution when trying to get the best model, but I'm interested if there's any literature on the subject. Has anyone had any experience with this? Thanks.
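The distance arithmetic above can be checked directly. This is a minimal numpy sketch with a hypothetical balanced binary feature, one-hot encoded into two columns and standardized column-wise:

```python
import numpy as np

# Hypothetical balanced binary feature, one-hot encoded into 2 columns
X = np.array([[1, 0]] * 50 + [[0, 1]] * 50, dtype=float)

# Column-wise standardization: 0 -> -1 and 1 -> 1, as described above
Z = (X - X.mean(axis=0)) / X.std(axis=0)

a, b = Z[0], Z[50]                      # one sample from each category
d_std = np.linalg.norm(a - b)           # 2*sqrt(2), as in the question
d_scaled = np.linalg.norm((a - b) / 2)  # divided by n_categories: sqrt(2)
```

So the proposed division by the number of categories does bring the cross-category distance down from $2\sqrt{2}$ to $\sqrt{2}$, in line with the reasoning in the question.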
Bias towards categorical data when one-hot encoding and standardizing (for machine learning)
CC BY-SA 4.0
null
2023-04-13T15:09:42.223
2023-04-13T15:35:04.657
2023-04-13T15:35:04.657
3277
357871
[ "machine-learning", "regularization", "categorical-encoding", "standardization", "many-categories" ]
612810
2
null
612659
1
null
You can't do that with a Cox model, as it provides no survival information beyond the last observed event time. You can try to do that with a parametric survival model, for example one of those provided by the `survreg()` function in R. For your plot based on the `survfit()` function applied to a Cox model, "Default is the mean of the covariates used in the `coxph` fit," according to the `survfit.coxph` help page. In your case, one might ask whether a mean value of `sex` is a useful concept. There is no such default for a `survreg` object. You specify particular covariate values in a data frame to the `predict()` function. The last example on the `predict.survreg` help page shows how to generate a survival plot with standard errors for a simpler model also based on the `lung` data set.
null
CC BY-SA 4.0
null
2023-04-13T15:14:45.387
2023-04-13T15:14:45.387
null
null
28500
null
612811
2
null
611248
0
null
Testing for statistical significance should not be the guiding principle for building a model; see e.g. [Statistical tests for variable selection](https://robjhyndman.com/hyndsight/tests2/) by Rob J. Hyndman. Model building comes first, inference comes next. But I think you can implement HAC standard errors directly. If Stata does not support that for an ARDL model (perhaps it does?), just formulate your model as a regression where the lagged terms are included manually.
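As a sketch of the "include the lags manually" suggestion (the data-generating process and lag choices here are made up), one can fit the ARDL as a plain regression and compute Newey-West (Bartlett-kernel) HAC standard errors by hand:

```python
import numpy as np

def ols_hac(y, X, L):
    """OLS with Newey-West (Bartlett-kernel) HAC standard errors."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    e = y - X @ beta
    Xe = X * e[:, None]                # rows are e_t * x_t
    S = Xe.T @ Xe                      # lag-0 term of the long-run covariance
    for lag in range(1, L + 1):
        w = 1 - lag / (L + 1)          # Bartlett weight
        G = Xe[lag:].T @ Xe[:-lag]     # sum of e_t e_{t-lag} x_t x_{t-lag}'
        S += w * (G + G.T)
    cov = XtX_inv @ S @ XtX_inv        # sandwich estimator
    return beta, np.sqrt(np.diag(cov))

# Simulated ARDL(1, 0): y_t = 0.5 y_{t-1} + 0.8 x_t + noise
rng = np.random.default_rng(0)
T = 300
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t] + rng.standard_normal()

# The ARDL written as a plain regression with the lag included manually
Y = y[1:]
Z = np.column_stack([np.ones(T - 1), y[:-1], x[1:]])
beta, se = ols_hac(Y, Z, L=4)
```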
null
CC BY-SA 4.0
null
2023-04-13T15:19:18.340
2023-04-13T15:19:18.340
null
null
53690
null
612812
1
null
null
1
32
Suppose $x$ is an isotropic random variable in $\mathbb{R}^d$ with $E[\|x\|^2]=d$ and $v$ is some vector. It appears that $\sum_i x_i^2 v_i \approx \sum_i v_i$ when $d \approx \infty$. What is an easy way of showing it? The hard way is to follow this [answer](https://stats.stackexchange.com/a/532222/511) or this [paper](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.46.4629&rep=rep1&type=pdf&ref=machine-learning-etc.ghost.io).
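A quick simulation makes the claim concrete: termwise $E[x_i^2]=1$, so the sum has mean $\sum_i v_i$, while its fluctuations are $O(\sqrt d)$ against a mean of order $d$ (for generic $v$). Taking a standard Gaussian as the isotropic example and a fixed positive $v$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100_000
x = rng.standard_normal(d)   # isotropic: E[x_i^2] = 1, so E[||x||^2] = d
v = rng.uniform(size=d)      # an arbitrary fixed vector

lhs = np.sum(x ** 2 * v)
rhs = np.sum(v)
rel_err = abs(lhs - rhs) / rhs   # shrinks like 1/sqrt(d)
```

With this seed the relative deviation is a fraction of a percent, consistent with a Chebyshev-type concentration bound.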
What is the value of $\sum_i x_i^2 v_i$ for isotropic $x$?
CC BY-SA 4.0
null
2023-04-13T15:34:19.797
2023-04-13T18:07:05.860
2023-04-13T18:07:05.860
22311
511
[ "normal-distribution", "circular-statistics" ]
612813
2
null
612664
0
null
> My guess is that patients receiving A are protected for infection-related mortality, but starts to develop non-infection mortality, get censored, and this influence the survival curves (as well as the incidence rate of events). Indeed, if we observe patients long enough, we will end up having all patients either died of infection-related mortality or censored (i.e., died for other reasons). That's why it's not a good idea to censor one type of event time at the occurrence of a different type of event. As the R [vignette on competing risks](https://cran.r-project.org/web/packages/survival/vignettes/compete.pdf) says in Section 2.2: > A common mistake with competing risks is to use the Kaplan-Meier separately on each event type while treating other event types as censored...We thus have an unreliable estimate of an uninteresting quantity. You need to use a true competing-event analysis, as explained in detail in the vignette. It outlines both multi-state rate models (Section 3) and the Fine-Gray subdistributional hazard model (Section 4). As the vignette points out in Section 3, for a Cox model you will get the same regression coefficient estimate for an event type whether you use a multi-state rate model or censor when other event types occur. The latter censoring approach, however, does not provide the correct probabilities of being in each state at any given time. The Fine-Gray model allows for individuals to remain at risk for one type of event even after experiencing a terminal event of a different type. The assumptions needed for a Fine-Gray model might, however, be harder to meet in practice; see Section 4 of the vignette.
null
CC BY-SA 4.0
null
2023-04-13T15:34:35.207
2023-04-13T15:34:35.207
null
null
28500
null
612814
2
null
612334
3
null
Yes, the input is duplicated for each head. This allows the model to jointly attend to different information about the input at the same time. Passing only a subset of the embedding vector would likely result in a worse representation, since each head has less information about the input. The confusion probably arises from the fact that the output dimension of each attention head in [[1]](https://arxiv.org/pdf/1706.03762.pdf) is $d_{\text{model}}/h$, a fraction of the input embedding dimension $d_{\text{model}}$, where $h$ is the number of attention heads. However, the reduction of dimensionality is not obtained by selecting a subset of the input dimension $d_\text{model}$, but by performing the linear projections (i.e., matrix multiplications) $QW_i^Q$, $K W_i^K, VW_i^V$ since $Q \in \mathbb{R}^{n \times d_{\text{model}}}$ and $W_i^Q, W_i^K \in \mathbb{R}^{d_{\text{model}} \times d_k}$. This reduction lowers the computational cost of each head, which is given by $$\mathcal{O}(n^2 \cdot d_k)$$ This way, the total computational cost of the Multi-Head Attention is similar to that of a single-head attention with full dimensionality, but it is shown in [[1]](https://arxiv.org/pdf/1706.03762.pdf) that multi-head attention works better. The output dimension is maintained, since the values coming from each attention head are concatenated and then projected with a matrix $W^O \in \mathbb{R}^{h d_v \times d_\text{model}}$, obtaining a representation of the same shape as the input. $$\begin{equation} \text{MultiHead}(Q,K,V) = \text{Concat}(h_1, \dots, h_h) W^O \\ h_i = \text{Attention}(Q W_i^Q, KW_i^K,VW_i^V) \end{equation}$$ Please note that with the choice $d_k = d_v = d_\text{model}/h$, $W^O$ is a square matrix, but with different choices of $d_k$ and $d_v$ it would still project the attention output back into the same space as the input. ## References [[1] Vaswani, Ashish, et al. "Attention is all you need." 
Advances in neural information processing systems 30 (2017).](https://arxiv.org/pdf/1706.03762.pdf)
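A minimal numpy sketch (random weights, no training) illustrates the dimension bookkeeping described above: each head consumes the full input and projects it down to $d_\text{model}/h$ internally, and the concatenation plus $W^O$ restores the input shape.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, h, rng):
    """Self-attention: every head receives the full X of shape (n, d_model);
    the d_model/h reduction happens inside the per-head projections."""
    n, d_model = X.shape
    d_k = d_model // h
    heads = []
    for _ in range(h):
        Wq, Wk, Wv = (rng.standard_normal((d_model, d_k)) for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv            # each (n, d_k)
        A = softmax(Q @ K.T / np.sqrt(d_k))         # (n, n) attention weights
        heads.append(A @ V)                         # (n, d_k)
    W_o = rng.standard_normal((h * d_k, d_model))
    return np.concatenate(heads, axis=1) @ W_o      # back to (n, d_model)

rng = np.random.default_rng(0)
out = multi_head_attention(rng.standard_normal((5, 16)), h=4, rng=rng)
# out has the same shape as the input, (5, 16)
```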
null
CC BY-SA 4.0
null
2023-04-13T15:35:57.207
2023-04-13T20:13:11.397
2023-04-13T20:13:11.397
377435
377435
null
612815
1
null
null
0
26
I am currently running a mixed log-linear model of the form $\log y_{it} = \beta_0 + \beta_1 X_{it} + \beta_2 X_{it}^2 + (1 | \text{individu})$. I suspect multicollinearity ($\operatorname{cor}(X_{it}, X_{it}^2)$ close to 1). Do you think it makes sense to scale the explanatory variable $X_{it}$? Or is there another alternative? If yes, how should I interpret the result (per 1 SD?). I already tried to linearize ($\log y_{it} = \beta_0 + \beta_1 \log X_{it} + \beta_2 (\log X_{it})^2 + (1 | \text{individu})$) but the result is not better. Thanks
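Scaling alone does not remove the correlation between a regressor and its square; centering before squaring usually does, at least for roughly symmetric regressors. A small made-up example:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(1, 10, size=500)             # a strictly positive regressor

r_raw = np.corrcoef(x, x ** 2)[0, 1]         # close to 1: structural collinearity
xc = x - x.mean()                            # center before squaring
r_centered = np.corrcoef(xc, xc ** 2)[0, 1]  # much smaller in magnitude
```

After centering, the linear coefficient is interpreted at the mean of $X$ rather than at $X=0$; the quadratic coefficient is unchanged.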
Multicollinearity in mixed log-linear model
CC BY-SA 4.0
null
2023-04-13T15:39:42.533
2023-04-13T15:59:57.767
2023-04-13T15:59:57.767
347675
347675
[ "mixed-model", "interpretation", "multicollinearity", "scales", "log-linear" ]
612816
1
null
null
0
7
Suppose I'm running a regression that looks something like $\log(price)=\beta_0+\beta_1\log(X)+\beta_2\log(X)^2$. I have found the residuals, grouped them according to the number of sellers in the observation's town, and calculated the mean residual for each group. Suppose the mean residuals are 0.05 for 1 seller, -0.01 for 2 sellers, and -0.02 for 3 sellers. I want to make a statement about the % markup over the average price for each group. Since these are log-scale residuals, can I just interpret the mean residual as the % markup from the average price (e.g. a 5% markup from the average price when there is 1 seller, -1% when there are 2, etc.)? Or will I need to calculate the % change by comparing the average-plus-markup to the average? Which would take the form of $100\times\frac{\text{residual}}{\text{mean log price}}$?
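For reference, a log-scale residual $r$ corresponds to a multiplicative factor $e^r$, so the exact percent deviation from the fitted price is $100(e^r-1)$, which is close to $100r$ for small residuals. A quick check with the example values:

```python
import numpy as np

resid = np.array([0.05, -0.01, -0.02])   # mean log residuals by group
pct_exact = np.expm1(resid) * 100        # exact % deviation: 100*(e^r - 1)
pct_approx = resid * 100                 # small-residual approximation
# pct_exact[0] is about 5.13%, close to the naive 5% reading
```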
Interpretting Logarithmic Residuals as Percent Change from Average
CC BY-SA 4.0
null
2023-04-13T15:46:22.220
2023-04-13T15:46:22.220
null
null
385621
[ "residuals", "logarithm" ]
612818
2
null
612617
2
null
In principle, [multiple imputation](https://stefvanbuuren.name/fimd/) might be applicable. The missing data might be considered "missing at random" in the technical sense explained in that reference, because your data seem to contain the reason for the missingness: some facilities provide it, others don't. Using all of your available data in a well-designed multiple imputation model could provide a way forward. That might particularly be true in your situation, where you hypothesize that your (often missing) non-invasive test of interest adequately represents the results of invasive tests, whose results are presumably also available in your data. In that case, the results of the invasive tests might be used to get reasonably consistent imputations of the results of the non-invasive test. That approach repeats the modeling on multiple data sets, each with imputations done probabilistically. You combine the results of the multiple models in a way that takes the uncertainty in imputation into account. In general, with so many missing data values, you might have very wide confidence intervals around your estimates for the association between the non-invasive test of interest and the disease status. If your hypothesis about the ability of the non-invasive test to capture information provided by other tests holds, however, then there might be little enough variability in its imputed values to provide reasonable estimates. You say that random forests and gradient boosting handle missing data well. That can be true, but be sure to know which of several approaches are used in your implementation. See the discussion on [this page](https://stats.stackexchange.com/q/98953/28500). Imputation is one type of missing-data handling in those types of models, too.
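As a sketch of the idea in Python (scikit-learn's `IterativeImputer`; the two-test setup below is hypothetical and stands in for the invasive/non-invasive pair), each differently seeded run with `sample_posterior=True` yields one probabilistically completed data set; the analysis model would then be fit on each set and the results pooled:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
n = 200
invasive = rng.normal(size=n)                              # always observed
noninvasive = 0.9 * invasive + rng.normal(scale=0.3, size=n)
X = np.column_stack([invasive, noninvasive])
X[rng.choice(n, size=120, replace=False), 1] = np.nan      # 60% missing

# Each seed gives one imputed data set, drawing imputations from the
# posterior; fit the analysis model on each and pool (Rubin's rules).
imputed_sets = [
    IterativeImputer(sample_posterior=True, random_state=s).fit_transform(X)
    for s in range(5)
]
```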
null
CC BY-SA 4.0
null
2023-04-13T16:06:00.613
2023-04-13T16:06:00.613
null
null
28500
null
612819
1
612827
null
2
47
We know that for a sample (assume it's a data set that has two variables $x$ and $y$ of size $n$), $$R = \frac1{n-1}\sum_{i=1}^n\left(\frac{x_i-\overline{x}}{s_x}\right)\left(\frac{y_i-\overline{y}}{s_y}\right)$$ Say we add in a data point $(\overline{x}, \overline{y})$ to the sample, which lies on the linear regression trendline of the sample (?). We can mathematically see this actually decreases the $R$ value (the sum portion for this data point is $0$, but $n$ increases by $1$ so the denominator increases). However, I cannot intuitively understand why. Is there an intuitive explanation for this? Thanks!
Why does adding a mean datapoint decrease the Pearson correlation coefficient $R$?
CC BY-SA 4.0
null
2023-04-13T16:12:21.403
2023-04-13T17:49:00.167
null
null
382657
[ "regression", "regression-coefficients", "linear" ]
612820
1
null
null
0
9
After logistic regression on each of the cross-sectional data sets, the link test's `_hatsq` term is insignificant. However, when I pool the same two data sets, the link test for a regression with the same set of variables gives a significant `_hatsq`, which indicates specification error in the pooled regression. Why is this happening?
Link test for pooled logistic regression
CC BY-SA 4.0
null
2023-04-13T16:12:37.193
2023-04-13T16:12:37.193
null
null
385624
[ "misspecification", "pooled-model" ]
612821
1
null
null
0
7
Apologies if some of my terminology is incorrect; I only have relatively basic stats knowledge. I have two groups of patients in whom we've done some experiments on the electrical conduction within the heart. Within each group, I've got a measurement of the degree of electrical abnormality (fractionation) during three different conditions: A, B, and C. For each condition, I've measured abnormal electricity (% of signals that were fractionated), and because of the way we've measured, we cannot get specific values, only values stratified into a range: 0-24%, 25-49%, 50-74%, or 75-100%. I want to show whether or not the degree of fractionation during condition A correlates with the degree of fractionation during condition B and/or C. I can't quite figure out which test to use in order to do this. Could someone help? I'm using GraphPad Prism as my stats software. I guess very few/no one here uses it, so I just need the name of the test/an explanation of how to test for it, and I'll figure it out within the software. Many thanks in advance for your help!
Trying to figure out how to test these data - correlation between groups for stratified/range of data?
CC BY-SA 4.0
null
2023-04-13T16:14:05.857
2023-04-13T16:14:05.857
null
null
308219
[ "correlation" ]
612822
1
613732
null
1
41
Let $X_1,\ldots,X_n$ be i.i.d. log-normal random variables such that $$\log(X_i)\sim N(\mu,\sigma^2)\ \ \forall i=1,\ldots,n$$ Now let $Y$ be equal to the $\min(X_1,\ldots,X_n)$. It is quite easy to obtain the relation between the corresponding CDFs: $$P(Y<y)=F_Y(y)=1-[1-F_X(y)]^n$$ Is there a closed form to calculate parameters $\mu_y,\sigma_y$ of a log-normal variable $Y$ given parameters $\mu, \sigma$ of a log-normal variable $X$? And vice versa (find $\mu,\sigma$ given $\mu_y,\sigma_y$)?
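A simulation confirms the CDF relation numerically (parameters of $X$ chosen arbitrarily here). Note as a caveat that $\log Y$ is a minimum of normals, which is not itself normal, so exact log-normal parameters $(\mu_y,\sigma_y)$ for $Y$ need not exist in general:

```python
import numpy as np
from scipy import stats

mu, sigma, n = 0.0, 0.5, 5          # arbitrary example parameters
rng = np.random.default_rng(0)

# Empirical CDF of Y = min of n i.i.d. log-normals
Y = np.exp(rng.normal(mu, sigma, size=(100_000, n))).min(axis=1)
y_grid = np.linspace(0.2, 3.0, 50)
F_emp = (Y[:, None] <= y_grid).mean(axis=0)

# Theoretical CDF from F_Y(y) = 1 - (1 - F_X(y))^n
F_X = stats.lognorm(s=sigma, scale=np.exp(mu)).cdf(y_grid)
F_theory = 1 - (1 - F_X) ** n
# The two curves agree to within Monte Carlo error.
```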
Parameters of the log-normal from CDF of a composition of $n$ i.i.d
CC BY-SA 4.0
null
2023-04-13T16:14:06.857
2023-04-21T20:08:07.340
2023-04-13T17:21:06.290
362671
385623
[ "distributions", "normal-distribution", "cumulative-distribution-function", "lognormal-distribution" ]
612824
1
null
null
0
15
I am confused about how to estimate the variance of a classifier. Currently, I have split my data into training and test sets and used the training data with a k-fold cross-validation strategy to get the best model. Then, I could use the whole training set to train the selected model and evaluate it on the test set, but I don't find that very informative, because it gives no variance estimate for the model. So, how is the variance usually estimated? Maybe another k-fold cross-validation on the whole data set? But in that case, it seems to me that the original test set from the original split was just never used and is therefore useless.
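One common estimate is the spread of per-fold scores from cross-validation on the training data; a minimal scikit-learn sketch (a built-in toy dataset stands in for the real one):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
mean, sd = scores.mean(), scores.std(ddof=1)
# sd gives a rough spread of the performance estimate across folds;
# RepeatedKFold with several repeats gives a more stable picture, and
# the held-out test set still serves as a final, untouched check.
```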
cross-validation and test-set: variance estimate
CC BY-SA 4.0
null
2023-04-13T16:23:47.537
2023-04-13T16:23:47.537
null
null
301511
[ "machine-learning", "cross-validation", "train-test-split" ]
612825
2
null
612638
6
null
This is not really a theorem about stochastic dominance: it's a property of areas. It comes down to this lemma, which will be applied in the last two paragraphs: > When $f:\mathbb R\to\mathbb R$ is an integrable function with non-zero norm $|f|=\int |f(x)|\,\mathrm dx \lt \infty$ and $\mathcal A$ is a set of positive measure $|\mathcal A| = \int_{\mathcal A}\mathrm dx \gt 0$ on which the values of $f$ all exceed some positive number $\epsilon \gt 0,$ then there exists an increasing (measurable) function $u$ for which the transformed function $f\circ u$ has a positive integral, $$\int_\mathbb{R}f(u(x))\,\mathrm dx \gt 0.$$ The idea is to make the image of $u$ focus on $\mathcal A$ while practically skipping over everything else: the integral is then at least $\epsilon$ (the minimum value of $f$ on $\mathcal A$) times the measure of $\mathcal A$ -- plus any negative contributions elsewhere. By limiting the latter we wind up with a positive integral. [](https://i.stack.imgur.com/seHm8.png) In this illustration, the set $\mathcal A$ is highlighted in orange along the horizontal axis and the area under $f$ over the region $\mathcal A$ is shaded. One such function $u$ is obtained by inverting the (strictly) increasing function $$v(y) = \int_{-\infty}^y \mathcal{I}_\mathcal{A}(x) + \delta(1-\mathcal{I}_\mathcal{A}(x))\,\mathrm dx$$ for a positive $\delta$ to be determined. ($\mathcal I$ is the indicator function.) [](https://i.stack.imgur.com/Nco4C.png) This illustration graphs $v$ for $\delta = 0.05.$ Its slopes are $1$ (orange) and $0.05$ (gray). The Fundamental Theorem of Calculus and the rule of differentiating inverse functions show the inverse $u=v^{-1}$ is (a) differentiable with (b) derivative equal to $1$ on $\mathcal A$ and $1/\delta$ elsewhere. 
Writing $v(\mathcal A)^\prime$ for the complement of $v(\mathcal A)$ within the image of $v$ (which is $\mathbb R$ itself), use the standard integral inequalities (Holder's, for instance) and the change of variables formula for integrals to deduce $$\begin{aligned} \int f(u(x))\,\mathrm dx &= \int_{v(\mathcal A)} f(u(x))\,\mathrm dx + \int_{v(\mathcal A)^\prime} f(u(x))\frac{|u^\prime(x)|}{|u^\prime(x)|}\,\mathrm dx\\ &\ge \int_{v(\mathcal A)} f(u(x))\,\mathrm dx - \left(\sup_{x\in v(\mathcal A)^\prime} \frac{1}{|u^\prime(x)|}\right)\left|\int f(u(x))|u^\prime(x)|\,\mathrm dx\right|\\ &\ge |\mathcal A|\epsilon - \delta|f|. \end{aligned}$$ Taking $\delta = |\mathcal A|\epsilon / (2|f|)$ produces a strictly positive value, proving the lemma. [](https://i.stack.imgur.com/8jbdq.png) This illustration of the graph of $f\circ u$ shows how the horizontal axis has been squeezed at all places where $f\lt \epsilon,$ thereby giving the entire integral a positive value. Making $\delta$ sufficiently close to zero will effectively eliminate the dips in the graph below $\epsilon.$ --- As a corollary, applying the lemma to $-f$ shows that when there is a set of positive measure on which $f$ has negative values below $-\epsilon \lt 0,$ then there is an increasing function $u$ for which $f\circ u$ has a negative integral. > Consequently, if for all increasing (measurable) functions $u$ the integral in the lemma is positive, it follows that the set of places where $f$ has a negative value has measure zero. That's the heart of the matter. Let's pause to notice two things. The first is technical: in this construction of $u,$ $u^{-1}$ is also almost everywhere differentiable and therefore continuous and measurable, allowing us to focus on such "nice" functions. 
The second is probabilistic: when $F_X$ is the distribution function of a random variable $X$ -- that is, $F_X(x)=\Pr(X\le x)$ -- and $u$ is an increasing (measurable) function with an increasing (measurable) inverse $u^{-1},$ then the distribution function of $u^{-1}(X)$ is $$F_{u^{-1}(X)}(y) = \Pr(u^{-1}(X)\lt y) = \Pr(X \le u(y)) = F_X(u(y)).$$ That is, $F_{u^{-1}(X)} = F_X\circ u.$ Now observe that when $F$ and $G$ are distinct distribution functions for a random variable $X$ and $u$ is an increasing (measurable) function, $$E_G[u^{-1}(X)] - E_F[u^{-1}(X)] = \int F(u(x)) - G(u(x))\,\mathrm dx = \int (F-G)(u(x))\,\mathrm dx.$$ (For the elementary proof see [Expectation of a function of a random variable from CDF](https://stats.stackexchange.com/questions/222478) for instance. It's just an integration by parts.) --- ## Proof of the theorem Applying the corollary to the function $f = F-G$ (which has a nonzero norm since $F$ and $G$ are distinct), under the assumption $f$ has finite norm, shows that when all such integrals are positive, the set on which $F-G$ is negative has measure zero: that is, $G$ stochastically dominates $F,$ QED. --- We can eliminate the finite-norm assumption by noting that $F-G$ can have an infinite norm only by diverging at infinity: it cannot have vertical asymptotes. (The values are differences of probabilities, whence they are bounded by $\pm 1.$) Consequently we can approximate $F-G$ on an expanding sequence of compact sets, such as the intervals $(-n,n)$ for $n=1,2,3,\cdots,$ and apply a limiting argument. But that should be viewed as a technicality, because the underlying idea remains the same, as expressed in the lemma.
null
CC BY-SA 4.0
null
2023-04-13T16:26:09.123
2023-04-16T14:11:06.677
2023-04-16T14:11:06.677
919
919
null