609004
1
null
null
2
24
I have a very general (and probably also quite naive) question: say I have arbitrary data points in `{x, y}`. When I plot all of these and fit and/or filter them (by filter I mean something like a Chebyshev filter, a high-pass and the like), and then use the fit or the filtered data for further investigations, would I risk ignoring data, i.e. information?
Would fitting / filtering result in a loss of information?
CC BY-SA 4.0
null
2023-03-10T12:36:57.803
2023-03-10T13:12:54.210
null
null
163146
[ "fitting", "filter" ]
609005
1
null
null
1
58
I am confused about the possibilities to predict the mean using quantile regression forests. In my understanding, quantile regression enables the prediction of the probability distribution, i.e. the prediction of quantiles. Some studies, however, use quantile regression forests to predict the mean in addition to the quantiles, see [related question](https://stats.stackexchange.com/questions/580162/what-is-the-meaning-of-a-quantile-regression-model-that-predicts-the-conditional) and [study example](https://soil.copernicus.org/articles/7/217/2021/soil-7-217-2021.pdf). R code for the study example (SoilGrids) has been shared, see [git page](https://git.wur.nl/isric/soilgrids/soilgrids/-/blob/master/models/ranger/predict_qrf_fun.R). Testing this concept on a sample dataset, it indeed seems that the mean can be predicted using a quantile regression forest (see code and visualisation below). My question is: does this make sense from a theoretical perspective? Or are there cases in which the quantile regression model will provide different predictions in comparison to the regular model?
```
#libraries
library(ranger)
library(ggplot2)
library(tidyr)

#load data
dt <- iris

#create ranger models (with and without quantile forests)
ranger.models <- lapply(c(TRUE, FALSE), FUN = function(x){
  ranger(
    formula = Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width,
    data = dt,
    quantreg = x,
    seed = 123
  )
})
names(ranger.models) <- c("quantreg", "regular")

#predict on data
#quantile regression model - mean
dt$quantreg_mean <- predict(
  object = ranger.models$quantreg,
  data = dt,
  type = "response",
  se.method = "infjack"
)$predictions

#quantile regression model - median
dt$quantreg_median <- predict(
  object = ranger.models$quantreg,
  data = dt,
  type = "quantiles",
  quantiles = 0.5,
  se.method = "infjack"
)$predictions |> as.vector()

#regular regression model - mean
dt$regular_mean <- predict(
  object = ranger.models$regular,
  data = dt,
  type = "response",
  se.method = "infjack"
)$predictions

#visualize
dt.long <- pivot_longer(dt, cols = c("quantreg_median", "quantreg_mean"))
ggplot(dt.long, aes(x = regular_mean, y = value)) +
  facet_wrap(~name) +
  geom_point() +
  geom_abline(slope = 1, intercept = 0) +
  theme_bw()
```

[](https://i.stack.imgur.com/Mc44f.png)
Prediction of mean in addition to quantiles using quantile regression in ranger
CC BY-SA 4.0
null
2023-03-10T12:55:05.817
2023-03-10T12:55:05.817
null
null
382877
[ "r", "random-forest", "quantile-regression" ]
609006
2
null
609004
1
null
Yes, information is lost after filtering. In fact, that is one of the purposes of filtering as a method: removing information that is not of interest. For an extreme case, consider the filter function $F(X)=k$ with $k$ constant. No information about $X$ remains after filtering.
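A minimal numerical sketch (my illustration, less extreme than the constant filter): a two-point moving average maps two different signals to the same output, so the original signal cannot be recovered from the filtered one.

```python
import numpy as np

def moving_average(x, w=2):
    """Simple FIR low-pass filter: mean over a sliding window of width w."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

# Two different input signals ...
a = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])
b = np.zeros(6)

# ... that the filter maps to the same output: the high-frequency
# alternation in `a` is exactly what the low-pass filter removes.
fa = moving_average(a)
fb = moving_average(b)

print(np.allclose(fa, fb))   # True: the filtered outputs are identical
```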
null
CC BY-SA 4.0
null
2023-03-10T13:12:54.210
2023-03-10T13:12:54.210
null
null
60613
null
609007
1
null
null
0
39
Given a symmetric positive definite matrix $\bf \Sigma \in \mathbb{R}^{n \times n}$, I want to find a matrix ${\bf \Gamma} \in \mathbb{R}^{n \times n}$ and a vector ${\bf m} \in \mathbb{R}^n$ such that if ${\bf X} \sim N(\bf{m}, \bf{\Gamma})$, then the random vector $\bf{Y} = (\exp(X_1), ..., \exp(X_n))$ has covariance matrix $\bf \Sigma$. For $n=1$, this is possible, as shown e.g. in [this blog entry](https://www.johndcook.com/blog/2022/02/24/find-log-normal-parameters/). In higher dimensions $n > 1$, this is probably not possible for all matrices $\bf \Sigma$, so I would be interested in conditions on $\bf \Sigma$ under which such a matrix $\bf \Gamma$ exists, and how to find it. I have found [this previous question](https://stats.stackexchange.com/questions/439381/generate-multivariate-log-normal-variables-with-given-covariance-and-mean), but maybe after 3 years somebody has a better answer; also, I have no restriction on the mean of $\bf Y$.
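For reference, the standard moment-matching identities (my addition, not from the question): if ${\bf X} \sim N({\bf m}, {\bf \Gamma})$ and $Y_i = \exp(X_i)$, then $\mathrm{Cov}({\bf Y})_{ij} = \mu_i\mu_j(e^{\Gamma_{ij}}-1)$ with $\mu_i = \exp(m_i + \Gamma_{ii}/2)$. For any chosen mean vector $\mu > 0$ this inverts to $\Gamma_{ij} = \log(1 + \Sigma_{ij}/(\mu_i\mu_j))$, and the existence condition is precisely that this candidate $\Gamma$ is positive semidefinite. A numerical sketch:

```python
import numpy as np

def lognormal_params(Sigma, mu):
    """Candidate (m, Gamma) such that Y = exp(X), X ~ N(m, Gamma),
    has covariance Sigma and mean mu (mu > 0 is a free choice)."""
    Gamma = np.log1p(Sigma / np.outer(mu, mu))
    # Existence condition: the candidate Gamma must be positive semidefinite.
    if np.min(np.linalg.eigvalsh(Gamma)) < -1e-10:
        raise ValueError("no Gaussian Gamma exists for this (Sigma, mu)")
    m = np.log(mu) - 0.5 * np.diag(Gamma)
    return m, Gamma

def implied_covariance(m, Gamma):
    """Forward map: covariance of exp(X) for X ~ N(m, Gamma)."""
    mu = np.exp(m + 0.5 * np.diag(Gamma))
    return np.outer(mu, mu) * (np.exp(Gamma) - 1.0)

Sigma = np.array([[0.5, 0.1], [0.1, 0.3]])
mu = np.array([1.0, 2.0])
m, Gamma = lognormal_params(Sigma, mu)
print(np.allclose(implied_covariance(m, Gamma), Sigma))  # True
```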
Multivariate Log-Normal variables with given covariance
CC BY-SA 4.0
null
2023-03-10T13:14:02.533
2023-03-10T13:50:34.840
2023-03-10T13:50:34.840
375859
375859
[ "normal-distribution", "covariance-matrix", "multivariate-normal-distribution", "lognormal-distribution", "random-vector" ]
609008
1
609016
null
0
30
Consider a hypothetical discussion about a Lotka–Volterra experiment involving foxes and rabbits:

```
> Ernie: Species is categorical.
> Bert: In fact it is bimodal.
> Ernie: Can you specify the units of the axis along which you claim species is bimodal?
> Bert: Sorry, no, because it is bimodal along several axes.
```

Ignoring whether the phenomenon is categorical or continuous, is it possible for bimodality of some quantity to exist not independently but along several axes - e.g. as a function of length, height, weight, speed, and others - in the way Bert asserts?
Can bimodality exist in a multivariate sense?
CC BY-SA 4.0
null
2023-03-10T13:27:26.123
2023-03-10T14:44:16.180
null
null
13849
[ "bimodal" ]
609009
1
null
null
0
56
Currently, I am trying to run the multinomial model Age = Gender to investigate the effect of Gender on the probability that my study animal belongs to a particular age class. I am not necessarily interested in the model parameters, but more in the model fit estimates. More precisely, I am interested in whether:

- Estimates significantly differ from 0.25 (no bias towards a particular age class)
- Estimates significantly differ between males and females.

Until now, I tried to answer these questions by running the model, calculating the emmeans and using the contrast function to see if there are differences between both genders. My main question is: is this a statistically correct way? (I know it from "normal" linear modelling, but as I use multinomial models, I don't know whether this is a good way.)

WHAT I TRIED

```
a <- multinom(`Age class` ~ Sex, data = goodyears_hunting)
emmeans = emmeans(a, ~ `Age class` | Sex, mode = "prob")
x = as.data.frame(emmeans)
gt(x, rownames_to_stub = TRUE)
z = contrast(emmeans, "pairwise", simple = "each", combine = TRUE, adjust = "mvt")
z = as.data.frame(z)
gt(z, rownames_to_stub = TRUE)
```

This resulted in the following emmeans output:

```
 Age Sex         prob         SE df  lower.CL  upper.CL
 0   female 0.2198662 0.01383598  6 0.1860108 0.2537216
 1   female 0.3169641 0.01554436  6 0.2789284 0.3549998
 2   female 0.1439735 0.01172819  6 0.1152757 0.1726714
 3+  female 0.3191962 0.01557349  6 0.2810892 0.3573031
 0   male   0.2067437 0.01206316  6 0.1772262 0.2362612
 1   male   0.2954747 0.01359085  6 0.2622191 0.3287303
 2   male   0.2413486 0.01274622  6 0.2101597 0.2725375
 3+  male   0.2564329 0.01300724  6 0.2246054 0.2882605
```

For example: may I conclude that 2y females are underrepresented in the population, as 0.25 is not part of the confidence interval?
For the contrasts I get the following results (only the relevant part selected):

```
 Age contrast          estimate         SE df     t.ratio        p.value
 0   female - male  0.013122464 0.01835631  6  0.71487502   0.982821954
 1   female - male  0.021489382 0.02064796  6  1.04075092   0.911929460
 2   female - male -0.097375074 0.01732099  6 -5.62179680   0.010827774**
 3+  female - male  0.062763228 0.02029093  6  3.09316739   0.140517335
```

Is it correct to conclude that the ratio of 2y old individuals is significantly different between males and females? I am mainly confused as I have little experience with discrete data and have not used these types of models much. Hopefully some of you can approve my way of thinking, or help me on the right way :D
What are the option for a post-hoc test after running a multinomial model?
CC BY-SA 4.0
null
2023-03-10T13:28:02.253
2023-03-11T04:54:41.917
null
null
382882
[ "r", "regression", "multinomial-distribution", "post-hoc", "contrasts" ]
609013
1
null
null
0
21
I'm wondering how to model a response in a group of individuals as a function of time interacting with 3 other factors, for a dataset that does not have values for all IDs throughout the entire period, as some individuals died before the end. Whenever I include time as an interaction, the model "fails to converge". My time period is 55 days, and one of the treatments has no survivors after day 35. Should I omit the rest of the time period for the entire dataset? This seems like a bad idea.
Can time be used as interaction factor when data is lacking for whole period?
CC BY-SA 4.0
null
2023-03-10T13:48:16.563
2023-03-10T13:48:16.563
null
null
380763
[ "survival", "interaction" ]
609014
1
null
null
1
30
I am reading a paper named "[Clinical scoring system to predict hepatocellular carcinoma in chronic hepatitis B carriers](https://doi.org/10.1200/jco.2009.26.2675)" and am confused about how to construct the prediction score from the weights of significant variables. The paper says: "A simple risk score was devised by using significant variables obtained from stepwise multivariate analysis with P < .05. The score was the weighted sum of those variables of which the weights were defined as the quotient (rounded to nearest integer) of corresponding estimated coefficient from a Cox regression analysis divided by the smallest χ2 coefficient." What is the χ2 coefficient of a Cox proportional hazards model? And how do I get the χ2 coefficient of each significant variable? Thanks! Best regards!
What is the χ2 score representing relative contribution in the Cox proportional hazards model? How to get?
CC BY-SA 4.0
null
2023-03-10T14:31:04.603
2023-03-11T17:26:01.010
2023-03-10T15:05:01.307
28500
357266
[ "survival", "chi-squared-test", "cox-model", "likelihood" ]
609015
1
null
null
0
29
When training a model for image classification it is common to use pooling layers to reduce the dimensionality, as we only care about the final node values corresponding to the class probabilities. In the realm of VAEs, on the other hand, where we are attempting to reduce the dimensionality and subsequently increase it again, I have rarely seen pooling layers being used. Is it normal to use pooling layers in VAEs? If not, what's the intuition here? Is it because pooling is not injective?
Is it acceptable to use pooling layers in variational autoencoders?
CC BY-SA 4.0
null
2023-03-10T14:31:09.013
2023-03-10T14:31:09.013
null
null
382888
[ "machine-learning", "autoencoders", "convolution", "pooling" ]
609016
2
null
609008
0
null
Expanding on the idea of a saddle point. Suppose $X \sim \text{Beta}(2, 2)$ and $Y \sim \text{Beta}(0.5, 0.5)$, independent. Then their marginal densities are depicted below: [](https://i.stack.imgur.com/sRb5N.png) So $X$ has a well-defined mode achieving a maximal density of 1.5, whereas $Y$ is bimodal, with its density growing without bound towards 0 and 1. For the joint $(X, Y)$ process, the density can be visualized as [](https://i.stack.imgur.com/HNnyc.png) and it is plain to see that there are modes at (0.5, 1) and (0.5, 0) respectively. In the univariate and multivariate cases you can even have measurable regions as the mode: the uniform density has its whole support as a mode!
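A quick numerical sanity check (my sketch, with the unimodal Beta(2, 2) on one axis and the bimodal Beta(0.5, 0.5) on the other) that the joint density indeed peaks in two separate places:

```python
import numpy as np
from scipy.stats import beta

# Joint density of independent Beta(2, 2) (axis u) and Beta(0.5, 0.5) (axis v).
u = np.linspace(0.01, 0.99, 99)
v = np.linspace(0.01, 0.99, 99)
U, V = np.meshgrid(u, v, indexing="ij")
Z = beta.pdf(U, 2, 2) * beta.pdf(V, 0.5, 0.5)

# Along the bimodal axis, the density is larger near v=0 and v=1 than at v=0.5,
# so the joint density has two separate peaks (near (0.5, 0) and (0.5, 1)).
mid = Z[49, 49]    # u = 0.5, v = 0.5 (the saddle)
low = Z[49, 0]     # u = 0.5, v close to 0
high = Z[49, -1]   # u = 0.5, v close to 1
print(low > mid and high > mid)   # True
```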
null
CC BY-SA 4.0
null
2023-03-10T14:44:16.180
2023-03-10T14:44:16.180
null
null
8013
null
609018
1
null
null
1
32
I have a question about a random forest algorithm I just created in order to predict a binary output called "LesionResponse". I divided my native dataset into train, val and test sets. I normalized the data with Z-scores and now I'm trying to select features and build the algorithm. Here is an example of my dataset:

```
structure(list(
  PatientID = c("P1", "P1", "P1", "P2", "P3", "P3", "P4", "P5", "P5", "P6"),
  LesionResponse = structure(c(2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 2L),
    .Label = c("0", "1"), class = "factor"),
  pyrad_tum_original_shape_LeastAxisLength = c(19.7842995242803, 15.0703960571122,
    21.0652247652897, 11.804125918871, 27.3980336338908, 17.0584330264122,
    4.90406343942677, 4.78480430022189, 6.2170232078547, 5.96309532740722,
    5.30141540007441),
  pyrad_tum_original_shape_Sphericity = c(0.652056853392657, 0.773719977240238,
    0.723869070051882, 0.715122964970338, 0.70796498824535, 0.811937882810929,
    0.836458991713367, 0.863337931630415, 0.851654860256904, 0.746212862162174),
  pyrad_tum_log.sigma.5.0.mm.3D_firstorder_Skewness = c(0.367453961973625,
    0.117673346718817, 0.0992025164349288, -0.174029385779302, -0.863570016875989,
    -0.8482193060411, -0.425424618080682, -0.492420174157913, 0.0105111292451967,
    0.249865833210199),
  pyrad_tum_log.sigma.5.0.mm.3D_glcm_Contrast = c(0.376932105256115,
    0.54885738172596, 0.267158344601612, 2.90094719958076, 0.322424096161189,
    0.221356030145403, 1.90012334870722, 0.971638740404501, 0.31547550396399,
    0.653999340294952),
  pyrad_tum_wavelet.LHH_glszm_GrayLevelNonUniformityNormalized = c(0.154973213866752,
    0.176128379241556, 0.171129002059539, 0.218343919352019, 0.345985943932352,
    0.164905080489496, 0.104536489151874, 0.1280276816609, 0.137912385073012,
    0.133420904484894),
  pyrad_tum_wavelet.LHH_glszm_LargeAreaEmphasis = c(27390.2818110851,
    11327.7931034483, 51566.7948885976, 7261.68702290076, 340383.536555142,
    22724.7792207792, 45.974358974359, 142.588235294118, 266.744186046512,
    1073.45205479452),
  pyrad_tum_wavelet.LHH_glszm_LargeAreaLowGrayLevelEmphasis = c(677.011907073653,
    275.281153810458, 582.131636238695, 173.747506476692, 6140.73990175018,
    558.277670638306, 1.81042257642817, 4.55724031114589, 6.51794350173746,
    19.144924585586),
  pyrad_tum_wavelet.LHH_glszm_SizeZoneNonUniformityNormalized = c(0.411899490603372,
    0.339216399209913, 0.425584323452468, 0.355165782879786, 0.294934042125209,
    0.339208410636982, 0.351742274819198, 0.394463667820069, 0.360735532720389,
    0.36911240382811)),
  row.names = c(NA, -10L), class = c("tbl_df", "tbl", "data.frame"))
```

So this set is an example of my train data (not normalized); it's just to give an idea of what the data look like.

```
train_norm$LesionResponse <- as.factor(train_norm$LesionResponse)

set.seed(1234)
foret <- randomForest(LesionResponse ~ .,
                      data = train_norm,
                      importance = TRUE,
                      LocalImp = TRUE)
plot(foret)
foret
```

I obtain this graph and I don't understand why the accuracy is so low:

[](https://i.stack.imgur.com/j1cZG.png)

```
Call:
 randomForest(formula = LesionResponse ~ ., data = train_norm, importance = TRUE, LocalImp = TRUE)
               Type of random forest: classification
                     Number of trees: 500
No. of variables tried at each split: 28

        OOB estimate of  error rate: 28.62%
Confusion matrix:
   0   1 class.error
0  4 164  0.97619048
1 16 445  0.03470716
```

To understand my problem, I checked the data and took into account the fact that the classes are imbalanced (70% of "1" / 30% of "0"); my train set has 70% 1 and 30% 0... Where do you think my problem is?
Random forest for features selection and prediction error
CC BY-SA 4.0
null
2023-03-10T15:12:27.343
2023-03-10T15:12:27.343
null
null
378883
[ "r", "machine-learning", "random-forest", "feature-selection" ]
609019
2
null
466801
1
null
If A plays a tournament with B, C, D and A wins, then this can be coded as A beats B, A beats C and A beats D. In your example this gives the following tournament matrix:

```
P N  A B C D E F G H
1 A  . 1 1 1 . . 0 0
2 B  0 . . . . . . 0
3 C  0 . . . . 0 . 0
4 D  0 . . . 1 0 . 0
5 E  . . . 0 . 0 . 0
6 F  . . 1 1 1 . . 0
7 G  1 . . . . . . 1
8 H  1 1 1 1 1 1 1 .
```

To establish a meaningful win probability between players x and y, a winning path from x to y and from y to x is needed.

Update-1

Add the following tournaments to the example to make the tournament matrix connected:

```
5) H, B where B wins,
6) B, C where C wins,
7) C, E where E wins,
```

The resulting tournament matrix will be:

```
P N |1 2 3 4 5 6 7 8 |Pts|   Rrtg
1 A |. 1 1 1 . . 0 0 | 3 | 104.87
2 B |0 . 0 . . . . 1 | 1 |-157.99
3 C |0 1 . . 0 0 . 0 | 1 |-290.67
4 D |0 . . . 1 0 . 0 | 1 |-178.03
5 E |. . 1 0 . 0 . 0 | 1 |-276.60
6 F |. . 1 1 1 . . 0 | 3 | 142.96
7 G |1 . . . . . . 1 | 2 | 360.54
8 H |1 1 1 1 1 1 1 . | 7 | 294.91
```

Let `Rrtg` be the relative Elo ratings, chosen so that the expected score equals the actual score (Pts). See also: [Obtain ranking from pairwise comparison with continuous outcome](https://stats.stackexchange.com/questions/592572/obtain-ranking-from-pairwise-comparison-with-continuous-outcome/600631#600631).

Derived from the relative ratings (Rrtg):

```
Pr(A beats C) = pnorm(104.87 - -290.67, 0, 2000 / 7) = 92%
Pr(A beats D) = pnorm(104.87 - -178.03, 0, 2000 / 7) = 84%
Pr(A beats E) = pnorm(104.87 - -276.60, 0, 2000 / 7) = 91%
Pr(A beats F) = pnorm(104.87 -  142.96, 0, 2000 / 7) = 45%
```

The probability that A defeats all of his opponents in a single tournament is equal to `Π Pr(A beats x) = 31%, where x in (C, D, E, F)`.

Update-2

An alternative approach is to replace the normal distribution by a linear function: `p800(D) = D / 4C + 0.5`, where C = 200 is the Elo class interval and 1/4 the slope of the logistic function at x = 0.
Solving the Elo ratings for all games simultaneously amounts to solving a system of linear equations. In the previous example the solution becomes `Rrtg = c(84.31, -126.40, -225.99, -150.88, -212.94, 107.06, 306.80, 218.04)`. This gives:

```
Pr(A beats C) = p800(84.31 - -225.99) = 89%
Pr(A beats D) = p800(84.31 - -150.88) = 79%
Pr(A beats E) = p800(84.31 - -212.94) = 87%
Pr(A beats F) = p800(84.31 -  107.06) = 47%
```

A's overall probability equals 89% * 79% * 87% * 47% = 29%. Note that these ratings are equivalent to least squares ratings.
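The normal-model figures above can be reproduced numerically; a sketch using the Rrtg values from the table:

```python
from math import erf, sqrt

def pnorm(x, mean=0.0, sd=1.0):
    """Normal CDF, mirroring R's pnorm()."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

rrtg = {"A": 104.87, "C": -290.67, "D": -178.03, "E": -276.60, "F": 142.96}
sd = 2000 / 7  # scale used in the answer

p = 1.0
for opp in ("C", "D", "E", "F"):
    pr = pnorm(rrtg["A"] - rrtg[opp], 0, sd)
    print(f"Pr(A beats {opp}) = {pr:.0%}")
    p *= pr

print(f"Pr(A beats all four) = {p:.0%}")   # about 31%
```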
null
CC BY-SA 4.0
null
2023-03-10T15:28:49.020
2023-03-11T13:57:42.483
2023-03-11T13:57:42.483
376307
376307
null
609020
1
null
null
0
14
Background: I have a longitudinal analysis and am running a linear mixed effects model in R (nlme library). I tried 3 different models: 1) random intercept only, 2) random intercept + autocorrelation structure on the errors, and 3) autocorrelation structure on the errors only (using the gls() command). I fit model 3 because I've been taught that sometimes an autocorrelation structure is enough for longitudinal data. For model 1, the variance of the random effect (intercept) was 676.9 (and accounted for 62% of total variance); AIC was 8444.01. For model 2, the variance of the random effect (intercept) was much smaller, 0.001 (and thus accounted for <1% of total variance); AIC was 7830.01. For model 3, AIC was 7828.01. My question: does this mean that the autocorrelation structure alone was enough to explain much of the variance in the outcome? Should I abandon the LME model and use model 3 instead of model 2, since it estimates 1 fewer parameter and has comparable AIC? Or am I misinterpreting the output? Thanks in advance!
How to pick between models with random intercept only VS. autocorrelation structure only VS. both?
CC BY-SA 4.0
null
2023-03-10T15:30:59.243
2023-03-10T16:19:03.903
2023-03-10T16:19:03.903
382890
382890
[ "mixed-model", "lme4-nlme", "autocorrelation" ]
609021
1
null
null
0
63
I want to train a CNN to detect the position of an object in an image. Given an input image, I know that it can contain either 0 or 1 instances (i.e. one class, no more than one item per image). This makes the problem simpler than traditional object detection problems. I designed a neural network that outputs five values: `[v, x, y, w, h]`, where `v` indicates the probability that the input image contains the object and the other values are the bounding box coordinates. My dataset includes both images with the object and background-only images. The ground truth for a BG image is `[0, 0, 0, 0, 0]`. The loss function is the following:

```
import tensorflow as tf

loss_fn = tf.keras.losses.MeanSquaredError()
vis_loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=False)

@tf.function
def _compute_loss(y, pred):
    # y: batch_size x 5
    # pred: batch_size x 5
    bbox_gt = y[:, 1:]
    bbox_pred = pred[:, 1:]
    vis_gt = y[:, 0]
    vis_pred = pred[:, 0]
    # regression loss for bbox; the sample weights put the bbox loss
    # to 0 where there are no bboxes
    bbox_loss = loss_fn(bbox_gt, bbox_pred, sample_weight=tf.expand_dims(vis_gt, 0))
    # loss for visibility flag
    vis_loss = vis_loss_fn(vis_gt, vis_pred)
    return bbox_loss + vis_loss
```

When the image contains the object, the loss is the sum of the bounding box regression loss and the binary cross-entropy loss related to the visibility flag. When the image does not contain the object, only the cross-entropy is considered. This is achieved using the `sample_weight` parameter. Is my approach correct?
Is my approach right for simple single-class object detection?
CC BY-SA 4.0
null
2023-03-10T15:41:24.250
2023-03-10T15:41:24.250
null
null
73122
[ "neural-networks", "loss-functions", "tensorflow", "object-detection" ]
609022
1
null
null
2
15
Given high-dimensional Monte Carlo samples ${\bf X}_1,...,{\bf X}_N$ from a probability distribution $p({\bf x})$ in $\mathbb{R}^d$, I want to estimate a rectangular highest-density credible region for $p({\bf x})$. That is, I want the smallest-volume hyper-rectangle (or "box") $[{\bf l}, {\bf u}] = \{{\bf x} \in \mathbb{R}^d \mid l_i \leq x_i \leq u_i \text{ for } i=1,\dots,d\}$, with ${\bf l}, {\bf u} \in \mathbb{R}^d$, such that for a given percentage $\alpha \in (0, 1)$ we have $p({\bf x} \in [{\bf l}, {\bf u}]) \geq \alpha$.

The only dedicated method for estimating such regions (which are sometimes referred to as simultaneous credible bands) that I could find is the classic method by [Besag, Green, Higdon and Mengersen (1995)](https://projecteuclid.org/journals/statistical-science/volume-10/issue-1/Bayesian-Computation-and-Stochastic-Systems/10.1214/ss/1177010123.full). However, I have tried that method and it performs poorly for high-dimensional or skewed distributions, as it leads to credible regions that are too large and contain all of the samples. There are also other heuristic methods based on rescaling the bounding box of ${\bf X}_1,...,{\bf X}_N$, or the coordinate-wise credible intervals, until the resulting set contains only the desired percentage of samples. In my case they work better than the first method, but still not great.

Finally, note that in my case I have cheap access to the density function $p({\bf x})$, while all the approaches listed above make no use of that. So I expect a density-based approach to perform well, but I could not find an existing method that yields the desired rectangular sets. I would be extremely interested if anyone knows an efficient, dedicated algorithm for this or could point me to some references. Thank you very much.
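For concreteness, the coordinate-wise rescaling heuristic mentioned in the question can be sketched as follows (an illustration with assumed details, a median center and a standard-deviation box shape, not the Besag et al. method):

```python
import numpy as np

def rectangular_credible_region(samples, alpha=0.9):
    """Heuristic: scale coordinate-wise half-widths around the median
    until a fraction alpha of the samples falls inside the box."""
    center = np.median(samples, axis=0)
    halfwidth = np.std(samples, axis=0)  # initial shape of the box

    def coverage(s):
        inside = np.all(np.abs(samples - center) <= s * halfwidth, axis=1)
        return inside.mean()

    lo, hi = 0.0, 1.0
    while coverage(hi) < alpha:          # bracket the scale factor
        hi *= 2.0
    for _ in range(60):                  # bisect to the minimal covering scale
        mid = 0.5 * (lo + hi)
        if coverage(mid) < alpha:
            lo = mid
        else:
            hi = mid
    return center - hi * halfwidth, center + hi * halfwidth

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 3))
l, u = rectangular_credible_region(X, alpha=0.9)
inside = np.all((X >= l) & (X <= u), axis=1).mean()
print(round(inside, 2))   # close to 0.90
```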
How to compute a rectangular credible region from samples
CC BY-SA 4.0
null
2023-03-10T15:45:57.537
2023-03-10T15:45:57.537
null
null
375859
[ "sampling", "monte-carlo", "credible-interval", "highest-density-region" ]
609023
1
null
null
0
19
This is a short question. For the test-retest reliability I used both the ICC and the Wilcoxon signed-rank test for paired samples as measures. My data isn't normally distributed, which is why I opted for the Wilcoxon test. However, I now have a high ICC value (.80) but a significant p-value from the Wilcoxon test (<.05). Is this possible? And what possible explanation is there?
Disagreement between ICC and Wilcoxon Paired samples t-test
CC BY-SA 4.0
null
2023-03-10T16:02:20.830
2023-03-10T16:02:20.830
null
null
382893
[ "t-test", "intraclass-correlation" ]
609024
2
null
608953
8
null
Wikipedia's proof is neither fully rigorous nor complete. It is not fully rigorous because, since we allow $g$ to have discontinuities, the statement "$F = f \circ g$ is itself a bounded continuous functional" is an overstatement. It is incomplete because it fails to explicitly cite the bounded convergence theorem (as Durrett's book did), or any other proposition, to close the argument "And so the claim follows from the statement above". Because it skipped this important step, which relies on Skorohod's theorem (i.e., the "a.s. representation" in your post) to prepare the convergence condition in the BCT, it created the illusion that its "proof" is simpler. The application of Skorohod's theorem to the continuous mapping theorem in Durrett's proof is very elegant, and the same idea is shared by Billingsley (see Theorem 25.7 in Probability and Measure). However, if you think such a proof uses too much machinery, you can directly verify one of the other equivalent conditions of weak convergence ([portmanteau lemma](https://en.wikipedia.org/wiki/Convergence_of_random_variables#Properties)). For example, check that $$\limsup_{n \to \infty} P(g(X_n) \in F) \leq P(g(X) \in F)$$ for every closed set $F$. A proof of this kind can be found in Theorem 2.3 of Asymptotic Statistics by A. W. van der Vaart.
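A sketch of that closed-set check (my reconstruction of the standard argument, not quoted from van der Vaart): for closed $F$, writing $D_g$ for the set of discontinuities of $g$, one has $\overline{g^{-1}(F)} \subseteq g^{-1}(F) \cup D_g$, hence

```latex
\begin{align*}
\limsup_n P\big(g(X_n) \in F\big)
  &= \limsup_n P\big(X_n \in g^{-1}(F)\big)
   \leq \limsup_n P\big(X_n \in \overline{g^{-1}(F)}\big) \\
  &\leq P\big(X \in \overline{g^{-1}(F)}\big)
   && \text{(portmanteau, since $\overline{g^{-1}(F)}$ is closed)} \\
  &\leq P\big(X \in g^{-1}(F)\big) + P(X \in D_g)
   = P\big(g(X) \in F\big),
\end{align*}
```

using $P(X \in D_g) = 0$ in the last step.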
null
CC BY-SA 4.0
null
2023-03-10T16:18:15.087
2023-05-07T02:10:23.570
2023-05-07T02:10:23.570
20519
20519
null
609025
1
null
null
0
10
I am reviewing the statistical analysis of "[Debugging Tests for Model Explanations](https://arxiv.org/abs/2011.05429)" on arXiv. In the paper, the authors have subjects look at the output of 5 different ML models. In each of these 5 models they use 3 explanation techniques. On each model, with each explanation technique, the same participants give a rating on a 5-point scale. For the analysis, the authors split the data by the 5 different models and analysed the ratings for each using a one-way ANOVA with the explanation technique as the only factor. Setting aside whether it is appropriate to use ANOVA on self-reported rating scales at all, this looks like a two-factorial design to me, and I would have used a two-way repeated measures ANOVA. My questions are:

- Is the way the authors analysed their data valid if they wanted to answer questions like: does explanation technique A receive significantly higher ratings for model 1?
- Wouldn't they have needed to correct for multiple comparisons when running multiple ANOVAs on data obtained from the same subjects?
- Is there any harm done by ignoring the second factor and the fact that this is a within-subject design?
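The multiple-comparisons correction raised in the second question could, for instance, be a Holm-Bonferroni step-down adjustment across the five ANOVAs; a stdlib-only sketch with hypothetical p-values:

```python
def holm_adjust(pvals):
    """Holm-Bonferroni step-down adjusted p-values (family-wise control)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    running = 0.0
    for rank, i in enumerate(order):
        # multiply the k-th smallest p-value by (m - k + 1), enforce monotonicity
        running = max(running, (m - rank) * pvals[i])
        adj[i] = min(1.0, running)
    return adj

# Hypothetical raw p-values from four per-model ANOVAs:
print([round(p, 3) for p in holm_adjust([0.01, 0.04, 0.03, 0.005])])
# → [0.03, 0.06, 0.06, 0.02]
```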
One-way ANOVA on two factorial repeated measures design
CC BY-SA 4.0
null
2023-03-10T16:21:31.317
2023-03-10T17:11:22.983
2023-03-10T17:11:22.983
44269
367661
[ "anova", "repeated-measures", "multiple-comparisons" ]
609026
1
null
null
0
13
I am doing a translation and validation of a scale. I do not have any other measures, so I cannot do convergent/discriminant validity assessments. Is CFA enough to validate a scale, and if not, what are my options? I have a large sample size. Please help!
Is confirmatory factor analysis a way to validate a scale?
CC BY-SA 4.0
null
2023-03-10T16:22:21.597
2023-03-10T16:22:21.597
null
null
382895
[ "validation", "confirmatory-factor", "psychometrics", "validity", "scale-parameter" ]
609027
2
null
608694
0
null
This question requires answers to two sub-questions: when is a model useful, and what is the cost function used to determine usefulness? Group 1 says that the model is useful because it is doing better than nothing. Group 2 says that the model is not useful because the cost is only reduced in a meaningful way when the performance is above some level (apparently 70-80% for your colleagues). The two groups' conclusions don't really contradict each other; they just look at the problem from different perspectives, based on their answers to the two sub-questions.
null
CC BY-SA 4.0
null
2023-03-10T16:30:38.987
2023-03-10T16:30:38.987
null
null
164061
null
609028
1
null
null
0
17
I have a small dataset that has five variables of interest (diagnosis 1, diagnosis 2, diagnosis 3, diagnosis 4 and diagnosis 5). There are eight category values used across all five of these variables, and 567 episodes in total in the dataset. I am trying to understand how I can identify the most frequent combination of diagnoses (categories) in SPSS, e.g. that categories 3 and 7 are the most frequent combination.
SPSS: frequent combination of values?
CC BY-SA 4.0
null
2023-03-10T16:38:16.827
2023-03-10T16:38:16.827
null
null
382896
[ "spss" ]
609029
1
609214
null
4
49
I was comparing results that I generated in R for complex survey analysis using the survey package to results from SPSS using the complex samples analysis add-on. The sample size is large (N ≈ 5500). This is the R code:

```
svy <- svydesign(ids = ~cl1 + houseID, strata = ~strata_study, weights = ~wt,
                 data = data_in, nest = TRUE, fpc = NULL)
svyby(~X, by = ~sadmood, design = svy, FUN = svymean)
```

With the output:

```
      X    group0    group1  se.group0  se.group1
No   No 0.7898876 0.2101124 0.03106912 0.03106912
Yes Yes 0.7533348 0.2466652 0.04133818 0.04133818
```

In SPSS I navigated to the complex samples Analysis Preparation Wizard, selected sampling with replacement (WR) and the following:

- Strata: strata_study
- Clusters: cl1 (cluster 1), houseID (cluster 2)
- Sample Weight: wt

The output is as follows:

```
      X group0 group1 se.group0 se.group1
No   No  79.0%  21.0%      1.4%      1.4%
Yes Yes  75.3%  24.7%      4.0%      4.0%
```

While the estimates for % of group==0 and group==1 within X==No and X==Yes are similar in SPSS and R, the standard errors are different. Does anyone know why these differences are there? Thanks!
Why does the survey package in R and SPSS complex samples add-on give different standard errors?
CC BY-SA 4.0
null
2023-03-10T16:40:34.293
2023-03-12T19:34:17.360
null
null
198413
[ "r", "spss", "survey-sampling", "survey-weights" ]
609030
2
null
411383
3
null
Under your assumptions, $$ X =\sum_1^{n_1} X_i \sim \text{Binom}(n\cdot n_1, p_1) \\ Y = \sum_1^{n_2} Y_i \sim \text{Binom}(n \cdot n_2, p_2) $$ which reduces this to a test of equality of two binomial proportions. There are many questions on this site about that. You say:

> Letting $p_1$ and $p_2$ be the averages of all individual proportions feels silly because it ignores the distributions of the proportions.

Maybe it feels so, but under your assumptions this is a sufficient reduction of the data ... so if it feels silly, maybe it is because you wonder whether the individual probabilities of the $X_i$'s and of the $Y_i$'s might not all be equal?

Your question 2: again, under the stated assumptions, this would not be better, as the test would be based on variation (in the individual $X_i/n, Y_j/n$), which under the stated model is irrelevant.

So how to test? In R you could use the function `prop.test`, see for instance [Why not always use a binomial exact test to compare two proportions instead of chi square?](https://stats.stackexchange.com/questions/135691/why-not-always-use-a-binomial-exact-test-to-compare-two-proportions-instead-of-c), or you could make the 2 x 2 contingency table and use `chisq.test`.
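For illustration, the 2 x 2 table route with hypothetical pooled counts (using scipy's `chi2_contingency` as a stand-in for R's `chisq.test`; both apply a continuity correction by default for 2 x 2 tables):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical pooled counts: successes and failures in the two groups.
table = np.array([
    [430, 570],   # group 1: 430 successes out of 1000 trials
    [520, 580],   # group 2: 520 successes out of 1100 trials
])

chi2, p, dof, expected = chi2_contingency(table)
print(dof)   # 1
print(round(p, 3))
```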
null
CC BY-SA 4.0
null
2023-03-10T16:48:07.553
2023-03-10T16:48:07.553
null
null
11887
null
609031
1
null
null
1
36
Can someone help with a sample size analysis question?

Study design: I have a sample randomly assigned to answer 1 of 5 different questions that vary in difficulty (1 - Easy -> 5 - Hard). Each question is a true/false question (binomial). I plan to conduct a chi-square test of independence to see if there's a relationship between the question assigned and the rate of correct responses. I plan to do follow-up chi-square tests to answer my question: do people assigned more difficult questions have a lower rate of correct responses? This would require comparing group 1 to 2, 3, 4, 5, then group 2 to 3, 4, 5, and so on.

My intention is to identify my sample size based on the smallest hypothesized effect, the logic being that I want a sample size that enables me to detect the smallest hypothesized effect. I have pilot data to help me calculate effect sizes. Does it make sense for me to do all the comparisons between groups in my search for the smallest effect?
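The per-comparison sizing described above can be sketched with the usual normal approximation for two proportions (the pilot rates here are hypothetical placeholders, and this is a sketch rather than a full chi-square power analysis):

```python
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-proportion z-test
    (normal approximation): n = (z_{a/2} + z_b)^2 (p1 q1 + p2 q2) / (p1 - p2)^2."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)
    zb = z.inv_cdf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return (za + zb) ** 2 * var / (p1 - p2) ** 2

# Hypothetical pilot rates for the two closest difficulty levels
# (the smallest hypothesized effect drives the largest n):
print(round(n_per_group(0.60, 0.55)))   # per-group n for a 5-point difference
```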
Sample size and power analysis: Chi-square
CC BY-SA 4.0
null
2023-03-10T16:52:03.797
2023-03-10T17:02:00.680
2023-03-10T17:02:00.680
382898
382898
[ "chi-squared-test", "sample-size", "multiple-comparisons", "effect-size" ]
609032
1
null
null
0
21
A binary logistic regression outcome variable is defined with reference to an age criterion. The binary outcome is 'Early death'=1 -- meaning the person died (in a follow-up observation period) before reaching age 65. All subjects have the same observation period. At the start of the observation period, the age of all subjects is known. I intend to use several regression predictor variables. One of the regression predictors will be a categorical age variable. It has 6 categories. Two of the category levels are 65-74 years old and 75 and above. Any person who was 65-74 or 75+ at the start of the observation period, and died in the observation period, could not have experienced the regression outcome, early death, because they were too old; they were not 64 or younger. If they died in the observation period, their value of the dependent variable is necessarily 0 (did not experience an early death). Consequently, the dependent variable's value is determined by age for these two age category levels (65-74 and 75+); the dependent variable must be 0. But the dependent variable can be 0 or 1 for any other age category that is 64 or less. Question: is it advisable to exclude from the regression model observations where the person's age is >= 65? Or should they be retained in the model? (And not interpret the parameter estimates for the age categories >= 65.)
What is appropriate when a binomial regression DV is determined by some levels of a categorical regression predictor variable?
CC BY-SA 4.0
null
2023-03-10T17:10:39.570
2023-03-10T17:11:48.657
2023-03-10T17:11:48.657
382844
382844
[ "regression", "logistic" ]
609033
2
null
608953
7
null
[Zhanxiong](https://stats.stackexchange.com/a/609024/362671) has already elaborated on what Durrett is up to and what the Wikipedia article missed. However, let me emphasize that the application of the (Baby) Skorohod Theorem is rather ingenious and makes the deduction far easier than it would otherwise have been. To give an outline: $\bullet$ If $\mathrm F_n\Rightarrow \mathrm F, $ then for $t\in (0, 1) \cap \mathcal C(\mathrm F^\leftarrow) ,$ we have $\mathrm F_n^\leftarrow (t) \to \mathrm F^\leftarrow(t). $ $\bullet$ Using this, show that if $X_n\Rightarrow X$ on a probability space, there exist $X^\#_n,X^\#$ on $([0, 1], \mathcal B([0, 1]), \lambda) $ such that $X_n\overset{\mathrm d}{=}X_n^\#$ and $X_n^\#\overset{\mathrm{a.s.}}{\to}X^\#.$ This is possible by defining $X_n^\#:= \mathrm F_n^\leftarrow(U) $ where $U$ is uniformly distributed. (Baby Skorohod Theorem) $\bullet$ For any map $h:\mathbb R\mapsto \mathbb R$ such that $\mathbb P[X\in \mathrm{Disc}(h) ]=0,$ it is easy to check $h\left(X_n^\#\right) \overset{\textrm{a.s.}}{\to} h\left(X^\#\right) $ w.r.t. $\lambda.$ As almost sure convergence implies weak convergence, then $$ h(X_n) \overset{\mathrm d}{=}h\left(X_n^\#\right)\Rightarrow h\left(X^\#\right)\overset{\mathrm d}{=} h(X).$$ As $\rm [II]$ sums up: > Once again, this is a pretty and easy proof. BUT it relies on a very sophisticated prerequisite. In other words, we do not get anything for free. --- ## References: $\rm [I]$ A Probability Path, Sidney Resnick, Birkhäuser, $1999, $ sec. $8.3, $ pp. $259-261.$ $\rm [II]$ Probability: A Graduate Course, Allan Gut, Springer Science$+$Business, $2005, $ sec. $5.13.1, $ pp. $258-260.$
null
CC BY-SA 4.0
null
2023-03-10T17:29:12.433
2023-03-10T18:13:30.990
2023-03-10T18:13:30.990
362671
362671
null
609034
1
null
null
0
61
Suppose I observe random $y_{i,1}, y_{i,2}$, and I wish to estimate the correlation between them. However, the $y_{i,j}$ are observed subject to some sample selection criterion. That is, there are some observed covariates $x_{i,1}, \ldots, x_{i,k}$ and an unobserved variable $z_i$ such that we only see the $y_{i,j}$ if $f\left(x_{i,1}, \ldots, x_{i,k}, z_i\right) > 0$, otherwise we do not observe the values of the $y_{i,j}$. (For simplicity assume the $f$ is linear in its arguments.) My questions: - Is the sample Pearson correlation biased or inefficient for estimating the correlation of the $y$? I suspect that if the $y_{i,j}$ are independent of the $x_{i,j}$ and $z_i$, there is no problem, but in general I assume there is a bias. - Can one use Heckman correction, or some other technique, to improve estimation of the correlation?
Heckman correction for correlation estimates
CC BY-SA 4.0
null
2023-03-10T18:18:40.217
2023-03-13T01:00:36.010
null
null
795
[ "correlation", "linear-model", "selection-bias", "heckman" ]
609036
1
null
null
0
8
I'm learning about random effects models. In the [example on wikipedia](https://en.wikipedia.org/wiki/Random_effects_model#Simple_example), they posit scores on a test from individuals at various schools, which they propose to model as: $$Y_{ij} = \mu + U_i + W_{ij} + \epsilon_{ij}$$ $Y$ is the score, $W$ is an effect per student, $U$ is an effect per school. The article isn't explicit, but presumably $\epsilon$ is an error term. I'm finding the error term confusing, given the presence of the $W$ term. For example, if we proposed a simpler model: $Y_{ij} = W_{ij} + \epsilon_{ij}$, then $W$ accounts for random variation between individuals - $\epsilon_{ij}$ accounts for "what's left", but it seems to me that all of the random variation here is random variation between individuals, so once we account for that, why is there anything "left"? Are they assuming some kind of parametric constraints on $U$ and $W$ that limit what they can account for? Like, would we assume that $W$ is Gaussian, or something? What is the difference between the individual term and the error term?
Random effects model, distinction between individual and error term?
CC BY-SA 4.0
null
2023-03-10T18:29:40.240
2023-03-10T18:29:40.240
null
null
143446
[ "mixed-model" ]
609039
1
null
null
2
30
I have longitudinal data across several time points. At each time point, participants completed an online test up to 3 times (i.e., measurements were completed 1-3 times at each timepoint). The number of times a participant repeated the test was optional, with participants generally completing the test more times at baseline. I plan to run latent growth models (SEM) on this data to model the average test scores over time to look at the trajectories of test performance. However, I would like to take into account the number of times a participant completed a test at a given timepoint. Therefore I plan to include the number of times a participant completed the test at each timepoint as an unconditional time-varying covariate (TVC). I'm wondering which method of scaling is most appropriate in this circumstance for my TVC (i.e., number of times the test was taken). Mean-centering using: A) the mean across all timepoints for each participant (i.e., person-mean centered), B) the mean across all participants and timepoints (i.e., grand-mean centered), or C) the mean of first time point (or other referent point) across all participants (i.e., group-mean centered/centering within context)?
Which method to use to scale/mean center a time-varying covariate in latent growth models (SEM)?
CC BY-SA 4.0
null
2023-03-10T18:54:30.107
2023-03-16T11:50:40.017
2023-03-16T11:50:40.017
323168
323168
[ "panel-data", "structural-equation-modeling", "time-varying-covariate", "centering", "growth-model" ]
609040
2
null
603986
0
null
For everyone else that's searching for the answer: balance the two datasets by encoding all 0's for the missing columns. E.g., the train set has `Color: Green, Blue` and the test set has `Color: Red, Green`. You will then have 3 one-hot-encoded columns (Blue, Green, Red) in both the train and test datasets. The Blue column will be all 0's in the test set and the Red column will be all 0's in the train dataset.
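A minimal pandas sketch of this alignment (column names and data invented); `reindex` with `fill_value=0` does the zero-filling described above:

```python
import pandas as pd

train = pd.DataFrame({"Color": ["Green", "Blue", "Green"]})
test = pd.DataFrame({"Color": ["Red", "Green"]})

train_ohe = pd.get_dummies(train, columns=["Color"])
test_ohe = pd.get_dummies(test, columns=["Color"])

# Union of columns from both sets; columns absent in a set are filled with 0
all_cols = sorted(set(train_ohe.columns) | set(test_ohe.columns))
train_ohe = train_ohe.reindex(columns=all_cols, fill_value=0)
test_ohe = test_ohe.reindex(columns=all_cols, fill_value=0)
print(all_cols)
```

Both frames end up with the same three columns, so a model fitted on one can score the other.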
null
CC BY-SA 4.0
null
2023-03-10T18:57:16.773
2023-03-10T18:57:16.773
null
null
361781
null
609041
2
null
344907
0
null
Let us reparametrize your problem; I will assume $X_1$ and $X_2$ are independent (you did not specify). Let $\theta = p_2-p_1$ and write $p = p_1$, so that $$ X_1 \sim \mathcal{Binom}(n_1, p), \quad X_2\sim\mathcal{Binom}(n_2,p+\theta) $$ and the null hypothesis is $H_0\colon \theta < \nu$, where $0<\nu < 1$ is a prespecified constant. In this formulation, $\theta$ is the focus parameter and $p$ is incidental. So it is natural to focus on the profile likelihood function of $\theta$, profiling out $p$. This is similar to the situation in - Confidence interval on the percentage difference of two binomial distributions but not the same. The code there can be adapted for this problem. I will do so later when I have some time. But there is one big difference: your hypothesis is one-sided, so it corresponds to a one-sided confidence interval. But that can be obtained from the profile likelihood function as easily as the usual two-sided interval.
null
CC BY-SA 4.0
null
2023-03-10T19:05:37.907
2023-03-10T19:05:37.907
null
null
11887
null
609042
1
null
null
1
169
I am looking for help on correlated systematic errors, and their meaning. I have some quantities $x,y,z$ which determine a function I need to calculate. These 3 quantities are determined by a measurement, and the uncertainties on the measurement translate into an uncertainty on the function they are used to calculate. I think that somehow, these 3 observables can be varied individually to determine some type of error correlation, or covariance matrix. But I'm not sure I understand the use or meaning of error correlation. Let me be more concrete... I am calculating a certain function, say $N = \int f(x,y,z) ~dx ~dy~ dz$ using a Monte Carlo, for a certain experiment; so I can sample randomly over $x , y , z$ and calculate $N$. Now the measured $x,y,z$ have some experimental uncertainties, and I wish to estimate the impact of these uncertainties on $N$; i.e. $N \rightarrow N \pm \delta N$. We can take the systematic errors on $x,y,z$ to be gaussian with widths $\delta x, \delta y, \delta z$. So I can take my Monte Carlo data set $\{x_i,y_i,z_i\}$ and smear each entry randomly within their widths to produce a new Monte Carlo Data set $\{x_j,y_j,z_j\}$, and then calculate $N_j$. I can repeat this process to produce a set of $\{N\} = \{N_1,N_2...\}$. I can then take $N \pm \delta N = {\rm mean}(\{N\}) \pm \rm{Std. Dev}(\{N\})$. Now I think this would be a result with fully decorrelated errors. My question: what does error correlation mean, how does it apply in general (or for my problem), and what is the purpose? Am I over/under-estimating the error? If so, how do I properly check if there is correlation and deal with it properly? If anyone has any good sources that would be greatly appreciated. I can find some useful info ([https://arxiv.org/pdf/1507.08210.pdf](https://arxiv.org/pdf/1507.08210.pdf)), but I'm still quite confused. 
This question ([How do I propagate correlated errors numerically?](https://stats.stackexchange.com/questions/465853/how-do-i-propagate-correlated-errors-numerically)) seems to be related. Thank you in advance. Edit: There may be some confusion about terminology, as pointed out in the comments by whuber. The types of measurement errors which I called "systematic errors" may be better called "random errors".
What are correlated errors and why are they important?
CC BY-SA 4.0
null
2023-03-10T19:33:27.753
2023-03-13T18:51:34.950
2023-03-13T18:51:34.950
291643
291643
[ "correlation", "covariance", "standard-error", "monte-carlo", "measurement-error" ]
609043
1
609050
null
2
34
I have data that consists of multiple variables. One of those variables, let's call it A, is categorical. The rest of the variables (i.e., B1, B2, ..., B12) are count data. I want to determine whether A affects any one of the count variables. To do this, I implemented the following procedure in R, ``` for (i in colnames(CountDataVariableNames)){ form <- formula(paste(i,'~','A')) print(form) model.p <- glm(form,data=df,family='poisson') print(Anova(model.p,type='II',test='LR')) } ``` I am following the code used in this source: [https://rcompanion.org/handbook/J_01.html](https://rcompanion.org/handbook/J_01.html) I'm not sure what's going on here, but I get this kind of output: ``` Analysis of Deviance Table (Type II tests) Response: B1 LR Chisq Df Pr(>Chisq) A 1.27 1 0.2598 ``` Does my procedure make sense? And what exactly am I doing? If the answer to the latter isn't easy, can anyone recommend any resources on statistical tests (preferably in `R`)? Thank you!
Running a statistical test to determine whether A (categorical) affects B (counts)
CC BY-SA 4.0
null
2023-03-10T19:34:55.517
2023-03-10T21:37:52.857
2023-03-10T21:37:52.857
56940
331670
[ "r", "anova", "generalized-linear-model", "chi-squared-test", "poisson-regression" ]
609044
1
null
null
0
21
I have a panel data set with a continuous endogenous variable and exogenous time-invariant categorical variables (and also some continuous exogenous variables). Now I am wondering which panel regression model would be preferable. Is it right that a fixed effects model would drop all my categorical variables as they're time-invariant, or is there a method of applying fixed effects to this kind of data? Or is it only possible to apply a random effects model? Thank you for your help!
Panel regression with exogenous time-invariant categorical variables
CC BY-SA 4.0
null
2023-03-10T20:08:53.137
2023-03-10T20:09:49.477
2023-03-10T20:09:49.477
382909
382909
[ "mixed-model", "panel-data", "fixed-effects-model", "cross-section" ]
609045
2
null
609042
2
null
You can have correlation in two ways in settings like this: - the error in $x_i$ is correlated with the error in $y_i$ - the error in $x_i$ is correlated with the error in $x_{i+1}$ In both situations, the effect of correlation is on how the positive and negative errors cancel (or reinforce) each other. If the errors in $x_i$ and $y_i$ are positively correlated, the error in $x_i+y_i$ is larger and the error in $x_i-y_i$ is smaller than if they were independent. Similarly, if the errors in $x_i$ and $x_{i+1}$ are positively correlated, the error in the sum (or average) will be higher than if they are independent. In the real world there are reasons why correlated measurement errors are plausible - $x$, $y$, and $z$ are all measured on the same physical sample, which might not be perfectly representative (air pollution, soil sampling) - $x_i$ and $x_{i+1}$ are measured in the same location at different times, and that location is high/low compared to the average - measured in the same lab (lab drift/batch effects) - negative correlation because $x$, $y$, and $z$ add up to a fixed total (% calories from different sources) - measurements derived from the same imperfectly accurate theoretical model - etc, etc In your case, then, you have the questions: - are your measurements positively or negatively correlated? - is your function $N$ more like an average or more like a difference? (low-pass or high-pass in engineering terms) These questions aren't how you calculate -- you do that by simulating appropriately correlated errors -- but they are useful for thinking about what you should expect.
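To make the cancellation/reinforcement point concrete, here is a small simulation sketch in Python (the unit variances and the value ρ = 0.8 are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
results = {}
for rho in (0.0, 0.8):
    cov = [[1.0, rho], [rho, 1.0]]
    ex, ey = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    # positive correlation inflates the spread of x+y and deflates that of x-y
    results[rho] = (np.std(ex + ey), np.std(ex - ey))
    print(rho, round(results[rho][0], 3), round(results[rho][1], 3))
```

With ρ = 0.8 the standard deviation of the sum rises to √3.6 ≈ 1.90 while that of the difference falls to √0.4 ≈ 0.63, versus √2 ≈ 1.41 for both in the independent case.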
null
CC BY-SA 4.0
null
2023-03-10T20:11:42.987
2023-03-10T20:11:42.987
null
null
249135
null
609046
2
null
523463
1
null
The rationale for dropping variables seems to go something like this. - Having many parameters in the model risks overfitting. - Thus, if we can reduce the parameter count, we might be able to guard against overfitting. - When variables are related, dropping one would seem to retain much of the information available in both (or a whole group), due to the relationship. In some sense, it is like dropping a quarter of a variable to get a reduction of a full parameter. - Therefore, if we drop one of those variables, we might be able to cut down on overfitting without sacrificing much of the information that is available in our features. While it is true that a high parameter count can risk overfitting, it also is true that a low parameter count can risk underfitting, so it is not obvious that removing variables puts you in a better position. Further, as Frank Harrell discusses here, [variable selection techniques tend not to be very good at what they claim to do](https://stats.stackexchange.com/a/18245/247274). If you find yourself tempted to drop variables, ask yourself why you want to drop any and why you want to drop those particular variables. To some extent, the above is just for predictive modeling. If you want to interpret your model, the situation gets even worse. First, much of variable selection distorts downstream inferences, so your confidence intervals and p-values on regression coefficients are not accurate. Second, omitting variables that are correlated with variables that enter the model risks omitted-variable bias. Maybe you have a simpler model that reduces the VIF on your variable of interest, but: - It is not a given that removing a correlated variable will shrink the confidence interval on your variable of interest, since the VIF is competing with the overall error variance that might be higher after you remove a variable. - You're perhaps giving a confidence interval for a biased estimate.
Of all of the methods for doing biased estimation, it is not clear why this is the best or even a remotely competitive approach. One of the major advantages of ridge and LASSO regression is that they work fine when you have huge variable counts. If you can pare down the parameter count using domain knowledge (knowing the literature or the scientific theory behind the study), that could be a reasonable way of reducing the variable count before you present data to the ridge and LASSO estimators. Aside from that, however, one of the points of using regularization techniques is to allow for large variable counts.
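As a toy sketch of that last point — ridge handling two nearly collinear predictors without dropping either — using the closed-form ridge solution (all numbers below are invented):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)      # nearly collinear with x1
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 0.5 * x2 + rng.normal(size=n)

# Ordinary least squares vs. ridge with penalty lam (closed forms)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
lam = 10.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
print(beta_ols, beta_ridge)
```

The ridge coefficients are shrunk relative to OLS (strictly smaller in norm), yet the fit keeps both correlated features; nothing had to be dropped.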
null
CC BY-SA 4.0
null
2023-03-10T20:17:14.147
2023-03-10T20:17:14.147
null
null
247274
null
609047
1
null
null
2
38
If I am running a regression of the following form: $y_{i,t} = \mu_i + \beta_1x_{i,t} + \epsilon_{i,t}$ Where i indicates group, t time, and $\mu_i$ are a set of fixed effects for each group, and x is my independent variable of interest. Say I estimate this model to control for time invariant characteristics. Now, if within each i, the slopes of x and y are different, what exactly does regression (Ordinary Least Squares) estimate when not accounting for different slopes within groups? Does it somehow take an average of the group specific slopes? Just as a very simplified example if it helps to explain my confusion, I drew fake datapoints for two groups (one in red, one in blue). Fitting the fixed effects allows for the groups to have their own intercepts, but what line will regression ultimately draw? [](https://i.stack.imgur.com/5m23F.jpg)
Fixed Effects regression, but different slopes within each group. How does regression estimate the slope?
CC BY-SA 4.0
null
2023-03-10T20:37:55.127
2023-03-10T22:20:08.440
2023-03-10T22:20:08.440
175283
175283
[ "regression", "estimation", "econometrics", "interpretation", "fixed-effects-model" ]
609048
2
null
608957
1
null
Referring to your comments, if the proposal distribution is $$ \rho_x=e^{-\frac{E_x}T} $$ with $E_x$ being a known, deterministic function, then it's just a [Laplace distribution](https://en.wikipedia.org/wiki/Laplace_distribution) for $E_x$, with a location parameter equal to zero and scale equal to $T$. You can sample from it directly.
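A minimal sampling sketch (Python/NumPy used for illustration; the value of $T$ below is an assumed example):

```python
import numpy as np

rng = np.random.default_rng(42)
T = 2.0                        # assumed scale ("temperature")
samples = rng.laplace(loc=0.0, scale=T, size=100_000)
# Laplace(0, T) has mean 0 and variance 2*T^2
print(round(float(samples.mean()), 3), round(float(samples.var()), 3))
```

The sample mean should be near 0 and the sample variance near $2T^2 = 8$.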
null
CC BY-SA 4.0
null
2023-03-10T21:11:48.663
2023-03-10T21:11:48.663
null
null
35989
null
609049
1
null
null
2
27
I have conducted exploratory factor analysis with psych and confirmatory factor analysis with lavaan (following the code in ch. 15 of this book: [https://doi.org/10.1515/9783110786088](https://doi.org/10.1515/9783110786088)) on a dataset. Everything seems fine with the analyses, the model fit is good (I am adapting a previously used scale to my dataset), and all items load cleanly on the factors. I want to create a scale from the dataset, with scores to use for further analysis. It is my understanding that this can be done with factor scores; however, after searching on here and around the internet I am just more confused about: - what exactly are factor scores - how do I estimate them in R (I know this is more of a coding question but I don't even know what I should provide in terms of code to give a reproducible example - however, you can disregard this question if it is inappropriate and I will figure it out in a different way) - of all the seemingly available methods, how do I choose the best one for my data? All of my variables are numeric and measured on Likert scales (different scales for different variables, but from what I understood this should not be a problem). My sample has 2056 observations and 17 variables. I am very new to both R and data analysis so please explain this to me like I am a 5 year old of below-average intelligence
How to construct a scale from factor analysis
CC BY-SA 4.0
null
2023-03-10T21:13:07.240
2023-03-10T21:13:07.240
null
null
382486
[ "r", "factor-analysis", "scale-construction" ]
609050
2
null
609043
2
null
The chunk > for (i in colnames(CountDataVariableNames){ form <- formula(paste(i,'~','A')) print(form) model.p <- glm(form,data=df,family='poisson') print(Anova(model.p,type='II',test='LR')) } performs a Poisson regression using $B[i]$ as response and the categorical variable `A` as a covariate, for every $i = 1,\ldots,12$. This regression model is useful in order to estimate the population means of the groups in `A`. The chunk > print(Anova(model.p,type='II',test='LR')) runs a likelihood ratio test for the null hypothesis that all groups of `A` have the same population mean against the alternative that at least two groups have different population means. Thus, the code does exactly what you have in mind. Remark. There is a multiple comparison problem here due to the fact that you are running several tests (by means of the command `Anova`). Indeed, the joint confidence level may be far from, i.e. much lower than, what you have in mind. Therefore, consider adjusting your `Anova`'s p-values by means of a suitable multiplicity correction method. For this, check the `p.adjust` function of `R`.
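For readers outside R, here is a pure-Python sketch of the Holm adjustment that `p.adjust(method = "holm")` performs (the p-values below are invented):

```python
# Holm step-down adjustment, mirroring R's p.adjust(method = "holm"):
# sort p-values ascending, multiply the k-th smallest by (m - k + 1),
# enforce monotonicity with a running maximum, and cap at 1.
def holm_adjust(pvals):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[i])
        adj[i] = min(running_max, 1.0)
    return adj

# Hypothetical p-values from 12 separate likelihood-ratio tests
pvals = [0.26, 0.01, 0.04, 0.33, 0.002, 0.07, 0.50, 0.03, 0.60, 0.12, 0.90, 0.04]
adjusted = holm_adjust(pvals)
print([round(p, 4) for p in adjusted])
```

With these made-up values, only the smallest raw p-value survives the correction at the 0.05 level.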
null
CC BY-SA 4.0
null
2023-03-10T21:35:42.187
2023-03-10T21:35:42.187
null
null
56940
null
609051
1
null
null
0
64
I'm a beginner at using lcmm/hlme and at latent class analysis in general. I'm trying to figure out how to understand the initial values (B in the packages) and what they mean exactly. There are a lot of complex examples and explanations out there, but I'm looking for a more high-level explanation of what each parameter in the B list means and how many there are in a simple LCA setting.
Definition of B/Initial Values in Latent Class Analysis (particularly lcmm)
CC BY-SA 4.0
null
2023-03-10T21:49:10.650
2023-03-15T23:50:56.820
2023-03-15T23:50:56.820
11887
382914
[ "classification", "optimization", "latent-variable", "latent-class", "weight-initialization" ]
609052
1
null
null
0
41
If I have a set of $N$ independent samples from a probability distribution $P(X)$, $X_i\sim P(X)$, then I know from the central limit theorem (assuming the distribution is well behaved) that the moments of the sample distribution $\overline{X^k}\equiv \frac{1}{N}\sum_i X_i^k$, will each individually be normally distributed, and there are useful formulas I can use for the sample error. But how can I talk about the joint distribution $P(\overline{X^k},\overline{X^{\ell}})$? Specifically I care about the joint distribution of the second and fourth sample moments, $P(\overline{X^2},\overline{X^{4}})$. From quick numerical experiments and naive intuition, I would guess that this joint distribution should be a multivariate Gaussian under some set of conditions. But I also see problems with that, because inequalities like $(\overline{X^2})^2\le\overline{X^4}$ are satisfied. There is also the fact that even if two distributions $P(A)$ and $P(B)$ are normal, it doesn't have to be the case that $P(A,B)$ is normal. --- In practice this question arises because I have a time series of data from a Markov chain with $0\leq X_i\leq 1$ (so I have to use the Markov chain central limit theorem), and I care about estimating the quantity $f(\mathbb{E}[X^2],\mathbb{E}[X^4])$. There are lots of other ways to estimate the error on the quantity $f$, but I think it would be useful to understand the generic situation above.
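A quick simulation along the lines mentioned in the post (uniform draws; the sizes are invented) suggests the two sample moments are strongly positively correlated, while still respecting $(\overline{X^2})^2\le\overline{X^4}$ in every replicate:

```python
import numpy as np

rng = np.random.default_rng(0)
N, reps = 2000, 500
m2 = np.empty(reps)
m4 = np.empty(reps)
for r in range(reps):
    x = rng.uniform(0.0, 1.0, size=N)
    m2[r] = np.mean(x ** 2)   # second sample moment
    m4[r] = np.mean(x ** 4)   # fourth sample moment
corr = np.corrcoef(m2, m4)[0, 1]
print(round(float(corr), 3))
```

For the uniform case the population correlation works out to about 0.96, so the scatter of $(\overline{X^2},\overline{X^4})$ is a thin, tilted cloud — consistent with an asymptotically joint-normal limit concentrated away from the constraint boundary.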
How to apply the central limit theorem on higher order moments?
CC BY-SA 4.0
null
2023-03-10T21:52:11.437
2023-03-10T21:52:11.437
null
null
225402
[ "mean", "central-limit-theorem", "moments" ]
609053
2
null
608696
1
null
$$\frac{1}{x}=\int_{-\infty}^{0}{e^{ux}\,du}=\int_{0}^{\infty}{e^{-ux}\,du}, \qquad x>0$$ $$E\left(\frac{1}{X}\right)=\int_{0}^{\infty}{x^{-1}f\left(x\right)dx}=\int_{0}^{\infty}{\left(\int_{0}^{\infty}{e^{-ux}du}\right)f\left(x\right)dx}=$$ $$\int_{0}^{\infty}{\left(\int_{0}^{\infty}{e^{-ux}f(x)dx}\right)du}\ =\int_{0}^{\infty}{M_X(-u)du}\ =\ \int_{0}^{\infty}{M_X(-t)dt}$$
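A numerical sanity check of this identity for a Gamma example (shape–scale parametrization with $E(X)=\alpha\beta$ assumed, $\alpha>1$ so that $E(1/X)$ exists; SciPy used for the integral):

```python
import math
from scipy.integrate import quad

alpha, beta = 3.0, 2.0   # shape, scale; need alpha > 1

# Gamma MGF: M_X(t) = (1 - beta*t)^(-alpha), so M_X(-t) = (1 + beta*t)^(-alpha)
integral, _ = quad(lambda t: (1.0 + beta * t) ** (-alpha), 0.0, math.inf)
closed_form = 1.0 / (beta * (alpha - 1.0))   # known value of E(1/X) for this Gamma
print(integral, closed_form)
```

Both sides come out to $1/(\beta(\alpha-1)) = 0.25$ for these example parameters.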
null
CC BY-SA 4.0
null
2023-03-10T21:53:39.567
2023-03-11T03:31:13.783
2023-03-11T03:31:13.783
362671
382603
null
609055
1
609061
null
0
25
I have the output (shown below) from the GLM with Gamma(link = "log"). The outcome (dependent variable) is strictly greater than 0, and the group variable (predictor) is binary (either 0 or 1). In this case, is it right to conclude as follows? - Group 1 reduces the mean outcome by a factor of exp(-0.04) = 0.96. - The expected mean ratio of Group 1 to Group 0 is 0.96. Looking forward to hearing from you!! ``` Call: glm(formula = Outcome ~ group, family = Gamma(link = "log"), data = d2) Deviance Residuals: Min 1Q Median 3Q Max -0.49019 -0.21677 -0.11818 0.02478 0.96391 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 1.85327 0.04017 46.13 <2e-16 *** group1 -0.04309 0.06844 -0.63 0.53 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for Gamma family taken to be 0.1258647) Null deviance: 11.146 on 118 degrees of freedom Residual deviance: 11.097 on 117 degrees of freedom AIC: 489.68 Number of Fisher Scoring iterations: 4 ```
GLM with Gamma(link = "log") in R
CC BY-SA 4.0
null
2023-03-10T22:05:32.223
2023-03-10T23:29:18.223
null
null
261445
[ "r", "regression", "generalized-linear-model", "gamma-distribution" ]
609057
1
null
null
0
12
I calculated the difference in 2 proportions with `prop.test()`, using some exemplary data I found on the internet. ``` > (ppt <- prop.test(x = c(11, 8), n = c(16, 21),correct = FALSE)) 2-sample test for equality of proportions without continuity correction data: c(11, 8) out of c(16, 21) X-squared = 3.4159, df = 1, p-value = 0.06457 alternative hypothesis: two.sided 95 percent confidence interval: -0.001220547 0.614315785 sample estimates: prop 1 prop 2 0.6875000 0.3809524 ``` The p-value is 0.06457. The confidence interval for the difference in proportions is -0.001220547 to 0.614315785. If I calculate the p-value from the CI manually using the normal distribution: ``` > z_score <- qnorm((1 + 0.95) / 2) > se <- diff(ppt$conf.int) / (2 * z_score) > z <- diff(ppt$estimate) / se > 2 * (1 - pnorm(abs(z))) [1] 0.05091551 ``` this p-value does not agree with the p-value reported alongside the provided confidence interval. When I calculate it from the logistic regression using the marginal effects, I get the same confidence interval that `prop.test()` gives and the p-value I calculated above. ``` data <- data.frame(Status = c(rep(TRUE, 11), rep(FALSE, 16-11), rep(TRUE, 8), rep(FALSE, 21-8)), Group = c(rep("Gr1", 16), rep("Gr2", 21))) > m <- glm(Status ~ Group,family = binomial(), data=data) > margins::margins_summary(m) factor AME SE z p lower upper GroupGr2 -0.3065 0.1570 -1.9522 0.0509 -0.6143 0.0012 ``` But when I use ANOVA on this model with the Rao test, I get the p-value from `prop.test()`: ``` > anova(m, test="Rao") Analysis of Deviance Table Model: binomial, link: logit Response: Status Terms added sequentially (first to last) Df Deviance Resid. Df Resid. Dev Rao Pr(>Chi) NULL 36 51.266 Group 1 3.4809 35 47.785 3.4159 0.06457 . --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 ``` So how is it possible that `prop.test()` gives me a p-value which does NOT come from the confidence interval provided by the same function?
The confidence interval is Wald's, but the p-value comes from the Rao score test. What is the statistical reasoning behind this? The CI is calculated using one method, while the test uses another. They would agree if the Wald CI were paired with the p-value I calculated by hand from it. The topic: [P value and confidence interval for two sample test of proportions disagree](https://stats.stackexchange.com/questions/57104/p-value-and-confidence-interval-for-two-sample-test-of-proportions-disagree) doesn't answer my question, as the obtained CI is not Wilson but Wald, and it still doesn't explain why the function does not report consistent inference. It can be found also here: [https://stats.stackexchange.com/a/570528/382831](https://stats.stackexchange.com/a/570528/382831)
Why does R's prop.test() for 2 proportions report Wald's confidence interval but a Rao score test p-value? They are not consistent
CC BY-SA 4.0
null
2023-03-10T22:07:15.840
2023-03-10T22:07:15.840
null
null
382831
[ "inference", "chi-squared-test", "z-test" ]
609059
2
null
57104
1
null
Unfortunately, the accepted answer is not correct for the 2-sample prop.test. The by-hand calculation shows that the confidence interval is the Wald one (if no correction is used), and not Wilson. This is also noted here: [https://stats.stackexchange.com/a/570528/382831](https://stats.stackexchange.com/a/570528/382831) The returned CI is the Wald one: ``` > (ppt <- prop.test(x = c(11, 8), n = c(16, 21),correct = FALSE)) 2-sample test for equality of proportions without continuity correction data: c(11, 8) out of c(16, 21) X-squared = 3.4159, df = 1, p-value = 0.06457 alternative hypothesis: two.sided 95 percent confidence interval: -0.001220547 0.614315785 sample estimates: prop 1 prop 2 0.6875000 0.3809524 ``` which agrees with the logistic regression followed by the marginal effect: ``` data <- data.frame(Status = c(rep(TRUE, 11), rep(FALSE, 16-11), rep(TRUE, 8), rep(FALSE, 21-8)), Group = c(rep("Gr1", 16), rep("Gr2", 21))) > m <- glm(Status ~ Group,family = binomial(), data=data) > margins::margins_summary(m) factor AME SE z p lower upper GroupGr2 -0.3065 0.1570 -1.9522 0.0509 -0.6143 0.0012 ``` which agrees with ``` > PropCIs::wald2ci(11, 16, 8, 21, conf.level=0.95, adjust="Wald") data: 95 percent confidence interval: -0.001220547 0.614315785 sample estimates: [1] 0.3065476 ``` While the reported p-value comes from the Rao score test: ``` > anova(m, test="Rao") Analysis of Deviance Table Model: binomial, link: logit Response: Status Terms added sequentially (first to last) Df Deviance Resid. Df Resid. Dev Rao Pr(>Chi) NULL 36 51.266 Group 1 3.4809 35 47.785 3.4159 0.06457 . --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 ```
null
CC BY-SA 4.0
null
2023-03-10T22:20:38.937
2023-03-10T22:20:38.937
null
null
382831
null
609060
2
null
183225
2
null
The accepted answer is right: the 1-sample `prop.test()` is calculated using the Wilson score. It can be checked with: ``` > binom::binom.confint(319, 1100, conf.level = 0.99) method x n mean lower upper 1 agresti-coull 319 1100 0.2900000 0.2560789 0.3264393 2 asymptotic 319 1100 0.2900000 0.2547589 0.3252411 # Wald's (SAS) 3 bayes 319 1100 0.2901907 0.2554718 0.3258328 4 cloglog 319 1100 0.2900000 0.2552377 0.3255863 5 exact 319 1100 0.2900000 0.2552831 0.3265614 6 logit 319 1100 0.2900000 0.2560616 0.3264627 7 probit 319 1100 0.2900000 0.2558036 0.3261994 8 profile 319 1100 0.2900000 0.2556501 0.3260360 9 lrt 319 1100 0.2900000 0.2556607 0.3260543 10 prop.test 319 1100 0.2900000 0.2635118 0.3179745 11 wilson 319 1100 0.2900000 0.2561013 0.3264169 # Wilson ``` For the 2 sample it's Wald's. ``` > (ppt <- prop.test(x = c(11, 8), n = c(16, 21),correct = FALSE)) 2-sample test for equality of proportions without continuity correction data: c(11, 8) out of c(16, 21) X-squared = 3.4159, df = 1, p-value = 0.06457 alternative hypothesis: two.sided 95 percent confidence interval: -0.001220547 0.614315785 sample estimates: prop 1 prop 2 0.6875000 0.3809524 ``` which agrees with the logistic regression followed by the marginal effect: ``` data <- data.frame(Status = c(rep(TRUE, 11), rep(FALSE, 16-11), rep(TRUE, 8), rep(FALSE, 21-8)), Group = c(rep("Gr1", 16), rep("Gr2", 21))) > m <- glm(Status ~ Group,family = binomial(), data=data) > margins::margins_summary(m) factor AME SE z p lower upper GroupGr2 -0.3065 0.1570 -1.9522 0.0509 -0.6143 0.0012 ``` which agrees with ``` > PropCIs::wald2ci(11, 16, 8, 21, conf.level=0.95, adjust="Wald") data: 95 percent confidence interval: -0.001220547 0.614315785 sample estimates: [1] 0.3065476 ``` While the reported p-value comes from the Rao score test: ``` > anova(m, test="Rao") Analysis of Deviance Table Model: binomial, link: logit Response: Status Terms added sequentially (first to last) Df Deviance Resid. Df Resid. 
Dev Rao Pr(>Chi) NULL 36 51.266 Group 1 3.4809 35 47.785 3.4159 0.06457 . --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 ```
null
CC BY-SA 4.0
null
2023-03-10T22:23:29.910
2023-03-10T22:30:07.280
2023-03-10T22:30:07.280
382831
382831
null
609061
2
null
609055
1
null
"Group 1 reduces the mean outcome by a factor of exp(-0.04) = 0.96" needs something like "from the mean of group 0" or "from the baseline mean" in there, or at least implied (e.g. by having been mentioned immediately before this part) On the second form, beware the distinction between "mean ratio" (which seems to be implying $E(Y_{1i}/Y_{0j})$) and "ratio of means". ($E(Y_{1i})/E(Y_{0j})=\mu_1/\mu_0$), which is what you intend. The first will be larger than the second (e.g. by Jensen's inequality, though there are simpler arguments for this specific case), and might not even be finite. If you correct for both issues, choose whichever seems to best suit what you're trying to say at the time. (I would also avoid reducing reporting estimates to a single significant figure.)
null
CC BY-SA 4.0
null
2023-03-10T23:24:04.383
2023-03-10T23:29:18.223
2023-03-10T23:29:18.223
805
805
null
609063
1
609069
null
0
61
Let's suppose $$E\left(\frac{1}{X}\right)=\int_{0}^{\infty}{M_X(-t)dt}.$$ Could you please help me to find $$E\left(\frac{1}{X}\right)$$ where $$X \sim\textrm{Gamma}(\alpha, \beta)$$ and $$E(X) = \alpha\beta~?$$
Expected value of Y = (1/X) where X is Gamma Distribution
CC BY-SA 4.0
null
2023-03-10T23:52:04.200
2023-03-11T02:42:47.240
2023-03-11T02:42:47.240
362671
382603
[ "self-study", "expected-value", "gamma-distribution", "moment-generating-function" ]
609064
2
null
385623
1
null
So far the observed likelihood is: $$ L(\pi) = \prod_{n=1}^Np(\mathbf{t_n}) = \prod_{n=1}^N\left(\sum_{i=1}^M \pi_ip(\mathbf{t}_n|i)\right) $$ and the observed log likelihood is given by: $$ l(\pi) =\sum_{n=1}^N \log\left(\sum_{i=1}^M \pi_ip(\mathbf{t}_n|i)\right) $$ If we consider the complete data, we invoke the use of an auxiliary variable $Z$ which is unobserved, but which indicates which group the data $\mathbf{t_n}$ belongs to. Think of $Z_n$ as a one-hot vector with the $i^{th}$ entry being $1$, indicating the $i^{th}$ group among all $M$ groups to which the data $\mathbf{t_n}$ belongs. Now $Z$ will be an $N\times M$ matrix with only one entry per row being $1$ and the rest $0$. We know that $\pi_i = \sum_{n=1}^NZ_{ni}/N$. To write the complete log-likelihood, we need the joint density of $\mathbf{t}_n$ and $\mathbf{z}_{n}$: $$ p(\mathbf{t}_n, \mathbf{z}_{n}) = p(\mathbf{t}_n|\mathbf{z}_{n})p(\mathbf{z}_n) $$ Suppose that $z_{ni} = 1$; then $$ p(\mathbf{t}_n|z_{ni}=1) = p(\mathbf{t}_n|i)\\ p(z_{ni} = 1) = \pi_i\\ \implies p(\mathbf{t}_n, z_{ni} = 1) = \pi_ip(\mathbf{t}_n|i) $$ Thus, given the complete data, the contribution of observation $n$ to the likelihood is $\left(\pi_ip(\mathbf{t}_n|i)\right)^{z_{ni}}$. Hence $$ L_c = \prod_{n=1}^N\prod_{i=1}^M \Big[\pi_ip(\mathbf{t}_n|i)\Big]^{z_{ni}}\\ $$ Therefore $$ l_c = \sum_{n=1}^N\sum_{i=1}^M z_{ni}\log\Big(\pi_ip(\mathbf{t}_n|i)\Big)\\ $$
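As a quick numerical sanity check of the last identity (an illustrative Python sketch with a toy two-component Gaussian mixture, not part of the original derivation): because each row of $Z$ is one-hot, the double sum collapses to a single log term per observation.

```python
import math
import random

def norm_pdf(t, mu, sigma):
    # Univariate normal density, used as the component density p(t | i).
    return math.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

random.seed(3)
pi = [0.4, 0.6]                      # mixing weights pi_i
params = [(0.0, 1.0), (5.0, 2.0)]    # (mu, sigma) for each of M = 2 components
t = [random.gauss(0, 1) for _ in range(8)]
z = [[1, 0] if random.random() < 0.5 else [0, 1] for _ in t]  # toy one-hot rows

# Complete-data log-likelihood as the double sum over n and i ...
l_c = sum(
    z[n][i] * math.log(pi[i] * norm_pdf(t[n], *params[i]))
    for n in range(len(t)) for i in range(2)
)

# ... equals one log term per observation, picked out by the one-hot row of Z.
l_direct = sum(
    math.log(pi[row.index(1)] * norm_pdf(t[n], *params[row.index(1)]))
    for n, row in enumerate(z)
)
print(l_c, l_direct)
```

The two quantities agree, which is exactly why the complete-data log-likelihood is so much easier to maximise than the observed one: the log acts on a single product rather than on a sum.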
null
CC BY-SA 4.0
null
2023-03-10T23:55:26.097
2023-03-10T23:55:26.097
null
null
180862
null
609065
1
null
null
0
18
The crux of my question is as follows: Would a higher-order Markov model produce a different result than a first-order Markov model when used for channel attribution modelling? Once the transition matrix is constructed/estimated from the given data, removal effects are calculated to understand the importance of each channel in the data. A removal effect is the percentage decrease in conversions that would occur if a particular channel were removed. Basically, this means all the incoming and outgoing edges for a given channel are removed. Now, in the case of a k-th order Markov model, even though the transition matrix would be much larger than its first-order counterpart, the removal effects would still be calculated by eliminating a particular channel, say, A. This, however, means that every sequence of channels of length k that contains A would be removed. I think due to this the removal effects of first-order and higher-order Markov chains would be almost similar. And since removal effects are the ultimate result of Markov chain attribution, is it worth implementing a higher-order Markov model for attribution modelling? P.S. - My question is simply based on my intuition and has no numerical data to back it up. Apologies if it's too wordy, and thanks in advance!
Attribution modelling using First and Higher-Order Markov Chains
CC BY-SA 4.0
null
2023-03-10T23:59:18.373
2023-03-10T23:59:18.373
null
null
317621
[ "markov-process" ]
609066
1
null
null
0
29
I have two subjects: one was vaccinated, one was not. 19 different behaviours were recorded at each vaccination stage (5 stages in total). I want to know if the vaccination worked in reducing certain behaviours. I also want to know if time of day affected behaviour. I was told to use a GLM. I picked a Poisson regression model as I read this was best for count data. I have never used a GLM before, so I used this code: ``` Mod1 <- glm(Nodding ~ Monkey+Vacc_Stage, data = ObsData, family = poisson("log")) ``` I want to know if monkey, vaccination stage and time of day have an effect on the number of times the nodding behaviour is performed. Is this the correct code to use even though it doesn't include time of day, or would I have to run the model again but replace Vacc_Stage with Time_Of_Day? Would I also have to fit the GLM for each of the 19 behaviours? Another question is regarding the residuals. I was told the residuals had to roughly follow a normal distribution if my model fits, but at the moment it doesn't look like they do. Could this be due to having a small sample size or because the data contains lots of 0s? I tried a Tukey's transformation as suggested by someone but this didn't work either. Someone else suggested using a zero-inflated model but this just returns NA for the Std. Error, z value, Pr(>|z|). Using Date as opposed to Vacc_Stage also returns different residuals. I've uploaded the QQ plots that I got for the residuals. If any, would either of these be okay to use? [](https://i.stack.imgur.com/5fif5.png) [](https://i.stack.imgur.com/DxbXE.png) I've uploaded a pic of what the data roughly looks like. [](https://i.stack.imgur.com/JU0u5.png) Simple answers would be appreciated.
Assessing whether the given code for Poisson regression model is apt for the scenario and steps needed when residuals are not normally distributed
CC BY-SA 4.0
null
2023-03-11T00:06:04.387
2023-03-11T02:49:53.893
2023-03-11T02:49:53.893
362671
382919
[ "r", "generalized-linear-model", "residuals", "count-data", "poisson-regression" ]
609067
1
null
null
1
26
Let's say that medication A significantly increased the patients' health on average, but so did medication B (for independent sets of patients). Obviously both medications increased the health of the patients on average, but is there a statistical test that could test whether the increase of one of the medications was significantly higher than the increase of the other medication? For example, if medication A increased health by 7.3 (imagine a health score) and medication B increased health by 8.8. Both significant, but can we test whether 8.8 is a significantly higher increase than 7.3? My idea is to create a column of increases for each medication and then run a t-test on these two columns. Is that feasible? Are there other techniques that have a more sound foundation? I have found a statistical concept (and I tagged it) called "difference in differences"; is that relevant here?
Which test can I use to test a difference in increase?
CC BY-SA 4.0
null
2023-03-11T00:13:38.117
2023-03-11T03:10:19.607
null
null
339558
[ "hypothesis-testing", "statistical-significance", "difference-in-difference" ]
609068
1
null
null
1
12
I’m looking at a data set where 15 properties undertaking new management practices are sampled over three seasons. Measurements taken include variables such as vegetation cover and canopy cover. Quadrats have been randomly allocated across these properties, and the number of quadrats per site is related to property size. Every season the placement of the quadrats is randomly generated again, so there is a different arrangement of quadrats at each sampling time within each property. Could these measurements be treated as individual samples or replicates, or is this entering pseudo-replication territory?
Does this experimental design avoid pseudo-replication?
CC BY-SA 4.0
null
2023-03-11T00:19:51.307
2023-03-11T00:19:51.307
null
null
382920
[ "experiment-design", "pseudorepliction" ]
609069
2
null
609063
1
null
This is straightforward integration: Note that since $\mathbb{E}(X) = \alpha\beta$, the $\beta$ is a scale parameter. Thus $$ \begin{aligned} M_X(t) =& (1-\beta t)^{-\alpha}\\ \therefore\mathbb{E}\left(\frac{1}{X}\right) =& \int_0^\infty M_X(-t)dt = \int_0^\infty (1+\beta t)^{-\alpha}dt\\ =&\frac{(1+\beta t)^{-\alpha + 1}}{-\beta(\alpha-1)}\Bigg|_0^\infty = \frac{1}{\beta(\alpha-1)} \end{aligned} $$ (assuming $\alpha > 1$, so that the integral converges). Also note that the expectation could be computed directly: $$ \begin{aligned} \mathbb{E}\left(\frac{1}{X}\right) &= \int_0^\infty\frac{1}{x} \frac{1}{\Gamma(\alpha)\beta^\alpha}x^{\alpha-1}e^{-\frac{x}{\beta}}dx\\ &=\frac{1}{\Gamma(\alpha)\beta^\alpha}\int_0^\infty x^{\alpha-1-1}e^{-\frac{x}{\beta}}dx \\ &=\frac{\Gamma(\alpha - 1)\beta^{\alpha - 1}}{\Gamma(\alpha)\beta^\alpha} = \frac{\Gamma(\alpha - 1)\beta^{\alpha - 1}}{(\alpha-1)\Gamma(\alpha - 1)\beta^\alpha} = \frac{1}{\beta(\alpha-1)} \end{aligned} $$ Lastly, you can note that since $X\sim\Gamma(\alpha, 1/\beta)$ (shape/rate parameterisation) then $Y=1/X\sim\Gamma^{-1}(\alpha,1/\beta)$ and therefore $$\mathbb{E}\left(\frac{1}{X}\right) = \mathbb{E}(Y) = \frac{1}{\beta(\alpha-1)}$$ You can easily look at [wikipedia](https://en.wikipedia.org/wiki/Inverse-gamma_distribution) for the moments of the inverse gamma distribution.
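A quick Monte Carlo sanity check of the result (an illustrative Python sketch, not part of the original answer; `random.gammavariate` takes shape and scale, so its mean is $\alpha\beta$ exactly as in the question):

```python
import random

# Shape/scale parameterisation: E(X) = alpha * beta, matching the question.
alpha, beta = 5.0, 2.0
random.seed(42)

n = 200_000
est = sum(1.0 / random.gammavariate(alpha, beta) for _ in range(n)) / n

closed_form = 1.0 / (beta * (alpha - 1))  # = 0.125 for these parameters
print(est, closed_form)
```

The simulated mean of $1/X$ should land within a small tolerance of $1/(\beta(\alpha-1))$; remember the identity only holds for $\alpha > 1$, where $\mathbb{E}(1/X)$ is finite.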
null
CC BY-SA 4.0
null
2023-03-11T00:46:34.110
2023-03-11T00:52:44.220
2023-03-11T00:52:44.220
180862
180862
null
609070
2
null
275617
0
null
Let $X_1\sim \mathcal{Binom}(n_1, p_1), \quad X_2\sim\mathcal{Binom}(n_2, p_2)$. You observed $X_1=10, X_2=0$. You didn't tell us $n_1, n_2$. Assuming you know $n_1, n_2$, you have enough data for a binomial test of $H_0\colon p_1=p_2$. How to do the test are discussed in multiple posts on this site: - Exact two sample proportions binomial test in R (and some strange p-values) - Test if two binomial distributions are statistically different from each other - Finding a confidence interval for difference of proportions
null
CC BY-SA 4.0
null
2023-03-11T01:04:49.067
2023-03-11T01:04:49.067
null
null
11887
null
609071
1
null
null
2
25
I'm new to neural networks and my math is not that good. I am trying to do the calculations of an NN model by hand. I already know how to calculate the feedforward and backward passes one by one using the formulas, but when I try to calculate the backpropagation in matrix form, I am confused about how to write it, especially for dE/dV. I have an NN with this architecture: [](https://i.stack.imgur.com/SLFBo.png) The input, weights and biases are as follows: [](https://i.stack.imgur.com/gKihH.png) If the formula for dE/dV is given as below: [](https://i.stack.imgur.com/ugHc9.png) what I want to ask is: how do I apply this formula to get a dE/dV matrix like the one below? [](https://i.stack.imgur.com/8Uhwt.png) I'm confused because if I calculate one entry of dE/dV, for example dE/dV11, I can directly enter the wjk matrix and zj matrix, but for xi, I only have to enter x1 while the xi matrix consists of x1, x2, and x3.
How to express backpropagation dE/dV using matrix
CC BY-SA 4.0
null
2023-03-11T01:32:08.353
2023-03-11T16:23:09.707
null
null
382927
[ "neural-networks", "matrix", "backpropagation", "derivative" ]
609072
1
null
null
0
32
The Fisher information is given by $$J(\theta) = -E\left[\frac{d^{2}\log p(x | \theta)}{d\theta^{2}} \bigg|~\theta\right]$$ To consider the Fisher information for a binomial parameter: Let $p(x | \theta) =\binom{n}{x} \theta ^{x} (1 - \theta)^{n - x}$ where $\binom{n}{x}$ is the binomial coefficient. The log likelihood for the binomial distribution $p(x | \theta)$ is $C_{1} + x \log \theta + (n - x) \log (1 - \theta)$ where $C_{1}$ is a real constant. The second derivative of the log likelihood is $$\frac{d^{2}\log p(x | \theta)}{d\theta^{2}} = \frac{-x}{\theta^{2}} - \frac{(n-x)}{(1 - \theta)^{2}}.$$ How do I proceed to show that the Fisher information evaluates to $\frac{n}{\theta (1 - \theta)}$?
conditional expectation substitution in Fisher information
CC BY-SA 4.0
null
2023-03-11T02:09:35.137
2023-03-11T04:56:55.730
2023-03-11T04:56:55.730
180862
109101
[ "self-study", "conditional-expectation", "fisher-information" ]
609073
2
null
609072
1
null
The expected value of $x$ is $n\theta$, so $$E\left[-\frac{x}{\theta^2}-\frac{n-x}{(1-\theta)^2}\right]=-\frac{n\theta}{\theta^2}-\frac{n-n\theta}{(1-\theta)^2}$$ and from there it is straightforward algebra.
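For readers who want to confirm the algebra, here is a small numerical check (an illustrative Python sketch, not part of the original answer) that takes the expectation of the second-derivative expression directly over the binomial pmf and compares it with $n/(\theta(1-\theta))$:

```python
from math import comb

n, theta = 10, 0.3

# E[ x/theta^2 + (n - x)/(1 - theta)^2 ] under Binomial(n, theta)
expected = sum(
    comb(n, x) * theta**x * (1 - theta) ** (n - x)
    * (x / theta**2 + (n - x) / (1 - theta) ** 2)
    for x in range(n + 1)
)

fisher = n / (theta * (1 - theta))
print(expected, fisher)  # both ~47.62 for n = 10, theta = 0.3
```

Any other choice of `n` and `theta` in $(0,1)$ gives the same agreement, which is the "straightforward algebra" spelled out numerically.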
null
CC BY-SA 4.0
null
2023-03-11T02:15:42.783
2023-03-11T02:15:42.783
null
null
249135
null
609074
1
null
null
2
47
I created boxplots for 5 traits to show the spread of true age for each variant of each trait in both samples between 2 observers: [](https://i.stack.imgur.com/kMP8q.png) A professor suggested that I should use scatterplot of Price vs Kim with symbols representing certain age ranges. Would this be a better choice?
Boxplot or Scatterplot?
CC BY-SA 4.0
null
2023-03-11T02:59:18.383
2023-03-11T06:18:27.423
2023-03-11T06:18:27.423
1352
242142
[ "data-visualization", "descriptive-statistics", "scatterplot", "boxplot" ]
609076
2
null
609067
1
null
The [difference-in-difference](/questions/tagged/difference-in-difference) regression for which you have included a tag works by having four groups. - The group that will receive treatment A, before the treatment - The group that will receive treatment B, before the treatment - The group that received treatment A, after the treatment - The group that received treatment B, after the treatment The regression works by having an indicator variable for the treatment group and another indicator variable for before/after treatment. There is also an interaction between the two indicator variables, and that interaction term describes the difference between the reactions to treatment experienced by the two treatment groups. In other words, depending on how you set up your indicator variables, the coefficient on the interaction term will tell you exactly $\Delta A-\Delta B$, which seems to be what you want to quantify and test. The testing of this coefficient is done through the usual testing of regression coefficients.
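As a toy numeric illustration (hypothetical health scores, Python): in this 2x2 setup the interaction coefficient is numerically identical to the "difference of differences" of the four cell means, so you can see exactly what the regression will estimate.

```python
from statistics import mean

# Hypothetical health scores for the four cells of the design.
a_pre  = [50.0, 52.0, 51.0]
a_post = [57.0, 60.0, 59.0]   # treatment A: mean change of about +7.7
b_pre  = [49.0, 51.0, 50.0]
b_post = [58.0, 60.0, 59.0]   # treatment B: mean change of +9

# Delta A - Delta B: what the interaction coefficient recovers.
did = (mean(a_post) - mean(a_pre)) - (mean(b_post) - mean(b_pre))
print(did)
```

With these made-up numbers the estimate is about -1.33, i.e. treatment B's increase exceeded treatment A's by about 1.33 points; the regression formulation additionally gives you a standard error for testing it.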
null
CC BY-SA 4.0
null
2023-03-11T03:10:19.607
2023-03-11T03:10:19.607
null
null
247274
null
609078
1
null
null
1
38
I am struggling with the variance of an angle. For example, I have $$\theta= \operatorname{angle}\left(\frac{1}{N}\sum_{n=1}^N r_n e^{i\theta_n} \right),$$ where both $r_n$ and $\theta_n$ are independent random variables. Now how can I obtain the variance associated with $\theta$?
variance in the polar coordinate
CC BY-SA 4.0
null
2023-03-11T04:02:00.420
2023-03-11T04:07:31.913
2023-03-11T04:07:31.913
362671
382933
[ "variance" ]
609079
2
null
148280
0
null
I believe that the OOB stuff works differently on Random Forests and Gradient Boosting. With Random Forests, you can train a tree on a portion of the data (train data) and thus any predictions on the OOB data are clean-room predictions. So, the implementation would keep two separate arrays in order to implement OOB predictions. Each of these two arrays has the same length as the data passed in to the fit() method. The first array (y_hat_oob) would keep a sum of the OOB y_hats for each OOB data point, and the second array (oob_count) would keep a count of how many predictions (trees) contributed to each OOB prediction (for each data point). So, let's say you have 10 data points (0..9) and your first bag uses points (0..4) for training and (5..9) for OOB. You fit() a tree on points (0..4) and use that tree to predict() on points (5..9). You add the predictions to the y_hat_oob array for points (5..9) and you add 1 to the oob_count array in points (5..9). Now, let's say your bag 2 uses points (5..9) for training and thus points (0..4) for OOB. You do the same: store the OOB predictions in the y_hat_oob array points (0..4) and increment the corresponding counts. Now you have two trees in your model, but since you use only 0.5 of your data for OOB, your OOB contains predictions from only (number of trees) * (proportion of OOB data) trees (2 * 0.5 = 1). To get the actual y_hat_oob at the end of training, you'd need to divide y_hat_oob by oob_count -- this does the same as averaging all the predictions from all the tree models. Ok, so that's Random Forests, and this was the easy one. For Gradient Boosting, I believe there is a problem. Because each tree depends on the predictions from prior trees, you can't take the same approach as with Random Forests. What you CAN do is calculate the OOB "improvement" in "performance" due to the latest iteration/model. You'd calculate the error/performance on the iteration OOB data using the models of the prior iteration. Now, once the new iteration model is fit(), you can recalculate the error/performance on the same OOB data and calculate the difference. That will give you an estimate of the OOB "improvement" in performance due to the latest iteration. At least this is how I understand things. Hope that helps.
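The two-array bookkeeping for random forests described above can be sketched as follows (an illustrative Python sketch; the "tree" is a stand-in that just predicts its bootstrap sample's mean, since the accumulation logic, not the learner, is the point here):

```python
import random

random.seed(0)
y = [float(i) for i in range(10)]      # toy targets for 10 data points
n_trees, n = 25, len(y)

y_hat_oob = [0.0] * n                  # running sum of OOB predictions per point
oob_count = [0] * n                    # how many trees voted on each point

for _ in range(n_trees):
    train = set(random.choices(range(n), k=n))        # bootstrap sample indices
    oob = [i for i in range(n) if i not in train]     # points this "tree" never saw
    model_pred = sum(y[i] for i in train) / len(train)  # stand-in "tree" prediction
    for i in oob:
        y_hat_oob[i] += model_pred
        oob_count[i] += 1

# Final OOB prediction: average over only the trees that did NOT see the point.
oob_pred = [s / c if c else None for s, c in zip(y_hat_oob, oob_count)]
print(oob_pred)
```

Real implementations (e.g. scikit-learn's `oob_prediction_`) do essentially this division of an accumulated sum by a vote count, just with actual fitted trees in place of the stand-in.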
null
CC BY-SA 4.0
null
2023-03-11T04:12:25.610
2023-03-11T04:12:25.610
null
null
382931
null
609080
2
null
609009
1
null
If you truly want to test the proportions against 0.25, that is possible; you should do ``` # as you have... emmeans = emmeans(a,~ `Age class` | Sex, mode = "prob") test(emmeans, null = 0.25) ``` That said, I think I like your contrast comparisons better, because (a) that is a more conventional approach, and (b) when you test females against 0.25 and males against 0.25, you have two tests instead of one; what will you say if the conclusions are different?
null
CC BY-SA 4.0
null
2023-03-11T04:54:41.917
2023-03-11T04:54:41.917
null
null
52554
null
609081
2
null
469397
0
null
This question is unclear. First, "duplicated" in what sense? If data for the same unit is accidentally repeated in the file, you should de-duplicate, but it might as well be two units with the same values. Two or more six-year-old girls with the same response value is not impossible ... If you do not know which case it is, it is up to you to investigate and find out. Assuming the last case, this will not create any problems. The $X$ matrix itself is not inverted, so that is not an issue. You can group the identical rows if you want, but it is not necessary. > R still calculates this regression - is it just throwing out duplicated cases? R (and I hope other software) will never throw out duplicated rows. > And another question regarding the grouping process itself. If I use grouped data, I just have to throw out double rows and record the frequency and use that as weights? With one row per unit, you have Bernoulli data: the response is 0 or 1. With grouped rows, it is Binomial data and the response is now the total number of successes; you must also have the total number of units grouped together as another variable. Let these variables be `x, n`. One way of specifying the response with the R formula language is then ``` cbind(x, n-x) ~ ... ```
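The reason grouping is harmless can be seen from the likelihood: the grouped Binomial log-likelihood differs from the sum of the individual Bernoulli log-likelihoods only by the constant $\log\binom{n}{x}$, which does not depend on the model parameters, so the fitted coefficients are identical either way. A quick check (an illustrative Python sketch):

```python
from math import comb, log

n, x, p = 20, 7, 0.3   # 20 identical units grouped, 7 successes, candidate p

# Sum of the 20 individual Bernoulli log-likelihood contributions.
bernoulli_ll = x * log(p) + (n - x) * log(1 - p)

# Single grouped Binomial log-likelihood for the same data.
binomial_ll = log(comb(n, x)) + x * log(p) + (n - x) * log(1 - p)

# The difference is a constant in p, so both are maximised at the same p-hat.
print(binomial_ll - bernoulli_ll, log(comb(n, x)))
```

Since the difference is constant in `p`, maximising one log-likelihood maximises the other, which is why `glm()` gives the same estimates for ungrouped Bernoulli rows and grouped `cbind(x, n-x)` responses.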
null
CC BY-SA 4.0
null
2023-03-11T05:02:00.220
2023-03-11T05:02:00.220
null
null
11887
null
609082
1
609095
null
0
88
My question is related to the autocorrelation present in the mean model (which is an ARMA process), which will be used in a GARCH model. Is it ok to have autocorrelation in the residuals of the mean model, given that it will be used in a GARCH model, which via robust standard errors takes care of the residuals in the variance model? I have edited my question a bit and attached the ACF and PACF plots of the residuals of the ARMA process with AR=11, MA=6, and zero mean. Via an algorithm, this was the model with the lowest AIC value. Using `checkresiduals` in R, the Ljung-Box test figures are: data: Residuals from ARIMA(11,0,6) with zero mean Q* = 23.228, df = 3, p-value = 3.619e-05 Model df: 17. Total lags used: 20 Based on this, there is autocorrelation present. Based on these plots, can I consider the model a statistically adequate descriptive model and ignore the Ljung-Box result?[](https://i.stack.imgur.com/XcBKu.jpg) EDIT 2: Based on discussions and advice, I am attaching the screenshot of the output of the model prescribed by auto.arima. It gives an optimum model of (4,0,4). But when I run the model in the ARIMA function, it shows a convergence problem. The screenshot is given below: [](https://i.stack.imgur.com/y7Eci.png) The ACF and PACF plots for (4,0,4) are: [](https://i.stack.imgur.com/3akvV.png) How do we take care of the convergence issue? Is it ok to accept the model? The original log return series is: [](https://i.stack.imgur.com/6Ob4q.png) EDIT 3: The ACF & PACF of the return series are given below: [](https://i.stack.imgur.com/4Pzb2.png)
Autocorrelation in residuals of mean model to be used in a GARCH model
CC BY-SA 4.0
null
2023-03-11T05:21:09.733
2023-03-14T09:35:58.837
2023-03-14T09:35:58.837
369873
369873
[ "arima", "autocorrelation", "garch" ]
609084
2
null
609074
1
null
A scatterplot would be most useful for assessing how well the two observers agree (and would draw attention to this comparison). It's hard to say which one would be more useful in your case. Try looking at scatterplots and seeing whether they convey what you need. You would need to replace each one of your five panels by four scatterplots (please keep the scaling constant within each variant). If you have a gold standard (true age), indicate it within each scatterplot. If you decide to stick with this visualization, I would consider removing the boxplots, since you do not really have a lot of data (again, try it and see what works). However, reduce the horizontal spread between points within a variant and observer, and increase the spread between adjacent variants. And again, if you have a "true" value, indicate it in the plots.
null
CC BY-SA 4.0
null
2023-03-11T06:18:16.793
2023-03-11T06:18:16.793
null
null
1352
null
609085
1
609087
null
1
44
My model generates numbers, say x= 0.2719094. In the output, this is supposed to be an integer (that follows a Poisson distribution, which is accounted for in the generation): mostly 0s, some 1s and a few 2+. My idea is to round this x= 0.2719094 to one of the 2 closest integers, with weights given by the distance to each integer. These numbers are in a column and I would like to generate a column output of integers. TLDR example: how to generate integers from x= 0.2: would be 80% to become 0 and 20% to become 1 x= 1.2: 80% to become 1 and 20% to become 2 I am not sure how to answer this question. A solution preferably in `R` (or `python`/`excel`) would be nice.
generating integers from a list of real numbers that came out of distribution
CC BY-SA 4.0
null
2023-03-11T06:27:55.423
2023-03-11T13:52:42.187
2023-03-11T06:40:49.307
56940
382935
[ "r", "probability" ]
609086
1
null
null
0
25
I have some confusion around what the null hypothesis should be for an A/B test. When we decide to do an A/B test, we proceed to perform a sample size calculation. Let's say we work with the following assumptions: a conversion baseline of 2% and a relative MDE of 50% (80% power, 5% alpha, 1-sided test), and use something like the Evan Miller sample size calculator to find that the minimum sample size per variant is 3292. Is the null hypothesis in this case: - The difference between our treatment and control is less than 50% - The difference between our treatment and control is equal to 0% The reason I am asking is that I have seen multiple A/B tests conducted at our workplace where a test is considered a "success" when a statistically significant result is found, even if the relative lift is lower than the MDE used to calculate the experiment sample size, whereas I was under the impression that a test should be considered as rejecting the null hypothesis if and only if the lift is at least higher than the MDE and is statistically significant. Is this correct?
What's the null hypothesis of A/B test following a sample size calculation?
CC BY-SA 4.0
null
2023-03-11T06:32:53.717
2023-03-11T06:32:53.717
null
null
215688
[ "hypothesis-testing", "experiment-design" ]
609087
2
null
609085
2
null
It would be nice if you provided some motivation behind this question since, as such, it seems an implementation issue. However, the question can be solved using the `sample` function of `R`. Note that the value's weight should equal one minus its distance from $x$, so for > TLDR example: how to generate integer from x= 0.2: would be 80% to become 0 and 20% to become 1 run ``` sample(0:1, 1, prob = c(0.8, 0.2)) ``` For > TLDR example: how to generate integer from x= 1.2: would be 80% to become 1 and 20% to become 2 you can use ``` sample(1:2, 1, prob = c(0.8, 0.2)) ``` As per your request, here is a simple function that generates the random integer for a given real $x>0$. ``` # x is a number in decimal form, i.e. x = n.d sep_num <- function(x) { n = floor(x) d = x - n return(c(integer = n, decimal = d)) } sep_num(3.142) convert_to_int <- function(x) { oo = sep_num(x) prob = oo[2] # fractional part: probability of rounding up rr = c(oo[1], oo[1]+1) sample(x = rr, size = 1, prob = c(1 - prob, prob)) } > convert_to_int(3.142) integer 3 ``` Note that the function is not vectorized, thus if you have a vector or list of values you have to apply a loop, e.g. a `for` loop, `lapply`, `sapply`, etc.
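For completeness, the same stochastic-rounding idea in Python (a hedged sketch, not part of the original answer): rounding up with probability equal to the fractional part makes the rounded values unbiased for $x$ on average.

```python
import math
import random

def stochastic_round(x, rng=random):
    """Round x down with probability 1 - frac, up with probability frac."""
    n = math.floor(x)
    frac = x - n
    return n + (1 if rng.random() < frac else 0)

random.seed(1)
x = 0.2719094
draws = [stochastic_round(x) for _ in range(100_000)]
est = sum(draws) / len(draws)
print(est)  # the mean of the rounded draws stays close to x
```

Applied to a whole column, this preserves the column mean while forcing every entry to be an integer, which matches what the question asks for.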
null
CC BY-SA 4.0
null
2023-03-11T06:38:40.180
2023-03-11T13:52:42.187
2023-03-11T13:52:42.187
56940
56940
null
609088
1
null
null
0
18
I applied exploratory factor analysis with network analysis to data from healthy and diseased patients. The analysis shows different clusters of parameters; some are similar in both groups, and some cluster differently. For instance, there are 4 clusters in the healthy group but 5 in the disease group. What is the actual meaning of this? Also, the parameters Ile, Val and Leu are similarly clustered in both groups, whereas the parameters Pro and Ala are not (they end up in two different clusters): [](https://i.stack.imgur.com/vCSYR.png) How shall I interpret these data? For instance, in the Pro/Ala case, I expected to see some differences between the groups, but they looked pretty much the same to me. Is the difference about correlation? The scatterplot of the data shows slightly different regression models, but nothing compelling. [](https://i.stack.imgur.com/om5Zf.png) Is it about the values themselves? But again, there is no real difference in the value distribution between the two groups (group 1 [Health] is slightly higher than group 2 [Disease] in both cases). [](https://i.stack.imgur.com/4sGrD.png) So, what is the actual outcome/interpretation of the network analysis? Thank you
How to interpret Exploratory Factor Analysis (Network analysis)?
CC BY-SA 4.0
null
2023-03-11T06:47:03.713
2023-03-11T06:47:03.713
null
null
95357
[ "inference", "exploratory-data-analysis", "networks" ]
609089
1
null
null
0
15
Given the following distribution $$ Y \mid X=x \sim \ LogNormal(\beta_0 + \beta_1ln(x), \sigma^2),$$ what is $\hat{\mathbb{E}}[Y \mid X=x]$ (the estimated expected value)? My attempt: My thinking was that since $ Y \mid X=x \sim \ LogNormal$, then $ln(Y)=\beta_0+\beta_1ln(X)+\epsilon$, thus $$\hat{\mathbb{E}}[Y \mid X=x]=\hat{\mathbb{E}}[e^{\beta_0+\beta_1ln(x)+\epsilon}\mid X=x]=e^{\hat{\beta_0}+\hat{\beta_1}ln(x)}\hat{\mathbb{E}}[e^{\epsilon}]=e^{\hat{\beta_0}+\hat{\beta_1}ln(x)}e^{\frac{\sigma^2}{2}}$$ However, I don't know whether my original thinking is okay, and also whether you are allowed to do what I did in the second equality?
Predicted value for log-log
CC BY-SA 4.0
null
2023-03-11T08:44:14.373
2023-03-11T08:54:09.630
2023-03-11T08:54:09.630
99674
99674
[ "regression", "predictive-models" ]
609090
2
null
608972
0
null
You cannot just combine the $p$ values without further ado, as your effects for a given subject are correlated: For example, if some subject did something (besides running) during your observation period that affected their muscle 1, there is a decent chance that it also affected their muscle 2. The same goes for predispositions affecting muscle growth. If you ignored this, you would be committing [pseudoreplication](https://en.wikipedia.org/wiki/Pseudoreplication). If you have a control group, the straightforward way to address this would be to compute your combined effect for all subjects in your control and treatment groups and treat it like a regular observable (such as the single effects). Then compare the combined effects with an appropriate test. You wouldn’t use the $p$ values for the individual muscles. (Mind that you can easily fall into the trap of $p$ hacking here by changing your weights to minimise your $p$ value for the combined effect.) The alternative would be to somehow establish your correlation structure and build a null model based on this, but presumably your data or prior knowledge doesn’t allow for this.
null
CC BY-SA 4.0
null
2023-03-11T09:14:01.220
2023-03-11T09:14:01.220
null
null
36423
null
609091
1
609104
null
0
89
I have begun studying survival analysis and am using R packages `survival` and `survminer`. Verbal descriptions of statistical concepts can be “sloppy” and I’m trying to understand these concepts using crystal-clear language. I’ve worked through a Cox Proportional Hazards model example using [http://www.sthda.com/english/wiki/cox-proportional-hazards-model](http://www.sthda.com/english/wiki/cox-proportional-hazards-model) using univariate Cox regression and the lung cancer data provided in the “lung” dataset of the survival package. I am trying to interpret the `coxph()` output as illustrated below. Are my interpretations shown below, correct? And as asked below in brackets, how does one tell which variable is used as the baseline and which is the comparison variable? How can you tell from the below `coxph()` output that it is the female sex variable with the lower hazard rate? [](https://i.stack.imgur.com/2f7L1.png)
How to interpret Cox Proportional Hazards model output when running survival analysis in R?
CC BY-SA 4.0
null
2023-03-11T09:22:42.187
2023-03-11T13:22:25.853
null
null
378347
[ "r", "survival", "cox-model" ]
609092
1
null
null
0
11
$p(\theta | x)$ is the unnormalised posterior distribution of interest. Let's suppose the likelihood function for this posterior $p(\theta | x)$ is $L(\theta | x) = N(x \mid \theta, \sigma^{2}) \propto \exp[\frac{-1}{2}\frac{(\theta - x)^{2}}{\sigma^{2}}]$. A choice for the conjugate prior is a prior with a similar form to the likelihood function $L(\theta | x)$: Let the prior $\pi(\theta)$ be of the form $N(\theta| \mu_{0}, \sigma_{0}^{2}) \propto \exp[\frac{-1}{2}\frac{(\theta - \mu_{0})^{2}}{\sigma_{0}^{2}}]$. Then $p(\theta | x) \propto L(\theta | x)\,\pi(\theta) = \exp[\frac{-1}{2}(\frac{(\theta - x)^{2}}{\sigma^{2}} + \frac{(\theta - \mu_{0})^{2}}{\sigma_{0}^{2}})]$, which can be made explicit by defining - $\frac{1}{\sigma_{1}^{2}} = \frac{1}{\sigma^{2}} + \frac{1}{\sigma^{2}_{0}}$ - $\frac{\mu_{1}}{\sigma^{2}_{1}} = \frac{x}{\sigma^{2}} + \frac{\mu_{0}}{\sigma^{2}_{0}}$ to give $p(\theta | x) = N(\theta| \mu_{1}, \sigma^{2}_{1}) \propto \exp[\frac{-1}{2}(\frac{\theta - \mu_{1}}{\sigma_{1}})^{2}]$. The precision relation $\frac{1}{\sigma_{1}^{2}} = \frac{1}{\sigma^{2}} + \frac{1}{\sigma^{2}_{0}}$ informs me how much the posterior variance is dominated by the likelihood's variance (data variance) and how much by the prior's variance. What does $\mu_{1} = \sigma_{1}^{2} [\frac{x}{\sigma^{2}} + \frac{\mu_{0}}{\sigma^{2}_{0}}]$ inform me about?
intepretating mean of posterior with dependency on likelihood and prior variance
CC BY-SA 4.0
null
2023-03-11T09:38:06.093
2023-03-11T09:38:06.093
null
null
109101
[ "bayesian", "posterior", "precision" ]
609093
1
null
null
2
64
In a reflective measurement model, the number of degrees of freedom is calculated as the number of pieces of information (covariances / variances) minus the number of parameters to be estimated (factor loadings, variance of the latent factor, error variances of the manifest variables). See Eoin's example below. Here I have 7 parameters to estimate (3 factor loadings, 3 error variances, variance of the latent variable) but only 6 pieces of information (3 covariances, 3 variances). Therefore, I fix a factor loading to 1 and my model is just identified: 6 - 6 = 0 degrees of freedom. Now consider my code and the formative model on the right. Lavaan reports -3 degrees of freedom. ``` modUU <- ' # Measurement model AB <~ AB1 + AB2 + AB3' fit1 <- cfa(modUU, df) summary(fit1) ``` How do these -3 degrees of freedom come about? What information do I have? Which parameters do I estimate? Apparently the 6 covariances are not used in this case, because I am estimating at most 4 values, right? So the model should actually be overidentified in this case. I would appreciate some feedback. [](https://i.stack.imgur.com/rn5rg.png)
How do you calculate degrees of freedom in a formative measurement model?
CC BY-SA 4.0
null
2023-03-11T10:00:22.833
2023-03-11T21:02:39.010
null
null
380073
[ "structural-equation-modeling", "lavaan" ]
609094
1
609840
null
0
164
I would like to use slope values in PCA. The problem I face is that the slopes I calculate per group could be within different ranges of values. We know that it is important to normalize your data before PCA discussed here: [Why do we need to normalize data before principal component analysis (PCA)?](https://stats.stackexchange.com/questions/69157/why-do-we-need-to-normalize-data-before-principal-component-analysis-pca). So my groups could have values within a range of 0 to 1 or for example 1-10 (these are examples). So let's assume I have 5 ID's with each 4 features. Each ID has 10 observations per feature, but these could be within different ranges. Here I created some reproducible data: ``` set.seed(7) df = data.frame(ID = rep(LETTERS[1:5], each = 10), time = c(1:10), V1 = c(runif(10, 0, 1), sample(5:20, 10, replace=T), runif(10, 0.2, 0.5), sample(1:100, 10, replace=T), sample(1:20, 10, replace=T)), V2 = c(runif(10, 0, 0.2), runif(10, 0, 0.3), sample(1:10, 10, replace=T), runif(10, 0.2, 0.3), runif(10, 0.7, 0.9)), V3 = c(runif(10, 0, 0.4), sample(1:10, 10, replace=T), runif(10, 0.2, 0.3), sample(1:10, 10, replace=T), runif(10, 0.5, 0.8)), V4 = c(runif(10, 0, 0.1), sample(1:5, 10, replace=T), runif(10, 0.2, 0.9), sample(1:20, 10, replace=T), runif(10, 0.5, 1))) ``` Now we calculate the slopes for each ID for each feature like this: ``` library(tidyverse) library(broom) library(factoextra) slopes = df %>% pivot_longer(cols = V1:V4) %>% group_by(ID, name) %>% nest() %>% mutate(modelout = map(data, ~lm(value ~ time, data = .x) %>% tidy %>% filter(term == "time") %>% select(slope = estimate))) %>% unnest() %>% summarise(slope = unique(slope)) %>% pivot_wider(names_from = name, values_from = slope) %>% column_to_rownames('ID') slopes #> V1 V2 V3 V4 #> A -0.004548211 0.0023702846 -0.005529712 0.002792688 #> B 0.660606061 0.0013715805 -0.200000000 -0.278787879 #> C -0.003737507 0.2666666667 0.001533572 -0.026828235 #> D 6.084848485 -0.0003168725 0.236363636 -0.757575758 
#> E -0.054545455 0.0005361159 0.003567638 0.012812879 ``` Let's check PCA results: ``` res_pca = prcomp(slopes) res_pca #> Standard deviations (1, .., p=4): #> [1] 2.692459114 0.139136216 0.104503624 0.001634561 #> #> Rotation (n x k) = (4 x 4): #> PC1 PC2 PC3 PC4 #> V1 0.99196373 0.04004058 0.02126357 -0.11812097 #> V2 -0.01267253 0.50183843 -0.86005381 -0.09113202 #> V3 0.04364794 0.67949821 0.32636990 0.65563689 #> V4 -0.11807716 0.53370134 0.39158396 -0.74019096 fviz_eig(res_pca) ``` ![](https://i.imgur.com/XeAaMW1.png) Created on 2023-03-11 with [reprex v2.0.2](https://reprex.tidyverse.org) As we can see the first component has a big contribution probably due to the scale problem. I also found this [PCA with variables in different Likert scales](https://stats.stackexchange.com/questions/224813/pca-with-variables-in-different-likert-scales), but I think the difference here is that for mine the scales could also differ within each feature. So I don't know how you should `scale` these kinds of values within ranges for PCA. Could anyone please explain how to normalize this kind of problem? --- Edit: clarification I think the problem I may face is that I have multiple ID's per variable V1, V2 and each ID has a different range 0-1 and 5-20 (these are examples that could be anything). I would like to calculate the slope for each ID of each variable, but when I calculate it with these values, the IDs with a wider range will have more contribution in the PCA. 
Let's see the distribution of V1: ``` set.seed(7) df = data.frame(ID = rep(LETTERS[1:5], each = 10), time = c(1:10), V1 = c(runif(10, 0, 1), sample(5:20, 10, replace=T), runif(10, 0.2, 0.5), sample(1:100, 10, replace=T), sample(1:20, 10, replace=T)), V2 = c(runif(10, 0, 0.2), runif(10, 0, 0.3), sample(1:10, 10, replace=T), runif(10, 0.2, 0.3), runif(10, 0.7, 0.9))) library(tidyverse) df %>% ggplot(aes(x = V1, fill = ID)) + geom_histogram() + labs(title = 'Distribution of V1') ``` ![](https://i.imgur.com/hpwGDpE.png) As you can see, for example, ID D has a bigger range. This will mean that it has a higher slope which will result in a different contribution, but each slope should have an equal contribution within each variable. But they could have different contributions across variables. I think my question is how to normalize these slopes within each variable? To have an equal contribution of each slope within each variable, otherwise, the PCA will be misleading I think right?
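For what it's worth, the standard fix for scale dominance across variables is column-wise standardization of the slope matrix before PCA (in R, `prcomp(slopes, scale. = TRUE)`). A minimal sketch of the same idea in Python/NumPy, with made-up slope values on deliberately different scales:

```python
import numpy as np

# Column-wise z-scoring: each slope column gets mean 0 and unit variance,
# so no single variable dominates the PCA through its scale alone.
# The slope matrix below is fabricated for illustration (5 IDs x 4 variables).
rng = np.random.default_rng(7)
slopes = rng.normal(size=(5, 4)) * np.array([10.0, 0.1, 1.0, 100.0])

z = (slopes - slopes.mean(axis=0)) / slopes.std(axis=0, ddof=1)

# PCA on the standardized matrix via SVD (equivalent to prcomp on scaled data)
u, s, vt = np.linalg.svd(z, full_matrices=False)
explained_var = s**2 / (z.shape[0] - 1)
print(explained_var)
```

Note this equalizes contributions *across variables*; whether it also addresses the within-variable range differences between IDs is exactly the open part of the question.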
How to use slopes in PCA?
CC BY-SA 4.0
null
2023-03-11T10:03:54.063
2023-03-17T20:01:33.827
2023-03-17T18:57:47.163
323003
323003
[ "r", "pca", "normalization", "dimensionality-reduction", "scales" ]
609095
2
null
609082
1
null
If your goal is to build a statistically adequate descriptive model of the conditional distribution of a time series of interest, then having autocorrelated residuals is a problem. An ARIMA-GARCH model assumes i.i.d. standardized innovations, so autocorrelated residuals, and by implication non-i.i.d. standardized residuals, would be an indication that the model's assumptions are violated. You could thus reject a null hypothesis that the model has generated the data. If you want a decent model for prediction, you face a bias-variance trade-off. It is quite possible that a simpler model with mildly autocorrelated residuals will outperform a more complex model with i.i.d. standardized residuals out of sample. Update: From the edit of the original post, we get to know the time series of interest is oil price. A decent approximation might be ARIMA(0,1,0). If that yields highly autocorrelated residuals, try `auto.arima` for a potentially different model. ARIMA(11,0,6) makes little sense to me; I suspect this is a highly overfitted model. But if we were to examine its residuals out of curiosity, I would say the ACF and PACF plots look fairly innocuous (except that the two plots seem to be identical; that might be a mistake). Whether you should trust the Ljung-Box test over eyeballing the plots is a contentious question. You probably should not; see [Testing for autocorrelation: Ljung-Box versus Breusch-Godfrey](https://stats.stackexchange.com/questions/148004) for a criticism of the use of the Ljung-Box test on residuals from an ARIMA model.
null
CC BY-SA 4.0
null
2023-03-11T10:08:08.503
2023-03-13T09:21:52.293
2023-03-13T09:21:52.293
53690
53690
null
609097
1
null
null
1
52
Question: "Is there a difference in hearing quality between the left and right ears?" I'm testing for a difference in an interval variable (1-12 scale, hearing quality in each ear) between two paired samples (left ear/right ear). Both samples have a highly skewed distribution (suggesting Wilcoxon). However, a histogram of the paired differences shows a normal distribution (suggesting a paired t-test). There is no specific alternative hypothesis; I've been tasked with applying whichever test I deem more suitable. Both seem to fit, but is there generally a preference in this situation? Thanks for any help given, John
Use Paired T-test or Wilcoxon for this data set
CC BY-SA 4.0
null
2023-03-11T10:24:19.263
2023-03-12T10:57:54.060
2023-03-12T10:57:54.060
382950
382950
[ "hypothesis-testing", "self-study", "mathematical-statistics", "statistical-significance" ]
609098
1
null
null
0
29
I'm running a weighted regression model, but I don't know how to deal with one of the variables I need to include. My dependent variable has values on a scale of thousands, while my independent variables are on scales of tens and hundreds or are categorical. I usually run the regression with the log of the dependent variable (this way I can interpret an estimated coefficient as a % increase). Here is an example: [](https://i.stack.imgur.com/r0Kfn.jpg) How should I handle a regressor with a scale of millions? For example, I include in my regression the variable `occ_tot`, expressed in millions. This is what happens: [](https://i.stack.imgur.com/VpxZ7.jpg) How should I interpret these coefficients? Is there a good way to include an independent variable with a larger scale than the dependent one? I'm new to this kind of thing...
regression coefficients: changing the scale of the independent variables?
CC BY-SA 4.0
null
2023-03-11T10:42:52.257
2023-03-11T10:42:52.257
null
null
382951
[ "regression", "linear-model", "interpretation", "regression-coefficients", "scales" ]
609100
2
null
466801
1
null
You might use an Elo rating, but whether it is a good model depends on your game. An Elo rating relates to an underlying latent variable, similar to a [probit model](https://en.m.wikipedia.org/wiki/Probit_model): each player has a performance score distributed according to a normal distribution (or a logistic distribution when we use the approximate logistic regression model), a game is modelled as each player drawing a performance for that particular game from their individual distribution, and the player with the highest performance wins. This approach becomes problematic when performance is not independent of the opponent, for instance when there is some asymmetric rock-paper-scissors effect. In chess it works reasonably well; there are chess players with different styles, but it works out reasonably because there is not too much variation in the game. In other games, for instance trading card games, players may have strategies with large variations that work out very differently depending on the opponent. If an Elo system makes sense for your game, then you can apply it also to games with multiple players. You could apply an Elo rating updating scheme. But you could also solve the model for all games at once. In the case of chess this would be a binomial regression model; in your case with multiple players it becomes a multinomial regression model. Potentially you could add additional variables for - cases when specific players encounter each other (to capture a rock-paper-scissors effect); with few players and many games you can include parameters for individual interactions, while for many players with few games you could try to define categories of playing styles. Some sort of PCA could be interesting here. Based on the matrix of wins against other players you can consider the principal components of that matrix as a model for the win probability. - other variables that may influence the game.
Possibly some players are better at games with many players and other players are better at games with few players.
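For concreteness, the two-player Elo updating scheme mentioned above can be sketched as follows (Python; the K-factor of 32 and the 400-point scaling are common conventions, not values prescribed by your game, and extending this to multi-player games needs the multinomial formulation):

```python
# Minimal two-player Elo update under the logistic latent-performance model.
def expected_score(r_a, r_b):
    """Probability that player A beats player B under the logistic Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a, r_b, score_a, k=32):
    """Return updated ratings after one game; score_a is 1 (A wins), 0.5, or 0."""
    e_a = expected_score(r_a, r_b)
    return r_a + k * (score_a - e_a), r_b + k * ((1 - score_a) - (1 - e_a))

# Equal ratings: a win moves the winner up by K/2 = 16 points.
print(elo_update(1500, 1500, 1.0))  # → (1516.0, 1484.0)
```

The total rating is conserved by each update, which is one reason the incremental scheme and the fit-all-games-at-once regression tend to agree when ratings are stable.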
null
CC BY-SA 4.0
null
2023-03-11T12:12:41.400
2023-03-11T12:12:41.400
null
null
164061
null
609101
1
null
null
0
49
I'm fairly new to statistical analysis, but was told to use a multilevel model for my research purpose. I am interested in investigating whether attitudes on the EU integration are different for left and right wing individuals in countries with stronger and weaker welfare state. For this I would like to use an interaction term. ``` model04 <- lmer(eu_idt_std ~ 1 + leftright_std*redistribution_std + (1 + leftright_std | cntry), data = analysis, REML=F, na.action=na.exclude) ``` I was also wondering whether it might be smarter to specify the model like this: ``` model04 <- lmer(eu_idt_std ~ 1 + leftright_std*redistribution_std + (1 | cntry) + (0 + leftright_std | cntry), data = analysis, REML=F, na.action=na.exclude) ``` The actual model includes more control variables and here is some of my data. ``` structure(list(cntry = c("BG", "BG", "BG", "BG", "BG", "BG"), regunit = structure(c(3, 3, 3, 3, 3, 3), label = "Regional unit", format.stata = "%3.0g", labels = c(`NUTS level 1` = 1, `NUTS level 2` = 2, `NUTS level 3` = 3, `Regional unit not part of the NUTS nomenclature` = 4 ), class = c("haven_labelled", "vctrs_vctr", "double")), region = c("BG422", "BG411", "BG411", "BG342", "BG422", "BG314" ), lrscale = structure(c(8, 4, 8, 7, 5, NA), label = "Placement on left right scale", format.stata = "%4.0g", labels = c(Left = 0, `1` = 1, `2` = 2, `3` = 3, `4` = 4, `5` = 5, `6` = 6, `7` = 7, `8` = 8, `9` = 9, Right = 10, Refusal = NA, `Don't know` = NA, `No answer` = NA), class = c("haven_labelled", "vctrs_vctr", "double")), gincdif = c(3, 2, 3, 2, 4, 4), euftf = structure(c(10, 8, 10, 9, 4, 5), label = "European Union: European unification go further or gone too far", format.stata = "%4.0g", labels = c(`Unification already gone too far` = 0, `1` = 1, `2` = 2, `3` = 3, `4` = 4, `5` = 5, `6` = 6, `7` = 7, `8` = 8, `9` = 9, `Unification go further` = 10, Refusal = NA, `Don't know` = NA, `No answer` = NA), class = c("haven_labelled", "vctrs_vctr", "double")), atchctr = 
structure(c(10, 8, 8, 10, 10, 5), label = "How emotionally attached to [country]", format.stata = "%4.0g", labels = c(`Not at all emotionally attached` = 0, `1` = 1, `2` = 2, `3` = 3, `4` = 4, `5` = 5, `6` = 6, `7` = 7, `8` = 8, `9` = 9, `Very emotionally attached` = 10, Refusal = NA, `Don't know` = NA, `No answer` = NA), class = c("haven_labelled", "vctrs_vctr", "double")), atcherp = structure(c(10, 8, 7, 3, 10, 3), label = "How emotionally attached to Europe", format.stata = "%4.0g", labels = c(`Not at all emotionally attached` = 0, `1` = 1, `2` = 2, `3` = 3, `4` = 4, `5` = 5, `6` = 6, `7` = 7, `8` = 8, `9` = 9, `Very emotionally attached` = 10, Refusal = NA, `Don't know` = NA, `No answer` = NA), class = c("haven_labelled", "vctrs_vctr", "double")), gndr = structure(c(2, 1, 1, 1, 2, 1), label = "Gender", format.stata = "%3.0g", labels = c(Male = 1, Female = 2, `No answer` = NA), class = c("haven_labelled", "vctrs_vctr", "double")), agea = structure(c(56, 55, 25, 58, 67, 77), label = "Age of respondent, calculated", format.stata = "%5.0g", labels = c(`Not available` = NA_real_), class = c("haven_labelled", "vctrs_vctr", "double")), eduyrs = structure(c(18, 16, 17, 12, 12, 11), label = "Years of full-time education completed", format.stata = "%4.0g", labels = c(Refusal = NA_real_, `Don't know` = NA_real_, `No answer` = NA_real_), class = c("haven_labelled", "vctrs_vctr", "double")), female = c(1, 0, 0, 0, 1, 0), leftright = c(8, 4, 8, 7, 5, NA), euint_opn = c(10, 8, 10, 9, 4, 5), cntry_idt = c(10, 8, 8, 10, 10, 5), eu_idt = c(10, 8, 7, 3, 10, 3), yrs_edu = c(18, 16, 17, 12, 12, 11), age = c(56, 55, 25, 58, 67, 77), gdp_pc = c(25169.908002, 25169.908002, 25169.908002, 25169.908002, 25169.908002, 25169.908002 ), Country = c("Bulgaria", "Bulgaria", "Bulgaria", "Bulgaria", "Bulgaria", "Bulgaria"), year = c(2019, 2019, 2019, 2019, 2019, 2019), market_gini = c(0.523, 0.523, 0.523, 0.523, 0.523, 0.523), post_gini = c(0.402, 0.402, 0.402, 0.402, 0.402, 0.402), 
redistribution = c(0.121, 0.121, 0.121, 0.121, 0.121, 0.121), gincdif_cen = c(0.0382937934339802, -0.96170620656602, 0.0382937934339802, -0.96170620656602, 1.03829379343398, 1.03829379343398), leftright_cen = c(2.6442974848222, -1.3557025151778, 2.6442974848222, 1.6442974848222, -0.355702515177797, NA), euint_opn_cen = c(4.46396603754284, 2.46396603754284, 4.46396603754284, 3.46396603754284, -1.53603396245716, -0.536033962457164), cntry_idt_cen = c(1.76904445288754, -0.230955547112462, -0.230955547112462, 1.76904445288754, 1.76904445288754, -3.23095554711246), eu_idt_cen = c(3.93090891667864, 1.93090891667864, 0.930908916678645, -3.06909108332136, 3.93090891667864, -3.06909108332136), yrs_edu_cen = c(5.02001824204311, 3.02001824204311, 4.02001824204311, -0.979981757956891, -0.979981757956891, -1.97998175795689), age_cen = c(5.15717539863326, 4.15717539863326, -25.8428246013667, 7.15717539863326, 16.1571753986333, 26.1571753986333 ), gdp_pc_cen = c(-14836.5688141943, -14836.5688141943, -14836.5688141943, -14836.5688141943, -14836.5688141943, -14836.5688141943), market_gini_cen = c(0.0335484709552394, 0.0335484709552394, 0.0335484709552394, 0.0335484709552394, 0.0335484709552394, 0.0335484709552394), post_gini_cen = c(0.0945660537883443, 0.0945660537883443, 0.0945660537883443, 0.0945660537883443, 0.0945660537883443, 0.0945660537883443), redistribution_cen = c(-0.0610175828331049, -0.0610175828331049, -0.0610175828331049, -0.0610175828331049, -0.0610175828331049, -0.0610175828331049), excl_natidt = c(0, 0, 0, 1, 0, 0), gincdif_std = structure(c(0.0381855950433824, -0.958988923830432, 0.0381855950433824, -0.958988923830432, 1.0353601139172, 1.0353601139172), .Dim = c(6L, 1L)), leftright_std = structure(c(1.16632317376368, -0.597961186007741, 1.16632317376368, 0.725252083820824, -0.156890096064886, NA), .Dim = c(6L, 1L)), euint_opn_std = structure(c(1.71185048703875, 0.944886548405815, 1.71185048703875, 1.32836851772228, -0.589041328860047, -0.205559359543581), .Dim = 
c(6L, 1L)), cntry_idt_std = structure(c(0.895957635755242, -0.116970710158064, -0.116970710158064, 0.895957635755242, 0.895957635755242, -1.63636322902802), .Dim = c(6L, 1L)), eu_idt_std = structure(c(1.52801804197905, 0.750580520852194, 0.361861760288766, -1.19301328196495, 1.52801804197905, -1.19301328196495 ), .Dim = c(6L, 1L)), yrs_edu_std = structure(c(1.24083228276956, 0.746478588044916, 0.993655435407238, -0.24222880140437, -0.24222880140437, -0.489405648766692), .Dim = c(6L, 1L)), age_std = structure(c(0.28064162523024, 0.22622392493162, -1.40630708402697, 0.389477025827479, 0.879236328515057, 1.42341333150125), .Dim = c(6L, 1L)), gdp_pc_std = structure(c(-1.54102451499488, -1.54102451499488, -1.54102451499488, -1.54102451499488, -1.54102451499488, -1.54102451499488), .Dim = c(6L, 1L)), market_gini_std = structure(c(0.984810228681162, 0.984810228681162, 0.984810228681162, 0.984810228681162, 0.984810228681162, 0.984810228681162), .Dim = c(6L, 1L)), post_gini_std = structure(c(2.04823627978607, 2.04823627978607, 2.04823627978607, 2.04823627978607, 2.04823627978607, 2.04823627978607), .Dim = c(6L, 1L)), redistribution_std = structure(c(-1.62906248185286, -1.62906248185286, -1.62906248185286, -1.62906248185286, -1.62906248185286, -1.62906248185286), .Dim = c(6L, 1L))), row.names = c(NA, 6L), class = "data.frame") ``` I then plotted the residuals and assume that this pattern is somewhat problematic. ``` res <- resid(model04) plot(fitted(model04), res) ``` I can however not fully make sense of this pattern and would very much appreciate if someone with more experience in handling this type of model/data could maybe give me some hints?
Multilevel model with interaction effect in R
CC BY-SA 4.0
null
2023-03-11T12:51:22.500
2023-03-11T14:10:08.997
2023-03-11T14:10:08.997
362671
382213
[ "r", "multilevel-analysis" ]
609103
1
null
null
0
15
Dear statistics community, hello! I have a question about how to define a mixed-effects model for my data. Let me introduce the data set first. My data set includes two main groups: a group exposed to risk factor A with 14 subjects, and a group exposed to factor B with 19 subjects. Measurements were collected from all group A and group B subjects at time point T1. Measurements were repeated on 9 of the 14 subjects in group A and 7 of the 19 subjects in group B at time point T2. In addition, measurements were taken only once from a group of 12 control subjects that were neither exposed to A nor B (I call this group C). I am interested in making multiple comparisons in this study: - Compare group A at T1 to group C (similarly compare B at T1 to C) - Compare group A at T2 to group C (similarly compare B at T2 to C) - Compare group A at T1 to A at T2 (similarly compare B at T1 to B at T2) - Compare A at T1 to B at T1 (similarly compare A at T2 to B at T2) How should I design a linear mixed-effects model to make the described comparisons between measurements? N.B. The control group C was only measured once, so I do not have different time points for C.
mixed-effects model with both repeated and non-repeated measures
CC BY-SA 4.0
null
2023-03-11T13:21:10.230
2023-03-11T13:21:10.230
null
null
341221
[ "mixed-model", "mixed-random-variable" ]
609104
2
null
609091
3
null
The interpretation of coefficients depends on how the predictor variables are coded. In this example, with `male = 1` and `female = 2`, the software will assume that the values are numeric. On that basis, the coefficient for `sex` will be just what you would have for a continuous numeric predictor variable: the change in log-hazard per unit increase in the "numeric" `sex` variable. With only 2 possible values of the "numeric" `sex` variable, that unit increase is for the change from `male` to `female`. That's equivalent to having `male` as the reference level, and the coefficient being the extra log-hazard associated with `female`. As comments suggest, you can resolve any ambiguity by explicitly coding `sex` as a factor. That allows you to choose the reference level. The R default when reporting coefficient values in that case is to append the specific non-reference level to the overall predictor name. Even in that case, however, you have to be careful. The default "treatment" or "dummy" coding of factors in R is to treat the first level as the reference/baseline. I recall that SPSS (or some other software that I don't use) treats the last level as the reference instead. Also, even within R, you can choose [different types of coding](https://stats.oarc.ucla.edu/r/modules/coding-for-categorical-variables-in-regression-models/) for categorical predictors that will affect the interpretation of coefficients.
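The reference-level point can be checked numerically with a toy linear model (a Python/NumPy sketch with made-up data; the Cox model's log-hazard coefficients behave analogously to the slope here):

```python
import numpy as np

# Toy two-group data: switching which level is the reference flips the sign
# of the dummy coefficient but describes the same fitted model.
y = np.array([1.0, 2.0, 3.0, 5.0, 6.0, 7.0])
female = np.array([0, 0, 0, 1, 1, 1])  # 0 = male (reference), 1 = female

# Dummy coding with male as reference: slope = female mean - male mean
X_male_ref = np.column_stack([np.ones(6), female])
coef_male_ref = np.linalg.lstsq(X_male_ref, y, rcond=None)[0]

# Dummy coding with female as reference: slope = male mean - female mean
X_female_ref = np.column_stack([np.ones(6), 1 - female])
coef_female_ref = np.linalg.lstsq(X_female_ref, y, rcond=None)[0]

print(coef_male_ref)    # → [2. 4.]  (male mean, female - male)
print(coef_female_ref)  # → [ 6. -4.]  (female mean, male - female)
```

Either coding gives the same group means; only the intercept and the sign/meaning of the dummy coefficient change, which is exactly why you must know the reference level before interpreting a reported coefficient.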
null
CC BY-SA 4.0
null
2023-03-11T13:22:25.853
2023-03-11T13:22:25.853
null
null
28500
null
609106
1
null
null
0
72
Given $X_1 ... X_n \sim \textrm{Exp}(\lambda)$, I found the MLE: $$\hat{\lambda} = \frac{1}{\bar{X}}$$ Now I need to find confidence intervals for: $$\eta = \lambda \cdot \log(\lambda)$$ To do so, I need to find the standard error for $\eta$, but first I'll need the standard error for $\lambda$, and I found that via the Fisher information (which comes from the second derivative of the log-likelihood function): $$\operatorname{se}(\lambda) = \sqrt{\frac{1}{I_n(\lambda)}} = \sqrt{\frac{\lambda^2}{n}}$$ Now: $$\operatorname {se}(\eta) = f'(\lambda) \cdot \operatorname {se}(\lambda) = -\frac{1}{\bar{X}^2} \cdot \sqrt{\frac{\lambda^2}{n}}$$ So I finally got, for $\alpha = 0.05$: $$\hat{\eta} \pm Z_{\frac{\alpha}{2}} \cdot -\frac{1}{\bar{X}^2} \cdot \sqrt{\frac{\lambda^2}{n}}$$ I don't know if what I got is right. Can anyone please check if I used the delta method correctly? Because I actually didn't use this: $$\eta = \lambda \cdot \log(\lambda)$$ And I feel it's a mistake not to use it.
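One way to sanity-check the intermediate step $\operatorname{se}(\hat{\lambda}) = \sqrt{\lambda^2/n}$ (only that step, not the $\eta$ part I'm unsure about) is a quick Monte Carlo simulation; this Python/NumPy sketch uses arbitrary values for $\lambda$ and $n$:

```python
import numpy as np

# Monte Carlo check that se(lambda_hat) ≈ lambda / sqrt(n) for the
# exponential MLE lambda_hat = 1 / mean(X). lam, n, reps are arbitrary.
rng = np.random.default_rng(0)
lam, n, reps = 2.0, 500, 20_000

samples = rng.exponential(scale=1 / lam, size=(reps, n))  # NumPy uses scale = 1/rate
lam_hat = 1 / samples.mean(axis=1)

print(lam_hat.std(ddof=1), lam / np.sqrt(n))  # the two numbers should be close
```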
Use the delta method to find confidence intervals
CC BY-SA 4.0
null
2023-03-11T13:47:35.487
2023-03-16T04:35:19.527
2023-03-11T14:04:07.633
362671
357522
[ "self-study", "mathematical-statistics", "confidence-interval", "maximum-likelihood", "delta-method" ]
609107
1
null
null
0
25
My question: I know there is a lot of information about what causes the vanishing gradient from a computational standpoint, i.e. due to the way the RNN is trained by backpropagation [...]. [Why do RNNs have a tendency to suffer from vanishing/exploding gradient?](https://stats.stackexchange.com/questions/140537/why-do-rnns-have-a-tendency-to-suffer-from-vanishing-exploding-gradient) covers that in detail. I also know that the problem does not always occur. From a data standpoint, I don't know what characteristics of the RNN's input data cause the problem to sometimes occur and sometimes not. Any ideas? It might be that there is no relation at all; if not, what causes the problem to appear only sometimes? My guess: [This famous blog post about LSTMs by Colah](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) made me suspect that it might be caused by input data that contains very long-term patterns. Colah talks about the RNN "forgetting" the patterns and presents this as a symptom of the vanishing gradient problem. So the intuition would be that the RNN is adequate for sequential data, but if the patterns are very long, the LSTM for instance is better than the RNN because the RNN suffers from the vanishing gradient problem due to these patterns being too long. Edit: The following quote from [Hochreiter, Schmidhuber 97: Long short-term memory](https://papers.baulab.info/Hochreiter-1997.pdf) makes me believe my initial guess could be correct, but I am not sure: "The most widely used algorithms for learning what to put in short-term memory [of the RNN], however, take too much time or do not work well at all, especially when minimal time lags between inputs and corresponding teacher signals are long."
Vanishing Gradient Problem: What is the cause from a Data perspective?
CC BY-SA 4.0
null
2023-03-11T14:06:27.640
2023-03-11T16:01:25.473
2023-03-11T16:01:25.473
367429
367429
[ "neural-networks", "lstm", "recurrent-neural-network", "gradient" ]
609109
1
null
null
0
13
How should I check for statistical significance in a 3-option survey question? Example: Which do you prefer? A / B / No preference. My responses were A: 58, B: 9, No preference: 21 (N = 88). For context, this was a convenience sample of university students. I was asking what kind of goal setting they preferred: online / paper / no preference. I'm interested to know whether this result in favour of online is statistically significant. Any guidance will be gratefully received; I'd really like to understand this issue.
How to test for statistical significance in a survey response that has three options: I prefer A, I prefer B, no preference
CC BY-SA 4.0
null
2023-03-11T14:37:16.037
2023-03-11T14:42:57.320
2023-03-11T14:42:57.320
362671
382959
[ "statistical-significance" ]
609110
1
null
null
0
9
I have seen many links about moving averages (MA) for batch normalization, but none answered my question. In batch normalization you compute the mean and variance of each mini-batch during the training process. By default, the statistics used at evaluation time are computed as a moving average of these. But the mini-batches are chosen completely at random, so there is no reason to give higher priority to recent mini-batches. Why should I compute the mean and variance for the evaluation process with a moving average rather than a plain average?
Why do i have to use moving averages and not just average to use it in evaluation process for Batch Normalization layer?
CC BY-SA 4.0
null
2023-03-11T14:37:40.300
2023-03-11T14:37:40.300
null
null
382968
[ "model-evaluation", "moving-average", "batch-normalization" ]
609111
2
null
608226
2
null
As @dipetkov says in a comment, a full answer requires chapters if not a [book](https://doi.org/10.1017/CBO9780511802843). If you want to understand the bootstrap, originally developed by Efron, then you could do worse than to consult the relevant parts of [Computer Age Statistical Inference](https://hastie.su.domains/CASI/) (CASI) by Efron and Hastie. What follows is a bit of guidance to point you in better directions as you undertake that study. I sense that there are some misunderstandings of what's involved in some of the bootstrap flavors that you note. Overall, remember that what you are trying to get with any [confidence interval](https://en.wikipedia.org/wiki/Confidence_interval) is a range, calculated from the data sample, that would cover the true value of the statistic of interest in the specified fraction of repeated experiments (often 95%), sampling from the same population, when the confidence interval is calculated the same way. What's of interest is the distribution of a statistic. In reverse, starting with the parametric bootstrap: > Parametric... If it's a Normal distribution, for example, we can calculate the mean for each subset. And then, what do we do? If we only want the mean of the normal distribution we take the mean of the means? You want the distribution of estimates of the means. Or the distribution of differences between estimates for different groups. For 95% CI, you put those estimates in order and choose a range that covers 95% of the estimates, similar to how you describe the percentile bootstrap. In some circumstances you can learn something about bias in an estimate by comparing the mean of the means of the bootstrapped samples against the original mean. If you're willing to assume a parametric distribution then that's less likely to be helpful than it is in non-parametric bootstrapping. 
> Percentile interval: as I understand it, involves generating multiple subsets of the data with n samples in each one, from the original data distribution (or the empirical data distribution if the original is unknown). For each subset, we compute the statistic of interest... The thing to recognize is that the "statistic of interest" is often a difference between an estimate from a sample and the true population value. The distribution of the estimates themselves among bootstrap samples isn't always the same thing, particularly when there is bias or skew in the estimates. > Pivot (pivotal intervals): This one confuses me the most... I am uncertain as to how this method helps in determining the CI, and whether it implies that the statistic itself is a pivot. I think there's some confusion here between the concept of a pivot and what's sometimes called the empirical/basic bootstrap. I suspect that this section of your course of study was on the empirical/basic bootstrap. Frequentist statistical inference in general is based on analysis of pivotal quantities. See page 16 of CASI. The reliability of the bootstrap also depends on having a pivot to analyze. That's not usually possible in practice, so the issue becomes how close to pivotal a quantity is. The importance of a pivot might be discussed in the context of the empirical/basic bootstrap, the flavor that comes closest to following the bootstrap principle that a bootstrap re-sample is to the sample as the sample is to the population. In that method, you evaluate the distribution of the differences between the values calculated from the bootstrapped samples and the value from the original sample. [This answer](https://stats.stackexchange.com/a/357498/28500) and its links covers the distinction between the percentile and the empirical/basic bootstrap. The empirical/basic bootstrap handles bias and skew more reliably than the percentile method. The "BCa" method can be even better. > Normal approximation... 
I just didn't get why would it only work when the statistic is normally distributed? If the statistic of interest doesn't have a normal distribution, then there's no assurance that a range covering 95% of an approximating normal distribution would cover 95% of the distribution of the statistic of interest. Think in particular about an asymmetric distribution versus the symmetric normal distribution.
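The percentile versus empirical/basic distinction can be made concrete with a short simulation (a Python/NumPy sketch; the skewed sample and the number of resamples are made up for illustration):

```python
import numpy as np

# Percentile vs. basic/empirical bootstrap 95% CIs for a sample mean.
rng = np.random.default_rng(42)
x = rng.exponential(size=100)  # a deliberately skewed sample
theta_hat = x.mean()

B = 5000
boot = np.array([rng.choice(x, size=x.size, replace=True).mean()
                 for _ in range(B)])

# Percentile interval: quantiles of the bootstrap estimates themselves.
lo_p, hi_p = np.percentile(boot, [2.5, 97.5])

# Basic/empirical interval: quantiles of (theta* - theta_hat), reflected
# around theta_hat -- note the upper bootstrap quantile sets the LOWER bound.
lo_b, hi_b = 2 * theta_hat - hi_p, 2 * theta_hat - lo_p

print((lo_p, hi_p), (lo_b, hi_b))
```

With a symmetric bootstrap distribution the two intervals coincide; with skew they differ, which is exactly where the choice of flavor starts to matter.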
null
CC BY-SA 4.0
null
2023-03-11T15:14:39.293
2023-03-11T15:14:39.293
null
null
28500
null
609112
1
null
null
0
50
Let $X \sim \mathcal{N}(\mu, \sigma)$ be the model for a normally distributed population, described by the probability density function $f_{X}(x; \mu, \sigma)$. We denote by $\mathbf{X} = (X_1, X_2, \ldots, X_n)$ a random sample of size $n$, and $\mathbf{x} = (x_1, x_2, \ldots, x_n)$ is a particular sample. We know the sample mean $\overline{\mathbf{x}} = \mathbf{Mean}(\mathbf{x})$ is an estimate of the population mean $\mu$. The sampling distribution of the sample mean is a random variable $\overline{\mathbf{X}} = \mathbf{Mean}(\mathbf{X})$, which describes the distribution of possible means we could observe from random samples of size $n$ from the population. I will denote the probability density function of the random variable $\overline{\mathbf{X}}$ as $f_{\overline{\mathbf{X}}|\mu,\sigma}$, in order to show explicitly the dependence on the population parameters $\mu$ and $\sigma$.

### CASE 0

Before we get to the confidence interval question about an unknown population mean $\mu$, I want to give an example of a calculation involving the sampling distribution $f_{\overline{\mathbf{X}}|\mu,\sigma}$. When $\mu$ and $\sigma$ are known, the CLT tells us that $\overline{\mathbf{X}} \sim \mathcal{N}(\mu, \frac{\sigma}{\sqrt{n}})$; in other words, $f_{\overline{\mathbf{X}}|\mu,\sigma}$ is the pdf of a normal distribution with mean $\mu$ and standard deviation $\mathbf{se} = \frac{\sigma}{\sqrt{n}}$. Using the CLT, we can calculate a confidence interval for the sample means we would expect to observe for samples of size $n$ by choosing values $a$ and $b$ such that $\Pr(\{a \leq \overline{\mathbf{X}} \leq b\}) = \int_{\overline{\mathbf{x}}=a}^{\overline{\mathbf{x}}=b} f_{\overline{\mathbf{X}}|\mu,\sigma}(\overline{\mathbf{x}}|\mu,\sigma)\,d\overline{\mathbf{x}}$ equals the desired coverage probability.

### CASE 1

Now let's tackle the slightly more interesting case where we know $\mu$ but $\sigma$ is unknown.
We don't know $\sigma^2$ but we can use the sample variance estimate $s^2_{\mathbf{x}} = \mathbf{Var}(\mathbf{x})$ for it. Using the sample standard deviation $s_{\mathbf{x}}$ as an estimate for $\sigma$, we can also obtain an estimated standard error $\widehat{\mathbf{se}} = \frac{s_{\mathbf{x}}}{\sqrt{n}}$. Since we know that sample variances tend to underestimate the population variance, we can't just plug these estimates into the normal model suggested by the CLT, but instead use Gosset's model based on Student's $t$-distribution. Using a combination of the plug-in principle and Gosset's "heavy-tail" correction to compensate for the variability underestimation that we can expect to occur, we obtain the following model for the sampling distribution of the sample mean we can expect to observe: $$ \overline{\mathbf{X}} \sim \mathcal{T}(\texttt{df}=n-1, \texttt{loc}=\mu, \texttt{scale}=\widehat{\mathbf{se}}) = \widehat{\mathbf{se}}\cdot \mathcal{T}(\nu) + \mu, $$ where $\mathcal{T}(\nu)$ is Student's $t$-distribution with $\nu = \texttt{df} = n-1$ degrees of freedom. This result is usually presented in terms of the location-scale pivot: $$ T = \frac{\overline{\mathbf{X}} - \mu}{ \widehat{\mathbf{se}} } \sim \mathcal{T}(n-1), $$ where $\widehat{\mathbf{se}} = \frac{s_{\mathbf{x}}}{\sqrt{n}}$ is the standard error estimated from the sample $\mathbf{x}$.

### CASE 2

We now come to the most realistic scenario, when both $\mu$ and $\sigma$ are unknown. If we plug in the estimate $\overline{\mathbf{x}}$ instead of $\mu$, and $s^2_{\mathbf{x}}$ instead of $\sigma^2$, into Student's $t$-distribution, we end up with the following model: $$ \widetilde{\mathbf{X}} \sim \mathcal{T}(\texttt{df}=n-1, \texttt{loc}=\overline{\mathbf{x}}, \texttt{scale}=\widehat{\mathbf{se}}) = \widehat{\mathbf{se}}\cdot \mathcal{T}(\nu) + \overline{\mathbf{x}}. $$ Is there some interpretation we can give to $\widetilde{\mathbf{X}}$ (shown in blue in the figure)?
It's clearly showing something useful---estimating the variability of the sample via $\widehat{\mathbf{se}}$, and centred at the observed mean $\overline{\mathbf{x}}$ since this is our best guess for $\mu$---but I have not seen any stats book that discusses this. The reason for my question is that $\widetilde{\mathbf{X}}$ is also the bootstrap distribution of the mean we obtain from the sample $\mathbf{x}$, as shown with the blue histogram. Is the blue curve showing something like a "frequentist posterior" of the population mean $\mu$? Or the likelihood of $\mu$ given the data? [](https://i.stack.imgur.com/9ghVS.png) The interpretation of the blue curve as a likelihood of $\mu$ given data $\mathbf{x}$ seems plausible. Recall the pivotal quantity from CASE 1: $$ T = \frac{\overline{\mathbf{X}} - \mu}{ \widehat{\mathbf{se}} } \sim \mathcal{T}(n-1). $$ If we want to construct a confidence interval for the unknown population mean, we start with $t_\ell$ and $t_u$, the 5th and 95th percentiles of the $T$ distribution, which means $\textrm{Pr}( \{ t_\ell \leq T \leq t_u \}) = 0.9$, and after some manipulations end up with $$ \Pr( \{ \bar{\mathbf{X}}+t_\ell \cdot \widehat{\mathbf{SE}} \leq \mu \leq \bar{\mathbf{X}}+t_u \cdot \widehat{\mathbf{SE}} \} )= 0.9, $$ which is a statement that works for random samples $\mathbf{X}$, and which we can then apply to a particular sample by saying $[\bar{\mathbf{x}}+t_\ell \cdot \hat{\mathbf{se}}, \bar{\mathbf{x}}+t_u \cdot \hat{\mathbf{se}} ]$ is a 90% confidence interval for the population mean. Clearly the confidence interval seems to have been constructed from the model $\widetilde{\mathbf{X}}$ we defined above, but the "inverting the pivotal quantity" procedure didn't provide us with any interpretation for it, since we just followed the procedure mechanically. So to summarize: - Q1: what is the correct way to interpret the model $\widetilde{\mathbf{X}}$ (blue in figure)?
- Q2: is the distribution of $\widetilde{\mathbf{X}}$ the same as the one used for inverting-the-pivot procedure for confidence intervals? Here is the notebook I used to generate the figure: [online](https://nobsstats.com/notebooks/99_mean_estimation_details.html) or [mybinder](https://mybinder.org/v2/gh/minireference/noBSstatsnotebooks/main?urlpath=tree/./notebooks/99_mean_estimation_details.ipynb).
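As a numerical companion to Q2, here is a small sketch (Python; the sample, seed, and tolerances are arbitrary, and the $t(19)$ percentiles are hardcoded from standard tables) comparing the CI from inverting the pivot with the percentile interval of the bootstrap distribution of the mean — for this data the two come out close, which is part of what motivates the question:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
x = rng.normal(loc=10, scale=3, size=n)   # one observed sample
xbar = x.mean()
se_hat = x.std(ddof=1) / np.sqrt(n)

# Invert the pivot T = (Xbar - mu)/se ~ t(n-1).
# 5th/95th percentiles of t(19), from standard tables.
t_l, t_u = -1.7291, 1.7291
ci_pivot = (xbar + t_l * se_hat, xbar + t_u * se_hat)

# Bootstrap distribution of the mean (the "blue histogram")
boot = np.array([rng.choice(x, size=n, replace=True).mean()
                 for _ in range(10000)])
ci_boot = tuple(np.quantile(boot, [0.05, 0.95]))

print(ci_pivot)
print(ci_boot)
```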
Interpretation of the distribution that appears when calculating a CI for the population mean
CC BY-SA 4.0
null
2023-03-11T15:17:17.617
2023-03-12T22:58:03.180
2023-03-12T22:58:03.180
62481
62481
[ "confidence-interval", "likelihood", "t-distribution", "pivot", "sampling-distribution" ]
609113
1
null
null
1
39
It can be seen that the following random variates have the same distribution: - $\frac{X_1 + X_3}{X_2 + X_3}$, where $(X_1, X_2, X_3) \sim \text{Dirichlet} (\alpha_1, \alpha_2, \alpha_3)$ - $\frac{Y_1 + Y_3}{Y_2 + Y_3}$, where $(Y_0, Y_1, Y_2, Y_3) \sim \text{Dirichlet} (\alpha_0, \alpha_1, \alpha_2, \alpha_3)$ - $\frac{Z_1 + Z_3}{Z_2 + Z_3}$, where the $(Z_i)_i$ are independent and $Z_i \sim \text{Gamma}(k = \alpha_i, \theta = 1)$ Question: does this distribution have a name? Has it been studied somewhere in the literature? Were it not for $X_1$ in the numerator, it seems that this would be a Beta-Prime distribution.
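A quick Monte Carlo check (Python; the parameter values are arbitrary) suggests the three constructions do agree in distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
a0, a1, a2, a3 = 0.7, 1.5, 2.0, 0.9   # arbitrary test parameters
N = 200_000

# (X1+X3)/(X2+X3) with (X1,X2,X3) ~ Dirichlet(a1,a2,a3)
X = rng.dirichlet([a1, a2, a3], size=N)
r_dir3 = (X[:, 0] + X[:, 2]) / (X[:, 1] + X[:, 2])

# (Y1+Y3)/(Y2+Y3) with (Y0,Y1,Y2,Y3) ~ Dirichlet(a0,a1,a2,a3)
Y = rng.dirichlet([a0, a1, a2, a3], size=N)
r_dir4 = (Y[:, 1] + Y[:, 3]) / (Y[:, 2] + Y[:, 3])

# (Z1+Z3)/(Z2+Z3) with independent Z_i ~ Gamma(a_i, 1)
Z1, Z2, Z3 = (rng.gamma(a, 1.0, size=N) for a in (a1, a2, a3))
r_gam = (Z1 + Z3) / (Z2 + Z3)

# compare a few empirical quantiles of the three constructions
qs = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.quantile(r_dir3, qs))
print(np.quantile(r_dir4, qs))
print(np.quantile(r_gam, qs))
```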
Distribution of the ratio of Dirichlet/Gamma variates
CC BY-SA 4.0
null
2023-03-11T15:28:24.903
2023-03-11T15:28:24.903
null
null
244176
[ "distributions", "gamma-distribution", "ratio", "dirichlet-distribution" ]
609115
2
null
608439
2
null
Assume $X,Y,Z$ are centered and organized as follows: $$A=\begin{pmatrix} \mid &\mid &\mid \\ X &Y &Z \\ \mid &\mid &\mid \end{pmatrix}$$ And $$A = A B + E$$ Where $$B = \begin{pmatrix} 0 &\beta_{X\to Y} &\beta_{X\to Z}\\ \beta_{Y\to X} &0 & \beta_{Y\to Z}\\ \beta_{Z\to X} &\beta_{Z\to Y} & 0 \end{pmatrix}$$ The general problem statement is $$ \cases{ \min_B\|E\|_2^{2}=\min_B\|A\cdot (\mathbb{I}-B)\|_2^{2}\\ \mathrm{diag}(B)=0 } $$ Using Lagrange multipliers: $$ f = \|A\cdot (\mathbb{I}-B)\|_2^{2}-\lambda^\top \cdot \mathrm{diag}(B) $$ $$\frac{\partial f}{\partial B} = -(2\cdot A^\top \cdot A\cdot (\mathbb{I}-B)+\mathrm{diag}(\lambda))=0 $$ $$2\cdot A^\top \cdot A\cdot (B-\mathbb{I})=\mathrm{diag}(\lambda)$$ $$B=\frac12(A^\top \cdot A)^{-1}\cdot \mathrm{diag}(\lambda)+\mathbb{I}$$ Using our equality constraint: $$\mathrm{diag}(B)=0=\mathrm{diag}\left(\frac12(A^\top \cdot A)^{-1}\cdot \mathrm{diag}(\lambda)+\mathbb{I}\right)=\\ \frac12\mathrm{diag}\left((A^\top \cdot A)^{-1}\right)\cdot\mathrm{diag}(\lambda)+\mathrm{diag}(\mathbb{I})\\ \therefore \lambda_i=\frac{-2}{(A^\top \cdot A)^{-1}_{ii}} \therefore \mathrm{diag}(\lambda)=-2 \left( \mathbb I \odot (A^\top \cdot A)^{-1}\right)^{-1} $$ Plugging it back: $$B=\frac12(A^\top \cdot A)^{-1}\cdot \mathrm{diag}(\lambda)+\mathbb{I}\\ =\frac12(A^\top \cdot A)^{-1}\cdot -2 \left( \mathbb I \odot (A^\top \cdot A)^{-1}\right)^{-1}+\mathbb{I}\\ B=\mathbb{I}-(A^\top \cdot A)^{-1}\cdot \left( \mathbb I \odot (A^\top \cdot A)^{-1}\right)^{-1}\\$$ Call $n\Sigma = A^\top \cdot A,\Omega = \Sigma^{-1}, D_\Omega = \mathrm{diag}(\Omega)$ $$B = \mathbb{I} - \Omega\cdot D_\Omega^{-1}$$

---

Compare $B$ with the partial correlation matrix, $R=2\mathbb{I}-D_\Omega^{-1/2}\Omega\cdot D_\Omega^{-1/2}$ for some intuition

---

You can check it is true with the following R code:

```
# generate X, Y, Z as a 100x3 matrix
A <- matrix(rnorm(300), ncol=3)
A <- scale(A)
# generate a 3x3 mixing matrix M
M <- matrix(rnorm(9), ncol=3)
# generate a 100x3 matrix of observed data
B <- A %*% M
# perform three linear regressions, one for each column of B from the other two
# columns of B
lm1 = lm(B[,1] ~ 0 + B[,2] + B[,3])
lm2 = lm(B[,2] ~ 0 + B[,1] + B[,3])
lm3 = lm(B[,3] ~ 0 + B[,1] + B[,2])
S = solve(t(B) %*% B)
S = diag(1, 3, 3) - S %*% diag(1/diag(S))
coef(lm1)
#     B[, 2]     B[, 3]
# -0.7249205 -0.9375626
coef(lm2)
#    B[, 1]    B[, 3]
# -1.351588 -1.280748
coef(lm3)
#     B[, 1]     B[, 2]
# -1.0485396 -0.7682355
S
#            [,1]      [,2]       [,3]
# [1,]  0.0000000 -1.351588 -1.0485396
# [2,] -0.7249205  0.000000 -0.7682355
# [3,] -0.9375626 -1.280748  0.0000000
```

---

I released a package that can help with this at [https://github.com/bhvieira/avaols](https://github.com/bhvieira/avaols). You can install it simply doing (requires `devtools`)

```
# install.packages("devtools")
devtools::install_github("bhvieira/avaols")
```

Then you can simply do

```
library(avaols)
# generate X, Y, Z as a 100x3 matrix
A = matrix(rnorm(300), ncol=3)
# generate a 3x3 mixing matrix M
M <- matrix(rnorm(9), ncol=3)
# generate a 100x3 matrix of observed data
B <- data.frame(A %*% M)
# fit avaols object
obj = avaols(B)
coef(obj)
#                     X1         X2           X3
# Intercept -0.009905788 -0.1840545 2.097737e-02
# X1         0.000000000 -3.1097798 1.219195e+00
# X2        -0.192439394  0.0000000 2.372571e-01
# X3         0.782301262  2.4601173 1.110223e-16
# compare with three linear regressions
lm1 = lm(X1 ~ ., data = B)
lm2 = lm(X2 ~ ., data = B)
lm3 = lm(X3 ~ ., data = B)
coef(lm1)
#  (Intercept)           X2           X3
# -0.009905788 -0.192439394  0.782301262
coef(lm2)
# (Intercept)         X1         X3
#  -0.1840545 -3.1097798  2.4601173
coef(lm3)
# (Intercept)         X1         X2
#  0.02097737 1.21919453 0.23725706
```
null
CC BY-SA 4.0
null
2023-03-11T16:04:18.460
2023-03-15T13:29:43.250
2023-03-15T13:29:43.250
60613
60613
null
609116
2
null
609071
2
null
> I can directly enter the wjk matrix and zj matrix, but for xi, I only have to enter x1 while the xi matrix consists of x1, x2, and x3. $W_{jk}$ is not a matrix; it's a real number whose value is determined by the connection between the $j$-th neuron in the last hidden layer and the $k$-th neuron in the output layer in the shape you gave. For the matrix shape, it depends on the [layout convention](https://en.wikipedia.org/wiki/Matrix_calculus#Layout_conventions) you assume (i.e. numerator or denominator layout). Either is fine; you just need to be consistent across the calculations. The one you assumed is the denominator layout; its transpose is the numerator layout. You can then substitute the individual values you've found into the matrix.
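A tiny sketch (Python, with made-up numbers) of the transpose relation between the two layouts, using the gradient of the scalar $f(w) = w^\top x$ with respect to the vector $w$:

```python
import numpy as np

# f(w) = w^T x is a scalar; its derivative with respect to w can be laid
# out as a column (denominator layout) or a row (numerator layout) --
# one layout is the transpose of the other.
x = np.array([0.5, -1.0, 2.0])

grad_denominator = x.reshape(-1, 1)   # 3x1 column vector
grad_numerator = x.reshape(1, -1)     # 1x3 row vector

print(grad_denominator.shape, grad_numerator.shape)
```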
null
CC BY-SA 4.0
null
2023-03-11T16:23:09.707
2023-03-11T16:23:09.707
null
null
204068
null
609117
2
null
608938
2
null
> It is my understanding that car::Anova() is a useful function for any type of model where a single predictor is involved in multiple terms (e.g., non-linear terms or interactions). That's true for many types of models, but a GAM is fit differently from the type of model covered on the [page you link](https://stats.stackexchange.com/q/603155/28500). I don't think that `car::Anova()` can handle your GAM, which uses penalization to trade off the flexibility of the fit against the amount of data available. You will notice that coefficients aren't reported for the smooths in your GAM model. There is, hiding within the model, effectively a large set of (penalized) coefficients for each smooth, with a Wald test on the entire smooth evaluating the overall significance reported. Within each of your tensor-product smooths, that set of coefficients includes what you might consider all the "main" and "interaction" coefficients involving the included predictors. Conceptually, the displayed Wald test on each smooth thus accomplishes what a Wald Type II Anova would accomplish in a different type of model: evaluating a combination of multiple coefficient estimates. So there's no need to use something like `car::Anova()` for this model. You already have the equivalent. The [mgcv package](https://cran.r-project.org/package=mgcv) provides an `anova.gam()` function appropriate to its GAM models. That would be the best choice for evaluating terms in a single model, or for comparing nested GAM models. See its help page for cautions about its use.
null
CC BY-SA 4.0
null
2023-03-11T16:33:43.803
2023-03-11T22:52:51.763
2023-03-11T22:52:51.763
28500
28500
null
609118
1
null
null
0
33
I am hoping to estimate the causal effects of a voluntary employment scheme available for U25 on mental health and financial independence using a difference-in-difference approach (DiD). Because participation in the program is voluntary, however, I am worried about self-selection bias. Could I still use DiD in this case?
Self-selection bias in difference-in-difference estimation
CC BY-SA 4.0
null
2023-03-11T16:45:17.720
2023-03-11T16:45:17.720
null
null
382977
[ "difference-in-difference", "selection-bias" ]
609119
2
null
399447
2
null
The problem of your derivation is that you misunderstood the concept of conditional distribution. It is not $P_\sigma(X \leq x |T(X) \leq t)$ -- it should be $P_\sigma(X \leq x | T(X) = t)$. For a thorough discussion of the latter notation, see [this answer](https://stats.stackexchange.com/questions/601921/formula-of-conditional-probability-when-we-have-discrete-and-continuous-random-v/602071#602071). To derive the correct conditional distribution, intuitively, given $|X| = t > 0$, then $X$ can only take value $t$ or $-t$. Therefore, for any $x \in \mathbb{R}$, the event $[X \leq x] \cap [|X| = t]$ is: \begin{align} \begin{cases} \varnothing & x < -t, \\ [X = -t] & -t \leq x < t, \\ [|X| = t] & x \geq t. \end{cases} \end{align} By symmetry of $N(0, \sigma^2)$, this implies the conditional distribution of $X$ given $|X| = t$ is \begin{align} P[X \leq x | |X| = t] = \begin{cases} \frac{1}{2}I_{[-x, \infty)}(t) & x \leq 0, \\ I_{(0, x]}(t) + \frac{1}{2}I_{(x, \infty)}(t) & x > 0. \end{cases} \tag{1} \end{align} Therefore the conditional distribution of $X$ given $|X|$ does not depend on $\sigma^2$, hence $|X|$ is sufficient for the distribution family $\{N(0, \sigma^2): \sigma > 0\}$. --- To prove $(1)$ rigorously, first rewrite $(1)$ as \begin{align} P[X \leq x | |X|] = \begin{cases} \frac{1}{2}I_{[|X| \geq -x]}(\omega) & x \leq 0, \\ I_{[|X| \leq x]}(\omega) + \frac{1}{2}I_{[|X| > x]}(\omega) & x > 0. \end{cases} \tag{2} \end{align} Since the right-hand side of $(2)$ is obviously $\sigma(|X|)$-measurable, it suffices to show for any generic $\sigma(|X|)$-set $[|X| \leq t]$, where $t > 0$, it holds that (these are two defining relations of the measure-theoretic conditional probability. For more details, refer to, for example, Equation (33.8) in Probability and Measure by Patrick Billingsley): \begin{align} P[[X \leq x]\cap [|X| \leq t]] = \int_{[|X| \leq t]}P[X \leq x ||X|]dP. 
\tag{3} \end{align} When $x \leq 0$, the left-hand side of $(3)$ is $(\Phi_\sigma(x) - \Phi_\sigma(-t))I_{[-t, 0]}(x)$, while the right-hand side of $(3)$ is \begin{align} \frac{1}{2}P[|X| \leq t, |X| \geq -x] = (\Phi_\sigma(x) - \Phi_\sigma(-t))I_{[-t, 0]}(x). \end{align} Hence $(3)$ holds. When $x > 0$, the left-hand side of $(3)$ is \begin{align} (\Phi_\sigma(x) - \Phi_\sigma(-t))I_{(0, t)}(x) + (\Phi_\sigma(t) - \Phi_\sigma(-t))I_{[t, \infty)}(x), \end{align} while the right-hand side of $(3)$ is \begin{align} & P[|X| \leq t, |X| \leq x] + \frac{1}{2}P[|X| \leq t, |X| > x] \\ =& \left[P[|X| \leq x] + \frac{1}{2}P[x < |X| \leq t]\right]I_{(0, t)}(x) + P[|X| \leq t]I_{[t, \infty)}(x) \\ =& (\Phi_\sigma(t) - \Phi_\sigma(-x))I_{(0, t)}(x) + (\Phi_\sigma(t) - \Phi_\sigma(-t))I_{[t, \infty)}(x)\\ =& (\Phi_\sigma(x) - \Phi_\sigma(-t))I_{(0, t)}(x) + (\Phi_\sigma(t) - \Phi_\sigma(-t))I_{[t, \infty)}(x). \end{align} Hence $(3)$ holds. This completes the proof.
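As an informal sanity check of $(1)$, one can condition numerically on $|X|$ falling near $t$ and verify that the conditional probability does not depend on $\sigma$ (Python sketch; the values of $t$, $x$, the window width, and the tolerances are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
t, x, eps = 1.0, -0.5, 0.02   # condition on |X| ~= t, evaluate P[X <= x]

estimates = []
for sigma in (1.0, 2.0):
    X = rng.normal(0.0, sigma, size=2_000_000)
    near = X[np.abs(np.abs(X) - t) < eps]   # samples with |X| close to t
    estimates.append(np.mean(near <= x))

# Formula (1): here x <= 0 and t >= -x, so the conditional probability is
# 1/2 regardless of sigma -- which is the point of sufficiency.
print(estimates)
```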
null
CC BY-SA 4.0
null
2023-03-11T16:57:54.227
2023-03-11T18:58:16.137
2023-03-11T18:58:16.137
20519
20519
null
609120
2
null
609014
1
null
The $\chi^2$ values shown in Table 2 of the cited paper seem to be those used for Wald tests on the coefficient estimates. It's the square of the ratio of a coefficient estimate to its standard error. In this case all of the Wald statistics seem to be based on single degrees of freedom, so they are the squares of the z-statistics that other software like R might report. Those are included in standard summaries of Cox models. That said, I don't see how the authors get the "weights" they report in Table 3 from the hazard ratios and $\chi^2$ values in Table 2, if you follow their explanation. If anything, the "weights" seem to be related to the inverse of what the authors state. For example, the hazard ratios associated with `age` and `bilirubin` are almost the same, the $\chi^2$ value for `age` is twice that for `bilirubin`, and the weight for `age` is also twice that for `bilirubin`. If you divided the Cox coefficients or HR by the $\chi^2$ values you would get "weights" in the opposite order for those predictors. There also seems to be some additional scaling, not just rounding, involved to get the values reported for "weights." Note that the $\chi^2$ tests in this report aren't strictly correct, as the authors used stepwise selection to build the multiple-regression model but didn't take that into account in the tests. A $\chi^2$ test assumes that you didn't use the outcomes to design the model, but stepwise selection necessarily does that.
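To make the first paragraph concrete: with a hypothetical coefficient estimate and standard error (these numbers are illustrative, not taken from the paper), the 1-df Wald $\chi^2$ and its p-value are computed as:

```python
import math

# hypothetical Cox coefficient and standard error (illustration only)
coef, se = 0.405, 0.112

z = coef / se
chi2 = z ** 2   # 1-df Wald chi-square, the kind reported in Table 2
# two-sided p-value from the normal tail: 2*(1 - Phi(|z|)) = erfc(|z|/sqrt(2))
p = math.erfc(abs(z) / math.sqrt(2))

print(chi2, p)
```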
null
CC BY-SA 4.0
null
2023-03-11T17:26:01.010
2023-03-11T17:26:01.010
null
null
28500
null
609121
2
null
608721
0
null
This question proposes two different problems/models. - Model A for a continuous outcome Y in two groups (men and women) with age as a covariate. Each group has its own intercept and slope (ie. group interacts with age) and we can compare the two groups at different ages. - Model B for a continuous outcome Y as a linear function of age with a change point at age = 25 (ie. a linear spline with a knot at 25). The change point represents a life event experienced by every subject and we can compare the outcome before and after the event. Neither model can satisfactorily answer the main question posed by the OP: > Estimate difference in the predicted outcome at specified time points, under the actual scenario that the life event changed the trajectory of the outcome, vs the counterfactual/hypothetical scenario that the trajectory prior to the life event was allowed to continue. The models don't have a satisfactory answer because the question (as formulated here) doesn't quite make sense. If everyone experienced the life event by the age of 30, then the comparison between people who have and who haven't experienced it at the age of 60 is not a counterfactual٭ scenario, it's an impossible scenario. Model B cannot represent the alternative of not undergoing the life event altogether: given a specific age, model B makes an "age-appropriate" estimate E{Y | age} but there is only one possible estimate. Model A can represent the counterfactual if we code subjects who have experienced the life event and those who haven't as different groups. Then, given any age, the model can estimate E{Y | age, event=yes} and E{Y | age, event=no}. The issue is that since everyone actually undergoes the event at a similar age, there is no overlap in age between the two groups. All comparisons between those who have and those who haven't experienced the event but are otherwise the same age require linear extrapolation. 
This is a huge assumption which we can never check because everyone experiences the life event. On the plus side, once we have fitted model A, it's straightforward to compute counterfactual comparisons with emmeans. The important question is: Do these contrasts say anything meaningful?

٭ A counterfactual is the change an individual is expected to experience if they are given, say, a novel treatment vs the standard treatment. It's possible for the patient to take either the new or the current treatment; we want to know which one is expected to be more effective for the patient.

```
mod <- lm(BP ~ group * age, data = dtstudy)
emm <- emmeans(
  mod,
  ~ group | age,
  at = list(age = c(30, 35, 40, 50))
)
pairs(emm)
#> age = 30:
#>  contrast             estimate   SE df t.ratio p.value
#>  event=no - event=yes    -7.32 5.31 46  -1.378  0.1748
#>
#> age = 35:
#>  contrast             estimate   SE df t.ratio p.value
#>  event=no - event=yes   -12.24 5.08 46  -2.413  0.0199
#>
#> age = 40:
#>  contrast             estimate   SE df t.ratio p.value
#>  event=no - event=yes   -17.17 6.06 46  -2.832  0.0068
#>
#> age = 50:
#>  contrast             estimate   SE df t.ratio p.value
#>  event=no - event=yes   -27.03 9.95 46  -2.716  0.0093
```

![](https://i.imgur.com/lPgk1LK.png)
null
CC BY-SA 4.0
null
2023-03-11T17:28:29.027
2023-03-24T10:35:26.730
2023-03-24T10:35:26.730
237901
237901
null
609122
2
null
569878
0
null
They are both correct. There's no conflict between these approaches because they involve two different kinds of weights: sample weights and class weights. In the downsample-and-upweight approach, you downsample the majority class and then increase the sample weights of the remaining majority examples: you use fewer samples, but each of them carries a higher weight, so the class keeps its original overall contribution. In the upweight-the-minority approach, you increase the class weight of the minority class to tell the model to treat that class as more important. Hopefully this helps.
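A small numerical sketch (Python; the class sizes, weights, and the fixed predicted probability are all made up) showing that downsampling the majority class while upweighting the kept samples reproduces the full-data loss, whereas upweighting the minority class weight answers a different question by changing the effective class balance:

```python
import numpy as np

rng = np.random.default_rng(0)
n_maj, n_min = 1000, 100
y = np.r_[np.zeros(n_maj), np.ones(n_min)]   # imbalanced labels
p = np.full(y.size, 0.3)                     # fixed predicted probability

def weighted_logloss(y, p, w):
    return np.sum(w * -(y * np.log(p) + (1 - y) * np.log(1 - p))) / np.sum(w)

# Full data, no weighting
loss_full = weighted_logloss(y, p, np.ones(y.size))

# Downsample the majority class 10x, then give the kept majority samples
# a sample weight of 10 to restore their original contribution
keep = np.r_[np.arange(0, n_maj, 10), np.arange(n_maj, n_maj + n_min)]
w = np.where(y[keep] == 0, 10.0, 1.0)
loss_downsampled = weighted_logloss(y[keep], p[keep], w)

# Upweighting the minority *class* instead changes the effective balance
w_class = np.where(y == 1, 10.0, 1.0)
loss_classweighted = weighted_logloss(y, p, w_class)

print(loss_full, loss_downsampled, loss_classweighted)
```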
null
CC BY-SA 4.0
null
2023-03-11T17:37:55.733
2023-03-11T17:37:55.733
null
null
382875
null
609123
1
609156
null
3
131
What is the multivariate distribution of $(X_1, \ldots, X_n \mid X_1 + \dotsm + X_n = y)$, given $y$, when the $X_i$ are (unconditionally, i.e. without being given $y$) independent and normally distributed with mean $\mu$ and standard deviation $\sigma$? And, to the task at hand: how do I simulate random draws of $X$ given $y$, $\mu$, and $\sigma$?
What is the multivariate distribution of $(X_1, \ldots, X_n | X_1+ \dotsm + X_n = y)$?
CC BY-SA 4.0
null
2023-03-11T17:38:30.993
2023-03-13T15:14:47.290
2023-03-13T15:14:47.290
11887
370545
[ "conditional-probability", "conditional-expectation", "multivariate-normal-distribution" ]