| Column | Type | Range |
|---|---|---|
| Id | string | length 1–6 |
| PostTypeId | string | 7 classes |
| AcceptedAnswerId | string | length 1–6 |
| ParentId | string | length 1–6 |
| Score | string | length 1–4 |
| ViewCount | string | length 1–7 |
| Body | string | length 0–38.7k |
| Title | string | length 15–150 |
| ContentLicense | string | 3 classes |
| FavoriteCount | string | 3 classes |
| CreationDate | string | length 23 |
| LastActivityDate | string | length 23 |
| LastEditDate | string | length 23 |
| LastEditorUserId | string | length 1–6 |
| OwnerUserId | string | length 1–6 |
| Tags | list | |
608393
2
null
204484
1
null
I would like to put the answer's statement the opposite way: instead of "the sigmoid function is a special case of the logistic function", I would say "the logistic function is a special case of the sigmoid function". Every S-shaped, monotonically increasing function confined between bounds $a$ and $b$ is a sigmoid function.
null
CC BY-SA 4.0
null
2023-03-05T00:49:20.237
2023-03-05T00:50:36.907
2023-03-05T00:50:36.907
374361
374361
null
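A small numerical illustration of the claim (my own sketch, not part of the original post): the general logistic curve $a + (b-a)/(1+e^{-k(x-x_0)})$ is an S-shaped, monotonically increasing function confined between $a$ and $b$, and the standard logistic $\sigma(x)=1/(1+e^{-x})$ is recovered as the special case $a=0$, $b=1$, $k=1$, $x_0=0$:

```python
import numpy as np

def general_sigmoid(x, a=0.0, b=1.0, k=1.0, x0=0.0):
    """S-shaped, monotonically increasing function confined between a and b."""
    return a + (b - a) / (1.0 + np.exp(-k * (x - x0)))

def standard_logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-6, 6, 101)
# The standard logistic is the special case a=0, b=1, k=1, x0=0
assert np.allclose(general_sigmoid(x), standard_logistic(x))
# Monotonically increasing and confined between the bounds a and b
y = general_sigmoid(x, a=-2.0, b=3.0, k=0.7, x0=1.0)
assert np.all(np.diff(y) > 0) and y.min() > -2.0 and y.max() < 3.0
```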
608394
1
608578
null
0
63
I'm reading a lecture slide that starts by asking if there's a way to invert a characteristic function $\psi_X$ if $\int|\psi_X(t)|~\mathrm{d}t = \infty$. From my reading, the slide then provides a proof sketch of the inversion formula. This proof sketch states to assume you can use a theorem (proposition 13 [here](https://www.stat.cmu.edu/%7Earinaldo/Teaching/36752/S18/Notes/lec_notes_11.pdf)). [](https://i.stack.imgur.com/myuHg.jpg) Why ask about the integrability of $\psi_X$, but then assume just that?
Inverting a characteristic function if the integral of the modulus of the cf is infinity
CC BY-SA 4.0
null
2023-03-05T00:54:43.810
2023-03-07T03:02:29.780
2023-03-06T21:50:10.923
364080
364080
[ "probability", "measure-theory", "characteristic-function" ]
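As an aside (my own illustration, not from the post): when $\psi_X$ *is* integrable, the inversion formula can be evaluated numerically. The sketch below recovers the standard normal density from its characteristic function $\psi(t) = e^{-t^2/2}$, using SciPy quadrature and the fact that for a real, even $\psi$ the formula reduces to a real cosine integral:

```python
import numpy as np
from scipy import integrate, stats

def density_from_cf(x, psi, upper=50.0):
    # Inversion formula f(x) = (1/2pi) * integral of e^{-itx} psi(t) dt,
    # reduced to (1/pi) * integral_0^inf cos(tx) psi(t) dt for real, even psi.
    val, _ = integrate.quad(lambda t: np.cos(t * x) * psi(t), 0.0, upper)
    return val / np.pi

psi_normal = lambda t: np.exp(-t**2 / 2)  # cf of N(0, 1)

# The recovered density matches the N(0, 1) pdf pointwise
for x in [-1.0, 0.0, 2.0]:
    assert abs(density_from_cf(x, psi_normal) - stats.norm.pdf(x)) < 1e-8
```

Note that this works precisely because $\int|\psi_X(t)|\,\mathrm{d}t < \infty$ here, which is the condition the slide's question is about relaxing.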
608395
1
null
null
0
10
Say I have $k$ strata, each with resamples $\mathcal{D}_i$, where $i = 1, \dots, k$. Each has some confidence interval $[\hat{\theta}_i - \theta^{\star}_{i, \alpha/2}, \hat{\theta}_i - \theta^{\star}_{i, 1 - \alpha/2}]$. How do I combine the strata to get a single confidence interval for the population statistic? I'm computing a mean. Is it just a matter of getting the weight of each stratum and producing a weighted average of the statistic and intervals?
How to combine strata in bootstrap resampling to produce a confidence interval for the population statistic?
CC BY-SA 4.0
null
2023-03-05T01:10:07.460
2023-03-05T01:10:07.460
null
null
43080
[ "confidence-interval", "sampling", "mean", "bootstrap", "stratification" ]
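One way to sketch the weighted-combination idea asked about (a hypothetical example with made-up strata and weights, not a definitive recipe): rather than averaging the per-stratum intervals, resample within each stratum, combine the stratum means with the population weights on every bootstrap replicate, and take percentiles of the combined statistic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical strata: (population weight, sample) -- values are illustrative
strata = [
    (0.5, rng.normal(10, 2, size=80)),
    (0.3, rng.normal(20, 5, size=50)),
    (0.2, rng.normal(5, 1, size=30)),
]

B = 2000
boot_means = np.empty(B)
for b in range(B):
    # Resample within each stratum, then combine with the stratum weights
    boot_means[b] = sum(w * rng.choice(s, size=s.size).mean() for w, s in strata)

point = sum(w * s.mean() for w, s in strata)  # weighted point estimate
lo, hi = np.percentile(boot_means, [2.5, 97.5])
assert lo < point < hi
```

Resampling each stratum separately preserves the stratified design in every replicate, which is why this differs from naively averaging the individual interval endpoints.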
608396
1
null
null
0
45
I have a set of log-return data for a commodity and am unable to identify an appropriate ARMA model. I used the auto.arima() function, and the optimized model is (4,0,4) with zero mean. However, when I run the Arima model, I get a warning message that there is a convergence problem with optim code = 1. I wrote an iterative algorithm in R to identify the best model by minimizing the AIC value, which gave me a model of (12,0,7) without mean and (12,0,7) with mean. The AIC score without mean is lower. I ran the program with a maximum order of 12 for each of the AR and MA terms to identify the best model. My algorithm avoids selecting any model that has convergence issues or NaNs in the standard errors. With the models that my algorithm selected, I note that serial correlation is present in the residuals, as there are outliers in the data. I request help with the following: - Should I winsorize the log-return data to reduce the impact of outliers, or - Should I use tsclean() to transform the outliers? My objective is to obtain a model that has zero autocorrelation in the residuals.
Identify ARMA model with no autocorrelation in residuals
CC BY-SA 4.0
null
2023-03-05T01:11:40.787
2023-03-05T08:01:20.883
2023-03-05T08:01:20.883
53690
369873
[ "time-series", "arima", "model-selection", "autocorrelation", "winsorizing" ]
608397
1
null
null
0
82
I am working on a database that looks at progression-free survival and includes event and time-to-event data. It is missing about 40% of both time-to-event data and event data. I am wondering if I should impute the missing event and time-to-event data, or simply exclude them from the analysis? Would it make sense to exclude missing event and time-to-event data and impute other missing data (other covariates have about 10% missing data)? I am using multiple imputation with chained equations. You can see some example code below:

```
library('mice')
set.seed(1234) # Set seed for reproducibility

# Create time variable ranging from 1 month to 5 years
time <- sort(runif(500, 1, 60))

# Create event variable with 50% probability of event occurrence
event <- rbinom(500, 1, 0.5)

# Create treatment variable with 50% probability of treatment occurrence
treatment <- rbinom(500, 1, 0.5)

# Create grade variable with 80% of treated patients having 'high' grade disease
# (assign by treatment index so grades line up with the treatment vector)
grade <- character(500)
grade[treatment == 1] <- sample(c("low", "high"), sum(treatment), prob = c(0.2, 0.8), replace = TRUE)
grade[treatment == 0] <- sample(c("low", "high"), sum(1 - treatment), prob = c(0.5, 0.5), replace = TRUE)
grade <- factor(grade)

# Create missing data for time, event, and grade variables
time[sample(500, 200)] <- NA
event[sample(500, 200)] <- NA
grade[sample(500, 50)] <- NA

# Combine variables into a data frame
dat <- data.frame(event, time, treatment, grade)

# Run MICE
dat$event <- as.factor(dat$event)
dat$treatment <- as.factor(dat$treatment)
dat$grade <- as.factor(dat$grade)
dat_imp <- mice(dat, maxit = 5, print = FALSE)
```
Should you impute missing event and time-to-event variables for survival data that has missing and censored data?
CC BY-SA 4.0
null
2023-03-05T01:23:04.347
2023-03-20T20:21:07.303
null
null
382393
[ "survival", "multiple-imputation" ]
608398
1
608402
null
0
49
#### Motivating Background Info I was recently in a grad class and someone was presenting on a structural equation model (SEM) that had a mediation path. If I recall correctly, this was the specific pathway they were speaking about from [this paper](https://www.researchgate.net/profile/Antonio-Valle-4/publication/273518385_Relationships_between_perceived_parental_involvement_in_homework_student_homework_behaviors_and_academic_achievement_differences_among_elementary_junior_high_and_high_school_students/links/564e15fd08ae4988a7a5f46f/Relationships-between-perceived-parental-involvement-in-homework-student-homework-behaviors-and-academic-achievement-differences-among-elementary-junior-high-and-high-school-students.pdf): [](https://i.stack.imgur.com/rbVWk.png) Someone in the class stated that there is no ability to run a path analysis with this data because the data isn't longitudinal. I have never heard this distinction before and because I wasn't aware of this, I didn't question the statement until I got home and wondered if this was indeed correct. #### Question My understanding is that path analysis and SEM are basically just extensions of regression (the paths are literally just regression paths at the end of the day), and this doesn't necessitate longitudinal data, but obviously having this kind of data is ideal. For example, Sewall Wright's original path analysis was done on Guinea pig heredity features, and as far as I recall, he did not use longitudinal data to explain his methods. His original path analysis is shown below: [](https://i.stack.imgur.com/b9E4M.png) Using this logic, let's say somebody wanted to investigate whether waking up early (measured in time) predicts coffee consumption (measured in cups), thereafter coffee consumption predicts productivity (measured in minutes). This is simply one regression path followed by another, and I don't think it requires longitudinal data (again, it would be ideal, but I don't know if that's totally necessary). 
Basically my question is what I've summarized already: does path analysis / SEM with mediation require longitudinal data?
Is longitudinal data a necessity of mediation paths in structural equation modeling?
CC BY-SA 4.0
null
2023-03-05T01:37:48.233
2023-03-05T03:33:24.917
2023-03-05T01:51:37.867
345611
345611
[ "regression", "panel-data", "structural-equation-modeling", "assumptions" ]
608399
2
null
606889
1
null
I think you would need repeated measurements of the patients before and after the treatment to determine if the effect is significant for a single patient. If patient N has values of 67, 66, 67 before and 70, 71, 70 after the treatment, it would probably still be significant. If you have an idea of how large this variance is, you could use the [effect size](https://en.wikipedia.org/wiki/Effect_size). Alternatively, you could rewrite your result from "X% of the patients showed a significant increase in their breath-holding performance" to "X% of the patients showed an increase in their breath-holding performance of at least Y%".
null
CC BY-SA 4.0
null
2023-03-05T02:23:09.740
2023-03-05T02:23:09.740
null
null
382388
null
608400
1
null
null
1
23
I am reading an introductory statistics book. One chapter is about ANOVA, the F distribution, and the null hypothesis. In ANOVA, the F value is $$ F = \frac{\text{SSB}/(k-1)}{\text{SSW}/(n-k)} $$ where SSB is the sum of squares between groups and SSW is the sum of squares within groups, with $n$ samples in total and $k$ groups. The degrees of freedom are df1 = k-1 between groups and df2 = n-k within groups. The book says that when the significance level $\alpha$, df1, and df2 are given, the F distribution yields a critical F value (Fc), and if F > Fc, the null hypothesis is rejected. Consider 3 groups (k=3, so df1=2) and $\alpha=0.05$: the Fc values corresponding to df2=10, df2=100, and df2=1000 are 4.1028, 3.0873, and 3.0047 respectively. I am not sure if I understand it correctly, but since df2 depends on the sample size (n) while k is constant, does this mean that with more samples it is easier to reject the null hypothesis? I don't quite follow it. Say I did 2 experiments with 3 groups. In the first experiment I collected enough samples for df2=10, got Fc = 4.1028 and F = 3.65, so I fail to reject the null hypothesis. Later I did the same experiment with enough samples for df2=100; now Fc is 3.0873, and the same F would reject the null hypothesis. It is confusing, and I don't understand why more samples lead to a smaller (less strict) critical value. If this is true, does it mean more samples make it harder to accept the null hypothesis?
How does sample size affect the chance of rejecting the null hypothesis in ANOVA?
CC BY-SA 4.0
null
2023-03-05T02:51:55.797
2023-03-05T03:22:36.840
2023-03-05T03:22:36.840
109811
109811
[ "hypothesis-testing", "anova", "f-distribution" ]
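The critical values quoted in the question can be reproduced directly (a quick check using SciPy; the target numbers 4.1028, 3.0873, and 3.0047 come from the question itself):

```python
from scipy import stats

# Critical F for alpha = 0.05 with df1 = k - 1 = 2, at several within-group dfs
for df2, expected in [(10, 4.1028), (100, 3.0873), (1000, 3.0047)]:
    Fc = stats.f.ppf(0.95, dfn=2, dfd=df2)  # upper 5% quantile of F(2, df2)
    assert round(Fc, 4) == expected
```

The critical value shrinks toward the large-sample limit as df2 grows, but note that the F statistic itself also changes with n, which is the crux of the question.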
608401
1
null
null
0
22
I'm currently developing an agent-based model and am at the stage of verification and validation. I'm new to sensitivity analysis and was wondering if you know of any sensitivity analysis techniques for assessing parameter variability (e.g., Sobol) that accept nominal parameters? Here's an example scenario: Input parameters: - Strategy1: ['Strategy1.1', 'Strategy1.2', 'Strategy1.3'] - Strategy2: ['Strategy2.1', 'Strategy2.2', 'Strategy2.3'] Output variables: - SomeOutcome1 (Continuous) Analysis: I'm interested in knowing how each parameter impacts the outcome variable, just like what Sobol analysis does. I tried Sobol once, and I think it only works with discrete and continuous variables as parameters. Thank you so much.
What sensitivity analysis technique is suitable for nominal parameters?
CC BY-SA 4.0
null
2023-03-05T03:21:49.153
2023-03-05T03:24:34.143
2023-03-05T03:24:34.143
382398
382398
[ "modeling", "simulation", "sensitivity-analysis" ]
608402
2
null
608398
1
null
As you said, mediation is at its core a regression, so you can run a mediation model with any kind of data. However, typically when we talk about "mediation" we really mean causal inference. Causal inference requires that a host of assumptions be met, and having longitudinal data can be useful for this, though it isn't necessary. [This](https://journals.sagepub.com/doi/10.1177/25152459221095827) is a nice primer on the limitations of naively applied path models (including mediation), and it discusses how one might do better.
null
CC BY-SA 4.0
null
2023-03-05T03:33:24.917
2023-03-05T03:33:24.917
null
null
288142
null
608403
1
null
null
0
46
I am looking for an example of an MA($\infty$) process that has the long-term dependence property and is stationary in the strict sense. I would like you to consider the following points in the discussion: - What conditions imply that an MA($\infty$) process has the long-term dependence property? Remember that an AR($1$) process can be expressed as an MA($\infty$) process, so not every MA($\infty$) process has the long-term dependence property (I'm assuming that the AR(1) process doesn't have it). - What conditions imply that an MA($\infty$) process is strictly stationary? Of course, you don't have to set out an example with all the details, but I welcome suggestions so that I can work out the details myself. Thanks
An example of an MA($\infty$) process with long-term dependence that is strictly stationary
CC BY-SA 4.0
null
2023-03-05T04:33:18.820
2023-03-05T04:33:18.820
null
null
373088
[ "arima", "stochastic-processes", "stationarity", "autoregressive", "moving-average" ]
608404
2
null
608150
2
null
We can use profile likelihood methods to construct a confidence interval for the maximum probability $\theta = \max_{j=1}^k p_j$. Here $p_1, p_2, \dotsc, p_k$ represent the discrete distribution of dice rolls, where in your example $p_5$ is somewhat larger than the others. The results of $n$ dice rolls are given by the random variable $X=(X_1, \dotsc, X_k)$, where in the dice example $k=6$. The likelihood function is then $$ L(p) = p_1^{X_1} p_2^{X_2} \dotsm p_k^{X_k} $$ and the loglikelihood is $$ \ell(p) =\sum_1^k x_j \log(p_j) $$ The profile likelihood function for $\theta$ as defined above is $$ \ell_P(\theta) = \max_{p \colon \max p_j = \theta} \ell(p) $$

With some simulated data we get the following profile log-likelihood function [](https://i.stack.imgur.com/mD7NF.png) where the horizontal lines can be used to read off confidence intervals with confidence levels 0.95 and 0.99 respectively. I will add the R code used at the end of the post. A paper using bootstrapping for estimating $\theta$ is [Simultaneous confidence intervals for multinomial proportions](https://www.sciencedirect.com/science/article/abs/pii/S0378375899000476)

But this is only a partial solution; in a comment you say

> @Dave No, I want to find the most probable outcome of the die.

I read that as finding the maximum probability (done above), but also which of the sides of the die corresponds to the max probability. The Bayesian approach in the answer by user Henry is a direct answer to that. It is not so clear how to approach that in a frequentist way; maybe bootstrapping could be tried? One old approach is subset selection: choosing a subset of the sides of the die which contains the side with max probability at a certain confidence level.
Papers discussing such methods are [A subset selection procedure for multinomial distributions](https://www.tandfonline.com/doi/abs/10.1080/02664763.2013.789493?journalCode=cjas20) and [SELECTING A SUBSET CONTAINING ALL THE MULTINOMIAL CELLS BETTER THAN A STANDARD WITH INVERSE SAMPLING](https://www.jstor.org/stable/43836370). R code for the plot above:

```
library(alabama)

make_proflik_max <- function(x) {
  stopifnot(all(x >= 0))
  k <- length(x)
  Vectorize(function(theta) {
    par <- rep(1/k, k) # initial values
    fn <- function(p) -sum(log(p) * x)
    gr <- function(p) -x/p
    hin <- function(p) { # each component must be positive
      c(p)
    }
    heq <- function(p) { # must be zero
      c(sum(p) - 1, max(p) - theta)
    }
    res <- alabama::auglag(par, fn, hin = hin, heq = heq)
    -res$value
  })
}

set.seed(7*11*13) # My public seed
x <- sample(1:6, 200, replace=TRUE, prob=c(9,9,9,9,10,9)) # 5 is a little more probable
x <- table(x)
proflik_max <- make_proflik_max(x)
plot(proflik_max, from=1/6 + 0.001, to=0.35, xlab=expression(theta))

loglik <- function(p) sum(x * log(p))
maxloglik <- loglik(x/200)
mle_pmax <- max(x/200)
abline(h=maxloglik - qchisq(0.95,1)/2, col="red")
abline(h=maxloglik - qchisq(0.99,1)/2, col="blue")
abline(v=mle_pmax)
```
null
CC BY-SA 4.0
null
2023-03-05T04:48:34.853
2023-03-07T14:01:16.010
2023-03-07T14:01:16.010
11887
11887
null
608407
2
null
452611
1
null
You can generate the critical value from Python or Mathematica. In Python 3 it would look like:

```
from math import sqrt
from scipy import stats

def Gcritical(n, CL):
    # computes the G critical value from the confidence level
    Tcritical = stats.t.isf((1 - CL/100.)/n, n-2)
    return (n - 1)/sqrt(n)*sqrt(Tcritical**2/(n - 2 + Tcritical**2))
```

where n is the number of samples and CL is the confidence level as a percentage, like 99.5. In Mathematica it is similar.
null
CC BY-SA 4.0
null
2023-03-05T06:04:48.267
2023-03-05T06:25:49.540
2023-03-05T06:25:49.540
362671
382408
null
608408
2
null
442366
1
null
Because the second principal component should capture the highest remaining variance after the first principal component has explained the data as much as it can. (The first principal component captures the most variability in the data.) But why does an orthogonal direction capture the most remaining variation? If two directions are not orthogonal, they overlap: part of the variance along one direction is already accounted for by the other. Choosing the second direction orthogonal to the first guarantees that it captures only variance the first direction has not already captured. You can read further [here](https://medium.com/intuitionmath/why-is-the-second-principal-component-orthogonal-to-the-first-one-d453c9fd97ca).
null
CC BY-SA 4.0
null
2023-03-05T06:09:01.690
2023-03-05T06:09:01.690
null
null
59072
null
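The claim can be checked numerically (my own sketch with synthetic data, not from the answer): the principal components are eigenvectors of the covariance matrix, consecutive components are orthogonal, and no direction carries more variance than the first component:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic correlated data: 200 observations of 3 variables
X = rng.normal(size=(200, 3)) @ np.array([[2.0, 0.5, 0.0],
                                          [0.0, 1.0, 0.3],
                                          [0.0, 0.0, 0.5]])
Xc = X - X.mean(axis=0)

# Principal components are eigenvectors of the covariance matrix
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)     # eigh sorts eigenvalues ascending
pc1, pc2 = eigvecs[:, -1], eigvecs[:, -2]  # top two components

assert abs(pc1 @ pc2) < 1e-10  # orthogonal by construction

# No direction has variance exceeding that along the first component
for _ in range(100):
    d = rng.normal(size=3)
    d /= np.linalg.norm(d)
    assert d @ cov @ d <= eigvals[-1] + 1e-9
```

The last loop is the Rayleigh-quotient fact behind PCA: the largest eigenvalue bounds the variance in any direction, and the second component maximizes variance subject to orthogonality with the first.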
608409
1
null
null
1
14
In running the mlp() function of the nnfor package, you can allow the model to choose the number of hidden nodes through cross-validation.

```
fit2 <- mlp(fit, auto.hd.type = "cv")
fit2$MSEH
```

I would just like to ask: what do the MSE values mean for each number of hidden nodes if you let the mlp function select the number of hidden nodes? How was the computation performed to get these values? [](https://i.stack.imgur.com/NXvWQ.png) This differs completely from the MSE of the model [](https://i.stack.imgur.com/4baog.png)
auto.hd.type = "cv" in mlp() function in nnfor package
CC BY-SA 4.0
null
2023-03-05T06:12:50.473
2023-03-05T06:12:50.473
null
null
367146
[ "time-series", "neural-networks", "forecasting" ]
608411
1
608742
null
2
175
I know about tokenization algorithms like BPE and some other basics of tokenization from the Hugging Face course. I've also heard about word2vec and other algorithms for assigning words to vectors. I'm very confused about how these two fit together, if at all, in LLMs. What are some common practices for converting tokenized text into input tensors? I believe one approach is to randomly initialize a matrix of shape (embedding dimension, token vocabulary size). Multiplying with the one-hot encoded column vector for a given token will select a particular column of the matrix; thus the matrix functions as a lookup table that assigns vectors to tokens in the vocabulary. During training, update this matrix normally via backpropagation. (This approach makes no use of word2vec.)
How do LLMs transform tokens into vectors?
CC BY-SA 4.0
null
2023-03-05T06:47:55.687
2023-03-24T06:41:35.553
2023-03-24T06:41:35.553
1352
277601
[ "machine-learning", "neural-networks", "natural-language", "transformers", "llm" ]
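The lookup-table intuition described in the question can be verified in a few lines (a minimal NumPy sketch with illustrative shapes; here the embedding matrix is stored with one row per token, the transpose of the (embedding dim, vocab size) convention in the question):

```python
import numpy as np

vocab_size, d_model = 10, 4
rng = np.random.default_rng(0)
E = rng.normal(size=(vocab_size, d_model))  # trainable embedding matrix

token_ids = np.array([3, 7, 3])  # output of the tokenizer

# Lookup view: just index rows of E
looked_up = E[token_ids]

# One-hot view: multiply one-hot rows by E -- mathematically identical
one_hot = np.eye(vocab_size)[token_ids]
assert np.allclose(one_hot @ E, looked_up)
```

In practice frameworks implement this as the indexing form (a gather), since materializing one-hot vectors is wasteful, but the gradient flowing back into the selected rows of E is the same either way.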
608413
2
null
608391
5
null
They won't accomplish the same thing, but they will accomplish very similar things. The reason they aren't the same is that the Cox model isn't collapsible. Suppose treatment was actually randomised, so that there was no confounding. The propensity score would then be constant -- there is no varying propensity to be treated -- so the weighted propensity score model would estimate the same target parameter as an unadjusted two-group treatment comparison. The log hazard ratio for treatment in the multivariable model would not estimate this same parameter. It would estimate a parameter that's further from zero, because that's how hazard ratios work (so do odds ratios). Now consider what happens when there's confounding. The best you could hope for is that the propensity-weighted model estimates the same thing that it would under randomisation (which it does if the models are correctly specified) and the multivariable model estimates the same thing that it would under randomisation (which it also does if the models are correctly specified). So, if everything works perfectly, the two estimates are different, but in the same way that they would be different under randomisation. Either one is ok. For completeness, there's one additional wrinkle: if grade and treatment both have non-zero effects, it's not possible for the model with just treatment and the model with treatment and grade to both satisfy the proportional hazards assumption exactly. Heuristically, the mix of grades in the two treatment groups will change over time, as the people with high-grade disease die. If the hazard ratio for treatment is constant over time in the multivariable model it will decrease over time in the treatment-only model. The departures may not be large, and you might be happy ignoring this complication, but you did ask.
null
CC BY-SA 4.0
null
2023-03-05T07:41:26.150
2023-03-05T07:41:26.150
null
null
249135
null
608414
1
null
null
1
37
I have datasets that correspond to different traffic load inputs. I am doing binary classification on them. The proportion of 1s to 0s varies from dataset to dataset. E.g., dataset 1 is imbalanced with >80% class 1 samples, dataset 2 has >70% class 1 samples, and so on. I am training a different neural network for each dataset, i.e., for each input traffic load value. When I was using dataset 1 and dataset 2, I used AUC ROC as a metric. My dataset 3 has 59% class 1 samples, and I am confused about whether I should still use AUC ROC as a metric, or whether accuracy is better here since it's not very imbalanced. My subsequent datasets will have the proportion shifting until class 0 is dominant, so I may have to revert to AUC ROC again to deal with that imbalance. - I want to compare the performance of the neural networks over increasing traffic loads, in which case I should tune to obtain the best possible classifier for each dataset using the same metric. I cannot say that I used AUC for the first and last datasets and accuracy for the in-between ones because they were more balanced. Or can I? Can anyone please advise? - I use 1 as the positive class and 0 as the negative class, unlike most blogs that say the minority class is the positive class. I hope my approach is also correct. I am equally interested in predicting both classes. - I have also read that AUC PR is good. Is that so, and should I use it for all cases instead of AUC ROC? I am capturing it anyway, and the AUC PR values always seem to be higher than the AUC ROC values.
AUC ROC and accuracy for different datasets of same problem
CC BY-SA 4.0
null
2023-03-05T08:04:21.150
2023-03-05T08:22:09.010
2023-03-05T08:22:09.010
346726
346726
[ "neural-networks", "unbalanced-classes", "roc", "accuracy" ]
608415
1
null
null
0
6
Here below, I want to compare Rate1 and Rate2. The problem is that the rate is not linear with the timespan but literature data is always given as simple division. $$ Rate_{1} = \frac{\Delta x_{arbitrary}}{t_{1}} $$ $$ Rate_{2} = \frac{\Delta x_{arbitrary}}{t_{2}} $$ Is there a logical way to normalize and compare different rates if the rate depends on the timespan?
Normalizing rate with respect to time span
CC BY-SA 4.0
null
2023-03-05T08:05:50.343
2023-03-05T08:05:50.343
null
null
382337
[ "time-series" ]
608416
1
null
null
0
26
Consider the discrete distribution below: |X |0 |1 |2 |3 |4 |5 |6 | |-|-|-|-|-|-|-|-| |$p_0$ |$a_0$ |$a_1$ |$a_2$ |$a_3$ |$a_4$ |$a_5$ |$a_6$ | |$p_1$ |$b_0$ |$b_1$ |$b_2$ |$b_3$ |$b_4$ |$b_5$ |$b_6$ | Suppose $H_0: p_0$ is the correct distribution, and $H_a: p_1$ is the correct distribution. We have the decision rule where we reject $H_0$ if $X \leq 2$. How would we find the type 1 and type 2 error? Since a Type 1 error is the probability of rejecting $H_0$ when it is actually true, I believe the probability of type 1 error with this decision rule is $a_0 + a_1 + a_2$. Since a Type 2 error is the probability of not rejecting $H_0$ when it is actually false, I believe the probability of type 2 error with this decision rule is $b_3 + b_4 + b_5 + b_6$.
Type 1 and Type 2 error with a decision rule with a discrete distributions
CC BY-SA 4.0
null
2023-03-05T08:44:16.030
2023-03-08T06:57:34.090
2023-03-08T06:57:34.090
240887
240887
[ "statistical-power", "type-i-and-ii-errors", "discrete-distributions" ]
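The question's reasoning checks out numerically; here is a small example with made-up probabilities (the $a_i$ and $b_i$ values below are mine, not from the post):

```python
import numpy as np

# Hypothetical distributions over X = 0..6 (illustrative values only)
p0 = np.array([0.05, 0.05, 0.10, 0.20, 0.25, 0.20, 0.15])  # H0: a_0..a_6
p1 = np.array([0.30, 0.25, 0.20, 0.10, 0.08, 0.05, 0.02])  # Ha: b_0..b_6
assert abs(p0.sum() - 1) < 1e-12 and abs(p1.sum() - 1) < 1e-12

# Decision rule: reject H0 when X <= 2
alpha = p0[:3].sum()   # Type I error: P(reject | H0 true) = a0 + a1 + a2
beta = p1[3:].sum()    # Type II error: P(not reject | Ha true) = b3 + ... + b6

assert round(alpha, 10) == 0.20
assert round(beta, 10) == 0.25
```

The sums mirror the formulas in the question: the Type I error accumulates the null probabilities over the rejection region, and the Type II error accumulates the alternative probabilities over its complement.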
608417
2
null
512134
1
null
I'm sure you got your answer by now from somewhere else. That said, QE is the test for residual heterogeneity while QM is the omnibus test for moderators in your model. You do have to realise that a mixed model is just a random-effects model with moderators. For your first question, I think the heterogeneity left is that due to variance. If the p-value of QM is significant, it means the moderator you included explains a great proportion of the heterogeneity and that the differences in mean ratios are likely not due to chance. I'm not sure about your last question, but read this [http://dx.doi.org/10.1027/0044-3409.215.2.104](http://dx.doi.org/10.1027/0044-3409.215.2.104) by Prof Wolfgang Viechtbauer himself.
null
CC BY-SA 4.0
null
2023-03-05T09:28:18.287
2023-03-05T09:28:18.287
null
null
293492
null
608418
1
null
null
0
84
I am working on a dataset. I built several models using 10-fold cross-validation (not a train-test split). Now, I want to plot a learning curve for each model to show whether it is overfitting, underfitting, or a good fit. When I searched for a way to do it, I found that the steps include splitting the dataset into training and testing sets. I read some similar questions but did not understand how to do it without splitting the dataset into train and test sets. Can you please explain in detail, or provide a link to, the steps for plotting a learning curve when k-fold cross-validation is used? Sorry, I am a beginner in the ML field and the Python programming language. N.B.: I built my models using the Weka platform, but it is OK to plot the curve using Python code via the Kaggle website, if needed.
How to plot a learning curve for 10-fold cross validation?
CC BY-SA 4.0
null
2023-03-05T09:28:19.347
2023-03-05T11:58:52.387
2023-03-05T11:58:52.387
362671
379079
[ "python", "cross-validation", "curve-fitting", "learning" ]
608420
1
null
null
0
48
I am analyzing the host-seeking behavior (called questing) of ticks from two populations (lab and field collected). I have ~20 percent zeros in my dataset. I had 20 ticks per enclosure (where 0 is no ticks questing and up to 20 ticks can quest; this can be expressed as a proportion). For variables I also have time of day, tree stand, collection method, and weather. I want to find if there is a way to predict whether a higher proportion of ticks will quest based on each of these variables (i.e., are ticks more likely to quest during the night, in a specific tree stand, or during a weather event, and is collection method significant for each of these?). I'm a little lost on how to model this; however, I was very interested in and tried to utilize this method [I have zero inflated data, with discrete variables. Is it possible to use zero inflated poisson model?](https://stats.stackexchange.com/questions/594209/i-have-zero-inflated-data-with-discrete-variables-is-it-possible-to-use-zero-i) to graph the probability of questing with time of day on the x axis based on stand and weather. Thanks @EdM for helping! |time |time of day |stand |weather |collection |Total_Count | |----|-----------|-----|-------|----------|-----------| |05:24 |morning |pine |rain |lab |3 | |14:12 |afternoon |oak |clear |field |0 | |20:45 |evening |birch |cloudy |lab |5 | |00:30 |night |ash |rain |field |1 | --- [Raw Data](https://docs.google.com/spreadsheets/d/1L35ZhR5EE7Ei6VYTqpgjFsPDttD6isLivJvugioeBVw/edit?usp=sharing).
Zero inflated model with prediction
CC BY-SA 4.0
null
2023-03-05T10:01:51.927
2023-03-05T11:07:57.213
2023-03-05T11:07:57.213
362671
382417
[ "r", "regression", "predictive-models", "bootstrap", "zero-inflation" ]
608424
1
null
null
0
4
In our case, we have panel data for 8 countries, in which we are testing whether GDP has any effect on GDI; the time period is 1990 to 2021. We want to compare the panel data of the top 4 countries by GDP with that of the lower 4 countries. The hypothesis is whether the impact of GDP on GDI in high-income countries differs significantly from the impact of GDP on GDI in lower-income countries. Which test can we apply to this panel series?
Panel data series comparison
CC BY-SA 4.0
null
2023-03-05T11:38:21.933
2023-03-08T03:58:38.623
2023-03-08T03:58:38.623
382422
382422
[ "regression" ]
608425
2
null
608386
3
null
Just as a side note, because you are new to both R and logistic regression, I highly recommend reading through Practical Guide to Logistic Regression in R by Joseph Hilbe, which covers how to do logistic regression in R. Given you are new to statistics in general, Learning Statistics with R by Daniel Navarro should also precede this so you have a base understanding of how statistics work. I also strongly advise against using really complicated procedures like splines or lasso regression if you are new to stats, as they require a solid understanding of a lot of statistical principles before employing them. As Demetri already pointed out, the selection of predictors should be heavily theory-driven or at least based on some research. I find it difficult to believe there isn't some research on your idea or what predictors you need, even if it isn't directly comparable. For example, my field has decades of research on the effects of morphological awareness on reading ability. This has been teased apart a bazillion ways. Morphological awareness can be sub-typed by three varieties in European scripts, can be categorized as a single construct in written scripts like Chinese, and is sometimes lumped together with other predictors of reading using factor analysis. Suppose I believe there is some element of morphological awareness that simply hasn't been tested yet. We can call it Factor M. Factor M hasn't been tested yet, but we as researchers believe it has some effect on reading. To study this, we include it in a regression using a novel measure to identify its effects.
We create a candidate model to the effect of something like this: $$ \text{Reading Comprehension} = \beta_0 + \beta_1\text{Factor M} + \epsilon $$ Perhaps we also believe that while the effect of Factor M alone is meaningful on reading comprehension, perhaps the influence of a second variable is important to control for, so we also create a candidate model with this effect, defined below with Factor N as the control variable. $$ \text{Reading Comprehension} = \beta_0 + \beta_1\text{Factor M}+ \beta_2\text{Factor N}+ \epsilon $$ With these two candidate models, we can test their effects. Thereafter we can perform model comparisons using AIC, BIC, etc. However, at no point have we invented variables out of thin air or just grabbed whatever variables we could. This is because of the following reasons: - It is a huge waste of time. In the case where we have 50 variables, we could spend forever trying to clean the data, check assumptions like linearity, estimate outliers, etc. Why waste all that effort when one can simply select predictors that have already been researched and are far more likely to yield results you believe exist? Better to save time on all of this by selecting only what you need. - It is very unscientific. This verges on HARKing (Hypothesizing After Results are Known), in that you are just applying whatever regression gives you some nice p values and then shipping your baked p-values to the nearest journal. Yet without hypothesizing before the fact what you achieved, you are conveying that you already knew this relationship existed when in fact you didn't. More importantly, you may capitalize on totally chance findings that will never be replicated with future testing. - The meaningfulness of the model may be questionable at best. Let's say we find that wind speed, the amount of oxygen I breathe, and the number of letters I type in a day all are significant predictors of math ability. Why? 
What could we possibly derive from such a model without at least some assumptions of what relationships actually exist? A regression of this variety would be pointless to entertain. So to summarize and answer your main question: > What method do you suggest for reporting significant variables? The answer is the normal way.
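As a sketch of the comparison step between the two candidate models above, here is a numpy-only example with simulated data (the variable names and effect sizes are made up for illustration; the AIC is computed from the residual sum of squares assuming Gaussian errors):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
factor_m = rng.normal(size=n)
factor_n = rng.normal(size=n)
# Simulated reading comprehension driven by Factor M only (assumption for the demo)
reading = 1.0 + 0.5 * factor_m + rng.normal(scale=1.0, size=n)

def fit_ols_aic(X, y):
    """Fit OLS by least squares and return AIC = n*log(RSS/n) + 2k."""
    X = np.column_stack([np.ones(len(y)), X])  # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1  # coefficients + error variance
    return len(y) * np.log(rss / len(y)) + 2 * k

aic_m = fit_ols_aic(factor_m.reshape(-1, 1), reading)
aic_mn = fit_ols_aic(np.column_stack([factor_m, factor_n]), reading)
print(f"AIC (M only): {aic_m:.1f}, AIC (M + N): {aic_mn:.1f}")
```

The point is only that the candidate models are fixed in advance and then compared; nothing about this requires throwing every available variable into the model.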
null
CC BY-SA 4.0
null
2023-03-05T12:40:55.430
2023-03-05T13:03:08.987
2023-03-05T13:03:08.987
345611
345611
null
608426
1
null
null
0
79
I am currently applying the [code provided in the demo of the glmmLasso package](https://rdrr.io/cran/glmmLasso/src/demo/glmmLasso-soccer.r) to my data. However, I stumbled over the part where the sample is split into 5 folds. It seems like in the provided code the sample is split in the same way as cross-sectional data would be split. Wouldn't it make more sense to split the sample such that each individual/ in this case each soccer team is only contained in one of the folds? I would have assumed that otherwise the folds cannot be regarded as independent. Here is the code of one of the proposed CV-procedures as a reference: ``` library(glmmLasso) data("soccer") ## generalized additive mixed model ## grid for the smoothing parameter ## center all metric variables so that also the ## starting values with glmmPQL are in the correct scaling soccer[,c(4,5,9:16)]<-scale(soccer[,c(4,5,9:16)],center=T,scale=T) soccer<-data.frame(soccer) lambda <- seq(500,0,by=-5) family <- poisson(link = log) ################## Elegant Cross-Validation ########################### ## Using 5-fold CV to determine the optimal tuning parameter lambda ## Idea: on each training data, similar to the previous method, start ## with big lambda and use the estimates of the previous fit (BUT: before ## the final re-estimation Fisher scoring is performed!) 
as starting values for the next fit; ## make sure, that your lambda sequence starts at a value big enough such that all ## covariates are shrunk to zero; ### set seed set.seed(1909) N<-dim(soccer)[1] ind<-sample(N,N) lambda <- seq(500,0,by=-5) kk<-5 nk <- floor(N/kk) Devianz_ma<-matrix(Inf,ncol=kk,nrow=length(lambda)) ## first fit good starting model library(MASS);library(nlme) PQL <- glmmPQL(points~1,random = ~1|team,family=family,data=soccer) Delta.start <- as.matrix(t(c(as.numeric(PQL$coef$fixed),rep(0,6),as.numeric(t(PQL$coef$random$team))))) Q.start <- as.numeric(VarCorr(PQL)[1,1]) ## loop over the folds for (i in 1:kk) { print(paste("CV Loop ", i,sep="")) if (i < kk) { indi <- ind[(i-1)*nk+(1:nk)] }else{ indi <- ind[((i-1)*nk+1):N] } soccer.train<-soccer[-indi,] soccer.test<-soccer[indi,] Delta.temp <- Delta.start Q.temp <- Q.start ## loop over lambda grid for(j in 1:length(lambda)) { #print(paste("Lambda Iteration ", j,sep="")) glm4 <- try(glmmLasso(points~transfer.spendings + ave.unfair.score + ball.possession + tackles + ave.attend + sold.out, rnd = list(team=~1), family = family, data = soccer.train, lambda=lambda[j],switch.NR=FALSE,final.re=FALSE, control=list(start=Delta.temp[j,],q_start=Q.temp[j])) ,silent=TRUE) if(!inherits(glm4, "try-error")) { y.hat<-predict(glm4,soccer.test) Delta.temp<-rbind(Delta.temp,glm4$Deltamatrix[glm4$conv.step,]) Q.temp<-c(Q.temp,glm4$Q_long[[glm4$conv.step+1]]) Devianz_ma[j,i]<-sum(family$dev.resids(soccer.test$points,y.hat,wt=rep(1,length(y.hat)))) } } } Devianz_vec<-apply(Devianz_ma,1,sum) opt4<-which.min(Devianz_vec) ## now fit full model until optimnal lambda (which is at opt4) for(j in 1:opt4) { glm4.big <- glmmLasso(points~transfer.spendings + ave.unfair.score + ball.possession + tackles + ave.attend + sold.out, rnd = list(team=~1), family = family, data = soccer, lambda=lambda[j],switch.NR=FALSE,final.re=FALSE, control=list(start=Delta.start[j,],q_start=Q.start[j])) 
Delta.start<-rbind(Delta.start,glm4.big$Deltamatrix[glm4.big$conv.step,]) Q.start<-c(Q.start,glm4.big$Q_long[[glm4.big$conv.step+1]]) } glm4_final <- glm4.big summary(glm4_final) ```
cross-validation to find optimal Lambda in glmmLasso function
CC BY-SA 4.0
null
2023-03-05T12:44:04.577
2023-03-05T12:44:04.577
null
null
297627
[ "r", "mixed-model", "cross-validation", "lasso", "glmm" ]
608427
1
null
null
2
88
I'm currently studying the textbook Reinforcement Learning by Sutton and Barto. I can't seem to understand the derivation in Equation 5.2: [](https://i.stack.imgur.com/q8Vfz.png) How did (a) become (b)? In particular, why is the probability component in (b) computed as $\frac{\pi(a|s) - \frac{\epsilon}{|A(s)|}}{1-\epsilon}?$ Thanks in advance. :)
Question on Equation 5.2 of Reinforcement Learning by Sutton and Barto
CC BY-SA 4.0
null
2023-03-05T12:48:56.790
2023-03-06T07:13:39.603
2023-03-06T02:15:53.837
341977
341977
[ "monte-carlo", "reinforcement-learning", "policy-iteration" ]
608428
1
null
null
0
12
I am currently analyzing survey data using covariance-based SEM. Here, I would like to explore how the same predictors are associated with two different (but related) behaviors. I also want to investigate how similar predictors for each behavior affect those behaviors. I am still undecided whether to run separate models and compare them or fit a single SEM with two dependent variables. I would like to know how these approaches would differ in terms of underlying statistical assumptions, and which approach would be preferable to analyze whether participants perceive these behaviors to be different and whether these behaviors are affected by the same and/or corresponding predictors.
SEM with multiple dependent variables or comparing two models
CC BY-SA 4.0
null
2023-03-05T13:02:31.860
2023-03-05T13:02:31.860
null
null
382425
[ "structural-equation-modeling" ]
608430
1
null
null
1
55
I have 2 drug treatment groups, namely Cis and RT. So, a cell is either exposed to none, Cis only, RT only, or a combination of Cis+RT. There is also another cancer modality group. I would like to perform a linear model, but I am not sure if I should use a categorical variable with 4 groups or 2 dummy variables for the drug treatment. Option 1 (`tx_group`) with categories: - NT - Cis - RT - Cis+RT or Option 2 (`cis` and `rt`): - 0 - 1 Based on the above, considering an interaction model, the formula would be either `y ~ tx_group*cancer_group` or `y~cis*rt*cancer_group`. Which one should I use?
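For reference, a quick numpy check (with hypothetical treatment assignments, not my real data) suggests the two codings span the same design space, so the choice is mainly about interpretation:

```python
import numpy as np
import pandas as pd

# Hypothetical treatment assignments covering all four combinations
df = pd.DataFrame({
    "cis": [0, 1, 0, 1, 0, 1, 0, 1],
    "rt":  [0, 0, 1, 1, 0, 0, 1, 1],
})
df["tx_group"] = df["cis"].map({0: "", 1: "Cis"}) + df["rt"].map({0: "", 1: "RT"})
df["tx_group"] = df["tx_group"].replace({"": "NT", "CisRT": "Cis+RT"})

# Option 1: intercept + dummies for the 4-level factor (drop one level as reference)
X1 = np.column_stack([np.ones(len(df)),
                      pd.get_dummies(df["tx_group"], drop_first=True).to_numpy(float)])
# Option 2: intercept + cis, rt, and their interaction
X2 = np.column_stack([np.ones(len(df)), df["cis"], df["rt"], df["cis"] * df["rt"]])

# Same column span => both parameterizations give the same fitted values
print(np.linalg.matrix_rank(X1), np.linalg.matrix_rank(X2))
print(np.linalg.matrix_rank(np.column_stack([X1, X2])))  # still 4 if the spans coincide
```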
Difference between using a categorical variable vs separate dummy variables
CC BY-SA 4.0
null
2023-03-05T12:43:05.687
2023-03-05T14:31:54.080
2023-03-05T14:31:54.080
11887
129468
[ "r", "regression", "categorical-data", "categorical-encoding" ]
608432
1
null
null
0
19
I am conducting a two-way ANOVA in R using the t2way function in the WRS2 package for my thesis. The t2way function, as you know, uses trimmed means to avoid the severe problem of heterogeneity of variance. The problem is, I don't know how to draw an interaction plot using trimmed means. I've tried to use a basic interaction plot function, but it probably doesn't have any options for trimmed means. Can I use the ggplot2 package for this? Does it have any options for it, or is there some syntax for it? I have no idea anymore, and I am not a native English speaker, so tell me if you don't understand my intention. It would be my fault.
How can I draw an interaction plot using trimmed means?
CC BY-SA 4.0
null
2023-03-05T13:20:12.547
2023-03-05T13:20:12.547
null
null
382426
[ "r", "interaction", "ggplot2", "trimmed-mean" ]
608433
1
null
null
1
58
I have numerical vectors $y$, $a$, $x$, each with length $N\approx 10^6$, representing data from an experiment. Mechanistically, $y$ is related to $a$ and $x$ in the following way ($i\in \{1,\ldots,N\}$): $$ y_i \sim \sum_{j=1}^N \big( a_i\cdot f(x_i,x_j) - a_j\cdot f(x_j,x_i) \big) $$ with some nonlinear differentiable function $f: \mathbb{R}^2 \to \mathbb{R}$, which I have no other a priori information about. I need to estimate the function $f$ that provides the best fit, i.e. that minimises the error $\sum_{i=1}^N \Big( y_i - \sum_{j=1}^N \big( a_i\cdot f(x_i,x_j) - a_j\cdot f(x_j,x_i) \big) \Big)^2 $. My idea is to represent $f$ by a neural network (2 input nodes, a few hidden nodes, 1 output node). I would initialise a random set of weights and biases, compute the error, and try to iteratively converge to the optimal weights and biases. Before I dive into this, I wanted to ask if anyone can see an easier way to estimate $f$ by using existing packages (Python/R), rather than me having to implement a custom backpropagation and gradient descent algorithm from scratch. I am also completely open to approaches other than NNs. //edit: I asked ChatGPT to adapt the Adam gradient descent approach that I usually use in sklearn to my specific problem, using the above error $\sum_{i=1}^N (y_i - \ldots)^2$ for the loss function - this solved my problem.
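For reference, the loss above can be computed without explicit double loops, which matters for any iterative fitting. A numpy sketch with a placeholder $f$ (the tanh form is purely illustrative; note that at $N \approx 10^6$ the full $N \times N$ matrix will not fit in memory and the same sums would need to be accumulated in row blocks):

```python
import numpy as np

def loss(f, x, a, y):
    """Sum of squared errors for y_i ~ sum_j (a_i f(x_i,x_j) - a_j f(x_j,x_i))."""
    F = f(x[:, None], x[None, :])          # F[i, j] = f(x_i, x_j)
    pred = a * F.sum(axis=1) - F.T @ a     # vectorized form of the double sum
    return np.sum((y - pred) ** 2)

rng = np.random.default_rng(1)
n = 500                                    # toy size; block the N x N matrix for large N
x, a = rng.normal(size=n), rng.normal(size=n)
f_true = lambda xi, xj: np.tanh(xi - xj)   # placeholder f, purely illustrative
F = f_true(x[:, None], x[None, :])
y = a * F.sum(axis=1) - F.T @ a            # noise-free target generated from f_true
print(loss(f_true, x, a, y))               # 0 by construction
```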
Estimating a trickily defined nonlinear function (e.g., via neural networks)
CC BY-SA 4.0
null
2023-03-05T13:24:52.387
2023-03-05T18:15:58.743
2023-03-05T18:15:58.743
382429
382429
[ "neural-networks", "nonlinear-regression" ]
608434
1
null
null
0
44
I would like to build a SARIMAX model where I am able to assign weights to my exogenous variables, something like this if I have two exogenous variables: ``` import pandas as pd import statsmodels.api as sm # Load the data data = pd.read_csv('data.csv', index_col='date', parse_dates=True) # Define the weights for the exogenous variables weights = [1, 0.5] # Define the SARIMAX model with weighted exogenous variables model = sm.tsa.SARIMAX(data['y'], exog=data[['x1', 'x2']], order=(1,0,1), seasonal_order=(1,0,1,12), exog_weights=weights) # Fit the model results = model.fit() # Print the summary print(results.summary()) ``` However, this doesn't seem to do anything! I tried to play around with the weights, but it doesn't have any effect. I also passed `exog_weights` to the `fit()` function instead of the SARIMAX object, but to no avail. Any idea how we can incorporate weights for exogenous variables in SARIMAX?
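For reference, as far as I can tell SARIMAX has no `exog_weights` argument at all. If the intent is to impose a fixed, known relative weighting, one possible workaround (an assumption on my part, not an established statsmodels feature) is to collapse the weighted columns into a single regressor, so the one estimated coefficient scales the fixed combination. Note that simply scaling each column separately would be absorbed into the freely estimated coefficients and change nothing:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical exogenous data standing in for data[['x1', 'x2']]
data = pd.DataFrame({"x1": rng.normal(size=24), "x2": rng.normal(size=24)})
weights = np.array([1.0, 0.5])

# Collapse the two exogenous variables into one fixed-weight combination
data["x_combined"] = data[["x1", "x2"]].to_numpy() @ weights

# Then fit as usual with the combined regressor, e.g. (not run here):
# model = sm.tsa.SARIMAX(data["y"], exog=data["x_combined"],
#                        order=(1, 0, 1), seasonal_order=(1, 0, 1, 12))
print(data["x_combined"].head())
```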
Assigning weights to exogenous variables in SARIMAX model
CC BY-SA 4.0
null
2023-03-05T13:52:36.333
2023-03-05T17:39:58.193
2023-03-05T17:39:58.193
53690
382431
[ "arima", "statsmodels", "weights" ]
608435
1
614121
null
0
46
I have been trying to find a simple way to use the bootstrap for a hypothesis test that involves more than two samples. The motivation for using the bootstrap is for the usual reasons: the test statistic is complicated; we don’t want to make parametric assumptions. One method that I think would work for my purpose is stated on [bootstrapping](https://en.wikipedia.org/wiki/Bootstrapping_(statistics)) and is called the basic bootstrap and cites the textbook Bootstrap methods and their application (Davison and Hinkley 1997, equ. 5.6 p. 194). Problem formulation: We have 16 independent observations $$ \{ x_i \}_{i = 1,}^{16} $$ where $\{ x_1, x_2, x_3, x_4 \}$, $\{ x_5, x_6, x_7, x_8 \}$, $\{ x_9, x_{10}, x_{11}, x_{12} \}$, $\{ x_{13}, x_{14}, x_{15}, x_{16} \}$ are four random samples drawn from four different populations. We denote the means of the respective populations as $\mu_1, \mu_2, \mu_3, \mu_4$. I want to test $$ H_0: (\mu_1 - \mu_2) - (\mu_3 - \mu_4) = 0 \\ H_1: (\mu_1 - \mu_2) - (\mu_3 - \mu_4) \neq 0 $$ The test statistic is $$ t = \left(\frac{x_1 + x_2 + x_3 + x_4}{4} - \frac{x_5 + x_6 + x_7 + x_8}{4}\right) - \left(\frac{x_9 + x_{10} + x_{11} + x_{12}}{4} - \frac{x_{13} + x_{14} + x_{15} + x_{16}}{4}\right) $$ Bootstrap: I resample each of the 4 sets independently. 
That is, use functions $\sigma : \{ 1, \dots, 16 \} \to \{ 1, \dots, 16 \}$ such that $$ \sigma(\{1, 2, 3, 4\}) \subseteq \{1, 2, 3, 4\} \\ \sigma(\{5, 6, 7, 8\}) \subseteq \{5, 6, 7, 8\} \\ \sigma(\{9, 10, 11, 12\}) \subseteq \{9, 10, 11, 12\} \\ \sigma(\{13, 14, 15, 16\}) \subseteq \{13, 14, 15, 16\} $$ Then the resampled statistic would be $$ t^* = \left(\frac{x_{\sigma(1)} + x_{\sigma(2)} + x_{\sigma(3)} + x_{\sigma(4)}}{4} - \frac{x_{\sigma(5)} + x_{\sigma(6)} + x_{\sigma(7)} + x_{\sigma(8)}}{4}\right) - \left(\frac{x_{\sigma(9)} + x_{\sigma(10)} + x_{\sigma(11)} + x_{\sigma(12)}}{4} - \frac{x_{\sigma(13)} + x_{\sigma(14)} + x_{\sigma(15)} + x_{\sigma(16)}}{4}\right) $$ Assume we have $N = 999$ resamples $t^*_i$ with order statistics $t^*_{(i)}$. Then using the basic bootstrap method, we would have the $95\%$ (or $\alpha = 0.05$) two-sided confidence interval $$ \left[ 2 t - t^*_{((N+1)(1-\alpha/2))}, 2 t - t^*_{((N+1)(\alpha/2))} \right] = \left[ 2 t - t^*_{(975)}, 2 t - t^*_{(25)} \right] $$ or one-sided confidence intervals $$ \left[ 2 t - t^*_{((N+1)(1-\alpha))},\infty\right) = \left[ 2 t - t^*_{(950)}, \infty \right) \\ \left(-\infty, 2 t - t^*_{((N+1)(\alpha))}\right] = \left(-\infty, 2 t - t^*_{(50)}\right] $$ Thus, we can reject $H_0$ at 5% significance if $0$ is not in this interval. Although not stated in the reference above, I believe we can also use this process to determine one-sided P-values for the test statistic $t$ using (respectively) $$ p = \frac{1 + \sum_{i=1}^{999} \mathbf{1}\{ 2t - t^*_i \geq 0 \}}{1000} \\ p = \frac{1 + \sum_{i=1}^{999} \mathbf{1}\{ 2t - t^*_i \leq 0 \}}{1000} $$ or a two-sided P-value $$ p = 2 \left(\frac{1 + \min\left(\sum_{i=1}^{999} \mathbf{1}\{ 2 t - t^*_i \geq 0 \}, \sum_{i=1}^{999} \mathbf{1}\{ 2 t - t^*_i \leq 0 \}\right)}{1000}\right) $$ (Note: the last value above can be $> 1$, in which case I would set it to $1$). 
Question: Does the above procedure for determining the confidence intervals and P-values seem correct, even though it uses a difference of four means instead of the usual two shown in most examples?
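For concreteness, here is how I would implement the resampling scheme described above (synthetic data; the group means, seed, and sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
# Four independent samples of size 4 (synthetic stand-ins for the x_i)
groups = [rng.normal(loc=m, scale=1.0, size=4) for m in (5.0, 3.0, 1.0, 1.0)]

def stat(gs):
    """(mean1 - mean2) - (mean3 - mean4)."""
    return (gs[0].mean() - gs[1].mean()) - (gs[2].mean() - gs[3].mean())

t = stat(groups)
N = 999
t_star = np.empty(N)
for b in range(N):
    # Resample each of the four groups independently, with replacement
    t_star[b] = stat([rng.choice(g, size=4, replace=True) for g in groups])

alpha = 0.05
t_sorted = np.sort(t_star)
# Basic bootstrap CI: [2t - t*_(975), 2t - t*_(25)] for N = 999
hi = round((N + 1) * (1 - alpha / 2))  # order statistic index 975
lo = round((N + 1) * (alpha / 2))      # order statistic index 25
ci = (2 * t - t_sorted[hi - 1], 2 * t - t_sorted[lo - 1])
p_two = 2 * (1 + min((2 * t - t_star >= 0).sum(),
                     (2 * t - t_star <= 0).sum())) / (N + 1)
print(ci, min(p_two, 1.0))
```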
Bootstrap P-value and confidence intervals with more than two samples
CC BY-SA 4.0
null
2023-03-05T13:55:05.720
2023-04-25T16:10:55.287
2023-03-05T14:04:45.620
362671
382320
[ "hypothesis-testing", "confidence-interval", "nonparametric", "bootstrap" ]
608436
1
608461
null
11
597
I'm making some self-study notes for Markov chain Monte Carlo (MCMC) and want to check my understanding before proceeding. After reading a few papers and tutorials this is what I've synthesised: What - We can't directly evaluate the posterior as the normalising constant is too hard to calculate for interesting problems - Instead we sample from it - We do this by engineering a Markov chain that has the same stationary distribution as the target distribution (the posterior in our case) - When we have reached this stationary state we continue to run the Markov chain and sample from it to build up our empirical distribution of the posterior How - All Markov chains are completely described by their transition probabilities - We therefore control/engineer the Markov chain by controlling the transition probabilities - All MCMC algorithms work from this principle but the exact method for generating these transition probabilities differs from algorithm to algorithm - If we have a particular algorithm for generating these transition probabilities, we can verify that it converges to the stationary distribution by using the detailed balance equation on the proposed transition probabilities - Thus the remaining challenge is to come up with a method to generate these transition probabilities
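To check my understanding of the "engineer the transition probabilities" step, here is a minimal Metropolis-Hastings sketch; the acceptance rule only needs the unnormalised target, which is the whole point of the first item above (the target and proposal here are my own toy choices):

```python
import numpy as np

def unnormalised_target(x):
    # Unnormalised density: standard normal up to a constant
    return np.exp(-0.5 * x * x)

rng = np.random.default_rng(0)
x, samples = 0.0, []
for _ in range(20000):
    proposal = x + rng.normal(scale=1.0)   # symmetric random-walk proposal
    # Accept with probability min(1, target(proposal)/target(x));
    # the normalising constant cancels, so we never need it
    if rng.random() < unnormalised_target(proposal) / unnormalised_target(x):
        x = proposal
    samples.append(x)

samples = np.array(samples[5000:])         # discard burn-in
print(samples.mean(), samples.std())       # close to 0 and 1
```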
Is this summary of MCMC correct?
CC BY-SA 4.0
null
2023-03-05T14:05:31.333
2023-03-07T18:14:49.267
2023-03-05T23:28:56.620
16974
10960
[ "self-study", "bayesian", "markov-chain-montecarlo" ]
608438
1
null
null
1
46
I am conducting a survey where people are required to rank 3 different designs from 1st to 3rd. For example, there are 3 designs - A, B and C. Participants are required to rank them in order of preference, e.g. ``` A: 3rd B: 1st C: 2nd ``` How should I conduct a hypothesis test to see if people prefer design A over the rest? I came across the Friedman test, which mentions that measured values across the 3 groups (in this case, designs) are correlated. However, in my scenario, they are mutually exclusive within the same participant, e.g. if someone chooses design A as 1st, they cannot choose designs B and C as 1st. Is the Friedman test still applicable? I also came across posts that recommended doing the Friedman test first to figure out if there are statistically significant differences between the groups, then doing pairwise comparisons with the Wilcoxon test. Is it possible to compare Design A vs Designs B+C directly, such that my alternative hypothesis would be that Design A is preferred over both Designs B & C? I was thinking I could calculate the average ranks of B & C together and compare them against Design A directly using the Wilcoxon signed-rank test.
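For reference, with the ranks arranged one row per participant, the Friedman test is a one-liner in scipy (the rank data below are made up to illustrate the layout):

```python
import numpy as np
from scipy import stats

# Hypothetical ranks: rows = participants, columns = designs A, B, C;
# each row is a permutation of 1..3, as in the survey described above
ranks = np.array([
    [1, 2, 3],
    [1, 3, 2],
    [2, 1, 3],
    [1, 2, 3],
    [1, 3, 2],
    [2, 1, 3],
    [1, 2, 3],
    [1, 2, 3],
])
stat, p = stats.friedmanchisquare(ranks[:, 0], ranks[:, 1], ranks[:, 2])
print(f"Friedman chi-square = {stat:.3f}, p = {p:.3f}")
```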
Hypothesis Test for Ranked Preference?
CC BY-SA 4.0
null
2023-03-05T14:43:06.677
2023-03-05T18:06:24.543
2023-03-05T18:06:24.543
382434
382434
[ "hypothesis-testing", "statistical-significance", "survey", "friedman-test" ]
608439
1
null
null
1
135
I have 3 variables X, Y and Z. I want to perform 3 OLS regressions: X dependent on Y and Z, Y dependent on X and Z, and Z dependent on X and Y. Instead of doing the 3 of them separately, I want to know how I can do them in a single go via: $A = B \beta + \epsilon$ Where $A$ is an `n x 3` matrix (n observations of the 3 variables) I asked ChatGPT and it suggested creating a $B$ matrix with shape (n, 6) (or 7 if we account for the intercept) where the columns would be the variables `Y, Z, X, Z, X, Y` and then regress it to get a parameters matrix beta with shape (6, 3) (or (7, 3) with the intercept). But after that I'm not sure how to interpret this resulting matrix or if it even makes sense (CGPT has definitely stopped making sense when talking about this resulting matrix and how to interpret it). Another strange thing is that even if this makes sense and I'm doing it correctly (which I doubt), I'm not getting the results that I'd expect. If I run a regression of X on Y and Z only, I get the parameters ``` array([1.84477116, 0.74949417, 0.46818174]) ``` But if I run all the regressions at the same time I get the matrix: ``` array([[ 1.32178002e-09, 9.02019792e-10, -2.18881269e-09], [ 9.83568782e-11, 5.00000000e-01, -1.72848402e-10], [ 5.98987526e-12, 2.35829134e-11, 5.00000000e-01], [ 5.00000000e-01, -9.13527032e-11, 3.39086093e-10], [ 1.90638616e-11, 3.31752403e-11, 5.00000000e-01], [ 5.00000000e-01, -4.81339413e-11, 3.72506470e-10], [ 1.19651844e-10, 5.00000000e-01, -1.60753189e-10]]) ``` Which doesn't show the results of the single OLS anywhere. Here is the code I'm using: ``` import numpy as np import pandas as pd from statsmodels.api import add_constant y = df.to_numpy() # df is a dataframe with 3 columns and n rows (observations) x = add_constant(pd.DataFrame(data=[df.iloc[:,1], df.iloc[:,2], df.iloc[:,0], df.iloc[:,2], df.iloc[:,0], df.iloc[:,1]]).T).to_numpy() # adds a new column of ones for the intercept cov = np.dot(x.T, x) inv = np.linalg.pinv(cov) H = np.dot(inv, x.T) betas = np.dot(H, y) ```
Performing 3 multivariate linear regressions at once
CC BY-SA 4.0
null
2023-03-05T14:54:35.183
2023-03-15T13:29:43.250
2023-03-05T15:05:38.873
379183
379183
[ "least-squares", "multivariate-regression" ]
608440
1
null
null
0
20
I am learning the DeepLab models. However, some concepts in the papers confused me. Receptive field (RF) and field-of-view (FOV) are two concepts mentioned in the DeepLabv1 paper. I know that the receptive field is defined as the region in the input space that a particular CNN feature is looking at. What about the field-of-view (FOV)? Are they the same?
What are the differences between receptive field (RF) and field-of-view (FOV) in the DeepLab papers?
CC BY-SA 4.0
null
2023-03-05T14:57:35.523
2023-04-02T00:47:49.413
null
null
356444
[ "neural-networks", "computer-vision", "convolution" ]
608441
2
null
549371
1
null
χ² (chi-squared) statistic of scipy.stats.chi2_contingency vs sklearn.feature_selection.chi2 It appears from reading [Scikit-learn χ² (chi-squared) statistic and corresponding contingency table](https://stackoverflow.com/questions/21281328/scikit-learn-%CF%87%C2%B2-chi-squared-statistic-and-corresponding-contingency-table) that sklearn does not perform a standard contingency table analysis when calculating the χ² statistic between two categorical variables. For example, given the data below ``` # libraries import scipy.stats as sps import pandas as pd # data data = pd.DataFrame({'gender': ['female']*60+['female']*54+['female']*46+['female']*41+['male']*40+['male']*44+['male']*53+['male']*57, 'education level': ['high school']*60+['bachelors']*54+['masters']*46+['phd']*41+['high school']*40+['bachelors']*44+['masters']*53+['phd']*57}) # contingency table contingency_table = pd.crosstab(index=data['gender'], columns=data['education level']) ``` [](https://i.stack.imgur.com/ZC1hr.jpg) the χ² statistic ``` print(sps.chi2_contingency(contingency_table, correction=False)) ``` is 8.0060 using scipy. If we were to instead use sklearn ``` # libraries from sklearn.preprocessing import LabelEncoder from sklearn.feature_selection import chi2 # label encoder for categorical features le = LabelEncoder() # transforming the categorical features data['education level le'] = pd.DataFrame(le.fit_transform(data['education level'])) data['gender le'] = pd.DataFrame(le.fit_transform(data['gender'])) ``` we would obtain ``` chi_2, p_value = chi2(data['education level le'].values.reshape(-1, 1), data['gender le'].values) print(chi_2) ``` a value of 4.6557. I am not entirely sure which of these two methods is the appropriate one to use in order to determine whether the two categorical features are independent or not. [Related question 549371](https://stats.stackexchange.com/questions/315697/%cf%87%c2%b2-chi-squared-statistic-of-scipy-stats-chi2-contingency-vs-sklearn-feature-se). 
Update (3/5/23): It looks like chi2_contingency treats the two variables jointly in a single contingency table and returns one p-value for the whole table, whereas feature_selection.chi2 computes a statistic and p-value for each feature independently of the others in producing the target. I believe that is why the two functions do not produce the same results.
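To make the scipy side explicit, the Pearson statistic from the contingency table is just Σ(O−E)²/E with the expected counts taken from the margins; computing it by hand reproduces chi2_contingency(correction=False):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Counts from the crosstab above (columns sorted alphabetically by crosstab)
observed = np.array([[54, 60, 46, 41],    # female: bachelors, high school, masters, phd
                     [44, 40, 53, 57]])   # male
row = observed.sum(axis=1, keepdims=True)
col = observed.sum(axis=0, keepdims=True)
expected = row * col / observed.sum()     # margin-based expected counts

chi2_manual = ((observed - expected) ** 2 / expected).sum()
chi2_scipy = chi2_contingency(observed, correction=False)[0]
print(chi2_manual, chi2_scipy)            # both approximately 8.006
```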
null
CC BY-SA 4.0
null
2023-03-05T15:08:41.097
2023-03-05T15:21:12.557
2023-03-05T15:21:12.557
186183
186183
null
608442
2
null
608132
0
null
Answer, from my comment: My understanding of the seasonal Mann-Kendall test is that it looks at e.g., for monthly data, the same month across the years. In your case, your time unit is a day, for one year, so there's no way it can compare e.g. January 1 to the other January 1's across years. So, I don't think the test will be meaningful in your case. You could use a different model which can include "season", however you define it. Daily data will likely have auto-correlation, so you will likely want to at least investigate this effect as well.
null
CC BY-SA 4.0
null
2023-03-05T15:25:36.043
2023-03-05T15:25:36.043
null
null
166526
null
608443
2
null
608439
0
null
In your solution, you have 7 regressors, which are Y, Z, X, Z, X, Y, which are all regressed onto X, Y, Z. In the final matrix, each regressor has a beta (= rows in the final array) onto each target (= columns in the final array). This is clearly not what you want. I would propose looking into regression theory and figuring this out on your own instead of relying on ChatGPT. The answer you got is very wrong and the code quality is also pretty bad. From the way you posed your question it also appears as if you have some basic misconceptions about how linear regression works, so diving into the theory might be a good idea. If you are a visual learner, I can highly recommend the [ritvikmath](https://youtu.be/EL-tayJzK7M) YouTube channel, which is the place I usually go to if I need to get an overview of a topic. However, there is a plethora of high quality material on OLS, so this is just one suggestion. Lastly, I would like to add one point: I have personally never heard of any scenario like the one you're describing, and it seems like such an uncommon use case that I would question whether it's really valid. Could you maybe explain a bit about what you're trying to achieve?
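To make the point concrete, the three regressions you describe really are just three independent least-squares fits, and looping over them is trivial; a numpy sketch (synthetic data in place of your df):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
data = rng.normal(size=(n, 3))              # synthetic stand-in for the X, Y, Z columns

def ols(y, X):
    """OLS with intercept via least squares; returns [intercept, slopes...]."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# One regression per variable, each on the other two columns
betas = {i: ols(data[:, i], np.delete(data, i, axis=1)) for i in range(3)}
for i, b in betas.items():
    print(f"var {i} on the others: intercept + 2 slopes = {np.round(b, 3)}")
```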
null
CC BY-SA 4.0
null
2023-03-05T15:29:28.843
2023-03-05T15:29:28.843
null
null
375987
null
608444
1
null
null
0
38
I have two user groups A and B. I want to see the difference in the effect between A and B when a condition is given compared to when the condition is not given. If the samples with and without the condition were the same, I would simply subtract the evaluation metric and compare them using a t-test. However, they're not. In this case, what statistical test should I use? To be specific, I have two user groups A and B. Half of each user group will be shown captions and half will not be shown captions while watching a video. I want to compare how much the user experience will differ between user groups A and B. My hypothesis is: Compared to when captions are given, when captions are not given, the user experience will drop more significantly for user group A than for user group B. I'm trying to test this by recruiting user groups A and B and showing them one random video among many videos. In this case, what statistical test should I use? Thank you!
What statistical test can I use to show the following hypothesis?
CC BY-SA 4.0
null
2023-03-05T15:36:50.150
2023-03-05T15:36:50.150
null
null
319408
[ "statistical-significance" ]
608445
2
null
606004
1
null
The simple answer is that Tukey's HSD isn't a test of overlapping plots. If it were, we would just look at the plots and not worry about conducting the HSD test. Note also that LFa and LK have overlapping data, and different Tukey letter assignments, if I understand how you are using "overlapping". A potential complication, or explanation, may lie in the fact that Tukey HSD has an assumption of homoscedasticity across groups. The heteroscedasticity in your groups may be causing some of the unexpected results you are seeing.
null
CC BY-SA 4.0
null
2023-03-05T15:37:20.713
2023-03-05T15:37:20.713
null
null
166526
null
608446
1
608475
null
8
735
I ran ANOVA with dependent variable IQ, independent variable field of study (3 groups: science, humanities, business), and two covariates (age and sex). I see that my result is not quite significant, but very nearly significant (p = .051). Is it still ok to run post hoc comparisons? The reason being, if one or more of the three individual pairwise comparisons is also nearly significant, I would like to report it as a trend towards significance. Thanks, FBH.
Is it ok to run post hoc comparisons if ANOVA is nearly significant?
CC BY-SA 4.0
null
2023-03-05T16:00:10.497
2023-04-06T17:53:22.743
2023-03-06T00:32:08.297
345611
128883
[ "statistical-significance", "anova", "p-value", "post-hoc" ]
608447
2
null
607075
2
null
> So far I’ve tried comparing the raw values (cm) which are generally non-normally distributed... The raw values don't have to be normally distributed. A strict requirement (typically too strict) is that the residuals between observations and modeled values should be normally distributed. What's important for statistical interpretation of regression coefficients is that the distributions of the coefficient estimates are normally distributed. A normal distribution of error terms is sufficient but not necessary for that. Having no associations between residual distributions and modeled outcome values (homoscedasticity) in a large enough study is often good enough. See [this page](https://stats.stackexchange.com/q/16381/28500) for more details. Often in biology and biochemistry the magnitudes of residuals tend to increase as a function of modeled outcomes. That can happen if error magnitudes are proportional to observed values instead of constant. A log transformation of the outcome values can sometimes solve that problem. > Is there any kind of paired batch effect standardisation across scans, biological samples or comparison to the mean or average values you’d suggest, or any smoothing techniques I could apply to the perimeter (cm) values. Or any kind of way I could say model the likelihood of one perimeter being greater than the other ? A [mixed model](https://stats.stackexchange.com/tags/mixed-model/info), in this case with biological sample as a random effect, is one good way to deal with systematic differences among samples that might need "standardization." Random intercepts allow for variation among biological samples in terms of the estimated baseline outcomes (here, at 0 mM sugar). They also can deal with missing data for particular combinations of samples and treatments. That's better than removing all observations from a biological sample just because of contamination in one trial involving it. These considerations seem to solve your problems. 
I show a start on your data below. You should work this through on your own, make sure that you understand what each step involves, and incorporate your understanding of the subject matter if there's something else that needs to be addressed. There also are more extensive tests of mixed-model quality, for example in the R [DHARMa package](https://cran.r-project.org/package=DHARMa), than what I describe. Start at modeling I took your data and changed some of the column names to fit better into R data frames. With only 3 treatment levels, it's best to model treatment with a categorical factor. ``` colonyData <- read.delim("colonyData.txt") ## set "Treatment" to factor called "Sugar" with values "0", "1", "10" colonyData$Sugar <- factor(colonyData$Sugar) colonyData$BioSample <- factor(colonyData$BioSample) colonyData$Bacterium <- factor(colonyData$Bacterium) ``` I tried a simple mixed model without transforming outcomes. The interaction `*` allows the response to sugar to differ among bacterial strains. ``` library(lme4) lme1 <- lmer(Perimeter~Sugar*Bacterium + (1|BioSample),data=colonyData) plot(lme1) ## not shown; suggests increasing residual magnitude with modeled values ``` That plot indicated that the magnitudes of residuals tended to increase with modeled values. Working with log-transformed perimeter values worked better. ``` lme2 <- lmer(log(Perimeter)~Sugar*Bacterium + (1|BioSample),data=colonyData) plot(lme2) ## not shown; much better ``` When there are multiple levels of categorical predictors then the usual model summary (not shown here) can be difficult to interpret. For a categorical predictor, it displays coefficients for the difference between each of the individual factor levels and the reference level. Thus the apparent "significance" of one level can depend on the choice of the reference level. Use post-modeling tools to estimate the combined significance of all levels of a categorical predictor. 
The standard R `anova()` function [doesn't handle unbalanced data well](https://stats.stackexchange.com/q/13241/28500). The `Anova()` function in the R [car package](https://cran.r-project.org/package=car) is one good alternative. ``` car::Anova(lme2) # Analysis of Deviance Table (Type II Wald chisquare tests) # # Response: log(Perimeter) # Chisq Df Pr(>Chisq) # Sugar 59.9987 2 9.364e-14 # Bacterium 133.7860 4 < 2.2e-16 # Sugar:Bacterium 7.8283 8 0.4504 ``` This indicates that there are differences among levels of `Sugar` (treatment) and among bacterial strains. The overall `Sugar:Bacterium` interaction isn't "significant" but that doesn't mean that it's necessarily unimportant. That's illustrated by detailed analysis of the model predictions. The [emmeans package](https://cran.r-project.org/package=emmeans) can provide reports of detailed model predictions. Its "revpairwise" comparison method, in this case, evaluates all 3 differences among `Sugar` levels for each of your bacterial strains. The `type="response"` specification lets these differences be expressed in terms of perimeter ratios. That makes sense for a model based on log-transformed perimeter values, as a difference in logs is the log of a corresponding ratio. 
``` emm2pairwise <- emmeans(lme2,revpairwise~Sugar|Bacterium, type="response") emm2pairwise$contrasts # Bacterium = Alpha: # contrast ratio SE df null t.ratio p.value # Sugar1 / Sugar0 1.87 0.306 86.9 1 3.819 0.0007 # Sugar10 / Sugar0 2.19 0.359 86.9 1 4.800 <.0001 # Sugar10 / Sugar1 1.17 0.198 88.8 1 0.951 0.6095 # # Bacterium = Beta: # contrast ratio SE df null t.ratio p.value # Sugar1 / Sugar0 1.32 0.228 87.1 1 1.625 0.2407 # Sugar10 / Sugar0 1.62 0.270 85.3 1 2.912 0.0126 # Sugar10 / Sugar1 1.23 0.211 87.1 1 1.188 0.4634 # # Bacterium = Delta: # contrast ratio SE df null t.ratio p.value # Sugar1 / Sugar0 1.45 0.250 87.1 1 2.175 0.0812 # Sugar10 / Sugar0 1.49 0.266 89.3 1 2.248 0.0688 # Sugar10 / Sugar1 1.03 0.177 87.1 1 0.151 0.9874 # # Bacterium = Epsilon: # contrast ratio SE df null t.ratio p.value # Sugar1 / Sugar0 1.35 0.233 87.1 1 1.758 0.1899 # Sugar10 / Sugar0 1.37 0.237 87.1 1 1.850 0.1597 # Sugar10 / Sugar1 1.02 0.169 85.3 1 0.095 0.9950 # # Bacterium = Gamma: # contrast ratio SE df null t.ratio p.value # Sugar1 / Sugar0 1.92 0.342 89.3 1 3.661 0.0012 # Sugar10 / Sugar0 2.19 0.389 88.2 1 4.414 0.0001 # Sugar10 / Sugar1 1.14 0.203 88.2 1 0.742 0.7393 # # Degrees-of-freedom method: kenward-roger # P value adjustment: tukey method for comparing a family of 3 estimates # Tests are performed on the log scale ``` The report incorporates an appropriate correction for [multiple comparisons](https://en.wikipedia.org/wiki/Multiple_comparisons_problem) within each bacterial strain. No strain showed a "statistically significant" difference at p < 0.05 between 1 mM and 10 mM sugar. Only 2 strains showed such a difference between 1 and 0 mM sugar, but a third showed such a difference between 10 and 0 mM. With this size of study, it looks like you need a perimeter ratio of about 1.5 to meet that standard (if arbitrary) criterion of "significance." The above doesn't deal with differences among assay dates. In principle, you could include them as random effects, also.
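As a footnote on the back-transformation used above with `type="response"`: a difference on the log scale exponentiates to a ratio on the original scale, which is all that is happening to the reported contrasts (the numbers below are made up):

```python
import numpy as np

p1, p0 = 3.7, 2.0                      # perimeters at two sugar levels (made-up numbers)
log_diff = np.log(p1) - np.log(p0)     # what the log-scale model estimates
print(np.exp(log_diff), p1 / p0)       # identical: the estimated ratio
```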
null
CC BY-SA 4.0
null
2023-03-05T16:22:39.903
2023-03-05T16:22:39.903
null
null
28500
null
608448
1
null
null
0
13
Why is HMC sampling parallelizable after the chain reaches stationarity? Is it related to some kind of sequential bottleneck?
Why is HMC sampling parallelizable after stationary?
CC BY-SA 4.0
null
2023-03-05T16:33:02.317
2023-03-05T16:33:02.317
null
null
382402
[ "hamiltonian-monte-carlo" ]
608449
1
609534
null
0
37
I was given a quarterly oil time series dataset and am trying to build an ARIMA model in Stata. There is trend and seasonality in the data. I am trying to plot the ACF and PACF to determine the orders of the AR and MA processes. I have already taken the log and differenced the data to remove the trend and seasonality and to make the data stationary, so oil became log_oil and then D.log_oil. The following is the oil data plotted against time [](https://i.stack.imgur.com/2JZGF.png) The following is the ACF plot of D.log_oil [](https://i.stack.imgur.com/eM0F9.png) There is still a decaying pattern in the ACF plot even after differencing and taking the log. Differencing a second time still retains a similar pattern.
Differenced data still showing seasonal pattern in ACF Plot
CC BY-SA 4.0
null
2023-03-05T16:38:43.580
2023-03-26T08:54:05.403
2023-03-26T08:54:05.403
22047
382439
[ "time-series", "arima", "stata", "acf-pacf" ]
608450
2
null
608342
1
null
Question 1. The censoring typical of survival data makes reliance on any simple tabular summary unreliable. If you think that an interaction might be important, include it in your model. Test for "significance" of the interaction if you wish. Unless you are overfitting your data it's a good idea to keep interaction terms that might reasonably be expected, based on your understanding of the subject matter, to be outcome-related. Showing full survival curves for different combinations of predictor values is a good way to display Cox-model results when there are interaction terms. Question 2. The data in your example would not meet the proportional hazards assumption, unless there are other covariates involved. The earlier event times for most of the females would lead to a high female/male hazard ratio at early times, but the continued presence of some event-free females in the sample later, while all males ultimately have events, would lead to a low female/male hazard ratio at late times. Changes in hazard ratios with time mean that the proportional hazards assumption doesn't hold.
null
CC BY-SA 4.0
null
2023-03-05T16:54:01.143
2023-03-05T16:54:01.143
null
null
28500
null
608451
1
608457
null
2
167
I have a classification problem. The actual outcomes are binary (0 or 1), but I want to predict probabilities, rather than predicting simply 0 or 1. I also want something with feature selection, since there are a lot of predictors. One approach that I want to try is L1-regularized logistic regression (specifically [this implementation](https://www.statsmodels.org/dev/generated/statsmodels.discrete.discrete_model.Logit.fit_regularized.html) in [statsmodels](https://www.statsmodels.org)). One has to find a value for $\alpha$, the weight of the L1-regularization. I plan to do this in the following way: - Select some potential values of $\alpha$, say 0.001, 0.01, 0.1, 1, 10 and 100. - Employ 5-fold cross validation: Fit the model on the union of the four training folds (using the aforementioned method) and then calculate the mean absolute error (MAE) on the test fold. A toy example: If the actual outcomes in the test fold are [1, 0, 1, 0] and the predicted probabilities are [0.9, 0.2, 0.8, 0.7], then the MAE is 0.3 (= (0.1 + 0.2 + 0.2 + 0.7) / 4). - Repeat step 2. for each of the five cross-validation runs and then calculate the mean MAE. A toy example: If the MAEs of the cross-validation runs are 0.2, 0.1, 0.3, 0.3 and 0.1, then the mean MAE is 0.2 (= (0.2 + 0.1 + 0.3 + 0.3 + 0.1) / 5). - Repeat steps 2. and 3. for each value of $\alpha$ given in step 1. - Choose the value of $\alpha$ with the least mean MAE. Is this a sensible approach? Is it theoretically sound or would an information criterion such as the AIC be better? There is [this nice guide](https://scikit-learn.org/stable/auto_examples/linear_model/plot_lasso_model_selection.html) from [sklearn](https://scikit-learn.org/stable/index.html), but it is for linear regression, rather than logistic regression; in any case, they use the mean squared error. The AIC takes the number of parameters into account (the fewer the better), but the cross-validation approach does not. 
Since I want feature selection, I would be willing to sacrifice some predictive accuracy for the sake of having fewer features in the model. To give a rough picture: The data contains approximately 120 features and 10000 rows. I have scaled the data. And to avoid any confusion: The approach uses the MAE only for hyperparameter tuning, not for the model fitting itself. EDIT: Another potential approach would be to calculate the likelihood of the test-fold predictions: $$ \prod_{\text{outcome is 1}}\text{predicted probability} \; \times \prod_{\text{outcome is 0}}(1 - \text{predicted probability}) $$ Would this be a better scoring method than the MAE?
MAE to find tuning parameter for lasso logistic regression
CC BY-SA 4.0
null
2023-03-05T16:59:17.407
2023-03-05T20:10:49.893
2023-03-05T20:10:49.893
1352
180214
[ "logistic", "classification", "cross-validation", "regularization", "sparse" ]
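As an aside on the scoring rules discussed in this question, the fold-level MAE and the test-fold likelihood from the EDIT can be sketched in a few lines of plain Python (function names are made up for illustration); note that the MAE of the first toy example works out to (0.1 + 0.2 + 0.2 + 0.7) / 4 = 0.3:

```python
def fold_mae(y_true, p_pred):
    # mean absolute error between 0/1 outcomes and predicted probabilities
    return sum(abs(y - p) for y, p in zip(y_true, p_pred)) / len(y_true)

def fold_likelihood(y_true, p_pred):
    # product of p for actual positives and (1 - p) for actual negatives
    out = 1.0
    for y, p in zip(y_true, p_pred):
        out *= p if y == 1 else 1.0 - p
    return out

y = [1, 0, 1, 0]
p = [0.9, 0.2, 0.8, 0.7]
print(fold_mae(y, p))         # ~0.3
print(fold_likelihood(y, p))  # ~0.1728
```

In the full procedure, these fold scores would be averaged over the five folds for each candidate $\alpha$, and the $\alpha$ with the best average score chosen.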
608453
2
null
608182
2
null
# How to simplify the visual presentation of a DAG Note of caution: You can only reasonably simplify the presentation if some parts of the DAG can be grouped together, or if not all variables are (equally) important. If things can very easily be grouped, you may want to check if you are using the right level of representation. If not everything is (equally) important, check if you really need/want to use all variables. Once you have decided on a set of variables, here are some strategies to simplify the visual presentation of a DAG connecting them. 1. Multi-dimensional variables Say you have 10 variables $(X_1, ..., X_{10})$. Perhaps the first 5 describe one concept (e.g., health indicators), and the other 5 another (e.g., education measures). If the role of variables within the groups is similar enough, you could present them as two high-level variables $H$ (health) and $E$ (education). 2. Group by DAG position Another way to reduce variables is to group by their position in the DAG. For example, if you have multiple confounders, you can simplify by using a placeholder that indicates that there are multiple variables with the same function (this is frequently used for unobservable variables, of which we do not know how many there may be). [](https://i.stack.imgur.com/IrV4V.png) 3. Other visual aids You could visually group variables into a containing shape, use colors, or different types of arrows or nodes. This could be used for example to delineate context variables from model variables. ## Non-visual presentation If your DAG is really big (e.g., in the 100s of nodes), it might make more sense to provide it in a machine-readable format like an edge list or adjacency matrix.
null
CC BY-SA 4.0
null
2023-03-05T17:16:06.213
2023-03-17T17:57:08.487
2023-03-17T17:57:08.487
44269
250702
null
608455
2
null
608446
10
null
Since multiple comparison tests are often called 'post tests', you'd think they logically follow the one-way ANOVA and should be used only when the overall ANOVA results in $p < 0.05$ (or whatever threshold you choose). In fact, this isn't so. > "An unfortunate common practice is to pursue multiple comparisons only when the null hypothesis of homogeneity is rejected." (1) With one exception, the results of multiple comparison tests (post-hoc tests) following ANOVA are valid even if the overall ANOVA did not find a statistically significant difference among means. The exception is the first multiple comparison test invented (now obsolete), the protected Fisher Least Significant Difference (LSD) test. I suggest focusing on confidence intervals of the differences between means, and not on whether any p-value is less than 0.05. And please don't ever use the phrase "trending towards significance". You actually don't know what would happen to the p-value if there were more data, so you can't state a trend (2). - J. Hsu, Multiple Comparisons: Theory and Methods, page 177, ISBN 978-0412982811 - Wood J, Freemantle N, King M, Nazareth I (2014) Trap of trends to statistical significance: likelihood of near significant P value becoming more significant with extra data. BMJ Br Medical J 348:g2215. https://doi.org/10.1136/bmj.g2215
null
CC BY-SA 4.0
null
2023-03-05T17:21:57.813
2023-03-06T00:21:13.333
2023-03-06T00:21:13.333
345611
25
null
608456
1
null
null
0
4
I'm working with the [SNHT](https://rmets.onlinelibrary.wiley.com/doi/10.1002/%28SICI%291097-0088%28199701%2917%3A1%3C25%3A%3AAID-JOC103%3E3.0.CO%3B2-J) for single shifts developed by Alexandersson. The candidate station's temporal series (56 years long, so theoretically n = 56) has five missing values. The test statistic depends on the value of n, and I'm a bit confused about what I should do: a) fill in the missing values using a linear regression on the data of a nearby station (the correlation coefficients are high); b) skip the calculation of the test statistic in the years when the candidate station's data are not available; or c) run the test anyway, taking n = 56?
Candidate station with missing data using Standard Normal Homogeneity Test (SNHT)
CC BY-SA 4.0
null
2023-03-05T17:47:59.047
2023-03-05T17:47:59.047
null
null
382441
[ "hypothesis-testing", "missing-data", "climate" ]
608457
2
null
608451
5
null
Per [the thread](https://stats.stackexchange.com/q/473702/1352) Dave [links to](https://stats.stackexchange.com/questions/608451/mae-to-find-tuning-parameter-for-lasso-logistic-regression#comment1129243_608451), minimizing the MAE will incentivize you towards biased "hard classifications": if 60% of samples with a given predictor configuration are of class A, then the MAE-optimal classification would be to predict a 100% (not 60%) probability of them to be of class A. Minimizing the MAE is thus equivalent to maximizing accuracy, [which has major problems](https://stats.stackexchange.com/q/312780/1352). I sympathize with your goal of [sparse probabilistic classification](https://www.google.com/search?q=sparse+probabilistic+classification), but minimizing the MAE is not the way to go about it.
null
CC BY-SA 4.0
null
2023-03-05T17:59:33.413
2023-03-05T17:59:33.413
null
null
1352
null
608458
1
null
null
1
79
Are there any methods that combine VI and MCMC? If such methods exist, why aren't they used more prominently than techniques such as NUTS or pure VI?
Are there any methods that combine mcmc and VI?
CC BY-SA 4.0
null
2023-03-05T18:06:55.283
2023-03-05T18:06:55.283
null
null
382402
[ "markov-chain-montecarlo", "variational-inference" ]
608459
2
null
608293
0
null
The `cluster` argument in `survreg()` doesn't model random/frailty effects. It just adjusts the coefficient variance-covariance matrix to provide robust error estimates that take within-individual correlations into account. The point estimates are thus the same as without that argument, as you found. That's not what you evidently want. The `survival` package can model gamma frailties in `coxph()` models, but not in `survreg()` models. As the [main survival vignette](https://cran.r-project.org/web/packages/survival/vignettes/survival.pdf) says in Section 5.5.3: > The penalty functions in survreg currently use the same code as those for coxph. This works well in the case of ridge and pspline, but frailty terms are more problematic in that the code to automatically choose the tuning parameter for the random effect no longer solves an MLE equation. The current code will not lead to the correct choice of penalty. The [coxme package](https://cran.r-project.org/package=coxme) handles Gaussian random effects in Cox models. I'm not convinced that you need a parametric model to accomplish what you need, as it's possible to get survival estimates from Cox models and frailties are pretty simple to interpret in the context of proportional hazards. If you do need parametric modeling, the [survival task view](https://cran.r-project.org/view=Survival) suggests possibilities in R. The [frailtypack package](https://cran.r-project.org/package=frailtypack) provides extensive tools for this type of modeling, and the [MCMCglmm package](https://cran.r-project.org/web/packages/MCMCglmm/index.html) builds corresponding Bayesian models. With your `dist="gaussian"`, what you are trying to do also goes under the name of [tobit regression](https://en.wikipedia.org/wiki/Tobit_model). Try searching under that name, too. This [UCLA web page](https://stats.oarc.ucla.edu/sas/faq/how-do-i-run-a-random-effect-tobit-model-using-nlmixed/) illustrates mixed tobit models in SAS.
null
CC BY-SA 4.0
null
2023-03-05T18:07:59.947
2023-03-05T18:07:59.947
null
null
28500
null
608460
1
null
null
0
23
Struggling with something so hoped the brilliant minds of the internet could help me out. I have a large dataset of job postings from which I have extracted the skill demand (no. of times a skill is requested) for each occupation (which I am treating as documents for TF-IDF). Essentially each row is a different occupation, and each column is a skill, so each cell is the number of times a skill appears in an occupation's job postings. An issue I have is that because of the large number of postings, all of the more general skills, and even some more specialised skills, often appear in every occupation (i.e. document) at least once, making the TF-IDF value 0. This is problematic because, although I do want skills that appear frequently in every occupation weighted down relative to that frequency, as you can imagine these skills are still important to the occupations, and I'm also getting a 0 score for some specialised skills just because they appear (however infrequently) in every occupation. I have tried adjusting the IDF formula to $idf(t) = 1 + \log \left (\frac{numDocs}{docFreq+1} \right )$ to set a lower bound, but this doesn't solve everything - is there a better way to do this? I'm thinking it might also be reasonable to treat skills that appear very infrequently in an occupation as outliers and set their frequency to 0 to partially solve this. Would love to hear anyone else's thoughts on this.
Adjusted TF-IDF where many terms appear in every document
CC BY-SA 4.0
null
2023-03-05T18:09:37.737
2023-03-05T18:14:42.523
2023-03-05T18:14:42.523
379630
379630
[ "r", "clustering", "natural-language", "text-mining", "tf-idf" ]
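To make the effect of the lower-bounded IDF concrete, here is a small sketch (numbers hypothetical) comparing the standard IDF, which zeroes out any skill appearing in every occupation, with the adjusted formula from the question:

```python
import math

def idf_standard(num_docs, doc_freq):
    # classic IDF: log(N / df); exactly 0 when a term appears in every document
    return math.log(num_docs / doc_freq)

def idf_smoothed(num_docs, doc_freq):
    # the 1 + log(N / (df + 1)) variant from the question
    return 1 + math.log(num_docs / (doc_freq + 1))

N = 100
print(idf_standard(N, N))  # 0.0 -> a ubiquitous skill contributes nothing
print(idf_smoothed(N, N))  # ~0.99 -> still carries some weight
```

The smoothed variant keeps ubiquitous terms at a small but nonzero weight while still ranking rarer terms higher, which is the trade-off described above.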
608461
2
null
608436
15
null
My comments on these assertions: > We can't directly evaluate the posterior as the normalising constant is too hard to calculate for interesting problems. Instead we sample from it. No, the normalising constant$$\int_\Theta \pi(\theta)f(x|\theta)\,\text d\theta$$being unknown is [not the issue](https://stats.stackexchange.com/q/307882/7224) that prevents handling inference from the posterior distribution. The complexity of the posterior density is the primary reason for running simulations. (The normalising constant is mostly useful to compute the evidence in Bayesian hypothesis testing.) > We do this by engineering a Markov chain that has the same stationary distribution as the target distribution (the posterior in our case) This is correct (if only one possibility). Note that [MCMC](http://amzn.to/2lQDmJR) is a general simulation method that is not restricted to [Bayesian computation](http://amzn.to/2kxykkw). > When we have reached this stationary state we continue to run the Markov chain and sample from it to build up our empirical distribution of the posterior Not exactly, as "reaching stationarity" is most often impossible to detect/assert in practice. Some [techniques](https://stats.stackexchange.com/a/176708/7224) exist, but they are not exact and mileage varies. [Exact (or perfect) sampling](http://amzn.to/2lQDmJR) is restricted to some ordered settings and very costly. However, the [ergodic theorem](https://stats.stackexchange.com/q/565429/7224) validates the use of Monte Carlo averages in this setting without "waiting" for stationarity. > All Markov chains are completely described by their transition probabilities. The generic term is transition kernel, as the target distribution often is absolutely continuous. Some MCMC methods use [continuous time processes](https://en.wikipedia.org/wiki/Piecewise-deterministic_Markov_process), in which case there is no transition kernel stricto sensu. 
> We therefore control/engineer the Markov chain by controlling the transition probabilities. All MCMC algorithms work from this principle but the exact method for generating these transition probabilities differs between algorithms. Markov chain Monte Carlo algorithms are indeed validated by the fact that their transition kernel ensures stationarity for the target distribution$$\pi(\theta'|x) = \int_\Theta \pi(\theta|x)K(\theta,\theta')\,\text d\theta\tag{1}$$ > If we have a particular algorithm for generating these transition probabilities, we can verify that it converges to the stationary distribution by using the detailed balance equation on the proposed transition probabilities No, [detailed balance](https://stats.stackexchange.com/q/45743/7224) is not a necessary condition for stationarity wrt the correct target. Take for instance the Gibbs samplers or the Langevin version ([MALA](https://stats.stackexchange.com/q/234897/7224)), which are usually not reversible and hence do not satisfy [detailed balance](https://stats.stackexchange.com/q/45743/7224). They are nonetheless valid and satisfy global balance (1). > Thus the remaining challenge is to come up with a method to generate these transition probabilities Not really, since there exist families of generic MCMC algorithms such as random walk Metropolis-Hastings algorithms or Hamiltonian Monte Carlo. The challenge lies more in calibrating a given algorithm or choosing between algorithms.
null
CC BY-SA 4.0
null
2023-03-05T18:09:55.480
2023-03-07T18:14:49.267
2023-03-07T18:14:49.267
7224
7224
null
608462
2
null
597719
1
null
For a normal random variable, the moment-matching estimator (MME) for the mean is the maximum likelihood estimate (MLE). For the variance, the MME and the MLE differ just by the bias adjustment ( n/(n-1) ), so asymptotically they will coincide, and the MME will have the asymptotic properties of the MLE. You can observe this in the [Wikipedia article about the Normal distribution](https://en.wikipedia.org/wiki/Normal_distribution#Statistical_inference) [The MLE has the properties you mention](https://en.wikipedia.org/wiki/Maximum_likelihood_estimation#Properties): consistency, efficiency, asymptotic normality, etc. But the empirical mean of a sample coming from a normal distribution actually has a normal distribution; this holds not only asymptotically but for any sample size. You can understand this easily if you recall that the sum of two normal random variables is itself a normal random variable. The mean is a sum of normal random variables, multiplied by a scalar (1/n), so it is also a normal random variable. There should be a wealth of material for a step by step proof, [here is one.](https://www.youtube.com/watch?v=TuBAhUK9fWc)
null
CC BY-SA 4.0
null
2023-03-05T18:36:11.887
2023-03-05T18:36:11.887
null
null
382445
null
608463
1
608470
null
1
31
Two related questions from a statistics noob. - Can someone recommend a good statistics textbook that covers the Wilson confidence interval? The reason why I'm looking for the Wilson CI specifically is that I am a ML practitioner and recently I had to estimate a CI for the precision and recall of a certain ML model. My undergrad probability theory class didn't go that far, so a data scientist at my company told me to use the Wilson confidence interval. I was embarrassed that I hadn't even heard of that term, yet it appears that the Wilson confidence interval is an industry standard, at least in my FAANG company. I checked a bunch of popular statistics textbooks and none of them talk about the Wilson CI, or any other CI for a population ratio, or how to measure a CI of precision/recall. Do I need to look into a more specific area within statistics to learn about that? - Is the Wilson confidence interval a correct, or at least reasonable, way to estimate a CI for precision? After checking the wikipedia article on it, I understand why it's good for recall, since the denominator for recall is the entire population of true positives, so I can see how evaluating the model score on each sample is a Bernoulli trial. But it seems to me that it's not applicable to precision in the same way, since the denominator of precision is the set of all samples with model score greater than the threshold, and so it varies with the threshold. And we would have to consider the Bernoulli trial to be looking at the label of each example, but this doesn't make sense to me since the label is actually known before we train and evaluate the model.
Understanding Wilson confidence interval for estimating precision of ML model
CC BY-SA 4.0
null
2023-03-05T19:07:34.630
2023-03-05T22:20:13.813
null
null
382446
[ "classification", "estimators" ]
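For what it's worth, the Wilson score interval itself is short enough to write out directly (a sketch; here applied to a hypothetical precision estimate of 50 true positives out of 100 predicted positives):

```python
import math

def wilson_interval(successes, n, z=1.96):
    # Wilson score interval for a binomial proportion successes / n
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_interval(50, 100)
print(lo, hi)  # roughly (0.404, 0.596)
```

For precision, `n` is the number of predicted positives in the evaluation set and `successes` is the number of those that are truly positive.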
608464
2
null
449100
3
null
Word2vec is popular because it's simple while being good enough. Because it is simple, it's fast and needs less expensive hardware to run. Yes, using one vs two layers does not seem to make much difference, but if you need to run it in a production environment, all those milliseconds can add up to something quite big over millions of calls. There are a lot of NLP deep neural networks, including huge ones. It's about picking the right tool for the job; you don't need a laser knife for a job doable by a rusted axe.
null
CC BY-SA 4.0
null
2023-03-05T19:09:26.387
2023-03-05T19:09:26.387
null
null
35989
null
608465
1
null
null
1
28
I actually have two original datasets (one for each departure; the two departures are related to each other in a specific way, but it's not important to know how exactly). These 2 datasets contain some outliers in the column 'value' that I deleted, which led to the creation of 2 new filtered datasets. My main objective is to impute the deleted values, but additionally I want the imputed values to respect a certain constraint: the relative difference between "the sum of original values of both departures" (Y1 + Y2) and "the sum of imputed values of both departures" (X1 + X2) must be below a certain threshold (percentage epsilon). I initialized the values with a KNN method. This is what I wrote for my code ``` # Huber loss function def huber_loss(x, y, eps): diff = np.abs(y - x) mask = diff <= eps loss = 0.5 * (diff**2) * mask + eps * (diff - 0.5 * eps) * (1 - mask) return np.sum(loss) # objective function def objective(x1, x2, y1, y2, eps, lam): mse = np.mean((y2 + y1 - x2 - x1)**2) constraint_loss = huber_loss(x1 + x2, y1 + y2, eps) return mse + lam*constraint_loss # imputation with a constraint over the two departures def constrained_imputation(data1=pd.DataFrame, data2=pd.DataFrame, df1=pd.DataFrame, df2=pd.DataFrame, eps=400, lam=0.7, max_iter=10000, tol=1e-9, alpha=0): # locate the indices of the missing values value_missing = data1['value'].isnull() indexes_missing = np.where(value_missing)[0] # retrieve the true values over the load-transfer periods y1 = df1['value'][indexes_missing].values y2 = df2['value'][indexes_missing].values # KNN imputation on the two departures, used as initialization imputer1 = KNNImputer(n_neighbors=5) X = data1.drop(['horodate','gdo','Unnamed: 0'], axis=1) x01 = imputer1.fit_transform(X) x01 = x01[:,0] x01 = x01[indexes_missing] imputer2 = KNNImputer(n_neighbors=5) X = data2.drop(['horodate','gdo','Unnamed: 0'], axis=1) x02 = imputer2.fit_transform(X) x02 = x02[:,0] x02 = x02[indexes_missing] # define the optimization objective fun = lambda x: objective(x[:len(indexes_missing)], x[len(indexes_missing):], y1, y2, eps, lam) # starting vector x0 for the solver x0 = np.concatenate([x01, x02]) # minimize the objective function result = minimize(fun, x0, method='L-BFGS-B', options={'maxiter': max_iter, 'ftol': tol}) # extract the imputed values x1_imputed = result.x[:len(indexes_missing)] x2_imputed = result.x[len(indexes_missing):] # build the final tables df_imputed_1, df_imputed_2 = data1.copy(), data2.copy() df_imputed_1['value'][indexes_missing] = x1_imputed df_imputed_2['value'][indexes_missing] = x2_imputed return df_imputed_1, df_imputed_2 ``` But I feel like even when I tune the values of the function's parameters, it doesn't really change the number of imputed values that satisfy the constraint. I think the problem might be caused by the objective function. What do you think about this? What objective functions could I use for this problem, or is there another possible method to impute under a specific constraint?
Constrained imputation in Python
CC BY-SA 4.0
null
2023-03-05T19:50:44.780
2023-03-05T19:50:44.780
null
null
382447
[ "python", "optimization", "data-imputation", "constrained-optimization" ]
608466
1
null
null
1
42
I have data from 3 different sources, measuring different variables for different samples taken from the same population (a country). All of the data is from country-wide studies and should be representative of the country's population. E.g. I have data from source 1 about country-wide demographic variables (census-type), data from source 2 about certain beliefs of the population, and data from source 3 about another set of beliefs. I am comparing two countries and have the same data for each of them. I was wondering if there is any way to analyse the relationship between variables in this situation, even if from different sources, as they are all measures from the same population. What I am trying to analyse is the relationship between the two sets of beliefs from source 2 and 3. I only need the demographic variables as there is evidence they affect the set of beliefs from source 2, so I guess I could just use the demographic details from source 2 instead of the census, but I still would need to use data from a different source for the other set of beliefs. If it is possible to use/combine data in this way are there any sources I could read to help me through it? I am very very new to all this so sorry about the confusion. EDIT: To be clearer, I have 3 sets of data for two countries: - Set of beliefs 1, taken from the World Values Survey - Set of beliefs 2, taken from a separate survey (but corroborated by findings from other surveys as well) - Demographic data for both countries at an aggregate level, taken from each country's National databanks/censuses My hypothesis is that Beliefs 1 have a relationship to Beliefs 2. As Beliefs 2 are described in the literature as being influenced by certain demographic variables, I also need to consider demographic data to account for that. 
I am performing the analysis on each country separately and then comparing the results to see if they are similar (the hypothesis holds up in different contexts) or not (something else is at play).
Analysis with data from different sources
CC BY-SA 4.0
null
2023-03-05T20:18:57.683
2023-03-06T16:15:29.307
2023-03-06T16:15:29.307
382486
382486
[ "regression", "hypothesis-testing", "multivariate-analysis", "regression-strategies", "research-design" ]
608468
1
null
null
1
62
Paper: [Isolating Sources of Disentanglement in VAEs](https://arxiv.org/pdf/1802.04942.pdf) [](https://i.stack.imgur.com/AO1rh.png) I follow as far as, $$\mathbb{E}_{q(z)}[\log q(z)] = \mathbb{E}_{q(z, n)}[\ \log\ \mathbb{E}_{n'\sim\ p(n)}[q(z|n')]\ ]$$ Subsequently, I don't follow how they get the following and beyond that: $$\mathbb{E}_{q(z, n)}\left[\ \log\ \mathbb{E}_{p(\mathbb{B_m})}\left[\frac{1}{M}\sum_{m = 1}^Mq(z|n_m)\right]\right ]$$
In the β-TCVAE paper, can someone help with the derivation (S3) in Appendix C.1?
CC BY-SA 4.0
null
2023-03-05T20:51:16.733
2023-03-05T21:47:56.723
2023-03-05T21:47:56.723
371267
371267
[ "machine-learning", "autoencoders", "variational-bayes", "importance-sampling", "variational" ]
608469
1
null
null
0
43
The question I tried to solve, but failed, goes like this: Find the expected value of $(\bar{X}_n)^2$ and find an unbiased estimator for $\mu^2$. This is the solution given by the TA: $$E[(\frac{1}{n}\sum^n_{i=1} X_i)^2]$$ $$= E[\frac{1}{n^2}\sum^n_{i=1} X_i \sum^n_{k=1} X_k]$$ $$= \frac{1}{n^2}\sum^n_{i=1} E[X^2_{i}] + \frac{1}{n^2}\sum^n_{i=1} \sum_{k \neq i} E[X_i]E[X_k]$$ $$= \frac{1}{n}(\sigma^2 + \mu^2) + \frac{n-1}{n}\mu^2$$ $$= \mu^2 + \frac{\sigma^2}{n}$$ Hence, the unbiased estimator is: $$(\bar{X}_n)^2 - \frac{\hat{\sigma}^2}{n}$$ I have a few questions about this. The TA will take days to answer and I can't wait that long: - Where did this $X_k$ come from? Why didn't he just use $\frac{1}{n}\sum^n_{i=1} X_i$ twice? - I don't understand the second line at all. Why is there $\frac{1}{n^2}$ two times? And how did he separate the sum into two parts like that? - The transition from the second line to the third line is also unclear. Can anyone please assist me with this question? Thank you.
Finding the expected value for the mean squared
CC BY-SA 4.0
null
2023-03-05T21:27:45.743
2023-03-05T23:29:13.710
2023-03-05T23:29:13.710
296197
362803
[ "self-study", "mathematical-statistics", "mean", "expected-value" ]
608470
2
null
608463
1
null
Yes, the Wilson interval is one of the good options for a confidence interval for precision (which I would call positive predictive value). It's true that the label is determined before you know the prediction when you're testing the model, but that doesn't matter. You can still ask "among those who scored above the threshold, what proportion are truly positive", and you still answer that question by finding everyone in the sample who scores above the threshold and counting up the number who are actually positives and the number who are actually negatives. If someone gives you a bunch of records and says "these all scored above the threshold in the validation test; go look up their labels", you still learn an independent binary piece of information for each record when you look them up. And the total amount of information you learn depends on how many records are in the bunch (even if it's random). Formally, if you have pairs of binary observations $(T_i, X_i)$ on $n$ individuals, where $P(T_i=1)$ depends on $X_i$, the distribution of $X_i$ conditional on $T_i=1$ is still Bernoulli, and the distribution of $\sum_i X_i$ conditional on $\sum_i T_i$ is still Binomial. In training you have already measured $X_i$ before you compute $T_i$. In production use you haven't measured $X_i$ when you compute $T_i$, and in validation it can go either way. It doesn't matter for the math -- these are just correlational quantities, and direction in time isn't important.
null
CC BY-SA 4.0
null
2023-03-05T22:20:13.813
2023-03-05T22:20:13.813
null
null
249135
null
608471
2
null
608469
0
null
Here is a more expanded version of the derivation: \begin{align} E\left[\bar X_n^2\right] &= E\left[\left(\frac{1}{n}\sum^n_{i=1} X_i\right)^2\right] \\ &= E\left[\left(\frac{1}{n}\sum^n_{i=1} X_i\right)\left(\frac{1}{n}\sum^n_{i=1} X_i\right)\right] \\ &= E\left[\left(\frac{1}{n}\sum^n_{i=1} X_i\right)\left(\frac{1}{n}\sum^n_{k=1} X_k\right)\right] \\ &= E\left[\frac{1}{n^2}\left(\sum^n_{i=1} X_i\right)\left(\sum^n_{k=1} X_k\right)\right] \\ &= E\left[\frac{1}{n^2}\left(\sum^n_{i=1} X_i\right)a\right] \\ &= E\left[\frac{1}{n^2}\sum^n_{i=1} a X_i\right] \\ &= E\left[\frac{1}{n^2}\sum^n_{i=1} \left(\sum^n_{k=1} X_k\right) X_i\right] \\ &= E\left[\frac{1}{n^2}\sum^n_{i=1} \left(\sum^n_{k=1} X_kX_i\right)\right] \\ &= \frac{1}{n^2}\sum^n_{i=1} \sum^n_{k=1} E\left[X_kX_i\right] \\ \end{align} Notice that, in the inner summation, as we are iterating over $k$, and because $k$ and $i$ both range from $1$ to $n$, then there will come a point when $k = i$. This means that the inner summation becomes $$\sum^n_{k=1} E\left[X_kX_i\right] = E[X_iX_i] + \sum_{k \neq i} E[X_kX_i] = E[X_i^2] + \sum_{k \neq i} E[X_kX_i]$$ and so \begin{align} E\left[\bar X_n^2\right] &= \frac{1}{n^2}\sum^n_{i=1} \sum^n_{k=1} E\left[X_kX_i\right] \\ &= \frac{1}{n^2}\sum^n_{i=1} \left(E[X_i^2] + \sum_{k \neq i} E[X_kX_i]\right) \\ &= \frac{1}{n^2}\sum^n_{i=1} E[X_i^2] + \frac{1}{n^2}\sum^n_{i=1}\sum_{k \neq i} E[X_kX_i] \end{align} Because $X_k$ and $X_i$ are assumed to be independent and identically distributed for $k \neq i$, then $E\left[X_kX_i\right] = E[X_k]E[X_i]$ and $E[X_k] = E[X_i]$ and so \begin{align} E\left[\bar X_n^2\right] &= \frac{1}{n^2}\sum^n_{i=1} E[X_i^2] + \frac{1}{n^2}\sum^n_{i=1}\sum_{k \neq i} E[X_k]E[X_i] \end{align} I'll leave the rest up to you.
null
CC BY-SA 4.0
null
2023-03-05T22:27:27.647
2023-03-05T22:27:27.647
null
null
296197
null
608473
1
null
null
0
9
I have a collection of $N$ sets of data $(x_i, y_i)$ that I believe follow the model $$ y_i = a x_i $$ where $a(b)$ is a linear function of some parameter $b$ that is known for each $i$. My process is first to fit a line to each set $(x_i, y_i)$, which gives me the pairs $(a_i,\sigma_i)$. I then fit a line to $(a_i,b_i)$, which gives me $$ a_i = c b_i + d $$ along with an error for each parameter, $\sigma_c$ and $\sigma_d$. My question is how do I correctly propagate the errors $\sigma_i$ through this problem to get the total error associated with both $c$ and $d$. My thought was it should look something like $$ \sigma = \frac{1}{N+1}\sqrt{\sigma_c^2 + \sum_i \sigma_i^2} $$ but am not sure if this is the correct approach given the regression. Any advice is appreciated!
Error propagation through multiple fitting
CC BY-SA 4.0
null
2023-03-05T23:22:28.837
2023-03-05T23:22:28.837
null
null
382455
[ "regression", "error", "error-propagation" ]
608475
2
null
608446
8
null
I highly recommend reading [Midway et al., 2020](https://peerj.com/articles/10387), which is probably the best article I have ever read that summarizes pairwise comparisons for ANOVA. Along with the guidelines they provide for how to properly utilize these post-hoc tests, one of the opening paragraphs states this: > The classic ANOVA (ANalysis Of Variance) is a general linear model that has been in use for over 100 years (Fisher, 1918) and is often used when categorical or factor data need to be analyzed. However, an ANOVA will only produce an F -statistic (and associated p-value) for the whole model. In other words, an ANOVA reports whether one or more significant differences among group levels exist, but it does not provide any information about specific group means compared to each other. Additionally, it is possible that group differences exist that ANOVA does not detect. For both of these reasons, a strong and defensible statistical method to compare groups is nearly a requirement for anyone analyzing data. For this reason, it is useful to explore differences between groups even if there is no significant ANOVA. Remember that the F statistic derived from an ANOVA simply tests the overall variance between groups. How some groups compare to each other cannot be understood without exploring further.
null
CC BY-SA 4.0
null
2023-03-06T00:28:59.013
2023-03-06T00:28:59.013
null
null
345611
null
608476
1
null
null
0
20
Hi, I am writing a critical review for one of my assignments, and I am struggling to identify the unit of observation. I have written a critical review for one of my assessments at university, which is for a data analysis course that looks at research design in the social sciences. I have since second-guessed myself as to whether my judgement was correct, and am seeking a further opinion. The article looks at whether native Swiss commuters are likely to help high- to low-status immigrants, and operationalises this by measuring whether native commuters assist confederate immigrants. The researchers then run a regression analysis on the data collected. The unit of analysis is at the group level (immigrant groups), as that is what they are making inferences about. I had originally thought that the unit of observation was at the individual level, as they collected data on the native commuters such as age, ethnicity and gender (this, however, was a guess, as it was done covertly). I then wondered if it was actually at the group level, as they are making inferences about immigrant groups. The link to the article is below in case that is preferred over my explanation: [https://academic.oup.com/esr/article/35/4/582/5512302#140154323](https://academic.oup.com/esr/article/35/4/582/5512302#140154323)
obtaining the unit of observation for critical review of a research design
CC BY-SA 4.0
null
2023-03-06T00:51:32.453
2023-03-06T02:47:53.870
2023-03-06T02:47:53.870
382460
382460
[ "research-design" ]
608477
1
608487
null
6
267
#### Describe the bug I'm attempting to replicate a GEE model in statsmodels from a published paper that used SPSS ([https://pubmed.ncbi.nlm.nih.gov/33279717/](https://pubmed.ncbi.nlm.nih.gov/33279717/)). I am getting very different answers for what seems like the same input structure. I even signed up for a free trial of SPSS and can confirm SPSS gives the answers reported in the paper. The input matrices are being loaded from the same .csv (and I filter using pandas to achieve the same dataframe as in SPSS). #### Code Sample, a copy-pastable example if possible ``` USE ALL. COMPUTE filter_$=(BehTaskNum = 1 or BehTaskNum = 2 or (BehTaskNum = 3 and BlockNumber = 6)). FILTER BY filter_$. EXECUTE. GENLIN DifferenceScore BY White Right (ORDER=ASCENDING) /MODEL White Right White*Right INTERCEPT=YES DISTRIBUTION=NORMAL LINK=IDENTITY /CRITERIA SCALE=MLE PCONVERGE=1E-006(ABSOLUTE) SINGULAR=1E-012 ANALYSISTYPE=3(WALD) CILEVEL=95 LIKELIHOOD=FULL /REPEATED SUBJECT=participantID SORT=YES CORRTYPE=EXCHANGEABLE ADJUSTCORR=YES COVB=ROBUST MAXITERATIONS=1000 PCONVERGE=1e-006(ABSOLUTE) UPDATECORR=1 /PRINT CPS DESCRIPTIVES MODELINFO FIT SUMMARY SOLUTION. ``` ``` fam = sm.families.Gaussian(link=sm.families.links.identity) ind = sm.cov_struct.Exchangeable() GEE_model = smf.gee("DifferenceScore ~ White * Right", groups="ParticipantID", data=stim_df_with_facename,cov_struct=ind, family=fam) stim_model_out = GEE_model.fit(maxiter=1000) stim_model_out.summary() ``` SPSS results: [](https://i.stack.imgur.com/Pakdx.png) statsmodel results: [](https://i.stack.imgur.com/nIhSq.png) The results aren't even close (seems statsmodels isn't converging--and I've tried up to 10000 iterations but get the same result). 
I should point out if I run a model with an additional predictor (White*Right+Macro) the results are closer...but still quite a bit different: SPSS results: [](https://i.stack.imgur.com/bu4Ic.png) statsmodel results: [](https://i.stack.imgur.com/4siTX.png) I'm much more familiar with Mixed Effect models...but trying those in statsmodels were not replicating the GEE results either (even though in principle they should be similar).
Wildly different answers replicating a GEE model from SPSS
CC BY-SA 4.0
null
2023-03-06T01:14:38.103
2023-03-06T03:51:32.060
2023-03-06T01:40:18.947
362671
295527
[ "spss", "statsmodels", "generalized-estimating-equations" ]
608478
1
608574
null
1
41
The primary objective of Bayesian inference is to compute the posterior. For instance, if the posterior $p(\theta | x)$ is known then the expectation of a test function $\tau(\theta)$ under the posterior $p(\theta | x)$ can be computed as $E[\tau | x] = \int d\theta \, \tau(\theta) p(\theta | x)$. To make a prediction $x'$ from the data distribution $p(x | \theta)$, assuming $x'$ and $x$ are independent of each other, the posterior predictive distribution of $x'$ is $p(x' | x) = \int d\theta p(x' | x, \theta) p(\theta | x) = \int d\theta p(x' | \theta) p(\theta | x)$. How should I convince myself on an intuitive level of the necessity for the integration over all $\theta$ in order to compute the posterior predictive?
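One toy example that helped me (my own addition, with made-up Gaussian choices): with a Gaussian posterior over $\theta$ and Gaussian data, plugging in a single point estimate of $\theta$ understates the predictive spread, while integrating over $\theta$ correctly adds the posterior uncertainty to the data noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300_000

# posterior: theta | x ~ N(0, 1); data model: x' | theta ~ N(theta, 1)
theta = rng.normal(0.0, 1.0, size=n)
x_new = rng.normal(theta, 1.0)   # sampling theta first integrates it out

plugin_var = 1.0                 # variance if we just plug in theta = 0
predictive_var = x_new.var()     # should be near 1 + 1 = 2
```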
why does posterior prediction involve integration over all parameter space?
CC BY-SA 4.0
null
2023-03-06T01:19:29.177
2023-03-06T21:59:14.877
null
null
109101
[ "bayesian", "inference" ]
608479
2
null
608358
1
null
#### GLMM Scaling The reason you are getting different answers on a thread from GLMM and a thread on GAM(M)s is that scaling affects each differently. Regarding GLMMs, there are generally a number of reasons for transforming the data, which may include: - The data is not linear and a simple transformation may make the relationship linear. - There is an interaction and the scales of each variable involved are not comparable. - The response variable is not normally distributed, and transforming it to be normal allows one to apply a Gaussian mixed effects model to the data. Specific to the interaction case, here is a useful quote from Harrison et al., 2018 that highlights why this is specifically done for standardized scaling: > Transformations of predictor variables are common, and can improve model performance and interpretability (Gelman & Hill, 2007). Two common transformations for continuous predictors are (i) predictor centering, the mean of predictor x is subtracted from every value in x, giving a variable with mean 0 and SD on the original scale of x; and (ii) predictor standardising, where x is centred and then divided by the SD of x, giving a variable with mean 0 and SD 1. Rescaling the mean of predictors containing large values (e.g. rainfall measured in 1,000s of millimetre) through centring/standardising will often solve convergence problems, in part because the estimation of intercepts is brought into the main body of the data themselves. Both approaches also remove the correlation between main effects and their interactions, making main effects more easily interpretable when models also contain interactions (Schielzeth, 2010). Note that this collinearity among coefficients is distinct from collinearity between two separate predictors (see above). Centring and standardising by the mean of a variable changes the interpretation of the model intercept to the value of the outcome expected when x is at its mean value. 
Standardising further adjusts the interpretation of the coefficient (slope) for x in the model to the change in the outcome variable for a 1 SD change in the value of x. Scaling is therefore a useful tool to improve the stability of models and likelihood of model convergence, and the accuracy of parameter estimates if variables in a model are on large (e.g. 1,000s of millimetre of rainfall), or vastly different scales. When using scaling, care must be taken in the interpretation and graphical representation of outcomes. From personal experience, not scaling an interaction almost always leads to model convergence failure unless the predictors are on very similar scales, so it can often be a matter of practical importance. However, for other transformations of the data, it depends on what you are trying to achieve (such as normality, linearity, etc.). #### GAMM Scaling I was the one who originally answered the question you linked and it's important to recognize the context of what I was stating there. First, I don't know if they understood the `gam` function arguments so they had applied it blindly without understanding what they did. Second, my answer is more specific to standardized scaling, which typically involves transforming data from raw scores to z-scores. This is generally a bad idea for GAMMs because it can totally mess up the interpretation of the model due to the lack of context it provides. However, that doesn't mean that scaling or transformation in general is bad. A great example is from Pedersen et al., 2019, which highlights a GAMM that includes log concentration of CO2 and log uptake of CO2 for some plants. They don't show the original data they applied this to, but I suspect they did this for reasons similar to your plots in your Plot 1 area. 
When data is "smooshed into the left corner" as I horribly describe it, it is typical for people to use a log-log regression in the linear case to spread out the distribution of values to be more meaningful. I imagine this was applied to similar effect in the GAMM data. For examples of this kind of regression and why it is done, I recommend reading Chapter 3 of Regression and Other Stories, which has a worked example in R. In any case, you can theoretically scale the data, just understand that your interpretation of the data will have to change with it, which is why caution should be taken when doing so. In the case where data is transformed from log-log, they are no longer in raw form and represent percent increases/decreases along the x/y axes. #### Citations - Gelman, A., Hill, J., & Vehtari, A. (2022). Regression and other stories. Cambridge University Press. - Harrison, X. A., Donaldson, L., Correa-Cano, M. E., Evans, J., Fisher, D. N., Goodwin, C. E. D., Robinson, B. S., Hodgson, D. J., & Inger, R. (2018). A brief introduction to mixed effects modelling and multi-model inference in ecology. PeerJ(6), e4794. https://doi.org/10.7717/peerj.4794 - Pedersen, E. J., Miller, D. L., Simpson, G. L., & Ross, N. (2019). Hierarchical generalized additive models in ecology: An introduction with mgcv. PeerJ(7), e6876. https://doi.org/10.7717/peerj.6876
null
CC BY-SA 4.0
null
2023-03-06T01:21:02.597
2023-03-06T01:40:01.930
2023-03-06T01:40:01.930
345611
345611
null
608485
2
null
49528
1
null
If you want to see the differences in a formula, [this](https://medium.com/intuitionmath/difference-between-batch-gradient-descent-and-stochastic-gradient-descent-1187f1291aa1) might help. [](https://i.stack.imgur.com/wXXGP.png) In above equation, m indicates the number of training data points. In Batch Gradient Descent, As the yellow circle shows, in order to calculate the gradient of the cost function, we add up the cost of each sample. If we have 3 million samples, we have to loop through all 3 million samples or use the dot product. ``` def gradientDescent(X, y, theta, alpha, num_iters): """ Performs gradient descent to learn theta """ m = y.size # number of training examples for i in range(num_iters): y_hat = np.dot(X, theta) theta = theta - alpha * (1.0/m) * np.dot(X.T, y_hat-y) return theta ``` Do you see `np.dot(X.T, y_hat-y)` above? That’s the vectorized version of “looping through (summing) all 3 million samples”. Wait... just to move a single step towards the minimum, do we really have to calculate each cost 3 million times? Yes. If you insist on using the batch gradient descent. But if you use Stochastic Gradient Descent, you don’t have to! [](https://i.stack.imgur.com/eqeAE.png) In SGD, we use the cost gradient of ONE (1) example at each iteration, instead of adding up and using the costs of ALL examples. [Image Source](https://medium.com/intuitionmath/difference-between-batch-gradient-descent-and-stochastic-gradient-descent-1187f1291aa1)
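For comparison, here is a minimal SGD counterpart to the batch version above (my own sketch; variable names mirror the batch code, and the toy data are made up). It updates theta from ONE randomly chosen example per step instead of summing over all examples.

```python
import numpy as np

def stochastic_gradient_descent(X, y, theta, alpha, num_iters, seed=0):
    """Update theta using the gradient of ONE example per iteration."""
    rng = np.random.default_rng(seed)
    m = y.size
    for _ in range(num_iters):
        i = rng.integers(m)                      # pick one example at random
        y_hat_i = X[i] @ theta
        theta = theta - alpha * (y_hat_i - y[i]) * X[i]
    return theta

# noiseless toy data: y = 1 + 2x, so SGD should recover theta ~ [1, 2]
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 100)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x
theta = stochastic_gradient_descent(X, y, np.zeros(2), alpha=0.1, num_iters=20_000)
```

With noisy data you would typically also decay `alpha` over time so the iterates settle down.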
null
CC BY-SA 4.0
null
2023-03-06T03:22:34.303
2023-03-06T03:22:34.303
null
null
59072
null
608486
1
null
null
4
129
Let $\mathbf{Y}=\begin{pmatrix} \mathbf{Y_1}\\\mathbf{Y_2} \end{pmatrix}\sim N\left (\boldsymbol{\mu},\boldsymbol{\Sigma} \right ), $ $\boldsymbol{\mu}=\begin{pmatrix} \boldsymbol{\mu_1}\\\boldsymbol{\mu_2} \end{pmatrix}$ and $\boldsymbol{\Sigma}=\begin{pmatrix} \boldsymbol{\Sigma_{11}}& \boldsymbol{\Sigma}_{12} \\ \boldsymbol{\Sigma}_{12}^{\top}& \boldsymbol{\Sigma}_{22} \\ \end{pmatrix}$ are compatibly partitioned. Making a variable transform $$\left(\begin{array}{l} \boldsymbol{U}_{1} \\ \boldsymbol{U}_{2} \end{array}\right)=\left(\begin{array}{cc} \boldsymbol{I} & -\boldsymbol{\Sigma}_{12} \boldsymbol{\Sigma}_{22}^{-1} \\ \mathbf{0} & \boldsymbol{I} \end{array}\right)\left(\begin{array}{c} \boldsymbol{Y}_{1}-\boldsymbol{\mu}_{1} \\ \boldsymbol{Y}_{2}-\boldsymbol{\mu}_{2} \end{array}\right).$$ The conditional distribution of $\boldsymbol{X_{1}}$ given $ \boldsymbol{X_{2}} $ is denoted $\boldsymbol{X}_{1} \mid \boldsymbol{X}_{2}.$ The symbol $\boldsymbol{X}_{1} \stackrel{\mathrm{d}}{=}\boldsymbol{X}_{2} $ means the two random variables $\boldsymbol{X}_{1} $ and $\boldsymbol{X}_{2}$ have the same distribution. 
Show that $$\boldsymbol{Y}_{1} \mid \boldsymbol{Y}_{2}\stackrel{\mathrm{d}}{=}\left[\boldsymbol{U}_{1}+\boldsymbol{\mu}_{1}+\boldsymbol{\Sigma}_{12} \boldsymbol{\Sigma}_{22}^{-1} \boldsymbol{U}_{2}\right] \mid \boldsymbol{U}_{2}.$$ --- From this transform $$\left(\begin{array}{l} \boldsymbol{U}_{1} \\ \boldsymbol{U}_{2} \end{array}\right)=\left(\begin{array}{cc} \boldsymbol{I} & -\boldsymbol{\Sigma}_{12} \boldsymbol{\Sigma}_{22}^{-1} \\ \mathbf{0} & \boldsymbol{I} \end{array}\right)\left(\begin{array}{c} \boldsymbol{Y}_{1}-\boldsymbol{\mu}_{1} \\ \boldsymbol{Y}_{2}-\boldsymbol{\mu}_{2} \end{array}\right),$$ $\boldsymbol{U}_{1}$ and $\boldsymbol{U}_{2}$ are independent, $\boldsymbol{Y}_{1}=\boldsymbol{\mu}_{1}+\boldsymbol{U}_{1}+\boldsymbol{\Sigma}_{12} \boldsymbol{\Sigma}_{22}^{-1} \boldsymbol{U}_{2},\boldsymbol{Y}_{2}=\boldsymbol{\mu}_{2}+\boldsymbol{U}_{2}.$ I cannot go further than this. I don't know the formal definition of $\boldsymbol{Y_{1}}|\boldsymbol{Y_{2}}\stackrel{\mathrm{d}}{=}\boldsymbol{X_{1}}|\boldsymbol{X_{2}}.$
Show that two conditional distributions are the same
CC BY-SA 4.0
null
2023-03-06T03:24:32.297
2023-03-11T02:50:54.337
2023-03-11T02:50:54.337
362671
73778
[ "probability", "self-study", "conditional-probability", "multinomial-distribution" ]
608487
2
null
608477
11
null
They're less wildly different once you correct for the different contrasts the two programs use. SPSS has 1 as the reference level of the two variables and `statsmodels` has 0. Here are the fitted values for the four combinations of the two binary variables ``` statsmodels SPSS neither -0.0284 -0.024 white -0.0134 -0.021 right -0.0156 -0.015 both 0.1117 0.115 ``` That's still more different than I'd expect, and it's a bad sign that the `statsmodels` estimate hasn't converged. So I ran the model with two different R implementations (`gee` and `geeM`). They also give different answers, but more importantly they agree there's an estimation problem. The working correlation parameter is trying to be more negative than is possible given the cluster size, giving a non-positive-definite working correlation matrix. (I note that neither your SPSS nor `statsmodels` output shows the estimated working correlation) So, I think neither result is really reliable for this dataset, and the exchangeable working correlation model isn't stable. If the estimates haven't converged in 1000 iterations, they aren't going to (and looking at the R version, they aren't showing any signs of converging). I would suggest falling back to working independence. For working independence, SPSS and R give the same answers (I didn't check `statsmodels`). There's some potential efficiency gain from the exchangeable working correlation, but not if the correlation parameter can't be estimated reliably.
null
CC BY-SA 4.0
null
2023-03-06T03:51:32.060
2023-03-06T03:51:32.060
null
null
249135
null
608489
1
null
null
0
45
I had a question regarding the "choosing the optimal model" section of chapter 6 of ISLR (pg. 232). The book states that "In order to select the best model with respect to test error, we need to estimate this test error. There are two common approaches: - We can indirectly estimate test error by making an adjustment to the training error to account for the bias due to overfitting.  - We can directly estimate the test error, using either a validation set approach or a cross-validation approach, as discussed in Chapter 5. " I understand that we can indirectly estimate the test error by adjusting the training error with the utilization of AIC, BIC, Mallows Cp, and adjusted R-squared to account for the fact that training MSE tends to be lower than test MSE. In addition, we can estimate the test error directly via the validation set and cross-validation to discern the model with the lowest test error from the candidate models garnered from our model selection methods (best subset / forward selection / backward selection / hybrid). As such, given that the methods to directly estimate the test error make fewer assumptions and estimate the test error directly, why would someone ever choose approach 1?  Or is it that these approaches are complementary to one another and so researchers typically use them both during their analyses?
ISLR Chapter 6 : Choosing the Optimal Model
CC BY-SA 4.0
null
2023-03-06T04:23:11.477
2023-03-06T04:23:11.477
null
null
315201
[ "regression", "machine-learning", "modeling", "model-selection" ]
608491
2
null
606901
0
null
Since you know that your errors are normal, another approach would be to use the delta method to find the limiting distribution of $g(\hat{a}, \hat{b})$ and find a confidence interval for $g(a, b)$ that way. I'm going to assume that "normal errors" means that $$ \sqrt{n}((\hat{a}, \hat{b}) - (a, b)) \stackrel{\mathrm{d}}{\rightarrow} \mathcal{N}(0, \Sigma) $$ for some covariance matrix $\Sigma$ that you can estimate. Indeed, this is the case for logistic regression generally. Now, the delta method implies that $$ \sqrt{n}(g(\hat{a}, \hat{b}) - g(a, b)) \stackrel{\mathrm{d}}{\rightarrow} \mathcal{N}(0, \nabla g(a, b)^{\mathrm{T}} \Sigma \nabla g(a, b)), $$ where in the case of this problem, $$ \nabla g(a, b) = \begin{pmatrix} -c/a^2 \\ -1/b \end{pmatrix} g(a, b), $$ yielding a standard error of $$ \mathrm{se}(g(\hat{a}, \hat{b})) = g(\hat{a}, \hat{b}) \sqrt{(c/\hat{a}^2, 1/\hat{b}) \hat{\Sigma} \begin{pmatrix} c/\hat{a}^2 \\ 1/\hat{b} \end{pmatrix}}, $$ and so a 95% confidence interval of $$ \mathrm{CI}_{0.95} = g(\hat{a}, \hat{b}) \biggl[1 \pm 1.96 \sqrt{(c/\hat{a}^2, 1/\hat{b}) \hat{\Sigma} \begin{pmatrix} c/\hat{a}^2 \\ 1/\hat{b} \end{pmatrix}} \biggr] $$
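A small numeric sketch of the final interval (my addition; $\hat a$, $\hat b$, $c$, and $\hat\Sigma$ below are made-up illustrative values, and $g(\hat a,\hat b)$ is passed in as a plain number since the interval formula only uses the gradient form, not $g$'s exact expression):

```python
import numpy as np

def delta_ci(g_hat, a_hat, b_hat, c, Sigma_hat, z=1.96):
    """95% CI for g(a, b) given grad g = (-c/a^2, -1/b) * g."""
    grad_over_g = np.array([c / a_hat**2, 1.0 / b_hat])  # signs drop after squaring
    rel_se = np.sqrt(grad_over_g @ Sigma_hat @ grad_over_g)
    return g_hat * (1 - z * rel_se), g_hat * (1 + z * rel_se)

# illustrative estimated covariance of (a_hat, b_hat)
Sigma_hat = np.array([[0.04, 0.01],
                      [0.01, 0.09]])
lo, hi = delta_ci(g_hat=1.5, a_hat=2.0, b_hat=1.2, c=0.8, Sigma_hat=Sigma_hat)
```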
null
CC BY-SA 4.0
null
2023-03-06T05:01:23.107
2023-03-06T05:01:23.107
null
null
335519
null
608493
2
null
606988
3
null
You can obtain the exact mean and variance of the ratio of any two elements of a Dirichlet distribution (assuming that the mean and variance exist, which depends on the values of the parameters). I'm going to take the lazy way out and use Mathematica. First define the distribution of the ratio of the first two elements: ``` dist = DirichletDistribution[{a1, a2, a3, a4, a5, a6}]; dist12 = TransformedDistribution[x1/x2, {x1, x2, x3, x4, x5} \[Distributed] dist]; ``` Now determine the pdf of the ratio: ``` pdf = PDF[dist12, r] ``` $$\frac{r^{a_1-1} (r+1)^{-a_1-a_2} \Gamma (a_1+a_2)}{\Gamma (a_1) \Gamma (a_2)}$$ for $r>0$ and 0 elsewhere. The mean and variance when those exist are ``` Mean[dist12] ``` $$a_1/(a_2-1)$$ ``` Variance[dist12] ``` $$\frac{a_1 (a_1+a_2-1)}{(a_2-2) (a_2-1)^2}$$ The mean will only exist if $a_2>1$ and the variance will only exist if $a_2>2$. Your example has both $a_1$ and $a_2$ less than 1 so neither the mean nor variance exist in that case.
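A quick Monte Carlo check of the mean formula (my addition; the parameters are chosen so the mean exists, i.e. $a_2 > 1$):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([2.0, 4.0, 1.0, 1.0, 1.0, 1.0])

draws = rng.dirichlet(alpha, size=400_000)
ratio_mean = (draws[:, 0] / draws[:, 1]).mean()
expected = alpha[0] / (alpha[1] - 1)   # a1 / (a2 - 1) = 2/3
```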
null
CC BY-SA 4.0
null
2023-03-06T06:06:53.140
2023-03-06T06:06:53.140
null
null
79698
null
608494
2
null
608427
2
null
As hinted in @whuber's comment, the summation in your blue box is a weighted average with nonnegative weights summing to 1, and as such it must be less than or equal to the largest number averaged, where the largest number is nothing but $\max\limits_a q_\pi(s,a)$. To further clarify, each weight has the form $\frac{\pi(a|s)-\frac{\epsilon}{|\mathcal{A}(s)|}}{1-\epsilon}$ which is nonnegative since the $\epsilon$-soft policy $\pi$ to be improved upon is $\epsilon$-greedy (at least after the first iteration in the book's algo), and also clearly $$\sum\limits_a \frac{\pi(a|s)-\frac{\epsilon}{|\mathcal{A}(s)|}}{1-\epsilon}=1$$ since the number of summation terms is just $|\mathcal{A}(s)|$. And since these weights sum to 1, it's obvious and easily provable that $$\max\limits_a q_\pi(s,a)=\sum\limits_a \frac{\pi(a|s)-\frac{\epsilon}{|\mathcal{A}(s)|}}{1-\epsilon}\max\limits_a q_\pi(s,a).$$ Now I believe you can make sense of the blue box line. Why does it have this form? It is constructed precisely so that one can compare with the previous value function to show improvement for a generic state $s$ in the state space.
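A tiny numeric check of the weight claims (my addition; the policy below is an arbitrary $\epsilon$-soft policy I made up, not one from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
eps, n_actions = 0.1, 4

q = rng.normal(size=n_actions)              # arbitrary q_pi(s, .)
inner = rng.dirichlet(np.ones(n_actions))   # any distribution over actions
pi = eps / n_actions + (1 - eps) * inner    # an eps-soft policy: pi(a|s) >= eps/|A|

w = (pi - eps / n_actions) / (1 - eps)      # the weights from the answer
weighted = w @ q                            # the weighted average of q values
```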
null
CC BY-SA 4.0
null
2023-03-06T06:59:08.463
2023-03-06T07:13:39.603
2023-03-06T07:13:39.603
371017
371017
null
608495
1
null
null
0
27
I have a dataset which is panel in nature, covering two time periods. I would like to know whether it is possible to perform a DID analysis, and if so, whether it is possible to assess the parallel trends assumption with only two time periods.
Parallel trend analysis in a two period panel data
CC BY-SA 4.0
null
2023-03-06T07:06:55.140
2023-03-06T08:35:09.680
null
null
380206
[ "difference-in-difference", "trend", "parallel-analysis" ]
608497
2
null
412887
0
null
The above expression can be written as: $$ \frac{\sum\limits_{i=1}^N I\{X_i=x\} Y_i}{N}\div \frac{\sum\limits_{i=1}^N I\{X_i=x\}}{N} $$ both $I\{X_i=x\}Y_i$ and $I\{X_i=x\}$ are i.i.d random variables with finite expectation, therefore, by the SLLN, the above expression converges a.s. to: $$\frac{E[I\{X_i=x\}Y_i]}{E[I\{X_i=x\}]}=E[Y_i|X_i=x] $$
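A quick simulation of this ratio estimator (my addition; the data-generating process is made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

x = rng.integers(0, 2, size=n)                 # X_i in {0, 1}
y = 2.0 * x + rng.normal(0.0, 1.0, size=n)     # so E[Y_i | X_i = 1] = 2

ind = (x == 1).astype(float)                   # I{X_i = x} with x = 1
estimate = (ind * y).sum() / ind.sum()         # the ratio estimator above
```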
null
CC BY-SA 4.0
null
2023-03-06T07:25:14.310
2023-03-06T07:32:09.600
2023-03-06T07:32:09.600
376154
376154
null
608498
1
608629
null
4
102
If an AR(2) model is stationary, how to prove that $$\rho_1^2<\frac{\rho_2+1}{2}$$ I know that $$\rho_1=\frac{\phi_1}{1-\phi_2}$$ and $$\rho_2=\frac{\phi_1^2+\phi_2(1-\phi_2)}{1-\phi_2}$$ according to Yule-Walker equation, but when I try to prove it by the two equations above it just doesn't make sense.
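As a numerical sanity check (added for illustration only; it is not a proof), the claimed inequality holds everywhere I sample inside the AR(2) stationarity triangle $\phi_1+\phi_2<1$, $\phi_2-\phi_1<1$, $|\phi_2|<1$, using exactly the two Yule-Walker expressions above:

```python
import numpy as np

rng = np.random.default_rng(0)

# sample (phi1, phi2) from the stationarity triangle and test the claim
checked, violations = 0, 0
while checked < 5_000:
    phi1 = rng.uniform(-2, 2)
    phi2 = rng.uniform(-1, 1)
    if not (phi1 + phi2 < 1 and phi2 - phi1 < 1):
        continue
    rho1 = phi1 / (1 - phi2)
    rho2 = (phi1**2 + phi2 * (1 - phi2)) / (1 - phi2)
    checked += 1
    if rho1**2 >= (rho2 + 1) / 2:
        violations += 1
```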
A question regarding stationarity of an AR(2) model
CC BY-SA 4.0
null
2023-03-06T07:57:36.227
2023-03-10T08:02:39.967
2023-03-10T08:02:39.967
67799
382474
[ "time-series", "self-study", "stationarity", "autoregressive" ]
608499
1
null
null
1
80
I am trying to answer a question about satisfaction and its relation to a certain variable (numeric, 1-10). However, my data contains a lot of missing values in the satisfaction outcome, so I applied multiple imputation (all other variables are complete, no missings). So far, I managed to create an MI object with 20 imputed datasets. I ran the ANOVA on all imputed datasets, which gives me a mira object (list of 4) that contains the analyses of all 20 sets. However, when I try to pool them I get the following error: Error in `summarize()`: ! Problem while computing `qbar = mean(.data$estimate)`. ℹ The error occurred in group 1: term = satisfaction Caused by error in `.data$estimate`: ! Column `estimate` not found in `.data`. It looks like it needs some sort of mean. However, I am new to R and do not quite understand what I am doing wrong. There is indeed no mean or anything similar for satisfaction in my mira object, see screenshot below. Does anybody know what I am doing wrong? Thank you for your response! [](https://i.stack.imgur.com/WhERP.png)
Multiple imputation MICE categorical variable: problem with pooling after running ANOVA
CC BY-SA 4.0
null
2023-03-06T08:21:03.623
2023-03-08T04:31:29.133
null
null
382477
[ "anova", "pooling", "mice" ]
608500
1
null
null
1
7
I've run a social network analysis (SNA) and obtained the centrality value of each node. I classified the centrality values into two groups. Then, based on this grouping, I would like to find the association of attributes (gender, race, etc.) with the grouping I've constructed. What statistical test can I use? Can classic logistic regression be used in this scenario?
Finding association of attributes with node centrality/ degree value
CC BY-SA 4.0
null
2023-03-06T08:22:32.473
2023-03-06T08:22:32.473
null
null
382473
[ "regression", "association-measure", "social-network" ]
608501
1
null
null
0
46
How can I calculate the covariance between r1 and r2, where r1 and r2 are Spearman correlation coefficients: - based on two independent groups - based on dependent groups: a 3-dimensional distribution, where r1 is based on the first two components and r2 is based on the first and third component - based on dependent groups: a 4-dimensional distribution, where r1 is based on the first two components and r2 is based on the last two components I came across this paper from Steiger ([http://www.psychmike.com/Steiger.pdf](http://www.psychmike.com/Steiger.pdf)). However, this is based on the Pearson correlation. I couldn't find anything for the Spearman correlation.
Covariance between Spearman Correlation Coefficients
CC BY-SA 4.0
null
2023-03-06T08:32:22.310
2023-03-06T08:32:22.310
null
null
382478
[ "correlation", "covariance", "spearman-rho" ]
608502
2
null
608495
0
null
For a canonical DiD that includes only two-time periods, the assumption of parallel trends involves a counterfactual and cannot be observed, i.e. you're assuming that absent intervention trends would have continued in parallel but there's no way to test. What you can do is see (test) if the trends were parallel before the treatment, but you'd have to have data for two (not just one) pre-treatment time periods. So with only data for two time periods, you can do a DiD, but you may have very little reason to believe that trends were parallel before treatment. That being said, if I remember correctly, the famous minimum wage study by Card and Krueger had only data for two time periods--though a follow-up study showed that trends were generally not parallel.
null
CC BY-SA 4.0
null
2023-03-06T08:35:09.680
2023-03-06T08:35:09.680
null
null
266571
null
608503
1
null
null
1
44
In lavaan, I am running a two-factor CFA on a questionnaire with 28 items, all of which are scored on a 6-point Likert scale. In total I have ~350 participants who completed the questionnaire. Because of the ordinal nature of the data, I am using ULS instead of ML, and the ordered = T argument. Because I have found outliers in my dataset (both in terms of Mahalanobis distance and generalized Cook's distance, but not in terms of standardized residuals), I want to use a robustification method to reduce the influence of these extreme cases, instead of discarding these observations, as recommended by [Flora et al.](https://www.frontiersin.org/articles/10.3389/fpsyg.2012.00055) Now, based on my understanding, I can do that by choosing the estimator `ULSM`, `ULSMV`, or `ULSMVS`. According to the [lavaan documentation](https://lavaan.ugent.be/tutorial/est.html), the difference is the following: - ULSM estimator uses "robust standard errors and a Satorra-Bentler scaled test statistic"; - ULSMV estimator uses "robust standard errors and a mean- and variance adjusted test statistic (using a scale-shifted approach)"; - ULSMVS estimator uses "robust standard errors and a mean- and variance adjusted test statistic (aka the Satterthwaite approach)" Which estimator I use has a very strong effect on my CFI and RMSEA scores (but not SRMR or WRMR), as can be seen in this figure: [](https://i.stack.imgur.com/Qqcay.png) However, I cannot find out which of these estimators, the `ULSM`, the `ULSMV`, or the `ULSMVS`, I should use. So, - Am I approaching this analysis generally correctly, and - Which of these estimators should I use? Thanks in advance!
Robustification in lavaan: Difference between M, MV and MVS?
CC BY-SA 4.0
null
2023-03-06T08:52:12.673
2023-03-06T15:40:54.973
null
null
374002
[ "outliers", "robust", "confirmatory-factor", "lavaan", "robust-standard-error" ]
608504
1
608506
null
8
894
Out of curiosity, I realised that the Kolmogorov-Smirnov normality test returns two very different p-values depending on whether the dataset has small or large numbers. Is this normal and is there a number size limit for this test? From what I saw, the Shapiro-Wilk test was much more stable. I tried this ``` ks.test(c(0.5379796,1.1230795,-0.4047321,-0.8150001,0.9706860),"pnorm") One-sample Kolmogorov-Smirnov test data: c(0.5379796, 1.1230795, -0.4047321, -0.8150001, 0.970686) D = 0.3047, p-value = 0.6454 alternative hypothesis: two-sided ``` And then I multiplied each value by 100 ``` ks.test(c(53.79796,112.30795,-40.47321,-81.50001,97.06860),"pnorm") One-sample Kolmogorov-Smirnov test data: c(53.79796, 112.30795, -40.47321, -81.50001, 97.06860) D = 0.6, p-value = 0.03008 alternative hypothesis: two-sided ``` With the same data, the Shapiro-Wilk test returns a p-value of 0.3999.
Kolmogorov-Smirnov instability depending on whether values are small or big
CC BY-SA 4.0
null
2023-03-06T09:33:10.690
2023-03-07T12:01:16.410
2023-03-06T09:52:17.380
22047
261354
[ "r", "normality-assumption", "kolmogorov-smirnov-test" ]
608505
2
null
307882
0
null
A short answer: Sampling is not only used to estimate a distribution function*, it is also used to perform computations with a density function, and MCMC is just one of many ways of sampling. Often such computations are a form of [Monte Carlo integration](https://en.m.wikipedia.org/wiki/Monte_Carlo_integration): a posterior average, a 95% highest density interval, marginal distributions*, etcetera. If the density function is a complex function, then such values can be difficult to derive analytically, and numerical integration might require a lot of computation. A sampling method is an alternative way to approximate the desired quantity. --- *In the case of nuisance parameters, Bayes rule doesn't give the posterior of the parameter of interest, but of a joint distribution with the nuisance parameter. So the statement "if we already KNOW the posterior distribution?" must be nuanced: the marginal posterior distribution is not even known. In these cases MCMC can also be used to compute the density function (which is unknown).
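As a toy illustration of the Monte Carlo integration point (my addition; the Beta posterior is made up): even when the posterior density is known in closed form, quantities like the posterior mean and an interval can be approximated from samples instead of derived analytically.

```python
import numpy as np

rng = np.random.default_rng(0)

# suppose the posterior is Beta(3, 5); its mean is 3 / (3 + 5) = 0.375
samples = rng.beta(3, 5, size=200_000)
mc_mean = samples.mean()                              # MC estimate of the posterior mean
mc_interval = np.quantile(samples, [0.025, 0.975])    # an equal-tailed 95% interval
```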
null
CC BY-SA 4.0
null
2023-03-06T09:40:48.637
2023-03-06T09:40:48.637
null
null
164061
null
608506
2
null
608504
16
null
The way you’ve coded it, you’re asking the KS test about a null hypothesis that the distribution is $N(0,1)$. In the first set of numbers, that looks plausible. Consequently, the p-value is high. In the second set of numbers, that does not seem to be the case. Numbers like those don’t typically come from a $N(0,1)$ distribution. Consequently, the p-value is low. By multiplying by a factor, you’ve changed the variance. Since the KS test considers all aspects of the distribution, variance included, the test correctly regards the two data sets as different. The reason that Shapiro-Wilk is more stable is that it evaluates normality itself. Multiplying by a positive factor does not change normality, so Shapiro-Wilk will not have the same kind of sensitivity to a variance change that KS has.
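To illustrate the point in Python (my own sketch, not part of the answer above; assumes numpy and scipy): the scaled data fail a KS test against $N(0,1)$, but standardizing them first removes the scale sensitivity — with the caveat that the nominal p-value then becomes only approximate, since the parameters are estimated from the data (the issue the Lilliefors correction addresses).

```python
import numpy as np
from scipy import stats

x = np.array([0.5379796, 1.1230795, -0.4047321, -0.8150001, 0.9706860])

# KS against the standard normal: scaling the data changes the null being tested.
p_small = stats.kstest(x, "norm").pvalue      # plausible under N(0,1)
p_big = stats.kstest(100 * x, "norm").pvalue  # values near ±100 are not

# Standardizing first removes the scale (and location) sensitivity.
# Caveat: with estimated parameters the nominal p-value is only approximate.
y = 100 * x
z = (y - y.mean()) / y.std(ddof=1)
p_std = stats.kstest(z, "norm").pvalue
```

This mirrors the R results in the question: the raw and ×100 data give very different p-values against $N(0,1)$, while the standardized version no longer fails merely because of the scale.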
null
CC BY-SA 4.0
null
2023-03-06T09:41:42.237
2023-03-06T09:41:42.237
null
null
247274
null
608508
1
null
null
0
17
Does anyone know of a library that implements filter feature-selection methods that can detect feature interactions? Until now I've used Relief; it works great but it does not detect feature redundancy. It seems there are others (FOCUS, INTERACT...), but none of them have Python implementations, or I've not been able to find them. There are some mutual-information-related methods already implemented, but none of them seems to handle feature interactions correctly. Note: when I talk about feature interactions I mean, for example, a XOR dataset. Thanks for your time and help.
Filter methods for feature selection that take into account feature interactions
CC BY-SA 4.0
null
2023-03-06T09:43:28.380
2023-03-06T09:43:28.380
null
null
376315
[ "python", "feature-selection" ]
608510
1
null
null
1
22
I would like to sample from a multivariate Gaussian distribution with covariance matrix $\Sigma - uu^T $, where $u$ is a vector and $\Sigma - uu^T $ is PSD. I know a non-Cholesky decomposition matrix $L$ of $\Sigma$ such that $LL^T = \Sigma$. Is there an efficient algorithm for computing a decomposition of $\Sigma - uu^T $ using $L$? I have found a rank-one downdating algorithm for Cholesky decompositions on [Wikipedia](https://en.wikipedia.org/wiki/Cholesky_decomposition#Rank-one_downdate), but I am not sure it applies to general decompositions that are not necessarily triangular. In this case I only need to sample from the distribution, so any solution that does not explicitly compute a downdated decomposition is also valid.
General matrix decomposition downdating algorithm for sampling
CC BY-SA 4.0
null
2023-03-06T09:47:05.730
2023-03-06T09:47:05.730
null
null
382479
[ "normal-distribution", "sampling", "matrix-decomposition", "cholesky-decomposition" ]
608511
1
null
null
2
37
My experiment consists in testing the effect of a compound on a cell line. I have 5 groups: control and concentrations 1, 2, 3 and 4 of the compound. I performed three replicates of the experiment, each using cells from a different flask and cell passage, but still from the same original vial. Within the same replicate, cells in the different experimental wells come from the same flask. To perform the analysis, considering that my values are normally distributed, should I use the ordinary ANOVA or the repeated measurements ANOVA? My supervisor said we should use the second option, since the cells come from the same flask and cell vial and so can be considered the same "subject". But I am confused, since in the end the cells are plated, treated and measured in different wells, so it is not really as if the same cells undergo the different treatments. It would be very helpful to know your opinion, thanks a lot in advance! :)
Ordinary ANOVA or Repeated Measurements ANOVA
CC BY-SA 4.0
null
2023-03-06T10:03:21.243
2023-04-06T17:49:38.000
2023-03-06T10:04:30.560
382485
382485
[ "anova", "repeated-measures" ]
608513
2
null
608504
8
null
Adding to the existing response, it's worth noting that the two `ks.test` calls below produce the same output. ``` x = c(0.5379796,1.1230795,-0.4047321,-0.8150001,0.9706860) ks.test(x, pnorm) #> #> Exact one-sample Kolmogorov-Smirnov test #> #> data: x #> D = 0.3047, p-value = 0.6454 #> alternative hypothesis: two-sided ks.test(x*100, pnorm, sd = 100) #> #> Exact one-sample Kolmogorov-Smirnov test #> #> data: x * 100 #> D = 0.3047, p-value = 0.6454 #> alternative hypothesis: two-sided ``` R syntax note: the default arguments to `pnorm()` are `mean = 0, sd = 1`. Anything after the second argument in `ks.test()` gets passed as an argument to the `pnorm()` function in this case.
null
CC BY-SA 4.0
null
2023-03-06T10:23:04.853
2023-03-07T12:01:16.410
2023-03-07T12:01:16.410
42952
42952
null
608514
1
608569
null
2
85
I performed a chi-square test comparing 2 groups on a categorical variable. The p-value indicates that the categorical variable is significant, so I concluded that at least one level of the categorical variable differs between the 2 groups. How can I find which levels are actually different and which are not? For instance, if a categorical variable has 3 levels (married, divorced and single), which test should I conduct to be able to say that the proportion of married people differs significantly between the 2 groups while the proportion of divorced people does not? I am conducting the tests in Python and I used chi2_contingency from scipy.stats.
How to find out which levels of a categorical variable are different comparing 2 groups?
CC BY-SA 4.0
null
2023-03-06T10:28:45.340
2023-03-06T19:53:45.980
2023-03-06T10:30:58.763
362671
380664
[ "python", "categorical-data", "chi-squared-test" ]
608515
1
null
null
0
45
Define $X_n$ a continuous random variable that converges in distribution to $X$. Moreover, we know that $E[|X_n|^p] \rightarrow E[|X|^p]$ for some $p > 0$. Then, can we prove that for any continuous function $f$ $$E[|f(X_n)|^p] \rightarrow E[|f(X)|^p]?$$ Or which additional assumptions do we need? Also, if $X_n$ is uniformly integrable, does that imply that $f(X_n)$ is uniformly integrable as well?
Convergence of moment of functional of random variable
CC BY-SA 4.0
null
2023-03-06T10:38:38.893
2023-03-06T15:09:07.797
2023-03-06T11:22:54.700
365245
365245
[ "expected-value", "convergence", "moments", "function" ]
608516
1
null
null
0
45
How do the concepts of differential equation and Markov process relate to each other? Here are a few questions that I ask myself: - Is the class of Markov processes a superset of differential equations, or vice versa? - Do differential equations satisfy the Markov property that the current state depends only on the previous state? - If I discretize a continuous differential equation via an integration method, do I obtain a Markov process? - Are stochastic differential equations Markov processes?
How are differential equations related to Markov processes?
CC BY-SA 4.0
null
2023-03-06T11:06:12.663
2023-03-06T11:06:12.663
null
null
170946
[ "markov-process", "differential-equations" ]
608517
1
null
null
0
13
I am using sklearn's `DecisionTreeClassifier` and LSTMs (Keras) for time series classification. To increase the accuracy and robustness of the models I augmented the training data set with jittered, interpolated, and warped data. The accuracy of the decision tree decreases from 89.35% to 88.12% whereas the accuracy of the LSTM increases from 92.61% to 94.32%. Is there a reason for that? The maximum depth of the Decision Tree is 16 and the criterion is entropy ``` tree.DecisionTreeClassifier(criterion='entropy', max_depth=16) ``` and the following LSTM configuration ``` model_2_0 = tf.keras.Sequential([ layers.LSTM(64, activation="relu", return_sequences = True, input_shape= (win_length, num_features)), layers.Dropout(0.2), layers.LSTM(64, activation="relu", return_sequences=True), layers.Dropout(0.2), layers.LSTM(64, activation="relu"), layers.Dropout(0.2), layers.Dense(23, activation="softmax") ], name=model_name) ``` The data sets were really large, to begin with. I am not quite sure how to interpret these results. What could be the reason here?
Why is the accuracy of a Decision Tree decreasing whereas the accuracy of an LSTM is increasing when adding augmented data?
CC BY-SA 4.0
null
2023-03-06T11:49:23.240
2023-03-06T11:49:23.240
null
null
376960
[ "time-series", "classification", "cart", "lstm", "accuracy" ]
608519
2
null
608313
1
null
Another good option for posterior predictive checks of binomial predictions is a reliability diagram by [Dimitriadis et al. (2021)](https://www.pnas.org/doi/full/10.1073/pnas.2016191118). Here, the mean predictive probabilities (x-axis) are compared to conditional event probabilities (CEP) on the y-axis. CEPs are computed by fitting a monotonic step function to the observed events after sorting them by predictive probability. That is, the diagram is designed to assess posterior calibration and, instead of even binning, uses a monotonicity assumption. ``` library(reliabilitydiag) library(rstanarm) library(ggplot2) set.seed(123) # Fit an example model. fittedModel <- stan_glm(switch ~ arsenic + dist + educ, data = wells, chains = 4, iter = 1000, family = "binomial") # Compute the reliability diagram r <- reliabilitydiag(x = fittedModel$fitted.values, y = wells$switch) # Plot the diagram plot(r) ``` [](https://i.stack.imgur.com/qiiId.png) The resulting plot is a ggplot object, so we can easily edit the diagram to, for example, show `ggdist::stat_dots` instead of the histogram of predictive probabilities: ``` library(ggdist) p <- plot(r) p$layers[[1]] <- stat_dots(aes(x = fittedModel$fitted.values), quantiles = 100, alpha = .5, scale = .5) p ``` [](https://i.stack.imgur.com/FY3mL.png) Now, if the red line of CEP values falls below the confidence interval, the predicted event probabilities are too high; conversely, if the CEP values lie above the interval, the model is predicting probabilities that are too low.
null
CC BY-SA 4.0
null
2023-03-06T12:10:28.350
2023-03-06T12:10:28.350
null
null
382493
null
608520
1
null
null
0
19
It's known that integrating out $\Lambda \equiv \Sigma^{-1}$ below, $$ y|\Lambda \sim \mathcal N(0, \Lambda^{-1}), $$ $$ \Lambda \sim \mathcal W(M^{-1}, \nu) $$ leads to a multivariate t distribution on $y$. [A reference for this](https://www.cs.ubc.ca/%7Emurphyk/Papers/bayesGauss.pdf). Is a similar result known if the Wishart prior is placed directly on the covariance $\Sigma$, as below? $$ y|\Sigma \sim \mathcal N(0, \Sigma), $$ $$ \Sigma \sim \mathcal W(M, \nu) $$ (approximate/limiting results would also be appreciated)
Does marginalizing the covariance of a Normal (with a Wishart prior, not inv-Wishart) lead to a t distribution?
CC BY-SA 4.0
null
2023-03-06T12:15:04.790
2023-03-06T12:40:17.527
2023-03-06T12:40:17.527
211930
211930
[ "bayesian", "normal-distribution", "hierarchical-bayesian", "multivariate-normal-distribution", "multivariate-distribution" ]
608521
1
null
null
0
11
I want to run a difference-in-differences model to investigate the location of agricultural investments based on election outcomes. I assume that districts that voted for the local government receive more investments as a reward for their support. However, environmental factors (which tend to be time-invariant) may also impact the allocation of such investments. To my knowledge, such time-invariant variables get canceled out in DiD models... Is there a way to keep them?
Keep time invariant variables in difference-in-difference models
CC BY-SA 4.0
null
2023-03-06T12:28:55.143
2023-03-06T12:28:55.143
null
null
305206
[ "regression", "inference", "econometrics", "difference-in-difference" ]