[Dataset schema: columns Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags]
612580
1
null
null
0
27
I already have a SARIMA model working at my company (an e-commerce) to predict sales, and right now I am trying to improve it. The current model uses only endogenous variables (i.e. the sales variable). The commercial teams claim that stock information should be really useful for predicting sales, as it is very correlated with our goal metric (they have a study on it). That being said, I am trying to include this exogenous variable in the model, but I am not getting any improvement (actually it is getting a little worse). I have already shown them the result, but they are not convinced by it, and I also don't know how to explain why this is happening. Can someone help me understand it?
Exogenous variables having negative impact on ARIMA
CC BY-SA 4.0
null
2023-04-11T12:55:42.140
2023-04-11T12:55:42.140
null
null
385438
[ "python", "arima" ]
612581
1
null
null
1
24
Per the regression model: $\mathbf{y} = f(\mathbf{x},\mathbf{\beta}) + \mathbf{\epsilon}$, where the $\beta$ estimate of LAD regression is given by: $\hat{\beta}_{LAD} = \text{argmin}_{b} \sum_{i=1}^n |y_i - f(\mathbf{b},x_i)|$. Now, some background, per Wikipedia on the topic of [Iteratively Reweighted Least Squares](https://en.wikipedia.org/wiki/Iteratively_reweighted_least_squares), to quote: > Iteratively reweighted least squares (IRLS) is used to solve certain optimization problems with objective functions of the form of a p-norm. And further: > IRLS is used to find the maximum likelihood estimates of a generalized linear model, and in robust regression to find an M-estimator, as a way of mitigating the influence of outliers in an otherwise normally-distributed data set, for example, by minimizing the least absolute errors rather than the least square errors. One of the advantages of IRLS over linear programming and convex programming is that it can be used with Gauss–Newton and Levenberg–Marquardt numerical algorithms. I am particularly interested in applying least absolute deviation (LAD, also referenced as the ℓ1 norm) via an iterative least-squares solution approach with weights equal to the reciprocal of the absolute value of the observed residuals (or, more prudently, the max of some small value, like 0.0001, and the absolute value of the residual, computed as the difference between the actual observation and the current fit on the centered data set). Interestingly, this recommended weighting scheme is noted in the cited Wikipedia reference as equivalent to employing a Huber loss function in the context of robust estimation (albeit its use was apparently not necessary in the attached spreadsheet). 
Now, an issue I have with normally applied LAD regression arises, to quote from [Wikipedia on the topic of Least Absolute Deviations](https://en.wikipedia.org/wiki/Least_absolute_deviations): > Checking all combinations of lines traversing any two (x,y) data points is another method of finding the least absolute deviations line. Since it is known that at least one least absolute deviations line traverses at least two data points… Or, I would restate per above: any possible arbitrary two data points in the data set, which may not be an intuitively best-suited alternative (as an intercept related to a measure of central tendency). In particular, my suggested improvement to the art of applying LAD relates to, say, a two-parameter robust regression model, especially with a limited number of data points and perhaps some evident outlier presence. In a manner paralleling least squares (which forces the fitted line through the mean of Y and X), the idea is to reduce the dimensionality of the respective LAD regression, while also promoting robustness and remaining consistent with both an intuitive interpretation and possible prior expectation, by prescribing that the respective LAD line passes through the median of Y and X, namely here Y_bar and X_bar. That is, substitute Y_bar - Beta*X_bar for the intercept term in the LAD regression model, which interestingly and more simply equates to transforming said Y and X by centering them around their respective medians, producing new centered variables Y’ and X’ in a no-intercept model. Further, dividing both Y’ and X’ by the square root of the absolute residuals from a first least-squares one-parameter regression of Y’ versus X’ (with starting weights of 1) results in two new variables Y’’ and X’’ whose simple one-parameter regression produces a new Beta estimate. 
Using this Beta on X’ and subtracting Y’ forms a pointwise residual; taking the square root of its absolute value and using the reciprocal to again transform X’ and Y’ into X’’ and Y’’ sets up the next step of simple one-parameter least squares, resulting in a new Beta. Repeat until sufficient convergence is achieved. Note: one can construct (see the first cited Wikipedia reference for the iterative matrix approach) or employ your existing software package for higher-dimension robust LAD regressions with several explanatory variables by simply using the median-centered data in a no-intercept standard LS regression model, repeatedly producing new coefficients for all X-variables and their associated residuals for weight determination and a new Beta (for a further iteration on transformed data per standard LS analysis, or as an IRLS model, if available). The above-described process is freely available for all as an illustrative two-parameter least absolute deviation regression model in a spreadsheet format [at this link](https://docs.google.com/spreadsheets/d/1eNYm-p48oMAaeZ-ENjfDC_RiLJRutABncS3kH63df3w/edit?usp=sharing). Just change the data to observe the resulting robustness of the suggested median data-centered process, which removes the intercept term. One can self-augment the analysis by also adding a normally (or otherwise) distributed error term. From a theoretical perspective, if one defines a variable Z = Y - Beta*X, then from the ℓ1-norm of Z minus its median one has, by definition, the MAD (median absolute deviation) of Z, which is a known robust statistic (see, for example, [Wikipedia on “Median absolute deviation”](https://en.wikipedia.org/wiki/Median_absolute_deviation)). 
The only argument I can envision against my median data-centered approach for more robust LAD regression is the very special case where one has particular knowledge of the regression’s intercept term (i.e., the value of Y when X equals zero) that deviates from the assumed function of the associated medians.
Is Centering Data Around Their Medians in a Least Absolute Deviation Regression Model (No Intercept) a Good Robust Practice for Smaller Data Sets?
CC BY-SA 4.0
null
2023-04-11T13:09:46.483
2023-04-11T13:09:46.483
null
null
54013
[ "robust", "mad", "least-absolute-deviations" ]
612582
2
null
21825
0
null
As stated in my comment, there is an analytical formula for this problem in [https://math.stackexchange.com/a/59749/51275](https://math.stackexchange.com/a/59749/51275). The accepted answer, translated into Python, looks like this:

```
import numpy as np
from scipy.special import binom


def theoretical_distr2(N, p, run_length) -> float:
    # Mathematica code:
    # Sum[(-1)^(j + 1) * (p + (n - j*m + 1)/j*(1 - p))*Binomial[n - j m, j - 1] p^(j*m)*(1 - p)^(j - 1), {j, 1, Floor[n/m]}]
    # where n = N, m = run_length, p = p
    ans = 0
    for j in range(1, N // run_length + 1):
        ans += (-1) ** (j + 1) * (p + (N - j * run_length + 1) / j * (1 - p)) \
            * binom(N - j * run_length, j - 1) * p ** (j * run_length) * (1 - p) ** (j - 1)
    return ans
```

I don't deserve credit; I just found the other solution and translated it into code. There's an obvious issue with the code: for large N `binom` explodes to infinity, yielding a NaN answer. @Neil's code is more stable. For large N we can use the approximate formula (source: [https://www.maa.org/sites/default/files/pdf/upload%5flibrary/22/Polya/07468342.di020742.02p0021g.pdf](https://www.maa.org/sites/default/files/pdf/upload%5flibrary/22/Polya/07468342.di020742.02p0021g.pdf)):

```
def theoretical_distr3(N, p, run_length) -> float:
    # Mathematica code:
    # -E^-p^(0.5 + m - Log[n - n p]/Log[1/p]) p^(0.5 + m - Log[n - n p]/Log[1/p]) Log[p]
    # -Exp[-p^(0.5 + i - Log[n (1 - p)]/Log[1/p])] p^(0.5 + i - Log[n (1 - p)]/Log[1/p]) Log[p]
    ans = 1
    for i in range(1, run_length):
        ans -= -np.exp(-p ** (0.5 + i - np.log(N * (1 - p)) / np.log(1 / p))) \
            * p ** (0.5 + i - np.log(N * (1 - p)) / np.log(1 / p)) * np.log(p)
    return ans
```
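As a possible workaround (my own sketch, not from the cited answer): the binomial coefficient can be kept in log-space via `scipy.special.gammaln`, which avoids the overflow for large `N`. Each term's magnitude is assembled in logs before exponentiating, and only the alternating sign is applied outside:

```python
import numpy as np
from scipy.special import gammaln


def log_binom(n, k):
    # log of C(n, k), computed stably via log-gamma;
    # returns -inf when k > n, matching binom(n, k) == 0
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)


def theoretical_distr2_stable(N, p, run_length) -> float:
    # Same alternating sum as theoretical_distr2, but with the large
    # factors combined in log-space to avoid overflow of binom().
    ans = 0.0
    for j in range(1, N // run_length + 1):
        log_mag = (np.log(p + (N - j * run_length + 1) / j * (1 - p))
                   + log_binom(N - j * run_length, j - 1)
                   + j * run_length * np.log(p)
                   + (j - 1) * np.log(1 - p))
        ans += (-1) ** (j + 1) * np.exp(log_mag)
    return ans
```

For moderate `N` this agrees with `theoretical_distr2` to floating-point precision, while remaining finite where the direct `binom` call produces `inf`.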
null
CC BY-SA 4.0
null
2023-04-11T13:24:23.423
2023-04-11T14:58:16.110
2023-04-11T14:58:16.110
10069
10069
null
612583
1
null
null
0
22
To illustrate the problem, imagine I'm drawing labelled spheres from a box. I may or may not know the number of spheres in the box (does it make a difference?). If I draw 10 spheres from the box and they are all different, what did I learn about the distribution of the probability of drawing each label? They may be just 10 out of a zillion other labels, or maybe there exist exactly 10 labels and by luck we sampled exactly one of each. A follow-up question is when we have two boxes. We don't know if the draws from those boxes are independent or not. If I keep sampling pairs of labels that haven't shown up before, we don't know if these labels were sampled by chance or not. In case it matters: my final goal is to draw conclusions about entropy and mutual information.
Model marginal and joint distributions from a sample with an unknown number of categories
CC BY-SA 4.0
null
2023-04-11T13:36:16.373
2023-04-11T13:36:16.373
null
null
155110
[ "bayesian", "categorical-data", "entropy", "frequentist", "mutual-information" ]
612585
1
612587
null
2
22
I'm learning to make a book recommendation system, but I am facing some difficulties evaluating the model. I chose the item-based collaborative filtering strategy. The dataset is a matrix (book, user) filled with the ratings of the books. The dataset is something like:

```
        user_a  user_b  user_c  ...  user_x
book_1       0       3       5  ...       4
book_2       2       1       0  ...       0
book_3       0       0       0  ...       2
...
```

Below is the code to train the model.

```
from sklearn.neighbors import NearestNeighbors

model = NearestNeighbors(metric='cosine', algorithm='brute')
model.fit(dataset)

# get the index of a book that contains 'harry potter' in its name
title = 'Harry Potter'
mask = books['title'].str.contains(title)
book_isbn = books[mask]['isbn']
mask = X.index.isin(book_isbn)
book_reference = X[mask].head(1)

# find the 5 nearest books from 'harry potter'
k = 5
distances, indices = model.kneighbors(book_reference.values, n_neighbors=k+1)
```

Well, the thing is that the `kneighbors()` function returns the distances of the 5 nearest vectors (books) from `book_reference` and their indices. I don't know how to evaluate the performance of this model since it's not making predictions. How can I do this?
Evaluation of a recommendation system. How can I do that?
CC BY-SA 4.0
null
2023-04-11T13:58:24.940
2023-04-11T14:16:00.217
null
null
369891
[ "machine-learning", "model-evaluation", "recommender-system" ]
612586
2
null
612566
0
null
You can obviously just add a constant to all the item difficulties on the log-scale and subtract the same constant from the `theta[i]` (and vice versa), so the model is not identified. This makes the model hard to sample (but if you manage it because the priors constrain things sufficiently to allow the NUTS sampler to sample it with a high enough `adapt_delta`, then relatively speaking everything is fine). Alternatively, you can introduce constraints (e.g. sum-to-zero constraints on the log-scale, or defining a reference category).
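The additive non-identifiability can be illustrated numerically. A minimal sketch, assuming a simple Rasch-style model where the success probability depends only on the difference `theta - difficulty` (the variable names here are illustrative, not from the original model):

```python
import math


def rasch_prob(theta, difficulty):
    # P(correct) in a Rasch-style model depends only on theta - difficulty
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))


theta = [0.3, -1.2, 0.8]      # person abilities
difficulty = [0.5, -0.4]      # item difficulties

c = 2.7                       # any constant
shifted_theta = [t + c for t in theta]
shifted_difficulty = [d + c for d in difficulty]

# Every predicted probability is unchanged, so the likelihood is flat
# along this direction of parameter space.
for t, ts in zip(theta, shifted_theta):
    for d, ds in zip(difficulty, shifted_difficulty):
        assert abs(rasch_prob(t, d) - rasch_prob(ts, ds)) < 1e-12
```

This is exactly why only the priors (or an explicit constraint such as sum-to-zero) can pin the location down.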
null
CC BY-SA 4.0
null
2023-04-11T13:58:27.357
2023-04-11T13:58:27.357
null
null
86652
null
612587
2
null
612585
2
null
There are [specialized metrics](https://stats.stackexchange.com/questions/351963/how-good-is-my-recommender/351982#351982) for recommender systems like [Mean Percentage Ranking (MPR)](https://stackoverflow.com/questions/46462470/how-can-i-evaluate-the-implicit-feedback-als-algorithm-for-recommendations-in-ap/46490352#46490352) and [Mean Reciprocal Rank (MRR)](https://en.wikipedia.org/wiki/Mean_reciprocal_rank) and [variations of the regular classification metrics](https://stats.stackexchange.com/a/25102/35989) like [Precision@$k$ or Recall@$k$](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54). You should look into those.
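As a rough sketch of how two of these metrics work (a minimal illustration with made-up data, not tied to the asker's dataset):

```python
def precision_at_k(recommended, relevant, k):
    # fraction of the top-k recommendations that are relevant
    top_k = recommended[:k]
    return sum(1 for item in top_k if item in relevant) / k


def mean_reciprocal_rank(list_of_recommended, list_of_relevant):
    # average over users of 1/rank of the first relevant item (0 if none found)
    total = 0.0
    for recommended, relevant in zip(list_of_recommended, list_of_relevant):
        rr = 0.0
        for rank, item in enumerate(recommended, start=1):
            if item in relevant:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(list_of_recommended)


# toy example: one user's top-5 recommendations, 2 of which are relevant
print(precision_at_k(["b1", "b2", "b3", "b4", "b5"], {"b2", "b5"}, 5))  # 0.4
print(mean_reciprocal_rank([["b1", "b2", "b3"]], [{"b2"}]))             # 0.5
```

Both only require held-out "relevant" items per user (e.g. books the user actually rated highly), which fits the nearest-neighbour setup where no explicit rating prediction is made.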
null
CC BY-SA 4.0
null
2023-04-11T14:16:00.217
2023-04-11T14:16:00.217
null
null
35989
null
612588
1
null
null
1
51
I am trying to create a dataset where columns 2, 3, 4 are correlated (0.98, 0.97, 0.96, respectively) with column 1. Right now I have this code:

```
library(MASS)
X <- mvrnorm(20, mu = c(5, 6), Sigma = matrix(c(1, 0.98, 0.98, 1), ncol = 2), empirical = TRUE)
cor(X)
Y <- mvrnorm(20, mu = c(5, 6), Sigma = matrix(c(1, 0.97, 0.97, 1), ncol = 2), empirical = TRUE)
cor(Y)
Y <- Y[, 2]
Z <- mvrnorm(20, mu = c(5, 6), Sigma = matrix(c(1, 0.97, 0.97, 1), ncol = 2), empirical = TRUE)
cor(Z)
Z <- Z[, 2]
data <- cbind(X, Y, Z)
cor(data)
```

and it produces this matrix:

```
                           Y            Z
   1.00000000  0.9800000000 0.0826655886 -0.4293286
   0.98000000  1.0000000000 0.0009559618 -0.5221029
Y  0.08266559  0.0009559618 1.0000000000  0.1847887
Z -0.42932859 -0.5221029358 0.1847886713  1.0000000
```

I would like the final outcome to look like this. It doesn't matter how columns 2, 3, 4 are correlated with each other (i.e. the X entries), as long as they have the right correlation with the first column.

```
1    0.98 0.97 0.96
0.98 1    X    X
0.97 X    1    X
0.96 X    X    1
```

Thanks for your help!
Create a data set with correlated variables
CC BY-SA 4.0
null
2023-04-11T14:24:35.570
2023-04-12T12:49:26.360
null
null
385445
[ "r", "correlation", "simulation" ]
612589
1
null
null
2
59
Here I generate a dataset where measurements of the response variable `y` and covariates `x1` and `x2` are collected on 30 individuals through time. Each individual is denoted by a unique `ID`. The observations are collected in hourly increments, but are only available for given individuals (`ID`s) when they are present during the respective hour (thus creating irregularities in each time series).

```
library(tidyverse)
library(lubridate)
library(data.table)

set.seed(123)
TimeSeries <- data.table(
  tm = rep(seq(as.POSIXct("2021-01-23 01:00"), as.POSIXct("2021-10-27 17:00"), by = "hour"), 30),
  ID = factor(rep(paste("ID_", c(1:30), sep = ""), each = 6664)),
  Obs = sample(c(NA, 1), 199920, prob = c(0.7, 0.3), replace = TRUE))
TimeSeries <- TimeSeries[Obs == 1]

# explicitly making large gaps at the beginning of some time series to illustrate
# that some individuals are not present (for the first time) until later in the time series
TimeSeries <- TimeSeries %>%
  dplyr::filter(!(ID == "ID_2" & tm < as.POSIXct("2021-5-27 10:00")),
                !(ID == "ID_5" & tm < as.POSIXct("2021-3-10 15:00")),
                !(ID == "ID_6" & tm > as.POSIXct("2021-3-10 15:00")),
                !(ID == "ID_5" & tm > as.POSIXct("2021-6-10 23:00")))

# response variable
TimeSeries[, y := rnorm(nrow(TimeSeries))]

# predictors x1 and x2
TimeSeries[, x1 := rnorm(nrow(TimeSeries))]
TimeSeries[, x2 := rnorm(nrow(TimeSeries))]

# now irrelevant so remove:
TimeSeries[, Obs := NULL]
```

We wish to fit a linear mixed effects model to determine if changes in `x1` and `x2` have an effect on the response `y`, while allowing for variation across individuals with a random intercept. I demonstrate this with `nlme`:

```
mod <- lme(y ~ x1 + x2, random = ~1 | ID, data = TimeSeries, method = "ML")
```

However, we suspect that when `ID`s are present in consecutive (or close) hours, the residuals will be heavily autocorrelated (within `ID`s). Thus we would like to check this assumption with ACF/PACF plots, and explore different correlation structures if it is a problem. 
(Note: obviously this will not be true for the simulated data above; it was just to illustrate the structure of my data.) I am unsure of the appropriate method to look at autocorrelation in this case and to calculate confidence bands. My understanding is that the `nlme::ACF` function will respect the grouping structure of random effects, but does not calculate the correct autocorrelation function with irregular or missing values (the latter of which is the apparent issue here). Is this true even with the inclusion of `na.action = na.omit` in the `ACF` call? Or is there a more appropriate method? Moreover, assuming autocorrelation is an issue, does the structure of this data require the use of continuous correlation structures (e.g., `corCAR1`), or is it reasonable to use `corAR1` and `corARMA` if we do something like specify the time variable as the number of hours since the first observation? For example:

```
TimeSeries[, TimeSinceFirst := as.numeric(difftime(tm, min(TimeSeries$tm), units = "hours"))]
lme(y ~ x1 + x2, random = ~1 | ID,
    correlation = corAR1(form = ~TimeSinceFirst | ID),
    data = TimeSeries, method = "ML")
```

I have worked through several examples (including those available [here](https://sakai.unc.edu/access/content/group/2842013b-58f5-4453-aa8d-3e01bacbfc3d/public/Ecol562_Spring2012/docs/notes.htm), [here](https://aosmith.rbind.io/2018/06/27/uneven-grouped-autocorrelation/#plot-autocorrelation-function-of-appropriately-spaced-residuals), and [here](https://bbolker.github.io/mixedmodels-misc/ecostats_chap.html)), but I can't seem to find any that deal with gaps and irregularities within each time series like I have presented above.
Inspecting and modeling residual autocorrelation with gaps in linear mixed effects models
CC BY-SA 4.0
null
2023-04-11T14:47:34.920
2023-04-25T16:36:13.407
2023-04-11T15:57:30.450
269633
269633
[ "r", "regression", "mixed-model", "lme4-nlme", "autocorrelation" ]
612590
2
null
612588
0
null
### Method 1

Basically an abridged version of the one exposed by @whuber in [this answer](https://stats.stackexchange.com/a/313138/60613). Sample from $X_1 \sim \mathcal N(0, 1)$, then set

$$\require{cancel} \begin{cases} X_2=f_2X_1+\sqrt{1-f_2^2}\epsilon_2\\ X_3=f_3X_1+\sqrt{1-f_3^2}\epsilon_3\\ X_4=f_4X_1+\sqrt{1-f_4^2}\epsilon_4\\ \end{cases}$$

where the fractions are given as $f=(f_2, f_3, f_4) = (0.98, 0.97, 0.96)$ and the $\epsilon_i$ are white noise samples from $\mathcal N(0,1)$, independent of $X_1$. It's easy to see that, for $i \in \{2,3,4\}$:

$$\operatorname{Var}(X_i)=f_i^2\cancelto{1}{\operatorname{Var}(X_1)}+(1-f_i^2)\cancelto{1}{\operatorname{Var}(\epsilon_i)}+2f_i\sqrt{1-f_i^2}\cancelto{0}{\operatorname{Cov}(X_1,\epsilon_i)}=f_i^2+(1-f_i^2)=1$$

$$\operatorname{Cor}(X_1, X_i) = \operatorname{Cov}(X_1,X_i) = E[X_1X_i] - \cancelto{0}{E[X_1]}\cancelto{0}{E[X_i]}= f_i\cancelto{1}{E[X_1^2]}+\sqrt{1-f_i^2}\cancelto{0}{E[X_1\epsilon_i]}=f_i$$

### Method 2 (requires specifying the whole correlation matrix)

As @whuber pointed out in the comments, in your question you only presented the 3 correlations pertaining to the first variable. You would need to specify the remaining 3 correlations in such a manner that $R$ is positive definite. A solution without this caveat is also possible, where you first sample the first variable in isolation.

---

Given your correlation matrix $R$, you can simulate data from a multivariate normal distribution by:

$$X=Z \cdot R^{1/2},$$

where $Z \sim \text{MVN}(\mathbb 0, \mathbb I)$. It's easy to see that the covariance of $X$ is given by

$$\operatorname{Cov}(X)=E[X^\top X] - E[X]^\top E[X]={R^{1/2}}^\top E[Z^\top Z]R^{1/2}-{R^{1/2}}^\top E[Z]^\top E[Z]R^{1/2}$$

But $E[Z] = \mathbb 0$, while $E[Z^\top Z] = \mathbb I$, so

$$\operatorname{Cov}(X) = {R^{1/2}}^\top R^{1/2} = R.$$

---

$R^{1/2}$ is any decomposition of $R$ such that $R={R^{1/2}}^\top R^{1/2}$ (e.g. the Cholesky factor).
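Method 1 is easy to check by simulation. A sketch in Python rather than the question's R, using `numpy`; since no `empirical = TRUE`-style exactness is imposed, the sample correlations only approximate the targets:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
f = np.array([0.98, 0.97, 0.96])

x1 = rng.standard_normal(n)
eps = rng.standard_normal((n, 3))

# X_i = f_i * X_1 + sqrt(1 - f_i^2) * eps_i  =>  Cor(X_1, X_i) = f_i
X = f * x1[:, None] + np.sqrt(1 - f**2) * eps

corr = np.corrcoef(np.column_stack([x1, X]), rowvar=False)
print(np.round(corr[0, 1:], 3))  # close to [0.98, 0.97, 0.96]
```

With `n` this large the Monte Carlo error on each correlation is on the order of 1e-4, so the first row of the correlation matrix essentially reproduces the targets.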
null
CC BY-SA 4.0
null
2023-04-11T15:00:05.527
2023-04-12T12:49:26.360
2023-04-12T12:49:26.360
60613
60613
null
612591
2
null
479044
0
null
From a [Kaggle webpage](https://www.kaggle.com/code/ryanholbrook/mutual-information): > The least possible mutual information between quantities is 0.0. When MI is zero, the quantities are independent: neither can tell you anything about the other. Conversely, in theory there's no upper bound to what MI can be. In practice though values above 2.0 or so are uncommon. (Mutual information is a logarithmic quantity, so it increases very slowly.) So, the answer to your question is yes when MI is "high". However, when MI is "low", it does not necessarily mean there is no possibility of discrimination at all. The same page mentions the role of fuel type in a vehicle's price. Although fuel type MI is low, it separates two classes of prices.
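For intuition, mutual information for discrete variables can be computed directly from a joint distribution (a small sketch; the joint tables below are made up for illustration):

```python
import math


def mutual_information(joint):
    # joint: dict mapping (x, y) -> probability; returns MI in nats
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    mi = 0.0
    for (x, y), p in joint.items():
        if p > 0:
            mi += p * math.log(p / (px[x] * py[y]))
    return mi


# independent variables -> MI is 0
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
print(mutual_information(independent))  # 0.0

# perfectly dependent binary variables -> MI is log(2) ≈ 0.693 nats
dependent = {(0, 0): 0.5, (1, 1): 0.5}
print(mutual_information(dependent))
```

This also illustrates why MI values are rarely large: even a perfectly dependent binary pair only reaches log(2), and MI grows only logarithmically with the number of jointly distinguishable states.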
null
CC BY-SA 4.0
null
2023-04-11T15:32:47.523
2023-04-11T15:32:47.523
null
null
204281
null
612592
1
null
null
0
38
As precipitation prediction models can only predict positive values, they won't be able to undershoot small values by much. When it comes to overshooting, there is no boundary. High precipitation values can essentially be overshot and undershot equally, unless a model predicts ridiculously large amounts. Furthermore, if previous weather has been dry, simple models such as the moving average can easily predict zero values. This is the issue I'd like to address. I've come up with a custom variant of the RMSE (cRMSE). Would this address the issue?

```
np.sqrt(np.mean((y_true - y_pred)**2 + w * np.exp(-np.abs(y_true))))
```

The cRMSE is a custom variant of the Root Mean Squared Error (RMSE) metric. This could be a potentially useful approach for precipitation forecasting, as it incorporates an additional weighting factor `w` $\in (0, 1)$ applied to values of `y_true` close to zero. The cRMSE metric could be useful in cases where you want to give less weight to values close to zero, for example, in situations where predicting zero values accurately is considered less important than predicting non-zero values. The weighting factor `w` allows adjusting the impact of the additional term in the error metric, and you can experiment with different values of `w` to find the best balance between accuracy for non-zero values and tolerance for zero values.
Is there an error metric that decreases the weight when the target is near zero?
CC BY-SA 4.0
null
2023-04-11T15:39:40.243
2023-04-11T20:25:42.360
null
null
385448
[ "regression", "python", "error", "metric", "rms" ]
612593
2
null
609499
1
null
The hypothesis to test is: $H_0$: $D$ and $D'$ are random draws from the same population. $H_1$: The population from which $D$ was drawn is not the same population from which $D'$ was drawn. ## A possible approach Here I modify the notation so that $D \in \{0,1\}^{n \times k}$ and $D' \in \{0,1\}^{m \times k}$. Combine $D$ and $D'$, then partition the observations in $k$ different ways, each corresponding to the presence of the $i^{\text{th}}$ trait, with $i\in\{1...k\}$. Given $H_0$, the number of observations from $D'$ in the partition with the $i^{\text{th}}$ trait should follow the hypergeometric distribution with PMF: $$p(a'_i|m,n,a_i)=\frac{\binom{m}{a'_i}\binom{n}{a_i}}{\binom{m+n}{a_i+a'_i}}$$ where $a_i$ and $a'_i$ are the total number of observations in $D$ and $D'$ with the $i^{\text{th}}$ trait. Further, if $S_i=U(F(a'_i-1;m,n,a_i),F(a'_i;m,n,a_i))$, then $S_i\sim{U(0,1)}$, again assuming $H_0$. Test for $H_0$ by testing if $S\sim{U(0,1)}$, using, e.g., Kolmogorov-Smirnov (which may require relaxing the requirement of independence between the $S_i$). 
Demonstrating in R:

```
set.seed(704776517)
n <- 100L
m <- 40L
k <- 9L

# simulate 1000 KS p-values for identically distributed observations
p <- replicate(
  1e3,
  {
    x <- mapply(\(p) rbinom(n, 1, p), seq(0.1, 0.9, length.out = k))
    y <- mapply(\(p) rbinom(m, 1, p), seq(0.1, 0.9, length.out = k))
    csy <- colSums(y)
    csxy <- colSums(x) + csy
    S <- sapply(1:k, \(i) runif(1, phyper(csy[i] - 1, m, n, csxy[i]),
                                phyper(csy[i], m, n, csxy[i])))
    ks.test(S, punif)$p.value
  }
)
plot(ecdf(p), col = "blue")
lines(0:1, 0:1)
```

[](https://i.stack.imgur.com/n5egs.png)

```
# simulate p-values when the distributions are slightly different
p <- replicate(
  1e3,
  {
    x <- mapply(\(p) rbinom(n, 1, p), seq(0.1, 0.9, length.out = k))
    y <- mapply(\(p) rbinom(m, 1, p), seq(0.2, 0.8, length.out = k))
    csy <- colSums(y)
    csxy <- colSums(x) + csy
    S <- sapply(1:k, \(i) runif(1, phyper(csy[i] - 1, m, n, csxy[i]),
                                phyper(csy[i], m, n, csxy[i])))
    ks.test(S, punif)$p.value
  }
)
plot(ecdf(p), col = "blue")
lines(0:1, 0:1)
```

[](https://i.stack.imgur.com/FquHd.png)

```
# simulate p-values when the distributions are even more different
p <- replicate(
  1e3,
  {
    x <- mapply(\(p) rbinom(n, 1, p), seq(0.1, 0.9, length.out = k))
    y <- mapply(\(p) rbinom(m, 1, p), seq(0.3, 0.7, length.out = k))
    csy <- colSums(y)
    csxy <- colSums(x) + csy
    S <- sapply(1:k, \(i) runif(1, phyper(csy[i] - 1, m, n, csxy[i]),
                                phyper(csy[i], m, n, csxy[i])))
    ks.test(S, punif)$p.value
  }
)
plot(ecdf(p), col = "blue")
lines(0:1, 0:1)
```

[](https://i.stack.imgur.com/MuaN3.png)

The final demonstration is in log-space, so as to test if $S\sim\text{exp}(1)$. (This formulation is not actually needed here, but it demonstrates how to maintain numeric stability if the proportion of observations of $D$ or $D'$ in the partition is very unbalanced.) 
```
# simulate p-values when the distributions are very different
p <- replicate(
  1e4,
  {
    x <- mapply(\(p) rbinom(n, 1, p), seq(0.1, 0.9, length.out = k))
    y <- mapply(\(p) rbinom(m, 1, p), seq(0.7, 0.3, length.out = k))
    csy <- colSums(y)
    csxy <- colSums(x) + csy
    S <- sapply(1:k, \(i) (a <- -phyper(csy[i], m, n, csxy[i], log.p = TRUE)) -
                  log1p(runif(1)*expm1(phyper(csy[i] - 1, m, n, csxy[i], log.p = TRUE) + a)))
    ks.test(S, pexp)$p.value
  }
)
plot(ecdf(p), col = "blue")
lines(0:1, 0:1)
```

[](https://i.stack.imgur.com/UfFym.png)
null
CC BY-SA 4.0
null
2023-04-11T15:48:17.440
2023-04-11T16:05:51.187
2023-04-11T16:05:51.187
214015
214015
null
612594
1
null
null
0
20
In a regression setting with input-output pairs $(x_n, y_n)$ for $n =1, . . . , N$, where the inputs $x_n = (x_{n,1}, . . . , x_{n,D})$ are generated by: $$x_{n,d} \sim N(0, s_d/N),$$ for dimension $d = 1, . . . , D$. $X$ denotes the input matrix and $X^TX$ is a diagonal matrix with diagonal elements $(s_1, . . . , s_D)$. How can you show that the estimated ridge weights simplify to: $$\hat{w}_d^{Ridge} = \frac{s_d}{s_d + \lambda} \hat{w}_d^{LS}$$ for $d = 1, . . . , D$, where $\lambda$ denotes the ridge penalty parameter and $\hat{\mathbf{w}}^{LS}$ denotes the least squares estimates of the regression weights?
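(Not a proof, but the claimed identity can be sanity-checked numerically. A sketch using the closed forms $\hat{\mathbf{w}}^{LS}=(X^TX)^{-1}X^Ty$ and $\hat{\mathbf{w}}^{Ridge}=(X^TX+\lambda I)^{-1}X^Ty$, which reduce entry-wise when $X^TX$ is diagonal; the construction of $X$ via QR is my own device for getting exactly orthogonal columns:)

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, lam = 6, 3, 2.5
s = np.array([4.0, 9.0, 16.0])

# build X with X^T X = diag(s): orthonormal columns scaled so that
# column d has squared norm s_d
Q, _ = np.linalg.qr(rng.standard_normal((N, D)))
X = Q * np.sqrt(s)
y = rng.standard_normal(N)

w_ls = np.linalg.solve(X.T @ X, X.T @ y)
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(D), X.T @ y)

# entry-wise shrinkage: w_ridge_d = s_d / (s_d + lambda) * w_ls_d
print(np.allclose(w_ridge, s / (s + lam) * w_ls))  # True
```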
Ridge Regression Weight Estimation
CC BY-SA 4.0
null
2023-04-11T15:51:10.257
2023-04-11T18:06:07.700
null
null
383595
[ "least-squares", "estimators", "ridge-regression" ]
612595
2
null
612592
0
null
It sounds like you want to implement a weighted RMSE, where the weights are set according to the `y_true` values. To do this in a way that will be useful for discriminating between models that make different predictions, you should use multiplication instead of addition for the weight term:

```
np.sqrt(np.mean((y_true - y_pred)**2 * w))
```

where `w` is an array of weights computed from `y_true` (e.g. smaller for values of `y_true` near zero). The use of addition as you suggested would make the added error term dependent solely on the `y_true` values but not on the predictions - every model would see its error measure increase by a fixed amount, so it's the equivalent of simply adding an arbitrary constant. By using multiplication instead, the actual error terms get weighted by the `y_true` values, meaning that models which make different predictions will see their error measure change by different amounts.
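A minimal sketch of the multiplicative weighting (the particular weight function here, down-weighting targets near zero, is one illustrative choice, not prescribed above):

```python
import numpy as np


def weighted_rmse(y_true, y_pred, w):
    # RMSE with per-sample weights applied to the squared errors
    return np.sqrt(np.mean((y_true - y_pred) ** 2 * w))


y_true = np.array([0.0, 0.0, 5.0, 10.0])
w = 1.0 - np.exp(-np.abs(y_true))  # ~0 weight where y_true == 0

model_a = np.array([1.0, 1.0, 5.0, 10.0])   # errs only on the zero targets
model_b = np.array([0.0, 0.0, 6.0, 11.0])   # errs only on the non-zero targets

# model_a's errors on zero targets are almost ignored,
# while model_b's errors on non-zero targets count nearly fully
print(weighted_rmse(y_true, model_a, w) < weighted_rmse(y_true, model_b, w))  # True
```

Unlike the additive variant, the two models now receive clearly different error measures even though their absolute errors have the same magnitudes.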
null
CC BY-SA 4.0
null
2023-04-11T15:54:07.483
2023-04-11T20:25:42.360
2023-04-11T20:25:42.360
76825
76825
null
612597
1
612605
null
1
97
Let $X$ and $Y$ be real valued random variables. And define a truncation operator as: $\begin{align} X(\tau) = (|X| \wedge \tau) \; \text{sign}(X), \quad \tau > 0 \end{align}$ Now, I am not sure how to show the inequality: $\begin{aligned} & \mathbb{E}\left[X Y\right]-\mathbb{E}\left[X(\tau) Y(\tau)\right] \\ \leq & \mathbb{E}\left[\left|XY\right|\left(\mathbb{I}\left\{\left|X\right| \geq \tau\right\}+\mathbb{I}\left\{\left|Y\right| \geq \tau\right\}\right)\right]\end{aligned}$
Proving upper bound for truncated difference
CC BY-SA 4.0
null
2023-04-11T16:44:17.210
2023-04-12T11:26:01.470
null
null
283493
[ "probability", "mathematical-statistics", "robust", "probability-inequalities", "bias-variance-tradeoff" ]
612598
1
null
null
0
18
Each time I wanted to build mediation models, the a-b-c symbols were very confusing, especially in more complex SEM relations. Therefore, I made some pre-built models (for lavaan in R) that I propose as a better naming representation. Please feel free to comment on this or to propose changes. Note that the pre-built example may contain some errors, but they are easily corrected. The advantage is that all indirect, direct, and total effects come calculated "ready-made", and you don't need to change anything each time you need such mediation models in basic form. You only need to replace variables once in the Y and M models. Also, it is very easy to understand what they mean (at least on my side). Subnote: do you think that some function can automate the generation of these relations to n "X"s, n "Med"s, and n "Y"s by simply providing the xs, meds, & ys as input?

```
M   = Mediator
i   = item
y   = y variable
x   = x variable
d   = direct pathX (x to mediator)
m   = mediator pathY (mediator to y)
de  = direct effect
ind = indirect path
```

Example:

- ind1M1y1 = indirect path of Mediator1 on y1
- dx2M2 = direct path of x2 to Mediator2
- total2M2y1 = total effect of Mediator2 on model y1
- de_toty1 = direct effect - total on model y1

etc. 
```
# 3x - 3Meds - 4Ys
multipleX_Y_MEDs <- "
  # visual, textual, speed as multiple x vars
  visual  =~ i1a + i2a + i3a
  textual =~ i1b + i2b + i3b
  speed   =~ i1c + i2c + i3c

  # y vars ~ x vars + mediation vars
  Y1 ~ d1y1*visual + d2y1*textual + d3y1*speed + m1y1*M1 + m2y1*M2 + m3y1*M3
  Y2 ~ d1y2*visual + d2y2*textual + d3y2*speed + m1y2*M1 + m2y2*M2 + m3y2*M3
  Y3 ~ d1y3*visual + d2y3*textual + d3y3*speed + m1y3*M1 + m2y3*M2 + m3y3*M3
  Y4 ~ d1y4*visual + d2y4*textual + d3y4*speed + m1y4*M1 + m2y4*M2 + m3y4*M3

  M1 ~ dx1M1*visual + dx2M1*textual + dx3M1*speed
  M2 ~ dx1M2*visual + dx2M2*textual + dx3M2*speed
  M3 ~ dx1M3*visual + dx2M3*textual + dx3M3*speed

  # indirect effect (dx()*m) - y1
  ind1M1y1 := dx1M1*m1y1
  ind2M1y1 := dx2M1*m1y1
  ind3M1y1 := dx3M1*m1y1
  ind1M2y1 := dx1M2*m2y1
  ind2M2y1 := dx2M2*m2y1
  ind3M2y1 := dx3M2*m2y1
  ind1M3y1 := dx1M3*m3y1
  ind2M3y1 := dx2M3*m3y1
  ind3M3y1 := dx3M3*m3y1

  # total mediation of each Mediator - y1
  ind1M1y1tot := ind1M1y1 + ind2M1y1 + ind3M1y1
  ind1M2y1tot := ind1M2y1 + ind2M2y1 + ind3M2y1
  ind1M3y1tot := ind1M3y1 + ind2M3y1 + ind3M3y1

  # total mediation for each X - y1
  ind1y1x1 := ind1M1y1 + ind1M2y1 + ind1M3y1
  ind2y1x2 := ind2M1y1 + ind2M2y1 + ind2M3y1
  ind3y1x3 := ind3M1y1 + ind3M2y1 + ind3M3y1

  # Total mediation on y1
  indtoty1 := ind1M1y1tot + ind1M2y1tot + ind1M3y1tot

  # indirect effect (dx()*m) - y2
  ind1M1y2 := dx1M1*m1y2
  ind2M1y2 := dx2M1*m1y2
  ind3M1y2 := dx3M1*m1y2
  ind1M2y2 := dx1M2*m2y2
  ind2M2y2 := dx2M2*m2y2
  ind3M2y2 := dx3M2*m2y2
  ind1M3y2 := dx1M3*m3y2
  ind2M3y2 := dx2M3*m3y2
  ind3M3y2 := dx3M3*m3y2

  # total mediation of each Mediator - y2
  ind1M1y2tot := ind1M1y2 + ind2M1y2 + ind3M1y2
  ind1M2y2tot := ind1M2y2 + ind2M2y2 + ind3M2y2
  ind1M3y2tot := ind1M3y2 + ind2M3y2 + ind3M3y2

  # total mediation for each X - y2
  ind1y2x1 := ind1M1y2 + ind1M2y2 + ind1M3y2
  ind2y2x2 := ind2M1y2 + ind2M2y2 + ind2M3y2
  ind3y2x3 := ind3M1y2 + ind3M2y2 + ind3M3y2

  # Total mediation on y2
  indtoty2 := ind1M1y2tot + ind1M2y2tot + ind1M3y2tot

  # indirect effect (dx()*m) - y3
  ind1M1y3 := dx1M1*m1y3
  ind2M1y3 := dx2M1*m1y3
  ind3M1y3 := dx3M1*m1y3
  ind1M2y3 := dx1M2*m2y3
  ind2M2y3 := dx2M2*m2y3
  ind3M2y3 := dx3M2*m2y3
  ind1M3y3 := dx1M3*m3y3
  ind2M3y3 := dx2M3*m3y3
  ind3M3y3 := dx3M3*m3y3

  # total mediation of each Mediator - y3
  ind1M1y3tot := ind1M1y3 + ind2M1y3 + ind3M1y3
  ind1M2y3tot := ind1M2y3 + ind2M2y3 + ind3M2y3
  ind1M3y3tot := ind1M3y3 + ind2M3y3 + ind3M3y3

  # total mediation for each X - y3
  ind1y3x1 := ind1M1y3 + ind1M2y3 + ind1M3y3
  ind2y3x2 := ind2M1y3 + ind2M2y3 + ind2M3y3
  ind3y3x3 := ind3M1y3 + ind3M2y3 + ind3M3y3

  # Total mediation on y3
  indtoty3 := ind1M1y3tot + ind1M2y3tot + ind1M3y3tot

  # indirect effect (dx()*m) - y4
  ind1M1y4 := dx1M1*m1y4
  ind2M1y4 := dx2M1*m1y4
  ind3M1y4 := dx3M1*m1y4
  ind1M2y4 := dx1M2*m2y4
  ind2M2y4 := dx2M2*m2y4
  ind3M2y4 := dx3M2*m2y4
  ind1M3y4 := dx1M3*m3y4
  ind2M3y4 := dx2M3*m3y4
  ind3M3y4 := dx3M3*m3y4

  # total mediation of each Mediator - y4
  ind1M1y4tot := ind1M1y4 + ind2M1y4 + ind3M1y4
  ind1M2y4tot := ind1M2y4 + ind2M2y4 + ind3M2y4
  ind1M3y4tot := ind1M3y4 + ind2M3y4 + ind3M3y4

  # total mediation for each X - y4
  ind1y4x1 := ind1M1y4 + ind1M2y4 + ind1M3y4
  ind2y4x2 := ind2M1y4 + ind2M2y4 + ind2M3y4
  ind3y4x3 := ind3M1y4 + ind3M2y4 + ind3M3y4

  # Total mediation on y4
  indtoty4 := ind1M1y4tot + ind1M2y4tot + ind1M3y4tot

  # total effect on y1
  total1M1y1 := m1y1 + ind1M1y1
  total2M1y1 := m1y1 + ind2M1y1
  total3M1y1 := m1y1 + ind3M1y1
  total1M2y1 := m2y1 + ind1M2y1
  total2M2y1 := m2y1 + ind2M2y1
  total3M2y1 := m2y1 + ind3M2y1
  total1M3y1 := m3y1 + ind1M3y1
  total2M3y1 := m3y1 + ind2M3y1
  total3M3y1 := m3y1 + ind3M3y1
  totalM1y1 := total1M1y1 + total2M1y1 + total3M1y1
  totalM2y1 := total1M2y1 + total2M2y1 + total3M2y1
  totalM3y1 := total1M3y1 + total2M3y1 + total3M3y1
  totaly1 := totalM1y1 + totalM2y1 + totalM3y1

  # total effect on y2
  total1M1y2 := m1y2 + ind1M1y2
  total2M1y2 := m1y2 + ind2M1y2
  total3M1y2 := m1y2 + ind3M1y2
  total1M2y2 := m2y2 + ind1M2y2
  total2M2y2 := m2y2 + ind2M2y2
  total3M2y2 := m2y2 + ind3M2y2
  total1M3y2 := m3y2 + ind1M3y2
  total2M3y2 := m3y2 + ind2M3y2
  total3M3y2 := m3y2 + ind3M3y2
  totalM1y2 := total1M1y2 + total2M1y2 + total3M1y2
  totalM2y2 := total1M2y2 + total2M2y2 + total3M2y2
  totalM3y2 := total1M3y2 + total2M3y2 + total3M3y2
  totaly2 := totalM1y2 + totalM2y2 + totalM3y2

  # total effect on y3
  total1M1y3 := m1y3 + ind1M1y3
  total2M1y3 := m1y3 + ind2M1y3
  total3M1y3 := m1y3 + ind3M1y3
  total1M2y3 := m2y3 + ind1M2y3
  total2M2y3 := m2y3 + ind2M2y3
  total3M2y3 := m2y3 + ind3M2y3
  total1M3y3 := m3y3 + ind1M3y3
  total2M3y3 := m3y3 + ind2M3y3
  total3M3y3 := m3y3 + ind3M3y3
  totalM1y3 := total1M1y3 + total2M1y3 + total3M1y3
  totalM2y3 := total1M2y3 + total2M2y3 + total3M2y3
  totalM3y3 := total1M3y3 + total2M3y3 + total3M3y3
  totaly3 := totalM1y3 + totalM2y3 + totalM3y3

  # total effect on y4
  total1M1y4 := m1y4 + ind1M1y4
  total2M1y4 := m1y4 + ind2M1y4
  total3M1y4 := m1y4 + ind3M1y4
  total1M2y4 := m2y4 + ind1M2y4
  total2M2y4 := m2y4 + ind2M2y4
  total3M2y4 := m2y4 + ind3M2y4
  total1M3y4 := m3y4 + ind1M3y4
  total2M3y4 := m3y4 + ind2M3y4
  total3M3y4 := m3y4 + ind3M3y4
  totalM1y4 := total1M1y4 + total2M1y4 + total3M1y4
  totalM2y4 := total1M2y4 + total2M2y4 + total3M2y4
  totalM3y4 := total1M3y4 + total2M3y4 + total3M3y4
  totaly4 := totalM1y4 + totalM2y4 + totalM3y4

  # direct effects
  de_toty1 := d1y1 + d2y1 + d3y1
  de_toty2 := d1y2 + d2y2 + d3y2
  de_toty3 := d1y3 + d2y3 + d3y3
  de_toty4 := d1y4 + d2y4 + d3y4

  # general total of whole model
  tot_whole := totaly1 + totaly2 + totaly3 + totaly4
  ind_whole := indtoty1 + indtoty2 + indtoty3 + indtoty4
  de_whole := de_toty1 + de_toty2 + de_toty3 + de_toty4
"
```

and

```
# 2X - 3Meds - 4Ys
multipleX_Y_MEDs <- "
  # visual, textual as multiple x vars
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6

  Y1 ~ d1y1*visual + d2y1*textual + m1y1*M1 + m2y1*M2 + m3y1*M3
  Y2 ~ d1y2*visual + d2y2*textual + m1y2*M1 + m2y2*M2 + m3y2*M3
  Y3 ~ d1y3*visual + d2y3*textual + m1y3*M1 + m2y3*M2 + m3y3*M3
  Y4 ~ d1y4*visual + d2y4*textual + m1y4*M1 + m2y4*M2 + m3y4*M3

  M1 ~ dx1M1*visual + dx2M1*textual
  M2 ~ dx1M2*visual + dx2M2*textual
  M3 
```
~ dx1M3*visual + dx2M3*textual # indirect effect (dx()*m) - y1 ind1M1y1 := dx1M1*m1y1 ind2M1y1 := dx2M1*m1y1 ind3M1y1 := dx3M1*m1y1 ind1M2y1 := dx1M2*m2y1 ind2M2y1 := dx2M2*m2y1 ind3M2y1 := dx3M2*m2y1 ind1M3y1 := dx1M3*m3y1 ind2M3y1 := dx2M3*m3y1 ind3M3y1 := dx3M3*m3y1 # total mediation of each Mediator - y1 ind1M1y1tot := ind1M1y1 + ind2M1y1 + ind3M1y1 ind1M2y1tot := ind1M2y1 + ind2M2y1 + ind3M2y1 ind1M3y1tot := ind1M3y1 + ind2M3y1 + ind3M3y1 # total mediation for each X - y1 ind1y1x1 := ind1M1y1 + ind1M2y1 + ind1M3y1 ind2y1x2 := ind2M1y1 + ind2M2y1 + ind2M3y1 ind3y1x3 := ind3M1y1 + ind3M2y1 + ind3M3y1 # Total mediation on y1 indtoty1 := ind1M1y1tot + ind1M2y1tot + ind1M3y1tot # indirect effect (dx()*m) - y2 ind1M1y2 := dx1M1*m1y2 ind2M1y2 := dx2M1*m1y2 ind1M2y2 := dx1M2*m2y2 ind2M2y2 := dx2M2*m2y2 ind1M3y2 := dx1M3*m3y2 ind2M3y2 := dx2M3*m3y2 # total mediation of each Mediator - y2 ind1M1y2tot := ind1M1y2 + ind2M1y2 ind1M2y2tot := ind1M2y2 + ind2M2y2 ind1M3y2tot := ind1M3y2 + ind2M3y2 # total mediation for each X - y2 ind1y2x1 := ind1M1y2 + ind1M2y2 + ind1M3y2 ind2y2x2 := ind2M1y2 + ind2M2y2 + ind2M3y2 # Total mediation on y2 indtoty2 := ind1M1y2tot + ind1M2y2tot + ind1M3y2tot # indirect effect (dx()*m) - y3 ind1M1y3 := dx1M1*m1y3 ind2M1y3 := dx2M1*m1y3 ind1M2y3 := dx1M2*m2y3 ind2M2y3 := dx2M2*m2y3 ind1M3y3 := dx1M3*m3y3 ind2M3y3 := dx2M3*m3y3 # total mediation of each Mediator - y3 ind1M1y3tot := ind1M1y3 + ind2M1y3 ind1M2y3tot := ind1M2y3 + ind2M2y3 ind1M3y3tot := ind1M3y3 + ind2M3y3 # total mediation for each X - y3 ind1y3x1 := ind1M1y3 + ind1M2y3 + ind1M3y3 ind2y3x2 := ind2M1y3 + ind2M2y3 + ind2M3y3 # Total mediation on y3 indtoty3 := ind1M1y3tot + ind1M2y3tot + ind1M3y3tot # indirect effect (dx()*m) - y4 ind1M1y4 := dx1M1*m1y4 ind2M1y4 := dx2M1*m1y4 ind1M2y4 := dx1M2*m2y4 ind2M2y4 := dx2M2*m2y4 ind1M3y4 := dx1M3*m3y4 ind2M3y4 := dx2M3*m3y4 # total mediation of each Mediator - y4 ind1M1y4tot := ind1M1y4 + ind2M1y4 ind1M2y4tot := ind1M2y4 + ind2M2y4 
ind1M3y4tot := ind1M3y4 + ind2M3y4 # total mediation for each X - y4 ind1y4x1 := ind1M1y4 + ind1M2y4 + ind1M3y4 ind2y4x2 := ind2M1y4 + ind2M2y4 + ind2M3y4 # Total mediation on y4 indtoty4 := ind1M1y4tot + ind1M2y4tot + ind1M3y4tot # total effect on y1 total1M1y1 := m1y1 + ind1M1y1 total2M1y1 := m1y1 + ind2M1y1 total1M2y1 := m2y1 + ind1M2y1 total2M2y1 := m2y1 + ind2M2y1 total1M3y1 := m3y1 + ind1M3y1 total2M3y1 := m3y1 + ind2M3y1 totalM1y1 := total1M1y1 + total2M1y1 totalM2y1 := total1M2y1 + total2M2y1 totalM3y1 := total1M3y1 + total2M3y1 totaly1 := totalM1y1 + totalM2y1 + totalM3y1 # total effect on y2 total1M1y2 := m1y2 + ind1M1y2 total2M1y2 := m1y2 + ind2M1y2 total1M2y2 := m2y2 + ind1M2y2 total2M2y2 := m2y2 + ind2M2y2 total1M3y2 := m3y2 + ind1M3y2 total2M3y2 := m3y2 + ind2M3y2 totalM1y2 := total1M1y2 + total2M1y2 totalM2y2 := total1M2y2 + total2M2y2 totalM3y2 := total1M3y2 + total2M3y2 totaly2 := totalM1y2 + totalM2y2 + totalM3y2 # total effect on y3 total1M1y3 := m1y3 + ind1M1y3 total2M1y3 := m1y3 + ind2M1y3 total1M2y3 := m2y3 + ind1M2y3 total2M2y3 := m2y3 + ind2M2y3 total1M3y3 := m3y3 + ind1M3y3 total2M3y3 := m3y3 + ind2M3y3 totalM1y3 := total1M1y3 + total2M1y3 totalM2y3 := total1M2y3 + total2M2y3 totalM3y3 := total1M3y3 + total2M3y3 totaly3 := totalM1y3 + totalM2y3 + totalM3y3 # total effect on y4 total1M1y4 := m1y4 + ind1M1y4 total2M1y4 := m1y4 + ind2M1y4 total1M2y4 := m2y4 + ind1M2y4 total2M2y4 := m2y4 + ind2M2y4 total1M3y4 := m3y4 + ind1M3y4 total2M3y4 := m3y4 + ind2M3y4 totalM1y4 := total1M1y4 + total2M1y4 totalM2y4 := total1M2y4 + total2M2y4 totalM3y4 := total1M3y4 + total2M3y4 totaly4 := totalM1y4 + totalM2y4 + totalM3y4 # direct effects de_toty1 := d1y1 + d2y1 de_toty2 := d1y2 + d2y2 de_toty3 := d1y3 + d2y3 de_toty4 := d1y4 + d2y4 # general total of whole model tot_whole := totaly1 + totaly2 + totaly3 + totaly4 ind_whole := indtoty1 + indtoty2 + indtoty3 + indtoty4 de_whole := de_toty1 + de_toty2 + de_toty3 + de_toty4 "
a better name-representation for mediation SEM models
CC BY-SA 4.0
null
2023-04-11T16:53:50.533
2023-04-11T21:55:28.437
2023-04-11T21:55:28.437
212256
212256
[ "r", "mediation", "lavaan" ]
612599
1
null
null
0
26
I am trying to write/understand a conditionally-conjugate Gibbs sampler for what is essentially a linear, mixed effects model. I more or less get the conditionally-conjugate posterior for the hierarchical parts of the model (nested/random effects), but I am having trouble understanding what the conjugate posterior looks like when conditioning on one of the crossed (not nested) effects. Borrowing [Gelman's notation](https://vulstats.ucsd.edu/pdf/Gelman.ch-13.more-multilevel-models.pdf) (equations 13.7 and 13.8), a random effects model without fixed or crossed effects looks like: $$ y_i \sim \mathbf{N}\left(X_iB_{j[i]},\ \sigma^2_y\right) \\ B_j \sim \mathbf{N}\left(U_jG,\ \Sigma_B\right) \\ G \sim \mathbf{N}\left(G_0, \Sigma_0\right) $$ Where $X_i$ are the individual level predictors, $B_j$ are the effects within group $j$, $U_j$ are the group-level predictors, $G$ are the group-level regression coefficients, and the subscript $j[i]$ is "the group $j$ containing observation $i$." $G_0$ and $\Sigma_0$ are fixed hyperparameters. Building off the work of [Sosa et al](https://arxiv.org/abs/2110.10565), I think the conditionally conjugate posterior for $B_j$ is: $$ B_j|... \sim \mathbf{N}\left[\left(\Sigma_B^{-1}+\sigma^{-2}_yX_j^\intercal X_j\right)^{-1}\left(\Sigma_B^{-1}U_jG+\sigma^{-2}_yX_j^\intercal y_j\right),\ \left(\Sigma_B^{-1}+\sigma^{-2}_yX_j^\intercal X_j\right)^{-1}\right] \\ G|... \sim \mathbf{N}\left[\left(\Sigma_0^{-1}+\Sigma_B^{-1}U^\intercal U\right)^{-1}\left(\Sigma_0^{-1}G_0 + \Sigma_B^{-1}U^\intercal B_j\right),\ \left(\Sigma_0^{-1}+\Sigma_B^{-1}U^\intercal U\right)^{-1}\right] $$ If we add fixed effects $\beta^0$ then: $$ y_i \sim \mathbf{N}\left(X_i^0\beta^0 + X_iB_{j[i]},\ \sigma^2_y\right) \\ B_j \sim \mathbf{N}\left(U_jG,\ \Sigma_B\right) $$ Gelman suggests we can express this in the form of the first model by "rolling" the fixed effects $X^0$ into $X$, $\beta^0$ into $B_j$, and setting appropriate terms in $\Sigma_B$ to zero. 
I can sort of see how to make this work through matrix algebra, although sampling from $\Sigma_B$ using an inverse-Wishart posterior becomes trickier. However, I'm really having trouble seeing how to sample from a conditionally conjugate posterior for non-nested, crossed effects $\gamma_k$ where the levels of $k$ do not nest within the levels of $j$. Note that Gelman suggests giving the non-nested effects an intercept of zero since any non-zero means could be folded into an intercept term within $\beta^0$. $$ y_i \sim \mathbf{N}\left(X_i^0\beta^0 + X_iB_{j[i]} + Z_i\gamma_{k[i]},\ \sigma^2_y\right) \\ B_j \sim \mathbf{N}\left(U_jG_B,\ \Sigma_B\right) \\ \gamma_k \sim \mathbf{N}\left(V_kG_\gamma,\ \Sigma_\gamma\right) $$ Is it possible to obtain a conjugate posterior for $B_j|\beta_0,\gamma_k,...$ and equivalently for $\gamma_k|\beta_0,B_j,...$? Can one simply subtract the expected values of the other parameters we are conditioning on from $y$? $$ B_j|... \sim \mathbf{N}\left[\left(\Sigma_B^{-1}+\sigma^{-2}_yX_j^\intercal X_j\right)^{-1}\left(\Sigma_B^{-1}U_jG+\sigma^{-2}_yX_j^\intercal \left(y_j-X^0\beta^0-Z_k\gamma_k\right)\right),\ \left(\Sigma_B^{-1}+\sigma^{-2}_yX_j^\intercal X_j\right)^{-1}\right] $$ ## EDIT To answer my own question, subtracting the conditioned-upon parameters from $y$ as above appears to be correct. See [Zanella & Roberts](https://doi.org/10.1214/20-BA1242), in particular the expressions for the posterior distributions in Section 7 of [the supplementary material](https://doi.org/10.1214/20-BA1242SUPP). I would still appreciate comments/confirmation from anyone with experience using these sorts of models!
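Here is a minimal numpy sketch (all names my own; `offset` stands for $X^0\beta^0 + Z_k\gamma_k$) of the conditional draw implied by the expression above; as a sanity check, with the prior made vague the posterior mean collapses to the least-squares fit, as the conjugate form should give:

```python
import numpy as np

def draw_Bj(y_j, X_j, U_j, G, Sigma_B, sigma2_y, offset, rng):
    """One conditionally conjugate draw of B_j; `offset` carries the
    conditioned-upon terms X0 @ beta0 + Z @ gamma subtracted from y_j."""
    SB_inv = np.linalg.inv(Sigma_B)
    prec = SB_inv + (X_j.T @ X_j) / sigma2_y           # posterior precision
    cov = np.linalg.inv(prec)                          # posterior covariance
    mean = cov @ (SB_inv @ (U_j @ G) + X_j.T @ (y_j - offset) / sigma2_y)
    return rng.multivariate_normal(mean, cov), mean

rng = np.random.default_rng(0)
X_j = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y_j = np.array([1.0, 2.0, 3.5])
# nearly flat prior: the posterior mean should match ordinary least squares
_, post_mean = draw_Bj(y_j, X_j, np.eye(2), np.zeros(2),
                       1e8 * np.eye(2), 1.0, np.zeros(3), rng)
```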
Conditionally conjugate prior for non-nested (i.e. crossed) normal model?
CC BY-SA 4.0
null
2023-04-11T17:08:37.617
2023-04-13T14:18:03.847
2023-04-13T14:18:03.847
289416
289416
[ "bayesian", "multilevel-analysis", "hierarchical-bayesian", "gibbs", "conjugate-prior" ]
612600
2
null
104958
0
null
There is a nice approximation described in the paper ["Computing moments of ratios of quadratic forms in normal variables"](https://www.sciencedirect.com/science/article/pii/S016794730200213X) (the approximation predates this paper though). It uses a second-order Taylor expansion that leads to a simple formula that is a good approximation in many cases (this approximation is used in [this other answer](https://stats.stackexchange.com/questions/586875/covariance-of-fracx-x-for-gaussian-x/609445#609445) of mine, see the comments of the original poster). Let's write $N = w^T A w$ and $D = w^T B w$. Then $\mathbb{E}\left(\frac{w^T A w}{w^T B w}\right)$ can be approximated with the following expression of the moments of $N$ and $D$: \begin{equation} \mathbb{E}\left(\frac{N}{D}\right) \approx \frac{\mu_N}{\mu_D}\left( 1 - \frac{Cov(N,D)}{\mu_N \mu_D} + \frac{Var(D)}{\mu_D^2} \right) \end{equation} where: \begin{equation} \begin{split} & \mu_N = tr(A\Sigma) + \mu_{w}^T A \mu_{w} \\ & \mu_D = tr(B\Sigma) + \mu_w^T B \mu_w \\ & Var(D) = 2tr([B \Sigma]^2) + 4 \mu_w^T B \Sigma B \mu_w \\ & Cov(N,D) = 2tr(B \Sigma A \Sigma) + 4 \mu_w^T B \Sigma A \mu_w \end{split} \end{equation} and $\mu_w$ and $\Sigma$ are the mean and covariance of normal vector $w$. That is, $w\sim \mathcal{N}(\mu_w, \Sigma)$. If this answers your question, please consider upvoting.
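As a concrete check (function name my own), the formula is exact in at least one simple case: for $A=\operatorname{diag}(2,1)$, $B=I_2$, $\mu_w=0$, $\Sigma=I_2$ the true mean is $1.5$, since $x^2/(x^2+y^2)\sim \text{Beta}(1/2,1/2)$:

```python
import numpy as np

def ratio_mean_approx(A, B, mu, Sigma):
    """Second-order approximation of E[w'Aw / w'Bw] for w ~ N(mu, Sigma)."""
    mu_N = np.trace(A @ Sigma) + mu @ A @ mu
    mu_D = np.trace(B @ Sigma) + mu @ B @ mu
    var_D = 2 * np.trace(B @ Sigma @ B @ Sigma) + 4 * mu @ B @ Sigma @ B @ mu
    cov_ND = 2 * np.trace(B @ Sigma @ A @ Sigma) + 4 * mu @ B @ Sigma @ A @ mu
    return (mu_N / mu_D) * (1 - cov_ND / (mu_N * mu_D) + var_D / mu_D ** 2)

approx = ratio_mean_approx(np.diag([2.0, 1.0]), np.eye(2),
                           np.zeros(2), np.eye(2))   # exact answer is 1.5
```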
null
CC BY-SA 4.0
null
2023-04-11T17:26:13.293
2023-04-11T17:26:13.293
null
null
134438
null
612602
1
null
null
0
48
I have the following issue: I am trying to cluster similar countries with respect to different temporal features. Therefore, I have twelve different datasets, each representing a different country. Each dataset consists of a couple of features recorded as time series. The features can differ from dataset to dataset (e.g., dataset1 = feature A, feature C, feature E, ...; dataset2 = feature B, feature H, feature L, ...; and so on). The same features can also appear in different datasets. The length of each time series is the same. I am struggling to find an appropriate ML clustering algorithm to deal with all three dimensions (i.e., time, country, feature). I tried this approach ([https://www.pythonforfinance.net/2018/02/08/stock-clusters-using-k-means-algorithm-in-python/](https://www.pythonforfinance.net/2018/02/08/stock-clusters-using-k-means-algorithm-in-python/)), but it only lets me cluster the features against one another. Also, breaking the issue down to 2x2 (either country & feature over time; or weighted features & country) dimensions instead of having 1x3 dimensions results in an information loss. Is there any ML clustering algorithm in Python which can deal with all three dimensions at a time?
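To make the layout concrete, here is a toy sketch of one workaround I am considering (all names and numbers invented): collapse each time series to a few summary statistics, build a country × (feature, statistic) matrix with `NaN` where a country lacks a feature, impute, and run plain k-means on the countries. It flattens the time dimension, so it is exactly the kind of information loss I would like to avoid:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
countries = ["c1", "c2", "c3", "c4"]
features = ["fA", "fB", "fC"]

# toy stand-in for the twelve datasets: not every country has every feature
series = {
    ("c1", "fA"): rng.normal(0, 1, 60), ("c1", "fC"): rng.normal(0, 1, 60),
    ("c2", "fA"): rng.normal(0, 1, 60), ("c2", "fB"): rng.normal(3, 1, 60),
    ("c3", "fB"): rng.normal(3, 1, 60), ("c3", "fC"): rng.normal(0, 1, 60),
    ("c4", "fA"): rng.normal(0, 1, 60), ("c4", "fB"): rng.normal(3, 1, 60),
}

def summarize(ts):
    """Collapse one time series to (mean, sd, linear trend)."""
    slope = np.polyfit(np.arange(len(ts)), ts, 1)[0]
    return [ts.mean(), ts.std(), slope]

# country x (feature * statistic) matrix, NaN where a feature is absent
X = np.full((len(countries), 3 * len(features)), np.nan)
for i, c in enumerate(countries):
    for j, f in enumerate(features):
        if (c, f) in series:
            X[i, 3 * j:3 * j + 3] = summarize(series[(c, f)])

# mean-impute the missing cells, then cluster the countries
X = np.where(np.isnan(X), np.nanmean(X, axis=0), X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```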
How to cluster multivariate time series on different datasets?
CC BY-SA 4.0
null
2023-04-11T17:54:54.707
2023-04-11T17:54:54.707
null
null
385457
[ "time-series", "classification", "clustering", "multivariate-analysis", "k-means" ]
612605
2
null
612597
3
null
As in [this question](https://stats.stackexchange.com/questions/611794/proving-upper-bound-for-bias-of-truncated-sample-mean#comment1137865_611794), the idea is to directly compare $f(X, Y) = XY - X(\tau)Y(\tau)$ and $g(X, Y) = |XY|(I(|X| \geq \tau) + I(|Y| \geq \tau))$ on different regions of $\Omega$. On the region $[|X| < \tau, |Y| < \tau]$, $f(X, Y) = 0 \leq g(X, Y) = 0$. On the region $[|X| < \tau, |Y| \geq \tau]$, $f(X, Y) = X(Y - Y(\tau)) = X(Y - \tau\operatorname{sign}(Y))$, $g(X, Y) = |XY|$. To see $f(X, Y) \leq g(X, Y)$ on this region, note that if $|Y| \geq \tau$, then $|Y - \tau\operatorname{sign}(Y)| \leq |Y|$. On the region $[|X| \geq \tau, |Y| \geq \tau]$, $f(X, Y) = XY - \tau\operatorname{sign}(X)\tau\operatorname{sign}(Y)$, $g(X, Y) = 2|XY|$. It follows by $|\tau\operatorname{sign}(X)| = \tau \leq |X|$ and $|\tau\operatorname{sign}(Y)| = \tau \leq |Y|$ on this region that \begin{align} f(X, Y) \leq |f(X, Y)| \leq |XY| + |\tau\operatorname{sign}(X)| |\tau\operatorname{sign}(Y)| \leq |XY| + |XY| = g(X, Y). \end{align} Now you should be able to finish the comparison on the remaining region, $[|X| \geq \tau, |Y| < \tau]$, which is symmetric to the second case.
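A quick numerical sanity check of the comparison (with the truncation $X(\tau)$ implemented as clipping at $\pm\tau$) confirms the bound on every simulated draw:

```python
import numpy as np

rng = np.random.default_rng(1)
tau = 1.0
clip = lambda v: np.clip(v, -tau, tau)   # v(tau) = sign(v) * min(|v|, tau)

X = rng.normal(0.0, 2.0, 100_000)
Y = rng.normal(0.0, 2.0, 100_000)

f = X * Y - clip(X) * clip(Y)
g = np.abs(X * Y) * ((np.abs(X) >= tau).astype(int) + (np.abs(Y) >= tau).astype(int))

assert np.all(f <= g + 1e-12)            # the claimed bound, on every draw
```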
null
CC BY-SA 4.0
null
2023-04-11T18:37:03.667
2023-04-12T11:26:01.470
2023-04-12T11:26:01.470
20519
20519
null
612606
2
null
599855
0
null
If you have the two original datasets, I recommend aggregating $5290$ copies of the first population and $500$ copies of the second population to get a new dataset with $3830\times 5290+500\times 5290=22\,905\,700$ datapoints. You can estimate the percentiles from that. If you don't have the two original datasets, you'll have to guess. If we call the given percentiles $q$ (for quantile), we might try the assumption that each pdf is a piecewise-linear function between the points $(q_0,0)$, $(q_{10},f_{10})$, $(q_{25},f_{25})$, $(q_{50},f_{50})$, $(q_{75},f_{75})$, $(q_{90},f_{90})$, $(q_{100},0)$, and zero elsewhere. We can calculate the mean and the six areas separated by those five percentiles in terms of $q_0, f_{10}, f_{25}, f_{50}, f_{75}, f_{90}, q_{100}$, and solve the resulting seven equations for the seven variables. A reasonable solution would have $q_0\le q_{10}\le q_{90}\le q_{100}$ and $\min(f_{10},f_{25},f_{50},f_{75},f_{90})\ge0$. The first dataset has exactly one reasonable solution. By contrast, the second dataset is so highly spiked between the 50th and 75th percentiles that it has no reasonable solutions of this form. (See update below.) But there is one solution for the second data set satisfying the inequalities on the $q$'s, and we can work with the negative $f$'s. Here are the graphs of the two piecewise-linear functions, scaled up by 3830 and 500 to give approximate formal histograms for the two datasets. [](https://i.stack.imgur.com/fD1gk.png) Obviously the orange graph goes negative and is not a reasonable histogram! We can still plot the sum of the two graphs, and fortunately the result is positive from 46286 to 498411, and a plausible-looking histogram for the mixture. 
[](https://i.stack.imgur.com/I6LM3.png) Similarly, using the combined pdf of (3830/4330) times the first function plus (500/4330) times the second function, we can estimate the quantiles of the mixture as: - 10th quantile: $\ \,$62780 - 25th quantile: $\ \,$83090 - 50th quantile: 120350 - 75th quantile: 153520 - 90th quantile: 178910 Perhaps these estimates, even if calculated via some not-very-meaningful negative numbers, are good enough for your purposes. Perhaps fitting the data to a wider class of models would be better, even with more difficult calculations and maybe arbitrary choices between multiple solutions. Or perhaps a simple calculation with 23 million datapoints looks pretty good. Update: By examining average pdfs over three regions, we can prove that reasonable models of this type can not fit the second dataset. In this model $$\frac{f_{25}+f_{50}}{\phantom{1}2^{\phantom{2}}}=\frac{\int_{q_{25}}^{q_{50}}f(x)dx}{q_{50}-q_{25}}=\frac{0.50-0.25}{q_{50}-q_{25}}$$ and similarly for other pairs of neighboring quantiles. So a reasonable solution would satisfy $$\frac{f_{25}+f_{50}}{2}+\frac{f_{75}+f_{90}}{2}\ge \frac{f_{50}+f_{75}}{2}$$ and $$\frac{0.50-0.25}{q_{50}-q_{25}}+ \frac{0.90-0.75}{q_{90}-q_{75}}\ge \frac{0.75-0.50}{q_{75}-q_{50}}$$ But in this case that would give $$\frac{7.3}{10^6}+\frac{6.3}{10^6}\ge\frac{25.6}{10^6}$$ which is false.
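For the case where the raw datapoints are available, the replication approach is a couple of lines of numpy; the arrays and copy counts below are toy stand-ins for the real samples:

```python
import numpy as np

# toy stand-ins for the two raw samples and their replication counts
pop1 = np.array([1.0, 2.0, 3.0])
pop2 = np.array([10.0])

combined = np.concatenate([np.repeat(pop1, 2), np.repeat(pop2, 1)])
# combined is [1, 1, 2, 2, 3, 3, 10]; percentiles of the weighted mixture:
mixture_percentiles = np.percentile(combined, [10, 25, 50, 75, 90])
```

The same two lines with the real arrays and counts give the mixture percentiles directly, with no modeling assumptions.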
null
CC BY-SA 4.0
null
2023-04-11T18:57:37.377
2023-04-12T17:20:40.137
2023-04-12T17:20:40.137
225256
225256
null
612607
1
null
null
2
109
Before training a machine learning algorithm, it is advisable to perform feature scaling. Suppose we have a "toy" dataset where each image is composed of two pixels $x_0$ and $x_1$. Let's assume that $x_0 \approx 0$ and $x_1 \approx 1$ for all the training samples. Initially, all images will be $(0, 1)$, but after e.g. min-max normalization they become $(1, 1)$; that is, we have lost the "contrast". Are there cases where feature scaling should be performed with caution? Would a per-image normalization make more sense in the aforementioned example? How should normalization be performed? In the case of min-max normalization, do we use the min and max values across all pixels of all images? I am asking this because when we have a dataset where columns are features (e.g. height, weight, etc.) we normalize per column. Does this per-column normalization make sense for images (if we flatten an $n\times n$ image into a vector with $n^2$ entries)?
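To illustrate what I mean, here is a small numpy sketch (numbers invented) contrasting per-column min-max scaling with a single global min-max over all pixels:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy dataset: 100 two-pixel "images" with x0 ~ 0 and x1 ~ 1
X = np.column_stack([rng.normal(0.0, 0.01, 100), rng.normal(1.0, 0.01, 100)])

def minmax(a, axis=None):
    lo = a.min(axis=axis, keepdims=True)
    hi = a.max(axis=axis, keepdims=True)
    return (a - lo) / (hi - lo)

per_column = minmax(X, axis=0)  # each pixel scaled on its own: contrast lost
global_all = minmax(X)          # one min/max over all pixels: contrast kept
```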
Do we lose information when we normalize an image?
CC BY-SA 4.0
null
2023-04-11T19:04:49.183
2023-04-12T10:05:08.697
2023-04-12T10:05:08.697
271176
271176
[ "machine-learning", "conv-neural-network", "standardization", "data-preprocessing", "feature-scaling" ]
612608
1
null
null
1
45
I want to prove consistency of the sample correlations from canonical correlation analysis (CCA). Here is an informal statement of the theorem: Let $\textbf{X}$ and $\textbf{Y}$ be two p-dimensional and q-dimensional random vectors, respectively, with joint distribution P. Let $\left( x_{1},y_{1} \right), \ldots,(x_{n},y_{n})$ be random samples from P. Let $\textbf{r}$ be the set of population correlations between canonical variates produced when doing CCA between $\textbf{X}$ and $\textbf{Y}$. Note that the length of $\textbf{r}$, $|\textbf{r}|=\min(p,q)$. We use population covariance matrices to calculate $\textbf{r}$. In data analysis, we replace the population covariance matrix with the sample covariance matrix, which results in sample correlations, $\hat{\textbf{r}}$. Then, $\hat{\textbf{r}}_{n}$ converges to $\textbf{r}$ in probability as n goes to infinity. I really appreciate any help.
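While not a proof, the convergence is easy to observe empirically (function name my own): compute the sample canonical correlations as the singular values of $\hat{\Sigma}_{XX}^{-1/2}\hat{\Sigma}_{XY}\hat{\Sigma}_{YY}^{-1/2}$ on a design whose population canonical correlations are $(0.5, 0)$:

```python
import numpy as np

def canonical_correlations(X, Y):
    """Sample canonical correlations via SVD of Sxx^{-1/2} Sxy Syy^{-1/2}."""
    Z = np.column_stack([X, Y])
    Z = Z - Z.mean(axis=0)
    S = Z.T @ Z / (Z.shape[0] - 1)
    p = X.shape[1]
    Sxx, Syy, Sxy = S[:p, :p], S[p:, p:], S[:p, p:]

    def inv_sqrt(M):
        w, V = np.linalg.eigh(M)
        return V @ np.diag(w ** -0.5) @ V.T

    return np.linalg.svd(inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy), compute_uv=False)

rng = np.random.default_rng(0)
n = 200_000
z = rng.normal(size=n)                 # shared latent variable
X = np.column_stack([z + rng.normal(size=n), rng.normal(size=n)])
Y = np.column_stack([z + rng.normal(size=n), rng.normal(size=n)])
r_hat = canonical_correlations(X, Y)   # should approach (0.5, 0)
```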
Consistency of sample correlations from CCA
CC BY-SA 4.0
null
2023-04-11T19:06:43.620
2023-04-12T14:32:10.553
2023-04-12T14:32:10.553
211255
211255
[ "machine-learning", "mathematical-statistics", "inference", "multivariate-analysis", "biostatistics" ]
612609
1
612887
null
3
57
In the 2015 paper "[Deep Unsupervised Learning using Nonequilibrium Thermodynamics](https://arxiv.org/pdf/1503.03585.pdf)" by Sohl-Dickstein et al. on diffusion for generative models, Figure 1 shows the forward trajectory for a 2-d swiss-roll image using Gaussian diffusion. The thin lines are gradually blurred into wider and fuzzier lines, and eventually into an identity-covariance Gaussian. Table App.1 gives the diffusion kernel as: $$ q(\mathbf{x}^{(t)} \mid \mathbf{x}^{(t-1)}) = \mathcal{N}(\mathbf{x}^{(t)} ; \mathbf{x}^{(t-1)} \sqrt{1 - \beta_t}, \mathbf{I} \beta_t ) $$ The covariance of the diffusion kernel is diagonal, so each component $x_i^{(t)}$ (i.e., each pixel in the image at time step $t$) is independently sampled from a 1-d Gaussian based on the prior time step's pixel value at the same x-y location in the image. So a given pixel should NOT diffuse into neighboring pixels; instead, the action of the diffusion step is a linear Gaussian 1-d transformation of the number held in the pixel, with the mean slightly reduced and some noise added. Question: This seems inconsistent with Figure 1? Instead of the blurred line (wider and fuzzier line), we should have a line that has the same width, but exhibits more noise? In order to have a pixel diffuse into neighboring pixels, we would need a diffusion kernel with a non-diagonal covariance, so that there is nonzero covariance between components?
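A quick simulation of one forward step illustrates the reading above: with diagonal covariance, the mean of the step is a per-pixel rescaling and never mixes neighbouring pixels (toy 8×8 one-hot "image"):

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 0.1
x = np.zeros((8, 8))
x[4, 4] = 1.0                    # one-hot toy "image"

def forward_step(x_prev, beta, noise):
    """q(x_t | x_{t-1}) = N(sqrt(1 - beta) x_{t-1}, beta I): acts pixel-wise."""
    return np.sqrt(1.0 - beta) * x_prev + np.sqrt(beta) * noise

mean_part = forward_step(x, beta, np.zeros_like(x))      # noiseless mean
noisy = forward_step(x, beta, rng.normal(size=x.shape))  # one sampled step
```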
Blurring of image in generative model using diffusion probabilistic method
CC BY-SA 4.0
null
2023-04-11T19:07:22.193
2023-04-14T05:21:08.603
2023-04-12T15:13:23.907
385459
385459
[ "generative-models", "diffusion" ]
612610
1
612798
null
2
58
The experiment is designed as follows: 9 different pH treatments (controlled); in each treatment, 40 marked individuals (randomly chosen from a larger population). Measurements (length, weight, etc.) were performed on the same day on every marked individual from each pH treatment over several time points. The research question is: is the response variable changing with pH? How do I approach this? What statistical test would be appropriate? A pointer in any direction would be appreciated.
Appropiate statistical test for the following design
CC BY-SA 4.0
null
2023-04-11T19:14:32.270
2023-04-15T08:39:43.583
null
null
385462
[ "time-series", "repeated-measures", "experiment-design" ]
612611
1
null
null
0
19
I'm trying to find `se.fit` by hand. I have 30 different values of the real Y and the estimated Y. How can I calculate it? Thank you!
How to find se.fit by hand? in R
CC BY-SA 4.0
null
2023-04-11T19:25:21.993
2023-04-11T19:25:21.993
null
null
385464
[ "mathematical-statistics" ]
612612
2
null
612607
3
null
I am not sure why in your example standardisation would result in both pixels being 0. Generally speaking, you standardize the values of an image per channel. So in a color image you would have three 2D tensors, each representing a color (red, green and blue). You would calculate the mean and standard deviation across all images in the training set, which are then used to standardize/normalize the values of each channel. In your example, a black-and-white image, you only have a single channel, so this per-channel standardization does not come into play. As a counterexample to yours, I would bring up the min-max scaler, which scales all values between 0 and 1: your images would look identical to the pre-scaling image. Lastly, in most cases per-image scaling is not advisable, because it distorts the relationship between the images. Let's say you have an image that is almost completely black and only has small pixel value variations that are too small to be picked up by the human eye. Scaling this image on its own would suddenly make these small differences as meaningful as the much larger pixel value differences in "regular" images.
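Here is a minimal sketch of that dataset-wide, per-channel standardization on a toy random batch in `N x C x H x W` layout:

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.uniform(0, 255, size=(100, 3, 32, 32))  # toy N x C x H x W batch

# one mean/std pair per channel, computed over the whole training set
mean = images.mean(axis=(0, 2, 3), keepdims=True)
std = images.std(axis=(0, 2, 3), keepdims=True)
standardized = (images - mean) / std
```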
null
CC BY-SA 4.0
null
2023-04-11T19:36:26.897
2023-04-11T19:36:26.897
null
null
220466
null
612613
2
null
463412
1
null
It's not the presence of a covariate that makes the Tukey method inappropriate, it's the fact that the book's authors formulate the covariance model using sum contrasts. With sum contrasts, as opposed to treatment contrasts, you're not simply comparing group means. You're comparing a group mean to the grand mean of the group means. [This page](https://stats.oarc.ucla.edu/r/library/r-library-contrast-coding-systems-for-categorical-variables/#DEVIATION) does a nice job of explaining sum contrasts, or as they call it, "deviation coding". Hence the need for the Scheffe procedure, which generalizes to all possible contrasts, not just pairwise comparisons of means. The Tukey procedure is specifically for pairwise comparisons of means. It's not clear to me why the authors specified the ANCOVA model using sum contrasts. I suppose it's because they're orthogonal. I think it would have been better if they said "The Tukey method is not appropriate for covariance analysis as we have defined it." I see no reason why you can't use treatment contrasts and follow up with a pairwise comparison of means.
null
CC BY-SA 4.0
null
2023-04-11T19:43:12.123
2023-04-11T19:43:12.123
null
null
46334
null
612615
2
null
544918
0
null
Looking through the vignette for the `clm` function (assuming `clmm` works similarly), [clm_article.pdf](https://cran.r-project.org/web/packages/ordinal/vignettes/clm_article.pdf), it looks like it's returning the predicted probability of the observed outcome level for each observation. As mentioned by Amir H, `polr` from `MASS` returns a matrix with the predicted probabilities of each possible outcome level (not just the observed level for that row).
null
CC BY-SA 4.0
null
2023-04-11T20:29:25.387
2023-04-11T20:29:25.387
null
null
378484
null
612616
1
612632
null
2
51
I have a data set that looks like this toy data ``` library(tidyverse) data <- tibble(ID = rep(c("Billie", "Elizabeth", "Louis"), times = 1, each = 6), Group = c(rep("control", 12), rep("patient", 6)), Time = rep(c("T1", "T2"), times = 3, each = 3), Item = rep(c("a", "b", "c"), times = 6, each = 1), answer = sample(1:7, size = 18, replace = TRUE)) ``` There are some individual participants (`ID`), who can be either patient or control participants (`Group`). The participants take part in an experiment two times (`Time`). At each time, they answer three items (`Item`), which all measure the same construct. The `answer` shows their answer on a 7-point Likert scale (if you are not from the psychology world, the patients can have biopsies two times, and each time, three samples (items a, b, c) are taken). The research question is: does group membership alter the change in answers between time points / is the change in answers between the time points different for the two groups? (are the changes in biopsied tissues different for the two groups). To analyze the data, I use the brms-package. If I wanted an easy life, I would just calculate the average answer per person and time point and continue from there. ``` easy <- data %>% group_by(ID, Time) %>% summarize(Group = unique(Group), mean_ans = mean(answer)) ``` To analyze with brms, my formula would then be ``` bf(mean_ans ~ 1 + Group * Time + (1|ID)) ``` (At least I hope so...) But life is nicer when it's complicated, so my question is: how can I specify a brms-formula that allows me to include the item-level information that is present in my data?
I think what I would like to write is something like this ``` bf(answer ~ 1 + Group * Time + (1 | Item|Time|ID)) ``` Reading into crossed and nested random effects [here](https://errickson.net/stats-notes/vizrandomeffects.html) and [here](https://stats.stackexchange.com/questions/228800/crossed-vs-nested-random-effects-how-do-they-differ-and-how-are-they-specified), I was under the impression that my data are crossed, leading to the following formula: ``` bf(answer ~ 1 + Group * Time + (1 | ID) + (1|Time) + (1|Item)) ``` But does this formula take into account the correlation structure of my data? Moving on, following [this paper](https://www.nature.com/articles/nmeth.3137), I was under the impression that my data are the "crossed and nested" part of the figure. Following this track, at the end of [this site](https://yury-zablotski.netlify.app/post/mixed-effects-models-2/) is a guide as to how to specify this case in `lme4`, but I have a hard time translating this into `brms` formulas. Finally, I found [this great site](https://www.andrewheiss.com/blog/2021/12/01/multilevel-models-panel-data-guide/) on country-year panel data, which I am currently exploring, but I am having a hard time translating the scenarios there to my case. I would greatly appreciate any help in this. Thank you already in advance!
brms model specification with 3 (crossed or nested?) levels
CC BY-SA 4.0
null
2023-04-11T20:38:40.960
2023-04-12T01:06:49.800
null
null
261023
[ "r", "multilevel-analysis", "hierarchical-bayesian", "crossed-random-effects", "brms" ]
612617
1
612818
null
1
39
I'm working on a project developing a predictive model for whether or not an individual has a (rare) disease based on some non-invasive test results. The idea is that this could help patients avoid lengthy and invasive tests. One of the non-invasive techniques is very predictive for this particular disease; if a patient has received this test and has a certain combination of results (the test returns several different results), it can be very predictive for this disease. However, only a small subset (10-20%) of patients in the dataset have received the test, as it is not offered at all facilities. What would be an appropriate way of modeling these data? I could be wrong but my intuition is that imputation doesn't make sense here. Would a random forest or gradient-boosted model work best, since they can handle "missing" data well? A colleague suggested building two models, one with and one without the data in question, but I'm not sure how that would work. Thank you!
Handling "missing" data, when the missing variable is the most informative feature
CC BY-SA 4.0
null
2023-04-11T20:45:57.563
2023-04-13T16:06:00.613
null
null
328585
[ "predictive-models", "random-forest", "missing-data", "boosting" ]
612619
2
null
265811
0
null
- Using the largest model as the yardstick for comparison is a choice. This choice makes sense if you have reason to think that the largest model considered is the right yardstick for comparison. If the largest model considered is not a good yardstick, for whatever reason, then this choice doesn't make sense, and you should do something else. The largest model might be a bad choice for any number of reasons. For example, it might be very close to saturated, or it might use information to which no workable model could have access. - It is not necessary to compare only two models at a time, but this might make sense if you don't have a large model that is a good standard for comparison. Pairwise comparison of models might yield a different result from the simultaneous comparison of all models. For an example of this, which prompted this post, see Faraway, "Extending the Linear Model with R" (2ed), Chapter 6, Exercise 3 (d). To see intuitively that the pairwise comparison can yield a different answer from a simultaneous comparison of 3 models, let's call our models m1, m2, m3, in order from smallest to largest. One of the simultaneous comparisons will be a comparison of m1 to m2 with m3 as the yardstick. Such an F-statistic will never appear in the 3 possible pairwise comparisons.
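To make the last point concrete, here is a toy numpy sketch (synthetic data) of the F-statistic that compares m1 to m2 while taking the error estimate from the largest model m3 — a statistic that never arises in the three pairwise comparisons:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x1, x2, x3 = rng.normal(size=(3, n))
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

def rss(X, y):
    """Residual sum of squares of the least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

ones = np.ones(n)
X1 = np.column_stack([ones, x1])          # m1, p1 = 2 parameters
X2 = np.column_stack([ones, x1, x2])      # m2, p2 = 3
X3 = np.column_stack([ones, x1, x2, x3])  # m3, p3 = 4

r1, r2, r3 = rss(X1, y), rss(X2, y), rss(X3, y)
# m1 vs m2, but with m3 supplying the denominator (the "yardstick")
F = ((r1 - r2) / (3 - 2)) / (r3 / (n - 4))
```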
null
CC BY-SA 4.0
null
2023-04-11T21:20:41.117
2023-04-11T21:20:41.117
null
null
385470
null
612620
1
null
null
0
31
I have a .csv data set (N = 140) that I want to perform analysis on. Specifically, I want to find a relationship between the .csv's "P" (Predictor) cells and the "T" (Target) cells (pictured below). T_Summed is simply the sum of the T cells and P_Avg is simply the average of the P cells. An observation containing these values is pictured below. [](https://i.stack.imgur.com/005Ep.png) Using R, nearly every combination of running the P's against the T's (one variable against another) leads to a scatter plot that looks like this: [](https://i.stack.imgur.com/KKBtT.png) Some of them have slightly more randomness/spread, but all of them had at least some data points with the 'vertical' shape shown in the plot above. My issue lies in the fact that I have extremely minimal knowledge of statistics or data analysis, so I do not know how to run an appropriate model that will fit a plot of this shape, how to validate it, and, if using a model with multiple independent variables, how to visualize it. The only things I know how to do are run a simple linear model, a multiple linear model (don't know how to visualize this), and a potentially misused k-folds cross validation. Here's an example of how I created one of the linear models in question with repeated k-folds cross validation: ``` ex <- trainControl(method = "repeatedcv", number = 5) simple_reg_model <- train(`T_Summed` ~ `P_Avg`, data = data, method = "lm", trControl = ex) ``` I am afraid that my lack of knowledge in statistics will lead me to search around the internet and almost arbitrarily implement solutions that I have little understanding of, thus making my analysis faulty, inaccurate, or misleading. Due to this, I decided to consult the statistics StackExchange with a few questions: - Is the code block above, at least for a linear model, an acceptable way of using repeated CV? - What alternative models should I possibly begin to explore for this data? 
How might I go about learning how to implement them properly? - Models with multiple IVs appear difficult to plot. If anyone has a suggestion for question 2 that involves multiple IVs, how might I plot them so that they are interpretable? - Another question regarding multiple-IV models: I do not know how to use feature selection. If anyone has a suggestion for question 2, how would you go about selecting features for your suggestion? I very much appreciate any support. Thank you
Questions Regarding Tabular Data and Implementing Machine Learning Methods on it
CC BY-SA 4.0
null
2023-04-11T22:21:30.633
2023-04-12T03:24:28.167
2023-04-12T03:24:28.167
385473
385473
[ "r", "regression", "machine-learning", "multiple-regression", "nonlinear-regression" ]
612621
1
null
null
0
13
I found a similar question [here](https://stats.stackexchange.com/questions/581605/the-variance-of-difference-in-means-estimator) but unanswered. Given a binary treatment generated from a Bernoulli distribution with probability $p$ of success ($P(T_i=1)=p$, $P(T_i=0)=1-p$), how do I know how the error of the difference-in-means estimator scales with $p$ and $n$? I know that, writing the difference-in-means estimator as $\hat{\tau}=\frac{1}{n_1}\sum_i T_i Y_i-\frac{1}{n_0}\sum_i (1-T_i) Y_i$, one can derive its variance from the law of total variance and marginalizing as $$Var(\hat{\tau}) = \frac{1}{n} \left( p \sigma_1 + (1 - p) \sigma_{0} +p(1-p) (\mu_1+ \mu_{0})^2\right),$$ where $\sigma_1, \sigma_{0}, \mu_1, \mu_{0}$ are the variances and means of $Y | T = 1$ and $Y | T = 0$, respectively. Is this correct? If so, I get that the variance of the estimator scales with $p/n$; is this right? I am not sure whether this is the right way of modelling things. I found [this derivation](https://static1.squarespace.com/static/5d54a19a5a1edf0001ea677a/t/622ab2ae06e015231e73e814/1646965423144/Tom_Leavitt_var_diff_means_estimator.pdf) of the variance of the difference-in-means estimator, but for complete random assignment (not Bernoulli trials), and I am not sure how to compare.
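As a quick sanity check of the scaling (a sketch with made-up means and variances, using only the Python standard library), one can simulate the Bernoulli-assignment design and compare the Monte Carlo variance of $\hat\tau$ against the familiar large-sample form $\sigma_1^2/(np) + \sigma_0^2/(n(1-p))$, which makes the $1/n$ and $p$ dependence explicit:

```python
import random
import statistics

def simulate_tau_hat(n, p, mu1, mu0, sd1, sd0, rng):
    """One draw of the difference-in-means estimator under Bernoulli assignment."""
    treated, control = [], []
    for _ in range(n):
        if rng.random() < p:
            treated.append(rng.gauss(mu1, sd1))   # Y | T = 1
        else:
            control.append(rng.gauss(mu0, sd0))   # Y | T = 0
    if not treated or not control:
        return None  # estimator undefined when one arm is empty
    return statistics.fmean(treated) - statistics.fmean(control)

rng = random.Random(42)
n, p = 200, 0.3
mu1, mu0, sd1, sd0 = 2.0, 1.0, 1.0, 1.0

draws = [t for t in (simulate_tau_hat(n, p, mu1, mu0, sd1, sd0, rng)
                     for _ in range(5000)) if t is not None]
empirical_var = statistics.pvariance(draws)
```

Under that form the variance blows up as $p$ approaches 0 or 1 (one arm receives very few units), which is one way to see how the error scales with the assignment probability.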
How does the variance of the difference in means estimator scale with n and the probability of treatment?
CC BY-SA 4.0
null
2023-04-11T22:46:33.043
2023-04-11T22:46:33.043
null
null
384581
[ "variance", "inference", "estimators" ]
612622
1
null
null
0
23
I read the textbook [https://web.stanford.edu/class/bios221/book/06-chap.html](https://web.stanford.edu/class/bios221/book/06-chap.html) about the false discovery proportion and the p-value histogram. ``` library("DESeq2") library("airway") data("airway") aw = DESeqDataSet(se = airway, design = ~ cell + dex) aw = DESeq(aw) awde = as.data.frame(results(aw)) |> dplyr::filter(!is.na(pvalue)) alpha = binw = 0.025 pi0 = 2 * mean(awde$pvalue > 0.5) ggplot(awde, aes(x = pvalue)) + geom_histogram(binwidth = binw, boundary = 0) + geom_hline(yintercept = pi0 * binw * nrow(awde), col = "blue") + geom_vline(xintercept = alpha, col = "red") ``` [](https://i.stack.imgur.com/Q8Njp.png) I do not understand why the false discovery proportion is calculated with ``` pi0 = 2 * mean(awde$pvalue > 0.5) pi0 * binw * nrow(awde) ```
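The logic behind the factor 2: under the null hypothesis, p-values are Uniform(0,1), while p-values from true alternatives pile up near 0. So the region $p > 0.5$ is (approximately) populated by nulls only, and a null p-value lands there with probability 1/2; hence `mean(pvalue > 0.5)` estimates $\pi_0/2$, and doubling it estimates $\pi_0$, the fraction of true nulls. Then `pi0 * binw * nrow(awde)` is the expected number of null p-values in each histogram bin of width `binw` (the blue line), i.e. the background level of false discoveries. A small standard-library Python sketch with a made-up mixture illustrates the estimator:

```python
import random

rng = random.Random(0)
m, true_pi0 = 20000, 0.8

pvals = []
for _ in range(m):
    if rng.random() < true_pi0:
        pvals.append(rng.random())        # null hypothesis: p ~ Uniform(0, 1)
    else:
        pvals.append(rng.random() ** 20)  # alternative: p piled up near 0

# nulls land above 0.5 half the time, alternatives almost never,
# so mean(p > 0.5) estimates pi0 / 2 and doubling it estimates pi0
pi0_hat = 2 * sum(p > 0.5 for p in pvals) / m
```

The estimate is slightly conservative (biased upward) whenever some alternatives do leak past 0.5, which is usually considered acceptable for FDR purposes.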
How to calculate the false discovery proportion from the p-value histogram?
CC BY-SA 4.0
null
2023-04-11T22:48:05.693
2023-04-11T22:48:05.693
null
null
256516
[ "false-discovery-rate" ]
612624
1
null
null
0
12
I understand that in creating prediction intervals for point prediction we typically use root mean-squared error (RMSE) if we meet our linearity, homoskedasticity, and normality conditions. Our normality and homoskedasticity conditions allow us to use the empirical rule to construct intervals. That is, about $95\%$ of our data at say $x = x_0$ is within two RMSE of our prediction. However, in creating confidence intervals we use the standard error (the sample standard deviation divided by $\sqrt{n}$). I understand that we typically do this in parameter estimation. For example, when estimating the $\beta 's$ in linear regression, we might make a confidence interval for specific betas. Here we're essentially admitting that we don't actually know the true standard deviation, but we have a sample from which we'll estimate the standard deviation. This estimate is called the standard error. We can use this approximation even if we don't have the conditions I stated above, correct? My question, I guess, is: if we are already admitting to not knowing the standard deviation because we don't have the entire population's dataset, shouldn't we also use the standard error in our prediction intervals as well? It also has the benefit of not needing to meet any conditions, unlike our RMSE interval.
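One way to see the relationship: at a fixed $x_0$, a confidence interval for the *mean* response uses the standard error $s/\sqrt{n}$, while a prediction interval for a *new observation* must also carry the irreducible scatter $s$ itself (estimated by the RMSE in regression), giving a half-width of roughly $z \cdot s\sqrt{1 + 1/n}$. So the standard error is already inside the prediction interval; it is just dominated by the noise term. A small standard-library sketch (normal approximation instead of a t quantile, made-up data):

```python
import math
import random
import statistics

rng = random.Random(1)
sample = [rng.gauss(10, 2) for _ in range(100)]
n = len(sample)
s = statistics.stdev(sample)                # estimate of the unknown sigma
z = statistics.NormalDist().inv_cdf(0.975)  # normal approximation to t*

ci_half = z * s / math.sqrt(n)        # confidence interval for the mean response
pi_half = z * s * math.sqrt(1 + 1/n)  # prediction interval for a new observation
```

With $n = 100$ the prediction interval is roughly ten times wider than the confidence interval for the mean (their ratio is $\sqrt{n+1}$), which is why building a prediction interval from the standard error alone would drastically understate the uncertainty about a new observation.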
Intervals using Standard Error vs Intervals using RMSE
CC BY-SA 4.0
null
2023-04-11T23:02:19.687
2023-04-11T23:02:19.687
null
null
270165
[ "confidence-interval", "inference", "standard-deviation", "standard-error", "prediction-interval" ]
612626
1
null
null
1
11
Does anyone know how to perform model selection for a function-on-function linear model in R, i.e., how to find the best subset of functional covariates?
How to perform model selection for a function-on-function linear model to find the best subset of functional covariates in R?
CC BY-SA 4.0
null
2023-04-11T23:36:16.810
2023-04-11T23:36:16.810
null
null
385477
[ "model-selection", "bic", "stepwise-regression", "functional-data-analysis", "subset" ]
612627
2
null
611958
1
null
Disclaimer: I assume all three models have: 1. Common preprocessing of the training set. 2. XGBoost classifier hyperparameters tuned adequately. 3. Evaluation done on a separate test set. 4. No target leakage. Gradient boosting machines (like XGBoost) are great, but they are often hard to interpret. Here we can move forward with our investigation in two ways: 1. Use simpler, explainable models. 2. Use post hoc explanation methods. Both options involve some visualisation because ultimately we want to have some contrasts. (Third option in the end) Option 2: I will actually start with the post hoc explanation methods first. Use partial dependence plots ([PDPs](https://christophm.github.io/interpretable-ml-book/pdp.html)) from the models you think are "reasonably performant". Examine the ones corresponding to the variables having the highest importance (I would use the usual [gain](https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.Booster.get_score) attribute, as the average gain across all splits where the feature is used corresponds the "closest" to generic feature importance for prediction purposes). These plots should show some coherent gradient/variation. Superimpose those PDPs against the same PDPs from the less performant model. Those plots should immediately tell us how our "performant models" use certain variables while our least performant model cannot find any coherent signal there. Note that it is not necessary that different models have similarly looking PDPs; the contrary is usually true. What we want to show is that one model finds meaningful variation where the other model does not. Option 1: Using a [glass-box model](https://interpret.ml/docs/glassbox.html) that is directly explainable allows us to immediately ascertain how certain predictions are made or why they cannot be made. As a first step: Let's remember that XGBoost ultimately uses CARTs as base learners.
That means that we can visualise (some of) these trees and see what is learned. I would suggest first trying a Random Forest (not Extra Trees) with a very small number of trees. Again, the catch is to first see how a "good ensemble learner" behaves and contrast that with the "bad ensemble learner". As a second step: Let's also remember that GBMs are closely related to GAMs. We can use the most informative features from the post hoc analysis and build a GLM or a GAM. While this is wrong as a general model-building strategy because it leads to overfitting and fails to capture interactions, the catch here is to see if we can indeed overfit any signal we might have. We can then compare how that signal can or cannot be found between our models (for example, a $\beta_{x_1}$ might be statistically significant in our "good model" but statistically insignificant and/or with a minimal effect size in our "bad model"). A third option I wouldn't immediately consider, but one might try, is to use [Boruta](https://www.jstatsoft.org/article/view/v036i11) or a similar meta-learning variable importance algorithm. I personally find such methods not horribly insightful, but then again they should allow one to quantify feature importance scores with a standardised methodology. We can then make an almost like-for-like comparison too. As above, we would want to contrast the variable importance computed rather than focus on the exact magnitudes. Final comment: The above is essentially a guide to post hoc model explainability comparisons. There are no substitutes for doing proper [EDA](https://en.wikipedia.org/wiki/Exploratory_data_analysis) before modelling. Simply using different response variables and then saying "this sometimes works and sometimes doesn't work" is completely normal. The whole argument here is why seemingly related response variables do not have similar performance. But we have no commentary on whether these response variables have the same "noise" components too.
Again, EDA and literature reviews are our friends here. Good luck! :)
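For readers who want to see what a PDP actually computes, here is a minimal sketch (plain Python, toy model and toy data, not tied to any particular library): for each grid value of the feature of interest, overwrite that feature in every row and average the model's predictions.

```python
# toy "model": any black-box prediction function works here
def predict(x1, x2):
    return 2.0 * x1 + 0.5 * x1 * x2

# toy dataset of (x1, x2) rows
data = [(0.1, 1.0), (0.5, -1.0), (0.9, 0.0), (0.3, 2.0)]

def partial_dependence_x1(grid):
    # for each grid value v: set x1 = v in every row, average the predictions
    return [sum(predict(v, x2) for _, x2 in data) / len(data) for v in grid]

pd_curve = partial_dependence_x1([0.0, 0.5, 1.0])
```

Library implementations (e.g. scikit-learn's `partial_dependence`, or the `pdp` and `DALEX` packages in R) do exactly this over the training data with the fitted model.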
null
CC BY-SA 4.0
null
2023-04-11T23:50:49.030
2023-04-11T23:50:49.030
null
null
11852
null
612628
1
null
null
1
14
I am looking at the incremental validity of the MMPI-A-RF (a psychological instrument) and the MACI (another psychological instrument) in the prediction of the DSM code for depressive disorders found in the clinical record prior to testing. Both psychological instruments yield continuous scores, and the outcome variable (DSM code) is dichotomous (DSM depression diagnosis present: yes/no). How do you apply an odds ratio to determine the effect size in this example? Should I apply the odds ratio in this case? I have eight continuous predictors in total.
Should the odds ratio be used to judge incremental information in psychological scales?
CC BY-SA 4.0
null
2023-04-11T23:52:38.170
2023-04-11T23:52:38.170
null
null
385472
[ "ratio", "odds" ]
612632
2
null
612616
2
null
Let's go through each of the specifications that you've suggested one by one. The simplest model you suggest is: ``` bf(mean_answer ~ 1 + Group * Time + (1|ID)) ``` This corresponds to the 'random intercept' model. This will allow you to estimate the mean of mean_answer at baseline and follow-up in each group. The Group:Time interaction will indicate whether the treated group changed relatively more than the untreated group between baseline and follow-up. But it would be nice to use the item-level information, as you suggest. Keeping things on their natural scale (7-point Likert) rather than a relatively artificial average might help for interpretation/presentation. You suggest the following model: ``` bf(answer ~ 1 + Group * Time + (1 | Item|Time|ID)) ``` I don't know what's going on here so I'm going to skip it. The next one is: ``` bf(answer ~ 1 + Group * Time + (1+ ID) + (1|Time) + (1|Item)) ``` There is a mistake in this specification (at least, I think so). We usually don't want to include time (a 2-level variable) as a random intercept. It's already included as a fixed effect so it's redundant anyway. Rather, we want to allow the effect of time, that is, the trajectory from baseline to follow-up, to be different for each participant. In other words, we want a random slope of time. That would look like this: ``` bf(answer ~ 1 + Group * Time + (1+time|ID) + (1|Item)) ``` Here Item and ID are crossed random effects as each participant completes each item. A few issues. Firstly, it is usually suggested that you need about 5-6+ levels of the random effect to reliably estimate the variance and Item only has 3. Likely this will be fine with a little regularisation from the prior, though you might try just including Item as a fixed effect. Secondly, for a pre-post study, you might struggle to estimate a random intercept for ID and a random slope for Time. 
The reason being that allowing each participant to have their own starting point and their own trajectory can result in a model with about as many parameters as there are data points. Again, the prior might help with this, I'm not sure. Here is one last formulation to consider: ``` bf(answer ~ 1 + Group * Time + Item + (1|ID)) ``` This model differs from the one above in that Item is included as a fixed effect and the random slope for Time is dropped (i.e., participants within groups are assumed to have a constant trajectory). It would be interesting to contrast this model with one that includes a random slope for time and see how they compare. Also, keep in mind, for Likert data, you are [better off](https://osf.io/9h3et/download) with an ordinal regression model rather than a metric (Gaussian likelihood) model. ---
null
CC BY-SA 4.0
null
2023-04-12T01:06:49.800
2023-04-12T01:06:49.800
null
null
228747
null
612633
1
null
null
3
360
Here's the sample data: [Link to a .csv file](https://drive.google.com/file/d/17q7snP4b1dB0K7lXbyAYkQ0aZeEELMCF/view?usp=sharing) To briefly explain this: `grandparent` is 1 if the individual is a grandparent and 0 otherwise. `m_age` is the individual's age. `m_work` is the individual's working status and `m_workhour` is the individual's weekly working hours. `child1_female` indicates whether the individual's first child is female. `child_number` is the number of children that the individual has. I am trying to use the `fixest` package to run an instrumental-variable fixed-effects regression. The variable `grandparent` is endogenous and its instrument is `child1_female`. The outcome variable is `m_work`. However, if I add `m_age` as an exogenous regressor and use the following code: ``` ivgrandma<-feols(m_work~m_age|respondent_id+year|grandparent~child1_female,grandma) ``` it says that "The endogenous regressor 'fit_grandparent' have been removed because of collinearity (see $collin.var)." I'm very confused about where the collinearity comes from.
Where does the collinearity even come from?
CC BY-SA 4.0
null
2023-04-12T01:07:09.323
2023-04-12T01:35:28.230
null
null
336679
[ "panel-data" ]
612634
1
null
null
0
31
I have a set of data points. The first coordinate is time and the second coordinate is energy. I am trying to figure out how the energy is decaying over time. Particularly, I have to find if it is decaying over time exponentially or as a power law. I used Mathematica FindFit to model my points as both an exponential decay and a power law decay. It turned out that the exponential decay describes my data points better. But I am not sure if I am doing the right thing. I also plotted my data points in a ListLogPlot and ListLogLogPlot. In both cases, I got a straight line. So, I am a little confused about the actual behavior of my data points. Could anyone help me with this issue? I am copying my data points here. Note that I am only interested in the late-time behavior of the function, not the entire time axis. Thank you! Data1={{5,0.0210796},{7,0.0293022},{9,0.0302858},{11,0.0257149},{13,0.0182589},{15,0.0106745},{17,0.00473577},{19,0.00101295},{21,-0.000754187},{23,-0.00117344},{25,-0.000860244},{27,-0.000278088},{29,0.000293337},{31,0.00072545},{33,0.000988823},{35,0.00110603},{37,0.00111822},{39,0.00106582},{41,0.000980234},{43,0.000882181},{45,0.000783367},{47,0.000689278},{49,0.0006018},{51,0.000521108},{53,0.000446822},{55,0.000378596},{57,0.000316303}, {59, 0.000259989190761133}}
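A quick way to quantify the comparison rather than eyeballing the two plots: regress $\ln y$ on $t$ (exponential) and $\ln y$ on $\ln t$ (power law) over the late-time tail and compare the $R^2$ values. A standard-library Python sketch using the $t \ge 39$ portion of the posted data (where the decay is monotone and positive):

```python
import math

# late-time tail (t >= 39) of the posted data, where the decay is monotone
data = [(39, 0.00106582), (41, 0.000980234), (43, 0.000882181),
        (45, 0.000783367), (47, 0.000689278), (49, 0.0006018),
        (51, 0.000521108), (53, 0.000446822), (55, 0.000378596),
        (57, 0.000316303), (59, 0.000259989190761133)]

def r_squared(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

t = [p[0] for p in data]
lny = [math.log(p[1]) for p in data]
r2_exp = r_squared(t, lny)                          # exponential: ln y linear in t
r2_pow = r_squared([math.log(v) for v in t], lny)   # power law: ln y linear in ln t
```

Both fits have high $R^2$ over such a short window, which is consistent with both log plots looking straight; the exponential comes out slightly ahead, matching the FindFit result. Distinguishing the two convincingly would need a longer time range, since a power law only curves visibly on a semi-log plot over several decades of $t$.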
Are these data points decaying exponentially or as a power law?
CC BY-SA 4.0
null
2023-04-12T01:19:10.330
2023-04-12T01:19:10.330
null
null
385480
[ "mathematical-statistics", "dataset", "exponential-distribution", "power-law", "mathematica" ]
612635
2
null
612270
0
null
Unfortunately, this is difficult, for conceptual (not computational reasons). Count models (negative binomial, Poisson, etc.), almost always use a logarithmic link function, meaning that they model the expected number of counts as an exponential function of some linear combination of covariates, in your case (because you only have one covariate: I'm ignoring the random effects here for simplicity, but they won't affect the argument) $$ \textrm{expected counts} = \exp(\beta_0 + \beta_1 x_1) $$ The models are set up this way primarily because otherwise it would be possible for the expected number of counts to be negative, which would cause both computational and conceptual problems. However, this setup means that the expected-counts curve never intersects the $x$ axis — it just gets closer and closer to zero as the linear predictor ($\beta_0 + \sum \beta_i x_i$) becomes more and more negative. One possibility would be to pick a small expected number of counts $C$ (say, 0.1 or 0.01) and define that as the effective minimum value; then you'd solve the equation $\log(C) = \beta_0 + \beta_1 x$ for $x$. Another possibility, which I don't recommend, would be to fit an identity-link model (`family = nbinom1(link = "identity")`); this would make expected counts equal to $\beta_0 + \beta_1 x$ rather than $\exp(\beta_0 + \beta_1 x)$.
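To make the suggested calculation concrete, here is a sketch (hypothetical coefficients, plain Python) of solving $\log(C) = \beta_0 + \beta_1 x$ for the covariate value at which the expected count drops to a chosen small level $C$:

```python
import math

# hypothetical coefficients on the log-link scale (not from a real fit)
beta0, beta1 = 1.2, -0.8

def expected_count(x):
    """Expected counts under a log link: exp(beta0 + beta1 * x)."""
    return math.exp(beta0 + beta1 * x)

# pick a small effective-zero level C and solve log(C) = beta0 + beta1 * x
C = 0.01
x_at_C = (math.log(C) - beta0) / beta1
```

With a fitted model, $\beta_0$ and $\beta_1$ would come from the fixed-effect estimates (e.g. `fixef(model)` in glmmTMB); the random effects shift the intercept per group, so the crossing point would differ by group.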
null
CC BY-SA 4.0
null
2023-04-12T01:27:22.390
2023-04-12T01:27:22.390
null
null
2126
null
612636
2
null
612633
6
null
I suspect, now having looked through the csv, that this arises because you are including respondent fixed effects. Note that your instrument, `child1_female`, only varies at the respondent level, so if you have `respondent_id` as a fixed effect, it will absorb all variation in `child1_female`. To see why, remember that including `respondent_id` fixed effects is equivalent to estimating a regression where you add indicator variables corresponding to each unique `respondent_id` as controls. Since `child1_female` takes the same value for every observation belonging to the same respondent, these `respondent_id` indicators will completely predict `child1_female`, hence your collinearity problem. More generally, whenever you estimate a panel regression, you will not be able to separately fit coefficients for any variable that does not exhibit within-individual variation.
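A tiny illustration of the absorption (plain Python, made-up panel): the fixed-effects (within) transformation subtracts each respondent's mean, and any variable that is constant within respondent, like `child1_female`, is wiped out to exactly zero, leaving nothing for the first stage to use.

```python
# toy panel: three respondents ("a", "b", "c"), each observed in two years
respondent = ["a", "a", "b", "b", "c", "c"]
child1_female = [1, 1, 0, 0, 1, 1]  # constant within each respondent

# the within (fixed-effects) transformation subtracts each respondent's mean
groups = {}
for r, v in zip(respondent, child1_female):
    groups.setdefault(r, []).append(v)
means = {r: sum(vs) / len(vs) for r, vs in groups.items()}
demeaned = [v - means[r] for r, v in zip(respondent, child1_female)]
```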
null
CC BY-SA 4.0
null
2023-04-12T01:28:42.717
2023-04-12T01:35:28.230
2023-04-12T01:35:28.230
188356
188356
null
612637
2
null
211015
0
null
The theorem states that if the loss function is convex and decreasing in a neighborhood of 0 (a classification-calibration condition), then a classifier that minimizes the expected loss makes the same decisions as the Bayes-optimal classifier. One consequence is that we can recover $P(y|X)$ by minimizing the expected loss. You can find more details in the [original paper](https://web.mit.edu/lrosasco/www/publications/loss.pdf).
null
CC BY-SA 4.0
null
2023-04-12T03:03:23.620
2023-04-15T16:17:18.930
2023-04-15T16:17:18.930
28942
28942
null
612638
1
612680
null
6
238
Note that a distribution function (cadlag etc.) $F$ is said to be stochastically dominated by a distribution function $G$ if $F(x)\geq G(x)$ for all $x \in \mathbb{R}$. The following result characterizes stochastic dominance equivalently: > Theorem: $F$ is stochastically dominated by $G$ if and only if for every increasing function $u$ $$\mathbb{E}_F[u(x)] \leq \mathbb{E}_G[u(x)].$$ I have seen proofs of this result when $F$ and $G$ are absolutely continuous (and thus admit densities) using integration by parts. Is there a more general proof that holds for arbitrary distribution functions/measures on the real line? To be clear, the integral definition implies the CDF one by using $u(x) = \mathbb{1}\{x \in (z,\infty)\}$ for all $z \in \mathbb{R}$. The converse direction doesn't seem immediately obvious.
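For the converse direction (CDF ordering implies the expectation inequality), one distribution-free argument, sketched here, uses quantile coupling and needs no densities, only the generalized inverse:

```latex
\text{Define the generalized inverse } F^{-1}(u) = \inf\{x \in \mathbb{R} : F(x) \ge u\}
\text{ for } u \in (0,1), \text{ and likewise } G^{-1}.

\text{If } F(x) \ge G(x) \text{ for all } x, \text{ then }
\{x : F(x) \ge u\} \supseteq \{x : G(x) \ge u\},
\text{ so } F^{-1}(u) \le G^{-1}(u) \text{ for every } u.

\text{Let } U \sim \operatorname{Unif}(0,1) \text{ and set } X = F^{-1}(U), \; Y = G^{-1}(U).
\text{ By the quantile transform, } X \sim F \text{ and } Y \sim G
\text{ (valid for arbitrary distribution functions), and } X \le Y \text{ a.s.}

\text{Hence, for any increasing } u: \quad u(X) \le u(Y) \text{ a.s., and so }
\mathbb{E}_F[u(x)] = \mathbb{E}[u(X)] \le \mathbb{E}[u(Y)] = \mathbb{E}_G[u(x)].
```

(Increasing functions are Borel measurable, so the expectations are well defined whenever they exist; the inequality should be read in the extended sense if one side is infinite.)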
Equivalent definition of stochastic dominance
CC BY-SA 4.0
null
2023-04-12T03:11:04.247
2023-04-16T14:11:06.677
2023-04-12T03:29:49.640
283635
283635
[ "probability", "decision-theory", "stochastic-ordering" ]
612639
2
null
287099
0
null
> measurements in the example are integers. Yes, they act as 2D sensor readings of a single linear-Gaussian process describing a random walk: the previous state's distribution is fed into the next-state calculation together with the current observed measurement. You can think of it as a form of [State-Space model](https://www.chadfulton.com/topics/implementing_state_space.html), or see this [code](https://github.com/ChadFulton/tsa-notebooks/blob/master/code_state_space.ipynb). > Finally, where can I find my output? Should it be filtered_state_means or smoothed_state_means From your link: "Functionally, Kalman Smoother should always be preferred. Unlike the Kalman Filter, the Smoother is able to incorporate “future” measurements as well as past ones at the same computational cost". So prefer the smoothed output, but if you only need filtering, use the filter. > provide some transition_matrices and observation_matrices. What values should I put there? What do these matrices mean? See "Mathematical Formulation" at your link, plus a good explanation of the transition matrix at [unofficed.com](https://unofficed.com/courses/markov-model-application-of-markov-chain-in-stock-market/lessons/markov-chain-and-linear-algebra-calculation-of-stationary-distribution-using-python/) (including the whole series in the left-hand sidebar). It simply tabulates the paths of state change (from your graph representation) and "shows the probability of the occurrence"; it can be derived e.g. as shown [here](https://stackoverflow.com/a/64118370/15893581) or with a more [OOP-style implementation](https://tylermarrs.com/posts/deriving-markov-transition-matrices/). The observation matrix holds your observations: observation_matrix = np.array( [observed_x, observed_y] )
null
CC BY-SA 4.0
null
2023-04-12T03:44:36.447
2023-04-12T11:31:49.310
2023-04-12T11:31:49.310
347139
347139
null
612640
1
612713
null
4
70
I'm working on a negative binomial model for count data. Unfortunately I can't provide a more detailed description because I wasn't explicitly allowed to. All I can say now is that the data is about people (social sciences). Here is my model: ``` model = glmmTMB(Y ~ offset(log(offset.var)) + X1 + X2 + X2.squared + X3 + X2:X3 + scale(X4a) + scale(X4b) + scale(X4c) + scale(X4d) + (1|factor), dispformula = ~ offset.var, family = "nbinom2", data = data) # I included a dispersion formula because of the "residuals against predictor" plots # it also seems to help the uniformity (KS) test ``` I used an excellent `DHARMa` library for diagnostics. Here are the main results (the best I've come to): [](https://i.stack.imgur.com/3N7h0.png) The `testOutliers(model_res, type = "bootstrap")` comes out significantly. When I inspected the outliers, I found that there are 12 of them, they occur only on the right side (scaled residual = 1) and their problem is that they have non-zero values in dependent variable while having very low (almost zero) values in the offset variable. (But the Y values are not excessively high – those observations were excluded prior to the analysis.) Since I consider these observations valid, I decided to keep them in my analysis but I don't know if that's correct because they seem to highly confuse the overdispersion tests: ``` > performance::check_overdispersion(model) dispersion ratio = 64208237559.269 Pearson's Chi-Squared = 64079821084150.227 p-value = < 0.001 # if I remove all 12 observations whose scaled residual == 1, I get dispersion ratio = 1.183 > DHARMa::testDispersion(model_res) DHARMa nonparametric dispersion test via sd of residuals fitted vs. 
simulated data: simulationOutput dispersion = 0.35968 p-value = 0.254 alternative hypothesis: two.sided # {DHARMa} test doesn't detect any problem even with the 12 observations included, unlike {performance} test > DHARMa::testZeroInflation(model_res) DHARMa zero-inflation test via comparison to expected zeros with simulation under H0 = fitted model data: simulationOutput ratioObsSim = 1.0023 p-value = 0.99 alternative hypothesis: two.sided > performance::check_collinearity(update(model, . ~ . - X2.squared - X2:X3)) Term VIF VIF 95% CI Increased SE Tolerance Tolerance 95% CI X1 1.09 [1.04, 1.20] 1.04 0.92 [0.83, 0.96] X2 1.12 [1.06, 1.22] 1.06 0.89 [0.82, 0.94] X3 1.02 [1.00, 1.37] 1.01 0.98 [0.73, 1.00] scale(X4a) 1.25 [1.17, 1.36] 1.12 0.80 [0.73, 0.85] scale(X4b) 1.12 [1.06, 1.22] 1.06 0.89 [0.82, 0.94] scale(X4c) 1.09 [1.04, 1.20] 1.04 0.92 [0.83, 0.96] scale(X4d) 1.04 [1.01, 1.22] 1.02 0.96 [0.82, 0.99] > performance::check_autocorrelation(model) OK: Residuals appear to be independent and not autocorrelated (p = 0.770). # observed values: 0 1 2 3 4 5 6 7 8 10 11 12 14 15 16 20 563 170 119 56 25 40 5 1 6 19 1 1 1 3 1 2 # predicted values: 0 1 2 3 4 5 6 7 8 9 12 13 14 15 17 18 20 21 23 41 46 220 441 176 75 46 9 11 6 6 8 2 2 3 1 1 1 1 1 1 1 1 # conditional R^2 = 0.3949, marginal R^2 = 0.3380 (if it helps) ``` My question is: Is there anything I should do with the outliers or model performance? I would appreciate any answers or suggestions. You can download the anonymised dataset and some basic R script from [here](https://drive.google.com/file/d/1YFF_a9XlKZrarP0iviEklF2Kico7IMRN/view?usp=sharing). EDIT: I've clarified the statement about outliers and narrowed the focus of the question.
Should I be concerned about outliers in NB GLMM with an offset term?
CC BY-SA 4.0
null
2023-04-12T03:45:02.113
2023-04-13T15:28:06.390
2023-04-12T11:11:22.970
371553
371553
[ "outliers", "model-evaluation", "glmm", "negative-binomial-distribution", "count-data" ]
612641
2
null
525170
1
null
In the script you write $X$ and $Z$ as jointly determined. This is equivalent to there being selection into instrument on endo2 in your script, confounding the estimates of the estimation. The LATE framework requires the instrument, $Z$, be independently assigned to unit (or conditionally so). This is violated in your script by determining both $Z$ and $X$ by endo2. To see this, note that the $\kappa$-weight gets the average rate of compliance correct if you estimate $\Pr(Z_i = 1)$ without conditioning on $X_i$: ``` ### simulate the data generating process set.seed(2839) n <- 20000 endo <- rnorm(n) endo2 <- rnorm(n) z <- as.integer(endo2 + rnorm(n)>0) d <- endo2 + endo + z + rnorm(n) d <- as.integer(d > median(d)) x <- rnorm(n)+endo2 y <- 1.5+ 2*d - 4*x + 1.5*endo + rnorm(n) ### wald 1st stage implied size of complier group mean(d[z==1]) - mean(d[z==0]) # roughly .49 zmodel <- fitted(lm(z~1)) # This is where the change is, Z ~ 1 kappa <- (1 - (d * (1-z)) / (1 - zmodel) - ((1-d)*z)/zmodel) mean(kappa) # roughly .49 ``` Though, if you're interested in further practical details on the [Abadie (2003)](https://doi.org/10.1016/S0304-4076(02)00201-4) weights approach, new research in the area is fairly interesting. - https://doi.org/10.2139/ssrn.4093467 - https://doi.org/10.48550/arXiv.1909.05244
null
CC BY-SA 4.0
null
2023-04-12T03:50:44.993
2023-04-12T16:22:00.330
2023-04-12T16:22:00.330
385487
385487
null
612643
1
612647
null
1
24
I am not clear on the difference between the two concepts. I am interested in air pollution exposure over a given period of time, and I know from the literature that lag models are used. I have also seen moving averages used for air pollution; is this the same as a lag model? If not, what is the difference between them?
are moving average and lag modeling the same?
CC BY-SA 4.0
null
2023-04-12T05:07:48.093
2023-04-12T07:01:54.097
2023-04-12T07:01:54.097
35989
328519
[ "lags", "moving-average" ]
612644
1
null
null
6
173
Definition (Consistency) Let $T_1,T_2,\cdots,T_{n},\cdots$ be a sequence of estimators for the parameter $g(\theta)$, where $T_{n}=T_{n}(X_1,X_2,\cdots,X_{n})$ is a function of $X_{1},X_{2},\cdots,X_{n}.$ The sequence $T_{n}$ is a weakly consistent sequence of estimators for $g(\theta)$ if for every $\varepsilon>0,$ $$\lim_{n\rightarrow\infty}P_{\theta}(|T_{n}-g(\theta)|<\varepsilon)=1.$$ If $T_{n}$ converges with probability one or almost surely (a.s.) to $g(\theta)$, that is, for every $\theta\in\Theta$ $$P_{\theta}\left(\lim_{n\rightarrow\infty}T_{n}=g(\theta)\right)=1,$$ then it is strongly consistent. Strong consistency implies weak consistency. This definition says that as the sample size $n$ increases, the probability that $T_{n}$ is close to $g(\theta)$ approaches $1$. --- I am confused: what is the probability space ${\color{Red}{\left(\Omega,\mathcal{F},P_{\theta}\right)}}$ on which those $T_{n},\,n=1,2,\ldots$ are defined? What is the specific probability measure ${\color{Red} {P_{\theta}}}$?
What 's the $(\Omega,\mathcal{F},P_{\theta} )$ those $T_{n}$ defined on?
CC BY-SA 4.0
null
2023-04-12T05:09:31.230
2023-04-12T14:21:55.900
2023-04-12T14:21:55.900
371966
371966
[ "probability", "mathematical-statistics", "estimators", "consistency" ]
612645
1
null
null
0
34
I am doing a regression to find the association between hospital admission (yes/no) and disease severity (mild, moderate, severe) using multinomial logistic regression. I noticed that the regression estimates differ when I fit severe vs mild (mild as the reference category) and mild vs severe (severe as the reference category). I think that's probably related to the degrees of freedom and other statistics that moderate vs mild (in model 1) or moderate vs severe (in model 2) have. My questions: - In this case, is it possible to obtain consistent estimates such that, when using a variable with >2 categories as a covariate (say: A, B, C), the A vs C estimate is the inverse (reciprocal) of the C vs A estimate? - Alternatively, is it also reasonable to swap the places, so that age becomes the dependent variable and severity becomes the independent variable (note that the variables are dummy coded and, hypothetically, in my real dataset this is plausible, although the ideal way is to use severity as the dependent variable)? That way I can get consistent estimates where mild vs severe is the inverse of severe vs mild, but I'm confused because I am combining the severity (in this case the dependent variable becomes an independent variable) with other covariates (which are definitely independent variables). Thanks in advance.
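Regarding question 1: in (multinomial) logistic regression, this reciprocal relationship holds automatically for the two orderings of the *same* pairwise contrast, because swapping the reference category flips the sign of the log-odds coefficient, so the odds ratios are exact reciprocals. A small sketch with hypothetical 2x2 counts:

```python
import math

# hypothetical 2x2 counts: rows = severity, columns = (not admitted, admitted)
mild = (40, 10)
severe = (15, 35)

def log_odds(counts):
    not_admitted, admitted = counts
    return math.log(admitted / not_admitted)

# "severe vs mild" coefficient with mild as the reference category ...
b_severe_vs_mild = log_odds(severe) - log_odds(mild)
# ... is exactly the negative of "mild vs severe" with severe as reference,
# so the odds ratios exp(b) are exact reciprocals of each other
b_mild_vs_severe = log_odds(mild) - log_odds(severe)
```

In a standard multinomial logit, the fitted severe-vs-mild coefficient with mild as reference should likewise equal the negative of mild-vs-severe with severe as reference; if the fitted values differ by more than numerical noise, it may be worth checking the software's convergence settings.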
Changing reference category in multinomial logistic regression
CC BY-SA 4.0
null
2023-04-12T05:43:43.927
2023-04-12T05:43:43.927
null
null
234366
[ "regression", "logistic", "references", "multinomial-logit" ]
612646
1
null
null
0
48
I like to organize my studies always with the weakest hypotheses possible. In this case, I want to understand well what assumptions I should add to be able to study linear regression models in time series. I want to analyze how far I can go in the OLS context assuming that my sample is not independent. At the end of the day, in the world of time series this is one of the first assumptions that fails. So let's start with the classical assumptions of linear regression. First, suppose the true model is given by \begin{equation}\label{I}\tag{I} y= \beta_0 + x_1 \beta_1 + ...+x_K \beta_K + u= x'\beta + u, \quad E(u|x)=0 \end{equation} Now consider a sample $(y_i,x_i)_{i=1}^n$ (identically distributed but not independent) such that: - $(y_i,x_i)_{i=1}^n$ satisfies (\ref{I}): $$y_i= \beta_0 + x_{i1} \beta_1 + ...+x_{iK} \beta_{K} + u_i= x_i'\beta + u_i,\quad i=1,...,n$$ in matrix notation, we have $$y= X\beta + U$$ - Suppose that $X'X$ is non-singular (almost surely). With these two assumptions I can show the existence of $\hat \beta= (X'X)^{-1}X' y$ - Now, suppose that $$E(U|X)=E\left( \begin{bmatrix} u_{1} \\ \vdots \\ u_{n} \\ \end{bmatrix}\Bigg| \, \begin{bmatrix} x_{11} & \cdots & x_{1K} \\ x_{21} & \cdots & x_{2K} \\ x_{n1} & \cdots & x_{nK} \end{bmatrix} \right)=0$$ with this additional assumption, the traditional books show that $E[\hat \beta]=\beta$. And I think that an $AR(1)$ process does not satisfy this assumption. - The fourth assumption in the classical linear regression model is the normality and zero correlation of the errors: $$U \sim N(0, \sigma^2 I), \quad I \,\, \hbox{identity matrix }$$ In this case, we have $\hat \beta \sim N(\beta, \sigma^2 (X'X)^{-1})$. But I think (I'm not sure) that this hypothesis fails when independence of the sample is not assumed, because: $$u_i = y_i - x_i' \beta$$ Thus, it is likely that the covariance matrix of $U$ is not $\sigma^2 I$. But this is not a problem; we can assume that $$U \sim N(0, \sigma^2 \Omega)$$ In this case, we have $\hat \beta \sim N(\beta, \sigma^2(X'X)^{-1} X' \Omega X (X'X)^{-1})$, assuming that $\Omega$ is known. I would still need to mention a hypothesis of efficiency, but in order not to go on too long, I stop here. As far as I've read, many of the classic books don't treat the case where the sample is not independent. They start from the hypothesis of an iid sample. I think this is really bad, as it doesn't allow you to make a natural transition into the world of time series. So my concrete question is: are all the conclusions I mentioned still true under the hypotheses of linear regression when the sample is not independent? Is there something I'm getting wrong?
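To make my concern concrete, here is a small simulation sketch (my own illustration, not taken from any of the books): I keep $x$ strictly exogenous so $E(u|X)=0$ holds, but give the errors an AR(1) structure so the sample is not independent, and OLS still recovers $\beta$ on average.

```python
import numpy as np

rng = np.random.default_rng(0)
n, R, phi, sigma = 200, 2000, 0.7, 1.0
betas = []
for _ in range(R):
    x = rng.normal(size=n)
    # AR(1) errors: u_t = phi * u_{t-1} + e_t, so the sample is not independent
    e = rng.normal(scale=sigma, size=n)
    u = np.empty(n)
    u[0] = e[0] / np.sqrt(1 - phi**2)  # start from the stationary distribution
    for t in range(1, n):
        u[t] = phi * u[t - 1] + e[t]
    y = 1.0 + 2.0 * x + u
    X = np.column_stack([np.ones(n), x])
    betas.append(np.linalg.lstsq(X, y, rcond=None)[0])
betas = np.array(betas)
print(betas.mean(axis=0))  # close to (1, 2): OLS is still unbiased here
```

This matches the point above: unbiasedness survives the dependence, and it is the variance formula that needs the $\Omega$ correction.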
Classical linear regression without an independent sample
CC BY-SA 4.0
null
2023-04-12T06:43:09.667
2023-04-12T12:16:24.803
2023-04-12T12:16:24.803
373088
373088
[ "self-study", "multiple-regression", "least-squares", "econometrics" ]
612647
2
null
612643
1
null
Both models are quite different. As described in [Forecasting: Principles and Practice](https://otexts.com/fpp3/) by Rob J Hyndman and George Athanasopoulos, what you call the lagged model seems to be the [autoregressive model (AR)](https://otexts.com/fpp3/AR.html). > [...] an autoregressive model of order $p$ can be written as $$y_{t} = c + \phi_{1}y_{t-1} + \phi_{2}y_{t-2} + \dots + \phi_{p}y_{t-p} + \varepsilon_{t}$$ where $\varepsilon_{t}$ is white noise. This is like a multiple regression but with lagged values of $y_t$ as predictors. We refer to this as an AR($p$) model, an autoregressive model of order $p$. It can be compared with the [moving average (MA) model](https://otexts.com/fpp3/MA.html). > Rather than using past values of the forecast variable in a regression, a moving average model uses past forecast errors in a regression-like model, $$y_{t} = c + \varepsilon_t + \theta_{1}\varepsilon_{t-1} + \theta_{2}\varepsilon_{t-2} + \dots + \theta_{q}\varepsilon_{t-q}$$ where $\varepsilon_t$ is white noise. We refer to this as an MA($q$) model, a moving average model of order $q$. Of course, we do not observe the values of $\varepsilon_t$, so it is not really a regression in the usual sense. Though, as described later in the book, under some conditions, one model can be written as another. They are usually combined in the [ARMA models](https://otexts.com/fpp3/arima.html). You would more commonly see the regression with lagged values because it is easy to interpret and estimate (does not need specialized time-series software as it can be done using regular linear regression).
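To illustrate the last point with a rough sketch (my own, not from the book): an AR(2) model can be estimated by ordinary least squares on lagged copies of the series, with no specialized time-series software.

```python
import numpy as np

rng = np.random.default_rng(1)
# simulate an AR(2) process: y_t = 0.5*y_{t-1} - 0.3*y_{t-2} + eps_t
n, phi1, phi2 = 5000, 0.5, -0.3
y = np.zeros(n)
eps = rng.normal(size=n)
for t in range(2, n):
    y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + eps[t]

# regress y_t on its own lags -- plain linear regression on lagged values
Y = y[2:]
X = np.column_stack([np.ones(n - 2), y[1:-1], y[:-2]])
coef = np.linalg.lstsq(X, Y, rcond=None)[0]
print(coef)  # roughly (0, 0.5, -0.3)
```

An MA model, by contrast, could not be estimated this way, because the past errors $\varepsilon_{t-j}$ it regresses on are unobserved.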
null
CC BY-SA 4.0
null
2023-04-12T07:01:28.360
2023-04-12T07:01:28.360
null
null
35989
null
612648
2
null
612644
3
null
It is customary in probability or mathematical statistics to encounter statements such as > Let $X$ be an absolutely continuous random variable with density $f$ with no reference to an underlying probability space. However, we can always supply an appropriate space as follows. Take $\Omega = \mathbb{R}$, $\mathcal{F} =$ Borel sets, $P(B) = \int_B f(x)\,dx$ for all $B\in \mathcal{F}$. If $X(\omega) = \omega$, $\omega \in\Omega$, then $X$ is absolutely continuous and has density $f$. In a sense, it does not make any difference how we arrive at $\Omega$ and $P$; we may equally use a different $\Omega$ and different $P$ and a different $X$, as long as $X$ is absolutely continuous with density $f$. No matter what construction we use, we get the same essential result, that is $$ P(X\in B) = \int_B f(x)\,dx. $$ Therefore, questions about probabilities of events involving $X$ are answered completely by knowledge of the density $f$. This implies that probabilities of events involving $T(X_1,\ldots,X_n)$ are likewise determined by the density of $T$.
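A quick numerical sketch of the last point (my own illustration, taking $f$ to be the standard normal density for concreteness): however $X$ is realized, knowledge of $f$ alone determines $P(X \in B)$.

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # standard normal density

a, b = -1.0, 0.5
# P(X in [a, b]) from the density alone, by midpoint-rule integration
m = 200_000
dx = (b - a) / m
mid = np.arange(a + dx / 2, b, dx)
p_from_density = np.sum(f(mid)) * dx

# the same probability from one particular construction of X (direct sampling)
x = rng.normal(size=1_000_000)
p_from_samples = np.mean((x >= a) & (x <= b))
print(p_from_density, p_from_samples)  # both near 0.533
```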
null
CC BY-SA 4.0
null
2023-04-12T07:23:02.903
2023-04-12T07:23:02.903
null
null
56940
null
612652
1
null
null
1
24
Suppose we want to solve $$\max_{\theta} \sum_{i=1}^N \log f(y_i|x_i; \theta, \gamma).$$ Here, $\theta$ and $\gamma$ are two parameter vectors. The problem above derives an estimate of $\theta$, taking the parameter vector $\gamma$ as given. Suppose we know that $\gamma\sim N(\mu, \Sigma)$. Then, we can plug in $\mu$ as a consistent estimate of $\gamma$ in the problem above and derive a consistent estimate of $\theta$. In this case, I believe that we should correct the standard errors for $\theta$ for the fact that $\gamma$ is a vector of random variables. However, I am not entirely sure how to correctly adjust the standard errors. Wooldridge, in "Econometric Analysis of Cross Section and Panel Data", writes that if the first-stage estimator of $\gamma$ is of the form $$ \sqrt{N}(\hat{\gamma} - \gamma) = \frac{1}{\sqrt{N}} \Sigma_{i} r_i(\gamma) + o_p(1), $$ then $$\hat{Avar}(\hat{\theta}) = (\Sigma_i \hat{H}_i)^{-1}(\Sigma_i \hat{g}_i \hat{g}_i')(\Sigma_i \hat{H}_i)^{-1},$$ where $$\hat{g}_i = \hat{s}_i + \hat{F}\hat{r}_i,$$ $s_i = \frac{\partial \log f_i}{\partial\theta}$ is the score of the likelihood and $F$ is the gradient of the score with respect to $\gamma$. Now I am wondering what a possible estimator for $r_i$ would be in this case. In particular, one would need to calculate $r_i r_i'$, which could be equal to the covariance of $\gamma$ ($\Sigma$), but I am not entirely sure.
Adjusting standard errors in two-step maximum likelihood estimation
CC BY-SA 4.0
null
2023-04-12T08:06:02.570
2023-04-12T08:06:02.570
null
null
164761
[ "variance", "maximum-likelihood", "estimation" ]
612653
1
null
null
1
43
When I tried a two-way ANOVA analysis in SPSS, the results show that df is zero and there are no values for F, Sig., or Mean Square. Does anyone know why this happened? Does it mean I need to input more data for the analysis? [](https://i.stack.imgur.com/NA2zS.png) [](https://i.stack.imgur.com/HtC1B.png)
SPSS: Zero df and no value in F and Sig
CC BY-SA 4.0
null
2023-04-12T09:07:24.240
2023-04-12T09:08:48.897
2023-04-12T09:08:48.897
362671
385509
[ "anova", "spss" ]
612654
1
null
null
0
71
My LSTM model is configured as follows: - Input: 60 measurements of the same float feature over time (cell dim = 60) - Output: 3 classes - Training and validation function: cross entropy with softmax - Optimizer: Adam - Learning rate: 0.0005 - Batch size: 180 (3 sequences) The classes have the following distribution in the training and test datasets: ``` A = 49.5%, B = 49.5%, C = 1% ``` Class C rarely occurs and I'm only interested in classes A and B. During training, the validation loss moves in the opposite direction to the training loss, and at some points they look like perfect mirror images of each other. Also, I've noticed that the precision and recall improve when the training loss falls below a certain threshold, but that's generally where the validation loss is high. What could be causing this behavior? I started with 25 layers and saw this behavior; suspecting overfitting, I progressively decreased the number of layers. [](https://i.stack.imgur.com/jhYII.png) 25 LSTM layers, 1500 training samples, 1500 test samples, best F1 score = 0.564 [](https://i.stack.imgur.com/uSzcH.png) 6 LSTM layers, 1500 training samples, 1500 test samples, best F1 score = 0.569 [](https://i.stack.imgur.com/dWEOE.png) 1 LSTM layer, 1500 training samples, 1500 test samples, best F1 score = 0.639 [](https://i.stack.imgur.com/Zripr.png) Therefore, since the confusion matrix and the resulting F1 remained very similar to the previous training runs (if not better), I kept only 1 layer to reduce the complexity of the model. Since, when testing this on a larger number of test samples (about a million), the F1 score was very poor, I thought of progressively adding more training and test data, also hoping that this would improve any overfitting issues. 1 LSTM layer, 3000 training samples, 3000 test samples, best F1 score = 0.568 [](https://i.stack.imgur.com/esRvR.png) Since the F1 score with 3000 training samples is similar to the previous model's, but still poor when tested on a larger dataset, I tried adding even more data. 1 LSTM layer, 6000 training samples, 6000 test samples, best F1 score = 0.320 [](https://i.stack.imgur.com/0Ts2d.png) And here is the real problem: even though the training and validation losses are similar to previous models', the F1 is very low and does not seem to improve, even when increasing the learning rate. On this dataset I also tried adding L1 and L2 regularization, but their only effect seems to be reaching the same F1 score in fewer epochs. Adding dropout, I didn't notice any improvement. I also tried varying the batch size and learning rate, but I always got very similar results. In short - Is the divergence between the training loss and the test loss normal? - Why does the model no longer seem to work on a 6000-sample dataset? - What can I do to maintain the previously obtained F1 score even on the dataset of 6000 samples?
LSTM model performs worse when adding more data
CC BY-SA 4.0
null
2023-04-12T09:30:12.843
2023-04-12T11:14:01.807
2023-04-12T11:14:01.807
228269
228269
[ "machine-learning", "neural-networks", "lstm" ]
612656
1
null
null
0
19
So for an assignment, I am looking at how plant foliage health (ranked on a scale of 0-5, with 0 meaning the foliage is all dead and 5 meaning the foliage is all alive) has changed over time (I have data from 2008 and 2023) as an indicator of population health. It has been a while since I have done statistics, so any help on which analysis is best would really be appreciated.
What ANOVA do I use?
CC BY-SA 4.0
null
2023-04-12T09:36:59.930
2023-04-12T09:36:59.930
null
null
385511
[ "anova" ]
612658
1
null
null
4
36
When you do LASSO or ridge regression and pick the hyperparameter using cross-validation, the 1SE rule suggests selecting not the best CV result but the most penalized model that is still within 1 SE of the best value. That's meant to be a good approximation to accounting for the overfitting to the validation set that occurs by picking the hyperparameters on the validation set itself. Once I move to elastic net regression (with both an L1 and an L2 penalty), it is less clear what an equivalent rule would be. That's because there will be a whole curve in 2D space forming the boundary of the region where you achieve within 1 SE of the best CV result, and it's not really clear what "more penalization" means (more L1, more L2, or some combination thereof). Is there any work that has looked into this? Are there some clever averaging approaches? Like taking solutions all along the curve and model-averaging them (I guess for linear regression that's as easy as averaging coefficients, taking 0 for models where a variable is not selected, but it involves more formal model averaging in the case of GLMs with non-linear link functions)? I'm also interested in whether we know if, with two hyperparameters, any such way of picking hyperparameters enjoys a similar "approximate optimality" to the 1SE rule.
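For concreteness, here is one heuristic I have been toying with (my own sketch, not an established rule): over an $(\alpha, \lambda)$ grid, collect every pair whose CV error is within 1 SE of the minimum, then among those pick the largest overall penalty $\lambda$, breaking ties toward the larger $\ell_1$ share $\alpha$. The made-up CV surface below just stands in for real cross-validation output.

```python
import numpy as np

rng = np.random.default_rng(3)
alphas = np.linspace(0.1, 1.0, 10)   # l1 ratio (share of the L1 penalty)
lambdas = np.logspace(-3, 1, 30)     # overall penalty strength

# stand-in CV results: in practice these come from cross-validation
cv_err = rng.normal(loc=1.0, scale=0.02, size=(10, 30)) + 0.05 * np.log10(lambdas) ** 2
cv_se = np.full_like(cv_err, 0.03)

i, j = np.unravel_index(np.argmin(cv_err), cv_err.shape)
threshold = cv_err[i, j] + cv_se[i, j]
ok = cv_err <= threshold             # the whole "within 1 SE" region

# among admissible pairs, take the largest lambda, then the largest alpha there
jj = max(np.where(ok.any(axis=0))[0])
ii = max(np.where(ok[:, jj])[0])
print(alphas[ii], lambdas[jj])
```

Whether picking the corner of the admissible region this way retains any of the 1SE rule's approximate optimality is exactly what I am asking about.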
Generalize the 1SE rule to elastic net
CC BY-SA 4.0
null
2023-04-12T09:42:56.300
2023-04-12T09:42:56.300
null
null
86652
[ "cross-validation", "lasso", "regularization", "ridge-regression", "elastic-net" ]
612659
1
612810
null
0
50
I am studying survival analysis and am trying to see if there's a way to probabilistically forecast future outcomes, using simulation or other means. In the first example below, I fit a Cox model to the complete "lung" data from the `survival` package, showing 1000 months of outcomes. In the second example, I adjust the "lung" data as if I only had 500 months of survival data, creating object "lung1". Using survival analysis, how could I probabilistically forecast events for months 501-1000 for lung1, assuming I only had data for months 1-500? I've used time-series forecasting models (ETS, ARIMA, etc.) but I wonder if there's a better solution using survival analysis? A problem with these time-series models is that they can generate negative survival outcomes, which is obviously impossible. Nevertheless, I post an image below of an ETS forecast model I've used before with log adjustments to eliminate negative-value outcomes. I post simple code for the Cox survival models at the bottom. Images for "lung" and truncated "lung1" data: [](https://i.stack.imgur.com/cPXi6.png) Example of ETS time-series model forecast (using other data): [](https://i.stack.imgur.com/c5sTP.png) Code: ``` # Example from http://www.sthda.com/english/wiki/cox-proportional-hazards-model library(survival) library(survminer) library(dplyr) # needed for %>% and mutate() below # status 1 = censored # status 2 = dead ### Full data set ### # Cox regression of time to death on the time-constant covariates cox <- coxph(Surv(time, status) ~ age + sex + ph.ecog, data = lung) # Plot the baseline survival function ggsurvplot(survfit(cox, data = lung), palette = "#2E9FDF", ggtheme = theme_minimal()) ### Truncate the full data set "as if" we only had the first half of the time series available # lung1 reduces study time to 500 months (from 1000) and adjusts status (via status1) at month 500 cut-off lung1 <- lung %>% mutate(time1 = pmin(time,500)) %>% mutate(status1 = if_else(time > time1,as.integer(1),as.integer(status))) # Cox regression of time to death on the time-constant covariates cox1 <- coxph(Surv(time1, status1) ~ age + sex + ph.ecog, data = lung1) # Plot the truncated survival data myplot <- ggsurvplot(survfit(cox1, data = lung1), palette = "#2E9FDF", ggtheme = theme_minimal(), xlim = c(0, 1000)) myplot$plot <- myplot$plot + scale_x_continuous(breaks = sort(c(seq(0, 1000, 250)))) myplot ```
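One direction I have been exploring (a toy sketch with simulated data rather than the lung data, and with a simple exponential model instead of Cox, to keep it short): fit the hazard on the observed window, then simulate residual event times for the subjects still at risk, which gives a probabilistic forecast of future events with no possibility of negative outcomes.

```python
import numpy as np

rng = np.random.default_rng(4)
true_rate, n, cutoff = 1 / 300, 200, 500
t = rng.exponential(1 / true_rate, size=n)   # true (latent) event times
observed = np.minimum(t, cutoff)             # administratively censor at month 500
event = t <= cutoff                          # True = death observed before cutoff

# exponential MLE for the hazard: events / total time at risk
rate_hat = event.sum() / observed.sum()

# probabilistic forecast: simulate remaining lifetimes of censored subjects
# (memorylessness of the exponential makes a fresh draw valid here)
n_sims, at_risk = 2000, (~event).sum()
future = cutoff + rng.exponential(1 / rate_hat, size=(n_sims, at_risk))
deaths_by_750 = (future <= 750).sum(axis=1)
print(np.percentile(deaths_by_750, [5, 50, 95]))  # forecast interval, months 501-750
```

The same simulation idea should extend to a Cox fit by drawing from each subject's estimated conditional survival curve, which is part of what I am asking about.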
How to forecast future period events using survival analysis?
CC BY-SA 4.0
null
2023-04-12T10:03:10.220
2023-04-13T15:14:45.387
null
null
378347
[ "r", "time-series", "forecasting", "survival", "cox-model" ]
612660
2
null
612644
1
null
Let's start by setting up each individual $X_i$ as a function $X_i: \Omega_i \to S_i$, with $S_i$ being a set and $\mathcal{F}_i$ and $P_{i, \theta}$ defined appropriately. Now $T_n = T_n(X_1, X_2, ..., X_n)$ is a small abuse of notation, as random variables are functions from $\Omega$, while the right-hand side is a "deterministic" function $t_n$ from $S_1\times S_2\times ...\times S_n$. So I would rewrite $$ T_n := t_n(X_1, X_2, ..., X_n) $$ with $ \omega_i \in \Omega_i $ and with this we can consider $$ t_n(X_1(\omega_1), X_2(\omega_2),..., X_n(\omega_n)) = T_n(\omega_1, \omega_2, ..., \omega_n)$$ which tells us that $T_n$ is a function from $\Omega_1\times \Omega_2\times ...\times \Omega_n$, which therefore can be used to define $\Omega$. Now $\mathcal{F}$ and $P_\theta$ can be anything as long as $\mathcal{F}_i$ and $P_{i, \theta}$ are their projections down to the individual $\Omega_i$. To answer the questions in your comments I will be a little bit more explicit: If the $X_i$ are iid with $(\Omega_x, P_{x, \theta}, \mathcal{F}_x)$, then $\mathcal{F}$ is actually not $\mathcal{F}_x^n$ but instead the $\sigma$-algebra generated by $\mathcal{F}_x^n$ through intersections, complements and countable unions. $P_\theta$ is a probability measure, that is, a function from $\mathcal{F}$ to $[0, 1]$ with certain properties. Now if $A \in \mathcal{F}_x^n$, then $A = A_1 \times ... \times A_n, A_i \in \mathcal{F}_x$ and $P_\theta(A) = P_{x, \theta}(A_1) \cdot ...\cdot P_{x, \theta}(A_n)$. If $A\in\mathcal{F}\setminus\mathcal{F}_x^n $ then $P_{\theta}(A)$ is determined by how it was generated from the elements of $\mathcal{F}_x^n$. When you write $P_\theta(|T_n - \theta| < \varepsilon)$ it really means $P_\theta(\{\omega \in \Omega: |T_n(\omega) - \theta| < \varepsilon\})$, with $\{\omega \in \Omega: |T_n(\omega) - \theta| < \varepsilon\} \in \mathcal{F} $
null
CC BY-SA 4.0
null
2023-04-12T10:08:30.730
2023-04-12T12:26:34.910
2023-04-12T12:26:34.910
371966
341520
null
612662
1
null
null
0
71
My earlier question went unaddressed for unknown reasons, so I am restating it here. Given the constants $\{a,b,c,d,e,f\}$, I want to compute the conditional mean $\text{E}[Z|S_1,S_2]$ and the conditional variance $\text{Var}[Z|S_1,S_2]$, with: $Z=a+bX_1+cX_2+dY_1+eY_2+fY_3$ Is the following true? $\text{E}[Z|S_1,S_2]=a+b\text{E}[X_1|S_1,S_2]+c\text{E}[X_2|S_1,S_2]$ and $\text{Var}[Z|S_1,S_2]=b^2\text{Var}[X_1|S_1,S_2]+c^2\text{Var}[X_2|S_1,S_2]+d^2\sigma_{Y_1}^2+e^2\sigma_{Y_2}^2+f^2\sigma_{Y_3}^2+2bc\text{Cov}[X_1,X_2|S_1,S_2]+2de\text{Cov}[Y_1,Y_2]+2df\text{Cov}[Y_1,Y_3]+2ef\text{Cov}[Y_2,Y_3]$ where $\text{Cov}[X_1,X_2|S_1,S_2]=\text{E}[X_1X_2|S_1,S_2]-\text{E}[X_1|S_1,S_2]\text{E}[X_2|S_1,S_2]$ Assume $S_1=X_1+\epsilon_{X_1}, S_2=X_2+\epsilon_{X_2}$ (where $\epsilon_{X_1}\sim \mathcal N(0,\sigma_{\epsilon_{X_1}}^2)$ and $\epsilon_{X_2}\sim \mathcal N(0,\sigma_{\epsilon_{X_2}}^2)$) and the following joint distributions: $\begin{pmatrix} X_1 \\ X_2 \end{pmatrix}$ $\sim \mathcal N$ $\bigg(\begin{pmatrix} \mu_{X_1} \\ \mu_{X_2} \end{pmatrix}, \begin{pmatrix} \sigma_{X_1}^2 & \rho_{X_1X_2}\sigma_{X_1}\sigma_{X_2}\\ * & \sigma_{X_2}^2 \end{pmatrix}\bigg)$ $\begin{pmatrix} Y_1 \\ Y_2 \\ Y_3 \end{pmatrix}$ $\sim \mathcal N$ $\Bigg(\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} \sigma_{Y_1}^2 & \rho_{Y_1Y_2}\sigma_{Y_1}\sigma_{Y_2} & \rho_{Y_1Y_3}\sigma_{Y_1}\sigma_{Y_3}\\ * & \sigma_{Y_2}^2 & \rho_{Y_2Y_3}\sigma_{Y_2}\sigma_{Y_3}\\ * & * & \sigma_{Y_3}^2 \end{pmatrix}\Bigg)$ Assume also that $(X_1,X_2)$ and $(S_1,S_2)$ are independent from $(Y_1,Y_2,Y_3)$.
If $X$ $\sim \mathcal N_2(\mu_X,\Sigma_X)$, $Y$ $\sim \mathcal N_3(\mu_Y,\Sigma_Y)$, are these the $\text{E}[Z|S_1,S_2]$ and $\text{Var}[Z|S_1,S_2]$?
CC BY-SA 4.0
0
2023-04-12T10:30:37.200
2023-04-12T17:21:04.243
2023-04-12T17:21:04.243
384956
384956
[ "self-study", "normal-distribution", "conditional-probability", "covariance" ]
612663
1
null
null
2
36
I am confused about one aspect of the use of Gaussian processes for Bayesian inference. I understand that it relies on the assumption that the function values at your train and test data points are jointly multivariate normal, where you define a prior mean and covariance for the distribution. What I don't understand is that I believed covariance had a strict statistical definition, $\operatorname{cov}(X, Y) = \mathbb{E}\left[(X-\mu_X)(Y-\mu_Y)^\top\right]$. How is it justified statistically to just use what seems like any old function we like? I am pretty new to this, so I would appreciate it if anyone could direct me to good resources on the topic too.
How are custom kernel functions in Gaussian processes statistically justified?
CC BY-SA 4.0
null
2023-04-12T10:36:12.263
2023-04-12T12:04:48.227
null
null
363176
[ "bayesian", "inference", "gaussian-process" ]
612664
1
null
null
0
9
I have a question on how to account theoretically for the risk of a competing event in a specific setting. Suppose we have a cohort of patients at high risk of both infection-related and non-infection-related mortality. We randomize these patients to receive treatment A, which has an effect in reducing the risk of infection-related mortality, but not the non-infection-related one. Follow-up length is long, let's say 10 years. When we draw survival curves for infection-related mortality, we observe an initial reduction in risk in patients randomised to A; however, this effect dilutes after 4 years, and the curves converge after that time. My guess is that patients receiving A are protected from infection-related mortality but start to die of non-infection causes and get censored, and this influences the survival curves (as well as the incidence rate of events). Indeed, if we observe patients long enough, we will end up with all patients either having died of infection-related causes or censored (i.e., died for other reasons). Is this an actual problem when looking at the cause-specific risk of death, in the context of competing events? How can I account for such bias? Would a Cox regression analysis be influenced by this potential bias if we have a long enough follow-up?
Competing events - do they have a role when looking at cause-specific mortality?
CC BY-SA 4.0
null
2023-04-12T10:38:53.160
2023-04-13T15:34:35.207
null
null
122916
[ "survival", "competing-risks" ]
612665
1
null
null
1
22
I'm looking for a way to compare 2 (or more) igraph objects in R. These are trajectories in 3 dimensions, represented as networks of nodes and edges; the networks do not necessarily have the same number of either, but each node has a corresponding coordinate in 3-D space. I think something like Procrustes could work nicely, but it requires an equal number of points, so I'd need a pre-step like iterative closest point (ICP) to find correspondences between the networks. I fear I have gone down the rabbit hole trying to fit a solution to Procrustes (and have yet to get a sensible answer from ICP so far), and thought I'd reach out to the smart people in this community and see if there's a technique I'm missing. This is a new area for me, so I'm grateful for any advice. Thanks in advance!
Method for comparing 2 3-dimensional networks (igraphs in R)
CC BY-SA 4.0
null
2023-04-12T10:55:07.617
2023-04-12T10:55:07.617
null
null
322577
[ "r", "networks", "igraph" ]
612667
1
null
null
0
14
This is my first post. I am not sure how to approach this problem; I would like direction or even an algebraic solution, please. I have to sort 70 different-sized items into 12 groups. Each group has the same allowed total size, with a minimum and a maximum. I want to know how to work out how many possible arrangements there are. How would I go about working this out? Thanks
Number of possible permutations for N groups from I items where each item is of a different size but group total is fixed across groups
CC BY-SA 4.0
null
2023-04-12T11:57:09.153
2023-04-12T11:57:09.153
null
null
385517
[ "permutation" ]
612668
2
null
612663
0
null
In [Gaussian Process](https://stats.stackexchange.com/questions/502531/elementary-explanation-of-gaussian-processes), your task is to learn the distribution over functions $$f(\mathbf{x}) = [f(x_1), f(x_2), \dots, f(x_n)]'$$ This distribution is modeled by a Gaussian Process $\mathcal{GP}\left(m(\mathbf{x}),\, k(\mathbf{x}, \mathbf{x}')\right)$ parametrized by the mean function and covariance function $$\begin{align} m(\mathbf{x}) &= E[f(\mathbf{x})] \\ k(\mathbf{x}, \mathbf{x}') &= E\big[\big(f(\mathbf{x}) - m(\mathbf{x})\big)\big(f(\mathbf{x}') - m(\mathbf{x}')\big)\big] \end{align}$$ So the covariance function is the function that tells us what the covariance would be. It's the same as saying, for example, that the [conditional expectation](https://en.wikipedia.org/wiki/Conditional_expectation) $E[y|\mathbf{x}]$ is given by a function $g(\mathbf{x})$, where in linear regression it would be a linear function $g(\mathbf{x}) = \mathbf{x}\boldsymbol{\beta}$. It doesn't say that we are not taking the integral to calculate the expectation anymore, but that the integral has a solution given by $g(\mathbf{x})$. It also doesn't say that any conditional expectation is like this, but that this particular random variable has an expectation of this form. A Gaussian Process defines a distribution such that, if you calculated the covariance of $f(\mathbf{x})$ using the regular definition, it would take the form of the covariance function.
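A small numerical check of that last statement (my own sketch, using a squared-exponential kernel): draw many realizations of $f$ at a few inputs from the zero-mean GP, and the ordinary sample covariance recovers $k(x, x')$.

```python
import numpy as np

rng = np.random.default_rng(5)
k = lambda x, y: np.exp(-0.5 * (x - y) ** 2)   # squared-exponential kernel

x = np.array([0.0, 0.7, 2.0])
K = k(x[:, None], x[None, :])                  # prior covariance matrix at these inputs

# draw many functions f(x) from the zero-mean GP restricted to these inputs
L = np.linalg.cholesky(K + 1e-10 * np.eye(3))  # tiny jitter for numerical stability
f = L @ rng.normal(size=(3, 200_000))

emp_cov = np.cov(f)                            # the regular covariance estimate
print(np.abs(emp_cov - K).max())               # small: the kernel is the covariance
```

So the only real requirement on a "custom" kernel is that it always produces valid (positive semi-definite) covariance matrices like $K$ above.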
null
CC BY-SA 4.0
null
2023-04-12T11:58:47.080
2023-04-12T12:04:48.227
2023-04-12T12:04:48.227
35989
35989
null
612669
1
null
null
0
10
I need some clarification: is it correct that when applying a VAR or VARMA model, there are only dependent variables? For example, you will have a dependent variable X_t and a dependent variable Y_t. Thus, there are no independent variables included in the data set.
VAR independent variables?
CC BY-SA 4.0
null
2023-04-12T12:01:48.663
2023-04-12T12:01:48.663
null
null
383188
[ "vector-autoregression" ]
612670
1
612885
null
7
559
I am using [lmfit](https://lmfit.github.io/lmfit-py/) to fit a function in Python. This is my fit: Log scale: [](https://i.stack.imgur.com/udTLq.png) Lin scale: [](https://i.stack.imgur.com/dI803.png) where the histogram is my data and the dashed line is the fit. The error bars are simply the square root of the number of counts. I would like to decide automatically whether it is a good fit; for this I am using the chi-squared test. Although the fit looks decent to me, I always have to reject it when performing the chi-squared test. So either I am doing the test wrong or my 'eye calibration for what a good fit looks like' is wrong. I am obtaining the value of chi squared from the fit by doing `result.chisqr` as stated in [the documentation of lmfit](https://lmfit.github.io/lmfit-py/fitting.html#goodness-of-fit-statistics). For this fit I get a value of `162.8 = result.chisqr`. To select the critical value for chi squared I am using `scipy.stats.chi2.ppf(.95, degrees_of_freedom)` where `degrees_of_freedom = result.nfree` (for this example, `degrees_of_freedom = 22`). This gives me a critical chi-squared value of `33.9`, and since `result.chisqr` is larger, I have to consider this a bad fit. Is this actually a bad fit, or am I doing the test wrong? I noticed that if I use `result.redchi`, i.e. the reduced chi-squared statistic, then this seems to work. I am not sure if I have to use this or if it is just a coincidence.
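For reference, this is how I am computing the numbers quoted above (the values 162.8 and 22 come from this particular fit):

```python
from scipy import stats

chisqr, dof = 162.8, 22

critical = stats.chi2.ppf(0.95, dof)  # 33.92..., the cutoff I compare against
p_value = stats.chi2.sf(chisqr, dof)  # prob. of chisqr this large if the fit were good
redchi = chisqr / dof                 # reduced chi-squared, what result.redchi reports

print(critical, p_value, redchi)      # redchi is about 7.4 here
```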
Chi squared for goodness of fit test always rejects my fits
CC BY-SA 4.0
null
2023-04-12T12:27:13.530
2023-04-14T04:44:47.290
2023-04-12T12:50:17.370
313385
313385
[ "chi-squared-test" ]
612671
2
null
612644
4
null
I deem a generalized framework formalizing the concepts at work is apt here. For more details, refer to $\rm [I].$ Let $(\Omega, \boldsymbol{\mathfrak A}, \Pr)$ be a probability space. Consider the sequence of probability spaces $\langle (\mathcal X_i, \boldsymbol{\mathfrak A}_i, \mathbf P_i)\rangle_{i=1}^\infty,$ where $(\mathcal X_i, \Vert \cdot \Vert_i)$ is a normed linear space. Consider a sequence of rvs $\langle X_i\rangle_{i=1}^\infty$ and a sequence of real numbers $\langle r_i\rangle_{i=1}^\infty.$ Then $X_n = o_P(r_n)\iff \lim_{n\to\infty}\Pr[\Vert X_n \Vert_n\leq c|r_n|] = 1, ~\forall c >0.$ Now consider a sequence of measurable functions $f_n:\mathcal X_n\to \mathcal R ,~\mathcal R$ being a normed linear space with Borel $\sigma$-field. Define $T_n := f_n(X_n)$ and $T: \Omega \to \mathcal R. $ Then $T_n$ converges in probability to $T$ if and only if $\Vert T_n - T\Vert = o_P(1).$ Now consider a parametric family of distributions $\{\mathbf P_\theta\mid \theta\in \Theta \}$ on a sequence space $\mathcal X^\infty.$ Define a measurable function $g: \Theta\to \mathcal G, ~\mathcal G$ being a metric space with Borel $\sigma$-field. Take $\mathcal X_n = \mathcal X^n$ and take measurable functions $T_n: \mathcal X_n\to \mathcal G.$ Then $T_n$ is consistent for $g(\theta)$ if for each $\theta, ~T_n\overset{\mathbf P}{\to} g(\theta).$ The simplest and most common instance is taking $\Omega = \mathbb R^\infty, ~\mathcal X_n = \mathbb R^n.$ Observe how the underlying probability space is at work here based on the implications of the characterization of the convergence in probability above. 
(Also, as a footnote, one can see how "occurring in probability" can be generalized: Take $S\subseteq \prod_{i=1}^\infty \mathcal X_i.$ Then $S$ occurs in probability (denoted by $\mathcal P(S)$) if, for each $\varepsilon > 0$ and each $i,$ there exists $S_i(\varepsilon)\in\boldsymbol{\mathfrak A}_i$ such that $\prod_{i=1}^\infty S_i(\varepsilon)\subseteq S$ and $\mathbf P_i(S_i(\varepsilon))\geq 1-\varepsilon. $ To see how powerful it is, consider $f_n:\mathcal X_n \to \mathbb R$ and, as above, take $T_n = f_n(X_n).$ Now, define $S:= \left\{\langle x_i\rangle_{i=1}^\infty\mid\lim_{n\to\infty} f_n(x_n) = 0\right\}.$ Then $T_n = o_{\mathbf P}(1)\iff \mathcal P(S).$) --- ## Reference: $\rm [I]$ Theory of Statistics, Mark J. Schervish, Springer-Verlag, $1995,$ sec. $7.1.2,$ pp. $395-398.$
null
CC BY-SA 4.0
null
2023-04-12T12:33:23.803
2023-04-12T12:33:23.803
null
null
362671
null
612672
1
null
null
0
30
I asked a question originally [here](https://stats.stackexchange.com/questions/611345/can-the-error-be-modeled-in-the-approximation-of-expectation), but my notation was confusing and I couldn't convey it properly in terms of statistics. Notation in statistics is a bit new to me, because my background is mostly related to deterministic sciences. I have i.i.d. random variables $$ Y_1, Y_2, Y_3, ..., Y_M \sim \mathcal{N}(0, \sigma^2) $$ I have a function of these random variables, as follows: $$ X_i = f(Y_i) = \left| \frac{\sin\left( N \frac{Y_i}{2} \right)}{\sin\left(\frac{Y_i}{2}\right)} \right|^2 $$ So, in principle, I can say that each $X_i$ is also a random variable. Then, I have another variable $Z$ that is also, in principle, a random variable: $$ Z_l = \sum_{i = 1}^{M} X_i $$ I have $L$ realizations of $Z$. From prior experience, I know that the distribution of $Z$ is exponential: $$ Z_1, Z_2, Z_3, ..., Z_L \sim \text{Exp}\left[ \frac{1}{F} \right] $$ where $F$ is the expectation of the random variable $Z$. When I try to find an expression for $F$, I use the following integral, under the assumption that $M$ is a large number: $$ F = M \int_{-\infty}^{+\infty} f(y) p(y) dy $$ However, for a finite number of samples of $Z$ (that is, $L$), and probably also a finite $M$, the estimated expectation of $Z$ would itself have some variance that should go to $0$ when $L \to \infty$. Basically, I want to know the distribution of the estimated expectation of $Z$ instead of just having the expression above. Something like the following: $$ \mathbb{E}[Z] \sim \Pi(F, ...) $$ I wrote the first entry as $F$ to emphasize that it is the mean of the distribution. Additional information: I know what $M \int_{-\infty}^{+\infty} f(y) p(y) dy$ looks like in closed form.
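To illustrate what I am after, here is a quick simulation (with an arbitrary $F = 2.5$, not my real setup): the average of $L$ i.i.d. exponential draws with mean $F$ has mean $F$ and variance $F^2/L$, which shrinks as $L$ grows; it is in fact Gamma-distributed.

```python
import numpy as np

rng = np.random.default_rng(6)
F, L, reps = 2.5, 50, 100_000

z = rng.exponential(F, size=(reps, L))
zbar = z.mean(axis=1)      # one sample mean per repeated experiment

print(zbar.mean())         # close to F
print(zbar.var())          # close to F**2 / L (here 0.125), -> 0 as L grows
# exactly: zbar ~ Gamma(shape=L, scale=F/L)
```

Whether this Gamma form is what I should write for $\Pi(F, ...)$ above, and how finite $M$ enters, is what I would like to understand.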
Confused with notations about random variable and expectation
CC BY-SA 4.0
null
2023-04-12T12:38:06.753
2023-04-12T13:25:49.700
2023-04-12T13:25:49.700
327104
327104
[ "random-variable", "expected-value", "integral", "sum" ]
612673
1
612710
null
2
49
Suppose $Y_i=X_i'\beta+\epsilon_i$ with $E(\epsilon_i|X_i)=0$. Consider the usual OLS estimator for $\beta$ using a random sample $\{X_i,Y_i\}_{i=1}^n$: $\widehat{\beta}=(\frac{1}{n}\sum_{i=1}^nX_iX_i')^{-1}\frac{1}{n}\sum_{i=1}^n X_iY_i$. Substituting $Y_i=X_i'\beta+\epsilon_i$ into the expression gives $\widehat{\beta}=\beta+(\frac{1}{n}\sum_{i=1}^nX_iX_i')^{-1}\frac{1}{n}\sum_{i=1}^n X_i\epsilon_i$. The way to prove consistency is to show that $\frac{1}{n}\sum_{i=1}^nX_iX_i'\overset{p}{\rightarrow} E(X_iX_i')$, and $\frac{1}{n}\sum_{i=1}^n X_i\epsilon_i\overset{p}{\rightarrow} E(X_i\epsilon_i)=0$ by the weak law of large numbers and then by the continuous mapping theorem. Note that the weak law of large numbers only requires the existence of the expected values $E(X_iX_i')$ and $E(X_i\epsilon_i)$, where $E(X_i\epsilon_i)=E(X_iE(\epsilon_i|X_i))=0$ always holds under our model. Thus it seems that all I need to assume is that $E(X_iX_i')<\infty$ and that $E(X_iX_i')$ is invertible. Am I right?
What are the minimum conditions needed for the consistency of OLS estimator in the following linear regression model?
CC BY-SA 4.0
null
2023-04-12T12:41:22.230
2023-04-12T16:32:03.823
null
null
224576
[ "least-squares", "expected-value", "consistency", "law-of-large-numbers" ]
612674
2
null
134282
1
null
Let's try to understand using a data matrix $X$ of dimension $n \times d$, where $d \gg n$ and $\operatorname{rank}(X)=n$. Then $\underset{n \times d}{X}=\underset{n \times n}{U}\underset{n \times n}{\Sigma} \underset{n \times d}{V^T}$ (reduced SVD) with $X^TX=V\Sigma^TU^TU\Sigma V^T=V\Sigma^2V^T$ (since unitary / orthonormal $U$, $V$ and diagonal $\Sigma$) and the covariance matrix (assuming $X$ is already mean centered, i.e., the columns of $X$ have $0$ means) $C=E[X^TX]-E[X]^TE[X]=\frac{X^TX}{n-1}-0=V\frac{\Sigma^2}{n-1}V^T=\tilde{V}\Lambda \tilde{V}^T$ (PCA, by spectral decomposition) $\implies \Lambda = \frac{\Sigma^2}{n-1}$ and $V=\tilde{V}$ up to a sign flip. Let's validate the above with `eigenfaces` (i.e., the principal components / eigenvectors of the covariance matrix for such a face dataset) using the following face dataset: ``` import numpy as np from sklearn.decomposition import PCA from sklearn.datasets import fetch_olivetti_faces X = fetch_olivetti_faces().data X.shape # 400 face images of size 64×64 flattened # (400,4096) n = len(X) X = X - np.mean(X, axis=0) # mean-centering # X = X / np.std(X, axis=0) # optional scaling to sd=1 for full z-scoring # choose first k eigenvalues / eigenvectors for dimensionality reduction k = 25 # SVD U, Σ, Vt = np.linalg.svd(X, full_matrices=False) # PCA pca = PCA(k).fit(X) PC = pca.components_.T #Vt.shape, PC.shape ``` Now let's compare the eigenvalues and eigenvectors computed: ``` # first k eigenvalues of Λ = Σ^2/(n-1) print(Σ[:k]**2/(n-1)) # from SVD # [18.840178 11.071763 6.304614 3.9545844 2.8560426 2.49771 # 1.9200633 1.611159 1.5492224 1.3229507 1.2621089 1.1369102 # 0.98639774 0.90758985 0.84092826 0.77355367 0.7271429 0.64526594 # 0.59645116 0.5910001 0.55270135 0.48628208 0.4619924 0.45075357 # 0.4321357 ] print(pca.explained_variance_[:k]) # from PCA # [18.840164 11.07176 6.3046117 3.9545813 2.8560433 2.4977121 # 1.9200654 1.6111585 1.549223 1.3229507 1.2621082 1.1369106 # 0.98639697 0.9075892 0.84092826 0.773553
0.72714305 0.64526534 # 0.59645087 0.5909973 0.55269724 0.4862703 0.461944 0.45075053 0.43211046] # plot PC the k dominant principal components / eigenvectors as images # 1. using PC obtained with PCA # 2. using Vt[:k,:].T obtained from SVD ``` [](https://i.stack.imgur.com/WUWE6.png) Here the differences in the eigenvectors are due to sign ambiguity (refer to [https://www.osti.gov/servlets/purl/920802](https://www.osti.gov/servlets/purl/920802))
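As a side check that needs no dataset download, the identity $\Lambda = \Sigma^2/(n-1)$ also holds on synthetic data; here `np.linalg.eigvalsh` of the covariance matrix stands in for PCA (a sketch with made-up dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 200                     # d >> n, so rank(X) <= n
X = rng.standard_normal((n, d))
X = X - X.mean(axis=0)             # mean-center the columns

# eigenvalues via reduced SVD: Λ = Σ^2 / (n - 1)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
lam_svd = S**2 / (n - 1)

# eigenvalues via spectral decomposition of the covariance matrix
C = X.T @ X / (n - 1)
lam_cov = np.linalg.eigvalsh(C)[::-1][:n]  # largest first, top n

print(np.allclose(lam_svd, lam_cov))  # True up to floating-point error
```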
null
CC BY-SA 4.0
null
2023-04-12T12:53:14.610
2023-04-13T04:18:40.547
2023-04-13T04:18:40.547
131074
131074
null
612675
2
null
324372
0
null
My typical spiel about $R^2$ seems to apply here. Below, I give a standard definition of $R^2$. $$ R^2=1-\left(\dfrac{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\hat y_i \right)^2 }{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\bar y \right)^2 }\right) $$ The fraction numerator is the square loss incurred by your model. The fraction denominator is the square loss incurred by a baseline model that always predicts the overall mean $\bar y$. However, nothing says that we have to use such a model as the baseline to which we compare our performance. Sure, that might be a reasonable “must-beat” level of performance, but if we know our competitor to be able to achieve some level of performance and need to be able to do better than that to get any business, then that sure sounds like “must-beat” performance. In fact, when you run a [chunk test](https://stats.stackexchange.com/questions/27429/what-are-chunk-tests) of nested models (such as a fairly routine ANCOVA test of a categorical variable), you are comparing the performance of your full model to a model that is reduced but perhaps not reduced all the way to predicting the same value every time, so this idea even seems to exist in the canon of standard statistical analysis! While I would have major reservations about calling such a statistic $R^2$, it seems to be quite aligned with standard statistics ideas to calculate as below: $$ 1-\dfrac{\text{ Performance of your model }}{\text{ Benchmark performance }} $$ I would use a similar argument about reduction in error rate (reduction in MSE, reduction in MAE, etc) as I discuss [here](https://stats.stackexchange.com/a/605451/247274).
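As a sketch of what I mean (the function name and numbers are purely illustrative):

```python
def skill_score(y_true, y_model, y_benchmark):
    """1 - (model square loss) / (benchmark square loss).

    Positive values mean the model beats the benchmark; plugging in the
    overall mean of y_true as y_benchmark recovers ordinary R^2.
    """
    sse_model = sum((y - yhat) ** 2 for y, yhat in zip(y_true, y_model))
    sse_bench = sum((y - yb) ** 2 for y, yb in zip(y_true, y_benchmark))
    return 1 - sse_model / sse_bench

y     = [1.0, 2.0, 3.0, 4.0]
ours  = [1.1, 1.9, 3.2, 3.8]
rival = [1.5, 1.5, 3.5, 3.5]  # a competitor's predictions as the baseline
print(skill_score(y, ours, rival))        # 0.9: we beat the rival benchmark
print(skill_score(y, ours, [2.5] * 4))    # 0.98: ordinary R^2 vs the mean
```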
null
CC BY-SA 4.0
null
2023-04-12T12:55:32.650
2023-04-12T12:55:32.650
null
null
247274
null
612676
2
null
612670
4
null
Your fit is pretty good. It is not perfect. In some sense, you are correct that your eye calibration is off, but that is not such a terrible issue. Basically every model is at least a little bit wrong. With sufficient data, the slightest of deviations can be caught. However, the eye test is often a powerful and useful approach when “good enough for subsequent work” falls into that “I know it when I see it” category. It seems that you are experiencing the [same kind of issue encountered with a lot of testing for distribution normality.](https://stats.stackexchange.com/questions/2492/is-normality-testing-essentially-useless)
null
CC BY-SA 4.0
null
2023-04-12T13:01:09.953
2023-04-12T13:01:09.953
null
null
247274
null
612677
1
null
null
0
10
I'm comparing the performance of the Restricted ML estimator and the Bayesian method for estimating a multilevel model by Monte Carlo simulation. As I'm a beginner in Bayesian analysis, I don't know how to specify the prior distribution. In general, it is preferable to use an uninformative prior for a fair comparison. But in the current case, since estimates from ML are available, I'm wondering if I can use them in the prior distribution. For example, suppose I have an estimate of 1 with an estimated standard error of 2 for some parameter beta. Can I then use it in the prior such that beta~Normal(1, 2^2) (used in both mean and variance) or beta~Normal(1, 100^2) (used in mean only)?
On the choice of prior distribution in a multilevel model
CC BY-SA 4.0
null
2023-04-12T13:06:22.740
2023-04-12T13:06:22.740
null
null
111064
[ "bayesian", "prior" ]
612678
2
null
612670
7
null
In a way, the "eye test" measures an effect size but not a p-value. Looking at the histogram versus the fit line can give you a sense of "how far" the two are apart, but the p-value will depend very much on the sample size represented in the graph, which is not as immediately obvious. With sufficient sample size, arbitrarily small deviations from the true distribution may be deemed significant, which you would never be able to discern by the eye test. What you're seeing here is a "pretty good" fit with a decent amount of data - the effect size of the deviation from the chi-squared distribution is small, but we have sufficient data to be sure it's not by chance. Oftentimes, downstream analysis that makes particular assumptions about data distribution will still work reasonably well even if the data is not truly from the assumed distribution. The eye test may mislead you about whether a difference is of statistical significance, but it's usually a reasonable indicator of whether the difference is of practical significance. Rather than looking at p-value, you could examine a measure of effect size like phi or Cramer's V. These measures can quantify "how similar" the observed and expected distributions are, rather than "how sure" you are that they are different. It may be trickier to justify an effect size cutoff for rejecting/accepting fits, but you could perhaps use the eye test to identify meaningful differences and see if that represents a common effect size range where you'd reject by eye.
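As a sketch of the last paragraph (the counts are made up; the $(k-1)$ normalization is one common convention for a one-way goodness-of-fit V, and Cohen's $w$ simply drops it):

```python
import numpy as np

def cramers_v_gof(observed, expected):
    """Effect size for a one-way goodness-of-fit test.

    Uses the convention V = sqrt(chi2 / (n * (k - 1))), where n is the
    total count and k the number of categories.
    """
    observed = np.asarray(observed, float)
    expected = np.asarray(expected, float)
    chi2 = np.sum((observed - expected) ** 2 / expected)
    n, k = observed.sum(), len(observed)
    return np.sqrt(chi2 / (n * (k - 1)))

obs = np.array([48, 34, 15, 3])           # e.g. histogram bin counts
exp = np.array([50, 30, 15, 5])           # counts implied by the fitted curve
print(round(cramers_v_gof(obs, exp), 3))  # ~0.069: a small effect size
```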
null
CC BY-SA 4.0
null
2023-04-12T13:26:19.390
2023-04-13T13:39:23.943
2023-04-13T13:39:23.943
76825
76825
null
612679
1
null
null
0
29
We regress $Y$ on categorical data $X_i,\ i=1,\ldots,\ p$. Suppose this is a large dataset and many of the rows of the design matrix are duplicated. We minimize the dataset as follows: - We average $Y$ for each unique row of the design matrix, - We perform a weighted regression with each row of the design matrix weighted by its number of duplicates. How do the regression coefficients and their standard errors in this reduced dataset compare to regression on the raw data?
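For concreteness, here is a numerical sketch of the setup (the data are made up; `numpy.linalg.lstsq` does both fits, with the weighted fit done via the square-root-of-weights trick):

```python
import numpy as np

rng = np.random.default_rng(1)
# raw design: categorical dummies with many duplicated rows
levels = rng.integers(0, 3, size=300)
X = np.column_stack([np.ones(300), levels == 1, levels == 2]).astype(float)
y = X @ np.array([1.0, 2.0, -1.0]) + rng.standard_normal(300)

# OLS on the raw data
beta_raw, *_ = np.linalg.lstsq(X, y, rcond=None)

# collapse: one row per unique design row, mean y, weight = duplicate count
Xu, inv, w = np.unique(X, axis=0, return_inverse=True, return_counts=True)
inv = np.ravel(inv)                       # guard against shape quirks
ybar = np.bincount(inv, weights=y) / w

# weighted least squares: minimize sum_g w_g * (ybar_g - x_g' b)^2
sw = np.sqrt(w)[:, None]
beta_wls, *_ = np.linalg.lstsq(Xu * sw, ybar * np.sqrt(w), rcond=None)

print(np.allclose(beta_raw, beta_wls))  # True: coefficients are identical
```

The coefficients agree because both fits solve the same normal equations; the residual sums of squares differ, since averaging discards the within-group variation.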
How do coefficients change in a weighted least squares regression?
CC BY-SA 4.0
null
2023-04-12T13:32:43.383
2023-04-12T13:32:43.383
null
null
385523
[ "regression", "heteroscedasticity", "weights" ]
612680
2
null
612638
2
null
The integration by parts formula still holds for general distribution functions (under appropriate technical conditions). For example, Theorem 18.4 in Probability and Measure by Patrick Billingsley (do not confuse $F, G$ in this theorem with $F, G$ in your question): > Let $F$ and $G$ be two nondecreasing, right-continuous functions on an interval $[a, b]$. If $F$ and $G$ have no common points of discontinuity in $(a, b]$, then \begin{align} \int_{(a, b]}G(x)dF(x) = F(b)G(b) - F(a)G(a) - \int_{(a, b]}F(x)dG(x). \tag{1} \end{align} Equation $(1)$ is a good start for proving the result of your interest -- if we can extend the integral region $(a, b]$ to $\mathbb{R}$. To this end, we would need to impose integrability conditions of $u$ (of course, for $(1)$ to hold, we need to also assume that $u$ and $F$ / $G$ have no common discontinuity in $\mathbb{R}$ and $u$ is right-continuous): \begin{align} \int_{\mathbb{R}}|u(x)|dF(x) < \infty, \; \int_{\mathbb{R}}|u(x)|dG(x) < \infty. \tag{2} \end{align} By $(1)$, for every $n$: \begin{align} & \int_{(-n, n]}u(x)dF(x) = F(n)u(n) - F(-n)u(-n) - \int_{(-n, n]}F(x)du(x). \tag{3} \\ & \int_{(-n, n]}u(x)dG(x) = G(n)u(n) - G(-n)u(-n) - \int_{(-n, n]}G(x)du(x). \tag{4} \end{align} It then follows by $(3), (4)$ and $F(x) \geq G(x)$ for all $x \in \mathbb{R}$ that \begin{align} & \int_{(-n, n]}u(x)dF(x) - \int_{(-n, n]}u(x)dG(x) \\ =& u(n)(F(n) - G(n)) - u(-n)(F(-n) - G(-n)) - \int_{(-n, n]}[F(x) - G(x)]du(x) \\ \leq & u(n)(F(n) - G(n)) - u(-n)(F(-n) - G(-n)). \tag{5} \end{align} If $u$ is non-negative, then the right-hand side of $(5)$ is bounded by $u(n)(F(n) - G(n))$, which can be rewritten as $u(n)(1 - G(n)) - u(n)(1 - F(n))$, which converges to $0$ as $n \to \infty$ by the integrability of $F, G$ and the monotonicity of $u$. Similarly, if $u$ is non-positive, then the right-hand side of $(5)$ is bounded by $-u(-n)(F(-n) - G(-n))$ and converges to $0$ as $n \to \infty$ (see the next paragraph for a more detailed derivation). 
If $u$ takes both negative and positive values, it follows by the monotonicity of $u$ that for sufficiently large $N$, $u(N) > 0$ and $u(-N) < 0$, whence for all $n > N$, again by monotonicity of $u$: \begin{align} 0 \leq u(n)(1 - F(n)) \leq \int_n^\infty u(x)dF(x), \; \int_{-\infty}^{-n}u(x)dF(x) \leq u(-n)F(-n) \leq 0. \tag{6} \end{align} By condition $(2)$ and Lebesgue's dominated convergence theorem (DCT), $(6)$ implies that $u(n)(1 - F(n)) \to 0$ and $u(-n)F(-n) \to 0$ as $n \to \infty$. Similarly, $u(n)(1 - G(n)) \to 0$ and $u(-n)G(-n) \to 0$ as $n \to \infty$. Therefore, the right-hand side of $(5)$ always converges to $0$ as $n \to \infty$ for $u$ that is nondecreasing and integrable. Now the result follows by passing $n \to \infty$ on both sides of $(5)$ (note that condition $(2)$ and DCT imply the left-hand side of $(5)$ converges to $E_F[u] - E_G[u]$).
null
CC BY-SA 4.0
null
2023-04-12T13:37:54.927
2023-04-14T00:54:11.893
2023-04-14T00:54:11.893
20519
20519
null
612682
1
null
null
0
38
I'm running Linear Mixed Models on a dataset. The assumption for homoscedasticity is not being met, however when I remove one independent variable, then it's being met. So all the other variables except this one are homoscedastic. To fix this and make the dataset fit the model better, could I just transform the independent variable which is the issue by cube rooting it? And leaving others as they are? Or do I have to transform all of them if I'm transforming one variable? Also, is transforming them going to increase any kind of error rates and make inferences difficult? I would really appreciate help with this and I apologize if this is a silly question, as I'm a beginner.
Should all independent variables be transformed to fit Homoscedasticity in Linear Mixed Model?
CC BY-SA 4.0
null
2023-04-12T13:47:15.840
2023-04-12T13:47:15.840
null
null
385522
[ "regression", "mixed-model", "multiple-regression", "data-transformation", "continuous-data" ]
612683
1
612715
null
6
189
I try to compare logistic regression with XGBoost on simulated data. What I found is that the XGBoost AUC is better than that of the logistic regression, even when the logistic regression predicts the perfect probability (the probability used to generate the binary outcome). Please see details below: - Simulate X: generate 4 random variables (x1, x2, x3 and x4). See code section A. - Simulate Y: Let log_odds = x1+x2+x3+x4 (setting all coefficients to 1 and the intercept to 0). Then convert log_odds to probability, and use the probability to generate the binary outcome. See code section B. - Fit logistic regression. The estimated coefficients are very close to the ones used for simulation. The AUC is 0.834. coef: [[0.92180079 1.07390035 0.97258221 0.80164048]] Intercept [-0.00462648]. See code section C. - Fit XGBoost. The AUC is 0.908. See code section D. - Simulate a testing set with a different random seed. Logistic regression AUC is 0.836, and XGBoost AUC is 0.907. See code section E. As I understand it, when I used the simulated probability to generate binary outcomes, I was introducing randomness to the data, which could not be modelled/predicted. However, if the logistic regression already predicts probabilities that are so close to the simulated ones, how could XGBoost generate better performance? Is this a problem of AUC, my test design, or my code? Thank you very much in advance!
``` import random import numpy as np import pandas as pd import xgboost as xgb from xgboost import XGBClassifier import sklearn from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler, LabelEncoder, MinMaxScaler from sklearn.metrics import classification_report from numpy.random import seed from numpy.random import rand # Section A: Simulate X random.seed(1) seed(1) n=10000 x1=np.array(rand(n))*4-2 x2=x1+np.array(rand(n)) x3=-np.array(rand(n))*1.9 x4=np.array(rand(n))*1 print(sum(x1<=x2)==n) df=pd.DataFrame({"x1":x1,"x2":x2,"x3":x3,"x4":x4}) # Section B: Simulate Y def logistic(z): return 1 / (1 + np.exp(-z)) lp=x1+x2+x3+x4 prob = logistic(lp) y = np.random.binomial(1, prob.flatten()) # Section C: Fit logistic regression and check AUC from sklearn.linear_model import LogisticRegression LR = LogisticRegression() LR.fit(df.values,y) print("coef: ",LR.coef_, LR.intercept_) print("AUC: ",LR.score(df, y)) # Section D: Fit XGBoost and check AUC from sklearn.tree import DecisionTreeRegressor, plot_tree from xgboost import XGBClassifier from sklearn.metrics import classification_report from sklearn.metrics import roc_auc_score, auc, log_loss fit_xgb= XGBClassifier(booster='gbtree', numWorkers=1, n_estimators=10, minChildWeight=15.0, seed=1, objective='binary:logistic', maxDepth=3,eta=0.08, reg_lambda=10.0, alpha=10.0, gamma=0.0, colsampleBytree=0.7, subsample=0.8) fit_xgb.fit(df, y) xgb_prob = fit_xgb.predict_proba(df)[:,1] print('XGB AUC is', roc_auc_score(y, xgb_prob)) # Section E: Simulate testing set with different random seed, and check AUC random.seed(10) seed(10) n=10000 x1_1=np.array(rand(n))*4-2 x2_1=x1_1+np.array(rand(n)) x3_1=-np.array(rand(n))*1.9 x4_1=np.array(rand(n))*1 print(sum(x1_1<=x2_1)==n) df_1=pd.DataFrame({"x1":x1_1,"x2":x2_1,"x3":x3_1,"x4":x4_1}) lp_1=x1_1+x2_1+x3_1+x4_1 prob_1 = logistic(lp_1) y_1 = np.random.binomial(1, prob_1.flatten()) xgb_prob_1 = fit_xgb.predict_proba(df_1)[:,1] print('XGB AUC on 
testing set is: ', roc_auc_score(y_1, xgb_prob_1)) print("Logistic regression AUC is: ",LR.score(df_1, y_1)) ```
How could XGBoost beat perfect logistic regression?
CC BY-SA 4.0
null
2023-04-12T13:56:34.130
2023-04-12T18:11:45.103
2023-04-12T14:40:24.587
44356
44356
[ "logistic", "simulation", "boosting", "model-evaluation" ]
612684
1
null
null
2
40
I have recently read some work that features hypothesis testing of individual regression coefficients when the overall regressions featuring those coefficients have $R^2_{adj}<0$. One example is Schmidt & Fahlenbrach (2017), granted, in regressions where the primary variables of interest (the ones whose tests I am skeptical to believe) are instrumental variables. The hypothesis tests of the individual regression coefficients turn out significant with $p<0.05$, for what it is worth. However, the $R^2_{adj}<0$ is troubling. If we take $$R^2_{adj} = 1 - \left[\left(\dfrac{\overset{n}{\underset{i=1}{\sum}}\left( y_i - \hat y_i \right)^2}{n - p - 1}\right) \middle/ \left(\dfrac{\overset{n}{\underset{i=1}{\sum}}\left( y_i - \bar y \right)^2}{n-1}\right) \right]\text{,}$$ then $R^2_{adj}<0$ means that the fraction numerator exceeds the fraction denominator. That is, our (unbiased) estimate of the error variance is worse than our (unbiased) estimate of total variance. From this, I conclude that the model exhibits "anti"-performance, and we are worse-off for having done the modeling. How could I possibly believe any individual regression coefficient hypothesis test when the model performs so poorly that we not only lack much predictive ability (rather typical) but do a worse job of predicting than we would do if we did no modeling? How believable are the hypothesis tests of individual regression coefficients when the overall regressions have $R^2_{adj}<0?$ ([This](https://stats.stackexchange.com/q/561411/247274) seems related but not quite the same and containing a mixed-bag of responses, anyway.) REFERENCE [Schmidt, Cornelius, and Rüdiger Fahlenbrach. "Do exogenous changes in passive institutional ownership affect corporate governance and firm value?." 
Journal of Financial Economics 124.2 (2017): 285-306.](https://pdf.sciencedirectassets.com/271671/1-s2.0-S0304405X17X00056/1-s2.0-S0304405X17300053/Cornelius_Schmidt_Institutional_Ownership_2017.pdf?X-Amz-Security-Token=IQoJb3JpZ2luX2VjEN7%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLWVhc3QtMSJHMEUCIDJooH%2B15SguzcG11zrCTuadE6ceDuIk4GcfNkRGS7xDAiEAupQwDu7PdtXZh%2B2R0Qf4zq36g5i6WeYTEHzLsOmTWOsqvAUIxv%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FARAFGgwwNTkwMDM1NDY4NjUiDLB4B4AiVIHEtdjUWyqQBWLRxRIsw3V7bVAfp9i8569twnbcTJN0DPijiNKs0fi6CRs5qWd25lcNGFxEzDA6B24nZpMJ8fHAtcI4xrGUlGVKFpV1Ny%2Bol%2BDkgn2SdOsl53mXrRlhQhceOvBMGiZZCAFA2344V3A8JxjLs8HV39sbyg7j4JJNW1MPUHrdYM06VLvuBdu2ZesLTKPrCBcT0RbL%2FXOm%2FDTKFJp9T2egPvAzFK5y7Qe5BSUytEaxNN%2FGrhlTV084vqZnSFIOFDuGAibm5slXcjI%2F2K3XA8UGxX9ndXPfUwuwdaHeFxcAzsKu7NEie3iabKv0vgqCr6vJpx7FWZheItYabQ%2BL%2BKnFal%2B6sjNpWJucVzzff%2Fqqelej72t%2F0Hli7ybf5LHzI72Lw6ASQZ%2BWyWe%2Fhrw80jj%2FixDY0cfZrKkHdZg5HwEBQBnboonGdzUy%2BPaD9Jy93V6yg0ohnCYwGyBvNphgRdq1njuaClmZD%2FP3La27YFBs6wdxoaAHBOa5Zaf8%2FMtjv4ieBq%2Btdh%2FLNi%2F5tYg2LyveJcKndGDwtg%2BnZGAtawpNKzZHhgkAYhNlp9BP%2BOKhuLqwwLU0HpXaLbxkkptXjIjFZfBTsstBbWJ%2ByuGkDLZt9V%2FBekvdL3qRtUN1YwA4j4WONb9CVtIAxiBrRbvtIc2xcatEgcONthoxAdgOxEJj2cPRMDfRV%2FPoVHpkVkcSwYboMWadOiqVMhrrmN8GWn1L86WaUVxeMdxoBtHlUNszvIoGhf2vZ5TJqGww21u956hD%2Bdlrf8i9SRSrNJvAfLYNn3R2nvi%2BgAQ1jq0YjHuVn%2F5vK2fmr52GK%2BqgZ2A7KzUq1Jbsp4s08QRELylAKfaSsVZJ1YkIHjvKIbdAUylDhu9qMKC63KEGOrEB4Wmp3TllUt2ZtRJlcxDNN4oNm9p8rl%2BuimDFHeNZhtLEYtQ3csUwZnb7Y1CGDpMGM%2FRWhqIqjOuO5LLA4DR5ZT8R6bbnD7jrjv30JExzKumIGDvecYSIDHYDwqTR4YFOJ0UUaT0DzT3O5snlSLSqv20RBW0VDtkxQKC0LpwikF1fHgpmsoYNFgpxteCzOWdQD9A6DjvaKBVCEnr6fWK8QH4BHlGLNMQjGy06StsyM5CU&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20230412T215039Z&X-Amz-SignedHeaders=host&X-Amz-Expires=300&X-Amz-Credential=ASIAQ3PHCVTY7Q6LLHSW%2F20230412%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=1a101cebb609f713416eda7ea5863b1b88148f22301335fc0fcbe1e737bebc9d&hash=55181f63832c83fe16d0e7f6552be390c1698842a995520732446b3a9d21b16a&host=6804
2c943591013ac2b2430a89b270f6af2c76d8dfd086a07176afe7c76c2c61&pii=S0304405X17300053&tid=pdf-c4ea12fd-5936-41fc-b11d-5ba8690fe9e2&sid=027339f98dfd9542127ab5f38386e22c1a9bgxrqa&type=client)
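For concreteness, the arithmetic behind a negative $R^2_{adj}$ is easy to check (a stdlib sketch of the formula above; the numbers are made up):

```python
def adjusted_r2(r2, n, p):
    """R^2_adj = 1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# R^2 = 0.10 looks like *some* signal, but with 5 predictors and n = 30
# the unbiased error-variance estimate exceeds the total-variance estimate:
print(adjusted_r2(0.10, n=30, p=5))   # ~ -0.0875
print(adjusted_r2(0.10, n=300, p=5))  # positive: same R^2, larger sample
```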
Test regression coefficient when overall regression has $R^2_{adj}<0?$
CC BY-SA 4.0
null
2023-04-12T13:57:54.990
2023-04-12T22:02:19.257
2023-04-12T22:02:19.257
247274
247274
[ "regression", "hypothesis-testing", "econometrics", "regression-coefficients", "negative-r-squared" ]
612685
1
null
null
0
33
I have a dataset from a class taught in two different ways, groupwork or lectures, and the students chose which format they preferred after the first lesson and then again after the second lesson. I'm not sure of the best way to analyse this potential change in preference; the data is all anonymous and not paired from the 1st questionnaire to the 2nd one. I'm not great with stats, so any help is greatly appreciated :)
Yes/No questionnaire analysis
CC BY-SA 4.0
null
2023-04-12T14:01:56.120
2023-04-12T14:01:56.120
null
null
385525
[ "r" ]
612686
1
612795
null
2
51
I don't understand how we obtain the model that we test for a unit root in the ADF test. Let me explain better. The ADF test is used to test whether a time series has a unit root. We assume that the data generating process of the time series is an AR(p). $$Y_t = \rho_1\cdot Y_{t-1} + \rho_2\cdot Y_{t-2} + \rho_3\cdot Y_{t-3} +...+ \rho_p\cdot Y_{t-p} + \upsilon_t$$ $$ \upsilon_t \sim i.i.d.\text{N} \Big( 0, \sigma^{2}\Big)$$ In order to do so, the ADF test tests the null hypothesis that $\phi = 0$ in the following model. $$\Delta{Y_t} = \phi\cdot Y_{t-1}+ \gamma_1\cdot \Delta{Y_{t-1}} + \gamma_2\cdot \Delta{Y_{t-2}} + \gamma_3\cdot \Delta{Y_{t-3}} +...+ \gamma_{p-1}\cdot \Delta{Y_{t-p+1}} + \upsilon_t$$ My question is: how do we derive the second model from the first one? Which mathematical operations are required? I tried subtracting $Y_{t-1}$ from both sides, plus some other manipulations and rearrangements, but I didn't succeed. Lagged error terms always came up. The book "Applied Time Series Econometrics" by Lütkepohl states that: [](https://i.stack.imgur.com/pxdp0.png) [](https://i.stack.imgur.com/EQv2j.png) Any suggestions? Am I misunderstanding or forgetting something?
Augmented Dickey–Fuller test, Test Model
CC BY-SA 4.0
null
2023-04-12T14:05:28.383
2023-04-13T14:04:55.143
2023-04-13T14:04:55.143
67799
378576
[ "time-series", "hypothesis-testing", "unit-root", "augmented-dickey-fuller" ]
612688
2
null
612405
2
null
I think it is possible to simulate this simply using three independent standard uniform random variables, without any approximation or numerical interpolation or rejection sampling or use of the Lambert W function. [Yves Daoust's answer](https://stats.stackexchange.com/a/612561/2958) used $Z= X^{\alpha+1}$ and $\gamma= \frac{1}{{(\alpha+1)\log(\beta)-1}}$ to simplify the cdf for $Z$ to $z(\gamma\log z+1)$ on $(0,1]$ and so the density to $1+\gamma +\gamma \log(z)$. You will have $-1 \le \gamma \le 0$. This is a mixture distribution for $Z$, with probability $1+\gamma$ of being drawn from a uniform distribution on $(0,1]$ and probability $-\gamma$ of being drawn from a distribution with density $-\log(z)$ on the same support. [Somebody previously asked about the latter distribution](https://stats.stackexchange.com/questions/379694/is-there-a-name-for-the-distribution-whose-pdf-is-lnx-on-its-support-0-1) with an answer suggesting it might be a negative log-gamma distribution and a comment suggesting it might be a product uniform distribution as the distribution of the product of two standard uniform random variables. That is the kind of thing we want to make random samples. The R code is short and simple: ``` rthisdist <- function(n, alpha=0, beta=1){ # require alpha > -1 and # beta positive for logarithm and <= 1; 1E-99 will do gam <- 1 / ((alpha+1) * log(beta) - 1) # gamma without overloading uniformA <- runif(n) uniformB <- runif(n) uniformC <- runif(n) z <- ifelse(uniformA < -gam, uniformB, 1) * uniformC return(z ^ (1 / (alpha+1))) } ``` Testing this against your CDF gives a convincing match (simulations in blue, your CDF in red): ``` testalpha <- 2 testbeta <- 0.6 plot.ecdf(rthisdist(10^4, testalpha, testbeta), col="blue") curve((x^(testalpha+1) * ((testalpha+1) * (log(testbeta * x)) - 1)) / ((testalpha+1) * log(testbeta) - 1), from=0, to=1, add=TRUE, col="red") ``` [](https://i.stack.imgur.com/5qiHQ.png)
null
CC BY-SA 4.0
null
2023-04-12T14:18:01.707
2023-04-12T14:18:01.707
null
null
2958
null
612690
1
612707
null
2
45
I have the below graphical causal model. I thought that when we apply the intervention, i.e. do-calculus, we get the graph on the right - that is, deleting arrows going into the treatment (drug). To be clear, I want to see the effect of drug on cancer. [](https://i.stack.imgur.com/wTfdl.png) However, when I used R's daggify and the ggdag_adjustment_set function, it only highlights age as something to control for: [](https://i.stack.imgur.com/xsOqZ.png) Why hasn't area been highlighted - is it because if I control for area it may lead to bias? What kind of variable is 'area', and do I need to control for it?
What variables need to be controlled for in this causal graphical model?
CC BY-SA 4.0
null
2023-04-12T14:23:09.083
2023-04-12T20:22:17.763
null
null
250242
[ "causality", "graphical-model", "causal-diagram" ]
612691
1
null
null
1
32
I am interested in testing for equivalence. I know about the TOST procedure, but many people who I work with do not, so I wanted to apply a method that they are more familiar with. Specifically, I was about to use the two-sided $1-\alpha$ confidence interval of an estimate $\hat{\theta}$ to check whether this estimate lies within a predefined equivalence interval $(\theta_L,\theta_U)$. But I am hesitating after reading the following statement at the [website](https://support.minitab.com/en-us/minitab/20/help-and-how-to/statistics/equivalence-tests/supporting-topics/confidence-intervals-in-equivalence-testing/) of the software Minitab: > [...] the confidence interval for equivalence also considers the additional information of the lower and upper limits of the equivalence interval. Because the confidence interval incorporates this additional information, a (1 – alpha) x 100% confidence interval for equivalence is in most cases tighter than a standard (1 – alpha) x 100% confidence interval that is calculated for a t-test. In another [post](https://stats.stackexchange.com/questions/125410/equivalence-testing-tost-method-why-ci-of-90) on Cross Validated, Horst Grünbusch made an interesting statement: > The TOST-CI is simply the intersection of the one-sided CIs. Assuming this statement is true, then my understanding is that for any test statistic $\theta$ with a symmetric distribution, the intersection of the two one-sided $1-\alpha$ confidence intervals should be identical to the two-sided $1-2\alpha$ confidence interval. I would like to ask: - Is the cited statement by Horst Grünbusch true? - For which test statistics is the two-sided confidence interval not identical to the intersection of the one-sided confidence intervals (i.e. the TOST-CI)? - Can one give an intuition why the TOST-CI would "in most cases be tighter", as indicated by Minitab?
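For the simple normal-mean case, the intersection claim is easy to check numerically (a stdlib-only sketch; `est` and `se` are made-up numbers):

```python
from statistics import NormalDist

alpha, est, se = 0.05, 1.2, 0.4
z = NormalDist().inv_cdf  # standard normal quantile function

# two-sided (1 - 2*alpha) interval: est ± z_{1-alpha} * se
two_sided = (est - z(1 - alpha) * se, est + z(1 - alpha) * se)

# one-sided (1 - alpha) intervals and their intersection
lower_ci = (est - z(1 - alpha) * se, float("inf"))   # lower bound only
upper_ci = (float("-inf"), est + z(1 - alpha) * se)  # upper bound only
intersection = (max(lower_ci[0], upper_ci[0]), min(lower_ci[1], upper_ci[1]))

print(intersection == two_sided)  # True: both use the same z_{1-alpha} quantile
```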
Is the Intersection of two one-sided confidence intervals equivalent to the two-sided confidence interval?
CC BY-SA 4.0
null
2023-04-12T14:26:24.220
2023-04-12T14:26:24.220
null
null
183460
[ "hypothesis-testing", "equivalence", "tost" ]
612692
1
null
null
1
38
I'm trying to find the variance of an ARMA(1,1) model of the following form: $$y_t=a_0+a_1y_{t-1}+\epsilon_t+b_1\epsilon_{t-1}$$ where $\epsilon_t$ is a white noise process. I have found it more convenient to write this model in terms of $\epsilon_t$'s: Writing using lag operators: $$(1-a_1L)y_t=a_0+(1+b_1L)\epsilon_t$$ Re-arranging: $$y_t=\frac{1}{1-a_1L}(a_0+(1+b_1L)\epsilon_t)$$ $$=\sum^\infty_{j=0}a_1^jL^j(a_0+(1+b_1L)\epsilon_t) $$ $$=\sum^\infty_{j=0}a_1^ja_0+\sum^\infty_{j=0}a_1^jL^j(\epsilon_t+b_1\epsilon_{t-1})$$ $$=\frac{a_0}{1-a_1}+\sum^\infty_{j=0}a_1^j(\epsilon_{t-j}+b_1\epsilon_{t-1-j})$$ Taking Variance, $$Var(y_t)=Var(\frac{a_0}{1-a_1}+\sum^\infty_{j=0}a_1^j(\epsilon_{t-j}+b_1\epsilon_{t-1-j}))$$ $$\sum^\infty_{j=0}a_1^jVar(\epsilon_{t-j})+b_1\sum^\infty_{j=0}a_1^jVar(\epsilon_{t-j-1})+2Cov(\sum^\infty_{j=0}a_1^j\epsilon_{t-j}, b_1\sum^\infty_{j=0}a_1^j\epsilon_{t-1-j})$$ $$=\sum^\infty_{j=0}a_1^j\sigma^2+b_1\sum^\infty_{j=0}a_1^j\sigma^2+\sum^\infty_{j=0}a_1^j.2Cov(\epsilon_{t-j}, b_1\epsilon_{t-1-j})$$ $$\frac{\sigma^2}{1-a_1}+\frac{b_1\sigma^2}{1-a_1}=\frac{\sigma^2(1+b_1)}{1-a_1}$$ This answer seems intuitive; however, it differs from [ARMA (1,1) Variance Calculation](https://stats.stackexchange.com/questions/196994/arma-1-1-variance-calculation) @Neeraaj $$Var(y_t)=\frac{(1+2a_1 b_1 + b_1^2)\sigma^2}{1-a_1^2}$$
Variance of ARMA (1,1) Model
CC BY-SA 4.0
null
2023-04-12T14:31:23.817
2023-04-12T15:07:59.457
null
null
300124
[ "time-series", "variance", "arima" ]
612693
1
null
null
1
40
I am fairly comfortable with Bayesian hierarchical regression models, but I am new to panel data analysis. As someone from the social sciences, I have found that the majority of resources on panel data come from econometrics, which can be confusing. The terminology, particularly the distinction between 'Fixed' and 'Random' effects models, differs from what I am used to. And yes, I've read several threads on this website detailing the difference. In my view, panel data is well-suited for 'Hierarchical' regression models because each individual is measured at least twice, making it logical to introduce varying intercepts to the model. For example, in [this question](https://stats.stackexchange.com/questions/238214/how-exactly-does-a-random-effects-model-in-econometrics-relate-to-mixed-models), I would assume the following model: $$y_{it} = \beta X_{it} + u_i + \epsilon_{it}$$ where $u_i$ represents the varying intercept across people (although I would probably express $u_i$ as $a_i$, but that's beside the point). However, I have come across materials that suggest that if $u_i$ is correlated with observed predictors, this model should not be used (although I still don't fully understand why). But I don't see how this assumption is any different from clustering on countries or any other hierarchical structure. It is an incredibly strong assumption that probably does not hold, so is everyone who uses hierarchical regression models simply wrong?
Bayesian Hierarchical Regression Models for Panel Data
CC BY-SA 4.0
null
2023-04-12T14:36:48.383
2023-04-12T14:36:48.383
null
null
219593
[ "regression", "panel-data", "multilevel-analysis", "hierarchical-bayesian" ]
612694
1
612888
null
3
59
I am fitting a function to data. I want to tell whether the fit is good or not. Consider this example (which is actually my data): [](https://i.stack.imgur.com/frtUu.png) Despite the definition of 'fit is good' being totally ambiguous, most humans will agree that the fit in the plot is reasonable. On the other hand, the 'bad fit example' shows a case in which most humans will agree that this fit is not good. As a human, I am capable of performing such a 'statistical eye test' to tell whether the fit is good by looking at the plot. Now I want to automate this process, because I have tons of data sets and fits and simply cannot look at each of them individually. I am using a chi squared test, but it seems to be very sensitive and is always rejecting all the fits, no matter what significance I choose, even though the fits are 'not that bad'. For example, a chi square test with a significance of 1e-10 rejected the fit from the plot above, which is not what I want, as it looks 'reasonably good' to me. So my specific question is: What kind of test or procedure is usually done to filter between 'decent fits' and 'bad fits'? This question is a follow up of [this other question](https://stats.stackexchange.com/questions/612670/chi-squared-for-goodnes-of-fit-test-always-rejects-my-fits).
How to determine whether a fit is reasonable
CC BY-SA 4.0
null
2023-04-12T14:44:40.117
2023-04-14T05:24:27.390
2023-04-13T06:51:48.077
313385
313385
[ "goodness-of-fit", "fitting" ]
612695
1
null
null
0
32
I have a measured signal with additive noise that is sampled from a Rician distribution originally. For processing I divide the signal by the baseline measurement (first few points), and then end up taking the log of this signal to yield my final vector, y. I want to then use maximum likelihood to estimate the most likely set of parameters from my data. I know the model that maps parameters to to y (it's an integral equation). My question is: Is it possible to calculate the maximum likelihood without an analytical form of the likelihood (and hence log-likelihood)? The reason is the noise pdf is quite complex. Any help or ideas would be greatly appreciated
Maximum likelihood estimation without analytical form for noise?
CC BY-SA 4.0
null
2023-04-12T14:50:35.977
2023-04-12T16:06:01.340
2023-04-12T16:06:01.340
60403
60403
[ "maximum-likelihood" ]
612696
2
null
611280
2
null
It seems that $X_{ij}(t)$ should be a 3D array with dimensions ($25$, $k=3$, $10$). Also, if $\sigma$ is the dispersion parameter, I would think it should be multiplied by the square root of the difference between the discrete $t_r$. Working with arrays of dimension > 2 is somewhat cumbersome in R since it lacks native broadcasting, but this is my attempt at generating a simulated $X_{ij}(t)$: ``` k <- 3L n <- 10L r <- 25L t <- seq(0, 1, length.out = r) g <- 1L # subgroup sigma <- seq(0.2, 5, 0.8) m <- array(rep(t*(1 - t), k), c(r, k)) # m(t)_i e <- array(replicate(k*n, cumsum(c(0, rnorm(r - 1L, 0, sigma[g]*sqrt(diff(t)))))), c(r, k, n)) # e(t)_ij X <- array(c(e) + c(m), c(r, k, n)) # X(t)_ij dim(X) #> [1] 25 3 10 # or to get X with dimensions c(3, 10, 25) X <- aperm(X, c(2:3, 1L)) # X_ij(t) dim(X) #> [1] 3 10 25 ```
null
CC BY-SA 4.0
null
2023-04-12T14:52:46.467
2023-04-12T15:42:52.610
2023-04-12T15:42:52.610
214015
214015
null
612697
2
null
610231
1
null
A slightly different approach I believe would work here is - Transform your correlation matrices using the inverse hyperbolic tangent (artanh) function (the Fisher transformation) - Calculate the difference, $\Delta$, between your two artanh transformed correlation matrices. - On the artanh scale, the variance of each correlation coefficient is $\frac{1}{N-3}$, so its standard error is $\frac{1}{\sqrt{N-3}}$. The standard error of $\Delta$ is therefore $\sqrt{se_1^2 + se_2^2}$. - Divide $\Delta$ by its standard error to obtain z-statistics. - Any z statistic greater than $2$ in absolute value is statistically significant at $p < .05$ (although adjustment for multiple comparisons is probably appropriate here).
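The steps above can be sketched in a few lines of `numpy` (the matrices and sample sizes `n1`, `n2` are illustrative; on the artanh scale each matrix contributes variance $1/(N-3)$, so the SE of the difference is $\sqrt{1/(n_1-3)+1/(n_2-3)}$):

```python
import numpy as np

def corr_diff_z(R1, R2, n1, n2):
    """z-statistics for elementwise differences of two correlation matrices."""
    with np.errstate(divide="ignore", invalid="ignore"):
        z1, z2 = np.arctanh(R1), np.arctanh(R2)        # Fisher transformation
        se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # SE of the difference
        return (z1 - z2) / se  # diagonal entries (r = 1) become nan; ignore them

R1 = np.array([[1.0, 0.60], [0.60, 1.0]])
R2 = np.array([[1.0, 0.30], [0.30, 1.0]])
z = corr_diff_z(R1, R2, n1=100, n2=100)
print(abs(z[0, 1]) > 2)  # True: significant at p < .05 before any correction
```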
null
CC BY-SA 4.0
null
2023-04-12T14:57:13.753
2023-04-12T14:57:13.753
null
null
42952
null
612698
2
null
612542
0
null
I think this can be chalked up to overfitting. While x2 doesn't actually contribute to the data-generating process, its noise is sometimes more useful than x1 for fitting the training data, so xgboost splits on x2 reasonably often, and gain and SHAP therefore correctly identify it as important to the model's predictions.
null
CC BY-SA 4.0
null
2023-04-12T14:58:14.003
2023-04-12T14:58:14.003
null
null
232706
null
612699
1
612789
null
3
95
Due to the inequality $2\sqrt{xy}\leq x+y$, the geometric mean is always closer to the smaller value than the arithmetic mean. In my situation, I need a "mean" that is closer to the larger value, so I thought of simply "flipping" the geometric mean $g(x,y)$ at the arithmetic mean $a(x,y)$:

$$a(x,y) + \Big(a(x,y)-g(x,y)\Big) = x+y-\sqrt{xy}$$

The result is always between the arithmetic mean and the larger value. Is there some special name for this "mean"?
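A quick numeric check (toy Python, arbitrary positive inputs; not needed for the question itself) confirms that $x+y-\sqrt{xy}$ always lies between the arithmetic mean and the larger value:

```python
import math
import random

# For positive x, y, the "flipped" mean x + y - sqrt(x*y)
# should lie between (x + y)/2 and max(x, y).
def flipped_mean(x, y):
    return x + y - math.sqrt(x * y)

random.seed(1)
for _ in range(10000):
    x = random.uniform(0.01, 100.0)
    y = random.uniform(0.01, 100.0)
    m = flipped_mean(x, y)
    assert (x + y) / 2 - 1e-9 <= m <= max(x, y) + 1e-9

print(flipped_mean(4.0, 9.0))  # 7.0, vs arithmetic mean 6.5 and geometric mean 6.0
```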
Geometric mean "flipped" at arithmetic mean?
CC BY-SA 4.0
null
2023-04-12T15:04:44.853
2023-04-26T13:19:19.000
null
null
244807
[ "mean", "geometric-mean" ]
612700
2
null
612692
0
null
Two errors in your solution:

- As @Henry pointed out in the comment, you forgot to square the coefficients when you took them out of the variance. Your second line after "Taking Variance" should be
$$\sum^\infty_{j=0}a_1^{2j}Var(\epsilon_{t-j})+b_1^2\sum^\infty_{j=0}a_1^{2j}Var(\epsilon_{t-j-1}) + \ldots$$
- In the same expression, the covariance contains some non-zero terms. We have:
$$\mathrm{Cov}\left(\sum^\infty_{j=0}a_1^j\epsilon_{t-j},\ b_1\sum^\infty_{j=0}a_1^j\epsilon_{t-1-j}\right) = \sum^\infty_{j=1} \mathrm{Cov}\left(a_1^j\epsilon_{t-j},\ b_1a_1^{j-1}\epsilon_{t-j}\right)= \sum^\infty_{j=1} b_1 a_1^{2j-1}\sigma^2$$
This gives you the missing $a_1b_1$ term.
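Summing the corrected variance and covariance terms gives the standard ARMA(1,1) variance, $\sigma^2(1 + 2a_1b_1 + b_1^2)/(1 - a_1^2)$. A quick simulation check (not part of the original derivation; the parameter values are arbitrary):

```python
import numpy as np

# Simulate y_t = a1*y_{t-1} + eps_t + b1*eps_{t-1} and compare the
# sample variance against sigma^2*(1 + 2*a1*b1 + b1^2)/(1 - a1^2).
rng = np.random.default_rng(0)
a1, b1, sigma = 0.5, 0.3, 1.0
n = 200_000

eps = rng.normal(0.0, sigma, n)
y = np.empty(n)
y[0] = eps[0]
for t in range(1, n):
    y[t] = a1 * y[t - 1] + eps[t] + b1 * eps[t - 1]

theoretical = sigma**2 * (1 + 2 * a1 * b1 + b1**2) / (1 - a1**2)
print(np.var(y[1000:]), theoretical)  # the two should agree closely
```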
null
CC BY-SA 4.0
null
2023-04-12T15:07:59.457
2023-04-12T15:07:59.457
null
null
238285
null
612701
1
null
null
0
27
I have been looking at the notebooks provided by GPy for coregionalized regression. The notebook is [here](https://bigaidream.gitbooks.io/subsets_ml_cookbook/content/bayes/gp/coregionalized_regression_gpy.html#only-mean-icm). From what I see, you have two inputs and two outputs: $X_1$, which is related to $Y_1$, and $X_2$, which is used to model $Y_2$. Is there any way to make a coregionalized model that uses both inputs $X_1$ and $X_2$ to predict both $Y_1$ and $Y_2$ simultaneously? That is, so I can in a way have $Y_{i} = f(X_{1}, X_{2})$.

I've tried playing with kernels, but it doesn't work well. It seems that, in a way, they are assuming independence of the input space, since you have to indicate which task the input relates to. According to what I understand from the notebook, I would have $Y_{1} = f(X_{1})$ and $Y_{2} = f(X_{2})$, but with $cov(Y_1, Y_{2}) \neq 0$ since there's a task covariance. e.g.:

```
newX = np.arange(100, 110)[:, None]
newX = np.hstack([newX, np.ones_like(newX)])
print(newX)
```
Multi-Task Gaussian Process Regression that uses the whole input space to predict
CC BY-SA 4.0
null
2023-04-12T15:23:01.980
2023-04-12T15:28:31.117
2023-04-12T15:28:31.117
261349
261349
[ "machine-learning", "bayesian", "nonparametric", "gaussian-process" ]
612702
2
null
612161
1
null
Until a more optimal answer is provided, here's a solution based on the information that I've found in a "GLM blog" by [Matthias Dörig](https://www.datascienceblog.net/post/machine-learning/interpreting_generalized_linear_models/): The glmmTMB() function allows one to extract the "response", "working" and "pearson" residuals of a considered model, but (to my knowledge) not the "deviance" residuals. This is unfortunate, as the "Residual deviance" of a GLM is simply the sum of the squared "deviance" residuals. Using the same walleye data, I'll start by using the glm() function with family=poisson, as the solution relies on transforming the "response" residuals into "deviance" residuals based on an error-specific function that was provided by Dr. Dörig for the Poisson case.

```
summary(m.walleye.p<-glm(count~age,family=poisson,data=walleye))

Call:
glm(formula = count ~ age, family = poisson, data = walleye)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-5.0251  -1.4546  -0.1779   0.8031   7.0199  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  5.45450    0.08303   65.70   <2e-16 ***
age         -0.47326    0.02485  -19.05   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 851.04  on 14  degrees of freedom
Residual deviance: 111.96  on 13  degrees of freedom
AIC: 169.06

Number of Fisher Scoring iterations: 5
```

When using glmmTMB() to model the same data in an identical manner, we aim to obtain a Residual deviance of 111.96. Not displaying the output, this is coded as follows, using .P instead of .p to differentiate it from the first GLM:

```
summary(m.walleye.P<-glmmTMB(count~age,family=poisson,data=walleye))
```

The "response" residuals correspond to the simplest form, i.e.
the difference between the observed and predicted values:

```
observed<-walleye$count
predicted<-fitted(m.walleye.P)
resid<-observed-predicted
```

The "response" residuals obtained (i.e., the resid object) are identical to those:

```
residuals_response<-residuals(m.walleye.P,type="response")
resid == residuals_response
   1    2    3    4    5    6    7    8    9   10   11   12   13   14   15 
TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE 
```

The function to transform the "response" to "deviance" residuals for family=poisson is based on the "observed" and "predicted" objects. Note that the function needs to be modified according to the considered error structure, the Poisson family representing (I believe) the simplest case when analyzing count data:

```
Poisson.dev<-function(y, mu) 2*(y*log(ifelse(y == 0, 1, y/mu))-(y-mu))
residuals.deviance<-sqrt(Poisson.dev(observed,predicted))*ifelse(observed>predicted,1,-1)
```

The Residual deviance is finally obtained as:

```
sum(residuals.deviance^2)
[1] 111.9586
```

It's the correct value. Here's a last check to make sure that we would get the same result for m.walleye.p from the "deviance" residuals (see this [link](https://stats.stackexchange.com/questions/237702/comparing-models-using-the-deviance-and-log-likelihood-ratio-tests)) that can be directly extracted from a glm() object:

```
sum(residuals(m.walleye.p,type="deviance")^2)
[1] 111.9586
```

It's the same value as:

```
m.walleye.p$deviance
[1] 111.9586
```

And the Null deviance would be:

```
m.walleye.p$null.deviance
[1] 851.0357
```

So the deviance explained (D2) would be:

```
100*(1-m.walleye.p$deviance/m.walleye.p$null.deviance)
[1] 86.84443
```

Although this value is pretty high, this model is completely inadequate. The count data are over-dispersed and thus, the required equi-dispersion (variance = mean) for a Poisson model is not respected.
To be adequate, about 95% of the residuals should be found within the hnp simulated envelope below if the distributional assumptions of the Poisson family were respected, which is clearly not the case here, as 80% are found outside:

```
library(hnp)
hnp(m.walleye.p,resid.type="pearson",how.many.out=TRUE,paint=TRUE)
Poisson model 
Total points: 15 
Points out of envelope: 12 ( 80 %) 
```

[](https://i.stack.imgur.com/7ITJX.png)

So, in the end, contrasting model adequacy (hnp) and model approximated explanatory power (D2) is valuable for more reliable statistical inferences. For glmmTMB, one also needs to run the null counterpart of the model (i.e., ~ 1), as indicated by @Sal Mangiafico and @Ben Bolker, to obtain the Null deviance and then calculate D2 from these values. Maybe at some point the glmmTMB package will allow one to obtain these deviance-related metrics directly, but the solution here is a bit too long to apply, and I'm afraid that the required functions for other families, such as the Generalized Poisson (family=genpois), the mean-parameterized Conway-Maxwell-Poisson (family=compois), or family=nbinom1, may be too difficult to code under this proposed approach.
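For what it's worth, the Poisson deviance-residual computation above translates directly to other languages. Here is a Python sketch with toy observed/fitted values (not the walleye data), mirroring the Poisson.dev function and its sign convention:

```python
import numpy as np

# Signed Poisson deviance residuals: sign(y - mu) * sqrt(2*(y*log(y/mu) - (y - mu))),
# with the y*log(y/mu) term taken as 0 when y = 0, as in the R code above.
def poisson_dev_resid(y, mu):
    term = np.where(y == 0, 0.0, y * np.log(np.where(y == 0, 1.0, y / mu)))
    d = 2.0 * (term - (y - mu))
    sign = np.where(y > mu, 1.0, -1.0)  # same convention as ifelse(observed>predicted,1,-1)
    return sign * np.sqrt(d)

# Toy observed counts and fitted means, made up for illustration.
y = np.array([0.0, 3.0, 5.0, 2.0, 8.0])
mu = np.array([1.0, 2.5, 4.0, 3.0, 6.0])

resid = poisson_dev_resid(y, mu)
total = float(np.sum(resid**2))  # residual deviance for these toy values
print(round(total, 4))
```

Summing the squared residuals recovers the residual deviance, exactly as sum(residuals.deviance^2) does in the R solution.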
null
CC BY-SA 4.0
null
2023-04-12T15:29:59.803
2023-04-24T11:58:49.330
2023-04-24T11:58:49.330
338493
338493
null