Id stringlengths 1 6 | PostTypeId stringclasses 7 values | AcceptedAnswerId stringlengths 1 6 ⌀ | ParentId stringlengths 1 6 ⌀ | Score stringlengths 1 4 | ViewCount stringlengths 1 7 ⌀ | Body stringlengths 0 38.7k | Title stringlengths 15 150 ⌀ | ContentLicense stringclasses 3 values | FavoriteCount stringclasses 3 values | CreationDate stringlengths 23 23 | LastActivityDate stringlengths 23 23 | LastEditDate stringlengths 23 23 ⌀ | LastEditorUserId stringlengths 1 6 ⌀ | OwnerUserId stringlengths 1 6 ⌀ | Tags list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
611480 | 1 | null | null | 0 | 44 | I am dealing with a binary endogenous variable and thus trying to use a 2SLS model with Probit correction in the preliminary ("0th") stage, a procedure described in Wooldridge (2002; p.623, procedure 18.1).
Below I describe my question in detail using an example (I have cross-posted this question on [Statalist](https://www.statalist.org/forums/forum/general-stata-discussion/general/1708068-2-stage-least-squares-2sls-with-0th-stage-probit-correction-additional-fixed-effects-in-2sls)).
I have a yearly panel dataset of firms, and half of the firms in the sample are treated. The treatment status does not change over time within firm (i.e., the sample consists of only never-treated and always-treated ones), so firm fixed effects cannot be included in regression models. The dependent variable is profit that a firm made in a given year. I am interested in estimating the impact of the treatment on firm profit. Industry (e.g., 4 digit SIC of firm) and year fixed effects are included, along with some other covariates.
Assume that I have a "good" instrumental variable (IV) that satisfies both the relevance and exclusion restrictions. But this IV is time-invariant within industry; that is, the value of the IV is the same for all firms that belong to the same industry (i.e., those that have the same SIC) regardless of time. This means that, in a (conventional) 2SLS approach, industry fixed effects cannot be included in the first stage (because the IV would be absorbed by industry fixed effects). My understanding is that the same covariates and fixed effects should be used in both the first and second stages of 2SLS; if correct, industry fixed effects should not be included in the second stage of 2SLS either (this seems to be the case when I look at Stata commands like ivreg or ivreghdfe).
Now, consider a 2SLS with 0th-stage Probit correction approach. Under this approach, the treatment is regressed on the IV (and on other covariates and fixed effects) using a Probit model in the 0th stage. For the same reason described above (i.e., the IV is time-invariant within industry), industry fixed effects cannot be included in this 0th-stage Probit model. The predicted probabilities of getting treated, estimated from this Probit model, are then used as a (new) IV, and this new IV enters the subsequent 2SLS models just like in a conventional 2SLS approach.
So, my question is whether it would be "wrong" to include industry fixed effects in the 2SLS part of this "2SLS with Probit correction" approach. Now that I have a "better" IV (i.e., predicted probabilities of getting treated based on the Probit model) that does vary over time within industry, technically I can include industry fixed effects in the first stage (and also in the second stage) of 2SLS. The new IV is also strongly relevant for the treatment (i.e., the Cragg-Donald Wald F statistic is sufficiently large in the first stage of 2SLS with industry fixed effects).
I can't think of any obvious reasons why it would be wrong to include industry fixed effects in the 2SLS part of this approach (i.e., 2SLS with Probit correction), but I am not sure about the statistical/econometric implications of doing so. I have looked at some studies that used this approach (e.g., Adams et al. 2009, Cameron et al. 1988, Dubin and McFadden 1984) and they don't seem to emphasize that the same set of fixed effects should be included in all three models (i.e., the Probit model and both stages of 2SLS).
References:
- Adams, R., Almeida, H. and Ferreira, D., 2009. Understanding the relationship between founder–CEOs and firm performance. Journal of empirical Finance, 16(1), pp.136-150.
- Cameron, A.C., Trivedi, P.K., Milne, F. and Piggott, J., 1988. A microeconometric model of the demand for health care and health insurance in Australia. The Review of economic studies, 55(1), pp.85-106.
- Dubin, J.A. and McFadden, D.L., 1984. An econometric analysis of residential electric appliance holdings and consumption. Econometrica: Journal of the Econometric Society, pp.345-362.
- Wooldridge, J.M., 2002. Econometric Analysis of Cross Section and Panel Data. MIT Press, Cambridge, MA.
| 2-stage least squares (2SLS) with 0th-stage Probit correction: Additional fixed effects in 2SLS | CC BY-SA 4.0 | null | 2023-04-01T15:15:46.133 | 2023-04-06T02:23:24.987 | 2023-04-06T02:23:24.987 | 11887 | 229910 | [
"instrumental-variables",
"probit",
"2sls"
] |
611481 | 2 | null | 611445 | 0 | null | I don't know what's going on with `relevel()` here (and the answer is probably a software-specific matter off-topic on this site). There's a more generally useful way to get the standard errors of combinations of coefficient estimates in a Cox or other regression model, which doesn't require refitting.
The formula for the [variance of a weighted sum of variables](https://en.wikipedia.org/wiki/Variance#Weighted_sum_of_variables) is the key. I find it least error-prone to work in the original coefficient scale and leave the transformation to hazard ratios until the end. For your difference between `Trt2` and `Trt1` with corresponding coefficients $\beta_2$ and $\beta_1$, the variance estimate is:
$$\text{Var}(\beta_2-\beta_1) = \text{Var} (\beta_2) + \text{Var} (\beta_1) -2\text{Cov}(\beta_1,\beta_2),$$
where $\text{Cov}$ is the covariance between the estimates. The `vcov()` function applied to the model output returns the variance-covariance matrix for the coefficient estimates. The individual variances will be the squares of the `se(coef)` values, but you'll need to use `vcov()` to get the covariance.
Once you get the variance for the difference, the square root is the standard error. The point estimate of $(\beta_2-\beta_1)$, plus/minus 1.96 times the standard error, gives the standard 95% confidence interval in the coefficient scale for a Cox model. Exponentiation puts those into the hazard-ratio scale.
Working with the single original model, instead of trying to refit with releveled multi-level factors, is a general strategy that extends better to more complex models than this. Post-modeling tools, for example those in the R [car](https://cran.r-project.org/package=car) and [emmeans](https://cran.r-project.org/package=emmeans) packages, are designed to help with this.
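For concreteness, here is a minimal sketch of the calculation on simulated data (the dataset, treatment labels, and effect sizes are hypothetical, not the poster's):

```r
# Sketch: SE of the Trt2 - Trt1 contrast in a Cox model via vcov(),
# without refitting with releveled factors. Data are simulated.
library(survival)
set.seed(42)
d <- data.frame(
  time   = rexp(300),
  status = rbinom(300, 1, 0.7),
  trt    = factor(sample(c("Trt0", "Trt1", "Trt2"), 300, replace = TRUE))
)
fit <- coxph(Surv(time, status) ~ trt, data = d)

b <- coef(fit)   # b["trtTrt1"] is beta_1, b["trtTrt2"] is beta_2
V <- vcov(fit)   # variance-covariance matrix of the coefficient estimates

est <- unname(b["trtTrt2"] - b["trtTrt1"])
se  <- sqrt(V["trtTrt2", "trtTrt2"] + V["trtTrt1", "trtTrt1"] -
            2 * V["trtTrt2", "trtTrt1"])

# 95% CI on the coefficient scale, exponentiated to the hazard-ratio scale
exp(c(HR = est, lower = est - 1.96 * se, upper = est + 1.96 * se))
```

Refitting the model with `Trt1` as the reference level gives the same estimate and standard error, which is a handy sanity check.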
| null | CC BY-SA 4.0 | null | 2023-04-01T15:15:51.920 | 2023-04-01T15:15:51.920 | null | null | 28500 | null |
611482 | 1 | null | null | 0 | 29 | given the following regression model $y_t = x_{1,t}' \beta_1+x_{2,t}' \beta_2 +e_t$ where $E(e_t|x_t)=0$ for all $t$ $x_{1,t}$ is $k_1$ vector $x_{2,t}$ is a $k_2$ vector with $k_1,k_2>1$ and $x_t= [x_{1,t} x_{2,t}]$
we want to test the null hypothesis $H_0:\beta_2=0$ which test statistics would you use? derive its asymptotic distribution
basically we studied 2 test, the t-test and the F-test
I know that we use the t-test when we want to test a single restriction on $\beta_2$ , that is $\beta_2$ should be a scalar, and we use the F-test when we want to test for several restrictions, that is $\beta_2$ is a vector and we want to test that all the elements are jointly significant( equal to $0$ in this case)
my doubt is that from the text I cannot understand if $\beta_2$ is a scalar (single restriction) or a vector(several restrictions)
can someone help in the choice of the right test?
| choice of the right hypothesis test | CC BY-SA 4.0 | null | 2023-04-01T15:23:36.617 | 2023-04-01T15:34:02.470 | 2023-04-01T15:34:02.470 | 362147 | 362147 | [
"hypothesis-testing",
"t-test",
"f-test"
] |
611485 | 2 | null | 604961 | 1 | null | >
I want to ask whether "relaxing" the demand for high entropy is acceptable or not.
Your purpose in identifying the classes provides the best answer to that question.
Decide on a measure of model performance that makes the most sense for your application. There's a potential problem with entropy, as that term seems to get used in different ways among implementations of latent class analysis (LCA). The `poLCA` package you use reports the standard Shannon definition of entropy calculated over the cells $c$ of the cross-classification table, $-\sum_c p_c \ln p_c$, with an upper limit equal to the log of the number of cells. The cutoffs you cite seem to be based on values re-scaled into a range of 0 to 1 like `Mplus` reports. I'm not an expert in LCA, but I suspect that there might be a better measure of model performance than entropy.
It would seem that a measure of the stability of class assignments among resampled data sets would be most useful for a clinical application. I don't think that you can get that from your single model fit, but you could get that by repeating your entire modeling process on multiple bootstrapped samples of your data. That mimics the process of taking your original sample from the underlying population. A process that tends to co-identify the same cases into the same classes would be most reliable.
Save the predictions of each bootstrap-based model on the full data set. See how well the multiple models tend to match the same cases into the same classes. Then decide if that performance is good enough for your purpose. If so, report the results of your original model along with your measure of model-process performance.
In terms of application to survival analysis, LCA might provide a type of data reduction, unsupervised with respect to survival outcomes, to convert a large number of `X` categorical variables into a smaller number of predictors for modeling an event-limited survival data set. Frank Harrell discusses similar data-reduction approaches in [Section 4.7 of Regression Modeling Strategies](https://hbiostat.org/rmsc/multivar.html#sec-multivar-data-reduction). Using the modal class assignment would seem to be the most appropriate choice. If you have enough events to include all of your variables into the model, however, the pre-assignment to classes would seem to lead to a loss of precision.
| null | CC BY-SA 4.0 | null | 2023-04-01T16:29:16.933 | 2023-04-01T16:29:16.933 | null | null | 28500 | null |
611486 | 2 | null | 611473 | 1 | null | It seems like you have a probability $p_1$ of making one kind of mistake, which incurs a cost that is distributed according to $X_1\sim N(1,2)$, and you have another probability $p_2$ of making a mistake that incurs a cost that is distributed according to $X_2\sim N(1.5,3)$. (How you incur negative cost upon making a mistake is a mystery to me, but I can roll with it.
Then your distribution of costs incurred would be $p_1X_1+p_2X_2$.
However, I do not think your probabilities are sensitivity and specificity, which condition on the outcome that you do not know. (If you knew it, you would not be making mistakes in predicting it.) You might find yourself interested in the probabilities of misclassification given each type of classification (related to PPV and NPV). Better still might be to use the predicted probabilities, particularly if those predictions are calibrated (they might not be) and reflect the true probability of event occurrence.
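As a rough illustration, here is a Monte Carlo sketch of that cost expression. The values of $p_1$ and $p_2$ are hypothetical, and I read the second parameter of each normal as a variance:

```r
# Monte Carlo sketch of the cost distribution p1*X1 + p2*X2
# (hypothetical p1, p2; N(mu, sigma^2) parameterization assumed)
set.seed(123)
p1 <- 0.10; p2 <- 0.05
n  <- 1e5
X1 <- rnorm(n, mean = 1,   sd = sqrt(2))
X2 <- rnorm(n, mean = 1.5, sd = sqrt(3))
cost <- p1 * X1 + p2 * X2
c(mean = mean(cost), sd = sd(cost))  # mean should be near p1*1 + p2*1.5
```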
| null | CC BY-SA 4.0 | null | 2023-04-01T16:36:56.233 | 2023-04-01T16:51:00.570 | 2023-04-01T16:51:00.570 | 247274 | 247274 | null |
611487 | 2 | null | 611453 | 1 | null | A latent unit $z_i$ having low covariance with $x$ doesn't mean that $z_i$ is constant, it means it varies independently of $x$, i.e. $p(x,z_i) = p(x)p(z_i)\ ^{**}$. Simple proof:
$$\text{Covar}(x, z_i)
= \int_{x,z} \!\!\!p(x,z) (x-\bar{x})(z - \bar{z})
\ \overset{**}{=}
\int_{x,z} \!\!\!p(x)p(z) (x-\bar{x})(z - \bar{z}) \\\qquad\quad\ \ =
\int_{x} p(x)(x-\bar{x}) \int_zp(z) (z - \bar{z}) =
(\bar{x}-\bar{x}) (\bar{z} - \bar{z}) = 0$$
So low covariance (not variance) identifies $z$ components that are independent of $x$, i.e. carry no information, or are "dead" from a useful representation perspective.
The same measure is used when considering posterior collapse (e.g. [in Variational Autoencoders (VAEs)](https://arxiv.org/pdf/1901.05534.pdf)), which refers to when all dimensions of $z$ are independent of $x$.
| null | CC BY-SA 4.0 | null | 2023-04-01T16:55:12.487 | 2023-04-15T20:17:51.460 | 2023-04-15T20:17:51.460 | 307905 | 307905 | null |
611488 | 1 | null | null | 0 | 18 | I am attempting to recreate the results of the paper written by King, Stock and Watson in 1995: Temporal instability in the unemployment inflation relationship.
The paper estimates a VAR model with 12 lags using data on inflation and unemployment. They mention that they obtain in-sample RMSEs for forecasts at the 6-, 12- and 24-month horizons. How is this possible, as I was under the impression that in-sample forecasts can be made only when calculating one-step-ahead forecasts?
| Obtaining 12 month ahead in sample RMSEs | CC BY-SA 4.0 | null | 2023-04-01T16:57:05.033 | 2023-04-01T16:57:05.033 | null | null | 384690 | [
"forecasting",
"vector-autoregression",
"rms"
] |
611490 | 1 | 611492 | null | 1 | 109 | As the title stated, $X_n$ converges to $X$ in distribution. $Y_n$ converges to $Y$ in probability. Does $(X_n, Y_n)$ converges to $(X,Y)$ in distribution?
If not, if we make the condition so that $Y_n$ converge to $Y$ almost surely, does $(X_n, Y_n)$ converge in distribution to $(X,Y)$?
| $X_n$ converges to $X$ in distribution. $Y_n$ converges to $Y$ in probability. Does $(X_n, Y_n)$ converge to $(X,Y)$ in distribution? | CC BY-SA 4.0 | null | 2023-04-01T17:12:36.200 | 2023-04-01T21:52:00.787 | 2023-04-01T21:52:00.787 | 362671 | 260660 | [
"probability",
"random-variable",
"convergence"
] |
611491 | 1 | null | null | 0 | 44 | Popular python libraries for topic modeling like gensim or sklearn allow us to predict the topic-distribution for an unseen document, but I have a few questions on what's going on under the hood. I've read a few responses about ["folding-in"](https://stats.stackexchange.com/questions/9315/topic-prediction-using-latent-dirichlet-allocation/9479#9479), but the [Blei et al.](https://www.jmlr.org/papers/volume3/blei03a/blei03a.pdf) LDA paper the authors state.
" An alternative approach is the “folding-in” heuristic suggested by Hofmann (1999), where one ignores the p(z|d) parameters and refits p(z|dnew). Note that this gives the pLSI model an unfair advantage by allowing it to refit k −1 parameters to the test data.
LDA suffers from neither of these problems. As in pLSI, each document can exhibit a different proportion of underlying topics. However, LDA can easily assign probability to a new document; no heuristics are needed for a new document to be endowed with a different set of topic proportions than were associated with documents in the training corpus."
This makes me think folding-in may not be the right way to predict topics for LDA. Furthermore, I'm curious about how we could predict topic mixtures for documents with only access to the topic-word distribution $\Phi$. Essentially, I want the document-topic mixture $\theta$, so we need to estimate $p(\theta_z | d, \Phi)$ for each topic $z$ for an unseen document $d$.
I might be overthinking it. Can we sample from $\Phi$ for each word in $d$ until each $\theta_z$ converges?
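For what it's worth, one common fold-in style procedure is a collapsed Gibbs sampler over the topic assignments of the new document with $\Phi$ held fixed. A rough sketch (toy $\Phi$, hypothetical $\alpha$, word ids made up):

```r
# Sketch: infer theta for one unseen document given a fixed K x V
# topic-word matrix Phi, via collapsed Gibbs over topic assignments z.
set.seed(3)
K <- 3; V <- 6; alpha <- 0.1
Phi <- matrix(rgamma(K * V, 1), K, V)
Phi <- Phi / rowSums(Phi)           # rows are topic-word distributions
doc <- c(1, 1, 2, 5, 5, 5, 6)       # word ids of the unseen document
z   <- sample(K, length(doc), replace = TRUE)
for (iter in 1:200) {
  for (i in seq_along(doc)) {
    nz <- tabulate(z[-i], K)        # topic counts excluding word i
    p  <- (nz + alpha) * Phi[, doc[i]]
    z[i] <- sample(K, 1, prob = p)  # resample topic of word i
  }
}
theta <- (tabulate(z, K) + alpha) / (length(doc) + K * alpha)
theta                               # estimated document-topic mixture
```

Averaging `theta` over several post-burn-in sweeps (rather than taking the last one) would give a less noisy estimate.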
| LDA Document Topic Distribution Prediction for Unseen Document | CC BY-SA 4.0 | null | 2023-04-01T17:13:40.727 | 2023-04-01T17:13:40.727 | null | null | 378969 | [
"bayesian",
"clustering",
"sampling",
"topic-models",
"latent-dirichlet-alloc"
] |
611492 | 2 | null | 611490 | 4 | null | Counterexample: Let $X, Y \text{ i.i.d.} \sim N(0, 1)$, and let $(X_n, Y_n) \equiv (-Y, Y)$ for all $n$. It is easy to see that $X_n$ converges to $X$ in distribution, $Y_n$ converges to $Y$ almost surely. But $X_n + Y_n = 0$ does not converge in distribution to $X + Y \sim N(0, 2)$, which implies that $(X_n, Y_n)$ couldn't converge to $(X, Y)$ in distribution (why?).
| null | CC BY-SA 4.0 | null | 2023-04-01T17:38:43.667 | 2023-04-01T17:38:43.667 | null | null | 20519 | null |
611493 | 2 | null | 314782 | 1 | null | Here is probabilistic proof to this old problem.
Let $(X_n:n\in\mathbb{N})$ be an i.i.d sequence of exponential random variables with parameter $\theta>0$ ($\mu_{X_1}(dx)=\theta e^{-\theta x}\mathbb{1}_{(0,\infty)}(x)\,dx$). Define
$$W_n=\frac{X_1}{X_1+(X_2+\ldots + X_{n+1})}$$
As $X_1$ has a $\operatorname{Gamma}(1,\theta)$ distribution and $X_2+\ldots+X_{n+1}$ has a $\operatorname{Gamma}(n,\theta)$ distribution, $W_n$
has distribution $\operatorname{Beta}(1,n)$. By the law of large numbers
$$nW_n=\frac{X_1}{\tfrac1n X_1+\frac{1}{n}(X_2+\ldots + X_{n+1})}\xrightarrow{n\rightarrow\infty}\frac{X_1}{0+1/\theta}=\theta X_1$$
Notice that $\theta X_1\sim\operatorname{Exp}(1)$.
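A quick simulation check (not part of the proof) that $nW_n$ behaves like an $\operatorname{Exp}(1)$ draw for large $n$; the values of $n$ and the number of replications are arbitrary:

```r
# For W_n ~ Beta(1, n), n*W_n should be approximately Exp(1) for large n.
set.seed(7)
n    <- 500
reps <- 2e4
W    <- rbeta(reps, shape1 = 1, shape2 = n)
nW   <- n * W
c(mean = mean(nW), var = var(nW))   # both should be close to 1
```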
| null | CC BY-SA 4.0 | null | 2023-04-01T17:41:19.210 | 2023-04-01T18:55:10.740 | 2023-04-01T18:55:10.740 | 309747 | 309747 | null |
611494 | 1 | null | null | 0 | 38 | I was reading [this](https://www.bbvaresearch.com/wp-content/uploads/2014/09/WP14-26_Financial-Inclusion1.pdf) paper when I stumbled upon two-staged PCA; apparently it divides indicators into sub-indices because empirical evidence supports that PCA is biased towards the weights of indicators which are highly correlated with each. I was wondering how to make sense of the step by step process and how to perform it in Stata.
| How to perform two-stage PCA? | CC BY-SA 4.0 | null | 2023-04-01T17:49:18.243 | 2023-04-01T19:20:01.863 | 2023-04-01T19:20:01.863 | 22047 | 382735 | [
"pca",
"factor-analysis"
] |
611495 | 1 | null | null | 0 | 14 | I want to evaluate factors influencing the time to degree for college students in their master's degree with panel data. The dependent variable is time to degree in months (reference: student is still studying). However, I must respect the time to college dropout as a competing event. The data is in person-month (long) format with time-varying and time-constant independent variables. I have observed a cohort that started in the winter semester of 2017/18 for six semesters. The standard period of study is four semesters. There are no graduations within the first 12 months (first two semesters) because students usually graduate in the third, fourth, or fifth semester. However, students can drop out at any time.
How can I handle such data in an estimation? I have estimated a piecewise constant model in Stata, and the first 12 months are automatically discarded - because there are no graduations. The Cox model yields similar results. Is this problematic?
What would be alternatives? I have considered defining a long first interval and then using the monthly data (first two semesters; months 13, 14, ...). I have also thought about starting process time with the third semester (13th month) - however, I have many students who have responded to the questionnaire in the first or second semester - and indicated graduation later on but have not responded to the questionnaire in the third semester. I do not want to discard these observations.
What would you recommend me to do?
| Time intervals without events in event history models with competing risks (e.g., Cox, piecewise constant, logit) | CC BY-SA 4.0 | null | 2023-04-01T18:01:03.077 | 2023-04-02T09:54:45.443 | 2023-04-02T09:54:45.443 | 22047 | 384674 | [
"time-series",
"survival",
"panel-data",
"cox-model",
"interval-censoring"
] |
611496 | 2 | null | 611462 | 0 | null | >
any identical set of my explanatory variables can result in multiple values of the target variable.
That's to be expected in a Poisson model. A standard Poisson regression models the log of the average number of counts as a function of a linear predictor, which is based on covariate values and regression coefficients. But the actual observed values vary about that mean, as the variance of a Poisson-distributed variable equals its mean.
For example, say that the linear predictor for one set of predictor values gives a mean of 1 count. Then only about 37% of observations would have a count value of 1. The same fraction of observations would have a value of 0, and about 18% would have a value of 2. You can explore this behavior yourself with the `dpois()` function in R. For example:
```
print(dpois(0:5,lambda=1),digits=2)
# [1] 0.3679 0.3679 0.1839 0.0613 0.0153 0.0031
```
gives the probability of finding each of 0 through 5 counts when the mean count is 1.
Similarly, different sets of predictor values can result in the same number of observed counts in many cases. Say that a second set of values gives a predicted mean of 2 counts. There's a lot of overlap in the distribution of count values with what's seen with a mean of 1 count.
```
print(dpois(0:5,lambda=2),digits=2)
# [1] 0.135 0.271 0.271 0.180 0.090 0.036
```
You might, however, need to go beyond a simple Poisson model if the dispersion around model predictions differs substantially from the strict "variance equals mean" assumption of a Poisson model. A "quasi-Poisson" or a negative-binomial model can handle that situation while respecting the count-based nature of your outcome values.
| null | CC BY-SA 4.0 | null | 2023-04-01T18:06:00.673 | 2023-04-01T18:06:00.673 | null | null | 28500 | null |
611497 | 1 | null | null | 0 | 8 | Let $\mathcal{O}$ (for observed) and $\mathcal{E}$ (for expected) be two $D$-variate multinomial distributions, such that $\sum_{d=1}^{D}\mathcal{O}_{d}=1$ and $\sum_{d=1}^{D}\mathcal{E}_{d}=1$. The G-test of goodness of fit is defined as follows:
$$G(O,E)=2\sum_{d=1}^{D}O_{d}\log(\frac{O_{d}}{E_{d}}) \tag{1}$$
where $O$ and $E$ are realizations of $\mathcal{O}$ and $\mathcal{E}$ such that $\sum O = \sum E = N$.
My question: Assume that I can repeatedly draw $N$ i.i.d. samples from each distribution. For finite $N$, the G-test statistic should vary for different repetitions due to random fluctuations. Is there an analytical equation (approximate solutions are acceptable) to estimate the mean and variance (in the limit of infinitely many repetitions) of the G-test?
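For context, the quantity I am after can at least be estimated by brute force. A Monte Carlo sketch (hypothetical $D$ and $N$, and with $E$ held at fixed null expected counts rather than resampled, which is only one variant of the setup above):

```r
# Repeatedly draw O ~ Multinomial(N, E_probs) and look at mean/var of G.
set.seed(1)
D <- 4; N <- 1000; reps <- 5000
E_probs  <- rep(1 / D, D)
E_counts <- N * E_probs
G <- replicate(reps, {
  O <- as.vector(rmultinom(1, size = N, prob = E_probs))
  terms <- ifelse(O > 0, O * log(O / E_counts), 0)  # 0*log(0) := 0
  2 * sum(terms)
})
# Under the null, G is approximately chi^2_{D-1}: mean D-1, variance 2(D-1)
c(mean = mean(G), var = var(G))
```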
---
While I am not sure about how to solve this problem directly, I have some ideas on how to reformulate it: The mean and variance of $O$ and $E$ for a sample size $N$ are given as (see [Wikipedia](https://en.wikipedia.org/wiki/Multinomial_distribution#Properties)):
$$\mu_{O} = N\mathcal{O} \tag{2}$$
$$\Sigma_{O}=\begin{cases}
N\mathcal{O}_{i}(1-\mathcal{O}_{i}) & \text{for the $i$th entry along the diagonal} \\
-N\mathcal{O}_{i}\mathcal{O}_{j} & \text{for entries in row $i$ and column $j$}.
\end{cases} \tag{3}$$
with equivalent expressions (omitted for brevity) for $E$. I suspect that with these statistical moments, we could exploit the relationship between the [KL-divergence and the G-test](https://en.wikipedia.org/wiki/G-test#Relation_to_Kullback%E2%80%93Leibler_divergence) to estimate the KL-divergence between the multivariate Gaussian approximation to $O$ and $E$ instead. However, I have also not found a solution for the variance for sample-approximated KL divergences.
Do you have any suggestions on how to solve this issue?
| Mean and variance of the G-test of goodness of fit? | CC BY-SA 4.0 | null | 2023-04-01T18:39:13.773 | 2023-04-01T18:39:13.773 | null | null | 191317 | [
"variance",
"multinomial-distribution",
"likelihood-ratio",
"kullback-leibler"
] |
611498 | 2 | null | 611423 | 1 | null | You are correct about oversampling in survey design. If you want to read more about it, another useful search terms is "stratified sampling" (which is itself another term that survey stats and ML use in different ways!).
To the best of my knowledge you are right about oversampling in ML too. However, others may be able to answer that better than I can.
Finally, I don't know the history of why oversampling in ML came to be used the way it is, but I doubt that it came from survey statistics. If the people who introduced oversampling to ML had a background in statistical survey methods, they probably would have introduced sampling weights instead. Their background would have primed them to realize that ML-style oversampling effectively makes you think you have lower standard errors than you really do.
Instead, I suspect it was developed independently by ML practitioners with limited statistical background. My guess is that someone thought, "I want to deal with class imbalance somehow, and it's easy to write a few lines of code to oversample the minority class," and they just didn't have the right background to think of the downsides or know of the alternatives. Even when people do realize the risks of oversampling and do think of using weights, it's not always obvious how to incorporate sampling weights sensibly into a new ML algorithm.
(Edit: About that last comment---there are at least 3 common types of weights used in statistics. [Thomas Lumley summarizes them as](https://notstatschat.rbind.io/2020/08/04/weights-in-statistics/) precision weights ("this row in the dataset is actually the sample mean of 10 different units"), frequency weights ("we've compressed our dataset by noting that 10 different units all had identical values on each variable"), and sampling weights ("this unit was sampled from the population with probability 1/10"). The different types of weights often give similar point estimates or model predictions, but not always; and they give very different standard errors. Precision weights and frequency weights seem like a more natural fit for the ML world ("we acquired this large dataset, without using any particular deliberate sampling design, and then we compressed it to take up less disk space"). So when ML algorithms are adapted to use weights at all, I suspect the priority is often on using precision or frequency weights, not sampling weights.)
| null | CC BY-SA 4.0 | null | 2023-04-01T18:40:31.613 | 2023-04-02T04:07:52.757 | 2023-04-02T04:07:52.757 | 17414 | 17414 | null |
611499 | 1 | 611521 | null | 0 | 61 | In section 4.7.7 of Introduction to Statistical Learning (version 2), the authors code regression contrasts where the last level of a predictor sums to the remaining levels.
My question is, why doesn't the coefficient for this last level (e.g. mnth12) show up in R's summary output? The authors note that it's not reported. But where does this decision (by the R package authors) come from?
The last level of mnth for example has coefficient value 0.37 (negative of -0.37, the sum of the other coefficients of mnth).
```
library(ISLR2)
library(tidyverse)
attach(Bikeshare)
contrasts(Bikeshare$hr) = contr.sum(24)
contrasts(Bikeshare$mnth) = contr.sum(12)
mod.lm2 <- lm(
bikers ~ mnth + hr + workingday + temp + weathersit ,
data = Bikeshare
)
# Does not show mnth12
summary(mod.lm2)
# Its coefficient should be equal to:
summary(mod.lm2) |> broom::tidy() |> filter(term |>
str_detect("mnth")) |> pull(estimate) |> sum()
```
| Why is the last level not reported in R's `summary()`, if its coefficient is not 0? | CC BY-SA 4.0 | null | 2023-04-01T18:58:49.313 | 2023-04-02T22:23:03.780 | 2023-04-02T07:04:09.627 | 22047 | 275740 | [
"regression",
"categorical-encoding",
"contrasts"
] |
611501 | 2 | null | 611334 | 2 | null | "... one of the most fundamental and critical conceptual notions in inference is the distinction between probabilities and frequencies. A useful and concise phrase to hang our hats on is a philosophical one that probabilities are epistemological, while frequencies are ontological."
"The very same distinction applies to information and data. Information is an epistemological concept, while data is an ontological concept. Entropy, which is a quantitative measure of the amount of missing information, must perforce be an epistemological concept."
These two quotes are from David J Blower. If these quotes resonate with you, I can only advise you to read his books in his series "[Information Processing](https://www.amazon.com/s?k=david%20J%20blower&i=stripbooks-intl-ship&crid=48WI6E870ZSV&sprefix=david%20j%20blower%2Cstripbooks-intl-ship%2C181&ref=nb_sb_noss)."
Admittedly, a long read. But he treats your questions at a deeper, more fundamental level. Starting with Boolean Algebra, logic, probability manipulations, Bayes' theorem, etcetera.
Especially, Blower's discussion of Sir Harold Jeffreys' hang-up "animals with feathers" may be useful (Volume 1, p. 377).
| null | CC BY-SA 4.0 | null | 2023-04-01T19:38:44.357 | 2023-04-01T19:38:44.357 | null | null | 382413 | null |
611502 | 1 | null | null | 1 | 20 | I have two replicates (tanks) for 4 treatments (hyperthermia, hypoxia, combined hyperthermia and hypoxia, and control), with 10 randomly assigned organisms for each tank. The response variable is non-normal. I have tested for treatment effect through Kruskal-Wallis H and Mann-Whitney U test (Wilcoxon) as posthoc analysis to evaluate all pairwise comparisons. For these analyses, I considered all individuals for each treatment (aggregated individuals from the two replicate tanks).
I know the conditions between replicates are the same, and individuals were randomly distributed before the experiment. Reviewers posted the question of the potential blocking effect between the two replicates for each condition. Is there a non-parametric test I can use to demonstrate no blocking effect?
| Can I test blocking effect in non parametric test | CC BY-SA 4.0 | null | 2023-04-01T19:50:53.270 | 2023-04-05T22:31:04.527 | 2023-04-05T22:31:04.527 | 11887 | 384697 | [
"nonparametric",
"ordinal-data",
"post-hoc",
"blocking"
] |
611503 | 1 | 611510 | null | 2 | 80 | Given some probability space $(\Omega, \mathcal{F},\mathbb{P})$, $F_{n}$ some filtration, and $X_{n}$ some martingale with $\tau$ stopping time. We know some major things such as $\tau$ is finite almost surely, $\mathbb{E}|X_{\tau}|$ is finite, and $\lim_{n\to\infty}\int_{\tau>n}|X_{n}|d\mathbb{P}=0$. The goal being to show the big result of the Stopping Theorem, $\mathbb{E}X_{\tau}=\mathbb{E}X_{0}$
I have already proven that $\lim_{n\to\infty}\mathbb{E}(X_{\tau\wedge n})=\mathbb{E}(X_{\tau})$
I am stuck trying to show that $\mathbb{E}(X_{\tau \wedge n})=\mathbb{E}(X_{0})$.
A suggestion I found was to show that $\lim_{n\to\infty}\mathbb{E}|X_{\tau \wedge n}-X_{\tau}| = 0$, but I am not exactly sure what the punchline is.
Any suggestions?
| Piece of Optional Stopping Theorem | CC BY-SA 4.0 | null | 2023-04-01T19:54:40.103 | 2023-04-01T23:43:15.717 | null | null | 384698 | [
"probability",
"stochastic-processes"
] |
611504 | 2 | null | 611467 | 2 | null | It's seldom good to argue "always" or "never" in response to this type of question. Much depends on the details of the data, your understanding of the subject matter, and what tradeoffs you are willing to make. What follows are some links to guidance that you can apply as needed.
Stef van Buuren's [Flexible Imputation of Missing Data](https://stefvanbuuren.name/fimd/) (FIMD) is a reliable reference. In general, the book doesn't even distinguish "outcomes" from "predictors" in terms of imputation. All data are typically lumped together into a single matrix with indicators of which elements are missing in the original data.
The missing-at-random (MAR) assumption underlying multiple imputation is that missingness can depend on observed but not unobserved information. Thus it makes sense to use as much of the observed information as reasonable in multiple imputation. I don't see any reason completely to ignore whatever information the observed $Y$ values might provide for imputation, as in your Strategy 2. You might choose for some reason not to, based on your understanding of the subject matter, but that should be a conscious choice.
Even if you choose not to use the outcome $Y$ in the imputation, it's still generally important to perform multiple imputations, fit models on each of the imputed data sets, and then combine the results as explained in [Section 5](https://stefvanbuuren.name/fimd/ch-analysis.html). A quote from Rubin at the start of [Chapter 2](https://stefvanbuuren.name/fimd/ch-mi.html) puts it simply:
>
Imputing one value for a missing datum cannot be correct in general, because we don’t know what value to impute with certainty (if we did, it wouldn’t be missing).
As the [page you link](https://stats.stackexchange.com/q/226803/28500) notes, confidence intervals and the like won't be correct if you only use a single imputation.
[Section 2.7](https://stefvanbuuren.name/fimd/sec-when.html) of FIMD discusses some potential exceptions relating to this question.
First, your Strategy 3 never lets you use cases with a missing outcome value $Y$. That's OK if the only missing data are in $Y$. But that's not the situation that you are describing. Otherwise you run into the problems with complete-case analysis discussed in that Section and elsewhere:
>
The efficiency of complete-case analysis declines if $X$ contains missing values, which may result in inflated type II error rates.
Second, complete-case analysis is OK, in terms of bias, if the probability of missingness is independent of the outcome $Y$ in a regression model against predictors $X$:
>
The first special case occurs if the probability to be missing does not depend on $Y$. Under the assumption that the complete-data model is correct, the regression coefficients are free of bias...
That might argue for Strategy 1 if you can make that assumption about missingness! In that context, I think about single imputation with `rfImpute()` as extending complete-case analysis to cases with some missing $X$ values but observed $Y$ values that might otherwise be omitted. Insofar as the $Y$ values contain information about the values of the missing $X$ entries in the data matrix, that should tend to increase power (although it will again introduce problems with things like confidence intervals).
| null | CC BY-SA 4.0 | null | 2023-04-01T20:02:16.553 | 2023-04-01T20:02:16.553 | null | null | 28500 | null |
611505 | 2 | null | 424737 | 0 | null | Yes, it is called the Negative Predictive Value (NPV), which measures the accuracy of negative predictions. You can read more about it on Wikipedia: [https://en.wikipedia.org/wiki/Confusion_matrix](https://en.wikipedia.org/wiki/Confusion_matrix)
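For a concrete illustration, NPV can be computed directly from the counts in a confusion matrix (the numbers below are made up):

```python
# Negative Predictive Value = TN / (TN + FN), with illustrative counts
tn = 46  # true negatives
fn = 10  # false negatives
npv = tn / (tn + fn)
print(round(npv, 3))  # 0.821
```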
| null | CC BY-SA 4.0 | null | 2023-04-01T20:35:34.720 | 2023-04-01T20:36:43.233 | 2023-04-01T20:36:43.233 | 384704 | 384704 | null |
611507 | 1 | null | null | 0 | 8 | Could someone please help me understand this?
My data reports absenteeism information from a sample of schools. Variables in order of columns are ethnic background, sex, age (recorded as a categorical factor), learner status, and the number of days absent.
I need to explore potential models to predict the mean number of days absent from other variables in this data set (in order to derive my final model), along with a summary of the steps I took to decide on it. Some items to consider: two-way interactions (higher-order interactions may be ignored), likelihood ratio tests and/or information criteria, and overdispersion.
I've tried the following code but it doesn't seem right... All of my IVs are non-numeric; do I need to do anything about that so that R can give me the right final model?
```
absent = read.csv('/Users/Desktop/absent.csv', header=TRUE) # absent.csv
absent$Age <- factor(absent$Age)
fit.all = glm(Days ~ (Eth+Sex+Age+Lrn)^2, family=poisson, data=absent)
fit.final = step(fit.all, direction="backward")
```
My data looks like this
[](https://i.stack.imgur.com/UcFqy.png)
| Poisson with Categorical IV and Numeric DV | CC BY-SA 4.0 | null | 2023-04-01T20:53:07.967 | 2023-04-01T20:53:07.967 | null | null | 384706 | [
"categorical-data",
"poisson-regression"
] |
611508 | 2 | null | 112516 | 0 | null | Following Dan, the algebraic computation can actually be done, and it's not so complicated:
$$
P(Y=y) = \sum_{x=y}^n P(Y=y|X=x)P(X=x) = \sum_{x=y}^n \binom xy q^y(1-q)^{x-y} \binom nx p^x(1-p)^{n-x}
$$
expanding the binomial coefficients and cancelling $x!$:
$$ \frac{n!}{y!} p^y q^y \sum_{x=y}^n \frac 1{(x-y)!}(1-q)^{x-y}\frac 1{(n-x)!} p^{x-y}(1-p)^{n-x} $$
change variables: $t=x-y$:
$$ \frac{n!}{y!} (pq)^y \sum_{t=0}^{n-y} \frac 1{t!(n-y-t)!} (1-q)^tp^t(1-p)^{n-y-t}=\\ \frac{n!}{y!(n-y)!} (pq)^y \sum_{t=0}^{n-y} \frac {(n-y)!}{t!(n-y-t)!} (p-pq)^t(1-p)^{n-y-t}=\\ \binom ny (pq)^y (p-pq+1-p)^{n-y}=\binom ny (pq)^y (1-pq)^{n-y}$$
using the binomial theorem in the final equality. Hence $Y \sim \operatorname{Binomial}(n, pq)$.
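As a quick sanity check on the closed form $Y \sim \operatorname{Binomial}(n, pq)$, here is a small Monte Carlo sketch (the parameter values are arbitrary):

```python
import random

def draw_y(n, p, q, rng):
    # X ~ Binomial(n, p), then Y | X = x ~ Binomial(x, q)
    x = sum(rng.random() < p for _ in range(n))
    return sum(rng.random() < q for _ in range(x))

rng = random.Random(42)
n, p, q = 20, 0.6, 0.3
draws = [draw_y(n, p, q, rng) for _ in range(100_000)]
mean = sum(draws) / len(draws)
# Theory: E[Y] = n * p * q = 3.6; the simulated mean should be close
print(round(mean, 2))
```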
| null | CC BY-SA 4.0 | null | 2023-04-01T21:11:01.310 | 2023-04-01T21:11:01.310 | null | null | 384708 | null |
611510 | 2 | null | 611503 | 3 | null | To prove $E(X_{\tau \wedge n}) = E(X_0)$, you may show that $\{(X_n^*, \mathscr{F}_n): n = 1, 2, \ldots\}$ is a martingale. Here $X_n^* = X_{\tau \wedge n}$.
First, $X_n^*$ is measurable $\mathscr{F}_n$: since $[\tau > n] = \Omega - [\tau \leq n] \in \mathscr{F}_n$, for all $H \in \mathscr{R}^1$, it follows that
\begin{align*}
[X_n^* \in H] = \bigcup_{k = 0}^n[\tau = k, X_k \in H] \cup [\tau > n, X_n \in H] \in \mathscr{F}_n.
\end{align*}
Second, $E[|X_n^*|] < \infty$ for every $n$. This is because
\begin{align*}
E[|X_n^*|] =
\sum_{k = 0}^{n - 1}\int_{[\tau = k]}|X_k|dP +
\int_{[\tau \geq n]}|X_n|dP \leq \sum_{k = 0}^nE[|X_k|] < \infty.
\end{align*}
Third, for every $A \in \mathscr{F}_n$,
\begin{align*}
& \int_A X_n^* dP = \int_{A \cap [\tau > n]}X_n dP + \int_{A \cap [\tau \leq n]}X_\tau dP, \tag{1} \\
& \int_A X_{n + 1}^* dP = \int_{A \cap [\tau > n]}X_{n + 1}dP + \int_{A \cap [\tau \leq n]}X_\tau dP. \tag{2}
\end{align*}
Because $\{(X_n, \mathscr{F_n})\}$ is a martingale and $A \cap [\tau > n] \in \mathscr{F}_n$, the right-hand sides of $(1)$ and $(2)$ coincide, which shows that $E[X_{n + 1}^*|\mathscr{F}_n] = X_n^*$.
The above three items show that $\{(X_n^*, \mathscr{F}_n): n = 1, 2, \ldots\}$ is a martingale; it then follows that $E[X_n^*] = E[X_0^*] = E[X_0]$. This completes the proof.
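To see the conclusion numerically, here is a small simulation sketch using a fair $\pm 1$ random walk (a martingale) stopped on first hitting $\{-3, +3\}$ and truncated at step $n$; the empirical mean of $X_{\tau \wedge n}$ stays near $E[X_0] = 0$:

```python
import random

def mean_stopped_walk(n, n_sims, seed=1):
    # Fair +/-1 random walk started at 0, stopped at
    # tau = first hitting time of {-3, +3}, truncated at step n.
    rng = random.Random(seed)
    total = 0
    for _ in range(n_sims):
        x = 0
        for _ in range(n):
            if abs(x) == 3:   # tau already reached
                break
            x += 1 if rng.random() < 0.5 else -1
        total += x            # this is X_{tau ^ n}
    return total / n_sims

est = mean_stopped_walk(n=50, n_sims=50_000)
print(round(est, 2))  # close to E[X_0] = 0
```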
| null | CC BY-SA 4.0 | null | 2023-04-01T21:24:18.857 | 2023-04-01T23:43:15.717 | 2023-04-01T23:43:15.717 | 20519 | 20519 | null |
611511 | 1 | 611513 | null | 0 | 20 | I collected data among individuals:
I took note of screening results they had within the last year, collected a baseline screening, and another follow-up screening 3 weeks later.
I'm comparing the 3 screenings to determine changes (do more frequent screenings teach us anything?)
I’m thinking chi-squared? But not completely sure if that’s the best option.
| I don’t know if chi-squared is appropriate | CC BY-SA 4.0 | null | 2023-04-01T21:32:35.423 | 2023-04-01T21:50:59.207 | null | null | 384710 | [
"chi-squared-test"
] |
611512 | 1 | null | null | 0 | 14 | I have a dataset of log count data per day (from DNS logs) for over a year. I need to create an anomaly detection method to predict future anomalous counts and detect unusual log counts. Initially, I tried using Z-scores, but the precision and recall values were very poor. I next tried a Poisson regression approach, but there was a high number of false negatives. I set `variance = avg_count*exp(-avg_count)` to account for possible over-dispersion and used `outlier=if(count>(avg_count+3*sqrt(variance)),1,0)`.
Finally, I tried using the negative binomial distribution but, again, got a lot of false negatives. The dispersion coefficient I used for the negative binomial was `k=(avg_count^2)/(variance*(1+count/avg_count))` and I identified an outlier as `outlier=if(count>(avg_count+k*count),1,0)`.
What approach would be best, and what would be the best way to assess which method performs best? Or is there a best approach?
| Anomaly Detection for Log Counts | CC BY-SA 4.0 | null | 2023-04-01T21:36:06.443 | 2023-04-01T21:47:53.350 | 2023-04-01T21:47:53.350 | 362671 | 340452 | [
"anomaly-detection"
] |
611513 | 2 | null | 611511 | 0 | null | Chi-squared is used to test whether the frequencies predicted by a model are consistent with the frequencies observed. For example, suppose your model predicts that in a sample of 100 there should be 50 blue, 30 red, and 20 green flowers, and you observe 55 blue, 28 red, and 17 green flowers: does the observed data fit the model?
If you don't have expected frequencies and observed frequencies then chi-squared is the wrong test.
It seems that your hypothesis is not "does the data fit the model" but "have the results of the screening changed?" This suggests a t-test, or a non-parametric alternative such as the Mann-Whitney U test. But you must be clear on why you are choosing a particular test and why the conditions required by the test are satisfied by your data. You can't try different tests until you get one that gives you the answer you want!
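For the flower example above, the goodness-of-fit statistic is easy to compute by hand (or in a few lines of code):

```python
# Chi-squared goodness-of-fit: expected 50/30/20 vs observed 55/28/17
expected = [50, 30, 20]
observed = [55, 28, 17]
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi2, 3))  # 1.083, compared against a chi-squared with 2 df
```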
| null | CC BY-SA 4.0 | null | 2023-04-01T21:43:07.153 | 2023-04-01T21:50:59.207 | 2023-04-01T21:50:59.207 | 147572 | 147572 | null |
611514 | 2 | null | 599167 | 0 | null | In the graph produced by the code in the original question, the hope is for the plotted points to follow the line $y=x$. That is, we want the probabilities predicted by the model to match up with the true probabilities of event occurrence (a reasonable desire, I believe).
A standard way to check the deviation of predictions $\hat y$ and true values $y$ is with the $R^2$. Therefore, such a statistic might prove useful here.
$$
R^2=1-\left(\dfrac{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\bar y
\right)^2
}\right)
$$
If the predicted and actual probabilities match perfectly, your score will be a perfect $1$. As predictions deviate more and more from the true probability, the score will get worse and worse.
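A minimal sketch of that computation (the probabilities below are invented for illustration):

```python
# R^2 between true and model-predicted event probabilities (made-up values)
y = [0.10, 0.25, 0.50, 0.75, 0.90]       # true probabilities
y_hat = [0.12, 0.20, 0.55, 0.70, 0.85]   # predicted probabilities
y_bar = sum(y) / len(y)
ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
ss_tot = sum((a - y_bar) ** 2 for a in y)
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))  # 1 would be perfect calibration; lower is worse
```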
Another approach, granted, not derived from the graph, is that Brier score can be decomposed into measures of model calibration and discrimination (ability to distinguish between the categories), the latter of which might be of interest.
| null | CC BY-SA 4.0 | null | 2023-04-01T21:58:06.043 | 2023-04-01T21:58:06.043 | null | null | 247274 | null |
611517 | 1 | 611551 | null | 1 | 59 | Explaining my dataset: I have a univariate timeseries dataset of energy consumption. Each row of the dataset is a half-hourly record of the energy consumption of a consumer. So every `48` lines contain 1 full day of measurements of a single consumer.
So, for example, the first `48` rows (rows `1` to `48`) of the dataset contain measurements of `Day 1` of `Consumer 1`. The next `48` rows (rows `49` to `96`) contain the measurements of `Day 2` of `Consumer 1`, the following `48` (rows `97` to `144`) contain measurements of `Day 3` of `Consumer 1`, and so on... after `534` days the measurements of `Consumer 1` end, and the measurements of `Consumer 2` start at `Day 1`. The dataset contains around `6000` consumers and, following what has been stated before, it has `6000*48*534 ~= 154 million` rows.
I need to frame this timeseries as a multiclass classification problem. Each sample/row of the supervised dataset will contain 48 columns (1 day of data for a single consumer; hence a consumer will appear more than once in the supervised dataset).
Question: When should I normalize my data (let's say using MinMaxScaler from sklearn)? Before framing it as a supervised learning problem, or after? And why? Any additional information or a link to a paper so I can back this up would be great!
| Normalization of time series data for classification | CC BY-SA 4.0 | null | 2023-04-01T22:28:48.740 | 2023-04-02T10:34:14.207 | null | null | 346317 | [
"time-series",
"classification",
"normalization"
] |
611518 | 1 | null | null | 1 | 30 | I am wondering about the best approach to plotting what I want.
I have 4 dataframes of data (2 types of data with 2 replicates for each type). To this point, I've just been choosing the best replicate of each type and plotting that.
However, I think to be more accurate, I should include both replicates in each line. What is the best way to do this? Box plots? Error bars? Here is an example of the graph I have been creating.
[](https://i.stack.imgur.com/zYp0P.png)
Thanks.
| Best approach to a line plot (or other type) whose lines consist of an average of 2 Y values | CC BY-SA 4.0 | null | 2023-04-01T22:44:55.397 | 2023-04-01T22:44:55.397 | null | null | 384713 | [
"python",
"data-visualization",
"matplotlib"
] |
611519 | 1 | null | null | 0 | 24 | I am trying to understand the meaning of the intercept in my GLMM model summary. I get that it is the prediction when all predictors are 0. The reference levels can be specified so that this is meaningful, in case it is not obvious which level in the model is the reference/baseline/control level.
However, one of my predictor variables is days, so I get the predicted value for the reference at day 1. The incidence rate ratios (IRRs) of two predictor variables are above 1 on their own, but below 1 when they interact with days. Does this mean that the IRR for groups with these predictors is higher than for the control group on day 1, but that it decreases with time relative to the control on day 1? Is it possible to get a comparison to the control group's interaction with time as well?
I'm struggling to see how this is informative, when I am interested in the effect over time but the reference is just day 1.
| How to interpret intercept of mixed model with time | CC BY-SA 4.0 | null | 2023-04-01T22:55:05.873 | 2023-04-01T22:55:05.873 | null | null | 380763 | [
"glmm",
"intercept",
"incidence-rate-ratio"
] |
611520 | 1 | null | null | 2 | 46 | For some reason my model keeps showing up as a poor model when checking its accuracy through a confusion matrix and AUC/ROC. This is the model I stuck with after doing backward elimination.
this is the logistic output:
```
Call:
glm(formula = DEATH_EVENT ~ age + ejection_fraction + serum_sodium +
time, family = binomial(link = "logit"), data = train, control = list(trace = TRUE))
Deviance Residuals:
Min 1Q Median 3Q Max
-2.1760 -0.6161 -0.2273 0.4941 2.6827
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 15.741338 7.534348 2.089 0.03668 *
age 0.063767 0.018533 3.441 0.00058 ***
ejection_fraction -0.080520 0.019690 -4.089 4.33e-05 ***
serum_sodium -0.111499 0.053639 -2.079 0.03765 *
time -0.020543 0.003331 -6.167 6.95e-10 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
This is the confusion matrix output
```
glm.pred Survived Dead
0 46 10
1 5 14
```
The auc is showing up as 0.178
```
library(pROC)
library(ROCR)  # prediction() and performance() below come from ROCR, not pROC
# Calculate predicted probabilities for test set
glm.probs <- predict(glm9, newdata=test, type="response")
# Create prediction object for test set
pred <- prediction(glm.probs, test$DEATH_EVENT)
# Create ROC curve for test set
roc.perf <- performance(pred, measure = "tpr", x.measure = "fpr")
# Plot ROC curve for test set
plot(roc.perf, legacy.axes = TRUE, percent = TRUE,
xlab = "False Positive Percentage", ylab = "True Positive Percentage",
col = "#3182bd", lwd = 4, print.auc = TRUE)
# Add AUC to ROC curve
auc <- as.numeric(performance(pred, measure = "auc")@y.values)
text(x = 0.5, y = 0.3, labels = paste0("AUC = ", round(auc, 3)),
col = "black", cex = 1.5)
abline(a=0, b= 1)
```
Can someone help please? My project is due very soon and I can't get past this problem.
| Why is my significant pvalues model giving me a low AUC and ROC? | CC BY-SA 4.0 | null | 2023-04-01T23:04:15.293 | 2023-04-02T15:24:59.780 | null | null | 384715 | [
"r",
"regression",
"machine-learning",
"cross-validation",
"data-visualization"
] |
611521 | 2 | null | 611499 | 0 | null | The standard approach to include categorical variables in a linear regression model is to use one-hot encoding to replace this variable. That is, to add a binary variable for each level of the categorical variable.
If a categorical variable has $k$ levels, then one must use only $k-1$ binary variables, because the $k^{th}$ binary variable can be determined from the values of the other $k-1$ binary variables. The rationale is that using linearly-dependent variables in a regression model with an intercept introduces instability into the model.
So, if variables mnth1, mnth2, ..., mnth11 are already introduced into the model, then variable mnth12 should not be used.
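A bare-bones sketch of this $k-1$ dummy coding (the level names here are hypothetical; in practice tools such as R's `model.matrix()` or pandas' `get_dummies(drop_first=True)` do this automatically):

```python
def dummy_encode(values, levels):
    """Encode a k-level factor as k-1 indicators, with levels[0] as reference."""
    return [[1 if v == lev else 0 for lev in levels[1:]] for v in values]

levels = ["jan", "feb", "mar"]            # k = 3 levels for brevity
data = ["jan", "mar", "feb", "jan"]
print(dummy_encode(data, levels))
# jan (reference) -> [0, 0], feb -> [1, 0], mar -> [0, 1]
```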
| null | CC BY-SA 4.0 | null | 2023-04-01T23:06:55.427 | 2023-04-02T22:23:03.780 | 2023-04-02T22:23:03.780 | 44221 | 44221 | null |
611522 | 1 | null | null | 0 | 17 | I have a model where continuous variables X and Z affect M which in turn affects Y. That is, M mediates the effects of X and Z on Y. I need a statistical comparison of the mediation effects to verify whether the mediation of X's effect by M is stronger than the mediation of Z's effect by M.
My thoughts and questions:
(A) I couldn't find a model in Andrew Hayes' PROCESS models to fit my model. I think one way of doing this is using structural equations modeling. If I go this route, and perform a statistical test to compare the two path coefficients representing the mediation effects (one from X -> M, and the other from Z->M), it might work. However, what statistical test should I perform to compare the path coefficients, and how do I do that in SPSS/other?
(B) I think an alternative approach might be to conduct analysis in the manner in which PROCESS does. First, conduct the two regressions:
M = B0 + B1X + B2Z
Y = B00 + B3M
Second, generate a bootstrap interval for the difference of the two product terms representing the mediation effects (indirect effects), i.e., B3.B1 and B3.B2, and see if that bootstrap interval includes 0. [Of course, we would also have to generate bootstrap intervals for each of the product terms.]
Is this conceptually correct? If yes, how do I do it in practice using SPSS/other?
(C) Are there other, better ways of achieving my goals?
Thanks in advance for your help!
| Mediation model with two independent variables | CC BY-SA 4.0 | null | 2023-04-02T00:21:59.127 | 2023-04-02T00:21:59.127 | null | null | 327049 | [
"bootstrap",
"mediation"
] |
611523 | 1 | null | null | 2 | 38 | [](https://i.stack.imgur.com/asAmE.png)
- Are all terms in the first line (71) of the equation random variables or probability density functions? If they are probability density functions, is there a possibility of obtaining a value not equal to 1 when integrating the right-hand side of the equation after all calculations have been completed?
- Based on the answer given in line 72, it seems that all terms are considered as probability density functions. If so, is it possible to transform them into the probability density function of a Gaussian distribution?
- In $q(x_{t-1}|x_t,x_0)$, are $x_t$, $x_0$, and $x_{t-1}$ events or distributions?
I feel like I'm lacking some basic concepts in statistics. Can you help me, please?
| Are the terms in the diffusion model equation random variables or probability density functions? | CC BY-SA 4.0 | null | 2023-04-02T00:31:11.497 | 2023-05-27T18:29:27.693 | null | null | 384717 | [
"machine-learning",
"neural-networks",
"normal-distribution",
"density-function"
] |
611524 | 2 | null | 608440 | 0 | null | It means the same thing. I guess the authors are specifically referring to the effect of using "atrous" or "dilated" convolutions here; as far as I can tell, FOV is the same as the receptive field.
| null | CC BY-SA 4.0 | null | 2023-04-02T00:47:49.413 | 2023-04-02T00:47:49.413 | null | null | 26948 | null |
611525 | 1 | 611666 | null | 2 | 85 | Suppose I have data in R that looks like this. This data represents measurements of different patients over a period of time (discrete level):
```
df <- data.frame(patient_id = c(111,111,111, 111, 222, 222, 222),
year = c(2010, 2011, 2012, 2013, 2011, 2012, 2013),
gender = c("Male", "Male", "Male", "Male", "Female", "Female", "Female"),
weight = c(98, 97, 102, 105, 87, 81, 83),
state_at_year = c("healthy", "sick", "sicker", "sicker", "healthy", "sicker", "sicker"))
patient_id year gender weight state_at_year
1 111 2010 Male 98 healthy
2 111 2011 Male 97 sick
3 111 2012 Male 102 sicker
4 111 2013 Male 105 sicker
5 222 2011 Female 87 healthy
6 222 2012 Female 81 sicker
7 222 2013 Female 83 sicker
```
I am interested in modelling the effect of different patient characteristics on how they transition between different states. To accomplish this, I am thinking of using Discrete Time Markov Cohort Models. Specifically, I am thinking of using the approach provided here ([https://hesim-dev.github.io/hesim/articles/mlogit.html](https://hesim-dev.github.io/hesim/articles/mlogit.html)) in which:
- All rows of data are isolated in which the patient starts at state k = 1
- Then, a Multinomial Logistic Regression is fit on this dataset (note: since we are modelling the probability of transition based on the information we know BEFORE the transition, for any given row - the weight_end variable is never directly modelled)
- This process is repeated for all other "k" states (excluding recurrent states, e.g. "death")
- As a result, a series of Multinomial Logistic Regression Models is used to estimate the time-dependent transition probabilities for all states.
To reformat the data for Discrete Time Markov Cohort Models ([https://hesim-dev.github.io/hesim/articles/mlogit.html](https://hesim-dev.github.io/hesim/articles/mlogit.html)) - I would have to reformat the data in such a way, such that it represents transitions between states:
```
patient_id year_start year_end gender_start gender_end state_start state_end weight_start weight_end
1 111 2010 2011 Male Male healthy sick 98 97
2 111 2011 2012 Male Male sick sicker 97 102
3 111 2012 2013 Male Male sicker sicker 102 105
4 222 2011 2012 Female Female healthy sicker 87 81
5 222 2012 2013 Female Female sicker sicker 81 83
structure(list(patient_id = c(111, 111, 111, 222, 222), year_start = c(2010,
2011, 2012, 2011, 2012), year_end = c(2011, 2012, 2013, 2012,
2013), gender_start = c("Male", "Male", "Male", "Female", "Female"
), gender_end = c("Male", "Male", "Male", "Female", "Female"),
state_start = c("healthy", "sick", "sicker", "healthy", "sicker"
), state_end = c("sick", "sicker", "sicker", "sicker", "sicker"
), weight_start = c(98, 97, 102, 87, 81), weight_end = c(97,
102, 105, 81, 83)), row.names = c(1L, 2L, 3L, 4L, 5L), class = "data.frame")
```
It appears as though there is no way but to eliminate the last row of data for each patient, as it will be the last available transition for that patient. This means that we will be forced to lose one row of data for each patient.
In cases where the patient experiences an absorbing event (e.g. death), this is not a problem. However, in cases where the patient is "right censored" (i.e. has the event after the end of the study), there is nothing we can do to account for censoring other than removing the last row of data for each patient. We could try to use some imputation method or assume that the patient transitions to the same state they are currently in, but this is a risky process. As such, it seems like there is no option but to discard the last available row of data (i.e. the weight_end value occurring at the last row) for each patient and only keep all complete transitions for each patient.
Is my understanding of this correct?
| Censoring for Discrete Survival Data | CC BY-SA 4.0 | null | 2023-04-02T00:49:44.423 | 2023-04-04T20:16:39.763 | 2023-04-03T01:10:04.017 | 77179 | 77179 | [
"regression",
"probability",
"logistic",
"survival"
] |
611526 | 2 | null | 314782 | 1 | null | Another view: if $X_1, \ldots, X_n \text{ i.i.d.} \sim U(0, 1)$, then $X_{(1)} = \min(X_1, \ldots, X_n) \sim \text{Beta}(1, n)$. [More generally](https://en.wikipedia.org/wiki/Beta_distribution#Derived_from_other_distributions),
\begin{align}
X_{(i)} \sim \text{Beta}(i, n + 1 - i), i = 1, \ldots, n.
\end{align}
It then follows by (for fixed $x > 0$, when $n$ is sufficiently large, $x/n$ can be bounded above by $1$)
\begin{align*}
P[nX_{(1)} > x] = P[X_{(1)} > x/n] = \prod_{i = 1}^nP[X_i > x/n] =
\left(1 - \frac{x}{n}\right)^n \to e^{-x}
\end{align*}
that the claimed property holds.
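The limit can also be checked by simulation (a rough sketch; $n$ and the replication count are arbitrary):

```python
import random

rng = random.Random(7)
n, reps = 200, 50_000
# n * min(U_1, ..., U_n) is approximately Exp(1) for large n,
# so its sample mean should be close to 1
vals = [n * min(rng.random() for _ in range(n)) for _ in range(reps)]
mean = sum(vals) / reps
print(round(mean, 2))  # close to 1
```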
| null | CC BY-SA 4.0 | null | 2023-04-02T01:19:16.087 | 2023-04-02T01:19:16.087 | null | null | 20519 | null |
611527 | 1 | null | null | 0 | 21 | I have an assignment for my R course that has the following questions:
Given a dataset with the following variables:
- “. . . 1” - row counter variable; simply counts the number of rows.
- “sales_outlet_id” - one of three locations coded as 3, 5, or 8.
- “transaction_id” - id of a transaction purchase.
- “quantity” - quantity of order (1-8, but most are 1 or 2 except for a few outliers).
- “transaction_time” - time of a transaction purchase.
Answer the following questions:
- Do people buy more (i.e., quantity) items in one of the three locations (i.e., outlets) compared to the others?
I am unsure what type of analysis (ANOVA or regression) to use for this question and how I would go about doing it.
- Does the unit price, accounting for time of purchase transaction impact how many items customers buy?
Again I am unsure what analysis I could use here but also how I can add the time of purchase as I'm guessing it has to be recoded.
__
I have currently tried:
For Q1:
```
combined_data$sales_outlet_id_factor <- as.factor(combined_data$sales_outlet_id)
lm_results <- lm(quantity ~ sales_outlet_id_factor, data = combined_data)
summary(lm_results)
plot (lm_results)
```
For Q2:
```
model2 <- lm(quantity ~ unit_price + transaction_time, data = combined_data)
summary(model2)
```
| Which analysis to use: regression/ANOVA? | CC BY-SA 4.0 | null | 2023-04-01T14:35:29.503 | 2023-04-04T03:10:27.087 | 2023-04-04T03:10:27.087 | 350641 | null | [
"r",
"regression",
"anova"
] |
611529 | 1 | null | null | 0 | 15 | I am trying to predict a continuous time series using predictors which are discrete (they can be only one of 1,2,3,4). I found this [answer](https://stats.stackexchange.com/questions/24521/problem-in-discrete-valued-time-series-forecasting) but the target variable is also discrete in that case.
Can anyone assist me with this?
| Time Series Forecasting with Discrete Predictors | CC BY-SA 4.0 | null | 2023-04-02T03:15:05.303 | 2023-04-02T03:15:05.303 | null | null | 338145 | [
"time-series",
"forecasting"
] |
611531 | 1 | null | null | 0 | 59 | I'm starting to get a little confused about how R knows which variable is the main variable and which others are simply controls. For example, let's say there are three variables Y, X1, and X2, and I'm mainly interested in X1. I regress Y = constant + X1 + X2 + error. The function of X2 is to block any indirect effect that can confound the true relationship between Y and X1, such as in the diagram below. However, the coefficients and standard errors are probably going to be the exact same if we run Y = constant + X2 + X1 + error. So does that mean the coefficient that R calculates for X1 and X2 are already taking into account that both could influence each other regardless of what the main variable is?
[](https://i.stack.imgur.com/1Uwdo.png)
| How does R know which variable is the main variable of interest? | CC BY-SA 4.0 | null | 2023-04-02T04:55:47.597 | 2023-04-02T04:55:47.597 | null | null | 355204 | [
"regression",
"controlling-for-a-variable"
] |
611532 | 1 | null | null | 0 | 22 | I am trying to write a custom statespace.mlemodel for panel data. I have 2 observation equations and 3 state equations.
$$
y_{1,i,t+1} = \alpha + x_{1,i,t} + \phi_{1,1} x_{2,i,t} + \phi_{1,2} x_{3,i,t} + \varepsilon_{1,i,t+1}
$$
$$
y_{2,i,t+1} = x_{2,i,t} + \varepsilon_{2,i,t+1}
$$
The state equations are as follows:
$$
x_{1,i,t+1} = \delta_{1} x_{1,i,t} +\nu_{1,i,t+1}
$$
$$
x_{2,i,t+1} = \delta_{2} x_{2,i,t} +\nu_{2,i,t+1}
$$
$$
x_{3,i,t+1} = \delta_{3} x_{3,i,t} +\nu_{3,i,t+1}
$$
Where $y_{1,i,t}$ and $y_{2,i,t}$ are endogenous variables and, $x_{1,i,t}$,$x_{2,i,t}$ and $x_{3,i,t}$ are state variables.
I was able to write a custom model of the above for a single time series, but could not figure out how to apply it to panel data.
```
import numpy as np
import statsmodels.api as sm
from collections import OrderedDict
from statsmodels.tsa.statespace import initialization
from statsmodels.tsa.statespace.tools import (
    constrain_stationary_univariate, unconstrain_stationary_univariate)

class SSM2_nocov(sm.tsa.statespace.MLEModel):
def __init__(self, endog1: np.array, endog2: np.array):
        # keys renamed to match param_names / start_params below
        starting_values = {
            "omega1": 0,
            "omega2": 0.2,
            "omega3": 0.2,
            "c": 3,
            "gamma11": 0.5,
            "gamma12": 0.7,
            "phi": 0.9,
            "delta2": 0.5,
            "delta3": 0.3,
        }
super(SSM2_nocov,self).__init__(endog=np.c_[endog1,endog2], k_states=3, k_posdef=3)
# Initialize the matrices, we will update them in the update function
self['design']=np.array([[1,1,1],[0,1,0]])
self['transition'] = np.diag([1, 1, 1])
self['obs_intercept']=np.r_[1,0]
self['selection'] = np.diag([1, 1, 1]) # R=1
self['state_cov'] = np.diag([1, 1, 1]) # W
init=initialization.Initialization(self.k_states)
init.set((0,1),'diffuse')
init.set((1,3),'stationary')
self.ssm.initialize(init)
self.loglikelihood_burn = 2
self.position_dict= OrderedDict(
omega1=1,omega2=2,omega3=3,c=4,gamma11=5,gamma12=6,phi=7,delta2=8,delta3=9
)
self.initial_values=starting_values
self.positive_parameters = slice(0, 3)
@property
def param_names(self):
return list(self.position_dict.keys())
@property
def start_params(self):
params=np.r_[
self.initial_values['omega1'],
self.initial_values['omega2'],
self.initial_values['omega3'],
self.initial_values['c'],
self.initial_values['gamma11'],
self.initial_values['gamma12'],
self.initial_values['phi'],
self.initial_values['delta2'],
self.initial_values['delta3'],
]
return params
def transform_params(self, unconstrained):
constrained = unconstrained.copy()
constrained[self.positive_parameters] = constrained[self.positive_parameters]**2
constrained[4] = np.square(constrain_stationary_univariate(constrained[4:5]))
constrained[5] = np.square(constrain_stationary_univariate(constrained[5:6]))
constrained[7] = constrain_stationary_univariate(constrained[7:8])
constrained[8] = constrain_stationary_univariate(constrained[8:])
return constrained
def untransform_params(self, constrained):
unconstrained = constrained.copy()
unconstrained[self.positive_parameters] = unconstrained[self.positive_parameters]**0.5
unconstrained[4] = unconstrain_stationary_univariate(np.sqrt(constrained[4:5]))
unconstrained[5] = unconstrain_stationary_univariate(np.sqrt(constrained[5:6]))
unconstrained[7] = unconstrain_stationary_univariate(constrained[7:8])
unconstrained[8] = unconstrain_stationary_univariate(constrained[8:])
return unconstrained
def update(self, params,**kwargs):
params = super(SSM2_nocov,self).update(params,**kwargs)
# Define parameters of the State Space Model
self['design'] = np.array([[1,params[4],params[5]],[0,1,0]])
self['transition'] = np.array([[params[6],0,0],[0,params[7],0],[0,0,params[8]]])
self['obs_intercept',0,0]=params[3]
        self['state_cov']=np.diag([params[0]**2,params[1]**2,params[2]**2])
```
```
| Panel State Space Model in statespace.mlemodel in stats models | CC BY-SA 4.0 | null | 2023-04-02T06:25:34.427 | 2023-04-02T06:30:40.917 | 2023-04-02T06:30:40.917 | 384722 | 384722 | [
"panel-data",
"statsmodels",
"state-space-models"
] |
611533 | 1 | null | null | 0 | 7 | I have three groups, and I have a task that says: “Bifurcation parameter, angular frequency for each ROI, and the signs of interactions between each pair of ROIs should all be compared between different groups. And all the differences should be statistically tested.” Which statistical test can I perform? Please help.
X1, X2, ..., X23 represent 23 regions of the brain. Each row represents a separate region of interest and its interactions with other regions. For example:
[](https://i.stack.imgur.com/o8Q43.png)
where, for the first group:

| ROI | Amplitude | Omega |
|---|---|---|
| X1 | -0.0480 | 0.4428 |
| X2 | 2.9340 | -0.7211 |
| X3 | 1.6869 | 0.0128 |
| X4 | 0.0274 | -2.8547 |
| X5 | 0.4260 | -0.3601 |
| X6 | 0.4875 | -4.3028 |
| X7 | 5.5113 | -0.6893 |
| X8 | -0.0080 | 4.5450 |
| X9 | 0.0073 | 0.0683 |
| X10 | 0.5470 | -0.1217 |
| X11 | 0.0313 | -0.1290 |
| X12 | 6.5457 | -2.7924 |
| X13 | 0.0750 | 0.0990 |
| X14 | 8.4384e-09 | -3.4928 |
| X15 | 1.6016 | -0.6919 |
| X16 | 0.0454 | -0.1040 |
| X17 | 1.5372 | -0.0897 |
| X18 | 8.5307e-09 | -2.5032 |
| X19 | 5.5131 | 0.0427 |
| X20 | 4.6730 | 0.0581 |
| X21 | 5.8575e-08 | -1.0924 |
| X22 | 3.0111 | 0.1290 |
| X23 | -1.8731e-08 | -2.5229 |
Group 2 and Group 3 are in the same format.
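As one illustration of the kind of comparison involved (not a claim that this is the right test for your design): a single parameter can be compared across three independent groups without a normality assumption using the Kruskal–Wallis test. The values below are hypothetical, one value per subject per group:

```python
from scipy.stats import kruskal

# Hypothetical amplitude values for one ROI, one value per subject,
# in each of the three groups
group1 = [0.05, 0.12, 0.08, 0.20, 0.15]
group2 = [0.30, 0.25, 0.40, 0.35, 0.28]
group3 = [0.10, 0.18, 0.22, 0.09, 0.14]

# Null hypothesis: all three groups come from the same distribution
stat, p = kruskal(group1, group2, group3)
```

Whether this applies depends on whether you actually have several independent observations per group rather than one fitted value per ROI.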
| Which statistical test can I use to compare the amplitude, frequency and adjacency matrix of three different groups? | CC BY-SA 4.0 | null | 2023-04-02T07:18:56.373 | 2023-04-02T07:18:56.373 | null | null | 384724 | [
"mathematical-statistics",
"statistical-significance"
] |
611534 | 1 | null | null | 1 | 62 | I would like to know if there is any drawback to increasing `K` in `K-fold Cross-Validation`, apart from the computational one.
| Drawbacks of increasing K in k-fold Cross-Validation | CC BY-SA 4.0 | null | 2023-04-02T07:56:39.843 | 2023-04-02T12:15:48.050 | null | null | 146656 | [
"machine-learning",
"cross-validation",
"validation"
] |
611535 | 1 | null | null | 0 | 16 | I would like to compare two regression models using bootstrapping but I don't know how to do it.
I have two different univariate models (Cox regression) and I am using the concordance index (Harrell's C-index) to compare them. I obtain one index for each of the two models. Now, using bootstrapping, I would like to check that the difference between the two indices is not due to chance (by calculating a p-value).
I tried several methods, including this one: after bootstrapping (500 samples), I have 500 concordance indices for each model. Then I set the p-value as: p = sum( (c-index1 - c-index2) >= (boot1 - boot2) ) / 500
where c-index1 is the concordance index of the first model, c-index2 that of the second model, boot1 the mean of the bootstrapped indices of the first model and boot2 the mean of the bootstrapped indices of the second model.
I don't think this method is the right one...thanks for your help.
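For what it's worth, one common bootstrap comparison is to centre the bootstrap distribution of the *difference* at zero and ask how often a difference at least as extreme as the observed one occurs. A sketch with placeholder arrays (`boot1`/`boot2` stand in for the 500 bootstrapped indices of each model; in practice each pair comes from refitting both models on the same bootstrap sample):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder bootstrap replicates of the two concordance indices
boot1 = rng.normal(0.72, 0.02, 500)
boot2 = rng.normal(0.68, 0.02, 500)

obs_diff = 0.72 - 0.68           # observed c-index1 - c-index2
boot_diff = boot1 - boot2        # bootstrap distribution of the difference

# Centre the bootstrap differences at zero to approximate the null,
# then count how often a difference at least as extreme as the
# observed one occurs (two-sided p-value)
centered = boot_diff - boot_diff.mean()
p_value = np.mean(np.abs(centered) >= abs(obs_diff))
```

This is only a sketch of one convention; it assumes the two indices are computed on the same bootstrap samples so their correlation is preserved in `boot_diff`.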
| How to use bootstrapping to compare regression models | CC BY-SA 4.0 | null | 2023-04-02T08:07:32.277 | 2023-04-02T08:07:32.277 | null | null | 342569 | [
"regression",
"bootstrap"
] |
611536 | 1 | null | null | 0 | 15 | In PCA, the covariance matrix is often used as the input. For LDA, however, this seems much less common. Is there a reason why the covariance matrix is not routinely used as the input for LDA?
| Using covariance matrix as data for linear discriminant analysis? | CC BY-SA 4.0 | null | 2023-04-02T08:10:41.377 | 2023-04-02T08:10:41.377 | null | null | 275488 | [
"pca",
"covariance",
"covariance-matrix",
"discriminant-analysis"
] |
611537 | 2 | null | 611425 | 0 | null | If your understanding of the subject matter suggests that the association of one predictor with outcome depends on the value of a different predictor, then you should include an interaction between those predictors in the model. Consider that for each of the two-way interactions among `length`, `treatment`, and `population`, and for the 3-way interaction among all of them. Include all predictors and interactions in a single model, as binomial regression has an omitted-variable bias if [any outcome-associated predictor is omitted](https://stats.stackexchange.com/q/113766/28500).
In a comment, you suggest an additional model:
```
mod.Srv <- glmer(surv ~ len*Population + Treatment + (0+len|Plot), family='binomial', data=ribes_noNew)
```
That doesn't allow for the effect of `Treatment` to differ depending on either `length` or `Population`. Consider whether that makes sense. If it does, then the fixed effects are close to what you want.
You need to consider how to model the continuous predictor `length`. This model assumes that length is linearly associated with the log-odds of one-year survival. Things are seldom that simple. It's usually wise to fit such a continuous variable flexibly, for example with a regression spline.
For specifying the random effects in the model, consult this site's [lmer cheat sheet](https://stats.stackexchange.com/q/13166/28500) and Ben Bolker's [GLMM FAQ page](https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#model-definition).
The models in the original question have some problems. With only 2 levels of `population`, there isn't much to be gained by including it as a random effect, and it's already included as a fixed effect. Typically, random effects are most useful when there are on the order of 6 or more levels. Also, I don't think that the way you had specified `Plot:Population` accomplishes what you might have thought. If you think that the association of `population` with survival depends on the Plot, allow for a random slope of `population` among Plots.
In the model from your comment and copied above, the `(0 + len|Plot)` specification is risky. That doesn't allow for random intercepts among Plots, yet the intercept is the thing that is most likely to vary randomly among Plots: the intercept is the baseline log-odds of survival.
When the model is finally fitted to the data, it can be difficult to interpret the initial summary of a model with lots of interactions. In the default coding in R, each interaction coefficient is an estimated difference from what you would predict based solely on the lower-level coefficients. Furthermore, the value of a lower-level coefficient can depend on the coding of the predictors with which it interacts.
So don't worry about the individual coefficients on their own. Evaluate together all coefficients involving a predictor in a single test, for example with the `Anova()` function in the R [car package](https://cran.r-project.org/package=car). Let post-modeling tools like those in the [emmeans package](https://cran.r-project.org/package=emmeans) display predictions for particular predictor combinations of interest.
| null | CC BY-SA 4.0 | null | 2023-04-02T08:20:12.177 | 2023-04-02T08:20:12.177 | null | null | 28500 | null |
611540 | 2 | null | 611495 | 1 | null | A lack of events simply means that there is no evidence for a hazard of an event during the time period of interest. In a continuous-time Cox model, there is 0 hazard between event times. One might consider that an even more extreme example of what you describe. Furthermore, a Cox model doesn't directly include time in the calculation, only the ordering of events in time. Any survival curves over time are estimated from the model results, based on the observed event times.
A continuous-time model like a Cox model isn't appropriate here. I don't use Stata so I'm not sure how it implements the "piecewise constant" model. I suspect that it is also at its base a continuous-time model.
What you want is a true discrete-time model. That's a set of binomial models (for individual event types) or multinomial models (for both event types together), evaluated for each time period (semester) along with the covariate values in place during that period. Depending on the nature of the questionnaires, when questionnaire data are missing for an individual in a semester you could either carry forward prior responses or use multiple imputation to build a model that includes all individuals while accounting for the potential errors in the estimated questionnaire responses.
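One common formulation of such a discrete-time model is a single logistic regression on person-period data, with one row per person per semester and a separate intercept per semester as the baseline hazard. A minimal sketch with simulated data (all numbers hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated person-period data: one row per person per semester,
# with a hypothetical baseline hazard and covariate effect
rows = []
for person in range(300):
    covariate = rng.normal()
    for semester in range(1, 7):
        logit = -3 + 0.8 * covariate
        event = rng.random() < 1 / (1 + np.exp(-logit))
        rows.append({"person": person, "semester": semester,
                     "covariate": covariate, "event": int(event)})
        if event:
            break   # the person leaves the risk set after the event

data = pd.DataFrame(rows)

# One binomial model over all person-periods; C(semester) gives a
# separate baseline hazard per semester (the discrete-time analogue
# of a baseline hazard function)
fit = smf.logit("event ~ C(semester) + covariate", data=data).fit(disp=0)
```

Time-varying questionnaire covariates would simply enter as columns that change across a person's rows; competing event types would replace the binary outcome with a multinomial one.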
| null | CC BY-SA 4.0 | null | 2023-04-02T08:56:15.393 | 2023-04-02T09:53:30.717 | 2023-04-02T09:53:30.717 | 22047 | 28500 | null |
611541 | 1 | 611624 | null | 1 | 67 | I have run model selection and have been advised to use the BIC value to determine the best model for my purposes.
However, having looked around online, I cannot find a standard way to report the results; even published papers report them in different ways. Could someone please tell me how to report the BIC value and what other values I need to report? Does anyone know how to generate a table of BIC results in R?
Thanks
| How to report on model selection | CC BY-SA 4.0 | null | 2023-04-02T09:02:32.633 | 2023-04-03T01:09:11.113 | null | null | 383385 | [
"r",
"model-selection",
"reporting",
"bic"
] |
611542 | 1 | null | null | 1 | 99 | I am seeking an expression of the Jensen-Shannon Distance (JSD) between two normal distributions that only uses the respective means and standard deviations. The continuous version of the JSD (in nats) is given as:
$$\sqrt{\frac{1}{2}\int_{-\infty}^{\infty} \left(f(x)\ln\frac{2f(x)}{f(x)+g(x)}+g(x)\ln\frac{2g(x)}{f(x)+g(x)} \right) dx}$$
As shown in the [Wikipedia page](https://en.wikipedia.org/wiki/Normal_distribution#Other_properties), the Kullback-Leibler divergence and the Hellinger Distance can be expressed using only the means and standard deviations - that is, without the variable $x$. Can the JSD be expressed in a similar way?
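While I don't know of a closed form in the means and standard deviations, the integral above is straightforward to evaluate numerically, which is useful for checking any candidate expression. A quadrature sketch with `scipy` (not a closed-form answer to the question):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def js_distance(mu1, sd1, mu2, sd2):
    """Jensen-Shannon distance (in nats) between two normals, by quadrature."""
    f = norm(mu1, sd1).pdf
    g = norm(mu2, sd2).pdf

    def integrand(x):
        fx, gx = f(x), g(x)
        m = 0.5 * (fx + gx)   # mixture density; ln(2f/(f+g)) = ln(f/m)
        total = 0.0
        if fx > 0:
            total += fx * np.log(fx / m)
        if gx > 0:
            total += gx * np.log(gx / m)
        return 0.5 * total

    lo = min(mu1 - 10 * sd1, mu2 - 10 * sd2)
    hi = max(mu1 + 10 * sd1, mu2 + 10 * sd2)
    # Break points at the two means keep the quadrature from missing a peak
    divergence, _ = quad(integrand, lo, hi, points=sorted({mu1, mu2}))
    return np.sqrt(max(divergence, 0.0))
```

For identical distributions the distance is 0, and it is bounded above by √(ln 2) ≈ 0.833 in nats, which the two-parameter expression (if one exists) would have to reproduce.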
| Jensen-Shannon Distance between two normal distributions defined only by the respective means and standard deviations | CC BY-SA 4.0 | null | 2023-04-02T09:03:58.837 | 2023-04-04T09:26:55.163 | 2023-04-04T09:26:55.163 | 110840 | 110840 | [
"normal-distribution",
"distance",
"distance-functions"
] |
611543 | 1 | 611596 | null | 3 | 84 | Cohen's d and Hedges' g effect sizes are used when two distributions have equal variances, which is an assumption required by these measures.
When two distributions have unequal variances, Glass' delta should be used instead, putting only the standard deviation of the control/pre-measurement group in the denominator.
What happens when two distributions have unequal variances but neither of them is a control or pre-measurement group? For example, two sets of measurements are made on the same participants on different body sides. For some reason, the two distributions have unequal variances, so I cannot use Cohen's d or Hedges' g, because the equal-variance assumption is violated. I would therefore have to use Glass' delta, but neither of the two distributions is a control or pre-measurement group. What should I do in this case?
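For reference, the three measures share the same numerator and differ only in the denominator, which is exactly where the ambiguity lies; a sketch with hypothetical values:

```python
import numpy as np

# Hypothetical measurements from the two body sides
a = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.7])
b = np.array([4.2, 3.9, 5.1, 4.0, 4.4, 4.6])
na, nb = len(a), len(b)

diff = a.mean() - b.mean()

# Cohen's d: pooled SD in the denominator (assumes equal variances)
s_pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                   / (na + nb - 2))
cohens_d = diff / s_pooled

# Hedges' g: Cohen's d with a small-sample bias correction
hedges_g = cohens_d * (1 - 3 / (4 * (na + nb) - 9))

# Glass' delta: the SD of one designated group only -- the choice
# that is ambiguous here, since neither side is a control
glass_delta = diff / b.std(ddof=1)
```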
Thanks.
| Glass' delta effect size when there is no control or pre-measurement group | CC BY-SA 4.0 | null | 2023-04-02T09:36:34.453 | 2023-04-02T18:02:32.893 | null | null | 384728 | [
"effect-size"
] |
611544 | 1 | null | null | 2 | 38 | I came across a paper which reported the following results
|Accuracy |Specificity |Sensitivity |
|--------|-----------|-----------|
|97.49% |93.6% |94.3% |
It seems unusual for accuracy to be higher than both sensitivity and specificity. Is this an error in reporting?
I looked at [this question](https://stats.stackexchange.com/a/555729) which suggests that this is possible for precision and recall, but I reproduced the test and found no examples in which accuracy exceeded both metrics.
I believe it should be possible to demonstrate this algebraically or even just logically (just to satisfy myself that the intuition is correct) but I haven't been able to do so.
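One way to see it: accuracy = (TP+TN)/N = prevalence × sensitivity + (1 − prevalence) × specificity, i.e. a convex combination of the two, so accuracy always lies between sensitivity and specificity and can never exceed both. A quick check with hypothetical counts:

```python
# Hypothetical confusion-matrix counts
tp, fn, tn, fp = 90, 10, 850, 50
n = tp + fn + tn + fp

sensitivity = tp / (tp + fn)        # 0.90
specificity = tn / (tn + fp)        # ~0.944
prevalence = (tp + fn) / n          # 0.10

accuracy = (tp + tn) / n
weighted = prevalence * sensitivity + (1 - prevalence) * specificity

# Accuracy is exactly the prevalence-weighted average of the two...
assert abs(accuracy - weighted) < 1e-12
# ...and therefore lies between sensitivity and specificity
assert min(sensitivity, specificity) <= accuracy <= max(sensitivity, specificity)
```

(The analogous statement fails for precision and recall because precision is not a row- or column-wise rate of the same kind, which is why the linked question reaches a different conclusion.)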
For reference, [this is the paper in question](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8669763)
| Can Accuracy be higher than both sensitivity and specificity? | CC BY-SA 4.0 | null | 2023-04-02T09:40:05.457 | 2023-04-03T13:13:32.110 | 2023-04-03T13:13:32.110 | 256880 | 256880 | [
"accuracy",
"confusion-matrix",
"sensitivity-specificity"
] |
611545 | 1 | null | null | 1 | 26 | I'm relatively new to causal inference, so please be gentle.
[](https://i.stack.imgur.com/urGQP.png)
I have the above DAG, which represents the following variables:
- G: exposure variable, two factors (control and treatment)
- S: Pre-treatment variable, Continuous
- PT: Post-treatment variable, Continuous
- Y: Outcome of interest, Continuous
Is there a way to estimate the contrast effect between the two factors of G across the values of PT using regression analysis? I ask this because my basic understanding of causal inference tells me that conditioning on PT would incur post-treatment bias on the estimate.
Cheers
| Conditioning group effect on values post-treatment variable | CC BY-SA 4.0 | null | 2023-04-02T10:02:32.603 | 2023-04-02T10:02:32.603 | null | null | 30813 | [
"regression",
"causality",
"bias"
] |
611548 | 1 | null | null | 0 | 15 | Introduction: Archaeological research often involves drawing conclusions about past societies and cultures based on a limited number of artifacts or findings. In some cases, such as the study of synagogues in the Levant, the number of findings may be significantly smaller than the number that existed during the period of interest (0.1%, 1%, 10%). This raises questions about the validity of conclusions based on small sample size and the role of statistical analysis in assessing confidence and significance.
Questions:
- What minimum sample size is required to draw valid conclusions in archaeological research, particularly when studying the prevalence or frequency of artifacts such as synagogues in a given region and time period?
- How can statistical methods be used to account for bias and uncertainty in archaeological data, such as the incomplete or limited nature of findings, to arrive at more reliable conclusions about past societies and cultures?
- What assumptions can be made about the size of the population and distribution of synagogues, and how can statistical analysis be used to assess the level of confidence and significance of conclusions drawn from a small sample size of artifacts?
Thank you!
| Assessing the Validity of Conclusions Based on Limited Findings | CC BY-SA 4.0 | null | 2023-04-02T10:20:23.987 | 2023-04-02T11:10:52.933 | null | null | 275495 | [
"sample-size",
"bias",
"small-sample",
"uncertainty",
"research-design"
] |
611549 | 2 | null | 611107 | 1 | null | Your interpretation of the mean exposure parameter is correct: it's the number of units per subject. In your case, the rate parameters $\exp(\beta_0)$ and $\exp(\beta_1)$ specify the pill consumption rate per day, so the unit is 1 day.
One way to think about this is in terms of patient days. Under some fairly strong assumptions (more on this below), you get the same number of patient days in a study of $n$ patients who are followed up for $m$ days each, and a study with $n \times m$ patients who are followed up for 1 day. Consequently, you get the same power with both study designs.
You can verify this with G*Power by checking that power is ~0.8 under the following two settings:
- sample size $n$ = 1175, mean exposure = 1
- sample size $n$ = 1175 / $m$ days, mean exposure = $m$ days
while the other input parameters are fixed at the values specified in the question: one-sided test, base rate $\exp(\beta_0)$ = 2, $\exp(\beta_1) = 0.9$, $\alpha$ = 0.05, $X$ distribution = Binomial with $\pi$ = 0.5.
Here are the results for $m$ days = {1, 31, 62, 91}, ie. follow-up is 1 day, 1 month, 2 months and 3 months.
```
#> days patients power patient_days
#> 1 1175 0.800 1175
#> 31 38 0.801 1178
#> 62 19 0.801 1178
#> 91 13 0.802 1183
```
So what are the conditions to get the same power by following up 13 patients for 3 months as you would get by following up 1,175 patients for 1 day?
One assumption is independence within patient: the number of pills patient $i$ consumes in a day are iid $\operatorname{Poisson}(\lambda_i)$.
The second assumption is that the daily pill consumption rate doesn't vary by patient: $\lambda_i = \exp(\beta_0)$ if patient $i$ is in the control group and $\lambda_i = \exp(\beta_0 + \beta_1)$ if patient $i$ is in the treatment group. If this assumption is violated, the Poisson counts will be over-dispersed and the power will be lower than planned. This will happen even if the observations are independent.
Let's demonstrate this with a simulation. I use the [simstudy](https://cran.r-project.org/web/packages/simstudy/index.html) package to simulate Poisson data with $\lambda_i \sim \operatorname{normal}\big(\exp(\beta_0 + \beta_1 x), \sigma^2_{\text{patient}}\big)$ where $x$ = 0 for the control group and $x = 1$ for the treatment group. I estimate the power by replicating the study 1,000 times: each time I simulate count data under the alternative $\exp(\beta_1) = 0.9$, fit a Poisson GLM and check whether the confidence interval for $\beta_1$ excludes 0. The last three columns correspond to $\sigma_{\text{patient}}$ = {0, 0.05, 0.1}.
```
#> days patients expected_power `0` `0.05` `0.1`
#> 1 1175 0.800 0.797 0.807 0.799
#> 31 38 0.801 0.794 0.792 0.729
#> 62 19 0.801 0.793 0.733 0.716
#> 91 13 0.802 0.788 0.729 0.69
```
You can see that when the patient rate $\lambda_i$ varies about the mean rate $\exp(\beta_0 + \beta_1 x)$ — a rather reasonable supposition — we lose power by recruiting fewer patients even though we follow them up for a longer period of time.
---
R code to estimate power for a Poisson regression. NB: The simulation takes ~30min.
```
library("broom")
library("simstudy")
library("tidyverse")
simulate_poisson_data <- function(patients, days, exp0, exp1,
sd.patient = 0, sd.day = 0,
prob.x = 0.5, seed = NULL) {
beta0 <- log(exp0)
beta1 <- log(exp1)
def.patient <- defData(
varname = "x",
dist = "binary",
formula = prob.x,
id = "patient"
)
def.patient <- defData(def.patient,
varname = "lambda0",
dist = "normal",
formula = "..beta0 + ..beta1 * x",
# Set `variance = 0` to let each patient mean equal to the group mean
variance = sd.patient^2
)
def.patient <- defData(def.patient,
varname = "days",
# Comment out `dist = ...` to observe the same number of days per patient
# dist = "noZeroPoisson",
formula = "..days"
)
def.day <- defDataAdd(
varname = "lambda",
dist = "normal",
formula = "lambda0",
# Set `variance = 0` to let each day mean equal to the patient mean
variance = sd.day^2
)
def.day <- defDataAdd(def.day,
varname = "y",
dist = "poisson",
formula = "lambda",
link = "log"
)
set.seed(seed)
dt.patient <- genData(patients, def.patient)
dt.day <- genCluster(
dt.patient,
cLevelVar = "patient",
numIndsVar = "days",
level1ID = "patient_day"
)
dt.day <- addColumns(def.day, dt.day)
dt.day
}
estimate_poisson_rate <- function(dd, alternative = c("two.sided", "less", "greater"),
conf.level = 0.95) {
alternative <- match.arg(alternative)
fit <- glm(
y ~ x,
data = dd,
family = poisson
)
bx.hat <- tidy(fit, conf.int = TRUE)[2, ]
if (alternative == "less") {
lower <- -Inf
upper <- bx.hat$estimate + qnorm(conf.level) * bx.hat$std.error
} else if (alternative == "greater") {
lower <- bx.hat$estimate - qnorm(conf.level) * bx.hat$std.error
} else {
lower <- bx.hat$conf.low
upper <- bx.hat$conf.high
}
list(lower = lower, upper = upper)
}
study_poisson_counts <- function(patients, days, exp0, exp1,
sd.patient = 0, sd.day = 0,
prob.x = 0.5,
alternative = c("two.sided", "less", "greater"),
conf.level = 0.95, seed = NULL) {
dd <- simulate_poisson_data(patients, days, exp0, exp1,
sd.patient = sd.patient, sd.day = sd.day,
prob.x = prob.x, seed = seed
)
# Check for overdispersion: var(y|x) > mean(y|x)
dd[, list(mean.y = mean(y), var.y = var(y)), by = x]
conf.int <- estimate_poisson_rate(dd,
alternative = alternative, conf.level = conf.level
)
(conf.int$lower > 0) || (conf.int$upper < 0)
}
exp0 <- 2
exp1 <- 0.9
study <- tribble(
~days, ~patients, ~power,
1, 1175, 0.8000978,
31, 38, 0.8009854,
62, 19, 0.8009854,
91, 13, 0.8024570
)
study %>%
mutate(
patient_days = patients * days
)
set.seed(1234)
out <- study %>%
expand_grid(
sd = c(0, 0.05, 0.1)
) %>%
rename(
expected_power = power
) %>%
mutate(
actual_power = pmap_dbl(list(patients, days, sd), \(patients, days, sd) {
excludes0 <- replicate(
1000,
study_poisson_counts(
patients, days, exp0, exp1,
sd.patient = sd,
alternative = "less"
)
)
mean(excludes0, na.rm = TRUE)
})
)
out %>%
pivot_wider(
names_from = sd,
values_from = actual_power
)
```
| null | CC BY-SA 4.0 | null | 2023-04-02T10:24:14.613 | 2023-04-02T10:24:14.613 | null | null | 237901 | null |
611551 | 2 | null | 611517 | 1 | null | Normalisation is used in order to make things comparable that in original form are not comparable, such as variables with different measurement units. Your measurements to me seem all comparable as they are, so my first guess is that you don't need to do any normalisation (however I would always make such decisions based on full knowledge of the situation, which obviously I don't have).
There may however be reasons to normalise even in situations with comparable measurements. In a classification task the key question is whether certain systematic differences are informative for the class distinction. For example, the fact that one time series has a larger variance or value range than another may or may not involve information about the class membership. If time series are normalised, such information will be removed. This is a bad thing if the information in fact is useful for classification, but if it isn't, it may dominate more important information, and removing it by normalisation may be a good thing. This may also be the case if different time series have a different average level, and this is not informative for classification, but rather only the pattern of relative changes over time.
Note that one can in principle normalise over the time series, but one could also normalise over the time points, which can make sense if there are systematic differences between time points in all series, once more if those differences do not point to information useful for distinguishing the time series. For example once I worked on an application in which larger variance at a certain point in fact meant that what was going on at this point was more important for class discrimination, so it was important not to standardise this information away.
Such things could be deduced from subject-matter knowledge, or you could find them out by comparing different possibilities using cross-validation. Be careful, though, to avoid information leakage: don't base decisions on knowledge of the whole data set when leaving out observations (although a rough peek at the data may reveal a reason to normalise of which you would otherwise be unaware). There is also a danger in comparing too many options on the data, as some may look good by accident. In many situations it doesn't make much difference whether you normalise or not, and then model selection bias from "overcomparison" may be worse than any option you choose without comparison.
So "no normalisation" would still be the first thing I'd try out with unified measurements in a situation like this, unless any subject matter knowledge would suggest normalisation.
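The two directions of normalisation mentioned above differ only in the axis over which the statistics are computed; a numpy sketch for a hypothetical array of shape (series, time points):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(5, 100))   # hypothetical data: 5 series, 100 time points

# Normalise each series over time: removes level/scale differences
# between series, keeping only the relative pattern of each series
X_series = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

# Normalise each time point over the series: removes systematic
# differences between time points that are common to all series
X_time = (X - X.mean(axis=0, keepdims=True)) / X.std(axis=0, keepdims=True)
```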
| null | CC BY-SA 4.0 | null | 2023-04-02T10:34:14.207 | 2023-04-02T10:34:14.207 | null | null | 247165 | null |
611552 | 2 | null | 611534 | 2 | null | An issue with larger K is that you have larger training sets that overlap more, leading to a stronger dependence between the results in the K folds. There is always a trade-off between (a) using more information in larger folds to achieve better results that are closer to using the full data and (b) keeping dependence between the folds low because if dependence is large, you can't learn much new from another fold.
I had for a long time thought that larger K is always better, but Peter Buehlmann made me aware of this, and I think that he believed, at least at the time (20 years ago), that smaller K might actually be better. Unfortunately I never thoroughly researched whether there is literature on this issue and what it says. I did once supervise a student project exploring it by simulation, but the results were inconclusive one way or the other, and the scope was very limited.
Now the qualification here is this: obviously if you run K-fold CV just once, larger K needs more computation time, and my intuition still says that this is better than smaller K, because the larger number of folds may well be worth the price of the greater dependence.
However, note that you can randomly split a data set into K folds more than once, and you could compare lower and higher K while standardising the computation time, i.e., running more replicates for smaller K so that ultimately the computation time is the same. I believe that Buehlmann's comment that smaller K may be better referred to this version of the comparison, and this was also what we tried out in the inconclusive student project mentioned above.
I think at the time Buehlmann could only point me to theory that investigated running all possible partitions into K folds, which is practically not feasible (I also forgot where to find this, as I didn't find it much enlightening, except that you could see theoretically how larger dependence could be an issue for larger K).
So from my point of view the issue isn't clearly decided (I'd be happy to see any response that could cite more contemporary literature on this). Buehlmann has a point there, but to what extent this point outweighs the advantage to have more larger folds isn't obvious.
Based on this, choosing smaller K but investing some more computation time to run more replicates may well be a viable option (more so if your data set is large).
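The "more replicates at smaller K" option is easy to set up with scikit-learn's `RepeatedKFold`, holding the total number of model fits roughly constant (a sketch, not a claim about which choice is better):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = make_classification(n_samples=200, random_state=0)
model = LogisticRegression(max_iter=1000)

# Roughly the same computation budget (20 fits) spent two ways:
cv_large_k = RepeatedKFold(n_splits=20, n_repeats=1, random_state=0)
cv_small_k = RepeatedKFold(n_splits=5, n_repeats=4, random_state=0)

score_large = cross_val_score(model, X, y, cv=cv_large_k).mean()
score_small = cross_val_score(model, X, y, cv=cv_small_k).mean()
```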
| null | CC BY-SA 4.0 | null | 2023-04-02T10:56:00.413 | 2023-04-02T12:15:48.050 | 2023-04-02T12:15:48.050 | 247165 | 247165 | null |
611553 | 2 | null | 611548 | 1 | null | A major issue here is that, as you mention in item 2, the samples you have may very well not be random samples as assumed by standard statistical inference; you may see certain things with systematically larger probability than others, and you may not know about these probabilities.
This means, unfortunately, that not only may standard inference be misleading, but also you can't really know how misleading it is.
All statistical inference is based on model assumptions, and any recommendation of required sample sizes (item 1) will also be based on such assumptions, which means that such recommendations cannot be made for situations where these assumptions don't hold. An arbitrarily large sample size can be systematically biased in ways you cannot see.
I think that probably the best that can be done is to use all available subject matter knowledge to model possible processes that led to the data you observe. Such models may depend on parameters for which you have to assume certain values and that you cannot estimate from data (such as how systematically different the things are that you ultimately observe from those that had originally been there). You can then generate artificial data from such model, apply inference as you would to real data, and see how this relates to the "truth" that you have put into the model. Doing this with various different parameter choices can give you an idea of how much variation and uncertainty there is.
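As a toy version of this kind of exercise (every number below is an assumption chosen for illustration, not an archaeological estimate), one can simulate a known truth and a biased recovery process and see how far the naive estimate lands from the truth:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed truth: 100,000 sites, 30% of which are of the type of interest
n_sites = 100_000
is_type = rng.random(n_sites) < 0.30

# Assumed biased recovery: the type of interest is twice as likely
# to survive and be found (5% vs 2.5% recovery probability)
p_recover = np.where(is_type, 0.05, 0.025)
found = rng.random(n_sites) < p_recover

naive_estimate = is_type[found].mean()   # proportion of the type among finds
true_proportion = is_type.mean()         # ~0.30
# With this bias the naive estimate lands near 0.46, far from the truth;
# varying the assumed recovery probabilities shows how sensitive any
# conclusion is to them
```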
Such modeling can also be used in a Bayesian analysis where such work would be put into setting up a prior distribution that quantifies your prior beliefs about not only what's true but also how the observed data relate to it.
Doing this in practice of course requires work and knowledge of all kinds of information that is not available on this board. E.g., "What assumptions can be made about the size of the population and distribution of synagogues" - that's not really for the statisticians to know, rather the subject matter specialists need to decide this (maybe with statistician's help "translating" the information into a formal distribution).
| null | CC BY-SA 4.0 | null | 2023-04-02T11:10:52.933 | 2023-04-02T11:10:52.933 | null | null | 247165 | null |
611556 | 1 | null | null | 1 | 39 | I would like to test the difference between two classification outcomes for statistical significance using [McNemar's test](https://en.wikipedia.org/wiki/McNemar%27s_test), but I am unsure how to best account for multiple comparisons.
The setting is as follows (see plot below):
I have obtained two classification accuracies based on the same dataset (solid line vs. dashed line) over multiple levels of an ordinal independent variable (x-axis). For each level of this ordinal variable (i.e. at each tick on the x-axis), I would like to run a McNemar's test to compare the classification outcomes. This means that I would have to run a total of 6 McNemar's tests.
[](https://i.stack.imgur.com/Vetkfm.png)
To complicate things further, there is another more global nominal independent variable involved. Let's call it A. For each level of A, I have obtained a plot as shown above, and there are 7 levels. This means that, in total, I would have to run 7 x 6 = 42 McNemar's tests.
Now, I'm wondering at which level I have to account for multiple comparisons. I am convinced that at least at each level of A, I should correct for multiple comparisons with e.g. a Bonferroni correction (i.e. divide my alpha by 6). However, I'm unsure whether I need to also apply a correction across levels of A, i.e. perform each of the total 42 McNemar's tests with alpha/42.
Which of the two possible corrections is more appropriate? Or should I use a different approach altogether?
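Whichever family of tests you decide to correct over, the adjustment itself is mechanical; a sketch with `statsmodels` (the uniform draws below are placeholders for the actual 42 McNemar p-values):

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
p_values = rng.uniform(size=42)   # placeholders for the 42 McNemar p-values

# Bonferroni across all 42 tests (the most conservative choice):
reject_bonf, p_bonf, _, _ = multipletests(p_values, alpha=0.05,
                                          method="bonferroni")

# Holm also controls the family-wise error rate but is uniformly
# more powerful than Bonferroni:
reject_holm, p_holm, _, _ = multipletests(p_values, alpha=0.05, method="holm")
```

Correcting within each level of A would simply mean calling this on each group of 6 p-values separately.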
| McNemar's test with multiple comparisons | CC BY-SA 4.0 | null | 2023-04-02T11:42:13.030 | 2023-04-02T11:42:13.030 | null | null | 384739 | [
"classification",
"multiple-comparisons",
"mcnemar-test"
] |
611558 | 2 | null | 569564 | 0 | null | "The grey bars to the right of each panel show the relative scales of the components. Each grey bar represents the same length but because the plots are on different scales, the bars vary in size."
Ref: [https://otexts.com/fpp2/components.html](https://otexts.com/fpp2/components.html), second paragraph below Figure 6.2.
Below, I've scaled down the graph of the remainder component (in pink) so the scale bar is the same size as the scale bar of the data and overlaid it on the data. Now the bars are the same size, you can clearly see how the remainder component relates to the data.
If you look at the seasonal component, you can see the bar is longer than that of the remainder component so the remainder component would have to be scaled up if it were overlaid on the seasonal component.

| null | CC BY-SA 4.0 | null | 2023-04-02T11:54:52.263 | 2023-04-02T16:20:23.937 | 2023-04-02T16:20:23.937 | 384742 | 384742 | null |
611559 | 1 | null | null | 3 | 62 | I'm studying regression bootstrap procedures for my thesis, and have stumbled something which I find difficult to understand. Consider the standard normal linear model
$$
Y = \beta x + \epsilon \,, \qquad \epsilon \sim \mathcal{N}(0, \sigma^2) \,,
$$
and suppose we want to simulate the distribution of the response $Y_+$ at a new value $x_+$ of the regressors. I have only found very few mentions of this setup in the literature (for instance [this article](https://gking.harvard.edu/files/making.pdf) and some domain-specific ones in my particular field), and these suggest applying the following bootstrap procedure:
- Fit the model using OLS to obtain $\widehat{\beta}$ and resample the residuals $e_k = Y_k - \widehat{\beta} x_k$ (or some standardised version) to obtain a bootstrap sample $\epsilon_1^{(b)}, \dots, \epsilon_n^{(b)}$.
- Simulate new responses $Y_k^{(b)} = \widehat{\beta} x_k + \epsilon_k^{(b)}$ and refit the model to obtain bootstrapped regression coefficients $\widehat{\beta}^{(b)}$ and variance estimate $\widehat{\sigma}^{(b)}$.
- Simulate new responses at the point of interest $x_+$ by drawing $Y_+^{(b)}$ from the estimated distribution $\mathcal{N}(\widehat{\beta}^{(b)} x_+, (\widehat{\sigma}^{(b)})^2).$
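For concreteness, a numpy sketch of these three steps for the no-intercept model above (simulated data, 2000 bootstrap replicates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data from Y = beta * x + eps (no intercept)
n, beta, sigma = 100, 2.0, 1.0
x = rng.uniform(0, 5, n)
y = beta * x + rng.normal(0, sigma, n)
x_new = 3.0   # point at which to simulate the response distribution

# Step 1: OLS fit and residuals
beta_hat = (x @ y) / (x @ x)
resid = y - beta_hat * x

y_new = np.empty(2000)
for b in range(2000):
    eps_b = rng.choice(resid, size=n, replace=True)   # resample residuals
    # Step 2: simulate responses and refit
    y_b = beta_hat * x + eps_b
    beta_b = (x @ y_b) / (x @ x)
    sigma_b = np.std(y_b - beta_b * x, ddof=1)
    # Step 3: draw a response at x_new from the refitted model
    y_new[b] = rng.normal(beta_b * x_new, sigma_b)

# y_new approximates the simulated distribution of Y_+ at x_new
```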
I find it difficult to understand the intuition behind this approach. Specifically, why is it necessary to randomise the regression parameter $\widehat{\beta}$? The justification given in the expositions I've been able to find is usually something hand-wavy about 'incorporating the estimation error on the coefficients', but I find this unconvincing. The true response $Y_+ = \beta x_+ + \epsilon$ does not vary for different realisations of our sample data, contrary to the predictor $\widehat{Y}_+$, and so the variation in the parameter estimate should not be relevant to its simulated distribution. Indeed, the bootstrap procedures outlined in [Stine](https://www.jstor.org/stable/2288570) and [Davison & Hinkley](https://www.cambridge.org/core/books/bootstrap-methods-and-their-application/ED2FD043579F27952363566DC09CBD6A) (page 285) for estimating the prediction error use simulated regression parameters to obtain new predictor realisations, but stick with the original $\widehat{\beta}$ to simulate responses, only adding simulated residuals to them, i.e. $\delta := \widehat{Y}_+ - Y_+$ is approximated by
$$
\delta^{(b)} = x_+ \widehat{\beta}^{(b)} - (\widehat{\beta} x_+ + \epsilon^{(b)}) \,.
$$
Can anyone provide a good explanation/reference for bootstrapping the response distribution? Apologies for the lengthy question.
| Generating a bootstrap simulation of the response in regression | CC BY-SA 4.0 | null | 2023-04-02T12:11:29.867 | 2023-04-08T13:08:00.670 | null | null | 304924 | [
"regression",
"bootstrap",
"simulation"
] |
611560 | 1 | null | null | 0 | 38 | I am comparing the association between [BRCA1 gene mutations](https://www.cancer.gov/about-cancer/causes-prevention/genetics/brca-fact-sheet) and the prevalence of ovarian cancer by ethnicity (5 ethnic groups). I collected data on the number of patients who develop ovarian cancer and the number of these patients who have a BRCA1 mutation, and I would like to know to what extent BRCA1 mutations affect the incidence of ovarian cancer, and whether there are significant differences between ethnic groups.
My question is, what statistical test should I use? I have searched on the internet and many sites say that the chi-square test is the best way to find out if there is a significant difference. However, I am not very sure about this answer and do not know why the chi-square test is the best statistical test.
I would really appreciate it if somebody could answer my question.
| What statistical tests should I use? | CC BY-SA 4.0 | null | 2023-04-02T13:12:13.390 | 2023-04-03T04:11:49.933 | 2023-04-03T04:11:49.933 | 384744 | 384744 | [
"statistical-significance",
"chi-squared-test",
"association-measure"
] |
611563 | 1 | null | null | 0 | 28 | In [this answer](https://stats.stackexchange.com/a/35653/9162) @ttnphns writes
>
Both eigenvectors and loadings are similar in respect that they serve
regressional coefficients in predicting the variables by the
components (not vice versa!)
, and in a footnote adds that
>
Since eigenvector matrix in PCA is orthonormal and its inverse is its
transpose, we may say that those same eigenvectors are also the
coefficients to back predict the components by the variables. It is
not so for loadings, though.
I understand the distinction between eigenvectors and loadings, as explained in many posts here. But why is it that we can say
$Component_1 = Eigenvector_{11} \times x_1 + Eigenvector_{21} \times x_2 + ...$
but not
$Component_1 = Loading_{11} \times x_1 + Loading_{21} \times x_2 + ...$
And why is it that the problem doesn’t occur when predicting variables by components?
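To illustrate what I mean numerically (a toy example of my own, using standardised data):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
X = rng.normal(size=(n, p)) @ rng.normal(size=(p, p))  # correlated variables
X = (X - X.mean(0)) / X.std(0, ddof=1)                 # standardise

R = X.T @ X / (n - 1)                # correlation matrix
evals, V = np.linalg.eigh(R)         # eigenvectors V (columns)
order = np.argsort(evals)[::-1]
evals, V = evals[order], V[:, order]

scores = X @ V                       # components = variables * eigenvectors (exact)
A = V * np.sqrt(evals)               # loadings = eigenvectors * sqrt(eigenvalues)

# back-predicting the variables works with loadings (standardised scores):
scores_std = scores / np.sqrt(evals)
print(np.allclose(scores_std @ A.T, X))   # True

# but variables * loadings does NOT reproduce the (standardised) components:
print(np.allclose(X @ A, scores_std))     # False
```

So numerically the eigenvectors recover the components exactly from the variables, while the loadings reproduce the variables from the standardised components, but not vice versa — which is exactly the asymmetry I'd like to understand.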
| PCA: Why can’t we say loadings are the coefficients used to back predict the components by the variables? | CC BY-SA 4.0 | null | 2023-04-02T13:36:55.843 | 2023-04-02T13:36:55.843 | null | null | 9162 | [
"pca"
] |
611564 | 1 | null | null | 0 | 15 | I'm studying economics and learning only the statistical topics related to my field.
In my textbook I read this formula for likelihood function, and I can't follow the rule.
$Y_t \mid Y_{t-1},Y_{t-2},\dots,Y_{-p+1} \sim N(0,\Sigma)$
$f(Y_t \mid Y_{t-1},Y_{t-2},\dots,Y_{-p+1},\theta)=(2\pi)^{-n/2} |\Sigma|^{-1/2}\exp[(-1/2)(Y_t)'\Sigma^{-1}(Y_t)]$
I wonder why $(2\pi)^{-n/2}$ and not $(2\pi)^{-1/2}$?
Isn't it from the normal density function formula $(2\pi)^{-1/2} |\Sigma|^{-1/2} \exp[(-1/2)(Y_t)'\Sigma^{-1}(Y_t)]$?
I think this is a very simple and basic question for you, but I can't find any hint in my textbook or on the internet.
Hope to find answer here. Thank you.
| Conditional density of normal distribution | CC BY-SA 4.0 | null | 2023-04-02T13:40:45.797 | 2023-04-02T13:45:01.357 | 2023-04-02T13:45:01.357 | 335673 | 335673 | [
"normal-distribution",
"conditional-probability",
"likelihood"
] |
611566 | 1 | null | null | 0 | 52 | I am reading T. Chen, C. Guestrin, "XGBoost: A Scalable Tree Boosting System", 2016 ([arXiv](https://arxiv.org/abs/1603.02754)), which is seemingly full of typos. They propose the so-called "approximate algorithm" (Algorithm 2 below) and justify it as follows (section 3.2):
>
However, it is impossible to efficiently do so [Algorithm 1] when the data does not fit entirely into memory. Same problem also arises in the distributed setting. To support effective gradient tree boosting in these two settings, an approximate algorithm is needed.
I understand that in order to calculate the gain for every possible split for each feature, one needs to access all datapoints assigned to the current node ("instance set" $I$). Because this assignment is different for every node of every tree, one has to read the entire dataset from the disk, which is slow. What I don't understand is
Why don't we have to do the same when considering a subset of candidate split points?
We still have to calculate the statistics $G_{kv}, H_{kv}, \forall k,v$. These depend on the set of points assigned to the current node. In order to determine which $\mathbf{x}_j$ belong to the node, we need to examine each of them. Therefore, we need to read the entire dataset from disk, which is slow. So where is the speed-up?
---
From the XGBoost paper [with my comments]:
>
Algorithm 1: Exact Greedy Algorithm for Split Finding
Input: $I$, instance set of current node
Input: $d$, feature dimension [Probably a typo: must be $m$, not $d$]
$gain \leftarrow 0$
$G \leftarrow \sum_{i\in I} g_i, \quad H \leftarrow \sum_{i\in I} h_i$
for $k = 1$ to $m$ do:
$\quad$$G_L \leftarrow 0,\quad H_L \leftarrow 0$
$\quad$for $j$ in sorted($I$, by $x_{jk}$) do:
$\qquad$$G_L \leftarrow G_L + g_j,\quad H_L \leftarrow H_L + h_j$
$\qquad$$G_R \leftarrow G − G_L, \quad H_R \leftarrow H − H_L$
$\qquad$$score \leftarrow \max(score, \frac{G_L^2}{H_L + \lambda} + \frac{G_R^2}{H_R + \lambda} - \frac{G^2}{H + \lambda})$ [Obviously a typo: must be $gain$ instead of $score$]
$\quad$end
end
Output: Split with max score
>
Algorithm 2: Approximate Algorithm for Split Finding
for k = 1 to m do:
$\quad$Propose $S_k = \{s_{k1}, s_{k2}, \dots, s_{kl}\}$ by percentiles on feature $k$.
$\quad$Proposal can be done per tree (global), or per split (local).
end
for $k = 1$ to $m$ do:
$\quad$$G_{kv} \leftarrow \sum_{j\in\{j|s_{k,v} \geq x_{jk}>s_{k,v−1}\}} g_j$
$\quad$$H_{kv} \leftarrow \sum_{j\in\{j|s_{k,v} \geq x_{jk}>s_{k,v−1}\}} h_j$
end
Follow same step as in previous section to find max score only among proposed splits.
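To make sure I read Algorithm 2 correctly, here is my own sketch of the per-feature statistics computation in Python (candidate splits by percentiles, bucketed $G$/$H$ sums, gains only at the proposed points; all names are mine, and this is not the actual XGBoost code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)          # one feature
g = rng.normal(size=n)          # first-order gradient statistics
h = np.ones(n)                  # second-order statistics (e.g. squared loss)

# propose candidate split points by percentiles on the feature
l = 32
splits = np.quantile(x, np.linspace(0, 1, l + 1)[1:-1])   # l - 1 interior quantiles

# one pass over the data: accumulate G_v, H_v per bucket
bucket = np.searchsorted(splits, x)                 # bucket index of each point
G = np.bincount(bucket, weights=g, minlength=l)
H = np.bincount(bucket, weights=h, minlength=l)

# split gains evaluated only at the l - 1 proposed points, via prefix sums
lam = 1.0
GL, HL = np.cumsum(G)[:-1], np.cumsum(H)[:-1]
GR, HR = G.sum() - GL, H.sum() - HL
gain = GL**2 / (HL + lam) + GR**2 / (HR + lam) - G.sum()**2 / (H.sum() + lam)
print(splits[np.argmax(gain)])
```

As I understand it, building $G_{kv}, H_{kv}$ still takes one pass over all points assigned to the node — which is precisely why I don't see where the disk-I/O saving comes from.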
| XGBoost: Why is the "approximate algorithm" faster? | CC BY-SA 4.0 | null | 2023-04-02T13:55:55.927 | 2023-04-22T03:34:52.923 | 2023-04-02T21:50:33.710 | 254326 | 254326 | [
"boosting",
"cart",
"computational-statistics",
"parallel-computing",
"distributed-computing"
] |
611568 | 1 | null | null | 0 | 22 | I am following the derivation of the Variational Bayes approach in [David Blei's lecture notes](https://www.cs.princeton.edu/courses/archive/fall11/cos597C/lectures/variational-inference-i.pdf), particularly equations (13-16).
In particular, the line:
$$
= E_q [\ \log_2 q(Z) ]\ - E_q \left[\ \log_2 \dfrac{p(Z, x)}{p(x)} \right]\
$$
$$
= E_q [\ \log_2 q(Z) ]\ - E_q [\ \log_2 p(Z, x) ]\ +\log_2 p(x)
$$
Specifically, is it correct that the last term goes from $ E_q [\ \log_2 p(x) ]\ $ to $ \log_2 p(x) $ because the expectation w.r.t. $q(Z)$ is not relevant for the log probability of $x$?
EDIT:
Just to be explicit about the step I mean. My understanding is that Eq. 14 goes to Eq. 15 by:
$$
= E_q [\ \log_2 q(Z) ]\ - E_q \left[\ \log_2 P(Z|x) \right]\
$$
$$
= E_q [\ \log_2 q(Z) ]\ - E_q \left[\ \log_2 \dfrac{p(Z, x)}{p(x)} \right]\
$$
$$
= E_q [\ \log_2 q(Z) ]\ - E_q [\ \log_2 p(Z, x) ]\ + E_q [\ \log_2 p(x) ]\
$$
$$
= E_q [\ \log_2 q(Z) ]\ - E_q [\ \log_2 p(Z, x) ]\ +\log_2 p(x)
$$
My question concerns the disappearance of $E_q$ between the last two lines.
| Understanding line in the derivation of KL divergence optimising function in Variational Bayes | CC BY-SA 4.0 | null | 2023-04-02T14:12:44.070 | 2023-04-02T14:58:17.750 | 2023-04-02T14:58:17.750 | 336682 | 336682 | [
"bayesian",
"expected-value",
"variational"
] |
611569 | 2 | null | 609893 | 2 | null | I had a chat with some of the authors of [Spatial Mapping with Gaussian Processes and Nonstationary Fourier Features](https://arxiv.org/abs/1711.05615), and I got to the bottom of the first part of my question.
The authors did not mean that all $f$'s may necessarily be written as this sum of four terms in $g$. They chose to place themselves in the setting where $f$ may be written this way (in so-called "symmetrised" form) because in that case the imaginary part disappears automatically when deriving the RFF scheme.
In the stationary case, we do not need to "symmetrise" : in fact what is typically done when deriving the RFF scheme is to use the fact that as the kernel is real-valued, $k(x, y) = \text{Re}\left[k(x, y)\right]$ (i.e. its real part). This is what was done in the [original 2007 paper on RFFs by A. Rahimi and B. Recht](https://papers.nips.cc/paper_files/paper/2007/file/013a006f03dbc5392effeb8f18fda755-Paper.pdf), by the way.
In the nonstationary case, this "trick" can be used, but you end up with a form of $k$ from which you cannot write the kernel Gram matrix in factorised form, which is a problem since what makes RFFs work is that the kernel Gram matrix can be written in factorised form $\text{K}_{\text{X}} = \Psi \Psi^{\text{T}}$.
On the second part of my question, the problem is mostly a matter of notation: what is meant by $g(\omega_1, \omega_1)$ is $g(\omega_1, \omega_2) \mathbb{1}\{\omega_1 = \omega_2\}$. Likewise for $g(\omega_2, \omega_2)$.
Further: in [a 2015 paper by Samo and Roberts](https://arxiv.org/abs/1506.02236), section 2.3 specifically, it is shown how to derive an RFF scheme for some nonstationary kernels without having to assume $f$ can be written as the sum of the four terms in $g$. What the results in this Samo & Roberts paper imply is that not all nonstationary kernels lend themselves to RFFs. This is a clear difference from the stationary case: all stationary kernels may be approximated by RFFs.
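For concreteness, here is a minimal sketch of the stationary case, i.e. the factorisation $\text{K}_{\text{X}} = \Psi \Psi^{\text{T}}$ for the Gaussian kernel via the Rahimi & Recht construction (my own toy illustration, not the nonstationary scheme of the papers above):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, D = 100, 2, 5000         # points, input dimension, number of random features
X = rng.normal(size=(n, d))

# exact Gaussian kernel k(x, y) = exp(-||x - y||^2 / 2)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2)

# random Fourier features: omega ~ N(0, I), b ~ Uniform(0, 2*pi)
W = rng.normal(size=(d, D))
b = rng.uniform(0, 2 * np.pi, D)
Psi = np.sqrt(2.0 / D) * np.cos(X @ W + b)

K_hat = Psi @ Psi.T            # factorised approximation of the Gram matrix
print(np.abs(K - K_hat).max())
```

The entrywise error shrinks at rate $O(1/\sqrt{D})$, and the factorised form $\Psi\Psi^{\text{T}}$ is exactly what the nonstationary constructions above try to preserve.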
| null | CC BY-SA 4.0 | null | 2023-04-02T14:51:52.390 | 2023-04-02T14:57:26.293 | 2023-04-02T14:57:26.293 | 383233 | 383233 | null |
611571 | 2 | null | 611520 | 1 | null | The p-values are calculated as if you did not do the backward elimination for feature selection. However, you did do feature selection. Therefore, the p-values are not valid for your model. This is related to issues $2$, $3$, $4$, and $7$ posted [here](https://www.stata.com/support/faqs/statistics/stepwise-regression-problems/) (which are based on statistical theory and do not rely on any particular software, despite the source being a Stata website).
It seems that you overfit the feature selection to your training data, and you picked features that are solid predictors in the training data but turn out not to be in the test data.
Note that [stepwise feature selection can be competitive when it comes to pure prediction problems](https://stats.stackexchange.com/questions/594106/how-competitive-is-stepwise-regression-when-it-comes-to-pure-prediction), but the usual p-values and confidence intervals printed by software functions do not account for the feature selection and, thus, are too optimistic in favor of nonzero effects (rejection of null hypotheses that the parameters are zero).
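A quick simulation of the phenomenon (a sketch of my own: pure-noise outcome and predictors, select the best of 50, then read off the naive p-value):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p, reps = 100, 50, 500
significant = 0
for _ in range(reps):
    X = rng.normal(size=(n, p))        # predictors: pure noise
    y = rng.normal(size=n)             # outcome: unrelated noise
    # "selection": keep the single predictor most correlated with y
    r = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
    xb = X[:, np.argmax(r)]
    # naive p-value from a simple regression on the selected predictor
    res = stats.linregress(xb, y)
    significant += res.pvalue < 0.05
print(significant / reps)   # far above the nominal 0.05
```

Even though no predictor has any real effect, the naive p-value of the selected predictor falls below $0.05$ in the large majority of repetitions.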
| null | CC BY-SA 4.0 | null | 2023-04-02T15:24:59.780 | 2023-04-02T15:24:59.780 | null | null | 247274 | null |
611572 | 1 | null | null | 0 | 9 | This is what my data looks like.
```
"idno" "visit number" "time passed from 1st visit" "Medication use" "bone density"
1 1 0 1 2000
1 2 1 1 1900
1 3 3 1 1800
2 1 0 0 2200
2 2 2 0 1980
3 1 0 1 2050
3 2 1 1 1980
3 3 3 1 1800
3 4 5 1 1780
4 1 0 0 1980
4 2 3 0 1800
```
I have patients specified with `idno`. Each patient has multiple visits in which bone density is measured. The time interval between visits is measured in years. Each patient is a "medication user" or "non-user" over the study period.
My research question is: how "medication use" impacts "bone density" over time in those that "use the medication" in comparison to those who "do not use". My hypothesis is that medication use may lead to a significant decrease in bone density over time.
I have two candidate R models and am not sure which I should use.
```
lmer(bonedensity ~ medicationuse * timepassedfrom1stvisit + (1 | idno),
data = mydata)
lmer(bonedensity ~ medicationuse + (1 | idno),
data = mydata)
```
How would the interpretation of these two models differ with my data? Does it make sense to add an interaction term between `medicationuse` and `timepassedfrom1stvisit`, considering that I already have `idno` as a grouping variable?
| can I put a time(months) variable as interaction term in linear fixed effects model to account for time-varying changes? | CC BY-SA 4.0 | null | 2023-04-02T15:25:30.880 | 2023-04-02T15:31:17.033 | 2023-04-02T15:31:17.033 | 28500 | 384752 | [
"r",
"regression",
"mixed-model",
"linear-model"
] |
611573 | 1 | 611585 | null | 0 | 84 | If we draw independent samples $x_0\sim N(0,\sigma^2), x_i \sim N(\mu_i,\sigma^2)$
for $i \in \{1, 2, \cdots, N-1\}$, what is the probability that $|x_0|$ is the smallest among all samples $|x_i|$'s? Under what condition is $|x_0|$ the smallest almost surely?
I was trying to calculate $\mathbb{P}[|x_0|<|x_i|]$ for some $i$. Following the suggestion [here](https://math.stackexchange.com/questions/803421/probability-that-one-folded-normal-is-bigger-than-another), I get the probability $(1−\Phi(\frac{−\mu_i}{2\sigma}))(1−\Phi(\frac{−\mu_i}{\sigma}))+(1−\Phi(\frac{−\mu_i}{2\sigma}))\Phi(−\frac{\mu_i}{\sigma})$. But I'm not sure if it is the right approach.
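For what it's worth, here is a quick Monte Carlo check I put together (with my own arbitrary choice $\mu_i = 1$, $\sigma = 1$ for all $i$), estimating both the pairwise probability and the probability that $|x_0|$ is the smallest overall:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0
mu = np.array([1.0, 1.0, 1.0, 1.0])     # my choice of mu_1, ..., mu_{N-1}
reps = 200_000

x0 = rng.normal(0.0, sigma, reps)
xi = rng.normal(mu, sigma, size=(reps, mu.size))

p_pair = np.mean(np.abs(x0) < np.abs(xi[:, 0]))                     # P(|x0| < |x_1|)
p_all = np.mean(np.all(np.abs(x0)[:, None] < np.abs(xi), axis=1))   # P(|x0| smallest)
print(p_pair, p_all)
```

Comparing `p_pair` against a candidate closed-form expression is how I have been trying to validate my attempted calculation above.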
| Minimum of folded normal distributions | CC BY-SA 4.0 | null | 2023-04-02T15:26:10.977 | 2023-04-03T16:09:12.037 | 2023-04-02T15:50:56.500 | 252039 | 252039 | [
"probability",
"normal-distribution",
"convergence",
"folded-normal-distribution"
] |
611574 | 1 | null | null | 0 | 15 | What is the best method for selecting the moving average value to be used as an input variable for a feedforward neural network (FFNN) in a univariate time series forecasting problem where lag values can be obtained from ACF values and cut-off lags?
| Deciding which moving average to feed as input to a neural network | CC BY-SA 4.0 | null | 2023-04-02T15:30:59.277 | 2023-04-02T15:30:59.277 | null | null | 384755 | [
"machine-learning",
"time-series",
"forecasting",
"moving-average"
] |
611575 | 1 | null | null | 1 | 19 | I am trying to understand how to incorporate noisy gradient information in Gauss Process Regression using a Bayesian framework and I need your advice. I am following the following approach:
Let $\mathcal{D} = (x,y)_i$ be noisy data and let $\mathcal{D}_g = (x_g,y')_j$ be noisy gradient observations. The joint prior should then be given by
\begin{equation}\label{eq:EGP}
p(\textbf{f},\textbf{f}'\mid \textbf{X},\textbf{X}_g) \propto \mathcal{N}\left (\begin{bmatrix}
\textbf{f}\\
\textbf{f}'
\end{bmatrix} \mid \textbf{0},\tilde{\textbf{K}}\right )
\end{equation}
where $\tilde{\textbf{K}}$ is the block covariance matrix between $\textbf{X}$ and $\textbf{X}_g$.
The joint likelihood should look like this, regarding the noise model
\begin{align}
p(\textbf{y},\textbf{y}'\vert \textbf{f},\textbf{f}') &\propto \prod_{i}^{n}\exp\left (-\frac{1}{2} \left ( y(\textbf{x})_i-f_i \right )^2\sigma^{-2}_i \right ) \prod_{j}^{m\cdot d}\exp\left (-\frac{1}{2} \left ( y'_j(\textbf{x})-f_j' \right )^2\sigma^{-2}_{g,j} \right ) \nonumber \\
&= \prod_{i}^{n+m\cdot d}\exp\left (-\frac{1}{2} \left ( \tilde{y}_i(\textbf{x})-f_i \right )^2\tilde{\sigma}^{-2}_i \right ) \nonumber \\
&= \exp\left (-\frac{1}{2} \left ( \tilde{y}(\textbf{x})-f \right )^T\tilde{\textbf{E}}\left ( \tilde{y}(\textbf{x})-f \right ) \right ).
\end{align}
and the marginal likelihood should be
\begin{align}
p(\textbf{y},\textbf{y}'\vert \textbf{X},\textbf{X}_g) &= \iint p(\textbf{y},\textbf{y}'\vert \textbf{f},\textbf{f}')p(\textbf{f},\textbf{f}'\vert \textbf{X},\textbf{X}_g) d\textbf{f}d\textbf{f}' \nonumber \\
&= \iint \mathcal{N}(\tilde{\textbf{y}}\mid \textbf{f},\textbf{f}', \tilde{\textbf{E}}) \cdot \mathcal{N}(\textbf{0},\tilde{\textbf{K}}) d\textbf{f}d\textbf{f}' \nonumber \\
&\propto \mathcal{N}\left(\tilde{\textbf{y}}\vert\textbf{0} , \tilde{\textbf{K}}+\tilde{\textbf{E}} \right)
\end{align}
where $\tilde{\textbf{y}} = (\textbf{y},\textbf{y}')$.
Up to this point, is the approach correct? I am not sure about the integral of the marginal likelihood.
The joint posterior, using Bayes' rule, is then given by
\begin{align}
p(\textbf{f},\textbf{f}'\vert\textbf{y},\textbf{y}',\textbf{X},\textbf{X}_g) &= \frac{p(\textbf{y},\textbf{y}'\mid \textbf{f},\textbf{f}')p(\textbf{f},\textbf{f}'\mid \textbf{X},\textbf{X}_g)}{p(\textbf{y},\textbf{y}' \mid \textbf{X},\textbf{X}_g)} \nonumber \\
&\propto\mathcal{N}(\textbf{y},\textbf{y}'\vert \textbf{f},\textbf{f}', \tilde{\textbf{E}}) \cdot \mathcal{N}(\textbf{0},\tilde{\textbf{K}}) \nonumber \\
&=\mathcal{N}\left (\textbf{f},\textbf{f}' \mid \left(\tilde{\textbf{K}}^{-1}+\tilde{\textbf{E}}^{-1}\right)^{-1}\tilde{\textbf{E}}^{-1}\tilde{\textbf{y}},\left(\tilde{\textbf{K}}^{-1}+\tilde{\textbf{E}}^{-1}\right)^{-1} \right )\end{align}
where $\tilde{\textbf{E}}:= \mathrm{diag}\left ( \sigma_1^2,\dots,\sigma_n^2,\sigma_{11}^2,\dots,\sigma_{1d}^2,\dots,\sigma_{m1}^2,\dots,\sigma_{md}^2 \right )$.
I have more problems with the predictive distribution. Let $\textbf{x}^*$ be an unseen data point; then I'm interested in $ p(y^* \vert\textbf{x}^*,\textbf{X},\textbf{X}_g) $, i.e. the distribution given the gradient data.
\begin{align}
p(y^* \vert\textbf{x}^*,\textbf{X},\textbf{X}_g) &= \iint p(y^*\vert \textbf{f},\textbf{f}',\textbf{x}^*,\textbf{X},\textbf{X}_g)p(\textbf{f},\textbf{f}'\mid \textbf{X},\textbf{X}_g) d\textbf{f} d\textbf{f}' \nonumber\\
&= \iint \mathcal{N}\left ( y^* \mid m^* + \tilde{\textbf{k}}^*\textbf{K}^{-1}\left ( \textbf{f},\textbf{f}' \right ),\textbf{k}_{y^*y^*}-\tilde{\textbf{k}}^{*T}\textbf{K}^{-1}\tilde{\textbf{k}}^* \right ) \nonumber\\
&\times \mathcal{N}\left (\left ( \textbf{f},\textbf{f}' \right ) \mid \left(\tilde{ \textbf{K}}^{-1}+\tilde{ \textbf{E}}^{-1}\right)^{-1}\tilde{ \textbf{E}}^{-1}\left ( \textbf{y},\textbf{y}' \right ),\left(\tilde{ \textbf{K}}^{-1}+\tilde{ \textbf{E}}^{-1}\right)^{-1} \right ) d\textbf{f} d\textbf{f}' \nonumber \\
&= \mathcal{N}\left(y^* \mid m^* + \tilde{\textbf{k}}^{*T}(\tilde{\textbf{K}}+\tilde{\textbf{E}})^{-1}\left ( \textbf{y},\textbf{y}' \right ),\textbf{k}^{**}-\tilde{\textbf{k}}^{*T}(\tilde{\textbf{K}}+\tilde{\textbf{E}})^{-1}\tilde{\textbf{k}}^*\right).
\end{align}
where $p(y^*\vert \textbf{f},\textbf{f}',\textbf{x}^*,\textbf{X},\textbf{X}_g)$ is given by
\begin{equation}
\begin{bmatrix}
f\\
f'\\
y^*
\end{bmatrix} \sim \mathcal{N}\left (
\begin{bmatrix}
0\\
0\\
m^*
\end{bmatrix} \mid
\begin{bmatrix}
k_{ff} & k_{ff'} & k_{fy^*} \\
k_{f'f} & k_{f'f'} & k_{f'y^*} \\
k_{y^*f} & k_{f',y^*} & k_{y^*y^*}
\end{bmatrix}\right ).
\end{equation}
Is this the right idea, or do I have to calculate the joint predictive distribution $p(y^*,y'^{*}\vert\textbf{x}^*,\textbf{X},\textbf{X}_g)$ and, if so, do I just marginalize that distribution in order to recover $ p(y^* \vert\textbf{x}^*,\textbf{X},\textbf{X}_g)$? I'm really confused and would appreciate your help.
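To make the setup concrete, here is a 1-D numerical sketch of the joint prior and the predictive mean I have in mind, using the squared-exponential kernel and its derivatives (everything below is my own toy construction):

```python
import numpy as np

ell = 1.0  # length-scale of k(a, b) = exp(-(a - b)^2 / (2 ell^2))

def k_ff(a, b):                      # cov(f(a), f(b))
    return np.exp(-np.subtract.outer(a, b) ** 2 / (2 * ell**2))

def k_fg(a, b):                      # cov(f(a), f'(b)) = dk/db
    return k_ff(a, b) * np.subtract.outer(a, b) / ell**2

def k_gg(a, b):                      # cov(f'(a), f'(b)) = d^2 k / (da db)
    d = np.subtract.outer(a, b)
    return k_ff(a, b) * (1.0 / ell**2 - d**2 / ell**4)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 3.0, 7)                         # inputs X with noisy f
xg = np.linspace(0.0, 3.0, 7)                        # inputs X_g with noisy f'
y = np.sin(x) + 1e-3 * rng.normal(size=x.size)
yg = np.cos(xg) + 1e-3 * rng.normal(size=xg.size)

# block prior covariance K_tilde and diagonal noise E_tilde
K = np.block([[k_ff(x, x),     k_fg(x, xg)],
              [k_fg(x, xg).T,  k_gg(xg, xg)]])
E = 1e-6 * np.eye(x.size + xg.size)
ytil = np.concatenate([y, yg])

# predictive mean at x*: k*^T (K_tilde + E_tilde)^{-1} y_tilde
xs = np.array([1.25])
ks = np.concatenate([k_ff(xs, x), k_fg(xs, xg)], axis=1)
mean = (ks @ np.linalg.solve(K + E, ytil))[0]
print(mean, np.sin(1.25))
```

With these tiny noise levels the predictive mean lands very close to $\sin(x^*)$, so at least numerically the $(\tilde{\textbf{K}}+\tilde{\textbf{E}})^{-1}$ form seems to behave as expected.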
| Gaussian Process Regression with noisy gradient observation | CC BY-SA 4.0 | null | 2023-04-02T15:47:07.550 | 2023-04-02T15:47:07.550 | null | null | 309706 | [
"regression",
"gaussian-process"
] |
611576 | 1 | 611597 | null | 0 | 79 | This is my data. It has no gaps in survival or in the predictor - for the sake of simplicity in this example. I want to see whether multiply generating the same dataset will give - after pooling - the same results as the raw approach.
Of course my REAL data will have some gaps. The gaps will occur in a few variables used to derive the survival (=success) status, which is a compound endpoint. After the imputation the status will vary a bit between the samples. Times will always be the same.
So this is my exemplary data. Not very meaningful, but it suffices:
```
surv_data <- as.data.frame(list(time=c(4,3,1,1,2,2,3,5,2,4,5,1),
status=c(1,1,1,0,1,1,0,0,1,1,0,0),
x=c(0,2,1,1,3,2,0,1,1,2,0,1),
sex=c(0,0,0,0,1,1,1,1,0,1,0,0)))
```
Let's "impute" (nothing will actually be imputed, just generated) 10 identical datasets. All will have the same survival, time, and sex.
```
imp <- mice(surv_data,m=10)
```
Now I will perform the fake analysis and pool the results by hand, using the `pool.scalar()` function from the mice package.
```
km_imp <- with(imp, with(summary(survfit(Surv(time, status) ~ sex), times = 1:5, extend = TRUE),
data.frame(strata, time, surv, std.err, lower, upper)))
> km_imp$analyses %>%
bind_rows() %>%
group_by(strata, time) %>%
summarize(pooled = data.frame(pool.scalar(Q=surv, U=std.err^2, n=12, k = 1)[c("qbar", "t")])) %>%
unpack(pooled) %>%
mutate(surv=qbar, SE = sqrt(t), LCI = qbar - 1.96*SE, UCI=qbar + 1.96*SE)
strata time qbar t surv SE LCI UCI
<fct> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 sex=0 1 0.857 0.0175 0.857 0.132 0.598 1.12
2 sex=0 2 0.643 0.0443 0.643 0.210 0.230 1.06
3 sex=0 3 0.429 0.0503 0.429 0.224 -0.0110 0.868
4 sex=0 4 0.214 0.0355 0.214 0.188 -0.155 0.584
5 sex=0 5 0.214 0.0355 0.214 0.188 -0.155 0.584
6 sex=1 1 1 0 1 0 1 1
7 sex=1 2 0.6 0.048 0.6 0.219 0.171 1.03
8 sex=1 3 0.6 0.048 0.6 0.219 0.171 1.03
9 sex=1 4 0.3 0.057 0.3 0.239 -0.168 0.768
10 sex=1 5 0.3 0.057 0.3 0.239 -0.168 0.768
```
The bounds of the CI exceed 1 or 0, but that's natural for the normal approximation. I will handle it later via log or log-log CIs.
and the data from the raw function - no imputation, no pooling:
```
> with(summary(survfit(Surv(time, status) ~ sex, data=surv_data, conf.type="plain"), times = 1:5, extend = TRUE),
+ data.frame(strata, time, surv, lower, upper))
strata time surv lower upper
1 sex=0 1 0.8571 0.5979 1.0000
2 sex=0 2 0.6429 0.2304 1.0000
3 sex=0 3 0.4286 0.0000 0.8681
4 sex=0 4 0.2143 0.0000 0.5837
5 sex=0 5 0.2143 0.0000 0.5837
6 sex=1 1 1.0000 1.0000 1.0000
7 sex=1 2 0.6000 0.1706 1.0000
8 sex=1 3 0.6000 0.1706 1.0000
9 sex=1 4 0.3000 0.0000 0.7679
10 sex=1 5 0.3000 0.0000 0.7679
```
Besides that, if we pretend >1 = 1 and <0 = 0, then they fully agree.
Let me also try the log-log, to avoid these artefacts.
```
km_imp$analyses %>%
bind_rows() %>%
mutate(surv_log = log(-log(surv))) %>%
group_by(strata, time) %>%
summarize(pooled = data.frame(pool.scalar(Q=surv_log, U=std.err^2, n=12, k = 1)[c("qbar", "t")])) %>%
unpack(pooled) %>%
mutate(surv=exp(-exp(qbar)), SE = (sqrt(t)/surv)/log(surv),
LCI = exp(-exp(qbar - 1.96*SE)) , UCI=exp(-exp(qbar + 1.96*SE)))
```
giving:
```
strata time qbar t surv SE LCI UCI
<fct> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 sex=0 1 -1.87 0.0175 0.857 -1.00 0.334 0.979
2 sex=0 2 -0.817 0.0443 0.643 -0.741 0.151 0.902
3 sex=0 3 -0.166 0.0503 0.429 -0.618 0.0583 0.777
4 sex=0 4 0.432 0.0355 0.214 -0.571 0.00894 0.605
5 sex=0 5 0.432 0.0355 0.214 -0.571 0.00894 0.605
6 sex=1 1 -Inf NaN 1 NaN NaN NaN
7 sex=1 2 -0.672 0.048 0.6 -0.715 0.126 0.882
8 sex=1 3 -0.672 0.048 0.6 -0.715 0.126 0.882
9 sex=1 4 0.186 0.057 0.3 -0.661 0.0123 0.719
10 sex=1 5 0.186 0.057 0.3 -0.661 0.0123 0.719
```
vs.
```
with(summary(survfit(Surv(time, status) ~ sex, data=surv_data, conf.type="log-log"), times = 1:5, extend = TRUE),
data.frame(strata, time, surv, lower, upper))
strata time surv lower upper
1 sex=0 1 0.8571 0.334054 0.9786
2 sex=0 2 0.6429 0.151467 0.9017
3 sex=0 3 0.4286 0.058274 0.7768
4 sex=0 4 0.2143 0.008937 0.6047
5 sex=0 5 0.2143 0.008937 0.6047
6 sex=1 1 1.0000 1.000000 1.0000
7 sex=1 2 0.6000 0.125730 0.8818
8 sex=1 3 0.6000 0.125730 0.8818
9 sex=1 4 0.3000 0.012302 0.7192
10 sex=1 5 0.3000 0.012302 0.7192
```
All fine, but my question is: isn't this too naive, too simplistic? In the real case I will have data with gaps, and the imputed datasets won't be the same. I read somewhere that the complementary log-log transformation should be applied to survival probabilities in order to pool them with Rubin's rules. But cloglog and log-log are just mirrors of each other:
[](https://i.stack.imgur.com/AmZVu.png)
What's your opinion, and can you give an example of a solution?
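For reference, my understanding of the scalar pooling rule that `pool.scalar()` applies (Rubin's rules), written out in plain Python with illustrative numbers of my own:

```python
import numpy as np

def pool_scalar(Q, U):
    """Rubin's rules for m completed-data estimates Q with variances U
    (my plain restatement of what mice::pool.scalar returns as qbar and t)."""
    Q, U = np.asarray(Q, float), np.asarray(U, float)
    m = Q.size
    qbar = Q.mean()                 # pooled point estimate
    ubar = U.mean()                 # average within-imputation variance
    b = Q.var(ddof=1)               # between-imputation variance
    t = ubar + (1 + 1 / m) * b      # total variance
    return qbar, t

# identical "imputations": between-variance is zero, so t reduces to ubar
qbar, t = pool_scalar([0.643] * 10, [0.210**2] * 10)
print(qbar, t)   # 0.643 0.0441
```

With identical "imputations" the between-imputation variance is zero, which is exactly why pooling identical datasets reproduces the raw analysis.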
---
I was provided with this [reference](https://www.pharmasug.org/proceedings/2017/SP/PharmaSUG-2017-SP05.pdf). My mistake was that I didn't transform the variance passed to the pooling method.
The code is in SAS, but I translated it to R, hopefully without mistakes. I currently have no access to SAS to validate it numerically. If you find any mistakes, please let me know in a comment! Maybe someone will find this code useful.
```
km_imp1$analyses %>%
bind_rows() %>%
mutate(K = log(-log(surv)),
U = std.err^2 / (surv * log(surv))^2 ) %>%
group_by(strata, time) %>%
summarize(pooled = data.frame(pool.scalar(Q=K, U=U, n=Inf, k = 1)[c("qbar", "t")])) %>%
unpack(pooled) %>%
mutate(surv = exp(-exp(qbar)),
SE = abs(sqrt(t)*surv*log(surv)),
LCI = surv^(exp(1.96*SE)), UCI=surv^(exp(-1.96*SE)))
```
and the result:
```
strata time qbar t surv SE LCI UCI
<fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 trt=1 3 -4.23 1.00 0.986 0.0144 0.985 0.986
2 trt=1 50 -0.952 0.0461 0.680 0.0563 0.650 0.708
3 trt=1 100 -0.372 0.0307 0.502 0.0606 0.460 0.542
4 trt=2 3 -3.10 0.333 0.956 0.0249 0.954 0.958
5 trt=2 50 -0.541 0.0343 0.559 0.0602 0.520 0.596
6 trt=2 100 0.0959 0.0249 0.333 0.0578 0.292 0.374
```
Now I'm doubly confused. I found this article: [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4517373/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4517373/)
and implemented it:
```
km_imp1$analyses %>%
bind_rows() %>%
mutate(K = log(-log(1-surv)),
U = std.err^2 / ((1-surv) * log(1-surv))^2 ) %>%
group_by(strata, time) %>%
summarize(pooled = data.frame(pool.scalar(Q=K, U=U, n=Inf, k = 1)[c("qbar", "t")])) %>%
unpack(pooled) %>%
mutate(surv = 1-exp(-exp(qbar)),
SE = sqrt(t),
LCI = 1-exp(-exp(qbar - 1.96*SE)),
UCI = 1-exp(-exp(qbar + 1.96*SE)))
strata time qbar t surv SE LCI UCI
<fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 trt=1 3 1.44 0.0550 0.986 0.234 0.931 0.999
2 trt=1 50 0.130 0.0239 0.680 0.154 0.569 0.786
3 trt=1 100 -0.361 0.0305 0.502 0.175 0.390 0.625
4 trt=2 3 1.14 0.0327 0.956 0.181 0.888 0.988
5 trt=2 50 -0.201 0.0278 0.559 0.167 0.446 0.678
6 trt=2 100 -0.905 0.0458 0.333 0.214 0.233 0.459
```
The results are now different. Be careful when implementing these methods.
| Is this way of pooling Kaplan-Meier estimates correct? Example made with R mice::pool_scalar | CC BY-SA 4.0 | null | 2023-04-02T15:47:19.253 | 2023-04-02T21:06:52.187 | 2023-04-02T21:06:52.187 | 384756 | 384756 | [
"survival",
"data-transformation",
"cox-model",
"multiple-imputation"
] |
611577 | 1 | null | null | 0 | 21 | I want to compare some short-rate models based on likelihood and AIC. I will use least squares estimates.
Let's take the Vasicek model as an example and its discretized version:
$$
dr = \alpha(\beta-r)dt + \sigma dW
$$
$$
r_{t+1} = \alpha\beta\Delta t + (1-\alpha\Delta t)r_t + \sigma\sqrt{\Delta t} \varepsilon_t
$$
With R, the coefficients of `regression <- lm(rtnext ~ rt)` can be used to estimate $\alpha$ and $\beta$, and the residuals can be used for the 3rd parameter.
My question is as follows:
As the loglikelihood depends only on [RSS](https://en.wikipedia.org/wiki/Akaike_information_criterion#Comparison_with_least_squares), it seems it does not depend on $\sigma$. Can I take $\sigma$ into consideration, or did I miss something?
Note: I used the same implementation in R as [statsmodels](https://github.com/statsmodels/statsmodels/blob/6b66b2713c408483dfeb069212fa57c0ee1e078b/statsmodels/regression/linear_model.py#L1905)
And an additional, more straightforward example in R:
```
N <- 1000
k <- 3
x <- rnorm(N)
e <- rnorm(N)
y <- 0.5 + 2.3 *x + 1.5*e # Can I consider 1.5 as an additional parameter?
reg <- lm(y ~ x)
reg
(RSS <- sum(reg$residuals^2))
nobs <- N/2
(llf <- -nobs * log(RSS) - (1 + log(pi/nobs))*nobs)
logLik(reg)
```
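To see where $\sigma$ goes, I compared the concentrated log-likelihood above with the full Gaussian log-likelihood evaluated at $\hat\sigma^2 = RSS/N$ (a Python sketch of mine mirroring the R example, with the same arbitrary data-generating numbers):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
x = rng.normal(size=N)
y = 0.5 + 2.3 * x + 1.5 * rng.normal(size=N)

# OLS fit
X = np.column_stack([np.ones(N), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
rss = np.sum((y - X @ beta) ** 2)

# concentrated log-likelihood (as in lm / statsmodels)
nobs2 = N / 2.0
llf_conc = -nobs2 * np.log(rss) - (1 + np.log(np.pi / nobs2)) * nobs2

# full Gaussian log-likelihood, evaluated at the MLE sigma^2 = RSS / N
sigma2_hat = rss / N
llf_full = -N / 2 * np.log(2 * np.pi * sigma2_hat) - rss / (2 * sigma2_hat)

print(np.isclose(llf_conc, llf_full))   # True
```

So the RSS-only expression is the log-likelihood with $\sigma$ set to its ML value $RSS/N$ - which is the behaviour I'm asking about.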
| Likelihood of least squares estimates of Vasicek model | CC BY-SA 4.0 | null | 2023-04-02T15:49:59.723 | 2023-04-02T17:19:52.220 | 2023-04-02T17:19:52.220 | 236453 | 236453 | [
"r",
"least-squares",
"aic"
] |
611578 | 1 | null | null | 0 | 33 | I'm implementing Newton's method for a simple logistic regression model but I keep getting very large values for the inverse of the Hessian matrix. I am using the standard formulas found in books... I'm pasting a working example:
```
import numpy as np
from math import log, exp
obs_x = np.array([0, 0, 0, 0.1, 0.1, 0.3, 0.3, 0.9, 0.9, 0.9])
obs_y = np.array([0, 0, 1, 0, 1, 1, 1, 0, 1, 1])
def g(b):
negative_loglik = 0.0
for i in range(len(obs_x)):
val = b[0] + b[1] * obs_x[i]
negative_loglik += log(1 + exp(val)) - val * obs_y[i]
return negative_loglik
def grad(b):
gradient = np.array([0.0, 0.0])
for i in range(len(obs_x)):
val = b[0] + b[1] * obs_x[i]
val = 1 / (1 + exp(-val)) - obs_y[i]
gradient[0] += val
gradient[1] += val * obs_x[i]
return gradient
def hess(b):
hessian = np.array([[0.0, 0.0], [0.0, 0.0]])
for i in range(len(obs_x)):
val = b[0] + b[1] * obs_x[i]
val = 1 / (1 + exp(-val))
val = val * (1 - val)
hessian[0, 0] += val
hessian[0, 1] += val * obs_x[i]
hessian[1, 0] += val * obs_x[i]
hessian[1, 1] += val * obs_x[i] * obs_x[i]
return hessian
init_b = np.array([-7, -7])
print("Negative log likelihood:", g(init_b))
print("Gradient:", grad(init_b))
print("Hessian:", hess(init_b))
print("Inverse of Hessian:", np.linalg.inv(hess(init_b)))
```
Can someone help me?
| Large values in the inverse of a Hessian | CC BY-SA 4.0 | null | 2023-04-02T16:15:57.700 | 2023-04-02T16:15:57.700 | null | null | 384758 | [
"logistic",
"python",
"optimization",
"hessian"
] |
611579 | 1 | null | null | 0 | 34 | I am interested in the constrained L1 Lasso problem:
$$\min_{\beta\in \mathbb{R}^p:\sum_{i\in[p]}|\beta_i|=1} \|X\beta -s\|^2, (1)$$
for design matrix $X\in \mathbb{R}^{n \times p}$ and target $s\in \mathbb{R}^n$.
In particular, I am interested in the setting where there are more features than examples, i.e. $p>n$ (and potentially $p\gg n$). Is there any algorithm that solves $(1)$ up to error, say $\epsilon$, with a computational complexity of order $O(p n^2 \log \epsilon^{-1})$ or something similar.
Specific requirements:
- I want an algorithm with a computational complexity that scales linearly (up to potentially log factors) with the number of features $p$. For example, I do not want a complexity that scales with $O(p^k)$ for $k\geq 2$.
- I specifically want a linearly-convergent algorithm, i.e. one that requires $O(\log(1/\epsilon))$ iterations instead of $O(poly(1/\epsilon))$.
Note that the answer given in [Computational complexity of the lasso (lars vs coordinate descent)](https://stats.stackexchange.com/questions/218087/computational-complexity-of-the-lasso-lars-vs-coordinate-descent) does not give me what I want; it mentions an algorithm with a computational complexity that scales with $p^3 + n p^2$ (if I adapt their notation to my setting).
| Computational complexity of L1 LASSO | CC BY-SA 4.0 | null | 2023-04-02T16:22:29.003 | 2023-04-02T16:22:29.003 | null | null | 149210 | [
"regression",
"lasso",
"time-complexity"
] |
611580 | 2 | null | 610853 | 1 | null | As Stephan Kolassa said in a comment, simulation is the best way to proceed. With modern computing technology it's reasonably straightforward to use your understanding of the subject matter to evaluate a wide range of scenarios.
Analytical formulas for power and sample size date back to times when data crunching was done by hand on [mechanical calculators](https://en.wikipedia.org/wiki/Mechanical_calculator#Mechanical_calculators_reach_their_zenith). That was before [digital computers](https://en.wikipedia.org/wiki/Computer#Digital_computers) were commercially available, long before electronic calculators like those produced by [Wang](https://en.wikipedia.org/wiki/Wang_Laboratories#Calculators) or [Hewlett-Packard](https://en.wikipedia.org/wiki/Hewlett-Packard_9100A) became available to individuals in the 1960s, and even longer before personal computers extended major computing capacity to a wide audience. At the time those analytical power formulas were developed, they were about the best one could do.
Although analytical power formulas provide general guidance, the large level of uncertainty about the values of the parameters in the formula means that it's wise to evaluate a range of possibilities in study design. See [this page](https://stats.stackexchange.com/q/567560/28500), for example, about the difficulty simply in evaluating the value of a linear-regression error $\sigma^2$ from a limited data set. You want to be covered in case your initial estimate is incorrect.
Simulation allows you to handle more complicated situations like the crossover, stratified, and clustered designs that you mention in a comment. If your outcome isn't a simple continuous value as in a linear regression, say a binomial or survival outcome, simulation is even more likely to prevent you from being led astray.
The simulations provide information about the types of results that you might end up finding. You then have to apply judgment to choose the sample size based on that information.
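As a minimal sketch of what such a simulation looks like for a plain two-group comparison (a normal-approximation z-test, to keep it dependency-free; your own simulation would mirror your actual design and outcome type):

```python
import math
import numpy as np

def simulated_power(n, effect, sd=1.0, reps=2000, seed=1):
    """Monte Carlo power of a two-sided two-sample z-test at alpha = 0.05
    (normal approximation; adequate for the largish n used here)."""
    rng = np.random.default_rng(seed)
    z_crit = 1.959964                       # two-sided 5% critical value
    rejections = 0
    for _ in range(reps):
        a = rng.normal(0.0, sd, n)
        b = rng.normal(effect, sd, n)
        se = math.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
        if abs((b.mean() - a.mean()) / se) > z_crit:
            rejections += 1
    return rejections / reps

power = simulated_power(n=100, effect=0.5)
```

Re-running `simulated_power` over a grid of effect sizes, SDs, and sample sizes is exactly how you evaluate the range of scenarios discussed above.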
| null | CC BY-SA 4.0 | null | 2023-04-02T16:35:12.997 | 2023-04-02T16:35:12.997 | null | null | 28500 | null |
611581 | 1 | null | null | 0 | 35 | For $(X_1,X_2)$ ~ $Normal(\mu, \Sigma)$:
$$E(X_2|X_1)=\mu_2+\rho*\sigma_2\frac{X_1-\mu_1}{\sigma_1}$$
I am trying to derive the $E(Z_2|Z_1)$ for $(Z_1,Z_2)$~$LogNormal(\mu, \Sigma)$. I guess I could calculate $E(Z_2|Z_1)$ as follows: $$E(Z_2|Z_1)=\int\frac{f_{Z_1,Z_2}}{f_{Z_1}}*Z_2*dZ_2$$ but this seems too complicated. Maybe we can use the relationship between lognormal and normal variables to obtain it in a simpler way? At the end of the day, I am looking to derive this expression so whatever approach is taken will work for me.
The expression for $E(Z_2|Z_1)$ is given [here](https://www.jstor.org/stable/2334167) in (2.3): $$E(Z_2|Z_1)=exp(\mu_2+\rho*\frac{\sigma_2}{\sigma_1}(logZ_1-\mu_1)+0.5*\sigma_2^2(1-\rho^2))$$ I am getting something different.
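For what it's worth, the published expression can be checked by simulation: draw $(X_1,X_2)$ bivariate normal, set $Z_i=\exp(X_i)$, and compare the mean of $Z_2$ within a narrow band of $Z_1$ values against the formula (band width and parameter values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2, s1, s2, rho = 0.0, 0.0, 1.0, 1.0, 0.5
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
x = rng.multivariate_normal([mu1, mu2], cov, size=2_000_000)
z1, z2 = np.exp(x[:, 0]), np.exp(x[:, 1])

z0 = 1.0                                             # condition on Z1 near z0
band = np.abs(np.log(z1) - np.log(z0)) < 0.05
empirical = z2[band].mean()                          # E(Z2 | Z1 ~ z0), Monte Carlo
formula = np.exp(mu2 + rho * (s2 / s1) * (np.log(z0) - mu1)
                 + 0.5 * s2**2 * (1 - rho**2))       # expression (2.3)
```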
| Deriving the Expectation of the conditional distribution for the bivariate lognormal distribution | CC BY-SA 4.0 | null | 2023-04-02T16:38:00.617 | 2023-04-02T16:55:44.887 | 2023-04-02T16:55:44.887 | 198058 | 198058 | [
"conditional-expectation",
"lognormal-distribution",
"bivariate"
] |
611582 | 1 | null | null | 2 | 290 | This question has been asked before, but I'd like to come back to it because I want to point out a precise issue.
Suppose we want to estimate a function $f\left( x \right)$ from data
$D = \left( {\left( {{x_1},{y_1}} \right),...,\left( {{x_n},{y_n}} \right)} \right)$
with ${y_i} = f\left( {{x_i}} \right) + {\xi _i}$, ${\xi _i}\mathop \sim \limits^{{\text{i}}{\text{.i}}{\text{.d}}{\text{.}}} \mathcal{N}\left( {0,{\sigma ^2}} \right)$ by Gaussian process functional regression.
Let $X = \left( {{x_1},...,{x_n}} \right)$ and $Y = \left( {{y_1},...,{y_n}} \right)$.
The likelihood is $p\left( {\left. Y \right|X , f , \sigma } \right) \propto {\sigma ^{ - n}}\prod\limits_{i = 1}^n {{e^{ - \frac{{{{\left( {{y_i} - f({x_i})} \right)}^2}}}{{2{\sigma ^2}}}}}} $
We have a ${\text{GP}}\left( {m\left( x \right),k\left( {x,x'} \right)} \right)$ prior on $f$ with hyperparameters $m$ and $k$.
Generally speaking, we can have hyperhyperparameters ${\rm M}$ and ${\rm K}$ for $m\left( x \right)$ and $k\left( {x,x'} \right)$.
Therefore Bayes' rule reads
$ p\left( {\left. {f , \sigma , m , k ,{\rm M} , {\rm K}} \right|X , Y} \right) \propto \\
{\sigma ^{ - n}}\prod\limits_{i = 1}^n {{e^{ - \frac{{{{\left( {{y_i} - f\left( {{x_i}} \right)} \right)}^2}}}{{2{\sigma ^2}}}}}} {\text{GP}}\left( {m\left( x \right),k\left( {x,x'} \right)} \right)p\left( {\left. m \right|{\rm M}} \right)p\left( {\left. k \right|{\rm K}} \right)p\left( {\rm M} \right)p\left( {\rm K} \right)p\left( \sigma  \right) \\
$
and that's all we should need.
The problem is that we have something more that does not appear in Bayes rule at this point: the ${\text{GP}}\left( {m\left( x \right),k\left( {x,x'} \right)} \right)$ prior is used to assign the probability distribution $\mathcal{N}\left( {m\left( X \right),k\left( {X,X} \right)} \right)$ of r.v. $\left. {f\left( X \right)} \right|X,m,k$ that is used in the update equations to get the posterior Gaussian process.
It seems there is no place for $f(X)|X, m , k$ in Bayes' rule because we can only add hyperparameters, and $f\left( X \right)$ is not such a hyperparameter but a function of the piece of data $X$. $f(X)|X, m ,k$ doesn't look like a prior because it is conditional on $X$ and depends on $X$; nor a likelihood, because the likelihood is $\left. Y \right|X , f , \sigma$; nor a posterior, because all posteriors are conditional on $X , Y$.
How to plug $f(X)|X, m , k$ in Bayes rule above and where, given we can only add hyperparameters?
If we can't, where does ${\mathcal{N}}\left( {m\left( X \right),k\left( {X,X} \right)} \right)$ come from?
So my question is: can we find a set/logical conjunction of hyperparameters that makes the ${\mathcal{N}}\left( {m\left( X \right),k\left( {X,X} \right)} \right)$ multivariate Gaussian appears somewhere in Bayes rule?
| Is Gaussian process functional regression a truly Bayesian method (again)? | CC BY-SA 4.0 | null | 2023-04-02T16:44:55.097 | 2023-04-07T10:33:46.207 | 2023-04-06T15:49:40.370 | 384580 | 384580 | [
"bayesian",
"gaussian-process",
"inverse-problem"
] |
611583 | 1 | null | null | 1 | 135 | Currently, I am working on a research project that involves forecasting electricity consumption using data mining. In my analysis, I have detected change points in my time series using the changepoint package, with the "AMOC", "PELT", and "BinSeg" methods. For each method, I have used var, mean, and meanvar. However, I have encountered a problem in identifying a change point that occurred during the pandemic period in 2020.
```
library(readxl)
library(changepoint)
tsdata <- ts(mydata$DOM, start = c(1997,1), frequency = 12)
# Each assignment below overwrites the previous one, so keep the
# statistic you actually want last (here: change in mean, matching the plot)
amoc_cp <- cpt.var(tsdata, method = "AMOC")
amoc_cp <- cpt.meanvar(tsdata, method = "AMOC")
amoc_cp <- cpt.mean(tsdata, method = "AMOC")
# Use the PELT method to detect change points in the time series using var, mean and meanvar as above
pelt_cp <- cpt.meanvar(tsdata, method = "PELT")
# Use the BinSeg method to detect change points in the time series using var, mean and meanvar
Binseg_cp <- cpt.meanvar(tsdata, method = "BinSeg")
# Print the location and value of each change point found by the AMOC method
for (i in 1:length(amoc_cp@cpts)) {
  cat("AMOC change point", i, "at location", amoc_cp@cpts[i],
      "with value", tsdata[amoc_cp@cpts[i]], "\n")
}
# Plot the time series with change points
plot(tsdata, main = "Time Series with AMOC_MEAN Change Points")
# Add vertical lines at the dates of the change points
abline(v = time(tsdata)[amoc_cp@cpts], col = "blue")
```
This code does not identify the visible change point:
[](https://i.stack.imgur.com/CUwC6.png)
When I run the above code on the differenced time series, it clearly indicates a change point in March 2020. Shouldn't that change point also be indicated for the original data set? If so, how should I detect the change in 2020 statistically?
| change point detection of time series | CC BY-SA 4.0 | null | 2023-04-02T16:49:19.343 | 2023-04-05T05:43:21.383 | 2023-04-05T05:43:21.383 | 384755 | 384755 | [
"r",
"time-series",
"forecasting",
"data-preprocessing",
"change-point"
] |
611584 | 1 | null | null | 0 | 32 | In decision theory, given a family of distributions $(P_{\theta})_{\theta \in \Theta}$ on the data space $\mathcal{X}$, the risk $R_L(\theta, \delta)$ of an estimator $\delta$ for a given loss function $L(\theta, d)$ is given by
$$R_L(\theta, \delta) = \int_{\mathcal{X}} L(\theta, \delta(x)) \, dP_{\theta}(x)$$
I have only seen cases where the loss function $L$ is nonnegative, making the risk function $R_L$ effectively an $L^1$-norm. Just considering the $L^1$-norm of $L$ seems a bit restrictive.
For example if we wanted to talk about the tails of $L(\theta, \delta(X))$ when $X \sim P_{\theta}$, why not use for example an Orlicz norm, $\|\cdot\|_{\psi}$? Then we could look for an estimator that minimizes
$$R^{\psi}_L(\theta, \delta) = \|L(\theta, \delta(X))\|_{\psi}$$
In the case of $\psi_1$, $\psi_2$ or a similar norm, this would imply we are looking for an estimator such that $L(\theta, \delta(X))$ has (at least for a minimum assumed decay rate $\exp(-t)$, $\exp(-t^2)$ or similar) the fastest decaying tails.
I haven't been able to find anything, but I am guessing that the choice of loss function can account for this, at least to within an arbitrarily small error?
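To make the proposal concrete, here is a sketch of how $\|W\|_{\psi_1}=\inf\{t>0: E\,\psi_1(|W|/t)\le 1\}$ with $\psi_1(u)=e^u-1$ could be estimated from draws of the loss by bisection (empirical averages stand in for expectations; the exponential loss distribution is just an example):

```python
import numpy as np

def orlicz_psi1_norm(w, lo=1e-6, hi=1e6, iters=60):
    """Empirical psi_1 Orlicz norm: smallest t with mean(exp(|w|/t) - 1) <= 1."""
    w = np.abs(np.asarray(w, dtype=float))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.mean(np.expm1(w / mid)) <= 1.0:
            hi = mid          # condition met: shrink from above
        else:
            lo = mid          # condition violated: t is too small
    return hi

rng = np.random.default_rng(0)
loss = rng.exponential(scale=1.0, size=100_000)   # a sub-exponential "loss"
risk_l1 = loss.mean()                 # the usual expected-loss risk
risk_psi1 = orlicz_psi1_norm(loss)    # the tail-sensitive alternative
```

For an $\text{Exponential}(1)$ loss, $E\,e^{W/t}=t/(t-1)$, so the exact $\psi_1$ norm is $2$ while the expected loss is $1$; the two empirical values above should land near those.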
| Alternate definitions of risk in decision theory? | CC BY-SA 4.0 | null | 2023-04-02T16:58:00.063 | 2023-04-02T19:07:39.520 | 2023-04-02T18:06:24.757 | 328968 | 328968 | [
"loss-functions",
"decision-theory"
] |
611585 | 2 | null | 611573 | 3 | null | Let $\Phi_{\mu, \sigma^2}, \varphi_{\mu, \sigma^2}$ denote the CDF and pdf of a $N(\mu, \sigma^2)$ random variable respectively. In particular, denote $\Phi_{0, 1}$ and $\varphi_{0, 1}$ by $\Phi$ and $\varphi$. Clearly, we have
\begin{align*}
\varphi_{\mu, \sigma^2}(x) = \sigma^{-1}\varphi(\sigma^{-1}(x - \mu)), \;
\Phi_{\mu, \sigma^2}(x) = \Phi(\sigma^{-1}(x - \mu)).
\end{align*}
Your calculation of $P[|X_i| > |X_0|]$ for some $i$ is incorrect. By independence of $X_0$ and $X_i$, the correct result should be
\begin{align}
& P[|X_i| > |X_0|] \\
=& \int_{-\infty}^\infty P[|X_i| > |x|]\varphi_{0, \sigma^2}(x)dx \\
=& \int_{-\infty}^\infty (P[X_i > |x|] + P[X_i < -|x|])\varphi_{0, \sigma^2}(x)dx \\
=& \frac{1}{\sigma}\int_{-\infty}^\infty[\Phi(-\sigma^{-1}(|x| -\mu_i)) + \Phi(-\sigma^{-1}(|x| + \mu_i))]\varphi(\sigma^{-1}x)dx. \tag{1}
\end{align}
Unless $\mu_i = 0$, a closed-form of the integral $(1)$ is unavailable. When $\mu_i = 0$, $(1)$ can be further simplified to $1/2$, which meets the intuition.
Similarly, it can be shown that
\begin{align*}
& P[|X_1| > |X_0|, \ldots, |X_{N - 1}| > |X_0|] \\
=& \int_{-\infty}^{\infty}\prod_{i = 1}^{N - 1}P[|X_i| > |x|]\varphi_{0, \sigma^2}(x)dx \\
=& \frac{1}{\sigma}\int_{-\infty}^\infty
\prod_{i = 1}^{N - 1}[\Phi(-\sigma^{-1}(|x| -\mu_i)) + \Phi(-\sigma^{-1}(|x| + \mu_i))]\varphi(\sigma^{-1}x)dx.
\end{align*}
To show that $|X_0|$ cannot be the smallest almost surely, it is equivalent to show that the probability $P[|X_0| < \min(|X_1|, \ldots, |X_{N - 1}|)] < 1$, or $P[|X_0| \geq \min(|X_1|, \ldots, |X_{N - 1}|)] > 0$, which would be implied by $P[|X_0| \geq |X_i|] > 0$ for some $i$. Evaluating this probability is almost the same as obtaining $(1)$:
\begin{align}
P[|X_i| \leq |X_0|] = \int_{-\infty}^\infty P[|X_i| \leq |x|]\varphi_{0, \sigma^2}(x)dx. \tag{2}
\end{align}
By Theorem 15.2(ii) in Probability and Measure by Patrick Billingsley, to show that $(2)$ is strictly positive, it suffices to show the integrand $f(x) := P[|X_i| \leq |x|]\varphi_{0, \sigma^2}(x) > 0$ on a set with positive Lebesgue measure. Since $\varphi_{\mu_i, \sigma^2}(t) > 0$ for all $t$, if $x \neq 0$, then
\begin{align*}
P[|X_i| \leq |x|] =
P[-|x| \leq X_i \leq |x|] =
\int_{-|x|}^{|x|}\varphi_{\mu_i, \sigma^2}(t)dt > 0.
\end{align*}
In addition, $\varphi_{0, \sigma^2}(x) > 0$ everywhere. Hence $f(x) > 0$ for all $x \neq 0$. This completes the proof.
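Not part of the proof, but $(1)$ is easy to verify numerically against a brute-force Monte Carlo estimate; the sketch below builds $\Phi$ from `math.erf` and uses the arbitrary choice $\mu_i = 1$, $\sigma = 2$:

```python
import math
import numpy as np

def Phi(z):
    # standard normal CDF, vectorized via math.erf
    return 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))

def phi(z):
    # standard normal pdf
    return np.exp(-0.5 * z**2) / math.sqrt(2.0 * math.pi)

mu_i, sigma = 1.0, 2.0

# Trapezoidal evaluation of the integral (1)
x = np.linspace(-12.0 * sigma, 12.0 * sigma, 200_001)
integrand = (Phi(-(np.abs(x) - mu_i) / sigma)
             + Phi(-(np.abs(x) + mu_i) / sigma)) * phi(x / sigma) / sigma
p_integral = float(np.sum(integrand[:-1] + integrand[1:]) * 0.5 * (x[1] - x[0]))

# Direct Monte Carlo estimate of P[|X_i| > |X_0|]
rng = np.random.default_rng(0)
x0 = rng.normal(0.0, sigma, 1_000_000)
xi = rng.normal(mu_i, sigma, 1_000_000)
p_mc = float(np.mean(np.abs(xi) > np.abs(x0)))
```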
| null | CC BY-SA 4.0 | null | 2023-04-02T17:01:32.750 | 2023-04-03T16:09:12.037 | 2023-04-03T16:09:12.037 | 20519 | 20519 | null |
611586 | 1 | null | null | 0 | 23 | I am currently writing a research paper for grad school and it seems that I overshot my understanding, so I want to make sure that I get everything right. I am using multiple regression analysis to try to determine whether top income tax rates for individuals and corporations impact income inequality in America. I also want to determine whether there is a significant difference between the coefficients $\beta_1$ and $\beta_2$, as that information would likely influence tax policy. Because of the nature of the variables, I need to use log-log transformations. However, this raises some issues. First, I am not certain which test of significance I need to use, and despite my google searches, I am just not finding a definitive answer. Second, I'm like 94% sure that if I do need to include the interaction effects, eq. 3 is the proper way of doing it. But I haven't a clue how to parse out whether there's a significant difference in the effects of the predictor variables here when there's an interaction effect.
Possible Regression Equations and Slope coefficient equations.
$(1)\; ln(y)=β_0+β_1ln(x_1)+ β_2 ln(x_2)+ \epsilon$
$(2)\; \delta y/y = \beta_1*\delta x_1/x_1$
$\quad \; \frac{\delta y}{\delta x_1} \,\frac{x_1}{y}= \beta _1 $
$(3)\; ln(y)=β_0+β_1ln(x_1)+ β_2 ln(x_2)+β_3 ln(x_1)ln(x_2)$
$(4)\; \frac{\delta y}{y} = \beta_1* \delta x_1/x_1 + \beta_3*ln(x_2)*\delta x_1/x_1$
$\;\;\;\;\; \frac{\delta y}{\delta x_1} \,\frac{x_1}{y} = (\beta_1 + \beta_3*ln(x_2))$
$\; $
Variables
$x_1 = top\; individual\; income\; tax\; rate$
$x_2 = top \; corporate \; income \; tax \;rate$
$y_1 = Gini\;Coefficient$
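For equation (1) (no interaction term), the question of whether $\beta_1$ and $\beta_2$ differ can be addressed with a Wald test of the linear restriction $\beta_1-\beta_2=0$. A sketch on simulated data with hand-rolled OLS (the data-generating values are made up; in practice a package such as statsmodels, or `car::linearHypothesis` in R, does this directly):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.uniform(0.2, 0.9, n)                 # hypothetical individual tax rates
x2 = rng.uniform(0.2, 0.6, n)                 # hypothetical corporate tax rates
ln_y = 1.0 + 0.8 * np.log(x1) + 0.3 * np.log(x2) + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), np.log(x1), np.log(x2)])
beta = np.linalg.lstsq(X, ln_y, rcond=None)[0]
resid = ln_y - X @ beta
s2 = resid @ resid / (n - X.shape[1])         # residual variance estimate
cov = s2 * np.linalg.inv(X.T @ X)             # OLS covariance matrix

# Wald test of the restriction beta1 - beta2 = 0
r = np.array([0.0, 1.0, -1.0])
diff = r @ beta
se = np.sqrt(r @ cov @ r)
t_stat = diff / se                            # compare to t (or normal) quantiles
```

With an interaction term as in equation (3), the elasticity of $y$ with respect to $x_1$ becomes $\beta_1+\beta_3\ln(x_2)$, so the restriction to test would involve $\beta_3$ and a chosen value of $\ln(x_2)$ as well.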
| Questions about log-log regression analysis | CC BY-SA 4.0 | null | 2023-04-02T17:08:00.807 | 2023-04-02T17:08:00.807 | null | null | 384753 | [
"statistical-significance",
"multiple-regression",
"regression-coefficients",
"differences",
"elasticity"
] |
611588 | 2 | null | 474651 | 1 | null | If we consider a neural network to just be some mapping $f(x): X \mapsto Y,$ where $X$ and $Y$ are features and outcomes (basically, the term "AI" is just rebranding point estimation, which has been studied a lot in statistics), we can maybe make some progress on this question. In particular, we can note that a neural network, when trained on a dataset $(X_n,Y_n)$ is actually $f_n(x).$ The question of whether a neural network si "biased" is then whether $Ef_n(x)=f_0(x),$ where $f_0$ is the true mapping from $X$ to $Y$, chosen by nature.
In general, if a neural network can approximate any function, which I think it basically can, then definitely $f_n(x)\rightarrow^p f_0(x)$ for any $x$; in other words, as the sample size $n\rightarrow\infty$, $f_n\rightarrow f_0$ in probability (i.e., the probability that $f_n$ and $f_0$ differ shrinks to zero), so $f_n$ is consistent. This is not the same as unbiased, and some biased estimators can be consistent. To show that a neural network is unbiased would require that one evaluate the expectation $Ef_n(X)$ for a neural network by hand, which is, as far as I know, not really possible.
By regularizing in such a way that the regularization does not disappear in the limit, we are definitely causing bias that also will not disappear in the limit, and therefore also losing consistency. However, this can make sense to still do in the finite sample if the error from variance dominates the error from bias.
Looking at the "bias" of the individual weights $w_n$ in the network rather than of the function $f_n$ itself is problematic imo because the individual weights can be different and give the same function. There are not really "true" weights $w_0$ that we can assess in terms of their distance from $Ew_n$ to make statements about bias.
Now that we maybe have considered a fully flexible neural network, we can consider a CNN, which has tied parameters. This will definitely lead to bias. That said this is maybe a feature of the model, so maybe the function is consistent even under this constraint. Actually not sure about this.
| null | CC BY-SA 4.0 | null | 2023-04-02T17:29:44.793 | 2023-04-02T17:29:44.793 | null | null | 54664 | null |
611589 | 1 | null | null | 0 | 19 | I'm an engineer with a minor in statistics, so I apologize if this question is basic, but I haven't been able to find a clear resource on this topic after quite a bit of googling and looking through textbooks.
I have a data set $D$ composed of many time series curves, and I'm trying to fit a model $f(t, y_0, p)$ to the whole set of curves. For context, the model $f$ is the numerical solution to a system of delay differential equations given the initial history $y_0$ and a vector of real-valued parameters $p$. $f$ is computationally expensive. In some cases, $p$ can be a large vector with >10 elements.
A subset of $D$, $D_s$, depends only on a subset of $p$, $p_s$. The function $f$ is much faster to evaluate on $D_s$ because an analytical solution exists if those elements of $p$ not in $p_s$ are 0. I would like to leverage this situation to speed up the regression calculation, but I'm not sure if that's possible. I've considered two options:
(1) Find $p_s$ by fitting $f(p_s)$ to $D_s$, and then treat $p_s$ as constant when fitting $f$ to $D$.
This doesn't seem valid from a statistical point of view, although it would be computationally cheapest, because I wouldn't ever find a $p$ that minimizes the error across all data.
(2) Fit in two stages with some linkage: $p_s$ from $D_s$ and then $p$ from $D$ on some more constrained/weighted region of the parameter space.
In this case, I would use the information gained from the first stage to speed up the second stage. I could constrain the search to a region of the parameter space s.t. $p_s$ is close to $(p_s)_{1st\space fit}$. The problem here is that I would have to make a somewhat arbitrary choice for how the first fit constrains the parameter space of the second fit. Instead of completely constraining the search, I could somehow weight certain regions so that the optimization algorithm is more likely to check the space around the first fit.
(3) Fit in three stages. First perform the fits in option (1), but add a third fit that has been constrained by the first two fits.
None of the three options seem to be elegant solutions, and I'm not sure performing multiple partially redundant regressions will really result in less overall computing time. My problem seems general enough that I feel that there should be a "right" way to approach it.
| What are the best practices for parameter estimation when a subset of data depends only on a particular subset of your parameters? | CC BY-SA 4.0 | null | 2023-04-02T17:30:17.480 | 2023-04-02T17:30:17.480 | null | null | 384757 | [
"optimization",
"nonlinear-regression"
] |
611590 | 2 | null | 611327 | 1 | null | $R^2$ is a monotonic (decreasing) function of the sum of squared residuals, which is the numerator of the fraction below.
$$
R^2=1-\left(\dfrac{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\bar y
\right)^2
}\right)
$$
Therefore, if you increase (improve) the $R^2$, you are correct to think that you have decreased (improved) the sum of squared residuals...
...when the denominators are the same.
The trouble in your situation is that, when you apply the $\log$ transformation, you change $y_i$, $\hat y_i$, and $\bar y$. For instance, when I simulate data, I get very different denominators of $48272.05$ and $195.8113$ before and after the $\log$ transformation, respectively.
```
set.seed(2023)
N <- 1000
x <- rchisq(N, 1)
y <- 7 + 5*x + rnorm(N)
Ly <- log(y)
sum((y - mean(y))^2) # 48272.05
sum((Ly - mean(Ly))^2) # 195.8113
```
Therefore, your $R^2$ values are not really comparable, and I do not see the argument that the $\log$ transformation reduced overfitting. In particular, consider what happens when you [transform back](https://stats.stackexchange.com/a/115572/247274) to the original (non-$\log$) scale. It is not obvious that the back-transformed predictions will be better. Also, I am not even convinced that your performance is screaming out that severe overfitting has occurred.
Finally, there is no assumption that the marginal distribution of $y$ be normal, so there is not necessarily a need to $\log$-transform a skewed $y$ variable. An advantage to taking $\log(y)$ is that there is an interpretation in terms of percent change in $y$. If this makes sense for your application, you might be interested in such a transformation. However, skewness in $y$ alone is not a reason to apply a $\log$ transformation. The normality assumption in linear regression, if we even care to make such an assumption, concerns the conditional distribution, not the marginal distribution. This is a common misconception that I suspect many of these Kaggle competitors have.
As far as why you cannot get better performance out of your models, a great many factors go into a rental price, many of which (related to the way human brains function) I suspect are not included in your model. If you omit additional determinants of rental prices, then you are lacking information needed to make accurate predictions; of course your performance suffers.
| null | CC BY-SA 4.0 | null | 2023-04-02T17:45:34.377 | 2023-04-02T17:45:34.377 | null | null | 247274 | null |
611592 | 2 | null | 577353 | 1 | null | Two options come to mind.
- Apply a weighted loss function that severely penalizes the model for assigning probability to category c when the true category is a. You might not be getting accurate probability values out of your model, but if it keeps you from making a catastrophic mistake, that might be acceptable.
- Make accurate probability estimates, and if the probability of category $a$ is too high (where "too high" is determined by how catastrophic it is to mistake an a for a c), then you go with category a, even if a is not the category with the majority of the probability density. For instance, your rule might be to assign to category a if the probability of category a ever exceeds $0.3$, because then the probability of mistaking an a for something else is just too high for the damage caused by a misclassification.
| null | CC BY-SA 4.0 | null | 2023-04-02T17:51:24.850 | 2023-04-02T17:51:24.850 | null | null | 247274 | null |
611593 | 2 | null | 560129 | 0 | null | I would break this out into two component steps. First,
check for multicollinearity separately using the variance inflation factor via
```
from statsmodels.stats.outliers_influence import variance_inflation_factor
```
Once multicollinear factors/features with VIF > 5 are removed (as a general rule of thumb), run the remaining features, along with the target, through a key driver analysis via the [key-driver-analysis](https://pypi.org/project/key-driver-analysis/) Python package.
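If `statsmodels` is unavailable, the same quantity is easy to compute by hand, since $\mathrm{VIF}_j = 1/(1-R_j^2)$ with $R_j^2$ from regressing feature $j$ on the remaining features. A rough sketch (the collinear toy data are invented for illustration):

```python
import numpy as np

def vif(X):
    """VIF_j = 1 / (1 - R^2_j), regressing column j on the other columns."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ coef
        r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
a = rng.normal(size=500)
b = rng.normal(size=500)
X = np.column_stack([a, b, a + 0.1 * rng.normal(size=500)])  # col 2 ~ col 0
```

Here `vif(X)` flags columns 0 and 2 (near-duplicates of each other) while column 1 stays near 1.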
| null | CC BY-SA 4.0 | null | 2023-04-02T17:55:13.447 | 2023-04-02T17:55:13.447 | null | null | 328036 | null |
611595 | 2 | null | 505978 | 1 | null | A discriminative model means that the model discriminates between values of the outcome. Therefore, even though a linear regression is not drawing a decision boundary between two categories, it is discriminating between an outcome of $1$ vs an outcome of $1.1$ vs an outcome of $0.9$.
If your brain has a good handle on categorical discrimination, perhaps start thinking of linear regression as discriminating between high and low predictions. Then think of discriminating between high, medium, and low. Then think of discriminating between high, medium-high, medium-low, and low. Then think of discriminating between high, medium-high, medium, medium-low, and low.
Linear regression then takes the partitioning to the continuum (ditto for any other kind of "regression" model that predicts values on a continuum).
| null | CC BY-SA 4.0 | null | 2023-04-02T17:56:44.367 | 2023-04-02T17:56:44.367 | null | null | 247274 | null |
611596 | 2 | null | 611543 | 3 | null | You can also compute a standardized mean difference that does not assume homoscedasticity but that uses the square-root of the average of the two variances in the denominator. Then you don't have to choose which group's SD is used in the denominator. See Bonett (2008, 2009) for further details.
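In symbols, the variant is $d = (\bar{x}_1-\bar{x}_2)\big/\sqrt{(s_1^2+s_2^2)/2}$. A quick sketch of the computation:

```python
import numpy as np

def smd_avg_variance(x1, x2):
    """Standardized mean difference with the square root of the average of
    the two variances in the denominator (no homoscedasticity assumption)."""
    x1, x2 = np.asarray(x1, dtype=float), np.asarray(x2, dtype=float)
    denom = np.sqrt((x1.var(ddof=1) + x2.var(ddof=1)) / 2.0)
    return (x1.mean() - x2.mean()) / denom

# Toy check: means 5 vs 3, both SDs 2 -> d = 1
g1 = np.array([3.0, 5.0, 7.0])   # mean 5, sd 2
g2 = np.array([1.0, 3.0, 5.0])   # mean 3, sd 2
d = smd_avg_variance(g1, g2)     # -> 1.0
```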
Bonett, D. G. (2008). Confidence intervals for standardized linear contrasts of means. Psychological Methods, 13(2), 99–109. [https://doi.org/10.1037/1082-989X.13.2.99](https://doi.org/10.1037/1082-989X.13.2.99)
Bonett, D. G. (2009). Meta-analytic interval estimation for standardized and unstandardized mean differences. Psychological Methods, 14(3), 225–238. [https://doi.org/10.1037/a0016619](https://doi.org/10.1037/a0016619)
| null | CC BY-SA 4.0 | null | 2023-04-02T18:02:32.893 | 2023-04-02T18:02:32.893 | null | null | 1934 | null |
611597 | 2 | null | 611576 | 0 | null | The idea is to find a transformation that makes estimates most closely follow a normal distribution, so that Rubin's rules can be used for pooling among models built on multiply imputed data sets. [Stef van Buuren recommends](https://stefvanbuuren.name/fimd/sec-pooling.html) (based on literature citations*) the complementary log-log transformation for survival probabilities, and a log transformation for survival distributions or hazard ratios.
There's a potential ambiguity in the use of "complementary log-log" in a survival model. The survival function is the complement of the corresponding probability distribution function. When `survfit.formula()` uses a "log-log" confidence interval, that's `log(-log(survival))`. But that's the same as the complementary log-log transformation of the corresponding probability distribution.
[Therneau and Grambsch](https://www.springer.com/us/book/9780387987842) discuss different transformations on pages 16-17. They conclude: "as long as one avoids the 'plain' intervals, all of the options work well." That's consistent with what you show.
One is never sure that a generic recommendation holds in any specific application. In multiple imputation, you can start with the generic recommendation and evaluate the corresponding distribution of transformed estimates directly among large numbers of imputed data sets. If one transformation doesn't work adequately, try another.
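The mechanics of that recommendation are simple: map each imputation's survival estimate through $\log(-\log S)$, pool on that scale per Rubin's rules, and back-transform with $S=\exp(-\exp(\cdot))$. A sketch with invented survival estimates (only the point estimate is pooled here; the within/between-imputation variance combination happens on the same transformed scale):

```python
import numpy as np

def cloglog(s):
    # complementary log-log of the survival probability
    return np.log(-np.log(s))

def inv_cloglog(x):
    # back-transform to the probability scale
    return np.exp(-np.exp(x))

# Survival estimates at one time point from m = 5 imputed data sets (invented)
s_hat = np.array([0.81, 0.78, 0.84, 0.80, 0.79])
pooled = inv_cloglog(cloglog(s_hat).mean())   # pooled point estimate
```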
---
*Upon review, the [citation](https://doi.org/10.1007/s10198-008-0129-y) that van Buuren provides for these transformations doesn't seem to mention any transformations at all! Nevertheless, the proposed transformations make sense based on the underlying subject matter. [This](http://www.biomedcentral.com/1471-2288/9/57) is the correct link, provided in a [comment by David Luke Theissen](https://stats.stackexchange.com/questions/581063/draw-survival-curves-of-2-groups-after-multiple-imputation-on-covariates#comment1073515_581063) on a similar question.
| null | CC BY-SA 4.0 | null | 2023-04-02T18:21:04.657 | 2023-04-02T20:37:35.923 | 2023-04-02T20:37:35.923 | 28500 | 28500 | null |
611598 | 2 | null | 611451 | 0 | null | I think your model in brms would look like this:
```
library(brms)
df$t = as.factor(df$t)
m = brm(bf(ym|mi(sdm) ~ 1 + x:t), #or ~1+x+x:t for ref. slope and contrasts
family = gaussian(),
prior = c(prior(normal(10, 1), class = Intercept),
prior(normal(-.2, 1), class = b),
prior(exponential(1), class = sigma)),
chains = 4,
iter = 2000,
cores = 4,
backend = "cmdstanr",
data = df)
summary(m)
stancode(m)
```
It looks like, whether you go with `y` and `sd` or `ym` and `sdm`, brms proceeds without issues...
`stancode()` does what it says on the tin. However brms does a bunch of stuff under the hood so parsing the code is not straightforward (at least it is beyond my ability). If you figure it out and manage to write your own code based on brms's I'd be interested to see it ;)
| null | CC BY-SA 4.0 | null | 2023-04-02T19:06:42.323 | 2023-04-02T19:06:42.323 | null | null | 273568 | null |
611599 | 2 | null | 611584 | 0 | null | As mentioned in the comment by @whuber, the risk is defined as expected loss. It is not the only criterion possible, and decision theory studies many of other alternatives for making decisions, but as discussed in [Why care so much about expected utility?](https://stats.stackexchange.com/questions/313290/why-care-so-much-about-expected-utility) we have have many good reasons to care about expected loss.
| null | CC BY-SA 4.0 | null | 2023-04-02T19:07:39.520 | 2023-04-02T19:07:39.520 | null | null | 35989 | null |
611600 | 2 | null | 357336 | 0 | null | I can't comment on Geoff's answer because I don't have enough reputation.
To answer Death Metal's question: to change the above to "absolute delta", all we need to do is set `delta = pct_mde` in the first line within the function. I modified Geoff's original function to work with relative and absolute MDEs and for one and two-tailed tests:
```
import numpy as np
import scipy.stats

def calc_sample_size(alpha, power, p, pct_mde, delta_type='relative', num_tails="two"):
""" Based on https://www.evanmiller.org/ab-testing/sample-size.html and Geoff's (https://stats.stackexchange.com/users/127726/geoff) Python implementation of the same.
Args:
alpha (float): How often are you willing to accept a Type I error (false positive)?
power (float): How often do you want to correctly detect a true positive (1-beta)?
p (float): Base conversion rate
pct_mde (float): Minimum detectable effect
delta_type (str): either 'relative' or 'absolute'
num_tails (str): either 'one' or 'two'
"""
delta = p * pct_mde
if delta_type == 'absolute':
delta = pct_mde
t_alpha2 = scipy.stats.norm.ppf(1.0-alpha/2)
if num_tails == 'one':
t_alpha2 = scipy.stats.norm.ppf(1.0-alpha)
t_beta = scipy.stats.norm.ppf(power)
sd1 = np.sqrt(2 * p * (1.0 - p))
sd2 = np.sqrt(p * (1.0 - p) + (p + delta) * (1.0 - p - delta))
return (t_alpha2 * sd1 + t_beta * sd2) * (t_alpha2 * sd1 + t_beta * sd2) / (delta * delta)
```
Calling the function with `calc_sample_size(0.05, 0.8, 0.2, 0.05, 'absolute', 'two')` gives us `~1030`, which is the same result as Evan's calculator:
[](https://i.stack.imgur.com/zHGZD.png)
| null | CC BY-SA 4.0 | null | 2023-04-02T19:36:12.603 | 2023-04-02T19:38:22.193 | 2023-04-02T19:38:22.193 | 384765 | 384765 | null |
611601 | 1 | null | null | 0 | 8 | I am trying to do sentiment analysis (computing a tweet sentiment score). After cleaning the data successfully, I got stuck at the sentiment analysis step
and can't get the result. Any help, please?
```
import re
import nltk
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from textblob import TextBlob

df = pd.read_csv('C:\\Users\\LENOVO\\Desktop\\slides\\3rd semester\\CP\\train.csv')

lemma = WordNetLemmatizer()
stop = stopwords.words('english')

def clean_tweets(text):
    text = text.lower()
    words = nltk.word_tokenize(text)
    # Lemmatization reduces inflected words to their root word: it identifies
    # an inflected word's "lemma" (dictionary form) based on its intended meaning
    text = ' '.join([lemma.lemmatize(word) for word in words
                     if word not in stop])
    # removing non-alphabetic characters
    text = re.sub('[^a-z]', ' ', text)
    return text

# applying the clean_tweets function to our text column
df['cleaned_tweets'] = df['text'].apply(clean_tweets)

def tweet_sentiment(tweet):
    tb = TextBlob(tweet)
    score = tb.sentiment.polarity
    if score > 0:
        return 'Positive'
    elif score < 0:
        return 'Negative'
    else:
        return 'Neutral'

df['tweet_sentiment'] = df['cleaned_tweets'].apply(tweet_sentiment)

plt.clf()
df['tweet_sentiment'].value_counts().plot(kind='barh')
plt.title('sentiment of tweets')
plt.xlabel('frequency of tweet sentiment')
plt.show()
```
| struggled in result of sentiment analysis | CC BY-SA 4.0 | null | 2023-04-02T19:44:17.810 | 2023-06-03T07:41:29.913 | 2023-06-03T07:41:29.913 | 121522 | 384637 | [
"sentiment-analysis"
] |
611603 | 1 | null | null | 0 | 17 | Given the results of an Aalen's additive hazard model, how can I compute the hazard difference in cases per 1000 person-years?
Supposing the effect of my treatment variable is constant over time, I should be able to compute this directly from the corresponding coefficient. Does the following work?
```
library(timereg)
# Load the dataset
data(sTRACE)
# Fits Aalen model
out<-aalen(Surv(time,status==9)~const(sex)+diabetes, sTRACE,max.time=7,n.sim=100)
beta_A <- coef(out)['const(sex)','Coef.']
person_years_treated <- sum(sTRACE$time[sTRACE$sex == 1])
hazard_diff_per_1000 <- (exp(beta_A) - 1) * 1000 / person_years_treated
print(hazard_diff_per_1000)
```
And if I would then like to calculate excess deaths per 1000 person-years, is that also a simple calculation?
| Calculate hazard difference and excess deaths from Aalen's additive hazard model | CC BY-SA 4.0 | null | 2023-04-02T20:24:54.793 | 2023-04-02T20:24:54.793 | null | null | 103007 | [
"survival",
"hazard"
] |
611604 | 2 | null | 87826 | 1 | null | This question seems to repeat itself every now and then (see [here](https://stats.stackexchange.com/questions/596750/how-to-define-a-classification-loss-function-for-discrete-ordinal-values), for example), so I'll just summarize the answers and sources that have been accumulated in the time since this question was asked.
# Redefine Objective
- Ordinal Categorical Classification
A modified version of the cross-entropy, adjusted to ordinal target. It penalizes the model for predicting a wrong category that is further away from the true category more than for predicting a wrong category that is closer to the true category.
$$l(y,\hat{y}) = (1+\omega) \cdot CE(y,\hat{y}), \text{ s.t.} $$
$$\omega = \dfrac{|class(\hat{y})-class(y)|}{k-1}$$
Where $k$ is the number of possible classes. The operation $class(\cdot)$ returns the predicted class of an observation, obtained by arg-maxing the probabilities in a multi-class prediction task.
A Keras implementation of the ordinal cross-entropy is available [here](https://github.com/JHart96/keras_ordinal_categorical_crossentropy). An example (taken from the link)
```
import ordinal_categorical_crossentropy as OCC
model = *your model here*
model.compile(loss=OCC.loss, optimizer='adam', metrics=['accuracy'])
```
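The weighting scheme above can also be sketched in plain NumPy (a minimal illustration of the formula, not the linked Keras implementation; the name `ordinal_crossentropy` is my own):

```python
import numpy as np

def ordinal_crossentropy(y_true, y_pred, eps=1e-12):
    # y_true: one-hot labels (n, k); y_pred: predicted probabilities (n, k)
    k = y_true.shape[1]
    true_cls = np.argmax(y_true, axis=1)
    pred_cls = np.argmax(y_pred, axis=1)
    # omega grows with the distance between predicted and true class
    omega = np.abs(pred_cls - true_cls) / (k - 1)
    ce = -np.sum(y_true * np.log(y_pred + eps), axis=1)
    return (1 + omega) * ce
```

A prediction whose argmax is far from the true class is penalized up to twice the plain cross-entropy, while a correct argmax reduces to ordinary cross-entropy.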
- Treat your problem as a regression problem
Instead of classification task, treat your task as a regression task, and then round your prediction/map them to categories using any kind of method. As Stephan Kolassa mentions, the underlying assumption of this method is that one's scores are interval scaled.
- Cumulative Link Loss
Originally proposed here. It is based on the logistic regression model, but with a link function that maps the logits to the cumulative probabilities of being in a given category or lower. A set of ordered thresholds splits this space into the different classes of the problem. The projections are estimated by the model.
Under this method, two loss functions are suggested: a probit-based and a logit-based one.
A Keras implementation of the two versions is available here.
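A generic ordered-logit sketch of the idea (my own minimal illustration, not the referenced implementation): class probabilities fall out as differences of adjacent cumulative probabilities.

```python
import numpy as np

def cumulative_link_probs(logit, thresholds):
    # P(y <= c) = sigmoid(theta_c - logit) for ordered thresholds theta
    cum = 1.0 / (1.0 + np.exp(-(np.asarray(thresholds, dtype=float) - logit)))
    # pad with P(y <= -1) = 0 and P(y <= k-1) = 1, then difference
    cum = np.concatenate(([0.0], cum, [1.0]))
    return np.diff(cum)
```

With $k-1$ strictly increasing thresholds this yields a proper distribution over $k$ ordered classes.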
- Other methods I won't cover
The following table was taken from here and describes other loss functions for ordinal categorical targets.
- Treat ordinal categories as multi-label problem
Under this method, we convert our ordinal targets into a matrix such that every class $c$ has its first $c$ entries set to 1, as the following suggests:
```
Lowest -> [1,0,0,0,0]
Low -> [1,1,0,0,0]
Medium -> [1,1,1,0,0]
High -> [1,1,1,1,0]
Highest -> [1,1,1,1,1]
```
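This encoding can be produced with a small vectorized helper (my own sketch, assuming integer class indices $0..k-1$):

```python
import numpy as np

def ordinal_to_multilabel(labels, num_classes):
    # class c -> binary vector whose first c+1 entries are 1
    labels = np.asarray(labels)
    return (np.arange(num_classes)[None, :] <= labels[:, None]).astype(int)
```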
Then the loss function has to be changed as [this](https://arxiv.org/pdf/0704.1028.pdf) paper suggests. See [this](https://towardsdatascience.com/how-to-perform-ordinal-regression-classification-in-pytorch-361a2a095a99) blog post for an implementation.
| null | CC BY-SA 4.0 | null | 2023-04-02T20:32:00.890 | 2023-04-02T20:32:00.890 | null | null | 285927 | null |
611608 | 1 | 611609 | null | 5 | 236 | In a seminal article in virology ('Sur l’unité lytique du bactériophage'
Comptes rendus des séances de la Société de biologie et de ses filiales, 1939, 130, pp. 904-907) the Nobel prize winner Salvador Luria applied the Poisson distribution to demonstrate that one bacteriophage particle was sufficient to lyse a culture of 80 million bacteria.
He defined the probability P that the actual number of virions in a tube was x as:
```
P = (nˣ/x!)e⁻ⁿ
```
where n is the average number of virions in a series of tubes.
He then defined the probability Q that x would be less than m as the sum between x=0 and m-1 of P. For m=1:
```
Q = e⁻ⁿ
```
He then provided a set of numbers of virions x and the corresponding set of Q probabilities experimentally determined:
```
N = c(1, 2, 4, 8, 15)
Q = c(0.736, 0.586, 0.357, 0.135, 0.022)
```
However, I do not understand what actually goes into the P equation and how to derive Q.
I have defined a Poisson function, considering the number of particles x as a success, but I do not understand where to get the number of trials (Luria used 100 tubes, but `pois_f(X, 100)` does not work either). I used a function to get a vector of probabilities:
```
pois_f = function(x, n) {
p = ((n^x)/factorial(x)) * exp(-n)
return(p)
}
p1 = pois_f(N, length(N))
```
I tried to get a Q by exponentiating the negative number of particles found, but the slope is off:
```
p2 = exp(-N)
```
[](https://i.stack.imgur.com/OBhkt.png)
If `p1` is wrong because I am measuring the Poisson distribution (assuming that I have used the function correctly; perhaps I need to invert the terms...) instead of Q, then `p2` should be Q, but it is also off the target.
How do I correctly calculate Q? What measurements are required to define Q? Is Q 1-P?
PS: for reference, here are some extracts of the original paper:
Definition of P (Poisson distribution):
[](https://i.stack.imgur.com/Lbskt.png)
Definition of Q (cumulative distribution):
[](https://i.stack.imgur.com/huZJV.png)
Original graph (data here based on lower set, I):
[](https://i.stack.imgur.com/4J3M2.png)
New graph based on the updated information:
[](https://i.stack.imgur.com/khJQ4.png)
| How to demonstrate amount of virus required to lyse all cells using Poisson distribution? | CC BY-SA 4.0 | null | 2023-04-02T21:06:23.520 | 2023-04-04T21:32:45.500 | 2023-04-04T12:52:27.933 | 95357 | 95357 | [
"poisson-distribution",
"definition"
] |
611609 | 2 | null | 611608 | 5 | null | Your `P` equation is just a probability mass function of [Poisson distribution](https://en.wikipedia.org/wiki/Poisson_distribution), but what you need is the cumulative distribution function for the distribution. They are both available in R as `dpois` and `ppois` respectively.
What you are missing is the $\lambda$ parameter (`N` in your case). It is not the number of trials, but rather something like the average number of virions (I didn't read the paper, so take it with a grain of salt). It definitely is not the number of tubes. To replicate the result you would need to know the parameter (either know what it was or how it was calculated by the author).
Your comment has shed some light on what you are describing. To reproduce the result, you need to show the cumulative distribution function of the Poisson distribution parametrized by $\lambda = N$. The [cumulative distribution function](https://en.wikipedia.org/wiki/Cumulative_distribution_function) $F$ calculates the probability that the random variable $X$ is less than or equal to some value $x$. In your case, you are interested in the "probability $Q$ that $x$ would be less than $m$"; for $m = 1$, since the Poisson distribution is discrete and non-negative, this is just $P(X < 1) = P(X = 0)$. In the case of the Poisson distribution,
$$
P(X = x) = \frac{\lambda^x e^{-\lambda}}{x!}
$$
for $x$ equal to $0$ reduces to $e^{-\lambda}$, as in the paper. This means that what they are calculating is $P(X=0) = e^{-\lambda}$ for different values of $\lambda$, presumably equal to `N`.
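As a quick numerical check of that reduction, here is a pure-Python sketch (helper names are my own; `pois_cdf_below(1, lam)` plays the role of R's `ppois(0, lam)`):

```python
import math

def pois_pmf(x, lam):
    # Poisson probability mass function P(X = x)
    return lam**x * math.exp(-lam) / math.factorial(x)

def pois_cdf_below(m, lam):
    # Q = P(X < m), i.e. the sum of the pmf over x = 0 .. m-1
    return sum(pois_pmf(x, lam) for x in range(m))

# for m = 1 this collapses to e^{-lambda}
for lam in [1, 2, 4, 8, 15]:
    assert math.isclose(pois_cdf_below(1, lam), math.exp(-lam))
```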
This is consistent with what you described, but I have no idea why the numbers are not consistent with the paper. I don't have access to the paper and don't know French, so cannot comment beyond what was described by you.
| null | CC BY-SA 4.0 | null | 2023-04-02T21:29:41.113 | 2023-04-04T21:32:45.500 | 2023-04-04T21:32:45.500 | 35989 | 35989 | null |