Columns (dataset viewer stats): Id: string (1-6 chars); PostTypeId: string (7 classes); AcceptedAnswerId: string (1-6 chars); ParentId: string (1-6 chars); Score: string (1-4 chars); ViewCount: string (1-7 chars); Body: string (0-38.7k chars); Title: string (15-150 chars); ContentLicense: string (3 classes); FavoriteCount: string (3 classes); CreationDate: string (23 chars); LastActivityDate: string (23 chars); LastEditDate: string (23 chars); LastEditorUserId: string (1-6 chars); OwnerUserId: string (1-6 chars); Tags: list
612330
1
null
null
0
23
A [previous question](https://stats.stackexchange.com/questions/553060/post-hoc-test-of-two-way-permutation-anova) indicates that after conducting a permutation ANOVA and finding a significant interaction, permutation tests would be suitable for pairwise comparisons. However, I also saw online tutorials (e.g. this [blog post](https://www.r-bloggers.com/2019/08/bootstrapping-follow-up-contrasts-for-within-subject-anovas-part-2/), or an [older one](https://shouldbewriting.netlify.app/posts/2019-05-09-bootstraping-rm-contrasts/) from the same author) presenting how to use bootstrapping to obtain contrasts, confidence intervals and p-values after conducting a permutation ANOVA. How should I decide which is the suitable approach?
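For reference, the pairwise-comparison route from the linked question can be sketched in plain Python (hypothetical group data; the number of permutations is an arbitrary choice) as a two-group permutation test on a mean difference:

```python
import random

def perm_test(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in group means."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        # count permutations at least as extreme as the observed difference
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_perm

# hypothetical cell data from two levels of a significant interaction
g1 = [4.1, 3.9, 5.2, 4.8, 4.4]
g2 = [5.9, 6.3, 5.7, 6.8, 6.1]
p = perm_test(g1, g2)
```

After a significant omnibus permutation ANOVA, one would run this for each pair of cells and then adjust the resulting p-values (e.g. Holm).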
After permutation ANOVA, is it appropriate to use bootstrapping for posthoc analysis?
CC BY-SA 4.0
null
2023-04-08T08:32:29.203
2023-04-08T08:32:29.203
null
null
54123
[ "nonparametric", "permutation-test" ]
612331
2
null
279010
0
null
Logistic regression is robust to concordant outliers (observations with an extreme X value but an outcome that accords with that X value) in a way that the linear probability model is not. See [http://teaching.sociology.ul.ie:3838/logitinfl](http://teaching.sociology.ul.ie:3838/logitinfl) for a simple illustration. It might be better to say that such observations are outliers for the linear probability model, but not for logistic regression.
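A numpy-only sketch of this contrast (hypothetical data; the logistic fit uses a small Newton-Raphson loop rather than a library routine): one extreme but concordant point pushes the linear probability model's fitted values outside [0, 1], while the logistic fit stays bounded.

```python
import numpy as np

# Hypothetical data: overlapping outcomes in the middle of the x range,
# plus one extreme point x = 10 whose outcome (y = 1) accords with x.
x = np.array([-2., -1., -1., 0., 0., 1., 1., 2., 10.])
y = np.array([0., 0., 1., 0., 1., 0., 1., 1., 1.])
X = np.column_stack([np.ones_like(x), x])

# Linear probability model: plain OLS on the 0/1 outcome.
b_lpm = np.linalg.lstsq(X, y, rcond=None)[0]
lpm_fit = X @ b_lpm

# Logistic regression via Newton-Raphson on the log-likelihood.
b = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ b))
    grad = X.T @ (y - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    b += np.linalg.solve(hess, grad)
logit_fit = 1.0 / (1.0 + np.exp(-X @ b))
```

Here the LPM's fitted "probability" at x = 10 exceeds 1, while the logistic fitted probabilities all stay strictly inside (0, 1).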
null
CC BY-SA 4.0
null
2023-04-08T08:56:40.937
2023-04-08T08:56:40.937
null
null
385232
null
612332
1
null
null
2
42
I'm confused about the two different recalculated 95% confidence interval (CI) results from two meta-analyses on the same article. The original article reported a geometric mean of 2.3 and a geometric standard deviation (SD) of 1.60. However, one meta-analysis calculated a 95% CI of 0.76-3.84 using these values, while another meta-analysis reported a 95% CI of 2.03-2.57. How is it possible for two different meta-analyses on the same original article to produce two different 95% confidence intervals? Thank you for reading this :)
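One common source of such a discrepancy is that an interval for individual values (built from the GSD alone, ignoring sample size) is much wider than a CI for the mean (which shrinks with n). A Python sketch; the sample size n below is an assumption, since the original n is not given here:

```python
import math

gm, gsd = 2.3, 1.60  # geometric mean and geometric SD from the article
log_m, log_s = math.log(gm), math.log(gsd)

# "Reference range" for individual values: ignores sample size entirely.
lo_ind = math.exp(log_m - 1.96 * log_s)
hi_ind = math.exp(log_m + 1.96 * log_s)

# CI for the (geometric) mean: uses the standard error, so it needs n.
n = 70  # hypothetical sample size
se = log_s / math.sqrt(n)
lo_mean = math.exp(log_m - 1.96 * se)
hi_mean = math.exp(log_m + 1.96 * se)
```

The wide published interval may be something like the first kind and the narrow one like the second, but without the original sample size and each meta-analysis's exact method this is only a guess.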
Is it possible to have two different 95% confidence intervals from two meta-analyses on the same original article?
CC BY-SA 4.0
null
2023-04-08T09:15:09.813
2023-04-08T09:15:09.813
null
null
385139
[ "confidence-interval", "standard-deviation", "meta-analysis" ]
612334
1
612814
null
3
53
I read the paper about transformers, and fully understood every single piece of it, so now I'm implementing it from scratch in tensorflow, without using any shipped layer from the library. The only missing part is how they intend to take a single tensor `(batch size, time-steps, embeddings)` and give it to the multihead module. From what I can tell, it seems that they take the embeddings of size $d_{model}$, then split them into $n$ heads, so each head gets a piece of the embedding of size $d_{model}/n$; however, I don't quite see the intuition for why this should work. Am I missing something? Are they just duplicating the input for each head instead?
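For what it's worth, the split (not a duplication) can be sketched with numpy (illustrative shapes; in the paper the reshape is applied to the learned Q/K/V projections, so each head effectively gets its own learned subspace of size $d_{model}/n$ rather than a copy of the input):

```python
import numpy as np

batch, time_steps, d_model, n_heads = 2, 5, 16, 4
d_head = d_model // n_heads

x = np.random.randn(batch, time_steps, d_model)

# Split the embedding axis: each head receives its own d_model/n slice.
heads = x.reshape(batch, time_steps, n_heads, d_head)
heads = heads.transpose(0, 2, 1, 3)  # -> (batch, n_heads, time_steps, d_head)

# Head 0 sees exactly the first d_head dimensions of the embedding,
# not a duplicate of the full vector.
assert np.allclose(heads[0, 0], x[0, :, :d_head])
```

Because the split happens after a learned linear projection, the model is free to route different information into different d_head-sized slices, which is the usual intuition for why per-head subspaces work.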
How are the embedding split between multihead in transformers
CC BY-SA 4.0
null
2023-04-08T10:36:21.633
2023-04-13T20:13:11.397
null
null
346940
[ "neural-networks", "transformers" ]
612335
1
613006
null
3
74
Let $X_i$ ($i=1,\dots, n$) be a random sample from $X\sim \exp(\lambda_1)$ and $Y_j$ ($j=1,\dots, m$) be a random sample from $Y\sim \exp(\lambda_2)$, with $X$ and $Y$ independent. I try to find the generalized likelihood ratio test of $H_0: \lambda_1=\lambda_2$ vs. $H_1: \lambda_1\neq \lambda_2$: find the distribution of the statistic and the critical region of the test at level $\alpha$. --- My work: The likelihood function for $\theta=(\lambda_1, \lambda_2)$ is $$ L(\theta)=\lambda_1^n\lambda_2^m \exp(-n\lambda_1\bar{X}-m\lambda_2\bar{Y}) $$ where $\bar{X}$ and $\bar{Y}$ are the sample means. Then I want to get the likelihood ratio statistic: \begin{align} \Lambda(x) &= \frac{\sup_{\theta\in\Theta_0}L(\theta\mid X)}{\sup_{\theta}L(\theta\mid X)} \end{align} The global MLEs are $\hat{\lambda}_1=\frac{1}{\bar{X}}$ and $\hat{\lambda}_2=\frac{1}{\bar{Y}}$. The restricted MLE under $\lambda_1=\lambda_2$ is $$\lambda_0=\frac{m+n}{n\bar{X}+m\bar{Y}}$$ So we have $$ \Lambda=\frac{(m+n)^{m+n}}{n^nm^m}\Big[\frac{n\bar{X}}{n\bar{X}+m\bar{Y}}\Big]^n\Big[1-\frac{n\bar{X}}{n\bar{X}+m\bar{Y}}\Big]^m $$ So we take the statistic $$T=\frac{n\bar{X}}{n\bar{X}+m\bar{Y}}$$ To find the critical region, we need $$\Lambda=CT^n(1-T)^m\le k$$ Since $g'(T)=T^{n-1}(1-T)^{m-1}[n-(m+n)T]$, the function $g(T)=T^n(1-T)^m$ is increasing for $T\le n/(m+n)$ and decreasing for $T\ge n/(m+n)$. So $\Lambda\le k$ (we reject $H_0$) is equivalent to $$T\le c_1 \text{ or } T\ge c_2$$ for some constants $0<c_1<c_2<1$ satisfying $$c_1^n(1-c_1)^m=c_2^n(1-c_2)^m$$ For the test to have level $\alpha$, we also need $$ P(T\le c_1)+P(T\ge c_2)=\alpha $$ (the probability that we reject $H_0$). The distribution of $T$: under $H_0$ we have $\sum X_i\sim \text{Gamma}(n,1/\lambda)$ and $\sum Y_j\sim \text{Gamma}(m,1/\lambda)$, independent with a common rate, so $$ T=\frac{\sum X_i}{\sum X_i+\sum Y_j}\sim \text{Beta}(n, m). $$ But what is the critical region?
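The null distribution of $T=\sum X_i/(\sum X_i+\sum Y_j)$ can at least be sanity-checked by simulation; a Python sketch (n, m and λ are arbitrary choices) comparing the Monte Carlo mean of $T$ with $n/(n+m)$, the mean of a Beta(n, m) variable:

```python
import random

random.seed(1)
n, m, lam = 5, 7, 2.0
n_sim = 20_000

# Simulate T = sum(X)/(sum(X)+sum(Y)) under H0: common rate lam.
ts = []
for _ in range(n_sim):
    sx = sum(random.expovariate(lam) for _ in range(n))
    sy = sum(random.expovariate(lam) for _ in range(m))
    ts.append(sx / (sx + sy))

mean_t = sum(ts) / n_sim
# Beta(n, m) has mean n/(n+m); here 5/12.
```

The same simulation can be used to locate $c_1$ and $c_2$ numerically once the equal-height condition on $g$ is imposed.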
Find the distribution of the statistic and the critical region of the generalized test at level $\alpha$ for two sample test
CC BY-SA 4.0
null
2023-04-08T10:38:20.517
2023-04-15T10:05:27.757
null
null
334918
[ "hypothesis-testing", "self-study" ]
612337
2
null
610231
2
null
A raw, untested thought. Maybe try this. It seems to me that your interest requires doing factor analysis first. Do FA on the control sample matrix. (I would perhaps recommend doing all the analysis on correlations rather than on covariances, because differences in variances between case and control samples may be of lesser importance than differences in correlational patterns.) So, factor-analyze the correlations of the control sample and arrive at a satisfying, interpretable solution (you will need a rotation to interpret the common factors). Get the matrix of restored correlations. Subtract those from the correlations of the case sample. Perform FA of the obtained "residual" or remnant correlations. You essentially "wash out" the factor (= correlational) pattern of the controls from the cases, and factor-analyze what's left. But what's left is a mixture of the pattern unique to the cases and the noise (nuisance factors) unique to the case sample. One tricky moment of the subtraction is whether to subtract the diagonal elements too or to leave 1s there. One has to think this over. My immediate, inconclusive thought is to leave 1s, so that in the 2nd FA we do not restrict communalities by the uniquenesses from the 1st one. A modified approach may be to do the 1st FA on the pooled correlation matrix instead of the correlation matrix of the controls. Less contrasting results are expected, because now we treat the "common pattern" as extracted from both sides rather than from one specific side.
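A minimal numpy sketch of the subtraction step (simulated data; eigendecomposition is used as a crude stand-in for a proper factor extraction routine, and all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def top_loadings(R, k):
    """First k 'factor' loadings via eigendecomposition; a crude
    stand-in for a real factor extraction."""
    vals, vecs = np.linalg.eigh(R)
    idx = np.argsort(vals)[::-1][:k]
    # clip: a residual matrix can have negative eigenvalues
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))

# Simulated data for controls and cases (6 variables).
p = 6
controls = rng.normal(size=(200, p))
cases = rng.normal(size=(200, p)) + 0.5 * rng.normal(size=(200, 1))
R_control = np.corrcoef(controls, rowvar=False)
R_case = np.corrcoef(cases, rowvar=False)

# 1) Factor the control correlations; restore them from the loadings.
L = top_loadings(R_control, k=2)
R_restored = L @ L.T

# 2) Subtract the restored control pattern from the case correlations,
#    keeping 1s on the diagonal as suggested above.
R_residual = R_case - R_restored
np.fill_diagonal(R_residual, 1.0)

# 3) Factor what is left.
L_resid = top_loadings(R_residual, k=2)
```

In practice one would substitute a real FA program (with rotation) for `top_loadings`; the sketch only shows the restore-subtract-refactor flow.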
null
CC BY-SA 4.0
null
2023-04-08T11:06:34.007
2023-04-08T12:09:21.483
2023-04-08T12:09:21.483
3277
3277
null
612338
1
null
null
2
125
I am reading [this paper](http://www.stat.columbia.edu/%7Egelman/research/published/2018_gelman_jbes.pdf) and need to replicate what they did in Table 4: However, I am having trouble understanding what are local and global polynomial regressions. Could someone please explain? If you need more context, you can see the link and see page 8, right above Figure 5: "We do this exercise for the six global and the two local polynomial regressions"
What are global and local polynomial regressions?
CC BY-SA 4.0
null
2023-04-08T12:22:04.497
2023-04-08T13:30:20.233
null
null
385241
[ "regression", "econometrics" ]
612340
2
null
611559
3
null
This is a great question because it is a variation of a problem/issue my students often pose when learning about confidence and prediction interval estimation with multiple regression (MR). The addition of the bootstrapping element for the parameters to the simulation protocol is indeed appropriate (and I'm surprised I haven't thought about it for class/lecture demonstration purposes). My hope is to provide a useful explanation for why we definitely should bootstrap the parameters in order to simulate the response distribution. But, I will also provide a simpler protocol (which draws on the prediction interval from an MR analysis). And lastly, I'll briefly indicate a scenario where the bootstrapping protocol detailed here would probably be more applicable. First, in the OP, you find the statement: “The true response $Y^+ = \beta x_+ + \epsilon$ does not vary for different realisations of our sample data, contrary to the predictor $Y^+$, and so the variation in the parameter estimate should not be relevant to its simulated distribution.” This actually is not a fully correct statement...but part of it is...and this is the key to understanding the need for the simulation. Let me use the convention that lower-case Greek letters represent the parameter (true) values of the population, and upper-case Latin letters represent sample estimates for these parameters. So, while it is true for all values of the population that $$Y =\beta x + \epsilon$$ it is key to remember that when we obtain our parameter estimates from a sample, we do not have $\beta$, but $B$. And even if these values are off by just a little bit, this means the error estimate will also be incorrect. 
So, what we have is $$Y = B x + E$$ If you think about this as a basic bivariate regression, if your slope estimate is off from the population slope by just a little bit, well...you can still find the error to make your predicted values match your observed values of $Y$, but one of either the predicted value or the error estimate will be too high (and the other too low). So, if you just run a bootstrap using the estimated slope, you run the risk of making all of your simulated predictions be a little bit too high or low. Thus, in a bootstrapping approach, we indeed would want to simulate a variety of the possible $B$ estimates we might obtain in order to make our prediction distribution. And the nice part is that the normal theory behind ordinary least squares (OLS) MR has already answered this question of future prediction without needing to rely on bootstrapping the parameter estimates. Without elaborating the proof of these intervals here, I will give the confidence interval for the conditional mean and the prediction interval. The confidence interval for the conditional mean is a range of values in which we would reasonably expect to find the mean value of $Y$ for some given value $x$ (that is to say, $\mu_Y$ is conditioned on knowing this value of $x$). This interval is $$\mu_{Y|x_+}= \hat{Y}_+ \pm t_\text{c.v.} · \hat{\sigma}_\epsilon\sqrt{x_+ (X^T X)^{-1} x_+^T}$$ This interval adds variability to our predicted (estimated) value to account for the fact that our parameter estimates for the regression $B$ may not have perfectly matched up with the population parameters $\beta$ in our MR model. 
Next, we can adjust this formula slightly to get the prediction interval for any future value of $Y$ at this value of $x_+$: $$Y|x_+= \hat{Y}_+ \pm t_\text{c.v.} · \hat{\sigma}_\epsilon\sqrt{1+ x_+ (X^T X)^{-1} x_+^T}$$ This interval accounts for the variability in the parameter estimates from the MR analysis, and it accounts for the error variability (which also has been estimated from the MR analysis). And this gives us the simpler bootstrapping protocol. All we need to do is simulate new responses at the point of interest $x_+$ by drawing $Y_+^{(b)}$ from the normal distribution with mean $\hat{Y}_+$ and standard deviation $\hat{\sigma}_\epsilon\sqrt{1+ x_+ (X^T X)^{-1} x_+^T}$. However, as noted above...as this is a fairly well understood problem (and solution), I am unsure why the bootstrapping protocol would be necessary unless there is some violation of the assumptions for the MR model. And this brings me to my final point: when would this bootstrapping protocol be appropriate? I would argue that this protocol would be appropriate if you are using some other estimation method to obtain the regression model, like a robust estimation using the median instead of the mean. As the associated distributions of these protocols are more complicated, a bootstrapping process that resamples the parameter estimates prior to making each prediction would be appropriate to gain a better understanding of the distribution of future predicted values. I hope this answer proves useful, and I’m happy to elaborate further as needed.
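The simpler protocol can be sketched with numpy (simulated data; the design, x_plus and the number of draws are all arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated data from a known linear model.
n, beta, sigma = 50, np.array([1.0, 2.0]), 1.5
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
y = X @ beta + rng.normal(0, sigma, n)

# OLS fit and the usual residual-variance estimate.
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
resid = y - X @ b
s = np.sqrt(resid @ resid / (n - X.shape[1]))

# Simulate future responses at x_plus with standard deviation
# s * sqrt(1 + x (X'X)^{-1} x'), matching the prediction interval above.
x_plus = np.array([1.0, 5.0])
sd_pred = s * np.sqrt(1 + x_plus @ XtX_inv @ x_plus)
y_future = rng.normal(x_plus @ b, sd_pred, size=10_000)
```

Quantiles of `y_future` then approximate the normal-theory prediction interval without any resampling of the parameter estimates.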
null
CC BY-SA 4.0
null
2023-04-08T13:02:29.883
2023-04-08T13:08:00.670
2023-04-08T13:08:00.670
199063
199063
null
612343
2
null
612338
5
null
A global polynomial regression tries to fit the entire data set with a single polynomial. This leads to many problems, explained on [this page](https://stats.stackexchange.com/q/549012/28500) and, in more technical detail, on [this page](https://stats.stackexchange.com/q/560383/28500). The question on the latter page cites the same [Gelman and Imbens paper](http://www.stat.columbia.edu/%7Egelman/research/published/2018_gelman_jbes.pdf) that you do. [Frank Harrell's answer](https://stats.stackexchange.com/a/549018/28500) is a brief, simple summary of the problems. A major problem with a global regression is that any single point can have a large influence on the fit far away from it. Unless you know that you have the correct form for the polynomial that can lead to problems. A local regression instead fits a series of restricted, local ranges of the data. That way, points don't affect the behavior of the curve far from their own locations. This is combined with a mechanism to connect the local fits, often with some constraint on the smoothness of the connections. In the context of regression discontinuity discussed by Gelman and Imbens, the most important range of the data is that close to the threshold for discontinuity. Thus the specific situation they cover is when > researchers discard the units with $x_i$ more than some bandwidth $h$ away from the threshold and estimate a linear or quadratic function on the remaining units... More generally, beyond regression discontinuity studies, there are several ways to do local polynomial (including linear) regressions, including weighted regressions like [loess](https://stats.stackexchange.com/tags/loess/info) (where the weights versus distance aren't necessarily the all-or-none type mentioned in the above quote) and several types of [splines](https://stats.stackexchange.com/tags/splines/info) whose differences are outlined [here](https://stats.stackexchange.com/q/558759/28500).
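A small numpy sketch of the contrast (simulated data; a degree-9 global polynomial versus windowed local linear fits as a crude stand-in for loess):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 60)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

# Global: a single degree-9 polynomial fit to all the data; every
# point influences the curve everywhere.
global_fit = np.polyval(np.polyfit(x, y, 9), x)

# Local: at each point, a linear fit to nearby points only, so
# distant points cannot influence the estimate.
def local_linear(x, y, x0, halfwidth=0.15):
    mask = np.abs(x - x0) <= halfwidth
    slope, intercept = np.polyfit(x[mask], y[mask], 1)
    return slope * x0 + intercept

local_fit = np.array([local_linear(x, y, x0) for x0 in x])
```

The all-or-none window here corresponds to the bandwidth idea in the Gelman and Imbens quote; loess replaces it with smooth distance weights.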
null
CC BY-SA 4.0
null
2023-04-08T13:30:20.233
2023-04-08T13:30:20.233
null
null
28500
null
612344
1
612355
null
1
58
Consider random variables $V_1$, $V_2$, $V_3$, and $V_4$. Then, define $U_2=V_1+V_2$ and $U_3=V_1+V_3$. That is, $U_2$ and $U_3$ are correlated by construction. My first question is whether the following equality can hold: $$ \Pr\left[U_2<V_4\mid V_4,U_3\right] = \Pr\left[U_2<V_4 \mid V_4\right] \tag1 $$ In my personal opinion, because $U_2$ and $U_3$ are correlated, the equality does not hold in general. But, I am not sure. Second, I am considering an assumption below: $$\Pr\left[U_2<V_4+U_3\mid V_4,U_3\right] = \Pr\left[U_2<V_4 \mid V_4 \right]. \tag 2$$ Here, again, it is not clear to me that this assumption can be plausible since $U_2$ and $U_3$ are correlated. Lastly, if the assumption $(2)$ can be plausible, does the assumption $(2)$ imply the equation $(1)$? Thank you.
Conditional probability with correlated random variables
CC BY-SA 4.0
null
2023-04-08T13:36:26.757
2023-04-08T21:50:11.420
2023-04-08T21:46:11.260
5176
375224
[ "correlation", "conditional-probability" ]
612345
1
null
null
0
31
I want to predict the concentration of a biomarker (continuous) according to the interaction of white blood cells with time (both continuous), considering medical units as a random effect that may impact both the intercept and the smooth of this interaction. Is the following syntax appropriate?

```
bam0 <- bam(bmk ~ s(units, bs = "re") +
              te(time, wbc, k = c(6, 9), bs = c("cr", "cr")) +
              ti(units, time, wbc, k = c(6, 9), bs = c("re", "cr", "cr")),
            data = dat, method = "fREML", family = "gaussian", discrete = TRUE)
```

or should I use

```
bam1 <- bam(bmk ~ s(units, bs = "re") +
              te(units, time, wbc, k = c(6, 9), bs = c("re", "cr", "cr")),
            data = dat, method = "fREML", family = "gaussian", discrete = TRUE)
```

The advantage of bam0 is that the second term can be viewed as a contour plot because it does not include the random effect (see below). Nevertheless, isn't there redundancy of the specific interaction (i.e., main effect excluded) of time with wbc between the 2nd and the 3rd term? bam1 would seem more appropriate to me, but unlike bam0, it does not allow viewing the contour plot below, a priori because it includes the random effect (?). [](https://i.stack.imgur.com/24dBD.jpg)
Which mgcv syntax for a bivariate smooth interaction with a random effect as predictors?
CC BY-SA 4.0
null
2023-04-08T14:14:35.007
2023-04-09T17:29:03.327
2023-04-09T17:29:03.327
307344
307344
[ "r", "mixed-model", "interaction", "mgcv", "bivariate" ]
612347
1
null
null
1
12
The question of modeling the zero-inflated part of a negative binomial mixed effects model is a thorn in my side. I've read a lot of articles and blogs and it seems to be an issue that is largely glossed over, perhaps due to the fact that the choice about the zero-inflated part of the model is very specific to each research question. However, some articles/blogs emphasize that the zero-inflated part intends to model structural zeros, and they select variables very narrowly. Others discuss how it is a mixture of sampling and structural zeros, and model both the count and zero-inflated parts the same way. However, with zero inflation assumed to be a mixture of structural and sampling zeros, why not just generally model both sides the same way? Is it more about parsimony? How do you choose which variables to include?
How to think about and select variables for the zero-inflated part of the ZINB
CC BY-SA 4.0
null
2023-04-08T15:16:21.800
2023-04-08T16:04:28.830
2023-04-08T16:04:28.830
205125
205125
[ "negative-binomial-distribution", "zero-inflation" ]
612348
1
null
null
0
27
I am seeking help on how to perform Monte Carlo simulations on (potentially) correlated time series. I have a single product (e.g., men's wallets) that is sold out of seven stores in the same city. I have last year's daily sales of wallets for each store. Most days, a store sold zero; some days they sold one; fewer days they sold two... the histogram resembles an exponential distribution. I want to be able to use last year's sales to simulate the potential combined sales of the seven stores. (Assume no sales growth year on year.) I am wary of just sampling each store's daily sales independently and combining them into a new RV, as there may be some correlation, and some seasonality, in the sales. Ideally I'd be able to simulate a range of the combined sales for January 1, a range of the combined sales for January 2, ... a range of the combined sales for December 31, that takes into account potential correlation and seasonality of the sales. Bonus points if you can point me to some R functions that do this. It's important to note that I'm not looking to forecast a time series, but rather to understand the potential range of the 7 stores' sales in sum, based upon variability in the existing 7 stores' time series of sales.
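One simple way to preserve both cross-store correlation and seasonality is to resample whole days: draw a calendar date from a window around the target day, and keep that date's 7-store sales vector together. A Python sketch under those assumptions (simulated stand-in data; the window width is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for last year's data: 365 days x 7 stores of
# correlated, seasonal count data.
days = np.arange(365)
base = 0.5 + 0.4 * np.sin(2 * np.pi * days / 365)          # seasonality
shared = rng.poisson(base)                                  # common driver
sales = shared[:, None] + rng.poisson(0.3, size=(365, 7))   # 7 stores

def simulate_day(sales, day, n_sim=10_000, window=7, rng=rng):
    """Resample whole days near `day`, summing across stores so that
    cross-store correlation on a given day is preserved."""
    candidates = (day + np.arange(-window, window + 1)) % 365
    picks = rng.choice(candidates, size=n_sim)
    return sales[picks].sum(axis=1)

combined_jan1 = simulate_day(sales, day=0)
```

In R the same idea is a `sample()` of row indices from the day window; for richer dependence structures one could instead fit a copula to the store margins.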
Monte Carlo simulation of (potentially) correlated multiple time series
CC BY-SA 4.0
null
2023-04-08T15:26:30.803
2023-04-08T15:26:30.803
null
null
385254
[ "time-series", "monte-carlo" ]
612349
2
null
45784
1
null
@SergeyBushmanov points out that sometimes you need to give maximum likelihood estimation a hand by providing appropriate starting point and lower/upper bounds for the optimization algorithm to find the MLE. Another way to give the likelihood a hand is to augment it with a (weakly) informative prior and do Bayesian estimation. This approach is most applicable if we have domain information to help us formulate a useful prior. I'll illustrate one Bayesian solution to the $t$ distribution fitting problem in the setting where @kjetilbhalvorsen demonstrates that the MLE approach fails: sample size $n = 10$, location $\mu = 0$, scale $\sigma = 1$ and degrees of freedom $\nu = 2.5$. I'll use the [brms](https://cran.r-project.org/web/packages/brms/index.html) package to do the Bayesian model fitting and two different priors for the degrees of freedom $\nu$: - strongly informative prior: $\nu \sim \operatorname{Gamma}(1, 0.3)$ and - informative prior: $\nu \sim \operatorname{Gamma}(1, 0.1)$, with the constraint $\nu > 1$ (otherwise the mean of the $t$ distribution is undefined.) Both choices indicate there is high prior probability the degrees of freedom are less than 5: under the informative prior, $\operatorname{Pr}(\nu<5) = 0.33$ and under the strongly informative prior, $\operatorname{Pr}(\nu<5) = 0.7$. So both priors are quite "opinionated", which we would like to avoid in general. [The default `brms` prior on the degrees of freedom is $\operatorname{Gamma}(2, 0.1)$.] On the other hand, in general we would also probably collect more than $n = 10$ observations for a study where we know there is a lot of variability in the outcome. ![](https://i.imgur.com/KfEzO4r.png) And here is how to do Bayesian $t$ distribution fitting with [brms::brm](https://www.rdocumentation.org/packages/brms/versions/2.19.0/topics/brm). First we simulate a sample of size $n$ from $t(\mu=0,\sigma=1,\nu=2.5)$ where $\mu$ is the location, $\sigma$ is the scale and $\nu$ are the degrees of freedom. 
``` n <- 10 mu <- 0 sigma <- 1 nu <- 2.5 x <- rt(n, nu) * sigma + mu ``` Then we fit an intercept-only $t$-family model. ``` fit.brm <- brm( x ~ 1, family = student, data = data.frame(x), prior = c( prior(student_t(3, 0, 2.5), class = "Intercept"), prior(student_t(3, 0, 2.5), class = "sigma"), prior(gamma(1, 0.3), class = "nu") ) ) ``` While the posterior distributions of the location $\mu$ and scale $\sigma$ are reasonably symmetric, the posterior of the degrees of freedom $\nu$ is very skewed. So I will use the posterior mode (rather than the posterior mean or the posterior median) to estimate the parameters. ![](https://i.imgur.com/UAW91X1.png) ``` round( apply(posterior, 2, mode), 3 ) #> μ σ ν #> 0.161 0.785 2.164 ``` And finally, I repeat the three analyses (MLE, Bayesian with weakly informative prior, Bayesian with informative prior) 200 times. Each analysis estimates all three parameters (location $\mu$, scale $\sigma$ and degrees of freedom $\nu$) but below I plot histograms of the estimated degrees of freedom only. The true $\nu$ is 2.5. Maximum likelihood estimation fails dramatically more than half of the time (100 is the upper bound for $\nu$ in the optimization). Bayesian estimation doesn't fail in any of the 200 simulations but the estimate is biased upwards unless the prior information indicates strongly that we expect a priori only a few degrees of freedom. ![](https://i.imgur.com/aPoJHvm.png)
null
CC BY-SA 4.0
null
2023-04-08T15:45:00.267
2023-04-08T15:45:00.267
null
null
237901
null
612350
2
null
611065
0
null
I have found out [here](https://real-statistics.com/chi-square-and-f-distributions/effect-size-chi-square/) that Cohen's w is also known as phi. Rows and columns refer to contingency tables. Instead, here we care about (one-dimensional) goodness of fit. The correct way to get phi/w is: w = sqrt(chisq/N) Where chisq is the statistic obtained with the `anova` function and N is the sample size.
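As a quick numeric illustration of that formula (hypothetical observed and expected counts; the chi-square statistic is computed by hand here rather than taken from any particular function):

```python
import math

# Hypothetical one-dimensional goodness-of-fit data.
observed = [30, 50, 20]
expected = [33.3, 33.3, 33.4]  # e.g. a uniform null for N = 100

# Pearson chi-square statistic.
chisq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
N = sum(observed)

# Cohen's w (= phi in the contingency-table setting).
w = math.sqrt(chisq / N)
```

By Cohen's conventional benchmarks, w of about 0.1, 0.3 and 0.5 correspond to small, medium and large effects.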
null
CC BY-SA 4.0
null
2023-04-08T16:09:44.763
2023-04-08T16:09:44.763
null
null
307879
null
612352
1
null
null
0
49
When studying the relationship between multiple time series, often the first step is to determine stationarity of the individual time series. Given one of the time series, one can check for stationarity by assuming an AR(p) model for some p, and then applying an ADF test. However, from the outset one expects that the series is related to some other series. This means that: - One would expect significant autocorrelation in the residuals of the univariate analysis, and - The univariate model is misspecified I'm wondering how to make sense of this situation then. Is there a way to take into account that the time series is expected to be driven partially by external forces when doing an ADF test? If so, how? And which, if any, problems can arise from failing to take this into account? edit: To make this more precise, suppose that the data is generated by a VAR(p) process with 2 variables. If we then do a univariate AR(p) model fit on one of these two variables, the residuals need not be white noise, and can be autocorrelated. How does this affect the applicability of the ADF test? And more generally, the ADF test is really a 'unit root' test, not a 'stationarity' test. Meaning that when the data generating process is not an AR(p), then simply fitting an AR model to it anyway and testing whether that has a unit root does not seem to make sense to me... If the process is not even an AR, then the concept of a unit root is not relevant to begin with, so why test for it?
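The situation in the edit can be made concrete with a small simulation (numpy sketch; the VAR coefficients are arbitrary): generate a stationary bivariate VAR(1), fit a univariate AR(1) to the first component by OLS, and inspect the lag-1 autocorrelation of its residuals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stationary bivariate VAR(1): z_t = A z_{t-1} + e_t.
A = np.array([[0.2, 0.7],
              [0.7, 0.2]])
T = 5000
z = np.zeros((T, 2))
for t in range(1, T):
    z[t] = A @ z[t - 1] + rng.normal(size=2)

# Misspecified univariate AR(1) fit (OLS) to the first component.
y, ylag = z[1:, 0], z[:-1, 0]
phi = (ylag @ y) / (ylag @ ylag)
resid = y - phi * ylag

# Lag-1 autocorrelation of the AR(1) residuals: clearly nonzero,
# because the first component of a VAR(1) is really an ARMA(2,1).
r0 = resid[1:] - resid[1:].mean()
r1 = resid[:-1] - resid[:-1].mean()
rho1 = (r0 @ r1) / np.sqrt((r0 @ r0) * (r1 @ r1))
```

This is why the "augmented" part of the ADF test adds lagged differences: with enough lags, an AR approximation can absorb this residual dependence even when the true process is not a finite AR.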
How to deal with the fact that almost all univariate time series models have autocorrelated residuals?
CC BY-SA 4.0
null
2023-04-08T16:55:28.537
2023-04-08T18:40:29.033
2023-04-08T18:40:29.033
376142
376142
[ "time-series", "autocorrelation", "stationarity", "augmented-dickey-fuller" ]
612353
1
null
null
2
12
I received a reviewer report on a meta-analysis that I submitted which stated that a multivariate regression that I performed should instead be "a backward stepwise multivariate logistic regression analysis since Bsum is insignificant". Ignoring the fact that every source I know says that stepwise procedures should be avoided, what is Bsum and how do you test for it? I suppose it is the sum of coefficients, but I don't seem to find anything about it. And how did the reviewer find out it was not significant since they don't have access to the dataset itself to compute the model themselves?
What is Bsum in (meta-)regression and how do you test for its significance?
CC BY-SA 4.0
null
2023-04-08T16:59:06.663
2023-04-08T16:59:06.663
null
null
356507
[ "multiple-regression", "meta-analysis", "meta-regression" ]
612355
2
null
612344
2
null
Consider independent $V_i \sim \text{Bernoulli}(0.5),\,i=1,\dots,4$. Let $V_4 = 1$; $U_2 < V_4$ iff $U_2 = 0$, i.e. $V_1 = V_2 = 0$, which has probability $0.25$, so $\Pr[U_2 < V_4\mid V_4] = 0.25$. Now let additionally $U_3 = 2$, implying that $V_1 = V_3 = 1$. Clearly $U_2 \geq 1$, so $\Pr[U_2 < V_4\mid V_4, U_3] = 0$, and the first equality does not always hold. As $V_4 + U_3 = 3 > 2 \geq U_2$, we have $\Pr[U_2 < V_4+U_3\mid V_4,U_3] = 1 \neq 0.25$, so the second equality does not always hold either.
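These conditional probabilities can be checked by brute-force enumeration over the 16 equally likely outcomes (Python sketch, assuming the $V_i$ are independent Bernoulli(0.5)):

```python
from itertools import product

# All equally likely outcomes of (V1, V2, V3, V4).
outcomes = list(product([0, 1], repeat=4))

def cond_prob(event, given):
    """P(event | given) by counting equally likely outcomes."""
    sub = [o for o in outcomes if given(o)]
    return sum(event(o) for o in sub) / len(sub)

U2 = lambda o: o[0] + o[1]  # V1 + V2
U3 = lambda o: o[0] + o[2]  # V1 + V3

# Pr[U2 < V4 | V4 = 1]
p1 = cond_prob(lambda o: U2(o) < o[3], lambda o: o[3] == 1)
# Pr[U2 < V4 | V4 = 1, U3 = 2]
p2 = cond_prob(lambda o: U2(o) < o[3],
               lambda o: o[3] == 1 and U3(o) == 2)
# Pr[U2 < V4 + U3 | V4 = 1, U3 = 2]
p3 = cond_prob(lambda o: U2(o) < o[3] + U3(o),
               lambda o: o[3] == 1 and U3(o) == 2)
```

The enumeration gives p1 = 0.25, p2 = 0 and p3 = 1, confirming that neither claimed equality holds in general.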
null
CC BY-SA 4.0
null
2023-04-08T17:28:56.337
2023-04-08T21:50:11.420
2023-04-08T21:50:11.420
5176
7555
null
612356
1
null
null
0
41
I'm having trouble understanding why I get radically different results if I try to find the parameter of a [Zipf distribution](https://en.wikipedia.org/wiki/Zipf%27s_law) when I use the methods proposed by [Clauset et al. (2009)](https://arxiv.org/pdf/0706.1062.pdf) as opposed to using the log-transformed rank and frequency data and fitting a linear regression. This is the code I'm using:

```
import numpy as np

# Drawing 100 random samples from a Zipf distribution with parameter 1.5
frequencies = np.random.zipf(1.5, 100)
n = len(frequencies)

# Continuous MLE
alpha_hat_cont = 1 + n / sum(np.log(frequencies))

# Approximation of discrete MLE
alpha_hat_discr = 1 + n / sum(np.log(frequencies / 0.5))

# Fitting linear regression rank vs frequency to log-transformed data
ranks = np.arange(1, n + 1)
slope = -np.polyfit(np.log(ranks),
                    np.log(np.array(sorted(frequencies, reverse=True))), 1)[0]

print(alpha_hat_cont, alpha_hat_discr, slope)
```

One can also use the `powerlaw` package in Python, as such:

```
import powerlaw

fit = powerlaw.Fit(frequencies, discrete=True, xmin=1)
print(fit.alpha)
```

which gives the exact same result as `alpha_hat_discr` above (if the argument `xmin` is specified to be equal to 1). I know the results don't have to be the same (Clauset et al. suggest using MLE because OLS on the log-transformed data is a bad approximation), but these are radically different. For context, I'm trying to find the Zipf exponent of the rank-frequency distribution of a corpus. Thank you very much for your help!
Estimating exponent of Zipf distribution using MLE vs fitting linear regression on log-transformed rank and frequency data
CC BY-SA 4.0
null
2023-04-08T17:34:19.347
2023-04-08T21:14:00.867
null
null
217382
[ "maximum-likelihood", "least-squares", "natural-language", "zipf", "corpus-linguistics" ]
612357
1
null
null
1
47
1. Context I have a dataset structured like this:

```
> str(dataset)
'data.frame': 52135 obs. of 9 variables:
 $ lat          : num 59 59 55 59 59 63 59 59 59 59 ...
 $ long         : num 16 16 12 16 16 14 15 16 15 15 ...
 $ date         : chr "1951-03-22" "1951-04-08" "1952-02-03" "1952-03-08" ...
 $ julian_day   : num 81 98 34 68 53 71 16 37 73 87 ...
 $ year         : int 1951 1951 1952 1952 1953 1953 1954 1954 1954 1954 ...
 $ decade       : chr "1950-1959" "1950-1959" "1950-1959" "1950-1959" ...
 $ time         : int 1 1 1 1 1 1 1 1 1 1 ...
 $ lat_grouped  : num 1 1 1 1 1 2 1 1 1 1 ...
 $ year_centered: 'AsIs' num -35 -35 -34 -34 -33 -33 -32 -32 -32 -32 ...
```

I have performed two quantile regression methods on the 3 groups of latitudes (1, 2 and 3) in my data. The first method is very common, using the `rq` function from the `quantreg` package. The second is adapted from Solution to the Non-Monotonicity and Crossing Problems in Quantile Regression by Saleh & Saleh ([https://arxiv.org/abs/2111.04805](https://arxiv.org/abs/2111.04805)). From what I have understood (I am not a mathematician), the algorithm is based on a constrained optimization approach, where the quantile regression line is constrained to be non-crossing by imposing a set of linear constraints on the parameters of the regression line. These constraints are formulated as a set of linear inequalities, which are then solved using a linear programming algorithm. The functions given in the paper are these:

```
minimize.logcosh <- function(par, X, y, tau) {
  diff <- y - (X %*% par)
  check <- (tau - 0.5) * diff + (0.5 / 0.7) * logcosh(0.7 * diff) + 0.4
  return(sum(check))
}

smrq <- function(X, y, tau) {
  p <- ncol(X)
  op.result <- optim(rep(0, p), fn = minimize.logcosh, method = 'BFGS',
                     X = X, y = y, tau = tau)
  beta <- op.result$par
  return(beta)
}
```

(where `logcosh(x)` is $\log(\cosh(x))$). The regression was performed for tau = 1:99 / 100. 2. Results [](https://i.stack.imgur.com/cJGvT.png) As you can see, visually, there is a clear difference between the 2 methods.
The `smrq` function from Saleh & Saleh (red, right column) seems to outperform the traditional `rq` approach (blue, left column). I have also plotted the intercepts, and as shown in Saleh & Saleh, smrq gets rid of the non-monotonic behaviour observed in rq: [](https://i.stack.imgur.com/6LOMy.png) However, I wanted confirmation that `smrq` is better, so I performed a k-fold cross-validation. But here, `rq` seems to be much better than `smrq`. [](https://i.stack.imgur.com/DmWwr.png) 3. Issues I have in mind that the evaluation of a model should relate to your research question, and that deciding which modelling approach is better is context-dependent. However, the approach used by Saleh & Saleh is supposed to help deal with crossing problems. As they state, their paper "describes a unique and elegant solution to the problem based on a flexible check function that is easy to understand and implement in R and Python, while greatly reducing or even eliminating the crossing problem entirely. It will be very important in all areas where quantile regression is routinely used and may also find application in robust regression, especially in the context of machine learning." I must admit that I do not know where to go from all this. Some questions that I have in mind are: - was the k-fold cross-validation relevant (why/why not)? - any idea on a relevant way to test rq against smrq other than a visual approach? I have tried to be concise and yet complete, which is not always easy. I would be happy to bring some more details if needed - please just ask.
Quality assessment of 2 quantile regression methods
CC BY-SA 4.0
null
2023-04-08T17:41:39.177
2023-04-09T06:24:17.463
null
null
355895
[ "r", "cross-validation", "quantile-regression" ]
612358
1
null
null
0
43
I am training a model with LightGBM, and I am getting an output like this: ``` [LightGBM] [Info] Total Bins 1981 [LightGBM] [Info] Number of data points in the train set: 28632, number of used features: 15 [LightGBM] [Info] Start training from score 2.534713 [20] training's rmse: 4.7065 valid_1's rmse: 4.79156 [40] training's rmse: 4.45158 valid_1's rmse: 4.61878 [60] training's rmse: 4.32291 valid_1's rmse: 4.55663 [80] training's rmse: 4.24446 valid_1's rmse: 4.53266 [100] training's rmse: 4.18674 valid_1's rmse: 4.52748 [120] training's rmse: 4.13661 valid_1's rmse: 4.52959 [140] training's rmse: 4.09082 valid_1's rmse: 4.53327 [160] training's rmse: 4.04819 valid_1's rmse: 4.53705 [180] training's rmse: 4.00448 valid_1's rmse: 4.53943 [200] training's rmse: 3.96052 valid_1's rmse: 4.54488 [220] training's rmse: 3.9187 valid_1's rmse: 4.5526 [240] training's rmse: 3.87888 valid_1's rmse: 4.55612 [260] training's rmse: 3.83932 valid_1's rmse: 4.56151 [280] training's rmse: 3.8001 valid_1's rmse: 4.56596 [300] training's rmse: 3.76323 valid_1's rmse: 4.56899 [320] training's rmse: 3.72648 valid_1's rmse: 4.57288 [340] training's rmse: 3.68954 valid_1's rmse: 4.57776 [360] training's rmse: 3.65472 valid_1's rmse: 4.58399 [380] training's rmse: 3.62083 valid_1's rmse: 4.58822 [400] training's rmse: 3.58848 valid_1's rmse: 4.59112 [420] training's rmse: 3.55622 valid_1's rmse: 4.5942 [440] training's rmse: 3.52427 valid_1's rmse: 4.59629 [460] training's rmse: 3.49288 valid_1's rmse: 4.5998 [480] training's rmse: 3.46305 valid_1's rmse: 4.60098 [500] training's rmse: 3.4332 valid_1's rmse: 4.604 [520] training's rmse: 3.40395 valid_1's rmse: 4.60809 [540] training's rmse: 3.37517 valid_1's rmse: 4.61122 [560] training's rmse: 3.34607 valid_1's rmse: 4.61451 [580] training's rmse: 3.31775 valid_1's rmse: 4.61881 [600] training's rmse: 3.28888 valid_1's rmse: 4.62112 [620] training's rmse: 3.26158 valid_1's rmse: 4.62205 [640] training's rmse: 3.23438 valid_1's rmse: 
4.62703 [660] training's rmse: 3.2086 valid_1's rmse: 4.63075 [680] training's rmse: 3.18285 valid_1's rmse: 4.63385 [700] training's rmse: 3.15612 valid_1's rmse: 4.63661 [720] training's rmse: 3.12905 valid_1's rmse: 4.64054 [740] training's rmse: 3.10485 valid_1's rmse: 4.64305 [760] training's rmse: 3.07981 valid_1's rmse: 4.6468 [780] training's rmse: 3.05544 valid_1's rmse: 4.65087 [800] training's rmse: 3.03116 valid_1's rmse: 4.65354 [820] training's rmse: 3.00748 valid_1's rmse: 4.65625 [840] training's rmse: 2.98355 valid_1's rmse: 4.65951 [860] training's rmse: 2.96051 valid_1's rmse: 4.66279 [880] training's rmse: 2.93658 valid_1's rmse: 4.66475 [900] training's rmse: 2.91332 valid_1's rmse: 4.66645 [920] training's rmse: 2.89188 valid_1's rmse: 4.67055 [940] training's rmse: 2.87073 valid_1's rmse: 4.67544 [960] training's rmse: 2.84836 valid_1's rmse: 4.67783 [980] training's rmse: 2.82654 valid_1's rmse: 4.68047 [1000] training's rmse: 2.80509 valid_1's rmse: 4.68257 [1020] training's rmse: 2.78431 valid_1's rmse: 4.68481 [1040] training's rmse: 2.76303 valid_1's rmse: 4.68846 [1060] training's rmse: 2.74237 valid_1's rmse: 4.69077 [1080] training's rmse: 2.7221 valid_1's rmse: 4.69346 [1100] training's rmse: 2.70158 valid_1's rmse: 4.69541 [1120] training's rmse: 2.68161 valid_1's rmse: 4.69885 [1140] training's rmse: 2.66289 valid_1's rmse: 4.70274 [1160] training's rmse: 2.6443 valid_1's rmse: 4.70658 [1180] training's rmse: 2.62565 valid_1's rmse: 4.70889 [1200] training's rmse: 2.60685 valid_1's rmse: 4.71195 ``` From [20] to [1200], the RMSE of the validation set has barely changed, whereas the training set's RMSE has improved quite a bit. I know this is a clear sign of overfitting, but is the fact that the validation set isn't improving at all a sign of something? Is the model learning relations that only exist in the training set somehow? I don't really understand how the model can be improving on the training set so much while the validation set remains static.
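Looking closely at the log, the validation RMSE is not quite static: it reaches its minimum early and then slowly creeps back up, which is exactly the situation early stopping (e.g. LightGBM's `early_stopping_rounds` option) is designed for. A small sketch (illustrative, using a few triples transcribed from the log above) that pinpoints the turning point:

```python
# (iteration, train_rmse, valid_rmse) triples from the first rows of the log
log = [
    (20, 4.7065, 4.79156), (40, 4.45158, 4.61878), (60, 4.32291, 4.55663),
    (80, 4.24446, 4.53266), (100, 4.18674, 4.52748), (120, 4.13661, 4.52959),
    (140, 4.09082, 4.53327),
]

# the iteration with the lowest validation RMSE is where training should stop
best_iter, _, best_valid = min(log, key=lambda row: row[2])
print(best_iter, best_valid)  # validation RMSE bottoms out around iteration 100
```

Everything after that point is the model memorizing training-set noise, which lowers training RMSE without helping (and eventually hurting) validation RMSE.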
LightGBM accuracy not increasing with iterations on Validation Set?
CC BY-SA 4.0
null
2023-04-08T18:43:45.547
2023-04-14T09:31:30.890
null
null
292642
[ "boosting", "overfitting", "lightgbm" ]
612359
2
null
610738
1
null
I'm confident that wonderer's use of the Wold decomposition is correct, but I wanted to provide an answer that doesn't use it, in case a reader is not familiar with that theorem. We have that 1) $X_t = \phi X_{t-1} + U_t$, where $U_t$ is white noise. We are also given that 2) $Y_t = X_t + W_t$. First, we take 2) and apply the lag operator $(1 - \phi L)$ to both sides. This results in: 3) $Y_{t} - \phi Y_{t-1} = X_{t} - \phi X_{t-1} + W_{t} - \phi W_{t-1} = U_t + W_{t} - \phi W_{t-1}$. But, if one calculates the autocorrelations of $Y_{t} - \phi Y_{t-1}$ (i.e. the process above), one finds that it is autocorrelated only at lag one and at no other lag, which implies that the process on the RHS is MA(1). So, introducing new variables $\epsilon_t$ and $\theta$, we can rewrite the previous relation as: $Y_{t} - \phi Y_{t-1} = \epsilon_t + \theta \epsilon_{t-1}$. This relation is justified by the fact that we know that the RHS is an MA(1). Finally, moving the second term on the LHS to the RHS, we obtain 4) $Y_{t} = \phi Y_{t-1} + \epsilon_t + \theta \epsilon_{t-1}$ Notice the first term on the RHS of 4) is the AR(1) component and the last two terms on the RHS are the MA(1) component. So, it has been shown that the process $Y_{t}$ is ARMA(1,1).
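Since the argument only needs $Y_t - \phi Y_{t-1}$ to be correlated at lag one and at no other lag, this is easy to verify numerically. Here is a rough sketch (my own illustration, with made-up parameter values): simulate $X$ as an AR(1) with parameter $\phi$, add independent white noise $W$, and check the autocorrelations of $U_t = Y_t - \phi Y_{t-1}$:

```python
import random

random.seed(1)
phi, n = 0.8, 20000

# X is AR(1) with parameter phi; W is independent white noise; Y = X + W
x = [0.0]
for _ in range(n):
    x.append(phi * x[-1] + random.gauss(0.0, 1.0))
w = [random.gauss(0.0, 1.0) for _ in range(n + 1)]
y = [xi + wi for xi, wi in zip(x, w)]

# U_t = Y_t - phi*Y_{t-1} should behave like an MA(1): nonzero lag-1 autocorrelation only
u = [y[t] - phi * y[t - 1] for t in range(1, n + 1)]

def autocorr(s, k):
    # sample autocorrelation of series s at lag k
    m = sum(s) / len(s)
    num = sum((s[t] - m) * (s[t - k] - m) for t in range(k, len(s)))
    den = sum((v - m) ** 2 for v in s)
    return num / den

print(autocorr(u, 1), autocorr(u, 2), autocorr(u, 3))
```

With these parameters the theoretical lag-1 autocorrelation is $-\phi\sigma_W^2 / (\sigma_U^2 + (1+\phi^2)\sigma_W^2) \approx -0.30$, and the higher lags are zero, consistent with an MA(1).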
null
CC BY-SA 4.0
null
2023-04-08T19:02:13.930
2023-04-09T14:07:42.793
2023-04-09T14:07:42.793
64098
64098
null
612360
2
null
611727
0
null
A few thoughts that might help, even if this isn't a complete answer. First, in small studies it can be surprisingly easy to miss a low-frequency allele. See Gregorius, H.R. (1980) "The probability of losing an allele when diploid genotypes are sampled," [Biometrics 36, 643-652](https://www.jstor.org/stable/2556116). That's particularly the case if there's no information about the population structure of the diplotypes. The implementation of the Gregorius estimate (which I understand to be a worst-case scenario) in the R [genetics package](https://cran.r-project.org/package=genetics) is 65% for missing an allele at frequency of 0.075 in a sample of 34. Pay close attention to the design of the studies that you're examining. It's quite possible that the "NA" in a table like you show means that there was no evidence of that allele in the sample, but that it would have been detected if present. In that case you should be including it as a true 0, not ignoring it. Second, depending on the nature of your data, you might benefit from [multiple imputation](https://stefvanbuuren.name/fimd/), using the information that you have on other characteristics of the data to estimate what would have been found for that missing allele in a situation where it simply wasn't looked for. You generate several probabilistic estimates and pool the results together in a way that incorporates the uncertainty in the imputation. In your case, with the emphasis on the corresponding phenotype estimates, the final pooling would probably be best done on the phenotype level. Third, in practice in genome-wide studies based on single-nucleotide polymorphisms, a substantial deviation from Hardy-Weinberg equilibrium (HWE) among diplotypes at a locus is sometimes used to remove the locus from consideration. See Section 2.4 of Reed et al. (2015) "A guide to genome-wide association analysis and post-analytic interrogation," [Statist. Med. 
34: 3769–3792](https://doi.org/10.1002/sim.6605): > Violations of HWE can be an indication of the presence of population substructure or the occurrence of a genotyping error. While they are not always distinguishable, it is a common practice to assume a genotyping error and remove SNPs for which HWE is violated. If case-control status is available, we limit this filtering to analysis of controls as a violation in cases may be an indication of association. So, depending on your situation, a working assumption of HWE might be tenable for your phenotype estimates. It would nevertheless seem to be wise also to present the sensitivity of your phenotype prevalence estimates to potential violations of HWE. As an extreme example, what if `phenotype y` from "Allele 2/Allele 3" led to fetal death when combined with some genotype at a different locus? For less extreme possibilities, that might be done with simulation based on different assumptions about co- or counter-occurrence probabilities for the alleles at the locus that are consistent with their overall frequencies.
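As a point of comparison for the Gregorius worst case mentioned above: under HWE with random mating, the probability of not observing an allele of frequency $q$ at all in a sample of $n$ diploid individuals is simply $(1-q)^{2n}$, since $2n$ independent gene copies are drawn. A quick sketch (my own illustration, not the Gregorius estimator):

```python
def p_miss_hwe(q, n):
    # probability that none of the 2n sampled gene copies carries the allele,
    # assuming random mating (independent copies)
    return (1.0 - q) ** (2 * n)

print(p_miss_hwe(0.075, 34))  # ~0.005, far below the 65% worst-case figure
```

The large gap between this random-mating figure and the worst-case value is precisely why unknown population structure matters so much when interpreting an absent allele in a small study.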
null
CC BY-SA 4.0
null
2023-04-08T20:02:10.260
2023-04-08T20:02:10.260
null
null
28500
null
612361
1
null
null
0
11
Can I use the same training and validation data to perform MLM and train the weights of a classification head? Here is the background of my specific problem: The problem is a binary classification problem using text data. I am using 'bert-base-uncased' from Huggingface. The entire data set was created using various data augmentation methods. The test set will be the original data set used to augment the data. My question is whether I can use the same data to do MLM and later train the AutoClassForSequenceClassification head.
Can the same training data be used for MLM and fine-tuning of a transformer model?
CC BY-SA 4.0
null
2023-04-08T20:38:00.560
2023-04-08T20:38:00.560
null
null
325454
[ "neural-networks", "natural-language", "transformers", "data-augmentation" ]
612362
2
null
612356
0
null
- This is a Zeta distribution, not a Zipf distribution, despite Numpy's naming. - The issue appears to be that the frequencies variable does not contain the frequencies of the values but the values themselves. I am not a Python programmer, so you'll have to put up with a little R: ``` x <- rzeta(500, s=1.5) head(x, 10) [1] 1 1 1 1 1 1 2 1 1 1 xf <- table(x) head(xf, 10) x 1 2 3 4 5 6 7 11 12 13 395 53 19 11 6 3 4 1 1 2 frequencies <- as.numeric(xf) # The values in xf: 395, 53, ... values <- as.numeric(names(xf)) # The labels in xf: 1, 2, 3, ... summary(lm(log(frequencies/500)~log(values))) *** stuff *** Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -1.5042 0.4509 -3.336 0.00594 ** log(values) -1.5713 0.1977 -7.947 4.03e-06 *** ``` ... and you can see the coefficient estimate for `log(values)` is pretty close to `-s`, the parameter we are trying to estimate.
null
CC BY-SA 4.0
null
2023-04-08T21:14:00.867
2023-04-08T21:14:00.867
null
null
7555
null
612363
1
null
null
1
11
[In this paper](https://pubmed.ncbi.nlm.nih.gov/23123231/) (PMID 23123231; paywalled), the authors develop a logistic regression prediction model for Alzheimer's disease. In Table 3 the authors then present disease prediction results after applying the model to a validation set (the Class column specifies probability intervals into which the patients were split): [](https://i.stack.imgur.com/SZtoC.png) I'm trying to replicate a statistical analysis I've found in a spreadsheet, where the creator of the spreadsheet has used the data from Table 3 alone to calculate the Positive Likelihood Ratio for different probabilities of disease predicted by the model. While I'm able to blindly follow the calculations in the spreadsheet, I'm struggling with understanding the reasoning and validity of the steps. All similar worked examples I've found assume access to the complete validation set, making it possible to calculate the sensitivity and specificity of the model from the true/false positive/negative rates, which doesn't seem to be the case here. Is there a general procedure for this type of analysis? Any help in the form of explaining the reasoning or guiding me to some good reading material is much appreciated.
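From what I can tell so far, if the table reports the number of diseased and non-diseased validation patients in each predicted-probability class, the interval-specific positive likelihood ratio follows directly from the definition $LR = P(\text{class} \mid D{+}) / P(\text{class} \mid D{-})$, without ever needing a single sensitivity/specificity pair. A sketch with made-up counts (the real numbers would come from Table 3):

```python
# hypothetical counts per probability class: (diseased, non_diseased)
classes = {
    "0.0-0.2": (5, 60),
    "0.2-0.5": (10, 25),
    "0.5-0.8": (20, 10),
    "0.8-1.0": (30, 5),
}

total_d = sum(d for d, nd in classes.values())    # all diseased patients
total_nd = sum(nd for d, nd in classes.values())  # all non-diseased patients

def interval_lr(d, nd):
    # P(class | D+) / P(class | D-)
    return (d / total_d) / (nd / total_nd)

lrs = {k: interval_lr(d, nd) for k, (d, nd) in classes.items()}
print(lrs)
```

Whether this matches the spreadsheet's calculations is exactly what I am trying to confirm.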
Likelihood Ratio when true/false positive/negative rates are not available
CC BY-SA 4.0
null
2023-04-08T21:19:57.617
2023-04-08T21:19:57.617
null
null
385262
[ "likelihood-ratio", "disease" ]
612364
1
null
null
0
16
I have a sampling process like the following: - Randomly select psus from one stage with equal probability - Use all ssus of each psu selected for estimation. - Each ssu is associated with a statistic from an aggregate of sub-units So for example we want to estimate a statistic about the number of insect per trees in orchards: Let's say we sample 10 counties as our psus. All the farms from those counties are our ssus. We then have aggregate statistics from each farm about the number of insects per tree, where the farms have different number of trees. We don't have individual observations for each tree, just the aggregate statistics. Because our statistic relies on the number of trees, which we don't have observations on directly, in a sense trees is our sampling unit. So we might think of our effective sample size as being based on the design effect on measuring the variance of insects per tree. However, that is tricky. We only have one observation: the statistic for each farm. But farms may have wildly differently number of trees that contribute to the aggregate total. Would it make sense to think of there being one observation for each tree, where each tree within each farm is assumed to have the same rate of insects per tree? Since we do not have any sub-information from each farm, I'm not sure how else to structure my data for standard error estimate and analysis.
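One way I have considered quantifying how much the unequal farm sizes erode the information content is Kish's effective sample size, treating each farm's tree count as a weight. A sketch (illustrative, with made-up tree counts):

```python
def kish_neff(weights):
    # Kish effective sample size for a weighted mean: (sum w)^2 / sum w^2
    return sum(weights) ** 2 / sum(w * w for w in weights)

equal = [100] * 10         # ten farms of equal size -> n_eff = 10
skewed = [910] + [10] * 9  # one huge farm dominates -> n_eff near 1

print(kish_neff(equal), kish_neff(skewed))
```

I am unsure whether this is the right framing here, since it assumes the per-tree rate is homogeneous within each farm, which is part of what I am asking about.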
How to determine effective sample size when sampling averaged samples?
CC BY-SA 4.0
null
2023-04-08T21:29:11.807
2023-04-12T06:28:33.360
2023-04-12T06:28:33.360
3277
43080
[ "sampling", "standard-error", "survey", "cluster-sample" ]
612365
2
null
612286
0
null
Thank you for the big edit JHBatley. It makes it much clearer what is going on. As you might have already guessed, most people here think your approach is confused. In my opinion, where your notation goes awry is that you don't properly respect that the choice of box $B$ and the choice of doughnut $D$ are not interchangeable, and therefore the way to combine them into your space of elementary/atomic (there is no standard term) events $\Omega$ is with $2$-tuples ([https://en.wikipedia.org/wiki/Tuple](https://en.wikipedia.org/wiki/Tuple)), also known as ordered pairs. $$\Omega = \Omega_B \times \Omega_D = \{(b, d) : \text{such that } b\in \Omega_B \text{ and }d\in \Omega_D \}$$ Under specific circumstances $n$-tuples form $n$-dimensional vectors, so if you want you can think about $B, D$ as the two dimensions of your randomness. Now Question 1: So $P(D = p|\text{ we can choose any box})$ - well, you could represent that with $P(D = p| B \in \Omega_B)$, but as you can see from the definition of $\Omega$, $B \in \Omega_B$ is true for all its elements, therefore this is just $P(D = p| \Omega) = P(D = p)$, because you didn't condition on anything. In short: you always pick a box, there is nothing to think about! If you want the option of not picking a box, you need to add an element to $\Omega_B$ that represents that. Question 2: Really the same as Question 1 - you always pick, so conditioning is meaningless here. What you meant by $\Omega_B \cap \Omega_D$ is my definition of $\Omega$. Also, you should have noticed that $\Omega_B \cap \Omega_D$ is the empty set $\emptyset$, and conditioning on $\emptyset$ would involve dividing by $P(\emptyset) = 0$. The Test-Question: $\{\{B = L\},\{B = C\},\{B = R\}\}$ is a perfectly good partition of $\Omega$, and if you first pick a box at random and then a doughnut from within the box, your student has the correct answer. Just consider the extreme example of $1$ plain doughnut in each of boxes $L, C$ and $98$ chocolate in $R$.
$P(D = p)$ would be $2/3$ not $2/100$
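That extreme example is easy to verify by direct enumeration; a small sketch (illustrative) of first picking a box uniformly and then a doughnut uniformly within it:

```python
from fractions import Fraction

# boxes L, C, R: (number of plain, number of chocolate)
boxes = {"L": (1, 0), "C": (1, 0), "R": (0, 98)}

# total probability: sum over boxes of P(box) * P(plain | box)
p_plain = sum(
    Fraction(1, len(boxes)) * Fraction(plain, plain + choc)
    for plain, choc in boxes.values()
)
print(p_plain)  # 2/3, not 2/100
```

The two answers correspond to two different sampling schemes (box-then-doughnut vs. doughnut uniformly from the pooled 100), which is the whole point of the example.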
null
CC BY-SA 4.0
null
2023-04-08T22:06:48.123
2023-04-08T22:06:48.123
null
null
341520
null
612366
2
null
533176
0
null
Your `best.result` line is cherry-picking the best results. Just by some luck (you decide if it is good or bad luck), you might find some empirical relationship between the data, despite the random generation structure. You then call this the performance, which is better-than-chance, while ignoring the fact other models have poor performance.
null
CC BY-SA 4.0
null
2023-04-08T22:46:16.660
2023-04-08T22:46:16.660
null
null
247274
null
612367
1
null
null
3
38
[](https://i.stack.imgur.com/p6i5l.png) I need to test whether there is a significant difference in leakage between prototypes. I've tried Kruskal-Wallis followed by post hoc pairwise Wilcoxon and Dunn-Bonferroni tests, and the post hoc tests always say that there is no significant difference between any of them except A.4 and C.4, but that can't be right... for example, how can there not be a significant difference between A.4, with mean leakage 27%, and C.2, with mean leakage 1.13%?
Test on significant difference
CC BY-SA 4.0
null
2023-04-08T22:48:23.650
2023-04-09T00:35:50.933
null
null
385264
[ "statistical-significance", "wilcoxon-signed-rank", "kruskal-wallis-test", "dunn-test" ]
612368
2
null
423957
1
null
One other simple way to answer this would be to determine the probability of seeing 2 accidents in a sample of 14,000 when the baseline probability of an accident is 70/35,000,000 = 0.0002%. If this probability is below 0.05 (assuming a 95% confidence level) then one could conclude that sample 2 is riskier. The probability would be: nCr * p^r * (1-p)^(n-r) [using the binomial distribution] where n = 14000; r = 2; p = 0.0002%. This comes out at roughly 0.04% (and the upper-tail probability of seeing 2 or more accidents, which is the more appropriate quantity for a test, is also about 0.04%). This means the probability of seeing 2 accidents in a sample of 14,000 is only about 0.04%. So we can conclude that this is not due to noise, i.e. reject the null hypothesis and accept the alternative hypothesis.
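A quick check of the arithmetic with the numbers as written (n = 14000, baseline rate 70/35,000,000), computing the upper tail P(X >= 2) rather than just P(X = 2), since "2 or more accidents" is the relevant event for a test:

```python
from math import comb

def binom_upper_tail(n, p, r):
    # P(X >= r) = 1 - sum_{k < r} C(n, k) p^k (1-p)^(n-k)
    return 1.0 - sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(r))

p0 = 70 / 35_000_000
print(binom_upper_tail(14_000, p0, 2))  # ~0.0004, i.e. about 0.04%
```

With np this small, a Poisson approximation with mean np = 0.028 gives essentially the same answer.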
null
CC BY-SA 4.0
null
2023-04-08T22:56:52.837
2023-04-08T22:56:52.837
null
null
385265
null
612369
2
null
610112
0
null
There is a "yes" and a "no" to this. NO These are just ways of encoding categorical information as numbers that math can handle. In other implementations, you will encode as $0$ or $1$, [as is discussed in some nice answers to a question of mine from last month.](https://stats.stackexchange.com/q/609727/247274) YES There will be an output from your model, and you need to know how to interpret that output. For instance, if your output is the probability of membership in the positive class, it matters which category is the positive class.
null
CC BY-SA 4.0
null
2023-04-08T23:26:00.540
2023-04-08T23:26:00.540
null
null
247274
null
612370
2
null
612367
3
null
Leakage data is exponential in character, so your 7 experimental groups will have unequal and non-constant variance. Transform your data first by applying the logarithm function. Then repeat your ANOVA. Then apply a multiple comparison procedure. You've chosen non-parametric procedures in the past (which are fine), but you might look into the Tukey, Scheffe, and Bonferroni procedures. There is a little cheat here. Tukey is less likely to flag differences.
null
CC BY-SA 4.0
null
2023-04-08T23:34:28.367
2023-04-08T23:34:28.367
null
null
41696
null
612371
2
null
609293
1
null
While XGBoost can give probabilities, watch out for saying that you want to use those probabilities over those of a logistic regression that has lower classification accuracy. Accuracy is calculated for just one threshold (typically $0.5$, which might be wildly inappropriate for your task), while the predicted probabilities are independent of the threshold. The XGBoost might be worse than the logistic regression at estimating probabilities despite being better in terms of classification accuracy at one threshold. Most models return probabilities or some kind of score that is on a continuum (not just the category) that can be transformed into a probability. Explicitly addressing the models you mentioned: Support vector classifiers can be wrestled with to give probabilities. Platt scaling is a topic you might want to learn more about for this. K-nearest neighbors can be considered to give probabilities if you take the probability as the proportion of each class represented among the neighbors. Linear discriminant analysis seems to have a probabilistic flavor, as is discussed by Ioffe (2006). Watch out for the fact that models can give probabilities that might lead to accurate models at a particular threshold but might not reflect the true probability of event occurrence (e.g., probabilities estimated at $0.9$, yet the event only happens $65\%$ of the time). Such models can have their outputs calibrated, but it is not a given that this can be done. Even if it can be done, it is important to know that this might be a necessary step. The `rms` package in `R` software gives the ability to check for probability calibration.
``` library(rms) set.seed(2023) N <- 1000 x1 <- runif(N) x2 <- runif(N) z <- 3*x1 - 3*x2 p <- 1/(1 + exp(-z)) y <- rbinom(N, 1, p) L <- rms::lrm(y ~ x1 + x2, x = T, y = T) cal <- rms::calibrate(L, B = 1000) plot(cal) ``` [](https://i.stack.imgur.com/JD59G.png) Ideally, the calibration curve will equal the line $y=x$, since we want the probability of event occurrence to equal the predicted probability. Since the calibration curve is close to the $y=x$ line, the calibration seems to be pretty good. REFERENCE Ioffe, Sergey. "Probabilistic linear discriminant analysis." Computer Vision–ECCV 2006: 9th European Conference on Computer Vision, Graz, Austria, May 7-13, 2006, Proceedings, Part IV 9. Springer Berlin Heidelberg, 2006.
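Platt scaling, mentioned above, amounts to fitting a one-dimensional logistic regression that maps raw classifier scores to probabilities. A minimal sketch in Python using plain gradient descent (illustrative only; real use would go through something like sklearn's `CalibratedClassifierCV` or the `rms` tools above):

```python
import math

def platt_fit(scores, labels, lr=0.1, steps=5000):
    # fit p = 1 / (1 + exp(-(a*score + b))) by minimizing log loss
    a, b = 0.0, 0.0
    n = len(scores)
    for _ in range(steps):
        ga = gb = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a * s + b)))
            ga += (p - y) * s / n   # gradient w.r.t. slope a
            gb += (p - y) / n       # gradient w.r.t. intercept b
        a -= lr * ga
        b -= lr * gb
    return a, b

# toy scores: higher score should mean higher probability of the positive class
scores = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
labels = [0, 0, 1, 0, 1, 1]
a, b = platt_fit(scores, labels)
probs = [1.0 / (1.0 + math.exp(-(a * s + b))) for s in scores]
```

Because the mapping is monotone, Platt scaling changes the calibration of the scores without changing their rank order, so discrimination metrics like $AUC$ are left untouched.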
null
CC BY-SA 4.0
null
2023-04-08T23:41:29.057
2023-04-09T15:32:45.427
2023-04-09T15:32:45.427
247274
247274
null
612372
2
null
609283
0
null
Yes, what you wrote appears to be correct. The final layer is just one neuron, and it has activation function $F$ applied to some values. Those values come from the values in the hidden neurons (call them $h_1$ and $h_2$) multiplied by their respective weights. So far, this gives $F(w_{35}h_1 + w_{45}h_2)$. You get $h_1$ from the input feature values times their respective weights, and then you apply the activation function $F$. Ditto for $h_2$. $$ h_1 = F(w_{13}x_1 + w_{23}x_2)\\ h_2 = F(w_{14}x_1 + w_{24}x_2) $$ Finally, combine it all. $$ F(w_{35}F(w_{13}x_1 + w_{23}x_2) + w_{45}F(w_{14}x_1 + w_{24}x_2)) $$ (Unrelated to the question: seeing it written out with this composition of functions, is it clear why there are a bunch of chain rule derivatives when you do the optimization calculus?) When I run this in `R` software, I get the same $5043$ you got. ``` # Define the activation function # f <- function(x){ return(x^2 + 2*x + 3) } # Define the weights # w13 <- 2 w23 <- -3 w14 <- 1 w24 <- 4 w35 <- 2 w45 <- -1 # Define the input feature values # x1 <- 1 x2 <- -1 # Calculate the values of the hidden-layer neurons # h1 <- f(w13*x1 + w23*x2) h2 <- f(w14*x1 + w24*x2) # Use the hidden-layer neurons to calculate the final output # f(w35*h1 + w45*h2) ``` (The variable `F` is taken as meaning `FALSE` in my software package, so I went with the lowercase `f`.)
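For anyone who prefers Python, the same forward pass is a direct translation of the R above and gives the same result:

```python
def f(x):
    # activation function
    return x**2 + 2*x + 3

# weights and inputs from the question
w13, w23, w14, w24, w35, w45 = 2, -3, 1, 4, 2, -1
x1, x2 = 1, -1

h1 = f(w13 * x1 + w23 * x2)   # f(5)  = 38
h2 = f(w14 * x1 + w24 * x2)   # f(-3) = 6
out = f(w35 * h1 + w45 * h2)  # f(70) = 5043
print(out)
```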
null
CC BY-SA 4.0
null
2023-04-09T00:03:10.297
2023-04-09T00:03:10.297
null
null
247274
null
612373
2
null
598233
0
null
When you upsample, you are telling the model that the minority class is more likely. Consequently, you should not be surprised to find that the model lacks a skepticism about membership in the minority class. Further, by making it more likely to belong to the minority class, you are making it less likely to belong to the majority class, so you should not expect the predictions to be altered just for members of the minority class. While [class imbalance is minimally problematic in most situations](https://stats.stackexchange.com/questions/357466/are-unbalanced-datasets-problematic-and-how-does-oversampling-purport-to-he) and there is no need to use upsampling to fix a non-problem, the good news is that [you can calibrate your model](https://stats.stackexchange.com/questions/558942/why-is-it-that-if-you-undersample-or-oversample-you-have-to-calibrate-your-outpu/558950#558950) to account for having altered the [prior probability](https://stats.stackexchange.com/questions/229968/the-usage-of-word-prior-in-logistic-regression-with-intercept-only/583115#583115) (class ratio).
null
CC BY-SA 4.0
null
2023-04-09T00:26:14.797
2023-04-09T00:26:14.797
null
null
247274
null
612375
2
null
553960
0
null
Some of the trouble here is that $AUC$ and the likelihood ratio test are based on different ideas. The $AUC$ measures the extent to which the predictions are separated by true category: the ability of the model to discriminate between categories. Notably, if you divide the predictions by two or apply any other monotonically increasing function (multiplying by $1/2$ is an increasing function), you do not change the order of the predictions, so you do not change the extent to which the predictions are separable into the two categories. Consequently, $AUC$ does not consider the output calibration, i.e. whether a predicted probability of $p$ corresponds to the event truly happening with probability $p$. The likelihood involves both the ability of the model to discriminate and the calibration of the outputs. Consequently, if adding a variable slightly lowers the ability to discriminate but dramatically improves the calibration, the likelihood favors adding this variable. However, the $AUC$ will suffer when you add this variable. If you want to consider the fit according to the likelihood and also want to have some kind of "absolute" measure of performance ([it is hard to say that any particular score counts as "good"](https://stats.stackexchange.com/questions/414349/is-my-model-any-good-based-on-the-diagnostic-metric-r2-accuracy-rmse/414350#414350), but it can be nice to give some context for a likelihood value that lacks an easy interpretation like mean absolute error), you might consider McFadden's $R^2$, which compares the likelihood of your model (fraction numerator) to the likelihood of a reasonable baseline model that always predicts the overall probability (fraction denominator).
$$ R^2_{McFadden} = 1-\left( \dfrac{ \overset{N}{\underset{i=1}{\sum}}\left[ y_i\log(\hat y_i) + (1 - y_i)\log(1 - \hat y_i) \right] }{ \overset{N}{\underset{i=1}{\sum}}\left[ y_i\log(\bar y) + (1 - y_i)\log(1 - \bar y) \right] } \right) $$ In the equation above, $y_i\in\left\{0, 1\right\}$ are the true labels, $\hat y_i$ are the predicted probabilities, and $\bar y$ is the overall probability of the event coded as $1$. While McFadden's $R^2$ does not seem to be as popular in machine learning circles as the $AUC$, it is part of the literature, and a big part of me thinks that it should be a more popular metric.
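Translated into code, the formula above is only a few lines; a sketch (illustrative) on a toy example:

```python
import math

def log_lik(y, p):
    # Bernoulli log-likelihood
    return sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi) for yi, pi in zip(y, p))

def mcfadden_r2(y, p_hat):
    # compare the model likelihood to an intercept-only model predicting the base rate
    ybar = sum(y) / len(y)
    return 1.0 - log_lik(y, p_hat) / log_lik(y, [ybar] * len(y))

y = [0, 0, 1, 1]
r2_good = mcfadden_r2(y, [0.1, 0.2, 0.8, 0.9])
r2_null = mcfadden_r2(y, [0.5, 0.5, 0.5, 0.5])  # exactly 0 for the baseline itself
print(r2_good, r2_null)
```

Note that, unlike $AUC$, this score penalizes probabilities that are well-ordered but miscalibrated.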
null
CC BY-SA 4.0
null
2023-04-09T00:47:43.253
2023-04-09T00:47:43.253
null
null
247274
null
612376
2
null
611835
0
null
To get an overall number to compare graph to graph, add up the peak areas. It looks like the width is usually 1, so you would just sum up all of the measured values. Now repeat, but subtract 0.05 from each value -- and only sum up positive values. This gives you the area above 0.05. Finally, create a [frequency plot](https://blogs.sas.com/content/iml/files/2020/03/cumul2.png), but order it in reverse, so 1 is at the left of the x-axis and 0 is at the right. This will give you a good cumulative picture of what the results were. You can, then, if you want, compute % of points in each frequency bucket.
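The three steps above can be sketched directly (illustrative, assuming unit-width bins so areas reduce to sums, with made-up peak heights):

```python
values = [0.8, 0.3, 0.07, 0.02, 0.6, 0.04, 0.15]  # hypothetical measured values

# 1) overall number per graph: total peak area
total_area = sum(values)

# 2) area above the 0.05 threshold: subtract 0.05, keep only positive parts
area_above = sum(v - 0.05 for v in values if v > 0.05)

# 3) reverse cumulative picture: fraction of points at or above each level
levels = sorted(set(values), reverse=True)
rev_cumulative = [(lvl, sum(v >= lvl for v in values) / len(values)) for lvl in levels]
print(total_area, area_above, rev_cumulative)
```

If the bins are not unit width, each term would be weighted by its bin width before summing.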
null
CC BY-SA 4.0
null
2023-04-09T00:51:07.260
2023-04-09T00:51:07.260
null
null
41696
null
612377
1
null
null
0
23
I have a data set with two columns, say $A$ and $B$. I form a new column $Y = A + B$, which is the dependent variable, and I have another independent variable $X$. I need to fit a regression of the form $Y = c \cdot X^b$, where $c$ and $b$ are constants to be estimated. I used a power fit to find $c$ and $b$, and I check whether the model is good using $R^2$, adjusted $R^2$, $P(>\mid t \mid) < 0.06$ and Cook's distance $< 0.16$ (I took 0.16 as my cutoff). Given the predicted values $Y_{pred}$, and since I know $B$, I compute $A_{pred} = Y_{pred} - B$ and hope that $A_{pred}$ is very close to the actual $A$; that is, if I regress $A$ on $A_{pred}$, I again check $R^2$, adjusted $R^2$, $P(>\mid t \mid) < 0.06$ and Cook's distance $< 0.16$. What I actually have is the model $A = c \cdot X^b + d \cdot B$, and I need to find suitable values of $c$, $b$ and $d$ using regression. I do not know how to approach this directly, so I assumed $d = -1$ and tried the procedure above. Could someone guide me on the best way to approach this?
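One idea I have been toying with (an illustration, not necessarily the best method): since $c$ and $d$ enter the model linearly once $b$ is fixed, one can profile over $b$: for each candidate $b$, solve the two-parameter linear least-squares problem for $(c, d)$ in closed form, then keep the $b$ with the smallest residual sum of squares. A sketch on synthetic data:

```python
import random

def fit_profile(X, B, A, b_grid):
    # model: A = c * X**b + d * B ; for fixed b this is linear in (c, d)
    best = None
    for b in b_grid:
        u = [x**b for x in X]
        # normal equations for least squares with predictors (u, B), no intercept
        suu = sum(ui * ui for ui in u)
        sbb = sum(bi * bi for bi in B)
        sub = sum(ui * bi for ui, bi in zip(u, B))
        sua = sum(ui * ai for ui, ai in zip(u, A))
        sba = sum(bi * ai for bi, ai in zip(B, A))
        det = suu * sbb - sub * sub
        c = (sua * sbb - sba * sub) / det
        d = (suu * sba - sub * sua) / det
        rss = sum((a - c * x**b - d * bb) ** 2 for x, bb, a in zip(X, B, A))
        if best is None or rss < best[0]:
            best = (rss, b, c, d)
    return best

# synthetic data with known parameters c=2, b=1.5, d=-1
random.seed(0)
X = [random.uniform(1, 10) for _ in range(200)]
B = [random.uniform(0, 5) for _ in range(200)]
A = [2.0 * x**1.5 - 1.0 * bb + random.gauss(0, 0.1) for x, bb in zip(X, B)]

rss, b, c, d = fit_profile(X, B, A, [i / 100 for i in range(50, 251)])
print(b, c, d)
```

A general-purpose nonlinear least-squares routine (e.g. `nls` in R) should recover the same parameters and would also give standard errors.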
What is the best regression approach for my case?
CC BY-SA 4.0
null
2023-04-09T01:07:39.557
2023-04-09T03:10:40.767
2023-04-09T03:10:40.767
325928
325928
[ "regression" ]
612378
1
null
null
1
30
I have hypothesised equivalence among 3 repeated measures. The data are such that a nonparametric approach would be needed. What I have considered, but not necessarily know how to implement correctly: - Compare confidence intervals at 100 * (1-2 * alpha)% for a Friedman ANOVA. This may be represented in a graph. - Attempt something like a two one-sided t-test procedure, but for a Friedman ANOVA. Could someone point me to a resource where I could figure out how to do this?
Equivalence test for nonparametric one-way ANOVA
CC BY-SA 4.0
null
2023-04-09T01:22:57.287
2023-04-10T06:50:15.640
2023-04-10T06:50:15.640
54123
54123
[ "nonparametric", "equivalence" ]
612381
1
null
null
1
10
I know this is technically an assignment question. The training data contains 9,000 observations and 900 features. I have to build a model to predict the testing data, which contains roughly 5,000 observations and the same number of features as the training data. Since there are so many features, should I use PCA for feature selection? I tried random forest and lasso regression, but they are very slow, and I am not getting good accuracy using the features given by PCA. Should I use random samples for random forest or lasso regression? For modelling, I notice that SVM outperforms naive Bayes, neural networks, and linear discriminant analysis. Should I just use SVM over an ensemble method, because of the multiclass data? I am getting roughly 88 percent correct with leave-one-out on the training data, but 25 percent correct on the testing data. For binary data, I was able to get good accuracy (not perfect) by using lasso for feature selection and an ensemble of logistic regression, neural network, SVM, QDA, and LDA.
How do I improve my approach towards this feature selection and model building for large multiclass dataset?
CC BY-SA 4.0
null
2023-04-09T04:01:04.550
2023-04-09T04:01:04.550
null
null
385275
[ "r", "machine-learning" ]
612382
1
null
null
1
21
Does the bias-variance tradeoff apply to quantile regression? Can I assume the error of a quantile estimate follows a certain distribution (e.g., estimated quantile - true quantile follows a normal distribution)?
Bias and variance for quantile estimates
CC BY-SA 4.0
null
2023-04-09T04:45:40.630
2023-04-09T04:45:40.630
null
null
385279
[ "regression", "quantiles", "bias-variance-tradeoff" ]
612386
1
null
null
2
41
Let $Z$ have a uniform distribution on $[-0.5, 0.5]$. Let $X$ be a continuous random variable which is independent of $Z$. Let $$Y = \lfloor X + Z \rfloor - Z.$$ I would like to ask how to compute the marginal density function and the joint density function of $Y$ and $X-Y$. I have no idea how to deal with the floor function. I only know that $$\lfloor x + n \rfloor = \lfloor x \rfloor + n$$ when $n$ is an integer. However, $Z$ is in the range $[-0.5, 0.5]$.
Density function involving floor function
CC BY-SA 4.0
null
2023-04-09T07:23:45.303
2023-04-09T07:23:45.303
null
null
385283
[ "probability", "distributions" ]
612389
1
null
null
1
39
I’m looking at making unbiased predictions of baseline and treated survival from a parametric survival curve. I have a matched sample to try to control for observed confounding, and wanted to use doubly robust estimates in my treatment effect model by controlling for covariates. I was wondering whether it is valid to estimate, say, an exponential survival model with a treatment variable and some control variables, then take the treatment effect parameter and intercept from this model into a pexp function in R and make predictions with just these parameters, plotting a marginal exponential survival curve and leaving out the other predictors. I’m not sure if this breaks any statistical rules. Edit I'm trying to do something like the below. I have estimated the conditional treatment effect and estimated parameters for treatment and covariates. Now, I just want to plot survival curves for baseline and treatment using these parameters only, to get the marginal distribution. The idea was to remove any bias caused by the covariates through the GLM, then predict what survival looks like for baseline and treatment only. I was wondering if this procedure is valid. ``` t<-seq(0,max(pbc$time),50) boot_fit_exp <- flexsurvreg((Surv(time, status) ~ trt+age+sex), data = pbc, dist = "exp") coef_boot_fit_exp <- coef(boot_fit_exp) hazard <- exp(coef_boot_fit_exp[1] + coef_boot_fit_exp[2]) # Calculate survival probability surv_prob_baseline <- 1-pexp(t,boot_fit_exp$res[1]) surv_prob_trt <- 1-pexp(t,as.vector(hazard)) ``` Any help on this would be really appreciated, thanks
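One subtlety I am aware of but unsure how to handle: the marginal survival curve is the average of the individual survival curves over the covariate distribution, not the survival curve evaluated with the covariate terms dropped or set at their means. A toy sketch of the difference (in Python, with made-up parameters, not the pbc data):

```python
import math
import random

random.seed(42)

# toy exponential model: hazard_i = exp(b0 + b_trt*trt + b_age*age_i)
b0, b_trt, b_age = -6.0, -0.5, 0.03
ages = [random.uniform(30, 70) for _ in range(1000)]

def surv(t, trt, age):
    # exponential survival: S(t) = exp(-hazard * t)
    return math.exp(-math.exp(b0 + b_trt * trt + b_age * age) * t)

t = 500.0
# marginal (G-computation style): average the survival curves over the covariate sample
marg = sum(surv(t, 1, a) for a in ages) / len(ages)
# conditional curve at the mean covariate: not the same thing in general
at_mean = surv(t, 1, sum(ages) / len(ages))
print(marg, at_mean)
```

The gap between the two is what makes me doubt that simply feeding the intercept and treatment coefficient into `pexp` gives a genuinely marginal curve.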
Marginal predictions from a conditional survival model
CC BY-SA 4.0
null
2023-04-09T09:27:31.497
2023-04-09T17:31:27.243
2023-04-09T14:31:43.180
211127
211127
[ "generalized-linear-model", "survival", "treatment-effect", "conditional", "marginal-model" ]
612390
1
null
null
0
41
For some reason my model keeps turning out to be a poor model when checking its accuracy through the [confusion matrix](https://en.wikipedia.org/wiki/Confusion_matrix) and [AUC](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Further_interpretations) [ROC](https://en.wikipedia.org/wiki/Receiver_operating_characteristic). This is the model I am stuck with after doing backward elimination. This is the logistic output: ``` Call: glm(formula = DEATH_EVENT ~ age + ejection_fraction + serum_sodium + time, family = binomial(link = "logit"), data = train, control = list(trace = TRUE)) Deviance Residuals: Min 1Q Median 3Q Max -2.1760 -0.6161 -0.2273 0.4941 2.6827 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 15.741338 7.534348 2.089 0.03668 * age 0.063767 0.018533 3.441 0.00058 *** ejection_fraction -0.080520 0.019690 -4.089 4.33e-05 *** serum_sodium -0.111499 0.053639 -2.079 0.03765 * time -0.020543 0.003331 -6.167 6.95e-10 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 ``` This is the confusion matrix output: ``` glm.pred Survived Dead 0 46 10 1 5 14 ``` The AUC is showing up as 0.178: ``` library(pROC) library(ROCR) # prediction() and performance() below come from ROCR, not pROC # Calculate predicted probabilities for test set glm.probs <- predict(glm9, newdata=test, type="response") # Create prediction object for test set pred <- prediction(glm.probs, test$DEATH_EVENT) # Create ROC curve for test set roc.perf <- performance(pred, measure = "tpr", x.measure = "fpr") # Plot ROC curve for test set plot(roc.perf, legacy.axes = TRUE, percent = TRUE, xlab = "False Positive Percentage", ylab = "True Positive Percentage", col = "#3182bd", lwd = 4, print.auc = TRUE) # Add AUC to ROC curve auc <- as.numeric(performance(pred, measure = "auc")@y.values) text(x = 0.5, y = 0.3, labels = paste0("AUC = ", round(auc, 3)), col = "black", cex = 1.5) abline(a=0, b= 1) ``` How can I get past this problem? I checked the classes and it showed that there is a class imbalance, but I don't know what to do with this knowledge.
Why is my significant model giving me a low AUC and ROC?
CC BY-SA 4.0
null
2023-04-02T00:48:49.677
2023-05-27T12:23:15.483
2023-05-27T01:54:12.260
11887
384715
[ "r", "logistic", "cross-validation", "unbalanced-classes" ]
612392
1
null
null
0
24
I am working on a binary classification problem: there is a website where a user can perform 2 types of actions: - Non-target actions - Target actions I have a large dataset with columns: utm_source, utm_medium, utm_keyword, ... target There is one row per session. utm_source, utm_medium, utm_keyword and others are parameters of the session. target = 1 if the user performed at least 1 target action during the session; target = 0 otherwise. My task is, given the parameters of a session, to predict whether the user will perform at least 1 target action during this session. I have to achieve ROC-AUC > 0.65, if this makes sense. The dataset contains 1732218 rows total, of which 50314 (2.9%) rows have target = 1. But there are many sessions with identical parameters (and with different session_id in the raw data, but naturally I have dropped session_id). So if I remove duplicates from the dataset, it will contain 398240 rows total, of which 24205 (1.4%) rows have target = 1. The question is: should I remove these duplicates, and when? My current approach is: - The original dataset with duplicate rows represents the natural distribution of the data, so I have to test my model on part of the original dataset. - I can train my model on an explicitly balanced dataset, and duplicate removal can be part of this balancing. But there are people (on the course where I am studying now) who removed all duplicates before the train-test split, and these people successfully defended this task and received certificates... So which approach is right? Links to ML literature are welcome.
Imbalanced problems, undersampling and removal of duplicates
CC BY-SA 4.0
null
2023-04-09T10:11:28.777
2023-04-09T10:19:36.680
2023-04-09T10:19:36.680
377057
377057
[ "classification", "unbalanced-classes", "duplicate-records" ]
612394
1
612395
null
2
83
Let's say I train a model and it has an RMSE of 2.5. Does this mean that, on average, my prediction will be 2.5 away from the true value? Or does some scaling need to be done in order to get this value's magnitude to be in line with the target variable's magnitude?
RMSE model interpretation
CC BY-SA 4.0
null
2023-04-09T10:35:03.667
2023-04-09T11:27:04.517
null
null
292642
[ "predictive-models", "model", "rms" ]
612395
2
null
612394
3
null
> Does this mean, that on average, my prediction will be 2.5 away from the true value? No. What you described is the definition of the [bias of the estimator](https://stats.stackexchange.com/q/13643/35989). RMSE is the square root of the mean of the squared deviations of the predictions from the observed values. So it tells you how wrong your model is on average, but the same applies to any other error metric. If your prediction for all the values were the global arithmetic average of the data, RMSE would be equal to the standard deviation of the data around its mean. So a reasonable model should aim at a value lower than this. You may also want to read more about [prediction intervals](/questions/tagged/prediction-interval), which estimate bounds for the predictions.
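A small numerical illustration of both quantities (plain NumPy, with made-up numbers):

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.0, 2.0, 3.0, 6.0])   # one prediction is off by 2

rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))  # root of the mean squared error
baseline = np.std(y_true)  # RMSE of always predicting the mean of y_true

print(rmse)      # 1.0: sqrt((0 + 0 + 0 + 4) / 4)
print(baseline)  # about 1.118: the "no model" benchmark to beat
```

Because the errors are squared before averaging, a single large miss raises RMSE more than the same total error spread evenly, which is another reason RMSE is not literally the average distance from the truth.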
null
CC BY-SA 4.0
null
2023-04-09T11:08:00.057
2023-04-09T11:27:04.517
2023-04-09T11:27:04.517
35989
35989
null
612396
1
613281
null
3
75
Is there any mathematical result that states that the Wilcoxon-Mann-Whitney (WMW) test is optimal in some sense, for a specific testing problem that is a subproblem of the general problem the WMW test is testing, say against an alternative of two specific distributions where one is stochastically larger than the other, maybe a location shift model with specified distributions but maybe something else? I have in mind maximum power for given level, however I'd be interested in other types of optimality as well. Also I suspect that any result would be asymptotic, maybe of the type "locally asymptotically optimal". I had a look at the Hajek, Sidak, Sen book Theory of Rank Tests, but I don't think it has such a result. There is an exercise that states efficiency 1 of the Wilcoxon signed rank test for one sample in a specific situation, also mentioned here: [https://www.jstor.org/stable/43686636](https://www.jstor.org/stable/43686636) I am however not aware of anything like this for the two-sample test, and I'd like to know whether anything exists.
Is there an optimality result for the two-sample Wilcoxon-Mann-Whitney test?
CC BY-SA 4.0
null
2023-04-09T11:32:39.990
2023-04-18T03:28:56.700
null
null
247165
[ "nonparametric", "wilcoxon-mann-whitney-test", "asymptotics", "optimal" ]
612397
2
null
346199
1
null
There is a similar topic, ["Is this way of pooling Kaplan-Meier estimates correct? Example made with R mice::pool_scalar"](https://stats.stackexchange.com/questions/611576/is-this-way-of-pooling-kaplan-meier-estimates-correct-example-made-with-r-mice), that may be useful to you. The author shows how to pool the KM estimates and calculate the pooled confidence intervals for them. The log(-log(1-Surv)) transformation is for the survival probability, and log(-log(Surv)) for the CDF = CIF = 1-KM. Contrary to one of the comments below your question, there is nothing wrong with pooling KM estimates to observe how close the result AFTER the imputation is to the complete observed data. One must not forget that imputation CAN distort the original distribution if the imputation model is misspecified and the analysis model is not congenial with the imputation model. This can easily happen under chained multiple imputation with more than a single variable. Nobody should be surprised that applying a misspecified imputation leads to biased estimates. Therefore, one of the mandatory assessments after imputation is to compare both the post-imputation and the pooled distributions against the observed one. Of course discrepancies may exist, as the imputation is done under MAR (conditional on the predictors), BUT if the pooled and empirical distributions differ a lot, then evidently something happened to your imputation and this must be explained. It doesn't necessarily mean something "bad"; it's just something you should explore further to be able to answer the question of why the observed estimate is so different from the post-imputation one. And for this the pooling process is mandatory. Not to mention that KM is not just a descriptive method. It's a valid non-parametric product-limit estimator and can be used for inferential purposes as well, for example to obtain the survival estimate at each timepoint along with its pointwise confidence interval.
null
CC BY-SA 4.0
null
2023-04-09T11:35:18.763
2023-04-09T11:35:18.763
null
null
383859
null
612398
1
null
null
0
6
I am performing different Active Learning experiments using several query strategies. One of the query strategies is the "Ranked batch-mode sampling" from [Python's modAL library](https://modal-python.readthedocs.io/en/latest/content/examples/ranked_batch_mode.html), which is an implementation of [Cardoso et al.](https://www.sciencedirect.com/science/article/abs/pii/S0020025516313949)’s ranked batch-mode sampling. The query strategy computes the scores for each data instance of the unlabelled pool $x \in \mathcal{D}_{\mathcal{U}}$ using the formula $$ score(x) = \alpha (1 - sim(x, \mathcal{D}_{\mathcal{L}})) + (1 - \alpha)\phi(x), $$ where $\alpha = \frac{n}{n + p}$, $n$ is the size of $\mathcal{D}_{\mathcal{U}}$, $p$ is the size of $\mathcal{D}_{\mathcal{L}}$, $\phi(x)$ is the uncertainty of predictions for $x$, and $sim$ is a similarity function, for instance cosine similarity. The similarity function measures how well the space is explored near $x$, and, thus, will give a higher score to those data instances that are less similar to the ones that have already been labelled. In each iteration, the scores are computed for all the instances in the unlabelled pool. The instance with the highest score is removed from the pool and the scores are recalculated until $B$ instances have been selected. My computation of the time complexity is as follows: The time complexity of computing cosine similarity between two vectors of size $d$ is $O(d)$. Therefore, the time complexity of computing the cosine similarity between every unlabelled sample $x \in \mathcal{D}_{\mathcal{U}}$ and all the labelled samples in $\mathcal{D}_{\mathcal{L}}$ is $O(n \cdot p \cdot d)$. Overall, the time complexity of querying a batch of size $B$ is $O(B \cdot n \cdot p \cdot d)$. Are my computations correct? A representation of the time required by this query strategy vs. the size of the labelled dataset $\mathcal{D}_{\mathcal{L}}$ shows an exponential curve, not a linear curve.
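For concreteness, one scoring pass as described above can be sketched as follows. This is a toy illustration, not modAL's actual implementation; in particular, reading $sim(x, \mathcal{D}_{\mathcal{L}})$ as the maximum cosine similarity to any labelled point is an assumption here:

```python
import numpy as np

def ranked_batch_scores(X_unlab, X_lab, uncertainty):
    """One scoring pass over the unlabelled pool.

    The similarity matrix alone costs O(n * p * d): n * p dot products,
    each over vectors of length d.
    """
    n, p = len(X_unlab), len(X_lab)
    alpha = n / (n + p)

    # Cosine similarity of every unlabelled point to every labelled point
    U = X_unlab / np.linalg.norm(X_unlab, axis=1, keepdims=True)
    L = X_lab / np.linalg.norm(X_lab, axis=1, keepdims=True)
    sim = (U @ L.T).max(axis=1)  # sim(x, D_L) read as max similarity to the pool

    return alpha * (1 - sim) + (1 - alpha) * uncertainty

rng = np.random.default_rng(0)
scores = ranked_batch_scores(rng.normal(size=(4, 3)),   # unlabelled pool (n=4, d=3)
                             rng.normal(size=(2, 3)),   # labelled pool (p=2, d=3)
                             uncertainty=np.array([0.1, 0.9, 0.5, 0.3]))
```

Repeating this pass once per selected instance gives the $O(B \cdot n \cdot p \cdot d)$ total. One thing worth checking against your timing plot: $p$ itself grows as labelling proceeds, so even with a per-pass cost linear in $p$, wall-clock time per query round increases over the course of an experiment, which can make the curve look superlinear.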
Time complexity of ranked batch-mode sampling query strategy
CC BY-SA 4.0
null
2023-04-09T13:05:20.287
2023-04-09T13:05:20.287
null
null
385302
[ "time-complexity", "active-learning" ]
612399
2
null
602151
0
null
- Define probability density functions $p(x)\sim N(0,I_n)$ and $q(x)\sim N(0,2I_n)$. Define the conditional probability density functions $$p(y|x) = q(y|x) \sim \text{Bernoulli}\left(\frac{1}{2}\right)+\sum_{i = 1}^{n} x_i^3-\frac{1}{2}.$$ Define generative models P, Q as $$p(x,y) = p(x)p(y|x),~~~~q(x,y) = q(x)q(y|x).$$ We compute the least square solution $$\beta^*_P = (PXX^{\top})^{-1}PX^{\top}Y,~~~~\beta^*_Q = (QXX^{\top})^{-1}QX^{\top}Y.$$ Note that $PXX^{\top} = I_n$, $QXX^{\top} = 2I_n$ and $$PX^{\top}Y = \int x^{\top}(\int yp(y|x)dy)p(x)dx = \int x^{\top}\sum_{i = 1}^{n} x_i^3p(x)dx = \begin{bmatrix} \int x_1^4p(x)dx \\ \int x_2^4p(x)dx \\ \vdots \\ \int x_n^4p(x)dx \end{bmatrix} = 3*\textbf{1}_n$$ $$QX^{\top}Y = \int x^{\top}(\int yq(y|x)dy)q(x)dx = \int x^{\top}\sum_{i = 1}^{n} x_i^3q(x)dx = \begin{bmatrix} \int x_1^4q(x)dx \\ \int x_2^4q(x)dx \\ \vdots \\ \int x_n^4q(x)dx \end{bmatrix} = 12*\textbf{1}_n$$ (we use the result that for any Gaussian r.v. $x\sim N(0,1)$, we have $~\mathbb{E}(x^4) = 3$.) Therefore, we can compute the least square solution $$\beta^*_P = 3*\textbf{1}_n,~~~~\beta^*_Q = 6*\textbf{1}_n.$$ - Define the estimator $$\hat{\beta}_n=\left(Q_n\frac{p(X)}{q(X)}XX^\top\right)^{-1}\left(Q_n\frac{p(X)}{q(X)}X^\top Y\right),$$ we will show that this $\hat{\beta}_n$ is consistent to $\beta^*_P$ and $\sqrt{n}(\hat{\beta}_n-\beta_P^*)$ is asymptotically normal. To prove consistency, we show that $\hat{\beta}_n\to\beta^*_P$ a.s. as $n\to\infty$. 
Note that $$\hat{\beta}_n-\beta^*_P=\left(Q_n\frac{p(X)}{q(X)}XX^\top\right)^{-1}Q_n\frac{p(X)}{q(X)}X(Y-X^\top\beta^*_P),$$ and $\beta^*_P$ satisfies the population normal equation $Q\frac{p(X)}{q(X)}\left(Y-X^\top\beta^*_P\right)X=0$, therefore, we have $$\hat{\beta}_n-\beta^*_P=(Q_n\frac{p(X)}{q(X)}XX^\top)^{-1}(Q_n-Q)\frac{p(X)}{q(X)}X(Y-X^\top\beta^*_P).$$ Since $Q\|\frac{p(X)}{q(X)}XX^\top\|_1<\infty$ (finite moment condition), by SLLN, we have $$Q_n\frac{p(X)}{q(X)}XX^\top\to Q\frac{p(X)}{q(X)}XX^\top = PXX^\top~~~~ a.s.,$$ By CLT, we have $$\sqrt{n}(Q_n-Q)\frac{p(X)}{q(X)}X(Y-X^\top\beta^*_P)\stackrel{d}\longrightarrow N(0,Q\frac{p(X)^2}{q(X)^2}(Y-X^\top\beta^*_P)^2XX^{\top})$$ By Slutsky's theorem, $$ \begin{aligned} &\sqrt{n}(Q_n\frac{p(X)}{q(X)}XX^\top)^{-1}(Q_n-Q)\frac{p(X)}{q(X)}X(Y-X^\top\beta^*_P)\stackrel{d}\longrightarrow \\ & (PXX^\top)^{-1}N(0,Q\frac{p(X)^2}{q(X)^2}(Y-X^\top\beta^*_P)^2XX^{\top}). \end{aligned} $$ Therefore, by the property of linear transformation for normal distribution $$\sqrt{n}(\hat{\beta}_n-\beta_P^*)\stackrel{d}\longrightarrow N(0,(PXX^\top)^{-1}Q\frac{p(X)^2}{q(X)^2}(Y-X^\top\beta^*_P)^2XX^{\top}(PXX^\top)^{-1}).$$ The asymptotic property further indicates that $$\hat{\beta}_n-\beta_P^*\stackrel{d}\longrightarrow \frac{1}{\sqrt{n}}N(0,(PXX^\top)^{-1}Q\frac{p(X)^2}{q(X)^2}(Y-X^\top\beta^*_P)^2XX^{\top}(PXX^\top)^{-1}),$$ as a result, when $n\rightarrow\infty$, $$\hat{\beta}_n-\beta_P^*\stackrel{p}\longrightarrow 0.$$ by the fact that convergence in distribution to a constant implies convergence in probability to the same constant. - In this setting, we use the data drawn from $Q$ and samples $\{X_j\}_{j = 1}^M$ to estimate $\beta^*_P$. To impose the estimator in 1(c), we have to approximate the unknown density functions $p(x)$ and $q(x)$. Take $p(x)$ as an example, we make a parametric assumption on it and use kernel methods to estimate this density function. 
First, choose a Mercer kernel $K(\cdot,\cdot)$ on the covariate space $R^p$, such as the Gaussian kernel. By the Moore–Aronszajn theorem, the kernel $K$ induces a unique RKHS $\mathcal{H}$ on $R^p$. Assumption: suppose the density function $p(x)$ lies in a kernel exponential family, i.e. $$p\in\{p_f(x) = e^{f(x)-A(f)}q_0(x),f\in\mathcal{H}\}.$$ We find the optimal function by solving the following optimization problem: $$f^* = \mathop{\arg\min}\limits_{f\in\mathcal{H}}\hat{J}(p_f)+\lambda\lVert f\rVert_{\mathcal{H}}^2,$$ where $\hat{J}$ is an empirical estimate of a bilinear functional of $p_f$ derived from the kernel $K$ and the Fisher divergence. After we obtain the approximations $\hat{p}$ and $\hat{q}$, we form the estimator $$\hat{\beta} = \mathop{\arg\min}\limits_{\beta}Q_n\frac{\hat{p}(X)}{\hat{q}(X)}(Y-X^{\top}\beta)^2.$$
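As a sanity check on the first computation, a quick simulation (illustrative dimensions and sample size) agrees with the computed $\beta^*_P = 3\cdot\textbf{1}_n$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_dim, n_obs = 2, 200_000

X = rng.normal(size=(n_obs, n_dim))     # P: x ~ N(0, I_n)
b = rng.integers(0, 2, size=n_obs)      # Bernoulli(1/2)
y = b + (X ** 3).sum(axis=1) - 0.5      # y | x as defined above; E[y|x] = sum x_i^3

# Ordinary least squares on data drawn from P
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # close to [3, 3], matching beta*_P = 3 * 1_n
```

Repeating with $x \sim N(0, 2 I_n)$ should analogously recover $\beta^*_Q \approx 6\cdot\textbf{1}_n$.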
null
CC BY-SA 4.0
null
2023-04-09T13:15:22.297
2023-04-09T13:15:22.297
null
null
320438
null
612400
1
null
null
0
38
I want to check for multicollinearity between independent categorical variables. Which test should I use? First, I want to examine the relationship between the willingness to participate in medical decision making (dependent variable - 2 categories) and education (independent variable). Later, I would do a multiple binary logistic regression (adjusted). I have education (5 categories), paid work (2 or 3 categories) and household income (2 categories). Income was questioned using a 5-point Likert scale.
How to verify multicollinearity for categorical variables?
CC BY-SA 4.0
null
2023-04-09T13:21:19.180
2023-04-09T15:38:52.990
2023-04-09T15:23:17.280
383942
383942
[ "categorical-data", "multicollinearity" ]
612401
1
null
null
1
15
I'm running experiments to evaluate language models on Brazilian Portuguese datasets. I've set it up so each dataset is divided into 10 parts, and I want to use cross-validation to determine the model's performance. But the thing is, I also want to use hyperparameter search to determine the best parameters for the model. I've read in some places that you should do the hyperparameter search within each fold of the cross-validation, and in other places that you have to use a part of the dataset that will not be used in the cross-validation. Can someone please give me a hand on this one? I would also appreciate it if someone could point me to academic articles that describe the process :)
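For what it's worth, the standard resolution of this tension is nested cross-validation: the hyperparameter search runs inside each outer fold, so the outer folds give an honest estimate of the whole tuned procedure. A minimal scikit-learn sketch (synthetic data and an illustrative grid):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Inner loop: hyperparameter search, refit on each outer-training split
inner = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=3,
)

# Outer loop: estimate of the *tuned* procedure's performance;
# the outer test fold never touches the data used to pick C
outer_scores = cross_val_score(inner, X, y, cv=5)
print(outer_scores.mean())
```

On the literature side, Cawley and Talbot (2010, JMLR), "On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation", is the usual academic reference for why selecting hyperparameters on the same folds you report performance on gives optimistically biased estimates.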
Best Way to do Hyperparam Search and Cross-Validation
CC BY-SA 4.0
null
2023-04-09T13:26:12.613
2023-04-09T19:50:56.433
null
null
385304
[ "cross-validation", "natural-language", "hyperparameter" ]
612402
2
null
585651
0
null
One possible approach is to take the Cholesky decomposition of the matrix $\mathbf{A}'(\mathbf{H}^{-1}\otimes\mathbf{H}^{-1})\mathbf{A}$ (which I believe should be invertible and symmetric). This means the matrix can be decomposed into a matrix product $\mathbf{L}\mathbf{L}'$, where $\mathbf{L}$ is lower-triangular. This then allows for a straightforward algorithmic solution of the matrix equation. My understanding is that the algorithm for a Cholesky decomposition is relatively straightforward to code (and faster than matrix inversion).
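A minimal numerical sketch of the factor-then-solve idea (small random SPD matrix for illustration, not the specific $\mathbf{A}'(\mathbf{H}^{-1}\otimes\mathbf{H}^{-1})\mathbf{A}$ from the question; SciPy's `cho_factor`/`cho_solve` handle the two triangular solves):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
M = A.T @ A + np.eye(5)       # symmetric positive definite by construction
b = rng.normal(size=5)

c, low = cho_factor(M)        # M = L L' (factor stored compactly)
x = cho_solve((c, low), b)    # forward + back substitution instead of inverting M

print(np.allclose(M @ x, b))  # True
```

A further advantage beyond speed: the factorization can be reused for many right-hand sides, so solving against several vectors costs one factorization plus cheap triangular solves.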
null
CC BY-SA 4.0
null
2023-04-09T13:57:09.427
2023-04-09T13:57:09.427
null
null
199063
null
612403
1
null
null
1
40
(Using R) - this is my first time posting a stats question online, so please let me know if I'm on the wrong forum or haven't provided enough information and I'll do my best to fix it! About the data and my goal here: Best analogy I can think of is that it's a language course and the final exam is a long conversation. Four times during the course I gather reports on student performance (for example, handwriting, speed of writing, reading ability). I want to know if I can predict pass or failure for the course based on these four reports. I've created a demo dataset here: ``` set.seed(22) reportsdata <- structure(list(Student = c(1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 5L, 5L, 5L, 5L, 6L, 6L, 6L, 6L, 7L, 7L, 7L, 7L, 8L, 8L, 8L, 8L, 9L, 9L, 9L, 9L), TermReport = c("A", "B", "C", "D", "A", "B", "C", "D", "A", "B", "C", "D", "A", "B", "C", "D", "A", "B", "C", "D", "A", "B", "C", "D", "A", "B", "C", "D", "A", "B", "C", "D", "A", "B", "C", "D"), Handwriting = c(sample(x = 1:5, size = 36, replace = TRUE)), Speedwriting = c(sample(x = 1:5, size = 36, replace = TRUE)), Reading = c(sample(x = 1:5, size = 36, replace = TRUE)), Loudness = c(sample(x = 1:5, size = 36, replace = TRUE)), Enthusiasm = c("5", "5", "3", "5", "2", "4", "3", "NA", "1", "4", "3", "3", "NA", "2", "1", "1", "1", "2", "2", "NA", "3", "2", "4", "2", "4", "3","5", "2", "3", "1", "2", "3", "5", "4", "NA", "5"), EndCoursePassFail = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L)), class = "data.frame", row.names = c(NA, -36L)) ``` Note that their score at the end of the course has been retroactively applied to all their entries, though whether they would pass (1) or fail (0) was not known at the time. My real dataset has the same structure, but contains a little over 600 observations and 30 variables (which have been filtered to those that contain less than 30% NA entries, e.g. 
sometimes could not get a score for enthusiasm). So far I've been trying a mixed effects logistic regression with student and trial as random effects (Bobyqa & Nelder_Mead are the only optimisers that don't fail, I need to use ~ . syntax as there are too many variables to list and for reproducibility). E.g.: ``` model <- glmer(EndCoursePassFail ~ . -Student -TermReport + (1|Student) + (1|TermReport), data = reportsdata, family = binomial, control = glmerControl (optimizer = "bobyqa", optCtrl=list(maxfun=1e6)), nAGQ = 1, na.action=na.exclude) ``` For both my original dataset and for the sample data provided above when the seed is set to 22, my model produces convergence errors: ``` Warning messages: 1: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, : Model failed to converge with max|grad| = 0.0477113 (tol = 0.002, component 1) 2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, : Model is nearly unidentifiable: very large eigenvalue - Rescale variables? ``` But, my sample dataset shows the following errors if seed is set to 1: ``` boundary (singular) fit: see help('isSingular') ``` I think my issue could be due to perfect separation between Student and Course Result since the result had been retroactively added. The question is - where to go from here? Some thoughts: - I could average student scores across term reports somehow, so that I no longer need repeated measures and therefore don't get perfect separation. But this seems crude and feels like looking at only the tip of the iceberg. - Looking at other answers for similar issues, I might need to switch to trying a penalised likelihood from blme (R package), however I don't understand it well enough yet to know whether (and if so, how) this sort of perfect separation can be dealt with using blme. 
- Or, I could pretend that there aren't any repeated measures and run the model as though there weren't any - but of course, this is also crude and ignores a lot of potentially useful information provided by the data. Also, in case it is relevant - because there are so many scores to include in the full dataset, I want to later use stepAIC (or a loop, or equivalent) to roughly identify the 'best' model.
Perfect separation, perhaps? In binary outcome and repeated measure (random effect) with multiple independent variables (using R)
CC BY-SA 4.0
null
2023-04-09T14:03:43.303
2023-04-10T15:36:49.637
2023-04-09T14:05:01.400
385305
385305
[ "logistic", "repeated-measures", "multivariate-analysis", "binary-data", "separation" ]
612404
2
null
612403
1
null
First, don't treat `TermReport` as a random effect. The `isSingular` warning comes from finding 0 variance for the attempt by `glmer()` to force the intercepts for `TermReport` into a Gaussian distribution. In many cases with 5 or fewer levels of a categorical predictor it's better to model as a fixed effect. When I replaced the random effect for `TermReport` with a fixed effect I had no errors or warnings with your sample data. Second, be careful how you handle missing data. For your "Enthusiasm" you have specified "NA" as an actual level. That's not usually a wise choice. You might instead consider [multiple imputation](https://stefvanbuuren.name/fimd/) to deal with the missing data. See Section 1.3 of the linked reference for the problems with solutions like yours. Third, watch out for overfitting with your data. You typically need about 15 cases in the minority outcome class per coefficient that you are evaluating. Even if half of the 600 students failed (300 in the minority outcome group), that would limit you to about 20 coefficients. I doubt that such a high proportion failed. If you had a continuous outcome (like a course grade) instead of pass/fail you could probably fit a more flexible model. So 30 variables are already too many, and multi-level categorical predictors make it worse. For example, the way that you have coded `Enthusiasm` requires 5 coefficients, and would still require 4 if you did multiple imputation to get around the `NA` category. Fourth, do NOT use `stepAIC()` or similar automated methods to find the "best" model. Yes, that cuts down on the number of predictors, but it does it in a way that is likely to lead you astray, to end up overfitting your data anyway, and to make inference and p-values uninterpretable. Any use of the outcomes to choose the model ends up posing problems. See [this page](https://stats.stackexchange.com/q/20836/28500) among many others on this site. 
Fifth, you will be better off using the data and your understanding of the subject matter to reduce the number of predictors in a way that doesn't involve the outcomes. Highly related predictors can be combined into single combined predictors. Continuous predictors might be reduced to a few principal components. See Chapter 4 of Harrell's [Regression Modeling Strategies](https://hbiostat.org/rmsc/multivar.html), in particular Section 4.7 dealing with "data reduction." Sixth, penalized regression is certainly a possibility if you do end up with perfect separation. In the form of ridge regression, it also allows you to use all of your predictors in a way that, properly implemented, will minimize overfitting by reducing the magnitudes of all the regression coefficients. It's worth learning about in general even if you end up not needing it for this project.
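On the sixth point, here is a minimal sketch of what an L2 (ridge) penalty buys you, using scikit-learn rather than an R package purely for illustration, with made-up perfectly separated data where unpenalized maximum likelihood estimates would diverge:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Perfectly separated 1-D data: unpenalized ML coefficients run to infinity
X = np.array([[-2.0], [-1.0], [-0.5], [0.5], [1.0], [2.0]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression(penalty="l2", C=1.0)  # smaller C = stronger penalty
model.fit(X, y)

print(model.coef_)       # finite, shrunk coefficient despite separation
print(model.predict(X))  # still classifies the training data correctly
```

In R, the blme and glmnet packages (or `brglm2` for Firth-type corrections) play the analogous role for mixed and fixed-effects logistic models respectively.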
null
CC BY-SA 4.0
null
2023-04-09T15:04:32.603
2023-04-09T15:04:32.603
null
null
28500
null
612405
1
null
null
12
1487
I am working with a distribution with the following density: $$f(x) = - \frac{(\alpha+1)^2 x^\alpha \log(\beta x)}{1-(\alpha + 1)\log(\beta)}$$ and CDF $$\mathbb{P} (X \leq x) = \int_0^x - \frac{(\alpha+1)^2 t^\alpha \log(\beta t)}{1-(\alpha + 1)\log(\beta)} \, dt = \frac{x^{\alpha+1}((\alpha+1)(\log(\beta x))-1)}{(\alpha+1)\log(\beta)-1}$$ with $x \in (0,1), \beta \in (0,1)$ and $\alpha >-1.$ How can I generate random samples from this distribution in Python/R? Which books can I use to learn about the simulation of random variables and random numbers? Any help is appreciated.
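Since the CDF is available but has no obvious closed-form inverse, one generic approach is numerical inversion (inverse transform sampling with a root finder): draw $u \sim \text{Uniform}(0,1)$ and solve $F(x) = u$ on $(0,1)$. A sketch in Python with illustrative parameter values (SciPy's `brentq`; the same idea works with R's `uniroot`):

```python
import numpy as np
from scipy.optimize import brentq

def cdf(x, alpha, beta):
    a1 = alpha + 1.0
    return x**a1 * (a1 * np.log(beta * x) - 1.0) / (a1 * np.log(beta) - 1.0)

def sample(n, alpha, beta, rng):
    u = rng.uniform(size=n)
    # F is continuous and strictly increasing on (0, 1), so F(x) - u has a
    # unique root in the bracket below for each u
    return np.array([brentq(lambda x: cdf(x, alpha, beta) - ui, 1e-12, 1.0)
                     for ui in u])

rng = np.random.default_rng(0)
draws = sample(1000, alpha=1.0, beta=0.5, rng=rng)
```

Rejection sampling is the main alternative when the density is bounded on the support. For books, Devroye's "Non-Uniform Random Variate Generation" (freely available online) covers these techniques in depth, and Robert & Casella's "Monte Carlo Statistical Methods" is the other standard reference.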
How to generate from this distribution without inverse in R/Python?
CC BY-SA 4.0
null
2023-04-09T15:16:47.200
2023-04-12T14:18:01.707
2023-04-09T20:30:02.537
56940
334650
[ "r", "density-function", "random-generation", "computational-statistics" ]
612407
2
null
612400
1
null
There's no need to pre-test for multicollinearity in a study like this. That can be an issue in large-scale studies with many potential predictor variables, so it gets a lot of attention in machine learning courses. Even if you found multicollinearity, how would you change the modeling strategy for your studies? All that multicollinearity will do here is inflate the variance estimates for individual coefficients.* It probably won't hurt predictions from the models at all, as the high individual variance of one coefficient will typically be offset by corresponding covariances with other coefficients. If you are worried about multicollinearity, fit a model and then use the `vif()` function in the R [car package](https://cran.r-project.org/package=car) to calculate the variance inflation factors (VIF) directly from the model. As [this page](https://stats.stackexchange.com/a/457970/28500) explains, the VIF produced by that function are appropriate both for categorical predictors and for models other than ordinary least squares, situations where the [usual VIF calculations](https://en.wikipedia.org/wiki/Variance_inflation_factor) are inappropriate. --- *If the collinearity is perfect then there might be a problem, but that just means that some of your predictors are exact linear combinations of others. Software will often just remove such a predictor from the model before it does the fitting.
null
CC BY-SA 4.0
null
2023-04-09T15:38:52.990
2023-04-09T15:38:52.990
null
null
28500
null
612408
2
null
523397
0
null
Let's look at an example where there are regions where one classification is obvious but also a region where there is overlap and ambiguity. [](https://i.stack.imgur.com/vCReY.png) In this picture, the upper right and lower left are clearly dominated by red, while the upper left and lower right are clearly dominated by blue. Thus, if you have a point like $(2, 2)$, the prediction should be a high probability of the red category, while $(2, -2)$ should lead to a high probability of the blue category. At a point like $(0, 0)$, the category to which the point belongs is not clear, and I would want my model to reflect this. Sure, it is desirable to get confident predictions, but the data in this case do not allow for such confidence. It really is the case that there is ambiguity, and to force a model to predict with confidence is to dismiss reality. Given how I generated this plot (code below), the probability that $(0,0)$ belongs to red is $1/2$, same as the probability that $(0,0)$ belongs to blue. If you force some other probability, you will be in a position to make mistakes. I would say that, if you have overlapping classes like we do here, the way to proceed is to embrace the fact that there can be ambiguity. For instance, in this example, you really cannot accurately predict the category to which $(0,0)$ belongs. ``` library(MASS) library(ggplot2) set.seed(2023) N <- 250 X0 <- MASS::mvrnorm(N, c(0, 0), matrix(c( 1, 0.9, 0.9, 1 ), 2, 2)) X1 <- MASS::mvrnorm(N, c(0, 0), matrix(c( 1, -0.9, -0.9, 1 ), 2, 2)) d0 <- data.frame( x1 = X0[, 1], x2 = X0[, 2], y = "Category 0" ) d1 <- data.frame( x1 = X1[, 1], x2 = X1[, 2], y = "Category 1" ) d <- rbind(d0, d1) ggplot(d, aes(x = x1, y = x2, col = y)) + geom_point() ```
null
CC BY-SA 4.0
null
2023-04-09T16:01:50.880
2023-04-09T16:01:50.880
null
null
247274
null
612410
2
null
257762
0
null
> Does it mean: they apply Shapiro-Wilk test on the dependent variable and if does not pass the test then they discard certain samples from the data set? No. It sounds like they did this for the features. Their methodology seems to follow this approach (not to say that I support their approach): - Run a feature through the Shapiro-Wilk test for normality. - Check the p-value of the test. - If the p-value is high (say above $0.05$), do not modify the data. - If the p-value is low (say below $0.05$), perform some kind of outlier screening to remove observations from the entire data set. This is then repeated for all features, meaning that, if feature $1$ leads to the removal of observations $7$ and $62$ while feature $2$ leads to the removal of observations $9$, $62$, and $90$, then observations $7$, $9$, $62$, and $90$ are omitted from the analysis. Two comments to the original question align with my take on this methodology. > You should never delete observations that appear to be outliers so that the remaining data can pass a normality test such as the the Shapiro Wilk Test. -Michael R. Chernick > I never understood the logic of a test of the null hypothesis that a distribution is exactly normal when no actual distributions are exactly normal. –David Lane
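If it helps make the (again, not endorsed) recipe concrete, a sketch in Python might look like this; the p-value and z-score cutoffs are the arbitrary illustrative choices named in the steps above:

```python
import numpy as np
from scipy import stats

def screen_feature(X, col, p_cutoff=0.05, z_cutoff=3.0):
    """Row indices flagged for removal by one feature, per the described recipe."""
    x = X[:, col]
    _, p = stats.shapiro(x)
    if p >= p_cutoff:                    # "looks normal enough": leave data alone
        return np.array([], dtype=int)
    z = np.abs(stats.zscore(x))          # otherwise screen for extreme observations
    return np.flatnonzero(z > z_cutoff)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
X[7, 0] = 100.0                          # inject one gross outlier in feature 0

# Union the flagged rows over all features, then drop them from the whole data set
drop = np.unique(np.concatenate([screen_feature(X, j) for j in range(X.shape[1])]))
X_clean = np.delete(X, drop, axis=0)
```

Note how the injected outlier in feature 0 removes the entire row, including its perfectly ordinary value in feature 1, which is exactly the cross-feature deletion behaviour described above.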
null
CC BY-SA 4.0
null
2023-04-09T16:16:02.503
2023-04-09T16:16:02.503
null
null
247274
null
612411
2
null
245893
1
null
There are two issues with the code in the question. The first one is that the residuals effect (`resid.effect`) is not correctly computed, because the value for the Intercept, which is fitted by the `lm` model, is not taken into account. This is the correct way of computing `Ypred`: ``` Ypred <- fit$coefficients["(Intercept)"] + YB + YE ``` The second issue is related to the fact that the example given in the question does not have a balanced design. Indeed, there are 2 cases of E=1 and 3 cases of E=0 when B=0, and 2 cases of E=0 and 3 cases of E=1 when B=1. When developing the equations for the sum of squares, a cross product appears, which is `2 * sum (ye * yb)`, and whose value was not taken into account in the code. Here is a minimally modified version of the original code, such that it works (i.e. the two final calls to `print` show identical values): ``` # One realization of `rnorm(10, 0, 1)` noise <- c(-0.48698389, -0.89996146, 0.32048085, 0.31472992, -1.04079374, 0.38544421, 0.79755477, -0.55370181, 0.17731321, 0.04298504) B <- rep(0:1, each = 5) E <- rep(0:1, 5) Y <- 5 * B + 2 * E + noise fit <- lm(Y ~ B + E) n <- length(Y) y <- Y - mean(Y) y2 <- y ^ 2 tot.SS <- sum(y2) YB <- B * fit$coefficients["B"] yb <- YB - mean(YB) yb2 <- yb ^ 2 B.effect <- sum(yb2) YE <- E * fit$coefficients["E"] ye <- YE - mean(YE) ye2 <- ye ^ 2 E.effect <- sum(ye2) Ypred <- fit$coefficients["(Intercept)"] + YB + YE YR <- Y - Ypred yr2 <- YR ^ 2 resid.effect <- sum(yr2) all.effect <- resid.effect + B.effect + E.effect + 2 * sum(ye * yb) print(paste("Computed effects: ", all.effect)) # [1] "Computed effects: 98.1221885328716" print(paste("Total sum of squares: ", tot.SS)) # [1] "Total sum of squares: 98.1221885328716" ``` Below is an example with balanced design that follows the conventions of the original code. 
Notice that there is no need to add the term `2 * sum(ye * yb)` to `all.effect` (since its value is equal to zero anyway): ``` # One realization of `rnorm(12, 0, 1)` noise <- c(-0.71181782, -0.12173941, -0.37700219, -0.23534021, 0.20056269, 0.46490554, 0.97869267, -0.49941705, 0.03854682, 0.49561333, 1.02325336, -0.23959159) B <- rep(0:1, each = 6) E <- rep(0:1, 6) Y <- 5 * B + 2 * E + noise fit <- lm(Y ~ B + E) n <- length(Y) y <- Y - mean(Y) y2 <- y ^ 2 tot.SS <- sum(y2) YB <- B * fit$coefficients["B"] yb <- YB - mean(YB) yb2 <- yb ^ 2 B.effect <- sum(yb2) YE <- E * fit$coefficients["E"] ye <- YE - mean(YE) ye2 <- ye ^ 2 E.effect <- sum(ye2) Ypred <- fit$coefficients["(Intercept)"] + YB + YE YR <- Y - Ypred yr2 <- YR ^ 2 resid.effect <- sum(yr2) all.effect <- resid.effect + B.effect + E.effect print(paste("Computed effects: ", all.effect)) # [1] "Computed effects: 100.760110733347" print(paste("Total sum of squares: ", tot.SS)) # [1] "Total sum of squares: 100.760110733347" ```
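The same decomposition can be checked outside R. The sketch below re-derives the coefficients by hand for a balanced design of this kind (where the centered B and E are orthogonal, so each slope reduces to cov(x, Y)/var(x)) and confirms that the two effect sums of squares plus the residual sum of squares add up to the total; the noise values are made up for illustration:

```python
def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

# Balanced design: the centered B and E are orthogonal (covariance zero).
B = [0] * 6 + [1] * 6
E = [0, 1] * 6
noise = [-0.71, -0.12, -0.38, -0.24, 0.20, 0.46,
         0.98, -0.50, 0.04, 0.50, 1.02, -0.24]
Y = [5 * b + 2 * e + n for b, e, n in zip(B, E, noise)]

# With orthogonal regressors, each OLS slope is just cov(x, Y)/var(x).
bB = cov(B, Y) / cov(B, B)
bE = cov(E, Y) / cov(E, E)
intercept = mean(Y) - bB * mean(B) - bE * mean(E)

pred = [intercept + bB * b + bE * e for b, e in zip(B, E)]
tot_SS = sum((y - mean(Y)) ** 2 for y in Y)
B_effect = sum((bB * b - mean([bB * x for x in B])) ** 2 for b in B)
E_effect = sum((bE * e - mean([bE * x for x in E])) ** 2 for e in E)
resid = sum((y - p) ** 2 for y, p in zip(Y, pred))

print(round(tot_SS, 6), round(B_effect + E_effect + resid, 6))  # the two agree
```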
null
CC BY-SA 4.0
null
2023-04-09T16:31:34.150
2023-04-09T16:31:34.150
null
null
385308
null
612412
2
null
531087
0
null
In OLS linear regression with an intercept, there are two equivalent ways to calculate $R^2$. $$ R^2=\left(\text{corr}\left(y,\hat y\right)\right)^2\\ R^2=1-\left(\dfrac{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\hat y_i \right)^2 }{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\bar y \right)^2 }\right) $$ When you use other methods, these need not be equal. In particular, $\left(\text{corr}\left(y,\hat y\right)\right)^2 = \left(\text{corr}\left(y,a + b\hat y\right)\right)^2$ for any real $a$ and any nonzero $b$. Thus, squaring the Pearson correlation between predicted and observed values can lead to thinking that terrible predictions are good, such as $y = (1,2,3)$ while $\hat y=(105, 205, 305)$. In this case, $\left(\text{corr}\left(y,\hat y\right)\right)^2 = 1$, yet the predictions are terrible. The second calculation will catch that the predictions are terrible and report a value less than zero. Your results are completely consistent with this. You get a fairly high squared Pearson correlation and a much lower result from the second calculation.
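The toy example above is easy to verify directly; this sketch computes both quantities for $y=(1,2,3)$ and $\hat y=(105,205,305)$:

```python
import math

def pearson(a, b):
    """Sample Pearson correlation between two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

y = [1, 2, 3]
y_hat = [105, 205, 305]

corr_sq = pearson(y, y_hat) ** 2                       # ~1.0: perfect linear correlation
sse = sum((a - b) ** 2 for a, b in zip(y, y_hat))
sst = sum((a - sum(y) / len(y)) ** 2 for a in y)
r2 = 1 - sse / sst                                     # hugely negative

print(corr_sq, r2)
```

The first formula rewards any linear relationship, while the second penalizes predictions that are far from the observed values.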
null
CC BY-SA 4.0
null
2023-04-09T16:37:53.093
2023-04-09T16:37:53.093
null
null
247274
null
612413
2
null
612389
2
null
There are two problems with your approach. One is a simple oversight in the way that you set up your example. The second is fundamental, in that your approach doesn't provide a true marginal estimate over the population of interest. First, the parametric survival estimate needs to include all predictor variables that contribute to the linear predictor. The distribution of survival times $T$ in an exponential model conditional on a vector $X$ of covariate values can be written as: $$\log T \sim X'\beta +W $$ where $\beta$ is the corresponding vector of coefficients and $W$ has a standard minimum extreme value distribution. In your construction of the baseline hazard you omitted entries for `age` or `sex`, effectively setting `age` to 0 and `sex` to the reference level of `m`. That's certainly not a "marginal" survival estimate. If you plot your "baseline" survival against the corresponding raw data, you will see that your "baseline" survival estimates are much too high, probably because of effectively setting `age=0`. Second, you might think that you can get around that problem by specifying values for `age` and `sex`. But which to choose? The `pbc` data set illustrates the problem nicely: ``` with(pbc,coef(lm(age~sex))) # (Intercept) sexf # 55.710721 -5.553778 ``` The average age for females is over 5 years less than that for males. What single combination of `age` and `sex` could possibly be used to get anything like a marginal survival estimate this way? Would you use the average `sex` representation of 0.89 of a female in the `pbc` data set? Furthermore, the survival of an "average" individual, even if you could define such an individual, can be far from the true marginal survival, an average of survival versus time over the population of interest. 
For example, based on your model you can determine $S_i(t)$ for an appropriate set of $N$ individuals $i$ representing the population, and calculate the Ederer estimate: $$S_e(t)=\frac{1}{N}\sum_{i=1}^N S_i(t) .$$ There's a `survexp()` function in the R `survival` package to do that, or to provide other estimates of cohort survival. Each of those has potential problems, however, discussed in detail in Chapter 10 of [Therneau and Grambsch](https://www.springer.com/us/book/9780387987842). In particular, after decades of effort, the authors of the package still don't have a reliable way to calculate standard errors for these estimates. However you end up trying to get marginal survival estimates, be very careful in how you interpret them. Finally, consider whether marginal estimates are needed at all. As this [post by Frank Harrell](https://www.fharrell.com/post/marg/) documents, they are highly problematic in binomial or survival regression models and depend on the composition of the population used to build the model. Such marginal estimates might not carry over well to a different population. Furthermore, a treatment decision in clinical practice should always be conditional upon a particular patient's situation, not some estimated population-average treatment effect. Illustrating survival curves conditional upon particular typical covariate values and treatment choices is a more useful way to document a model's results, unless a treatment is being applied to an entire population (as in the water-fluoridation example noted in that post).
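To make the Ederer average concrete, here is a small sketch under an exponential AFT model, $\log T = X'\beta + W$, so that $S_i(t)=\exp\{-t\exp(-X_i'\beta)\}$; the coefficients and cohort are made up for illustration and are not fitted to the `pbc` data:

```python
import math

# Hypothetical exponential AFT fit: log T = b0 + b_age*age + b_sex*sex + W.
b0, b_age, b_sex = 8.0, -0.03, 0.3            # made-up coefficients
cohort = [(50, 0), (58, 0), (48, 1), (55, 1), (62, 1)]   # (age, sex=female) rows

def surv(t, age, sex):
    """Individual survival S_i(t) = exp(-t * exp(-X'beta)) under the exponential model."""
    rate = math.exp(-(b0 + b_age * age + b_sex * sex))
    return math.exp(-t * rate)

def ederer(t):
    """Ederer estimate: the plain average of the individual survival curves."""
    return sum(surv(t, a, s) for a, s in cohort) / len(cohort)

for t in (365, 1825, 3650):
    print(t, round(ederer(t), 3))
```

This is the direct analogue of $S_e(t)=\frac{1}{N}\sum_i S_i(t)$; note that it generally differs from the survival curve of the "average" covariate vector.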
null
CC BY-SA 4.0
null
2023-04-09T16:40:16.913
2023-04-09T17:31:27.243
2023-04-09T17:31:27.243
28500
28500
null
612414
1
null
null
1
19
So I am currently studying the relationship between University Rankings (my dependent variable) and academic freedom (my independent variable, which can take values between 0 and 1). I have a panel of hundreds of universities over 17 years. At first, I was hesitating between a linear model (using OLS regression with fixed effects) and an ordinal model (since my dependent variable is a ranking going from 1 to 600). Some research showed me that an ordinal regression model might not be the best choice for two reasons: - Non-linear regressions can be problematic when there are a lot of fixed effects - An ordinal regression model might not be optimal when there is a high number of different ordinal categories (my model has 105 categories: one for each number from 1 to 100, and then one for 200, 300, 400, 500 and 600). So, knowing this, I have two questions regarding what I should do next. Question 1: Is what I read in my research true? Is it really not a good idea to use an ordinal regression model here? Question 2: If I cannot use an ordinal regression, what could I do in order for my regression to take into account that losing or gaining ranks in the top 100 part of the ranking is way more impactful than at the bottom? What I mean by this, in the context of University Rankings, is that going from the 80th place to the 30th place should be considered way more of a big deal than going from the 500th place to the 450th.
How can I give more weight/importance to certain values in my regression when my dependent variable is a ranking?
CC BY-SA 4.0
null
2023-04-09T16:44:45.400
2023-04-09T16:44:45.400
null
null
382870
[ "panel-data", "ordinal-data" ]
612415
1
null
null
2
27
I've recently started learning about Bayesian statistics, and I came across this very nice answer by Xi'an [https://stats.stackexchange.com/a/129908/268693](https://stats.stackexchange.com/a/129908/268693), which [in my slight paraphrasing] says the following: Given a family of distributions $\{f(\cdot|\theta): \theta \in \Theta \}$ defined on a sample space $\mathcal{X}$, and a prior distribution $\pi$, we require that $$ \int_{\Theta} f(x|\theta) \pi(\theta) \,d\theta < \infty \quad \text{ for all } x \in \mathcal{X}; $$ otherwise, we do not obtain a valid posterior distribution $\pi(\theta|x)$, and so Bayesian inference is not possible. This leads me to the following question: > What are some families of distributions $\{f(\cdot|\theta): \theta \in \Theta \}$ that one might encounter in practice for which there exists a set $E \subset \mathcal{X}$ of positive measure such that $$ \int_{\Theta} f(x|\theta) \,d\theta = + \infty \quad \text{ for all } x \in E? $$ In other words, I'm curious if there is a family of distributions for which a uniform prior leads to an "improper posterior" such that the problem cannot be remedied by re-defining the $f(\cdot|\theta)$'s on a set of measure zero. Here are a couple of examples I came up with: - The Cauchy distribution: $f(x|\theta) = \frac{1}{\theta \pi [1 + (x/\theta)^2 ]}, \; x > 0, \; \theta > 0$. In this case, $\int_{0}^{\infty} f(x|\theta) \,d\theta = + \infty$ for all $x > 0$. - A rather contrived example: For each $\theta \in \Theta := (1,\infty)$, let $f(x|\theta) := \frac{1}{\theta^x}$ for each $x \in \mathcal{X} := (0,\infty)$. Then $$ \int_{\Theta} f(x|\theta) \,d\theta := \begin{cases} \frac{1}{x-1} & \text{ if } x > 1 \\ \infty & \text{ if } 0 < x \leq 1 \end{cases} $$ (Would $f(x|\theta) = 1/\theta^x$ ever be used in practice?) Are there any other such examples? 
I am especially interested in examples such as last one, where the set $\{x \in \mathcal{X}: \int_{\Theta} f(x|\theta) \,d\theta = + \infty \}$ has positive and finite measure.
Distribution families whose likelihoods integrate to $+\infty$ for some sample values
CC BY-SA 4.0
null
2023-04-09T16:47:54.410
2023-04-09T16:47:54.410
null
null
268693
[ "bayesian", "prior", "posterior", "uninformative-prior", "improper-prior" ]
612416
1
null
null
0
14
I am trying to implement [Attention is All You Need](https://arxiv.org/abs/1706.03762) paper in PyTorch without looking at any code. I'm struggling to understand how do I get the Keys and the Values from the output of the top encoder. Do I learn 2 linear projections which take as input the output of the top encoder (which is a single matrix) and output the keys (or values) matrix? I don't quite understand it from the original paper, nor from [The Illustrated Transformer](http://jalammar.github.io/illustrated-transformer/). I understand how to get the Queries; they are obtained the same way as in the encoder self-attention layer: via a learned linear projection (linear layer).
In the Transformer, how do I get the Keys and the Values from the output of the top encoder (which go into Encoder-Decoder Attention)?
CC BY-SA 4.0
null
2023-04-09T16:50:25.747
2023-04-09T16:50:25.747
null
null
384280
[ "machine-learning", "transformers", "attention" ]
612417
1
null
null
1
14
In short: MCMC is used to construct posterior distributions for parameters of central tendency and all parameters used in the formula for this central tendency. I only care about the parameters of central tendency, which readily converge. However, several upstream parameters do not converge. Should I worry about this? More details: I work with a Hierarchical Bayesian Model in which an exponential fit is applied to a dose-response curve constructed for each sample of a set of 100+ samples. Given the fitting parameters, a dose is determined from a measured response. These 100+ doses are then used to calculate parameters of central tendency. I use MCMC to derive posteriors for all these parameters. I only care about the parameters of central tendency, and these readily converge by visual inspection and Gelman-Rubin diagnostics. But many of the exponential fit parameters will not converge. Should I be worried about this?
Nonconvergence of some parameters in MCMC of Hierarchical Bayesian Model
CC BY-SA 4.0
null
2023-04-09T17:35:09.060
2023-04-09T17:35:09.060
null
null
270794
[ "markov-chain-montecarlo", "convergence", "hierarchical-bayesian" ]
612418
1
612420
null
1
196
I have noticed that Logistic Regression ([https://en.wikipedia.org/wiki/Logistic_regression](https://en.wikipedia.org/wiki/Logistic_regression)) is a model that is used extensively for both Regression problems and Classification problems. When used for Regression, the main purpose of Logistic Regression appears to be to estimate the effect of a predictor variable on the response variable. For example, here are some examples in which Logistic Regression is used for Regression problems: - Modelling of binary logistic regression for obesity among secondary students in a rural area of Kedah: https://aip.scitation.org/doi/pdf/10.1063/1.4887702 - A logit model for the estimation of the educational level influence on unemployment in Romania: https://mpra.ub.uni-muenchen.de/81719/1/MPRA_paper_81719.pdf - A logistic regression investigation of the relationship between the Learning Assistant model and failure rates in introductory STEM courses: https://stemeducationjournal.springeropen.com/articles/10.1186/s40594-018-0152-1 When used for Classification, the main purpose of Logistic Regression appears to be to estimate the probability of the response variable assuming a certain value given an observed set of predictor variables. 
For example, here are some examples in which Logistic Regression is used for Classification problems: - Using logistic regression to develop a diagnostic model for COVID‑19: A single‑center study: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9277749/pdf/JEHP-11-153.pdf - Logistic regression technique for prediction of cardiovascular disease: https://www.sciencedirect.com/science/article/pii/S2666285X22000449 - A Study of Logistic Regression for Fatigue Classification Based on Data of Tongue and Pulse: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8917949/pdf/ECAM2022-2454678.pdf Based on surveying such articles, I noticed the following patterns: - When Logistic Regression is being used for Regression problems, the performance of the Regression Model seems to be primarily measured using metrics that correspond to the overall "Goodness of Fit" and "Likelihood" of the model (e.g. in the Regression Articles, the Confusion Matrix is rarely reported in such cases) - When Logistic Regression is being used for Classification problems, the performance of the Regression Model seems to be primarily measured using metrics that correspond to the ability of the model to accurately classify individual subjects, such as "AUC/ROC", "Confusion Matrix" and "F-Score". The interesting thing is that regardless of whether you are working on a Regression problem or a Classification problem - if you do decide to use Logistic Regression, in both cases you can calculate Classification metrics such as the Confusion Matrix. Based on these observations, I have the following question: My Question: Suppose I am using Logistic Regression in a regression problem (e.g. estimating the effect of predictors such as age on employment vs unemployment) and the model seems to be performing well (e.g. statistically significant model coefficients, statistically significant overall model fit, etc.). 
Even though I am technically still able to calculate Classification metrics such as the Confusion Matrix, F-Score and AUC/ROC - am I still obliged to measure the ability of this Regression model to successfully classify individual observations based on metrics such as ROC/AUC? Or am I not obliged to do this, since I am not working on a Classification problem? I feel that it might be possible to encounter a situation/dataset in which the goal was to build a Logistic Regression model for a Regression problem - and the resulting model might have good performance metrics used in regression problems, but might have poor ROC/AUC values. In such a case, is this a good Logistic Regression model as it performs well for the regression problem as intended - or is it a questionable model as it is unable to perform classification at a satisfactory level?
Measuring the Performance of Logistic Regression: Regression vs Classification
CC BY-SA 4.0
null
2023-04-09T18:09:20.173
2023-05-08T13:06:15.080
2023-04-09T18:55:52.633
247274
77179
[ "regression", "logistic", "classification", "inference", "model-evaluation" ]
612419
1
612458
null
2
37
Let $\{f(\cdot|\theta): \theta \in \Theta \}$ be a family of pdfs and let $\pi: \Theta \to \mathbb{R}$ be a prior. According to Bayes' theorem (as stated in, e.g., [Casella and Berger](https://amzn.to/3MuERwD)), the posterior distribution $\pi(\cdot|x)$ is given by $$ \pi(\theta|x) = \frac{f(x|\theta) \pi(\theta)}{\int_{\Theta} f(x|\theta) \pi(\theta)\,d\theta}. $$ My questions: - How do we define the posterior distribution for values of $x$ such that $\int_{\Theta} f(x|\theta) \pi(\theta)\,d\theta = 0$ ? Or do we just leave it undefined? - If my reasoning is correct, $\int_{\Theta} f(x|\theta) \pi(\theta)\,d\theta > 0$ provided that the set $E_x := \{\theta \in \Theta: f(x|\theta)\, \pi(\theta) > 0 \}$ has positive measure. Is there anything else that we can say about the set of $x$ for which $\int_{\Theta} f(x|\theta) \pi(\theta)\,d\theta > 0$ ? - If one is working with a model such that $\int_{\Theta} f(x|\theta) \pi(\theta)\,d\theta = 0$ for certain values of $x$ in the sample space, does this indicate that there is a problem with the model (either in our choice of pdf $f(\cdot|\theta)$ or our choice of prior $\pi$) ?
Some questions about the posterior distribution when the marginal distribution is zero
CC BY-SA 4.0
null
2023-04-09T18:29:09.920
2023-04-10T07:53:33.953
2023-04-10T07:43:11.947
7224
268693
[ "bayesian", "prior", "posterior", "marginal-distribution" ]
612420
2
null
612418
3
null
Some of the trouble with this question is that many of the "metrics typically associated to measure the performance of predictive modelling" are applied inappropriately, such as assessing the classification accuracy with no regard for how those classifications are created and whether a software-default threshold (usually $1/2$) is appropriate. That said, I have seen plenty of papers (published in the elite journals of their respective fields) where the ultimate goal is causal inference on the coefficients, yet the logistic regression models have standard metrics calculated like McFadden's $R^2$ or classification accuracy. For instance, [Sundaram & Yermack (2007)](https://archive.nyu.edu/jspui/bitstream/2451/25975/2/06-003.pdf) report classification accuracy in their table 6 despite the main purpose of running those logistic regressions being the coefficient inference. (My take is that they made a mistake in doing so, because one of their models reports classification accuracy worse than would be achieved by predicting the majority class every time.) On the other hand, I recently saw another paper that had $R^2_{adj}<0$ all over the place. That their regression models were, arguably, doing worse than doing no modeling at all, helped me form a rationale for being skeptical of their results (there were other issues with the statistics). Thus, to explicitly address your question, it can be helpful to give some sense of model performance, no matter how modest, even if prediction is not the main goal of the work. While a reviewer/professor/customer might not require it or oblige you to report it, I still believe it to be valuable information. In such a case, it is typically acceptable to editors and referees to have rather modest performance in terms of the performance metrics. For instance, I have seen papers in top journals with adjusted $R^2$ around $0.05$, maybe even lower. 
Even that Sundaram & Yermack paper has rather pedestrian performance [when accuracy scores are compared to naïvely predicting the majority category every time](https://stats.stackexchange.com/a/605451/247274). (Most of their $R^2$-style scores, defined as I do in that link, are greater than zero (one is less than zero), but they do not scream out, "This model gets an $\text{A}$," the way that a classification accuracy of $97\%$ might.) REFERENCE Sundaram, Rangarajan K., and David L. Yermack. "Pay me later: Inside debt and its role in managerial compensation." The Journal of Finance 62.4 (2007): 1551-1588.
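As a quick sanity check on any reported classification accuracy, it is worth computing the majority-class baseline mentioned above; a model should at least beat this:

```python
from collections import Counter

def majority_baseline_accuracy(labels):
    """Accuracy achieved by always predicting the most frequent class."""
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)

# With a 90/10 split, always guessing the majority class scores 0.9.
labels = [0] * 90 + [1] * 10
print(majority_baseline_accuracy(labels))  # 0.9
```

Any accuracy below this number means the model classifies worse than doing no modeling at all.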
null
CC BY-SA 4.0
null
2023-04-09T18:55:41.380
2023-05-08T13:06:15.080
2023-05-08T13:06:15.080
247274
247274
null
612421
1
null
null
1
44
How to deduce the following general result: $$ \operatorname{Var}(\max_i X_i) \leq \sum_i \operatorname{Var}(X_i)\,,$$ where $X_1,\dots,X_N$ are any random variables (maybe not independent) with finite second moment. I have seen this result many times, but I don't know how to prove it. Thank you for your help.
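Before attempting a proof, the inequality is easy to sanity-check numerically; this sketch estimates $\operatorname{Var}(\max_i X_i)$ by simulation for a few deliberately correlated normals (the correlation structure is arbitrary, chosen only to show that independence is not assumed):

```python
import random

random.seed(0)
N, reps = 3, 20000

def variance(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

# Correlated draws: each X_i shares a common component Z.
samples = [[] for _ in range(N)]
maxima = []
for _ in range(reps):
    z = random.gauss(0, 1)
    xs = [0.5 * z + random.gauss(0, 1) for _ in range(N)]
    for i, x in enumerate(xs):
        samples[i].append(x)
    maxima.append(max(xs))

var_max = variance(maxima)
sum_vars = sum(variance(s) for s in samples)
print(var_max, sum_vars)   # var_max comes out well below sum_vars
```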
Variance of maximum of random variables with finite second moment
CC BY-SA 4.0
null
2023-04-09T19:11:41.007
2023-04-09T20:53:29.203
null
null
250665
[ "variance" ]
612422
2
null
611958
1
null
First, note that $AUC = 0.64$ isn't such terrible performance. Sure, it is better to get $AUC = 0.9$ or $AUC = 0.99$, but if you validate $AUC > 0.5$, then your modeling has some ability to tell apart the two categories. Consequently, it is not a given that you really have to do an autopsy to determine the cause of death of your model. Your model sounds like it could be very-much alive. Since you have a $9$$:$$1$ class imbalance, it cannot be that your predictions between $0.49$ and $0.51$ are calibrated. If all of the events really happened with such probabilities, you would not have that kind of imbalance (your categories would be roughly balanced, since all events happen with about probability $0.5$). Consequently, since the $AUC$ seems to be decent (not great, but decent) and reflective of at least modest ability to distinguish the categories, you might get somewhere by calibrating the predictions.
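The incompatibility claimed above is easy to simulate: if predictions of about $0.5$ really were calibrated, the realized outcomes would be roughly balanced rather than $9$$:$$1$. The sketch below draws outcomes as if a prediction of $0.5$ were correct:

```python
import random

random.seed(1)
n = 100000
# Suppose every prediction is ~0.5 and is calibrated: outcomes are Bernoulli(0.5).
outcomes = [1 if random.random() < 0.5 else 0 for _ in range(n)]
positive_rate = sum(outcomes) / n
print(positive_rate)   # close to 0.5, far from a 0.1 base rate
```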
null
CC BY-SA 4.0
null
2023-04-09T19:17:46.740
2023-04-09T19:17:46.740
null
null
247274
null
612423
2
null
432047
2
null
[Oversampling is largely a solution to a non-problem](https://stats.stackexchange.com/questions/357466/are-unbalanced-datasets-problematic-and-how-does-oversampling-purport-to-he) Consequently, it is up for debate if you should be doing this at all on your training set. However, except for some particular situations (I give one below), fiddling with the data in the test set is a terrible idea. The idea of having a test set is to get an honest evaluation of how the model will perform in production. If you do this under conditions that are not representative of the real conditions, you are tricking yourself into believing your model is better than it really is or not as good as it is. Using a model for real that is not as good as you think is bad for business because the performance will turn out to be subpar. Underestimating your performance can result in spending time and money fixing a model that is not broken. You do not want either of these. A valid reason for fiddling with test data could be to check out what happens if there is some kind of data drift (checking robustness), but that does not seem to be what is happening here.
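If you do oversample, the sketch below shows the safe placement: split first, duplicate minority examples only in the training split, and leave the test split untouched. This is a toy random-oversampling implementation, not any specific library's API:

```python
import random

def oversample_minority(rows, label_of):
    """Randomly duplicate minority-class rows until the two classes are balanced."""
    pos = [r for r in rows if label_of(r) == 1]
    neg = [r for r in rows if label_of(r) == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = [random.choice(minority) for _ in range(len(majority) - len(minority))]
    return rows + extra

random.seed(0)
data = [(i, 0) for i in range(90)] + [(i, 1) for i in range(90, 100)]
random.shuffle(data)
train, test = data[:70], data[70:]                   # split FIRST...
train = oversample_minority(train, lambda r: r[1])   # ...then oversample train only
print(len(train), len(test))                         # test size is unchanged
```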
null
CC BY-SA 4.0
null
2023-04-09T19:25:59.343
2023-04-09T19:25:59.343
null
null
247274
null
612424
1
null
null
2
37
I have a file (corresponding to one and only one person) with n columns (or classes) containing different data: Name (class 1) , address(class 2) , number (class 3), username (class 4), etc (class n). I wish to classify an incoming data as one of these classes. Example : If I get an incoming data to be : "John Smith" ..my ML model should classify it as : "Name". If i get "Ronald Reagan Avenue 3456" : the output should be "address" and so on. I wonder if the naive Bayes model in ML would be suitable for such task? I have been looking at the naive Bayes model that make me skeptical : It assumes independence between features . Would that be a problem in this case?
Classifying data contained in a file with Machine Learning
CC BY-SA 4.0
null
2023-04-09T19:31:56.103
2023-04-10T08:48:43.307
null
null
385319
[ "machine-learning" ]
612425
2
null
403163
0
null
Classification accuracy tells you how accurate your model is when you use some threshold to bin the probability predictions. In fact, [the predicted probabilities can be quite poor despite good classification accuracy at a particular threshold.](https://stats.stackexchange.com/questions/464636/proper-scoring-rule-when-there-is-a-decision-to-make-e-g-spam-vs-ham-email) Consequently, classification accuracy does not seem to be particularly interesting or useful to your situation. What you can do is estimate true response probabilities and how reflective of them your model is. This is called "calibration". The Python package `sklearn` has a [calibration method](https://scikit-learn.org/stable/modules/calibration.html), as does the R package `rms` that I demonstrate below. ``` library(rms) set.seed(2023) N <- 1000 x1 <- runif(N) x2 <- runif(N) z <- 3*x1 - 3*x2 p <- 1/(1 + exp(-z)) y <- rbinom(N, 1, p) L <- rms::lrm(y ~ x1 + x2, x = T, y = T) cal <- rms::calibrate(L, B = 1000) plot(cal) ``` [](https://i.stack.imgur.com/JD59G.png) Ideally, the calibration curve will equal the line $y=x$, since we want the probability of event occurrence to equal the predicted probability. Since the calibration curve is close to the $y=x$ line, the calibration seems to be pretty good. When you do something like this for your model, you may find that it lacks calibration. Examples of models that lack calibration are given in the `sklearn` documentation.
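The same idea as the R `calibrate()` demo can be roughed out in plain Python with a simple binned reliability check (no bootstrap correction, so this is only a crude analogue of the `rms` version); here the "model" is perfectly calibrated by construction, so predicted and observed rates should roughly agree in every bin:

```python
import math
import random

random.seed(2023)
n = 5000
preds, outcomes = [], []
for _ in range(n):
    x1, x2 = random.random(), random.random()
    p = 1 / (1 + math.exp(-(3 * x1 - 3 * x2)))
    preds.append(p)                          # pretend the model predicts the true p
    outcomes.append(1 if random.random() < p else 0)

# Bin by predicted probability; compare mean prediction to the observed event rate.
gaps = []
for b in range(10):
    lo, hi = b / 10, (b + 1) / 10
    idx = [i for i, p in enumerate(preds) if lo <= p < hi]
    if idx:
        mean_pred = sum(preds[i] for i in idx) / len(idx)
        obs_rate = sum(outcomes[i] for i in idx) / len(idx)
        gaps.append(abs(mean_pred - obs_rate))
        print(f"{lo:.1f}-{hi:.1f}: predicted {mean_pred:.2f}, observed {obs_rate:.2f}")
```

A poorly calibrated model would show large gaps between the two columns in some bins.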
null
CC BY-SA 4.0
null
2023-04-09T19:34:42.383
2023-04-09T19:34:42.383
null
null
247274
null
612426
2
null
612401
1
null
If I understand correctly, you want to pass the hyperparameters along to each fold that cross-validation produces. If you want to do that explicitly, you will have to break apart some of the library functions. I did a quick search in the scikit-learn library; I'll leave some links. [grid_search](https://scikit-learn.org/stable/modules/grid_search.html) [cross_validation](https://scikit-learn.org/stable/modules/cross_validation.html) [cross_validate](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html#sklearn.model_selection.cross_validate) [KFold](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html#sklearn.model_selection.KFold) [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV) [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html#sklearn.model_selection.RandomizedSearchCV) First we have to see how cross-validation behaves; these links have very good explanations. The code below mimics cross-validation without hyperparameters: ``` import numpy as np from sklearn.model_selection import train_test_split from sklearn import datasets from sklearn import svm from sklearn.model_selection import KFold X, y = datasets.load_iris(return_X_y=True) kf = KFold(n_splits=2) for i, (X_train_index_fold, X_test_index_fold) in enumerate(kf.split(X)): print(f"Fold {i}:") print(f'Shape {X.shape}') print(f" Train: shape={X_train_index_fold.shape}") print(f" Test: shape={X_test_index_fold.shape}") """ kf.split yields a different pair of index arrays on each iteration; I then use the training indices to subset the numpy arrays (note: X and y must be subset with the SAME indices). 
""" X_train, X_test, y_train, y_test = train_test_split( X[X_train_index_fold], y[X_train_index_fold], test_size=0.3, random_state=0) print(f" X train: shape={X_train.shape}") print(f" X test: shape={X_test.shape}") print(f" y train: shape={y_train.shape}") print(f" y test: shape={y_test.shape}") clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train) print(f'Fold {i} | Score: {clf.score(X_test, y_test)}') ``` I recommend that you print the steps, so you can see the processes individually. output ``` Fold 0: Shape (150, 4) Train: shape=(75,) Test: shape=(75,) X train: shape=(52, 4) X test: shape=(23, 4) y train: shape=(52,) y test: shape=(23,) Fold 1: Shape (150, 4) Train: shape=(75,) Test: shape=(75,) X train: shape=(52, 4) X test: shape=(23, 4) y train: shape=(52,) y test: shape=(23,) ``` (each fold also prints its score; the exact values depend on the split) You can see that I can separate each fold, with different indices, train the model again, and pass the data through the classifier. This very simple example shows how the folds are separated. [](https://i.stack.imgur.com/E28dm.png) As for passing the hyperparameters: in the GridSearchCV and RandomizedSearchCV links above, you already pass the cross-validation strategy as a parameter. If you have a large data set it can run over several splits; if the set is too small it will return an error. With a well-populated set it will run over each fold, using a function similar to the one above, and then apply the hyperparameters again. If you give more details, I will edit this answer to see whether I can help within my knowledge. GridSearch documentation info: ``` cv int, cross-validation generator or an iterable, default=None Determines the cross-validation splitting strategy. 
Possible inputs for cv are: None, to use the default 5-fold cross validation, integer, to specify the number of folds in a (Stratified)KFold, CV splitter, An iterable yielding (train, test) splits as arrays of indices. For integer/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. These splitters are instantiated with shuffle=False so the splits will be the same across calls. Refer User Guide for the various cross-validation strategies that can be used here. ``` We're in this together, bro - a fellow Brazilian here.
null
CC BY-SA 4.0
null
2023-04-09T19:50:56.433
2023-04-09T19:50:56.433
null
null
373067
null
612427
1
612439
null
2
19
[Decision Tree](https://i.stack.imgur.com/Bkjns.png) I have found Misclassification rates for all the leaf nodes. - samples = 3635 + 1101 = 4736, class = Cash, misclassification rate = 1101 / 4736 = 0.232. - samples = 47436 + 44556 = 91992, class = Cash, misclassification rate = 44556 / 91992 = 0.484. - samples = 7072 + 15252 = 22324, class = Credit Card, misclassification rate = 7072 / 22324 = 0.317. - samples = 1294 + 1456 = 2750, class = Credit Card, misclassification rate = 1294 / 2750 = 0.471. - samples = 7238 + 22295 = 29533, class = Credit Card, misclassification rate = 7238 / 29533 = 0.245. I'm finding it difficult to find AUC value from here. Please help me out with this. I will be grateful.
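The leaf computations above are just counts, so they can be checked with a few lines (the pairs below are the misclassified and correctly classified counts read off the tree):

```python
# (misclassified, correctly classified) per leaf, from the tree above
leaves = [(1101, 3635), (44556, 47436), (7072, 15252), (1294, 1456), (7238, 22295)]

for wrong, right in leaves:
    n = wrong + right
    print(n, round(wrong / n, 3))
```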
How to find AUC from Binary Classification Decision Tree?
CC BY-SA 4.0
null
2023-04-09T19:58:51.760
2023-04-10T00:33:53.760
null
null
385320
[ "machine-learning", "classification", "cart", "auc" ]
612428
2
null
612405
17
null
A quick and dirty solution is to apply the [inverse transform sampling](https://en.wikipedia.org/wiki/Inverse_transform_sampling) in which the quantile function is computed via numerical inversion. Below is an `R` implementation along with an example. ``` # (your density) myf <- function(x, a, b) { num = (a+1)**2 * x**a * log(b*x) den = 1 - (a+1)*log(b) return(-num/den) } # (your cdf) myF <- function(x, a, b) { num = x**(a+1)* ((a+1)*log(b*x)-1) den = (a+1) * log(b) - 1 return(num/den) } # quantile function computed numerically myqf <- function(p, a, b) { uniroot(function(x) myF(x,a,b) -p, lower = .Machine$double.eps, upper = 1-.Machine$double.eps)$root } # get some uniform draws u <- runif(1e+5) # apply the inverse transform x <- sapply(u, myqf, a = 5, b=0.99) hist(x,probability = TRUE) plot(function(x) myf(x,a=5, b=0.99), from=0, to=1, col=2, lwd=2, add= TRUE) ``` [](https://i.stack.imgur.com/udsEA.png) Surely there are less dirty solutions, i.e. more computationally efficient methods to handle this problem, one of these is suggested by Galen in the comments. Many other methods can be found in these three outstanding books: - Monte Carlo Statistical Methods by Robert and Casella, Springer; - Computational Statistics by Givens and Hoeting, Wiley; - Non-Uniform Random Variate Generation by Devroye.
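The same numerical-inversion idea can also be written without R; the sketch below mirrors the snippet above with plain bisection in place of `uniroot` (standard library only, using the CDF from the question):

```python
import math
import random

def F(x, a, b):
    """CDF from the question: x^(a+1) * ((a+1)*log(b*x) - 1) / ((a+1)*log(b) - 1)."""
    return x ** (a + 1) * ((a + 1) * math.log(b * x) - 1) / ((a + 1) * math.log(b) - 1)

def quantile(p, a, b, tol=1e-12):
    """Invert F numerically on (0, 1) by bisection."""
    lo, hi = 1e-12, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if F(mid, a, b) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

random.seed(42)
a, b = 5, 0.99
draws = [quantile(random.random(), a, b) for _ in range(1000)]
print(min(draws), max(draws))   # all draws lie inside (0, 1)
```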
null
CC BY-SA 4.0
null
2023-04-09T20:01:22.257
2023-04-10T04:49:13.033
2023-04-10T04:49:13.033
56940
56940
null
612429
1
null
null
1
81
I'm working on a model for survival prediction and using the concordance index to evaluate the results ([https://medium.com/analytics-vidhya/concordance-index-72298c11eac7](https://medium.com/analytics-vidhya/concordance-index-72298c11eac7)). I want to show that my model is better than a baseline model using my test set. My understanding is that for a typical test metric (e.g., prediction of housing prices) one could use the Wilcoxon signed-rank test to see if my model is statistically significantly better than a baseline model. However, in this case the concordance metric doesn't have any meaning for a single sample -- it's a description of how well the model can discriminate ordering amongst a set of inputs. Therefore, is the following valid: divide the test set into batches, compute each model's concordance on each batch, and run the Wilcoxon signed-rank test on the per-batch values? Perhaps also vary the batch size and shuffle the samples and run the test again to verify the result still holds? Clarifications: I should have clarified that I'm not using the Harrell C-index but the adjusted Antolini index. I know the Harrell C-index is not often used anymore. The baseline model is a machine learning model that provides decent values. My model is the same type of machine learning model but trained differently. The two models have the exact same architecture, and therefore the same set of predictors (just different weights on the predictors, since the two models were trained differently).
Statistical test to compare models results for concordance index
CC BY-SA 4.0
null
2023-04-09T20:19:00.753
2023-04-10T13:37:40.560
2023-04-10T08:43:24.017
385321
385321
[ "machine-learning", "survival", "wilcoxon-signed-rank", "concordance" ]
612431
2
null
612405
14
null
Your pdf is the same as a transformation of a variable that follows a truncated gamma distribution. The gamma distribution has shape $k=2$, rate $\alpha+1$ and is truncated at $-\log(\beta)$. ### Transformation rule for probability density functions If we apply the transformation $$\begin{array}{} Y &=& -\log(\beta X) \\ X &=& e^{-Y}/\beta \end{array}$$ then the pdf for $Y$ is $$g(y) = f\left({e^{-y}}/{\beta}\right) \cdot \left| \frac{dx}{dy}\right| = \frac{{\alpha^\prime}^2 \beta^{-\alpha^\prime} }{1-\alpha^\prime\log(\beta)} y e^{-\alpha^\prime y}$$ with $\alpha^\prime = \alpha +1$ and the domain of $y$ is $(-\log(\beta),\infty)$ ### Computational A computational demonstration in R is: ``` alpha = 1 beta = 0.5 ### sampling from truncated gamma distribution using inverse transform p = runif(10^5, pgamma(-log(beta), shape = 2, rate = alpha+1), 1) y = qgamma(p, shape = 2, rate = alpha+1) ### draw a histogram hist(exp(-y)/beta, breaks = seq(0,1,1/40), freq =F) ### add lines with your density function x = seq(0,1,0.001) y = - ((alpha+1)^2 * x^alpha * log(beta * x))/(1-(alpha + 1)*log(beta)) lines(x,y) ``` [](https://i.stack.imgur.com/p6mv9.png) ### Alternative transformation Alternatively, one can also apply an additional shift $$Z = Y+\log(\beta)$$ or effectively $$Z = -\log(X)$$ which is a variable in the range $(0,\infty)$ and has the pdf $$h(z) = g(z-\log(\beta)) = \frac{{\alpha^\prime}^2 }{1-\alpha^\prime\log(\beta)} (z-\log(\beta)) e^{-\alpha^\prime z}$$ (the factor $\beta^{-\alpha^\prime}$ cancels against the $\beta^{\alpha^\prime}$ arising from the shift), which is a mixture of two gamma distributed variables (or more specifically Erlang distributions). So one should be able to perform the trick of Ben with the log transform of uniform variables (creating exponential/Erlang distributed variables), but without the need for rejection sampling. The method will go something like - Draw two uniform variables $U_1$, $U_2$.
- Depending on the value of $U_1$ being above or below some value $u_c$ have some sum like $z= a \log U_1 + b \log U_2 + c$ or use $z= d \log U_2$. - Compute $x = e^{-z}$ One will need to work out the values $a,b,c,d,u_c$ by writing out $h(z)$ more precisely as the mixture of $\alpha^\prime \exp(-\alpha^\prime z)$ and ${\alpha^\prime}^2 z \exp(-\alpha^\prime z)$. And one needs to take care that $U_1$ is used both as the variable that decides which part of the mixture is used and as a variable generating the exponentially distributed draw.
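As a concrete check of the rejection-free route, here is a hedged Python sketch (rather than R). The mixture weights $1/(1-\alpha^\prime\log\beta)$ for the Erlang(2, $\alpha^\prime$) component and $-\alpha^\prime\log(\beta)/(1-\alpha^\prime\log\beta)$ for the exponential component are my own working-out of the constants left as an exercise above, so treat them as an assumption to verify:

```python
import numpy as np

def sample_x(n, alpha, beta, rng):
    """Draw from f(x) = -(alpha+1)^2 x^alpha log(beta x) / (1 - (alpha+1) log beta)
    on (0, 1) via z = -log(x), a mixture of Erlang(2, a') and Exp(a'), a' = alpha + 1."""
    a = alpha + 1.0
    w_erlang = 1.0 / (1.0 - a * np.log(beta))      # assumed weight of the Erlang(2, a') part
    u1, u2, u3 = rng.uniform(size=(3, n))
    z_erlang = -(np.log(u1) + np.log(u2)) / a      # Erlang(2, a') draws
    z_exp = -np.log(u1) / a                        # Exp(a') draws
    z = np.where(u3 < w_erlang, z_erlang, z_exp)   # pick a mixture component per draw
    return np.exp(-z)

rng = np.random.default_rng(0)
x = sample_x(200_000, alpha=5, beta=0.99, rng=rng)

# compare the sample mean with the mean of the target density (midpoint rule)
g = (np.arange(10_000) + 0.5) / 10_000
f = -(6.0 ** 2) * g ** 5 * np.log(0.99 * g) / (1.0 - 6.0 * np.log(0.99))
true_mean = float(np.mean(g * f))
print(abs(x.mean() - true_mean) < 0.01)
```

The sample mean agrees with the mean of the target density to within Monte Carlo error, which supports the decomposition.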
null
CC BY-SA 4.0
null
2023-04-09T21:25:47.800
2023-04-10T10:11:08.973
2023-04-10T10:11:08.973
164061
164061
null
612432
2
null
612429
0
null
A few thoughts: First, Frank Harrell (who developed the C-index) doesn't think that it is useful for comparing models. See [this answer](https://stats.stackexchange.com/a/576943/28500) and its links, especially to a web post by Harrell. The C-index nicely summarizes how well a single model discriminates between individuals, but it says nothing directly about the calibration of the model: how well the model's predicted survival estimates correspond with what was observed. The link contains references to better ways to compare models. In response to edited question: The Antolini index, as I understand it, is still just a measure of discrimination. It evidently extends the C-index to a situation with time dependence. It does not seem to provide any more information about calibration than the C-index. If you are interested in the quality of a prediction "for a single sample," as you say, then what you need is a measure of calibration instead. That's typically done by estimating "observed" probabilities of survival at some time by a very flexible fit to the data set and comparing against model-predicted probabilities. Repeating the process on resampled data sets can give corrections for optimism in the fit. See [this page](https://stats.stackexchange.com/a/206308/28500) for an outline. The potential problem with a calibration measure is that your use of the Antolini index suggests that you have time-dependent covariates in your model. In that situation I don't know of a reliable way to estimate "observed" and "expected" probabilities of events at any given survival time, at least in a way that can extend reliably to new data samples. The problem (at least for survival models with at most one event per individual) is that if you have a covariate value for an individual at some time, you already know that the individual is alive at that time. The `lifelines` package, for example, thus won't even allow for predictions from Cox models with time-dependent covariates. 
Harrell's `calibrate()` function in his R `rms` package won't handle them, either. There might be ways around that with a joint model of covariates and survival over time, but that's beyond my expertise. Second, unless you have tens of thousands of cases, using separate training and test sets isn't a good idea. See [this post](https://www.fharrell.com/post/split-val/) by Frank Harrell. In response to edited question: If you have a large enough sample to set aside a completely separate test set, then you already can compare your discrimination measure directly between the two models. The Wilcoxon-Mann-Whitney test is just a discrimination test, a rescaling of the C-index. See the section of [this answer](https://stats.stackexchange.com/a/146174/28500) about that test. Resampling via cross-validation or bootstrapping is close to what you suggest for getting estimates of variability; explaining what you did to others would be easiest if you directly invoked one of those approaches. They have the advantage that you could compare the whole modeling process between your two training methods. For example, you could repeat each of the training methods on multiple bootstrap samples of the training data and evaluate the discrimination either on the entire data set (for a small data set) or on the held-out test set (for very large data set). That would compare the performance of the two training methods directly. Third, an answer would depend on what you mean by the "baseline model." Trivially, if the baseline model is a null model (no model at all, no predictors) and you really wanted to use the C-index, then the standard error reported for the C-index (concordance) would indicate whether it's statistically different from the value of 0.5 expected from a null model. 
If the "baseline model" is a model only using a subset of the predictors in your complete model (that is, the baseline model is nested within the complete model), then a likelihood-ratio test between those models would be a better-accepted and more sensitive comparison. (Again trivially, the likelihood-ratio test reported just for the complete model also documents its superiority to a null model.) In response to edited question: These don't seem to be nested models of the type that a likelihood-ratio test could evaluate.
null
CC BY-SA 4.0
null
2023-04-09T21:32:36.960
2023-04-10T13:37:40.560
2023-04-10T13:37:40.560
28500
28500
null
612433
1
612438
null
1
27
I'm trying to make sense of some results I ran on a weighted regression with log-transformed IV1 and DV, and IV2, which is a factor variable. IV2 is restaurant price range, categorized as 1 (low price range) or 4 (high price range), to be exact. I'm trying to interpret the effect of IV2 on the DV. The coefficient for the price range is 0.657: log(DV) = 2.99 + 0.657(price4) + 1.11(log_IV1) - 0.190(log_IV1 * price4) This seems to mean that with the higher price range of 4, the DV will increase by (exp(0.657)-1)*100% ≈ 93%. Is this correct? I'm feeling confused because when I run a regression where IV1 and DV are NOT log-transformed, the results say that the DV will decrease with the higher price range of 4: DV = 306.87 - 47.44(price4) + 576.77(IV1) - 175.67(IV1 * price4) How can I explain this difference in results?
Interpreting regression output with categorical IV and log-transformed DV
CC BY-SA 4.0
null
2023-04-09T21:53:17.143
2023-04-10T00:12:49.640
null
null
385327
[ "regression", "categorical-data", "data-transformation", "logarithm" ]
612434
1
612437
null
2
37
What's the difference between Locality Preserving Projection (LPP) and Principal Component Analysis (PCA)? This is our data, shown as a 3D plot. Here I use LPP and PCA to reduce the 3D data to 2D data, and they give different results. I know that PCA reduces the dimension along the directions of maximum variance, but what about LPP? [](https://i.stack.imgur.com/gmIXQ.png) Locality Preserving Projection (LPP) [](https://i.stack.imgur.com/gbEjx.png) Principal Component Analysis (PCA) [](https://i.stack.imgur.com/hgfnD.png)
Locality Preserving Projection (LPP) VS Principal Component Analysis (PCA)
CC BY-SA 4.0
null
2023-04-09T22:39:30.333
2023-04-10T00:06:28.473
2023-04-09T22:48:15.900
275488
275488
[ "pca", "dimensionality-reduction" ]
612435
1
null
null
5
102
Can someone help me please? How can I see the coefficient for municipio6:ano0, considering that I don't want to have a reference level in my model for the municipio variable (that's why I set the intercept to 0)? ``` > m1 <- glm.nb(casos ~ 0 + municipio + ano0 + municipio:ano0 + offset(log(populacao)), data = dataset) > summary(m1) Call: glm.nb(formula = casos ~ 0 + municipio + ano0 + municipio:ano0 + offset(log(populacao)), data = dataset, init.theta = 1.202601993, link = log) Deviance Residuals: Min 1Q Median 3Q Max -1.6965 -1.0652 -0.7363 0.3766 4.0045 Coefficients: Estimate Std. Error z value Pr(>|z|) municipio6 -12.22419 0.26220 -46.622 < 2e-16 *** municipio1 -10.07937 0.16096 -62.621 < 2e-16 *** municipio2 -10.15899 0.16450 -61.758 < 2e-16 *** municipio3 -11.46408 0.20718 -55.333 < 2e-16 *** municipio4 -11.12637 0.21401 -51.990 < 2e-16 *** municipio5 -12.18312 0.26636 -45.739 < 2e-16 *** ano0 0.06878 0.03153 2.181 0.02916 * municipio1:ano0 -0.09080 0.03802 -2.389 0.01692 * municipio2:ano0 -0.10842 0.03845 -2.820 0.00480 ** municipio3:ano0 -0.06582 0.04144 -1.588 0.11223 municipio4:ano0 -0.14021 0.04388 -3.196 0.00139 ** municipio5:ano0 0.01989 0.04447 0.447 0.65471 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for Negative Binomial(1.2026) family taken to be 1) Null deviance: 24450.50 on 1008 degrees of freedom Residual deviance: 989.11 on 996 degrees of freedom AIC: 2863 Number of Fisher Scoring iterations: 1 Theta: 1.203 Std. Err.: 0.134 2 x log-likelihood: -2837.036 ```
GLM dropping an interaction
CC BY-SA 4.0
null
2023-04-09T22:39:42.527
2023-04-10T14:30:50.463
null
null
375122
[ "regression", "multiple-regression", "generalized-linear-model", "nonlinear-regression", "negative-binomial-distribution" ]
612436
2
null
612435
6
null
You actually need to remove the main effect of `ano0`. So your model formula should be ``` casos ~ 0 + municipio + municipio:ano0 + offset(log(populacao)) ``` I know it looks like you're omitting a term, but you will estimate exactly the same number of parameters and the model fit will be identical. This gives you an intercept and slope of `ano0` for each level of `municipio`.
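To see why the fits are identical, note that the two formulas produce design matrices spanning the same column space. A hedged sketch with ordinary least squares on made-up data (Python/NumPy rather than the negative binomial fit above; the argument is about the design matrices, not the likelihood):

```python
import numpy as np

rng = np.random.default_rng(42)
n, K = 300, 6
g = rng.integers(0, K, size=n)           # group labels, playing the role of municipio
x = rng.uniform(0.0, 10.0, size=n)       # continuous covariate, playing the role of ano0
y = rng.normal(g - 0.1 * g * x, 1.0)     # made-up response

D = np.eye(K)[g]                         # one indicator column per group

# parameterization A: 0 + g + x + g:x  (a common slope plus K-1 interaction columns)
XA = np.column_stack([D, x, D[:, :-1] * x[:, None]])
# parameterization B: 0 + g + g:x      (one slope column for every group)
XB = np.column_stack([D, D * x[:, None]])

fitA = XA @ np.linalg.lstsq(XA, y, rcond=None)[0]
fitB = XB @ np.linalg.lstsq(XB, y, rcond=None)[0]
print(np.allclose(fitA, fitB))           # same column space, hence identical fitted values
```

Both matrices have 2K columns and span the same space, so the fitted values (and the likelihood) coincide; only the labeling of the coefficients changes.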
null
CC BY-SA 4.0
null
2023-04-09T23:15:56.037
2023-04-10T14:30:50.463
2023-04-10T14:30:50.463
116195
116195
null
612437
2
null
612434
1
null
As you note in your question, PCA attempts to reduce dimensionality while maximizing the variance accounted for. LPP, on the other hand, attempts to reduce the number of dimensions while preserving the local neighborhood structure of the data: points that are close together in the original space should remain close together in the projection. I located the following website ([https://notebook.community/jakevdp/lpproj/Example](https://notebook.community/jakevdp/lpproj/Example)) which appears to give a nice explanation with some sample code in Python.
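For a concrete feel of the algorithm, here is a minimal LPP sketch, assuming the standard formulation (heat-kernel weights on a k-nearest-neighbor graph, then the generalized eigenproblem $X^\top L X a = \lambda X^\top D X a$, keeping the eigenvectors with the smallest eigenvalues); the function name and defaults are just for illustration:

```python
import numpy as np

def lpp(X, n_components=2, k=5, t=1.0):
    """Minimal Locality Preserving Projection; X has shape (n_samples, n_features)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.zeros((n, n))
    nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]            # k nearest neighbors, skipping self
    for i in range(n):
        W[i, nbrs[i]] = np.exp(-d2[i, nbrs[i]] / t)      # heat-kernel edge weights
    W = np.maximum(W, W.T)                               # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                            # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])          # jitter keeps B positive definite
    # generalized eigenproblem A a = lambda B a, via a Cholesky reduction
    Lb = np.linalg.cholesky(B)
    M = np.linalg.solve(Lb, np.linalg.solve(Lb, A).T).T
    vals, w = np.linalg.eigh((M + M.T) / 2)              # ascending eigenvalues
    vecs = np.linalg.solve(Lb.T, w)
    return X @ vecs[:, :n_components]                    # smallest-eigenvalue directions

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Y = lpp(X)
print(Y.shape)   # (100, 2)
```

The small eigenvalues correspond to projections in which neighboring points stay close, which is exactly the objective that distinguishes LPP from PCA's variance maximization.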
null
CC BY-SA 4.0
null
2023-04-10T00:06:28.473
2023-04-10T00:06:28.473
null
null
199063
null
612438
2
null
612433
1
null
The issue here is that you are also including an interaction term, and this makes interpretation of the "main" effect of price4 challenging. The reason is that this is a partial slope (in both models). That is to say, it is the rate of change in the dependent variable for a one-unit change in this independent variable ASSUMING all other variables are held constant. And here's the rub: you can't hold the interaction constant while changing the main variable of focus. My expectation would be that if you plugged in corresponding values for IV1 and log_IV1 and simplified each model, then the signs of your "slope" for price4 would be the same. Happy to clarify/elaborate more if desired.
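To illustrate with the numbers from the question (a hypothetical evaluation, not a definitive interpretation): in the log-log model the effect of price4 on log(DV) at a given IV1 is 0.657 - 0.190*log(IV1), so its sign depends on where IV1 is evaluated:

```python
import math

b_price, b_interaction = 0.657, -0.190   # coefficients from the posted log-log model

def price4_effect(iv1):
    """Change in log(DV) for price range 4 vs. 1, evaluated at a given IV1."""
    return b_price + b_interaction * math.log(iv1)

print(price4_effect(1.0) > 0)      # 0.657: positive at IV1 = 1
print(price4_effect(1000.0) < 0)   # 0.657 - 0.190*log(1000) is negative
```

This is why the two fitted models can report opposite-signed coefficients on price4 while still agreeing once the interaction is evaluated at a concrete IV1 value.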
null
CC BY-SA 4.0
null
2023-04-10T00:12:49.640
2023-04-10T00:12:49.640
null
null
199063
null
612439
2
null
612427
1
null
A ROC curve is created by doing the following. - Make predictions over a range of values (such as the probability of membership in a category). - Apply a threshold to bin those predictions into two categories. - Calculate the sensitivity and specificity when that threshold is applied. - Vary the threshold to get many different values of sensitivity and specificity. Your predictions seem to be the categories themselves rather than predictions over a range of values. Consequently, there is not much that you can do with a ROC curve. (You can do a little bit, but not much.) However, your software package likely has a function or method that allows you to access the predictions made over a range of values. Then you would be able to calculate a nice ROC curve.
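The four steps above can be sketched in a few lines (a toy Python example with made-up scores, not tied to any particular package; it assumes no tied scores):

```python
import numpy as np

def roc_points(scores, labels):
    """Sweep a threshold over the scores; return (1 - specificity, sensitivity) pairs."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    thresholds = np.r_[np.inf, np.sort(scores)[::-1]]
    fpr, tpr = [], []
    for thr in thresholds:
        pred = scores >= thr                       # bin predictions at this threshold
        tpr.append((pred & (labels == 1)).sum() / (labels == 1).sum())  # sensitivity
        fpr.append((pred & (labels == 0)).sum() / (labels == 0).sum())  # 1 - specificity
    return np.array(fpr), np.array(tpr)

scores = [0.1, 0.4, 0.35, 0.8]   # predictions over a range of values
labels = [0, 0, 1, 1]
fpr, tpr = roc_points(scores, labels)
auc = float(((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2).sum())  # trapezoid rule
print(auc)   # 0.75
```

With predicted categories only, there is effectively a single non-trivial threshold and hence a single point on this curve, which is why predictions over a range of values are needed.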
null
CC BY-SA 4.0
null
2023-04-10T00:33:53.760
2023-04-10T00:33:53.760
null
null
247274
null
612441
1
null
null
0
9
I am building a DL model where the test data can have labels that differ from those in the training dataset. My semi-supervised model predicts the test data quite poorly, and I want to know whether there are any DL papers about this kind of work. I'm also trying to train with multiple datasets that share common labels but each also have labels of their own; these datasets share common features but also have additional features of their own. Are there any tips or guides for preprocessing for this kind of work? Any deep learning papers, or some guides?
Is there any tips or research of Deep learning for data integration, and label unmatched in train, and test data?
CC BY-SA 4.0
null
2023-04-10T01:35:50.087
2023-04-10T01:35:50.087
null
null
334249
[ "neural-networks" ]
612442
2
null
316827
1
null
I came across the same question myself when studying a Bayesian book, and I can offer two possible answers (one is a detailed explanation and the other is a simple one that appeals to your experience in a multivariate calculus class). I. One Possible Answer (detailed explanation) When you specify the model as you did above by $p(Y|Z, \theta)$, not by $p(Y|Z, \theta, \pi)$, it's like you are incorporating the "information" or "knowledge" (namely that $Y$ is not directly dependent on $\pi$) about your model and restricting the form of the joint distribution to a simpler form, as Tom Loredo explains the notion of conditional independence in [https://math.stackexchange.com/questions/23093/could-someone-explain-conditional-independence](https://math.stackexchange.com/questions/23093/could-someone-explain-conditional-independence). Or, you can think $y_1, ..., y_n | Z_1, ..., Z_n, \theta_1, ..., \theta_K \sim p(Y|Z, \theta, \pi) := \mathcal{N}(\theta_{z_i}, 1) = p(Y|Z, \theta)$. If you add another model specification like $p(Y|\pi)$ or $p(Y|Z, \theta, \pi)$ that is not equal to $p(Y|Z, \theta)$ to the model you specified above, I think such a model still makes sense, but you will lose conditional independence by the Bayes rule (think of the discrete-case example below). Hence there is no proof of conditional independence; rather, conditional independence is part of the "specification" of the model you work with. EXAMPLE OF DISCRETE CASE: Defining $P(W|R, C)$ to be something other than $0.9$ in the discrete-case example from the video below still makes sense, but it changes the structure of the graphical model and loses conditional independence (let's say $P(W|R, C) = 0.95, P(W|R, \sim C) = 0.85, P(W|\sim R, C) = 0.3, P(W|\sim R, \sim C) = 0.05$. In this case $P(W|R)$ is calculated as $0.9342105$, hence $P(W|R, C) \neq P(W|R)$, which means $W$ and $C$ are not conditionally independent given $R$. 
Note we can even keep $P(W|R) = 0.9$ and omit some of $P(W|R, C)$, $P(W|\sim R, C)$, $P(W|R, \sim C)$ or $P(W|\sim R, \sim C)$). [https://www.youtube.com/watch?v=WVKFaDqcBFQ](https://www.youtube.com/watch?v=WVKFaDqcBFQ) II. Another Possible Answer (simple explanation that appeals to your experience) Consider a simple multivariate function $f(x, y)$. Given this specification of the function $f$, you would assume $x$ and $y$ are independent unless $y$ is specified as $y = g(x)$, or $y(x)$. I suppose the same kind of situation is happening in graphical model specifications. I hope this helps someone or sparks some discussion; apologies in advance for any errors or unclarity in my answer, since this is my first post to Stack Exchange.
null
CC BY-SA 4.0
null
2023-04-10T01:39:38.127
2023-05-15T05:38:29.803
2023-05-15T05:38:29.803
385335
385335
null
612443
1
612447
null
5
380
It is well known that if a random variable $X$ has distribution: $$ \mathrm{P}(X = x) = \begin{cases} \frac{1}{2}, & x=0,\\ \frac{1}{2}, & x=1,\\ 0, & \text{otherwise}, \end{cases} $$ (i.e., it is Bernoulli-distributed with probability of success $\tfrac{1}{2}$), it saturates Chebyshev's inequality for $k=1$: $$ \mathrm{P}(|X - \mathrm{E}[X]| \leq \sqrt{\mathrm{Var}[X]}) = \mathrm{P}(|X - \tfrac{1}{2}| \leq \tfrac{1}{2}) = 1. $$ Using Chebyshev's inequality, is it possible to show the following statement? If $X$ is a random variable with $0 \leq X \leq 1$, $\mathrm{E}[X] =\tfrac{1}{2}$, and $\mathrm{Var}[X] = \tfrac{1}{4}$, then $X$ is Bernoulli-distributed with probability of success $\tfrac{1}{2}$. Thanks!
Using the Chebyshev inequality to uncover saturating distribution
CC BY-SA 4.0
null
2023-04-10T01:50:37.893
2023-04-11T11:41:31.287
2023-04-11T01:45:39.947
385336
385336
[ "probability", "probability-inequalities" ]
612444
1
null
null
1
20
I'm working with a list of performance metrics. Each row represents some # of observations of one leg of the course/route/etc. So each row/leg will have a best (time), worst, avg and stdev. Adding together the best, worst and average values to get the course/route level totals makes intuitive sense to me, but I'm not so sure about stdev. I don't have the individual data points, only the aggregated "per-leg" values. Can I use the values I do have to calculate a valid stdev for the overall course/route?
Can I calculate stdev from a list of stdev values?
CC BY-SA 4.0
null
2023-04-10T02:03:35.590
2023-04-10T11:31:15.170
null
null
385337
[ "standard-deviation" ]
612445
1
null
null
0
75
Suppose a string of characters (S) that is 325384 characters long contains 458 As and 22 Bs. What is the probability that, if the 458 As and 22 Bs were randomly positioned along the 325384-character string, there would be exactly 861 characters between at least one of the AB pairs? While I am interested in the probability of this occurring, I would also like to see how to calculate it for any given value of A, B, S, or any given number of characters between them.
Stochasticity challenge
CC BY-SA 4.0
null
2023-04-10T02:07:32.020
2023-04-12T13:22:09.620
2023-04-12T13:22:09.620
22311
385334
[ "probability", "combinatorics" ]
612447
2
null
612443
10
null
I don't think Chebyshev's inequality helps in proving this reversed problem (Chebyshev's inequality only tells you, with the given condition, that $P[|X - 1/2| > \epsilon] \leq \frac{1}{4\epsilon^2}$. When $\epsilon \leq 1/2$, this is weaker than the trivial statement that $P[|X - 1/2| > \epsilon] \leq 1$. When $\epsilon > 1/2$, this amounts to say that the probability of $X > 1/2 + \epsilon$ or $X < 1/2 - \epsilon$ is less than $\frac{1}{4\epsilon^2} < 1$, but this is already implied by the other condition "$0 \leq X \leq 1$". Hence the Chebyshev's inequality does not provide any additional insights in proving the goal.), but may be proved as follows (the key of this proof is that if $E[Y] = 0$ and $Y$ is nonnegative, then $Y = 0$ with probability $1$): The condition $E[X] = 1/2$ and $\operatorname{Var}(X) = 1/4$ implies that $E[X^2] = 1/4 + (1/2)^2 = 1/2$. Therefore, $E[X - X^2] = 0$. By the condition $0 \leq X \leq 1$, the random variable $X - X^2$ is nonnegative, hence $E[X - X^2] = 0$ implies that $X - X^2 = 0$ with probability $1$, i.e., $P[X = 0] + P[X = 1] = 1$. This shows that $X$ must be a binary discrete random variable, and if $P[X = 0] = p$, then $P[X = 1] = 1 - p$. Hence $E[X] = 1/2 = 1 - p$ gives $p = 1/2$. In other words, $X \sim \text{Bernoulli}(1, 1/2)$.
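A small numerical illustration of the key step (Python, with arbitrary example distributions): on $[0,1]$ we have $x - x^2 \geq 0$ with equality only at $0$ and $1$, so any distribution placing mass strictly inside the interval has $E[X - X^2] > 0$ and therefore cannot satisfy $\operatorname{Var}(X) = 1/4$ together with $E[X] = 1/2$:

```python
import numpy as np

# Bernoulli draws: x - x^2 is identically zero, so E[X - X^2] = 0
rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=1000)
print(np.all(x - x ** 2 == 0))   # True

# any distribution with some mass strictly inside (0, 1) has E[X - X^2] > 0,
# so E[X^2] < E[X] and the variance falls short of 1/4 when E[X] = 1/2
for support in ([0.0, 0.5, 1.0], [0.25, 0.75], np.linspace(0, 1, 11)):
    s = np.asarray(support, dtype=float)
    p = np.full(s.size, 1.0 / s.size)            # uniform weights, just as an example
    print(float(np.sum(p * (s - s ** 2))) > 0)   # True for each
```

This mirrors the argument above: $E[X - X^2] = 0$ forces all probability mass onto $\{0, 1\}$.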
null
CC BY-SA 4.0
null
2023-04-10T02:33:18.750
2023-04-11T11:41:31.287
2023-04-11T11:41:31.287
20519
20519
null
612448
1
612450
null
2
87
I would like some help with calculating the Fisher Information $I_o(\beta)$ and the expected information for a gamma distribution defined by \begin{align*} f_X(x) = \frac{\beta^\alpha x^{\alpha - 1}e^{-\beta x}}{\Gamma(\alpha)} \; x > 0, \alpha >0, \beta > 0 \end{align*} Where $\alpha$ is a known value and $\beta$ is the parameter of interest. Attempt I have attempted to calculate a likelihood function as follows: \begin{align} L(\beta | X_i) &= \prod_{i = 1}^{n}f(x_i | \alpha, \beta) \\ &= \prod_{i = 1}^{n}\left( \frac{\beta^{\alpha}}{\Gamma(\alpha)}x_{i}^{\alpha - 1}\mathrm{exp}{\{-\beta x\}}\right) \\ &= \left(\frac{\beta^{\alpha}}{\Gamma(\alpha)}\right)^{n}\prod_{i = 1}^{n} \left(x_{i}^{\alpha-1}\right) \mathrm{exp}\{-\beta\sum_{i = 1}^{n} x_i\} \\ L(\beta | X_i) &= \left(\frac{\beta^{\alpha}}{\Gamma(\alpha)}\right)^{n}\left(\prod_{i = 1}^{n} x_{i}\right)^{\alpha-1} \mathrm{exp}\{-\beta\sum_{i = 1}^{n} x_i\} \end{align} Thus the log-likelihood would be the following: \begin{align} \ell(\beta | x_i) &= \ln\left((L(\beta | X_i)\right)) \\ &= n\alpha\ln(\beta) - n\ln(\Gamma(\alpha)) + (na-n)\ln(x_i) - \beta \sum_{i = 1}^{n} x_i \\ \end{align} I understand that the information is found by taking the 2nd derivative of any of the likelihood functions where $I_o(\beta) = -\frac{\mathrm{d}^{2}{\ell}}{\mathrm{d}{\beta}^{2}} $ The derivatives calculated were as follows: \begin{align} &= n\alpha\ln(\beta) - n\ln(\Gamma(\alpha)) + (na-n)\ln(x_i) - \beta \sum_{i = 1}^{n} x_i \\ &= \frac{n\alpha}{\beta} - \beta \\ &= - \frac{n \alpha}{\beta^2} - 1 \end{align} Thus the information would be \begin{align} I_0(\beta) = \frac{n \alpha}{\beta^2} + 1 \end{align} I am unsure what to do once I get to the expectation. \begin{align} \mathbb{E}\{I_0(\beta)\} &= \mathbb{E}\left(\frac{n \alpha}{\beta^2} + 1\right) \end{align} Would I have made a mistake within the derivation process? Any insight would be very much appreciated.
Fisher information and Expected Information for Gamma Distribution
CC BY-SA 4.0
null
2023-04-10T02:55:48.770
2023-04-10T03:46:21.000
null
null
376744
[ "maximum-likelihood", "gamma-distribution", "fisher-information" ]
612449
1
null
null
0
25
I recently saw some papers about stereo matching: End-to-End Learning of Geometry and Context for Deep Stereo Regression [1] [https://openaccess.thecvf.com/content_ICCV_2017/papers/Kendall_End-To-End_Learning_of_ICCV_2017_paper.pdf](https://openaccess.thecvf.com/content_ICCV_2017/papers/Kendall_End-To-End_Learning_of_ICCV_2017_paper.pdf) End-to-End Learning for Omnidirectional Stereo Matching with Uncertainty Prior [2] (which isn't freely available, but its DL model is almost the same as in the GC-Net paper, except that the model is for a wide-angle multi-camera setup). These papers are about stereo matching in the computer vision area. The models in the papers are composed of unary feature extraction + a cost volume (put simply, you can think of it as concatenating the features of two or more inputs) + 3D convolutions. What I don't fully understand is the unary feature extraction part: the models have some 2D convolution layers with residual connections, followed by a 2D convolution layer with no activation and no batch normalization before the cost volume part. GC-Net [1]'s unary feature extraction works like ``` (1) 5x5 conv, 32 features, stride 2-> (2)3x3 conv, 32 features -> (3) 3x3 conv, 32 features (4) (1)-(3) residual connection (5) (2), (3) repeat a few times with residual connection (6) 3x3 conv, 32 features (no ReLu or BN) ``` What does this (6) convolution layer without activation and batch normalization contribute to the model? The 'End-to-End Learning for Omnidirectional Stereo Matching with Uncertainty Prior' paper says that > the last layer without the ReLU in the feature extraction part allows the network to distinguish between negative features and invisible areas, where the network is set to zero by the ReLU and warping operations, respectively. But I can't understand how a layer without an activation function and normalization contributes these things. The 3D convolution part of GC-Net [1] also has no ReLU or BN. 
The paper says > The network can also pre-scale the matching costs to control the peakiness (sometimes called temperature) of the normalized post-softmax probabilities (Figure 2). We explicitly omit batch normalization from the final convolution layer in the unary tower to allow the network to learn this from the data. This means the 3D conv layer without BN helps the 'softargmin' (or softargmax) function avoid a multi-modal distribution. What I don't understand is how a conv layer without an activation function and normalization has such an ability.
What can convolution network without activation function contribute to the DL model if it is the last layer?
CC BY-SA 4.0
null
2023-04-10T03:11:12.280
2023-04-10T07:34:49.340
2023-04-10T07:34:49.340
334249
334249
[ "neural-networks", "conv-neural-network" ]
612450
2
null
612448
1
null
You almost got it right! You just made a tiny mistake when computing the derivative of the log-likelihood; you should have had: $$\frac{\partial\ell}{\partial \beta} = \frac{n\alpha}{\beta} -\color{red}{\sum_{i=1}^n x_i} $$ From which it follows that $$\frac{\partial^2\ell}{\partial \beta^2} = -\frac{n\alpha}{\beta^2} $$ Next, to compute the Fisher information, all you have to do is to take the expectation of $-\frac{\partial^2\ell}{\partial \beta^2} $. However, that expectation is with respect to the distribution of $x_i$, and there are no $x_i$'s in the expression of $\frac{\partial^2\ell}{\partial \beta^2}$: it is a constant! Its expectation is thus equal to itself: $$ \mathbb E\left[-\frac{\partial^2\ell}{\partial \beta^2}\right] = \mathbb E\left[\frac{n\alpha}{\beta^2}\right] = \frac{n\alpha}{\beta^2}. $$
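The result is easy to verify numerically (a Python sketch with simulated data; the terms of the log-likelihood that do not involve $\beta$ are dropped, since they vanish in the second derivative):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta_true, n = 3.0, 2.0, 500
x = rng.gamma(shape=alpha, scale=1.0 / beta_true, size=n)   # simulated gamma sample

def loglik(b):
    # log-likelihood as a function of the rate b, up to an additive constant
    return n * alpha * np.log(b) - b * x.sum()

b, h = 1.7, 1e-4
d2 = (loglik(b + h) - 2 * loglik(b) + loglik(b - h)) / h ** 2   # central second difference
print(np.isclose(d2, -n * alpha / b ** 2, rtol=1e-4))   # matches -n*alpha/beta^2
```

Note the check holds at any $\beta$, not just the true value, reflecting that the observed information here does not depend on the data.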
null
CC BY-SA 4.0
null
2023-04-10T03:46:21.000
2023-04-10T03:46:21.000
null
null
305654
null
612451
1
null
null
1
47
As far as I know, covariance-based SEM requires the normality assumption. To avoid violating this assumption, I am considering using bootstrap bias-corrected confidence intervals for hypothesis testing when performing CB-SEM in AMOS. However, I have not found many studies using these two methods combined. I think the main reason is that the authors tend to perform CFA before using SEM and none of the indicators in their studies violate the threshold values, but this is my first research paper and I do not yet have a strong enough foundation in statistics to be sure. Is it possible to use bootstrap bias-corrected confidence intervals when performing a CB-SEM model for hypothesis testing? If yes, is there anything I should do before combining CB-SEM with bootstrapping? To provide more context: my CFA indicators are fine.
Is it possible to use Bootstrap Bias-corrected confidence intervals when performing SEM?
CC BY-SA 4.0
null
2023-04-10T05:29:43.913
2023-04-10T05:29:43.913
null
null
385346
[ "bootstrap", "structural-equation-modeling", "confirmatory-factor" ]
612454
2
null
612107
3
null
Mostly an expanded version of my comment. From the original paper, p. 880: > To enable a clearer interpretation of the results, the values of the connections were transformed into connection strength. This was achieved by multiplying the raw connection values with the sign of their values. Therefore $$t = \text{(sign of mean value connection)}\times \frac{\beta}{\text{Standard error}}.$$
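In code this amounts to something trivial like the following Python sketch (the function name and arguments are illustrative, not from the paper):

```python
import math

def connection_t(beta, se, beta_mean):
    """t statistic whose sign is carried by the mean connection value."""
    return math.copysign(1.0, beta_mean) * beta / se

print(connection_t(beta=0.5, se=0.25, beta_mean=-0.1))   # -2.0
```

So a negative mean connection flips the sign of an otherwise positive $\beta/\text{SE}$ ratio, which is why the reported t values can disagree in sign with the raw coefficients.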
null
CC BY-SA 4.0
null
2023-04-10T07:08:11.630
2023-04-11T09:26:07.940
2023-04-11T09:26:07.940
56940
56940
null