| Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
612455 | 1 | null | null | 2 | 17 | I'm looking for appropriate types of analysis for a data set that contains counts of different crab species across 4 sites with 3 replicates per site (12 in total) over a 1.5-year period with 5 sampling time points.
The peculiarity of the data set is that each crab species only occurs on specific coral hosts. I'm thus following their hosts through time. A brief example: Colony 1 has a single crab at t1, two crabs at t2, and no crabs at t3, whereas Colony 2 has three crabs at t1, t2, and t3. This would indicate that the crab abundance in Colony 1 fluctuates much more. Currently, I have about 50 colonies per site, so 200 in total. I'm interested in seeing how crab abundance on these colonies changes over time depending on the site, and also what impact the coral host species has. For instance: does the abundance of crabs on the coral species "Pavona" fluctuate more than the abundance of crabs on the coral species "Pocillopora"?
In summary, I would like to analyse this data regarding general changes in community composition of crabs over time and across sites but, more specifically, also to follow each colony through time and then compare this data to see whether certain coral species are prone to be colonized/abandoned more frequently than others.
| What are suitable statistical approaches to examine temporal variation of species abundance/ community composition across multiple sites? | CC BY-SA 4.0 | null | 2023-04-10T07:09:32.363 | 2023-04-10T20:44:21.113 | null | null | 385354 | [
"count-data",
"ecology",
"spatio-temporal",
"temporal-difference"
] |
612456 | 2 | null | 612405 | 11 | null | It is possible to generate this random variable using a rejection-sampling algorithm based on uniform random variables. To do this, let $U_1,U_2 \sim \text{IID U}(0,1)$ and define the random variable:
$$Z = -\frac{\log(U_1) + \log(U_2)}{\alpha+1} \sim \text{Gamma}(2, \alpha+1).$$
Using the transformation shown in the excellent answer by [Sextus Empiricus](https://stats.stackexchange.com/users/164061), it can be shown that:
$$\begin{align}
\mathbb{P}(X \leqslant x)
&= \mathbb{P} \bigg( \frac{e^{-Z}}{\beta} \leqslant x \bigg| Z \geqslant |\log(\beta)| \bigg) \\[6pt]
&= \mathbb{P}( Z \geqslant |\log(\beta x)| |Z \geqslant |\log(\beta)|). \\[6pt]
\end{align}$$
Hence, we can generate the random variable $X$ using a rejection-sampling method that generates the uniform values $(U_1,U_2)$ until we get a corresponding value $Z \geqslant |\log(\beta)|$, and then take the appropriate transformation to get $X$. Below we code this method as the function `rdist` and we also code the CDF as `pdist`.$^\dagger$
```
rdist <- function(n, alpha, beta) {
#Check input n
if (!is.vector(n)) stop('Error: Input n should be a single numeric value')
if (!is.numeric(n)) stop('Error: Input n should be a single numeric value')
if (length(n) != 1) stop('Error: Input n should be a single numeric value')
if (as.integer(n) != n) stop('Error: Input n should be an integer')
if (min(n) <= 0) stop('Error: Input n should be a non-negative integer')
if (n == 0) { return(numeric(0)) }
#Check input alpha
if (!is.vector(alpha)) stop('Error: Input alpha should be a single numeric value')
if (!is.numeric(alpha)) stop('Error: Input alpha should be a single numeric value')
if (length(alpha) != 1) stop('Error: Input alpha should be a single numeric value')
if (min(alpha) <= -1) stop('Error: Input alpha should be greater than minus-one')
#Check input beta
if (!is.vector(beta)) stop('Error: Input beta should be a single numeric value')
if (!is.numeric(beta)) stop('Error: Input beta should be a single numeric value')
if (length(beta) != 1) stop('Error: Input beta should be a single numeric value')
if (min(beta) <= 0) stop('Error: Input beta should be greater than zero')
if (min(beta) >= 1) stop('Error: Input beta should be less than one')
if ((alpha+1)*log(beta) == 1) stop('Error: Inadmissible parameters')
#Generate pseudo-random values
x <- rep(0, n)
min.z <- -log(beta)
for (i in 1:n) {
z <- 0
while (z <= min.z) { u <- runif(2); z <- -sum(log(u))/(alpha+1) }
x[i] <- exp(-z)/beta }
#Give output
x }
#####################################################################
pdist <- function(x, alpha, beta, lower.tail = TRUE, log.p = FALSE) {
#Check input x
if (!is.vector(x)) stop('Error: Input x should be a numeric vector')
if (!is.numeric(x)) stop('Error: Input x should be a numeric vector')
if (length(x) == 0) { return(numeric(0)) }
#Check input alpha
if (!is.vector(alpha)) stop('Error: Input alpha should be a single numeric value')
if (!is.numeric(alpha)) stop('Error: Input alpha should be a single numeric value')
if (length(alpha) != 1) stop('Error: Input alpha should be a single numeric value')
if (min(alpha) <= -1) stop('Error: Input alpha should be greater than minus-one')
#Check input beta
if (!is.vector(beta)) stop('Error: Input beta should be a single numeric value')
if (!is.numeric(beta)) stop('Error: Input beta should be a single numeric value')
if (length(beta) != 1) stop('Error: Input beta should be a single numeric value')
if (min(beta) <= 0) stop('Error: Input beta should be greater than zero')
if (min(beta) >= 1) stop('Error: Input beta should be less than one')
if ((alpha+1)*log(beta) == 1) stop('Error: Inadmissible parameters')
#Check inputs lower.tail and log.p
if (!is.vector(lower.tail)) stop('Error: Input lower.tail should be a single logical value')
if (!is.logical(lower.tail)) stop('Error: Input lower.tail should be a single logical value')
if (length(lower.tail) != 1) stop('Error: Input lower.tail should be a single logical value')
if (!is.vector(log.p)) stop('Error: Input log.p should be a single logical value')
if (!is.logical(log.p)) stop('Error: Input log.p should be a single logical value')
if (length(log.p) != 1) stop('Error: Input log.p should be a single logical value')
#Generate probabilities
a <- alpha+1
n <- length(x)
PROBS <- rep(0, n)
for (i in 1:n) {
xx <- x[i]
if (xx >= 1) { PROBS[i] <- 1 }
if ((xx > 0)&(xx < 1)) {
PROBS[i] <- (xx^a)*(a*log(beta*xx)-1)/(a*log(beta)-1) } }
if (!lower.tail) { PROBS <- 1-PROBS }
#Give output
if (log.p) { log(PROBS) } else { PROBS } }
```
We can now use this function to generate a large number of pseudo-random values. Plotting the resulting ECDF for the values gives a close approximation to the true CDF.
```
#Set distribution parameters
alpha <- 2
beta <- 0.6
#Generate values from the distribution
set.seed(1)
n <- 10^6
X <- rdist(n, alpha, beta)
#Plot empirical cumulative distribution function (ECDF) of values
plot(ecdf(X), col = 'blue', main = 'ECDF of pseudo-random values', ylab = 'Proportion')
```
[](https://i.stack.imgur.com/h0PaF.jpg)
---
$^\dagger$ Here is alternative code for `rdist` if you want to avoid rejection sampling by using the `pgamma` and `qgamma` functions.
```
rdist <- function(n, alpha, beta) {
#Check input n
if (!is.vector(n)) stop('Error: Input n should be a single numeric value')
if (!is.numeric(n)) stop('Error: Input n should be a single numeric value')
if (length(n) != 1) stop('Error: Input n should be a single numeric value')
if (as.integer(n) != n) stop('Error: Input n should be an integer')
if (min(n) <= 0) stop('Error: Input n should be a non-negative integer')
if (n == 0) { return(numeric(0)) }
#Check input alpha
if (!is.vector(alpha)) stop('Error: Input alpha should be a single numeric value')
if (!is.numeric(alpha)) stop('Error: Input alpha should be a single numeric value')
if (length(alpha) != 1) stop('Error: Input alpha should be a single numeric value')
if (min(alpha) <= -1) stop('Error: Input alpha should be greater than minus-one')
#Check input beta
if (!is.vector(beta)) stop('Error: Input beta should be a single numeric value')
if (!is.numeric(beta)) stop('Error: Input beta should be a single numeric value')
if (length(beta) != 1) stop('Error: Input beta should be a single numeric value')
if (min(beta) <= 0) stop('Error: Input beta should be greater than zero')
if (min(beta) >= 1) stop('Error: Input beta should be less than one')
if ((alpha+1)*log(beta) == 1) stop('Error: Inadmissible parameters')
#Generate pseudo-random values
min.p <- pgamma(-log(beta), shape = 2, rate = alpha+1)
z <- qgamma(runif(n, min.p, 1), shape = 2, rate = alpha+1)
x <- exp(-z)/beta
#Give output
x }
```
| null | CC BY-SA 4.0 | null | 2023-04-10T07:11:04.230 | 2023-04-10T22:45:58.680 | 2023-04-10T22:45:58.680 | 173082 | 173082 | null |
612457 | 1 | null | null | 0 | 18 | I'm training a BERT sequence classifier on a custom dataset. When training starts, the loss drops to around ~0.4 within a few steps. I print the absolute sum of gradients for each layer/item in the model and the values are high. The model converges initially, but when left to train for a few hours (and sometimes even earlier) it gets stuck. I am calculating gradients with the code below. The logs are at [https://pastecode.io/s/v2s3mr3e](https://pastecode.io/s/v2s3mr3e) (initial convergence). I'm printing gradients, loss, metrics and logits.
```
for name, param in model.named_parameters():
print(name, param.grad.abs().sum())
```
While the model is stuck, the loss value is around ~0.69 with worse performance metrics (precision/recall) on the training set but the gradients are very small compared to the initial training phase. Also it seems that predictions are swinging with most of the values predicted as either 0 or 1. Following are the logs for the stuck phase - [https://pastecode.io/s/cjuxog44](https://pastecode.io/s/cjuxog44)
Training code Link - [Training Code](https://gist.github.com/thesillystudent/56046463cdc4da9dfa8ba13bc02cd5da)
It seems that the model is stuck at a local minimum where the gradient values are relatively small even though the loss is high. How can I mitigate this? One option I see is using a higher learning rate or a cyclic learning rate, but I am not sure if that's the right approach, since the learning rate is 5e-5 with the LR scheduler disabled. Below is the plot of the loss and of the BERT pooler and classifier gradient sums over steps.
Also the data is 50-50 balanced. Batch size is 32. I'm using AdamW. I have also tried SGD but the convergence is very slow.
Or there might be some error/reason which I am not able to identify. Please help
[](https://i.stack.imgur.com/FGjyf.png)
| Unstable training of BERT binary sequence classification. Higher loss but lower gradients | CC BY-SA 4.0 | null | 2023-04-10T07:25:04.557 | 2023-04-10T07:25:04.557 | null | null | 385356 | [
"machine-learning",
"neural-networks",
"gradient-descent"
] |
612458 | 2 | null | 612419 | 1 | null | If $$\int_{\Theta} f(x|\theta)\, \pi(\theta)\,\text d\theta = 0,$$ then the Bayesian model is incompatible with the data $x$. For instance, if
$$X\sim\mathcal U(0,\theta)\qquad\theta\sim\mathcal U(0,1)$$
and the realisation $x$ of $X$ is $x=2$, this realisation is incompatible with the model. The model must be modified.
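As a small numerical sketch of this example (pure Python, with the marginal approximated by a Riemann sum, so the numeric integration scheme here is an illustration, not part of the original answer), the prior predictive density $m(x)=\int f(x|\theta)\pi(\theta)\,\text d\theta$ is positive for $x\in(0,1)$ but exactly zero at $x=2$:

```python
import math

def likelihood(x, theta):
    # f(x|theta) for X|theta ~ Uniform(0, theta)
    return 1.0 / theta if 0.0 < x < theta else 0.0

def marginal(x, n=200000):
    # m(x) = integral over (0,1) of f(x|theta) * pi(theta) d(theta),
    # with pi(theta) = 1 on (0,1); approximated by a midpoint Riemann sum
    h = 1.0 / n
    return sum(likelihood(x, (k + 0.5) * h) * h for k in range(n))

print(round(marginal(0.5), 3))  # ≈ 0.693 = -log(0.5): positive marginal density
print(marginal(2.0))            # 0.0: every theta in (0,1) gives zero likelihood
```

Any realisation with $x \geq 1$ receives zero marginal mass, so no posterior can be formed for such data without modifying the model.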
| null | CC BY-SA 4.0 | null | 2023-04-10T07:53:33.953 | 2023-04-10T07:53:33.953 | null | null | 7224 | null |
612460 | 1 | null | null | 0 | 9 | I am working on a classic problem in which the posterior probability distribution of a proportion must be obtained. This parameter is assumed to follow a beta distribution, so the number of occurrences is modelled by a beta-binomial distribution. I am using RJAGS to compute the posterior distribution and have some questions about the syntax I have to use. The data I have are characterized by having a number of occurrences (Y) and a sampling size, or number of observations, (M) for each row.
The distribution of the observed failure rate (Y/M) is depicted in the following plot:
[](https://i.stack.imgur.com/QuIAE.png)
As can be seen, it is a heavily skewed distribution, with most observations at small proportion values and some larger proportion values in the right tail. The quantiles for this parameter are the following:
[](https://i.stack.imgur.com/sml0u.png)
The code of the first model is given by:
```
jags_model_syntax <-"model {
# Likelihood
for (i in 1:length(Y)) {
Y[i] ~ dbin(p,N[i])
N[i] <- M[i] # Population size
}
p~dbeta(alpha,beta)
alpha~dnorm(0.77,0.1)
beta~dnorm(76,15)
}"
```
After compiling the model and computing the posterior, the results are the following. The probability distribution of p (the proportion) is a beta distribution, and it is significantly different from the distribution of the observed proportion.
[](https://i.stack.imgur.com/YrpZU.png)
I have the following questions:
- Should I give one probability to each lot, which means fitting a hierarchical model? This would change the code by using a p[i] inside the loop.
- Why can the beta distribution not fit the original distribution?
| RJAGS - Bayesian Beta binomial syntax | CC BY-SA 4.0 | null | 2023-04-10T08:25:19.203 | 2023-04-10T08:25:19.203 | null | null | 384249 | [
"beta-distribution",
"beta-binomial-distribution"
] |
612461 | 1 | null | null | 0 | 15 | I have frequency predictions for a discrete distribution:
$p(x_1)=0$, $p(x_2)=0$, $p(x_3)=0.05$, $p(x_4)=0.95$
I need to smooth the distribution so I don't have zero values. I think the solution is to use the Laplace additive smoothing function:
$(x_i + \alpha) / (N + \alpha*d)$
But I am unsure what values to use for $N$ and $d$. I guess $d$ is the domain size?
This is supposed to be basic, but I can't seem to find a clear answer.
$N$ is supposed to be the number of samples, but I don't have any samples, just this distribution.
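For illustration, here is a sketch of how the formula could be applied, treating the predicted frequencies as pseudo-counts under an assumed effective sample size $N$ (the value 100 below is purely hypothetical, since no real samples exist):

```python
def laplace_smooth(counts, alpha=1.0):
    # (x_i + alpha) / (N + alpha * d), with N = total count, d = domain size
    N = sum(counts)
    d = len(counts)
    return [(c + alpha) / (N + alpha * d) for c in counts]

probs = [0.0, 0.0, 0.05, 0.95]
N_assumed = 100                          # hypothetical effective sample size
counts = [p * N_assumed for p in probs]  # pseudo-counts: [0, 0, 5, 95]
smoothed = laplace_smooth(counts, alpha=1.0)
print(smoothed)  # no zeros; still sums to 1; zero cells become 1/(100 + 4)
```

With $\alpha=1$ and $d=4$, the zero cells move up to $1/(N+4)$, and the choice of $N$ controls how strongly the zeros are pulled towards the uniform distribution.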
| How to replace empirical frequency predictions with Laplace estimates | CC BY-SA 4.0 | null | 2023-04-10T08:29:49.903 | 2023-04-10T08:29:49.903 | null | null | 34756 | [
"laplace-smoothing"
] |
612462 | 1 | 612494 | null | 2 | 57 | We want to combine two truncated distributions to better model a phenomenon. For example, we have a Gaussian distribution, but we want to modify the right-hand tail to make it heavier, so we want to attach a Pareto distribution there. The problem is that we want continuity at the density level while the total probability remains 1, and it seems impossible to satisfy both conditions. I would like to know what other possibilities there are to create such a distribution.
I use R to create an example :
```
library(Pareto) # assumed source of rPareto(); not loaded in the original snippet
nb_total_sim = 100000
proba_1= pnorm(4,1,6)
nb_sim_1 = trunc(nb_total_sim * proba_1)
nb_sim_2 = nb_total_sim - nb_sim_1
sim_1 = qnorm( runif(nb_sim_1,0,proba_1),1,6)
sim_2 = rPareto(n = nb_sim_2, t = 4,
alpha = 1.99,truncation = 100)
hist(c(sim_1,sim_2),nclass = 200)
```
The histogram is here and you can see that the "density" is not continuous.
[](https://i.stack.imgur.com/kXFwp.png)
| How to combine two truncated distributions | CC BY-SA 4.0 | null | 2023-04-10T08:35:04.177 | 2023-04-10T20:16:06.230 | null | null | 96531 | [
"r",
"distributions",
"simulation",
"truncated-normal-distribution",
"truncated-distributions"
] |
612463 | 1 | null | null | 0 | 20 | I have used [Monolix](https://lixoft.com/products/monolix/) modeling to estimate the parameters of a Weibull distribution and got the following output (scale: 37.8, shape: 1.73). I would like to reconstruct the survival function to estimate the survival rate at 12, 24, 36, 48, and 60 months, with 95% CIs.
```
ESTIMATION OF THE POPULATION PARAMETERS ________________________________________
Fixed Effects ---------------------------- se_sa rse(%)
Te_pop : 37.8 0.556 1.47
beta_Te_Race_global : 0.24 0.0341 14.2
p_pop : 1.73 0.2 11.6
Standard Deviation of the Random Effects -
omega_Te : 0.00304 0.00698 229
omega_p : 0.0144 0.0752 522
```
Also, I executed a bootstrap using R and got 1,000 datasets. How can I get 95% CIs using R?
| R - Weibull Distribution Parameters (Shape and Scale) - Survival table & Confidence Intervals & Bootstrap | CC BY-SA 4.0 | null | 2023-04-10T08:35:56.647 | 2023-04-10T14:19:02.423 | 2023-04-10T14:19:02.423 | 28500 | 385364 | [
"shape-parameter",
"scale-parameter"
] |
612464 | 1 | null | null | 0 | 19 | I have the following stochastic differential equation
$dX_t=\kappa\left [ \theta-X_t\right ]dt + \Sigma d W_{t}$
I derived a formula for $X_t$, which has the following form:
$X_{t}=\theta+e^{-\kappa t}\left ( X_0-\theta \right )+\Sigma e^{-\kappa t}\int_{0}^{t}e^{\kappa s}dW_{s} \qquad \forall t \in\left [ 0,T \right ]$
Now I want to derive the covariance. The formula for the covariance has the following form:
$Cov(X_t,X_r)=E\left [ \left ( X_t- E\left [ X_t \right ] \right )\left ( X_r -E\left [ X_r \right ]\right ) \right ] $
$=E\left [ \left ( \Sigma e^{-\kappa t}\int_{0}^{t}e^{\kappa s}dW_{s}\right ) \left ( \Sigma e^{-\kappa r}\int_{0}^{r}e^{\kappa u}dW_{u} \right )\right ] $
$=\Sigma e^{-\kappa (t+r)}\Sigma ^{\top} E\left [ \left ( \int_{0}^{t}e^{\kappa s}dW_{s} \right ) \left ( \int_{0}^{r}e^{\kappa u}dW_{u} \right )\right ]$
Using the Itô isometry, it follows that
$Cov(X_t,X_r)=\Sigma e^{-\kappa (t+r)}\Sigma ^{\top} \int_{0}^{t}e^{\kappa s}ds$
But the result in the book has the following form:
$Cov(X_t,X_r)=\int_{0}^{t}e^{-\kappa s} \Sigma \Sigma ^{\top} {e^{-\kappa s}}^{\top} ds$
Can someone please tell me where I made a mistake?
| Covariance of stochastic process | CC BY-SA 4.0 | null | 2023-04-10T08:38:40.847 | 2023-04-10T08:38:40.847 | null | null | 384330 | [
"covariance",
"stochastic-processes",
"covariance-matrix",
"stochastic-calculus"
] |
612465 | 2 | null | 612424 | 0 | null | This can be used as a Text Classification model, where the text data is "Ronald Reagan Avenue 3456", "Gorgit", "17000", etc and the field names can be the classes that you need to classify.
You can use transfer learning to fine tune the model on your data or just code
| null | CC BY-SA 4.0 | null | 2023-04-10T08:48:43.307 | 2023-04-10T08:48:43.307 | null | null | 362382 | null |
612466 | 2 | null | 612202 | 3 | null | In this case vertical lines are expected. You have only factors as predictors, so there is only a limited number of values that your model will produce as predictions; hence the small set of vertical lines on your plot.
| null | CC BY-SA 4.0 | null | 2023-04-10T08:56:03.310 | 2023-04-10T10:20:35.110 | 2023-04-10T10:20:35.110 | 22047 | 53084 | null |
612467 | 1 | 612485 | null | 3 | 35 | With this very simple data:
```
> A
[1] "a" "a" "a" "a" "b" "b" "b" "b" "b" "b" "b" "b"
> B
[1] "x" "y" "x" "y" "x" "y" "x" "y" "x" "x" "x" "x"
> C
[1] "l" "l" "m" "m" "l" "l" "m" "m" "l" "l" "l" "l"
> response
[1] 14 30 15 35 50 51 30 32 51 55 53 55
```
I am trying to reproduce the Type-III car::Anova() results by step-by-step term elimination, to better understand interactions and the analysis of variance.
For example I want to assess the term "C"
```
> options(contrasts = c("contr.sum", "contr.poly"))
> m1 <- lm(response ~ A*B*C) # the full model
> car::Anova(m1, type=3, test.statistic = "LR")
Anova Table (Type III tests)
Response: response
Sum Sq Df F value Pr(>F)
(Intercept) 9374 1 1802.78 1.8e-06 ***
A 716 1 137.69 0.0003 ***
B 182 1 35.00 0.0041 **
C 178 1 34.23 0.0043 **
A:B 178 1 34.23 0.0043 **
A:C 317 1 61.03 0.0014 **
B:C 8 1 1.63 0.2714
A:B:C 0 1 0.00 0.9755
Residuals 21 4
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
For term "C" I got a p-value of 0.0043.
Now I am going to assess term C by elimination:
```
> m2 <- lm(response ~ A + B + A:B) # the term "C", eliminated all terms with C
> anova(m1, m2)
Analysis of Variance Table
Model 1: response ~ A * B * C
Model 2: response ~ A + B + A:B
Res.Df RSS Df Sum of Sq F Pr(>F)
1 4 21
2 8 648 -4 -627 30.1 0.003 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
Here the p-value is 0.003. Close, but not the same.
This is the simplest general linear model, no mixed effects, no transformations.
When I typed "test = LRT"
```
> anova(m1, m2, test="LRT")
Analysis of Variance Table
Model 1: response ~ A * B * C
Model 2: response ~ A + B + A:B
Res.Df RSS Df Sum of Sq Pr(>Chi)
1 4 21
2 8 648 -4 -627 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
The result is totally different.
Please note, my goal IS NOT to make any formal inference, only to UNDERSTAND the calculations behind it. My understanding is that eliminating terms from the model should be equivalent to car::Anova() and give numerically equal results.
Let's confirm:
```
> drop1(m1, scope = ~A*B*C, test="F")
Single term deletions
Model:
response ~ A * B * C
Df Sum of Sq RSS AIC F value Pr(>F)
<none> 21 22.6
A 1 716 737 63.4 137.69 0.0003 ***
B 1 182 203 47.9 35.00 0.0041 **
C 1 178 199 47.7 34.23 0.0043 **
A:B 1 178 199 47.7 34.23 0.0043 **
A:C 1 317 338 54.1 61.03 0.0014 **
B:C 1 8 29 24.7 1.63 0.2714
A:B:C 1 0 21 20.6 0.00 0.9755
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
It works! It agrees with car::Anova().
So how should I eliminate the terms to obtain the same result using anova()? Evidently comparing A+B+A:B vs. the full model is not enough!
| Why does the Type-3 ANOVA using LRT via car::Anova() give different result than term-by-term LRT model comparison via anova() in R? | CC BY-SA 4.0 | null | 2023-04-10T09:36:24.643 | 2023-04-10T14:40:52.863 | null | null | 385368 | [
"hypothesis-testing",
"anova",
"likelihood-ratio",
"model-comparison"
] |
612468 | 1 | null | null | 1 | 17 | If I have any N*d sample matrix X, how to rescale X such that X has a specific covariance matrix $\Sigma$? N is sample number and d is dimension.
| How to rescale sample matrix X such that X has a specific covariance matrix $\Sigma$? | CC BY-SA 4.0 | null | 2023-04-10T10:08:29.850 | 2023-04-10T10:08:29.850 | null | null | 384963 | [
"mathematical-statistics"
] |
612469 | 1 | 612502 | null | 1 | 54 | This wikipedia article describes spam filtering using Naïve Bayes: [https://en.wikipedia.org/wiki/Naive_Bayes_spam_filtering](https://en.wikipedia.org/wiki/Naive_Bayes_spam_filtering)
It says `P(S|W)` is given as `Pr(W|S)*Pr(S) / (Pr(W|S)*Pr(S) + Pr(W|H)*Pr(H))`.
However, one could also get `P(S|W)` by estimating `P(W)` instead.
Most textbooks simply say it's unnecessary to estimate `P(W)`, which I get, but one could also say it's unnecessary to estimate `Pr(W|H)*Pr(H)`. Why is it that estimating `Pr(W|H)*Pr(H)` is preferred?
Example:
If we use [this example](https://towardsdatascience.com/the-naive-bayes-classifier-how-it-works-e229e7970b84),
The "correct" estimation for `P(yes|rain, good)` is `0.143` because
```
P(rain|yes) * P(good|yes) * P(yes) = 1/5 * 1/5 * 5/10 = 0.02
P(rain|no) * P(good|no) * P(no) = 2/5 * 3/5 * 5/10 = 0.12
P(yes|rain, good) = 0.02 / (0.02 + 0.12) = 0.143
```
One could instead estimate it as follows, which gives `0.042`:
```
P(rain|yes) * P(good|yes) * P(yes) = 1/5 * 1/5 * 5/10 = 0.02
P(rain, good) = P(rain) * P(good) = 3/5 * 4/5 = 0.48
P(yes|rain, good) = 0.02 / 0.48 = 0.042
```
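These two computations can be verified numerically with a quick sketch:

```python
# class-conditional terms from the worked example
p_yes = (1/5) * (1/5) * (5/10)   # P(rain|yes) * P(good|yes) * P(yes) = 0.02
p_no  = (2/5) * (3/5) * (5/10)   # P(rain|no)  * P(good|no)  * P(no)  = 0.12

normalized = p_yes / (p_yes + p_no)   # denominator 0.14: model-implied P(rain, good)
alt = p_yes / ((3/5) * (4/5))         # denominator 0.48: P(rain) * P(good)

print(round(normalized, 3))  # 0.143
print(round(alt, 3))         # 0.042
```

Note that the first denominator, `p_yes + p_no = 0.14`, is the probability of the evidence implied by the class-conditional model itself, so the resulting class posteriors sum to 1; the `0.48` denominator relies on an extra marginal-independence assumption and yields posteriors that do not sum to 1.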
My question is: why is the former preferred, even though the two seem to make similar approximations?
| In Naïve Bayes, why do we estimate Pr(W|H)*Pr(H) instead of Pr(W) | CC BY-SA 4.0 | null | 2023-04-10T10:19:09.930 | 2023-04-10T16:52:30.720 | 2023-04-10T12:28:47.893 | 272851 | 272851 | [
"naive-bayes"
] |
612471 | 1 | null | null | 2 | 34 | From [What's the skewed-t distribution?](https://stats.stackexchange.com/questions/276327/whats-the-skewed-t-distribution/365400?noredirect=1#comment1137516_365400) there seem to be multiple ways of defining skew distributions. However, I am not sure whether these methods are equivalent.
The original question shows methods from
- C. Fernandez and M. Steel (1998)
- P. Theodossiou (1998) - which is the basis of this wikipedia page, and this R package
- A. Azzalini (1985)
In `scipy.stats`, the package implements [scipy.stats.skewnorm](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.skewnorm.html#scipy.stats.skewnorm) and [scipy.stats.skewcauchy](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.skewcauchy.html); however, these are based on the Azzalini and Theodossiou methods respectively, which define the skewness parameter differently.
So I am wondering whether these methods of defining skew distributions are equivalent. If not, what is the most generally accepted way of defining a skew distribution?
## Update 1
At least I don't think they are. I tried taking methods 2 and 3, minimizing the distance over the matching coefficient, and plugging it back in. The plot suggests they are different.
[](https://i.stack.imgur.com/9RJiK.png)
Reproduction code
```
import numpy as np
from scipy import stats
from tqdm.auto import tqdm
import matplotlib.pyplot as plt
from scipy.optimize import minimize
def logp_skew_cauchy_v1(xs, mu, sigma, alpha):
# A. Azzalini (1985)
return (
np.log(2) + stats.cauchy.logpdf((xs - mu)/sigma, loc=0, scale=1) +
stats.cauchy.logcdf(alpha*(xs - mu)/sigma, loc=0, scale=1) - np.log(sigma)
)
def logp_skew_cauchy_v2(xs, mu, sigma, lam):
# P. Theodossiou (1998)
return stats.skewcauchy.logpdf((xs - mu)/sigma, a=lam) - np.log(sigma)
xrange = np.linspace(-50, 50, 1001)
mu, sigma = 0, 1
eps = 0.01
lams = np.linspace(-1 + eps, 1 - eps, 1000)
matched_alphas = np.zeros_like(lams)
result_func_vals = np.zeros_like(lams)
for idx, lam in tqdm(enumerate(lams), total=len(lams)):
target = logp_skew_cauchy_v2(xrange, mu, sigma, lam)
result = minimize(lambda alpha: np.nanmean(np.abs(logp_skew_cauchy_v1(xrange, mu, sigma, alpha) - target)), 0, method = 'Nelder-Mead')
if result.success:
matched_alphas[idx] = result.x[0]
result_func_vals[idx] = result.fun
else:
matched_alphas[idx] = np.nan
result_func_vals[idx] = np.nan
idx = 700
plt.plot(xrange, logp_skew_cauchy_v2(xrange, 0, 1, lams[idx]), label=f"P. Theodossiou (1998) - a = {lams[idx]:.3f}")
plt.plot(xrange, logp_skew_cauchy_v1(xrange, 0, 1, matched_alphas[idx]), label=f"A. Azzalini (1985) - alpha = {matched_alphas[idx]:.3f}")
plt.title("Log distribution plot")
plt.legend(loc="best")
plt.show()
```
| What is the proper way to define skew distribution? | CC BY-SA 4.0 | null | 2023-04-10T10:51:27.627 | 2023-04-10T11:17:13.967 | 2023-04-10T11:17:13.967 | 236007 | 236007 | [
"skewness",
"skew-normal-distribution"
] |
612472 | 1 | null | null | 1 | 18 | Suppose that you have a null hypothesis $H_0$ that you want to test. Let $\alpha$ be a given significance level, for example $\alpha = 0.05$. Suppose that the test statistic $T$ follows (under $H_0$) a $\chi^2$ distribution, say $T \sim \chi^2 (4)$.
In my understanding, the critical region of a distribution means a set of 'rare' values of $T$ in the sense that
$$ \mathbb{P}\left[T \in \text{critical region}\right]=\alpha. $$
How should one choose the critical region? I guess that, in the case of this $\chi^2 (4)$ distribution, one typically chooses a critical value $t_0>0$ such that
$$\mathbb{P}[T\ge t_0] = \alpha, $$
and thus the critical region would be $[t_0, +\infty)$. However, when you look at the distribution, the values of $T$ very close to zero are rare as well; the pdf is continuous and has value 0 at 0. Thus, it would be tempting to determine two critical values $t_0$ and $t_1$ such that
$$ \mathbb{P} [T \le t_0 \text{ or } T \ge t_1 ] = \alpha,$$
and choose the critical region to be $[0, t_0] \cup [t_1, +\infty)$. Is there a rule of thumb, or is this just a matter of taste?
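As an illustration, both kinds of critical values for the $\chi^2(4)$ example can be computed; the sketch below uses the closed-form survival function $S(t)=e^{-t/2}(1+t/2)$ of the $\chi^2(4)$ distribution and bisection, so no statistics library is needed:

```python
import math

def chi2_4_sf(t):
    # survival function of chi-square with 4 df: P[T >= t] = exp(-t/2) * (1 + t/2)
    return math.exp(-t / 2.0) * (1.0 + t / 2.0)

def upper_critical(p, lo=0.0, hi=100.0):
    # solve S(t) = p by bisection (S is strictly decreasing on [0, inf))
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if chi2_4_sf(mid) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

alpha = 0.05
t0_one_sided = upper_critical(alpha)      # P[T >= t0] = alpha
t0_two = upper_critical(1 - alpha / 2.0)  # lower cut: P[T <= t0] = alpha/2
t1_two = upper_critical(alpha / 2.0)      # upper cut: P[T >= t1] = alpha/2
print(round(t0_one_sided, 3), round(t0_two, 3), round(t1_two, 3))
```

The one-sided cut-off is about 9.49, while the two-sided region cuts off roughly $[0, 0.48] \cup [11.14, \infty)$; both regions have total probability $\alpha$ under $H_0$.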
| Determining and interpreting the critical region of a distribution | CC BY-SA 4.0 | null | 2023-04-10T11:16:16.963 | 2023-04-10T11:16:16.963 | null | null | 366320 | [
"hypothesis-testing",
"critical-value"
] |
612473 | 1 | 616693 | null | 3 | 49 | I am conducting various active learning experiments on two Biomedical Relation Extraction corpora:
- 2018 n2c2 challenge: 41000 test samples
- DDI Extraction corpus: 5700 test samples
and using four different machine learning methods: Random Forest, BiLSTM-based model, Clinical BERT, and Clinical BERT with an extended input.
Initially, I evaluated the performance of all 4 methods on both corpora using all the available data (passive learning setting). Then, I conducted additional experiments using 3 different active learning query strategies (random sampling, least confidence, and BatchBALD) on both corpora, using up to 50% of the data. All experiments were repeated 5 times with different random seeds.
The specific active learning process followed in the experiments is as follows:
- Randomly select 2.5% of the total dataset to create the labeled dataset, while the remaining data forms the unlabeled pool.
- In each active learning step, query 2.5% of the total data, retrain the model from scratch, and test the newly trained model on the test set, measuring precision, recall, and F1-score.
- Stop the process when 50% of the entire dataset has been annotated, i.e. after 19 iterations.
Is there a statistical test suitable for this experimental setup (C=2 corpora, M=4 methods, S=2 query strategies plus a random baseline, 5 repetitions of each experiment) that allows me to determine whether one of the query strategies has performed significantly better?
| Statistical test to determine if Active Learning has provided a significant improvement? | CC BY-SA 4.0 | null | 2023-04-10T11:16:57.140 | 2023-05-23T14:19:19.070 | 2023-05-23T14:18:07.633 | 385302 | 385302 | [
"statistical-significance",
"active-learning"
] |
612474 | 2 | null | 612444 | 1 | null | If you have the sample size for each group, then you can work the ANOVA summary-table formulas backward to get the overall (aggregated sample) standard deviation. Let $n_k$ be the sample size for each group and $N=\sum_{k=1}^K n_k$. I'll use $K$ for the number of groups (all summations run over $k=1$ to $k=K$). I'll use $\bar x$ and $s$ for the sample means and standard deviations. Lastly, $y$ will be the aggregate data set of interest.
- Find the grand mean: $G = \frac{\sum n_k·\bar{x}_k}{\sum n_k}$
- Find $SS_\text{between} = \sum n_k · (\bar{x}_k - G)^2$
- Find $SS_\text{within} = \sum s_k^2·(n_k-1)$
- Find $SS_\text{total} = SS_\text{between} + SS_\text{within}$
- Find $MS_\text{total} = \text{Var}(y) = \frac{SS_\text{total}}{N-1}$
- Take the square-root of this final value: $s_Y = \sqrt{MS_\text{total}}$
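As a sketch, the six steps above transcribe directly into code and can be checked against the standard deviation of the pooled raw data (the toy groups below are illustrative only):

```python
import math

def aggregate_sd(n, xbar, s):
    # steps 1-6 from above: recover s_Y from group sizes, means, and SDs
    N = sum(n)
    G = sum(nk * xk for nk, xk in zip(n, xbar)) / N                 # grand mean
    ss_between = sum(nk * (xk - G) ** 2 for nk, xk in zip(n, xbar))
    ss_within = sum(sk ** 2 * (nk - 1) for nk, sk in zip(n, s))
    return math.sqrt((ss_between + ss_within) / (N - 1))            # s_Y

# sanity check against the SD of the pooled raw data
def mean(v): return sum(v) / len(v)
def sd(v):   return math.sqrt(sum((x - mean(v)) ** 2 for x in v) / (len(v) - 1))

g1, g2 = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0, 7.0]
agg = aggregate_sd([3, 4], [mean(g1), mean(g2)], [sd(g1), sd(g2)])
print(agg)           # matches sd(g1 + g2) up to floating point
print(sd(g1 + g2))
```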
I hope this is useful in your analysis for your problem.
| null | CC BY-SA 4.0 | null | 2023-04-10T11:31:15.170 | 2023-04-10T11:31:15.170 | null | null | 199063 | null |
612476 | 2 | null | 446662 | 0 | null | This is what [proper and strictly proper scoring rules](https://stats.stackexchange.com/questions/339919/what-does-it-mean-that-auc-is-a-semi-proper-scoring-rule) do, and they tend to be preferred in statistics over measures like [accuracy](https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models) and [$F_1$ score](https://stats.stackexchange.com/questions/603663/academic-reference-on-the-drawbacks-of-accuracy-f1-score-sensitivity-and-or-sp).
Briefly, your models are not the same. One is more confident about the observation belonging to the second class, and it should be rewarded for this confidence if the true observation is the second category; likewise, that model should be penalized more severely for being so overconfident.
Log loss and Brier score are two standard statistics for assessing the probability outputs of machine learning models. Below, $y_i\in\{0,1\}$ are the true observations, $\hat y_i$ are the predicted probabilities, and $N$ is the sample size.
$$
\text{Log Loss}=
-\dfrac{1}{N}\overset{N}{\underset{i=1}{\sum}}\left[
y_i\log(\hat y_i) + (1 - y_i)\log(1 - \hat y_i)
\right]\\
\text{Brier Score} = \dfrac{1}{N}\overset{N}{\underset{i=1}{\sum}}\left(
y_i - \hat y_i
\right)^2
$$
If the true label for that $x_1$ feature vector in the original question is the second category, you will find both of these giving lower (better) values for model $f$. If the true label is the first category, you will find both of these giving lower values for model $g$.
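As a small sketch with hypothetical predicted probabilities (the 0.9 and 0.6 below are illustrative, not taken from the original question), both rules reward the more confident model when it is right:

```python
import math

def log_loss(y, p):
    # mean negative log-likelihood of the true labels under the predictions
    return -sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
                for yi, pi in zip(y, p)) / len(y)

def brier(y, p):
    # mean squared error between labels and predicted probabilities
    return sum((yi - pi) ** 2 for yi, pi in zip(y, p)) / len(y)

y_true = [1]                        # suppose the true label is the second class
confident, hesitant = [0.9], [0.6]  # hypothetical outputs of two models
print(log_loss(y_true, confident) < log_loss(y_true, hesitant))  # True
print(brier(y_true, confident) < brier(y_true, hesitant))        # True
```

With the true label flipped to 0, both comparisons reverse, so the confident model is then penalized more severely for its overconfidence.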
>
But given that we choose the majority probability for prediction they both choose y=1.
It is common to do this kind of thresholding, but doing so throws away a lot of information. First, it might be that a threshold of $0.5$ is wildly inappropriate for your task, such as if the consequences of mistaking a $0$ for a $1$ are much worse than the consequences of mistaking a $1$ for a $0$. Second, this removes any kind of "grey zone" where the best decision is not to make a decision and collect more data. Yes, a prediction of $0.51$ will be mapped to a particular categorical prediction, but I would like to know that, even if this is the likely outcome, I am on thin ice.
Frank Harrell of Vanderbilt University has two great blog posts that get into this in more detail.
[Classification vs. Prediction](https://www.fharrell.com/post/classification/)
[Damage Caused by Classification Accuracy and Other Discontinuous Improper Accuracy Scoring Rules](https://www.fharrell.com/post/class-damage/)
| null | CC BY-SA 4.0 | null | 2023-04-10T12:28:33.500 | 2023-04-10T12:28:33.500 | null | null | 247274 | null |
612477 | 1 | null | null | 0 | 20 | >
Def. Let $ \theta \in \mathbb{R}^{\mathbb{Z}}$ be a sequence of real numbers such that $ \sum_{j \in \mathbb{Z}} | \theta_j | < \infty$ and $\{W_t\}$ be a white noise with variance $\gamma$. Then a time series $\{X_t\}$ is a linear process if it can be represented as $$ X_t = \sum_{j\in \mathbb{Z}} \theta_j W_{t-j}.$$
Using the criterion for mean-square convergence, if $S = \sum_{j \in \mathbb{Z}} | \theta_j |$, then for any $ t \in \mathbb{Z}$,
$$ E(X_t^2) = E\bigg[\bigg(\sum_{j\in \mathbb{Z}} \theta_j W_{t-j}\bigg)^2\bigg] \leq \bigg(\sum_{j \in \mathbb{Z}} | \theta_j | \sqrt{E(W_{t-j}^2)}\bigg)^2 \stackrel{?}{=} S^2 \gamma.$$
I'm failing to see why the last equality holds.
Should not $ \sum_{j \in \mathbb{Z}} E(W_{t-j}^2) = \infty$ anyway, if $\gamma \neq 0$?
| Linear process, how is $X_t$ well-defined? | CC BY-SA 4.0 | null | 2023-04-10T12:44:53.793 | 2023-04-10T12:44:53.793 | null | null | 384994 | [
"linear",
"stochastic-processes"
] |
612478 | 2 | null | 533581 | 2 | null | A typical use of oversampling or other artificial balancing of the categories is to make it so the minority category has a better chance of having a prediction above the threshold to transform continuous model predictions into discrete categorical predictions. However, when the categories are imbalanced, it might be that the majority category is always more likely. Consequently, to get predictions that are aligned with the reality of how frequently the categories really occur, those artificially inflated high predictions have to be toned down.
So the strategy is:
- Artificially inflate the probability of membership in the minority category so thresholded predictions are more likely to be above the threshold.
- Calibrate these inflated predictions so the final predictions of a pipeline are related to the true probabilities of event occurrence. That is, we do not want a predicted probability of $0.6$ to correspond to the event happening $20\%$ of the time, as this would mean that the predicted probability is not telling the truth.
At best, this strikes me as inefficient.$^{\dagger}$ At worst, it misleads aspiring machine learning modelers into deemphasizing the rich information available in the probability predictions and to obsess over a threshold of $0.5$ just because that is the software default. Even if there is considerable information available in full probability predictions, at the very least, it is possible to change the threshold to something more reasonable for the task if you must use a threshold (such as in an automated software system that either does or does not ring an alarm).
$^{\dagger}$There are interesting edge cases where such an approach of oversampling and then adjusting the outputs can be a good idea. There is a nice example in the comments related to computational efficiency and another one [linked here](https://stats.stackexchange.com/a/559317/247274) (by the same member as the comment).
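As an illustration of the calibration step, one closed-form way to undo artificial balancing is a prior-shift (odds) correction. This is a sketch, and the prevalences and raw prediction below are made-up numbers, not a general recipe:

```python
def recalibrate(p, prev_train, prev_true):
    """Map a probability predicted by a model trained at an artificial
    prevalence back to the true prevalence via an odds correction."""
    odds = p / (1 - p)
    shift = (prev_true / (1 - prev_true)) / (prev_train / (1 - prev_train))
    adjusted = odds * shift
    return adjusted / (1 + adjusted)

# made-up numbers: model trained on 50/50 balanced data, true prevalence 5%
p_raw = 0.60                                    # inflated prediction
p_cal = recalibrate(p_raw, prev_train=0.50, prev_true=0.05)
```

The correction pulls the inflated probability back toward the rare-event base rate.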
| null | CC BY-SA 4.0 | null | 2023-04-10T12:47:26.060 | 2023-04-10T13:08:13.343 | 2023-04-10T13:08:13.343 | 247274 | 247274 | null |
612479 | 2 | null | 453552 | 3 | null | The plot and the Shapiro-Wilk test seem totally consistent with each other.
The test gives a tiny p-value, indicating that normality is basically out of the question.
The plot shows deviations from normality, especially at the top right. A normal distribution would give points around the diagonal line, while your points start drifting away from the diagonal line around $x=1$ and higher.
Note, however, that formal normality testing has a [tendency to catch deviations that, upon visual inspection, are clearly trivial](https://stats.stackexchange.com/questions/2492/is-normality-testing-essentially-useless). The test is doing what it is supposed to do by flagging a slight deviation from normality as a deviation from normality, and I give a demonstration [here](https://stats.stackexchange.com/a/611979/247274). However, most of the time when normality is desired, we just need "close enough" to normality for downstream statistics to work as we want. Hypothesis testing for normality, particularly when sample sizes are large, is likely to catch deviations that have minimal impact on our work, even if the test is correct to notice the deviation.
| null | CC BY-SA 4.0 | null | 2023-04-10T12:55:08.680 | 2023-04-10T13:53:21.730 | 2023-04-10T13:53:21.730 | 22047 | 247274 | null |
612480 | 2 | null | 565044 | 0 | null | I would test this by running a regression with the predictors being the environment categorical variable on its own, the continuous temperature variable on its own, and an interaction between the two variables. Then I would test the coefficient on the interaction. However, this is complicated by the fact that your categorical variable has three levels, so there is not just one interaction term. Your regression winds up being something like this.
$$
\mathbb E[Y\vert X] = \beta_0 + \beta_1x_{\text{temp}} + \beta_2 x_{\text{env2}} + \beta_3 x_{\text{env3}} + \beta_4x_{\text{temp}}x_{\text{env2}} + \beta_5x_{\text{temp}}x_{\text{env3}}
$$
As is typical of a regression with categorical variables, one of the categories (here, environment 1) is subsumed by the intercept $\beta_0$.
Then, I would jointly test the $\beta_4$ and $\beta_5$ coefficients to see if either is nonzero. I have gotten into calling this a [chunk test](https://stats.stackexchange.com/questions/27429/what-are-chunk-tests) because the test is of an entire chunk of coefficients instead of just one. A typical test in this situation would be to consider the above model to be the full model and the model below that drops the interactions to be the reduced model, and then perform an F-test.
$$
\mathbb E[Y\vert X] = \beta_0 + \beta_1x_{\text{temp}} + \beta_2 x_{\text{env2}} + \beta_3 x_{\text{env3}}
$$
In software, you could execute this as follows.
```
set.seed(2023)
N <- 1000
x1 <- runif(N, 0, 1)
x2 <- as.factor(sample(c(1, 2, 3), N, replace = T))
y <- 2*x1 + rnorm(N)
model_full <- lm(y ~ x1 + x2 + x1:x2)
model_reduced <- lm(y ~ x1 + x2)
anova(model_reduced, model_full)
```
I get a p-value of `0.1304` from the `anova` line, which is consistent with the fact that my simulated data made no use of the interaction (or even of the categorical `x2` variable).
EDIT
The math is as follows. Define $SSR_0$ to be the sum of squared residuals for the reduced model, $SSR_1$ to be the sum of squared residuals for the full model, $n$ to be the sample size, $p_0=4$ to be the number of coefficients in the reduced model, and $p_1=6$ to be the number of coefficients in the full model. Then:
$$
\dfrac{
\left(
SSR_0 - SSR_1
\right)
/
\left(
p_1 - p_0
\right)
}{
\left(
SSR_1
\right)
/
\left(
n - p_1
\right)
}
=F\sim F_{p_1 - p_0, \space n - p_1}
$$
A reference is on page 89 of Agresti (2015).
Then this "F-statistic" is compared to the F-distribution with $p_1 - p_0$ and $n - p_1$ degrees of freedom to calculate a p-value. You want the probability above this calculated $F$.
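For illustration, the formula can also be computed by hand on simulated data (a sketch with a single binary variable instead of three environment levels, so here $p_0=3$ and $p_1=4$ rather than the values above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(size=n)                        # continuous predictor
d = rng.integers(0, 2, size=n).astype(float)   # binary group indicator
y = 1 + 2 * x + 3 * x * d + rng.normal(scale=0.1, size=n)  # true interaction present

def ssr(X, y):
    """Sum of squared residuals from a least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

ones = np.ones(n)
X_reduced = np.column_stack([ones, x, d])       # p0 = 3 coefficients
X_full = np.column_stack([ones, x, d, x * d])   # p1 = 4 coefficients

ssr0, ssr1 = ssr(X_reduced, y), ssr(X_full, y)
p0, p1 = 3, 4
F = ((ssr0 - ssr1) / (p1 - p0)) / (ssr1 / (n - p1))  # large F: interaction matters
```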
REFERENCE
Agresti, Alan. Foundations of linear and generalized linear models. John Wiley & Sons, 2015.
| null | CC BY-SA 4.0 | null | 2023-04-10T13:09:08.773 | 2023-04-10T14:54:01.097 | 2023-04-10T14:54:01.097 | 247274 | 247274 | null |
612483 | 2 | null | 612405 | 7 | null | There are already some great answers here. Another way to do this would be to use MCMC à la the Metropolis–Hastings algorithm. See my implementation below.
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import beta
@np.vectorize
def target_distribution(x):
a = 5
b = .999
top = -(a+1)**2 * x**a * np.log(b*x)
bottom = 1 - (a+1)*np.log(b)
return top/bottom
# Initialize the Metropolis-Hastings algorithm with an initial value of x = 0.5 and a number of iterations
x = 0.5
iterations = 100000
# Initialize an array to store the samples
samples = np.zeros(iterations)
# Run the Metropolis-Hastings algorithm
for i in range(iterations):
# Sample from the proposal distribution
x_proposal = beta(a=1, b = 1).rvs()
# Calculate the acceptance ratio
acceptance_ratio = target_distribution(x_proposal) / target_distribution(x)
# Accept or reject the proposal
if np.random.uniform() < acceptance_ratio:
x = x_proposal
# Store the sample
samples[i] = x
# Plot the samples
plt.hist(samples, bins=50, density=True, alpha=0.5)
x_range = np.linspace(0, 1, 1000)
plt.plot(x_range, target_distribution(x_range))
plt.show()
```
[](https://i.stack.imgur.com/t1XZH.png)
| null | CC BY-SA 4.0 | null | 2023-04-10T13:54:14.157 | 2023-04-10T16:05:40.800 | 2023-04-10T16:05:40.800 | 597 | 111259 | null |
612484 | 2 | null | 552192 | 0 | null | It seems that there is a mistake on the home page where it states the number of instances and attributes for the ElectricityLoadDiagrams20112014 Data Set: there are 140256 instances (rows) and 370 attributes (columns) in the dataset, which is the reverse of what is stated there.
It's possible that the mistake was made inadvertently, as the numbers could be confused if the dataset is represented as a matrix or table with instances as rows and attributes as columns.
| null | CC BY-SA 4.0 | null | 2023-04-10T14:29:34.377 | 2023-04-10T14:29:34.377 | null | null | 385381 | null |
612485 | 2 | null | 612467 | 0 | null | For a test of `C`, Type III sums of squares compare the full model with a reduced model where just the main effect of `C` is removed but all interactions involving `C` are retained (see [this excellent post](https://stats.stackexchange.com/a/20455/21054) for a detailed comparison between the different types of SS). In other words, Type III SS do not respect the [principle of marginality](https://en.wikipedia.org/wiki/Principle_of_marginality).
In order to reproduce the output from `car::Anova` using `anova`, you need to fit the two models first and then feed them into `anova`. The problem is that with only categorical variables (i.e. factors), it's [not possible](https://stackoverflow.com/a/40730731) to drop just the main effect of `C` using the formula interface when interactions with `C` are present in the model. But you can do it by manipulating the model matrix directly (see [here](https://stackoverflow.com/questions/43146368/checking-type-iii-anova-results)). Save the model matrix of the full model and remove the column that corresponds to `C`, then refit the model to obtain the reduced model.
Here is how it's done:
```
library(car)
options(contrasts = c("contr.sum", "contr.poly"))
#
# The data
#
A <- factor(rep(c("a", "b"), c(4, 8)))
B <- factor(c("x", "y", "x", "y", "x", "y", "x", "y", "x", "x", "x", "x"))
C <- factor(c("l", "l", "m", "m", "l", "l", "m", "m", "l", "l", "l", "l"))
y <- c(14, 30, 15, 35, 50, 51, 30, 32, 51, 55, 53, 55)
dat <- data.frame(y, A, B , C)
m <- lm(y~A*B*C, data = dat)
Anova(m, type = "III")
Response: y
Sum Sq Df F value Pr(>F)
(Intercept) 9374.5 1 1802.7788 1.839e-06 ***
A 716.0 1 137.6934 0.0003017 ***
B 182.0 1 35.0011 0.0040878 **
C 178.0 1 34.2318 0.0042571 **
A:B 178.0 1 34.2318 0.0042571 **
A:C 317.3 1 61.0267 0.0014491 **
B:C 8.5 1 1.6250 0.2714108
A:B:C 0.0 1 0.0011 0.9754909
Residuals 20.8 4
```
So far so good. To reproduce the output for `C`, fit the full model, extract the model matrix, remove column corresponding to `C` and refit.
```
m1 <- lm(y~A*B*C, data = dat) # Full model
X <- model.matrix(m1) # Extract the model matrix of the full model
X0 <- X[, -c(4)] # Removing the column corresponding to C
m0 <- with(dat, lm(y~X0 + 0)) # Reduced model
anova(m0, m1, test = "F")
Model 1: y ~ X0 + 0
Model 2: y ~ A * B * C
Res.Df RSS Df Sum of Sq F Pr(>F)
1 5 198.81
2 4 20.80 1 178.01 34.232 0.004257 **
```
You see that now we fully reproduced the output from `Anova` for the main effect of `C` using the two models and `anova`.
| null | CC BY-SA 4.0 | null | 2023-04-10T14:30:10.223 | 2023-04-10T14:40:52.863 | 2023-04-10T14:40:52.863 | 21054 | 21054 | null |
612486 | 1 | null | null | 0 | 12 | I want to build a model using a dataset. Then, I edit the dataset by changing the class attribute (let's say I will have a new version of the dataset). After that, I want to apply the same model to the new version of the dataset.
Is this procedure correct?
Because I did it but the performance of the model improved significantly.
I will explain my procedure in detailed steps:
- I have an imbalanced binary classification dataset (let's call it: a raw dataset).
- I balanced the raw dataset using SMOTE technique.
- I built a model and its performance was: accuracy (86.4626), precision (84.7), recall (86.5), F1-measure (84.5), and AUC-ROC (80.9).
- I changed the class attribute of the raw dataset (let's say I had a new imbalanced dataset).
- I balanced the new dataset using SMOTE technique.
- I applied the same model, mentioned in step 3, to the new dataset, and its performance was: accuracy (96.9388), precision (97.6), recall (96.9), F1-measure (97.1), and AUC-ROC (99).
I'm afraid there is something wrong with what I did causing overfitting or something.
| About the performance of a model after changing the class attribute | CC BY-SA 4.0 | null | 2023-04-10T14:59:13.290 | 2023-04-10T14:59:13.290 | null | null | 379079 | [
"binary-data",
"unbalanced-classes",
"overfitting"
] |
612487 | 1 | null | null | 0 | 14 | Let $X_i$ be iid random sample from $exp(\lambda)$ with $f(x;\lambda)=\lambda e^{-\lambda x}$ for $x>0$ and $\lambda>0$. Find the $\alpha$-level uniformly most powerful test for $H_0: \lambda\le \lambda_0$ v.s. $H_1: \lambda> \lambda_0$.
---
I try to use the Karlin-Rubin theorem as follows. Under $H_0$, we take $$T=2\lambda \sum_i X_i\sim \text{Gamma}(n, 2)=\chi^2(2n).$$ So I take the test function of UMPT:
\begin{equation}
\Phi(x)=
\begin{cases}
1 & \text{if } T>\chi^2_{1-\alpha}(2n)\\
0 & \text{if } T\le \chi^2_{1-\alpha}(2n)
\end{cases}
\end{equation}
But the solution used
\begin{equation}
\Phi(x)=
\begin{cases}
1 & \text{if } T\le \chi^2_{1-\alpha}(2n)\\
0 & \text{if } T> \chi^2_{1-\alpha}(2n)
\end{cases}
\end{equation}
I am confused about why here we choose $T<k$ as the rejection region.
| Why here we choose $T<k$ as the rejection region in the $\alpha$-level uniformly most powerful test? | CC BY-SA 4.0 | null | 2023-04-10T15:06:30.577 | 2023-04-10T15:06:30.577 | null | null | 334918 | [
"hypothesis-testing",
"self-study"
] |
612488 | 2 | null | 611891 | 1 | null | It seems a diagonal line can appropriately be called a reference line, as mentioned by whuber.
| null | CC BY-SA 4.0 | null | 2023-04-10T15:09:56.710 | 2023-04-10T15:09:56.710 | null | null | 321032 | null |
612489 | 2 | null | 547499 | 3 | null | $1\text{st}$
>
(PS: I don't really know how to choose the equivalence bounds..)
The good news is that this is not really a statistics question. This is a matter of what your customer (loosely speaking) regards as "close enough" to zero for the difference to be acceptable. The bad news is that, if your customer will not say, you need to use your own knowledge of the subject matter to decide this or bring in a subject matter expert to help you decide it.
If you cannot determine what constitutes equivalence bounds, it is reasonable to question if equivalence testing is an appropriate method for your work.
$2\text{nd}$
You have set your equivalence bounds as $\pm 0.395$. One one-sided test of the TOST procedure rejects the hypothesis that the difference is at least $0.395$, so the difference must be less. The other one-sided test rejects the hypothesis that the difference is at most $-0.395$, so the difference must be greater. Together, these tests say that the difference is between $-0.395$ and $+0.395$. It seems that your equivalence test was a success!
The NHST result seems to be testing if the difference is nonzero. However, such a test aligns with your "the absence of evidence is not the evidence of absence" line and is not of interest for showing equivalence (except perhaps if you have done a power calculation).
Your TOST and NHST result can coexist. TOST says that the difference is between $-0.395$ and $+0.395$, while the NHST says that the difference is not zero. Since the plot shows the difference to be about $0.07$, these results are totally compatible.
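As a numerical sketch of how the two results can coexist (the standard error below is a made-up value, not taken from your data):

```python
from statistics import NormalDist

def tost_and_nhst(diff, se, low, high, alpha=0.05):
    """TOST: two one-sided z-tests against the equivalence bounds,
    plus an ordinary two-sided NHST of H0: difference = 0."""
    z = NormalDist()
    p_low = 1 - z.cdf((diff - low) / se)    # H0a: diff <= low
    p_high = 1 - z.cdf((high - diff) / se)  # H0b: diff >= high
    p_nhst = 2 * (1 - z.cdf(abs(diff) / se))
    return max(p_low, p_high) < alpha, p_nhst < alpha

# made-up summary statistics: observed difference 0.07, assumed SE 0.02
equivalent, nhst_significant = tost_and_nhst(0.07, 0.02, -0.395, 0.395)
```

With these numbers, both flags are true: the difference is significantly nonzero and significantly inside the equivalence bounds.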
| null | CC BY-SA 4.0 | null | 2023-04-10T15:29:27.917 | 2023-04-10T15:37:51.773 | 2023-04-10T15:37:51.773 | 247274 | 247274 | null |
612491 | 2 | null | 612403 | 1 | null | You get perfect separation because the random effects can separate the results for each student. For $n$ students you have $4n$ data rows, but within each student the result is the same every time.
So effectively you have no repetition within the groups for which you compute the random effects.
| null | CC BY-SA 4.0 | null | 2023-04-10T15:36:49.637 | 2023-04-10T15:36:49.637 | null | null | 164061 | null |
612494 | 2 | null | 612462 | 2 | null | You can change the ratios of the mixture to make the density function continuous.
The ratio can be found from the pdfs of the two truncated distributions, or from the cdfs and pdfs of the original distributions that were truncated.
[](https://i.stack.imgur.com/MquOa.png)
```
library(sads)
set.seed(1)
n = 10^5
### value where we stitch the two distributions together
xc = 4
### pdf and cdf of two distributions
Gx = ppareto(xc,1.99,1)
gx = dpareto(xc,1.99,1)
Fx = pnorm(xc,1,6)
fx = dnorm(xc,1,6)
### sample uniform variable
U = runif(n)
### determine the mixing ratio based of pdf and cdf
mixtureodds = (gx/(1-Gx))/(fx/Fx)
mixturecut = mixtureodds/(1+mixtureodds)
### compute the mixture based on the uniform variable
sel = which(U<mixturecut)
### compute the distribution with the quantile function
### re-use the uniform variable as input
### the variable needs to be properly scaled
U[sel] = qnorm(U[sel]*Fx/mixturecut,1,6)
U[-sel] = qpareto(1-(1-U[-sel])*(1-Gx)/(1-mixturecut),1.99,1)
hist(U[U<100], nclass = 200)
```
### Alternative approach
As Whuber mentions in the comments you can also keep the mixing ratio the same, but instead adjust the Pareto distribution. A simple way would be to scale and shift it such that the pdf's match each other.
Basically the trick is to adjust $f(x)$, $g(x)$ and/or the mixing ratio, such that the two match each other at the point where they are truncated.
In the example below the Pareto distribution part was a factor 3 larger at the point $x = 4$ so by scaling it with a factor $3$ (and shifting it 8 back to make it match again) you can get it 3 times smaller.
Example:
[](https://i.stack.imgur.com/0LMhd.png)
```
library(sads)
set.seed(1)
n = 10^5*3
### value where we stitch the two distributions together
xc = 4
### desired ratio 2 parts normal 1 part pareto
ratio = 2/1
### pdf and cdf of two distributions
scale = 2
Gx = ppareto(xc,1.99,scale)
gx = dpareto(xc,1.99,scale)
Fx = pnorm(xc,1,6)
fx = dnorm(xc,1,6)
### relative difference in height between the two mixture parts
diff = (fx/Fx)/(gx/(1-Gx))*ratio
### sample uniform variable
U = runif(n)
### determine the cutoff
mixturecut = ratio/(1+ratio)
### compute the mixture based on the uniform variable
sel = which(U<mixturecut)
### compute the distribution with the quantile function
### re-use the uniform variable as input
### the variable needs to be properly scaled
U[sel] = qnorm(U[sel]*Fx/mixturecut,1,6)
U[-sel] = qpareto(1-(1-U[-sel])*(1-Gx)/(1-mixturecut),1.99,scale)/diff+(xc)*(1-1/diff)
hist(U[U<100], nclass = 100, xlab = "x", main = "mixture of truncated normal \n and truncated scaled and shifted Pareto",freq = F)
```
| null | CC BY-SA 4.0 | null | 2023-04-10T16:17:05.670 | 2023-04-10T20:16:06.230 | 2023-04-10T20:16:06.230 | 164061 | 164061 | null |
612495 | 2 | null | 522042 | 0 | null | The accuracy, precision, and recall hitting a plateau while the loss does not is reasonable behavior. Accuracy, precision, and recall all just depend on the side of some threshold (basically always $0.5$ as a software default) where the prediction falls. Thus, the training might reach a point where basically every instance is on the correct side of that threshold but still be able to improve the raw predictions, as opposed to the binned predictions (classifications).
Further, despite the [issues with accuracy, precision, and recall](https://stats.stackexchange.com/questions/603663/academic-reference-on-the-drawbacks-of-accuracy-f1-score-sensitivity-and-or-sp), your values look quite high, so your model seems to be good at something despite the fact that the loss value might be able to be improved.
It might be worth continuing to train to improve the loss, even though the threshold-based metrics are nearing a plateau. That little bit of additional loss you squeeze out of the model should result in better probabilistic prediction (which are useful) and might even flip a few observations to the correct side of the threshold to yield better accuracy, precision, and recall.
| null | CC BY-SA 4.0 | null | 2023-04-10T16:17:58.813 | 2023-04-10T16:17:58.813 | null | null | 247274 | null |
612496 | 1 | 612499 | null | 9 | 829 | Suppose that we want to know how the price of a house changes per meter square of the area of the house. Further suppose that I have a dataset as the following:
```
Area | Price
100 200k
120 230k
...
```
That is, only the area and the price of a set of houses.
Given this setup I can think of two different ways:
- Fit a linear model (using linear regression) and look at the coefficient of Area.
- For all pairs of houses $i$ and $j$ find $\frac{price_i - price_j}{area_i - area_j}$
then take the average.
My question: Are these solutions different? If yes, in what ways they are different? or pros and cons?
| Linear regression vs. average of slopes | CC BY-SA 4.0 | null | 2023-04-10T16:20:31.713 | 2023-04-12T05:55:19.020 | 2023-04-10T16:38:23.697 | 247274 | 29475 | [
"regression",
"linear-model",
"regression-coefficients"
] |
612498 | 1 | null | null | 5 | 379 | The screenshot below is from a paper that I am reading and the author says it is a non-parametric regression. The explanation below just seems like normal OLS with some covariates, fixed effects, etc. What exactly is a non-parametric regression, and how do we see it from the equation below? When do we use it? The only noticeable difference from standard OLS seems to be the L function, which I don't understand. Also, when running a non-parametric regression, is the function in R different from the normal lm function?
[](https://i.stack.imgur.com/PKmH9.png)
| What is non-parametric regression? | CC BY-SA 4.0 | null | 2023-04-10T16:35:53.240 | 2023-04-10T19:17:20.517 | null | null | 355204 | [
"nonparametric"
] |
612499 | 2 | null | 612496 | 11 | null | DIFFERENT
We just need one example of these two being different to show that the two need not give the same result, so let's simulate an example.
```
x <- c(1, 2, -3)
y <- c(1, 6, 3)
# Fit a linear model to calculate the OLS slope coefficient
#
L <- lm(y ~ x)
# Find the pairwise slopes
#
slopes <- rep(NA, 3)
slopes[1] <- (y[2] - y[1])/(x[2] - x[1])
slopes[2] <- (y[3] - y[1])/(x[3] - x[1])
slopes[3] <- (y[3] - y[2])/(x[3] - x[2])
# Compare the OLS estimate with the mean of the pairwise slopes
#
summary(L)$coef[2, 1]
mean(slopes)
```
The OLS slope estimate is $0.2857143$, while the mean pairwise slope is $1.7$, so the two methods do not have to agree.
For a really interesting example, as is pointed out in the comments, consider what happens when two distinct $y$-values correspond to the same $x$-value. Will that code even run for `x <- c(1, 2, 1)`? Should it run?
A similar regression method that might be of interest is the [Theil–Sen estimator](https://en.wikipedia.org/wiki/Theil%E2%80%93Sen_estimator) (median pairwise slope instead of mean). In the above example, the Theil-Sen slope estimate is $0.6$, different from both the OLS estimate and the mean of the pairwise slopes.
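As a cross-check in Python on the same toy data (here the Theil–Sen estimate is computed directly as the median of the pairwise slopes):

```python
from itertools import combinations
import numpy as np

x = np.array([1.0, 2.0, -3.0])
y = np.array([1.0, 6.0, 3.0])

ols_slope = np.polyfit(x, y, deg=1)[0]  # OLS slope estimate

# all pairwise slopes (y_j - y_i) / (x_j - x_i)
pairwise = [(y[j] - y[i]) / (x[j] - x[i]) for i, j in combinations(range(len(x)), 2)]
mean_slope = float(np.mean(pairwise))   # mean of pairwise slopes
ts_slope = float(np.median(pairwise))   # Theil-Sen: median of pairwise slopes
```

The three estimates come out as roughly $0.286$, $1.7$, and $0.6$, matching the R results above.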
| null | CC BY-SA 4.0 | null | 2023-04-10T16:36:13.297 | 2023-04-12T05:55:19.020 | 2023-04-12T05:55:19.020 | 1352 | 247274 | null |
612500 | 1 | 612503 | null | 8 | 724 | Assume that you have made a PCA analysis and you got your eigenvectors inside the projection matrix $W$. If you project your data $X$ with $W$, then you get the desired projected dimension.
But PCA and ICA are different. If we look at this picture, PCA projects onto the directions that capture the most variance in the data, while ICA projects onto directions that are as statistically independent as possible.
My question is simple: Is it possible to turn PCA into ICA by rotating the eigenvectors by some angle? If yes, how?
[](https://i.stack.imgur.com/cfZWK.png)
| Is it possible to turn PCA into ICA by rotating the eigenvectors? | CC BY-SA 4.0 | null | 2023-04-10T16:43:57.653 | 2023-04-10T18:34:45.640 | null | null | 275488 | [
"pca",
"dimensionality-reduction",
"independent-component-analysis"
] |
612501 | 2 | null | 612161 | 4 | null | This would be a comment, but I need to provide some code, so I'll include it as an answer.
This may be in essence a programming question, but there might be a statistics question as well.
- With glmmTMB objects, you may need to fit a null model to determine the null deviance. What constitutes a null model may depend on your purposes.
- You should be able to derive the deviance from the log likelihood. The latter can be extracted with logLik(model).
- The anova() function displays the deviance.
```
library(glmmTMB)
model = glmmTMB(count ~ mined + (1|site), family=poisson, data=Salamanders)
model.null = glmmTMB(count ~ 1+ (1|site), family=poisson, data=Salamanders)
logLik(model)
### 'log Lik.' -1104.849 (df=3)
logLik(model.null)
### 'log Lik.' -1120.773 (df=2)
anova(model, model.null)
### Df AIC BIC logLik deviance Chisq Chi Df Pr(>Chisq)
### model.null 2 2245.6 2254.5 -1120.8 2241.6
### model 3 2215.7 2229.1 -1104.8 2209.7 31.847 1 1.668e-08 ***
```
| null | CC BY-SA 4.0 | null | 2023-04-10T16:51:19.627 | 2023-04-10T16:51:19.627 | null | null | 166526 | null |
612502 | 2 | null | 612469 | 2 | null | You can immediately see that the second calculation is incorrect because probabilities should add to 1, namely you should always have `P(yes|rain, good) + P(no|rain, good) = 1`. But this is not the case in your second calculation, so clearly it makes no sense.
The reason for this is that conditional independence does not imply independence. Naïve Bayes assumes that the features (e.g. road condition & weather condition) are conditionally independent given the target variable, so for example `P(rain,good|yes)=P(rain|yes)*P(good|yes)` but this does not imply that `P(rain, good) = P(rain) * P(good)`, hence the inconsistency in your calculation.
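A tiny worked example makes this concrete (the conditional probabilities below are made-up numbers):

```python
p_y = {0: 0.5, 1: 0.5}          # P(Y = y), e.g. no/yes
p_a_given_y = {0: 0.1, 1: 0.9}  # P(A = 1 | Y = y), e.g. road condition
p_b_given_y = {0: 0.1, 1: 0.9}  # P(B = 1 | Y = y), e.g. weather condition

def joint(a, b, y):
    """Joint probability with conditional independence of (a, b) given y built in."""
    pa = p_a_given_y[y] if a else 1 - p_a_given_y[y]
    pb = p_b_given_y[y] if b else 1 - p_b_given_y[y]
    return p_y[y] * pa * pb

p_ab = sum(joint(1, 1, y) for y in (0, 1))                 # P(A=1, B=1) = 0.41
p_a = sum(joint(1, b, y) for b in (0, 1) for y in (0, 1))  # P(A=1) = 0.5
p_b = sum(joint(a, 1, y) for a in (0, 1) for y in (0, 1))  # P(B=1) = 0.5
# P(A=1, B=1) = 0.41 but P(A=1) * P(B=1) = 0.25:
# conditionally independent given Y, yet not marginally independent
```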
| null | CC BY-SA 4.0 | null | 2023-04-10T16:52:30.720 | 2023-04-10T16:52:30.720 | null | null | 348492 | null |
612503 | 2 | null | 612500 | 11 | null | No, in general, you can't rotate the principal components to obtain ICA. One of the defining traits of PCA is that the component directions are orthogonal. If you rotate the principal components, they'll still be orthogonal after the rotation. (This is because a rotation matrix is an [orthogonal transformation](https://en.wikipedia.org/wiki/Orthogonal_transformation).) Almost always, ICA components are not orthogonal, so rotation of principal components will not recover ICA components.
The only caveat is trivial -- if the ICA directions are orthogonal to begin with, then they will still be orthogonal after rotation, for the same reasons.
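A quick numerical sketch of this point (the rotation angle and dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
# orthonormal columns, standing in for principal component directions
V, _ = np.linalg.qr(rng.normal(size=(3, 3)))

theta = 0.7  # arbitrary rotation angle in the first two coordinates
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

W = R @ V          # rotated directions
gram = W.T @ W     # remains the identity: the columns stay orthonormal
```

Since the Gram matrix of the rotated directions is still the identity, no rotation can produce the non-orthogonal directions that ICA generally yields.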
| null | CC BY-SA 4.0 | null | 2023-04-10T16:57:57.193 | 2023-04-10T18:34:45.640 | 2023-04-10T18:34:45.640 | 22311 | 22311 | null |
612504 | 1 | null | null | 0 | 12 | Scenario:
I have data comparing the number of tree stems in 30 forest plots between two sampling years (1992 and 2012). Each plot received hurricane damage between these 2 sampling years -- this damage was coded as being 0-100% of trees felled/damaged.
I ran a linear regression using `lm()` in R including a centered year term, hurricane damage, and an interaction term between them.
I get the following output:
```
Call:
lm(formula = Count.Ha ~ I(Year - 1992) * HurrDam, data = dataset)
Residuals:
Min 1Q Median 3Q Max
-368.84 -69.79 -23.01 81.30 413.28
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 147.3300 50.7297 2.904 0.00529 **
I(Year - 1992) -17.2595 3.4007 -5.075 4.73e-06 ***
HurrDam -1.4680 1.6764 -0.876 0.38503
I(Year - 1992):HurrDam 0.7634 0.1128 6.766 9.11e-09 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 138.1 on 55 degrees of freedom
Multiple R-squared: 0.5886, Adjusted R-squared: 0.5662
F-statistic: 26.23 on 3 and 55 DF, p-value: 1.15e-10
```
As you can see, Year is significant as is the interaction term, but `HurrDam` is not.
How do I interpret this?
- I've seen a number of posts discussing interpretation when discrete variables or even continuous non-bounded variables are involved, but I'm not sure how my inclusion of a time variable and a bounded percentage as a variable impacts the way one would interpret these results.
- Note: my ultimate hypothesis I'm trying to investigate is that the number of stems did not increase with time except in plots with greatest hurricane damage.
| Interpret significant interaction with nonsignificant main term in regression involving continuous time variable and a bounded (0-100%) variable? | CC BY-SA 4.0 | null | 2023-04-10T17:03:26.957 | 2023-04-10T17:03:26.957 | null | null | 80624 | [
"r",
"statistical-significance",
"multiple-regression",
"interaction",
"interpretation"
] |
612505 | 1 | null | null | 4 | 222 | With all the concern about reproducibility, I have not seen a very basic question answered. Using the standard hypothesis testing approach, if one experiment results in p<0.05, what is the chance that a repeat experiment will also result in p<0.05? I've seen a related problem approached by Goodman (1) and others, starting with a particular p-value for the first experiment, but I have not seen it more generally as I stated the problem.
So my question here is if the approach below has already been published somewhere.
Let’s make pretty standard decisions that alpha = 0.05 and power = 0.80. We also need to define the scientific context of the experimentation. Let's say we are in a situation where you expect half the hypotheses tested to be true and half are not. In other words the probability of the null hypothesis is 0.50, which we'll call pNull.
Let's compute the results of 1000 (arbitrary, of course) first experiments.
- Number of experiments where the null H is actually true = 1000 * pNull = 500.
- Number of these expected to result in p<alpha = 500 * alpha = 25 experiments.
- Number of experiments where the alternative H is actually true = (1 - pNull)*1000 = 500
- Number of these expected to result in p<alpha = 500 * power = 400
- Total experiments expected to result in p<alpha = 25 + 400 = 425
Now on to the second experiment. We only run the second experiment for cases where the first experiment resulted in p<alpha.
- Of the 25 experiments (where null is actually true), how many of the second experiments are expected to result in p<alpha? 25 * alpha = 1.25
- Of the 400 experiments (where the alternative is true), how many of the repeat experiments are expected to result in p<alpha? 400 * power = 320
- Number of second experiments expected to result in p<alpha = 1.25 + 320 = 321.25
Given that the first experiment resulted in p<alpha, the chance that a second identical experiment will also result in p<alpha = 321.25/425 = 0.756
This assumes you set alpha = 0.05 and power = 0.80, and the scientific situation is such that pNull = 0.50. I like to think things out verbally, but of course, this can all be compressed into equations. But my question is whether this straightforward approach has already been published.
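For instance, the bookkeeping above compresses into one line of R (a sketch; the function name `rep_prob` is my own, and the counts of 1000 experiments cancel out):

```r
## Sketch of the calculation above; the name rep_prob is my own choice.
## The 1000-experiment counts cancel, leaving only alpha, power and pNull.
rep_prob <- function(alpha = 0.05, power = 0.80, pNull = 0.50) {
  (pNull * alpha^2 + (1 - pNull) * power^2) /
    (pNull * alpha  + (1 - pNull) * power)
}
rep_prob()  # 321.25/425 = 0.7558824
```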
- Goodman, S. N., 1992, A comment on replication, P-values and evidence: Statistics in Medicine, v. 11, no. 7, p. 875–879, doi:10.1002/sim.4780110705.
| If p < 0.05 in one experiment, what is the probability of p < 0.05 in a repeat experiment? | CC BY-SA 4.0 | null | 2023-04-10T17:05:38.073 | 2023-04-17T21:20:52.160 | 2023-04-11T18:42:39.527 | 25 | 25 | [
"hypothesis-testing",
"p-value",
"reproducible-research"
] |
612506 | 2 | null | 612498 | 7 | null | In general, this is an interesting question that comes up a lot.
I'll be the first to say "non-parametric" regression is not well-defined. You might be referred to Wasserman's text "All of Nonparametric Statistics", which was the first seminal reference of its kind, attempting to broach the concept. The text wasn't without its issues, and I recall several of the professors in my department being deeply agitated by the material - actual mistakes, not just epistemological disagreements.
In general, to refer to something as "parametric" means that the terms in the regression model index a probability model. In Poisson regression, for instance, it's quite easy to take the design of $X$ and the estimated coefficients, and simulate responses from the results. The same is true of ordinary linear regression when it's treated like maximum likelihood of a normally distributed error term. But linear regression does not actually require normal errors. So, when we perform asymptotic inference, relying on the CLT to give us asymptotically correct CIs for the regression coefficients, we cannot say that linear regression is a parametric routine because our estimates do not, in fact, index a probability model. Whether or not "asymptotic" OLS is semi-parametric or non-parametric was an issue that not even my professors could agree on; but I'm in the non-parametric camp, if we are willing to make minimal assumptions about the existence of first and second moments.
So in my opinion there's nothing fundamentally wrong with writing down what looks like an ordinary least squares model and saying, "This is a non-parametric regression". Recall, a coefficient is only a parameter - necessitating parametric regression - if we claim to believe there's a probability model beneath it - an estimable probability model, that we know to be true, and for which our regression model provides reliable estimates of all the actual components.
In your example, the model description confirms that, while these are panel data, the authors are confident in the robustness and surfeit of data to assure us of the reliability of estimates for what appear to be a large number of fixed effects, whereas the error term has no descriptor other than being "idiosyncratic". One may only hope that this at least means these errors are independent or identically distributed - even if not, OLS can still be motivated, though, I would argue, as a semiparametric estimator.
| null | CC BY-SA 4.0 | null | 2023-04-10T17:15:56.390 | 2023-04-10T19:17:20.517 | 2023-04-10T19:17:20.517 | 8013 | 8013 | null |
612507 | 2 | null | 612161 | 4 | null | Another general comment, from the details section of `?lme4::deviance.merMod`:
## Deviance and log-likelihood of GLMMs:
>
One must be careful when defining the deviance of a GLM. For
example, should the deviance be defined as minus twice the
log-likelihood or does it involve subtracting the deviance for a
saturated model? To distinguish these two possibilities we refer
to absolute deviance (minus twice the log-likelihood) and relative
deviance (relative to a saturated model, e.g. Section 2.3.1 in
McCullagh and Nelder 1989). With GLMMs however, there is an
additional complication involving the distinction between the
likelihood and the conditional likelihood. The latter is the
likelihood obtained by conditioning on the estimates of the
conditional modes of the spherical random effects coefficients,
whereas the likelihood itself (i.e. the unconditional likelihood)
involves integrating out these coefficients. The following table
summarizes how to extract the various types of deviance for a
‘glmerMod’ object:
```
conditional unconditional
relative ‘deviance(object)’ NA in ‘lme4’
absolute ‘object@resp$aic()’ ‘-2*logLik(object)’
```
```
library(lme4)
library(glmmTMB)
m1 <- glmmTMB(count~spp * mined + (1|site), data = Salamanders, family="nbinom2")
m2 <- glmer.nb(count~spp * mined + (1|site), data = Salamanders)
```
- -2*logLik() is the same (up to a small numerical difference) for glmmTMB and lme4 (approx 1631.3)
- deviance(m2) is 501.8 (NULL for m1)
- m2@resp$aic() is 1584.3 (undefined for m1)
For what it's worth, the canonical definition of deviance is what's called "relative" deviance above: people (including me) are sometimes sloppy and call $-2 \log {\cal L}$ (i.e. the "absolute deviance" from above) a deviance, but I believe that's technically incorrect. If we were only interested in differences between deviances rather than ratios this distinction wouldn't matter ...
Since you're not dealing with GLMMs (no random effect in your example) this gets a little bit less messy, but you still have a problem. We can fit the null model via `update(my_model, . ~ 1)`, but that only gets us the values of $-2 \log{\cal L}$ ("absolute") for the full and null models, not the deviances.
| null | CC BY-SA 4.0 | null | 2023-04-10T17:22:53.540 | 2023-04-11T14:50:46.597 | 2023-04-11T14:50:46.597 | 2126 | 2126 | null |
612508 | 1 | null | null | 2 | 16 | For GLMs in the exponential family, we can obtain the standard errors for the regression coefficients as a function of the diagonal of the Fisher information matrix. Does this still hold if the response distribution is not in the exponential family (this is of course technically not a GLM, but I'm not sure if there is a technical name for this kind of model)? For example, beta-binomial or Dirichlet-multinomial? In this case, does it instead become necessary to use the diagonal of the Hessian?
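To make the question concrete, here is a toy sketch (my own example, with a normal likelihood rather than a beta-binomial, just to keep it simple) of what I mean by using the Hessian: standard errors taken from the inverse of the negative Hessian of the log-likelihood at the optimum.

```r
## Toy sketch (my own example): standard errors from the observed Hessian
## of a numerically maximized log-likelihood. A normal model is used only
## to keep the check simple; the recipe does not rely on the exponential
## family.
set.seed(1)
y <- rnorm(200, mean = 5, sd = 2)
negll <- function(par) -sum(dnorm(y, par[1], exp(par[2]), log = TRUE))
fit <- optim(c(0, 0), negll, hessian = TRUE)
se <- sqrt(diag(solve(fit$hessian)))  # observed-information SEs
se[1]  # close to sd(y) / sqrt(200)
```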
| Coefficient standard error for "GLM" not in exponential family | CC BY-SA 4.0 | null | 2023-04-10T17:35:56.693 | 2023-04-10T17:35:56.693 | null | null | 261708 | [
"generalized-linear-model",
"standard-error",
"fisher-information",
"hessian"
] |
612509 | 1 | null | null | 1 | 37 | can you help me please?
In this model, the interpretation of the continuous variable `tmax` for an example would be:
a 1-unit increase in tmax (exp(coef) = 1.06) increases the monthly incidence of the disease by 6%, considering that casos = monthly number of cases of the disease and that the populacao variable is used as an offset representing the population of each city (municipio).
Is this interpretation correct?
>
summary(m1<- glm.nb(casos ~ 0 + municipio + precip_ant + tmax + tmax_ant + umid + umid_ant + enxu_2 + offset(log(populacao)), data = dataset))
```
Call:
glm.nb(formula = casos ~ 0 + municipio + precip_ant + tmax +
tmax_ant + umid + umid_ant + enxu_2 + offset(log(populacao)),
data = dataset, init.theta = 2.105944887, link = log)
Deviance Residuals:
Min 1Q Median 3Q Max
-2.1763 -1.0107 -0.6286 0.3874 4.2286
Coefficients:
Estimate Std. Error z value Pr(>|z|)
municipio6 -2.566e+01 1.698e+00 -15.110 < 2e-16 ***
municipio1 -2.406e+01 1.706e+00 -14.108 < 2e-16 ***
municipio2 -2.424e+01 1.707e+00 -14.205 < 2e-16 ***
municipio3 -2.530e+01 1.696e+00 -14.914 < 2e-16 ***
municipio4 -2.525e+01 1.701e+00 -14.846 < 2e-16 ***
municipio5 -2.524e+01 1.702e+00 -14.829 < 2e-16 ***
precip_ant 1.750e-03 6.414e-04 2.728 0.006373 **
tmax 6.291e-02 1.922e-02 3.273 0.001066 **
tmax_ant 1.600e-01 1.995e-02 8.020 1.06e-15 ***
umid 2.665e-02 1.230e-02 2.166 0.030297 *
umid_ant 5.555e-02 1.454e-02 3.820 0.000134 ***
enxu_2 3.154e-01 2.074e-01 1.521 0.128384
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for Negative Binomial(2.1059) family taken to be 1)
Null deviance: 41285.6 on 1008 degrees of freedom
Residual deviance: 1002.5 on 996 degrees of freedom
AIC: 2702.5
Number of Fisher Scoring iterations: 1
Theta: 2.106
Std. Err.: 0.308
2 x log-likelihood: -2676.499
> exp(coef(m1))
municipio6 municipio1 municipio2 municipio3 municipio4 municipio5 precip_ant tmax tmax_ant
7.171204e-12 3.553106e-11 2.963580e-11 1.033429e-11 1.082516e-11 1.089931e-11 1.001751e+00 1.064932e+00 1.173543e+00
umid umid_ant enxu_2
1.027009e+00 1.057120e+00 1.370790e+00
```
| Interpretation of negative binomial GLM | CC BY-SA 4.0 | null | 2023-04-10T17:45:33.170 | 2023-04-10T21:16:15.640 | null | null | 375122 | [
"regression",
"generalized-linear-model",
"negative-binomial-distribution"
] |
612510 | 1 | null | null | 0 | 40 | I was reading the following link ([https://en.wikipedia.org/wiki/Scoring_algorithm](https://en.wikipedia.org/wiki/Scoring_algorithm)) on the "Fisher Scoring Algorithm". As I understand, the Fisher Scoring Algorithm is similar to the Newton-Raphson Algorithm, but is used more to optimize Likelihood Functions of Statistical and Probabilistic Models.
Here is my understanding of this algorithm:
- Suppose we have observations: $$y_1, y_2, \dots$$
- And suppose these observations have a probability distribution function: $$f(y;\theta)$$
- If we consider the "Score Function" as the first derivative of the log-likelihood function, we can take the First Order Taylor Expansion of the Score Function and write it as follows: $$V(\theta) \approx V(\theta_0) - J(\theta_0)(\theta - \theta_0)$$
- Note that $J(\theta_0)$ is the negative Hessian of the log-likelihood function: $$J(\theta_0) \approx -\sum_{i=1}^n \left(\triangledown_{\theta}^2 \log \left(f(y_i, \theta)\right)\right)$$
- We can then write the Fisher Scoring Algorithm as: $$\theta_{m+1} = \theta_m + J^{-1}(\theta_m)V(\theta_m)$$
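To check my understanding numerically, here is a toy sketch (my own example, not from the article): Fisher scoring for the mean of a Poisson sample, where the score is $V(\theta) = \sum_i (y_i/\theta - 1)$, the expected information is $I(\theta) = n/\theta$, and the MLE is known to be the sample mean.

```r
## Toy sketch (my own example): Fisher scoring for a Poisson mean.
set.seed(1)
y <- rpois(50, lambda = 3)
theta <- 1                     # starting value
for (m in 1:20) {
  V <- sum(y / theta - 1)      # score function
  I <- length(y) / theta       # expected (Fisher) information
  theta <- theta + V / I       # scoring update
}
theta  # equals mean(y); this particular model converges in one step
```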
In this article, the following two proofs are claimed about the Fisher Scoring Algorithm:
- Proof 1: As the number of iterations (i.e. "m") increases, the estimates from the Fisher Scoring Algorithm converge to the estimates that would have been obtained from Maximum Likelihood Estimation. As I understand, this is important for the following reason: Suppose you have some complicated Likelihood Function and have difficulty solving the resulting system of equations (e.g. multidimensional, non-linear, etc.) - then, the results of this proof would permit you to indirectly obtain estimates "close" to the estimates that you would have obtained via Maximum Likelihood Estimation (Note: Estimates obtained via MLE are "desirable" as these estimates have useful properties such as Unbiasedness, Consistency, Asymptotic Normality, etc.). In mathematical notation, this proof can be written like this:
$$\lim_{m\rightarrow\infty} \theta_m = \hat{\theta}_{MLE}$$
- Proof 2: To reduce the computational complexity of the Fisher Scoring Algorithm, we often replace $J(\theta)$ with its "Expected Value" - we call this $I(\theta)$:
$$\theta_{m+1} = \theta_m + I^{-1}(\theta_m)V(\theta_m)$$
Given this information, the estimates produced from the Fisher Scoring Algorithm (after many iterations) are expected to have the same asymptotic distribution properties as the true estimates under Maximum Likelihood Estimation. As I understand, this result is important because it allows statistical inferences made using the results of the Fisher Scoring Algorithm to have similar properties as statistical inferences made using estimates from MLE. In mathematical notation, this proof can be written like this:
$$\sqrt{n}(\theta_{m+1} - \theta_{MLE}) \stackrel{d}{\rightarrow} N(0, I^{-1}(\theta_{MLE}))$$
My Question: I am trying to understand why the ideas captured within Proof 1 and Proof 2 are true.
When looking online, I found different references on these topics - but none of these references explicitly explained why these two proofs are true. Can someone please help me understand why these two proofs are true?
Thanks!
| Why Does the Fisher Scoring Algorithm "Work"? | CC BY-SA 4.0 | null | 2023-04-10T18:09:19.743 | 2023-04-10T18:09:19.743 | null | null | 77179 | [
"probability",
"distributions",
"normal-distribution",
"variance",
"maximum-likelihood"
] |
612511 | 2 | null | 612269 | 0 | null | There is one way that this might have been thought by the authors to make sense. But I think that you are correct.
Their argument might be that the diagnosis of Phase 1 came 2 years after Phase 1 actually started, so that the expected remaining length of Phase 1 is $5-2=3$ years.
The problem with that argument is the memoryless nature of the assumed exponential survival function. For an exponential survival function, the mean residual life conditional upon survival to any time after 0 is always equal to the expected survival from time 0. That supports your argument that the remaining expected duration of Phase 1 for that individual should be 5 years, and that the extension of Phase 1 by the drug would be 3 times that. It's possible that the authors had some other model for the effect of the drug in mind that would support their argument, but that's not clear from what you quote.
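As a numerical sketch of that memoryless property (my own illustration, using the 5-year mean from the question): the mean residual life at 2 years equals the unconditional mean.

```r
## Sketch (my own illustration): for an exponential with mean 5 years,
## the mean residual life conditional on surviving to t0 = 2 is still 5.
rate <- 1 / 5
t0 <- 2
mrl <- integrate(function(t) (t - t0) * dexp(t, rate),
                 lower = t0, upper = Inf)$value /
  pexp(t0, rate, lower.tail = FALSE)
mrl  # 5
```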
It looks like the [book you cited](https://leanpub.com/biostatmethods) is essentially self-published, and thus might not have undergone the editorial review of a traditionally published text. This seems to be an error, perhaps made without properly thinking through the implications, that might have been caught during a traditional editorial process.
In this case, it would make sense to address your question directly to the authors. If they provide a compelling argument to support what you quote, please provide that as another answer to this question. (It's OK to provide and accept your own answer to your question on this site.)
| null | CC BY-SA 4.0 | null | 2023-04-10T18:14:18.133 | 2023-04-10T18:14:18.133 | null | null | 28500 | null |
612513 | 1 | null | null | 1 | 43 | Let $p$ be a positive integer and suppose that each observation in my data set is a length-$p$ multivariate normal vector, and I have $n$ (an integer) observations of the length-$p$ multivariate normal vector. So
$$
\vec{Y} = \beta_0 + \beta_1 \vec{X}_{1} + \cdots + \beta_k \vec{X}_{k} + \vec{\epsilon},
$$
with $\vec{\epsilon} \sim N_p(\vec{0}, \Sigma) $, $\Sigma$ is a covariance matrix of an observation-vector, $\beta_i \in \mathbb{R}$ (for $i \in \{0,1,\cdots,k\}$) and $X_i \in \mathbb{R}^p$. I am in a situation where this model looks relevant to my problem, but I have never been taught how to generalize the usual regression model into one where each observation is itself a vector of size $p>1$.
Is this called multivariate multiple regression? How can I find literature for it? If I look up multivariate or multidimensional linear regression, I only get material on the multiple linear regression model (the case where $p=1$).
| Multidimensional linear regression (not multiple linear regression) | CC BY-SA 4.0 | null | 2023-04-10T18:31:03.707 | 2023-04-11T12:54:03.043 | 2023-04-10T19:32:51.177 | 124155 | 124155 | [
"regression",
"multiple-regression",
"references",
"multivariate-analysis",
"linear-model"
] |
612514 | 1 | 612518 | null | 4 | 63 | I see it is often quoted that the omitted variable bias formula is
$$
\text{Bias}\left(\widehat{\beta_1}\right) = \beta_2 \cdot \text{Corr}\left(X_2,X_1\right)
$$
where $\widehat{\beta_1}$ is the estimated coefficient in the biased model, $\beta_2$ is the true coefficient of the omitted variable $X_2$ in the full model.
I am wondering how this is derived generally. Thanks.
| How is the omitted variable bias formula derived? | CC BY-SA 4.0 | null | 2023-04-10T19:06:54.263 | 2023-04-10T19:51:37.053 | null | null | 108150 | [
"regression",
"omitted-variable-bias"
] |
612515 | 2 | null | 446392 | 0 | null | The weights are sampled from a zero-centered distribution (e.g., Uniform, Normal), but the outputs of the activations are not zero-centered - and because of that they need some modification from the Glorot init to keep the variance the same across layers.
The whole point of ReLU's is that some outputs will be zeroed. So, I wouldn't want to put weights with a big positive mean (e.g., a 100), because then the network will have to spend many rounds training/learning to zero out the neurons that need to be zeroed out. Also remember that the ReLU outputs are always positive, so the weights must learn to become negative in that case (except for the 1st layer which interacts with the input data itself). Zero-centered sounds the most logical to me.
| null | CC BY-SA 4.0 | null | 2023-04-10T19:15:11.107 | 2023-04-10T19:15:11.107 | null | null | 117705 | null |
612516 | 1 | null | null | 0 | 26 | Scenario: I have data comparing the number of tree stems in 30 forest plots between two sampling years (1992 and 2012). Each plot experienced hurricane damage between these 2 sampling years (in 1996) -- this damage was coded as being 0-100% of trees felled/damaged.
Interest: my ultimate hypothesis I'm trying to investigate is that the number of stems did not increase with time except in plots with greatest hurricane damage. (So, I'd like to know the effect of the hurricane on stem counts between plots while accounting for changes in time).
Data: designed as follows:
```
Plot Year HurrDam Count
1 1992 ??? 11
1 2012 30 115
2 1992 ??? 22
2 2012 60 381
....
```
I've placed question marks (`???`) in the above example because I'm not sure how to best enter (and therefore analyze) my data.
- Technically, all plots had 0% hurricane damage in 1992 because those samples were taken before the hurricane, so providing a value here seems somewhat artificial.
- One option is to replace the ??? with `0` for all data rows from 1992.
- Alternatively, my other thought was to treat the hurricane damage of any given plot as an unchanging characteristic of that plot overall -- i.e., regardless of year of sample. Under this scenario, the ??? would be replaced not with 0 but with the HurrDam value from 2012. So in my example data above, the ??? would be replaced with 30 and 60, respectively.
The result would be that HurrDam would be identical for both samples of any given plot although such a value only really applies to the latter sampling period for each plot in real life.
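For what it's worth, the second option can be implemented on the toy rows above with base R's `ave()` (a sketch; the carry-back logic is my own):

```r
## Sketch of the second option: carry each plot's 2012 HurrDam value back
## to its 1992 row, treating damage as a fixed plot-level attribute.
d <- data.frame(Plot    = c(1, 1, 2, 2),
                Year    = c(1992, 2012, 1992, 2012),
                HurrDam = c(NA, 30, NA, 60),
                Count   = c(11, 115, 22, 381))
d$HurrDam <- ave(d$HurrDam, d$Plot, FUN = function(x) x[!is.na(x)][1])
d  # both rows of each plot now share the 2012 damage value
```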
Which of these approaches is more appropriate for analyzing this data using linear regression (i.e., `lm()` in R)?
I feel like making `HurrDam` = 0 for all 1992 data creates a strong temporally-structured trend between years (which I'm not interested in investigating when it comes to hurricane damage -- In fact, this is the whole point of including `Year` as its own variable: I want to tease the effects of hurricane damage and simple passage of time apart).
- I could make HurrDam = 0 for all 1992 samples, then eliminate Year as a variable from my model, and instead just rely on the differences in HurrDam between years to account for this change, but this is problematic because 1) it ignores the repeated-measures structure of my data and 2) I feel like it accentuates the differences in HurrDam between years when I'm really only interested in knowing the effects of differences in HurrDam between plots in the latter year (while, again, simply accounting for any changes due to the passage of time across plots).
I also noticed that if I want to add an interaction term between Year and HurrDam in my ultimate linear model, the values for that interaction term become `NA` if I zero out the 1992 data.
Any suggestions/insights would be appreciated!
| How best to code longitudinal data and design a regression with IV that only applies to later time point? | CC BY-SA 4.0 | null | 2023-04-10T19:17:47.813 | 2023-04-11T12:20:07.773 | 2023-04-10T20:11:42.763 | 80624 | 80624 | [
"r",
"regression",
"time-series",
"multiple-regression"
] |
612517 | 1 | 612551 | null | 3 | 317 | After using inverse probability of treatment weighting (IPTW) on the variables of my dataset, there is still an imbalance in one covariate between the two groups. My outcome is binary (yes/no) and it is not a longitudinal study.
One example is:
```
library(WeightIt)
W.out <- weightit(treat ~ age + married + race,
data = lalonde, estimand = "ATE", method = "ps")
bal.tab(W.out, threshold=0.1)
```
Age is not balanced.
- How can I make all the variables balanced? Is it possible to "re-weight"? How?
- Is it possible to apply "entropy balancing" directly instead of IPTW in this case? Can somebody explain entropy balancing to me? I tried reading the original paper (here) but didn't understand it well. How is entropy balance computed? Can it always be used under the same conditions as IPTW, or are there particular conditions?
- If entropy balancing is able to adjust with standardized differences of almost 0, then why is it so little used in the medical field?
- I noticed that in some papers there is the cohort after 1st weighting, then 2nd weighting, etc. Can someone explain how you obtain this? How many weightings do you have to do?
For instance, if I want to use this code:
```
W.out <- weightit(treat ~ age + married + race,
data = lalonde, estimand = "ATE", method = "ebal")
```
What are the parameters that I have to set and that I have to pay attention for in order to know that I applied the method correctly? Is there a way to visualize the scores from which the weights were obtained from? as in the case of IPTW (`W.out$ps`)
| Unbalanced variables after IPTW - entropy balancing? | CC BY-SA 4.0 | null | 2023-04-10T19:43:34.073 | 2023-04-11T05:26:35.420 | 2023-04-10T23:58:11.747 | 384938 | 384938 | [
"r",
"propensity-scores",
"treatment-effect",
"weighted-data"
] |
612518 | 2 | null | 612514 | 5 | null | Here's an example of how one analyzes such a situation. Suppose that the model
$$E[Y\mid X_1,X_2] = \beta_1 X_1 + \beta_2 X_2$$
holds where $X_1,$ $X_2,$ and $Y$ are random $n$-vectors. If you omit the second variable and use the wrong model (omitting the $X_2$ variable)
$$E[Y\mid X] = \gamma_1 X_1,$$
we may ask what error you might expect in using an estimate of $\gamma_1$ to estimate $\beta_1.$ When you use ordinary least squares regression to estimate $\gamma_1,$ the formula is
$$\hat\gamma_1 = \frac{Y\cdot X_1}{X_1\cdot X_1}.$$
Using the correct model formulation you may compute
$$E[\hat\gamma_1\mid X_1, X_2] = E\left[\frac{Y\cdot X_1}{X_1\cdot X_1}\mid X_1,X_2\right] = E\left[\frac{(\beta_1 X_1 + \beta_2 X_2)\cdot X_1}{X_1\cdot X_1}\mid X_1,X_2\right].$$
Basic properties of expectation (linearity) and conditional expectation (taking out what is known) allow you to simplify the right hand side to
$$E[\hat\gamma_1\mid X_1, X_2] = \beta_1 + \beta_2 \frac{X_2\cdot X_1}{X_1\cdot X_1}.$$
By definition, the (conditional) bias in an estimate is the difference between its expectation and estimand,
>
$$\text{bias} = E[\hat\gamma_1\mid X_1,X_2] - \beta_1 = \beta_2 \frac{X_2\cdot X_1}{X_1\cdot X_1}.$$
That is a general formula, applicable for fixed $X_i$ or for random $X_i$ where $X_1\cdot X_1$ is almost surely nonzero. Already the result is helpful, because it implies that when $X_1$ is orthogonal to $X_2$ (which is just a way of saying the numerator is zero), the bias is zero; and otherwise it shows that the bias is nonzero and it gives you information about its sign and magnitude.
If, additionally, you arrange for the $X_i$ to be standardized (which means their components sum to zero and the squares of their components sum to unity), the fraction on the right could be called the "correlation" of $X_1$ and $X_2,$ understanding this term to be a shorthand for
$$\operatorname{correlation}(X_1,X_2) = \frac{\sum_{i=1}^n X_{1i}X_{2i}}{\sum_{i=1}^n X_{1i}X_{1i}} = \frac{\sum_{i=1}^n X_{1i}X_{2i}}{1} = \sum_{i=1}^n X_{1i}X_{2i},$$
which is the Pearson correlation of standardized vectors. But please note that this is not the correlation in the sense that $X_1$ and $X_2$ might be random vectors: the bias was computed conditionally and still depends on $X_1$ and $X_2.$ If, for instance, each $X_i$ were an iid sequence of random values drawn from a bivariate distribution, that underlying distribution can have a correlation but it's unlikely to equal the value computed in the bias formula.
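A quick numerical check of this result (an illustrative simulation; the coefficients and the dependence between the $X_i$ are arbitrary choices of mine):

```r
## Simulation check of the bias formula (all numbers are arbitrary):
## with beta1 = 2 and beta2 = 3, the short regression's estimate of
## beta1 is off by beta2 * (X2.X1) / (X1.X1), up to noise.
set.seed(42)
n  <- 1e5
x1 <- rnorm(n)
x2 <- 0.6 * x1 + rnorm(n)          # x2 correlated with x1
y  <- 2 * x1 + 3 * x2 + rnorm(n)
gamma1_hat <- sum(y * x1) / sum(x1 * x1)   # OLS omitting x2
bias       <- 3 * sum(x2 * x1) / sum(x1 * x1)
c(gamma1_hat - 2, bias)  # agree up to noise of order 1/sqrt(n)
```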
| null | CC BY-SA 4.0 | null | 2023-04-10T19:51:37.053 | 2023-04-10T19:51:37.053 | null | null | 919 | null |
612519 | 1 | null | null | 1 | 34 | I'm working on getting a read out of a Logistic regression classification model (setup in Python via Scikit-learn's LogisticRegression() wrapped in a OneVsRestClassifier()). I got the confusion matrix running pretty quick, and after a decent amount of effort I got the PR Curve (with a lot of help from [https://stackoverflow.com/questions/29656550/how-to-plot-pr-curve-over-10-folds-of-cross-validation-in-scikit-learn](https://stackoverflow.com/questions/29656550/how-to-plot-pr-curve-over-10-folds-of-cross-validation-in-scikit-learn))
The algorithm consists of doing StratifiedKFold, balancing across the 6 classes present, and doing leave-one-out cross-validation. My test set has a single example of each label in the X and y that gets fed in. The confusion matrix is generated based on that, and then I use clf.predict(X_test) to generate y probabilities. I separate them into independent lists per label, then I use 'precision_recall_curve' to calculate precision and recall on the combined list per class, then the list containing everything.
Below are the confusion matrix and PR curves I've generated. I don't understand how class 2, for instance, has a seemingly perfect classification on the confusion matrix while having a near 0.5 AUC. I'm definitely only using the testing data to calculate both. Any ideas?
[](https://i.stack.imgur.com/PXQqt.png)
[](https://i.stack.imgur.com/cShtL.png)
| Surprising disparity between Confusion matrix values and AUC? | CC BY-SA 4.0 | null | 2023-04-10T20:30:20.417 | 2023-04-10T21:04:43.360 | null | null | 245715 | [
"probability",
"auc",
"confusion-matrix"
] |
612520 | 2 | null | 612455 | 1 | null | I'd recommend using the Temporal BetaDiversity Index (TBI) from Legendre available at the `R` package `adespatial`. You would only need a species composition matrix replicated in time. The `stimodel` function of `adespatial` could also be useful for testing interaction between time and space in species composition.
| null | CC BY-SA 4.0 | null | 2023-04-10T20:44:21.113 | 2023-04-10T20:44:21.113 | null | null | 103642 | null |
612521 | 2 | null | 583230 | 0 | null | Normally all your $\beta$ values are scaled by the constant scale parameter $\lambda$, but when you know that you want to make comparisons across differing groups of individuals the scaling of $\lambda$ for each value becomes a problem.
Take a look at [Train's paper](https://eml.berkeley.edu/%7Etrain/scale.pdf) on the role of scale heterogeneity for a detailed discussion in relation to MNL models.
| null | CC BY-SA 4.0 | null | 2023-04-10T20:53:37.007 | 2023-04-10T20:53:37.007 | null | null | 385393 | null |
612523 | 2 | null | 612519 | 0 | null | Precision-recall curves are not ROC curves: they have a different interpretation, and their chance level is not at 0.5. They are also sensitive to class imbalance. A model that only predicts the majority class of an imbalanced dataset can have high accuracy, yet its precision-recall curve will be bad.
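A toy sketch with made-up data: for completely uninformative (random) scores, precision tracks the positive-class prevalence rather than 0.5.

```r
## Toy sketch (made-up data): with random scores, precision at any
## threshold hovers around the prevalence (0.1 here), not around 0.5.
set.seed(1)
y <- rbinom(1e4, 1, 0.1)     # 10% positives
scores <- runif(1e4)         # uninformative classifier scores
precision <- mean(y[scores > 0.5])
precision  # about 0.1, the prevalence
```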
| null | CC BY-SA 4.0 | null | 2023-04-10T21:04:43.360 | 2023-04-10T21:04:43.360 | null | null | 53084 | null |
612524 | 2 | null | 612509 | 1 | null | I think it is mostly correct, yes.
In a Negative Binomial (NB) regression model with no offset, the `6.291e-02` coefficient would represent the change in the log of the expected count of the monthly number of cases of the disease for a one-unit increase in the corresponding predictor variable `tmax`, while holding all other predictor variables constant. i.e. the expected count would be multiplied by ($\exp(6.291\text{E-02})$=) 1.06 for a one-unit increase in `tmax`.
Because though our NB regression model has an offset this increase is against the expected rate. The post uses the term: "incidence of disease" which I think is somewhat open to interpretation, it is probably to explicitly say: "incidence rate of disease" but aside from that the interpretation is good to go.
| null | CC BY-SA 4.0 | null | 2023-04-10T21:16:15.640 | 2023-04-10T21:16:15.640 | null | null | 11852 | null |
612525 | 2 | null | 134380 | 0 | null | Median unbiased estimates can be used to estimate sample proportions and (non-singular) 95% CIs in Bernoulli samples with no variability. In a sample with no positive cases, you can estimate the upper bound of a 95% confidence interval with the following formula:
$$ p_{1-\alpha/2} : \tfrac{1}{2}P(Y=0) + P(Y>0) = 0.975$$
that is, we seek a value $p_{1-\alpha/2}$ as the upper bound of the CI so that the Bernoulli process with probability $p=p_{1-\alpha/2}$ satisfies the above equation. In R this is solved with an NR-like `uniroot` application.
```
set.seed(12345)
y <- rbinom(100, 1, 0.01) ## all 0
cil <- 0
mupfun <- function(p) {
  0.5 * dbinom(0, 100, p) +
    pbinom(0, 100, p, lower.tail = FALSE) -
    0.975
} ## for y=0 successes out of n=100 trials
ciu <- uniroot(mupfun, c(0, 1), tol = 1e-9)$root
c(cil, ciu)
[1] 0.00000000 0.02951305 ## includes the 0.01 actual probability
```
| null | CC BY-SA 4.0 | null | 2023-04-10T21:16:58.950 | 2023-04-10T21:16:58.950 | null | null | 8013 | null |
612526 | 1 | null | null | 0 | 19 | Im comparing two multidimensional MDS solutions, the solutions have the same number of dimensions. I don't think I can use the permutation version of procrustes analysis (commonly, PROTEST in R::vegan) because I doubt my two sets are exchangeable. I read that the sets must have similar covariance matrices to be exchangeable, which they don't. I can also not motivate exchangeability from a design perspective, as it is not experimental, rather it is a study comparing human and model-based assessment in lower dimensional space.
My endeavor turned towards bootstrapping as it has fewer restrictions, however now I doubt I can use that as it seems to sample individual rows multiple times generates an incredible bias (procrustes of original datasets estimate of correlation is just half of the estimate from the bootstrap)
Any ideas? I'm wondering if I have to take another path entirely or if it is possible to run permutated procrustes and I'm just not knowledgeable enough.
BR,
Eric
| procrustes alternative | CC BY-SA 4.0 | null | 2023-04-10T21:24:48.253 | 2023-04-10T21:24:48.253 | null | null | 379186 | [
"bootstrap",
"permutation-test",
"multidimensional-scaling",
"exchangeability",
"procrustes-analysis"
] |
612527 | 1 | null | null | 0 | 26 | Good afternoon,
I am trying to convert annual reported rates to daily probabilities. However, I have annual rates that only occur during certain months of the year. For example, I have an annual mortality rate of 0.72 and this rate needs to be converted to a daily probability of mortality across only 4 months (120 days).
Is this equation simply: 1 - [(1-Annual Rate)^(1/120)] ?
And/or the equivalent: 1 - exp((1/120) log(1 - Annual Rate)) ?
The '120' would normally be 365 to convert an annual rate to a daily probability. Can I simply exchange '365' for the relevant timespan (i.e., number of days) for each rate?
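For what it's worth, the 120-day version round-trips numerically (compounding the daily probability over 120 days recovers the annual rate):

```r
## Round-trip check: the daily probability q recovers the annual
## rate r when compounded over the 120-day window.
r <- 0.72
q <- 1 - (1 - r)^(1 / 120)
1 - (1 - q)^120  # 0.72
```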
Thank you in advance.
| Converting annual rates to daily probability | CC BY-SA 4.0 | null | 2023-04-10T21:36:27.900 | 2023-04-10T21:36:27.900 | null | null | 385396 | [
"probability",
"conditional-probability"
] |
612528 | 1 | null | null | 0 | 9 | I am looking for the right approach to do a sample size calculation for the positive predictive value in a cross-sectional study. From external sources, I know the prevalence in the population of interest. I also have an estimate of the sensitivity and specificity of the test. My question is now: How many subjects do I have to include in the study in order to show that my positive predictive value is above some value, with a confidence level of 95% and a power of 80%?
The literature I found so far is on the more complex case of a case-controlled study, such as [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3668447/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3668447/). That paper seems to refer to Pepe, The Statistical Evaluation of Medical Tests for Classification and Prediction (2003) for my case, but I do not have access to it. I'd be grateful if anyone could share the applicable formulas for this sample size calculation!
| Sample size calculation for positive predictive value of test in cross-sectional study | CC BY-SA 4.0 | null | 2023-04-10T21:39:29.430 | 2023-04-10T21:39:29.430 | null | null | 385397 | [
"sample-size"
] |
612529 | 1 | null | null | 0 | 36 | I am conducting a regression analysis on TNF-a in relation to a genetic marker. I performed a post-hoc power calculation using a software called Quanto. The mean(SD) of my outcome variable is 34(14). The Minor Allele Frequency of the genetic marker is 0.102. The Beta coefficient of the predictor from the regression model is 2.852 and my sample size is 799
The power calculation result shows that the minimum detectable effect size (Beta) for a sample size of 781 is 3.271, while the observed effect size (Beta) from the regression model is 2.852. What can I conclude from these results? If the observed effect size is lower than the expected effect size, does it mean that the study is underpowered?
| Power calculation: What does it mean if the observed effect size is lesser than the expected effect size in power calculation? | CC BY-SA 4.0 | null | 2023-04-10T21:41:48.583 | 2023-04-10T21:41:48.583 | null | null | 20213 | [
"statistical-power",
"genetics"
] |
612531 | 1 | null | null | 0 | 28 | Suppose I have a conditional (or any) distribution as so:
$$
p(A \mid B,C,D)
$$
and in the text, I want to refer to the variables $\{A,B,C,D\}$ associated with that density (or mass). Is there a formal name or notation for this set? What is the formal way of referring to it? Is there a commonly used symbol?
It is not the support of the distribution as far as I understand the definition of support, so what would it be called?
Thanks
| Terminology / notation for the set of variables associated with a distribution | CC BY-SA 4.0 | null | 2023-04-10T21:54:46.120 | 2023-04-10T21:54:46.120 | null | null | 37280 | [
"probability",
"distributions",
"conditional-probability",
"notation"
] |
612532 | 1 | null | null | 0 | 8 | I am trying to figure out if/how I can understand if animals are choosing particular plant species for foraging or just using them based on their availability. To do this, I need to characterize the availability of multiple plant species in plots. I have many plots, each one had 8 quadrats (roughly 5% of total plot area), and individuals of each plant species were counted per quadrat. It was not possible to estimate the abundance of plant species in the whole plot, but the plot-scale is relevant for the foraging animals. Observers also walked through the whole plot and recorded presence of any other species that was not detected in the quadrats to gain an overall species list. Now I would like to characterize the relative frequency or abundance of each species at the plot-level to test how their availability compares with how they were used by foraging animals. I have found that the animals often (43% of the time) foraged on species that were not observed in quadrats, and therefore, lack abundance estimates. How can I estimate the availability of those "outsiders"? Here are some ways I have considered:
- Arbitrarily assign them a low number, assuming that if they were not detected in our quadrats, they were rare. This may bias my results toward preference for these species, especially if they actually are not rare.
- Throw out observations of animals foraging on these species since I don't have a way of estimating abundance. This may bias my results by removing observations of foraging on (maybe locally) rare species, when there may be a preference there. Also that is a lot of data :'(
- Some elegant way someone here will suggest for estimating the relative abundance of these outsiders? I keep thinking of detection probabilities and how we might use estimates from other plots to work this out, but I am not sure how.
- Find another way to assess the importance of particular plant species to these animals, given that the sampling design for plants clearly missed a lot of species. Maybe a logistic model of plants that were used, and covariates...although relative abundance seems important here too...
My inclination is to go with #4, but I wondered if anyone had a better idea?
Thanks!
| Combining multiple observations across quadrats to estimate relative frequency: how do I account for "unobserved" species? | CC BY-SA 4.0 | null | 2023-04-10T22:21:16.827 | 2023-04-10T22:21:16.827 | null | null | 317031 | [
"chi-squared-test",
"experiment-design",
"ecology"
] |
612534 | 1 | null | null | 0 | 24 | I am analyzing a dataset with two factors, one at three levels and the other at seven, to check how they influence my response variable. However, when testing the ANOVA assumptions, the data turn out to follow a normal distribution but to be heteroscedastic. I know that for heteroscedastic one-way ANOVA there is the Welch test, but I did not find any alternative for multifactor ANOVA.
 | How to perform a multifactor ANOVA with heteroscedastic data | CC BY-SA 4.0 | null | 2023-04-10T22:07:57.967 | 2023-04-12T13:16:43.840 | 2023-04-12T13:16:43.840 | 11887 | 360170 | [
"r",
"anova",
"heteroscedasticity",
"manova"
] |
612535 | 1 | 612538 | null | 5 | 264 | When fitting a Poisson regression on data with low expected values, the intercept term has a small bias even when the model is perfectly specified. Below, I simulated data using $y \sim \mathrm{Poisson}(\exp(\beta_0))$ and then fit the intercept-only glm model $\log(E[y]) = \beta_0$. On average, the estimates are slightly biased downwards. The bias is small, but I would like to understand why this happens.
I could understand why this would happen if $\beta_0$ was a large negative number and the data were mostly zeros, but the data from the $\beta_0$ values I chose is always mostly non-zero. Why would this happen?
```
# function to run the simulation for one set of beta values
run_sim <- function(b0, n = 50, R = 10000){
# simulate y values and then estimate
  b0_estimates <- sapply(1:R, function(i){
    # one simulated dataset per replication
    tmp = data.frame(y = rpois(n, exp(b0)))
mod_col <- glm('y ~ 1', data = tmp, family=poisson)
b0_hat <- mod_col$coefficients[1]
return(b0_hat)
})
# get the bias
mean_bias = mean(b0_estimates) - b0
return(mean_bias)
}
# simulate for beta0 values ranging from 1 to 10
b0_vec = 1:10
bias_vec = sapply(b0_vec, function(b0){
run_sim(b0, R = 10000)
})
# plot the results
plot(b0_vec, bias_vec, xlab = 'true b0', ylab = 'b0 bias')
```
[](https://i.stack.imgur.com/pFglI.png)
| Poisson regression intercept downward bias when true intercepts are small | CC BY-SA 4.0 | null | 2023-04-10T22:45:23.587 | 2023-04-10T22:57:16.653 | null | null | 385399 | [
"bias",
"poisson-regression",
"intercept"
] |
612536 | 1 | null | null | 2 | 20 | We are using a mixed-effects model to assess the potential impact of different treatments (categorical variables) on a specific soil characteristic (numerical variable). The study uses a randomized complete block design with plots nested in blocks.
Longitudinal data were collected and initial assessments of the raw data indicated temporal autocorrelation. There were 12 total samples taken during each observation, with three treatments represented within four blocks. For example, data collected on date 1 were similar in format to:
[](https://i.stack.imgur.com/3cwr6.png)
The treatments were considered the fixed effects and time and block (both factors) were considered random effects.
The basic model being used is:
```
M1 <- lmer(y ~ Trt1 + (1|Block) + (1|Date), data)
```
I have reviewed the results of model and reviewed the residual plots. To look at the residuals, I used the `compute_resid()` function to calculate marginal and conditional residuals and applied the `acf()` function. However, since we have multiple observations per date, I am not sure the best way to handle the residuals.
I also tried using `lme()` with a similar approach but specifying the correlation. However, it did not appear to improve the outcome.
My primary questions are:
- Should time (monthly data in this case) be considered a fixed or random effect?
- How should I assess the residuals (as well as model performance in general)? When I average the residuals by time (I'm not sure this is the best approach), strong autocorrelation is indicated when time is included as a random effect but the autocorrelation is not present in the residuals when time is included as a fixed effect.
- What is the best type of residual to use in this case (e.g., conditional vs. marginal)?
| Interpretation of results from a mixed-effects model with a nested design and autocorrelation | CC BY-SA 4.0 | null | 2023-04-10T22:51:08.910 | 2023-04-12T22:31:09.407 | 2023-04-12T22:31:09.407 | 246835 | 384694 | [
"mixed-model",
"panel-data",
"multilevel-analysis",
"autocorrelation",
"nested-data"
] |
612538 | 2 | null | 612535 | 11 | null | The score function is exactly unbiased
$$E_{\beta_0}[\sum_i x_i(y_i-\mu_i)]=0$$
In your case that simplifies to
$$E_{\beta_0}[\sum y_i-\exp\beta_0]=0$$
The parameter estimate is a non-linear function of the score, so that tells us it won't be exactly unbiased.
Can we work out the direction of the bias? Well, the mean of $Y$ is $\exp \beta_0$, so $\beta_0=\log EY$ and $\hat\beta=\log \bar Y$. The logarithm function is concave, and $E[\bar Y]=E[Y]=\exp\beta_0$ so we can use Jensen's inequality to see that the bias is downward. (Or draw a picture, like the one [here](https://stats.stackexchange.com/questions/489912/showing-bias-of-mle-for-exponential-distribution-is-frac-lambdan-1) only the other way up)
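A quick simulation of that Jensen gap (the magnitude of the bias depends on $n$ and $\beta_0$; the values here match the question's setup):

```
set.seed(1)
b0 <- 1; n <- 50; R <- 20000
# the intercept-only Poisson MLE is log(mean(y)); replicate it R times
ybar <- colMeans(matrix(rpois(n * R, exp(b0)), nrow = n))
bias <- mean(log(ybar)) - b0
bias  # small and negative, as Jensen's inequality predicts
```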
| null | CC BY-SA 4.0 | null | 2023-04-10T22:57:16.653 | 2023-04-10T22:57:16.653 | null | null | 249135 | null |
612539 | 1 | null | null | 1 | 48 | Let's suppose I perform two separate logistic regression models in two different subgroups of my dataset.
```
glm(death ~ age + ..... , data = female, family="binomial") #female population
glm(death ~ age + ..... , data = male, family="binomial") #male population
```
From these, I obtain an OR for age in the female group and one for the male group (numbers are just examples):
- OR age in the female group: 1.88 (0.41-2.89); p>0.05
- OR age in the male group: 1.45 (1.20-1.78); p<0.05
P interaction: 0.3
So the male OR is significant while the female one is not. However, when I perform the interaction test, the p-value is >0.05. How do I interpret this? If it means there is no difference between the two ORs, why is one significant and the other not? And if there is no difference, is the "true" OR < or > 1?
| Interaction test non significant | CC BY-SA 4.0 | null | 2023-04-10T22:59:42.400 | 2023-04-13T01:49:59.710 | 2023-04-11T00:02:03.423 | 384938 | 384938 | [
"r",
"logistic",
"interaction",
"odds-ratio"
] |
612540 | 2 | null | 612496 | 8 | null | What is equivalent to OLS is a weighted mean pairwise slope. Suppose you want to fit $Y=\alpha+\beta X$ and you have $n(n-1)$ pairs $(x_i,y_i,x_j,y_j)$ with slopes $\beta_{ij}$. The information about $\beta$ in a pair is proportional to $(x_i-x_j)^2$, and if you write $w_{ij}=(x_i-x_j)^2$, the OLS estimator is
$$\hat\beta_{OLS}= \frac{\sum_{i\neq j} w_{ij}\beta_{ij}}{\sum_{i\neq j} w_{ij}}$$
(where you use the convention $w_{ij}\beta_{ij}=0$ if $x_i=x_j$)
The proof involves relating the numerator to the U-statistic estimator of covariance
$$\mathrm{cov}[X,Y]=\frac{1}{n(n-1)}\sum_{i\neq j} (x_i-x_j)(y_i-y_j)$$ and
the denominator to the U-statistic estimator of the variance
$$\mathrm{var}[X]=\frac{1}{n(n-1)}\sum_{i\neq j} (x_i-x_j)(x_i-x_j).$$
(To generalise to two predictors you take triples of points and so on. The algebra becomes more tedious and you need to know some formulas for determinants.)
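A quick numerical check of this identity on made-up data (hypothetical values):

```
set.seed(42)
n <- 30
x <- rnorm(n)
y <- 2 + 3 * x + rnorm(n)
# all ordered pairs (i, j): dx[i, j] = x_i - x_j, dy[i, j] = y_i - y_j
dx <- outer(x, x, "-")
dy <- outer(y, y, "-")
# weighted mean pairwise slope: w_ij = dx^2 and w_ij * beta_ij = dx * dy,
# so pairs with x_i = x_j (including i = j) contribute 0 to both sums
beta_pairwise <- sum(dx * dy) / sum(dx^2)
beta_ols <- unname(coef(lm(y ~ x))[2])
all.equal(beta_pairwise, beta_ols)  # TRUE
```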
| null | CC BY-SA 4.0 | null | 2023-04-10T23:12:51.703 | 2023-04-12T02:08:35.413 | 2023-04-12T02:08:35.413 | 2126 | 249135 | null |
612541 | 2 | null | 478982 | 0 | null |
#### Does not matter to whom and for what?
The number of possible ordered samples of $n$ items from a population of $N$ items is indeed $N!/(N-n)!$, and the number of possible unordered samples of $n$ items from a population of $N$ items is ${N \choose n}$. Both of these are legitimate counts of the number of possible outcomes of a certain aspect of the sample. As to whether or not the order matters, that begs the question: matters to whom and for what?
When we sample via simple-random-sampling without replacement, this means that samples containing the same items (in any order) are equally likely, which means that we will typically use procedures on the sample that are invariant to their order. If we use the sample in ways that are order invariant, then the order doesn't matter to the way in which we use the sample. In such cases, it is quite natural for us to ask the number of possible unordered samples we can get and to perform probability calculations on this basis. Contrarily, if we use the sample in ways that are not order invariant, then the order matters to the way in which we use the sample, and we would therefore have to consider order. Even in the latter case, we know that the probability of the unordered sample is $n!$ times the probability of any ordered sample of the same items, so the conversion is quite simple.
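A tiny numeric sketch of these counts, with $N = 5$ and $n = 2$:

```
N <- 5; n <- 2
ordered   <- factorial(N) / factorial(N - n)  # 5 * 4 = 20 ordered samples
unordered <- choose(N, n)                     # 10 unordered samples
ordered / unordered                           # factorial(n) = 2 orderings each
```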
| null | CC BY-SA 4.0 | null | 2023-04-10T23:35:55.530 | 2023-04-10T23:35:55.530 | null | null | 173082 | null |
612542 | 1 | null | null | 3 | 58 | I am confused about the derivation of importance scores for an xgboost model. My understanding is that xgboost (and in fact, any gradient boosting model) examines all possible features in the data before deciding on an optimal split (I am aware that one can modify this behavior by introducing some randomness to avoid overfitting, such as the by using the colsample_bytree option, but I’m ignoring this for now).
Thus, for two correlated features where one is more strongly associated with an outcome of interest, my expectation was that the one that is more strongly associated with an outcome be selected first. Or in other words, that once this feature is selected, no additional useful information should be found in the other, correlated feature. This however does not seem to be always the case.
To put this concretely, I simulated the data below, where x1 and x2 are correlated (r=0.8), and where Y (the outcome) depends only on x1. A conventional GLM with all the features included correctly identifies x1 as the culprit factor and correctly yields an OR of ~1 for x2. However, examination of the importance scores using gain and SHAP values from a (naively) trained xgboost model on the same data indicates that both x1 and x2 are important. Why is that? Presumably, x1 will be used as the primary split (i.e., the stump) since it has the strongest association with the outcome. Once this split happens (even if over multiple trees due to a low learning rate), x2 should have no additional information to contribute to the classification process. What am I getting wrong?
```
pacman::p_load(dplyr, xgboost,data.table,Matrix,MASS, broom, SHAPforxgboost)
expit<-function(x){
exp(x)/(1+exp(x))
}
r=0.8
d=mvrnorm(n=2000, mu=c(0,0),Sigma=matrix(c(1,r,r,1),nrow=2),empirical=T)
data=data.table(d,
replicate(10,rbinom(n=2000,size=1,prob=runif(1,min=0.01,max=0.6))))
colnames(data)[1:2]<-c("x1","x2")
cor(data$x1,data$x2)
data[,Y:=rbinom(n=2000,size=1,prob=expit(-4+2*x1+V2+V4+V6+V8+V3))]
model<-glm(Y~., data=data, family="binomial")
mod<-tidy(model)
mod$or<-round(exp(mod$estimate),2)
sparse_matrix<-sparse.model.matrix(Y~.-1,data=data)
dtrain_xgb<-xgb.DMatrix(data=sparse_matrix,label=data$Y)
xgb<-xgboost(tree_method="hist",
booster="gbtree",
data=dtrain_xgb,
nrounds=2000,
fold=5,
print_every_n=10,
objective="binary:logistic",
eval_metric="logloss",
maximize = F)
shap<-shap.values(xgb,dtrain_xgb)
mean_shap<-data.frame(shap$mean_shap_score)
gain<-xgb.importance(model=xgb)
head(mod,14) #regression
head(mean_shap) #shap values
head(gain) #gain
```
| importance score for correlated features xgboost | CC BY-SA 4.0 | null | 2023-04-10T23:47:11.177 | 2023-04-12T14:58:14.003 | null | null | 292896 | [
"r",
"boosting",
"shapley-value",
"information-gain"
] |
612543 | 2 | null | 558214 | 1 | null |
#### Your variables do not change --- only the coefficients change
You presently have a regression equation of the form:
$$\log_{10}(\hat{y}) = \hat{\beta}_0 + \hat{\beta}_1 x_1 + \hat{\beta}_2 x_2
\quad \quad \quad \quad \quad
\hat{y} = 10^{\hat{\beta}_0} \times (10^{\hat{\beta}_1})^{x_1} \times (10^{\hat{\beta}_2})^{x_2}.$$
You can convert this model form to any logarithmic-base $r$ by using $\log_{r}(y) =
\log_{r}(10) \cdot \log_{10}(y)$. To convert to the natural logarithm we take $r=e$ and define new coefficient parameters ${\alpha}_i \equiv \ln(10) {\beta}_i$ and their corresponding estimators $\hat{\alpha}_i \equiv \ln(10) \hat{\beta}_i$, which yields the equivalent converted model form:
$$\ln(\hat{y}) = \hat{\alpha}_0 + \hat{\alpha}_1 x_1 + \hat{\alpha}_2 x_2
\quad \quad \quad \quad \quad
\hat{y} = e^{\hat{\alpha}_0} \times (e^{\hat{\alpha}_1})^{x_1} \times (e^{\hat{\alpha}_2})^{x_2}.$$
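As a quick check of the conversion with a made-up coefficient value:

```
beta1  <- 0.37               # hypothetical base-10 coefficient
alpha1 <- log(10) * beta1    # converted natural-log coefficient
# both parameterizations imply the same multiplicative effect per unit of x1
all.equal(10^beta1, exp(alpha1))  # TRUE
```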
| null | CC BY-SA 4.0 | null | 2023-04-10T23:54:27.887 | 2023-04-10T23:54:27.887 | null | null | 173082 | null |
612546 | 1 | null | null | 0 | 41 | I want to prove the following: for a given image distribution $P(X), \mathcal{X} \subseteq \mathbf{R}^{n \times m}$, we have a masking model, $\phi:X \rightarrow \{0,1\}^{n\times m}$. We also have the complement mask $\bar{\phi}$, which applies the complement of the mask produced by $\phi$. I want to find the entropy $H(X \circ \phi(X) | X \circ \bar{\phi}(X))$, where $\circ$ is the Hadamard product. Here is my attempt:
$$\begin{align}
H(X \circ \phi(X) | X \circ \bar{\phi}(X)) &= \sum_{x_1\in\mathcal{X}}P(x_1)H(X \circ \phi(X) | x_1 \circ \bar{\phi}(x_1)) \\
&= -\sum_{x_1\in\mathcal{X}, x_2 \in\mathcal{X}}P(x_1)P(x_2 \circ \phi(x_2) | x_1 \circ \bar{\phi}(x_1))\dots\\
\dots\log P(x_2 \circ \phi(x_2) | x_1 \circ \bar{\phi}(x_1))
\end{align}$$
Based on this [paper](https://arxiv.org/pdf/2012.07287.pdf), page 12, equation 11, this quantity should equal to:
$\mathbb{E}[\log P(X \circ \phi(X) \mid X \circ \bar{\phi}(X))]$
But I am not sure how to further progress to get their equation.
| Conditional Entropy Formula for a Masking Network | CC BY-SA 4.0 | null | 2023-04-11T01:05:28.367 | 2023-04-11T01:17:38.143 | 2023-04-11T01:17:38.143 | 296047 | 296047 | [
"entropy",
"information-theory"
] |
612547 | 1 | 612553 | null | 1 | 33 | This is the target I want to integrate with Monte Carlo with control variate method:
$$\theta = \int_{1}^{\infty}\frac{x^2}{\sqrt{2\pi}}e^{-x^2/2}dx$$
I have checked with Wolfram Alpha that it is 0.400626, so the control-variate estimates should converge to this value.
I use the standard normal distribution and the gamma distribution (shape = 3, rate = 1) as two control variates, but the estimates fail to converge! What's wrong with my code, or my idea?
Here is my R code and output.
```
sample_size <- seq(from = 100, to = 10^4, by = 10)
target <- function(x){
x^2 * exp(-(x^2)/2) /sqrt(2*pi)
}
Sim4.1.theta <- numeric(length(sample_size))
Sim4.1.se <- numeric(length(sample_size))
Sim4.2.theta <- numeric(length(sample_size))
Sim4.2.se <- numeric(length(sample_size))
MC.sim.4 <- function(size){
u1 <- rnorm(size)
f2.1 <- u1 <- u1[u1>=1]
T1.1 <- target(u1)
u2 <- rgamma(size, shape = 3, rate = 1)
f2.2 <- u2 <- u2[u2>=1]
T1.2 <- target(u2)
c.star.1 <- -lm(T1.1~f2.1)$coeff[2]
c.star.2 <- -lm(T1.2~f2.2)$coeff[2]
T2.1 <- T1.1 + c.star.1*(f2.1 - pnorm(1,lower.tail = FALSE))
T2.2 <- T1.2 + c.star.2*(f2.2 - pgamma(1,shape = 3, rate = 1, lower.tail = FALSE))
control1.estimate <- mean(T2.1[u1>=1])
control2.estimate <- mean(T2.2[u2>=1])
control1.se <- sd(T2.1)/sqrt(size)
control2.se <- sd(T2.2)/sqrt(size)
return(rbind(control1.estimate,control2.estimate,control1.se,control2.se))
}
for (i in 1:length(sample_size)) {
tem <- MC.sim.4(sample_size)
Sim4.1.theta[i] <- tem[1]
Sim4.1.se[i] <- tem[2]
Sim4.2.theta[i] <- tem[3]
Sim4.2.se[i] <- tem[4]
}
plot(x = sample_size, y = Sim4.1.theta, type = 'l',col = '#2166AC', ylim = c(0,0.5), xlab = '# of sampling size')
lines(x = sample_size, y = Sim4.2.theta, col = '#B2182B')
abline(a=0.400626,b=0,col='red')
```
[](https://i.stack.imgur.com/KrV7B.png)
| How to use control variate method to estimate $\theta = \int_{1}^{\infty}\frac{x^2}{\sqrt{2\pi}}e^{-x^2/2}dx$ | CC BY-SA 4.0 | null | 2023-04-11T01:28:53.073 | 2023-04-11T05:49:41.167 | 2023-04-11T01:30:54.623 | 362671 | 385382 | [
"monte-carlo",
"numerical-integration"
] |
612548 | 1 | null | null | 0 | 10 | I am doing a longitudinal study investigating how Y changed across time, with two time-invariant covariates at level two (between-person), one being `Gender` and the other `OnAge`. So I tried to formulate a conditional growth model via lme4, and if I am correct, the equations should be:
Level 1: $Y_{ti} = b_{1i} + b_{2i}\,\mathrm{Time}_{ti} + u_{ti}$
Level 2: $b_{1i} = \beta_{01} + \beta_{11}\,\mathrm{Gender}_i + \beta_{21}\,\mathrm{OnAge}_i + d_{1i}$
$b_{2i} = \beta_{02} + \beta_{12}\,\mathrm{Gender}_i + \beta_{22}\,\mathrm{OnAge}_i + d_{2i}$
Composite: $Y_{ti} = (\beta_{01} + \beta_{11}\,\mathrm{Gender}_i + \beta_{21}\,\mathrm{OnAge}_i + d_{1i}) + (\beta_{02} + \beta_{12}\,\mathrm{Gender}_i + \beta_{22}\,\mathrm{OnAge}_i + d_{2i})\,\mathrm{Time}_{ti} + u_{ti}$
Accordingly, the formula for lme4 should be `lmer(Y ~ Time + Gender + OnAge + Gender:Time + OnAge:Time + (Time|id))`
---
The question here is: how should I interpret the obtained level-2 coefficients of `Gender`, `Gender*Time`, `OnAge`, and `OnAge*Time`? I suppose their interpretations should be very different from those in an unconditional growth model.
 | How to interpret the coefficients obtained from lme4, which has two covariates at level 2? | CC BY-SA 4.0 | null | 2023-04-11T01:40:37.237 | 2023-04-11T01:47:35.577 | 2023-04-11T01:47:35.577 | 384978 | 384978 | [
"regression",
"lme4-nlme",
"multilevel-analysis",
"nested-data",
"growth-model"
] |
612549 | 1 | null | null | 0 | 19 | In my study I want to test whether certain cities were predominantly visited for holidays or for work (see boxplot below). That means I want to know whether there is a difference between the grey box and the white box for each group (city). I have about 20 persons in my study. Some were traveling a lot, which is why I chose 'Proportion of visits' as the y-axis for the boxplot. Can I use a paired Wilcoxon test for each city (paired for each individual and using the number of counts), or is that wrong?
I am totally unsure. Help would be very appreciated.
[](https://i.stack.imgur.com/OIYC9.png)
| How do I test the difference between male and female in this dataset? (Paired Wilcoxon??) | CC BY-SA 4.0 | null | 2023-04-11T02:22:38.070 | 2023-04-11T02:22:38.070 | null | null | 385405 | [
"statistical-significance"
] |
612550 | 1 | null | null | 0 | 19 | I'm working with a data set from here: [https://nij.ojp.gov/funding/recidivism-forecasting-challenge](https://nij.ojp.gov/funding/recidivism-forecasting-challenge)
To put it simply, it is a binary classification problem. The gang affiliation variable is quite useful, but it is only recorded for men, making the data missing not at random, with the missingness completely correlated with the gender variable.
I'd like to use both gender and gang affiliation variable along with other variables.
Could someone point me to the resources to handle this kind of missing data?
I'm thinking of fitting a logistic regression model and a Bayesian model.
Thank you.
| Handling Missing Not At Random Data | CC BY-SA 4.0 | null | 2023-04-11T03:04:47.620 | 2023-04-11T03:04:47.620 | null | null | 260660 | [
"missing-data",
"binary-data"
] |
612551 | 2 | null | 612517 | 3 | null | These are some good questions. I'll do my best to give simple answers to them.
Entropy balancing (EB) for the ATT (which is not your query) is IPTW. It implicitly estimates a propensity score (PS) using logistic regression, but instead of doing so with maximum likelihood, it does so using a different algorithm that yields exact mean balance on the included covariates. This is described in [Zhao & Percival (2017)](https://doi.org/10.1515/jci-2016-0010) and [Zhou (2019)](https://doi.org/10.1214/18-AOS1698), among others.
However, it was not known that this was what EB was when it was first described in [Hainmueller (2012)](https://doi.org/10.1093/pan/mpr025). Hainmueller considered EB an optimization problem: estimate weights for each individual such that the following characteristics hold: the covariate means are exactly balanced after weighting, the weights are positive, and the "negative entropy" of the weights is minimized. The negative entropy is a measure of variability, so EB weights are meant to be less extreme than standard IPTW weights. Instead of having to do the optimization problem and estimate $n$ parameters (i.e., a weight for each individual in the sample), Hainmueller discovered a trick where you can just estimate one parameter for each variable to be balanced. The reason this trick is possible is because of the later-discovered fact that EB is a special kind of logistic regression, and in logistic regression you just estimate one parameter for each variable (i.e., the regression coefficient).
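As a sketch of that optimization (the ATT case from Hainmueller 2012, with uniform base weights; the notation here is mine):

$$\min_{w}\ \sum_{i:\,T_i = 0} w_i \log w_i \quad \text{subject to} \quad \sum_{i:\,T_i = 0} w_i\, x_{ij} = \bar{x}^{(1)}_{j} \ \text{for each covariate } j, \qquad \sum_{i:\,T_i = 0} w_i = 1, \quad w_i > 0,$$

where $\bar{x}^{(1)}_{j}$ is the mean of covariate $j$ in the treated group; minimizing $\sum_i w_i \log w_i$ keeps the control weights as close to uniform as the balance constraints allow.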
For the ATE, unfortunately, it's a different story. The nice equivalence between logistic regression and EB doesn't hold, but `WeightIt` still relies on the trick of estimating one parameter per variable (actually two, one for each treatment group) instead of estimating a weight for each unit. How `WeightIt` does it is irrelevant, but to summarize, it performs EB twice, once for each treatment group, and estimates weights for each treatment group that yield exact mean balance on the covariates between each treatment group and the overall sample.
Since the goal of IPTW is to achieve balance, EB skips the step of estimating a PS and goes straight to balance, while ensuring the weights have minimal variability. For this reason, it performs excellently in simulations and real data. It is in line with the philosophy of matching as nonparametric preprocessing described by [Ho et al. (2007)](https://doi.org/10.1093/pan/mpl013), who identify the PS tautology, which is that a good PS achieves balance, but the only way to evaluate a PS is to assess whether it has achieved balance. So EB skips the middleman and goes straight to balance, skipping over the steps of estimating a PS, checking balance, if balance isn't good, choosing a different PS specification, etc. EB guarantees exact mean balance on the covariates right away.
There are two philosophies to estimating PSs, which I described in detail in [this post](https://stats.stackexchange.com/a/421124/116195), which mentions EB and its alternatives. First, there is the philosophy of trying to estimate the PS as accurately as possible, because then the "magical" properties of the PS that guarantee unbiasedness in large samples come into play. Second, there is the philosophy of estimating PSs that yield balance with no attempt to estimate the true PS or even an accurate one. EB falls squarely in the second camp, omitting a PS entirely. However, one weakness of this is that the magical properties of the PS cannot come into play: you can only balance the terms you request to be balanced, and there is no guarantee the rest of the covariate distribution (i.e., moments beyond the means, features of the joint distribution like covariances) will be balanced unless those are specifically requested, too. An analyst at SAS said, wisely, "When a metric becomes a target, it ceases to be a metric"; that is, measured covariate balance is a metric of the PS's ability to balance unmeasured features of the covariate distribution (and by unmeasured I mean unseen features of the distribution of observed covariates, not unmeasured covariates), and achieving measured balance automatically using EB doesn't tell you about the unmeasured features of the covariate distribution. You can no longer rely on the theoretical properties of the PS to balance the distributions.
Okay, I know I've been a little theoretical and technical here. I'll bring it back to answering your questions directly.
>
How can I make all the variables balanced? Is it possible to "re-weight"? How?
You can use EB directly on the covariates; you don't need to re-weight (i.e., apply entropy balancing to the propensity score-weighted sample). That is, if your IPTWs didn't yield balance, toss them out and use a different method of estimating weights. EB is one, but there are others. My favorite is energy balancing, which is also implemented in `WeightIt`. (It actually is possible to combine IPTW and EB, which was one of the winning methods in the [2016 ACIC data competition](https://doi.org/10.1214/18-STS667). It has not been studied beyond that, though.)
>
Is it possible to apply directly "entropy balancing" instead of IPTW in this case? Can somebody explain to me entropy balancing? I tried reading the original paper (here) but I didn't understand it so much. How is entropy balance computed? Can it be always used at the same conditions as IPTW or are there particular conditions?
I attempted to answer this above, but I'll summarize. EB for the ATE skips the PS and estimates weights that exactly balance the covariate means and ensure the weights have minimal variability. The specific method of estimation is a very simple optimization that runs extremely fast. For the ATT, the story is slightly different, and more connections to standard IPTW exist. For a treatment at a single time point, EB can be used in the exact same situations IPTW can, including for binary, multi-category, and continuous treatments, for the ATT or ATE, for subgroup analysis, etc. The estimates from EB have the exact same interpretations as those from IPTW. There are many extensions to entropy balancing, including for longitudinal treatments and when you have a single treated unit and multiple controls (this is called the synthetic control method). For the ATT, it performs almost uniformly better than logistic regression-based PS weighting except in pathological circumstances.
>
If entropy balancing is able to adjust with Standardized differences of almost 0, then why is it so little used in the medical field?
Mostly because medical researchers have not heard of it, and even if they have, they might be scared to use it because it sounds complicated, even though it isn't. It is very popular in labor economics and is getting more popular in medicine and other fields as well, slowly. It deserves way more attention and, in my opinion, should be the first method a researcher tries, not a backup when IPTW fails. It must be accompanied by a robust assessment of balance because the theoretical properties of the propensity score do not apply (for the ATE, but they actually do for the ATT); this includes assessing balance beyond the means using, e.g., KS statistics and balance statistics for interactions and polynomial terms, which are all available in `cobalt`.
>
I noticed that in some papers there is the cohort after 1st weighting, then 2nd weighting, etc.. can someone explain how you obtain this? how many weighting do you have to do?
I'm not exactly sure what you're referring to, but this is probably multiple attempts to estimate a single set of weights that balance the covariates. E.g., you try a logistic regression, then a logistic regression with squared terms added, then with some interactions added, etc. Only the properties of the final set of weights (i.e., those that yield the best balance without sacrificing precision) should be reported and used in effect estimation, but it is important to describe your process of estimating weights in your manuscript to ensure your procedure is replicable. (There are some contexts where multiple sets of weights are combined together, but that is an advanced matter that is beyond the scope of your question.)
Go forth, and use entropy balancing!
| null | CC BY-SA 4.0 | null | 2023-04-11T05:26:35.420 | 2023-04-11T05:26:35.420 | null | null | 116195 | null |
612553 | 2 | null | 612547 | 3 | null | Your basic Monte Carlo integral is wrong, before you even add in the control variates. When I return `T1.1` from `MC.sim.4` I get
```
> MC.sim.4(100000)
[,1]
simple.MC 0.2546318389
control1.estimate 0.3854293298
control2.estimate 0.2284484298
control1.se 0.0001006455
control2.se 0.0002170077
```
As you say, the true value of the target is 0.400626
```
> integrate(function(x) x*x*dnorm(x),lower=1,upper=Inf)
0.400626 with absolute error < 5.7e-07
```
But that's not what your `T1.1` is doing.
```
> z<-rnorm(1e5)
> mean((z*z*dnorm(z))[z>1])
[1] 0.2549178
```
There are two problems. First, you don't want the normal density in `target`, because you get the Normal density by sampling from a Normal. Second, you don't want to drop the values with `z<1`; you want to set them to zero
Fixing these problems
```
> z<-rnorm(1e5)
> mean((z*z)*(z>1))
[1] 0.4006146
```
The basic code for the control variates looks ok, but you'll have to fix similar problems in how those variables are defined.
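For what it's worth, the corrected estimator plus one control variate can be sketched in Python/NumPy as follows (I use $Z^2$, whose mean under the Normal is exactly 1, purely as an illustrative control; your actual control variates may differ, but the same keep-the-zeros fix applies):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=100_000)

f = (z * z) * (z > 1)  # target: keep zeros instead of dropping z < 1
g = z * z              # control variate with known mean E[g] = 1

c = np.cov(f, g)[0, 1] / np.var(g)        # estimated optimal coefficient
est_plain = f.mean()
est_cv = (f - c * (g - 1.0)).mean()
print(est_plain, est_cv)  # both near 0.400626; est_cv has lower variance
```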
| null | CC BY-SA 4.0 | null | 2023-04-11T05:49:41.167 | 2023-04-11T05:49:41.167 | null | null | 249135 | null |
612554 | 1 | null | null | 1 | 37 | I am interested in numerical data imputation problems: how to properly estimate missing values in a tabular data set (rows and columns) with missing numerical values?
In 2018, Yoon et al. proposed the GAIN framework, a Generative Adversarial Network tailored for numerical data imputation. Here is the link to the original paper: [https://arxiv.org/abs/1806.02920](https://arxiv.org/abs/1806.02920)
Unfortunately, after much trial and error (and even after contacting the authors of the paper), it appears to me that traditional imputation methods -- like the kNN-Imputer or MissForest -- provide better missing-value estimates than GAIN. That said, the authors of GAIN claim state-of-the-art results. I attach their table showing the main results
[](https://i.stack.imgur.com/QVeOF.png)
This table shows the mean normalized Root Mean Square Error (RMSE) of 6 data imputation methods on 5 numerical data sets, averaged over 10 repetitions. Parameters are optimized with a 5-fold cross-validation scheme.
My questions are: Did anyone manage to obtain satisfactory imputation results with GAIN? If yes, are those results really better than MissForest (which is one of the best numerical imputation methods in my experience)? If not, then how should one interpret Table 2 from the paper GAIN: Missing Data Imputation using Generative Adversarial Nets (Yoon et al., 2018)?
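For context, my baseline comparison looks roughly like this sketch (scikit-learn's `KNNImputer`, plus `IterativeImputer` as a rough MissForest stand-in; the toy data, masking scheme, and `nrmse` helper are mine, not from the paper):

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import KNNImputer, IterativeImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
X[:, 1] += 0.8 * X[:, 0]          # correlated columns help imputation
mask = rng.random(X.shape) < 0.2  # 20% MCAR missingness
X_miss = X.copy()
X_miss[mask] = np.nan

def nrmse(X_hat):
    # RMSE on the masked entries, normalized by their standard deviation
    err = (X_hat[mask] - X[mask]) ** 2
    return np.sqrt(err.mean()) / X[mask].std()

results = {}
for imp in (KNNImputer(n_neighbors=5), IterativeImputer(random_state=0)):
    results[type(imp).__name__] = nrmse(imp.fit_transform(X_miss))
print(results)
```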
| Numerical data imputation: Generative Adversarial Imputation Nets (GAIN) not reproducible? | CC BY-SA 4.0 | null | 2023-04-11T05:55:40.503 | 2023-05-07T05:27:14.670 | 2023-05-07T05:27:14.670 | 213020 | 213020 | [
"missing-data",
"data-imputation",
"gan"
] |
612559 | 2 | null | 612443 | 0 | null | Yes, the logic of Chebyshev's inequality can be reversed. You could say that $$\mathrm{P}(|X - \mathrm{E}[X]| \leq \sqrt{\mathrm{Var}[X]}) = 1$$
if and only if $X$ is a Bernoulli variable with parameter $p = 0.5$ shifted and scaled to match the specific mean and variance.
---
"If $X$ is a random variable with $0 \leq X \leq 1$" -- this condition is unnecessary for reversing the logic of the inequality. The mean and variance, along with the condition that Chebyshev's inequality holds with equality, are enough.
---
Proof: consider $Y = \frac{X-\mu}{\sigma}$. It is constrained between $-1$ and $+1$, has mean $0$, and satisfies $\mathrm{E}[Y^2] = 1$. Since $Y^2 \leq 1$ almost surely while $\mathrm{E}[Y^2] = 1$, we must have $Y^2 = 1$ almost surely, so $Y = \pm 1$; the zero mean then forces $\mathrm{P}(Y = 1) = \mathrm{P}(Y = -1) = 1/2$.
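A quick numerical check of the two-point claim (with an arbitrary mean and variance of my choosing):

```python
import numpy as np

mu, sigma = 3.0, 2.0
x = np.array([mu - sigma, mu + sigma])  # shifted/scaled Bernoulli(0.5)
p = np.array([0.5, 0.5])

mean = (p * x).sum()
var = (p * (x - mean) ** 2).sum()
# all mass lies within one standard deviation of the mean
print(mean, var, np.abs(x - mean).max() <= np.sqrt(var))  # 3.0 4.0 True
```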
| null | CC BY-SA 4.0 | null | 2023-04-11T06:31:27.527 | 2023-04-11T06:31:27.527 | null | null | 164061 | null |
612561 | 2 | null | 612405 | 7 | null | Though you seem to be willing to avoid inversion, this can be done analytically.
After setting $z:=x^{\alpha+1}$, from
$$u(z):=\text{cdf}(z)=z(\gamma\log z+1)=\gamma e^{-1/\gamma}(e^{1/\gamma}z)\log(e^{1/\gamma}z)=\gamma e^{-1/\gamma} W^{-1}\left(\log(e^{1/\gamma}z)\right),$$
with $W^{-1}(w):=we^w$ the inverse of Lambert's function, we draw
$$\log\left(e^{1/\gamma}\,\text{cdf}^{-1}(u)\right)=W\left(\gamma^{-1} e^{1/\gamma}u\right),\qquad\text{i.e.}\qquad \text{cdf}^{-1}(u)=e^{-1/\gamma}\exp\left(W\left(\gamma^{-1} e^{1/\gamma}u\right)\right)$$
where $W$ denotes Lambert's function.
Now it suffices to draw $u$ uniformly in $[0,1]$ and pass it to $\text{cdf}^{-1}(u)$.
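A sketch of this sampler in Python, using `scipy.special.lambertw` (the mapping back to $x$ via $x=z^{1/(\alpha+1)}$ depends on your $\alpha$, so I leave it as a comment):

```python
import numpy as np
from scipy.special import lambertw

gamma = 0.5  # example value; use your own

def cdf(z, gamma):
    return z * (gamma * np.log(z) + 1.0)

def inv_cdf(u, gamma):
    # e^{1/gamma} z = exp(W(u e^{1/gamma} / gamma)), principal branch
    w = lambertw(u * np.exp(1.0 / gamma) / gamma).real
    return np.exp(-1.0 / gamma) * np.exp(w)

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
z = inv_cdf(u, gamma)          # samples of z = x^(alpha + 1)
# x = z ** (1.0 / (alpha + 1))   # undo the substitution for your alpha
```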
| null | CC BY-SA 4.0 | null | 2023-04-11T07:32:47.300 | 2023-04-11T07:56:21.417 | 2023-04-11T07:56:21.417 | 37306 | 37306 | null |
612562 | 1 | null | null | 1 | 45 | Suppose that the DGP is an AR(p) with a unit root. When we fit, using OLS, an AR(1) model $x_t=\alpha x_{t-1}+u_t$ to the data, the estimate $\hat\alpha$ converges to $1$, indicating, correctly, that this is a unit root process. I've been trying to prove this, but so far I haven't been successful. Can someone give some hints?
| Prove that OLS estimator of AR(1) coefficient for an AR(p) process with unit root converges to 1 | CC BY-SA 4.0 | null | 2023-04-11T07:53:48.107 | 2023-04-11T07:53:48.107 | null | null | 376142 | [
"time-series",
"unit-root"
] |
612564 | 2 | null | 611779 | 5 | null | Yes, choosing hyperparameters with a validation set (or similarly through cross-validation) can lead to overfitting to the validation set. This gets worse the smaller the validation set is, the more hyperparameters there are to tune, and the more hyperparameter settings you try (although there is a limit, closely related to the previous point, based on how flexibly the hyperparameters could ever be made to overfit, if you tried to maximize overfitting). This tends to be less bad with cross-validation (or even repeated cross-validation) than with a single training-validation split, but to some extent it cannot be totally avoided.
Exactly how you try hyperparameters (some kind of clever search like in `optuna`, grid-search, random-grid-search) does not really matter too much for this answer.
An example of this for ridge or LASSO regression is [the 1SE rule](https://stats.stackexchange.com/questions/138569/why-is-lambda-within-one-standard-error-from-the-minimum-is-a-recommended-valu), which suggests finding the value of the single penalty hyperparameter that minimizes the cross-validation error and then making the penalty stronger, as long as the CV performance stays within 1 standard error of that minimum (in order to pick something that will perform better on unseen new data). This is a decent rule of thumb that tries to account for the overfitting to the validation parts of the CV fold splits. With models with many more hyperparameters, it is a lot harder to find such a simple rule.
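To make the rule concrete, here is a small sketch with made-up CV numbers (not from any real fit):

```python
import numpy as np

# toy CV summary: mean error and its standard error per candidate lambda
lambdas = np.array([0.01, 0.1, 1.0, 10.0, 100.0])  # ascending penalty
cv_mean = np.array([0.30, 0.25, 0.24, 0.26, 0.40])
cv_se   = np.array([0.02, 0.02, 0.02, 0.02, 0.03])

i_min = cv_mean.argmin()
threshold = cv_mean[i_min] + cv_se[i_min]
# strongest penalty whose CV error is still within 1 SE of the minimum
i_1se = max(i for i in range(len(lambdas)) if cv_mean[i] <= threshold)
print(lambdas[i_min], lambdas[i_1se])  # 1.0 10.0
```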
An illustration of the issue can also be seen in the ["Do ImageNet classifiers generalize to ImageNet?" paper](https://arxiv.org/abs/1902.10811), if you treat the ImageNet test set as a validation set on which you try out completely different model architectures (a very high-dimensional hyperparameter space). As one can see, even in such a case with a very large test set, the test-set performance overestimates the performance on a newly created test set, but at least the ordering of the models is roughly right.
| null | CC BY-SA 4.0 | null | 2023-04-11T09:40:55.287 | 2023-04-11T09:40:55.287 | null | null | 86652 | null |
612565 | 2 | null | 549458 | 0 | null | The motivation behind Dropout varies -- there's the biologically inspired prevention of "co-adaptation", and there's the ensemble explanation (which is very hand-wavy in my opinion, and I'm not convinced by the Baldi and Sadowski paper either). As in other parts of neural networks, it's not so much theoretically driven as empirically driven. I think your idea is valid; the only thing is that it needs to be tested and experimented with.
[I implemented this now](https://colab.research.google.com/drive/1wRu8-LIgW69ss1sQATMM9Q9ufbSluwyh?usp=sharing) (by adjusting a manual Dropout implementation I had in NumPy): from a very preliminary analysis on the MNIST data, this does seem to improve the validation accuracy -- even more than regular Dropout (on a specific architecture, learning rate, number of epochs, optimizer, etc.) -- but the trajectory of the train/validation accuracy doesn't look like what you would expect with Dropout: the training accuracy goes very high fast, and the validation accuracy lags behind, which is similar to simple networks without Dropout.
| null | CC BY-SA 4.0 | null | 2023-04-11T09:49:32.980 | 2023-04-11T09:49:32.980 | null | null | 117705 | null |
612566 | 1 | null | null | 0 | 51 | So I am trying to validate my Stan model before using real data and am having some trouble estimating parameters separately. My data structure contains count data, with people on the rows and test items in the columns. I am trying to use a covariate for item type (for example, multiple choice) and see how this affects my difficulty parameters. Normally, I would just have the following:
```
for (i in 1:n_examinee) {
  for (j in 1:n_item) {
    lambdas[i,j] = exp(theta[i] + raw_difficulty[j]);
    target += poisson_lpmf(Y[i,j] | lambdas[i,j]);
  }
}
```
With the addition of the covariate on the item difficulty we now have:
```
for (i in 1:n_item) {
  item_difficulty[i] = raw_difficulty[i] + dot_product(X[i], beta_difficulty);
}
for (i in 1:n_examinee) {
  for (j in 1:n_item) {
    lambdas[i,j] = exp(theta[i] + item_difficulty[j]);
    target += poisson_lpmf(Y[i,j] | lambdas[i,j]);
  }
}
```
In this case X[i] refers to the row of a dummy matrix containing item types, indicating which items are of which type. This matrix also had a column removed to serve as a reference category. beta_difficulty contains the covariate estimates for the remaining columns of X.
So, for example, the first item may be item_difficulty[1] = .3 + [1,0] * [-.25,.63]. However, when comparing my estimates against my true parameters, it is clear there are identifiability problems. My estimates for raw_difficulty and beta_difficulty are not near their respective true values; however, their sum, item_difficulty, is. This logically makes sense, as there are infinitely many value combinations that could satisfy the equation.
What can I do to solve this problem? While item_difficulty matches the true values, I want to be able to tell how severe the effects of the item type are, so I would need to have reliable beta_difficulty estimates. Any suggestions on how to solve this problem?
| How can I solve identifiability problems in my STAN estimation? | CC BY-SA 4.0 | null | 2023-04-11T09:56:57.950 | 2023-04-11T13:58:27.357 | 2023-04-11T12:55:57.103 | 71679 | 366251 | [
"bayesian",
"stan",
"item-response-theory",
"identifiability",
"uninformative-prior"
] |
612567 | 1 | 612844 | null | 1 | 48 | Hi StackExchange Community,
I am performing a Principal Components Analysis (PCA). I would like to know how to relate the PCA components to other variables that were not included in the PCA.
I have a nutritional survey with 60 questions that was applied to 420 people. The frequency of consumption was measured in servings and is standardized for each type of food. I have clearly identified components using the following criteria:
a. Components selected by eigenvalue >1.5
b. Varimax rotation loadings >0.2 for each variable.
The Results of PCA+varimax rotation:
...
PC1: Orange, Apple, Watermelon
PC2: Homemade fries, Mayonesa, Pizza
PC3: Eggs, Walnuts, Hazelnuts
PC4: Witefish , fatty fish small, fatty fish big
...
Then, I want to know whether it is possible to carry out post-PCA statistical analyses with each subject's standardized varimax-rotated component scores and cross that information with other confounding variables such as sex, age, education level, etc.
This table illustrates what I want to compute:
[https://ijbnpa.biomedcentral.com/articles/10.1186/s12966-016-0353-2/tables/4](https://ijbnpa.biomedcentral.com/articles/10.1186/s12966-016-0353-2/tables/4)
Other studies where similar approach was applied:
- https://www.mdpi.com/2072-6643/13/1/70#app1-nutrients-13-00070
- https://www.cambridge.org/core/journals/british-journal-of-nutrition/article/comparison-of-cluster-and-principal-component-analysis-techniques-to-derive-dietary-patterns-in-irish-adults/2130E0404EA1C0AC9CF4382839DE3498
Can I recover the position of the subjects on the components? I tried to do something using the info in this link, but I'm not sure if it's correct. I think that with this step I could compute an ANOVA or Chi-Square test against confounding variables such as sex, education, diet calories, etc.
[How to compute varimax-rotated principal components in R?](https://stats.stackexchange.com/questions/59213/how-to-compute-varimax-rotated-principal-components-in-r)
```
#Code for RStudio
library(factoextra)
#PCA
prc <- prcomp(df, center = TRUE, scale. = TRUE)
prc$sdev^2  # choose components with eigenvalues > 1.5
#Varimax and loadings
varimax_df <- varimax(prc$rotation[, 1:4])
varimax_df$loadings
varimax_df$rotmat
#Standardized scores for each subject on the rotated components
newData <- scale(df) %*% varimax_df$loadings
```
Thanks!
| Extrapolate Principal Components Factors with other variables in the components | CC BY-SA 4.0 | null | 2023-04-11T10:11:37.633 | 2023-04-13T19:02:04.427 | 2023-04-13T13:47:10.307 | 385429 | 385429 | [
"pca",
"biostatistics",
"factor-rotation"
] |
612568 | 1 | null | null | 0 | 8 | To compare two treatments (independent variable = intervention A vs intervention B), the dependent variables are lipid levels (i.e., LDL, HDL, TC, TG) on continuous scales. Blood samples were collected 3 times, at varying time points across patients. What is the best test to calculate the mean change for each dependent variable while factoring in the different measurement time intervals?
| Stat test for unequal sampling time/measurement time intervals (retrospective, clinical data, blood test results) | CC BY-SA 4.0 | null | 2023-04-11T10:12:26.600 | 2023-04-11T10:12:26.600 | null | null | 385427 | [
"unevenly-spaced-time-series"
] |
612569 | 1 | null | null | 0 | 39 | A dataset has 2 or more time series, e.g., two time series x and y:
[](https://i.stack.imgur.com/fAwDd.png)
I need to estimate the slope and intercept of a linear regression model between x and y, but my data can have outliers.
My Approach:
- Calculate Mahalanobis distances using MCD
- Flag a point as an outlier if its distance exceeds some threshold value
- For finding the threshold, I tried quantile- and chi-square-based cutoffs, but no single value detects the outliers in all datasets; by tuning it manually I can find them, but I want to automate this
- After finding the outliers, remove them from the dataset
- Then build the model using linear regression
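For reference, my current pipeline looks roughly like this (a simplified sketch using scikit-learn's `MinCovDet` and a chi-square cutoff; the 0.975 quantile is just one common rule of thumb, and it is exactly the value I want to automate):

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=200)
y[:5] += 10  # inject a few outliers

X = np.column_stack([x, y])
mcd = MinCovDet(random_state=0).fit(X)
d2 = mcd.mahalanobis(X)                  # squared robust Mahalanobis distances
cutoff = chi2.ppf(0.975, df=X.shape[1])  # rule-of-thumb threshold
inliers = d2 <= cutoff

model = LinearRegression().fit(x[inliers].reshape(-1, 1), y[inliers])
print(model.coef_[0], model.intercept_)  # close to the true 2.0 and 1.0
```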
My problem is how to choose a threshold value for removing outliers that will work in all cases, or at least for the calculation of the slope and intercept. I cannot check a plot of the data every time, because I am writing a Python script that will build the best model between the time series and output the slope and intercept.
I also explored robust regression, but I want to keep this simple.
| How to choose threshold value in MCD-based Mahalanobis distances? | CC BY-SA 4.0 | null | 2023-04-11T10:30:06.580 | 2023-04-11T10:30:06.580 | null | null | 381018 | [
"time-series",
"python",
"covariance",
"outliers",
"mahalanobis"
] |
612571 | 1 | null | null | 0 | 21 | While building an ML model, I need to detect covariate shift, so I have to compare old labelled data and new unlabelled data. I'm planning to use KL divergence to quantify the distance between these two distributions. However, KL divergence is asymmetric. Hence, which distribution should be $P(x)$ and which $Q(x)$? My intuition is that $P(x)$ should be the new, unlabelled (inference) data and $Q(x)$ the old, historic (training) data.
KL divergence is defined like this:
$$
\DeclareMathOperator {\KL}{KL}
\KL(P || Q) = \int_{-\infty}^\infty P(x) \log \frac{P(x)}{Q(x)} \; dx
$$
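To see the asymmetry concretely, here's a small sketch I put together with SciPy (the histograms are made-up placeholders for the binned old/new data):

```python
import numpy as np
from scipy.stats import entropy

# discretize both samples on a shared grid, then compare both directions
p = np.array([0.1, 0.4, 0.5])  # e.g. histogram of new (inference) data
q = np.array([0.4, 0.4, 0.2])  # e.g. histogram of old (training) data

kl_pq = entropy(p, q)  # KL(P || Q): expectation taken under P
kl_qp = entropy(q, p)  # KL(Q || P): expectation taken under Q
print(kl_pq, kl_qp)    # different numbers -- the divergence is asymmetric
```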
| How to choose the order of distributions in KL divergence | CC BY-SA 4.0 | null | 2023-04-11T11:15:56.427 | 2023-04-11T11:15:56.427 | null | null | 347904 | [
"distance",
"kullback-leibler",
"covariate-shift"
] |
612573 | 1 | null | null | 0 | 34 | I'm trying to implement the (R-)ALoKDE algorithm for the density estimation of the data streams. The algorithm has been published and presented in [1, 2]. Although the algorithm seems simple, I'm struggling to implement it, even in the 1D case. My current (1D) implementation can be found on GitHub [3]. I believe that my problems may be due to my lack of statistical knowledge -- hence I'm looking for some guidance here.
- Imagine a Kernel Density Estimator (KDE) that's a weighted sum of standard KDEs -- let's name it WS-KDE. I've read that -- to sample from a simple KDE -- one can randomly select one of its kernels and draw a sample from it. Is that also correct for the WS-KDE? Can I select the simple KDE (according to weights) first and then a kernel from the selected KDE? If not, what's the proper way to draw a sample from WS-KDE?
- The ALoKDE algorithm -- as far as I understand -- works in the following way. First, it detects whether concept drift (stream non-stationarity) has occurred. If so, then a local estimator
$$
\hat{f}^{kde}_t = \frac{1}{m_t +1} \left( \sum^{m_t}_{i=1} \frac{1}{h^d_{D_t}} K \left( \frac{||\textbf{x} - \textbf{x}_t^{(i)}||}{h_{D_t}} \right) + \frac{1}{h^d_{\textbf{x}_t}} K \left(\frac{||\textbf{x} - \textbf{x}_t||}{h_{\textbf{x}_t}} \right) \right)
$$
is created. The $h$ parameters are easy to compute, $\textbf{x}_t$ is the new sample from the stream, and $\textbf{x}^{(i)}_t$ are local samples, (see pt. 3). I believe the rest of the symbols are self-explanatory, but I'll edit the post again if needed.
At this moment, I should have 2 KDEs -- the KDE from the previous step $\hat{f}_t$ and the local one $\hat{f}_t^{kde}$. I then make a weighted sum of them
$$\hat{f}_{t+1}(\textbf{x}) = \lambda_t \hat{f}^{kde}_t(\textbf{x}) + (1 - \lambda_t) \hat{f}_t(\textbf{x})$$
where $\lambda_t$ is the weight computed via the formula
$$
\lambda_t = max \left(0, min \left(1, \frac{B_t - C_t}{A_t + B_t - 2C_t} \right) \right)
$$
The second problem concerns finding the $\lambda_t$. Here, one has to compute
$$ A_t = \int [(E(\hat{f}^{kde}_{t}(\textbf{x}; h_{D_t}, h_{\textbf{x}_t}) - f(\textbf{x}))^2 + Var(\hat{f}^{kde}_{t}(\textbf{x}; h_{D_t}, h_{\textbf{x}_t}))] dx $$
$$ B_t = \int [(E(\hat{f}_{t}(\textbf{x}) - f(\textbf{x}))^2 + Var(\hat{f}_{t}(\textbf{x}))] dx $$
$$ C_t = \int [(E(\hat{f}^{kde}_{t}(\textbf{x}; h_{D_t}, h_{\textbf{x}_t}) - f(\textbf{x}))) \cdot (E(\hat{f}_{t}(\textbf{x})) - f(\textbf{x})) + Cov(\hat{f}^{kde}_{t}(\textbf{x}; h_{D_t}, h_{\textbf{x}_t}), \hat{f}_{t}(\textbf{x})] dx $$
I can see that $A_t$ and $B_t$ are MISE of $\hat{f}^{kde}_t$ and $\hat{f}_t$ respectively, and I know how to compute them. The authors claim that $C_t$ is the covariance between $\hat{f}^{kde}_t$ and $\hat{f}_t$, and this I have no idea how to compute.
I've also tried to bypass this problem by finding
$\lambda_t$ that minimizes the MISE of $\hat{f}_{t+1}(\textbf{x})$, but it doesn't work correctly -- it tends to either $\lambda=1$ or $\lambda=0$, depending on which estimator's MISE ($\hat{f}_t$ or $\hat{f}_{t}^{kde}$) is smaller. I believe that the MISE of $\hat{f}_{t+1}$ is not simply the weighted sum of the individual KDEs' MISEs.
- My additional concern is the local sampling described in the paper (mentioned in point 2). During the update step of the algorithm, one has to draw $m_t$ local samples $\{\textbf{x}_t^{(i)}\}$ from the current KDE $\hat{f}_t$ that are $\tau$-close (according to some distance measure) to the sample $\textbf{x}_t$ drawn from the stream prior to the update step. Just for the sake of argument, I'll mention that the default is $\tau=1$. Imagine now that $\hat{f}_t$ is a good estimator of $N(0, 1)$. Due to concept drift, the stream now draws data from $N(100, 1)$, so $\textbf{x}_t$ would be a value close to 100. How can I efficiently and numerically draw samples that are so deep in the tail of the distribution?
Currently, I test my implementation on the stationary standard normal distribution $N(0, 1)$.
Solving these two issues will allow me to implement what I need. Any help is greatly appreciated.
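For concreteness, here's the two-stage sampling from point 1 that I'm asking about, as a minimal 1D sketch with Gaussian kernels (my own code, not from the paper):

```python
import numpy as np

def sample_ws_kde(weights, kdes, n, rng):
    """Two-stage sampling from a weighted sum of Gaussian KDEs (1D).
    kdes: list of (centers, bandwidth) pairs; weights sum to 1."""
    comp = rng.choice(len(kdes), size=n, p=weights)  # pick a component KDE
    out = np.empty(n)
    for i, c in enumerate(comp):
        centers, h = kdes[c]
        out[i] = rng.normal(rng.choice(centers), h)  # pick kernel, then draw
    return out

rng = np.random.default_rng(0)
kdes = [(np.array([0.0]), 1.0), (np.array([5.0]), 1.0)]
samples = sample_ws_kde([0.5, 0.5], kdes, 20_000, rng)
print(samples.mean())  # ~2.5 for this 50/50 mixture of N(0,1) and N(5,1)
```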
[1] [https://link.springer.com/article/10.1007/s13042-021-01275-y](https://link.springer.com/article/10.1007/s13042-021-01275-y)
[1, free and public] [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8210923/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8210923/)
[2] [https://ieeexplore.ieee.org/abstract/document/8621923](https://ieeexplore.ieee.org/abstract/document/8621923)
[3] [https://github.com/Tomev/ALoKDE](https://github.com/Tomev/ALoKDE)
| Implementing (R-)ALoKDE algorithm for data streams density estimation | CC BY-SA 4.0 | null | 2023-04-11T11:44:02.863 | 2023-04-13T20:28:25.100 | 2023-04-13T20:28:25.100 | 367638 | 367638 | [
"algorithms",
"kernel-smoothing"
] |
612574 | 2 | null | 609727 | 0 | null | Let's start by defining some notation.
$$
y_i\in\{-1,+1\}\\
z_i\in\{0,1\}
$$
Then $z_i = (y_i+1)/2 \iff 2z_i = y_i + 1 \iff y_i = 2z_i-1$.
Also, $p(y_i\equiv1) = p(z_i\equiv1)$ and $p(y_i\equiv-1) = p(z_i\equiv0)$.
Also:
$$p(y_i\equiv1) = p(z_i\equiv1)= \left(1+\exp(-w^Tx_i)\right)^{-1}\\
p(y_i\equiv-1) = p(z_i\equiv0)= \left(1 + \exp(w^Tx_i)\right)^{-1}$$
When $y_i = -1$, then $\dfrac{y_i + 1}{2} = 0$ and $\dfrac{y_i - 1}{2} = -1$. When $y_i = +1$, then $\dfrac{y_i + 1}{2} = 1$ and $\dfrac{y_i - 1}{2} = 0$. Consequently:
$$
\color{red}{\log\left(1 + \exp(-y_i w^Tx_i)\right)}\\
=\left(\frac{y_i+1}{2}\right)\log\left(1 + \exp(-w^Tx_i)\right)-
\left(\frac{y_i-1}{2}\right)\log\left(1 + \exp(w^Tx_i)\right)
$$
Next, rewrite $-1$ as $1-2$ in the second fraction:
$$
\left(\frac{y_i+1}{2}\right)\log\left(1 + \exp(-w^Tx_i)\right)-
\left(\frac{y_i-1}{2}\right)\log\left(1 + \exp(w^Tx_i)\right)\\=
\left(\frac{y_i+1}{2}\right)\log\left(1 + \exp(-w^Tx_i)\right)-
\left(\frac{y_i+(1-2)}{2}\right)\log\left(1 + \exp(w^Tx_i)\right)
$$
For the fraction on the right, $\dfrac{y_i+(1-2)}{2} = \dfrac{y_i + 1}{2} - 1$, so:
$$
\left(\frac{y_i+1}{2}\right)\log\left(1 + \exp(-w^Tx_i)\right)-
\left(\frac{y_i+(1-2)}{2}\right)\log\left(1 + \exp(w^Tx_i)\right)\\=
\left(\frac{y_i+1}{2}\right)\log\left(1 + \exp(-w^Tx_i)\right)-
\left(\frac{y_i+1}{2}-1\right)\log\left(1 + \exp(w^Tx_i)\right)
$$
Since $z_i = (y_i+1)/2$, $p(y_i\equiv1) = \left(1+\exp(-w^Tx_i)\right)^{-1}$, and $p(y_i\equiv-1) = \left(1 + \exp(w^Tx_i)\right)^{-1}$:
$$
\left(\frac{y_i+1}{2}\right)\log\left(1 + \exp(-w^Tx_i)\right)-
\left(\frac{y_i+1}{2}-1\right)\log\left(1 + \exp(w^Tx_i)\right)\\=
z_i\log\left(1/p(z_i\equiv1)\right)-
(z_i-1)\log\left(1/p(z_i\equiv0)\right)
$$
Next, a logarithm rule is that $\log(1/x) = -\log(x)$ for $x>0$.
$$
z_i\log\left(1/p(z_i\equiv1)\right)-
(z_i-1)\log\left(1/p(z_i\equiv0)\right)\\=
-z_i\log\left(p(z_i\equiv1)\right)+
(z_i-1)\log\left(p(z_i\equiv 0)\right)
$$
Next, $p(z_i \equiv 0) = 1 - p(z_i \equiv 1)$, so:
$$
-z_i\log\left(p(z_i\equiv1)\right)+
(z_i-1)\log\left(p(z_i\equiv 0)\right)\\=
-z_i\log\left(p(z_i\equiv1)\right)+
(z_i-1)\log\left(1-p(z_i\equiv1)\right)
$$
Next, factor out the minus sign.
$$
-z_i\log\left(p(z_i\equiv1)\right)+
(z_i-1)\log\left(1-p(z_i\equiv1)\right)\\=
{-\left(z_i\log\left(p(z_i\equiv1)\right)-
(z_i-1)\log\left(1-p(z_i\equiv1)\right)\right)}
$$
Finally, distribute the minus sign across the $z_i - 1$ on the right.
$$
{-\left(z_i\log\left(p(z_i\equiv1)\right)-
(z_i-1)\log\left(1-p(z_i\equiv1)\right)\right)}\\=
\color{blue}{-\left(z_i\log\left(p(z_i\equiv1)\right)+
(1-z_i)\log\left(1-p(z_i\equiv1)\right)\right)}
$$
With each summand equal in the logistic and log loss functions defined in the question, the two loss functions are equal.
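As a quick numerical sanity check (my own addition, not part of the derivation), the two losses agree elementwise on random scores and labels:

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.normal(size=1000)            # scores w^T x
y = rng.choice([-1.0, 1.0], size=1000)
z = (y + 1) / 2

logistic = np.log1p(np.exp(-y * s))  # log(1 + exp(-y w^T x))
p1 = 1 / (1 + np.exp(-s))            # p(z = 1)
bce = -(z * np.log(p1) + (1 - z) * np.log(1 - p1))

print(np.allclose(logistic, bce))    # True
```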
| null | CC BY-SA 4.0 | null | 2023-04-11T12:11:34.063 | 2023-04-11T18:23:28.547 | 2023-04-11T18:23:28.547 | 247274 | 247274 | null |
612575 | 2 | null | 612516 | 0 | null | Note: I am not an ecologist. So take inspiration from my answer but adjust if necessary.
I think the two-rows-per-plot formatting of the data is misleading. I think it's really one row per plot with three variables: the count in 1992, the hurricane damage in 1996, and the count in 2012. With that in mind...
Here is a model: Assume that at time $t=0$ (1992), there are $S_0$ stems, and we have a constant growth rate $\mu$. Then prior to the hurricane event ($t=4$), $S_t=\mu^tS_0$. This can also be expressed logarithmically, $\log(S_t)=t\log(\mu)+\log(S_0)$.
At $t=4$ there was a hurricane event in which some stems were destroyed; let $H$ be the proportion (NOT PERCENTAGE) of stems that survived. Therefore at $t=4$, the number of stems is $S_4=H\mu^4S_0$ or $\log(S_4)=\log(H) + 4\log(\mu)+\log(S_0)$.
Your hypothesis is that more hurricane damage (a smaller $H$) results in a larger post-hurricane growth rate, $t>4$. This could be represented generally as the post-hurricane growth rate being $\mu H^\beta$, for reasons which will shortly be explained.
From year 4 until the last year (2012, $t=20$), $\log(S_t)=[\log(H)+t\log(\mu)+\log(S_0)] + \beta\log(H)$ (using some basic algebra, not shown).
The cool thing about this is that the $\beta$ is pretty close to a form that we could put into a linear model. Since we only have two time observations, I suggest something like this (algebra not shown):
$$
\log(S_{20}/S_0) = 20\log(\mu) + \log(H) + \beta\log(H)
$$
The first term is the intercept, and an estimate of the baseline growth due to time. I suggest putting the second term in as an `offset` just for ease of interpretation of the third term. The third term accounts for the effect of the hurricane damage on the growth rate.
How do we interpret the third term's coefficient? If $\beta$ is zero, then the amount of hurricane damage doesn't affect the growth rate. If $\beta$ is negative, then this corresponds to hurricane damage increasing the growth rate. If $\beta$ is positive, then hurricane damage decreases the growth rate.
This can probably be done with a standard linear model. Or a GLM. You might also have to deal with zero proportions. Haven't really thought about that.
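To make the suggestion concrete, here is a toy simulate-and-fit sketch in Python (everything here -- the data, the true $\mu$ and $\beta$, and subtracting the offset by hand instead of using a formula interface -- is my own illustration, not your data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
H = rng.uniform(0.2, 1.0, size=n)  # surviving proportion per plot
mu, beta = 1.05, -0.3              # true growth rate and damage effect
logratio = (20 * np.log(mu) + np.log(H) + beta * np.log(H)
            + rng.normal(0, 0.05, n))

# regress log(S20/S0) minus the offset log(H) on log(H)
A = np.column_stack([np.ones(n), np.log(H)])
coef, *_ = np.linalg.lstsq(A, logratio - np.log(H), rcond=None)
print(coef)  # approximately [20*log(mu), beta]
```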
| null | CC BY-SA 4.0 | null | 2023-04-11T12:20:07.773 | 2023-04-11T12:20:07.773 | null | null | 369002 | null |
612576 | 1 | null | null | 0 | 11 | I would like to do a DIF analysis on a scale
The problem is that I have missing data for 10 items for the focal group
The questionnaire in question exists in two forms: a 15 item version and a 25 item version
When we collected the data from the different researchers, we realized that they had not all used the same version: about 100 participants filled out the 15-item questionnaire and 100 others the 25-item one (which includes the 15 items of the first).
That makes about 20% of the data missing. I don't know if I can still do something with it, and I can't identify whether I am in a MAR (missing at random), MCAR (missing completely at random) or MNAR (missing not at random) case.
What do you think?
Thanks for your help
| How to deal with missing data in DIF analysis? | CC BY-SA 4.0 | null | 2023-04-11T12:38:53.677 | 2023-04-11T12:38:53.677 | null | null | 385436 | [
"missing-data",
"scales",
"psychometrics"
] |
612577 | 1 | null | null | 0 | 23 | Is there a way to demonstrate strength of a Hierarchical Bayesian Model versus a non-Hierarchical Bayesian Model on simulated data?
I'm ideally looking for a plot that shows that a Hierarchical Bayesian Model succeeds better than a non-Hierarchical Bayesian Model at detecting a difference/effect as we keep decreasing the amount of data.
An effect is detected if the pairwise difference between estimated quantities is $> 0$ with probability $0.95$. So this can be calculated from both the models.
Is there a way to demonstrate this using simulated data?
| Is there a way to demonstrate strength of a Hierarchical Bayesian Model versus a non-Hierarchical Bayesian Model on simulated data? | CC BY-SA 4.0 | null | 2023-04-11T12:39:31.323 | 2023-04-11T12:39:31.323 | null | null | 300676 | [
"bayesian",
"simulation",
"hierarchical-bayesian"
] |
612578 | 1 | null | null | 0 | 43 | Let the DGP be given as:
$$X_t\sim t^2\chi_1$$
with all $X_t$ independent. Based on simulations, an ADF test fails to detect non-stationarity (i.e. it does not find a unit root). This makes sense, since the ADF test is a parametric test that assumes a completely different model from the DGP above.
I'm looking for some other test that will reliably detect non-stationarity of the DGP above, based on, for example, a sample of size 100 drawn at $t=1,\dots,100$.
| Test that detects non-stationarity of the following time series? | CC BY-SA 4.0 | null | 2023-04-11T12:51:12.930 | 2023-04-12T15:17:38.213 | 2023-04-12T15:17:38.213 | 53690 | 376142 | [
"time-series",
"hypothesis-testing",
"stationarity",
"trend"
] |
612579 | 2 | null | 612513 | 0 | null | Much confusion can come from the too-frequent lack of distinction between "multivariate" and "multiple" regression. Although one might argue that "multivariate" can describe any situation with multiple variables, it's best current practice to restrict "multivariate" to situations with multiple outcome variables. See Hidalgo, B and Goodman, M (2013) [American Journal of Public Health 103: 39-40](https://doi.org/10.2105/AJPH.2012.300897), or [this page](https://stats.stackexchange.com/q/447455/28500) or [this page](https://stats.stackexchange.com/q/2358/28500). Having more than one predictor variable is then "multiple" or "multivariable" regression. This ideal distinction, unfortunately, is too often neglected; at least once I have published "multivariate" when I should have said "multivariable."
For your application, a classic multivariate multiple regression model would seem to be OK. [This page](https://stats.stackexchange.com/q/11127/28500) illustrates such a model. Fox and Weisberg have an [online appendix](https://socialsciences.mcmaster.ca/jfox/Books/Companion/appendices/Appendix-Multivariate-Linear-Models.pdf) to their text that explains in detail. The point estimates end up the same as with separate regressions for each outcome, but the (co)variances are adjusted to take the correlations into account.
More generally, there are several ways to deal with correlated outcomes. Chapter 7 of Frank Harrell's [Regression Modeling Strategies](https://hbiostat.org/rmsc/long.html) provides a useful overview in a table. That chapter focuses on generalized least squares, which avoids the very strict no-missing-values requirement of classical multivariate multiple regression.
| null | CC BY-SA 4.0 | null | 2023-04-11T12:54:03.043 | 2023-04-11T12:54:03.043 | null | null | 28500 | null |