Id stringlengths 1 6 | PostTypeId stringclasses 7 values | AcceptedAnswerId stringlengths 1 6 ⌀ | ParentId stringlengths 1 6 ⌀ | Score stringlengths 1 4 | ViewCount stringlengths 1 7 ⌀ | Body stringlengths 0 38.7k | Title stringlengths 15 150 ⌀ | ContentLicense stringclasses 3 values | FavoriteCount stringclasses 3 values | CreationDate stringlengths 23 23 | LastActivityDate stringlengths 23 23 | LastEditDate stringlengths 23 23 ⌀ | LastEditorUserId stringlengths 1 6 ⌀ | OwnerUserId stringlengths 1 6 ⌀ | Tags list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
611234 | 1 | null | null | 1 | 8 | The Beta-PERT distribution is often used to model uncertainty in risk management. It takes three values: a minimum, a maximum, and a most likely value (the mode).
Generally, these numbers ought to be provided by expert judgement or historical data.
My question: I am using the Beta-PERT distribution in the context of academic research. I model a relatively new phenomenon, for which there is little data. I have found data on most likely input values. Yet I am wondering how I could come up with upper (maximum) and lower (minimum) thresholds.
Is there a generally acceptable heuristic I could apply? (e.g. max value 200% of mode to account for tail estimates?)
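For concreteness, once thresholds are chosen, sampling is straightforward. Here is a minimal Python sketch of what I have in mind (the 50%/200%-of-mode bounds are just an illustrative guess, exactly the kind of heuristic I am asking about; `sample_pert` uses the standard $\lambda=4$ PERT parameterisation):

```python
import numpy as np

def sample_pert(minimum, mode, maximum, size, rng):
    # standard Beta-PERT shape parameters (lambda = 4)
    a = 1 + 4 * (mode - minimum) / (maximum - minimum)
    b = 1 + 4 * (maximum - mode) / (maximum - minimum)
    return minimum + (maximum - minimum) * rng.beta(a, b, size)

rng = np.random.default_rng(0)
mode = 10.0
lo, hi = 0.5 * mode, 2.0 * mode  # hypothetical heuristic: min/max as 50%/200% of the mode
draws = sample_pert(lo, mode, hi, 100_000, rng)
mean_theory = (lo + 4 * mode + hi) / 6  # Beta-PERT mean
```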
| How to choose the input parameters for a Beta-Pert distribution if no expert estimates can be elicited? | CC BY-SA 4.0 | null | 2023-03-30T09:50:42.503 | 2023-04-04T00:34:28.970 | 2023-04-04T00:34:28.970 | 11887 | 384485 | [
"beta-distribution",
"uncertainty",
"risk"
] |
611235 | 2 | null | 274123 | 0 | null | AutoEncoders are essentially regression, so you could calculate the $R^2 = 1-\frac{SSE}{SST}$, where $SST=\sum_i (x_i-\bar x)^2$ and $SSE$ is the reconstruction loss (sum, not mean). This metric has an upper bound of 1 for perfect reconstruction but doesn't have a lower bound (as the network outputs can be worse than the mean, in case of bad "learning").
You could maybe (at your own risk) interpret a positive score as the percentage of variance explained by the model using $k$ latent variables. The difference in $R^2$ between models with different dimensions could be associated with the % variance associated with these extra dimensions. But note that this depends on the network actually learning and reaching a good minimum, which in a NN trained with SGD is not guaranteed. Also, there's a question about the hidden dimensions before the final latent bottleneck. So, a lot of assumptions and approximations, but it could "kind-of" work.
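To make the bookkeeping concrete, here is a minimal NumPy sketch of this $R^2$ (I use the grand mean of $X$ in $SST$, which is one possible choice; per-feature means would be another):

```python
import numpy as np

def reconstruction_r2(X, X_hat):
    sse = np.sum((X - X_hat) ** 2)        # reconstruction loss (sum, not mean)
    sst = np.sum((X - X.mean()) ** 2)     # total sum of squares around the grand mean
    return 1.0 - sse / sst

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
r2_perfect = reconstruction_r2(X, X)                        # perfect reconstruction
r2_mean = reconstruction_r2(X, np.full_like(X, X.mean()))   # "predict the mean" baseline
```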
[Here's](https://colab.research.google.com/drive/10IaR4kyjtPJG28O5GwnuHhboh9H0GZGv?usp=sharing) a Colab notebook where I did some experiments on the MNIST data with some basic and shallow AE network. And here's the $R^2$ scores:
[](https://i.stack.imgur.com/JgCCF.png)
| null | CC BY-SA 4.0 | null | 2023-03-30T09:56:53.393 | 2023-03-30T09:56:53.393 | null | null | 117705 | null |
611236 | 1 | 611324 | null | 0 | 104 | If I sample a population distribution 2,000 times and get an estimator for the population mean, $\mu$, and the standard deviation, $\sigma$, how can I use these to get the probability that an observation is part of the population distribution?
Mathematically, say I sample the population distribution and get estimators for the mean and standard deviation. I assume my population is distributed:
$$ X \sim \mathcal{N}(\mu, \sigma) $$
where $X$ is a normally distributed random variable, $\mathcal{N}$ is a normal distribution, $\mu$ is an [estimator](https://en.wikipedia.org/wiki/Estimator#:%7E:text=In%20statistics%2C%20an%20estimator%20is,estimator%20of%20the%20population%20mean.) for the population mean, and $\sigma$ is an estimator for the population standard deviation.
How can I use this distribution to test the probability that an observation, $Y_i$, has been drawn from the population distribution?
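For concreteness, here is a sketch of the computation I have in mind: a two-sided tail probability. (I understand any single point has probability zero under a continuous distribution, so this is the closest quantity I can think of; the numbers below are illustrative stand-ins.)

```python
import math

mu, sigma = 0.0, 1.0   # stand-ins for the estimated mean and standard deviation
y = 1.96               # the observation Y_i
z = (y - mu) / sigma
# two-sided tail probability P(|X - mu| >= |y - mu|) under X ~ N(mu, sigma^2),
# using 1 - Phi(z) = 0.5 * erfc(z / sqrt(2))
p_two_sided = math.erfc(abs(z) / math.sqrt(2))
```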
| Calculating the probability my observation, $Y_i$, is drawn from a random variable $X$? | CC BY-SA 4.0 | null | 2023-03-30T10:32:21.253 | 2023-03-30T22:08:51.300 | 2023-03-30T19:33:53.643 | 363857 | 363857 | [
"hypothesis-testing",
"statistical-significance",
"normal-distribution",
"sampling",
"sampling-distribution"
] |
611237 | 2 | null | 611157 | 0 | null | I think I found what I need
$Y=1/(\beta_2 X_2+\beta_1 X_1+\beta_0)$
from here:
[https://cran.r-project.org/web/packages/GlmSimulatoR/vignettes/exploring_links_for_the_gaussian_distribution.html](https://cran.r-project.org/web/packages/GlmSimulatoR/vignettes/exploring_links_for_the_gaussian_distribution.html)
Example 1:
Person from Class 3, mean Age and basic_needs_covered = 4
1/(0.78+0.17+4(-0.11)) = 1.96*
Example 2:
Person from Class 2, mean Age and basic_needs_covered = 4
1/(0.78-0.07+4(-0.11)) = 3.70*
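A quick check of the arithmetic in Python (the coefficient names below are made up for illustration; the values are the ones quoted above):

```python
# coefficients as quoted above; variable names are illustrative
b0, b_class3, b_class2, b_needs = 0.78, 0.17, -0.07, -0.11

y_class3 = 1 / (b0 + b_class3 + 4 * b_needs)   # Class 3, mean Age, basic_needs_covered = 4
y_class2 = 1 / (b0 + b_class2 + 4 * b_needs)   # Class 2, mean Age, basic_needs_covered = 4
```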
| null | CC BY-SA 4.0 | null | 2023-03-30T10:40:25.443 | 2023-03-30T10:48:08.123 | 2023-03-30T10:48:08.123 | 383360 | 383360 | null |
611238 | 2 | null | 610709 | 0 | null | I found [another solution](https://www.quora.com/You-have-an-unfair-coin-for-which-heads-turns-up-with-probability-p-3-5-You-flip-the-coin-repeatedly-until-there-have-been-more-heads-than-tails-How-many-flips-on-average-does-this-take/answer/Dave-Smith-402) on Quora which ingeniously avoids the labour of finding out the PMF.
$\displaystyle \mathbb E[X] = p\cdot \mathbb E[H|X] + (1-p)\cdot \mathbb E[T|X] \tag{1}\label 1$
where,
- $\mathbb E[T|X]$ denotes the expected number of flips given that the first flip has been a tail.
- $\mathbb E[H|X]$ denotes the expected number of flips given that the first flip has been a heads.
In case the first flip is a heads, heads is already leading, so
$\mathbb E[H|X]=1.$
Think about the case where the first flip is a tail. $$\begin{cases}F_1(T)&= 1 \\ F_1(H)&=0\end{cases}$$
where $F_i$ represents the $i^{\texttt{th}}$ cumulative frequency.
After a further $\mathbb E[X]$ flips, it is expected that they'll add $n$ tails and $n+1$ heads.
$$\begin{cases}F_{\mathbb E[X]+1}(T) &=1+n \\ F_{\mathbb E[X]+1}(H) &=0+(n+1)\end{cases}$$
Now that heads and tails are at par, it is expected that after flipping a further $\mathbb E[X]$ times, heads will finally lead! (Say this time it adds $m$ tails and $m+1$ heads.)
$$\begin{cases}F_{2\mathbb E[X]+1}(T) &=1+n+m \\ \\ F_{2\mathbb E[X]+1}(H) &=0+(n+1)+(m+1)\end{cases}$$
Thus, we have:
$$\mathbb E[T|X]=2\mathbb E[X]+1.$$
Substituting these results in $\eqref 1$,
$$\mathbb E[X]=p+(1-p)(2\mathbb E[X]+1)\\ \implies \boxed{\mathbb E[X] = \frac{1}{2p-1}}$$
This makes sense only if $1\geqslant p>1/2$.
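As a sanity check, the closed form can be compared against simulation; a sketch with $p=0.6$, where the theory gives $\mathbb E[X]=1/(2\cdot 0.6-1)=5$ (the step cap is just a practical safeguard, not part of the model):

```python
import random

def flips_until_heads_lead(p, rng, cap=10_000):
    """Flip a p-coin until heads exceed tails; return the number of flips."""
    lead, flips = 0, 0
    while lead < 1 and flips < cap:
        flips += 1
        lead += 1 if rng.random() < p else -1
    return flips

rng = random.Random(0)
p = 0.6
n_sim = 100_000
mean_flips = sum(flips_until_heads_lead(p, rng) for _ in range(n_sim)) / n_sim
# theory: E[X] = 1/(2p - 1) = 5
```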
| null | CC BY-SA 4.0 | null | 2023-03-30T10:52:42.477 | 2023-03-30T12:46:35.320 | 2023-03-30T12:46:35.320 | 362671 | 380075 | null |
611240 | 1 | null | null | 0 | 13 | I developed a theoretical framework, based on signalling theory and further developed testable hypothesis. However, my hypotheses are rejected, results suggest the opposite. Countersignalling theory is what seems to explain my results. Typically, the signal I studied resulted in the reduction of information asymmetry problem, hence why I stuck to the signaling theory. Is it okay to just give explanation in the results that the reason could have been countersignaling. Or should one redo their theoretical framework?
| If hypotheses get rejected, should one recraft the theoretical framework to better highlight the results or just report the results? | CC BY-SA 4.0 | null | 2023-03-30T11:08:51.837 | 2023-03-30T11:08:51.837 | null | null | 369093 | [
"hypothesis-testing",
"research-design",
"signal-detection"
] |
611244 | 1 | null | null | 0 | 79 | Iam really trying to understand convergence in probability, but I have an example that I am struggling to understand. Perhaps pointing towards a deeper issue. Take the following simple case of convergence in probability to a constant using the exponential distribution:
[](https://i.stack.imgur.com/uUMyD.png)
I don't understand how this works. Suppose that we are doing an experiment of how much time passes until a new customer enters a shop. If this is the case, then what I don't get is how this sequence of random variables is defined on the same probability space. After all, a random variable is a function from outcomes to the real numbers, but here, to put it simply though inaccurately, the function itself does not change. That is, it is not as if different real values are assigned to various outcomes.
Instead, what changes is the rate at which events take place, or in other words it looks like the underlying probability measure changes and not the random variables themselves. If I have a probability measure in hand for various events, surely this means that the random variable that models this experiment must adopt the same rate which is determined by this measure? If this is the case, how can a sequence like this exist and be defined on the same probability space?
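To make my confusion concrete: if (as I believe) the linked example takes $X_n \sim \text{Exponential}(n)$, one construction I can imagine puts all $X_n$ on a single space via one uniform draw, $X_n(\omega)=-\ln U(\omega)/n$, so the functions change with $n$ while the underlying measure does not. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)     # one omega per entry; the measure never changes
eps = 0.1
p_exceed = []
for n in (1, 10, 100):
    x_n = -np.log(u) / n          # X_n(omega) = -ln(U(omega))/n ~ Exponential(rate n)
    p_exceed.append((x_n > eps).mean())
# theory: P(X_n > eps) = exp(-n * eps) -> 0, i.e. X_n ->p 0
```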
Reference:
[https://www.probabilitycourse.com/chapter7/7_2_5_convergence_in_probability.php](https://www.probabilitycourse.com/chapter7/7_2_5_convergence_in_probability.php)
| Random Variables and Convergence in Probability | CC BY-SA 4.0 | null | 2023-03-30T11:26:47.520 | 2023-03-30T18:44:53.300 | 2023-03-30T18:44:53.300 | 339190 | 339190 | [
"probability",
"convergence"
] |
611245 | 1 | null | null | 0 | 25 | Hypothetical experiment:
I want to determine the "best" method of cooking chips - oven or fryer.
I have ten bags of chips; each bag I split in two, cooking half in the oven and half in the fryer. For each bag, I rate the cooked chips out of ten for quality. I also time how long each method of cooking takes overall, as well as how long I physically spend cooking for each method.
I now have 6 columns X 10 rows of data. I want to compare the average quality score for each cooking method, and each length of time for each method.
But what test do I run, to compare the average time of cooking in conjunction with the average time I spend cooking, between the two methods? I have manually created a clustered bar chart, but really want to understand which statistical test to use.
Thanks!
| Which statistical test can I run to making a stacked bar chart in SPSS of the means of two time measures for one IV? | CC BY-SA 4.0 | null | 2023-03-30T11:26:50.777 | 2023-03-30T18:12:18.773 | null | null | 384497 | [
"regression"
] |
611246 | 1 | null | null | 1 | 17 |
I want to fit model parameters $\omega$ to a data set $y$, where the model is time dependent, i.e. I want to optimise $f(t,\omega)$. The main problem I'm facing right now is that the data is left censored (below $c$). So I need an error function for my fitting algorithm.
My idea would be to take the maximum likelihood function (or the log version of it), assuming that the data points $y$ are all normally distributed around a time dependent $\mu(t)$ and that the variance $\sigma$ does not depend on $t$. Let $y_i$ for $i\in{1,...,m}$ be all the censored data points , then
$L(\omega)=\prod_{i=1}^m\Phi(\frac{c-f(t_i,\omega)}{\sigma})\prod_{i=m+1}^n\phi(\frac{y_i-f(t_i,\omega)}{\sigma})$, (1)
where $\Phi$ is the cumulative distribution function and $\phi$ is the density function. In the uncensored case one could ignore $\sigma$ as it does not affect the order of different solutions. In the censored case, however, this is not so. So the question is which $\sigma$ to take. My idea would be to take the one that maximises equation (1), i.e.
$\tilde{\sigma}=argmax_\sigma\{L(\omega;\sigma)\}$.
Is there an efficient way to derive $\tilde \sigma$ or do I need to use numerical methods and if so, which would be efficient for this problem?
Is there a better choice for $\tilde \sigma$?
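To make this concrete, here is a sketch of how I would profile $\sigma$ numerically for a fixed $\omega$, under an assumed linear $f(t,\omega)=\omega_0+\omega_1 t$ (all names and values below are illustrative, not my actual model):

```python
import math
import numpy as np

erfc_v = np.vectorize(math.erfc)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
w_true, sigma_true, c = (1.0, 0.5), 1.0, 2.0   # illustrative linear model f(t, w) = w0 + w1*t
y = w_true[0] + w_true[1] * t + rng.normal(0.0, sigma_true, t.size)
cens = y < c                                    # left-censored observations
y_obs = np.where(cens, c, y)

def loglik(w0, w1, sigma):
    mu = w0 + w1 * t
    # censored part: log Phi((c - mu)/sigma), with Phi(x) = 0.5 * erfc(-x/sqrt(2))
    ll = np.log(0.5 * erfc_v(-(c - mu[cens]) / sigma / math.sqrt(2))).sum()
    # uncensored part: log phi((y - mu)/sigma) - log sigma
    r = (y_obs[~cens] - mu[~cens]) / sigma
    ll += (-0.5 * r**2 - 0.5 * math.log(2 * math.pi) - math.log(sigma)).sum()
    return ll

# profile sigma on a grid for fixed omega -- the tilde-sigma in the question
sigmas = np.linspace(0.5, 2.0, 151)
sigma_tilde = sigmas[np.argmax([loglik(*w_true, s) for s in sigmas])]
```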
Thanks :)
| Fitting censored data | CC BY-SA 4.0 | null | 2023-03-30T11:33:35.110 | 2023-03-30T11:33:35.110 | null | null | 384491 | [
"fitting"
] |
611247 | 2 | null | 610842 | 1 | null | Binning a variable by equally spaced percentiles will ensure intervals containing the same number of observations, i.e. the same density.
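A minimal sketch of equal-frequency binning with NumPy (four bins on illustrative data):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(size=1000)
edges = np.quantile(x, np.linspace(0, 1, 5))   # percentile cut points for 4 bins
bin_idx = np.digitize(x, edges[1:-1])          # assign each point to a bin 0..3
counts = np.bincount(bin_idx)
```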
| null | CC BY-SA 4.0 | null | 2023-03-30T11:42:03.280 | 2023-03-30T11:42:03.280 | null | null | 143489 | null |
611248 | 1 | null | null | 1 | 127 | I'm currently looking to run an ARDL model. I'm able to compute results that show cointegration; however, there is serial correlation when I run the Durbin-Watson and Breusch-Godfrey tests. To correct for this, I have tried using larger lags; however, this results in most of my coefficients becoming insignificant. I was wondering if there is another way to correct for this, such as using HAC standard errors?
If there is a solution e.g. HAC standard errors, how would I write it into an ARDL model on Stata?
| How to correct for serial correlation in an ARDL model without increasing lags? | CC BY-SA 4.0 | null | 2023-03-30T11:42:08.083 | 2023-04-13T15:19:18.340 | 2023-03-30T13:09:25.480 | 53690 | 384488 | [
"autocorrelation",
"stata",
"robust-standard-error",
"ardl"
] |
611249 | 1 | null | null | 0 | 65 | I have an application that after some manipulation boils down to an $\mathrm{ARMAX}(1,1,2)$ with some parametric restrictions:
$$y_{i,g,t}= k + \beta y_{i,g,t-1} + \epsilon_{i,t}+\beta_e \epsilon_{i,t-1}+\sum_{s=1}^2(\eta_{x,s}x_{i,t+1-s}+\eta_{\bar{x},s}\bar{x}_{i,t-s}+\eta_{z,s}z_{i,t+1-s}) \tag{1} \label{1}$$
with parametric restrictions
$\beta_e=-\beta$
$\eta_{\bar{x},1}=-\eta_{x,2}=\beta\eta_{x,1}$
$\eta_{\bar{x},2}=\eta_{z,2}=0$
and $\epsilon_{i,t}\sim N(0,\sigma^2)$, and for all $s,\,t$, $\mathrm{cov}(\epsilon_{i,t},\epsilon_{i,t-s})=0$. Also, $-1<\beta<1$ is assumed.
The problem: I have panel data and don't know how to estimate \eqref{1} (or implement it in, say, Stata for instance).
Background: motivation and what I have tried. The original setting is a dynamic linear model with social interactions as follows:
$$y_{i,g,t}= k + c x_{i,t} + d z_{g,t} + \beta m_{g,t-1} + \epsilon_{i,t} \tag{2} \label{2}$$
with $\epsilon_{i,t}\sim N(0,\sigma^2)$ and for all $s,\,t$, $\mathrm{cov}(\epsilon_{i,t},\epsilon_{i,t-s})=0$. Also, $-1<\beta<1$ is assumed.
where $y_{i,g,t}$ corresponds to an individual $i$'s action (member of group $g$) in time $t$. It depends on individual-specific characteristics $x_{i,t}$, group-specific characteristics $z_{g,t}$, and the average choice in that group in the previous period $m_{g,t-1}$.
[Brock and Durlauf (2001)](https://www.sciencedirect.com/science/article/abs/pii/S1573441201050073) show that the model is identified (i.e. there is no [reflection problem](https://academic.oup.com/restud/article-abstract/60/3/531/1570385)): taking expectations and using the lag operator $\mathrm{L}$:
$$m_{g,t}= k + c \bar{x}_{g,t} + d z_{g,t} + \beta \mathrm{L} m_{g,t} $$
$$\Rightarrow m_{g,t}=\dfrac{ k + c \bar{x}_{g,t} + d z_{g,t} }{1-\beta \mathrm{L}}$$
So, model \eqref{2} can be rewritten as:
$$y_{i,g,t}= \dfrac{k}{1-\beta} + c x_{i,t} + \sum_{s=1}^{\infty}\beta^{s}(c \bar{x}_{g,t-s} +\beta^{-1} d z_{g,t+1-s}) + \epsilon_{i,t} \tag{3}\label{3}$$
Thus, $y_{i,g,t}$ depends on the entire history $\{\bar{x}_{g,s},z_{g,s}\}_{s=0}^{t}$. The model is identified, because (i) $c$ is identified by the coefficients on $x_{i,t}$, (ii) then $\beta$ is identified by the ratio of coefficients on $\bar{x}_{g,t-1}$ and $\bar{x}_{g,t-2}$, (iii) then $k$ is identified by the constant and $d$ by the ratio of coefficients on $\bar{z}_{g,t}$ and $\bar{z}_{g,t-1}$.
Model \eqref{3} is reminiscent of the [Koyck (1954) model](https://repub.eur.nl/pub/1190) in its infinite part and parametric restriction (geometric progression in the coefficients). Subtracting $\beta y_{i,g,t-1}$ from \eqref{3} (Koyck transformation):
$$y_{i,g,t}= k +\beta y_{i,g,t-1} + c x_{i,t}-\beta c x_{i,t-1} + \beta c \bar{x}_{g,t-1} +d z_{g,t} + \epsilon_{i,t}-\beta\epsilon_{i,t-1} \tag{4}\label{4}$$
which can be cast as the general ARMAX(1,1,2) in \eqref{1} with the parametric restrictions I included.
| Estimation of $\mathrm{ARMAX}(1,1,2)$ using panel data | CC BY-SA 4.0 | null | 2023-03-30T11:43:25.537 | 2023-03-31T23:44:45.267 | 2023-03-31T23:44:45.267 | 254312 | 254312 | [
"regression",
"estimation",
"econometrics",
"stata",
"armax"
] |
611250 | 1 | null | null | 1 | 13 | I am conducting a mediation analysis using a 4-way decomposition method (the med4way command in Stata). According to ([https://pubmed.ncbi.nlm.nih.gov/30452641/](https://pubmed.ncbi.nlm.nih.gov/30452641/)), after getting the results of the 4-way decomposition, the formula to calculate mediation is ((mediated interaction + pure indirect effect)/total effect)*100.
My question is: should I include the mediated interaction in the formula if it is not statistically significant?
For example:
total effect = -0.1530608, p=0.001
mediated interaction = 0.0304828, p=0.830
pure indirect effect = -0.0721859, p=0.000
Many thanks,
Andrew
| Calculating mediation from total effect, mediation from interaction, and pure indirect effect? | CC BY-SA 4.0 | null | 2023-03-30T11:43:53.043 | 2023-03-30T11:43:53.043 | null | null | 384498 | [
"regression",
"survival",
"mediation",
"epidemiology"
] |
611252 | 1 | null | null | 0 | 31 | [](https://i.stack.imgur.com/daAJk.png)
From my understanding, the model has high bias but low variance. This indicates underfitting. I assume that the model can flawlessly match the training data since the error rate of misclassification stays zero as the train size grows up to around 124,000 samples. The sudden increase in misclassification error for the train set shows that as model complexity increases, the model tends to misclassify more because it is unable to capture the underlying pattern of data since it is a simplistic model. When it comes to the validation data, since this data is essentially a "never seen before" data, the validation error is initially quite high. The error for validation goes down as the model gradually learns more.
Am I correctly interpreting the graph?
| The concept of overfitting and underfitting | CC BY-SA 4.0 | null | 2023-03-30T12:00:04.007 | 2023-03-30T12:00:04.007 | null | null | 376954 | [
"machine-learning",
"supervised-learning"
] |
611253 | 2 | null | 610884 | 1 | null | Note the word standardized in standardized residuals. Meanwhile, you seem to be worried about the correlation estimate from raw/unstandardized residuals not being equal to the theoretical correlation of the standardized ones.
Suppose we have standardized innovations $(z_{1,t},z_{2,t})^\top$ that have a certain unconditional correlation $\rho=\text{Corr}(z_{1,t},z_{2,t})$. When multiplied by the time-varying standard deviation, they become raw innovations $(\varepsilon_{1,t},\varepsilon_{2,t})^\top=(\sigma_{1,t}z_{1,t},\sigma_{2,t}z_{2,t})^\top$. The unconditional correlation between them, $\xi=\text{Corr}(\varepsilon_{1,t},\varepsilon_{2,t})$ need not be equal to $\rho$. While $\text{Corr}(aX,bY)=\text{Corr}(X,Y)$ for constants $(a,b)^\top$, this does not apply for random variables $(U,V)^\top$: $\text{Corr}(UX,VY)\not\equiv \text{Corr}(X,Y)$.
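A quick simulation illustrating the point (the uniform "volatilities" below are illustrative stand-ins for GARCH $\sigma_{i,t}$; numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 200_000, 0.95
z1 = rng.normal(size=n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal(size=n)   # Corr(z1, z2) = 0.95
s1 = rng.uniform(0.5, 2.0, size=n)   # stand-ins for time-varying standard deviations
s2 = rng.uniform(0.5, 2.0, size=n)
corr_std = np.corrcoef(z1, z2)[0, 1]            # correlation of standardized innovations
corr_raw = np.corrcoef(s1 * z1, s2 * z2)[0, 1]  # correlation of raw innovations
```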
The remaining problem is why you still get a noticeable discrepancy between the expected and estimated unconditional correlation, $(\rho,\tilde\rho)^\top=(0.950,0.945)^\top$ with a huge sample of 100k points.
| null | CC BY-SA 4.0 | null | 2023-03-30T12:10:07.540 | 2023-03-30T12:10:07.540 | null | null | 53690 | null |
611254 | 1 | null | null | 3 | 104 | Let $X = (X_1, ..., X_n)$ be an independent sample from $N(\mu_1, \sigma_1^2)$ and $Y=(Y_1,...,Y_m)$ an independent sample from $N(\mu_2, \sigma_2^2)$. Consider the following hypothesis test given that $\sigma_1^2 = \sigma_2^2 = \sigma^2$ is unknow.
$$
H_0: \mu_1 = \mu_2 \leftrightarrow H_1: \mu_1 \neq \mu_2
$$
Notations: $\bar X$ and $\bar Y$ be the mean of sample, $\mu = \frac{n}{n+m}\bar X + \frac{m}{n+m}\bar Y$ be the pooled mean of two samples. $S_X^2 = \frac1{n-1}\sum (X_i - \bar X)^2$, $S_Y^2 = \frac1{m-1}\sum (Y_i - \bar Y)^2$, $S^2 = \frac{1}{n+m-1}\left(\sum_i (X_i -\mu)^2 + \sum_j(Y_j -\mu)^2\right)$.
Let test statistics $T_2$ be
$$
T_2 = \frac{\bar X - \bar Y}{\sqrt{\frac1n + \frac1m}\sqrt{\frac{(n-1)S_X^2 + (m-1)S_Y^2}{n+m-2}}} \sim t_{n+m-2}
$$
if $H_0$ is true by Cochran's theorem. The problem arises from the condition that $\sigma_1^2 = \sigma_2^2 = \sigma^2$. Let another test statistics $T_1$ be
$$
T_1 = \frac{\bar X - \bar Y}{\sqrt{\frac1n + \frac1m}{S}} \sim t_{n+m-1}
$$
if the $H_0$ is true. But $T_2$ is commonly used in test. My question is which is better by analyzing the power function.
To achieve that, we need to define the two different reject regions denoted $W_1$ and $W_2$ with significance level. I have done that,
$$
W_1 = \{(X, Y) \big| |T_1| > t_{n+m-1}(\alpha/2) \} \\
W_2 = \{(X, Y) \big| |T_2| > t_{n+m-2}(\alpha/2) \}
$$
where $t_k(\alpha/2)$ is the upper $\alpha/2$ quantile of t distribution with degree $k$.
The respect power functions are $\beta_i(\mu_1, \mu_2) = P((X, Y) \in W_i), i = 1,2$.
Assume that $(\mu_1, \mu_2)$ lies in the alternative space $\Theta_1 = \{ \mu_1\neq \mu_2 \}$. Then $T_2$ follows a noncentral $t$ distribution, but what is the distribution of $T_1$? I got stuck here since $S^2$ does not follow a (scaled) $\chi^2$ distribution with any degrees of freedom.
Btw, the key idea of comparing the two statistics is to analyze the power function. Alternative methods to compare them are appreciated.
Thanks in advance.
---
UPDATE
This is really a question in statistics. Although many people learn about the t-test in their college lectures, few think deeply about the conclusions they got from their textbooks.
It can be seen that the real distribution of $T_2$ under $\mathcal H_1$ is a noncentral Student's t distribution, while that of $T_1$ is, well, difficult to get. So I chose to simulate the values of the power functions in R.
```
library(ggplot2)
delta.mu=seq(-2,2,0.01)
alpha=0.05;sigma=1
T2.fun=function(rand.x,rand.y){
sigma_1=sqrt(((n-1)*var(rand.x)+(m-1)*var(rand.y))/(n+m-2))
return((mean(rand.x)-mean(rand.y))/(sqrt(1/m+1/n)*sigma_1))
}
T1.fun=function(rand.x,rand.y){
pooled.mean=mean(c(rand.x,rand.y))
sigma_2=sqrt((sum((rand.x-pooled.mean)^2)+sum((rand.y-pooled.mean)^2))/(n+m-1))
return((mean(rand.x)-mean(rand.y))/(sqrt(1/m+1/n)*sigma_2))
}
set.seed(1)
n=100;m=200
# reject region bound
t2=qt(1-alpha/2,n+m-2)
t1=qt(1-alpha/2,n+m-1)
pesudo.power.value1=c();pesudo.power.value2=c()
for(i in delta.mu){
T2=c();T1=c()
for(j in 1:1000){
# μ1-μ2=i
X=rnorm(n,i,sigma)
Y=rnorm(m,0,sigma)
T1=c(T1,T1.fun(X,Y))
T2=c(T2,T2.fun(X,Y))
}
# power function value
value1=mean(abs(T1) > t1)
value2=mean(abs(T2) > t2)
pesudo.power.value1=c(pesudo.power.value1,value1)
pesudo.power.value2=c(pesudo.power.value2,value2)
}
df1=data.frame(power.value=c(pesudo.power.value1,pesudo.power.value2),
x=delta.mu,method=gl(2,length(delta.mu)))
ggplot(df1,aes(x=x,y=power.value,color=method))+geom_smooth()
set.seed(1)
n=4;m=5
# reject region bound
t2=qt(1-alpha/2,n+m-2)
t1=qt(1-alpha/2,n+m-1)
pesudo.power.value1=c();pesudo.power.value2=c()
for(i in delta.mu){
T1=c();T2=c()
for(j in 1:1000){
# μ1-μ2=i
X=rnorm(n,i,sigma)
Y=rnorm(m,0,sigma)
T1=c(T1,T1.fun(X,Y))
T2=c(T2,T2.fun(X,Y))
}
# power function value
value1=mean(abs(T1) > t1)
value2=mean(abs(T2) > t2)
pesudo.power.value1=c(pesudo.power.value1,value1)
pesudo.power.value2=c(pesudo.power.value2,value2)
}
df2=data.frame(power.value=c(pesudo.power.value1,pesudo.power.value2),
x=delta.mu,method=gl(2,length(delta.mu)))
ggplot(df2,aes(x=x,y=power.value,color=method))+geom_smooth()
```
[](https://i.stack.imgur.com/dWpuN.png)
[](https://i.stack.imgur.com/pNAcl.png)
As you can see, with a large sample size the two statistics have similar test power; for a relatively small sample size, the traditional statistic ($T_2$) has higher power. So there are reasons why we choose $T_2$: it treats $X$ and $Y$ separately in the form of the statistic, which is better than taking them as a whole in $T_1$.
| About the statistics for the hypothesis test of the mean of two normal population | CC BY-SA 4.0 | null | 2023-03-30T12:31:26.827 | 2023-04-04T15:27:51.767 | null | null | 384501 | [
"hypothesis-testing",
"normal-distribution",
"t-test",
"statistical-power"
] |
611255 | 2 | null | 611157 | 0 | null | In a generalized linear model, a link function $g()$ defines the association between the expected outcome $y$ and the $k$ corresponding predictor values $x_j$:
$$g(y)=\beta_0 + \sum_{j=1}^k \beta_j x_j.$$
If you specify an inverse link in your model, $g(y)= 1/y$, then it will be difficult to have an easy interpretation of associations of individual coefficients with outcome, because you then have:
$$y=\frac{1}{\beta_0 + \sum_{j=1}^k \beta_j x_j} .$$
Even without interactions among predictors, the association of any one predictor with outcome thus depends on the values of the others. You can still use that formula for predictions, however, as your answer illustrates.
With a numeric outcome scale representing increasing values of loneliness, this type of data might instead be modeled via [ordinal logistic regression](https://stats.oarc.ucla.edu/r/dae/ordinal-logistic-regression/). With a logit link, the individual predictor coefficients then have reasonably straightforward interpretations in terms of changes in the log-odds of changing the outcome by 1 level, given that other predictor values are held constant.
| null | CC BY-SA 4.0 | null | 2023-03-30T12:31:50.527 | 2023-03-30T12:31:50.527 | null | null | 28500 | null |
611256 | 2 | null | 611223 | 1 | null | Basically, yes.
- Indeed, consistency of OLS for linear projection coefficients is basically an application of the law of large numbers.
- Indeed - ridge is a tool to trade off some bias against less variance in small samples. When sample size goes to infinity, that tradeoff vanishes when the number of parameters stays constant.
- Technically, yes, although - see 2) - I would not see why one would want a different limit.
- Probably - e.g., again see 2): the ridge adjustment only matters in finite samples.
| null | CC BY-SA 4.0 | null | 2023-03-30T12:36:56.230 | 2023-03-30T12:36:56.230 | null | null | 67799 | null |
611257 | 1 | null | null | 0 | 9 | I am not sure if what I am doing is correct, but here goes:
I want to compare the X big and X small, but they are different in actual sizes. How do I standardize it? Is it by getting each of the sample's (30 samples) z-score?
[](https://i.stack.imgur.com/YhtjT.png)
And after I standardize it, how can I compare it to their respective actual size?
I am truly lost on how to proceed, so I would appreciate even a point in the right direction (what to look for). Thank you!
| What to do after standardizing a data set using a t-score if I want to be able to compare it to a value? | CC BY-SA 4.0 | null | 2023-03-30T12:36:59.233 | 2023-03-30T12:36:59.233 | null | null | 384503 | [
"t-test"
] |
611258 | 2 | null | 444819 | 4 | null | It depends on what is meant by $R^2$. In simple settings, multiple definitions give equal values.
- Squared correlation between the feature and outcome, $(\text{corr}(x,y))^2$, at least for simple linear regression with just one feature
- Squared correlation between the true and predicted outcomes, $(\text{corr}(y,\hat y))^2$
- A comparison of model performance, in terms of square loss (sum of squared errors), to the performance of a model that predicts $\bar y$ every time
- The proportion of variance in $y$ that is explained by the regression
In more complicated settings, these are not all equal. Thus, it is not clear what constitutes the calculation of $R^2$ in such a situation.
I would say that #1 does not make sense unless we are interested in a linear model between two variables. However, that leaves the second option as viable. Unfortunately, this correlation need not have much to do with how close the predictions are to the true values. For instance, whether you predict the exactly correct values or always predict high (or low) by the same amount, this correlation will be perfect, such as $y = (1,2,3)$ yet $\hat y = (101, 102, 103)$. That such egregiously poor performance can be missed by this statistic makes it of questionable utility for model evaluation (though it might be useful to flag a model as having some kind of systemic bias that can be corrected). When we use a linear model fit with OLS (and use an intercept), such in-sample predictions cannot happen. When we deviate from such a setting, all bets are off.
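That $y=(1,2,3)$, $\hat y=(101,102,103)$ example is easy to verify numerically:

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0])
y_hat = np.array([101.0, 102.0, 103.0])
corr_sq = np.corrcoef(y, y_hat)[0, 1] ** 2                        # definition 2: perfect
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)   # definition 3: disastrous
```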
However, Minitab appears to take the stance that $R^2$ is calculated according to idea #3.
$$
R^2=1-\left(\dfrac{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\bar y
\right)^2
}\right)
$$
(This could be argued to be the Efron pseudo $R^2$ mentioned in the comments.)
This means that Minitab takes the stance, with which I agree, that $R^2$ is a function of the sum of squared errors, which is a typical optimization criterion for fitting the parameters of a nonlinear regression. Consequently, any criticism of $R^2$ is also a criticism of SSE, MSE, and RMSE.
I totally disagree with the following Minitab comment.
>
As you can see, the underlying assumptions for R-squared aren’t true for nonlinear regression.
I assumed nothing to give the above formula except that we are interested in estimating conditional means and use square loss to measure the pain of missing. You can go through the [decomposition of the total sum of squares](https://stats.stackexchange.com/questions/551915/interpreting-nonlinear-regression-r2) (denominator) to give the "proportion of variance explained" interpretation in the linear OLS setting (with an intercept), sure, but you do not have to.
Consequently, I totally disagree with Minitab on this.
| null | CC BY-SA 4.0 | null | 2023-03-30T12:40:30.013 | 2023-04-26T14:54:01.477 | 2023-04-26T14:54:01.477 | 247274 | 247274 | null |
611259 | 1 | null | null | 1 | 42 | I have a panel (N firms across 10 years) dataset on which I want to estimate and test a prediction model $f$:
\begin{equation}
y = f(x).
\end{equation}
Following common practice, I split my data into two parts: training data and test data. My training data consists of the first 9 years and my test data consists of the last 1 year. I also standardize my training data per cross section, i.e. per year.
Now, I want to also standardize my independent test variables, i.e. my test input. My intuition tells me that in order to avoid data leakage, I should be using the training data mean and standard deviation in order to standardize my test data. This is also explicitly mentioned in several Stack Overflow threads, e.g. [Thread Example](https://datascience.stackexchange.com/questions/27615/should-we-apply-normalization-to-test-data-as-well).
(1) However, how do I do this in my specific example? I have nine different means and nine different standard deviations, as I do not standardize my training data one time, but nine times. Do I just take the latest (i.e. the ninth cross section) mean and standard deviation to standardize my test data?
(2) What about winsorization? If I winsorize my training data per cross-section and want to winsorize my test input data, how do I proceed? The answer to this should in principle be the same one as to (1) I suppose.
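To illustrate the mechanics of (1) and (2) with training statistics pooled over all training years (which is one choice among several; using only the most recent cross-section would be another, and I don't know which fits best):

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(5.0, 2.0, size=(900, 3))   # stand-in for the nine training years
test = rng.normal(5.0, 2.0, size=(100, 3))    # stand-in for the held-out final year

mu, sd = train.mean(axis=0), train.std(axis=0)   # statistics from the training data only
test_std = (test - mu) / sd                      # no leakage from the test year

# winsorization limits likewise come from the training data
lo, hi = np.quantile(train, [0.01, 0.99], axis=0)
test_wins = np.clip(test, lo, hi)
```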
| Standardization of out-of-sample data | CC BY-SA 4.0 | null | 2023-03-30T12:40:38.563 | 2023-03-30T18:27:08.847 | 2023-03-30T14:02:48.053 | 182258 | 182258 | [
"machine-learning",
"outliers",
"standardization"
] |
611260 | 2 | null | 611169 | 0 | null | As you note, your "second option" omits almost all of the two-way interactions among predictors (except for the `Intervention:Time` interaction) while maintaining the 3-way interactions. Omitting lower-level terms or interactions while including higher-level interactions is generally not a good idea, as explained on [this page](https://stats.stackexchange.com/q/11009/28500).
What might be missing from your model is the baseline `Response` value as a predictor. That depends on the nature of `Response`. It often helps to include the pre-intervention values of a `Response` variable as a predictor, to control for differences between treatment and control groups in the post-intervention `Response` values. If the `Response` is a change from a baseline, however, then you don't want include the baseline level as a predictor. [This page](https://stats.stackexchange.com/q/3466/28500) and [this page](https://stats.stackexchange.com/q/489160/28500) might help guide your design.
Finally, note that your model with 3-way interactions involving 3 time points requires estimating a large number of coefficients and a correspondingly large sample size. You need to estimate: for individual coefficients, 1 each for `Intervention`, `Cov1`, and `Cov2` and 2 for `Time` (5 total individual coefficients); for 2-way interactions, 1 each for `Intervention:Cov1` and `Intervention:Cov2` and 2 each for the 3 two-way interactions involving `Time` (8 total 2-way interaction coefficients); for 3-way interactions, 2 for each of them. That's 15 coefficients to estimate beyond an intercept.
Based on a rule of thumb of 15 observations per coefficient to avoid overfitting, you will need on the order of 225 total observations. With 3 observations per individual and 2 treatment groups, that suggests at least 35 to 40 individuals per intervention group, with more possibly needed depending on the magnitude of the intervention effect you are investigating.
| null | CC BY-SA 4.0 | null | 2023-03-30T13:12:52.853 | 2023-03-30T13:12:52.853 | null | null | 28500 | null |
611261 | 1 | 611269 | null | 0 | 95 | I'm having trouble understanding a certain equation in a paper I'm reading.
- Let $A, B, C$ be random variables and let $\mathcal{B}$ be the range of $B$.
- Let $\mathbb{E}$ denote the expected value and $\mathbb{P}$ a probability measure.
- Let $\mathbb{1}_{X=x}$ be the indicator function that is $1$ iff $X$ takes on the value $x$ and $0$ otherwise.
The claim seems to be that, almost surely (original statement given below)
$$
\mathbb{E}[A | B, C] = \sum_{b \in \mathcal{B}} \frac{\mathbb{E}[A \cdot \mathbb{1}_{B=b} | C]}{\mathbb{P}[B=b | C]} \mathbb{1}_{B=b}
$$
I'm having trouble seeing why this statement is true, let alone proving it. To begin with, I am unsure what we even mean with conditioning an expectation on a random variable, i.e. writing $\mathbb{E}[A|B]$. I've seen conditioning on a concrete value of a random variable, i.e. $\mathbb{E}[A|B=b]$; or conditioning on an event, i.e. $\mathbb{E}[A|\{B=b\}]$ (which is probably the same thing.)
Further, what does "almost surely" mean here? Does this mean that the "probability" of this equivalence is 1? What would that mean?
I've tried applying definitions of conditional probability, expectation and the fact that $\mathbb{E}[\mathbb{1}_{B=b}] = \mathbb{P}[B=b]$ but I'm not really getting anywhere. Would appreciate if anyone could help me unwrap this.
---
Full statement from [Scornet2015 - Consistency of Random Forests](https://projecteuclid.org/journals/annals-of-statistics/volume-43/issue-4/Consistency-of-random-forests/10.1214/15-AOS1321.full): Let $Z_{i,j} = (Z_{i}, Z_{j})$ be another random variable.
>
... Thus, almost surely,
$$
\begin{aligned}
\mathbb{E}\left[Y_i\right. & \left.-m\left(\mathbf{X}_i\right) \mid Z_{i, j}, \mathbf{X}_i, \mathbf{X}_j, Y_j\right] \\
& =\sum_{\ell_1, \ell_2=1}^2 \frac{\mathbb{E}\left[\left(Y_i-m\left(\mathbf{X}_i\right)\right) \mathbb{1}_{Z_{i, j}=\left(\ell_1, \ell_2\right)} \mid \mathbf{X}_i, \mathbf{X}_j, Y_j\right]}{\mathbb{P}\left[Z_{i, j}=\left(\ell_1, \ell_2\right) \mid \mathbf{X}_i, \mathbf{X}_j, Y_j\right]} \mathbb{1}_{Z_{i, j}=\left(\ell_1, \ell_2\right)}
\end{aligned}
$$
| Equality on conditional expected value and indicator functions | CC BY-SA 4.0 | null | 2023-03-30T13:45:45.033 | 2023-03-31T22:58:05.697 | null | null | 178468 | [
"conditional-expectation"
] |
611263 | 1 | null | null | 0 | 13 | I'm looking at whether an intervention (treatment/control) has produced comparing timepoints
Here is my model:
```
formula <- MEASURE ~ timepoint*Group + (1|ID)
anova(lmer( formula, data=data, REML=TRUE))
```
Interaction and main effects are significant. Post-hoc t-tests show sig reductions in MEASURE (but for both treatment and control). My question is do I have to perform additional analysis to prove the reductions in the treatment group are bigger than those in the control group? Or is the interaction on its own enough to show that?
I have attached a graph for reference.
| Interaction and main effects in a mixed model | CC BY-SA 4.0 | null | 2023-03-30T14:04:25.800 | 2023-03-30T14:14:38.880 | 2023-03-30T14:14:38.880 | 362671 | 384513 | [
"mixed-model",
"repeated-measures"
] |
611264 | 2 | null | 610907 | 1 | null | The problem had to do with preprocessing of the data. Conceptually, the understanding of how RNNs would read an image file is correct, i.e. rows correspond to time steps and columns to features.
On the second question, it follows from the first realization, and the answer is that RNNs can model spectrograms/multidimensional data fine, as the results retrieved after fixing the data issues were good.
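As a concrete illustration of that reading (a hypothetical NumPy sketch, not code from the original question): a single-channel image of shape (height, width) is fed to an RNN as a sequence of `height` time steps, each carrying `width` features:

```python
import numpy as np

image = np.arange(28 * 28, dtype=np.float32).reshape(28, 28)  # (H, W)

# RNN view: time axis = rows, feature axis = columns
seq = image                     # already (time_steps, features) = (28, 28)
batch = seq[np.newaxis, ...]    # most frameworks expect (batch, time, features)

print(batch.shape)  # (1, 28, 28)
```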
| null | CC BY-SA 4.0 | null | 2023-03-30T14:05:50.953 | 2023-03-30T14:05:50.953 | null | null | 240802 | null |
611265 | 1 | 611846 | null | 0 | 26 | How could you explain that during a trial (RCT), some features may have no interactions between them if you consider them as categoric features, but have an interaction if you consider them as continuous features ?
Thank you !
| Interaction factor (RCT trial) | CC BY-SA 4.0 | null | 2023-03-30T14:10:23.580 | 2023-04-04T17:10:24.830 | null | null | 378883 | [
"statistical-significance",
"interaction"
] |
611266 | 1 | null | null | 0 | 63 | I have been struggling to implement a custom Anderson-Darling test in R for a custom weibull distribution.
The case is this:
- I have a bunch of datasets which contain % answers from 0 to 100
- For those datasets whose % answers are less than 1, I need to find out whether the answers are dominated by a positively skewed distribution below 1
- Therefore, I thought that I could use the Anderson-Darling test to test my data against a constructed "ideal" Weibull distribution:
Where:
- mu = 0.5
- alpha(shape) = 3
- beta(scale) = 0.5599 = mu/gamma(1+1/alpha)
[](https://i.stack.imgur.com/ZwmZ7.png)
As a lazy person, I first went about deconstructing the known test package. This is because the package's Anderson-Darling test is actually "too smart", in that it infers the shape (alpha) and scale (beta) from my transformed data in order to perform the normality test. However, as mentioned above, this is not what I want: I already know my parameters. So I thought about deconstructing cmstatr::anderson_darling_weibull, because why change something if it is not broken. Unfortunately, I have hit walls where the source code for the parameter [ad_p_unknown_param_fcn] is not accessible.
[](https://i.stack.imgur.com/8eFkS.png)
So, as a result. I set out writing my own Anderson-Darling test then:
[](https://i.stack.imgur.com/A4KkC.png)
The results from running through my custom test if I do not subset for my answer to be within 0 and 1 % yields AD_statistics of Inf and p_value of 0, however if I subset it for between 0 and 1 they actually give me sensible answers. I was just wanting to ask and confirm 2 questions:
- Is my math correct, or have I made some stupid mistake somewhere? Are there better packages to use and deconstruct instead?
- Is the infinity in the AD statistic just caused by the very left-skewed Weibull distribution with mean 0.5?
I am a little bit perplexed and would like some guidance. Many thanks in advance!
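For reference, here is a minimal sketch (in Python rather than R, with a hypothetical helper name) of an Anderson-Darling statistic computed against a fully specified Weibull CDF, i.e. with no parameters estimated from the data. Note how the log terms, and hence the statistic, become infinite as soon as any observation's CDF value is numerically 0 or 1:

```python
import numpy as np
from scipy import stats

def ad_statistic(x, cdf):
    """A^2 against a fully specified CDF (no parameters estimated from data)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    u = cdf(x)
    i = np.arange(1, n + 1)
    # the log terms (and hence A^2) blow up to infinity as soon as any u_i
    # is numerically 0 or 1, e.g. for data far outside the hypothesized bulk
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log1p(-u[::-1])))

shape, scale = 3.0, 0.5599  # alpha and beta from the question
dist = stats.weibull_min(shape, scale=scale)

x = dist.rvs(size=200, random_state=np.random.default_rng(42))
a2 = ad_statistic(x, dist.cdf)
print(a2)  # moderate value, since the data really come from the hypothesized dist
```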
| Customising the Anderson-Darling test for weibull distribution with known {mean, shape, scale} | CC BY-SA 4.0 | null | 2023-03-30T14:20:57.263 | 2023-03-30T14:20:57.263 | null | null | 384506 | [
"r",
"distributions",
"mathematical-statistics",
"weibull-distribution",
"anderson-darling-test"
] |
611268 | 1 | null | null | 2 | 45 | I've seen many blogposts like [this](https://www.quora.com/How-can-I-apply-reinforcement-learning-to-classification-problems) saying that you can use RL to do classification, but it takes much longer.
However, I don't really see why.
The REINFORCE objective is (for a 1 step RL problem like bandits):
$$
\nabla J_\theta \approx \sum_{s \in \text{simulators}} r_s \nabla \log \pi_\theta(a_s)
$$
and the categorical cross entropy used for classification is the following:
$$
\nabla J_\theta \approx \sum_{x,y \in \text{minibatch}} \sum_{c \in \text{categories}} 1_{c = y} \nabla \log f_\theta(c|x)
$$
which means that the CCE just considers the correct response and aims to maximize the corresponding probability, which is no more than what the REINFORCE objective will do with a soft policy in a few iterations; even more so if we use a baseline, so that a wrong response corresponds to a negative gradient on the selected probability.
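A quick numerical check of this claim (a hypothetical NumPy sketch): for a softmax policy with reward $1_{a=y}$, the exact expectation of the REINFORCE gradient equals the cross-entropy gradient scaled by $\pi_\theta(y)$, i.e. the same direction but a smaller magnitude:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

theta = np.array([0.5, -1.0, 2.0, 0.0])  # logits (hypothetical values)
pi = softmax(theta)
y = 1                                    # index of the correct class

# cross-entropy gradient of log pi[y] with respect to the logits
onehot_y = np.eye(len(theta))[y]
ce_grad = onehot_y - pi

# exact expectation of the REINFORCE estimator r * grad log pi(a)
# over a ~ pi, with reward r = 1{a == y}
reinforce_grad = np.zeros_like(theta)
for a in range(len(theta)):
    grad_log_pi_a = np.eye(len(theta))[a] - pi
    reinforce_grad += pi[a] * float(a == y) * grad_log_pi_a

print(np.allclose(reinforce_grad, pi[y] * ce_grad))  # True: same direction, scaled down
```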
| Isn't classification just a RL problem with binary rewards? | CC BY-SA 4.0 | null | 2023-03-30T14:35:22.433 | 2023-03-30T23:47:03.543 | 2023-03-30T23:47:03.543 | 346940 | 346940 | [
"classification",
"reinforcement-learning"
] |
611269 | 2 | null | 611261 | 1 | null | Let $(\Omega, F, P)$ be a probability space with $F$ a sigma-algebra on $\Omega$. Then for a set $S \in F$ such that $P(S) \in (0,1)$, the conditional probability given $S$ is the following probability measure:
$$
P(\cdot|S) : F \rightarrow [0,1], \ U \rightarrow P(U|S) := \frac{P(U\cap S)}{P(S)}.
$$
For a random variable $X$, we can define the expected value of $X$ given $S$ as
$$
E(X|S) := \int X(\omega)P(d\omega|S).
$$
The previous expression is equivalent to
$$
E(X|S) := \int X(\omega)P(d\omega \cap S)/P(S)
$$
$$
\Leftrightarrow E(X|S) := \int X(\omega)\mathbf{1}_S P(d\omega)/P(S)
$$
$$
\Leftrightarrow E(X|S) := \frac{E(X\mathbf{1}_S)}{P(S)}.
$$
Now, informally, for two random variables $X$ and $Y$ defined on $(\Omega, F, P)$, $E(X|Y)$ is not a number but a random variable, because $Y$ is a random variable; but the last equation still holds. This means that, for a value $y$ in the range of $Y$, and if $Y$ is a discrete random variable, we have:
$$
E(X|Y=y) := \frac{E(X\mathbf{1}_{Y=y})}{P(Y=y)},
$$
and if you add a third random variable $Z$
$$
E(X|Y=y,Z) := \frac{E(X\mathbf{1}_{Y=y}|Z)}{P(Y=y|Z)},
$$
which follows from the first equation.
From there it follows
$$
E(X|Y,Z) := \sum_{y\in\Omega}\frac{E(X\mathbf{1}_{Y=y}|Z)}{P(Y=y|Z)}\mathbf{1}_{Y=y}.
$$
Indeed, we have $1 = \sum_{y\in\Omega}\mathbf{1}_{Y=y}$.
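The chain of equalities can also be sanity-checked numerically (a hypothetical simulation, not from the paper); on the empirical measure the two sides agree exactly, by the same algebra as above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

C = rng.integers(0, 2, size=n)
B = (rng.random(n) < 0.3 + 0.4 * C).astype(int)  # B depends on C
A = B + C + rng.normal(size=n)                   # A depends on both

b, c = 1, 0
in_c = C == c

# left-hand side: E[A | B=b, C=c], estimated directly
lhs = A[(B == b) & in_c].mean()

# right-hand side: E[A * 1{B=b} | C=c] / P(B=b | C=c)
rhs = (A[in_c] * (B[in_c] == b)).mean() / (B[in_c] == b).mean()

print(lhs, rhs)  # identical up to floating point
```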
| null | CC BY-SA 4.0 | null | 2023-03-30T14:37:56.563 | 2023-03-30T16:28:31.330 | 2023-03-30T16:28:31.330 | 383929 | 383929 | null |
611270 | 1 | null | null | 1 | 31 | I'm trying to use scipy to fit a $\tanh$ function to some data. The data is of the form $(x_i, y_i)$ for $i=1,\cdots,N$, where $0\leq y_i \leq 1$. I choose $x_i$ to be linearly spaced, such that $x_0=0$ and $x_N=1$. $y_i$ are further obtained from repeating an experiment $M$ times, and checking an event happened or not ($0$ -> did not happen, $1$ -> did happen). The $y$ values are thus a point estimate of probability $p$ for a binomial variable: $y_i=\frac{1}{N}\sum\limits_{j=1}^M X_j$. The standard deviation of $y_i$ is thus $s_i=\sqrt{y_i(1-y_i)/M}$. This is how I end up with data that is spread like $0.5\tanh(a(x-b))+0.5$, which is what I'm trying to fit.
There is more spread in the data if the values are around $0.5$, but many data points at $0$ and $1$ have a standard deviation of $s_i=0$ (such that $X_j=0$ or $X_j=1$ for all measurement repeats $j$).
I tried the method described in [this answer](https://stackoverflow.com/questions/39434402/how-to-get-confidence-intervals-from-curve-fit). For that particular code, if I set one of the standard deviations to be `0`, i.e.:
```
y_spread[3] = 0
```
then I get a runtime error:
```
RuntimeWarning: divide by zero encountered in divide
```
This makes sense to me, as $s_i=0$, and you can't divide by $0$. Now, the question is, what is the correct way to handle this, statistically? A quick and dirty way could be to set the error to be something small, like `1e-6`, when it is `0`. This does result in a fit, but am I angering the statistics gods?
EDIT: added more information as requested in the comments.
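A minimal sketch of the setup described above (with hypothetical parameter values), showing both that the fit itself is unproblematic on clean data and that the binomial spread formula produces exact zeros at $y_i \in \{0, 1\}$:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return 0.5 * np.tanh(a * (x - b)) + 0.5

# noiseless synthetic data with known parameters (hypothetical values a=10, b=0.5)
x = np.linspace(0.0, 1.0, 21)
y = model(x, 10.0, 0.5)

popt, _ = curve_fit(model, x, y, p0=(8.0, 0.45))
print(popt)  # recovers approximately (10, 0.5)

# the binomial point-estimate spread is exactly zero whenever y_i is 0 or 1,
# so passing it as sigma= makes curve_fit divide by zero
M = 50
p_hat = np.array([0.0, 0.1, 0.5, 0.9, 1.0])  # example observed proportions
s = np.sqrt(p_hat * (1 - p_hat) / M)
print(s)  # first and last entries are exactly 0
```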
| Fitting data taking into account for the spread in data, which are zero for some data points | CC BY-SA 4.0 | null | 2023-03-30T14:53:36.323 | 2023-03-30T20:10:38.827 | 2023-03-30T20:10:38.827 | 115585 | 115585 | [
"python",
"curve-fitting",
"measurement-error",
"scipy",
"error-propagation"
] |
611271 | 2 | null | 90769 | 0 | null | Yosher's modification is for the case when X is a pd.DataFrame. The initial solution for np.arrays.
There seems to be multiple formulas around. In case you need to compare the score to the built-in version of Consensus K-Means I refer to this to the bic_kmeans function as found here: [pyckmeans](https://github.com/TankredO/pyckmeans/blob/main/pyckmeans/core/ckmeans.py).
| null | CC BY-SA 4.0 | null | 2023-03-30T15:07:12.190 | 2023-03-30T15:07:12.190 | null | null | 384515 | null |
611273 | 1 | 611279 | null | 0 | 45 | My function plots data and how the data is ordered depends on an argument of the function. The code looks unnecessarily repetitive but I can't figure out how to make the mutate() function (or any dplyr function) depend directly on a function argument. I hoped it could handle the if/else in the mutate call itself. Any ideas?
```
library(tidyverse)
data <- data.frame(item = c("a", "b", "c"),
mean = c(0.2, 0.3, 0.4))
plot <- function(data, sort = "none"){
if(sort == "desc"){
data %>%
mutate(item = fct_reorder(item, desc(mean))) %>%
ggplot2::ggplot(aes(x = item, y = mean)) +
geom_point()
} else if (sort == "asc") {
data %>%
mutate(item = fct_reorder(item, mean)) %>%
ggplot2::ggplot(aes(x = item, y = mean)) +
geom_point()
} else {
data %>%
ggplot2::ggplot(aes(x = item, y = mean)) +
geom_point()
}
}
plot(data)
```
| R conditional mutate within function | CC BY-SA 4.0 | null | 2023-03-30T15:30:10.263 | 2023-03-30T16:02:23.807 | null | null | 271373 | [
"r",
"ggplot2"
] |
611275 | 1 | 611285 | null | 0 | 24 | I am interested in fitting a Bayesian mixed-effects model to my data using the brms package. My data includes three grouping variables (Category, BioRep, and TechRep), and I want to estimate category-specific variances and intercepts. I have already generated simulated data and fit a model with the following code:
```
library(brms)
library(tidyverse)
# set up parameters
n_categories <- 100
n_tech_reps <- 5
n_bio_reps <- 5
higher_var_categories <- sample(1:n_categories, size = floor(n_categories/3))
batch_effect_scale <- 0.1
cat_sigmas <- rnorm(n_categories, mean = 2, sd = 0.5)
cat_sigmas[higher_var_categories] <- rnorm(length(higher_var_categories), mean = 4, sd = 1)
# generate simulated data
set.seed(1234)
data <- data.frame()
for (i in 1:n_categories) {
category_mean <- rnorm(1, mean = 10, sd = 2)
category_var <- cat_sigmas[i]
for (j in 1:n_bio_reps) {
bio_effect <- rnorm(1, mean = 0, sd = batch_effect_scale)
for (k in 1:n_tech_reps) {
tech_effect <- rnorm(1, mean = 0, sd = batch_effect_scale)
value <- rnorm(1, mean = category_mean + bio_effect + tech_effect, sd = category_var)
data <- rbind(data, data.frame(Category = i, BioRep = j, TechRep = k, Value = value))
}
}
}
# modify brmsformula to include category-specific sigma
brmsformula <- bf(Value ~ 1 + (1|Category) + (1|BioRep) + (1|TechRep),
sigma ~ 1 + (1 | Category))
# fit model with modified formula
model <- brm(brmsformula, data = data, chains = 4, cores = 4,
control = list(adapt_delta = 0.99, max_treedepth = 15))
summary(model)
# BiocManager::install("broom.mixed")
library(broom.mixed)
# extract parameter estimates and credible intervals
params <- tidy(model,conf.int = TRUE)
ranef(model)
est_cat_sigmas <- ranef(model)$Category[,'Estimate','sigma_Intercept']
# plot category-specific sigmas
cor.test(est_cat_sigmas, cat_sigmas)
```
This works great but I can't figure out how to generate the frequentist equivalent (if such exists??) for comparison.
My best attempt:
```
# fit mixed-effects model with category-specific variances
mixedmodel <- nlme(model=Value ~ Category + (1 | BioRep) + (1 | TechRep),
fixed = Category ~ 1,
groups = BioRep+TechRep~1,
data = data, weights = varIdent(form = ~ 1 | Category),
start = list(
fixed = list(Category = rep(0, n_categories)),
random = list(
BioRep = matrix(rep(0, n_bio_reps)),
TechRep = matrix(rep(0, n_tech_reps))
)
)
)
```
It is telling me that the grouping formula is wrong. (I also think the start parameter here could be wrong).
Help?
| How to fit a Frequentist Equivalent of Bayesian mixed-effects model with nlme or lme4 and obtain category-specific variances and intercepts? | CC BY-SA 4.0 | null | 2023-03-30T15:43:41.943 | 2023-03-30T16:45:15.367 | null | null | 132412 | [
"r",
"mixed-model",
"lme4-nlme",
"brms",
"rstan"
] |
611276 | 1 | null | null | 0 | 10 | I have research in which I asked a question from 3 different customer segments on a likert scale.
how can I find out if I am more likely to spend money on binge shopping on a particular day?
|Cust group | Like | Not sure | Dislike|
|:--------- |:----:|:--------:| ------:|
| Group 1 | 2300 | 500 | 20 |
| Group 2 | 17 | 250 | 5 |
| Group 3 | 200 | 310 | 40 |
What is the statistical test to determine if any of the values in the table are significantly different? E.g., are any of the groups significantly more likely to like/dislike?
thanks in advance!
| Determining statistically significant responses in a research - Likert scale answer by 3 customer groups. which responses are significantly diff? | CC BY-SA 4.0 | null | 2023-03-30T15:59:05.833 | 2023-03-30T15:59:05.833 | null | null | 384459 | [
"statistical-significance",
"categorical-data",
"likert",
"marketing"
] |
611277 | 2 | null | 546316 | 1 | null | Yes, I say that it is wrong to consider $R^2 = -1000$ to be the same as $R^2 = 0$. In the latter case, model performance is no worse than naïvely guessing the overall mean every time, while the former indicates that all of your fancy modeling cannot even do as well as predicting `average(a:a)` (to use some Excel syntax) every time. That is, there is a way to get better performance while spending less to get it.
If your cross-validation shows that such performance is so common and/or severe that the average performance is dragged down, you just have evidence that your model does a poor job of predicting. This is disappointing, sure, but the whole reason we do validation is to catch this kind of poor performance. One thought could be to consider the median performance, if you are concerned about one severe "outlier" ruining everything.
Finally, watch out for what calculation you are doing for your out-of-sample $R^2$. While [I disagree with the usual sklearn implementation](https://stats.stackexchange.com/questions/590199/how-to-motivate-the-definition-of-r2-in-sklearn-metrics-r2-score) and do believe my proposed calculation to have stronger motivation as a statistic or measure of performance that would be of interest, I concede that both calculations are likely to give similar answers in most circumstances. However, when your holdout set is just two points, there is a lot of room for having a markedly different mean of the holdout data than the training data. Since the mean minimizes the sum of squares, this means that the `sklearn` implementation is a lower bound on the equation I have proposed (their denominator cannot be larger than my denominator, and the numerators are the same), and your performance might improve, perhaps dramatically, if you use the $R^2$ calculation I prefer.
(Whether or not my calculation or any of these calculations should be called $R^2$ is a different story, and I am open to using different notation for these different statistics.)
$$
R^2_{\text{out-of-sample, Dave}}=
1-\left(\dfrac{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\bar y_{\text{in-sample}}
\right)^2
}\right)
$$$$
R^2_{\text{out-of-sample, scikit-learn}}=
1-\left(\dfrac{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\bar y_{\text{out-of-sample}}
\right)^2
}\right)
$$
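A small numerical sketch (hypothetical numbers) makes the difference concrete: with a tiny holdout set whose mean differs from the training mean, the two calculations can disagree dramatically, and the scikit-learn version can never exceed the other:

```python
import numpy as np

y_train = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
y_test = np.array([9.0, 10.0])   # tiny holdout with a very different mean
y_pred = np.array([8.0, 10.5])

sse = np.sum((y_test - y_pred) ** 2)

# denominator anchored at the in-sample mean vs the out-of-sample mean
r2_dave = 1 - sse / np.sum((y_test - y_train.mean()) ** 2)
r2_sklearn = 1 - sse / np.sum((y_test - y_test.mean()) ** 2)

print(r2_dave, r2_sklearn)  # roughly 0.97 vs -1.5
```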
| null | CC BY-SA 4.0 | null | 2023-03-30T15:59:22.530 | 2023-03-30T15:59:22.530 | null | null | 247274 | null |
611278 | 2 | null | 611273 | 0 | null | Since there is not statistics involved in the question, I think it belongs in the stackexchange where I see most R questions. I don't know how to port the question so I will let someone else do it.
But here is a solution, which might be less readable than yours but is more concise:
```
library(tidyverse)
data <- data.frame(item = c("a", "b", "c"),
mean = c(0.2, 0.3, 0.4))
plot_after_sort <- function(data, sort = "none")
{
data %>%
{if(sort != 'none') # if sort is necessary
{mutate(., across(item, # using across reduces repeating "item" twice with anonymous function ~ fct_reorder(., ..)
~ fct_reorder(., if(sort == 'asc') mean else desc(mean)) ) )} else . # don't forget the "else ." here, defaults to NULL when if() condition is not met
} %>%
ggplot2::ggplot(aes(x = item, y = mean)) +
geom_point()
}
plot_after_sort(data, 'desc')
```

Created on 2023-03-30 with [reprex v2.0.2](https://reprex.tidyverse.org)
| null | CC BY-SA 4.0 | null | 2023-03-30T15:59:55.807 | 2023-03-30T16:02:23.807 | 2023-03-30T16:02:23.807 | 283148 | 283148 | null |
611279 | 2 | null | 611273 | 0 | null | Here's one idea. This appears to do what you're asking for.
```
plot <- function(data, sort = "none"){
data %>%
mutate(item = if(sort == "asc"){
fct_reorder(item, mean) } else if(sort == "desc") {
fct_reorder(item, desc(mean)) } else {
item = item}) %>%
ggplot2::ggplot(aes(x = item, y = mean)) +
geom_point()
}
```
| null | CC BY-SA 4.0 | null | 2023-03-30T16:01:40.870 | 2023-03-30T16:01:40.870 | null | null | 46334 | null |
611280 | 1 | 612696 | null | 1 | 48 | I am trying to write a simulation based on what [Cuevas et al. (2004)](https://www.researchgate.net/publication/223287292_An_ANOVA_test_for_functional_data) did. In the paper they explain the simulation as follows:
$ X_{ij}(t)=m_i(t)+e_{ij}(t); j=1,...,10: $
Here $m_i(t)$ is described as
$m_i(t) = t(1-t), i=1,2,3 $ and where the values $t$ have been chosen equispaced in the interval [0,1].
In one part of the paper, they say that $e_{ij}(t)$ is a standard Brownian process with dispersion parameter $\sigma$. I took $\sigma_1=0.2$.
```
t <- seq(from = 0, to = 1, length.out = 25)
# 10 replications of m(t) + e(t), one per row (equivalent to the repeated rbind)
x <- t(replicate(10, t*(1-t) + cumsum(rnorm(25,0,0.2))))
colnames(x) <- t
```
Here is what I have tried so far. However, I am not getting the correct results using what I have now. I am not sure whether I am modeling the error terms correctly, i.e. whether they truly follow a standard Brownian motion. I would appreciate any guidance.
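For reference, standard Brownian motion with dispersion $\sigma$ on an equispaced grid has independent increments with standard deviation $\sigma\sqrt{\Delta t}$; a generic simulation sketch (in Python, not a drop-in fix for the R code above) would be:

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = 0.2
t = np.linspace(0.0, 1.0, 25)
dt = t[1] - t[0]

# B(0) = 0; independent increments ~ N(0, sigma^2 * dt)
increments = rng.normal(0.0, sigma * np.sqrt(dt), size=len(t) - 1)
B = np.concatenate(([0.0], np.cumsum(increments)))

x = t * (1 - t) + B  # one simulated trajectory X(t) = m(t) + e(t)
print(x.shape)       # (25,)
```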
| Replicating simulation results | CC BY-SA 4.0 | null | 2023-03-30T16:14:11.447 | 2023-04-12T15:42:52.610 | null | null | 306067 | [
"r",
"simulation"
] |
611281 | 1 | null | null | 0 | 16 | I'm interested in studying user preferences regarding streaming content. Given a discrete number of categories (ex: adventure, horror, comedy, family, drama) and the amount of time a given user watches content of each genre, the per user per month per genre percentages could be determined
```
user, month, genre, percent
----,------,------,--------
1, 1, 1, 50%
1, 1, 2, 30%
1, 1, 3, 10%
1, 1, 4, 5%
1, 1, 5, 5%
...
```
Over time, a given user's genre viewership might fluctuate month over month. There are two interesting cases, and I want to identify a means to distinguish them.
- Noise, suppose in the above example, user 1 in month 7, viewership of genre 4 (family) swells to 20%, but in the months that follow it returns to the typical baseline.
- Change, suppose that starting in month 7, percent viewership of genre 4 not only swells to 20% but hovers around this region for the remainder of the observed data. In effect, the user's preferences have changed.
I believe that the Dirichlet distribution can model proportional viewership per user per genre. However, I'm interested in classifying whether a change therein is case 1 (short term noise) or case 2 (change in preference).
My questions are:
- How might this be done?
- Is what I'm describing an established problem/research area? If so, where can I read more?
| Classifying changes in Dirichlet distribution over time? | CC BY-SA 4.0 | null | 2023-03-30T16:25:12.527 | 2023-03-30T16:25:12.527 | null | null | 288172 | [
"classification",
"dirichlet-distribution"
] |
611282 | 1 | 611300 | null | 0 | 23 | I am having bit of difficultly understanding the process of using censored data. I know there are plenty of R packages that can do this for the usual two-parameter Weibull likelihood formula, but I would like to apply it on an extended Weibull model that incorporates more parameters. Hence the need to incorporate a custom likelihood that can estimate these extra parameters. I'm also confused as to whether I can apply this to data where the equipment does not all have the same start time.
Say I have some likelihood function, f(t), and some reliability function R(t) that incorporates the extra parameters. I also have a dataset that has both failed and un-failed equipment, associated with their current operating age. According to [1](https://i.stack.imgur.com/iAIQI.png), the censored likelihood is:
[](https://i.stack.imgur.com/cT1UQ.png)
where "r is the number of failures and n is the number at risk."
Say I have the following data:
```
equip_id age failed
1 22.50548 0
2 31.79649 1
3 32.53883 1
4 21.90784 0
5 38.48035 1
```
I'm assuming that n=5 and r=3. So the first iteration would be:
f(31.79649) x [R(38.48035)]^(5-3)
second iteration is:
f(32.53883) x [R(38.48035)]^(5-3)
and third iteration is:
f(38.48035) x [R(38.48035)]^(5-3)
The likelihood value is then the joint product across these iterations (in practice, using the log-likelihood to avoid precision loss).
Is this the appropriate way to perform this on this type of data?
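For reference, a generic sketch (in Python, with a hypothetical function name) of one common right-censored log-likelihood form, in which each censored unit contributes its survival probability at its own age; whether this matches the Ebeling formula depends on the censoring scheme that formula assumes:

```python
import numpy as np
from scipy import stats

def weibull_censored_loglik(params, age, failed):
    """Right-censored Weibull log-likelihood (random-censoring form):
    failures contribute log f(t); censored units contribute log R(t)
    at their own ages. Extend f and R with extra parameters as needed."""
    shape, scale = params
    dist = stats.weibull_min(shape, scale=scale)
    age = np.asarray(age, dtype=float)
    failed = np.asarray(failed, dtype=bool)
    return dist.logpdf(age[failed]).sum() + dist.logsf(age[~failed]).sum()

age = [22.50548, 31.79649, 32.53883, 21.90784, 38.48035]
failed = [0, 1, 1, 0, 1]
ll = weibull_censored_loglik((2.0, 30.0), age, failed)  # hypothetical parameters
print(ll)
```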
[1](https://i.stack.imgur.com/iAIQI.png) Ebeling, C.E., 2019. An introduction to reliability and maintainability engineering. Waveland Press.
| Right-censoring Survivability with Custom Likelihood Function | CC BY-SA 4.0 | null | 2023-03-30T16:33:05.690 | 2023-03-30T18:32:54.593 | 2023-03-30T18:32:54.593 | 304915 | 304915 | [
"survival",
"reliability",
"censoring"
] |
611285 | 2 | null | 611275 | 1 | null | I believe this would be the frequentist equivalent. I couldn't get it to converge with your data, but I think this is the analogue to the brms model.
```
m <- lme(Value ~ 1,
random = list(~ 1|Category, ~ 1|BioRep, ~1|TechRep),
weights = varIdent(form = ~ 1|Category),
data = data)
```
I don't think you need to use `nlme()`. That's for non-linear mixed effect models. Also it appears you're using lme4 syntax with nlme functions. With the nlme package, random effects need to be specified using the random argument.
| null | CC BY-SA 4.0 | null | 2023-03-30T16:45:15.367 | 2023-03-30T16:45:15.367 | null | null | 46334 | null |
611286 | 1 | null | null | 6 | 548 | I know that two data sets can have the same Kendall's $\tau$ but different Pearson's $\rho$.
- What about the opposite? Can two different data sets have the same Pearson's $\rho$, but different Kendall's $\tau$? Or rather, if we know the Pearson correlation matrix $\rho_{ij}$, does it correspond to precisely one matrix of Kendall's $\tau_{ij}$, obtained by transforming the original matrix to Kendall's $\tau$ via $$\frac{2}{\pi}\arcsin{(\rho_{ij})}$$
- If we sample from a multivariate distribution with a certain population correlation matrix $\rho_{ij}$, is this equivalent to sampling from a distribution with a population matrix $\tau_{ij}$ obtained by the previously mentioned transformation? I.e. do the samples, which have a sample Pearson correlation matrix $\Sigma$ and associated Kendall's correlation matrix, come from the population with this $\tau_{ij}$ obtained by transforming population $\rho_{ij}$?
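For bivariate normal data, the arcsine relation quoted above (Greiner's relation) can be checked empirically with a hypothetical simulation; note this is specific to the normal/elliptical case, not a general bijection between the two coefficients:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.6
xy = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=50_000)
x, y = xy.T

tau, _ = stats.kendalltau(x, y)
print(tau, 2 / np.pi * np.arcsin(rho))  # close, for bivariate normal data
```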
| Is Kendall's tau uniquely determined by Pearson rho? | CC BY-SA 4.0 | null | 2023-03-30T16:50:37.810 | 2023-03-31T09:39:01.560 | 2023-03-31T09:39:01.560 | 362581 | 362581 | [
"correlation",
"sample",
"population"
] |
611287 | 1 | null | null | 0 | 26 | I ran an experiment with 10 participants. Each participant had to complete trials. My analyses include:
- 1 continuous dependent variable V.
- 5 categorical independent variables : A (2 categories), B (8 categories), C (2 categories), D (2 categories), P (the participant, 10 categories). The combination of these 5 variables allow to uniquely describe each trial (e.g., trial X had: A category 1, B category 6, C category 1, D category 2, P participant 10).
To note, each participant had the same number of trials with each category of the variables B, C and D, BUT the number of trials with each of the two categories of the variable A differed between participants.
I'm interested in answering the question: does A affect V? But I want to control for the potential effects B, C, D and P might have on V.
Which statistical test would be most appropriate to answer this question?
I started with an ANOVA including A, B, C, D and P, and all their interactions. However, I'm afraid it might not be appropriate because of the differing number of trials with each of the two categories of the variable A between participants, and also because each participant completed more than 1 trial.
Any help would be greatly appreciated.
| Appropriate statistical testing | CC BY-SA 4.0 | null | 2023-03-30T16:53:31.480 | 2023-03-30T16:53:31.480 | null | null | 384522 | [
"hypothesis-testing",
"statistical-significance",
"anova",
"variable"
] |
611288 | 1 | null | null | 0 | 9 | For one of my analyses, I have to find cases that are most different from my cases. It's like case-control matching, but "controls" should be as different as possible. What would be a good method to do this?
Thanks!
| "Reverse" case-control: is there a way to match most DISsimilar "controls" to my cases? | CC BY-SA 4.0 | null | 2023-03-30T16:53:33.467 | 2023-03-30T16:53:33.467 | null | null | 384526 | [
"matching"
] |
611290 | 1 | null | null | 0 | 28 | I have a GitHub dataset of three variables (star_count, fork_count and watch_count) for each repository. Now, I want to find a popularity score using the three variables. For example, I can normalize the value of three variables and take average to calculate the popularity score for a repository.
Now, I will average the three variables if three variables are rank correlated. To calculate rank correlation between two variables, I can calculate Spearman’s rho. However, what is the way to calculate the rank correlation between three variables?
One approach I thought of calculating rank correlation of every pair of variables and check if the pairs are correlated. Will it be a good approach? Is there any statistical test to find rank correlation between three variables?
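The pairwise approach described above can be sketched directly (hypothetical data and column names):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 300
stars = rng.pareto(2.0, size=n)                # heavy-tailed, like GitHub counts
forks = stars * rng.uniform(0.2, 0.5, size=n)  # correlated with stars
watches = stars * rng.uniform(0.1, 0.3, size=n)

cols = {"star_count": stars, "fork_count": forks, "watch_count": watches}
names = list(cols)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        rho, p = stats.spearmanr(cols[names[i]], cols[names[j]])
        print(f"{names[i]} vs {names[j]}: rho={rho:.2f}, p={p:.2g}")
```

If a single joint measure of agreement among all three rankings is wanted, Kendall's coefficient of concordance ($W$) is the usual statistic for three or more variables.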
| How to find rank correlation between three variables? | CC BY-SA 4.0 | null | 2023-03-30T17:03:21.250 | 2023-03-30T17:03:21.250 | null | null | 87123 | [
"correlation",
"spearman-rho"
] |
611291 | 1 | null | null | 0 | 14 | Suppose I performed 10 measurements, and have the dataset:
```
17.39 +/- 0.05
17.47 +/- 0.04
17.49 +/- 0.05
17.56 +/- 0.05
17.43 +/- 0.05
17.54 +/- 0.05
17.43 +/- 0.05
17.41 +/- 0.05
17.43 +/- 0.05
17.44 +/- 0.05
```
What is the error on the mean? The standard deviation of the measurements is `0.0528`. But this ignores the individual errors on the measurements. If I propagated these errors, I would get `0.049`. In an ideal world, would I expect these two values to match? Or do I have to combine them, somehow?
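The two quoted numbers can be reproduced directly in a sketch (note that 0.0528 is the population-style standard deviation, ddof=0, and 0.049 is the root-mean-square of the individual errors):

```python
import numpy as np

values = np.array([17.39, 17.47, 17.49, 17.56, 17.43,
                   17.54, 17.43, 17.41, 17.43, 17.44])
errors = np.array([0.05, 0.04, 0.05, 0.05, 0.05,
                   0.05, 0.05, 0.05, 0.05, 0.05])

sd = values.std()                       # scatter of the measurements (ddof=0), ~0.0528
rms_err = np.sqrt(np.mean(errors**2))   # typical per-point error, ~0.0491

# the conventional standard error of the mean, from the scatter alone
sem = values.std(ddof=1) / np.sqrt(len(values))

print(round(sd, 4), round(rms_err, 4), round(sem, 4))
```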
| How to propagate errors from two sources | CC BY-SA 4.0 | null | 2023-03-30T17:07:28.377 | 2023-03-30T17:07:28.377 | null | null | 115585 | [
"error",
"measurement-error",
"error-propagation"
] |
611292 | 2 | null | 611286 | 12 | null | >
Can two different data sets have the same Pearson's ρ, but different Kendall's τ?
[Anscombe's quartet](https://en.wikipedia.org/wiki/Anscombe%27s_quartet) gives you four two-dimensional datasets with (almost) identical Pearson correlations, but their Kendall correlations are quite different. In R:
```
> with(anscombe,cor(x1,y1,method="pearson"))
[1] 0.8164205
> with(anscombe,cor(x2,y2,method="pearson"))
[1] 0.8162365
> with(anscombe,cor(x1,y1,method="kendall"))
[1] 0.6363636
> with(anscombe,cor(x2,y2,method="kendall"))
[1] 0.5636364
```
In general, there is no bijective transformation between the two correlations. You can also see this from the fact that small changes in the data points will always change the Pearson correlation continuously - but the Kendall correlation will not change at first and then changes abruptly when data points swap positions, because the Kendall correlation only looks at whether two points are to the left or right of, respectively above or below, each other.
| null | CC BY-SA 4.0 | null | 2023-03-30T17:09:27.027 | 2023-03-30T17:09:27.027 | null | null | 1352 | null |
611294 | 1 | null | null | 1 | 26 | I'm trying to estimate a model where the data can be partitioned into subsets that each depend on a single parameter value, where the set of these parameters follow a known distribution. Because the data can be partitioned, I can consistently estimate each parameter separately on the relevant partition of the data using MLE. However, it seems like there would be efficiency gains from exploiting my knowledge of the distribution of parameters in the estimation. Is there a way to do this?
To give a concrete example, suppose there are $N$ parameters $\theta=\{\theta_1,\ldots,\theta_N\}$ that are distributed according to the standard normal distribution $\theta\sim\mathcal{N}(0,1)$. For each of the $N$ parameters, the data $X_n$ consist of $M_n$ observations drawn from a normal distribution where the mean is given by the associated parameter $\theta_n$ and the standard deviation is $1$, such that $X_n\sim\mathcal{N}(\theta_n,1)$. Ignoring the fact that the parameter values are drawn from a known distribution, the log-likelihood function is given by
$$
\ell(\theta,X)=\sum_{n=1}^N\left(-\frac{M_n}{2}\log(2\pi)-\frac{1}{2}\sum_{m=1}^{M_n}(x_{nm}-\theta_n)^2\right),
$$
which can be maximized either by taking the derivative with respect to each parameter to obtain the usual maximum likelihood estimators ($\hat\theta_n=\frac{1}{M_n}\sum_{m=1}^{M_n}x_{nm}$) or maximizing numerically. Is there a way to alter the likelihood function or otherwise change the estimation strategy to make use of the fact that $\theta\sim\mathcal{N}(0,1)$?
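A Python sketch of this setup (toy sizes chosen arbitrarily), computing the per-group MLE $\hat\theta_n=\frac{1}{M_n}\sum_m x_{nm}$ from simulated data — plus, purely as a numerical illustration that efficiency gains are plausible, a shrinkage version $\hat\theta_n \cdot M_n/(M_n+1)$ that exploits the known $\mathcal{N}(0,1)$ distribution of the parameters:

```
import numpy as np

rng = np.random.default_rng(0)
N = 5000                                 # number of parameters theta_n
M = rng.integers(2, 20, N)               # observations per parameter, M_n

theta = rng.normal(0.0, 1.0, N)          # theta_n ~ N(0, 1)
# per-group MLE: the sample mean of the M_n observations
theta_mle = np.array([rng.normal(theta[n], 1.0, M[n]).mean() for n in range(N)])
# shrink toward the prior mean 0, using the known N(0, 1) distribution
theta_shrunk = theta_mle * M / (M + 1)

rmse_mle = np.sqrt(np.mean((theta_mle - theta) ** 2))
rmse_shrunk = np.sqrt(np.mean((theta_shrunk - theta) ** 2))
```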
| Estimating parameters when drawn from known distribution | CC BY-SA 4.0 | null | 2023-03-30T17:42:54.930 | 2023-03-30T17:42:54.930 | null | null | 384530 | [
"normal-distribution",
"maximum-likelihood",
"estimation"
] |
611295 | 2 | null | 611286 | 4 | null | An indirect approach:
If two values $\rho_1 > \rho_2$ can relate to the same $\tau$, then imagine adjusting the dataset a little so that all three of these values increase.
We should be able to arrive at some values $\rho^\prime_1 > \rho_1$, $\rho^\prime_2 > \rho_2$ and $\tau^\prime > \tau$. This should be possible in a continuous way, so that at some point we get $\rho^\prime_2 = \rho_1$.
That means we have obtained the same Pearson's $\rho$, but with different $\tau^\prime$ and $\tau$.
| null | CC BY-SA 4.0 | null | 2023-03-30T17:45:54.470 | 2023-03-30T17:45:54.470 | null | null | 164061 | null |
611296 | 2 | null | 399637 | 0 | null | The graph reflects the heterogeneity across industries of their Innovations with respect to their Profits. I assume the graph is constructed so that every point, representing an industry, shows the mean Profits and Innovations of that industry. We could then observe which industries deviate the most from the trend line.
| null | CC BY-SA 4.0 | null | 2023-03-30T17:53:34.190 | 2023-03-30T17:53:34.190 | null | null | 384291 | null |
611298 | 2 | null | 611245 | 0 | null | The underlying statistical methodology is a hypothesis test.
In your situation, the simplest method is to run regressions separately for each method. Then, after you obtain the coefficients and 95% confidence intervals, you can indicate whether the methodologies are significantly different from each other by checking whether each coefficient falls within the other's interval.
Consider a situation with two methods:
$$\beta_{1}= 1.05, \text{ with confidence interval } [1.0, 1.1]$$
$$\beta_{2}= 1.02, \text{ with confidence interval } [0.97, 1.07]$$
Since $\beta_{2}$ falls into the interval $[1.0, 1.1]$, the two are statistically the same.
| null | CC BY-SA 4.0 | null | 2023-03-30T18:12:18.773 | 2023-03-30T18:12:18.773 | null | null | 384291 | null |
611299 | 1 | null | null | 0 | 5 | I've got a 257x257 correlation matrix of functional connectivity (fMRI) data. It is a symmetric matrix where each value is the Pearsons correlation of the brain area in the row with the brain area in the column. Here is some example data:
```
ORBdl_left ORBl_left ORBm_left ORBv_left ENTttd_left TTv_left
ORBdl_left 1.0000000000 0.0574354585 0.015540829 -0.0142136757 0.0101830340 0.0189498932
ORBl_left 0.0574354585 1.0000000000 -0.035261202 0.0242639912 0.0186030407 0.0385273022
```
As you can see, the way the brain is parcellated, many regions are broken up into subregions. For example, ORBdl_left, ORBl_left, ORBm_left, ORBv_left are all part of the ORB cortex. I want to find the correlation value of a single variable like TTv_left, with the ENTIRE ORB cortex.
I understand that generally, averaging correlation coefficients is a no-no and the answer is sometimes to perform a Fisher's transformation, THEN average values, then reverse transform it back. However, I was stumped enough to the point where I asked a biostatistician at my workplace about this issue and he stated that within this context he is "not 100% certain that it is advisable to do so in this setting."
If data is formatted in this way, is there any mathematically advisable way to get 'region' correlations from this data?
Thank you!
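For reference, the transform–average–back-transform recipe mentioned above (without taking a position on whether it is advisable here) looks like this in Python; the four subregion correlations with TTv_left are invented values for illustration:

```
import numpy as np

# illustrative Pearson correlations of TTv_left with four ORB subregions
r = np.array([0.0189, 0.0385, -0.0353, 0.0243])

# Fisher z-transform, average, then back-transform
z = np.arctanh(r)
r_region = np.tanh(z.mean())
```

For correlations this close to zero, the result is nearly identical to the plain average of the coefficients.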
| Averaging brain subregion correlation coefficients into a single measure | CC BY-SA 4.0 | null | 2023-03-30T18:16:38.520 | 2023-03-30T18:16:38.520 | null | null | 99819 | [
"correlation",
"mean",
"pearson-r",
"neuroimaging"
] |
611300 | 2 | null | 611282 | 1 | null | I don't have access to the reference you cite, but I think that formula only works when all censoring times are the same, at $t_r$, and has an ambiguity that is leading to some confusion.
[This page](https://stats.stackexchange.com/q/145164/28500) and its links provides the general form of the likelihood for observed, censored, and truncated times to events.
The first factor in your likelihood product, $f(t_i| \theta_1,\dots \theta_k)$, is the contribution from cases with exact observed event times $t_i$. $R(t_r)$ is the contribution to likelihood from a right-censored observation at time $t_r$, the survival function at that time.
I think that $[R(t_r)]^{n-r}$ (a) is intended to be outside of the product over the $r$ observed event times, and (b) only can be used that way if all right-censoring times are at $t_r$. With that interpretation, the formula makes sense. That's not the case in general, however.
The likelihood for your example data would be proportional to:
$$R(22.5)\times f(31.8) \times f(32.5) \times R(21.9) \times f(38.5)$$
with the implied dependence of $f$ and $R$ on the parameter values.
If you can define your probability density for events as a function of parameters properly, then the [flexsurv package](https://cran.r-project.org/package=flexsurv) should be able to fit it to your data.
It's not clear what you mean by "the equipment does not all have the same start time." In general you define `time = 0` with respect to the particular probability model that you have in mind. That would typically be the time at which a piece of equipment was put into service, and the event/censoring time would be relative to that. If you think that the actual calendar time of entry into service affects the reliability, then you might include that as a covariate in a parametric model in which the values of some of the Weibull parameters are a function of the calendar date of entry into service.
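To make that likelihood concrete, here is a hedged Python sketch (using `scipy` rather than the R flexsurv package mentioned above) that maximizes exactly that product for an assumed Weibull model — observed failures contribute $\log f(t)$, censored observations contribute $\log R(t)$:

```
import numpy as np
from scipy.stats import weibull_min
from scipy.optimize import minimize

# times and event indicators for the example: 1 = observed failure, 0 = right-censored
t = np.array([22.5, 31.8, 32.5, 21.9, 38.5])
d = np.array([0, 1, 1, 0, 1])

def nll(params):
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    # observed events contribute log f(t); censored observations contribute log R(t)
    loglik = (d * weibull_min.logpdf(t, shape, scale=scale)
              + (1 - d) * weibull_min.logsf(t, shape, scale=scale))
    return -loglik.sum()

fit = minimize(nll, x0=[1.0, 30.0], method="Nelder-Mead")
shape_hat, scale_hat = fit.x
```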
| null | CC BY-SA 4.0 | null | 2023-03-30T18:21:51.000 | 2023-03-30T18:28:41.450 | 2023-03-30T18:28:41.450 | 28500 | 28500 | null |
611302 | 1 | null | null | 0 | 76 | I'm using Python statsmodel to do logistic regression. I'm trying out their `glm(family=sm.families.Binomial())` and `logit()` models. Please correct me if I'm wrong but technically they should be the same model.
Here are the full sample code for reference
```
glm_model = sm.formula.glm("Y ~ X1 + X2 + ... + Xn", family=sm.families.Binomial(), data=df_train).fit()
```
[](https://i.stack.imgur.com/avTLv.png)
```
logit_model = sm.formula.logit("Y ~ X1 + X2 + ... + Xn", data=df_train).fit()
```
[](https://i.stack.imgur.com/brk9n.png)
So 2 things
- Why are the coefficients between the 2 models inverted? I assume the logit model one makes more sense (in the context of the training data), but I'm curious if there's an argument in the glm() function which I'm missing
- Why do some coefficients in the logit model have nan p-value while the glm model doesn't?
Thanks for your help! I normally use R but I'm moving to Python now. If I replicate this in R it does mimic the result of the logit model here, but without nan p-values.
| Python Statsmodel logit nan p-value (vs glm model) | CC BY-SA 4.0 | null | 2023-03-30T18:27:44.517 | 2023-03-30T23:18:16.737 | null | null | 384535 | [
"logistic",
"python",
"generalized-linear-model"
] |
611303 | 1 | null | null | 1 | 82 | I have several ranking distributions and would, for each one, like to fit a [Zipf distribution](https://en.wikipedia.org/wiki/Zipf%27s_law), and estimate the goodness of fit relative to some standard benchmark.
With the Matlab code below, I tried to do a sanity check and see if a "textbook" Zipf rank distribution passes the statistical test. Clearly something is wrong, as it does not. If that doesn't, nothing will!
Using the Kolmogorov–Smirnov test, or the Anderson–Darling test with a custom-built (non-normal) distribution in place of the chi-squared test, does not change this.
```
% Define some empirical frequency distribution
x = 1:10;
freq = 1000 ./ (x .^ 1.5); % textbook Zipf frequencies
% Define the Zipf distribution
alpha = 1.5; % Shape parameter, 1.5 is apparently a good all-round value to start with
N = sum(freq); % Total number of observations
k = 1:length(x); % Rank of each observation
zipf_dist = N ./ (k.^alpha); % Compute the Zipf distribution
% Plot our empirical frequency distribution alongside the Zipf distribution
figure;
bar(x, freq); % or freq/N
hold on;
plot(x, zipf_dist, 'r--');
xlabel('Rank');
ylabel('Frequency');
legend('Observed', 'Zipf');
% Compute the goodness of fit using the chi-squared test
expected_freq = zipf_dist ./ sum(zipf_dist) .* N; % normalize so expected counts sum to N
chi_squared = sum((freq - expected_freq).^2 ./ expected_freq);
dof = length(freq) - 1;
p_value = 1 - chi2cdf(chi_squared, dof);
% Display the results
fprintf('Chi-squared statistic = %.4f\n', chi_squared);
fprintf('p-value = %.4f\n', p_value);
if p_value < 0.05
fprintf('Conclusion: The data is not from a Zipf distribution.\n');
else
fprintf('Conclusion: The data is from a Zipf distribution.\n');
end
```
| Testing goodness of fit for a Zipf distribution (in Matlab) | CC BY-SA 4.0 | null | 2023-03-30T18:35:02.860 | 2023-04-28T11:03:45.730 | 2023-04-04T13:04:16.040 | 41307 | 41307 | [
"model",
"matlab",
"goodness-of-fit",
"curve-fitting",
"zipf"
] |
611305 | 2 | null | 610044 | 1 | null | If you are getting $R^2<0$, then I assume you are calculating according to:
$$
R^2=1-\left(\dfrac{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\bar y
\right)^2
}\right)
$$
This is how `sklearn.metrics.r2_score` does the calculation, for instance.
In that case, you are right that $R^2<0$ is possible, but this is because the above equation need not be related to $\left(\text{corr}\left(y, \hat y\right)\right)^2$ like it is in OLS linear regression with an intercept. Even in such a setting, the relationship breaks down if you go to out-of-sample predictions like I suspect are of interest to someone working on neural networks. Consequently, $\sqrt{1-\left(\dfrac{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\bar y
\right)^2
}\right)}$ does not seem like a valuable statistic to calculate, even if the value is a real number. The only value might be to know this statistic is imaginary, since that flags situations where performance is poor (worse than the baseline of predicting $\bar y$ every time), but we would already know that from getting $R^2<0$.
Since there are multiple features in your neural network, or at least one feature that nonlinearly predicts the outcome, there is not even a connection between $\sqrt{R^2}$ and the correlation between the feature and target like there is in simple linear regression. (I actually do not see much value in calculating $\sqrt{R^2}$ in linear regression, especially if there are multiple features.)
Thus, I would not worry about getting imaginary square roots of $R^2<0$. If someone demands to know the correlation:
- Part of your job is to tell people when they have misconceptions. If someone is demanding to know a meaningless statistic or a statistic that does not mean what they think it means, you should be addressing the questions they should have asked. (It is easy to be a jerk about this, so don't be.)
- It is easy to calculate $\text{corr}\left(y, \hat y\right)$ if someone insists on it. It might be that this value is high, perhaps even one, despite $y$ being quite different from $\hat y$, such as $y = (1, 2, 3)$ and $\hat y = (101, 102, 103)$ or $\hat y = (101, 201, 301)$. Again, it is on you to explain why a high correlation between predicted and actual values can hide major issues with performance and to do assessments that expose such issues.
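A tiny sketch of that last point, implementing the $R^2$ formula above directly (the same convention as `sklearn.metrics.r2_score`) on the $y = (1, 2, 3)$, $\hat y = (101, 102, 103)$ example:

```
import numpy as np

def r2(y, yhat):
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

y = [1, 2, 3]
yhat = [101, 102, 103]

score = r2(y, yhat)                  # hugely negative: 1 - 30000/2
corr = np.corrcoef(y, yhat)[0, 1]    # exactly 1: predictions are a constant shift of y
```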
| null | CC BY-SA 4.0 | null | 2023-03-30T18:47:39.877 | 2023-03-30T18:47:39.877 | null | null | 247274 | null |
611306 | 1 | null | null | 0 | 7 | Let's say you want to train a model so that you can make some predictions when you get some future data. You find some training data. Some of the training records have labels but other records do not. One approach might be to apply semi-supervised machine learning involving two separate models. The first model trains on only labeled training records. Then the first model makes predictions on the training records without labels. Finally, you train a second model on all of the training data: records with real ground-truth labels (what the first model used for learning), and records that don't have original labels but that now have pseudo labels, which are the predictions from the first model.
Does the second model add any value? Will the second model provide meaningfully different (more accurate) predictions than the first model? Isn't the second model at best going to give the same (but over-confident) answers as the first model? Presumably a model can't outperform its ground truth (i.e., in a classification task, 100% accuracy is the most accurate you can get). If a large portion of the second model's ground truth was generated from the first model, I don't see how the second model could be more accurate than the first model. That makes me wonder, why even bother with the second model?
| Semi-supervised learning with two models | CC BY-SA 4.0 | null | 2023-03-30T18:56:59.780 | 2023-03-30T19:04:59.463 | 2023-03-30T19:04:59.463 | 8401 | 8401 | [
"machine-learning",
"semi-supervised-learning"
] |
611307 | 2 | null | 444819 | 2 | null | Taking the other side: The $R^2$ in OLS has a number of definitions and interpretations that are endemic to OLS. For instance, a "perfect fit" has $R^2 = 1$ and, conversely, a "worthless" fit has $R^2 = 0$. In OLS the $R^2$ is interpreted as a "proportion of 'explained' variance" in the response. It also has the formula $1 - SSR/SST$.
You say "non-linear regression" but I think you mean generalized linear models. These are heteroscedastic models that not only transform the response variable, but also express the mean-variance relationship explicitly, such as in a Poisson regression where the variance of the response is proportional to the mean of the response. Contrast this with non-linear least squares where the $R^2$ continues to be a very useful metric.
So if we consider GLMs, none of the interpretations we enjoy about the $R^2$ are valid.
- A "perfect" fit will not necessarily perfectly predict all observations at every observed level. So, the theoretical upper bound may be some value less than 1. Adding a predictor to a model does not optimally improve the $R^2$ in terms of that predictor's contribution: non-linear least squares would do that.
- The probability model for a GLM does not invoke a "residual" per se (or the methods that do, do not treat the residual as normally distributed). So neither does the formula make any sense, nor can it be interpreted as a fraction of "explained" variance.
- While incremental increases in the $R^2$ indicate improved predictiveness, you can't be guaranteed of the scales or unit differences. For instance, if two candidate predictors $u,v$ increase $R^2$ by 5% when added as separate regressors in separate models, the first, $u$, may predict variance really well in the tails but overall be a very lousy predictor and have disappointingly non-significant results, whereas the second, $v$, may not appear to improve predictions much, but when accounting for areas with low variance, the overall contributions are substantially better and corroborate statistical significance.
- Applying $R^2$ in a GLM regardless is called a pseudo $R^2$.
In that regard, the GLM has a much more useful statistic, the [deviance](https://en.wikipedia.org/wiki/Deviance_(statistics)), which even R reports as a default model summary statistic. The deviance generalizes the residual for an OLS model, which has an identity link and gaussian variance structure. But for models such as Poisson the expression is:
$$ D = 2 \left( y \log \frac{y}{\hat{y}} - y + \hat{y} \right)$$
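For reference, the Poisson deviance $2\sum\big(y\log(y/\hat y)-y+\hat y\big)$, with the usual convention that the $y\log(y/\hat y)$ term is zero when $y=0$, can be computed with a standalone helper like this (not tied to any particular package); it is zero for a saturated fit and grows as $\hat y$ moves away from $y$:

```
import numpy as np

def poisson_deviance(y, mu):
    y, mu = np.asarray(y, float), np.asarray(mu, float)
    # y * log(y / mu), taking the y = 0 term as 0 to avoid log(0)
    term = np.where(y > 0, y * np.log(np.where(y > 0, y, 1.0) / mu), 0.0)
    return 2.0 * np.sum(term - y + mu)
```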
| null | CC BY-SA 4.0 | null | 2023-03-30T19:03:44.650 | 2023-03-30T19:03:44.650 | null | null | 8013 | null |
611308 | 1 | null | null | 0 | 40 | I have financial data errors (from linear regression) that form a fat tailed distribution, I would like to calculate confidence intervals using that distribution, but I am not sure how to do that due to the fat tails?
| Financial fat tailed distribution - confidence intervals | CC BY-SA 4.0 | null | 2023-03-30T19:16:40.893 | 2023-03-30T22:30:27.747 | 2023-03-30T22:30:27.747 | 8013 | 384537 | [
"confidence-interval"
] |
611310 | 2 | null | 606915 | 2 | null | In some circles (perhaps just `PyTorch` users), "negative log likelihood" seems to be slang for "negative log-likelihood of the binomial/multinomial", yes.
This is abuse of terminology since, as you correctly point out, likelihood is a general concept in statistics. To defend `PyTorch` slightly, however, when you are predicting a category (or the probabilities of categories), the likelihood kind of has to be binomial/multinomial, indicating a rare situation where the modeler knows the likelihood. This is in contrast to a situation where the modeler uses square loss because of an assumed, but not known, Gaussian likelihood. It might be that the likelihood is not really Gaussian, but for categorical outcomes, the distribution is so simple that binomial/multinomial is kind of the only way the distribution can be. The probabilities of each category completely determine the distribution.
(I'm not actually sold on this because of the possibility of something like a [beta-binomial distribution](https://stats.stackexchange.com/q/611314/247274), but it is at least almost true, explaining the slang.)
| null | CC BY-SA 4.0 | null | 2023-03-30T19:55:55.460 | 2023-04-01T14:48:58.150 | 2023-04-01T14:48:58.150 | 247274 | 247274 | null |
611313 | 1 | null | null | 0 | 20 | I have a dataset of about 100 records and about 80% of those records belong to one class. The rest belong to another class. I'm building two Bayesian models (logistic regression and multiple linear regression) with the Bambi python library which runs the NUTS algorithm for approximating the posterior distribution of the model parameters. How well does the NUTS algorithm handle class imbalance? I'm curious about this because I'd like to know whether I should be employing some other techniques.
Here is a little bit more information about the model. The model I'm building is attempting to predict election outcomes for each county in a specific state. The logistic regression model outputs a binary value of 0 or 1, representing Democrat or Republican. The goal is to use a model trained on previous election years to predict out-of-sample election results from a year that was not part of the training dataset.
| No-U-Turn Sampler (NUTS) for handling class imbalance | CC BY-SA 4.0 | null | 2023-03-30T20:25:50.807 | 2023-03-30T20:47:23.167 | 2023-03-30T20:47:23.167 | 384547 | 384547 | [
"bayesian",
"sampling",
"markov-chain-montecarlo",
"unbalanced-classes",
"bambi"
] |
611314 | 1 | null | null | 0 | 64 | When a variable is binary, it sure seems like its distribution is totally characterized by the probability of being in one group: the variable takes one value with probability $p$ and the other value with probability $1-p$. This is essentially a Bernoulli distribution.
But...
There is a beta-binomial distribution, which says that there are $n$ trials (flips of a coin) of a Bernoulli random variable, where each Bernoulli has a probability $p$ that is drawn from a beta distribution. Thus, the parameters of a beta-binomial distribution are not $n$ and $p$ like the binomial but $n$ (still) as well as the $a$ and $b$ of a beta distribution.
Beta-binomial distributions on $n$ trials can give distributions that simply cannot be achieved by binomial distributions, so the beta-binomial is different from the usual binomial. When we restrict the beta-binomial to be on only one trial, do we get anything useful that is not captured by the usual Bernoulli probability parameter?
That's just one idea. There are all kinds of other distributions one can put on $[0,1]$ that are not beta distributions but are valid distributions from which Bernoulli probability parameters can be drawn. When we have multiple trials of a Bernoulli, I definitely get why more than just binomial is in play. Is this useful for just one Bernoulli trial, however?
I am thinking of a situation where we have $iid$ data like $0,1,0,0,1,0,1,1,1,0$. Sure, it seems like that is Bernoulli$(0.5)$, but could it be beta-Bernoulli$(a,b)$ for some values of $a$ and $b$ that are parameters of a beta distribution?
(Really, I want to consider this in a logistic-ish regression where the outcome is binary and the conditional distribution is modeled. With just one trial, though, I am not sure how the modeling would go. Ultimately, there is a probability of an event happening or not and then the event happens with that probability...or does it?)
EDIT
Simulating in `R`, it sure seems like there is not a difference.
```
library(ggplot2)
set.seed(2023)
N <- 10000
R <- 10000
a <- 1/3
b <- 1
p_bernoulli <- p_betabernoulli <- rep(NA, R)
for (i in 1:R){
p <- rbeta(N, a, b)
p_betabernoulli[i] <- mean(rbinom(N, 1, p))
p_bernoulli[i] <- mean(rbinom(N, 1, a/(a + b)))
if (i %% 75 == 0 | i < 5){
print(paste(
i/R*100,
"% Complete",
sep = ""
))
}
}
d0 <- data.frame(
p = p_bernoulli,
Distribution = "Bernoulli"
)
d1 <- data.frame(
p = p_betabernoulli,
Distribution = "Beta-Bernoulli"
)
d <- rbind(d0, d1)
ggplot(d, aes(x = p, fill = Distribution)) +
geom_density(alpha = 0.25)
```
[](https://i.stack.imgur.com/YKrav.png)
| For a binary outcome, is the distribution necessarily Bernoulli? Could, for instance, "beta-Bernoulli be in play? | CC BY-SA 4.0 | null | 2023-03-30T20:26:04.297 | 2023-03-30T20:47:43.170 | 2023-03-30T20:47:43.170 | 247274 | 247274 | [
"distributions",
"binomial-distribution",
"beta-distribution",
"beta-binomial-distribution"
] |
611315 | 1 | null | null | 2 | 10 | My dataset contains multiple continuous predictors (responses from n neurons) from which I would like to predict two categorical variables (A and B), where B is nested under A. A can take 8 different values, while B can take only 2 values within each level of A. My question is, what would be the best model to use this in this case to predict A and B from my predictors?
My goal is to predict A and B from neural data collected under different conditions and see how the predictions compare to across (A and B) and within variable types (A or B under different conditions).
Here is an example MATLAB code that can be used to generate the surrogate data:
```
N = 50; % number of neurons
T = 100; % number of trials
K = 8; % number of categories for variable A
for iT = 1:T
neural_response{iT,1} = randn(N, 1); % generate vector of neural population response
end
% generate variable A
A = randi(K, T, 1); % randomly assign one of K categories to each time point
% generate variable B within each level of variable A
B = zeros(T, 1);
for k = 1:K
idx = A == k; % find time points corresponding to category k
B(idx) = randi(2, sum(idx), 1) - 1; % randomly assign 0 or 1 to each time point
end
% combine into a table
neural_data = table(neural_response(:), A, B, 'VariableNames', {'neural_response', 'A', 'B'});
```
Any help is appreciated. Thanks!
| Predicting nested categorical variables from a set of continuous predictors | CC BY-SA 4.0 | null | 2023-03-30T20:40:08.783 | 2023-03-30T20:40:08.783 | null | null | 384550 | [
"classification",
"matlab",
"neuroscience",
"neuroimaging"
] |
611316 | 1 | null | null | 0 | 23 | I'm trying to select feature columns in a binary classification model.
I'd like to remove near-constant columns that don't predict the target column values very well. One way of defining this is the number of times a constant value occurs in parallel with a positive classification.
I can create dummy columns with the minimum level of match that I'm happy with and calculate their correlation. Doing this 2,000 plus times gives me a sample with a well defined mean and standard deviation for the expected correlation at that level of "matching".
My plan is then to calculate the correlation between each feature column and the target column, and use a z-test to find the left-tailed probability that the real correlation is below my minimum match value.
Can I use my calculated mean and standard deviation to perform this z-test?
| Can I use the Z-Test with estimators for mean and standard deviation? | CC BY-SA 4.0 | null | 2023-03-30T20:45:58.373 | 2023-03-30T20:45:58.373 | null | null | 363857 | [
"hypothesis-testing",
"normal-distribution",
"sampling",
"z-test"
] |
611317 | 1 | null | null | 2 | 20 | I have a set of businesses that have invoices from clients each day. I use this year-over-year to find how transactions have grown each year, i.e. 2 invoices Sep 2021 and 3 invoices Sep 2022 is (3-2)/2 = 50% yoy txn growth.
I want to find a method to discern (preferably in Python) if a business has significantly more or significantly less invoices each month than the average number of monthly transactions across all businesses.
I'm looking at the calculation for statistical power but unsure how to use it in this case.
My original data would look like:
```
business|time_value|txns|yoy_txn_growth
1111 |2022-02-01|10 |null
1111 |2023-02-01|11 |0.10
1111 |2022-03-01|10 |null
1111 |2023-03-01|12 |0.20
2222 |2022-02-01|10 |null
2222 |2023-02-01|13 |0.30
2222 |2022-03-01|10 |null
2222 |2023-03-01|14 |0.40
...
```
I'm looking to arrive at a meaningful answer of how many invoices and/or businesses need to exist to have a 0.05 significance. Not sure if I need to decide what difference in txns/yoy_txn_growth from the mean would be significant but it can be 1 standard deviation.
Could someone outline the steps I should follow for this usecase to derive what number of businesses and/or invoices I would need to find a meaningful result? The null hypothesis can either be 1) Practice X has significantly more/significantly less txns than the mean this month 2) Practice X has significantly more/significantly less yoy_txn_growth than the mean this month.
| Proper Way to Conduct Statistical Test on Businesses with Invoice Growth/Loss | CC BY-SA 4.0 | null | 2023-03-30T20:49:14.627 | 2023-03-30T20:49:14.627 | null | null | 313842 | [
"statistical-significance",
"python",
"statistical-power"
] |
611318 | 2 | null | 605479 | 0 | null | For RMSE, the answer is straightforward: calculate RMSE on the holdout observations for each group. This could lead to insights like, "We are good at predicting this in general, but we get closer predictions when we predict for dogs than we do for wolves." You calculate on the holdout set because, as you point out, performance calculations on the training data are biased high, perhaps immensely so.
For $R^2$, it is not clear what to do, and it depends on what you want to learn. If you want to learn how your model predictions are compared to always predicting the mean of the group, do your calculations with such a $\bar y$. If you want to know about always predicting the overall mean, use that $\bar y$.
$$
R^2=1-\left(\dfrac{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\bar y
\right)^2
}\right)
$$
| null | CC BY-SA 4.0 | null | 2023-03-30T20:49:42.310 | 2023-03-30T20:49:42.310 | null | null | 247274 | null |
611321 | 1 | null | null | 1 | 19 | I have a dataset I'd like to break into 2 or 3 subsets to account for outliers, train each subset with an individual classifier, and combine them. I understand that bagging uses subsets of the base training set; however, those subsets are randomly selected for each classifier.
Therefore, is it possible to allocate a specific training subset to a specific classifier in bagging, or is there an ensemble method that can allocate specific training subsets to specific classifiers?
P.S. I'm not sure whether the question fits here or on Stack Overflow, hence, if necessary, kindly let me know whether to relocate the question. Thank you
| allocating specific training subset to specific classifier for ensemble mode | CC BY-SA 4.0 | null | 2023-03-30T21:32:58.763 | 2023-03-30T21:32:58.763 | null | null | 384551 | [
"machine-learning",
"classification",
"bootstrap",
"bagging"
] |
611323 | 1 | null | null | 2 | 29 | The [example section of Wikipedia's article on Statistical model](https://en.wikipedia.org/wiki/Statistical_model#An_example) says:
>
Suppose that we have a population of children, with the ages of the children distributed uniformly, in the population. The height of a child will be stochastically related to the age: e.g. when we know that a child is of age 7, this influences the chance of the child being 1.5 meters tall. We could formalize that relationship in a linear regression model, like this: $\text{height}_i = b_0 + b_1 \text{age}_i + \epsilon_i$, where $b_0$ is the intercept, $b_1$ is a parameter that age is multiplied by to obtain a prediction of height, $\epsilon_i$ is the error term, and $i$ identifies the child. This implies that height is predicted by age, with some error.
To do statistical inference, we would first need to assume some probability distributions for the $\epsilon_i$. For instance, we might assume that the $\epsilon_i$ distributions are i.i.d. Gaussian, with zero mean. In this instance, the model would have 3 parameters: $b_0$, $b_1$, and the variance of the Gaussian distribution.
It sounds like this model assumes that each point on the line $y = b_0 + b_1x$ represents the mean height of children of age specified by the $x$ coordinate of the point. For example: the point $[2.4; 0.85]$ laying on the line means that the mean height of children of age 2.4 years is 0.85 meters.
Furthermore, it sounds like the model assumes that the variance $\sigma^2$ of height is exactly the same for all the different ages (so it does not change with the age of the children, which is fine for the purpose of this simplified example).
I picture the statistical model as follows: for each point on the line, there is a vertical 1D Gaussian (parallel to the $Y$ axis) with variance $\sigma^2$. Therefore there is infinitely many of those vertical 1D Gaussians (with the same $\sigma^2$) along the line, one Gaussian for each point on the line.
I guess all the 1D Gaussians combined are the part that defines the PDF of this statistical model as a whole. However, I am not sure if I understand how to normalize the PDF of the model so the probability densities in the 2D space near the whole line add up to 1. Since the integral of every PDF needs to be normalized to add up to 1, it seems like we cannot use the normalization factor $1 / (\sigma \sqrt {2\pi})$ of the 1D Gaussian, but instead we need to compensate for the infinite amount of 1D Gaussians along the line.
This is how I would solve the problem: I would define the sample space $\mathcal{S}$ as a subspace of $\mathbb R^2$, so $\mathcal{S} = \left\{ (x,y) \in \mathbb{R}^2 \, | \, x_{\text{min}} < x < x_{\text{max}} \,\,\wedge\,\, y_{\text{min}} < y < y_{\text{max}} \right\}$ would be the domain of the function $f(x,y)$ representing the PDF of the model (1D Gaussians placed on the line, with the line not being infinite, but confined to a finite patch on $\mathbb R^2$). This would ensure that $f(x,y)$ is not defined over an infinitely large area, so that the integral of $f(x,y)$ isn't infinite and the PDF can be properly normalized:
$$
f(x,y) = \frac{1}{\gamma}
\text{exp}\left(
-0.5 \left( \frac{y-(b_0 + b_1 x)}{\sigma} \right)^2
\right)
$$
Where $\gamma = \iint_\mathcal{S} \text{exp}\left( -0.5 \left( \frac{y-(b_0 + b_1 x)}{\sigma} \right)^2 \right) \,dx\,dy$ (the volume under all the Gaussians), so that $\iint_\mathcal{S} f(x,y) \,dx\,dy = 1$.
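As a sanity check, the normalizing constant can be computed numerically. A sketch (pure Python, with made-up values $b_0=0.5$, $b_1=0.1$, $\sigma=0.2$ and a finite rectangle for $\mathcal{S}$): when the $y$-range comfortably contains the Gaussians, each vertical slice integrates to $\sigma\sqrt{2\pi}$, so the constant is approximately $(x_{\text{max}}-x_{\text{min}})\,\sigma\sqrt{2\pi}$.

```python
import math

def unnormalized(x, y, b0=0.5, b1=0.1, sigma=0.2):
    """The exp term of the vertical Gaussian at x (without the 1/gamma factor)."""
    return math.exp(-0.5 * ((y - (b0 + b1 * x)) / sigma) ** 2)

def gamma_numeric(x_min, x_max, y_min, y_max, n=300):
    """Midpoint Riemann-sum approximation of the double integral over S."""
    dx = (x_max - x_min) / n
    dy = (y_max - y_min) / n
    total = 0.0
    for i in range(n):
        x = x_min + (i + 0.5) * dx
        for j in range(n):
            y = y_min + (j + 0.5) * dy
            total += unnormalized(x, y)
    return total * dx * dy

g_num = gamma_numeric(0.0, 10.0, -5.0, 10.0)
g_closed = (10.0 - 0.0) * 0.2 * math.sqrt(2 * math.pi)  # each slice contributes sigma*sqrt(2*pi)
print(g_num, g_closed)
```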
However, using a finite-sized domain for the PDF seems ugly and not something a statistician would do. After all, the value of the 1D Gaussian function is defined over the whole $\mathbb R^2$, without the need to artificially "cut it off" by defining it over a limited domain.
What is the correct way of defining the PDF for the Wikipedia example? What would it look like (assuming the model has 3 parameters $b_0, b_1, \sigma^2$ as mentioned above)?
| The Wikipedia example of a Statistical model and its PDF | CC BY-SA 4.0 | null | 2023-03-30T22:00:03.827 | 2023-03-30T22:00:03.827 | null | null | 384502 | [
"modeling",
"density-function",
"model"
] |
611324 | 2 | null | 611236 | 1 | null | >
I assume my population is distributed:
$$ X \sim \mathcal{N}(\mu, \sigma) $$
where $X$ is a normally distributed random variable, $\mathcal{N}$ is a normal distribution, $\mu$ is an estimator for the population mean, and $\sigma$ is an estimator for the population standard deviation.
This statement does not make sense. Either $\mu$ and $\sigma$ are the parameters of the population, or they are estimates of the parameters. They cannot be both.
>
How can I use this distribution to test the probability that an observation, $Y_i$, has been drawn from the population distribution?
Assuming that you are asking about what you are saying here, you want to calculate the probability for $Y_i$ under the $\mathcal{N}(\mu, \sigma)$ distribution. If that is the case, just plug $Y_i$ into the Gaussian cumulative distribution function parametrized by $\mu$ and $\sigma$ and read the probability it returns. There's nothing more to it, if this actually is what you mean.
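As a minimal sketch of that plug-in (pure Python; the $\hat\mu$, $\hat\sigma$ and observation values are made up), the Gaussian CDF can be computed from the error function:

```python
import math

def normal_cdf(y, mu, sigma):
    """P(Y <= y) for Y ~ N(mu, sigma^2), via the error function."""
    return 0.5 * (1.0 + math.erf((y - mu) / (sigma * math.sqrt(2.0))))

mu_hat, sigma_hat = 10.0, 2.0   # illustrative estimates
y_i = 13.0                      # the observation of interest
p_lower = normal_cdf(y_i, mu_hat, sigma_hat)   # P(Y <= y_i)
p_two_tail = 2 * min(p_lower, 1 - p_lower)     # probability of a value at least this extreme
print(p_lower, p_two_tail)
```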
Something like
>
$$ P(Y_i = y | Y_i \sim \mathcal{N}(\hat \mu, \hat \sigma))$$
(in the comments), does not make sense, it's like asking "is the color red if we know that the color is red". You cannot have conditional distribution conditioned on this distribution itself. The only way to read this notation would be as $P(Y_i|Y_i)$, which is a tautology.
| null | CC BY-SA 4.0 | null | 2023-03-30T22:01:12.020 | 2023-03-30T22:08:51.300 | 2023-03-30T22:08:51.300 | 35989 | 35989 | null |
611325 | 1 | null | null | 0 | 17 | This hypothesis testing method bugs me a lot, and this is my understanding of this test intuitively after watching so many anologies in Youtube:
You have this counter-idea:
>
The population mean is $\mu$
, which you want to prove wrong.
So, you do some statistics and get the mean of your statistic, and it turns out the distribution of such a statistic is z-like (or symmetric, so it's more like a t-like distribution). And your real-world mean $\mu_{real}$ is on the left of $\mu$
>
Half of your data is on the left/right of $\mu_{real}$.
Then, you decide to challenge the counter-idea by assuming that the mean $\mu$ is the correct mean of the population statistics. In other words, this means $H_0$ is correct.
Then you continue to wonder what percentage of these extremes of your data (the left half in this case, because $\mu_{real}$ is on the left) are under this assumption.
[](https://i.stack.imgur.com/1gYhh.jpg)
[](https://i.stack.imgur.com/gkRoP.jpg)
>
So you calculate the z-score or t-score of $\mu_{real}$ under this assumption?
Looking up the table, you find out that:
- The percentage is too small! ($\le\alpha$). In other words: most of what you've seen seems very different from what $H_{0}$ is saying! You reject the hypothesis.
- The percentage is "significant". So, $H_{0}$ is probable.
Question:
Is my overall understanding correct here? Please feel free to point out my mistakes :D.
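For what it's worth, the recipe I described above can be written out as a one-sample z-test (a sketch in pure Python; the numbers are made up):

```python
import math

def z_test_left(sample_mean, mu0, sigma, n):
    """Left-tailed one-sample z-test of H0: mu = mu0 against H1: mu < mu0."""
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    # P(Z <= z) under H0: how extreme the observed mean is if H0 were true
    p = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return z, p

z, p = z_test_left(sample_mean=9.5, mu0=10.0, sigma=2.0, n=100)
alpha = 0.05
print(z, p, "reject H0" if p <= alpha else "fail to reject H0")
```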
| Intuition behind p-test | CC BY-SA 4.0 | null | 2023-03-30T22:10:07.650 | 2023-03-30T22:10:07.650 | null | null | 384174 | [
"hypothesis-testing",
"inference"
] |
611326 | 2 | null | 611302 | 0 | null | The Y was of a categorical data type (with only 2 levels, 1 and 0). I changed it to numeric, and both the logit and glm coefficients are now consistent, and the p-values are no longer null. Thanks to @whuber
| null | CC BY-SA 4.0 | null | 2023-03-30T23:18:16.737 | 2023-03-30T23:18:16.737 | null | null | 384535 | null |
611327 | 1 | null | null | 1 | 57 | I'm working on a rental price prediction project and I want to make sure I'm evaluating things correctly. Basically, after I fitted some models and computed R-squared on training and testing, the gap between them was a bit too large: the score was about 0.68 on testing but 0.76 on training. This is clearly overfitting, and I tried a lot of techniques to reduce it, but they didn't improve things much. I later reinspected the data and found that my y (prices) is a bit skewed, so I applied a log transformation to it; it turns out I got a better score by doing so. The R-squared on training is now around 0.78 and on testing 0.75.
I know that from here:
[How to compute the R-squared for a transformed response variable?](https://stats.stackexchange.com/questions/133025/how-to-compute-the-r-squared-for-a-transformed-response-variable)
I can't compare the R-squared between two models with different dependent variables, but my point is, seems like I reduced overfitting a little bit by doing so.
I just want to make sure I'm doing things on the right track, any suggestion is appreciated.
Edit: Someone pointed out I shouldn't transform the data simply by looking at the marginal distribution of y, but why do people on Kaggle do exactly that: [https://www.kaggle.com/code/apapiu/regularized-linear-models/notebook](https://www.kaggle.com/code/apapiu/regularized-linear-models/notebook)?
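One way to put the two fits on a comparable footing (a sketch with made-up numbers, pure Python): back-transform the log-scale predictions with exp and compute R-squared on the original price scale for both models.

```python
import math

def r_squared(y_true, y_pred):
    """Coefficient of determination, computed on whatever scale y_true is given."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

prices = [1200.0, 1500.0, 900.0, 2000.0, 1700.0]   # illustrative test-set prices
pred_log = [7.1, 7.3, 6.9, 7.6, 7.4]               # predictions from the log-price model
pred_back = [math.exp(v) for v in pred_log]        # back to the original scale
r2 = r_squared(prices, pred_back)
print(r2)
```

Note that $\exp(\widehat{\log y})$ targets the median rather than the mean of $y$, so a smearing or $\exp(\hat\sigma^2/2)$ correction is often applied before such comparisons.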
| Interpreting R-squared when dependent variable is log transformed | CC BY-SA 4.0 | null | 2023-03-30T23:40:21.400 | 2023-04-02T17:45:34.377 | 2023-03-31T17:32:45.653 | 383542 | 383542 | [
"regression",
"machine-learning",
"data-transformation"
] |
611328 | 1 | null | null | 2 | 67 | I was under the impression that changing the scale of (normalizing) the target variable in a regression task would not change the overall shape of the loss function but would simply translate/move it somewhere else. Therefore, putting bad weight initialization aside, the network would be able to converge the same way regardless of the scale of the output. If in one instance the target variable were in the range [0,1] and in another instance in the range [1000,10000], this should not make a difference.
However I was playing around with a 3d visualizer to see how the loss function would change under different scales of output variables and the shapes of the graph did actually seem to change.
I was trying to model a simple neural network with one input, two weights, and one output which looked like this.
[](https://i.stack.imgur.com/vwPJq.jpg)
Therefore the mean squared error loss function would be something like:
[](https://i.stack.imgur.com/Mgotn.png)
where z represents the loss, w2 is the second weight, w1 is the first weight, the value of 5 represents the input X for one data sample, and the value of 2 represents the true value of the data sample.
When plotting this, I got something like:
[](https://i.stack.imgur.com/nb4EL.png)
When I change the scale of the output so that say the output is now 20 instead of 2 representing a different scale of outputs, the equation becomes:
[](https://i.stack.imgur.com/Omy44.png)
The plot now looks like this:
[](https://i.stack.imgur.com/EZGog.png)
The two 3d plots definitely seem to have some shape dissimilarities in terms of gradients and are not simply translations of each other.
My question is: shouldn't we always be normalizing our target variables in regression tasks if it leads to a differently shaped loss curve and would probably make convergence easier, or is there a particular reason why it might not matter to normalize the target variables?
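The difference can also be checked without plotting. A sketch (pure Python) of the loss $z = (5\,w_1 w_2 - y)^2$: stepping the same offset away from a hand-picked minimizer of each surface produces very different loss increases, which is consistent with the plots not being translated copies of each other.

```python
def loss(w1, w2, y):
    """MSE loss of the tiny 2-weight network for one sample with input x = 5."""
    return (5.0 * w1 * w2 - y) ** 2

# One hand-picked minimizer of each surface (any point with 5*w1*w2 = y works).
min_small = (1.0, 0.4)   # 5 * 1.0 * 0.4 = 2
min_large = (2.0, 2.0)   # 5 * 2.0 * 2.0 = 20

# Step the same offset away from each minimizer and compare the loss increase.
delta = 0.1
rise_small = loss(min_small[0] + delta, min_small[1], y=2.0)
rise_large = loss(min_large[0] + delta, min_large[1], y=20.0)
print(rise_small, rise_large)  # roughly 0.04 vs 1.0
```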
| Does normalizing/changing the scale of the target variable impact the shape of the loss function equation? | CC BY-SA 4.0 | null | 2023-03-31T00:02:29.150 | 2023-04-01T11:19:31.213 | 2023-03-31T07:28:30.523 | 53690 | 384554 | [
"machine-learning",
"neural-networks",
"normalization",
"loss-functions",
"gradient-descent"
] |
611329 | 1 | null | null | 0 | 20 | I have five employee-level variables.
Z1 = employee wage
Z2 = employee tenure
Z3 = employee age
Y = employee perception of fairness (ordinal survey item with 5 = very fair... 1 = very unfair)
X = employee job satisfaction (ordinal survey item with 5 = very satisfied...1 = very unsatisfied).
My understanding is that with the two ordinal variables, I perhaps need to use the ordered logistic model. However, I was wondering if using OLS would be incorrect? Furthermore, would it be acceptable to incorporate firm-level fixed effect in the ordered logistic model?
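For intuition on how the ordered logistic model differs from OLS: it models cumulative probabilities through estimated cutpoints instead of treating the codes 1-5 as interval-scaled numbers. A sketch (pure Python; the cutpoints and linear predictor are made-up values, not estimates):

```python
import math

def ordered_logit_probs(xb, cutpoints):
    """Category probabilities under a proportional-odds (ordered logit) model.

    P(Y <= j) = logistic(c_j - x'b); the category probabilities are the
    differences of consecutive cumulative probabilities.
    """
    logistic = lambda t: 1.0 / (1.0 + math.exp(-t))
    cum = [logistic(c - xb) for c in cutpoints] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

cutpoints = [-2.0, -0.5, 0.5, 2.0]   # 4 cutpoints -> 5 ordered categories
probs = ordered_logit_probs(xb=0.8, cutpoints=cutpoints)
print([round(p, 3) for p in probs])
```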
Thank you
| Ordered Logistic Regression or OLS | CC BY-SA 4.0 | null | 2023-03-31T00:38:24.880 | 2023-03-31T00:38:24.880 | null | null | 384558 | [
"least-squares"
] |
611330 | 1 | null | null | 1 | 11 | Question is based on the screenshot attached. Based on paper [here](https://openreview.net/pdf?id=I3xhgVtNC5t).
[](https://i.stack.imgur.com/tOqbo.png)
I am not able to understand why the min-max formulation (eq. 4) is first converted to a max-min formulation (eq. 5). Is this something required to apply strong duality?
Also, why is the max-min formulation (eq. 5) then converted back to a min-max formulation (eq. 6)?
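For reference, the generic fact behind such swaps (this is the standard max-min inequality, not the paper's specific argument): $\max_x \min_y f \le \min_y \max_x f$ always holds, and strong duality is about when it holds with equality. A toy check on a payoff matrix:

```python
# Rows indexed by x (the maximizer), columns by y (the minimizer).
A = [
    [3, 1, 4],
    [2, 5, 0],
]

max_min = max(min(row) for row in A)                   # max over x of min over y
min_max = min(max(A[i][j] for i in range(len(A)))      # min over y of max over x
              for j in range(len(A[0])))
print(max_min, min_max)  # 1 and 3: max-min <= min-max
```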
| Min max formulation conversion to max min formulation. Reason? | CC BY-SA 4.0 | null | 2023-03-31T00:52:17.510 | 2023-03-31T00:52:17.510 | null | null | 384559 | [
"lagrange-multipliers",
"duality"
] |
611331 | 1 | null | null | 6 | 225 | I am tasked with solving a question for a qualifying exam, but I am a little lost about this question.
>
Let $\eta$ and $\xi$ be two independent standard Gaussian random variables. Find $\mathbb{E}(\xi\eta \mid \xi - 2\eta)$.
My attempt:
My main idea is to show that $\operatorname{Cov}(\xi-2\eta,\xi+2\eta)=0$, making the two independent since they are Gaussian. However, computing the above, you get $\operatorname{Cov}(\xi-2\eta,\xi+2\eta)=\operatorname{Var}(\xi)-4\operatorname{Var}(\eta)\neq 0$. Alternatively, my thought was to show $\operatorname{Cov}(\xi-\eta,\xi+\eta)=0$, which is true. But does $\xi-\eta \perp \xi+\eta$ imply $\xi-2\eta \perp \xi+2\eta$? The main idea being that after you show $\xi-2\eta \perp \xi+2\eta$, you can write $\mathbb{E}(\eta\xi \mid \xi-2\eta)=\mathbb{E}\left(\frac{1}{8}[\xi+2\eta]^{2}-\frac{1}{8}[\xi-2\eta]^{2}\,\middle|\, \xi-2\eta\right)$ and proceed.
Any help is appreciated.
| Conditional Expectation of Product of Normals given a Linear Combination | CC BY-SA 4.0 | null | 2023-03-31T00:52:59.297 | 2023-03-31T21:47:49.877 | 2023-03-31T04:18:16.940 | 20519 | 361281 | [
"probability",
"self-study",
"normal-distribution",
"conditional-probability"
] |
611333 | 1 | null | null | 2 | 30 | I have gaussian random variables $(Z,Z_k)\sim N(0,\Sigma)$ and $u\sim\text{Uniform}[0,1]$. Given that
\begin{align*}
y=(1-\boldsymbol{1}\{Z>C\})\boldsymbol{1}\{u<\rho\}+\boldsymbol{1}\{Z>C\}\boldsymbol{1}\{u\geq\rho\}
\end{align*}
where $\boldsymbol{1}\{\cdot\}$ is the indicator function, and $C$ and $\rho$ are fixed constants (i.e., not random variables).
Is it possible to compute $\mathbb{E}[Z|Z_k,y]$ without using any integrations (i.e., I want to be able to compute this quickly)? I was thinking this might be possible with the use of some truncated Gaussian property (something like [this](https://math.stackexchange.com/questions/4669171/computing-expectation-of-truncated-conditional-gaussian-r-v)?) or something..? Otherwise, can I compute $\mathbb{E}[Z|Z_k,y]$ with just one integral? Thanks.
| Computing conditional expectation without integrals or monte carlo sampling | CC BY-SA 4.0 | null | 2023-03-31T02:13:13.667 | 2023-03-31T02:13:13.667 | null | null | 217249 | [
"normal-distribution",
"conditional-expectation",
"truncated-normal-distribution"
] |
611334 | 1 | 611479 | null | 15 | 1652 | Question: What are influential, canonical, or otherwise useful works considering low-probability events?
My background: Applied or computational statistics, not theory or pure statistics. How I describe it to people is that, if you ask me a probability problem, I am going to solve it using simulation instead of being able to write out a solution on paper.
Additional information: I have been thinking about low-probability events in the long run recently. Consider a simple case of a binomial distribution where the probability of an event is $\frac{1}{x}$ where $x$ is arbitrarily large. It seems to me that as $n\to\infty$, the event should become inevitable, which feels like a controversial word to use in probability. Is there work around this idea of how to think about low-probability events? Or even how to judge whether the probability is $0$ or $\frac{1}{x}$ with an arbitrarily large $x$?
Note: When I say "inevitable" in the long run, I'm not saying, "We've done it 100,000 times, so this next trial must surely be the time it happens," as that's the gambler's fallacy, since all trials are independent and have the same probability. I'm thinking a priori here. Consider the probability of an event being 1/100000. Below, I simulate draws from a binomial distribution 5000 times. I do this for scenarios where the number of trials is 1, 100, 10000, or 1000000. I look at the percentage of the simulations where we hit at least one instance of the event happening, and we see that the percentage increases with the number of trials:
```
set.seed(1839)
iter <- 5000
p <- 1/100000
ns <- c(1, 100, 10000, 1000000)
any_hits <- function(p, n) any(rbinom(n, 1, p) == 1)
res <- sapply(ns, \(n) mean(sapply(seq_len(iter),
\(zzz) any_hits(p, n))))
names(res) <- ns
res
```
```
> res
1 100 10000 1e+06
0.0000 0.0012 0.0916 1.0000
```
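These simulated percentages can be checked against the closed form $P(\text{at least one hit in } n \text{ trials}) = 1-(1-p)^n$ (a sketch in Python mirroring the R simulation above):

```python
p = 1 / 100000
probs = {n: 1 - (1 - p) ** n for n in [1, 100, 10000, 1000000]}
print(probs)
# n = 10000 gives ~0.0952 (simulation above: 0.0916); n = 1e6 gives
# ~0.99995: overwhelmingly likely, but never exactly 1 for any finite n.
```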
What I've done so far: I have done some keyword searching, and it seems like people generally consider high-impact, low-probability (HILP) events for this area of work. I've read a little bit, but it is a new area of interest for me, and so I'm soliciting help searching for the "works you should know" about low-probability events in the long run.
| What theories, papers, or books examine low-probability events, particularly as the number of trials approaches infinity? | CC BY-SA 4.0 | null | 2023-03-31T02:28:33.757 | 2023-04-07T22:56:57.720 | 2023-03-31T22:58:40.280 | 11887 | 130869 | [
"probability",
"references",
"rare-events"
] |
611335 | 1 | null | null | 0 | 19 | I have a dataset (say N=200) of which a subset (N=150) has undergone a certain assay. I want to be able to say that the subgroup N=150 is not significantly different from the entire population (N=200). Is there a way to compare the two, e.g. in terms of baseline characteristics? I tried comparing the N=150 who have the assay with the N=50 who do not have the assay (e.g. with t-tests and chi-square tests). However, for one variable (blood pressure), the difference between the N=150 and N=50 groups was significant, yet on visual inspection the value for the N=150 did not look very different from the N=200. Is there a way to compare the N=150 subgroup to the N=200 dataset rather than compare the N=150 and N=50 subgroups of the N=200? I hope that makes sense.
| Best way to compare group characteristics | CC BY-SA 4.0 | null | 2023-03-31T02:39:17.487 | 2023-03-31T02:39:17.487 | null | null | 363971 | [
"t-test",
"wilcoxon-mann-whitney-test",
"baseline"
] |
611336 | 2 | null | 470824 | 0 | null | I don't know if this is correct, but here is a try.
$$
\begin{align}
p(y_t|y_{1:t-1}) &= \int p(y_t, x_t | y_{1:t-1})\,dx_t, \\
&= \int p(y_t | x_t, y_{1:t-1})p(x_t|y_{1:t-1})\,dx_t, \\
&= \int p(y_t|x_t)p(x_t|y_{1:t-1})\,dx_t.
\end{align}
$$
Since we have a set of particles distributed according to $p(x_t| y_{1:t-1})$, we can approximate the integral by a Monte Carlo average:
$$
p(y_t|y_{1:t-1}) \approx \frac{1}{N}\sum_{i} p(y_t|x_t^{[i]}).
$$
Finally, in the bootstrap filter where the proposal distribution is chosen as the prior distribution, the weights are equal to $p(y_t|x_t)$.
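As a numerical sanity check of this estimator, here is a minimal sketch (pure Python; a made-up one-step linear-Gaussian model, not part of the original question): particles are drawn from the prior $p(x_t \mid y_{1:t-1})$, weighted by $p(y_t \mid x_t^{[i]})$, and the average weight is compared against the exact predictive density.

```python
import math, random

random.seed(0)

def norm_pdf(v, mean, var):
    return math.exp(-(v - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

a, q, r = 0.9, 0.5, 0.2        # transition coefficient, process and observation variances
x_prev, y_t = 1.0, 1.2         # known previous state and new observation
N = 100_000

# Particles from the "prior" p(x_t | y_{1:t-1}) = N(a * x_prev, q)
particles = [random.gauss(a * x_prev, math.sqrt(q)) for _ in range(N)]
weights = [norm_pdf(y_t, x, r) for x in particles]     # p(y_t | x_t^[i])
estimate = sum(weights) / N                            # (1/N) * sum_i p(y_t | x_t^[i])

exact = norm_pdf(y_t, a * x_prev, q + r)               # y_t ~ N(a * x_prev, q + r) exactly
print(estimate, exact)
```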
| null | CC BY-SA 4.0 | null | 2023-03-31T02:49:27.277 | 2023-03-31T02:49:27.277 | null | null | 174541 | null |
611338 | 1 | null | null | 0 | 25 | The facility I am working in can be assessed a number of violations.
I have a list of the violations and the date of the violation.
The violations range from 1 (least severe) to 7 (most severe). We've only ever received one "7" violation.
What I'm trying to do is visualize that the bulk of the category "6" violations were in the past and that their frequency has been decreasing with time. The point is to show a trend where the frequency of violations, particularly the severe (category 6) violations, is decreasing with time. We only have one category 7.
Thanks for your help!
Here is the data I am working with:
|Date |Violation Code |
|----|--------------|
|2/8/2023 |1 |
|2/2/2023 |3 |
|12/18/2022 |6 |
|9/27/2022 |6 |
|6/30/2022 |7 |
|12/15/2021 |3 |
|6/17/2021 |3 |
|5/26/2021 |3 |
|5/9/2021 |3 |
|4/28/2021 |3 |
|3/22/2021 |6 |
|3/16/2021 |4 |
|1/12/2021 |3 |
|12/5/2020 |6 |
|6/26/2020 |6 |
|2/19/2020 |6 |
|2/19/2020 |4 |
|2/10/2020 |6 |
|10/7/2019 |6 |
|8/12/2019 |3 |
|6/25/2019 |4 |
|6/14/2019 |3 |
|4/23/2019 |5 |
|3/29/2019 |3 |
|3/11/2019 |3 |
|2/26/2019 |4 |
|2/11/2019 |3 |
|2/6/2019 |6 |
|12/30/2018 |3 |
|11/30/2018 |3 |
|11/20/2018 |3 |
|11/15/2018 |6 |
|11/15/2018 |3 |
|10/24/2018 |6 |
|9/12/2018 |4 |
|8/26/2018 |6 |
|6/20/2018 |3 |
|6/17/2018 |3 |
|6/17/2018 |3 |
|6/16/2018 |3 |
|6/11/2018 |4 |
|1/5/2018 |6 |
|1/4/2018 |4 |
| How to visualize decreasing trends of categorical data by date? | CC BY-SA 4.0 | null | 2023-03-31T03:32:49.753 | 2023-03-31T04:34:32.733 | 2023-03-31T03:33:06.623 | 384568 | 384568 | [
"data-visualization"
] |
611339 | 2 | null | 611331 | 11 | null | Comment on your attempt: the idea looks great but unfortunately $\xi - \eta \perp \xi + \eta$ of course does not imply $\xi - 2\eta \perp \xi + 2\eta$. However, the "product-to-sum" identity of $\xi\eta$ is still very useful in approaching this problem. From there, you only need to apply some very basic properties of conditional expectation and the multivariate normal distribution to get the job done.
By the linearity and the "pulling out known factors" property of conditional expectation,
\begin{align}
E[\xi\eta|\xi - 2\eta] = \frac{1}{8}E[(\xi + 2\eta)^2|\xi - 2\eta] - \frac{1}{8}(\xi - 2\eta)^2. \tag{1}
\end{align}
So it remains to evaluate the $E[(\xi + 2\eta)^2|\xi - 2\eta]$, which is tractable thanks to $(\xi, \eta) \sim N_2(0, I_{(2)})$. Because of it, it follows by the [affine transformation property](https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Affine_transformation) of the multivariate normal distribution that
\begin{align}
\begin{bmatrix}
\xi + 2\eta \\
\xi - 2\eta
\end{bmatrix} =
\begin{bmatrix}
1 & 2 \\
1 & -2
\end{bmatrix}
\begin{bmatrix} \xi \\ \eta \end{bmatrix} \sim
N_2\left(\begin{bmatrix} 0 \\ 0 \end{bmatrix},
\begin{bmatrix}
5 & -3 \\
-3 & 5
\end{bmatrix}
\right),
\end{align}
which implies, by the [conditional distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Conditional_distributions) of the multivariate normal distribution, that
\begin{align}
& E[\xi + 2\eta | \xi - 2\eta] = -\frac{3}{5}(\xi - 2\eta), \\
& \operatorname{Var}(\xi + 2\eta | \xi - 2\eta) = 5 - 9 \times \frac{1}{5} = \frac{16}{5},
\end{align}
whence
\begin{align}
E[(\xi + 2\eta)^2|\xi - 2\eta] &= \operatorname{Var}(\xi + 2\eta | \xi - 2\eta) + (E[\xi + 2\eta | \xi - 2\eta])^2 \\
&= \frac{16}{5} + \frac{9}{25}(\xi - 2\eta)^2. \tag{2}
\end{align}
Substituting $(2)$ into $(1)$ gives
\begin{align}
E[\xi\eta|\xi - 2\eta] = \frac{2}{5} + \frac{9}{200}(\xi - 2\eta)^2 - \frac{1}{8}(\xi - 2\eta)^2 = \frac{2}{5} - \frac{2}{25}(\xi - 2\eta)^2.
\end{align}
Now, to get the hang of the key operations in solving this problem, try resolving it using the decomposition
\begin{align}
\xi\eta = (\xi - 2\eta + 2\eta)\eta = \eta(\xi - 2\eta) + 2\eta^2.
\end{align}
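Finally, a quick Monte Carlo check of the result (a sketch; since the conditional mean is exactly of the form $\alpha + \beta(\xi-2\eta)^2$, an ordinary least squares fit of $\xi\eta$ on $1$ and $(\xi-2\eta)^2$ estimates it):

```python
import random

random.seed(42)
N = 200_000
xi = [random.gauss(0, 1) for _ in range(N)]
eta = [random.gauss(0, 1) for _ in range(N)]

t = [xi[i] * eta[i] for i in range(N)]              # xi * eta
u = [(xi[i] - 2 * eta[i]) ** 2 for i in range(N)]   # (xi - 2*eta)^2

# Ordinary least squares of t on [1, u] via the 2x2 normal equations.
su, suu = sum(u), sum(v * v for v in u)
st, sut = sum(t), sum(u[i] * t[i] for i in range(N))
det = N * suu - su * su
intercept = (suu * st - su * sut) / det
slope = (N * sut - su * st) / det
print(intercept, slope)  # close to 2/5 = 0.4 and -2/25 = -0.08
```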
| null | CC BY-SA 4.0 | null | 2023-03-31T04:02:08.403 | 2023-03-31T21:47:49.877 | 2023-03-31T21:47:49.877 | 20519 | 20519 | null |
611340 | 2 | null | 611338 | 1 | null | You could do something like a faceted histogram, where each facet is the violation code, and on the x axis is the time. The height of the histogram at each particular time would reflect the number of violations in that time bin (e.g. in a month or something).
| null | CC BY-SA 4.0 | null | 2023-03-31T04:34:32.733 | 2023-03-31T04:34:32.733 | null | null | 369002 | null |
611341 | 2 | null | 611334 | 11 | null |
#### Reasoning like this leads to the Poisson distribution and other count distributions
If I understand your question correctly, it sounds like you might be examining a similar case to what leads to the Poisson distribution, and other variations of similar count distributions. Suppose we consider the case where we have a large number of independent binary events with small probability, and we want to count the number of events that occur. This can be represented fairly well by taking large $n$ and using:
$$X_n | \mu \sim \text{Bin} \bigg( n , \frac{\mu}{n} \bigg).$$
(The situation in your question is the special case where $\mu=1$, but I have generalised this.) If we take the limit as $n \rightarrow \infty$ then the count follows a [Poisson distribution](https://en.wikipedia.org/wiki/Poisson_distribution):
$$\lim_{n \rightarrow \infty} \mathbb{P}(X_n = x | \mu) = \text{Pois}(x| \mu).$$
Also, you can now see that this situation does not make the occurrence of the event inevitable --- in fact, we have the limiting probability:
$$\lim_{n \rightarrow \infty} \mathbb{P}(X_n = 0 | \mu)
= \lim_{n \rightarrow \infty} \bigg( 1-\frac{\mu}{n} \bigg)^n
= \exp(-\mu) = \text{Pois}(0| \mu).$$
This is one of the reasons that we consider the Poisson distribution to be a useful base distribution for describing the occurrence of "rare events" in a large number of trials. For example, suppose you go outside in a rainstorm and hold up a test tube to catch rain-drops. An individual rain-drop has a low probability of landing in the test-tube, but there are a lot of rain-drops, so we might reasonably posit a Poisson distribution for the number of rain-drops that land in the tube.
This can be extended out further, to more general forms of count distributions, if we consider "mixtures" of Poisson processes. In the above case we have proceeded conditionally on a fixed mean $\mu$, so that all the binary events have the same probability of occurring. If we allow variation in these probabilities, over some distribution, then we will get a count distribution that is more variable than the Poisson. For example, if $\mu$ has a gamma distribution then we get a negative-binomial distribution for the limiting count variable. We can also get other generalised count distributions with other types of reasoning similar to this.
If you are looking for a good overview of this topic, with extensions into more complex models, I would recommend looking for good statistics books on the analysis of count data. Such works will typically run through the basics of the Poisson distribution and other count distributions and also run through count regression and other standard statistical models in the field that use count data. Many of these models are amenable to situations where a count variable emerges from looking at the number of occurrences of a low-probability event in a large number of trials. Some available books in the field include [Winkelmann (2003)](https://www.amazon.com.au/gp/product/354040404X/), [Hilbe (2014)](https://www.amazon.com.au/Modeling-Count-Data-Joseph-Hilbe/dp/1107611253), [Cameron and Trivedi (2013)](https://www.amazon.com.au/gp/product/B00H7WPE5U/), [Dupuy (2018)](https://www.amazon.com.au/Statistical-Methods-Overdispersed-Count-Data-ebook/dp/B07KRT2QDF/) and [Martin (2022)](https://www.amazon.com.au/gp/product/B09SZHBT17/).
---
Simulation analysis: The simulation analysis you have conducted does not appear to accord with your description of the problem, since you hold the probability parameter in your binomial distribution fixed. If you want to simulate in the case where the probability becomes low as the sample size becomes large then you need to adjust your probability parameter in your simulations. Here is a variation of this simulation that uses the above model:
```
#Set the seed
set.seed(1839)
#Set parameters
mu <- 1
iter <- 5000
n.vals <- c(1, 10, 100, 1000, 10000, 100000)
probs <- numeric(length(n.vals))
#Generate simulations and estimate probs
#The value probs estimates probability of non-occurrence
for (i in 1:length(n.vals)) {
X <- matrix(rbinom(iter*n.vals[i], 1, prob = mu/n.vals[i]),
nrow = iter, ncol = n.vals[i])
probs[i] <- 1 - mean(matrixStats::rowMaxs(X)) }
#Show probs
probs
[1] 0.0000 0.3500 0.3704 0.3534 0.3648 0.3640
#True probability
exp(-mu)
[1] 0.3678794
```
As expected, the simulated probability of non-occurrence of the event is close to the probabilty under the Poisson distribution when the number of trials is large.
| null | CC BY-SA 4.0 | null | 2023-03-31T04:35:07.170 | 2023-03-31T22:40:06.093 | 2023-03-31T22:40:06.093 | 296197 | 173082 | null |
611342 | 2 | null | 497113 | 0 | null | Let f: X -> R, f(X)>=0 and X is a subset of R (real numbers), if we define
Sum_{x in X}f(x) := sup{sum_{x in F}f(x), F is a finite subset of X}
Sum_{x inX}0 = 0
| null | CC BY-SA 4.0 | null | 2023-03-31T05:09:22.400 | 2023-03-31T05:17:16.117 | 2023-03-31T05:17:16.117 | 384572 | 384572 | null |
611343 | 1 | 611344 | null | 1 | 25 | I'm training a binary classification model on a fraud dataset.
The dataset has 300 columns and 40,000 rows and I don't have a lot of computational power.
I've removed constant columns from my dataset but I still have about 80 near-constant columns left. "Near-constant" here means 98% a single value, with < 5 unique values in total.
It seems unlikely they contain much information and they're slowing down training. What should I do with these columns?
| What should you do with near constant columns? | CC BY-SA 4.0 | null | 2023-03-31T06:21:05.363 | 2023-03-31T06:59:17.947 | 2023-03-31T06:59:17.947 | 363857 | 363857 | [
"feature-selection"
] |
611344 | 2 | null | 611343 | 2 | null | You should absolutely analyze them further before taking any action. They could easily contain absolutely crucial information.
For instance, I work in retail sales forecasting. We frequently have predictors that are almost constantly 0 - but they encode the presence of a very specific promotion, so when the promotion occurs and the predictor is 1, sales explode. Which is precisely what we need to account for in forecasting, both outside the promotions (if we removed this predictor, both "baseline" predictions and predicted residual variance would be biased high) and during promotions (where forecasts would be far too low if we had removed that predictor).
| null | CC BY-SA 4.0 | null | 2023-03-31T06:55:02.680 | 2023-03-31T06:55:02.680 | null | null | 1352 | null |
611345 | 1 | null | null | 1 | 86 | I have a function $s(\omega)$ that is a sum of a function with random numbers $a_m$ and looks something like the following.
$$ s(\omega) = \sum_{m = 1}^{M} f(a_m, \omega) $$ where all the $a_m$ are draws from a probability distribution.
I have the expression of the expectation of the sum when $M \to \infty$
$$ \mathbb{E}\left[\sum_{m = 1}^{M} f(a_m, \omega)\right] = M \int_{-\infty}^{+\infty} f(x, \omega) p(x)\, dx, $$ where I used the parametrization $x = a_m$ to write the integral.
However, in practical simulations, M is not infinity and in my simulations I suspect that this approximation doesn't give me exact results. That is the reason why I want to model the error in this approximation so that I can use it as a distribution over $g$ to fit it with the average estimate of the sum. Is there an existing way to do this?
========EDIT============
The function $f$ looks like the following
$$ f(a_m, \omega) = \exp({-a_m \omega^2 }) $$
Where $a_m \sim \mathcal{N}(0, \sigma^2)$
Now if I want to know the expected value,
I write it as,
$$ \mathbb{E}\left[\sum_{m = 1}^{M}f(a_m, \omega)\right]_{M\to\infty} = M \int_{-\infty}^{+\infty} \exp({-x \omega^2 })\frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{x^2}{2\sigma^2}\right) dx $$
I hope the notation is a bit clearer now. I'm not very confident with statistical terms, so I say the $a_m$ are random draws from a normal distribution, and the quantity of interest is a sum of values of a function computed at these random numbers. For example, it's a superposition of responses from $M$ different components.
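For this particular $f$ there is a closed form via the normal moment-generating function: with $t=-\omega^2$, $\mathbb{E}[e^{t a_m}] = e^{\sigma^2 t^2/2} = e^{\sigma^2 \omega^4/2}$, so (assuming i.i.d. draws) the mean of the sum is exactly $M e^{\sigma^2\omega^4/2}$ for every finite $M$ by linearity, and the finite-$M$ fluctuation of the sum around that mean has variance $M\operatorname{Var}(f)$ with $\operatorname{Var}(f) = e^{2\sigma^2\omega^4} - e^{\sigma^2\omega^4}$. A quick check (pure Python; $\sigma$, $\omega$, $M$ are made-up values):

```python
import math, random

random.seed(1)
sigma, omega, M = 0.8, 1.1, 50     # illustrative values
t = -omega ** 2

mean_f = math.exp(sigma ** 2 * t ** 2 / 2)                 # E[exp(-a * w^2)], a ~ N(0, sigma^2)
var_f = math.exp(2 * sigma ** 2 * omega ** 4) - mean_f ** 2

# The sum s = sum_m f(a_m) has mean M * mean_f and variance M * var_f.
reps = 10_000
sums = [sum(math.exp(t * random.gauss(0, sigma)) for _ in range(M)) for _ in range(reps)]

emp_mean = sum(sums) / reps
emp_var = sum((s - emp_mean) ** 2 for s in sums) / (reps - 1)
print(emp_mean, M * mean_f)   # should agree
print(emp_var, M * var_f)     # should agree
```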
| Can the error be modeled in the approximation of expectation | CC BY-SA 4.0 | null | 2023-03-31T06:59:32.873 | 2023-04-02T08:25:54.113 | 2023-04-02T08:25:54.113 | 327104 | 327104 | [
"expected-value",
"conditional-expectation",
"approximation"
] |
611347 | 1 | null | null | 0 | 26 | I'm creating a binary classification model on a fraud dataset.
I have 300 columns and 40,000 rows. About 80 of these columns are near constant.
I'd like to remove them because they're slowing down training, but I'm worried they could have predictive power.
What's the best way to analyze these columns and check if removing them will be detrimental?
### Edit:
I'm okay with creating intermediate models to test the columns if it means they can be removed in all future models.
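One cheap screen before deleting anything (a sketch in pure Python on a toy dataset; the column and label names are made up): compare the target rate among rows where the near-constant column takes its rare value against the overall rate. A large gap means the column is rare but informative.

```python
def rare_value_lift(rows, col, label="is_fraud"):
    """Target rate among the column's rare (non-modal) values vs. overall."""
    values = [r[col] for r in rows]
    modal = max(set(values), key=values.count)
    rare = [r for r in rows if r[col] != modal]
    overall_rate = sum(r[label] for r in rows) / len(rows)
    rare_rate = sum(r[label] for r in rare) / len(rare) if rare else None
    return overall_rate, rare_rate

# Toy data: flag_x is 0 in 98% of rows, but its rare value perfectly marks fraud.
rows = ([{"flag_x": 0, "is_fraud": 0} for _ in range(90)]
        + [{"flag_x": 0, "is_fraud": 1} for _ in range(8)]
        + [{"flag_x": 1, "is_fraud": 1} for _ in range(2)])
overall, rare = rare_value_lift(rows, "flag_x")
print(overall, rare)  # 0.1 overall vs 1.0 when flag_x takes its rare value
```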
| How do you analyze near constant columns? | CC BY-SA 4.0 | null | 2023-03-31T07:21:05.507 | 2023-03-31T08:09:03.433 | 2023-03-31T08:09:03.433 | 363857 | 363857 | [
"feature-selection"
] |
611348 | 2 | null | 611221 | 1 | null | In Bayesian terminology, the marginal likelihood is the prior predictive density
$$m(x)=\int_\Theta f(x|\theta)\pi(\theta)\,\text d\theta$$
where $f(\cdot|\cdot)$ is the sampling density, $\ell(\theta|x)=f(x|\theta)$ is the standard likelihood. This marginal likelihood integrates to one over the sample space $\mathcal X$:
$$\int_\mathcal X m(x)\,\text dx=\int_\mathcal X \int_\Theta f(x|\theta)\pi(\theta)\,\text d\theta\,\text dx=1$$
It is also the normalising factor for the posterior density
$$\pi(\theta|x) = \frac{f(x|\theta)\pi(\theta)}{m(x)}$$
and simulating from the posterior is feasible without deriving $m(x)$, provided $f(x|\theta)$ is known up to a multiplicative constant (one that does not depend on $\theta$). This is also the case for tempered versions of the posterior density, e.g.
$$\pi_i(\theta|x) = \frac{f(x|\theta)^{\tau_i}\pi(\theta)}{m_i(x)}\quad 0\le\tau_i\le1$$
SMC and other Monte Carlo methods (nested sampling, bridge sampling, harmonic mean, umbrella sampling, path sampling, etc.) exploit a sample from $\pi_i(\theta|x)$, or from another distribution, to approximate the normalising constant $m_i(x)$. When comparing several models through marginal likelihood ratios, the corresponding (standard) likelihoods must all be fully available (or known up to the same multiplicative constant).
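As a minimal illustration (not one of the specialised methods above), the naive Monte Carlo estimator draws from the prior and averages the likelihood. The Python sketch below, with assumed values for the prior scale and the observation, compares it to the closed-form $m(x)$ of a conjugate Normal–Normal model with unit observation variance:

```python
import numpy as np

def npdf(x, mu, sd):
    """Normal density, written out to keep the example self-contained."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
tau = 2.0      # prior sd for theta ~ N(0, tau^2)  (assumed value)
x_obs = 1.5    # single observation, x | theta ~ N(theta, 1)  (assumed value)

# naive Monte Carlo: m(x) = E_pi[ f(x | theta) ] with theta drawn from the prior
theta = rng.normal(0.0, tau, size=200_000)
m_hat = npdf(x_obs, theta, 1.0).mean()

# closed form for this conjugate model: m(x) = N(x; 0, 1 + tau^2)
m_true = npdf(x_obs, 0.0, np.sqrt(1.0 + tau ** 2))

print(m_hat, m_true)  # the two values should agree closely
```

For model comparison one would form the ratio $m_1(x)/m_2(x)$; the naive estimator degrades quickly with dimension, which is precisely why the methods listed above exist.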
| null | CC BY-SA 4.0 | null | 2023-03-31T07:37:18.770 | 2023-03-31T07:37:18.770 | null | null | 7224 | null |
611349 | 1 | 611359 | null | 1 | 81 | I've read that some models, such as [decision trees](https://www.quora.com/Decision-Tree-based-models-dont-require-scaling-How-does-scaling-impact-the-predictions-of-decision-tree-based-models), don't require scaling to work effectively.
However, the author of the linked article states there's no downside to scaling data for a decision tree either.
In general, is there ever a downside to scaling data?
### Edit:
For example, using SciKit-Learn's `StandardScaler` on everything?
| Is there any downside to scaling a dataset? | CC BY-SA 4.0 | null | 2023-03-31T07:41:37.880 | 2023-03-31T10:43:20.030 | 2023-03-31T09:48:07.603 | 363857 | 363857 | [
"data-transformation",
"dataset",
"feature-scaling"
] |
611350 | 1 | null | null | 2 | 32 | I want to know what model I should use to make causal inferences with time series variables with the instrument variable method.
For instance, I want to estimate the effect of price on sales. I have an instrument variable for the price variable. Price and advertisement are endogenous and non-stationary.
$Sale_t = Price_t + Advertisement_t + error_t$
My first thought is to difference all the variables and run TSLS. But then a potential problem is that the lagged effect of price on sales is not considered.
Should I use a VAR model in this case? If so, how can I use instrumental variables in a VAR model?
| Time series regression with instrument variable | CC BY-SA 4.0 | null | 2023-03-31T07:43:47.527 | 2023-03-31T15:17:48.247 | null | null | 373901 | [
"time-series",
"econometrics",
"causality",
"vector-autoregression",
"instrumental-variables"
] |
611351 | 2 | null | 611328 | 0 | null | It's hard to generalize for all the optimization procedures, but if we're talking about neural networks and gradient descent, it's usually a good idea to normalize for the gradient updates to behave well. In general, it's hard to come up with a case where after normalizing, you're worse off.
The following post collects different opinions on this; I tend to agree with the answers given at the end:
[Is it necessary to scale the target value in addition to scaling features for regression analysis?](https://stats.stackexchange.com/questions/111467/is-it-necessary-to-scale-the-target-value-in-addition-to-scaling-features-for-re)
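A small numpy sketch (synthetic data, arbitrary learning rates) of why this matters: with one feature on a much larger scale, a single learning rate that is stable for the steep direction barely moves the shallow one, while the standardized problem converges easily.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(0.0, 1.0, n)      # feature on unit scale
x2 = rng.normal(0.0, 100.0, n)    # feature on a much larger scale
y = 3.0 * x1 + 0.05 * x2 + rng.normal(0.0, 0.1, n)
y = y - y.mean()                  # center the target (no intercept below)

def gd_mse(X, y, lr, steps=200):
    """Plain gradient descent on mean squared error; returns the final MSE."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * 2.0 * X.T @ (X @ w - y) / len(y)
    return float(np.mean((X @ w - y) ** 2))

X_raw = np.column_stack([x1, x2])
X_std = (X_raw - X_raw.mean(axis=0)) / X_raw.std(axis=0)

# lr must be tiny to stay stable on the raw features, so x1's weight barely moves
mse_raw = gd_mse(X_raw, y, lr=1e-5)
mse_std = gd_mse(X_std, y, lr=0.1)

print(mse_raw, mse_std)  # the standardized run reaches a far lower loss
```

With per-feature learning rates or a well-conditioned optimizer the gap shrinks, which is the usual caveat to this argument.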
| null | CC BY-SA 4.0 | null | 2023-03-31T08:09:04.057 | 2023-04-01T11:19:31.213 | 2023-04-01T11:19:31.213 | 204068 | 204068 | null |
611352 | 2 | null | 611349 | 2 | null | Scaling does not affect most of the statistics of the dataset. It is a recommended - sometimes even required - pre-processing step for some machine learning algorithms.
However, note that it is a transformation that takes into account all observations in the dataset, changing each observation's recorded value relative to that particular dataset. It is not a transformation that can be applied to a single isolated observation. Thus, you may struggle if you build a classification model from scaled data and then want to classify a new isolated observation that has not been scaled together with the original dataset.
Take a decision tree, for instance. This is a simple model, mainly used for its explainability and interpretability rather than its predictive power. A decision tree can easily be used by anyone without any specific training. The algorithm works fine if you scale the data prior to fitting the model, but if you do so, the fitted tree becomes harder to interpret and to apply directly to new data. Example: where a decision tree would have a decision node on "age < 45 years old" (which is understandable), with scaled data you might end up with that node reading "age < 0.021" (which is meaningless).
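To make that concrete, here is a small numpy sketch (simulated ages and a hand-rolled one-split stump rather than a full tree): standardizing is an affine, monotone transform, so the best split and its error are unchanged, but the threshold stops being readable in years.

```python
import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(18.0, 80.0, 300)
label = (age < 45.0).astype(int)   # ground-truth rule: positive if under 45

def best_stump(x, y):
    """Exhaustive search over split midpoints; returns (threshold, error)."""
    xs = np.sort(x)
    cands = (xs[:-1] + xs[1:]) / 2.0
    errs = [min(np.mean((x < t) != y), np.mean((x >= t) != y)) for t in cands]
    i = int(np.argmin(errs))
    return float(cands[i]), float(errs[i])

t_raw, e_raw = best_stump(age, label)
z = (age - age.mean()) / age.std()
t_std, e_std = best_stump(z, label)

print(t_raw, t_std, e_raw == e_std)
# t_raw is an interpretable age near 45; t_std is an opaque z-score
```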
| null | CC BY-SA 4.0 | null | 2023-03-31T08:24:52.257 | 2023-03-31T08:24:52.257 | null | null | 360512 | null |
611353 | 1 | null | null | 0 | 30 | I would like to include a beta-regression model (fitted with the glmmTMB package) in a SEM via the piecewiseSEM package.
The beta-regression model looks like this:
```
mod_sel_herbtot<-modCOMBO<-glmmTMB(branch_av_Herbtot~ N_pc+Year+Stratum+(1|Tree.ID),
data=data, family=beta_family(link = "logit"),na.action=na.fail)
```
I want to conduct a SEM using my beta-regression model above and several "lmer" models (i.e. mod1x, mod2x etc.)
Following the piecewiseSEM instructions I used the following command
```
newlist = list(
mod_sel_herbtot,mod1x, mod2x, mod3x,mod4ax,mod4bx,mod5ax,mod5bx)
modTot<-as.psem(newlist)
SEM_beta_Herbtot<-summary(modTot, .progressBar = F)
```
However, when running the as.psem I get the following warning:
Warning messages:
1: In GetSDy(i, newdata, standardize, standardize.type) :
Family or link function not supported: standardized coefficients not returned
As a consequence I do not get any standardized coefficients for my beta regression model.
Is there a workaround for the problem with the link function?
thanks a lot!
| piecewiseSEM standardized coefficients not returned when using glmmTMB model | CC BY-SA 4.0 | null | 2023-03-31T08:26:51.153 | 2023-03-31T08:26:51.153 | null | null | 312145 | [
"structural-equation-modeling",
"glmmtmb"
] |
611354 | 1 | null | null | 0 | 36 | When testing for normality and homogeneity in SPSS, it showed this:
[](https://i.stack.imgur.com/CoUfn.png)
I used the Shapiro–Wilk result, and it shows that E25 is not normally distributed, but the test of homogeneity based on the mean shows that the variances are homogeneous. What test should I use?
| One of my data is not normally distributed, but is homogeneous, what test should I use? | CC BY-SA 4.0 | null | 2023-03-31T08:27:42.460 | 2023-03-31T08:38:33.853 | 2023-03-31T08:38:33.853 | 384579 | 384579 | [
"statistical-significance",
"spss",
"normality-assumption"
] |
611355 | 2 | null | 611143 | 1 | null | Here's the solution to my specific problem.
(I never got glm() working, but I found an easier solution to the actual problem).
- I don't need to use glm() at all, because there is an analytic solution to the problem I posed.
- The analytic solution (written using R code) is:
```
p = sum(y) / sum(x)    # maximum-likelihood estimate of the rate
s = sqrt(p / sum(x))   # standard error from the log-likelihood curvature
```
- This solution can be derived analytically by maximising the log-likelihood; the standard error comes from its curvature at the maximum
- Note how the standard error is not the usual binomial $\sqrt{pq/n}$, because everything is in the context of a Poisson distribution
- In my particular prediction context, this gives better out-of-sample predictions than the glm() solution (for those cases when the glm solution does work).
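For anyone wanting to double-check the closed form, here is a quick simulation sketch in Python (simulated exposures `x` and an assumed rate): the grid maximum of the Poisson log-likelihood lands on $\hat p = \sum y / \sum x$, and the curvature $-\sum y/p^2$ evaluated at $\hat p$ gives exactly $\sqrt{\hat p/\sum x}$.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(5, 50, size=200).astype(float)   # exposures (simulated)
p_true = 0.3                                      # assumed true rate
y = rng.poisson(p_true * x).astype(float)         # counts, y_i ~ Poisson(p * x_i)

p_hat = y.sum() / x.sum()               # analytic MLE
se_hat = np.sqrt(p_hat / x.sum())       # from the curvature at the maximum

# numeric check: maximise the Poisson log-likelihood on a fine grid
# (constant terms in y*log(x) and log(y!) are dropped; they don't move the argmax)
grid = np.linspace(0.5 * p_hat, 1.5 * p_hat, 20001)
loglik = np.array([np.sum(y * np.log(p) - p * x) for p in grid])
p_num = float(grid[np.argmax(loglik)])

print(p_hat, p_num, se_hat)  # p_num matches p_hat to grid resolution
```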
| null | CC BY-SA 4.0 | null | 2023-03-31T08:35:25.420 | 2023-03-31T08:35:25.420 | null | null | 331423 | null |
611356 | 1 | null | null | 0 | 16 | I have one dataframe of species abundances and another with explanatory variables. My goal is to perform db-RDA with the capscale() R function from the vegan package.
I have applied the Hellinger transformation to the species matrix.
```
species.hel[1:5,1:5]
Aeriscardovia_aeriphila Bifidobacterium_adolescentis Bifidobacterium_animalis Bifidobacterium_bifidum Bifidobacterium_breve
SAMC012618 0.000000000 0.00000000 0.00000000 0.1562720 0.00000000
SAMC012617 0.000000000 0.00000000 0.00000000 0.1499330 0.00000000
SAMC012616 0.000000000 0.00000000 0.02184463 0.1366587 0.00000000
SAMC012615 0.000000000 0.00000000 0.01320858 0.1309119 0.01722187
SAMC012614 0.003134008 0.00658639 0.02295155 0.1470166 0.02855218
```
I have 16 explanatory variables:
```
new_metadata[1:5, 1:16]
N P K Ca Mg S Al Fe Mn Zn Mo Baresoil Humdepth pH Group_A Group_B
SAMC012618 19.8 42.1 139.9 519.4 90.0 32.3 39.0 40.9 58.1 4.5 0.3 43.9 2.2 2.7 0 1
SAMC012617 13.4 39.1 167.3 356.7 70.7 35.2 88.1 39.0 52.4 5.4 0.3 23.6 2.2 2.8 0 1
SAMC012616 20.2 67.7 207.1 973.3 209.1 58.1 138.0 35.4 32.1 16.8 0.8 21.2 2.0 3.0 0 1
SAMC012615 20.6 60.8 233.7 834.0 127.2 40.7 15.4 4.4 132.0 10.7 0.2 18.7 2.9 2.8 0 1
SAMC012614 23.8 54.5 180.6 777.0 125.8 39.5 24.2 3.0 50.1 6.6 0.3 46.0 3.0 2.7 0 1
```
Through ordistep, the selected model was:
formula = species.hel ~ K, data = new_metadata
```
anova(dbRDA_ancom, step=1000, perm.max=1000)
# Permutation test for capscale under reduced model
# Permutation: free
# Number of permutations: 999
Model: capscale(formula = species.hel ~ K, data = new_metadata, distance = "bray", add = T)
# Df SumOfSqs F Pr(>F)
# Model 1 0.48746 2.7765 0.01 **
# Residual 14 2.45797
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# H0 is rejected. The RDA model is significative.
```
But when I plot, I obtain the following: the x-axis is a constrained ordination axis (CAP1) while the y-axis is an unconstrained one (MDS1). Is this statistically meaningful?
[](https://i.stack.imgur.com/r63is.png)
Could this be related to the fact that there is no unconstrained variation (inertia) in the simple RDA?
```
species.rda<-rda(species.hel ~ ., new_metadata)
summary(species.rda)
Call:
rda(formula = species.hel ~ N + P + K + Ca + Mg + S + Al + Fe + Mn + Zn + Mo + Baresoil + Humdepth + pH + Group_A + Group_B, data = new_metadata)
Partitioning of variance:
Inertia Proportion
Total 0.2069 1
Constrained 0.2069 1
Unconstrained 0.0000 0
```
Many thanks for your help/comments.
EDIT: I know that if a variable of ecological interest is not selected, I can still retain it in the RDA model.
| Constrained Analysis of Principal Coordinates and Multidimensional scaling | CC BY-SA 4.0 | null | 2023-03-31T08:50:06.127 | 2023-03-31T09:24:02.660 | 2023-03-31T09:24:02.660 | 245966 | 245966 | [
"r",
"multivariate-analysis",
"redundancy-analysis"
] |
611357 | 1 | 611360 | null | 1 | 52 | I am a bit lost with convergence in probability and the absolute value.
Let $X_n$ be a random variable defined in $\mathbb{R}$ with $\lim_{n \rightarrow \infty} E[X_n] = a$ and $V[X_n] = O(n^{-1})$. Then, we know that by applying Chebyshev's inequality
$$P(|X_n - a| > e) = O(n^{-1})$$
which means that $X_n$ converges in probability to $a$.
Now imagine that I can only show that $E|X_n - a| = o(1)$; using Markov's inequality, I can conclude that $X_n$ converges in probability to $a$.
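Explicitly, the Markov step here is, for any $e > 0$ (valid since $|X_n - a|$ is non-negative):

```latex
P(|X_n - a| > e) \le \frac{\mathbb{E}|X_n - a|}{e} = \frac{o(1)}{e} \longrightarrow 0.
```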
But what if I am only able to show that
$$|E[X_n] - a| = o(1)? $$
| Convergence in probability to a constant and absolute value (?) | CC BY-SA 4.0 | null | 2023-03-31T08:54:40.137 | 2023-03-31T09:33:44.630 | 2023-03-31T09:08:02.493 | 365245 | 365245 | [
"convergence",
"probability-inequalities",
"absolute-value"
] |
611358 | 1 | null | null | -1 | 181 | I'm learning about Gaussian process functional regression but the more I learn, the more questions arise.
So we want to estimate a function $f\left( x \right)$ from data $D = \left( {\left( {{x_1},{y_1}} \right),...,\left( {{x_n},{y_n}} \right)} \right)$ by updating a ${\text{GP}}\left( {m\left( x \right),k\left( {x,x'} \right)} \right)$ prior.
It seems to me that GP regression should fulfill a "consistency condition" (I don't know what it's called in the literature): if we partition the data set $D$, update the ${\text{GP}}\left( {m\left( x \right),k\left( {x,x'} \right)} \right)$ prior with the first part, then update the resulting posterior process with the second part, and so forth until the last part, we should end up with the same final posterior process as if we had directly updated ${\text{GP}}\left( {m\left( x \right),k\left( {x,x'} \right)} \right)$ with the whole data $D$. Otherwise, we would get different estimates/inferences depending on the partition.
On the one hand, it is well-known that GP regression has $O\left( {{n^3}} \right)$ generic computational complexity.
On the other hand, if we successively update with each datum $\left( {{x_i},{y_i}} \right),\;i = 1,n$ one by one, the update equations are
$\left\{ \begin{gathered}
{m^{i + 1}}(x) = {m^i}(x) + {k^i}\left( {x,{x_i}} \right){\left( {{k^i}\left( {{x_i},{x_i}} \right) + {\sigma ^2}} \right)^{ - 1}}\left( {{y_i} - {m^i}\left( {{x_i}} \right)} \right) \hfill \\
{k^{i + 1}}(x,x') = {k^i}(x,x') - {k^i}\left( {x,{x_i}} \right){\left( {{k^i}\left( {{x_i},{x_i}} \right) + {\sigma ^2}} \right)^{ - 1}}{k^i}\left( {{x_i},x'} \right) \hfill \\
\end{gathered} \right.$
Each recursion requires the evaluation of 4 terms.
Hence, applying the recursion again to those 4 terms requires the evaluation of 16 terms, 8 of them being different for $m(x)$: ${{m^{i-1}}\left( {{x}} \right)}$, ${{m^{i-1}}\left( {{x_{i-1}}} \right)}$ , ${{m^{i-1}}\left( {{x_{i}}} \right)}$, ${k^{i-1}}\left( {x,{x_{i-1}}} \right)$, ${k^{i-1}}\left( {x_{i-1},{x_{i-1}}} \right)$, ${k^{i-1}}\left( {x,{x_{i}}} \right)$, ${k^{i-1}}\left( {x_{i-1},{x_{i}}} \right)$ and ${k^{i-1}}\left( {x_{i},{x_{i}}} \right)$.
And so forth. It appears that updating the kernel sequentially with each datum one by one has exponential computational complexity in $n$!
Therefore, GP regression doesn't fulfill the consistency condition: it gives completely different results (or no result at all) depending on how the data is partitioned.
Correct?
| Does Gaussian process functional regression fulfill the consistency condition? | CC BY-SA 4.0 | null | 2023-03-31T09:12:28.667 | 2023-04-01T15:14:58.293 | 2023-04-01T15:14:58.293 | 384580 | 384580 | [
"gaussian-process"
] |