Fields per record: Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
609496
1
null
null
1
85
Let's say we have a probability distribution in $x,y$ space: $$ p(x, y)=\frac{1}{4 \pi} \sqrt{x^2+y^2} \exp \left(- \sqrt{x^2+y^2}\right) $$ This can be converted into polar coordinates as: $$ p(r, \theta)=\frac{1}{4\pi} r^2 \exp (-r), \quad r\geq0 $$ Now suppose we want to know the most probable distance from the centre, that is, $r = \arg\max p$. This can be done in two ways. - We differentiate $p(r,\theta)$ with respect to $r$ and solve $$ \frac{\partial p(r, \theta)}{\partial r} = 0, $$ which gives $r=2$. - We rewrite $p(x,y)$ as $$ p(x, y)=\frac{1}{4 \pi} r \exp \left(-r\right). $$ Then solving $ \frac{\partial p(x, y)}{\partial r} = 0 $ instead gives $r=1$. This yields two different answers, even though they should refer to the same quantity (the most probable distance from the centre). My questions: which is correct, and why? And if both are correct, how do we resolve this contradiction? Related questions seem to be on Bayesian inference, so I included those tags, though my question is about generic distributions.
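For a numeric sanity check of the two candidate answers, here is a quick grid search (plain Python, purely for illustration) over the two radial profiles that appear above:

```python
import math

def height(r):
    # Density height in (x, y) space along a ray: proportional to r * exp(-r)
    return r * math.exp(-r)

def radial(r):
    # Radial profile including the Jacobian factor r: proportional to r**2 * exp(-r)
    return r * r * math.exp(-r)

grid = [i / 1000 for i in range(1, 10001)]   # r in (0, 10]
argmax_height = max(grid, key=height)        # where the density surface is highest
argmax_radial = max(grid, key=radial)        # maximizer of the r^2 exp(-r) profile
```

The density height along any ray peaks at $r=1$, while the profile with the Jacobian factor peaks at $r=2$ -- exactly the tension the question is about.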
Maximum of a distribution not invariant in different coordinate systems?
CC BY-SA 4.0
null
2023-03-14T22:15:58.527
2023-03-15T08:04:42.763
2023-03-14T22:18:55.673
383241
383241
[ "probability", "distributions", "bayesian", "maximum-likelihood" ]
609497
2
null
609493
2
null
You want an object inheriting from the `table` class: ``` pie(table(result)) ```
null
CC BY-SA 4.0
null
2023-03-14T22:30:11.603
2023-03-14T22:30:11.603
null
null
369002
null
609498
1
null
null
0
25
From this post, [Confidence regions on bivariate normal distributions using $\hat{\Sigma}_{MLE}$ or $\mathbf{S}$](https://stats.stackexchange.com/questions/372336/confidence-regions-on-bivariate-normal-distributions-using-hat-sigma-mle?newreg=84dda7a002824e29a79c2630ac09a3a7), the equation $(\mathbf{x} - \boldsymbol{\mu})^{T} \mathbf{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \leq \chi_{p, \alpha}^2$ can be described by the parametric curve $ \mathbf{x} = \boldsymbol{\mu} + \sqrt{\chi_{p, \alpha}^2} \mathbf{L} \begin{bmatrix} \cos(\theta)\\ \sin(\theta) \end{bmatrix} $ for $ 0 < \theta < 2 \pi $ Is it possible to generalize to larger dimensions? For example, for three dimensions $ \mathbf{x} = \boldsymbol{\mu} + \sqrt{\chi_{p, \alpha}^2} \mathbf{L} \begin{bmatrix} \cos(\theta) \cos(\phi)\\ \cos(\theta) \sin(\phi)\\ \sin(\theta) \end{bmatrix} $ for $ 0 < \theta < \pi $ and $ 0 < \phi < 2 \pi $ And so on?
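The identity that makes this work in any dimension is that for $\mathbf{x} = \boldsymbol{\mu} + \sqrt{c}\,\mathbf{L}\mathbf{u}$ with $\mathbf{u}$ a unit vector, $(\mathbf{x}-\boldsymbol{\mu})^T\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu}) = c\,\mathbf{u}^T\mathbf{u} = c$, so any parametrization of the unit sphere does the job. A two-dimensional numeric check with made-up $\boldsymbol{\mu}$, $\mathbf{L}$, and $c$:

```python
import math

mu = [1.0, 2.0]                       # hypothetical mean vector
L = [[2.0, 0.0], [0.6, 1.0]]          # hypothetical Cholesky factor, Sigma = L L^T
c = 5.991                             # chi-square(2) quantile at alpha = 0.05

# Build Sigma = L L^T and invert it by hand (2x2)
S = [[sum(L[i][k] * L[j][k] for k in range(2)) for j in range(2)] for i in range(2)]
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
Sinv = [[S[1][1] / det, -S[0][1] / det], [-S[1][0] / det, S[0][0] / det]]

def quad_form(theta):
    # Point on the parametric boundary: x = mu + sqrt(c) * L * (cos t, sin t)
    u = [math.cos(theta), math.sin(theta)]
    x = [mu[i] + math.sqrt(c) * sum(L[i][k] * u[k] for k in range(2)) for i in range(2)]
    d = [x[i] - mu[i] for i in range(2)]
    return sum(d[i] * Sinv[i][j] * d[j] for i in range(2) for j in range(2))

vals = [quad_form(2 * math.pi * k / 8) for k in range(8)]  # should all equal c
```

In three dimensions the same check goes through with any spherical-coordinate parametrization of the unit sphere.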
Confidence regions for multivariate normal distributions
CC BY-SA 4.0
null
2023-03-14T22:35:27.630
2023-03-14T22:35:27.630
null
null
383239
[ "confidence-interval", "chi-squared-test", "multivariate-normal-distribution", "hotelling-t2" ]
609499
1
612593
null
1
71
Let $x \in \{0,1\}^N$, and \begin{align} D &= \begin{bmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{M} \end{bmatrix} \end{align} so that $D \in \{0,1\}^{N \times M}$. This is the original dataset. A 0 indicates a trait and a 1 indicates absence of the trait. The order of the 0's and 1's matters for each $x$. A new data sample $D'$ was generated using a different generation process (for example, a Boltzmann machine). I am looking for a test statistic to show that $D$ and $D'$ come from different distributions, or otherwise. For example, it would be possible to use the Kolmogorov-Smirnov test, but I am not certain it would be appropriate for this data. Another contending approach is the kernel two-sample test. Again, while this might work, I am wondering if there are any caveats. Or is there any other statistical test that might be more relevant? References: [Test for difference between 2 empirical discrete distributions](https://stats.stackexchange.com/questions/88764/test-for-difference-between-2-empirical-discrete-distributions) [Method to justify claim that two samples come from the same distribution](https://stats.stackexchange.com/questions/204359/method-to-justify-claim-that-two-samples-come-from-the-same-distribution) [Is Kolmogorov-Smirnov test valid with discrete distributions?](https://stats.stackexchange.com/questions/1047/is-kolmogorov-smirnov-test-valid-with-discrete-distributions)
Justifying data samples are from different distribution
CC BY-SA 4.0
null
2023-03-14T22:36:55.867
2023-04-11T16:05:51.187
2023-03-15T01:33:32.903
383244
383244
[ "hypothesis-testing", "distributions", "mathematical-statistics", "statistical-significance", "p-value" ]
609500
2
null
609486
7
null
Use both all the time. There is no strong reason to pick one over the other unless you have some kind of contractual obligation against one of the two. Both metrics have their pros and cons, and those have been discussed at length on CV.SE (e.g. [here](https://stats.stackexchange.com/questions/398199/why-is-a-pr-curve-considered-better-than-an-roc-curve-for-imbalanced-datasets?noredirect=1&lq=1), [here](https://stats.stackexchange.com/questions/262616/roc-vs-precision-recall-curves-on-imbalanced-dataset?rq=1) and [here](https://stats.stackexchange.com/questions/7207/roc-vs-precision-and-recall-curves)), but neither of the two is a panacea for a particular situation. For example, why not use the Brier score or the Continuous Ranked Probability Score ([CRPS](https://www.lokad.com/continuous-ranked-probability-score)) too? My advice is that if one really thinks that "generic metrics" like AUC-ROC, AUC-PR, Brier score, etc. are not fit for their modelling purposes, then they have to consider cost-sensitive learning to account for significantly different misclassification costs and/or practical usefulness, thus doing a proper decision curve analysis. Elkan (2001) [The foundations of cost-sensitive learning](https://dl.acm.org/doi/10.5555/1642194.1642224) is probably one of the most well-cited original papers on the matter. Practical usefulness is usually visited in the context of clinical applications, so a good first read there is: Vickers & Elkin (2006) [Decision Curve Analysis: A Novel Method for Evaluating Prediction Models](https://pubmed.ncbi.nlm.nih.gov/17099194/).
null
CC BY-SA 4.0
null
2023-03-14T22:45:34.230
2023-03-14T22:48:22.540
2023-03-14T22:48:22.540
22311
11852
null
609502
2
null
609487
1
null
You don't require normal data to calculate a Pearson correlation. I would suggest you instead calculate the Pearson correlation and use the [Fisher transformation](https://en.wikipedia.org/wiki/Fisher_transformation) to do your statistical test: test the Fisher-transformed correlation (which assumes normally distributed variables) using your bootstrapping approaches (on a subset of the pairs). See this article [https://www.uvm.edu/~statdhtx/StatPages/Randomization%20Tests/BootstCorr/bootstrapping_correlations.html](https://www.uvm.edu/%7Estatdhtx/StatPages/Randomization%20Tests/BootstCorr/bootstrapping_correlations.html), which discusses 1) and confirms 3). Note that there is an equivalent of the Fisher transformation for Spearman correlations: [https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient](https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient) - see "Determining significance".
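A minimal sketch of the Fisher z test in plain Python (the sample correlation and $n$ here are made up): $\operatorname{atanh}(r)$ is approximately normal with standard error $1/\sqrt{n-3}$.

```python
import math
from statistics import NormalDist

def fisher_z_pvalue(r, n, rho0=0.0):
    """Two-sided p-value for H0: correlation = rho0 via Fisher's z-transform.
    atanh(r) is approximately normal with SE = 1 / sqrt(n - 3)."""
    z = (math.atanh(r) - math.atanh(rho0)) * math.sqrt(n - 3)
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical example: r = 0.30 from n = 100 pairs
p = fisher_z_pvalue(r=0.30, n=100)
```

This is the parametric counterpart; the bootstrap approach in the linked article replaces the normal reference distribution with resampled correlations.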
null
CC BY-SA 4.0
null
2023-03-14T22:49:17.830
2023-03-14T23:11:56.927
2023-03-14T23:11:56.927
27556
27556
null
609503
1
null
null
0
25
Consider the general problem of predicting the conditional mean $E(Y|X)$ where $X$ is the predictor. One assumes $Y$ can be written as $Y=f(x)+e$ where $E(e|X)=0$, which implies the covariance of the predictor and $e$ is 0. Is $E(e|X)=0$ an assumption that is made, or is it something that follows automatically from the fact that $f$ is the conditional mean of $Y$ (conditioned on $X$)? If it's an assumption, what drives us to make it?
In a predictive model, is orthogonality of noise and predictors an assumption or something that can be proved?
CC BY-SA 4.0
null
2023-03-14T22:58:02.723
2023-03-14T22:58:02.723
null
null
242049
[ "machine-learning", "mathematical-statistics", "conditional-expectation" ]
609504
1
609506
null
1
45
I am dealing with the following exercise: Exercise. In a linear regression model of the form $y = w_1x + w_0$, if we increase the value of $x$ by one unit, what can we expect from the value of $y$? $(a)$ An increase of value equal to $1$; $(b)$ An increase of value equal to $w_0$; $(c)$ An increase of value equal to $w_1;$ $(d)$ It is impossible to tell anything about the increase in $y$. My attempt. As an aspirant to be a mathematician, this question leads to one simple line of thinking: if $x$ is increased by one unit, one might think of this as a simple variable change of the form $x \to x+1.$ If this is the case, we obviously expect $y$ to increase by the parameter $w_1,$ hence I would select option $(c).$ My concerns. After some further thought, what exactly is "one unit" of the variable $x$? As far as I am concerned, this might not be as objective a concept as I thought. This makes me think that option $(d)$ probably makes some sense as well. All the information I have about the question is posted in the Exercise section. Thanks for any help in advance.
Simple question about linear regression
CC BY-SA 4.0
null
2023-03-14T23:59:02.983
2023-03-15T00:32:35.320
null
null
383130
[ "regression", "self-study", "linear" ]
609505
1
null
null
0
14
In some empirical studies, as a validation exercise, some people regress other variables on the variable of interest, controlling for key control variables. The reason for doing this I think comes from the following argument: The conditional mean independence assumption (CMI) $$\mathbb{E}[u_{i}|X_{i}, W_{i}^1] = \mathbb{E}[u_{i}|W_{i}^1]$$ is the key assumption in all regression control approaches. If CMI is true, then by implication it should be true that $$\mathbb{E}[W_{i}^2|X_{i}, W_{i}^1] = \mathbb{E}[W_{i}^2|W_{i}^1]$$ Other variables ($W^2$) should be independent of (uncorrelated with) the variable of interest ($X$) conditional on the key control variables ($W^1$). Does anyone know why this is the case? Is there a simple proof for this: $\mathbb{E}[u_{i}|X_{i}, W_{i}^1] = \mathbb{E}[u_{i}|W_{i}^1] \Rightarrow \mathbb{E}[W_{i}^2|X_{i}, W_{i}^1] = \mathbb{E}[W_{i}^2|W_{i}^1]$?
Testing implications of Conditional Mean Independence
CC BY-SA 4.0
null
2023-03-15T00:06:33.293
2023-03-15T00:06:33.293
null
null
285494
[ "least-squares", "econometrics", "controlling-for-a-variable", "conditional-independence" ]
609506
2
null
609504
3
null
The answer to the question is certainly $(c)$. With a one-unit increase in $x$, $y$ increases by the slope. So if the equation is: $$ y = .54x + 45 $$ then increasing $x$ from, say, $0$ to $1$ changes the prediction from $$ y = 45 $$ to $$ y = .54 + 45 = 45.54, $$ an increase of value equal to the slope (the $w_1$ in your question). To your question about what a one-unit increase in $x$ actually means: that depends on the measurement of $x$ (centimeters, pounds, etc.). However, that doesn't matter here -- the increase in $y$ is $w_1$ per unit of $x$, whatever that unit is. Therefore the answer would still be $(c)$.
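To make the point concrete, here is a two-line check (with the hypothetical coefficients from this answer) that the increase equals the slope no matter where $x$ starts:

```python
w1, w0 = 0.54, 45.0                 # slope and intercept from the example
def y(x):
    return w1 * x + w0

# The increase from x to x + 1 is w1 regardless of the starting x
diffs = [y(x + 1) - y(x) for x in (0.0, 10.0, -3.2)]
```

Every element of `diffs` is the slope, 0.54.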
null
CC BY-SA 4.0
null
2023-03-15T00:32:35.320
2023-03-15T00:32:35.320
null
null
345611
null
609507
1
null
null
0
110
In statistics there are 4 types of data: ordinal, nominal, interval, and ratio. Based on what I read on many websites, in a multiple linear regression analysis you have to use interval or ratio data. Let's say I have Y as the dependent variable and X as independent. I have Y = stock price (in USD), X1 = profit (in USD), and X2 = Return on Equity (on a ratio scale). I know all of my data are ratio-type data, but according to my guidebook I have to transform my stock price (Y) and profit (X1) to a ratio scale like RoE (X2), and I can't keep them formatted in USD. Is that really the case? And can I mix interval and ratio data (i.e., Y = ratio data, X1 = interval data, X2 = ratio data) in a multiple linear regression?
Type of data in linear regression
CC BY-SA 4.0
null
2023-03-15T00:54:17.343
2023-03-15T09:26:35.213
null
null
383250
[ "multiple-regression", "categorical-data" ]
609508
1
609509
null
1
109
Let $X_1,\dots, X_n$ be a random sample from the geometric distribution $P(X=x)=\theta(1-\theta)^x$ for $x=0,1,2,\dots$ where $0<\theta<1$. Find a function of $\theta$, say $\tau=h(\theta)$, so that there exists an unbiased estimator $\hat{\tau}$ of $\tau$ and the variance of $\hat{\tau}$ coincides with the Cramér-Rao lower bound. --- My work: I first obtain the Cramér-Rao lower bound on the variance of $\hat{\tau}$: $$ \operatorname{Var}(\hat{\tau})\ge \frac{(h'(\theta))^2}{nI(\theta)}=\frac{(h'(\theta))^2\,\theta^2(1-\theta)}{n} $$ where $$I(\theta)=-E\left[\frac{\partial^2}{\partial \theta^2}\log f(x;\theta)\right]=E\left[\frac{1}{\theta^2}+\frac{x}{(1-\theta)^2}\right]=\frac{1}{\theta^2(1-\theta)}$$ Also, since $\hat{\tau}$ is unbiased, $E[\hat{\tau}]=\tau$. But I have no idea how to find such a function $h(\cdot)$.
Find a function of $\theta$ so that there exists an unbiased estimator and the variance coincides with Cramér-Rao lower bound
CC BY-SA 4.0
null
2023-03-15T01:19:53.710
2023-03-15T22:53:29.803
2023-03-15T22:53:29.803
44269
334918
[ "self-study", "mathematical-statistics", "estimation", "unbiased-estimator", "cramer-rao" ]
609509
2
null
609508
2
null
When $X_i\overset{\textrm{iid}}{\sim}f(x\mid\theta)$ and $\hat\tau(\mathbf x)$ is an unbiased estimator of $\tau(\theta)$, the estimator attains the CRLB if and only if there exists some function of $\theta$, $a(\theta)$ (say), such that $$ a(\theta)\left[\hat{\tau}(\mathbf x) -\tau(\theta)\right]=\partial_\theta\ln\mathcal L(\theta\mid\mathbf x). \tag 1\label 1$$ Here $X_i\sim\textrm{Geom}(\theta). $ So, \begin{align}\partial_\theta\ln\mathcal L(\theta\mid\mathbf x) &= \partial_\theta\left[n\ln\theta+\sum x_i\ln(1-\theta)\right]\\&=\frac n\theta-\frac{\sum x_i}{1-\theta}\\&= \frac{n}{\theta-1}\left(\bar x-\frac{1-\theta}{\theta}\right).\tag 2\label 2\end{align} Taking $a(\theta) :=\frac{n}{\theta-1}, ~\tau(\theta) :=\frac{1-\theta}\theta $ in $\eqref 2,$ from $\eqref 1,$ it can be concluded that $\bar x$ (an unbiased estimator of $\tau(\theta)$) attains the CRLB. --- ## Reference: $\rm [I]$ Statistical Inference, George Casella, Roger L. Berger, Wadsworth, $2002, $ sec. $7.3, $ p. $341.$
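As a cross-check on the conclusion: the CRLB for $\tau(\theta)=(1-\theta)/\theta$ works out to $(1-\theta)/(n\theta^2)$, which is exactly $\operatorname{Var}(\bar X)$ for a geometric sample, so the bound is attained. A quick numeric confirmation over arbitrary $\theta$ and $n$:

```python
def crlb(theta, n):
    info = 1.0 / (theta**2 * (1.0 - theta))   # Fisher information per observation
    tau_prime = -1.0 / theta**2               # derivative of tau(theta) = (1-theta)/theta
    return tau_prime**2 / (n * info)

def var_xbar(theta, n):
    return (1.0 - theta) / (theta**2 * n)     # Var(X)/n for X ~ Geom(theta)

checks = [(crlb(t, n), var_xbar(t, n)) for t in (0.2, 0.5, 0.9) for n in (5, 50)]
```

The two quantities agree for every combination checked.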
null
CC BY-SA 4.0
null
2023-03-15T02:44:39.133
2023-03-15T02:44:39.133
null
null
362671
null
609510
1
null
null
2
25
In Causality - Models, Reasoning And Inference by Pearl, definition 2.7.1 says - > Potential Cause definition: A variable $X$ has a potential causal influence on another variable $Y$ (that is inferable from $\hat {P}$) if the following conditions hold - 1. $X$ and $Y$ are dependent in every context. 2. There exists a variable $Z$ and a context $S$ such that      (i) $X$ and $Z$ are independent given $S$ (i.e., $X \perp Z|S$) and      (ii) $Z$ and $Y$ are dependent given $S$ I want to understand how the second condition (with its two sub-conditions) forms a defining factor for potential causes. Note: I know the $IC^*$ algorithm (as given in Section 2.6) and also understand the logic behind rules $R_1$ and $R_2$ for aligning the arrowheads. I have a feeling that these rules might be the inspiration behind the definition, but am unable to figure that out. Hence, it is fine if the answer is provided in this context.
Justification of definition 2.7.1 (Potential cause) in Causality by Pearl
CC BY-SA 4.0
null
2023-03-15T03:24:48.050
2023-03-15T03:24:48.050
null
null
331772
[ "causality", "graphical-model", "bayesian-network", "causal-diagram" ]
609511
1
null
null
1
9
- Which tests can and cannot be run on a nonprobability sample? (Collected by convenience and snowball sampling - not by me.) I keep finding a hard no to t-tests and ANOVA across the board, then finding both yes and no, depending on the website, for chi-squares, correlations, regressions, Mann-Whitney, Kruskal-Wallis, the Wilcoxon signed-rank test, etc. Which is it? - Also, which tests can be run just to check for relationships within the sample itself, without making inferences to a population? Can chi-squares, correlations, regressions, or Mann-Whitney be run just to see the relationship within the sample, without the need to generalize to the population, or is that pointless? Thank you.
Help make sense of conflicting info for tests on nonprobability samples
CC BY-SA 4.0
null
2023-03-15T04:02:52.320
2023-03-15T04:02:52.320
null
null
383258
[ "nonparametric" ]
609512
1
null
null
1
9
I am trying to compare completion rates of 4 online assessments: A1, A2, B1, B2. A2 is the modified version of A1, and B2 is the modified version of B1. I have a categorical variable of Completed / Not Completed for each participant (around 15,000 participants in total). These are probably very ignorant questions. - Could I combine A1 with A2 and B1 with B2 and do a t-test comparing A and B? (Ideally we would like to compare A vs. B.) Or is ANOVA more recommended? - A1 has a sample size about 4-5 times larger than the other assessments (it was assigned more frequently). Would this difference become a problem? If so, could I randomly sample from A1 so that they are all about the same sample size?
Comparing completion rates of 4 assessments
CC BY-SA 4.0
null
2023-03-15T04:17:05.263
2023-03-15T04:17:05.263
null
null
383261
[ "hypothesis-testing" ]
609514
1
609518
null
1
24
I am trying to calculate the marginal distribution $P(X)$ of the following joint distribution $P(X, Y)$.

|     | y=0 | y=1 | y=2 |
|-----|-----|-----|-----|
| x=0 | .2  | .1  | .2  |
| x=1 | 0   | .2  | .1  |
| x=2 | .1  | 0   | .1  |

Here is how I am calculating: $P(X=0) = P(X = 0, Y=0) + P(X = 0, Y=1) + P(X = 0, Y=2) = .2 + .1 + .2 = .5$. Similarly I can find $P(X=1) = .3$ and $P(X=2) = .2$, and this looks like the correct approach since $P(X = 0) + P(X = 1) + P(X = 2) = 1$. But if I break the $P(X,Y)$ term using the chain rule, I get a different answer. (I've calculated $P(Y)$ by adding the columns.) $$ \begin{align} P(X=0) &= P(X = 0, Y=0) + P(X = 0, Y=1) + P(X = 0, Y=2) \\ &= P(X =0\mid Y=0) P(Y=0) + P(X =0\mid Y=1) P(Y=1) + P(X =0\mid Y=2) P(Y=2) \\ &= .2 \times .3 + .1 \times .3 + .2 \times .4 \\ &= 0.17 \end{align} $$ In this approach, $P(X=1) = .1$ and $P(X=2) = .07$, and they do not add up to 1. Why do the results differ when I expand $P(X,Y)$ using the chain rule?
Calculate marginal distribution using chain rule
CC BY-SA 4.0
null
2023-03-15T05:58:07.873
2023-03-15T06:49:34.920
null
null
209434
[ "probability", "marginal-distribution" ]
609515
1
609523
null
2
73
The central limit theorem states that: > for identically distributed independent samples, the standardized sample mean tends towards the standard normal distribution even if the original variables themselves are not normally distributed. But I'm confused about the precise meaning of this statement. If the original random variable is uniformly distributed between (0,1), then the sample mean will always be a real number >= 0, and the standardized sample mean will always have some cutoff point somewhere on the domain, regardless of the size of the sample `N`. Whereas a normal distribution PDF is nonzero across the entire domain. So it seems to me that there is a fundamental difference. Am I wrong in this, and is the standardized sample mean for this uniform random variable truly normally distributed when N = infinity?
Central Limit Theorem and Normal Distribution Approximation
CC BY-SA 4.0
null
2023-03-15T06:12:05.840
2023-03-16T00:08:57.770
2023-03-15T16:56:55.937
280928
280928
[ "normal-distribution", "central-limit-theorem" ]
609516
1
null
null
0
25
What are the steps to manually calculate the backpropagation gradients for the architecture I mentioned (linear, batch norm, leaky ReLU, linear)? I'm confused because the architectures in the backprop material I find online differ from the neural network architecture that I use. In particular, I'm unsure about the final linear layer that doesn't use an activation function, and about how to calculate the gradient through the batch norm layer using its derivative. The loss I use is BCEWithLogitsLoss. I've tried to calculate the gradient of the loss with respect to the output, and then for the linear layer I multiplied the output of the previous layer by that gradient, but starting here I feel I'm going wrong. The output I want is gradient values that I can use to update the weights with the Adam optimizer. Thank you.
How to calculate gradient manually in backpropagation if the neural network architecture consists of linear, batch norm, leaky relu, linear?
CC BY-SA 4.0
null
2023-03-15T06:24:44.730
2023-03-15T06:24:44.730
null
null
383266
[ "neural-networks", "backpropagation" ]
609517
1
null
null
0
25
In linear regression, if we assume the error term follows a Gaussian distribution of zero mean, and assume we are using mean squared loss, then we can show that this minimisation will lead to finding optimal parameters that estimate the conditional expectation of the target $y$ given $x$. I have no trouble with this, as the estimation can be shown with maximum likelihood; I have trouble showing that the expectation of the ground truth $y$ conditioned on $x$ is indeed linear (in parameters). Is the following idea correct? It can be shown that the conditional distribution of $y$ given $\mathbf{x}$ is also normally distributed: $$ \begin{aligned} y \mid \mathbf{x} &\sim \mathcal{N}\left(y \mid f\left(\mathbf{x}; \boldsymbol{\theta}\right), \sigma^2\right) \\ &= \mathcal{N}\left(y \mid f\left(\mathbf{x}; \boldsymbol{\theta}\right), \boldsymbol{\beta}^{-1}\right) \\ &= \mathcal{N}\left(\beta_0 + \boldsymbol{\beta}^T \mathbf{x}, \boldsymbol{\beta}^{-1}\right) \\ \end{aligned} $$ The [proof here](https://stats.stackexchange.com/questions/327427/how-is-y-normally-distributed-in-linear-regression) shows this step. If this is true, then can I say $\mathbb{E}[y \mid \mathbf{x}]$ is the mean $\beta_0 + \boldsymbol{\beta}^{T}\mathbf{x}$ by definition of the Gaussian?
How to prove that the expectation of $y$ given $x$ is the linear equation itself
CC BY-SA 4.0
null
2023-03-15T06:32:43.730
2023-03-15T06:32:43.730
null
null
253215
[ "regression", "multiple-regression" ]
609518
2
null
609514
2
null
Note that $$\mathbb P(X=x\mid Y=y) =\frac{\mathbb P[(X=x) \cap(Y=y)]}{\mathbb P(Y=y) }.\tag 1$$ So, $$\mathbb P(X=0\mid Y=0)=\frac{\mathbb P(X=0, Y=0) }{\mathbb P(Y=0) }=\frac{0.2}{0.3}.$$ With the conditionals computed this way, both methods yield the same result.
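The OP's discrepancy comes from plugging joint probabilities in where conditionals belong; once the conditionals are computed from $(1)$, both routes agree. A small check of the table from the question (plain Python):

```python
# Joint table from the question: keys are (x, y)
joint = {(0, 0): .2, (0, 1): .1, (0, 2): .2,
         (1, 0): .0, (1, 1): .2, (1, 2): .1,
         (2, 0): .1, (2, 1): .0, (2, 2): .1}

# Column sums give P(Y)
p_y = {y: sum(joint[(x, y)] for x in range(3)) for y in range(3)}

# Route 1: marginalize the joint directly over y
p_x_direct = {x: sum(joint[(x, y)] for y in range(3)) for x in range(3)}

# Route 2: chain rule with proper conditionals P(X=x | Y=y) = joint / P(Y=y)
p_x_chain = {x: sum(joint[(x, y)] / p_y[y] * p_y[y] for y in range(3))
             for x in range(3)}
```

Both routes give $P(X=0)=0.5$, $P(X=1)=0.3$, $P(X=2)=0.2$, summing to 1.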
null
CC BY-SA 4.0
null
2023-03-15T06:49:34.920
2023-03-15T06:49:34.920
null
null
362671
null
609519
1
null
null
0
12
I want to create a 3 layers neural network from scratch to perform linear regression. The first and the second layer have 2 neurons, and the last layer has one neuron. Feature vector x is divided into $x_{1}, x_{2}$ where $x_{1} = ax, x_2 = (1-a)x, 0 < a < 1$ Hence, there are 6 weights: $w_{111}, w_{121}, w_{211}, w_{221}, w_{112}, w_{212}$. To note that I'm not including biases since they aren't the object of the question and the activation function is linear, so I'm not including that as well. Reading [the answer to this question](https://datascience.stackexchange.com/questions/117281/does-derivative-of-an-activation-function-refer-to-process-of-back-propogation-i), I'm computing the gradient of each weight in this way: ($w_{111}$ and $w_{121}$ are took as examples) $\frac{∂C}{w_{111}} = \frac{1}{m} \sum_{i=1}^m \frac{∂L_{i}}{w_{111}}$, $\frac{∂L_{i}}{w_{111}} = (y-y')\frac{∂y'}{∂w_{111}}$, $\frac{∂y'}{∂w_{111}} = \frac{∂z_{a1}}{∂w_{111}}$, $\frac{∂z_{a1}}{∂w_{111}} = \frac{∂}{∂w_{111}}(w_{111}x_{1} + w_{121}x_{2}) = x_{1}$ $\frac{∂C}{w_{111}} = \frac{1}{m} \sum_{i=1}^m (y-y')x_{i1}$ $\frac{∂C}{w_{112}} = \frac{1}{m} \sum_{i=1}^m (y-y')x_{i2}$ But, since $z_{a1} = w_{111}x_{1} + w_{121}x_{2}$, and $z_{a2} = w_{211}x_{1} + w_{221}x_{2}$, isn't $\frac{∂C}{w_{111}} = \frac{1}{m} \sum_{i=1}^m (y-y')x_{i1} = \frac{∂C}{w_{211}}$, and so, isn't $w_{111} = w_{211}$, and $w_{121} = w_{221}$?
Are some gradient weights equal?
CC BY-SA 4.0
null
2023-03-15T06:54:46.933
2023-03-15T06:54:46.933
null
null
383202
[ "machine-learning", "python", "gradient-descent", "derivative" ]
609520
2
null
595791
2
null
> Performing the same calculation gives me the familywise error rate as $\text{Pr}(\text{Test 1 false rejects}) + \text{Pr}(\text{Test 2 false rejects}) = 0.0506\ldots$ The probability of one or more tests falsely rejecting is not the sum of the probabilities if the events may potentially overlap. $$P(\text{A or B}) = P(\text{A}) + P(\text{B}) {\color{red}{ - P(\text{A and B})}}$$ The third term is what brings the result down to $0.05$. That is, if the events are independent, such that $P(\text{A and B}) = P(\text{A}) \times P(\text{B})$. With the Bonferroni correction the worst case of $P(\text{A and B}) = 0$ is assumed. [Multiply, add, or condition on probability?](https://stats.stackexchange.com/questions/593683/multiply-add-or-condition-on-probability)
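Numerically, assuming the $0.0506$ comes from two tests each run at the Šidák-style level $\alpha' = 1 - \sqrt{1 - 0.05}$ (an assumption on my part about the quoted calculation), the naive sum overshoots while inclusion-exclusion recovers $0.05$ for independent tests:

```python
alpha_family = 0.05
# Per-test level chosen so that two independent tests give exactly 5% familywise
alpha = 1 - (1 - alpha_family) ** 0.5       # about 0.02532

naive_sum = 2 * alpha                        # about 0.05064, the sum-of-probabilities figure
exact = alpha + alpha - alpha * alpha        # inclusion-exclusion under independence
```

The subtracted overlap term is exactly the gap between the two numbers.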
null
CC BY-SA 4.0
null
2023-03-15T06:59:04.860
2023-03-15T07:21:31.110
2023-03-15T07:21:31.110
164061
164061
null
609522
2
null
609492
1
null
In a regression of an outcome on treatment group and a covariate, the intercept generally does not correspond to the reference group mean. It is an estimate of the reference group mean when the covariate is equal to 0, which may not be a meaningful value, and is only such under certain assumptions that may not be true (i.e., parallel regression lines or perfectly balanced groups). Instead, you should center the covariate (blood pressure) at its mean, and include an interaction between group and the centered covariate. This correctly adjusts for the covariate (though still assumes the relationship is linear) and allows for the desirable interpretation of the intercept and group coefficients. So, you might run a regression like the following: ``` cen <- function(x) {x - mean(x)} lm(bonedensity ~ group * cen(bloodpressure), data = df) |> summary() ``` From this, the intercept is equal to the adjusted control group mean and the coefficient on group (if there are only two groups) is equal to the difference between the adjusted treatment group mean and the adjusted control group mean. There is no need to interpret any other coefficients if you are only interested in adjusting the treatment effect estimates. For three groups, you should do exactly the same.
null
CC BY-SA 4.0
null
2023-03-15T07:02:42.660
2023-03-15T07:02:42.660
null
null
116195
null
609523
2
null
609515
6
null
> then the sample mean will always be a real number >= 0 Yes. And less than 1. Sure. > the standardized sample mean will always have some cutoff point somewhere on the domain, regardless of the size of the sample N. Those hard limits grow with sample size (they grow in proportion to $\sqrt{n}$). As $n\to\infty$ those bounds pass beyond any finite value, but they also become less important (as n increases, the bounds are more and more standard deviations from the mean). e.g. at n=1000000, those bounds are over 577 standard deviations from the mean. > Whereas a Normal distribution PDF is non zero across the entire domain. So there seems to me that there is a fundamental difference. The key phrase in the actual CLT (a phrase that's missing from what you're quoting there, by the look) is in the limit as $n\to\infty$. > Am I wrong in this, and is the standardized sample mean for this uniform random variable truly normally distributed when N = infinity? The sample size is never actually infinity. You have a sequence of standardized means at increasing sample size, with a corresponding sequence of distributions (in this case, standardized versions of the Irwin-Hall distributions). The standard normal is the limiting case of that sequence; you get closer and closer to it -- and ultimately, as close as you like (e.g. if you look at the biggest absolute difference between the cdf of the standardized mean $F_n$ and that of the standard normal, $\Phi$, there's an $n$ that will bring that below any $\epsilon>0$). In other words it gets as close as you like; at some finite sample size it can be closer than any positive bound (you may need a different one for each bound, naturally).
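The growing-bounds point can be made concrete: for Uniform(0,1), the standardized mean is confined to $\pm\sqrt{3n}$, so the cutoff recedes as $n$ grows. A quick computation:

```python
import math

def z_bound(n):
    """Largest possible |standardized mean| for n iid Uniform(0,1) draws."""
    sigma = math.sqrt(1.0 / 12.0)           # sd of a single Uniform(0,1)
    return 0.5 * math.sqrt(n) / sigma       # equals sqrt(3 * n)

bounds = {n: z_bound(n) for n in (10, 1000, 1_000_000)}
```

At $n = 10^6$ the cutoff sits over a thousand standard deviations out, consistent with the "over 577" figure above.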
null
CC BY-SA 4.0
null
2023-03-15T07:10:20.240
2023-03-16T00:08:57.770
2023-03-16T00:08:57.770
805
805
null
609524
1
null
null
0
28
A glm, where the response is Poisson distributed, is tested by using the analysis of deviance. In R the model looks like this: ``` glm(Y ~ A + B + C + A:B + A:C + B:C, family = poisson, data = data) ``` If I use the anova()-function on this model, which part of the output can be used to test the interaction effect of B:C at significance level $\alpha$? And how can I test the goodness-of-fit of the model ``` A + B + C + A:B + A:C ``` at significance level $\alpha$?
Testing the interaction of B:C on a glm using the analysis of deviance in R
CC BY-SA 4.0
null
2023-03-15T07:43:59.453
2023-03-16T09:25:05.217
2023-03-16T09:25:05.217
28500
304809
[ "r", "generalized-linear-model", "interaction", "goodness-of-fit", "deviance" ]
609525
1
null
null
0
41
Let's assume the following null and alternative hypotheses for a proportion: - H0: $p < 0.01$ - H1: $p \geq 0.01$ Generally, in the hypothesis-testing literature, I see an equality condition being used in `H0`. I wonder how I should think about this scenario, and what the test statistic will look like.
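For what it's worth, the usual approach evaluates the test statistic at the boundary value $p_0 = 0.01$, the least favorable point of the null (most treatments would also put the equality in H0). A normal-approximation sketch with hypothetical counts:

```python
import math
from statistics import NormalDist

def prop_ztest_upper(successes, n, p0=0.01):
    """One-sided z-test of H0: p <= p0 against H1: p > p0 (normal approximation).
    The standard error is computed at the boundary value p0."""
    phat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)
    z = (phat - p0) / se
    return z, 1 - NormalDist().cdf(z)

# Hypothetical data: 18 events out of 1000 trials
z, p = prop_ztest_upper(successes=18, n=1000)
```

A small p-value is then evidence against the entire composite null $p < 0.01$, since the boundary is its hardest-to-reject member.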
Hypothesis test for proportion falling below a threshold
CC BY-SA 4.0
null
2023-03-15T07:44:37.257
2023-03-15T09:58:47.713
null
null
383276
[ "hypothesis-testing", "proportion", "range" ]
609526
2
null
609496
1
null
I wrote some R code to help visualize this function; it is a very pretty surface. ``` library(plotly) x = seq(from = -3, to = 3, by = .2) y = x A = matrix(NA,nrow=length(x),ncol=length(y)) for(i in 1:length(x)){ for(j in 1:length(y)){ A[i,j] = sqrt(x[i]^2+y[j]^2)*exp(-sqrt(x[i]^2 + y[j]^2)) } } plot_ly(x=x,y=y,z=A)%>%add_surface() ``` Here is the picture that you get, [](https://i.stack.imgur.com/SJO36.png) As you can see, the highest value of the PDF occurs on the ring where $r=1$. One should be a little careful and call this the "most likely" region, as opposed to "most probable", as the probability is technically zero at all points. You wrote that, $$ p(r,\theta) = (\text{constant})\times r^2\exp\big( -r \big) $$ Where does the $r^2$ come from? When you convert to polar coordinates you replace $x^2+y^2$ by $r^2$. Perhaps you multiplied by an extra factor of $r$? You only do this multiplication when you replace $dx ~ dy$ by $r ~ dr ~ d\theta$. However, this is not an integration problem, and so you do the usual substitution. If you wrote it instead as, $$ p(r,\theta) = (\text{constant})\times r\exp\big( -r \big) $$ Then the maximum is achieved exactly at $r=1$. Which we can check with WolframAlpha, [](https://i.stack.imgur.com/96ywV.png) You can see from the R picture that the maximum value of the PDF is approximately $0.36$; the mathematically exact answer is $\frac{1}{e}$, which agrees with your calculus calculation.
null
CC BY-SA 4.0
null
2023-03-15T08:04:42.763
2023-03-15T08:04:42.763
null
null
68480
null
609528
1
null
null
0
30
I’m conducting a regression looking at the effect of a subsidy on output. Within the dataset, there are firms who have been randomly allocated a subsidy. However, firms with certain characteristics are never allocated a subsidy; for example, firms in a city are not subsidised. My analysis involves other details, although this is the primary issue I am having: given that firms in cities are not getting a subsidy, should I remove observations of firms in cities from the dataset because they’re not being subsidised, or should city (categorical) be included as a control dummy?
When to remove a subset of data for a regression?
CC BY-SA 4.0
null
2023-03-15T08:28:13.667
2023-03-15T08:28:13.667
null
null
383251
[ "regression", "dataset" ]
609530
1
null
null
0
47
I am currently reading Whitney Newey and Kenneth West's paper "A Simple, Positive Semi-definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix". For a multiple linear regression model: \begin{equation} y_t = X_t \beta_t + e_t, \qquad t=1,\cdots, T \end{equation} where $y_t$ and $X_t$ are given, $\beta_t$ is a non-random but unobservable vector, and $e_t$ is the noise term. According to the paper, the noise covariance of $e_t$, $S_T$, can be estimated using the sample errors $\hat{e}_t := y_t - X_t \hat{\beta}_t$, such that \begin{equation} \hat{S}_T = \hat{\Omega}_0 + \sum_{j=1}^m w_j\left[\hat{\Omega}_j + \hat{\Omega}_j^\top\right], \end{equation} where \begin{equation} \hat{\Omega}_j := \frac{1}{T}\sum_{t=j+1}^T\hat{e}_t\hat{e}_{t-j}^\top. \end{equation} Can anyone explain to me the derivation of $\hat{S}_T$? Why can $\hat{S}_T$ be expressed as a sum of the covariance matrices at different lags? My first guess is that $e_t$ might follow an MA(m) process, such that \begin{equation} e_t:= \hat{e}_t + \phi_1\hat{e}_{t-1} + \cdots + \phi_m\hat{e}_{t-m} \end{equation} but traditionally for an MA(m) process, we assume $\mathbb{E}\{e_{t} e_{t-j}^\top\} = 0$ for nonzero lag $j$. If we assume that $\mathbb{E}\{\hat{e}_{t} \hat{e}_{t-j}^\top\} \neq 0$ for all $j$, then $e_t$ is more like an AR(m) process, hence the covariance matrix is given by \begin{equation} \mathbb{E}\{e_te_t^\top\} = \begin{bmatrix} 1 & \phi_1 & \cdots & \phi_m\end{bmatrix} \begin{bmatrix} \hat{\Omega}_0 & \hat{\Omega}_1 & \cdots & \hat{\Omega}_m \\ \hat{\Omega}_1 & \hat{\Omega}_0 & \cdots & \hat{\Omega}_{m-1}\\ \vdots & \vdots & \ddots & \vdots \\ \hat{\Omega}_m & \hat{\Omega}_{m-1} & \cdots & \hat{\Omega}_0 \end{bmatrix}\begin{bmatrix} 1 \\ \phi_1 \\ \vdots \\ \phi_m\end{bmatrix} \end{equation}
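For concreteness, here is how I currently read the estimator in code: a minimal scalar sketch with simulated residuals standing in for $\hat{e}_t$, assuming Bartlett kernel weights $w_j = 1 - j/(m+1)$ (the choice Newey and West make to guarantee positive semidefiniteness; please correct me if the weights are meant differently).

```python
import numpy as np

rng = np.random.default_rng(0)
e = rng.standard_normal(200)  # stand-in for the fitted residuals e_hat_t (scalar case)
T, m = len(e), 4

def omega(j):
    # Omega_hat_j = (1/T) * sum_{t=j+1}^{T} e_t * e_{t-j}
    return (e[j:] * e[:T - j]).sum() / T

# Bartlett weights w_j = 1 - j/(m+1); for scalars Omega_j + Omega_j' = 2 * Omega_j
S_hat = omega(0) + sum((1 - j / (m + 1)) * 2 * omega(j) for j in range(1, m + 1))
```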
Derivation of Newey West Formula for Estimating the HAC Covariance Matrix
CC BY-SA 4.0
null
2023-03-15T09:09:46.657
2023-03-15T09:09:46.657
null
null
267492
[ "time-series", "multiple-regression", "neweywest" ]
609532
2
null
609507
2
null
There are by far more types of data than the ones listed (e.g. images that might be represented by multiple 2-D grids corresponding to color-channels, geo-spatial data etc.), although the taxonomy has some value. The regression model you show has continuous (positive) real-valued data as the dependent variable. We could philosophize about whether that's ratio data, when perhaps the price could never be 0 etc. and it certainly can never be negative (if it cannot ever be zero, you could get to the whole real line with a log-transformation). The reason to transform predictors is that it makes sense in terms of interpretation and/or predictive/explanatory ability (usually based on judgement from subject matter understanding), it would not be based on some guidebook saying that dependent variables need to be some particular type of data. In general, there are not really any distributional assumptions about the independent variables in a regression model and one can mix different types (it just needs to make sense).
null
CC BY-SA 4.0
null
2023-03-15T09:26:35.213
2023-03-15T09:26:35.213
null
null
86652
null
609534
2
null
608449
0
null
The data needs to be seasonally differenced in Stata using the S. operator, as seen on page 4 of the Stata manual entry for tsset: [https://www.stata.com/manuals13/tstsset.pdf](https://www.stata.com/manuals13/tstsset.pdf) The seasonal differencing operator needs to be used with the lag specified. The lag number is based on the frequency of the time series data (quarterly, annual, etc.). As this is quarterly data we use lag = 4, so the commands are: ``` gen d_lnoil = d.oil // differencing for trend gen d2_lnoil = s4.d_lnoil // seasonal differencing with lag = 4 ```
null
CC BY-SA 4.0
null
2023-03-15T09:37:23.970
2023-03-26T08:52:13.167
2023-03-26T08:52:13.167
22047
382439
null
609535
2
null
594147
1
null
It doesn't look too bad to me from the qqplot either. But you could also have a look at the histogram provided by ``` testDispersion(ModelNB, type = "DHARMa") ``` or calculate it using Pearson residuals ``` testDispersion(ModelNB, type = "PearsonChisq") ``` to get further impressions of the dispersion. However, there may be a missing covariate, zero-inflation or zero-truncation that is causing this problem. Did you start with a Poisson model as suggested by Zuur and Ieno 2021? They suggest doing a Poisson model with model validation first, including `testZeroinflation()`, and then moving on to negative binomial, generalised Poisson, zero-inflated Poisson and so on, depending on what you find in the model validation. So you might end up with a different distribution that fits your data better. The statement > alternative hypothesis: true probability of success is not 0.007968127 only tells you what the alternative hypothesis (H1) is; it doesn't tell you whether you should accept it or not. However, according to the p-value and confidence interval, you should reject the null in favour of H1. So, according to this test, your 2.08% outliers are significantly higher than the expected 0.797% outliers calculated by simulating data based on your model. It is probably a good idea to investigate these outliers.
null
CC BY-SA 4.0
null
2023-03-15T10:26:02.920
2023-03-15T14:16:29.947
2023-03-15T14:16:29.947
383278
383278
null
609536
1
null
null
0
12
I am reading the paper "Attention Map-Guided Visual Explanations for Deep Neural Networks", about attention maps for convolutional neural networks. My question is: is there any way to transfer that kind of tool to neural networks that are not trained on images? In other words, a way to explain the inferences of a NN in a feature-importance fashion? Feel free to drop suggestions, papers, repos, articles, etc. Thank you!
Attention Map alternative for Deep Neural Networks trained on tabular data
CC BY-SA 4.0
null
2023-03-15T10:27:57.657
2023-03-15T10:27:57.657
null
null
350354
[ "neural-networks", "inference", "explainable-ai" ]
609537
1
609908
null
3
141
I am using the pROC package in R to generate ROC curves. Using the "coords" function, I can extract the sensitivity (Se), specificity (Sp), negative predictive value (NPV) and positive predictive value (PPV) for different thresholds. I also calculated the Se, Sp, NPV and PPV for some thresholds using the caret package to compare. I am a bit confused, as the Se and Sp given by the pROC package are actually the NPV and PPV given by the caret package, respectively (and conversely, the NPV and PPV given by pROC are the Se and Sp in caret). Any explanations?
pROC package - sensitivity and specificity calculations
CC BY-SA 4.0
null
2023-03-15T10:36:52.380
2023-03-18T17:34:14.733
null
null
250007
[ "r", "roc", "sensitivity-specificity" ]
609538
1
null
null
0
18
I am studying relations between the migration movements of different fish species and the acidity (pH) of water. I am stuck on which statistical test I should use to find significant correlations and/or regressions. The data are structured as: - Date column (between the start of 2021 and the start of 2022) - pH (mean pH for every date) - Fish data column (how many fish of one species traveled through what we call a fishlift). This fishlift lets fish travel from one watercourse to another. I am trying to test whether the migration of fish (fish using the fishlift) is caused by changing pH. I've tested whether the fish data are normally distributed (they are), but I've come to realize that my data are not linear, but more or less parabolic. Also, my data contain a lot of zeros (since the fish didn't use the fishlift every day). Can someone explain which type of statistical analysis I should use? I almost used ANOVA, but I realized that the data are heavily dependent on themselves (since a fish could have entered and then left the watercourse on the same day, which would count as 2 observations). I also added two time plots to show the year of observations. [](https://i.stack.imgur.com/NsriU.png) [](https://i.stack.imgur.com/jTNru.png)
Using a non-linear regression test to find significant relations between fish migration movements and acidity in water
CC BY-SA 4.0
null
2023-03-15T10:38:41.790
2023-03-15T11:16:10.173
2023-03-15T11:16:10.173
383286
383286
[ "regression", "correlation" ]
609539
1
null
null
1
38
I am using two data sets to compare the total population vs our dataset. E.g. 10% of the population are in this age group (census data) vs. 15% of our observed population are in this age group (our dataset). I would like to be able to show if this difference is statistically significant. The data comes from raw numbers (discrete count data) for both and I am reasonably sure I should be using Poisson confidence intervals for discrete counts. That is fine for raw numbers, but when I am looking at %, I am not so sure how to proceed. As I am using total populations, it does not feel like any methods specifically specifying "sample size" are relevant to my case. (95% CI as an example below) |Age Group |Census |LCI |UCI |Data |LCI |UCI | |---------|------|---|---|----|---|---| |20-30 |20 |12.22 |30.89 |40 |28.58 |54.47 | |All Others |80 |63.44 |99.57 |150 |126.96 |176.02 | |Total |100 | | |190 | | | (superfluous decimal points for now) |Age Group |Census |LCI |UCI |Data |LCI |UCI | |---------|------|---|---|----|---|---| |20-30 |20% | | |21.05% | | | |All Others |80% | | |78.95% | | | I am essentially looking to fill out the blank LCI/UCI in the % table above. I hope I have not opened a whole can of statistical worms here, and I hope the example given above with dummy data is enough to clarify what it is I am trying to do. I am also aware this may be an extremely simple question for those better-versed in statistics than I. Many thanks in advance.
Calculate confidence intervals (%) from count data
CC BY-SA 4.0
null
2023-03-15T11:04:54.240
2023-03-15T11:04:54.240
null
null
375804
[ "confidence-interval" ]
609540
1
null
null
2
30
I would like first to mention that I am relatively new to the Machine Learning (ML) world, but I have a decent background in statistics and econometrics. I am working on a research paper focusing on the gender labor force participation gap. Using the language of econometrics, - the dependent (outcome) variable is a binary variable equal to 1 if the individual is in the labor force and 0 if they are not. - The primary variable is a binary variable equal to 1 if the individual is a female and 0 otherwise. - In addition to that, I have many control variables that, for convenience, I will stack in a matrix X. Using a classical econometrics approach, I would use a probit model and then find the difference between the probabilities of being in the labor force for males and females (i.e. Pr(LF=1|female=0, X=x)-Pr(LF=1|female=1, X=x)). My question is: is there a machine-learning counterpart for such a method? In other words, is there a machine learning approach that allows me to compare the probability of success (success defined as being in the labor force) of two groups conditional on a set of controls? I can provide further details if needed. Thanks!
Machine learning method(s) to compare probability of success of two groups
CC BY-SA 4.0
null
2023-03-15T11:07:34.880
2023-03-15T11:18:52.327
null
null
383289
[ "machine-learning", "econometrics", "probit" ]
609541
1
null
null
0
54
Disclaimer: I checked some similar questions but I could not find anything in particular that would work for my case. I am dealing with a time series going from 2015 to 2023. The data points are the results of an aggregation of scores calculated on a European country's financial news (simplifying: I take the financial news for each day, perform sentiment analysis, get a score per sentiment (no. of positive, no. of negative and neutral per day) and then aggregate to get one data point per day). Now, the model I built would work pretty well, if it wasn't for the huge Covid outlier: my target is GDP, and it's well captured, but in 2020 my sentiment curve goes way deeper compared to GDP. In fact, the country I am working on was particularly hit by Covid, hence there was a huge mass of negative news in 2020. This means I can't just remove the outlier or use the traditional methods, because I would lose information, i.e. part of that info is important, because GDP decreases considerably in that period, but not that much... Now, I have tried changing the aggregation techniques to better deal with that. More specifically, I tried the following: - np.log(n_positive_news + 1) / np.log(n_negative_news + 1) - np.log((n_positive_news + 1) / (n_negative_news + 1)) - -1 * n_negative_news I also tried applying an exponential moving average, a rolling mean, and a rolling sum. But none of these techniques helped a lot. The exponential moving average and the third aggregation technique improved results, but only slightly. - Any suggestions on other aggregation techniques or, in general, on how to deal with that outlier? - Train/test split: what do you think would be the optimal split? One that includes or excludes Covid in the training period? And why? For the moment, I am including it in the training period (train 01/01/2015-31/12/2020, test 01/01/2021-10/01/2023). The plot looks something like this (the drop is 2020, and the red line is my forecast). [](https://i.stack.imgur.com/C7Xvu.png)
How to deal with Covid outlier in time series/machine learning forecasting?
CC BY-SA 4.0
null
2023-03-15T11:15:52.450
2023-04-12T18:38:47.973
2023-03-15T12:16:07.587
22311
341183
[ "machine-learning", "time-series", "forecasting", "outliers", "covid-19" ]
609542
2
null
609540
2
null
You are looking for a [probabilistic classifier](https://en.wikipedia.org/wiki/Probabilistic_classification). You can feed in your predictor data for the two groups (so the predictors will presumably only differ in the group membership) and get the success probability for each group, conditional on the other predictors. Many classifiers can be taught to give probabilistic results, but unfortunately, many are implemented to yield "hard" 0-1 classifications, even though [probabilistic classifications are much more useful](https://stats.stackexchange.com/q/312119/1352). This earlier thread specifically discusses Random Forests as probabilistic classifiers: [How to make the randomforest trees vote decimals but not binary](https://stats.stackexchange.com/q/358948/1352)
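As a sketch of what this looks like in practice, here is a random forest used as a probabilistic classifier on simulated data (all variable names and numbers are invented for illustration): fit once, then query `predict_proba` twice with the group indicator flipped and the other predictors held fixed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 2000
female = rng.integers(0, 2, n)                 # hypothetical group indicator
age = rng.normal(40, 10, n)                    # hypothetical control variable
logit = 0.3 - 0.8 * female + 0.02 * (age - 40)
in_labor_force = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([female, age])
clf = RandomForestClassifier(n_estimators=200, min_samples_leaf=50, random_state=0)
clf.fit(X, in_labor_force)

# conditional success probabilities: same controls, group indicator flipped
p_male = clf.predict_proba([[0, 40.0]])[0, 1]
p_female = clf.predict_proba([[1, 40.0]])[0, 1]
gap = p_male - p_female  # analogue of Pr(LF=1|female=0, X=x) - Pr(LF=1|female=1, X=x)
```

`min_samples_leaf` keeps the leaf-level relative frequencies from degenerating to hard 0/1 votes, which is what makes the forest's probabilities usable here.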
null
CC BY-SA 4.0
null
2023-03-15T11:18:52.327
2023-03-15T11:18:52.327
null
null
1352
null
609543
1
null
null
1
14
How can I test the difference between the prediction accuracy of two regression models? My idea is to compare the errors of the two models, e.g., a one-predictor vs. a multiple-predictor model, in order to show that the difference between the two models (i.e., in terms of the accuracy of their results) is statistically significant. I would like to compare the differences, e.g., with a paired t-test procedure. The problem is different from both: - Classic statistical significance testing of a difference between means. I.e., the way I want to test the errors is the same, but the tested entities differ (classic statistical significance testing: means of measurements; regression models: errors of the regression model). - A machine learning classification problem (see, e.g., Statistical significance when comparing two models for classification) -- here I analyze a regression model, not a classifier (i.e., no cross validation) (regression: quantitative parameters are utilized to predict a quantitative output; classification: quantitative parameters are used to predict a class category). Maybe my regression problem can be stated as a logistic regression, or something -- I have not yet analyzed this, but your feedback is welcome. I also thought about the common mathematical formulation for ANOVA (where statistical significance is usually tested), and the GLM, but I do not think this resemblance can be beneficial when planning to test the models against each other, though I may be wrong. Any thoughts on the subject would be much appreciated. I am sorry for any inconsistencies in my question; I tried to be as specific as possible.
Statistical significance of the difference between two regression models
CC BY-SA 4.0
null
2023-03-15T11:24:54.017
2023-03-15T11:24:54.017
null
null
364471
[ "regression", "statistical-significance", "multiple-regression", "generalized-linear-model", "methodology" ]
609544
2
null
489219
0
null
- I am not sure what you mean by "comparison of the slopes", because if you have just two models, you would get just two slopes, say a1 and a2. Then how would you like to test the statistical significance of the difference between just these two? Anyway, if you would like to compare two models, it would be helpful if the dependent variable were the same in both models. This way you would be able to show, e.g., that model no. 1 is better than model no. 2 because (even though different independent variables were used), model no. 1 allows for a better prediction of the values the dependent variable takes. See my question: [Statistical significance of the difference between two regression models](https://stats.stackexchange.com/questions/609543/statistical-significance-of-the-difference-between-two-regression-models) - As far as I understand, in time-series models, time is an independent variable (usually denoted as "X"). I.e., given time (and optionally some other parameters), you attempt to predict the values of the dependent variable (usually denoted as "Y"). I.e. your output value depends on time (or at least that is what you hope to show). More on the difference between independent and dependent variables: https://www.scribbr.com/methodology/independent-and-dependent-variables/ ; and: https://www.khanacademy.org/math/cc-sixth-grade-math/cc-6th-equations-and-inequalities/cc-6th-dependent-independent/a/dependent-and-independent-variables-review - As I stated before, I am not completely sure what you meant in points 1. and 2., but generally, yes, I would use a t-test; see my question: Statistical significance of the difference between two regression models
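To make the t-test idea in the last point concrete, here is a sketch on simulated data (all names invented) comparing a one-predictor and a two-predictor OLS fit to the same response via a paired t-test on their per-observation squared errors. Note that comparing in-sample errors favours the larger model, so held-out errors are preferable in practice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 1.5 * x2 + rng.normal(size=n)

def sq_errors(X):
    # OLS fit, then per-observation squared errors
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (y - X @ beta) ** 2

X1 = np.column_stack([np.ones(n), x1])        # one-predictor model
X2 = np.column_stack([np.ones(n), x1, x2])    # two-predictor model

# paired t-test on the squared errors of the two models
t_stat, p_value = stats.ttest_rel(sq_errors(X1), sq_errors(X2))
```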
null
CC BY-SA 4.0
null
2023-03-15T11:36:27.610
2023-03-15T11:36:27.610
null
null
364471
null
609545
1
null
null
-2
35
How do I calculate the mean squared error (MSE) from the error obtained from an ANN's output?
What does the error in an artificial neural network stand for? Is it the same as mean squared error (MSE)?
CC BY-SA 4.0
null
2023-03-15T12:11:37.623
2023-03-15T12:36:23.217
2023-03-15T12:15:21.613
247274
383295
[ "regression", "machine-learning", "neural-networks", "supervised-learning", "mse" ]
609547
1
609549
null
4
425
In logistic regression, we often use maximum likelihood to estimate the parameter vector $\boldsymbol{\beta}$ that parametrizes the logistic equation. My confusion stems from the following: - We know that logistic regression models the conditional probability of $Y$ given $X$, i.e. $P(Y = 1 \mid X)$ for the binary case. - We also know that the conditional distribution $Y \mid X \sim \text{Ber}(p)$ is Bernoulli for the binary case. - Now the confusion I face is: after maximum likelihood estimation, we derive a set of “optimal” parameters $\boldsymbol{\beta}$; is the parameter found the same as $p$, where $p$ is the parameter of the Bernoulli distribution? My mind is fixated on the idea that since the likelihood function of $Y$ given $X$ is Bernoulli, we should be finding the $p$ that maximises the likelihood of the data. —— An attempt to answer this: finding $\boldsymbol{\beta}$ is equivalent to finding the $p$ for the conditional distribution of $Y$ given a certain $X$ value. So they are the same. EDIT: To clarify my question: by the definition of maximum likelihood, we are finding the parameter that maximises the conditional likelihood of $Y \mid X$, which in turn is Bernoulli. So my state of mind is that the parameter should be $p$, but of course we end up finding $\boldsymbol{\beta}$. I understand the logistic function, which is linear in the log odds with coefficients $\boldsymbol{\beta}$; what I failed to reconcile is whether maximum likelihood returns the parameter $p$ or $\boldsymbol{\beta}$, or whether it does not matter in this context since $\boldsymbol{\beta}$ and $p$ are linked.
Are we estimating the Bernoulli parameter in Logistic Regression?
CC BY-SA 4.0
null
2023-03-15T12:22:11.973
2023-03-15T13:58:07.920
2023-03-15T12:59:21.723
253215
253215
[ "regression", "machine-learning", "probability", "distributions", "logistic" ]
609548
2
null
609545
1
null
It depends on what loss or error function you use when you code the network. If your loss is the sum of squared errors (SSE), then divide this value by the number of predictions being made. If your loss is the mean squared error (MSE), then you already have the MSE and do not need to transform the value at all. If your loss is the root mean squared error (RMSE), square this to get the MSE. If you have a loss function that is unrelated to squared errors, such as mean absolute error (MAE) or crossentropy loss in a classification problem, the MSE cannot be recovered from such a value, so you need more information to get the MSE (such as the predictions that you can then feed into your own calculation of the MSE). Watch out for if the loss value reported for your neural network applies to in-sample or out-of-sample predictions. $$ SSE = \overset{N}{\underset{i=1}{\sum}}\left( y_i -\hat y_i \right)^2 $$ $$ MSE = \dfrac{1}{N} \overset{N}{\underset{i=1}{\sum}}\left( y_i -\hat y_i \right)^2 $$ $$ RMSE =\sqrt{ \dfrac{1}{N} \overset{N}{\underset{i=1}{\sum}}\left( y_i -\hat y_i \right)^2 } $$ $$ MAE = \dfrac{1}{N} \overset{N}{\underset{i=1}{\sum}}\left\vert y_i -\hat y_i \right\vert $$ $$ \text{Crossentropy Loss}\\= -\dfrac{1}{N} \overset{N}{\underset{i=1}{\sum}}\left[ y_i\log(\hat y_i) + (1-y_i)\log(1-\hat y_i) \right]\\ y_i\in\{0,1\}\\ \hat y_i\in[0,1] $$ $y_i$ are the observed values, $\hat y_i$ are the predicted values, and $N$ is the number of predicted values. For the crossentropy loss, it is often a convention to take $0\log(0)=0$.
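These formulas translate directly into code; a small worked example with made-up numbers:

```python
import numpy as np

y = np.array([3.0, -0.5, 2.0, 7.0])      # observed values
y_hat = np.array([2.5, 0.0, 2.0, 8.0])   # predicted values

sse = np.sum((y - y_hat) ** 2)           # 0.25 + 0.25 + 0.0 + 1.0 = 1.5
mse = sse / len(y)                       # 1.5 / 4 = 0.375
rmse = np.sqrt(mse)
mae = np.mean(np.abs(y - y_hat))         # (0.5 + 0.5 + 0.0 + 1.0) / 4 = 0.5
```

So given an SSE-style loss from the network, dividing by the number of predictions recovers the MSE; given an RMSE, squaring does.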
null
CC BY-SA 4.0
null
2023-03-15T12:28:37.453
2023-03-15T12:36:23.217
2023-03-15T12:36:23.217
247274
247274
null
609549
2
null
609547
8
null
The logistic regression model is a kind of [generalized linear model](https://en.wikipedia.org/wiki/Generalized_linear_model), so it consists of the linear predictor $$ \eta = \boldsymbol{\beta}X $$ we pass it through the inverse of the [link function](https://stats.stackexchange.com/questions/259683/understand-link-function-in-generalized-linear-model/259688#259688) $g$ (the logistic function), to obtain $p$, i.e. the conditional mean of the Bernoulli distribution $$ E[Y|X] = p = g^{-1}(\eta) $$ since $Y$ is binary, we have $$ Y|X \sim \mathsf{Bernoulli}(p) $$ so $\boldsymbol{\beta} \ne p$, but $g^{-1}(\boldsymbol{\beta}X) = p$. Logistic regression predicts the mean of the Bernoulli distribution. Regarding your comment, in maximum likelihood, we are estimating the parameters $\boldsymbol{\beta}$ of our model by maximizing $$ \hat{\boldsymbol{\beta}} = \underset{\boldsymbol{\beta}}{\operatorname{arg\,max}} \; \mathsf{Bernoulli}\big(y \,|\, g^{-1}(\boldsymbol{\beta}X) \big) $$ (forgive me for the slight abuse of notation). Here $p$ is a function of $X$ and $\boldsymbol{\beta}$, rather than a standalone parameter. Nothing in the definition of maximum likelihood prohibits us from doing this.
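A small numerical sketch of the distinction (simulated data; Newton-Raphson stands in for whatever optimiser your software uses): the maximisation is over $\boldsymbol{\beta}$, and the Bernoulli mean $p$ then falls out of the inverse link, one value per observation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([-0.5, 1.2])
p_true = 1 / (1 + np.exp(-X @ beta_true))
y = (rng.random(n) < p_true).astype(float)

# Newton-Raphson maximum likelihood: the parameters being estimated are beta
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    beta += np.linalg.solve(hess, grad)

# p is not a free parameter: it is g^{-1}(X beta), one Bernoulli mean per row
p_hat = 1 / (1 + np.exp(-X @ beta))
```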
null
CC BY-SA 4.0
null
2023-03-15T12:42:39.593
2023-03-15T13:15:29.667
2023-03-15T13:15:29.667
35989
35989
null
609551
1
null
null
0
75
My data is observational data, and that's made it all kinds of ugly, and I can't decide what statistical test is needed. I have one response variable, which is categorical (Species 1, Species 2, or None). I have about a dozen explanatory variables, which are numeric (canopy cover, soil moisture content, etc.). I want to know which of the explanatory variables have a significant influence on the response variable. I cannot safely assume that these variables are independent, so I won't be using a multinomial logistic regression. I also don't want to use a principal component analysis, because I don't really care how my explanatory variables interact with each other (for example, canopy cover might be correlated with soil moisture content; that's no shocker, and not really what I'm looking for; I want to know how those variables affect the presence or absence of the species). The data are not normally distributed, so that's a no on using anything that assumes a normal distribution. I measured all these variables at 140 physical locations, but they are in groups of 20 points near each other, so I cannot assume the cases are all independent. I am really struggling to find a statistical test that fits this situation. I'm thinking I could run a Kruskal-Wallis test comparing the response variable to each of the explanatory variables individually. Someone tell me if that's a big mathematical "no-no." Alternatively, I was thinking about running a PCA and only looking at the relationships that I am interested in (for example, if there is a statistical correlation between canopy cover and soil moisture, I don't really care; but if there is a statistical correlation between canopy cover and the presence of Species 1, that is what I am looking for). Or is there another statistical analysis that fits better? Just about every test I find, my data violate at least one of the core assumptions.
Can I perform multiple Kruskal-Wallis tests with different explanatory variables against the same response variable?
CC BY-SA 4.0
null
2023-03-15T12:43:10.813
2023-03-17T12:11:33.400
null
null
383297
[ "pca", "kruskal-wallis-test" ]
609552
2
null
609551
1
null
Everyone's data always violates model assumptions! But the model can still be useful. First, Kruskal-Wallis does not work when the response variable is nominal. Besides, an independent test with each variable will make a stronger assumption of independence than doing a regression. In general, a regression model will work fine unless the explanatory variables are very highly correlated, and you will be able to tell from the output of the model if this is happening -- the standard errors will be way too big. If you need to convince someone else that this is OK, you can potentially use a diagnostic called the Variance Inflation Factor (VIF). First you run the regression model, then you calculate the VIFs from it and if any of them are really big, it means you have (near-)collinearity. However, we should note that this isn't a world-ending problem anyways. The predictions made by the model (assuming it is correct and all that other stuff) are still accurate, just not the inference on the coefficients. To borrow an analogy from McElreath's Statistical Rethinking, imagine that the regression model is a robot that does a specific task. The regression robot is very good at fitting the model and figuring out the best linear combination of the variables that predicts the (link-transformed) response. But if the variables are collinear, it is not so good at figuring out which variable is more important, so it will give you some crazy numbers. Then it is up to you as a scientist to determine which variables are most important and how to detangle the effects, the robot can't do any of this part that requires critical thinking and domain knowledge. Finally, to deal with the clustering. I am no expert on spatial data so I can't say too much about the clusters you have, but you can probably get away with using the cluster-robust sandwich variance estimator, which is very easy to implement in `R`. 
I highly recommend [this blog post](https://evalf21.classes.andrewheiss.com/example/standard-errors/) by Andrew Heiss for a thorough treatment of this issue. If you want to account for spatial autocorrelation within clusters, I'll leave that to you, but it can definitely be done in a multiple regression framework.
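For illustration, the VIFs are simple enough to compute by hand; a sketch on simulated data (the variable names just echo the question, and e.g. statsmodels' `variance_inflation_factor` performs the same computation):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 140
canopy = rng.normal(size=n)
moisture = 0.9 * canopy + 0.3 * rng.normal(size=n)  # deliberately collinear with canopy
slope = rng.normal(size=n)
X = np.column_stack([canopy, moisture, slope])

def vif(X, j):
    # regress column j on the others (plus intercept); VIF_j = 1 / (1 - R^2_j)
    Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
    b, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
    r2 = 1 - (X[:, j] - Z @ b).var() / X[:, j].var()
    return 1 / (1 - r2)

vifs = [vif(X, j) for j in range(X.shape[1])]  # canopy and moisture large, slope near 1
```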
null
CC BY-SA 4.0
null
2023-03-15T13:17:28.003
2023-03-17T12:11:33.400
2023-03-17T12:11:33.400
288048
288048
null
609553
1
null
null
0
44
An offset is a variable that is included in the regression but without a coefficient (i.e. the coefficient is fixed at/assumed to be 1). I want to impose restrictions on my regression and I read that this can be done with offsets ([source](https://www.casact.org/sites/default/files/database/forum_09wforum_yan_et_al.pdf)). However, I do not understand how an offset can result in a restriction. Furthermore, I implemented a basic example in python and the other coefficients do not change (thus if I charge 10% extra on everybody, the total amount also increases by 10%, but this should not happen. The other coefficients should adjust accordingly such that the total amount remains similar. The money should just be redistributed among the members). The goal of my regression is to impose a bonus-malus system (i.e. give some people a discount) on a pricing model for insurance. --- Consider the following (GLM) regression: $$\log(\text{cost}) = \beta_0 + \beta_1X_1+...+\beta_nX_n + \epsilon$$ where the cost is assumed to follow a Gamma distribution. Suppose that one of the variables $X_i$ is the number of years without a claim. Furthermore, suppose that I want to give everybody that had zero claims last year a discount of 20%. According to the mentioned source, I need to add $\log(0.8)$. To see why this leads to a discount (or surcharge?), consider: $$\text{cost} = e^{\beta_0} \hspace{2mm} e^{\beta_i X_i} \hspace{2mm}e^{\log(0.8)}$$ $$\hspace{9mm} = 0.8 \hspace{2mm}e^{\beta_0} \hspace{2mm} e^{\beta_i X_i}$$ However, I am not sure if this is correct. Does this affect all my other coefficient estimates? How do I implement this in python using `statsmodels`? Because the source mentions SAS has support for this by default, using: [](https://i.stack.imgur.com/uYSf9.png) In the SAS code above, we see that the offset is conditionally set (using an `if-statement`). How could I do this in python? UPDATE: I just learned this is called "relativities" in actuarial science.
How to impose restrictions on a regression using offsets?
CC BY-SA 4.0
null
2023-03-15T13:20:34.347
2023-03-16T08:52:43.953
2023-03-16T08:52:43.953
219554
219554
[ "multiple-regression", "python", "generalized-linear-model", "poisson-regression", "offset" ]
609554
2
null
430176
1
null
You're right: it is when the distribution is jointly Gaussian that a lack of correlation implies independence. Our late friend BruceET gives a good visual demonstration of that [here](https://stats.stackexchange.com/a/467079/247274). Consequently, if your teacher only mentioned marginal normality, the claim is not quite true. If you teacher mentioned multivariate normality, however, the claim is true. In defense of your teacher, it is not so unusual to write something like $\epsilon\sim N(0,\sigma^2I)$ to indicate joint normality of the errors, so it is reasonable to think that you teacher left implicit the joint error normality.
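A quick numerical illustration of the gap between marginal and joint normality: take $X$ standard normal and flip its sign with an independent fair coin. Both marginals are $N(0,1)$ and the correlation is essentially zero, yet $|Y| = |X|$ always, so the variables are clearly dependent; the pair is not jointly Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)
s = rng.choice([-1.0, 1.0], size=n)  # independent random sign
y = s * x                            # y is also marginally N(0, 1)

corr = np.corrcoef(x, y)[0, 1]       # ~0: uncorrelated...
dependent = np.all(np.abs(x) == np.abs(y))  # ...but |Y| = |X|, so not independent
```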
null
CC BY-SA 4.0
null
2023-03-15T13:23:32.433
2023-03-15T13:23:32.433
null
null
247274
null
609555
1
null
null
0
24
I'm building a beginner data analysis project and have been stuck on this problem for almost a month. I'm analyzing a [TV Brand E-commerce](https://www.kaggle.com/datasets/devsubhash/television-brands-ecommerce-dataset) dataset from Kaggle with several missing values. The dataset contains these variables: - Brand (Categorical): This indicates the manufacturer of the product - Resolution (Categorical): This has multiple categories and indicates the type of display, i.e., LED, HD LED, etc. - Size (Numeric): This indicates the screen size in inches - Selling Price (Continuous): This column has the Selling Price or the Discounted Price of the product - Original Price (Continuous): This includes the original product price from the manufacturer. - Operating system (Categorical): This categorical variable shows the type of OS like Android, Linux, etc. - Rating (Continuous Ordinal): Average customer ratings on a scale of 5. My main goal is to do a basic exploratory analysis of the variables (whether univariate, bivariate, or multivariate); the second goal is to do some clustering (which I'm still learning, mainly using kNN). First Question: Missing Mechanism I plotted the data using a missingness matrix, and here's the result. The 'rating' and 'operating_system' variables contain several missing values. There is no clear pattern despite sorting by all of the other variables. [](https://i.stack.imgur.com/U5JRm.png) Then I ran Little's MCAR test, which yielded the following result, suggesting the missingness is not MCAR based on the p-value < 0.05. [](https://i.stack.imgur.com/y8FEv.png) After further investigating the missingness using `gg_miss_var()` broken down by TV Brands, Resolutions, and Sizes, the missingness in the 'Rating' variable is higher in several TV Brands, Resolutions, and Sizes, while 'operating_system' doesn't show a clear pattern.
After that, I tried to explore the missingness in the 'rating' variable against other continuous variables, namely 'selling_price' and 'original_price', using `geom_miss_point()` (a missingness scatter plot), which shows that 'rating' is missing more often for TVs with original and selling prices < INR 150,000. [](https://i.stack.imgur.com/pkymJ.png) [](https://i.stack.imgur.com/E3Lfo.png) Based on this information, can I assume the missingness is Missing at Random (MAR) and use MICE imputation?
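For intuition on the MAR mechanism being described (the probability of 'rating' being missing depends only on an observed variable such as price, not on the unobserved rating itself), here is a small toy simulation; all numbers and variable names are made up:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
price = rng.uniform(0, 300_000, size=n)   # fully observed covariate
rating = rng.uniform(1, 5, size=n)        # variable to be partially masked

# MAR: rating is more likely to be missing for cheaper TVs, and the
# missingness probability depends only on the observed price.
p_miss = np.where(price < 150_000, 0.4, 0.05)
missing = rng.uniform(size=n) < p_miss
rating_obs = np.where(missing, np.nan, rating)

rate_cheap = missing[price < 150_000].mean()    # close to 0.4
rate_expensive = missing[price >= 150_000].mean()  # close to 0.05
```

Under a mechanism like this, multiple imputation that conditions on price (as MICE does when price is in the imputation model) is appropriate.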
Should I use MICE Imputation or Other Method in this Case?
CC BY-SA 4.0
null
2023-03-15T13:26:21.797
2023-03-16T11:46:59.123
2023-03-16T11:46:59.123
378145
378145
[ "r", "missing-data", "data-imputation", "mice" ]
609556
2
null
431556
0
null
First, it is not quite true that you need $X^TX$ to be invertible in order to get the OLS estimate. It is possible to use a generalized inverse $(X^TX)^-$ and write $\hat\beta = (X^TX)^-X^Ty$. If, however, you want to use the usual inverse and require $X^TX$ to be invertible, then this is a matter of linear algebra. Consider some properties of invertible matrices. One such property is that the matrix has full rank. In order for $X^TX$ to have full rank, $X$ itself must have full rank. This means that the columns of $X$ are linearly independent: that is, no linear combination of columns of $X$ can give another column of $X$, unless that linear combination has all weights equal to zero (the "trivial solution"). Note that this is stronger than just saying that no column of $X$ is a scalar multiple of another. If two columns of $X$ add up to a third column of $X$, for instance, then $X$ lacks full rank. This is synonymous with saying that your data (including the intercept column of all $1$s, if you include it) cannot exhibit perfect multicollinearity if $X^TX$ is to be invertible.
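A small numeric illustration of both points (the matrix is made up: its third column is the sum of the first two, so the data exhibit perfect multicollinearity); $X^TX$ is singular, yet a generalized (Moore-Penrose) inverse still solves the normal equations:

```python
import numpy as np

# Made-up design matrix: column 3 = column 1 + column 2,
# so X lacks full column rank and X^T X is singular.
X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 9.0],
              [7.0, 8.0, 15.0],
              [2.0, 6.0, 8.0]])
y = np.array([1.0, 2.0, 3.0, 4.0])

XtX = X.T @ X
rank = np.linalg.matrix_rank(XtX)   # 2, not 3: X^T X is not invertible

# A generalized inverse still yields a least-squares solution
beta = np.linalg.pinv(XtX) @ X.T @ y
```

The resulting `beta` satisfies the normal equations $X^TX\hat\beta = X^Ty$ even though no unique solution exists.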
null
CC BY-SA 4.0
null
2023-03-15T13:32:54.787
2023-03-15T13:32:54.787
null
null
247274
null
609557
2
null
609547
4
null
Logistic regression tries to fit a model such as $$p(x_i)=\frac{1}{1+e^{-(\beta_0+\beta_1 x_i)}}$$ or equivalently with the log-odds $$\log_e\left(\frac{p(x_i)}{1-p(x_i)}\right)=\beta_0+\beta_1 x_i$$ to estimate $\beta_0$ and $\beta_1$ from the data, typically by using maximum likelihood methods: with the data $\{(x_i,y_i)\}$ where $y_i\in \{0,1\}$, you find the $\beta_0$ and $\beta_1$ which maximise $\prod_i p(x_i)^{y_i}(1-p(x_i))^{1-y_i}$ . Here $p(x_i)$ is indeed a Bernoulli parameter in $(0,1)$, varying with $x_i$. You are trying to fit this with the logistic model. $\beta_0$ and $\beta_1$ are not Bernoulli parameters, and can each take any real value.
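A toy numeric sketch of this maximum likelihood fit (the data are made up, and a crude gradient ascent on the log-likelihood stands in for the Newton-type iterations statistical software typically uses):

```python
import numpy as np

# Made-up data: y_i in {0,1}, x_i a single covariate
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0])

def loglik(b0, b1):
    # Bernoulli log-likelihood under the logistic model
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Crude gradient ascent on the log-likelihood
b0, b1, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
    b0 += lr * np.sum(y - p)        # d loglik / d b0
    b1 += lr * np.sum((y - p) * x)  # d loglik / d b1
```

Note that $\beta_0$ and $\beta_1$ come out as unconstrained real numbers; only the fitted $p(x_i)$ values are confined to $(0,1)$.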
null
CC BY-SA 4.0
null
2023-03-15T13:38:30.120
2023-03-15T13:58:07.920
2023-03-15T13:58:07.920
2958
2958
null
609558
2
null
444512
0
null
While I do not follow the rationale of dividing the p-value by two, you are correct that such a backward elimination procedure might leave out important predictors, and this is one of the reasons why stepwise regression is so problematic in many cases. Perhaps much worse for your analysis, however, is that all of the parameter inferences are distorted as soon as you start doing stepwise regression. The coefficient test statistics are calculated based on the assumption that those features were selected from the beginning. However, stepwise selection of variables means that the distributions of your coefficients are now conditioned on performing the stepwise procedure, which is not accounted for in the usual test statistics, p-values, or confidence intervals. This relates to criticisms 2, 3, 4, and 7 discussed [here](https://www.stata.com/support/faqs/statistics/stepwise-regression-problems/) by Frank Harrell. Your goal of having a model that only includes statistically significant predictors starts out sounding reasonable, perhaps noble, but can lead to a great many problems.
null
CC BY-SA 4.0
null
2023-03-15T13:43:36.100
2023-03-15T13:43:36.100
null
null
247274
null
609559
1
609691
null
0
46
My question is specifically directed to the [hiClass](https://github.com/globality-corp/sklearn-hierarchical-classification) Python package for hierarchical classification (I am not sure if it is right to ask here, since I am not reporting an issue). After reading the answer to [this](https://stats.stackexchange.com/q/270412/261548) question, I understand that the package is actively supported. I am working on a dataset for a travel mode recognition problem. I would like to model the problem using a hierarchical classification approach, proceeding as in the figure below: [](https://i.stack.imgur.com/KQP8N.png) The orange ovals represent nodes and the green rectangles the classes to predict. I read in the related paper that the class labels should be transformed into shape `(n_samples x n_levels)`. > While features are exactly the same shape as expected for training flat models in scikit-learn, hierarchical training labels are represented as an array of shape n samples × n levels, where each column must contain either a label for the respective level in the hierarchy or an empty string that indicates missing labels in the leaf nodes. Currently, my labels are stored in an array of shape `(n_samples,)`: ``` >>> ytrain.shape (31683,) >>> ytrain[:10] # 0:walk, 1:bike, 2:bus, 3:car, 4:train array([0, 0, 4, 1, 4, 2, 2, 3, 3, 3]) ``` Questions Assuming I am using the Local Classifier Per Parent Node algorithm: - Is there a handy way to retrieve just the predicted classes (e.g. predicting just the "Bike" class instead of the hierarchical path ["Root", "non-Motorised", "Bike"])? - Given that my classes sit at different tree levels (Walk & Bike -> level-2, others level-3), how should the hierarchy for the Walk & Bike classes be specified (["Root", "non-Motorised", "Bike"] or ["Root", "non-Motorised", "Bike", " "]) considering the statement I quoted above? Cc: [Fabio](https://stats.stackexchange.com/users/343396/fabio)
HiClass: Modelling a Hierarchical Classifier
CC BY-SA 4.0
null
2023-03-15T13:47:02.837
2023-03-16T14:04:57.097
2023-03-15T19:31:13.660
261548
261548
[ "machine-learning", "classification", "predictive-models", "scikit-learn" ]
609561
1
null
null
0
11
For a portfolio-wide profit/loss variable $X= \sum_{i=1}^{n}w_iX_i$ the value-at-risk of $X$ at confidence level $\alpha$ (usually close to 1) is defined as the $\alpha$-quantile of $-X$: \begin{equation} VaR_{\alpha}(X) = q_{\alpha}(-X) \end{equation} The formula for the Euler VaR contributions reads \begin{equation} VaR_{\alpha}(X) = \sum_{i=1}^{n} w_i VaR_{\alpha}(X_i | X) \end{equation} where for $i=1, 2, \cdots, n$, we have that \begin{equation} VaR_{\alpha}(X_i| X) = -\mathbb{E}[X_i | X= -VaR_{\alpha}(X) ] \end{equation} Using the linear approximation, it can be shown that \begin{equation} VaR_{\alpha}(X_i| X) \approx \frac{Cov(X_i, X)}{Var(X)}\Big(VaR_{\alpha}(X)+ \mathbb{E}(X)\Big)- \mathbb{E}(X_i) \end{equation} My question is: how can we derive the above relation? Thank you in advance.
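For reference, "the linear approximation" here is commonly read as replacing the conditional expectation by the best linear predictor of $X_i$ given $X$ (which is exact when $(X_i, X)$ is jointly Gaussian): \begin{equation} \mathbb{E}[X_i \mid X = x] \approx \mathbb{E}(X_i) + \frac{Cov(X_i, X)}{Var(X)}\big(x - \mathbb{E}(X)\big) \end{equation} Evaluating at $x = -VaR_{\alpha}(X)$ and negating both sides then yields the stated formula: \begin{equation} -\mathbb{E}[X_i \mid X = -VaR_{\alpha}(X)] \approx \frac{Cov(X_i, X)}{Var(X)}\big(VaR_{\alpha}(X) + \mathbb{E}(X)\big) - \mathbb{E}(X_i) \end{equation}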
The proof of the Euler method decomposition of VaR
CC BY-SA 4.0
null
2023-03-15T13:54:09.083
2023-03-15T13:54:09.083
null
null
351356
[ "random-allocation", "decomposition" ]
609562
1
609589
null
2
111
Cox proportional hazards model: $$ \lambda_2(t)=\lambda_1(t)e^{\beta}$$ I have survival data for two different groups. I would like to find the profile of the partial log-likelihood for the data as a function of $\phi$, the hazard ratio $\phi=\exp(\beta)$, and produce a plot in R, but I am not sure where to start. I understand how to calculate the partial log-likelihood for $\beta$ but not how to do it for $\phi$.
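In the two-group case the partial likelihood can be written down directly, so the profile over $\phi$ is just a loop over a grid. A minimal sketch (in Python for brevity, with made-up event times, assuming no ties and no censoring; the same loop translates line-by-line to R):

```python
import numpy as np

# Hypothetical, untied event times for two groups (no censoring)
t1 = np.array([2.0, 5.0, 7.0])   # group 1
t2 = np.array([1.0, 3.0, 4.0])   # group 2

def partial_loglik(phi):
    """Cox partial log-likelihood as a function of the hazard ratio
    phi = exp(beta), for two groups with no ties or censoring."""
    ll = 0.0
    events = sorted([(t, 1) for t in t1] + [(t, 2) for t in t2])
    for t, g in events:
        n1 = np.sum(t1 >= t)            # group-1 subjects still at risk
        n2 = np.sum(t2 >= t)            # group-2 subjects still at risk
        num = phi if g == 2 else 1.0    # relative hazard of the failing subject
        ll += np.log(num / (n1 + phi * n2))
    return ll

phis = np.linspace(0.1, 10, 200)
profile = [partial_loglik(p) for p in phis]
# plotting phis against profile gives the profile curve
```

At $\phi = 1$ (no group difference) each event contributes $\log(1/\text{risk set size})$, which is a useful sanity check.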
Finding partial log-likelihood of survival data
CC BY-SA 4.0
null
2023-03-15T13:59:45.140
2023-03-16T11:09:45.093
2023-03-16T11:09:45.093
null
null
[ "survival", "cox-model", "likelihood", "proportional-hazards" ]
609563
1
null
null
1
22
I am testing invariance between two groups. Configural and metric invariance is given. When adding the restriction of equal intercepts, lavaan gives me the warning that the smallest eigenvalue is close to zero (albeit positive) and that this might indicate that the model is not identified. Model fit is great, however, and if I would only consider model fit, I would state that scalar invariance is also given. I noticed that the latent intercepts in my previous models were estimated to be zero. When introducing the restriction of equal intercepts, they are still zero in one group but negative in the other group. Not sure whether this is relevant. Can the model become unidentified by adding restrictions? Can I still state invariance despite the lavaan warning? This is the model syntax: ``` info =~ EU03 + EU04 + EU07 + EU09 + EU10 cons =~ EU01 + EU02 + EU13 + EU14 norm =~ NU02 + NU04 + NU05 + NU07 + NU08 + NU10 + NU14 eu =~ info + cons ``` This is how I fit the model: ``` mi_set_scalar = cfa(model, data = df_items, estimator = "MLR", cluster = "REF", meanstructure = T, group = "set", group.equal = c("loadings", "intercepts")) ``` And these are the resulting fit indices: ``` CFI.robust = 0.974 TLI.robust = 0.972 RMSEA.robust = .065 ``` EDIT: Full output ``` lavaan 0.6.14 ended normally after 87 iterations Estimator ML Optimization method NLMINB Number of model parameters 106 Number of equality constraints 30 Number of observations per group: oil 1608 company 1608 Number of clusters [REF]: oil 402 company 402 Model Test User Model: Standard Scaled Test Statistic 1912.182 1217.955 Degrees of freedom 228 228 P-value (Chi-square) 0.000 0.000 Scaling correction factor 1.570 Yuan-Bentler correction (Mplus variant) Test statistic for each group: oil 954.152 607.742 company 958.031 610.213 Model Test Baseline Model: Test statistic 59235.913 32898.081 Degrees of freedom 240 240 P-value 0.000 0.000 Scaling correction factor 1.801 User Model versus Baseline Model: Comparative Fit 
Index (CFI) 0.971 0.970 Tucker-Lewis Index (TLI) 0.970 0.968 Robust Comparative Fit Index (CFI) 0.974 Robust Tucker-Lewis Index (TLI) 0.972 Loglikelihood and Information Criteria: Loglikelihood user model (H0) -69915.023 -69915.023 Scaling correction factor 1.394 for the MLR correction Loglikelihood unrestricted model (H1) -68958.932 -68958.932 Scaling correction factor 1.664 for the MLR correction Akaike (AIC) 139982.046 139982.046 Bayesian (BIC) 140443.814 140443.814 Sample-size adjusted Bayesian (SABIC) 140202.329 140202.329 Root Mean Square Error of Approximation: RMSEA 0.068 0.052 90 Percent confidence interval - lower 0.065 0.050 90 Percent confidence interval - upper 0.071 0.054 P-value H_0: RMSEA <= 0.050 0.000 0.078 P-value H_0: RMSEA >= 0.080 0.000 0.000 Robust RMSEA 0.065 90 Percent confidence interval - lower 0.062 90 Percent confidence interval - upper 0.069 P-value H_0: Robust RMSEA <= 0.050 0.000 P-value H_0: Robust RMSEA >= 0.080 0.000 Standardized Root Mean Square Residual: SRMR 0.030 0.030 Parameter Estimates: Standard errors Robust.cluster Information Observed Observed information based on Hessian Group 1 [oil]: Latent Variables: Estimate Std.Err z-value P(>|z|) info =~ EU03 1.000 EU04 (.p2.) 0.989 0.010 98.235 0.000 EU07 (.p3.) 1.010 0.009 111.361 0.000 EU09 (.p4.) 0.960 0.011 89.097 0.000 EU10 (.p5.) 0.984 0.010 94.213 0.000 cons =~ EU01 1.000 EU02 (.p7.) 1.073 0.013 85.373 0.000 EU13 (.p8.) 1.052 0.012 86.782 0.000 EU14 (.p9.) 1.043 0.011 94.105 0.000 norm =~ NU02 1.000 NU04 (.11.) 1.071 0.015 71.528 0.000 NU05 (.12.) 1.048 0.018 58.748 0.000 NU07 (.13.) 1.108 0.019 58.870 0.000 NU08 (.14.) 1.071 0.016 68.903 0.000 NU10 (.15.) 1.098 0.020 55.627 0.000 NU14 (.16.) 1.042 0.015 68.786 0.000 eu =~ info 1.000 cons (.18.) 0.961 0.017 57.790 0.000 Covariances: Estimate Std.Err z-value P(>|z|) norm ~~ eu 1.282 0.066 19.472 0.000 Intercepts: Estimate Std.Err z-value P(>|z|) .EU03 (.40.) 3.955 0.053 74.015 0.000 .EU04 (.41.) 
3.720 0.052 71.008 0.000 .EU07 (.42.) 3.836 0.054 70.463 0.000 .EU09 (.43.) 3.620 0.052 70.266 0.000 .EU10 (.44.) 3.628 0.054 67.654 0.000 .EU01 (.45.) 3.457 0.053 64.995 0.000 .EU02 (.46.) 3.741 0.054 68.679 0.000 .EU13 (.47.) 3.630 0.054 67.380 0.000 .EU14 (.48.) 3.563 0.053 67.547 0.000 .NU02 (.49.) 3.282 0.045 72.684 0.000 .NU04 (.50.) 3.566 0.045 79.719 0.000 .NU05 (.51.) 3.440 0.045 76.713 0.000 .NU07 (.52.) 3.759 0.045 84.468 0.000 .NU08 (.53.) 3.302 0.046 71.442 0.000 .NU10 (.54.) 3.817 0.045 84.636 0.000 .NU14 (.55.) 3.241 0.045 71.382 0.000 .info 0.000 .cons 0.000 norm 0.000 eu 0.000 Variances: Estimate Std.Err z-value P(>|z|) .EU03 0.479 0.035 13.859 0.000 .EU04 0.414 0.028 14.778 0.000 .EU07 0.492 0.040 12.278 0.000 .EU09 0.485 0.035 13.841 0.000 .EU10 0.471 0.033 14.385 0.000 .EU01 0.751 0.045 16.695 0.000 .EU02 0.547 0.038 14.578 0.000 .EU13 0.652 0.043 15.223 0.000 .EU14 0.485 0.032 15.097 0.000 .NU02 0.869 0.051 16.939 0.000 .NU04 0.735 0.048 15.448 0.000 .NU05 0.830 0.055 14.990 0.000 .NU07 0.811 0.049 16.611 0.000 .NU08 0.668 0.044 15.309 0.000 .NU10 0.873 0.064 13.717 0.000 .NU14 0.723 0.043 16.922 0.000 .info 0.300 0.039 7.659 0.000 .cons 0.147 0.036 4.050 0.000 norm 1.608 0.067 23.832 0.000 eu 1.952 0.084 23.255 0.000 Group 2 [company]: Latent Variables: Estimate Std.Err z-value P(>|z|) info =~ EU03 1.000 EU04 (.p2.) 0.989 0.010 98.235 0.000 EU07 (.p3.) 1.010 0.009 111.361 0.000 EU09 (.p4.) 0.960 0.011 89.097 0.000 EU10 (.p5.) 0.984 0.010 94.213 0.000 cons =~ EU01 1.000 EU02 (.p7.) 1.073 0.013 85.373 0.000 EU13 (.p8.) 1.052 0.012 86.782 0.000 EU14 (.p9.) 1.043 0.011 94.105 0.000 norm =~ NU02 1.000 NU04 (.11.) 1.071 0.015 71.528 0.000 NU05 (.12.) 1.048 0.018 58.748 0.000 NU07 (.13.) 1.108 0.019 58.870 0.000 NU08 (.14.) 1.071 0.016 68.903 0.000 NU10 (.15.) 1.098 0.020 55.627 0.000 NU14 (.16.) 1.042 0.015 68.786 0.000 eu =~ info 1.000 cons (.18.) 
0.961 0.017 57.790 0.000 Covariances: Estimate Std.Err z-value P(>|z|) norm ~~ eu 1.617 0.067 24.249 0.000 Intercepts: Estimate Std.Err z-value P(>|z|) .EU03 (.40.) 3.955 0.053 74.015 0.000 .EU04 (.41.) 3.720 0.052 71.008 0.000 .EU07 (.42.) 3.836 0.054 70.463 0.000 .EU09 (.43.) 3.620 0.052 70.266 0.000 .EU10 (.44.) 3.628 0.054 67.654 0.000 .EU01 (.45.) 3.457 0.053 64.995 0.000 .EU02 (.46.) 3.741 0.054 68.679 0.000 .EU13 (.47.) 3.630 0.054 67.380 0.000 .EU14 (.48.) 3.563 0.053 67.547 0.000 .NU02 (.49.) 3.282 0.045 72.684 0.000 .NU04 (.50.) 3.566 0.045 79.719 0.000 .NU05 (.51.) 3.440 0.045 76.713 0.000 .NU07 (.52.) 3.759 0.045 84.468 0.000 .NU08 (.53.) 3.302 0.046 71.442 0.000 .NU10 (.54.) 3.817 0.045 84.636 0.000 .NU14 (.55.) 3.241 0.045 71.382 0.000 .info -0.242 0.031 -7.860 0.000 .cons -0.160 0.029 -5.484 0.000 norm -0.589 0.059 -9.922 0.000 eu -0.490 0.047 -10.399 0.000 Variances: Estimate Std.Err z-value P(>|z|) .EU03 0.486 0.041 11.804 0.000 .EU04 0.343 0.027 12.843 0.000 .EU07 0.433 0.031 13.760 0.000 .EU09 0.385 0.028 13.934 0.000 .EU10 0.402 0.028 14.258 0.000 .EU01 0.592 0.042 14.112 0.000 .EU02 0.450 0.035 12.865 0.000 .EU13 0.447 0.033 13.430 0.000 .EU14 0.442 0.030 14.868 0.000 .NU02 0.676 0.046 14.658 0.000 .NU04 0.603 0.037 16.194 0.000 .NU05 0.771 0.049 15.749 0.000 .NU07 0.799 0.061 13.055 0.000 .NU08 0.570 0.035 16.045 0.000 .NU10 0.762 0.056 13.713 0.000 .NU14 0.603 0.040 15.242 0.000 .info 0.209 0.034 6.196 0.000 .cons 0.071 0.032 2.174 0.030 norm 1.789 0.072 24.839 0.000 eu 2.228 0.083 26.697 0.000 ```
Adding invariance restriction leads to eigenvalue close to zero - can invariance be stated, nevertheless?
CC BY-SA 4.0
null
2023-03-15T14:08:17.770
2023-03-16T08:33:46.303
2023-03-16T08:33:46.303
335073
335073
[ "structural-equation-modeling", "confirmatory-factor", "lavaan" ]
609564
1
null
null
0
18
As a start, these are posts that are similar, but I don't quite think they're quite the same: - Regression RMSE when dependent variable is log transformed - https://math.stackexchange.com/questions/2222763/theory-question-how-to-use-mean-absolute-error-properly-in-a-log-scaled-linear --- I have a simple regression model where I assume that the current observations holds some linear relationship with the previous observation. Further to my use I have some dummy covariates $ X_{d1}, X_{d2} $. In my example it's about travel time between bus stops. Imagine stops in the order: $A_1 → B_2 → C_3 →|| D_4 → E_5 → F_6 →... $ Where observations start at $D_4$. Now as a first step, I want to evaluate with what expression my response variable is "best" described with. That is I want to compare the MAE & RMSE for three different formulations of my response variable and also compare the residuals of; - $Y_1$ = Time it took to travel to the current stop - $Y_2$ = difference in travel times to the current and the previous stop = $y_t - y_{t-1}$ - $Y_3$ = Log return = $log y_t - log y_{t-1} = log \frac{y_t}{y_{t-1}}$ Focusing on MAE, for $Y_1$ it's rather straight forward; $MAE_{Y_1} = \frac{\sum_{i=1}^{n}| y - \hat{y}|}{n}$ For $Y_2$; $MAE_{Y_2} = \frac{\sum_{i=1}^{n} |(y_t - y_{t-1}) - \hat{(y_t - y_{t-1})}|}{n}$ But for $\hat{Y_3} = log(y_t / y_{t-1})$ I am unsure of how to proceed: The regression equation I can write as; $\hat{(log \frac{y_t}{y_{t-1}})} = \beta_0 + \beta_1 * y_{t-1} + \beta_3 * h(X_{d1}) + \beta_4 * h(X_{d2}) + \epsilon $ Taking exponent yields; $\hat{(\frac{y_t}{y_{t-1}})} = exp (\beta_0 + \beta_1 * y_{t-1} + \beta_3 * h(X_{d1}) + \beta_4 * h(X_{d2}) + \epsilon ) $ Moving $\hat{y_{t-1}}$ to other side (Issue: I have $\hat{(\frac{y_t}{y_{t-1}})}$ not $\hat{y_{t-1}}$); $\hat{y_t} = exp (\beta_0 + \beta_1 * y_{t-1} + \beta_3 * h(X_{d1}) + \beta_4 * h(X_{d2}) + \epsilon ) * \hat{y_{t-1}} = \hat{Y_3}$ So does this mean to be able to compare MAE with $Y_3$ I 
calculate $MAE_{Y_3} = \frac{\sum_{i=1}^{n} |y - \hat{Y_3}|}{n}$? Any help is appreciated!
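On the back-transformation issue: if the lagged value $y_{t-1}$ is observed at prediction time (as it is when it appears as a regressor), a common choice is to back-transform with the observed lag, $\hat{y}_t = y_{t-1}\exp(\hat{r}_t)$ where $\hat{r}_t$ is the predicted log-return, and then compute the MAE on the original scale. A toy sketch with made-up numbers:

```python
import numpy as np

# Made-up observed travel times y_1..y_4 and predicted log-returns
y = np.array([10.0, 12.0, 11.0, 13.0])
r_hat = np.array([0.15, -0.05, 0.20])   # predictions of log(y_t / y_{t-1})

# Back-transform with the observed lag: y_hat_t = y_{t-1} * exp(r_hat_t)
y_hat = y[:-1] * np.exp(r_hat)

# MAE on the original (minutes) scale, comparable across formulations
mae = np.mean(np.abs(y[1:] - y_hat))
```

This puts all three formulations of the response on the same scale for comparison.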
Evaluation comparison of rewritten response variable
CC BY-SA 4.0
null
2023-03-15T14:10:28.343
2023-03-15T14:10:28.343
null
null
320876
[ "regression", "mathematical-statistics", "modeling", "data-transformation" ]
609565
1
null
null
2
22
I have run the model: ``` model2 <- lmer(tas ~ station + (1 | date), data = all5) ``` where `station` is a categorical variable with 4 levels. I want to check for variance homogeneity. I find the residuals with `lmerresid <- resid(model2)`. If I run the Levene test, `leveneTest(lmerresid ~ all5$station)`, I get p = 0.6572, which implies homogeneity of variances. If I make the boxplots, they also imply homogeneity of variances: ``` boxplot(lmerresid ~ all5$station) ``` [](https://i.stack.imgur.com/IHv6G.png) But if I plot residuals vs fitted then there is something wrong: ``` # Plot residuals vs fitted par(mfrow = c(1,1)) plot(lmerresid ~ fitted(model2)) ``` [](https://i.stack.imgur.com/GON2f.png) Are the variances homogeneous?
Variance Homogeneity for a mixed model
CC BY-SA 4.0
null
2023-03-15T14:24:38.353
2023-03-15T14:56:11.210
2023-03-15T14:56:11.210
219012
360530
[ "r", "mixed-model", "variance" ]
609566
1
null
null
0
39
I have the following data. - https://drive.google.com/file/d/1_a-mi_Pesx73AnvbKVNr4SzhuujKq9Za/view?usp=sharing - These .csvs contain logged radial intensity curves to a distance of 2.2 cm of different communities of fluorescent microbes (Alpha, Beta, Gamma, Delta, Epsilon). The intensities were recorded after 24 hours of incubation, in 3 different treatments (10 mM sugar, 1 mM sugar and 0 mM sugar). - The curves of the medians and means of the experimental results are included, with background noise removed using the average of the intensity at 2.2 cm across all experiments; when the noise removal leads to negative results, 0s have been input. - If you plot the curves for the mean and the median there is a degree of variability, with the mean providing a smoother curve. The curves have been logged to highlight the secondary maxima that occur in some of the plots. I wish to find the best statistical method to reflect changes in the spatial intensity. The hypothesis is that Alpha is "more spread out" than Beta, with Gamma having a slightly greater spatial distribution or range than Alpha and a significantly greater spatial distribution than Beta. I've been looking into area under the curve analysis and Fréchet distance but I am unsure. Also, perhaps there is a better method of standardisation or smoothing.
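One simple pair of summaries along the lines you mention, assuming the curves are sampled on a common radial grid: the (trapezoidal) area under each curve, plus an intensity-weighted mean radius as a "spread" measure. A toy sketch with made-up curves:

```python
import numpy as np

# Two made-up radial intensity curves on a common grid (0 to 2.2 cm)
r = np.linspace(0.0, 2.2, 23)
alpha = np.exp(-r)         # decays slowly: "more spread out"
beta = np.exp(-3.0 * r)    # decays quickly: concentrated near the centre

def auc(x, y):
    # trapezoidal area under the curve
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

auc_alpha = auc(r, alpha)
auc_beta = auc(r, beta)

# Intensity-weighted mean radius: larger means more spread from the centre
spread_alpha = auc(r, r * alpha) / auc_alpha
spread_beta = auc(r, r * beta) / auc_beta
```

The weighted mean radius is a single per-curve number, so standard group comparisons (across replicates) can then be applied to it.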
How best to statistically compare radial intensity curves?
CC BY-SA 4.0
null
2023-03-15T14:25:54.943
2023-03-15T14:31:25.437
2023-03-15T14:31:25.437
362671
381157
[ "spatial", "curves" ]
609567
2
null
549865
1
null
Assuming a linear model that is estimated using ordinary least squares (this seems to be the setting you mean), $R^2$ will not decrease, since the model always has the option to set the coefficient on the new variable equal to zero and use the previous coefficients on the other variables, resulting in the same predicted values and same $R^2$ as before. If, empirically, the added variable is totally unrelated to the outcome, $R^2$ will not increase. However, just because the additional variable has no true relationship with the outcome does not mean that a weak relationship will not be present in the observed data. Just by [(bad) luck](https://stats.stackexchange.com/a/609309/247274), you are likely to observe some kind of relationship. This is why, even if there is no true relationship, the model $R^2$ is likely to increase: the model has detected a coincidence.
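A quick simulation of this point (made-up data; OLS via least squares), showing that $R^2$ does not decrease, and typically increases slightly, after appending a pure-noise column:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

def r2(X, y):
    # OLS fit and coefficient of determination
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)

r2_before = r2(X, y)
# Append a column of pure noise, unrelated to y by construction
X_aug = np.column_stack([X, rng.normal(size=n)])
r2_after = r2(X_aug, y)
```

Here `r2_after >= r2_before` always holds, and the (usually small) increase reflects exactly the detected coincidence described above.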
null
CC BY-SA 4.0
null
2023-03-15T14:30:41.877
2023-03-15T14:30:41.877
null
null
247274
null
609568
1
609602
null
1
38
I'm modelling binary ecological data using a GAMM and I'm having trouble understanding what the 'Select = TRUE' option is doing. As I understand it, 'Select = TRUE' is meant to add extra penalization to all smooths in the model (i.e. lower EDF of covariates and sometimes knocking them out of the model entirely), but I'm seeing something different (I think). Without 'Select = TRUE', covariate "ToD" has an EDF=1 (linear) and "sal" isn't significant (partial effect includes 0 for all values of "sal" - so, not surprising), however when I include 'Select = TRUE' "sal" becomes statistically significant at the 5% level and "ToD" became more complex (EDF > 1). I thought 'Select = TRUE' was supposed to have the opposite effect, no? Questions: 1. Does it make sense for the EDF to increase for "ToD" when 'Select = TRUE'? 2. Why might "sal" (or any covariate) suddenly become significant when 'Select = TRUE'? - Unfortunately, I can't share the data. library(mgcv) # gam()/bam() library(gratia) # Draw model m <- bam(empty ~ s(SL) + # Standard length s(temp) + # Temperature s(sal) + # Salinity s(ToD) + # Time of Day s(Longitude, Latitude, bs = 'tp') + # Spatial variation s(fStation, bs = "re"), # Structural component data = c_neb, method = 'fREML', discrete = TRUE, family = binomial(link = "logit"), # select = TRUE, # <----------------- Commented out, so no extra penalization gamma = 1.5) > summary(m) Family: binomial Link function: logit Formula: empty ~ s(SL) + s(temp) + s(sal) + s(ToD) + s(Longitude, Latitude, bs = "tp") + s(fStation, bs = "re") Parametric coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -1.0361 0.1009 -10.27 <2e-16 *** --- Signif. 
codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Approximate significance of smooth terms: edf Ref.df Chi.sq p-value s(SL) 2.208e+00 2.827 40.534 < 2e-16 *** s(temp) 1.000e+00 1.000 1.565 0.211 s(sal) 2.151e+00 2.772 4.198 0.185 # <-- Not important s(ToD) 1.000e+00 1.000 21.296 4.19e-06 *** # <-- linear, can be Beta*x s(Longitude,Latitude) 2.000e+00 2.000 0.599 0.741 s(fStation) 2.248e-05 87.000 0.000 0.140 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 R-sq.(adj) = 0.0873 Deviance explained = 8.55% fREML = 663.8 Scale est. = 1 n = 675 gratia::draw(m, select = c(3,4)) [](https://i.stack.imgur.com/MvxDk.png) ``` > gam.check(m) Method: fREML Optimizer: perf chol $grad [1] 2.389311e-09 -2.728344e-06 5.702256e-08 -2.188869e-06 -2.135506e-06 -1.828631e-06 $hess [,1] [,2] [,3] [,4] [,5] [,6] [1,] 3.821533e-01 6.820792e-08 1.072074e-03 2.221002e-09 3.419863e-08 -2.262997e-08 [2,] 6.820792e-08 2.728331e-06 1.032721e-07 2.103272e-13 -2.375648e-14 -2.492040e-13 [3,] 1.072074e-03 1.032721e-07 4.014991e-01 9.875683e-09 -1.009651e-07 -5.703817e-08 [4,] 2.221002e-09 2.103272e-13 9.875683e-09 2.188850e-06 6.934220e-14 -7.427936e-13 [5,] 3.419863e-08 -2.375648e-14 -1.009651e-07 6.934220e-14 2.135500e-06 -8.949452e-13 [6,] -2.262997e-08 -2.492040e-13 -5.703817e-08 -7.427936e-13 -8.949452e-13 1.828635e-06 Model rank = 156 / 156 Basis dimension (k) checking results. Low p-value (k-index<1) may indicate that k is too low, especially if edf is close to k'. k' edf k-index p-value s(SL) 9.00e+00 2.21e+00 0.94 0.095 . s(temp) 9.00e+00 1.00e+00 0.96 0.215 s(sal) 9.00e+00 2.15e+00 0.91 0.020 * s(ToD) 9.00e+00 1.00e+00 0.97 0.365 s(Longitude,Latitude) 2.90e+01 2.00e+00 0.97 0.270 s(fStation) 9.00e+01 2.25e-05 NA NA --- Signif. 
codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 ``` With 'Select = TRUE' back in ``` m2 <- bam(empty ~ s(SL) + s(temp) + s(sal) + s(ToD) + s(Longitude, Latitude, bs = 'tp') + s(fStation, bs = "re"), data = c_neb, method = 'fREML', discrete = TRUE, # speed benefit family = binomial(link = "logit"), select = TRUE, gamma = 1.5) > summary(m2) Family: binomial Link function: logit Formula: empty ~ s(SL) + s(temp) + s(sal) + s(ToD) + s(Longitude, Latitude, bs = "tp") + s(fStation, bs = "re") Parametric coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -1.05872 0.09481 -11.17 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Approximate significance of smooth terms: edf Ref.df Chi.sq p-value s(SL) 1.990e+00 9 35.705 < 2e-16 *** s(temp) 2.353e-01 9 0.462 0.1518 s(sal) 1.560e+00 9 4.783 0.0345 * # <- Became significant, but partial eff. still includes zero s(ToD) 1.697e+00 9 19.856 3.82e-06 *** # <- EDF increased s(Longitude,Latitude) 1.172e-05 29 0.000 0.6293 s(fStation) 2.908e-05 89 0.000 0.1753 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 R-sq.(adj) = 0.0884 Deviance explained = 8.41% fREML = 660.83 Scale est. = 1 n = 675 gratia::draw(m2, select = c(3,4)) ``` [](https://i.stack.imgur.com/5kKsi.png)
Effects of 'Select = TRUE' on covariates in GAM
CC BY-SA 4.0
null
2023-03-15T14:31:06.293
2023-03-15T19:26:31.900
2023-03-15T15:36:18.147
337106
337106
[ "r", "regularization", "generalized-additive-model", "mgcv" ]
609569
2
null
608307
2
null
First notice that the requirements of non-negativity and mean equals 1 cannot be simultaneously satisfied for an arbitrary covariance matrix: if $x_i$ is the $i$-th variate ($i=1...n$) with $E[x_i]=1$, then $$Cov(x_i,x_j)=E[(x_i-1)(x_j-1)]=E[x_ix_j]-1 \ge -1$$ since all the $x_i$'s are nonnegative and therefore $E[x_ix_j] \ge 0$. This implies that all the entries of the covariance matrix $\Sigma_{ij}$ have to be at least $-1$, which is a non-trivial constraint. (Note that the same argument applies to the sample covariance. Also note that it does not imply that if $\Sigma_{ij} \ge -1$ then necessarily a solution exists; there might be other constraints). If you are not interested in the generating distribution but just want to find some matrix $X$ which has the specified properties (assuming it exists), a straightforward approach might be to try to minimize a sum of squared errors using a gradient descent algorithm. To ensure nonnegativity the elements of $X$ can be parametrized e.g. with $X_{in} = e^{W_{in}}$, and you can take advantage of auto-differentiation libraries to calculate the gradients for you.
Here is for example an implementation in PyTorch, which finds a (approximate) solution for $R=20, n=3$ in about 2 seconds: ``` import torch R, n = 20, 3 #generate n-by-n random covariance matrix with entries larger than -1 S=torch.Tensor([-2]) while S.min() < -1: A = torch.randn(n,n)/2 S = A.matmul(A.T) def loss(W,S): [R,n]=W.shape X = torch.exp(W) sse = (torch.cov(X.T,correction = 0) - S).square().sum() + (X.mean(0)-torch.ones(n)).square().sum() return sse #initialize W with random entries W = torch.randn(R,n,requires_grad=True) optimizer = torch.optim.Adam([W],lr=0.05) for iter in range(1000): optimizer.zero_grad() L = loss(W,S) L.backward() optimizer.step() print(f'sse = {L}') X=torch.exp(W) print('specified covariance:') print(S) print('sample covariance:') print(torch.cov(X.T,correction = 0)) print('mean:') print(X.mean(0)) ``` output: ``` sse = 0.007093742955476046 specified covariance: tensor([[ 1.9820, -0.2131, -0.5573], [-0.2131, 0.8370, -0.0203], [-0.5573, -0.0203, 0.1815]]) sample covariance: tensor([[ 1.9881, -0.2122, -0.5349], [-0.2122, 0.8371, -0.0169], [-0.5349, -0.0169, 0.2584]], grad_fn=<SqueezeBackward0>) mean: tensor([1.0026, 1.0003, 1.0093], grad_fn=<MeanBackward1>) ```
null
CC BY-SA 4.0
null
2023-03-15T14:35:47.763
2023-03-15T14:35:47.763
null
null
348492
null
609570
1
null
null
9
348
I am trying to self-teach probability and statistics for a machine learning career. However, I want to learn them very well, as doing research in AI is my goal. Which books should I use to learn probability, frequentist statistics, and Bayesian statistics, and in which order should I study them? I already know some basic probability; would the better route be to read a frequentist statistics book first and then move on to Bayesian? Feel free to suggest any route and books.
What is the roadmap to self-taught probability and statistics for artificial intelligence?
CC BY-SA 4.0
null
2023-03-15T14:37:54.203
2023-03-15T17:44:12.300
2023-03-15T17:08:55.463
8013
356068
[ "machine-learning", "probability", "mathematical-statistics", "references", "artificial-intelligence" ]
609571
2
null
431414
0
null
(This self-answer assumes OLS linear regression, which is what I meant when I posted the question.) It doesn't matter! The concern I had four years ago was that, if there are correlated features, that can inflate the variance of the coefficient estimates. This happens through the variance-inflation factor, which considers the $R^2$ of a dummy regression that predicts a feature using all other features. If that feature is unrelated to the other features, that $R^2$ will be zero (or close to zero, empirically). That is, the correlation between the covariates does not impact the variance of the estimation of my variable of interest (the indicator for control vs treatment), so long as the covariates are unrelated to the variable of interest. (Knowing what I was doing when I posted the question, such an assumption is at least a bit dubious, but that is a separate issue.)
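A small simulation of this point (all data made up): the variance-inflation factor for a treatment indicator that is unrelated to the (mutually correlated) covariates stays near 1, while the collinear covariates themselves have large VIFs:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)          # x1, x2 strongly correlated
treat = rng.integers(0, 2, size=n).astype(float)  # unrelated to x1, x2

def vif(target, others):
    # R^2 of regressing `target` on `others` (plus intercept), then 1/(1-R^2)
    Z = np.column_stack([np.ones(len(target))] + others)
    beta, *_ = np.linalg.lstsq(Z, target, rcond=None)
    resid = target - Z @ beta
    r2 = 1.0 - resid @ resid / np.sum((target - target.mean()) ** 2)
    return 1.0 / (1.0 - r2)

vif_treat = vif(treat, [x1, x2])   # near 1: variance not inflated
vif_x1 = vif(x1, [x2, treat])      # large: x1 and x2 are collinear
```

So correlation among the covariates leaves the precision of the treatment coefficient essentially untouched, as long as the treatment indicator is unrelated to them.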
null
CC BY-SA 4.0
null
2023-03-15T14:38:35.633
2023-03-15T14:38:35.633
null
null
247274
null
609572
1
null
null
2
23
I watched this video of Andrej Karpathy explaining some language model (with a time marker to the important moment: [https://youtu.be/TCH_1BHY58I?t=3040](https://youtu.be/TCH_1BHY58I?t=3040)), and at ~50:40 he explains how to pick a good initial learning rate. He does 1000 consecutive training steps while gradually increasing the learning rate. Then he plots LR versus loss (or, in a second version, the exponent of the LR versus loss). He concludes that lr < 0.1 is too low (why, though?), and lr > 0.1 makes the training explode (this part is straightforward). I don't understand why the lower rates on this chart are deemed too small. This (allegedly too small) learning rate is actually quickly lowering the loss. Can someone explain this?
Why is less than 0.1 considered too low learning rate in this example
CC BY-SA 4.0
null
2023-03-15T14:46:22.073
2023-03-15T14:46:22.073
null
null
383296
[ "neural-networks", "natural-language", "loss-functions" ]
609573
2
null
608880
0
null
Multivariate outputs tend to be something that's difficult for many traditional machine learning models other than neural networks. Neural networks of various forms could be good candidates for this sort of thing, but it's not clear to me why a transformer architecture would necessarily be more promising than other options. It's unclear to me whether it's necessary to output all categories at once. I assume your idea is to provide the whole history (sales across all categories) to predict the sales in the next time step (or a few ahead). One of the key questions is how one would represent the different categories of products sensibly. One of the key innovations in neural networks for tabular data with high-cardinality features (i.e. categorical data with many categories such as different products, customers, store locations etc.) is embedding layers (as e.g. [famously used](https://github.com/fastai/fastbook/blob/master/09_tabular.ipynb) in the [Rossman Store Sales](https://www.kaggle.com/c/rossmann-store-sales) Kaggle competition). If I input some history / features and the category I want to predict for (one output at a time), it's easy to see how I do that. I guess, if you want a multi-dimensional output, you'd have to somehow "tell the model" that a different embedding (or possibly more than one, if you e.g. also have a product category embedding) is pertinent to each input and to each output. Not 100% sure how one would do that. With a large number of products, at some point you'll eventually run into problems with memory, I would assume (even worse if you want to attend at the same time over many time-steps). No idea whether a sufficiently complex model is feasible with the type of realistically sized portfolio of products that a shop might have (probably much more than 100 products).
There are examples of similar forecasting settings in various machine learning competitions such as the [M5 Forecasting competition on Kaggle](https://www.kaggle.com/c/m5-forecasting-accuracy) (there is an accuracy and an uncertainty part to that competition) and of course [Rossmann Store Sales](https://www.kaggle.com/c/rossmann-store-sales).
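As a toy illustration of the embedding idea mentioned above (this sketch is mine, not from any competition solution; the sizes, the `embed` helper, and the random initialization are all made up for the example): an embedding layer is essentially a trainable lookup table with one dense vector per category id, which gets concatenated with the other inputs.

```python
import random

random.seed(0)
n_products, emb_dim = 100, 4  # hypothetical sizes

# An embedding layer is essentially a trainable lookup table:
# one dense vector per category id, initialized randomly and
# later updated by backpropagation (training not shown here).
embedding = [[random.gauss(0.0, 0.1) for _ in range(emb_dim)]
             for _ in range(n_products)]

def embed(product_id, numeric_features):
    """Concatenate the product's learned vector with its numeric inputs."""
    return embedding[product_id] + list(numeric_features)

x = embed(17, [0.3, 1.2])  # product id 17 plus two numeric covariates
print(len(x))  # 6 = emb_dim + 2
```

In practice one would use a framework such as PyTorch or Keras, where the lookup table is a layer whose rows are trained jointly with the rest of the network.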
null
CC BY-SA 4.0
null
2023-03-15T14:50:01.560
2023-03-15T14:50:01.560
null
null
86652
null
609574
2
null
609344
0
null
As Jarle points out in a comment above, the answer is simple. Assuming iid data, for sufficiently large samples, $$ \frac{\bar{A} - \bar{B}}{\sqrt{ \frac{\sigma^2_A} {N_A} + \frac{\sigma^2_B} {N_B} }} \approx N(0, 1) $$ so we can construct the following 95 percent confidence interval for the difference in means. $$ -1.96 \cdot \sqrt{ \frac{\sigma^2_A} {N_A} + \frac{\sigma^2_B} {N_B} } \le \bar{A} - \bar{B} \le 1.96 \cdot \sqrt{ \frac{\sigma^2_A} {N_A} + \frac{\sigma^2_B} {N_B} } $$ and then we add $\bar{B}$ to get the confidence interval for $\bar{A}$ $$ \bar{B} -1.96 \cdot \sqrt{ \frac{\sigma^2_A} {N_A} + \frac{\sigma^2_B} {N_B} } \le \bar{A} \le \bar{B} + 1.96 \cdot \sqrt{ \frac{\sigma^2_A} {N_A} + \frac{\sigma^2_B} {N_B} } $$ We can estimate $\sigma^2_A$ and $\sigma^2_B$ by the sample variance of $B$.
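A minimal numerical sketch of the final interval (the data below are made up purely for illustration; I plug in each sample's own variance here, whereas the last line above suggests using $B$'s variance for both, which would only change the `se` line):

```python
import math

def mean_var(xs):
    """Sample mean and (n-1)-denominator sample variance."""
    n = len(xs)
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / (n - 1)
    return m, v

# made-up data, purely for illustration
A = [5.1, 4.8, 5.5, 5.0, 4.9, 5.2, 5.3, 4.7]
B = [4.6, 5.0, 4.9, 5.2, 4.8, 5.1, 4.7, 5.0, 4.9, 5.3]

mA, vA = mean_var(A)
mB, vB = mean_var(B)
se = math.sqrt(vA / len(A) + vB / len(B))

lo, hi = mB - 1.96 * se, mB + 1.96 * se  # interval centred at B's mean
print(lo, hi)
```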
null
CC BY-SA 4.0
null
2023-03-15T15:08:47.313
2023-03-15T15:08:47.313
null
null
266571
null
609575
1
null
null
1
22
I have several variables and I would like to test for possible linear correlations between them. However, the data come from 2 groups, and there is a significant group difference in these variables. I know I could just correlate within subgroups, but then I would have only half the data points in each subgroup. Would it be valid to use partial correlation instead, with group as a factor to adjust for? Any other ideas are welcome. Background I compare a group of patients with healthy controls in brain activations. I found they differ in 3 areas, so I have extracted the activation values from these regions for both groups. I also have data from 2 psychological tests: memory and learning performance. Thus, I have 3 brain and 2 psychological variables, and the patients showed lower values across all 5 variables compared to controls. To understand the function of the brain regions, I would like to know if brain activity shows a linear correlation with psychological performance, e.g. does the lower activity in one of the regions correlate with the lower performance in one of the tests? One last thing: brain activity in different regions correlates across regions, and so do the two performance scores. Due to this, it was suggested that I simply use multiple bivariate correlations, e.g. 3x2=6, and correct for multiple comparisons. All good, I get a lot of significant activation vs performance correlations, but I know the significance is inflated due to the patients vs controls group differences. I tested within patients only and the significance is gone. Could be because it is not there, but it could also be because I have half the data points, e.g. 30 in the patient group instead of 60 across both groups.
Bivariate correlation across subgroups
CC BY-SA 4.0
null
2023-03-15T15:15:09.067
2023-03-15T15:51:35.290
2023-03-15T15:51:35.290
374012
321346
[ "correlation", "bivariate", "partial-correlation" ]
609576
1
null
null
0
10
I am struggling with the two bottom-up evaluations for Sum-Product Networks with Discriminative Training with Marginal inference. In the paper "Discriminative Learning of Sum-Product Networks" by Robert Gens([http://robertgens.com/papers/dspn.pdf](http://robertgens.com/papers/dspn.pdf)), he mentions two bottom-up evaluations with indicators set as S[y,1|x] and S[1,1|x]. I understand that the configuration S[1,1|x] is like a partition function, but I'm not sure. Please can someone explain these two configurations with an example of an SPN? It will be a big help. Thank you
how to compute two bottom-up evaluations for discriminative Sum-Product Networks with Marginal Inference?
CC BY-SA 4.0
null
2023-03-15T15:23:58.093
2023-03-15T17:06:36.590
2023-03-15T17:06:36.590
360931
360931
[ "inference", "graphical-model", "marginal-distribution", "probabilistic-programming", "discriminative-models" ]
609577
1
null
null
0
31
I have a correlation of 0.777 but only a p-value of 0.069 (not significant) on my Pearson's test. My sample size was 54. Should my hypothesis still be rejected even if there is a correlation? Is the non-significant p-value due to sample size? What can I say about the hypothesis in this case?
I have a moderate to high correlation and a p-value that is non-significant. Do I still reject the whole hypothesis?
CC BY-SA 4.0
null
2023-03-15T15:29:42.003
2023-03-15T17:05:13.063
null
null
383304
[ "r", "correlation", "p-value", "pearson-r" ]
609580
2
null
609563
1
null
If you are concerned that the model might not be identified, you should try the model with different starting values and see if it converges to the same place. If it converges to the same place, it's identified. It's not clear (to me) what the eu latent variable is doing. Can you post output?
null
CC BY-SA 4.0
null
2023-03-15T16:23:22.290
2023-03-15T16:23:22.290
null
null
17072
null
609581
1
null
null
0
30
My apologies if this has been previously answered. I am fairly new to the statistics of clinical trials and I want to know how expert statisticians handle continuous variables like "time to event" to identify responder and non-responder groups within a clinical trial. I am assuming survival graphs ([https://cran.r-project.org/web/packages/survminer/vignettes/Informative_Survival_Plots.html#motivation](https://cran.r-project.org/web/packages/survminer/vignettes/Informative_Survival_Plots.html#motivation)) could be one approach, but I may be wrong. Kindly provide insights and guidance. Thank you
Analyses of continuous variable like time to event within clinical setting
CC BY-SA 4.0
null
2023-03-15T16:27:17.090
2023-03-17T14:26:14.833
null
null
5931
[ "survival", "continuous-data", "clinical-trials", "kaplan-meier" ]
609582
1
null
null
0
31
I am new to R and would like to follow the answer to this: [Simulation of logistic regression power analysis - designed experiments](https://stats.stackexchange.com/questions/35940/simulation-of-logistic-regression-power-analysis-designed-experiments) I would like specifically to know if I have power to detect a contribution to the model in an interacting covariate term (Combined represents gene*prs_standardised). My data is a case/control group with a polygenic risk score and a mix of other continuous and binary covariates. In order to do this I would like to simulate my own dataset; the model summary is below: [](https://i.stack.imgur.com/MymuB.png) Whilst the P-value for Combined implies there is no contribution to the model, I am doubtful I am powered to detect this signal and would like to check. In order to simulate a dataset like this, do I need to randomly generate tables/models by setting the parameters for the random numbers based on the min/max/sd of each coefficient? I am not too sure how to do that and any help would be greatly appreciated. Ideally there would be some kind of code that will look at the distribution of each column making up a coefficient of the table and then simulate data based on its distribution?
Simulating data to calculate power for a logistic model
CC BY-SA 4.0
null
2023-03-15T16:44:10.960
2023-03-15T16:44:10.960
null
null
300090
[ "r", "regression", "logistic", "statistical-power" ]
609583
2
null
609577
4
null
I'm sure this question is covered elsewhere on this site. But basically, for Pearson correlation, there is a relationship between sample size, the correlation coefficient, and the resultant p-value. This relationship can be seen on tables like this one: [real-statistics.com/statistics-tables/pearsons-correlation-table/](https://real-statistics.com/statistics-tables/pearsons-correlation-table/), or can be found in many analysis of experiments textbooks. For a given r value, the p-value becomes smaller as the sample size increases. This is basically a feature of how we do hypothesis testing. Practically speaking, getting a relatively large r value with a small sample size is suggestive. Likely, there is a real correlation, but we have limited evidence against the possibility that this apparent correlation occurred simply by chance. Likewise, a p-value less than 0.10 is suggestive, but certainly not dispositive.
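To make the relationship concrete: the test statistic for Pearson's $r$ under the null of zero correlation is $t = r\sqrt{(n-2)/(1-r^2)}$ on $n-2$ degrees of freedom, so for a fixed $r$ the statistic grows (and the p-value shrinks) as $n$ increases. A quick sketch of mine (the specific $r$ and $n$ values are arbitrary):

```python
import math

def pearson_t(r, n):
    """t statistic for H0: rho = 0, given sample correlation r and size n."""
    return r * math.sqrt((n - 2) / (1 - r * r))

# same correlation, increasing sample size -> larger t, smaller p-value
for n in (10, 30, 100):
    print(n, pearson_t(0.5, n))
```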
null
CC BY-SA 4.0
null
2023-03-15T16:47:33.040
2023-03-15T17:01:35.503
2023-03-15T17:01:35.503
166526
166526
null
609584
1
null
null
3
48
I am currently working on a dataset (count data) from a rather heavily unbalanced sampling design. In particular, I would like to be able to predict the abundance of the studied species according to multiple landscape and meteorological metrics. To do so, I have tried to build GLMMs (with a negative binomial distribution), but I am having trouble finding a model structure that would fit the structure of my dataset. This dataset consists of surveys (count data) carried out in three cities:

- In city A, protocol I was followed in year 7.
- In city B, protocol II was followed in years 1 to 6.
- In city C, protocol I was followed in year 4 and both protocols I and II were followed in year 5.

In protocol II, clusters of sites are monitored along transects, whereas in protocol I, the sites should be located more or less randomly. Furthermore, there can be several sites/transects monitored on a given day in a given city. Some sites/transects were also monitored several times during a year or during the whole sampling period.

|City |Protocol |Year |Number of transects |Number of sites |Number of surveys |
|----|---------|----|-------------------|---------------|-----------------|
|City A |Protocol I |Year 7 |NA |73 |73 |
|City B |Protocol II |Year 1 |18 |167 |242 |
|City B |Protocol II |Year 2 |22 |206 |310 |
|City B |Protocol II |Year 3 |11 |103 |103 |
|City B |Protocol II |Year 4 |8 |64 |74 |
|City B |Protocol II |Year 5 |8 |75 |95 |
|City B |Protocol II |Year 6 |8 |77 |88 |
|City C |Protocol I |Year 4 |NA |35 |63 |
|City C |Protocol I |Year 5 |NA |19 |19 |
|City C |Protocol II |Year 5 |13 |71 |71 |

Given this unbalanced structure, I first tried to build a model by city, e.g.
for city B: `Abundance ~ environmental variables + meteorological variables + Julian day (and its quadratic effect) + year (as factor) + (1 | transect / site) + (1 | date)` However, for a number of reasons (including having sufficient statistical power), I have been advised to try to build a model for all the data (and not just for city subsets). This seems quite tricky to me, especially because of the high correlation between cities, protocols and years. I first imagined adding a random effect which would be a 10-level factor corresponding to the 10 combinations of city * protocol * year. However, I do not think this would be rigorous, as the different levels would not be independent (e.g. one can assume that city B * protocol II * year 1 would give results more similar to city B * protocol II * year 2 than to city A * protocol I * year 7). So my question is the following: can anyone think of a model structure that would be able to encompass the variation due to city, protocol and year without having correlation issues (e.g. high VIF) or random effects with insufficient levels (e.g. 2 levels, whereas I read that a minimum of about 6 would be much better)? Or is it just not possible? For your information: I'm using R with the glmmTMB package.

UPDATE: Some additional information on the two protocols:

- Protocol II: during two given hours of the day, surveys are carried out along a transect consisting of at least 6 sites, each site being monitored for 6 minutes. The abundance at a given site is the number of individuals counted during the 6 minutes.
- Protocol I: a given site is monitored for the whole day. For this study, only the two hours during which protocol II is carried out are considered, and the abundance is the average number of individuals counted during 6-minute intervals during these two hours.

The devices used to detect individuals are not the same in protocols I and II (and probably differ in performance).
Heavily unbalanced sampling design - Structure of GLMM (ecology)
CC BY-SA 4.0
null
2023-03-15T16:51:36.993
2023-03-16T15:55:12.737
2023-03-16T15:55:12.737
323265
323265
[ "correlation", "mixed-model", "unbalanced-classes", "glmm", "ecology" ]
609585
1
null
null
1
37
I'm following the derivative calculation of the [Batch Norm paper](https://arxiv.org/pdf/1502.03167.pdf): [](https://i.stack.imgur.com/G8ixU.png) Something doesn't seem right. In the 3rd equation, shouldn't we lose the 2nd term, as the sum is equal to 0 ($\mu_B$ is the mean of the $x_i$ over the batch)? And worse, in the 4th equation, for the same reason, once we sum the loss over the entire batch, we will lose the middle term. And then the 1st and 3rd terms will cancel each other to give 0. This obviously can't be right, but I don't seem to spot my mistake(s).
Batch Normalization derivatives
CC BY-SA 4.0
null
2023-03-15T16:57:08.293
2023-03-19T12:27:11.760
null
null
117705
[ "gradient-descent", "backpropagation", "batch-normalization" ]
609586
2
null
609577
1
null
When you do a significance test, you reject, or you fail to reject, the null hypothesis based on the p-value from the significance test. Your p-value is not significant, therefore you fail to reject the null hypothesis. You have not found evidence that the null hypothesis is false. You ask "Do I still reject the whole hypothesis?" No. You do not reject any hypotheses because of a non-significant test.
null
CC BY-SA 4.0
null
2023-03-15T17:05:13.063
2023-03-15T17:05:13.063
null
null
17072
null
609587
1
null
null
0
28
I am trying to perform a multiple regression in R. However, I am having a hard time interpreting the plots and deciding what kind of transformation might be needed. Here is a scatterplot matrix with all my variables (Price is the response variable): [](https://i.stack.imgur.com/OorEN.png) The diagnostic graphics from the full model look as follows: [](https://i.stack.imgur.com/soTOI.png) After looking at the Cook's distance and residuals vs leverage plots, I have removed the 348th observation. After removing the outlier, the diagnostic graphics from the full model look as follows: [](https://i.stack.imgur.com/Y08vD.png) I have used ols_step_both_aic from the olsrr library and decided that 2 predictor variables are significant. The diagnostic graphics from the chosen model look as follows: [](https://i.stack.imgur.com/Ia37s.png) I have tried applying a log transformation to the response variable: lm(log(price) ~ bath + sqft, data = data) The diagnostic graphics for the model after the log transformation look as follows: [](https://i.stack.imgur.com/S6nxn.png) Could anyone help me understand whether the data needed a transformation at all? If yes, was applying a log transformation a good idea? If not, what would be a better alternative?
Transforming predictors for multiple regression in R
CC BY-SA 4.0
null
2023-03-15T17:09:42.677
2023-03-15T17:09:42.677
null
null
383312
[ "r", "multiple-regression", "data-transformation", "residuals", "cooks-distance" ]
609588
1
null
null
2
44
I've heard that one solution to analyzing compositional data (in my case five predictors that are proportions summing to 1) is to simply remove the intercept. This seems like a much simpler solution than transforms, and allows all predictors to be included. I'd like to double-check that this is an acceptable approach. If so, can you explain why this works? The reference I was pointed to is a textbook that isn't available online. Edit: This is the textbook I was pointed to. [https://onlinelibrary.wiley.com/doi/book/10.1002/9781118204221](https://onlinelibrary.wiley.com/doi/book/10.1002/9781118204221) Also, to give a bit more detail, my DV is accuracy in response to each item. My predictors are proportions reflecting the makeup of the items.
Am I correct that removing the intercept allows you to analyse compositional data?
CC BY-SA 4.0
null
2023-03-15T17:21:15.820
2023-03-15T20:43:56.410
2023-03-15T18:25:29.457
67137
67137
[ "regression", "proportion", "compositional-data" ]
609589
2
null
609562
0
null
Just use the formulas $\phi=\exp(\beta)$ and $\beta=\log \phi$ to move between the $\beta$ and the $\phi$ scales. Most simply, after you have found the profile of the log partial likelihood in terms of $\beta$, you can just re-express the values on the horizontal axis in the $\phi$ scale. The plots will be most symmetric, however, if you log-transform the $\phi$ scale of the horizontal axis, which effectively maps back to a linear $\beta$ scale. To illustrate in detail, the Cox partial likelihood can be written in terms of $\beta$ and covariate values $X_i$ for individuals $i$: $$\prod_{i=1}^{n}\left(\frac{\text{exp}(\beta X_i(t_i))}{\sum_{j\in\mathcal{R(t_i)}}\text{exp}(\beta X_j(t_i))}\right)^{\delta_i}$$ where the product is over the event times $t_i$, $\delta_i$ is 0/1 for censored/uncensored observations, and $\mathcal{R(t_i)}$ is the set of cases at risk at time $t_i$. In terms of the hazard $\phi_i=\exp(\beta X_i)$ for case $i$ with time-constant covariates, the partial likelihood can be written: $$\prod_{i=1}^{n}\left(\frac{\phi_i}{\sum_{j\in\mathcal{R(t_i)}}\phi_j}\right)^{\delta_i}.$$ The log partial likelihood is then: $$\sum_{i=1}^n \left(\log(\phi_i)-\log\sum_{j\in\mathcal{R(t_i)}}\phi_j\right),$$ where the sum is only over cases $i$ with observed event times. With the data you provided in an earlier version of the question, containing only 5 event times and your single binary `Group` predictor, you can do this pretty much by hand. Take $\phi_i=1$ for Group 1 ($\log(\phi_i)=0$) and $\phi_i=$`HR` for Group 2. Then you proceed event time by event time. 
Your data, in event-time order, are:

```
   time status group
1  0.028      1    G1
9  0.217      1    G2
2  0.547      0    G1
10 0.598      0    G2
11 0.822      0    G2
3  0.934      0    G1
4  1.194      0    G1
12 1.395      0    G2
5  1.415      1    G1
13 1.626      0    G2
6  1.908      0    G1
14 3.347      1    G2
7  3.378      0    G1
8  3.440      1    G1
```

Based on the cases at risk at each of the 5 event times, you can write the following for the log partial likelihood as a function of `HR`:

```
lplHR <- function(HR){
  (log(1)  - log(8 + 6*HR)) +
  (log(HR) - log(7 + 6*HR)) +
  (log(1)  - log(4 + 2*HR)) +
  (log(HR) - log(2 + HR)) +
  (log(1)  - log(1))
}
```

and then plot the results.

```
curve(lplHR(x), from = exp(-1.5), to = exp(2.5), log = "x",
      xlab = "HR", ylab = "log partial likelihood", bty = "n")
```

[](https://i.stack.imgur.com/jbT2x.png)

The log scaling of the `HR` horizontal axis is a linear scaling in terms of the corresponding $\beta$ values. For more complicated situations, as you understand, you start with a set of hypothesized values of $\beta$. For each value, you extract the log partial likelihood from a Cox model that is restricted to have that value. For a single coefficient as in your case, this can be done by initializing $\beta$ to that value and preventing the function from iterating beyond the initial estimate. That's illustrated on [this page](https://stats.stackexchange.com/a/187339/28500). When there are more predictors in the model, for each hypothesized $\beta$ for the predictor $X$ of interest you fit a model with an offset term for $\beta X$ and let the function optimize the other coefficients. That's illustrated on [this page](https://stats.stackexchange.com/a/572528/28500). To do either of those in terms of $\phi$ instead of $\beta$, set up your hypothesized set of values of $\phi$ and use $\log \phi$ wherever you would have used $\beta$ or $\beta X$ previously.
Following the first method for a set of hypothesized `HR` values and your data in a data frame `sData`:

```
library(survival)
HRs <- seq(exp(-1.5), exp(2.5), length = 100)
llik <- double(100)
for (i in 1:100) {
  temp <- coxph(Surv(time, status) ~ group, data = sData,
                init = log(HRs[i]), iter.max = 0)
  llik[i] <- temp$loglik[2]
}
plot(HRs, llik, type = "l", bty = "n", log = "x")
```

The plot (not shown) is the same as above.
null
CC BY-SA 4.0
null
2023-03-15T17:21:34.667
2023-03-15T21:30:33.340
2023-03-15T21:30:33.340
28500
28500
null
609590
2
null
609570
5
null
If you were an academic, one must assume you already have a good reference for multivariable calculus, linear algebra, and differential equations – these are not optional. I personally heard from Witten and Tibshirani that their texts have the greatest value in working out the problems in excruciating detail, including intensive matrix algebra. So, bone up on these skills if you haven't already. A mathematical pedagogy is fundamentally different from computer science. Whereas CS advocates a top-down approach, mathematics is about finding generalizations. That's why (on this site and elsewhere) you have many self-proclaimed "ML experts" who have fit enough algorithms on Kaggle to burn out a network of NVidia graphics cards, but who can't write down an estimating equation to save their life. If you were a diligent student, you would hope to cover all this over the course of 4-6 years of dedicated study. If you were a graduate statistics student, you would work through a theory course based on, say, Casella and Berger (research other posts on this one, there may be better texts), linear modeling, and then advanced theory up to minimax estimation, empirical processes, etc. Texts might include Ferguson's [A Course in Large Sample Theory](https://www.routledge.com/A-Course-in-Large-Sample-Theory/Ferguson/p/book/9780412043710), or Lehmann and Casella's [Theory of Point Estimation](https://link.springer.com/book/10.1007/b98854). At that point you can read and understand foundational work. These are necessary to "prove" that many algorithmic solutions are well motivated, such as the bootstrap, LARS, etc. Referring to "Bayesian" alone is a forgivable newbie mistake, but to participate meaningfully on this site, you need to be more precise. Peter Hoff's "[A First Course in Bayesian Statistical Methods](https://link.springer.com/book/10.1007/978-0-387-92407-6)" should cover a broad number of areas.
Harrell's "[Regression Modeling Strategies](https://link.springer.com/book/10.1007/978-3-319-19425-7)" is an applied text with some modern solutions that provide a lot of area for research. Take a look at this page from Arcones and Gine regarding bootstrapping. A procedure as simple as repeatedly resampling rows with replacement from a dataset requires knowledge of a practically completely new area of statistics, empirical process theory (see texts from [Van der Vaart and Wellner](https://link.springer.com/book/10.1007/978-1-4757-2545-2) for a reference on this... not for the faint of heart!). [](https://i.stack.imgur.com/QS0gs.png) If you want to understand the mettle that these researchers bring to the theoretical forefront, you just need to look up any related article in premier research journals, such as Biometrika, JRSS, JASA, etc. It is a good exercise at times to find a journal article you really want to understand that's way beyond your ability and try to replicate the results, looking up cited references as needed. With Sci-Hub this is within almost anyone's reach.
null
CC BY-SA 4.0
null
2023-03-15T17:21:45.803
2023-03-15T17:44:12.300
2023-03-15T17:44:12.300
362671
8013
null
609592
1
null
null
2
31
I have implemented a beta regression and am a little confused on how I should interpret the coefficients of my model. For context, both my independent variables and dependent variable are expressed in percentage form, ranging from [0, 1]. The only exception is one independent variable which takes the binary value of 0 or 1. Does anybody mind sharing how I could interpret the coefficients here in this beta regression? I've never worked with a dataset like this before; any help would be appreciated!
Interpreting coefficients of beta regression
CC BY-SA 4.0
null
2023-03-15T18:33:13.413
2023-03-15T18:33:13.413
null
null
383317
[ "multiple-regression", "regression-coefficients", "beta-distribution", "beta-regression" ]
609594
1
null
null
0
36
I have a dataset where a small number of individuals were sampled each year for their sex. I would normally use a binomial model (link = logit) to analyze sex ratios; however, in this system, the ratio of F:M varies by size class. Therefore, to estimate the overall proportion of females in the population, I need to account for the total number of individuals in each size class in each year. I'm wondering whether a gamma distribution with a log link function would make sense, using the estimated proportion of females (pF) in the population. Or should I try something else entirely, like a Beta distribution (link ??)... Ultimately, I'll be modelling this data with a generalized additive model (GAM). Example dataset:

```
library(dplyr)

set.seed(42)

# dataframe with individuals sexed in each year
dfSex <- data.frame(
  year = sample(2003:2012, 2000, replace = TRUE),
  size = c(rep(LETTERS[1], 1000), rep(LETTERS[2], 1000)),
  sex = c(sample(c("F", "M"), 1000, replace = TRUE, prob = c(0.1, 0.9)),
          sample(c("F", "M"), 1000, replace = TRUE, prob = c(0.7, 0.3)))) %>%
  group_by(year, size) %>%
  summarise(nF = sum(sex == "F"), nM = sum(sex == "M"))

# dataframe with the number of individuals in each size class/year
dfCounts <- data.frame(
  year = rep(c(2003:2012), 2),
  size = c(rep(LETTERS[1], 10), rep(LETTERS[2], 10)),
  count = sample(100:1000, 20))

df <- left_join(dfSex, dfCounts) %>%
  # calculate overall proportion of females after accounting for counts of
  # individuals in each size class
  mutate(cFsize = nF/(nF+nM)*count) %>%
  group_by(year) %>%
  summarise(cF = sum(cFsize), cT = sum(count), pF = sum(cFsize)/sum(count))

df
# A tibble: 10 × 4
    year    cF    cT    pF
   <int> <dbl> <int> <dbl>
 1  2003  423.  1376 0.308
 2  2004  581.  1342 0.433
 3  2005  641.  1056 0.607
 4  2006  183.   705 0.259
 5  2007  322.   909 0.354
 6  2008  259.   634 0.409
 7  2009  473.  1137 0.416
 8  2010  633.  1663 0.381
 9  2011  194.   878 0.221
10  2012  691.  1817 0.380
```
What distribution and link function should I use for analyzing proportions?
CC BY-SA 4.0
null
2023-03-15T18:55:03.927
2023-03-15T20:39:06.193
2023-03-15T20:39:06.193
182146
182146
[ "r", "distributions", "proportion", "generalized-additive-model" ]
609596
2
null
524697
2
null
I see two errors. First, any use of "statistically significant" refers to a null hypothesis. Therefore, when the first line refers to a correlation that is statistically significant yet equal to zero, the null hypothesis must not be that the correlation is zero. However, then rejecting a null hypothesis that the correlation equals some other value does not give evidence of zero correlation. For instance, if the null hypothesis is that the correlation is $0.5$, rejecting could mean that the correlation is $0.4$ or $-0.8$. What could be meant here is that an equivalence test was performed and gave statistically significant evidence that the correlation is, in some sense, "close" to zero. Then it would be reasonable to conclude that there is not a practically significant linear relationship between the two variables. Second, failure to reject a null hypothesis is not the same as confirming the null hypothesis. Therefore, just because you wind up with a correlation that is not statistically significant (that is, for a null hypothesis of zero) does not mean that the variables lack a correlation. With this in mind, the two statements seem completely compatible. In the first case, a significant result from an equivalence test shows the correlation to be close enough to zero that we conclude there to be no practically important linear relationship. In the second case, we have no strong evidence of a linear relationship, so we cannot conclude that there exists a linear relationship.
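To illustrate what such an equivalence test could look like (this sketch is mine, not from the quoted text; the equivalence bound of 0.1 and the Fisher-z approach are assumptions for the example): one common recipe concludes "practically zero" when the 90% confidence interval for the correlation lies entirely inside a pre-specified equivalence region around zero.

```python
import math

def correlation_equivalent_to_zero(r, n, bound=0.1):
    """TOST-style check via Fisher's z: conclude 'practically zero' (at
    alpha = 0.05) when the 90% CI for the correlation sits inside
    (-bound, bound)."""
    z = math.atanh(r)                 # Fisher z-transform of observed r
    se = 1.0 / math.sqrt(n - 3)       # approximate standard error of z
    crit = 1.6449                     # standard normal quantile for 0.95
    lo = math.tanh(z - crit * se)     # back-transform CI ends to r scale
    hi = math.tanh(z + crit * se)
    return -bound < lo and hi < bound

print(correlation_equivalent_to_zero(0.01, 10000))  # large n: True
print(correlation_equivalent_to_zero(0.01, 20))     # small n: False
```

Note how the same tiny observed correlation supports an equivalence claim only when the sample is large enough for the interval to be narrow.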
null
CC BY-SA 4.0
null
2023-03-15T19:08:39.523
2023-03-15T19:08:39.523
null
null
247274
null
609597
1
null
null
0
14
I am mapping strain using a 70-node grid over the surface of different materials to compare mechanical properties. I have calculated the strain over the 70 nodes for 4 different materials, each material group containing 5 individual specimens (n=5/group over 4 groups). For each group, I have averaged the strain at each node over the 5 specimens (now each node in the 70-node grid is an average of 5 individual sample values). I want to compare the overall means between all 4 groups. However, while I can calculate the overall mean from the 70-node grid that has been averaged from 5 specimens (a mean of means), I am not sure how to define a variance/deviation metric to compare them statistically, if that is even possible. Alternatively, I have considered treating the averaged 70-node grid for each of the 4 groups as independent observations that can then be averaged yielding a typical mean/standard deviation of a single group, and comparing the overall means that way. I'm not sure if that is statistically viable either. The only sure thing I know I can do is individually compare each node average across the 4 groups. I'm avoiding that for now, since it would be cumbersome to report. Distilling it down to a comparison of a single number would be easier, if possible.
Is there a statistical test to compare different groups having overall means of a set of averaged data points?
CC BY-SA 4.0
null
2023-03-15T19:12:06.633
2023-03-21T20:11:20.217
2023-03-21T20:11:20.217
11887
383318
[ "hypothesis-testing", "descriptive-statistics", "engineering-statistics" ]
609598
1
609603
null
1
54
Let $X_1, X_2,\ldots,X_n$ be a random sample of size $n$ from a distribution with probability density function: $$p(x) = \theta^2 x e^{-\theta x} I(x > 0).$$ How can I find an asymptotically normal estimator?
How to find asymptotically normal estimator if I know probability density function
CC BY-SA 4.0
null
2023-03-15T19:14:56.603
2023-03-15T19:55:20.880
2023-03-15T19:55:20.880
56940
383322
[ "self-study", "mathematical-statistics", "estimators", "asymptotics" ]
609601
1
null
null
1
46
I am analyzing the effect of a corporate event on firms' stock prices with the classical event study methodology from finance, meaning that I estimate abnormal returns that are due to this corporate event. After estimating abnormal returns, I aggregate them over firms and time to end up with the so-called CAAR (cumulative average abnormal return). Now, I dive deeper into the analysis. First, I would like to perform some sensitivity analyses. One example is to check whether the effect of the event depends on the size of the firms. To do so, I split the event sample into above- and below-median-sized firms and end up with a CAAR for both groups, which are independent, as a firm is only in one of the two groups. I would like to see if the CAAR (remember that it is a mean measure) of both groups is statistically different and perform a t-test for independent groups. So, in R, 't.test(..., paired = FALSE)'. At this point, don't think about Welch's adjustment etc. Second, I would like to perform some robustness analyses. For example, I change the underlying model to estimate abnormal returns and end up with a new CAAR for the entire event sample. This time the sample is not split into different groups but remains the same. In this situation, should I use the paired t-test to compare the difference of the CAARs regarding both model specifications? So, should I use in R 't.test(..., paired = TRUE)'? The "master question": I perform another type of robustness analysis in which I drop some events that do not pass some more rigorous filter criteria. Let's say I have 200 events in my baseline analysis, but only 150 fit the stricter criteria. I calculate the CAAR for both situations and would like to test with a t-test whether the CAAR is statistically different between both groups. How could I do this when one time there is the full sample and another time only a subsample?
Paired or unpaired t-test in event study methodology
CC BY-SA 4.0
null
2023-03-15T19:20:06.260
2023-03-21T17:33:27.910
2023-03-21T17:33:27.910
11887
383321
[ "t-test" ]
609602
2
null
609568
1
null
You are wrong to assume that `select = TRUE` will necessarily result in lower-complexity smooths. The tests in the output of `summary()` can only test the estimated smooth effect against the null hypothesis of no effect, conditional upon all the other smooths and terms in the model. The estimated effect of one smooth will change depending on the other terms in the model and their estimated effects, unless the covariates (and bases) are uncorrelated (orthogonal). In real-world data, we rarely if ever see perfectly uncorrelated covariates. When you turned on `select = TRUE`, the effects of $f(\texttt{temp}_i)$ and $f(\texttt{Longitude}_i, \texttt{Latitude}_i)$ were effectively reduced to null effects (flat, constant functions), as the extra penalty on those two smooths caused them to be shrunk out of the model. As those terms no longer explain variation in the response, there is opportunity for the other smooths to explain the variation previously explained by the two shrunken smooths. This is perhaps what has happened with $f(\texttt{tod}_i)$. The other effect that shrinking terms out of the model can have is that it can reduce the uncertainty in the estimates of the remaining terms in the model. This seems to be what has happened with $f(\texttt{sal}_i)$; note that the effect has changed shape slightly but that the credible interval is narrower at the boundaries of the `sal` covariate.
null
CC BY-SA 4.0
null
2023-03-15T19:26:31.900
2023-03-15T19:26:31.900
null
null
1390
null
609603
2
null
609598
2
null
You can use the maximum likelihood estimator, which, in regular cases such as this one, has a limiting normal distribution. This is defined as $$ \hat\theta = \text{arg max}_{\theta\in\Theta} L(\theta), $$ where $L(\theta) = \prod_{i=1}^n f(x_i;\theta)$ is the likelihood function and $f$ is your density function; I am assuming that the sample is independently and identically distributed. From a computational perspective, it is much easier to maximise the log-likelihood function $\ell(\theta) = \log L(\theta)$ instead of maximising $L(\theta)$. Replacing $f$ with your density will deliver the required estimator; I leave this detail to you. As for the limiting distribution, the theory of maximum likelihood tells us that $$\sqrt{n}(\hat\theta-\theta) \overset{n\to\infty}{\to} N\left(0, I(\theta)^{-1}\right),$$ where $I(\theta)$ is the [expected Fisher information](https://en.wikipedia.org/wiki/Fisher_information) for a single observation. Hint: The likelihood function is $$L(\theta) = \prod_{i=1}^n \theta^2 X_i e^{-\theta X_i} = \theta^{2n}e^{-\theta \sum_i X_i}\prod_i X_i.$$ The log-likelihood function is $$\ell(\theta) = 2n\log\theta - \theta\sum_{i} X_i + \log \prod_i X_i.$$ Next, solve $d \ell(\theta)/d\theta = 0$ in $\theta$ to get a stationary point. Lastly, check the second derivative of $\ell(\theta)$ to make sure the stationary point you found is a maximum. If it is, that's the maximum likelihood estimator for $\theta$.
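As a quick numerical sanity check (my own sketch, not part of the derivation above; it assumes NumPy and SciPy are available), you can simulate from this density, which is a Gamma distribution with shape 2 and rate $\theta$, and maximise the log-likelihood directly:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
theta_true = 3.0
n = 100_000
# f(x; theta) = theta^2 * x * exp(-theta * x) is Gamma(shape = 2, rate = theta)
x = rng.gamma(shape=2.0, scale=1.0 / theta_true, size=n)

def neg_log_lik(theta):
    # Negative of ell(theta); the log prod(x_i) term is constant in theta, so it is dropped
    return -(2 * n * np.log(theta) - theta * x.sum())

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 50.0), method="bounded")
print(res.x)  # numerical MLE, close to theta_true = 3
```

The numerical maximiser should agree with the closed-form stationary point you obtain from $d\ell(\theta)/d\theta = 0$.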
null
CC BY-SA 4.0
null
2023-03-15T19:34:14.853
2023-03-15T19:51:33.757
2023-03-15T19:51:33.757
56940
56940
null
609604
1
null
null
0
13
I'm pretty new to Random Forests and Machine Learning in general, so if you see an alternative to my approach I'll appreciate any suggestions. I want to create a model that classifies particles into 5 different categories (i.e. A, B, C, D and E) according to a set of measured variables, including size (e.g. length), shape (e.g. aspect ratio) and color (e.g. RGB values). I have a dataset of around 30,000 particles. As my dataset is not balanced I've down-sampled it, which gives me 1,099 particles per category and a total of 5,495 particles. I generated an 80% partition on the dataset to test the model performance. Up to now I am running a single Random Forest model on all variables to predict the categories. I'm getting a max. accuracy of 0.88, a kappa of 0.85 and an OOB error of 12.5%. I think I can get a better performance than this: category A depends almost uniquely on size and shape, B depends more on size and color, C more on shape and color, and D and E on size, shape and color. Is there a way I could tune the model taking this into account? I was also thinking of building multiple Random Forests, more like a cascading approach, e.g.: a first RF could separate the first two "obvious" categories from the rest, and a second RF could try to predict the remaining categories. Any suggestions or thoughts on this?
Cascade Random Forest models or tuning?
CC BY-SA 4.0
null
2023-03-15T19:43:32.680
2023-03-15T19:43:32.680
null
null
383320
[ "r", "machine-learning", "random-forest" ]
609605
2
null
606796
0
null
I can now answer my own question, based on the kind advice of Prof. Vermunt: for ordinal dependent variables in a 3-step LCA, ML is advised.
null
CC BY-SA 4.0
null
2023-03-15T20:04:57.553
2023-03-15T20:04:57.553
null
null
108040
null
609606
2
null
591930
0
null
You seem to be seeing something like $y = 0.8x\iff x = 0.8 y$ and thinking that looks ridiculous. Yes, that is ridiculous, and the right relationship is $y = 0.8x\iff x = \frac{y}{0.8}$, as you have noticed the algebra says. However, those are not the regression equations! The equations below are the regression equations. $$ \mathbb E[y\vert x] = 0.8 x\\ \Big\Updownarrow\\ \mathbb E[x\vert y] = 0.8 y\\ $$ Therefore, the algebraic rearrangement that leads to the apparent contradiction is not the correct algebraic rearrangement.
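A quick simulation (my addition, assuming NumPy) makes this concrete: when $x$ and $y$ are standardised with correlation 0.8, both OLS slopes come out near 0.8, so both regression equations hold simultaneously and neither is the algebraic inverse of the other:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(size=n)
# Construct y so that x and y each have unit variance and corr(x, y) = 0.8
y = 0.8 * x + np.sqrt(1 - 0.8**2) * rng.normal(size=n)

# OLS slope of y on x, and OLS slope of x on y
slope_y_on_x = np.polyfit(x, y, 1)[0]
slope_x_on_y = np.polyfit(y, x, 1)[0]
print(slope_y_on_x, slope_x_on_y)  # both near 0.8, not reciprocals of each other
```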
null
CC BY-SA 4.0
null
2023-03-15T20:18:33.900
2023-03-15T20:18:33.900
null
null
247274
null
609607
2
null
451539
2
null
For the case where $\Sigma = \mathbf{I}\sigma$, we have the following formulas: \begin{equation} \mathbb{E}\left( \frac{1}{||x||} \right) = \frac{1}{\sqrt{2}} {}_1F_1 \left(\frac{1}{2}, \frac{P}{2}, -\frac{||\frac{\mu}{\sigma}||^2}{2}\right) \frac{\Gamma\left(\frac{P-1}{2}\right)}{\Gamma\left(\frac{P}{2}\right)} \end{equation} and \begin{equation} \mathbb{E}\left( \frac{1}{||x||^2} \right) = \frac{1}{2} {}_1F_1 \left(1, \frac{P}{2}, -\frac{||\frac{\mu}{\sigma}||^2}{2}\right) \frac{\Gamma\left(\frac{P-2}{2}\right)}{\Gamma\left(\frac{P}{2}\right)} \end{equation} where ${}_1F_1 \left(a,b,c\right)$ is the confluent hypergeometric function, and $||\frac{\mu}{\sigma}||^2$ is the squared norm of the distribution mean divided by the isotropic noise standard deviation $\sigma$ above. The first expression I got from the sympy Python package (symbolic computation), with the following code ``` from sympy.stats import ChiNoncentral, density, E from sympy import Symbol, simplify # Define some symbols to use k = Symbol("k", integer=True) l = Symbol("l") z = Symbol("z") # Define the chi noncentral distribution X = ChiNoncentral("x", k, l) # Get the analytic expression for E(1/X), the -1 moment of the distribution analyticExp = simplify(E(1/X)) ``` Note, in simplifying the first expression from the code above, I used the Kummer relation. These expressions, and other related ones, are derived in "Intermediate Probability: A Computational Approach", M. Paolella, section 10.1.2. In that section, an expression for the moments of the non-centered Chi-square variable is shown. You get the expressions above by applying that formula to moments -1 and -2 of the non-centered Chi-square. This can be generalized to some other $\Sigma$'s.
The Mathai and Provost book "Quadratic Forms in Random Variables" specifies in Theorem 5.1.3 the specific $\Sigma$'s under which $X^T X$, with $X \sim \mathcal{N}(\mu, \Sigma)$, is distributed as a non-central chi-square distribution with non-centrality parameter $\delta^2$ and $r$ degrees of freedom. For those cases, you can just apply the formulas above, substituting the corresponding non-centrality parameter and degrees of freedom. The $\Sigma$'s for which this holds, however, seem to be quite limited (e.g. they have to satisfy $\Sigma^3 = \Sigma^2$, which is almost to say that it is idempotent).
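As a quick Monte Carlo cross-check of the first formula (my own sketch, assuming NumPy/SciPy, and taking the specific case $\sigma = 1$, $P = 3$, $\mu = (1, 0, 0)$):

```python
import numpy as np
from scipy.special import hyp1f1, gamma

rng = np.random.default_rng(0)
P = 3
mu = np.array([1.0, 0.0, 0.0])  # so ||mu/sigma||^2 = 1 when sigma = 1

# Monte Carlo estimate of E(1 / ||x||) with x ~ N(mu, I)
x = rng.normal(size=(2_000_000, P)) + mu
mc = np.mean(1.0 / np.linalg.norm(x, axis=1))

# Analytic expression from the first formula above
lam = mu @ mu
analytic = (1 / np.sqrt(2)) * hyp1f1(0.5, P / 2, -lam / 2) * gamma((P - 1) / 2) / gamma(P / 2)
print(mc, analytic)  # should agree to roughly three decimals
```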
null
CC BY-SA 4.0
null
2023-03-15T20:22:07.347
2023-03-16T14:06:14.117
2023-03-16T14:06:14.117
134438
134438
null
609608
1
null
null
1
203
Given these data, reflecting individual data on 206 subjects, two treatment groups ("uc" and "texting") and race ("nonblack" and "black"). My goal is to calculate a risk ratio (with 95% CI) for each treatment group. The risk ratio of interest is defined as the proportion with outcome == 1 in the 'nonblack' category divided by the proportion with outcome == 1 in the 'black' category. How do I calculate these in R using a logistic (or Poisson) model? ``` agg_dat <- structure(list(trt_grp = structure(c(2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L), levels = c("texting", "uc"), class = "factor"), race = structure(c(2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L), levels = c("black", "nonblack"), class = "factor"), ascertained = structure(c(1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L), levels = c("yes", "no"), class = "factor"), counts = c(21, 9, 24, 49, 32, 3, 63, 5)), class = "data.frame", row.names = c(NA, -8L)) ``` Converting to a row for each individual and setting factor levels: ``` dat <- uncount(agg_dat, counts) |> mutate(outcome = ifelse(ascertained == "yes", 1, 0)) |> mutate(race = factor(race, levels = c("black", "nonblack")), trt_grp = factor(trt_grp, levels = c("uc", "texting"))) ``` It's straightforward to get predictions for each category of trt_grp and race: ``` m0 <- glm(outcome ~ race*trt_grp, family = "binomial", data = dat) (newdata <- agg_dat |> select(trt_grp, race) |> unique()) predictions <- predict(m0, newdata, type = "response") ``` I'm stuck on how to use my R glm() model to estimate the confidence intervals for these risk ratios. The point estimates for the risk ratios can be calculated from model predictions (on the "response" scale). ``` (rr_uc <- predictions[1]/predictions[2]) (rr_texting <- predictions[3]/predictions[4]) ``` I wish to calculate 95% confidence intervals for 'rr_uc' and 'rr_texting'. It seems like the R margins package might be useful. Finally, I'd like to estimate the ratio of risk ratios, i.e., rr_texting/rr_uc. The point estimate is straightforward.
How do I calculate the confidence interval for the ratio of risk ratios?
How to use R glm() to estimate risk ratios and ratio of risk ratios with confidence intervals
CC BY-SA 4.0
null
2023-03-15T18:43:57.993
2023-03-26T15:40:29.963
2023-03-26T15:40:29.963
25494
25494
[ "r", "confidence-interval", "generalized-linear-model", "relative-risk" ]
609609
1
610041
null
1
89
I would like to compare whether the results from two models are significantly different. The models have been trained on the same samples in a K-fold cross-validation setup, so both will produce K performance scores, for which we can test whether the mean performance is significantly different with a related-samples t-test. At first, setting the significance level alpha at 0.05, my results are not significant: ``` >>> import numpy as np >>> from scipy.stats import ttest_rel >>> np.random.seed(12) >>> n_splits = 5 >>> performance_model_A = np.random.normal(0.5, 0.2, n_splits) >>> performance_model_B = np.random.normal(0.6, 0.1, n_splits) >>> _, pval = ttest_rel(performance_model_A, performance_model_B) >>> print(f'At {n_splits}-fold cross validation the p-value is {pval:.3f}.') At 5-fold cross validation the p-value is 0.180. ``` However, if you add repeats to the cross-validation setup, I end up with a p-value lower than alpha: ``` >>> n_repeats = 10 >>> performance_model_A = np.random.normal(0.5, 0.2, n_splits * n_repeats) >>> performance_model_B = np.random.normal(0.6, 0.1, n_splits * n_repeats) >>> _, pval = ttest_rel(performance_model_A, performance_model_B) >>> print(f'At {n_splits}-fold, {n_repeats}-repeat cross validation the p-value is {pval:.3f}.') At 5-fold, 10-repeat cross validation the p-value is 0.002. ``` Now, I think we should perform multiple comparison correction because I used the same sample multiple times (e.g. using Bonferroni correction for `n_repeats` and `n_splits`, reducing alpha to 0.01 and 0.001 respectively). But I am reluctant, since all I am comparing is the performance of the models on a given sample of data, so all results are further evidence of differences between the models (which is what we are testing). However, this leads to the bizarre conclusion that (with sufficient repeats) any model is significantly different from another, so then I might not be performing the right test for this application.
Alternatively, I could split the difference: correct for `n_repeats` but not for `n_splits`, since this way I correct for the number of times a sample appears in the test data partition more than once. But this would not be backed by any strong statistical insight. I have not been able to find definitive advice online/best practice/papers on multiple comparison testing for comparison of cross-validation results, and any help is greatly appreciated. --- Thanks, @Firebug, for the reference to Bouckaert and Frank's corrected repeated k-fold cv test. I could not find a Python implementation for this method, so finally I would like to share the following Python solution if that is allowed: ``` import numpy as np from scipy import stats def corr_rep_kfold_cv_test(a: list, b: list, n_splits: int, n_samples: int) -> tuple[float, float]: """ Implementation of Bouckaert and Frank's (2004) corrected repeated k-fold cv test. """ r = len(a) // n_splits # number of r-times repeats as integer k = n_splits # number of k-folds as integer n1 = n_samples // n_splits * (n_splits - 1) # number of instances used for training n2 = n_samples // n_splits # number of instances used for testing x = np.subtract(a, b) # observed differences m = x.mean() # mean estimate s = np.sum((x - m) ** 2) / (k * r - 1) # variance estimate t_stat = m / np.sqrt((1 / (k * r) + n2 / n1) * s) # corrected test statistic p_val = 2 * stats.t.sf(np.abs(t_stat), k * r - 1) # two-sided p-value, k*r - 1 degrees of freedom p_val return t_stat, p_val ```
(How) should multiple comparison correction be applied for hypothesis testing of cross validation performance?
CC BY-SA 4.0
null
2023-03-15T20:26:10.757
2023-03-22T01:16:20.857
2023-03-22T01:16:20.857
222339
222339
[ "hypothesis-testing", "cross-validation", "model-evaluation" ]
609612
2
null
569790
0
null
There seems to be a lack of consistency in what $R^2$ should mean outside of the simple settings. The simplest case is simple linear regression, where the squared Pearson correlation between the feature and outcome equals the squared Pearson correlation between the true and predicted outcomes. These both equal the "sum of squares formula" you mention from `sklearn`: $1 - \left[\left(\overset{N}{\underset{i=1}{\sum}}\left( y_i-\hat y_i \right)^2 \right)\Bigg/\left( \overset{N}{\underset{i=1}{\sum}}\left( y_i-\bar y \right)^2 \right) \right]$. All three of these turn out to be equal to the proportion of variance in $y$ explained by the regression. Thus, in the simple case, there are four notions of what $R^2$ means. Moving to a more general setting, there might be more than one feature, so the correlation between the feature and outcome no longer makes sense. However, all three other notions can be calculated, and each has a legitimate claim to being called $R^2$. I would go with the "sum of squares" formula, since it has a nice connection with a comparison to a "must beat" model that makes sense to me, but all three notions can be defended. [I also like the connection this calculation has to the reduction in error rate that a classifier has](https://stats.stackexchange.com/a/605451/247274) (and the fact that the reduction in error rate I discuss at the link relates to a reasonable definition of the familiar $R^2$ statistic makes me like the reduction in error rate statistic all the more). The good news is that you're always allowed to define a statistic. If you have a reason to want to know the squared correlation between the true and predicted values, feel free to define and calculate such a statistic. If you have a reason to want to use the "sum of squares" formula like `sklearn` uses, define it and use it. 
If you want to modify the `sklearn` formula [in the way that makes sense to me for out-of-sample testing](https://stats.stackexchange.com/questions/590199/how-to-motivate-the-definition-of-r2-in-sklearn-metrics-r2-score), define the formula and use it. If you want to decompose the total sum of squares to get the $SSRes$, $SSReg$, and $Other$ term I discuss [here](https://stats.stackexchange.com/a/551916/247274) in order to discuss the proportion of the variance in $y$ that is explained by the model, define it and use that formula. But as far as there being a name that basically every statistician knows, like how I can write $\bar x$ without ambiguity, no, I do not sense that for $R^2$.
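To make the distinction concrete, here is a small sketch (my addition, assuming NumPy and scikit-learn are available): for an in-sample OLS fit the "sum of squares" $R^2$ and the squared correlation between $y$ and $\hat y$ coincide, but for arbitrary predictions they can diverge:

```python
import numpy as np
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 2 * x + rng.normal(size=200)

# In-sample OLS fit: the two notions of R^2 agree
b1, b0 = np.polyfit(x, y, 1)
y_hat = b0 + b1 * x
print(r2_score(y, y_hat), np.corrcoef(y, y_hat)[0, 1] ** 2)  # equal (to floating point)

# Shift every prediction by a constant: the squared correlation is unchanged,
# but the "sum of squares" R^2 drops because the shifted predictions are worse
y_shifted = y_hat + 5
print(r2_score(y, y_shifted), np.corrcoef(y, y_shifted)[0, 1] ** 2)
```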
null
CC BY-SA 4.0
null
2023-03-15T20:54:48.313
2023-03-15T20:54:48.313
null
null
247274
null
609613
1
609616
null
0
61
When I run the code posted at the bottom (using the `summary()` function of the R `survival` package), I get the output shown immediately below: [](https://i.stack.imgur.com/MAOFO.png) Some sources ([http://www.sthda.com/english/wiki/cox-proportional-hazards-model](http://www.sthda.com/english/wiki/cox-proportional-hazards-model)) state that the z-value is the "Wald statistic value" and continue: "It corresponds to the ratio of each regression coefficient to its standard error (z = coef/se(coef)). The Wald statistic evaluates whether the beta (β) coefficient of a given variable is statistically significantly different from 0. From the output above, we can conclude that the variable sex have highly statistically significant coefficients." On the other hand, other sources state: "The z-value is a standardized score that measures the number of standard deviations a parameter estimate is from its null hypothesis value. It is calculated by dividing the estimated coefficient by its standard error. The z-value is used to calculate p-values and to assess the statistical significance of the coefficient. The Wald statistic, on the other hand, is a measure of the overall significance of a variable in the Cox proportional hazards model. It is calculated by dividing the squared coefficient estimate by its estimated variance. The Wald statistic is used to test the null hypothesis that the coefficient of a variable is equal to zero, which indicates that the variable is not a significant predictor of the outcome. In summary, the z-value is used to assess the statistical significance of individual coefficients, while the Wald statistic is used to test the overall significance of a variable in the Cox proportional hazards model. Both are important measures in assessing the validity and usefulness of a Cox proportional hazards model, but they serve different purposes." Which, if either, description of the z-value and Wald statistic is correct?
Code: ``` library(survival) library(survminer) head(lung) res.cox <- coxph(Surv(time, status) ~ sex, data = lung) summary(res.cox) ```
What is the difference between z-value and the Wald statistic in the summary function of the Cox Proportional Hazards model of the “survival” package?
CC BY-SA 4.0
null
2023-03-15T21:04:26.693
2023-03-15T21:29:07.570
null
null
378347
[ "r", "survival", "cox-model", "wald-test" ]
609614
1
null
null
0
12
I have 4 experimental groups/conditions and 5 measurement times for each group/condition. Each participant only took part in one of the 4 conditions. In total there are 27 participants and each condition has around 6 participants. In my data set, several participants are missing a value during one of the 5 measurement times. So there are several cases (rows) where the participant does not have 5 measurement times but only 4, for example. The missing values are completely random and exist in all conditions. My problem is the following. Because SPSS does case-wise deletion, I end up with around 6 fewer cases, which severely impacts my results. What I am wondering is if it is feasible to impute the missing data, using a regression for example. And how many values can I impute before it is no longer feasible? Many thanks!
Imputing missing values for a RM-ANOVA in SPSS (how to work when there is case-wise deletion?)
CC BY-SA 4.0
null
2023-03-15T21:08:14.893
2023-03-15T21:14:25.770
2023-03-15T21:14:25.770
56940
383329
[ "repeated-measures", "spss", "missing-data", "multiple-imputation" ]