Fields per record: Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags.
610992
1
null
null
0
27
In general, I deal with imbalanced datasets in multiclass classification problems. Now I'm facing a multiclass classification problem with balanced data. In this context, are macro precision, recall, and f-measure more informative than accuracy?
Are precision, recall and f-measure more informative than accuracy in multiclass classification with balanced data?
CC BY-SA 4.0
null
2023-03-28T12:23:31.900
2023-03-28T12:23:31.900
null
null
219084
[ "machine-learning", "classification" ]
610993
1
611080
null
4
90
I have not been able to find satisfactory explanations for the logic behind Tukey's HSD test. However, the resources I have been able to find all concern its use. It's supposed to make multiple pairwise comparisons of means and identify those pairs which are significantly not equal. Everything seems to hinge on the Studentized range distribution. Wikipedia says: > Suppose that we take a sample of size n from each of k populations with the same [emphasis mine] normal distribution $N(\mu,\sigma^2)$ and suppose that $y_{\min}$ is the smallest of these sample means and $y_{\max}$ is the largest of these sample means, and suppose $s^2$ is the pooled sample variance from these samples. Then the following statistic has a Studentized range distribution. $$ q=\frac{{\overline {y}}_{\max }-{\overline {y}}_{\min }}{s/\sqrt {n}} $$ Accepting this, it would mean that, assuming all groups have identical true variance and mean, we should expect all pairwise mean differences to be smaller (in absolute value) than a certain value (which depends on our desired type 1 error significance level). However, my conclusion from this is that, in practice, if I observe that two groups have significantly different means, then the assumption that all $k$ populations have the same mean must be false. I struggle to understand how one can conclude that that specific pair of means is different. It would make more sense to me if, in the result from Wikipedia, the assumptions were relaxed so that each group could have its own mean, and the result still held.
The theory behind Tukey's HSD test.
CC BY-SA 4.0
null
2023-03-26T12:12:18.547
2023-03-29T08:51:25.563
2023-03-28T22:25:52.577
11887
371599
[ "hypothesis-testing", "anova", "tukey-hsd-test" ]
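A quick Monte Carlo sketch of the multiplicity issue the question circles around: when all $k$ groups share the same normal distribution, judging each pairwise difference by a naive per-pair 5% criterion flags some pair far more often than 5% of the time, which is why Tukey's procedure calibrates against the range (max minus min) of the means instead. All constants below are my own choices, not from the post.

```python
import itertools
import math
import random

random.seed(1)
k, n, sigma = 5, 20, 1.0                 # k groups, n observations each, known sd
crit = 1.96 * sigma * math.sqrt(2 / n)   # naive per-pair 5% critical difference
reps, hits = 10_000, 0
for _ in range(reps):
    means = [sum(random.gauss(0, sigma) for _ in range(n)) / n
             for _ in range(k)]
    # "some pair looks different" is the same event as "range of means > crit"
    if any(abs(a - b) > crit for a, b in itertools.combinations(means, 2)):
        hits += 1
print(hits / reps)  # well above 0.05
```

The familywise rate is governed by the distribution of the range of the $k$ sample means, which is exactly what the studentized range statistic $q$ captures.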
610994
2
null
610797
0
null
What is described sounds equivalent to flipping for outcome F first and A last and stopping at the first heads. More generally, for $n$ outcomes, the probability of heads for outcome $n$ should be set to probability $p_n=\frac{1}{n}$. Outcome $n-1$ is selected only if the flip for outcome $n$ results in tails and the flip for outcome $n-1$ results in heads: $$p_{n-1}(1-p_n)=\frac{1}{n}$$ Solve for $p_{n-1}$: $$p_{n-1}=\frac{1}{n(1-p_n)}=\frac{1}{n(1-1/n)}=\frac{1}{n-1}$$ Outcome $n-2$ is selected only if the flips for outcomes $n$ and $n-1$ both result in tails and the flip for outcome $n-2$ results in heads: $$p_{n-2}(1-p_n)(1-p_{n-1})=\frac{1}{n}$$ $$p_{n-2}=\frac{1}{n-2}$$ We can see a pattern emerging. The probability of heads for the $i^{\text{th}}$ outcome should be set to $i^{-1}$. The probability of getting outcome $n-k$ in the end is $$p_{n-k}\prod_{i=0}^{k-1}(1-p_{n-i})=\frac{1}{n-k}\prod_{i=0}^{k-1}\frac{n-i-1}{n-i}=\frac{1}{n}$$ Checking with a quick simulation in R: ``` n <- 6L tabulate(max.col(t(matrix(runif(n*1e6L), n, 1e6) < 1/(1:n)), "last"), n)/1e6 #> [1] 0.165888 0.166987 0.166336 0.167132 0.166237 0.167420 ```
null
CC BY-SA 4.0
null
2023-03-28T12:41:22.483
2023-03-28T12:41:22.483
null
null
214015
null
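The sequential-flip scheme in the answer above is easy to check in Python as well (the helper name `pick_uniform` is mine): flip for outcome $n$ first with heads probability $1/n$, then $n-1$ with $1/(n-1)$, and so on, so outcome 1 is certain if reached.

```python
import random

def pick_uniform(n, rng=random):
    # First heads selects the outcome; heads probability for outcome i is 1/i.
    for i in range(n, 0, -1):
        if rng.random() < 1.0 / i:
            return i

random.seed(0)
n, trials = 6, 100_000
counts = [0] * n
for _ in range(trials):
    counts[pick_uniform(n) - 1] += 1
print([c / trials for c in counts])  # each entry close to 1/6
```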
610995
2
null
610965
6
null
Check the following post: [Kullback Leibler divergence between two multivariate t distributions](https://rpubs.com/FJRubio/DKLt) The authors reduce the dimension of the integral from $p$ (in your notation) to 1, which seems to be easier to handle. The post contains numerical examples in R. [](https://i.stack.imgur.com/P6uN7.png) Edit. As mentioned by @utobi in a comment, `it is not a closed-form solution since it still involves univariate numerical integration. However, it's better than a vanilla multivariate integration.` Indeed, this solution is a more "numerically tractable" alternative to multivariate integration, but not a closed-form one.
null
CC BY-SA 4.0
null
2023-03-28T12:46:27.147
2023-03-28T13:33:50.290
2023-03-28T13:33:50.290
384329
384329
null
610996
2
null
577873
0
null
Just to add to the other answers, a relatively flexible method for non-parametric multi-way anova is aligned ranks transformation anova (ART anova). At least in the implementation in R, it can handle mixed effects and has methods for post-hoc analysis. It has its limitations, so it's important to read up on the background and documentation.
null
CC BY-SA 4.0
null
2023-03-28T12:58:02.260
2023-03-28T12:58:02.260
null
null
166526
null
610997
2
null
610991
2
null
Given a finite time series $\{y_1, \ldots, y_n\}$, the usual definition for the sample autocovariance at lag $k$ is $$ \hat{\gamma}_k = \frac{1}{n} \sum_{t=k+1}^n (y_t - \bar{y})(y_{t-k} - \bar{y})\,. $$ Notice that we're dividing by $n$ even though there are only $n-k$ terms in the sum. The sample autocorrelation at lag $k$ is then $\hat{\rho}_k =\hat{\gamma}_k /\hat{\gamma}_0$, so the $1/n$ cancels out and you obtain the formula in your post. The `stats::acf` function in R uses the same convention. If in your definition of $\hat{\gamma}_k$ you divide by $n-k$ instead of $n$ (which is what `pandas.Series.autocorr` does), you get $\hat{\rho}_1=1$ instead. The difference is negligible for long time series (large $n$), but not for small $n$.
null
CC BY-SA 4.0
null
2023-03-28T13:05:08.127
2023-03-29T04:09:04.470
2023-03-29T04:09:04.470
238285
238285
null
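The two denominator conventions contrasted in the answer can be made concrete with a small sketch (the function name is mine, and this is a simplification: `pandas.Series.autocorr` actually computes a Pearson correlation of the series with its shifted self, but the divide-by-$n$ versus divide-by-$(n-k)$ contrast is the essential point):

```python
def acf1(y, denom_n=True):
    # Lag-1 sample autocorrelation. denom_n=True divides gamma_1 by n
    # (the stats::acf convention); False divides by n - 1.
    n = len(y)
    ybar = sum(y) / n
    g0 = sum((v - ybar) ** 2 for v in y) / n
    num = sum((y[t] - ybar) * (y[t - 1] - ybar) for t in range(1, n))
    g1 = num / n if denom_n else num / (n - 1)
    return g1 / g0

y = [1.0, 2.0, 4.0, 3.0, 5.0, 4.0, 6.0]
r_n, r_nk = acf1(y, True), acf1(y, False)
print(r_n, r_nk)  # r_nk = r_n * n/(n-1): close for large n, visibly apart at n = 7
```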
610998
1
null
null
1
131
The following objective is taken from the paper ['Training language models to follow instructions with human feedback'](https://arxiv.org/abs/2203.02155):[](https://i.stack.imgur.com/z1shB.png)which is used to fine-tune the pre-trained language model using Proximal Policy Optimization (PPO). In the [original paper](https://arxiv.org/abs/1707.06347), the objective of PPO is as follows: [](https://i.stack.imgur.com/DgBB1.png)comparing the two objectives we can see the term with beta in equation 2 must be the KL term in equation 5. Now there was a previous [question](https://stats.stackexchange.com/posts/606769) here. NO ML/RL is really needed. $\pi_\phi^{\mathrm{RL}}= \pi_\theta\left(a_t \mid s_t\right)$ and $\pi_\phi^{\mathrm{SFT}}= \pi_{\theta old}\left(a_t \mid s_t\right)$ --- My question is different as I don't quite understand how the KL terms are equivalent. As If we take the InstructGPT objective and isolate the KL part we have $E_{(x, y) \sim D_{\pi_\phi^{\mathrm{RL}}}}\left[-\beta \log \left(\pi_\phi^{\mathrm{RL}}(y \mid x) / \pi^{\mathrm{SFT}}(y \mid x)\right)\right]$ = $-\beta E_{(x, y) \sim D_{\pi_\phi^{\mathrm{RL}}}}\left[\log \left(\pi_\phi^{\mathrm{RL}}(y \mid x) / \pi^{\mathrm{SFT}}(y \mid x)\right)\right]$ = $-\beta \mathrm{KL}(\pi_\phi^{\mathrm{RL}} | \pi^{\mathrm{SFT}})$ While the term in the PPO equation is effectively $-\beta \mathrm{KL}(\pi^{\mathrm{SFT}} |\pi_\phi^{\mathrm{RL}} )$ ?
Understanding Objective in OpenAI InstructGPT paper?
CC BY-SA 4.0
null
2023-03-28T13:17:58.660
2023-03-28T13:17:58.660
null
null
291320
[ "machine-learning", "probability", "sampling", "reinforcement-learning", "kullback-leibler" ]
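Part of what the question is probing is that KL divergence is not symmetric, so $\mathrm{KL}(\pi^{\mathrm{RL}}\|\pi^{\mathrm{SFT}})$ and $\mathrm{KL}(\pi^{\mathrm{SFT}}\|\pi^{\mathrm{RL}})$ are genuinely different quantities. A toy discrete check (the two distributions are made up):

```python
import math

def kl(p, q):
    # Discrete KL divergence KL(p || q); assumes matching support.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

p = [0.7, 0.2, 0.1]
q = [0.5, 0.3, 0.2]
print(kl(p, q), kl(q, p))  # two different positive numbers
```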
610999
2
null
610922
2
null
[](https://i.stack.imgur.com/XRYw2.png) The mean of the stratified estimator is $$\frac{1}{N}\sum_i \mathbb{E}[h(X_i)] = \frac{1}{N}\sum_i \int_{A_i}h(x_i)Nf(x_i)\text dx_i=\int_{\cup_iA_i}h(x)f(x)\text dx=\mathbb{E}[h(X)]$$ The variance of the stratified estimator is $$\frac{1}{N^2}\sum_i \text{var}[h(X_i)]$$ and also $$\frac{1}{N^2}\sum_i \mathbb{E}[h(X_i)^2] + \frac{1}{N^2}\sum_{i\ne j} \mathbb{E}[h(X_i)]\mathbb{E}[h(X_j)] -\frac{1}{N} \mathbb{E}[h(X)]^2\\ =\frac{1}{N^2}\sum_i \mathbb{E}[h(X_i)^2] + \frac{1}{N}\sum_{i} \mathbb{E}[h(X_i)] \mathbb{E}[h(X)]\\ - \frac{1}{N^2}\sum_{i} \mathbb{E}[h(X_i)]^2 -\frac{1}{N} \mathbb{E}[h(X)]^2\\ =\frac{1}{N^2}\sum_i \mathbb{E}[h(X_i)^2] + \frac{N-1}{N}\mathbb{E}[h(X)]^2- \frac{1}{N^2}\sum_{i} \mathbb{E}[h(X_i)]^2 $$ While the first term can be (unbiasedly) estimated by a single stratified sample, both other terms require two single stratified samples to produce an unbiased estimate, as requested in the above 1973 JASA paper. (If the strata are regular enough to enjoy symmetry a second sample could be deduced from the first one by applying this (or these) symmetry move(s) but I am unsure a reliable variance could be deduced because of the symmetry and hence dependence between both samples.)
null
CC BY-SA 4.0
null
2023-03-28T13:22:07.257
2023-03-29T14:02:48.937
2023-03-29T14:02:48.937
7224
7224
null
611001
1
null
null
0
47
I'm using accuracy together with macro and weighted averages of precision, recall and F-measure to evaluate my model in a multiclass problem with an imbalanced dataset. However, I noticed that the weighted F1, precision, and recall are always very similar to accuracy in this context. Is this common? Is it good practice to use only accuracy and macro averages, without weighted averages? Are there scenarios where using weighted averages of precision, recall and F-measure is justified?
Why are weighted F1-measure, precision and recall always very similar to accuracy in my problem?
CC BY-SA 4.0
null
2023-03-28T13:48:43.977
2023-03-28T13:48:43.977
null
null
219084
[ "machine-learning", "classification", "model-evaluation", "metric" ]
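One concrete reason the weighted metrics shadow accuracy: support-weighted recall is not merely similar to accuracy, it is identically equal to it, since $\sum_c \frac{n_c}{N}\cdot\frac{TP_c}{n_c} = \frac{\sum_c TP_c}{N}$. A toy check (the labels are made up):

```python
from collections import Counter

# Support-weighted recall equals accuracy exactly:
# sum_c (n_c/N) * (TP_c/n_c) = (sum_c TP_c)/N.
y_true = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'c']
y_pred = ['a', 'a', 'b', 'a', 'b', 'b', 'a', 'c', 'c', 'b']

n = len(y_true)
support = Counter(y_true)
recall = {c: sum(yt == yp == c for yt, yp in zip(y_true, y_pred)) / support[c]
          for c in support}
weighted_recall = sum(support[c] / n * recall[c] for c in support)
accuracy = sum(yt == yp for yt, yp in zip(y_true, y_pred)) / n
print(weighted_recall, accuracy)  # identical (up to float rounding)
```

Weighted precision and F1 lack this exact identity but are pulled toward accuracy by the same support weighting, which is consistent with what the question observes.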
611002
2
null
610978
2
null
You can always construct a locally unbiased estimator at a point $\varphi$ from an arbitrary (non-constant) estimator by shifting and scaling. Suppose that $\tau(x)$ is some estimator, then define $\hat \theta(x)= \varphi + \alpha( \tau(x) - E_\varphi[\tau(x)])$. Clearly $E_\varphi[\hat\theta(x)] = \varphi$, and $\partial_\theta E_\theta[\hat \theta(x)]_{\theta=\varphi} =\alpha \partial_\theta E_\theta[\tau(x)]_{\theta=\varphi}$. So by choosing $\alpha=1/\partial_\theta E_\theta[\tau(x)]_{\theta=\varphi}$ the second condition is satisfied (assuming the derivative of the expectation at $\varphi$ is not zero). Since a locally unbiased estimator always exists, the question is therefore just whether there are examples where a globally unbiased estimator does not exist. You can probably find many of those, as in [this](https://math.stackexchange.com/questions/681638/for-the-binomial-distribution-why-does-no-unbiased-estimator-exist-for-1-p) example.
null
CC BY-SA 4.0
null
2023-03-28T13:55:22.793
2023-03-28T13:55:22.793
null
null
348492
null
611003
1
null
null
2
23
I am working through equation 22 in [Introduction to Boltzmann Machines](http://cms.dm.uba.ar/academico/materias/1ercuat2018/probabilidades_y_estadistica_C/5a89b5075af5cbef5becaf419457cdd77cc9.pdf) I am a little confused with the notation, in particular in the line: [](https://i.stack.imgur.com/7ULqX.png) As I understand it, we want the probability of a specific state of the visible nodes, $v = \{v_1, v_2, ... v_m\}$ where $v_j \in \{0, 1\}$. We are marginalising over all possible states of the hidden nodes, $H$, where $h \in H$ and $h = \{h_1, h_2, ..., h_n\}, h_i \in \{0, 1\}$. For me, this then makes sense if, in the above notation, the sum $\sum_{h_n}$ over $h_n$ (i.e., a single hidden node) means to sum over every possible state of $h_n$, i.e. $h_n = 1$ and $h_n = 0$. Is this correct? And is this equivalent to $\sum_h e^{-E(v, h)}$ because we are summing the probability of $v$ over every possible arrangement of $h$ (of which there are $2^{|h|}$ combinations)?
Notation for calculating $p(v)$ marginalising over $h$ in a Restricted Boltzmann Machine
CC BY-SA 4.0
null
2023-03-28T13:55:50.330
2023-04-02T13:56:02.777
null
null
336682
[ "notation", "marginal-distribution", "restricted-boltzmann-machine" ]
611005
1
null
null
0
13
I have the following data columns: `output`, `year`, `data1`, `data2` and `data3` (5 columns in total). I have data for the years 1990-2023 but want to project `output` to 2030. How do I do that in practice mathematically, now that I have the coefficients `b0, b1, b2, b3` and `b4` (found using Excel's LINEST command)? I was thinking to set `year` to 2030, assume values for `data1`, `data2` and `data3` somehow, and get the value of `output`. Is it possible to mathematically formulate: "assuming a linear trend for each of `data1`, `data2` and `data3`, the `output` at `year=2030` will be X"? That is, I was thinking to fit a straight line to each of the pairs `year / dataX`, evaluate it at year=2030, and insert those values into the linear model with the known coefficients `b0, b1, b2, b3` and `b4`. These are the other values: ``` R^2 0.8889896895 F statistic 94.09602407 ESS 30745.5208 STEYX 9.038061554 DFD 47 RSS 3839.268163 ```
Multivariate linear regression project into the future
CC BY-SA 4.0
null
2023-03-28T14:00:02.763
2023-03-28T14:00:02.763
null
null
384325
[ "regression", "multiple-regression", "linear-model" ]
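The two-step idea in the question can be sketched as follows (everything here is a placeholder: the trend fit is an ordinary least-squares line per predictor, and the coefficients `b0`..`b4` stand in for the LINEST output):

```python
def linfit(xs, ys):
    # Ordinary least-squares intercept and slope for y ~ a + b*x.
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    return ybar - b * xbar, b

years = list(range(1990, 2024))
data1 = [0.5 * t - 900.0 for t in years]        # stand-in predictor series
a1, s1 = linfit(years, data1)
data1_2030 = a1 + s1 * 2030                     # extrapolated predictor

b0, b1, b2, b3, b4 = 1.0, 0.01, 2.0, 0.0, 0.0   # placeholder coefficients
output_2030 = b0 + b1 * 2030 + b2 * data1_2030  # + b3*data2_2030 + b4*data3_2030
print(output_2030)
```

The same extrapolation step would be repeated for `data2` and `data3`; note that this conditions the forecast on the linear-trend assumption for each predictor, which should be stated alongside the result.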
611006
1
null
null
1
37
I want to compare two models which have different numbers of parameters. The first model is the arbitrage-free Nelson-Siegel model, which has the following equation: $y_{t}(\tau )=X_{1,t}+X_{2,t}(\frac{1-e^{-\lambda \tau }}{\lambda \tau })+X_{3,t}(\frac{1-e^{-\lambda \tau }}{\lambda \tau }-e^{-\lambda \tau })$, where the state variable $X_t$ is assumed to be a Markov process that solves the following stochastic differential equation: $dX_t=\kappa (\theta - X_t)dt+ \Sigma dW_t$; more details here [page 4](https://www.frbsf.org/wp-content/uploads/sites/4/wp07-20bk.pdf). The second model is the generalized AF Nelson-Siegel model, which has the following equation: $y_{t}(\tau )=X_{1,t}+X_{2,t}(\frac{1-e^{-\lambda_1 \tau }}{\lambda_1 \tau }) +X_{3,t}(\frac{1-e^{-\lambda_2 \tau }}{\lambda_2 \tau })+X_{4,t}(\frac{1-e^{-\lambda_1 \tau }}{\lambda_1 \tau }-e^{-\lambda_1 \tau })+ X_{5,t}(\frac{1-e^{-\lambda_2 \tau }}{\lambda_2 \tau }-e^{-\lambda_2 \tau })$, where the state variable $X_t$ is also assumed to be a Markov process. I used the Kalman filter to estimate the parameters of both models. I can compare the two models by mean error or RMSE, but this comparison does not account for the stochastic part of the model. Is there any other method which would include the stochastic part too? My idea was to use the AIC and BIC criteria: AIC $=-2\log L(\Psi )+2n$, where $n$ denotes the number of parameters and $\log L(\Psi )$ denotes the logarithm of the likelihood function evaluated at the ML estimator $\Psi$; BIC $=-2\log L(\Psi )+n \log T$. Can AIC and BIC be used for comparison in this case?
Comparison of two models with different numbers of parameters
CC BY-SA 4.0
null
2023-03-28T14:03:47.307
2023-03-28T14:03:47.307
null
null
384330
[ "stochastic-processes", "kalman-filter", "state-space-models", "numerics", "stochastic-calculus" ]
611007
1
null
null
2
17
In Time Series Analysis there is this idea of choosing the distribution of the starting point of a time series in a way such that the time series is stationary. Let for example $\{X_t\}_{t=0}^T$ be a time series following an AR(1) process ($X_t = \phi X_{t-1} + \varepsilon_t, \ \varepsilon_t \sim wn(0, \sigma^2)$). The Wold(-like) decomposition looks like this: \begin{align} X_t = \phi^t X_0 + \sum_{i = 0}^{t-1} \phi^{i} \varepsilon_{t-i} \end{align} Relevant for the stationarity is e.g. the variance, which is given as \begin{align} Var(X_t) = \phi^{2t} Var(X_0) + \sigma^2 \sum_{i = 0}^{t-1} \phi^{2i} = \phi^{2t} Var(X_0) + \sigma^2 \frac{1 - \phi^{2t}}{1 - \phi^2}. \end{align} Now in order for $X_t$ to be stationary, we have to choose $X_0$ in such a way that \begin{align} Var(X_0) = \sigma^2 \sum_{j = 0}^{\infty} \phi^{2j} = \frac{\sigma^2}{1-\phi^2}, \end{align} because then we have $Var(X_t) = \phi^{2t}\frac{\sigma^2}{1-\phi^2} + \sigma^2 \frac{1 - \phi^{2t}}{1 - \phi^2} = \frac{\sigma^2}{1-\phi^2}$, which is independent of $t$. My question is: what is the technical term for choosing $X_0$ in such a way? I only know the German term, "eingeschwungene Lösung" (roughly, a "settled-in" or steady-state solution).
Technical Term for choosing distribution of starting point of a time series in a way such that the time series is stationary
CC BY-SA 4.0
null
2023-03-28T14:13:46.093
2023-03-28T14:25:59.267
2023-03-28T14:16:05.133
1352
384122
[ "time-series", "terminology", "stationarity", "autoregressive" ]
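The effect of this choice of starting distribution is easy to verify by simulation (a sketch with my own parameter values): drawing $X_0 \sim N(0, \sigma^2/(1-\phi^2))$ keeps $Var(X_t)$ constant in $t$.

```python
import random

random.seed(0)
phi, sigma, t_max, reps = 0.8, 1.0, 5, 100_000
v0 = sigma ** 2 / (1 - phi ** 2)            # stationary variance
finals = []
for _ in range(reps):
    x = random.gauss(0, v0 ** 0.5)          # start from the stationary law
    for _ in range(t_max):
        x = phi * x + random.gauss(0, sigma)
    finals.append(x)
mean = sum(finals) / reps
var = sum((x - mean) ** 2 for x in finals) / reps
print(var, v0)  # both close to sigma^2/(1-phi^2)
```

Starting instead from a fixed $X_0 = 0$ would give a variance that grows toward $\sigma^2/(1-\phi^2)$ with $t$ rather than sitting there from the outset.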
611008
2
null
610911
4
null
Converting a 2D density $f_{r,\theta}$ from polar coordinates $(r,\theta)$ to Cartesian coordinates $(x,y)$ where $r(x,y)^2 = x^2+y^2$ and $\tan{\theta(x,y)}=y/x$ turns out to be simple, because the reference (Lebesgue) measure merely changes from $r\,\mathrm dr\,\mathrm d\theta$ to $\mathrm dx\,\mathrm dy.$ Therefore the probability element can be re-expressed as > $$f_{r,\theta}(r,\theta)\mathrm dr\,\mathrm d\theta = \frac{f_{r,\theta}(r,\theta)}{r} r\,\mathrm dr\,\mathrm d\theta = \frac{f_{r,\theta}(r(x,y),\theta(x,y))}{r(x,y)}\mathrm dx\,\mathrm dy.\tag{*}$$ This formula says to - Divide the density by $r$ and then - Express $r$ and $\theta$ in Cartesian coordinates. The function `f` below implements this formula for general radial densities $f_{r,\theta}.$ In the question the density is the product of the chi-squared$(k)$ density for $r$ and the Uniform$(0,2\pi)$ density for $\theta,$ giving $$f_{r,\theta}(r,\theta) = \frac{1}{2^{k/2}\Gamma(k/2)} r^{k-1}e^{-r/2}\times\frac{1}{2\pi}$$ and plugging that into $(*)$ produces $$f_{x,y}(x,y) = \frac{1}{\pi\,2^{k/2+1}\,\Gamma(k/2)} r^{k-2}e^{-r/2} = \frac{ \left(x^2+y^2\right)^{k/2-1}e^{-\sqrt{x^2+y^2}/2}}{\pi\,2^{k/2+1}\,\Gamma(k/2)}.$$ --- As an example (with $k=20$), I used `R` to generate 100,000 values of $(r,\theta),$ converted those to $(x,y),$ and computed a kernel density estimate. ``` library(MASS) # Generate (r, theta). k <- 20 n <- 1e5 r <- rchisq(n, k) theta <- runif(n, 0, 2*pi) # Convert to (x, y). x <- r * cos(theta) y <- r * sin(theta) # Compute and plot a density (omitting some outliers). q <- qchisq(sqrt(0.99), k) den <- kde2d(x, y, lims = q * c(-1, 1, -1, 1), n = 50) image(den, asp = 1, bty = "n") ``` Here's an image showing your "hole:" [](https://i.stack.imgur.com/imVsz.png) This `R` function performs steps (1) and (2) with an arbitrary density function `df` for the radial density (and a uniform density for the angle): ``` f <- function(x, y, df, ...) { r <- outer(x, y, \(x,y) sqrt(x^2 + y^2)) # Step 2: r(x,y) df(r, ...) / r * 1 / (2 * pi) # Step 1: Formula (*) } ``` Its inputs are vectors `x` and `y`. Its output is a matrix of densities at all ordered pairs from `x` and `y`. Using this function, let's compare the empirical density (shown in the image) to the calculated density using $(*)$ (implemented as `dc`): ``` z.hat <- with(den, f(x, y, dc, k = k)) A <- with(den, mean(diff(x)) * mean(diff(y))) with(den, plot(sqrt(z.hat), sqrt(z), col = gray(0, .25), ylab = "Root Empirical Density", xlab = "Root Calculated Density")) abline(0:1, col = "Red", lwd = 2) ``` [](https://i.stack.imgur.com/1I9uR.png) The agreement is excellent. (I use root scales to achieve a homoscedastic response, making the variation around the 1:1 reference line the same at all locations.)
null
CC BY-SA 4.0
null
2023-03-28T14:19:58.093
2023-03-28T14:19:58.093
null
null
919
null
611009
2
null
611007
2
null
> I only know the german term which is "eingeschwungene Lösung". In English one often uses the [stationary distribution](https://en.wikipedia.org/wiki/Stationary_distribution). It may also refer to [limiting distribution](https://en.wikipedia.org/wiki/Limit_of_distributions) and possibly in other contexts it could be used as [steady state](https://en.wikipedia.org/wiki/Steady_state), but that is another (non-statistics) story.
null
CC BY-SA 4.0
null
2023-03-28T14:23:40.063
2023-03-28T14:25:59.267
2023-03-28T14:25:59.267
164061
164061
null
611010
1
null
null
0
21
The analysis I am running is a 2(Group; G) x 6(Time; T) x 2(Study Variable; SV) ANOVA. I preset SPSS to report estimated marginal means for all IVs and interactions, so it does the pairwise comparisons automatically. The 3-way interaction was not significant. However, based on our a priori hypothesis, we still proceeded to split up Group and follow up the analysis by conducting two 2-way ANOVAs. We do have a significant 2-way interaction of G x SV in the 3-way ANOVA. In the 2-way ANOVAs, this interaction becomes the main effect of SV. I mainly have two questions: - If we did not have an a priori hypothesis, is a significant 2-way interaction enough to warrant following up a 3-way ANOVA with 2-way ANOVAs? Or could we only follow it up if the 3-way interaction was significant? - For our analysis, should I report the pairwise comparisons for the G x SV interaction in the 3-way ANOVA, or the pairwise comparisons for the main effect of SV in the 2-way ANOVA? The SPSS tables for both pairwise comparisons looked identical, except that, for the 2-way ANOVA, the standard errors and p values were much smaller. [](https://i.stack.imgur.com/MaK6y.png)
Should I look at the pairwise comparisons of a significant 2-way interaction in a 3-way ANOVA?
CC BY-SA 4.0
null
2023-03-28T14:28:41.830
2023-03-28T14:28:41.830
null
null
371095
[ "anova", "multiple-comparisons" ]
611011
1
null
null
1
15
The question is basically the title. I have a matrix $Q$ that I know is positive semi-definite. I now want to find the $Y$ that approximates this matrix under some kernel function $f(y_i, y_j)$. I know that if the function $f$ is invertible then I can find the Gram matrix by applying $f^{-1}$ to each element of Q and then do multi-dimensional scaling... but this seems like a sloppy way to do it. Is there a cleaner method? Is there existing literature on this?
Given a psd matrix $Q$ and a kernel function $f(y_i, y_j)$, how do I find $Y \in \mathbb{R}^{n \times d}$ that best approximates $Q$?
CC BY-SA 4.0
null
2023-03-28T14:30:46.077
2023-03-28T14:30:46.077
null
null
173038
[ "pca", "kernel-trick", "multidimensional-scaling" ]
611012
2
null
593107
0
null
If you want to know which dose provoked the largest response, you don't need any statistical analysis. Just look at the data or a graph of the data. If you want help with choosing a statistical test, you'll need to articulate a question that can be answered by statistical calculations.
null
CC BY-SA 4.0
null
2023-03-28T14:33:32.853
2023-03-28T14:33:32.853
null
null
25
null
611013
1
null
null
0
38
Consider the multivariate regression $Y = (\hat{y}_1, \hat{y}_2) = f(x_1,\ldots,x_n, y_1/y_2) = f(X)$; as we see, the ratio of outputs $\frac{y_1}{y_2}$ is among the inputs $X$, thus a trained model should approximately reproduce this ratio, i.e. $\hat{y}_1/\hat{y}_2\approx y_1/y_2$. Indeed, for several regression algorithms applied, the variable $r = \hat{y}_1/\hat{y}_2 - y_1/y_2$ generically has a roughly normal distribution; in particular $\mu(r)=0.01$, $\sigma(r)=0.16$ for Gradient Boosting regression. I'm interested in the possibility of forcing a model to reproduce the input value $y_1/y_2$ more closely (or even strictly, as a constraint), i.e. for input $(x_1,\ldots,x_n, y_1/y_2)$ we want to reduce $\sigma(r)$ (or to hold $\hat{y}_1/\hat{y}_2 = y_1/y_2$ exactly). As I understand it, this falls under the theme of regression with (approximate or exact) constraints on dependent variables; here, though, the constraints are included in the input. I can think of 2 approaches: - Train several separate models $f_i$, each with subsets of data based on intervals $y_1/y_2\in(r_i-\alpha_i, r_i+\alpha_i)$. Then, by tweaking $r_i, \alpha_i$, each model will reproduce ratios with the needed precision. - Given input $(x_1,\ldots,x_n, y_1/y_2)$, build a "reverse" regression $\hat{x}_j = g(x_1,\ldots,x_n, y_1,y_2)$ for some chosen label $j$, and use constrained optimization to find a pair $y_1,y_2$ satisfying the given constraint on $y_1/y_2$ that shows up in the input, such that $\hat{x}_j=x_j$. The problem with the 1st approach is that we have to choose $r_i, \alpha_i$ beforehand; moreover, either we saturate the full range of $y_1/y_2$ (a lot of models) or we cover only small portions of this range with a couple of models and lose a lot of samples. The problem with the 2nd approach is also twofold: the reverse regression might be much worse than the initial one, and optimization can be pretty hard if the algorithm is e.g. tree-based (a lot of local plateaus). So I wonder whether there are more "generic" approaches to regression with "input" constraints on target variables.
Multivariate regression with constrained target
CC BY-SA 4.0
null
2023-03-28T14:38:21.580
2023-03-28T14:38:21.580
null
null
356198
[ "optimization", "multivariate-regression", "constrained-regression" ]
611014
1
null
null
0
15
Are there Bayesian alternatives to common statistical tests - the Shapiro–Wilk test, Student's t-test, Welch's t-test, etc.? By Bayesian I mean without p-values. References to the statistical literature are welcome.
Bayesian alternatives of statistical tests
CC BY-SA 4.0
null
2023-03-28T14:39:54.043
2023-03-28T14:39:54.043
null
null
377057
[ "hypothesis-testing", "bayesian", "t-test", "shapiro-wilk-test" ]
611015
2
null
610975
1
null
There are many ways this could be done, and which is best depends on the details of your problem that you didn't give us. - If you simply want to identify unique words per document or common words between documents, this means treating documents as sets of words and calculating set intersections (common words) or differences (unique words). Since a set can be implemented as a hash map, those operations are quite cheap computationally. - But are you sure you need to find the unique words? If you have a large enough collection of documents, it may well be the case that there are no unique words, or very few of them, so finding them would not make much sense. A much simpler solution might be to just code the word occurrences as one-hot vectors and match them by encoding the query and finding the document with the most similar encoding. If you care about the size of the data, this could be efficiently implemented with sparse vectors. - A more sophisticated approach could be to use a language model to encode the documents as latent vectors and calculate the similarities in terms of the latent vectors rather than the raw one-hot encodings. If you care about representing the documents in the most compressed form, this might be one way of doing so. - Finally, there are many out-of-the-box solutions available (like ElasticSearch) and starting with those is often a much better idea than re-inventing the wheel yourself.
null
CC BY-SA 4.0
null
2023-03-28T14:45:04.770
2023-03-28T14:51:25.030
2023-03-28T14:51:25.030
35989
35989
null
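For the first bullet in the answer (documents as sets of words), the whole mechanism fits in a few lines of Python; the toy documents are mine:

```python
# Set intersection gives common words; set difference gives unique words.
doc_a = set("the quick brown fox jumps".split())
doc_b = set("the lazy brown dog sleeps".split())
common = doc_a & doc_b    # words shared by both documents
unique_a = doc_a - doc_b  # words only in doc_a
print(sorted(common), sorted(unique_a))  # ['brown', 'the'] ['fox', 'jumps', 'quick']
```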
611016
1
611018
null
0
48
Let's say $y$ is known; is there any way to compute the number of trials $N$ such that $P(\theta_N<\theta_0|y)=0.95$? For the sake of the example, let's say the number of successes is $y=0$ and the threshold probability is $\theta_0=0.001$, with prior $Beta(1,500)$. One solution is to test all values of $n$ until `pbeta(0.001,1,500+n)` $\geq0.95$. After testing n=1...N, I get N=2495. But is there any way to calculate N directly, instead of testing all values of N?
How to get posterior number of trials using Beta-Binomial model?
CC BY-SA 4.0
null
2023-03-28T14:55:29.810
2023-03-28T15:40:30.150
2023-03-28T15:26:59.857
384332
384332
[ "r", "bayesian", "beta-binomial-distribution" ]
611017
2
null
587809
0
null
This is known as the Mark and Recapture problem, because the people who most often encounter it are ecologists estimating population size. There are many methods available, which you can find by searching for "Mark and Recapture".
null
CC BY-SA 4.0
null
2023-03-28T15:14:07.240
2023-03-28T15:14:07.240
null
null
188928
null
611018
2
null
611016
1
null
You could give more details in the question. I think you are pointing to `pbeta(0.001,1,500+2495)` being `0.95004`. Let's try to deal with this more generally, with a significance level $\alpha$, a prior $\text{Beta}(1,\beta_0)$, a binomial observation of $y=0$ and an assumed upper limit $\theta_0$ for the interval of integration. Then you want $$\int_0^{\theta_0} (\beta_0+N)(1-\theta)^{\beta_0+N-1}\,\mathrm{d}\theta\ge 1-\alpha$$ and the integration gives $1-(1-\theta_0)^{\beta_0+N} \ge 1-\alpha$ and so ${\beta_0+N} \ge \frac{\log\alpha }{\log(1-\theta_0)}$ - both logarithms are negative - and thus $$N \ge \frac{\log\alpha }{\log(1-\theta_0)} -\beta_0.$$ In your example $\beta_0=500, \alpha=0.05, \theta_0=0.001$ and you get $N\ge 2494.234$ as you found. The integration is made easier by having $\alpha_0=1$ and $y=0$.
null
CC BY-SA 4.0
null
2023-03-28T15:20:48.947
2023-03-28T15:40:30.150
2023-03-28T15:40:30.150
2958
2958
null
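The closed form from the answer can be checked directly against the brute-force `pbeta` search in the question (a short sketch):

```python
import math

# N >= log(alpha) / log(1 - theta0) - beta0, then round up to an integer.
alpha, theta0, beta0 = 0.05, 0.001, 500
N = math.log(alpha) / math.log(1 - theta0) - beta0
N_int = math.ceil(N)
# Posterior is Beta(1, beta0 + N) when y = 0, so the cdf has a closed form.
posterior_cdf = 1 - (1 - theta0) ** (beta0 + N_int)  # P(theta < theta0 | y = 0)
print(N, N_int, posterior_cdf)  # ≈ 2494.23, 2495, just above 0.95
```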
611019
1
null
null
1
32
Let's say I have the following strings and associated target variables: ``` PVADDHJ 98.58 LMIJLFPA 98.89 PNI 97.86 YZDYI 100.98 OXFBI 100.99 OPGWQJ 102.43 JDUKN 100.76 ZWDTXHZ 100.09 URJFIT 98.05 VWYWBWUIR 99.76 ``` Python code to generate this: ``` import numpy as np chars = list('ABCDEFGHIJKLMNOPQRSTUVWXYZ') n = 10 for i in range(n): np.random.seed(i) n_chars = np.random.randint(3,10) rand_chars = "".join(np.random.choice(chars, size=n_chars)) rand_target = np.random.normal(100,1) print(f"{rand_chars:15s} {rand_target:.2f}") ``` The above example is random, but let say I have an actual dataset where there is a relationship between the strings and the target variable. Using counts of the individual characters (or just a binary value for each character to indicate whether it's included), I want to predict the values of the target variable. One way to do this would be a linear regression with one hot encoding for each character and solve for the value of each variable plus a constant. If this were binary classification, I could use logistic regression with one hot encoding and I believe Naive Bayes would also work. However, I'm wondering if there is something similar to Naive Bayes that would work for regression in this case, maybe starting with an overall distribution of the target values, and then refining it based on which characters are in the string. I think this would lead to different predictions than the one hot encoding approach. Updating to provide some more context: I'm learning about sentiment analysis and I'm curious whether simple counts of certain words are sufficient over more advanced models. I'm trying to figure out how to treat situations that include words that are somewhat contradictory to each other with respect to the target value. The example above is just an attempt to illustrate the idea in a simple way.
Regression predictor from count of categorical variables?
CC BY-SA 4.0
null
2023-03-28T15:22:43.837
2023-03-28T17:53:26.187
2023-03-28T17:53:26.187
263656
263656
[ "regression", "bayesian", "python", "categorical-encoding" ]
611021
1
null
null
0
10
I have a doubt in the derivation expected predicted error in the Integration part. The link in given below, please refer. [Expected prediction error - derivation](https://stats.stackexchange.com/questions/92180/expected-prediction-error-derivation) \begin{align*} EPE(f) &= \int [y - f(x)]^2 Pr(dx, dy) \\ &= \int [y - f(x)]^2p(x,y)dxdy \\ &= \int_x \int_y [y - f(x)]^2p(x,y)dxdy \\ &= \int_x \int_y [y - f(x)]^2p(x)p(y|x)dxdy \hspace{1mm} \rightarrow \boxed{1} \\ &= \int_x\left( \int_y [y - f(x)]^2p(y|x)dy \right)p(x)dx \hspace{1mm} \rightarrow \boxed{2} \\ &= \int_x \left( E_{Y|X}([Y - f(X)]^2|X = x) \right) p(x)dx\\ &= E_{X}E_{Y|X}([Y - f(X)]^2| X = x) \end{align*} From equation 1 to 2, how can he rewrite $dxdy$ to $dydx$. while integrating w.r.t $x$ the limit is over $y$ and while integrating w.r.t $y$ the limit is over $x$ in the eq 1 But in eq 2 integrating w.r.t $x$ the limit is over $x$ and same for $y$. How? If it's true I need a help to understand this.
Derivation in integrating part of the expected prediction error (ESLR)
CC BY-SA 4.0
null
2023-03-28T15:30:20.467
2023-03-28T15:30:20.467
null
null
384336
[ "regression", "predictive-models", "error", "conditional-expectation" ]
611022
1
null
null
0
11
I have a time series of differences between the returns of a stock and the market (the value is already the difference; I do not have the raw values of the two). Is there a way in Python to calculate a Newey-West t-statistic with a lag of 4 for the time series?
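For reference, the statistic being asked about can be sketched in pure Python as follows. This is a minimal Bartlett-kernel implementation of the Newey-West variance for the mean of a series (testing the mean against zero); in practice one would likely reach for statsmodels' HAC covariance options instead, and small-sample conventions (degrees of freedom, kernel choice) vary across implementations.

```python
def autocov(x, lag):
    # Sample autocovariance at the given lag (divided by n, the usual convention).
    n = len(x)
    m = sum(x) / n
    return sum((x[t] - m) * (x[t - lag] - m) for t in range(lag, n)) / n

def newey_west_tstat(x, lags):
    # t-statistic for H0: mean = 0, with Newey-West (Bartlett-kernel)
    # long-run variance: gamma(0) + 2 * sum_l (1 - l/(lags+1)) * gamma(l).
    n = len(x)
    m = sum(x) / n
    lrv = autocov(x, 0)
    for l in range(1, lags + 1):
        lrv += 2 * (1 - l / (lags + 1)) * autocov(x, l)
    return m / (lrv / n) ** 0.5
```

With `lags=4` this mirrors the lag-4 setting mentioned in the question.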
Newey-West t-stat test for a time series of values
CC BY-SA 4.0
null
2023-03-28T15:33:02.837
2023-03-28T15:33:02.837
null
null
384337
[ "python", "neweywest" ]
611023
1
null
null
1
32
> Larry and Tony work for different companies. Larry's salary is the $90th$ percentile of the salaries in his company, and Tony's salary is the $70th$ percentile of the salaries in his company. > Which of the following statements individually provide(s) sufficient additional information to conclude that Larry's salary is higher than Tony's salary? Indicate all such statements. > The average (arithmetic mean) salary in Larry's company is higher than the average salary in Tony's company > The median salary in Larry's company is equal to the median salary in Tony's company > The $80th$ percentile in Larry's company is higher than the $70th$ percentile salary in Tony's company The hand-wavy logic I had was that the median is too far from the 80th/90th percentile to indicate anything. The average depends on all of the order statistics, so it tells us little about the 90th and 70th percentiles specifically, which may well differ from any other order statistic. What's a more rigorous way to reason through this problem?
Is my logic about comparing the 70th/90th percentile from two respective datasets correct or is there a proof to do this?
CC BY-SA 4.0
null
2023-03-28T15:43:18.480
2023-03-28T15:58:33.837
2023-03-28T15:58:33.837
null
null
[ "self-study", "mathematical-statistics", "median", "order-statistics", "percentage" ]
611025
1
611027
null
0
16
I am trying to perform model explainability for the best performing model using LIME for a classification problem. The y variable is whether a tumour is malignant or benign. Same question as ([https://stackoverflow.com/questions/75867990/using-lime-in-r-to-explain-the-best-performing-model](https://stackoverflow.com/questions/75867990/using-lime-in-r-to-explain-the-best-performing-model)) This is my attempt below: ``` models<-c("svmRadial","rf","knn") results_table <- data.frame(models = models, stringsAsFactors = F) for (i in models){ model_train <- train(class~., data = training, method = i, trControl= control, metric = "Accuracy") assign("fit", model_train) predictions <- predict(model_train, newdata = testing) table_mat<-table(testing$class, predictions) accuracy<-sum(diag(table_mat))/sum(table_mat) precision_ <-posPredValue(predictions, testing$class) recall_ <- sensitivity(predictions, testing$class) # put that in the results table results_table[results_table$models %in% i, "Accuracy"] <- accuracy results_table[results_table$models %in% i, "Precision"] <- precision_ results_table[results_table$models %in% i, "Recall"] <- recall_ } ``` For this I got the following results, which are on `results_table`: ``` Model Accuracy Precision Recall svmRadial 0.9588235 0.9814815 0.954955 rf 0.9705882 0.9732143 0.981982 knn 0.9705882 0.9732143 0.981982 ``` I have used LIME and it has worked (my attempt is below), but now I do not know which one of the models it is explaining. How do I know which model it is explaining? Is it explaining all three models or just the first model? ``` library(lime) explainer_caret <- lime(training, model_train) explanation <- explain(testing[15:20, ], explainer_caret, labels="malignant", n_permutations=5, dist_fun="manhattan", kernel_width = 3, n_features = 5) ``` [](https://i.stack.imgur.com/LIJ9B.png)
How to use LIME to explain the best performing model?
CC BY-SA 4.0
null
2023-03-28T16:15:11.370
2023-03-28T16:20:12.677
null
null
378400
[ "r", "machine-learning", "classification", "interpretation", "lime" ]
611026
2
null
610864
2
null
One way to think about the identity $$Pr[\hat H = P| H = Q] = \mathrm{Pr}[\mathbf x\in A | H = Q]$$ is to use the law of total probability to write that $$\mathrm{Pr}[\hat H = P | H = Q] = \mathbb E[\mathrm{Pr}[\hat H = P | H = Q, \mathbf X] | H = Q].$$ The fact that a test is non-randomized then implies that $\mathrm{Pr}[\hat H = P | H = Q, \mathbf X]$ is either 0 or 1, and in particular, is 1 if and only if $\mathbf X\in A$. In this case, you can simplify the RHS further to $$\mathbb E[\mathrm{Pr}[\hat H = P | H = Q, \mathbf X] | H = Q] = \mathbb E[1\{\mathbf X\in A\} | H = Q] = \mathrm{Pr}[\mathbf X \in A | H=Q] = Q^k(A).$$ With a randomized test, the only subtlety is that the inner conditional probability can be something other than 0 or 1. In that case, we have $$\mathbb E[\mathrm{Pr}[\hat H = P| H=Q,\mathbf X] | H = Q] = \mathbb E_{Q^k}[\pi(\mathbf X)].$$ I do not know of any way to further simplify this expression without additional assumptions about, e.g., what $\pi$ looks like or what $A$ looks like.
null
CC BY-SA 4.0
null
2023-03-28T16:16:17.870
2023-03-28T16:16:17.870
null
null
188356
null
611027
2
null
611025
0
null
It explains `model_train`. In your loop, you overwrite `model_train` three times (more precisely, you create it once and overwrite it twice). The last overwriting results in a model trained using `method="knn"`, and that is what is explained.
null
CC BY-SA 4.0
null
2023-03-28T16:20:12.677
2023-03-28T16:20:12.677
null
null
1352
null
611028
2
null
409498
0
null
To me, $R^2$-style measures compare how your model performs to the performance of a naïve baseline model. In the simple setting of linear regression, that baseline model for predicting conditional expected values is the marginal expected value, which is why the $\bar y$ appears in the denominator of this way of writing $R^2$. $$ R^2=1-\left(\dfrac{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\hat y_i \right)^2 }{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\bar y \right)^2 }\right) $$ The predictions from your model are the $\hat y_i$. The predictions from the baseline model are $\bar y$ every time. You have time-dependent data, so I would take a somewhat different stance. If you have only a short period of time before when you must make a forecast for a year in the future, you predict the mean from that short period of time. If you have a longer period of time on which you can base your forecast, your baseline model is the mean of that entire period of time. The baseline is dynamic, in that you keep updating it, but this does feel like it is within the spirit of my usual take on $R^2$. Since you want to mimic McFadden's $R^2$, you would use the log loss incurred by your model (numerator) and the log loss incurred by this dynamic baseline model (denominator). $$ R^2_{you} =1-\left(\dfrac{ \overset{N}{\underset{i=1}{\sum}}\left[ y_i\log(\hat y_{i,\text{model}}) + (1-y_i)\log(1-\hat y_{i,\text{model}}) \right] }{ \overset{N}{\underset{i=1}{\sum}}\left[ y_i\log(\hat y_{i,\text{dynamic baseline}}) + (1-y_i)\log(1-\hat y_{i,\text{dynamic baseline}}) \right] }\right) $$ This seems consistent with the article you cite. > In this way, the procedure mimics what a statistical model would have predicted with the information available at any point in the past. [This](https://stats.stackexchange.com/a/492581/247274) link discusses Campbell and Thompson (2008) that seems to use this same approach of using a dynamic baseline model. 
Even if they are not using McFadden's $R^2$, I think you can cite them as inspiration, and you have more evidence that people use this kind of dynamic $R^2$. Referring to this as "the" $R^2$ seems misleading, since the calculation really is different from the usual ways, but it certainly strikes me as within the spirit of the usual $R^2$ being interpreted as a comparison between model performance and the performance of a baseline model. REFERENCE Campbell, John Y., and Samuel B. Thompson. "Predicting excess stock returns out of sample: Can anything beat the historical average?." The Review of Financial Studies 21.4 (2008): 1509-1531.
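As a rough numerical sketch of this dynamic-baseline $R^2$ (hypothetical data; the expanding-mean baseline below is one reading of the procedure, with an arbitrary prior of 0.5 used before any outcomes are observed):

```python
import math

def log_loss(y, p):
    # Average Bernoulli negative log-likelihood, probabilities clipped away from 0/1.
    eps = 1e-12
    total = 0.0
    for yi, pi in zip(y, p):
        pi = min(max(pi, eps), 1 - eps)
        total += yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
    return -total / len(y)

def dynamic_baseline(y, prior=0.5):
    # Predict, at each step, the mean of all outcomes observed so far --
    # "what would have been predicted with the information available at the time".
    preds, seen, total = [], 0, 0
    for yi in y:
        preds.append(prior if total == 0 else seen / total)
        seen += yi
        total += 1
    return preds

def mcfadden_style_r2(y, model_probs):
    # 1 - (model log loss) / (dynamic-baseline log loss).
    return 1 - log_loss(y, model_probs) / log_loss(y, dynamic_baseline(y))
```

A model whose predictions coincide with the baseline scores exactly 0, and a model that beats the baseline scores above 0, mirroring the usual reading of $R^2$.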
null
CC BY-SA 4.0
null
2023-03-28T16:23:38.237
2023-03-28T16:36:02.827
2023-03-28T16:36:02.827
247274
247274
null
611029
1
null
null
0
16
I am performing simulations to obtain predicted yhat values based on fixed x-values and random coefficient values about their uncertainty. The matrix with random values of coefficients $(m \times p)$ is based on sandwiching together a random standard normal variate matrix ($m \times p$) and the square-root matrix of $\mathbf{V}(\boldsymbol{\beta})$ ($p \times p$) and then shifting by the means by adding the $p \times 1$ coefficient vector, $\boldsymbol{\beta}$, to each row using: $\mathbf{B} = \mathbf{Z} \mathbf{V}(\boldsymbol{\beta})^{1/2} + \mathbf{1}\boldsymbol{\beta}^\top$ However, I have noticed that the diagonal values of $\mathbf{V}(\boldsymbol{\beta})^{1/2}$ are not the same, obviously, as the square root of the diagonals of $\mathbf{V}(\boldsymbol{\beta})$ taken individually. Nevertheless, I want to preserve correlation between the coefficients, and based on what I am getting, the means of random coefficients appear to converge with the individual values of $\boldsymbol{\beta}$. Since I am not concerned with inference of coefficients, should I leave the square-root matrix intact and not be concerned that diagonal elements do not equal $s.e.(\beta_j)$?
Square-root matrix of coefficient var-cov matrix $\mathbf{V}(\boldsymbol{\beta})$ during simulation
CC BY-SA 4.0
null
2023-03-28T16:27:17.987
2023-03-28T16:32:51.493
2023-03-28T16:32:51.493
377184
377184
[ "regression", "simulation", "covariance-matrix" ]
611032
1
611160
null
1
29
I'm working through time-series forecasting, using models such as ETS, ARIMA, and vector autoregression as described in several texts (for example, Hyndman, R.J., & Athanasopoulos, G. (2021) Forecasting: principles and practice, 3rd edition, OTexts: Melbourne, Australia. OTexts.com/fpp3). I've created a hypothetical where I assume I have only the first [12] months of time-series data and I forecast for months [13-24] based on the actuals for months [1-12]. I generate simulation paths for months [13-24], and distributions thereof. I then compare those forecasted simulation paths/distributions for months [13-24] with the actual data for months 13-24 in order to assess forecast reasonableness. Results using ETS and ARIMA have been fine, with some minor adjustment such as using logs. However, these traditional time-series forecasting methods analyze/forecast essentially a single-line, depicted as the heavier trend line in the below image using my example data and labeled `mean`. In my data, that heavier trend line is simply an average of many underlying elements with disparate trends. The below is a simplified example of my actual data for the sake of post replicability and all of my actual curves take the form of nice smooth logarithmic functions. In the below example, there are elements `v`, `w`, `x`, `y`, and `z`, and their mean is `mean` in the example data frame. But the trends of the underlying elements in my actual data do look like this example data in terms of dispersion around the mean. Values never fall below zero. For time-series forecasting such as for this form of example data, are there any other methods I should be considering, that take into account the additional information I have at hand for the many underlying elements? (In my actual data I have 48 months and 60,000 + elements trending over those 48 months). 
[](https://i.stack.imgur.com/76iP3.png) Code to generate the above: ``` library(ggplot2) library(dplyr) # provides filter(), used below DF <- data.frame( mo = 1:24, v = c(rep(0,24)), w = c(0,0.1,rep(0.2,12),seq(0.2,0.5,length.out=10)), x = c(0,0,seq(0,0.5,length.out = 10),0.5,0.5,seq(0.5,0.98,length.out = 10)), y = seq(0, 1.5, length.out = 24), z = seq(0, 2.5, length.out = 24) ) DF$mean <- rowMeans(DF[,2:6]) DF_reshape <- data.frame( x = DF$mo, y = c(DF$v, DF$w, DF$x,DF$y,DF$z,DF$mean), group = c(rep("v", nrow(DF)), rep("w", nrow(DF)), rep("x", nrow(DF)), rep("y", nrow(DF)), rep("z", nrow(DF)), rep("mean", nrow(DF)) ) ) ggplot(DF_reshape, aes(x, y, col = group)) + geom_line() + geom_line(data = filter(DF_reshape,group == "mean"), linewidth = 2) + labs(x = "x axis = number of months elapsed") ```
What are some alternative approaches for forecasting time-series data when you have more underlying data available than used in the standard models?
CC BY-SA 4.0
null
2023-03-28T16:47:52.893
2023-03-29T17:21:03.780
null
null
378347
[ "r", "time-series", "forecasting", "volatility" ]
611033
1
null
null
1
16
I am trying to test whether sleep improves over the course of a treatment intervention. Through surveys sent periodically across 6 months, people self-report how many days per week they had difficulty falling asleep and how many days they had difficulty staying asleep, which are then added together to give a range of values from 0-14. My base model takes this form: ``` days_impacted ~ time + (1|participant) ``` If I look at the whole distribution of `days_impacted` across all time and people, it fits a zero-inflated negative binomial distribution. But there are caveats: - days_impacted are clustered at the participant-level (i.e., (1|participant)), and for any given person, their responses probably don't fit a zero-inflated negative binomial distribution. - The shape of the days_impacted distribution likely changes across time. For example, it may be less zero-inflated to begin with if most people have some trouble sleeping, but become more zero-inflated if the intervention works. - days_impacted proxies for counted days, but are not true counts. What's the best approach to use here? Can this data be fit with a negative binomial distribution, despite the caveats?
Use of negative binomial regression for clustered and pseudo-count data
CC BY-SA 4.0
null
2023-03-28T16:53:03.533
2023-03-28T16:53:03.533
null
null
5892
[ "regression", "repeated-measures", "negative-binomial-distribution" ]
611034
2
null
611019
0
null
The starting point to me is to have $26$ features, one for each letter of the English alphabet, where your value of the feature is a count of how many times that letter appears in the string. This seems to have an advantage over just an indicator of whether or not a letter appears in the string, since multiple instances could be meaningful. You then can add interactions or other functions of those features, should those be warranted. The order of the letters could matter, depending on the circumstances, so you might want to consider more sophisticated methods that account for the sequential nature of the strings. Then again, maybe you have reason to believe that just the letter count matters.
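The count-feature construction described above can be sketched in a couple of lines of Python (a minimal illustration; the A-Z column order is an arbitrary choice):

```python
from string import ascii_uppercase

def letter_counts(s):
    # One feature per letter A-Z: how many times it appears in the string.
    return [s.count(c) for c in ascii_uppercase]
```

Each string then maps to a fixed-length numeric vector that any regression model (ordinary least squares, gradient boosting, etc.) can consume.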
null
CC BY-SA 4.0
null
2023-03-28T16:55:33.530
2023-03-28T16:55:33.530
null
null
247274
null
611035
1
611058
null
1
34
> An order for bottles of vitamins from a certain mail order company costs $\$ 12.04$ per bottle plus a shipping cost of $\$ 4.80$ regardless of the number of bottles ordered. Over the past year, the company has received 100 orders for bottles of vitamins, The standard deviation of the numbers of bottles per order for the 100 orders is 1.5 bottles. What is the standard deviation of the 100 costs for the orders? The mean number of bottles per order is unknown. The total number of bottles sold is unknown. The total number of bottles equals $100 \ \cdot$ mean number of bottles per order. $$\sqrt{ \dfrac{\sum_{i=1}^{100} (bottles_{ith \ order}-\dfrac{\sum bottles}{orders})^2}{100}}= standard \ deviation =\sqrt{\dfrac{ \sum_{i=1}^{100}(bottles_{ith \ order}-\dfrac{\sum bottles}{100})^2}{100}}=1.5$$. $12.04 \sum bottles+4.8 \cdot orders=12.04 \sum bottles+4.8 \cdot 100 = total \ cost$ $\sum bottles = \dfrac{total \ cost - 4.8 \cdot 100}{12.04}$ How to proceed?
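One standard property (not stated in the problem) short-circuits the algebra above: adding a constant shifts every value and the mean equally, so it leaves the standard deviation unchanged, while scaling by $a$ scales it by $|a|$. Applied here,

$$\operatorname{sd}(aX+b) = |a|\operatorname{sd}(X) \quad\Longrightarrow\quad \operatorname{sd}(\text{cost}) = 12.04 \times 1.5 = 18.06.$$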
What's the standard deviation of the cost for the orders?
CC BY-SA 4.0
null
2023-03-28T16:57:00.917
2023-03-28T23:23:02.967
2023-03-28T17:10:05.340
null
null
[ "mean", "standard-deviation" ]
611036
1
611076
null
1
37
I have a set of measurements over time at specific locations with treatments applied to different locations. I'm interested in testing whether locations have different responses to the treatments overall. Each location may have different 'starting points' so I care about slopes of a linear model, not intercepts or means. I have figured out how to estimate a linear model for each location individually and then use a t-test on slopes (for two treatments), but I expect that's not fully legit and am stuck trying to specify an appropriate mixed model. Here's a reprex ``` library(lme4) library(ggplot2) #just to show what I'm looking for #Set up dummy data dat <- data.frame(samples = c(rep(1, 3), rep(2, 3), rep(3,3), rep(4,3), rep(5,3),rep(6,3)), value = c(1,3,7, 2,4.5,4.2, 7,7.2,10, 5,2,2, 9,9,2, 2,1,0.5), time = rep(c(1,2,3),6), tmt = c(rep(1,9), rep(2,9))) dat$samples <- as.factor(dat$samples) dat$tmt <- as.factor(dat$tmt) # plot it to visualize what I'm looking for # do the slopes of the two tmt groups differ? ggplot(data=dat, aes(x=time, y=value, group=samples)) + geom_line(aes(color=tmt))+ geom_point() # I assume this isn't really a legitimate way to test the groups of slopes! tmt1_estimates <- lmList(value ~ time | samples, data=dat, subset = dat$tmt == 1) summary(tmt1_estimates) tmt2_estimates <- lmList(value ~ time | samples, data=dat, subset = dat$tmt == 2) summary(tmt2_estimates) tmt1_slopes <- summary(tmt1_estimates)$coefficients[,,"time"][,"Estimate"] tmt2_slopes <- summary(tmt2_estimates)$coefficients[,,"time"][,"Estimate"] # the test t.test(tmt1_slopes, tmt2_slopes) # my attempt at specifying in lmer. I feel like I'm missing the comparison of # slopes by tmt group m1 <- lmer(value ~ tmt + (time|samples), data=dat) summary(m1) ``` Any advice for a more appropriate test would be greatly appreciated. In reality, my samples are unbalanced (different sized groups) and vary in when sampled.
compare grouped slopes (change over time)
CC BY-SA 4.0
null
2023-03-28T17:11:13.693
2023-03-29T05:54:51.567
2023-03-28T18:45:26.117
12839
12839
[ "r", "mixed-model", "lme4-nlme", "lm" ]
611037
2
null
610341
2
null
Theoretical Background For the sake of simplicity let's assume $\sigma^2=\gamma(0)=1$, where $\gamma(h)$ represents the autocovariance at lag $h$. Let your time series be $\{X_i\}_{i=1}^n$. Consider the covariance matrix of your data $$\begin{align} \Sigma_X=\begin{bmatrix}1 & \gamma(1) & \gamma(2) & \cdots & \gamma(n-1)\\\gamma(1) & 1 & \gamma(1) & \cdots & \gamma(n-2)\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \gamma(n-1) & \gamma(n-2) & \gamma(n-3) & \cdots & 1\end{bmatrix}\end{align}.$$ For a stationary process (I am assuming you would like your time series to be stationary) this covariance matrix will be positive semi-definite (see [https://stat.ethz.ch/Teaching/buhlmann/Zeitreihen/acf.pdf](https://stat.ethz.ch/Teaching/buhlmann/Zeitreihen/acf.pdf), and [Show that the autocovariance function of stationary process {${X_t}$} is positive definite](https://stats.stackexchange.com/questions/431429/show-that-the-autocovariance-function-of-stationary-process-x-t-is-positiv)). Now since $\Sigma_X$ is positive semi-definite and symmetric, it will have a symmetric square root such that $$\Sigma_X=\Sigma_X^{1/2}\Sigma_X^{1/2}.$$ Finally we can use the fact that for a vector of random variables $Z$, $\text{Cov}(AZ)=A\Sigma_ZA^T$. So here if $Z$ is a vector of normal random variables with $\Sigma_Z=I$ then $$\text{Cov}(\Sigma_X^{1/2}Z)=\Sigma_X.$$ Application in R So in your case what you would do is - Using the ACF you have calculated, create a $\Sigma_X$ matrix in R. - Calculate $\Sigma_X^{1/2}$ (you could use the sqrtm() function in R). - Generate $Z$ using rnorm() in R, and then multiply $Z$ by $\Sigma_X^{1/2}$ to get your time series with the desired covariance, and hence autocovariance, structure.
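For concreteness, here is the same linear algebra sketched in Python. Note a Cholesky factor $L$ works just as well as the symmetric square root, since $\Sigma_X = LL^\top$ also gives $\text{Cov}(LZ)=\Sigma_X$. The example ACF values are made up.

```python
import random

def cholesky(a):
    # Lower-triangular L with L L^T = a, for a symmetric positive-definite matrix.
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (a[i][i] - s) ** 0.5
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def simulate(acf, n, seed=0):
    # Build the Toeplitz covariance from the ACF (gamma(0) = acf[0]),
    # factor it, and return x = L z for standard-normal z.
    gamma = lambda h: acf[h] if h < len(acf) else 0.0
    sigma = [[gamma(abs(i - j)) for j in range(n)] for i in range(n)]
    L = cholesky(sigma)
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]
```

In R the analogous steps are `toeplitz()` for $\Sigma_X$, `sqrtm()` or `chol()` for the factor, and `rnorm()` for $Z$.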
null
CC BY-SA 4.0
null
2023-03-28T17:12:20.260
2023-03-28T17:12:20.260
null
null
234732
null
611038
1
null
null
0
43
I'm modeling rate data using Poisson regression and I want to assess the model assumption of mean = variance. I understand that for Poisson regression it's the conditional means that equal the conditional variances for different levels of the predictor variables. Suppose below is some random example data from one of those combinations of predictor variables that shows the counts (x) and the exposure: ``` n = 100 dat = data.frame( x = rpois(n, 3), exposure = runif(n, min = 10, max = 50) ) ``` My question is how do I account for the varying levels of exposure when checking this assumption?
Poisson regression - How to check mean = variance assumption when the counts have different exposures?
CC BY-SA 4.0
null
2023-03-28T17:13:17.800
2023-03-28T17:13:17.800
null
null
271129
[ "r", "assumptions", "poisson-regression" ]
611039
1
null
null
1
19
I am trying to train an SVM model for my statistical learning course. The problem is a binary image classification problem (wildfire, nowildfire). This is the rigorous amount of testing that I have done so far: [](https://i.stack.imgur.com/bLz9a.png) Initially I started by training the model using the library, which was rather easy. Then I toyed around some more with the results, and once I was sure that RBF for the kernel function and scale for the gamma value would be best, I moved on to my own manual implementation. Now the manual implementations are as follows: [](https://i.stack.imgur.com/S7muU.png) SMO was terrible in terms of efficiency and scalability, whereas SGD also didn't seem scalable (although it performed well, for a larger dataset it was somewhat slow), so I landed on MBGD, which is also the consensus of most of the Statistical Learning and Data Analytics literature. However, what I do not understand is why MBGD performed slightly worse with 2368 images (fourth last row, mini-batch size = 32) than it did with 480 images (last three rows, mini-batch sizes = 32, 16, 8), in terms of both best score and perhaps even test accuracy considering the difference in size. My other big problem so far has been efficiency, for which I implemented an early stopping criterion based on the difference between two successive weights and biases. I also made the learning rate dynamically decrease with every successive epoch/iteration so that I do not overfit the data whilst keeping the learning rate large enough to converge in a reasonable amount of time. So I guess my question is: can anyone help me understand how to improve my model and what mistakes I am currently making?
Could someone help me interpret the data that I have gathered thus far?
CC BY-SA 4.0
null
2023-03-28T17:36:20.500
2023-03-28T17:45:44.837
2023-03-28T17:45:44.837
380393
380393
[ "classification", "svm", "gradient-descent", "stochastic-gradient-descent" ]
611040
2
null
210802
0
null
It is fine for the other variables in the structural equation, BMI and health-related work limitations, to be correlated with the instrument. If they are not your research interest, it can even be fine for those variables to be endogenous. Additionally, I believe you cannot have more endogenous variables than instruments. So if you are performing 2SLS in Stata: ``` ivregress 2sls Log_Income Controls (Reads_Nutri = Diabetes), robust ``` where Controls stands for all the controls you have.
null
CC BY-SA 4.0
null
2023-03-28T18:04:56.057
2023-03-28T18:16:39.210
2023-03-28T18:16:39.210
384291
384291
null
611041
1
null
null
3
164
I am new to R Studio. I tried to fit a regression model; I used a semi-log model due to normality. I summarised the results of the regression and found that two variables (called v1 and v2) had estimated coefficients of 0. V1 and V2 are continuous variables used as control variables. The independent variable is categorical and the dependent variable is continuous. The r-squared value increased when adding v1 and v2, and according to their p-values they are statistically significant. I am confused about why they have estimated values of 0 and I don't know how to interpret them. If you have any advice, please let me know. The following picture is the summary table of the regression model that I mentioned. [](https://i.stack.imgur.com/bWbQ6.png) And this is the regression result. [](https://i.stack.imgur.com/LzG2G.png)
How to interpret a 0 estimated value in a semi-log regression model
CC BY-SA 4.0
null
2023-03-28T18:06:13.037
2023-03-28T21:23:07.243
null
null
384348
[ "r", "regression" ]
611042
2
null
610943
0
null
To use two-stage least squares (2SLS) regression for your analysis, you will need an instrumental variable that is strongly correlated with your explanatory variable (i.e., covid rates and covid lockdown levels), but uncorrelated with the error term of your outcome variable (i.e., murder and suicide rates). One potential instrumental variable for your analysis could be the date that the county implemented its covid lockdown measures. This variable is likely to be strongly correlated with covid rates and lockdown levels, but it may not be directly related to murder and suicide rates. In terms of methodology, you can just use 2SLS.
null
CC BY-SA 4.0
null
2023-03-28T18:21:01.727
2023-03-28T18:21:20.427
2023-03-28T18:21:20.427
384291
384291
null
611043
1
null
null
2
48
For a 2x2 scenario, to determine misclassification we can obtain positive and negative predictive values from a logistic regression model where the outcome is the true value and the exposure is the measured value. I can demonstrate what this might look like in R: ``` set.seed(123) true <- rbinom(2000, 1, 0.30) measured <- rbinom(2000, 1, 0.25) tab <- table(measured,true) crude.ppv.npv <- tab/rowSums(tab) mod <- glm(true ~ measured, family='binomial') model.ppv <- (1) / (1+ (exp(-(coef(mod)[1]+1*(coef(mod)[2]))))) model.npv <- (1- ((1)/(1 + exp(-(coef(mod)[1]+0*(coef(mod)[2])))))) ``` My question is how can I extend these equations to obtain the predictive values in a 5x5 scenario. More specifically, the probability of being truly in category X given that you are categorized in category X. I imagine the model could either be specified as a multinomial model, where all levels of the true and measured exposure are taken into account. Or we could model the measured values separately for all levels of the true exposure. Here is an example in R. Say we have two variables with 5 levels each, from which we can create a 5x5 table and find out the probability of being in each true level given that you are categorized as a measured level ``` set.seed(1234) measured <- as.factor(sample(1:5, 1846, replace=TRUE, prob=c(0.2, 0.2, 0.2, 0.2, 0.2))) true <- as.factor(sample(1:5, 1846, replace=TRUE, prob=c(0.19, 0.2, 0.2, 0.2, 0.21))) # make 5x5 table tab <- table(measured,true) tab # get crude predictive values from the table tab.prop <- tab/rowSums(tab) tab.prop ``` So basically I'm trying to obtain each of the cells in the tab.prop table from a regression model. I'm not sure it's possible, any direction would be welcome :)
How to obtain positive predictive values from multinomial regression?
CC BY-SA 4.0
null
2023-03-28T19:22:16.897
2023-03-28T21:10:48.013
2023-03-28T19:44:45.483
364272
364272
[ "r", "regression", "sensitivity-specificity" ]
611044
1
null
null
1
13
I have a dataset containing two types of variables: numerical and sequential (an example is shown below). How can I give both of these variables simultaneously as input to machine learning models like XGBoost? Sample Data- V1      V2      Target 1      [1,2,3]      1 5      [2,3,4]      1 15    [6,8,9]      0
Giving different types of variables as input to ML models
CC BY-SA 4.0
null
2023-03-28T19:53:51.267
2023-03-28T19:53:51.267
null
null
384352
[ "machine-learning", "time-series", "boosting", "multimodality" ]
611045
2
null
611043
1
null
Go through the definition of conditional probability. $$ P\left(Y = y\vert \hat Y=y\right) $$$$= \dfrac{ P\left(\left(\hat Y = y\right)\bigcap \left(Y = y\right)\right) }{ P\left(\hat Y = y\right) } $$$$=\dfrac{ \dfrac{ \text{Count of How Many Points are Classified } y\text{ and really are } y }{ \text{Total Number of Classification Attempts} } }{ \dfrac{ \text{Count of How Many Points are Classified } y }{ \text{Total Number of Classification Attempts} } }$$$$=\dfrac{ \text{Count of How Many Point are Classified } y\text{ and really are } y }{ \text{Count of How Many Points are Classified } y } $$ Note, however, that multinomial logistic regression does not make hard classifications. Instead, such a model returns the probabilities of class membership. You can map those probabilities to categories, and many people do just assign the label with the highest probability, but those probabilities can be useful. After all, wouldn't you be more comfortable making a decision with probabilities like $(0.95, 0.02, 0.03)$ than $(0.34, 0.33, 0.33)$, despite the fact that both would be mapped to the same category according to a rule that assigns the category with the highest probability?
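For completeness, the final ratio in the derivation can be computed directly from paired label vectors; a minimal sketch with made-up labels:

```python
from collections import Counter

def positive_predictive_values(measured, true):
    # Empirical P(true == c | measured == c) for each class c:
    # (count classified c and really c) / (count classified c).
    correct = Counter(m for m, t in zip(measured, true) if m == t)
    assigned = Counter(measured)
    return {c: correct[c] / assigned[c] for c in assigned}
```

Applied to the diagonal of a confusion table, this reproduces the `tab.prop` diagonal from the question.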
null
CC BY-SA 4.0
null
2023-03-28T19:57:15.360
2023-03-28T19:57:15.360
null
null
247274
null
611046
1
611059
null
0
37
I am struggling to work out if the following is true. Let $\{A_\epsilon\}$ be an indexed set of events in a probability space. Does it hold that: $$\forall \epsilon > 0, \mathbb{P}[A_\epsilon]=1 \implies \mathbb{P}\bigg[ \bigcap_{\epsilon > 0} A_\epsilon\bigg]=1$$ If not, can I add any conditions (such as the $A_\epsilon$ are decreasing or increasing) to guarantee the result? --- Edit: The origin of this question is from an almost sure convergence proof. The proof showed that: $$\forall \epsilon > 0, \mathbb{P}[\exists N \in \mathbb{N} \text{ such that } \forall n \geq N, |X_n-X|\leq \epsilon]=1$$ but then the proof just jumped to: $$\mathbb{P}[\forall \epsilon > 0, \exists N \in \mathbb{N} \text{ such that } \forall n \geq N, |X_n-X|\leq \epsilon]=1$$ and that confused me a lot.
Probability for all epsilon is one implies for all epsilon probability is one
CC BY-SA 4.0
null
2023-03-28T20:02:12.787
2023-03-28T23:46:37.003
2023-03-28T21:05:49.793
320497
320497
[ "probability", "convergence", "probability-inequalities" ]
611048
2
null
611041
6
null
The `R` printout shows your regression coefficients to be estimated as $8.307 \times 10^{-4} = 0.0008307$ and $9.636 \times 10^{-4} = 0.0009636$. These values have the usual interpretation as regression coefficients with a logged target variable (the usual business about percent change). Then the upper table shows that, when the coefficients are rounded to two decimal places, both round to zero, which is correct rounding. This is a valid criticism of the table using rounding. You are able to get significance of those seemingly tiny coefficients because, perhaps among other reasons, you have thousands of observations that give considerable statistical power and ability to detect small effect sizes. You are within your rights as a scientist (economist, physician, engineer, etc), however, to say that, despite the statistical significance, those small values have no practical significance and are essentially zero, or you might use your knowledge of the subject to say that such a magnitude really does matter. However, separate the statistical significance from the practical importance; statistical significance does not necessarily make something interesting.
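The rounding claim is easy to verify directly (a trivial sketch; the ~0.08% figure is the usual $100(e^{\hat\beta}-1)$ percent-change reading of a coefficient with a logged outcome, computed here rather than taken from the printout):

```python
import math

beta = 8.307e-4  # coefficient as reported in the R printout

# Rounded to two decimal places it displays as zero, yet it is not zero:
display = f"{beta:.2f}"

# With a logged target, a one-unit change in the predictor multiplies the
# outcome by exp(beta), i.e. roughly a 0.08% change per unit here.
pct_change = 100 * (math.exp(beta) - 1)
```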
null
CC BY-SA 4.0
null
2023-03-28T20:19:13.070
2023-03-28T21:23:07.243
2023-03-28T21:23:07.243
247274
247274
null
611049
1
null
null
1
22
Background I've been running a couple of lmer() models with different data and noticed something I cannot really grasp. The models range from simple to more complicated (e.g. DV ~ continuous_between_Case_Variable + categorical_within_Case_Variable + categorical_between_Case_Variable + (1|Case) ) Given the categorical predictors I am using the anova() function on the lmer() object, to assess if the variable is relevant/significant. For comparison purposes, I also extract eta² and its confidence intervals (using effectsize::eta_squared). Problem I noticed that sometimes, when there is clearly no effect, the Pr(>F)/p-value is very high, even 1. However, for some reason, eta² is sometimes 1 in these cases. In other cases (independent of the p-value) I noticed that the confidence intervals range from 0 to another value, or even 0 to 0. So my questions are: How can I extract eta² and its confidence intervals more reliably? I need to know which values I can interpret and which ones I should better replace with NA. Do I need to worry if the confidence interval starts at 0 (which usually goes together with asymmetrical confidence intervals around the estimate)? Does a confidence interval ranging from 0 to 0 always indicate that it could not be determined and is infinite? But why not simply replace it then with the largest possible value, 1?
Limitations with effect sizes and confidence intervals (fixed effects) with lmer and effectsize in R
CC BY-SA 4.0
null
2023-03-28T16:29:34.797
2023-03-29T04:07:11.237
2023-03-29T04:07:11.237
11887
384341
[ "r", "confidence-interval", "lme4-nlme", "lm" ]
611050
1
null
null
1
138
I am trying to model a time series with R using the `auto.arima()` function. I understand there is no need for the series being modelled to be stationary, as this is taken care of by the "integrated" part. However, when extending the model with exogenous variables via `xreg`, does the same rule apply to them, or do I have to transform them to achieve stationarity prior to including them in the model?
ARIMAX- Do exogenous variables have to be stationary?
CC BY-SA 4.0
null
2023-03-28T20:47:00.023
2023-03-29T06:29:50.160
2023-03-29T06:29:50.160
53690
384355
[ "r", "time-series", "forecasting", "arima", "stationarity" ]
611051
2
null
524574
0
null
Question: Is $y_t$ increasing over time? One proposed solution: Use Spearman's rank correlation coefficient $r_{\text{S}}$ measured between $y_t$ and $t$, and test $\text{H}_{0}\text{: }\rho_{\text{S}} \le 0$ versus $\text{H}_{\text{A}}\text{: }\rho_{\text{S}} > 0$. If you reject $\text{H}_0$ in favor of $\text{H}_{\text{A}}$, that is evidence that $y_t$ is increasing monotonically.
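In R, with `y` standing in for the observed series (a toy example, not data from the question), the one-sided test can be run directly:

```
set.seed(1)
y <- cumsum(rnorm(50, mean = 0.2))   # toy series with an upward drift
t <- seq_along(y)
# one-sided test of H0: rho_S <= 0 versus HA: rho_S > 0
cor.test(y, t, method = "spearman", alternative = "greater")
```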
null
CC BY-SA 4.0
null
2023-03-28T21:09:40.910
2023-03-28T21:09:40.910
null
null
44269
null
611052
2
null
611043
1
null
While it is a good idea to use the derived theoretical positive predictive value as @Dave did, the question did ask how to do this using a regression model: ``` ## data from question: set.seed(1234) measured <- as.factor(sample(1:5, 1846, replace=TRUE, prob=c(0.2, 0.2, 0.2, 0.2, 0.2))) true <- as.factor(sample(1:5, 1846, replace=TRUE, prob=c(0.19, 0.2, 0.2, 0.2, 0.21))) #multinomal regression library(nnet) multinom_model <- multinom(true~measured) predicted_values <- predict(multinom_model, newdata=data.frame(measured=factor(1:5)), type="probs") predicted_values ``` which gives the output ``` 1 2 3 4 5 1 0.2106765 0.1910067 0.1853966 0.2443814 0.1685387 2 0.1868126 0.1868142 0.1923090 0.2170322 0.2170320 3 0.2117343 0.1862289 0.1683639 0.2040815 0.2295913 4 0.1778987 0.2318079 0.1967627 0.1940719 0.1994588 5 0.1983484 0.1763090 0.2314043 0.2011012 0.1928372 ``` where the probabilities of correct classification can be read from the diagonal. Notice I compute the probabilities using the `predict` function on a new dataframe, which (for some) is faster than remembering the formula that connects the model parameters to the probabilities.
null
CC BY-SA 4.0
null
2023-03-28T21:10:48.013
2023-03-28T21:10:48.013
null
null
89277
null
611053
1
null
null
1
17
The procedure for an ANOVA is really straightforward: you compute the between-group variance and the within-group variance, divide the two to get an F statistic, and compare it to your critical value. But what about ANCOVA? I can't find an exact procedure anywhere. Can someone show me a piece of code that finds the F statistic in an ANCOVA? And not just using pre-built ANCOVA modules, because what I am interested in is how the resulting F statistic is obtained. I tried researching the ANCOVA formula, but to no avail. Everyone online just says "computation would be tedious, just use software" but never actually explains the procedure.
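To make the question concrete: my current guess is that the covariance-adjusted F comes from comparing two nested `lm()` fits via extra sums of squares (made-up data below, with `x` as the covariate and `g` as the group factor). Is this what SPSS is doing under the hood?

```
set.seed(1)
y <- rnorm(30); x <- rnorm(30); g <- factor(rep(1:3, 10))
full    <- lm(y ~ x + g)          # covariate plus group
reduced <- lm(y ~ x)              # covariate only
rss_f <- sum(resid(full)^2)
rss_r <- sum(resid(reduced)^2)
F_stat <- ((rss_r - rss_f) / 2) / (rss_f / full$df.residual)  # 2 df for 3 groups
F_stat
anova(reduced, full)              # same F, computed by R
```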
What formula does SPSS apply to the data in an ANCOVA? Or exactly *how* does ANCOVA adjust the dependent variable?
CC BY-SA 4.0
null
2023-03-28T21:27:06.943
2023-03-28T21:27:06.943
null
null
384356
[ "hypothesis-testing", "statistical-significance", "anova", "spss", "ancova" ]
611054
1
null
null
1
15
Here's the formula for a continuous case (taken from [this great answer](https://stats.stackexchange.com/q/367558)): \begin{align} \text{AUC} &= \int_{-\infty}^{+\infty} \big( 1-F_1(\tau) \big) f_0(\tau) d\tau \end{align} Since a coin assigns scores from $\{0,1\}$ to each instance of the positive and negative class with probability 0.5, we have (adapting the formula above to the discrete case): $p_0(1)p_1(score>1) + p_0(0)p_1(score>0) = 0.25$, where $p_0$ and $p_1$ are the probability mass functions of scores for the negative and positive classes. What's wrong with this reasoning? I arrive at the same conclusion when considering a finite set of 50 positive and 50 negative instances. A coin would assign around 25 positive scores to instances of each class. There are 50x50 possible ways to choose a pair (one negative and one positive instance) and 25x25 ways to choose a pair such that the positive instance is scored 1 while the negative is scored 0. Which gives $p=0.25$.
Calculating ROC AUC for a fair coin by definition. Where's the mistake?
CC BY-SA 4.0
null
2023-03-28T21:49:54.657
2023-03-28T21:49:54.657
null
null
90976
[ "probability", "roc", "auc" ]
611055
1
null
null
1
18
I'm trying to work out an explanation for a result I'm seeing. I have the Kaplan-Meier plots for monthly customer churn risk. In aggregate, it looks fine, but I've broken it down into subgroups based on whether the user has previously churned before, i.e. canceled and resubscribed.

A couple of things might be impacting this: I'm looking at a shorter time horizon than I'd like (~3 years relative to maybe 10 years of data), and more customers are likely to have never churned before, so the other curves represent smaller numbers in total, though still >50k.

Leaving aside the fact that recurrent-event analysis is probably the better approach if I had access to the full data, I'd love any insights if anyone's seen something like this before. The same pattern also appears in the duration distribution comparison, with non-churners having a very different distribution.

My main hypothesis is that this plot suggests customers will, more or less, inevitably churn, so customers who have already done so in the past are less at risk now. Similarly, customers who give the service a second (or third, etc.) chance may be more likely to stick with it. And then there's the idea that a user who has already churned >= 1 time will, probabilistically and in aggregate, have shorter durations. Given this, I'm wondering how I can correct for that effect? It's not a particularly satisfying answer to me so far; I just wish I had a more quantitative way of demonstrating it. [](https://i.stack.imgur.com/nVFZS.png) [](https://i.stack.imgur.com/Ggnbl.png)
Survival curves, counter-intuitive results for customer churn -- lower risk of churn for users who've previously churned?
CC BY-SA 4.0
null
2023-03-28T21:58:53.850
2023-03-29T01:43:23.667
null
null
76711
[ "survival", "churn", "recurrent-events" ]
611056
2
null
213090
1
null
Well, if you check the help of Fisher's exact test in R: ``` ?fisher.test ``` in the end, there's an example where both variables are ordinal. The table is as follows: ``` ## A r x c table Agresti (2002, p. 57) Job Satisfaction Job <- matrix(c(1,2,1,0, 3,3,6,1, 10,10,14,9, 6,7,12,11), 4, 4, dimnames = list(income = c("< 15k", "15-25k", "25-40k", "> 40k"), satisfaction = c("VeryD", "LittleD", "ModerateS", "VeryS"))) Job satisfaction income VeryD LittleD ModerateS VeryS < 15k 1 3 10 6 15-25k 2 3 10 7 25-40k 1 6 14 12 > 40k 0 1 9 11 fisher.test(Job) Fisher's Exact Test for Count Data data: Job p-value = 0.7827 alternative hypothesis: two.sided ``` As you can see, this example is from Agresti's book Categorical Data Analysis (2002). I checked the book, but it seems he didn't mention Fisher's exact test in this part. As R's help uses this example to run a Fisher's test, I assume it's valid to run Fisher's test for ordinal variables too, but it would be good to confirm it!
null
CC BY-SA 4.0
null
2023-03-28T22:02:44.853
2023-03-28T22:02:44.853
null
null
252638
null
611057
1
null
null
3
70
When training a VAE, one aims to maximize the function $\mathcal{L}$, defined as: $$\mathcal{L}\left(\theta,\phi; \mathbf{x}^{(i)}\right) = - D_{KL}\left(q_\phi(\mathbf{z}|\mathbf{x}^{(i)}) || p_\theta(\mathbf{z})\right) + \mathbb{E}_{q_\phi\left(\mathbf{z}|\mathbf{x}^{(i)}\right)}{\log p_\theta\left(\mathbf{x}^{(i)}| \mathbf{z}\right)},$$ where $q_\phi\left(\mathbf{z}|\mathbf{x}^{(i)}\right) = \mathcal{N}_J(\mathbf{\mu}^{(i)},\sigma^{(i)}\mathbb{1})$ and $p_\theta(\mathbf{z}) = \mathcal{N}_J(\mathbf{0},\mathbb{1})$. The term $D_{KL}\left(q_\phi(\mathbf{z}|\mathbf{x}^{(i)}) || p_\theta(\mathbf{z})\right)$ may be viewed as a regularizer. Since normality is already imposed on $q_\phi\left(\mathbf{z}|\mathbf{x}^{(i)}\right)$, the KL-divergence term aims solely to drive $\mathbf{\mu}^{(i)}$ toward $0$ and $\mathbf{\sigma}^{(i)}$ toward $1$. Thus, wouldn't it be equivalent (and simpler) to replace the $D_{KL}$ term with (something along the lines of) $\left(\mu^{(i)}\right)^2 + \left(\sigma^{(i)}-1\right)^2$? That is, to use the following function $\tilde{\mathcal{L}}$ instead: $$\tilde{\mathcal{L}}\left(\theta,\phi; \mathbf{x}^{(i)}\right) = -\sum_{j=1}^J\left(\left(\mu^{(i)}_j\right)^2 + \left(\sigma^{(i)}_j-1\right)^2\right) + \mathbb{E}_{q_\phi\left(\mathbf{z}|\mathbf{x}^{(i)}\right)}{\log p_\theta\left(\mathbf{x}^{(i)}| \mathbf{z}\right)}.$$
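For reference, assuming $\sigma^{(i)}\mathbb{1}$ is shorthand for a diagonal covariance with variances $\left(\sigma^{(i)}_j\right)^2$ (the usual VAE convention), the KL term has the standard closed form $$D_{KL}\left(q_\phi(\mathbf{z}|\mathbf{x}^{(i)}) \,\|\, p_\theta(\mathbf{z})\right) = \frac{1}{2}\sum_{j=1}^J\left(\left(\mu^{(i)}_j\right)^2 + \left(\sigma^{(i)}_j\right)^2 - \log\left(\sigma^{(i)}_j\right)^2 - 1\right),$$ which makes the comparison with the proposed quadratic penalty explicit.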
Replacing the KL-divergence term in a VAE with parameter regularization
CC BY-SA 4.0
null
2023-03-28T23:04:59.753
2023-04-07T11:15:10.370
2023-04-01T22:27:50.290
44221
44221
[ "autoencoders", "kullback-leibler", "variational-inference", "variational" ]
611058
2
null
611035
1
null
In general, multiplying every observation by a positive constant multiplies the standard deviation by that constant, and adding a constant to every observation doesn't change the standard deviation. So here the standard deviation is $12.04 \cdot 1.5 = \$18.06$. To be a bit more formal about it: Let $B$ be the number of bottles in an order. We know $\text{Var}(B)=1.5^2$ and we want to know: $$\text{Var}(12.04B + 4.80) = \text{Var}(12.04B)=12.04^2\text{Var}(B)=12.04^2\cdot1.5^2$$ Then the standard deviation is $\sqrt{\text{Var}(12.04B + 4.80)}=\sqrt{12.04^2\cdot1.5^2} = 12.04 \cdot 1.5 = \boxed{18.06}$.
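A quick simulation check of the two rules (any distribution will do; only the spread matters):

```
set.seed(1)
b <- rpois(1e5, 4)               # stand-in for "bottles per order"
sd(12.04 * b + 4.80) / sd(b)     # exactly 12.04: the shift adds nothing, the scale multiplies
```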
null
CC BY-SA 4.0
null
2023-03-28T23:23:02.967
2023-03-28T23:23:02.967
null
null
384360
null
611059
2
null
611046
1
null
The statement from the proof in the edited part of the question has $A_{\epsilon_1}\subset A_{\epsilon_2}$ for $\epsilon_1<\epsilon_2$. This means that the intersection can be written as countable intersection for, say, $\epsilon\in\{\frac{1}{n}\ :\ n\in\mathbb{N}\}$. For countable intersections, the statement holds, see [Countable intersection of almost sure events is also almost sure](https://stats.stackexchange.com/questions/100570/countable-intersection-of-almost-sure-events-is-also-almost-sure) The result will generally hold for increasing or decreasing sequences, as here the countable intersection is the same as the uncountable one.
null
CC BY-SA 4.0
null
2023-03-28T23:46:37.003
2023-03-28T23:46:37.003
null
null
247165
null
611060
1
611062
null
33
6906
I recently read this passage from a website and I just can't work out the math. Overall, it says you can be 93.75% confident of having the true median parameter within an interval, obtained from a random sample of 5 out of a 10 000 population. Could someone guide me to obtain this value? Here's the original passage: > Pretend for a moment that you’re a decision-maker for a large corporation with 10,000 employees. You’re considering automating part of some routine activity, like scheduling meetings or preparing status reports. But you are facing a lot of uncertainty and you believe you need to gather more data. Specifically, one thing you’re looking for is how much time the typical employee spends each day commuting. How would you gather this data? You could create what essentially would be a census where you survey each of the 10,000 employees. But that would be very labor-intensive and costly. You probably wouldn’t want to go through that kind of trouble. Another option is to get a sample, but you are unsure what the sample size should be to be useful. What if you were told that you might get enough information to make a decision by sampling just five people? Let’s say that you randomly pick five people from your company. Of course, it’s hard for humans to be completely random, but let’s assume the picking process was about as random as you can get. Then, let’s say you ask these five people to give you the total time, in minutes, that they spend each day in this activity. The results come in: 30, 60, 45, 80, and 60 minutes. From this, we can calculate the median of the sample results, or the point at which exactly half of the total population (10,000 employees) is above the median and half is below the median. Is that enough information? Many people, when faced with this scenario, would say the sample is too small – that it’s not “statistically significant.” But a lot of people don’t know what statistically significant actually means. 
Let’s go back to the scenario. What are the chances that the median time spent in this activity for 10,000 employees, is between 30 minutes and 80 minutes, the low and high ends, respectively, of the five-employee survey? When asked, people often say somewhere around 50%. Some people even go as low as 10%. It makes sense, after all; there are 10,000 employees and countless individual commute times in a single year. How can a sample that is viewed as not being statistically significant possibly get close? Well, here’s the answer: the chances that the median time spent of the population of 10,000 employees is between 30 minutes and 80 minutes is a staggering 93.75%. In other words, you can be very confident that the median time spent is between 30 minutes and 80 minutes, just by asking five people out of 10,000 (or 100,000, or 1,000,000 – it’s all the same math). From : [https://hubbardresearch.com/two-ways-you-can-use-small-sample-sizes-to-measure-anything/](https://hubbardresearch.com/two-ways-you-can-use-small-sample-sizes-to-measure-anything/)
Can you be 93.75% confident from a random sample of only five from a population of 10,000?
CC BY-SA 4.0
null
2023-03-29T00:17:28.810
2023-03-30T18:31:40.547
2023-03-29T16:23:07.247
44269
384361
[ "probability", "confidence-interval", "small-sample" ]
611061
2
null
610337
0
null
The reported value in the paper might not be correct. For linear regressions, we can assume that the $\hat{\beta}$s have an approximately normal distribution. Then your 95% confidence interval can be estimated by the formula $$\hat{\beta}\pm1.96SE.$$ Since the paper reported the HR and 95% CI, you can back-calculate $\hat{\beta}$ and $SE$ from the hazard ratio and the confidence interval using either the upper limit or the lower limit, and you should get very similar values of SE. Let's do the upper limit first: $$\log(0.31)+1.96SE=\log(0.74)\\\Rightarrow 1.96SE=\log(\frac{0.74}{0.31})\\\Rightarrow SE=0.4439713$$ Now the lower limit: $$\log(0.31)-1.96SE=\log(0.005)\\\Rightarrow 1.96SE=\log(\frac{0.31}{0.005})\\\Rightarrow SE=2.105681$$ In theory, we should get exactly the same SE from the two equations (in practice there can be a small difference due to rounding). However, here the SEs from the two calculations are wildly different. Therefore, my conclusion is that the reported value in the paper might not be correct. You may ask the authors to clarify it.
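The same back-calculations in R, using the numbers reported in the question:

```
hr <- 0.31; lo <- 0.005; hi <- 0.74
log(hi / hr) / 1.96   # SE from the upper limit, ~0.444
log(hr / lo) / 1.96   # SE from the lower limit, ~2.106
# for an internally consistent 95% CI these two values would (nearly) agree
```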
null
CC BY-SA 4.0
null
2023-03-29T00:38:25.700
2023-03-29T01:16:41.380
2023-03-29T01:16:41.380
61705
61705
null
611062
2
null
611060
68
null
Let's ignore the numbers for a bit. If we draw five observations from the population, the probability that all five observations are above the median is $\left({1\over 2}\right)^5 = 1/32 = 0.03125$, and similarly for the probability that all five observations are below the median. As the events "above the median" and "below the median" are mutually exclusive, we can calculate the probability that all five observations are either entirely above the median or entirely below the median as the sum of the probabilities: $0.03125 + 0.03125 = 0.0625$. Consequently, the probability that a sample will "enclose" the median is just $1 - 0.0625 = 0.9375$. After you've drawn the sample, of course, probabilities don't apply anymore, but you can construct a $93.75\%$ confidence interval for the median in the obvious way by using the largest and smallest observations.
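The arithmetic in one line of R:

```
1 - 2 * (1/2)^5   # 0.9375: probability that the sample min and max straddle the median
```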
null
CC BY-SA 4.0
null
2023-03-29T01:15:33.177
2023-03-29T01:15:33.177
null
null
7555
null
611063
2
null
611060
24
null
Yes, this really works, under certain conditions, with a couple of caveats - Random selection: You can't just ask any 5 people. It would need to be randomly selected from the population whose median you wanted an interval for. - Understanding what a confidence interval means. The interval for a parameter will have a certain coverage ... but that doesn't necessarily correspond to how confident you personally are about it ... personal confidence is not the same thing as coverage. Specifically, that 93.75 percent is a frequentist probability - a long run proportion. Loosely, if you use the same methodology many, many times, about 93.75 percent of those intervals will include the population median. - The calculation of the coverage is based on assuming continuous responses. - It's not necessarily very useful; the range of 5 values will tend to be quite wide. The calculation of the coverage is mathematically straightforward (see the last paragraph below) but it's also easy to see via simulation. e.g. here's a quick simulation in R: ``` mean(replicate(1000000,between(range(runif(5)),0.5))) [1] 0.937464 ``` (where `between` is just: `function(x, m) x[1]<m & x[2]>m`; if you were doing it for a discrete variable you'd want <= and >= and to define your interval to be closed; it doesn't matter in the continuous case) It doesn't really matter how big the population was; this calculation effectively uses infinite population. A small population would not have a lower chance. I used the uniform distribution as a source of continuous random numbers but the same result would apply for any other continuous distribution, since the order relationships are unaltered by any monotonic transformation. With a continuous variable, the probability that all the values lay to the left of the population median would be $\frac12^5 = \frac{1}{32}$. Similarly for them all to be to the right. Consequently the coverage of the range of 5 randomly selected values is $\frac{15}{16} = 0.9375$.
null
CC BY-SA 4.0
null
2023-03-29T01:19:19.450
2023-03-29T17:52:51.170
2023-03-29T17:52:51.170
805
805
null
611064
2
null
611055
2
null
This is not counterintuitive at all. Customers who churned before and resubscribed had to have a reason to do so. The range of reasons depends on your service but a typical one is that they tried a competitor and decided that your service is better after all. Resubscribers have more conviction about your service because they have more experience with the landscape. Don't confuse the conditional probability of staying longer given resubscription with the probability of resubscription. Do you have any data about why your customers churn? We ask them on the cancellation form.
null
CC BY-SA 4.0
null
2023-03-29T01:29:57.647
2023-03-29T01:43:23.667
2023-03-29T01:43:23.667
384366
384366
null
611065
1
612350
null
0
107
I have two logistic mixed-effects models, nested within each other and differing in only one fixed variable: ``` mod1<- glmer(y ~ x1 + x2 + (x1 + x2| subject_ID), data = dat, family = binomial) mod2<- glmer(y ~ x1 + (x1 + x2| subject_ID), data = dat, family = binomial) ``` I use the `R` function `anova` to compare the two with `test="Chisq"`, which gives me a likelihood ratio test of whether the fixed effect missing from mod2 significantly improves mod1. I would like to calculate the effect size (specifically, Cohen's w) for this test. How can I accomplish this? Relatedly, it seems that to calculate Cramer's V or Cohen's w I need to know the sample size, number of rows, and number of columns of my test. What do rows and columns refer to in this case?
Calculate effect size Cohen's w (omega) for a chi-square test in model comparison
CC BY-SA 4.0
null
2023-03-29T01:55:39.600
2023-04-08T16:09:44.763
2023-03-29T02:04:34.377
307879
307879
[ "r", "mixed-model", "chi-squared-test", "effect-size", "model-comparison" ]
611066
2
null
213090
0
null
The assumption behind Fisher's exact test is that both variables (membership and frequency) are multinomially distributed, i.e. there are set percentages of a given person being a faculty member, a graduate student, or research staff, and set percentages for a given person being "never", "rare", etc. On a technical level, this assumption is not used, because the row and column sums are assumed to be fixed in the actual calculation, but the above is what this simplification is supposed to approximate. Coming back to your question: the fact that the second variable (frequency) is ordered has no impact on the original assumption, so solely based on this consideration you can go ahead and use Fisher's exact test. However, since your data seem to come from field observation, where neither the number of people in each group nor the number in each frequency group is controlled by you, you might consider using Boschloo's exact test with a multinomial model, which is more powerful than Fisher's test and more fitting for your use case.
null
CC BY-SA 4.0
null
2023-03-29T02:09:23.703
2023-03-29T02:36:58.930
2023-03-29T02:36:58.930
384366
384366
null
611068
1
null
null
0
62
I am looking at one kind of measurement in 4 groups: young males, young females, senior males, senior females. While I am mainly interested in the differences between young and old, I don't want sex to be a confounding factor, which is why I added the additional sex groups and basically doubled my sample size (there are no previous studies to say whether it differs with age in the species I am researching). I am pretty new to statistics and am not sure how to analyse this: I was originally going to do an unpaired Student's t-test, but now I have added the two additional groups to separate the sexes.

Additionally, some of the patients I am using may have underlying health issues that affect the measurement I am taking. I know this in advance (for example, I might know that 2 of the 12 senior females have a health problem that changes the measurement), but unfortunately I cannot choose a different group of patients. Is there a way to account for this in advance? Or to note which ones have values indicative of health problems and treat them as outliers later?

So, without changing the outline of the project: if I am mainly comparing differences in this measurement between two age groups, while also taking sex into account, what statistical test should I use? Thank you
Choosing a statistical method to measure differences in age groups while accounting for sex
CC BY-SA 4.0
null
2023-03-29T02:17:08.043
2023-03-29T02:50:11.513
null
null
384369
[ "regression", "t-test", "age" ]
611069
2
null
611068
0
null
This would be a linear regression. Your measurement is the outcome (called "Y"), sex, and any other variable (such as the other health conditions you mentioned) are the predictors (called "X" variables). Variables other than age would also be called "covariates". This would allow you to test the relationship between age and your outcome, while adjusting for sex and health conditions.
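A sketch of what that could look like in R; `measurement`, `age_group`, `sex`, and `health_issue` are placeholder column names, and the data are simulated just so the code runs:

```
set.seed(1)
dat <- data.frame(
  measurement  = rnorm(48),
  age_group    = factor(rep(c("young", "senior"), each = 24)),
  sex          = factor(rep(c("M", "F"), times = 24)),
  health_issue = rbinom(48, 1, 0.15)
)
fit <- lm(measurement ~ age_group + sex + health_issue, data = dat)
summary(fit)   # the age_group coefficient is the age effect, adjusted for the others
```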
null
CC BY-SA 4.0
null
2023-03-29T02:50:11.513
2023-03-29T02:50:11.513
null
null
288142
null
611070
1
null
null
0
13
I am trying to analyze data from an experiment where my collaborators and I were interested in whether tweets about risk can influence people's risk perception (a continuous dependent variable). We manipulated whether the tweets came from a high or low credibility source, and we also manipulated the content of the tweet (text only, text with accompanying photo, text with accompanying data table). We also wanted to assess whether our results would replicate across different risk topics, so we also included a within-subjects factor so that each participant viewed three tweets, each one about a different risk topic. After viewing a tweet about a given topic, participants self-reported their risk perception (i.e., three risk perception measures per participant). However, participants did not necessarily see the same type of tweet across the three topics. For example, as I try to visualize in the figure below, one participant may have viewed the low-credibility/text-only tweet for Topic A, the low-credibility/text+photo tweet for Topic B, and the high-credibility/text+table tweet for Topic C. Another participant may have viewed different combinations of the tweet formats across the topics. In other words, within each of the three topics, participants could be randomly assigned to any one of the six tweet variations. What would be the best way to analyze these data? Would this be considered a nested design (I'm assuming not a crossed design)? My hunch is that this would require some kind of multilevel model to analyze. Any suggestions (or recommendations for similar threads) would be greatly appreciated! [](https://i.stack.imgur.com/bpJm7.png)
Analyzing data from a mixed experiment (nested or crossed?)
CC BY-SA 4.0
null
2023-03-29T02:52:24.273
2023-03-29T02:52:24.273
null
null
384371
[ "r", "repeated-measures", "experiment-design", "multilevel-analysis", "nested-data" ]
611071
2
null
610442
1
null
Another way of thinking about the difference in sizes between countries is to recognize that it also means that the precision differs between countries. The most common way of handling this is to include all observations (i.e. data from all individual runners) and to use a model that accounts for the structure of your data (multiple observations from an individual runner are not independent). Multilevel models are quite popular, though other approaches exist (e.g. sandwich estimators). So you could run a multilevel model (also called a mixed-effects model) and include runner ID as a random intercept (to account for the non-independence of observations). Country would be your main variable of interest. You might also want to include variables for runner age and sex.
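A sketch with `lme4`, using made-up column names (`time`, `country`, `runner_id`, `age`, `sex`) and simulated data just so the code runs:

```
library(lme4)
set.seed(1)
dat <- data.frame(                       # toy structure: 5 races per runner
  runner_id = factor(rep(1:60, each = 5)),
  country   = factor(rep(c("A", "B", "C"), each = 100)),
  age       = rep(round(runif(60, 20, 55)), each = 5),
  sex       = factor(rep(sample(c("M", "F"), 60, replace = TRUE), each = 5))
)
dat$time <- 240 + 10 * (dat$country == "B") + rnorm(300, sd = 15)
fit <- lmer(time ~ country + age + sex + (1 | runner_id), data = dat)
summary(fit)
```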
null
CC BY-SA 4.0
null
2023-03-29T03:03:39.613
2023-03-29T03:03:39.613
null
null
288142
null
611072
2
null
610442
2
null
Direct comparisons, as your table shows, are not appropriate because the base observation is not a runner but rather one runner in one race. For instance, imagine a country with poor overall running ability whose one talented runner participated in many races, leading to numerous observations. This runner's presence would introduce a severe positive bias into that country's mean estimate. In my opinion, there are two methods: - Take the average of each runner's performances and treat it as a measurement of their "real" running ability (with measurement error, of course). Then, you can use a t-test or linear regression to compare the means of different countries, with robust standard errors. - Use random-effects models: here, `i` denotes a particular runner and `j` denotes a particular race.
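The first method in a few lines of R (illustrative names; two countries and unequal numbers of races per runner, all simulated):

```
set.seed(1)
n_races <- sample(1:8, 40, replace = TRUE)           # unequal races per runner
races <- data.frame(
  runner  = factor(rep(1:40, times = n_races)),
  country = rep(ifelse(1:40 <= 20, "A", "B"), times = n_races)
)
races$time <- 240 + 8 * (races$country == "B") + rnorm(nrow(races), sd = 15)
ability <- aggregate(time ~ runner + country, data = races, FUN = mean)
t.test(time ~ country, data = ability)               # compare countries on per-runner means
```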
null
CC BY-SA 4.0
null
2023-03-29T03:12:36.543
2023-03-29T03:43:08.830
2023-03-29T03:43:08.830
341034
341034
null
611073
1
null
null
3
86
I was carrying out LDA (linear Discriminant Analysis) and noticed that the Scaling matrix produced by R is not normalized. Here is an example: ``` (res <- MASS::lda(Species~., iris)) Call: lda(Species ~ ., data = iris) Prior probabilities of groups: setosa versicolor virginica 0.3333333 0.3333333 0.3333333 Group means: Sepal.Length Sepal.Width Petal.Length Petal.Width setosa 5.006 3.428 1.462 0.246 versicolor 5.936 2.770 4.260 1.326 virginica 6.588 2.974 5.552 2.026 Coefficients of linear discriminants: LD1 LD2 Sepal.Length 0.8293776 0.02410215 Sepal.Width 1.5344731 2.16452123 Petal.Length -2.2012117 -0.93192121 Petal.Width -2.8104603 2.83918785 Proportion of trace: LD1 LD2 0.9912 0.0088 ``` Normalizing the scaling matrix: ``` scale(res$scaling, F, sqrt(colSums(res$scaling^2))) LD1 LD2 Sepal.Length 0.2087418 0.006531964 Sepal.Width 0.3862037 0.586610553 Petal.Length -0.5540117 -0.252561540 Petal.Width -0.7073504 0.769453092 attr(,"scaled:scale") LD1 LD2 3.973222 3.689878 ``` Why is the scaling matrix not normalized? Notice that if we try to fit lda manually: ``` x <- scale(as.matrix(iris[,-5]), TRUE, FALSE) y <- iris[,5] means <- tapply(x,list(rep(y,ncol(x)), col(x)), mean) Swithin <- crossprod(x - means[y,]) Sbetween <- crossprod(means) eig <- eigen(solve(Swithin, Sbetween)) eig[[2]][,eig[[1]] > 1e-8] [,1] [,2] [1,] 0.2087418 -0.006531964 [2,] 0.3862037 -0.586610553 [3,] -0.5540117 0.252561540 [4,] -0.7073504 -0.769453092 ``` Notice that the results Obtained by directly computing the scaling matrix is normalized. Of course the difference between the manually computed scaling matrix and the one produced by lda is the scale factor. But this has an impact on the posterior probabilities. Looking at the R code, i noticed they are doing `svd` twice rather than doing `eigen` decomposition. 
I tried analyzing the R code and, after hours, came up with the following: ``` a <- svd((x - means[y, ])/sqrt(nrow(x) - nrow(means))) S1 <- a$v %*% diag(1/a$d) S1 %*% svd(means %*% S1)$v [,1] [,2] [,3] [1,] 0.8293776 0.02410215 -3.176869 [2,] 1.5344731 2.16452123 1.965956 [3,] -2.2012117 -0.93192121 2.076870 [4,] -2.8104603 2.83918785 -1.447218 ``` Question: this is exactly the unnormalized scaling, but what is the intuition behind it? Main question: in LDA, why is the scaling matrix not normalized, and how can I obtain this unnormalized scaling matrix using an eigendecomposition? (Note that I am more interested in the eigendecomposition, since the eigenvectors are the solutions to the LDA problem, but I do not mind the SVD approach.) The `lda` objective: $$\max_{w} J(w) = \max_{w} \frac{w'S_{between}w}{w'S_{within}w}$$ ## EDIT: I have found out that the formula used in R is the same as the one used in Python. Am I missing something regarding LDA? Python, too, gives the unnormalized vectors. If interested in the Python code, check [here](https://github.com/scikit-learn/scikit-learn/blob/9aaed4987/sklearn/discriminant_analysis.py#L513)
Why is the Scaling Matrix in LDA unnormalized?
CC BY-SA 4.0
null
2023-03-29T03:41:43.357
2023-04-01T10:15:00.750
2023-04-01T07:21:47.233
3277
180862
[ "r", "discriminant-analysis", "eigenvalues", "svd" ]
611075
1
611212
null
1
34
I have some legacy code, inside a larger block of looped code that compares models. It essentially boils down to this example: ``` library(lme4) data("iris") test1 <- lmer(Sepal.Width ~ Petal.Width + (1|Species), data = iris, REML = FALSE) extractAIC(test1) ``` Which in this case gives the result: ``` > extractAIC(test1) [1] 4.00000 89.74898 ``` I understand from the documentation that the first return value is `edf, the ‘equivalent degrees of freedom’ for the fitted model`; however, I am also given to understand that degrees of freedom are not well defined in mixed models except under some special circumstances, so it is not clear what this value means for an `lmer()` object. My question then is: when using `lmer()`, is this value the number of parameters (the 'K' that appears in AIC formulas)? For my data I am getting numbers in the 30s to 40s. Also, I am quite confused as to how the 'number of parameters' is determined, and how important it is. I have found some very heavy explanations, but I need some more layman terms to explain how it is derived from the random and fixed effects and their variances and/or covariances (across those groups?). Thanks.
Results from extractAIC() from a mixed model lmer() are confusing
CC BY-SA 4.0
null
2023-03-29T05:46:33.447
2023-03-30T05:55:27.320
2023-03-29T06:28:01.413
1352
331950
[ "r", "mixed-model", "lme4-nlme", "aic", "degrees-of-freedom" ]
611076
2
null
611036
0
null
I think you want this model: ``` library(lmerTest) fit1 <- lmer(value ~ time * tmt + (time|samples), data=dat) #boundary (singular) fit: see help('isSingular') summary(fit1) ``` However, with your example data, it fits perfect (negative) correlation between random intercept and random slope. You could try providing better starting values to the solver but looking at the plot it seems justified to remove the random slope: ``` fit2 <- lmer(value ~ time * tmt + (1|samples), data=dat) plot(fit2) #no obvious issues, maybe some heteroskedasticity, but not enough data to assess that further summary(fit2) anova(fit2) #Type III Analysis of Variance Table with Satterthwaite's method # Sum Sq Mean Sq NumDF DenDF F value Pr(>F) #time 0.008 0.008 1 10.0000 0.0030 0.957318 #tmt 11.307 11.307 1 9.5941 4.5403 0.060083 . #time:tmt 42.941 42.941 1 10.0000 17.2419 0.001973 ** #--- #Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 ``` The interaction term `time:tmt` represents the difference between the slopes of the two treatments. In the example it is significant. The perfect correlation between random slope and random intercept in `fit1` indicates that any sample-effect on slope seems to be due to the different starting points (at time 0). If you try a model without correlation, i.e., `value ~ time * tmt + (time||samples)`, the fit is still singular and the random slope is estimated to be zero. Thus, I would conclude that there is no evidence of the location having a strong effect on the slopes.
null
CC BY-SA 4.0
null
2023-03-29T05:47:43.203
2023-03-29T05:54:51.567
2023-03-29T05:54:51.567
11849
11849
null
611077
2
null
610993
4
null
There are three possibilities - all the means are the same - some of the means are the same - all the means are different In the third case, you can't make Type I errors, because all the means are truly different, so we can ignore it. In the first case, the studentised range distribution gives the distribution of the maximum pairwise difference, so the Type I error rate is controlled at exactly the nominal level. In the second case, consider two means $\bar X_i$ and $\bar X_j$. If $\mu_i\neq\mu_j$ we can't make a Type I error. If $\mu_i=\mu_j$ then $\mu_i$ and $\mu_j$ are two of a set of identical means. The size of this set is less than $k$ (by assumption), so the distribution of the studentised range of this set is stochastically smaller than the studentised range of $k$ means. The Type I error is controlled conservatively. This isn't perfect, but it's better than some older methods, which failed to control the Type I error at all in the second case.
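To make the "stochastically smaller" argument in the second case concrete, here is a small simulation sketch (plain Python rather than R, with an illustrative seed, group size, and trial count of my own choosing): the range of a subset of equal-mean sample means is stochastically smaller than the range of all $k$ means, so a critical value calibrated for $k$ means is conservative for any equal-mean subset.

```python
import random
import statistics

random.seed(1)

def range_of_means(k, n):
    """Range (max - min) of k sample means, each from n draws of N(0, 1)."""
    means = [statistics.fmean(random.gauss(0, 1) for _ in range(n)) for _ in range(k)]
    return max(means) - min(means)

trials = 2000
range_k2 = statistics.fmean(range_of_means(2, 20) for _ in range(trials))
range_k3 = statistics.fmean(range_of_means(3, 20) for _ in range(trials))

# The range over a 2-mean subset is (on average, and stochastically) smaller
# than the range over all 3 means, so a critical value calibrated for k = 3
# is conservative when applied to any equal-mean subset of the means.
print(range_k2 < range_k3)
```

The same comparison holds for the studentized version of the range, since the pooled standard deviation does not depend on which subset is examined.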
null
CC BY-SA 4.0
null
2023-03-29T06:02:18.143
2023-03-29T06:02:18.143
null
null
249135
null
611078
2
null
611050
1
null
Generally speaking, you do not need to difference non-stationary data before running an ARIMAX model. However, if Y is non-stationary and/or any of the regressors are nonstationary (and you do not difference), you need to be very careful that you are not fitting a spurious regression. One way to check is to keep the `order=c(0,0,0)` in the Arima model and check the residuals for a unit root. Something like: ``` fit_OLS <- Arima(Y,order=c(0,0,0),xreg=X) checkresiduals(fit_OLS) library(urca) summary(ur.df(residuals(fit_OLS))) ``` If the residuals have a unit root, you have a spurious regression, and you should difference both Y and X before re-running the regression (alternatively, you can set the AR(p) term >=1, see slide 17 here: [https://www.ssc.wisc.edu/~bhansen/papers/jeeslides.pdf](https://www.ssc.wisc.edu/%7Ebhansen/papers/jeeslides.pdf)). If the residuals from this regression don't have a unit root, but both X and Y do have a unit root, then they are likely cointegrated and you can run an [error correction model](https://en.wikipedia.org/wiki/Error_correction_model). In the Arima function from the forecast package, if you set `Arima(Y,order=c(p,1,q),xreg=X)` where p and q are integers for the number of AR and MA terms, it will difference both Y and X. If you only want to difference Y, but not X, you will need to do it manually: ``` DY <- diff(Y) X_short <- X[2:length(X)] Arima(DY,order=c(0,0,0),xreg=X_short) ``` If you find that these residuals have a unit root (and DY doesn't, but X does) then you should difference X and refit, also adding ARMA(p,q) terms if appropriate. If the residuals don't have a unit root, you can keep the set-up as-is, adding ARMA(p,q) terms as appropriate. If you only want to difference X, but not Y, ``` Y_short <- Y[2:length(Y)] DX <- diff(X) Arima(Y_short,order=c(0,0,0),xreg=DX) ``` again checking to ensure the residuals do not have a unit root before adding the appropriate ARMA(p,q) terms. 
Finally, note that in the `Arima` function, the default is to include an intercept when `d=0` and not when `d=1` (in which case the intercept is a drift term). If you know you always want to include an intercept, you should use the `include.constant=TRUE` option in the `Arima` function.
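As a language-agnostic illustration of why the residual check above matters, here is a plain-Python sketch (stdlib only; the seed and series length are arbitrary choices) of a spurious regression between two independent random walks: the level-regression residuals behave like a unit-root process, while the residuals after differencing do not.

```python
import random

random.seed(42)
T = 2000

# Two independent random walks: regressing one on the other in levels is the
# textbook spurious regression, and the residuals then inherit a unit root.
x = [0.0]
y = [0.0]
for _ in range(T - 1):
    x.append(x[-1] + random.gauss(0, 1))
    y.append(y[-1] + random.gauss(0, 1))

def ols_residuals(yv, xv):
    n = len(yv)
    mx = sum(xv) / n
    my = sum(yv) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(xv, yv)) / \
            sum((xi - mx) ** 2 for xi in xv)
    intercept = my - slope * mx
    return [yi - (intercept + slope * xi) for xi, yi in zip(xv, yv)]

def lag1_autocorr(r):
    m = sum(r) / len(r)
    num = sum((r[i] - m) * (r[i - 1] - m) for i in range(1, len(r)))
    den = sum((v - m) ** 2 for v in r)
    return num / den

rho_levels = lag1_autocorr(ols_residuals(y, x))   # close to 1: spurious fit
dy = [b - a for a, b in zip(y, y[1:])]
dx = [b - a for a, b in zip(x, x[1:])]
rho_diff = lag1_autocorr(ols_residuals(dy, dx))   # close to 0 after differencing
print(round(rho_levels, 2), round(rho_diff, 2))
```

A formal unit-root test on the residuals (as with `ur.df` above) is the proper check; the lag-1 autocorrelation is just a quick diagnostic of the same phenomenon.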
null
CC BY-SA 4.0
null
2023-03-29T06:11:00.393
2023-03-29T06:21:05.077
2023-03-29T06:21:05.077
384374
384374
null
611079
1
null
null
0
25
I want to see the effect of using captions on ratings of speech delivery. I'm planning to recruit participants and assign each one of two conditions: (1) video with captions or (2) video without captions. So I was planning to use an independent-samples t-test. However, there are several videos, each with a different speaker. Thus, each video in either condition 1 or 2 will be shown to several participants. In this case, how can I account for this speaker factor in the analysis? I don't want to analyze how the different speakers affect the rating of speech delivery, and I'm not interested in the interaction effect, so I thought MANOVA wasn't what I'm looking for. In this case, what statistical test should I use?
How can I take into account conditions within the group when using independent samples t-test?
CC BY-SA 4.0
null
2023-03-29T06:20:15.163
2023-03-29T06:40:43.273
2023-03-29T06:40:43.273
362671
319408
[ "hypothesis-testing", "t-test" ]
611080
2
null
610993
4
null
Here is a visualisation for testing the hypothesis $\mu_1 = \mu_2 = \mu_3$, which may be tested based on 3 separate pairwise comparisons of the means using individual t-tests with t statistics that use a pooled estimate of the standard deviation. We simulate the distribution of the outcome of these t-tests for 3 samples of size 20 taken from a standard normal distribution (see [Comparing two, or more, independent paired t-tests](https://stats.stackexchange.com/questions/515582/)) and we plot two of those t-statistics; the third is linearly dependent on the other two, $t_{1} - t_{2} + t_{3} = 0$. (Note: the signs in this linear dependency depend on how the t-statistics are computed). [](https://i.stack.imgur.com/xEukV.png) What Tukey's method does, and also Anova, is define a region such that, given the hypothesis $\mu_1=\mu_2=\mu_3$ (and the assumptions of normal distribution and equal variance), the outcome will fall outside the boundary some $p$ percent of the time. If the hypothesis is wrong then typically the outcome will be outside the boundary much more often. Tukey's method uses a boundary based on the maximum magnitude of the t-statistics $$q = \max(|t_1|,|t_2|,|t_3|)$$ Anova uses a boundary on the overall size of the t-statistics $$F = \frac{t_1^2+t_2^2+t_3^2}{3}$$ Tukey's method will be more sensitive to situations where some $\mu_i$ are different in opposite directions (causing a large range). Anova will be more sensitive when several $\mu_i$ are different. If only a few $\mu_i$ have values at the far ends, then Tukey's method will already observe a significantly large distance. Anova will be more sensitive to situations where multiple $\mu_i$ cluster at the far ends. > However, my conclusion from this is that, in practice, if I observe that two groups have significantly different means, then the assumption that all k populations have the same mean must be false. 
If two means are different then the hypothesis that 'all means are the same' is obviously false, but this doesn't mean that a statement like 'some means are the same' is false. We could have the situation where two means are the same, $\mu_1 = \mu_2$, and one is different from the others, $\mu_3 \neq \mu_1$ and $\mu_3 \neq \mu_2$. In such a case you will get small t-statistics, except for the ones that compare with the third group. You see this in the graphic, where the pointy boundary for Tukey's method extends beyond the boundary for anova when two t-statistics are extreme and the other is nearly zero (Anova is more sensitive in that case).
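The linear dependency $t_1 - t_2 + t_3 = 0$ mentioned above can be verified numerically. This plain-Python sketch (my own illustration, with an arbitrary seed and one possible sign convention for the pairwise t-statistics) computes the three pooled-variance t-statistics for three simulated groups:

```python
import random
import statistics

random.seed(0)
n = 20
g1, g2, g3 = ([random.gauss(0, 1) for _ in range(n)] for _ in range(3))

# Pooled variance over the three groups (equal group sizes)
sp2 = sum(sum((v - statistics.fmean(g)) ** 2 for v in g)
          for g in (g1, g2, g3)) / (3 * (n - 1))
se = (2 * sp2 / n) ** 0.5

t12 = (statistics.fmean(g1) - statistics.fmean(g2)) / se
t13 = (statistics.fmean(g1) - statistics.fmean(g3)) / se
t23 = (statistics.fmean(g2) - statistics.fmean(g3)) / se

# With this sign convention, the three statistics are linearly dependent:
# t12 - t13 + t23 = 0 (up to floating-point rounding)
print(abs(t12 - t13 + t23) < 1e-12)
```

This is why only two of the three statistics can be plotted as free coordinates in the figure.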
null
CC BY-SA 4.0
null
2023-03-29T06:32:21.813
2023-03-29T08:51:25.563
2023-03-29T08:51:25.563
164061
164061
null
611081
1
null
null
1
24
I have a sample of cancer patients, of whom 41% died from the cancer and only 9% died from competing events. So the number of deaths due to competing events is smaller than the number of deaths due to cancer. Is it possible to use techniques for dealing with class imbalance in the Fine and Gray competing-risks model? If yes, which technique would be most suitable for this kind of problem?
Can 'oversampling' techniques be used for survival analysis of competing risks?
CC BY-SA 4.0
null
2023-03-29T06:44:35.110
2023-03-29T06:44:35.110
null
null
376830
[ "unbalanced-classes", "competing-risks" ]
611082
1
null
null
2
30
In an experiment I'm running, I'm trying to determine whether my light brightnesses are "clumped" or not. During the experiment, stuff will change rapidly from light to dark in a random way, but sometimes there are "clumps" remaining. How do I differentiate between conditions that have the same mean and standard deviation but obviously look different? Imagine one that is just random noise with mean=3 and SD=1, and another that is completely black except for one high point, but still has mean=3 and SD=1. Is that a 4th-moment test? I don't actually know how the random noise is distributed. I've run some tests on it and get something that looks like a bell curve; it matches a bell curve almost exactly visually. But it is clearly, very subtly, not one, because the p-value for the Chi-squared test is 0.0000001 (probably Gamma something something). Further, the random distribution is going to change throughout my experiment. I really just need to be able to look for the clumps.
Same mean, same SD, different spread
CC BY-SA 4.0
null
2023-03-29T07:11:28.767
2023-03-29T07:50:56.557
null
null
157328
[ "hypothesis-testing", "variance" ]
611083
2
null
348983
1
null
There are already several answers to this question but on my reading, they are all incorrect (or easily misinterpreted). A frequentist prediction interval works like a frequentist confidence interval. The "promise" of a frequentist confidence interval holds over repeated samples, e.g. if I construct a 95 percent confidence interval then over repeated samples 95 percent of the intervals will contain the parameter value. Any particular interval will either contain the parameter value or not, though. So once you've constructed the interval, there is no guarantee, for example, that the parameter falls in the interval with a certain probability. This is also how a prediction interval works. You 1) sample a distribution, 2) construct an 80 percent prediction interval and then 3) sample the distribution again. Now if you repeat steps 1 - 3 thousands of times, the outcome from step 3) will fall in the prediction interval from step 2) 80 percent of the time. However, if you follow steps 1) and 2) only once and then repeat step 3) thousands of times, the prediction interval from step 2) will almost certainly not cover the outcome from step 3) 80 percent of the time. So to the question, > I calculated an 80% prediction interval of the outcome of interest (proportion of patients, average temperature etc.) based on my previous study. Is it true that in the future study I will receive the outcome within the calculated prediction interval with 80% probability?" the answer is no.
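A quick simulation sketch contrasts the two procedures described above (plain Python, stdlib only; the normal-quantile approximation 1.2816, the sample size, and the repetition counts are my illustrative choices):

```python
import random
import statistics

random.seed(7)
z80 = 1.2816   # approximate two-sided 80% normal quantile
n = 200        # large n, so the t quantile is close to the normal one

def pi80(sample):
    """80% prediction interval for one future draw, normal approximation."""
    m = statistics.fmean(sample)
    s = statistics.stdev(sample)
    half = z80 * s * (1 + 1 / len(sample)) ** 0.5
    return m - half, m + half

# Procedure A: repeat steps 1-3, re-building the interval every time.
reps = 4000
hits = 0
for _ in range(reps):
    lo, hi = pi80([random.gauss(0, 1) for _ in range(n)])
    hits += lo <= random.gauss(0, 1) <= hi
cov_repeated = hits / reps          # close to 0.80 by construction

# Procedure B: steps 1-2 once, then step 3 thousands of times.
lo, hi = pi80([random.gauss(0, 1) for _ in range(n)])
cov_fixed = statistics.fmean(lo <= random.gauss(0, 1) <= hi for _ in range(4000))
# cov_fixed depends on the luck of the single sample behind (lo, hi);
# it is generally not 0.80, and there is no guarantee it is even close.

print(round(cov_repeated, 3), round(cov_fixed, 3))
```

With a smaller first sample, the spread of `cov_fixed` across re-runs of Procedure B becomes much wider, which is exactly the point of the answer.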
null
CC BY-SA 4.0
null
2023-03-29T07:19:00.020
2023-03-29T07:19:00.020
null
null
266571
null
611084
2
null
611082
1
null
In general, no finite number of moments (like expectation, standard deviation, skewness, kurtosis etc.) will uniquely determine a distribution. That is, two distributions can have the same first $n$ moments but still be "very different". That especially applies if your distributions are not from a specific parametric family, e.g., when some of your distributions are normal and others are multimodal. In practice, you could of course try comparing higher order moments, and that might solve your specific problem. Alternatively, you could either run statistical tests on whether two given samples come from the same distribution. Standard tests here are the two-sample Kolmogorov-Smirnov or the Anderson-Darling test. [What are the advantages (if any) of the Kolmogorov-Smirnov test over other tests?](https://stats.stackexchange.com/q/610924/1352) does not yet have an answer, but may be a start. Or you could calculate distances between two (or each pair of) samples, like the Wasserstein or Earth Mover's Distance, or possibly the Kullback-Leibler divergence, and see whether they are "far apart". Something like this would also allow clustering approaches based on the matrix of pairwise distances, if you have multiple samples and a clustering is of interest.
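As a concrete illustration of the first paragraph, here is a plain-Python sketch (stdlib only, arbitrary seed) of two samples with essentially the same mean and SD but very different shapes, together with a brute-force two-sample Kolmogorov-Smirnov statistic:

```python
import bisect
import random
import statistics

random.seed(3)
n = 2000

# Sample A: standard normal. Sample B: fair coin flips at -1/+1, which has the
# same mean (about 0) and the same SD (about 1) as A, but a two-point shape.
a = [random.gauss(0, 1) for _ in range(n)]
b = [random.choice((-1.0, 1.0)) for _ in range(n)]

def ecdf(sorted_sample, t):
    """Empirical CDF of a pre-sorted sample, evaluated at t."""
    return bisect.bisect_right(sorted_sample, t) / len(sorted_sample)

sa, sb = sorted(a), sorted(b)
# Two-sample Kolmogorov-Smirnov statistic, brute force over all data points
D = max(abs(ecdf(sa, t) - ecdf(sb, t)) for t in sa + sb)

print(round(statistics.fmean(a) - statistics.fmean(b), 3),
      round(statistics.pstdev(a) - statistics.pstdev(b), 3),
      round(D, 3))
```

The first two printed numbers are near zero (matched first two moments) while `D` is large, roughly $|F_{\text{normal}}(-1) - 0.5| \approx 0.34$, far beyond the usual KS critical values at these sample sizes.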
null
CC BY-SA 4.0
null
2023-03-29T07:50:56.557
2023-03-29T07:50:56.557
null
null
1352
null
611085
1
null
null
0
46
The question is related to another previous [question](https://stats.stackexchange.com/questions/610417/normality-test-after-rounding/610861#610861). I am interested in the process described in the text below the links. Out of curiosity, I ask myself whether it is a good option or not, and the purpose of this question is to discuss it. None of the following links answer this question: - Is normality testing 'essentially useless'? - Testing large dataset for normality - how and is it reliable? - Normality testing with very large sample size? We have available a dataset with $60'000$ data points and need to ensure that they come from a normal distribution. As is well known, it is unnecessary to test all $60'000$ data points, as this will greatly increase the power of the test and lead to rejecting with certainty the null hypothesis 'the data come from a normal distribution'. What is usually done is to test a smaller set of data drawn randomly from the larger dataset. However, this may not be representative enough, and since I have $60'000$ data points at hand, I used another approach to get a more accurate result. I randomly drew with replacement $50$ subsets of data from the larger dataset and, for each of these subsets, I performed a normality test at a significance level of $5\%$. At the end, we must observe that no more than $5\%$ of the tests failed in order to accept the null hypothesis. Criticism of this way of testing for normality on a big dataset is welcome. I am asking myself whether it is useless to do this or whether it actually gives a more accurate result. I do not want to discuss why I am doing a normality test and whether it is useful or not in my situation. The goal here is just to think about the process I have just described.
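One quick calculation that is relevant to the proposed decision rule (my own addition; it treats the 50 tests as independent, which they are not exactly, since the subsets are drawn with replacement from the same dataset):

```python
from math import comb

n_tests, alpha = 50, 0.05

# Even if every one of the 50 subsets is drawn from an exactly normal
# distribution, each 5%-level test still rejects with probability 0.05,
# so (under independence) the count of "failed" tests is Binomial(50, 0.05).
def binom_pmf(k):
    return comb(n_tests, k) * alpha ** k * (1 - alpha) ** (n_tests - k)

# "No more than 5% of 50 tests fail" means at most 2 rejections; the rule
# therefore declares non-normality whenever 3 or more tests reject:
p_rule_rejects_under_h0 = 1 - sum(binom_pmf(k) for k in range(3))
print(round(p_rule_rejects_under_h0, 3))  # roughly 0.46
```

So under this approximation the rule rejects perfectly normal data almost half the time, which is worth keeping in mind when judging the procedure.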
A Process for Testing Normality on a big Dataset
CC BY-SA 4.0
null
2023-03-29T08:18:46.690
2023-03-29T11:18:37.513
2023-03-29T11:18:37.513
383929
383929
[ "normal-distribution", "inference", "normality-assumption" ]
611086
2
null
270148
0
null
```
import numpy as np
import matplotlib.pyplot as plt

# Define the two classes
C1 = np.array([[0, -1], [3, -2], [0, 2], [-2, 1], [2, -1]])
C2 = np.array([[6, 0], [3, 2], [9, 1], [7, 4], [5, 5]])

# Calculate the mean of each class
mean_C1 = np.mean(C1, axis=0)
mean_C2 = np.mean(C2, axis=0)

# Calculate the within-class scatter matrix
Sw = np.dot((C1 - mean_C1).T, (C1 - mean_C1)) + np.dot((C2 - mean_C2).T, (C2 - mean_C2))

# Calculate the between-class scatter matrix
Sb = np.dot((mean_C1 - mean_C2).reshape(2, 1), (mean_C1 - mean_C2).reshape(1, 2))

# Calculate the optimal projection direction V
eigenvalues, eigenvectors = np.linalg.eig(np.dot(np.linalg.inv(Sw), Sb))
V = eigenvectors[:, np.argmax(eigenvalues)]

# Project the data onto the optimal line of projection
proj_C1 = np.dot(C1, V)
proj_C2 = np.dot(C2, V)

# Calculate the minimum and maximum values of the 1D data
min_val = min(np.min(proj_C1), np.min(proj_C2))
max_val = max(np.max(proj_C1), np.max(proj_C2))

# Plot the 1D data and the optimal line of projection
plt.scatter(proj_C1, np.zeros_like(proj_C1), color='blue')
plt.scatter(proj_C2, np.zeros_like(proj_C2), color='red')
plt.plot([min_val, max_val], [0, 0], color='green')
plt.show()
```
null
CC BY-SA 4.0
null
2023-03-29T08:27:56.480
2023-03-29T08:27:56.480
null
null
384384
null
611087
1
null
null
0
22
I want to test the cross-correlation between two non-stationary time series which are also cointegrated. I was wondering whether I can use the `ccf` command in R to do this, or whether I should use another one. As the variables are cointegrated, can I simply do the cross-correlation, or should I use a method to detrend the data before doing the cross-correlation? Is there a package in R that can be useful for this? Thanks!
Cross correlation between cointegrated time series
CC BY-SA 4.0
null
2023-03-29T08:50:01.743
2023-03-29T08:50:01.743
null
null
384386
[ "r", "time-series", "cointegration", "cross-correlation" ]
611088
1
611093
null
0
13
I have been analysing and plotting some different Gaussian curves in R to help me visualise them for a project I am working on. One such expression is the following: [](https://i.stack.imgur.com/fp37U.png) In R I have plotted this by writing the following function: ``` f2Liu <- function(x) { exp(-0.5*(x / 10000)^2) - exp(-0.5) / (1 - exp(-0.5)) } ``` When I visualise the plot it does not peak at zero as I expected, so I am wondering whether I have used incorrect syntax in the R function. The plot looks like the following and is labelled "Curve 2". [](https://i.stack.imgur.com/Uj8ug.png)
Gaussian Curve equation in R not peaking at zero
CC BY-SA 4.0
null
2023-03-29T09:34:49.733
2023-03-29T09:57:53.793
null
null
286044
[ "r", "normal-distribution", "ggplot2" ]
611089
1
null
null
0
39
[Murphy's result, section 8.3](https://www.cs.ubc.ca/%7Emurphyk/Papers/bayesGauss.pdf) on normal conjugacy states (substituting $\mathbf{y}$ for $\mu$) that if: $$\mathbf{y}|\Lambda \sim \mathcal N(\mathbf{0}, (\kappa\Lambda)^{-1}) $$ $$ \Lambda \sim \mathcal W(T, \nu) $$ then (when $n=0$), $$\mathbf{y} \sim t_{\nu-d+1} \left(\mathbf{0}, \dfrac{T}{\kappa(\nu-d+1)} \right).$$ Thus, it must be that $\text{Cov}(\mathbf{y}) \propto T$. --- Using the tower rule of covariance on the conditionals however, results in: $$\text{Cov}(\mathbf{y}) = \mathbb E(\text{Cov}(\mathbf{y} | \Lambda)) + \text{Cov}(\mathbb E(\mathbf{y} | \Lambda)) = \mathbb E(\Lambda^{-1})/\kappa + \mathbf{0} $$ Using results of the inverse wishart distribution, $$ \Lambda \sim \mathcal W(T, \nu) \Leftrightarrow \Lambda^{-1} \sim \mathcal{W}^{-1}(T^{-1}, \nu), $$ and so, $$\text{Cov}(\mathbf{y}) = \dfrac{T^{-1}}{\kappa(\nu-d-1)} \propto T^{-1}.$$ These results are inconsistent - would someone point out the mistake..? Should I not be using Murphy's result for $n=0$?
Inconsistency between normal-wishart marginal and marginal covariance computed using tower rule
CC BY-SA 4.0
null
2023-03-29T09:37:49.513
2023-03-29T10:05:24.490
2023-03-29T10:05:24.490
211930
211930
[ "bayesian", "normal-distribution", "covariance", "conjugate-prior", "wishart-distribution" ]
611091
1
null
null
1
50
I came across this conditional probability equation (from this paper: [https://ccn.berkeley.edu/pdfs/papers/CollinsFrank2013PsychReview.pdf](https://ccn.berkeley.edu/pdfs/papers/CollinsFrank2013PsychReview.pdf)) and I cannot understand how the RHS gives the LHS: $P(r_t|s_t,a_t,c_t)=\sum_{TS_i}P(r_t|s_t,a_t,TS_i)\times P(TS_i|c_t)$ For more context, $r_t$ is the reward obtained when performing $s_t,a_t$ after observing a context $c_t$, where the context $c_t$ can be used to predict a hidden variable $TS_i$. How does $TS_i$ get replaced by $c_t$ here? Also, the paper mentions that the probability of the hidden variable $TS_i$ given context $c_t$ can be updated as below using Bayes' rule: $P_{t+1}(TS_i|c_t)=\frac{P(r_t|s_t,a_t,TS_i)\times P(TS_i|c_t)}{\sum_{TS_i}P(r_t|s_t,a_t,TS_i)\times P(TS_i|c_t)}$ How is this update of $P(TS_i|c_t)$ possible if $r_t$ occurs as a consequence of selecting $(s_t,a_t)$, which is preceded by observing $c_t$ and predicting $TS_i$?
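For reference, one common way to read the first identity (my own reading: the law of total probability plus two conditional-independence assumptions that the paper's generative model appears to make) is:

```latex
% Step 1: law of total probability, conditioning on the latent task set TS_i
P(r_t \mid s_t, a_t, c_t)
  = \sum_{TS_i} P(r_t \mid s_t, a_t, TS_i, c_t)\, P(TS_i \mid s_t, a_t, c_t)

% Step 2: assume the reward is independent of c_t given TS_i
%         (the task set "screens off" the context):
%         P(r_t \mid s_t, a_t, TS_i, c_t) = P(r_t \mid s_t, a_t, TS_i)
% Step 3: assume the latent TS_i depends only on the context:
%         P(TS_i \mid s_t, a_t, c_t) = P(TS_i \mid c_t)

P(r_t \mid s_t, a_t, c_t)
  = \sum_{TS_i} P(r_t \mid s_t, a_t, TS_i)\, P(TS_i \mid c_t)
```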
How does this conditional probability work?
CC BY-SA 4.0
null
2023-03-29T09:43:06.283
2023-03-29T13:09:48.513
2023-03-29T13:09:48.513
180564
180564
[ "conditional-probability" ]
611092
2
null
260254
1
null
It is worth saying that the BatchNorm normalization equations vary across the literature, and I conjecture that the variations are not completely equivalent. The form presented in the DL book (here being discussed) can be contrasted with the one given [in this book](https://d2l.ai/chapter_convolutional-modern/batch-norm.html#fully-connected-layers) or [Ng's videos on youtube](https://youtu.be/tNIpEZLv_eg?t=182). Specifically, while several authors mention the sample mean, i.e., the mean of the entire batch, the DL book presents BatchNorm using vectors. Let's see if we can unravel the DL book notation. First, we have to consider the design matrix $\boldsymbol{H}$. What are the rows and the columns of this matrix? Well, in my opinion, that is ambiguous in the DL book: > with the activations for each example appearing in a row of the matrix Are the rows the activations $a_{i,1}, a_{i,2}, \cdots, a_{i,n}$ of all units for one training example $\boldsymbol{x}^{(i)}$? In other words, - the activations at layer $l$ for one training example are arranged in a row $i$. Or are the columns the activations $a_{1,j}, a_{2,j}, \cdots, a_{m,j}$ of one unit $j$ for all training examples $\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}, \cdots, \boldsymbol{x}^{(m)}$? This means that - the activations of a single unit across the mini-batch are arranged in a column $j$. I will stick to 2) for reasons that will become apparent next. Let's see a concrete design matrix of three examples being trained in a layer with three units: \begin{bmatrix} a_{1,1} & a_{1,2} & a_{1,3}\\ a_{2,1} & a_{2,2} & a_{2,3}\\ a_{3,1} & a_{3,2} & a_{3,3}\\ \end{bmatrix} Second, since we need a mean vector $\boldsymbol{\mu}$ and a std. dev. vector $\boldsymbol{\sigma}$, the notation cannot refer to the sample mean because that would be a scalar. > where $\boldsymbol{\mu}$ is a vector containing the mean of each unit Now, the question is whether we must average over rows or columns. 
Since we considered that one column represents one unit, we have \begin{align}\label{eq:average} \mu_1 &= \frac{1}{3} (a_{1,1} + a_{2,1} + a_{3,1})\\ &= \frac{1}{m} \sum_{i = 1}^m a_{i,1}\\ \mu_2 &= \frac{1}{m} \sum_{i = 1}^m a_{i,2}\\ \mu_3 &= \frac{1}{m} \sum_{i = 1}^m a_{i,3}.\\ \end{align} Correspondingly, \begin{align} \sigma_1 &= \sqrt \frac{|a_{1,1} - \mu_1|^2 + |a_{2,1} - \mu_1|^2 + |a_{3,1} - \mu_1|^2}{3}\\ &= \sqrt \frac{\sum^m_{i = 1} |a_{i,1} - \mu_1|^2}{m}\\ \sigma_2 &= \sqrt \frac{\sum^m_{i = 1} |a_{i,2} - \mu_2|^2}{m}\\ \sigma_3 &= \sqrt \frac{\sum^m_{i = 1} |a_{i,3} - \mu_3|^2}{m},\\ \end{align} resulting in column vectors $\boldsymbol{\mu} = \{\mu_1, \mu_2, \mu_3\}$ and $\boldsymbol{\sigma} = \{\sigma_1, \sigma_2, \sigma_3\}.$ > The arithmetic here is based on broadcasting the vector $\boldsymbol{\mu}$ and the vector $\boldsymbol{\sigma}$ to be applied to every row of the matrix $\boldsymbol{H}$. I do not know exactly what that means; let us continue. > Within each row, the arithmetic is element-wise, so $H_{i,j}$ is normalized by subtracting $\mu_j$ and dividing by $\sigma_j$. That would mean $a_{1,1}$ is normalized as $\frac{a_{1,1} -\mu_1}{\sigma_1}, \cdots$ \begin{bmatrix} \frac{a_{1,1} -\mu_1}{\sigma_1} & \frac{a_{1,2} -\mu_2}{\sigma_2} & \frac{a_{1,3} -\mu_3}{\sigma_3}\\ \frac{a_{2,1} -\mu_1}{\sigma_1} & \frac{a_{2,2} -\mu_2}{\sigma_2} & \frac{a_{2,3} -\mu_3}{\sigma_3}\\ \frac{a_{3,1} -\mu_1}{\sigma_1} & \frac{a_{3,2} -\mu_2}{\sigma_2} & \frac{a_{3,3} -\mu_3}{\sigma_3} \end{bmatrix} which indeed is well defined, since the average was taken on a per-column basis (see above). 
Only the most difficult part remains: > At training time, $\boldsymbol{\mu} = \frac{1}{m} \sum_i \boldsymbol{H}_{i,:},$ which I would interpret as $\boldsymbol{\mu} = \frac{1}{3} ( \boldsymbol{H}_{1,:} + \boldsymbol{H}_{2,:} + \boldsymbol{H}_{3,:}).$ Here, according to the DL book notation, $\boldsymbol{H}_{1,:}$ is the first row, $\boldsymbol{H}_{2,:}$ the second row, etc. Let us remove the fraction because it is not necessary at this point. This results in \begin{align} \boldsymbol{\mu} &= ( \boldsymbol{H}_{1,:} + \boldsymbol{H}_{2,:} + \boldsymbol{H}_{3,:})\\ &= \{H_{1,1}, H_{1,2}, H_{1,3}\} + \{H_{2,1}, H_{2,2}, H_{2,3}\} + \{H_{3,1}, H_{3,2}, H_{3,3}\}\\ &= \{H_{1,1} + H_{2,1} + H_{3,1}, \; H_{1,2} + H_{2,2} + H_{3,2}, \; H_{1,3} + H_{2,3} + H_{3,3}\}\\ &= \frac{1}{3} \{a_{1,1} + a_{2,1} + a_{3,1}, \; a_{1,2} + a_{2,2} + a_{3,2}, \; a_{1,3} + a_{2,3} + a_{3,3}\}\\ \boldsymbol{\mu} &= \{\mu_1, \mu_2, \mu_3\},\\ \end{align} where in the first line I dropped the fraction, in the second and third lines I expanded the rows and applied vector summation, in the fourth line I reintroduced the fraction and substituted the matrix elements with the corresponding activations, and in the fifth line I substituted the corresponding means. This is, of course, also well-defined and corresponds to our assumption that the activations of the same unit across the mini-batch examples are represented in one column of the design matrix $\boldsymbol{H}$. It's crazy, I know. The interpretation of the unit-per-column assumption is for me rather tricky. The intuition usually mentioned when presenting BatchNorm is that one would like to normalize hidden layers, similarly to when one normalizes features at the input layer, because it benefits learning. However, as I see it, normalization is being performed unit-wise and not feature-wise. I leave open the question of whether these two are equivalent (I would say rather not). 
That is, is normalizing the output of one unit over all examples in the mini-batch equivalent to normalizing the outputs of all units for one example in the mini-batch (i.e., averaging within a row of $\boldsymbol{H}$)? Is this somehow a misconception? Finally, are these two equivalent to normalizing the entire dataset using the sample mean and std. dev.?
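A small numerical sketch of the per-column interpretation developed above (plain Python, stdlib only; the batch size, layer width, and per-unit distributions are arbitrary illustrative choices):

```python
import random

random.seed(5)
m, d = 64, 3   # m examples in the mini-batch, d units in the layer

# Design matrix H: row i = activations of ALL d units for example i,
# column j = activations of unit j across the mini-batch.
H = [[random.gauss(j * 2.0, j + 1.0) for j in range(d)] for _ in range(m)]

# Per-unit (per-column) statistics, i.e. mu = (1/m) * sum_i H[i, :]
mu = [sum(H[i][j] for i in range(m)) / m for j in range(d)]
sigma = [(sum((H[i][j] - mu[j]) ** 2 for i in range(m)) / m) ** 0.5
         for j in range(d)]

# "Broadcasting": every row is normalized with the SAME per-column mu, sigma
Hn = [[(H[i][j] - mu[j]) / sigma[j] for j in range(d)] for i in range(m)]

# Each normalized column now has mean 0 and std 1 over the batch
col_means = [sum(Hn[i][j] for i in range(m)) / m for j in range(d)]
col_stds = [(sum(Hn[i][j] ** 2 for i in range(m)) / m) ** 0.5 for j in range(d)]
print([round(v, 6) for v in col_means], [round(v, 6) for v in col_stds])
```

Normalizing within a row instead (one example across units) would mix units with different scales and would generally not produce columns with zero mean and unit variance, which is one way to see that the two readings are not equivalent.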
null
CC BY-SA 4.0
null
2023-03-29T09:56:29.593
2023-03-29T09:56:29.593
null
null
372419
null
611093
2
null
611088
1
null
You have to add parentheses in the numerator. ``` f2Liu <- function(x) { (exp(-0.5*(x / 10000)^2) - exp(-0.5)) / (1 - exp(-0.5)) } ``` As it is currently written, the function in your code computes the following function: $$ f(d_{ij},d) = e^{-0.5*(\frac{d_{ij}}{d})^2} - e^{-0.5}/(1-e^{-0.5}). $$
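A quick numeric check of the corrected grouping (translated to plain Python purely for illustration; `f2_liu` and the default `d = 10000` mirror the R function above):

```python
import math

def f2_liu(x, d=10000.0):
    # With the parentheses fixed, the whole numerator is grouped together
    return (math.exp(-0.5 * (x / d) ** 2) - math.exp(-0.5)) / (1 - math.exp(-0.5))

# The corrected curve peaks at x = 0 with value 1 and reaches 0 at x = +/- d
print(f2_liu(0.0), f2_liu(10000.0))  # -> 1.0 0.0
```

Without the extra parentheses, the constant `exp(-0.5) / (1 - exp(-0.5))` is subtracted from the Gaussian term, which shifts the whole curve down instead of rescaling it, matching the misplaced peak in the question's plot.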
null
CC BY-SA 4.0
null
2023-03-29T09:57:53.793
2023-03-29T09:57:53.793
null
null
383929
null
611094
1
null
null
1
33
I analyze clinical research data. This is a comparison of two treatment groups, and the outcome is a continuous variable. We decided, post hoc, to adjust for predictors of the outcome that are imbalanced (we accept this even though it is not recommended). These are just robustness analyses, if not exploratory. There are two adjustment variables, a qualitative one, `varcat`, and a continuous one, `varcont`. My objective is to obtain the adjusted difference between the two groups with its confidence interval. I wonder whether I should do a linear regression or an ANOVA or ANCOVA. In clinical research we talk more about ANOVA and ANCOVA than about linear regression. Aren't these tests special cases of linear regression? Don't we reach the same conclusions as with linear regression? With a linear regression I would compute: ``` lm (outcome~treatment+varcont+varcat,data=data) ``` How should I compute this when doing an analysis of variance or covariance?
linear regression vs anova or ancova in clinical trial
CC BY-SA 4.0
null
2023-03-29T10:02:18.083
2023-03-29T10:02:18.083
null
null
269691
[ "r", "regression", "anova", "ancova", "clinical-trials" ]
611095
1
null
null
0
15
I have a data set with columns x_1, x_2, ..., x_30 and y, 31 columns in total. I use a clustering algorithm on the x_i columns, so that within each cluster the data instances are similar in terms of x_i. Then, I compare the distribution of the y-values in the full data set to that of the y-values per cluster, such that I get graphs like the following for the clusters: [](https://i.stack.imgur.com/ZoOPV.png) The full data set's y-value distribution is not normally distributed, and neither is the y-value distribution per cluster. I suppose the two sets are also not independent, as the cluster distribution is essentially a sample of the full data set distribution, which could then be regarded as the population(?). In terms of size, the full data set consists of ~10k instances and each cluster of >200 instances. I am looking to statistically test whether the cluster distribution differs from the full data set distribution, but I am unsure which test to use, or whether I have to use any at all. I am unsure because this comment (1) suggests that when I have a population and take a sample of it, no statistical test is required to claim there is a difference. But this leads to the question of whether my full data set should be considered the population, as it is essentially a sample I took from a large heap of available data. Any helpful input is appreciated, many thanks! (1) [https://stats.stackexchange.com/a/543849/376242](https://stats.stackexchange.com/a/543849/376242)
Statistical test for difference in distribution between clustered data set and full data set
CC BY-SA 4.0
null
2023-03-29T10:19:21.657
2023-03-29T10:19:21.657
null
null
376242
[ "hypothesis-testing", "clustering" ]
611096
2
null
584916
0
null
This is how to compute average marginal effects using censReg (without s.e.): ``` library(AER) library(censReg) library(tidyverse) data("Affairs") c.tobit <- censReg(affairs ~ age + yearsmarried + religiousness + occupation + rating, data = Affairs) af <- cbind(1,Affairs[,names(coef(c.tobit))[2:6]]) as_tibble(t(apply(af,1,FUN = function(x) margEff(c.tobit,x) ))) |> summarise(across(everything(), mean)) ``` You should get: ``` age yearsmarried religiousness occupation rating -0.0459 0.142 -0.431 0.0834 -0.585 ```
null
CC BY-SA 4.0
null
2023-03-29T10:26:02.453
2023-03-29T10:26:02.453
null
null
70901
null
611098
1
611101
null
3
147
Since an estimator is a function of statistical (or any?) data used to estimate an unknown parameter, is it valid (or is one allowed) to define your own estimator, like your own parameter?
Can I define my own estimator function?
CC BY-SA 4.0
null
2023-03-29T10:47:25.540
2023-03-29T11:03:55.513
null
null
384396
[ "descriptive-statistics" ]
611099
1
null
null
0
14
Although there are multiple questions about this, I cannot figure out a solution to my problem. I have built a simple neural network classifier on the MNIST database. I have divided the data into training, validation, and test sets. On the first two sets I obtained the hyperparameters, in particular the number of epochs, following an early-stopping procedure. Then I train a new model with the same architecture as the previous one on training+validation for the same number of epochs. I do not understand why this new model performs worse on the test set than the old one. I can also share the code, but I know that CrossValidated is for more conceptual questions.
Performances of train/test split vs train/validation/test split
CC BY-SA 4.0
null
2023-03-29T10:51:44.817
2023-03-29T10:51:44.817
null
null
379875
[ "classification", "conv-neural-network" ]
611100
1
611225
null
0
49
I would like to estimate marginal effects of two different variables. In STATA, I have a two-way fixed effects model of type: ``` xtset region year reg y x1 x2 c.x1#c.x2 x3 x4 i.region i.year [aw=pop], robust ``` In turn, I create the margins based on a specific set of quintiles from the variables, obtaining a 25-rows matrix of pairwise estimates. ``` margins, at(x1 =(0.02033865 0.0737 0.25173 1.04338 40.41884) x2 =(0.69 1.60 2.39 3.36 6.91)) ``` What would be the correct translation of this code into R, using the `marginaleffects` package? [Here](https://drive.google.com/file/d/1IeRUBF6EHGhhwodqlUtOhe5oVF0InzFI/view?usp=share_link) is the output of the STATA prediction applied to my dataset. Below, instead, I created a reproducible example in both STATA and R, in order to make testing easier. STATA: ``` set more off cd "/your/dir" use "https://dss.princeton.edu/training/Panel101.dta" * Run regression with the interaction xtset country year reg y x1 x2 c.x1#c.x2 x3 i.country i.year, robust centile x1 , centile(5 25 50 75 95) centile x2 , centile(5 25 50 75 95) * Create the margins based on distribution above margins, at(x1 =(-.16 .32 .64 1.1 1.4) x2 =(-1.54 -1.22 -.46 1.61 1.80)) saving(predictions_test, replace) ``` Same code for R, where I am not sure the marginaleffects function is called correctly in order to replicate STATA's: R: ``` library(tidyverse) library(sandwich) library(lmtest) library(plm) library(multiwayvcov) library(margins) library(MASS) library(marginaleffects) library(data.table) library(MASS) library(jtools) # for clustered SE library(dplyr) setwd('/your/dir/') # Load Data ----------------------------------------------------------- library(foreign) library(plm) data = data.table(read.dta("http://dss.princeton.edu/training/Panel101.dta")) setkeyv(data, c("country", "year") ) m = "x1 + x2 + I(x1 * x2) + x3 + factor(country) + factor(year)" f = as.formula(paste("y ~", m, collapse=" + ")) ols = lm(f, # weights = pop, data = data ) x1quants = 
c(-.16, .32, .64, 1.1, 1.4) x2quants = c(-1.54, -1.22, -.46, 1.61, 1.80) pred = predictions(ols, by = c("x1", "x2"), variables = c("x1", "x2"), newdata=datagridcf(x1=x1quants, x2=x2quants), # cross=T ) write.csv(pred, file='prediction_tests_R.csv', row.names = F) ```
Pairwise marginal effects at specific quintiles STATA-like in R's marginaleffects
CC BY-SA 4.0
null
2023-03-29T10:52:57.710
2023-03-30T08:59:18.000
2023-03-29T15:23:36.673
346599
346599
[ "r", "marginal-effect" ]
611101
2
null
611098
7
null
Sure! All of the common estimators had to be proposed by someone at some point, after all. While there is a great deal of literature that deals with simple and exotic settings, the problems you are solving might be unique, and if you can show why a new or unusual function of the data is a good (in whatever sense you defined “good”) estimator of some facet of your work, go for it. Among the weirdest estimators I know is that [a constant can be considered “admissible”](https://stats.stackexchange.com/q/48302/247274) in many ([but not all](https://stats.stackexchange.com/a/530017/247274)) situations. Yes, this means that there is some defense for going with $\hat \mu = 2$, regardless of the data. This example might give you an idea of how weird you can choose to get when it comes to making your own estimators.
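A small simulation sketch of that last point (plain Python, stdlib only; the sample size, trial count, and the two candidate true means are arbitrary illustrative choices):

```python
import random
import statistics

random.seed(11)
n, trials = 10, 5000
results = {}

for true_mu in (2.0, 5.0):
    # Monte Carlo MSE of the sample mean of n = 10 draws from N(true_mu, 1)
    sample_means = [statistics.fmean(random.gauss(true_mu, 1) for _ in range(n))
                    for _ in range(trials)]
    mse_mean = statistics.fmean((e - true_mu) ** 2 for e in sample_means)  # ~ 1/n
    mse_const = (2.0 - true_mu) ** 2   # MSE of the data-free rule "always guess 2"
    results[true_mu] = (mse_const, mse_mean)

# Near mu = 2 the constant wins outright; far away it is hopeless. No single
# estimator dominates it at every mu, which is the intuition behind why a
# constant can be admissible despite being a terrible idea in practice.
print(results)
```

The comparison shows the two MSE curves crossing: the constant is unbeatable at its own guess and arbitrarily bad elsewhere, so "admissible" should not be read as "sensible".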
null
CC BY-SA 4.0
null
2023-03-29T11:03:55.513
2023-03-29T11:03:55.513
null
null
247274
null
611102
5
null
null
0
null
null
CC BY-SA 4.0
null
2023-03-29T11:07:05.250
2023-03-29T11:07:05.250
2023-03-29T11:07:05.250
-1
-1
null