Columns: Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
609746
1
609751
null
5
588
Say I have a laser that emits pulses of light containing a random number of photons, and that number follows a Poisson distribution, so there is a mean number of photons per pulse. These pulses go through a filter, which results in each photon having some probability of being either absorbed or transmitted. I am trying to figure out what the distribution of the number of photons behind the filter will look like. How can this be described mathematically? I understand that the resulting distribution should have something to do with a certain combination of the Poisson and a binomial distribution. Maybe some sort of convolution? Other than that I am pretty clueless, since I only have basic knowledge of statistics.
Convolution of Poisson with Binomial distribution?
CC BY-SA 4.0
null
2023-03-16T21:03:30.183
2023-03-17T06:07:35.500
2023-03-16T21:33:46.693
173082
383425
[ "binomial-distribution", "poisson-distribution" ]
609747
1
null
null
0
23
Assume we have matrices $A_{n\times n}$ and $\Delta_{n\times n}$ and know that: $$A^{-1}_{i.}\Delta_{.j}\sim N(0,\,A^{-1}_{i.}\,\Sigma_j\,[A^{-1}_{i.}]^\top)\,,$$ where $\Delta_{.j}\sim N(0,\,\Sigma_j)$, $i$ indexes the rows and $j$ the columns; $_{i\,.}$ means row $i$ (a vector of dimension $1\times n$) and $_{.\,j}$ means column $j$ (a vector of dimension $n\times 1$). Note that this is the distribution of each entry of the $A^{-1}\Delta$ matrix. Can we derive the distribution of $(A^{-1}_{i.}\Delta_{.j})^2=A^{-1}_{i.}\Delta\,A^{-1}\Delta_{.j}$? Is it a $\chi^2$? If so, what are the parameters?
Distribution of power two of a term
CC BY-SA 4.0
null
2023-03-16T21:06:27.493
2023-03-16T21:56:04.733
2023-03-16T21:56:04.733
312007
312007
[ "normal-distribution", "linear-algebra", "multivariate-normal-distribution" ]
609748
2
null
570591
0
null
I assume you mean why "macro" and "weighted" precision are the same in the above example. "weighted" precision is actually a weighted version of "macro" precision. The above example illustrates the case where the classes are balanced (2 examples for each of the classes 0, 1, 2). However, if the classes are imbalanced (the code below), then the "macro" precision will differ from the "weighted" one.

```
from sklearn.metrics import precision_score

y_true = [2, 2, 2, 2, 1, 0]
y_pred = [2, 2, 1, 0, 2, 2]
print(precision_score(y_true, y_pred, average='macro'))     # 0.167
print(precision_score(y_true, y_pred, average='weighted'))  # 0.333
```

Basically, the function precision_score will first calculate the precision for each class. Then it uses np.average() with the weights set to the number of positive examples in each class. If the classes are well balanced, meaning the number of examples in each class is exactly the same, then "macro" and "weighted" precision will be the same. Otherwise they will differ; in this example, "weighted" precision is higher than "macro" precision because the majority class is the one predicted most accurately.
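The weighting mechanism can be reproduced without sklearn, which makes the claim easy to check by hand (the helper `per_class_precision` is just a name invented here for illustration):

```python
from collections import Counter

y_true = [2, 2, 2, 2, 1, 0]
y_pred = [2, 2, 1, 0, 2, 2]
classes = sorted(set(y_true))

def per_class_precision(c):
    # precision for class c: correct predictions of c / all predictions of c
    predicted = [t for t, p in zip(y_true, y_pred) if p == c]
    return sum(1 for t in predicted if t == c) / len(predicted) if predicted else 0.0

prec = {c: per_class_precision(c) for c in classes}  # {0: 0.0, 1: 0.0, 2: 0.5}
support = Counter(y_true)                            # class 2 has support 4

macro = sum(prec.values()) / len(classes)
weighted = sum(prec[c] * support[c] for c in classes) / len(y_true)

print(round(macro, 3), round(weighted, 3))  # 0.167 0.333
```

The weighted average is pulled toward the precision of class 2 (the class with the largest support), which is exactly why it exceeds the macro average here.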
null
CC BY-SA 4.0
null
2023-03-16T21:08:44.773
2023-03-16T21:08:44.773
null
null
383414
null
609750
2
null
609746
3
null
This sounds like a [compound Poisson distribution](https://en.wikipedia.org/wiki/Compound_Poisson_distribution). You have a Poisson distributed number $N$ of binomial trials $X_i$, each trial coming from one incoming photon. Each photon either passes through or not - as long as the absorption probability is constant, each such choice is a Bernoulli trial, i.e., $P(X_i=1)=p$. So in the end you have a binomial distribution for the total number of photons passing through, with the binomial parameter $N$ being Poisson distributed, or a Poisson-binomial compound. This may be helpful: [Compound of Binomial and Poisson random variable](https://math.stackexchange.com/q/4252625/51074).
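A quick Monte Carlo check of this (with arbitrary example values $\lambda = 10$ and transmission probability $p = 0.3$, not taken from the question): the mean and the variance of the thinned counts should both be close to $p\lambda$, as they are for a Poisson variable.

```python
import math
import random

random.seed(0)
lam, p = 10.0, 0.3  # example mean photon count and transmission probability

def poisson(rate):
    # Knuth's multiplicative algorithm for drawing a Poisson variate
    limit, k, prod = math.exp(-rate), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

counts = []
for _ in range(50_000):
    n = poisson(lam)                                           # photons in the pulse
    counts.append(sum(random.random() < p for _ in range(n)))  # photons transmitted

mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(round(mean, 2), round(var, 2))  # both near p * lam = 3.0
```

Mean and variance agreeing is the Poisson signature; a binomial with a fixed $n$ would have variance strictly below its mean.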
null
CC BY-SA 4.0
null
2023-03-16T21:26:09.250
2023-03-16T21:26:09.250
null
null
1352
null
609751
2
null
609746
9
null
Let's start by looking at a single pulse and figure out the distribution of the number of photons in that pulse that get through the filter. To do this, let $N$ denote the initial number of photons in the pulse and let $X$ denote the number of photons that make it through the filter. Then you have the model: $$\begin{align} N &\sim \text{Pois}(\lambda), \\[6pt] X|N &\sim \text{Bin}(N,\theta). \\[6pt] \end{align}$$ The marginal distribution of $X$ is obtained using the [law of total probability](https://en.wikipedia.org/wiki/Law_of_total_probability), to wit: $$\begin{align} p_X(x) \equiv \mathbb{P}(X=x) &= \sum_{n=0}^\infty \mathbb{P}(X=x|N=n) \cdot \mathbb{P}(N=n) \\[6pt] &= \sum_{n=0}^\infty \text{Bin}(x|n,\theta) \cdot \text{Pois}(n|\lambda) \\[6pt] &= \sum_{n=x}^\infty \frac{n!}{x! (n-x)!} \theta^x (1-\theta)^{n-x} \cdot \frac{\lambda^n}{n!} e^{-\lambda} \\[6pt] &= \frac{(\theta \lambda)^x}{x!} e^{-\theta \lambda} \sum_{n=x}^\infty \frac{((1-\theta)\lambda)^{n-x}}{(n-x)!} e^{-(1-\theta)\lambda} \\[6pt] &= \frac{(\theta \lambda)^x}{x!} e^{-\theta \lambda} \sum_{r=0}^\infty \frac{((1-\theta)\lambda)^r}{r!} e^{-(1-\theta)\lambda} \\[6pt] &= \text{Pois}(x| \theta \lambda) \sum_{r=0}^\infty \text{Pois}(r| (1-\theta)\lambda) \\[12pt] &= \text{Pois}(x| \theta \lambda). \\[6pt] \end{align}$$ This gives us the marginal distribution $X \sim \text{Pois}(\theta \lambda)$ for the number of photons that make it through the filter in a single pulse. This is called "thinning" the Poisson variable/process --- it leads to another Poisson variable/process but with the mean parameter reduced proportionately to the thinning. The result shown here can also be proved using the generating functions for the distribution; see e.g., [here](https://math.stackexchange.com/questions/580883/). 
--- Now suppose we have $k$ independent pulses of the same type (i.e., with the same parameters) and let $X_1,...,X_k \sim \text{Pois}(\theta \lambda)$ denote the number of photons that go through the filter from each of these pulses. Then the total number of photons that make it through the filter is: $$S_k = X_1 + \cdots + X_k.$$ The marginal distribution of $S_k$ is a $k$-fold convolution of the $\text{Pois}(\theta \lambda)$ distribution, which is: $$S_k \sim \text{Pois}(k \theta \lambda).$$ This is the distribution of the number of photons that make it through the filter from $k$ pulses with mean-photons $\lambda$ and filter penetration probability $\theta$.
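The marginal-distribution sum in the derivation above can also be checked numerically against the $\text{Pois}(\theta\lambda)$ pmf (example values $\lambda = 4$, $\theta = 0.25$, chosen arbitrarily; the sum is truncated where the Poisson tail is negligible):

```python
import math

lam, theta = 4.0, 0.25  # example parameters, not from the question

def mixture_pmf(x, n_max=100):
    # sum over n of Bin(x | n, theta) * Pois(n | lam), truncated at n_max
    total = 0.0
    for n in range(x, n_max):
        binom = math.comb(n, x) * theta**x * (1 - theta) ** (n - x)
        pois = lam**n * math.exp(-lam) / math.factorial(n)
        total += binom * pois
    return total

def poisson_pmf(x, rate):
    return rate**x * math.exp(-rate) / math.factorial(x)

for x in range(10):
    assert abs(mixture_pmf(x) - poisson_pmf(x, theta * lam)) < 1e-12
print("marginal matches Pois(theta * lam)")
```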
null
CC BY-SA 4.0
null
2023-03-16T21:31:50.677
2023-03-17T06:07:35.500
2023-03-17T06:07:35.500
173082
173082
null
609752
2
null
389203
0
null
I would add that LPMs are perfectly fine when the variable of interest is a binary variable, as the model will not create predicted values outside the 0-1 range. In this specific case, LPMs should even be preferred. In randomized experiments, for instance, LPMs are always preferred, as they are known to perform better than logistic regression both in terms of bias and precision, especially when the outcome variable has high prevalence (close to 1). Of note: I believe your second assumption (0CM) should be written conditional on the Xs.
null
CC BY-SA 4.0
null
2023-03-16T21:35:01.583
2023-03-16T21:35:01.583
null
null
383429
null
609753
1
615453
null
0
31
Suppose a data set is set up such that it has 5 independent variables and an array of 201 elements of smooth, continuous data as the dependent variable. I was wondering if there is a model optimized to handle a situation like that, and what form I should provide the data in. Here is an example of how the data would look. Each data plot has 5 parameters that relate to it. I'd like a model where I can input values for those 5 parameters and it produces a prediction for the corresponding data. So instead of just finding the relationship between the 5 independent variables and the 201 separate array elements, it would somehow find the relationship between the independent variables and, more generally, the shape of the data plot, if that's possible? [](https://i.stack.imgur.com/NoJbJ.jpg) If what I'm asking makes sense and it is possible, some guidance on the type or types of models to look into would be appreciated. Thanks all; I'm working in Matlab as well.
What type of machine learning or neural net algorithm would I use for predictions about the shape of a plot?
CC BY-SA 4.0
null
2023-03-16T21:39:03.230
2023-05-10T15:16:02.343
null
null
372026
[ "machine-learning", "matlab", "continuous-time" ]
609754
2
null
609746
6
null
### Intuition You can view it intuitively as follows. The Poisson distribution describes the number of counts for a Poisson process taking some time $T$ (like your pulse taking some time $T$ with photons being emitted randomly at a specific rate). You could randomly designate each event/case/photon as $X_i = 0$ or $X_i = 1$ (in the image below this is shown as black/white circles on a line). Effectively this is the same as generating two separate independent Poisson processes (each taking times $T_0, T_1$ with $T_0+T_1 = T$) and then mixing the points. You can verify that this is correct by the following thought: the sum of two Poisson variables is another Poisson variable, and each point will have probability $T_i/T$ of being of type $i$. [](https://i.stack.imgur.com/AYeB4.png) So the number of cases with $X_i = 1$ is another Poisson distributed variable. Related: [Probability of compound Poisson process](https://stats.stackexchange.com/questions/495173/probability-of-compound-poisson-process)
null
CC BY-SA 4.0
null
2023-03-16T21:42:14.120
2023-03-16T22:21:32.453
2023-03-16T22:21:32.453
164061
164061
null
609755
1
null
null
0
31
Let $W_t$ be a complex-valued stochastic process such that $$W_t = \sum_{l=-L}^L C_le^{2i\pi f_lt}$$ where $C_l$ is a complex-valued rv such that $\Bbb E(C_l) = 0$, $\Bbb E(C_l^2) < \infty$, $|l| \le L < \infty$. Also assume $f_l$ is a fixed real-valued constant with $f_{-l} = -f_l$. Show that if the $C_l$ are uncorrelated then $W_t$ is a stationary process. By linearity, we have that $\Bbb E(W_t)=0$ $\forall t$. Then, $Var(W_t)=\sum_{i=-L}^L\sum_{j=-L}^L e^{2i\pi f_it}e^{-2i\pi f_jt}Cov(C_i,C_j)=\sum_{i=-L}^L Var(C_i)<\infty$. Finally, let $h$ be a lag: $$Cov(W_{t+h},W_{s+h})=\sum_{i=-L}^L\sum_{j=-L}^L e^{2i\pi f_i(t+h)}e^{-2i\pi f_j(s+h)}Cov(C_i,C_j)$$ $$=\sum_{i=-L}^L\sum_{j=-L}^L e^{2i\pi f_it}e^{-2i\pi f_js}Cov(C_i,C_j)=Cov(W_{t},W_{s})$$ So $W_t$ is stationary, but I don't see where we should have used that $f_{-l} = -f_l$. Did I get it wrong somewhere?
Non used assumption in calculations of stationary process
CC BY-SA 4.0
null
2023-03-16T21:43:15.173
2023-03-16T21:43:15.173
null
null
361672
[ "time-series", "stationarity" ]
609756
1
null
null
1
108
I'm trying to use an RNN to predict a time series $\{y_t\}$, where $y_t \in \{0,1\}$ for all $1 \leq t \leq T$, given an input time series $\{x_t\}$. The elements of the target sequence are not necessarily independent. Which distributions would be a good model of this output? Ultimately, I want to use this distribution over the target time series to derive a loss for the input-output sequence pair. I know of binary cross entropy with logits, but this makes the assumption of independence between the elements of the target sequence.
What loss function should one use when using an RNN to predict a time series of binary values which are not necessarily independent?
CC BY-SA 4.0
null
2023-03-16T22:30:06.677
2023-03-16T22:30:06.677
null
null
243542
[ "time-series", "neural-networks", "independence", "loss-functions", "recurrent-neural-network" ]
609757
1
null
null
1
19
I have acquired videos from a dashcam installed on a vehicle that went around different portions of an airport. My objective is to detect cracks in the pavements. After identifying them, I then need to geolocate them, i.e., tell where the cracks appear in terms of lat/long. So far, I have labeled a handful of images and used FasterRCNN to detect the cracks. I can perhaps use the centers of the bounding boxes around each identified crack to locate them. Since I'm not familiar enough with computer vision and what it can offer, I'm stuck at this stage of my work. I looked around at sources and materials such as monoSLAM and triangulation techniques, but they weren't easy to follow. Basically, I would need something like a mapping function that can relate the pixel values to lat/long. I was hoping for a more straightforward solution or a better reference to understand my problem and hopefully be able to solve it.
Object detection and geolocating the bounding boxes
CC BY-SA 4.0
null
2023-03-16T22:38:20.140
2023-03-17T22:09:24.210
2023-03-17T20:15:25.893
383430
383430
[ "neural-networks", "object-detection", "gis" ]
609758
1
null
null
1
33
I am trying to obtain the required number of samples $n$ for a given confidence level $\alpha$, where $X_1, \ldots, X_n$ are Gaussian rvs with mean $\mu$ and variance $\sigma^2$. I know that \begin{equation} Pr\left(\frac{1}{n}\sum_{i=1}^n X_i - \mu \geq t\right) \leq \exp{\left(\frac{-nt^2}{2\sigma^2}\right)}, \end{equation} yet I don't know how to proceed. How does $\alpha$ enter the picture and lead me to obtain $n$?
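For what it's worth, setting the right-hand side of the displayed bound equal to $\alpha$ and solving for $n$ gives $n \ge 2\sigma^2 \ln(1/\alpha)/t^2$; a small sketch of that calculation (the numbers are arbitrary examples):

```python
import math

def required_n(alpha, t, sigma):
    """Smallest n with exp(-n t^2 / (2 sigma^2)) <= alpha."""
    return math.ceil(2 * sigma**2 * math.log(1 / alpha) / t**2)

n = required_n(alpha=0.05, t=0.1, sigma=1.0)
print(n)  # 600, since 2 * ln(20) / 0.01 ≈ 599.15
```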
Number of samples for Hoeffding's Bound with Gaussian R.V
CC BY-SA 4.0
null
2023-03-16T22:43:20.847
2023-03-16T22:59:28.680
2023-03-16T22:59:28.680
296197
383434
[ "probability", "probability-inequalities", "bounds" ]
609759
1
null
null
1
36
Studying Brownian motion and stochastic integrals in class, my professor rewrote the sum $$\frac{1}{2}\sum_{j=0}^{n-1} (W((j+1)T/n) - W(jT/n))^2$$ as $$\frac{1}{2}W^2(T) + \sum_{j=0}^{n-1} W(jT/n)(W(jT/n) - W((j+1)T/n))$$ I can't figure out the algebra between these two steps. I've tried looking at it as a telescoping sum but can't seem to get the same expression. Did my professor make a mistake, or am I missing something?
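The two expressions are in fact equal path by path: it is pure algebra using $W(0)=0$, so expanding the square and telescoping must give the same number for any discretized path, which a quick numerical check confirms:

```python
import random

random.seed(1)
n, T = 64, 1.0

# a discretized Brownian path with W(0) = 0 and increments of variance T/n
W = [0.0]
for _ in range(n):
    W.append(W[-1] + random.gauss(0.0, (T / n) ** 0.5))

lhs = 0.5 * sum((W[j + 1] - W[j]) ** 2 for j in range(n))
rhs = 0.5 * W[n] ** 2 + sum(W[j] * (W[j] - W[j + 1]) for j in range(n))

print(abs(lhs - rhs))  # zero up to floating-point roundoff
```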
Stochastic Calculus Algebra
CC BY-SA 4.0
null
2023-03-16T23:08:11.613
2023-03-16T23:49:59.450
2023-03-16T23:49:59.450
296197
383435
[ "brownian-motion", "stochastic-calculus" ]
609760
2
null
593708
1
null
Out-of-sample testing is the standard way to do this: train your model on most but not all of your data, then evaluate its performance on the held-out remainder. Even better might be to have multiple out-of-sample groups (something like cross validation). [Benavoli et al (2017)](https://www.jmlr.org/papers/volume18/16-305/16-305.pdf) discuss a number of ways to do statistical inference based on model performance in such groups. While the Benavoli paper argues in favor of Bayesian methods, it also discusses competing frequentist methods. There are, however, a few issues with out-of-sample testing. - You withhold precious training data. - There can be instability depending on how you split the data into training and holdout sets. This problem is worse the smaller the sample size. (Harrell (2015), for instance, recommends not using holdout sets unless there are at least 20,000 observations.) - If you do this, like your out-of-sample performance, and then train your model on all of the data combined, you lack holdout data to validate that final model. Harrell (2015) advocates for bootstrapping to address this. REFERENCES Benavoli, Alessio, et al. "Time for a change: a tutorial for comparing multiple classifiers through Bayesian analysis." The Journal of Machine Learning Research 18.1 (2017): 2653-2688. Harrell, Frank E. "Regression modeling strategies with applications to linear models, logistic and ordinal regression, and survival analysis." (2015).
null
CC BY-SA 4.0
null
2023-03-16T23:52:25.930
2023-03-17T10:53:56.077
2023-03-17T10:53:56.077
247274
247274
null
609761
2
null
321831
1
null
Your model is kind of behaving how it should. It knows that values are likely to be at the low end of the range of observed values. Can you fault the model for making predictions that reflect this fact? More mathematically, the prior probability of high values, [which often comes up when discussing "classification" models](https://stats.stackexchange.com/a/583115/247274) but which also makes sense for regression models, is low, and the posterior probability reflects this. Unless you have a strong signal from your features that such a value is likely, Bayes' theorem gives a low posterior probability to that region, as the posterior probability is dragged down by the low prior probability. (Since Bayes' theorem is a mathematical theorem and not an opinion about how statistical modeling should be performed, this applies to a frequentist regression; you do not have to use Bayesian modeling for this rationale to apply.) Most likely, you simply lack the features that predict the extreme events. You either do not have a predictive variable or you have not figured out the characteristics of your existing variables that can allow you to make such predictions (such as interactions and/or polynomial terms, as two possibilities). Unfortunately, it is not a given that you will be able to reliably predict such events, and part of your job as a statistician or data scientist (analyst, machine learning engineer, etc.) is to produce models that reflect this limitation of your data. Sure, your boss wants perfect performance and the ability to catch those values, but there is a phrase about who wants ice water, too. Just because someone wants something does not entitle them to it. EDIT Being explicit about how Bayes' theorem plays a role here, consider that you, at some level, want to estimate something like $P(Y>\ell\vert X=x)$, where $\ell$ is some threshold for being considered a "large" value, and $x$ is your feature vector.
$$ P(Y>\ell\vert X=x)=\dfrac{ P(X=x\vert Y>\ell)P(Y>\ell) }{ P(X=x) } $$ Since $P(Y>\ell)$ is small (there are few large values), the rest of the fraction needs to give a gigantic value to give a high posterior probability. If you do not have the features screaming out that the outcome will be large, this will not occur, hence my suspicion that you either lack a variable that is highly predictive of such a result or that you have not used your existing variables in the right way.
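A toy calculation with made-up numbers (the function and its inputs are purely illustrative, not from the question) shows how hard a small prior drags the posterior down:

```python
def posterior_large(prior_large, lik_ratio):
    """P(Y > l | X = x) from the prior P(Y > l) and the likelihood ratio
    P(X = x | Y > l) / P(X = x | Y <= l). Hypothetical numbers only."""
    odds = lik_ratio * prior_large / (1 - prior_large)
    return odds / (1 + odds)

# with a 1% prior, features merely 5x more likely under a large outcome
# still leave the posterior below 5%; it takes a 100x likelihood ratio
# just to pass 50%
print(round(posterior_large(0.01, 5.0), 4))    # 0.0481
print(round(posterior_large(0.01, 100.0), 4))  # 0.5025
```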
null
CC BY-SA 4.0
null
2023-03-17T00:04:20.990
2023-03-17T14:34:15.033
2023-03-17T14:34:15.033
247274
247274
null
609762
1
609767
null
0
35
I'm curious to read about the importance of normality testing in AB tests and clinical trials. It seems that there are a lot of mixed (and strong) opinions about the necessity of normality testing, the methods by which normality should be determined (quantitative vs. visual inspection), and even what data should be normally distributed (i.e., the data itself, or the residuals). Would anyone have a recommended reading/text about normality, its evaluation, and why it matters? The closer to intermediate level, the better; but any response is welcome.
Importance of normality testing in clinical trials / AB tests
CC BY-SA 4.0
null
2023-03-17T00:06:18.620
2023-03-17T04:13:45.690
2023-03-17T04:13:45.690
362671
323394
[ "normal-distribution", "references", "nonparametric", "normality-assumption", "parametric" ]
609763
2
null
591930
1
null
Thanks for this answer. In the meantime, I also got to understand the issue better. As Dave shows in the regression formula, we are modeling the "expected value of the dependent variable, given fixed covariate values", i.e. in regression we model the conditional mean. The mistake I made was to think that when the independent variable increases by 1 sd, the dependent variable decreases by 0.8 sd. More correct is to say that the conditional mean of the dependent variable decreases by 0.8 sd. This directly relates to the concept of "regression to the mean", which says that if you take an extreme group on one variable, the average of this group on another variable will always be closer to the mean (unless there is perfect correlation). So, if X increases by 1 sd, then the conditional mean of Y decreases by 0.8 sd, and if Y increases by 1 sd, then the conditional mean of X decreases by 0.8 sd. Both statements do not contradict each other. In the former statement, you move up in the distribution of X and model the mean of Y; in the latter statement, you move up in the distribution of Y and model the mean of X.
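This can be checked by simulation (standardized variables with correlation $-0.8$ as in the discussion; constructing $Y$ as $rX$ plus independent noise is a standard way to generate the correlation and is not from the original thread):

```python
import random

random.seed(2)
r = -0.8  # example correlation

# standardized bivariate normal: Y = r*X + sqrt(1 - r^2) * noise
ys = []
for _ in range(200_000):
    x = random.gauss(0, 1)
    y = r * x + (1 - r * r) ** 0.5 * random.gauss(0, 1)
    if 0.9 < x < 1.1:      # keep draws where X is about 1 sd above its mean
        ys.append(y)

cond_mean = sum(ys) / len(ys)
print(round(cond_mean, 2))  # close to -0.8, not -1.0
```

The conditional mean of $Y$ sits only $0.8$ sd below zero even though $X$ is a full sd above zero, which is exactly the regression-to-the-mean statement.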
null
CC BY-SA 4.0
null
2023-03-17T00:11:19.603
2023-03-17T00:11:19.603
null
null
136000
null
609764
1
null
null
1
26
The variance inflation factor (VIF) for an ordinary least squares linear regression coefficient is calculated using the $R^2$ of a linear model that uses the other features to predict the feature to which the coefficient corresponds. If the feature for which we want to calculate the variance inflation factor is binary, then such a model is a linear probability model, which has several known problems, among them being that such a model can make predictions that are literally impossible (e.g., predicted probability above $1$). I suppose the math of working with the feature covariances leads to such a model, but it seems bizarre that such a characteristic (VIF) would relate to a model that can make impossible predictions. Is there any resolution beyond a somewhat unsatisfying, "Yep, that's just what happens when you work with the feature covariance matrix"?
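To make the auxiliary-regression mechanics concrete: with a single other regressor, the auxiliary $R^2$ is just the squared correlation between the binary feature and that regressor, so the VIF can be computed by hand (toy data invented for illustration):

```python
def corr(a, b):
    # Pearson correlation, computed from scratch
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / (va * vb) ** 0.5

# toy data: x1 is the binary feature, x2 a continuous one that tracks it closely
x1 = [0, 0, 0, 1, 1, 1, 0, 1]
x2 = [1.0, 1.2, 0.8, 2.9, 3.1, 3.0, 1.1, 2.8]

r2 = corr(x1, x2) ** 2   # auxiliary R^2: "regress" x1 on x2
vif = 1 / (1 - r2)
print(round(vif, 1))     # large, since x1 and x2 are highly collinear
```

Note that the $R^2$ here is well defined even though the auxiliary linear probability model could produce fitted values outside $[0, 1]$; the VIF only ever uses the proportion of variance explained.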
If we have a binary variable in our linear regression, the VIF for its coefficient estimate uses the $R^2$ of a linear probability model. What gives?
CC BY-SA 4.0
null
2023-03-17T00:23:03.653
2023-03-17T00:23:03.653
null
null
247274
[ "regression", "least-squares", "linear-model", "variance-inflation-factor", "linear-probability-model" ]
609765
2
null
511105
1
null
An adjusted $R^2$ is problematic in practice for many machine learning models, since the degrees of freedom range from complicated to calculate to totally unclear. Combine that with the fact that many models are fit after performing preliminary steps like hyperparameter tuning, and calculating the degrees of freedom turns into a real mess. The degrees of freedom matter to the adjusted $R^2$ because the standard adjusted $R^2$ from OLS linear regression uses a comparison of the residual variance of your model and the residual variance of a model that predicts the mean of $y$ every time, and adjusted $R^2$ considers [unbiased estimators](https://stats.stackexchange.com/a/595469/247274) of those respective variances whose calculations require use of the degrees of freedom (that's the $n-p-1$ in the link). If you want an $R^2$-style measure of model performance that penalizes the model for having many parameters that put it at risk of overfitting, you might want to calculate an out-of-sample $R^2$-style metric. As I discuss [here](https://stats.stackexchange.com/a/609612/247274), several calculations can be defended as out-of-sample $R^2$-style calculations, though the one that makes the most sense to me that I discuss [here](https://stats.stackexchange.com/questions/590199/how-to-motivate-the-definition-of-r2-in-sklearn-metrics-r2-score) uses the following formula. $$ 1-\left(\dfrac{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\hat y_i \right)^2 }{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\bar y_{\text{training}} \right)^2 }\right) $$ If you can get a handle on the degrees of freedom, however, something like adjusted $R^2$ makes perfect sense as a comparison of variances that are estimated with unbiased estimators. Being explicit: - If you get a handle on the degrees of freedom, I say this makes perfect sense. (However, do not underestimate the difficulty of getting a handle on the degrees of freedom.) 
- I would interpret such a statistic as a comparison of unbiased estimates of error variances.
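As a concrete sketch of the displayed out-of-sample formula (the data values and the training mean are invented for illustration):

```python
def oos_r2(y_test, y_pred, train_mean):
    """Out-of-sample R^2: baseline is the training-set mean, per the formula above."""
    sse = sum((y, f) == (y, f) and (y - f) ** 2 for y, f in zip(y_test, y_pred))
    sst = sum((y - train_mean) ** 2 for y in y_test)
    return 1 - sse / sst

# hypothetical held-out data and model predictions
y_test = [3.0, 5.0, 4.0, 6.0]
y_pred = [2.8, 5.1, 4.3, 5.7]
print(round(oos_r2(y_test, y_pred, train_mean=4.2), 3))  # 0.957
```

Because the baseline is the training mean rather than the test mean, this quantity can go negative on held-out data, which is exactly the behavior you want from an honest performance measure.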
null
CC BY-SA 4.0
null
2023-03-17T00:43:36.320
2023-03-17T00:55:00.987
2023-03-17T00:55:00.987
247274
247274
null
609766
1
610256
null
1
52
So, the constrained TRPO objective is the following: $$ J(\theta) = E_t\left[ \frac{\pi_\theta(a_t|s_t)}{\pi_{old}(a_t|s_t)}\cdot A_t \right]\\ \text{s.t. } D_{KL}[\pi_{old}(\cdot|s_t)||\pi(\cdot|s_t)] \le \epsilon $$ The off-policy actor-critic objective instead is: $$ \nabla J(\theta) = E_t\left[ \frac{\pi_\theta(a_t|s_t)}{b(a_t|s_t)} \nabla \log \pi_\theta(a_t|s_t)\cdot A_t \right] $$ which is nothing other than the on-policy objective with an importance sampling correction. However, that $\log$ appears in the derivation via its derivative being $f'/f$, so we can bring it back to: $$ \begin{align} \nabla J(\theta) & = E_t\left[ \frac{\pi_\theta(a_t|s_t)}{b(a_t|s_t)} \frac{\nabla \pi_\theta(a_t|s_t)}{\pi_\theta(a_t|s_t)}\cdot A_t \right]\\ & = E_t\left[ \frac{\nabla \pi_\theta(a_t|s_t)}{b(a_t|s_t)}\cdot A_t \right] \end{align} $$ Now, if we take the gradient for the TRPO update, we also have: $$ \nabla J(\theta) = E_t\left[ \frac{\nabla\pi_\theta(a_t|s_t)}{\pi_{old}(a_t|s_t)}\cdot A_t \right]\\ \text{s.t. } D_{KL}[\pi_{old}(\cdot|s_t)||\pi(\cdot|s_t)] \le \epsilon $$ So my question is: why is TRPO considered on-policy, if the update is literally the off-policy update, just with an additional constraint? I get that the KL constraint is there so the policy does not deviate too much, so in an off-policy setting this might lead to learning a policy that is the same as, or very close to, the behavioral one. However, I don't see why we could not use TRPO also in the fully off-policy setting (I'm referring to its unconstrained version cited in the PPO paper, which introduces the KL as a penalty and makes the optimization easily implementable in the off-policy setting).
Is TRPO just a "safe" version of off-policy policy iteration?
CC BY-SA 4.0
null
2023-03-17T00:50:21.470
2023-03-22T02:26:39.963
2023-03-17T01:44:17.543
346940
346940
[ "reinforcement-learning", "policy-gradient", "policy-iteration" ]
609767
2
null
609762
2
null
The requirement for normality is greatly misunderstood. First, distributional assumptions are made about the conditional distribution, not the marginal (in your words, "the data itself"). For OLS, this is equivalent to the residuals being roughly normal. However, OLS actually makes no assumption about the likelihood (see the Gauss-Markov theorem), and the estimates therefrom remain consistent and unbiased when the assumptions of the Gauss-Markov theorem are satisfied (assuming the conditional mean is correctly specified). A good resource on this would be Introductory Econometrics by Wooldridge. It's an accessible book written for undergraduate-level students with a minimal background in stats. In all honesty, the normality assumption is perhaps the least important, and one should pay more attention to endogeneity and potential omitted variables, in my (humble) opinion. Whereas other violations can be rectified (non-linearity of the mean can be addressed with splines, heterogeneity of variance with robust covariance estimates), you can't fix something you didn't measure. In the context of AB tests, you have to be a little more careful. Oftentimes, the marginal distribution of, say, revenue may not have finite variance, and so OLS shouldn't be applied. Even in the case where the variance is finite, the distribution may be so long-tailed that the sampling distribution of the coefficients may not resemble a normal distribution with any sample collected in a reasonable amount of time.
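To see the long-tail point concretely, here is a quick simulation (a lognormal with a large sigma is used as an arbitrary stand-in for a long-tailed outcome like revenue; it is not from the answer): even at $n = 200$ per sample, the sampling distribution of the mean is still visibly right-skewed rather than normal.

```python
import random

random.seed(3)

def sample_mean_lognormal(n, sigma=2.0):
    # mean of n draws from a long-tailed lognormal(0, sigma)
    return sum(random.lognormvariate(0.0, sigma) for _ in range(n)) / n

# sampling distribution of the mean, 2000 replications at n = 200
means = [sample_mean_lognormal(200) for _ in range(2000)]
m = sum(means) / len(means)
s = (sum((x - m) ** 2 for x in means) / len(means)) ** 0.5
skew = sum(((x - m) / s) ** 3 for x in means) / len(means)
print(round(skew, 2))  # strongly positive: far from the 0 of a normal
```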
null
CC BY-SA 4.0
null
2023-03-17T01:08:01.120
2023-03-17T02:15:20.440
2023-03-17T02:15:20.440
22047
111259
null
609769
1
null
null
0
5
I don't know the correct terminology to describe my problem, so the title is probably inadequate and I'll have to describe the background information in extra detail. I have an instrument that takes measurements from a vehicle driving down the road. At regular intervals (nominally every 1/10 mile) the measurements are averaged and the following are recorded: - Start & End GPS Coordinates of the interval - Average measurement value - Standard deviation of measurement value - Number of measurements (In hindsight, recording every measurement sample would have made this much easier) The GPS coordinates were then transformed into Chainage values (Chainage is a distance measure along the road from its start). For example: |Interval |Start Chainage |End Chainage |Avg |Std |Samples | |--------|--------------|------------|---|---|-------| |0 |253.857 |253.957 |215 |32 |229 | |1 |253.957 |254.056 |229 |22 |228 | |2 |254.056 |254.159 |221 |22 |232 | |3 |254.159 |254.258 |258 |51 |101 | |4 |254.258 |254.358 |265 |75 |93 | This data is recorded along the same stretch of road multiple times. The recordings don't all start at the same place, nor are the intervals all precisely the same length. A set of recordings (Runs) may look like the following diagram. [](https://i.stack.imgur.com/OC0Rh.png) A plot of a run's averages may look like this. (The confidence intervals probably aren't correct, because we don't have N samples of the same point but an average of N different points along the interval.) [](https://i.stack.imgur.com/SUUTh.png) I want to be able to compare the different runs. I think I could do something like Kriging to create an interpolated estimation for each run, then compare those. I don't think I can use Kriging because I don't have point samples but averages of sample intervals. I'm not sure what method would help this situation, and I haven't had much luck searching for methods because I don't know the correct terminology to use for the search.
I'd appreciate any help!
Creating a linear interpolation estimation from cluster averages of samples
CC BY-SA 4.0
null
2023-03-17T02:56:53.773
2023-03-17T02:56:53.773
null
null
383424
[ "geostatistics" ]
609772
1
610027
null
1
84
In linear or logistic regression, we have the following (adapted from Foundations of Machine Learning): As in all supervised learning problems, the learner $\mathcal{A}$ receives a labeled sample dataset $\mathcal{S}$ containing $N$ i.i.d. samples $\left(\mathbf{x}^{(n)}, y^{(n)}\right)$ drawn from $\mathbb{P}_{\mathcal{D}}$: $$ \mathcal{S} = \left\{\left(\mathbf{x}^{(1)}, y^{(1)}\right), \left(\mathbf{x}^{(2)}, y^{(2)}\right), \ldots, \left(\mathbf{x}^{(N)}, y^{(N)}\right)\right\} \subset \mathbb{R}^{D} \quad \overset{\small{\text{i.i.d.}}}{\sim} \quad \mathbb{P}_{\mathcal{D}}\left(\mathcal{X}, \mathcal{Y} ; \boldsymbol{\beta}\right) $$ --- I am used to the iid assumption in machine learning, but in the case of conditional maximum likelihood, I have the following question. To use maximum likelihood for linear/logistic regression, it is required that the $y^{(n)} \mid \mathbf{x}^{(n)}$ be independent across observations $n$; in other words, the responses are conditionally independent given the inputs. The question is, do we need the strong iid assumption mentioned above to invoke MLE?
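For reference, and in the notation above, the object that conditional MLE maximizes is the conditional likelihood, which factorizes as soon as the observations are independent across $n$ given their inputs; the marginal distribution of the $\mathbf{x}^{(n)}$ never enters it: $$ L\left(\boldsymbol{\beta}; \mathcal{S}\right) = \prod_{n=1}^{N} p\left(y^{(n)} \,\middle|\, \mathbf{x}^{(n)}; \boldsymbol{\beta}\right), \qquad \ell\left(\boldsymbol{\beta}\right) = \sum_{n=1}^{N} \log p\left(y^{(n)} \,\middle|\, \mathbf{x}^{(n)}; \boldsymbol{\beta}\right). $$ So the factorization itself needs independence across $n$, but it is weaker than the full iid assumption on the pairs $\left(\mathbf{x}^{(n)}, y^{(n)}\right)$.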
Is the iid assumption in Linear Regression necessary?
CC BY-SA 4.0
null
2023-03-17T05:54:33.220
2023-03-20T09:41:07.007
null
null
253215
[ "regression", "machine-learning", "probability" ]
609774
1
null
null
0
32
I am performing a missing data analysis from a modified (the original data has been modified for the purposes of teaching) Randomised Controlled Trial. The missing variable of interest is "prior use of aspirin before randomisation". There are eight other variables in the dataset. The key variable of interest for this post is "systolic blood pressure". Comparing those with missing data for prior aspirin use with those not missing these data gives me the following mean systolic blood pressures: - Not Missing (158.7) - Missing (160.6) Univariate logistic regression (previous aspirin as the outcome variable - Yes/No, systolic blood pressure as the exposure variable) gives me an OR = 1, a significant p-value = 0.002, but a 95% CI: 1.00-1.00. Similarly, multivariate regression gives me an OR = 1, p = 0.004, but a 95% CI: 1.00-1.00. How should I interpret this? Can I say that there is a statistically significant difference in the systolic blood pressure of those that are missing data for previous aspirin use (160.6) compared to those with data for previous aspirin use (158.7)? If so, how should I understand the apparent oddity in the confidence interval? Is it simply because of the accuracy (number of significant figures) in the OR reporting? Thanks
How to interpret a 95% CI: 1.00-1.00 with a p-value = 0.004
CC BY-SA 4.0
null
2023-03-17T06:37:20.963
2023-03-17T06:37:20.963
null
null
378584
[ "confidence-interval", "p-value", "odds-ratio" ]
609775
2
null
317856
1
null
A conditional probability "distribution" is essentially just a bunch of conditional probabilities, sufficient to fully characterise the conditional behaviour of one random variable given another event or random variable. A probability "distribution" can be characterised in various different ways (e.g., by a probability measure, mass/density function, CDF, generating function, etc.) and while there is no single mathematical object that is the distribution, we may refer to them as such as a shorthand. In general, there are two main classes of mathematical objects which would characterise a "conditional probability distribution" and which we might refer to as such: - Conditional distribution at a given conditioning point: This is characterised by any function that fully characterises the probabilistic behaviour of $X$ conditional on a specific event $Y=y$. - Conditional distribution for any conditioning point: This is characterised by any function that fully characterises the probabilistic behaviour of $X$ conditional on any value of another random variable $Y$. Let me illustrate this by example. Suppose we have two random variables $X$ and $Y$ and suppose we define the conditional cumulative distribution function (CDF): $$F(x|y) \equiv \mathbb{P}(X \leqslant x | Y=y) \quad \quad \quad \text{for all } x \in \mathscr{X} \text{ and } y \in \mathscr{Y}.$$ The function $F( \cdot |y)$ for a fixed value of $y$ fully characterises the distribution of $X$ given the conditioning point $Y=y$, so we would consider this to be a "conditional distribution" in the shorthand sense previously described. The function $F( \cdot | \cdot)$ fully characterises the distribution of $X$ given any conditioning point for $Y$, so we would also consider this to be a "conditional distribution", again, in the shorthand sense. (The latter object is much more general, and it actually gives a whole bunch of conditional distributions, corresponding to each of the possible values for $Y=y$.)
null
CC BY-SA 4.0
null
2023-03-17T07:34:02.520
2023-03-17T07:34:02.520
null
null
173082
null
609777
1
null
null
1
26
I use linear models here to simplify my final approach. Let's assume that a first linear model is estimated to obtain a response that will then be used as a covariate in a subsequent linear model. How could one analytically find the uncertainty around the response of the second model, given that uncertainty in the first model should propagate into the second one? In equations, the first model is given by $$x_{i} = \alpha_0 + \alpha_1 z_{i} + \epsilon_{i}\, .$$ Fitted values from the previous equation will be the covariate of the following linear model: $$y_{i} = \beta_0 + \beta_1 \hat{x}_{i} + \nu_{i}\, .$$ Both $\epsilon_{i}$ and $\nu_{i}$ are common homoscedastic error terms. Is there an analytic formulation for $Var(\mathbf{y})$ which accounts for $Var(\mathbf{x})$? A workaround could be to compute $Var(\mathbf{x})$ from the variance-covariance matrix of the first model and then run a bootstrap procedure for the second model, simulating, for each bootstrapped instance, new values of $\mathbf{x}$ from the estimated mean and variance. For illustrative purposes only, below is this first "half-analytical" solution using simulated data. I wrote it in `R`, but I commented it extensively and used basic routines to broaden the readership. Any help will obviously be highly appreciated.
```
## number of data-points
n <- 100

## simulating x from model 1
alphasT <- c(1, 2)
Z <- cbind(1, seq(0, 1, length=n))
sigma.x <- 0.8
x <- Z %*% alphasT + rnorm(n, 0, sigma.x)

## estimating model 1
tZZ <- t(Z) %*% Z
tZZ1 <- solve(tZZ)
tZx <- t(Z) %*% x
x.hat <- Z %*% tZZ1 %*% tZx

## Variance (and standard errors) of x.hat
res.x <- x - x.hat
RSSx <- sum(res.x^2)
sigma2.x.hat <- RSSx/(n-2)
V.alphas <- tZZ1 * sigma2.x.hat
V.x.hat <- Z %*% V.alphas %*% t(Z)
se.x.hat <- sqrt(diag(V.x.hat))

## model for y (given the previously fitted x)
## simulating y assuming we know x.hat
betasT <- c(2, 3)
X <- cbind(1, x.hat)
sigma.y <- 1.5
y <- X %*% betasT + rnorm(n, 0, sigma.y)

## estimating the model
tXX <- t(X) %*% X
tXX1 <- solve(tXX)
tXy <- t(X) %*% y
y.hat <- X %*% tXX1 %*% tXy

## standard errors assuming x.hat as given
res.y <- y - y.hat
RSSy <- sum(res.y^2)
sigma2.y.hat <- RSSy/(n-2)
V.betas <- tXX1 * sigma2.y.hat
V.y.hat <- X %*% V.betas %*% t(X)
se.y.hat <- sqrt(diag(V.y.hat))

## analytic confidence intervals assuming x.hat as given
y.up <- y.hat + 1.96*se.y.hat
y.low <- y.hat - 1.96*se.y.hat

## uncertainty around y by bootstrap
## with different x.hat at each instance
## simulated from estimated mean and variance

## number of bootstrapped instances
nb <- 1000

## where to save all nb fitted y
Ys.hat <- matrix(0, n, nb)
for(i in 1:nb){
  ## sample residuals
  res.y.s <- sample(res.y, replace=TRUE)
  ## bootstrapped y
  y.s <- y.hat + res.y.s
  ## simulating x.hat
  x.hat.s <- rnorm(n, mean=x.hat, sd=se.x.hat)
  ## estimating bootstrapped y for a given simulated x (here I use lm())
  fit.y.s <- lm(y.s ~ x.hat.s)
  ## save fitted bootstrapped y for a given simulated x
  Ys.hat[,i] <- fit.y.s$fitted
}

## extract 95% empirical confidence interval for y
y.up.b <- apply(Ys.hat, 1, quantile, probs=0.975)
y.low.b <- apply(Ys.hat, 1, quantile, probs=0.025)

## plotting simulated and estimated y
## and both 95% confidence intervals from both approaches
plot(x.hat, y)
lines(x.hat, y.hat, col=2)
lines(x.hat, y.up, col=3, lty=2, lwd=3)
lines(x.hat, y.low, col=3, lty=2, lwd=3)
lines(x.hat, y.up.b, col=4, lty=3, lwd=3)
lines(x.hat, y.low.b, col=4, lty=3, lwd=3)
```
Combine variance from two linear models
CC BY-SA 4.0
null
2023-03-17T07:53:38.683
2023-03-17T07:53:38.683
null
null
221645
[ "variance", "linear-model", "analytical" ]
609778
2
null
609629
1
null
(In the comments, I see you think it's solved, so this is more general advice.) To identify the root cause of a poor model, you should start by getting a baseline model to compare against. As you are using H2O, that is very easy, as you can just swap `h2o.randomForest` with, say, `h2o.glm` (a generalized linear model). You could also try `h2o.deeplearning` for a neural net, but using a linear model as a baseline is often good as it is quick, and the defaults are good. So if the results are equally bad, the data, or how it is loaded, becomes the suspect. If the glm is much better, the way you are using randomForest becomes suspect. With H2O, you have H2O Flow: open a web browser at `http://localhost:54321/` and you can see the models and data you have loaded, and you can then explore and analyze them. Another thing, which also came up in the comments, is that if you are overfitting you should be seeing good scores when evaluating on your training data. If you don't, then I'd be suspicious of the data again. E.g. with this data, the very best training model score you could hope for is 0.33. ``` x1,x2,y 1,2,A 2,3,A 1,2,B 2,3,B 1,2,C 2,3,C ``` This kind of thing can happen if the data has been loaded badly, e.g. it was treated as comma-separated when it was actually semicolon-separated. Again, viewing the data in H2O Flow can confirm whether this has happened.
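To make the 0.33 figure concrete, here is a small pure-Python sketch (not H2O; it uses the toy dataset above) computing the best achievable training accuracy when identical feature rows carry conflicting labels:

```python
from collections import Counter, defaultdict

# the six-row toy dataset from above
rows = [((1, 2), "A"), ((2, 3), "A"), ((1, 2), "B"),
        ((2, 3), "B"), ((1, 2), "C"), ((2, 3), "C")]

groups = defaultdict(list)
for x, y in rows:
    groups[x].append(y)

# the best any model can do on this training set: predict the majority
# label within each identical feature pattern
correct = sum(Counter(ys).most_common(1)[0][1] for ys in groups.values())
best_acc = correct / len(rows)
print(round(best_acc, 2))  # 0.33
```

Each feature pattern appears with three different labels, so at most one of three can be predicted correctly.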
null
CC BY-SA 4.0
null
2023-03-17T08:23:42.933
2023-03-17T08:23:42.933
null
null
5503
null
609779
2
null
609713
1
null
[Klein and Moeschberger](https://www.springer.com/us/book/9780387953991) (Second Edition) devote Chapter 10 to additive models. Practical Notes to Section 10.2 discuss this matter. > The estimates of the baseline hazard rate are not constrained to be nonnegative by this least-squares estimation procedure... Similarly, with respect to cumulative hazards and associated survival curves either via Nelson-Aalen (your estimate) or product-limit approaches, they warn: > ... Some care is needed in interpreting either estimate because $\hat H(t | Z)$ [estimated cumulative hazard] need not be monotone over the time interval. What you found is thus to be expected. You might consider using the `slope` values returned for the model as linear approximations to the cumulative hazard. From the manual page for `summary.aareg()`: > The slope is based on a weighted linear regression to the cumulative coefficient plot, and may be a useful measure of the overall size of the effect... (Of course the plots [of coefficients over time] are often highly non-linear, so it is only a rough substitute). You might consider flexible fits of the cumulative hazards that enforce monotonicity. [This page](https://stats.stackexchange.com/q/197509/28500) has some suggestions. You probably need to weight the individual estimates over time similarly to how the `slope` values are estimated by `summary.aareg()`. My sense, as a non-expert in additive models, is that they are best used for evaluating associations of predictors with outcome in a highly flexible way, rather than for modeling survival functions.
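As a sketch of the "enforce monotonicity" suggestion: isotonic regression via the pool-adjacent-violators algorithm (PAVA) turns a non-monotone sequence of cumulative-hazard estimates into the closest nondecreasing sequence in least squares. This is illustrative Python, not part of the R `survival` workflow, and it is unweighted; a weighted variant along the lines of the `slope` weighting would pass per-point weights instead of 1s:

```python
def pava(y):
    """Pool-adjacent-violators: nondecreasing least-squares fit to y."""
    out = []  # list of [block mean, block size]
    for v in y:
        out.append([float(v), 1])
        # merge blocks while the monotonicity constraint is violated
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, w2 = out.pop()
            m1, w1 = out.pop()
            out.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    # expand blocks back to one fitted value per input point
    return [m for m, w in out for _ in range(w)]

print(pava([1, 3, 2, 4]))  # [1.0, 2.5, 2.5, 4.0]
```

Applied to a non-monotone estimated cumulative hazard over time, this yields the nearest monotone version.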
null
CC BY-SA 4.0
null
2023-03-17T08:26:30.957
2023-03-17T19:33:05.420
2023-03-17T19:33:05.420
28500
28500
null
609780
1
null
null
0
15
Suppose you are playing a guessing game where there is a box with three marbles. There are three kinds of boxes. Box A contains red, green and blue marbles; Box B contains red, green, and yellow marbles. Box C contains black, white and purple marbles. You are told that there is a 60% chance that this is Box A, 20% that it is Box B, and 20% that it is Box C. Suppose you make a choice, and open the box. Although Box B and C are both equally unlikely, there is an intuitive sense in which Box B would be a less "surprising" choice than Box C, because it contains the red and green marbles (which are 80% likely to be encountered). How do I express this intuition probabilistically? Is the KL divergence between my marble color beliefs before and after opening the box smaller for Box B than Box C? How do I express this? I know that P(red AND green AND yellow) = P(black AND white AND purple) = 20%.
Expressing intuition about surprise factor between three boxes in a guessing game
CC BY-SA 4.0
null
2023-03-17T09:08:07.333
2023-03-17T10:35:35.497
2023-03-17T10:35:35.497
383458
383458
[ "probability", "information-theory" ]
609781
1
null
null
0
36
I had to design a loss function of the form max(0, x). It's not differentiable at x = 0. In order to train with gradient descent, what should I do? - I have learned that a subgradient can be used instead. Does anything need to be changed in the code, or will PyTorch/TF compute a subgradient automatically? - Or should I use a surrogate loss? If so, what kinds of surrogate losses exist for my loss?
My loss has a non-differentiable point
CC BY-SA 4.0
null
2023-03-17T09:20:00.170
2023-03-17T09:27:53.027
2023-03-17T09:27:53.027
383459
383459
[ "machine-learning", "neural-networks", "gradient-descent", "backpropagation" ]
609783
2
null
583735
1
null
I met the same issue as you. I tried to fine-tune a large language model with millions of parameters, but it output exactly the same thing for each batch. Finally, I figured out that I had used too large a learning rate, 0.0001, for the Adam optimizer. Normally we use a tiny learning rate during the fine-tuning stage, but I had forgotten to adjust it. In your case, though you are not using a pretrained model, you still set too large a learning rate. As you stated, changing sigmoid to ReLU gives you different outputs. I agree that sigmoid is not the best choice of activation function. However, take a look at how sigmoid and ReLU differ: ReLU passes no gradient at all (and LeakyReLU only a small one) when the input value is below zero, while sigmoid always passes some gradient. That is why, even with a big learning rate and a small batch size, the training process is still stable; the design of ReLU helps prevent this training vulnerability. But when you cannot alter the architecture and need to stick with a certain kind of deep learning model, the best fix is to turn down the learning rate.
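A tiny illustration of the gradient difference mentioned above (plain Python, just the scalar derivatives):

```python
import math

def relu_grad(x):
    """Derivative of ReLU(x) = max(0, x) for x != 0."""
    return 1.0 if x > 0 else 0.0

def sigmoid_grad(x):
    """Derivative of the logistic sigmoid: s(x) * (1 - s(x))."""
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)

print(relu_grad(-2.0))     # 0.0  -> no gradient for negative inputs
print(sigmoid_grad(-2.0))  # small but nonzero
print(sigmoid_grad(0.0))   # 0.25, the sigmoid's maximum slope
```

With a saturating-at-zero activation, a large update that pushes pre-activations negative simply stops propagating gradient, whereas a sigmoid keeps leaking small gradients everywhere.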
null
CC BY-SA 4.0
null
2023-03-17T10:18:17.490
2023-03-17T10:18:17.490
null
null
314285
null
609784
1
null
null
1
24
When reading Cox's Principles of Statistical Inference (CUP, 2006), I'm struggling with the case-control study example (Example 7.16, pp. 154ff.) in §7.6.6 on pseudo-likelihood. Set-up: - Random variables $Y,Z,W$ are modelled by $P(Y=1|W=w,Z=z)=L(\alpha+\beta^Tz+\gamma^Tw)$ (7.65) in which $L$ is the logistic function. - ‘Individuals with $y = 1$ will be called cases and those with $y = 0$ controls. The variables $w$ are to be regarded as describing the intrinsic nature of the individuals, whereas $z$ are treatments or risk factors whose effect on $y$ is studied’. - Inclusion in the case-control sample $\mathcal{D}$ is introduced with $P(\mathcal{D}|Y=y,Z=z,W=w)=p_y(w)$. - Now consider ‘the variable $y$ is fixed for each individual and the observed random variable is $Z$’, and the likelihood $f_{Z|D,Y,W}=f_{Y|D,Z,W}f_{Z|D,W}/f_{Y|D,W}$ (7.71). Question: what I don't understand is the justification provided when ignoring the term $f_{Z|D,W}$: [](https://i.stack.imgur.com/mCrEi.png) It is followed by a discussion (p. 157) that ‘if $f_Z(z)$ completely known [...] it seems unlikely that it would be wise to use it.’ I tried to follow the references to Prentice and Pyke (1979) and Farewell (1979) in the notes, as well as his later book Case-Control Studies (coauthored with Ruth H. Keogh, CUP, 2014), in which §4.8 looks relevant, but I am unable to get a clearer idea. Any help with clarifying this is much appreciated.
Example 7.16 'Case-control study' in Cox, Principles of Statistical Inference
CC BY-SA 4.0
null
2023-03-17T10:32:57.207
2023-03-17T10:32:57.207
null
null
383460
[ "references", "likelihood", "case-control-study" ]
609785
1
null
null
3
36
I got an interesting question at an interview. Assume that we have already trained and deployed a fraud detection model for some online service, and it has helped us decrease the number of fraudulent transactions by 50% (for example, we temporarily ban suspicious cards). This is great, but how can we deal with the fact that we now don't have any explicit feedback for transactions that our model marked as fraudulent? How can we retrain our model with this in mind? I see pseudo-labeling such transactions as one possible solution. And it would probably be good to use soft labels for them, to handle cases where our model was not very confident in its decision. But are there other options? And what are the real-world solutions to this? I didn't find much when researching this on the internet. Most articles mention concept drift in terms of changes in user behaviour, but I haven't seen my particular question discussed anywhere. I haven't had real-world fraud detection experience, so it would be really interesting to hear your opinion.
Fraud detection feedback loop
CC BY-SA 4.0
null
2023-03-17T10:38:09.260
2023-04-01T07:27:26.923
2023-04-01T07:27:26.923
247277
247277
[ "machine-learning", "fraud-detection" ]
609786
2
null
609726
8
null
I like the idea of a Bernoulli model, as you could start with some strong assumptions and gradually relax them. Let $Y_{ij} \sim \mathrm{Bernoulli}(p_{ij})$, $i=1,\ldots,n$, $j=1,\ldots,5$, be the response given by the $i$th person to the $j$th question. The probability $p_{ij}$ is a function of the explanatory variables, $\mathrm{logit}(p_{ij}) = \beta^{(0)}_{j} + \sum_{k=1}^K \beta^{(k)}_{j} x_{ik}$. You could try: - $p_{ij}=p_i$, i.e. a person is equally likely to respond yes to any of the 5 questions. Their final score is then $S_i=\sum_{j=1}^5 Y_{ij} \sim \mathrm{Bin}(5, p_i)$, as in @utobi's comment. You can drop the $j$ subscripts from the regression coefficients. - A person is more likely to respond yes to some questions than others, but the relationship between predictors and outcome is the same for every question. This means that the slope coefficients ($\beta^{(k)}_{j}$) are the same for all $j$, but the intercepts ($\beta^{(0)}_{j}$) are different for different $j$. - The relationship between predictors and outcome varies by question, so you have different intercepts and slopes for each $j$. At this point, you could think about whether a prior distribution on the regression coefficients makes sense.
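To make option 1 concrete, here is a small simulation sketch (plain Python; the coefficient and predictor values are invented for illustration) showing that under that assumption a person's total score behaves as $\mathrm{Bin}(5, p_i)$, with $p_i$ the inverse-logit of the linear predictor:

```python
import math
import random

random.seed(1)

def inv_logit(t):
    return 1.0 / (1.0 + math.exp(-t))

beta0, beta1 = -0.5, 1.2   # made-up coefficients
x_i = 0.8                  # made-up predictor value for one person

p_i = inv_logit(beta0 + beta1 * x_i)

# simulate many questionnaires: 5 yes/no items, each Bernoulli(p_i)
n_sim = 200_000
scores = [sum(random.random() < p_i for _ in range(5)) for _ in range(n_sim)]
mean_score = sum(scores) / n_sim

# the Binomial(5, p_i) model implies E[S_i] = 5 * p_i
assert abs(mean_score - 5 * p_i) < 0.02
```

Relaxing to options 2 and 3 amounts to letting the intercept, and then the slopes, depend on the question index $j$.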
null
CC BY-SA 4.0
null
2023-03-17T10:42:10.047
2023-03-17T10:42:10.047
null
null
238285
null
609788
2
null
609726
10
null
@Doctor Milt's response is on the right track, but I think this is much more naturally handled using a multilevel logistic (or probit) regression, with each person's response to each item (`0` or `1`) as the outcome variable. You would definitely want to allow the average probability of a `1` to vary across participants and across questions (random intercepts). You would probably also allow the influence of your predictors to vary across questions (random slopes), although depending on your data set this model might be too complicated to estimate. This is a class of [item response theory](https://en.wikipedia.org/wiki/Item_response_theory) model. With a data frame containing one row per response, the random intercepts and slopes model would be coded in R as ``` glmer(response ~ predictor1 + predictor2 + (1 | participant_id) + (1 + predictor1 + predictor2 | question_id), data = your_data, family = binomial) ``` You might also consider using `brms` to fit this model. `brms` has excellent support for item response theory models (see [https://arxiv.org/pdf/1905.09501.pdf](https://arxiv.org/pdf/1905.09501.pdf)).
null
CC BY-SA 4.0
null
2023-03-17T10:56:50.850
2023-03-17T11:03:55.287
2023-03-17T11:03:55.287
22047
42952
null
609789
1
null
null
3
36
I am using a neural network that takes complex numbers as input and produces complex numbers as output. I convert the input complex numbers into real values by stacking the real and imaginary parts as a vector. I use the ReLU function as the activation function. I need to impose a unit-modulus constraint on the output values. I tried to implement the unit-modulus constraint by calculating the amplitude of the complex numbers at the output of the activation function and rescaling the output by it. I used the rescaled version of the outputs to calculate the loss for the neural network. I am not getting satisfactory loss values from the neural network. I would like to clarify two things. - Is it good practice to modify (in my case, rescale) the output of the activation function before supplying it to the loss function? - How can I impose the unit-modulus constraint on the output in other ways?
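For reference, the rescaling step described above amounts to the following for a single real/imaginary pair (a framework-agnostic Python sketch; the small `eps` guarding against division by zero is my own addition, not part of the original setup):

```python
import math

def unit_modulus(re, im, eps=1e-12):
    """Rescale the complex number re + i*im onto the unit circle."""
    r = math.hypot(re, im)          # amplitude |z|
    return re / (r + eps), im / (r + eps)

re, im = unit_modulus(3.0, 4.0)
print(re, im)  # 0.6, 0.8 (up to the eps perturbation)
```

In a network, the same operation would be applied elementwise to each (real, imaginary) pair of the output vector before the loss.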
Putting a constraint on the output of the neural network
CC BY-SA 4.0
null
2023-03-17T11:30:28.293
2023-03-17T11:30:28.293
null
null
383468
[ "regression", "neural-networks", "complex-numbers", "activation-function" ]
609790
1
null
null
2
76
According to [Statistics LibreTexts](https://stats.libretexts.org/Bookshelves/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/05%3A_Special_Distributions/5.09%3A_Chi-Square_and_Related_Distribution) Equation 5.9.20, a non-central chi-square distribution can be expressed as a Poisson-weighted sum of central chi-square distributions. $\tag{1}g(y) = \sum_{k=0}^\infty e^{-\lambda/2} \frac{(\lambda/2)^k}{k!}f_{n+2k}(y)$ Here, the term $e^{-\lambda/2} \frac{(\lambda/2)^k}{k!}$ is a Poisson probability, $g(y)$ is the pdf of the non-central chi-square distribution with non-centrality parameter $\lambda$ and $n$ degrees of freedom, and $f_{n+2k}(y)$ is the pdf of the central chi-square distribution with $n+2k$ degrees of freedom. This holds true if the underlying Gaussian distributions have unit variance, according to the Wikipedia article on the [non-central chi-square distribution](https://en.wikipedia.org/wiki/Noncentral_chi-squared_distribution). In that case, if $J\sim \mathrm{Poisson}(\lambda/2)$, then $\mathcal{X_{k+2J}^2} \sim \mathcal{X_{k}^{\prime2}}(\lambda)$. How do I express the non-central chi-square as a Poisson-weighted sum of central chi-squares in the case of non-unit variances? I have been successful so far only with unit variance. Should I also change the Poisson weights $e^{-\lambda/2} \frac{(\lambda/2)^k}{k!}$ to account for the non-unit variance? The Digital Communications textbook by Proakis gives the pdfs of both the central and non-central chi-square distributions with non-unit variances, and I could successfully simulate them and they fit the distribution.
In case it is of interest, the pdf of the non-central chi-square can be written, according to the textbook Digital Communications by Proakis, as \begin{equation} \tag{2}p(x) = \begin{cases} \frac{1}{2\sigma^2}\left(\frac{x}{\lambda}\right)^{\frac{n-2}{4}}e^{-\frac{\lambda+x}{2\sigma^2}}\,\mathcal{I}_{\frac{n}{2}-1}\!\left(\frac{\sqrt{\lambda}}{\sigma^2}\sqrt{x}\right), & \text{if } x > 0\\ 0, & \text{otherwise} \end{cases} \end{equation} Here, $\mathcal{I}_v(x)$ is the modified Bessel function of the first kind of order $v$, $n$ is the degrees of freedom, and $\lambda$ is the non-centrality parameter, given by $\lambda = \sum_{i=1}^{n}m_i^2$ where the $m_i$ are the means of the underlying Gaussian random variables with common variance $\sigma^2$. The pdf of the central chi-square for Gaussian variables with zero mean and common variance $\sigma^2$ is given by \begin{equation} \tag{3}p(x) = \begin{cases} \frac{1}{2^{n/2}\Gamma(\frac{n}{2})\sigma^n}x^{\frac{n}{2}-1}e^{-\frac{x}{2\sigma^2}}, & \text{if } x > 0\\ 0, & \text{otherwise} \end{cases} \end{equation}
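As a numerical sanity check of Eq. (1) in the unit-variance case (a plain-Python sketch; the truncation at 40 mixture terms and the integration grid are my own choices), the Poisson mixture should integrate to 1 and have mean $n+\lambda$, the known mean of a noncentral chi-square:

```python
import math

def chi2_pdf(y, k):
    """pdf of a central chi-square with k degrees of freedom"""
    return y ** (k / 2 - 1) * math.exp(-y / 2) / (2 ** (k / 2) * math.gamma(k / 2))

def ncx2_pdf_mixture(y, n, lam, terms=40):
    """Eq. (1): Poisson(lam/2)-weighted mixture of central chi-squares."""
    return sum(
        math.exp(-lam / 2) * (lam / 2) ** k / math.factorial(k) * chi2_pdf(y, n + 2 * k)
        for k in range(terms)
    )

n, lam = 4, 2.0
h = 0.01
grid = [h * i for i in range(1, 6001)]              # integrate over (0, 60]
vals = [ncx2_pdf_mixture(y, n, lam) for y in grid]
total = h * sum(vals)                               # should be ~1
mean = h * sum(y * v for y, v in zip(grid, vals))   # should be ~ n + lam = 6
assert abs(total - 1.0) < 1e-3
assert abs(mean - (n + lam)) < 1e-2
```

For the non-unit-variance question itself: since Eq. (2) depends on $x$ and $\lambda$ only through $x/\sigma^2$ and $\lambda/\sigma^2$, one natural thing to try is rescaling by $\sigma^2$ before applying the unit-variance expansion.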
How to express a non-central chi-square distribution as a Poisson-weighted sum of central chi-square distributions in the case of non-unit variances?
CC BY-SA 4.0
null
2023-03-17T11:33:31.723
2023-03-17T13:04:52.127
2023-03-17T13:04:52.127
379600
379600
[ "distributions", "poisson-distribution", "density-function", "chi-squared-distribution", "non-central" ]
609792
1
null
null
0
23
I have the following model: ``` RT ~ condition + (1|participant) ``` RT is a continuous variable. Condition has three levels and is coded using Helmert contrasts. I standardized the variable RT using `scale(center = TRUE, scale = TRUE)`. Can I now interpret the outcomes (differences between conditions) as "effect sizes", as is usually done with standardized regression where the dependent and independent variables are standardized? (At least in the context of experimental setups using these three conditions.) To me this would be logical, as it does not make sense to standardize the independent variable condition.
Standardized dependent variable, effect size interpretation
CC BY-SA 4.0
null
2023-03-17T12:28:51.007
2023-03-17T12:28:51.007
null
null
309425
[ "regression", "lme4-nlme", "regression-coefficients", "effect-size", "standardization" ]
609794
1
null
null
0
15
I am conducting a study to check whether a drug causes fractures, and I am using a time-dependent Cox PH model. My question is regarding the follow-up time when stratifying. Let's say that we want to look at any fracture and then stratify by location of fracture, e.g. hip, knee, elbow. First we want to look at all fractures combined. Assume we have a patient who suffers a fracture at time 10; therefore, we censor him at this time. However, when we look at a stratified location, e.g. knee, shall we censor him at time 10 even though he might not have had a knee fracture, or do we censor at the end of his follow-up? I cannot find a convincing answer; I find arguments supporting censoring at the time any fracture happens, but also censoring at the end of follow-up.
Censoring in a time-dependent Cox proportional hazards model
CC BY-SA 4.0
null
2023-03-17T12:40:14.677
2023-03-17T20:20:35.883
null
null
109055
[ "cox-model", "censoring" ]
609795
1
null
null
1
31
I'm trying to do a meta-analysis of ~30 studies (total N = ~2000) on the correlation between X and Y. However, the heterogeneity is extremely high. My hypothesis (and what has been suggested in the literature) is that the range of X differs from study to study and that this contributes to the heterogeneity. What I'm thinking is: can I generate synthetic data for each study using the mean values of (X, Y) and the covariance matrix of (X, Y) specific to that study, then pool the synthetic data across studies and compute a Pearson correlation? The 95% CI could be estimated by repeating this process 1000 times or more. May I know if this is a valid method? Or has anyone suggested similar methods?
Can I do a meta-analysis by Monte-Carlo synthetic data?
CC BY-SA 4.0
null
2023-03-17T12:45:46.223
2023-03-17T14:42:03.230
null
null
373326
[ "meta-analysis", "monte-carlo", "heterogeneity", "synthetic-data" ]
609796
1
609839
null
0
104
Hoping to get some clarification on my understanding of interaction terms in a GLMM I have produced. I have written the following model

```
interactionmodel <- lme(ChangeTotal ~ PreTotalCentre + SexCode + AgeCentre +
                          SexCode*PreTotalCentre + AgeCentre*PreTotalCentre,
                        data = Fitness, random = ~ 1|Class,
                        method = "ML", na.action = na.exclude)
```

where I am looking to explain the change in a physical fitness assessment with predictors of initial score (termed PreTotal), Sex, and Age. Age and PreTotal have been grand-mean centred. Sex has been coded as follows

```
SexCode = dplyr::recode(Sex, `0` = "Male", `1` = "Female")
```

The output of this model is as follows

```
Linear mixed-effects model fit by maximum likelihood
  Data: Fitness_OAzc

Random effects:
 Formula: ~1 | Class
        (Intercept) Residual
StdDev:    13.42854 26.20813

Fixed effects: ChangeTotal ~ PreTotalCentre + SexCode + AgeCentre + SexCode * PreTotalCentre + AgeCentre * PreTotalCentre
 Correlation:
                           (Intr) PrTtlC SxCdMl AgCntr PTC:SC
PreTotalCentre              0.629
SexCodeMale                -0.759 -0.782
AgeCentre                  -0.034 -0.015  0.050
PreTotalCentre:SexCodeMale -0.530 -0.833  0.604  0.053
PreTotalCentre:AgeCentre    0.000 -0.087 -0.003  0.185  0.168

Standardized Within-Group Residuals:
         Min           Q1          Med           Q3          Max
-4.411066018 -0.514522220  0.003675318  0.532391577  4.837327085

Number of Observations: 354
Number of Groups: 7

                               Value Std.Error  DF    t-value p-value
(Intercept)                -20.19162  8.386217 342  -2.407715  0.0166
PreTotalCentre              -0.73703  0.045738 342 -16.114380  0.0000
SexCodeMale                 41.61622  6.787769 342   6.131060  0.0000
AgeCentre                   -0.27988  0.338434 342  -0.826994  0.4088
PreTotalCentre:SexCodeMale   0.35325  0.055494 342   6.365576  0.0000
PreTotalCentre:AgeCentre     0.01218  0.004421 342   2.756044  0.0062
```

My understanding for interpreting the coefficients would be as follows:

- For every 1 point increase in PreTotal above the mean, it's expected the Change would decrease by 0.73 points
- We'd expect Males to have a 40 point change compared to Females
- For every 1 year increase in Age, we'd expect a -0.27 decrease in the Change
- Assuming that the PreTotal score is the same distance from the mean, we'd expect Males to have a larger Change of approximately 0.35
- Assuming that the PreTotal score is the same distance from the mean, we'd expect a 1 year increase in age to result in a 0.01 increase

Are these interpretations correct? My dataset shows that females have a higher mean and percent change than males, and now I am thinking: since I coded males as 0 and females as 1, despite the output saying "SexCodeMale", does the interaction term (b PreTotal:Sex) mean that females would be expected to show a greater change given the same PreTotal score, since a 0 (Male) would cancel out the term? Would appreciate some confirmation and further insight! Thank you in advance! All analysis was done in R using the nlme package.
Understanding Interaction Term In GLMM
CC BY-SA 4.0
null
2023-03-17T12:56:03.133
2023-03-17T19:59:05.347
null
null
360950
[ "generalized-linear-model", "interaction", "interpretation" ]
609798
1
610037
null
1
59
I have data consisting of three types of tumors (see column names; counts per tumor type are at the bottom of each column). A tumor can be tested and given a score (negative, weak, moderate, strong). The cells indicate the respective counts. I want to test whether the different tumor types get significantly different scores on this test. I don't know which test to use, how to cope with the big difference in group sizes, or how to cope with the zero counts that appear in three cells. [](https://i.stack.imgur.com/ot0HW.jpg)
How to test for statistical significance between three differently sized groups with categorical outcomes that include zero counts?
CC BY-SA 4.0
null
2023-03-17T13:00:57.480
2023-03-20T10:04:26.537
2023-03-20T09:51:13.153
377775
377775
[ "statistical-significance", "categorical-data" ]
609801
2
null
609581
1
null
Y. Lin, in [Contemp Clin Trials Commun. 2016 Apr 14;3:65-69](https://doi.org/10.1016/j.conctc.2016.04.001), nicely summarizes the definition and problems of "responder analysis" at the beginning of the Discussion: > A responder analysis is one in which each subject is classified as either a “responder” or a “non-responder”, and the proportions of patients who benefit are quantified and compared between treatment groups... It has been widely acknowledged that the main concern about the responder analysis is the arbitrary nature of the definition of a response. A second problem with the analysis is the dramatic reduction in statistical power by dichotomizing continuous endpoints. With a continuous outcome, that means setting an arbitrary cutoff to distinguish "responders" from "non-responders." The problems this can lead to in practice are outlined in [this web post](https://s4be.cochrane.org/blog/2019/07/08/responder-analysis-identifying-responders-in-clinical-trials/). With time-to-event data, one typically sees reports of relative hazards with and without the new treatment, based on an assumption of a shared baseline risk of the event, rather than "responder analysis." Such formal survival analysis has the advantage of using information even from individuals who haven't passed beyond an arbitrary threshold for "response" like "3-year survival." You might see reports of estimated 3-year survival by treatment group, but that's not "responder analysis" as the phrase is typically used. In [immunotherapy for cancer](https://www.nature.com/articles/s41598-021-85696-3), however: > Many studies have shown that the shape of the survival curve for immunotherapy is different than for chemotherapy, with some extra early deaths with immunotherapy and a plateau of long survivors later on. In that case a comparison of outcomes of immunotherapy versus chemotherapy would involve interpreting the different shapes of survival curves, as you suggest. 
The focus is then on identifying factors associated with being in the "plateau of long survivors" or the group with "extra early deaths."
null
CC BY-SA 4.0
null
2023-03-17T14:26:14.833
2023-03-17T14:26:14.833
null
null
28500
null
609802
1
null
null
0
62
I just read that to use MICE imputation, variables with missing values need to have a relationship to other variables. In my case, I will anonymize the variables just for convenience: - Numerical continuous variables: 'A', 'B', 'C', 'D' - Categorical variables (nominal): 'X', 'Y' 'A' has missing values (around 20%), while the other variables are complete. Now I have read that it's better to impute the values instead of dropping them, using techniques such as MICE, random forests, kNN, bootstrapping, etc. I checked MICE first, considering it's flexible for all kinds of variables and gives less biased imputations if the model is appropriate. One of the requirements for using MICE is that the variable to be imputed needs to have a relationship with other variables. Hence, I analyzed 'A' against the other numerical variables and only found weak relationships (around 0.2). But when I analyzed 'A' against the categorical variables using ANOVA, it showed a strong relationship with 'X' and 'Y', and it is statistically significant (***). Can I use 'X' and 'Y' (categorical variables) as predictors for imputing the missing values in 'A' through MICE imputation?
Can you impute (predict) missing continuous data using categorical data as the predictor?
CC BY-SA 4.0
null
2023-03-17T14:40:19.937
2023-03-17T15:23:09.827
null
null
378145
[ "r", "missing-data", "data-imputation", "mice" ]
609803
2
null
609795
1
null
It's certainly a common idea to reconstruct the data of a study from reported summary statistics. That may or may not provide useful insights. Obviously, there could often be many datasets consistent with what's reported, and drawing samples of such datasets given plausible prior assumptions (basically something like Approximate Bayesian Computation aka "ABC" or similar approaches like [Bayesian aggregation of average data](https://doi.org/10.1214/17-AOAS1122)) and then treating these datasets like multiple imputations is one way of dealing with that. You're actually in a really good situation, if you have mean vector and covariance matrix, because those are sufficient statistics, if you are willing to assume bivariate normality. So, you could simulate data until you get samples that match the reported summary statistics (up to the reported decimal places for each study). Sure, it might take a while to exactly match, but it is likely pretty doable. I don't think that naively pooling the data would necessarily be appropriate. What would you do if you had the raw data from all the studies (presumably something that allows for some differences between studies and perhaps tries to explain why these differences exist)? Whatever that analysis is, it would also be a serious candidate for what to do with reconstructed synthetic data. Other approaches might include taking characteristics of the studies (or the samples used in the study) and examining via meta-regression whether these differences explain the different correlations.
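If bivariate normality is assumed, the simulation step is straightforward even without specialized software. Below is a minimal sketch in Python; the means, SDs, and correlation are made-up illustrative values, not numbers from any particular study:

```python
import math
import random

def simulate_bivariate_normal(mu, cov, n, seed=0):
    """Draw n pairs from a bivariate normal with the reported
    mean vector and covariance matrix (via a 2x2 Cholesky factor)."""
    rng = random.Random(seed)
    s11, s12, s22 = cov[0][0], cov[0][1], cov[1][1]
    l11 = math.sqrt(s11)
    l21 = s12 / l11
    l22 = math.sqrt(s22 - l21 ** 2)
    data = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        data.append((mu[0] + l11 * z1,
                     mu[1] + l21 * z1 + l22 * z2))
    return data

def sample_corr(data):
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxy = sum((x - mx) * (y - my) for x, y in data)
    sxx = sum((x - mx) ** 2 for x, _ in data)
    syy = sum((y - my) ** 2 for _, y in data)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical reported summary statistics:
# means (10, 50), SDs (2, 5), correlation 0.4.
mu = (10.0, 50.0)
cov = [[4.0, 0.4 * 2 * 5], [0.4 * 2 * 5, 25.0]]
data = simulate_bivariate_normal(mu, cov, n=50_000)
print(round(sample_corr(data), 2))  # close to the reported r = 0.4
```

In practice you would repeat this per study, keep only draws whose rounded summary statistics match the reported ones, and then analyse each accepted dataset as one "imputation".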
null
CC BY-SA 4.0
null
2023-03-17T14:42:03.230
2023-03-17T14:42:03.230
null
null
86652
null
609804
1
null
null
0
46
I need to do a power calculation for a cost analysis. I believe my model will be a GLM with a gamma distribution and a log link. I would prefer not to use simulations to do the calculation, but I can't find any information about a direct way to calculate power for a GLM with a gamma distribution. If there is a way, I would MUCH prefer to go that route. As it is, I have been trying to teach myself R packages lme4 and simr, but the issue is that there are no random effects. Below is code using completely made up numbers; however, at the model statement, I get the error: ``` Error in array(x, c(length(x), 1L), if (!is.null(names(x))) list(names(x), : 'data' must be of a vector type, was 'NULL' ``` I know this is because VarCorr is set to NULL. Is there a different function instead of "makeGlmer" I can use for specifically for GLMs instead of GLMMs? Or perhaps should VarCorr be set to something else? Any help or direction would be greatly appreciated. ``` id <- factor(1:200) group <- c("control","intervention") gender <- factor(0:1) race <- factor(0:4) group_full <- rep(group,each=100) gender_full <- rep(rep(gender,each=50),2) race_full <- rep(rep(rep(race,each=10),2),2) race_full df <- data.frame(id,group_full,gender_full,race_full) fixed <- c(100,50,25,4) #ERROR HERE!! model <- makeGlmer(cost~group_full+gender_full+race_full, family=Gamma(link="log"), fixef=fixed,data=df,VarCorr=NULL) model sim_treat <- powerSim(model, nsim=10000, test=fcompare(y ~ group_full)) sim_treat ```
GLM Power Calculation
CC BY-SA 4.0
null
2023-03-17T14:43:15.393
2023-03-23T00:53:49.210
2023-03-23T00:53:49.210
11887
328066
[ "generalized-linear-model", "statistical-power", "gamma-distribution" ]
609805
1
null
null
0
35
I'm trying to fit this bsts model to some temperature time series; the period is correct, but the amplitude is wrong. Can anybody help me fix this or tell me what to do? ![forecast here](https://i.stack.imgur.com/veCCs.png) data is here: [data](https://docs.google.com/spreadsheets/d/1im1jWWc5kvqo8nYEpSAFrUTFuEVRhde_/edit?usp=share_link&ouid=104971546473038383815&rtpof=true&sd=true) code is here: ``` mydata3 <- read_excel("E:/Desktop/456.xlsx", col_types = c("date", "numeric")) %>% as_tsibble(., key = NULL, index = time, regular = FALSE) temp_1 <- mydata3$temperature dt_1 <- mydata3$time ss2 <- list() ss2 <- AddAr(ss2,temp_1) ss2 <- AddSeasonal(ss2, temp_1, nseasons=12,season.duration = 30) M_bsts_2 <- bsts(temp_1, ss2, timestamps = dt_1, niter = 700,ping = 10, seed = 246) M_pred <- predict(M_bsts_2,horizon = 1040) plot(M_pred,plot.original = 1000, ylim = c(-50, 50)) ```
How to fix this wrong amplitude in the bsts-model of temperature time series data?
CC BY-SA 4.0
null
2023-03-17T14:34:23.893
2023-03-17T15:08:39.893
2023-03-17T15:08:39.893
362671
383482
[ "r", "time-series", "bsts" ]
609806
2
null
609685
1
null
I'm not sure why the intervals shrank when `unconditional = TRUE` was used - I'd expect them to get somewhat wider if anything as this option is trying to correct for the fact that the model estimated (selected is perhaps better) the values of the smoothing parameters. As to the question in your title: > When to use 'unconditional = FALSE' in plot.gam()? or the similarly phrased one in the body of the Q > When would we treat the smoothing parameters as fixed? When we fit a GAM with penalised splines in {mgcv}, we need to estimate coefficients for all basis functions involved in each smooth, plus coefficients for any parametric terms, plus other parameters (such as dispersion parameters). To do this, {mgcv} minimises the penalised log-likelihood $$ \mathcal{L}_p(\boldsymbol{\beta}) = \mathcal{L}(\boldsymbol{\beta}) - \frac{1}{2\phi} \sum_{j} \lambda_j \boldsymbol{\beta}^{\mathsf{T}}\mathbf{S}_j\boldsymbol{\beta} $$ Note the $\lambda_j$ in the equation, which are the $j$ smoothing parameters that control how much penalty we pay for the wiggliness of the smooths in the model. The values of the $\lambda_j$ are not known before we fit the model; we want the model to find optimal values for the $\lambda_j$. `gam()` does this by setting the $\lambda_j$ to some value, and then updating values for $\boldsymbol{\beta}$ given these initial values of $\lambda_j$. These steps are iterated in such a way that each outer iteration (where we update the $\lambda_j$) moves towards more optimal values of the smoothing parameters, while the inner part of the iteration finds estimates of $\boldsymbol{\beta}$ conditional upon the current values of $\lambda_j$. Eventually the $\lambda_j$ and $\boldsymbol{\beta}$ converge on some values and don't change much if further iterations are performed; the model has converged. So far, so good. The problems begin, however, when we want to do inference on the estimated smooth functions.
The general theory works if we treat the values of $\lambda_j$ as if they were known before fitting and were fixed at their ML or REML estimates. But we didn't know them; we estimated (selected) values for $\lambda_j$ using the data. This means that inferences (statistical tests) we make by treating the $\lambda_j$ as known and fixed are anti-conservative because they do not reflect the true state of our uncertainty about the values of the smoothing parameters. This is why you may have read that p values for smooths are more approximate than in a LM or GLM setting. Statisticians, including Simon Wood, have tried to provide corrections that account for the uncertainty in the smoothing parameters. Unsurprisingly, Simon uses the theoretical developments he is responsible for in his {mgcv} package. This correction isn't applied to the p values in the output from `summary()` but it can be used to produce credible intervals that better reflect the uncertainty in the estimated smooths that arises from us treating the smoothing parameters as known and fixed. This is what `unconditional = TRUE` does; if available (you have to use `method = "REML"` or `"ML"` for this to work), the Bayesian covariance matrix corrected for smoothing parameter uncertainty is used to form credible intervals for the smooths. Simon's approach works well in general, but it breaks down (doesn't work as well) when the true function is close to the penalty null space of the smooth; e.g. when the true function is close to a linear function. When to use it? All the time (if available) — the intervals on your smooths will better reflect the actual uncertainty about the entire fitted model. If you use `unconditional = FALSE`, you are doing a bit of hand waving misdirection by proceeding as if you knew the values of the smoothing parameters ahead of time, before you fitted the model. 
I believe it is not turned on by default because it isn't available if the model is fitted using GCV (which is also the {mgcv} default - but you shouldn't really use that for most things) or if you used `gamm()` for example. This is why I don't set `unconditional = TRUE` by default in my {gratia} package; if I did I'd have to throw a message all the time anyone used `draw()` on a GAM fitted with GCV to tell them that we didn't actually use the corrected covariance matrix, despite the documentation saying `unconditional = TRUE` was the default. It's easier to just let users turn this on if they want it and know what they are doing. That way, if they ask for it on a model estimated with GCV, say, then they deserve to get a loud warning (and this is what {gratia} does if you inappropriately ask for the smoothing parameter uncertainty-corrected covariance matrix.)
null
CC BY-SA 4.0
null
2023-03-17T14:46:26.347
2023-03-17T14:46:26.347
null
null
1390
null
609807
2
null
190482
1
null
What you're trying to do is far from a solved problem. My impression is that the scientific consensus on model-agnostic interpretability ranges from promising to problematic to fundamentally unsound. See for example: - https://doi.org/10.48550/arXiv.1702.08608 - https://doi.org/10.1016/S2589-7500(21)00208-9 - https://doi.org/10.1038/s42256-019-0048-x My suggestion is to stick to an implicitly explainable model, e.g. regression with few interactions, or tree models which seem to work well with the TreeExplainer algorithm: - https://doi.org/10.1038/s42256-019-0138-9 Note that even if the explanation of the model is simple, the interpretation of the explanation is not.
null
CC BY-SA 4.0
null
2023-03-17T14:52:35.180
2023-03-17T14:55:57.180
2023-03-17T14:55:57.180
190524
190524
null
609809
1
null
null
0
29
Long story short, I have a set of vectors for each image after training on a model. I'd like to find the most unique image from the scores generated by `cosine similarity`. The below 2D tensor array is the scores grid for `5` images, each being compared to each image. ``` tensor([[1.0000, 0.9849, 0.9606, 0.9227, 0.9508], [0.9849, 1.0000, 0.9737, 0.9462, 0.9654], [0.9606, 0.9737, 1.0000, 0.9836, 0.9825], [0.9227, 0.9462, 0.9836, 1.0000, 0.9789], [0.9508, 0.9654, 0.9825, 0.9789, 1.0000]]) ``` I've tried using Standard Deviation and Interquartile range to find which has the max value to guess the most unique image. I'd like to know the correctness of my metric, and if not, what other approaches I can use to solve the above.
Find most unique image from pair scores of a set of Images
CC BY-SA 4.0
null
2023-03-17T15:00:58.953
2023-03-20T07:08:51.237
2023-03-20T07:08:51.237
383481
383481
[ "mathematical-statistics", "clustering", "paired-data", "metric" ]
609811
2
null
577622
0
null
As you get out past iteration $6000$, it seems like the train and test loss values are stable, as is the distance between them, which looks small. However, both values are tiny compared to the starting loss values whose inclusion in the plot stretches out the $y$-axis. If you compress the $y$-axis to something more like $[0, 0.5]$, you are likely to see that the test loss is quite a bit higher. Doing this compression is legitimate, since the loss values from the early iterations correspond to the model guessing parameter values, so of course performance is terrible. With that in mind, it is not so surprising that your second plot of the true vs predicted values shows worse performance on the test set.
null
CC BY-SA 4.0
null
2023-03-17T15:06:43.460
2023-03-17T15:06:43.460
null
null
247274
null
609812
2
null
609809
0
null
There isn't one "correct" metric. Reasonable approaches might be to find the image with the smallest mean similarity, or smallest median similarity, or smallest maximum similarity, or smallest minimum similarity. The first two metrics would find images that are on average different from all others. The smallest max similarity would find the image most different from any other, while the smallest min similarity would find the image most different from one other. I might go for the image with smallest maximum similarity, as this will find an image that is different from all other images. When using average-type metrics, you might find an image that is highly different from almost all others but near-identical to another, which may or may not fit your goals. The maximum approach, however, will more highly rank images that are more distinct from every other image, even though they may be less distinct on average. I don't see how the standard deviation or IQR helps here. Those values just indicate the variability with which an image matches others, but not how well it matches. You might have one image that matches all others fairly well with some variability, and another that matches all others poorly with the same variability, and IQR/standard deviation would treat those the same. Imagine you have a set of images that all differ by just 1 pixel. No matter what image you match against that set, you'll have a nearly invariant similarity measure, it doesn't matter if your query picture is totally unique (in which case the similarity measure is uniformly low) or another 1-pixel variant of the same image (in which case the similarity measure is uniformly high).
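To make this concrete, here is a small sketch applying the candidate metrics to the similarity grid from the question (diagonal self-similarities excluded):

```python
# The 5x5 cosine-similarity grid from the question, as plain lists.
sim = [
    [1.0000, 0.9849, 0.9606, 0.9227, 0.9508],
    [0.9849, 1.0000, 0.9737, 0.9462, 0.9654],
    [0.9606, 0.9737, 1.0000, 0.9836, 0.9825],
    [0.9227, 0.9462, 0.9836, 1.0000, 0.9789],
    [0.9508, 0.9654, 0.9825, 0.9789, 1.0000],
]

def off_diagonal(row, i):
    # drop the self-similarity entry before aggregating
    return [s for j, s in enumerate(row) if j != i]

mean_sim = [sum(off_diagonal(r, i)) / (len(r) - 1) for i, r in enumerate(sim)]
max_sim = [max(off_diagonal(r, i)) for i, r in enumerate(sim)]

most_unique_by_mean = min(range(len(sim)), key=lambda i: mean_sim[i])
most_unique_by_max = min(range(len(sim)), key=lambda i: max_sim[i])
print(most_unique_by_mean, most_unique_by_max)
```

On this particular grid the two criteria disagree: the smallest mean similarity points to image 0, while the smallest maximum similarity points to image 4 - which is exactly why the choice of metric matters.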
null
CC BY-SA 4.0
null
2023-03-17T15:17:16.533
2023-03-17T15:17:16.533
null
null
76825
null
609813
1
null
null
0
28
Why, for neural networks, is it advised to set a test dataset apart to check for overfitting, while for statistical models fitted through MCMC this is never done? If a model has too many parameters, shouldn't it overfit with both methods?
Why overfitting is a problem for neural networks but not for models fitted through MCMC methods?
CC BY-SA 4.0
null
2023-03-17T15:17:57.980
2023-03-17T15:17:57.980
null
null
275569
[ "markov-chain-montecarlo", "overfitting" ]
609814
2
null
609352
1
null
Alternative tests that work in general are - Using statistics about the distribution of the nearest-neighbour distances Bickel, Peter J., and Leo Breiman. "Sums of functions of nearest neighbor distances, moment bounds, limit theorems and a goodness of fit test." The Annals of Probability (1983): 185-214. - Discretizing the data into bins/intervals and performing a $\chi^2$-test or G-test. Possibly you could do something clever with rescaling the data or working with conditional distributions, but I don't immediately see how.
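A minimal sketch of the binning approach; the hypothesized CDF and the sample below are made up purely for illustration, and the statistic would be compared against a $\chi^2$ reference distribution with $k-1$ degrees of freedom (minus one for each estimated parameter):

```python
def chi_square_statistic(sample, cdf, k=10):
    """Bin the sample into k equiprobable bins of the hypothesized
    distribution and return the chi-square goodness-of-fit statistic."""
    n = len(sample)
    counts = [0] * k
    for x in sample:
        # probability-integral transform maps each point to a bin
        b = min(int(cdf(x) * k), k - 1)
        counts[b] += 1
    expected = n / k
    return sum((o - expected) ** 2 / expected for o in counts)

# Illustration with a made-up sample and a uniform(0,1) null:
sample = [(i + 0.5) / 50 for i in range(50)]      # evenly spread on (0,1)
print(chi_square_statistic(sample, lambda x: x))  # 0.0: a perfect fit
```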
null
CC BY-SA 4.0
null
2023-03-17T15:22:30.007
2023-03-17T15:28:51.040
2023-03-17T15:28:51.040
164061
164061
null
609815
2
null
609802
3
null
There's no reason not to mix different types of data for multiple imputation (continuous, categorical, ordinal, etc.). It's just a matter of whether the particular software you use can support it (e.g. the [Amelia R package](https://cran.r-project.org/web/packages/Amelia/index.html) supports quite a few types of data, but is still not great for, e.g., censored data or count data). You may also want to check whether MI via chained equations (MICE) is the most appropriate approach for your situation (for some reason it's very popular, but tends to perform poorly in quite a few simulations I've seen) or whether other forms, like MCMC-based imputation with a latent multivariate normal assumption (the default approach in Amelia), would serve better.
null
CC BY-SA 4.0
null
2023-03-17T15:23:09.827
2023-03-17T15:23:09.827
null
null
86652
null
609816
2
null
609727
7
null
Given $y_i\in\{-1,+1\}$, $z_i\in\{0,1\}$ and $z_i = (y_i+1)/2 \iff y_i = 2z_i-1$. Also, $$p(y_i\equiv1) = p(z_i\equiv1)= \left(1+\exp(-w^Tx_i)\right)^{-1}\\ p(y_i\equiv-1) = p(z_i\equiv0)= \left(1 + \exp(w^Tx_i)\right)^{-1}$$ Then $$ \begin{align} \color{red}{\log\left(1 + \exp(-y_i w^Tx_i)\right)}&=\\ \left(\frac{y_i+1}{2}\right)\log\left(1 + \exp(-w^Tx_i)\right)- \left(\frac{y_i-1}{2}\right)\log\left(1 + \exp(w^Tx_i)\right)&=\\ \left(\frac{y_i+1}{2}\right)\log\left(1 + \exp(-w^Tx_i)\right)- \left(\frac{y_i+(1-2)}{2}\right)\log\left(1 + \exp(w^Tx_i)\right)&=\\ \left(\frac{y_i+1}{2}\right)\log\left(1 + \exp(-w^Tx_i)\right)- \left(\frac{y_i+1}{2}-1\right)\log\left(1 + \exp(w^Tx_i)\right)&=\\ z_i\log\left(1/p(z_i\equiv1)\right)- (z_i-1)\log\left(1/p(z_i\equiv0)\right)&=\\ -z_i\log\left(p(z_i\equiv1)\right)+ (z_i-1)\log\left(p(z_i\equiv 0)\right)&=\\ -z_i\log\left(p(z_i\equiv1)\right)+ (z_i-1)\log\left(1-p(z_i\equiv1)\right)&=\\ \color{blue}{-\left(z_i\log\left(p(z_i\equiv1)\right)+ (1-z_i)\log\left(1-p(z_i\equiv1)\right)\right)}& \end{align} $$ --- A similar proof can be obtained by reverting to the Bernoulli likelihood.
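The algebra above can also be spot-checked numerically; in this sketch `s` stands in for $w^Tx_i$:

```python
import math

def margin_loss(y, s):          # log(1 + exp(-y * w'x)), y in {-1, +1}
    return math.log(1 + math.exp(-y * s))

def cross_entropy(z, s):        # -(z log p + (1 - z) log(1 - p)), z in {0, 1}
    p = 1 / (1 + math.exp(-s))  # p(z = 1)
    return -(z * math.log(p) + (1 - z) * math.log(1 - p))

for s in (-2.0, -0.3, 0.0, 1.5):
    for y in (-1, 1):
        z = (y + 1) // 2
        assert math.isclose(margin_loss(y, s), cross_entropy(z, s))
print("identity holds on all test points")
```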
null
CC BY-SA 4.0
null
2023-03-17T15:24:10.873
2023-04-08T23:17:50.840
2023-04-08T23:17:50.840
247274
60613
null
609817
1
null
null
1
38
I have a test dataset of 11m records. The dataset contains a global customer id and spend figure. I need to group customers into the following categories: - 0 Low - 1 Low/Med - 2 Med - 3 Med/High - 4 High I tried K-Means to group. See results below. As you can see 10m records or so are in the low group as 80% of the db has low to negative spend. If I want to further segment that low group, should I just increase the number of clusters? Or is there a better algorithm given the distribution of the data? Thanks | |count |mean |std |min |25% |50% |75% |Max | ||-----|----|---|---|---|---|---|---| |Cluster | | | | | | | | | |0 |10498822.0 |21.147982 |30.447597 |-22885.364 |6.78600 |11.4520 |26.30600 |160.854 | |1 |714573.0 |300.654938 |115.836596 |160.855 |207.94600 |269.0280 |366.02400 |651.081 | |2 |57318.0 |1002.263623 |400.515911 |651.084 |723.53375 |841.8320 |1118.37575 |2370.803 | |3 |14415.0 |3739.988910 |924.921881 |2371.056 |2993.61250 |3599.1800 |4319.69000 |6162.907 | |4 |3010.0 |8584.038995 |2476.616904 |6163.451 |6905.48800 |7861.5815 |9318.03100 |22884.357 |
Can I use K-Means to group customers based on a single variable?
CC BY-SA 4.0
null
2023-03-17T15:39:33.150
2023-03-18T10:54:44.277
null
null
383489
[ "machine-learning", "mathematical-statistics", "python", "k-means" ]
609818
1
null
null
0
27
I have performed a linear regression analysis in R and generated some diagnostic plots using the DHARMa package. However, I am having trouble interpreting these plots and would appreciate any guidance on how to read and understand them. Brief description of my dataset and variables: 60 teachers were asked to indicate their degree of satisfaction with their work, pay, and opportunities for promotion. A regression analysis was run with work satisfaction as the dependent variable and type of school - state (S) or private (P) - as the independent variable. I would be grateful if someone could provide a brief explanation of each plot and point out any potential issues that might be visible in the plots, as well as possible solutions or next steps to address those issues. ![DHARMa residual](https://i.stack.imgur.com/pEJuw.jpg)
DHARMa plots diagnostic
CC BY-SA 4.0
null
2023-03-17T15:57:04.923
2023-03-17T15:58:07.853
2023-03-17T15:58:07.853
362671
383491
[ "r", "residuals", "diagnostic" ]
609820
1
null
null
0
32
I have a counts response variable with 46% of the observations being zeros, and am validating model fits to different distributions with GAMMs. I have plotted the observed number of zeros in the response against # zero from 10000 simulated datasets for both Poisson & NB distributions to see if either can cope with the excess zeros. My question is: Can the NB model in this case cope with the excess number of zeros in my response, considering that the observed # zeros falls on the tail of the simulated distribution? In other words, how close to the middle of the histogram must the observed # zeros fall to confidently say that the model can account for the excess zeros in the response? Poisson model [](https://i.stack.imgur.com/E9iSd.png) Negative Binomial model [](https://i.stack.imgur.com/Hp0oB.png)
Can NB model cope with excess # zeros
CC BY-SA 4.0
null
2023-03-17T16:59:38.230
2023-03-17T18:14:05.033
2023-03-17T18:14:05.033
7290
286723
[ "generalized-linear-model", "poisson-distribution", "negative-binomial-distribution", "zero-inflation" ]
609821
1
609838
null
1
36
Quick overview of my data and aims: I have two groups, 50 samples per group, and 6000 features. I want to find the minimal amount of features capable of distinguishing both groups. I know the sample number is not the greatest, but I work with biological samples and getting a total of 100 samples took a lot of work. Besides, I do have additional samples (40 per group) being collected at the moment, and they could be used for further validation and model tuning. What I think I know: If I perform feature selection (e.g., Boruta) using all my 100 samples and then split them only at the classification stage (e.g., XGBoost with k-fold cross-validation), it would result in data leakage because my test set leaked during feature selection, correct? Being aware of the above, I was unsure which approach to use: A) start a k-fold cross-validation, perform feature selection, close the folds, take the features that were good across all folds, then "open" another k-fold CV, run the classification algorithm, and assess the results; or B) start the k-fold CV and, in the same fold, perform feature selection plus classification. This way, the selection and classification results are coupled by fold. Is there any difference between the two methods above? Is one better than the other? I am asking this for two reasons: 1- I need a panel of important features as soon as possible, but I haven't had time to study classification models enough, so I was hoping I could select the features now and, at a later date, assess the best classification model. 2- I want to test at least five feature selection approaches and ten classification models, so I thought that dividing into two stages would be more organized and better in general. Any thoughts?
Does feature selection and model testing have to be coupled in each fold of the cross-validation?
CC BY-SA 4.0
null
2023-03-17T17:05:09.340
2023-03-17T19:55:36.567
null
null
346628
[ "machine-learning", "cross-validation", "feature-selection", "data-leakage" ]
609822
2
null
520623
0
null
You may need to log transform the input variable (Number.of.samples) you're using for your offset variable, to match the log link function of the negative binomial. Here is a link to an mvabund tutorial in which the author uses a log-transformed offset. [https://pdixon.stat.iastate.edu/stat534/R/mvabund.pdf](https://pdixon.stat.iastate.edu/stat534/R/mvabund.pdf) And here is a CV post with a nice answer about using offsets in general, specifying that the input variable used for the offset should be log-transformed before adding it to the function. [Should I use an offset for my Poisson GLM?](https://stats.stackexchange.com/questions/232666/should-i-use-an-offset-for-my-poisson-glm)
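A quick numeric illustration of why the offset must enter on the log scale: with a log link, adding $\log(n)$ to the linear predictor is the same as multiplying the mean by $n$, i.e. modelling a rate per unit of sampling effort (the numbers here are arbitrary):

```python
import math

# With a log link, exp(X*beta + log(n)) == n * exp(X*beta),
# so the log-transformed offset turns a count model into a rate model.
xb, n = 0.7, 120          # made-up linear predictor and number of samples
mu_with_offset = math.exp(xb + math.log(n))
assert math.isclose(mu_with_offset, n * math.exp(xb))
print(round(mu_with_offset, 3))
```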
null
CC BY-SA 4.0
null
2023-03-17T17:15:25.373
2023-03-17T17:15:25.373
null
null
383492
null
609826
1
null
null
0
9
I have a dataset and in the description is written <<In order to improve the representativeness of some segments of the population, the variable `pesofit` (sample weight) has been inserted. The use of sample weights is recommended to obtain unbiased estimates. >> Here a sample ``` > head(data_shape, 10) # A tibble: 10 × 5 tpens ireg pens_PPP apqual pesofit <dbl> <int> <dbl> <int> <dbl> 1 1800 18 21.1 15 0.380 2 900 5 8.91 16 1.45 3 500 13 5.40 16 0.869 4 1211 13 13.1 15 0.238 5 2100 13 22.7 15 0.238 6 700 8 6.43 15 0.882 7 2000 9 17.9 15 1.25 8 1200 5 11.9 15 1.67 9 2000 4 17.8 15 3.37 10 880 15 9.62 15 1.69 ``` - tpens = is the wage - ireg = is the code of the geographical area where the individual lives - pesofit = is the sample weight I'm using this `pesofit` to compute the weighted means and also for the weighted linear regresion. I need also to count the number of wages in each geographical area ( `ireg`). Should I use it also to count? Or is it enough just counting the number of rows for each value of `ireg`? In that case, how can I do?
How to use weights in the right way?
CC BY-SA 4.0
null
2023-03-17T17:59:27.273
2023-03-17T17:59:27.273
null
null
382951
[ "descriptive-statistics", "count-data", "weighted-mean", "sum", "weighted-data" ]
609827
2
null
609472
0
null
For the first question, the weight of the standard egg (SE) shouldn't be compared with a fixed value ($52=65\times 4/5$), since the weight of the picked large egg (LE) is not fixed but random each time. Denote the weights of the SE and LE by $X$ and $Y$ respectively. The probability you are looking for should be $$P\left(X>\frac{4}{5}Y\right)=\int_0^\infty f_Y(y)\left(\int_\frac{4y}{5}^\infty f_X(x)\ dx\right)\ dy. $$ From here you may continue to complete the rest of the calculation.
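If one is willing to assume specific distributions, the double integral simplifies; for independent normal weights, $X-\frac{4}{5}Y$ is itself normal. The sketch below uses assumed parameters ($X\sim N(53, 4^2)$, $Y\sim N(65, 5^2)$) purely for illustration - they are not taken from the original problem:

```python
import math

def phi(z):  # standard normal CDF
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Assumed (illustrative) distributions: X ~ N(53, 4^2), Y ~ N(65, 5^2).
# Then D = X - 0.8*Y is normal with mean 53 - 0.8*65 = 1 and
# variance 4**2 + 0.8**2 * 5**2 = 32.
p = 1 - phi((0 - 1) / math.sqrt(32))   # P(D > 0) = P(X > 4Y/5)
print(round(p, 3))
```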
null
CC BY-SA 4.0
null
2023-03-17T18:13:34.110
2023-03-18T03:16:29.060
2023-03-18T03:16:29.060
362671
295357
null
609828
1
null
null
0
20
I am trying to model total goals in soccer matches. I ultimately want to predict various odds for a whole future season in advance (so I need a parametric model I can extract quantiles from given its parameters). A NegBin linear model should be a decent starting point (data too overdispersed for Poisson). I have 10 years of data telling me for each match who played whom and what the score was. There are too many teams to use "team name" as a feature, so I have to somehow create features based on past goal performance. My idea is to use "average home goals scored last season", "average home goals conceded last season", etc. as features to summarise teams, and fit a model on the next year. (Going back further than a year isn't very intuitively useful as teams change a lot). I can do this for years (1,2), (2,3),... and average somehow to get a final regression model to fit year n+1 based on n. The big problem with this is that some teams have a lot more past matches than others, so there is somehow an uncertainty associated with the above features that differs between datapoints. Also, the features are coming from the same random process I am trying to model. It seems I have to make an arbitrary choice of how to weight a team's performance compared to average performance based on how much evidence I have -- this seems very undesirable as I am effectively adopting a Bayesian approach without any proper Bayesian model. A more formal Bayesian approach would seem to involve introducing lots of latent variables for dubious quantities like "home-corneriness", which seems even more whimsical. Is there a principled way to approach a problem like this? Is there a name for this kind of setup?
GLM with uncertain features
CC BY-SA 4.0
null
2023-03-17T18:26:38.803
2023-03-17T18:26:38.803
null
null
374720
[ "regression", "time-series", "bayesian", "generalized-linear-model", "modeling" ]
609829
1
null
null
13
1901
I asked a similar question in the past, but I've thought about the message I am trying to convey a bit more and feel I can articulate it better. For context, I am on an introductory course in machine learning. A while ago we covered PCA, and from my point of view, I really can't see what it's good for in the real-world, and Google searching doesn't seem to shed much light on my question. To illustrate my confusion, imagine we have a big data set and we would like to run PCA on it. Suppose (for convenience) that using 2 principal components explains an adequate amount of the variability. Now what? All we have information on now is the linear-combinations of a subset of the variables. I feel like at this point the data has become too abstracted to be interpretable in any meaningful way, in general. So, I reiterate, what's the point? I might be missing something as PCA is used in industry all the time. My ignorance probably comes from only being on an introductory course. Does anyone have any perspective that might be useful?
Practical usefulness of PCA
CC BY-SA 4.0
null
2023-03-17T18:44:38.950
2023-03-19T11:07:41.247
2023-03-18T09:30:22.443
22047
357899
[ "machine-learning", "pca" ]
609831
2
null
609829
5
null
One application of PCA that I have used a few times is the construction of social indicators. We use the projection of each observation (usually households) over a component axis (usually the first), and use it as an indicator for public policy. This is possible because the surveys that we use are designed to capture that information. You can look online for "quality of life questionnaires". Another thing to take into account is that PCA may not be the best method in many applications, but is used for "backward compatibility". If you change the method, you will be unable to compare with past measurements. And the result will not be useful for building public policy. see: [http://article.sapub.org/10.5923.j.statistics.20221203.03.html](http://article.sapub.org/10.5923.j.statistics.20221203.03.html)
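As a sketch of the mechanics (not of any official methodology), the indicator is the projection of each observation onto the first principal axis; for two variables the eigendecomposition of the covariance matrix has a closed form. The household data below are invented for illustration:

```python
import math

def pca_first_component_scores(data):
    """Project 2-variable observations onto the first principal axis
    (closed-form eigendecomposition of the 2x2 covariance matrix).
    Assumes cov(x, y) != 0; otherwise the axes are already principal."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    a = sum((x - mx) ** 2 for x, _ in data) / n          # var(x)
    c = sum((y - my) ** 2 for _, y in data) / n          # var(y)
    b = sum((x - mx) * (y - my) for x, y in data) / n    # cov(x, y)
    lam1 = (a + c + math.sqrt((a - c) ** 2 + 4 * b * b)) / 2
    vx, vy = b, lam1 - a          # eigenvector for the largest eigenvalue
    norm = math.hypot(vx, vy)
    vx, vy = vx / norm, vy / norm
    return [(x - mx) * vx + (y - my) * vy for x, y in data]

# Made-up household data: (rooms per person, income in arbitrary units)
households = [(0.5, 1.0), (0.8, 2.1), (1.1, 2.9), (1.5, 4.2), (2.0, 5.5)]
scores = pca_first_component_scores(households)
print([round(s, 2) for s in scores])  # the ordering can serve as a crude index
```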
null
CC BY-SA 4.0
null
2023-03-17T19:13:41.067
2023-03-18T09:34:08.250
2023-03-18T09:34:08.250
22047
146273
null
609832
2
null
598897
-2
null
Try a Rayleigh distribution or a $\chi^2$ distribution with, say, $n=4$. In Matlab, type `help raylpdf` and `help chi2pdf` for details.
null
CC BY-SA 4.0
null
2023-03-17T19:16:19.837
2023-03-17T23:42:23.090
2023-03-17T23:42:23.090
44269
295357
null
609833
1
null
null
0
18
I regularly read that clustering will cause standard errors to be under-estimated. So I simulated two distributions - one with clustered observations, one without. In both cases the standard error is 0.09. Should the standard error of the clustered distribution not be smaller? # Clustered - se = 0.09 [](https://i.stack.imgur.com/WyA7e.jpg) ``` rep(1:10, 100) -> dependent set.seed(1) dependent + rnorm(1000, mean = 0, sd = 0.1) -> dependent round(sqrt(var(dependent)/length(dependent)), 2) ``` # Not clustered - se = 0.09 [](https://i.stack.imgur.com/fJW9C.jpg) ``` set.seed(2) rnorm(1000, mean = 5.49, sd = 2.87) -> independent round(sqrt(var(independent)/length(independent)), 2) ```
Why is standard error of clustered observations not under-estimated?
CC BY-SA 4.0
null
2023-03-17T19:19:45.220
2023-03-20T06:30:33.987
null
null
12492
[ "r", "standard-error", "bias", "clustered-standard-errors" ]
609834
1
null
null
0
17
Hello StackExchange community! My study group and I (M.Sc. in psychology) are currently studying for an upcoming exam and can't figure out the correct answer to the question below. It's neither discussed in our course's slides nor are we able to find clear answers online. Any input would be much appreciated! Which advantage does the calculation of a multilevel linear model have over a repeated measures ANCOVA? a) One can control the sphericity better. b) You can get better measures of effect size. c) One can account for non-linear changes in repeated measures in model building. d) One can formulate expected relationships between dependent variables in different ways. e) One can combine random effects and fixed effects.* Thank you for any input!
Advantages of MLM over repeated measures ANCOVA?
CC BY-SA 4.0
null
2023-03-17T19:26:15.297
2023-03-17T19:55:06.533
2023-03-17T19:55:06.533
288142
383500
[ "self-study", "repeated-measures", "multilevel-analysis", "ancova", "mlm" ]
609835
2
null
609829
20
null
One important use of PCA is in analysis of [electroencephalography (EEG)](https://en.wikipedia.org/wiki/Electroencephalography) data. To measure an EEG, dozens of electrodes are attached to your scalp and measure electric currents in your brain, either at rest or while you perform some experimental task. Of course, the measurements at neighboring electrodes are heavily correlated, because they are generated by activity at a specific region in the brain, which then creates electric currents that will be picked up by all electrodes in the vicinity. It's not easy to learn about what happens deep in your brain if all you have is measurements from your scalp, but for some reason, few people are fine with having deep electrodes driven into their brain. One thus reduces the dimensionality of the problem using PCA, which in this particular application also has a temporal component. You are completely right that it is hard to actually interpret the principal components. However, over the decades a body of research has developed that lets us expect particular principal components loading on particular electrodes, with peaks at particular points in time, e.g., after being presented with a specific stimulus. For instance, a long time ago I looked at the [P300](https://en.wikipedia.org/wiki/P300_(neuroscience)), an event-related potential that loads over the parietal lobe (there's your PCA) about 300 ms after presentation of a stimulus that requires some kind of decision. In this particular analysis, the experiment was about whether spider phobics and non-phobics reacted differently to drawings that could be interpreted as spiders. The (unconscious) decision whether a particular drawing "was" a spider elicited a P300, and that indeed differed between phobics and nonphobics. 
Using a PCA and analyzing the parietal principal component - instead of, e.g., a single specific electrode - allows reducing the noise in such a setting by essentially averaging the signal from multiple electrodes.
null
CC BY-SA 4.0
null
2023-03-17T19:35:56.277
2023-03-18T09:37:25.313
2023-03-18T09:37:25.313
22047
1352
null
609836
1
609851
null
2
124
Consider a real random variable $X$ with zero mean. Does the following inequality hold in general? $$\langle X^4\rangle \ge 3 \langle X^2\rangle^2$$ I'm not sure how to prove this or if a counter-example exists. If the inequality is true, it is also tight because for a standard normal this is an equality. I found a related inequality, valid for symmetric distributions (all odd moments vanish): $$\langle X^4\rangle \ge 2\langle X^2\rangle^2$$ This is proved by Dreier, Ilona. "Inequalities between the second and fourth moments." Statistics: A Journal of Theoretical and Applied Statistics 32.2 (1998): 189-198. But I'm actually not sure if this is connected to the inequality above. Update: The inequality from Dreier 1998 is stated under the assumption of a distribution with a non-negative characteristic function.
Is it true that $\langle X^4\rangle \ge 3 \langle X^2\rangle^2$?
CC BY-SA 4.0
null
2023-03-17T19:39:30.167
2023-03-18T22:04:03.520
2023-03-18T03:26:43.050
362671
5536
[ "moments", "probability-inequalities", "inequality" ]
609837
1
null
null
0
30
I'm working on a project in R where I'm looking at California's census tract-level demographic data in an explanatory logistic regression model. I have 6 demographic variables of interest and am controlling for population density. My binary predictor is 1=exposed to pollutant/0=unexposed to pollutant for each census tract. Here is the summary of my log regression in R: ``` Estimate Std. Error z value Pr(>|z|) (Intercept) 0.767410 0.782963 0.980 0.327019 percent_unemployed -0.089417 0.036507 -2.449 0.014312 * percent_minority -0.008516 0.011571 -0.736 0.461755 percent_no_diploma 0.053393 0.021160 2.523 0.011627 * percent_uninsured 0.064070 0.039517 1.621 0.104945 percent_under150 -0.016423 0.017528 -0.937 0.348798 percent_disabled -0.023421 0.033759 -0.694 0.487828 pop_density -0.352425 0.098325 -3.584 0.000338 *** ``` I know that exponentiating the coefficients will give me the ORs per 1% increase in the variable. However, I want to know the odds of exposure per 10% increase in the variable (i.e., for each 10% increase in the percent of the population living with a disability, the odds of being exposed increases/decreases by a factor of x). When I code `cali_logit$coefficients <- cali_logit$coefficients * 10` and re-run the summary, all of my coefficients suddenly become significant (when many weren't before), as shown below. ``` Estimate Std. Error z value Pr(>|z|) (Intercept) 7.67410 0.78296 9.801 < 2e-16 *** percent_unemployed -0.89417 0.03651 -24.493 < 2e-16 *** percent_minority -0.08516 0.01157 -7.360 1.84e-13 *** percent_no_diploma 0.53393 0.02116 25.233 < 2e-16 *** percent_uninsured 0.64070 0.03952 16.213 < 2e-16 *** percent_under150 -0.16423 0.01753 -9.369 < 2e-16 *** percent_disabled -0.23421 0.03376 -6.938 3.99e-12 *** pop_density -3.52425 0.09833 -35.843 < 2e-16 *** ``` This doesn't seem right...am I missing something? How do I get odds ratios per 10% increase rather than per 1% increase?
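Though the thread's code is in R, the underlying arithmetic can be sketched in a few lines of Python (the coefficient and standard error below are copied from the `percent_disabled` row of the first summary): the odds ratio per 10-unit increase is `exp(10 * beta)`, and rescaling is only harmless if the standard error is scaled by the same factor, which leaves the z-statistic unchanged.

```python
import math

# Values taken from the percent_disabled row of the model summary above.
beta, se = -0.023421, 0.033759

# Odds ratio per 1% increase and per 10% increase in the predictor.
or_per_1 = math.exp(beta)
or_per_10 = math.exp(10 * beta)   # equivalently or_per_1 ** 10

# The z-statistic is invariant to rescaling the predictor:
# the coefficient AND its standard error scale by the same factor.
z_original = beta / se
z_rescaled = (10 * beta) / (10 * se)   # identical to z_original
```

Multiplying only the coefficients (as in the second summary) while `summary()` reuses the old standard errors is what inflates every z-value by a factor of 10.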
Multiplying coefficients of logistic regression to get per 10 unit increase?
CC BY-SA 4.0
null
2023-03-17T19:44:56.040
2023-03-17T19:44:56.040
null
null
382776
[ "r", "regression", "logistic", "multiple-regression", "regression-coefficients" ]
609838
2
null
609821
2
null
## Yes, you should run the entire pipeline for each fold You are right to say that using feature selection on all samples would mean you have data leakage in the following classification cross-validation (CV) step. You will have the same problem if you separate feature selection (using CV) and classification CV. Just as in the previous scenario, the whole data set is used to inform feature selection (not in the exact same way, but nonetheless). Think of k-fold CV as doing k times a regular CV. In a regular CV, you run your entire pipeline on the training data, and evaluate on validation data. The exact same thing should be done for each of the folds of k-fold CV to avoid data leakage between the folds.
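A minimal NumPy sketch of the idea, with toy data and a deliberately trivial selector and classifier, just to show where the selection step must live (inside the fold loop, fit on the training indices only):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = (X[:, 0] + 0.1 * rng.normal(size=100) > 0).astype(int)  # feature 0 is informative

k_folds = 5
idx = rng.permutation(len(y))
folds = np.array_split(idx, k_folds)

accs = []
for i in range(k_folds):
    test_idx = folds[i]
    train_idx = np.concatenate([folds[j] for j in range(k_folds) if j != i])

    # Feature selection fit on the TRAINING fold only (no leakage):
    # pick the feature most correlated with y within the training data.
    corrs = [abs(np.corrcoef(X[train_idx, f], y[train_idx])[0, 1])
             for f in range(X.shape[1])]
    best = int(np.argmax(corrs))

    # Trivial "classifier": threshold the selected feature at its training mean.
    thr = X[train_idx, best].mean()
    pred = (X[test_idx, best] > thr).astype(int)
    accs.append((pred == y[test_idx]).mean())

cv_accuracy = float(np.mean(accs))
```

Replacing the selection with one fit on all of `X` before the loop is exactly the leakage described above.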
null
CC BY-SA 4.0
null
2023-03-17T19:48:00.457
2023-03-17T19:55:36.567
2023-03-17T19:55:36.567
250702
250702
null
609839
2
null
609796
0
null
> Are these interpretations correct? Not without important qualifications. With standard R coding, individual coefficients for predictors involved in interactions are for situations where the interacting predictors are at 0 (for continuous interacting predictors) or reference levels (for categorical interacting predictors). That requires changes to interpretations 1, 2 and 3. As these data are coded, the individual coefficient for `SexCodeMale` is the extra difference in outcome for males only when `PreTotalCentre` equals 0. That for `AgeCentre` is the change in outcome for a 1-unit change in `AgeCentre` only when `PreTotalCentre` equals 0. That for `PreTotalCentre` is the change in outcome for a 1-unit change of `PreTotalCentre` only when `SexCode` represents females and `AgeCentre` is 0. For the interaction terms, it can help to remember that they are products of the individual numerical codings. The coefficients are the associated extra changes in outcome associated with each unit of that product, beyond what you might have predicted based on lower-level individual or interaction terms. That has the following implications. For your interpretation 4: that interaction coefficient is the extra change associated with males for each unit increase in `PreTotalCentre`. For your interpretation 5: that interaction coefficient is the extra change associated with a 1-unit increase in `AgeCentre` for each unit increase in `PreTotalCentre`, and vice-versa. I recommend against trying to interpret any of these coefficients individually. Instead, use the model to display results for particular combinations of predictor values that are of interest, based on your understanding of the subject matter.
null
CC BY-SA 4.0
null
2023-03-17T19:59:05.347
2023-03-17T19:59:05.347
null
null
28500
null
609840
2
null
609094
1
null
The problem is at least partially mitigated if you standardize before computing the slopes. ``` df[,-1] = scale(df[,-1],center = T,scale = T) slopes = df %>% pivot_longer(cols = V1:V4) %>% group_by(ID, name) %>% nest() %>% mutate(modelout = map(data, ~lm(value ~ time, data = .x) %>% tidy %>% filter(term == "time") %>% select(slope = estimate))) %>% unnest() %>% summarise(slope = unique(slope)) %>% pivot_wider(names_from = name, values_from = slope) %>% column_to_rownames('ID') res_pca = prcomp(slopes) res_pca Standard deviations (1, .., p=4): [1] 0.434278053 0.133935718 0.097441635 0.001016885 Rotation (n x k) = (4 x 4): PC1 PC2 PC3 PC4 V1 0.84169164 0.05809774 0.0003982831 -0.53682369 V2 -0.09243915 0.86245718 0.4949728571 -0.05122969 V3 0.24846072 0.46867888 -0.7247261007 0.43974931 V4 -0.47040138 0.18202298 -0.4793472552 -0.71820358 ``` [](https://i.stack.imgur.com/FMHq9.png) More fundamentally, the question is, why is there so much variability in your variables between IDs? Is it meaningful, or should you be accounting for other sources of variance before estimating your slopes?
null
CC BY-SA 4.0
null
2023-03-17T20:01:33.827
2023-03-17T20:01:33.827
null
null
288142
null
609841
2
null
609836
5
null
An easy counterexample is a two point distribution $P(X = \pm 1) = 1/2$, for which $E[X] = 0$, $E[X^2] = E[X^4] = 1$. Hence $1 = E[X^4] < 3(E[X^2])^2 = 3$. This example also shows that the related inequality you claimed does not hold either.
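A quick numerical check of this counterexample (pure Python, no libraries needed):

```python
# Two-point distribution P(X = ±1) = 1/2.
values = [-1.0, 1.0]
probs = [0.5, 0.5]

m1 = sum(p * v for p, v in zip(probs, values))      # E[X]   = 0
m2 = sum(p * v**2 for p, v in zip(probs, values))   # E[X^2] = 1
m4 = sum(p * v**4 for p, v in zip(probs, values))   # E[X^4] = 1

holds = m4 >= 3 * m2**2   # False: 1 < 3
```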
null
CC BY-SA 4.0
null
2023-03-17T20:11:38.940
2023-03-17T20:11:38.940
null
null
20519
null
609842
2
null
609352
0
null
If the most important aspect is "visual" (rather than "verify"), then thinking about what kinds of deviations might exist (and what might cause the deviations) and how one might display the data that would show those deviations is required. Unless you're on acid, 3D plots might be the extent as to what you can display (other than changes in 3D plots over time if there was a time element). Below I increased your sample size to `n <- 1e5` and created 3D histograms (with estimated probability density as the vertical axis) along with the bivariate pdf for each pair of variables (using Mathematica). ``` data = Import["pairs.csv"]; data = data[[2 ;;]]; data = data[[All, 2 ;;]]; labels = {"\!\(\*SubscriptBox[\(x\), \(1\)]\)", "\!\(\*SubscriptBox[\(x\), \(2\)]\)", "\!\(\*SubscriptBox[\(x\), \(3\)]\)", "\!\(\*SubscriptBox[\(x\), \(4\)]\)", "\!\(\*SubscriptBox[\(x\), \(5\)]\)"}; p = 5; figures = Table[Show[Histogram3D[data[[All, {1, 2}]], Automatic, "PDF", RotationAction -> "Clip", SphericalRegion -> True, AxesLabel -> (Style[#, Bold, 18] &) /@ {labels[[i1]], labels[[i2]], ""}], Plot3D[(p - 2)/p + x[1]^(p - 1) + x[2]^(p - 1), {x[1], 0, 1}, {x[2], 0, 1}, PlotStyle -> Green]], {i1, 2, 5}, {i2, 1, i1 - 1}] ``` [](https://i.stack.imgur.com/jAoTr.png) The over- and under-estimates of density seem to occur without any pattern and none appear to be large.
null
CC BY-SA 4.0
null
2023-03-17T20:11:51.473
2023-03-17T23:18:05.487
2023-03-17T23:18:05.487
79698
79698
null
609843
2
null
609794
0
null
One way to deal with this would be to specify a different event label for each fracture location that you want to examine. The `Surv()` function in the R `survival` package allows for categorical `event` labels, provided that (fully) censored is the reference level. You aren't limited to 0/1 censored/event labeling. That provides a good deal of flexibility. When you do a survival analysis for time to fracture, specify a logical FALSE/TRUE event value that describes the particular location or combination of locations that you want to evaluate as the event for that analysis. Those for whom the logical value is FALSE will be treated as censored for that analysis but will still be available for other analyses. This also allows repeated-event or multi-state models of fractures within individuals if you include an appropriate `ID` value on each data row. See the [multi-state model vignette](https://cran.r-project.org/web/packages/survival/vignettes/compete.pdf) for an introduction.
null
CC BY-SA 4.0
null
2023-03-17T20:20:35.883
2023-03-17T20:20:35.883
null
null
28500
null
609844
1
null
null
2
93
I want to know the effect of differentiation on the independence of random variables. For a random variable $X$, when are $f^{(n)}(X)$ and $f^{(n+k)}(X)$ independent?, $\forall n\geq0\;, k\geq 1$.
When are $f(X)$ and $f'(X)$ independent?
CC BY-SA 4.0
null
2023-03-17T20:27:25.467
2023-03-19T08:00:04.633
null
null
383439
[ "independence", "derivative" ]
609845
1
null
null
1
22
I am looking for guidance on whether I am approaching my problem correctly. I have an annual time series { $x_{1}$, $x_{2}$, ..., $x_{t-1}$, $x_{t}$ }, where each observation is the estimated median rent price for one-bedroom apartments in an area. These data were derived from pooled cross-sectional samples collected each year, i.e., the median rent in 2015 is the median of a random sample of one-bedroom apartment rents collected in 2015, the median rent in 2016 is the median rent of a random sample of one-bedroom apartment rents collected in 2016, etc. I also have time series for the sample variances and sample sizes from each year. I would like to fit a weighted least squares regression model to the median rent series and use it to predict the median rent of one-bedroom apartments next year (period $t+1$), using a measure of reliability in each sample estimate as the observation-level weights. Because median rent is increasing over time, the mean and variance of each year's sample is also increasing over time (non-stationary). To account for this, I assign each observation a weight equal to the sample median divided by the sample variance: $$ w_{t} = \frac{{x_t}}{s_t^2} $$ This results in less reliable estimates being weighted less than more reliable estimates. Because median rent is non-stationary, for forecasting purposes I also first need to difference the series, then fit my model on this differenced series. In doing so, however, I also need to modify the weights so they measure the reliability of the change in median rent rather than the median rent itself. This is because a less reliable estimate of median rent in time $t-1$ may be followed by a more reliable estimate of median rent in time $t$ or vice versa, in which case the weight for $D1.x_{t}$ (e.g., $w_{x_{t}-x_{t-1}}$) should reflect that. If I'm on the right track, then my question is what's the standard approach to calculate the weight for $x_{t}-x_{t-1}$? 
Should I just average the weights of $x_{t}$ and $x_{t-1}$? Should I use some kind of pooled variance like what you would use for a two-sample t-test? Something else?
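One possible convention, ASSUMING the yearly samples are independent, is that the variance of a difference is the sum of the variances, so the weight for $x_t - x_{t-1}$ would be the reciprocal of $s_t^2 + s_{t-1}^2$. A sketch with made-up numbers (this illustrates the convention; it is not a definitive answer to the question):

```python
# Hypothetical yearly median rents and (made-up) variances of each estimate.
x = [900.0, 950.0, 1010.0, 1080.0]
s2 = [25.0, 40.0, 30.0, 35.0]

# First differences and their weights, assuming independent yearly samples:
# Var(x_t - x_{t-1}) ≈ s2_t + s2_{t-1}, weight = 1 / that sum.
diffs = [x[t] - x[t - 1] for t in range(1, len(x))]
w_diff = [1.0 / (s2[t] + s2[t - 1]) for t in range(1, len(x))]
```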
Calculating weights for a weighted least squares regression on a differenced time series
CC BY-SA 4.0
null
2023-03-17T20:31:58.633
2023-03-23T00:50:29.193
2023-03-23T00:50:29.193
11887
362665
[ "time-series", "forecasting", "weighted-regression", "weighted-variance" ]
609847
1
null
null
1
7
I am a non stat person so pls be kind in your reply! What sort of statistic can I use to compare how the items of 3 different Likert scales covary? My respondents sample is 150 ppl. Each respondent completes all 3 scales. I would like to understand if there is a pattern between the way items of different scales are rated, across these 150 ppl. So, basically, if there is a trend, or average pattern, in the way this sample rates these 3 sets of scales. What is it, is it factor analysis, regression, or else. Don't have a clue. Any advice is welcome! Thank you. Amy
Analyse clustered items from different scales?
CC BY-SA 4.0
null
2023-03-17T21:04:41.493
2023-03-17T21:04:41.493
null
null
383504
[ "regression", "factor-analysis", "covariance-matrix" ]
609848
1
null
null
0
9
I am using LSTM to model time series data. My target variable is categorical so I am using one-hot encoding. The goal is to predict the target class based on the given time. My dataset spans over eight days. ``` input_nodes = look_back = 10 batch_size = 128 train_generator = create_data_generator(train, look_back, outputs, batch_size, class_weights_dict) validation_generator = create_test_generator(test, look_back, outputs, batch_size) model = Sequential() model.add(LSTM(50, input_shape=(input_nodes, outputs))) #model.add(Dense(50, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(outputs, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # fit network history = model.fit_generator(train_generator, steps_per_epoch=math.ceil(len(train)/batch_size), epochs=25, validation_data=validation_generator, validation_steps=math.ceil(len(test)/batch_size)) ``` Here are my results. [](https://i.stack.imgur.com/5DfJr.png) [](https://i.stack.imgur.com/WwOqx.png) Is there anything that I can try to improve my validation accuracy and loss?
LSTM validation accuracy fluctuating
CC BY-SA 4.0
null
2023-03-17T21:17:23.583
2023-03-17T21:38:57.843
2023-03-17T21:38:57.843
383505
383505
[ "validation", "lstm" ]
609850
2
null
609731
1
null
Read this: Wu, H., Estabrook, R. (2016). Identification of Confirmatory Factor Analysis Models of Different Levels of Invariance for Ordered Categorical Outcomes. Psychometrika, 81, 1014–1045. [https://doi.org/10.1007/s11336-016-9506-0](https://doi.org/10.1007/s11336-016-9506-0) You can use this to help with correct specification: `?semTools::measEq.syntax`
null
CC BY-SA 4.0
null
2023-03-17T21:22:38.547
2023-03-17T21:22:38.547
null
null
335062
null
609851
2
null
609836
6
null
#### Both of the inequalities you assert are false For a random variable with zero mean, the moment quantity $\langle X^4 \rangle /\langle X^2 \rangle^2$ is the [kurtosis](https://en.wikipedia.org/wiki/Kurtosis) of the distribution, which has a lower bound of one, not three. Thus, you can find abundant counter-examples to your first inequality by choosing any distribution that is platykurtic (i.e., has a kurtosis less than three). You can also find counter-examples to your second inequality by choosing any symmetric distribution that is sufficiently platykurtic to have a kurtosis less than two. The counter-example given in the other answer here is the most platykurtic distribution there is (and it is also symmetric), so it gives a simple counter-example to both purported inequalities. Other examples of symmetric platykurtic distributions include the [discrete uniform distribution](https://en.wikipedia.org/wiki/Discrete_uniform_distribution), the [continuous uniform distribution](https://en.wikipedia.org/wiki/Continuous_uniform_distribution), the [Wigner semicircle distribution](https://en.wikipedia.org/wiki/Wigner_semicircle_distribution), the [raised cosine distribution](https://en.wikipedia.org/wiki/Raised_cosine_distribution), certain parameterisations of the [beta distribution](https://en.wikipedia.org/wiki/Beta_distribution), and many others.
null
CC BY-SA 4.0
null
2023-03-17T21:33:28.267
2023-03-18T22:04:03.520
2023-03-18T22:04:03.520
173082
173082
null
609852
2
null
609817
1
null
Given your description, I would just assign cutoff points at percentiles of the distribution of total spend. With five categories, equal intervals (same number of observations in each category) would be quintiles, i.e. [0%,20%), [20%,40%),[40-60%),[60%,80%),[80%-100%].
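A sketch of the quintile binning with NumPy (hypothetical spend data):

```python
import numpy as np

rng = np.random.default_rng(1)
spend = rng.gamma(shape=2.0, scale=100.0, size=1000)  # hypothetical total-spend values

# Quintile cutoffs: the 20th, 40th, 60th and 80th percentiles.
cutoffs = np.percentile(spend, [20, 40, 60, 80])

# Assign each observation to a category 0..4 via its position among the cutoffs.
category = np.searchsorted(cutoffs, spend, side="right")

counts = np.bincount(category, minlength=5)  # roughly equal counts per category
```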
null
CC BY-SA 4.0
null
2023-03-17T21:53:57.223
2023-03-17T21:53:57.223
null
null
362665
null
609853
2
null
584685
0
null
My answer is more of a comment... I am a bit surprised that Keras does not have this kind of dual use in the layers and, as the OP says, all the tutorials online, even when claiming to build LLM as GPT, do independent layers. The corresponding keras layers of the transformers library do indeed embedding-embedding tying, but at the cost of ad-hoc command in the call. A possible alternative could be an initializer that provides the same variable to the embedding an unembedding layers. Now, if one defines a metric to check the difference between embedding and embedding matrix, say ``` print("Misalignment:", np.mean(np.square(np.transpose(unembedding.get_weights()[0])-embedding.get_weights()[0]))) ``` it can be seen that it usually decreases during the training. So perhaps it is not provided because convergence is foreseen to happen in a natural way in a long training, and it only needs to be forced in a fine tuning or short training. Also, it does not need to be forced as hard as installing a single layer, it is possible to promote the above metric to an additional loss, and then we have a sort of "elastic parameter tying"
null
CC BY-SA 4.0
null
2023-03-17T22:08:35.997
2023-04-19T11:53:58.073
2023-04-19T11:53:58.073
176743
176743
null
609855
2
null
86040
2
null
## Data-driven theory-backed procedure If you want a formal treatment of the subject, a good method comes from a pioneering paper by [Andrews & Buchinsky (2000, Econometrica)](https://www.jstor.org/stable/2999474): do some small number of bootstrap replications, see how stable or noisy the estimator is, and then, based on some target accuracy measure, increase the number of replications until you are sure that this resampling-related error has reached a certain lower bound with a chosen certainty. Our helper here is the Weak Law of Large Numbers where the asymptotics are in B. To be more specific, B is chosen depending on the user-chosen bound on the relative deviation measure of the Monte-Carlo approximation of the quantity of interest based on B simulations. This quantity can be standard error, p-value, confidence interval, or bias correction. The closeness is the relative deviation $R^*$ of the B-replication bootstrap quantity from the infinite-replication quantity (or, to be more precise, the one that requires $n^n$ replications): $R^* := (\hat\lambda_B - \hat\lambda_\infty)/\hat\lambda_\infty$. The idea is, find such B that the actual relative deviation of the statistic of interest be less than a chosen bound (usually 5%, 10%, 15%) with a specified high probability $1-\tau$ (usually $\tau = 5\%$ or $10\%$). Then, $$\sqrt{B} \cdot R^* \xrightarrow{d} \mathcal{N}(0, \omega),$$ where $\omega$ can be estimated using a relatively small (usually 200–300) preliminary bootstrap sample that one should be doing in any case. Here is the general formula for the number of necessary bootstrap replications $B$: $$ B \ge \omega \cdot (Q_{\mathcal{N}(0, 1)}(1-\tau/2) / r)^2,$$ where r is the maximum allowed relative discrepancy (i.e. accuracy), $1-\tau$ is the probability that this desired relative accuracy bound has been achieved, $Q_{\mathcal{N}(0, 1)}$ is the quantile function of the standard Gaussian distribution, and $\omega$ is the asymptotic variance of $R$*. 
The only unknown quantity here is $\omega$ that represents the variance due to simulation randomness. The general 3-step procedure for choosing B is like this: - Compute the approximate preliminary number $B_1 := \lceil \omega_1 (Q_{\mathcal{N}(0, 1)}(1-\tau/2) / r)^2 \rceil$, where $\omega_1$ is a very simple theoretical formula from Table III in Andrews & Buchinsky (2000, Econometrica). - Using these $B_1$ samples, compute an improved estimate $\hat\omega_{B_1}$ using a formula from Table IV (ibid.). - With this $\hat\omega_{B_1}$ compute $B_2 := \lceil\hat\omega_{B_1} (Q_{\mathcal{N}(0, 1)}(1-\tau/2) / r)^2 \rceil$ and take $B_{\mathrm{opt}} := \max(B_1, B_2)$. If necessary, this procedure can be iterated to improve the estimate of $\omega$, but this 3-step procedure as it is tends to yield already conservative estimates that ensure that the desired accuracy has been achieved. This approach can be vulgarised by taking some fixed $B_1 = 1000$, doing 1000 bootstrap replications in any case, and then, doing steps 2 and 3 to compute $\hat\omega_{B_1}$ and $B_2$. Example (Table V, ibid.): to compute a bootstrap 95% CI for the linear regression coefficients, in most practical settings, to be 90% sure that the relative CI length discrepancy does not exceed 10%, 700 replications are sufficient in half of the cases, and to be 95% sure, 850 replications. However, requiring a smaller relative error (5%) increases B to 2000 for $\tau=10\%$ and to 2700 for $\tau=5\%$. This agrees with the formula for B above. If one seeks to reduce the relative discrepancy r, by a factor of k, the optimal B goes up roughly by a factor of $k^2$, whilst increasing the confidence level that the desired closeness is reached merely changes the critical value of the standard normal (1.96 → 2.57 for 95% → 99% confidence). ## Concise practical advice This being said, we should realise that not everyone is a theoretical econometrician with deep bootstrap knowledge, so here is my quick rule of thumb. 
- B >= 1000, otherwise your paper will be rejected with something like ‘We are not in the Pentium-II era’ from Referee 2. - Ideally, B >= 10000; try to do it if your computer can handle it. - You could check if your B yields the desired probability $1-\tau$ of achieving the desired relative accuracy $r$ for the values thereof that are psychologically comfortable for you (e.g. $r= 5\%$ and $\tau=5\%$). - If not, increase B to the value dictated by the A&B 3-stage procedure described above. - In general, for any actual accuracy of your bootstrapped quantity, to increase the desired relative accuracy by a factor of k, increase B by a factor of $k^2$. Happy bootstrapping!
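The formula $B \ge \omega \cdot (Q_{\mathcal{N}(0, 1)}(1-\tau/2) / r)^2$ can be sketched as a small helper. The $\omega$ passed in is a placeholder — in practice it comes from the Table III/IV formulas or a preliminary bootstrap, as described above:

```python
import math
from statistics import NormalDist

def replications_needed(omega, r, tau):
    """Minimum B so the relative deviation stays below r with probability 1 - tau."""
    z = NormalDist().inv_cdf(1 - tau / 2)
    return math.ceil(omega * (z / r) ** 2)

# With a placeholder omega = 1: halving r roughly quadruples the required B,
# matching the k -> k^2 scaling rule stated above.
b_coarse = replications_needed(1.0, 0.10, 0.05)
b_fine = replications_needed(1.0, 0.05, 0.05)
```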
null
CC BY-SA 4.0
null
2023-03-17T22:48:02.417
2023-03-17T22:48:02.417
null
null
41603
null
609856
1
609859
null
1
29
I'm reading a documentation on causal inference on graph and I'm currently on the chapter about identification. In this section, the authors give examples of invalid adjustment sets, one of which is as in Figure 4a below, followed by a description of why conditioning on a collider biases the estimate of P(B|do(A)): [](https://i.stack.imgur.com/kGeAC.png) However, I'm confused about the first sentence of this paragraph. Why is A and B statistically independent of each other/why are they d-separated in this graph? A is shown to be adjacent to B/they are connected by a directed edge. Am I understanding this incorrectly? Is it the case that the authors put this statement in a slightly wrong way?
Invalid Adjustment Sets in causal graph
CC BY-SA 4.0
null
2023-03-17T23:04:37.573
2023-03-18T00:51:57.103
2023-03-18T00:40:00.710
44269
279018
[ "causality", "d-separation" ]
609857
1
null
null
0
38
According to my understanding, $C(X)$ is a random variable defined as follows: > Let $\mathcal{P}$ be a family of distributions (defined by user). Let $\alpha>0$. Let $\theta$ be some parameter of a distribution (e.g., mean). When $C(X)$ satisfies the following inequality for all $\mathbb{P}\in \mathcal{P}$: $$ \mathbb{P}(\theta\in C(X)) > 1-\alpha $$ then $C(X)$ is called a $1-\alpha$ confidence set or interval. Though a confidence set $C(X)$ should be a random variable, in many research studies, people refer to the determined value $C(x_0)$ as a "confidence set" after observing $X=x_0$. Is there any formal mathematical guarantee regarding $\theta$ and $C(x_0)$? By "formal", I mean a statement or guarantee that can be determined to be true or false, as a usual mathematical statement can. If not, how can people derive any meaningful insights by finding $C(x_0)$?
What kind of formal guarantee does a confidence interval provide after an observation?
CC BY-SA 4.0
null
2023-03-17T23:28:58.590
2023-03-17T23:58:32.243
2023-03-17T23:58:32.243
310702
310702
[ "hypothesis-testing", "statistical-significance", "confidence-interval", "probability-inequalities" ]
609859
2
null
609856
1
null
Taking the DAGs from left to right: - Left DAG: $A$ and $B$ are d-separated because there are no backdoor paths open between them. Unconditioned, the path $A\leftarrow X_1 \rightarrow Z \leftarrow X_2 \rightarrow B$ is blocked by the collider $Z$. The point they are making in their comment is that conditioning on $Z$ opens the flow of association through $X_1 \rightarrow Z \leftarrow X_2$. - Middle DAG: $A$ and $B$ are d-separated because there are no backdoor paths, and 'front door paths' (i.e. no conditioning on a common descendant of both $A$ and $B$). The point they are making in their comment is that conditioning on $Z$ closes the flow of association from $A \rightarrow Z \rightarrow B$ (conditioning on a non-collider path closes it to the flow of association). - Right DAG: $A$ and $B$ are d-separated because there are no backdoor or 'front door' paths open between them. The point they are making in their comment is that conditioning on $Z$ opens the 'front door' path from $A \rightarrow Z \leftarrow X$ (and thence to $B$ via $B$'s backdoor path). I think the statement about d-separation is that any contrast measure relating $A$ and $B$ produces an unbiased estimate of the causal effect of $A$ on $B$. Maybe that's a bit sloppy: clearly if $A \rightarrow B$, then $A$ and $B$ are not d-separated. So I think your question "Is it the case that the authors put this statement in a slightly wrong way?" must be answered "Yes."
null
CC BY-SA 4.0
null
2023-03-18T00:31:52.570
2023-03-18T00:51:57.103
2023-03-18T00:51:57.103
44269
44269
null
609860
1
null
null
1
34
I've been running several mixed linear models in R. I use the `lmer` function from lmerTest. I also ran the same analyses (or so I thought) in JASP. JASP uses R behind the scenes and shows you the R code. It turns out that JASP constructs a different model (see 1 below) than I did (see 2 below). Short question: What is the difference between the two model specifications below? Model 1: ``` value ~ variable + (1 + variable | topic) + (1 + variable | ResponseId) ``` Model 2: ``` value ~ variable + (1|ResponseId) + (1|topic) ``` - value is a continuous dependent variable. - variable and topic are within-subject random variables (factor) - ResponseId is the subject id (factor)
What is the difference between these two mixed model specifications?
CC BY-SA 4.0
null
2023-03-18T00:53:58.320
2023-03-18T03:21:17.193
2023-03-18T03:16:10.673
345611
127570
[ "r", "regression", "mixed-model", "lme4-nlme", "jasp" ]
609861
2
null
609860
0
null
There is another, more general, question on StackExchange: [R's lmer cheat sheet](https://stats.stackexchange.com/q/13166/127570). The resources linked there might be able to address my question. In particular, it seems that the form (1 | ResponseId) only allows for a random intercept per responseId. In contrast, the form (1 + variable | responseId) fits different slopes (for the effect of the variable) and different intercepts for each responseId. It would be nice if someone more knowledgeable could confirm (or correct).
null
CC BY-SA 4.0
null
2023-03-18T01:52:02.443
2023-03-18T02:11:05.743
2023-03-18T02:11:05.743
127570
127570
null
609862
1
null
null
0
54
From my understanding of reverse-mode auto-differentiation, after the forward-pass computation graph has been constructed, gradients are passed from the loss down through the tree to the leaves. Essentially, each node is expected to calculate the gradient of the loss with respect to each of its children, given its own gradient and the operation it performed. E.g. if $\frac{dL}{dz}=2$ and $z = xy$, then $dL/dx = 2y$ would be passed from the $z$ node to the $x$ node in the computation graph. When vector and matrix operations are involved, this becomes more complicated. For example if a node $\vec{p} = \mathbf{A} \vec{q}$ is given $\frac{dL}{d\vec{p}}$ and wants to find $\frac{dL}{d\mathbf{A}}$, the vector Jacobian product $\frac{dL}{d\vec{p}} \frac{d\vec{p}}{d\mathbf{A}}$ would have to be calculated. From what I understand, the result of the VJP $\left( \frac{dL}{d\mathbf{A}} \right)$ can be calculated without explicitly finding the Jacobian $\frac{d\vec{p}}{d\mathbf{A}}$, through some mathematical trick specific to the operation. What I don't understand is what the Jacobian in this case would even be, and how you would go about multiplying it by the incoming gradient vector, if you wanted to go about the VJP computation in the "naive" way. Since $\vec{p}$ is a vector (let's say $n \times 1$) and $\mathbf{A}$ is a matrix (let's say $n \times m$), the Jacobian $\frac{d\vec{p}}{d\mathbf{A}}$ would presumably be some kind of rank 3 tensor with dimensions $n \times n \times 1$ (or maybe $n \times 1 \times n \times m$?). My question is what would the Jacobian be in a case like this, and how would you go about finding its product with a vector? Or is it not valid to apply the VJP chain rule ($\frac{dL}{d\mathbf{A}} = \frac{dL}{d\vec{p}} \frac{d\vec{p}}{d\mathbf{A}}$) in this case?
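A sketch of both routes with NumPy, under the stated setup $\vec{p} = \mathbf{A}\vec{q}$ with a given incoming gradient $g = dL/d\vec{p}$. The Jacobian in this case is the rank-3 tensor $J_{ikl} = \partial p_i/\partial A_{kl} = \delta_{ik}\,q_l$ of shape $n \times n \times m$, and contracting $g$ against it reproduces the well-known shortcut $dL/d\mathbf{A} = g\,\vec{q}^{\,\top}$, so the VJP chain rule is valid here:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 4
A = rng.normal(size=(n, m))
q = rng.normal(size=m)
g = rng.normal(size=n)          # incoming gradient dL/dp

# The shortcut: for p = A q, the VJP w.r.t. A collapses to an outer product,
# without ever materialising the (n x n x m) Jacobian dp/dA.
grad_A = np.outer(g, q)

# The "naive" route: build the rank-3 Jacobian J[i, k, l] = dp_i / dA_kl
# explicitly and contract the incoming gradient against its first index.
J = np.zeros((n, n, m))
for i in range(n):
    J[i, i, :] = q              # dp_i/dA_il = q_l; zero whenever k != i

grad_A_naive = np.einsum('i,ikl->kl', g, J)

max_err = float(np.abs(grad_A - grad_A_naive).max())   # the two routes agree
```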
Meaning of vector Jacobian Product with matrix inputs
CC BY-SA 4.0
null
2023-03-18T02:20:04.590
2023-03-18T02:20:04.590
null
null
370193
[ "backpropagation", "jacobian", "automatic-differentiation" ]
609865
1
null
null
1
44
I know I am able to calculate Mann Whitney U tests when comparing 2 samples of unequal size, but I am wondering if I am able to carry this same principle over when calculating ROC AUC via the formula: > AUC = U / (n1*n2) where U is the U statistic, n1 is the number of positive examples, and n2 is the number of negative examples I am trying to compare test scores by disease status and, ideally, want to be able to say something about discrimination, beyond just association. For example, I have 525 cases and 1770 controls. With a p-value of 0.5, I find no evidence of an association between the test scores and disease status. But with a U statistic of 471313.5, would it be valid to calculate > AUC = 471313.5 / (525*1770) = 0.5071977 and conclude that the test has poor discrimination for this disease? I scanned through some papers and StackExchange posts but was unable to find much about the assumptions of the AUC/MWU relationship when it comes to sample size. It was only brought to my attention as a potential issue when I...consulted ChatGPT.
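The identity AUC = U / (n1*n2) holds for any sample sizes — nothing in the rank-sum construction requires n1 = n2. A NumPy sketch with deliberately unequal groups (simulated scores, not the data from the question), comparing the U-based value with the direct pairwise definition of the AUC:

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.normal(0.3, 1.0, size=50)    # scores for cases (n1 = 50)
neg = rng.normal(0.0, 1.0, size=170)   # scores for controls (n2 = 170)

# Rank all scores jointly (1-based); continuous scores, so no ties to handle.
scores = np.concatenate([pos, neg])
order = scores.argsort()
ranks = np.empty(len(scores))
ranks[order] = np.arange(1, len(scores) + 1)

n1, n2 = len(pos), len(neg)
U = ranks[:n1].sum() - n1 * (n1 + 1) / 2   # Mann-Whitney U for the cases
auc_from_u = U / (n1 * n2)

# Direct definition: fraction of (case, control) pairs with case score higher.
auc_direct = np.mean(pos[:, None] > neg[None, :])

agrees = abs(auc_from_u - auc_direct) < 1e-12
```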
Can I calculate ROC AUC from Mann Whitney U tests when I am comparing unequal sample sizes?
CC BY-SA 4.0
null
2023-03-18T03:46:26.387
2023-03-18T03:46:26.387
null
null
375395
[ "assumptions", "roc", "wilcoxon-mann-whitney-test", "auc" ]
609866
2
null
609829
9
null
First, from the perspective of education, PCA is a good entryway to the world of dimension-reduction techniques and associated methods. Whether we're talking ICA, non-negative matrix factorization, confirmatory factor analysis, partial-least squares, canonical correlation analysis... (you get the idea), understanding PCA gets you halfway there. From a practical standpoint, there are lots of times when PCA can be sensibly used to combine variables - I think the other answers give good examples. Now, I think what your question is really getting at is - what's the point of a naive application of PCA? What's the point of throwing PCA at a long list of variables and getting back a handful of components? Well, it's quite useful if all you care about is model prediction, and it's also quite handy when you have a lot of highly correlated variables in your data set. PCA is also very useful if you want to understand the structure of your data - for example running PCA on genetic data (millions of variables) tells you about broad trends in ancestry in your data set.
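To illustrate that last point with a toy example (simulated data, numpy only — not a recipe for any particular dataset): when ten variables are driven by two latent factors, PCA recovers nearly all the variance in just two components, which you can then feed to a model:

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 samples, 10 variables, but only 2 underlying factors,
# so the 10 variables are highly correlated with one another.
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(200, 10))

# PCA via SVD of the centred data matrix.
Xc = X - X.mean(axis=0)
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()

scores = Xc @ Vt[:2].T  # the "handful of components" to feed downstream
print(explained[:2].sum())  # close to 1: two components suffice
```

With genetic data the same computation (on a vastly larger matrix) is what surfaces the broad ancestry trends mentioned above.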
null
CC BY-SA 4.0
null
2023-03-18T03:48:50.897
2023-03-18T09:32:53.717
2023-03-18T09:32:53.717
22047
288142
null
609867
1
null
null
0
8
What models can I use to measure the impact of a recurring event? I know that event analysis can gauge the impact of a one-shot event--for example, the impact of an economic policy (which only happens once) on GDP growth in different countries. But what if the event is recurring? For example, what if I'd like to measure the impact of earthquakes on GDP? There are several difficulties:

- Earthquakes are recurring, so there is no strict "pre-event" or "post-event" distinction (particularly if, let's assume, earthquakes happen quite frequently).
- The series of earthquakes is autocorrelated and dependent, meaning that some regions are simply more likely to have earthquakes than others. Also, previous earthquakes make the crust more unstable and thus make further earthquakes more likely--causing an endogeneity problem.
Measuring the Impact of Recurring Events
CC BY-SA 4.0
null
2023-03-18T04:35:03.163
2023-03-18T04:35:03.163
null
null
189907
[ "regression", "recurrent-events" ]
609868
2
null
609844
0
null
A more modest property, namely a lack of correlation, holds in a number of cases. Consider the case when $f$ is invertible and let the density of $X$ be written as $p(f(x))$ (wlog). Further assume (wlog) that $\mathbb E[f(X)]=0$ and that $p$ has mean zero, i.e. $\int y\,p(y)\,\text dy=0$. Then, by the change of variables $y=f(x)$, \begin{align}\text{cov}(f(X),f'(X))&=\int f(x)f'(x)p(f(x))\text dx\\&=\int f(x)\frac{\text df(x)}{\text dx} p(f(x))\text dx\\&= \int f(x) p(f(x))\,{\text df(x)}\\ &= \int y\, p(y)\,\text dy\\ &= 0 \end{align}
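As a quick Monte Carlo sanity check of the zero-correlation claim on one concrete invertible example of my own choosing — $X \sim N(0,1)$ with $f(x) = x^3$, so $f'(x) = 3x^2$ — the sample correlation is numerically negligible:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)  # X ~ N(0, 1)

f = x**3           # f(X): invertible, with E[f(X)] = E[X^3] = 0
fprime = 3 * x**2  # f'(X)

# cov(X^3, 3X^2) = 3 E[X^5] - E[X^3] E[3X^2] = 0 by odd symmetry.
corr = np.corrcoef(f, fprime)[0, 1]
print(round(corr, 4))
```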
null
CC BY-SA 4.0
null
2023-03-18T05:34:49.387
2023-03-19T08:00:04.633
2023-03-19T08:00:04.633
7224
7224
null
609869
2
null
483888
1
null
> My question, why use 5-fold cross-validation only in the final estimator? why isn't final estimator fitted on the full X' (output from base estimators)?

Short answer: You probably misunderstood what `StackingClassifier` does (and so did I at first), because the description provided in scikit-learn is prone to misinterpretation (not our fault). If you check the source code [here](https://github.com/scikit-learn/scikit-learn/blob/9aaed498795f68e5956ea762fef9c440ca9eb239/sklearn/ensemble/_stacking.py#L252), you will see that the scikit-learn implementation does stacking correctly.

Long answer. Robby the Belgian's [answer](https://stats.stackexchange.com/a/484681) does not address the following possibility, which I guess was your main concern. Consider training the final estimator. Suppose that one of the subestimators overfits pathologically, e.g. it memorises all data seen at training. If the subestimators are passed to the final estimator after being trained on the whole dataset, then the final estimator has no means of telling an overfitting subestimator apart from a genuinely good one, because there is no held-out data left to estimate the subestimators' generalisation error. The final estimator will thus rely on the overfit subestimator when making final predictions, thinking that it is the best one, even if truly decent subestimators are available.

As a short digression, let me quote a slightly different justification for holding out some data from subestimators when stacking models. Hastie, Tibshirani & Friedman write in "Elements of Statistical Learning" (page 290):

> ... If [subestimator] $\hat{f}_m(x), \,m=1,\dots,M$ represent the prediction from the best subset of inputs of size $m$ among $M$ total inputs, then linear regression [final estimator] would put all of the weight on the largest model, that is, $\hat{w}_M=1,\, \hat{w}_m=0,\, m<M$.
> The problem is that we have not put each of the models [subestimators] on the same footing by taking into account their complexity (the number of inputs $m$ in this example).

To put this differently, if the candidate models come from nested model spaces $\mathcal{M}_1 \subset \mathcal{M}_{2} \subset \dots \subset \mathcal{M}_M$ and the training set is reused by the final estimator, then it will always choose the model from $\mathcal{M}_M$ (i.e. the most flexible model), simply because the optimum from the superset is always at least as good as what a subset has to offer. [Digression end]

Now suppose that the subestimators passed to the final estimator are all decent and do not overfit, despite being trained on the whole dataset. Suppose they are "on equal footing" in the sense that they have similar training and generalisation errors: one subestimator is better at some data, another is better at some other data, and so on. If the final estimator sees the input in addition to the subestimators' predictions at that input (the `passthrough=True` option), then there is a possibility that the final estimator overfits by memorising which of the subestimators happened to be correct at each input, instead of learning to combine sub-predictions in a generalisable way. In fact, the final estimator can potentially identify datapoints just by the subestimators' predictions at them (even when `passthrough=False`). Overfitting of the final estimator can in principle be controlled by tuning its hyperparameters, but this is not what `cross_val_predict` does inside `StackingClassifier`.

So, overfitting in stacking can be caused by either of the following:

- some subestimators overfit and are preferred by the final estimator;
- the final estimator overfits per se.

Pitfall 2 can be avoided by using a simple model as the final estimator or by tuning it in an outer loop (like GridSearchCV, but I don't think this is a good idea). We have to do this manually.
Pitfall 1 is avoided in `StackingClassifier` automatically by the fact that at training time, the subestimators are passed to the final estimator after being trained on a part of the dataset. In other words, the final estimator is trained on the whole dataset but its inputs are out-of-fold predictions of the subestimators. This is precisely what is meant by:

> ... final_estimator_ is trained using cross-validated predictions of the base estimators using cross_val_predict.

In my opinion, "cross-validated out-of-fold predictions" would be a better phrasing. After the training of the final estimator is done, we can re-train the subestimators on the whole dataset to further improve their performance. This is what is meant by

> Note that estimators_ are fitted on the full X ...

Putting these two statements in one sentence causes confusion.
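To make the mechanism concrete, here is a minimal hand-rolled sketch of the same training protocol (toy data and estimator choices of my own, not scikit-learn's internals verbatim): the final estimator is fit on out-of-fold predictions obtained via `cross_val_predict`, and only afterwards are the subestimators refit on the full data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# One subestimator that memorises its training data, one that does not.
subs = [DecisionTreeClassifier(random_state=0),
        RandomForestClassifier(max_depth=3, random_state=0)]

# Out-of-fold predictions: each row of Z comes from a clone that never saw
# that row during fitting, so an overfitting subestimator cannot hide.
Z = np.column_stack([
    cross_val_predict(m, X, y, cv=5, method='predict_proba')[:, 1]
    for m in subs])

final = LogisticRegression().fit(Z, y)  # trained on the whole of Z

# Deployment step: refit the subestimators on the full data
# ("fitted on the full X"), after the final estimator is trained.
for m in subs:
    m.fit(X, y)
```

Because Z holds only out-of-fold predictions, a memorising subestimator looks no better to the logistic regression than its genuine generalisation error warrants.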
null
CC BY-SA 4.0
null
2023-03-18T06:34:26.097
2023-04-07T23:04:26.627
2023-04-07T23:04:26.627
254326
254326
null
609870
2
null
609829
3
null
I've used PCA in facial motion capture for real time animatronic control of the 'lots of dots on a face' variety. I was able to find out which dots - which is to say regions of the face - encoded the most information in movement and emotive expression. It's obvious to some where these may be, and there is a natural correlation between these areas and how much they move, but I wanted to confirm my intuition. I could only track so many dots so with that information I was able to more efficiently place them around the face, with more density in areas that encoded the most useful 'perpendicular' data and more sparsely in those that only became relevant on their own merits occasionally. This is grossly simplified, and there was a lot more to it (the eyes... another world going on there) and I'd probably use a NN or similar with no dots this time round, but PCA played an integral part of the learning.
null
CC BY-SA 4.0
null
2023-03-18T06:58:37.333
2023-03-18T09:35:44.733
2023-03-18T09:35:44.733
22047
70897
null
609871
1
609887
null
3
127
I'm continuing my slow trudge through Simon Wood's [book](https://www.routledge.com/Generalized-Additive-Models-An-Introduction-with-R-Second-Edition/Wood/p/book/9781498728331#) on generalized additive models (GAMs), and it has given me some new useful insights. However, I am still confused after reading through Chapter 2 about what value the REML/fREML estimates provide in GAM summaries. For example, I have fit these two models using REML below:

```
#### Load Libraries ####
library(mgcv)
library(gamair)
library(tibble)

#### Load Data ####
data("wesdr")
wes <- as_tibble(wesdr)

#### Fit Candidate Models ####
fit.1 <- gam(
  ret ~ s(dur, bs = "cr"),
  data = wes,
  method = "REML"
)
fit.2 <- gam(
  ret ~ s(dur, bs = "tp"),
  data = wes,
  method = "REML"
)

#### Summarize ####
summary(fit.1)
summary(fit.2)
```

The REML values for these two GAMs are the following:

- fit.1: 467.41
- fit.2: 469.33

I rarely ever hear or see anything about these values in most articles and videos I have seen on GAMs. I also see pretty much no explanation of these values elsewhere on this site. However, I feel like they exist for a reason. What utility do these values have?
What interpretation do REML/fREML values provide in generalized additive models (GAMs)?
CC BY-SA 4.0
null
2023-03-18T07:20:09.283
2023-03-18T12:28:03.663
2023-03-18T08:05:28.587
362671
345611
[ "r", "regression", "generalized-additive-model", "mgcv", "reml" ]