Columns: Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
609371
1
609662
null
2
40
Let $X_1, \dots, X_n$ be i.i.d. samples drawn from a pdf $f(x)$ on the real line. The kernel density estimator is defined as $$\hat{f}_n(x) = \frac{1}{nh}\sum_{k=1}^n K\left(\frac{x-X_k}{h}\right)$$ where $K:\mathbb{R}\to [0, \infty)$ satisfies $\int_{-\infty}^{\infty}K(x)\,dx = 1$ and $h>0$. How can one prove that $\Vert \hat{f}_n - f \Vert_1$ is sub-Gaussian with parameter $\frac{1}{\sqrt{n}}$, i.e. $$\mathbb{P}\left(\left|\Vert \hat{f}_n - f \Vert_1 - \mathbb{E}\Vert \hat{f}_n - f \Vert_1\right|\geq t\right) \leq 2e^{-\frac{nt^2}{2}},$$ where $$\Vert \hat{f}_n - f \Vert_1:=\int_{-\infty}^{\infty}|\hat{f}_n(x)-f(x)|\,dx\,?$$
Is kernel density estimation sub-Gaussian?
CC BY-SA 4.0
null
2023-03-14T07:47:02.307
2023-03-16T09:34:32.783
null
null
383159
[ "high-dimensional", "probability-inequalities" ]
609372
1
null
null
1
33
The `subsample` option in `XGBoost` is described [here](https://xgboost.readthedocs.io/en/stable/parameter.html) as follows: > Subsample ratio of the training instances. Setting it to 0.5 means that XGBoost would randomly sample half of the training data prior to growing trees. and this will prevent overfitting. Subsampling will occur once in every boosting iteration. [default=1] Is it possible to access the list of training instances being used in each iteration? If there are N observations in the full training data and we train an XGBoost model with NROUNDS iterations, I am imagining an N times NROUNDS matrix with position (i,j) being 1 if observation i is used in boosting iteration j, and 0 otherwise.
Which training instances are being subsampled in each iteration of XGBoost?
CC BY-SA 4.0
null
2023-03-14T08:25:50.043
2023-03-14T08:25:50.043
null
null
357671
[ "r", "boosting", "gradient", "subsampling" ]
609373
2
null
609166
3
null
Likelihood is a slippery concept. The likelihood function, $L(t | w)$, expresses how probable the data $t_n$ are in relation to the model function $y(x,w)$.

The uncertainty in the empirical measurements enters the modeling through the misfits $\varepsilon_n$, one misfit for each data point. We introduce a misfit probability distribution, usually a Gaussian. The values of the misfits have no specific meaning, but the introduction of the misfit probability distribution has changed our state of knowledge in a rather fundamental way. As data scientists, we prefer models with small misfits and look suspiciously at large outliers. The misfits $\varepsilon_n$ are now subjected to an overarching probability distribution $p(\varepsilon_n)$, which "connects" the previously unrelated individual misfit values by their probabilities. Conceptually, this metamorphosis is a big step.

The likelihood value is the product of the $p(\varepsilon_n)$. The likelihood function is the generalization of the likelihood value: it is the product of the misfit probability densities, but with the dependency on the coefficients $w$ of the model function $y(x,w)$ taken into account.

Technically, the likelihood function is not a normalized probability distribution over $w$. It is a factor in Bayes' theorem and does not necessarily integrate to unity: $$\int L(t | w)\, dw \ne 1.$$ However, it is a normalized probability distribution when integrated over the data values $t$: $$\int L(t | w)\, dt = 1.$$ Which makes it confusing.

---

The above text is excerpted from my new tutorial book "What is your model? A Bayesian tutorial", a short self-study book on Bayesian data analysis. One can read 80% for free on [Amazon under the Look Inside feature](https://rads.stackoverflow.com/amzn/click/com/B0BTNVFR65), thereby avoiding bad buys.
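The two integral statements can be checked numerically. A small sketch (my own illustration, not from the book), using an exponential model with rate parameter $w$ and a single datum $t$:

```python
import numpy as np
from scipy.integrate import quad

# Likelihood of a single datum t under an exponential model with rate w:
# L(t | w) = w * exp(-w * t)
def L(t, w):
    return w * np.exp(-w * t)

t_obs = 2.0
w_fixed = 1.5

# Integrated over the data t (w held fixed), the density is normalized.
over_data, _ = quad(lambda t: L(t, w_fixed), 0, np.inf)

# Integrated over the parameter w (t held fixed), it generally is not:
# here the integral works out to 1 / t^2.
over_param, _ = quad(lambda w: L(t_obs, w), 0, np.inf)

print(over_data)   # ~1.0
print(over_param)  # ~0.25, not 1
```

For this model the mismatch is exact: $\int_0^\infty w e^{-wt}\,dw = 1/t^2$, which equals 1 only for $t = 1$.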
null
CC BY-SA 4.0
null
2023-03-14T08:57:14.233
2023-03-14T08:57:14.233
null
null
382413
null
609375
2
null
599113
5
null
> My intuition is that, given $(x_{i},y_{i})_{i=1}^n$ and $(x_{i},y_{i})_{i=1}^N$ have the same "information" (I know this is a fuzzy term), using $(x_{i},y_{i})_{i=1}^N$ should not better the fit. It should even make generalisation worse since we overfit.

At first I was skeptical, thinking 'how can you overfit if there is no noise?'. But with a misspecified model this is possible. [Here](https://stats.stackexchange.com/a/595460/) is a situation where more data can lead to "overfitting": [](https://i.stack.imgur.com/w5l6Vm.png) It is not overfitting due to fitting of random noise, but overfitting due to the model being misspecified: the bias of the interpolation increases as the fitted curve becomes less smooth (this is a case with polynomials, but a similar effect may occur when fitting with some NN, or the NN might even contain those polynomials as output). This situation might be remedied by considering regularization and cross validation; in that case one might expect that more data should reduce the bias.
null
CC BY-SA 4.0
null
2023-03-14T09:12:11.363
2023-03-14T09:12:11.363
null
null
164061
null
609376
2
null
609364
5
null
I think there is some confusion here due to the word "normalization". In this context, normalization means that the data are transformed to have zero mean and unit standard deviation. The transformed data will also be dimensionless, i.e. lacking physical units. Z-score normalization does not mean that the data become normally distributed. The transformed data are "normal" only in the sense of having zero mean and unit standard deviation.
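A quick numerical illustration of this point (my own sketch, not part of the original answer): standardizing a skewed sample yields zero mean and unit standard deviation, but the skewed shape survives untouched.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=10_000)  # strongly right-skewed data

# Z-score normalization: subtract the mean, divide by the standard deviation.
z = (x - x.mean()) / x.std()

print(round(z.mean(), 6), round(z.std(), 6))  # ~0.0 and 1.0
print(skew(x), skew(z))  # skewness is unchanged: the shape is not normal
```

Skewness is invariant under affine transformations, so the transformed data are exactly as non-normal as before.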
null
CC BY-SA 4.0
null
2023-03-14T09:17:48.420
2023-03-14T09:17:48.420
null
null
211876
null
609377
2
null
599113
6
null
You are right, it is not only about the size of the dataset. As two other answers pointed out, having more data (vs very little) is desirable, as even in a noiseless scenario it may help you to get a more precise estimate. On the other hand, as you argue, it is also about the quality of the data. There's a great lecture by Xiao-Li Meng titled "Statistical paradises and paradoxes in Big Data"; it was recorded and is [available on YouTube](https://www.youtube.com/watch?v=8YLdIDOMEZs). He argues that it is not only about quantity, but also quality. Having a lot of very poor-quality or biased data is not necessarily great. As he shows, how much you gain from quantity rises very slowly. That's one of the reasons why those poor-quality, non-randomly sampled datasets scraped from the internet need to be so big. On the other hand, as he argues, having a large dataset can lead to overly optimistic error estimates, which could give us a false sense of the estimates being precise. So in fact, there may be scenarios where "too much" data is not great. So it is not (only) about quantity, but also whether the data is relevant, not biased, how noisy it is, whether it was randomly sampled, etc., i.e. about the quality of the data. If you have good-quality data, more is better, though at some point you start hitting diminishing returns, where getting more data is simply not worth it.
null
CC BY-SA 4.0
null
2023-03-14T09:41:47.653
2023-03-14T10:03:21.927
2023-03-14T10:03:21.927
35989
35989
null
609378
2
null
602787
0
null
## You can (most likely) not apply standard DAG learning algorithms to time series data A way to think about DAG learning algorithms that are not tailored to time series (such as FGES) is this: All variables are observed at an initial time $t_0$. Then you let the system variables interact until they have reached an equilibrium at time $t_1$, where you measure the variables again. If the starting value of one variable affects the equilibrium value of another one, they are causally connected. The causal relationship between the variables simultaneously encodes the dependency between start and equilibrium, which are the only time steps that exist. This logic breaks down once you have a time series in which causal networks iteratively relate past states to future states, as in Granger causality. So you are correct in assuming that such algorithms most likely won't be applicable to your case. ## Note of caution Causal structure learning algorithms tend to come with heavy assumptions (e.g. additive noise, no unobserved variables, acyclicity, ...) and have trouble returning useful results at the best of times (even FGES just returns a CPDAG that may not be correct even if all assumptions are met). So just beware that causal structure learning from multivariate time series is a tall order!
null
CC BY-SA 4.0
null
2023-03-14T10:12:49.603
2023-03-14T10:12:49.603
null
null
250702
null
609379
1
null
null
5
125
With regard to OLS estimators, why do $\hat\beta_1$ and $\hat\beta_0$ have a variance?
What does it mean for $\hat\beta_1$ and $\hat\beta_0$ to have a variance?
CC BY-SA 4.0
null
2023-03-14T10:21:54.787
2023-03-14T11:38:43.803
2023-03-14T10:54:40.230
247274
383174
[ "regression", "variance", "least-squares", "econometrics", "regression-coefficients" ]
609380
2
null
609379
5
null
The estimates that we obtain from linear regression are functions of random variables, so the estimated parameters are random variables themselves. You can calculate the variance of any random variable. The variance tells us how uncertain the estimates are (in absolutely noiseless data, or with an interpolating model, the variances would be zero).
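As a sketch of this point (my own illustration, not part of the original answer), one can redraw the data many times and compare the empirical variance of the slope estimate with the theoretical OLS covariance $\sigma^2 (X^\top X)^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, R = 50, 2.0, 5000
x = np.linspace(0, 1, n)
X = np.column_stack([np.ones(n), x])  # design matrix with intercept

# Theoretical covariance of the OLS estimates: sigma^2 (X'X)^{-1}
theo = sigma**2 * np.linalg.inv(X.T @ X)

# Repeatedly redraw y and refit; the estimates scatter around the truth.
betas = np.empty((R, 2))
for i in range(R):
    y = 1.0 + 2.0 * x + rng.normal(0, sigma, n)
    betas[i] = np.linalg.lstsq(X, y, rcond=None)[0]

emp = np.cov(betas, rowvar=False)
print(theo[1, 1], emp[1, 1])  # variance of the slope: theory vs simulation
```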
null
CC BY-SA 4.0
null
2023-03-14T10:30:06.453
2023-03-14T10:30:06.453
null
null
35989
null
609382
1
null
null
0
26
In ESL section 4.3.3, the authors give three steps for finding optimal subspaces using LDA: - compute the $K \times p$ matrix of class centroids $M$ and the common covariance matrix $W$ (for within-class covariance); - compute $M^* = M W^{-1/2}$ using the eigen-decomposition of $W$; - compute $B^*$, the covariance matrix of $M^*$ ($B$ for between-class covariance), and its eigen-decomposition $B^* = V^* D_B V^{*T}$. The columns $v_\ell$ of $V^*$, in sequence from first to last, define the coordinates of the optimal subspaces. The authors further state that "Fisher arrived at this decomposition via a different route, without referring to Gaussian distributions at all. He posed the problem: Find the linear combination $Z = a^T X$ such that the between-class variance is maximized relative to the within-class variance." I think the authors are trying to establish a link between Gaussian LDA and Fisher's approach, though I am not able to see how the three steps outlined above relate to Fisher's LDA. Could anyone explain the three steps and their relation to Fisher's LDA?
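For reference, the three steps can be sketched numerically as follows (my own illustration on simulated data, not code from ESL; all variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
K, p, n = 3, 4, 500
A = rng.normal(size=(p, p))            # shared shape -> common covariance A A^T
centers = rng.normal(size=(K, p)) * 3
X = np.vstack([c + rng.normal(size=(n, p)) @ A.T for c in centers])
y = np.repeat(np.arange(K), n)

# Step 1: class centroids M (K x p) and pooled within-class covariance W.
M = np.vstack([X[y == k].mean(axis=0) for k in range(K)])
W = sum(np.cov(X[y == k].T) * (n - 1) for k in range(K)) / (len(X) - K)

# Step 2: whiten the centroids, M* = M W^{-1/2}, via the eigendecomposition of W.
evals, evecs = np.linalg.eigh(W)
W_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
M_star = M @ W_inv_sqrt

# Step 3: eigendecomposition of the between-class covariance of M*;
# the leading eigenvectors span the optimal discriminant subspace.
B_star = np.cov(M_star.T, bias=True)
b_evals, V_star = np.linalg.eigh(B_star)
order = np.argsort(b_evals)[::-1]       # sort eigenvalues descending
V_star, b_evals = V_star[:, order], b_evals[order]

# Discriminant directions back in the original space: a_l = W^{-1/2} v_l.
discriminants = W_inv_sqrt @ V_star
print(b_evals)  # at most K-1 nonzero eigenvalues
```

Since $B^*$ is the covariance of only $K$ whitened centroids, its rank is at most $K-1$, which is why at most $K-1$ discriminant coordinates are informative; maximizing the between-class variance in the whitened space is exactly Fisher's criterion of maximizing between-class relative to within-class variance.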
Reduced Rank Linear Discriminant Analysis vs Fisher Discriminant as mentioned in Element of Statistical learning section 4.3.3
CC BY-SA 4.0
null
2023-03-14T10:32:15.760
2023-03-20T10:29:05.530
2023-03-20T10:29:05.530
383170
383170
[ "machine-learning", "mathematical-statistics", "classification", "discriminant-analysis" ]
609383
1
null
null
4
42
I am fitting the following model (random intercepts and slopes) on my data:

```
lmer(MuscleActivity ~ Period + (1 + Period|ppnr), data = df)
```

My goal is to test whether the muscle activity during stimuli presentation is larger than the muscle activity during random time points. The data frame looks something like this:

|Participant |MuscleActivity |Period |
|-----------|--------------|------|
|1 |0.3 |Stimuli |
|1 |0.1 |Stimuli |
|1 |0.7 |Stimuli |
|1 |0.2 |Random |
|1 |0.3 |Random |
|1 |0.1 |Random |
|... |... |... |

Running `summary(model)` results in no errors except this warning:

```
boundary (singular) fit: see help('isSingular')
```

...which I think is due to some participants only having one observation. Now when I call `confint(model)`, I get a wall of warnings and the following output:

| |2.5% |97.5% |
|----|-----|-----|
|.sig01 |NA |NA |
|.sig02 |NA |NA |
|.sig03 |NA |NA |
|.sigma |NA |NA |
|(Intercept) |-0.000423 |0.00031 |
|PeriodStimuli |-0.00022 |0.000284 |

The warnings contain these messages, repeated many times:

```
Warning: NAs detected in profiling
Warning: Last two rows have identical or NA .zeta values: using minstep
Warning: non-monotonic profile for .sig02
Warning: no non-missing arguments to min; returning Inf
Warning: non-monotonic profile for .sig01
```

Could someone help me understand these warnings?
Warnings for confint() in lme4
CC BY-SA 4.0
null
2023-03-14T10:33:15.160
2023-03-14T12:32:15.343
2023-03-14T11:47:04.770
345611
281675
[ "r", "regression", "mixed-model", "lme4-nlme", "singular-matrix" ]
609384
2
null
215497
0
null
In my case I still want the eigenvalues to be randomly drawn, so to ensure that the condition number of the covariance matrix does not exceed some $\kappa > 1$, one can generate $\sigma_k = 1 + (\kappa - 1) \cdot \mathcal{U}_k$, where $\mathcal{U}_k$ is uniformly sampled in $[0,1]$ for all $k=1,\dots,n$. Following @whuber's (very comprehensive) algorithm, here is a code snippet in Python with all kinds of assertions and data reshaping.

```
import numpy as np

d = 5
sigmas = np.arange(1, 6).reshape(-1, 1)  # [1, 2, 3, 4, 5]
# or: sigmas = (1 + (kappa - 1) * np.random.uniform(size=d)).reshape(-1, 1)

Q, R = np.linalg.qr(np.random.normal(size=d**2).reshape(d, d))
S = Q.T @ (sigmas * Q)

# Check that S is psd
assert np.all(np.linalg.eigvals(S) > 0)
# Check that S is symmetric
assert np.allclose(S.T, S)

S

_, sigmas_, Q_ = np.linalg.svd(S)

# Check SVD retrieves original parameters
assert np.allclose(np.abs(Q), np.abs(Q_[::-1]))  # up to sign
assert np.allclose(sigmas.reshape(-1), sigmas_[::-1])
```
null
CC BY-SA 4.0
null
2023-03-14T10:33:20.397
2023-03-14T10:53:11.633
2023-03-14T10:53:11.633
368322
368322
null
609385
2
null
609333
5
null
I'll assume you want to treat Treatment as the independent variable and Rating as the dependent variable. You can use the Wilcoxon-Mann-Whitney test. Equivalently, you could use Kruskal-Wallis.

In most software applications, you would first need to convert your counts to "long format" data. Simply, your count of 6 for "treatment", "1" would be converted to six observations, each with Treatment="treatment" and Rating="1".

An effect size statistic would report the probability that the value for an observation in one Treatment group would be greater than an observation in the other group. Vargha and Delaney's A reports this. Similar measures that are scaled to a range of 0 to 1 are the Glass rank biserial coefficient and Cliff's delta.

There are alternative tests. Kendall correlation (particularly tau-c), Spearman correlation, and the Cochran-Armitage test are all candidates.

Addition: In response to some of the comments, I added an analysis and results for this data in R using the Kruskal-Wallis test and ordinal regression.

```
Control   = c(rep(1, 0), rep(2, 20), rep(3, 11))
Treatment = c(rep(1, 6), rep(2, 14), rep(3, 12))

Data = data.frame(
  Group  = c(rep("Control", length(Control)), rep("Treatment", length(Treatment))),
  Rating = c(Control, Treatment))

######################

kruskal.test(Rating ~ Group, data=Data)

### Kruskal-Wallis rank sum test
###
### Kruskal-Wallis chi-squared = 0.59551, df = 1, p-value = 0.4403

######################

Data$Rating.f = factor(Data$Rating)

library(ordinal)

model = clm(Rating.f ~ Group, data=Data)

anova(model, type="II")

### Type II Analysis of Deviance Table with Wald chi-square tests
###
###       Df  Chisq Pr(>Chisq)
### Group  1 0.6063     0.4362
```
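As a side note (my own sketch, not part of the original answer): the effect size described above can also be computed directly by rescaling the Mann-Whitney U statistic to $[0, 1]$, reusing the counts from the R code above:

```python
from scipy.stats import mannwhitneyu

# Reconstruct the long-format data from the counts in the question.
control = [2] * 20 + [3] * 11
treatment = [1] * 6 + [2] * 14 + [3] * 12

# Vargha and Delaney's A is U rescaled by the number of pairs: the
# probability that a treatment observation exceeds a control observation,
# counting ties as one half.
u, p = mannwhitneyu(treatment, control, alternative="two-sided")
A = u / (len(treatment) * len(control))
print(round(A, 3), round(p, 3))
```

A value near 0.5 indicates no stochastic dominance of either group, consistent with the non-significant Kruskal-Wallis result above.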
null
CC BY-SA 4.0
null
2023-03-14T10:36:00.160
2023-03-21T17:27:28.797
2023-03-21T17:27:28.797
166526
166526
null
609386
1
null
null
0
7
The data were initially sent in a numerical format, with each SNP coded as 0, 1, or 2. Using the provided reference allele, each SNP has now been translated into its corresponding variants. Which imputation technique should I employ, and which software would be suitable for imputing the missing genotypes and SNPs, given that some of the values for markers and individuals are missing (up to 50%)? Or is imputation at such a high missing-value percentage not possible? Would it affect the outcome? The genotyping platform's metadata file indicates that the missing values are null-allele homozygotes (absence of a fragment with an SNP in the genomic representation).
I want to impute missing values of SNP data from DArTseq. Which software and dosage model should I use?
CC BY-SA 4.0
null
2023-03-14T10:39:55.857
2023-03-14T10:39:55.857
null
null
383178
[ "data-imputation", "software", "dose-response" ]
609387
2
null
609379
5
null
The data you are feeding into your OLS model are random draws from some underlying population. You could either be drawing from the joint distribution of the predictors and the outcome, or from the distribution of the outcome conditional on the predictors (so you consider the predictors fixed - this is the assumption in OLS). In either case, your regression coefficient estimates will be random variables themselves, because they are functions of the random variable $y$. The statistical properties of your parameter estimates, like the mean and the variance, will depend on the properties of the outcomes, which is why the variance of parameter estimates depends on both the model (via the hat matrix) and the variance of the outcome.
null
CC BY-SA 4.0
null
2023-03-14T10:47:22.387
2023-03-14T10:47:22.387
null
null
1352
null
609388
2
null
37943
0
null
Given heteroscedasticity and outliers: - If you seek hypothesis testing of model coefficients, use robust standard errors generated by the {sandwich} package, as in `lmtest::coeftest(lm(), vcov. = vcovHC(lm(), type = "HC1"))`. - If you seek the regression line representing the population mean, there is no need to worry too much, as the point estimates are unbiased. Otherwise, (1) use `lm(weights = ...)` for weighted least squares, where the weights are `1/resid()^2` using the (estimated) residual variance from a different model that estimates the residual variance, like `lm(resid()^2 ~ ...)` without weights but with `resid()` from yet another `lm()` model; or (2) use `nlme::gls()` for generalized least squares, where the heteroscedasticity functional form must be specified correctly in `weights = varFunc()`, `correlation = corStruct`. The point estimates from `lm(weights = ...)` and `gls()` should be very similar to OLS via `lm()`, but the residual variance and standard errors are more efficient IF the weights are specified correctly. - If you seek confidence intervals of population means in prediction, use `lm()` and prediction functions that accept `vcov = ...`, like `predictions()` in {marginaleffects}, with the robust standard errors that `vcovHC()` generates. Or use weighted least squares `lm(weights = ...)` and regular `predict(se.fit = TRUE, interval = "confidence")`. Or use generalized least squares `gls()` with prediction functions like `marginaleffects::predictions()`. - If you seek prediction intervals of individual values, use weighted least squares `lm(weights = ...)` and weighted `predict(se.fit = TRUE, interval = "prediction", weights = ~x^2)`. See the `predict.lm` help. Or hack `gls()` as in https://fw8051statistics4ecologists.netlify.app/gls.html#Sockeye
null
CC BY-SA 4.0
null
2023-03-14T10:47:46.343
2023-03-14T10:47:46.343
null
null
284766
null
609390
1
null
null
0
13
I am trying to estimate an econometric model with non-synchronous data, meaning that I have monthly asset returns and daily factor returns. The econometric model is described in the images below. I need help with estimating the lagged weights using maximum likelihood. Is there anyone who can help me with that?

[1]: https://i.stack.imgur.com/swuWN.png
[2]: https://i.stack.imgur.com/GQ4Co.png
[3]: https://i.stack.imgur.com/WhK0m.png
Econometric model (maximum likelihood) with non-synchronous data
CC BY-SA 4.0
null
2023-03-14T11:12:47.367
2023-03-14T11:12:47.367
null
null
383181
[ "econometrics" ]
609391
1
null
null
0
27
I am planning a study where I predict variables with machine learning methods, integrating data from multiple surveys. The questionnaires assessed in each survey differ slightly, so I have a lot of missing values (e.g. Questionnaire 1 was assessed in surveys 1, 2, and 3, but not in survey 4). My question is: does the Missing at Random assumption hold if the different samples differ systematically in, e.g., age? As far as I understand, loosely speaking, if age can predict the absence of values but is not related to the specific values themselves (e.g. the expression of a specific item), then the Missing at Random assumption holds and imputation is still valid. But in my case, there are probably low correlations between some questionnaire facets and age. Is this a problem if I plan to impute the missing values using mean imputation or more sophisticated methods? Greetings
Missing at Random / Missing not at Random
CC BY-SA 4.0
null
2023-03-14T11:14:09.977
2023-03-14T11:14:09.977
null
null
348853
[ "machine-learning", "missing-data", "data-imputation" ]
609392
2
null
189110
3
null
$\lambda =\omega^2N$; see Cohen, Jacob (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.), page 549, formula 12.7.1. Hence the effect size you mention is Cohen's omega ($\omega$, sometimes written "w"): $\omega=\sqrt{\frac{\chi^2}{N}}$. The $p_{0i}$ and $p_{1i}$ in the formula you give in your question are proportions, not counts. $\omega$ generally does not have an upper bound of 1 (except when there are just two cells with expected proportions of 0.5), but that is not really a problem if you use it for sample size calculations. Similarly, you don't need tables to define an effect size of minimal interest. In fact, in his book Cohen tends to advise against using them; see his introductory remarks in section 7.2.3, p. 224:

> [...] The best guide here, as always, is the development of some sense of magnitude ad hoc, for a particular problem or a particular field. Since it is a function of proportions, the investigator should generally be able to express the size of the effect he wishes to be able to detect by writing a set of alternate-hypothetical proportions [...] and, with the null-hypothetical proportions, compute w. [...]

In other words, you have to define what a "minimally interesting table" would look like in your scenario. For example, you can ask yourself how many cells should deviate from their expected value, and by how much in terms of proportions, for you to consider such a table an interesting departure from the null hypothesis. Once you have defined this hypothetical, minimally interesting table, you calculate its effect size $\omega$. From that, you can calculate the sample size required to detect this "minimally interesting effect size".

---

As for the other effect sizes you mention, Cramér's V and Phi are meant as measures of association; in other words, they are meant for contingency tables, not one-way tables. 
There's a variant of Cramér's V for one-way tables, but [it does not really bring a lot more information than Cohen's $\omega$](https://stats.stackexchange.com/questions/607040/what-is-the-advantage-if-any-of-using-cohens-omega-instead-of-cramers-v-f), and is not meant for power calculations. I'm not familiar with the Johnston-Berry-Mielke $E$, but from the article you mention its purpose seems to be an easier interpretation than $\omega$, not power calculations.
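The power calculation based on $\lambda = \omega^2 N$ can be made concrete with the noncentral chi-square distribution. A sketch of mine, with a hypothetical "minimally interesting" one-way table of proportions:

```python
import numpy as np
from scipy.stats import chi2, ncx2

# Null and minimally interesting alternative proportions (hypothetical example).
p0 = np.array([0.25, 0.25, 0.25, 0.25])
p1 = np.array([0.40, 0.20, 0.20, 0.20])

# Cohen's omega for a one-way table.
omega = np.sqrt(np.sum((p1 - p0) ** 2 / p0))

def power(N, alpha=0.05):
    """Power of the chi-square goodness-of-fit test at sample size N."""
    df = len(p0) - 1
    crit = chi2.ppf(1 - alpha, df)       # critical value under the null
    lam = omega ** 2 * N                 # noncentrality, Cohen's formula 12.7.1
    return 1 - ncx2.cdf(crit, df, lam)   # tail of the noncentral chi-square

print(round(omega, 4), round(power(100), 3), round(power(300), 3))
```

To find the required $N$ for a target power (say 0.8), one can simply increase $N$ until `power(N)` crosses the target.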
null
CC BY-SA 4.0
null
2023-03-14T11:14:58.913
2023-03-14T13:25:51.980
2023-03-14T13:25:51.980
164936
164936
null
609393
2
null
609166
4
null
As stated by many others: the likelihood function $\mathcal L_y$ is the probability density function $f_\theta$ of the observed data $y$, but viewed as a function of the (unknown) parameter $\theta$, i.e., $\mathcal L_y(\theta) = f_\theta(y)$. To provide some intuition, let's consider discrete data1 and continuous data separately: - For discrete data we have $\mathcal L_y(\theta) = \mathbb P_\theta(Y = y)$, i.e., the likelihood function is the probability of the observed data viewed as a function of the parameter $\theta$. - For continuous data let's assume that in practice we can only measure, and thus observe, data with limited accuracy. Then, observing $Y = y$ (say, for the sake of simplicity, for real-valued $Y$) can be understood as indicating that $Y$ took a value in a small interval $[y - \delta, y + \delta]$. For the probability of the observed datum $y$ we then have $$ \mathbb P_\theta(Y \in [y - \delta, y + \delta]) = \mathbb P_\theta(y - \delta \leq Y \leq y + \delta) = \int_{y - \delta}^{y + \delta} f_\theta(u) \,\mathrm d u. $$ Now, the approximation $$\int_{y - \delta}^{y + \delta} f_\theta(u) \,\mathrm d u \approx f_\theta(y) \cdot \left[\left(y + \delta\right) -\left(y - \delta\right)\right] = 2\delta \cdot f_\theta(y) $$ suggests that the probability of the observed datum $y$ is approximately proportional to $f_\theta(y)$. In this sense $\mathcal L_y(\theta) = f_\theta(y) \overset{\text{approx.}}\propto \mathbb P_\theta(Y \in [y - \delta, y + \delta])$ still indicates how "likely" a parameter value $\theta$ is for the observed datum $y$. --- 1 here the probability mass function referred to in some of the other answers can be seen as probability density function w.r.t. the counting measure. --- Reference Held, L., & Sabanés Bové, D. (2020). Likelihood and Bayesian inference: With applications in biology and medicine (Second edition). Springer.
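The approximation in the second bullet is easy to verify numerically. A small sketch of mine, using a normal model:

```python
from scipy.stats import norm

y, delta = 1.3, 1e-4
mu, sigma = 0.0, 1.0  # a candidate parameter value theta = (mu, sigma)

# Exact probability of observing Y in [y - delta, y + delta] ...
exact = norm.cdf(y + delta, mu, sigma) - norm.cdf(y - delta, mu, sigma)

# ... versus the approximation 2 * delta * f_theta(y).
approx = 2 * delta * norm.pdf(y, mu, sigma)

print(exact, approx)  # agree to many digits for small delta
```

Since $2\delta$ does not depend on $\theta$, the approximation confirms that, as a function of $\theta$, the probability of the observed datum is proportional to $f_\theta(y)$.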
null
CC BY-SA 4.0
null
2023-03-14T11:17:38.380
2023-03-30T01:15:46.643
2023-03-30T01:15:46.643
136579
136579
null
609394
1
609435
null
0
42
I have some highly skewed real-world data from which I want to create a probability density function, which I can then use to generate random samples that mimic the original data. The data is a long list of lengths, and it approximates this function in R:

```
library(rBeta2009)
x <- rbeta(10000, 1, 3)
plot(density(x))
```

[](https://i.stack.imgur.com/8vkJu.png)

How can I obtain parameters from the data which can then be used to create my simulated PDF, from which I can sample simulated lengths?
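One common approach (a sketch of mine, not necessarily the only option) is to fit the two beta shape parameters by maximum likelihood with scipy, assuming the lengths have first been rescaled into $(0, 1)$, e.g. by dividing by an upper bound. Synthetic data stands in for the real lengths here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.beta(1, 3, size=10_000)  # stand-in for the real lengths, rescaled to (0, 1)

# Fit the two shape parameters by maximum likelihood, holding loc/scale fixed.
a, b, loc, scale = stats.beta.fit(data, floc=0, fscale=1)

# Use the fitted parameters to draw new simulated lengths.
simulated = stats.beta.rvs(a, b, size=10_000, random_state=1)
print(round(a, 2), round(b, 2))  # should be near the true (1, 3)
```

Simulated draws would then be rescaled back to the original units with the same bound used for the forward transformation.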
create and sample from a PDF using real data
CC BY-SA 4.0
null
2023-03-14T11:21:43.890
2023-03-14T15:33:09.693
null
null
166426
[ "r", "density-function" ]
609396
2
null
609379
4
null
The other answers are correct, but I think it might be helpful to simulate what is happening.

```
library(ggplot2)
set.seed(2023)

# Define sample size
N <- 100

# Define number of times to repeat the simulation
R <- 1000

# Fix values of x
x <- seq(0, 1, 1/(N - 1))

# Define conditional expected values of y as E[y|x] = 1 + 2x
Ey <- 1 + 2*x

B_01 <- B_02 <- B_03 <- B_11 <- B_12 <- B_13 <- rep(NA, R)

for (i in 1:R){
  
  # Simulate iid Gaussian error terms
  e1 <- rnorm(N, 0, 1)
  e2 <- rnorm(N, 0, 2)
  e3 <- rnorm(N, 0, 3)
  
  # Define observed values of y as the sum of the expected value and the error
  y1 <- Ey + e1
  y2 <- Ey + e2
  y3 <- Ey + e3
  
  # Fit regressions and extract the estimated regression coefficients
  L1 <- lm(y1 ~ x)
  L2 <- lm(y2 ~ x)
  L3 <- lm(y3 ~ x)
  #
  B_01[i] <- summary(L1)$coefficients[1, 1]
  B_02[i] <- summary(L2)$coefficients[1, 1]
  B_03[i] <- summary(L3)$coefficients[1, 1]
  #
  B_11[i] <- summary(L1)$coefficients[2, 1]
  B_12[i] <- summary(L2)$coefficients[2, 1]
  B_13[i] <- summary(L3)$coefficients[2, 1]
}

# Make a data frame of the coefficients
d_01 <- data.frame(Estimate = B_01, Coefficient = "Intercept", Variance = "1")
d_02 <- data.frame(Estimate = B_02, Coefficient = "Intercept", Variance = "2")
d_03 <- data.frame(Estimate = B_03, Coefficient = "Intercept", Variance = "3")
d_11 <- data.frame(Estimate = B_11, Coefficient = "Slope", Variance = "1")
d_12 <- data.frame(Estimate = B_12, Coefficient = "Slope", Variance = "2")
d_13 <- data.frame(Estimate = B_13, Coefficient = "Slope", Variance = "3")
d <- rbind(d_01, d_02, d_03, d_11, d_12, d_13)

# Plot
ggplot(d, aes(x = Estimate, fill = Variance)) +
  geom_density(alpha = 0.25) +
  facet_grid(~Coefficient) +
  theme(legend.position = "bottom")
```

[](https://i.stack.imgur.com/wOneN.png)

As the variance of the error term gets larger, the estimated slope $\hat\beta_1$ and estimated intercept $\hat\beta_0$ bounce around more. 
In terms of the math, below is a common way to write the OLS estimates, which may show why the coefficient estimates vary. $$ \hat\beta_1 = \dfrac{ \overset{N}{\underset{i = 1}{\sum}}\left[ (x_i - \bar x)(y_i - \bar y) \right] }{ \overset{N}{\underset{i = 1}{\sum}}\left[ (x_i - \bar x)^2 \right] }\\ \hat\beta_0 = \bar y - \hat\beta_1\bar x $$ Since $y$ is a random variable (due to the randomness of the error term), each of these estimates is random and depends on the exact observed values of $y$. As these observed values of $y$ change, so do the estimated coefficients. If those observed values of $y$ change a lot, such as with a large error variance, then the estimated coefficients change a lot, which is exactly the behavior in the simulation above. (It could also be argued that $x$ has randomness, such as in an observational study. However, this is not necessary for the discussion of why the coefficient estimates have variances, and it adds confusion without any clear benefit, so I am setting aside $x$ randomness.)
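The closed-form estimates can be checked against a library fit in a few lines (a sketch of mine, in Python rather than the answer's R):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 100)
y = 1.0 + 2.0 * x + rng.normal(0, 1, 100)

# Closed-form OLS estimates from the formulas above.
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

# Same thing via least squares.
ref = np.polyfit(x, y, deg=1)  # returns [slope, intercept]
print(b1, b0)
```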
null
CC BY-SA 4.0
null
2023-03-14T11:29:12.140
2023-03-14T11:38:43.803
2023-03-14T11:38:43.803
247274
247274
null
609397
1
null
null
0
6
Dear Cross Validators,

I have a data set of the following structure (don't mind the numbers):

|subject_ID |Position_of_the_measure |Environment_condition |Variable_of_interest |
|----------|-----------------------|---------------------|--------------------|
|1 |A |10.3 |40 |
|1 |B |10.3 |36 |
|2 |A |15.2 |62 |
|2 |B |15.2 |48 |
|3 |A |21 |27 |
|3 |B |21 |29 |

I want to investigate the effects of Position_of_the_measure and Environment_condition on my Variable_of_interest. Obviously the values are non-independent (as they are paired by ID), so I figured linear mixed models would be the way to go, with ID as a random effect, but I don't have multiple measures per ID per Position (this should have been thought of in the sampling design, but here we are). I wonder if you have an idea about how to deal with this (working with A-B differences might work, but I'd prefer not to). I know that I'm pretty much asking for what could be considered statistical nonsense, but I wanted to know if I had missed something obvious. Thank you for your time.

Here is an example of plotted data which might help you understand:

[](https://i.stack.imgur.com/yRT8I.png)
Non repeated paired observations along a continuous gradient - what to do?
CC BY-SA 4.0
null
2023-03-14T11:30:40.807
2023-03-14T11:30:40.807
null
null
283361
[ "repeated-measures", "continuous-data", "paired-data" ]
609398
2
null
609383
1
null
Your problem is already here, which is a [well-documented problem](https://stats.stackexchange.com/search?q=singular%20lme4) in mixed modeling with `lme4`: ``` boundary (singular) fit: see help('isSingular') ``` When you have a singular model fit, this is a massive problem and renders pretty much all of your regression output useless. Since you also have extremely close-to-zero confidence intervals, you probably have a random variance structure that doesn't match your data. I think this makes sense if you have some participants with one point of estimation, as they wouldn't anyways have a slope that can be estimated, but regardless you probably have other issues going on that you should check with exploratory data analysis (checking grouped scatter plots, making sure you have enough clusters / observations per cluster, etc.). You should in any case consider trying to fit a less complex model and see what the output looks like. I provide some citations below regarding model complexity issues and how cluster effects can influence this. #### Citations - Austin, P. C., & Leckie, G. (2018). The effect of number of clusters and cluster size on statistical power and Type I error rates when testing random effects variance components in multilevel linear and logistic regression models. Journal of Statistical Computation and Simulation, 88(16), 3151–3163. https://doi.org/10.1080/00949655.2018.1504945 - Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1). https://doi.org/10.18637/jss.v067.i01 - Matuschek, H., Kliegl, R., Vasishth, S., Baayen, H., & Bates, D. (2017). Balancing Type I error and power in linear mixed models. Journal of Memory and Language, 94, 305–315. https://doi.org/10.1016/j.jml.2017.01.001 - Meteyard, L., & Davies, R. A. I. (2020). Best practice guidance for linear mixed-effects models in psychological science. Journal of Memory and Language, 112, 104092. 
https://doi.org/10.1016/j.jml.2020.104092
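To illustrate the kind of simplification suggested above, here is a minimal `lme4` sketch; the names `y`, `x`, `participant`, and `dat` are placeholders, not from your data:

```r
library(lme4)

# Full model: random intercepts and random slopes for x by participant
m_full <- lmer(y ~ x + (1 + x | participant), data = dat)

# Reduced model: random intercepts only, often a sensible fallback
# when the slope variance is estimated at (or very near) zero
m_reduced <- lmer(y ~ x + (1 | participant), data = dat)

# Compare the two fits; anova() refits with ML for the likelihood-ratio test
anova(m_reduced, m_full)
```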
null
CC BY-SA 4.0
null
2023-03-14T11:57:26.330
2023-03-14T12:32:15.343
2023-03-14T12:32:15.343
345611
345611
null
609399
1
609693
null
1
30
I am analysing the time to the end of rehabilitation; however, not all patients receive rehabilitation. Therefore, when making Kaplan-Meier curves for the whole study population (including non-receivers), these curves should start below a survival of 1. For example, if only 70% of patients received rehabilitation, the y-axis start value should be 0.7. How should I code my time and status variables to make it work? Would it also be possible to make it so that all survival tables take into account the non-receivers? The following graph isn't ideal, as it shows the event happened for 30% of patients on day 0. This isn't true, as they received no rehabilitation. Generated data of 10 patients and KM curves ``` library(survival) library(survminer) library(tidyverse) df = tibble( time = c(0, 0, 0, 40, 50, 60, 70, 100, 100, 100), status = c(1, 1, 1, 1, 1, 0, 0, 0, 0, 0)) fit <- survfit(Surv(time, status) ~ 1, data = df) ggsurvplot(fit, data = df) ``` [](https://i.stack.imgur.com/oexDU.png)
How to code survival data so that starting survival is below 1
CC BY-SA 4.0
null
2023-03-14T12:13:56.010
2023-03-16T14:33:23.797
null
null
253446
[ "r", "survival", "kaplan-meier", "ggplot2" ]
609400
2
null
608743
3
null
One solution you can consider is the `partR2` package in R (Stoffel et al., 2021). This allows you to derive a partial effect size for each fixed-effects predictor. It technically comprises two coefficients. The first is $part R^2$, defined below as: $$ R^2_{x^*} = \frac{Y_X-Y_\tilde{X}}{Y_X+Y_{RE}+Y_R} = \frac{Y_X-Y_\tilde{X}}{Y_{Total}} $$ where $Y_X$ is the variance explained by the fixed effects in the full model and $Y_\tilde{X}$ is the variance explained by the fixed effects in a reduced model. The denominator $Y_X + Y_{RE} + Y_R$ is essentially the total variance in the model (fixed effects + random effects + residuals). This effectively allows one to tease apart, to a degree, how much variance changes when the model is reduced to remove the effect of other predictors. The second coefficient is $inclusive$ $R^2$, which is defined below: $$ IR^2_{x^*} = SC^2 * R^2_{x^*} $$ where $SC^2$ is the squared correlation between the predictor of interest and the model's linear predictor. This coefficient quantifies the total proportion of variance explained in the model, both uniquely and jointly with other predictors. #### Citation Stoffel, M. A., Nakagawa, S., & Schielzeth, H. (2021). partR2: Partitioning R2 in generalized linear mixed models. PeerJ, 9, e11414. [https://doi.org/10.7717/peerj.11414](https://doi.org/10.7717/peerj.11414)
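A minimal usage sketch (the model, variable, and data names below are hypothetical):

```r
library(lme4)
library(partR2)

# Hypothetical mixed model: response y, fixed predictors x1 and x2,
# random intercept for group
mod <- lmer(y ~ x1 + x2 + (1 | group), data = dat)

# Part R2 for each fixed-effect predictor, with bootstrap confidence intervals
res <- partR2(mod, partvars = c("x1", "x2"), R2_type = "marginal", nboot = 100)
summary(res)
```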
null
CC BY-SA 4.0
null
2023-03-14T12:31:02.573
2023-03-14T12:31:02.573
null
null
345611
null
609401
1
null
null
0
13
I have data frames of 25 different individuals. Each data frame has date-time as the index, channels as columns, and a final column for the label (meaning that I have one label per row; it is an integer with 4 classes). It contains physiological and cerebral data. Each data frame has a different size. I initially wanted to train EEGNet ([https://arxiv.org/pdf/1611.08024.pdf](https://arxiv.org/pdf/1611.08024.pdf) and [https://github.com/vlawhern/arl-eegmodels](https://github.com/vlawhern/arl-eegmodels)) but the problem is that, in this example, the authors have used one label per individual, instead of one label per row like in my case. Can I have your thoughts on how I should modify the architecture and the data format to be able to use the same/similar model, or whether I cannot use it at all? Other than that, I would be very interested to read your suggestions on a deep learning model that will let me use data from multiple agents (25 in my case) with a label per row. Thank you!
EEGNet training with label per row
CC BY-SA 4.0
null
2023-03-14T12:33:34.057
2023-03-14T12:33:34.057
null
null
383186
[ "time-series", "neural-networks", "classification", "predictive-models" ]
609403
2
null
608798
1
null
## Conditioning on a collider would indeed bias the causal estimate I am assuming the "Diseased" variable is $D$ in your causal graph, and you are trying to estimate the causal influence $A \to Y$. In the graph, $D$ is a collider, so conditioning on it would introduce a spurious correlation between $A$ and $Y$. As you are saying, you should not condition on $D$ (you should, however, condition on the confounder $X$). This is purely a result of the do-calculus. Reasoning about the time-precedence can be helpful for coming up with a suitable graph in the first place, but it is not needed for choosing an adjustment set once the graph is set up (although it may still aid intuition).
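This can also be checked mechanically, e.g. with the `dagitty` R package. The edge set below is an assumption matching the structure described here (X confounds A and Y; D is a common effect of A and Y), not necessarily your exact graph:

```r
library(dagitty)

# Assumed DAG: X confounds A and Y, D is a collider on A and Y
g <- dagitty("dag {
  X -> A
  X -> Y
  A -> Y
  A -> D
  Y -> D
}")

# Returns { X }: adjust for the confounder, not for the collider D
adjustmentSets(g, exposure = "A", outcome = "Y")
```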
null
CC BY-SA 4.0
null
2023-03-14T12:37:38.547
2023-03-14T12:37:38.547
null
null
250702
null
609404
1
null
null
0
12
When using the leave-one-out cross-validation (LOOCV) as a metric, is the unpooled Bayesian model bound to outperform the partially pooled Bayesian model?
LOOCV comparison: partially pooled vs unpooled model
CC BY-SA 4.0
null
2023-03-14T12:41:10.113
2023-03-14T12:41:10.113
null
null
317043
[ "cross-validation", "hierarchical-bayesian" ]
609405
2
null
609270
0
null
Since you haven't gotten any answer yet, let me give you a short one. First, there is no single solution. There are many different possible approaches. Choosing between them would boil down to details such as what exactly your research question is and what your data are. Second, you say > My conclusion to be supported is that during winter (trees dormancy, the purple line is $21~\rm Dec. $ of each year), more $\rm Al$ is released in solution. [...] That's not a great start. You should start with forming a hypothesis and then validating it with data. If your aim is only to find something that would confirm the conclusion you already formed, it's not a good reason to use statistics, as it will be very susceptible to cherry-picking ("this didn't confirm my conclusion, let's try something else"). But let's say that your hypothesis is "there is a difference in $\rm Al$ levels between winter and other seasons". In such a case, to directly verify the hypothesis, you can simply divide the data into two groups, "winter" and "not winter", and compare the levels using some statistical test (e.g. a $t$-test). On the other hand, if the question is broader ("is there seasonality..."), then using time-series analysis makes sense. It is also what you would pick if there is dependence over time in the levels of $\rm Al$, since ignoring it would be incorrect. Finally, if your aim is not forecasting, but you want to test a hypothesis, you probably should not use things like `auto.arima`, but rather come up with a simple, easily interpretable model that makes sense for your data, and go with it. But I'll leave you here, as this is not my main area of expertise.
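For completeness, the direct two-group comparison mentioned above could be sketched in R as follows; the column names and the crude month-based season indicator are assumptions (your "winter" definition based on 21 Dec. would differ slightly):

```r
# Hypothetical columns: 'al' = measured Al level, 'date' = sampling date
dat$season <- ifelse(format(dat$date, "%m") %in% c("12", "01", "02"),
                     "winter", "other")

# Direct two-group comparison of Al levels
t.test(al ~ season, data = dat)

# Rank-based alternative if normality is doubtful
wilcox.test(al ~ season, data = dat)
```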
null
CC BY-SA 4.0
null
2023-03-14T12:44:23.670
2023-03-14T12:44:23.670
null
null
35989
null
609407
1
609417
null
0
39
I have this data: ``` x = np.array([[ 6.2120e-01],[ 8.4310e-01],[ 1.2467e+00],[ 1.0147e+00],[ 3.0860e-01], [ 5.2160e-01],[ 6.9720e-01],[ 4.2600e-01],[ 6.5200e-01],[ 1.3448e+00], [-3.5860e-01],[ 1.0618e+00],[ 1.0513e+00],[ 7.5890e-01],[ 2.8270e-01], [ 5.2180e-01],[ 3.9240e-01],[ 1.6279e+00],[ 1.8290e-01],[-4.1820e-01], [ 4.7030e-01],[-4.7610e-01],[-9.7580e-01],[-8.6500e-01],[-5.4680e-01], [ 5.8500e-02],[-9.4460e-01],[-3.5250e-01],[ 7.0000e-04],[-1.1193e+00], [ 9.1700e-02],[ 1.0910e-01]]) y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) ``` There is a high correlation of -0.68 between the feature and the target. However, when I train an LGBM on this data, it fails to overfit and always returns the prior probability of each class: ``` lgbm = LGBMClassifier() lgbm.fit(x,y) lgbm.predict_proba(x) array([[0.6875, 0.3125], [0.6875, 0.3125], [0.6875, 0.3125], [0.6875, 0.3125], [0.6875, 0.3125], [0.6875, 0.3125], [0.6875, 0.3125],... ``` Changing hyper-parameters like `learning_rate`, `n_estimators`, etc. doesn't help. What is the reason behind this behaviour?
LGBM fails to overfit
CC BY-SA 4.0
null
2023-03-14T12:47:11.853
2023-03-15T10:37:40.563
null
null
146327
[ "machine-learning", "overfitting", "lightgbm" ]
609408
2
null
608955
1
null
The difference is in how the standard error is computed. This can be done based on a non-pooled variance estimate (first formula below), or based on a pooled estimate of the variance under the assumption of equal proportions, which makes sense if the null hypothesis is true and is what you use when you compute the p-value (second formula). $$z = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{ \left(\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2} \right)}} = \frac{\frac{11}{16} - \frac{8}{21}}{\sqrt{ \left(\frac{(11/16)\cdot(5/16)}{16} + \frac{(8/21)\cdot(13/21)}{21} \right)}} = 1.952191$$ $$z = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}(1-\hat{p}) \left(\frac{1}{n_1} + \frac{1}{n_2} \right)}} = \frac{\frac{11}{16} - \frac{8}{21}}{\sqrt{\frac{19 \cdot 18}{37^2} \left(\frac{1}{16} + \frac{1}{21} \right)}} = 1.848227$$ These relate to p-values of 0.05092 and 0.06457, respectively ``` > z = (11/16 - 8/21)/sqrt((11/16)*(5/16) * 1/16+ (8/21)*(13/21) * 1/21) > 2*(1-pnorm(z)) [1] 0.05091551 > > z = (11/16 - 8/21)/sqrt((19/37)*(18/37) * (1/16+1/21)) > 2*(1-pnorm(z)) [1] 0.06456946 ``` --- Also note the references in the `prop.test` manual Newcombe R.G. (1998). Two-Sided Confidence Intervals for the Single Proportion: Comparison of Seven Methods. Statistics in Medicine, 17, 857–872. doi: [10.1002/(SICI)1097-0258(19980430)17:8<857::AID-SIM777>3.0.CO;2-E](https://doi.org/10.1002/(SICI)1097-0258(19980430)17:8%3C857::AID-SIM777%3E3.0.CO;2-E). Newcombe R.G. (1998). Interval Estimation for the Difference Between Independent Proportions: Comparison of Eleven Methods. Statistics in Medicine, 17, 873–890. doi: [10.1002/(SICI)1097-0258(19980430)17:8<873::AID-SIM779>3.0.CO;2-I](https://doi.org/10.1002/(SICI)1097-0258(19980430)17:8%3C873::AID-SIM779%3E3.0.CO;2-I). There are a lot of different approaches possible that may give a different result.
null
CC BY-SA 4.0
null
2023-03-14T12:47:15.230
2023-03-14T13:11:58.960
2023-03-14T13:11:58.960
164061
164061
null
609411
1
null
null
0
19
I have multiple country-specific independent variables over time (annually). That is, each independent variable is measured once per year. I am not sure what type of regression model to apply, perhaps some time-series model (I am not interested in forecasting, only in the effect of each independent variable on the dependent one), but which? Data is for one entity only and at multiple points in time. ``` Y , X1 , X2... ``` yr1 yr2 yr3 ... Thank you!
Regression for multiple points in time, ONE entity
CC BY-SA 4.0
null
2023-03-14T13:03:56.227
2023-03-15T14:44:25.307
2023-03-15T14:44:25.307
383188
383188
[ "regression", "effects" ]
609413
1
null
null
0
21
I would like to use a Gaussian Copula to simulate data with a given covariance matrix and given marginal distributions. I understand that the input to the copula cannot be the covariance matrix $\Sigma$, but the correlation $\mathcal{R}$, given that the variance will be dictated by the univariate marginals. However, (how) can I scale my data back so that it follows the target covariance $\Sigma$? To fulfill this, I feel I need to answer two questions: - What is the covariance of a Gaussian copula $Y$ with input $\mathcal{R}$? My intuition was that $cov(Y) = diag(sd(Y))\,\mathcal{R}\, diag(sd(Y))$ where $diag(sd(Y))$ is a diagonal matrix containing the standard deviation of the marginals of Y. But from the simulation below, this does not seem to be the case (only true for diagonals)? - How do I scale/transform my data so that it has covariance $\Sigma$? I thought I would need to do: $Y diag(diag(\Sigma)/sd(Y))$ but it doesn't seem to be the case either (off-diagonal difference of 0.02 even with $N=50'000'000$)? Is my intuition wrong, or correct but the simulation is wrong? ``` library(copula) ## Sim SIG <- matrix(c(0.5, 0.8, 0.7, 0.8, 2, 0.2, 0.7, 0.2, 8), 3, 3) R_SIG <- cov2cor(SIG) myCop <- normalCopula(param=P2p(R_SIG), dim = ncol(R_SIG), dispstr = "un") myMvd <- mvdc(copula=myCop, margins=rep("gamma",3), marginsIdentical=FALSE, paramMargins=list(list(shape=2, scale=1), list(shape=1, scale=1), list(shape=2, scale=3))) Y <- rMvdc(50000000, myMvd) vars_marginal_empi <- apply(Y, 2, var) ## 1: covariance matrix? cov_guess <- diag(sqrt(vars_marginal_empi)) %*% R_SIG %*% diag(sqrt(vars_marginal_empi)) cov_guess-cov(Y) #> [,1] [,2] [,3] #> [1,] 0.00000000 0.03973687 1.392110e-01 #> [2,] 0.03973687 0.00000000 2.878686e-02 #> [3,] 0.13921099 0.02878686 -3.552714e-15 ## 2: rescale data?
Y_rescale <- Y%*% diag(sqrt(diag(SIG))/sqrt(vars_marginal_empi)) cov(Y_rescale)-SIG #> [,1] [,2] [,3] #> [1,] 1.110223e-16 -0.02809694 -4.641533e-02 #> [2,] -2.809694e-02 0.00000000 -2.714610e-02 #> [3,] -4.641533e-02 -0.02714610 3.552714e-15 ``` Created on 2023-03-14 with [reprex v2.0.2](https://reprex.tidyverse.org)
Gaussian copula: how to scale data back to get target covariance matrix (not correlation)
CC BY-SA 4.0
null
2023-03-14T13:21:34.823
2023-03-20T04:17:42.100
2023-03-20T04:17:42.100
11887
35398
[ "r", "simulation", "covariance-matrix", "copula" ]
609414
1
null
null
2
45
I wanted to cluster [https://www.kaggle.com/datasets/iabhishekofficial/mobile-price-classification?resource=download](https://www.kaggle.com/datasets/iabhishekofficial/mobile-price-classification?resource=download) data. I used 2 different R implementations of Hopkins statistic: hopkins::hopkins() ([https://github.com/kwstat/hopkins/blob/main/R/hopkins.R](https://github.com/kwstat/hopkins/blob/main/R/hopkins.R)) and factoextra::get_clust_tendency() ([https://github.com/kassambara/factoextra/blob/master/R/get_clust_tendency.R](https://github.com/kassambara/factoextra/blob/master/R/get_clust_tendency.R)) Both functions returned vastly different results (~.98 vs ~ .55). It seems that the difference is due to hopkins() exponentiating the final sums of distances as such: ``` return( sum(dux^d) / sum( dux^d + dwx^d ) ) ``` while get_clust_tendency() does not perform this operation: ``` list(hopkins_stat = sum(minp)/(sum(minp) + sum(minq)), plot = plot) ``` I have seen both versions of the definition e.g. 1) [https://www.datanovia.com/en/lessons/assessing-clustering-tendency/](https://www.datanovia.com/en/lessons/assessing-clustering-tendency/) after [https://pubs.acs.org/doi/abs/10.1021/ci00065a010](https://pubs.acs.org/doi/abs/10.1021/ci00065a010) or 2) [https://en.wikipedia.org/wiki/Hopkins_statistic](https://en.wikipedia.org/wiki/Hopkins_statistic), [https://ieeexplore.ieee.org/document/1375706](https://ieeexplore.ieee.org/document/1375706). This seems to be a big problem so I would like to ask an expert to weigh in.
Hopkins statistic - to exponentiate or not to exponentiate?
CC BY-SA 4.0
null
2023-03-14T13:22:43.047
2023-04-04T18:59:37.513
2023-03-14T14:15:42.697
11887
383184
[ "r", "clustering" ]
609416
1
null
null
0
24
This is my first question here, so I hope it is posed correctly. It is a Bayesian probability question: I would like to know the probability of P(A|B,C,D). I can sample everything (including P(A), P(B), P(C), P(A|B,C), P(B,C|A), P(A|C,D) etc.), except for anything that requires sampling B,C and D all at the same time. So I cannot compute P(B,C,D), P(B|C,D) P(B,C,D|A) directly (or any such probability). Is there any way to derive P(A|B,C,D) given these circumstances? I have looked into using Bayesian Model Averaging to help with this, but I was wondering if there might be a more direct solution with fewer assumptions. I look forward to your replies and thank you for taking the time to read this.
P(A|B,C,D) without needing to sample B, C, and D at the same time
CC BY-SA 4.0
null
2023-03-14T13:29:17.167
2023-03-14T13:29:17.167
null
null
383192
[ "bayesian", "inference", "conditional-probability", "joint-distribution" ]
609417
2
null
609407
3
null
The problem is that you did not actually create any trees, because by default each leaf must contain at least 20 records (but you only have 32 records in total), so no split can happen. It helps to look at what the resulting model looks like (in this case, using `lgbm.booster_.trees_to_dataframe()`), which immediately shows that no splits were done. If you ask for e.g. only one record per leaf (`LGBMClassifier(min_child_samples=1)`), you do get actual trees and different predictions depending on the feature value.
null
CC BY-SA 4.0
null
2023-03-14T13:34:43.813
2023-03-15T10:37:40.563
2023-03-15T10:37:40.563
86652
86652
null
609418
1
609423
null
1
62
Exercise. Suppose you train a Ridge model on a regression problem that has a normalized performance measure (say K) that attains a value in the interval [0,1], where 0 means that the model is terrible and 1 means that the model is perfect. Suppose that you evaluate the trained model and obtain in the training set K = 0.22 and in the test set K = 0.21. Which of the following options should you apply? $ (a) $ Train the model again decreasing the value of the regularization coefficient, $\lambda$ ; $ (b) $ There is no need to do anything. The model obtained is satisfactory ; $ (c) $ Train the model again increasing the value of the regularization coefficient, $\lambda$. My attempt. According to the values of $K$, our fit to the data is clearly not satisfactory, in the sense that the values for $K$ are pretty small. Since, for higher values of $\lambda$, we know that the fit to the data is worse and the penalty on model complexity is more important, if we want to obtain better values for $K$, we should train our model again with a smaller value for our regularization coefficient, $\lambda$, to give us better results for $K$. Therefore, I would choose option $(a)$. What ChatGPT says. Given that the performance measure K is normalized and attains a value between 0 and 1, we can conclude that the model's performance is not terrible but not perfect either. The difference in the performance measure between the training and test sets is also not significant. Therefore, there is no need to retrain the model with a different value of lambda. Option (b) is the correct choice: there is no need to do anything, and the model obtained is satisfactory. My question. I agree that the difference between training and test sets is not significant, but I am not so sure that the performance measure is good at all. Is ChatGPT wrong here? Thanks for any help in advance.
Question about retraining a regression model
CC BY-SA 4.0
null
2023-03-14T13:41:37.130
2023-03-14T14:08:19.243
null
null
383130
[ "regression", "self-study", "regularization", "ridge-regression", "chatgpt" ]
609422
1
null
null
0
53
I have a dataset of 2 variables collected repeatedly at 5 different timepoints on a group of individuals, structured like this: ``` ID TimePoint Variable Result 1 1 Var1 5 2 1 Var1 7 3 1 Var1 3 1 1 Var2 5 2 1 Var2 7 3 1 Var2 3 1 2 Var1 0 2 2 Var1 4 3 2 Var1 5 1 2 Var2 6 2 2 Var2 9 3 2 Var2 3 . . . n ``` I want to investigate whether there is a linear relationship between the variables across the timepoints. Separately I have carried out repeated measures testing to determine how the individual variables change for the group over time and a generalized linear model at each timepoint to investigate the relationship between the 2 variables. I am having trouble finding a method to investigate that relationship across all 5 of the timepoints. Can anyone recommend a method to achieve that? N.B; the data in the table is just an example, in the real dataset the participant n = 28.
Investigate relationship between independent variables with multiple timepoints; repeated measures linear model?
CC BY-SA 4.0
null
2023-03-14T14:05:45.307
2023-03-14T18:00:28.153
2023-03-14T14:21:19.027
353617
353617
[ "regression", "correlation", "generalized-linear-model", "repeated-measures", "linear-model" ]
609423
2
null
609418
2
null
My take: neither you, nor ChatGPT, nor a-c are correct. You write: > According to the values of $K$, our fit to the data is clearly not satisfactory, in the sense that the values for $K$ are pretty small. That logic is incorrect. Suppose your task is to predict the flip of a fair coin, and $K$ is given by the accuracy of your prediction over many trials, but squared. (We disregard the fact that [accuracy is a horrible KPI.](https://stats.stackexchange.com/q/312780/1352)) Since accuracy lies between 0 and 1, squared accuracy also lies between 0 and 1, so $K$ is as in your question. However, since this is a fair coin, our accuracy cannot be expected to be above 0.5, so $K$ cannot reasonably be expected to exceed $0.5^2=0.25$. So just because we got a "small" value for $K$ does not mean it is "not satisfactory", or that it can be improved. Thus, the pure fact that $K$ has a certain value does not tell you anything. Rather, what we should be doing (and this is IMO the correct answer) is to assess in what circumstances our model does a bad job, and try to address these situations. See [How to know that your machine learning problem is hopeless?](https://stats.stackexchange.com/q/222179/1352) What the question or your professor probably wanted you to argue is: "Since training and test performance are similar, there is no clear evidence of overfitting. Therefore, we can safely retrain the model with less regularization, i.e., lower values of $\lambda$." Yes, that reasoning makes sense... but I find my take above more helpful, if only because your software (like GLMNet) will probably optimize the regularization parameters automatically, so thinking about a particular value of $\lambda$ is usually not very useful.
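The coin example can be made concrete with a short R simulation (illustrative only):

```r
# Always predict "heads" on a fair coin and compute K = accuracy^2
set.seed(1)
n <- 1e5
flips <- rbinom(n, 1, 0.5)   # 1 = heads
accuracy <- mean(flips)      # close to 0.5 by construction
K <- accuracy^2              # close to 0.25, the best achievable here
K
```

The model is as good as it can possibly be, yet K hovers around 0.25, nowhere near 1.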
null
CC BY-SA 4.0
null
2023-03-14T14:08:19.243
2023-03-14T14:08:19.243
null
null
1352
null
609424
1
609430
null
1
46
I am trying to fit a decision tree on a data with only one explanatory variable and both explanatory and response variables are continuous. I believe in such case the result tree is almost like a step function as in a constant prediction value will be assigned for each subinterval of the horizontal axis. Most of the plotting functions for a tree would render the tree itself, but I am looking for a graphical representation of the tree in the form of the aforementioned step function. I believe this would require extracting the location of each leaf node and the range its corresponding prediction value would be in. I am currently using the `rpart` and `rpart.plot` packages. While the latter offers a really great method `rpart.rules` which shows directly from which range on the x-axis each prediction value would be, it seems to have a limit on the maximum number of rules before it returns an error. My data would require the tree to be really deep and therefore downright simplifying would not be an option. I am looking for a solution to bypass this limit or an alternative method that allows me to render this step function. A reproducible example using `rpart.rules` is given below: ``` library(rpart.plot) set.seed(1) x <- rnorm(10000, mean = 0, sd = sqrt(1)) esp <- rnorm(10000, mean = 0, sd = sqrt(0.25)) Y_noerror <- -1 + 0.5 * x Y <- Y_noerror + esp linear_data <- data.frame(Y = Y, x = x) linear_tree <- rpart(Y ~ x, data = linear_data, control = rpart.control(cp = 0, minbucket = 2)) rpart.rules(linear_tree) ```
Rendering the decision tree as a step function
CC BY-SA 4.0
null
2023-03-14T14:38:34.337
2023-03-14T15:01:08.273
2023-03-14T14:43:56.717
1352
383201
[ "r", "cart", "rpart" ]
609425
5
null
null
0
null
Generalized additive models (GAMs) essentially allow users to model curvilinear data in a manner more flexible than typical regression modeling. They achieve this by fitting smooth functions (splines) built from basis functions, with a penalty that guards against the kind of overfitting common in smoothers such as LOESS. More information on these models can be found at [this wiki page.](https://en.wikipedia.org/wiki/Generalized_additive_model)
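As an illustration, such a model is commonly fitted in R with the `mgcv` package; the data frame and variable names below are placeholders:

```r
library(mgcv)

# Hypothetical data frame 'dat' with response y and predictor x;
# s(x) requests a penalized smooth (spline) term
fit <- gam(y ~ s(x), data = dat, method = "REML")
summary(fit)
plot(fit)  # visualize the estimated smooth
```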
null
CC BY-SA 4.0
null
2023-03-14T14:43:31.693
2023-03-14T17:08:23.977
2023-03-14T17:08:23.977
345611
345611
null
609426
4
null
null
0
null
Generalized additive models (GAMs) are regressions that estimate nonlinear patterns in data. This tag should not be used with the `glm` tag unless the question explicitly deals with comparison of the GAMs with GLMs.
null
CC BY-SA 4.0
null
2023-03-14T14:43:31.693
2023-03-14T16:21:59.383
2023-03-14T16:21:59.383
345611
345611
null
609427
2
null
148439
0
null
This should be the answer for this question: Highest density regions are often the most appropriate subset to use to summarize a distribution, and are capable of exposing the most striking features of the data better than most alternative methods (Hyndman, 1996). Hyndman further argues that highest-density regions (HDR) are a “more effective summary of the forecast distribution than other common forecast regions” because of their flexibility “to convey both multimodality and asymmetry in the forecast density”. This should be the answer. Not sure why you start off with a mathematical definition and talk about 100(1−α)%. All I want to know is what is meant by the term HDR (high density region). A classic example of complicating things when not asked :)
null
CC BY-SA 4.0
null
2023-03-14T14:50:50.527
2023-03-14T14:50:50.527
null
null
383206
null
609429
1
null
null
1
24
I'm doing a study with five proportion predictors that sum to 1. I'm wondering if a LASSO regression is an appropriate analysis?
Is a LASSO an acceptable way to deal with proportional data that sums to 1?
CC BY-SA 4.0
null
2023-03-14T14:58:29.863
2023-03-14T14:58:29.863
null
null
67137
[ "regression", "lasso", "proportion" ]
609430
2
null
609424
1
null
Extract the relevant information from `linear_tree$splits[,"index"]`, which gives you the `x` values at which your tree splits, i.e., at which there are steps. Then evaluate your tree within each step (plus once before the first and once after the last step), and plot: ``` steps_at <- sort(linear_tree$splits[,"index"]) xx <- c(min(steps_at)-1, rowMeans(cbind(head(steps_at,-1),tail(steps_at,-1))), # middle of each step max(steps_at)+1) values <- predict(linear_tree,newdata=data.frame(x=xx)) # y value of each step plot(range(xx),range(values),type="n",xlab="x",ylab="Y",las=1) lines(c(min(xx),rep(steps_at,each=2),max(xx)), rep(values,each=2),type="l") ``` [](https://i.stack.imgur.com/D7bhr.png)
null
CC BY-SA 4.0
null
2023-03-14T15:01:08.273
2023-03-14T15:01:08.273
null
null
1352
null
609431
1
null
null
0
98
I'm a new learner of R. I can properly produce a KM curve for overall survival, but I am stuck when it comes to relapse-free survival. In RFS, is your time to event the time to relapse vs censoring? How do you account for death? I'm confused by the two variables here, relapse Y/N and death Y/N. I've been using `survfit(Surv(time_to_event, status) ~ 1)` for the KM curve. Thanks in advance! Michael. I am stuck on how to arrange my data from Excel in R in order to perform this analysis. I have time to follow-up, whether that be relapse, death, or censoring.
How to perform Kaplan Meier of Relapse/Disease Free Survival?
CC BY-SA 4.0
null
2023-03-14T13:36:37.253
2023-03-14T17:15:44.850
null
null
null
[ "r", "survival" ]
609432
1
609451
null
1
48
I'm having trouble fitting my model to a bacteria growth curve. I made a sample of my data below. Code: ``` df <- data.frame(Bacteria = "EC", Strains = c("AR1", "AR1", "AR1", "AR1", "AR1", "AR1", "AR2", "AR2", "AR2", "AR2", "AR2", "AR2", "AR1", "AR1", "AR1", "AR1", "AR1", "AR1", "AR2", "AR2", "AR2", "AR2", "AR2", "AR2"), Time_Points = c(0,4,6,8,12,16,0,4,6,8,12,16,0,4,6,8,12,16,0,4,6,8,12,16), CFU = c(6.000000e+04, 7.100000e+06, 7.000000e+08, 2.166667e+09, 3.633333e+09, 2.666667e+09, 1.333333e+04, 2.733333e+07, 9.266667e+08, 3.066667e+09, 6.200000e+09, 2.900000e+09, 0.000000e+00, 7.033333e+06, 3.766667e+08, 1.966667e+09, 1.366667e+09, 1.666667e+09, 0.000000e+00, 4.866667e+06, 1.866667e+08, 7.800000e+08, 9.333333e+08, 1.733333e+09 )) View(df) #Make a subset of df of 1 strain df.AR1 <- subset(df, Strains == "AR1") View(df.AR1) plot(df.AR1$Time_Points, df.AR1$CFU) #make a function k=4.5*10^9 N0=12 r=0.5 e_func <- function(t) k/(1 + ((k-N0)/N0)*exp(-r*t)) time_ec <- (df.AR1$Time_Points) ECmodel.counts <- e_func(time_ec) + df.AR1$CFU fit.ECmodel <- nls(ECmodel.counts~ k/(1 + ((k-N0)/N0)*exp(-r*Time_Points)), start=list(k= 4000000000, N0=1, r=0.05), data= df.AR1, trace=TRUE) #Output: #3.415235e+19 (2.09e+00): par = (4e+09 1 0.05) #Error in nls(ECmodel.counts ~ k/(1 + ((k - N0)/N0) * exp(-r * Time_Points)), : # singular gradient ``` I've tried changing my starting parameters, but the same error occurs. I'm not sure where else to troubleshoot. It might be my "Time_Points," but I'm unsure. Thank you! Edit: singular gradient error solved. I've used the suggestions below, and I can fit the curve. 
However, I run into another problem with confidence intervals Using code from Sal Mangiafico: ``` fit.ECmodel <- nls(ECmodel.counts~ k/(1 + ((k-N0)/N0)*exp(-r*Time_Points)), start=list(k= 2.332e+09, N0=3.769e+04, r=1.637e+00), data= df.AR1, trace=TRUE) summary(fit.ECmodel) plot(df.AR1$Time_Points, ECmodel.counts) lines(df.AR1$Time_Points, predict(fit.ECmodel)) confint(fit.ECmodel) Output: > confint(fit.ECmodel) Waiting for profiling to be done... 3.437809e+18 (1.15e-01): par = (37685.5 1.637133) 3.433077e+18 (1.08e-01): par = (27873.03 1.681181) 3.428443e+18 (1.01e-01): par = (21384.36 1.721428) 3.423988e+18 (9.46e-02): par = (16878.15 1.758512) 3.419835e+18 (8.78e-02): par = (13627.03 1.792894) 3.419453e+18 (8.70e-02): par = (8793.148 1.85692) 3.414117e+18 (7.74e-02): par = (6436.856 1.908924) 3.408836e+18 (6.66e-02): par = (4981.086 1.953969) 3.407693e+18 (6.40e-02): par = (3023.672 2.032921) 3.398734e+18 (3.81e-02): par = (1743.85 2.135082) 3.393806e+18 (1.64e-03): par = (1563.866 2.172616) 3.393799e+18 (7.78e-04): par = (1456.776 2.184367) 3.393797e+18 (1.32e-06): par = (1459.651 2.184449) 2.176242e+22 (7.52e+01): par = (-34692.05 2.730645) Error in prof$getProfile() : singular gradient ``` Edit2: Confidence Interval Issues Solved with the below code `library(nlstools); confint2(fit.ECmodel, level = 0.95)`
fitting growth curves with nls singular gradient error
CC BY-SA 4.0
null
2023-03-14T15:15:21.887
2023-03-14T19:09:30.460
2023-03-14T19:09:30.460
383207
383207
[ "growth-model", "nls" ]
609433
1
null
null
1
24
In the context of machine learning, consider a scenario where multiple models are trained to make predictions on a particular task, which could be either a regression or classification problem. However, the ground truth data of the target phenomenon is unknown. The objective is to determine whether there exists a metric that can be used to measure the consistency between the predictions of these models. Is there such a metric? Furthermore, it is desirable to investigate whether a method exists for combining these models based on this metric to improve the overall performance of the ensemble.
Consistency between predictions
CC BY-SA 4.0
null
2023-03-14T15:18:32.090
2023-03-14T15:21:59.900
2023-03-14T15:21:59.900
1352
351681
[ "correlation", "predictive-models" ]
609434
2
null
137005
1
null
For this problem, I found [this article](https://link.springer.com/article/10.1007/s10994-021-06023-5) to be of great interest. From what I understood, there are usually two ways to deal with such an unevenly distributed dataset: - either we resample the dataset, usually by deleting some data to turn it into a well-distributed one, - or we weight the data points, for example here by accounting for the density of points along the x axis. The second approach is the one they propose in the article. One advantage compared to the first method is that we do not delete data points, so we can say in a way that we are more representative. They implement the method in a Python package called [denseweight](https://github.com/SteiMi/denseweight). It is based on a kernel density estimate, which gives you a weight for each point. Then you can use a weighted linear regression with these weights, for example `scipy.optimize.curve_fit` with the `sigma` parameter to account for the uncertainty of each data point. Setting `sigma=1/weights` will make isolated points more certain than dense groups of points, according to their weight. 
Here is an example with my dataset (x, y), either using `denseweight` module, or directly trying to use gaussian kernel density estimation: ``` from scipy.optimize import curve_fit from denseweight import DenseWeight from scipy.stats import gaussian_kde f = lambda x, a, b : a * x + b fig, ax = plt.subplots() ax.plot(x, y, '+') # Standart linear regression : popt, pcov = curve_fit(f, x, y) xfit = np.array(ax.get_xlim()) yfit = popt[0] * xfit + popt[1] ax.plot(xfit, yfit, '-', label='standard linear fit') # Weighted linear regression with denseweight module' : dw = DenseWeight(alpha=1) weights = dw.fit(x) popt, pcov = curve_fit(f, x, y, sigma=1/weights) yfit = popt[0] * xfit + popt[1] ax.plot(xfit, yfit, '-', label='weighted linear fit with denseweight module') # Weighted linear regression with gaussian_kde : kde = gaussian_kde(x) popt, pcov = curve_fit(f, x, y, sigma=kde.pdf(x)) yfit = popt[0] * xfit + popt[1] ax.plot(xfit, yfit, '-k', label='weighted linear fit with gaussian_kde') ax.legend() ``` [](https://i.stack.imgur.com/go3kg.png)
null
CC BY-SA 4.0
null
2023-03-14T15:24:58.210
2023-03-14T15:24:58.210
null
null
383169
null
609435
2
null
609394
0
null
If your data has no bounds (and you know that), then one way is the following:

```
d <- density(x)
n.new <- 1000
x.new <- sample(x, n.new, replace=TRUE) + d$bw*rnorm(n.new, 0, 1)
```

But if you know something more about the data-generating process (as @user2974951's comment asks), such as that the pdf is zero at zero or, as in your example above, that there's a lower limit, then the above approach might not be appropriate. For a lower limit of zero, a common practice is to "double the data" by appending the reflection of the data, running `density`, and then doubling the resulting y-values. See

> Silverman, B.W. (1986) Density Estimation for Statistics and Data Analysis. Chapman and Hall, London. http://dx.doi.org/10.1007/978-1-4899-3324-9

Here's an implementation related to your example:

```
# Random sample from a beta distribution
set.seed(12345)
x <- rbeta(10000, 1, 3)

# Mirror the observations around 0
# (one could also mirror around 1 if that is appropriate for what you know)
x.mirrored <- c(x, -x)

# Estimate density for both sets of data
d <- density(x)
d.mirrored <- density(x.mirrored)

# Plot the results
par(mai=c(1, 1, 0.5, 0.5))
plot(d.mirrored$x[d.mirrored$x > 0], 2*d.mirrored$y[d.mirrored$x > 0],
     xlim=c(0, max(x)), type="l", las=1, lwd=3,
     ylab="Probability density", xlab="X", font.lab=2, cex.lab=2)
xx <- min(x) + c(0:100)*(max(x) - min(x))/100
lines(xx, dbeta(xx, 1, 3), col="red", lwd=3)
lines(d$x[d$x > 0], d$y[d$x > 0], lwd=3, lty=3)

# Generate new data and find density estimate
x.new <- abs(sample(x.mirrored, 10000, replace=TRUE) + d.mirrored$bw*rnorm(10000, 0, 1))
d.new <- density(c(x.new, -x.new))
lines(d.new$x[d.new$x > 0], 2*d.new$y[d.new$x > 0], col="green", lwd=3)

# Put in legend
legend(0.4, 2.5, c("True density", "Naive estimate of density",
                   "Density estimate using reflection",
                   "Density estimate of new sample"),
       col=c("red", "black", "black", "green"),
       lty=c(1, 3, 1, 1), lwd=3)
```

[](https://i.stack.imgur.com/LGoVa.png)
null
CC BY-SA 4.0
null
2023-03-14T15:33:09.693
2023-03-14T15:33:09.693
null
null
79698
null
609436
1
609491
null
3
38
I'm going back through Simon Wood's book on generalized additive models (GAMs) and came back across the definition of the penalty term employed, which is supposed to combat overfitting of smooths. The equation used is the following: $$ \Vert{y-X\beta}\Vert^2 + \lambda \sum_{j=2}^{k-1}\{ f(x^*_{j-1})-2f(x^*_j) + f(x^*_{j+1}) \}^2 $$ The book goes on to explain this formula in the following sentence: > The summation term measures wiggliness as a sum of squared second differences of the function at the knots (which crudely approximates the integrated squared second derivative penalty used in cubic spline smoothing: see section 5.1.2, p. 198). When f is very wiggly the penalty will take high values and when f is ‘smooth’ the penalty will be low. If f is a straight line then the penalty is actually zero. So here is what I get from this paragraph: - The summation from the right side of the equation, if higher, indicates high wiggliness and will be penalized thereafter. - The second differences are some kind of approximation of quadratic, cubic, etc. relations with the variable. - There is no penalty assigned when the function is linear because there is no wiggliness to add to the right side of this equation. My question is the following...how exactly does this equation achieve this? I'm not sure if the $\lambda$ component is part of the confusion, because from what I see it is simply labeled as the smoothing parameter on the next page. The three functions within the equation confuse me as to what they are doing exactly, and the exponent on the outside of the brackets also doesn't make it immediately clear what is going on.
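As a quick numeric sanity check of the "zero penalty for a straight line" claim (my own toy illustration, not from the book, assuming equally spaced knots), the summation term can be evaluated directly:

```python
import numpy as np

knots = np.linspace(0, 1, 10)  # equally spaced knots

def penalty(f):
    """Sum of squared second differences of f at the knots."""
    v = f(knots)
    return np.sum((v[:-2] - 2 * v[1:-1] + v[2:]) ** 2)

print(penalty(lambda x: 1 + 3 * x))              # straight line: (numerically) zero
print(penalty(lambda x: np.sin(6 * np.pi * x)))  # wiggly function: large
```

Second differences of a linear function at equally spaced points cancel exactly, so a straight line contributes nothing, while a wiggly function contributes a large sum.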
Second differences notion behind GAM penalties
CC BY-SA 4.0
null
2023-03-14T15:35:18.527
2023-03-14T21:47:10.877
null
null
345611
[ "regression", "regularization", "overfitting", "generalized-additive-model", "basis-function" ]
609437
1
null
null
0
37
In the mgcv package in R, I'm working on models whose covariates are forced to change shape at the median (= 0). These are the models:

```
gam1 <- gam(Y ~ s(X1, pc=0) + s(X2, pc=0), data=data)
gam2 <- gam(Y ~ te(X1, X2, pc=c(0,0)), data=data)
```

Unfortunately, the constraint on the `gam2` model with tensor products doesn't work, and I don't understand the R error message I get either:

```
Erreur dans if (length(pc) < d) stop("supply a value for each variable for a point constraint") :
  la condition est de longueur > 1
```

(In English: "Error in if (...) : the condition has length > 1".) But when I use isotropic smoothing in another model:

```
gam3 <- gam(Y ~ s(X1, X2, pc=c(0,0)), data=data)
```

...the constraint applies perfectly. Can you help me solve this problem, please?
Constraint on te() tensor product gam mgcv
CC BY-SA 4.0
null
2023-03-14T15:37:15.590
2023-03-14T16:08:37.563
2023-03-14T16:08:37.563
345611
383208
[ "regression", "nonlinear-regression", "generalized-additive-model", "mgcv", "tensor" ]
609438
1
null
null
0
74
Let dataset $D = \{(x_1,y_1),...,(x_n,y_n)\}$ where $x\in\mathbb{R}^d$ and $y_i\in\{0,1\}$. There are 2 mean vectors $\mu_0,\mu_1\in\mathbb{R}^d$ that represent the means of each feature split by label, meaning that $\mu_0$ represents the means of data with $y=0$. The log-likelihood function is given by $$ \log\prod_i^np(x_i|y_i)p(y_i) $$ where $$ p(x|y=0)=\frac{1}{2\pi^{d/2}|\Gamma|^{1/2}}\exp\left(\frac{-1}{2}(x-\mu_0)^T\Gamma^{-1}(x-\mu_0)\right) $$ $$ p(x|y=1)=\frac{1}{2\pi^{d/2}|\Gamma|^{1/2}}\exp\left(\frac{-1}{2}(x-\mu_1)^T\Gamma^{-1}(x-\mu_1)\right) $$ $$ p(y)=\phi^y(1-\phi)^{1-y} $$ To find the estimator $\phi$, we need to take the derivative and set it to 0. First, rewrite the log-likelihood function: $$ \sum_i^n\log(p(x_i|y_i)p(y_i)) $$ $$ \sum_i^n\log p(x_i|y_i) + \log p(y_i) $$ Take the derivative with respect to $\phi$ and set it to 0: $$ \sum_i^n\frac{y_i\phi^{y_i-1}(1-\phi)^{1-y_i}-(\phi^{y_i})(1-y_i)(1-\phi)^{-y_i}}{\phi^{y_i}(1-\phi)^{1-y_i}} = 0 $$ There are 2 cases, $y_i=0$ and $y_i=1$: when $y_i=1$ we get $\frac{1}{\phi}$; when $y_i=0$ we get $\frac{-1}{1-\phi}$. Ultimately, this needs to end up in the form $$ \phi=\frac{1}{n}\sum_i^n\mathbb{1}\{y_i=1\} $$ If, for example, I had $$ y=\begin{bmatrix}1\\1\\0\end{bmatrix} $$ then we have $$ \frac{1}{\phi}+\frac{1}{\phi}-\frac{1}{1-\phi}=0 $$ $$ \phi=\frac{2}{3} $$ which is correct and what the indicator function would give, but I can't see how to formally write this as the indicator function.
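For what it's worth, here is one way to make that last step formal (my own sketch, using that $y_i\in\{0,1\}$ implies $y_i=\mathbb{1}\{y_i=1\}$ and $1-y_i=\mathbb{1}\{y_i=0\}$): the per-observation derivative simplifies to $\frac{y_i}{\phi}-\frac{1-y_i}{1-\phi}$, so with $n_1:=\sum_i^n\mathbb{1}\{y_i=1\}$, $$ \sum_i^n\left(\frac{y_i}{\phi}-\frac{1-y_i}{1-\phi}\right) = \frac{n_1}{\phi}-\frac{n-n_1}{1-\phi}=0, $$ and solving $n_1(1-\phi)=(n-n_1)\phi$ gives $$ \phi=\frac{n_1}{n}=\frac{1}{n}\sum_i^n\mathbb{1}\{y_i=1\}. $$ In the example above, $n_1=2$ and $n=3$, recovering $\phi=2/3$.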
MLE phi derivation
CC BY-SA 4.0
null
2023-03-14T15:38:57.707
2023-03-14T21:15:56.060
2023-03-14T18:38:58.733
22311
380140
[ "maximum-likelihood" ]
609439
1
null
null
0
15
What's the relationship between sample size and the variance of the slope coefficient, $\operatorname{Var}(\hat{\beta}_1)$? Is there any relationship between the error variance and the reliability of our estimates? Or between the variance of $X$ and the variance of the slope coefficient?
Does a larger sample size increase the variance of the slope coefficient?
CC BY-SA 4.0
null
2023-03-14T15:41:13.413
2023-03-14T15:41:13.413
null
null
383174
[ "econometrics" ]
609440
2
null
558970
2
null
Depending on the situation, minimizing square loss might not even be the best way to estimate the mean. For instance, if data are Laplace-distributed, minimizing absolute loss gives an estimator that is, in many regards, better than minimizing square loss. Both are unbiased, but the minimization of absolute loss results in lower variance. This can be seen in a simulation.

```
library(ggplot2)
library(VGAM)
library(quantreg)

set.seed(2023)

N <- 10
R <- 5000
mse_based <- mae_based <- rep(NA, R)
for (i in 1:R){
  y <- VGAM::rlaplace(N, 0, 1)
  L <- lm(y ~ 1)
  Q <- quantreg::rq(y ~ 1, tau = 0.5)
  mse_based[i] <- summary(L)$coef[1, 1]
  mae_based[i] <- summary(Q)$coef[1]
  if (i %% 100 == 0){
    print(paste(i/R*100, "% complete", sep = ""))
  }
}

d_mse <- data.frame(Estimate = mse_based, Estimation = "MSE Minimization")
d_mae <- data.frame(Estimate = mae_based, Estimation = "MAE Minimization")
d <- rbind(d_mse, d_mae)

ggplot(d, aes(x = Estimate, fill = Estimation)) +
  geom_density(alpha = 0.2) +
  theme(legend.position = "bottom")
```

[](https://i.stack.imgur.com/zzTlf.png)

Visually, both have means around zero, which is the true value, but the MSE minimization results in considerably more variability in the estimates. This is not just a fluke of this simulation but can be shown mathematically for a Laplace distribution. This is just one situation where an alternative to MSE minimization is theoretically justified. Additionally, I demonstrate [here](https://stats.stackexchange.com/a/569874/247274) that minimizing MSE need not produce a better out-of-sample MSE than minimizing MAE. This could be another justification for minimizing MAE instead of MSE. Finally, in "classification" problems like logistic regression, making the (reasonable) assumption of a Bernoulli conditional distribution means that minimizing the log-loss, not the square loss, corresponds with maximum likelihood estimation, giving another theoretical justification for minimizing a loss function other than square loss.
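For reference, the asymptotic variances behind this comparison can be sketched as follows (standard large-sample results, not derived in detail here). For a Laplace$(\mu, b)$ distribution with density $f(x)=\frac{1}{2b}e^{-|x-\mu|/b}$ and variance $2b^2$, $$ \operatorname{Var}(\bar X_n)=\frac{2b^2}{n} \qquad\text{(sample mean, i.e. MSE minimization)}, $$ $$ \operatorname{Var}(\tilde X_n)\approx\frac{1}{4nf(\mu)^2}=\frac{b^2}{n} \qquad\text{(sample median, i.e. MAE minimization)}, $$ so the median has roughly half the variance of the mean, consistent with the simulated densities.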
null
CC BY-SA 4.0
null
2023-03-14T15:42:17.507
2023-03-14T15:42:17.507
null
null
247274
null
609442
1
null
null
0
13
I'm a student working with [https://www.kaggle.com/aksha17/superstore-sales](https://www.kaggle.com/aksha17/superstore-sales), primarily as an exercise in resampling and using prophet, and it was suggested to me to create dummy variables and use the likes of 'Category' and 'Segment' as additional regressors to improve the 'Sales' forecast. For the life of me, I can't see why I should, as they are not independent and would merely separate the element values before adding back up to create 'Sales'. I've separated them out, added the missing dates etc. to create

```
    Order Date  Segment    Category         Sales
8   2015-01-07  Consumer   Furniture       76.728
9   2015-01-07  Consumer   Office Supplies 10.430
10  2015-01-08  0          0                0.000
11  2015-01-09  Consumer   Office Supplies  9.344
12  2015-01-09  Consumer   Technology      31.200
13  2015-01-10  Corporate  Furniture       51.940
14  2015-01-10  Corporate  Office Supplies  2.890
15  2015-01-11  Consumer   Furniture        9.940
```

prior to splitting as dummies, which I'd then need to re-sum to get back to one line per date for feeding into prophet (and sum the dummies? or create a compound variable from Segment+Category and then dummy on that?). But all the time I think I'm on the very wrong path, as these are not exogenous variables and should not be used as regressors. Your thoughts please.
Dataset has no candidates for prophet add_regressor
CC BY-SA 4.0
null
2023-03-14T15:45:51.780
2023-03-14T15:45:51.780
null
null
383210
[ "categorical-encoding", "prophet", "exogeneity" ]
609444
1
null
null
0
53
Start with a normal distribution with mean M and standard deviation S. Now exclude all values below the Kth percentile. What is the mean of the remaining 100%-K% of the values as a function of M and S? I'd just like a simple answer and don't need a proof or generalization. Those are provided in lengthy answers to related questions, which don't actually give a simple equation from which to compute the answer and assume the reader knows the difference between the functions $\phi$ and $\Phi$ (I don't): [Expected value of x in a normal distribution, GIVEN that it is below a certain value (2 answers)](https://stats.stackexchange.com/questions/166273/expected-value-of-x-in-a-normal-distribution-given-that-it-is-below-a-certain-v) [Expectation of truncated normal (3 answers)](https://stats.stackexchange.com/questions/356023/expectation-of-truncated-normal)
Mean of a truncated normal distribution
CC BY-SA 4.0
null
2023-03-14T15:58:11.220
2023-03-15T02:36:57.410
2023-03-15T02:36:57.410
25
25
[ "normal-distribution", "truncated-normal-distribution" ]
609445
2
null
586875
1
null
I think that the distribution of $y=\frac{X}{||X||}$ where $X\sim \mathcal{N}(0, \Psi)$ is the [projected normal distribution](https://projecteuclid.org/journals/bayesian-analysis/volume-12/issue-1/The-General-Projected-Normal-Distribution-of-Arbitrary-Dimension--Modeling/10.1214/15-BA989.full). I came across a similar problem and used an approximation that works well in most cases. My approximations also work for the case of non-zero mean. For reference, [here](https://stats.stackexchange.com/questions/605559/expected-value-of-the-outer-product-of-normalized-non-centered-gaussian-vector/606660#606660) is a previous question of mine, that I responded, for the case of $X\sim \mathcal{N}(\mu, \mathbf{I}\sigma^2)$ (i.e. isotropic noise). See the approach used in that question of approximating each element $i,j$ of the second-moment matrix (i.e. $\mathbb{E}\left(\frac{X_i X_j}{||X||^2}\right)$) as a ratio of quadratic forms that uses a symmetric $A$ in the numerator that is specific to each $i,j$. For your question, we can use the exact same approach of approximating the elements of the second-moment matrix with that same ratio of quadratic forms, but generalizing the answer above to the case where $X\sim \mathcal{N}(\mu, \Psi)$ where $\Psi$ is any symmetric positive semi definite matrix. For this generalized quadratic form, a second order Taylor approximation can be found in [this article](https://www.sciencedirect.com/science/article/pii/S016794730200213X), section 3.1. 
Using the formulas in the paper above for each matrix $A$ corresponding to each element $i,j$ of the second-moment matrix (as done in my other question I linked), and converting the resulting equations to matrix formulas, we get the following approximation: \begin{equation} \label{nonIso} \mathbb{E}\left( \frac{XX^T}{||X||^2} \right) \approx \frac{\mu_N}{\mu_D} \odot \left( 1 - \frac{\Sigma^{N,D}}{\mu_N\mu_D} + \frac{Var(D)}{\mu_D^2} \right) \end{equation} where the terms are defined as follows: \begin{equation} \begin{split} & \mu_N = \Psi + \mu \mu^T \\ & \mu_{D} = tr(\Psi) + ||\mu||^2 \\ & Var(D) = 2 tr(\Psi^2) + 4 \mu^T \Psi \mu \\ & \Sigma^{N,D} = 2 \left[\Psi \Psi + \mu \mu^T \Psi + \Psi\mu \mu^T \right] \end{split} \end{equation} Note that while $\mu_N \in \mathbb{R}^{d\times d}$ and $\Sigma^{N,D} \in \mathbb{R}^{d\times d}$, $\mu_D \in \mathbb{R}$ and $Var(D) \in \mathbb{R}$. Also, importantly, $\odot$ is element-wise multiplication, and the ratios between matrices that appear there are also element-wise. Maybe for your case of 0 mean you would be able to apply the same approach and find an exact formula for the ratios, instead of this approximation. Finally, that is the second-moment matrix, not really the covariance matrix. You can subtract the outer product of the expected value of the projected Gaussian to obtain the covariance as follows: $$ Cov\left( \frac{X}{||X||} \right) = \mathbb{E}\left( \frac{XX^T}{||X||^2} \right) - \mathbb{E}\left( \frac{X}{||X||} \right) \mathbb{E}\left( \frac{X}{||X||} \right)^T$$ I would suppose that because your variable is centered and the distribution is symmetric, $\mathbb{E}\left( \frac{X}{||X||} \right) = 0$, though I can't say for sure. I tried this approximation on simulated data and it works quite well.
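Here is a quick Monte Carlo sanity check of the approximation (illustrative parameters of my own choosing; the formula is implemented in expanded form so no entry of $\mu_N$ is ever divided by, which avoids zeros):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([2.0, 1.0, 0.5])
Psi = np.array([[0.10, 0.02, 0.00],
                [0.02, 0.08, 0.01],
                [0.00, 0.01, 0.05]])

# terms of the approximation
mu_N = Psi + np.outer(mu, mu)
mu_D = np.trace(Psi) + mu @ mu
var_D = 2 * np.trace(Psi @ Psi) + 4 * mu @ Psi @ mu
Sigma_ND = 2 * (Psi @ Psi + np.outer(mu, mu) @ Psi + Psi @ np.outer(mu, mu))

# expanded form of mu_N/mu_D * (1 - Sigma_ND/(mu_N*mu_D) + var_D/mu_D^2)
approx = mu_N / mu_D - Sigma_ND / mu_D**2 + mu_N * var_D / mu_D**3

# Monte Carlo estimate of E[X X^T / ||X||^2]
X = rng.multivariate_normal(mu, Psi, size=200_000)
W = X / np.linalg.norm(X, axis=1, keepdims=True)
mc = W.T @ W / len(W)

print(np.max(np.abs(approx - mc)))  # small for these parameters
```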
null
CC BY-SA 4.0
null
2023-03-14T15:58:13.660
2023-03-14T15:58:13.660
null
null
134438
null
609446
1
null
null
0
24
I am trying to do a simple linear regression of a scatterplot and it results in this: [](https://i.stack.imgur.com/A316F.jpg) So, the red line is the result I get, but I expect something more like the black line (drew it roughly). I use ordinary least square minimization and I understand that the black line would have huge residuals for the points on the far right of the plot for example. So it is clear why I do not get the result I want. Now, I am wondering if there are other methods, for example using the perpendicular distance to the line rather than the vertical. Maybe you can throw me some buzz words or give me some ideas on what I could try.
Linear fit not representing data well
CC BY-SA 4.0
null
2023-03-14T16:08:26.083
2023-03-14T16:08:26.083
null
null
383212
[ "regression" ]
609447
2
null
609444
1
null
This is the [one-sided truncated normal distribution](https://en.wikipedia.org/wiki/Truncated_normal_distribution) with mean and variance given by: - $\mathbb{E}(X | X>a) = \mu + \sigma \phi(\alpha)/Z$ - $Var( X|X>a) = \sigma^2[1+\alpha\phi(\alpha)/Z - (\phi (\alpha)/Z)^2 ]$ where $\alpha=(a-\mu)/\sigma$ and $Z = 1-\Phi(\alpha)$. In your case, you'd plug in $a = 0.98$.
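A quick numerical check of these formulas against scipy's truncated normal (illustrative values; note `truncnorm` takes its truncation points on the standardized scale):

```python
import numpy as np
from scipy.stats import norm, truncnorm

mu, sigma, a = 0.0, 1.0, 0.98
alpha = (a - mu) / sigma
Z = 1 - norm.cdf(alpha)

mean_trunc = mu + sigma * norm.pdf(alpha) / Z
var_trunc = sigma**2 * (1 + alpha * norm.pdf(alpha) / Z - (norm.pdf(alpha) / Z)**2)

# scipy parameterizes the truncation points on the standardized scale
rv = truncnorm(alpha, np.inf, loc=mu, scale=sigma)
print(mean_trunc, rv.mean())  # should agree
print(var_trunc, rv.var())    # should agree
```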
null
CC BY-SA 4.0
null
2023-03-14T16:13:31.160
2023-03-14T16:13:31.160
null
null
383143
null
609448
2
null
194278
0
null
It seems like there are three valid definitions in the case of PCA: - Matrix-wise L2: Whole matrix $\bf{X}$ $ || \bf{X} - \bf{X_r} ||$ - Row-wise L2: Feature vectors on dataset $\bf{Y}$: $ \sum_i{|| \it{y^i} - \it{y^i_r} ||}$ - Elemental L1: Per-element (or pixel) error on dataset $\bf{Z}$: $ \sum_{ij}{|z^{ij} - z^{ij}_r|}$ Where each of the three cases could wind up being slightly different. The definitive norm and reconstruction error would be the type 3 L1 norm summed per-element. However, the other two may be more forgiving and relevant in different domains. In terms of the LDA, when you go about implementing the LDA you can reconstruct the data with computed intermediate components. But, you will still be using covariances or other computed components that rely on the SVD. So, analyzing a PCA based on your SVD solver that you use for your LDA implies that your LDA has the same reconstructive ability, assuming it is implemented correctly.
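The three error types can be computed directly from an SVD-based rank-$r$ PCA reconstruction; here is a small numpy sketch (my own illustration), which also checks the Eckart–Young identity for the matrix-wise L2 (Frobenius) error:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
Xc = X - X.mean(axis=0)                          # centered data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

r = 3                                            # number of retained components
Xr = (U[:, :r] * s[:r]) @ Vt[:r]                 # rank-r reconstruction

frob = np.linalg.norm(Xc - Xr)                   # 1. matrix-wise L2 (Frobenius)
rowwise = np.linalg.norm(Xc - Xr, axis=1).sum()  # 2. summed row-wise L2
elem_l1 = np.abs(Xc - Xr).sum()                  # 3. summed element-wise L1

# Frobenius error equals the root sum of squares of the discarded singular values
print(frob, np.sqrt((s[r:] ** 2).sum()))
```

The matrix-wise error has the clean closed form above, while the row-wise and element-wise variants are always at least as large, which is one sense in which they are "less forgiving."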
null
CC BY-SA 4.0
null
2023-03-14T16:18:51.970
2023-03-15T01:53:49.133
2023-03-15T01:53:49.133
97219
97219
null
609449
1
null
null
0
12
For both batch and stochastic gradient descent, how can you detect if it is stuck in a local minimum and not the global minimum? Is there a sophisticated way to do this, as opposed to guess and check strategies like randomly adjusting the learning rate?
How can you detect if your Gradient Descent algorithm is in a local minimum and not the global minimum?
CC BY-SA 4.0
null
2023-03-14T16:27:59.753
2023-03-14T16:27:59.753
null
null
361781
[ "gradient-descent", "stochastic-gradient-descent" ]
609451
2
null
609432
3
null
It's basically a problem with the starting values, and the limitations of the fitting algorithm in `nls()`. The following works:

```
library(minpack.lm)

model <- nlsLM(ECmodel.counts ~ k/(1 + ((k - N0)/N0)*exp(-r*Time_Points)),
               start = list(k = 4000000000, N0 = 1, r = 0.05),
               data = df.AR1, trace = TRUE)
summary(model)
```

Or, knowing the correct starting values will work with `nls()`:

```
fit.ECmodel <- nls(ECmodel.counts ~ k/(1 + ((k - N0)/N0)*exp(-r*Time_Points)),
                   start = list(k = 2.332e+09, N0 = 3.769e+04, r = 1.637e+00),
                   data = df.AR1, trace = TRUE)
summary(fit.ECmodel)
```
null
CC BY-SA 4.0
null
2023-03-14T16:51:05.150
2023-03-14T16:51:05.150
null
null
166526
null
609453
2
null
319349
0
null
My experience with deep neural networks (DNNs) on regression has convinced me that they do not work well if the number of features is high and the accuracy requirement is also high. The probable reason is that a DNN is only a function approximator. When the original function is complicated and the number of features is high, it is very difficult to find the "right" model for a DNN; the training seems to alternate between underfitting and overfitting as you change the size of the DNN and/or the size of the training data.
null
CC BY-SA 4.0
null
2023-03-14T17:06:48.153
2023-03-14T17:06:48.153
null
null
383217
null
609454
2
null
465909
0
null
Let's do an example.

```
library(ggplot2)
set.seed(2023)

N <- 100
x <- seq(0, 1, 1/(N - 1))
y <- 1 + 3*x + rnorm(N)

beta0s <- seq(0, 2, 0.05)
beta1s <- seq(2, 4, 0.05)

counter <- 1
d <- data.frame(
  slope     = rep(NA, length(beta0s)*length(beta1s)),
  intercept = rep(NA, length(beta0s)*length(beta1s)),
  mse       = rep(NA, length(beta0s)*length(beta1s))
)
for (i in 1:length(beta0s)){
  print(paste(i, "of", length(beta0s)))
  for (j in 1:length(beta1s)){
    slope <- beta1s[j]
    intercept <- beta0s[i]
    mse <- mean((y - (intercept + slope*x))^2)
    d[counter, ] <- c(slope, intercept, mse)
    counter <- 1 + counter
  }
}

ggplot(d, aes(slope, intercept, z = mse)) +
  geom_contour()

L <- lm(y ~ x)
summary(L)$coef[, 1]
d[which(d$mse == min(d$mse)), ]
mean((y - (summary(L)$coef[1, 1] + summary(L)$coef[2, 1]*x))^2)
```

[](https://i.stack.imgur.com/yd1wJ.png)

The plot shows the minimum to be around a slope of $2.7$ and an intercept of $1.2$. Indeed, the minimum occurs around there in the simulated loss function and also when the OLS regression estimate is performed. However, the mean squared error is not zero. In the simulated loss function, the MSE is $0.9703055$, and the OLS estimate gives an MSE of $0.9702297$. Therefore, even at the global minimum, the loss function is not equal to zero, and the predictions are not perfect. It would be nice to make a model that gives perfect predictions, particularly when it comes to predicting stock prices, but that is not realistic.
null
CC BY-SA 4.0
null
2023-03-14T17:15:29.953
2023-03-14T17:15:29.953
null
null
247274
null
609455
2
null
609431
0
null
From the [National Cancer Institute's](https://www.cancer.gov/publications/dictionaries/cancer-terms/def/relapse-free-survival) definition of "relapse-free survival": > In cancer, the length of time after primary treatment for a cancer ends that the patient survives without any signs or symptoms of that cancer. Based on that definition, the question becomes whether a death was a "sign or symptom of that cancer." That requires further interrogation of the clinical data. If a death was due to previously unrecognized cancer recurrence, then that case should be coded as having relapse, not just death, as of the date of death. If the death was due to an unrelated cause, then the date of death is a censoring of the time to relapse. If you can't distinguish the reason for death, then for a type of cancer with high mortality it's probably best to re-define "relapse-free survival" specifically for your study as the "time to first sign or symptom of cancer return, or time to death, whichever came first." Then include that limitation of your re-definition in your discussion of results. The R [survival package](https://cran.r-project.org/package=survival) allows the `status` variable to be a multi-level factor. That allows you to code a particular observation time as any of censored, relapse, death from the cancer, or other death. Then for any particular survival curve (overall survival, disease-specific survival, relapse-free survival) you specify in the function call which (combination of) status values you want to consider the event.
null
CC BY-SA 4.0
null
2023-03-14T17:15:44.850
2023-03-14T17:15:44.850
null
null
28500
null
609456
1
null
null
0
28
I am unsure as to how to compute a two-sided p value following permutation testing, having followed different examples online. For example, this post, [Two-sided permutation test vs. two one-sided](https://stats.stackexchange.com/questions/34052/two-sided-permutation-test-vs-two-one-sided), gets the proportion of the absolute correlation coefficients from the permutation test greater than or equal to the absolute non-permuted correlation coefficient. When I ran this, here were my results:

```
set.seed(1)
x <- runif(20)
y <- 0.5 * x
y <- y + rnorm(20)

# set up for the permutation, compute observed R
nullR <- numeric(length = 1000)
nullR[1] <- cor(x, y) ## observed R in [1]
N <- length(x)

# permutation test
for(i in seq_len(999) + 1) {
  nullR[i] <- cor(x[sample(N)], y)
}

# one sided, H1: R > 0
> sum(nullR >= nullR[1]) / length(nullR)
[1] 0.929
# one sided, H1: R < 0
> sum(nullR <= nullR[1]) / length(nullR)
[1] 0.072
# two sided
> sum(abs(nullR) >= abs(nullR[1])) / length(nullR)
[1] 0.155
```

Whilst this website ([https://dgarcia-eu.github.io/SocialDataScience/5_SocialNetworkPhenomena/056_PermutationTests/PermutationTests](https://dgarcia-eu.github.io/SocialDataScience/5_SocialNetworkPhenomena/056_PermutationTests/PermutationTests)) calculates a two-sided p value following permutations of correlations as:

```
p_value_Cor <- (sum(nullR >= nullR[1]) + 1) / length(nullR)
```

which gets the same value as the first one-sided test, 0.93.

Q1) Which is the right way to calculate a two-sided p value following permutation of correlations?

Q2) If I used cor.test, which would give both the test statistic and the estimate, which should I use to calculate the p value and why: the estimate, like they have done above, or the test statistic?

In my own data, I would be using Spearman correlation. Any clarity would be appreciated.
Clarification of one sided vs two sided p value following permutation testing
CC BY-SA 4.0
null
2023-03-14T17:16:59.850
2023-03-14T17:16:59.850
null
null
361346
[ "r", "correlation", "p-value", "permutation" ]
609457
2
null
567071
0
null
With neural networks, you would do this by having multiple outputs: - the (logit-)probability of being in each category from a to d (that's the easy bit with not too many choices) AND one of the following options (at least these two make sense): - the (logit-)probability of being in each category-subcategory combination (i.e. falling into a1, a2, a3, a4, b1, b2, c1 or d1), treating this as a separate categorical problem from the first one: downside: does not enforce that the probability of being in a1 to a4 adds up to the probability of being in a. It also sort of double-counts categories c and d, where there's really no sub-category. It's very straightforward to implement though. - the (logit-)probability of being in sub-category 1 to 4 conditional on being in category a and the (logit-)probability of being in sub-category 1 or 2 conditional on being in category b: enforces the constraints (you can also output the outputs of the above, but internally set up the model like this, it's really the same thing). As a loss function, you would not incur any loss for the first output when the category is b, c or d, but use categorical crossentropy for the choice of 1 to 4 when the category is a. Similarly, the second target would only incur a loss when the category is b. With things other than neural networks, it's hard to do it all in a single model. However, you could have sequential models (i.e. first one that predicts the category and then, for categories a and b, two models that predict the subcategory). These would only be trained on the subset of observations in category a and b.
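A minimal, framework-agnostic numpy sketch of the conditional-loss idea in the second option (my own illustration): each sub-category head only incurs categorical crossentropy on rows whose top-level category actually has that head:

```python
import numpy as np

def masked_crossentropy(probs, y_sub, mask):
    """Crossentropy for one sub-category head, averaged only over masked rows.

    probs: (n, k) predicted sub-category probabilities from one head
    y_sub: (n,) integer sub-category labels (ignored where mask is False)
    mask:  (n,) True only where the top-level category is, e.g., 'a'
    """
    ll = -np.log(probs[np.arange(len(y_sub)), y_sub] + 1e-12)
    return (ll * mask).sum() / max(mask.sum(), 1)

# rows 0 and 1 are category 'a' (head applies), row 2 is category 'c' (no loss)
probs = np.array([[0.70, 0.10, 0.10, 0.10],
                  [0.25, 0.25, 0.25, 0.25],
                  [0.90, 0.05, 0.03, 0.02]])
y_sub = np.array([0, 2, 0])
mask = np.array([True, True, False])
print(masked_crossentropy(probs, y_sub, mask))
```

Changing the head's predictions on a masked-out row (here, row 2) leaves the loss unchanged, which is exactly the "no loss when the category is b, c or d" behaviour described above.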
null
CC BY-SA 4.0
null
2023-03-14T17:19:42.333
2023-03-14T17:19:42.333
null
null
86652
null
609458
1
null
null
1
55
I want to compare two alternative approaches for evaluating the uncertainty of the multi-dimensional MLE $\widehat \theta$ based on a log-likelihood function $l$: - Compute a Fisher-information-based quadratic confidence interval for the MLE as $$l(\theta_i) \approx l(\widehat \theta_i) -\frac12 I_n(\widehat \theta)_{ii}(\theta_i-\widehat\theta_i)^2,$$ where $I_n$ is the observed Fisher information, and where $n$ is the number of i.i.d. observations. This is based on the CLT $$\sqrt{n}(\widehat \theta - \theta)\implies \mathcal N(0,I(\theta)^{-1}),$$ where $I(\theta)$ is the Fisher information based on one observation (i.e., $I_n(\theta)=nI(\theta)$). - Compute the profile likelihood $$pl(\theta_i)=\max_{\theta_j,\ j\neq i} l(\theta) $$ for values $\theta_i$ around the MLE (that is, re-maximization over all other components while the $i$-th component of the parameter is held fixed: see this post for a clear definition). I have a case where the log-likelihood reads $$ l(\theta) =\sum_{i=1}^n \left( -\frac12\log(2\pi) - \log(\sigma) - \frac{(f(\theta)_i- y_i)^2}{2\sigma^2}\right), $$ for a function $f$ that maps the parameter $\theta$ into an $n$-dimensional vector, and a sequence of i.i.d. observations $y_i$. The observed Fisher information then reads $$ I(\theta)_{kl}=-\frac{\partial^2 l(\theta)}{\partial\theta_k\partial\theta_l}. $$ Are the two approaches above supposed to give similar results? In which case should the function $pl$ and the quadratic approximation be approximately the same?
Profile likelihood vs quadratic log-likelihood approximation
CC BY-SA 4.0
null
2023-03-14T17:20:04.963
2023-03-18T10:07:01.303
2023-03-17T09:17:15.410
376700
376700
[ "confidence-interval", "likelihood", "fisher-information", "profile-likelihood" ]
609459
1
null
null
2
9
I have been told to estimate a gender and nationality gap using a regression model for a dataset provided by a third party. The dataset contains information about workers in a very specific sector. A sample of X companies was drawn from a population of companies operating in that sector. Then, data were collected on all workers employed in the selected companies (a lot of information is available). There are large and small companies, so some large companies have many observations (rows in the dataset), while each small company has only one or a very few rows. I am not sure if this is a good framework for a regression analysis. I think there is a non-negligible selection bias going on... Do you agree with me? If not, why? I am very hesitant and not sure if I should really do this analysis, or whether further adjustments need to be made, such as the random deletion of some observations for larger companies to reduce their weight in the sample. Thank you for your comments and suggestions :-)
Regression and sampling issues : risk of selection bias when estimating gender gap?
CC BY-SA 4.0
null
2023-03-14T17:30:15.527
2023-03-14T17:30:15.527
null
null
383219
[ "regression", "sampling", "bias" ]
609460
2
null
609242
1
null
This is a great question but kind of an impossible task. This points to the fundamental bias-variance tradeoff that is omnipresent in statistics and causal effect estimation in particular. Technique A will probably have lower variance and higher bias, and Technique B will probably have higher variance and lower bias. The question of which has lower mean squared error, which in a way is the fundamental question that would indicate which to choose, cannot be answered without knowing more about the data generating process than a researcher has access to. Here is one way you could proceed. First, run a power analysis to determine the sample size required to detect an effect of interest at a desired level or run an analysis to determine the sample size required to have a confidence interval with a given width. Then, see if Technique B yields a matched sample greater than or equal to that size. If it does, then your bias will be low and you will have the desired precision. If it does not, then you can still proceed with technique B, but know that you are at risk for a wide confidence interval or the possibility of making a type II error (false negative). You may also come to the conclusion that there is no way to reliably detect the effect given the data because the only way to reduce bias in the effect estimate is to decrease precision to an unacceptable degree. That is a fundamental limitation of the dataset and is tantamount to running a randomized trial that is too small to detect an effect. Another option is to augment the matching with further bias reduction through regression. So, you can use technique A, then further adjust the effect estimate by including the covariates (in particular, the imbalanced variables) in the outcome model.
This still leaves you open to all the problems using regression alone has, including extrapolation and inability to prove that you have achieved adequate balance*, but to a lesser degree since the matching has at least partially reduced some of the model dependence. There is a way to directly visualize the bias-variance tradeoff using a technology called the "matching frontier", which is described in [King et al. (2017)](https://doi.org/10.1111/ajps.12272) and implemented in my R package [MatchingFrontier](https://iqss.github.io/MatchingFrontier/), which isn't yet on CRAN. The matching frontier is a function that relates the size of the matched sample to the (optimal) balance of that sample. This allows you to see how continuing to discard units (e.g., by tightening a caliper) changes balance. It might be that there is a caliper at which balance stops improving, in which case you can use a wider caliper than the one you have been using. You can also estimate treatment effects and confidence intervals across the frontier to see how the effect estimate and confidence interval change as additional units are dropped. You would present the entire frontier to readers so as not to cherry-pick the point on the frontier that yields the most favorable result. --- \*The methodology described in [Chattopadhyay & Zubizarreta (2022)](https://doi.org/10.1093/biomet/asac058) actually does allow you to assess balance after linear regression in a matched or unmatched sample. We have an R package that implements the methods coming out soon, and if you are interested in using it, get in touch.
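The power-analysis step mentioned above can be sketched numerically. The effect size, alpha, and power below are hypothetical placeholders, and the normal approximation is used in place of the exact t-based calculation:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sample comparison of
    means, using the normal approximation to the two-sample t-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# Hypothetical standardized effect size of 0.3:
print(math.ceil(n_per_group(0.3)))  # 175 per group
```

If the matched sample from Technique B is at least this large, the precision concern above is addressed.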
null
CC BY-SA 4.0
null
2023-03-14T17:30:22.020
2023-03-14T17:30:22.020
null
null
116195
null
609461
1
null
null
0
16
I have a dataset that includes multi-level categories. I need to classify it with a multi-label, multi-level classification model. Could you please tell me how I can do this with machine learning or deep learning? Thanks.
How to predict multi-class multi-level category data in machine learning classification?
CC BY-SA 4.0
null
2023-03-14T17:31:52.773
2023-04-12T16:11:17.970
2023-04-12T16:11:17.970
220466
383187
[ "machine-learning", "neural-networks", "classification", "multi-class" ]
609462
1
null
null
0
20
I created two donation campaigns, one loss-framed (N:1993) and one gain-framed (N:1989). I understood donations are usually veeeery low and the sample was a lot smaller than previously expected, so I had to let go of the idea of having a control group (as if I divided my group into three I most likely would have gotten null results for all; plus, what does a control donation campaign look like?). I conducted stratified randomization, making sure both the gain and loss treatments were balanced across relevant covariates. Now, as I analyse my data, I find that there are significant differences in favour of the loss treatment, both in probability of donation and donation amount. I can conduct both an intention-to-treat analysis and also a compliers-only analysis (as I have data on who opened the emails). Results hold for both types of analysis. My main concern now is: I don't have a control group, so am I limited to just comparing both treatments on both donation amount and donation probability (and maybe checking for heterogeneous effects)? Now, I might be able to access data from previous donation campaigns (not designed by me, and most likely sent to the whole sample); would that be beneficial to my analysis? For example, there is a "valentines" campaign that was carried out one month before mine... could that be a potentially valid control?
Comparing treatment effects across groups
CC BY-SA 4.0
null
2023-03-14T17:32:08.730
2023-03-14T18:06:50.663
2023-03-14T18:06:50.663
309723
309723
[ "experiment-design", "treatment-effect", "observational-study", "case-control-study", "treatment" ]
609463
2
null
609358
1
null
There are no numbers that would immediately be obviously wrong. The only possibility would be numbers that are mathematically impossible, such as odds ratios that are negative. In general, regression coefficients are keyed to the units of your data, so you could make the coefficients arbitrarily large or small by converting, say, length data between nanometers and kilometers. If there is some well-grounded reason to believe that the variables 'should' be significant, there are a couple of possibilities. First, as @Henry [notes](https://stats.stackexchange.com/questions/609358/unconventional-odds-ratio-and-95-ci#comment1131130_609358), you could have multicollinearity. You can check VIFs for this. If you have it, you won't be able to get highly informative tests of individual variables, but you could get a simultaneous test of the collinear variables and show that something in there somewhere is related to the outcome. With logistic regression (or ordinal, or multinomial), another possibility is that there is complete separation in your dataset. You can simply use a different type of test, such as a likelihood ratio test or a score test. You can also get CIs based on the likelihoods or score functions. To learn more about those topics, try reading some of our threads categorized under the [multicollinearity](/questions/tagged/multicollinearity) and [separation](/questions/tagged/separation) tags (you may want to sort by 'votes' or 'frequent' and start reading down from the top).
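As a rough sketch of the VIF check (assuming NumPy is available; the simulated predictors are purely illustrative), the VIFs can be read directly off the diagonal of the inverse of the predictors' correlation matrix:

```python
import numpy as np

def vifs(X):
    """Variance inflation factors for the columns of X: the diagonal
    of the inverse of the predictors' correlation matrix."""
    corr = np.corrcoef(X, rowvar=False)
    return np.diag(np.linalg.inv(corr))

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.1, size=200)  # nearly collinear with x1
x3 = rng.normal(size=200)
print(vifs(np.column_stack([x1, x2, x3])))  # x1 and x2 get very large VIFs
```

A common rule of thumb flags VIFs above roughly 5-10 as problematic.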
null
CC BY-SA 4.0
null
2023-03-14T17:36:34.920
2023-03-14T17:36:34.920
null
null
7290
null
609464
2
null
609422
0
null
First, are your time-points equally spaced or are they dummy variables? If you want to regress both time and your variable they need to be continuous variables to see a linear relationsship. I think you see where I am going here: use time as a covariate in your model and make an interaction with the other variable to determine if there are relations between time and the other covariates in predicating your outcome variable. What software are you running in? Might help for an example.
null
CC BY-SA 4.0
null
2023-03-14T18:00:28.153
2023-03-14T18:00:28.153
null
null
382664
null
609466
1
null
null
0
9
In sample X, I am trying to find the average cost savings of upgrading propane- and utility gas-heated mobile homes and single-family dwellings. Let's say that I have 4 different profiles: PPMH, PPSFD, UGMH, and UGSFD. I know the percentage of vintages of all mobile homes and SFDs regardless of whether they are heated by propane, utility gas, fuel oil, electricity, etc. For example, .03 of the mobile homes in sample X are of a 2010s vintage. I also know, given the vintage, the percentage that is heated by a specific method; for example, of the mobile homes in sample X of a 2010s vintage, .97 are heated by electricity, 0 are heated by utility gas, and .02 are heated by propane. My fundamental problem is that I have these percentages for vintages and I have the fuel-type dependency on the vintages themselves. If I know for each of my 4 profiles the cost savings of an upgrade from propane or utility gas to electrification, and my sample X has homes of all types and heating methods, how can I make a weighted average of the cost savings of a home in my sample that I upgraded? My thought was basically to take the dot product along vintages and heating types and multiply the result by the product of the total houses in the sample and the percentage that are either mobile homes or SFDs. So if 27% of sample X were mobile homes and there were 100 homes in the sample, the product would be 27, which I would then multiply by the dot product of vintage% and heating type% based on vintage. Does this make sense? Any help would be greatly appreciated!
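In code, the dot-product idea I describe would look something like this (all shares and the savings figure below are made-up placeholders, not my actual data):

```python
import numpy as np

# Hypothetical shares (placeholders only):
vintage_share = np.array([0.40, 0.30, 0.27, 0.03])  # P(vintage) among mobile homes
propane_share = np.array([0.10, 0.08, 0.05, 0.02])  # P(propane-heated | vintage)
savings_ppmh = 1200.0                               # $/yr savings for the PPMH profile

# Expected per-home savings: dot product of the vintage shares and the
# vintage-conditional heating shares, scaled by the profile savings.
per_home = savings_ppmh * vintage_share @ propane_share

# Scale to the sample: 100 homes total, 27% of them mobile homes.
total = 100 * 0.27 * per_home
print(per_home, total)
```

In words, `vintage_share @ propane_share` is the overall fraction of mobile homes that are propane-heated, which is the weight applied to the PPMH savings figure.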
How should I calculate the weight for a weighted average
CC BY-SA 4.0
null
2023-03-14T18:22:54.507
2023-03-14T18:22:54.507
null
null
366624
[ "weights", "weighted-mean" ]
609468
2
null
38832
1
null
This is an agreement chart. I have discussed it here: [Match Quality Graph](https://stats.stackexchange.com/a/219245/7290), which may be worth your while to read.
null
CC BY-SA 4.0
null
2023-03-14T18:29:08.910
2023-03-14T18:29:08.910
null
null
7290
null
609470
1
null
null
0
27
I've created a linear relationship between two different measurements. The equation is in the form of y=mx and I am trying to validate my slope within a certain range so that when I run my experiment, I can account for the accuracy of each system. To explain this in terms of my project (not a student! I'm an analytical technologist): let's say that system A outputs some number with the units of density (mass/volume) and system B outputs some number with the units of energy/distance. The linear equation, based on about 10 sets of historical data, says that the slope of the relationship is 3 (Mass/Volume = 3 * energy/distance). I have 1-2 experiments I can run to prove that the slope, 3, is an accurate representation of the relationship. If I do not get an output of 3 when I compare system A and B, my comparison fails :( However, system A reports values within +- 10% of the 'true' density and system B reports values within +-3% of the 'true' energy/distance. How can I combine these accuracy values to say that if my slope of the experimental data is 3 +- X% then my comparison is successful (i.e., how do I find X?). Any help is much appreciated. Thank you!
Uncertainty in slope
CC BY-SA 4.0
null
2023-03-14T18:39:11.547
2023-03-14T18:39:11.547
null
null
383222
[ "probability", "uncertainty", "method-comparison" ]
609471
1
null
null
2
7
I am looking for a method that incorporates historical data and future incomplete data. I have historical demand data (container usage at a rail yard) that I can use to predict future demand. However, I also have future container booking/reservation data that customers placed in advance. This data may change in the future as some customers may cancel their reservation or some customer may add to the existing reservation. Is there any method that can be used to combine both predictions using time series and the available incomplete booking data?
forecasting with future/advanced booking data
CC BY-SA 4.0
null
2023-03-14T19:02:24.617
2023-03-14T19:02:24.617
null
null
328578
[ "time-series", "forecasting" ]
609472
1
null
null
0
142
I have the following problem: The weights of large eggs are normally distributed with mean 65 grams and standard deviation 4 grams. The weights of standard eggs are normally distributed with mean 50 grams and standard deviation 3 grams. - One large egg and one standard egg are chosen at random. Find the probability that the weight of the standard egg is more than $\frac{4}{5}$ of the weight of the large egg. - Standard eggs are sold in packs of 12 while large eggs are sold in packs of 5. Find the probability that the weight of a pack of standard eggs differs from twice the weight of a pack of large eggs by at most 5 grams. My approach to the problem - We have: $\frac{4}{5}$ weight of a large egg = $\frac{4}{5}\cdot65=52g$ As standard and large eggs are normally distributed, we can calculate the probability that the weight of the standard egg is more than $\frac{4}{5}$ the weight of the large egg by: $P(X>52) = P (Z>\frac{52-50}{\sqrt{4^2+3^2}}) = P(Z>\frac{2}{5}) = P (Z > 0.4) = 1 - 0.65542 = 0.33458$ or 33.5% - We determine the distribution for one pack of standard eggs by: $12X = N(12\cdot50,12\cdot3^2) = N(600,108)$ We determine the distribution for two packs of large eggs by: $2\cdot5Y = N(2\cdot5\cdot65,2\cdot5\cdot4^2)=N(650,160)$ We determine the distribution in the weight of the two pack groups: $10Y - 12X = N(650-600,160+108) = N(50,268)$ For the weight of a pack of standard eggs to differ from twice the weight of a pack of large eggs by at most 5 grams, we need to calculate the probability so that: $-5\le10Y-12X\le5$ For $10Y - 12X \ge -5: P (Z\ge\frac{-5-50}{\sqrt{268}}) = P(Z\ge-3.36) = 0.99961$ For $10Y - 12X \le 5: P (Z\le\frac{5-50}{\sqrt{268}}) = P(Z\le-2.75) = 1 - 0.99702 = 0.00298$ As such, $P(-3.36\le Z\le-2.75) = 0.99961-0.00298 = 0.99663$ or 99.6% Were my solutions to the questions correct? Also, is it acceptable to consider 2 packs of 5 large eggs as 10 large eggs? 
I would really appreciate some comments on my work, and I thank you all for your time and contribution!
Normal Distribution/Probability problem
CC BY-SA 4.0
null
2023-03-14T19:02:43.877
2023-03-18T03:16:29.060
null
null
383080
[ "probability", "self-study", "mathematical-statistics", "normal-distribution" ]
609473
2
null
500839
0
null
Minimizing a norm other than $\ell^2$ is, in some sense, just some other [extremum estimator](https://en.wikipedia.org/wiki/Extremum_estimator). The OLS estimator is an extremum estimator because the estimated parameters are the points giving the minimum of $\vert\vert y - \hat y\vert\vert_2$. If you want to have the objective function of some other extremum estimator to be some other norm $\vert\vert y - \hat y\vert\vert_{\text{other}}$, go for it. $$ \hat\beta_{\text{ols}} = \underset{\hat\beta}{\arg\min}\{ \vert\vert y - X\beta \vert\vert_2 \}\\ \hat\beta_{\text{other}} = \underset{\hat\beta}{\arg\min}\{ \vert\vert y - X\beta \vert\vert_{\text{other}} \} $$ For instance, minimizing the $\ell^1$ norm leads to [quantile-regression](/questions/tagged/quantile-regression) at the median. Depending on the situation, [this can give a better estimate of the mean than minimizing the $\ell^2$ norm gives.](https://stats.stackexchange.com/a/609440/247274) Consequently, more than just $\ell^2$ minimization is useful. Getting away from $\ell^p$ norms, minimizing a weighted norm corresponds to weighted least squares, and a similar idea should apply for generalized least squares, so it is not just $\ell^p$ norms whose minimizations find use in statistics. Regarding Sobolev norms in particular, I do not see a way for that to make sense, since a Sobolev norm involves derivatives of the function in the function space.
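A small numeric illustration of the difference between the two extremum estimators in the simplest setting (estimating a location parameter; the data are made up): minimizing the $\ell^2$ objective recovers the mean, while minimizing the $\ell^1$ objective recovers the median:

```python
from statistics import mean, median

data = [1.0, 2.0, 3.0, 4.0, 100.0]  # one gross outlier

def l2_objective(c):
    return sum((x - c) ** 2 for x in data)

def l1_objective(c):
    return sum(abs(x - c) for x in data)

# Brute-force arg-min over a fine grid:
grid = [i / 100 for i in range(10001)]
c_l2 = min(grid, key=l2_objective)
c_l1 = min(grid, key=l1_objective)

print(c_l2, mean(data))    # the L2 minimizer is the mean (22.0)
print(c_l1, median(data))  # the L1 minimizer is the median (3.0)
```

The outlier drags the $\ell^2$ minimizer far from the bulk of the data, while the $\ell^1$ minimizer is unaffected, which is exactly the robustness argument for quantile regression at the median.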
null
CC BY-SA 4.0
null
2023-03-14T19:05:44.063
2023-03-14T19:05:44.063
null
null
247274
null
609476
1
null
null
1
42
I'm currently working on a regression model that looks at how the electricity price is impacted by imports of intermittent (wind, solar) electricity from interconnected countries. In the research process I have found that demand is an important factor that may "hide" the effect of intermittent generation, especially when demand is high. In addition, due to the price structure of the market, intermittent generation will only have an effect if the generation is large enough. It is always the most expensive generation source that sets the price. If intermittent generation covers enough of demand it will be a price setter and expensive sources such as gas are not needed. Therefore I want to include the relationships of all this in my regression. My main approach to investigate my research question is to include intermittent generation in country X and import from country X as an interaction variable. However, now I want to take the abovementioned relationships into account. First I tried to create shares where I took the import divided by the demand to capture the effect and capacity constraints from the interconnector. Then I created a variable which is the share of intermittent production of total production in country X, and then used these two shares as an interaction variable. However, the coefficients get unrealistically large and I'm not sure how to interpret this. To sum up the approach: > imp_share = import/demand intermit_share = intermittent/total_prod y ~ imp_share + intermit_share + imp_share*intermit_share Does anyone have any suggestions for how I can model this? Or why the coefficients become extremely high?
Interaction variables with percentage/share
CC BY-SA 4.0
null
2023-03-14T19:47:10.587
2023-03-16T15:00:25.643
null
null
383228
[ "r", "regression", "interaction", "quantiles" ]
609477
1
null
null
1
13
I have a corpus of book publications split into different clusters. I have information about the nationality of the authors (variable A) and the nationality of the publishing company (variable B). In the case of variable B, publishing companies are either US-based or Euro-based (2 categories). In the case of variable A, authors are either American, European, or others (3 categories). I want to know whether a cluster is more euro-centered or more us-centered when compared to the overall corpus (basically identify clusters in which EU/US identity is important) and plot it on two axes according to variables A and B. A positive value on the Y-axis would mean the cluster has an over-representation of EU authors, and a negative value the opposite. Similarly, the X-axis would have a positive value when we find an over-representation of EU publishing companies and a negative value for US companies. (In the case of variable A, it means that simply comparing proportions can lead to both US and EU authors being over-represented). For variable A, I used the log of relative ratio using the following formula: > log((share_europeans_authors_cluster/share_US_authors_cluster)/(share_europeans_authors_corpus/share_US_authors_corpus)) I used a similar formula for variable B but used `1-share` given that there are only two variables. The results are good but (1) I am not sure this is the best option (or the correct one given that some observations are both in the cluster and in the corpus), and (2) I get one cluster with a value inferior to -1 for variable A.
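In code, the score I am using looks like this (the shares below are hypothetical, just to show the calculation):

```python
import math

def log_relative_ratio(eu_cluster, us_cluster, eu_corpus, us_corpus):
    """Log of the EU/US ratio in the cluster relative to the EU/US ratio
    in the whole corpus; positive values mean EU is over-represented."""
    return math.log((eu_cluster / us_cluster) / (eu_corpus / us_corpus))

# Hypothetical shares: cluster 60% EU / 30% US, corpus 40% EU / 40% US.
print(log_relative_ratio(0.60, 0.30, 0.40, 0.40))  # log(2) ~ 0.693
```

The score is 0 when the cluster matches the corpus ratio, and it is unbounded in both directions, which is why values below -1 are possible.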
Score for over/under representations of a variable in sub-group
CC BY-SA 4.0
null
2023-03-14T19:48:48.267
2023-03-14T19:48:48.267
null
null
354734
[ "probability", "standard-error", "group-differences", "odds", "relative-risk" ]
609478
1
null
null
2
29
I know that a family of Gaussian copulas generates a standard bivariate normal distribution if and only if the marginal ones are standard normal. This characterizes the Gaussian copulas, where I have a problem is, if I change the standard normals for normals with mean $\mu$ and variance $\sigma^2$, do I get a family of parametric copulas?
Are there families of known parametric copulas for non-standard marginal normal distributions?
CC BY-SA 4.0
null
2023-03-14T19:53:23.017
2023-03-14T20:35:27.457
null
null
383227
[ "probability", "references", "multivariate-normal-distribution", "copula", "parametric" ]
609479
2
null
558970
0
null
When dealing with (binary) classification problems, e.g. problems where we are trying to map an input $x\in\mathcal X$ to a label $y\in\{+1,-1\}$, [it is easy to prove](https://en.wikipedia.org/wiki/Bayes_classifier#Proof_of_Optimality) that the optimal classification rule (i.e. the classifier which minimizes the 0-1 loss) is given by $$\eta(x) = \text{sign}\ \mathbb E[Y\mid X = x] $$ where the pair $(X,Y)$ follows the data distribution. It thus makes sense to try to minimize the MSE $\mathbb E[(Y-f(X))^2] $, since as you said, the approximate minimizer $\hat f_{MSE}$ will be an approximation of $\mathbb E[Y\mid X] $ and therefore $\hat\eta_{MSE}:=\text{sign}\ \hat f_{MSE} $ should be a good approximation of the optimal classifier $\eta$ as well. What we know, however, is that there exists a large family of surrogate losses $\ell$, such that any minimizer $f_\ell$ of $\mathbb E[\ell(f(X),Y)]$ satisfies $$\text{sign}f_{\ell} = \eta $$ Such loss functions are called Bayes consistent or classification calibrated, and there is an easy characterization for them and many of the practically used losses have this property (you can have a look [here](https://en.wikipedia.org/wiki/Loss_functions_for_classification#Bayes_consistency) for more details). Concretely, this means that minimizing the loss associated with such an $\ell$ will solve our problem just as well as minimizing the MSE. Hence, if the optimization problem associated with $\ell$ is easier to solve or has other desirable properties (regularization, robustness...) it makes sense to choose that option over minimizing the MSE. --- The above discussion is mainly about (binary) classification, but even for that problem, some popular loss functions used in practice, such as [cross-entropy](https://en.wikipedia.org/wiki/Cross_entropy), do not come with the theoretical guarantee mentioned above. 
Nevertheless, it happens to be very useful and exhibits many desirable properties (nicer loss landscape, stability during training...) and it arises naturally by considering the problem through another angle (look at the wikipedia page for more details). Similarly, for regression, we may sometimes prefer to minimize other metrics, such as the [MAE](https://en.wikipedia.org/wiki/Mean_absolute_error) or the [MAPE](https://en.wikipedia.org/wiki/Mean_absolute_percentage_error) rather than MSE, whose minimizers we know won't converge to the "optimal predictor", simply because we prefer our estimators to be more robust, depending on what we know about our data, or because we know that it will be more efficient computationally. You can have a look at these threads: [1](https://stats.stackexchange.com/q/355538/305654), [2](https://stats.stackexchange.com/q/299712/305654), [3](https://stats.stackexchange.com/q/147001/305654), [4](https://stats.stackexchange.com/q/118/305654) to see some of the advantages and drawbacks of using alternative loss objectives for regression.
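As a tiny numeric sanity check of the optimality of $\text{sign}\,\mathbb E[Y\mid X]$ (the discrete distribution below is made up), one can enumerate every classifier on a three-point feature space and verify that none beats the sign rule in 0-1 risk:

```python
from itertools import product

# Toy discrete problem (made-up numbers): X takes three values.
p_x = {0: 0.5, 1: 0.3, 2: 0.2}     # marginal distribution of X
p_pos = {0: 0.9, 1: 0.4, 2: 0.6}   # P(Y = +1 | X = x)

def risk(clf):
    """0-1 risk of a classifier clf mapping x to +1 or -1."""
    return sum(p_x[x] * ((1 - p_pos[x]) if clf[x] == 1 else p_pos[x])
               for x in p_x)

# The sign rule: predict +1 iff E[Y | X = x] = 2*p_pos[x] - 1 > 0.
bayes = {x: (1 if 2 * p_pos[x] - 1 > 0 else -1) for x in p_x}

# Exhaustively check all 2^3 classifiers:
best = min((dict(zip(p_x, labels)) for labels in product([1, -1], repeat=3)),
           key=risk)
print(risk(bayes), risk(best))  # equal: no classifier beats the sign rule
```

Exhaustive enumeration is only feasible because the feature space is tiny, but it makes the optimality claim concrete.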
null
CC BY-SA 4.0
null
2023-03-14T19:54:00.007
2023-03-14T22:15:34.283
2023-03-14T22:15:34.283
305654
305654
null
609481
2
null
609478
3
null
The point of copulas is that they do not care about the margins. Therefore, if you want a Gaussian copula as the dependence structure between the margins and margins that are normal but not standard normal, go for it. Just specify the parameters of the Gaussian copula, and then specify the means and variances of the margins. No additional theory is needed. To see why this would be a parametric family, the parameters of the Gaussian copula are the off-diagonal elements of the (population-level) correlation matrix, and the parameters of the margins are the usual (population-level) means and variances.
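A quick sketch of sampling from such a family (the correlation, means, and variances below are arbitrary): draw from the Gaussian copula, then push through the non-standard normal margins. Of course, with normal margins this coincides with an ordinary bivariate normal:

```python
import math
import random
from statistics import mean

random.seed(42)
# Arbitrary parameters: copula correlation and non-standard normal margins.
rho = 0.7
mu1, sd1 = 10.0, 2.0
mu2, sd2 = -5.0, 0.5

def sample():
    # Gaussian copula part: a standard bivariate normal with correlation rho.
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * random.gauss(0.0, 1.0)
    # Non-standard normal margins:
    return mu1 + sd1 * z1, mu2 + sd2 * z2

xs, ys = zip(*(sample() for _ in range(20000)))
print(mean(xs), mean(ys))  # close to 10 and -5
```

The parameters of this family are exactly the copula correlation plus the marginal means and variances, as described above.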
null
CC BY-SA 4.0
null
2023-03-14T20:01:54.963
2023-03-14T20:35:27.457
2023-03-14T20:35:27.457
247274
247274
null
609482
1
null
null
0
65
During my work I encountered a plot, in which we have two curves indicating: - Mean value of observations of dependent variable $Y$ (in my case it was annual frequency of claims), calculated for diffent values of some other variable, let's say $Z$; - Mean predicted value $\hat{Y}$, which is an evaluation in data point of estimate of conditional expectation $\mathbb{E}(Y|\mathbf{X})$ for some vector of variable $\mathbf{X}$, calculated for different values of $Z$. Plot described above serves for checking goodness-of-fit: if two curves don't coincide on some part of a range of variable $Z$, then we should consider adding this variable to our vector $\mathbf{X}$ in some form, since our statistical model does not capture variability of $Y$ across range of $Z$. To gain more insight into this tool, I translated its content into probabilistic language: - (1) would be a nonparametric estimate of conditional mean $\mathbb{E}(Y|Z=\cdot)$; - (2) would be an estimate of $\mathbb{E}(\mathbb{E}(Y|\mathbf{X})|Z=\cdot)$. After this step I realized, that desirable "closeness" of two aforementioned quantities looks the same as Tower Property for conditional expectations, which, in a language of sub-sigma-algebras $\mathcal{G}\subseteq\mathcal{H}$, states that: $$ \mathbb{E}(\mathbb{E}(Y|\mathcal{H})|\mathcal{G})=\mathbb{E}(Y|\mathcal{G})\quad a.s.\quad (*) $$ This observation provides a reformulation of a meaning of plot: "closeness" of $\mathbb{E}(Y|Z=\cdot)$ and $\mathbb{E}(\mathbb{E}(Y|\mathbf{X})|Z=\cdot)$ says that $Z$ provides no additional information- its predictive power is already contained in $\mathbf{X}$. But here I started to wonder, if I am fully right: truth of condition $\mathcal{G}\subseteq\mathcal{H}$ is sufficient, but maybe not necessary for $(*)$ to hold. Here arise my questions: - Is it true that $(*)\Rightarrow\mathcal{G}\subseteq\mathcal{H}$? If no, what is an example of situation, where $(*)$ hold, but $\mathcal{G}$ is not a subset of $\mathcal{H}$? 
- Do you have experience with plots like the one described above? Do you agree with my interpretation? Any additions and corrections are welcome. Edit (1) My progress after the comments of @whuber: assuming that $(*)$ is true, by linearity of conditional expectation we have: $$ \mathbb{E}(\mathbb{E}(Y|\mathcal{H})-Y|\mathcal{G})=0\quad a.s. $$ By the defining equations of conditional expectation, for all $G\in\mathcal{G}$: $$ \mathbb{E}([\mathbb{E}(Y|\mathcal{H})-Y]\mathbf{1}_G)=\mathbb{E}(0\mathbf{1}_G)=0\\\mathbb{E}(\mathbb{E}(Y|\mathcal{H})\mathbf{1}_G)=\mathbb{E}(Y\mathbf{1}_G)=\mathbb{E}(\mathbb{E}(Y|\mathcal{G})\mathbf{1}_G). $$ Here I am struggling again since I cannot establish from the equations above any relation between $\mathcal{G}$ and $\mathcal{H}$.
Plot indicating Tower Property of conditional expectation
CC BY-SA 4.0
null
2023-03-14T20:09:03.477
2023-03-15T20:00:17.690
2023-03-15T20:00:17.690
269142
269142
[ "predictive-models", "goodness-of-fit", "conditional-expectation", "sigma-algebra" ]
609483
2
null
595791
2
null
Moye published this approach in 1998 in Annals of Epidemiology and specifically considers the case of reporting the results of a clinical trial so that a non-significant primary endpoint analysis would not preclude subsequent evaluation of secondary endpoints, but if the secondary endpoints are assessed, all such analyses should be presented. Moye argues that this is intuitive because most of the readership has an intuitive notion of the same alpha threshold for significance being assigned equally to equal tests, rather than in a step-down procedure where "lesser significant" results are tested at subsequently less stringent thresholds. A consequence of this is that the "overall alpha" of a trial is actually subjective and should be left to the reader to infer from the analyses presented. Specifically the below is written: [https://www.sciencedirect.com/science/article/pii/S1047279798000039?via%3Dihub](https://www.sciencedirect.com/science/article/pii/S1047279798000039?via%3Dihub) > the following is a guide to investigators on the apportionment of alpha during the design phase of a research program. Commonly, the scientific community considers an alpha level for each hypothesis test to be examined in a research program, leaving the interpretation of the overall alpha to the reader who is attempting to interpret the significance of the study. [O'Neill](https://onlinelibrary.wiley.com/doi/10.1002/(SICI)1097-0258(20000330)19:6%3C785::AID-SIM520%3E3.0.CO;2-K) and [D'Agostino](https://onlinelibrary.wiley.com/doi/pdf/10.1002/%28SICI%291097-0258%2820000330%2919%3A6%3C763%3A%3AAID-SIM517%3E3.0.CO%3B2-8) provided commentary with mixed feelings about the notion of lack of control of the overall study $\alpha$ in this scenario. Specifically D'Agostino nicely summarizes:
These include the artificial classification of variables as primary and secondary, the problem of interpreting study results in light of a primary variable alpha and a new overall experiment alpha, and the overemphasis in the PAAS scheme of declaring each variable positive or negative in terms of alpha rather than examining the relation among the variables and the power of the study. He also makes useful suggestions to improve upon PAAS
null
CC BY-SA 4.0
null
2023-03-14T20:10:11.873
2023-03-14T20:10:11.873
null
null
8013
null
609484
1
609485
null
1
48
Good afternoon all, I've got a model for a time series and I am trying to decide how the residuals are correlated. The first model, called $m1$, models $AR(1)$ residuals and the second model, called $m2$, models $AR(2)$ residuals. I'm looking at the residual plots, ACF/PACF plots and also the AIC/BIC values of the two models to decide if $m1$ or $m2$ would be a better model for the residuals. However, upon closer inspection I cannot determine which one would be best as they are both very similar in the analysis, and I lean more towards the AR(1) model since it is simpler. Are my assessments correct on this and am I understanding the ACF/PACF plots correctly? ACF/PACF of $m1$ and $m2$ Analysis: There appears to be slightly less variability in the ACF/PACF for the $m1$ model. I'm also having some issues fully understanding these ACF/PACF plots. [](https://i.stack.imgur.com/KU5tO.png) Plots of the residuals of $m1$ and $m2$ Analysis: No apparent difference in the residuals between $m1$ and $m2$ as they are almost identical. [](https://i.stack.imgur.com/02Xyy.png) AIC/BIC values m1: -286.812/-274.2221 m2: -287.7302/-271.9928 Analysis: AIC slightly favors $m2$ and BIC slightly favors $m1$. Thank you for the time.
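As a side note, AIC values for AR($p$) fits can be reproduced by hand via conditional least squares; the simulated series below is only a hypothetical stand-in for my residuals, not the actual data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, phi = 500, 0.6
y = np.zeros(n)
for t in range(1, n):          # simulate an AR(1) series
    y[t] = phi * y[t - 1] + rng.normal()

def fit_ar(y, p):
    """Conditional least-squares AR(p) fit; returns (coefficients, AIC)."""
    Y = y[p:]
    X = np.column_stack([y[p - k : len(y) - k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    rss = float(np.sum((Y - X @ beta) ** 2))
    m = len(Y)
    aic = m * np.log(rss / m) + 2 * (p + 1)   # +1 for the noise variance
    return beta, aic

b1, aic1 = fit_ar(y, 1)
b2, aic2 = fit_ar(y, 2)
print(b1, aic1)   # AR(1) coefficient near the true 0.6
print(b2, aic2)   # compare the two AICs; the lower one is preferred
```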
Understanding ACF and PACF plots for model selection for AR(1) vs AR(2)
CC BY-SA 4.0
null
2023-03-14T20:35:57.420
2023-03-14T20:46:53.817
2023-03-14T20:44:49.617
259834
259834
[ "time-series", "residuals", "acf-pacf" ]
609485
2
null
609484
1
null
I would say that you have reached the point where the models are effectively the same. I would also assume that any forecasts are essentially indistinguishable. In such a case, I would always go with the simpler model, compare the ["one standard error" rule](https://stats.stackexchange.com/q/80268/1352). Also, I would usually prefer using information criteria over reading entrails, sorry, I meant ACF/PACF plots: [Selecting ARIMA orders by ACF/PACF vs. by information criteria](https://stats.stackexchange.com/q/595150/1352). In the present case, the two criteria give conflicting advice. Another reason not to stress too much about this.
null
CC BY-SA 4.0
null
2023-03-14T20:46:53.817
2023-03-14T20:46:53.817
null
null
1352
null
609486
1
null
null
3
143
Looking for the circumstances of when we should use a ROC curve vs. a Precision Recall curve. Example of answers I am looking for: Use a ROC Curve when: - you have a balanced or imbalanced dataset (Source). - when the cost of false positives and false negatives is roughly equal (needs verification) - ... Use a Precision Recall Curve when: - you have a imbalanced dataset with way more positives than negatives (Source). - when the cost of false positives is higher than false negatives (needs verification) - ...
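To make the comparison concrete: both curves are built from the same confusion-table counts swept over score thresholds; only the pair of coordinates plotted differs. A minimal sketch with toy scores and labels:

```python
def curve_points(scores, labels):
    """(threshold, FPR, TPR/recall, precision) at each distinct score,
    treating `score >= threshold` as a positive prediction."""
    pts = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        tpr = tp / (tp + fn)                       # recall
        fpr = fp / (fp + tn) if (fp + tn) else 0.0
        prec = tp / (tp + fp) if (tp + fp) else 1.0
        pts.append((t, fpr, tpr, prec))
    return pts

# Toy data:
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 0]
for t, fpr, tpr, prec in curve_points(scores, labels):
    print(t, fpr, tpr, prec)
```

The ROC curve plots (FPR, TPR) pairs and the PR curve plots (recall, precision) pairs; notice that only the PR coordinates involve the count of true negatives indirectly (through precision), which is why PR is more sensitive to class imbalance.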
When to use a ROC Curve vs. a Precision Recall Curve?
CC BY-SA 4.0
null
2023-03-14T20:57:29.687
2023-03-14T22:48:22.540
null
null
361781
[ "roc", "precision-recall" ]
609487
1
null
null
1
44
I have many variables (~2000 and 320 samples) and I have found the correlation coefficient for each unique pair of these variables through Spearman (the underlying data does not have a normal distribution). As some outliers can affect the correlation coefficient value, I am considering bootstrapping and/or permutation testing to be able to better understand whether there is any significant difference between the original coefficient value and the one generated through a resampling method. I am unsure of a few things: 1) I realise (without resampling methods) that my null hypothesis would be that when considering a pair of variables X and Y, H0: ρ(X,Y) = 0, whilst the alternative is H1: ρ(X,Y) ≠ 0. But following the resampling, my null hypothesis is that there is no difference between the original coefficient value and the one generated from resampling? As ultimately, this is what I need to test. 2) As both approaches are quite time intensive due to the number of variables involved, would one approach be more statistically valid than the other? 3) Lastly, if I were to go with a bootstrapping approach and calculated 95% confidence intervals (CI), based on my null hypothesis above following resampling, if this interval contained 0, I presume this is indicative that there is no significant difference between my original and bootstrapped statistic? Please could anybody help clarify?
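A sketch of what I have in mind for the bootstrap (toy data; ties introduced by resampling are broken arbitrarily here, which is a simplification over proper mid-rank handling):

```python
import math
import random

def ranks(v):
    order = sorted(range(len(v)), key=v.__getitem__)
    r = [0.0] * len(v)
    for rank, i in enumerate(order):
        r[i] = float(rank)   # ties broken arbitrarily (fine for continuous data)
    return r

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    va = sum((p - ma) ** 2 for p in a)
    vb = sum((q - mb) ** 2 for q in b)
    return cov / math.sqrt(va * vb)

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

random.seed(7)
x = [random.gauss(0, 1) for _ in range(100)]   # toy data
y = [xi + random.gauss(0, 1) for xi in x]

boots = []
for _ in range(1000):
    idx = [random.randrange(len(x)) for _ in range(len(x))]
    boots.append(spearman([x[i] for i in idx], [y[i] for i in idx]))
boots.sort()
lo, hi = boots[24], boots[974]                 # 95% percentile interval
print(spearman(x, y), (lo, hi))
```

A percentile interval that excludes 0 would let one reject H0: ρ = 0 at roughly the 5% level for that pair (before any multiple-testing correction across the ~2 million pairs).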
Bootstrapping or permuting as resampling methods for correlations?
CC BY-SA 4.0
null
2023-03-14T21:07:44.273
2023-03-14T23:11:56.927
null
null
361346
[ "correlation", "bootstrap", "permutation" ]
609488
2
null
609438
1
null
$L(\phi) = \prod p(x_j, y_j; \phi)$ So $\log L = \ell(\phi) = \sum \log p(x_j, y_j; \phi)$ Since $p(x\mid y)$ does not depend on $\phi$, $p(x, y) = p(x \mid y)p(y) \propto p(y)$ so $\frac{d}{d\phi} \log p(y) = \frac{y}{\phi} -\frac{1-y}{1-\phi}$ gives $$ \phi(1-\phi)\frac{d}{d\phi}\ell(\phi) =\sum y_j(1-\phi) - (1-y_j)\phi = \sum (y_j - \phi) = \sum y_j - n\phi = 0 $$ and $\phi = \frac{1}{n}\sum y_j$.
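A quick numerical sanity check of the closed form (a sketch with made-up labels): the grid maximiser of the Bernoulli log-likelihood should land exactly on the sample mean.

```python
import math

def loglik(phi, ys):
    # Bernoulli log-likelihood in phi (the x-part drops out, as argued above)
    return sum(y * math.log(phi) + (1 - y) * math.log(1 - phi) for y in ys)

ys = [1, 0, 0, 1, 1, 0, 1, 1]                 # hypothetical labels
grid = [i / 1000 for i in range(1, 1000)]     # phi in (0, 1)
phi_hat = max(grid, key=lambda p: loglik(p, ys))
```

Because the log-likelihood is strictly concave with its maximum at mean(ys) = 0.625, which lies on the grid, the grid maximiser recovers it exactly.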
null
CC BY-SA 4.0
null
2023-03-14T21:11:15.810
2023-03-14T21:15:56.060
2023-03-14T21:15:56.060
54458
54458
null
609489
1
null
null
1
52
I am using the BaSTA package in R to estimate sex differences in survival parameters for a capture-recapture dataset. When I use a simple Gompertz model the parameters are very interpretable (b0 represents the baseline hazard and b1 represents the age-dependent mortality rate, or senescence). However, when I look at a simple Weibull model, I'm not sure how to biologically interpret the differences in survival parameters. For my dataset, females have a higher shape parameter (b0) than males, but a lower rate parameter (b1) (see model output and plots). These differences are substantial enough to interpret according to the KLDC (0.88 and 0.99, respectively), but how would you interpret these differences biologically? The best I can tell is that the shape parameter suggests that females show a later increase in age-specific mortality than males do. The higher rate parameter for males seems like it should mean males are senescing faster than females, but that isn't consistent with the visualization of the results. If males were senescing earlier and faster than females, I would expect them to have higher mortality rates than females at the oldest age classes, but instead it appears that the mortality hazard estimate is converging. Am I misinterpreting the parameters? Thanks! [](https://i.stack.imgur.com/4wCFt.png) [](https://i.stack.imgur.com/6J9Hi.png)
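One thing worth noting is that two Weibull hazards with different shape and rate parameters can cross or converge at old ages, so a higher "rate" for one sex does not by itself imply higher old-age mortality. A hedged sketch below (the parameter values are hypothetical, and BaSTA's exact Weibull parameterisation should be checked against its documentation — the standard shape-scale form is used here):

```python
def weibull_hazard(t, shape, scale):
    # standard shape-scale form: h(t) = (shape / scale) * (t / scale) ** (shape - 1)
    return (shape / scale) * (t / scale) ** (shape - 1)

# Hypothetical parameter sets, NOT the fitted BaSTA values:
late_steep = dict(shape=2.0, scale=10.0)     # later but steeper rise in mortality
early_shallow = dict(shape=1.2, scale=8.0)   # earlier but shallower rise

h_late = [weibull_hazard(t, **late_steep) for t in (1, 5, 10)]
h_early = [weibull_hazard(t, **early_shallow) for t in (1, 5, 10)]
```

Here the "early" hazard is higher at young ages while the "late" one overtakes it by age 10 — the curves cross, which is exactly the kind of pattern that makes single-parameter comparisons between sexes hard to read biologically.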
Interpreting shape and rate parameters from a weibull model of mortality hazard in BaSTA
CC BY-SA 4.0
null
2023-03-14T21:21:50.377
2023-03-16T17:17:40.467
null
null
371163
[ "r", "survival", "hazard", "weibull-distribution", "mortality" ]
609490
2
null
586875
0
null
By applying dherera's formula to a trace-normalized zero-centered Normal with diagonal $d\times d$ covariance matrix $H$, we get the following estimate for the corresponding covariance matrix $H_p$ of the projected Normal. $$H_p = c H -2H^2\\ c=(1+2\operatorname{Tr}(H^2)) $$ When the $H$ eigenvalues follow power-law decay with $p=1.1$ we can estimate the eigenvalues of $H_p$ using Monte-Carlo and see the following fit: [](https://i.stack.imgur.com/9L6Wd.png) For $1<p<2$, this estimate is slightly biased. For instance, when $p=1.1$, using an "adjusted" formula that changes $c$ to $c=(1+1.7\operatorname{Tr}(H^2))$ gives a slightly nicer fit: [](https://i.stack.imgur.com/FA8a8.png) The error measure is the mean relative squared residual; the tail error considers only the smallest $d/2$ eigenvalues. [Notebook](https://www.wolframcloud.com/obj/yaroslavvb/nn-linear/forum-herrera-formula.nb)
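The estimate can also be sanity-checked outside Mathematica. A pure-Python Monte-Carlo sketch (small $d$; with diagonal $H$, $H_p$ is diagonal with entries $(1+2\operatorname{Tr}(H^2))\lambda_i - 2\lambda_i^2$):

```python
import random

random.seed(0)
d, n = 6, 200_000
raw = [(i + 1) ** -1.1 for i in range(d)]    # power-law decay, p = 1.1
lam = [v / sum(raw) for v in raw]            # trace-normalised eigenvalues of H
tr_h2 = sum(v * v for v in lam)
pred = [(1 + 2 * tr_h2) * v - 2 * v * v for v in lam]   # diagonal of c*H - 2*H^2

# Monte Carlo: sample x ~ N(0, H), project onto the unit sphere, average y_i^2
acc = [0.0] * d
for _ in range(n):
    x = [random.gauss(0.0, v ** 0.5) for v in lam]
    r2 = sum(u * u for u in x)
    for i in range(d):
        acc[i] += x[i] * x[i] / r2
mc = [a / n for a in acc]
```

As the plots above suggest, the match is close but not exact — the formula is a second-order expansion, so some bias remains, especially in the tail.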
null
CC BY-SA 4.0
null
2023-03-14T21:38:38.337
2023-03-14T21:54:52.627
2023-03-14T21:54:52.627
511
511
null
609491
2
null
609436
2
null
The three bits of this equation $$ \Vert{y-X\beta}\Vert^2 + \lambda \sum_{j=2}^{k-1}\{ f(x^*_{j-1})-2f(x^*_j) + f(x^*_{j+1}) \}^2 $$ are: - $\Vert{y-X\beta}\Vert^2$ is the sum of squared errors, which is measuring the lack of fit of the piecewise linear spline being fitted - $\lambda$ is the smoothing parameter, which we use to control how much penalty we pay for the wiggliness of the estimated spline - $\sum_{j=2}^{k-1}\{ f(x^*_{j-1})-2f(x^*_j) + f(x^*_{j+1}) \}^2$ is the sum of squared second differences The last term is the same as a second order central finite difference of the values of the spline $f()$ at the knot locations. The second order central finite difference is $$ f^{\prime \prime}(x) \approx \frac{f(x + h) - 2f(x) + f(x - h)}{h^2} $$ which as shown is an approximation of the second order derivative of the function $f()$. As the knots are evenly spaced, $h$ is constant and the $h^2$ factor can be absorbed into $\lambda$, yielding essentially the same thing as in the equation you quoted. What we are doing here is getting an estimate of the "curvature" (second derivative) of the spline at each internal knot by differencing the values of the spline at the focal knot plus the previous and next knots. If the spline were flat, the values of the function at the focal knot and the previous and next knots would be the same value so the bit in {} for a single focal knot would be equal to 0. If the spline were a straight line, the increase from $f(x^*_{j-1})$ to $f(x^*_j)$ would equal the increase from $f(x^*_j)$ to $f(x^*_{j+1})$, so $f(x^*_{j-1}) + f(x^*_{j+1}) = 2f(x^*_j)$ and, after subtracting off $2f(x^*_j)$, the sum also equals 0. A constant function or any other straight line has 0 curvature and this is what we see when we think about those functions in terms of the second order central finite difference. 
We square this second order difference because the spline could be curved down or up (concave or convex) and as such a spline that oscillated about some value would likely have zero second order finite difference when summed over the spline because of all the peaks and valleys cancelling each other out. But this would incorrectly measure the wiggliness of such a function; intuition tells us that an oscillating function (say a sine wave) has a lot of wiggliness. Squaring the finite difference estimate of the second derivative solves this problem. It is the same idea as summing squared errors rather than summing errors as a measure of lack of fit. As for $\lambda$, this controls how much penalty we pay for wiggliness in the spline. If we let $\lambda \rightarrow \infty$, then any wiggliness in the spline will dominate the equation, so the smallest value of the entire equation will be when the penalty (the summation term) is 0, i.e. when we have a straight line. Then we recover a linear model (in this case). If we let $\lambda \rightarrow 0$ then we pay no penalty for the wiggliness and we recover an unpenalised piecewise linear regression fit. This latter fit model would likely be overfitted to the data, while the former would be underfitted if the true relationship was not linear. Hence there will be some optimal value of $\lambda$ that allows the spline/model to fit the data reasonably well without overfitting, and we can recover a non-linear estimate of the relationship between $y$ and $x$ if one exists (given sufficient data for the noise level). You can think of $\lambda$ as controlling the trade off between the smoothness of the estimated function (the spline) and the closeness of the fit to the data. If you flip the page to p. 168,
Simon goes on to explain that the summation can be written more concisely if we realize that for the tent basis functions he is using in the example, the value of the function at the $j$th knot ($f(x^*_j)$) is the same as the coefficients of $f$, i.e. $\beta_j = f(x^*_j)$. Substituting these coefficients into the term in the sum we would get ${\beta_{j-1} - 2\beta_j + \beta_{j+1}}$. If we write out all the terms in the sum (for $j=2$ through $j=k-1$), and stack them in a matrix, we get $$ \begin{bmatrix} \beta_1 - 2\beta_2 + \beta_3\\ \beta_2 - 2\beta_3 + \beta_4\\ \beta_3 - 2\beta_4 + \beta_5\\ \vdots \\ \end{bmatrix} $$ and we can decompose this matrix into a vector of coefficients $\boldsymbol{\beta}$ and a difference matrix $\mathbf{D}$ $$ \begin{bmatrix} \beta_1 - 2\beta_2 + \beta_3 \\ \beta_2 - 2\beta_3 + \beta_4 \\ \beta_3 - 2\beta_4 + \beta_5 \\ \vdots \end{bmatrix} = \begin{bmatrix} 1 & -2 & 1 & 0 & \cdots & \cdots & \vdots \\ 0 & 1 & -2 & 1 & 0 & \cdots & \vdots \\ 0 & 0 & 1 & -2 & 1 & 0 & \vdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix} \begin{bmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \\ \vdots \end{bmatrix} = \mathbf{D}\boldsymbol{\beta} $$ To make progress with other kinds of splines (and the more compact notation used by Simon in the rest of his book), we want to write the penalty in quadratic form, $\boldsymbol{\beta}^{\mathsf{T}}\mathbf{S}\boldsymbol{\beta}$ for a penalty matrix $\mathbf{S}$. We can turn $\mathbf{D}$ into $\mathbf{S}$ via $$ \mathbf{S} = \mathbf{D}^{\mathsf{T}}\mathbf{D} $$ and hence we can equate each of the main topics we have discussed into $$ \sum_{j=2}^{k-1}\{ f(x^*_{j-1})-2f(x^*_j) + f(x^*_{j+1}) \}^2 = \boldsymbol{\beta}^{\mathsf{T}}\mathbf{D}^{\mathsf{T}}\mathbf{D}\boldsymbol{\beta} = \boldsymbol{\beta}^{\mathsf{T}}\mathbf{S} \boldsymbol{\beta} $$
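The equivalence between the summation form and the quadratic form is easy to verify numerically. A short sketch (hypothetical coefficient values) that builds $\mathbf{D}$ and checks $\boldsymbol{\beta}^{\mathsf{T}}\mathbf{D}^{\mathsf{T}}\mathbf{D}\boldsymbol{\beta}$ against the direct sum of squared second differences:

```python
k = 6
beta = [0.0, 1.0, 3.0, 2.0, 5.0, 4.0]    # hypothetical spline coefficients

# (k-2) x k second-difference matrix D: each row is [.. 1 -2 1 ..]
D = [[0.0] * k for _ in range(k - 2)]
for j in range(k - 2):
    D[j][j], D[j][j + 1], D[j][j + 2] = 1.0, -2.0, 1.0

# quadratic form beta' D'D beta, computed as ||D beta||^2
Dbeta = [sum(D[j][i] * beta[i] for i in range(k)) for j in range(k - 2)]
quadratic_form = sum(v * v for v in Dbeta)

# direct sum of squared second differences over the internal knots
direct_sum = sum((beta[j - 1] - 2 * beta[j] + beta[j + 1]) ** 2
                 for j in range(1, k - 1))
```

A straight line, e.g. `beta = [1, 2, 3, 4, 5, 6]`, gives a penalty of exactly zero, matching the argument about zero-curvature functions above.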
null
CC BY-SA 4.0
null
2023-03-14T21:41:29.750
2023-03-14T21:47:10.877
2023-03-14T21:47:10.877
1390
1390
null
609492
1
null
null
1
40
I am trying to perform t-tests and ANOVAs on numerical parameters (e.g. bone density) comparing groups. t-tests will be applied when comparing two groups, and ANOVAs will be used when I analyze three groups. I want to adjust the parameter (bone density) taking another numerical parameter into consideration (e.g. blood pressure). I have been able to compute the adjusted means for the groups and p-values after adjustment. However, I am a little confused about whether I would be able to get the individual adjusted values. Is this linear regression process adjusting the mean or each individual value? I would greatly appreciate any input!
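For what it's worth, the covariate adjustment (ANCOVA-style) does yield an adjusted value for every subject, not just for the group means: each subject's outcome is shifted along the fitted covariate slope to the grand mean of the covariate, and the adjusted group means are simply the averages of these. A pure-Python sketch with made-up numbers (assuming a single common slope across groups):

```python
def adjust(outcome, covariate):
    """Return individually adjusted outcome values: each subject's value
    shifted along the fitted slope to the grand mean of the covariate."""
    n = len(outcome)
    my = sum(outcome) / n
    mx = sum(covariate) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(covariate, outcome))
             / sum((x - mx) ** 2 for x in covariate))
    return [y - slope * (x - mx) for x, y in zip(covariate, outcome)]

# Hypothetical data: bone density and blood pressure for 6 subjects
density = [1.10, 1.05, 0.98, 1.20, 0.90, 1.00]
bp = [120.0, 125.0, 140.0, 110.0, 150.0, 130.0]
adjusted = adjust(density, bp)
```

The adjusted values have the same grand mean as the raw ones, but are (linearly) uncorrelated with blood pressure, which is exactly what "adjusting for" the covariate means at the individual level.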
Does a linear-regression-adjusted t-test adjust individual values or just the mean?
CC BY-SA 4.0
null
2023-03-14T21:41:31.693
2023-03-15T07:02:42.660
null
null
383236
[ "regression", "generalized-linear-model", "t-test" ]
609493
1
null
null
0
32
I'm super new to R and wanted to make a pie chart of the results of rolling a 6-faced die. From what I understand, this is supposed to roll the die 500 times and then plot the results in a pie chart. It seems fairly simple, but the pie chart splits into 500 slices. What am I doing wrong? ``` die <- 1:6 result <- sample(die, size = 500, replace = TRUE) pie(result, labels = die) ```
How can I represent probability in a pie chart?
CC BY-SA 4.0
null
2023-03-14T22:07:33.700
2023-03-14T22:30:11.603
2023-03-14T22:18:00.103
383240
383240
[ "pie-chart" ]
609495
1
null
null
2
104
I'm sorry if the title is confusing. I am working with a model with $(X,Y_1,Y_2,...,Y_k)$ such that $X$ is some random variable (a latent factor) and the $Y$'s are generated according to: $$ Y_i = \mu_i + \lambda_i X + \varepsilon_i$$ with all $\varepsilon$'s and $X$ mutually independent, all $\mu$'s and $\lambda$'s are constant parameters. My question is: in this case where the $Y$'s are related to $X$ linearly, is it possible to have $$\mathbb{E}[X|Y_1=y_1, Y_2 = y_2,...,Y_k = y_k] \neq \beta_0 + \beta_1 y_1 + ... +\beta_k y_k$$ (all $\beta$'s are constants and the lowercase y's are the realizations of the big Y's), and what is an example of this case? Any help would be appreciated! I've played around with copulae on MATLAB but I can't figure this one out... What am I missing here?
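One concrete source of non-linearity is a non-Gaussian latent $X$ (when $X$ and the errors are all Gaussian, the conditional expectation is exactly linear in the $y$'s). As a minimal worked example, take $k=1$, $\mu_1=0$, $\lambda_1=1$, and let $X$ be Bernoulli(1/2) with $\varepsilon_1 \sim N(0, \sigma^2)$; Bayes' rule then gives $\mathbb{E}[X\mid Y_1=y] = 1/(1+e^{-(y-0.5)/\sigma^2})$, a logistic (hence non-linear) function of $y$:

```python
import math

def post_mean(y, sigma=0.5):
    # X ~ Bernoulli(1/2), Y = X + eps, eps ~ N(0, sigma^2).
    # Bayes' rule with the two normal densities gives a logistic posterior mean:
    # E[X | Y=y] = 1 / (1 + exp(-(y - 0.5) / sigma^2))
    return 1.0 / (1.0 + math.exp(-(y - 0.5) / sigma ** 2))

# A linear function has vanishing second differences on an evenly spaced grid;
# this posterior mean does not, so it cannot be linear in y.
ys = [0.5, 1.0, 1.5]
ms = [post_mean(y) for y in ys]
second_diff = ms[0] - 2 * ms[1] + ms[2]
```

So the linear generating equation for the $Y$'s is not enough: linearity of $\mathbb{E}[X\mid Y_1,\dots,Y_k]$ additionally needs distributional assumptions (joint Gaussianity being the classic sufficient one).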
If different $Y_i$'s are generated linearly by some latent factor $X$, can $\mathbb{E}[X|Y_1=y_1,Y_2=y_2,...,Y_k=y_k]$ not be linear?
CC BY-SA 4.0
null
2023-03-14T22:12:37.427
2023-03-20T10:16:38.763
2023-03-15T04:31:02.440
307067
307067
[ "multiple-regression", "factor-analysis", "conditional-expectation", "joint-distribution", "latent-variable" ]