Dataset schema (column name: dtype, length range or number of classes):

- Id: stringlengths, 1 to 6
- PostTypeId: stringclasses, 7 values
- AcceptedAnswerId: stringlengths, 1 to 6
- ParentId: stringlengths, 1 to 6
- Score: stringlengths, 1 to 4
- ViewCount: stringlengths, 1 to 7
- Body: stringlengths, 0 to 38.7k
- Title: stringlengths, 15 to 150
- ContentLicense: stringclasses, 3 values
- FavoriteCount: stringclasses, 3 values
- CreationDate: stringlengths, 23 to 23
- LastActivityDate: stringlengths, 23 to 23
- LastEditDate: stringlengths, 23 to 23
- LastEditorUserId: stringlengths, 1 to 6
- OwnerUserId: stringlengths, 1 to 6
- Tags: list
610002
1
null
null
1
81
I am running a Fisher's exact test in SPSS. There are 2 variables: 3 groups and their Pass/Fail frequencies:

- Group 1: 8 pass, 18 fail
- Group 2: 1 pass, 10 fail
- Group 3: 11 pass, 6 fail

I am using Fisher's exact test because some cells have a count less than 5. I have a significant result (p = .01) but I don't know how to run post hoc tests for this analysis. Should I rerun the test on pairs of groups and then adjust my p-values using a Bonferroni correction? Any suggestions would help!
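For what it's worth, the pairwise-plus-Bonferroni idea the question describes can be sketched outside SPSS. This minimal Python example (a hand-rolled two-sided Fisher test written only for illustration; in practice one would use a statistics package) runs the three pairwise 2x2 tests on the question's counts and compares each p-value against the Bonferroni-adjusted threshold:

```python
from itertools import combinations
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same margins
    that are no more likely than the observed one."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    def p_of(k):
        return comb(r1, k) * comb(r2, c1 - k) / comb(n, c1)
    p_obs = p_of(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(p_of(k) for k in range(lo, hi + 1)
               if p_of(k) <= p_obs * (1 + 1e-9))

# Pass/fail counts from the question
groups = {"Group 1": (8, 18), "Group 2": (1, 10), "Group 3": (11, 6)}
alpha = 0.05 / 3  # Bonferroni: three pairwise comparisons

for (n1, (a, b)), (n2, (c, d)) in combinations(groups.items(), 2):
    p = fisher_exact_p(a, b, c, d)
    print(f"{n1} vs {n2}: p = {p:.4f} (Bonferroni threshold {alpha:.4f})")
```

Whether pairwise testing with Bonferroni is the best post hoc strategy is a separate question, but this is the mechanical version of what the asker proposes.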
Fisher's Exact Test Post-Hoc for 3x2 Table
CC BY-SA 4.0
null
2023-03-19T21:33:35.800
2023-03-19T21:33:56.947
2023-03-19T21:33:56.947
380255
380255
[ "fishers-exact-test" ]
610003
2
null
451389
0
null
[All else equal, the higher the $R^2$, the higher the $F$-stat and the lower the p-value.](https://stats.stackexchange.com/a/56910/247274) That "all else equal" is crucial, however. If you increase the $R^2$ by throwing many parameters at the model, you affect the degrees of freedom and can wind up with a higher p-value in the $F$-test despite the higher $R^2$.

However, there is a loose relationship. Especially if I knew that I had been careful to use a reasonable number of parameters for the sample size, I would see a high $R^2$ as at least a positive signal; the $F$-test can then account for the exact number of parameters relative to the sample size. Unless you have a tiny sample size or a huge number of parameters for your sample size, that $R^2 = 0.4787$ is likely screaming out that you will have a significant $F$-test.
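The degrees-of-freedom point can be made concrete with the standard overall-$F$ formula $F = \frac{R^2/k}{(1-R^2)/(n-k-1)}$. In this small stand-alone Python sketch, the sample size $n = 50$ is an assumption for illustration, not a value from the question:

```python
def f_stat(r2, n, k):
    """Overall regression F statistic with k predictors and n observations:
    F = (R^2 / k) / ((1 - R^2) / (n - k - 1))."""
    return (r2 / k) / ((1 - r2) / (n - k - 1))

r2 = 0.4787   # the R^2 from the question
n = 50        # hypothetical sample size, assumed for illustration

print(f"k = 2 predictors:  F = {f_stat(r2, n, 2):.2f}")   # large F
print(f"k = 40 predictors: F = {f_stat(r2, n, 40):.2f}")  # tiny F, same R^2
```

The same $R^2$ gives an $F$ above 20 with 2 predictors but an $F$ below 1 with 40, which is exactly why "all else equal" matters.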
null
CC BY-SA 4.0
null
2023-03-19T22:04:47.303
2023-03-19T22:04:47.303
null
null
247274
null
610004
1
null
null
1
117
I am running empirical models based on 3SLS due to a potential endogeneity issue caused by simultaneity. So I opted for 3SLS as my main analysis tool, and I am using the `reg3` command. My model and Stata command are below.

Empirical model: DV = intercept + endog_var1 + control1 + ... + control6 + i.industry + i.year + error term

(In this model, I assume that there is only one endogenous variable because I am only interested in var1; there could be simultaneity bias between DV and var1 due to unobservable factors that affect both, and I plan to relieve the bias by using one instrument.)

Stata command: `reg3 (DV endog_var1 control1 control2 control3 control4 control5 control6 i.industry i.Year) (endog_var1 DV instrument1 i.industry i.Year)`

Actually, when I run fixed effects models with DV and endog_var1 swapped as DV and IV, respectively (i.e., model 1: DV = intercept + endog_var1 + control1 + ... + control6 + i.industry + i.year + error term, and model 2: endog_var1 = intercept + DV + control1 + ... + control6 + i.industry + i.year + error term), either way I find that the coefficients of DV and endog_var1 are statistically significant. However, when I run the `reg3` command above, endog_var1 is significant in the second stage, but DV becomes insignificant (p-value > 0.1) in the first stage (the second set of parentheses in `reg3`).

To sum up, my questions are fourfold.

- Should I also put all the control variables into the first stage (the second set of parentheses in the `reg3` model)? (However, I keep getting a warning that the equation is not identified and does not meet the order conditions.)
- How should I interpret DV becoming insignificant after running the 3SLS model? (In `endog_var1 DV instrument1 i.industry i.Year`, DV is not significant.)
- If I want to add fixed effects to the model, how could I add them to the Stata command?
- What if I set up the model as `reg3 (DV endog_var1 control1 control2 control3 control4 control5 control6 i.industry i.Year) (endog_var1 instrument1 i.industry i.Year)`, without DV in the first stage? Does this model still cope with the simultaneity bias that my model might have?

Thank you for reading this question.
3SLS with fixed effects and its interpretation issue
CC BY-SA 4.0
null
2023-03-19T22:17:31.433
2023-03-20T04:48:54.447
null
null
382327
[ "stata", "bias", "simultaneity" ]
610005
1
null
null
1
51
I wrote an lm model in R and want to report the mean and standard error of each treatment across about 20 responses. Some of the responses were log transformed and some were sqrt transformed to ensure normality of the model residuals. I wonder how I can report this information based on the emmeans results. For the mean, I assume I should take exp(emmean) for a log-transformed response and (emmean)^2 for a sqrt-transformed response, and then report the mean on the original scale. But what about the SE? I have read that I should not directly back-transform the SE, but then how am I supposed to report it? I'm not a stats major, so reading formulas is really driving me nuts... I attached an example below. [](https://i.stack.imgur.com/1m5bd.png)
How to report Standard Error from log-transformed data?
CC BY-SA 4.0
null
2023-03-19T22:39:46.067
2023-03-24T16:21:22.650
null
null
383612
[ "standard-error" ]
610006
2
null
610005
2
null
My suggestion: back-transform both 95% confidence limits of the set of logarithms. For controls, the 95% CI of the mean of the logarithms is from 0.304 to 0.373. So the 95% CI in the original units runs from exp(0.304) to exp(0.373), which is from 1.355 to 1.452. This interval is not symmetrical around exp(0.338) = 1.402, and that's OK (the uncertainty is not symmetrical on that scale).

What would it mean if you back-transformed the SE of the mean of the logarithms? For your control group, the SEM of the logarithms is 0.0176, and exp(0.0176) = 1.0177. Unlike the SEM, which is a value you add or subtract, this value (1.0177) is one you multiply or divide by. This approach is not commonly used, so I suggest the CI approach.
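The arithmetic in this answer can be reproduced in a few lines of Python (the numbers are the answer's control-group values on the log scale):

```python
from math import exp

# Control group, on the log scale (values from the answer)
log_mean, log_lo, log_hi, log_sem = 0.338, 0.304, 0.373, 0.0176

print(f"back-transformed mean:   {exp(log_mean):.3f}")                     # 1.402
print(f"back-transformed 95% CI: {exp(log_lo):.3f} to {exp(log_hi):.3f}")  # 1.355 to 1.452
# exp(SEM) is a multiplicative factor, not something you add or subtract:
print(f"exp(SEM) = {exp(log_sem):.4f}")
```

Note how the back-transformed interval is wider above the mean (1.452 - 1.402) than below it (1.402 - 1.355), which is the asymmetry the answer describes.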
null
CC BY-SA 4.0
null
2023-03-19T23:36:24.147
2023-03-20T00:30:22.600
2023-03-20T00:30:22.600
25
25
null
610007
1
null
null
0
33
SLLN tells us that if $X_1,...,X_n$ are iid, with $X_1$ having finite mean $\mu$, then their sample average converges almost surely to $\mu$. Suppose instead we know that $X_1,...,X_n$ are iid and their sample average converges almost surely to a constant. Can we argue that $X_1$ has a finite mean, and hence that the constant must be the mean of $X_1$? --- Note this is a follow-up to [this](https://stats.stackexchange.com/questions/609952/if-sample-average-converges-in-an-iid-sample-must-it-converge-to-the-mean) question.
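For reference, a classical converse does hold; a textbook sketch of the argument (not from the question itself) goes as follows. If $\bar X_n \to c$ almost surely, then
$$\frac{X_n}{n} = \bar X_n - \frac{n-1}{n}\,\bar X_{n-1} \to c - c = 0 \quad \text{a.s.},$$
so $P(|X_n| > n \text{ i.o.}) = 0$. By independence and the second Borel–Cantelli lemma, $\sum_n P(|X_n| > n) < \infty$; since the $X_i$ are identically distributed, $\sum_n P(|X_1| > n) < \infty$, which is equivalent to $E|X_1| < \infty$. The SLLN then forces $c = E[X_1]$.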
If sample average converges a.s. in an iid sample, must it converge to the mean?
CC BY-SA 4.0
null
2023-03-20T00:55:55.587
2023-03-20T01:52:22.430
2023-03-20T01:52:22.430
342032
342032
[ "convergence", "asymptotics", "law-of-large-numbers" ]
610009
1
null
null
0
81
Can I study correlation and linear regression for only one group that answered my questionnaire? The two variables of my research are "gamification" and "motivation", but how can I study the relationship between those two without needing to give the questionnaire to two groups? (Most of the questions in the questionnaire concern motivation only.)
Can linear regression be used for one group?
CC BY-SA 4.0
null
2023-03-20T01:11:29.943
2023-03-21T08:48:25.250
null
null
383617
[ "regression" ]
610011
1
610049
null
2
76
I'm reading this article on Structural Causal Models (SCM) and the author gives this example: [](https://i.stack.imgur.com/buXv6.png) where $m=1$ is the single source environment in this case, $\sim$ denotes the target domain, and all noise variables $\epsilon$ follow independent Gaussian distributions with mean zero. We want to find the coefficients that satisfy the following optimization problem: [](https://i.stack.imgur.com/Nv2EN.png) The output stated in the paper is the following: [](https://i.stack.imgur.com/epJgV.png)

I'm having a hard time trying to arrive at this result. My current logic is that, given the constraint, $E[\beta_1 X_1^{(1)}+\beta_2 X_2^{(1)} + \beta_3 X_3^{(1)}] = E[\beta_1 \tilde{X}_1+\beta_2 \tilde{X}_2 + \beta_3 \tilde{X}_3]$; substituting the equations for each $X$ (knowing that $E[\epsilon] = 0$ for all the noises) gives $E[\beta_1 + \beta_2 + \beta_3] = E[-\beta_1 -\beta_2 + \beta_3]$. This equation is not satisfied by the output in the picture. Or am I doing the calculation wrong? Any help is appreciated!
Question on solving OLS
CC BY-SA 4.0
null
2023-03-20T01:41:25.493
2023-03-20T16:52:38.853
2023-03-20T16:50:01.780
36229
279018
[ "expected-value", "causality" ]
610012
2
null
6478
1
null
The caret package and the rpart package each have ways to list the variables and rank their importance, but they generate different results from each other when calculating variable importance.

```
fit$variable.importance  ## rpart's own importance scores
caret::varImp(fit)       ## shows different results than the above
```

The list of variables used is the same, but the scale is different, and even the order of importance is different for the dataset I'm using.
null
CC BY-SA 4.0
null
2023-03-20T01:47:12.897
2023-03-20T01:50:42.053
2023-03-20T01:50:42.053
383619
383619
null
610014
1
null
null
0
34
I am seeking to compare the diagnostic performance of a test within a population with two defined subgroups. I have the raw data and the computed statistics, but I am unsure what the best test is to compare the two.

Group A:
- Sensitivity: 62.2% (95% CI 61.6%-62.7%)
- Specificity: 98.1% (95% CI 98.0%-98.1%)
- PPV: 35.6% (95% CI 35.1%-36.0%)
- NPV: 99.3% (95% CI 99.3%-99.4%)

Group B:
- Sensitivity: 63.6% (95% CI 34.6%-87.0%)
- Specificity: 87.3% (95% CI 86.4%-88.2%)
- PPV: 1.1% (95% CI 0.5%-2.1%)
- NPV: 99.9% (95% CI 99.8%-100%)

Any help or advice is most welcome.
Comparing Sensitivity, Specificity, PPV and NPV of one test across two subgroups
CC BY-SA 4.0
null
2023-03-20T02:36:53.613
2023-03-20T02:36:53.613
null
null
383621
[ "sensitivity-specificity", "method-comparison" ]
610015
2
null
183265
0
null
It means the model is worse than the horizontal line (i.e., simply predicting the mean). See the explanation on page 31 here: [https://scholarsmine.mst.edu/masters_theses/7913/](https://scholarsmine.mst.edu/masters_theses/7913/)
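A tiny stand-alone Python illustration (toy numbers, not from the thread) of how $R^2$ goes negative when a model does worse than the horizontal line at the mean:

```python
# R^2 = 1 - SS_res / SS_tot is negative whenever the model's squared error
# exceeds that of simply predicting the mean. Toy numbers for illustration.
y      = [1.0, 2.0, 3.0, 4.0]
y_pred = [4.0, 3.0, 2.0, 1.0]   # badly wrong (anti-correlated) predictions

mean_y = sum(y) / len(y)
ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, y_pred))
ss_tot = sum((yi - mean_y) ** 2 for yi in y)
r2 = 1 - ss_res / ss_tot
print(r2)  # -3.0: far worse than the horizontal line at the mean
```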
null
CC BY-SA 4.0
null
2023-03-20T02:52:36.227
2023-03-20T02:52:36.227
null
null
346197
null
610016
1
null
null
0
22
I'm looking for a measure of the similarity of two distributions of the following form: $$ S = \frac{a \mu_1 + b \sigma_1}{a \mu_2 + b \sigma_2}. $$ The formula I propose is not rigorous, but it illustrates the idea that if two distributions are similar, then $S$ should be close to 1. Does a measure like this exist?
Compare similarity or difference of two distributions by the ratio of moments
CC BY-SA 4.0
null
2023-03-20T02:53:42.430
2023-03-21T01:59:18.927
2023-03-21T01:59:18.927
383622
383622
[ "distance", "similarities" ]
610018
2
null
610004
0
null
There are several questions that must be answered before applying your model:

a) Are you sure endogeneity will be an issue? You mentioned a 'potential' endogeneity issue, meaning you suspect it might be a problem. You can verify it with a Hausman test of endogeneity. It is possible that simultaneity/endogeneity is not a serious issue, in which case you won't need models like 2SLS or 3SLS.

b) Are the time-specific effects or cross-sectional effects significant? If they are not, you don't need fixed effects in the model. If they are significant, you must decide whether fixed effects or random effects are appropriate.

c) Do you really need to apply 3SLS? You need 3SLS only if the error terms are correlated across the equations of your simultaneous equation model. If they are not correlated across equations, 3SLS boils down to 2SLS, so you are better off applying 2SLS.

Moving on to your questions:

1), 2) & 4) The control variables must be included in the first stage, and you do not need to include DV in the first stage. The first stage represents the reduced-form equations, which means that all the endogenous variables are taken as dependent and only the exogenous variables (including controls and instruments) are used as independent variables. The warning you get about the equation not being identified happens because you have included DV in the first stage, which makes your equation under-identified.

3) You have already included i.industry and i.Year as independent variables. They act as fixed effects because you are including cross-sectional dummies (i.industry) and time-specific dummies (i.Year) in your model.

The following links discuss some of these issues in more detail:

- 3SLS: [https://spureconomics.com/3sls-three-stage-least-squares/](https://spureconomics.com/3sls-three-stage-least-squares/)
- 2SLS: [https://spureconomics.com/two-stage-least-squares-2sls-estimation/](https://spureconomics.com/two-stage-least-squares-2sls-estimation/)
- Identification: [https://spureconomics.com/identification-rank-and-order-conditions/](https://spureconomics.com/identification-rank-and-order-conditions/)
null
CC BY-SA 4.0
null
2023-03-20T04:48:54.447
2023-03-20T04:48:54.447
null
null
360575
null
610019
2
null
609963
-1
null
Your question is full of incoherent statements.

> The CLT states that as we draw random samples from a population, the distribution of their means tends towards a normal distribution.

In math, "tends to" is used to refer to a limit, and a limit always has some independent variable; we take the limit as that variable goes to some value. Furthermore, a limit requires some norm and/or topology. Real numbers have the norm of the absolute value. PDFs do have norms, but there is more than one, so to be rigorous, one should specify which. So your statement of the CLT does not constitute a clear mathematical statement. And while one could infer some more rigorous statement, such as "the $L^2$ norm of the distribution of their means minus a normal distribution goes towards 0 as the sample size goes to infinity", that still wouldn't be correct, because there is no one normal distribution that it goes towards. You have to take the z-score for it to go to a particular normal distribution.

> However, in A/B testing, we only draw two samples, and their distribution is not necessarily guaranteed to be normal.

This also is not a precise statement. The normal distribution is a continuous distribution. A sample is a set of discrete values. What does it mean to compare them?

> Nonetheless, the difference between these two samples is guaranteed to be a normal distribution.

Difference? A sample is a set of observations. How do you take the "difference" between two sets? There is the "set difference" of everything in one that isn't in the other, but how would that be normal? Perhaps you mean "the difference between their means". If so, you should be more precise. Furthermore, the mean is a particular number, not a distribution.

Precision is very important in mathematics. Yes, mathematicians speak loosely in some contexts, but if you're having trouble understanding something, that's not an appropriate context to be using casual language. You're asking people to explain something to you, and requiring them to make inference after inference as to what you mean.

The core issue in your question seems to be the statement "However, in A/B testing, we only draw two samples, and their distribution is not necessarily guaranteed to be normal." The distribution of sample means is approximately normal for large sample sizes, so your apparent intended statement is false.
null
CC BY-SA 4.0
null
2023-03-20T04:50:10.513
2023-03-20T04:50:10.513
null
null
179204
null
610020
1
null
null
2
26
Given that I have a dependent variable $Y_i$, two endogenous variables $W_i \text{ and } X_i$, and an instrument correlated with both $W_i \text{ and } X_i$, can we still perform two stage least squares regression? So, our regression equation is $$Y_i=\beta_0+\beta_1 X_i + \beta_2 W_i + u_i$$ with the two stage least squares procedure $$X_i = b_0 + b_1 Z_i + \epsilon_i$$ $$ Y_i = \beta_0 + \beta_1 \hat{X}_i + \beta_2 W_i + u_i$$ where $Cov(Z_i, W_i)>0$. My guess is that $\hat{\beta}_2$ will not be unbiased and consistent. But will the estimator $\hat{\beta}_1$ be unbiased and consistent?
Can an instrument be correlated with 2 endogenous variables in instrumental variables regression?
CC BY-SA 4.0
null
2023-03-20T04:56:28.660
2023-03-20T04:56:28.660
null
null
322329
[ "econometrics", "instrumental-variables" ]
610021
2
null
609970
4
null
The terms "L1" and "L2" refer to special functions called norms, which measure the length or size of a vector. You are correct that they are used in two different contexts in statistics and machine learning, but their meaning is the same in both contexts.

---

In the context of regularization, the L1 and/or L2 norm restricts the magnitude of the parameter vector of a model. The difference between L1 and L2 regularization comes down to the differences between the L1 and L2 norms. See e.g. [https://medium.com/analytics-vidhya/effects-of-l1-and-l2-regularization-explained-5a916ecf4f06](https://medium.com/analytics-vidhya/effects-of-l1-and-l2-regularization-explained-5a916ecf4f06). As pointed out in other answers, L1 regularization in a regression model corresponds to a Laplace prior on coefficients in Bayesian modeling, and L2 regularization corresponds to a Gaussian prior.

In the context of loss functions, the L1 or L2 norm measures the magnitude of the error vector of the model on a train/test/validation set. L1 loss corresponds to the Mean Absolute Error (MAE), and L2 loss to the (root) mean squared error (RMSE/MSE). As pointed out in the comments, regression models fitted with L1 loss are models of a conditional median, while models fitted with L2 loss are models of a conditional expectation (conditional mean). The latter also happens to correspond to a Gaussian GLM maximum-likelihood model, where the conditional distribution of the data follows a Gaussian distribution centered at the regression prediction.

---

The L2 norm corresponds to our conventional notion of [Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance), which is essentially a multi-dimensional extension of the Pythagorean theorem. You can think of Euclidean distances as the lengths of hypotenuses of right triangles drawn between points. The L1 norm corresponds to the weirder notion of [Manhattan (aka "Taxicab") distance](https://en.wikipedia.org/wiki/Taxicab_geometry), so named because distances resemble the distance traveled by a taxi cab following the grid layout of streets in Manhattan, New York.

It's very common in statistics and machine learning to use L2 loss (MSE) with L1 regularization, or even both L1 and L2 regularization in the same model. L1 loss (MAE) is much less common than L2 in general, in part because the absolute value is not differentiable. However, there is a "smooth" differentiable L1 loss that attempts to mimic the properties of true L1 loss; see e.g. [How to interpret smooth l1 loss?](https://stats.stackexchange.com/q/351874/36229).
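As a concrete stand-alone illustration (not part of the original answer), here are the two norms of a toy error vector in Python:

```python
from math import sqrt

def l1_norm(v):
    """Manhattan length: sum of absolute coordinates."""
    return sum(abs(x) for x in v)

def l2_norm(v):
    """Euclidean length: square root of the sum of squares."""
    return sqrt(sum(x * x for x in v))

errors = [3.0, -4.0]    # toy residual vector
print(l1_norm(errors))  # 7.0 (|3| + |-4|: walk along the grid)
print(l2_norm(errors))  # 5.0 (the 3-4-5 Pythagorean hypotenuse)
```

The L1 length is the distance a Manhattan taxi drives; the L2 length is the straight-line hypotenuse, which is why the two norms penalize large coordinates so differently.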
null
CC BY-SA 4.0
null
2023-03-20T05:31:26.713
2023-03-20T16:30:13.920
2023-03-20T16:30:13.920
36229
36229
null
610023
2
null
609833
2
null
You've drawn from two different populations, so your comparison is not valid. The question you want to ask is: given a clustered population, what is the impact of ignoring clustering?

Let's say we have a clustered population. This is your `dependent` in the question, which I have renamed `popn` in the code below. If we take a simple random sample of size $n = 100$ from `popn`, we get a standard deviation estimate of 3.00 (the true value of the SD is 2.87). Now we'll take a cluster sample. We'll select two clusters, each of size $n = 50$. The SD of the cluster sample is 1.51, which is an under-estimate of the true SD. An underestimate of the SD gives an under-estimate of the SE.

```
rep(1:10, 100) -> popn
set.seed(1)
popn + rnorm(1000, mean = 0, sd = 0.1) -> popn
sd(popn)                 # true population SD
hist(popn, breaks = 100)

# simple random sample
randsamp <- sample(x = popn, size = 100, replace = FALSE)
sd(randsamp)             # SRS SD

# cluster sample: two whole clusters of 50
clus_to_sample <- sample(x = 1:10, size = 2, replace = FALSE)
clussamp <- c()
for (i in clus_to_sample) {
  clussamp <- append(x = clussamp,
                     values = sample(popn[rep(1:10, 100) == i],
                                     size = 50,
                                     replace = FALSE))
}
sd(clussamp)             # cluster sample SD
```
null
CC BY-SA 4.0
null
2023-03-20T06:01:31.027
2023-03-20T06:30:33.987
2023-03-20T06:30:33.987
369002
369002
null
610024
1
null
null
0
14
I know that the k-means algorithm converges in finitely many steps; see [Proof of convergence of k-means](https://stats.stackexchange.com/questions/188087/proof-of-convergence-of-k-mean). The general definition of the rate of convergence $$\lim_{n\to\infty} \frac{\|x_{n+1}-r\|}{\|x_{n}-r\|^{\alpha}}$$ is then no longer valid, as both the numerator and the denominator become zero for some finite $n$. However, I read this [NeurIPS paper](https://proceedings.neurips.cc/paper/1994/file/a1140a3d0df1c81e24ae954d935e8926-Paper.pdf), which seems to establish the rate of convergence of the k-means algorithm from a gradient descent point of view. These two conclusions seem to be contradictory. Could someone please explain? Many thanks.
convergence rate for algorithms that stop in finite steps
CC BY-SA 4.0
null
2023-03-20T06:29:00.393
2023-03-20T06:29:00.393
null
null
383630
[ "convergence", "k-means", "algorithms" ]
610025
1
610028
null
4
277
A little bit of background: I have daily demand data for our product from 1 January 2017 to 31 December 2022. Sometime after Covid-19 struck, say 1 March 2020, the sale of our product went up substantially (sales in 2021 were 8X sales in 2019) and the demand has been sustained to date (March 2023). Now, my manager has asked me to find out what the sales would have been if Covid hadn't struck, i.e., if we had continued at the same sales levels we were at pre-Covid, and find the difference between the expected sales (estimated using pre-Covid numbers) and the actual sales. I believe I'm not able to find something online since I don't know the exact area of study to look for. I have the following questions:

- What is the broad area of study or technique that deals with the above problem? I assume it would be something like promotional analysis, where one tries to model the effect of a promotion/discount to see how sales are affected.
- Are there any specific techniques you would suggest to help me solve this problem? Techniques could be statistical (based on distributions/tests), ML oriented, or any other.
find the difference between the expected sales and the actual sales
CC BY-SA 4.0
null
2023-03-20T06:37:43.183
2023-03-20T09:02:19.857
null
null
116451
[ "time-series" ]
610026
2
null
610025
5
null
> What is the broad area of study or technique that deals with the above problem?

You already have the actual sales. The second thing you need is the expected sales. That can be found using [time series forecasting](https://en.m.wikipedia.org/wiki/Time_series#Prediction_and_forecasting).

---

Also noteworthy are some details about the statistical lingo. "Expected" refers to something more specific:

- Expected value: the average outcome of a random variable. For example, a six-sided die roll has an expected value of $\frac{1}{6}+\frac{2}{6}+\frac{3}{6}+\frac{4}{6}+\frac{5}{6}+\frac{6}{6} = 3.5$
- Estimated value: some estimate of a value related to a population. For example, when we have a small sample, we can use the properties of that sample to estimate the properties of the population. Values computed from the sample are not necessarily equal to the corresponding actual values of the population, but are probably close to them.
- Predicted value: an estimate that is some sort of extrapolation. Based on samples from a population under certain settings, we make an estimate for the population under different settings. For example, when we have a set of samples/observations in the past, we might extrapolate some trend line into the future.

You seem to be looking for a 'predicted value' of the covid-time sales based on the pre-covid-time sales.
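The die-roll example above can be checked in a couple of lines of Python (illustration only):

```python
from fractions import Fraction

# Expected value of a fair six-sided die: sum of k * P(k) with P(k) = 1/6
ev = sum(k * Fraction(1, 6) for k in range(1, 7))
print(float(ev))  # 3.5
```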
null
CC BY-SA 4.0
null
2023-03-20T08:10:00.993
2023-03-20T09:02:19.857
2023-03-20T09:02:19.857
164061
164061
null
610027
2
null
609772
5
null
If you are asking about the i.i.d. assumption in machine learning in general, we already have that question answered in the [On the importance of the i.i.d. assumption in statistical learning](https://stats.stackexchange.com/questions/213464/on-the-importance-of-the-i-i-d-assumption-in-statistical-learning) question. As about maximum likelihood, notice that the likelihood function is often written as $$ \prod_{i=1}^N p(x_i | \theta) $$ where $p(x_i | \theta)$ is probability density or mass function for the point $x_i$ parameterized by $\theta$. We are multiplying because we are making the [independence](https://en.wikipedia.org/wiki/Independence_(probability_theory)) assumption; otherwise the joint distribution would not be a product of the individual distributions. Moreover, $p(\cdot | \theta)$ are all the same, so they are "identical", and hence we are talking about the i.i.d. assumption. This does not mean that every likelihood function would assume independence, but that is often the case. The identical distributions assumption also is not necessary, e.g. you can have a mixture model (e.g. clustering), where you assume that individual samples come from different distributions, together forming a mixture. Notice that with maximum likelihood we are directly making such assumptions. If you are fitting a decision tree or $k$NN you are not maximizing any likelihood, the algorithms do not explicitly assume any probability distribution, so you are also not explicitly making such a assumption. It still is the case, however, that you are assuming that your data is "all alike" (so a kind of i.i.d. or exchangeability): for example, you wouldn't mix data from completely different domains (say, ice-cream sales, size of brain tumors, and speed of Formula 1 cars) together and expect it to return reasonable predictions. As for logistic regression, that is discussed in the [Is there i.i.d. 
assumption on logistic regression?](https://stats.stackexchange.com/questions/259704/is-there-i-i-d-assumption-on-logistic-regression) thread. It would be a tautology, but the assumptions that you made need to hold. If your model assumes that the samples are independent, then you need the independence assumption.
null
CC BY-SA 4.0
null
2023-03-20T08:35:07.533
2023-03-20T09:41:07.007
2023-03-20T09:41:07.007
22047
35989
null
610028
2
null
610025
6
null
As [Sextus writes](https://stats.stackexchange.com/a/610026/1352), this is a case of time series forecasting. Here are some resources: [Resources/books for project on forecasting models](https://stats.stackexchange.com/q/559908/1352) Since you write that you have daily data, this sounds a lot like retail sales to me, so you might be interested in [this introduction to retail forecasting](https://forecasting-encyclopedia.com/practice.html#Retail_sales_forecasting) and the references therein.

I basically see two possibilities.

- You could fit a model to the data pre-COVID, then forecast out into the COVID time frame.
- You could fit a model to all your data, but include one or more predictors to capture the COVID effect. (If COVID had different effects at different times, e.g., driven by different lockdowns, you may want to use multiple predictors.) Then calculate the model fit across the COVID time without the COVID predictors.

The two approaches will yield different results. A simple way of dealing with this is to take the average of the two forecasts/fits. Don't worry too much that you get different results - there is a lot of uncertainty in this kind of "alternative history" in any case.

In either case, your models should be able to capture the main drivers in your time series.

- Daily data usually have intra-weekly seasonality, but your manager is likely not interested in a particular Tuesday but in aggregate results, so you could probably just disregard this seasonality.
- However, you also write that you have "quarterly" seasonality, which one could interpret in two ways: either your daily data have a pattern that repeats every three months, or you have a pattern that recurs on a yearly basis, with different quarters being noticeably different. In either case, you can capture the effect by transforming the day-of-year using sine and cosine waves (Fourier terms). Don't, e.g., use dummy coding for the quarter - this will yield a step function that simply does not make sense. You may want to take a look at our multiple-seasonalities tag.
- If you have promotions, you could include them in the model, and also include when you would have run promotions in the absence of COVID. Yes, this adds some degree of arbitrariness. Alternatively, don't model promotions and live with the fact that your demand is smoothed out. As with the day-of-week pattern mentioned above, your manager is likely not interested in this.
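The Fourier-term idea can be sketched as follows; a minimal stand-alone Python example in which the number of harmonic pairs `K` and the yearly period are assumptions to tune, not values from the answer:

```python
from math import sin, cos, pi

def fourier_terms(day_of_year, period=365.25, K=2):
    """First K sine/cosine pairs for a yearly seasonal pattern; feed these
    as smooth regressors instead of step-function quarter dummies."""
    t = 2 * pi * day_of_year / period
    feats = []
    for k in range(1, K + 1):
        feats += [sin(k * t), cos(k * t)]
    return feats

print(fourier_terms(1))    # features for 1 January
print(fourier_terms(182))  # features for roughly mid-year
```

Unlike quarter dummies, these features vary smoothly from one day to the next, which is exactly why they avoid the step-function artifact.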
null
CC BY-SA 4.0
null
2023-03-20T08:42:41.607
2023-03-20T08:42:41.607
null
null
1352
null
610029
1
null
null
0
24
There are 169 different types of Texas Hold'em starting hands. I want to learn the probability of each of them winning through empirical simulation. Note that I'm ignoring all betting considerations (even though betting strategy is the whole point of the game); I'm just trying to measure how strong each hand is. So I simulate a bunch of Texas Hold'em rounds (let's say the games are between two people, Alice and Bob). Normally, the probability a hand wins is (# wins)/(total # of times the hand was seen). But my question is: for every round simulated, can I treat as a sample BOTH Alice's hand AND Bob's hand? Or can I only use one or the other? E.g. in one round, Alice's hand is "99" (pair of nines), and Bob's hand is "77" (pair of sevens). After the flop, Alice's best hand is worse than Bob's best hand, since Bob ended up with triple 7s. Then can I add to the counts that 77 won AND 99 lost (giving me double the samples)? Or can I only record one side, to avoid dependency between samples?

My gut tells me that, theoretically, you're not supposed to use both samples, but in practice you can totally use both, since the dependence is so weak over a large sample size.

P.S.: As a follow-up, in the case of a Hold'em game between multiple players (say 8 players), each round can potentially give you 8 samples instead of just one. Just thought I'd point that out.
Learning a symmetric distribution: best practice for how to treat samples?
CC BY-SA 4.0
null
2023-03-20T08:47:12.700
2023-03-20T10:40:29.337
2023-03-20T10:40:29.337
106978
106978
[ "distributions", "sampling", "simulation", "independence", "approximate-inference" ]
610030
1
null
null
0
30
I don't know if my title makes sense, so I will try to explain using basketball as an analogy. Say you have a set of players and you know the probability of each being on court at a given moment. There always needs to be a fixed number on court at once, and the player probabilities are generally different from each other. How would you then calculate the probability of a specific line-up being formed given this information? The problem sounds like it should be simple, but I don't know the right way to approach it.

Take the trivial case of selecting a 2-man line-up from a set of 3 players (A, B, C) with probabilities (0.9, 0.8, 0.3). The line-up probabilities would then be (AB=0.7, AC=0.2, BC=0.1), which doesn't seem very intuitive. Then for another example, imagine you had another person to pick from, so you now have to pick 2 out of a set of 4. Say that the player probabilities are now (0.5, 0.5, 0.5, 0.5); this would make the line-up probabilities (AB=1/6, AC=1/6, ...). If you change the player probabilities to (0.5, 0.5, 1, 0), then the new line-up probabilities would be (AB=0, AC=1/2, ...).

Is there a formula for this? I'm really at a loss for how to approach this.
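The question's own 3-player example can at least be sanity-checked numerically: any valid set of line-up probabilities must reproduce each player's on-court probability as a marginal. A small stand-alone Python sketch:

```python
# Line-up probabilities from the question's 3-player, 2-on-court example
lineups = {("A", "B"): 0.7, ("A", "C"): 0.2, ("B", "C"): 0.1}

# Each player's marginal: total probability of all line-ups containing them
marginal = {p: sum(prob for pair, prob in lineups.items() if p in pair)
            for p in "ABC"}

for player, m in marginal.items():
    print(f"P({player} on court) = {m:.1f}")  # 0.9, 0.8, 0.3 as stated
```

This only verifies consistency; it does not by itself determine the line-up probabilities from the marginals, which is the open part of the question.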
Selecting without replacement a subset of items from a set knowing the final probabilities that items will be chosen
CC BY-SA 4.0
null
2023-03-20T08:59:27.427
2023-03-20T08:59:27.427
null
null
383634
[ "probability", "conditional-probability" ]
610031
1
null
null
0
80
Consider a simple random walk. I am trying to compute the variance of differences over a window of certain size `dw` which could, for example, model returns of a stock over a certain period. I compute and average this difference for 1) non-overlapping windows and 2) for sliding, overlapping windows. Based on my (limited) statistics knowledge, I would have expected that the sample variance of overlapping windows would be larger, because there are many more correlated samples within overlapping windows and positively correlated samples tend to increase the variance. This is, however, not what I find. The Python code below shows that covariances (and thus variances) are essentially the same: ``` import numpy as np #generate RW dx = np.random.choice([-1,1],1000000) x=np.cumsum(dx) #window size dw=10 #get non-overlapping samples non_overlapping_samples = np.diff(x[::dw]) print("Non-overlapping variance: ", non_overlapping_samples.std()) print("Non-overlapping covariance: ", np.cov(non_overlapping_samples) ) #get overlapping samples overlapping_samples=[] for w0 in range(dw): overlapping_samples.append(np.diff(x[w0:][::dw])) overlapping_samples=np.array(overlapping_samples).flatten() print("Overlapping variance: ", overlapping_samples.std()) print("Overlapping covariance: ", np.cov(overlapping_samples)) output: Non-overlapping variance: 3.170077114073606 Non-overlapping covariance: 10.049489405072238 Overlapping variance: 3.165723940721805 Overlapping covariance: 10.021818090777733 ``` I was wondering why this is the case? Shouldn't overlapping windows be much more correlated than non-overlapping ones? At first I thought that on average, the number of positively and negatively correlated samples is the same and cancels out in the variance. However, even if I bias the RW (i.e. choose samples from {-1,2}), the overlapping and non-overlapping variances are the same.
Overlapping vs non-overlapping windows in random walk
CC BY-SA 4.0
null
2023-03-20T09:03:39.870
2023-03-20T10:58:50.447
2023-03-20T10:03:56.410
53690
153176
[ "python", "variance", "covariance", "random-walk", "overlapping-data" ]
610032
1
null
null
2
36
I have performed a meta-analysis with the following attributes: ``` meta_escalc <- escalc(measure = "MN", mi = mean , sdi = SD, ni = ni, data=data ) ``` However, I am not sure which diagnostic plot makes more sense. This funnel plot displays the actual values (rather than residuals) on the x-axis, for a model without moderators. ``` res.1 <- rma(yi = yi, vi = vi, data = meta_escalc) funnel(res.1) ``` [](https://i.stack.imgur.com/nVrd9.png) When displaying the funnel plot for the model with moderators, the x-axis displays the residual values. ``` res.2 <- rma(yi, vi, mods = ~ mods, data=meta_escalc) funnel(res.2) ``` [](https://i.stack.imgur.com/D9nQd.png) When the model does not have any moderators, but we force it to display residuals on the x-axis by setting int.only = FALSE: ``` res.1$int.only <- FALSE funnel(res.1) ``` [](https://i.stack.imgur.com/VcJru.png) Could you please explain which one is more suitable, and why?
What should be displayed on the x-axis of a funnel plot for a meta-analysis model that does not contain any moderators?
CC BY-SA 4.0
null
2023-03-20T09:23:45.723
2023-03-20T14:13:07.613
2023-03-20T12:43:33.573
340509
340509
[ "r", "meta-analysis", "metafor" ]
610033
1
null
null
0
13
I would like to add external variables to time series forecasting. This variable partly refers to the future, so it lies within the time range over which the target variable should be forecast. Based on domain knowledge, I know for sure that this external variable will change in the future. Imagine the following example: - Training should be performed until 2019-06-01 - Prediction of the next 12 months - starting with 2019-07-01 |Date |Product |Target |External Variable | |----|-------|------|-----------------| |... |ProductName |... |... | |2019-02-01 |ProductName |15 |0.05 | |2019-03-01 |ProductName |8 |0.03 | |2019-04-01 |ProductName |12 |0.05 | |2019-05-01 |ProductName |20 |0.05 | |2019-06-01 (train until here) |ProductName |12 |0.03 | |2019-07-01 |ProductName |nodata |0.05 | |2019-08-01 |ProductName |nodata |0.01 | |2019-09-01 |ProductName |nodata |0.07 | |2019-09-01 |ProductName |nodata |... | What possibilities (feature engineering?) do I have to integrate the external variable(s)?
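One common way to use a regressor whose future values are known is to treat it as an exogenous variable: fit the model on history where both series are observed, then plug the known future values into the forecast recursion. A minimal numpy sketch with synthetic data and a simple lag-1-plus-exogenous linear model (an illustration of the mechanics only; the true coefficients 0.6 and 100 are assumptions of this simulation, not the only modeling choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic history: target depends on its own lag and an external driver.
n = 120
ext = rng.uniform(0.01, 0.07, n + 12)          # known for the future too
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + 100 * ext[t] + rng.normal(0, 0.5)

# Fit y_t ~ y_{t-1} + ext_t by least squares on the training window.
X = np.column_stack([y[:-1], ext[1:n], np.ones(n - 1)])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)

# Forecast 12 steps ahead, plugging in the *known* future external values.
preds, last = [], y[-1]
for h in range(12):
    last = coef[0] * last + coef[1] * ext[n + h] + coef[2]
    preds.append(last)
print(coef[:2])  # should recover roughly the true (0.6, 100)
```

The same pattern (history-fitted model plus known future exogenous values) is what ARIMAX-style models do under the hood.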
Multivariate Timeseries Forecasting - Add external data from the future
CC BY-SA 4.0
null
2023-03-20T09:35:15.720
2023-03-20T09:35:15.720
null
null
383636
[ "machine-learning", "time-series", "forecasting", "multivariate-analysis" ]
610034
1
null
null
1
15
Suppose I have an online Bayesian linear regression problem for which I can update the posterior distribution of the parameters. Using this posterior, I want to make a point forecast by sampling from it. In a complex regression environment, i.e. with non-stationarities, model misspecification, etc., it may not always be best to pick the posterior mean to minimize the out-of-sample objective. Therefore we have an online learning problem that should balance exploitation/exploration of where to sample from the posterior. The tricky thing here is that if the posterior itself is the action set, it changes at each time step. I'm wondering if there are any online learning algorithms that are suited to this problem?
Online learning with random action set?
CC BY-SA 4.0
null
2023-03-20T09:39:42.380
2023-03-20T09:39:42.380
null
null
371362
[ "regression", "bayesian", "sampling", "online-algorithms" ]
610035
1
null
null
1
35
Could I use the LDA method to separate outliers from the majority of points? I want to find outliers in certain data with LDA, but I couldn't find any use of LDA for outlier detection. Basically I want to do something like the work below: [](https://i.stack.imgur.com/4DBFv.png) I'm going to try with the `MASS::lda()` function. Thank you for your answer.
Could I use LDA (Linear discriminant analysis) for outlier detection?
CC BY-SA 4.0
null
2023-03-20T09:45:23.917
2023-03-20T09:45:23.917
null
null
383638
[ "r", "discriminant-analysis" ]
610036
1
610466
null
1
36
Assume a system consisting of several components. Each component is characterized by some real number. A sample of a pair of components in the system is a random variable normally distributed around the difference between the components (with known variance). Given a set of samples (there may be more than one sample of the same pair), I am interested in estimating the differences between the pairs (it can be assumed that the collection of samples forms a connected graph). Furthermore, I'm interested in calculating the variance of the estimator. Is there a theory that covers this issue?
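One standard way to formalize this setup (offered as an assumption about what is described, not necessarily the theory being asked for) is least squares on the graph's incidence matrix: pin one node to 0 (only differences are identifiable), regress the edge samples on signed node indicators, and read the estimator covariance off $\sigma^2 (X^\top X)^{-1}$. A small numpy sketch with made-up node values:

```python
import numpy as np

rng = np.random.default_rng(1)
true_v = np.array([0.0, 1.0, 2.5])          # node values (only diffs matter)
pairs = [(1, 0), (2, 1), (2, 0)] * 50       # sampled edges (connected graph)
sigma = 0.1

# Each sample: v_i - v_j + Gaussian noise with known sd.
y = np.array([true_v[i] - true_v[j] + rng.normal(0, sigma) for i, j in pairs])

# Design matrix with node 0 pinned to 0 (gauge fixing); columns = nodes 1, 2.
X = np.zeros((len(pairs), 2))
for r, (i, j) in enumerate(pairs):
    if i > 0: X[r, i - 1] += 1.0
    if j > 0: X[r, j - 1] -= 1.0

v_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
cov = sigma**2 * np.linalg.inv(X.T @ X)     # covariance of the estimator
print(v_hat)                                 # close to (1.0, 2.5)
print(np.sqrt(np.diag(cov)))                 # standard errors
```

Any pairwise difference and its variance then follow from linear combinations of `v_hat` and `cov`.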
Is there a theory for estimating "node differences" using "edge samples" over graph
CC BY-SA 4.0
null
2023-03-20T09:53:00.363
2023-03-23T14:48:00.590
null
null
357622
[ "normal-distribution", "variance", "estimation", "graph-theory" ]
610037
2
null
609798
1
null
This seems like an ordinal outcome (ordered categories from negative to strong) with an arguably unordered categorical predictor (tumor type). If there were sufficient counts in all cells, you could just use some standard frequentist maximum-likelihood approach to ordinal data (e.g. ordinal logistic regression). The zero and near-zero counts might cause some problems there, although it might work. An alternative is a Bayesian version of this, where you can regularize the coefficients by introducing weakly informative proper prior distributions (or, if you want to and have prior information, you could also use informative priors). Bayesian models with weakly informative proper prior distributions tend to have good small-sample performance. One good option for modeling flexibility there is the `brms` R package, for which there is a [whole in-depth tutorial](https://osf.io/gyfj7/download) (that also got [published in a journal](https://doi.org/10.1177/2515245918823199)) on how to fit such models, with explanations of what each option assumes and how to be more flexible with [distributional regression](https://cran.r-project.org/web/packages/brms/vignettes/brms_distreg.html) (e.g. do you want to estimate an overall "tendency to be in a higher category" effect, or do you want to estimate a separate coefficient for being in a category vs. the next higher category, etc.).
null
CC BY-SA 4.0
null
2023-03-20T10:04:26.537
2023-03-20T10:04:26.537
null
null
86652
null
610038
2
null
609495
0
null
Yes, it is possible. In general, when you have the distribution of $X|Y$ and you want to know the distribution of $Y|X$ (or some aspect of this distribution, such as its expected value), you need to reverse the conditioning by applying [Bayes' rule](https://en.wikipedia.org/wiki/Bayes%27_theorem) and so the result depends on the marginal distribution of $X$. Applying this rule gives the general result: $$\mathbb{E}(X|\mathbf{Y}=\mathbf{y}) = \int \limits_{\mathscr{X}} x \cdot f_{\mathbf{Y}|X}(\mathbf{y}|x) f_X(x) \ dx.$$ It is simple to choose $f_X$ to give a counter-example to your result. For example, if you take this distribution to be a point-mass distribution on $x=0$ (and assuming that there is some non-zero value for one of the beta coefficients) then you have: $$\mathbb{E}(X|\mathbf{Y}=\mathbf{y}) = 0 \neq \beta_0 + \beta_1 y_1 + \cdots + \beta_k y_k.$$
null
CC BY-SA 4.0
null
2023-03-20T10:16:38.763
2023-03-20T10:16:38.763
null
null
173082
null
610039
1
null
null
3
301
It is confusing because I get so many different sources claiming different ideas. I am going to try to make it simple. Several individuals are organised in 3 groups. We measure multiple variables at two time points, but of course, we are interested in how the health intervention influences the outcome at the end. Here I present several possibilities. - lme(variable ~ time:group, random = ~ time | subject) - lme(variable ~ time:group + group, random = ~ time | subject) - lme(variable ~ time:group + time, random = ~ time | subject) - lme(variable ~ time:group + group + time, random = ~ time | subject) Which one would you choose, and why? What are the differences, strictly speaking, in terms of the adjustment? My interest lies in capturing statistically significant differences across groups and time.
How should interactions be modeled in mixed-models?
CC BY-SA 4.0
null
2023-03-20T10:18:46.657
2023-03-21T09:11:14.710
2023-03-21T09:11:14.710
339186
339186
[ "mixed-model", "lme4-nlme" ]
610041
2
null
609609
1
null
See the linked answer to this question: [Comparing two models using Repeated K-fold Cross Validation](https://stats.stackexchange.com/q/535751/60613) Basically, in [[1](https://dx.doi.org/10.1007/978-3-540-24775-3_3)], the authors propose a correction to the t-statistic for the interdependence of folds in repeated K-fold cross-validation (see section 3.3): $$t = \frac{ \frac{1}{k \cdot r} \sum_{i=1}^k \sum_{j=1}^r x_{ij} }{ \sqrt{\left(\frac{1}{k\cdot r}+\frac{n_2}{n_1}\right)\hat\sigma^2} },$$ $$\hat\sigma^2=\frac{1}{k \cdot r - 1} \sum_{i=1}^k \sum_{j=1}^r (x_{ij} - \bar x)^2,$$ $$\bar x=\frac{1}{k \cdot r} \sum_{i=1}^k \sum_{j=1}^r x_{ij},$$ where $n_1$ is the number of instances used for training, $n_2$ is the number of instances used for testing, $k$ is the number of folds, $r$ is the number of repetitions, and $x_{ij}$ is the difference in performance between the two models on fold $i$ of repetition $j$. [1]: R. R. Bouckaert and E. Frank, ‘Evaluating the Replicability of Significance Tests for Comparing Learning Algorithms’, in Advances in Knowledge Discovery and Data Mining, vol. 3056, H. Dai, R. Srikant, and C. Zhang, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004, pp. 3–12. doi: [10.1007/978-3-540-24775-3_3](https://dx.doi.org/10.1007/978-3-540-24775-3_3).
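The corrected statistic above is only a few lines to implement. A hedged sketch (my own illustration, not code from the cited paper; `x` is the vector of the $k \cdot r$ per-fold performance differences, and for plain k-fold CV $n_2/n_1 = 1/(k-1)$):

```python
import numpy as np

def corrected_t(x, k, r, n1, n2):
    """Variance-corrected t-statistic for r-times-repeated k-fold CV,
    following Bouckaert & Frank (2004), section 3.3."""
    x = np.asarray(x, dtype=float)
    assert x.size == k * r
    m = x.mean()
    s2 = x.var(ddof=1)   # (1/(k*r - 1)) * sum (x_ij - mean)^2
    return m / np.sqrt((1.0 / (k * r) + n2 / n1) * s2)

# Example: 10-fold CV repeated 10 times; each test fold is 1/9 of the
# training size, so n2/n1 = 1/9.  diffs are synthetic for illustration.
rng = np.random.default_rng(0)
diffs = rng.normal(0.02, 0.05, 100)
t = corrected_t(diffs, k=10, r=10, n1=9, n2=1)
print(t)
```

Because the correction inflates the variance term, the corrected $|t|$ is always smaller than the naive $|t| = |\bar x| / \sqrt{\hat\sigma^2 / (k r)}$, making the test more conservative.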
null
CC BY-SA 4.0
null
2023-03-20T10:35:32.290
2023-03-20T10:35:32.290
null
null
60613
null
610042
2
null
357466
0
null
Edit to summarize the following arguments and simulations: I propose that balancing by either over-/undersampling or class weights is an advantage during training of gradient descent models that use sampling procedures during training (i.e. subsampling, bootstrapping, minibatches etc., as used in e.g. neural networks and gradient boosting). I propose that this is due to an improved signal-to-noise ratio of the gradient of the loss function, which is explained by: - Improved signal (larger gradient of the loss function, as suggested by the first simulation) - Reduced noise of the gradient due to sampling in a balanced setting vs. strongly unbalanced (as supported by the second simulation). Original answer: To make my point I have modified your code to include a "0" (or baseline) model for each run, where the first predictor column is removed, thus retaining only the remaining 9 predictors which have no relationship to the outcome (full code below). In the end I calculate the Brier scores for logistic and randomForest models and compare the differences with the full model. The full code is below. 
When I now compare the change in Brier score from the "0" models to the full original models (which include predictor 1), I observe: ``` > round( quantile( (brier_score_logistic - brier_score_logistic_0)/brier_score_logistic_0), 3) 0% 25% 50% 75% 100% -0.048 -0.038 -0.035 -0.032 -0.020 > round( quantile( (brier_score_logistic_oversampled - brier_score_logistic_oversampled_0)/brier_score_logistic_oversampled_0),3) 0% 25% 50% 75% 100% -0.323 -0.258 -0.241 -0.216 -0.130 > round( quantile( (brier_score_randomForest - brier_score_randomForest_0)/brier_score_randomForest_0), 3) 0% 25% 50% 75% 100% -0.050 -0.037 -0.032 -0.026 -0.009 > round( quantile( (brier_score_randomForest_oversampled - brier_score_randomForest_oversampled_0)/brier_score_randomForest_oversampled_0), 3) 0% 25% 50% 75% 100% -0.306 -0.272 -0.255 -0.233 -0.152 ``` What seems clear is that for the same predictor, the relative change in the Brier score jumps from a median of around 0.035 in an imbalanced setting to around 0.241 in a balanced setting, giving a roughly 7x higher gradient for a predictive model vs. a baseline. Additionally, when you look at the absolute Brier scores, the baseline model in an unbalanced setting performs much better than the full model in the balanced setting: ``` > round( quantile(brier_score_logistic_0), 5) 0% 25% 50% 75% 100% 0.02050 0.02363 0.02450 0.02545 0.02753 > round( quantile(brier_score_logistic_oversampled), 5) 0% 25% 50% 75% 100% 0.17576 0.18842 0.19294 0.19916 0.23089 ``` Thus, concluding that a smaller Brier score is better per se will lead to wrong conclusions if, say, you are comparing datasets with different predictor or outcome prevalences. Overall, to me there seem to be two advantages/problems: - Balancing the datasets seems to get you a higher gradient, which should be beneficial for training of gradient descent algorithms (xgboost, neural networks). 
In my experience, without balancing, the neural network might just learn to guess the class with the higher probability, without learning any data features, if the dataset is too imbalanced. - Comparability between different studies/patient populations/biomarkers may benefit from measures which are less sensitive to changes in prevalence, such as AUC, C-index, or maybe a stratified Brier score. As the example shows, a strong imbalance diminishes the difference between a baseline model and a predictive model. This work goes in a similar direction: ieeexplore.ieee.org/document/6413859 Edit: To follow up on the discussion in the comments, which partially concerns the error due to sampling for a model trained on an imbalanced vs. a balanced dataset, I used a second small modification to the script (full version 2 of the new script below). In this modification, testing of the original predictive models is performed on one test set, while the "0" models are tested on a separate "test_set_new", which is generated using the same code. This represents either a new sample from the same population or a new "batch" or "minibatch" or subset of the data as used for training models with gradient descent. 
Now the "gradient" of the Brier from a non-predictive to a predictive model seems quite revealing: ``` > round( quantile( (brier_score_logistic - brier_score_logistic_0)/brier_score_logistic_0), 3) 0% 25% 50% 75% 100% -0.221 -0.100 -0.052 0.019 0.131 > round( quantile( (brier_score_logistic_oversampled - brier_score_logistic_oversampled_0)/brier_score_logistic_oversampled_0),3) 0% 25% 50% 75% 100% -0.318 -0.258 -0.242 -0.215 -0.135 > > round( quantile( (brier_score_randomForest - brier_score_randomForest_0)/brier_score_randomForest_0), 3) 0% 25% 50% 75% 100% -0.213 -0.092 -0.046 0.020 0.127 > round( quantile( (brier_score_randomForest_oversampled - brier_score_randomForest_oversampled_0)/brier_score_randomForest_oversampled_0), 3) 0% 25% 50% 75% 100% -0.304 -0.273 -0.255 -0.232 -0.155 > round( mean(brier_score_logistic>brier_score_logistic_0), 3) [1] 0.31 > round( mean(brier_score_randomForest>brier_score_randomForest_0), 3) [1] 0.33 ``` So now, in 31-33% of simulations for imbalanced models, the Brier score of the "0" model is "better" (smaller) than the score of the predictive model, despite a sample size of 10,000, while for models trained on balanced data the gradient of the Brier is consistently in the right direction (predictive models lower than "0" models). This seems to me to be quite clearly due to the sampling variability in the imbalanced setting, where even small variations (individual observations) result in a much stronger variability in performance (as observed above, the overall Brier is more strongly affected by prevalence than by actual predictors when trained on an imbalanced dataset). As discussed below, I expect that this may strongly affect any sampling approaches during gradient descent training (minibatch, subsampling, etc.), while when using exactly the same dataset during each epoch the effect may be less prominent. 
The modified version of OP's code: ``` library(randomForest) library(beanplot) nn_train <- nn_test <- 1e4 n_sims <- 1e2 true_coefficients <- c(-7, 5, rep(0, 9)) incidence_train <- rep(NA, n_sims) model_logistic_coefficients <- model_logistic_oversampled_coefficients <- matrix(NA, nrow=n_sims, ncol=length(true_coefficients)) brier_score_logistic <- brier_score_logistic_oversampled <- brier_score_logistic_0 <- brier_score_logistic_oversampled_0 <- brier_score_randomForest <- brier_score_randomForest_oversampled <- brier_score_randomForest_0 <- brier_score_randomForest_oversampled_0 <- rep(NA, n_sims) #pb <- winProgressBar(max=n_sims) for ( ii in 1:n_sims ) { print(ii)#setWinProgressBar(pb,ii,paste(ii,"of",n_sims)) set.seed(ii) while ( TRUE ) { # make sure we even have the minority # class predictors_train <- matrix( runif(nn_train*(length(true_coefficients) - 1)), nrow=nn_train) logit_train <- cbind(1, predictors_train)%*%true_coefficients probability_train <- 1/(1+exp(-logit_train)) outcome_train <- factor(runif(nn_train) <= probability_train) if ( sum(incidence_train[ii] <- sum(outcome_train==TRUE))>0 ) break } dataset_train <- data.frame(outcome=outcome_train, predictors_train) index <- c(which(outcome_train==TRUE), sample(which(outcome_train==FALSE), sum(outcome_train==TRUE))) model_logistic <- glm(outcome~., dataset_train, family="binomial") model_logistic_0 <- glm(outcome~., dataset_train[,-2], family="binomial") model_logistic_oversampled <- glm(outcome~., dataset_train[index, ], family="binomial") model_logistic_oversampled_0 <- glm(outcome~., dataset_train[index, -2], family="binomial") model_logistic_coefficients[ii, ] <- coefficients(model_logistic) model_logistic_oversampled_coefficients[ii, ] <- coefficients(model_logistic_oversampled) model_randomForest <- randomForest(outcome~., dataset_train) model_randomForest_0 <- randomForest(outcome~., dataset_train[,-2]) model_randomForest_oversampled <- randomForest(outcome~., dataset_train, subset=index) 
model_randomForest_oversampled_0 <- randomForest(outcome~., dataset_train[,-2], subset=index) predictors_test <- matrix(runif(nn_test * (length(true_coefficients) - 1)), nrow=nn_test) logit_test <- cbind(1, predictors_test)%*%true_coefficients probability_test <- 1/(1+exp(-logit_test)) outcome_test <- factor(runif(nn_test)<=probability_test) dataset_test <- data.frame(outcome=outcome_test, predictors_test) prediction_logistic <- predict(model_logistic, dataset_test, type="response") brier_score_logistic[ii] <- mean((prediction_logistic - (outcome_test==TRUE))^2) prediction_logistic_0 <- predict(model_logistic_0, dataset_test[,-2], type="response") brier_score_logistic_0[ii] <- mean((prediction_logistic_0 - (outcome_test==TRUE))^2) prediction_logistic_oversampled <- predict(model_logistic_oversampled, dataset_test, type="response") brier_score_logistic_oversampled[ii] <- mean((prediction_logistic_oversampled - (outcome_test==TRUE))^2) prediction_logistic_oversampled_0 <- predict(model_logistic_oversampled_0, dataset_test[,-2], type="response") brier_score_logistic_oversampled_0[ii] <- mean((prediction_logistic_oversampled_0 - (outcome_test==TRUE))^2) prediction_randomForest <- predict(model_randomForest, dataset_test, type="prob") brier_score_randomForest[ii] <- mean((prediction_randomForest[,2]-(outcome_test==TRUE))^2) prediction_randomForest_0 <- predict(model_randomForest_0, dataset_test[,-2], type="prob") brier_score_randomForest_0[ii] <- mean((prediction_randomForest_0[,2]-(outcome_test==TRUE))^2) prediction_randomForest_oversampled <- predict(model_randomForest_oversampled, dataset_test, type="prob") brier_score_randomForest_oversampled[ii] <- mean((prediction_randomForest_oversampled[, 2] - (outcome_test==TRUE))^2) prediction_randomForest_oversampled_0 <- predict(model_randomForest_oversampled_0, dataset_test, type="prob") brier_score_randomForest_oversampled_0[ii] <- mean((prediction_randomForest_oversampled_0[, 2] - (outcome_test==TRUE))^2) } #close(pb) 
quantile( (brier_score_logistic - brier_score_logistic_0)/brier_score_logistic_0) quantile( (brier_score_logistic_oversampled - brier_score_logistic_oversampled_0)/brier_score_logistic_oversampled_0) quantile( (brier_score_randomForest - brier_score_randomForest_0)/brier_score_randomForest_0) quantile( (brier_score_randomForest_oversampled - brier_score_randomForest_oversampled_0)/brier_score_randomForest_oversampled_0) ``` Version 2: ``` library(randomForest) library(beanplot) nn_train <- nn_test <- 1e4 n_sims <- 1e2 true_coefficients <- c(-7, 5, rep(0, 9)) incidence_train <- rep(NA, n_sims) model_logistic_coefficients <- model_logistic_oversampled_coefficients <- matrix(NA, nrow=n_sims, ncol=length(true_coefficients)) brier_score_logistic <- brier_score_logistic_oversampled <- brier_score_logistic_0 <- brier_score_logistic_oversampled_0 <- brier_score_randomForest <- brier_score_randomForest_oversampled <- brier_score_randomForest_0 <- brier_score_randomForest_oversampled_0 <- rep(NA, n_sims) #pb <- winProgressBar(max=n_sims) for ( ii in 1:n_sims ) { print(ii)#setWinProgressBar(pb,ii,paste(ii,"of",n_sims)) set.seed(ii) while ( TRUE ) { # make sure we even have the minority # class predictors_train <- matrix( runif(nn_train*(length(true_coefficients) - 1)), nrow=nn_train) logit_train <- cbind(1, predictors_train)%*%true_coefficients probability_train <- 1/(1+exp(-logit_train)) outcome_train <- factor(runif(nn_train) <= probability_train) if ( sum(incidence_train[ii] <- sum(outcome_train==TRUE))>0 ) break } dataset_train <- data.frame(outcome=outcome_train, predictors_train) index <- c(which(outcome_train==TRUE), sample(which(outcome_train==FALSE), sum(outcome_train==TRUE))) model_logistic <- glm(outcome~., dataset_train, family="binomial") model_logistic_0 <- glm(outcome~., dataset_train[,-2], family="binomial") model_logistic_oversampled <- glm(outcome~., dataset_train[index, ], family="binomial") model_logistic_oversampled_0 <- glm(outcome~., 
dataset_train[index, -2], family="binomial") model_logistic_coefficients[ii, ] <- coefficients(model_logistic) model_logistic_oversampled_coefficients[ii, ] <- coefficients(model_logistic_oversampled) model_randomForest <- randomForest(outcome~., dataset_train) model_randomForest_0 <- randomForest(outcome~., dataset_train[,-2]) model_randomForest_oversampled <- randomForest(outcome~., dataset_train, subset=index) model_randomForest_oversampled_0 <- randomForest(outcome~., dataset_train[,-2], subset=index) predictors_test <- matrix(runif(nn_test * (length(true_coefficients) - 1)), nrow=nn_test) logit_test <- cbind(1, predictors_test)%*%true_coefficients probability_test <- 1/(1+exp(-logit_test)) outcome_test <- factor(runif(nn_test)<=probability_test) dataset_test <- data.frame(outcome=outcome_test, predictors_test) prediction_logistic <- predict(model_logistic, dataset_test, type="response") brier_score_logistic[ii] <- mean((prediction_logistic - (outcome_test==TRUE))^2) prediction_logistic_oversampled <- predict(model_logistic_oversampled, dataset_test, type="response") brier_score_logistic_oversampled[ii] <- mean((prediction_logistic_oversampled - (outcome_test==TRUE))^2) prediction_randomForest <- predict(model_randomForest, dataset_test, type="prob") brier_score_randomForest[ii] <- mean((prediction_randomForest[,2]-(outcome_test==TRUE))^2) prediction_randomForest_oversampled <- predict(model_randomForest_oversampled, dataset_test, type="prob") brier_score_randomForest_oversampled[ii] <- mean((prediction_randomForest_oversampled[, 2] - (outcome_test==TRUE))^2) #sampling another testing dataset for "0" model predictors_test <- matrix(runif(nn_test * (length(true_coefficients) - 1)), nrow=nn_test) logit_test <- cbind(1, predictors_test)%*%true_coefficients probability_test <- 1/(1+exp(-logit_test)) outcome_test <- factor(runif(nn_test)<=probability_test) dataset_test_new <- data.frame(outcome=outcome_test, predictors_test) prediction_logistic_0 <- 
predict(model_logistic_0, dataset_test_new[,-2], type="response") brier_score_logistic_0[ii] <- mean((prediction_logistic_0 - (outcome_test==TRUE))^2) prediction_logistic_oversampled_0 <- predict(model_logistic_oversampled_0, dataset_test_new[,-2], type="response") brier_score_logistic_oversampled_0[ii] <- mean((prediction_logistic_oversampled_0 - (outcome_test==TRUE))^2) prediction_randomForest_0 <- predict(model_randomForest_0, dataset_test_new[,-2], type="prob") brier_score_randomForest_0[ii] <- mean((prediction_randomForest_0[,2]-(outcome_test==TRUE))^2) prediction_randomForest_oversampled_0 <- predict(model_randomForest_oversampled_0, dataset_test_new, type="prob") brier_score_randomForest_oversampled_0[ii] <- mean((prediction_randomForest_oversampled_0[, 2] - (outcome_test==TRUE))^2) } #close(pb) round( quantile( (brier_score_logistic - brier_score_logistic_0)/brier_score_logistic_0), 3) round( quantile( (brier_score_logistic_oversampled - brier_score_logistic_oversampled_0)/brier_score_logistic_oversampled_0),3) round( quantile( (brier_score_randomForest - brier_score_randomForest_0)/brier_score_randomForest_0), 3) round( quantile( (brier_score_randomForest_oversampled - brier_score_randomForest_oversampled_0)/brier_score_randomForest_oversampled_0), 3) ```
null
CC BY-SA 4.0
null
2023-03-20T10:44:20.690
2023-04-01T11:36:39.710
2023-04-01T11:36:39.710
224017
224017
null
610043
1
null
null
0
25
I have performed a linear mixed model where I used Tukey's ladder of powers to transform the outcome variable (time) to get normal residuals, which gave a negative lambda. This resulted in a vector of negative values. I would like to represent the LMM on a graph, as I have a 2-way interaction between two categorical variables. However, time obviously cannot be negative, so I think it doesn't really make sense to visualize the transformed values. However, if I just plot the original means and standard deviations, my error bars go below zero (probably because the data is not normal, hence the initial transformation...) So, what is the best way to represent the interaction? Flip the transformed values so they are positive and plot this? Use the original scale (time) and represent the variance in some other way than standard errors? (Although this is primarily a theoretical question, for info I am using R with lme4 and cat_plot from the "interactions" package to plot the data.)
Graphing interactions in linear mixed models with negative log-transformed outcome
CC BY-SA 4.0
null
2023-03-20T10:50:55.743
2023-03-20T10:50:55.743
null
null
288203
[ "mixed-model", "data-visualization", "data-transformation", "standard-error" ]
610044
1
611305
null
1
63
I noticed that in some low-performance neural network models, the value of $R^2$ (coefficient of determination) can be negative. That is, the model is so bad that the mean of the data is a better predictor than the model. The question is: in linear regression models, the multiple correlation coefficient $(R)$ can be calculated as the square root of $R^2$. However, this is not possible for, say, a neural network model that presents a negative $R^2$. In that case, is $R$ mathematically undefined?
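A small sketch (using deliberately bad made-up predictions) shows the two quantities coming apart: $R^2 = 1 - SS_{res}/SS_{tot}$ can be negative for a bad model, while the plain correlation between observations and predictions is still well defined; the identity $R^2 = \mathrm{corr}(y, \hat y)^2$ only holds for least-squares fits with an intercept.

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_hat = -y + 6.0                 # deliberately terrible predictions

ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot       # negative: worse than predicting the mean

r = np.corrcoef(y, y_hat)[0, 1]  # still defined: exactly -1 here
print(r2, r)                     # -3.0 and -1.0
```

So for a general model the square root of $R^2$ is indeed undefined when $R^2 < 0$, but $\mathrm{corr}(y, \hat y)$ can still be reported; they are simply no longer the same thing.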
Is the multiple correlation coefficient $(R)$ undefined in the case of negative determination coefficients $(R^2)$ - Neural network models?
CC BY-SA 4.0
null
2023-03-20T10:52:40.687
2023-04-08T13:42:23.093
2023-04-08T13:42:23.093
247274
346197
[ "machine-learning", "neural-networks", "mathematical-statistics", "multiple-regression", "negative-r-squared" ]
610045
2
null
610031
1
null
Your intuition about overlapping and non-overlapping samples is correct, but that's not what your code is doing. `non_overlapping_samples` and `overlapping_samples` are both 1-dimensional arrays, so `numpy.cov` just computes the sample variance. Try computing the autocorrelation instead.
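A quick check of that suggestion, as a sketch in numpy (same setup as the question): for window size dw, adjacent overlapping differences share dw − 1 increments, so their lag-1 autocorrelation should be near (dw − 1)/dw = 0.9 for dw = 10, while non-overlapping differences are essentially uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.cumsum(rng.choice([-1, 1], 1_000_000))  # simple random walk
dw = 10

overlap = x[dw:] - x[:-dw]        # stride-1 (overlapping) differences
nonover = np.diff(x[::dw])        # stride-dw (non-overlapping) differences

lag1 = lambda d: np.corrcoef(d[:-1], d[1:])[0, 1]
print(lag1(overlap))   # ~ (dw - 1) / dw = 0.9
print(lag1(nonover))   # ~ 0
```

The marginal variance of a single difference is the same (dw increments either way), which is why the question's `np.cov` calls agreed; the dependence only shows up across samples, i.e. in the autocorrelation.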
null
CC BY-SA 4.0
null
2023-03-20T10:58:50.447
2023-03-20T10:58:50.447
null
null
238285
null
610046
1
610344
null
2
105
I'm working with time series data for drug response. And, I wanted to as is there are some alternative ways to analyse it since the FDA package in R is not working in my case. The type of my data is as follows: ``` > head(subset_df) # A tibble: 6 × 4 # Groups: Model, Drug [1] Model Day AUC Drug <chr> <int> <dbl> <chr> 1 AB050 1 0.241 (+)-KT5 2 AB050 3 0.505 (+)-KT5 3 AB050 4 0.598 (+)-KT5 4 AB050 5 0.675 (+)-KT5 5 AB050 6 0.712 (+)-KT5 6 AB050 7 0.734 (+)-KT5 > str(subset_df) gropd_df [102 × 4] (S3: grouped_df/tbl_df/tbl/data.frame) $ Model: chr [1:102] "AB050" "AB050" "AB050" "AB050" ... $ Day : int [1:102] 1 3 4 5 6 7 0 1 2 3 ... $ AUC : num [1:102] 0.251 0.515 0.608 0.685 0.722 ... $ Drug : chr [1:102] "(+)-KT5" "(+)-KT5" "(+)-KT5" "(+)-KT5" ... - attr(*, "groups")= tibble [17 × 3] (S3: tbl_df/tbl/data.frame) ..$ Model: chr [1:17] "AB050" "AB666K" "AB8789" "AB1578" ... ..$ Drug : chr [1:17] "(+)-KT56" "(+)-KT56" "(+)-KT56" "(+)-KT56" ... ..$ .rows: list<int> [1:17] xyplot(AUC ~ Day, data = subset_df, groups = Model, type = "l", xlim = c(0,7), auto.key = list(columns = 4)) ``` [](https://i.stack.imgur.com/YcEp5.png) This plot represents a subset data for a single drug; each line depicts a Model tested with that single drug across time and the y-axis is the AUC value estimated at that time. In general, we have 50 drugs with different models as well. So what I would like to do is an analysis of each Model across time in a functional analysis way in R. In the sense of representing each Model curve as a function and be able to kind of estimate a pattern of the drugs based on the estimated functions for each model. 
This is what I have tried so far, but without luck: ``` #Tried to create a list of functional data objects that the fda package requires for each curve: fdobj_list <- lapply(unique(subset_df$Model), function(model) { model_df <- subset_df[subset_df$Model == model, ] basisobj <- create.bspline.basis(rangeval = range(model_df$Day), nbasis = 10) fdParobj <- fdPar(basisobj, lambda = 1e-4) Data2fd(model_df, argvals = model_df$Day, y = model_df$AUC, fdPar = fdParobj) }) ``` But I cannot make it work. I do not know if I am missing something about the convergence of the fda fit or if the data set is weird; I am quite lost, hehe. Do you know if this is a good way to analyse this data, or is it better to approach it with B-splines? Or how else can I analyse it? Thanks in advance. > Update Ok, reading the paper [https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-019-0666-3](https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-019-0666-3) I tried to implement the Bspline approach: ``` library(bspline) library(spline) model_names <- unique(subset_df$Model) # Define the B-spline function bspline_fun <- function(x, knots) { bs(x, degree = 3, knots = knots, intercept = FALSE) } bspline_df <- data.frame() # Loop through each Model and fit a B-spline curve for (i in 1:length(model_names)) { # Subset the data for the current Model model_data <- subset_df[subset_df$Model == model_names[i],] # Fit a B-spline curve using the bspline function bspline_fit <- bspline_fun(model_data$Day, knots = c(min(model_data$Day), median(model_data$Day), max(model_data$Day))) # Create a data frame with the fitted curve and model name bspline_model <- data.frame(Day = model_data$Day, AUC = predict(bspline_fit)) bspline_model$Model <- model_names[i] # Add the fitted curve to the data frame bspline_df <- rbind(bspline_df, bspline_model) } # Gather the AUC columns into a single column bspline_df <- gather(bspline_df, key = "AUC_id", value = "AUC", AUC.1:AUC.6) # Plot the fitted curves xyplot(AUC ~ Day | 
Model, data = bspline_df, groups = AUC_id, type = "l", auto.key = list(columns = length(unique(bspline_df$AUC_id)))) # Plot the fitted curves xyplot(AUC ~ Day, data = bspline_df, type = "l", auto.key = list(columns = length(model_names))) ``` I get bizarre results.... Do you know what happened here? : [](https://i.stack.imgur.com/JJ2dQ.png) Another simpler approach is just by plotting the smooth lines: ``` library(splines) library(lattice) # Get unique Model names model_names <- unique(subset_df$Model) # Create a color palette my_colors <- c("#1B9E77", "#D95F02", "#7570B3", "#E7298A", "#66A61E", "#E6AB02", "#A6761D", "#666666", "#D53E4F", "#3288BD", "#C51B7D", "#80CDC1", "#F781BF", "#984EA3", "#FFFF33", "#E41A1C", "#034E7B") # Create an empty list to store fitted curves spline_list <- list() # Loop through each Model and fit a spline curve for (i in 1:length(model_names)) { # Subset the data for the current Model model_data <- subset_df[subset_df$Model == model_names[i],] # Fit a spline curve using the smooth.spline function spline_fit <- smooth.spline(model_data$Day, model_data$AUC, df = 5) # Add the fitted curve to the list spline_list[[i]] <- spline_fit } # Plot the fitted curves xyplot(AUC ~ Day, data = subset_df, groups = Model, type = c("p", "smooth"), panel = function(x, y, groups, subscripts, ...) { panel.superpose(x, y, groups = groups[subscripts], subscripts = subscripts, ...) for (i in subscripts) { col <- my_colors[i %% length(my_colors) + 1] llines(predict(spline_list[[i]], x), col = col, lwd = 2) } }, key = list(space = "bottom", text = list(model_names), points = FALSE, lines = TRUE, col = my_colors[1:length(model_names)], lwd = 2)) ``` But I got problems plotting this...: [](https://i.stack.imgur.com/zaxYM.png) > Update I tried to follow this lecture: [https://www.r-users.gal/sites/default/files/fda_usc.pdf](https://www.r-users.gal/sites/default/files/fda_usc.pdf) But the fda.usc package showed tons of errors... 
> Update ``` plot_obj <- NULL # Iterate over each unique Model in the data frame for (model in unique(subset_df$Model)) { # Subset the data frame to only include the current model model_df <- subset_df[subset_df$Model == model, ] # Estimate the smoothing spline for the current model spline_fit <- with(model_df, smooth.spline(Day, AUC)) # Add the spline fit to the plot object plot_obj <- xyplot(AUC ~ Day, data = model_df, type = "p", main = model, xlab = "Day", ylab = "AUC", auto.key = list(lines = TRUE, points = FALSE), panel = function(x, y, ...) { panel.xyplot(x, y, ...) panel.lines(spline_fit$x, spline_fit$y, col = "red") }, add = plot_obj) } print(plot_obj) ``` But I am not getting the smooth line... [](https://i.stack.imgur.com/yxkIA.png) Sorry for the problems. Any insights and help are very welcome. Thanks!
Functional analysis in R with fda
CC BY-SA 4.0
null
2023-03-20T11:30:22.193
2023-03-27T18:47:29.230
2023-03-27T18:47:29.230
260817
260817
[ "r", "splines", "functional-data-analysis", "dose-response" ]
610048
1
null
null
1
20
I have the number of items sold of various products over time from a supermarket. Each product belongs to a category of products. Some categories have a large number of items sold and some categories have small numbers of items sold. This is the total population of items sold, not a sample. [](https://i.stack.imgur.com/wdw9c.png) I am reporting the items sold per month on a graph such as below, along with the % change of the total sales of year t vs year t-1. [](https://i.stack.imgur.com/HKr2B.png) I also have a business requirement to classify the trend of each product based on the below simple rule so that all business stakeholders can understand the reason for each trend classification: - % change >= 20% : Increasing trend - % change <= -20% : Decreasing trend - -20% < % change < 20% : Flat trend Questions: - Is there a statistical approach to determine a minimum number of sales per year per Category that allows for more confidence in the classification of the trend or is this purely a business decision? For example: - For Category A the total sales are X (e.g. 10,000), so for each product within Category A, for any results to have "meaning", the minimum number of sales per year needs to be Y (e.g. 100), where Y is some function of X - Or determine the minimum sample size based on a statistical test (e.g. paired samples t-test?) but then still use the % change rules instead of whether the difference is significant - Or only report the % change for differences that are significant - On a separate but related note, I am concerned that this simplistic rule for classifying trends is going to create misleading information such as the examples below. And if so, would it be better (better = more confident that the trend classification is not just due to random fluctuations) to determine if a trend exists through time-series analysis per product? For example: - For products that have very few sales the % change can jump around a lot e.g. 
from 10 items to 20 items it’s a 100% change, but the actual numbers are quite small from a business perspective - There may be an outlier in one year that makes all the difference in an otherwise flat trendline
Is there a statistical approach to determine the minimum sample size for comparison of sums over time?
CC BY-SA 4.0
null
2023-03-20T11:39:35.500
2023-03-20T11:39:35.500
null
null
383649
[ "time-series", "sample-size", "effect-size" ]
610049
2
null
610011
3
null
## Edit: The example appears to be wrong I am using the setup described in 3.4 and 3.5.1 of [Domain adaptation under structural causal models](https://www.jmlr.org/papers/volume22/20-1227/20-1227.pdf) which matches your description. In the Source environment we have: $E_{X\sim\mathcal{P}}[\beta_1 + \beta_2 + \beta_3] = \frac{1}{3} + \frac{1}{3} - \frac{2}{3} = 0$ In the target environment we have: $E_{X\sim\widetilde{\mathcal{P}}}[-\beta_1 - \beta_2 + \beta_3] = -\frac{1}{3} - \frac{1}{3} - \frac{2}{3} = -\frac{4}{3}$. The constraint $E_{X\sim\mathcal{P}}[X^\top \beta] = E_{X\sim\widetilde{\mathcal{P}}}[X^\top \beta]$ for DIP$^{(m)}$-mean in 3.4 would therefore not be fulfilled. --- ## Perhaps the interventions were intended to be different? Looking at figures 3 and 4, it seems the authors usually define the target interventions such that $\widetilde{a} = -a^{(1)}$. The scenario in figure 2 seems to be the exception and the numbers they provide would be correct for $\widetilde{a} = -a^{(1)}$. Perhaps that's what they had in mind?
null
CC BY-SA 4.0
null
2023-03-20T11:46:17.127
2023-03-20T16:52:38.853
2023-03-20T16:52:38.853
250702
250702
null
610050
1
610054
null
2
30
I have a data set with a continuous LHS variable y, continuous RHS variable x, and a dummy D. I am running two OLS regressions: $$y_i=\beta_0+\beta_1*x_i+\epsilon_i$$ and $$y_i=\gamma_0+\gamma_1*x_i*(D_i=1)+\gamma_2*x_i*(D_i=0)+\eta_i$$ My estimated $\hat{\beta_1}$ coefficient is larger than both $\hat{\gamma_1}$ and $\hat{\gamma_2}$, which confuses me. Shouldn't it be the weighted average of the two sample-specific slopes (with positive weights)? The regression samples do not change. With the Stata code below, I can replicate the result, but I don't have a good intuition of what is happening: ``` clear all set seed 1234 set obs 10000 gen x=runiform(0,100) gen d=x>50 gen y=2-5*x+rnormal(0,1) if d==0 replace y=2-2*x+rnormal(0,1) if d==1 reg y x predict y_pool reg y c.x#i.d predict y_int twoway (scatter y x if d==0) (scatter y x if d==1) (scatter y_pool x, color(green)) (scatter y_int x, color(orange)), legend(order(1 "D=0 sample observations" 2 "D=1 sample observations" 3 "Pooled predicted values" 4 "Sample-specific predicted values")) ```
How can the pooled slope be outside the sample-specific slopes
CC BY-SA 4.0
null
2023-03-20T11:47:44.120
2023-03-20T12:36:24.647
2023-03-20T12:36:24.647
309751
309751
[ "interaction" ]
610051
2
null
435031
0
null
Random effects (e.g. a random effect of genre on the intercept and/or on gender) are one option here, where smaller categories (i.e. genres with fewer data points) would be shrunk towards the average genre and larger ones (with more data points) are subject to less shrinkage. One can also approach this with multiple hierarchy levels if some genres are related at some higher level and others are not. Random effects are still pretty interpretable (it's really just a form of data-informed shrinkage for regression coefficients), so they are an option for both inference and prediction. If your goal is solely prediction and you don't care so much about interpretability, there are quite a few other options, usually to do with how you represent the input: - Embeddings: e.g. embedding layers within a neural network, either trained while training a neural network that you use for this task, taken from such a NN but used in a different model, or trained with some other target and then used (that target could e.g. be predicting which types of genres get played together on the same radio station); the embedding for the genre name from a large language model (or just something like word2vec); or anything else that already exists or that you could build - Target encoding (kind of a form of random effect, where you reduce a categorical variable to some kind of - possibly regularized - summary of the outcome split by the categories; requires careful definition/handling to avoid overfitting/target leakage) - Frequency (or some other encoding): use the count of how often the genre is listened to as a numeric representation Plus, if you primarily want to predict, you don't necessarily need to use a GLM; various other models (e.g. LightGBM, neural networks, ensembles of these etc.) could be options.
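As a concrete illustration of the frequency-encoding option, here is a minimal base-R sketch on a toy genre vector (the genre names are made up for the example):

```r
# Frequency encoding: replace each category by how often it occurs
genre <- c("rock", "jazz", "rock", "pop", "rock", "jazz")
counts <- table(genre)                # jazz: 2, pop: 1, rock: 3
genre_freq <- as.numeric(counts[genre])
genre_freq                            # 3 2 3 1 3 2
```

The resulting numeric column can then be fed to any of the models mentioned above in place of the raw category labels.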
null
CC BY-SA 4.0
null
2023-03-20T11:49:36.530
2023-03-20T11:49:36.530
null
null
86652
null
610052
2
null
610039
4
null
With only 2 observations per subject, you probably cannot reliably estimate random slopes (subject-specific slopes) for time. Assuming "group" refers to the health intervention, the model I would first consider (in lme4 lmer) is ``` lmer(variable ~ (1|subject)+time*group) ``` This gives you the interaction between time and group (which is what you need to see whether your intervention had an effect on the variable), taking into account the non-independence of your observations. Another possibility would be to use single-level (lm) regression with [clustered standard errors](https://www.r-bloggers.com/2021/05/clustered-standard-errors-with-r/).
null
CC BY-SA 4.0
null
2023-03-20T12:00:26.997
2023-03-21T08:21:13.500
2023-03-21T08:21:13.500
357710
357710
null
610053
1
null
null
0
38
From the book *Bayesian Decision Analysis: Principles and Practice*, I am trying to prove $$\begin{aligned} \mathbb{P}(I=i\mid X=x)=\frac{\exp(O(i,1\mid x))}{1+\sum_{k=2}^n \exp(O(k,1\mid x))} \end{aligned}$$ where $O(i,k\mid x)=\log\left( \frac{\mathbb{P}(I=i\mid X=x)}{\mathbb{P}(I=k\mid X=x)}\right)$
Posterior Probabilities in terms log odds ratio
CC BY-SA 4.0
null
2023-03-20T12:22:06.200
2023-03-20T13:17:05.243
null
null
null
[ "bayesian", "likelihood", "naive-bayes" ]
610054
2
null
610050
1
null
> Shouldn't it be the weighted average of the two sample-specific slopes (with positive weights)? First of all, you are not calculating a weighted average there. Your second model calculates a separate slope for each group; the first one, a single slope for all the samples. You are not calculating the weighted average anywhere, at least this is not what you are showing. Second, answering your question: no, it shouldn't in the case you describe. One example where the pooled slope falls outside the group-specific slopes is [Simpson's paradox](https://en.wikipedia.org/wiki/Simpson%27s_paradox). You can have completely different slopes for the groups and for all the data. One such case is shown below, where each group has a positive slope, but the overall slope is negative. ![Simpson's paradox diagram from Wikipedia](https://upload.wikimedia.org/wikipedia/commons/thumb/4/47/Simpson%27s_paradox_continuous.svg/320px-Simpson%27s_paradox_continuous.svg.png) If you had instead fitted separate models for each group and calculated a weighted average of the slopes, then, as a convex combination, the result would fall somewhere between the two original slopes.
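A minimal R sketch of the same phenomenon, with simulated data whose slopes and offsets are chosen purely for illustration:

```r
# Each group has a within-group slope of +1, but group 1 sits to the
# right of and below group 0, so the pooled slope comes out negative.
set.seed(1)
n <- 200
g <- rep(0:1, each = n)
x <- runif(2 * n) + 2 * g                               # shift group 1 right
y <- 5 - 4 * g + (x - 2 * g) + rnorm(2 * n, sd = 0.1)   # and down

b_pooled <- unname(coef(lm(y ~ x))["x"])
b_g0 <- unname(coef(lm(y ~ x, subset = g == 0))["x"])
b_g1 <- unname(coef(lm(y ~ x, subset = g == 1))["x"])
round(c(pooled = b_pooled, group0 = b_g0, group1 = b_g1), 2)
# group slopes come out near +1, while the pooled slope is negative
```

The pooled slope is dominated by the between-group separation of the clouds, not by the within-group relationships, which is exactly what happens in the Stata example in the question.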
null
CC BY-SA 4.0
null
2023-03-20T12:29:30.543
2023-03-20T12:29:30.543
null
null
35989
null
610056
1
null
null
0
16
I have 1000 scores (1-20 integer scale with equal intervals); the data are quite skewed but present normally distributed differences. Can I use a paired t-test with this data despite it not being strictly continuous (provided it satisfies the t-test assumptions)? Thanks, Harry
Can the paired t-test be used with scores?
CC BY-SA 4.0
null
2023-03-20T12:32:34.563
2023-03-20T12:32:34.563
null
null
380069
[ "hypothesis-testing", "statistical-significance", "inference" ]
610057
1
null
null
0
37
I want to analyze whether the effect of an intervention on the level of depressive symptoms is moderated by a variable called TA (a continuous variable). I will do repeated measures analyses in SPSS with depressive symptoms used as the dependent variable, which is measured at 5 time points. I think the independent variable (between-subjects factor?) will be the intervention (4 groups). The moderator variable (M) will be measured at one point in time as the moderator (for the first analysis), and an additional analysis will be conducted with the mean of all measures of TA (TOTAL = 4 points in time). Is this correct? And does somebody know how to analyze this in SPSS (model? factors? levels?). Thank you!
Moderation in Repeated Measures Design, how to analyze?
CC BY-SA 4.0
null
2023-03-20T12:43:00.307
2023-03-20T12:43:00.307
null
null
383653
[ "interaction" ]
610059
2
null
610039
5
null
Your models 1-3 ignore one or both main effects (except when your variables are coded as factors; see [https://stackoverflow.com/questions/40729701/how-to-use-formula-in-r-to-exclude-main-effect-but-retain-interaction](https://stackoverflow.com/questions/40729701/how-to-use-formula-in-r-to-exclude-main-effect-but-retain-interaction)). You rarely want to ignore main effects. Since only model 4 contains all main effects and the interaction, this is most likely the most appropriate one. This is usually written as `time*group`. For a discussion of interaction effects without main effects, look for example here: [Including the interaction but not the main effects in a model](https://stats.stackexchange.com/questions/11009/including-the-interaction-but-not-the-main-effects-in-a-model). Edit: I actually just learned that `lme4` would not even let you estimate a random slope if you have just two time points, because you are trying to estimate just as many random effects as you have observations (or even fewer if there is missing data). As Ben Bolker [points out](https://stackoverflow.com/questions/26465215/random-slope-for-time-in-subject-not-working-in-lme4), although with `nlme` the estimation appears to work, the model will not be able to actually distinguish between random slope variance and residual variation. With health interventions whose outcome develops over time, the standard is to use an ANCOVA adjusted for baseline, which will turn out to be quite similar to the mixed model with random intercept. So if you have measured at baseline (i.e. 
at the time of randomization, or when the treatment was assigned), the usual way to estimate the intervention effect would be: ``` lm(variable ~ baseline + time*group) ``` Or if you are really interested in the variance of the random intercept, you could use ``` lme(variable ~ baseline + time*group, random= ~ time|subject) ``` but that will give (almost) identical results to the other model, and the variance of the random intercept will be (almost) the same as the variance of the baseline values. Also be aware that the interpretation of the results can be somewhat challenging in the presence of interaction effects. You might want to read further into the topic, and examine the fitted model further with pairwise comparisons (e.g. with the package `emmeans`) to get a better understanding of the group- and time-specific estimates.
null
CC BY-SA 4.0
null
2023-03-20T12:59:29.983
2023-03-20T22:03:42.910
2023-03-20T22:03:42.910
183460
183460
null
610060
2
null
610053
0
null
Just to save on notation, let $P_i = P(I=i|X=x)$ $$P_i=\frac{\exp(log(\frac{P_i}{P_1}))}{1+\sum_{k=2}^n \exp(log(\frac{P_k}{P_1}))}$$ $$ = \frac{\frac{P_i}{P_1}}{1+\sum_{k=2}^n \frac{P_k}{P_1}}$$ $$ = \frac{\frac{P_i}{P_1}}{1+\frac{1}{P_1}\sum_{k=2}^n P_k}$$ Assuming that $\sum_{k=1}^n P_k = 1$, then $\sum_{k=2}^n P_k = 1 - P_1$ $$ = \frac{\frac{P_i}{P_1}}{1+\frac{1}{P_1}(1-P_1)} = P_i$$
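As a quick numeric sanity check of the identity, here is a small R verification with arbitrary posterior probabilities (the numbers are chosen just for illustration):

```r
P <- c(0.5, 0.3, 0.2)                  # P_1, ..., P_n, summing to 1
O <- log(P / P[1])                     # O(i, 1 | x) for each i
rhs <- exp(O) / (1 + sum(exp(O[-1])))  # right-hand side of the identity
all.equal(rhs, P)                      # TRUE
```

The denominator evaluates to $1 + (1 - P_1)/P_1 = 1/P_1$, so the ratio recovers each $P_i$, exactly as in the derivation above.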
null
CC BY-SA 4.0
null
2023-03-20T13:17:05.243
2023-03-20T13:17:05.243
null
null
212798
null
610061
1
null
null
0
29
I am currently working on my thesis and am interested in exploring the relationship between biological sex (female or male) and 2 interval variables related to parenting and development. I made a mediation model in which biological sex is the x, one of the interval variables is Y, and the other interval variable the M. I am now wondering whether it is possible (and meaningful?) to conduct this analysis in this way. I get stuck at how to compare male and female in this model, because usually one of them is coded as 0. My hypothesis is that males receive less of M, which leads to more Y. Conversely, females get more M, which leads to less Y. My supervisor and I get stuck on whether mediation is fitting for my research question, because the model looks very logical, but we get stuck in how to compare findings. I am not comfortable with posting my full research question and I do not know whether it is possible to fully answer this question with this information. If Mediation is not fitting for these hypotheses, what would you recommend? Thanks for helping me out!
Is it possible to have biological sex as x in a mediation model if you want to compare male and female findings?
CC BY-SA 4.0
null
2023-03-20T13:20:17.723
2023-06-03T07:44:53.333
2023-06-03T07:44:53.333
121522
383656
[ "mediation" ]
610062
2
null
311360
0
null
[here](https://www.statsmodels.org/dev/examples/notebooks/generated/ols.html) is e.g. of OLS non-linear curve but linear in parameters. Just consider OLS assumptions [e.g. here](https://statisticsbyjim.com/regression/ols-linear-regression-assumptions/): > OLS Assumption 1: The regression model is linear in the coefficients and the error term OLS Assumption 2: The error term has a population mean of zero OLS Assumption 3: All independent variables are uncorrelated with the error term. This assumption is also referred to as exogeneity. When this type of correlation exists, there is endogeneity. OLS Assumption 4: Observations of the error term are uncorrelated with each other OLS Assumption 5: The error term has a constant variance (no heteroscedasticity) OLS Assumption 6: No independent variable is a perfect linear function of other explanatory variables OLS Assumption 7: The error term is normally distributed (optional) So, if you do not care about errors & stat.significance - you can afford to use biased model for your purposes. Nevertheless avoid autocorrelation in residuals - as it can serve as evidence of any unrecognized pattern still being latent & hidden for your linear model. And (as of assumption 6) the problem of Multicollinearity still can be the real problem for OLS (resulting in inflating the variance). To detect this problem - can use [VIF](https://corporatefinanceinstitute.com/resources/data-science/variance-inflation-factor-vif/): > Variance inflation factor (VIF) is used to detect the severity of multicollinearity (>4) in the ordinary least square (OLS) regression analysis. & can use Correction of Multicollinearity as a solution to this problem: - PCA & PLS, but be cautious about using them 'cause they can help only in the cases when really multicollinearity exist - Or just remove highly correlated variables (Xs) from your regression equation P.S. 
remove outliers from the dataset initially (before regressing), because they also inflate variance and lead to biased estimates. =========== If the relationships between x and y are really non-linear in nature, then use logarithmic or semi-logarithmic [models](https://www.rea.ru/ru/org/cathedries/mathmek/Documents/Study%20material/Lecture%20Notes%202.pdf) - it's all about transformations of the Xs at the preprocessing stage. P.S. linearization of dependencies (with betas) just serves the ease of using linear-algebra calculations to model the process. P.P.S. and even [beta](https://stats.stackexchange.com/a/146935/347139) can create a non-linear dependency in your linear regression model
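To make the VIF point above concrete, here is a hedged base-R sketch that computes VIF by hand as $1/(1-R^2_j)$, where $R^2_j$ comes from regressing predictor $j$ on the other predictors; the data are simulated, with the collinearity built in on purpose:

```r
set.seed(42)
n <- 500
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.2)   # nearly collinear with x1
x3 <- rnorm(n)                  # unrelated to the others
X <- cbind(x1, x2, x3)

# VIF_j = 1 / (1 - R^2 from regressing predictor j on the rest)
vif <- sapply(1:ncol(X), function(j) {
  r2 <- summary(lm(X[, j] ~ X[, -j]))$r.squared
  1 / (1 - r2)
})
round(vif, 1)   # x1 and x2 land far above the rule-of-thumb 4; x3 near 1
```

In practice one would usually call `car::vif()` on a fitted model instead; the manual version is shown here only to make the definition explicit.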
null
CC BY-SA 4.0
null
2023-03-20T13:44:23.503
2023-03-20T14:13:43.227
2023-03-20T14:13:43.227
347139
347139
null
610063
2
null
609697
2
null
You don't have many details about the process, so you can aim only at a rough approximation of the distribution. The simple, approximate solution is not hard though. You know how often animals of different sizes appear in the population, and we need those frequencies to hold in the samples. There is another constraint: the total weight of all the animals in the sample should average 1752 kg. Let's denote the weight per animal type as $x_i$ and the "estimated percentage each animal contributes to the total ecosystem" as $w_i$; then it's a [categorical distribution](https://en.wikipedia.org/wiki/Categorical_distribution) with $\Pr(X = x_i) = w_i$. From here, we can find the expected weight of a single animal $E[X] = \sum_i x_i w_i = 439.96$. We can also calculate how many animals we need so that on average they have the desired weight. You want to sample $n$ animals such that their total weight is $T = 1752$, so $E[n X] = T$, and by the linearity of expectation, $E[nX] = nE[X]$, so we can find $n = T / E[X]$, which is equal to $1752 / 439.96 = 3.98$. So you need to sample with replacement $\approx4$ animals with the probabilities $w_i$. In R code, this is ``` > x <- c(75, 216, 700, 2500, 5000, 8500, 25000) > w <- c(49.3, 36.8, 6, 6.7, 0.6, 0.4, 0.2) > w <- w/sum(w) > summary(replicate(100000, sum(sample(x, size=4, prob=w, replace=TRUE)))) ## Min. 1st Qu. Median Mean 3rd Qu. Max. ## 300 582 723 1760 2725 50291 ``` Notice that the mean is $1760$, which is consistent with $4 \times E[X]$. ``` > sum(x * w) ## [1] 439.963 > 4 * sum(x * w) ## [1] 1759.852 ``` This is not equal to $1752$, which shows us that either the numbers you have are not exact, or the system is more complicated and you would need to make more assumptions for a better approximation. One thing you could do is to make $n$ random instead of fixed. 
For example, you could assume that $n$ follows a [Poisson distribution](https://en.wikipedia.org/wiki/Poisson_distribution) with $\lambda = 3.98$, to get a much better approximation: ``` > Ex <- sum(x * w) > T <- 1752 > En <- T/Ex > summary(replicate(100000, sum(sample(x, size=rpois(1, En), prob=w, replace=TRUE)))) ## Min. 1st Qu. Median Mean 3rd Qu. Max. ## 0 366 775 1752 2650 51907 ``` Notice that we got a better approximation of the mean, but at the cost of making an assumption about the distribution of the counts. This assumption may or may not make sense for the problem. The same applies to any other approach to the simulation that you would take: you have few details, so you would need to make smaller or larger assumptions about the process and do it wisely.
null
CC BY-SA 4.0
null
2023-03-20T13:44:26.973
2023-03-20T14:22:06.917
2023-03-20T14:22:06.917
35989
35989
null
610064
1
null
null
0
27
In these slides from Hinton ([https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf](https://www.cs.toronto.edu/%7Etijmen/csc321/slides/lecture_slides_lec6.pdf)) there is this statement: [](https://i.stack.imgur.com/xK7Dr.png) I don't understand why "The error derivatives for the hidden units will all become tiny" if we start with a big learning rate. And if so, would there be the vanishing gradient problem? I found conflicting information on the internet... Could you please explain what happens in this specific situation, both from an intuitive and a mathematical point of view?
Why do the error derivatives become small if we start with a large learning rate?
CC BY-SA 4.0
null
2023-03-20T13:46:20.097
2023-03-20T14:32:47.537
null
null
383660
[ "machine-learning", "neural-networks", "gradient-descent" ]
610065
1
610081
null
1
56
For a normally distributed random variable $X$, we can "standardize" $X$ by defining a new random variable $\frac{X-\mu}{\sigma}$, where $\mu$ is the mean of $X$ and $\sigma$ is the standard deviation of $X$. Now, why do we do this? A textbook said that it was to make the calculation of probability easier, since we can refer to the standard normal table once we have the standard normal. However, I feel like this reason is outdated, as I believe we no longer use the standard normal table. One reason I can think of is that many statistical methods rely on the chi-squared distribution, which is the distribution of the sum of squared independent standard normal variables. Are there any others?
Importance of the standardization of a normal distribution
CC BY-SA 4.0
null
2023-03-20T13:50:35.973
2023-03-20T16:25:22.167
2023-03-20T16:25:22.167
345611
295387
[ "probability", "distributions", "normal-distribution", "standardization", "z-score" ]
610066
2
null
608813
0
null
Simulation gives us ~2.2%. You could simulate to check your answer in `R`: ``` people <- c("A","J",1:8) # let A = Ari and J = Jamaal nsim <- 100000 count <- rep(NA,nsim) for (i in 1:nsim){ x1 <- sample(people,1,replace = F) x2 <- sample(people[-which(people==x1)],1,replace = F) if (sum(c(x1,x2) == c("A","J")) == 2 | sum(c(x1,x2) == c("J","A")) == 2){ count[i] = 1} else count[i] = 0 } sum(count)/nsim*100 ```
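For comparison with the simulation, the exact probability, assuming the two slots are filled uniformly at random without replacement from the 10 people, is $2/(10 \cdot 9)$:

```r
# Two favourable orderings (A then J, or J then A)
# out of 10 * 9 equally likely ordered pairs
p_exact <- 2 / (10 * 9)
round(100 * p_exact, 2)   # 2.22, matching the ~2.2% from the simulation
```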
null
CC BY-SA 4.0
null
2023-03-20T13:59:24.350
2023-03-20T13:59:24.350
null
null
29137
null
610067
1
null
null
0
16
I have the following model with exogenous continuous variable $x_i$, endogenous continuous variable $y_i$, and some $k$ constant between the maximum and minimum values of $x_i$: $$y_i=\gamma_0+\gamma_1*x_i*I[x_i\leq k]+\gamma_2*x_i*I[x_i>k]+\eta_i$$ If I estimate $$y_i=\beta_0+\beta_1*x_i+\epsilon_i$$ can my estimate of $\beta_1$ be interpreted as average treatment effect across $I[x_i \leq k]$ and $I[x_i>k]$ subsamples? I attach a Stata code below, which generates an example of this structure. (Notice in this how the estimate of $\beta_0$ is way off from $\gamma_0$ and that $\beta_1$ falls outside of the actual $\gamma_1$ and $\gamma_2$ parameters.) ``` clear all set seed 1234 set obs 10000 gen x=runiform(0,100) gen d=x>50 gen y=2-5*x+rnormal(0,1) if d==0 replace y=2-2*x+rnormal(0,1) if d==1 reg y x predict y_pool reg y c.x#i.d predict y_int twoway (scatter y x if d==0) (scatter y x if d==1) (scatter y_pool x, color(green)) (scatter y_int x, color(orange)), legend(order(1 "D=0 sample observations" 2 "D=1 sample observations" 3 "Pooled predicted values" 4 "Sample-specific predicted values")) ```
Bias or average treatment effect? (in a model where coefficients vary as a function of explanatory variables)
CC BY-SA 4.0
null
2023-03-20T14:01:50.693
2023-03-20T14:45:18.040
2023-03-20T14:45:18.040
309751
309751
[ "interaction", "bias", "treatment-effect" ]
610069
1
null
null
0
35
My clusters are arranged according to a time series, and I want to compute the silhouette score for the clustering performed, considering that they follow an order. Therefore the nearest cluster to the present cluster will be the one that comes next in the time series. What I have tried, Data ``` # Sample Matrix cell_1 <- c(2,2,4) cell_2 <- c(2,3,2) cell_3 <- c(0,1,2) cell_4 <- c(0,2,1) cell_5 <- c(5,2,3) cell_6 <- c(9,2,3) cell_7 <- c(1,2,3) cell_8 <- c(0,2,1) cell_9 <- c(5,2,5) cell_10 <- c(9,2,3) test_mat <- as.matrix(rbind(cell_1, cell_2, cell_3, cell_4, cell_5, cell_6, cell_7, cell_8, cell_9, cell_10)) colnames(test_mat) <- c("gene1", "gene2", "gene3") # Cluster information cluster.time.series <- data.frame(label = c(0, 0, 1, 1, 1, 2, 2, 3, 3, 3), member = c("cell_1", "cell_2", "cell_3", "cell_4", "cell_5", "cell_6", "cell_7", "cell_8", "cell_9", "cell_10")) ``` Calculation intra cluster mean ``` unique_clusters <- unique(cluster.time.series$label) cluster_means <- NULL for (i in unique_clusters) { # Select cells of the clusters sel_cells <- test_mat[rownames(test_mat) %in% cluster.time.series[cluster.time.series$label == i, "member"], ] # Calculate distance sel_cells_dist <- dist(sel_cells, method = "euclidean") # Subset distance matrix and calculate mean cluster_means <- c(cluster_means, mean(sel_cells_dist)) } cluster_means ``` Calculation of separation based on the natural order of cluster labels ``` nearest_means <- NULL for (i in unique_clusters) { # Select cells of present cluster sel_cells <- test_mat[rownames(test_mat) %in% cluster.time.series[cluster.time.series$label == i, "member"], ] # Save present cluster Index present_clus_index <- which(unique_clusters == i) if (present_clus_index + 1 > length(unique_clusters)){break} next_cluster_index <- present_clus_index + 1 # Select the successive cluster next_cluster <- unique_clusters[next_cluster_index] # Next cluster cells next_sel_cells <- test_mat[rownames(test_mat) %in% 
cluster.time.series[cluster.time.series$label == next_cluster, "member"], ] # R-bind combined.matrix <- rbind(sel_cells, next_sel_cells) combined.matrix.dist <- dist(combined.matrix, method = "euclidean") # Make all possible links of the cells separation_points <- as.data.frame(crossing(rownames(sel_cells),rownames(next_sel_cells))) colnames(separation_points) <- c("present", "next") # nearest mean mean_vec <- apply(separation_points, 1,function(x, dist_mat = as.matrix(combined.matrix.dist)){ # Make a matrix vals <- dist_mat[rownames(dist_mat) == x[1], colnames(dist_mat) == x[2]] }) print(length(mean_vec)) tot_mean <- mean(mean_vec) print(paste(i, "with", next_cluster, "nearest mean: ", tot_mean)) # Distance of point nearest_means <- c(nearest_means, tot_mean) } ``` Calculation of Silhouette Score ``` # calculate silhouette score for each data point silhouette_scores <- (nearest_means - cluster_means[-4]) / pmax(nearest_means, cluster_means[-4]) mean(silhouette_scores) ``` Problems with the script - I have a huge data matrix, and this script consumes too much time - I implemented it using the silhouette score formula; however, different websites show different methods. I am not confident enough that it is the right implementation. - Is there a library to compute the silhouette score taking cluster order into account?
Silhouette Score for ordered clusters
CC BY-SA 4.0
null
2023-03-20T14:04:02.333
2023-03-20T14:04:02.333
null
null
361238
[ "r", "machine-learning", "time-series", "clustering", "unsupervised-learning" ]
610070
2
null
610064
1
null
So in the case of sigmoid neurons, having large weights means the hidden unit output saturates, so changes in the weights have minimal effect on the hidden unit output, and therefore on the error gradient too. I can't see this exactly happening with ReLU-type nonlinearities, so I would question whether these notes were specific to sigmoids (pre-ReLU?). ReLU: if you have large weights then you are far from the nonlinear region at zero, so you have your sigmoid output layer and lots of effectively linear hidden layers, and the saturation of the output layer has the same effect as mentioned above for sigmoid hidden units
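A tiny R sketch of the saturation argument for a single sigmoid unit (the weights and the input are chosen purely for illustration): as the weight grows, the derivative $\sigma'(z) = \sigma(z)(1-\sigma(z))$ that scales the backpropagated error collapses towards zero.

```r
sigmoid <- function(z) 1 / (1 + exp(-z))
x <- 1                                   # a fixed positive input
for (w in c(0.5, 2, 10, 50)) {
  z <- w * x
  grad_scale <- sigmoid(z) * (1 - sigmoid(z))   # sigma'(z)
  cat(sprintf("w = %4.1f  sigma'(wx) = %.3e\n", w, grad_scale))
}
```

For w = 50 the derivative is on the order of exp(-50), which is why a large initial step that pushes the weights into this regime can effectively freeze learning for sigmoid hidden units.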
null
CC BY-SA 4.0
null
2023-03-20T14:08:05.670
2023-03-20T14:32:47.537
2023-03-20T14:32:47.537
27556
27556
null
610072
1
610188
null
3
59
In order to visually compare two models (logistic regression, in case that it matters) I thought of plotting the contribution of the individual observations to the AIC of the respective model. The plot looks like this: [](https://i.stack.imgur.com/Q6hsG.png) One can see that, although for some observations the full model makes larger errors than the simple one (i.e. a larger contribution to the AIC), for the majority of the observations the per-observation AIC is lower for the full model. The linear regression (the orange line) reflects that by being below the diagonal dotted line. This, I hope, gives some visual support for using the full model---assuming that the model selection is based on the AIC in the first place. Are plots like these actually used in practice and do they have an established name? If not, why? Is this plot in any way misleading or unclear?
Visually comparing two models based on the per-observation AIC
CC BY-SA 4.0
null
2023-03-20T14:13:26.883
2023-03-21T15:00:31.227
null
null
169343
[ "predictive-models", "data-visualization", "aic" ]
610073
1
null
null
0
30
When we need to calculate the ACF of a SARMA model, is there any shortcut concerning the lags that can help us derive the ACF? For example, if we have a SARMA(1,1)x(1,0)_6 model, then to find γ(0) and calculate the ACF we would need to solve a series of 6 simultaneous equations with a lot of substitutions. Is there a shortcut, where we can see for which lags k we have γ(k) = 0, to speed up the process?
ACF lags of SARMA(A,B)x(a,b)_s (Seasonal ARMA model)
CC BY-SA 4.0
null
2023-03-20T14:35:05.490
2023-03-20T14:56:26.277
2023-03-20T14:56:26.277
53690
373194
[ "time-series", "arima", "seasonality", "acf-pacf" ]
610075
2
null
49052
0
null
With really huge data, the choice between splines and polynomial interpolation (either Newton or Lagrange - the deterministic ones) settles itself: [splines](https://towardsdatascience.com/polynomial-interpolation-3463ea4b63dd) are more flexible ("using many polynomials in a piece-wise function rather than defining one overall polynomial")... And the problem of overfitting really has other causes (see the marked answer for the statistical view, or [here](https://people.duke.edu/%7Eccc14/bios-823-2020/notebooks/B01_ML_for_DS.html#B2.1.2.-Remedies-for-over-fitting) for the ML view) - you can also create your own ML solution or neural network with Keras or TensorFlow
null
CC BY-SA 4.0
null
2023-03-20T14:42:52.267
2023-03-20T15:53:02.613
2023-03-20T15:53:02.613
347139
347139
null
610076
1
null
null
0
32
We are currently using the MCMCglmm package to study the phylogenetic and climatic distribution of deciduousness for a tropical flora. However, we have some difficulties with the predictions of a categorical `MCMCglmm` model. The predictions of the model seem to be overestimated, but we cannot find the source of this overestimation. We are trying to calibrate a model testing the effect of biome (fixed effect, 3 categories: forest vs. generalist vs. savanna) on the "leaf habit" of tree species (each species takes one of the 2 categories: "deciduous" or "evergreen"). Genus-level phylogenetic relationships were considered as a random effect (phylo). For each species, genus identity was included as a second random effect (Accepted_genus). The model in R takes the following form: ``` MCMCglmm(leaf_habit ~ biomes, random = ~phylo + Accepted_genus, family = "categorical", ginverse = list(phylo = inverseA(phylo, nodes = "TIPS", scale = T)$Ainv), prior = prior_categorial, data = data, nitt = nitt, burnin= burning, thin = thin, pr=T) ``` Here are the outputs of the model with 200,000 iterations: [](https://i.stack.imgur.com/UWQo2.png) Below is a graph showing the raw data (percentage of evergreen in each biome) and the model estimates obtained with the predict() function. The red dots correspond to the model predictions. The barplots represent the percentage of evergreen in each biome according to the raw data. Even if these values do not express exactly the same thing, I have the impression that we should obtain roughly the same order of magnitude. [](https://i.stack.imgur.com/spRXU.png) Perhaps this divergence is normal... But we find this curious because the inference algorithm seeks to maximise the likelihood × prior, doesn't it? We tried changing the variance of the priors, but it doesn't change anything. Out of curiosity, we removed the phylogenetic tree from the random effects, and we obtained probabilities that fit the data much better. 
If necessary, I can send the script for more details. In advance, thank you very much for your answers, Best regards,
Categorical model predictions with MCMCglmm
CC BY-SA 4.0
null
2023-03-20T14:53:56.727
2023-03-20T14:59:16.180
2023-03-20T14:59:16.180
56940
383664
[ "bayesian", "mixed-model", "categorical-data", "predictive-models", "phylogeny" ]
610077
2
null
595313
1
null
$R^2$ in this situation is UNINTERESTING. (This is not to say, however, that the question is not worth asking, so this statement is not a criticism of the OP.) The point of regression is that we want tight estimates of some variable ($y$) of interest. When we just have $y$, we might have more variability than is desired. Consequently, we measure some other variables (features) that influence $y$. This way, we can use those features to explain some, preferably all, of the variability in $y$. When there is no variability in $y$, there is no regression worth doing, and any statistics derived from such a regression are worthless. This situation is uninteresting from a statistical standpoint.
null
CC BY-SA 4.0
null
2023-03-20T15:07:05.390
2023-03-20T15:07:05.390
null
null
247274
null
610078
1
null
null
1
13
This is a purely theoretical question (for the moment at least), so not a lot of details to share. If I plan an analysis of residuals following a chi-squared test of independence, I was wondering if I should calculate my sample size differently compared with when I plan "just" a chi-squared test without analysing the (adjusted standardized) residuals. My usual workflow is the following for the "vanilla" chi-squared test of independence: I choose a minimum effect size (Cohen's w) of interest, plus my alpha and power levels (usually 0.05 and 0.8), and then I plug all that into the G*Power software or the R pwr library. As far as I know, these programs do not have special options for the case of analysis of residuals in contingency tables, so I guess that I would have to write a script myself to determine the required sample size. But in fact, I have absolutely no idea if I'm off the tracks, or if residual analysis is really a problem that should be tackled differently (and if it's the case, where to begin). I guess I would have to figure out a minimum effect size of interest for each cell and apply a multiple-test correction to my alpha level, but besides that I've no idea what to do to determine the required sample size. Thanks for any guidance.
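To illustrate the kind of script I imagine writing (purely a sketch: the cell probabilities below are hypothetical, and I use the normal approximation for the adjusted standardized residuals plus a Bonferroni correction over cells):

```python
import math
import numpy as np

def adjusted_residuals(table):
    """Adjusted standardized residuals (O - E) / sqrt(E (1 - p_row)(1 - p_col))."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    r = table.sum(axis=1, keepdims=True)
    c = table.sum(axis=0, keepdims=True)
    E = r * c / n
    return (table - E) / np.sqrt(E * (1 - r / n) * (1 - c / n))

def cell_power(probs, n, alpha=0.05, reps=2000, seed=1):
    """Fraction of simulated tables with at least one cell significant
    after a Bonferroni correction over all cells (normal approximation;
    assumes no empty row/column margins, which is safe for moderate n)."""
    probs = np.asarray(probs, dtype=float)
    rng = np.random.default_rng(seed)
    k = probs.size
    hits = 0
    for _ in range(reps):
        tab = rng.multinomial(n, probs.ravel()).reshape(probs.shape)
        z = np.abs(adjusted_residuals(tab))
        # two-sided normal p-value: 2 * (1 - Phi(z)) = erfc(z / sqrt(2))
        p = np.array([math.erfc(v / math.sqrt(2)) for v in z.ravel()])
        if (p < alpha / k).any():
            hits += 1
    return hits / reps

# hypothetical 2x2 alternative with a modest association (Cohen's w = 0.2)
probs = np.array([[0.30, 0.20],
                  [0.20, 0.30]])
power = cell_power(probs, n=200)
```

Sweeping `n` until `cell_power` reaches 0.8 would then give the required sample size, if this whole approach is sound - which is exactly what I'd like to check.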
Are there special sample size calculations involved for the analysis of residuals in a contingency table?
CC BY-SA 4.0
null
2023-03-20T15:11:20.153
2023-03-20T15:11:20.153
null
null
383665
[ "chi-squared-test", "sample-size", "residuals", "statistical-power" ]
610079
1
null
null
0
26
My understanding of the consistency principle is that the observed outcome is equal to the potential outcome, i.e. let T = treatment; if T=1 then the observed outcome (Y) is equal to the potential outcome, i.e. Y(1) = Y. This implies that there can't be 'multiple versions of the same treatment' which would lead to different outcomes. I'm struggling to understand this concept. E.g. I want to find out if owning a cat increases happiness. In my data I see that the type of cat can vary. If I observe that owning a British Blue increases happiness in my sample data but owning a Maine Coon cat decreases happiness, does this mean a violation? What if I also see that a Maine Coon cat increases happiness in some subjects too? E.g. my data looks like: ``` treatment | type cat | outcome 1 | British | increase 1 | Maine Coon | increase 1 | Maine Coon | decrease ``` E.g. if I had the above data, would I be able to run any causal inference experiments? If I include the 'type of cat' in my structural causal model, would consistency naturally follow? How could I embed this variable 'type of cat' into an SCM? Please let me know if I am understanding this correctly! e.g. [](https://i.stack.imgur.com/eVxKh.png) source: [https://www.lesswrong.com/posts/JDWTro62tRAHzvhEH/causal-inference-sequence-part-1-basic-terminology-and-the](https://www.lesswrong.com/posts/JDWTro62tRAHzvhEH/causal-inference-sequence-part-1-basic-terminology-and-the)
Is this a breach of the consistency principle in causal inference?
CC BY-SA 4.0
null
2023-03-20T15:18:39.727
2023-03-20T15:38:12.107
2023-03-20T15:38:12.107
250242
250242
[ "bayesian", "causality", "treatment-effect", "observational-study" ]
610080
1
null
null
0
16
I have data of the following form: |Rating |1 |2 |3 | |------|-|-|-| |control |0 |20 |11 | |treatment |6 |14 |12 | Where 1 is a plant of top quality, 2 is a plant of lesser quality that USED TO BE top quality, and 3 is a plant of poor quality that USED TO BE of type 2 quality. The values listed are the counts of each type of plant from an untreated control group and a treated group. The standard protocol when I joined the project was to treat the categories as independent from one another (rather than ordered rankings) and use the chi-square test to determine whether the treated group yields statistically significantly different results from the control. I realize that this is throwing away valuable information (namely that the rankings have order) and have been trying to find a more appropriate test, but nothing has jumped out at me. The tests that I've seen that incorporate the rankings in the data (such as Kruskal-Wallis or Mann Whitney U) involve using NON-REPEATED rankings of distinct data. That is, there is only a single reading across both the treatment and control groups that is given a ranking of "1", another single reading given a ranking of "2", etc. It is clear how the rank sum can be used to differentiate between the two groups in that case, but if rankings are allowed to repeat, it doesn't seem right. For example, if the control set had all values of ranking "2" and the treatment had half "1"s and the other half "3"s, they would show the same rank sum even though they are clearly different. Does anyone have any advice on how to proceed for testing statistical difference for data of this form? Thanks for any help you can provide.
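To make the tie problem concrete, here is a sketch (in Python, using my counts above) of a mid-rank rank-sum statistic whose null distribution is obtained by permutation rather than the usual normal approximation - I am not sure this is the right test, which is exactly my question:

```python
import numpy as np

# expand the observed table into per-plant ratings (1 = best quality)
control = np.repeat([1, 2, 3], [0, 20, 11])
treated = np.repeat([1, 2, 3], [6, 14, 12])

def rank_sum(a, b):
    """Rank sum of group `a` in the pooled sample, using mid-ranks so
    that heavily repeated ordinal categories are handled."""
    pooled = np.concatenate([a, b])
    order = pooled.argsort(kind="stable")
    ranks = np.empty(len(pooled))
    ranks[order] = np.arange(1, len(pooled) + 1)
    for v in np.unique(pooled):        # average ranks within tied groups
        m = pooled == v
        ranks[m] = ranks[m].mean()
    return ranks[: len(a)].sum()

obs = rank_sum(control, treated)

# permutation null distribution instead of the normal approximation
rng = np.random.default_rng(0)
pooled = np.concatenate([control, treated])
null = np.empty(5000)
for i in range(len(null)):
    perm = rng.permutation(pooled)
    null[i] = rank_sum(perm[: len(control)], perm[len(control):])
p_value = float(np.mean(np.abs(null - null.mean()) >= np.abs(obs - null.mean())))
```

Note this still suffers from my "all 2s vs. half 1s and half 3s" objection, since it only compares rank sums - a statistic sensitive to spread might be needed instead.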
Statistical significance for REPEATED ordered categorical data
CC BY-SA 4.0
null
2023-03-20T15:36:10.480
2023-03-20T15:36:10.480
null
null
367293
[ "hypothesis-testing", "statistical-significance", "ordinal-data", "wilcoxon-signed-rank", "kruskal-wallis-test" ]
610081
2
null
610065
1
null
#### Comparison of Scales A very common reason to use z-scores is to compare variables with completely different scales. Consider these two variables I simulated in R, IQ and salary. [](https://i.stack.imgur.com/FCnRi.png) For simplicity's sake (because salary isn't normally distributed in reality), lets just say they both have a Gaussian distribution. However, they are not very comparable. Telling somebody they make $30,000 USD per year and have an IQ of 130 tells you something, but it says nothing about the location of their score, nor if it's considered several deviations below or above average. However, if we transform these variables into z-scores... [](https://i.stack.imgur.com/rShEf.png) We now have something comparable, as they are both on the same scale. We know for example that a person with a z-score of +2 on both measures is way above average compared to the distribution. #### Fixing Messy Regressions Sometimes complicated regressions need some legwork to get them to work. A common issue in mixed model regressions is that interactions with measures on totally different scales often leads to buggy behavior. Standardizing scores between two variables in an interaction is a common fix. For example, if I fit these same variables above into an interaction in `lmer`, R will kick back a warning that it's going to explode. 
``` #### Load Libraries #### library(lmerTest) set.seed(123) #### Simulate Data #### money <- rnorm(n=1000, mean = 50000, sd = 10000) iq <- rnorm(n=1000, mean = 120, sd = 10) subject <- factor(rbinom(n=1000,size=50,prob=.5)) response <- rnorm(n=1000) fit <- lmer( response ~ iq*money + (1|subject) ) ``` Shown in this error when I save `fit`: ``` boundary (singular) fit: see help('isSingular') Warning messages: 1: Some predictor variables are on very different scales: consider rescaling 2: Some predictor variables are on very different scales: consider rescaling ``` Refitting with scaled data: ``` fit <- lmer( response ~ scale(iq)*scale(money) + (1|subject) ) ``` We don't fix the singular matrix (this is just because of the lazy way I simulated the data), but it has now removed the error regarding the scale invariance, which improves the chances the regression will fit correctly: ``` boundary (singular) fit: see help('isSingular') ``` #### General Statistical Inference Many inferential tests in statistics are based off CLT and z-scores give useful heuristics for understanding this sorta thing.
null
CC BY-SA 4.0
null
2023-03-20T15:44:18.340
2023-03-20T15:47:58.917
2023-03-20T15:47:58.917
345611
345611
null
610082
2
null
409973
0
null
The answer above is great. Here is one more thing to keep in mind: `confusionMatrix()` will output a table with the proportion of classifications by default. As a result, you will construct 95% CIs using the table of proportions (which does not take into account your sample size), instead of the total number of classifications in each cell. This will impact your CIs. You can have `confusionMatrix()` output the total number of classifications in each cell by including "norm=none". The resulting code will look like this: ``` confusionMatrix(confusionMatrix.train(model, norm = "none")$table, positive = "TRUE") ```
null
CC BY-SA 4.0
null
2023-03-20T15:47:06.167
2023-03-21T17:48:40.527
2023-03-21T17:48:40.527
383668
383668
null
610083
1
null
null
0
23
I've trained a simple NN to perform binary classification with the goal of maximizing the area under the ROC curve. Right now AUC is around 0.85. Out of curiosity, I checked which thresholds are best in terms of maximizing `f1_score`. It turned out that the optimal thresholds are around `0.08`, i.e. below 0.1, corresponding to `fpr = 0.22`, `tpr = 0.72` and `f1_score = 0.75`. Note that the training and evaluation datasets are imbalanced, with ~90% negative and 10% positive samples. I am wondering what it means about the data or the model if the f1-optimal threshold is so low, and how I can use that knowledge to improve my model. My initial guess was that the low threshold is a result of the unbalanced classes - because only 10% of samples are positive, it makes sense to be very sensitive and classify something as positive even with low certainty - but then I realized it should be the opposite.
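For reference, this is roughly how I locate the f1-optimal threshold (a sketch with simulated, hypothetical score distributions on a ~90/10 imbalanced sample, not my actual model outputs):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
y = (rng.random(n) < 0.10).astype(int)   # ~10% positives, like my data

# hypothetical classifier scores with heavy class overlap
scores = np.where(y == 1,
                  rng.normal(0.25, 0.15, n),   # positives score higher...
                  rng.normal(0.05, 0.05, n))   # ...but negatives dominate
scores = np.clip(scores, 0.0, 1.0)

def f1_at(threshold):
    pred = scores >= threshold
    tp = int(np.sum(pred & (y == 1)))
    fp = int(np.sum(pred & (y == 0)))
    fn = int(np.sum(~pred & (y == 1)))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

thresholds = np.linspace(0.01, 0.99, 99)
f1s = np.array([f1_at(t) for t in thresholds])
best_threshold = float(thresholds[f1s.argmax()])
best_f1 = float(f1s.max())
```

Even in this toy setup the f1-optimal threshold lands well below 0.5, because scores for the rare positive class sit low when negatives dominate - which is the behaviour I'm trying to understand.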
What does it mean if optimal classification threshold found on ROC curve is really small?
CC BY-SA 4.0
null
2023-03-20T15:47:25.207
2023-03-20T15:47:25.207
null
null
332606
[ "classification", "unbalanced-classes", "roc", "auc", "threshold" ]
610084
1
null
null
2
69
I have a question about the interpretation of residual diagnostics using DHARMa. I fitted a binomial mixed model and used DHARMa for model diagnostics. ``` simulationOutput <- simulateResiduals(m1_test, n = 1000, seed = 123) plot(simulationOutput) testResiduals(simulationOutput) ``` This is what the DHARMa plots look like: [](https://i.stack.imgur.com/wxXZp.jpg) [](https://i.stack.imgur.com/hUV74.png) Given that I have a lot of data (n = 9587) and according to the DHARMa vignette there will very likely be significant patterns with large sample sizes, the plots look pretty good to me. However, I'm not sure if I should be concerned about underdispersion since the dispersion test yields a dispersion parameter of 0.80552: [](https://i.stack.imgur.com/CjONN.png) The DHARMa vignette suggests that e.g. a dispersion parameter of 5 is reason for concern about overdispersion, but I cannot find anything about when a value indicating underdispersion should be taken seriously. Should I worry about underdispersion or is it fine? I also plotted the residuals against individual predictors. There are some significant deviations, but nothing outstanding that would point to large deviations.
Dispersion parameter in DHARMA
CC BY-SA 4.0
null
2023-03-20T15:48:52.883
2023-05-07T12:15:50.320
null
null
380499
[ "lme4-nlme", "residuals", "glmm", "underdispersion" ]
610085
1
null
null
0
8
I study ensemble machine learning and I have noticed the OOB (out-of-bag) score in [some implementations](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.BaggingClassifier.html). I understand the concept but I have some questions about the general usage: - Why is it used for bagging (and its special case, Random Forest) when a general validation set could be used in the first place? What is the additional value of OOB? - Why is it used only for bagging? As far as I can see, it could be used for any other kind of ensemble (like simple voting or stacking) as well. - Why is it used only in the case of bootstrapping, and not for sampling without replacement?
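A small sketch of my current understanding of the mechanics, in case I am misreading it: with bootstrapping each base learner leaves out about a third of the rows for free, whereas sampling without replacement at the full sample size leaves nothing out:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_learners = 1000, 200

# bootstrap (with replacement): each learner leaves out ~ (1 - 1/n)^n ~ e^-1
oob_fracs = []
for _ in range(n_learners):
    idx = rng.integers(0, n, size=n)   # one bootstrap sample
    in_bag = np.zeros(n, dtype=bool)
    in_bag[idx] = True
    oob_fracs.append(1.0 - in_bag.mean())
mean_oob_frac = float(np.mean(oob_fracs))   # ~0.368: a free validation set

# sampling WITHOUT replacement at full sample size: nothing is left out,
# so there is no out-of-bag set to score on
in_bag = np.zeros(n, dtype=bool)
in_bag[rng.permutation(n)] = True
oob_frac_without_replacement = 1.0 - in_bag.mean()   # exactly 0.0
```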
Why Out-of-bag score used for Bagging Ensembles and only for Bagging?
CC BY-SA 4.0
null
2023-03-20T15:50:07.433
2023-03-20T15:50:07.433
null
null
72735
[ "machine-learning", "ensemble-learning", "bagging" ]
610086
1
null
null
1
35
I have the following business request: "Test this change on the landing page and implement it for all the visitors if the test shows improvement in the conversion rate, but if the page load speed significantly differs between the two variations, discard the experiment results and investigate" The rationale behind it is that we want to test the change, but we also want to make sure the results we are getting can only be attributed to the visual change itself, and not to a change in page load speed that may occur due to poor implementation of the A/B testing framework or some other factors. As far as I know, in our case page load speed is what they call a "sanity guardrail metric". However, I am not sure how to translate it into a proper A/B test. What I figured out so far: - We choose a one-tailed test for testing the conversion rate (right tail in our case, since we test for an improvement) - We choose a two-tailed test for the page load metric, since we test for a change in either direction. What I have my doubts about: - Required sample size: since we are going to monitor 2 separate metrics, one of them being a continuous metric and the other a proportion, I am not sure how to calculate the required sample size. Intuition tells me that I should calculate the required sample size for both metrics with the specified minimum detectable effect, power and error probability and choose the larger sample size. Of course I expect the sample size for the page load metric to be larger. - Should I use some sort of correction here? Since I am going to monitor two metrics, maybe I need to apply some sort of correction (Bonferroni, Dunnett, etc.), but I have doubts, since we do not seek to improve the page load speed - we only need to make sure it doesn't change; but since the page load speed metric is subject to false positives as well, I guess something must be done about it too. Besides, we combine one-tailed metrics and a two-tailed one in one test, which further complicates things. 
Any help will be much appreciated, as well as any links to the relevant resources that cover similar problems.
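For the sample-size part, this is the kind of calculation I had in mind (all numbers hypothetical; standard normal-approximation formulas, one-sided for the conversion uplift and two-sided for the load-time guardrail, then taking the larger n):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf   # standard normal quantile function

def n_per_group_proportion(p0, mde, alpha=0.05, power=0.8, one_sided=True):
    """n per group to detect an absolute uplift `mde` over baseline rate p0."""
    p1 = p0 + mde
    za = z(1 - alpha) if one_sided else z(1 - alpha / 2)
    zb = z(power)
    pbar = (p0 + p1) / 2
    num = (za * (2 * pbar * (1 - pbar)) ** 0.5
           + zb * (p0 * (1 - p0) + p1 * (1 - p1)) ** 0.5) ** 2
    return num / mde ** 2

def n_per_group_continuous(sd, mde, alpha=0.05, power=0.8):
    """n per group to detect a two-sided mean shift `mde` (known-variance z-test)."""
    return 2.0 * ((z(1 - alpha / 2) + z(power)) * sd / mde) ** 2

# hypothetical inputs: 5% baseline conversion, +1pp uplift (one-sided);
# load time sd = 400 ms, guardrail shift of interest = 50 ms (two-sided)
n_conversion = n_per_group_proportion(p0=0.05, mde=0.01)
n_load_time = n_per_group_continuous(sd=400.0, mde=50.0)
n_required = max(n_conversion, n_load_time)
```

With these made-up inputs the conversion metric happens to dominate; with a tighter guardrail MDE the load-time metric would, as I expected. Whether this max-over-metrics logic is valid, and how it interacts with a multiplicity correction, is what I'm asking.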
A/B test with two metrics, we test one metrics for improvement on the condition the other one doesn't change
CC BY-SA 4.0
null
2023-03-20T15:50:24.450
2023-03-21T10:28:05.523
null
null
135109
[ "hypothesis-testing", "experiment-design", "ab-test" ]
610087
2
null
595313
2
null
You should just continue to think of R2 as undefined in this situation. Thinking of it as 1 or zero obscures what's really going on here. R2 answers the question "what percent of the variance in Y is being explained by the model?" But in this case Y has no variance, so there is nothing to be explained; the question is ill formed. Asking what R2 "really" is in this situation is analogous to asking "what percentage of this circle is green?" when the area of the circle is zero. Someone could say "It's 100% - no part of the circle is NOT green!" and someone else could say "zero percent - no part of the circle IS green." But the correct answer is: "there is no circle in the first place, so the question can't be answered."
null
CC BY-SA 4.0
null
2023-03-20T16:11:58.377
2023-03-20T16:11:58.377
null
null
291159
null
610088
1
610100
null
2
79
The following plot shows power spectra (periodograms) of a sample from $X_t \sim \operatorname{Poisson}(1)$ along with that same sample where: - Weekends were set to zero - Sundays were set to zero - Saturdays were set to zero [](https://i.stack.imgur.com/Ne4HS.png) I suspect this can be explained in terms of Fourier series because we're looking at the square of the Fourier transform. I have a guess that there is some kind of interference pattern, but I'm unsure about how to proceed when the signal is a random variable rather than a deterministic one. Really these spectra are instances of random spectra when we think of the data as a transformed Poisson process. Why are there multiple peaks? --- Requested by mhdadk: ``` import matplotlib.pyplot as plt import numpy as np import pandas as pd from scipy.signal import periodogram dates = pd.date_range('1970-02-23', '2023-03-20') weekdays = dates.dayofweek < 5 notsunday = dates.dayofweek != 6 notsaturday = dates.dayofweek != 5 x = np.random.poisson(size=dates.size) y = x * weekdays z = x * notsunday w = x * notsaturday titles = ['Poisson', 'Weekdays', 'Not Sunday', 'Not Saturday'] fig, axes = plt.subplots(4) for i, var in enumerate([x,y,z,w]): freq, pxx = periodogram(var) axes[i].plot(freq, pxx) axes[i].set_title(titles[i]) plt.tight_layout() plt.show() ``` --- I was reminded by someone to check if the peaks are equally spaced. Indeed, they are! Using the approximate periods of 7, 3.5, and 2.33 from the plots we approximately have $$\frac{1}{7} - \frac{1}{\frac{7}{2}} = -\frac{1}{7}$$ $$\frac{1}{\frac{7}{2}} - \frac{1}{2.33} \approx -\frac{1}{7}$$ Equally spaced frequencies are sometimes called harmonics, a term which has its origins in the study of music. So these harmonics seem to recur every week. Why?
Why can weekends cause harmony?
CC BY-SA 4.0
null
2023-03-20T16:15:14.637
2023-03-22T16:03:46.013
2023-03-22T16:03:46.013
69508
69508
[ "poisson-distribution", "stochastic-processes", "poisson-process", "fourier-transform" ]
610089
2
null
379754
0
null
You can use logistic regression with event/trial syntax or Poisson log-linear regression with an offset parameter.
null
CC BY-SA 4.0
null
2023-03-20T16:16:18.513
2023-03-20T16:16:18.513
null
null
383670
null
610090
1
610131
null
3
172
I have multiple Markov chains with twelve states. I want to estimate a transition probability matrix for each time point (except for the last time point) that can vary over time using all Markov chains. I found a function in R to do this. It is in the TraMineR package and is called seqtrate. However, it isn't clear to me how they estimate a transition probability matrix that can vary over time for each time point. The transition probability matrix their function infers would in this case have three dimensions, [states,states,time], where the first two dimensions would correspond to a transition probability matrix. Does anyone know how this is done, or can anyone point me toward any resources where I can learn more about this?
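For concreteness, here is a pure-NumPy sketch of what I imagine such an estimator to be - at each time point, pool the observed transitions across all chains and row-normalize - though I don't know whether this is what seqtrate actually does:

```python
import numpy as np

def time_varying_transition_matrices(chains, n_states):
    """chains: (n_chains, T) array of integer states 0..n_states-1.
    Returns P of shape (T-1, n_states, n_states) with
    P[t, i, j] = #(i at t -> j at t+1) / #(i at t), pooled over chains.
    Rows for states never visited at time t are left as zeros."""
    chains = np.asarray(chains)
    T = chains.shape[1]
    P = np.zeros((T - 1, n_states, n_states))
    for t in range(T - 1):
        for a, b in zip(chains[:, t], chains[:, t + 1]):
            P[t, a, b] += 1
        row = P[t].sum(axis=1, keepdims=True)
        np.divide(P[t], row, out=P[t], where=row > 0)
    return P

# toy example: 3 chains, 4 time points, 2 states
chains = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 0],
                   [1, 1, 0, 0]])
P = time_varying_transition_matrices(chains, n_states=2)
```

With twelve states this would need many chains per time point to fill the 12x12 matrix reliably, which is part of why I'm looking for established methods.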
How could I estimate a transition probability matrix that varies over time?
CC BY-SA 4.0
null
2023-03-20T16:33:30.970
2023-03-21T16:43:22.323
2023-03-21T16:43:22.323
331670
331670
[ "r", "econometrics", "markov-process", "transition-matrix" ]
610093
1
null
null
4
48
In Equation (9), page 9 of ([Demkowicz-Dobrzanski et al. 2020](https://arxiv.org/abs/2001.11742)), the authors mention that, given a probability distribution $p_{\boldsymbol\theta}(m)$ with hidden parameter $\boldsymbol\theta$ and $m$ labelling the possible outcomes, if the true parameter $\boldsymbol\theta$ is close to some known $\boldsymbol\theta_0$, then we have a locally unbiased estimator $\tilde{\boldsymbol\theta}$ that saturates the CR bound and is thus optimal at $\boldsymbol\theta_0$, written as $$\tilde{\boldsymbol\theta}(m) = \boldsymbol\theta_0 + \frac{1}{p_{\boldsymbol\theta}(m)} F^{-1} \nabla p_{\boldsymbol\theta}(m)\big|_{\boldsymbol\theta=\boldsymbol\theta_0},\tag1$$ where $F$ is the Fisher information matrix: $$F = \sum_m \frac{\nabla p_{\boldsymbol\theta}(m)[\nabla p_{\boldsymbol\theta}(m)]^T}{p_{\boldsymbol\theta}(m)}.$$ That this estimator is efficient around the true value I can see because $$\mathbb{E}[(\tilde{\boldsymbol\theta}-\boldsymbol\theta_0)^2] = \sum_m \frac{1}{p(m)}\sum_{ijk} (F^{-1})_{ij}(F^{-1})_{ik} \partial_j p(m)\partial_k p(m) \\ = \sum_i (F^{-1}FF^{-1})_{ii} = \operatorname{tr}(F^{-1}).$$ What I'm not clear about is how (1) is derived in the first place. Sure, once I have it I can verify that it works, but what's a way to get to it from scratch? I thought of trying to find the locally unbiased estimator that minimises the variance at the true value, but that would just give me back the trivial estimator $m\mapsto \boldsymbol\theta_0$. Is this particular structure characterised by its being locally unbiased and efficient? It would seem like that's what is being stated when it's discussed in the paper. But isn't the trivial estimator $\tilde{\boldsymbol\theta}(m)=\boldsymbol\theta_0$ already enough for that? it's locally unbiased and has zero variance around the true parameter. So how is this one better exactly?
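For reference, here is a constrained-minimisation sketch that does reproduce (1), under the assumption that "locally unbiased" means both $\mathbb{E}_{\boldsymbol\theta_0}[\tilde{\boldsymbol\theta}] = \boldsymbol\theta_0$ and $\nabla_{\boldsymbol\theta}\,\mathbb{E}_{\boldsymbol\theta}[\tilde{\boldsymbol\theta}]\big|_{\boldsymbol\theta_0} = I$ - note the trivial estimator $\tilde{\boldsymbol\theta}(m)=\boldsymbol\theta_0$ satisfies the first condition but fails the second, since its expectation does not move with $\boldsymbol\theta$. Minimising $\sum_m p(m)\,\|\tilde{\boldsymbol\theta}(m)-\boldsymbol\theta_0\|^2$ subject to these two linear constraints with multipliers $\boldsymbol\lambda$ and $\Lambda$ gives the stationarity condition $$2\,p(m)\,\big(\tilde{\boldsymbol\theta}(m)-\boldsymbol\theta_0\big) = \boldsymbol\lambda\, p(m) + \Lambda\,\nabla p(m),$$ and since $\sum_m \nabla p(m) = 0$ the first constraint forces $\boldsymbol\lambda = 0$, while the second gives $\tfrac12 F \Lambda^T = I$, i.e. $\Lambda = 2F^{-1}$, which is exactly (1). I'd appreciate confirmation that this is the intended reading of "locally unbiased".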
Where does the efficient locally unbiased estimator $\tilde\theta(m)=\theta_0+\frac{F^{-1}\nabla p_\theta(m)}{p_\theta(m)}$ come from?
CC BY-SA 4.0
null
2023-03-20T17:24:05.680
2023-05-21T12:03:51.787
2023-05-21T12:03:51.787
82418
82418
[ "estimation", "information-theory", "fisher-information" ]
610094
1
null
null
0
4
I have a function to be optimized and the instructions say "find the max of the function using a line search for "p1" (first parameter) coupled with standard optimization methods for the other parameters". By line search they mean varying the p1 value from .01 to 40 in steps of .0001 to find the max of the function, while using standard optimization methods for the rest of the parameters (I've been using Nelder-Mead). Does anyone know how I would go about that?
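Here is a toy sketch of how I currently picture it (Python; the objective is a made-up stand-in, the inner coordinate refinement is a dependency-free placeholder for a standard optimizer such as scipy.optimize.minimize with method='Nelder-Mead', and I use a coarser 0.05 grid than the prescribed 0.0001 to keep it fast):

```python
import numpy as np

def objective(p1, rest):
    """Made-up stand-in for the real function; its maximum is at p1 = 3, p2 = p1."""
    (p2,) = rest
    return 10.0 - (p1 - 3.0) ** 2 - (p2 - p1) ** 2

def inner_maximize(p1, x0, step=1.0, shrinks=30):
    """Tiny coordinate-refinement placeholder for the 'standard' inner
    optimizer over the non-line-searched parameters."""
    x = np.array(x0, dtype=float)
    for _ in range(shrinks):
        for j in range(len(x)):
            for delta in (step, -step):
                while True:           # keep stepping while it improves
                    trial = x.copy()
                    trial[j] += delta
                    if objective(p1, trial) > objective(p1, x):
                        x = trial
                    else:
                        break
        step *= 0.5
    return x, objective(p1, x)

# outer line search over p1: fix p1, maximize over the rest, keep the best
best_val, best_p1, best_rest = -np.inf, None, None
for p1 in np.arange(0.01, 40.0, 0.05):
    rest, val = inner_maximize(p1, x0=[0.0])
    if val > best_val:
        best_val, best_p1, best_rest = val, p1, rest
```

Is this nested fix-p1/optimize-the-rest loop what the instructions mean?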
Optimize a function using line search for 1 of the variables and standard optimization methods for the others
CC BY-SA 4.0
null
2023-03-20T17:26:35.000
2023-03-20T17:26:35.000
null
null
383675
[ "optimization" ]
610095
1
null
null
0
27
Yes, it's a very basic question, but I'm not sure if I'm understanding the process. I feed my network with the first observation, with random W for each neuron. I get a prediction, with an error. Then comes the whole iterative process of adjusting the W values (backpropagation, gradient descent, learning rate...), until I have an acceptable error for this single observation (or I have performed a max number of iterations). Is that correct? Then...? I feed the network with the next observation, starting with the W values calculated for the previous observation, and repeat the adjusting process until I get new W's that minimize the error for this new observation? This changes the W values for many (or almost all) neurons... So the idea of the whole process is that, after feeding my network with all observations (training data), the W's of all neurons CONVERGE to a set of values that gives me a valid prediction for all (or an acceptable percentage of) future observations? Thanks
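To pin down what I mean, here is a tiny sketch of the loop as I would code it - note it takes one small gradient step per observation and sweeps the data repeatedly, rather than iterating to convergence on each single observation; I'm unsure which of the two is the standard procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy data: label is 1 exactly when x1 + x2 > 0
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = rng.normal(scale=0.1, size=2)   # random initial weights
b = 0.0
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One small gradient step per observation, then move on; sweeping the
# whole training set repeatedly (epochs) is what lets the shared
# weights converge for all observations jointly.
for epoch in range(20):
    for i in rng.permutation(len(X)):
        p = sigmoid(X[i] @ w + b)
        grad = p - y[i]            # d(cross-entropy loss)/d(pre-activation)
        w -= lr * grad * X[i]
        b -= lr * grad

accuracy = float(np.mean((sigmoid(X @ w + b) > 0.5) == (y > 0.5)))
```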
How do we train a neural network?
CC BY-SA 4.0
null
2023-03-20T17:32:15.493
2023-03-20T17:32:15.493
null
null
381118
[ "neural-networks" ]
610096
1
null
null
0
14
I am running a regression on how trade shocks affect population by age group at the county level. We bin ages into three age groups. I am confused about whether I should run one big regression, like this: `change_pop_age ~ trade_shock*age_group + county_controls` Where the product `a*b` means `1 + a + b + a:b`, `change_pop_age` is change in the county population of a specific age group, `age_group` is a dummy variable denoting which age group we are referring to, and `county_controls` is a vector of county-level controls. or three separate regressions, with the following specification? `change_pop_age ~ trade_shock + county_controls` It seems to me that the first regression assumes that the coefficient on `county_controls` does not vary by age group. But what does that mean in terms of correlations between regressors and error terms? Also, how would you construct standard errors in this situation, given that there are effectively three observations per county?
When the endogenous variable in a regression is population changes by age group, do I run a separate regression for each age group?
CC BY-SA 4.0
null
2023-03-20T17:40:15.880
2023-03-20T17:40:15.880
null
null
383674
[ "regression", "least-squares" ]
610098
1
null
null
2
77
I have a model where I can either code a predictor variable as a continuous variable with the raw values, or as a categorical variable by assigning the values to quartiles. When I run a random forest model with the continuous variable, the continuous variable comes out as the most important variable in the model. When I run the model with the same variable as a categorical predictor, it becomes much less important to the model. This has implications for interpretation, so I am curious if there is a mechanical reason (i.e., something in the way that random forests fit data) for why this happens. I know the categorical variable contains less information, but model fit is not significantly improved by the continuous version of the variable. I have reason to be cautious about using the continuous variable because of the quality of the information in the variable.
Continous variable vs. Categorical Variable in Random Forest
CC BY-SA 4.0
null
2023-03-20T17:49:19.553
2023-03-20T17:49:19.553
null
null
309510
[ "random-forest" ]
610099
1
null
null
1
47
Suppose $x_i$ comes from standard Normal. For a given $\alpha$, I'm interested the following random variable: $$f(s)=\log \prod_{i=1}^s (1-\alpha x^2_i)^2$$ For $\alpha=2.422$, empirical distribution of $f(100)$ [looks](https://www.wolframcloud.com/obj/yaroslavvb/nn-linear/forum-variance-multiplicative.nb) like this: [](https://i.stack.imgur.com/QlVFZ.png) - What kind of distribution is it? - How do I estimate the variance in the limit of large $s$? Edit regarding suggestions to apply Central Limit Theorem, I'm unsure whether the conditions for CLT are satisfied here. A [tutorial](http://physics.bu.edu/%7Eredner/pubs/pdf/ajp58p267.pdf) suggests that we can't take logs and apply CLT: [](https://i.stack.imgur.com/A6kBA.png) Motivation: this describes behavior of LMS filter for Gaussian observations. Related [question](https://dsp.stackexchange.com/questions/87091/for-which-values-of-step-size-is-lms-filter-stable) on dsp.SE
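A quick Monte-Carlo sanity check (sketch): since $f(s)$ is itself a sum of i.i.d. terms $2\log|1-\alpha x_i^2|$, I estimated the per-term mean and variance and compared them against direct samples of $f(100)$; assuming the per-term moments are finite, this suggests $\operatorname{Var} f(s) \approx s\,\sigma^2$:

```python
import numpy as np

alpha = 2.422
rng = np.random.default_rng(0)

# f(s) = sum_i log (1 - alpha x_i^2)^2 is a sum of i.i.d. terms
# g(x) = 2 log|1 - alpha x^2|; estimate the per-term moments first
g = 2.0 * np.log(np.abs(1.0 - alpha * rng.standard_normal(2_000_000) ** 2))
mu, sigma2 = g.mean(), g.var()

# direct samples of f(s) for s = 100, to compare against s*mu and s*sigma2
s = 100
f_samples = np.array([
    (2.0 * np.log(np.abs(1.0 - alpha * rng.standard_normal(s) ** 2))).sum()
    for _ in range(5000)
])
```

If this is valid, the tutorial's caveat would apply to the product itself (after exponentiating back), not to $f(s)$, which is already the log of the product - but I may be missing something.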
Asymptotic behavior of $\prod_{i=1}^s (1-\alpha x^2_i)^2$ for Gaussian $x_i$
CC BY-SA 4.0
null
2023-03-20T17:49:39.217
2023-03-20T18:01:47.173
2023-03-20T18:01:47.173
511
511
[ "normal-distribution", "stochastic-processes", "asymptotics" ]
610100
2
null
610088
2
null
It turns out that these peaks are what are known as harmonics. In fact, these peaks all occur at the same frequencies: $\frac{1}{7},\frac{2}{7},\frac{3}{7},$ and so on, albeit with different amplitudes for each of your scenarios of "weekdays", "not Saturday", and "not Sunday". I'll now explain why they appear, in contrast to the spectrum of $X_t$.

First, because $X_t$ is stationary with finite mean and variance, it is also wide-sense stationary, and so, by the [Wiener–Khinchin theorem](https://en.wikipedia.org/wiki/Wiener%E2%80%93Khinchin_theorem), its periodogram is an estimate of the Fourier transform of the auto-correlation function (ACF) of $X_t$. That is, if we let $\mathcal F\{X_t\}(f)$ be the Fourier transform of $X_t$ at the frequency $f$ Hz, then the periodogram is approximately $\mathcal F\{E[X_tX_{t+\tau}]\}(f) = \mathcal F\{R_{XX}(\tau)\}(f)$, where $R_{XX}(\tau) = E[X_tX_{t+\tau}]$ is the ACF. Therefore, if we plot the [correlogram](https://en.wikipedia.org/wiki/Correlogram#Estimation_of_autocorrelations) (which is an estimate of the autocorrelation function) of each of the signals that you plotted, we can gain some insight as to why these peaks appear.

We first start with the correlogram of $X_t$ (I'm assuming that your code was run first):

```
def correlogram(x):
    grammian = x[...,None] @ x[...,None].T
    acf = np.zeros(grammian.shape[1])
    for i in range(grammian.shape[1]):
        # to compute the value of the acf at lag i, we compute the
        # sample mean of the ith diagonal of the grammian. This is because
        # the acf of a wide-sense stationary signal is shift-invariant and only
        # depends on the lag. Note that, because the ith diagonal gets shorter
        # as i increases, we have fewer samples with which to compute the mean,
        # so the estimate is less accurate. This explains the spikes for
        # increasing lags
        acf[i] = np.diagonal(grammian,i).mean()
    lags = np.arange(grammian.shape[1])
    return acf,lags

acf,lags = correlogram(x)
fig,ax = plt.subplots(2,1)
ax[0].plot(lags,acf)
ax[0].set_ylabel(r"ACF($\tau$) for $X_t$")
ax[1].plot(lags,acf)
ax[1].set_xlim([7500,7550])
ax[1].set_xlabel(r"$\tau$")
ax[1].set_ylabel(r"ACF($\tau$) for $X_t$")
```

[](https://i.stack.imgur.com/NJUCz.png)

As expected, the ACF approximately represents white noise, as $X_t$ is a sequence of independent and identically distributed Poisson random variables with mean $1$. Moreover, in the second axes, we see that the mean is approximately $1$, which makes sense as the ACF should have a mean component equal to $1^2 = 1$, unlike the auto-covariance function. This explains why the periodogram for $X_t$ represents that of white noise.

We now estimate the ACF for all the other signals:

```
acfy,lags = correlogram(y)
acfw,lags = correlogram(w)
acfz,lags = correlogram(z)

fig,ax = plt.subplots(3,2,sharex=False,sharey=False)
ax[0,0].plot(lags,acfy)
ax[0,0].set_ylabel(r"Weekdays")
ax[0,1].stem(lags,acfy)
ax[0,1].set_xlim([7500,7520])
ax[1,0].plot(lags,acfw)
ax[1,0].set_ylabel(r"not Saturday")
ax[1,1].stem(lags,acfw)
ax[1,1].set_xlim([7500,7520])
ax[2,0].plot(lags,acfz)
ax[2,0].set_ylabel(r"not Sunday")
ax[2,0].set_xlabel(r"$\tau$")
ax[2,1].stem(lags,acfz)
ax[2,1].set_xlim([7500,7520])
ax[2,1].set_xlabel(r"$\tau$")
```

[](https://i.stack.imgur.com/udUAp.png)

We see that all 3 correlograms have a period of $7$ days. However, their shapes throughout these 7 days, and the amplitudes of their correlations, differ. If we compute the Fourier transform of these ACFs, then, because they are all periodic with a period of 7 samples per cycle, you will see the harmonics at frequencies of $\frac{1}{7},\frac{2}{7},\frac{3}{7},...$ cycles per sample, although each with different amplitudes.

Of course, this doesn't explain why harmonics appear in the first place, rather than a single peak at $\frac{1}{7}$. For that, I will refer you to [this helpful answer](https://dsp.stackexchange.com/a/61912/52433).

---

Here is another perspective: Let $N$ be the number of days between the dates $1970/02/23$ and $2023/03/20$. Then, for the "Weekdays" scenario, associate the signal of length $N$,

$$w[k] = \begin{cases} 1,&\text{if the kth day is a weekday} \\ 0,&\text{otherwise}\end{cases}$$

Do the same thing for the "Not Saturday" and "Not Sunday" scenarios. The resulting signals are the `weekdays`, `notsaturday`, and `notsunday` variables in your question. Then, for each of these three signals, compute their ACFs and their periodograms (even though they are deterministic). Then, by the [convolution theorem](https://en.wikipedia.org/wiki/Convolution_theorem), the periodograms of `y`, `z`, and `w` will be the convolutions of the periodograms of `weekdays` and `x`, `notsunday` and `x`, and `notsaturday` and `x`, respectively.
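To see where these $\frac{k}{7}$ peaks come from concretely, here is a minimal self-contained numpy sketch (separate from the code above; the 400-week length and the top-3 peak check are my own choices, not from the question). It takes the periodogram of a pure, centered weekday mask:

```python
import numpy as np

N = 7 * 400                          # 400 whole weeks of daily samples
day = np.arange(N) % 7
weekdays = (day < 5).astype(float)   # 1 on Mon-Fri, 0 on Sat/Sun

# periodogram of the deterministic weekly mask (mean removed)
P = np.abs(np.fft.rfft(weekdays - weekdays.mean()))**2 / N
freqs = np.fft.rfftfreq(N, d=1.0)    # in cycles per day

# all of the spectral energy sits in the three largest bins
peak_idx = np.argsort(P)[::-1][:3]
print(np.sort(freqs[peak_idx]))      # the harmonics 1/7, 2/7, 3/7
```

Because the centered mask is exactly periodic with period 7 and has zero mean, its only nonzero Fourier coefficients sit at $k/7$ cycles per day for $k = 1, 2, 3$; convolving this comb with the flat spectrum of the Poisson noise is what produces the peaks in the masked series.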
null
CC BY-SA 4.0
null
2023-03-20T18:20:06.323
2023-03-20T20:21:26.267
2023-03-20T20:21:26.267
296197
296197
null
610102
1
null
null
0
16
Suppose I have a regression problem, where I have the labels of my training data $\bf{y}$ and two measurement matrices $A$ and $B$. The cost function is $||\sin(\frac{A*\textbf{w}}{B*\textbf{w}})-\textbf{y}||_{2}$, where $\textbf{w}$ is the vector of predictors. Let's say I want to use L1 or L2 regularization to prevent overfitting, adding a regularization term $norm(\textbf{w})$ to the cost function. Notice that the cost function contains a fraction with the predictors in both the numerator and the denominator, meaning that it is invariant to multiplicative factors. Applying the regularization, the predictors $\textbf{w}$ are suppressed, as they are supposed to be. However, sometimes they are suppressed to very low values ($10^{-5}$), even if I start the optimizer with $\textbf{w} = [1,1,1,1...]$, and then the optimizer produces unstable results. I was thinking of putting a normalization (either by $max(\textbf{w})$ or $sum(\textbf{w})$) in the regularization term, but I am not sure whether that is a good idea. What do you think, and are there any alternatives?
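For what it's worth, the scale invariance is easy to verify numerically. A minimal sketch with random placeholder data (the `A`, `B`, `y`, `w` below are made up, not my actual data; the offset on `B` just keeps the denominator away from zero):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 4))
B = rng.normal(size=(20, 4)) + 5.0     # keep B @ w away from zero
y = rng.normal(size=20)

def fit_term(w):
    # the data-fit part of the cost: ||sin((A w)/(B w)) - y||_2
    return np.linalg.norm(np.sin((A @ w) / (B @ w)) - y)

w = rng.normal(size=4)
print(fit_term(w), fit_term(3.0 * w))  # identical up to rounding
```

Only the penalty $norm(\textbf{w})$ changes under scaling, so the optimizer can shrink $\textbf{w}$ arbitrarily without hurting the fit, which matches the collapse toward $10^{-5}$ described above; fixing the scale directly (e.g. constraining $||\textbf{w}||=1$ or pinning one coordinate to 1) is a common alternative to penalizing the raw norm.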
Different normalization in regularization term for scale invariant problem
CC BY-SA 4.0
null
2023-03-20T18:25:27.553
2023-03-20T18:25:51.373
2023-03-20T18:25:51.373
383681
383681
[ "regression-coefficients", "regularization", "normalization" ]
610103
1
null
null
0
116
I don't understand why the output of a pairwise comparison using the `emmeans` function is z.ratio when analysing response time data. What is the difference between z.ratio and t.ratio? And is it reasonable that the estimates are this large? Here is my code.

```
# model
e1_model1 <- lmer(data = mix1,
                  formula = RT ~ soc_s*persp_n*consis_c + (1 + soc_s*persp_n*consis_c|Subject),
                  REML = TRUE,
                  control = lmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 20000)))
summary(e1_model1)
anova(e1_model1)

# pairwise comparison
e1 <- emmeans(e1_model1, pairwise ~ soc_s|persp_n + consis_c, adjust = "mvt")
e1$contrasts %>% summary(infer = TRUE)
```

And this is the pairwise comparison output

[](https://i.stack.imgur.com/MGM9K.png)

How can I get t.ratio instead?
What is the difference between z.ratio and t.ratio in the pairwise comparison output using emmeans function?
CC BY-SA 4.0
null
2023-03-20T14:55:31.810
2023-03-20T21:19:10.867
null
null
383661
[ "r", "lme4-nlme", "lsmeans" ]
610104
1
null
null
0
16
Basically as the title says: in a scenario, we have missing observations where the entire state vector is unknown for consecutive time steps. Do we just run through the prediction section of the algorithm and not the update section for both the state vector X and the covariance matrix P?
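The scheme described above (run the time update only, skipping the measurement update, for both the state and the covariance) can be sketched as follows. This is a toy linear constant-velocity model with made-up `F` and `Q`, standing in for the EKF's $f$ and its Jacobian:

```python
import numpy as np

def predict(x, P, F, Q):
    """Time update: propagate the state AND its covariance."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])     # constant-velocity transition (placeholder)
Q = 0.1 * np.eye(2)            # process noise (placeholder)

x = np.array([0.0, 1.0])
P = np.eye(2)

traces = []
for _ in range(5):             # five consecutive missing observations
    x, P = predict(x, P, F, Q)
    traces.append(float(np.trace(P)))

print(traces)                  # uncertainty keeps growing during the gap
```

The point of propagating `P` as well: if it were frozen during the gap, the filter would believe its pre-gap uncertainty still held, and the first post-gap measurement would be under-weighted.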
Do we need to propagate state covariance matrix 'P' during missing observations in the Extended Kalman Filter?
CC BY-SA 4.0
null
2023-03-20T18:31:32.530
2023-03-20T18:31:32.530
null
null
383687
[ "estimation", "kalman-filter", "state-space-models", "filter" ]
610105
1
null
null
0
32
I am using GAMs (mgcv) to model the effect of various weather covariates on several count response variables (annual counts of a migratory bird species). When testing the fit of Poisson/NB models for one of my response variables, the dispersion statistic appears to be too low for either the Poisson or the NB model to handle (Poisson = 0.734; NB = 0.736). The attached figures compare these dispersion statistics (red dot) against 10,000 simulated datasets from each model. I have also plotted (1) Pearson residuals vs. fitted values, and (2) fitted values vs. observed counts, but neither model appears to perform better than the other. As I understand it, GAMs do not have generalized Poisson distributions to account for such underdispersion. If it is necessary to account for this, how should I do so? Or can this underdispersion be ignored and either the Poisson or NB model used?

> Poisson model

> Negative Binomial model
Using GAMs with underdispersed data
CC BY-SA 4.0
null
2023-03-20T18:37:29.967
2023-03-20T18:37:29.967
null
null
286723
[ "generalized-additive-model", "fitting", "mgcv", "underdispersion" ]
610106
1
610137
null
1
45
Given two independent variables $X$ and $Y$, I want to test if they follow the same distribution.

$H_0: F_X(.)=F_Y(.)$

$H_1: F_X(.)\neq F_Y(.)$

I do not want to test if their medians are the same. Their shapes are shown below; they look similar, but one can spot slight differences:

[](https://i.stack.imgur.com/BM0mvl.png)

A concern is that I used Mood's median test to check if their medians are equal, and it gives a high p-value, so I cannot conclude their medians differ. So when I use the MWU test, it may violate the consistency criterion that requires the medians to be different in order for the test to be consistent. The MWU test does give a p-value $<$ 0.05.

- Should I then conclude the distributions are different based on this MWU test?
- Should I be worried about consistency?
- How should I test if the distributions are the same if the answer to 1. is no?
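Regarding the last point, one option that compares the entire CDFs (rather than a location shift) is the two-sample Kolmogorov–Smirnov statistic. A minimal self-contained sketch with synthetic data (my own illustration, not the data plotted above; the two samples have the same median but different spread):

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample KS statistic: largest gap between the two empirical CDFs."""
    pooled = np.sort(np.concatenate([x, y]))
    cdf_x = np.searchsorted(np.sort(x), pooled, side="right") / len(x)
    cdf_y = np.searchsorted(np.sort(y), pooled, side="right") / len(y)
    return np.max(np.abs(cdf_x - cdf_y))

rng = np.random.default_rng(1)
same = ks_statistic(rng.normal(size=500), rng.normal(size=500))
diff = ks_statistic(rng.normal(size=500), rng.normal(scale=2.0, size=500))
print(same, diff)   # the equal-median, different-spread pair shows the larger gap
```

In practice one would use `scipy.stats.ks_2samp`, which also returns a p-value; the k-sample Anderson–Darling test (`scipy.stats.anderson_ksamp`) is a similar, often more powerful, alternative.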
Testing if two distributions are the same with Mann–Whitney U test given equal median
CC BY-SA 4.0
null
2023-03-20T18:50:21.470
2023-03-21T07:45:05.110
null
null
373321
[ "wilcoxon-mann-whitney-test" ]
610107
1
null
null
0
22
I need some help figuring out the best statistical model for my planned research. Here is the setup: I want to understand support for four distinct policies. More specifically, I want to understand the order of preference of these four policies. I plan to collect this data via a survey.

I am doubtful whether asking respondents to rank the preferences themselves will be helpful. Since this could be a somewhat cognitively challenging task, I am afraid that many respondents will simply skip the question. My alternative strategy is deploying standard 10-point Likert scale questions for each policy. As in: “If 0 is the lowest and 10 is the highest, how much would you support the president implementing policy Pn”, and so on three more times.

My question is as follows: what would be the best statistical model to calculate individual orders of preference? This is a challenge to me since I do not expect absolute transitivity to hold up at an individual level. I am expecting a lot of people to give the same likability score to two or more policies.

Question 1) To reprise the question: what statistical model should I use?

Question 2) Let's relax the assumption that respondents will skip more cognitively challenging questions. I welcome any other “outside the box” setup suggestions. For instance, instead of asking four individual feeling thermometer questions, a role-playing scenario where the respondent plays the role of an incumbent and has to decide how to break the budget down between four government agencies. (The policies are not inherently contradictory amongst themselves.)

Thank you
Best statistical model for ranked preferences
CC BY-SA 4.0
null
2023-03-20T19:00:28.573
2023-03-20T19:00:28.573
null
null
318236
[ "regression", "statistical-significance", "ranking", "aggregation" ]
610108
1
null
null
5
424
I am reading up on Poisson regression. Say I have an input vector $X^{T} = (X_1, X_2, \ldots, X_p)$ and I want to predict an output $Y$ which is correlated with these inputs. Then the mathematical form of a Poisson regression model is: $$\log(\mathbb{E}[Y]) = \alpha + \beta_{1}x_1 + \beta_{2}x_2 + \ldots + \beta_{p}x_p$$ where $\alpha, \beta_{i}$ are numeric coefficients. Now, one of the assumptions is that the observations must be independent of one another. I'm not quite sure what this means. Does it mean that the individual predictor variables $x_1, x_2, \ldots, x_p$ have to be independent of each other, or does it mean that each observation vector $X_{i}$ in the data set has to be independent of the others?
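To make the distinction concrete, here is a minimal simulation sketch (my own toy example, not from any reference) in which the observations (rows) are independent while the predictors (columns) are deliberately correlated; a Poisson GLM fit by IRLS still recovers the coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(size=n)            # columns deliberately correlated
X = np.column_stack([np.ones(n), x1, x2])     # intercept + two predictors
beta_true = np.array([0.2, 0.5, -0.3])
y = rng.poisson(np.exp(X @ beta_true))        # each count drawn independently

# fit by IRLS (Fisher scoring on the Poisson log-likelihood)
beta = np.zeros(3)
for _ in range(25):
    mu = np.exp(X @ beta)                     # current fitted means
    # working weights for the Poisson family are the means themselves
    beta = beta + np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))

print(beta)    # close to beta_true despite the correlated columns
```

The independence assumption refers to the rows: each count is drawn independently given its own mean $\exp(x_i^\top\beta)$. Correlation among the columns only affects the precision of the estimates, not the validity of the model.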
What exactly needs to be independent in GLMs?
CC BY-SA 4.0
null
2023-03-20T19:01:02.390
2023-03-21T15:02:44.530
2023-03-21T04:28:56.907
362671
292642
[ "regression", "poisson-distribution", "poisson-regression" ]
610109
1
null
null
0
14
I have a question about the cell size by time point for my longitudinal data set. Basically, the data come from a cohort study which collects 5 waves of data. For my growth-curve mixed model, I will be using age instead of wave to estimate the age-specific Y trajectories. My predictor is a 4-category variable. My goal is to estimate the trajectories for each category of this X variable. However, when I look at the frequency distribution of the cell sizes by age and X, some cells are really small. In this case, will the estimation of the 4 trajectories by age (in certain age categories) be unstable? My age variable ranges from 12 to 40. Should I recode the age variable into age categories to enlarge the cell sizes by age and the 4-category X variable (see the recode below)?

recode ageintw (12/14=1) (15/17=2) (18/20=3) (21/23=4) (24/27=4) (28/32=5) (33/35=6) (36/38=7) (39/40=8)

But the problem with using age categories is that my estimated trajectories will then change over collapsed age categories instead of continuous age. Does any expert have some advice to share?

Thanks, Pauline

P.S. Here are the cell sizes by age for each value of X: [](https://i.stack.imgur.com/1hPNJ.jpg) [](https://i.stack.imgur.com/H2VSG.jpg) [](https://i.stack.imgur.com/Vcxsn.jpg) [](https://i.stack.imgur.com/8cKJe.jpg)
How big should the cell size for each time point be in longitudinal data?
CC BY-SA 4.0
null
2023-03-20T19:24:53.710
2023-03-21T18:02:04.977
2023-03-21T18:02:04.977
383688
383688
[ "panel-data" ]
610110
1
null
null
0
10
I want to determine whether the type of concrete I used for an artificial reef had an effect on the critters colonizing the reef. For this, there are:

- 4 types of concrete (OPC, OPC with algae, CaCO3 enriched, CaCO3 enriched with algae)
- Depth (top and middle)
- Date (fall2017, winter2017, spring2018, summer2018)

There are also 7 different "critter measures" I collected, but for this question I'll just use one as my primary example: fit_concrete. This is the mass of critters on an artificial reef. I have made models that represent the interactions between the concrete types, the depth of the reef, and the season when the data were collected because all of these will have an effect on the mass of critters. My goal is to be able to show a boxplot (like the one linked here) where I can put stars over the places where things significantly (sig) differ and talk about these differences in my results section. I want to be able to say, "X statistic indicates that using CaCO3-enriched concrete was sig. correlated with colonizer mass compared to OPC, and colonizer mass was sig. greater in the fall at the top depth than middle depth, but was overall sig. greater in spring than fall."

[](https://i.stack.imgur.com/i9LBc.png)

Initially, I represented these interactions using all *:

```
fit_concrete_full <- lm((Disk_Colonizer_Weight_g)^.61 ~ concrete * algae * Date * Depth, data = concrete)
```

After working on this paper for 4 years, I discovered today that it would make more sense to represent some interactions with +:

```
nfit_concrete_full <- lm((Disk_Colonizer_Weight_g)^.61 ~ concrete * algae + Date * Depth, data = concrete)
```

I've been using AICc to rank these models and select the most parsimonious one, and when I compared these two versions of the models, the new ones are sig. better.

```
Model selection based on AICc:
                             K   AICc Delta_AICc AICcWt Cum.Wt      LL
nfit_concrete_noalgae       10 711.64       0.00   0.64   0.64 -344.94
fit_concrete_nosubstrate     9 714.08       2.44   0.19   0.82 -347.33
nfit_concrete_full          12 715.23       3.59   0.11   0.93 -344.35
nfit_concrete_noconcrete    10 716.01       4.37   0.07   1.00 -347.13
fit_concrete_justdate        5 755.24      43.59   0.00   1.00 -372.39
nfit_concrete_datealgae      6 757.40      45.75   0.00   1.00 -372.37
nfit_concrete_nodepth       10 761.31      49.67   0.00   1.00 -369.78
nfit_concrete_depthconcrete  4 770.53      58.88   0.00   1.00 -381.11
fit_concrete_justdepth       3 771.77      60.12   0.00   1.00 -382.79
....
```

NOW! I want to discuss my data using this model, and my understanding is that I should use the coefficients from `summary()` to do that. But no matter how much I read about `summary()` and interpreting coefficients, I can't figure out how to describe what I want to. My understanding is that the coefficients elicited in `summary()` only indicate significance relative to a treatment. For example, using the top-ranked AICc model gives me this output:

```
Call:
lm(formula = (Disk_Colonizer_Weight_g)^0.61 ~ Date * Depth + concrete, data = concrete)

Residuals:
    Min      1Q  Median      3Q     Max
-6.2285 -2.1947  0.1716  1.5212  8.7385

Coefficients:
                        Estimate Std. Error t value Pr(>|t|)
(Intercept)              16.4594     0.7700  21.376  < 2e-16 ***
DateSprng2018             0.1034     1.0267   0.101  0.91991
DateSummer2018           -0.8620     1.0561  -0.816  0.41588
DateWinter2017           -1.8416     1.0135  -1.817  0.07157 .
Depthtop                  2.8458     1.0263   2.773  0.00640 **
concreteCaCO3             1.1569     0.5433   2.129  0.03516 *
DateSprng2018:Depthtop   -4.4903     1.5784  -2.845  0.00518 **
DateSummer2018:Depthtop  -8.3061     1.5974  -5.200 7.76e-07 ***
DateWinter2017:Depthtop   2.5235     1.4332   1.761  0.08068 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.163 on 127 degrees of freedom
Multiple R-squared:  0.4404, Adjusted R-squared:  0.4052
F-statistic: 12.5 on 8 and 127 DF,  p-value: 4.137e-13
```

Which I interpret to mean "winter and summer tend to be correlated with a lower colonizer mass than the fall, top depth was correlated with a sig. greater mass than middle depth, CaCO3-enriched concrete was correlated with sig. greater mass than OPC concrete", and I don't even know what to say about the other interaction. Are these all relative to DateFall2017:Depthmiddle?

Also, how do I discuss the coefficients here? I've read that I should include confidence intervals, and, again, I don't know how to discuss the results in any meaningful way:

```
> confint(nfit_concrete_noalgae)
                               2.5 %     97.5 %
(Intercept)              14.93568915 17.9830995
DateSprng2018            -1.92827715  2.1351492
DateSummer2018           -2.95185673  1.2277579
DateWinter2017           -3.84716920  0.1639856
Depthtop                  0.81487648  4.8767266
concreteCaCO3             0.08177886  2.2320019
DateSprng2018:Depthtop   -7.61367231 -1.3670277
DateSummer2018:Depthtop -11.46709469 -5.1450340
DateWinter2017:Depthtop  -0.31250737  5.3595577
```

I read that if I removed the intercept, I would be able to see how these outcomes directly related to each other instead of a reference group. I tried a new model where I removed the intercept and got this `summary()` result:

```
Call:
lm(formula = (Disk_Colonizer_Weight_g)^0.61 ~ Date * Depth + concrete - 1, data = concrete)

Residuals:
    Min      1Q  Median      3Q     Max
-6.2285 -2.1947  0.1716  1.5212  8.7385

Coefficients:
                        Estimate Std. Error t value Pr(>|t|)
DateFall2017             16.4594     0.7700  21.376  < 2e-16 ***
DateSprng2018            16.5628     0.7800  21.234  < 2e-16 ***
DateSummer2018           15.5973     0.8087  19.287  < 2e-16 ***
DateWinter2017           14.6178     0.7577  19.292  < 2e-16 ***
Depthtop                  2.8458     1.0263   2.773  0.00640 **
concreteCaCO3             1.1569     0.5433   2.129  0.03516 *
DateSprng2018:Depthtop   -4.4903     1.5784  -2.845  0.00518 **
DateSummer2018:Depthtop  -8.3061     1.5974  -5.200 7.76e-07 ***
DateWinter2017:Depthtop   2.5235     1.4332   1.761  0.08068 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.163 on 127 degrees of freedom
Multiple R-squared:  0.9692, Adjusted R-squared:  0.967
F-statistic: 444.3 on 9 and 127 DF,  p-value: < 2.2e-16
```

Sure, now I know how fall looks in the mix, but I still don't know how to talk about these coefficients or how middle depth during different seasons fits in the mix.

Finally, I tried using `Anova()` from the `car` package because I read that it would tell me about all the interactions.

```
> Anova(nfit_concrete_noalgae, type = 3)
Anova Table (Type III tests)

Response: (Disk_Colonizer_Weight_g)^0.61
             Sum Sq  Df  F value    Pr(>F)
(Intercept)  4572.3   1 456.9179 < 2.2e-16 ***
Date           48.0   3   1.5978  0.193197
Depth          76.9   1   7.6884  0.006395 **
concrete       45.4   1   4.5341  0.035156 *
Date:Depth    550.6   3  18.3409 5.994e-10 ***
Residuals    1270.9 127
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

> Anova(intnfit_concrete_noalgae, type = 3)
Anova Table (Type III tests)

Response: (Disk_Colonizer_Weight_g)^0.61
             Sum Sq  Df  F value    Pr(>F)
Date        12218.0   4 305.2403 < 2.2e-16 ***
Depth          76.9   1   7.6884  0.006395 **
concrete       45.4   1   4.5341  0.035156 *
Date:Depth    550.6   3  18.3409 5.994e-10 ***
Residuals    1270.9 127
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```

So now I can see that all of the factors are sig. (or if I keep the intercept, most of them are), but again I don't know how to talk about... directionality? Like, how do I know what about the concrete type is sig.? Is OPC correlated with a sig. lower colonizer mass than CaCO3-enriched concrete? What date:depth interactions are correlated with the greatest and smallest colonizer masses? Which dates and depths are specifically the ones that differ from each other? AND WHAT DO I DO WITH COEFFICIENTS AND THEIR CONFIDENCE INTERVALS?? I feel like I'm so close to writing my results! But I can't for the life of me figure out how to pull out the specific things I need to talk about.

Thanks for reading!
For a linear regression model, how do I describe the specific impact of treatments?
CC BY-SA 4.0
null
2023-03-20T19:25:31.623
2023-03-20T19:25:31.623
null
null
383679
[ "statistical-significance", "multiple-regression", "interpretation", "lm" ]
610111
1
null
null
0
6
I am trying to analyse the effects that the German policy to introduce a 9Euro public transport ticket had on German CPI using a VAR model (which, according to my reading, is the most appropriate method for this). However, I have not been taught how to use or set up the VAR model and, therefore, would appreciate any guidance on how to go about this, especially since, to my understanding, I need to use exogenous variables, and I am unsure of the proper notation for this.
Is a structural VAR method ideal to analyze the effects of the German 9Euro Public transport ticket on CPI?
CC BY-SA 4.0
null
2023-03-20T19:26:10.847
2023-03-20T19:26:10.847
null
null
383689
[ "time-series", "econometrics", "vector-autoregression", "notation", "macroeconomics" ]
610112
1
null
null
1
23
I am using a perceptron to solve the binary classification problem A vs B. For this I have to code the classes A and B as either 1 or -1 to be able to use the perceptron. Does it matter which class I assign to which value? It is not clear which class makes more sense to code as 1.
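As an illustration, here is a minimal self-contained sketch (synthetic one-feature data with a bias column; the class means are arbitrary choices of mine) that trains a classic perceptron once with A = +1 and once with the coding swapped:

```python
import numpy as np

def train_perceptron(X, y, epochs=50):
    """Classic perceptron; y must be coded +1/-1, X already has a bias column."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:        # misclassified (or on the boundary)
                w = w + yi * xi
    return w

rng = np.random.default_rng(0)
n = 50
feature = np.concatenate([rng.normal(5, 1, n),    # class A clustered at +5
                          rng.normal(-5, 1, n)])  # class B clustered at -5
X = np.column_stack([np.ones(2 * n), feature])    # prepend a bias column
labels = np.concatenate([np.ones(n), -np.ones(n)])

w_ab = train_perceptron(X, labels)     # coding: A -> +1, B -> -1
w_ba = train_perceptron(X, -labels)    # coding: A -> -1, B -> +1

acc_ab = np.mean(np.sign(X @ w_ab) == labels)
acc_ba = np.mean(np.sign(X @ w_ba) == -labels)
print(acc_ab, acc_ba)                  # both codings classify perfectly
```

Both codings separate the classes equally well; swapping the labels essentially just flips the sign convention of the learned weights, so the choice is arbitrary as long as you decode predictions consistently.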
Does it matter which variable I assign 1 or -1 in a perceptron machine learning algorithm
CC BY-SA 4.0
null
2023-03-20T19:28:52.997
2023-04-08T23:26:00.540
null
null
372029
[ "machine-learning", "neural-networks", "perceptron" ]
610113
1
null
null
1
19
I would like to make sure that I'm interpreting the results of the lmer model I generated in the right way. The model is: `lmer(distress ~ cond*group + (1|subject), data = data)`

Condition has two levels (S1 and S2, S1 is the reference), group has 2 levels (A and B, B is the reference).

- Would the coefficient of the interaction condS2:groupA indicate that the difference of S2 vs S1 is larger in group A than in group B?
- Maybe this is a stupid question, but can this also not be tested with a simple regression with group as a predictor and the difference scores (S2-S1) as the DV? Would that be inappropriate because it disregards the random effects?
- If I want to test whether conditions differ within group A and within group B, do I run two different lmers for the two groups separately?

Thanks in advance!
Interpreting contrasts in lmer
CC BY-SA 4.0
null
2023-03-20T19:33:12.497
2023-03-20T19:33:12.497
null
null
320897
[ "mixed-model", "lme4-nlme", "interpretation" ]
610115
2
null
357548
1
null
## 1. Bootstrap before calculating CIs

Not sure if I understood your question correctly, but if you were asking, ‘Should I be using bootstrap to compute the CIs’, then the missing part of the question is, ‘bootstrap instead of what’, ‘more precise’ – ‘precise compared to what and in which sense’. There are multiple ways to construct the CIs:

- Make an assumption about the sampling distribution of the estimator for every sample size $n$. This is a very strong assumption; however, people have done it many times in the past: they often assume the $t$ distribution (for sample sizes of at least 2). This is the classical frequentist (Fisher) inference.
- Assume that nothing is known about the sampling distribution for any given $n$, but that as $n \to\infty$, you know the distribution: $n^q (\hat \theta_n - \theta_0) \xrightarrow[n\to\infty]{d} \mathcal{N}(0, V)$, where $q$ is the rate of convergence ($q=1/2$ for most parametric estimators, $q = 1/5$ or lower for non-parametric ones, $q=2/5$ for the smoothed Manski estimator etc.). Then, you just look up the table of Gaussian critical values. The problem is, estimating $V$ is sometimes non-trivial.
- Estimate the critical value by bootstrapping. This is where the consistency of bootstrap is required (i.e. no parameter on the boundary of the parameter space, the same rate of convergence of the original and bootstrap estimators etc. – in general, what must be ruled out is the failure of $\sup_{u\in \mathbb{R}} |\mathrm{CDF}_{\sqrt{n}(\hat\theta^*_n - \hat\theta_n)} (u) - \mathrm{CDF}_{\sqrt{n}(\hat\theta_n - \theta_0)} (u)| \xrightarrow[n\to\infty]{\mathbb{P}} 0$ (in Efron’s notation), which can happen due to a multitude of reasons).
So if bootstrap works in the sense of the (rather technical) condition described above, then, depending on some extra conditions (such as requiring finite estimator variances), it can beat the asymptotic confidence intervals (as well as the variance estimators, $p$-values – basically, any functional of the estimator distribution) in the sense of the approximation error. Assume that $\mathbb{E} (\hat\theta_n - \theta_0)^2 < \infty$ (which rules out certain estimators, like the IV estimator that is the ratio of two Gaussians) and that the sampling distribution of $\hat\theta_n$ is symmetrical. Then, bootstrap is ‘better’ in the following sense: $$\sup_{u\in \mathbb{R}} |\mathrm{CDF}_{\sqrt{n}(\hat\theta_n - \theta_0) / \mathrm{SE} \hat\theta_n} (u) - \Phi(u)| = O(1/\sqrt{n}),$$ $$\sup_{u\in \mathbb{R}} |\mathrm{CDF}_{\sqrt{n}(\hat\theta_n^* - \hat\theta_n) / \mathrm{SE}^* \hat\theta^*_n} (u) - \mathrm{CDF}_{\sqrt{n}(\hat\theta_n - \theta_0) / \mathrm{SE} \hat\theta_n} (u)| = O_p(1/n),$$ where $\Phi$ is the CDF of the standard normal distribution. Of course, this does not guarantee that in a specific application the constants hidden in the capital O will actually deliver the refinement, and of course, depending on the smoothness of the bootstrapped quantity (bias, or variance, or CI, or p-value) and the bootstrap type, the refinement may or may not exist – however, if you are worrying that the bootstrap is going to be less reliable than asymptotic confidence intervals – probably not. Bootstrap does a much better job of reproducing the shape of the sampling distribution (on which one should really be doing inference), which is why, if your method belongs to a broad class of estimators for which bootstrap works and you can afford running the bootstrap sufficiently many times, bootstrap should be more trustworthy in the sense of capturing the extra moments of the sampling distribution uncertainty (as opposed to the first 2 moments of the asymptotic Gaussian approximation).
Depending on how poorly the Weak Law of Large Numbers is working when the numbers are not large enough, bootstrap may simply unveil it to the researcher. NB. Bootstrap is an asymptotic method in the sense that it still relies in most aspects on the number of observations $n \to \infty$. Bootstrap does not improve theoretical properties of statistical tests, and if $n=8$, the question would be, ‘is scientific method really applicable?’ or ‘aren’t we making conclusions about random data features, not the true underlying relationships?’. If the theoretical power of one’s test is low or there are extra complications in the form of departures from the ‘randomised controlled trial’ setting (as well as ‘independent identically distributed’), bootstrap won’t help, and the paper will be rejected. ## 2.1 Number of replications (practical advice) A large number of replications B is required to say that the finite-sample Monte-Carlo approximation replicates the asymptotic (in B) bootstrap distribution of the object of interest closely enough. This means that there is no penalty (other than increased computation time) to doing more replications in one experimental setting. Unless one is studying the theoretical properties of bootstrap (e.g. nested bootstrap, second-order bootstrap, bootstrapping new estimators etc.), then, the infallible rule is ‘more is better’. In case one does not have deep bootstrap knowledge, here is a quick recommendation (that should hopefully stay relevant for another decade). - B >= 1000, otherwise your paper will be rejected with something like ‘We are not in the Pentium-II era’ from Referee 2. - Ideally, B >= 10000; try to do it if your computer can handle it. Here is where most researchers may stop. 
However, if the researcher suspects that their sampling distribution may be irregular and discrepancies between the true and simulated distribution are large, then, we may check some features of the bootstrap distribution to determine how close we are to it (as a function of $B$). The seminal paper is [Andrews & Buchinsky (2000, Econometrica)](https://www.jstor.org/stable/2999474). Here are the extra steps to make any picky referee shut up: - You could check if your B yields the desired probability $1-\tau$ of achieving the desired relative accuracy $r$ of the bootstrapped quantity of interest for some common level (e.g. $r= 5\%$ and $\tau=5\%$). - If not, increase B to the value dictated by the A&B 3-stage procedure described below. - In general, for any actual accuracy of your bootstrapped quantity, to increase the desired relative accuracy by a factor of k, increase B by a factor of $k^2$. ## 2.2 A data-driven theory-backed procedure There is a data-driven method of choosing B: do some small number of bootstrap replications, see how stable or noisy the estimator is, and then, based on some target accuracy measure, increase the number of replications until you are sure that this resampling-related error has reached a certain lower bound with a chosen certainty. Our helper here is the Weak Law of Large Numbers where the asymptotics are in B. To be more specific, B is chosen depending on the user-chosen bound on the relative deviation measure of the Monte-Carlo approximation of the quantity of interest based on B simulations. This quantity can be standard error, p-value, confidence interval, or bias correction. The closeness is the relative deviation $R^*$ of the B-replication bootstrap quantity from the infinite-replication quantity (or, to be more precise, the one that requires $n^n$ replications): $R^* := (\hat\lambda_B - \hat\lambda_\infty)/\hat\lambda_\infty$. 
The idea is, find such B that the actual relative deviation of the statistic of interest be less than a chosen bound (usually 5%, 10%, 15%) with a specified high probability $1-\tau$ (usually $\tau = 5\%$ or $10\%$). Then, $$\sqrt{B} \cdot R^* \xrightarrow{d} \mathcal{N}(0, \omega),$$ where $\omega$ can be estimated using a relatively small (usually 200–300) preliminary bootstrap sample that one should be doing in any case. Here is the general formula for the number of necessary bootstrap replications $B$: $$B \ge \omega \cdot (Q_{\mathcal{N}(0, 1)}(1-\tau/2) / r)^2,$$ where r is the maximum allowed relative discrepancy (i.e. accuracy), $1-\tau$ is the probability that this desired relative accuracy bound has been achieved, $Q_{\mathcal{N}(0, 1)}$ is the quantile function of the standard Gaussian distribution, and $\omega$ is the asymptotic variance of $R$*. The only unknown quantity here is $\omega$ that represents the variance due to simulation randomness. The general 3-step procedure for choosing B is like this: - Compute the approximate preliminary number $B_1 := \lceil \omega_1 (Q_{\mathcal{N}(0, 1)}(1-\tau/2) / r)^2 \rceil$, where $\omega_1$ is a very simple theoretical formula from Table III in Andrews & Buchinsky (2000, Econometrica). - Using these $B_1$ samples, compute an improved estimate $\hat\omega_{B_1}$ using a formula from Table IV (ibid.). - With this $\hat\omega_{B_1}$ compute $B_2 := \lceil\hat\omega_{B_1} (Q_{\mathcal{N}(0, 1)}(1-\tau/2) / r)^2 \rceil$ and take $B_{\mathrm{opt}} := \max(B_1, B_2)$. If necessary, this procedure can be iterated to improve the estimate of $\omega$, but this 3-step procedure as it is tends to yield already conservative estimates that ensure that the desired accuracy has been achieved. This approach can be vulgarised by taking some fixed $B_1 = 1000$, doing 1000 bootstrap replications in any case, and then, doing steps 2 and 3 to compute $\hat\omega_{B_1}$ and $B_2$. 
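A sketch of the final formula in code (Python standard library only; the $\omega$ value passed in is a placeholder, since in practice $\omega$ comes from the preliminary bootstrap in steps 1–2):

```python
import math
from statistics import NormalDist

def required_B(omega, r, tau):
    """B >= omega * (z_{1 - tau/2} / r)^2, per Andrews & Buchinsky (2000)."""
    z = NormalDist().inv_cdf(1 - tau / 2)
    return math.ceil(omega * (z / r) ** 2)

# with omega = 1: be 95% sure of 5% relative accuracy
print(required_B(1.0, r=0.05, tau=0.05))    # 1537
# halving r roughly quadruples the required B
print(required_B(1.0, r=0.025, tau=0.05))   # 6147
```

This also makes the scaling rule above explicit: $r \mapsto r/k$ multiplies the required $B$ by $k^2$, while tightening $\tau$ only changes the Gaussian critical value.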
Example (Table V, ibid.): to compute a bootstrap 95% CI for the linear regression coefficients, in most practical settings, to be 90% sure that the relative CI length discrepancy does not exceed 10%, 700 replications are sufficient in half of the cases, and to be 95% sure, 850 replications. However, requiring a smaller relative error (5%) increases B to 2000 for $\tau=10\%$ and to 2700 for $\tau=5\%$. This agrees with the formula for B above. If one seeks to reduce the relative discrepancy r, by a factor of k, the optimal B goes up roughly by a factor of $k^2$, whilst increasing the confidence level that the desired closeness is reached merely changes the critical value of the standard normal (1.96 → 2.57 for 95% → 99% confidence).
null
CC BY-SA 4.0
null
2023-03-20T19:56:16.003
2023-03-20T19:56:16.003
null
null
41603
null
610116
1
null
null
3
78
I am currently facing a dilemma concerning a model describing the allometric relationship between body size and mass. After carefully checking model assumptions and selecting the model that best fits the data, my final model was the following:

```
modD=lmer(body_size~0+(D_Mass*Species*sex+D_Mass*sex*Season
          +Species*Season)+(1|site), data=Rhabdoglobal, REML= T)
```

Diagnostic plots showed no indication of non-linearity, non-normality of residuals, or heteroscedasticity of variances. The intercept was fixed at 0 because, biologically, we know for a fact that at 0 body size there is 0 mass, so the relationship between the two must always pass through the point (0,0). These were the results of the ANOVA:

```
Type III Analysis of Variance Table with Satterthwaite's method
                     Sum Sq Mean Sq NumDF  DenDF  F value    Pr(>F)
D_Mass               214.78 214.782     1 676.03 717.7672 < 2.2e-16 ***
Species              510.73 255.365     2 120.41 853.3874 < 2.2e-16 ***
sex                    0.95   0.950     1 678.15   3.1732 0.0753052 .
Saison                 8.71   8.706     1 680.00  29.0926 9.533e-08 ***
D_Mass:Species         0.01   0.007     1 678.49   0.0231 0.8791882
D_Mass:sex             0.01   0.014     1 677.11   0.0456 0.8308968
Species:sex            0.96   0.964     1 677.12   3.2204 0.0731712 .
D_Mass:Saison          7.02   7.017     1 677.22  23.4497 1.590e-06 ***
sex:Saison             3.90   3.904     1 676.44  13.0453 0.0003265 ***
Species:Saison         0.26   0.257     1 562.45   0.8579 0.3547159
D_Mass:Species:sex     1.08   1.082     1 676.27   3.6148 0.0576921 .
D_Mass:sex:Saison      3.59   3.590     1 675.85  11.9973 0.0005664 ***
```

However, when I try to illustrate my results in plot form, the relationship does not look linear at all.

[](https://i.stack.imgur.com/fTQxb.png)

Rather, changing the plot expression to `y=log(x)` seems to solve that problem:

[](https://i.stack.imgur.com/wxM9f.png)

My questions are the following:

- Is it possible to represent a relationship that was described by a statistical model using a different expression than one would get by drawing a plot directly from the model estimates?
- If not, is it justifiable to use a different model, not necessarily better in terms of linearity, homoscedasticity or normality of residuals, but simply based on post-hoc representation of the raw data?

Edit: thank you for the response. Using a backward stepwise method, this is the simplest model with the lowest AIC that I came up with.
Plotting a regression line with a different fit than the model it is supposed to illustrate
CC BY-SA 4.0
null
2023-03-20T20:08:13.147
2023-05-15T18:26:48.673
2023-05-15T18:26:48.673
345611
383693
[ "r", "regression", "mixed-model", "data-visualization", "intercept" ]
610118
1
null
null
1
54
I know this is undesirable in most cases, but I have a very niche case where I must achieve 100% accuracy on training data. I do not care about unseen points; I only care about, given X points, what the Y value is for each point. I will train the model beforehand, but I must get 100% accuracy guaranteed.

Some things I know about the data:

- The data is always increasing (it is sorted).
- It is not increasing at a specific rate; the increase from one point to another could be anything.
- The data comes in chunks; I do not have all the data right away.
- I cannot store the data at all: the moment I get a chunk, it will be deleted. I can probably store it temporarily before the next chunk arrives, but I can't store it for long.
- Each chunk is roughly 100 MB in size.
- I can store some information about chunk 1 if needed, such as max, min, average, etc. I can analyze it and store anything from the analysis.

Example: assume `chunk 1` arrives at minute 0, then at minute 2 `chunk 2` arrives, but `chunk 1` is gone. Days later I will receive the exact same data, 100% identical. I will receive it in the same order and in the same-sized chunks. My goal is to predict the result for each value in the chunk with 100% accuracy, given that the data was seen before.

What I have tried:

- The obvious solution is to store a mapping between the inputs and their corresponding values, but again this is impossible because it would require me to store the data, which I cannot do.
- I used different types of trendlines, but this fails miserably as the error is huge; plus it requires me to create a different trendline for each chunk (I don't mind, but it's not preferred).

While researching, I came across the `Lagrange Polynomial`, which sounded very promising. But with more research I found out that I cannot use it, because it requires all of the data to already exist before I can "predict" the point I am looking for, which obviously defeats the purpose. Unless I have understood it wrong, or am misusing it.

I have been researching but can't find much information, since most use cases do not call for 100% accuracy on training data; it is usually considered bad practice because of "overfitting". I have also tried different things, but none worked, so I finally came to the conclusion that maybe I could train an AI model? But I am really not sure where to start to be able to get 100% accuracy on training data. I am open to using TensorFlow or any other tools. I am basically open to any leads to solve this issue.
I need to get 100% accuracy on my training data
CC BY-SA 4.0
null
2023-03-20T20:15:34.637
2023-03-20T22:02:33.103
null
null
383697
[ "machine-learning", "neural-networks" ]
610119
2
null
608397
0
null
It's not clear what you are asking for.

If you're asking whether the outcome data should be included when imputing covariates: if the substantive models (e.g., a Cox model) will include the outcome, treatment, and covariates, then at least all of these should also be included in the imputation model for congeniality. In `mice`, for time-to-event data, it is recommended that you transform the event times to the cumulative hazard using `mice::nelsonaalen`.

If you're asking whether you should impute missing events in your outcome variable, you typically don't need to. For time-to-event analysis, you will typically have the last event-free date for each patient, which you can use to create a censoring indicator and, under the assumption of uninformative censoring, use in your substantive models without needing to impute any outcomes.
null
CC BY-SA 4.0
null
2023-03-20T20:21:07.303
2023-03-20T20:21:07.303
null
null
197219
null