Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
611730 | 1 | 611756 | null | 2 | 57 | Suppose I would like to develop a machine learning model from 10 datasets to predict a clinical diagnosis (diagnosed or not). After checking the data, all of the outcome and feature data are similarly distributed in the datasets and it is reasonable to combine these datasets.
However, they are discrepant on one feature: half of the datasets collected x-ray data, and the other half didn't due to logistical constraints (i.e., no x-ray machine, no radiologist available, etc.). This is also realistic in real-world applications: about half of patients won't have an x-ray. However, when obtained, x-ray results are important in the diagnosis and influential in previous models that incorporate these data.
Is it possible to develop an ML model that would reflect this situation? Would an ensemble model that combines two independently trained models (represented in the below figure) be a reasonable approach?
[](https://i.stack.imgur.com/JprTb.jpg)
| Machine learning model when important feature is commonly missing | CC BY-SA 4.0 | null | 2023-04-03T20:38:17.407 | 2023-04-04T02:47:41.183 | 2023-04-04T00:25:25.407 | 268539 | 268539 | [
"machine-learning",
"missing-data"
] |
611732 | 1 | null | null | 0 | 31 | Suppose a dataset as follows:
```
data <- data.frame(X = c(1.2, 1.5, 1.7, 1.8, 2.0, 2.2, 2.3, 2.5),
Y = c(3.2, 3.5, 3.7, 3.8, 4.0, 4.2, 4.3, 4.5),
technique = factor(rep(c("A", "B"), each = 4)),
county = factor(rep(1:4, times = 2)))
data
X Y technique county
1 1.2 3.2 A 1
2 1.5 3.5 A 2
3 1.7 3.7 A 3
4 1.8 3.8 A 4
5 2.0 4.0 B 1
6 2.2 4.2 B 2
7 2.3 4.3 B 3
8 2.5 4.5 B 4
```
I have four counties, on each of which two techniques were used to calculate X and Y.
I want to test if there are significant differences in X and Y based on the two techniques used. I can do separately for X and Y using LMMs:
```
library(lme4)  # provides lmer()
m.X <- lmer(X ~ technique + (1 | county), data = data) # for response X
m.Y <- lmer(Y ~ technique + (1 | county), data = data) # for response Y
```
My question now is: Is there any statistical technique where I can combine X and Y to create a single response, say `Z` and test the effect of technique and county on the single response?
| Can I combine two continuous variables to fit a linear mixed model? | CC BY-SA 4.0 | null | 2023-04-03T21:14:42.290 | 2023-04-04T12:39:02.233 | 2023-04-04T12:39:02.233 | 219012 | 257284 | [
"correlation",
"mixed-model",
"lme4-nlme",
"continuous-data",
"paired-data"
] |
611733 | 1 | 611741 | null | 1 | 28 | For example:
```
poisson.test(x = 170,T = 70,r = 3,alternative = "two.sided",conf.level = .95)
Exact Poisson test
data: 170 time base: 70
number of events = 170, time base = 70, p-value = 0.005157
alternative hypothesis: true event rate is not equal to 3
95 percent confidence interval:
2.077216 2.822335
sample estimates:
event rate
2.428571
```
gives me the answer I would expect for a 95% CI. But if I change the alternative to a one-tailed test:
```
poisson.test(x = 170,T = 70,r = 3,alternative = "less",conf.level = .95)
Exact Poisson test
data: 170 time base: 70
number of events = 170, time base = 70, p-value = 0.002502
alternative hypothesis: true event rate is less than 3
95 percent confidence interval:
0.000000 2.758037
sample estimates:
event rate
2.428571
```
We can see that the CI has changed drastically when I changed only the alternative hypothesis. Something similar happens in binom.test() as well. In the Biometrika article cited in the function's documentation ([https://doi-org.colorado.idm.oclc.org/10.1093/biomet/26.4.404](https://doi-org.colorado.idm.oclc.org/10.1093/biomet/26.4.404)) I don't see any mention of changing the limits based on the alternative, and I can't think of any reason you would want to.
Am I missing something here?
| Why does the confidence interval change in poisson.test() and binom.test() based on the alternative hypothesis? | CC BY-SA 4.0 | null | 2023-04-03T19:13:05.097 | 2023-04-03T22:36:55.113 | 2023-04-03T22:22:36.133 | 11887 | 350763 | [
"r",
"confidence-interval"
] |
611735 | 1 | null | null | 0 | 16 | I am trying to find the best US state using 25 columns of normalized data (best = 1, worst = 0) such as crime rate, GDP, house prices, and others. This results in a 50x25 Excel table. Afterward, each state is given a score by adding the products of each variable with its respective relative weight. Here's a simplified example with trivial data:
|State |Crime rate (w = 0.2) |GDP (w = 0.3) |House prices (w = 0.5) |Mark |
|-----|--------------------|-------------|----------------------|----|
|Iowa |0.20 |1.00 |0.15 |42 % |
|Arkansas |0.30 |0.60 |0.65 |57 % |
|New Mexico |0.10 |0.40 |0.30 |29 % |
|... |... |... |... |... |
This approach is functional, but working with such large amounts of data in Excel can be tedious, and maintaining the relative weight sum of 1 while staying true to my opinion becomes more challenging as more variables are added to the calculation. My goal is to gather as much data as possible to make the most objective decision subjectivity can provide. Perhaps Python could be a helpful tool? Machine learning, pandas...?
I am curious if anyone has suggestions for a better approach. Thank you in advance!
| Multivariate analysis for subjective decision making | CC BY-SA 4.0 | null | 2023-03-30T20:39:25.703 | 2023-04-04T00:31:34.453 | 2023-04-04T00:31:34.453 | 11887 | null | [
"cart",
"excel"
] |
611736 | 2 | null | 611735 | 0 | null | If your aim is just to keep doing the same kind of analysis but reduce the burden of updating lots of things in the table, then that can be done within Excel, although it can potentially be done even easier in a language like python or R.
In Excel, you could do something like this:
- In one sheet, fill in all of your state data. Select all of it, and choose "Insert Table" from the menu. It will look something like this:
[](https://i.stack.imgur.com/bJMik.png)
- In a second sheet, create another table with the variables and their relative weights. It will also help to name the table (possibly in the Table Design menu). You'll get something that looks like this (notice that I've named the table "Weights"):
[](https://i.stack.imgur.com/Kh8t3.png)
- Back in the first table, enter the following in the Mark column (I named this table StateScores for easier reference):
`=SUMPRODUCT(StateScores[@[Crime rate]:[House prices]], TRANSPOSE(Weights[Weight]))`
Let's break down what this does:
- StateScores[@[Crime rate]:[House prices]] references the columns between "Crime rate" and "House prices" in the StateScores table in the current row.
- TRANSPOSE(Weights[Weight]) references the "Weight" column in the Weights table, but since they're in column format it transposes them into a row to match the first array.
- SUMPRODUCT performs an element-by-element product of the two arrays and sums the results together, i.e. it calculates Crime rate x Weight(Crime rate) etc and then adds it all up.
If you add another variable to your table, you add another row to the Weights table and you change the reference in the above formula to cover all of the corresponding columns.
To make it even easier for you, add `/SUM(Weights[Weight])` at the end of the formula. This automatically scales everything so that the weights add to 1, meaning that you don't have to work that out yourself. You can make the weights 2, 3, 5 and they'll be adjusted appropriately.
If you were working in python or R there would be an existing function for all of this in some package, but there's an overhead involved in learning the language. And unless you're completely changing how you analyse the data, machine learning is not going to help you.
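If you do eventually move to python, a minimal pandas sketch of the same calculation might look like this (the data and column names are just the toy example from the question; note the weights are normalized in code, so they need not sum to 1):

```python
import pandas as pd

# Toy normalized state data (best = 1, worst = 0)
states = pd.DataFrame(
    {"Crime rate": [0.20, 0.30, 0.10],
     "GDP": [1.00, 0.60, 0.40],
     "House prices": [0.15, 0.65, 0.30]},
    index=["Iowa", "Arkansas", "New Mexico"],
)

# Raw relative weights; normalizing here is the /SUM(...) trick from above
weights = pd.Series({"Crime rate": 0.2, "GDP": 0.3, "House prices": 0.5})
weights = weights / weights.sum()

# Weighted score per state: the pandas equivalent of the SUMPRODUCT formula
states["Mark"] = states.mul(weights, axis=1).sum(axis=1)
print(states["Mark"])  # Iowa 0.415, Arkansas 0.565, New Mexico 0.290
```

Adding a new variable is then just adding a column to `states` and an entry to `weights`; no formula references need updating.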
| null | CC BY-SA 4.0 | null | 2023-03-30T23:58:24.037 | 2023-03-30T23:58:24.037 | null | null | 59922 | null |
611739 | 1 | null | null | 1 | 25 | after obtaining the posterior draws(N=1000) for parameters for a bayesian VAR Model. how do we select the single parameter among 1000 draws that best depicts the posterior distribution
| how do we select the parameter values to run model from the posterior draws | CC BY-SA 4.0 | null | 2023-04-03T21:56:51.927 | 2023-04-03T21:56:51.927 | null | null | 348756 | [
"bayesian",
"posterior",
"vector-autoregression"
] |
611740 | 1 | null | null | 0 | 54 | Suppose that I want to evaluate the following integral:
$$A = \int_0^1 \log\left(\theta^s(1-\theta)^{n-s}\right)p(\theta)d\theta,$$
where $p(\theta)\equiv$ Beta$(ws+1, w(n-s)+1)$ and $n$, $w$, and $s$ are known constants. How would I go about evaluating this integral analytically?
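For reference, here is a numerical sanity check I can compare any proposed closed form against. Splitting the log gives $A = s\,E[\log\theta] + (n-s)\,E[\log(1-\theta)]$, and for $\theta \sim$ Beta$(a,b)$ these log-moments are digamma differences; the values of $n$, $w$, $s$ below are hypothetical, chosen only for illustration:

```python
import numpy as np
from scipy.special import digamma
from scipy.integrate import quad
from scipy.stats import beta

# Hypothetical values of the known constants, for illustration only
n, s, w = 10, 4, 2.0
a, b = w * s + 1, w * (n - s) + 1  # p(theta) = Beta(a, b)

# Closed form: for theta ~ Beta(a, b), E[log theta] = digamma(a) - digamma(a+b)
# and E[log(1 - theta)] = digamma(b) - digamma(a+b)
A_closed = (s * (digamma(a) - digamma(a + b))
            + (n - s) * (digamma(b) - digamma(a + b)))

# Numerical cross-check by quadrature
integrand = lambda t: (s * np.log(t) + (n - s) * np.log(1 - t)) * beta.pdf(t, a, b)
A_quad, _ = quad(integrand, 0, 1)

print(A_closed, A_quad)  # the two should agree closely
```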
| How could I evaluate $A = \int_0^1 \log\left(\theta^s(1-\theta)^{n-s}\right)p(\theta)d\theta$? | CC BY-SA 4.0 | null | 2023-04-03T22:20:05.060 | 2023-04-05T14:49:43.723 | 2023-04-05T14:49:43.723 | 257939 | 257939 | [
"expected-value",
"conditional-expectation",
"beta-distribution",
"integral"
] |
611741 | 2 | null | 611733 | 1 | null | One-sided hypothesis testing, such as in your second example
>
alternative hypothesis: true event rate is less than 3
result in a one-sided confidence interval. One-sided confidence intervals are perhaps unusual, so they can seem unfamiliar.
One-sided confidence intervals follow naturally when constructing confidence intervals (CI) by inverting a hypothesis test, see [Inverting a hypothesis test: nitpicky detail](https://stats.stackexchange.com/questions/61006/inverting-a-hypothesis-test-nitpicky-detail). The idea is to include in the CI those parameter values that cannot be rejected by the test. But the rejection region for a test differs between one-sided and two-sided tests, so the resulting CIs must differ. For details see the linked post.
A more detailed discussion is at [Intuition for inverting one sided hypothesis tests](https://stats.stackexchange.com/questions/599523/intuition-for-inverting-one-sided-hypothesis-tests)
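To make the inversion concrete, here is a Python sketch reproducing both intervals from the question via the chi-square (gamma) quantile form of the exact Poisson interval; my understanding is that this is essentially what `poisson.test` computes internally, though that is my reading rather than something documented here:

```python
from scipy.stats import chi2

x, T, conf = 170, 70, 0.95  # events, time base, confidence level
alpha = 1 - conf

# Two-sided exact (Garwood-style) CI: split alpha between the two tails
two_sided = (chi2.ppf(alpha / 2, 2 * x) / 2 / T,
             chi2.ppf(1 - alpha / 2, 2 * (x + 1)) / 2 / T)

# One-sided CI matching alternative = "less": ALL of alpha goes in the upper tail
one_sided = (0.0, chi2.ppf(conf, 2 * (x + 1)) / 2 / T)

print(two_sided)  # close to R's (2.077216, 2.822335)
print(one_sided)  # close to R's (0.000000, 2.758037)
```

The different placement of the tail probability is exactly why only changing `alternative` changes the interval.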
| null | CC BY-SA 4.0 | null | 2023-04-03T22:36:55.113 | 2023-04-03T22:36:55.113 | null | null | 11887 | null |
611743 | 1 | null | null | 1 | 25 | A machine model is tested in the factory to check the acceleration coefficient. The experiment selects n machines of this model and measures the distance traveled by the machine in 38 seconds. It is assumed that path measurement errors are independently equally distributed and have a distribution of 1/2 - exp(1). Construct a symmetric confidence interval for the acceleration coefficient. Input data: Two input values. The first is the confidence level, a number between 0 and 1. The second is a one-dimensional array numpy.ndarray of measurements in the path (in meters) of one model. Returned value: A tuple or a list of two values equal to the left and right boundaries of the confidence interval. I solved the problem this way:
```
import numpy as np
from scipy.stats import norm

def solution(p: float, x: np.ndarray) -> tuple:
    x_bar = np.mean(x)
    s = np.std(x, ddof=1)
    z = norm.ppf(1 - (1 - p) / 2)          # two-sided normal quantile
    lower = x_bar - z * s / np.sqrt(len(x))  # CI for the mean distance
    upper = x_bar + z * s / np.sqrt(len(x))
    t = 38                                  # travel time in seconds
    lower_a = 2 * lower / t                 # convert distance bounds
    upper_a = 2 * upper / t                 # to acceleration bounds
    return lower_a, upper_a
```
But the interval turns out to be too long. How do I solve the problem so that it satisfies the conditions below?
[](https://i.stack.imgur.com/BSSwl.jpg)
Upd: The acceleration coefficient is the same as the acceleration.
| Construct a symmetric confidence interval for the acceleration coefficient | CC BY-SA 4.0 | null | 2023-04-03T22:56:45.297 | 2023-04-03T22:56:45.297 | null | null | 384866 | [
"mathematical-statistics",
"python",
"scipy",
"numpy"
] |
611744 | 1 | null | null | 2 | 35 | In [1], Ogle & Barber discuss a method for ensuring identifiability of certain Bayesian multilevel regression models; they call this method "post-sweeping". I have a couple of related questions stemming from this method/approaches like it.
#### Setup
Consider the varying-intercepts model,
$$
y_i \sim N(\beta_0 + \beta_{g(i)}, \sigma)
$$
Where $y$ is our observed data, indexed by observation $i$, and we know that each observation belongs to one of $G$ groups, $g(i)$ being the group of observation $i$.
This model may be "practically" non-identifiable, as there is obvious potential for parameter tradeoff between the "global intercept" $\beta_0$ and the group-level term $\beta_g$. (I suppose the severity of non-identifiability will depend upon the choice of priors and the observed data.) Let's say that we perform MCMC and find that indeed these two parameters exhibit poor mixing, perhaps due to identifiability challenges.
The post-sweeping method—if I am understanding correctly—is essentially to subtract the mean of each multilevel term from that term (mean being across all levels of that term, not across all steps of the MCMC chain). Then add each mean to the global term. In the case of the above simple model, we would do (with apologies for some notation abuse),
\begin{align*}
\bar \beta_g = \frac{1}{G} \sum_{g=1}^{G} \beta_g & \quad \text{(performed for each draw of the Markov chain)} \\
\beta_{g(\cdot)}^* = \beta_{g(\cdot)} - \bar \beta_g & \quad \text{(broadcast over all levels of } \beta_g \text{)} \\
\beta_0^* = \beta_0 + \bar \beta_g &
\end{align*}
Essentially this constrains each multilevel term to sum to zero while carrying all the extra stuff over to the global intercept. If the adjusted (starred) terms pass convergence diagnostic checks, then we are good to go ahead with our analysis using the adjusted terms.
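In code, my understanding of the sweep is the following numpy sketch (the draws are hypothetical, purely to illustrate the bookkeeping):

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws, G = 1000, 5  # hypothetical number of MCMC draws and groups

# Hypothetical unadjusted posterior draws: global intercept and group terms
beta0 = rng.normal(0.0, 1.0, size=n_draws)        # shape (n_draws,)
beta_g = rng.normal(0.0, 1.0, size=(n_draws, G))  # shape (n_draws, G)

# Post-sweeping: per draw, subtract the across-group mean from each group
# term and add that mean to the global intercept
bar_beta = beta_g.mean(axis=1, keepdims=True)     # (n_draws, 1)
beta_g_star = beta_g - bar_beta
beta0_star = beta0 + bar_beta.ravel()

# The adjusted group terms sum to zero within every draw, and the linear
# predictor beta0 + beta_g is unchanged by the adjustment
assert np.allclose(beta_g_star.sum(axis=1), 0.0)
assert np.allclose(beta0_star[:, None] + beta_g_star,
                   beta0[:, None] + beta_g)
```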
#### Questions
- It seems like this method isn't appropriate if the number of groups is small. E.g. in the extreme case of two groups, subtracting the mean would make the two scalar terms of $\beta_g$ be equal in magnitude/flipped in sign. (This seems to me to be the same limitation discussed in [1] for a different approach to ensuring identifiability, where the sum-to-zero constraint is written into the model itself.) Are there similar-in-spirit approaches to ensuring identifiability when the number of groups is small? One approach that I've considered is doing a "sweep" of just one reference level of each term, rather than the mean of all levels of each term. In other words, identify the model via a treatment-style contrast. Then the global intercept is interpreted as the reference value for the chosen level (or combination of levels, if the multilevel model is more complex). I'm not entirely certain if this approach makes sense for complex multilevel models, though, and why it isn't as stringent a constraint as the mean-sweeping approach.
- The paper [1] particularly focuses on the case where $\beta_g$ is a random effect, modeled something like $\beta_g \sim N(0, \sigma_g)$, $\sigma_g \sim
\text{(choice of prior)}$. But if $\beta_g$ in the full/unadjusted/overparameterized model isn't identified, how shall we interpret $\sigma_g$? In other words, is it well-founded to even try and quantify the population variance for a parameter that isn't identified in the full model? The identifiability issue certainly means that samples of $\beta_g$ can't be interpreted in the full model, but I'm not sure what to make of its population-level hyperparameter(s).
---
Sorry for this somewhat wordy question. I'm trying to work with multilevel regression using a nonstandard response distribution, so unfortunately the standard packages that deal with mixed-effects models don't apply. I've also done some reading up in the Gelman & Hill textbook, but although they discuss parameterization and redundant parameters, I couldn't find much advice specifically on identifiability. [1] was the best practical source I could find on the issue, but I'd be glad to receive any other sources too. Thank you!
[1]: Ogle, K., & Barber, J. J. (2020). Ensuring identifiability in hierarchical mixed effects Bayesian models. Ecological Applications, 30(7), e02159.
| Post-hoc identifiability for Bayesian multilevel regression model | CC BY-SA 4.0 | null | 2023-04-03T22:56:50.490 | 2023-04-04T18:33:17.717 | 2023-04-04T18:33:17.717 | 332860 | 332860 | [
"regression",
"bayesian",
"multilevel-analysis",
"hierarchical-bayesian",
"identifiability"
] |
611745 | 1 | null | null | 2 | 82 | Let $X_1,...,X_n$ a i.i.d from a population with distribution $U[0,\theta]$, i.e.,
$f_{X_i}(x)=\frac{1}{\theta}g_{[0,\theta]}(x)$, for $i=1, \ldots, n$
where
\begin{align}
g_{[0,\theta]}(x) =
\begin{cases}
1 & x \in [0, \theta], \\
0 & x \notin [0, \theta].
\end{cases}
\end{align}
How to prove that $\hat\theta = \max\{X_1, \ldots, X_n\}$ is a root mean square consistent estimator for $\theta$?
I know that an estimator $\hat\theta_n$ is consistent in root mean square if it fulfills that
$$\lim\limits_{n \to \infty}MSE(\hat\theta_n)=0.$$
with $MSE(\hat\theta_n)=E[(\hat\theta_n-\theta)^2]=\operatorname{Var}(\hat\theta_n)+(E(\hat\theta_n)-\theta)^2$.
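As a numerical sanity check of the target statement (not a proof), here is a quick Monte Carlo sketch comparing the simulated MSE against the closed form $2\theta^2/((n+1)(n+2))$, which I believe follows from $\hat\theta/\theta \sim \mathrm{Beta}(n,1)$:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 1.0
results = {}

# Monte Carlo check that MSE(theta_hat) = E[(max X_i - theta)^2] shrinks
# with n; since max(X)/theta ~ Beta(n, 1), the closed form should be
# 2 * theta^2 / ((n + 1)(n + 2)), which tends to 0 as n grows
for n in (10, 100, 1000):
    draws = rng.uniform(0, theta, size=(20000, n)).max(axis=1)
    results[n] = (np.mean((draws - theta) ** 2),          # Monte Carlo MSE
                  2 * theta**2 / ((n + 1) * (n + 2)))     # closed form

for n, (mc, exact) in results.items():
    print(n, mc, exact)
```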
| I need to prove that $\hat\theta=\max\{X_1,...,X_n\}$ is a mean square consistent estimator for $\theta$ | CC BY-SA 4.0 | null | 2023-04-03T23:50:18.973 | 2023-04-04T12:32:59.670 | 2023-04-04T00:36:22.437 | 20519 | 384869 | [
"probability",
"self-study",
"mathematical-statistics",
"estimators",
"consistency"
] |
611747 | 1 | 613408 | null | 7 | 208 | I'm working on a time-dependent dataset, where basically I have two different populations and we're building Markov chains to describe their behaviors. What I'm trying to do is compare the transition matrices associated to the two populations, and decide whether the transition matrices are really different, or whether they could be due to chance. So my question is, is there a good way to compare two transition matrices in order to determine whether they are statistically different? Part of the problem we're running into is that both matrices have a lot of zeroes in them, but not always in the same places. Any suggestions are welcome.
| statistical comparison of two markov chain transition matrices | CC BY-SA 4.0 | null | 2023-04-04T01:11:44.137 | 2023-04-20T20:46:38.930 | 2023-04-19T03:16:15.100 | 362671 | 158721 | [
"statistical-significance",
"markov-process",
"transition-matrix"
] |
611748 | 1 | 611751 | null | 2 | 114 | I am checking the conditions for hypothesis testing a Pearson correlation as significant or not, and also checking the residuals normality conditions for OLS. Why are the following methods giving different results (why are the variables not bivariate normal but the residuals of an OLS model are normal?)? Also, why does changing the regression from y~x to x~y alter the p value so much for normality of the residuals?
I have included a reproducible example below.
```
library(tidyverse)
library(mvnormtest)
x <- c(21,21,20,16,15,20,16,16,23,21,9,14,18,6,35,39,29,26,32,7)
y <- c(226,223,233,209,207,251,268,255,357,401,323,302,422,245,389,409,352,374,389,418)
data <- tibble(x,y)
# Histogram of residuals for visual inspection
data %>%
mutate(resid = residuals(lm(x ~ y))) %>%
ggplot(aes(x = resid)) +
geom_histogram()
# Univariate Shapiro Wilks of residuals
data %>%
mutate(resid = residuals(lm(x ~ y))) %>%
select(resid) %>%
t() %>%
shapiro.test()
# Univariate Shapiro Wilks of residuals (switch x and y order)
data %>%
mutate(resid = residuals(lm(y ~ x))) %>%
select(resid) %>%
t() %>%
shapiro.test()
# Bivariate Shapiro Wilks
data %>%
t() %>%
mshapiro.test()
```
I was also confused because I thought that hypothesis testing a Pearson correlation has similar assumptions to fitting OLS model, in that the variables should be bivariate normal, but is this mistaken? Do I need to check normality individually for the x and y variables?
Sources:
- Bivariate normality is a necessary condition for testing Pearson correlation (but alternatively, the univariate normality of the two variables can be separately checked (?)): https://statistics.laerd.com/spss-tutorials/pearsons-product-moment-correlation-using-spss-statistics.php
- Both variables must be individually normal (seems to contradict the previous source (1)): https://toptipbio.com/what-is-pearson-correlation/
- Holding constant x, then y must be normally distributed (this assumption seems to be the same as (1)): https://courses.lumenlearning.com/introstats1/chapter/testing-the-significance-of-the-correlation-coefficient/
If anyone has an authoritative source that explains this, I'd much appreciate it.
| Why does checking normality of residuals give a different result than checking bivariate normality of the two variables? | CC BY-SA 4.0 | null | 2023-04-04T01:13:58.950 | 2023-04-04T17:47:59.863 | 2023-04-04T03:38:22.243 | 350157 | 350157 | [
"regression",
"correlation",
"linear",
"normality-assumption",
"shapiro-wilk-test"
] |
611749 | 1 | null | null | 0 | 36 | Suppose I have pre-treatment samples and post-treatment samples for a group of patients. Let the treatment effect on each patient be the difference between the two sample means. Now I have a new treatment and I want to test whether this new treatment has the same treatment effect, i.e. does the sample mean difference follow the same distribution of the old treatment?
I know there are paired t-tests and 2-sample t-tests, but what's the correct t-test for 2 differences between sample means? Note that the definition of treatment effect cannot be changed. Pre-treatment and post-treatment samples do not necessarily have the same variance.
EDIT: Does it make sense to perform a Welch's test on the means of the mean differences? Also, the sample sizes for the old and new treatments are highly unbalanced: 1000 vs 50.
| T-test for paired differences between sample means | CC BY-SA 4.0 | null | 2023-04-04T01:38:20.257 | 2023-04-04T22:43:55.707 | 2023-04-04T22:43:55.707 | 384874 | 384874 | [
"hypothesis-testing",
"t-test",
"repeated-measures",
"central-limit-theorem",
"paired-data"
] |
611750 | 2 | null | 578229 | 1 | null | The $k$-nearest neighbors supervised learning algorithm works by taking a point, calculating the distance in the feature space (predictors) between that point and all other points, determining the $k$ points that are closest, and using the labels ($y$) on those points to make a prediction. That prediction might be an average in order to do a regression. It might be the category with the most representation among those $k$-nearest neighbors. It might be a probability based on the relative number of categories represented in those $k$-nearest neighbors.
The key point, however, is that there is a feature space for calculating distances and then a target space of the variable(s) being predicted. This is exactly a supervised problem.
One unsupervised machine learning clustering algorithm is called “k-means”, which you notice has a similar but distinct name. The gist here is that the distance between points and the cluster mean are calculated in order to assign points to clusters. However, there is no target space, and this is not a supervised method.
A place where $k$-nearest neighbors could be used in an unsupervised way is if you just want to know the $k$-nearest neighbors in the feature space. This could be used, for instance, for some kind of matching procedure.
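The per-query procedure described above (compute distances in feature space, take the $k$ closest points, average their labels) can be sketched in a few lines of plain numpy; the data here are a toy example, not from any particular library:

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3):
    """Predict for one query point by averaging the labels of its k
    nearest neighbors in the feature space (Euclidean distance)."""
    d2 = ((X_train - x_query) ** 2).sum(axis=1)  # squared distances
    nearest = np.argsort(d2)[:k]                 # indices of k closest
    return y_train[nearest].mean()               # average -> regression

# Toy 1-D feature space with a distant outlier point
X = np.array([[0.0], [1.0], [2.0], [10.0]])
y = np.array([0.0, 1.0, 2.0, 10.0])
pred = knn_predict(X, y, np.array([1.1]), k=3)
print(pred)  # neighbors are x = 0, 1, 2, so the prediction is 1.0
```

Swapping the averaging step for a majority vote over `y_train[nearest]` gives the classification variant.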
| null | CC BY-SA 4.0 | null | 2023-04-04T01:43:44.183 | 2023-04-04T01:43:44.183 | null | null | 247274 | null |
611751 | 2 | null | 611748 | 4 | null | >
confused because I thought that hypothesis testing a Pearson correlation has similar assumptions to fitting OLS model,
For simple regression, yes, and the test is similarly fairly insensitive to non-normality of errors in large samples.
Bivariate normality and marginal normality are not the same and neither is strictly required for testing a Pearson correlation. (Bivariate normality is sufficient but not necessary. Marginal normality on its own is neither sufficient nor necessary)
>
Also, why does changing the regression from y~x to x~y alter the p value so much for normality of the residuals?
Because you're looking at the errors from a different line and conditioning on a different variable. They may be entirely different.
Testing normality is not necessarily particularly helpful and doesn't answer the question you need answered.
---
To return to the title question:
>
Why does checking normality of residuals give a different result than checking bivariate normality of the two variables?
They're different things. Specifically, while bivariate normality will result in normally distributed residuals from a regression, you can get normally distributed residuals while neither variable is marginally normal (and neither are the variables jointly normal).
Consider, for example, a simple regression where x is either 0 or 1 and the model $Y_i = \beta_0 + \beta_1 x_i +\epsilon_i$, $i=1,2,...,n$, where the $\epsilon_i$ are i.i.d. $N(0,\sigma^2)$. In this case, under $\rho(x,Y)=0$, which is equivalent to $\beta_1=0$, the sample Pearson correlation $r$ has the usual null distribution, and so the test statistic $t=\frac{r\sqrt{n-2}}{\sqrt{1-r^2}}$ has a t-distribution with $n-2$ d.f. when $H_0$ is true.
Here's an illustration (not proof, but proofs of the necessary results can be found in pretty much any decent regression text):
[](https://i.stack.imgur.com/gF4EE.png)
The plot on the left is under H1 (the variables are correlated; 1000 values of x and y). Clearly they won't be bivariate normal whatever the slope, because one variable is binary. Under H0, I simulated 1000 sets of 20 values of x and y - i.e. smaller samples with population slope 0 - computed the correlation for each set, calculated the t-test statistic, and finally transformed it by its theoretical cdf (the t cdf with 18 df). The result looks like a random sample from a uniform distribution, exactly as it should, suggesting that significance levels and p-values will behave as claimed under H0 -- the test works exactly as advertised even though (X,Y) was not bivariate normal; it just had normal errors in the conditional distribution of one of the variables.
We can use a similar approach to investigate sensitivity to non-normality under whatever sample size and pattern of x's you have. I'd suggest that as a good starting point for deciding how sensitive the test is to the assumption: explore plausible departures from it, and see how far you have to push them before the test's properties stray too far from the nominal properties for your own requirements.
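For anyone who wants to reproduce the illustration, a rough Python version of the simulation might look like this (the seed and settings are my own choices, not the ones used for the figure):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, n_sim = 20, 1000
u = np.empty(n_sim)

# Under H0: x is Bernoulli(1/2), y is normal and independent of x, so the
# population correlation is 0, but (x, y) is certainly not bivariate normal
for i in range(n_sim):
    x = rng.integers(0, 2, size=n).astype(float)
    while x.std() == 0:  # guard against the rare all-0 / all-1 draw
        x = rng.integers(0, 2, size=n).astype(float)
    y = rng.normal(size=n)
    r = np.corrcoef(x, y)[0, 1]
    t = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)
    u[i] = stats.t.cdf(t, df=n - 2)  # probability-integral transform

# If the t null distribution is correct, u should look Uniform(0, 1)
print(u.mean(), stats.kstest(u, "uniform").pvalue)
```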
---
Regarding your sources:
>
bivariate normality is a necessary condition for testing Pearson correlation (but alternatively, the univariate normality of the two variables can be separately checked)
Univariate normality of the margins is not sufficient for bivariate normality. Bivariate normality establishes both linearity of conditional expectation, and both conditional and marginal normality. As mentioned before bivariate normality, while sufficient, is not required for the significance levels to be correct, or the test to be consistent, or to get good power.
>
Both variables must be individually normal
Neither variable need be individually normal; In the simple regression example I gave, the x's were Bernoulli$(\frac12)$ and the y's were conditionally normal with constant variance, but the marginal distribution of y will only be normal when the population correlation is 0; under H1 it can be bimodal. Under other patterns of x-values one can get left skewed, right skewed, heavy-tailed or light tailed marginal distributions of y (when the correlation is non-zero).
>
Holding constant x, then y must be normally distributed
Correct, though additional things are required. However, note that holding x constant, you might only have a single y value there -- consider replacing my earlier example with x having a beta(0.1,0.1) distribution, say.
>
this assumption seems to be the same as (1)
It is not. It is the conditional normality assumption from regression. To be clear, that's the assumption that is made in deriving the t-distribution of the test statistic when H0 is true. The test is not especially sensitive to that assumption in moderately large samples, though.
---
| null | CC BY-SA 4.0 | null | 2023-04-04T02:01:57.967 | 2023-04-04T17:47:59.863 | 2023-04-04T17:47:59.863 | 805 | 805 | null |
611752 | 1 | null | null | 0 | 19 | I'm using data (secondary analysis) to conduct an exploratory analysis. I have a sample size of $n = 67$, and the exploratory analysis includes two independent variables and four dependent variables. I am assessing how each of the two independent variables correlates (partial correlations) with the four dependent variables with and without control variables (two variables are controlled). Next, I am conducting linear regression on the relations that change with and without controls. Since my sample size is small, what assumptions am I violating by conducting this exploratory analysis? I'm asking so I can mention these in my paper as limitations.
| Exploratory Analysis with Small Sample | CC BY-SA 4.0 | null | 2023-04-04T02:04:29.093 | 2023-04-04T10:28:26.083 | 2023-04-04T09:40:22.457 | 35989 | 380161 | [
"regression",
"sample-size",
"multicollinearity",
"small-sample",
"exploratory-data-analysis"
] |
611753 | 2 | null | 20802 | 1 | null | (If you’re afraid to evaluate your model on representative data, that should say a lot.)
By oversampling the minority class, you are telling the model to expect members of that category much more often than they truly appear. Consequently, if you move to representative data (the natural class ratio), I suspect that you will find the model a bit trigger-happy about classifying cases into the minority class, leading to more false positives than you might expect. If you do not care about false positives, save yourself the trouble of doing all this difficult machine learning work that risks melting your computer: just classify everything as being in that category. If this is unacceptable, then it would seem that you have some sense of the cost you incur from misclassifications, and you might be interested in a more nuanced assessment that uses the continuous outputs of your model.
| null | CC BY-SA 4.0 | null | 2023-04-04T02:05:43.237 | 2023-04-04T02:05:43.237 | null | null | 247274 | null |
611754 | 2 | null | 280209 | 1 | null | It would be elegant to have some kind of AIC/BIC-style penalty on a decision tree. However, the typical way to penalize a machine learning model for being overly complex is to try it on some data that were not used to develop the model, with the thinking being that, if the model is so complex that it allows for fitting to the noise (coincidences in the data), the model will not have such a great fit on the new data and will have a poor score. When you see something like a train/test split, the test data are being used for this very evaluation.
There are drawbacks to such a strategy. For instance, what if you have some good or bad luck with your holdout data and just get one score that is particularly high or low? Cross validation is a possible remedy for this. As another example, having a holdout set limits the previous data available for training. [Bootstrap validation](https://stats.stackexchange.com/a/560089/247274) is a possible remedy for this.
| null | CC BY-SA 4.0 | null | 2023-04-04T02:14:33.387 | 2023-04-04T02:34:15.250 | 2023-04-04T02:34:15.250 | 247274 | 247274 | null |
611755 | 1 | null | null | 1 | 21 | I am trying to compare the accuracy of a polynomial model and a Random forest regression model in predicting a variable Y but also the integral of this variable ,
With the polynomial model, it is clear how to compute its integral, however, I am wondering if applying a numerical integration over the prediction range will make sense in the case of Random forest regression model,
| Numerical integration over a Random Forest Regression Model | CC BY-SA 4.0 | null | 2023-04-04T02:29:07.750 | 2023-04-04T07:31:20.077 | null | null | 313261 | [
"random-forest",
"integral",
"numerical-integration"
] |
611756 | 2 | null | 611730 | 1 | null | I think at this point we are faced with two distinct deployment environments, and as such two models are warranted. It is tempting to consider some multiple-model, two-staging, stacking, ensembling, mixing, etc. methodology, but the more I think of the setting the more I think it is fraught with modelling caveats. Some of these caveats are the following:
- The missingness of X-ray is not "completely-at-random" between the two (sub-)samples so direct imputation of X-ray features can be problematic. The two subsamples can still have different dynamics and almost certainly different unmeasured confounders (e.g. access to X-rays).
- There is an implicit underlying assumption in your commentary that "all patients who have access to an X-ray machine and a radiologist get an X-ray"; I would question how realistic that is. Likely, a doctor thinking that a case is not too severe might not request a diagnostic X-ray (due to cost, time, workload, etc.).
- Complete case analysis (CCA) can be biased when the chance of being a complete case depends on the observed values of the outcome. The two prior points strongly suggest we are in this situation, so we cannot go for that.
My suggestion would be to mimic [transfer learning](https://en.wikipedia.org/wiki/Transfer_learning). Somewhat informally, in transfer learning, we use a pre-trained model as the starting point for (usually) another task where we try to take advantage of learned features and/or outputs of the pre-trained model. [Transfer learning for non-image data in clinical research: A scoping review](https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000014) (2022) by Ebbehoj et al. give a very relevant overview.
A simple approach I can suggest would start by splitting our data in two sub-samples. One with and one without X-rays access, let's call them subsamples $A$ and $B$ respectively.
We would then use subsample $B$ to create a few models for the outcome; we pick the best one, say $M_B$. We report how well we do in our diagnostic task (maybe we have a holdout from $B$, maybe we present bootstrap estimates or repeated $k$-fold). Effectively, this is how well we do in our "large training" set from a "resource-poor" environment.
We have not touched subsample $A$ until now. For the next step we use $M_B$ to create risk estimates for the instances in subsample $A$. These estimates can now be used as a new feature to build our model $M_A$, which uses the X-ray info together with everything else to give risk estimates. Similarly to the step above, we report how well we do in our "smaller calibration" set within this "resource-rich" environment. We note here that this calibration also allows us to account for potential [domain shift](https://en.wikipedia.org/wiki/Domain_adaptation#Domain_shift) (a change in the data distribution between the two sub-samples).
Notice that in the above approach we never use subsamples $A$ and $B$ to fit a model at the same time. In a sense we perform a "pseudo"-transfer learning; $M_B$ gives us our hidden feature(s) and we "calibrate" our performance based on our new sample. The narrative of what the two models do is also quite simple: $M_B$ gives us our risk estimates in a resource-poor environment, and $M_A$ gives us our risk estimates in a resource-rich environment by adjusting the risk presented by $M_B$.
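A toy sketch of this two-stage pipeline (the data, the feature names, and the simple rate-counting "models" below are all hypothetical stand-ins for real learners):

```python
from collections import defaultdict

def fit_rate_model(rows, keys):
    """Toy 'model': estimated P(y=1) per combination of the given discrete features."""
    tally = defaultdict(lambda: [0, 0])
    for r in rows:
        k = tuple(r[f] for f in keys)
        tally[k][0] += r["y"]
        tally[k][1] += 1
    overall = sum(r["y"] for r in rows) / len(rows)
    def predict(r):
        s, n = tally.get(tuple(r[f] for f in keys), (0, 0))
        return s / n if n else overall  # fall back to the base rate
    return predict

# subsample B (no X-ray access) and subsample A (with X-ray), hypothetical data
B = [{"fever": 1, "y": 1}, {"fever": 1, "y": 1}, {"fever": 0, "y": 0}, {"fever": 0, "y": 1}]
A = [{"fever": 1, "xray": 1, "y": 1}, {"fever": 1, "xray": 0, "y": 0},
     {"fever": 0, "xray": 1, "y": 1}, {"fever": 0, "xray": 0, "y": 0}]

M_B = fit_rate_model(B, ["fever"])           # resource-poor model
for r in A:
    r["risk_B"] = M_B(r)                     # M_B's output becomes a feature of A
M_A = fit_rate_model(A, ["xray", "risk_B"])  # resource-rich model built on top of M_B
```

The key structural point is that `M_B` is fit only on subsample $B$, and its risk estimates enter subsample $A$ only as an extra feature for `M_A`.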
I recently read the paper [What is being transferred in transfer learning?](https://proceedings.neurips.cc/paper/2020/hash/0607f4c705595b911a4f3e7a127b44e0-Abstract.html) (2020) by Neyshabur et al., it focuses on Computer Vision but it is quite readable and easy to understand how it can be applicable in other areas too.
| null | CC BY-SA 4.0 | null | 2023-04-04T02:47:41.183 | 2023-04-04T02:47:41.183 | null | null | 11852 | null |
611757 | 1 | null | null | 0 | 19 | It is known that a linear function compounded with a squared loss is convex, so one can efficiently find the optimal solution when performing linear regression. Specifically, given a data point $(x,y)$ we have the loss function
$$f(w)=(w^Tx-y)^2,$$
which is convex in $w$. I'm wondering if there exist other functions that satisfy this condition. And is it possible or meaningful to design new learning algorithms with such functions?
Thanks in advance.
| Is there any function that is convex after compounded with a squared loss (besides linear ones)? | CC BY-SA 4.0 | null | 2023-04-04T02:55:35.000 | 2023-04-04T06:59:18.133 | 2023-04-04T06:59:18.133 | 53690 | 270425 | [
"regression",
"machine-learning",
"optimization",
"loss-functions",
"convex"
] |
611758 | 1 | null | null | 0 | 7 | I have 4 variables which are public attitudes towards 4 subgroups of immigrants.
I want to compare the mean scores of the 4 variables in terms of the whole sample.
I can't use One-way ANOVA or ANCOVA since I don't have groups to compare; the whole sample is the group itself.
I can run pairwise t-tests multiple times (for every combination of the two variables), but I am worried about increasing type 1 error.
I also thought about Repeated measures ANOVA since it can analyze all the 4 mean scores simultaneously, even with some control variables.
However, I wonder if the fact that the variables are technically not "repeated measures" of the same construct would matter.
What would be the best method of comparing the mean scores in this case? Any other methods would be welcomed.
| What is the best method for comparing multiple mean scores within a single group? | CC BY-SA 4.0 | null | 2023-04-04T02:59:36.680 | 2023-04-04T02:59:36.680 | null | null | 384880 | [
"mean"
] |
611759 | 2 | null | 611745 | 3 | null | The standard way is to find the CDF of $\hat{\theta}_n$ first, which is
\begin{align*}
F(x) := P(\max(X_1, \ldots, X_n) \leq x) = \prod_{i = 1}^n \frac{x}{\theta} =
\frac{x^n}{\theta^n}, \; 0 \leq x \leq \theta.
\end{align*}
This implies the PDF of $\hat{\theta}_n$ is $f(x) = nx^{n - 1}/\theta^n$ for $0 \leq x \leq \theta$ and $0$ otherwise, whence
\begin{align*}
E[(\hat{\theta}_n - \theta)^2] = \int_0^\theta(t - \theta)^2f(t)dt =
\frac{n}{\theta^n}\int_0^\theta (t - \theta)^2t^{n - 1}dt. \tag{1}
\end{align*}
To evaluate the integral $(1)$, apply the variable substitution $u = t/\theta$, which transfers $(1)$ to
\begin{align*}
n\int_0^1\theta^2(1 - u)^2 u^{n - 1} du = \theta^2nB(3, n), \tag{2}
\end{align*}
where $B(a, b)$ is the [Beta function](https://en.wikipedia.org/wiki/Beta_function). Since $B(a, b) = (a - 1)!(b - 1)!/(a + b - 1)!$ when $a, b$ are positive integers, the right-hand side of $(2)$ equals to
\begin{align}
\theta^2 \frac{2n(n - 1)! }{(n + 2)!} = \frac{2\theta^2}{(n + 2)(n + 1)},
\end{align}
which converges to $0$ as $n \to \infty$. This shows that $\hat{\theta}_n \to \theta$ in quadratic mean.
The above method is straightforward. However, it involves pretty heavy calculations (you may circumvent summoning the Beta function by expanding $(1 - u)^2$ then applying linearity of the integral, which requires less machinery). This motivates me to consider the following alternative solution.
First, show that $\hat{\theta}_n$ converges to $\theta$ in probability. I will leave this to you as an exercise. It is similar to finding the CDF of $\hat{\theta}_n$.
Next, for any $\epsilon \in (0, \theta)$, we have
\begin{align*}
& E[(\hat{\theta}_n - \theta)^2] =
E\left[(\hat{\theta}_n - \theta)^2I_{[|\hat{\theta}_n - \theta| \leq \epsilon]}(\omega)\right] +
E\left[(\hat{\theta}_n - \theta)^2I_{[|\hat{\theta}_n - \theta| > \epsilon]}(\omega)\right] \\
\leq & \epsilon^2 + 4\theta^2P[|\hat{\theta}_n - \theta| > \epsilon]. \tag{3}
\end{align*}
Since $\hat{\theta}_n$ converges to $\theta$ in probability, the second term in $(3)$ converges to $0$ as $n \to \infty$. Since $\epsilon$ is arbitrary, this implies that $E[(\hat{\theta}_n - \theta)^2] \to 0$ as $n \to \infty$, completing the proof.
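As a quick numerical sanity check (a simulation sketch, not part of the proof), the simulated mean squared error of $\hat{\theta}_n$ matches $2\theta^2/((n+2)(n+1))$:

```python
import random
random.seed(42)

theta, n, reps = 2.0, 10, 200_000
# simulate max(X_1, ..., X_n) many times and average the squared error
mse = sum(
    (max(random.uniform(0, theta) for _ in range(n)) - theta) ** 2
    for _ in range(reps)
) / reps
exact = 2 * theta**2 / ((n + 2) * (n + 1))  # = 8/132, about 0.0606
print(mse, exact)
```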
| null | CC BY-SA 4.0 | null | 2023-04-04T03:36:13.133 | 2023-04-04T12:32:59.670 | 2023-04-04T12:32:59.670 | 20519 | 20519 | null |
611760 | 2 | null | 559051 | 0 | null | "What is meant by "history" in this method?"
The history of a data point $i$ refers to all data points appearing before data point $i$ in a given permutation.
"Afterwards D is split into training and test examples. Then the test examples were expanded with all the training examples of the past steps."
The artificial time only applies to training dataset $\mathcal{D}$. In the testing phase, the whole training dataset $\mathcal{D}$ is used to do TS encoding.
To avoid target leakage, we can actually divide the training set $\mathcal{D}$ into $\mathcal{D}_1$ and $\mathcal{D}_2$, and then using $\mathcal{D}_1$ to do TS encoding and $\mathcal{D}_2$ for training. But this is a waste of your training data. They then figured out a way, i.e. ordered TS encoding, to avoid target leakage and at the same time utilize full training dataset $\mathcal{D}$. This idea is not only used to encode categorical features but also to build base learners in boosting. This target leakage in the boosting machine has been overlooked by people until catboost.
The ordered TS encoding is explained in other answer. It is worth noting that the encoding has large variance for data points with short history, and that is why they utilized many permutations (or artificial times) in different iterations of the boosting process. Notice that this does not mean that different artificial times are used to encode different categorical features. It is actually the same sampled artificial time used to do all ordered TS encoding for all categorical features and also for constructing the base learner.
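A minimal sketch of ordered TS encoding under one fixed permutation (the prior and its weight below are illustrative smoothing choices, not CatBoost's exact defaults):

```python
def ordered_ts(categories, targets, prior=0.5, weight=1.0):
    """Encode each point using only its history: earlier points in the permutation."""
    sums, counts = {}, {}
    encoded = []
    for c, y in zip(categories, targets):
        s, n = sums.get(c, 0.0), counts.get(c, 0)
        encoded.append((s + prior * weight) / (n + weight))  # smoothed mean target
        sums[c] = s + y      # update the history only after encoding this point
        counts[c] = n + 1
    return encoded

print(ordered_ts(["a", "a", "b", "a"], [1, 0, 1, 1]))
```

The first occurrence of a category has no history and falls back to the prior, which is exactly the high-variance, short-history behaviour mentioned above.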
| null | CC BY-SA 4.0 | null | 2023-04-04T04:07:54.170 | 2023-04-04T04:07:54.170 | null | null | 256134 | null |
611761 | 1 | null | null | 0 | 3 | RMSE or MAD are used as distance measures more for the continuous data. What will be good distance measures for ordinal data?
Are you aware of any good references that show imputation methods also used RMSE or MAD (together with accuracy) as distance measures for ordinal data?
| Searching for references that show imputation methods use RMSE as distance measures for ordinal data | CC BY-SA 4.0 | null | 2023-04-04T04:19:39.497 | 2023-04-04T04:19:39.497 | null | null | 384882 | [
"ordinal-data",
"data-imputation",
"rms"
] |
611762 | 2 | null | 346497 | 2 | null | In R you have the [seastests package](https://cran.r-project.org/web/packages/seastests/index.html) that includes several seasonality tests and the function isSeasonal() that conveniently combines several seasonality tests to indicate whether your time series is seasonal.
| null | CC BY-SA 4.0 | null | 2023-04-04T05:08:00.687 | 2023-04-04T05:08:00.687 | null | null | 384832 | null |
611763 | 1 | 611811 | null | 1 | 30 | Let's look at an example:
>
We want to know if our new jogging shirt reduces the amount of sweat produced by runners. Ten factory employees in Bangkok, Thailand are recruited to try out the prototype by jogging for 30 minutes and recording the amount of sweat on their foreheads. Ten office employees from Vancouver, Canada are recruited to do the same thing wearing the old jogging shirt.
Our predictor is the jogging shirt and the response is sweat amounts. Could a confounder be the weather since higher temperatures will cause someone to sweat more? This doesn't affect the predictor since the people in the study are forced to wear a new or old shirt. Is weather a confounder even though it doesn't have a causal effect on the predictor? How could I adjust the experiment to account for the confounder?
| Do confounder variables need to have a causal effect on treatment AND response variables? | CC BY-SA 4.0 | null | 2023-04-04T05:14:49.510 | 2023-04-04T12:50:13.723 | 2023-04-04T12:50:02.387 | 76484 | 266384 | [
"experiment-design",
"causality",
"confounding"
] |
611764 | 2 | null | 609734 | 0 | null | The Kruskall-Wallis test as a seasonality test is implemented in the R package [{seastests}](https://cran.r-project.org/web/packages/seastests/index.html). For a monthly time series, it tests whether the mean of the months is identical across all months.
The test does not assume that the mean is identical across years, only that the time series is stationary. The latter is why by default the first differences of the series is analysed.
More details on this test and other seasonality tests can be found in [Ollech & Webel 2022](https://www.degruyter.com/document/doi/10.1515/jem-2020-0020/html).
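For intuition, the Kruskal-Wallis statistic underlying the test can be sketched as follows (no tie correction; for real analyses rely on a tested implementation such as the one the package calls):

```python
def kruskal_h(groups):
    """Kruskal-Wallis H statistic (no tie correction) across groups,
    e.g. one group of observations per calendar month."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    n = len(pooled)
    # H = 12/(N(N+1)) * sum_i R_i^2 / n_i  -  3(N+1)
    return 12.0 / (n * (n + 1)) * sum(
        rs**2 / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3 * (n + 1)

print(kruskal_h([[1, 2], [3, 4]]))  # the two "months" differ clearly
```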
| null | CC BY-SA 4.0 | null | 2023-04-04T05:18:43.737 | 2023-04-04T05:18:43.737 | null | null | 384832 | null |
611765 | 1 | null | null | 0 | 43 | Suppose we have a lottery with n winning numbers across m categories. If we take the results from N draws, and want to test the randomness using the runs test (testing on even/oddness of the winning numbers), would it be more appropriate to do the runs test across the entire sample (all n x N numbers) or do the runs tests across each individual category of prizes (i.e. do the runs test on first/second/third prize numbers)?
| Lottery Numbers and runs test for randomness | CC BY-SA 4.0 | null | 2023-04-04T05:25:43.290 | 2023-04-04T05:25:43.290 | null | null | 384884 | [
"hypothesis-testing",
"random-variable",
"combinatorics",
"runs"
] |
611768 | 1 | null | null | 0 | 8 | I am working on a research design project where I am trying to correlate variables for which the available data cover either the entire population or nearly so. I know that with true population-level data there is no statistical analysis needed, but that leaves me with questions. Or would the population in this instance be the years measured? And if so, the sample population is obviously not randomized.
In any case, as I'm not trying to make estimations except about variance, I'm thinking that because $ \rho X,Y = \frac{cov(X,Y)} {\sigma X \sigma Y} $ and
$cov(X,Y) = E[XY] - E[X]E[Y]$ are analyses of the data sets and not the individual points, it shouldn't matter and I'm just overthinking this.
- two of my data sets contain all reported cases of opioid prescriptions and all reported cases of opioid deaths, clustered by Rx and illicit opioids. However, because some cases go unreported, the data do not represent the entire population of either variable within the US.
- I am trying to establish an argument for a causal link between a policy intervention and the outcomes. To do this, my plan was to first utilize interrupted time series analysis, with the decade prior serving as the control and the decade after being the treatment group for each of the variables. Then I wanted to test for correlation and effect size of change over time between the following variables.
- Program utilization counts (IV) vs Rx Rates (DV)
- Rx Rates (IV) vs Rx Opioid deaths (DV1) and Illicit Opioid Deaths (DV2)
- Rx Opioid deaths (IV2) and Illicit Opioid Deaths (IV2).
But the only methods I know of either require inferential statistics or test parameters such as means.
Or would I still be able to use things like regression analysis or MANOVA/MANCOVA because I'm using specific data points over time, where each data point represents the population (i.e., the number of deaths in 2010 was x; it wasn't x +/- y)?
I have added a flow chart for the path analysis with the specific types of analysis that I had initially thought appropriate, but I am having doubts that it's right (also, ignore that it says multivariate regression for the green arrows; I simply meant analysis for 1 IV and 2 DVs). I intended to run this flow analysis for both of the 10-year periods before and after the start of the policy intervention. From there I would want to compare the results of both groups, which at this point I am assuming would be a test of significance for the correlation coefficients of the variables between the groups.
I hope all of this is clear; I am a little sleep deprived at this point. Let me know if I need to clarify anything. Thank you.
[](https://i.stack.imgur.com/vOgej.png)
| How to analyze near-total-population data | CC BY-SA 4.0 | null | 2023-04-04T06:27:11.810 | 2023-04-04T06:27:11.810 | null | null | 384753 | [
"time-series",
"correlation",
"regression-coefficients",
"structural-equation-modeling"
] |
611769 | 1 | null | null | 4 | 146 | Let $X_1 \sim U(0,1)$ and $X_2 \sim U(0,1)$, where $X_1$ and $X_2$ are independent. Then $f(x_{1}, x_{2})=1$ for ${0}\le{x_1}\le{1}, {0}\le{x_2}\le{1}$.
Let $Y_1 = \arctan(X_{2}/X_{1})$, $Y_2 = X_2$. I need to find the density function $g(y_1)$.
Here's what I've done so far.
$X_{2}/X_{1} = \tan(Y_{1})$
$X_{1} = X_{2} / \tan(Y_{1}) = X_{2}\cot(Y_{1})$
Since $X_{2}=Y_{2}$, $X_{1}=Y_{2}\cot(Y_{1})$.
The Jacobian of the transformation is given by the matrix $J = \begin{bmatrix} \frac{\partial (y_2\cot(y_1))}{\partial y_1} & \frac{\partial (y_2\cot(y_1))}{\partial y_2} \\ \frac{\partial y_2}{\partial y_1} & \frac{\partial y_2}{\partial y_2} \end{bmatrix} = \begin{bmatrix} -y_{2}\csc^{2}(y_{1}) & \cot(y_1) \\ 0 & 1 \end{bmatrix}$.
So, $\det(J)= -y_{2}\csc^{2}(y_{1})$.
At this point, I am kinda lost. I know that I need to compute $g(y_1, y_2)$, find its domain, and then integrate with respect to $y_2$. I do not understand how to find the joint density $g(y_1, y_2)$, could you please help?
| Transform bivariate uniform variable | CC BY-SA 4.0 | null | 2023-04-04T06:41:49.860 | 2023-04-04T13:43:39.360 | null | null | 254071 | [
"probability",
"self-study",
"joint-distribution",
"uniform-distribution",
"bivariate"
] |
611770 | 2 | null | 611749 | 0 | null | I am not certain what you mean by "t-test for two differences", but there can only be one difference between two means. Adding a second treatment would require at least one other mean for the second treatment sample (realistically, you would want to sample the initial control or initial treatment again to limit the influence of exogenous confounding variables).
But any time I have done a new treatment after testing a control and an earlier treatment, I either compared the new treatment to the control or to the prior treatment. In fact, both options are frequently used in pharmaceutical research, for placebo-controlled trials and for treatment-controlled trials (the latter being where the prior accepted treatment, which has known effects, is used as the comparator/control for ethical reasons, such as not denying the control group access to an effective treatment).
But, in principle, the effect size (difference in means) between the control and treatment 2 should equal the sum of the effect size for control vs treatment 1 and the effect size for treatment 1 vs treatment 2, i.e., $E_{CT_{1}}+ E_{T_{1}T_{2}} = E_{CT_{2}}$; note that this additivity holds for raw mean differences, not necessarily for standardized effect sizes.
Hope that clarifies things. If it doesn't, please try to clarify what you mean by two differences between sample means. E.g., what are your control and treatment variables? What are you trying to measure?
| null | CC BY-SA 4.0 | null | 2023-04-04T06:49:36.567 | 2023-04-04T06:49:36.567 | null | null | 384753 | null |
611771 | 2 | null | 611769 | 1 | null | The transformation from $(x_1,x_2)$ to $(y_1,y_2)$ is given by $h(x_1,x_2) = (h_1(x_1,x_2),h_2(x_1,x_2))^t = \left(\arctan(\frac{x_2}{x_1}), x_2\right)^t$. The jacobian matrix $J(x_1,x_2)$ is given by:
$$
J(x_1,x_2)
= \begin{bmatrix}
\frac{\partial h_1}{\partial x_1} & \frac{\partial h_1}{\partial x_2} \\
\frac{\partial h_2}{\partial x_1} & \frac{\partial h_2}{\partial x_2}
\end{bmatrix}
= \begin{bmatrix}
\frac{-x_2}{x_1^2 + x_2^2} & \frac{x_1}{x_1^2 + x_2^2} \\
0 & 1
\end{bmatrix},
$$
with the absolute value of the determinant $|J(x_1,x_2)| = \frac{x_2}{x_1^2 + x_2^2}$.
As mentioned by Xi'an in the comments, you can find the density probability function $g(y_1,y_2)$ of $(Y_1,Y_2)$ with the following formula:
$$
g(y_1,y_2) = \frac{f(x_1,x_2)}{|J(x_1,x_2)|}|_{(x_1,x_2) = h^{-1}(y_1,y_2)} = \frac{x_1^2 + x_2^2}{x_2}|_{(x_1,x_2) = h^{-1}(y_1,y_2)},
$$
$$
= \frac{y_2^2\cot(y_1)^2 + y_2^2}{y_2} = y_2(\cot(y_1)^2 + 1),
$$
where $f(x_1,x_2)$ is the probability density function of $(X_1,X_2)$.
Note that this works only for bijective transformations (mapping) $h(x_1,x_2)$.
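As a numerical sanity check of this density (a simulation sketch): by symmetry of the iid pair, $P(Y_1 \leq \pi/4) = P(X_2 \leq X_1) = 1/2$, and simulated draws of $Y_1 = \arctan(X_2/X_1)$ agree:

```python
import math
import random
random.seed(0)

reps = 100_000
# atan2(x2, x1) equals arctan(x2/x1) for x1 > 0 and avoids division by zero
y1 = [math.atan2(random.random(), random.random()) for _ in range(reps)]
frac = sum(v <= math.pi / 4 for v in y1) / reps
print(frac)  # close to 1/2, since Y1 <= pi/4 exactly when X2 <= X1
```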
| null | CC BY-SA 4.0 | null | 2023-04-04T07:16:25.057 | 2023-04-04T07:16:25.057 | null | null | 383929 | null |
611772 | 1 | 611780 | null | 0 | 26 | Wooldridge Introductory Econometrics: A Modern Approach (2018), pages 561 and 572, gives the following definitions:
Latent variable model (LVM):
$$
y^*=\beta_0+\mathbf{x} \boldsymbol{\beta}+e, y=1\left[y^*>0\right]
$$
where the indicator function takes the value 1 and if the event in the brackets is true and 0 otherwise. So $y$ is 1 if $y^* > 0$.
Tobit:
$$
y^*=\beta_0+\mathbf{x} \boldsymbol{\beta}+u, u \mid \mathbf{x} \sim \operatorname{Normal}\left(0, \sigma^2\right) \\
y=\max \left(0, y^*\right)
$$
where $y=\max \left(0, y^*\right)$ implies that the observed variable $y$ equals $y^*$ when $y^* \geq 0$, $0$ otherwise.
Questions:
So it seems to me that one difference is that the observed $y$ in the LVM takes the value 0 or 1, while in the Tobit model the observed $y$ takes the value 0 or any positive value. Do I have that right?
Are there any other key takeaways w.r.t to the differences between the models?
| What is the difference between a Latent variable model and a Tobit model? | CC BY-SA 4.0 | null | 2023-04-04T07:24:35.343 | 2023-04-04T08:45:01.400 | null | null | 334202 | [
"regression",
"econometrics",
"latent-variable",
"tobit-regression"
] |
611773 | 2 | null | 611707 | 6 | null | Expanding on @whuber's suggestion: let $g : [0,+\infty) \rightarrow \mathbb{R}_+$ be any positive function such that
- $\int^{+\infty}_0 g(t) dt < +\infty$,
- $\int^{+\infty}_0 tg(t) dt < +\infty$ and
- $\int^{+\infty}_0 t^2 g(t) dt = +\infty$.
Let us denote $f := x\mapsto g(\vert x \vert)$. Then $\frac{f}{2\int^{+\infty}_0 g(t)dt}$ is a symmetric distribution with finite first moment, but infinite second moment.
Such $g$'s are easy to build: for example, let $\alpha>0$, and let us denote, for each $t \in [0,+\infty)$, $g(t) := 1$ if $t \leq 1$ and $g(t) := \frac{1}{t^\alpha}$ if $t>1$. Then such a $g$ satisfies the requirements if and only if $\alpha \in (2,3]$, so there are many of such $g$'s.
| null | CC BY-SA 4.0 | null | 2023-04-04T07:26:20.060 | 2023-04-04T07:26:20.060 | null | null | 189701 | null |
611774 | 2 | null | 611755 | 1 | null | What method of numerical integration do you have in mind? Many numerical integration methods rely on the smoothness of the integrand. Random forests are not smooth. Furthermore, in most cases, numerical integration does not work well beyond 2 dimensions. Safer is Monte-Carlo integration, which works no matter what smoothness and dimension.
But trees consist of locally constant functions on axis aligned rectangles, and random forests are linear combinations of those trees. This means you can integrate analytically, you do not need approximations. The only problem is practical. Depending on the program you use to fit the model, you may not have (easy?) access to the data of the underlying trees, which you need to calculate the individual integrals.
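A toy illustration of both points: a "forest" of two stumps on $[0,1]$ can be integrated exactly from its rectangles, and Monte Carlo integration recovers the same value without any smoothness requirement:

```python
import random

# toy "forest" on [0, 1]: the average of two decision stumps
def tree1(x): return 1.0 if x < 0.5 else 3.0
def tree2(x): return 2.0 if x < 0.25 else 4.0
def forest(x): return 0.5 * (tree1(x) + tree2(x))

# exact integral: sum of (height * width) over each tree's rectangles
exact = 0.5 * (1.0 * 0.5 + 3.0 * 0.5) + 0.5 * (2.0 * 0.25 + 4.0 * 0.75)

random.seed(0)
mc = sum(forest(random.random()) for _ in range(100_000)) / 100_000
print(exact, mc)
```

A real forest would need access to each tree's split points and leaf values to do the exact version, which is the practical obstacle mentioned above.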
| null | CC BY-SA 4.0 | null | 2023-04-04T07:31:20.077 | 2023-04-04T07:31:20.077 | null | null | 8298 | null |
611775 | 1 | 611781 | null | 1 | 39 | Suppose that $X_1, X_2, X_3$ are iid random variables. I have seen the fact many times that $$\mathbb{P}(X_1<X_2<X_3)=\frac{1}{6}$$ but I want to know why every permutation of $X_1, X_2, X_3$ is equally likely, whether this is true for both the discrete and the continuous case, and whether we can generalize this fact to all $n \in \mathbb{N}, n \geq 2$. If yes, then how do we prove it?
| Permutations of iid Random Variables | CC BY-SA 4.0 | null | 2023-04-04T07:41:49.563 | 2023-04-04T08:52:15.470 | 2023-04-04T08:35:06.977 | 7224 | 376295 | [
"probability",
"distributions",
"random-variable",
"iid"
] |
611776 | 1 | null | null | 1 | 28 | I have a model with one predictor, one mediator and one outcome.
The following are the coefficients i got for my mediation analysis, but I can't understand how to make sense of them. Could someone please help explain what must be going on and how I can report these results.
The indirect effect is significant (b = 0.041, CI [0.0103, 0.0781]).
The total effect is non-significant (b = 0.016, t = 0.323, p = 0.747).
The direct effect is non-significant, with a flipped sign on the coefficient (b = -0.03, p = 0.619).
Is it valid to conduct a mediation analysis in this scenario?
How do I report my results?
P.S.: I ran my analysis with Hayes' PROCESS macro.
P.P.S.: I would really appreciate it if someone could help soon because I'm on a bit of a time crunch.
| One of the mediation models I am running has confusing results. The indirect, direct and total effect are conflicting | CC BY-SA 4.0 | null | 2023-04-04T07:45:29.333 | 2023-04-07T15:01:54.673 | null | null | 384888 | [
"regression",
"mediation"
] |
611778 | 1 | null | null | 0 | 13 | I am undertaking interrupted time series analysis and checking for autocorrelation. I use the Durbin-Watson test on data modelled using a simple OLS. Is this the correct approach, or should you use the model you plan to use eventually (such as Poisson, etc.)?
| Autocorrelation: Durbin-Watson test | CC BY-SA 4.0 | null | 2023-04-04T07:54:53.370 | 2023-04-04T07:54:53.370 | null | null | 343051 | [
"r",
"autocorrelation",
"durbin-watson-test"
] |
611779 | 1 | 612564 | null | 1 | 164 | Is there a risk of overfitting when hyperparameter tuning a model using Optuna (or another hyperparameter tuning method), with evaluation on a validation set and a large number of trials?
While a smaller number of trials may not find the best combination of parameters, could increasing the number of trials lead to the model being overfitted to the validation set? In both cases, the final model is evaluated on a test set.
| Is there a risk of overfitting when hyperparameter tuning a model | CC BY-SA 4.0 | null | 2023-04-04T07:55:34.900 | 2023-04-11T09:40:55.287 | null | null | 276238 | [
"classification",
"predictive-models",
"overfitting",
"validation",
"hyperparameter"
] |
611780 | 2 | null | 611772 | 1 | null | Your understanding is correct.
The latent-variable representation is convenient because you can express various models in this form.
The LVM in your question corresponds to a probit model. If the errors come from a standard logistic (instead of standard normal) distribution, you get a logistic regression model.
If instead of a single threshold at 0 you have multiple cutpoints at $\zeta_1<\zeta_2<\zeta_3<\ldots$, you get an ordinal probit model (or ordinal logistic if you have logistic errors).
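A small simulation sketch (with made-up coefficients) makes the observation rules concrete: the same latent draw $y^* = \beta_0 + \beta_1 x + u$ is observed as a 0/1 indicator under the probit LVM and as a censored value under the Tobit model.

```python
import random
random.seed(1)

beta0, beta1 = -0.5, 1.0  # illustrative values

draws = []
for _ in range(1000):
    x = random.uniform(-2, 2)
    y_star = beta0 + beta1 * x + random.gauss(0, 1)  # latent variable
    y_probit = 1 if y_star > 0 else 0  # LVM/probit: only the sign is observed
    y_tobit = max(0.0, y_star)         # Tobit: y* itself is observed when positive
    draws.append((y_probit, y_tobit))
```

For every draw, the probit observation is 1 exactly when the Tobit observation is positive, which shows that the Tobit observation carries strictly more information.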
| null | CC BY-SA 4.0 | null | 2023-04-04T08:45:01.400 | 2023-04-04T08:45:01.400 | null | null | 238285 | null |
611781 | 2 | null | 611775 | 0 | null | It is a matter of notations, nothing more. For instance, define$$Y_1=X_2,Y_2=X_3,Y_3=X_1$$
Then
$$(Y_1,Y_2,Y_3)\sim(X_1,X_2,X_3)$$
(meaning both triplets share the same distribution) and
$$\mathbb{P}(X_1<X_2<X_3)=\mathbb{P}(Y_1<Y_2<Y_3)=\mathbb{P}(X_2<X_3<X_1)$$
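A quick simulation illustrates the exchangeability argument: each of the $3! = 6$ orderings of three iid continuous draws occurs with frequency close to $1/6$:

```python
import random
from itertools import permutations
random.seed(0)

reps = 60_000
counts = {p: 0 for p in permutations(range(3))}
for _ in range(reps):
    x = [random.random() for _ in range(3)]
    order = tuple(sorted(range(3), key=x.__getitem__))  # indices, smallest first
    counts[order] += 1

for p, c in sorted(counts.items()):
    print(p, round(c / reps, 3))  # each frequency near 1/6
```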
| null | CC BY-SA 4.0 | null | 2023-04-04T08:52:15.470 | 2023-04-04T08:52:15.470 | null | null | 7224 | null |
611782 | 1 | null | null | 0 | 24 | Consider $S_n = \sum_{i = 1}^n b_{i,n} X_{i,n}$ where
the $X_{i,n}$ are random variables that are neither independent nor identically distributed, and the $b_{i,n}$ are weights satisfying the Lindeberg condition. I managed to prove that if $E[|X_{i,n}|^s]< \infty$ for all integers $s$, then $S_n$ converges to a Gaussian random variable $G$.
Now, I need to relax the assumption that all the moments exist. I read that this could be done via a sum of truncated random variables $Y_n$, by proving that
$$
\text { if } S_n \rightsquigarrow G \text { and } d\left(S_n, Y_n\right) \stackrel{\mathrm{P}}{\rightarrow} 0 \text {, then } Y_n \rightsquigarrow G \text {; }
$$
However, I have no idea how to formulate the $Y_n$ to make this argument work. Has anyone ever heard of this technique?
| Central limit theorem : relaxing assumption of all finite moments | CC BY-SA 4.0 | null | 2023-04-04T08:55:41.707 | 2023-04-04T08:55:41.707 | null | null | 365245 | [
"convergence",
"central-limit-theorem",
"moments",
"truncated-distributions"
] |
611783 | 1 | null | null | 1 | 26 | I have several time series couples from which I compute a cointegration p-value, then sort these couples by that p-value, starting from the lowest (for further VECM analysis on the top 100).
Not all couples have the same length: some assets start after others, and all finish at the same time (now), leading to time series of different lengths. To be precise, the two time series within the same couple always have the same length, since otherwise computing cointegration would not be possible. When I find a couple whose time series have different numbers of observations (different starting dates), I crop the longer time series to match the other and "align" them so that all points correspond in time.
We know that the p-value shrinks as the sample size increases if H0 is false. I confirmed this by generating a cointegrated pair and computing its p-value for sample sizes of 100, 10k and 1M, which led to very different p-values; this is expected, since the time series are known to be cointegrated (H0 false).
From that context you may already guess the question :
Is it relevant to sort the cointegration p-values of time series couples of different lengths, knowing that the p-value is "underestimated" when the length is smaller? If not, how can I compute a cointegration "score" that takes both the p-value and the respective sample size into account, enabling me to sort my couples on a homogeneous scale? Or is this not relevant at all, and should I always use the same length for all my time series?
To be precise, in my case the sample is still large for all couples: it ranges from about 5k to 15k points.
| Correct cointegration p-value by sample size? | CC BY-SA 4.0 | null | 2023-04-04T09:04:59.723 | 2023-04-04T09:04:59.723 | null | null | 372184 | [
"p-value",
"sample-size",
"cointegration"
] |
611784 | 2 | null | 611699 | 0 | null |
## An intervention on $X$ removes incoming edges to $X$
What you describe is broadly correct. One thing to be careful about is that performing the intervention $\text{do}(X=x)$ normally also implies removing all edges into $X$, such that its value is determined exclusively by the intervention.
## $\text{do}(X=x)$ comes before $M=m$
The general idea of sampling from a DAG is that first all exogenous values (noise parameters) are sampled, and then their values are propagated through the DAG. An intervention is like an exogenous value – the intervened-upon variable takes the intervention value (and nothing else), which is propagated through the DAG.
You would not know what $m$ the variable $M$ takes before the propagation, so there isn't a way to "restrict to $M=m$" before having already sampled by propagating the exogenous variables (including the intervention) through the DAG.
I would recommend to think about it as rejection sampling: You sample many times from the DAG with intervention $\text{do}(X=x)$ and keep all instances where $M=m$.
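A minimal sketch of this rejection-sampling view on a toy chain $X \to M \to Y$ (all conditional probabilities below are made up for illustration):

```python
import random
random.seed(0)

def sample_with_do(x_value):
    """Sample (X, M, Y) from a toy chain X -> M -> Y under do(X = x_value):
    X's own mechanism is cut and replaced by the intervention value."""
    x = x_value
    m = 1 if random.random() < (0.8 if x == 1 else 0.2) else 0
    y = 1 if random.random() < (0.9 if m == 1 else 0.1) else 0
    return x, m, y

# estimate P(Y = 1 | do(X = 1), M = 1): keep only draws where M = 1
kept = [y for _ in range(50_000) for (_, m, y) in [sample_with_do(1)] if m == 1]
print(sum(kept) / len(kept))  # approaches P(Y = 1 | M = 1) = 0.9 by construction
```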
| null | CC BY-SA 4.0 | null | 2023-04-04T09:08:46.637 | 2023-04-04T09:08:46.637 | null | null | 250702 | null |
611785 | 1 | null | null | 0 | 20 | I have run a Friedman's test in R, followed by pairwise Wilcoxon tests, which gave me the following result:
```
> pwc <- psheetv2 %>%
+ wilcox_test(score ~ time, paired = TRUE, p.adjust.method = "bonferroni")
> pwc
# A tibble: 3 × 9
.y. group1 group2 n1 n2 statistic p p.adj p.adj.signif
* <chr> <chr> <chr> <int> <int> <dbl> <dbl> <dbl> <chr>
1 score bvigilant pvigilant 67 64 28 0.000855 0.003 **
2 score bvigilant rvigilant 67 51 43 0.012 0.037 *
3 score pvigilant rvigilant 64 51 177 0.105 0.315 ns
```
But despite there being significant differences between the groups, all of the group medians are 0. How do I work out the direction of the effect if all the medians are the same? Have I done this wrong? Any help would be appreciated, thank you!
| Why is my result significant when there is no difference between means? | CC BY-SA 4.0 | null | 2023-04-04T09:13:22.813 | 2023-04-04T09:13:22.813 | null | null | 384825 | [
"r",
"hypothesis-testing",
"friedman-test"
] |
611786 | 1 | null | null | 0 | 35 | I'm trying to analyse bullying experiences across three age groups. The DV is scored on a 5-point Likert scale, and the IV is categorical (ages 11, 13, and 15).
Initially I ran an ANOVA to see if there was a significant difference in bullying experiences across the three age groups. The results came back non-significant with a very small effect size. I re-ran it as a Kruskal-Wallis test because the data are ordinal, and again found no significance and a very small effect size. Finally, I tried three Mann-Whitney U tests with Bonferroni corrections, and found the same. Normally I'd call it quits there and accept the results, but when I look at my table of means, there are quite substantial differences between the three groups.
I'll summarise one example below:
|Age cat. |Mean |SD |N |
|--------|----|--|-|
|11-yrs |1.88 |1.18 |49 |
|13-yrs |2.38 |1.65 |64 |
|15-yrs |2.62 |1.62 |58 |
Would it not be logical to assume there's some difference between 1.88 and 2.62 when the scale only runs to 5?
My main question here is whether the large standard deviations can be responsible for this inconsistency.
In this example 32/171 participants gave the highest score of 5, so it didn't seem like an outlier.
| Can high standard deviations explain my non-significant & low effect size results? (please read description) | CC BY-SA 4.0 | null | 2023-04-04T09:14:51.673 | 2023-04-04T11:49:31.433 | null | null | 384893 | [
"standard-deviation",
"outliers",
"effect-size"
] |
611789 | 1 | null | null | 0 | 33 | I have built a generalised linear mixed effects model fitted with a Gamma distribution. I want to compare this experimental model to a nested null model to see whether it is a better fit for the data.
Here is the experimental model for illustration:
```
fpMod_gamma <- lme4::glmer(reactionTimes ~ Condition + typingStyle + Condition:typingStyle + depression
+ Condition:depression + Order + Condition:Order + (1 | stimulus) +
(1 + Condition | participant),
family = Gamma(link = "log"),
nAGQ = 1,
glmerControl(optimizer="bobyqa", optCtrl = list(maxfun = 500000)),
data = fpData)
```
And here is the nested null model:
```
fpMod_gamma_rl_null <- lme4::glmer(reactionTimes ~ 1 + (1 | stimulus) +
(1 + Condition | participant),
family = Gamma(link = "log"),
nAGQ = 1,
glmerControl(optimizer="bobyqa", optCtrl = list(maxfun = 500000)),
data = fpData)
```
I then run `anova(fpMod_gamma, fpMod_gamma_rl_null)` to compare these models. However, the results suggest the null model is a significantly better fit for the data. I have significant effects in the output of my experimental model and I'm a bit confused about what this means for these significant effects - are they still valid? What conclusions can I draw from the experimental model bearing this in mind?
Here is the output from the anova:
```
npar AIC BIC logLik deviance Chisq Df Pr(>Chisq)
fpMod_gamma_rl_null 18 94932 95055 -47448 94896
fpMod_gamma_rl 47 94938 95258 -47422 94844 52.369 29 0.004957 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
Any help interpreting this is very much appreciated.
| Null model is a better fit for the data compared to experimental model? | CC BY-SA 4.0 | null | 2023-04-04T10:01:45.517 | 2023-04-04T10:01:45.517 | null | null | 357969 | [
"r",
"hypothesis-testing",
"inference",
"glmm",
"likelihood-ratio"
] |
611790 | 1 | 611798 | null | 3 | 124 | We have a manufacturing process in which the finished products have the following requirements: individual unit must have a weight within ± 10% of average weight (test with 10 random units).
One of the in-process quality control tests is: take 10 units and determine their average weight (X) without measuring the individual units. Let's say this test is repeated 100 times throughout the day.
So each measurement X is the average of 10 units. From the data of X1, X2,...X100, we can calculate the mean of X and its standard deviation (standard error in this case since X is itself an average).
My question is: can you estimate the standard deviation and range of the population (individual units - 1 million in total), and the probability of passing the required finished-product tests, from these data, and if so, how? Can you use the formula SD = SE * sqrt(n) (in this case n = 10)?
If anyone can point me to documentation or guides, I'd be very thankful.
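To check my own reasoning, here is a quick simulation sketch (the true unit-level SD of 3, the batch structure, and the normal distribution are assumptions for illustration): if the units in each batch behave like independent draws from one stable population, then the SD of the 10-unit averages times sqrt(10) should recover the unit-level SD.

```python
import numpy as np

rng = np.random.default_rng(42)

unit_sd = 3.0        # true SD of individual units (assumed)
n_units = 10         # units averaged per measurement
n_batches = 100_000  # many repeats so the estimate is stable

# Each X is the average of 10 individual unit weights.
units = rng.normal(loc=100.0, scale=unit_sd, size=(n_batches, n_units))
X = units.mean(axis=1)

se = X.std(ddof=1)                   # SD of the averages = standard error
sd_estimate = se * np.sqrt(n_units)  # back out the unit-level SD
```

Note that this only works if the units within a batch are uncorrelated; if there is extra between-batch variation (machine drift, shifts, etc.), the simple formula would understate the unit-level spread.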
| How to estimate standard deviation from standard error? | CC BY-SA 4.0 | null | 2023-04-04T10:05:53.790 | 2023-04-04T13:18:52.100 | 2023-04-04T13:18:52.100 | 384895 | 384895 | [
"standard-deviation",
"standard-error"
] |
611791 | 2 | null | 611752 | 1 | null | You can only know which assumptions are violated by first fitting your linear models and then checking whether the model assumptions of linear regression are satisfied for each of them. A common mistake is that some researchers check only the distribution of individual variables (whether the relevant variables are normally distributed or not). However, what is essential is to check whether the relationships between the dependent and independent variables, on which the model is built, are linear in nature. In other words, you should check the MODEL assumptions, not assumptions about individual variables. These two sources give detailed explanations:
Pek, J., Wong, O., & Wong, A. C. M. (2018). How to address non-normality: A taxonomy of approaches, reviewed, and illustrated. Frontiers in Psychology, 9, 1–17. [https://doi.org/10.3389/fpsyg.2018.02104](https://doi.org/10.3389/fpsyg.2018.02104)
Osborne, J. W., & Waters, E. (2002). Four assumptions of multiple regression that researchers should always test. Practical Assessment, Research and Evaluation, 8(2), 1–5. [https://doi.org/doi.org/10.7275/r222-hv23](https://doi.org/doi.org/10.7275/r222-hv23)
| null | CC BY-SA 4.0 | null | 2023-04-04T10:28:26.083 | 2023-04-04T10:28:26.083 | null | null | 384050 | null |
611792 | 1 | null | null | 0 | 34 | From what I've seen, it is common practice in Deep Reinforcement Learning to standardize certain data. By standardization, I refer to the process of subtracting the mean and dividing by the standard deviation. (Certain [Reinforcement Learning projects](https://github.com/openai/spinningup/blob/038665d62d569055401d91856abb287263096178/spinup/algos/pytorch/ppo/ppo.py#L74) refer to this as normalization, and I also encountered the term Z-standardization.) One example is the [REINFORCE algorithm example code](https://github.com/pytorch/examples/blob/main/reinforcement_learning/reinforce.py#L70) from the Pytorch examples:
```
returns = (returns - returns.mean()) / (returns.std() + eps)
```
The effects are clear: scaling data to have the mean of zero and a variance of one.
However, in one project I encountered the mean subtraction being omitted. (It was long ago, so I cannot provide a link.) This is something I have not seen anywhere else.
Are there cases where omitting the mean subtraction - thus only dividing by the standard deviation - is beneficial in the field of machine learning? If yes, what are those? I would be particularly interested in Deep Learning.
| Uses of data standardization without subtracting mean | CC BY-SA 4.0 | null | 2023-04-04T10:31:08.907 | 2023-04-04T10:45:49.377 | 2023-04-04T10:45:49.377 | 22047 | 308720 | [
"machine-learning",
"neural-networks",
"reinforcement-learning",
"standardization"
] |
611793 | 1 | null | null | 0 | 16 | I have around 33 variables and 300 observations, although some of the variables have missing data.
I would like to obtain the best subset of variables that separates the three categories in a multidimensional space. I thought about applying some form of discriminant analysis, but as there are some missing data (I would rather not perform imputation), I am having some trouble.
How would you recommend obtaining the best subset of variables to separate these three groups?
| Best subset of variables to perform discriminant analysis | CC BY-SA 4.0 | null | 2023-04-04T10:38:45.707 | 2023-04-04T10:38:45.707 | null | null | 326551 | [
"feature-selection",
"discriminant-analysis"
] |
611794 | 1 | 611842 | null | 1 | 93 | We have data $X_1, \dots, X_n$ which are i.i.d copies of $X$. Where we denote $\mathbb{E}[X] = \mu$, and $X$ has finite variance.
We define the truncated sample mean:
$\begin{align}
\hat{\mu}^{\tau} := \frac{1}{n} \sum_{i =1}^n \psi_{\tau}(X_i)
\end{align}$
Where the truncation operator is defined as:
$\begin{align}
\psi_{\tau}(x) = (|x| \wedge \tau) \; \text{sign}(x), \quad x \in \mathbb{R}, \quad \tau > 0
\end{align}$
The bias for this truncated estimator is then defined as:
Bias $:= \mathbb{E}(\hat{\mu}^{\tau}) - \mu$
And I saw the inequality:
$\begin{align}
|\text{Bias}| = |\mathbb{E}[(X - \text{sign}(X)\tau) \mathbb{I}_{\{|X| > \tau\}}]| \leq \frac{\mathbb{E}[X^2]}{\tau}
\end{align}$
But I am not sure how this was derived.
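While I cannot see the derivation, a quick Monte Carlo sanity check (a sketch; the lognormal distribution and $\tau = 2$ are arbitrary choices for illustration, picked so the bias is visibly non-zero) does confirm the inequality numerically:

```python
import numpy as np

rng = np.random.default_rng(1)

# An asymmetric example distribution (an assumption for illustration),
# so that the truncation bias is clearly non-zero.
X = rng.lognormal(mean=0.0, sigma=1.0, size=2_000_000)
tau = 2.0

psi = np.clip(X, -tau, tau)   # psi_tau(x) = (|x| ∧ tau) * sign(x)
bias = psi.mean() - X.mean()  # Monte Carlo estimate of E[psi_tau(X)] - mu
bound = (X**2).mean() / tau   # Monte Carlo estimate of E[X^2] / tau
```

Here the truncation pulls the mean down (bias is negative), and its magnitude stays well under $\mathbb{E}[X^2]/\tau$.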
| Proving upper bound for Bias of truncated sample mean | CC BY-SA 4.0 | null | 2023-04-04T10:39:48.147 | 2023-04-04T15:57:15.103 | 2023-04-04T13:29:29.053 | 283493 | 283493 | [
"probability",
"mathematical-statistics",
"robust",
"probability-inequalities",
"bias-variance-tradeoff"
] |
611795 | 1 | null | null | 1 | 21 | I am comparing calcium intake among 5 groups and want to test whether their knowledge of calcium is associated with a higher intake. I have asked 3 questions to assess knowledge e.g. do they know the RDA for calcium, do they know why we need calcium etc. Is a 3 way anova correct? The group sizes are unequal.
### Details
I carried out a food frequency questionnaire and I have tallied all that up and carried out a one way anova test to compare the means between the 5 groups. I am happy with that. I now want to see whether there is an association between higher intakes and knowledge of calcium. I have 3 questions. 2 of which are likert style ie of they say strongly agree with the two statements then it shows they have good knowledge. I have coded the answers and transferred into excel. Do I just carry out individual one way anova tests for each question?
| I want to test whether knowledge of calcium is associated with higher dietary intakes between 5 groups | CC BY-SA 4.0 | null | 2023-04-04T10:47:23.103 | 2023-04-06T13:36:48.180 | 2023-04-06T13:36:37.173 | 919 | 384897 | [
"hypothesis-testing",
"anova"
] |
611796 | 2 | null | 465720 | 1 | null | Pearson correlation as a loss function presents some problems. In particular, $\text{corr}(y, \hat y)=\text{corr}(y,a+b\hat y)$ for any real $a$ and positive $b$, so $y=(1,2,3)$ and $\hat y=(105, 205, 305)$ have perfect correlation, yet the predictions are terrible.
If you are in a situation where this is acceptable, perhaps Pearson correlation could work for you, but you should be aware of this issue.
The comment in one of your links about $R^2$, at least for the definition given below, is not correct: such an equation absolutely catches the poor quality of the above predictions $\hat y$ of $y$. Please see my simulations [here](https://stats.stackexchange.com/a/584562/247274), but it makes sense, since the formula below is just a monotonic transformation of the MSE.
$$
R^2=1-\left(\dfrac{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\hat y_i
\right)^2
}{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i-\bar y
\right)^2
}\right)
$$
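A minimal sketch illustrating both points with the example numbers above:

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0])
y_hat = np.array([105.0, 205.0, 305.0])  # the terrible predictions above

# y_hat is an affine function of y, so the correlation is perfect ...
corr = np.corrcoef(y, y_hat)[0, 1]

# ... while the MSE-based R^2 is hugely negative, flagging the bad fit.
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```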
| null | CC BY-SA 4.0 | null | 2023-04-04T10:51:31.310 | 2023-04-04T10:51:31.310 | null | null | 247274 | null |
611797 | 1 | null | null | 1 | 24 | I previously tried forecasting using a copula, where the copula chosen was the Frank copula. I generated random variates as simulated data and used those random variates for forecasting: I transformed the simulated data back to the original margins by multiplying by the standard deviation of the random variates, and obtained the forecasts that way. However, the forecasting results show a MAPE of more than 30%. I'm not sure whether my method is correct, or whether there is another way to forecast with a copula-based analysis.
I would appreciate answers that include R or Python code. Thank you.
| How to forecast using copula | CC BY-SA 4.0 | null | 2023-04-04T10:52:27.370 | 2023-04-04T10:52:27.370 | null | null | 373007 | [
"r",
"forecasting",
"simulation",
"copula"
] |
611798 | 2 | null | 611790 | 4 | null | I would personally use the following estimator of the variance for your problem:
$$
\hat{\sigma}^2 = \hat{\sigma}^2_L + \hat{\sigma}^2_E
$$
- Empirical mean: $\bar{X} := \frac{1}{nL} \sum_{j = 1}^L \sum_{i = 1}^n X_{ij}$
- Within-location variance: $\hat{\sigma}^2_E = \frac{1}{L(n-1)} \sum_{j = 1}^L \sum_{i = 1}^n \left(X_{ij} - \bar{X}_j \right)^2 = MS_E$
- Between-location variance: $\hat{\sigma}^2_L = \frac{1}{L-1} \sum_{j = 1}^L \left( \bar{X}_j - \bar{X} \right)^2 - \frac{1}{n}\hat{\sigma}^2_E = \frac{1}{n}(MS_L - MS_E)$
- $MS_L = \frac{n}{L-1} \sum_{j = 1}^L \left( \bar{X}_j - \bar{X} \right)^2$
- $\bar{X}_j = \frac{1}{n} \sum_{i = 1}^n X_{ij}$
You can use tolerance intervals for example to determine with a given confidence level what is the probability to pass your requirements.
You can find the previous definition in Anand M. Joglekar. Statistical Methods for Six Sigma. John Wiley and Sons Inc. 2003 on pages 191-193.
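A numerical sketch of this estimator (the simulated weights, the batch structure, and the true variance components are assumptions for illustration), given individual measurements $X_{ij}$ grouped into $L$ batches of $n$ units:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated example: L batches of n units, with a true between-batch SD
# of 2 (variance 4) and a true within-batch SD of 3 (variance 9).
L, n = 100, 10
batch_means = rng.normal(50.0, 2.0, size=L)
X = batch_means[:, None] + rng.normal(0.0, 3.0, size=(L, n))  # X[j, i]

xbar_j = X.mean(axis=1)  # per-batch means
MS_E = ((X - xbar_j[:, None]) ** 2).sum() / (L * (n - 1))
MS_L = n * ((xbar_j - X.mean()) ** 2).sum() / (L - 1)

sigma2_E = MS_E                    # within-batch variance estimate
sigma2_L = (MS_L - MS_E) / n       # between-batch variance estimate
sigma2_total = sigma2_L + sigma2_E  # estimated unit-level variance
```

The plug-in total should land near the true unit-level variance of $4 + 9 = 13$ here.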
| null | CC BY-SA 4.0 | null | 2023-04-04T11:07:00.720 | 2023-04-04T11:07:00.720 | null | null | 383929 | null |
611802 | 2 | null | 611786 | 1 | null | The first thing I am seeing is that your variances and sample sizes are not equal between groups. One or the other must be equal (or close to it), and the data must be normally distributed. I would suggest running Bartlett's test to confirm the variances are sufficiently similar. But your sample sizes are pretty different: the largest is 30.6% larger than the smallest. [How to Perform an ANOVA with Unequal Sample Sizes](https://www.statology.org/anova-unequal-sample-size/) [2 sample T-Test for Unequal Variances](https://real-statistics.com/students-t-distribution/two-sample-t-test-uequal-variances/) [Bartlett's Test](https://www.statology.org/bartletts-test/)
But to answer your question, significance is somewhat impacted by the SD, since the standard error is SD/sqrt(n); but given that the SD is an intrinsic property of the sampled population, it's not something you can change. To increase the likelihood of significance, you need to increase the sample size. However, it's also entirely possible that the groups are just not significantly different. As you didn't provide the test statistics, it's hard to say if there's an error. Regardless, I would first rule out any issues relating to the unequal variances and sample sizes. If they aren't equal, use the non-parametric equivalent test (the [kruskal-wallis-test (non-parametric version of ANOVA)](https://www.statology.org/kruskal-wallis-test/)).
One last thing: the number of people who selected 5 may not make it an outlier, but at nearly 20% of the total sample, I suspect the distribution doesn't resemble a normal distribution (not that you can technically have normal distributions with Likert scales). If the distribution is skewed, try [Levene's Test](https://datatab.net/tutorial/levene-test).
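As a sketch of these checks in Python (the simulated 5-point responses are placeholders; substitute the real group data, here using the question's group sizes):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 1-5 responses with the question's unequal group sizes.
g11 = rng.integers(1, 6, size=49)
g13 = rng.integers(1, 6, size=64)
g15 = rng.integers(1, 6, size=58)

bart = stats.bartlett(g11, g13, g15)  # checks equality of variances
kw = stats.kruskal(g11, g13, g15)     # rank-based alternative to ANOVA
```

Each result object carries a `statistic` and a `pvalue` to report.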
| null | CC BY-SA 4.0 | null | 2023-04-04T11:31:34.130 | 2023-04-04T11:49:31.433 | 2023-04-04T11:49:31.433 | 384753 | 384753 | null |
611803 | 2 | null | 610924 | 3 | null |
#### Bear in mind some general deficiencies of hypothesis tests of "nowhere dense" sets
To give some context to the many intelligent answers I expect this question will attract, I'm going to start by pointing out a big disadvantage of all tests of this general class, including the KS test, the AD test, and many other variants that test whether or not data comes from a specific distribution or parametric distributional family. These are all tests of a null hypothesis that is ["nowhere dense"](https://en.wikipedia.org/wiki/Nowhere_dense_set), meaning that we are testing an extremely specific class of distributions in the null hypothesis compared to a much larger class of distributions in the alternative hypothesis. In most cases where we have data from a process, there may be some theoretical reason to think that a particular distributional form might hold approximately, but it is extremely rare to have reason to believe that a narrow distributional form would hold exactly. Even if we have good theoretical reason to think that a certain distributional form would hold (e.g., mean of IID data leading to a normal distribution via the CLT), the assumptions made in the theoretical derivation are usually only approximations to reality, and so the exact distribution at issue is usually still slightly different to the theoretical distribution. Typically this means that the null hypothesis is "always false" and so the test effectively becomes one where the p-value will converge to zero with enough data.
This is the primary problem that motivated the [ASA statement on p-values](https://www.tandfonline.com/doi/full/10.1080/00031305.2016.1154108) and the general aversion that the statistics community has to classical hypothesis tests with a point-null hypothesis. Many statisticians are averse to this type of test because it is a test of a hypothesis that is so specific that it is a priori impossible, and so if the null hypothesis is not rejected, that is only because we lack enough data for adequate test power. For this reason, many statisticians prefer to use interval estimation methods like confidence intervals and credibility intervals that give an estimate for a range of values of the unknown quantity/parameter/distribution of interest, rather than testing against a specific case. The KS test, AD test, etc., are not easily amenable to conversion to a confidence interval over the function space of possible distribution functions. It is possible to get pointwise confidence bands using reasoning analogous to the KS test, but these usually only give confidence intervals for specific points, rather than confidence sets over the space of distribution functions.
You can of course draw distinctions between distributional tests of this general class, in terms of their power function against specific kinds of alternatives. As you've pointed out in your question, some of the tests are more powerful against certain kinds of alternatives than others, and it is possible to do a deep dive into this by comparing alternative tests with simulation analysis, etc. While this is possible ---and it is useful in understanding the relative merits of all these tests--- this does not get past the primary problem that occurs when we test a point-null hypothesis or a "nowhere dense" hypothesis in a large space. If we test a hypothesis that is so specific that it is "always false" then the test operates as it should, rejecting the null with enough data. The test therefore becomes primarily a test of how much data we have, which we already know.
It is worth noting that it is possible to amend any test of a point-null hypothesis or a "nowhere dense" hypothesis by imposing a non-zero "tolerance" for deviation from the stipulated class within the null hypothesis and amending the test statistic accordingly. With a bit of work the KS test can be amended in this way, as can the AD test and other distributional tests. This solves the problem of testing against a "nowhere dense" region (and is how I would recommend dealing with these types of tests), but it then means that there is some additional arbitrariness in how large you make your "tolerance" in the null hypothesis. Confidence intervals and other region-based estimators sidestep this deficiency in the first place by looking for a region-based estimator of the unknown object of interest rather than looking at the level of evidence of deviation of a stipulated set of values of that object.
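A small simulation sketch of this "test of sample size" effect (the choice of a $t_{10}$ distribution as the slightly-non-normal truth is an arbitrary illustration): the KS p-value against a moment-matched normal drifts toward zero as $n$ grows, even though the deviation from normality is tiny and fixed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Truth: t with 10 df -- close to normal, but H0 "exactly normal" is false.
sd = np.sqrt(10 / 8)  # the SD of a t(10) variable
pvals = {}
for n in (100, 10_000, 1_000_000):
    x = rng.standard_t(df=10, size=n)
    # KS test against a normal with matching mean and SD.
    pvals[n] = stats.kstest(x, "norm", args=(0.0, sd)).pvalue
```

At small $n$ the fixed deviation is invisible; at large $n$ the same deviation is rejected overwhelmingly.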
| null | CC BY-SA 4.0 | null | 2023-04-04T11:31:43.413 | 2023-04-04T11:31:43.413 | null | null | 173082 | null |
611804 | 1 | null | null | 0 | 24 | I am looking for an intuition on why ANOVA indicates that the difference between two variables is significant although I see that the variance within each variable is quite large.
This is how the mean and error bars showing the standard deviation look like for the two variables:
[](https://i.stack.imgur.com/XwR3H.png)
And the one-way ANOVA for the two samples (using this library [https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.f_oneway.html](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.f_oneway.html)) indicates:
F-statistic: 9.584232820104393 p-value: 0.0020408541413054304
Do you think the result looks odd?
| ANOVA showing significance despite large variance within variables | CC BY-SA 4.0 | null | 2023-04-04T11:42:32.927 | 2023-04-04T11:42:32.927 | null | null | 123789 | [
"statistical-significance",
"anova"
] |
611805 | 1 | null | null | 1 | 98 | I am working on hourly weather data. It contains four features: rain, wind speed, humidity, and temperature, all of which are continuous. There are around 17,000 records. Apart from the highly skewed precipitation (almost 90% of values are zero), all parameters are roughly normally distributed. To cope with the skewness, I added one to all precipitation records and then took the log, which improved the skewness a bit, but it is still highly skewed.
After performing various preprocessing steps (standardization, PCA, ...), I want to determine the optimal number of clusters for KMeans. I tested the silhouette score, the gap statistic, the elbow method, and the Calinski-Harabasz index, and the numbers of clusters they identified were 2, 6, 8, and 3, respectively.
It seems clear that the silhouette and Calinski-Harabasz results are not correct, because it is implausible that there are only 2 or 3 types of weather conditions.
But, what about Elbow and Gap statistics? How can I determine the right number of clusters? 6 or 8?
EDIT
[](https://i.stack.imgur.com/P1dRO.jpg)
[](https://i.stack.imgur.com/hIawK.jpg)
[](https://i.stack.imgur.com/ZZj4v.jpg)
[](https://i.stack.imgur.com/oiiEv.jpg)
Edit 2
Although the elbow method is more primitive than the others, its suggestion seems much closer to real-world conditions. The question arose because we always need a way to compare the results of the gap statistic (7 clusters) and the elbow method (8 clusters); many clustering techniques require the number of clusters as input. In this case we were lucky that silhouette and CH gave clearly unacceptable results, but what if elbow = 8, silhouette = 9, gap = 10, and CH = 11? In a real project, would you again prescribe the same approach, "be open to the answer being 'k-means cannot cluster this data set well'", or would you come up with another solution?
The main value of this question could be in sharing experiences and approaches from others who have faced a similar issue.
| Elbow method Vs Gap statistics, which one? challenging for data scientist | CC BY-SA 4.0 | null | 2023-04-04T11:51:11.677 | 2023-04-12T03:26:45.917 | 2023-04-12T03:26:45.917 | 310292 | 310292 | [
"python",
"clustering",
"k-means"
] |
611806 | 1 | null | null | 0 | 23 | I have created a model using glmmTMB, and as I am interpreting the results I am having trouble with the intercept. Please let me know if I am going in the right direction.
I have three types of element_a and four distances, so as I understand it, the intercept or reference level for my model consists of element_aCT1 and distances D1, D2, D3, D4, and additionally element_aHL2 at distance D1 and element_aWL at distance D1. So to interpret the effect size and direction, should I calculate the value of the reference level and then compare it to the other estimates?
Also, could someone explain why these factors are taken as the intercept (reference level), and if and how I should change them if needed?
So if I say it like this, is it correct?:
The average abundance of that combination is 2.64, so looking at the model, distance D2 and element_aWL2 had a positive significant effect (estimate 0.91) (the mean value of that combination is 1.72) as the abundance decreased?
Here is my analysis, and thank you for your help :)
```
Family: poisson ( log )
Formula: Abundace~ element_a + distance + element_a * distance +
(1 | sampling.round) + (1 | day_count) + (1 | LS)
Data: Life_abundace1
AIC BIC logLik deviance df.resid
966.6 1017.3 -468.3 936.6 201
Random effects:
Conditional model:
Groups Name Variance Std.Dev.
sampling.round (Intercept) 0.05233 0.2288
day_count (Intercept) 3.32198 1.8226
LS (Intercept) 0.57123 0.7558
Number of obs: 216, groups: sampling.round, 3; day_count, 13; LS, 18
Conditional model:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.7480 0.6922 1.081 0.279918
element_aHL2 -1.4288 0.5392 -2.650 0.008052 **
element_aWL2 -0.2888 0.4766 -0.606 0.544524
distanceD2 -0.4925 0.1712 -2.877 0.004009 **
distanceD3 -1.5041 0.2472 -6.084 1.17e-09 ***
distanceD4 -1.3218 0.2297 -5.753 8.75e-09 ***
element_aHL2:distanceD2 0.5253 0.3080 1.705 0.088154 .
element_aWL2:distanceD2 0.9179 0.2331 3.938 8.23e-05 ***
element_aHL2:distanceD3 1.3218 0.3667 3.605 0.000312 ***
element_aWL2:distanceD3 0.8698 0.3238 2.686 0.007225 **
element_aHL2:distanceD4 1.8741 0.3247 5.772 7.82e-09 ***
element_aWL2:distanceD4 0.4994 0.3200 1.561 0.118633
---
```
[](https://i.stack.imgur.com/psP2R.jpg)
| glmmTMB missing values in summary output | CC BY-SA 4.0 | null | 2023-04-04T11:57:49.177 | 2023-04-04T12:05:22.107 | 2023-04-04T12:05:22.107 | 370528 | 370528 | [
"mixed-model",
"interpretation",
"descriptive-statistics",
"glmmtmb"
] |
611807 | 1 | null | null | 0 | 26 | I have Random Forest (RF) regression task where my goal is to predict a coarse resolution raster to a finer spatial scale (from 400m to 100m pixel size). Also, I have a set of predictors at the fine spatial scale (100m). All my variables are continuous rasters. In order to create and fine-tune the RF model:
- I resample the predictors to 400m
- I split the data set into training and test set (80% and 20%)
- Then, I fine tune the model using the training set and I validate it using the test set
- Finally, I make the prediction of my response variable using the tuned RF model.
I would like your thoughts about the steps I am following. Is it okay that I fine tune the RF model at the coarse spatial scale before I make my predictions at the finer scale? If not, could you make some recomendations on how to proceed? I am using `randomForest` and `caret` in `R`. Below is the code:
```
library(terra)
library(caret)
library(doParallel)
library(randomForest)
wd = "path/"
# df with the resposnse and predictors at the coarse spatial scale
block.data = read.csv(paste0(wd, "block.data.csv"))
set.seed(123)
samp <- sample(nrow(block.data), 0.8 * nrow(block.data))
train <- block.data[samp, ]
test <- block.data[-samp, ]
no_cores <- detectCores() - 1
cl = makePSOCKcluster(no_cores)
registerDoParallel(cl)
# define the control
trControl = trainControl(method = "cv",
number = 10,
search = "grid")
rf_default = train(response ~ .,
data = train,
method = "rf",
metric = "Rsquared",
trControl = trControl)
print(rf_default)
# Search best mtry
set.seed(234)
tuneGrid <- expand.grid(.mtry = c(1:8))
rf_mtry <- train(response ~ .,  # same response as the other fits (was "ntl")
data = train,
method = "rf",
metric = "Rsquared",
tuneGrid = tuneGrid,
trControl = trControl,
importance = TRUE,
nodesize = 5,
ntree = 500)
print(rf_mtry)
best_mtry <- rf_mtry$bestTune$mtry
store_maxtrees <- list()
for (ntree in c(500, 1000, 1500, 2000, 2500, 3000)) {
set.seed(345)
rf_maxtrees <- train(response ~ .,
data = train,
method = "rf",
metric = "Rsquared",
tuneGrid = tuneGrid,
trControl = trControl,
importance = TRUE,
ntree = ntree)
key <- toString(ntree)
store_maxtrees[[key]] <- rf_maxtrees
}
results_tree <- resamples(store_maxtrees)
summary(results_tree)
# train the model with the optimum rf
fit_rf = train(response ~ .,
train,
method = "rf",
metric = "Rsquared",
tuneGrid = tuneGrid,
trControl = trControl,
importance = TRUE,
ntree = 1000)
fit_rf
# check Rsquared in the test data
fit_rf2 = train(response ~ .,
test,
method = "rf",
metric = "Rsquared",
tuneGrid = tuneGrid,
trControl = trControl,
importance = TRUE,
ntree = 1000)
fit_rf2
stopCluster(cl)
# apply best model (fit_rf) using the whole data.set
m = randomForest(response ~ .,
data = block.data,
mtry = best_mtry,
importance = TRUE,
ntree = 100,
corr.bias = TRUE,
replace = TRUE)
m
# predict rf prediction at the fine spatial scale
p = predict(m, df, na.rm = TRUE) # df is a data.frame containing the predictors at the fine spatial scale
```
| Random forest regression - predicting raster at a higher spatial resolution and the importance of spliting the dataset | CC BY-SA 4.0 | null | 2023-04-04T11:59:16.020 | 2023-04-04T11:59:16.020 | null | null | 353850 | [
"r",
"regression",
"predictive-models",
"random-forest",
"hyperparameter"
] |
611808 | 1 | 612085 | null | 1 | 71 | I am modelling the occurrence of a species at 5 different sites on an hourly basis (presence/absence), based on a range of temporal predictors (e.g. time of the year, day/night cycle, tides ...). Covariates are indicated by x1, x2, ... in the code below. For information, I have ~70 000 data points.
I am using a HGAM structure, as introduced in [Pedersen et al.2019](https://peerj.com/articles/6876/). For each covariate, I first investigated different specification options (global smoother or not, shared penalty or not ...), and selected the best one based on AIC and my research question.
When putting all of the terms together in the model, I end up with a structure like this:
```
model <- bam(response ~ offset(log(offset)) + s(year, bs="re") +
Site + s(x1, m=2, bs="cc", k=8) +
s(x1, Site, bs = "fs", xt=list(bs="cc"), m=2) +
s(x2, bs = "cc", by=Site, m=2, k=8) +
s(x3, m=2, bs="cc", k=10) +
s(x3, by = Site, bs = "cc", m=1, k=10) +
s(x4, bs = "cc", by=Site, m=2, k=8) +
s(x5, bs = "tp", by=Site, m=2, k=8), family = "binomial",
data = data.all, method="REML", cluster=cl, select=TRUE)
```
The explained deviance of the model is only about 13%. Given the high temporal resolution (hourly), I expect some temporal autocorrelation in the residuals. I read in the `bam()` documentation, and in other posts ([here](https://stats.stackexchange.com/questions/560455/what-is-the-structure-of-a-gam-fit-with-mgcvbam-with-the-rho-parameter-s) and [there](https://stats.stackexchange.com/questions/591056/how-to-account-for-temporal-autocorrelation-in-ordinal-gam-in-r)), that this could be specified using the rho argument in `bam()` following the order of the dataset.
- However, I am not sure this applies when the model has a hierarchical structure? I know it is feasible in gamm(), but the problem then is that I cannot specify multiple factor-smoother interaction terms as currently written in the model ...
- Is this something that could be tackled in brms? Or in any other way?
For information, here is the output of the `acf` and `pacf` functions:
[](https://i.stack.imgur.com/AtJTi.png)
[](https://i.stack.imgur.com/T7iDm.png)
| Is it possible to specify a nested autocorrelation term when working with a hierarchical structure (GAM)? | CC BY-SA 4.0 | null | 2023-04-04T12:03:52.007 | 2023-04-06T08:03:22.053 | 2023-04-06T08:03:22.053 | 247492 | 247492 | [
"multilevel-analysis",
"autocorrelation",
"generalized-additive-model",
"mgcv",
"brms"
] |
611809 | 2 | null | 349953 | 1 | null | With seven-thousand instances of the rare event, it is, at least qualitatively, clear that you are not lacking for rare events. You simply have many more common events than rare events because, well, that is what "rare" and "common" mean.
[As has been argued](https://stats.stackexchange.com/a/559317/247274), the major problem with imbalanced data like you have is not so much the imbalance as much as the imbalance leading to few observations of rare events. You, however, have thousands of observations of the rare event, rather than having thousands of total observations yet only a few-dozen observations of the rare event. Consequently, the usual methods are likely to be fine.
Imbalanced problems like this often appear to pose problems, because they often result in few, if any, of the rare events being caught. Much of this comes from using a predicted probability of $0.5$ as a threshold for making hard classifications based on the continuous predictions. Because of how low the [prior probability of your rare event](https://stats.stackexchange.com/a/583115/247274) is, it is only natural that, unless something in the data is screaming out about a rare event,$^{\dagger}$ the posterior probability (your model prediction) will be low. In this case, you might consider the [ratio of the predicted to the prior probability](https://stats.stackexchange.com/questions/608240/binary-probability-models-considering-event-probability-above-the-prior-probabi). If the rare event happens with a probability of just $7000/(1000000+7000)=0.006951341$, yet you predict a probability of $0.1$, that prediction, despite favoring the common event ($90\%$ chance of the common event), is more than fourteen times higher than the usual probability of a rare event.
$^{\dagger}$This can happen. Have you ever seen/heard/smelled something highly unexpected yet quickly known what it was?
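The arithmetic above is easy to check; here is a minimal Python sketch using the counts quoted in this answer:

```python
# Prior probability of the rare event from the class counts above.
n_rare, n_common = 7_000, 1_000_000
prior = n_rare / (n_rare + n_common)  # about 0.006951341

# A predicted probability of 0.1 still favors the common event, yet it is
# roughly fourteen times the prior probability of the rare event.
predicted = 0.1
ratio = predicted / prior

print(round(prior, 9), round(ratio, 2))
```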
| null | CC BY-SA 4.0 | null | 2023-04-04T12:19:30.897 | 2023-04-04T12:19:30.897 | null | null | 247274 | null |
611811 | 2 | null | 611763 | 1 | null | Weather certainly affects amount of sweat, and it would be wise to include it in your analysis! A [confounding variable is a variable that sets up a backdoor path](https://stats.stackexchange.com/questions/445578/how-do-dags-help-to-reduce-bias-in-causal-inference). Since weather does not set up a backdoor path (because it's not connected to the treatment at all), it is not technically a confounder. However, from the perspective of the design of experiments, it seems clear to me that weather should be a blocking factor. That is, in a linear regression setting, you would include weather on the RHS of the model. Not doing so would introduce much more variation than you probably want.
| null | CC BY-SA 4.0 | null | 2023-04-04T12:50:13.723 | 2023-04-04T12:50:13.723 | null | null | 76484 | null |
611812 | 1 | 612075 | null | 3 | 195 | I want to use self-normalized importance sampling methods to estimate $$\int_{1}^{\infty} \frac{x^2}{\sqrt{2\pi}}e^{\frac{-x^2}{2}} \,dx$$ I choose the exponential distribution with rate $\lambda=1$ as my importance function, which is $$f(x)=e^{-x}$$ The true value of the integral is about $0.400$ but I get $0.799$. The following is my R code. I follow the algorithm on page 32 of [http://people.sabanciuniv.edu/sinanyildirim/Lecture_notes.pdf](http://people.sabanciuniv.edu/sinanyildirim/Lecture_notes.pdf). I still can't find the error in my code.
```
N=10000
f=function(x){
return((x^2)*(x>=1))
}
p=function(x){
return(dnorm(x))
}
#((1/sqrt(2*pi))*exp((-x^2)/2))
q = function(x) {
return(exp(-x))
}
x = rexp(N, rate =1)
theta.hat2=sum((p(x)/q(x))*f(x))/sum((p(x)/q(x)))
theta.hat2
```
---
Update:
I use the t distribution as the importance function because it has the same support as the target density, and I get the desired value. How can I compute the variance of this estimator?
```
N=10000
f=function(x){
return((x^2))
}
p=function(x){
return(exp((-x^2)/2))
}
q = function(x) {
return(dt(x,df=3))
}
x = rt(N,df=3)
w_u=p(x)/q(x)
w=w_u/sum(w_u)
theta.hat2=sum(w*f(x)*(x>=1))
theta.hat2
# 0.4064571
```
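Regarding the variance question: one common option for the self-normalized estimator is the delta-method approximation $\widehat{\mathrm{Var}}(\hat\theta)\approx\sum_i w_i^2\,(f(x_i)-\hat\theta)^2$ with normalized weights $w_i$. Below is an illustrative stdlib-Python sketch mirroring the updated R code (the hand-written t sampler and density are my own helpers, not part of the original post):

```python
import math, random

random.seed(0)
N = 10_000
df = 3

def rt(df):
    # Student-t draw: standard normal over sqrt(chi-square / df).
    z = random.gauss(0.0, 1.0)
    chisq = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / math.sqrt(chisq / df)

def dt(x, df):
    # Student-t density with `df` degrees of freedom.
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

xs = [rt(df) for _ in range(N)]
w_u = [math.exp(-x * x / 2) / dt(x, df) for x in xs]  # unnormalized weights
s = sum(w_u)
w = [wi / s for wi in w_u]                            # self-normalized weights
fx = [x * x * (x >= 1) for x in xs]

theta_hat = sum(wi * fi for wi, fi in zip(w, fx))
# Delta-method variance estimate for the self-normalized estimator:
var_hat = sum(wi ** 2 * (fi - theta_hat) ** 2 for wi, fi in zip(w, fx))

print(theta_hat, math.sqrt(var_hat))  # estimate near 0.40, plus its std. error
```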
| How to use self-normalized importance sampling method to estimate $\int_{1}^{\infty} \frac{x^2}{\sqrt{2\pi}}e^{\frac{-x^2}{2}}dx$? | CC BY-SA 4.0 | null | 2023-04-04T12:52:37.047 | 2023-04-06T16:00:20.403 | 2023-04-06T13:18:09.357 | 350153 | 350153 | [
"mathematical-statistics",
"monte-carlo",
"numerical-integration"
] |
611813 | 1 | null | null | 0 | 28 | I have observed N (a_i, p_i) pairs, each drawn from a different Bernoulli distribution. Here a_i are observed amplitudes and p_i are observed probabilities of success for the i^{th} draw.
I would like to model the full likelihood distribution (and not just via the MLE) with a view to identifying which draws belong to the successful class (and hence the statistics of just these draws, especially any irregular uncertainty distributions etc.).
For example, I have
```
a_i p_i p_true
10 0.2 0
100 0.9 1
11 0.1 0
99 0.93 1
12 0.25 0
```
I know the PDFs of the model and of the data (both are obviously Bernoulli distributions), but how do I combine them to obtain residuals that I can use to explore the joint distribution? What distribution does the joint distribution follow and how?
I have tried to unpack the cross-entropy and data and model but don't have a clear solution.
Is the continuous Bernoulli distribution a complete red herring?
Note that ordering doesn't matter in my example. Some other points in response to comments:
- The p_i values come from an oracle - these represent the probabilities that the datapoints belong to a common class.
- The amplitudes a_i are just weights for the corresponding Bernoulli distributions. The higher the amplitude, the higher the scaling of the Bernoulli distribution in its contribution to the overall process.
See also:
["Weighted" Poisson binomial distribution](https://stats.stackexchange.com/questions/277330/weighted-poisson-binomial-distribution)
[Weighted sum of Bernoulli distributions](https://stats.stackexchange.com/questions/569610/weighted-sum-of-bernoulli-distributions)
[https://math.stackexchange.com/questions/3481907/sum-of-weighted-independent-bernoulli-rvs](https://math.stackexchange.com/questions/3481907/sum-of-weighted-independent-bernoulli-rvs)
[What is the CDF of the sum of weighted Bernoulli random variables?](https://stats.stackexchange.com/questions/270227/what-is-the-cdf-of-the-sum-of-weighted-bernoulli-random-variables)
Please let me know if you need any more information to assist in answering. Thanks as ever!
| How do I construct the likelihood function for a series of observed Bernoulli-distributed datapoints? | CC BY-SA 4.0 | null | 2023-04-04T12:52:45.870 | 2023-04-04T13:53:09.643 | 2023-04-04T13:41:00.770 | 171291 | 171291 | [
"probability",
"maximum-likelihood",
"likelihood",
"bernoulli-distribution"
] |
611815 | 1 | null | null | 0 | 20 | I have a problem finding a method to assess whether my observed relationship y~x shows a linear or a convex (concave up) pattern. Unfortunately, this is not clear from the plot...
I performed a Mantel regression test between two distance matrices, using residuals to control for a third variable. The Mantel test shows a significant relationship between my two variables (residualsA vs residualsB).
However, when I plot residualsA vs residualsB, it is not entirely clear whether the relationship is indeed linear or whether it is convex (concave up).
What model (quadratic, exponential, absolute value,...) would fit a convex relationship?
How can I test which model (linear or convex) fits the observed pattern best?
Here is a reproducible example using the iris data.
```
data(iris)
library("ggplot2")
library("reshape2")
library("vegan")
# distances
dist.SepL <- as.matrix(vegdist(iris$Sepal.Length))
dist.PetL <- as.matrix(vegdist(iris$Petal.Length))
dist.SepW <- as.matrix(vegdist(iris$Sepal.Width))
# A.
# regress Sepal.Length distance against Sepal.Width distance.
vegan::mantel(xdis=dist.SepW, ydis=dist.SepL, method="spearman", permutations=99)
#--> Mantel statistic r: 0.03639, Significance: 0.11
# save residuals.
resA <- mantel.residuals(dist.SepL, dist.SepW)
resA.df <- as.data.frame(melt(as.matrix(resA$residuals))) #residuals as dataframe
# B.
# regress Petal.Length distance against Sepal.Width distance.
vegan::mantel(xdis=dist.SepW, ydis=dist.PetL, method="spearman", permutations=99)
#-->Mantel statistic r: 0.2179, Significance: 0.01
# save residuals.
resB <- mantel.residuals(dist.PetL, dist.SepW); resB$residuals
resB.df <- as.data.frame(melt(as.matrix(resB$residuals))) #residuals as dataframe
# AB.
# regress residual Sepal.Length distance (A.) against residual Petal.Length distance (B.)
vegan::mantel(xdis=resB$residuals, ydis=resA$residuals, method="spearman", permutations=99)
#-->Mantel statistic r: 0.6245, Significance: 0.01
# save residuals.
res.AB <- cbind(resA.df, resB.df$value) #column-bind residuals A and residuals B
colnames(res.AB) <- c("var1","var2","resA","resB") #rename variables
res.AB <- res.AB[100:200,] #subset to get a smaller df, for this example
#(otherwise there are 22500 obs, meaning a slow calculation time and a messy plot)
```
I used geom_smooth in ggplot, to visually compare linear to loess regression lines.
Since the red SE area (linear) is smaller than the blue SE area (loess), I would say that the linear relationship fits the pattern best. Does it make sense?
```
ggplot(res.AB, aes(y=resA, x=resB)) +
geom_point(size=2) +
geom_smooth(method="glm",formula = y ~ x, se=T, col="red", fill="#fadcdd",linetype="solid") +
geom_smooth(method="loess", se=T, col="blue", fill="lightblue", linetype="solid") +
theme_bw()
#red is linear
#blue is quadratic
```
[](https://i.stack.imgur.com/Yz52g.png)
But of course, this is not a proper test!
My first idea: using AIC (Akaike information criterion). But GLM doesn't allow negative values (like some of my residual values). Therefore, I transformed all my residuals to positive values simply by adding a number allowing this (residuals+1). After this transformation, I run GLM models with log and sqrt links. I don't know... how to get a quadratic GLM?
```
mlin <- lm(resA ~ resB, data = res.AB)
mquad <- lm(resA~poly(resB, degree = 2), data=res.AB)
mlog <- glm((resA+1)~(resB+1), data=res.AB, family=poisson(link="log"))
msqrt <- glm((resA+1)~(resB+1), data=res.AB, family=poisson(link="sqrt"))
as.data.frame(AIC(mlin, mquad, mlog, msqrt))
as.data.frame(AIC(mlog, msqrt))
df AIC
mlin 3 -397.979
mquad 3 -337.922
mlog 2 Inf
msqrt 2 Inf
```
AIC values are:
linear: -397.979
quadratic: -337.922
logarithmic: Inf
squareroot: Inf
So, this approach is also not working.
After an intensive googling, I came up with different methods to check for the goodness-of-fit of the linear model vs. the "convex model".
Here is the code (sorry for its length)...
```
## fit models =============================================================================================
# Fit the null model
model_null <- lm(resA ~ 1, data = res.AB)
# Fit the linear model
model_lin <- lm(resA ~ resB, data = res.AB)
# Fit the quadratic model
model_quad <- lm(resA ~ resB + I(resB^2), data = res.AB)
model_quad <- lm(resA~poly(resB, degree = 2), data=res.AB) #idem
quad_model <- nls(resA ~ a * resB^2 + b * resB + c, data = res.AB, start = list(a = 1, b = 1, c = 1)) #with nls
# Fit the cubic model
model_cub <- lm(resA ~ resB + I(resB^3), data = res.AB)
cubic_model <- nls(resA ~ a + b*resB + c*resB^2 + d*resB^3,data = res.AB, #with nls
start = list(a = 1, b = 1, c = 1, d = 1))
# Fit an exponential model (intercept at zero)
exp_model <- nls(resA ~ a * exp(b * resB), data = res.AB, start = list(a = 1, b = 0.01)) #with nls
# Fit an exponential model (random intercept)
res.AB$id <- seq(1, 64, length=64)
exp_model <- nls(resA ~ exp(a + b * resB), #with nls
data = res.AB,
start = list(a = rnorm(1), b = rnorm(1)),
trace = TRUE)
# create a list of these models
model.list <- list(null = model_null, linear = model_lin, quadratic = model_quad, quadratic.nls=quad_model,
cubic=model_cub, exponential=exp_model)
lapply(model.list, summary)
# List of residuals
resid(model_lin)
# density plot
plot(density(resid(model_lin))) #unsure how to interpret that
hist(model_lin$residuals, main="Histogram of Residuals", xlab = "bf residuals")
## residual plot =============================================================================================
# points randomly scattered around x=0 -> appropriate model
# curved pattern -> LM captures the trend of some data points better than that of others -> consider another model (not linear model)
plot(res.AB$resB, model_lin$residuals,
ylab = "linear model residuals", xlab="independent variable"); abline(h=0, col="black", lwd=1, lty=2)
## visual plot =============================================================================================
#create sequence of residual values
resvalues <- seq(-0.5, 0.5, length=64)
#create list of predicted values using the models
predictlin <- predict(model_lin,list(resB=resvalues))
res.AB$resB2 <- res.AB$resB^2
predictquad <- predict(model_quad,list(resB=resvalues, resB2=resvalues^2))
res.AB$resB3 <- res.AB$resB^3
predictcub <- predict(model_cub,list(resB=resvalues, resB4=resvalues^4))
res.AB$resBexp <- exp(res.AB$resB)
predictexp <- predict(exp_model,list(resB=resvalues, resBexp=exp(resvalues)))
#create scatterplot of original data values
plot(res.AB$resB, res.AB$resA, pch=19, cex=0.9)
#add predicted lines based on quadratic regression model
lines(resvalues, predictquad, col='blue')
lines(resvalues, predictlin, col='red')
lines(resvalues, predictcub, col='darkgreen')
lines(resvalues, predictexp, col='orange')
## residuals vs fitted plot (1st plot of the sequence)
plot(model_lin)
plot(model_quad)
plot(model_cub)
plot(exp_model)
## R-squared =============================================================================================
# higher R-squared value indicates a better fit
sapply(model.list,
function(x) {
summary(x)$r.squared })
## AIC =============================================================================================
# lower AIC value indicates a better model fit.
sapply(model.list, AIC)
as.data.frame(AIC(model_null, model_lin, model_quad)) #idem
## MAE =============================================================================================
# lower MAE value indicates a better model fit.
sapply(model.list, function(x) mean(abs(x$residuals)))
## MSE =============================================================================================
# lower MSE value indicates a better model fit.
sapply(model.list, function(model) {
residuals <- residuals(model)
mean(residuals^2)})
## ANOVA =============================================================================================
# smaller RSS indicates a better fit ??
# if p-value <0.05, we can reject the null hypothesis that linear model is a better fit
anova(model_lin, model_quad)
## Ramsey RESET test =============================================================================================
# this test does not assumes that the quadratic model is nested within the linear model
library("car")
linearHypothesis(model_quad, c("I(resB^2) = 0"), test = "F")
anova_result <- sapply(model.list, anova)
print(anova_result)
## stepAIC =============================================================================================
# Fit all possible models with up to quadratic terms
all_models <- lm(resA ~ I(resB^2), data = res.AB)
# Use stepAIC to perform model selection and find the best-fitting model
best_model <- MASS::stepAIC(all_models, direction = "both")
# Print the summary of the best-fitting model
summary(best_model)
## Pearson =============================================================================================
#Pearson's chi-squared test for goodness of fit.
# Calculate the expected values
expected <- predict(model_lin)
# Divide the data into 10 groups
res.AB$group <- cut(expected, breaks = 10)
# Calculate the observed frequencies in each group
observed <- table(res.AB$group, res.AB$resA > 0)
# Perform the Pearson's chi-squared test for goodness of fit
chisq.test(observed)
```
However, I still don't know:
1. What model (quadratic, exponential, or something else) actually represents a convex (concave up) relationship?
2. What method is the most appropriate and/or accurate?
Any reply to these questions would be highly appreciated! Thank you!
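On question 1, the simplest convex (concave up) candidate is a quadratic y = a + b*x + c*x^2 with c > 0, and since ordinary least squares with a Gaussian likelihood places no sign restriction on the response, AIC can be compared between the linear and quadratic lm fits directly, with no +1 shift or Poisson family needed. An illustrative stdlib-Python sketch on made-up convex data (all helper names are mine):

```python
import math, random

def polyfit(xs, ys, degree):
    # Solve the normal equations (X'X) beta = X'y by Gaussian elimination.
    k = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(k)] for i in range(k)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

def aic(xs, ys, degree):
    # Gaussian AIC (up to a constant): n*log(RSS/n) + 2k, k counting sigma^2.
    beta = polyfit(xs, ys, degree)
    rss = sum((y - sum(c * x ** i for i, c in enumerate(beta))) ** 2
              for x, y in zip(xs, ys))
    n, k = len(xs), degree + 2
    return n * math.log(rss / n) + 2 * k

random.seed(3)
xs = [i / 10 for i in range(-20, 21)]
ys = [0.5 * x * x + random.gauss(0, 0.2) for x in xs]  # convex toy data
print(aic(xs, ys, 1) > aic(xs, ys, 2))  # True: quadratic wins on convex data
```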
| Linear vs convex relationship: testing which one fits better? | CC BY-SA 4.0 | null | 2023-04-04T13:01:54.130 | 2023-04-04T13:04:53.000 | 2023-04-04T13:04:53.000 | 241719 | 241719 | [
"r",
"linear",
"goodness-of-fit",
"convex"
] |
611820 | 1 | 612002 | null | 6 | 129 | I have a dataset with destructive follow-up. That is, with a population starting at time 0, we are taking out a proportion at predetermined time points to see whether the event has occurred to them. The sampling is destructive, so we can't sample the same individuals repeatedly. In this case, we need to dissect a fish to determine whether the event has happened and we do this to a set number of fish at predetermined times. For example, we dissect 15 fish at 24h to know which animals have had the event, then we dissect another 15 animals at 48h and so on. I have done a logistic regression with time as one of the predictors and binary outcome (binomial family glm), but I wanted to ask if it's possible to use survival analysis for this type of data. I think what I have is left censored data for the animals where the event has occurred at time of dissection, and right censored for the animals where it hasn't occurred. In that case, there is only censored data, right? Is there a correct way of using this kind of data for survival analysis?
Edit: In this experiment, I have size of fish and temperature as covariates, so ideally, I would like to test whether these two variables affect the time to event. Both temperature and weight can be stratified ("small, medium, large" and "cold, medium, warm") as they are semi-controlled variables, but I'll probably get more information out of using the exact measurements rather than creating dummy variables. I can also add that the event is certain to happen eventually with the experiment designed to keep going until >95% of the fish are expected to have had the event. I also know for certain that none of the fish had the event at time 0. This then also would suggest that I have reasonable priors, so could take a bayesian approach.
| Survival analysis with only censored event times? | CC BY-SA 4.0 | null | 2023-04-04T13:33:35.490 | 2023-04-21T07:53:28.450 | 2023-04-21T07:53:28.450 | 42336 | 42336 | [
"survival",
"censoring"
] |
611821 | 1 | null | null | 0 | 62 | I would like to determine CDF and PDF from quantiles that I have determined via quantile regression.
I have read here in the forum ([Find the PDF from quantiles](https://stats.stackexchange.com/questions/347964/find-the-pdf-from-quantiles)) that it is possible to interpolate this via the integral of a B-spline. The PDF should then be determined via a normal evaluation.
Unfortunately, I did not understand why I have to use the integral of the B-spline, how I can ensure that the CDF is monotonically increasing, and how I then get to the derivative (the PDF). Can someone help me please?
This is how it currently looks for me:
```
import scipy.interpolate
import numpy as np
import matplotlib.pyplot as plt
x = np.array([ 38.45442808, 45.12051933, 46.85565437, 47.84576924,
49.50084204, 50.09833301, 51.3717386 , 54.85307741,
59.91982266, 63.11786854, 66.90037244, 67.84446378,
72.96120777, 73.92993279, 81.63075081, 85.42178836,
90.70554533, 91.2393176 , 110.03872988])
y = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95])
t,c,k = scipy.interpolate.splrep(x, y)
spline = scipy.interpolate.BSpline(t, c, k, extrapolate=False)
d_spline = spline.derivative()
N = 100
xmin, xmax = x.min(), x.max()
xx = np.linspace(xmin, xmax, N)
fig, ax = plt.subplots(2,1, figsize =(12, 8))
ax[0].plot(x, y, 'bo', label='Original points')
ax[0].plot(xx, spline(xx), 'r', label='BSpline')
ax[1].plot(xx, d_spline(xx), 'c', label='BSpline')
```
[](https://i.stack.imgur.com/nXNLD.png)
My approach doesn't really work well unfortunately and I can't find any numerical examples to help me. I am grateful for all comments and remarks!
Thank you!
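One possible way around the monotonicity issue is to interpolate the quantile pairs with a shape-preserving interpolant rather than an unconstrained spline; even a piecewise-linear CDF is automatically monotone when the quantiles are sorted, with a piecewise-constant PDF as its derivative. A stdlib-Python sketch using the posted quantiles (rounded to two decimals; the helper names are arbitrary):

```python
import bisect

xs = [38.45, 45.12, 46.86, 47.85, 49.50, 50.10, 51.37, 54.85, 59.92,
      63.12, 66.90, 67.84, 72.96, 73.93, 81.63, 85.42, 90.71, 91.24, 110.04]
ps = [0.05 * k for k in range(1, 20)]  # 0.05, 0.10, ..., 0.95

def cdf(x):
    # Piecewise-linear interpolation through (quantile, probability) pairs;
    # monotone by construction because xs and ps are both sorted.
    if x <= xs[0]:
        return ps[0]
    if x >= xs[-1]:
        return ps[-1]
    i = bisect.bisect_right(xs, x) - 1
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ps[i] + t * (ps[i + 1] - ps[i])

def pdf(x):
    # Derivative of the piecewise-linear CDF: the local slope.
    if x < xs[0] or x >= xs[-1]:
        return 0.0
    i = bisect.bisect_right(xs, x) - 1
    return (ps[i + 1] - ps[i]) / (xs[i + 1] - xs[i])

# Sanity check: the interpolated CDF is non-decreasing.
assert all(cdf(x) <= cdf(x + 1) for x in range(38, 110))
```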
| Determine CDF and PDF from quantiles | CC BY-SA 4.0 | null | 2023-04-04T13:37:56.960 | 2023-04-13T06:44:12.230 | 2023-04-04T13:45:31.000 | 384907 | 384907 | [
"density-function",
"quantiles",
"cumulative-distribution-function"
] |
611822 | 1 | null | null | 0 | 62 | I have a dataset consisting of time periods; at the end of each one the individual either develops a disease, or doesn't and is right-censored. I suspect that the rate of developing the disease is determined by a power function of another known variable N: N raised to an unknown power b, times an unknown multiplier a determined by a random effect across the different individuals id.
I found [this page](https://stats.stackexchange.com/q/124837/28500) very helpful, but I would like to use a survival model as the response variable (lhs), and a mixed non-linear model as the input (rhs).
The data look something like this (where I've made the power parameter 0.4).
```
library(tidyverse)
library(survival)
library(lme4)
Nsamples = 100
Z = tibble(ev = sample(0:1, size = Nsamples, replace = T), #event outcome
N = rpois(n = Nsamples, lambda = 100), #known input variable
dur = N^0.4*rnorm(n = Nsamples, mean = 1, sd = 0.1), #length of follow-up
id = rep(letters[1:2], each = Nsamples/2) #a random effect
)
```
I use a `deriv` function to define the non-linear part:
```
power.f = deriv(~a * N^b, namevec=c('a', 'b'), function.arg=c('a', 'N','b'))
```
And a survival function to define the output:
```
surv_obj <- with(Z, Surv(time = dur, event = ev))
```
Putting these together, and using the lme4::nlmer function with format `Output ~ Non-linear part ~ Random effect`
```
nlmer(surv_obj ~ power.f(a, N, b) ~ (a|id), data = Z, start=c(a=1, b=0.5))
```
However, I get the following error
```
# Error in resp$ptr() : dimension mismatch
```
In principle this should work I think, but I am not sure if these functions are compatible with each other and the error message doesn't mean much to me. Please help if you know how to get this to work, or some other way to fit a survival rates model like this with a power law in the input variables.
Thank you!
| How to include a survival function in non-linear mixed regression | CC BY-SA 4.0 | null | 2023-04-04T13:38:08.493 | 2023-04-06T14:10:07.173 | 2023-04-05T14:38:33.087 | 28500 | 384891 | [
"r",
"mixed-model",
"lme4-nlme",
"survival",
"power-law"
] |
611823 | 2 | null | 611769 | 4 | null | Find the distribution function of $Y_1$ with a picture, then differentiate it.
Observe that because $(X_1,X_2)$ lies in the first quadrant and $Y_1$ is the angle subtended by this point, $0\le Y_1 \le \pi/2.$ Moreover, for any possible value $\theta$ in this interval, the event $Y_1 \le \theta$ is the region in the unit square bounded above by the line at angle $\theta,$ highlighted in this diagram:
[](https://i.stack.imgur.com/CkTkz.png)
Because $(X_1,X_2)$ is uniformly distributed on the unit square, the probability of any event equals its area.
Clearly when $0\le \theta\le \pi/4,$ where this event is a triangle, its area is half its height:
$$\Pr(Y_1 \le \theta) = \frac{1}{2}\tan\theta, \quad \theta \le \pi/4.$$
Its derivative is $\sec^2(\theta)/2.$
When $\pi/4\le\theta\le \pi/2,$ the event is the complement of a triangle, immediately giving
$$\Pr(Y_1 \le \theta) = 1 - \frac{1}{2}\cot\theta, \quad \pi/4 \le \theta \le \pi/2.$$
Its derivative is $\csc^2(\theta)/2 = \sec^2(\pi/2-\theta)/2.$
Combining these results into a common formula gives
>
$$g(\theta) = \frac{1}{2}\sec\left(\min\left(\theta, \frac{\pi}{2}-\theta\right)\right)^2,\quad 0\le\theta\le\frac{\pi}{2}.$$
Of course $g\equiv 0$ for all other arguments.
To illustrate $g,$ here is a histogram of a million realizations of $Y_1$ (computed with the `R` code `n <- 1e6; y <- atan2(x2 <- runif(n), x1 <- runif(n))`) over which the graph of $g$ is plotted in red.
[](https://i.stack.imgur.com/ZiG8N.png)
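As a quick numerical sanity check complementing the simulation, $g$ integrates to $1$ over $[0,\pi/2]$; a short Python sketch:

```python
import math

def g(theta):
    # Density derived above: sec(min(theta, pi/2 - theta))^2 / 2.
    t = min(theta, math.pi / 2 - theta)
    return 0.5 / math.cos(t) ** 2

# Midpoint rule over [0, pi/2].
n = 100_000
h = (math.pi / 2) / n
total = sum(g((i + 0.5) * h) for i in range(n)) * h
print(round(total, 6))  # 1.0
```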
| null | CC BY-SA 4.0 | null | 2023-04-04T13:43:39.360 | 2023-04-04T13:43:39.360 | null | null | 919 | null |
611824 | 1 | null | null | 0 | 19 | I'm trying to fit a VECM model with MLE estimation for N=2, and compare the result to the VECM `statmodel` implementation. When running both with random cointegrated time series generated from known VECM params, both statmodel and my MLE version shows pretty much the same result (at least in magnitude)
However, when I run it with "real-life" time series with lower cointegration, it appears that OLS predicts well (at least in magnitude) what the statmodel VECM implementation shows. My MLE implementation, though, shows betas very far from the OLS ones, with much lower loading forces. When running the same MLE with beta not trainable and set to the values given by OLS, I get almost the same loss (the one with the beta retrieved by OLS is still a little lower).
To illustrate more explicitely, I run these 2 procedures:
- I take real life time series with "low" cointegration (p-value still < 2%). N=2, nobs=7000
- Procedure (1) : I run OLS to retrieve beta coefficients, I run a VECM fit from statmodel, then I run a VECM fit with my MLE implementation:
Output:
```
beta from OLS = [1, -21.5, 2.45]
statmodel VECM beta: [1, -22.7, 3.05], loading forces: [0.004, 0.0004]
MLE VECM beta: [1, 400.39, -214.06], loading forces: [-3.46e-05, -1.43e-06] # see how the loading forces are much lower, and the VECM beta are completely off scale
MLE final loss: -28029.20
```
- Procedure (2) : I run OLS to retrieve beta coefficients, then I run VECM fitting my MLE implementation with beta not trainable, and fixed to the OLS retrieved ones.
Output:
```
beta from OLS = [1, -21.5, 2.45]
MLE VECM beta: [1, -21.5, 2.45], loading forces: [0.0029, 0.00034] # beta are the same since they're fixed and not trainable, and the loading forces are now somewhat close to the statmodel.VECM ones (at least in magnitude).
MLE final loss: -28023 # see how the loss is very close to the other completely off-scale solution it gave me at procedure (1). Procedure (1) would have been stuck in a local minimum ?
```
I know that I could just estimate beta from OLS at all time and do not care about this difference. However, I also want to use my implementation for N > 2, and in this case, it will have no beta "help" from OLS. Indeed, OLS estimation of VECM beta is applicable only when N=2 (Engle & Granger 1987), as far as I know.
Finally, the question is: is it normal that an MLE estimation of a VECM model leads to widely different solutions (in terms of beta and loading forces) with the same loss? How do I tell MLE that I want the solution with the greatest reverting force, even if both have the same loss?
| VECM fitting : MLE from scratch and statmodel give very different beta and reverting forces | CC BY-SA 4.0 | null | 2023-04-04T13:45:33.843 | 2023-04-04T13:45:33.843 | null | null | 372184 | [
"maximum-likelihood",
"least-squares",
"vector-error-correction-model"
] |
611825 | 1 | null | null | 1 | 35 | Short version
Can bootstrap be used to find disconnected confidence regions when MLE is not unique?
---
Long version
Let $\theta$ be a parameter and $P_\theta=\mathrm{Normal}(\theta, 1)$ be a distribution. In the frequentist approach to statistical inference one has access to the sample $X_1, \dotsc, X_n\sim P_\theta$ and can construct the maximum likelihood estimator $\hat\theta(X_1, \dotsc, X_n) = \frac{1}{n}\sum_i X_i$ as well as construct the 95% confidence interval $\mathrm{CI}(X_1, \dotsc, X_n)$ around $\hat \theta$ in various manners (in this case an analytic formula is available, but one could also use likelihood profile, Fisher information matrix, or bootstrap).
My understanding is that:
- We work with an identifiable model (for $\theta_1\neq \theta_2$ we have $P_{\theta_1}\neq P_{\theta_2}$), so for large $n$ we will find (approximately) unique $P_\theta$ and from this $\theta$ in turn.
- The confidence intervals based on any of the above methods have their usual meaning, i.e., if we repeat the procedure of sampling the data from $P_\theta$ and construct the confidence interval for each sample, then 95% of them will cover the true value $\theta$.
However, in more complex situations (especially when the model is non-identifiable) a maximum likelihood solution may not be unique. For example, consider $P_\theta=\mathrm{Normal}(\theta^2, 1)$ with two maximum likelihood estimates, $\hat \theta_{\pm}(X_1, \dotsc, X_n) = \pm\sqrt{\frac 1n\sum X_i}$.
In this case I can still define 95% confidence regions (which often will be disconnected in this case, consisting of two intervals) for parameter $\theta$ adjusting the analytical formulas.
However, I do not know how to construct the confidence regions when (a) analytical formulae are not available or (b) I do not even know how many maximum likelihood solutions exist.
Could you recommend me some references on finding confidence regions with non-unique maximum likelihood estimates? I do not know whether methods such as likelihood profile, Fisher information matrix, or (most importantly for me) bootstrap would still work and retain the usual meaning of confidence regions.
I know that in Bayesian statistics identifiability poses [different kinds of issues](https://statmodeling.stat.columbia.edu/2014/02/12/think-identifiability-bayesian-inference/) (as label switching and how to understand the multimodality in the posterior), but I would like to learn how this problem can be tackled from the frequentist perspective.
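To make the question concrete, here is an illustrative stdlib-Python sketch of what a naive bootstrap does in the $P_\theta=\mathrm{Normal}(\theta^2,1)$ toy model above: each resample yields the two maximum-likelihood solutions $\pm\sqrt{\bar X}$, so the bootstrap distribution of the MLE is bimodal and percentile-type regions come out disconnected.

```python
import math, random, statistics

random.seed(1)
n, theta = 200, 2.0
data = [random.gauss(theta ** 2, 1.0) for _ in range(n)]

roots = []
for _ in range(1000):
    resample = [random.choice(data) for _ in range(n)]
    m = statistics.fmean(resample)
    if m >= 0:  # sqrt is undefined otherwise (essentially never happens here)
        roots.extend([math.sqrt(m), -math.sqrt(m)])

# One cluster sits tightly around +2; its mirror image sits around -2.
pos = [r for r in roots if r > 0]
print(min(pos), max(pos))
```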
| Confidence regions with non-unique maximum likelihood | CC BY-SA 4.0 | null | 2023-04-04T13:48:17.243 | 2023-04-08T14:41:01.773 | 2023-04-08T14:41:01.773 | 11887 | 255508 | [
"confidence-interval",
"maximum-likelihood",
"references",
"bootstrap",
"identifiability"
] |
611826 | 1 | null | null | 0 | 12 | Small area estimation (SAE) techniques combine information from household surveys with existing auxiliary information at population level to make inferences of certain indicators for population groups who represent disaggregations for which the survey was not designed.
Most SAE methods, both direct and indirect (for more info read [this document](https://repositorio.cepal.org/bitstream/handle/11362/48107/3/S2200167_en.pdf)), require the survey to record the small area where each respondent lives. In other words, when trying to estimate a variable $y$ for each respondent $r$ in a small region $s$, most methods rely on the knowledge of $y_{rs}$.
I am studying a survey that does not provide $y_{rs}$, and only aggregated zoning information is provided for each respondent. According to [this Eurostat study](https://ec.europa.eu/eurostat/documents/3859598/10167610/KS-GQ-19-011-EN-N.pdf/3b56be5d-8266-0ee7-7579-7c3e8e63289d), the only SAE methods that work for this case are called synthetic estimators, and basically consider the small areas to be homogeneous in that they have common parameters, without allowing any degree of heterogeneity between them.
As stated in [this other question](https://stats.stackexchange.com/questions/599679/using-large-area-data-county-to-predict-small-area-estimand-census-tract/611819#611819), the assumptions of these methods are very strong and the bias of the estimation can be considerably high.
Are there any other methods that can be applied when no information regarding the target small area is covered within the survey?
| Small Area Estimation techniques when no micro information is available | CC BY-SA 4.0 | null | 2023-04-04T13:51:52.453 | 2023-04-04T13:51:52.453 | null | null | 247645 | [
"estimation",
"predictive-models",
"survey",
"survey-sampling",
"small-area-estimation"
] |
611827 | 1 | null | null | 0 | 28 | I am learning Gibbs Sampling for GMMs. Particularly, given $\boldsymbol \theta$, I must sample from the latent $\boldsymbol z$ before sampling $\boldsymbol x$.
The PMF of $\boldsymbol z$ is given as$$
P(z=k|\boldsymbol x,\boldsymbol\theta)\propto P(z=k)\,\mathcal N(\boldsymbol x;\mu_k,\Sigma_k)
$$ for $k=1,\dots,K$ and by assumption, $\mu,\Sigma,P(z_k)$ are given.
Gibbs sampling states that I should iteratively sample from $\boldsymbol z$ and $\boldsymbol x$ and accept every sample (since it is MH with acceptance probability $1$).
The problem is, how do I sample from $\boldsymbol z$? Sampling from $P(\boldsymbol x|\boldsymbol\theta)$ is easy once I have $\boldsymbol z$ since I will simply select a Gaussian component corresponding to $z_k$ and call packages for sampling a Gaussian distribution.
Do I need to perform another MCMC for every single sample of $z$, or can I perhaps use rejection or importance sampling?
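For concreteness, the step asked about here, drawing a discrete $z$ given $\boldsymbol x$ and $\boldsymbol\theta$, amounts to sampling from a categorical distribution once the unnormalized weights $P(z=k)\,\mathcal N(\boldsymbol x;\mu_k,\Sigma_k)$ are computed, so no inner MCMC is needed. A minimal stdlib sketch with 1-D components; the function names, the 1-D simplification, and all numbers are illustrative, not from any package:

```python
import math
import random

def normal_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2) evaluated at x.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def sample_z(x, weights, mus, sigmas, u=None):
    # Unnormalized posterior weights P(z=k) * N(x; mu_k, sigma_k^2).
    w = [p * normal_pdf(x, m, s) for p, m, s in zip(weights, mus, sigmas)]
    total = sum(w)
    probs = [wi / total for wi in w]
    # Inverse-CDF draw from the categorical distribution;
    # u is exposed as an argument only to make the draw reproducible.
    if u is None:
        u = random.random()
    cum = 0.0
    for k, p in enumerate(probs):
        cum += p
        if u < cum:
            return k
    return len(probs) - 1

# Two well-separated components: x = 5.0 is far more likely under component 1.
z = sample_z(5.0, weights=[0.5, 0.5], mus=[0.0, 5.0], sigmas=[1.0, 1.0], u=0.5)
print(z)  # -> 1
```

The same inverse-CDF draw works for any finite $K$; most numerical libraries also expose it directly as a categorical/multinomial sampler.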
| How to sample from a given accessible PDF? | CC BY-SA 4.0 | null | 2023-04-04T13:52:51.137 | 2023-04-04T14:03:04.633 | 2023-04-04T14:03:04.633 | 338644 | 338644 | [
"sampling",
"markov-chain-montecarlo",
"gaussian-mixture-distribution",
"gibbs"
] |
611828 | 2 | null | 611813 | 0 | null | >
I know the PDFs of the model and of the data (both are obviously Bernoulli distributions),
Bernoulli distributions are not characterized by a probability density function (PDF) and instead are characterized by a probability mass function (PMF).
>
but how do I combine them to obtain residuals that I can use to
explore the joint distribution?
In many problems the observations are assumed to be independent and are multiplied. For instance if you have two pmf's depending on some parameter $p$ like $$P(X_1=x_1;p) = f_1(x_1,p)$$ and $$P(X_2=x_2;p) = f_2(x_2,p)$$ then the pmf for the joint probability is $$P(X_1=x_1 \land X_2=x_2 ;p) = f_1(x_1,p)f_2(x_2,p)$$.
For Bernoulli distributions the observations $X_i$ are discrete, but the parameter $p$ is continuous. So if you use this to make a likelihood function, then you get a continuous function.
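To make the product rule above concrete, here is a stdlib sketch that evaluates the joint Bernoulli pmf of independent observations as a function of the continuous parameter $p$; the function names are illustrative:

```python
def bernoulli_pmf(x, p):
    # P(X = x) for a Bernoulli(p) variable, with x in {0, 1}.
    return p if x == 1 else 1.0 - p

def joint_likelihood(xs, p):
    # Independence: the joint pmf is the product of the marginal pmfs.
    like = 1.0
    for x in xs:
        like *= bernoulli_pmf(x, p)
    return like

# Three observations; viewed as a function of p this is a continuous likelihood.
print(joint_likelihood([1, 1, 0], 0.5))  # -> 0.125
```

Evaluating `joint_likelihood` over a grid of $p$ values traces out the continuous likelihood function mentioned above.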
| null | CC BY-SA 4.0 | null | 2023-04-04T13:53:09.643 | 2023-04-04T13:53:09.643 | null | null | 164061 | null |
611829 | 1 | 611850 | null | 2 | 170 | I am using `xgboost` (in R) to predict a continuous non-negative target variable using standard root mean squared error as an evaluation metric (and loss function). However, as pointed out in [this post](https://datascience.stackexchange.com/questions/565/why-does-gradient-boosting-regression-predict-negative-values-when-there-are-no), it is possible that a non-negative target sometimes receives negative predictions. Since I have many training values in the target that are close to zero, there are a few observations that have received a prediction below zero.
So, is there any way to train my `xgboost` model so that predictions are >= 0? Maybe through different loss functions?
| How to predict non negative values of a target in xgboost? | CC BY-SA 4.0 | null | 2023-04-04T14:04:17.853 | 2023-04-04T16:09:07.770 | null | null | 349496 | [
"machine-learning",
"random-forest",
"boosting"
] |
611831 | 2 | null | 610940 | 2 | null | I would argue that the analysis FFDs are a subset of the analysis techniques for RSM. Furthermore, the analogy I would use is the comparison of a multiple regression model where you are deciding between including an ordinal variable as a purely categorical variable (estimating each levels response separately) vs as a scalar variable (where you can assume some element of continuity across the spacing of the ordinal levels).
What is true in both FFD and RSM is that you are not examining every possible combination of experimental cells. With FFD this is often because the number of cells can become quite large (10 dichotomous factors would require $2^{10}=1024$ experimental cells), and with RSM you have continuous variables (so a selection of all possibilities is impossible). However, with careful selection of a subset of the possible combinations, you obtain enough information to tease out most 1st and 2nd order effects.
I am currently teaching a DoE course, and I would recommend the textbook we are using: D.C. Montgomery's Design and Analysis of Experiments. (We are using the 10th ed.) This book provides a very detailed breakdown of the variations of FFDs you might encounter (chapters 8 & 9). The coverage of RSM is very thorough (chapter 11), and the information is presented in a manner that shows how multiple regression analyses of the models from FFDs and RSMs are similar in nature.
I hope this response is useful.
| null | CC BY-SA 4.0 | null | 2023-04-04T14:31:44.300 | 2023-04-04T14:31:44.300 | null | null | 199063 | null |
611833 | 1 | null | null | 0 | 20 | In political science, it is not uncommon to find the use of Heckman selection models to account for the manner in which units may non-randomly be selected into a sample. While these models are not often employed with the language of causal inference (at least in the background of political science), this clearly seems like a method designed to improve causal inferences.
However, I rarely find discussion on selection models in popular introductory causal inference material. I have seen [this](https://stats.stackexchange.com/questions/440322/what-is-selection-modeling) Cross Validated post which very briefly touches up on this issue and I have reviewed the extremely brief discussion on selection models in this post's cited article ([Gelman and Zelizer 2015](https://journals.sagepub.com/doi/full/10.1177/2053168015569830)). Still, I am left with several questions concerning the role and comparative value of selection models for making causal inferences.
Why are selection models rarely discussed? Do they generally perform poorly compared to alternative methods? In addition, how does modeling a selection process differ from adjustment for confounders (whether via covariate adjustment, matching, or weighting)?
| Selection Models for Causal Inference | CC BY-SA 4.0 | null | 2023-04-04T14:44:53.010 | 2023-04-04T14:44:53.010 | null | null | 360805 | [
"causality",
"heckman"
] |
611834 | 1 | null | null | 1 | 23 | In Rubin 1990, Donald Rubin describes four different modes of statistical inference for causal effects:
- Randomization-based tests of sharp-null hypotheses - in the tradition of Fisher, if you've got an unconfounded assignment mechanism combined with a sharp null hypothesis of no treatment effect, you compute the value of a test statistic in your sample and compare that to the sampling distribution of the test statistic under the null to get a p-value (which can also give you a confidence interval by inverting the null hypothesis test)
- Randomization-based inference for sampling distributions of estimands (aka repeated sampling randomization-based inference) - in the tradition of Neyman for survey sampling, where you define an estimand of interest Q, select a statistic $\hat{Q}$ that is an unbiased estimator of the estimand, find a statistic $\hat{V}$ that is an unbiased estimator of the variance of $\hat{Q}$, assume the randomization distribution of $(Q - \hat{Q}) \sim N(0, \hat{V})$, and perform inference using that distribution (sometimes you assume a t-distribution instead of a normal distribution).
- Bayesian inference (aka Bayesian model-based inference) - take the assignment mechanism from the potential outcomes framework, supplement it with a joint probability model $Pr(X,Y)$ (factored in such a way that $Pr(X,Y) = \int \prod_{i=1}^{N} f(X_i, Y_i | \theta) Pr(\theta) d\theta$, where $\theta$ is a parameter such that it's straightforward to compute the causal estimand Q as a function of $\theta$) for your covariates and outcome, specify the prior distribution of the parameter $Pr(\theta)$ and calculate the posterior distribution of the causal estimand of interest Q.
- Superpopulation frequency inference (aka repeated-sampling model-based inference) - take the assignment mechanism and the probability model $\prod_{i=1}^{N} f(X_i, Y_i | \theta)$, but discard the prior distribution and draw frequency inferences about $\theta$ using tools of mathematical statistics like maximum likelihood, likelihood ratios, etc.
Suppose I am using an inverse probability of treatment weighted (IPTW) estimator to fit a marginal structural model to estimate the average treatment effect (ATE) of an active treatment relative to a control treatment using observational data. Let's make the usual assumptions needed to do this kind of inference (treatment version irrelevance, no interference, positivity, conditional exchangeability/no unmeasured confounders, no measurement error in X, correct specification of the nuisance model to estimate the weights, any missing data satisfy the stratified MCAR assumption).
If I want to do frequentist inference and get a 95% confidence interval for the ATE, am I appealing to inference mode 2 (repeated sampling randomization-based inference) or 4 (repeated sampling model-based inference)? Or is there some other argument used in this setting to justify variance estimation?
Rubin, Donald B. (1990). Formal modes of statistical inference for causal effects. Journal of Statistical Planning and Inference. 25. 279-292.
| What is the mode of inference for frequentist IPTW estimation in the causal inference context | CC BY-SA 4.0 | null | 2023-04-04T14:50:20.537 | 2023-04-04T19:13:55.283 | 2023-04-04T19:13:55.283 | 72298 | 72298 | [
"causality",
"assumptions",
"weights"
] |
611835 | 1 | null | null | 1 | 29 | I'm working on a project that does simulations / measurements where we measure values between [-1, 1] (but we also use absolutes [0, 1] to make life simpler).
It's 'good' when everything is as close as possible to zero.
It's 'bad' when a few measurements (even a single one) are larger than 0.05 and 'critical' when they are close to 1.
The number of measurements ranges from 20,000 to more than 1,000,000 per test.
When we make a change we simulate the effects. When plotting/graphing it's possible as a human to easily "see" if the changes made improvements (less 'peaks') or made it worse (more 'peaks'), e.g.:
11 peaks:
[](https://i.stack.imgur.com/DQvaM.png)
improvements which resulted in 4 peaks:
[](https://i.stack.imgur.com/09JOU.png)
I never needed to analyse such data mathematically, but I'm struggling to find some mathematical concepts, functions, algorithms or approaches to say anything meaningful about the reduction in peaks. As a human I can "see" the improvement is around ~2.75, but mathematically I tried using a few statistics:
- total area of graph
- mean values
- standard deviation values
- combination of values above, either multiplied or divided by others or the total measurement count
I think the problem is that the number of data points where peaks occur is very small, but I also can't hardcode the 0.05 and 1 values for filtering peaks out, because it's possible to have a test that's 0.001 everywhere but with peaks that are 0.05 (also bad) - I think normalizing such data would allow me to apply any solutions I get in this post (i.e., dividing all values by the largest value).
But in none of the above cases I got any human comprehensible values. I'm wondering what algorithmic approaches would allow me to automate such 'peaks' analysis, and allow a program to "see" the improvement is ~2.75x in the above graphs?
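One simple way to automate such a peak count, in line with the normalization idea above and without hardcoding 0.05, is to scale by the largest absolute value and then count contiguous runs above a relative threshold. A stdlib sketch; the data and the 0.5 threshold are purely illustrative:

```python
def count_peaks(values, threshold):
    # Count contiguous runs where |value| exceeds the threshold;
    # a run of consecutive points above threshold counts as one peak.
    peaks = 0
    in_peak = False
    for v in values:
        if abs(v) > threshold:
            if not in_peak:
                peaks += 1
                in_peak = True
        else:
            in_peak = False
    return peaks

def normalize(values):
    # Scale so the largest absolute value becomes 1.
    m = max(abs(v) for v in values)
    return [v / m for v in values]

data = [0.01, 0.02, 0.6, 0.7, 0.01, 0.02, 0.9, 0.01]
print(count_peaks(normalize(data), 0.5))  # -> 2
```

Comparing the counts (or the sum of peak heights) before and after a change gives a single improvement ratio, e.g. 11 peaks reduced to 4 gives the ~2.75x seen by eye.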
| Meaningful analysis of (expected) spikes in data? | CC BY-SA 4.0 | null | 2023-04-04T14:55:38.083 | 2023-04-09T00:51:07.260 | 2023-04-08T14:40:02.580 | 11887 | 384917 | [
"time-series",
"inference",
"descriptive-statistics"
] |
611836 | 1 | null | null | 0 | 27 | Suppose we fix the number of parameters used, why do AIC and adjusted R squared give the same combination of variables?
I observed from the R `leaps` package that `regsubsets` gives only one combination of variables for each subset size, while giving values of different selection criteria (AIC, BIC, adjusted R squared, etc.). They also have the documentation statement "Since this function returns separate best models of all sizes up to nvmax and since different model selection criteria such as AIC, BIC, CIC, DIC, ... differ only in how models of different sizes are compared, the results do not depend on the choice of cost-complexity tradeoff."
I understand that AIC and BIC differ only in the number of parameters, so the rankings of the scores of different combinations of predictors are the same for both criteria.
However, I do not understand why adjusted R squared also gives the same (reversed) ranking of combinations as AIC.
AIC is calculated as $2(p-\log L)$, where $L$ is the maximum likelihood for the model,
whereas adjusted R squared is calculated from the SSE of the regression model with $p$ variables.
How do loglikelihood and SSE reach the same ranking of combinations when fixing the number of parameters?
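For what it's worth, under Gaussian errors the maximized log-likelihood is a monotone decreasing function of SSE, $\log L = -\tfrac{n}{2}\left(\log(2\pi\,\mathrm{SSE}/n) + 1\right)$, so for models with the same number of parameters $p$, ordering by AIC and ordering by adjusted $R^2$ both collapse to ordering by SSE. A stdlib sketch of that check; the sample size, $p$, and SSE values are arbitrary illustrations:

```python
import math

def gaussian_loglik(sse, n):
    # Maximized log-likelihood of a Gaussian linear model with given SSE.
    return -0.5 * n * (math.log(2 * math.pi * sse / n) + 1)

def aic(sse, n, p):
    # AIC = 2p - 2 log L.
    return 2 * p - 2 * gaussian_loglik(sse, n)

def adj_r2(sse, sst, n, p):
    # Adjusted R^2 with p predictors.
    return 1 - (sse / (n - p - 1)) / (sst / (n - 1))

n, p, sst = 50, 3, 100.0
sse_a, sse_b = 20.0, 30.0  # two candidate models with the same p
# Lower SSE <=> lower AIC <=> higher adjusted R^2 when p is fixed.
print(aic(sse_a, n, p) < aic(sse_b, n, p))                   # -> True
print(adj_r2(sse_a, sst, n, p) > adj_r2(sse_b, sst, n, p))   # -> True
```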
| AIC and Adjusted R squared for fixed number of parameters in sequential selection | CC BY-SA 4.0 | null | 2023-04-04T14:56:25.263 | 2023-04-04T17:41:30.540 | 2023-04-04T17:41:30.540 | 53690 | 373321 | [
"r",
"regression",
"model-selection",
"r-squared",
"aic"
] |
611837 | 1 | null | null | 0 | 22 | If a survey question has responses ‘very dissatisfied’, ‘dissatisfied’, ‘no opinion’, ‘satisfied’ and ‘very satisfied’. Should I code the results as 1,2,3,4,5 or -2,-1,0,1,2. Are there any strengths and weaknesses each choice of coding has in terms of performing regression or for analysis? Should the number or survey responses have any impact on my decision?
| Advantages and Disadvantages of different coding for survey scale | CC BY-SA 4.0 | null | 2023-04-04T15:02:20.607 | 2023-06-02T05:42:43.753 | 2023-06-02T05:42:43.753 | 121522 | 384918 | [
"regression",
"survey",
"likert"
] |
611839 | 2 | null | 312562 | 1 | null | Autoencoders can be considered in the framework of `[Variational Autoencoders][1]` (VAEs), which effectively generalise deterministic autoencoders, where:
- each data sample $x$ is mapped to a distribution $q(z|x)$ over latent space, rather than to a unique value of $z$ (as given by a deterministic encoder function)
- similarly, each latent representation $z$ is mapped to a distribution over the data space $p(x|z)$, e.g. a small Gaussian around a learned mean;
- the latent variables are fitted to a prior distribution $p(z)$
The point of considering VAEs is that they learn a proper latent variable model where terms of the loss function have a meaningful interpretation.
The deterministic autoencoder can be seen as a special case of the VAE framework where:
- the variance of $q(z|x)$ is reduced towards 0, so that $q(z|x)$ in the limit tends/concentrates to a deterministic function of $x$;
- the mean square ($L_2$) loss (mentioned in the question) relates to the reconstruction term of the VAE loss function and is equivalent to assuming $p(x|z)$ is Gaussian whose mean is learned as a function of $z$; and
- the second (KL or regularisation) term of the VAE loss, including the prior over $z$, is dropped (so no assumed structure is imposed in the latent space).
The distribution choice for $p(x|z)$ can be varied, which corresponds to different metrics in $x$-space (e.g. $L_1$ equivalent to Laplacian, etc).
| null | CC BY-SA 4.0 | null | 2023-04-04T15:06:06.810 | 2023-04-04T15:12:46.590 | 2023-04-04T15:12:46.590 | 307905 | 307905 | null |
611840 | 2 | null | 564401 | 0 | null | I will give a response from the epidemiological side of causal inference, which uses a slightly different terminology ('exchangeable' rather than 'ignorable') with some subtle differences. Citing from [What If](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/) (Hernán & Robins, 2023, p. 28, emphasis mine):
>
Rubin (1974, 1978) extended Neyman’s theory for randomized experiments
to observational studies. Rosenbaum and Rubin (1983) referred to the combination of exchangeability and positivity as weak ignorability, and to the combination of full exchangeability (see Technical Point 2.1) and positivity as strong ignorability
Then, from Technical point 2.1 (p. 15, emphasis mine), we can see that:
>
Formally, let $\mathcal{A} = \{a, a', a'', \dots \}$ denote the set of all treatment values present in the population, and $Y^{\mathcal{A}} = \{Y^a, Y^{a'}, Y^{a''}, \dots\}$ the set of all counterfactual outcomes. Randomization makes $Y^{\mathcal{A}} \perp A$. We refer to this joint independence as full exchangeability. [...] For a continuous outcome, exchangeability $Y^{a} \perp A$ implies mean exchangeability $\operatorname{E}[Y^a | A = a'] = \operatorname{E}[Y^a]$ but mean exchangeability does not imply exchangeability because distributional parameters other than the mean (e.g., variance) may not be independent of treatment.
---
In other words, the distinction between weak ignorability and strong ignorability boils down to whether we are assuming mean exchangeability or full exchangeability (which I deem slightly more informative terms).
It is no surprise this distinction is hard to grasp since the bulk of most applications and expositions of causal inference problems is only concerned with average causal effects, so we end up erasing the distinction and just talking about ignorability or exchangeability per se. At the same time, causal assumptions from Rubin's terminology involve more than a single property (especially in the case of SUTVA), so that also does not help with clarity.
| null | CC BY-SA 4.0 | null | 2023-04-04T15:10:07.630 | 2023-04-04T15:10:07.630 | null | null | 180158 | null |
611841 | 1 | null | null | 0 | 9 | Consider this simplified situation: I have a single time series and from that I compute a set of parameters (e.g. mean and variance). These parameters are however not independent from one another (e.g. maybe time series with higher means will have more variance).
QUESTION 1)
Does it make sense to use independent component analysis (ICA) to compute a set of independent parameters from the original set of manually extracted features?
The aim of this is to use Hidden Markov Models to cluster time series based on their patterns. In particular, I consider a portion of the time series to emit a specific set of parameters, which will later be used to cluster the data in an unsupervised way, thus segmenting the time series into a predefined number of clusters. I want to use the IndependentComponentDistribution of the Pomegranate package, which means that the emitted observations will be considered independent from one another.
Pomegranate links:
- IndependentComponentDistribution: https://pomegranate.readthedocs.io/en/latest/Distributions.html
- Hidden Markov Model: https://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.html
QUESTION 2)
If I can do what I stated in question 1, does it make sense to use the independent components as observations for the hidden Markov Model?
Thank you in advance
| Can I use Independent component Analysis on features extracted from a single timeseries? | CC BY-SA 4.0 | null | 2023-04-04T15:19:04.430 | 2023-04-04T15:19:04.430 | null | null | 375539 | [
"python",
"feature-selection",
"hidden-markov-model",
"methodology",
"independent-component-analysis"
] |
611842 | 2 | null | 611794 | 1 | null | First note that
\begin{align*}
& |E[(X - \operatorname{sign}(X)\tau)I(|X| > \tau)]| \\
=& |E[(X - \tau)I(X > \tau) + (X + \tau)I(X < -\tau)]| \\
\leq & E[|(X - \tau)I(X > \tau) + (X + \tau)I(X < -\tau)|].
\end{align*}
Now note the function $f(x) = |(x - \tau)I_{(\tau, \infty)}(x) + (x + \tau)I_{(-\infty, -\tau)}(x)|, x \in \mathbb{R}$ is dominated by the function $g(x) = x^2/\tau, x \in \mathbb{R}$ (draw a picture). The inequality then follows by taking "$E$" on both sides of the inequality $f(X) \leq g(X)$.
---
When $\tau = 2$, the graphs of $f(x)$ and $g(x)$ are shown as follows:
[](https://i.stack.imgur.com/1C3NP.png)
```
f <- function(x, tau) {
abs((x - tau) * (x > tau) + (x + tau) * (x < -tau))
}
g <- function(x, tau) {
x^2 / tau
}
x <- seq(-5, 5, len = 1000)
y1 <- f(x, 2)
y2 <- g(x, 2)
plot(x, y1, type = 'n', xlab = '', ylab = '', ylim = c(0, 3))
lines(x, y1, lty = 1)
lines(x, y2, lty = 2)
legend('bottomright', c("f(x)", "g(x)"), lty = 1:2)
```
| null | CC BY-SA 4.0 | null | 2023-04-04T15:19:10.263 | 2023-04-04T15:57:15.103 | 2023-04-04T15:57:15.103 | 20519 | 20519 | null |
611843 | 2 | null | 566933 | 1 | null | If you wanted to do this for the mean, the natural place to do would be the paired t-test. Despite there being two groups, the paired t-test is a one-sample test. That sample is composed of the pairwise differences between the two paired samples, such as subject 1 after minus subject 1 before, subject 2 after minus subject 2 before, etc. As scientists, we know how that once sample was created and use that to guide the interpretation, but the math does not care that the numbers came from pairwise differences.
Consequently, you are not doing a two-sample test, and a two-sample variance F-test does not make sense. If you have some sense of what the variance of the paired differences should be under a null hypothesis, you could do a one-sample variance test on the paired differences. However, I do not see a default null hypothesis like in the case of means, where it makes sense to assume there is no difference ($\mu=0$) under the null hypothesis. The variance of the pairwise differences could be four or ten or a billion, depending on the circumstances.
[As usual, the JBStatistic YouTube channel has a nice video on one-sample variance testing.](https://m.youtube.com/watch?v=PweabcpqzYI)
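To make the "one-sample test on differences" point concrete, here is a stdlib sketch that reduces the paired data to a single sample of differences and computes the ordinary one-sample $t$ statistic on it; the data and function name are made up:

```python
import math

def paired_t_statistic(before, after):
    # Reduce the two paired samples to one sample of differences,
    # then compute the ordinary one-sample t statistic against mu = 0.
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

before = [10.0, 12.0, 9.0, 11.0]
after = [12.0, 13.0, 11.0, 12.0]
print(round(paired_t_statistic(before, after), 3))  # -> 5.196
```

After the first line of `paired_t_statistic`, nothing in the computation knows the numbers came from pairs, which is exactly the point above.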
| null | CC BY-SA 4.0 | null | 2023-04-04T15:22:12.250 | 2023-04-04T15:22:12.250 | null | null | 247274 | null |
611844 | 2 | null | 611254 | 2 | null | This is a clever question, and demonstrates careful consideration of the algorithms we use and why. And the analysis of power is indeed a key motivator in deciding which test to use.
However, in this example, the distribution of the proposed test statistic is not really the issue. The problem is that the estimate for the shared variance (standard deviation) that you propose will cause the test-statistic to "break-down". In brief, once the distance between the population means becomes too large, that difference will dominate the value of $S$. In fact, this value will essentially become $S\approx\frac{|\mu_1 - \mu_2|}{2}$.
Thus, your value for $T_1 = 2 \sqrt{\frac{nm}{n+m}}$ doesn't actually depend on the values in your data set, but on the sample sizes of your data set.
Now, one could argue that for large differences, $T_2$ will be highly powered and $T_1$ will be "always" powered to detect a difference (for any reasonable $\alpha$). So, I believe the real question here might be what the difference between the tests is when there is an appropriately small difference in the population means. (In truth, this "answer" is probably more appropriately a comment...but it was too long to include as a comment.)
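The break-down can also be checked numerically: a standard deviation computed from the pooled data while ignoring group membership is dominated by the mean difference and approaches $|\mu_1-\mu_2|/2$ as the groups separate. A stdlib sketch with two tight, widely separated samples; the values are illustrative, and this pooling is my reading of the proposed $S$:

```python
import math

def pooled_sd(sample1, sample2):
    # Standard deviation of the combined data, ignoring group membership.
    combined = sample1 + sample2
    n = len(combined)
    mean = sum(combined) / n
    return math.sqrt(sum((x - mean) ** 2 for x in combined) / n)

# Two tight samples far apart: the mean difference dominates the spread.
s1 = [0.0, 0.1, -0.1, 0.0]
s2 = [100.0, 100.1, 99.9, 100.0]
print(pooled_sd(s1, s2))  # close to |mu1 - mu2| / 2 = 50
```

Here the within-group spread is about 0.07, yet the pooled estimate is essentially 50, so a test statistic built on it barely reflects the data values at all.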
| null | CC BY-SA 4.0 | null | 2023-04-04T15:27:51.767 | 2023-04-04T15:27:51.767 | null | null | 199063 | null |
611845 | 1 | null | null | 0 | 28 | I have been reading Kruskal's 1964 "Nonmetric multidimensional scaling: A numerical method" and I am slightly confused by some details, this is probably due to my lack of knowledge of optimisation.
The goal of nonmetric multidimensional scaling is as follows: given pairwise dissimilarities between $n$ objects, we want to find a configuration of $n$ points in $\mathbb{R}^p$ such that the distances between the points have a similar ordering to the ordering of the dissimilarities. For any given configuration of $n$ points in $\mathbb{R}^p$, $(x_1,x_2,...,x_n)$, Kruskal developed (in another paper) a measure of how well the configuration represents the dissimilarities; he called this measure the stress, $S(x_1,x_2,...,x_n)$.
We can regard the space of all configurations as $\mathbb{R}^{np}$, each point in this space has a definite value of $S(x_1,x_2,...,x_n)$, so we can use standard gradient descent to find the configuration with the lowest value of $S(x_1,x_2,...,x_n)$ (if we don't get stuck in a local minimum), this will give us the 'best' configuration.
Two points I don't understand:
- Is $S(x_1,x_2,...,x_n)$ a differentiable function of $x_{11}, x_{12}, ... , x_{np}$? On page 126-128 Kruskal describes an algorithm that is required for calculating $S(x_1,x_2,...,x_n)$, from the description of the algorithm, it doesn't look like $S(x_1,x_2,...,x_n)$ is differentiable. Kruskal does provide formulas for the derivative of $S(x_1,x_2,...,x_n)$ on page 125-126, are these just approximations? There is no simple analytic formula for $S(x_1,x_2,...,x_n)$.
- On page 121, Kruskal describes gradient descent, he writes the negative of the derivative of $S(x_1,x_2,...,x_n)$ as $g$, given a configuration $x$, the new configuration is given by $x'_{is} = x_{is} + \alpha \frac{g_{is}}{\text{mag}(g_{is})}$. He writes $\text{mag}(g) = \frac{\sqrt{\sum g_{is}^2}}{\sqrt{\sum x_{is}^2}}$, I don't understand why we have to divide by $\sqrt{\sum x_{is}^2}$, shouldn't we just normalise $g_{is}$ to have length $1$? He also writes "Of course $x'$ should be normalised before further use." Are we normalising $x$ every step of gradient descent?
| Kruskal's non-metric multidimensional scaling | CC BY-SA 4.0 | null | 2023-04-04T15:30:04.243 | 2023-04-04T16:14:43.000 | 2023-04-04T16:14:43.000 | 3277 | 68301 | [
"optimization",
"multivariate-analysis",
"algorithms",
"multidimensional-scaling"
] |
611846 | 2 | null | 611265 | 0 | null | Artificially dichotimizing variables decreases power. This is true for main effects, and for interactions. The difference in your findings is due to this decrease in power.
Here's an example where power is 83% when $X_1$ and $X_2$ are continuous:
```
> library(InteractionPoweR)
> power_interaction(n.iter = 10000,N = 800,r.x1.y = .2,r.x2.y = .2,r.x1x2.y = .1,r.x1.x2 = .1)
Performing 10000 simulations
N pwr
1 800 0.8309
```
Power becomes 45% when they are both artificially dichotomized:
```
> power_interaction(n.iter = 10000,N = 800,r.x1.y = .2,r.x2.y = .2,r.x1x2.y = .1,r.x1.x2 = .1,k.x1 = 2,k.x2 = 2,adjust.correlations = F)
Performing 10000 simulations
N pwr
1 800 0.4449
```
| null | CC BY-SA 4.0 | null | 2023-04-04T15:33:05.467 | 2023-04-04T17:10:24.830 | 2023-04-04T17:10:24.830 | 288142 | 288142 | null |
611848 | 1 | null | null | 0 | 75 | I am running a basic difference-in difference (DiD) model. I would like to explain the effect of increasing the price of fares for students only. I use DiD with adults as a control group. I have monthly data of tickets sold for each category.
To avoid seasonality I have data before treatment, that is from April 2021 - December 2021 and after treatment. It started the 1st of April, so the post-periods are from April 2022 - December 2022. Can I do DiD like this?
When I did a calculation by hand (not modelled), I realized that the ratio of students/adults was 0.518 before treatment and 0.35 after the treatment. I calculated that the decrease that the treatment meant for students was around 31%. I calculated that under the assumption that the ratio would stay the same as before treatment, because with DiD I presumed that the only thing that changed is the TREATMENT.
Also, I calculated how much demand for adults went up: it was around 81%, while the students went up only 21.4%. So, I used the same logic: the difference between the counterfactual in which students would also rise 81% and the real state of the world.
Then, I ran a DiD model using the standard `lm()` function with no other predictors; all the group averages stayed the same as expected. But, when I calculated the counterfactual to know how many students there would be if there was no intervention, it gives me a much higher number.
In the DiD analysis, the demand went lower by 50% and not 31%.
How is this possible? The DiD counterfactual somehow presumes that the students should increase the demand even more! Why?
This is how the data look:
[](https://i.stack.imgur.com/NFdES.png)
And here are my diff-in-diff results:
[](https://i.stack.imgur.com/GCkAH.png)
Thanks for your help!
Ratio of students to adults:[](https://i.stack.imgur.com/bTJbK.png)
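For what it's worth, part of the gap seems to be that the regression DiD counterfactual is additive in levels (parallel trends in counts), while the constant students/adults ratio calculation is multiplicative (parallel trends in percentage growth). A stdlib sketch with made-up group means chosen only to roughly reproduce the ratios quoted above (0.518 before, about 0.35 after):

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    # Classic two-by-two DiD: change in treated minus change in control.
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

def counterfactual(treat_pre, ctrl_pre, ctrl_post):
    # Additive parallel trends: treated would have changed by the
    # control group's *level* change, not its percentage change.
    return treat_pre + (ctrl_post - ctrl_pre)

# Made-up monthly ticket means (students = treated, adults = control).
students_pre, students_post = 1000.0, 1214.0  # +21.4%
adults_pre, adults_post = 1930.0, 3493.0      # +81%
print(did_estimate(students_pre, students_post, adults_pre, adults_post))  # -> -1349.0
print(counterfactual(students_pre, adults_pre, adults_post))               # -> 2563.0
```

With these illustrative numbers, the additive counterfactual of 2563 implies a shortfall of about 53% relative to the observed 1214, whereas a multiplicative counterfactual (1000 grown by 81%, i.e. 1810) implies a shortfall of about 33%; the two calculations answer different questions, which mirrors the 50% vs 31% discrepancy described.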
| Interpretation of a difference-in-difference model | CC BY-SA 4.0 | null | 2023-04-04T15:53:40.927 | 2023-04-18T23:21:20.050 | 2023-04-18T23:21:20.050 | 246835 | 383838 | [
"r",
"regression",
"mathematical-statistics",
"difference-in-difference",
"lm"
] |
611849 | 2 | null | 73623 | 1 | null | Another convenient way of understanding this connection is from the modeling perspective. Recall that if $X$ and $Y$ are independent, then the PDF of $X+Y$ are the convolution of the marginal PDFs. In kernel density estimation (KDE), we may think about the problem via the following model
$$ X = X' + \varepsilon,$$
where $X'$ has the discrete uniform distribution over the $n$ observations $\{x_1, x_2, \ldots, x_n\}$ with density
$$ f(x') = \frac{1}{n} \sum_{i=1}^n \delta(x'- x_i),$$
as provided above by whuber and $\varepsilon$ is a continuous noise variable independent of $X'$. Then the density of $X$ is their convolution as given by the KDE formula. Depending on the PDF of $\varepsilon$, different kernel functions can be used, e.g., Gaussian kernel.
I believe similar explanations can be made for kernel regression and local linear/polynomial regression, where the focus is more on the mean function. Nevertheless, the mean function is manifested by the density, especially if the Gaussian kernel is used.
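The convolution view above can be written out directly: the KDE at a point is the average of $n$ kernel densities centred at the observations, i.e., exactly the density of $X' + \varepsilon$ with Gaussian $\varepsilon$. A stdlib sketch; the bandwidth $h$ and the data are arbitrary:

```python
import math

def gaussian_kernel(u):
    # Standard normal density.
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def kde(x, data, h):
    # Average of n Gaussian densities centered at the observations:
    # the density of X' + eps, with eps ~ N(0, h^2) and X' uniform on data.
    return sum(gaussian_kernel((x - xi) / h) for xi in data) / (len(data) * h)

data = [0.0, 1.0, 2.0]
# At x = 1.0 this is the average of the three bump heights there.
print(round(kde(1.0, data, h=1.0), 4))  # -> 0.2943
```

Changing `gaussian_kernel` to another density (e.g. Laplacian) changes the assumed distribution of $\varepsilon$, which is the correspondence between kernels and noise models noted above.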
| null | CC BY-SA 4.0 | null | 2023-04-04T16:03:26.980 | 2023-04-04T16:03:26.980 | null | null | 56780 | null |
611850 | 2 | null | 611829 | 7 | null | Aside from the obvious: "transform the response data such that we model the log of them and back-transform them to get the final non-negative estimates", we can always set our modelling objective as `reg:gamma` or `reg:tweddie` and ensure the non-negativity of the predicted results. And if our modelling target is counts we can use `count:poisson` too. All these options can be found in the [XGBoost Parameters](https://xgboost.readthedocs.io/en/stable/parameter.html) page.
| null | CC BY-SA 4.0 | null | 2023-04-04T16:09:07.770 | 2023-04-04T16:09:07.770 | null | null | 11852 | null |
611851 | 2 | null | 611700 | 4 | null | This question raises a couple of interesting points:
- In a comment, @Dave points out that the normal QQ plot is "not so awful to suggest infinite variance".
- In an answer (+1), @wzbillings points out that it's not clear what theory (if any) the performance package uses to justify the Cauchy distribution for the errors.
As @Dave advises, let's look at the data first. (I've renamed the independent variable $x$ and the dependent variable $y$ for clarity.)

Clearly both variables are nonnegative (and in fact the values are mostly integers but the OP hasn't provided many details about the data). The relationship between $x$ and $y$ is reasonably linear and the variability of $y$ doesn't appear to increase for large $x$. Near the origin, however, both variables are constrained, and that "squeezes" the residuals: their variance is lower for $x$ close to 0, and as a result the distribution of the residuals is heavy tailed.
Rather than choose between the Normal and the Cauchy distribution for the errors, let's fit a simple linear regression with $t$-distributed errors. And we will use theory to explore what degrees of freedom, $\nu$, are most consistent with the data. (Recall that the Cauchy is equivalent to $t$ distribution with $\nu = 1$ degree of freedom and the Normal is equivalent to $t$ with infinitely many degrees of freedom.)
The idea is to compute (numerically) the profile likelihood $L(\nu)$ for the degrees of freedom parameter $\nu$.
We start by writing down the likelihood of all four parameters in a simple linear regression with $t$-distributed errors:
$$
\begin{aligned}
L(\beta_0,\beta_1,\sigma,\nu) = \prod_i f_\nu\left(\frac{y - (\beta_0 + \beta_1x)}{\sigma}\right) \times \frac{1}{\sigma}
\end{aligned}
$$
where $f_\nu$ is the density of the $t$ distribution with $\nu$ degrees of freedom and $1/\sigma$ is the Jacobian of the transformation $y_0 = (y - \mu)/\sigma = (y - \beta_0 - \beta_1x)/\sigma$.
Then we find the profile likelihood $L(\nu)$ of the degrees of freedom parameter by maximizing the likelihood $L(\beta_0,\beta_1,\sigma,\nu)$ with respect to $\beta_0,\beta_1$ and $\sigma$. (Actually, we minimize the negative log likelihood; see R code below.)
$$
\begin{aligned}
L(\nu) = \max_{\beta_0,\beta_1,\sigma} L(\beta_0,\beta_1,\sigma,\nu)
\end{aligned}
$$
And here is the plot of the profile likelihood.

A couple of conclusions from this analysis:
- The profile likelihood $L(\nu)$ is maximized at $\nu = 4$ but anything between 3 and 5 degrees of freedom is a good fit to the data.
- As @Dave claimed, the Normal distribution ($\nu \approx 30$) is a better fit than the Cauchy distribution ($\nu = 1$).
References
[1] In All Likelihood: Statistical Modelling And Inference Using Likelihood. Y. Pawitan. Oxford University Press (2013)
[2] A [note](https://stats.stackexchange.com/a/195113/237901) by @kjetilbhalvorsen about the dangers of maximum likelihood estimation for the degrees of freedom of the $t$ distribution. The warning concerns small sample sizes in particular; here $n = 100$, so the profile likelihood approach is probably okay.
---
```
# Requires data x and y
n <- length(y)
m0 <- lm(y ~ x)
library("nloptr")
minimize <- function(x0, func, lb = NULL, ub = NULL) {
opts <- list(
"algorithm" = "NLOPT_LN_SBPLX",
"xtol_rel" = 1.0e-6
)
a <- nloptr(x0, func, lb = lb, ub = ub, opts = opts)
list(x = a$solution, fx = a$objective)
}
negloglik <- function(beta0, beta1, sigma, nu) {
mu <- beta0 + beta1 * x
y0 <- (y - mu) / sigma
# Calculate the log likelihood for y0
ll <- sum(dt(y0, nu, log = TRUE))
# Add the log of the Jacobian
ll <- ll - n * log(sigma)
  # Calculate the negative log likelihood for y
-ll
}
beta0.hat <- m0$coef[1]
beta1.hat <- m0$coef[2]
sigma.hat <- sigma(m0)
profile <- function(nu) {
sapply(nu, function(nu) {
soln <- minimize(
c(beta0.hat, beta1.hat, sigma.hat),
function(params) negloglik(params[1], params[2], params[3], nu),
lb = c(-Inf, -Inf, 0),
ub = c(Inf, Inf, Inf)
)
soln$fx
})
}
nu <- seq(1, 30, by = 0.25)
nll <- profile(nu)
ll <- exp(min(nll) - nll)
plot(nu, ll,
type = "l",
xlab = quote(paste("degrees of freedom, ", nu)),
ylab = quote(L(nu)),
main = quote(paste("Profile likelihood L(", nu, ")"))
)
abline(h = 0.15, lwd = 0.3)
```
| null | CC BY-SA 4.0 | null | 2023-04-04T16:19:02.540 | 2023-04-07T22:13:41.953 | 2023-04-07T22:13:41.953 | 237901 | 237901 | null |
611853 | 1 | null | null | 1 | 48 | How should one understand and explain a change in the sign of a variable's coefficient after including its square in a regression? For example, X loads negatively, but loads positively once the square of X is also included. Does this mean something is wrong with the regression specification? If so, what measures should we take to address it?
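To make the pattern concrete, here is a small simulated example of the kind of sign flip I am asking about (the true relationship and coefficient values are chosen only for illustration):

```r
# y has an inverted-U relationship with x: y = -(x - 2)^2 + noise,
# i.e. y = -x^2 + 4x - 4 + noise, so the true linear coefficient is +4.
set.seed(123)
x <- runif(500, 0, 10)
y <- -(x - 2)^2 + rnorm(500)
coef(lm(y ~ x))["x"]           # negative: the downward arm dominates
coef(lm(y ~ x + I(x^2)))["x"]  # positive, close to the true value +4
```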
| How to understand and explain the sign of variable changes after including its square in a regression? | CC BY-SA 4.0 | null | 2023-04-04T16:33:00.850 | 2023-04-04T22:00:52.607 | 2023-04-04T16:50:44.860 | 247274 | 384921 | [
"regression",
"regression-coefficients",
"linear"
] |
611854 | 2 | null | 611812 | 2 | null | Your integral is
$$
\int x^2 1(x \ge 1)\frac{p(x)}{q(x)}q(x)dx
$$
I'm using the notation of your code, not your $\LaTeX$.
You simulate from the exponential $q$, and evaluate $x_i^2 1(x_i \ge 1)\frac{p(x_i)}{q(x_i)}$ on each sample, and then take the mean.
```
mean((p(x)/q(x))*f(x))
```
That gives you the expected answer. Self-normalized importance sampling is not necessary here because you can evaluate the normalized version of all the densities. This code implements regular importance sampling, which is algorithm 3.1 in your reference.
---
A sufficient condition to guarantee consistency of the regular IS estimator is that the proposal dominates the integrand:
$$
|f(x)|\,p(x) > 0 \implies q(x) > 0
$$
for all $x$. In your case this holds because both the target and the proposal have support covering $\{x : x \ge 1\}$, where $f$ is nonzero.
A sufficient condition to guarantee consistency of the self-normalized IS estimator is stronger: the proposal must dominate the target itself,
$$
p(x) > 0 \implies q(x) > 0 \quad \text{for all } x.
$$
This isn't true in your case because $q$ only covers the positive numbers, but $p$ extends across positive and negative numbers. This can be diagnosed with `mean((p(x)/q(x)))`, which should be close to $1$ (for valid proposals), but is not.
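To make this concrete, here is a self-contained sketch. I'm assuming, as in your code, a standard normal target `p`, an Exp(1) proposal `q`, and `f(x) = x^2 * (x >= 1)`; for that setup the exact value of the integral is $\varphi(1) + 1 - \Phi(1) \approx 0.4006$.

```r
# Regular importance sampling for E[X^2 * 1(X >= 1)], X ~ N(0, 1),
# using an Exp(1) proposal (names p, q, f mirror the question's code).
set.seed(1)
p <- function(x) dnorm(x)            # normalized target density
q <- function(x) dexp(x, rate = 1)   # proposal density
f <- function(x) x^2 * (x >= 1)
x <- rexp(1e5, rate = 1)             # draws from the proposal
mean(f(x) * p(x) / q(x))             # close to the exact value 0.4006

# Diagnostic for self-normalized IS: the mean weight should be near 1
# for a valid proposal. Here q covers only x > 0 while p has half its
# mass on x < 0, so the mean weight is near 0.5 instead.
mean(p(x) / q(x))
```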
| null | CC BY-SA 4.0 | null | 2023-04-04T16:41:18.860 | 2023-04-06T16:00:20.403 | 2023-04-06T16:00:20.403 | 8336 | 8336 | null |