Id stringlengths 1 6 | PostTypeId stringclasses 7 values | AcceptedAnswerId stringlengths 1 6 ⌀ | ParentId stringlengths 1 6 ⌀ | Score stringlengths 1 4 | ViewCount stringlengths 1 7 ⌀ | Body stringlengths 0 38.7k | Title stringlengths 15 150 ⌀ | ContentLicense stringclasses 3 values | FavoriteCount stringclasses 3 values | CreationDate stringlengths 23 23 | LastActivityDate stringlengths 23 23 | LastEditDate stringlengths 23 23 ⌀ | LastEditorUserId stringlengths 1 6 ⌀ | OwnerUserId stringlengths 1 6 ⌀ | Tags list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
611970 | 2 | null | 611968 | 3 | null | The best regression method depends on the data generating process.
Regardless of sample size, if your response is linearly related to the covariates, then linear regression should be best.
If the relationship is complex but smooth, and you think uncertainty quantification is important, GP regression might be best.
If the relationship is complex and uncertainty quantification is not important, a neural net might be best.
I really hate to say it, but the answer is "it depends".
| null | CC BY-SA 4.0 | null | 2023-04-05T12:41:46.063 | 2023-04-05T12:41:46.063 | null | null | 283201 | null |
611971 | 2 | null | 130389 | 0 | null | First you should consider whether you would like to do this at all.
A good start would be to read this even more recent (2022) [paper by Costello and Watts](https://arxiv.org/pdf/2211.02613.pdf) to learn about the various theoretical pitfalls associated with hypothesis testing in a Bayesian framework.
One of the most serious problems they see with the usual t-test null hypothesis of $\mathcal{H}_0: \Delta\mu = 0$ is that this is a "point-form null hypothesis". As $\Delta\mu$ is a real number, and its distribution is continuous, the probability $\mathrm{Pr}(\Delta\mu = 0)$ will unfortunately be exactly 0. Now, "if the null hypothesis is always false, what’s the big deal about rejecting it?" -- they ask in the paper, quoting Cohen (2016).
After showing that the currently used "Bayesian t-test" is equivalent to its "classical"/frequentist counterpart (thus offering no obvious advantages), Costello and Watts also propose a different testing procedure which is supposed to be more informative. See the paper for details.
Disclaimer: I haven't done such an analysis yet (just reading myself into the subject). I would be rather curious to know if there's someone who has some practical experience with what Costello and Watts propose.
| null | CC BY-SA 4.0 | null | 2023-04-05T12:51:08.130 | 2023-04-05T12:51:08.130 | null | null | 43120 | null |
611972 | 2 | null | 611968 | 2 | null | The size of the dataset has relatively little relevance to the question of "what type of regression model should I use."
In essence, this is like asking: "If I'm only building a 'small' house, should I use a Phillips head screwdriver or a flat head screwdriver?" The type of screwdriver you should use depends on the kind of screws you are using, not the size of the house you are building.
Different kinds of models are used to estimate/predict different types of dependent variables (e.g. a logit model for a binary variable, a linear model for a continuous one), different data structures (e.g. a multilevel model for data which are "nested" at multiple levels), and different types of research questions under different assumptions (e.g. a "Targeted Maximum Likelihood Estimation" method for causal inference, given certain assumptions about the factors associated with treatments). Certainly, different kinds of models often have different assumptions about the sample size under which they produce reliable results, but that's secondary to the question of whether you are using the "right tool for the job."
Also, while "small" is a subjective term, I would not personally use that term to describe a dataset with several thousand observations...although this is perhaps due to my bias as someone who uses "old school" statistics for hypothesis testing and estimation as opposed to machine learning.
| null | CC BY-SA 4.0 | null | 2023-04-05T12:54:33.493 | 2023-04-05T12:54:33.493 | null | null | 291159 | null |
611973 | 1 | null | null | 0 | 20 | To overcome some problems with a global single bandwidth on my data with the R base function `density()`, I would like to try out adaptive (aka variable) bandwidths. The standerd reference seems to be
>
Abramson, I. S. (1982): "On bandwidth variation in kernel estimates - a square root law." Annals of Statistics,10, 1217-1223
Although this article is quite old, I have not found an R package that implements it and
- works on univariate data (ks and spatstat.explore only provide methods for point clouds in a plane, i.e., for bivariate data)
- returns the estimated bandwidths for later use in calls to an actual density estimator (IsoplotR::kde does not return the estimated bandwidths, nor does it allow providing these bandwidths via an argument)
Is anyone aware of implementations with these two features?
Moreover, I wonder whether it is guaranteed that a variable bandwidth density estimate actually yields a normalized probability density, i.e., $\int_{-\infty}^\infty f=1$?
| Adaptive (variable) bandwidth kernel density estimation for univariate data | CC BY-SA 4.0 | null | 2023-04-05T12:57:53.930 | 2023-04-05T13:30:15.973 | 2023-04-05T13:30:15.973 | 244807 | 244807 | [
"r",
"kernel-smoothing",
"nonparametric-density"
] |
611974 | 1 | null | null | 0 | 10 | Does anyone know how to calculate/ test to carry out to see what % of your participants are meeting the RDA of the nutrient in SPSS ?
| Calculate percentages | CC BY-SA 4.0 | null | 2023-04-05T12:59:29.233 | 2023-04-05T12:59:29.233 | null | null | 384827 | [
"spss",
"percentage"
] |
611975 | 2 | null | 611174 | 0 | null | With ordered factors, the intercept represents the expected value of the response at the mean of the factor's levels; in the snippet of data, this is `1.5`, and the remaining coefficients represent polynomials of the factor levels. In the model shown, there is only a linear polynomial term because there are two levels.
With this kind of factor, the output from `parametric_effect()` is the model estimate (partial effect) for each factor level:
```
r$> ds <- data_slice(m, spreg = evenly(spreg))
r$> fitted_values(m, data = ds, exclude = c("(Intercept)", "s(year)"))
# A tibble: 2 × 6
spreg year fitted se lower upper
<ord> <dbl> <dbl> <dbl> <dbl> <dbl>
1 n 2010 11996. 11286. -10124. 34115.
2 y 2010 -11996. 11286. -34115. 10124.
```
Because there is only a linear polynomial contrast, the two estimates deviate equally from the intercept (the mean of the levels) in opposite directions (hence the sign on the fitted values).
| null | CC BY-SA 4.0 | null | 2023-04-05T13:04:37.400 | 2023-04-05T13:04:37.400 | null | null | 1390 | null |
611976 | 1 | null | null | 2 | 64 | My problem:-
I'm writing a test to see if a series of numbers come from certain theoretical distributions. And I need a p value so that software can automatically accept or reject $H_0$ on an $\alpha=0.05$ basis.
The context:-
[Does the 2-sample KS test work? If so, why is it so unintuitive?](https://stats.stackexchange.com/q/379655/74762)
[Is the Kolmogorov-Smirnov-Test too strict if the sample size is large?](https://stats.stackexchange.com/q/333892/74762)
[Kolmogorov–Smirnov test: p-value and ks-test statistic decrease as sample size increases](https://stats.stackexchange.com/q/286694/74762)
[Is normality testing 'essentially useless'?](https://stats.stackexchange.com/q/2492/74762)
[Is there a rule of thumb regarding effect size and the two sample KS test?](https://stats.stackexchange.com/q/363402/74762)
Given the number of unspecific KS sample size answers here, it seems that this is still a valid (open) problem.
My question:-
Given the context and learned comments, it appears that the KS test only holds for a small sample size, $n$. Yet I can't find any quantitative recommendation on this site for $n$. So if I have a total sample size of one million values, should I just randomly pick a hundred of them for the KS test?
| Should we always use 100 samples for an equivalence test given the KS test size problems? | CC BY-SA 4.0 | null | 2023-04-05T13:15:33.740 | 2023-04-05T23:14:44.337 | 2023-04-05T13:46:59.663 | 247274 | 74762 | [
"hypothesis-testing",
"kolmogorov-smirnov-test"
] |
611977 | 1 | null | null | 0 | 13 | How can I calculate Somers' D for an XGBoost model?
For a regression, I use the score that I calculate and the binary response variable.
I calculate the score with this formula for every variable, and then sum the variable scores:
```
Var_score=(Intercept/Number_of_vars+WoE*Variable_estimate)*(-100);
Score = Var_score1+Var_score2
```
What should I use for Somers' D calculation - prediction and the binary response?
Note: I use `somers2()` function in `R`.
| Somers' D for an XGBoost model | CC BY-SA 4.0 | null | 2023-04-05T13:37:30.607 | 2023-04-05T13:37:30.607 | null | null | 194458 | [
"boosting",
"gini",
"somers-d"
] |
611978 | 1 | null | null | -1 | 47 | [This study](https://www.bmj.com/content/350/bmj.h1193) shows that the average penis mean is 13.24cm and the standard deviation is 1.89cm. Let's suppose we have a population with this mean and standard deviation for penis length.
Suppose we ask 30 men in this population what their penis lengths are. Suppose that the selection of 30 men is so that all men from the population have the same probability of being selected. Can we know how many of them are lying about their penis length?
Of course, if we calculate a sample mean of 18 cm, we can, based on the central limit theorem say that this sample is an outlier or that there is people lying about their penis size. But can we create a model to estimate how many of them are lying?
| Estimate how many wrong measures there is in a sample | CC BY-SA 4.0 | null | 2023-04-05T13:41:08.727 | 2023-04-05T21:46:56.787 | 2023-04-05T21:46:56.787 | 327798 | 327798 | [
"sample",
"central-limit-theorem",
"measurement-error"
] |
611979 | 2 | null | 611976 | 3 | null | The KS test does what it is supposed to do, even when you have a million observations. When the null hypothesis is true, the KS test rejects the null about $\alpha$ of the time. What does happen is that, because of the large sample size, there is considerable power to reject slight deviations from the null hypothesis that might not seem to have practical significance.
Thus, use all of your points.
If you do not like that the test has such power because you are rooting for $p>\alpha$, you should think hard about if hypothesis testing is the right tool for your work. Hypothesis testing is extremely literal, and it is a feature, not a bug, that hypothesis testing can detect small deviations from the null hypothesis when the sample size is large. If you want to test if the data are "close" to the null hypothesis, then you might want to think about how to quantify "close" and consider methods like equivalence testing.
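To make "close" concrete, one option is to treat the KS statistic $D$ (the maximum distance between the empirical and hypothesized CDFs) as an effect-size measure, since $D$ itself does not grow with $n$ the way significance does. A minimal sketch (in Python rather than R, standard library only, with an assumed $N(0.01, 1)$ sample tested against $N(0, 1)$):

```python
import math
import random

def normal_cdf(x, mu=0.0, sigma=1.0):
    # CDF of N(mu, sigma^2) via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(sample, cdf):
    # D = sup_x |ECDF(x) - CDF(x)|, evaluated at the order statistics
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

rng = random.Random(2023)
n = 100_000
sample = [rng.gauss(0.01, 1.0) for _ in range(n)]  # true mean differs by only 0.01

d = ks_statistic(sample, normal_cdf)
crit = 1.36 / math.sqrt(n)  # approximate 5% critical value for large n
print(d, crit)  # compare the size of D to the 5% critical value
```

Whether a $D$ of this size matters is a substantive judgment, not a statistical one; the test can reject it at large $n$ even though $D$ is tiny in absolute terms.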
You may be interested in the links below.
[Is normality testing 'essentially useless'?](https://stats.stackexchange.com/questions/2492/is-normality-testing-essentially-useless)
[Significance test for large sample sizes](https://stats.stackexchange.com/questions/35470/significance-test-for-large-sample-sizes/602422#602422)
EDIT
Let's see a simulation with $100,000$ observations per test (to save computing time).
```
library(ggplot2)
set.seed(2023)
N <- 1e5
R <- 5000
ps1 <- ps2 <- rep(NA, R)
for (i in 1:R){
# Simulate draws from N(0, 1)
#
x <- rnorm(N, 0, 1)
# KS test if the distribution is N(0, 1) or not, then save the p-value
#
ps1[i] <- ks.test(x, "pnorm", 0, 1)$p.value
# Simulate draws from N(0.01, 1)
#
y <- rnorm(N, 0.01, 1)
# KS test if the distribution is N(0, 1) or not, then save the p-value
#
ps2[i] <- ks.test(y, "pnorm", 0, 1)$p.value
if (i %% 75 == 0 | i < 5 | R - i < 5){
print(paste(
i/R*100,
"% complete",
sep = ""
))
}
}
d1 <- data.frame(
pvalue = c(ps1, ps2),
CDF = ecdf(ps1)(c(ps1, ps2)),
null = "True"
)
d2 <- data.frame(
pvalue = c(ps1, ps2),
CDF = ecdf(ps2)(c(ps1, ps2)),
null = "False"
)
d <- rbind(d1, d2)
ggplot(d, aes(x = pvalue, y = CDF, col= null)) +
geom_point() +
geom_abline(slope = 1, intercept = 0)
```
[](https://i.stack.imgur.com/iCspy.png)
When the null hypothesis is true, despite there being a huge number of observations (you can bump it up to a million and get the same result), the distribution of p-values is $U(0,1)$ (the blue CDF), meaning that there is a probability of $\alpha$ of falsely rejecting a true null hypothesis. For instance, when $\alpha = 0.05$, `ecdf(ps1)(0.05)` shows that the true null hypothesis is rejected $4.8\%$ of the time, just as is supposed to happen.
However, the red graph shows that the KS test indeed has solid power to reject a false null hypothesis, even one that is slightly false $\big(N(0.01, 1)$ is the true distribution, while the null hypothesis is that the distribution is $N(0, 1)\big)$, and `ecdf(ps2)(0.05)` shows the test to have a power of $77.78\%$ to reject the false null hypothesis.
Importantly, however, the ability to reject this false null hypothesis does not come at the expense of not rejecting a true null hypothesis. The KS test works exactly how a hypothesis test is meant to work.
| null | CC BY-SA 4.0 | null | 2023-04-05T13:46:22.773 | 2023-04-05T23:14:44.337 | 2023-04-05T23:14:44.337 | 247274 | 247274 | null |
611980 | 1 | null | null | 0 | 26 | Suppose X1 is one observation from a population with Beta(θ,1) PDF. Would X1 also have Beta(θ,1) PDF?
| Does a single observation from a population have the same distribution as that population? | CC BY-SA 4.0 | null | 2023-04-05T13:53:55.970 | 2023-04-05T14:25:47.400 | null | null | 383610 | [
"sampling",
"density-function",
"beta-distribution"
] |
611981 | 1 | null | null | 0 | 22 | probably answered before but I would I want to see if my reasoning is correct, as my textbook skips the calculations but the answer coincide.
Q: Let $Z_t \sim \text{WN}(0, \sigma^2)$ (white noise), and define the MA(1) process $X_t = Z_t + \theta Z_{t-1}, t\in \mathbb{Z}, \theta \in \mathbb{R}. $ Find the covariance function $\gamma_X(t, t+h)$.
Sol: $E(X_t) = 0, E(X_t^2) = \sigma^2(1+\theta^2)$. Then
\begin{equation}
\begin{split}
\gamma_X(t,t+h) &= Cov(X_t, X_{t+h}) = E[(X_t - E(X_t))(X_{t+h}-E(X_{t+h}))] \\
&= E(X_t X_{t+h}) = E[(Z_t + \theta Z_{t-1})(Z_{t+h} + \theta Z_{t+h-1}
)] \\
&= E(Z_tZ_{t+h}) + \theta E(Z_t Z_{t+h-1}) + \theta E(Z_{t-1} Z_{t+h}) + \theta^2 E(Z_{t-1}Z_{t+h-1})
\end{split}
\end{equation}
Since $Z_t$ is white noise, it holds that $\gamma_Z(s,t) = E(Z_s Z_t) = \sigma^2 \delta(s-t)$. Hence
\begin{equation}
\begin{split}
\gamma_X(t,t+h) &= \sigma^2\delta(h) + \theta \sigma^2\delta(h+1) + \theta \sigma^2\delta(h-1) + \theta^2 \sigma^2 \delta(h) \\
&= \begin{cases}
\begin{split}
&\sigma^2(1+\theta^2), \quad &h = 0 \\
&\sigma^2 \theta, \quad &h = \pm 1 \\
&0, \quad &\text{else}
\end{split}
\end{cases}
\end{split}
\end{equation}
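As a sanity check on this result, a quick simulation (a Python sketch with illustrative values $\theta = 0.6$, $\sigma = 1$, chosen here for demonstration) should give sample autocovariances close to $\sigma^2(1+\theta^2)$, $\sigma^2\theta$, and $0$ at lags 0, 1, and 2:

```python
import random

theta, sigma = 0.6, 1.0  # illustrative parameter values
rng = random.Random(42)
n = 100_000

# white noise Z_t and the MA(1) process X_t = Z_t + theta * Z_{t-1}
z = [rng.gauss(0.0, sigma) for _ in range(n + 1)]
x = [z[t] + theta * z[t - 1] for t in range(1, n + 1)]

def sample_acov(series, h):
    # biased sample autocovariance at lag h
    m = len(series)
    mean = sum(series) / m
    return sum((series[t] - mean) * (series[t + h] - mean) for t in range(m - h)) / m

g0, g1, g2 = sample_acov(x, 0), sample_acov(x, 1), sample_acov(x, 2)
print(g0, g1, g2)  # expect roughly 1.36, 0.6, 0.0
```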
| Covariance function of MA(1) process | CC BY-SA 4.0 | null | 2023-04-05T14:11:44.407 | 2023-04-05T14:14:23.493 | 2023-04-05T14:14:23.493 | 384994 | 384994 | [
"covariance",
"moving-average"
] |
611982 | 1 | null | null | 0 | 15 | Say I want to compare the means of two groups in SPSS (men and women). The particular thing is that for these groups in the context that I'm researching, there are no theories to formulate a decisive hypothesis. So that means I don't know beforehand whether one group will have a higher means than the other or if they will be the same.
If I HAD to formulate a hypothesis, I would say that “H1: the mean of group A is NOT equal to the mean of group B”: so a two-sided hypothesis.
Now, in the results I see that the two-sided p-value is higher than 0.05, so I would keep the null hypothesis: there is insufficient evidence to say there is a difference between the two groups.
BUT: I see that the one-sided p-value is less than 0.05. So what do I do? Can I still say that there is a significant difference between the two groups?
| Independent samples t test when you don't know what to expect | CC BY-SA 4.0 | null | 2023-04-05T14:13:28.753 | 2023-04-05T14:23:28.357 | 2023-04-05T14:23:28.357 | 384995 | 384995 | [
"hypothesis-testing",
"t-test"
] |
611983 | 1 | null | null | 0 | 8 | I am calculating a 2SLS in R-Studio. I have a variable of interest Y, an endogenous Variable E, the instrument Z and three exogenous variables . My theory is that the effect of X (Z respectively) on Y works through the interaction with some exogenous variables. Can someone help me out here? Is it even possible to add interactions in the second stage between the estimated endogenous variable and some exogenous variables? And if yes, do I have to add the interaction terms in the first stage model already - as I did in the code below?
This is my code so far.
```
first_stage <- plm(E ~ Z * (X1 + X2 + X3), data = data, model = "within", effect = "individual")
E_hat <- predict(first_stage, type = "response")  # fitted values of the endogenous variable
data <- cbind.data.frame(data, E_hat)
data <- pdata.frame(data, index = c("country", "year"))
second_stage <- plm(Y ~ E_hat * (X1 + X2 + X3), data = data, model = "within", effect = "individual")
```
| Do I add interaction term already in the first stage model (2SLS)? | CC BY-SA 4.0 | null | 2023-04-05T14:17:19.653 | 2023-04-05T14:17:19.653 | null | null | 384671 | [
"r",
"2sls"
] |
611984 | 2 | null | 611867 | 1 | null | >
So simple question is it conceptually correct to eliminate those genes which are not present in my validation cohort and go ahead with the ones which are present.
You might start by seeing how well your original model without those genes works on [The Cancer Genome Atlas](https://www.cancer.gov/ccg/research/genome-sequencing/tcga) (TCGA) data. I suspect that it won't be as good, but that would be one thing to try. If you think that the model without those genes is good enough, then proceed as you suggest.
I don't think, however, that removing genes this way is a good strategy in general.
The `lasso` tag on this question suggests that you used LASSO to define your set of 40 genes out of the approximately 20,000 included in data from TCGA. With so many genes and only a few hundred cases (if you restricted analysis to acute myeloid leukemia, AML), many gene-expression values will be highly inter-correlated. LASSO will choose 1 or a few, at most, from each set of correlated genes. The particular choice will depend on vagaries of the data set rather than on overall population characteristics.
In that situation, each of your 40 genes is likely to represent a large number of other genes with similar expression patterns. If you simply omit some of those 40 genes in future work, you are losing information not just about that gene but also about all of the other genes, not included in your set of 40, that it helps to represent.
Although you might think of your modeling so far as having identified a single model, what you have done might better be considered having evaluated a modeling process: a particular way to use LASSO that provided a useful model in AML. You might consider repeating the same modeling process again to the TCGA data, but omitting genes whose expression values aren't available in the newer "beatAML" data set. With the large number of genes whose expression is likely to be correlated with those you removed from the TCGA data, I suspect that you will find similar performance with a new set of around 40 genes in TCGA. Then you can use that model in the test cohort provided by "beatAML."
A final comment: sometimes an apparently "missing" gene in a data set is due to different labeling of the same gene in different data sets. Check that simpler explanation first.
| null | CC BY-SA 4.0 | null | 2023-04-05T14:18:23.680 | 2023-04-05T14:59:21.537 | 2023-04-05T14:59:21.537 | 28500 | 28500 | null |
611985 | 2 | null | 611980 | 0 | null | As you can learn from the [How to Understand the Relationships Among Random Variables, Samples, and Populations?](https://stats.stackexchange.com/questions/224442/how-to-understand-the-relationships-among-random-variables-samples-and-populat) thread,
>
A population is usually modeled as a set $\mathcal S$ together with a probability measure $\mathbb{P}$ on that set. [...]
A (univariate) random variable $X:\mathcal{S}\to\mathbb{R}$ assigns numbers to the elements of $\mathcal S$. [...]
and sampling is commonly understood in statistics as
>
[...] Another procedure focuses on a random variable, rather than the population, and views an independent and identically distributed ("iid") sample as a sequence $$X_1, X_2, \ldots, X_n$$ of random variables on $\mathcal S$ (a) that are independent and (b) for which all the $X_i$ have the same distribution.
So a sample of size $n$ is thought as a sequence of random variables
$$X_1, X_2, \ldots, X_n$$
all following the same distribution, "coming from" the same population. The population does not "have" distribution, random variables do. You are mixing different concepts, so I highly recommend the linked thread that discusses it in depth.
| null | CC BY-SA 4.0 | null | 2023-04-05T14:25:47.400 | 2023-04-05T14:25:47.400 | null | null | 35989 | null |
611986 | 1 | 611990 | null | 2 | 164 | I was looking at the [Informer model implemented in HuggingFace](https://huggingface.co/docs/transformers/main/en/model_doc/informer#informer) and I found that the model is implemented with negative log-likelihood (NLL) loss even though it is a model for a regression task. How can NLL loss be used for regression? I thought it is a loss used for classification only.
| How NLL loss is used for regression? | CC BY-SA 4.0 | null | 2023-04-05T14:33:52.793 | 2023-05-01T18:18:17.090 | 2023-05-01T18:18:17.090 | 247274 | 372864 | [
"regression",
"machine-learning",
"maximum-likelihood",
"loss-functions"
] |
611987 | 1 | null | null | 0 | 38 | I work for a company that helps online retailers to group their inventory into google ad campaigns. I am using [Causal Impact](https://google.github.io/CausalImpact/CausalImpact.html#faq) to determine whether the release of a new feature within our software had an impact on the total impressions that an online retailer received through google ads - an impression is counted each time the retailers ad is shown on a google search result page.
Approach
To begin with, I just have one X variable.
y - impressions over the past 365 days.
X - daily searches for the term ‘garden furniture’ using google trends
I expected searches for 'garden furniture' to have a good correlation with the impressions of this particular retailer (the correlation was +0.58). Importantly, google search terms won't be influenced by the change we made to our software and therefore satisfy the key requirement that the X variables are not affected by the intervention.
After standardising, the data looks like this.
[](https://i.stack.imgur.com/r9BNB.png)
And running Causal Impact shows that the intervention did not quite have a significant effect (p=0.07).
```
pre_period_start = '20220405'
per_period_end = '20230207'
post_period_start = '20230208'
post_period_end = '20230329'
pre_period = [pre_period_start, per_period_end]
post_period = [post_period_start, post_period_end]
ci = CausalImpact(dated_data, pre_period, post_period)
ci.plot()
```
[](https://i.stack.imgur.com/bbgqg.png)
[](https://i.stack.imgur.com/ZkibF.png)
Questions
- How can I verify that my X variables are doing a good job of predicting y?
From this [notebook](https://github.com/WillianFuks/tfcausalimpact/blob/master/notebooks/getting_started.ipynb), in section 2.5: Understanding results, it suggests the below code gives the values for beta.X. The value here 0.06836722 seems quite low and suggests that the garden_furniture searches don't explain impressions very well. Is this the correct interpretation?
```
tf.reduce_mean(ci.model.components_by_name['SparseLinearRegression/'].params_to_weights(
ci.model_samples['SparseLinearRegression/_global_scale_variance'],
ci.model_samples['SparseLinearRegression/_global_scale_noncentered'],
ci.model_samples['SparseLinearRegression/_local_scale_variances'],
ci.model_samples['SparseLinearRegression/_local_scales_noncentered'],
ci.model_samples['SparseLinearRegression/_weights_noncentered'],
), axis=0)
```
<tf.Tensor: shape=(1,), dtype=float32, numpy=array([0.06836722], dtype=float32)>
- When adding another X variable to the model, how can I determine whether adding that variable was useful or not?
- I’ve also attempted to backtest the model by selecting the first 90 data points and an imaginary intervention date. As shown below, we do not get a significant effect. However, I’m concerned that the predictions don’t seem to align that closely with the actual y. Does this look like a problem?
[](https://i.stack.imgur.com/MOaqa.png)
- General advice - Any suggestions on improving the analysis would be greatly appreciated as this is the first time I’ve used Causal Impact. In particular, I'm struggling on
- selecting additional X variables to use
- whether changing the data frequency to weekly would help with predictions
| Building a Causal Impact model and Understanding the results | CC BY-SA 4.0 | null | 2023-04-05T14:38:18.940 | 2023-04-05T14:38:18.940 | null | null | 44394 | [
"time-series",
"predictive-models",
"causalimpact"
] |
611988 | 1 | null | null | 2 | 63 | If X ~ Exp(3), Y ~ Exp(1) and h = X / (X + Y) then h ~ beta(1/3, 1) and E(h) = 1/4.
But when I draw random deviates using the following R code, I find mean(h) ≈ 0.324 and the histogram doesn't resemble the beta distribution.
Am I making some dumb mistake?
```
fn <- function() {
X <- rexp(n = 1, rate = 3)
Y <- rexp(n = 1, rate = 1)
h <- X / (X + Y)
return(h)
}
h_vec <- replicate(1e6, fn())
mean(h_vec) # ≈ 0.324
```
| Testing relationship between exponential and beta distributions using R | CC BY-SA 4.0 | null | 2023-04-05T14:43:23.043 | 2023-04-06T19:46:23.883 | null | null | 384996 | [
"r",
"random-generation",
"exponential-distribution",
"beta-distribution"
] |
611989 | 1 | null | null | 0 | 17 | I am interested in testing whether a (100-dimensional) vector is different from a population of (let's say 2,000) vectors. If my data was one-dimensional, the answer would have been a one-sample t-test (i.e., whether a population differs from a fixed value).
Can I use MANOVA for this purpose (with one group having 2,000 members and the other just 1 member)? What other statistical test options do I have that would be equivalent to a multi-dimensional one-sample t-test? (I use Python; a code sample would be much appreciated.)
| One-sample MANOVA/multidimensional t-test | CC BY-SA 4.0 | null | 2023-04-05T14:43:31.483 | 2023-04-05T14:43:31.483 | null | null | 384999 | [
"hypothesis-testing",
"statistical-significance",
"python",
"manova"
] |
611990 | 2 | null | 611986 | 4 | null | As I discuss [here](https://stats.stackexchange.com/questions/606915/negative-log-likelihood-nll-reserved-for-a-classification-in-pytorch-is-weird/611310#611310), "negative log likelihood" seems to be a slang in some circles to refer to the log loss in classification problems, since minimizing that log loss is equivalent to maximizing the binomial log-likelihood.
However, maximum likelihood estimation is a general idea in statistics. For instance, minimizing square loss as is done in OLS linear regression is equivalent to maximum likelihood estimation of the regression parameters under the assumption of a Gaussian likelihood (so $iid$ Gaussian error terms for linear regressions). Other loss functions correspond to maximum likelihood estimation, too, such as minimizing absolute loss being equivalent to maximizing Laplace likelihood. Indeed, if you approach the regression problem from the standpoint of using maximum likelihood estimation of the parameters, that is equivalent to estimating the regression parameters using negative log likelihood. If $L(\theta)$ is the likelihood function, then the following is true.
$$
\underset{\theta}{\arg\max} \{L(\theta)\} = \underset{\theta}{\arg\min} \{-\log\left(L(\theta)\right)\}
$$
(The left side is the set of values of some parameter (possibly a vector) $\theta$ that maximize the likelihood function, and the right side is the set of values of that same parameter (possibly a vector) that minimize the negative log-likelihood.)
This even holds outside of regression or supervised learning problems. You can use this idea to estimate variances or whatever other parameter you want to estimate (assuming you are fine with using maximum likelihood estimation).
Your reference seems to be using correct statistical terminology, and this is exactly the point being made in the comment.
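As a concrete illustration of the equivalence for linear regression, here is a small self-contained sketch (toy data and names of my own choosing): gradient descent on the Gaussian negative log-likelihood recovers the closed-form OLS estimates.

```python
import random

# toy data from y = 2x + 1 + Gaussian noise
rng = random.Random(0)
xs = [rng.uniform(-1.0, 1.0) for _ in range(200)]
ys = [2.0 * x + 1.0 + rng.gauss(0.0, 0.5) for x in xs]

# closed-form OLS slope and intercept
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
a_ols = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
b_ols = ybar - a_ols * xbar

# minimize the Gaussian NLL by gradient descent (sigma fixed at 1; it does not affect the argmin)
a, b, lr = 0.0, 0.0, 0.05
for _ in range(5000):
    # gradients of the mean NLL with respect to a and b
    ga = sum(-(y - a * x - b) * x for x, y in zip(xs, ys)) / len(xs)
    gb = sum(-(y - a * x - b) for x, y in zip(xs, ys)) / len(xs)
    a, b = a - lr * ga, b - lr * gb

print(a_ols, b_ols)  # the NLL minimizer (a, b) matches these to high precision
```

The NLL of the Gaussian model is, up to constants, the sum of squared residuals divided by $2\sigma^2$, so both procedures have the same minimizer.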
| null | CC BY-SA 4.0 | null | 2023-04-05T14:47:21.787 | 2023-04-05T14:47:21.787 | null | null | 247274 | null |
611991 | 2 | null | 611933 | 2 | null | Yes, this is correct, that is, one can easily reconstruct the (log) odds ratio for other group comparisons, but unfortunately, it is also correct that one cannot reconstruct the standard error of this derived log odds ratio, unless one has information on the number of events in group A. The problem is that log(odds(B)/odds(A)) and log(odds(C)/odds(A)) are not independent due to reuse of information from group A and the degree of dependence depends on log(odds(A)).
| null | CC BY-SA 4.0 | null | 2023-04-05T15:03:23.247 | 2023-04-05T15:03:23.247 | null | null | 1934 | null |
611992 | 2 | null | 489401 | 0 | null | I know this question is over 2 years old, but I wanted to comment for anyone else who finds this post useful (as I did). The calculation of the treatment effect for `male` in this case should include not only the reference level for treatment, but also the effect of sex. In other words:
$$\beta_{male + treatment} = \beta_{male} + \beta_{treatment} + \beta_{male:treatment}$$ (in OP's original calculation, $\beta_{male}$ was missing).
Or, in terms of odds ratios:
$$OR_{male+treatment} = OR_{male}\times OR_{treatment}\times OR_{male:treatment}$$
This also affects the calculation of the SE, as you should include the variance and covariance of the interaction term in the equation, as below (for generic variables $X_1$, $X_2$, and their interaction term $X_{1:2}$):
$$SE(β_{1+2}) = \sqrt{var(β_1+β_2+β_{1:2})} = \sqrt{var(β_1) + var(β_2) + var(β_{1:2}) + 2[cov(β_1,β_2) + cov(β_1,β_{1:2}) + cov(β_2,β_{1:2})]}$$
In R:
```
beta_se <- sqrt(vcov(m)[1,1] + vcov(m)[2,2] + vcov(m)[3,3] + 2*(vcov(m)[1,2] + vcov(m)[1,3] + vcov(m)[2,3]))
```
Construction of 95% confidence intervals follows as:
$$\beta_{1+2} \pm 1.96\times SE $$
which can be converted to confidence intervals on the odds ratio by exponentiating them.
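Numerically, the same SE falls out of the quadratic form $\sqrt{w^\top V w}$ with $w = (1, 1, 1)$, where $V$ is the covariance matrix of $(\beta_1, \beta_2, \beta_{1:2})$. A small sketch with made-up, purely illustrative covariance numbers:

```python
import math

# hypothetical covariance matrix of (b1, b2, b12) -- illustrative values only
V = [[ 0.040,  0.010, -0.012],
     [ 0.010,  0.050, -0.015],
     [-0.012, -0.015,  0.060]]
w = [1.0, 1.0, 1.0]  # weights selecting b1 + b2 + b12

# SE via the quadratic form w' V w
var_sum = sum(w[i] * V[i][j] * w[j] for i in range(3) for j in range(3))
se = math.sqrt(var_sum)

# the expanded variance/covariance formula from the text gives the same number
expanded = V[0][0] + V[1][1] + V[2][2] + 2 * (V[0][1] + V[0][2] + V[1][2])
print(se, math.sqrt(expanded))
```

The quadratic-form version generalizes directly to any linear combination of coefficients by changing `w`.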
| null | CC BY-SA 4.0 | null | 2023-04-05T15:04:00.560 | 2023-04-06T13:42:04.890 | 2023-04-06T13:42:04.890 | 328585 | 328585 | null |
611994 | 1 | null | null | 0 | 24 | Hello everyone and good day to you,
Could you please explain how to use the principal components from a principal component analysis in a correlation analysis? I performed a PCA on my dataset and extracted 2 components from 8 variables; now I'm wondering how to use a Pearson or Spearman correlation analysis to find the correlation between PC1 (or PC2) and, let's say, another variable.
I mean, I want to do a regression analysis in which my two extracted components are independent variables and each participant's age (for example) is the dependent variable.
How should I do this?
Many thanks.
| Principal component analysis and correlation | CC BY-SA 4.0 | null | 2023-04-05T15:16:00.523 | 2023-04-05T15:59:43.957 | null | null | 385001 | [
"regression",
"correlation",
"pca"
] |
611995 | 1 | null | null | 0 | 22 | A group or a fleet of “m” independent components operates in a plant. These components are non-repairable and we just have “s” items in stock to replace with failed components (no emergency replacement). When a new failure (demand) occurs, a new component with the mean time to failure, mu is replaced .This system is considered during an interval of length T.
Now, Expected shortage should be computed at the end of T to calculate the total cost.
I can give you some results as below:
m=1, s=1, mu=1, t=1 => Expected shortage = 0.26
m=1, s=1, mu=3, t=3 => Expected shortage = 0.2642
m=1, s=1, mu=1, t=5 => Expected shortage = 0.9595
m=1, s=3, mu=1, t=1 => Expected shortage = 0.0189
m=5, s=1, mu=10, t=2 => Expected shortage = 0.347
| Application of Poisson distribution | CC BY-SA 4.0 | null | 2023-04-05T15:26:57.723 | 2023-04-05T15:26:57.723 | null | null | 385002 | [
"mathematical-statistics",
"poisson-distribution",
"stochastic-processes",
"matlab",
"exponential-distribution"
] |
611997 | 1 | null | null | 0 | 25 | A binomial sample of $n$ trials consists of $k$ successes. The distribution of $k$ is
$P(k|\theta, n) = C_n^k \theta^k(1-\theta)^{n-k}$
We would like to construct a confidence interval for the parameter $\theta$. For that, we need $P(\theta|k,n)$ which we can compute as
$P(\theta|k,n) \propto P(k | \theta, n) P(\theta)$
For the prior $P(\theta)$ it is common to choose a Beta-distribution, $P(\theta) \propto \theta^{\alpha-1}(1-\theta)^{\beta-1}$. The posterior distribution then becomes
$P(\theta|k,n) = \frac{1}{B(k+\alpha, n-k+\beta)} \theta^{k+\alpha-1} (1-\theta)^{n-k+\beta-1} $
In multiple sources (e.g. [here](https://www.statisticshowto.com/clopper-pearson-exact-method/),[here](https://repository.upenn.edu/cgi/viewcontent.cgi?article=1440&context=statistics_papers) or [here](http://hep.ucsb.edu/people/claudio/Phys250/10-06/Binomial.pdf) ) it is stated that the Clopper-Pearson confidence interval is given by the CDF of a Beta-distribution with parameters $(k, n-k+1)$ for the lower bound and $(k+1, n-k)$ for the upper bound. My question is, is there a Beta (or other distribution) prior that corresponds to the Clopper-Pearson interval? If so, what are its parameters?
I would have expected that the flat prior (with $\alpha=1, \beta=1$) should correspond to the CP interval; however, it appears that I need $\alpha=0$ to match the lower bound, which would correspond to $P(\theta) \propto 1/\theta$. What would be an intuitive explanation for using such a prior?
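For reference, the stated Beta-quantile form can be checked numerically in R against `binom.test()`, which implements the Clopper-Pearson interval (the values of $n$ and $k$ are arbitrary examples):

```
n <- 20; k <- 6; alpha <- 0.05
lower <- qbeta(alpha / 2,     k,     n - k + 1)  # CP lower bound
upper <- qbeta(1 - alpha / 2, k + 1, n - k)      # CP upper bound
c(lower, upper)
binom.test(k, n)$conf.int  # Clopper-Pearson; matches the bounds above
```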
| Beta prior for the Clopper-Pearson interval | CC BY-SA 4.0 | null | 2023-04-05T15:47:51.907 | 2023-04-05T15:47:51.907 | null | null | 153176 | [
"bayesian",
"confidence-interval",
"binomial-distribution",
"prior",
"beta-distribution"
] |
611998 | 1 | null | null | 0 | 67 | Countably additive probability is defined on a sigma field. However, a finitely additive probability needs only a "finitely additive" field: the finitely additive probability does not need countably infinite unions of events to still be in the field.
To be specific, a sigma field requires that countable unions of its subsets are also in the field; a "non-sigma" field only requires that finite unions of its subsets are also in the field. I think such a field is enough to define a finitely additive probability measure.
While finitely additive measures are well covered in the literature, the finitely additive field is not. Question: Are there any references or discussions on this "finitely additive field"? Does this "non-sigma" field already have a name that I can look up?
---
This is a long comment. I think a "finite additive field" $F$ defined like this suffices:
- If $A\in F$, then $A^C\in F$.
- If $A,B\in F$, then $A\cup B\in F$ and $A\cap B \in F$.
Here $A,B$ are sets.
However, when I try to google relevant info, I got no luck. Here are the key words that I tried:
[](https://i.stack.imgur.com/wEEwA.png)
[](https://i.stack.imgur.com/RCJuy.png)
| Finite additive probability defined on a "finite-additive" field | CC BY-SA 4.0 | null | 2023-04-05T15:50:48.863 | 2023-04-06T14:13:25.727 | 2023-04-06T14:13:25.727 | 919 | 164775 | [
"probability",
"measure-theory",
"sigma-algebra"
] |
611999 | 2 | null | 611994 | 0 | null | If I am understanding you correctly, I think you would just perform the correlation or regression analysis as you would with any other set of variables, only now you use the principal component scores (the data projected onto the 2 selected principal component directions) as your variables.
Here is a good, short article that describes performing regression with principal components:
[https://online.stat.psu.edu/stat508/lesson/7/7.1](https://online.stat.psu.edu/stat508/lesson/7/7.1)
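For instance, a minimal sketch in R (assuming a data frame `dat` with your 8 variables in the first 8 columns plus an `age` column — these names are hypothetical):

```
pca <- prcomp(dat[, 1:8], scale. = TRUE)        # PCA on the 8 variables
scores <- pca$x[, 1:2]                          # PC1 and PC2 score per participant
cor(scores[, 1], dat$age)                       # Pearson correlation of PC1 with age
cor(scores[, 2], dat$age, method = "spearman")  # Spearman works the same way
fit <- lm(dat$age ~ scores[, 1] + scores[, 2])  # regression on both components
summary(fit)
```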
Hope that helps!
| null | CC BY-SA 4.0 | null | 2023-04-05T15:59:43.957 | 2023-04-05T15:59:43.957 | null | null | 385004 | null |
612000 | 2 | null | 611902 | 1 | null | First let's talk about weighting. Weighting involves estimating a weight for each unit in the sample and then estimating the treatment effect in a way that incorporates the weights, such as weighted linear regression, weighted maximum likelihood, or a weighted difference (or ratio, etc.) in means. The weights also enter the balance statistics (e.g., standardized mean difference and KS statistics).
Separate from IPTW, it's important to understand that weighted samples have lower precision than the corresponding unweighted sample. By precision, I mean the standard error of the estimate of the desired quantity (e.g., the causal mean or difference between causal means). If we think of uncertainty arising due to the sampling of individuals, the estimate will change a lot if a unit with a large weight is swapped out for another unit in the population. Similarly, having a unit with a weight of 2 is not the same as having 2 units; choosing to use one unit's information twice doesn't give you more information about your sample; you are just changing the importance of a unit's contribution to your estimates.
The ratio of the variance (i.e., square of the standard error) of the estimate of a mean from a weighted sample to the estimate of a mean from an unweighted sample is known as the "design effect" (i.e., the effect of the "design" on the precision of the estimate) and was derived by Shook-Sa and Hudgens (2020) to be
$$
\text{deff}_w = \frac{N \sum_{i}w_i^2}{(\sum_{i} w_i)^2}
$$
where $N$ is the sample size of the group and $w_i$ is the weight for unit $i$. The effective sample size (ESS) is defined as
$$
\text{ESS} = \frac{(\sum_i w_i)^2}{\sum_i w_i^2} = \frac{N}{\text{deff}_w}
$$
and represents the size of an unweighted sample that contains the same precision as a weighted sample. When the weights are scaled to have an average of 1 (i.e., and a sum of $N$), the ESS can be equivalently written as
$$
\text{ESS} = \frac{N}{1 + \text{Var}(w^*)}
$$
where $\text{Var}(w^*)$ is the variance of the scaled weights computed using the population formula. This latter formula makes it easy to see that as the variance of the weights increases, the ESS gets smaller.
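A quick numerical check in R of these two (equivalent) formulas, using arbitrary example weights:

```
w <- rexp(1000)                   # arbitrary example weights
N <- length(w)
deff <- N * sum(w^2) / sum(w)^2   # design effect
ess  <- sum(w)^2 / sum(w^2)       # effective sample size, = N / deff
# equivalent form with weights scaled to have mean 1:
ws <- w / mean(w)
ess2 <- N / (1 + mean((ws - mean(ws))^2))  # population-formula variance
all.equal(ess, N / deff)          # TRUE by construction
all.equal(ess, ess2)              # TRUE up to floating point
```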
The ESS isn't the same as the sample size, but it functions like it. That is, if you have a sample of 1000 units but after weighting the sample has an ESS of 500, then you will have (approximately) the same precision as if you only had 500 units in an unweighed sample. That is the price we pay for using the weights to remove confounding. In this way, matching and weighting both function the same way, which is to trade precision for unbiasedness. The fact that weighting retains all units doesn't mean the estimates will be as precise as had you not done weighting at all; in fact, the ESS from IPTW can be smaller than the remaining sample size after matching.
The frequencies of events/conditions in the dataset are not a useful way to think about the changes caused by the weighting. The frequencies are the same as they were prior to weighting; you will still have the same number of events, the same number of patients with conditions, etc., but those patients will contribute different amounts of information to the estimation of the effect and change the precision accordingly. It is useful, though, to think about weighted means and proportions of variables, since these represent balance in the weighted sample. So while the proportion of married units in one group may change from .2 to .4, that doesn't mean the actual number of married units has changed. It means the weighted sample is meant to represent a population in which 40% are married.
So, to sum up:
- The size of the weighted sample and the number of units with each condition (e.g., the number of events) is the same as the unweighted sample. Weighting does not change the sample size or number of events; it changes the relative contribution of each individual to the estimation of the treatment effect
- The effective sample size (ESS) is a measure that approximately captures the degree of precision remaining in the sample after weighting; it is not a literal sample size. It is a diagnostic statistic used for analysts to decide whether their weights have degraded their precision to an unacceptable degree. It should be reported in papers but very rarely is.
- The frequency of events is not a useful way to think about prevalence in a weighted sample. The frequency is the same in the weighted and unweighted samples, but the amount of information contained in each event changes depending on its weight. The weighted proportions/rates capture this information. Balance should be assessed on the proportion of events, not their frequency, so computing the weighted mean of a binary variable is the appropriate way to characterize prevalence after weighting, not a frequency count.
| null | CC BY-SA 4.0 | null | 2023-04-05T16:00:17.847 | 2023-04-05T16:00:17.847 | null | null | 116195 | null |
612002 | 2 | null | 611820 | 4 | null | It's not that you don't have events, it's just that you don't have exact times for the events. You do, however, have a lower and an upper limit to the time to each event. That's what's called "interval censored" data in general. In your situation, there's only 1 observation time per individual, so you have "current status" data as @Ben said in a comment. As you note, you have left-censored event times for cases with events (lower limit of 0) and right-censored event times for cases without events (upper limit of +Infinity).
Some types of survival analysis with such data are relatively straightforward. Let's take the reproducible data set provided in the answer from @AdamO (+1) and reformat it. Specify "L" and "R" as the left and right limits of the interval. It turns out that setting the lower (left) limit for left-censored event times to -Inf instead of 0 helps with some functions.
```
set.seed(123)
n <- 100
x <- rexp(n, 1/50)
t <- 10*(1+1:n%/% 10.1) ## assigned sacrifice timepoint
dissectData <- data.frame(dissectTime=t,event=x<t)
dissectData[,"L"] <- -Inf
dissectData[,"R"] <- Inf
dissectData[dissectData$event==FALSE,"L"] <- dissectData[dissectData$event==FALSE,"dissectTime"]
dissectData[dissectData$event==TRUE,"R"] <- dissectData[dissectData$event==TRUE,"dissectTime"]
```
That puts data into a form used by the "interval2" type of `Surv` object in the R [survival package](https://cran.r-project.org/package=survival). That allows for simple descriptive `survfit()` processing (for 1 or more groups) and for parametric survival modeling. For example:
```
library(survival)
plot(survfit(Surv(L, R, type="interval2") ~ 1, data = dissectData), bty="n",xlab="Time",ylab="Fraction Surviving")
curve(exp(-x/50),from=0,to=100,add=TRUE,col="red")
```
shows the estimated survival curve and its 95% confidence intervals (in black) along with the original continuous function used to generate the data sample (in red).
[](https://i.stack.imgur.com/tthtm.png)
You can fit a parametric survival model this way, also. For example:
```
survreg(Surv(L, R, type="interval2") ~ 1, data = dissectData)
```
fits the default Weibull model to the data. With a more complicated data set including treatment groups and other outcome-associated covariates, you can specify (functions of) those as predictors instead of the simple `~1` intercept-only predictor used here for a single group. There are several other choices for survival distributions available, too.
You can't, however, fit a semi-parametric model like a Cox survival model via the `survival` package. For that you need specialized tools like those in the R [icenReg package](https://cran.r-project.org/package=icenReg). That package works directly with "interval2" data. It also provides for Bayesian models like those recommended by @Björn in another answer (+1).
Your small set of known, fixed time points does recommend a binomial regression approach, but you didn't provide enough details to know if your model correctly takes the left censoring into account. A simple binomial model of fractions of animals with tumors over time is a model of tumor prevalence. That's OK for some purposes, and it can form the basis for [tests of treatment effects with covariate adjustment](https://www.jstor.org/stable/2347946). Prevalence data, however, leads to problems in interpretation in terms of tumor onset if the observed prevalence decreases at a later time period.
The answer from @AdamO provides a good way to deal with that problem, in a way that forces the cumulative hazard to be non-decreasing. Tutz and Schmid discuss ways to handle interval censoring in a binomial regression context in Section 3.7, "Subject-Specific Interval Censoring," of their [Modeling Discrete Time-to-Event Data](https://www.springer.com/us/book/9783319281568) book.
| null | CC BY-SA 4.0 | null | 2023-04-05T16:04:52.307 | 2023-04-08T17:38:36.697 | 2023-04-08T17:38:36.697 | 28500 | 28500 | null |
612003 | 1 | null | null | 0 | 10 | I am running a multilevel growth curve model predicting trajectories, with observations over time nested within people. I am specifically interested in whether the amount of exposure to an event explains/affects these trajectories, which means (as I understand it) the predictor must be included at Level 2.
My predictor is measured at each time point, so it is time-varying and simply represents presence/absence of whether the event occurred at that time. Since I am only interested in its interaction with Time to explain differences in trajectories (and not in the cross-sectional effects), I would like to simply use the within-person average (essentially creating a % of time it occurs), which is then group-mean centered as my Level 2 predictor. I am wondering if there are any problems with this approach? Is it incorrect to treat a time-varying variable as essentially a person-level trait in this manner?
If it is a problem, would I need to somehow demonstrate consistency over time to justify creating a person-level trait variable? It is the same question asked repeatedly, but each individual has a different number of time points that contribute to this variable so I don't think generating an alpha value for reliability makes sense given the variability in the number of values that contribute to their level 2 average.
Any help and clarification on this matter would be greatly appreciated!
| Problems with using average of time-varying variable as a level 2 predictor in growth curve modeling? | CC BY-SA 4.0 | null | 2023-04-05T16:11:15.197 | 2023-04-05T16:12:57.577 | 2023-04-05T16:12:57.577 | 385009 | 385009 | [
"multilevel-analysis",
"time-varying-covariate",
"growth-mixture-model"
] |
612004 | 2 | null | 611998 | 3 | null | I think what you seek is a premeasure. More formally, if $\mathcal S$ is any collection of subsets of $X$, then $\mu:\mathcal S\to [0, \infty]$ is a premeasure if it is finitely additive and countably monotone, and, if $\emptyset\in\mathcal S$, then $\mu(\emptyset)$ has to be $0$. $\mathcal S$ is generally taken to be a semiring.
In this [lecture](https://youtu.be/a49_A272dOU), though, a premeasure is defined as a countably additive measure over a field. If $\mu$ is only finitely additive, it is a finitely additive measure.
---
## Reference:
$\rm [I]$ Real Analysis, H. L. Royden, P. M. Fitzpatrick, Pearson, $2010, $ sec. $17.5, $ p. $353.$
| null | CC BY-SA 4.0 | null | 2023-04-05T16:13:13.840 | 2023-04-05T16:13:13.840 | null | null | 362671 | null |
612005 | 1 | null | null | 0 | 26 | My question is whether it is correct to use binomial regression when the outcome is count data with a fixed upper limit and the trials are different tasks. In my experiment, I count the number of consequentialist responses to 12 different moral dilemmas (i.e., success = consequentialist response, failure = deontological response), so the upper limit is 12. But I am unsure because the trials are not identical: participants are presented with different dilemmas. As far as I understand, ANOVA/OLS is not suitable for count data, and both the Poisson and negative binomial regression models are hardly suitable because they assume the counts have no upper limit. My second choice is ordinal regression, because I read somewhere that it is often better to treat count data as ordinal than to assume a distribution for it. So, which is better in this case: binomial regression with the number of events occurring in a set of trials, or ordinal regression? Thank you in advance.
| Binary logistic regression for count data in set of trials | CC BY-SA 4.0 | null | 2023-04-05T16:13:15.897 | 2023-04-05T16:13:15.897 | null | null | 385006 | [
"binomial-distribution",
"ordinal-data",
"count-data"
] |
612006 | 1 | null | null | 0 | 21 | Consider a basketball series, best 4 out of 7. In R, we have the following function for computing the expected number of wins in such a series for a team with a certain single-game win probability `wp_a`:
```
get_expected_wins <- function(wp_a = 0.50, num_games = 7, to_win = 4) {
# compute expected wins for team_a
# wp_a: a team's odds to win a single game
# num_games: the maximum number of possible games remaining in the series
# to_win: how many more games a team needs to win the series
# 7,4 correspond to winning a best 4 out of 7 series
# expected wins for the team
prob_to_win_n_games <- dbinom(x = 0:num_games, size = num_games, prob = wp_a)
num_wins <- c(0:to_win, rep(to_win, num_games - to_win))
  ewins <- sum(prob_to_win_n_games * num_wins)
# and return
return(ewins)
}
```
In the function, `prob_to_win_n_games` should be the team's probability of winning 0, 1, 2, up to `num_games` number of games. Consider a playoff series where a team is trailing 0-3, and we are trying to compute their expected remaining number of wins in the series. Keep in mind that 1 more loss by the team would end the series. We want to call `get_expected_wins(0.5, 4, 4)`
In this series, this team has a `50%` chance of winning 0 more games (lose the next game), `25%` to win 1 game (win, then lose), `12.5%` to win 2 games (win, win, then lose), `6.25%` to win 3 games (win, win, win, then lose) and `6.25%` to win 4 games (win 4x). Their expected wins in the series is then `0.5*0 + 0.25*1 + 0.125*2 + 0.0625*3 + 0.0625*4 = .9375`
In this example, `num_games = 4` and `to_win = 4`, and `prob_to_win_n_games` is incorrectly computed as `0.0625 0.2500 0.3750 0.2500 0.0625`. The binomial fails to account for the series ending after an additional loss. It computes a `25%` chance of 3 wins, based on the calculation `(4 choose 3) * (0.5 ^ 4)`, however 3 of the 4 possible sequences (`L W W W`, `W L W W`, `W W L W`) are not possible in our theoretical playoff series where one additional loss by the team would end the series. Only `W W W L` gets the team to 3 wins.
How can we update this function to correctly compute a team's probability of winning a certain number of games, given the parameters we set for the playoff series.
| E(wins) by team in a sports series, given N games and M games away from elimination | CC BY-SA 4.0 | null | 2023-04-05T16:25:14.803 | 2023-04-05T18:13:41.367 | 2023-04-05T18:13:41.367 | 139192 | 139192 | [
"binomial-distribution",
"expected-value"
] |
612007 | 1 | 612018 | null | 0 | 56 | In scikit-learn's [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) docs they write
>
This class implements regularized logistic regression using the ‘liblinear’ library, ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ solvers
Logistic regression doesn't have a closed-form solution, so it must use some optimization algorithm like gradient descent or Adam. All we need, then, are the partial derivatives and we should be good to go. So what are these "solvers" and where do they fit into the picture?
| What is the difference between a solver and an optimization algorithm? | CC BY-SA 4.0 | null | 2023-04-05T16:27:47.767 | 2023-04-05T18:04:40.823 | 2023-04-05T18:04:40.823 | 3277 | 385011 | [
"machine-learning",
"optimization",
"gradient-descent",
"algorithms"
] |
612008 | 1 | null | null | 0 | 7 | I have a large dataset with 15 independent variables. I am interested in investigating the potential interactions between certain independent variables; however, some independent variables measure the same thing. I have two different measurements for both language proficiency (Subjective and Tested) and word frequency (Subjective and Corpus). I need to figure out which of these two measurements does a better job of explaining variation (along with all possible interactions).
My initial plan was to create four full models (with relevant interactions), each having a different combination of these variables as the following:
- Model 1: Subjective proficiency + subjective word frequency + other variables
- Model 2: Subjective proficiency + corpus word frequency + other variables
- Model 3: Tested proficiency + subjective word frequency + other variables
- Model 4: Tested proficiency + corpus word frequency + other variables
After that, I was planning to use backward elimination to refine the models and compare the reduced models with each other using Likehood Ratio tests. However, as these models are not really nested (overlapping perhaps?), I cannot use LRTs.
Should I just compare the AIC values of the models to reach a final decision? One model has the lowest AIC value, while another has the lowest BIC. Any suggestions?
| Model Comparison for Overlapping Models (Choosing between the different measurements of the same skill) | CC BY-SA 4.0 | null | 2023-04-05T16:33:24.140 | 2023-04-05T16:33:24.140 | null | null | 377196 | [
"mixed-model",
"model-selection",
"aic",
"likelihood-ratio",
"bic"
] |
612009 | 1 | 612022 | null | 0 | 30 | Let's say I have three distributions, P1, P2, and P3, which are probability distributions with domains defined between 0 and 1. Generically these are not Gaussian (more like Beta distributions). I can sample from these three distributions, generating samples p1, p2, and p3, such that I impose the constraint that p1+p2+p3<1, and I'm wondering what the most proper way of doing so is.
I've thought of two solutions:
- Sample independently from each many times, and then reject all correlated draws which don't obey the constraint
- Sample from P1, then crop and renormalize P2 such that the constraint is fulfilled (call the new distribution P2'), then sample from P2', then do the same for P3.
I think both methods have problems: the first introduces bias, and I'm not sure if the second does as well. Is there a more proper way to perform this type of correlated-sampling-with-constraints?
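As a sketch, option 1 (sample jointly and reject draws that violate the constraint) could look like this in R; the Beta parameters are placeholders, and the result is a draw from the joint distribution of independent $(p_1, p_2, p_3)$ conditioned on $p_1+p_2+p_3<1$:

```
# rejection sampler: draw (p1, p2, p3) jointly, keep draws with p1+p2+p3 < 1
# r1, r2, r3 are sampling functions for P1, P2, P3
sample_constrained <- function(n, r1, r2, r3) {
  out <- matrix(numeric(0), ncol = 3)
  while (nrow(out) < n) {
    m <- 2 * (n - nrow(out))                 # oversample, then filter
    draws <- cbind(r1(m), r2(m), r3(m))
    out <- rbind(out, draws[rowSums(draws) < 1, , drop = FALSE])
  }
  out[seq_len(n), ]
}
s <- sample_constrained(1000,
                        function(m) rbeta(m, 2, 5),
                        function(m) rbeta(m, 2, 5),
                        function(m) rbeta(m, 2, 5))
```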
| Sampling from multiple distributions with well defined sum | CC BY-SA 4.0 | null | 2023-04-05T16:54:52.957 | 2023-04-05T19:40:31.830 | null | null | 371277 | [
"sampling"
] |
612011 | 1 | null | null | 1 | 16 | I did extensive research on more than 50 papers in finance and economics using propensity score matching (PSM). However, no paper so far tells me how to match unit by unit (firm by firm) based on the propensity score.
Clearly, the steps of working on PSM are:
- Calculate the propensity score for each OBSERVATION based on pre-treatment unit characteristics (I highlight this because in panel data one unit (firm) can have many observations; e.g., if we have 3 years before the event, we can have up to 3 observations per firm, meaning 3 propensity scores per firm).
- Afterward, according to many papers in finance, health, and economics, match the treated and control units based on the propensity score.
However, the propensity score in the first step is at the observation level, while the matching in the second step is at the unit level. I tried my best, but no paper explicitly tells me how to match firm by firm.
I found a paper that implicitly suggests we can calculate the mean of the characteristics over all observations per firm, and then estimate the propensity score for each firm based on these average values. However, that paper states something crucially wrong, so it would be harmful to use it as the reference for my choice. The paper I mentioned is [Howell, 2016](https://www.sciencedirect.com/science/article/abs/pii/S0048733316301093?casa_token=WBG_hIMKwngAAAAA:Dqc3e3qqT2E0e9IAaOyg6VYsWVAxhPqJ0368kyDqCG_F1Hr3DdUMcX1r_DWsITqQAvvTuAftcixj#sec0060)
In that paper, the authors mention twice something supporting my idea about averaging characteristics:
>
Firms in the control group are matched to the treatment group on the
basis of the pre-treatment (1998–2001) mean of these variables
All covariates are measured by the mean before the policy treatment
But the problem is that this statement is wrong: in his sample, the pre-treatment period is 2001 to 2004, not 1998 to 2001. I am convinced it should be a typo, but it is still a wrong statement. In other words, we cannot use that paper as a reference in this case.
 | Understanding unit-by-unit matching in propensity score matching? | CC BY-SA 4.0 | null | 2023-04-05T17:09:31.923 | 2023-04-05T17:09:31.923 | null | null | 319998 | [
"econometrics",
"difference-in-difference",
"propensity-scores"
] |
612012 | 1 | null | null | 2 | 52 | I have 18 different groups that I need to compare. I used the raw values from a neuroimaging file to represent synchronization between two brain regions (which range from 0 to 1) and Fisher z-transformed them in the hope that I would be able to use parametric statistics (namely an ANOVA) to compare the means of these groups.
However, it appears that my Fisher z-transformed data do not pass the Shapiro-Wilk test. I'm kind of at a loss here: I know that non-parametric tests such as the Kruskal-Wallis test exist, but I don't think they were made to compare z-scores. I've also read that you shouldn't perform an ANOVA if there is heteroscedasticity, and some of the data form a slightly skewed distribution (boxes 2 and 6, and the last one). I'm not quite sure what to do now.
I've plotted the values using a boxplot (N=50): [](https://i.stack.imgur.com/JOZQy.png)
| Fisher Z transformation still not normal | CC BY-SA 4.0 | null | 2023-04-05T17:10:10.293 | 2023-04-06T18:08:28.917 | 2023-04-06T18:08:28.917 | 22047 | 385012 | [
"anova",
"nonparametric",
"kruskal-wallis-test",
"fisher-transform",
"parametric"
] |
612013 | 2 | null | 610298 | 0 | null | It seems that you are looking for some sort of step-wise selection method in the mixed model. GAMLj does not have this feature because in the mixed model it is not (yet) recommended to use automated predictor selection methods. The main reason is the following: when you change the structure of the fixed effects (removing or adding IVs), the definition of the random-effects structure should be adjusted accordingly. That requires a thoughtful choice by the analyst.
Nonetheless, you can explore the plausibility of your model by adding or removing variables and terms, always defining the fixed effects and random effects according to the research design.
| null | CC BY-SA 4.0 | null | 2023-04-05T17:13:01.090 | 2023-04-05T17:13:01.090 | null | null | 52808 | null |
612014 | 1 | null | null | 1 | 22 | I have a questionnaire to check the implementation of several recommended health-improving measures. It will look approximately like this:
- Did you do your 20 minute Yoga routine on a daily basis?
Answers possible: Yes / No / partly
- Did you increase your protein intake?
Answers possible: Yes / No / partly
- Did your employer provide ergonomic office furniture?
Answers possible: Yes / No / partly
And so on.
The population consists of 7000 participants.
My hypothesis, for each measure separately, is that the majority of participants did not implement the measure or only partially implemented it.
I have two questions:
- Which test to use to test this hypothesis?
- How to calculate the necessary sample size?
| I am looking for the right significance test for a questionnaire | CC BY-SA 4.0 | null | 2023-04-05T17:22:07.360 | 2023-04-05T17:22:07.360 | null | null | 231746 | [
"hypothesis-testing",
"statistical-significance",
"sample-size",
"proportion"
] |
612015 | 1 | null | null | 0 | 19 | Model type- Encoder Decoder type Transformer time-series model.
(Based on: [https://github.com/KasperGroesLudvigsen/influenza_transformer](https://github.com/KasperGroesLudvigsen/influenza_transformer))
Training parameters:
Learning rate- `lr = 1e-3`
optimizer- `Adam(model.parameters(), lr=lr, betas =(0.9,0.95))`
scheduler- `PolynomialLR(optimizer, total_iters = 300, power=3)`
Problem- The loss curve is unstable, with many peaks. (Decreasing the learning rate does not help either.)
[](https://i.stack.imgur.com/NxezU.png)
[](https://i.stack.imgur.com/orHcY.png)
| Abnormal peaks in loss curve when training a transformer model for time series prediction | CC BY-SA 4.0 | null | 2023-04-05T17:24:11.273 | 2023-04-13T13:31:38.003 | 2023-04-13T13:31:38.003 | 22311 | 385013 | [
"time-series",
"neural-networks",
"loss-functions",
"transformers"
] |
612016 | 2 | null | 611820 | 3 | null | Since the observation times are non-random, logistic regression can be used to estimate the event rate per each 24h interval. You can even just use proportions tests to estimate CIs, unless there are stratification features you want to implement, like species, weight, etc.
For fish sacrificed at time 1, denote the probability of event as $p_1$. At time 2, the event occurrence has probability $p_1 + p_2$. And so on. A standard survival analysis does not apply in this case because fish who were sacrificed at time 2 were not known to be event free at time 1, and so it would be inappropriate to include them in the denominator of "risk set" as non-events for the time 1 stratum as would be typical of a Cox model. This considerably simplifies the model, although knowing the event status for surviving fish at each time point would inform different models that could be considerably more powerful.
You can use these probabilities to report the cumulative incidence of the event. See R implementation to make things crystal clear.
```
set.seed(123)
n <- 100
x <- rexp(n, 1/50)
t <- 10*(1+1:n%/% 10.1) ## assigned sacrifice timepoint
p <- tapply(x < t, t, mean)
plot(unique(t), p, xlim=c(0, 100), ylim=c(0, 1), type='b')
segments(
unique(t),
p-sqrt(p*(1-p)/10),
unique(t),
p+sqrt(p*(1-p)/10)
)
curve(pexp(x, 1/50), add=T, lty=2)
```
[](https://i.stack.imgur.com/AVppD.png)
From this basic approach there are a number of interesting and more sophisticated things to consider.
- Is there a single-pass modeling procedure that constrains the empirical cumulative incidence to be strictly increasing? Consider maximum likelihood constraining $p_1 < (p_1+p_2) < \ldots $.
- Alternately, can a fitting procedure be used for the time-based event incidence to produce a stepwise increasing curve?
### maximum likelihood approach
A convenient way to constrain the probability so that $p_2, p_3, \ldots >0 $ while keeping $p_1 + p_2 + \ldots < 1$ is to use the log odds. The result is somewhat better than the above approach.
```
## parameterize the log odds difference, constrain non-index LO to be positive, i.e. increasing probability
lodiff.to.p <- function(lodiff ) plogis(cumsum(c(lodiff[1], pmax(0, lodiff)[-1])))
negloglik <- function(lodiff) ## evaluate the joint likelihood
-sum(dbinom(x=tapply(x<t, t, sum), size=table(t), prob=lodiff.to.p(lodiff), log=T))
mle <- nlm(negloglik, p=c(-3, rep(0.01, 9))) ## lucky guess
plot(c(0,unique(t)), c(0,lodiff.to.p(mle$estimate)), col='red', type='b', xlab='Time', ylab='Cumulative incidence')
curve(pexp(x, 1/50), add=T, lty=2)
```
[](https://i.stack.imgur.com/VGeb9.png)
## curve fitting
We might use the empirical probability estimates to fit a monotonically increasing curve via least squares. This approach is similar to the one above, just swapping the binomial likelihood for a normal one.
[](https://i.stack.imgur.com/No7Rb.png)
```
negloglik <- function(mudiff) {
mu <- pmin(1, cumsum(pmax(0, mudiff)))
-sum(dnorm(x=p, mean = mu, sd=1, log=T))
}
mle <- nlm(negloglik, p=rep(0.1, 10))
mu <- pmin(1, cumsum(pmax(0, mle$estimate)))
plot(p, xlim=c(0, 10), ylim=c(0, 1), xlab='Time', ylab='Proportion with event')
lines(0:10, c(0,mu))
```
| null | CC BY-SA 4.0 | null | 2023-04-05T17:35:43.197 | 2023-04-06T18:15:14.150 | 2023-04-06T18:15:14.150 | 8013 | 8013 | null |
612017 | 1 | null | null | 0 | 19 | I have a binary classification problem where the outcome variable $y$ is either 0 or 1 and I want to predict $y$ given a vector of characteristics $x$.
However, I do not observe $y$ but only a variable $\tilde{y}$ with $P(y = 0 | \tilde{y} = 0) = 1$ and $P(y = 0|\tilde{y} = 1) = q$, $P(y = 1|\tilde{y} = 1) =1 - q$, where $q < 1/2$.
So there is a misclassification problem: for every observation classified as $\tilde{y} = 1$, there is a probability $q$ that its true value is $y = 0$.
Based on this, I think that the log likelihood function should be
$$L(\theta) = \sum_{i=1}^N \bigg[ \mathrm{I}(\tilde{y}_i = 1)ln(q\times P(y_i = 0 | x_i, \theta) + (1-q)\times P(y_i = 1 | x_i, \theta)) + \mathrm{I}(\tilde{y}_i = 0) ln(P(y_i = 0 | x_i,\theta))\bigg] $$
Consider now a slightly different situation where I know that $x$ observations with a true value of $y=0$ are misclassified as $\tilde{y}=1$ in the dataset. Then, the log likelihood function should become:
$$L(\theta) = \sum_{i=1}^N \bigg[ \mathrm{I}(\tilde{y}_i = 1)ln(\frac{x}{N_{\tilde{y}=1}}\times P(y_i = 0 | x, \theta) + \frac{N_{\tilde{y}=1} - x}{N_{\tilde{y}=1}}\times P(y_i = 1 | x, \theta)) + \mathrm{I}(\tilde{y}_i = 0) ln(P(y_i = 0 | x,\theta))\bigg] $$
Above, $N_{\tilde{y}=1}$ are the number of observations with $\tilde{y}=1$.
Now it follows that if $q = \frac{x}{N_{\tilde{y}=1}}$, the log likelihood functions in both scenarios are the same. This means that they will give the same parameter estimates and standard errors. The latter is not intuitive to me: in the first setting I should be more uncertain, because I do not truly know how many observations are misclassified. Shouldn't the parameter estimates come with larger standard errors in the first case?
| Log likelihood with misclassification. Can you help me with the intuition? | CC BY-SA 4.0 | null | 2023-04-05T17:53:33.710 | 2023-04-05T17:53:33.710 | null | null | 164761 | [
"classification",
"maximum-likelihood",
"likelihood"
] |
612018 | 2 | null | 612007 | 1 | null | There are actually various approaches to finding the minimum of your loss function - the "classical" gradient descent algorithm is only one such approach. Gradient descent uses a linear approximation to the function (gradient is the slope of this linear approximation), but another notable example is Newton's Method, which uses a quadratic approximation (involving the Hessian matrix). So in the context you ask about, "solver" is referring to the particular method for seeking the minimal loss.
There's a very nice description of all of these solvers in this thread:
[https://stackoverflow.com/questions/38640109/logistic-regression-python-solvers-definitions](https://stackoverflow.com/questions/38640109/logistic-regression-python-solvers-definitions)
| null | CC BY-SA 4.0 | null | 2023-04-05T17:53:54.283 | 2023-04-05T17:53:54.283 | null | null | 384947 | null |
612019 | 2 | null | 611820 | 3 | null | This is a case of right censoring and (if there had been events) interval censoring. I.e. when a fish is event free when dissected, the event time is right censored (assuming all fish would eventually have the event). If you upon dissection a fish has the event, the event is interval censored to lie between time 0 and the time of dissection.
So far, so easy. One tricky bit is when you don't have any events at all. E.g. simple maximum likelihood estimation based on asymptotics will break down here. However, firstly there are exact methods for some situations (e.g. if you assume the hazard rate to be constant over time) and secondly (more flexible and probably more useful) there's Bayesian survival analysis. Going Bayesian here with informative (based on the best available prior information / what you elicit from experts) or weakly informative (wider than your prior assumptions suggest) prior distributions is very attractive.
| null | CC BY-SA 4.0 | null | 2023-04-05T18:04:03.013 | 2023-04-05T18:04:03.013 | null | null | 86652 | null |
612020 | 1 | null | null | 0 | 14 | How do I get the effect size (e.g., Cohen's w) for a comparison of two models using `anova(lm.1,lm.2,test="Chisq")` function in `R`?
| Effect size for comparing two models using anova(test="Chisq") function in R | CC BY-SA 4.0 | null | 2023-04-05T18:05:31.983 | 2023-04-05T18:05:31.983 | null | null | 307879 | [
"r",
"chi-squared-test",
"statistical-power",
"effect-size",
"cohens-d"
] |
612021 | 1 | null | null | 1 | 64 | I am estimating a model with exogenous variables using `ARIMA`, from the `statsmodels` package. But I can't interpret the results, the coefficients are very different from what I expected. Consider the following model:
$$
y_t = 1 + 0.5 y_{t-1} + 4 z_t + \varepsilon_t
$$
where $z_t$ is a dummy and exogenous variable. I've simulated the model using the following code:
```
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA

np.random.seed(42)
c = 1
fi = 0.5
sig = 1
d = 4
y = [2]
n = 1000
z = [0, 1] * int(n/2)
for i in range(n):
y.append(c + fi*y[i] + d*z[i] + np.random.normal(scale=sig))
y = pd.Series(y[1:])
df = pd.DataFrame({'y': y, 'z': z})
m = ARIMA(df['y'], exog=df[['z']], order=(1, 0, 0)).fit()
m.summary()
```
The output is:
```
const 4.686270
z 2.689766
ar.L1 0.494128
sigma2 0.960973
dtype: float64
```
The estimated value for the coefficient of $y_{t-1}$ is OK. But the values for the constant and for the coefficient of $z_t$ are very different from what I expected. Estimating the model using OLS gives a very different result:
```
df['y_lag'] = df['y'].shift(1)
df = df.dropna()
X = sm.add_constant(df[['y_lag', 'z']])
m2 = sm.OLS(df['y'], X).fit()
m2.params
```
```
const 1.058504
y_lag 0.492424
z 4.012447
dtype: float64
```
That is, the values for the coefficient of $y_{t-1}$ are very similar, but the values for the constant and for the coefficient of $z_t$ are different. Probably `ARIMA` estimates the model using a different parameterization.
My two questions are:
- In an ARMA(p, q) with exogenous variables, what parameterization does ARIMA use in the estimation?
- In a model like
$$
y_t = c + \phi_1 y_{t-1} + \cdots + \phi_p y_{t-p} + d_1 z_{1, t} + d_2 Z_{2,t} + \cdots + d_k z_{k, t} + \varepsilon_t
$$
how can I calculate the $d_j$ parameters from the `ARIMA` output?
| Which specification does the statsmodels ARIMA use? | CC BY-SA 4.0 | null | 2023-04-05T18:09:57.420 | 2023-04-05T18:09:57.420 | null | null | 385020 | [
"time-series",
"arima",
"statsmodels"
] |
612022 | 2 | null | 612009 | 2 | null | If$$(X_1,X_2,X_3)\sim cf_1(x_1)f_2(x_2)f_3(x_3)\mathbb I_{x_1+x_2+x_3\le 1}\tag{1}$$
where $c$ is the normalising constant, then simulating$$X_1\sim f_1(x_1),\ X_2\sim f_2(x_2),\ X_3\sim f_3(x_3)$$until$${x_1+x_2+x_3\le 1}$$is a correct way to simulate from (1) as a special case of acceptance-rejection.
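A minimal Python sketch of this acceptance-rejection scheme, with Exponential(5) samplers standing in for $f_1, f_2, f_3$ (the densities here are purely illustrative — plug in whatever $f_1, f_2, f_3$ you actually have):

```python
import random

def draw_constrained(rng):
    """Draw (x1, x2, x3) from the constrained joint density (1) by
    sampling each coordinate independently and rejecting until the
    constraint x1 + x2 + x3 <= 1 holds."""
    while True:
        x = [rng.expovariate(5.0) for _ in range(3)]  # stand-ins for f1, f2, f3
        if sum(x) <= 1.0:
            return x

rng = random.Random(123)
samples = [draw_constrained(rng) for _ in range(10_000)]
assert all(sum(s) <= 1.0 for s in samples)  # every accepted draw satisfies the constraint
```

The acceptance rate equals the normalising constant $1/c$, so the scheme is efficient whenever the constraint is not too restrictive under $f_1 f_2 f_3$.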
| null | CC BY-SA 4.0 | null | 2023-04-05T18:31:11.880 | 2023-04-05T19:40:31.830 | 2023-04-05T19:40:31.830 | 7224 | 7224 | null |
612023 | 1 | null | null | 0 | 18 | I am training machine learning algorithms for a binary classification task. The dataset is very small and high-dimensional, and the variables are highly correlated. My workflow is as follows:
- I do nested cross-validation to check the generalizability. The inner loop does hyper-parameter tuning and feature selection. The outer loop tests the model performance.
- As nested cv gives multiple best models, I also do regular repeated cross-validation to get one best model so that I can use the hyper-parameters selected from the grid search to do the final evaluation on an independent test set.
My questions related to feature selection are as follows:
- What is the best feature selection method where I do not need to specify the number of features to select?
- How can I select a final feature set with cross-validation?
- How can I get those selected features (in Python) and test their significance?
| How to select a final feature set with cross validation? | CC BY-SA 4.0 | null | 2023-04-05T18:38:58.463 | 2023-04-05T18:38:58.463 | null | null | 336916 | [
"python",
"cross-validation",
"feature-selection",
"importance"
] |
612024 | 1 | null | null | 0 | 11 | I have data that look like this.
[](https://i.stack.imgur.com/zNcGp.png)
And my goal is to reduce these 3-D data to 2 dimensions, so they might look like this: turning the angle so that the distance between the classes becomes maximal.
[](https://i.stack.imgur.com/mnIhR.png)
So therefore I have made a MATLAB-code to use:
```
function [W] = lda(varargin)
% Check if there is any input
if(isempty(varargin))
error('Missing inputs')
end
% Get impulse response
if(length(varargin) >= 1)
X = varargin{1};
else
error('Missing data X')
end
% Get the sample time
if(length(varargin) >= 2)
y = varargin{2};
else
error('Missing class ID y');
end
% Get the sample time
if(length(varargin) >= 3)
c = varargin{3};
else
error('Missing amount of components');
end
% Get size of X
[row, column] = size(X);
% Create average vector mu_X = mean(X, 2)
mu_X = mean(X, 2);
% Count classes
amount_of_classes = y(end) + 1;
% Create scatter matrices Sw and Sb
Sw = zeros(row, row);
Sb = zeros(row, row);
% How many samples of each class
samples_of_each_class = zeros(1, amount_of_classes);
for i = 1:column
samples_of_each_class(y(i) + 1) = samples_of_each_class(y(i) + 1) + 1; % Remove +1 if you are using C
end
% Iterate all classes
shift = 1;
for i = 1:amount_of_classes
% Get samples of each class
samples_of_class = samples_of_each_class(i);
% Copy a class to Xi from X
Xi = X(:, shift:shift+samples_of_class - 1);
% Shift
shift = shift + samples_of_class;
% Get average of Xi
mu_Xi = mean(Xi, 2);
% Center Xi
Xi = Xi - mu_Xi;
% Copy Xi and transpose Xi to XiT and turn XiT into transpose
XiT = Xi';
% Create XiXiT = Xi*Xi'
XiXiT = Xi*XiT;
% Add to Sw scatter matrix
Sw = Sw + XiXiT;
% Calculate difference
diff = mu_Xi - mu_X;
% Borrow this matrix and do XiXiT = diff*diff'
XiXiT = diff*diff';
% Add to Sb scatter matrix - Important to multiply XiXiT with samples of class
Sb = Sb + XiXiT*samples_of_class;
end
% Use cholesky decomposition to solve generalized eigenvalue problem Ax = lambda*B*v
Sw = Sw + eye(size(Sw));
L = chol(Sw, 'lower');
Y = linsolve(L, Sb);
Z = Y*inv(L');
[V, D] = eig(Z);
% Sort eigenvectors descending by eigenvalue
[D, idx] = sort(diag(D), 1, 'descend');
V = V(:,idx);
% Get components W
W = V(:, 1:c);
end
```
And a working example
```
% Data for the first class
x1 = 2*randn(50, 1);
y1 = 50 + 5*randn(50, 1);
z1 = (1:50)';
% Data for the second class
x2 = 5*randn(50, 1);
y2 = -4 + 2*randn(50, 1);
z2 = (100:-1:51)';
% Data for the third class
x3 = 15 + 3*randn(50, 1);
y3 = 50 + 2*randn(50, 1);
z3 = (-50:-1)';
% Create the data matrix
X = [x1, y1, z1, x2, y2, z2, x3, y3, z3];
% Create class ID, indexing from zero
y = [0 0 0 1 1 1 2 2 2];
% How many dimension
c = 2;
% Plot original data
close all
scatter3(X(:, 1), X(:, 2), X(:, 3), 'r')
hold on
scatter3(X(:, 4), X(:, 5), X(:, 6), 'g')
hold on
scatter3(X(:, 7), X(:, 8), X(:, 9), 'b')
% Do LDA - Now what?
W = lda(X, y, c);
```
The $W$ matrix contains a lot of eigenvectors. What I need to do is multiply $W$ with $X$, but the problem is that it's not possible because the dimensions don't match. I can transpose $W$, but still, I don't think that's the right method to use.
So how can I project the data with the eigenvectors from LDA?
| So how can I project the data with the eigenvectors from LDA? | CC BY-SA 4.0 | null | 2023-04-05T18:48:35.147 | 2023-04-05T18:48:35.147 | null | null | 275488 | [
"dimensionality-reduction",
"discriminant-analysis",
"projection"
] |
612026 | 1 | null | null | 0 | 23 | I'm analysing some non-randomised time to event data. I have 5 controls (demographic variables) and a variable of interest treatment. I only have missing data for 3/5 of the controls. The variables with missing data are continuous, continuous but bounded (0-100) and categorical (0-4). The level of missing data is less than 7% in all cases.
I haven't conducted missing data analysis before.
I'm a bit overwhelmed by the potential choices. From some initial reading, I've seen there seem to be two broad approaches: imputing a single value and analysing that completed dataset, versus imputing multiple datasets and pooling estimates from the resulting analyses. My dataset is not too large, so I thought I'd consider the latter, as it allows for uncertainty in the imputations. As for techniques, I've seen two general approaches that relate to the assumptions about the missing data (ignoring MNAR for the moment): MAR and MCAR.
I'm unsure if I'm correct in thinking that MCAR involves imputing the mean or sampling from the marginal distribution of the variable, and MAR involves imputing the conditional mean or sampling from the conditional distribution (conditional on the other variables in the dataset). As both these assumptions are untestable, I was going to use both and carry out a sensitivity analysis and opt for MCAR if results are similar as it doesn't assume a conditional model - although I'm unsure if that is more or less restrictive.
For actual methods to carry these two techniques out, I've also seen several. I don't have much time alloted to this piece of work so I can't experiment with a wide range of methods and carry out multiple sensitivity analyses. As such, are there some standard agreed upon methods for either than I can go with as a first pass at this analysis?
Any guidance on this would be highly appreciated, thanks.
| General guidance on missing data | CC BY-SA 4.0 | null | 2023-04-05T18:56:13.757 | 2023-04-05T18:56:13.757 | null | null | 211127 | [
"missing-data",
"data-imputation",
"multiple-imputation"
] |
612027 | 1 | null | null | 1 | 11 | I have different series from banks balance sheets in monetary terms covering various years. I think is best to use a price deflator to control for inflation. I'm attaching two graphs the before and after of applying the deflactor to the data. For every data in the series I'm applying the following formula: row value from the balance sheet *( deflator -from base year- / deflator -from year of the row data). Both deflators are from the world bank. [](https://i.stack.imgur.com/kR0Zn.jpg)[](https://i.stack.imgur.com/2Xue1.jpg)
Is it correct to do what I just mentioned?
| using price deflator in monteray series over different years | CC BY-SA 4.0 | null | 2023-04-05T18:56:39.443 | 2023-04-05T18:56:39.443 | null | null | 182290 | [
"time-series",
"econometrics"
] |
612028 | 1 | null | null | 0 | 19 | I am currently learning about confidence intervals for the population mean.
Assume we do not know the variance of the population.
Let $\bar{x}$ be the sample mean, $s$ be the sample standard deviation and $n$ the sample size.
I learnt the following:
- Use $\bar{x}\pm t_{\alpha,n-1}\frac{s}{\sqrt{n}}$ if the data is normally distributed, where $t_{\alpha,n-1}$ is the $\alpha$ t-score from the $T$ distribution with $n-1$ degrees of freedom.
- If the data is not normally distributed, then with a large enough sample we can use the z-interval $\bar{x}\pm z_{\alpha}\frac{s}{\sqrt{n}}$ where $z_{\alpha}$ is a suitable z-score.
The reason is that the t-ratio $T=(\bar{x}-\mu)/(s/\sqrt{n})$ becomes approximately normal with a large enough sample.
The lecture notes mention that we can consider $n=30$ a large enough sample size but without providing any explanation. So my question is simple: Why $n=30$?
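To make the claim concrete, here is a small simulation sketch (my own, not from the notes) checking how normal the t-ratio already looks at $n = 30$ for a clearly non-normal (uniform) population:

```python
import random, math

# Empirical check: for a Uniform(0, 1) population (mean 0.5), how often does
# |T| = |xbar - mu| / (s / sqrt(n)) fall below the normal cutoff 1.96 at n = 30?
# If T were exactly standard normal, this proportion would be 95%.

rng = random.Random(1)
mu, n, reps = 0.5, 30, 20_000
inside = 0
for _ in range(reps):
    x = [rng.random() for _ in range(n)]
    xbar = sum(x) / n
    s = math.sqrt(sum((xi - xbar) ** 2 for xi in x) / (n - 1))
    if abs(xbar - mu) / (s / math.sqrt(n)) < 1.96:
        inside += 1

print(inside / reps)  # typically around 0.94-0.95 already at n = 30
```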
| How many sample do we need for normality of t-ratio? | CC BY-SA 4.0 | null | 2023-04-05T19:09:59.230 | 2023-04-05T21:33:26.017 | null | null | 385023 | [
"normal-distribution",
"confidence-interval",
"t-distribution"
] |
612029 | 1 | null | null | 4 | 494 | I've seen a similar questions posted but I wasn't sure on the answers that were provided. Similar to when I try and look these methods up, I've seen more general abstract descriptions that were hard to understand.
I think I've read that "G-method" is just a general term for a model that estimates the ATE, and so encompasses the G-formula, G-estimation and G-computation, and also methods such as propensity scores and inverse weighting. If that is the case, what exactly is the method in each of the G-formula, G-estimation and G-computation?
Additionally, I also see structural models and structural nested mean models quite a lot. Are these the same as G-methods in that they aren't actually methods, just conceptualisations of types of causal models?
Any help would be really appreciated, thanks
| What is the difference between the G-formula, G-estmation, G-computation and G-methods | CC BY-SA 4.0 | null | 2023-04-05T19:15:13.490 | 2023-04-06T15:49:47.313 | null | null | 211127 | [
"causality"
] |
612031 | 2 | null | 578112 | 0 | null | To just test for the significance of a single fixed effect, you can run a hierarchical F-test which compares the model containing your factor of interest with a more parsimonous model that does not contain the factor. This gives you the desired p-value for the whole factor.
Example code
```
library(lme4)

## `g` is the grouping factor for the random intercept; `dat` holds the data
m0 <- lmer(dv ~ a + (1 | g), data = dat, REML = FALSE)
m1 <- lmer(dv ~ a + b + (1 | g), data = dat, REML = FALSE)
anova(m0, m1)  ## likelihood-ratio test for factor b
```
| null | CC BY-SA 4.0 | null | 2023-04-05T19:28:43.783 | 2023-04-05T19:28:43.783 | null | null | 277811 | null |
612032 | 2 | null | 611978 | 0 | null | I think I figure out the solution. Please, someone, confirm if I am right.
First, we need to know the distribution of penis length from the population we take this sample from. It's common to assume that measures of body parts are normally distributed, so we can use the normal dist.
We will not use the sample mean, but the individual measures of the sample. Let us start with a sample of 3 people with these values (16,17,19) to demonstrate how would be the algorithm for 30 values.
Let's see all possible cases.
For 0 liars we have:
S S S (see the notation explained at the next case)
For 1 liar we have:
L S S (this means first is Liar, second and third are sincere).
S L S
S S L
For 2:
L L S
L S L
S L L
for 3:
L L L
Let's take the first element, who said 16. The probability that he is sincere is:
$P(15 < X < 17)$ — those are all values within 1 cm of 16.
The probability that he is lying is then $1 - P(15 < X < 17)$.
Let's calculate all probabilities for our table:
For 0:
S S S = $(P(15 < X < 17))*(P(16 < X < 18))*(P(18 < X < 20))$
For 1:
L S S = $(1 - P(15 < X < 17))*(P(16 < X < 18))*(P(18 < X < 20))$
S L S = $(P(15 < X < 17))*(1 - P(16 < X < 18))*(P(18 < X < 20))$
S S L = $(P(15 < X < 17))*(P(16 < X < 18))*(1 - P(18 < X < 20))$
For 2:
L L S = $(1 - P(15 < X < 17))*(1 - P(16 < X < 18))*(P(18 < X < 20))$
L S L = $(1 - P(15 < X < 17))*(P(16 < X < 18))*(1 - P(18 < X < 20))$
S L L = $(P(15 < X < 17))*(1 - P(16 < X < 18))*(1 - P(18 < X < 20))$
for 3:
L L L = $(1 - P(15 < X < 17))*(1 - P(16 < X < 18))*(1 - P(18 < X < 20))$
Let's calculate all values for the table using the normal distribution CDF (remember we are supposing the real mean is 13.24 and the std is 1.89):
For 0:
S S S = $(0.1525)*(0.0662)*(0.0057) = 0.000057544349999999994$
For 1:
L S S = $(1 - 0.1525)*(0.0662)*(0.0057) = 0.00031979565$
S L S = $(0.1525)*(1 - 0.0662)*(0.0057) = 0.0008117056499999999$
S S L = $(0.1525)*(0.0662)*(1 - 0.0057) = 0.010037955649999998$
For 2:
L L S = $(1 - 0.1525)*(1 - 0.0662)*(0.0057) = 0.0045109543500000005$
L S L = $(1 - 0.1525)*(0.0662)*(1 - 0.0057) = 0.05578470434999999$
S L L = $(0.1525)*(1 - 0.0662)*(1 - 0.0057) = 0.14159279435$
for 3:
L L L = $(1 - 0.1525)*(1 - 0.0662)*(1 - 0.0057) = 0.78688454565$
Let's sum up all the probabilities. Let N be a random variable representing the number of liars:
$P(N = 0) = 0.000057544349999999994$
$P(N = 1) = 0.00031979565 + 0.0008117056499999999 + 0.010037955649999998 = 0.011169456949999998$
$P(N = 2) = 0.0045109543500000005 + 0.05578470434999999 + 0.14159279435 = 0.20188845304999997$
$P(N = 3) = 0.78688454565$
Let's calculate the expected value of N:
$E(N) = 0.000057544349999999994*0 + 0.011169456949999998*1 + 0.20188845304999997*2 + 0.78688454565*3$
$E(N) = 2.7756$
So, on average, samples with the measures [16, 17, 19] will have 2.78 liars, which means that probably all of them are lying.
Now we apply this algorithm to the sample of size 30.
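Since $N$ is a sum of per-person indicator variables, linearity of expectation gives the same $E(N)$ without enumerating all $2^n$ liar/sincere combinations — which matters for the real sample of 30, where there would be $2^{30}$ cases. A standard-library Python sketch with the same assumed mean 13.24 and std 1.89:

```python
from math import erf, sqrt

def norm_cdf(x, mu, sd):
    """CDF of a N(mu, sd^2) variable, via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sd * sqrt(2.0))))

mu, sd = 13.24, 1.89   # assumed population mean and standard deviation
claims = [16, 17, 19]  # reported lengths
tol = 1.0              # "sincere" = true value within 1 cm of the claim

# P(sincere) for each person: probability the true length is within tol of the claim
p_sincere = [norm_cdf(c + tol, mu, sd) - norm_cdf(c - tol, mu, sd) for c in claims]

# Linearity of expectation: E[N] is the sum of the individual lying probabilities
expected_liars = sum(1.0 - p for p in p_sincere)
print(round(expected_liars, 2))  # -> 2.78, matching the enumeration above
```

For the full sample of 30, just extend `claims` with the remaining reported values.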
| null | CC BY-SA 4.0 | null | 2023-04-05T19:30:22.280 | 2023-04-05T21:18:20.477 | 2023-04-05T21:18:20.477 | 327798 | 327798 | null |
612033 | 1 | null | null | 0 | 44 | I am using DML like this -
Suppose these are my treatment, control and outcome variables:
T = a
X= [b,c,d,e]
y = f
Model 1: Random forest on T ~ X
Model 2: Random forest on y ~ X
Final Model: OLS over Model 2 residuals and Model 1 residuals: y_res ~ T_res
The Beta coefficient that I get for T_res, I am considering that as ATE.
Questions:
- Is this approach correct?
- How to do model validation in this case? I came across cumulative gain curve but for that, we would need random data which I don't have. If I generate it based on some distribution, I have no way of back calculating y for that random data so my cumulative gain curve is all over the place.
| Model Validation for Double Machine learning for Causal Inference | CC BY-SA 4.0 | null | 2023-04-05T19:51:56.267 | 2023-04-05T19:51:56.267 | null | null | 346975 | [
"machine-learning",
"inference",
"dataset",
"causality"
] |
612034 | 2 | null | 432754 | 1 | null | An interesting property of AUC is that it does not change unless you change the ordering of the points. For instance, if you divide every value by two, the AUC is the same.
```
library(pROC)
set.seed(2023)
N <- 1000
p <- rbeta(N, 1, 1)
y <- rbinom(N, 1, p)
pROC::roc(y, p)$auc # I get 0.8481
pROC::roc(y, p/2)$auc # Again, I get 0.8481
```
In this regard, the AUC does not consider calibration; AUC does not penalize the model for making predictions that are detached from reality, such as having events with predictions equal to $0.2$ that happen $50\%$ of the time. AUC is strictly a measure of ability to discriminate between categories.
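The same invariance can be seen without any packages; the `auc` (Mann–Whitney form) and `log_loss` helpers below are hand-rolled sketches, not library functions:

```python
import random, math

# AUC is rank-based, so dividing every prediction by 2 leaves it unchanged,
# while log loss (which also cares about calibration) gets worse.

def auc(y, p):
    # Mann-Whitney formulation: fraction of (positive, negative) pairs ranked correctly
    pos = [pi for yi, pi in zip(y, p) if yi == 1]
    neg = [pi for yi, pi in zip(y, p) if yi == 0]
    wins = sum((pp > pn) + 0.5 * (pp == pn) for pp in pos for pn in neg)
    return wins / (len(pos) * len(neg))

def log_loss(y, p):
    return -sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
                for yi, pi in zip(y, p)) / len(y)

rng = random.Random(2023)
p = [rng.random() for _ in range(1000)]          # true event probabilities
y = [1 if rng.random() < pi else 0 for pi in p]  # outcomes drawn from them
half = [pi / 2 for pi in p]

print(auc(y, p) == auc(y, half))           # True: the ordering is unchanged
print(log_loss(y, half) > log_loss(y, p))  # True: the calibration is destroyed
```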
Log loss, however, considers both calibration and discrimination. The function penalizes predictions of category $1$ members for being away from $1$ and predictions of category $0$ members for being away from $0$, so it certainly covers discrimination. Calibration is harder to see from the equation, but by being a [strictly proper scoring rule](https://stats.stackexchange.com/a/493940/247274), it can be thought of as seeking out the true conditional probabilities of class membership (which we hope are extreme so we get good discrimination between categories, but we are not assured of that). Brier score, which is another strictly proper scoring rule, has an explicit decomposition into calibration and discrimination.
What this result tells me is that you are harming your calibration without making much improvement to your discrimination, and I would consider this a net negative.
The reason you harm your calibration is that you do not penalize mistakes equally in your loss function. Your loss function is designed to give especially high probabilities of membership in the minority class, and when you go test your model on data that have a low probability of membership in the minority class, the probability is overestimated. This is considered a feature, not a bug, by proponents of weighted loss functions.
Your ability to discriminate between classes has minimal change because the goal of the weighted loss is just to increase the probability values in order to have more predictions of the minority class that are large. This is not a perfect analogy, but it is as if you just divide your predictions by the largest predicted probability value. You do not change the order, so the ability for the model to discriminate between categories does not change, but doing so means that you get higher predicted values.
Mostly, [class imbalance is a non-problem for proper statistical methods](https://stats.stackexchange.com/questions/357466/are-unbalanced-datasets-problematic-and-how-does-oversampling-purport-to-he), and attempts to "fix" class imbalance typically stem from using a threshold of $0.5$ and trying to force your predictions to fall on the correct side of that threshold, which seems to be how the weighted loss function is used here.
However, you do not have to use $0.5$ as a threshold. In fact, you do not have to use any threshold at all, and the raw predictions can be useful. [This link](https://stats.stackexchange.com/a/312787/247274) gives a good discussion of why and links to other good material.
| null | CC BY-SA 4.0 | null | 2023-04-05T20:06:41.797 | 2023-04-09T16:04:38.610 | 2023-04-09T16:04:38.610 | 247274 | 247274 | null |
612035 | 1 | null | null | 0 | 28 | I am working with data collected in a community. We know the basic information about the community, let's say the racial composition is 60% black and 40% white, in a city of population = 500,000. I now have a list of incidents (fewer than 200 of them in total). Let's assume people involved in these incidents are either black or white. I want to be able to test if probability of a racial group involved in these incidents are unusually high/low, considering the racial composition that we know of this city. We don't have data for people who are NOT involved.
My question is: what tests should I use? I recently attended a talk about Bayesian analysis, so I'm leaning in that direction. Given the small number of incidents, perhaps Poisson regression will be more appropriate?
Also, these incidents happened in different neighborhoods, maybe 10-15 per neighborhood. It seems we need to rely on some mixed-effects modeling. Is this right?
Or I may be overthinking.
Thank you very much.
| is bayesian poisson regression remotely relevant in this scenario? | CC BY-SA 4.0 | null | 2023-04-05T20:10:29.050 | 2023-04-05T20:10:29.050 | null | null | 274037 | [
"bayesian",
"mixed-model",
"poisson-regression",
"demography"
] |
612038 | 1 | 612040 | null | 0 | 35 | I am trying to calculate the variance of a truncated normal distribution, var(X | a < X < b), given the expected value and variance of the unbound variable X. I believe I found the corresponding formula on wikipedia (see picture below), but as a Psychologist I am not trained in mathematics and cannot read the formula.
Could somebody show me how to do the calculations for an exemplary case?
Let's say for example if a=0, b=1, var(X) = 0.5, E(X) = 0.5, then what is var(X | a < X < b)?
I would be super greatful for help.
All the Best,
ajj
[](https://i.stack.imgur.com/aQunX.png)
| Variance of truncated normal distribution | CC BY-SA 4.0 | null | 2023-04-05T20:19:21.107 | 2023-04-05T21:06:15.020 | null | null | 383607 | [
"variance",
"truncated-normal-distribution",
"example"
] |
612039 | 2 | null | 433657 | 1 | null | Accuracy is a measure of how well the predictions correspond with the true categories after you apply some rule to the raw output to convert those values into hard classifications. The most common of these rules is to take the category that has the highest predicted probability, which corresponds to assigning to category $0$ if the prediction is below $0.5$ and to category $1$ if the prediction is above $0.5$. (For the rare case of having a prediction of exactly $0.5$, perhaps randomize in some way. This could be a discussion of its own.)
(This gets messier if you do not predict on $[0,1]$, but the idea remains that you pick a threshold where predictions below are assigned to one category and predictions above are assigned to the other.)
AUC is strictly a measure of ability to discriminate between the categories and has no regard for calibration (if events predicted to happen with probability $p$ really do happen with probability $p$). In fact, dividing every prediction by two does not change the AUC, which I demonstrate below with the exact same code I used in an [answer](https://stats.stackexchange.com/questions/432754/why-log-loss-auc-and-precision-recall-change-differently-when-class-imbalance/612034#612034) I posted a few minutes ago that might be of interest.
```
library(pROC)
set.seed(2023)
N <- 1000
p <- rbeta(N, 1, 1)
y <- rbinom(N, 1, p)
pROC::roc(y, p)$auc # I get 0.8481
pROC::roc(y, p/2)$auc # Again, I get 0.8481
```
The `p` and `p/2` cannot both be calibrated, unless `p` is always zero and the event never occurs.
Finally, the log-loss measures both prediction calibration and the ability of the model to discriminate between the categories.
Overall, these three statistics (accuracy, AUC, log-loss) measure totally different aspects of the model in different ways. There is no reason to expect the three to agree on the best model, and what constitutes the best model for your work will depend on what you value and why you bother to do any modeling at all. If you need a model that tends to make good decisions when you apply a threshold (such as in software that lacks a human in the loop), maybe optimizing accuracy will be good for you (though I would advise giving thought to whether or not a threshold of $0.5$ is appropriate for your task). If you need to put the predictions in the correct order but otherwise do not care what the predictions are, AUC could be your friend. If you value having accurate probabilities, then a [strictly proper scoring rule like log-loss or Brier score](https://stats.stackexchange.com/a/493940/247274) could be your performance metric of interest.
| null | CC BY-SA 4.0 | null | 2023-04-05T20:23:02.917 | 2023-04-05T20:23:02.917 | null | null | 247274 | null |
612040 | 2 | null | 612038 | 0 | null | in R you could do:
```
a <- 0
b <- 1
vr <- 0.5
mu <- 0.5
alpha <- (a - mu)/sqrt(vr)
beta <- (b - mu)/sqrt(vr)
denom <- pnorm(beta) - pnorm(alpha)
vr * (1 - (beta*dnorm(beta) - alpha*dnorm(alpha))/denom -
((dnorm(beta) - dnorm(alpha))/denom)^2)
[1] 0.07791413
```
You can compare this with simulated data:
```
set.seed(20)
var((x<-rnorm(3e6, mu,sqrt(vr)))[x>a&x<b])
[1] 0.07791103
```
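For readers without R, here is an equivalent pure-Python sketch of the same Wikipedia formula (the function names are my own):

```python
from math import erf, exp, pi, sqrt

def phi(x):
    """Standard normal pdf."""
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def Phi(x):
    """Standard normal cdf, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def truncated_var(a, b, mu, var):
    """var(X | a < X < b) for X ~ N(mu, var), per the Wikipedia formula."""
    sd = sqrt(var)
    alpha, beta = (a - mu) / sd, (b - mu) / sd
    Z = Phi(beta) - Phi(alpha)
    term1 = (beta * phi(beta) - alpha * phi(alpha)) / Z
    term2 = ((phi(beta) - phi(alpha)) / Z) ** 2
    return var * (1.0 - term1 - term2)

print(round(truncated_var(0, 1, 0.5, 0.5), 6))  # -> 0.077914, matching the R result
```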
| null | CC BY-SA 4.0 | null | 2023-04-05T20:56:27.267 | 2023-04-05T21:06:15.020 | 2023-04-05T21:06:15.020 | 180862 | 180862 | null |
612041 | 2 | null | 579134 | 1 | null | The logistic regression optimizes the log loss between the predicted probabilities and categories (coded as $0$ and $1$). The true observations are the $y_i$, and the predictions are the $\hat y_i = \dfrac{1}{
1 + \exp\left(-x_i^T\hat\beta\right)
}
$, where $\hat\beta$ is the vector of estimated parameters for the regression model.
$$
L(y,\hat y) = -\dfrac{1}{N}\overset{N}{\underset{i=1}{\sum}}\left[
y_i\log(\hat y_i) + (1 - y_i)\log(1 - \hat y_i)
\right]
$$
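A direct, illustrative transcription of the two formulas above (the values of `beta`, `X`, and `y` are made up):

```python
import math

# y_hat_i = 1 / (1 + exp(-x_i' beta)), then the averaged log loss L(y, y_hat).
beta = [0.5, -1.0]                       # assumed fitted coefficients
X = [[1.0, 0.2], [1.0, 1.5], [1.0, -0.4]]  # rows x_i (with an intercept column)
y = [1, 0, 1]

def predict(x):
    z = sum(xj * bj for xj, bj in zip(x, beta))
    return 1.0 / (1.0 + math.exp(-z))

y_hat = [predict(x) for x in X]
L = -sum(yi * math.log(yh) + (1 - yi) * math.log(1 - yh)
         for yi, yh in zip(y, y_hat)) / len(y)
print(round(L, 4))  # -> 0.4029
```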
Notice that this is not the $AUC$. That is, logistic regression parameter estimation does not seek out the largest $AUC$. If some parameter values decrease the $AUC$ yet also decrease the log-loss, those parameter values will be preferred over ones giving a higher $AUC$. This can happen if some ability to discriminate between the categories is exchanged for better calibration of the predicted probabilities, as $AUC$ only cares about the ability of the model to distinguish between the two categories.
Therefore...
>
I'm confused as to how the AUC could go lower with the addition of more variables. Even if the new variables have zero predictive power, why wouldn't the coefficients just be set to zero and the AUC stay at ~.83?
If you train by maximizing the $AUC$, you will observe this, short of numerical issues that arise from doing this kind of strange optimization. However, the default of logistic regression is to find the parameters that optimize log-loss, not $AUC$.
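Here is a toy numerical sketch of that trade-off (the labels and the two candidate prediction vectors are invented): the first vector has the better (lower) log-loss even though the second has the better (higher) AUC, so a log-loss optimizer would prefer the first.

```python
import math

def log_loss(y, p):
    return -sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
                for yi, pi in zip(y, p)) / len(y)

def auc(y, p):
    # fraction of (positive, negative) pairs ranked correctly
    pos = [pi for yi, pi in zip(y, p) if yi == 1]
    neg = [pi for yi, pi in zip(y, p) if yi == 0]
    return sum((a > b) + 0.5 * (a == b)
               for a in pos for b in neg) / (len(pos) * len(neg))

y = [0, 1, 1]
p_a = [0.50, 0.49, 0.97]  # one pair misranked, but confident where it counts
p_b = [0.50, 0.51, 0.52]  # perfectly ranked, but barely discriminates

print(log_loss(y, p_a), auc(y, p_a))  # lower log-loss, AUC = 0.5
print(log_loss(y, p_b), auc(y, p_b))  # higher log-loss, AUC = 1.0
```

In other words, an optimizer chasing log-loss will happily give up a perfectly ranked solution for one whose probabilities are better overall.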
Another answer mentions that logistic regression lacks a closed-form solution and must be solved numerically, putting some of the blame on the merely approximate "solution" given as the logistic regression parameters. While it is true that logistic regression lacks a closed-form solution, modern implementations of the numerical optimization are so good that it almost might as well have one.
| null | CC BY-SA 4.0 | null | 2023-04-05T21:02:08.773 | 2023-04-05T21:02:08.773 | null | null | 247274 | null |
612042 | 1 | null | null | 0 | 11 | Chapter 10, Problem 2 from The Analysis of Time Series by Chatfield and Xing:
The problem says, "Consider the following special case of the linear growth model:"
$$ X_t = \mu_t + n_t $$
$$ \mu_t = \mu_{t-1} + \beta_{t-1} $$
$$ \beta_t = \beta_{t-1} + w_t $$
where $n_t$ and $w_t$ are independent normal with zero means and respective variances $\sigma_n^2$ and $\sigma_w^2$.
The problem has the solver show that the initial least squares estimator of the state vector at time 2, i.e., $[\mu_2, \beta_2]^{T}$, is $[X_2, X_2-X_1]$, and I'm good with that.
We are also to show that the covariance matrix for this vector is
$$P_2 = \begin{bmatrix} \sigma_n^2 & \sigma_n^2 \\ \sigma_n^2 & 2\sigma_n^2 + \sigma_w^2 \end{bmatrix} $$
This latter is tripping me up. The answer in the book seems to suggest that the expected value of $X_2 - X_1$ is $\beta_2$, for it computes $\mathrm{Var}[X_2-X_1]$ as
$$E[(X_2 - X_1 - \beta_2)^2] = E[(n_2-n_1+\beta_2-w_2-\beta_2)^2] = 2 \sigma_n^2 + \sigma_w^2 $$
But using the given system, one could write either $X_2 - X_1 = \beta_2 - w_2 + n_2 - n_1$ or $X_2 - X_1 = \beta_1 + n_2 - n_1$. If I pretend that $\beta_2$ and $\beta_1$ are values that have been fixed by time 2, then the first leads to the expected value of $X_2 - X_1$ being $\beta_2$, but the second leads to it being $\beta_1$. Using $\beta_1$, the same computation as above would lead to
$$E[(X_2 - X_1 - \beta_2)^2] = 2\sigma_n^2 $$
It seems wrong to me to pretend the values $\beta_1$ and $\beta_2$ are fixed and non-stochastic, and getting two different values seems to confirm my impression.
I guess I am confused about how any of these recursively-defined systems are started off, and the book doesn't seem to say. It only talks about estimating the initial values from the first few values of $X_t$. I would think that the initial value of $\beta_1$ would be $w_1$, and then I am happy with the value
$$\mathrm{Var}[X_2-X_1] = \mathrm{Var}[\beta_1 + n_2 - n_1] = 2 \sigma_n^2 + \sigma_w^2 $$
given as the answer. But then I would think that
$$ \mathrm{Var}[X_2] = \mathrm{Var}[\mu_1 + w_1 + n_1] $$
I don't know what to consider $\mu_1$ to be for this model, but I have trouble reconciling this thinking with the answer the book gives, that
$$ \mathrm{Var}[X_2] = \sigma_n^2 $$
Any guidance here is appreciated.
| Problem from Chatfield and Xing on State-Space Model | CC BY-SA 4.0 | null | 2023-04-05T21:38:35.150 | 2023-04-05T22:15:38.303 | 2023-04-05T22:15:38.303 | 71439 | 71439 | [
"time-series",
"state-space-models"
] |
612043 | 2 | null | 535123 | 1 | null | The contrast matrix you use is not square, it does not have a regular inverse. You need to use the Moore-Penrose (pseudo-)inverse of the contrast matrix, which is calculated by MASS::ginv().
Starting from your contrast matrix, what you want to calculate is:
\begin{equation}
\begin{pmatrix}
1 & -0.5 & -0.5 \\
1 & 0 & -1 \\
\end{pmatrix}\
\begin{pmatrix}\mu_{a} \\\mu_{b} \\\mu_{c}\end{pmatrix}
= \begin{pmatrix} \mu_{a}-\frac{(\mu_{b}+\mu_{c})}{2} \\\mu_{a}-\mu_{c} \end{pmatrix}
\end{equation}
Which are the comparisons you want, right?
If this is the case, your contrast matrix is actually
```
t(contr_mat)
[,1] [,2] [,3]
[1,] 1 -0.5 -0.5
[2,] 1 0.0 -1.0
```
So why do you need to take the inverse?
Let's construct the inverse, with an added intercept:
```
C1 <- cbind(1, ginv(t(contr_mat)))
round(C1, 2)
[,1] [,2] [,3]
[1,] 1 0.67 0
[2,] 1 -1.33 1
[3,] 1 0.67 -1
```
If we invert C1 back, we get:
```
round(ginv(C1), 2)
[,1] [,2] [,3]
[1,] 0.33 0.33 0.33
[2,] 1.00 -0.50 -0.50
[3,] 1.00 0.00 -1.00
```
Which is your original contrast matrix, with an added grand mean row:
\begin{equation}
C\mu = \begin{pmatrix}
1/3 & 1/3 & 1/3 \\
1 & -0.5 & -0.5 \\
1 & 0 & -1 \\
\end{pmatrix}\
\begin{pmatrix}
\mu_{a} \\\mu_{b} \\\mu_{c}\end{pmatrix}
= \begin{pmatrix}
\frac{\mu_{a}+\mu_{b}+\mu_{c}}{3} \\
\mu_{a}-\frac{(\mu_{b}+\mu_{c})}{2} \\
\mu_{a}-\mu_{c} \end{pmatrix}
\end{equation}
So we took your original contrast matrix, transposed it, took the pseudoinverse, and added an intercept column. If we invert this matrix, we recover your contrasts.
Why did we invert your matrix?
Note that the matrix we built above (with C1 <- cbind(1, ginv(t(contr_mat))) ) is square and full rank: you can take the regular inverse of this matrix, just as for C. Let's then call it $C^{-1}$, because a property of a regular inverse is:
$C^{-1} C = CC^{-1} = I$, with $I$ the identity matrix.
The model supporting your comparisons and used with the lm() function can be expressed as:
$y = X\mu + \epsilon$
Where X is the design matrix, $\mu$ the vector of means, and $\epsilon$ the random component with normal distribution.
If X is the design matrix for the cell means model, lm() calculates the means using the least square equation:
$\hat{\mu} =(X^{\prime }X)^{-1}\ X^{\prime }y$
Where $X^{\prime}$ is the transpose of $X$.
If we now modify the design matrix and coefficients to incorporate C, your contrast matrix, we get:
\begin{equation}
y = X\mu + \epsilon = XI\mu + \epsilon = X(C^{-1}C)\mu + \epsilon = (XC^{-1})(C\mu) + \epsilon
\end{equation}
So you can use the modified design matrix $(XC^{-1})$ to evaluate your contrasts $(C\mu)$, using the least square method, in the context of your original model.
If you use the modified design matrix $(XC^{-1})$, the least squares equations become:
\begin{equation}
\widehat{C\mu} =[(XC^{-1})^{\prime }XC^{-1}]^{-1}\ (XC^{-1})^{\prime }y
\end{equation}
And you get your desired contrasts $C\hat{\mu}$.
That's why you need to calculate the inverse of C, to evaluate the contrasts $C\mu$.
To show that it works in R with your example:
```
library(MASS)     # for ginv()
library(magrittr) # for the %>% pipe used below

x = sample(letters[1:3], size = 300, replace = T)
y = rnorm(300)
contr_mat = matrix(c(1, -0.5, -0.5,
1, 0, -1),
nrow = 3, ncol = 2)
```
Let's build the modified design matrix:
```
X <- model.matrix(~x + 0) # model matrix for the cell means model
X1 <- X %*% cbind(1,ginv(t(contr_mat))) # this is the modified model matrix, using the pseudoinverse.
```
And we solve by the method of least squares or by lm(). lm() takes the inverse of your contrast matrix as argument and adds the intercept term, then evaluates the contrasts by the least squares method:
```
# least-squares equation:
solve ( t(X1) %*% X1 ) %*% t(X1) %*% y
[,1]
[1,] -0.06524626
[2,] 0.03627632
[3,] 0.04820965
# lm() with modified design matrix
lm(y ~ X1 +0 ) %>% summary
Call:
lm(formula = y ~ X1 + 0)
Residuals:
Min 1Q Median 3Q Max
-3.1953 -0.7073 0.0250 0.7382 2.7251
Coefficients:
Estimate Std. Error t value Pr(>|t|)
X11 -0.06525 0.06049 -1.079 0.282
X12 0.03628 0.12596 0.288 0.774
X13 0.04821 0.14336 0.336 0.737
# lm() with your contrasts
lm(y ~ x, contrasts = list(x = ginv(t(contr_mat)))) %>% summary
Call:
lm(formula = y ~ x, contrasts = list(x = ginv(t(contr_mat))))
Residuals:
Min 1Q Median 3Q Max
-3.1953 -0.7073 0.0250 0.7382 2.7251
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.06525 0.06049 -1.079 0.282
x1 0.03628 0.12596 0.288 0.774
x2 0.04821 0.14336 0.336 0.737
```
The last two rows contain the comparisons you want:
```
means <- tapply(X = y, INDEX = factor(x), FUN = mean)
C <- ginv(C1) # the full contrast matrix, including the grand-mean row
C %*% means
[,1]
[1,] -0.06524626
[2,] 0.03627632
[3,] 0.04820965
(means[1] + means[2] + means[3])/3
-0.06524626
means[1] - (means[2] + means[3])/2
0.03627632
means[1] - means[3]
0.04820965
```
| null | CC BY-SA 4.0 | null | 2023-04-05T21:40:21.307 | 2023-04-06T22:24:08.930 | 2023-04-06T22:24:08.930 | 383873 | 383873 | null |
612044 | 1 | null | null | 0 | 13 | I am developing a logistic regression model that predicts pregnancy. One of the variables to be included has multiple overlapping levels.
Someone may be coded as having a gestational age of 8-42 weeks. Alternately they may be coded as 10 to 19 weeks, 20 to 29 weeks, etc. in gestational age. The issue that I foresee is the interdependence between the two coding systems.
Will the inclusion of a gestational weeks continuous variable from 8 to 42 with alternate dummy codes of 10 to 19 weeks create multi-collinearity in the model?
Specifically would having a continuous variable of 8 to 42 weeks and an alternate code of 10-19 destabilize the models predictor estimates.
Note: Logically it would seem that any gestational age would predict pregnancy; however, reports of gestational age are sometimes erroneous.
| Overlapping Dummy Variables in Logistic Regression | CC BY-SA 4.0 | null | 2023-04-05T21:50:12.330 | 2023-04-05T21:50:12.330 | null | null | 138931 | [
"logistic",
"multiple-regression",
"categorical-encoding",
"overlapping-data"
] |
612045 | 1 | null | null | 1 | 16 | I've built a Logistic Regression model in Python for the likelihood of an individual doing an action in the next n days. I am not very experienced at this!
My data comprises one row per individual. I have >500k individuals. I have five individual-level features, and a sixth time-based feature: median gap between actions.
I've built a model with a test F1 score >0.95. My question: is this sufficient to predict individuals doing this action in the next n days -- or am I accidentally classifying them according to whether they'll do this at all?
| Logistic regression for probability of action in next n days | CC BY-SA 4.0 | null | 2023-04-05T21:51:56.923 | 2023-04-05T21:51:56.923 | null | null | 293924 | [
"machine-learning",
"time-series",
"logistic",
"python",
"scikit-learn"
] |
612046 | 1 | null | null | 0 | 25 | Let $y\sim N(X\beta, I\sigma^2)$. We have the following estimator for the vector $\beta$,
\begin{align}
\hat{\beta} &= (X^\top X)^{-1} X^\top y
\\
Var(\hat{\beta}) &= \sigma^2(X^\top X)^{-1}
\end{align}
and we may estimate the $\beta$ covariance matrix by plugging in the unbiased estimate of $\sigma^2$, denoted by $\hat{\sigma}^2$.
The sample covariance matrix of a real-valued $N\times p$ matrix $\mathbf{X}$ is given by
\begin{align}
\hat{\Sigma}_{\mathbf{X}} &=
{1 \over N-1} (\mathbf{X}^\top - \bar{\mathbf{x}}\mathbf{1}^\top)(\mathbf{X}^\top - \bar{\mathbf{x}}\mathbf{1}^\top)^\top
\\
&={1 \over N-1}\left( \mathbf{X}^\top\mathbf{X} - \mathbf{X}^\top\mathbf{1}\bar{\mathbf{x}}^\top - \bar{\mathbf{x}}\mathbf{1}^\top\mathbf{X} + \bar{\mathbf{x}}\mathbf{1}^\top \mathbf{1}\bar{\mathbf{x}}^\top\right)
\\
&={1 \over N-1} \left(\mathbf{X}^\top\mathbf{X} - N\bar{\mathbf{x}}\bar{\mathbf{x}}^\top - N\bar{\mathbf{x}}\bar{\mathbf{x}}^\top + N\bar{\mathbf{x}}\bar{\mathbf{x}}^\top\right)
\\
&= {1 \over N-1} \left(\mathbf{X}^\top\mathbf{X} - N\bar{\mathbf{x}}\bar{\mathbf{x}}^\top\right)
\end{align}
using $\mathbf{X}^\top\mathbf{1} = N\bar{\mathbf{x}}$ and $\mathbf{1}^\top\mathbf{1} = N$.
Based on the above calculations, are we allowed to say that the precision of the fixed effects increases when our sample of $X$ has a larger dispersion?
| Standard errors of $\beta$ in multiple linear regression and covariance of the design matrix $X$ | CC BY-SA 4.0 | null | 2023-04-05T21:58:41.257 | 2023-04-06T01:11:39.063 | 2023-04-06T01:11:39.063 | 178156 | 178156 | [
"multiple-regression",
"standard-error"
] |
612047 | 1 | null | null | 2 | 11 | Can anyone point me in the right direction for doing sample size/power calculations for hierarchical log-linear analysis of nominal (frequencies) data? Would I get in trouble with reviewers for just using the old rule-of-thumb to get 100 cases per cell?
| Power for hierarchical loglinear model | CC BY-SA 4.0 | null | 2023-04-05T22:06:13.227 | 2023-04-05T22:06:13.227 | null | null | 190333 | [
"sample-size",
"statistical-power",
"frequency",
"log-linear"
] |
612048 | 1 | null | null | 1 | 26 | I have daily sales (total volume in dollars) from 200 stores of the same franchise, over two years. I would like to identify any store with anomalies or special patterns, which could be the sign of a fraud or just a different management (good or bad).
My problem is that the stores are not of the same size, hence the daily sales have different volumes. The distribution of the sales may need a transformation before comparison. I thought I could just divide the sales by their mean at each given store, and then proceed with comparisons such as the Anderson-Darling test. I am not sure of the statistical consequences of dividing by the mean. Another option is to standardize the distributions, but that loses information on the standard deviations.
What preprocessing approach would you recommend for the cleanest and most informative comparison between all these distributions?
| Transformation to compare many distributions with different means and variances | CC BY-SA 4.0 | null | 2023-04-05T22:16:48.470 | 2023-04-05T22:16:48.470 | null | null | 385028 | [
"data-transformation",
"dataset",
"standardization"
] |
612049 | 1 | 612058 | null | 10 | 250 | Consider the $\{N(\theta,1):\theta \in \Omega\}$ family of distributions where $\Omega=\{-1,0,1\}$. I am trying to show that this is not a complete family. That is, if $X\sim N(\theta,1)$, I need to find a non-zero function $g$ for which $E_{\theta}[g(X)]=0$ for every $\theta \in \mathbb \Omega$.
Now,
$$E_{\theta}[g(X)]=0,~~\forall\,\theta \iff \int_{-\infty}^\infty g(x)e^{-(x-\theta)^2/2}\,dx=0,~~\forall\, \theta.$$
But any $g$ I can think of depends on $\theta$. I realize $g$ must be chosen in a way such that the elements of $\Omega$ are among the solutions of the equation $E_{\theta}[g(X)]=0$. How does one come up with an appropriate choice of $g$ from the above equation? A hint would be great.
The source of the problem is Ex.21 (page 23) of [this](https://www.stat.purdue.edu/%7Edasgupta/expfamily.pdf) note where I have modified the parameter space to make things slightly simpler.
| How to show that $\{N(\theta,1):\theta \in \Omega\}$ is not a complete family of distributions when $\Omega$ is finite? | CC BY-SA 4.0 | null | 2023-04-05T22:28:48.900 | 2023-05-05T17:41:34.223 | 2023-04-25T09:17:16.990 | 56940 | 119261 | [
"mathematical-statistics",
"normal-distribution",
"exponential-family",
"complete-statistics"
] |
612050 | 1 | null | null | 0 | 31 | Suppose $X_1, ..., X_n$ i.i.d.~ $\mathscr{N}(0, \theta^2)$ with unknown $\theta>0$.
I have an estimator $\hat{\theta}_n=\sqrt{\frac{\pi}{2}}\frac{1}{n}\sum_{i=1}^{n}|X_i|$ for $\theta$. How do I show that $\hat{\theta}_n$ is a consistent estimator for $\theta$?
I don't really understand how to show an estimator to be consistent, and the absolute value is throwing me off a bit. Any help would be really appreciated. Thank you.
| Showing an estimator is consistent for a parameter | CC BY-SA 4.0 | null | 2023-04-05T22:38:36.773 | 2023-04-05T22:38:36.773 | null | null | 372599 | [
"mathematical-statistics",
"normal-distribution",
"estimators",
"consistency"
] |
612051 | 2 | null | 612049 | 4 | null | We can use a function $f$ that is odd about each point in $\Omega$ since then $\int_{\mathbb R} f(x) e^{-(x-\theta)^2}\,\text dx = 0$ for each $\theta$. This suggests a periodic function and so we can get an example by picking a sine wave with frequency high enough to be odd about each $\theta$, so e.g. in your case $\sin (\pi x)$ does the trick.
| null | CC BY-SA 4.0 | null | 2023-04-05T23:17:45.763 | 2023-04-05T23:37:06.997 | 2023-04-05T23:37:06.997 | 30005 | 30005 | null |
612053 | 2 | null | 597339 | 1 | null | >
Because I've read that using 0.01/99.9 would lead to the algorithm always returning the class of the majority label.
Most models, such as logistic regressions, neural networks, and random forests, give predictions on a continuum, rather than making hard classifications. The way software packages get a categorical prediction is by comparing that continuous prediction to some kind of threshold and deciding on the category by seeing the side of the threshold on which the prediction falls (above, the prediction goes to the positive category; below, to the negative). These continuous predictions often lie in the interval $[0,1]$ and can have an interpretation as probability, so the typical threshold is $0.5$.
In an unbalanced problem, it might be that the probability of being in the minority class never exceeds $0.5$, meaning that correct performance might be to having every prediction fall below this standard threshold. Consequently, some suggest to artificially balance the categories so the predictions can be higher. After all, a higher [prior probability](https://stats.stackexchange.com/a/583115/247274) (the proportion of observations that belong to the minority class) leads to a higher posterior probability by Bayes' theorem, all else equal. Below, $P(Y=1)$ is taken as the prior probability.
$$
P(Y = 1\vert X = x)
=
\dfrac{
P(X = x\vert Y = 1)P(Y = 1)
}{
P(X = x)
}
$$
Not all else is equal, but this is the idea behind artificial balancing (e.g., ROSE, SMOTE), and the results show that artificial balancing indeed increases predicted probabilities.
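To see the mechanism numerically, here is a minimal sketch (the two likelihood values are invented for illustration) of how raising the prior in Bayes' theorem, all else equal, pushes the posterior across the usual $0.5$ threshold:

```python
def posterior(prior, lik_pos, lik_neg):
    # P(Y = 1 | X = x), with P(X = x) expanded by the law of total probability
    return lik_pos * prior / (lik_pos * prior + lik_neg * (1 - prior))

lik_pos, lik_neg = 0.6, 0.2  # hypothetical P(X = x | Y = 1) and P(X = x | Y = 0)

print(posterior(0.01, lik_pos, lik_neg))  # natural 1% prior: about 0.029
print(posterior(0.50, lik_pos, lik_neg))  # artificially balanced prior: about 0.75
```

The likelihood ratio is the same in both cases; only the inflated prior moves the prediction from "well below $0.5$" to "well above $0.5$."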
Consequently, this artificial balancing can be expected to be successful in making the model make higher predictions that exceed the $0.5$ threshold more often. However, your predictions are now, in some sense, not telling the truth. A prediction of $0.8$ does not mean the event is expected to happen with probability $0.8$. Consequently, the rich information present in probabilistic predictions from your model no longer tell the truth. While you might be able to [calibrate](https://stats.stackexchange.com/a/558950/247274) these predictions to reflect the reality of how often events occur, fiddling with the class ratio and then calibrating (maybe) just to make sure uncalibrated predictions fall above an arbitrary threshold seems inefficient at best (there could be edge cases where this winds up working better, sure) and damaging to the education of aspiring predictive modelers at worst by eliminating an emphasis on the valuable information available in the original, non-categorical predictions.
If you start out with the natural class ratio, you do not start the modeling by misleading the model about the prior probability. Consequently, most "fixing" of class imbalance is unnecessary (though [this](https://stats.meta.stackexchange.com/questions/6349/profusion-of-threads-on-imbalanced-data-can-we-merge-deem-canonical-any) question has an answer with an interesting edge case) and due to an unnecessary obsession with making sure the predictions are on the correct side of $0.5$ when it is fine to use a different threshold (the prior probability seems like a natural one) or consider the direct predictions without considering any threshold at all.
(I know, I know: `sklearn` uses `predict` to return categorical predictions. I believe that `sklearn`, for all of its positives (and I have used it to do good work at my job), has caused damage to the field of statistics by using the `predict` method to return the category with the highest probability and the `predict_proba` method to return the raw probabilities, rather than having `predict` return the probabilities and something like `predict_category` to return the category with the highest probability.)
I will close with some standard links about class imbalance and evaluating classifiers.
[Are unbalanced datasets problematic, and (how) does oversampling (purport to) help?](https://stats.stackexchange.com/questions/357466/are-unbalanced-datasets-problematic-and-how-does-oversampling-purport-to-he)
[Profusion of threads on imbalanced data - can we merge/deem canonical any?](https://stats.meta.stackexchange.com/questions/6349/profusion-of-threads-on-imbalanced-data-can-we-merge-deem-canonical-any)
[Why is accuracy not the best measure for assessing classification models?](https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models)
[Academic reference on the drawbacks of accuracy, F1 score, sensitivity and/or specificity](https://stats.stackexchange.com/questions/603663/academic-reference-on-the-drawbacks-of-accuracy-f1-score-sensitivity-and-or-sp)
| null | CC BY-SA 4.0 | null | 2023-04-05T23:30:44.800 | 2023-04-06T01:51:37.700 | 2023-04-06T01:51:37.700 | 247274 | 247274 | null |
612054 | 1 | null | null | 0 | 35 | Is the following statement correct?
The Central Limit Theorem (CLT) states that as the sample size tends to infinity, the standardized sample mean distribution approaches the standard normal distribution. Motivated by this theorem, we can say that for a large sample size (greater than 30), the standardized sample mean is approximately normally distributed. However, we cannot use this theorem to state that the sample mean by itself is approximately normally distributed for large sample sizes. This is because, according to the Strong Law of Large Numbers (SLLN), the sample mean converges almost surely to the population mean.
| Approximating the Distribution of Xbar using the Central Limit Theorem | CC BY-SA 4.0 | null | 2023-04-06T00:02:42.463 | 2023-04-06T05:14:27.950 | null | null | 385034 | [
"normal-distribution",
"mean",
"central-limit-theorem"
] |
612055 | 1 | 612056 | null | 1 | 26 | I know that the covariance of two random variables, such as X and Y, is calculated as follows:
$$ Cov(X, Y) = \frac{\sum{(X - \bar{X})(Y-\bar{Y})}}{n} $$
where $n$ is the size of the sample and $\bar{X}$ and $\bar{Y}$ are the means of X and Y, respectively. What I do not understand is how this formula measures the dependence or the correlation between these two variables. In other words, What does this formula have to do with the dependence between X and Y?
| In what way, the covariance of two variable shows their correlation or dependency? | CC BY-SA 4.0 | null | 2023-04-06T00:10:53.857 | 2023-04-06T00:36:34.640 | 2023-04-06T00:36:34.640 | 385032 | 385032 | [
"correlation",
"covariance",
"covariance-matrix"
] |
612056 | 2 | null | 612055 | 1 | null | The formula below would be the standard way to write covariances at the sample level.
$$ Cov(X, Y) = \overset{n}{\underset{i=1}{\sum}}\left(\dfrac{(X_i - \bar{X})(Y_i-\bar{Y})}{n - 1} \right)$$
Each term in the summation measures whether an observation of $X$ is above or below (or equal to) the mean of $X$, and whether the corresponding observation of $Y$ is above or below (or equal to) the mean of $Y$. These deviations are then multiplied. If this product is positive, it means that both $X_i$ and $Y_i$ are either above or below their respective means; if this product is negative, one is above while the other is below its respective mean.
If you wind up with many positive products in the numerator, this means that the $X$ and $Y$ variables tend to be above or below their respective means simultaneously. This means that, when one variable is high, so is the other, and when one variable is low, so is the other. Conversely, if you wind up with many negative products, this means that when one variable is low, the other tends to be high, and when one variable is high, the other tends to be low.
When so many of the numerator products are positive that they wash out negative products, the sum is positive (positive covariance). This means that there is a relationship between the variables in that both tend to be high simultaneously or low simultaneously.
When so many of the numerator products are negative that they wash out the positive products, the sum is negative (negative covariance). This means that there is a relationship between the variables in that, when one is high, the other is low.
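A tiny worked example (with made-up numbers) shows the sign logic: when $X$ and $Y$ move together, most of the products are positive and so is the covariance.

```python
x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 5, 8]

n = len(x)
x_bar, y_bar = sum(x) / n, sum(y) / n
products = [(xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)]
cov = sum(products) / (n - 1)

print(products)  # every product is non-negative: x and y are high/low together
print(cov)       # positive covariance
```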
| null | CC BY-SA 4.0 | null | 2023-04-06T00:36:17.543 | 2023-04-06T00:36:17.543 | null | null | 247274 | null |
612057 | 1 | 612136 | null | 1 | 37 | I have a few questions concerning what to do next after balancing your population with IPTW.
This is a code just to have an example:
```
library(WeightIt)
W.out <- weightit(treat ~ age + educ + race + married + nodegree + re74 + re75,
data = lalonde, estimand = "ATT", method = "ps")
```
Outcome = `re78`
First, if I want to calculate the odds ratio (e.g. odds of re78 in treated vs non-treated), I write the following formula:
```
d.w <- svydesign(~1, weights = W.out$weights, data = lalonde)
fit <- svyglm(re78 ~ treat, design = d.w)
```
My questions are:
- "Treat" itself doesn't have a weight since patients have been grouped by the presence or absence of treatment. How does the formula use the weights that I have provided?
- Why does the result change if I write: fit <- svyglm(re78 ~ treat + age + educ + race + married + nodegree + re74 + re75, design = d.w). Shouldn't these variables already be accounted for if the "weights" are considered when I write just "treat"? (as I said in question 1)
- In some papers I saw that there is a "weighted rate" for the outcomes stratified by treatment. For example, at the end of the IPTW table there is a weighted rate for death (which is the outcome) in treated patients and non-treated. Of course death has not been balanced so how is this possible? Where do these weights come from? Does it make sense to put it like this?
| Calculating outcomes in IPTW | CC BY-SA 4.0 | null | 2023-04-06T00:38:42.213 | 2023-04-06T15:02:04.993 | 2023-04-06T00:49:10.073 | 384938 | 384938 | [
"r",
"propensity-scores",
"weighted-regression",
"weighted-mean",
"weighted-data"
] |
612058 | 2 | null | 612049 | 9 | null | For notational convenience, let $\varphi_{\theta}(x)$ denote the density of a $N(\theta, 1)$ random variable.
One way of viewing this problem is that the condition
\begin{align*}
\int_{-\infty}^\infty g(x)\varphi_\theta(x)dx = 0, \quad \theta \in \{-1, 0, 1\} \tag{1}
\end{align*}
sets up a system of three equations. If we pick $g(x) = ax^3 + bx^2 + cx + d$ from the family of cubic polynomials, then $(1)$ becomes a homogeneous system of three linear equations with four unknowns $a, b, c, d$. By linear algebra theory, the dimensionality of the solution space to such a linear system is at least $1$ (because the rank of the associated coefficient matrix is at most $3$), implying that there are countless non-zero $g(x)$ satisfying $(1)$.
Indeed, substituting $g(x) = ax^3 + bx^2 + cx + d$ and the raw moments of $\varphi_\theta(x)$ into $(1)$ yields
\begin{align*}
b + d &= 0, \\
4a + 2b + c + d &= 0, \tag{2} \\
-4a + 2b - c + d &= 0.
\end{align*}
One (of infinitely many) non-zero solutions to $(2)$ is $a = -1, b = 0, c = 4, d = 0$, resulting in $g(x) = -x^3 + 4x \neq 0$. This shows that the family $\{N(\theta, 1): \theta \in \Omega\}$ is not complete.
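As a quick sanity check (using the standard raw moments of a $N(\theta, 1)$ variable: $E[X] = \theta$, $E[X^2] = \theta^2 + 1$, $E[X^3] = \theta^3 + 3\theta$), the expectation of $g(x) = -x^3 + 4x$ is indeed zero at every $\theta \in \Omega$:

```python
def E_g(theta, a=-1, b=0, c=4, d=0):
    # E[a X^3 + b X^2 + c X + d] for X ~ N(theta, 1), using exact raw moments
    ex1 = theta
    ex2 = theta ** 2 + 1
    ex3 = theta ** 3 + 3 * theta
    return a * ex3 + b * ex2 + c * ex1 + d

for theta in (-1, 0, 1):
    print(theta, E_g(theta))  # 0 in every case
```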
---
While the above construction works well for any parameter space with finite cardinality, it does not generalize to the case where the parameter space contains infinitely many members (e.g., the original linked exercise whose $\Omega = \{1, 2, 3, \ldots\}$). To deal with the latter case, note that the condition $E_\theta[g(X)] = 0$ for all $\theta \in \Omega$ can be written as
\begin{align}
\int_{-\infty}^\infty g(x)\varphi(x - \theta)\,dx = 0 \; \text{for all } \theta \in \Omega, \tag{3}
\end{align}
where $\varphi$ is the standard normal density. Since $\varphi(x - \theta)$ is symmetric about $\theta$, $(3)$ automatically holds for any integrable $g$ that is odd about every point of $\Omega$, i.e., $g(\theta + t) = -g(\theta - t)$ for all $t$ and all $\theta = 1, 2, \ldots$. One candidate of such $g$ is $g(x) = \sin(\pi x)$, proposed in @jld's answer, which is odd about every integer. As another (non-trigonometric) example, $g(x)$ can be taken to be the triangle wave
\begin{align}
g(x) = \begin{cases} x, & -\frac{1}{2} \leq x \leq \frac{1}{2}, \\ 1 - x, & \frac{1}{2} \leq x \leq \frac{3}{2}, \end{cases}
\end{align}
extended to all of $\mathbb{R}$ with period $2$; this $g$ is likewise odd about every integer.
| null | CC BY-SA 4.0 | null | 2023-04-06T00:46:17.717 | 2023-05-05T17:41:34.223 | 2023-05-05T17:41:34.223 | 20519 | 20519 | null |
612059 | 1 | null | null | 1 | 33 | I want to make sure I am choosing the right statistical test.
I am working with 7 semesters of exams scores for a particular course.
Each semester the course ran with many sections. However, the students, no matter what section they were in, no matter which teacher, were given the same exams.
Let A = exam 1 and 2 average (two exams were given within the semester and they were averaged)
Let B = final exam score (the course ends with a final)
Overall for each semester, I want to compare A with B to see if a significant difference is found. Which non-parametric test should I use? What should my level of significance be?
Which test should I use to see if there is a correlation between A and B for each of the semesters?
I tested semester 1 (A & B) through semester 7 (A & B) and found that the data sets for all is not normal. I used the Shapiro-Wilk test (all p<.01).
Note: Just to clarify,...
** A is the average of exams 1 and 2, which are given within the semester before the final exam. The final exam score equals B.
** The data sets are not normally distributed as per the Shapiro-Wilk test for normality as well as histogram, box plots.
** Overall I wish to compare the averages to see differences if there are any.
| Which Non-Parametric Test should I use? | CC BY-SA 4.0 | null | 2023-04-06T01:04:17.850 | 2023-04-06T13:59:49.007 | 2023-04-06T13:59:49.007 | 22047 | 385031 | [
"hypothesis-testing",
"correlation",
"t-test",
"p-value",
"nonparametric"
] |
612060 | 1 | null | null | 0 | 15 | I need to analyse the results of a messaging trial where the value of interest the the click through rate (CTR) on social media ads. I tested 8 ads and a control and I have the CTR for each one (e.g. message one had a CTR of 1%, message two had a CTR of 2%).
The issue is that I don't know whether just using an ANOVA will be OK, because I literally just have nine figures (the CTRs) rather than a full dataset with multiple rows that the ANOVA would use to take an average and then compare averages for a statistically significant difference.
Can anyone weigh in on how I can understand whether there is a difference between my groups?
| How to analyse averaged data (when raw data is not available) | CC BY-SA 4.0 | null | 2023-04-06T01:08:53.910 | 2023-06-02T05:42:27.080 | 2023-06-02T05:42:27.080 | 121522 | 385039 | [
"statistical-significance",
"mean"
] |
612061 | 2 | null | 120179 | 1 | null | I am very late to the party here, but I recently had to do this in Python. Here is how you can generate a dataset with exact means and covariances/correlations:
```
import numpy as np
# Define a vector of means and a matrix of covariances
mean = [3, 3]
Sigma = [[1, 0.70],
[0.70, 1]]
# Generate 100 cases
X = np.random.default_rng().multivariate_normal(mean, Sigma, 100).T
# Subtract the mean from each variable
for n in range(X.shape[0]):
X[n] = X[n] - X[n].mean()
# Make each variable in X orthogonal to one another
L_inv = np.linalg.cholesky(np.cov(X, bias = True))
L_inv = np.linalg.inv(L_inv)
X = np.dot(L_inv, X)
# Rescale X to exactly match Sigma
L = np.linalg.cholesky(Sigma)
X = np.dot(L, X)
# Add the mean back into each variable
for n in range(X.shape[0]):
X[n] = X[n] + mean[n]
# The covariance of the generated data should match Sigma
print(np.cov(X, bias = True))
```
Please note: I am using the population (rather than sample) variances and covariances, which are calculated using N (rather than N - 1) as the denominator. This is represented in the "bias = True" setting for np.cov().
| null | CC BY-SA 4.0 | null | 2023-04-06T01:17:25.317 | 2023-04-06T01:26:04.873 | 2023-04-06T01:26:04.873 | 202434 | 202434 | null |
612062 | 2 | null | 612054 | 1 | null | There's something wrong with all three parts of this, though the first bit is almost right.
-
The Central Limit Theorem (CLT) states that as the sample size tends to infinity, the standardized sample mean distribution approaches the standard normal distribution.
Not quite. As long as the conditions for it to apply are present, sure. They weren't mentioned in your quote.
-
Motivated by this theorem, we can say that for a large sample size (greater than 30), the standardized sample mean is approximately normally distributed.
No, there's nothing that makes it work for say n=31. For many reasonable definitions of approximately (including those related to convergence in distribution), counterexamples are not that hard to identify even at n=1000, or n=1000000 or any larger n you like.
As long as the conditions for the CLT hold, it will work eventually, but it might not happen at any practically possible sample size.
That said, for a lot of common variables, it will work at pretty moderate sample sizes -- depending on what you start with (and how approximate you need it to be) sometimes n=3 is fine, sometimes 15, sometimes n=25, sometimes n=200 is okay. But sometimes you need much larger n.
I work with some pretty skewed and heavy-tailed variables, and assuming approximate normality of a standardized sample mean at n=100 would be dangerous.
Unless you constrain the distributions you're talking about, you really can't make such a general claim.
Here's an obvious counterexample. Let's say I am not happy to call a (standardized) exponential distribution "approximately normal" in this sense (for some purpose or other). Then if my parent distribution were gamma with shape parameter 1/50 (the random variables are i.i.d. Gamma$(0.02,\theta)\,$), then at $n=50$, the sample mean is exponentially distributed, and then the standardized sample mean is the standardized exponential we were not prepared to call approximately standard normal to begin with.
Clearly, then, $n>30$ doesn't work for this case, even though the CLT applies here.
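A quick simulation makes the point visible (Python with numpy; the shape parameter and $n$ are exactly those of the example above). The standardized mean of $n=50$ i.i.d. Gamma(0.02) draws keeps the skewness of a standardized exponential, about 2, nowhere near the 0 of a normal distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
shape, n, reps = 0.02, 50, 100_000

# Each row is one sample of n iid Gamma(0.02, 1) draws; take its mean
means = rng.gamma(shape, 1.0, size=(reps, n)).mean(axis=1)

# Standardize with the theoretical mean and sd of the sample mean
mu, sd = shape, np.sqrt(shape / n)   # E[mean] = 0.02, sd(mean) = sqrt(0.02/50)
z = (means - mu) / sd

skew = np.mean(z**3)  # ~2, the skewness of an exponential; a normal gives ~0
```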
-
we cannot use this theorem to state that the sample mean by itself is approximately normally distributed for large sample sizes. This is because, according to the Strong Law of Large Numbers (SLLN), the sample mean converges almost surely to the population mean.
There's a correct underlying point there (convergence in the standardized $Z_n$ is not convergence in the unstandardized $\bar{X}$; instead the latter obeys the LLN if its conditions hold), but I'd disagree on what is stated here as well.
The claim we're addressing here is not that of convergence in relation to $\bar{X}$ as $n\to\infty$ but of its cdf being approximately normal at some specific large $n$, in particular, whatever specific $n$ we might have regarded as large enough for the claim about approximation in the previous part in relation to the distribution of standardized means.
If you believe you can use the CLT to argue for something approximately happening to the distribution of a standardized mean at a finite sample size (like $n=31$ say) then it applies exactly as well to the unstandardized distribution. That is, the usual distance for the standardized distribution $|F_n-\Phi|$ will be exactly the same for the unstandardized equivalents of both. If one is approximately normal in that sense, so must the other be, because the discrepancy from equality is the same.
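To see this numerically, take an Exp(1) parent at $n=31$ (the particular parent and sample size are arbitrary choices for illustration) and compare the two empirical Kolmogorov distances; they coincide exactly, because standardization is a monotone affine map:

```python
import math
import numpy as np

rng = np.random.default_rng(3)
n, reps = 31, 50_000
mu, sd = 1.0, 1.0                      # Exp(1) parent: mean 1, sd 1
means = rng.exponential(1.0, size=(reps, n)).mean(axis=1)

Phi = np.vectorize(lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2))))

def ks(sample, cdf):
    """Kolmogorov distance between an empirical sample and a reference cdf."""
    s = np.sort(sample)
    F = cdf(s)
    i = np.arange(1, s.size + 1)
    return max(np.max(i / s.size - F), np.max(F - (i - 1) / s.size))

se = sd / math.sqrt(n)
d_std = ks((means - mu) / se, Phi)                # standardized mean vs N(0,1)
d_raw = ks(means, lambda x: Phi((x - mu) / se))   # raw mean vs N(mu, sigma^2/n)
# d_std and d_raw are identical
```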
---
I would add to all this an additional caution; often this is used to then argue for some other fact (e.g. that a t-statistic converges to a normal distribution even when the original sampling distribution is non-normal) -- you must be careful not to leap over the remaining gaps to establish the further claim (in respect of the t-statistic example, you need an additional theorem).
| null | CC BY-SA 4.0 | null | 2023-04-06T01:17:31.343 | 2023-04-06T05:14:27.950 | 2023-04-06T05:14:27.950 | 805 | 805 | null |
612063 | 1 | null | null | 1 | 10 | I am trying to analyze some preliminary data for a conference. I currently have two animal participants completing a series of tasks.
The tasks use different types of feedback (4 levels), which vary in the order (4 levels) in which they are completed. One animal has completed 20 tasks and the other has only completed 12, so I have unequal sample sizes.
My dependent variable is the total number of trials it takes them to complete the task (the criterion is when they reach 80% accuracy).
My independent variable is the feedback type (categorical variable).
I need to include the order as a covariate.
Because I have only 2 participants, I am reading that I cannot (or rather, should not) run an ANCOVA.
So I am looking for an alternative test that will work for such a small sample size with uneven groups. Any advice would be greatly appreciated. I am working in SPSS if that helps.
| Unsure of what analysis to use. Sample is too small for ANCOVA | CC BY-SA 4.0 | null | 2023-04-06T01:26:32.253 | 2023-04-06T01:26:32.253 | null | null | 294103 | [
"covariance",
"small-sample",
"ancova"
] |
612065 | 1 | 612066 | null | 4 | 176 | We know that it's better to standardize the training data (i.e. X_train) before fitting a LASSO model, especially when features are not on the same scale (Ref. [Is standardisation before Lasso really necessary?](https://stats.stackexchange.com/questions/86434/is-standardisation-before-lasso-really-necessary)).
But after fitting a LASSO model, when doing prediction with new data, do we still need to rescale the testing data (i.e. X_test)? And if so, how to properly rescale the unseen future data?
| Is standardization still needed after a LASSO model is fitted? | CC BY-SA 4.0 | null | 2023-04-06T01:36:15.980 | 2023-04-06T11:24:35.967 | 2023-04-06T11:24:35.967 | 48796 | 48796 | [
"regression",
"lasso",
"regularization",
"normalization",
"feature-scaling"
] |
612066 | 2 | null | 612065 | 6 | null | Yes, you should scale the test data, and you should do so according to the rules you developed on the training data. For instance, if you scale by subtracting the mean and then dividing by the standard deviation, do this to the test data by using the mean and standard deviation from the training data, not from the test data. Otherwise, there is leakage.
The reason to scale your features after you fit the LASSO model is because the scaling is, essentially, a unit conversion. The original features are in the original units; scaling by the mean and standard deviation puts the values in units of standard deviations from the mean. You wouldn't want to regard centimeters and miles as the same units, would you? The same idea applies here.
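As a concrete sketch of the no-leakage rule (numpy; the data here are made up): the mean and standard deviation come from the training set only and are reused verbatim on the test set. scikit-learn's `StandardScaler` does exactly this if you `fit` on the training data and only `transform` the test data.

```python
import numpy as np

rng = np.random.default_rng(42)
X_train = rng.normal(5.0, 3.0, size=(100, 2))
X_test = rng.normal(5.0, 3.0, size=(20, 2))

# Learn the scaling parameters on the training data only
mu = X_train.mean(axis=0)
sd = X_train.std(axis=0)

X_train_scaled = (X_train - mu) / sd
X_test_scaled = (X_test - mu) / sd   # same mu and sd -- no peeking at the test set
```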
| null | CC BY-SA 4.0 | null | 2023-04-06T02:04:42.570 | 2023-04-06T02:04:42.570 | null | null | 247274 | null |
612067 | 1 | null | null | 1 | 21 | I fitted my data in a GLMM model ("poisson" family) in R, and I got a z value of 2.278. But after I put the model into post-hoc analysis ("Dunn" adjustment) with "emmeans", the z value became -3.350. Why would the z value change? Thanks!
| Why do I get different z values after adjusting the p values with post-hoc analysis? | CC BY-SA 4.0 | null | 2023-04-06T02:49:25.510 | 2023-04-06T02:49:25.510 | null | null | 385042 | [
"r",
"self-study",
"z-score"
] |
612068 | 2 | null | 611812 | 2 | null | Your proposal density is exponential with unit rate, so you are generating proposal values:
$$X_1,X_2,...,X_N \sim \text{IID Exp}(1).$$
The integral in question is effectively being evaluated using the importance sampling approximation of the following form:
$$\begin{align}
H
&\equiv \int \limits_1^\infty \frac{x^2}{\sqrt{2 \pi}} \cdot \exp \bigg( -\frac{x^2}{2} \bigg) \ dx \\[6pt]
&= \int \limits_1^\infty \frac{x^2}{\sqrt{2 \pi}} \cdot \exp \bigg( x - \frac{x^2}{2} \bigg) \cdot \exp( -x) \ dx \\[6pt]
&= \int \limits_1^\infty \frac{x^2}{\sqrt{2 \pi}} \cdot \exp \bigg( x - \frac{x^2}{2} \bigg) \cdot \text{Exp}( x|1) \ dx \\[6pt]
&= \int \limits_0^\infty \frac{x^2}{\sqrt{2 \pi}} \cdot \exp \bigg( x - \frac{x^2}{2} \bigg) \cdot \mathbb{I}(x \geqslant 1) \cdot \text{Exp}( x|1) \ dx \\[10pt]
&\approx \frac{1}{N} \sum_{i=1}^{N} \frac{x_i^2}{\sqrt{2 \pi}} \cdot \exp \bigg( x_i - \frac{x_i^2}{2} \bigg) \cdot \mathbb{I}(x_i \geqslant 1). \\[6pt]
\end{align}$$
For large $N$ this approximation should give you a reasonable approximation to the integral, though you could reduce the variance by choosing a candidate distribution that is closer to the target function (in particular, one that shares the support of the integral). Below I show some `R` code where we approximate the integral using $N=10^5$ simulated values. This is compared with the result of numerical integration using the [integrate function](https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/integrate). The approximation result is reasonably close in this case.
```
#Generate proposal data
set.seed(5811387)
N <- 10^5
X <- rexp(N, rate = 1)
#Generate importance sampling approximation
INT.APPROX <- mean((X^2)*exp(X-(X^2)/2)*(X >= 1)/sqrt(2*pi))
INT.APPROX
[1] 0.4031962
#Compute integral using alternative method
f <- function(x) { (x^2)*exp(-(x^2)/2)/sqrt(2*pi) }
integrate(f, lower = 1, upper = Inf)
0.400626 with absolute error < 5.7e-07
```
| null | CC BY-SA 4.0 | null | 2023-04-06T03:21:07.840 | 2023-04-06T05:14:11.807 | 2023-04-06T05:14:11.807 | 173082 | 173082 | null |
612071 | 1 | null | null | 0 | 28 | Suppose that for $i=1,2, \ldots, N$ and $j=1, \ldots, n$, the r.v.'s $Y_{i j} \sim \mathcal{N}\left(\vartheta_i, \sigma^2\right)$ are mutually independent, where the parameters $\{\vartheta_i\}_{i=1}^N$ and $\sigma^2$ are unknown.
Problem 1: If $n=2$ and $N$ gets large, show that the MLE $\hat{\sigma}^2$ converges in probability but is not consistent.
I've shown that $\hat{\sigma}^2 = \frac{1}{nN}\sum_{i=1}^N \sum_{j=1}^n (Y_{ij} - \hat{\vartheta_i})^2$, and so plugging in n = 2 and simplifying I get:
$\hat{\sigma}^2 = \frac{1}{4N} \sum_{i=1}^N (Y_{i1} - Y_{i2})^2$.
I was thinking about showing what it converges to in probability, and then if it is not $\sigma^2$, then it is not consistent. But I am a little confused on how to go about showing what it converges to in probability. Could I use LLN and CLT in some way?
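To get intuition, a quick simulation (Python; the $\vartheta_i$ are set to 0, which should not matter since the estimator only uses within-group differences) suggests that $\hat{\sigma}^2$ settles near $\sigma^2/2$ rather than $\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(7)
sigma2 = 4.0
for N in (100, 10_000, 1_000_000):
    # n = 2 observations per group; theta_i = 0 without loss of generality
    Y = rng.normal(0.0, np.sqrt(sigma2), size=(N, 2))
    sigma2_hat = np.sum((Y[:, 0] - Y[:, 1]) ** 2) / (4 * N)
    print(N, sigma2_hat)   # settles near sigma2 / 2 = 2.0, not sigma2 = 4.0
```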
Problem 2: Is the MLE $\hat{\sigma}^2$ for $\sigma^2$ consistent if $n=1+[\log (N)]$ and $N \rightarrow \infty$?
I have no clue how to begin with this one because the log term is throwing me off.
| If $n=2$ and $N$ gets large, show that the MLE $\hat{\sigma}^2$ converges in probability but is not consistent | CC BY-SA 4.0 | null | 2023-04-06T04:02:21.060 | 2023-04-06T04:02:21.060 | null | null | 376575 | [
"self-study",
"maximum-likelihood",
"asymptotics"
] |
612072 | 1 | null | null | 0 | 23 | I have an imbalanced fraud dataset with ~$1.4$% fraudulent samples across 50,000 rows with 600 columns. I'm performing a binary classification task on this dataset.
I've performed an EDA; some columns are $99$% the same value. The $1$% of entries that are not the same have a lower fraud rate of ~$1$%.
I believe that makes this column predictive of "non-fraud". But is that useful to me when the dataset's average is so strongly predictive of fraud anyway?
My intuition is no, because of the imbalance. I think it's okay to drop these columns from the feature selection process.
Is that a sensible practice when dealing with a highly imbalanced dataset?
Edit:
This is a follow up to this question: [What should you do with near constant columns?](https://stats.stackexchange.com/questions/611343/what-should-you-do-with-near-constant-columns)
I consider this question different because I'm asking about a specific subset of near-constant columns that I believe should be removed, given the imbalanced nature of my dataset. Instead of asking about near-constant columns being removed in general.
| Do imbalanced datasets make removing poor target predictors easier? | CC BY-SA 4.0 | null | 2023-04-06T04:32:13.010 | 2023-04-06T10:25:30.353 | 2023-04-06T10:25:30.353 | 363857 | 363857 | [
"classification",
"feature-selection",
"unbalanced-classes",
"exploratory-data-analysis",
"fraud-detection"
] |
612074 | 1 | null | null | 1 | 18 | I run competitive events. In our normal event, we have 8 adjudicators split between two categories: Skill and Artistry.
For each category we throw out the high and low scores and average the remaining two scores for the final result. This helps eliminate bias.
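In code, that per-category rule amounts to the following (Python here just to make the rule explicit; we actually do this in Google Sheets):

```python
def trimmed_average(scores):
    """Drop the single highest and single lowest score, average the rest."""
    s = sorted(scores)
    middle = s[1:-1]
    return sum(middle) / len(middle)

trimmed_average([7.0, 8.5, 9.0, 9.5])   # drops 7.0 and 9.5 -> 8.75
```

Note that with only three judges per category, dropping the high and low would leave a single score, i.e. the median.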
I'm trying to find a similar method when there are only 6 judges, three in each category.
I've looked at standard deviation and Quartiles to remove outliers but I'm not satisfied that either is the best choice. My concern is that the Stdev still uses the original mean to determine what is an outlier. Part of me feels that the outlier should be determined based on the "new mean" but then the calculations become so opaque that my team can't follow/duplicate for future events.
With the 1st and 3rd quartile, it seems even more arbitrary to toss the top and bottom 25% but that may be my own cognitive bias.
I'm curious... how would you do this if you were me. Note: In the small groups that I've tested I've found that the results don't vary from taking the simple mean of the three results unless extreme examples are tried... perhaps this isn't worth the effort?
I use googlesheets for the calculations, if that's at all helpful.
| Suggestions on dealing with outliers when sample size is very small AND you must order the results | CC BY-SA 4.0 | null | 2023-04-06T04:51:21.960 | 2023-04-06T12:22:12.627 | null | null | 385046 | [
"outliers",
"scoring-rules",
"bias-correction"
] |
612075 | 2 | null | 611812 | 4 | null | A first remark is that, even when the normalising constants are both available (for $p(\cdot)$ and $q(\cdot)$), self-normalised importance sampling, while biased, may produce a smaller mean squared error. For instance, take the extreme example of the integrand being constant: self-normalised importance sampling then has a mean squared error of zero while regular importance sampling does not. We discuss the case for using self-normalised importance sampling in [our book](http://amzn.to/2kSOnNa).
In the current situation, however, self-normalised importance sampling does not work. The reason being that the support of the importance function (i.e., exponential) is not equal to the support of the target density (i.e., normal) and hence that the expectation of the ratio of normalised densities is not equal to one:
```
> x = rexp(N,1) #exponential sample
> mean(exp(dnorm(x,l=T)+x)/sqrt(2*pi))
[1] 0.1995025
```
A second remark is that a basic rule in Monte Carlo integration is to never ever simulate zeros, i.e., to always use a proposal $q(\cdot)$ that shares the same support as the integrand $f(\cdot)p(\cdot)$. Rather than an $\mathcal E(1)$ proposal, which returns about 60% of zeroes, one should thus use a distribution supported over $(1,\infty)$, for instance an exponential $\mathcal E(1)$ drifted by one (1). The change in variance is massive:
```
> x=rexp(1e5,1)+1 #drifted exponential
> mean(x^2*exp(x-1-(x^2)/2)/sqrt(2*pi))
[1] 0.4008881
> var(x^2*exp(x-1-(x^2)/2)/sqrt(2*pi))
[1] 0.02471647
> x=x-1 #non-drifted exponential
> mean((x>1)*x^2*exp(x-(x^2)/2)/sqrt(2*pi))
[1] 0.4004227
> var((x>1)*x^2*exp(x-(x^2)/2)/sqrt(2*pi))
[1] 0.3433074
```
| null | CC BY-SA 4.0 | null | 2023-04-06T04:55:50.513 | 2023-04-06T09:27:54.137 | 2023-04-06T09:27:54.137 | 7224 | 7224 | null |
612076 | 1 | null | null | 1 | 18 |
- Does any of the YOLO models use NMS during training? From going through the papers, they explicitly state that NMS is applied only during inference, unlike Faster-RCNN.
- Does any YOLO (or other computer vision) algorithm filter out low-confidence boxes during training based on some threshold before computing the loss? Are there pros/cons to doing so?
| Questions on filtering out predicted boxes during training for YOLO family of algorithms | CC BY-SA 4.0 | null | 2023-04-06T05:15:43.597 | 2023-04-06T05:15:43.597 | null | null | 243601 | [
"machine-learning",
"computer-vision",
"yolo"
] |
612077 | 1 | 612087 | null | 1 | 39 | I have some categories in a fraud dataset I'm working on that are composed of 99% one category and 1% the other. On inspection these categories contain the exact same percentage of frauds ~1.4%, which is inline with my dataset's average.
If the categories present nearly the same information in terms of the target, are they meaningfully different?
| Are categories meaningful when they're equally predictive? | CC BY-SA 4.0 | null | 2023-04-06T05:45:15.233 | 2023-04-06T08:31:32.267 | null | null | 363857 | [
"categorical-data",
"unbalanced-classes"
] |
612078 | 1 | null | null | 2 | 105 | Scikit-learn allows us to fit Gaussian processes $GP(0,K(.,.))$ such that $K:T\times T \to \mathbb{R}$ is a covariance function (kernel); however, it doesn't let us specify a mean function $m: T \to \mathbb{R}$. I'm trying to work solely with scikit-learn, using only the tools it provides. Is it a reasonable approach to fit a mean function using another method such as multiple linear regression and then fit a Gaussian process to the residuals?
Should I do something like Iterated Weighted Least Squares (IWLS), where you alternate between fitting the fixed effects under the current estimate of the covariance matrix and re-estimating the covariance function parameters from the current residuals?
| Gaussian process mean function in scikit-learn | CC BY-SA 4.0 | null | 2023-04-06T05:48:29.297 | 2023-04-06T07:44:53.820 | null | null | 178156 | [
"python",
"scikit-learn",
"gaussian-process"
] |
612079 | 1 | null | null | 0 | 37 | I am currently working on my thesis and I have run into some issues regarding interpretation of my analysis.
I wanted to find out whether the relationship between my IV and DV differs based on gender (two categories - male and female). In statistics classes I was advised to do a quick t-test before running the regression analysis to find out if the two groups (male and female) differ in the DV.
So I have done the t-test and it is nonsignificant. Then I ran the moderated regression; gender did not significantly predict the DV and the interaction was also not significant. However, based on the literature I expected only small gender differences, and I do not have the appropriate sample size (N=212, when I would need about twice that many), so the nonsignificant interaction might be due to that.
This brings me to the t-test - can I still consider the t-test results to be valid when suggesting there are no gender differences in DV?
In the discussion I would probably conclude that the interaction did not occur due to insufficient sample size, but can I still interpret the t-test as suggesting that there are no gender differences?
| Preliminary t-test for moderation | CC BY-SA 4.0 | null | 2023-04-06T07:10:46.017 | 2023-04-07T13:41:32.163 | null | null | 385050 | [
"t-test",
"interaction",
"interpretation"
] |
612080 | 1 | null | null | 1 | 11 | I am new to power analysis in multi-level models. I am looking for a possibility to do a power analysis for the following 2-level model: $Y = \gamma_{00} + \gamma_{10} D_1 + \gamma_{20} D_2 + \gamma_{01} Z + \gamma_{11} D_1 Z + \gamma_{21} D_2 Z$.
In this model, I investigate the effect of time (D1 and D2) and an experimental condition as well as their interaction effect on my outcome variable. The time is measured three times and integrated as dummy-coded contrasts in the model (D1 and D2). The experimental condition is also dummy-coded.
I tried to work with the instructions for a power analysis in 2-level models by Arend & Schäfer (2019) (see R code attached). However, I do not know how to create the conditional variances for my model and I think there must be a mistake in the model.
I would be very happy to get your advice.
Thanks a lot!
R-Code:
```
library(car) # provides recode()
library(simr) # provides makeLmer()
#Specifying standardized input parameters
alpha.S <- .05 #Alpha level
Size.clus <- 3 #L1 sample size
N.clus <- 200 #L2 sample size
L1_DE_standardized <- .30 #L1 direct effects
L2_DE_standardized <- .50 #L2 direct effect
CLI_E_standardized <- .50 #CLI effects
ICC <- .50 #ICC
rand.sl <- .09 #Standardized random slope
#Creating variables for power simulation in z-standardized form
#Creates a dataset with two L1-predictor x and one L2-predictor Z; all predictors are dichotomous
Size.clus <- 3 #L1 sample size
N.clus <- 200 #L2 sample size
EG<-rep(c(0,1),each=300)
x<- scale(rep(1:Size.clus))
g <- as.factor(1:N.clus)
X <- cbind(expand.grid("x"=x, "g"=g))
X <- cbind(X, EG)
X$D1<- recode(var = X$x,
recodes = "-1 = 0; 0 = 1; 1 = 0")
X$D2<- recode(var = X$x,
recodes = "-1 = 0; 0 = 0; 1 = 1")
#Adapting the standardized parameters
varL1 <- 1 #L1 variance component
varL2 <- ICC/(1-ICC) #L2 variance component
varRS1 <- rand.sl*varL1 #Random slope variance tau 11
varRS2 <- rand.sl*varL1 #Random slope variance tau 22
L1_DE <- L1_DE_standardized*sqrt(varL1) #L1 direct effect
L2_DE <- L2_DE_standardized*sqrt(varL2) #L2 direct effect
CLI_E <- CLI_E_standardized*sqrt(varRS) #CLI effect
#Creating conditional variances
#I don’t know how to calculate this conditional variance with two L1 predictor
s <- sqrt((varL1)*(1-(L1_DE_standardized^2))) #L1 variance
V1 <- varL2*(1-(L2_DE_standardized^2)) #L2 variance
rand_sl.con <- varRS1*(1-(CLI_E_standardized^2)) #Random slope variance
#Creating a population model for simulation
b <- c(0, L1_DE, L1_DE, L2_DE, CLI_E,CLI_E) #vector of fixed effects (fixed intercept, L1.1. direct, L1.2. direct, L2 direct, CLI.1 effect, CLI.2 effect)
V2 <- matrix(c(V1,0,0, 0,rand_sl.con,0, 0,0,rand_sl.con), 3) #Random effects covariance matrix with covariances set to 0
# there must be a mistake some steps before that the model doesn't work
model <- makeLmer(y ~ D1 + D2 + EG + D1:EG + D2:EG +(D1+D2 | g), fixef = b, VarCorr = V2, sigma = s, data = X) #Model creation
```
| How to calculate power for a 2-level model with two L1-predictors? | CC BY-SA 4.0 | null | 2023-04-06T07:24:15.633 | 2023-04-06T07:24:15.633 | null | null | 385052 | [
"multilevel-analysis",
"statistical-power",
"conditional-variance"
] |
612081 | 2 | null | 612078 | 1 | null | Usually the mean function is [not of your greatest interest](https://stats.stackexchange.com/questions/222238/why-is-the-mean-function-in-gaussian-process-uninteresting) when using Gaussian Processes. If you care about it, it [can be done](https://stats.stackexchange.com/questions/375468/criteria-for-choosing-a-mean-function-for-a-gp) within the GP model, as [discussed for example here](https://stats.stackexchange.com/questions/377999/why-are-gaussian-processes-valid-statistical-models-for-time-series-forecasting). If your scikit-learn version does not support non-zero mean functions, you [can simply](https://math.stackexchange.com/a/4030580/114961) use some model to find the mean, subtract it from the data, and fit the GP to the de-meaned data. There is no need to do it iteratively.
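A minimal numpy sketch of that recipe (the data, the RBF kernel, and its length-scale and noise level are invented for illustration): fit the mean with ordinary least squares, fit a zero-mean GP to the residuals, then add the two pieces back together at prediction time.

```python
import numpy as np

def rbf(a, b, ell=0.5, sf=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 40)
y = 2.0 + 0.5 * x + 0.3 * np.sin(3.0 * x) + rng.normal(0.0, 0.05, x.size)

# Step 1: fit the mean function (a line here) by ordinary least squares
A = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta

# Step 2: zero-mean GP posterior mean on the residuals
K = rbf(x, x) + 0.05**2 * np.eye(x.size)   # noise variance on the diagonal
alpha = np.linalg.solve(K, resid)

# Step 3: prediction = mean function + GP correction
x_new = np.array([1.0, 2.5, 4.0])
pred = np.column_stack([np.ones_like(x_new), x_new]) @ beta + rbf(x_new, x) @ alpha
```

In scikit-learn itself you would replace step 2 with a `GaussianProcessRegressor` fitted on `resid`; the point is only that the GP never has to know about the mean.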
| null | CC BY-SA 4.0 | null | 2023-04-06T07:36:35.407 | 2023-04-06T07:44:53.820 | 2023-04-06T07:44:53.820 | 35989 | 35989 | null |
612083 | 2 | null | 592110 | 1 | null | Almost half a year late, but here goes anyway:
- Regarding your first and third questions, all the answers are in the first paragraph of page 3 of the paper you refer to (I guess you just misread it?): you wrote
>
The subset that is introduced by that paper is simply: $U_\alpha(r,c) = \{P\in\mathbb{R}^{k\times x} | h(P) \ge h(r) + h(c) - \alpha\}$
But that is not true. If you look again, the subset $U_\alpha(r,c)$ is rather defined as
$$U_\alpha(r,c) := \{P\in\color{red}{U(r,c)}\mid KL(P||rc^T) \le \alpha\} \equiv U_\alpha^{KL}(r,c)$$
Which the author claims (and proves) is equal to :
$$\{P\in\color{red}{U(r,c)}\mid h(P) \ge h(r) + h(c) - \alpha\}\equiv U_\alpha^h(r,c) $$
Hence, the answer to your first question is simply that the set $U_\alpha(r,c)$ is defined as a set of elements of $U(r,c)$ satisfying some extra condition, so necessarily $U_\alpha(r,c)\subseteq U(r,c)$. You are right however that the inequality constraint on the entropy of $P$ does not imply anything on the marginals.
You asked about the relationship between the KL inequality constraint and the $\gamma$ conditions : as we've seen, $U_\alpha^{KL}(r,c) = U_\alpha^{h}(r,c)$, i.e. the KL inequality constraint is equivalent to the entropy inequality constraint (if you're not sure why, prove it !), so similarly it doesn't constrain the marginal distributions in any way.
As you see, the goal of introducing these $\alpha$-dependent constraints on $P$ is not to make sure that $P$ lies in $U(r,c)$, but rather to add some regularization to the problem of interest (which is computing the Sinkhorn distance $d_{M,\alpha}(r,c) := \inf_{P\in U_\alpha(r,c)} \langle P,M\rangle$) and make it more computationally tractable. By letting $\alpha\to 0$, the set $U_\alpha(r,c)$ becomes very small and the distance is easily computed, while letting $\alpha \to \infty$ recovers the set of all possible pairings $U(r,c)$. There is thus some tradeoff between numerical efficiency and accuracy, as always.
- Lastly, you ask
>
[...] so what does it mean to have an outer product of two distributions?
In the setting of this paper, probability distributions are represented as "histogram" vectors, i.e. elements $\pi \in \mathbb R^d $ such that $\pi_1,\ldots,\pi_d \ge 0$ and $\sum_{1\le i \le d} \pi_i = 1$. It is not hard to see that this indeed defines a probability distribution on $\{1,\ldots,d\} $ (for instance, the probability distribution "histogram" of a fair die on $\{1,\ldots,6\} $ would be given by the vector $(1/6,\ldots,1/6)^T \in \mathbb R^6 $). It may seem overly restrictive to limit ourselves to finitely supported distributions, but this already covers a wide range of applications, as the paper's popularity suggests.
Once you see that the vectors $r$ and $c$ respectively represent probability distributions on $\{1,\ldots,d\}$, you should be able to see that their [outer product](https://en.wikipedia.org/wiki/Outer_product) $rc^T$ represents the joint distribution of the random vector $(X,Y)$, where $X\sim r$, $Y\sim c$, and $X$ and $Y$ are independent (you should definitely prove it if it's not clear). In a sense, this is the most "trivial" joint distribution $(X,Y)$ could have given that its marginals are $r$ and $c$. Notice here that due to the identity $h(rc^T) = h(r) + h(c)$, $rc^T$ is an element of $U_\alpha(r,c)$ for all $\alpha \ge 0$, hence these sets are never empty and the optimization problem is always well defined.
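For a concrete feel (a tiny numpy check; the particular histograms are arbitrary): the outer product has exactly the marginals $r$ and $c$, and its entropy splits additively, which is why $rc^T$ always belongs to $U_\alpha(r,c)$.

```python
import numpy as np

r = np.array([0.2, 0.3, 0.5])      # histogram of X
c = np.array([0.25, 0.25, 0.5])    # histogram of Y

P = np.outer(r, c)                 # joint law of (X, Y) with X, Y independent

def h(p):
    """Shannon entropy (natural log) of a histogram, ignoring zero cells."""
    p = np.asarray(p).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Marginals are recovered exactly, and h(r c^T) = h(r) + h(c)
row_marg, col_marg = P.sum(axis=1), P.sum(axis=0)
```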
Hope that helps !
| null | CC BY-SA 4.0 | null | 2023-04-06T07:41:00.890 | 2023-04-06T07:41:00.890 | null | null | 305654 | null |