Id stringlengths 1 6 | PostTypeId stringclasses 7 values | AcceptedAnswerId stringlengths 1 6 ⌀ | ParentId stringlengths 1 6 ⌀ | Score stringlengths 1 4 | ViewCount stringlengths 1 7 ⌀ | Body stringlengths 0 38.7k | Title stringlengths 15 150 ⌀ | ContentLicense stringclasses 3 values | FavoriteCount stringclasses 3 values | CreationDate stringlengths 23 23 | LastActivityDate stringlengths 23 23 | LastEditDate stringlengths 23 23 ⌀ | LastEditorUserId stringlengths 1 6 ⌀ | OwnerUserId stringlengths 1 6 ⌀ | Tags list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
610120 | 1 | null | null | 0 | 8 | I have time series data with two variables (x, y), labeled by user id (1-10). I want to analyze whether x and y are correlated with each other, but those two variables depend heavily on who the user is, so a simple Pearson correlation between the two variables may not give a reliable answer as to whether x and y are correlated. What would be the correct way of measuring correlation here? Can I group by the user id, measure the Pearson correlation of x and y for each user, and take the average?
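A minimal R sketch of the per-user averaging idea described above (the data frame and its column names are illustrative; this only shows the computation being asked about, not an endorsement of it):
```
# Illustrative data: one row per time point, labeled by user id
set.seed(1)
df <- data.frame(user = rep(1:10, each = 20),
                 x = rnorm(200),
                 y = rnorm(200))

# Pearson correlation of x and y within each user, then the average
per_user <- sapply(split(df, df$user), function(d) cor(d$x, d$y))
mean(per_user)
```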
| Correlation analysis on two variables with groups | CC BY-SA 4.0 | null | 2023-03-20T20:24:14.973 | 2023-03-20T20:24:14.973 | null | null | 376209 | [
"mathematical-statistics",
"correlation",
"cross-correlation"
] |
610121 | 1 | null | null | 0 | 18 | I want to build a model that will allow prediction quite far from a few available measurements (i.e. extrapolation), using prior knowledge of the true system to make predictions when there is insufficient or no data. However, I also know there is measurement noise on the output of the system, which I assume is zero-mean Gaussian noise:
$$
y = f_\text{true}(x) + e
$$
where
$$
e \sim \mathcal{N}(0, \sigma_\text{true})
$$
Assuming I have an estimate of $\sigma_\text{true}$, how do I determine whether I can use a linear model fitted to the data, or whether I am better to use the prior, which is a constant value of $y$?
I was thinking I would select a model conditionally according to the number of data points available, $n$, and some condition to check the significance of the model fit:
$$
f(x) = \begin{cases}
y_\text{prior} & \text{if $n=0$} \newline
\mu_y & \text{if $n>0$ and ... is false} \newline
f_\text{fitted}(x) & \text{if $n>0$ and ... is true}
\end{cases}
$$
where $\mu_y$ is the mean of the $y$ values and ... is an appropriate significance test of the fitted model.
The [F-test](https://en.wikipedia.org/wiki/F-test) sounds like an appropriate candidate for this condition. However, I would expect the condition to depend on the estimated noise level $\sigma$, and as far as I can tell the F-statistic does not. I suspect the condition should also depend on how far from the available data I will be predicting (I also know this: the range over which I want to make predictions).
What is the most appropriate condition to use in above model selection? (Or is there a better way to do this?)
Similar question:
- How to account for measurement error when computing explained variance
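A rough R sketch of the conditional rule above, using the overall F-test p-value from `summary(lm())` as a placeholder condition (the threshold and all names are illustrative assumptions, not a recommendation):
```
# Returns a prediction function chosen by the case structure in the question
select_model <- function(x, y, y_prior, alpha = 0.05) {
  n <- length(y)
  if (n == 0) return(function(x_new) rep(y_prior, length(x_new)))
  fit <- lm(y ~ x)
  fs <- summary(fit)$fstatistic          # c(value, numdf, dendf)
  p_val <- if (is.null(fs)) 1 else pf(fs[1], fs[2], fs[3], lower.tail = FALSE)
  if (p_val < alpha) {
    function(x_new) predict(fit, newdata = data.frame(x = x_new))
  } else {
    function(x_new) rep(mean(y), length(x_new))
  }
}
```
Note this sketch does not yet use the known $\sigma$ or the prediction range, which is the crux of the question.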
| How to account for measurement noise when calculating significance of fit for linear regression? | CC BY-SA 4.0 | null | 2023-03-20T20:33:28.697 | 2023-03-20T20:33:28.697 | null | null | 226254 | [
"regression",
"statistical-significance",
"predictive-models",
"linear",
"prior"
] |
610122 | 1 | null | null | 0 | 16 | Just as a disclaimer, the following questions may be quite basic, but I somehow cannot wrap my head around which way to analyse the data, as there are different tests that do almost the "same" thing yet are used interchangeably...
I was wondering, if I were to run a 2x3 mixed ANOVA (2 conditions: X, Y and 3 groups: A, B, C), what post hoc tests would be proper to i) test the X-Y difference and how it differs between the groups, and ii) test how the groups differ in their values in condition Y. Should I:
a) Do a simple effects analysis for (i)? (Is Tukey's an appropriate option here?)
b) I guess for (ii) paired t tests would be the way to go, right?
Or would there be any other suggestions? Wilcoxon/Mann-Whitney, or follow-up ANOVAs (which I'm not sure make sense in this context)?
Any input would be appreciated :) Thank you!
| Proper post-hoc tests (following ANOVA) to test out within/between subject differences | CC BY-SA 4.0 | null | 2023-03-20T20:50:54.370 | 2023-03-20T20:50:54.370 | null | null | 320897 | [
"anova",
"post-hoc",
"simple-effects"
] |
610124 | 2 | null | 610108 | 10 | null | What is actually required is conditional independence of the response variable. Conditional on the regressors, that is. A Poisson regression model - for independent data - is no different from an ordinary least squares model in this assumption except that the OLS conveniently expresses the random error as a separate parameter.
In a Poisson regression model, the specific form of the conditional response is debatable due to disagreements in the fields of probability and statistics. One popular option would be $Y/\hat{Y}$, which is still a random variable, albeit not exactly Poisson distributed - the value is considered an ancillary statistic, like a residual, which does not depend on the estimated model parameters. An interesting thing to observe here is that, if the linear model is misspecified (such as by omitted-variable bias), this may induce a kind of "dependence" in the residue of the fitted component of the model that is not correctly captured.
Independence has a very specific probabilistic meaning, and most attempts to diagnose dependence with diagnostic tests are futile. This is mostly complicated by the important mathematical fact that independence implies zero covariance, but zero covariance does not imply independence.
Poisson regression and OLS are special cases of "generalized linear models" or GLMs, so we can conveniently deal with the independence of observations in GLMs. The classic OLS residual plot, which we use to detect heteroscedasticity, is effective for visualizing a covariance structure, but additional assumptions are needed to declare independence. In a GLM such as a Poisson, a non-trivial mean-variance relationship is an expected feature of the model, so a standard residual plot would be useless. We instead consider the Pearson residuals versus fitted values as a diagnostic. In R, a simple Poisson GLM can be simulated and fitted with `x <- seq(-3, 3, by=0.1); y <- rpois(length(x), exp(-3 + x)); f <- glm(y ~ x, family=poisson)`. The resulting graph shows a tapering curve of residuals and, arguably, a funnel shape, with a LOESS smoother showing a mostly constant, zero-expectation mean residual trend.
[](https://i.stack.imgur.com/x2yR8.png)
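The inline snippet can be expanded into the Pearson-residuals-versus-fitted plot being described (using base R's `lowess` for the smoother; any scatterplot smoother would do):
```
set.seed(1)
x <- seq(-3, 3, by = 0.1)
y <- rpois(length(x), exp(-3 + x))
f <- glm(y ~ x, family = poisson)

# Pearson residuals divide out the Poisson mean-variance relationship
r <- residuals(f, type = "pearson")
plot(fitted(f), r, xlab = "Fitted mean", ylab = "Pearson residual")
lines(lowess(fitted(f), r), col = "red")  # smoother of the residual trend
abline(h = 0, lty = 2)
```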
As an example, we may describe the distribution of the $x$ as the design of the study. While many examples here and in textbooks treat $X$ as random, that's merely a convenience and not that reflective of reality. Suppose the designed experiment is to assess the viral load of infected mouse models treated with antivirals at a sequence of dose concentrations, say $X = (0~(\text{control}), 10, 50, 200, 1000, 5000)$ mg/kg. For an effective ART, the sequence of viral loads is expected to be descending because of a dose-response relationship. The outcome might look like $Y = (10^5, 10^5, 10^4, 10^3, \text{LLOD}, \text{LLOD})$. This response vector is not unconditionally independent; there is a strong "autoregressive" trend induced by the design. But when the fitted effect is estimated through the regression, the conditional response is completely mutually independent.
A more involved, real-life example is detailed in Chapter 3 of the second edition of Agresti's *Categorical Data Analysis*, covering Poisson regression. It deals with the issue of estimating the number of "satellite" crabs in a horseshoe crab nest, a sort of interesting polyamory. Data analyses can be found [here](https://raymondbalise.github.io/Agresti_IntroToCategorical/Generalized.html#poisson-distribution-for-counts).
[](https://i.stack.imgur.com/w11wu.png)
| null | CC BY-SA 4.0 | null | 2023-03-20T20:57:17.583 | 2023-03-21T15:02:44.530 | 2023-03-21T15:02:44.530 | 8013 | 8013 | null |
610125 | 1 | 610284 | null | 0 | 136 | This is the [paper](https://www.nature.com/articles/s41467-022-33244-6) that I am, roughly speaking, trying to implement with my set of genes, which I have narrowed down from [WGCNA analysis](https://en.wikipedia.org/wiki/Weighted_correlation_network_analysis).
It is both a conceptual and an R-related question (how to do it in R), owing to my lack of conceptual clarity.
For a single gene I am clear on what to do: I can split my patient group into low and high expression of that particular gene and then do survival analysis, as shown in this [post](https://www.biostars.org/p/344233/).
Now, based on the paper I cited, I would like to highlight this figure, in which they mention LSC17 (Low/Standard/High). LSC17 is a combination of 17 genes which they use to classify the leukemia cohort.
So how do I go about this in R? Here, instead of 1 gene I have 17 genes, so how should I put a filter to categorize my patient samples as `Low/Standard/High`?
As shown in the Biostars post:
```
survplotdata <- coxdata[,c('Time.RFS', 'Distant.RFS',
'X203666_at', 'X205680_at')]
colnames(survplotdata) <- c('Time.RFS', 'Distant.RFS',
'CXCL12', 'MMP10')
# set Z-scale cut-offs for high and low expression
highExpr <- 1.0
lowExpr <- -1.0
survplotdata$CXCL12 <- ifelse(survplotdata$CXCL12 >= highExpr, 'High',
ifelse(survplotdata$CXCL12 <= lowExpr, 'Low', 'Mid'))
survplotdata$MMP10 <- ifelse(survplotdata$MMP10 >= highExpr, 'High',
ifelse(survplotdata$MMP10 <= lowExpr, 'Low', 'Mid'))
# relevel the factors to have mid as the ref level
survplotdata$CXCL12 <- factor(survplotdata$CXCL12,
levels = c('Mid', 'Low', 'High'))
survplotdata$MMP10 <- factor(survplotdata$MMP10,
levels = c('Mid', 'Low', 'High'))
```
Here they have listed two genes, CXCL12 and MMP10, but they calculate each of them separately.
So my question is how to do it for group of genes together?
Giving a dummy dataframe here
```
set.seed(123)
nr1 = 4; nr2 = 8; nr3 = 6; nr = nr1 + nr2 + nr3
nc1 = 6; nc2 = 8; nc3 = 10; nc = nc1 + nc2 + nc3
mat = cbind(rbind(matrix(rnorm(nr1*nc1, mean = 1, sd = 0.5), nr = nr1),
matrix(rnorm(nr2*nc1, mean = 0, sd = 0.5), nr = nr2),
matrix(rnorm(nr3*nc1, mean = 0, sd = 0.5), nr = nr3)),
rbind(matrix(rnorm(nr1*nc2, mean = 0, sd = 0.5), nr = nr1),
matrix(rnorm(nr2*nc2, mean = 1, sd = 0.5), nr = nr2),
matrix(rnorm(nr3*nc2, mean = 0, sd = 0.5), nr = nr3)),
rbind(matrix(rnorm(nr1*nc3, mean = 0.5, sd = 0.5), nr = nr1),
matrix(rnorm(nr2*nc3, mean = 0.5, sd = 0.5), nr = nr2),
matrix(rnorm(nr3*nc3, mean = 1, sd = 0.5), nr = nr3))
)
mat = mat[sample(nr, nr), sample(nc, nc)] # random shuffle rows and columns
rownames(mat) = paste0("gene", seq_len(nr))
colnames(mat) = paste0("LSC", seq_len(nc))
```
Transposing the dataframe
```
library(dplyr)    # provides %>%
library(tibble)   # provides rownames_to_column()
mat2 <- mat %>% as.data.frame() %>% t() %>% as.data.frame() %>% rownames_to_column("Patient")
```
In my dummy dataframe I have a total of 18 rows (genes), so I would like to categorize all my columns (patients) based on the combination of 5 rows (genes), such as `row1,row2,row3,row4,row5`, as `Five Low / Five Standard / Five High`. How can I do this in R?
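One hedged sketch (not the published LSC17 scoring itself, which uses gene-specific weights): z-score each selected gene across patients, average them into a single per-patient score, then cut that score into three groups. The gene names and tertile cutoffs below are illustrative assumptions.
```
genes <- paste0("gene", 1:5)       # the 5 genes forming the signature
z <- scale(mat2[, genes])          # z-score each gene across patients
sig_score <- rowMeans(z)           # combined per-patient signature score

# Tertile cutoffs are one arbitrary choice; fixed z cutoffs as in the
# Biostars post (e.g. +/- 1) would work the same way on sig_score
mat2$group <- cut(sig_score,
                  breaks = quantile(sig_score, probs = c(0, 1/3, 2/3, 1)),
                  labels = c("Low", "Standard", "High"),
                  include.lowest = TRUE)
table(mat2$group)
```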
Any suggestion or help would be really appreciated
[](https://i.stack.imgur.com/TTcZe.png)
| Using multiple genes building gene signature and survival analysis | CC BY-SA 4.0 | null | 2023-03-20T20:58:35.473 | 2023-05-28T22:28:00.193 | 2023-03-21T18:35:24.943 | 28500 | 334559 | [
"r",
"regression",
"survival",
"cox-model",
"proportional-hazards"
] |
610126 | 2 | null | 610103 | 0 | null | The test statistic has the same form in either case, a difference divided by its standard error, but the comparison is either against a t distribution with the specified number of degrees of freedom or against the limiting normal distribution (an infinite number of degrees of freedom) in a z test.
There is some disagreement about the best way to evaluate the number of degrees of freedom in a linear mixed model like yours. See [this answer](https://stats.stackexchange.com/a/147032/28500), or several other [pages on this site](https://stats.stackexchange.com/search?tab=votes&q=t-test%20versus%20z-test&searchOn=3), for an introduction to the statistical issues.
A [vignette](https://cran.r-project.org/web/packages/emmeans/vignettes/models.html#L) for `emmeans` notes the methods for estimating degrees of freedom that are available in that package. If you do not have the necessary packages installed for the "kenward-roger" or "satterthwaite" estimates of the number of degrees of freedom, the software will use the "asymptotic" method based on a z test. That's probably what happened in your case.
Install the necessary packages and specify the way that you want the degrees of freedom to be estimated. If you have a very large number of observations, there won't be much of a difference between the t- and z-test results, and with a large enough sample even a small absolute difference can have a very small p value.
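For example, with an `lmer` model the degrees-of-freedom method can be requested explicitly through the `lmer.df` argument (a sketch using the `sleepstudy` data shipped with `lme4`; "kenward-roger" needs `pbkrtest` installed and "satterthwaite" needs `lmerTest`):
```
library(lme4)
library(emmeans)

fit <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)

emmeans(fit, ~ Days, lmer.df = "satterthwaite")  # t test, Satterthwaite df
emmeans(fit, ~ Days, lmer.df = "kenward-roger")  # t test, Kenward-Roger df
emmeans(fit, ~ Days, lmer.df = "asymptotic")     # z test (df = Inf)
```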
| null | CC BY-SA 4.0 | null | 2023-03-20T21:19:10.867 | 2023-03-20T21:19:10.867 | null | null | 28500 | null |
610128 | 1 | null | null | 0 | 17 | I have a time series for each of a couple hundred patients, around 10-20 samples per patient, unevenly distributed through time, with over 40,000 columns per sample. The target feature is the level of specific symptoms (measured 0-100), and I am trying to build a predictive model.
Because of the large number of columns, it’s clear some type of dimensionality reduction is necessary. But due to the nature of the data I am unsure of the correct approach to it.
I first thought of using PCA, but realized it might be problematic as I have many different time series. It doesn’t make sense to apply it to the entire dataset combined, because it would lose relationships within the same time series. And applying it separately would lose the meaning of the columns.
In my online research I have stumbled upon MSSA, SSA, and ForeCA, but after reading on them I am still unsure which is the correct one to choose for my specific need. Would love if somebody could explain to me briefly about those methods, what is each better suited for, what would be better for my need, and if there is another option I didn’t mention that can also be of use.
Thanks a lot!
| multiple time series dimensionality reduction | CC BY-SA 4.0 | null | 2023-03-20T21:38:39.780 | 2023-03-20T21:44:39.767 | 2023-03-20T21:44:39.767 | 383704 | 383704 | [
"machine-learning",
"time-series",
"dimensionality-reduction"
] |
610129 | 2 | null | 610118 | 3 | null | What you are describing is that you need to [losslessly compress](https://en.wikipedia.org/wiki/Lossless_compression) and retrieve the data. Did you consider any off-the-shelf [caching](https://blog.bytebytego.com/p/a-crash-course-in-caching-part-1) solution? Since you care about “predicting” the seen data, it's about compressing it, and if I were you I’d start with ready data compression algorithms. If they don't compress the data enough, you could try machine learning, but storing machine learning model also needs storage space, and the 100% accurate model will possibly be one that is complicated, so heavy.
When using machine learning, you could simply fit one model per chunk, so there is no problem with not seeing all the data at once. If you insist on having a single model, it would be more complicated and it would need to be able to distinguish somehow between the chunks (so it doesn't predict chunk 2 for chunk 1). So it might be harder to do, but the model itself could potentially be smaller than the per-chunk models if there are patterns that repeat between the chunks.
But really, start with regular caching and compression. Decent compression algorithms were designed for such purposes and can do miracles.
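As a quick illustration of that last point, base R already ships lossless compression, so it is easy to check how compressible the chunks are before building anything model-based (the payload here is illustrative):
```
chunk  <- serialize(seq_len(1e5), connection = NULL)    # example payload as raw bytes
zipped <- memCompress(chunk, type = "gzip")

length(zipped) / length(chunk)                          # compression ratio
identical(chunk, memDecompress(zipped, type = "gzip"))  # TRUE: lossless round trip
```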
| null | CC BY-SA 4.0 | null | 2023-03-20T22:02:33.103 | 2023-03-20T22:02:33.103 | null | null | 35989 | null |
610130 | 1 | null | null | 0 | 24 | Cohen's Kappa is a measure of inter-annotator agreement that is traditionally applied to a setting where you have two raters who each classify $N$ items into $C$ mutually exclusive categories. For example, in a 4-class classification dataset with 1000 examples, $N = 1000$ and $|C| = 4$.
In some settings, $|C|$ differs across the items. For example, suppose you have a dataset where each example is a (question, paragraph) pair, and the annotators are asked to select a substring of the paragraph that answers the question. In this case, the $C$ for each example is the set of all possible substrings in the paragraph (and the paragraph differs in each example).
Does it make sense to calculate Cohen's Kappa in this setting? While it is possible to calculate the Cohen's kappa on a per-example basis, is there a reasonable way of computing an aggregated Cohen's kappa across the whole dataset (the $N = 1000$ examples)? One could imagine taking a simple average, or doing a weighted average based on the prior probability of chance agreement, etc.
If you have any pointers to techniques or past literature, that'd be much appreciated! Thanks in advance!
| Calculating Cohen's Kappa for a dataset when categories differ between examples | CC BY-SA 4.0 | null | 2023-03-20T22:18:38.777 | 2023-03-20T22:18:38.777 | null | null | 383706 | [
"agreement-statistics",
"cohens-kappa"
] |
610131 | 2 | null | 610090 | 1 | null | You can use a multinomial regression model (using the previous state and time as independent variables), and you can use any typical regression modelling techniques, such as regularization and feature engineering.
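A minimal sketch with `nnet::multinom` (all variable names are assumptions; one row per observed transition, with the previous state and time as predictors of the next state):
```
library(nnet)

# Hypothetical transition data
set.seed(42)
d <- data.frame(next_state = factor(sample(c("A", "B", "C"), 200, replace = TRUE)),
                prev_state = factor(sample(c("A", "B", "C"), 200, replace = TRUE)),
                time       = runif(200))

fit <- multinom(next_state ~ prev_state + time, data = d, trace = FALSE)

# Estimated (time-varying) transition probabilities out of state "A"
predict(fit, data.frame(prev_state = "A", time = c(0.1, 0.9)), type = "probs")
```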
| null | CC BY-SA 4.0 | null | 2023-03-20T22:51:53.460 | 2023-03-21T01:14:48.920 | 2023-03-21T01:14:48.920 | 345611 | 27556 | null |
610132 | 1 | null | null | 1 | 47 | I'm doing GLM homework, and I'm stuck on the following problem:
>
Suppose that data ($Y_i$; $\mathbf{X}_i$); $i = 1, . . . , n$ are observed, where $\mathbf{X}_i$ is a p-dimensional vector for predictors of patient i and $Y_1 , . . . , Y_n$ are given by the following table:
|Y |0 |1 |2 |3 |4 |5 |6 |7 |8 |
|-|-|-|-|-|-|-|-|-|-|
|Number of Subjects |35 |26 |12 |9 |7 |4 |4 |2 |1 |
>
Given the nature of the response variable, what (exponential family) distribution is most appropriate for Y?
I just don't know where to start. I observed that the number of subjects decreases as Y increases, and I computed the sample mean = 1.71 and sample variance = 3.8443 (which doesn't seem to be helpful), but how can these help me determine which family Y is from? Could someone give me some hints? Thanks
| Find out which exponential distribution the data belongs to | CC BY-SA 4.0 | null | 2023-03-20T23:29:16.810 | 2023-03-22T14:19:11.850 | null | null | 355736 | [
"regression",
"distributions",
"generalized-linear-model",
"exponential-family"
] |
610133 | 1 | null | null | 0 | 21 | The mutual information of a joint probability distribution $p(x,y)$ tells us, if we send $n$-letter messages with each letter drawn from the marginal distribution $p_X(x)$, that we can use roughly $2^{n I(X:Y)}$ sequences to transmit words to the receiver with small maximal probability of error, in the asymptotic limit of $n\to\infty$.
Is there any non-asymptotic counterpart for these statements? In other words, how can we quantify the amount of information that can be sent through a channel (conditional probability distribution) and recovered on the other end, in the regime of finite (and possibly small) $n$?
| What is the non-asymptotic counterpart of the mutual information? | CC BY-SA 4.0 | null | 2023-03-20T23:57:46.587 | 2023-03-20T23:57:46.587 | null | null | 82418 | [
"probability",
"asymptotics",
"information-theory",
"mutual-information",
"communication"
] |
610134 | 1 | 610139 | null | 1 | 56 | Let's say I have a statistic (in this example `sd`) for sample `v1` and sample `v2`. I would like to compare those two statistics but since I have only two values I cannot run a test like `t.test`, so I thought I could create bootstrap estimates of `sd`:
```
v1 <- c(10,23,21,12,14,14,13,14,15,25)
v2 <- c(10,11,10,13,13,13,14,19,12,23)
bs1 <- replicate(1000,sd(sample(v1,10,replace=TRUE)))
bs2 <- replicate(1000,sd(sample(v2,10,replace=TRUE)))
t.test(bs1,bs2)
```
Would this be a valid approach? My feeling is that it isn't, since I can get really small p-values because the sample size is very large, but I just want to make sure.
| t.test on bootstrapped estimates | CC BY-SA 4.0 | null | 2023-03-21T00:03:07.323 | 2023-03-21T00:50:53.610 | null | null | 212831 | [
"r",
"bootstrap"
] |
610135 | 2 | null | 610090 | 4 | null | If the transition probability matrix varies over time then your stochastic process is not a Markov chain (i.e., it does not obey the Markov property). In order to estimate transition probabilities at each time you would need to make some structural assumptions about how these transition probabilities can change (e.g., how rapidly they can change, etc.). Without any structural assumptions, the MLE for the transition probability matrix at each time period will estimate a probability of one for the transition that actually occurred and a probability of zero for all other transitions, which is not very helpful.
If you have a look at the documentation for the [seqtrate function](https://rdrr.io/cran/TraMineR/man/seqtrate.html), you will see that it references [Gabadinho et al (2011)](https://cran.r-project.org/web/packages/TraMineR/vignettes/TraMineR-state-sequence.pdf). This paper gives further information on the analysis of state-sequence objects using the `TraMineR` package. It includes discussion of the transition rates and the "turbulence" of transitions in the sequence. The paper contains references to relevant mathematical and statistical literature that discusses this type of analysis, so you might need to do a bit of a deep-dive to learn the relevant models and methods for this type of analysis.
| null | CC BY-SA 4.0 | null | 2023-03-21T00:15:24.733 | 2023-03-21T00:15:24.733 | null | null | 173082 | null |
610136 | 1 | null | null | 0 | 41 | Consider a piecewise polynomial model, the model takes the form
\begin{align}
Y &= g(X)+\epsilon\\
&\approx
\begin{cases}
\alpha_0+\alpha_1X, & \text{if $X<c$}\\
\beta_0+\beta_1X, & \text{if $X\geq c$}
\end{cases}
\end{align}
where $c$ is the knot of the piecewise linear model and $\theta=(\alpha_0, \alpha_1, \beta_0, \beta_1)$ is the model parameter. We can further rewrite $g(X)$ as a linear combination of basis functions so that $\theta$ can be estimated via OLS.
\begin{align}
g(X) &= (\alpha_0+\alpha_1X)\mathbb{1}(X<c) + (\beta_0+\beta_1X)\mathbb{1}(X\geq c)\\
&= \alpha_0\underbrace{\mathbb{1}(X<c)}_{b_1(X)}+\alpha_1\underbrace{X\mathbb{1}(X<c)}_{b_2(X)} + \beta_0\underbrace{\mathbb{1}(X\geq c)}_{b_3(X)}+\beta_1\underbrace{X\mathbb{1}(X\geq c)}_{b_4(X)}
\end{align}
The problem is that $g(X)$ is discontinuous at $X=c$. To make sure $g(X)$ is continuous at the knot $c$, the estimate of $\theta$ would be
\begin{align}
\widehat{\theta}=\underset{\theta\in\Theta}{\arg\min}\sum_{i=1}^{n}(y_i-\widehat{y}_i)^2 && \text{where} && \Theta=\left\{\theta:\alpha_0+\alpha_1c=\beta_0+\beta_1c\right\}
\end{align}
- I am uncertain whether my understanding of piecewise polynomial regression is accurate, and I would greatly appreciate any feedback on potential errors I may have made.
- What is the method to solve the constrained optimization problem shown above? During my study of PCA, I learned that it is possible to optimise an objective function subject to constraints via Lagrange multipliers. However, my understanding of the Lagrange multiplier method is limited, and I am unsure how to apply it to this particular problem. Any derivation of the solution using Lagrange multipliers (or any other method) would be appreciated.
- I know that the piecewise polynomial regression can be equivalently represented as a regression spline by imposing the truncated power basis at knots:
\begin{align}
h(X) &= \lambda_0+\lambda_1X+\lambda_2(X-c)_+
\end{align}
where $(X-c)_+=(X-c)\mathbb{1}(X>c)$. This is much easier to handle since the estimate of $h(.)$ can be obtained using OLS. The problem is that I can't see the relationship between $g(X)$ and $h(X)$. How does $g(X)$ further reduce to $h(X)$ by imposing $(X-c)_+$?
I apologise if the title is confusing or unclear. I hope that the description above clarifies my problems effectively.
| Optimising a piecewise polynomial model's objective function under constraints | CC BY-SA 4.0 | null | 2023-03-21T00:26:39.397 | 2023-03-21T00:56:31.880 | 2023-03-21T00:56:31.880 | 383333 | 383333 | [
"regression",
"self-study",
"optimization",
"nonlinear-regression",
"splines"
] |
610137 | 2 | null | 610106 | 2 | null |
- "... may violate the consistency criterion that requires the medians to be different in order for the test to be consistent"
... there's no such requirement to consider. Even if it were true, consistency is a property of a test not a sample, nor a population.
Consistency is the condition that, IF the population effect is different from what is specified under $H_0$, as sample size increases toward infinity, the probability of the test rejecting $H_0$ goes to 1. Less formally, it's that at sufficiently large sample size, the test rejects false nulls.
The test you're doing is consistent against the alternatives it's designed to test for. If it's testing what you wanted to test for, there's nothing to check on this, its consistency was proved many decades ago.
- If you did the Mood test through some mistaken sense that there was a requirement to do so before using the test, it was a pointless exercise; there's nothing to check here - forget it.
(I am extremely curious to know whether some book, paper, or other source actually suggested you needed to do this. If so, please indicate what it was, if possible I'd like to see what other egregious errors it contains to warn people about.)
If you did it for some other reason - beware of using multiple tests of very similar hypotheses on the same data. It looks rather like significance hunting. You should look to test a single hypothesis rather than multiple variations of the same one. Figure out what you want to test for (before you collect data), and test that.
---
To address the specific questions you asked (albeit largely repeating what I just said):
>
"Should I then conclude the distributions are different based on this MWU test?"
(i) Since you have $p\leq\alpha$, yes, you should reject the null of the Wilcoxon-Mann-Whitney; the test statistic is in the rejection region.
(ii) However, the Wilcoxon-Mann-Whitney is not a general test of distributional inequality and ought not be framed that way. It looks at the probability that a random value from one population will exceed a random value from the other population. In the test, $\frac{U}{n_1 n_2}$ is a sample estimate of that probability. If that probability is different from the probability of the converse direction (e.g. A's tend more often to be bigger when compared with randomly chosen B's) then in large samples you will tend to reject $H_0$, because the test is consistent against that specific alternative.
If the difference in the two distributions does not produce any 'tends to be bigger' effect, then the test simply lacks the ability to see it (it doesn't test any of those other kinds of alternatives).
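The sample estimate $\frac{U}{n_1 n_2}$ mentioned above is easy to compute directly in R (illustrative data; with no ties, the two lines give the same value):
```
set.seed(1)
a <- rnorm(30)
b <- rnorm(40, mean = 0.5)

w <- wilcox.test(a, b)
unname(w$statistic) / (length(a) * length(b))  # U / (n1 * n2)
mean(outer(a, b, ">"))                         # same estimate of P(A > B)
```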
(iii) If you did want a general test of distributional inequality rather than a test of 'tends to be bigger'-ness, you should not use this test, but instead one designed for that omnibus alternative (one of the various two-sample goodness of fit tests).
(Again, I am extremely curious to know whether some book, paper, or other source actually suggested that the Wilcoxon-Mann-Whitney was a test for general distributional inequality. If it was the same source, we already have our second egregious error to warn people against. It might well be a goldmine of other nonsense.)
>
"Should I be worried about consistency?"
As a general principle? Probably. Most people seem to expect a test should be consistent -- at least up until they actually see its consequences happening (the fact that they always seem to reject with a really large sample), when it seems to make many people think something is wrong with the test, rather than perhaps with their choice to use a pure-equality null rather than something closer to what they thereby reveal that they wanted to test for.
In relation to a specific sample? It has nothing to say about that.
| null | CC BY-SA 4.0 | null | 2023-03-21T00:43:28.833 | 2023-03-21T07:45:05.110 | 2023-03-21T07:45:05.110 | 805 | 805 | null |
610138 | 2 | null | 610134 | 1 | null | Here is the problem with this approach: It is incredibly sensitive to the number of bootstraps. Suppose that the null is true (the population variances between the two samples really are equal). I can still reject the null just by cranking up the number of bootstraps. That doesn't really sound like a good approach.
```
set.seed(0)
v1 <- rnorm(10)
v2 <- rnorm(10)
bs1 <- replicate(1000,sd(sample(v1,10,replace=TRUE)))
bs2 <- replicate(1000,sd(sample(v2,10,replace=TRUE)))
t.test(bs1,bs2)
```
What I suspect you're trying to do is use the bootstrap to test the hypothesis that the two variances are the same. That can be done as follows:
```
set.seed(0)
v1 <- c(10,23,21,12,14,14,13,14,15,25)
v2 <- c(10,11,10,13,13,13,14,19,12,23)
bootstraps <- replicate(10000,{
v1b <- sample(v1, replace = T)
v2b <- sample(v2, replace = T)
log(var(v1b)) - log(var(v2b))
})
# 95% percentile bootstrap interval for the log variance ratio
quantile(bootstraps, probs = c(0.025, 0.975))
```
Here, I've computed the difference of the two variances on the log scale so if the resulting bootstrap confidence interval contains 0, then we know the ratio of these variances is consistent with a value of 1 (i.e. that the data are consistent with the same population variance)
| null | CC BY-SA 4.0 | null | 2023-03-21T00:49:26.090 | 2023-03-21T00:49:26.090 | null | null | 111259 | null |
610139 | 2 | null | 610134 | 1 | null | The bootstrap is a valid approach, but the t-test is not valid. One simple approach is to take the quantiles of the difference of the sample metrics and test whether that confidence interval includes 0.
Also, in this case, Levene's test for equality of variances is also available as an alternative.
```
v1 <- c(10,23,21,12,14,14,13,14,15,25)
v2 <- c(10,11,10,13,13,13,14,19,12,23)
set.seed(10320)
bs1 <- replicate(1000, sd(sample(v1, 10, replace=TRUE)))
bs2 <- replicate(1000, sd(sample(v2, 10, replace=TRUE)))
# t.test(bs1,bs2) # Not correct
quantile(bs1 - bs2, prob = c(0.025, 0.975))
require(car)
car::leveneTest(c(v1, v2), factor(c(rep("v1", length(v1)), rep("v2", length(v2)))),
center = mean)
```
| null | CC BY-SA 4.0 | null | 2023-03-21T00:50:53.610 | 2023-03-21T00:50:53.610 | null | null | 212798 | null |
610140 | 1 | null | null | 0 | 21 | [](https://i.stack.imgur.com/5ABVv.png)
The above is the Satoh (2001) discrete form of the cumulative Bass model. I am using it to estimate a cumulative diffusion pattern from 19 years of data (renewable energy source diffusion in professional sport stadia). My question concerns the exponent (t/2) in the Satoh (2001) model: I get a better estimate when I raise both terms to the power t rather than t/2. (My data points are measured annually, so with delta equal to 1, n/2 becomes t/2.) I have the data as well as the output below for the two separate approaches. Any help would be greatly appreciated, either justifying the use of t in the exponent or pointing out where I was mistaken with the t/2!
```
Data<-c(1,1,1,1,6,13,17,20,28,37,40,44,48,55,63,68,75,79,86)
T119=1:19
#This is the estimation WITHOUT t/2 in the exponent. Market is preset to the number of facilities (175)
Bass.nlsENV <- nls(Data ~ 175*((1-((1-(q+p))/(1+(q+p)))^(T119))/(1+(q/p)*((1-(q+p))/(1+(q+p)))^(T119))), start=list(p=.0001, q=.025))
summary(Bass.nlsENV)
Bcoef <- coef(Bass.nlsENV)
p <- Bcoef[1]
q <- Bcoef[2]
bassModelCUMENV<-function(p,q,T=100)
{
D=double(T)
for(t in 1:T)
D[t] = 175*((1-((1-(q+p))/(1+(q+p)))^(t))/(1+(q/p)*((1-(q+p))/(1+(q+p)))^(t)))
return(D)
}
#Fitting the function to the data
Spred=bassModelCUMENV(p,q,T=50)
#Generating time series variables and plot
Spred=ts(Spred,start=c(2003,1),freq=1)
ENVAdopters=ts(Data,start=c(2003,1),freq=1) # Data is the cumulative vector defined above
ts.plot(ENVAdopters,Spred,col=c("red","black"), xlab="Year", ylab="Number of Adopters")
```
[](https://i.stack.imgur.com/Khk9m.png)
```
#This is the estimation WITH t/2 in the exponent
Bass.nlsENVn2<-nls(Data~175*((1-((1-(q+p))/(1+(q+p)))^(T119/2)/(1+(q/p)*((1-(q+p))/(1+(q+p)))^(T119/2)))), start=list(p=.001, q=.025))
summary(Bass.nlsENVn2)
Bcoef <- coef(Bass.nlsENVn2)
p <- Bcoef[1]
q <- Bcoef[2]
bassModelCUMENVn2<-function(p,q,T=100)
{
D=double(T)
for(t in 1:T)
D[t] = 175*((1-((1-(q+p))/(1+(q+p)))^(t/2))/(1+(q/p)*((1-(q+p))/(1+(q+p)))^(t/2)))
return(D)
}
#Fitting the function to the data
Spredn2=bassModelCUMENVn2(p,q,T=50)
#Generating time series variables and plot
Spredn2=ts(Spredn2,start=c(2003,1),freq=1)
ENVAdopters=ts(Data,start=c(2003,1),freq=1) # Data is the cumulative vector defined above
ts.plot(ENVAdopters,Spredn2,col=c("red","black"), xlab="Year", ylab="Number of Adopters")
```
[](https://i.stack.imgur.com/zaktq.png)
Clearly the t exponent estimates better than t/2, including a smaller RSE on existing data. However, the equation suggests t/2. What am I missing? Thank you for your help!
| NLS-estimated Discrete Bass Model (Satoh, 2001) | CC BY-SA 4.0 | null | 2023-03-21T00:56:10.270 | 2023-03-21T00:57:36.870 | 2023-03-21T00:57:36.870 | 383709 | 383709 | [
"discrete-data",
"nls"
] |
610142 | 2 | null | 157125 | 1 | null | Probably one of the biggest limitations to GAMs is that they cannot model complex regression paths that involve multiple responses or things like mediation paths. [Structural equation modeling (SEM)](https://ecologicalprocesses.springeropen.com/articles/10.1186/s13717-016-0063-3#:%7E:text=Structural%20equation%20modeling%20(SEM)%20is,on%20pre%2Dassumed%20causal%20relationships.) can achieve this, but generally the tools for nonlinear methods don't seem as developed like GAMs.
GAMs also don't allow you to directly model distributional features like skew or kurtosis, and modeling those can make predictions more accurate. While this isn't always a huge concern for people using GAMs given how flexible they are, one alternative that offers more distributional specificity is [GAMLSS](https://www.gamlss.com/).
| null | CC BY-SA 4.0 | null | 2023-03-21T01:31:27.223 | 2023-03-21T01:31:27.223 | null | null | 345611 | null |
610143 | 1 | null | null | 0 | 4 | I want to implement the [PIE dataset](https://github.com/aras62/PIE) in the [AgentFormer arch](https://github.com/Khrylx/AgentFormer).
AgentFormer uses ETH and nuScene datasets. I successfully run these datasets on this arch. However, I couldn't take a good way with the PIE dataset. I am not sure how I could write a new data loader for it. Which steps should I take?
I normally have experience implementing articles without looking at similar GitHub code, but this time I am stuck.
Thank you for any help.
| implementing a new dataset on cv architecture | CC BY-SA 4.0 | null | 2023-03-21T02:01:39.770 | 2023-03-21T02:01:39.770 | null | null | 347870 | [
"neural-networks",
"tensorflow",
"computer-vision",
"torch"
] |
610144 | 1 | null | null | 2 | 72 |
#### Introduction and Model
I recently had a reviewer ask what the strength of my GAM interactions was compared to my main effects. This was in reference to some predicted probability plots I created. To illustrate, I have recreated a comparable example with data from the `gamair` package using `mgcv`.
```
#### Libraries and Data ####
library(mgcv)
library(gamair)
library(itsadug)
data("wesdr")
#### Fit Model ####
fit <- gam(
ret
~ s(dur, bs = "cr")
+ s(bmi, bs = "cr")
+ ti(dur,bmi, bs = "cr"),
method = "REML",
family = binomial,
data = wesdr
)
#### Plot Main Effects ####
par(mfrow=c(1,2))
plot(fit,
select=1,
trans=plogis,
shift=coef(fit)[1])
plot(fit,
select=2,
trans=plogis,
shift=coef(fit)[1])
#### Plot Interactions ####
par(mfrow=c(1,1))
fvisgam(fit,
view = c("dur","bmi"),
transform = plogis)
```
This creates two sets of plots, the main effects:
[](https://i.stack.imgur.com/BSUNl.png)
And their interaction:
[](https://i.stack.imgur.com/UuiQq.png)
#### My Interpretation
Now my model isn't exactly like this, but it's semi-comparable (my interaction has a more linear divide between the contour colors compared to this model). To me, it appears that both main effects influence the outcome, but their "strength" can't really be weighed except at very specific predictions. For example, `dur` seems to have a pronounced effect from 0 to 30 (particularly around dur = 10), but becomes increasingly uncertain at larger values.
For the interaction, it seems that the red boundary of this plot has the most predictive power (which seems quite strong from what I observe), followed by the weird "red bubble" in the bottom left caused by the bends in the main effects. However, can the main effects and interaction here be directly compared? For me, it seems the interaction in some cases has stronger predictive power (with some regions predicting a 90% chance of retinopathy), whereas this seems to vary in other cases. Because this can be considered on a case-by-case basis, I don't really know if they can be directly compared in this way.
| Can interactions be compared against main effects in generalized additive models (GAMs)? | CC BY-SA 4.0 | null | 2023-03-21T02:08:46.947 | 2023-03-21T03:28:37.307 | 2023-03-21T03:28:37.307 | 345611 | 345611 | [
"r",
"regression",
"interaction",
"nonlinear-regression",
"generalized-additive-model"
] |
610146 | 1 | null | null | 1 | 13 | I have seen people use both normalization (min-max scaling, so all values lie between 0 and 1) and standardization (z-scoring to zero mean and unit variance) as part of pre-processing.
It's often said that normalization is preferred when the data are skewed. What if other variables in a regression analysis are one-hot encoded? Should we then also prefer normalization (since the ranges of the different variables would be the same)?
Can someone tell the exhaustive conditions of when to use which data transformation techniques?
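For concreteness, here is a quick sketch of the two transforms in plain Python (the values are made up and purely illustrative):

```python
data = [2.0, 4.0, 6.0, 8.0, 20.0]

# Min-max normalization: rescale to the interval [0, 1]
lo, hi = min(data), max(data)
minmax = [(v - lo) / (hi - lo) for v in data]

# Standardization (z-score): zero mean, unit standard deviation
mean = sum(data) / len(data)
sd = (sum((v - mean) ** 2 for v in data) / len(data)) ** 0.5
zscores = [(v - mean) / sd for v in data]

print(minmax)   # first value 0.0, last value 1.0
print(zscores)  # mean ~0, standard deviation ~1
```

Note that neither transform changes the shape of the distribution: min-max scaling pins the range to [0, 1], and z-scoring centers and rescales, but skewness is unaffected by both.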
| When should one normalise the data and when should one standardize the data as a part of data pre-processing while building ML models? | CC BY-SA 4.0 | null | 2023-03-21T02:37:16.613 | 2023-03-21T02:37:16.613 | null | null | 90102 | [
"regression",
"data-transformation",
"normalization",
"standardization",
"data-preprocessing"
] |
610148 | 1 | null | null | 1 | 27 | If I have two metrics, the number of reviews and the rating out of 5, and I want to combine them into a single meaningful rating that can rank a list, how can I do that?
For example, let's say I have the following:
```
BeachName-#Reviews-Rating
Pantai Kaneko Beach-10-4.9 (meaning 10 reviews with a rating of 4.9)
Pantai Kelating Beach-517-4.4
Pantai Pusat Beach-913-4.5
Pantai Abian Kepas Beach-156-4.4
Pantai Antap Beach-177-4.6
Pantai Manyar Beach-141-4.3
Pantai Pernama Beach-840-4.3
Pantai Saba Beach-102-4.4
Pantai Masceti Beach-1155-4.4
Pantai Cucukan Beach-1-4
Emerald Dive Spot-2-5
USAT Liberty Shipwreck-896-4.7
Pantai Nusantara Beach-62-4.1
Menjangan Island-137-4.6
```
I don't have more than grade 5 maths. What Excel formula could I use to combine these two numbers and rank this list by popularity? I want to do more than say this beach has the most reviews; I would like to adjust that by the rating out of five to try and get a better ranking.
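One common convention for this (just an illustration, not the only option) is a Bayesian or weighted average that shrinks ratings with few reviews toward the global mean rating, using a tunable prior weight `m` that acts like `m` phantom reviews at the global mean. A sketch in Python on a subset of the data above:

```python
rows = [
    ("Pantai Kaneko Beach", 10, 4.9),
    ("Pantai Pusat Beach", 913, 4.5),
    ("Pantai Masceti Beach", 1155, 4.4),
    ("Pantai Cucukan Beach", 1, 4.0),
    ("Emerald Dive Spot", 2, 5.0),
    ("USAT Liberty Shipwreck", 896, 4.7),
]

# Global mean rating across all listed places
C = sum(r for _, _, r in rows) / len(rows)
m = 100  # assumed prior weight: behaves like m "phantom" reviews at the global mean

def weighted_rating(n, r):
    # With few reviews the score is pulled toward C; with many, it approaches r
    return (n / (n + m)) * r + (m / (n + m)) * C

ranked = sorted(rows, key=lambda t: weighted_rating(t[1], t[2]), reverse=True)
for name, n, r in ranked:
    print(f"{name}: {weighted_rating(n, r):.3f}")
```

In Excel, with review counts in column B and ratings in column C, the analogous formula would be roughly `=(B2/(B2+100))*C2 + (100/(B2+100))*AVERAGE($C$2:$C$15)`. With this scoring, a two-review 5.0 no longer outranks a 896-review 4.7.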
| A Rating Formula that Combines Two Metrics | CC BY-SA 4.0 | null | 2023-03-21T03:13:29.023 | 2023-03-21T03:35:50.380 | 2023-03-21T03:35:50.380 | 362671 | 383717 | [
"rating"
] |
610150 | 2 | null | 366945 | 0 | null | A polynomial can have matrix or operator inputs. Let $p(x) = x^2 + 4x + 4 = (x+2)^2$, and suppose $\textbf{A}$ is a square matrix satisfying $p(\textbf{A})=0$. Naively we would write,
$p(\textbf{A}) = (\textbf{A}+2)^2 = (\textbf{A}+2)(\textbf{A}+2) = 0$
What is wrong with the above? We need to 'replace' the scalar $2$ with $2\textbf{I}$.
Now pick any nonzero column $c_j$ of $(\textbf{A}+2\textbf{I})$. We have,
$(\textbf{A}+2\textbf{I})c_j = 0 \quad \implies \quad \textbf{A}c_j = -2c_j$
So $c_j$ is an eigenvector of $\textbf{A}$ with corresponding eigenvalue $\lambda = -2$; equivalently, $c_j$ lies in the null space of $(\textbf{A}+2\textbf{I})$. By the fundamental theorem of algebra, every univariate polynomial of degree $n$ factors into $n$ monic linear terms. By [range-null-space decomposition](https://www.statlect.com/matrix-algebra/range-null-space-decomposition), because the null space of successive powers of an $n \times n$ matrix stabilizes (stops growing) after at most $n$ powers of $\textbf{A}$ (e.g. $\textbf{A}^n$), we expect at most $n$ distinct eigenvalues (roots of $p(\lambda) = 0$). Ergo, the polynomial $\prod_i^n(\textbf{A}-\lambda_i\textbf{I}) = 0$ satisfies the characteristic equation.
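A quick numerical check of this (a sketch in Python/numpy, using a 2x2 Jordan block with eigenvalue $-2$, so that $(\textbf{A}+2\textbf{I})^2 = 0$):

```python
import numpy as np

# A is a Jordan block with eigenvalue -2, so p(A) = (A + 2I)^2 = 0
A = np.array([[-2.0, 1.0],
              [ 0.0, -2.0]])
M = A + 2 * np.eye(2)

print(M @ M)   # the zero matrix: p(A) = 0

# A nonzero column of (A + 2I) is an eigenvector with eigenvalue -2
c = M[:, 1]    # second column, [1, 0]^T
print(A @ c)   # equals -2 * c
```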
In time-series analysis it is often the case that we need to divide by the eigenvalues so as to attain terms of the form $(1-\lambda_i^{-1}\textbf{B})$ where $\textbf{B}$ is the backshift operator. See section 3.6 of 'Time Series: Theory and Methods' by Brockwell and Davis.
| null | CC BY-SA 4.0 | null | 2023-03-21T03:51:31.633 | 2023-03-21T04:34:03.683 | 2023-03-21T04:34:03.683 | 383720 | 383720 | null |
610152 | 1 | null | null | 0 | 12 | In some old lecture notes I came across this relationship between all three estimators:
$$
\hat{\beta}_{POLS} = W_{BE} \hat{\beta}_{BE} + W_{FE} \hat{\beta}_{FE},
$$
where the weights $W_{BE}$ and $W_{FE}$ depend on the variance of the estimators. I have been unable to derive the relationship myself. Where can I read more about it?
| Relationship between POLS, FE, and BE estimators in panel data | CC BY-SA 4.0 | null | 2023-03-21T05:12:48.903 | 2023-03-21T10:28:46.157 | null | null | 105257 | [
"panel-data",
"estimators"
] |
610154 | 1 | null | null | 1 | 70 | I have fit two partial proportional odds models in R using the `clm()` function from the ordinal package, with nominal effects (more on the function and package [here](https://rdrr.io/cran/ordinal/f/inst/doc/clm_article.pdf)). I have an ordinal response variable and several independent variables of different classes. I was wondering what would be a good way to assess goodness of fit for my models and compare them to each other in R.
Example of the model:
```
model2 <- ordinal::clm(VH ~ Interp.trust + Pol.part + Civ.norms +
edu_lvl + sex + health_status + media + covid_hist +
trust_science,
nominal = ~ Link.trust + Soc.part + age + risk_perc,
data = FINALIT)
```
| Goodness of fit test for partial proportional odds model in R? | CC BY-SA 4.0 | null | 2023-03-20T11:50:02.740 | 2023-03-21T23:05:55.570 | 2023-03-21T23:05:55.570 | 11887 | 382486 | [
"r",
"model",
"ordinal-data",
"goodness-of-fit"
] |
610155 | 2 | null | 610154 | 2 | null | After some more research, I think I am able to answer my own question. Please if are any corrections comment below/post another answer! I am very new to all this!
So, anova() (analysis of variance) in R can be used for clm models, which I was not sure about ([source](https://cran.r-project.org/web/packages/ordinal/ordinal.pdf), "anova.clm" section). This can only be done with nested models, but since what I want to check is whether the addition of certain variables to model 2 (that were not included in model 1) is an improvement, this is perfect for me.
So I run this code:
```
anova(model2, model1) #model 2 first as it is the one with the added variables
```
Which returns this output:
```
Likelihood ratio tests of cumulative link models:
no.par AIC logLik LR.stat df Pr(>Chisq)
model1 52 2850.5 -1373.2
model2 63 2776.7 -1325.3 95.769 11 1.225e-15 ***
---
Signif. codes:
0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
In this case, the null hypothesis is that both models fit the data equally well, in which case the addition of the variables in model 2 is not worthwhile and we should prefer model 1. The alternative hypothesis is that model 2 fits the data significantly better than model 1, so the added variables should be kept. (I found [this answer](https://stats.stackexchange.com/questions/6505/likelihood-ratio-test-in-r) on Cross Validated very useful for interpreting the output.)
As you can see, in my analysis the p value is well below the .05 threshold, meaning that I can reject the null hypothesis and conclude that the model with the added variables is preferable to the simpler one.
| null | CC BY-SA 4.0 | null | 2023-03-20T13:37:44.963 | 2023-03-20T13:37:44.963 | null | null | 382486 | null |
610156 | 1 | null | null | 5 | 75 | I've read that post-hoc power analysis is useless when the result of a test is statistically insignificant, which I understand (post-hoc power analysis is just circular logic in this scenario). But what if the result is significant?
Isn't a significant but underpowered result really dubious, which would justify the systematic use of post-hoc power analysis after finding a significant result?
Maybe not systematic, but I'm thinking for example of the case of secondary data analysis when you can't possibly run some a priori power analysis (sample size calculation), or the case of exploratory or pilot studies.
In these situations at least, doesn't post-hoc power analysis help avoid possible misinterpretation or overinterpretation of significant results? Am I missing something?
I ask the question, because many texts seem to make a strong point that post-hoc power analysis is useless, but unless I missed something they always talk about the case of insignificant results. So I'm wondering if it applies to the case of significant results too.
Subsidiary question: are there situations where it wouldn't be justified to run post-hoc power analysis after finding a significant result?
If on the top of an answer, you have any good additional references relative to this issue, I'm interested. Thanks.
| Is post-hoc power analysis redundant too in the case of a *significant* result? | CC BY-SA 4.0 | null | 2023-03-21T05:59:44.070 | 2023-03-21T07:05:31.793 | null | null | 383726 | [
"hypothesis-testing",
"statistical-power",
"effect-size",
"post-hoc"
] |
610157 | 2 | null | 610156 | 2 | null | If you have observed a significant effect, then your study was by definition powerful enough to detect this effect. Whether your study was "underpowered" (as in "power was lower than some specific threshold") or not does not enter this argument.
As a result, "observed power" is nothing else than a reformulation of the p value of your observed effect, and therefore, it adds no information beyond the p value.
The best exposition of this is in my opinion [Hoenig & Heisey (2001, The American Statistician), "The Abuse of Power: The Pervasive Fallacy of Power Calculations for Data Analysis"](https://www.tandfonline.com/doi/abs/10.1198/000313001300339897). I particularly recommend their Figure 1.
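To make the "reformulation of the p value" point concrete, here is a small sketch (a one-sided z-test, chosen only for illustration) showing that "observed power" is a deterministic function of the p value alone:

```python
from statistics import NormalDist

def observed_power(p, alpha=0.05):
    """'Post-hoc' power of a one-sided z-test, computed from the p value alone.

    The observed z statistic is recovered as z_obs = Phi^{-1}(1 - p), and
    power is then evaluated at that observed effect: 1 - Phi(z_alpha - z_obs).
    """
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha)
    z_obs = nd.inv_cdf(1 - p)
    return 1 - nd.cdf(z_alpha - z_obs)

# A result exactly at the significance threshold always has observed power 0.5,
# and smaller p maps monotonically to larger "observed power" -- no new information.
print(observed_power(0.05))  # 0.5
print(observed_power(0.01))  # ~0.75
```

Since the mapping from p to observed power is one-to-one, reporting observed power alongside p adds nothing beyond the p value itself.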
| null | CC BY-SA 4.0 | null | 2023-03-21T07:05:31.793 | 2023-03-21T07:05:31.793 | null | null | 1352 | null |
610159 | 1 | null | null | 0 | 10 | I am validating a prediction tool and I have computed sensitivity, specificity, accuracy, PPV and NPV as shown below.
```
ACTUAL VALUES
+ -
Predected + 86 11
Values - 8 163
Sensitivity Specificity Accuracy PPV NPV
0.914 0.936 0.929 0.915 0.937
```
I know how to calculate the 95% confidence interval for sensitivity and specificity, but I have not been able to find a way to do this for accuracy, PPV and NPV.
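For reference, each of these metrics is a binomial proportion (a count of correct calls over the relevant total), so a proportion interval such as the Wilson score interval can be applied to all of them. A sketch in plain Python, using the accuracy count (86 + 163) / 268 from the table above:

```python
from statistics import NormalDist

def wilson_ci(k, n, conf=0.95):
    """Wilson score interval for a binomial proportion k/n."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    phat = k / n
    denom = 1 + z**2 / n
    centre = (phat + z**2 / (2 * n)) / denom
    half = z * ((phat * (1 - phat) / n + z**2 / (4 * n**2)) ** 0.5) / denom
    return centre - half, centre + half

# Accuracy: (86 + 163) correct out of 268 cases
lo, hi = wilson_ci(249, 268)
print(f"accuracy 95% CI: ({lo:.3f}, {hi:.3f})")  # roughly (0.89, 0.95)
```

PPV would use the predicted-positive row total, 86/(86+11), and NPV the predicted-negative row, 163/(8+163), in the same way. Note that PPV and NPV depend on prevalence, so such intervals only generalize if the study prevalence is representative.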
| How to compute confidence intervals for accuracy, PPV and NPV? | CC BY-SA 4.0 | null | 2023-03-21T08:30:14.723 | 2023-03-21T08:30:14.723 | null | null | 378571 | [
"confidence-interval"
] |
610160 | 1 | null | null | 0 | 12 | I have 5 sensors that count the number of cars passing through each day. I have gathered monthly data for all sensors (i.e., summing each day's counts). Now I want to test whether the distributions of cars passing through these sensors differ. I was planning to conduct a one-way ANOVA, but is that right? Is there a better alternative? Lastly, how can I tackle the multiple testing problem? I was considering a Bonferroni correction. Thanks in advance!
| Hypothesis testing for monthly data of several groups | CC BY-SA 4.0 | null | 2023-03-21T08:38:12.590 | 2023-03-21T08:38:12.590 | null | null | 346257 | [
"hypothesis-testing",
"anova",
"multiple-comparisons"
] |
610161 | 2 | null | 610009 | 1 | null | OK based on your last comment, you have two somewhat different things going on here: (1) research design and (2) choice of analysis. In order to investigate the effect of gamification on motivation, your research design might be for instance the following:
- 2 groups of individuals
- You measure their motivation (time 1)
- One group goes through gamification, one does not (the latter maybe goes through some event that is otherwise similar, but lacks the "gamification" element)
- You measure their motivation again (time 2)
Choice of analysis is a different issue. If you chose to use some version of the above suggested design, you could use multilevel regression or repeated measures ANOVA with time x group interaction predicting motivation. Correlation does not seem to be a relevant analysis with your research questions.
If you only have data on some group's motivation after they went through a gamification event, you can't say anything about the possible effects of gamification on motivation, and actually not run any analyses either (at least no correlations or regressions, if you only have one variable measured once).
| null | CC BY-SA 4.0 | null | 2023-03-21T08:43:11.923 | 2023-03-21T08:48:25.250 | 2023-03-21T08:48:25.250 | 357710 | 357710 | null |
610163 | 1 | null | null | 0 | 24 | I am studying relations between migration movements of different types of fish and the acidity (pH) of water. I am stuck on which statistical test I should use to find significant correlations and/or regressions.
The data are structured:
1.) Date column (between the start of 2022 and the start of 2023)
2.) pH (mean pH for every date)
3.) Fish data column (how many fish of one species traveled through, what we call a fishlift). This fishlift lets fish travel from one watercourse to another.
I am trying to test if the migration of fish (fish using the fishlift) is caused by changing pH. The data are highly skewed (as can be seen in the graph at the bottom of this post) and look highly parabolic when plotted against the acidity values (as can also be seen below). My data also contain a lot of zeros (since the fish didn't use the fishlift every day).
Can someone explain which type of statistical analysis I should use?
[](https://i.stack.imgur.com/ToIzY.png)
[](https://i.stack.imgur.com/ZeGOC.png)
| Using statistics to find significant relations between fishmigration movements and acidity in water | CC BY-SA 4.0 | null | 2023-03-21T09:05:36.800 | 2023-03-21T09:09:37.793 | 2023-03-21T09:09:37.793 | 383286 | 383286 | [
"r",
"skewness",
"ecology"
] |
610164 | 1 | null | null | 0 | 6 | I am working with 3 cohorts, 2 of which received a 5-item measure (say measure A) and one that received a modified version of the same measure (measure B). Essentially, measure B is comprised of same 5 items but they are positively phrased and the Likert scales are reversed. Example (made up to convey point):
I find apples bitter (1- strongly agree ... 5-strongly disagree)
/vs/
I find apples sweet (1- strongly disagree ... 5- strongly agree)
My understanding is that these changes can affect the reliability of the scale, and I am wondering if there is a way to check this, hoping that I can still compare mean scores across time.
So far, I have combined the 5 items into a composite scale (after having reverse-coded the modified items) and checked Cronbach's alpha for each dataset. They look good. I then compared alpha scores using a chi-square test, finding no difference between the scores. Is this enough to justify comparing mean scores? Or do I need to take another approach?
Thanks and appreciate suggestions.
| Comparing scale worded slightly differently across 3 datasets | CC BY-SA 4.0 | null | 2023-03-21T09:17:52.700 | 2023-03-21T09:17:52.700 | null | null | 383736 | [
"reliability",
"cronbachs-alpha"
] |
610166 | 2 | null | 610086 | 0 | null |
- Unless it's particularly difficult to test a larger sample, you should generally opt for the sample size required for the less powerful test. That is, do the power calculations for both tests, and go with whichever sample size is biggest. There's no such thing as "too much power".
- There's no need for p-value correction here. Correction is necessary where you're looking at multiple outcome measures, and would deem the test to be "successful" if any one of them were to improve, since that would increase the chances of a false positive. I'm defining "false positive" here to be "concluding that the change has increased conversion rates, when it in fact hasn't". The fact that you're also looking at load times doesn't increase the probability of that happening.
Other notes:
- It's almost always possible that your test could reduce conversion rates, so I don't think a one-tailed test on conversions is appropriate.
- You could use a one-tailed test for load times, if you assume there's no way that people could accidentally improve them.
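As a sketch of the power calculation in point 1, here is the standard normal-approximation sample size for comparing two proportions (the 10% and 12% conversion rates below are made up for illustration):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided test of two proportions."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)
    z_b = nd.inv_cdf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

# e.g. detect a lift in conversion rate from 10% to 12%
n_conv = n_per_group(0.10, 0.12)
print(n_conv)  # on the order of 3800-3900 per group
```

For the load-time metric you would use the analogous two-sample t-test calculation instead, then run the test with whichever required n is larger.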
| null | CC BY-SA 4.0 | null | 2023-03-21T10:28:05.523 | 2023-03-21T10:28:05.523 | null | null | 42952 | null |
610168 | 1 | null | null | 0 | 18 | My goal is to compare two group means to see which one is greater and conclude that my approach is doing better than the second approach.
I run t-test and I have the following results
```
t = 0.36702, df = 360, p-value = 0.7138
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-0.04247427 0.06196598
sample estimates:
mean of x mean of y
0.08095028 0.07120442
```
The p-value is nonsignificant, so I fail to reject the null hypothesis. But in the sample estimates the mean of x (my approach) is greater than the mean of y. It is true that the difference in means is not big, but can it still be taken into account? Is my interpretation correct? Is there an alternative view for drawing a conclusion about the means of the two groups?
Thanks in advance
| compare the mean difference of two groups | CC BY-SA 4.0 | null | 2023-03-21T10:32:17.833 | 2023-03-21T10:32:17.833 | null | null | 313294 | [
"t-test",
"mean"
] |
610170 | 2 | null | 192176 | 1 | null | What you aim to do is a [chunk test](https://stats.stackexchange.com/questions/27429/what-are-chunk-tests). This is minimally different in spirit from the usual t-test or F-test of a single coefficient. However, instead of comparing nested models where the smaller model just drops one coefficient, the smaller model drops several. This can go to the extreme of dropping all coefficients except the intercept. This is the “overall” F-test that often gets reported in software summaries of regression models.
However, keep in mind that fitting a model, dropping features with insignificant coefficients (or an insignificant chunk), and fitting a new model on the remaining features distorts all of your downstream inferences. This relates to points 1, 2, 3, 4, and 7 [here](https://www.stata.com/support/faqs/statistics/stepwise-regression-problems/).
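A minimal sketch of a chunk test on simulated data (numpy only; the F statistic compares the residual sums of squares of the nested OLS fits, where the "chunk" is the set of dropped coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1, x2, x3 = (rng.normal(size=n) for _ in range(3))
y = 2 * x1 + 1.5 * x2 + 1.5 * x3 + rng.normal(size=n)

def rss(X, y):
    # residual sum of squares of an OLS fit of y on the columns of X
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

X_full = np.column_stack([np.ones(n), x1, x2, x3])
X_red = np.column_stack([np.ones(n), x1])   # drop the {x2, x3} chunk

q = X_full.shape[1] - X_red.shape[1]        # 2 coefficients dropped
df = n - X_full.shape[1]                    # residual df of the full model
F = ((rss(X_red, y) - rss(X_full, y)) / q) / (rss(X_full, y) / df)
print(F)  # compare against an F(q, df) reference distribution
```

The p value would come from the F(2, 196) distribution (e.g. `scipy.stats.f.sf(F, q, df)`); in this simulated example F is far above the roughly 3.0 critical value, so the chunk is jointly significant.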
| null | CC BY-SA 4.0 | null | 2023-03-21T10:57:30.807 | 2023-03-21T10:57:30.807 | null | null | 247274 | null |
610172 | 2 | null | 210641 | 0 | null | You can apply whatever rule you want. If you can validate that it works (e.g., holdout set, bootstrap, cross validation), then this kind of stepwise regression [could be competitive with other predictive modeling techniques](https://stats.stackexchange.com/q/594106/247274). More typical approaches would be based on Akaike or Bayesian information criteria, however, such as what is performed in the `stepAIC` function in R software.
Keep in mind, though, that extreme care is required to have the desired properties. Software might report values like adjusted $R^2$ and coefficient confidence intervals, but these are misleading. This is related to points 1, 2, 3, 4, and 7 [here](https://www.stata.com/support/faqs/statistics/stepwise-regression-problems/).
| null | CC BY-SA 4.0 | null | 2023-03-21T11:11:34.343 | 2023-03-21T11:11:34.343 | null | null | 247274 | null |
610173 | 1 | null | null | 0 | 69 | I am trying to predict the revenues of a portfolio of items. I want to simulate the revenues in a particular market situation in which they might increase. Each item's revenues is made up of 3 components:
Revenues per item = constant * percentage_increase_for_A(item) * A(item) + constant * percentage_increase_for_B(item) * B(item) + constant * percentage_increase_for_C(item) * C(item)
As you can see, each of the percentage increases depends on the item since each item has a different one depending on item type and region. A, B and C are just the current base values of each item that contribute to revenues.
What I plan to do is simulating each item independently by assigning normal distributions to each of the percentage increases (I will extract random values from these normal distributions in the iterations) and then what I expect to have is as many normal distributions for Revenues per item as the number of items. Then, I can sum the mean and the std of these normal distributions and get a normal distribution for the total revenues. Would this be a correct approach? Can I just sum the Revenues per item normal distributions to get a normal distribution in the end? I am not sure if it is true that if I extract randomly from normal distributions each of the percentage increases then each of the Revenues per item will also be a normal distribution.
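As a sanity check of this plan, here is a small sketch with made-up constants (two items, plain Python). One caveat to the plan as stated: each per-item revenue is a fixed linear combination of normal draws and hence normal, and for independent items the means add directly, but it is the variances, not the standard deviations, that add.

```python
import random

random.seed(1)

# Hypothetical items: base values (A, B, C) and, for each component,
# a normally distributed percentage increase given as (mean, sd).
items = [
    {"bases": (100.0, 50.0, 25.0), "pct": [(0.05, 0.01), (0.03, 0.02), (0.10, 0.03)]},
    {"bases": (200.0, 10.0, 40.0), "pct": [(0.02, 0.005), (0.08, 0.01), (0.04, 0.02)]},
]
CONST = 1.0  # placeholder for the model's constant

def simulate_total():
    total = 0.0
    for it in items:
        for base, (mu, sd) in zip(it["bases"], it["pct"]):
            total += CONST * random.gauss(mu, sd) * base
    return total

draws = [simulate_total() for _ in range(20000)]
sim_mean = sum(draws) / len(draws)

# Analytic mean: a linear combination of normals is normal, and means add
true_mean = sum(CONST * mu * base
                for it in items
                for base, (mu, _) in zip(it["bases"], it["pct"]))
print(sim_mean, true_mean)  # the simulated mean should be close to the analytic one
```

The total variance would likewise be the sum of the per-component variances, `sum((sd * base)**2)`, assuming all percentage increases are independent.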
| Monte Carlo simulations and sum of normal distributions | CC BY-SA 4.0 | null | 2023-03-21T11:39:36.190 | 2023-03-21T22:34:22.520 | null | null | 383746 | [
"normal-distribution",
"simulation",
"monte-carlo"
] |
610174 | 1 | 610288 | null | 3 | 430 |
## Background
I'm working on a tabular data model that performs a binary classification. The model has recently started underperforming and I'd like to know if that's due to a drift in the feature distribution of the model.
The features in the model haven't changed. The model hasn't been retrained. There hasn't been a change to the data collection. With this in mind, I assume changes are due to a change in the underlying feature distribution. I'd like to quantify this change over a series of datasets that I have.
The datasets are large, on the order of $10,000-100,000$ rows in size with $250$ columns.
I've heard about KL Divergence and think it could be a good measure of the difference between the feature distributions of the new and old datasets.
## Question
According to [Wikipedia](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence?useskin=vector#Statistics):
>
In mathematical statistics, the Kullback–Leibler divergence (also called relative entropy and I-divergence), denoted $D_{\text{KL}}(P\parallel Q)$, is a type of statistical distance: a measure of how one probability distribution P is different from a second...
I've [read](https://datascience.stackexchange.com/questions/15597/the-best-way-to-calculate-variations-between-2-datasets) that [KL Divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence?useskin=vector#Statistics) is a good way to calculate the difference between multi-variable datasets, please see related questions for an example.
However, as KL Divergence explains the difference between probability distributions, I assume I'll have to transform my datasets into probability distributions of some form.
I haven't been able to find a standard way of doing this, but I have two ideas:
- Compare feature by feature. Bin feature values and use the bin size to calculate the probability of each bin. Use the bins in both datasets to calculate KL Divergence.
- Compare everything. Cluster all the samples in each dataset and get a probability for each cluster. Use those cluster probabilities to calculate KL Divergence.
Are these ideas sensible? Or is there a standard way of calculating KL Divergence that I haven't been able to find yet?
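For idea 1, a minimal per-feature sketch (shared bins over both samples, with additive smoothing to avoid division by zero; the bin count and smoothing constant are arbitrary choices here):

```python
import math
import random

def kl_from_samples(x, y, bins=20, eps=1e-9):
    """Histogram estimate of KL(P_x || P_y) over shared bins, with smoothing."""
    lo = min(min(x), min(y))
    hi = max(max(x), max(y))
    width = (hi - lo) / bins or 1.0
    def hist(s):
        counts = [0] * bins
        for v in s:
            i = min(int((v - lo) / width), bins - 1)  # clamp the max into the last bin
            counts[i] += 1
        total = len(s) + eps * bins
        return [(c + eps) / total for c in counts]
    p, q = hist(x), hist(y)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

random.seed(42)
a = [random.gauss(0, 1) for _ in range(5000)]  # "old" feature values
b = [random.gauss(0, 1) for _ in range(5000)]  # same distribution, no drift
c = [random.gauss(2, 1) for _ in range(5000)]  # shifted distribution, drift
print(kl_from_samples(a, b))  # near 0
print(kl_from_samples(a, c))  # clearly positive
```

A closely related drift measure built on the same binned form is the Population Stability Index, which is symmetric in the two samples and is commonly used for exactly this per-feature monitoring.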
Related Questions:
- Data Science Stack Exchange question on calculating the variation between datasets.
| How do you find the KL Divergence between two multi-variable datasets? | CC BY-SA 4.0 | null | 2023-03-21T11:53:34.837 | 2023-05-14T11:11:04.740 | 2023-03-22T17:42:21.943 | 363857 | 363857 | [
"probability",
"distributions",
"kullback-leibler",
"divergence",
"concept-drift"
] |
610175 | 2 | null | 610108 | 1 | null | A general form of expressing a model is
$$Y_i = f(\textbf{X}_i,\boldsymbol\beta,\epsilon_i)$$
The function $f$ expresses the random variable $Y_i$ in terms of a random latent variable $\epsilon_i$, a fixed/known regressor variable $\textbf{X}_i$, and some distribution parameters $\boldsymbol\beta$.
Note:
- Here the subscript $i$ refers to the observation within the sample. In your question you have a subscript relating to the elements in the vector $\textbf{X}^{T} = (X_1, X_2, \ldots, X_p)$. You could also write it with two subscripts, $\textbf{X}_i^{T} = (X_{1i}, X_{2i}, \ldots, X_{pi})$, where $\textbf{X}$ is a matrix whose row index relates to the observation and whose column index relates to the regressor/feature.
- It is the random part $\epsilon_i$ that is assumed to be independent. (often, more complex models can assume some dependency between the different $\epsilon_i$)
- If you perform an experiment then the regressor variables $\textbf{X}_i$ can be dependent. For example you might repeat a measurement with the same values for several $\textbf{X}_i$. But what needs to be independent is the random part $\epsilon_i$.
Example:
Say we have
$$Y_i = Q(\mu = a + b x_i, p= \epsilon_i)$$
where $\epsilon_i$ is a uniformly distributed variable and $Q(\mu,p)$ is the quantile function of the Poisson distribution, with $\mu$ the mean and $p$ the quantile.
Then some simulated data, with parameters $a = 10$ and $b=1$, could look like
```
x epsilon y
[1,] 10 0.26550866 17
[2,] 10 0.37212390 18
[3,] 10 0.57285336 21
[4,] 10 0.90820779 26
[5,] 20 0.20168193 25
[6,] 20 0.89838968 37
[7,] 20 0.94467527 39
[8,] 20 0.66079779 32
[9,] 30 0.62911404 42
[10,] 30 0.06178627 31
[11,] 30 0.20597457 35
[12,] 30 0.17655675 34
```
[](https://i.stack.imgur.com/5JDPZ.png)
An r-code to compute the above numbers is:
```
set.seed(1)
a = 10
b = 1
x = c(10,10,10,10,20,20,20,20,30,30,30,30)
epsilon = runif(12) # generate the noise part from a uniform distribution
y = qpois(epsilon,a + b * x) # transform to the y variable using the quantile function
plot(x,y)
cbind(x,epsilon,y)
```
| null | CC BY-SA 4.0 | null | 2023-03-21T12:04:17.590 | 2023-03-21T12:04:17.590 | null | null | 164061 | null |
610176 | 1 | null | null | 1 | 46 | I have a question about Difference-in-Difference estimation. It is well-known that the key identification assumption in Diff-in-Diff is the parallel trends assumption that says that in the absence of treatment, the evolution of the outcomes in the treated group would move in parallel to the control group. Of course this is a counterfactual claim.
I have been reading papers in the difference-in-differences literature and often encounter the assumption that if treatment timing is as good as randomly assigned, then parallel trends will hold. This random-treatment-timing assumption is said to be stronger than parallel trends. My question is: can someone explain the intuition behind this?
I understand that there is selection into the treated group, which implies that that group can have an initially different level of the outcome. In this way, one could say that having particular characteristics makes treatment more likely. Is it correct to say that since the random treatment timing assumption states that the probability of treatment is the same across all permutations of treatment timing, it implies that selection into the treated group must be the same over time, and that hence the parallel trends assumption holds? If so, how can one show this?
Any suggestions would be appreciated!
| Difference in Difference and Random Treatment Timing | CC BY-SA 4.0 | null | 2023-03-21T12:11:06.540 | 2023-03-21T12:11:06.540 | null | null | 339190 | [
"causality",
"difference-in-difference"
] |
610178 | 2 | null | 593470 | -1 | null | Since nobody could answer my question, I asked the AI and got a reasonable response :). It does not make me happy, but it sounds good enough for me to let this rest. Since the answer is lengthy I highlighted the juicy bits.
"Hello, this is Bing. I’m happy to help you with your question.
According to 1, the binomial test is a way to test hypotheses about the probability of success in a binomial experiment. The test statistic for the binomial test is:
b(x; n, p) = nCx * p^x * (1 - p)^(n - x)
where x is the number of successes, n is the number of trials, p is the hypothesized probability of success, and nCx is the binomial coefficient.
The binomial test can be one-tailed or two-tailed depending on whether you want to test if the observed probability is less than, greater than, or different from the hypothesized probability.
The binomial test does not have a standard error or a signal-to-noise ratio like other tests because it does not rely on any assumptions about the sampling distribution of the test statistic. Instead, it calculates the exact probability of observing x successes out of n trials under the null hypothesis.
However, when n is large and p is close to 0.5, the binomial distribution can be approximated by a normal distribution with mean np and variance np(1-p). In this case, you can use a z-test of proportions as an approximation of the binomial test. The z-test has a standard error and a signal-to-noise ratio given by:
SE = sqrt(p(1-p)/n)
SNR = (p - p0) / SE
where p0 is the hypothesized probability under the null hypothesis.
When n is small or p is far from 0.5, you can use a correction factor to adjust for the discrepancy between the binomial and normal distributions. One common correction factor is called Yates’ continuity correction2, which subtracts 0.5 from |p - p0| before calculating SNR.
I hope this helps you explain your students why we have different types of test statistics for different types of tests.
sources
- statisticshowto.com
- statisticshowto.com
- bing.com
- en.wikipedia.org
- byjus.com"
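As a quick numerical illustration of the exact binomial test versus the z-approximation described in the quoted reply (my own sketch, assuming `scipy` is available):

```python
import math
from scipy import stats

n, x, p0 = 100, 60, 0.5
# exact two-sided binomial test
exact_p = stats.binomtest(x, n, p0).pvalue
# z-test of proportions as a normal approximation
p_hat = x / n
se = math.sqrt(p0 * (1 - p0) / n)   # standard error under the null
z = (p_hat - p0) / se               # the "signal-to-noise ratio"
approx_p = 2 * stats.norm.sf(abs(z))
print(exact_p, approx_p)
```

For n = 100 the two p-values are already close; the gap widens for small n or p far from 0.5.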
| null | CC BY-SA 4.0 | null | 2023-03-21T12:33:15.457 | 2023-03-21T12:33:15.457 | null | null | 5222 | null |
610181 | 2 | null | 381974 | 0 | null | >
My question now is, what does it mean that it is fully specified by its first two moments? I mean the higher order moments are not zero?
While it is true that the normal distribution has an infinite number of higher-order moments, what makes it special is that it has only a finite number of non-zero cumulants. Specifically, Marcinkiewicz (1935) showed that the normal distribution is the only distribution whose cumulant generating function is a polynomial, i.e. the only distribution having a finite number of non-zero cumulants.
The cumulant generating function for $X\sim\mathcal N(\mu,\sigma^2)$ is
$$
K_X(t)=\log M_X(t)=\mu t+\sigma^2 t^2/2=\kappa_1t+\kappa_2t^2/2.
$$
Through the use of Faà di Bruno's formula, the moments of the normal distribution can be expressed in terms of the cumulants by
$$
\mathsf EX^n=\sum _{k=1}^{n}B_{n,k}(\kappa _{1},\ldots ,\kappa _{n-k+1}).
$$
What this shows is that once you know the mean $(\mu)$ and the variance $(\sigma^2)$ all other higher-order moments of the normal distribution are determined; thus it is characterized by these first two moments.
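A quick symbolic check of this characterization (a sketch, assuming `sympy` is available):

```python
import sympy as sp

mu, t = sp.symbols('mu t')
sigma = sp.symbols('sigma', positive=True)

# cumulant generating function of N(mu, sigma^2): a degree-2 polynomial in t
K = mu * t + sigma**2 * t**2 / 2
M = sp.exp(K)                       # moment generating function

# raw moments are derivatives of M at t = 0, so they are fixed by mu and sigma
EX2 = sp.diff(M, t, 2).subs(t, 0)
EX4 = sp.diff(M, t, 4).subs(t, 0)
print(sp.expand(EX2))               # mu**2 + sigma**2
print(sp.expand(EX4))               # mu**4 + 6*mu**2*sigma**2 + 3*sigma**4
```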
| null | CC BY-SA 4.0 | null | 2023-03-21T13:12:56.283 | 2023-03-21T13:12:56.283 | null | null | 105773 | null |
610182 | 1 | null | null | 0 | 26 | I want to do a regression in R (RStudio), with equation:
$ y = A + Bc^x$.
Any idea on how to do such regression? Thank You
| Linear Regression y=A + Bc^x | CC BY-SA 4.0 | null | 2023-03-21T13:38:53.110 | 2023-03-21T13:54:06.580 | null | null | 383756 | [
"regression"
] |
610183 | 1 | null | null | 0 | 34 | I have seen a few similar questions on here but none of them had answers and I cannot find an answer anywhere. I have fitted two models with clm() in R. The proportional odds assumption was not satisfied by some variables. I am wondering how to interpret the output for the variables that did not satisfy the assumption.
Taking this example from [here](https://cran.r-project.org/web/packages/ordinal/vignettes/clm_article.pdf):
[](https://i.stack.imgur.com/Hn78J.png)
How would I interpret the "Threshold coefficients"?
In particular:
- How can I report statistical significance? (can I find p values or something else??)
- Can odds ratios be calculated and used as normal?
| Interpreting nominal effects variables in partial proportional odds clm | CC BY-SA 4.0 | null | 2023-03-21T13:53:56.593 | 2023-03-21T13:53:56.593 | null | null | 382486 | [
"r",
"multiple-regression",
"regression-coefficients",
"model",
"ordinal-data"
] |
610184 | 1 | 610643 | null | 5 | 159 | BACKGROUND: I have huge (>10000x10000px) pathology slides images with different sizes. Here is an example:
[](https://i.stack.imgur.com/TSAjD.png)
You can find this specific example [here](https://cancer.digitalslidearchive.org/#!/CDSA/luad/TCGA-05-4244). These images are pieces of tissue samples (biopsies) that are scanned with a high-resolution device. Pathologists use them to look at the morphology of the cells and tell whether a cell is malignant (cancerous) or benign, and to identify other types of structures (blood vessels, muscle cells, etc).
These images have a very high resolution but, as you may see in the image, not all regions have information in them (there are regions without tissue here and there).
Hence, there are several tools to process these large images, and most of them use "tiling", which consists of creating patches of regions and processing them individually. Like:
[](https://i.stack.imgur.com/3ODr7.png)
From this [paper](https://link.springer.com/chapter/10.1007/978-3-030-83332-9_2)
One solution for processing this kind of image is the one used in this [git](https://github.com/mahmoodlab/CLAM). This tool performs feature extraction on every non-blank patch and creates a vector of `1024` features for every patch. Finally, all patches are concatenated into a matrix for every image with shape `1024xN` (`N` being the number of patches).
OBJECTIVE: I would like to use these feature matrices together with some clinical data associated with each image and build an ML classifier that uses both the clinical data and the feature matrix as input to make a prediction. Hence, I am looking for a way to merge these two datasets.
However, since the number of patches differs between images, I would like to find a method that reduces the shape of the matrix (`1024xN`) to the same shape for all the extracted features (`1024xC`, `C` being a common dimension for all the output matrices).
PRELIMINARY TRY 1: Here, I am tempted to select some `N` patches randomly and use them, but I would like to know if there is any method to select the top-K of the `N` patches.
PRELIMINARY TRY 2: I was looking at [this answer](https://stats.stackexchange.com/a/193101/383740) but I think JLT may not be the solution, since it reduces the number of features (here the `1024`) and not the number of observations (here `N`). However, I see that this method [also](https://search.r-project.org/CRAN/refmans/Rdimtools/html/linear_RNDPROJ.html) returns a projection matrix, but I am unsure whether this recapitulates the information in the initial one.
PRELIMINARY TRY 3: This [git](https://github.com/mahmoodlab/PORPOISE) solves the problem, but I would like to compare this method to an ML model (random forest, etc.)
TLDR: Hence, I am looking for a method that could select the top-K most informative (not sure if that is the correct word here) observations. Here "informative" would mean the ones with the least correlation with the rest of the patches, but it could also be any method that summarises the information differently. Any ideas how this could be done?
EDIT: Add more context on what means "informative" in this case
EDIT2: Add some background on the type of source of data I am using and structured the post for better understanding
EDIT 3: Added a bounty for the task and add a possible solution using Deep Learning. However, I would like to use ML tabular models for comparison.
| Reduce dimensions of matrices with different shapes: Methods? | CC BY-SA 4.0 | null | 2023-03-21T13:55:16.347 | 2023-03-24T19:57:57.407 | 2023-03-24T14:40:21.207 | 383740 | 383740 | [
"r",
"machine-learning",
"feature-selection",
"dimensionality-reduction"
] |
610185 | 1 | null | null | 1 | 33 | Why is the square of the average of a set of positive numbers not always bigger than the average of the squares of the same set of positive numbers?
I am not talking about in an asymptotic case.
| Why is the square of the average of a set of positive numbers not always bigger than the average of the squares of the same set of positive numbers? | CC BY-SA 4.0 | null | 2023-03-21T14:07:13.273 | 2023-03-21T14:22:39.973 | 2023-03-21T14:22:39.973 | null | null | [
"mean",
"set-theory"
] |
610186 | 2 | null | 481815 | 0 | null | A slightly modified version of your question can be answered using the tools in the following paper: Aue, Alexander, et al. "Break detection in the covariance structure of multivariate time series models." (2009): 4046-4087. This paper can be found here: [https://projecteuclid.org/journals/annals-of-statistics/volume-37/issue-6B/Break-detection-in-the-covariance-structure-of-multivariate-time-series/10.1214/09-AOS707.full](https://projecteuclid.org/journals/annals-of-statistics/volume-37/issue-6B/Break-detection-in-the-covariance-structure-of-multivariate-time-series/10.1214/09-AOS707.full).
Essentially this paper allows you to test for a changepoint in the covariance matrix of the two series.
| null | CC BY-SA 4.0 | null | 2023-03-21T14:28:22.797 | 2023-03-21T14:28:22.797 | null | null | 234732 | null |
610187 | 2 | null | 609958 | 1 | null | If you're drawing a DAG of the variable you have measured in this scenario, it's just the two of them: `Class` affects `Data`.
[](https://i.stack.imgur.com/hgfdl.png)
If you're representing the mixture process (which depends on the unobserved choice of `Component`):
[](https://i.stack.imgur.com/eMSf9.png)
This isn't a very satisfying answer, but that's because DAGs don't represent the details of the generative model, just the presence of dependencies.
[Image source](https://dreampuf.github.io/GraphvizOnline/#digraph%20G%20%7B%0A%20%20%20%20rankdir%3DLR%0A%20%20%20%20Component%5Bshape%3D%22rectangle%22%5D%0A%20%20%20%20Class%20-%3E%20Component%3B%0A%20%20%20%20Component%20-%3E%20Data%3B%0A%20%20%20%20Class%20-%3E%20Data%3B%0A%7D)
| null | CC BY-SA 4.0 | null | 2023-03-21T14:40:05.647 | 2023-03-21T14:40:05.647 | null | null | 42952 | null |
610188 | 2 | null | 610072 | 1 | null | I've never heard of that plot. It may already exist and have a name, however, I suspect it would not be persuasive to a reasonable skeptic, if it's sufficiently esoteric. Simply based on human psychology, people will find things they're familiar with persuasive and be skeptical of someone trying to persuade them of something based on something they've never heard of. For what it's worth, I do think it's interesting—I like plots and thoroughly exploring data to ensure I understand them fully.
In your actual case, I gather you want to argue for the simpler model even though the AIC for the full model is ever so slightly lower. I have heard various times that an AIC that isn't more than 1% lower shouldn't be trusted, but I certainly don't have a citation for that.
Instead, I think there may be a simpler solution. You state that the models are nested. You can perform a nested model test / simultaneous test of all added variables in the full model. For logistic regression, that would be a likelihood ratio test. In R, the code would be something like:
```
anova(simple.model, full.model, test="LRT")
```
[@Glen_b has shown](https://stats.stackexchange.com/a/97309/) that a lower AIC corresponds to a p-value of approximately .16, so such a test is unlikely to be significant by conventional standards.
| null | CC BY-SA 4.0 | null | 2023-03-21T15:00:31.227 | 2023-03-21T15:00:31.227 | null | null | 7290 | null |
610191 | 1 | null | null | 1 | 13 | I'm stuck on a problem I'm trying to "translate" into ML terms so as to dig deeper into the literature.
Setup
I have n samples for which I generate k $\in (0, 40]$ different low-dimensional representations ($\mathbf{x} \in \mathbb{R}^{7}$) using various simulation/scoring methods.
I have a ground-truth ranking of all n samples.
Problem
Let the score of a particular representation be some linear combination ($\sum_i w_i x_i$) of the 7 features. I want to learn a set of weights $\mathbf{w}$ such that the best representation of each sample ranks according to my ground truth. $\mathbf{w}$ is very important here as I want the relative contribution/importance of these features for downstream tasks.
Question
What sort of problem is this? There's an aspect of learning-to-rank but it's not quite formulated in the same way as many of your typical ltr problems are...
Any help/papers/tips are much appreciated!
| Is this a learning to rank problem? | CC BY-SA 4.0 | null | 2023-03-21T15:23:11.330 | 2023-03-21T15:33:00.633 | 2023-03-21T15:33:00.633 | 1352 | 383761 | [
"regression",
"machine-learning",
"ordinal-data",
"ranking",
"supervised-learning"
] |
610192 | 1 | null | null | 0 | 87 | How to calculate Signal to Noise Ratio?
Is the mean to stdev ratio valid to be used as the basis for calculating SNR?
What if a dataset or sample is all zeros, and then 1 random value other than 0 is added, which is considered noise? For example it becomes `[0,0,0,...,0,1]`: the dataset is considered a clean signal, with the last element considered to be noise.
The mean and stdev are both close to 0 because all the data is about 0, and if the two values are divided the result will be close to 1. SNR=1 means that the noise and signal have a balanced strength, but in reality the dataset does not have a lot of noise.
Now take the same kind of case, which is also a clean signal: what if the dataset has a value of 2 everywhere? It will look like
`[2,2,2,2,...,2,1]`
This means the stdev is close to 0 while the mean is close to 2; if we divide them, the SNR will be far bigger, for example 1.99/0.001.
Why do the two cases have different SNR values even though both samples are clean signals?
EDIT:
I tried the formula in a program, but it didn't give me the same value:
$$\frac{\text{Var}(S) + E[S]^2}{\text{Var}(N) + E[N]^2}$$
```
import numpy as np
signal = np.array([0, 0, 0, 0, 0, 0, 1])
expected = np.mean(signal)
noise = expected - signal
print((np.var(signal)+(np.square(np.mean(signal))))/(np.var(noise)+(np.square(np.std(signal)))))
signal = np.array([2, 2, 2, 2, 2, 2, 1])
expected = np.mean(signal)
noise = expected - signal
print((np.var(signal)+np.square(np.mean(signal)))/(np.var(noise)+np.square(np.std(signal))))
```
OUTPUT:
```
0.5833333333333333
14.583333333333334
```
| Do calculate SNR based on mean and stdev need normalization? | CC BY-SA 4.0 | null | 2023-03-21T15:23:31.850 | 2023-03-21T21:34:28.437 | 2023-03-21T19:19:47.233 | 372152 | 372152 | [
"python",
"mean",
"standard-deviation",
"signal-detection"
] |
610194 | 1 | null | null | 0 | 17 | I am using `PROC GENMOD` for log binomial multiple regression models having several class variables. I'd like to calculate incidence rates per 1000 from the regression results, for categories of the class variables. Is this the correct way to do it?
Incidence (per 1000) = EXP(RegressionIntercept) * EXP(ParameterEstimate) * 1000.
Thanks for the replies.
| Is this way of calculating the incidence rate from log binomial regression estimates correct? | CC BY-SA 4.0 | null | 2023-03-21T15:56:21.103 | 2023-03-21T15:58:43.410 | 2023-03-21T15:58:43.410 | 56940 | 382844 | [
"genmod"
] |
610195 | 1 | null | null | 0 | 16 | I have some data with the traffic volumes for TCP/IP connections, flagged by direction. Each count is the total for a 15 second period, eg.
[](https://i.stack.imgur.com/XtjVK.png)
Some connections have events (periods of exceptional higher or lower traffic) on them, for example:
[](https://i.stack.imgur.com/FmqfP.png)
and
[](https://i.stack.imgur.com/MLzVr.png)
I'm trying to identify which connections are affected by events and what their duration is.
I'm thinking that the `forecast` package's `tsoutliers()` function might be useful for this, but I'm struggling to get reasonable results from it.
For example, for the two images above, tsoutliers() reports:
```
[[1]]$index
[1] 90 91
[[1]]$index
[1] 84 85 86 87 88 89 90 91
```
Which is plausible for the second, but not for the first.
I'd be most grateful for any suggestions of how I might get better reporting of outliers. I realise `fable` seems to have replaced `forecast`, but I'm not sure that `tsoutliers()` has been migrated.
Sample code:
```
x =
testData %>%
group_by( ToTpf ) %>%
summarise( outliers = list(tsoutliers(ts(count)))
) %>%
ungroup()
```
Sample data:
```
structure(list(ToTpf = c(FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE,
FALSE, TRUE), count = c(2332L, 5902L, 2562L, 6356L, 2635L, 6610L,
2835L, 7050L, 2490L, 6410L, 2387L, 6241L, 2786L, 7128L, 2527L,
6387L, 2361L, 5862L, 2658L, 6279L, 2849L, 6981L, 2457L, 6203L,
2624L, 6572L, 2512L, 6307L, 2391L, 5984L, 2761L, 6720L, 2904L,
7268L, 2758L, 7008L, 2679L, 7139L, 2497L, 6392L, 2952L, 7302L,
2496L, 6348L, 2625L, 6779L, 2273L, 5953L, 2576L, 6448L, 2664L,
6578L, 2561L, 6350L, 2715L, 6553L, 2554L, 6500L, 2335L, 5990L,
2522L, 6425L, 2940L, 7325L, 3036L, 7441L, 2821L, 7109L, 2352L,
6006L, 2277L, 5732L, 2561L, 6415L, 2655L, 6623L, 2811L, 7072L,
2653L, 6380L, 2578L, 6612L, 2772L, 6842L, 2921L, 7265L, 2914L,
7392L, 3066L, 7816L, 2591L, 6515L, 2867L, 7315L, 3224L, 7668L,
2654L, 6701L, 2549L, 6498L, 2817L, 7087L, 2721L, 7289L, 2708L,
7078L, 2975L, 7356L, 3124L, 7729L, 2561L, 6732L, 2600L, 6958L,
3227L, 8048L, 2713L, 6584L, 2764L, 6766L, 2557L, 6454L, 3136L,
8075L, 2436L, 6168L, 3128L, 7942L, 2794L, 7171L, 2694L, 6567L,
2382L, 5956L, 2880L, 7340L, 3073L, 7972L, 3297L, 8323L, 2966L,
7296L, 3053L, 7369L, 2643L, 6320L, 2888L, 7316L, 2794L, 7084L,
2860L, 7072L, 2616L, 6436L, 2719L, 6712L, 2882L, 7387L, 2799L,
6956L, 2624L, 6461L, 3205L, 7852L, 2627L, 6731L, 1556L, 2354L,
1155L, 0L, 1504L, 147L, 1323L, 0L, 1627L, 0L, 1816L, 53L, 5418L,
10523L, 4242L, 10582L, 3764L, 9370L, 3681L, 9459L, 3658L, 9540L,
3046L, 7668L, 3288L, 8441L, 3211L, 8310L, 3823L, 9852L, 3117L,
8001L, 3471L, 8961L, 3590L, 9109L, 2807L, 7138L, 3171L, 8046L,
3735L, 9731L, 2930L, 7898L, 3339L, 8478L, 3473L, 8692L, 2925L,
7335L, 3426L, 8829L, 3010L, 7667L, 3490L, 8929L, 3333L, 8449L,
3275L, 8572L, 3408L, 8785L, 2893L, 7187L, 3114L, 7783L, 3603L,
8707L, 3019L, 7669L, 2994L, 7702L, 2774L, 7216L, 2687L, 6725L,
3350L, 8504L, 3133L, 8034L, 3045L, 7731L, 2813L, 7177L, 2996L,
7391L, 3070L, 7600L, 3426L, 8606L, 3406L, 8856L, 3025L, 7743L,
3095L, 8162L, 2924L, 7242L, 2851L, 7367L, 3449L, 8878L, 2994L,
7646L, 3863L, 9688L, 3238L, 8013L, 3987L, 9537L, 3102L, 7811L,
2714L, 6835L, 3057L, 7831L, 3235L, 8091L, 3028L, 7688L, 2965L,
7487L, 2680L, 6575L, 3301L, 8183L, 2760L, 7084L, 3071L, 7780L,
3306L, 8039L, 2825L, 7236L, 2804L, 7230L, 3317L, 8408L, 2804L,
6959L, 2673L, 6924L, 3264L, 8404L, 3198L, 8246L, 3203L, 8311L,
2869L, 7259L, 2982L, 7393L, 3330L, 8890L, 2888L, 7559L, 3355L,
8432L, 3445L, 8483L, 3540L, 8941L)), row.names = c(NA, -328L), class = c("tbl_df",
"tbl", "data.frame"))
```
| Outlier identification | CC BY-SA 4.0 | null | 2023-03-21T16:05:30.800 | 2023-03-21T16:05:30.800 | null | null | 286976 | [
"r",
"forecasting"
] |
610197 | 2 | null | 610192 | 1 | null | It seems you are looking for a mean-invariant version of SNR. Given the signal $S$ and the noise $N$, SNR is typically computed as $$\frac{E[S^2]}{E[N^2]}$$ This can also be written as $$\frac{\text{Var}(S) + E[S]^2}{\text{Var}(N) + E[N]^2}$$ where $\text{Var}(S)$ and $\text{Var}(N)$ are the variances of the signal and noise respectively. Therefore, the SNR depends on the mean value of the signal $E[S]$ and the mean value of the noise $E[N]$. Alternatively, you could compute SNR as $$\frac{\text{Var}(S)}{\text{Var}(N)}$$ However, this only takes into account the "spread" of the signal vs. the "spread" of the noise, and ignores their "DC value". In most cases, this is undesirable.
---
Here is a demonstration of this concept:
```
import numpy as np
np.random.seed(42)
# generate a random signal with mean 0 and the same signal with mean x
N = 500
x = 50
signal1 = np.random.randn(N)
signal2 = signal1 + x
# generate noise with mean 0 and mean x
noise1 = np.random.randn(N)
noise2 = noise1 + x
# print the SNR with and without the mean for both signals
print(
f"""Signal 1 (zero mean):
SNR with mean = {(np.var(signal1) + np.mean(signal1)**2)/(np.var(noise1) + np.mean(noise1)**2)}
SNR without mean = {np.var(signal1)/np.var(noise1)}
Signal 2 (mean x > 0):
SNR with mean = {(np.var(signal2) + np.mean(signal2)**2)/(np.var(noise2) + np.mean(noise2)**2)}
SNR without mean = {np.var(signal2)/np.var(noise2)}
"""
)
```
which yields the output
```
Signal 1 (zero mean):
SNR with mean = 1.0056515707726348
SNR without mean = 1.006669696804731
Signal 2 (mean x > 0):
SNR with mean = 0.99900428349472
SNR without mean = 1.0066696968047304
```
We see that, when we compute the SNR without the mean, as I did in the expression $$\frac{\text{Var}(S)}{\text{Var}(N)}$$ then the SNR doesn't change for two signals with different means (up to numerical precision). This is in contrast to computing the SNR with the mean, as I did in $$\frac{E[S^2]}{E[N^2]}$$
| null | CC BY-SA 4.0 | null | 2023-03-21T16:13:59.697 | 2023-03-21T21:34:28.437 | 2023-03-21T21:34:28.437 | 296197 | 296197 | null |
610199 | 1 | null | null | 2 | 121 | I need to extract results from a SEM, but I'm struggling to read the results using lavaan package in R. More specifically, I have 3 latent variables and would like to know how can i reconstruct them using the results from the SEM. Below you can find my model:
```
sem_model_100 <- '
Att_x =~ ATT_good_100 + ATT_important_100 + self_1_100 + self_2_100 + ATT_useful_100 # + satis_2_100
PBC_x =~ PBC_time_100 + PBC_space_100 + ATT_pleasant_100 + ATT_hygenic_100 #+ satis_3_100 # + satis_1_100
Soc_x =~ MN_friend_100 + MN_colleg_100 + MN_family_100 + MN_media_100 #+satis_4_100
#Covariances
self_1_100 ~~ ATT_pleasant_100
self_1_100 ~~ Intention_100
self_1_100 ~~ self_2_100
# Regresion - Structural
Intention_100 ~ Att_x + PBC_x + Soc_x
beh_avg ~ Intention_100 + PBC_x + PANT + dist_org
'
```
Here are the results of my regression and the standardized solution.
```
summary(fit.factors_4, rsquare = TRUE, standardized = TRUE, fit.measures = TRUE)
lavaan 0.6-12 ended normally after 172 iterations
Estimator ML
Optimization method NLMINB
Number of model parameters 56
Number of observations 110
Model Test User Model:
Standard Robust
Test Statistic 146.049 143.819
Degrees of freedom 109 109
P-value (Chi-square) 0.010 0.014
Scaling correction factor 1.016
Satorra-Bentler correction
Model Test Baseline Model:
Test statistic 667.874 490.248
Degrees of freedom 135 135
P-value 0.000 0.000
Scaling correction factor 1.362
User Model versus Baseline Model:
Comparative Fit Index (CFI) 0.930 0.902
Tucker-Lewis Index (TLI) 0.914 0.879
Robust Comparative Fit Index (CFI) 0.927
Robust Tucker-Lewis Index (TLI) 0.910
Loglikelihood and Information Criteria:
Loglikelihood user model (H0) -7168.644 -7168.644
Loglikelihood unrestricted model (H1) -7095.619 -7095.619
Akaike (AIC) 14449.287 14449.287
Bayesian (BIC) 14600.514 14600.514
Sample-size adjusted Bayesian (BIC) 14423.552 14423.552
Root Mean Square Error of Approximation:
RMSEA 0.056 0.054
90 Percent confidence interval - lower 0.028 0.026
90 Percent confidence interval - upper 0.078 0.076
P-value RMSEA <= 0.05 0.337 0.380
Robust RMSEA 0.054
90 Percent confidence interval - lower 0.026
90 Percent confidence interval - upper 0.077
Standardized Root Mean Square Residual:
SRMR 0.084 0.084
Parameter Estimates:
Standard errors Robust.sem
Information Expected
Information saturated (h1) model Structured
Latent Variables:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
Att_x =~
ATT_good_100 12.621 3.091 4.083 0.000 12.621 0.876
ATT_mprtnt_100 12.758 3.120 4.090 0.000 12.758 0.802
self_1_100 8.265 3.475 2.379 0.017 8.265 0.454
self_2_100 9.192 3.131 2.935 0.003 9.192 0.444
ATT_useful_100 -8.263 1.215 -6.801 0.000 -8.263 -0.498
PBC_x =~
PBC_time_100 21.448 2.904 7.384 0.000 21.448 0.789
PBC_space_100 18.144 2.437 7.444 0.000 18.144 0.650
ATT_plesnt_100 11.252 2.420 4.649 0.000 11.252 0.485
ATT_hygenc_100 -10.718 2.715 -3.947 0.000 -10.718 -0.399
Soc_x =~
MN_friend_100 25.834 1.844 14.007 0.000 25.834 0.935
MN_colleg_100 25.054 2.140 11.706 0.000 25.054 0.849
MN_family_100 19.312 2.646 7.298 0.000 19.312 0.624
MN_media_100 16.129 2.373 6.796 0.000 16.129 0.599
Regressions:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
Intention_100 ~
Att_x 1.119 1.164 0.961 0.336 1.119 0.062
PBC_x 4.216 1.706 2.471 0.013 4.216 0.234
Soc_x 3.673 2.069 1.775 0.076 3.673 0.204
beh_avg ~
Intention_100 0.118 0.077 1.529 0.126 0.118 0.145
PBC_x 3.406 1.522 2.238 0.025 3.406 0.233
PANT 17.226 4.268 4.036 0.000 17.226 0.415
dist_org -0.029 0.010 -3.064 0.002 -0.029 -0.304
```
Here the standardized solution
```
> standardizedsolution(fit.factors_4, type = "std.all",
+ se = TRUE, zstat = TRUE, pvalue = TRUE, ci = TRUE)%>%
+ filter(op == "~" | op == "=~") %>%
+ select(LV=lhs, Item=rhs, Coefficient=est.std, ci.lower,
+ ci.upper, SE=se, Z=z, 'p-value'=pvalue)
LV Item Coefficient ci.lower ci.upper SE Z p.value
1 Att_x ATT_good_100 0.876 0.730 1.021 0.074 11.807 0.000
2 Att_x ATT_important_100 0.802 0.632 0.971 0.086 9.278 0.000
3 Att_x self_1_100 0.454 0.138 0.770 0.161 2.815 0.005
4 Att_x self_2_100 0.444 0.179 0.709 0.135 3.285 0.001
5 Att_x ATT_useful_100 -0.498 -0.708 -0.288 0.107 -4.654 0.000
6 PBC_x PBC_time_100 0.789 0.631 0.947 0.081 9.796 0.000
7 PBC_x PBC_space_100 0.650 0.503 0.797 0.075 8.650 0.000
8 PBC_x ATT_pleasant_100 0.485 0.300 0.670 0.095 5.129 0.000
9 PBC_x ATT_hygenic_100 -0.399 -0.588 -0.210 0.096 -4.141 0.000
10 Soc_x MN_friend_100 0.935 0.871 0.999 0.032 28.802 0.000
11 Soc_x MN_colleg_100 0.849 0.744 0.954 0.053 15.876 0.000
12 Soc_x MN_family_100 0.624 0.477 0.771 0.075 8.323 0.000
13 Soc_x MN_media_100 0.599 0.446 0.753 0.078 7.650 0.000
14 Intention_100 Att_x 0.062 -0.069 0.194 0.067 0.926 0.355
15 Intention_100 PBC_x 0.234 0.005 0.464 0.117 2.001 0.045
16 Intention_100 Soc_x 0.204 0.026 0.383 0.091 2.242 0.025
17 beh_avg Intention_100 0.145 -0.002 0.293 0.075 1.929 0.054
18 beh_avg PBC_x 0.233 0.024 0.441 0.106 2.190 0.029
19 beh_avg PANT 0.415 0.247 0.582 0.086 4.843 0.000
20 beh_avg dist_org -0.304 -0.495 -0.113 0.097 -3.122 0.002
```
How can I calculate Att_x, PBC_x, Soc_x and beh_avg?
| Constructing latent variables in SEM | CC BY-SA 4.0 | null | 2023-03-21T16:24:44.987 | 2023-03-21T18:25:01.197 | null | null | 376081 | [
"r",
"predictive-models",
"structural-equation-modeling",
"lavaan"
] |
610201 | 1 | null | null | 0 | 63 | I got this question from an interview and failed to give an answer. I have been thinking about this for a while and would love some help. I think we have the following:
P(D (disease)) = 0.1%,
P(TP or TN) = 99%
And we want to know the following:
P(D | P)
which can be translated to: P(D and P)/P(P) or P(TP)/P(TP or FP). It seems impossible to maneuver the known to get to the unknown.
| The probability of having disease is 0.1%; the test for it has 99% accuracy. What is the probability of having the disease given positive test result? | CC BY-SA 4.0 | null | 2023-03-21T16:37:03.837 | 2023-03-21T18:55:24.497 | 2023-03-21T18:55:24.497 | 381101 | 381101 | [
"probability",
"self-study",
"bayesian"
] |
610202 | 1 | null | null | 1 | 52 | I read many articles about SHAP values and I get the general theory behind it. However, there's something that I have a difficulty with.
When we try to explain LR models, we explain them in terms of odds. For example: Males have two times the odds of females, while keeping everything else constant.
In SHAP summary plots, we can see the tendencies of variables and how they affect the outcome. However, do we have that "while keeping everything else constant" interpretation or not? From what I understood, that is not the case. Feel free to correct me and add to the theory to help me understand. Much appreciated!
| SHAP values vs logistic regression | CC BY-SA 4.0 | null | 2023-03-21T16:42:43.590 | 2023-03-21T16:42:43.590 | null | null | 383104 | [
"machine-learning",
"shapley-value"
] |
610204 | 1 | null | null | 0 | 57 | I have the following dataframe in R
```
Smoking population
yes group1
yes group3
yes group2
no group1
no group1
yes group3
no group2
yes group2
yes group3
no group1
no group1
no group3
yes group2
no group2
no group1
yes group1
yes group2
no group3
no group3
yes group1
no group3
```
I want to run Fisher's exact test on smoking frequency between the different population groups.
```
df <- read.table('Smokingpopulation.txt',header = T,sep = "\t")
df$Smoking <- as.factor(df$Smoking)
df$population <- as.factor(df$population)
str(df)
xtabs(~ Smoking + population, data = df )
population
Smoking group1 group2 group3
no 5 2 4
yes 3 4 3
```
This is a dummy input I created. How can I use Fisher's exact test (in R) to see whether there is some relationship in the frequency of smoking between the different groups?
Using the command below in R, I will only see the association between smoking frequency and the population overall, not among the individual groups within the population
```
fisher.test(df$Smoking,df$population)
```
| fisher exact test for groups within population | CC BY-SA 4.0 | null | 2023-03-21T17:00:41.090 | 2023-03-21T17:00:41.090 | null | null | 73395 | [
"r",
"fishers-exact-test"
] |
610205 | 2 | null | 610199 | 5 | null | The short answer is lavPredict().
The longer answer is that latent variables are not uniquely identified. We know about the relationship of latent variables with other variables. We don't know the actual values that each individual has - there are various methods for constructing a hypothetical latent variable, but they will give different answers, so don't treat predicted scores as absolute truth.
| null | CC BY-SA 4.0 | null | 2023-03-21T17:10:46.930 | 2023-03-21T17:10:46.930 | null | null | 17072 | null |
610206 | 2 | null | 610199 | 4 | null | Keep in mind that you should never use the extracted factors in an analysis with the same data set (because you would be over-fitting). But if you want to visualize them, or extract them from a different data set...I agree with [@jeremy-miles](https://stats.stackexchange.com/users/17072/jeremy-miles) that `lavPredict()` is the way. If you want to understand `lavPredict()` when it's run on a simple `cfa` model, you can run `summary()` on your data without any standardization, and then take the estimates, intercepts, and variances from the model fit. The output of `lavPredict()` will be a rescaled version of: `sum((Variable - intercepts)/variances*estimates)`.
| null | CC BY-SA 4.0 | null | 2023-03-21T17:12:21.050 | 2023-03-21T18:25:01.197 | 2023-03-21T18:25:01.197 | 288142 | 288142 | null |
610207 | 1 | null | null | 3 | 74 | I am trying to calculate power for 150 samples where 75 are going to be in one group and 75 in another. I tried using the pwr package in R to get the power where I used the following code:
```
pwr.anova.test(k=2, n =75, f=.1, sig.level=.05, power=NULL)
```
k for me equals 2 because the 150 samples are divided into two groups equally (75 each)
n which is the sample size is 75 for both groups
f is the effect size which I set to 0.1
significance level I set to 0.05
I get a number for power which is 0.23 which obviously is low. My question is, am I doing this correctly? I am new to power calculations and if someone could help me understand how to calculate power knowing the sample size or point me to a package that can correctly do this for my example, I would truly appreciate this.
| power analysis using r | CC BY-SA 4.0 | null | 2023-03-21T17:13:11.347 | 2023-06-03T07:43:18.167 | 2023-06-03T07:43:18.167 | 121522 | 383780 | [
"statistical-power"
] |
610210 | 1 | null | null | 5 | 474 | I have used the MATLAB regression learner application to do some stepwise regression with a 10-fold cross validation for feature selection. But now I want to code it myself and I'm confused about the algorithm!
So I divide my data into 10 folds and then train my model using 9 folds and test it on the 1 remaining fold. I do this 10 times. This gives me the performance of 10 linear regression models with different feature subsets. The regular way is to take the average of the 10 performance metrics (RMSE, MSE, R2,...) and decide which model performs the best.
But in my case the model is the same and the features are different. So I'm confused how to choose the best subset. Should I just compare the 10 performance metrics and choose the subset with the highest performance?
| How does cross validation work for feature selection (using stepwise regression)? | CC BY-SA 4.0 | null | 2023-03-21T17:28:29.923 | 2023-03-22T14:09:21.807 | 2023-03-22T13:17:10.197 | 509 | 383782 | [
"regression",
"cross-validation",
"feature-selection",
"model-selection",
"stepwise-regression"
] |
610212 | 1 | null | null | 0 | 20 | I want to measure the average effect of treatment D1 on money using a panel date like the one in the table below, with several units measured at different time periods, and either being treated more than once or being subjected to another treatment D2 as well.
Consider also that the only factors that affect money are D1, D2. When D1 or D2 are equal to 1, it means that at that time either D1 or D2 treatment was applied.
How can I estimate this, dealing with the effects of D2, the different periods and the repeated treatment D1?
Can I add 1 each time the treatment is done? For example, for unit 1 if the treatment column series D1 is [0, 0, 0, 0, 1, 0, 0, 1] can I transform it to [0, 0, 0, 0, 1, 1, 1, 2]?
Also, since the money only changes with D1 and D2 can I fill the missing days of each unit with a bfill and a ffill?
|unit |money |D1 |D2 |t |
|----|-----|--|--|-|
|1 |40 |0 |0 |1 |
|2 |80 |0 |0 |1 |
|3 |20 |0 |0 |1 |
|4 |30 |0 |0 |1 |
|5 |20 |0 |0 |1 |
|1 |70 |1 |0 |2 |
|2 |80 |0 |0 |2 |
|3 |40 |1 |1 |2 |
|4 |30 |0 |0 |2 |
|5 |25 |1 |0 |2 |
|6 |50 |0 |0 |2 |
|7 |40 |0 |0 |2 |
|1 |70 |0 |0 |3 |
|2 |80 |0 |0 |3 |
|3 |25 |1 |0 |3 |
|4 |35 |0 |1 |3 |
|5 |25 |0 |0 |3 |
|6 |50 |0 |0 |3 |
|7 |30 |0 |1 |3 |
|8 |90 |0 |0 |3 |
|1 |70 |0 |0 |4 |
|2 |80 |0 |0 |4 |
|3 |25 |0 |0 |4 |
|4 |35 |0 |0 |4 |
|5 |25 |0 |0 |4 |
|6 |50 |0 |0 |4 |
|7 |25 |1 |1 |4 |
|8 |100 |1 |0 |4 |
|2 |95 |1 |0 |5 |
|3 |25 |0 |0 |5 |
|4 |45 |0 |0 |5 |
|5 |25 |0 |0 |5 |
|6 |50 |0 |0 |5 |
|7 |25 |0 |0 |5 |
|8 |100 |0 |0 |5 |
|2 |95 |0 |0 |6 |
|4 |45 |0 |0 |6 |
|5 |25 |0 |0 |6 |
|6 |50 |0 |0 |6 |
|7 |25 |0 |0 |6 |
|8 |100 |0 |0 |6 |
| How to measure ATE on panel data with heterogeneous treatment effects and many treatments? | CC BY-SA 4.0 | null | 2023-03-21T17:33:50.040 | 2023-03-21T19:34:53.510 | 2023-03-21T19:34:53.510 | 383713 | 383713 | [
"regression",
"panel-data",
"causality",
"difference-in-difference",
"confounding"
] |
610214 | 1 | null | null | 0 | 61 | I found an outlier using the outlierTest function in the car package. However, I can see from the results that the Externally Studentized Residual and p-values. This is a result for the full model.
```
rstudent unadjusted p-value Bonferroni p
348 5.872682 7.9377e-09 3.9689e-06
```
Cooks D Bar Plot:
[](https://i.stack.imgur.com/ak9z9.png)
I have performed normality test on residuals using the following code:
```
shapiro.test(resid(housing.lm))
```
R Console:
```
Shapiro-Wilk normality test
data: resid(housing.lm)
W = 0.97068, p-value = 1.876e-08
```
The p-value is less than 0.05 indicating that the residuals may not be normally distributed. However, I assume it is not critical for linear regression as long as the other assumptions are met.
I have also performed heteroscedasticity test using the following code:
```
ncvTest(housing.lm)
```
R console:
```
Non-constant Variance Score Test
Variance formula: ~ fitted.values
Chisquare = 0.3243994, Df = 1, p = 0.56898
```
When I fit the regression using the code:
```
lm(price~ bath + sqft, data=data)
```
My diagnostic plots looks as follows;
[](https://i.stack.imgur.com/qA11t.png)
When I try to remove observation 348 based on the p-value, the sqft variable becomes insignificant.
Is it better to keep it since it seems an influential point?
| Outlier Detection using OutlierTest | CC BY-SA 4.0 | null | 2023-03-21T17:38:04.167 | 2023-03-22T15:01:23.460 | 2023-03-22T00:35:40.827 | 383312 | 383312 | [
"r",
"multiple-regression",
"outliers"
] |
610215 | 2 | null | 610210 | 7 | null | Welcome to the instability of feature selection. This is totally predictable behavior and one of the reasons why stepwise regression is less of a panacea than it first seems to be. Sure, you select some variables that work well on the training data, and by limiting the variable count to just those that influence the outcome the most, you seem to restrict the opportunity for overfitting, right?
Unfortunately, you put yourself at risk of the variable selection overfitting to the training data. As you can see from your cross validation, just because a set of variables works on one sample does not assure it of working on another. That is, the feature selection is unstable, and with the selected features bouncing all over the place as you make changes to the data (which will be the case when you go to predict on new data), there is justifiable doubt that the variables selected based on the training data will be the right variables for making predictions on new data.
If you want to use your model just to predict, then you might be better off bootstrapping the entire dataset, fitting a stepwise model to the bootstrap sample, applying that model to the entire data set, and seeing by how much the performance (on some metric of interest, say MSE or MAE) differs. This is related to the procedure I discuss [here](https://stats.stackexchange.com/a/560089/247274). If that is an acceptable amount, you have evidence that the overall stepwise procedure is effective, [which can be the case for stepwise regression in pure prediction problems](https://stats.stackexchange.com/questions/594106/how-competitive-is-stepwise-regression-when-it-comes-to-pure-prediction).
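A minimal sketch of that bootstrap check, with a toy greedy forward selection standing in for stepwise regression (the data, the selection rule, and the choice of k = 3 are all invented for illustration, and it is written in Python rather than MATLAB):

```python
import numpy as np

rng = np.random.default_rng(1)

def mse(X_tr, y_tr, X_te, y_te):
    """OLS fit on (X_tr, y_tr), mean squared error on (X_te, y_te)."""
    beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return float(np.mean((y_te - X_te @ beta) ** 2))

def forward_select(X, y, k):
    """Greedy forward selection of k columns by in-sample MSE
    (a toy stand-in for stepwise regression)."""
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = min(remaining,
                   key=lambda j: mse(X[:, chosen + [j]], y, X[:, chosen + [j]], y))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy data: 20 candidate predictors, only the first 3 matter
n, p = 200, 20
X = rng.normal(size=(n, p))
y = X[:, :3] @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=n)

# Bootstrap: select + fit on a resample, then score on the full data
optimism = []
for _ in range(50):
    idx = rng.integers(0, n, n)
    cols = forward_select(X[idx], y[idx], k=3)
    apparent = mse(X[idx][:, cols], y[idx], X[idx][:, cols], y[idx])
    full = mse(X[idx][:, cols], y[idx], X[:, cols], y)
    optimism.append(full - apparent)

print(np.mean(optimism))  # average optimism of the whole select-then-fit procedure
```

If the average optimism is small relative to the error itself, the overall selection procedure is at least not badly overfitting on this data.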
If you want to use the stepwise regression to select variables on which you do inferences like p-values or confidence intervals, all of these downstream inferences are distorted by the stepwise selection. While [this](https://www.stata.com/support/faqs/statistics/stepwise-regression-problems/) link mentions Stata software, the theory does not care if you use Stata, MATLAB, Python, R, SAS, or any other software, and the previous sentence relates to points 2, 3, 4, and 7. Briefly, by doing the stepwise regression and then calculating statistics as if you had not, you are performing dishonest calculations that fail to account for the variable selection process.
| null | CC BY-SA 4.0 | null | 2023-03-21T17:50:40.520 | 2023-03-22T14:09:21.807 | 2023-03-22T14:09:21.807 | 247274 | 247274 | null |
610216 | 2 | null | 610201 | 2 | null | I don't think the question can be answered with a precise number with the given information, but you can give a range.
Observe that accuracy = TP + TN = 99% and prevalence = TP + FN = 0.1% still permit lots of different situations. E.g. with
- TP = 0%, FN=0.1%, TN = 99%, FP = 0.9%, but also
- TP = 0.1%, FN=0%, TN = 98.9%, FP = 1%.
Your answer is, for each of these scenarios, then easy to calculate, but can vary a good bit.
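A small Python sketch enumerating that feasible range; the two extreme scenarios in the bullets above fall out at the endpoints (which downstream metric to compute from the cells depends on the original question):

```python
import numpy as np

prevalence, accuracy = 0.001, 0.99   # TP + FN and TP + TN, as proportions

# TP can range from 0 up to the full prevalence; the other cells follow
for tp in np.linspace(0.0, prevalence, 3):
    fn = prevalence - tp
    tn = accuracy - tp
    fp = 1.0 - tp - fn - tn
    sens = tp / prevalence
    print(f"TP={tp:.4f} FN={fn:.4f} TN={tn:.4f} FP={fp:.4f} sensitivity={sens:.2f}")
```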
| null | CC BY-SA 4.0 | null | 2023-03-21T17:56:05.893 | 2023-03-21T17:56:05.893 | null | null | 86652 | null |
610217 | 2 | null | 610210 | 5 | null | Training 10 models and picking the best one based on the test set performance metrics is "cheating" - your performance metrics are no longer an unbiased measure of your overall model training procedure, since your model training procedure now uses the test data to select the model! A test set should only be used to evaluate a model, never to train or select it.
If you want one single set of features and one model, you can run your model training procedure on the entire dataset. You will not have a direct unbiased measure of performance (since you have no held-out test data), but the cross-validation performance should be a good approximation. You would generally expect a model trained on the full data to perform slightly better than CV suggests, as it is trained on more data.
| null | CC BY-SA 4.0 | null | 2023-03-21T17:56:26.917 | 2023-03-21T17:56:26.917 | null | null | 76825 | null |
610218 | 1 | null | null | 0 | 28 | I have some biological data of gene expression (GE) in the cells. I want to model GE behaviour as a linear model where GE in cell Y is dependent on GE in Cell X1, Cell X2, Cell X3 and many more. Just showing 3 for the simplicity. In the following figure GE of cell Y is plotted against the cell X1, X2 and X3.
[](https://i.stack.imgur.com/fciAQ.png)
After looking above plot I don't think I can model them as
>
Y = b1x1 + b2x2 + b3x3 + c
Do you suggest any GLM (generalized linear model), such as Poisson regression, could help? Are GLMs helpful for such continuous data? I do not want to explore non-linear models because I think they would be difficult to translate into actual biological understanding.
Parameters of ridge regression (C = coefficient, I = intercept, p = p-value, lambda = regularization)
[](https://i.stack.imgur.com/UQIAS.png)
| How to model following dependent variable in the linear regression framework? | CC BY-SA 4.0 | null | 2023-03-21T18:13:36.447 | 2023-03-21T20:56:29.910 | 2023-03-21T20:56:29.910 | 251125 | 251125 | [
"multiple-regression",
"generalized-linear-model",
"generalized-additive-model"
] |
610219 | 2 | null | 610210 | 8 | null | Running a single cross-validation loop yields an estimate of the out of sample predictive error associated with your modeling procedure, nothing more. You have 10 different models because stepwise selection is unstable, as @Dave explains. There is no reason to believe that any of your 10 models is 'right', but the mean of the cross-validation prediction error gives you an estimate of how large the prediction error will be in the future. At this point, you would run your procedure over the full dataset and use that as the final model. In general, I would advise [against](https://stats.stackexchange.com/a/20856/7290) this, but that would be the protocol.
If you want to use cross-validation to determine the $F$-value to use as a cutoff for your modeling procedure, you need to do more. In that case, you would use a nested cross-validation scheme. In the outer loop, you would partition the data into $k$ folds and set one aside. Then you would perform another cross-validation loop on the remaining folds. In the inner loop, you would use some means to search over possible $F$-values; for instance, you could use a grid search over a series of possible $F$ cutoffs. For each possible $F$, there would be a average out of sample predictive accuracy score. You would take the cutoff that performed best and use it on the entire (nested) dataset to get a model. That model would be used to make predictions on the top level set that had been set aside, and from that you would get an estimate of the out of sample performance of a model that is selected in this manner. Then you would set the second fold aside, and perform the inner loop cross-validation and $F$ cutoff selection again, etc. After having done all this $k$ times, you could average those and get an average estimate of the out of sample performance of models selected in this manner. After that, you can repeat the search procedure that you had used in the inner loop on the outer loop alone (i.e., there wouldn't be an inner loop this time). That will give the model slightly more data to work with to select your final cutoff. Finally, you would fit your intended model using that cutoff on the whole dataset to get your final model, and you would have an estimate of how well a model of that type, selected in that manner, will perform out of sample. In short, the larger protocol is this:
- Run nested cross-validation, selecting a cutoff on the inner loop and then using it in the outer loop, to get an estimate of out of sample performance.
- Run cross-validation to select the cutoff to be used for the final model.
- Fit your model to the full dataset using the cutoff selected.
To get more detail, try reading: [Nested cross validation for model selection](https://stats.stackexchange.com/q/65128/) and [Training on the full dataset after cross-validation?](https://stats.stackexchange.com/q/11602/). Again, I wouldn't recommend you use stepwise selection, even in this case because the parameters will still be biased (and the constituent hypothesis tests will still be garbage), but the out of sample estimate of the predictive performance of a model fitted in this manner should be OK.
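A sketch of that protocol using scikit-learn, with Lasso's penalty `alpha` standing in for the $F$ cutoff (scikit-learn has no stepwise selector, and the data and grid here are invented for illustration):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

# Synthetic stand-in data; Lasso's alpha plays the role of the F-cutoff
X, y = make_regression(n_samples=200, n_features=20, noise=5.0, random_state=0)
grid = {"alpha": [0.01, 0.1, 1.0, 10.0]}

# Inner loop: pick the tuning value by cross-validation
inner = GridSearchCV(Lasso(max_iter=10_000), grid,
                     cv=KFold(5, shuffle=True, random_state=1))

# Outer loop: estimate of the whole select-then-fit procedure (step 1)
outer_scores = cross_val_score(inner, X, y, cv=KFold(5, shuffle=True, random_state=2))

# Steps 2 and 3: rerun the search on all the data and keep the fitted model
final = inner.fit(X, y).best_estimator_
print(outer_scores.mean(), final.alpha)
```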
| null | CC BY-SA 4.0 | null | 2023-03-21T18:24:59.740 | 2023-03-21T18:24:59.740 | null | null | 7290 | null |
610221 | 1 | null | null | 3 | 187 | I understand that, given a set of iid random variables, the variance of the sum is equal to the sum of the variance. Likewise, I know that the variance of the mean is equal to the variance over n.
My question is: if the variances of the respective random variables are not the same, is the variance of the mean still an average of the variances?
| Is the variance of the mean of a set of independent random variables equal to the average of their respective variances? | CC BY-SA 4.0 | null | 2023-03-21T18:47:33.617 | 2023-03-21T22:25:51.767 | null | null | 383788 | [
"variance",
"mean",
"random-variable",
"standard-deviation"
] |
610223 | 2 | null | 610221 | 6 | null | Given a set of random variables $X_1,\dots,X_n$, if they are independent, then
\begin{align}
\text{Var}(\overline X) &= \text{Var}\left(\frac{1}{n} \sum_{i=1}^n X_i\right) \\
&= \frac{1}{n^2}\text{Var}\left(\sum_{i=1}^n X_i\right) \\
&= \frac{1}{n^2}\sum_{i=1}^n \text{Var}\left(X_i\right)
\end{align}
So the variance of the sample mean is equal to the mean of the variances divided by $n$, regardless of whether the variances are equal or not.
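A quick numerical check of this identity, using arbitrary unequal variances (Python for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmas = np.array([1.0, 2.0, 3.0])  # unequal standard deviations
n = len(sigmas)

# Theoretical variance of the sample mean: mean of the variances divided by n
theory = np.mean(sigmas**2) / n     # (1 + 4 + 9) / 3 / 3 = 14/9

# Monte Carlo check: draw X_1..X_n independently, average them, repeat
draws = rng.normal(0.0, sigmas, size=(200_000, n))
mc = draws.mean(axis=1).var()
print(theory, mc)  # the two numbers should be close
```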
| null | CC BY-SA 4.0 | null | 2023-03-21T19:12:37.473 | 2023-03-21T22:25:51.767 | 2023-03-21T22:25:51.767 | 296197 | 296197 | null |
610224 | 1 | null | null | 1 | 43 | I am comparing the MAE of LASSO regression of multiple features vs. MAE of linear regression of each individual feature, and I am having trouble understanding why the LASSO MAE can be worse than some of the individual feature MAE, even on for the training set (where one single feature resulted in lower MAE than LASSO).
In my understanding, LASSO is a linear regression with regularization that shrinks the weights of "un-useful" features to zero while minimizing the MSE (which should be reflected in a minimized MAE as well). Then why did LASSO choose multiple features that give a higher error rather than only keeping a single feature, or fewer features, that give a lower error?
| Why can LASSO MAE be worse than individual feature linear regression MAE? | CC BY-SA 4.0 | null | 2023-03-21T19:33:50.063 | 2023-03-22T12:38:06.830 | null | null | 383791 | [
"regression",
"machine-learning",
"lasso",
"mse",
"mae"
] |
610225 | 2 | null | 605511 | 0 | null | An inducing path is one where all non-endpoint nodes along the path are colliders and an ancestor of one of the endpoints. An example of an inducing paths is:
$X \rightarrow C1 \leftrightarrow C2 \leftrightarrow Y$, where also $C1 \rightarrow X; C2 \rightarrow X$.
An inducing path intuitively is a path between two non-adjacent nodes that cannot be d-separated. Therefore, the path is always "active" regardless of what variables you condition on.
It is useful in a MAG and PAG setting because it implies there are nodes that are non-adjacent in the true DAG, that will appear adjacent in the MAG/PAG. I.e. it "induces" an adjacency, even though the nodes are in fact non-adjacent.
This is useful for understanding because it means that adjacencies in a MAG/PAG setting are not the same as that of a DAG.
| null | CC BY-SA 4.0 | null | 2023-03-21T19:37:23.450 | 2023-03-21T19:37:23.450 | null | null | 106439 | null |
610226 | 1 | 610262 | null | 2 | 60 | I have been learning about ROC curves and had a doubt that can the ROC of 2 completely different data sets with different skew ( P/(P + N) where `P` and `N` are the actual positives and negative values) be the same?
I am thinking that they cannot be the same since any 2 datasets with different skew will have different TPR and FPR.
Am I right? If I am wrong, can you explain why?
Thanks
| Can the ROC curve of 2 different data sets be same | CC BY-SA 4.0 | null | 2023-03-21T19:40:54.463 | 2023-03-24T21:28:40.843 | null | null | 383790 | [
"roc",
"skewness"
] |
610230 | 1 | null | null | 0 | 21 | I have an unusual case where I need to combine two vector spaces but weight one more than the other. Rather than discussing my specific use case, it's likely easier to imagine we trained two word2vec models on two different corpora to produce two different embedding sets on the same words. Now imagine that I need to weight embedding set `A` more than `B`. I know these weights from an external context such that the weight of `A` should be `w_A`, `B` should be `w_B`, `w_A + w_B = 1`, and `w_A > w_B`. Finally, I need to combine my two weighted embeddings them into one embedding space of length `n` (both `A` and `B` are already of length `n`).
One simple solution that has worked relatively well is just a weighted average like so:
`new_embeddings = (w_A * A) + (w_B * B)`
And while in practice this actually works OK, it doesn't take into account that the two vector spaces do not necessarily mean the same thing. That is, column `A_1` might actually show up in `B` as `B_10`, or it might not have an analogue at all. One thought would be to align the columns in the vector spaces before doing the weighted average, but again this assumes there is a meaningful alignment.
An alternative that seems to dodge the alignment problem is to concatenate `A` and `B` to form a new vector space of length `2n`, then use Truncated SVD to capture as much information from `A` and `B` as possible in a new vector space of length `n`.
However, Truncated SVD does not naturally accept columns weights. In order to mimic the effect of weighting, I normalize both `A` and `B`, and then rescale by the weights. That looks like (pseudo-code):
```
new_embeddings =
TruncatedSVD(
concatenate(
(weight_A / weight_B) * normalize(A)
, normalize(B)
)
)
```
And while that is hacky, it does seem to give `A` more impact in the results. But the hackiness bothers me. Is there a better way to do this, or is this actually justified given my intent?
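For concreteness, here is a runnable version of the pseudo-code above using scikit-learn's `TruncatedSVD`; this is just the asker's weighting trick made executable with invented data and weights, not a standard recipe:

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
n_words, dim = 1000, 50
A = rng.normal(size=(n_words, dim))  # embedding set A (stand-in data)
B = rng.normal(size=(n_words, dim))  # embedding set B
w_A, w_B = 0.7, 0.3

# Row-normalize each space, up-weight A by the weight ratio, concatenate,
# then reduce the 2n-dimensional space back down to n dimensions
stacked = np.hstack([(w_A / w_B) * normalize(A), normalize(B)])
combined = TruncatedSVD(n_components=dim, random_state=0).fit_transform(stacked)
print(combined.shape)  # (1000, 50)
```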
| How to do column weighted truncated SVD? | CC BY-SA 4.0 | null | 2023-03-21T20:18:51.533 | 2023-03-21T20:18:51.533 | null | null | 260763 | [
"dimensionality-reduction",
"linear-algebra",
"svd",
"embeddings"
] |
610231 | 1 | 612323 | null | 4 | 136 | I have a few hundred variables representing different biomarkers. These variables have been measured in both cases and controls. The underlying units of measurement are not important, so I have standardized all variables (subtract mean, divide by SD). The standardization was done separately in cases and in controls, since the two groups are expected to have different means for some of the measurements. Within a group, most variables followed a normal distribution, a few that did not were log-transformed to get them closer to normal before standardizing (those variables can be left out if necessary).
Some variables will correlate with each other in both controls and cases. Other variables may differ in the patterns of correlation they show with each other in cases vs. controls. Correlations present in both cases and controls can be thought of as noise that I want to remove, so I can get only the case-specific correlations.
My end goal is to look for sets of markers that tend to group together in cases (using a method like factor analysis or PCA).
My original idea was to subtract the covariance matrix for controls from the covariance matrix for cases. But as pointed out in the comments on this question, that won't work because it has the potential to produce negative variances in some of the cells. (Or in this case, since the covariance matrix for standardized variables is a correlation matrix, subtracting would produce 0s on all the diagonals.)
Is there a better approach to take to look at the question "Are there groups of variables that tend to be correlated with each other specifically in cases, after accounting for any patterns that cases share with controls"?
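A tiny illustration of why plain subtraction fails, with made-up correlations: the difference has a zero diagonal and a negative eigenvalue, so it is not itself a valid correlation/covariance matrix.

```python
import numpy as np

def corr(r):
    """A 2x2 correlation matrix with off-diagonal correlation r."""
    return np.array([[1.0, r], [r, 1.0]])

# Hypothetical case and control correlations of opposite sign
diff = corr(0.9) - corr(-0.9)        # "cases minus controls"
evals = np.linalg.eigvalsh(diff)

print(np.diag(diff))  # zeros on the diagonal
print(evals)          # one negative eigenvalue: not a valid covariance matrix
```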
| Looking at how covariance/correlation between variables differs in two groups? | CC BY-SA 4.0 | null | 2023-03-21T20:21:22.483 | 2023-04-12T14:57:13.753 | 2023-04-08T03:45:17.370 | 146747 | 146747 | [
"correlation",
"covariance",
"covariance-matrix",
"case-control-study",
"correlation-matrix"
] |
610232 | 2 | null | 609667 | 0 | null | A mixed model for repeated measures would implicitly impute these values assuming that they were missing at random. That can make sense, or may be inappropriate depending on which question you wish to answer. E.g. if in a study of a drug people stop taking the drug and stop participating in the study, fitting such a model to the on-drug data of all patients makes sense if you want to know what would have happened if they had stayed on drug and in the study (but not if you want to know what happened after they stopped taking the drug, when drug effects presumably would go away). Such a model or a multivariate normal model lets you also impute the missing values (see [here](https://cran.r-project.org/web/packages/rbmi/vignettes/stat_specs.html) for a more in-depth discussion in the context of a particular R package or for a fully Bayesian version [this merge request](https://github.com/paul-buerkner/brms/pull/1435) from the latest release of the brms R package, which added support for an unstructured correlation matrix between timepoints).
| null | CC BY-SA 4.0 | null | 2023-03-21T20:33:23.900 | 2023-03-21T20:33:23.900 | null | null | 86652 | null |
610233 | 2 | null | 610207 | 3 | null | >
I get a number for power which is 0.23 which obviously is low. My question is, am I doing this correctly?
It is low, but not incorrect. When you have an effect of 0.1 and a sample of size 75, then you don't get very much power and 0.23 is not weird.
---
It may help to do a manual computation.
You can compute this easily manually for the case of a z-test. (Your example is an F-test but the relationship between power and effect size is more or less the same)
Say you have a distribution $X \sim N(\mu,1)$ and you test the hypothesis that $\mu = 0$, by using a sample of size $75$. Then the standard deviation of the statistic $\bar{X}$, the average of the 75 values, is $\frac{1}{\sqrt{75}} \approx 0.115$, and the effect size of $0.1$ is a shift of less than one standard deviation. In the image below you see what this means
[](https://i.stack.imgur.com/YraJV.png)
the cutoff values are around $\pm 0.226$ and the power is here only $0.139$, even less than your situation (with a one sided test, like the F-test, the power for a given effect will be higher, and in the case of the z-test it will be approximately $0.218$).
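The $0.139$ figure can be reproduced numerically; a short Python sketch with the same $n = 75$, effect $0.1$, and $\alpha = 0.05$ as above:

```python
import numpy as np
from scipy.stats import norm

n, effect, alpha = 75, 0.1, 0.05
se = 1 / np.sqrt(n)                  # sd of the mean of n draws from N(mu, 1)
crit = norm.ppf(1 - alpha / 2) * se  # two-sided rejection cutoff, about 0.226

# Power: probability that |x-bar| exceeds the cutoff when the true mean is 0.1
power = norm.cdf(-crit, loc=effect, scale=se) + norm.sf(crit, loc=effect, scale=se)
print(round(power, 3))  # ≈ 0.139
```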
| null | CC BY-SA 4.0 | null | 2023-03-21T20:36:21.857 | 2023-03-21T20:36:21.857 | null | null | 164061 | null |
610234 | 1 | null | null | 0 | 33 | See [en.wikipedia.org/wiki/Sleeping_Beauty_problem](https://en.wikipedia.org/wiki/Sleeping_Beauty_problem) for the statement of this problem.
This seems to me to be a simple weighted-average problem, because the total number of days awake varies between 1 (Monday if heads) and 2 (Monday and Tuesday if tails on Monday). The frequency of occurrence (or the weight) of the tossed coin on Monday is 1 (= 100% for every Monday) and on Tuesday is ½ (= 50%, one Tuesday out of two on average). Then:
[](https://i.stack.imgur.com/cJtp8.jpg)
Then the Sleeping Beauty's answer is 50% and it is not a paradox at all, is it? The classic misunderstanding is to consider that there are three events (or steps) and to think that the answer is one third. But there are not three steps; this is a matter of frequencies. Another view is that, obviously:
[](https://i.stack.imgur.com/R8ZEW.jpg)
Finally, and until proven otherwise, each time you wake up, with a balanced coin, there is always as much chance of getting heads as tails, isn’t there? So, the answer is obviously 50%. Is there really still confusion about this?
| Is the sleeping beauty paradox really one? | CC BY-SA 4.0 | null | 2023-03-21T20:45:32.010 | 2023-03-21T20:45:32.010 | null | null | 383781 | [
"paradox"
] |
610235 | 1 | null | null | 0 | 17 | Suppose I want to model the change in some outcome before and after treatment. The outcome of interest is the change in a latent variable measured with error before and after treatment by 3 observed variables. Let the pre-treatment observed values be $x_{10}, x_{20}, x_{30}$ and the post-treatment observed values be $x_{11}, x_{21}, x_{31}$.
There are two ways I might approach modeling the change in this latent variable across time:
- Calculate change in each variable: $x_{11} - x_{10}, x_{21} - x_{20}, x_{31} - x_{30}$. Fit measurement model to the resulting 3 "observed change" variables. Model this "latent score of observed change" variable.
- Fit a measurement model to either the pooled observed variables $x_{kt}$ or separate models for each time period ($x_{k1}, x_{k0}$) to obtain latent outcomes for each time ($y_1$, $y_0$). Calculate the change in the latent outcomes: $y_1 - y_0$. Model the change.
What are the tradeoffs of these two different approaches to handling measurement error when modeling change across two periods?
| Treating multiple observed differences as a latent variable vs. difference in multiple latent variables | CC BY-SA 4.0 | null | 2023-03-21T21:13:39.930 | 2023-03-21T21:13:39.930 | null | null | 120828 | [
"panel-data",
"structural-equation-modeling",
"measurement-error",
"latent-variable",
"pre-post-comparison"
] |
610236 | 1 | null | null | 0 | 10 | I have some data of dental images. The same image was taken using the gold standard method and using an alternative new more cost effective method. Evaluators rate each image (blinded) based on a likert scale for image quality. I have 21 images per evaluator. How do I measure the agreement between quality of the images to make the conclusion that the new method is comparable to the gold standard?
| Agreement statistic for likert scale evaluation of different image modalities | CC BY-SA 4.0 | null | 2023-03-21T21:32:56.617 | 2023-03-21T21:32:56.617 | null | null | 213352 | [
"correlation",
"likert",
"agreement-statistics",
"intraclass-correlation",
"cohens-kappa"
] |
610237 | 1 | null | null | 1 | 17 | I am performing group lasso and need to double check if I include a dummy variable for the reference answer or not. For example:
original question : no (0), Yes (1), Unknown (9).
If I create 3 dummy variables, no (reference) (0/1), yes (0/1), and unknown (0/1), would I include all three in the group lasso or just two (yes and unknown)?
| Dummy / Reference variable in LASSO (group lasso) | CC BY-SA 4.0 | null | 2023-03-21T21:46:17.377 | 2023-03-21T22:45:40.893 | null | null | 365631 | [
"lasso",
"categorical-encoding"
] |
610239 | 1 | null | null | 1 | 21 | Say I have some data from an RCT related to a skills training program. The data are such that:
- Marginally randomized binary treatment z. Those with z = 1 attended the program;
- Single binary outcome of interest post_program_employment.
Assume the randomization went great and we have unconditional ignorability.
After collecting the data, I realize that anyone for whom the binary covariate `pre_program_employment = 1` will necessarily also have `post_program_employment = 1` due to the way the data were collected. It is ambiguous whether these people actually got a new job after the program, or simply kept the job they had at baseline.
My question is: can a logistic regression of the type shown below return an unbiased estimate of the effect of the treatment on the probability of getting a new job after graduation? I am confused about whether conditioning on `pre_program_employment` in this way can account for the fact a person with `pre_program_employment = 1` must have an individual treatment effect of 0 in the data as collected, regardless of whether that person actually found a new job or not.
`m1 <- glm(post_program_employment ~ 1 + treatment + pre_program_employment, family = "binomial", data = dat_fake)`
| Causal identification when some ITEs are 0 | CC BY-SA 4.0 | null | 2023-03-21T22:31:11.573 | 2023-03-21T22:31:11.573 | null | null | 337075 | [
"regression",
"logistic",
"causality",
"random-allocation"
] |
610240 | 2 | null | 610173 | 0 | null | [The sum of normal R.Vs is also normal.](https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables)
If you have two independent R.V.s $X\sim N(\mu_X, \sigma^2_X)$ and $Y\sim N(\mu_Y, \sigma^2_Y)$, and you are interested in their sum $Z=X+Y$, it can be shown that $Z\sim N(\mu_X+\mu_Y, \sigma^2_X+\sigma^2_Y)$.
There are also formulae if the R.V.s are normal but correlated.
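A quick simulation check of the formula, with arbitrary means and variances (Python for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(2.0, 3.0, 500_000)   # mu_X = 2,  sigma_X = 3
y = rng.normal(-1.0, 4.0, 500_000)  # mu_Y = -1, sigma_Y = 4
z = x + y

print(z.mean(), z.var())  # close to 1.0 and 25.0 (= 9 + 16)
```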
| null | CC BY-SA 4.0 | null | 2023-03-21T22:34:22.520 | 2023-03-21T22:34:22.520 | null | null | 369002 | null |
610241 | 1 | null | null | 0 | 12 | I ran two multiple regressions. The first included all my variables. In the second, I removed one variable for which I had missing data for a lot of my cases. How can I explain why, in the second multiple regression, an additional two variables become 'negative' in direction?
| How can I explain why when I run 2 multiple regressions, (1 with all my variables, the 2nd with all but one), the directions of some variables change? | CC BY-SA 4.0 | null | 2023-03-21T22:35:03.363 | 2023-03-21T22:35:03.363 | null | null | 383803 | [
"multiple-regression"
] |
610243 | 2 | null | 610237 | 1 | null | It doesn't really matter because you are using regularization. Without regularization, you would have [too many parameters](https://stats.stackexchange.com/questions/224051/one-hot-vs-dummy-encoding-in-scikit-learn), but it's [not a problem](https://stats.stackexchange.com/q/168622/35989) for a regularized model. So you can use either. Including all the categories is a popular choice for regularized models though, as you don't need to decide which category to drop.
| null | CC BY-SA 4.0 | null | 2023-03-21T22:45:40.893 | 2023-03-21T22:45:40.893 | null | null | 35989 | null |
610244 | 1 | null | null | 0 | 18 | Is there a way to prioritize response variables in a random forest model, ideally in R?
For example, suppose I want to predict tree height based on elevation, aspect, slope, and tree canopy cover. In theory, if I have areas of no vegetation, and I only use elevation, aspect, and slope, so no variable that would signify the presence of trees, I can still predict a tree height. And if I add tree canopy cover to my model, where larger values of tree cover would mean greater tree heights, then I might still have areas of no trees where the model predicts a tree height. So is there a way to make the variable tree canopy cover more important in the random forest model? Ideally, I would like for areas where canopy cover = 0 the model to predict tree height = 0. And the higher the canopy cover, the higher the tree height. I could just use the variable canopy cover to predict canopy height but I know that all the other variables also play a role in the tree height.
I found information on applying weights to a random forest model, but in those cases the weight is applied to the predicted variable. For example if tree height was a categorical variable of three classes, by using weights, I could tell the model how many observations to use from each class. But I would like to apply weights on my response variables.
There are two other resources that refer to case.weights() but I am still not sure how to proceed.
[Handling case weight in the Random Forest packages in R](https://stats.stackexchange.com/questions/166424/handling-case-weight-in-the-random-forest-packages-in-r)
[https://stackoverflow.com/questions/75150837/assigning-weights-to-variables-in-random-forest-model-in-r](https://stackoverflow.com/questions/75150837/assigning-weights-to-variables-in-random-forest-model-in-r)
I am mainly using randomForest and ranger packages in R but I am open to other packages, too.
| Random Forest--prioritizing response variables | CC BY-SA 4.0 | null | 2023-03-21T22:55:36.200 | 2023-03-21T22:55:36.200 | null | null | 383794 | [
"r",
"random-forest",
"feature-selection",
"weights"
] |
610245 | 1 | 610253 | null | 1 | 68 | Using Python I have created two separate XGBoost probability models. From these two models, I compute a final value by multiplying the outputs (probabilities) together to give a probability of both events happening at the same time.
I would like to be able to explain the output of the final value using Shapley values computed using the shap library ([https://shap-lrjball.readthedocs.io/en/latest/index.html](https://shap-lrjball.readthedocs.io/en/latest/index.html)). Previously I have attempted to use shap.KernelExplainer to explain the output function. This works, but I have found this to be very slow when I use enough of my dataset to create an accurate background for the explainer.
I have the shap.Explainer for both of the individual models which work quickly. I am attempting to find a way to use the shap values of the individual models to create the final shap values, but am struggling to do so. For example:
```
Model_1 Probability = 0.5, Feature_1 Contribution = 0.3, Feature_2 Contribution = 0.2
Model_2 Probability = 0.2, Feature_1 Contribution = 0.05, Feature_2 Contribution = 0.15
Computed Probability = 0.1, Feature_1 Contribution = ?, Feature_2 Contribution = ?
```
Is there a formula to determine the contributions of the individual features to the computed probability?
| Combined Shapley Values for Probability Models | CC BY-SA 4.0 | null | 2023-03-21T23:01:47.027 | 2023-03-22T01:07:24.550 | null | null | 383805 | [
"probability",
"python",
"conditional-probability",
"shapley-value"
] |
610246 | 1 | null | null | 0 | 26 | After running my regression, none of my variables are significant, so I was wondering if comparing t and p values could be a source of discussion instead...
If so, can anything be inferred when a t value is larger than a p value? Or the other way around? Or what does it mean when the p and t values are almost equal?
| Can inferences be made by comparing t and p values? | CC BY-SA 4.0 | null | 2023-03-21T23:39:18.123 | 2023-03-21T23:39:18.123 | null | null | 383803 | [
"multiple-regression",
"p-value"
] |
610249 | 2 | null | 609874 | 0 | null | At the moment you will need to provide a dataset containing the future values of `no2_load` to the `forecast()` function.
For example:
```
NO2_stretch_future <- NO2_stretch %>%
new_data(n = 24) %>%
  mutate(no2_load = NA_real_) # replace NA_real_ with the known future values of no2_load
fc_cv <- fit_cv %>%
forecast(new_data = NO2_stretch_future)
```
I haven't been able to test this code since you haven't provided a minimally reproducible example (MRE). Providing an MRE makes it easier for me to quickly and accurately answer your question.
| null | CC BY-SA 4.0 | null | 2023-03-22T00:23:46.750 | 2023-03-22T00:23:46.750 | null | null | 245768 | null |
610250 | 1 | null | null | 0 | 12 | I'm self studying probability, specifically generating functions. I'm searching for the relation between conditional probabilities and probability generating functions in order to justify writing the probability generating function of the following problem.
Suppose I have a coin with probability $p$ of landing heads and $(1-p)$ of tails. Using $X=1$ to indicate the coin flipped on heads and $X = 0$ for tails, the probability generating function is
$$G_\text{coin}(s) = (1-p) + ps $$
Now suppose I have two dice. Dice A is a fair three-sided die, and Dice B is a fair six-sided die. The PGFs are,
$$G_\text{Dice A}(s) = \frac{1}{3}(s + s^2 + s^3)$$
$$G_\text{Dice B}(s) = \frac{1}{6}(s + s^2 + s^3 + s^4 + s^5 + s^6)$$
Now the problem is the following. I flip the coin, if it lands tails I roll dice A, if it is heads I roll dice B. What is the PGF for the number on the dice for this process?
I expect it will be
$$ G_\text{coin, then dice}(s) = (1-p) G_\text{Dice A}(s) + p G_\text{Dice B}(s)$$
But I would like to understand the general property of why this is true.
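As a numerical sanity check of this mixture formula (a simulation with an arbitrary choice of $p$ and $s$, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.3   # probability of heads (assumed value for the check)
s = 0.7   # point at which to evaluate the PGF

# Simulate the process: heads -> roll Dice B (6-sided), tails -> roll Dice A (3-sided)
n = 500_000
heads = rng.random(n) < p
rolls = np.where(heads, rng.integers(1, 7, n), rng.integers(1, 4, n))

est = np.mean(s ** rolls)   # Monte Carlo estimate of E[s^X]
g_a = (s + s**2 + s**3) / 3
g_b = (s + s**2 + s**3 + s**4 + s**5 + s**6) / 6
mix = (1 - p) * g_a + p * g_b
print(est, mix)   # the two should agree to about 3 decimals
```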
| Generating Function for Coin Flip followed by a roll of two dice | CC BY-SA 4.0 | null | 2023-03-22T00:25:35.617 | 2023-03-22T00:25:35.617 | null | null | 37650 | [
"conditional-probability",
"probability-generating-fn"
] |
610251 | 1 | null | null | 0 | 23 | In most scenarios power calculations are used to determine `n`, given expected effect size, power and significance level. However, if I already know `n`, and wish to know what a reasonable effect size would be, can I compute the expected effect size given `n`, power and significance level, like:
```
library(pwr)
pwr.t.test(n=120, sig.level = 0.05, power = 0.8, type = "two.sample", alternative = "two.sided")
n = 120
d = 0.3631543
sig.level = 0.05
power = 0.8
alternative = two.sided
```
Should I interpret this as saying that the minimum effect size I should expect is 0.36 and anything below that is probably meaningless?
I'm aware effect sizes largely depend on the application, but if I have no idea what effect size to expect, would the conclusion above be valid?
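For what it's worth, the same calculation can be reproduced in Python with statsmodels, whose `TTestIndPower.solve_power` solves for whichever argument is left unspecified (here the effect size):

```python
from statsmodels.stats.power import TTestIndPower

# Leaving effect_size unset solves for the detectable effect size (Cohen's d)
d = TTestIndPower().solve_power(nobs1=120, alpha=0.05, power=0.8,
                                ratio=1.0, alternative='two-sided')
print(d)   # approximately 0.363, matching the pwr.t.test output above
```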
| compute expected effect size from n | CC BY-SA 4.0 | null | 2023-03-22T00:42:35.767 | 2023-03-22T00:42:35.767 | null | null | 212831 | [
"effect-size"
] |
610253 | 2 | null | 610245 | 1 | null | Given the additive nature of Shapley values with respect to each model's output, in general it doesn't seem possible to factor their product (the third line in the image below) back into additive contributions of each input without also getting cross terms.
[](https://i.stack.imgur.com/KtbkU.jpg)
| null | CC BY-SA 4.0 | null | 2023-03-22T01:07:24.550 | 2023-03-22T01:07:24.550 | null | null | 383808 | null |
610254 | 1 | null | null | 1 | 23 | Imagine the following study design: We have propagated N plants. Each plant produces many seeds. From each plant, M of the seeds are used to measure community composition of bacteria living within the seed in a destructive process. An additional M seeds from each plant are allowed to germinate and the root length of the resulting seedling is measured.
So now we have N * M measurements of bacterial composition and N * M measurements of root length. We want to fit a regression model to estimate the effect of bacterial composition on root length. But because it is impossible to measure bacterial composition and root length from the same seed, we do not have N * M (x,y) pairs.
It does not seem ideal to average the measurements from each of the N plants and fit the model because this would ignore the nestedness of the variation. And it does not seem possible to fit a simple multilevel model because we do not have paired measurements within plant. My initial idea was to do a bootstrapping kind of method where we would randomly sample 1 of each of the M replicates for each of the N individuals, and fit the regression model. Then repeat that many times to get a bootstrap distribution for each of the regression parameters. That should incorporate the variation among seeds within a parent plant into the uncertainty distribution.
I would like to know if there is a standard or canonical approach to use in this case or if the bootstrap technique is acceptable. Are there any relevant literature references or textbooks that address this issue?
| Appropriate regression model when variables were measured on different subjects? | CC BY-SA 4.0 | null | 2023-03-22T01:33:33.917 | 2023-03-22T01:33:33.917 | null | null | 54923 | [
"regression",
"bootstrap",
"multilevel-analysis"
] |
610255 | 1 | null | null | 1 | 16 | I want to be able to predict what a product's price would be based on its current sales listings and its sales history.
The historical sales data would be a list of (date, price, quantity)
The current listings data would be a list of (price, quantity)
A very simple idea would be just to calculate the average quantity sold per day from the sales data (say, for the past week) and assume the same amount will be sold tomorrow. Then, we can remove that quantity from the listings data sorted by price ascending, and predict the new price of this product to be the new lowest listing price.
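A minimal sketch of that simple idea (made-up numbers; `predict_price` is a hypothetical helper, not an existing API):

```python
def predict_price(sales, listings):
    """Naive prediction: assume tomorrow's demand equals the average daily
    quantity sold, fill it from the cheapest listings, and take the lowest
    remaining listing price as the predicted price."""
    daily_qty = sum(q for _, _, q in sales) / len(sales)
    for price, qty in sorted(listings):  # ascending price
        if daily_qty < qty:
            return price                 # this listing is only partly consumed
        daily_qty -= qty
    return None                          # everything would sell out

# Made-up example data: (date, price, quantity) and (price, quantity)
sales = [("2023-03-01", 10.0, 3), ("2023-03-02", 9.5, 4), ("2023-03-03", 9.8, 5)]
listings = [(9.9, 2), (9.4, 3), (10.5, 4)]
print(predict_price(sales, listings))    # 9.9 with these numbers
```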
Obviously this does not account for stuff such as price elasticity, demand trends, etc.
Just wondering if anybody had any ideas/leads on how I can figure out better ways to solve this problem
| Predicting future price from sales data | CC BY-SA 4.0 | null | 2023-03-22T01:36:46.567 | 2023-03-22T01:36:46.567 | null | null | 383809 | [
"forecasting",
"predictive-models"
] |
610256 | 2 | null | 609766 | 2 | null | Although you've rightly observed structural similarity between the problem formulations of the constrained TRPO and off-policy actor critic (OFF-PAC) in terms of importance sampling, their implementation algos are completely different.
OFF-PAC uses a single behavior policy $b$ to generate actions and observe the resulting experiences at each incremental iterative step, which form your advantage function $A_t$ above and, more importantly, update the parameters of the same single target policy $\pi$; because the target policy differs from the behavior policy, this is off-policy learning. Implementation details can be found in Degris et al.'s (2012) paper "Off-Policy Actor-Critic".
On the other hand, TRPO as presented in Schulman's original 2015 paper uses the Single Path or Vine method to form a sample-average stochastic approximation of its maximization target $\mathbb{E}_{s \sim \rho_{\theta_{old}}, a \sim {\pi_{\theta_{old}}}}[\frac{\pi_\theta(a|s)}{\pi_{\theta_{old}}(a|s)}\cdot A_{\theta_{old}}]$, and further replaces the advantage function above simply with Q-value estimates under the current (old) policy $\pi_{\theta_{old}}$ during each policy evaluation and improvement step, until hopefully converging to an optimal target policy. Since TRPO learns from the experiences generated by the current policy and improves that same policy at every iterative step, it is on-policy, similar to the generalized policy iteration method of classic dynamic programming and to SARSA.
| null | CC BY-SA 4.0 | null | 2023-03-22T02:14:45.493 | 2023-03-22T02:26:39.963 | 2023-03-22T02:26:39.963 | 371017 | 371017 | null |
610260 | 1 | null | null | 1 | 34 | I am trying to use a GAM to model average daily water temperature against day and habitat type in an estuary. I have temperature logger data over 115 days across three habitat types. I am using a GAM with a factor-smooth interaction to see how habitat-level smooths differ from the global-smooth for day. Habitat is included as a random effect to allow for varying-intercepts and I also included a random effect for site (where each temperature logger was placed):
`gam(avg_temp ~ s(day, bs = "tp", m = 2) + s(day, by = hab, bs = "tp", m = 1) + s(hab, bs = "re") + s(site, bs = "re"))`
I am having trouble selecting the number of basis functions. When I use the default values (k = 10), this is the output I get from the summary() function:
[](https://i.stack.imgur.com/MmUWW.png)
The plots look good (i.e. represent the observed relationship), however gam.check() indicates I need to increase the number of basis functions.
So, I tried 30 knots. When I do this, the gam fits a linear relationship for the global smooth of day, as indicated by the edf of 1 in the summary output:
[](https://i.stack.imgur.com/H15Xd.png)
gam.check() tells me I have used a sufficient number of basis functions.
I don't know why the model does this. Logically, it does not make sense to have a linear relationship between temperature and day, as there is a clear, nonlinear trend of day across all habitat types when you visualize the data. Because of this, I don't know whether the first or second model specification is better.
Any help would be much appreciated.
| EDF and basis functions in GAMs with factor-smooth interactions | CC BY-SA 4.0 | null | 2023-03-22T03:00:37.107 | 2023-03-22T03:00:37.107 | null | null | 383810 | [
"interaction",
"generalized-additive-model",
"mgcv",
"basis-function"
] |