Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
612952 | 1 | null | null | 2 | 44 | The following is the pseudo algorithm for using eligibility traces in Q learning:
[](https://i.stack.imgur.com/dtFbl.jpg)
In this algorithm, for a given state and action derived from the policy, we get the delta (reward + discounted Q for next state and next action - Q for current state and action). Why do we use the common delta to update all `Q(s,a)` for a given state? Isn't this delta specific to the earlier chosen `(s, a)` and subsequent `(s', a')`?
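The mechanics the question describes can be made concrete with a minimal runnable tabular sketch (a Watkins/SARSA(λ)-style update; all names and hyperparameter values below are illustrative, not taken from the linked image):

```python
from collections import defaultdict

# Illustrative hyperparameters and tables (names are mine, not from the image)
alpha, gamma, lam = 0.1, 0.9, 0.8
Q = defaultdict(float)   # Q[(state, action)]
E = defaultdict(float)   # eligibility trace e[(state, action)]

def td_update(s, a, r, s_next, a_next):
    """One eligibility-trace step: the single TD error (delta) from the
    transition just taken is applied to EVERY (state, action) pair,
    scaled by that pair's eligibility, then all traces decay."""
    delta = r + gamma * Q[(s_next, a_next)] - Q[(s, a)]
    E[(s, a)] += 1.0                      # accumulating trace for the visited pair
    for sa in list(E):
        Q[sa] += alpha * delta * E[sa]    # common delta, weighted by eligibility
        E[sa] *= gamma * lam
    return delta

delta0 = td_update("s0", "a0", 1.0, "s1", "a1")   # first transition, reward 1
```

Note that only the visited pair's trace is bumped, but the common `delta` touches every pair whose trace is still non-zero.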
| Why do we use same delta to update all Q values for a given state and action in eligibility traces? | CC BY-SA 4.0 | null | 2023-04-14T16:20:45.530 | 2023-04-19T05:36:41.067 | null | null | 385729 | [
"machine-learning",
"reinforcement-learning"
] |
612953 | 2 | null | 351001 | 0 | null | Without actually seeing the code and the problem - most probably there's a problem with your setup. He init, or any other random init for this matter, sets the weights to random numbers around $0$, so the outputs should be completely random. If you're using a cross-entropy loss, and your outputs are completely random, you should get a loss in the vicinity of $-\log(1/C)=\log(C)$, where $C$ is the number of classes.
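As a quick sanity check of that $\log(C)$ figure, here is a minimal sketch assuming near-uniform softmax outputs at initialization ($C = 10$, the logit scale, and the sample count are made-up values):

```python
import math
import random

random.seed(0)
C = 10  # number of classes (illustrative)

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Small random logits (as after a sensible random init) give near-uniform
# class probabilities, so the cross-entropy at the true class is ~log(C).
losses = []
for _ in range(2000):
    logits = [random.gauss(0.0, 0.01) for _ in range(C)]
    p = softmax(logits)
    y = random.randrange(C)          # arbitrary true label
    losses.append(-math.log(p[y]))
mean_loss = sum(losses) / len(losses)
```

For $C = 10$ the mean loss sits very close to $\log(10) \approx 2.30$; a first-epoch loss far from this is a hint that something in the setup (labels, normalization, loss reduction) is off.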
| null | CC BY-SA 4.0 | null | 2023-04-14T16:30:13.877 | 2023-04-15T16:08:29.723 | 2023-04-15T16:08:29.723 | 117705 | 117705 | null |
612954 | 2 | null | 612943 | 0 | null | First of all, obviously "0.25" is not negative, but you are correct that the fact that the OR is less than 1 means that there is a negative relationship between the independent and dependent variable.
Your interpretation is not quite correct though. An OR of 0.25 means that the odds of developing influenza are 25% as high (or 75% lower) for the treatment group compared to the placebo group.
However, note that this is a statement about what happens to the odds of developing influenza. An OR of 0.25 does not mean the chance or probability of developing influenza are 25% as high for the treatment group.
[This is a very common misinterpretation of odds ratios, even in peer-reviewed work](https://stats.stackexchange.com/questions/213223/explaining-odds-ratio-and-relative-risk-to-the-statistically-challenged), because everyone understands changes in terms of percentages, but very few people (even statisticians) have good intuitions about "odds." If you want to express your results in terms of probabilities (i.e., as a risk ratio - "x% less likely"), you will need to [put in some extra work.](https://stats.stackexchange.com/questions/183908/calculating-risk-ratio-using-odds-ratio-from-logistic-regression-coefficient)
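One common shortcut for that extra work is the Zhang-Yu style approximation, which converts an OR into a risk ratio given the outcome risk in the reference group. A small sketch (the 20% placebo-group risk below is an illustrative number, not from your study):

```python
def or_to_rr(odds_ratio, baseline_risk):
    """Approximate risk ratio from an odds ratio, given the outcome risk
    in the reference (e.g. placebo) group: RR = OR / (1 - p0 + p0 * OR)."""
    p0 = baseline_risk
    return odds_ratio / (1.0 - p0 + p0 * odds_ratio)

# Illustrative: with OR = 0.25 and a hypothetical 20% influenza risk in the
# placebo group, the risk ratio is ~0.29, i.e. about 71% lower risk -- not 75%.
rr = or_to_rr(0.25, 0.20)
```

Note that as the baseline risk approaches zero, the RR approaches the OR, which is why the two are often conflated for rare outcomes.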
| null | CC BY-SA 4.0 | null | 2023-04-14T16:46:12.477 | 2023-04-14T16:46:12.477 | null | null | 291159 | null |
612955 | 1 | null | null | 0 | 40 | Using the `urca` package in R, and the `ca.jo()` function, I can run a Johansen test and get the test statistic and critical values of the test. However, the function does not return p-value(s). I was wondering how to manually find the p-value given the results from the `ca.jo()` function, or if there was another package which calculates it in the first place.
| How to find p-value from Johansen test in R (urca::ca.jo)? | CC BY-SA 4.0 | null | 2023-04-14T16:46:12.593 | 2023-04-14T17:39:51.883 | 2023-04-14T17:39:51.883 | 53690 | 383128 | [
"r",
"time-series",
"cointegration"
] |
612956 | 2 | null | 612900 | 1 | null | I assume the 10,000 people on the register are all diagnosed, so not of interest for the question, and that you want to know how many undiagnosed people are there in the remaining 90,000 who in fact have the disease.
Then you'd need a sample of these, and find out how many undiagnosed cases you have there. You can then run a binomial test (or the analogue for sampling without replacement) of the hypothesis that the proportion is smaller or equal to some critical proportion specified by you.
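As a bare-bones sketch of that binomial test (the sample size, case count, and 5% critical proportion below are made-up; for sampling without replacement you would use the hypergeometric analogue instead):

```python
from math import comb

def binom_upper_p(k, n, p0):
    """Exact one-sided p-value P(X >= k) for X ~ Binomial(n, p0): small when
    the observed count k is surprisingly high for a true proportion p0."""
    return sum(comb(n, i) * p0**i * (1.0 - p0) ** (n - i) for i in range(k, n + 1))

# Illustrative numbers: 8 undiagnosed cases found in a sample of 100 people,
# tested against a critical proportion of 5%:
p_value = binom_upper_p(8, 100, 0.05)
```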
| null | CC BY-SA 4.0 | null | 2023-04-14T16:55:01.833 | 2023-04-14T16:55:01.833 | null | null | 247165 | null |
612957 | 1 | 612968 | null | 1 | 46 | Hello I'm new to survival analysis and I have a question about right censored data processing.
[](https://i.stack.imgur.com/w5pHZ.png)
For patient B, who survived through the entire study period: do we subtract the study start date from the study end date to obtain the survival days for patient B?
| Right censored data in survival analysis | CC BY-SA 4.0 | null | 2023-04-14T16:58:15.713 | 2023-04-14T19:03:16.233 | null | null | 369545 | [
"survival",
"censoring"
] |
612958 | 1 | null | null | 1 | 84 | Assuming we have the following structural causal model (SCM), with a confounder DAG structure, as follows:
Noise variables:
$$U_1 \sim \mathcal{N}(0,\,1)$$
$$U_2 \sim \mathcal{N}(0,\,1)$$
$$U_3 \sim \mathcal{N}(0,\,1)$$
SCM equations:
$$X1=5 U1$$
$$X2=5 X1 + 5 U2$$
$$X3 =5 X1 + 5 X2 + 5 U3$$
[](https://i.stack.imgur.com/g65ut.jpg)
Suppose we observe: $$U1 = 0.2, \; U2 = 0.2, \; U3 = 0.2, \; X1 = 1, \; X2 = 6, \; X3 = 36$$
Can you please explain (step-by-step), using the above example, how we can compute the counterfactual query
$$q(X2_{X1=-1}, X3_{X1=-1} \mid X1 = 1, X2 = 6, X3 = 36)$$
and how it differs from computing (step-by-step) the interventional query
$$q(X2, X3 \mid do(X1=-1))$$
How are the noise variables $U1, \; U2, \; U3$ handled/used when computing each of the two query types (the interventional query and the counterfactual query respectively)?
| Computing counterfactual query given an SCM and how it differs from computing interventional query? | CC BY-SA 4.0 | null | 2023-04-14T17:11:38.993 | 2023-04-14T19:19:02.417 | null | null | 368208 | [
"causality",
"structural-equation-modeling",
"intervention-analysis",
"causal-diagram",
"counterfactuals"
] |
612959 | 1 | null | null | 0 | 14 | I am calculating the CAPM model of BlackRock share price.
In particular the model is:
Rb = return of BlackRock
Rf = return of the risk-free asset
Rm = return of the S&P 500
Rb - Rf = intercept + B(Rm - Rf)
In my estimation the value of the intercept is not significant, which means we cannot reject the hypothesis that the intercept is equal to zero.
I can understand what intercept = 0 means, but I cannot understand the implications of a non-significant value in the model.
Can anybody help me?
| CAPM. Intercept not significant | CC BY-SA 4.0 | null | 2023-04-14T17:12:45.603 | 2023-04-14T17:40:37.733 | 2023-04-14T17:40:37.733 | 53690 | 385731 | [
"statistical-significance",
"finance"
] |
612961 | 1 | null | null | 1 | 31 | I was reading [this paper](https://arxiv.org/pdf/1905.03711.pdf), which uses a sampling attention ($a$ is the net that gives the distribution of attention, $f$ takes the samples and classifies them), and derives the gradient as follows:
[](https://i.stack.imgur.com/jE07c.png)
Which means that if we consider $f$, we have that the gradient wrt to itself is just the usual one, where instead if we consider $a$ we have:
$$
\nabla_\theta\log a(x;\theta)f(x)
$$
However, this does not make total sense to me: for the usual REINFORCE estimator, we should use the "score" to increase the probability of samples with high scores and decrease the probability of those with low scores... here instead they are using $f$, which is just the classifier... what am I missing?
In my opinion, the gradient should be:
$$
\nabla_{\theta_a} L \approx [-L(y, f(x))]\nabla\log a(x;\theta_a)
$$
thus "pushing up" the probability proportionally to the negative loss (so low loss pushes probs up, and high loss pushes probs down)
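For reference, the identity both variants rely on is $E[f(x)\,\nabla_\theta \log p(x;\theta)] = \nabla_\theta E[f(x)]$, which is unbiased for any (bounded) score $f$, whether it is a classifier output or a negative loss. A toy stdlib check with a Bernoulli sample standing in for the attention draw (all numbers below are illustrative):

```python
import random
random.seed(1)

# Toy score-function (REINFORCE) check: for x ~ Bernoulli(theta) and any
# score f, E[f(x) * d/dtheta log p(x; theta)] = d/dtheta E[f(x)] = f(1) - f(0).
theta = 0.3
f = {1: 2.0, 0: -1.0}            # arbitrary illustrative scores

def score_grad_sample():
    x = 1 if random.random() < theta else 0
    dlogp = 1.0 / theta if x == 1 else -1.0 / (1.0 - theta)
    return f[x] * dlogp          # f(x) * grad_theta log p(x; theta)

n = 200_000
estimate = sum(score_grad_sample() for _ in range(n)) / n
exact = f[1] - f[0]              # = 3.0
```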
| Why is the gradient not depending on the loss? | CC BY-SA 4.0 | null | 2023-04-14T17:59:09.330 | 2023-04-14T17:59:09.330 | null | null | 346940 | [
"neural-networks",
"reinforcement-learning"
] |
612962 | 1 | 612967 | null | 3 | 146 | How does one calculate the p value for a two tailed chi square test with degree freedom of 1?
```
prop.test(matrix(c(9,17,5,13), nrow = 2), correct = F)
2-sample test for equality of proportions without continuity correction
data: matrix(c(9, 17, 5, 13), nrow = 2)
X-squared = 0.22922, df = 1, p-value = 0.6321
alternative hypothesis: two.sided
95 percent confidence interval:
-0.2311216 0.3835026
sample estimates:
prop 1 prop 2
0.6428571 0.5666667
```
With a chi-square statistic of `0.22922` I get a cumulative probability of `0.36` with this code
```
pchisq(0.22922, df=1, lower.tail = T)
```
I don't think I could just `2*0.36` since it's not a normal distribution... how do I find the complementary of the upper tail so that the number will be equal to `0.6321` as shown by `prop.test`?
| How does one calculate the p value for a two tailed chi square test with degree freedom of 1? | CC BY-SA 4.0 | null | 2023-04-14T18:14:39.873 | 2023-04-16T17:35:49.130 | 2023-04-15T06:16:11.703 | 362671 | 316924 | [
"p-value",
"chi-squared-test"
] |
612963 | 1 | null | null | 1 | 25 | In their [original publication](https://www.jstor.org/stable/2984418), Box and Cox state
>
...we can obtain an approximate $100(1 - \alpha)$ per cent confidence region [around $\hat{\lambda}$] from
$$
L_\text{max} (\hat{\lambda}) - L_\text{max}(\lambda) < \frac{1}{2}\chi^2_{\nu_\lambda}(\alpha)\ ,
$$
where $\nu_\lambda$ is the number of independent components in $\lambda$.
They unfortunately do not explain their notation: does $\chi^2(\cdot)$ refer to the CDF, the PDF or something else? Further, what exactly is $\nu_\lambda$? Concretely, if I am applying a Box-Cox transform to a time series of length $N$, is $\nu_\lambda= N-1$, for instance?
For reference, the `scipy` Python package lists [the following formula](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.boxcox.html) for the confidence interval of their Box-Cox implementation:
$$
L_\text{max} (\hat{\lambda}) - L_\text{max}(\lambda) < \frac{1}{2}\chi^2(1 - \alpha, 1)\ ;
$$
digging into [the source code](https://github.com/scipy/scipy/blob/c1ed5ece8ffbf05356a22a8106affcd11bd3aee0/scipy/stats/_morestats.py#L938), it appears that they compute the PPF at $1 - \alpha$ and always assume $1$ df for the $\chi^2$; is this correct?
Any insight is appreciated.
| Box-Cox parameter confidence interval clarification | CC BY-SA 4.0 | null | 2023-04-14T18:17:09.350 | 2023-04-14T18:17:09.350 | null | null | 277115 | [
"confidence-interval",
"chi-squared-test",
"data-transformation",
"boxcox-transformation"
] |
612964 | 2 | null | 424781 | 0 | null | About the question:
When you say "classifying 100 classes" I think you have 100 output categories. The VGG16 output had 1000 output categories, so it isn't unreasonable.
The question:
Given a problem whose output is the selection of 1 of 100 candidate classes, tell me about the machine learning tools that could be used to engage it, and their relative strengths and weaknesses.
Answering:
Your answers are much closer than you imagine. If you did it right you could make them nearly exactly the same except for the bootstrapping in the random forest.
Independent logistic regressions:
- easy to make, understand, troubleshoot, and communicate by themselves.
- Altogether might be a bit overwhelming.
- requires enough but not google-big data to do a good job
- not as good at understanding variable interactions across classes
Neural network with logistic then max indicator:
- easier to code, straightforward to train
- single monolith model
- sometimes wants more data
- sometimes slow to train
Random Forest
- single monolith model
- less noisy output, targets mean, robust
- does well with interactions
- does well with moderate data
So I have to ask, what is the nature of your data (type, input element description, input ensemble description)? Is it words, numbers, images? Do you have 10, 40, or 100 samples per output category?
| null | CC BY-SA 4.0 | null | 2023-04-14T18:22:05.727 | 2023-04-14T18:22:05.727 | null | null | 22452 | null |
612965 | 2 | null | 612786 | 0 | null | The "effect size" you have in mind is the one discussed on [this page](https://stats.stackexchange.com/q/182099/28500), expressed in general as:
$$\text{Effect Size} = \dfrac{\text{Difference of Means}}{\text{Standard Deviation}}\text{.}$$
The values you show for `Effect` are presumably what you want to include in the numerator. The question then is what to use for the denominator, the Standard Deviation (SD).
One possibility, which you note in a comment, is to use the root-mean-square error from the model. That estimates the SD of a single "likability" score and is a typical choice in a simple model for the difference in means between independent groups.
From your description, it seems that your `Effect` values might be more related to paired comparisons within individuals. In that case, you probably should be using an SD estimate related to the paired differences. If that's the case and the pairing helped, you could be better off getting the SD estimates from the test results that you show, which would be related to the distribution of those paired differences.
As [this answer](https://stats.stackexchange.com/a/182101/28500) shows, you can translate a standard error (SE) to an SD estimate if you multiply the SE by the square root of the number of observations. From the last row of your report, the combination of t-statistic and p-value suggests that you have about 470 observations for that row. Somewhere in the model there presumably is information about the degrees of freedom used for the t-tests reported with p-values of 0.
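As a one-line sketch of that SE-to-SD conversion (the 0.05 SE below is a made-up value; 470 echoes the observation count inferred above):

```python
import math

def se_to_sd(se, n):
    """Standard error of a mean back to the underlying SD: SD = SE * sqrt(n)."""
    return se * math.sqrt(n)

# Illustrative: an SE of 0.05 based on roughly 470 observations implies
sd = se_to_sd(0.05, 470)   # an SD of about 1.08
```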
I'm not familiar with MEMOIRE, so it's possible that there's some additional subtlety here (like adjusting p-values for multiple comparisons). Also, remember that this type of effect-size calculation ignores the error in the SD estimate itself, which can be substantial.
Two final cautions. First, if repeated measures are handled by random effects in a mixed model, then there is the issue of how to combine the within-individual and among-individual SDs for use in the denominator above. If that's how MEMOIRE treats the repeated measures, then you need to take that into account. Second, if you are using a generalized linear model, then you want a different type of "effect size." See the links from [this page](https://stats.stackexchange.com/a/546988/28500), and the extensive discussion on [this page](https://stats.stackexchange.com/q/603792/28500).
| null | CC BY-SA 4.0 | null | 2023-04-14T18:43:53.560 | 2023-04-14T18:43:53.560 | null | null | 28500 | null |
612966 | 2 | null | 438709 | 0 | null | See this [post](https://stackoverflow.com/questions/29381505/why-does-word2vec-use-2-representations-for-each-word).
To summarize the post above, the goal of word2vec is to compute word embeddings such that semantically similar words are embedded closer to each other, and conversely, dissimilar words are embedded farther from each other.
In the neural network formulation, that means we minimize $u_c^T v_c$, because a word $c$ is rarely in its own context. This minimization is not feasible if we constrain $u$ to be the same as $v$.
| null | CC BY-SA 4.0 | null | 2023-04-14T18:48:35.840 | 2023-04-14T18:48:35.840 | null | null | 52200 | null |
612967 | 2 | null | 612962 | 5 | null | In most cases, one does not calculate the p-value for a two-tailed chi-square test. In most cases, one computes two-tailed p-values from PDFs symmetric around the mean (e.g., normal or Student's t). Exceptions are provided in comments below.
With df $< 90$ the chi-square PDF is not symmetric, it is skewed to the right. To get the `0.6321` (the upper tail), one would do:
`1 - pchisq(0.22922, df=1, lower.tail = T)`
which yields an identical result as:
`pchisq(0.22922, df=1, lower.tail = F)`.
That is, upper and lower tail add up to 1, by definition.
In response to your comment below, about the `prop.test` function giving a different result for the following 2x2 table of counts (successes in 1st, failures in 2nd column; group 1 in 1st, group 2 in 2nd row):
>
O <- matrix(c(9, 5, 17, 13), nrow = 2)
O
[,1] [,2]
[1,] 9 17
[2,] 5 13
With `prop.test`, this can be tested in a one-sided manner (prop. in first group is larger than in second group, vice versa), or in a two-sided manner (proportions are not equal between groups). The latter can be tested using a chi-square distribution, where the expected counts (given the null hypothesis of equal group counts) are compared with observed counts ($\sum \frac{(O-E)^2}{E}$). Expected counts under a null-hypothesis of equal proportions (14 successes out of 44 trials, 26 trials out of 44 in first group) would be:
>
E <- matrix(c((14/44)*c(26, 18), (30/44)*c(26, 18)), nrow = 2)
E
[,1] [,2]
[1,] 8.272727 17.72727
[2,] 5.727273 12.27273
>
sum((O - E)^2/E)
[1] 0.2292226
Note how this quantity evaluates the difference between expected and observed (thus, a two-sided test), not whether the observed number of successes is higher or lower in one group than the other (a one-sided test). It thus yields the same result as:
>
prop.test(O, correct = F, alternative = "two.sided")
2-sample test for equality of proportions without continuity correction
data: O
X-squared = 0.22922, df = 1, p-value = 0.6321
alternative hypothesis: two.sided
95 percent confidence interval:
-0.2077665 0.3445186
sample estimates:
prop 1 prop 2
0.3461538 0.2777778
Note that if one specifies a one-sided test for equal proportions (`alternative = "less"` or `alternative = "greater"`), the same test statistic is used (i.e., the same value for `X-squared` is reported). However, under the hood the code then uses the square root of the test statistic and a normal distribution to obtain the p-value:
>
pnorm(sqrt(0.22922), lower.tail = TRUE)
[1] 0.6839486
pnorm(sqrt(0.22922), lower.tail = FALSE)
[1] 0.3160514
Because if $Z$ follows a standard normal distribution, then $Z^2$ follows a chi-square distribution with one degree of freedom.
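That $Z^2 \sim \chi^2_1$ connection can be verified without R, e.g. with Python's standard library (a small sketch; the `erfc` route is just one convenient way to get these tails):

```python
import math

def chi2_sf_df1(x):
    """Upper tail of chi-square with 1 df via the normal connection:
    P(Chi2_1 > x) = P(|Z| > sqrt(x)) = erfc(sqrt(x / 2))."""
    return math.erfc(math.sqrt(x / 2.0))

def norm_sf(z):
    """Upper tail of the standard normal, P(Z > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

x = 0.22922
p_two_sided = chi2_sf_df1(x)              # ~0.6321, as reported by prop.test
p_check = 2.0 * norm_sf(math.sqrt(x))     # identical, by Z^2 ~ Chi2_1
```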
| null | CC BY-SA 4.0 | null | 2023-04-14T18:48:47.023 | 2023-04-16T17:35:49.130 | 2023-04-16T17:35:49.130 | 173546 | 173546 | null |
612968 | 2 | null | 612957 | 0 | null | In this situation, the "survival days" for all individuals is the difference between the last observation date and the date of entry into the study. To that extent, you are correct in your calculation for Patient B. Remember, however, that all you know is that Patient B survived at least that long. Thus that is a right-censored observation.
What's important is to include for each patient, along with those "survival day" values, an indicator of whether there was an event at the end of that time period (Patient A) or if the individual had not yet experienced the event (Patients B and C). In the R [survival package](https://cran.r-project.org/package=survival), that's done with a `Surv()` object that, in its simplest form, combines the survival time with the event indicator.
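To make the bookkeeping concrete, here is a minimal sketch of the `(time, event)` pairs a `Surv()` object encodes (all dates below are hypothetical, not taken from the figure):

```python
from datetime import date

# Hypothetical study dates (the real ones would come from the linked figure).
study_end = date(2023, 12, 31)
patients = {
    "A": {"entry": date(2023, 1, 1), "event_date": date(2023, 6, 1)},  # event observed
    "B": {"entry": date(2023, 1, 1), "event_date": None},              # survived study
    "C": {"entry": date(2023, 3, 1), "event_date": None},              # censored at end
}

def surv_record(p):
    """(time_in_days, event): the pair a Surv(time, event) object encodes.
    event = 1 means the event was observed; event = 0 means right-censored."""
    if p["event_date"] is not None:
        return ((p["event_date"] - p["entry"]).days, 1)
    return ((study_end - p["entry"]).days, 0)

records = {name: surv_record(p) for name, p in patients.items()}
# Patient B: time = study_end - entry, event = 0 (right-censored)
```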
| null | CC BY-SA 4.0 | null | 2023-04-14T19:03:16.233 | 2023-04-14T19:03:16.233 | null | null | 28500 | null |
612969 | 1 | null | null | 0 | 20 | I have ordinal data on happiness of citizens from multiple countries (from the European Value Study) and I have continuous data on the GDP per capita of multiple countries from the World Bank. Both of these variables are measured at multiple time points.
I want to test the hypothesis that countries with a low GDP per capita will see more of an increase in happiness with an increase in GDP per capita than countries that already have a high GDP per capita.
My first thought to approach this is that I need to make two groups; 1) countries with low GDP per capita, 2) countries with high GDP per capita. Then, for both groups I need to calculate the correlation between (change in) happiness and (change in) GDP per capita. Lastly, I need to compare the two correlations to check for a significant difference.
I am stuck, however, on how to approach the correlation analysis. For example, I don't know how (and whether I even have) to include the repeated measures from the different time points at which the data was collected. If I just base my correlations on a single time point, I feel like I am not really testing my research question, since it concerns an increase in happiness and an increase in GDP, which is a change over time.
If anyone has any suggestions on the right approach, I would be very thankful! Maybe I am overcomplicating it (wouldn't be the first time)!
| How to analyse and compare a correlation between two variables over time between two groups? | CC BY-SA 4.0 | null | 2023-04-14T19:05:36.827 | 2023-04-14T19:05:36.827 | null | null | 385733 | [
"regression",
"correlation",
"repeated-measures",
"ordinal-data",
"association-measure"
] |
612970 | 1 | 613296 | null | 5 | 97 | I'm curious about the basis functions generated by the call to "s()" function with default parameter values, but even more specifically I'm curious about a smoother for a single variable varname that would explicitly have a linear term for that variable (a term perfectly correlated with that single variable on the entire real line). I'd like that linear term for the purposes of leveraging certain penalized regression techniques later on that would allow to determine whether the variable is best modeled linearly (via that one linear term) or non-linearly (via the entire spline).
I know that, by default, s() delivers a thin plate spline. Presume I use k=4 for the dimension of the spline; then it gives me 3 spline terms ("s(varname).1", "s(varname).2", "s(varname).3"). I don't fully understand how these are calculated, but I noticed that the last term (in this case, "s(varname).3") is perfectly correlated with the original variable varname. This pattern repeats itself for other k values (e.g. for k=6 it would be "s(varname).5" that's perfectly correlated with varname) and for other variables.
Would sincerely appreciate if someone could:
- Confirm that the last term of a thin plate spline generated by default "s()" function call for a single numeric variable will always be perfectly correlated with that variable
- Shed more light on precisely how these basis functions are calculated in case of a default "s()" function call for a single numeric variable
- Provide examples of other, non-default, spline types - among those provided by s() function - that explicitly include a linear term for the original variable (if any)?
| Spline basis explicitly including a linear term; basis functions generated by the default call to "s()" function of mgcv package | CC BY-SA 4.0 | null | 2023-04-14T19:12:50.460 | 2023-04-18T08:44:41.030 | 2023-04-14T21:45:44.150 | 86342 | 86342 | [
"generalized-additive-model",
"splines",
"mgcv",
"smoothing",
"basis-function"
] |
612971 | 2 | null | 612958 | 1 | null | In general, you compute a counterfactual by performing three steps:
- Abduction: Use evidence $E=e$ to determine the value of $U.$
- Action: Modify the model, $M,$ by removing the structural equations for the variables in $X$ and replacing them with the appropriate functions $X=x,$ to obtain the modified model, $M_x.$
- Prediction: Use the modified model, $M_x,$ and the value of $U$ to compute the value of $Y,$ the consequence of the counterfactual.
(from Causal Inference in Statistics: A Primer, by Pearl, Glymour, and Jewell, p. 96).
Normally you would not already be given the values of $U$ as part of the observation; since they are provided here, the abduction step is essentially already done. For Step 2 (Action), the modified model with $X1=-1$ looks like
\begin{align*}
X1&=-1\\
X2&=-5+5U2\\
X3&=-5+5X2+5U3
\end{align*}
Now for Step 3 (Prediction), we plug in the values, normally computed, of $U2=0.2$ and $U3=0.2,$ to obtain the final results:
\begin{align*}
X1&=-1\\
X2&=-4\\
X3&=-24.
\end{align*}
Now for the intervention approach, you actually don't compute the values of the $U$ at all, but the query you wrote we normally (pun intended, of course) interpret as $E[X2, X3\mid\text{do}(X1=-1)].$ To compute this, while we definitely perform graph surgery by removing all arrows into $X1$ and replacing it with $-1,$ we must bank on the distributions given to us. Given the standard normals for $U1, U2,$ and $U3,$ and the SCM
\begin{align*}
X1&=-1\\
X2&=-5+5U2\\
X3&=-5+5X2+5U3,
\end{align*}
as before, we get the distributions
\begin{align*}
X2&\sim\mathcal{N}(-5,\,25)\\
X3&\sim\mathcal{N}(-30,\,650),
\end{align*}
showing that $E[X2\mid\text{do}(X1=-1)]=-5$ and
$E[X3\mid\text{do}(X1=-1)]=-30.$
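A small simulation sketch of the two computations (written to match the SCM above; the seed and sample size are arbitrary): the counterfactual reuses the abducted noise and is fully deterministic, while the interventional query keeps the noise random and only fixes $X1$.

```python
import random
random.seed(42)

def scm(u1, u2, u3, do_x1=None):
    """Evaluate the SCM above; do_x1 overrides the structural equation for X1."""
    x1 = 5 * u1 if do_x1 is None else do_x1
    x2 = 5 * x1 + 5 * u2
    x3 = 5 * x1 + 5 * x2 + 5 * u3
    return x1, x2, x3

# Counterfactual: reuse the abducted noise (0.2, 0.2, 0.2), intervene X1 = -1.
cf = scm(0.2, 0.2, 0.2, do_x1=-1)   # deterministic: (-1, -4, -24)

# Interventional: fresh noise each draw, intervene X1 = -1, then average.
n = 100_000
draws = [scm(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1), do_x1=-1)
         for _ in range(n)]
mean_x2 = sum(d[1] for d in draws) / n   # ~ -5
mean_x3 = sum(d[2] for d in draws) / n   # ~ -30
```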
| null | CC BY-SA 4.0 | null | 2023-04-14T19:19:02.417 | 2023-04-14T19:19:02.417 | null | null | 76484 | null |
612973 | 1 | null | null | 0 | 35 | I am running a cox regression for survival analysis using the coxph() function in R for a very large dataset. My model is set up as:
Surv(time,event) ~ age + race + sex +...
The study period is 1 year. We found that age had a different effect in the first six months (phase 1) than it did in the last 6 months (phase 2). Age's effect being time dependent violates our PH assumption. To account for this, we thought of adding a time-dependent interaction term. The term is:
Agephase2 = (if time < 6 months ~ 0, if time >= 6 months ~ age)
The idea was to add this in the model as follows:
Surv(time,event) ~ Age + Agephase2 + race+ sex+...
Where the Age coefficient would represent the effect of age in phase 1, while Agephase2 would represent the effect of age in phase 2. However, the output of our model is showing very large effects that are opposite to what we anticipated, and the effects of all other covariates are severely altered.
Is there something wrong with this approach? How else could we account for time-dependent covariates in our model, where their effects differ in two periods? Should we just create two different models? When I researched time-dependent covariates for Cox models, I didn't find much, and what I did find was pretty complex.
| Time Dependent Interaction Term in Cox Regression | CC BY-SA 4.0 | null | 2023-04-14T20:05:58.997 | 2023-04-15T02:27:10.040 | null | null | 368419 | [
"survival",
"cox-model",
"time-varying-covariate"
] |
612974 | 1 | 612985 | null | 1 | 39 | I have a response variable that takes on only 5 distinct values: -15, -5, 0, 5, 15. These values are obtained in a clearly categorical manner, meaning: if event A happens, then the outcome is 5, if event B happens, then the outcome is -15, etc... So the eventual response value for a certain observation isn't a result of some cumulative counting, but rather a pure assignment of a numerical value to a categorical outcome. With that being said, there's a clear ordering and a clearly defined distance between the numerical values being assigned to different categories, and that distance properly reflects the difference in their inherent values (unlike in some other cases of ordinal data, where only the order is known, but not necessarily the distance between the categories). Last note is that the marginal distribution of the response values is heavily asymmetrical here: the negative values (-5, -15) are much rarer than the non-negative ones, with 0 being the most frequent value by far, and 15 is the 2nd, and 5 is a distant 3rd:
[](https://i.stack.imgur.com/2EMvm.png)
My question is: what kind of modeling approaches exist out there for such a response type? Would it be the [generalized ordered logit model](https://www3.nd.edu/%7Erwilliam/gologit2/UnderStandingGologit2016.pdf), where I'd focus on modeling the probabilities of each distinct value, rather than modeling the value directly? But if so, could it account for the known distances between the categories? Or maybe there might be a Bayesian approach that allows modeling an expected value of an arbitrary discrete variable with 5 numerical values (based on each category's probability and designated value)?
PS: I have already tried a variety of classic linear regression techniques with a continuous response assumption, and the fit diagnostics are quite terrible, as expected. I'm attaching diagnostic plots from fitting a thin-plate spline regression with multiple predictors.
[](https://i.stack.imgur.com/4jS6F.png)
| Models for a discrete numerical response with only 5 distinct values | CC BY-SA 4.0 | null | 2023-04-14T20:56:02.210 | 2023-04-15T09:13:16.627 | 2023-04-15T02:01:44.653 | 28500 | 86342 | [
"discrete-data",
"ordered-logit",
"discrete-distributions"
] |
612975 | 1 | null | null | 1 | 20 | I am looking at how sex impacts judicial decision making and am trying to balance my dataset where my treatment variable is named "gender.judge" (0 for control group, males, and 1 for treatment group, females). I want to balance the data on these relevant variables: jcs, confirmation year, party, racial minority, and judicial experience.
By using this code, I am able to create and add weights to my dataset. However, when I use the summarize function to assess whether the balancing was successful, I am left with different weighted means in the treatment and control groups.
Why is my entropy balancing not successfully weighting the covariates?
My code:
```
library(ebal)
library(data.table)
eb <- ebalance(Treatment = abortion$gender.judge,
X = cbind(abortion$jcs,
abortion$confirm.yr,
abortion$party.judge,
abortion$minority.judge,
abortion$jud.experience))
#all the treated observations have weight=1
abortion_treat <- abortion |>
filter(gender.judge==1) |>
mutate(weights=1)
#control observations have weight based on the entropy balancing
abortion_con <- abortion |>
filter(gender.judge==0) |>
mutate(weights= eb$w)
abortion_balanced<- bind_rows(abortion_treat, abortion_con)
#Verifying that the groups were properly balanced
abortion_balanced |>
group_by(gender.judge) |>
summarize(weighted.mean(jcs, weights),
weighted.mean(party.judge, weights),
weighted.mean(confirm.yr, weights),
weighted.mean(minority.judge, weights),
weighted.mean(jud.experience,weights))
```
| Entropy Balancing Did Not Create Balanced Means (R) | CC BY-SA 4.0 | null | 2023-04-14T21:05:35.293 | 2023-04-14T21:05:35.293 | null | null | 385736 | [
"entropy"
] |
612977 | 2 | null | 475461 | 2 | null | The Exponential family of distributions is special: the distribution suggested in your plot is equally (a) a shifted Exponential or (b) a truncated Exponential.
When $F$ is any (cumulative) distribution function, the transformation to the function
$$x \to F(x;\mu) = F(x-\mu)$$
literally shifts the entire graph of $F$ by $\mu$ units to the right. On the other hand the truncated version of $F$, truncated to the region $[\mu,\infty),$ is the part of $F$ defined on the values $x\ge \mu,$ renormalized to rise from $0$ at the left of $\mu$ to $1,$ as any distribution function must do. Therefore the truncated function is
$$x \to \tilde F(x;\mu) = \frac{F(x) - F(\mu)}{1 - F(\mu)}\ \text{when}\ x \ge \mu\ \text{and}\ 0\ \text{otherwise.}$$
When these are the same function of $x,$ equating them and expressing the relation in terms of the survival function $S = 1-F,$ we find after simple algebra that when $x\ge \mu,$
$$S(x) = S(x-\mu)S(\mu)$$
This is a defining property of an exponential function, so if it is to hold generally (for arbitrary $\mu$), necessarily $S(x)$ is proportional to $e^{-\lambda x}$ for some positive number $\lambda$ whenever $x \ge 0.$ The density is proportional to
$$f(x) = -\frac{\mathrm d}{\mathrm d x}S(x)\ \propto\ \lambda e^{-\lambda x},$$
exactly as expressed in the question. (Note that $f(x)\equiv 0$ for all negative $x$ when $f$ is an exponential density.)
This gives us two different, but equivalent, ways to understand any exponentially decaying distribution tail.
The issue in the application appears to be that the first few channels have not been measured. Thus, the graph shows a truncated distribution. In effect, its maximum (were it observable) indeed would occur at $x=0.$ Nevertheless, now that we have seen this distribution will be identical to a shifted exponential distribution and since (clearly) all standard exponential distributions have a peak density ("mode") at $0,$ the mode of this truncated/shifted distribution is found by shifting $0$ by $\mu$ units to the right.
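The shift/truncation equivalence (i.e., memorylessness) can also be checked numerically with a quick standard-library simulation (the rate and truncation point below are arbitrary):

```python
import random
random.seed(7)

lam, mu = 1.5, 0.8   # arbitrary rate and truncation point
n = 200_000

# Draw Exponential(lam), keep only the part >= mu, then shift back by mu.
survivors = [x for x in (random.expovariate(lam) for _ in range(n)) if x >= mu]
shifted = [x - mu for x in survivors]

# By the identity S(x) = S(x - mu) S(mu), the shifted sample should again be
# Exponential(lam); in particular its mean should be close to 1/lam.
mean_shifted = sum(shifted) / len(shifted)
```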
| null | CC BY-SA 4.0 | null | 2023-04-14T21:32:17.060 | 2023-04-14T21:32:17.060 | null | null | 919 | null |
612978 | 1 | null | null | 2 | 60 | Question: In the table at the bottom of this question, the total number of times that $\hat{P}(x;z) > \hat{P}_y(x;z) $ and $\hat{P}(y;z)>\hat{P}_x(y;z)$ is 7.
- Can someone please replicate this result and explain what procedure they used and why?
---
Note: The table is from [Tversky (1972)](https://psycnet.apa.org/journals/rev/79/4/281/), and
- $\hat{P}_y(x;z)$ and $\hat{P}_x(y;z)$ comes from asking the subject to choose an object from the triple $\{x,y,z\}$ 30 times
- $\hat{P}(x;z)$ comes from asking the subject to choose an object from the pair $\{x,z\}$ 20 times
- $\hat{P}(y;z)$ comes from asking the subject to choose an object from the pair $\{y,z\}$ 20 times
I tried doing Chi-Squared tests for the proportions, but I did not come up with the correct results.
---
Also, the exact statement from the paper that I am asking how to replicate is: (emphasis mine)
>
Out of 16 individual comparisons in each task (two per subject), equation 16 was satisfied in 13 and 15 cases, respectively, in Tasks B and C (p < .05 in each case5), and only in 7 cases in Task A. Essentially the same result was found in additional analyses.
Where equation 16 is [](https://i.stack.imgur.com/MrfOE.png)
[](https://i.stack.imgur.com/30cUN.png)
---
Here is the table for task B in case it is useful.
- Note in the quote he says there was success in 13 cases in task B, and gives a p-value
[](https://i.stack.imgur.com/2cWtx.png)
| How to Replicate result that probabilities in included table are different 7 times? | CC BY-SA 4.0 | null | 2023-04-14T23:08:32.020 | 2023-04-19T14:23:32.447 | 2023-04-19T14:23:32.447 | 106860 | 106860 | [
"hypothesis-testing",
"self-study",
"p-value",
"nonparametric",
"replicability"
] |
612980 | 1 | null | null | 1 | 8 | Statistics help please!!
I gave subjects a multiple choice test with 3 options.
There were 2 types of questions in the test (easy and hard). The incorrect answer options were either lures or non-lures. I'm interested in whether the type of question (easy/hard) and the type of error the student made (did they pick the lure or the non-lure) interact. I have low- and high-performing students.
Can I simply do a 2 (between: student type) x 2 (within: question type) x 2 (within: type of error) ANOVA, using the proportion out of all errors made for each type of question as the DV (e.g., I have the proportion of responses for Easy Lure, Easy Non-Lure, Hard Lure, Hard Non-Lure)?
| Repeated measures ANOVA on proportions of errors | CC BY-SA 4.0 | null | 2023-04-14T23:42:51.923 | 2023-04-14T23:42:51.923 | null | null | 385741 | [
"regression",
"anova",
"assumptions"
] |
612981 | 1 | 612984 | null | 0 | 54 | I am using ggsttaplot - I am curious how to get effect size for each pair:
```
# install.packages("tidyverse") # for everything ;)
library(tidyverse)
# install.packages("ISLR")
library(ISLR)
# install.packages("ggstatsplot")
library(ggstatsplot)
# stabilize the output of "sample_n()"
set.seed(1)
d <- Wage %>% group_by(education) %>% sample_n(50, replace = TRUE)
p<- ggbetweenstats(
data = d,
x = education,
y = wage,
type = "nonparametric")
p
# a list of tibbles containing statistical analysis summaries
extract_stats(p)[1]
```
The plot shows epsilon squared: 0.3, but I am interested in the effect size for each pair. I can't find any function or library that will calculate it.
| Pairwise Effect size for Dunns Test | CC BY-SA 4.0 | null | 2023-04-15T00:04:30.903 | 2023-04-16T12:21:25.320 | null | null | 123823 | [
"r",
"effect-size"
] |
612982 | 2 | null | 611913 | 0 | null | My understanding as of now is that no prewhitening is necessary since the dependent variable already lacks auto-correlation or trend; in fact, it may actually wind up masking relevant lag relationships. The CCF is a tool used to help identify possible lags that might be useful in whatever model you are building, it's not the model itself. I think I was simply confused about how I should be using them.
| null | CC BY-SA 4.0 | null | 2023-04-15T00:20:35.227 | 2023-04-15T00:20:35.227 | null | null | 384949 | null |
612983 | 1 | 613060 | null | 1 | 51 | Let $X \sim N(0, \Sigma)$ be a multivariate normal vector, and let our prior for $\Sigma$ be inverse-Wishart: $\Sigma\sim IW(v,V)$.
The posterior for $\Sigma$ given $X$ is [also inverse-Wishart](https://en.wikipedia.org/wiki/Conjugate_prior#When_likelihood_function_is_a_continuous_distribution):
$$\Sigma | X = x \sim IW(1 + v, V + x'x).$$
I'm interested in the case in which we observe only some elements of $X$. I.e., we observe $X_c \sim N(0, \Sigma_c)$, where $\Sigma_c$ is a submatrix of $\Sigma$. The posterior density for the full covariance matrix $\Sigma$ given an observation $X_c$ will be
$$
p(\Sigma| X_c = x_c)
\propto |\Sigma_c|^{-\frac{1}{2}} \exp\left(-\frac{1}{2} x_c' \Sigma_c^{-1} x_c\right)f(\Sigma),
$$
where $f(\Sigma)$ is the prior density of $\Sigma$.
Is there a prior on $Σ$ such that this posterior is conjugate for any set of observed elements of $X$?
| The posterior covariance matrix with missing data | CC BY-SA 4.0 | null | 2023-04-15T01:38:58.377 | 2023-04-15T19:55:25.967 | 2023-04-15T01:45:19.310 | 362671 | 161943 | [
"bayesian",
"missing-data"
] |
612984 | 2 | null | 612981 | 2 | null | I assume you're using Dunn (1964) test, that would be used as a post-hoc for a Kruskal-Wallis test ?
One approach would be to use an effect size statistic that's appropriate for a Wilcoxon-Mann-Whitney test, in a pairwise manner. These effect size statistics include Vargha and Delaney’s A, Cliff’s delta, and Glass rank biserial correlation coefficient, among others.
With the caveat that I wrote it, there is a function in the rcompanion package that does just this.
```
Y = c(1,2,3,2,3,4,4,5,6)
Group = c(rep("A",3), rep("B", 3), rep("C", 3))
Data = data.frame(Group, Y)
library(rcompanion)
multiVDA(Y ~ Group, data=Data)
### Comparison VDA CD rg VDA.m CD.m rg.m
### 1 A - B 0.2220 -0.556 -0.556 0.7780 0.556 0.556
### 2 A - C 0.0000 -1.000 -1.000 1.0000 1.000 1.000
### 3 B - C 0.0556 -0.889 -0.889 0.9444 0.889 0.889
```
Addition:
A few useful references for relevant effect size statistics.
Tomczak and Tomczak. 2014. The need to report effect size estimates revisited. Trends in Sport Sciences 1(21). [www.tss.awf.poznan.pl/files/3_Trends_Vol21_2014__no1_20.pdf](http://www.tss.awf.poznan.pl/files/3_Trends_Vol21_2014__no1_20.pdf)
King, B.M., P.J. Rosopa, and E.W. Minium. 2000. Statistical Reasoning in the Behavioral Sciences, 6th. Wiley.
Grissom, R.J. and J.J. Kim. 2011. Effect Sizes for Research: Univariate and Multivariate Applications, 2nd.
Routledge.
Cohen, J. 1988. Statistical Power Analysis for the Behavioral Sciences, 2nd Edition. Routledge.
Vargha, A. and H.D. Delaney. A Critique and Improvement of the CL Common Language Effect Size Statistics of McGraw and Wong. 2000. Journal of Educational and Behavioral Statistics 25(2):101–132.
My own thoughts:
Mangiafico, S. 2016. "Two-sample Mann–Whitney U Test" in
Summary and Analysis of Extension Program Evaluation in R. [rcompanion.org/handbook/F_04.html](https://rcompanion.org/handbook/F_04.html).
Mangiafico, S. 2016. "Kruskal–Wallis Test" in
Summary and Analysis of Extension Program Evaluation in R.
[rcompanion.org/handbook/F_08.html](https://rcompanion.org/handbook/F_08.html)
| null | CC BY-SA 4.0 | null | 2023-04-15T01:49:41.730 | 2023-04-16T12:21:25.320 | 2023-04-16T12:21:25.320 | 166526 | 166526 | null |
612985 | 2 | null | 612974 | 1 | null | This seems like a situation where ordinal regression would be appropriate. See Chapter 13 of Frank Harrell's [Regression Modeling Strategies](https://hbiostat.org/rmsc/ordinal.html) or this [UCLA web page](https://stats.oarc.ucla.edu/r/dae/ordinal-logistic-regression/) for the commonly used ordinal logistic regression.
Ordinal regression ignores the "known distance between the categories," but it's not immediately clear how important those known distances are, except for their ordering. It's not clear (to me, at least) what the "generalized" ordered logit model would add to that, and so far as I can tell that type of model also doesn't take the "known distance between the categories" into account.
In response to comments:
First, you aren't limited to a proportional-odds model. The R [ordinal package](https://cran.r-project.org/package=ordinal) provides several alternatives for ordinal regression.
Second, the latent-variable interpretation of a cumulative-link ordinal regression provides something very close to "as x increases, the expected value of the response will change by ...". The idea is that there is some continuous underlying (latent) unobserved variable associated with the predictors, and that when you cross a threshold you jump into the next observed category. See Section 2.4 of the [vignette on cumulative link models](https://cran.r-project.org/web/packages/ordinal/vignettes/clm_article.pdf) for the `ordinal` package, and illustrations as in Figure 1 of the vignette.
In that interpretation, a regression coefficient represents the change in the expected value of the latent variable for a change in the associated predictor. The intercepts are related to the thresholds. With your data, analysis might benefit from the ability to specify a "symmetric" structure of thresholds in the `ordinal` package (although I don't have experience with that myself).
This presumably can be handled by a Bayesian approach under a specific probability model, but that's outside my expertise.
| null | CC BY-SA 4.0 | null | 2023-04-15T02:18:22.257 | 2023-04-15T09:13:16.627 | 2023-04-15T09:13:16.627 | 28500 | 28500 | null |
612986 | 2 | null | 612973 | 1 | null | The way to handle step changes in Cox model coefficients over time is discussed in Section 4.1 of the R [time-dependence vignette](https://cran.r-project.org/web/packages/survival/vignettes/timedep.pdf). It's not immediately clear about what is wrong with your approach, but the accepted way to handle this is to reformat the data into strata based on values prior to and after 6 months, then fit a model with an interaction between the predictor of interest (age, here) and the time-group strata.
| null | CC BY-SA 4.0 | null | 2023-04-15T02:27:10.040 | 2023-04-15T02:27:10.040 | null | null | 28500 | null |
612987 | 2 | null | 166610 | 0 | null | The model does not know what "successful" and "unsuccessful" mean. All the model knows is the $0$$/$$1$ encoding you pass to it (which has an [equivalence](https://stats.stackexchange.com/a/612574/247274) with a $\pm1$ encoding, if you prefer that). Consequently, if you change that $0$ and $1$ mean, the model is ignorant of this and keeps doing what you trained it to do.
Perhaps think of it this way: you train a regression model that makes extremely accurate predictions of distances in meters, but then you realize that your American customer wants the output in feet. Do you use the model output and tell your customer that is how many feet you predict (and be off by a factor of three-and-a-bit), or do you convert to feet?
| null | CC BY-SA 4.0 | null | 2023-04-15T03:07:37.233 | 2023-04-15T03:07:37.233 | null | null | 247274 | null |
612988 | 2 | null | 597969 | 1 | null | I would expect the issue to go away as you got an enormous amount of data, especially if you used a flexible model that can figure out patterns (e.g., deep learning). You can think of the probability prediction in terms of Bayes' theorem, where you have a prior probability (the class ratio) and get the posterior probability as your prediction. As the data size gets large, the prior probability ought to get overwhelmed by the data, allowing an unusual case to scream out as being particularly likely to belong to the minority category, regardless of prior probability. Thus, whether you do artificial balancing or not, the model should figure out the truth. While you do not have to go full-Bayesian to view the problem in these terms, it might help you to think of how even a rather strong prior distribution gets [overwhelmed by a huge amount of contradictory data](https://stats.stackexchange.com/a/588644/247274) in maximum a posteriori estimation.
(Now, it might be that such an event is unusual no matter what, but a good model trained on a huge data set should catch this.)
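To make the prior-gets-overwhelmed intuition concrete, here is a toy conjugate Beta-binomial sketch (not a full classifier; the pseudo-counts and the 30% minority rate are my own illustrative choices). The prior says the minority class is nearly impossible, yet the posterior mean converges to the data's class rate as $n$ grows:

```python
# prior Beta(1, 999): prior mean 0.001 for the minority class,
# i.e. the minority class looks nearly impossible a priori
a0, b0 = 1, 999  # pseudo-counts for (minority, majority)

for n in (10, 1_000, 100_000):
    k = int(0.3 * n)  # the data actually contain 30% minority cases
    post_mean = (a0 + k) / (a0 + b0 + n)
    print(n, round(post_mean, 3))  # approaches 0.3 as n grows
```

The same dynamic is what lets a flexible model trained on enormous data override the base-rate "prior" for an unusual case.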
| null | CC BY-SA 4.0 | null | 2023-04-15T03:15:16.857 | 2023-04-15T03:15:16.857 | null | null | 247274 | null |
612989 | 2 | null | 572231 | 1 | null | The paper I'm looking for [appears to be](https://www.jstor.org/stable/2951752)
Newey, W. K. (1994). The Asymptotic Variance of Semiparametric Estimators. Econometrica, 62(6), 1349–1382. [https://doi.org/10.2307/2951752](https://doi.org/10.2307/2951752)
The fourth-root convergence assumption appears in the discussion following Assumption 5.1
| null | CC BY-SA 4.0 | null | 2023-04-15T03:25:34.410 | 2023-04-15T03:25:34.410 | null | null | 249135 | null |
612990 | 2 | null | 472314 | 1 | null | FOURIER SERIES
"Not a machine learning algorithm," you say? I disagree. A Fourier series is a linear regression with infinite (nonlinear) features.
$$
\hat y_i = \hat a_0 + \hat a_1\cos\left(
\dfrac{
2\pi \times 1x_i
}{
P
}
\right)
+ \hat b_1\sin\left(
\dfrac{
2\pi \times 1x_i
}{
P
}
\right)
+ \hat a_2\cos\left(
\dfrac{
2\pi \times 2x_i
}{
P
}
\right)
+ \hat b_2\sin\left(
\dfrac{
2\pi \times 2x_i
}{
P
}
\right) +\dots
$$
Even with a finite number of features, you can get as close as you want to the modulo function, which is, ultimately, just a [sawtooth function](https://en.wikipedia.org/wiki/Sawtooth_wave) that we know a Fourier series can approximate well almost everywhere.
For the kind of periodic behavior that the modulo function exhibits, trigonometric functions and Fourier series seem like a natural place to look.
EDIT
If you want to have the period be variable (as in $z(x, y) = x\mod y$), perhaps consider an interaction term in the Fourier regression.
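As a concrete sketch of the fixed-period case (plain Python; the period and the number of series terms are arbitrary choices of mine), the truncated Fourier series already tracks $x \bmod P$ closely away from the discontinuities:

```python
import math

def mod_fourier(x, P=1.0, K=500):
    # Partial Fourier series of the sawtooth:
    # x mod P = P/2 - (P/pi) * sum_{k>=1} sin(2*pi*k*x/P) / k
    s = sum(math.sin(2 * math.pi * k * x / P) / k for k in range(1, K + 1))
    return P / 2 - P / math.pi * s

for x in (0.25, 2.7, 5.4):
    print(x, round(mod_fourier(x), 3), round(x % 1.0, 3))
```

Near the jumps at integer multiples of $P$ the partial sum exhibits the usual Gibbs overshoot, but everywhere else the approximation error shrinks as $K$ grows.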
| null | CC BY-SA 4.0 | null | 2023-04-15T03:30:19.680 | 2023-04-15T03:38:53.267 | 2023-04-15T03:38:53.267 | 247274 | 247274 | null |
612991 | 1 | 613226 | null | 8 | 191 | If $X$ and $Y$ have the same distribution and $Y=g(X)$ where $g$ is monotonically increasing, then $Y=X$ almost surely. It seems obvious, but how to prove it?
| Prove $Y=X$ almost surely given they have the same distribution and $Y$ is an increasing function of $X$ | CC BY-SA 4.0 | null | 2023-04-15T03:49:43.477 | 2023-04-17T18:20:26.843 | 2023-04-15T04:14:56.173 | 20519 | 216170 | [
"probability",
"distributions",
"mathematical-statistics"
] |
612992 | 1 | null | null | 0 | 4 | The data I have seeks to understand whether an individual's new job is of high quality. I have an individuals
`old job wage: how much they were paid before`,
`new job wage: how much they were paid in new job`, and
`new job duration: how many days they are in the new job before quitting or going somewhere else`.
I would like to use these metrics to come up with a measure of how good the new job is. Ideas I had included
- Relative change of old and new job wage
- Combining 1) Relative Change with new job duration
Is there a way I can use these 3 variables to measure new job quality? Thanks.
| What is a way to estimate job match quality using new job wage, old job, wage, and how long a new job was had? | CC BY-SA 4.0 | null | 2023-04-15T04:39:01.480 | 2023-04-15T04:39:01.480 | null | null | 108150 | [
"inference",
"dataset",
"descriptive-statistics"
] |
612994 | 1 | null | null | 1 | 29 | I am looking for advice (do not have a specific example regarding data) but am wondering, when working with any dataset that is missing, at what point/percentage would you consider using something like multiple imputation to account for missing data?
I am aware that there are assumptions that need to hold before proceeding with multiple imputation, but in general what percentage of missing data would you consider to be too much?
What other tools/procedures would you recommend apart from multiple imputation?
| when working with missing data, what percentage of data is considered too much missing before implementing something like imputation? | CC BY-SA 4.0 | null | 2023-04-15T04:44:03.567 | 2023-04-15T14:14:31.377 | null | null | 328519 | [
"missing-data",
"data-imputation",
"multiple-imputation"
] |
612995 | 1 | null | null | 0 | 21 | Suppose that we have a stochastic process $v_t$ for $t=1,\ldots,T$.
Here, we do not impose any assumption of the serial correlation, but the constant variance assumption is required.
That is, the stochastic process satisfies $Var(v_t)=\sigma^2$ for all $t$.
Is there any well-known stochastic process that has constant variance?
| Well-known stochastic process having constant variance over time | CC BY-SA 4.0 | null | 2023-04-15T04:54:04.270 | 2023-04-15T04:54:04.270 | null | null | 375224 | [
"variance",
"stochastic-processes"
] |
612996 | 1 | null | null | 1 | 21 | Given the linear model:
$y=\beta x+\varepsilon $
an estimator for the slope was suggested:
$\widehat{\beta} =\frac{\sum y}{\sum x}$
I wish to determine if it is linear and if it is unbiased.
Clearly, it is linear, no problem here:
$\widehat{\beta}=\frac{1}{\sum x}\cdot y_{1}+\frac{1}{\sum x}\cdot y_{2}+...+\frac{1}{\sum x}\cdot y_{n}$
As for bias: I have calculated the expected value and found that it is equal to the real slope, i.e., it is unbiased.
However, there is a proof saying that a linear estimator is unbiased if the sum of all coefficients of y is 0 (and the sum of the coefficients of y multiplied by x is 1).
Now,
$\frac{1}{\sum x} + \frac{1}{\sum x} + \frac{1}{\sum x} + ... + \frac{1}{\sum x}$
is not 0.
What am I missing here?
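For what it's worth, a quick simulation (with an arbitrary true slope and fixed design, both my own choices) agrees with the expected-value calculation that the estimator is unbiased:

```python
import random

random.seed(0)
beta = 2.0                     # assumed true slope, my choice
x = [1.0, 2.0, 3.0, 4.0, 5.0]  # assumed fixed design, my choice

estimates = []
for _ in range(20_000):
    y = [beta * xi + random.gauss(0, 1) for xi in x]
    estimates.append(sum(y) / sum(x))  # the proposed estimator sum(y)/sum(x)

mean_est = sum(estimates) / len(estimates)
print(round(mean_est, 3))  # close to the true slope 2.0
```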
| Bias of a linear estimator, different results with different methods | CC BY-SA 4.0 | null | 2023-04-15T06:09:30.910 | 2023-04-15T06:09:30.910 | null | null | 73819 | [
"estimation",
"econometrics",
"unbiased-estimator"
] |
612997 | 1 | 613030 | null | 0 | 18 | I want to estimate how likely a disease is associated with symptom (dummy hypothesis). Say that I want to assess which of avian flu, swine flu, and common flu is more commonly associated with fever. In this case, I can use binary logistic regression
However, it's also possible to say that "I want to estimate if fever is more common in avian flu, swine flu, or common flu", in which I should use multinomial logistic regression.
The problem is, if I use multinomial logistic regression, when I use avian flu as the reference and compare avian flu vs common flu, the estimates differ from when I invert the reference category, i.e., use common flu as the reference and compare common flu vs avian flu. This makes interpreting the results harder.
In this case, is it valid to use the first hypothesis ("which disease is more commonly associated with fever") and use binary logistic regression instead? Changing the reference category of an independent variable is much easier than changing the reference category of the dependent variable.
Which is the best approach? Thanks in advance.
| Choosing between multinomial logistic regression or binary logistic regression for interchangeable variables | CC BY-SA 4.0 | null | 2023-04-15T06:51:33.867 | 2023-04-15T14:52:30.357 | null | null | 234366 | [
"logistic",
"estimation",
"references",
"multinomial-logit"
] |
612999 | 1 | null | null | 2 | 52 | I'm very new here and am struggling to interpret the model. Please help me in layman's terms.
```
AR - GJR-GARCH Model Results
====================================================================================
Dep. Variable: GD R-squared: -0.003
Mean Model: AR Adj. R-squared: -0.004
Vol Model: GJR-GARCH Log-Likelihood: -3572.12
Distribution: Standardized Student's t AIC: 7168.24
Method: Maximum Likelihood BIC: 7236.93
No. Observations: 2261
Date: Sat, Apr 15 2023 Df Residuals: 2257
Time: 07:18:04 Df Model: 4
Mean Model
=============================================================================
coef std err t P>|t| 95.0% Conf. Int.
-----------------------------------------------------------------------------
Const 0.0688 2.281e-02 3.017 2.555e-03 [2.411e-02, 0.114]
GD[1] -0.0134 2.114e-02 -0.635 0.526 [-5.485e-02,2.801e-02]
GD[2] -0.0327 2.011e-02 -1.626 0.104 [-7.210e-02,6.716e-03]
GD[3] 6.0716e-03 1.971e-02 0.308 0.758 [-3.255e-02,4.470e-02]
Volatility Model
=============================================================================
coef std err t P>|t| 95.0% Conf. Int.
-----------------------------------------------------------------------------
omega 0.1078 4.361e-02 2.473 1.341e-02 [2.236e-02, 0.193]
alpha[1] 0.0322 1.903e-02 1.691 9.074e-02 [-5.108e-03,6.947e-02]
alpha[2] 4.1672e-14 1.602e-02 2.602e-12 1.000 [-3.139e-02,3.139e-02]
gamma[1] 0.0394 2.891e-02 1.364 0.172 [-1.722e-02,9.611e-02]
gamma[2] 0.1528 3.636e-02 4.202 2.651e-05 [8.151e-02, 0.224]
beta[1] 9.4508e-03 4.481e-02 0.211 0.833 [-7.837e-02,9.727e-02]
beta[2] 0.7992 5.555e-02 14.386 6.313e-47 [ 0.690, 0.908]
Distribution
========================================================================
coef std err t P>|t| 95.0% Conf. Int.
------------------------------------------------------------------------
nu 5.2877 0.574 9.215 3.121e-20 [ 4.163, 6.412]
========================================================================
```
I have read many articles and papers online and concluded the following, but I am not sure if I'm right.
Here, the p-value for gamma[1] is 0.172 (which is greater than 0.05), hence gamma[1] is not statistically significant, and we cannot conclude that there is a significant leverage effect.
However, the p-value for gamma[2] is less than 0.05, indicating that gamma[2] is statistically significant.
Therefore, we can conclude that the model has captured significant leverage effects.
Also, I want to write generic code that covers all the scenarios, like:
- What if gamma[1] is +ve and gamma[2] is -ve?
- What if we have n gammas in the model; in that case, can I follow the same approach?
- If we have n gammas and they are a mix of +ve and -ve, how can we conclude then?
| How can I interpret the below GJR-GARCH model in terms of "leverage effects"? | CC BY-SA 4.0 | null | 2023-04-15T07:48:42.667 | 2023-04-15T11:05:51.230 | 2023-04-15T08:46:47.353 | 385690 | 385690 | [
"interpretation",
"garch",
"finance"
] |
613003 | 2 | null | 612999 | 0 | null | According to [V-Lab](https://vlab.stern.nyu.edu/docs/volatility/GJR-GARCH), the conditional variance equation of a GJR-GARCH model (extended from one to two lags in an obvious way) is
$$
\sigma_t^2=\omega+(\alpha_1+\gamma_1 I_{t-1})\varepsilon_{t-1}^2+(\alpha_2+\gamma_2 I_{t-2})\varepsilon_{t-2}^2+\beta_1\sigma_{t-1}^2+\beta_2\sigma_{t-2}^2
$$
where $I_{t-j}=1$ if $\varepsilon_{t-j}<0$ and $I_{t-j}=0$ otherwise, for $j=1,2$. (You may want to check if Python uses the same representation. In the following, I will assume it does.) Now, you assume
$$
\sigma_t^2=\omega+(\alpha_1+\gamma_1 I_{t-1})\varepsilon_{t-1}^2+(\alpha_2+\gamma_2 I_{t-2})\varepsilon_{t-2}^2+\beta_1\sigma_{t-1}^2+\beta_2\sigma_{t-2}^2
$$
with $\gamma_1 > 0$ and $\gamma_2 < 0$.
The equation tells you how a shock $\varepsilon_t$ affects the conditional variance one and two periods ahead, $\sigma_{t+1}^2$ and $\sigma_{t+2}^2$. Just insert the specific value of the shock into the formula, and you readily get an answer. (You may need to shift the time index by 1 or 2 for each member of the equation, but that is straightforward.)
With $n$ $\gamma$s, the logic is still the same. I do not know what exactly we could conclude from that, but I find such a pattern of $\gamma$s unlikely to be observed in practice, so the question itself might not be all that relevant.
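To make the "insert the shock into the formula" step concrete, here is a small sketch in plain Python (not the estimation package itself; the coefficient values are just the point estimates from the question's output, and the lagged variances are set to 1 purely for illustration):

```python
def gjr_garch_var(omega, alphas, gammas, betas, eps_hist, sig2_hist):
    # One-step conditional variance under the V-Lab convention:
    # the indicator is 1 when the lagged shock is negative.
    var = omega
    for a, g, e in zip(alphas, gammas, eps_hist):
        var += (a + g * (e < 0)) * e ** 2
    for b, s2 in zip(betas, sig2_hist):
        var += b * s2
    return var

params = dict(omega=0.1078, alphas=[0.0322, 0.0],
              gammas=[0.0394, 0.1528], betas=[0.0095, 0.7992])

v_pos = gjr_garch_var(eps_hist=[+1.0, 0.0], sig2_hist=[1.0, 1.0], **params)
v_neg = gjr_garch_var(eps_hist=[-1.0, 0.0], sig2_hist=[1.0, 1.0], **params)
print(round(v_pos, 4), round(v_neg, 4))  # the negative shock raises the variance more
```

The same function works for any number of lags, which is why the logic does not change with $n$ $\gamma$s.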
| null | CC BY-SA 4.0 | null | 2023-04-15T09:14:39.220 | 2023-04-15T11:05:51.230 | 2023-04-15T11:05:51.230 | 362671 | 53690 | null |
613005 | 1 | null | null | 5 | 89 | Using gpower, I would like to calculate the sample size to validate a bug fix.
There is a software bug that appears in 3 out of 1000 test runs.
I would like to validate the bug fix, i.e., show with a degree of confidence that the bug no longer appears.
My question is what kind of "test family" do I choose for that?
I am a bit overwhelmed by the number of choices.
Also, when trying the different test families I never seem to enter the failure rate (0.003), which is crucial to make the prediction, right?
I can enter the following parameters
- power of 0.80
- alpha / err prob of 0.05
- effect size of 0.2
For fun, I also asked chatgpt for an answer and it gave back a sample size that does take into account the failure rate of 0.003
```
n = (Zbeta + Zalpha)^2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2)^2
```
Plugging in the values, we get:
```
n = (0.84 + 1.96)^2 * ((0.003 * (1 - 0.003)) + (0 * (1 - 0))) / (0.003 - 0)^2
n = 1472.39
```
Thanks for setting up this beginner! I would like to master gpower a bit better for my daily work.
If gpower isn't suited for this at all that would be an answer as well :)
Update 20230416
I think I can now zoom in on a better question.
Given that a test fails 3 out of 1000, what is the chance it fails 0 out of 1000?
And then, what is the chance it fails 99% of the time 0 out of 1000?
So the question is more:
How do I prove that the previous failure rate has gone?
Rather than to prove my bug fix is perfect, since you can't prove perfection in a limited time.
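A quick sanity check of the arithmetic behind this update (plain Python, not G*Power; the 1% threshold below is just an example choice of mine):

```python
import math

p = 0.003              # historical failure rate
n = 1000
p_zero = (1 - p) ** n  # chance of 0 failures in 1000 runs if the bug is NOT fixed
print(round(p_zero, 4))

# runs needed so that 0 failures would occur with probability < 1%
# if the true failure rate were still 0.003
n_needed = math.ceil(math.log(0.01) / math.log(1 - p))
print(n_needed)
```

So seeing 0 failures in 1000 runs is already fairly unlikely (about 5%) under the old rate, and roughly 1500 clean runs would push that below 1%.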
| Determine the sample size to validate a bug fix | CC BY-SA 4.0 | null | 2023-04-15T09:47:56.370 | 2023-04-16T21:21:37.657 | 2023-04-16T08:34:10.507 | 385758 | 385758 | [
"statistical-power",
"effect-size",
"gpower"
] |
613006 | 2 | null | 612335 | 2 | null |
Your problem is related to the [F-test for equality of variances](https://en.m.wikipedia.org/wiki/F-test_of_equality_of_variances), because the sums of exponentially distributed variables that you have are, up to scale, chi-squared, just like sample variances of normally distributed variables.
We can instead use
$$F = \frac{\bar{Y}}{\bar{X}} = \frac{n}{m} (T^{-1} -1) \sim F(2m,2n)$$
This is the distribution of $F$ if the null hypothesis is correct.
If the hypothesis $\lambda_1 = \lambda_2$ is wrong then the distribution will be like a scaled F-distribution. Or more easily we use [Fisher's z- distribution](https://en.m.wikipedia.org/wiki/Fisher%27s_z-distribution) for the statistic $Z = 0.5 \log F$, where the alternative hypothesis is a shift of the distribution.
$$f_Z(z;d_1=2m,d_2=2n) = \frac{2 d_1^{d_1/2}d_2^{d_2/2}}{B(d_1/2,d_2/2)} \frac{e^{d_1 z}}{(d_1e^{2z}+d_2)^{(d_1+d_2)/2}}$$
where $B$ is the beta function.
Why do I suggest to use the statistic $Z$ that follows Fisher's z-distribution?
- Because the alternative hypothesis $\lambda_1 \neq \lambda_2$ relates to a shift of the distribution and the likelihood ratio is equal to $$\Lambda(z) = \frac{f_Z(z;2m,2n)}{f_Z(0;2m,2n)}$$ $f_Z(z;2m,2n)$ is the likelihood when we use $\lambda_0$ and $f_Z(0;2m,2n)$ (the peak of the z-distribution) is the likelihood when we use independent $\lambda_1 = 1/\bar{X}$ and $\lambda_2 = 1/\bar{Y}$.
- The effect is that the critical region for the statistic $Z$ can be found by using the highest density region for the distribution $f_Z$
Demonstration with code:
Say that $m=1$ and $n=5$, then the boundaries are
$$\begin{array}{rcccl}
&&Z & \in& [-1.628 , 0.971] \\
e^{2Z}&=& F& \in& [0.03857, 6.98385] \\
(1+\frac{m}{n}F)^{-1}&=& T& \in& [0.4172 , 0.9923]
\end{array}$$
[](https://i.stack.imgur.com/yD3AD.png)
```
m = 1
n = 5
set.seed(1)
### Fisher's z-distribution density
dz = function(z,d1,d2) {
2*d1^(d1/2) * d2^(d2/2) * exp(d1*z) / beta(d1/2,d2/2) / (d1*exp(2*z)+d2)^((d1+d2)/2)
}
dz = Vectorize(dz)
### compute and plot null-distribution of z
z = seq(-4,4,0.0001)
delta = 0.0001
f = dz(z,2*m,2*n)
plot(z,f, type = "l", main = "null-distribution of z \n with 95% highest density boundary", lwd = 2)
### compute 95% highest density region
### by ranking the densities
### and select the lowest that sum up to 5%
ord = order(f)
rejectregion = which(cumsum(f[ord]*delta)<=0.05)
zb = range(z[ord][-rejectregion])
lines(c(zb[1],zb[1]),c(0,1), lty = 2, col = 2, lwd = 2)
lines(c(zb[2],zb[2]),c(0,1), lty = 2, col = 2, lwd = 2)
###### computation of boundary values
###### for different statistics z, F and T
### -1.6276 0.9718
zb
### 0.03857311 6.98384762
F = exp(2*zb)
F
### 0.9923444 0.4172283
T = 1/(1+m/n*F)
T
### check with the formula
### both are around 0.007367
T[1]^n*(1-T[1])^m
T[2]^n*(1-T[2])^m
### computational test
sample = function(m,n) {
X = sum(rexp(n,1))
Y = sum(rexp(m,1))
T = X/(X+Y)
return(T)
}
Tsample = replicate(10^5,sample(m,n))
hist(Tsample, breaks = seq(0,1,0.01), main = "histogram of T with 95% boundary lines")
lines(c(T[1],T[1]),c(0,100000), lty = 2, col = 2, lwd = 2)
lines(c(T[2],T[2]),c(0,100000), lty = 2, col = 2, lwd = 2)
x = seq(0,1,0.01)
lines(x,dbeta(x,n,m)*10^5/100,col = 3)
### check 95% 0.9496603
pbeta(T[1],n,m)-pbeta(T[2],n,m)
```
| null | CC BY-SA 4.0 | null | 2023-04-15T09:58:55.257 | 2023-04-15T10:05:27.757 | 2023-04-15T10:05:27.757 | 164061 | 164061 | null |
613007 | 1 | null | null | 0 | 39 | I want to show an inverted U-shape relationship between two variables: "minutes spent in a room A" and "trustworthiness in others". The hypothesis is that those who have low and high trustworthiness are the ones who spend the least amount of time in room A, whereas those with medium level-trustworthiness spend the most time in that room. The inverted U shape relationship is visible when I plot the two variables (x=trustworthiness and y=minutes spent in a room A).
To statistically test this inverted U-shape, I calculated a polynomial regression in R using the poly function. I have been reading about the different arguments that the function can have, depending on whether the linear and quadratic regressors should be considered as orthogonal or raw regressors.
The output for the "raw" polynomial regression is as follows: [](https://i.stack.imgur.com/ehyeG.png)
The output for the "orthogonal" polynomial regression is as follows: [](https://i.stack.imgur.com/8CUTZ.png)
Now, reading through questions (and answers) of others, in my model the linear and quadratic regressors seem to be highly correlated, as the raw and orthogonal outputs are vastly different in their p-values and beta weights. I would interpret that in the "raw" model both predictors are significant, but not on their own; their significance depends on both regressors being in the model (because they are strongly correlated, r = 0.94). In the "orthogonal" model, the regressors are considered independently. Therefore one could conclude that the orthogonal model is the right choice and the result shows that the relationship is significantly better described by a quadratic fit than a linear fit, so a U-shape relationship seems fair. (Note: calculating the linear fit alone (lm(y~x)) results in an insignificant fit with an R-squared under 1%, whereas in the polynomial fits it goes up to 18%, which is still not fantastic, but better.)
However, when interpreting the beta-weights, I would still need to use the coefficients from the raw model in order to make sensible predictions. So if one cannot interpret the beta-weights in the orthogonal models because in fact the two regressors are inherently correlated, can the above conclusion about better fit really be made?
PS: the QQ plots for the residuals as well as the normality checks using Shapiro-Wilk indicate that the prerequisites for the regression model are met.
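One reassuring point on the "better fit" question: raw and orthogonal polynomials span the same column space, so the fitted values (and hence R-squared and the overall model fit) are identical; only the coefficients are parameterized differently. A small pure-Python check of this (on simulated inverted-U data of my own invention, not your dataset):

```python
import random

random.seed(42)
n = 60
x = [random.uniform(0, 10) for _ in range(n)]
y = [5 + 2 * xi - 0.2 * xi ** 2 + random.gauss(0, 1) for xi in x]  # inverted U + noise

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

X = [[1.0, xi, xi ** 2] for xi in x]
cols = [[row[j] for row in X] for j in range(3)]

# raw fit: solve the normal equations (X'X) b = X'y by Gauss-Jordan elimination
A = [[dot(ci, cj) for cj in cols] for ci in cols]
b = [dot(c, y) for c in cols]
for i in range(3):
    piv = A[i][i]
    A[i] = [v / piv for v in A[i]]
    b[i] /= piv
    for k in range(3):
        if k != i:
            f = A[k][i]
            A[k] = [vk - f * vi for vk, vi in zip(A[k], A[i])]
            b[k] -= f * b[i]
raw_fit = [dot(row, b) for row in X]

# orthogonal fit: Gram-Schmidt the columns, then project y onto each in turn
basis = []
for c in cols:
    u = c[:]
    for v in basis:
        coef = dot(u, v) / dot(v, v)
        u = [ui - coef * vi for ui, vi in zip(u, v)]
    basis.append(u)
orth_fit = [0.0] * n
for v in basis:
    coef = dot(y, v) / dot(v, v)
    orth_fit = [f + coef * vi for f, vi in zip(orth_fit, v)]

max_diff = max(abs(a - o) for a, o in zip(raw_fit, orth_fit))
print(max_diff)  # essentially zero: the two parameterizations give identical fits
```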
| interpreting polynomial regression output when the regressors are orthogonal (vs. raw) | CC BY-SA 4.0 | null | 2023-04-15T10:22:07.600 | 2023-04-15T15:33:44.820 | null | null | 385603 | [
"regression-coefficients",
"polynomial",
"orthogonal"
] |
613008 | 1 | null | null | 0 | 21 | I have daily mean temperature for 20 years for 5 different locations. How can I compare the time series statistically?
The locations are all in UK so the seasonality is the same.
I want to see if the series show any differences; in the best case, I would like to test whether they are statistically different.
| What is the correct method to compare time series? | CC BY-SA 4.0 | null | 2023-04-15T10:36:27.153 | 2023-04-17T09:34:12.917 | 2023-04-17T09:34:12.917 | 355434 | 355434 | [
"time-series",
"hypothesis-testing"
] |
613009 | 1 | 613021 | null | 2 | 97 | I did a little experiment and saw that the `gmdh` algorithm works very well for these toy data, much better than `random forest`.
```
X <- matrix(data = c(rnorm(2000)), ncol = 5, nrow = 500)
colnames(X) <- c("a", "b", "c", "d", "e")
Y <- c(10 + X[, "a"] * X[, "e"]^3)
par(mfrow=c(2,1),mar=c(2,2,2,2))
matplot(X,t="l",lty=1, main="data")
plot(Y,t="l", main="target")
tr <- 1:450
ts <- 451:500
library(GMDHreg)
gmdh <- gmdh.gia(X = X[tr,], y = Y[tr], prune = 5, criteria = "PRESS")
gmdh_pred <- predict(gmdh, X)[,1]
lines(gmdh_pred ,col=4)
library(randomForest)
rf <- randomForest(Y[tr]~.,X[tr,],ntree=500)
rf_pred <- predict(rf,X)
lines(rf_pred ,col=2)
par(mfrow=c(1,1),mar=c(2,2,2,2))
plot(Y[ts],t="l",lwd=10,col="gray70")
lines(gmdh_pred[ts],col=4,lwd=2)
lines(rf_pred[ts],col=2,lwd=2)
legend(x = "topright",
legend = c("original", "GMDH", "RandomForest"),
col = c(8,4,2), lwd = 2)
```
[](https://i.stack.imgur.com/DnFVF.jpg)
I'm interested in:
- Why does gmdh work better?
- Is it possible to make random forest work better?
- For which data is it better to use the first algorithm and for which the second?
| Why does GMDH work better than a random forest? | CC BY-SA 4.0 | null | 2023-04-15T10:40:11.673 | 2023-04-16T02:21:48.187 | 2023-04-16T02:21:48.187 | 509 | 303632 | [
"r",
"regression",
"machine-learning",
"random-forest"
] |
613011 | 2 | null | 611417 | 3 | null | >
Where does this randomness come from?
From the temperature. See [What is the "temperature" in the GPT models?](https://ai.stackexchange.com/q/32477/4). Note that even a temperature set to 0 [doesn't guarantee](https://stackoverflow.com/q/75946090/395857) a deterministic result.
On [https://platform.openai.com/playground](https://platform.openai.com/playground) one may change the temperature:
[](https://i.stack.imgur.com/untTP.png)
| null | CC BY-SA 4.0 | null | 2023-04-15T11:05:05.380 | 2023-05-22T10:47:37.643 | 2023-05-22T10:47:37.643 | 12359 | 12359 | null |
613012 | 1 | null | null | 0 | 15 | I am looking at the difference in mean reaction time change between 2 populations. Although I subtracted the new reaction time from the original(there was mostly a reduction in reaction time across the population) there are still some negative mean change values.
Shall I proceed in using these negative values in linear regression models and Mann-Whitney U tests or do I need to transform the data?
P.S. Reaction time is non-normal and usually < 0.1 s.
| Transformation of negative reaction time values? | CC BY-SA 4.0 | null | 2023-04-15T11:09:24.030 | 2023-04-15T11:26:48.103 | 2023-04-15T11:26:48.103 | 22047 | 374911 | [
"p-value",
"data-transformation"
] |
613013 | 2 | null | 610542 | 2 | null | >
How can I compare the semantic similarity of the answer it provides me with a reference question?
With a text generation metric. See [Evaluation of Text Generation: A Survey](https://arxiv.org/abs/2006.14799). Typical metrics: [TF-IDF cosine](https://datascience.stackexchange.com/q/120901/843), [Rouge](https://stackoverflow.com/q/47045436/395857), Bleu, BertScore, Sentence-Bert, and more recently, [GPTScore](https://arxiv.org/pdf/2302.04166.pdf) and [G-Eval](https://arxiv.org/abs/2303.16634). Note that they have some serious limitations when evaluating GPT output {1,2}.
---
References:
- {1} Goyal, Tanya, Junyi Jessy Li, and Greg Durrett. "News Summarization and Evaluation in the Era of GPT-3." arXiv preprint arXiv:2209.12356 (2022).
- {2} Tianyi Zhang, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, Tatsunori B. Hashimoto. Benchmarking Large Language Models for News Summarization. arXiv:2301.13848.
| null | CC BY-SA 4.0 | null | 2023-04-15T11:09:50.253 | 2023-04-16T03:16:20.300 | 2023-04-16T03:16:20.300 | 12359 | 12359 | null |
613014 | 1 | null | null | 1 | 13 | I am reading the PYIN[1] research paper, and I am having difficulty understanding stage 2. I have looked at the Librosa[2] implementation, however, I would like to clarify my understanding.
From Equation 6 in the PYIN[1] paper, it appears that $(1-\sum_k P^*_k)$ is the difference between unity and the sum of the voiced probabilities assigned to discrete frequency bins within a frame.
Any help would really be appreciated as I am new to Hidden Markov Models.
Thank you.
[1] Mauch, M. and Dixon, S., 2014, May. pYIN: A fundamental frequency estimator using probabilistic threshold distributions. In 2014 ieee international conference on acoustics, speech and signal processing (icassp) (pp. 659-663). IEEE.
[2] Librosa.org. (2014). librosa.core.pitch — librosa 0.10.1dev documentation. [online] Available at: [https://librosa.org/doc/main/_modules/librosa/core/pitch.html#pyin](https://librosa.org/doc/main/_modules/librosa/core/pitch.html#pyin) [Accessed 15 Apr. 2023].
| Understanding stage 2 of the PYIN algorithm | CC BY-SA 4.0 | null | 2023-04-15T11:57:01.097 | 2023-04-15T12:19:48.737 | 2023-04-15T12:19:48.737 | 385763 | 385763 | [
"python",
"hidden-markov-model"
] |
613015 | 1 | 613024 | null | 1 | 145 | I recently came across a problem where I want to predict a timeseries let's refer to it as T1 but it seems that two other time series, let's call them T2 and T3 have some predictive power on the T1 time series.
Now the question came up: I can either use multivariate analysis and include the series T2 and T3 as endogenous variables, or I can include them as exogenous. I have been reading around here and there, but I did not come up with a clear answer on when to use which approach, or which is more suitable given my data.
I would also be interested in the details on how SARIMA (for example) treats endogenous vs exogenous variables and how these choices influence the forecasting of my main time series T1.
I hope everything was clear. Thanks a lot and please let me know if there is already some answer to that!
| Multivariate time series forecasting: Endogenous vs Exogenous | CC BY-SA 4.0 | null | 2023-04-15T12:04:51.617 | 2023-04-16T10:31:47.053 | 2023-04-16T10:31:47.053 | 310850 | 310850 | [
"machine-learning",
"time-series",
"forecasting",
"multivariate-analysis"
] |
613016 | 2 | null | 612721 | 7 | null | The scenario in your example is actually better described not as "model selection" problem (where you have to decide between two models to describe the entire data) but rather as an [Empirical Bayes](https://en.wikipedia.org/wiki/Empirical_Bayes_method) method applied to a [hierarchical model](https://en.wikipedia.org/wiki/Multilevel_model).
Specifically you assume that $\theta$ has a mixture distribution
$$ \theta \sim p_f\delta(1/2) + (1-p_f)\text{Beta}(\theta|\alpha,\beta)$$
if for example you choose a Beta distribution to describe $p(\theta|\mathcal M_{loaded})$. Then you can use this to estimate the model parameters $p_f,\alpha,\beta$ by calculating the marginal likelihood:
$$P(\text{data}| p_f,\alpha,\beta) = \prod_i \int d\theta_i P(\text{data}|\theta_i)\times P(\theta_i | p_f,\alpha,\beta). $$
The "empirical" aspect in "empirical Bayes" refers to using point estimates of those parameters (for example by maximum likelihood) as a prior for a particular $\theta_i$ (the "low" level in the hierarchy), for example if you have count data $k_i \sim \text{Binomial}(n_i,\theta_i)$ then you would calculate the posterior probability of $\theta_i$ as
$$ P(\theta_i|k_i,n_i) \propto \theta_i^{k_i} (1-\theta_i)^{n_i-k_i} P(\theta_i | \hat p_f,\hat \alpha,\hat \beta)$$
where a "hat" over a parameter denotes its point estimate${^1}$.
It is also possible to treat this in a "fully Bayesian" way by assigning a prior $\pi(p_f,\alpha,\beta)$ to the high level parameters
and marginalizing, using the full posterior distribution:
$$ P(\theta_i|\text{data}) \propto \theta_i^{k_i} (1-\theta_i)^{n_i-k_i} \int d\alpha\int d\beta \int dp_f P(\theta_i|p_f,\alpha,\beta)\times P(p_f,\alpha,\beta | \text{data}_{-i}) $$
Where
$$P(p_f,\alpha,\beta | \text{data}_{-i}) \propto \int d\theta_1 ... \int d\theta_n \prod_{j\ne i} \theta_j^{k_j} (1-\theta_j)^{n_j-k_j} P(\theta_j|p_f,\alpha,\beta)\pi(p_f,\alpha,\beta)$$
is the posterior distribution of the hyperparameters. Being completely rigorous indeed requires excluding the coin of interest from the data; however, for large datasets this might be a negligible effect. Calculating the integrals in hierarchical models can usually be done only numerically (for example, the Beta distribution does not have a simple conjugate prior). However, when the dataset is large enough we can expect the posterior probability to become concentrated around the point estimates, such that the full calculation reduces to the empirical one. This is the justification for using empirical Bayes methods as an approximation.
---
${^1}$ Using the Beta distribution is convenient since it is the conjugate prior of the Binomial distribution, so the marginal likelihood can be calculated analytically:
$$ \int d\theta \theta^k (1-\theta)^{n-k}\times
(p_f\delta(1/2) + (1-p_f)\text{Beta}(\theta|\alpha,\beta)) $$
$$=p_f\frac{1}{2^n} + (1-p_f)\frac{B(\alpha+k,\beta+n-k)}{B(\alpha,\beta)}$$
where $B(\cdot,\cdot)$ is the Beta function. The maximum likelihood estimates are
$$\hat p_f,\hat \alpha,\hat \beta = \underset{p_f,\alpha,\beta}{\text{argmax}} \sum_i \log \left( p_f\frac{1}{2^{n_i}} + (1-p_f)\frac{B(\alpha+k_i,\beta+n_i-k_i)}{B(\alpha,\beta)} \right)$$
and the posterior distribution of $\theta_i$ is
$$P(\theta_i|k_i,n_i) = \tilde p_f \delta(1/2) + (1-\tilde p_f)\text{Beta}(\theta_i|\hat \alpha+k_i, \hat \beta+n_i-k_i)$$
where
$$\tilde p_f = \frac{\hat p_f\frac{1}{2^{n_i}}}{ \hat p_f\frac{1}{2^{n_i}} + (1-\hat p_f)\frac{B(\hat \alpha+k_i,\hat \beta+n_i-k_i)}{B(\hat \alpha,\hat \beta)}} $$
is the posterior probability of the coin being fair.
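As a rough numerical sketch of the last two formulas (all function names and parameter values here are hypothetical, and the computation is done in log space to avoid underflow):

```python
import math

def log_beta(a, b):
    # log of the Beta function via log-gamma
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def posterior_fair(k, n, p_f, alpha, beta):
    """Posterior probability that a coin showing k heads in n flips is fair,
    under the mixture prior p_f * delta(1/2) + (1 - p_f) * Beta(alpha, beta)."""
    log_fair = math.log(p_f) - n * math.log(2)
    log_loaded = (math.log(1 - p_f)
                  + log_beta(alpha + k, beta + n - k)
                  - log_beta(alpha, beta))
    m = max(log_fair, log_loaded)   # stabilize the exponentials
    w_fair = math.exp(log_fair - m)
    return w_fair / (w_fair + math.exp(log_loaded - m))

print(posterior_fair(k=50, n=100, p_f=0.5, alpha=2, beta=2))  # near-fair data
print(posterior_fair(k=90, n=100, p_f=0.5, alpha=2, beta=2))  # heavily loaded coin
```

With 50 heads in 100 flips the fair component dominates; with 90 heads it is essentially ruled out.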
| null | CC BY-SA 4.0 | null | 2023-04-15T12:26:36.257 | 2023-04-15T18:52:03.620 | 2023-04-15T18:52:03.620 | 348492 | 348492 | null |
613019 | 1 | null | null | 0 | 38 | I am a beginner R user and admittedly only have relatively basic statistics knowledge. But I am keen to learn
I have a patient data set in 2 groups (n = 50; group 1, n = 35, group 2, n = 15), who were measured for 'C' at 6 time intervals (0 = 0 months, 3 = 3 months and so on), every 3 months. The time measurement is not continuous - rather, samples collected within +/- a month of 3 months are included in the 3-month time point, for example. I am primarily interested in determining if there is a difference in 'C' at each time point between the two groups. But I have been asked to also determine if there is an interaction between Group and Time. However, I'm unsure of the type of post-hoc testing I can do.
Due to the nature of patient data collection, I have a few data points missing and so was advised to do a mixed-effects model for my data rather than a repeated measures two-way ANOVA.
I did this on both Prism and using lmer in R. Both programmes show me that there is no interaction between Group and Time although both have a significant effect on the measurement of 'C' independent of each other. Both give me the same predicted mean values for each group at each time point.
I then wanted to determine the differences in 'C' at each time point between the two groups. On Prism, this can be done easily by doing multiple comparisons with Sidak or FDR p-value adjustment when running the mixed-effects model. Prism uses the observed mean values between the two groups to do this.
With lmer, I used the emmeans package to determine differences between the two groups at each time point. But this uses the estimated marginal means rather than the observed values and so the determined p-values seem to be affected by the amount of missing data points at each time point.
Is it incorrect to do a multiple comparison test using the observed group means rather than the estimated marginal means after I've done lmer?
The purpose of the lmer was to determine if there was an interaction between Group and time and since this doesn't exist, can I proceed to use either paired t-test or one-way ANOVA to determine differences in 'C' between groups across time?
Also, is there a way to do "Šídák's multiple comparisons test" on R?
| Mixed effects Model in Prism versus lmer in R - Post-Hoc testing | CC BY-SA 4.0 | null | 2023-04-15T13:14:21.497 | 2023-04-15T13:48:21.077 | 2023-04-15T13:48:21.077 | 385766 | 385766 | [
"r",
"mixed-model",
"lme4-nlme",
"post-hoc"
] |
613020 | 1 | null | null | 0 | 39 | If X and Y are two independent Weibull distributions with the same shape parameter, what distribution is the min(X, Y).
I am trying to find out the hazard ratio for the following case.
If I model Z=min(X, Y) with 2 independent Weibull distributions and C=min(A, B) with 2 independent Weibull distributions, and all have the same shape parameter (say 2) (and hence satisfy the proportional hazards assumption), and the hazard ratio for X vs A is lambda and the hazard ratio for Y vs B is also lambda, what is the hazard ratio for Z vs C?
| What is the minimum of 2 Weibull distributions with the same shape parameter | CC BY-SA 4.0 | null | 2023-04-15T13:49:44.503 | 2023-04-15T13:49:44.503 | null | null | 385768 | [
"distributions",
"hazard",
"weibull-distribution",
"proportional-hazards",
"shape-parameter"
] |
613021 | 2 | null | 613009 | 6 | null | While not many details are provided, here is a quick take for the three questions posed:
- No single algorithm is ever guaranteed to outperform another; if that were the case, it would invalidate the whole "No free lunch theorem" notion. That said, the data generating process (DGP) here appears to be of polynomial form, so unsurprisingly the Group Method of Data Handling (GMDH) is competitive, as it is in some sense a polynomial neural network.
- In this case, we can increase mtry, the number of variables randomly sampled as candidates at each split, and obtain more competitive results. Given that only 2 out of 5 variables are relevant, a candidate tree might in many cases otherwise contain only irrelevant variables, so examining an increased number of variables can be very helpful.
- Again, there is no universal answer here, but as mentioned in point 1, if we have a good understanding of the DGP, a method that aligns with that DGP is a good first choice.
| null | CC BY-SA 4.0 | null | 2023-04-15T13:57:56.630 | 2023-04-15T13:57:56.630 | null | null | 11852 | null |
613022 | 1 | null | null | 0 | 7 | I have a problem analyzing two variables of my experiments. Variable 1 is the Signal to noise ratio, that is the relevance of my signal of interest with respect to the noise, just a noise measure. Variable 2 is the score, the precision of my algorithm. I have seen that when the SNR is low, the algorithm very usually shows a low precision, and for high SNR it shows very good precision. However, from my experimental data, that is not very clear, and I don't know how to demonstrate that correlation because in the correlation plot, it looks like a step-like scatter.
Here is the picture
[](https://i.stack.imgur.com/8zyPi.png)
To be honest, when I looked at the specific experiments with low SNR, the signals are also a bit weird, but they most probably show a low score.
Do you think this indicates a correlation between the two variables, or is the more likely explanation that there is no real score-SNR correlation, but signals with low SNR are simply more likely to produce abnormal results?
Thank you
| Step-like correlation. Is that possible? | CC BY-SA 4.0 | null | 2023-04-15T14:11:45.727 | 2023-04-15T14:11:45.727 | null | null | 365263 | [
"correlation",
"partial-correlation"
] |
613023 | 2 | null | 612994 | 2 | null | >
at what point/percentage would you consider using something like multiple imputation to account for missing data?
Stef van Buuren discusses this in Section 1.3.1 of [Flexible Imputation of Missing Data](https://stefvanbuuren.name/fimd/sec-simplesolutions.html) (FIMD), "Listwise Deletion:"
>
The leading authors in the field are, however, wary of providing advice about the percentage of missing cases below which it is still acceptable to do listwise deletion.
I'm not a leading author in the field, so I'll follow that advice for addressing your question. Also:
>
The implications of the missing data are different depending on where they occur (outcomes or predictors), and the parameter and model form of the complete-data model.
For example, as van Buuren says in [Section 2.7](https://stefvanbuuren.name/fimd/sec-when.html):
>
If the missing data occur in [outcome] $Y$ only, complete-case analysis and multiple imputation are equivalent, so then complete-case analysis is preferred since it is easier, more efficient and more robust.
In that situation, no percentage of missing data would require multiple imputation, at least in terms of estimating regression coefficients.
Frank Harrell devotes Chapter 3 of [Regression Modeling Strategies](https://hbiostat.org/rmsc/missing.html) to ways to handle missing data. He summarizes practical suggestions in [Section 3.10](https://hbiostat.org/rmsc/missing.html#summary-and-rough-guidelines), but warns that "Simulation studies are needed to refine the recommendations." The general idea is that increasing fractions of cases with missing data require more imputed data sets for reliable results.
Thus with very small fractions of missing-data cases (subject to the above warning, he suggests <3% of cases), single imputation or listwise deletion can work well enough. Even then, he says "Multiple imputation may be needed to check that the simple approach 'worked.'"
What's important, whatever decision you make, is to state how you handled the missing data in reports, so that others can try to reproduce or improve upon your analysis.
| null | CC BY-SA 4.0 | null | 2023-04-15T14:14:31.377 | 2023-04-15T14:14:31.377 | null | null | 28500 | null |
613024 | 2 | null | 613015 | 0 | null | The distinction between multivariate and exogenous seems to be a false dichotomy. If your model contains more than one variable or more than one time series, it is a multivariate model. The relevant distinction is between treating all variables as endogenous vs. treating only one of them as endogenous and the rest as exogenous.
Now, which approach would deliver more accurate forecasts of the series T1? If we are talking about one-step forecasts, there need not be a difference. The result of interest will have T1 as the dependent variable and T2 and T3 (and probably some lags) as predictors. Whether you take a model with this single equation or a model with multiple equations (one for each endogenous variable), there will still be just one equation you focus on. So there need not be a difference.
Regarding multiple-step forecasts, you may need to predict the predictors themselves, and for that you will need equations for them. Thus a multi-equation model with more than one endogenous variable may be beneficial. But there is always the bias-variance trade-off, so we cannot be sure ahead of time which model will perform better.
Regarding SARIMAX or regression with SARIMA errors (these are [different things](https://robjhyndman.com/hyndsight/arimax/)), it has a single dependent (endogenous) variable and some predictors (exogenous variables). You will find a brief overview with concrete equations and their interpretations by following the link above.
| null | CC BY-SA 4.0 | null | 2023-04-15T14:18:47.007 | 2023-04-15T14:24:09.350 | 2023-04-15T14:24:09.350 | 53690 | 53690 | null |
613025 | 2 | null | 496467 | 1 | null | This question relates to the more general question [Find the distribution of the statistic and the critical region of the generalized test at level $\alpha$ for two sample test](https://stats.stackexchange.com/questions/612335/), which asks the same question but with samples of potentially different sizes than one.
An answer to that question uses a statistic that follows Fisher's z-distribution. When we apply the same approach to this problem, then that answer simplifies to the use of the statistic $\log(X_1/X_2)$ which follows a [logistic distribution](https://en.m.wikipedia.org/wiki/Logistic_distribution) with scale $s=1$ and location $\mu = \log(\lambda_1/\lambda_2)$.
So the hypothesis $\lambda_1/\lambda_2=1$ can be tested with the statistic $\log(X_1/X_2)$, and the p-values can be computed based on that statistic following a logistic distribution under the null hypothesis. The difference between the first and second case is whether you use a one-tailed or two-tailed test, which changes the p-value by a factor of two.
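A quick simulation sketch (assuming $X_1$ and $X_2$ are single exponential observations with means $\lambda_1$ and $\lambda_2$; this parameterization and all names below are assumptions for illustration) shows that $\log(X_1/X_2)$ behaves like a logistic variable with location $\log(\lambda_1/\lambda_2)$ and scale 1:

```python
import math
import random

random.seed(0)
lam1, lam2 = 2.0, 1.0            # hypothetical exponential means
n = 100_000
# log-ratio statistic for n independent pairs of draws
# (random.expovariate takes a rate, so 1/mean)
zs = [math.log(random.expovariate(1 / lam1) / random.expovariate(1 / lam2))
      for _ in range(n)]
mean = sum(zs) / n
var = sum((z - mean) ** 2 for z in zs) / n
# logistic(mu, s = 1) has mean mu = log(lam1/lam2) and variance pi^2 / 3
print(round(mean, 2), round(var, 2))
```

The empirical mean should sit near $\log(2) \approx 0.69$ and the variance near $\pi^2/3 \approx 3.29$.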
| null | CC BY-SA 4.0 | null | 2023-04-15T14:24:47.317 | 2023-04-15T15:26:29.550 | 2023-04-15T15:26:29.550 | 164061 | 164061 | null |
613026 | 2 | null | 15011 | 1 | null |
## Given:
$\rho$ = desired correlation between Y and Z
$Z$ = sample
## Requested:
'Random' sample $Y$ such that $cor(Y, Z) = \rho$
## Solution:
Let $x_1 =$ scale($Z$), which implies $E(x_1) = 0, Var(x_1) = 1$
Generate a random sample $x_2$ with $E(x_2) = 0$ and $Var(x_2) = 1$ (distribution not important)
We then determine scalar $a$ such that $Y = x_1 + a * x_2$ satisfies the requirement.
Let $cv = cov(x_1, x_2)$.
We then have:
\begin{eqnarray*}
Var(Y) &=& Var(x_1 + a * x_2) \\
&=& Var(x_1) + a^2 * Var(x_2) + 2 * a * cov(x_1, x_2) \\
&=& 1 + a^2 + 2*a*cv \\
cor(Y, Z) &=& cor(Y, x_1) \\
&=& cov(x_1 + a * x_2, x_1) / sqrt(Var(Y)*Var(x_1)) \\
&=& [Var(x_1) + a * cov(x_2, x_1)] / sqrt(1 + a^2 + 2*a*cv)\\
&=& (1 + a * cv) / sqrt(1 + a^2 + 2*a*cv)
\end{eqnarray*}
So
\begin{eqnarray*}
\rho * sqrt(1 + a^2 + 2*a*cv) &=& (1 + a * cv) \\
\rho^2 * (1 + a^2 + 2 * a * cv) &=& (1 + a * cv)^2 \\
\rho^2 + \rho^2 * a^2 + 2 * cv * \rho^2 * a &=& 1 + 2 * cv * a + cv^2 * a^2\\
a^2 * (\rho^2 - cv^2) + 2 * a * (cv * \rho^2 - cv) + \rho^2 - 1 &=& 0\\
a^2 * (\rho^2 - cv^2) - 2 * a * cv * (1 - \rho^2) - (1 - \rho^2) &=& 0\\
\end{eqnarray*}
We remember from the second line above that $sign(\rho) = sign(1 + a * cv)$.
Solving the quadratic equation:
\begin{eqnarray*}
\Delta &=& cv^2 * (1 - \rho^2)^2 + (1 - \rho^2) * (\rho^2 - cv^2) \\
&=& (1 - \rho^2) * [cv^2 * (1 - \rho^2) + \rho^2 - cv^2] \\
&=& (1 - \rho^2) * \rho^2 * (1 - cv^2) \\
a &=& [cv * (1 - \rho^2) \pm \sqrt \Delta ] / [\rho^2-cv^2] \\
\end{eqnarray*}
This gives an R function, about 2.5 times faster than the beautiful 'complement' function (solution by whuber):
```
corr <- function(z, rho){
  x1 <- c(scale(z))
  x2 <- scale(rnorm(length(x1)))
  cv <- c(cov(x2, x1))
  sqrtdelta <- sqrt(rho^2 * (1 - rho^2) * (1 - cv^2))
  a <- (sqrtdelta + cv * (1 - rho^2)) / (rho^2 - cv^2)
  if (rho * (1 + a * cv) < 0) a <- (-sqrtdelta + cv * (1 - rho^2)) / (rho^2 - cv^2)
  a * x2 + x1
}
```
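For readers who prefer Python, here is a rough port of the same construction (helper names are invented for this sketch); the achieved correlation is exact up to floating point:

```python
import math
import random

random.seed(1)

def scale(v):
    # center to mean 0 and scale to sample sd 1, like R's scale()
    n, m = len(v), sum(v) / len(v)
    s = math.sqrt(sum((a - m) ** 2 for a in v) / (n - 1))
    return [(a - m) / s for a in v]

def pearson(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = math.sqrt(sum((a - mu) ** 2 for a in u) * sum((b - mv) ** 2 for b in v))
    return num / den

def corr_sample(z, rho):
    x1 = scale(z)
    x2 = scale([random.gauss(0, 1) for _ in z])
    cv = sum(a * b for a, b in zip(x1, x2)) / (len(z) - 1)   # cov(x1, x2)
    sqrtdelta = math.sqrt(rho ** 2 * (1 - rho ** 2) * (1 - cv ** 2))
    a = (sqrtdelta + cv * (1 - rho ** 2)) / (rho ** 2 - cv ** 2)
    if rho * (1 + a * cv) < 0:    # pick the quadratic root with the right sign
        a = (-sqrtdelta + cv * (1 - rho ** 2)) / (rho ** 2 - cv ** 2)
    return [a * b + c for b, c in zip(x2, x1)]

z = [random.gauss(0, 1) for _ in range(200)]
y = corr_sample(z, 0.6)
print(round(pearson(y, z), 9))
```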
| null | CC BY-SA 4.0 | null | 2023-04-15T14:27:57.313 | 2023-04-16T12:31:46.577 | 2023-04-16T12:31:46.577 | 385769 | 385769 | null |
613027 | 1 | 613164 | null | 1 | 72 | In R, I am using the survey package (inverse probability weighting) to conduct these tests on paired data:
- Weighted Wilcoxon Signed Rank test
- Weighted Sign test
However, I am struggling with them.
By defining a design and then using the functions
```
signed_rank<-function (x) {sign(x) * rank(abs(x))}
summary (svyglm(signed_rank(variable) ~ 1, mydesign))
```
I have been able to conduct a non-paired weighted wilcoxon.
But I have not been able to conduct the paired version as well as the Sign test. Any idea?
Thanks
| Paired sign and Wilcoxon tests with weighting | CC BY-SA 4.0 | null | 2023-04-15T14:36:58.860 | 2023-04-18T01:35:28.217 | 2023-04-17T00:43:51.783 | 17072 | 354094 | [
"r",
"nonparametric",
"survey",
"wilcoxon-signed-rank",
"sample-weighting"
] |
613028 | 1 | null | null | 0 | 12 | I will use an example of an antidepressant drug trial as an example to illustrate my question - hope this helps as I, too, am confused.
Suppose a study is testing the effects of some drug on people with depression. The participants are split into independent groups, with each group being given a different dose of the drug, including a control group. The authors measure two different related random variables, for example serotonin and dopamine levels in the brain of the participants. Then they write a report and only give the values of the mean levels for each group. Now I want to estimate the covariance between serotonin and dopamine levels for the null distribution of this study.
I understand that if the null hypothesis were false, it would be incorrect to simply treat each group as a single sample point and calculate the covariance from there, as, fundamentally, each group would be sampling from a different distribution due to the unpredictable effects of the drug.
However, if I simply wanted to estimate the covariance of the null distribution, could I then assume that the independent groups are all sampling from the same distribution and treat them like individual data points? My reasoning is that if the null hypothesis were true, the drug would have no effect and, hence, the distributions for each independent group would be identical.
P.S. Apologies if the answer to this question may appear obvious or if the question was otherwise ill-posed, I am new to statistics and still learning, edits are appreciated!
| Is it correct to estimate covariance matrix of null distribution of related random variables using independent groups as data points? | CC BY-SA 4.0 | null | 2023-04-15T14:42:41.460 | 2023-04-15T14:42:41.460 | null | null | 385770 | [
"hypothesis-testing",
"covariance",
"covariance-matrix",
"group-differences"
] |
613029 | 1 | 613033 | null | 2 | 25 | I made a toy dataset where the test part doesn't look like a train. I would like to know why `random forest` works so bad compared to GMDH (see [Wikipedia](https://en.wikipedia.org/wiki/Group_method_of_data_handling)).
my question
why does `random forest` lose stability on test data and `GMDH` no?
[](https://i.stack.imgur.com/TW2dC.jpg)
```
set.seed(111)
s <- function(ve=1:500,a,f,p) a*sin(f*ve+p)
s1 <- s(a = 1, f = 0.01, p = 0) + rnorm(500,sd = 0.05)
s2 <- s(a = 0.5, f = 0.06, p = 2) + rnorm(500,sd = 0.05)
s3 <- s(a = 0.1, f = 0.12, p = 4) + rnorm(500,sd = 0.05)
X <- cbind(s1,s2,s3)
Y <- s1 + s2 + s3
tr <- 1:300
ts <- 301:500
library(randomForest)
rf <- randomForest(Y[tr]~.,X[tr,],ntree=500,mtry=ncol(X))
rf_pred <- predict(rf,X)
library(GMDHreg)
gmdh <- gmdh.gia(X = X[tr,], y = Y[tr], prune = 5, criteria = "PRESS")
gmdh_pred <- predict(gmdh, X)[,1]
par(mfrow=c(2,1),mar=c(2,2,2,2))
matplot(X,t="l",lty=1)
abline(v=length(tr),col=1,lty=2,lwd=2)
plot(Y,t="l",col="gray90",lwd=10,main="Target")
abline(v=length(tr),col=1,lty=2,lwd=2)
lines(rf_pred,col=2,lwd=1)
lines(gmdh_pred ,col=4,lwd=1)
legend(x = "bottomleft",
legend = c("original", "GMDH", "RandomForest"),
col = c(8,4,2), lwd = c(5,1,1))
```
| Why random forest loses stability on new data, but GMDH works great | CC BY-SA 4.0 | null | 2023-04-15T14:44:23.570 | 2023-04-15T15:34:27.960 | 2023-04-15T15:19:56.600 | 60613 | 303632 | [
"r",
"machine-learning",
"random-forest"
] |
613030 | 2 | null | 612997 | 1 | null | If all you have is `yes/no` for `fever` and distinct categories of `flu`, then you have a 2 x 3 contingency table of counts of the cases in each combination of `fever` presence and `flu` type. In that case, the binary regression of `fever` as a function of `flu` type and the multinomial regression of `flu` type as a function of `fever` presence are just two different ways of looking at the same data set. You can choose the direction you want based on how you want to apply the model.
The usual initial report of regression coefficients for such models leads to the apparent problem that you note. Yes, the choice of reference level with more than 2 levels in a categorical variable will affect the reported coefficients, whether the variable is the outcome in a multinomial model for `flu` type or the predictor in a binary model of `fever` outcome.
That's only an apparent problem, however. Regardless of your choice of reference level, the model contains all the information needed to evaluate the probability of `fever` given a type of `flu` (or in the other direction, the probability of a `flu` type given presence of `fever`), and to make whatever comparisons among scenarios you wish. For that you can use post-modeling tools like those provided by the R [emmeans package](https://cran.r-project.org/package=emmeans).
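As an illustration of the "two directions, same table" point (all counts below are toy, purely hypothetical numbers), both conditional probabilities come straight from the same 2 x 3 table:

```python
# rows: fever yes/no; columns: flu types A/B/C (hypothetical counts)
table = {("yes", "A"): 30, ("yes", "B"): 10, ("yes", "C"): 5,
         ("no",  "A"): 20, ("no",  "B"): 40, ("no",  "C"): 45}

flu_types = ("A", "B", "C")
col_tot = {t: table[("yes", t)] + table[("no", t)] for t in flu_types}
fever_tot = sum(table[("yes", t)] for t in flu_types)

# direction 1: P(fever | flu type) -- what a binary model of fever estimates
p_fever_given_flu = {t: table[("yes", t)] / col_tot[t] for t in flu_types}
# direction 2: P(flu type | fever) -- what a multinomial model of flu estimates
p_flu_given_fever = {t: table[("yes", t)] / fever_tot for t in flu_types}

print(p_fever_given_flu)
print(p_flu_given_fever)
```

A saturated model in either direction simply reproduces these observed conditional proportions, which is why the choice of direction is a matter of how you want to apply the model.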
| null | CC BY-SA 4.0 | null | 2023-04-15T14:52:30.357 | 2023-04-15T14:52:30.357 | null | null | 28500 | null |
613031 | 1 | null | null | 0 | 22 | ```
model = Sequential()
model.add(Masking(mask_value=-1, input_shape=(None, feature_shape)))
model.add(GRU(128, return_sequences=True, activation='tanh' , dropout = 0.3 , recurrent_dropout = 0.3, input_shape = (None , feature_shape)))
model.add(GRU(128, return_sequences=False, activation='tanh' , dropout = 0.3 , recurrent_dropout = 0.3))
model.add(BatchNormalization())
model.add(Dense(256, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(actions.shape[0], activation='softmax'))
model.summary()
```
Things I have changed that didn't do much to overcome the overfitting:
- increasing the dropout and recurrent_dropout
- adding batch normalization between dense layers.
- Using L2 regularization underfitted the GRU model when done on dense layers.
| This GRU model overfits heavily . Is there a way to improve it? | CC BY-SA 4.0 | null | 2023-04-15T14:55:21.117 | 2023-04-15T14:55:21.117 | null | null | 327943 | [
"machine-learning",
"neural-networks",
"regularization",
"gru"
] |
613032 | 1 | 613079 | null | 1 | 74 | I have explained my study design in [this post](https://stats.stackexchange.com/questions/613019/mixed-effects-model-in-prism-versus-lmer-in-r-post-hoc-testing).
To summarize, I have repeated measures data of 2 patient groups for 'C' across 6 different time points. Time measurement is more of a time window - samples collected +/- a month of 3 months are included into the 3 month time point for example.
Below is my current model:
```
model <- lmer(C ~ Group + Time + Group*Time + (1|Sample_ID), REML = TRUE, data = data)
```
I've been playing around with my model setup using `lme4` and I noticed that the interaction with my group (a factor variable of positive or negative) changes when the coding of Time is changed. For example, Time is shown as not significant when it is a factor, and significant when it is numeric. Why is that?
I do have the exact time measurements available for all my samples and can plug this into my model instead, but I really want to be looking at whether there is a difference between my two groups at each time interval. If I treat Time as numeric and then plug it into my model, my `emmeans` contrasts will only determine a single p-value for patient Group:Time.
| Time as a factor or numeric in lme4? | CC BY-SA 4.0 | null | 2023-04-15T15:14:26.983 | 2023-04-16T02:08:29.513 | 2023-04-16T01:56:04.800 | 345611 | 385766 | [
"r",
"regression",
"mixed-model",
"categorical-data",
"lme4-nlme"
] |
613033 | 2 | null | 613029 | 3 | null | As shown in [Random forest regression not predicting higher than training data](https://stats.stackexchange.com/q/235189/60613), Random Forests are based on averages of subsets of values of the response variable in the training set, and thus, naïvely, cannot predict values lower or higher than what the training examples themselves demonstrate.
The GMDH (a regression over basis functions) can have terms that match the data generating process, thus allowing it to extrapolate. In fact, you gave it the exact same terms that generated the data.
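The averaging argument can be sketched with a toy computation (no actual forest here; just the "leaf predicts the mean of its training targets" rule, with made-up numbers): every leaf prediction is an average of training responses, so neither a tree nor an average of trees can leave the training range.

```python
# each "leaf" predicts the mean of some subset of the training targets
y_train = [0.1, 0.5, 1.2, 2.0]

def leaf_prediction(subset):
    return sum(subset) / len(subset)

# enumerate every contiguous training subset a split could isolate
preds = [leaf_prediction(y_train[i:j])
         for i in range(len(y_train))
         for j in range(i + 1, len(y_train) + 1)]

# no leaf (and so no average of leaves) can exceed the training range
print(min(preds) >= min(y_train) and max(preds) <= max(y_train))  # True
```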
| null | CC BY-SA 4.0 | null | 2023-04-15T15:22:18.543 | 2023-04-15T15:34:27.960 | 2023-04-15T15:34:27.960 | 60613 | 60613 | null |
613034 | 1 | null | null | 4 | 380 | I have 60 numbers drawn from a normal distribution with mean 0 and standard deviation of 1.
1 realization:
[](https://i.stack.imgur.com/ov8Ow.png)
I then take the sum of the 60 values.
I do this 1000 times and plot a histogram of the various sums I get:
[](https://i.stack.imgur.com/yjnkr.png)
And I am able to fit the histogram with a normal distribution with a mean of 0 and a standard deviation of sqrt(60).
So far so good.
I now want to square each of my 60 numbers to get (for example) a realization like this:
[](https://i.stack.imgur.com/yrKNX.png)
If I again take the sum of the 60 values and repeat 1000 times I get a different histogram:
[](https://i.stack.imgur.com/ztEME.png)
My question: what function do I use to fit this histogram? Is it another normal distribution? Maybe chi-squared? What parameters do I use?
| What distribution do I get when I square numbers from a normal distribution and add them together? | CC BY-SA 4.0 | null | 2023-04-15T15:23:51.893 | 2023-04-15T15:44:50.993 | null | null | 30724 | [
"normal-distribution",
"histogram"
] |
613035 | 2 | null | 613007 | 0 | null | A polynomial term in a regression represents the product of a continuous predictor with itself. It's just one form of an interaction term in a regression, which in general is the product of two predictors.
You should be very cautious about interpreting individual coefficients when predictors are involved in interactions. The value of an individual coefficient for a predictor involved in an interaction depends on the coding of the predictor(s) with which it interacts. Thus, for a predictor involved in an interaction, the apparent "significance" of its individual coefficient (whether it differs from 0, which is what the p-value reports) is a function of the way the interacting predictors are coded. In this case, the coding differs with the choice of `raw` (raw versus orthogonal polynomials). To evaluate the significance of such a predictor, you need to include all of the terms that involve the predictor, for example in a likelihood-ratio test or a multi-coefficient Wald test.
However you code the predictor, the model will still provide the same predictions about outcome. The intercept and the linear terms in your two models differ due to the reasons explained above, but the models are identical in terms of fitting the data and any predictions you would make from them. Don't waste time fretting about the "significance" of its individual coefficient.
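This invariance of the fit is easy to demonstrate numerically. The Python sketch below (made-up data, standing in for R's raw versus orthogonal polynomial bases) fits the same quadratic with a raw basis and with an orthogonalized basis: the coefficients differ, but the fitted values coincide.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(-1, 1, 50)
y = 1 + 2 * x - 3 * x**2 + rng.normal(0, 0.1, 50)

X_raw = np.column_stack([np.ones_like(x), x, x**2])  # raw polynomial basis
Q, _ = np.linalg.qr(X_raw)                           # orthogonalized basis

b_raw, *_ = np.linalg.lstsq(X_raw, y, rcond=None)
b_orth, *_ = np.linalg.lstsq(Q, y, rcond=None)

pred_raw = X_raw @ b_raw
pred_orth = Q @ b_orth
print(np.max(np.abs(pred_raw - pred_orth)))  # essentially zero: identical fits
```

The two bases span the same column space, so least squares returns the same projection of the outcome, even though the individual coefficients (and their p-values) differ.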
One further suggestion: a polynomial fit imposes a strong structure over the shape of the predictor-outcome relationship. You might consider a more flexible fit, for example with a regression spline.
| null | CC BY-SA 4.0 | null | 2023-04-15T15:33:44.820 | 2023-04-15T15:33:44.820 | null | null | 28500 | null |
613036 | 2 | null | 613034 | 12 | null | If $z$ has a standard normal distribution (mean of zero and variance of one), then $z^2$ has a chi-squared distribution with 1 degree of freedom.
Furthermore, the sum of $x$ and $y$ from chi-squared distributions with degrees freedom $\nu_x$ and $\nu_y$ is itself chi-squared distributed (with $\nu_x+\nu_y$ degrees of freedom).
Thus, you are looking at a sampling distribution from a chi-squared distribution with 60 degrees of freedom. Furthermore, the chi-squared distribution approaches a normal distribution in the limit with mean equal to the degrees of freedom (and variance is double this value).
(Proofs of these statements can be found in any undergraduate mathematical statistics and probability textbook, so I'll omit a formal proof here.)
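A quick simulation sketch (Python, mirroring the 1000 × 60 setup in the question) illustrates both claims:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal((1000, 60))   # 1000 realizations of 60 standard normals

sums = z.sum(axis=1)                  # each row-sum is N(0, 60)
sums_sq = (z ** 2).sum(axis=1)        # each row-sum of squares is chi-squared(60)

print(sums.std())                     # close to sqrt(60)
print(sums_sq.mean(), sums_sq.var())  # close to 60 and 120
```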
| null | CC BY-SA 4.0 | null | 2023-04-15T15:44:50.993 | 2023-04-15T15:44:50.993 | null | null | 199063 | null |
613037 | 1 | null | null | 1 | 13 | I’ve designed an experiment where I would like to evaluate how treatment exposure influences a binary outcome.
We noticed that there was bias between test and control groups in terms of the success rate, so we included a "calibration period": logs of successes and failures from individuals in both groups prior to exposure administration.
So logistic regression was used with two slopes: treatment group assignment and exposure administration. Control would have X=(0,0), treatment group in calibration period would have X=(1,0) and treatment group with exposure would have X=(1,1).
One issue we found is that not all individuals contribute the same number of trials. Consider testing the efficacy of an energy drink on runners; some runners simply go on more runs. As a consequence, these individuals seem to have greater influence on the model coefficients and intercept than others.
I’ve read that using a multilevel model could mitigate the effects of unbalanced trials across individuals.
Could someone explain why this is intuitive?
| Multilevel models to mitigate asymmetric influences on posterior? | CC BY-SA 4.0 | null | 2023-04-15T16:00:51.060 | 2023-04-15T16:00:51.060 | null | null | 288172 | [
"regression",
"bayesian",
"binomial-distribution",
"experiment-design",
"multilevel-analysis"
] |
613039 | 1 | null | null | 0 | 25 | I've made t-test in excel with my samples and the graph showing mean values with standard deviation. t-test shows significant difference, when standard deviation lines cross on the graph. Could you please explain what's can be wrong with it?
[](https://i.stack.imgur.com/7zn1T.png)
| t-test shows significant differences when graph doesn't | CC BY-SA 4.0 | null | 2023-04-15T16:17:50.057 | 2023-04-15T21:23:46.000 | 2023-04-15T18:19:09.460 | 385777 | 385777 | [
"standard-deviation",
"excel"
] |
613040 | 1 | null | null | 3 | 36 | I am calculating within-group effect sizes from pre-test to post-test. Cohen's dav reports this effect size as a proportion of the average standard deviation [(Lakens, 2013)](https://www.frontiersin.org/articles/10.3389/fpsyg.2013.00863/full#B7):
[](https://i.stack.imgur.com/6G8eS.png)
Meanwhile, Cohen's drm corrects for the fact that the pre-test and post-test measures are correlated (i.e., dependent):
[](https://i.stack.imgur.com/sXxdA.png)
Some argue that drm should be used instead of dav, because pre-test and post-test scores are not independent of one another ([Cuijpers et al., 2017](https://pubmed.ncbi.nlm.nih.gov/27790968/)). My question is this: Why is the fact that these measures aren't independent a problem? In other words, why should one adjust the effect size estimate based on the correlation of the pre-test and post-test measures?
Consider these two simulated effects:
[](https://i.stack.imgur.com/qBrUK.png)
In the correlated example (left), the correlation between pre-test and post-test measures is .9, dav is 1, and drm is .6. In the uncorrelated example (right), the correlation is .1, dav is still 1, and drm is also 1. In both examples, the SD at pre-test is 1 and the SD at post-test is 2. Clearly, the average person in the correlated example improved the same amount as the average person in the uncorrelated example. So why "correct" the effect size?
| Why is it important to correct for correlated (dependent) measures when reporting Cohen's d? | CC BY-SA 4.0 | null | 2023-04-15T16:17:52.533 | 2023-04-15T16:34:58.017 | null | null | 96659 | [
"effect-size",
"cohens-d"
] |
613041 | 2 | null | 365778 | 0 | null |
# Reduce the number of parameters in the model.
The existing answers focus on different regularization strategies that can improve fit, given a model architecture that remains fixed (same configuration, number of layers, number of neurons in each layer).
However, the simplest and easiest step to reducing overfitting in a neural network is to reduce the number of parameters in the model. This can mean some combination of
- fewer layers in the model; and
- fewer parameters in each layer.
This can reduce overfitting because a model with a larger parameter count has a greater flexibility to fit the data. Intuitively, this is analogous to the simpler case of adding degrees of freedom to a linear model. A linear model with at least as many degrees of freedom as the number of observations can achieve a perfect fit to the training data, because it can interpolate between each training data point. However, this is unlikely to generalize well because, by perfectly interpolating the training data, the model has also fit to the noise in the target variable.
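That analogy can be made concrete with a small Python sketch (made-up noisy data): a maximal-degree polynomial fit to 10 points drives training error to (near) zero while test error blows up relative to a lower-capacity fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)  # noisy observations
x_test = np.linspace(0.03, 0.97, 50)
y_test = np.sin(2 * np.pi * x_test)                # noise-free targets

errs = {}
for degree in (3, n - 1):  # low vs. maximal capacity
    # (numpy may warn that the degree-9 fit is poorly conditioned)
    coef = np.polyfit(x, y, degree)
    train = np.mean((np.polyval(coef, x) - y) ** 2)
    test = np.mean((np.polyval(coef, x_test) - y_test) ** 2)
    errs[degree] = (train, test)
    print(degree, train, test)
```

The degree-9 model, with as many parameters as observations, interpolates the noise; the degree-3 model cannot, and generalizes better.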
From an overfitting perspective, the goal of adjusting the number of parameters in the model is to achieve the correct trade-off between achieving a good fit to the data and a fit that will generalize to new data.
| null | CC BY-SA 4.0 | null | 2023-04-15T16:24:47.193 | 2023-04-15T16:24:47.193 | null | null | 22311 | null |
613042 | 2 | null | 613040 | 0 | null | This question essentially boils down to the difference between the effect size for a matched-pairs t-test and a 2 independent samples t-test (the former would be the correlated version in your example above). Simply put, these are different statistical tests asking different research questions under different assumptions on the data.
For the matched-pairs approach, the bivariate data can be reasonably transformed into a univariate data set: the gain or difference scores for each individual. The question thus becomes a measure of how much each individual respondent gained (as opposed to where they actually ended up), and the effect size is given by
$$d = \frac{\overline{d}}{s_d}$$
where $\overline{d}$ is the average gain score (which is mathematically equivalent to the difference in the means) and $s_d$ is the standard deviation of the difference scores. (I didn't work it out, but I'm pretty sure this is equivalent to the formula you've provided in the OP.)
For the 2 independent samples t-test, the effect size is calculated using the pooled standard deviation
$$d = \frac{\overline{x}_1-\overline{x}_2}{s_\text{pooled}}$$
This will not give the exact same value of $d_\text{av}$ above, but it will be reasonably close. This is just a measure of how far apart the averages are in terms of the best measure of "communal" spread for the two groups, i.e., how many standard deviations away are the two means.
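The distinction is easy to see in a small simulation (Python; the mean shift of 0.5 and correlation of 0.9 are made-up illustration values):

```python
import numpy as np

rng = np.random.default_rng(42)
n, delta, r = 10_000, 0.5, 0.9
pre, post = rng.multivariate_normal([0.0, delta], [[1, r], [r, 1]], size=n).T

diff = post - pre
d_matched = diff.mean() / diff.std(ddof=1)  # matched-pairs d = dbar / s_d
s_pooled = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)
d_pooled = (post.mean() - pre.mean()) / s_pooled

print(d_matched, d_pooled)  # matched-pairs d is much larger when r is high
```

With a high pre/post correlation the difference scores have a small standard deviation, so the same mean gain yields a much larger matched-pairs effect size than the pooled-SD version — the two statistics answer different questions.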
Happy to clarify more if needed.
| null | CC BY-SA 4.0 | null | 2023-04-15T16:34:58.017 | 2023-04-15T16:34:58.017 | null | null | 199063 | null |
613043 | 2 | null | 613005 | 3 | null | Saying that a bug is "fixed" might be taken to mean that it has zero probability of occurring. That's not something you can determine. You might still have a very low probability, less than your current 0.003, that would only be seen in a very large sample.
For this situation you might apply the "[rule of three](https://en.wikipedia.org/wiki/Rule_of_three_(statistics))". If there are no occurrences of a rare event in $N$ trials, then you have a 95% confidence interval for the true probability within the range $[0,3/N]$.* As the linked Wikipedia page shows, with the large number of trials you need to use, that approximation is indistinguishable from the "exact binomial" analysis that's needed with smaller samples, where design might be improved with software like G*Power.
So if you did 1 million trials and found no failures, you would have a 95% confidence interval from 0 to 3 out of 1 million. The Wikipedia page shows how to adjust the value of `3` to obtain other confidence intervals. For example, the 99.5% confidence interval following such a test would be between 0 and 5.3 out of 1 million.
The above assumes that the trials are independent and have identical probabilities of failure. That might not always hold in tests of software bugs, so be warned.
In response to comments:
First, in a comment on the question, @dipetkov rightly notes that this situation should only occur with non-deterministic code, with a [link to better ways to deal with this situation in testing computer code](https://tinyurl.com/2p9fuwmj).
Second, from the perspective of ruling out rare independent events in general, there is no "shortcut" to extended testing.
If the probability of an event in a single trial is $p$, then the probability of no events in $N$ independent trials is $(1-p)^N$. With your scenario of $p=0.003$ and $N=1000$, the probability of observing 0 events is 0.0496. There is about a 1% chance, after 1000 trials, of observing no events with $p$ as high as 0.0046.
You can use that formula to evaluate your tradeoffs among assumed probabilities, the risk of missing an event given that probability, and the number of trials. But that formula is equivalent to what the "rule of three" (and its extensions to other confidence intervals) provides in the limit of large $N$.
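In code, both calculations above are one-liners (values taken from the scenario in the question):

```python
p, N = 0.003, 1000
prob_zero_events = (1 - p) ** N   # chance of seeing no events in N trials
rule_of_three_95 = 3 / 1_000_000  # 95% upper bound after 10^6 clean trials

print(prob_zero_events)           # ~0.0496
print(rule_of_three_95)           # 3e-06
```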
---
*Note that the sample size of 1472 that you show in the question only provides a 95% confidence interval of [0, 0.002] if there are no failures.
| null | CC BY-SA 4.0 | null | 2023-04-15T16:35:25.677 | 2023-04-16T21:21:37.657 | 2023-04-16T21:21:37.657 | 28500 | 28500 | null |
613044 | 1 | null | null | 1 | 28 | I have a set of data measuring response variable (length) on the same individual over several time points in nine different treatments (pH). Due to the confounding effect of temperature and it's biological relevance for interpretation, this data are split into three observation periods. I am analyzing each period separately.
For each observation period, I want to know if there is an effect of pH on the mean growth rate (population response); is there a relationship between mean length growth rate and pH? I'm not interested in the effect of pH on growth rate over time at this particular moment.
I have performed GEE to obtain population-response while accounting for the multiple measurements on the same individual, ignoring time (not of interest now).
I get the regression coefficients for each pH and when I plot these coefficients or EMMs against pH, I can see a strong relationship (positive or negative, depending on the observation period).
My question is: how can I quantify this relationship? For example (in terms I understand) - when we are performing linear regression on data we get the R2 value to see if there is a trend and how strong it is. Can I do that on this EMMs data obtained from GEE? If not, what are my options here, how to quantify this relationship?
Please note that for this particular case I need to work with mean population response - effect of pH on the mean shell growth rate (while accounting for repeated measurements).
| Subsequent analysis on estimated marginal means from GEE | CC BY-SA 4.0 | null | 2023-04-15T16:35:49.417 | 2023-04-15T16:35:49.417 | null | null | 385463 | [
"generalized-estimating-equations"
] |
613045 | 1 | 613054 | null | 7 | 341 | Consider the following two random variables:
$$Z_1=U_1-X_1$$
and
$$Z_2=U_2-X_1,$$
where $U_1$ and $U_2$ are two i.i.d random variables following a general distribution, and $X_1$ is an exponential random variable. It is obvious that $Z_1$ and $Z_2$ are two dependent random variables. Now, let us consider the following random variable:
$$Y=\min(Z_1,Z_2).$$
Is there any explicit expression or approximation for the PDF or expected value of $Y$?
| The distribution function of the min of two random variables which are dependent via a common term | CC BY-SA 4.0 | null | 2023-04-15T16:37:04.773 | 2023-04-15T19:39:25.697 | 2023-04-15T17:15:49.380 | 362671 | 385778 | [
"probability",
"expected-value",
"moment-generating-function"
] |
613047 | 1 | 613917 | null | 1 | 42 | I am trying to establish measurement invariance between two groups on a depression measure: high-school aged boys and high-school aged girls. Though my entire sample reports elevated depression symptoms, girls report higher levels of symptoms than boys.
In measurement invariance testing, "scalar invariance" (full score equivalence) is supported when both the factor loadings and the factor intercepts are equivalent across groups. This means that respondents in both groups use the measure the same; they will both select the same response option given the same level of latent depression. When you look at respondents' response functions, the slopes and intercepts are the same ([source](https://bookdown.org/content/5737/invariance.html)):
[](https://i.stack.imgur.com/apcuE.png)
This does not hold for my analysis. Because girls report more depression, their factor intercepts are higher. Thus, this measure fails the measurement invariance test.
Here is my question: (How) is it possible to differentiate between differences in respondents' response styles (e.g., girls tend to over-report depression) and differences in the actual latent mean (e.g., girls are more depressed)? When I started this project, I assumed the point of measurement invariance testing would be to figure this out. But now I'm wondering if this is even possible.
| In measurement invariance testing, how can one differentiate between differences in response styles and differences in latent means? | CC BY-SA 4.0 | null | 2023-04-15T16:41:43.900 | 2023-04-25T13:54:22.607 | null | null | 96659 | [
"latent-variable",
"measurement",
"invariance"
] |
613048 | 1 | null | null | 0 | 20 | Would be beneficial to apply nonlinear dimensionality reduction on a binary dataset (around 200 binary features) ? what is difference from applying MCA (multiple correspondence analysis).
| Applying non-linear dimensionality reduction on binary data | CC BY-SA 4.0 | null | 2023-04-15T16:47:35.570 | 2023-04-15T16:51:20.190 | 2023-04-15T16:51:20.190 | 376080 | 376080 | [
"categorical-data",
"pca",
"dimensionality-reduction",
"high-dimensional",
"manifold-learning"
] |
613051 | 1 | null | null | 0 | 19 | Say I have two finite mixtures, each consisting of an equally weighted mix of $n$ Gaussians each with a known mean and standard deviation.
Is there an efficient method of calculating the probability that a random sample drawn from the first is less than a random sample drawn from the second?
The naive method is simple: enumerate all pairs of Gaussians choosing one from each mixture, [calculate the result for the pair](https://math.stackexchange.com/a/40236/246278), and take the mean.
This works, but is $O(n^2)$. Is there a more efficient method of calculating this?
| Efficiently calculating comparisons between finite mixtures of Gaussians | CC BY-SA 4.0 | null | 2023-04-15T17:20:10.847 | 2023-04-15T18:35:17.737 | 2023-04-15T18:35:17.737 | 7224 | 100205 | [
"normal-distribution",
"mixture-distribution"
] |
613052 | 1 | null | null | 0 | 16 | I'm trying to do sarcasm detection on Twitter data to replicate the results mentioned in this [paper](https://aclanthology.org/2020.wnut-1.2.pdf). Binary classification problem. For that I used a separate set of unlabeled tweets to create the embedding matrix using Word2Vec model. Before doing that I preprocessed the unlabeled data and removed the rare words as mentioned in the paper. Code is as follows:
```
model = Word2Vec(df_hing_eng['tweet_text'], vector_size=300, window=10, hs=0, negative = 1)
embedding_size = model.wv.vectors.shape[1]
```
Next I fit a tokenizer on this unlabeled data:
```
tok = Tokenizer()
tok.fit_on_texts(df_hing_eng['tweet_text'])
vocab_size = len(tok.word_index) + 1
```
Next, I created the embedding matrix as follows:
```
word_vec_dict = {}
vocab = model.wv.index_to_key  # vocabulary learned by the Word2Vec model
for word in vocab:
    word_vec_dict[word] = model.wv.get_vector(word)

embed_matrix = np.zeros(shape=(vocab_size, embedding_size))
for word, i in tok.word_index.items():
    embed_vector = word_vec_dict.get(word)
    if embed_vector is not None:
        embed_matrix[i] = embed_vector
```
Now, I'm using a separate set of labeled tweets to be used as training and test data (for the DL models). I used the same preprocessing steps as the `unlabeled` data and removed the same rare words we found in the `unlabeled` data. Now I find the maximum length of all tweets in the labeled data.
```
maxi = -1
for row in df_labeled.loc[:, 'tweet_text']:
    if len(row) > maxi:
        maxi = len(row)
```
After that I used the tokenizer, that I fit on the unlabeled data, to create the word indices for the labeled data as follows:
```
encoded_tweets = tok.texts_to_sequences(df_labeled['tweet_text'])
```
Now I padded the labeled data to the length of the maximum tweets among the labeled data.
```
padded_tweets = pad_sequences(encoded_tweets, maxlen=maxi, padding='post')
```
Finally, I split the labeled data into training and test data as follows,
```
x_train,x_test,y_train,y_test=train_test_split(padded_tweets, df_labeled['is_sarcastic'], test_size=0.10, random_state=42)
```
Is there any data leakage anywhere from training to test data, or any other problem? Almost all of my DL models are giving more than 90% accuracy, contrary to the original paper, which reported a maximum of 75% accuracy. The code for the DL models was written by the authors of the paper; I used the same parameters as they mentioned.
The tokenizer was actually fit on a completely different unlabeled data that is absolutely separate from (labeled) training and test data.
| Why are my deep learning models giving unreasonably high accuracy on test data? | CC BY-SA 4.0 | null | 2023-04-15T17:50:35.203 | 2023-04-15T17:50:35.203 | null | null | 222213 | [
"machine-learning",
"neural-networks",
"natural-language",
"keras",
"text-mining"
] |
613053 | 1 | null | null | 0 | 46 | I have this problem that I'm struggling to solve
Let $\left\{ Z_t \right\} \sim N(0,1)$ be a stochastic process and define:
$$X_t = \begin{cases}
Z_t & \text{if $t$ is even }\\
\left( Z_{t-1}^2 - 1\right) /\sqrt{2}& \text{if $t$ is odd}
\end{cases}$$
Prove that $\left\{ X_t \right\}$ is a white noise with mean $0$ and variance $1,$ but not IID.
Well, I could prove that the mean is 0 and the variance is 1 without much trouble. But I can't prove that $\operatorname{Cor} \left( X_{t}, X_{t+h} \right) = 0$ for all $t, h \in \mathbb{N}$. In fact, I can't even see how to do this without knowing whether the $Z_t$s are independent or not.
Any help or hints are appreciated. I'm very stuck here and I feel I'm missing some very basic knowledge.
| Proving that a series is white noise, but not IID | CC BY-SA 4.0 | null | 2023-04-15T18:04:59.753 | 2023-04-15T21:41:41.210 | 2023-04-15T21:41:41.210 | 5176 | 186160 | [
"time-series",
"self-study",
"iid",
"white-noise"
] |
613054 | 2 | null | 613045 | 10 | null | The distribution of $Y$ can also be written as $$Y = \min(Z_1,Z_2) = \min(U_1,U_2) - X_1 = Q-X_1$$
This contains two parts:
- The distribution of a minimum of iid variables $Q=\min(U_1,U_2)$ whose cdf can be expressed in terms of the cdf of $U_i$, $$P(\min(U_1,U_2)> x) = P(U_i > x)^2$$
- The distribution of a sum $Q-X_1$, which can be found with a convolution.
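The first identity holds sample by sample, because subtracting the common $X_1$ commutes with taking the minimum; a quick numeric check (with an arbitrary normal choice for the $U_i$):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
U1, U2 = rng.normal(1.0, 2.0, size=(2, n))  # any iid choice for the U_i
X1 = rng.exponential(1.0, n)

lhs = np.minimum(U1 - X1, U2 - X1)          # min(Z1, Z2)
rhs = np.minimum(U1, U2) - X1               # Q - X1
print(np.array_equal(lhs, rhs))             # True: identical sample by sample
```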
| null | CC BY-SA 4.0 | null | 2023-04-15T18:07:16.323 | 2023-04-15T19:39:25.697 | 2023-04-15T19:39:25.697 | 164061 | 164061 | null |
613055 | 2 | null | 613051 | 0 | null | Indeed,
\begin{align*}
\mathbb P(X_1<X_2) &= \mathbb E[\mathbb I_{X_1<X_2}]\\
&= \mathbb E[\mathbb E\{\mathbb I_{X_1<X_2}|Z_1,Z_2\}]\\
&= \mathbb E[\mathbb E\{\mathbb I_{X_1-X_2+\mu_{2Z_2}-\mu_{1Z_1}<\mu_{2Z_2}-\mu_{1Z_1}}|Z_1,Z_2\}]\\
&= \mathbb E[\Phi(\{\mu_{2Z_2}-\mu_{1Z_1}\}/\{\sigma^2_{2Z_2}+\sigma^2_{1Z_1}\}^{1/2})]
\end{align*}
when denoting the labels by $Z_1$ and $Z_2$, resp., and $\mu_{ij},\sigma_{ij}$ the parameters of the respective Normal components. Unless the terms
$$\{\mu_{2j}-\mu_{1k}\}/\{\sigma^2_{2j}+\sigma^2_{1k}\}^{1/2}$$
are identical for several pairs $(j,k)$, it is alas a computation of order $O(n^2)$. Due to the equally weighted components, simulation would not bring much of an improvement unless $n$ is very large and components can start being neglected.
This probability can also be written as
$$\mathbb E[F_X(Y)]=\int F_X(y) f_Y(y)\,\text dy$$
but the integrand is again of order $O(n^2)$.
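The naive $O(n^2)$ sum above is straightforward to implement; here is a Python sketch (the two small component lists are made-up examples) together with a Monte Carlo cross-check:

```python
import math
import random

def p_less(mix1, mix2):
    """P(X1 < X2): average the pairwise normal results -- O(n^2)."""
    Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    total = 0.0
    for m1, s1 in mix1:                     # (mean, sd) of each component
        for m2, s2 in mix2:
            total += Phi((m2 - m1) / math.hypot(s1, s2))
    return total / (len(mix1) * len(mix2))

random.seed(0)
mix1 = [(0.0, 1.0), (2.0, 0.5)]             # equally weighted components
mix2 = [(1.0, 1.0), (3.0, 2.0)]
p = p_less(mix1, mix2)

# Monte Carlo check: draw from each mixture and compare
N = 200_000
hits = 0
for _ in range(N):
    m1, s1 = random.choice(mix1)
    m2, s2 = random.choice(mix2)
    hits += random.gauss(m1, s1) < random.gauss(m2, s2)
print(p, hits / N)                          # the two agree
```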
| null | CC BY-SA 4.0 | null | 2023-04-15T18:21:50.290 | 2023-04-15T18:34:34.117 | 2023-04-15T18:34:34.117 | 7224 | 7224 | null |
613056 | 2 | null | 613045 | 6 | null | As noted by @YashaswiMohanty the expectation of $Y$ can sometimes
be found without making the probability distribution function explicit.
Assume that all r.vs are of continuous type and $X_1$ is exponential
with rate $\lambda >0$. We can consider the survival function
$\bar{F}_Y(y) := 1 - F_Y(y)$
$$
\bar{F}_Y(y) = \text{Pr}\{ \min(Z_1,\,Z_2) > y\}
=\text{Pr}\{[Z_1 > y] \cap [Z_2
> y] \}.
$$
Then by conditioning on $X_1$ we can use the independence
\begin{align*}
\bar{F}_Y(y)
&=\int_0^\infty \text{Pr}\{[Z_1 > y] \cap [Z_2
> y] \, \vert \, X_1 = x_1\} f_{X_1}(x_1) \,\text{d}x_1\\
&= \int_0^\infty \text{Pr}\{[U_1 > y + x_1] \cap [U_2
> y + x_1] \, \vert \, X_1 = x_1\} \, f_{X_1}(x_1) \,\text{d}x_1\\
&= \int_0^\infty \bar{F}_U(y + x_1)^2 \lambda \, e^{-\lambda x_1}\, \text{d}x_1
\end{align*}
There are some cases where we can get a closed form expression. For
instance if $U_i$ are exponential with rate $\gamma$ i.e.,
$\bar{F}_U(u) = e^{-\gamma u}$ for $u >0$.
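For example, with made-up rates $\gamma = 1$ and $\lambda = 1/2$, carrying the integral out gives $\bar{F}_Y(y) = \lambda e^{-2\gamma y}/(\lambda + 2\gamma)$ for $y \ge 0$ and $\bar{F}_Y(y) = 1 - 2\gamma e^{\lambda y}/(\lambda + 2\gamma)$ for $y < 0$, which a quick Monte Carlo sketch confirms:

```python
import math
import numpy as np

rng = np.random.default_rng(11)
gamma, lam = 1.0, 0.5                      # illustrative rates
n = 500_000
U1 = rng.exponential(1 / gamma, n)
U2 = rng.exponential(1 / gamma, n)
X1 = rng.exponential(1 / lam, n)
Y = np.minimum(U1, U2) - X1

def surv_Y(y):
    """Closed-form P(Y > y) for exponential U_i (rate gamma), X_1 (rate lam)."""
    if y >= 0:
        return lam * math.exp(-2 * gamma * y) / (lam + 2 * gamma)
    return 1 - 2 * gamma * math.exp(lam * y) / (lam + 2 * gamma)

for y in (-1.0, 0.0, 0.5):
    print(y, (Y > y).mean(), surv_Y(y))    # empirical vs. closed form
```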
Interestingly, this is a simple and efficient way to generate a couple
of random variables with tail dependence.
| null | CC BY-SA 4.0 | null | 2023-04-15T18:28:15.393 | 2023-04-15T19:15:00.550 | 2023-04-15T19:15:00.550 | 10479 | 10479 | null |
613057 | 1 | null | null | 2 | 81 | I am comparing 4 groups of samples using SPSS. The groups are 4 bathymetry groups (>0m, >-1m, >-2m, >-3m). Within each group are a number of samples n. Each sample has a recorded seagrass cover % per meter squared. n ranges between 6 and 65 for all 4 samples. Samples are not normally distributed. A kruskal wallis test is used to determine significant differences in the distribution of seagrass cover % per meter squared. When I compare all 4 groups (pairwise comparisons), my results show that one group is significantly different from the rest. However; when I remove that group from the test and redo the kruskal wallis test with the remaining 3 groups (that didn't show significant difference from each other in the previous test), one of the groups is now significantly different. If i remove that group, leaving me with 2 groups (which were not significantly different in the two previous test), and I do a Mann-Whitney U test, it says that the groups are now significantly different. Please explain what is going on here.
| Kruskal-Wallis returns only one significant group (out of 4 groups), when significant group is removed and test redone, another group is significant? | CC BY-SA 4.0 | null | 2023-04-15T18:49:01.647 | 2023-04-15T20:10:19.180 | null | null | 385784 | [
"variance",
"nonparametric",
"multiple-comparisons",
"wilcoxon-mann-whitney-test",
"kruskal-wallis-test"
] |
613058 | 1 | null | null | 0 | 14 | I need advice on how to solve this problem. Topics I see in my school are Low Risk Decision Making, Decision Making, Decision Making Under Uncertainty, and Decision Trees The results must be shown with graphs, and explanation. The company bolitas is dedicated to the sale of balls, backpacks and caps. The company has 2 suppliers (x and y). The quality of their products is shown in the following table.
|Articles |Defective pieces |Probability for supplier x |Probability for supplier y |
|--------|----------------|--------------------------|--------------------------|
|balls |2% |.80 |.40 |
|Backpacks |3% |.0 |.30 |
|Caps |6% |.20 |.30 |
The probability of receiving a batch of balls from supplier x is 70%. The orders made by the company are 2000 balls per month. 1 defective ball can be repaired for 5 pesos. However, supplier y is willing to sell the 2000 pieces for 6 pesos less than supplier x.
Taking into account that one supplier's quality is lower, which supplier should the Bolitas company choose in order to have higher profits?
| I need advice on. Inventory theory, Decision making under uncertainty, Decision trees | CC BY-SA 4.0 | null | 2023-04-15T19:05:29.293 | 2023-04-15T19:10:53.043 | 2023-04-15T19:10:53.043 | 362671 | 385787 | [
"probability",
"hypothesis-testing",
"model"
] |
613060 | 2 | null | 612983 | 1 | null | While I haven't been able to find a conjugate prior, I believe the following MCMC algorithm will suffice:
In each iteration $r$:
($r.1$) draw the missing elements ${X_{-c}}^r$ given $X_c, Σ^{r-1}$,
($r.2$) draw $Σ^r$ given $X_c, {X_{-c}}^r$.
Step ($r.1$) just requires drawing from a [Normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Conditional_distributions) while step ($r.2$) can draw from an Inverse-Wishart as described in the original question.
I believe that $Σ^r$ should converge in distribution to $Σ | X_c$.
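A minimal sketch of this two-step sampler in Python (hypothetical setup: zero-mean multivariate normal data with entries missing at random; `nu0` and `Psi0` are assumed prior hyperparameters, not taken from the original question):

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(7)
p, n = 3, 200
Sigma_true = np.array([[1.0, 0.5, 0.2],
                       [0.5, 1.0, 0.3],
                       [0.2, 0.3, 1.0]])
X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)
miss = rng.random((n, p)) < 0.2         # entries missing at random

nu0, Psi0 = p + 2, np.eye(p)            # weak inverse-Wishart prior (assumed)
Sigma = np.eye(p)
Xc = np.where(miss, 0.0, X)             # completed data; missing entries start at 0
draws = []
for it in range(300):
    # (r.1) draw the missing entries given the observed ones and Sigma
    for i in range(n):
        m, o = miss[i], ~miss[i]
        if not m.any():
            continue
        if not o.any():                 # nothing observed in this row
            Xc[i] = rng.multivariate_normal(np.zeros(p), Sigma)
            continue
        A = Sigma[np.ix_(m, o)] @ np.linalg.inv(Sigma[np.ix_(o, o)])
        cond_mean = A @ Xc[i, o]
        cond_cov = Sigma[np.ix_(m, m)] - A @ Sigma[np.ix_(o, m)]
        Xc[i, m] = rng.multivariate_normal(cond_mean, cond_cov)
    # (r.2) draw Sigma from its inverse-Wishart full conditional
    Sigma = invwishart.rvs(df=nu0 + n, scale=Psi0 + Xc.T @ Xc)
    if it >= 100:                       # discard burn-in
        draws.append(Sigma)

Sigma_hat = np.mean(draws, axis=0)      # posterior-mean estimate of Sigma
print(np.round(Sigma_hat, 2))
```

With enough data the posterior mean should sit close to the generating covariance, which serves as a rough check of the sampler.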
| null | CC BY-SA 4.0 | null | 2023-04-15T19:55:25.967 | 2023-04-15T19:55:25.967 | null | null | 161943 | null |
613061 | 2 | null | 613057 | 4 | null | [You are using the incorrect post hoc test](https://stats.stackexchange.com/a/95270). Mann-Whitney (a) does not use the same rankings as the K-S, so no surprise you are getting strange results, and (b) does not use the pooled variance assumed by the K-S null. Try Dunn's test or, even better, the Conover-Iman test.
| null | CC BY-SA 4.0 | null | 2023-04-15T20:10:19.180 | 2023-04-15T20:10:19.180 | null | null | 44269 | null |
613062 | 2 | null | 612228 | 0 | null | Conditions 2. and 3. are essentially the conditions for the Uniform Law of Large Numbers. In particular, condition 3. is the compact way to express the dominance condition of the ULLN.
A literature pointer to start is Newey, W. K., & McFadden, D. (1994). Large sample estimation and hypothesis testing. Handbook of econometrics, 4, 2111-2245.
Lemma 2.4,
| null | CC BY-SA 4.0 | null | 2023-04-15T20:18:14.800 | 2023-04-15T20:18:14.800 | null | null | 28746 | null |
613063 | 1 | 613069 | null | 0 | 44 | I have a propensity score weighted population (using IPTW) and I want to compute risk ratios on my weighted population. For this, I am using a weighted Poisson regression.
Let's suppose that "married" is the outcome and I want to see the risk ratio of treated vs non treated considering married as outcome.
```
library(WeightIt)
library(survey)
W.out <- weightit(treat ~ age + educ + race + nodegree + re74 + re75,
data = lalonde, estimand = "ATE", method = "ps")
d.w <- svydesign(~1, weights = W.out$weights, data = lalonde)
fit <- svyglm(married ~ treat , design = d.w, family="poisson")
exp(coefficients(fit))
exp(confint.default(fit))
```
For instance, in this case the weighted rate of "`married`" for each group is:
```
married_treated <- with(lalonde, weighted.mean(married[treat == 1], W.out$weights[treat == 1]))
married_nontreated <- with(lalonde, weighted.mean(married[treat == 0], W.out$weights[treat == 0]))
```
The result of the Poisson regression is basically equal to the ratio between `married_treated` / `married_nontreated`.
This should be correct because RR is risk of outcome in exposed people / risk of outcome in unexposed.
The risk of the outcome in exposed patients is equal to the frequency of married people in the exposed group divided by the total number of people in the exposed group (married / total number exposed). Accordingly, the weighted rate of married in the treated group should be the weighted risk in the exposed, while the weighted rate of married in the non-treated group should be the weighted risk in the unexposed. Therefore, RR = weighted risk treated / weighted risk non-treated.
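As a numerical sanity check of this reasoning, here is a minimal Python sketch (simulated data, not the `lalonde` set; the loop is a hand-rolled IRLS fit of a weighted Poisson regression) confirming that the exponentiated treatment coefficient equals the ratio of weighted means:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
treat = rng.integers(0, 2, n)
y = rng.binomial(1, np.where(treat == 1, 0.6, 0.4))  # binary outcome
w = rng.uniform(0.5, 2.0, n)                          # stand-in for IPTW weights

X = np.column_stack([np.ones(n), treat])
beta = np.zeros(2)
for _ in range(50):  # IRLS for a weighted Poisson GLM with log link
    mu = np.exp(X @ beta)
    W = w * mu
    z = X @ beta + (y - mu) / mu
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

rr_glm = np.exp(beta[1])
rr_direct = (np.average(y[treat == 1], weights=w[treat == 1])
             / np.average(y[treat == 0], weights=w[treat == 0]))
print(rr_glm, rr_direct)  # identical up to numerical precision
```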
However, I would like a confirmation.
Is this the correct way of computing risk ratios in a population balanced through IPTW?
| Calculate risk ratio in weighted population | CC BY-SA 4.0 | null | 2023-04-15T20:27:38.113 | 2023-04-15T21:41:36.103 | 2023-04-15T20:38:10.167 | 384938 | 384938 | [
"r",
"propensity-scores",
"weights",
"weighted-data",
"survey-weights"
] |
613064 | 1 | null | null | 1 | 27 | I would like to calculate the difference in log-odds between the error of two logistic regression models, given the correct answer aka ground truth (depression present${}= 1;$ depression absent${} = 0$) and the predicted probabilities of depression being present from each model (pA and pB).
I believe the formulas are as follows:
When the correct answer is that depression is present:
$$=(\log(1) - \log(1-pA)) - (\log(1) - \log(1-pB)).$$ This should reduce to:
$$=\log\left(\frac{1-pB}{1-pA}\right)$$
When the correct answer is that depression is absent:
$$(\log(0) - \log(pA)) - (\log(0) - \log(pB)).$$
I think that might reduce to the following, since the undefined $\log(0)$ terms cancel each other out:
$$\log\left(\frac{pB}{pA} \right)$$
But I'm not sure whether the $\log(0)$ terms can actually cancel each other out here.
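For what it's worth, $\log(0)$ does not just fail algebraically; it also fails outright numerically, so the cancellation cannot be carried out term by term (a quick Python illustration, unrelated to the models themselves):

```python
import math

try:
    math.log(0)
except ValueError as err:
    print(err)  # math domain error
```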
Are these the correct formulas or am I wrong? Is this the correct way to calculate the difference in log-odds between two logistic regression models?
| Calculate difference in log odds between two logistic regression models | CC BY-SA 4.0 | null | 2023-04-15T21:09:31.473 | 2023-04-15T21:33:31.910 | 2023-04-15T21:33:31.910 | 5176 | 213342 | [
"probability",
"logistic",
"logarithm",
"odds"
] |
613065 | 2 | null | 613047 | 0 | null | To clarify, measurement (or metric) invariance means the "respondents in both groups use the measure the same." However, scalar invariance is used to test if the means of the latent variable are different between groups. To clarify, this does NOT mean that respondents won't give the same response option for the same level of latent depression. Scalar invariance instead means that the average for the latent variables for the groups being measured are actually different (assuming you've established metric invariance).
If you wish to tackle the question of response styles (patterns of one group responding to Likert-type questions in a different fashion from another), this is another level of analysis for latent variable modeling. And, if you wish to show that your dependent variable construct is the "same" after you account for response styles, this will require having a latent grouping variable to indicate which type of response style is present. And then you will need to confirm (or not) that this response style latent variable "intercept" is different for your different groups.
I will forewarn you that this type of analysis requires a rather large number of respondents in the data set to be effective (e.g., to have enough power to detect both the latent construct of interest and the latent construct representing the response style).
Happy to share more if you're interested.
| null | CC BY-SA 4.0 | null | 2023-04-15T21:10:04.680 | 2023-04-15T21:10:04.680 | null | null | 199063 | null |
613067 | 2 | null | 613039 | 3 | null | The t-test considers standard error, which is related to but not synonymous with standard deviation. A quick word on the difference is that the standard error divides the standard deviation by the sample size. (A more detailed explanation would get into how you estimate the standard deviation.)
Consequently, it is fine that the standard deviations overlap like they do, yet the test rejects. As the sample size gets large, the standard errors shrink arbitrarily small, no matter how large the standard deviation is, and the t-test will reject even small differences in the means.
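A rough numeric sketch (made-up numbers, assuming two equal-sized groups): the t statistic for a fixed mean difference grows like $\sqrt{n}$, so even a difference that is tiny relative to the standard deviation eventually becomes "significant":

```python
import math

sd = 10.0    # large standard deviation (made-up)
diff = 1.0   # small difference in means (made-up)
for n in (10, 100, 10_000):
    se = sd * math.sqrt(2 / n)  # standard error of the difference, equal group sizes
    t = diff / se
    print(n, round(t, 2))  # t grows with n: 0.22, 0.71, 7.07
```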
Excel seems to be doing statistics correctly.
| null | CC BY-SA 4.0 | null | 2023-04-15T21:23:46.000 | 2023-04-15T21:23:46.000 | null | null | 247274 | null |
613068 | 1 | null | null | 1 | 26 | I would like to ask you a question regarding CNNs. Is it possible to train CNNs for such kind of pictures (~ 1100 x 703 pixels). They are scatter plots, which can be classsified into 2 categories (signal and background) based on the patterns of the dots. However, as far as I know images in the CNN process are shrinked into 256 x 256 pixels. Do you think the CNN will be able to resolve patterns in such scatter plots?
Thanks a lot in advance!
[](https://i.stack.imgur.com/RV4xs.png)
| Convolutional Neural Networks - resolution | CC BY-SA 4.0 | null | 2023-04-15T21:31:07.397 | 2023-04-15T21:31:07.397 | null | null | 385794 | [
"neural-networks"
] |
613069 | 2 | null | 613063 | 2 | null | That's one way to do it. But if you include covariates in the outcome model or if you want to generalize the effect to a subset of your population (e.g., for the average treatment effect in the treated), you should use weighted g-computation instead. This is explained in the `WeightIt` [vignette](https://ngreifer.github.io/WeightIt/articles/estimating-effects.html) on estimating effects.
Weighted g-computation involves fitting an outcome model with the IPW weights applied, then generating predicted values of the outcome under treatment and under control for each unit. Then, you compute the IPW-weighted mean of the predicted outcomes under treatment and under control, which are the marginal risks. Finally, you take the ratio of the marginal risks, and that is your IPW- and covariate-adjusted estimate of the marginal risk ratio.

A benefit of this method is that you can use whatever model you want to fit the outcome model, because the risk ratio does not correspond to a coefficient in the outcome model but rather is a function of the covariates, the model parameters, and the weights. This is important when including covariates in the outcome model, because a Poisson model is rarely if ever the right model for a binary outcome. It is useful as a convenience because the coefficient on treatment is equal to the log risk ratio, but that is only true when no covariates are present in the model. In contrast, a logistic regression model is more likely to fit the data well and generate valid predictions.
Below is how to do weighted g-computation manually. Getting standard errors is hard unless you bootstrap.
```
library(WeightIt)
data("lalonde", package = "cobalt")
W.out <- weightit(treat ~ age + educ + race + nodegree + re74 + re75,
data = lalonde, estimand = "ATE", method = "ps")
library(survey)
d.w <- svydesign(~1, weights = W.out$weights, data = lalonde)
fit <- svyglm(married ~ treat * (age + educ + race + nodegree + re74 + re75),
design = d.w, family = "quasibinomial")
# Predicted outcomes under treatment
pred_1 <- predict(fit, newdata = transform(lalonde, treat = 1),
type = "response")
# Predicted outcomes under control
pred_0 <- predict(fit, newdata = transform(lalonde, treat = 0),
type = "response")
# Marginal risks under treatment and control
Epred_1 <- weighted.mean(pred_1, W.out$weights)
Epred_0 <- weighted.mean(pred_0, W.out$weights)
# Marginal risk ratio
(RR <- Epred_1 / Epred_0)
#> [1] 0.6167727
```
Below is how you do it using `marginaleffects` as explained in the `WeightIt` [vignette](https://ngreifer.github.io/WeightIt/articles/estimating-effects.html).
```
# Using marginaleffects
library(marginaleffects)
avg_comparisons(fit, variables = "treat",
comparison = "lnratioavg",
transform = "exp",
wts = "(weights)")
#>
#> Term Contrast Estimate Pr(>|z|) 2.5 % 97.5 %
#> treat ln(mean(1) / mean(0)) 0.617 0.048 0.382 0.996
#>
#> Columns: term, contrast, estimate, p.value, conf.low, conf.high, predicted, predicted_hi, predicted_lo
```
You should always use the `marginaleffects` strategy because it produces the right answer no matter what outcome model you use or whether you have covariates in the outcome model or not. When there are no covariates in the outcome model, it produces the same estimate as the exponentiated coefficient on treatment in a Poisson regression model.
| null | CC BY-SA 4.0 | null | 2023-04-15T21:41:36.103 | 2023-04-15T21:41:36.103 | null | null | 116195 | null |
613070 | 1 | null | null | 0 | 40 | I have a problem which never happened to me before.
I am testing a binary outcome (yes/no), treated vs non-treated.
I performed a weighted chi-square test using `wtd.chi.sq` (to see whether treated and non-treated differ in the weighted rate of the outcome), which gave a p value of 0.027. I also computed the SMD, which is >0.10. Thus there is a difference in the weighted rate between treated and non-treated.
Code for the weighted chi square: `wtd.chi.sq(lalonde$treat, lalonde$married, weight=W.out$weights)`
This is just an example of the code.
For example:
- Married treated weighted rate: XXX
- Married non treated weighted rate: XXX
Weighted chi square p value: 0.027 (number comes from my real data which I cannot share here)
On the same variables, I performed a Poisson regression (with no covariates; the formula is `outcome ~ treat`, using `svydesign`, as I explained [here](https://stats.stackexchange.com/questions/613063/calculate-risk-ratio-in-weighted-population)) in order to calculate the risk ratio, with the following result:
RR (95% CI): 1.44 (0.98 - 2.09)
P value = 0.053
Why is this happening? Why is the weighted risk ratio I computed non-significant while the p value of the weighted chi-square test is significant?
| Weighted chi square gives significant p value but Poisson regression doesn't | CC BY-SA 4.0 | null | 2023-04-15T22:25:54.267 | 2023-04-16T12:25:25.260 | 2023-04-16T12:25:25.260 | 384938 | 384938 | [
"hypothesis-testing",
"p-value",
"chi-squared-test",
"poisson-regression",
"weighted-regression"
] |