Id stringlengths 1 6 | PostTypeId stringclasses 7 values | AcceptedAnswerId stringlengths 1 6 ⌀ | ParentId stringlengths 1 6 ⌀ | Score stringlengths 1 4 | ViewCount stringlengths 1 7 ⌀ | Body stringlengths 0 38.7k | Title stringlengths 15 150 ⌀ | ContentLicense stringclasses 3 values | FavoriteCount stringclasses 3 values | CreationDate stringlengths 23 23 | LastActivityDate stringlengths 23 23 | LastEditDate stringlengths 23 23 ⌀ | LastEditorUserId stringlengths 1 6 ⌀ | OwnerUserId stringlengths 1 6 ⌀ | Tags list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
612826 | 2 | null | 214963 | 0 | null | You could reckon that the angst over Simpson's Paradox is just hysterical and overdone.
The difficult phenomenon we really have is CONFOUNDING.
The general problem with Simpson's Paradox is that, when the data set is stratified, the results within the strata can differ from, or even reverse, the overall result.
Stratification could be, for example, over age.
So, only old people (such as me) are compared with old people, and young with young.
Hence, age is no longer a confounder, since like is being compared with like (young with young, old with old).
Thus, the initial 'jumbled up' data set, where age was still a confounder, could indeed have had quite a dissimilar overall outcome.
CONFOUNDING, and then SAMPLE SIZE, are the two most terrible difficulties in statistical analysis.
I suppose you could wearily conclude that Simpson's paradox gives us all the labyrinthine, incomprehensible complexity of CONFOUNDING.
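To make the reversal concrete, here is a minimal numeric sketch (the counts are classic illustrative numbers, not taken from any particular study):

```python
# Hypothetical (successes, trials) counts for two treatments, stratified by age
young = {"A": (81, 87), "B": (234, 270)}
old = {"A": (192, 263), "B": (55, 80)}

def rate(successes, trials):
    return successes / trials

# Within each age stratum, treatment A does better...
assert rate(*young["A"]) > rate(*young["B"])
assert rate(*old["A"]) > rate(*old["B"])

# ...yet pooled over the confounder (age), B appears to do better.
pooled_A = rate(81 + 192, 87 + 263)
pooled_B = rate(234 + 55, 270 + 80)
assert pooled_B > pooled_A
```

Once you compare young with young and old with old, the apparent advantage of B vanishes: it was an artifact of the confounder.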
| null | CC BY-SA 4.0 | null | 2023-04-13T16:37:29.887 | 2023-04-13T16:43:24.467 | 2023-04-13T16:43:24.467 | 56940 | 385626 | null |
612827 | 2 | null | 612819 | 3 | null | $R$ is the ordinary least squares (OLS) estimate of the slope of the regression of the $y_i$ on the $x_i.$ Because the regression line must go through the point of averages $(\bar x, \bar y),$ adjoining any number of instances of that point to the dataset will not change the sum of squares and therefore will not change the OLS solution, whence $R$ will be unchanged.
| null | CC BY-SA 4.0 | null | 2023-04-13T16:45:53.147 | 2023-04-13T17:49:00.167 | 2023-04-13T17:49:00.167 | 919 | 919 | null |
612828 | 1 | null | null | 2 | 41 | The calculation of the MAD (Mean Absolute Deviation)/Mean ratio is this, according to the title:
$$
\frac{\overline{\left | Forecast - Demand \right |}}{\overline{Demand}}$$
However, the calculation is often shown as this, because of the derivation from wMAPE (weighted Mean Absolute Percentage Error), with sum not average:
$$
\frac{1}{\sum Demand}\sum_{}^{}Demand\frac{\left | Forecast - Demand \right |}{Demand}=\frac{\sum_{}^{}\left | Forecast - Demand \right |}{\sum_{}^{}Demand}
$$
I understand why it's shown like that, because of the derivation, but that means that the title (mean) and the calculation (sum) don't match up. Are there pros and cons to using the mean or the sum? The key ones that I can think of for using the average are:
- Fair comparison between numerators and denominators of different populations, because sum is affected by number of observations
- Average is less affected by missing data than sum is, when you apply that aggregation across a column
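One point worth making concrete: when both numerator and denominator are averaged over the same complete set of $n$ observations, the two forms coincide exactly, because the $1/n$ cancels; they only diverge once missing values or differing counts enter. A quick numpy check with made-up numbers:

```python
import numpy as np

forecast = np.array([100., 110., 95., 105.])
demand = np.array([90., 120., 100., 100.])

# "sum" form, as derived from wMAPE
wmape_sum = np.abs(forecast - demand).sum() / demand.sum()

# middle expression: demand-weighted MAPE, before the Demand terms cancel
wmape_weighted = (demand * np.abs(forecast - demand) / demand).sum() / demand.sum()
assert np.isclose(wmape_weighted, wmape_sum)

# "mean" form from the title: MAD / mean(Demand); with the same n in both,
# the 1/n cancels and it equals the sum form exactly
mad_over_mean = np.abs(forecast - demand).mean() / demand.mean()
assert np.isclose(mad_over_mean, wmape_sum)
```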
| MAD/Mean Ratio - advantages/disadvantages of using average or sum | CC BY-SA 4.0 | null | 2023-04-13T16:59:29.617 | 2023-04-13T20:54:13.837 | null | null | 173459 | [
"forecasting",
"measurement-error",
"mean-absolute-deviation"
] |
612829 | 1 | null | null | 3 | 105 | Generally, my default practice in regression for nominal categorical variables, including race, is to use dummy coding, with the majority/plurality level as reference. Interpretation of the model coefficients using this scheme is straightforward.
Additionally, I typically view comparisons to the majority/plurality group as most relevant, and for this coding scheme those comparisons are simply evident in the estimated coefficients and it's straightforward to test only these comparisons. They are also the best-sampled pairwise comparisons (i.e., the tests with the most power for given effect size). For a sample in the US population, that usually means white race is the reference category.
Recently, a colleague of mine received some criticism of this approach, the argument being that using "white" as a default category propagates the bias that "white" is normal/typical, and that it's better to compare each group to "all" as a reference, or perhaps to choose the measured maximum/minimum category as the reference (whichever is preferable with respect to the dependent variable).
I appreciate the sentiment behind this, but the interpretation seems flawed to me. For an outcome where disparities are expected due to racism or bias, a comparison of one race category to the mean across all categories (weighted or not) seems to dilute the size of any disparities that are present across more than one non-majority race. Planning contrasts only with the "best" point estimate could mean the comparison group is likely to be undersampled and introduces selection bias. Unfortunately, I wasn't present and was unable to follow up with the person raising an objection as to what specifically they are proposing.
Am I missing some alternative? I'd be interested in any supported proposals of best-practices for handling these types of variables. I understand the use of "race" as a variable is unfamiliar/unusual to many people outside the US and would prefer not to relitigate those issues here: from my perspective, perceived race is not useful as a biological variable, but is nonetheless important because it impacts how people are treated by others in society and therefore affects health and healthcare.
---
A colleague suggested the criticism may have been motivated by papers like [https://journals.sagepub.com/doi/abs/10.1177/0081175020982632](https://journals.sagepub.com/doi/abs/10.1177/0081175020982632) that suggest the use of mean contrasts or binary contrasts. That would help answer what alternative I'm missing, but I'm still a bit uncertain about these suggestions, as they still seem to bring other problems with interpretation.
| Choice of coding scheme/planned contrasts using race as a categorical variable | CC BY-SA 4.0 | null | 2023-04-13T17:06:26.133 | 2023-04-21T18:28:37.267 | 2023-04-21T16:14:50.790 | 135291 | 135291 | [
"regression",
"categorical-encoding"
] |
612830 | 2 | null | 612805 | 2 | null | One way to think about overfitting is that the model is fitting too closely to the random variation in your training dataset rather than to the underlying trend between your x and your y's. When this happens, the model can show a high performance in predicting the training set because it learned its specific noisy variability, but it could perform poorly in a testing set, because it is more adapted to the noise of the training set than to its signal. Thus, an indicator of overfitting is the difference in performance between training set and testing set. If this difference is large, that could indicate overfitting (although not necessarily, see [this related post](https://medium.com/all-things-ai/in-depth-parameter-tuning-for-random-forest-d67bb7e920d) and the comments).
In your particular case, the confusion matrix for your training set is considerably better than for your testing set, which could indicate overfitting. Also, you have practically perfect performance in your training set, which in some applications could be deemed unrealistic and thus indicative of overfitting (unless the problem is actually solvable to 100% accuracy). However, maybe this behavior is common in random forest models (see again the related post above).
You also see some weird pattern in the testing confusion matrix where a lot of the confusions are concentrated in column 0. This doesn't necessarily have to be indicative of overfitting, it could just be that under 'uncertain' cases, the model tends to assign class 0. This could happen, for example, if you have many more samples of class 0 than of the other classes (what is called an unbalanced dataset). In that case, for data points where there is high uncertainty, the model may rely more on its prior, and assign the class with the highest a priori probability, leading to a pattern like the one seen there.
Finally, you mention the model's inability to generalize to new data. Your model does show some decent performance in the testing set. As I mentioned above, the observed pattern of having a lot of confusions in the first column doesn't necessarily reflect overfitting. As also mentioned above, your model does show some indication of overfitting, and it would be desirable to check whether overfitting is actually making your testing performance worse. You could try to reduce overfitting by increasing the parameters min_samples_split and min_samples_leaf (see [this post](https://stats.stackexchange.com/questions/361983/random-forest-fully-fits-training-sample)). If this modification leads to improved testing performance, then you indeed have overfitting hindering your model's predictive capability. Whether your model is useful or not is, I think, something you should judge on the confusion matrix of the test set, and on whether the results there suit your purposes.
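As an aside, the "gap between training and testing performance" idea can be illustrated with a deliberately memorizing model; a minimal sketch (a 1-nearest-neighbor classifier on pure-noise labels, hypothetical data, not your random forest):

```python
import numpy as np

rng = np.random.default_rng(4)
# labels are pure noise: any "pattern" a model finds in training is memorized noise
X_train, y_train = rng.normal(size=(100, 2)), rng.integers(0, 2, 100)
X_test, y_test = rng.normal(size=(100, 2)), rng.integers(0, 2, 100)

def one_nn_predict(X_ref, y_ref, X):
    # predict each point's label from its single nearest reference point
    d2 = ((X[:, None, :] - X_ref[None, :, :]) ** 2).sum(axis=-1)
    return y_ref[np.argmin(d2, axis=1)]

train_acc = (one_nn_predict(X_train, y_train, X_train) == y_train).mean()
test_acc = (one_nn_predict(X_train, y_train, X_test) == y_test).mean()

assert train_acc == 1.0      # perfectly "solves" the training set...
assert test_acc < train_acc  # ...but the train/test gap exposes the overfit
```

Each training point's nearest neighbor is itself, so training accuracy is perfect by construction, while test accuracy hovers near chance: exactly the signature described above.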
| null | CC BY-SA 4.0 | null | 2023-04-13T17:10:24.920 | 2023-04-13T17:21:30.523 | 2023-04-13T17:21:30.523 | 134438 | 134438 | null |
612831 | 1 | null | null | 3 | 39 | I have read that the Vuong test is no longer considered appropriate for testing whether a ZINB fits better than a negative binomial model, because the comparison is neither strictly non-nested nor partially non-nested. I'm modeling in R (glmmTMB) and I cannot find a Vuong (nested) option. If the NB is nested in the ZINB (when ziformula=~0), is it OK to use the chi-squared goodness-of-fit test? My AIC is considerably lower for the ZINB vs the NB, but what is the correct test for comparing the two models? I do have theoretical reasons for why there would be structural zeros.
| ZINB vs binomial model goodness of fit testing | CC BY-SA 4.0 | null | 2023-04-13T17:10:37.827 | 2023-04-13T19:44:46.270 | 2023-04-13T18:42:34.647 | 205125 | 205125 | [
"chi-squared-test"
] |
612832 | 1 | null | null | 0 | 26 | I have started reading [this](https://projecteuclid.org/journals/annales-de-linstitut-henri-poincare-probabilites-et-statistiques/volume-48/issue-4/Challenging-the-empirical-mean-and-empirical-variance--A-deviation/10.1214/11-AIHP454.full) paper, and do not understand a line in the first paragraph of the introduction. The author says:
>
Indeed, as far as the mean square error is concerned, Gaussian distributions represent already the worst case, so that in the framework of a minimax mean least square analysis, no need is felt to improve estimators for non-Gaussian sample distributions.
So far I know that in the non-parametric setting, the sample mean is a minimax estimator of the expected value of a random variable when using the squared loss, for the class of distributions with finite variance (see Bickel and Doksum,
2015, Example 3.3.4).
But, I do not understand what he means by "Gaussian distributions already represent the worst case".
| Empirical mean optimal mean square error, Gaussian is the worst case | CC BY-SA 4.0 | null | 2023-04-13T17:32:32.240 | 2023-04-13T17:32:32.240 | null | null | 283493 | [
"probability",
"mathematical-statistics",
"normal-distribution",
"inference",
"estimators"
] |
612833 | 1 | null | null | 1 | 29 | Let $A_t$ and $B_t$ be $I(1)$ processes and assume that they are cointegrated, i.e. there exists $\beta$ such that $A_t - \beta B_t$ is $I(0)$. Wooldridge's Introductory Econometrics text (5th edition, page 646) claims that if we fix $A_t$'s coefficient at 1, then $\beta$ is unique.
A naive proof would be to assume to the contrary that there exists $\delta \ne \beta$ such that $A_t - \delta B_t$ is $I(0)$, then take the difference of the two $I(0)$ processes to get $(\beta - \delta)B_t$. If the difference of $I(0)$ processes were also $I(0)$, we would reach a contradiction, because $(\beta - \delta)B_t$, a nonzero scalar multiple of the $I(1)$ process $B_t$, cannot be $I(0)$.
The issue is that it is generally not true that the difference of $I(0)$ processes is also $I(0)$, unless they are jointly stationary. See [here](https://stats.stackexchange.com/questions/337636/is-stationarity-preserved-under-a-linear-combination).
Is there something that I am missing?
Edit: Rephrased the question, and as pointed out by @Richard Hardy, stationarity is not necessarily the same as $I(0)$, and the post I linked to deals with stationarity.
| Uniqueness of cointegration coefficient | CC BY-SA 4.0 | null | 2023-04-13T17:38:30.663 | 2023-04-14T17:42:46.607 | 2023-04-14T17:42:46.607 | 154728 | 154728 | [
"stationarity",
"cointegration"
] |
612835 | 1 | null | null | 1 | 21 | I am using hierarchical GAMs to examine the effect of weather covariates (n=21) on annual bird counts. The hierarchical part is to account for nested observations with `s(site, bs="re")` and temporal autocorrelation with `s(year, by=site, m=2)`. None of my covariates have significant smooths in the global model, so I have fit them all as parametric terms. My model looks as follows:
```
m = gam(nests ~ x1 + x2 + x3 + .... + x21 + s(site, bs="re") + s(year, by=site, m=2), data=data, family=poisson, method="REML")
```
My question is:
How do I perform selection on GAMs if all covariates of interest are parametric terms? I can't treat the parametric terms as low-degree smooths and use select=TRUE ([as recommended here](https://stats.stackexchange.com/questions/340387/gam-selection-when-both-smooth-and-parametric-terms-are-present)) due to concurvity issues, and I'm apprehensive to use the paraPen argument to penalize the parametric terms because it makes approximate p values unreliable.
Any help is greatly appreciated!
| Selection on HGAM with only parametric terms | CC BY-SA 4.0 | null | 2023-04-13T18:00:21.570 | 2023-04-13T18:00:21.570 | null | null | 286723 | [
"model-selection",
"generalized-additive-model",
"mgcv"
] |
612836 | 1 | null | null | 0 | 9 | We have a Linear Hierarchical Model where
$$Y_i | \theta_i \sim N(\theta_i,1)$$
$$\theta_i | A \sim N(0,A)$$
with
$$Y_i |A \sim N(0,A+1)$$
where $ i = 1,2,\ldots,k.$
I found the likelihood function for $A$ by using the marginal distribution of the $Y_i$'s, where
$$L(A+1) = \prod_{i=1}^k f_{Y_i}(y_i | A+1)$$
$$ = [2\pi(A+1)]^\frac{-k}{2} e^{-\frac{1}{2(A+1)}\sum_{i=1}^k y_i^2}I(A \geq 0) $$
From this, I found the MLE for $A$, which is
$$\hat{A} = \max \left(\frac{\sum_{i=1}^k y_i^2}{k} - 1,\ 0 \right) $$
where the truncation at $0$ is needed because $A \geq 0$, since it is a variance.
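Since the likelihood depends on $A$ only through the variance $A+1$, its unconstrained maximizer satisfies $\hat{A}+1 = \sum_{i} y_i^2/k$, giving $\hat{A} = \max(\sum_i y_i^2/k - 1,\ 0)$. A quick numerical sanity check of this closed form (simulated data and a brute-force grid search, just as a verification sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
A_true, k = 3.0, 200
y = rng.normal(0.0, np.sqrt(A_true + 1.0), size=k)  # marginal Y_i ~ N(0, A+1)

closed_form = max((y ** 2).mean() - 1.0, 0.0)

# brute-force check: minimize the negative marginal log-likelihood over a grid
grid = np.linspace(0.0, 10.0, 100001)
v = grid + 1.0
neg_loglik = 0.5 * k * np.log(2 * np.pi * v) + (y ** 2).sum() / (2 * v)
numeric = grid[np.argmin(neg_loglik)]

assert abs(closed_form - numeric) < 1e-3
```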
However, I do not know how to proceed with a generalized likelihood ratio to find the test statistic. I don't want to use a large sample approximation by taking the log to get a chi squared distribution. Apparently the Generalized likelihood ratio is supposed to evaluate to a known distribution. I want to test the hypothesis that
$$H_0: A = 0$$
$$H_1: A > 0$$
and I'm not sure how to proceed.
| Generalized Likelihood Ratio for variance in Normal Hierarchical Models | CC BY-SA 4.0 | null | 2023-04-13T18:07:51.743 | 2023-04-13T18:27:46.643 | 2023-04-13T18:27:46.643 | 362671 | 385628 | [
"hypothesis-testing",
"mathematical-statistics",
"likelihood-ratio"
] |
612837 | 1 | null | null | 0 | 45 | Suppose $X_1$ is a random variable. I want to test the hypothesis $$ H_0:X_1\sim \delta_0 \textrm{ vs } H_1:X_1\not \sim \delta_0.$$
where $\delta_0$ is the Dirac measure at $0$. As test statistic I take $T(x)=x$. If I get the sample $x_1=0$, then I would calculate the p-value through
$$2\cdot\min\{P(T(x_1)\geq T(X_1)|H_0),P(T(x_1)\leq T(X_1)|H_0)\}
=2\cdot\min\{P(0\geq 0|H_0),P(0\leq 0|H_0)\}=2.$$
I don't understand how I can get a p-value over 1, if the p-value is supposed to represent the probability of an "even more extreme event regarding $T$" under the null hypothesis. Maybe if the distribution of the test statistic under the null hypothesis is discrete, the two-sided p-value has to be calculated through
$$ \min \{2\min\{P(T(x_1)\geq T(X_1)|H_0),P(T(x_1)\leq T(X_1)|H_0)\},1\}.$$
In this situation a p-value of 1 would make sense, but I have not found anything about handling p-values of discrete distributions differently. This example is obviously made up, but I had a similar problem with the hypergeometric distribution, where suddenly the left- and right-sided p-values were both greater than 0.5, so the doubled p-value was greater than 1. I would appreciate some guidance on which mistake I made.
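For what it's worth, the capping at 1 can be reproduced with a simple discrete null; a sketch with a Binomial(10, 0.5) test statistic observed exactly at the center of the null distribution (made-up example):

```python
from math import comb

n, p = 10, 0.5
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

x = 5  # observed value, right at the center of the null distribution
p_right = sum(pmf[x:])       # P(X >= x)
p_left = sum(pmf[:x + 1])    # P(X <= x)

two_sided_naive = 2 * min(p_right, p_left)
assert two_sided_naive > 1   # the doubling rule overshoots 1 here

two_sided = min(two_sided_naive, 1.0)
assert two_sided == 1.0
```

Because the observed point mass is counted in both tails, the two one-sided p-values sum to more than 1, and doubling the smaller one can exceed 1; truncating at 1 is the usual remedy.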
| Two sided p-value for a discrete distribution is greater than one | CC BY-SA 4.0 | null | 2023-04-13T18:17:10.787 | 2023-04-14T15:15:13.393 | 2023-04-14T15:15:13.393 | 373404 | 373404 | [
"hypothesis-testing",
"mathematical-statistics",
"p-value"
] |
612838 | 1 | null | null | 0 | 5 | I have a dataset with symptoms and their severity (integers 0-10, over 20 variables), and columns with the chosen treatment (0/1, over 10 variables).
The dataset includes over 1000 records.
A correlation plot seems inconvenient to me.
I am looking for a statistical test and/or visualization for the analysis of associations between declared symptoms and treatment.
I am coding in R.
| Statistical test & visualization type for associations between symptoms, and chosen treatment | CC BY-SA 4.0 | null | 2023-04-13T18:27:57.837 | 2023-04-13T18:27:57.837 | null | null | 254193 | [
"r",
"clustering",
"networks"
] |
612840 | 1 | null | null | 0 | 15 | Does anyone know where there is information on fit tests, or anything else that can be done, to help choose the best-fitting family and link for a GLM? I have panel data that I am trying to run a regression with, using a binary dependent variable (0/1). I am trying to decide between a GLM with a Poisson distribution and log link or a binomial distribution with logit link. Or is it best to just use a logit model?
If anyone knows where there is more information on how to correctly make these choices, either online, through fit tests, or in a specific text book, I would greatly appreciate the help!
| Figuring out best fit Family and Link for GLM Model | CC BY-SA 4.0 | null | 2023-04-13T18:46:43.413 | 2023-05-12T05:10:58.983 | null | null | 385635 | [
"generalized-linear-model"
] |
612841 | 2 | null | 188838 | 0 | null | Here's an implementation of a binary search algorithm that uses some probability techniques (possibly the same as Thimothy mentioned in his answer) to deal with a noisy binary search: [https://github.com/adamcrume/robust-binary-search](https://github.com/adamcrume/robust-binary-search)
| null | CC BY-SA 4.0 | null | 2023-04-13T18:47:55.763 | 2023-04-13T18:47:55.763 | null | null | 385634 | null |
612842 | 1 | null | null | 0 | 19 | I perform an LSA with `textmodel_lsa` of the quanteda package in R, but I have little idea about interpreting the results.
A minimal example taken from [here](https://quanteda.io/articles/pkgdown/examples/lsa.html)
```
txt <- c(d1 = "Shipment of gold damaged in a fire",
d2 = "Delivery of silver arrived in a silver truck",
d3 = "Shipment of gold arrived in a truck" )
mydfm <- dfm(txt)
mylsa <- textmodel_lsa(mydfm)
```
Now my questions to the return values:
- $sk are the singular values; is there a general rule for what counts as a high/significant singular value?
- Is it possible to choose something like "the dimensions that explain 80% of the variance"?
- are the matrices SVD of a decomposition X = SVD represented in the return?
- How can I find 2 or 3 dimensions such that patterns of documents, like clustering, emerge?
- same for dimensions. Is it possible to assign words/tokens to the dimensions to give them a topic?
Last one: how can I interpret, for example, the $feature matrix (rounded):
```
shipment -0.26 0.38 0.15
of -0.42 0.07 -0.05
gold -0.26 0.38 0.15
damaged -0.12 0.27 -0.45
in -0.42 0.07 -0.05
a -0.42 0.07 -0.05
fire -0.12 0.27 -0.45
delivery -0.16 -0.30 -0.20
silver -0.32 -0.61 -0.40
arrived -0.30 -0.20 0.41
truck -0.30 -0.20 0.41
```
Does this mean that "truck" and "arrived" are most associated with dimension 3 (highest value)? What is the difference between negative and positive values?
I tried to learn it from [here](https://www.mzes.uni-mannheim.de/socialsciencedatalab/article/advancing-text-mining/#lsa) but was left with all these questions.
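My current understanding (which may be wrong) is that textmodel_lsa is essentially a truncated SVD of the document-feature matrix; a small numpy sketch of that decomposition, with hypothetical counts, shows where $sk, the document scores, and the feature loadings would come from, and why the sign of a dimension is arbitrary:

```python
import numpy as np

# hypothetical document-feature counts (rows = d1..d3, columns = 4 terms)
dfm = np.array([[1, 1, 1, 0],
                [0, 1, 0, 2],
                [1, 1, 0, 1]], dtype=float)

# LSA boils down to the SVD  X = U diag(s) V^T
U, s, Vt = np.linalg.svd(dfm, full_matrices=False)

# $sk would correspond to s; a common "variance explained" style summary
# for choosing k dimensions is the cumulative share of squared singular values
explained = np.cumsum(s**2) / np.sum(s**2)
assert np.isclose(explained[-1], 1.0)

# rows of U relate to document scores and rows of V^T to feature loadings;
# flipping the sign of an entire dimension in both factors leaves the
# reconstruction unchanged, so only relative signs (which items load
# together on a dimension) are interpretable
U_flipped, Vt_flipped = U.copy(), Vt.copy()
U_flipped[:, 0] *= -1
Vt_flipped[0, :] *= -1
assert np.allclose((U_flipped * s) @ Vt_flipped, dfm)
```

This is only a sketch of the linear algebra, not a reproduction of quanteda's exact preprocessing, but it suggests answers to two of the questions: "variance explained" can be summarized via the squared singular values, and negative versus positive loadings matter only relative to each other within a dimension.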
| Interpretation of LSA results | CC BY-SA 4.0 | null | 2023-04-13T17:30:44.160 | 2023-05-27T01:52:57.997 | 2023-05-27T01:52:57.997 | 11887 | 385699 | [
"r",
"latent-semantic-analysis"
] |
612843 | 1 | 615073 | null | 4 | 93 | I am comparing the difference of medians between two groups of sample sizes $n1$ and $n2$. I would like to confirm that my bootstrap approach for a finite population size, without pooling sample data, correctly provides a distribution function of the differences between samples. Below, I provide examples of the approaches that I've looked at. Approach 1 is provided as a reference (assuming a large population). I would like to confirm that approach 3 is sound, while better understanding how to interpret the differences in results between approaches 2 and 3.
Assuming a large population, I can compute the distribution of medians for each group using bootstrapping with replacement. To check if the observed difference is due to random error, use the following approach:
Approach 1, assume large population
- pool the samples from two groups together into a list of length $n1 + n2$,
- shuffle the pool,
- split the pool into "simulated" groups--cutting the shuffled list into new lists of sizes $n1$ and $n2$,
- compute the medians of each simulated pool,
- compute the differences of the medians in each pool,
- repeat steps 2-5 many times to calculate a set of median differences, and
- use the resulting cumulative distribution function of the set of median differences to understand the probability of observing various effect sizes due to chance (i.e., bin and count the results, divide the counts by the total number of resamples).
A similar example of this approach is in A.B. Downey's Think Stats (pg 105).
Now, for a finite population size, A.C. Davison and D.V. Hinkley's "Bootstrap Methods and their Application" provides methods to modify the resample size when bootstrapping statistics estimating a population quantity, where the population is a known, finite size (pg 92). For example, given a finite population size, we can adjust the resample size upwards to $n'$ where $n'=(n-1)/(1-n/N)$. Here, $N$ is the population size. (As the sample size approaches the population size, we have more certainty in the estimate. By adjusting the resample size upwards as $n$ approaches $N$, we tighten the test statistic's distribution to reflect this increased certainty.)
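A quick sketch of this adjustment for the class-survey numbers used later in this question (samples of 5 and 10 from classes of 15 and 20):

```python
import math

def adjusted_resample_size(n, N):
    """Finite-population adjustment n' = (n - 1) / (1 - n/N)."""
    return (n - 1) / (1 - n / N)

n1_prime = adjusted_resample_size(5, 15)   # 4 / (2/3) = 6
n2_prime = adjusted_resample_size(10, 20)  # 9 / (1/2) = 18

assert math.isclose(n1_prime, 6.0)
assert math.isclose(n2_prime, 18.0)
```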
I think that my above steps for shuffling a pool break down, because I'm now working with $n1'$ and $n2'$ sample sizes. So I went with the following approach:
Approach 2, fixed population
- compute $n1'$ and $n2'$
- bootstrap the median test statistics for group 1 and group 2 many times
- calculate the difference in medians between the groups (calculated in step 2)
- use the empirical/cumulative distribution function of the resulting differences to explore probabilities of observing given differences between the medians.
Is approach 2 correct? (It is similar to [Bootstrap sampling for ratio of means with uneven sample sizes](https://stats.stackexchange.com/questions/384243/bootstrap-sampling-for-ratio-of-means-with-uneven-sample-sizes?rq=1)) This second approach feels different than the first since I'm not pooling data together. My understanding is that by pooling, I'm testing to see if the two samples could have been generated by the same underlying population. Approach 2 doesn't seem to be accomplishing this since I'm not mixing the data before distributing the data between the two samples.
Approach 3
My intuition is to do somewhat of a hybrid:
- pool groups 1 and 2 and then
- resample from that pooled group two groups of size $n1'$ and $n2'$, and then
- use steps 4 through 7 of approach 1.
If I wasn't adjusting the group sizes for the finite population, I would shuffle the pooled data into new groups (without replacement) as in approach 1. By resampling with replacement, how should I interpret the results? Is it still correct to interpret the distribution in `fig_bsed_pool_deltas` as giving the probability of observing a delta due to random error? Or is this a misapplication of the technique? One thing that bothers me is that I pool the data, but then use the original group sizes rather than setting the population of each group to the sum of population_size_1 and population_size_2.
For reference, here is a toy example with Python code implementing approach 3. Suppose I'm at a middle school where I give the same lecture to both class 1 and class 2, with respective class sizes of 15 and 20 students. I suspect that class 2 likes the course better since I teach that class after I have had my coffee. To assess attitude between the classes, I survey 5 students in class 1 and 10 students in class 2. The responses from class 1 are {1,2,3,4,5}. The responses from class 2 are {2,3,4,5,6,7,2,3,4,5}. I want to know if the attitudes between the two classes taught by this teacher are different, say by greater than a certain value x. (In this example, I happen to have ordered categorical responses: a survey response from 1 to 7.)
Set up and define the inputs:
```
import numpy as np
import plotly.graph_objects as go
responses_1 = [1,2,3,4,5] #median is 3
responses_2 = [2,3,4,5,6,7,2,3,4,5] #median is 4
population_size_1 = 15
population_size_2 = 20
sam_pop_ratio_1 = len(responses_1)/population_size_1
sam_pop_ratio_2 = len(responses_2)/population_size_2
```
Approach 3:
```
def bootstrap_medians_pooled_approach(input_array_1, len_input_array_1, sam_pop_ratio_1, \
                                      input_array_2, len_input_array_2, sam_pop_ratio_2, \
                                      n_resamples):
    #sample 1: finite-population-adjusted resample size n' = (n-1)/(1-n/N)
    adjusted_n_1 = (len_input_array_1 - 1)/(1 - sam_pop_ratio_1)
    ##handle a decimal adjusted_n_1 by randomizing each resample size
    ##between floor(n') and floor(n')+1
    base_adjusted_n_1 = int(adjusted_n_1)
    fraction_adjusted_n_1 = adjusted_n_1 - base_adjusted_n_1
    adjusted_n_array_1 = [base_adjusted_n_1 + \
                          int(np.random.choice([0, 1], \
                              p=[1 - fraction_adjusted_n_1, fraction_adjusted_n_1])) \
                          for _ in range(n_resamples)]
    #sample 2 (same setup as above for sample 1)
    adjusted_n_2 = (len_input_array_2 - 1)/(1 - sam_pop_ratio_2)
    base_adjusted_n_2 = int(adjusted_n_2)
    fraction_adjusted_n_2 = adjusted_n_2 - base_adjusted_n_2
    adjusted_n_array_2 = [base_adjusted_n_2 + \
                          int(np.random.choice([0, 1], \
                              p=[1 - fraction_adjusted_n_2, fraction_adjusted_n_2])) \
                          for _ in range(n_resamples)]
    pooled_array = input_array_1 + input_array_2
    #create lists of resampled (with replacement) medians for group 1 and group 2
    medians_1 = [np.median(np.random.choice(pooled_array, size=x)) \
                 for x in adjusted_n_array_1]
    medians_2 = [np.median(np.random.choice(pooled_array, size=x)) \
                 for x in adjusted_n_array_2]
    #return the resampled differences of medians
    return [m2 - m1 for m1, m2 in zip(medians_1, medians_2)]

n_resamples = 10000
bs_pool_delta = bootstrap_medians_pooled_approach(responses_1, len(responses_1), \
                                                  sam_pop_ratio_1, \
                                                  responses_2, len(responses_2), \
                                                  sam_pop_ratio_2, \
                                                  n_resamples)
#visualize the distribution of delta results
fig_bsed_pool_deltas = go.Figure()
fig_bsed_pool_deltas.add_trace(go.Histogram(x=bs_pool_delta))
#explore the chance that a delta at least as extreme as a given delta
#might be observed by random chance (two-sided empirical p-value)
deltas = [0.25 * x for x in range(-28, 28)]
bsed_p_values_pool = [np.mean([abs(d) >= abs(delta) for d in bs_pool_delta]) \
                      for delta in deltas]
fig_ps_bs = go.Figure()
fig_ps_bs.add_trace(go.Scatter(x=deltas, y=bsed_p_values_pool))
```
| bootstrap confidence interval and p-value calculations for finite population sizes | CC BY-SA 4.0 | null | 2023-04-13T18:54:30.340 | 2023-05-07T18:53:42.243 | 2023-05-07T18:53:42.243 | 306798 | 306798 | [
"p-value",
"bootstrap",
"computational-statistics",
"resampling",
"finite-population"
] |
612844 | 2 | null | 612567 | 0 | null | The method you selected from the [page you cite](https://stats.stackexchange.com/questions/59213/how-to-compute-varimax-rotated-principal-components-in-r) is incorrect, or at least not standard, as the author of that answer explains below the code that you used. It applies the `varimax` rotation to the original eigenvectors from the PCA, which is not standard practice.
For this type of analysis, "Loadings are eigenvectors scaled by the square roots of the respective eigenvalues," as explained on that page in the [answer from @amoeba](https://stats.stackexchange.com/a/137003/28500), while your `prc$rotation` values are unscaled eigenvectors. Of the 3 correct methods shown in that answer, the one perhaps closest to your code (using the first 4 principal components) might be translated to:
```
rawLoadings <- prc$rotation[,1:4] %*% diag(prc$sdev, 4, 4) # scaling
rotatedLoadings <- varimax(rawLoadings)$loadings # varimax rotation after scaling
invLoadings <- t(pracma::pinv(rotatedLoadings)) # transpose of generalized inverse
scores <- scale(df) %*% invLoadings
```
To avoid errors, you should consider using packages that have been vetted to provide correct results, like the R [psych package](https://cran.r-project.org/package=psych). That's also illustrated in the [answer from @amoeba](https://stats.stackexchange.com/a/137003/28500).
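For intuition, the scaling step itself (before any rotation) can be sketched in a few lines; this is a generic numpy illustration with simulated data, not the OP's `prc` object:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardized columns

# eigendecomposition of the correlation matrix of the standardized data
R = np.cov(Xs, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# loadings = eigenvectors scaled by the square roots of the eigenvalues
loadings = eigvecs * np.sqrt(eigvals)

# sanity check: the full set of scaled loadings reproduces the correlation
# matrix, which unscaled eigenvectors do not
assert np.allclose(loadings @ loadings.T, R)
```

It is these scaled loadings, not the raw eigenvectors, that are the appropriate input to a varimax rotation in this kind of analysis.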
| null | CC BY-SA 4.0 | null | 2023-04-13T19:02:04.427 | 2023-04-13T19:02:04.427 | null | null | 28500 | null |
612846 | 1 | 612857 | null | 2 | 71 | Steps followed for testing:
- Derive parameter estimates using the fitdistr() function in the MASS package for the dataset dt, consisting of discrete values varying from 0 to 50.
```
df=dt[[1]]
nbfit <- fitdistr(df,'negative binomial')
pfit <- fitdist(df, "pois")
```
- Use parameter estimates for creating reference distribution
```
fitnb <- dnbinom(0:50, size=0.4, mu=3.9)
fitp <- dpois(0:50, lambda=3.9)
```
I found that the `lambda` value is the same as `mu`.
- Get frequencies
```
t <- table(df)
D <- as.data.frame(t)
observed_freq <- D$Freq
```
- perform the chi-squared test
```
chisq.test(observed_freq, fitnb, simulate.p.value = TRUE)
chisq.test(observed_freq, fitp , simulate.p.value = TRUE)
```
- Results
for NB
```
Pearson's Chi-squared test with simulated p-value (based on 2000 replicates)
data: observed_freq and fitnb
X-squared = 476, df = NA, p-value = 0.989
```
for Poisson
```
Pearson's Chi-squared test with simulated p-value (based on 2000 replicates)
data: observed_freq and fitp
X-squared = 476, df = NA, p-value = 0.9875
```
Question:
- What can we conclude about which distribution fits better? Clearly, the X-squared value is the same for both, and we fail to reject the null hypothesis.
- Can I perform some other test?
Edit
I also used lrtest() function and got the following result:
```
fit_poi <- fitdistr(df,"poisson")
fit_nbin <- fitdistr(df,"negative binomial")
lrtest(fit_poi,fit_nbin)
```
Result
```
Model 1: fit_poi
Model 2: fit_nbin
#Df LogLik Df Chisq Pr(>Chisq)
1 1 -2537.7
2 2 -1159.3 1 2756.6 < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
With this, can we say the negative binomial performs better?
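A hedged Python sketch of the same likelihood-ratio comparison (simulated overdispersed counts, not this data) may help make the lrtest output concrete; note that because $H_0$ sits on the boundary of the parameter space (the Poisson is the NB's limit as the size parameter grows), the plain $\chi^2_1$ reference is conservative, and a 50:50 mixture of $\chi^2_0$ and $\chi^2_1$ is often used instead:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
# simulate overdispersed counts from an NB with size r = 0.5, mean mu = 4
r_true, mu_true = 0.5, 4.0
y = rng.negative_binomial(r_true, r_true / (r_true + mu_true), size=500)

# Poisson log-likelihood at its MLE (lambda = sample mean)
ll_pois = stats.poisson.logpmf(y, y.mean()).sum()

# NB log-likelihood: the MLE of mu is the sample mean, so profile out the size r
def nb_negll(log_r):
    r = np.exp(log_r)
    return -stats.nbinom.logpmf(y, r, r / (r + y.mean())).sum()

ll_nb = -optimize.minimize_scalar(nb_negll, bounds=(-5, 5), method="bounded").fun

lr = 2 * (ll_nb - ll_pois)
# boundary-corrected p-value: half the chi-square(1) tail probability
p_value = 0.5 * stats.chi2.sf(lr, df=1)
assert lr > 0 and p_value < 0.001
```

With strong overdispersion, as in the lrtest output above, the LR statistic is large and the conclusion is the same with or without the boundary correction: the negative binomial fits substantially better than the Poisson.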
Histogram of actual data used in the test (Note: Codes differ a little based on this dataset)
[](https://i.stack.imgur.com/PQ0vH.png)
Actual data
```
value count
0 205
1 65
2 40
3 40
4 32
5 18
6 15
7 11
8 8
9 9
10 7
11 6
12 8
13 3
14 1
15 8
16 2
18 2
19 2
20 5
21 1
23 2
24 1
25 1
29 1
31 1
32 2
35 1
36 1
42 1
43 1
45 1
50 1
53 1
```
| Compare chi-squared goodness of fit results for Poisson and Negative binomial | CC BY-SA 4.0 | null | 2023-04-13T19:10:48.863 | 2023-04-13T22:07:52.487 | 2023-04-13T21:56:17.440 | 56940 | 332846 | [
"hypothesis-testing",
"chi-squared-test",
"poisson-distribution",
"goodness-of-fit",
"negative-binomial-distribution"
] |
612847 | 1 | null | null | 0 | 12 | I'm writing my thesis on private equity funds' financial performance and ESG integration. The data is gathered from funds at a single point in time and is therefore cross-sectional data.
I want to run an OLS regression on the below dataset. I was a bit in doubt if I should use a logit model, however, as I understand it, a logit model is only applicable if the dependent variable is binary, which mine is not. I have therefore chosen to use a linear multiple regression model. I am running the models in STATA.
My dataset includes 2,300 funds. My variables are:
Dependent variable: IRR (financial performance) as a continuous variable.
Independent variable: Nominal categorical variable (1: Traditional funds, 2: ESG funds, 3: high ESG funds)
Control variables:
- Maturity in years: Integer
- Fund size: Integer
- Industry: 6 binary variables
- Geography: 7 binary variables
- Strategy: 6 binary variables
Is it possible to run a fixed effects model (maturity, fund size, and industry fixed effects) on this model? Or is this not possible with cross-sectional data? I have a difficult time figuring out if it's possible and have really looked through everything. I hope someone can help!
| Is it possible to add fixed effects in an OLS (multiple) regression on cross section data? | CC BY-SA 4.0 | null | 2023-04-13T19:10:59.730 | 2023-04-13T19:10:59.730 | null | null | 385637 | [
"multiple-regression",
"binary-data",
"fixed-effects-model",
"cross-section"
] |
612848 | 1 | null | null | 0 | 22 | I have the following model. Let $BP$ be a continuous variable such that the true relationship is $BP=144+0.5age+4sex+3gene+\epsilon$. Suppose that for subjects with $BP>160$, there is a probability of 0.5 of receiving some treatment, with an effect following a truncated normal distribution with mean $-15$ and standard deviation $sd=2$, where the treatment effect is defined only on the negative ($<0$) part of the real axis.
In the data, the treatment will distort the true relationship. Thus I want to correct for the treatment effect. I was initially thinking of a Bayesian approach. However, I encountered convergence issues for the truncated normal density. I will denote $N(\mu,s^2)$ for a normal with mean $\mu$ and variance $s^2$, and it will also be used to denote the density function. In particular, a treated subject's observed $BP_o$ has density proportional to $\int^\infty_{BP_o}N(\mu,s^2)(y)dy$, where $\mu$ follows some $N(\beta_0+\beta_1age+\beta_2sex+\beta_3gene,\sigma^2)$, $s$ follows a half-$t$ distribution with 3 df, and the $\beta$'s follow some joint normal distribution centered at $0$. The Stan code will not converge in this case because $y-\mu$ is too large in comparison to $s$.
Thus I want to estimate the treatment effect $sd=2$ to give some reasonably informative prior, which might give the model a chance to converge.
I have considered the following.
If I can match treated and untreated people with $BP>160$ by propensity score and apply linear regression to the matched data, then I should be able to extract the treatment effect mean. That also extracts the variance $s_1^2=\sigma^2+sd^2$, combining both the treatment and the inherent $\epsilon$ variation. $\epsilon$'s variance $\sigma^2$ can be preliminarily estimated by a standard linear model on the whole observed data. Taking the square root of the difference of the two variances, $\sqrt{s_1^2-\sigma^2}$, I would hope to somehow recover $sd$. This does not seem to be the case.
```
library(EnvStats)  # provides rnormTrunc()
library(dplyr)     # provides %>% and filter()
library(MatchIt)   # provides matchit() and match.data()
nsim=1200
age=runif(nsim,min=55,max=74)
sex=rbinom(nsim,size=1,prob=0.5)
gene=rbinom(nsim,size=1,prob=0.51)
sig_err=21
err=rnorm(nsim,mean=0,sd=sig_err)
beta0=144;beta1=0.5;beta2=4;beta3=3
Z=beta0+beta1*age+beta2*sex+beta3*gene+err
betas=c(beta0,beta1,beta2,beta3)
list_hyp=which(Z>160)
num_hyp=length(list_hyp)
treat=rbinom(num_hyp,1,prob=0.5)
old_Z=Z
Z[list_hyp]=Z[list_hyp]+treat*rnormTrunc(num_hyp,mean=-15,sd=2,max=0)
hyp=data.frame(id=1:nsim,age=age,sex=sex,gene=gene,BP=Z,true_BP=old_Z)
hyp$censor=1
hyp$censor[list_hyp[as.logical(treat)]]=0
data=list(N=dim(hyp)[1],
X=as.matrix(hyp[,c('age','sex','gene')],ncol=3),
censor=hyp$censor,
BP=hyp$BP,
hyp_diag=160
)
hyp_t=hyp %>% filter(censor==0)
hyp_u=hyp %>% filter(BP>data$hyp_diag&censor!=0)
hyp_dat=rbind(hyp_t,hyp_u)
m.out=MatchIt::matchit(censor~age+gene+sex+BP,data=hyp_dat,
method='full',distance='glm',link='logit',estimand = 'ATC')
m.dat=match.data(m.out)
z2=summary(lm(BP~censor+age+gene+sex,weights=weights,data=m.dat))
z3=summary(lm(BP~age+gene+sex,data=hyp %>% filter(censor!=0&BP<160)))
sqrt(z2$sigma^2-z3$sigma^2) ##trying to estimate treatment sd
```
$Q1:$ How do I give a reasonable prior for the treatment effect variance? Or how do I estimate the treatment effect variance beforehand?
$Q2:$ How do I give a reasonable prior for the treatment effect mean? It can be checked from the above code that the coefficient of the censor term represents the treatment effect. However, this estimated treatment effect is way too far off from the true treatment effect.
| How to estimate variance in censored data? | CC BY-SA 4.0 | null | 2023-04-13T19:13:57.720 | 2023-04-13T19:13:57.720 | null | null | 79469 | [
"regression",
"bayesian",
"causality",
"matching"
] |
612849 | 1 | null | null | 1 | 45 | In a GLM, we assume that $\mathbb{E}[Y|X]=\mu(\beta^\top X)$ and that $Y|X$ follows an exponential family distribution. I am going to assume that the probability of success in the Bernoulli distribution is modeled as $\beta^\top X$. However, because $\beta^\top X$ can take values less than 0 or greater than 1, I will use the map $\mu(\beta^\top X)=0$ if $\beta^\top X<0$ and $\mu(\beta^\top X)=1$ if $\beta^\top X>1$. I am wondering: if I use this modeling, will I still have a GLM? Is the link function $g=\infty$ for $\beta^\top X<0$, $g=\text{identity}$ for $0\le\beta^\top X\le 1$, and $g=1$ for $\beta^\top X>1$?
If I use this definition, can I estimate parameters through OLS accurately?
| Is this model linear function belongs to GLM models? | CC BY-SA 4.0 | null | 2023-04-13T19:14:02.200 | 2023-04-13T19:37:38.040 | 2023-04-13T19:37:38.040 | 247274 | 133197 | [
"regression",
"generalized-linear-model",
"least-squares",
"binary-data",
"link-function"
] |
612851 | 1 | null | null | 0 | 20 | I have the following project in front of me:
Clients are going to get a bill informing them of a price increase (let's call them group A). However, clients receive the bill at different points in time.
I want to learn which individual characteristics, e.g. age, make individuals quit.
Then I want to use these learnings to predict the behaviour of other clients (let's call them group B).
I have thought that I should create a panel for clients of group A.
Quitted_it = (age_i + other ind characteristics_i) * days since they got the bill_it
Then I can predict the probability of quitting for individuals in group B, given their characteristics and the number of days since they got the bill (e.g. 30)
Does it make sense? How could I compute it in R?
| Staggered treatment and predictions based on individual characteristics | CC BY-SA 4.0 | null | 2023-04-13T19:20:03.720 | 2023-04-13T19:20:03.720 | null | null | 385639 | [
"r",
"regression",
"panel-data",
"difference-in-difference",
"treatment-effect"
] |
612852 | 2 | null | 612849 | 0 | null | By setting $0$ and $1$ as bounds on the predictions, you’ve put yourself in a position where you do not have a linear model. Rather, your model is something along the lines of:
$$\mathbb E [Y\vert X=x]=
\min\{
\max\{
0, \beta^Tx
\}, 1
\}
$$
A linear probability model, which shoehorns the binary $0/1$ data into an ordinary least squares linear regression, does not have that $\min$/$\max$ business. The linear probability model is an ordinary linear model.
Ultimately, you can do basically whatever you want and call it Amin’s estimator, and if you can prove that a bizarre technique unexpectedly has desirable properties, then maybe you have a paper for a top statistics journal like JASA. However, I would not expect the $\hat\beta$ estimate via either maximum likelihood or minimization of square loss in the above model to coincide with the parameter estimates from a true linear probability model (or a logistic or probit regression, for that matter).
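To make the estimation point concrete, here is (my own sketch, not from the question) the least-squares problem implied by the clipped model; its objective is non-convex and has zero gradient in $\beta$ wherever the clipping is active, which is one reason ordinary OLS on the raw data will generally not recover it:

```latex
\hat\beta \;=\; \arg\min_{\beta}\; \sum_{i=1}^{n}
  \Bigl( y_i - \min\bigl\{ \max\{\, 0,\; \beta^\top x_i \,\},\; 1 \bigr\} \Bigr)^{2}
```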
| null | CC BY-SA 4.0 | null | 2023-04-13T19:24:50.427 | 2023-04-13T19:24:50.427 | null | null | 247274 | null |
612853 | 2 | null | 612831 | 2 | null | (Do you really need to do a significance test if the difference in AIC is large and you have a theoretical justification for including zero-inflation? Why?)
tl;dr a parametric bootstrap test is probably the most straightforward solution to your problem; it's moderately computationally intensive (will take something like 2000x the computational effort of fitting your original model, although you could parallelize and might be able to whittle the time down further by using smarter starting parameter values ...)
```
library(glmmTMB)
m1 <- glmmTMB(count~mined, ziformula =~mined, Salamanders, family="nbinom1")
## fit null/reduced model (no zero-inflation)
m0 <- update(m1, ziformula = ~0)
## parametric bootstrap; simulate data from reduced model, fit with
## both full and reduced model, find diff(logLik)
simfun <- function() {
d <- transform(Salamanders, count = simulate(m0)[[1]])
logLik(update(m1, data = d)) - logLik(update(m0, data = d))
}
## simulate a null distribution (SLOW)
set.seed(101); r <- replicate(1000, simfun())
## compute diff(logLik) for real data
obs_diff <- logLik(m1) - logLik(m0)
## compare graphically
hist(r, breaks = 50); abline(v = obs_diff)
## p-value (proportion of null test statistics exceeding the observed statistic)
mean(c(na.omit(r), obs_diff) > obs_diff)
[1] 0.05345912
```
[](https://i.stack.imgur.com/eASzU.png)
Note there is a minor concern about eliminating NA values (47/1000 in this case) — these are probably cases where the estimated level of zero-inflation converged to 0 ($-\infty$ on the log scale); this will make the p-value slightly conservative.
---
Wilson (2015) goes into detail about why the Vuong test is inappropriate for testing the hypothesis of zero-inflation. For the record they propose some solutions, but none of these are (AFAICS) easily applicable to/available off-the-shelf in your case
>
Note that when the value of is allowed to be both positive or negative fitted values of do not “pile up” close to zero, and the distribution of zero-modification parameter is normal and hence a non-zero-inflated model is (strictly) nested in its zero-inflated counterpart, and hence a Vuong test for nested models could be used as a test of zero-inflation/deflation. Dietz and Böhning (2000) proposed a link function that allowed for zero-deflation, and more recently Todem et al. (2012) have proposed a score test that incorporates a link function that allows for both zero-inflation and deflation.
As for a "chi-square goodness of fit test": I'm not sure, but I think you mean a likelihood ratio test (whose test statistic, $2 (\log \cal L_\textrm{full} - \log \cal L_\textrm{restricted})$, is $\chi^2$ distributed under the null hypothesis)? This won't work either because it assumes that the null values of the distinguishing parameters (the zero-inflation probability in this case) are not on the edge of their feasible domain. That is, the derivation depends on a quadratic expansion around the null parameter value, and we can't expand around a value of $p_z = 0$ because that would involve negative probabilities ... this issue is most thoroughly described for testing the hypothesis that the random-effects variance is zero in a mixed model (see refs in [this section of the GLMM FAQ](https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#testing-significance-of-random-effects)), but the same issue applies here.
Roughly speaking, a likelihood ratio test of significance for a single parameter on the boundary will give a p-value that's twice as large as it should be (again, see refs in GLMM FAQ): this appears to be the case here, where the LRT p-value (from `anova(m1, m0)`) is 0.1084, very close to twice the parametric bootstrap result.
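For comparison, the rough "halve the p-value" boundary correction mentioned above can be applied directly to the naive LRT p-value. This is a sketch using the numbers reported in this answer (it does not refit the models):

```r
## Hedged sketch (my own illustration, not part of the original analysis):
## anova(m1, m0) reported a naive LRT p-value of about 0.1084 for the
## zero-inflation test; the boundary-corrected p-value is half of that.
lrt_stat <- qchisq(0.1084, df = 1, lower.tail = FALSE)  # recover the LRT statistic
0.5 * pchisq(lrt_stat, df = 1, lower.tail = FALSE)      # 0.0542, close to the bootstrap 0.0535
```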
Wilson, Paul. 2015. “The Misuse of the Vuong Test for Non-Nested Models to Test for Zero-Inflation.” Economics Letters 127 (February): 51–53. [https://doi.org/10.1016/j.econlet.2014.12.029](https://doi.org/10.1016/j.econlet.2014.12.029).
| null | CC BY-SA 4.0 | null | 2023-04-13T19:38:27.077 | 2023-04-13T19:44:46.270 | 2023-04-13T19:44:46.270 | 2126 | 2126 | null |
612854 | 2 | null | 519907 | 0 | null | Validation can happen after training, or training and validation can be interleaved (complementary). When validation happens after training is completed, the validation data passes through the neural network (which now has fixed weights and biases) in one epoch. Complementary training and validation means the following: in one epoch, training is done and right after it validation is done; the next epoch, training is done again and validation happens again. Each time the weights and biases are updated, there is an estimate of the neural network's performance, so we have the same number of epochs for training and validation. When you design your model, you can go for either a single validation after complete training or complementary training and validation.
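A rough pseudocode sketch of the interleaved ("complementary") scheme described above (the helper names are placeholders, not a real API):

```
for epoch in 1..n_epochs:
    model = train_one_epoch(model, training_data)   # weights and biases updated
    val_loss = evaluate(model, validation_data)     # weights and biases frozen
    record(epoch, val_loss)                         # one performance estimate per epoch
```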
| null | CC BY-SA 4.0 | null | 2023-04-13T19:46:48.957 | 2023-04-13T19:46:48.957 | null | null | 385640 | null |
612855 | 1 | null | null | 0 | 10 | Really a pretty basic question about generative models, but I'm trying to map my (limited) understanding of NNs generally to what's going on when I invoke an OpenAI API:
- When the OpenAI API docs state a limit to the prompt size for a model ("max tokens", which I assume is the same as what's also referred to as in the documentation as "context length"), does
that correspond to the size of the input layer of the actual network implementing the model?
- Does each token generated by invocation of the API correspond to an execution of the underlying model, presumably truncating the front of the original supplied prompt if prompt plus output has exceeded max tokens?
| Network invocations underlying OpenAI execution? | CC BY-SA 4.0 | null | 2023-04-13T19:47:31.637 | 2023-04-13T19:47:31.637 | null | null | 16694 | [
"generative-models",
"gpt"
] |
612856 | 1 | null | null | 2 | 95 | I estimated a variable for three different species and have a posterior distribution of 4000 estimates for each species. Now I want to know whether the distributions differ significantly between species. How do I do this?
I have displayed the mean and 95% credible interval for each of the three species and would like to indicate whether the differences are significant.
| Comparing three posterior distributions | CC BY-SA 4.0 | null | 2023-04-13T19:54:09.470 | 2023-04-19T09:28:44.923 | null | null | 101294 | [
"bayesian",
"mean",
"credible-interval"
] |
612857 | 2 | null | 612846 | 3 | null | About 1. Your chi-squared test does not reject $H_0$, thus the two fits are statistically equivalent; by the [Occam's razor principle](https://en.wikipedia.org/wiki/Occam%27s_razor), the Poisson model, i.e. the most parsimonious of the two, is to be preferred. However, as noted in the comments by whuber, these tests seem to give strange results; see the Addendum below.
About 2. The test you are looking for is sometimes called a test for overdispersion, since the Poisson distribution imposes the "mean equals variance" assumption. Since the Poisson distribution is a particular case of the negative binomial distribution when the latter is expressed in a particular parametrization, a likelihood ratio test can be applied. However, this test doesn't have the usual $\chi^2$ limiting distribution, because the parameter being tested lies on the boundary of the parameter space under the null.
The easiest solution is then to apply a score test (Dean and Lawless, 1989, "Tests for detecting overdispersion in Poisson regression models", Journal of the American Statistical Association 84: 467–472). This amounts to applying a $t$-test to
$$
Z_i = \frac{(Y_i - \mu_i)^2 - Y_i}{\mu_i\sqrt{2}}.
$$
The test is post-hoc, in the sense that it is performed subsequent to modelling the data.
Here is an `R` example applied to a simulated variable from the Poisson distribution.
```
# generate some data
set.seed(12)
x <- data.frame(y = rpois(30,3.5))
fit_poi <- glm(y~1, family = "poisson", data = x)
mu <- predict(fit_poi, type="response")
z <- ((x$y - mu)**2 - x$y)/ (mu * sqrt(2))
zscore <- lm(z ~ 1)
summary(zscore)
Call:
lm(formula = z ~ 1)
Residuals:
Min 1Q Median 3Q Max
-0.67076 -0.55149 -0.27887 0.07895 2.87330
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.08207 0.16413 -0.5 0.621
Residual standard error: 0.899 on 29 degrees of freedom
```
The p-value is $0.621$ thus we cannot reject the null hypothesis and conclude that the data are not overdispersed.
Addendum
With your data the score test gives:
```
summary(zscore)
Call:
lm(formula = z ~ 1)
Residuals:
Min 1Q Median 3Q Max
-9.19 -8.19 -5.72 -5.72 422.01
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 8.463 1.660 5.099 4.85e-07 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 37.22 on 502 degrees of freedom
```
The conclusion is that the data are definitely overdispersed, so non-Poisson. You should then opt for the negative binomial and, as suspected above (and by whuber in the comments to your post), there must be something wrong with your first chi-squared test.
| null | CC BY-SA 4.0 | null | 2023-04-13T19:54:41.190 | 2023-04-13T22:07:52.487 | 2023-04-13T22:07:52.487 | 56940 | 56940 | null |
612858 | 1 | null | null | 1 | 44 | Suppose we are using the matchit function from the MatchIt package in R, as in the following example given in the package,
`m.out1 <- matchit(treat ~ age + educ + race + nodegree + married + re74 + re75, data = lalonde, distance = "glm", method = "nearest", replace=FALSE, caliper = NULL, ratio = 1)`
Suppose now you want to match only those treated patients that are within a specified caliper of 0.1 (in terms of propensity scores) of a control, and otherwise leave them unmatched. I would do,
`m.out2 <- matchit(treat ~ age + educ + race + nodegree + married + re74 + re75, data = lalonde, distance = "glm", method = "nearest", replace = FALSE, caliper = 0.1, ratio = 1)`
In both cases I get the same matching result in terms of sample size. I am confused by this, as even those outside the caliper are being matched. Note that this was working the way I understood before, but it is giving different results now. Please let me know if there have been any updates that I am missing.
Thank you!
############################################################################
Adding to Noah's comment below,
`m.out1 <- matchit(treat ~ age + educ + race + nodegree + married + re74 + re75, data = lalonde, distance = "glm", method = 'nearest', replace = FALSE, ratio = 1)`
`summary(m.out1)$nn`
[](https://i.stack.imgur.com/hObHD.png)
`m.out2 <- matchit(treat ~ age + educ + race + nodegree + married + re74 + re75, data = lalonde, distance = "glm", method = "nearest", replace = FALSE, caliper = 0.1, ratio = 1)`
`summary(m.out2)$nn`
[](https://i.stack.imgur.com/lFrYv.png)
| matchit: Specified caliper still matches everyone! | CC BY-SA 4.0 | null | 2023-04-13T20:09:56.007 | 2023-04-18T13:53:33.840 | 2023-04-14T20:36:22.680 | 385643 | 385643 | [
"r",
"propensity-scores",
"matching"
] |
612859 | 2 | null | 577084 | 1 | null | You should try to improve both your data and your model. Each is a critical piece in a real-world ML application, and spending too much time on one versus the other is unwise.
If you like to use open-source Python packages to work efficiently, you can try improving your model with packages like: sklearn, huggingface, timm. Try improving your data with packages like: cleanlab, featuretools, refinery.
Don't forget to utilize your knowledge as well! Improve your model by experimenting with appropriate training techniques you learned in ML class, and improve your data by relying on your domain expertise.
| null | CC BY-SA 4.0 | null | 2023-04-13T20:39:59.903 | 2023-04-13T20:39:59.903 | null | null | 376907 | null |
612860 | 2 | null | 593458 | 0 | null | As others pointed out, you probably want Active Learning algorithms, which help you collect the least amount of labels needed to train an accurate model. While there are tons of algorithms published in the literature, there aren't many practical open-source libraries for active learning. I helped create one such package with easy tutorials for:
- active learning with multiple data annotators who can re-label data
- active learning with at most one label for each datapoint
| null | CC BY-SA 4.0 | null | 2023-04-13T20:46:18.520 | 2023-04-13T20:46:18.520 | null | null | 376907 | null |
612861 | 2 | null | 612828 | 1 | null | If you are a little more careful and explicit in your formulas, it might well get a lot clearer.
Specifically, one typically calculates means or sums over forecasts and actuals for the same periods (as opposed, say, to calculating the MAD over the forecast horizon, but the mean of actuals over the training sample). And then both the numerator and the denominator have the same number of summands, say $N$. But then the sums you have yield the exact same ratio as dividing the means instead - because going from the sums to averages only means that we divide both the numerator and the denominator by $N$, and this cancels.
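In symbols (my own restatement of the point above), with $N$ matched forecast–actual pairs, errors $e_t$ and actuals $a_t$:

```latex
\frac{\operatorname{mean}(|e_t|)}{\operatorname{mean}(a_t)}
  \;=\; \frac{\tfrac{1}{N}\sum_{t=1}^{N}|e_t|}{\tfrac{1}{N}\sum_{t=1}^{N} a_t}
  \;=\; \frac{\sum_{t=1}^{N}|e_t|}{\sum_{t=1}^{N} a_t}
```

so the $\tfrac{1}{N}$ factors cancel and the ratio of means equals the ratio of sums.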
Some more on this can be found in [Kolassa & Schütz (2007)](https://econpapers.repec.org/article/forijafaa/y_3a2007_3ai_3a6_3ap_3a40-43.htm).
| null | CC BY-SA 4.0 | null | 2023-04-13T20:54:13.837 | 2023-04-13T20:54:13.837 | null | null | 1352 | null |
612862 | 1 | null | null | 0 | 22 | My model estimates the percentage of living-in agricultural workers (that is, the ones provided with board & accommodation) in labor force. The unit is the % of those workers in a district:
Y = a0 + a1 * cattle + a2 * horses + a3 * popden + a4 * altitude + e
Cattle and horses (more specifically, their densities, i.e. the number divided by acreage) affect demand for labor. Population density is a labor supply factor, and altitude affects both labor supply (via population density) and demand (via animal density).
The results generally make sense, but I am concerned that I combine supply and demand factors in one equation. Is there a way to check if this concern is justified? If it is, what would be the best approach to resolving this issue?
| Is this a case of simultaneous equations? | CC BY-SA 4.0 | null | 2023-04-13T21:21:49.017 | 2023-04-13T21:21:49.017 | null | null | 333840 | [
"regression",
"multiple-regression",
"regression-coefficients",
"linear",
"instrumental-variables"
] |
612863 | 2 | null | 448020 | 1 | null | [This is just rough intuition, avoiding measure theory]
For continuous random variables ${ X }$ and ${ Y ,}$ their joint density ${ f _{X, Y} }$ (if it exists) is a map ${ f _{ {\color{purple}{X}}, {\color{blue}{Y}} } : {\color{purple}{\mathbb{R}}} \times {\color{blue}{\mathbb{R}}} \to \mathbb{R} _{\geq 0} }$ such that probability of any event ${ (X,Y) \in [a,b] \times [c,d] }$ is the volume under graph of ${ f _{X,Y} }$ and over ${ [a,b] \times [c,d] }.$
>
${ \mathbb{P}(X \in [a,b], Y \in [c,d]) }$ ${ = \int \int _{[a,b] \times [c,d]} f _{X,Y} (x,y) \, dx \, dy }$
Intuitively, ${ \mathbb{P}(X \in [x, x + \Delta x], Y \in [y, y + \Delta y]) }$ ${ \approx f _{X,Y} (x, y) \Delta x \Delta y }$ for small ${ \Delta x, \Delta y \gt 0 }$ (assuming continuity of ${ f _{X,Y} }$ at ${ (x,y) }$).
Similarly for a discrete random variable ${ X }$ with range ${ \lbrace x _1, x _2, \ldots \rbrace }$ and a continuous random variable ${ Y },$ their joint density ${ f _{X, Y} }$ (if it exists) is a map ${ f _{ {\color{purple}{X}}, {\color{blue}{Y}} } : {\color{purple}{\lbrace x _1, x _2, \ldots \rbrace}} \times {\color{blue}{\mathbb{R}}} \to \mathbb{R} _{\geq 0} }$ such that probability of any event ${ (X, Y) \in \lbrace x _i \rbrace \times [c,d] }$ is the area under graph of ${ f _{X,Y} }$ and over ${ \lbrace x _i \rbrace \times [c,d] }.$
>
${ \mathbb{P}(X = x _i, Y \in [c,d]) }$ ${ = \int _{[c,d]} f _{X, Y} (x _i, y) \, dy }$
Intuitively, ${ \mathbb{P}(X = x _i, Y \in [y, y + \Delta y]) }$ ${ \approx f _{X, Y} (x _i, y) \Delta y }$ for small ${ \Delta y \gt 0 }$ (assuming continuity of ${ f _{X,Y} (x _i, \cdot) }$ at ${ y }$).
Eg: From this, in the ${ X }$ discrete and ${ Y }$ continuous case:
[Marginals] PMF ${ \mathbb{P}(X = x _i) }$ ${ = \int f _{X,Y} (x _i, y) dy .}$ Also ${ \mathbb{P}(Y \in [y, y + \Delta y]) }$ ${ = \sum _i \mathbb{P}(X = x _i, Y \in [y, y + \Delta y]) }$ ${ \approx \sum _{i} f _{X,Y} (x _i, y) \Delta y}$ suggesting ${ f _Y (y) = \sum _{i} f _{X,Y} (x _i, y) .}$
[Conditionals] Intuitively, similar to how ${ f _Y (y) \Delta y \approx \mathbb{P}(Y \in [y, y + \Delta y]) },$ conditional density ${ f _{Y \vert X } ( \cdot \vert x _i) }$ is such that ${ f _{Y \vert X} (y {\color{red}{\vert x _i)}} \Delta y }$ ${ \approx \mathbb{P}(Y \in [y, y + \Delta y] {\color{red}{\vert X = x _i)}} }.$ So ${ f _{Y \vert X} (y \vert x _i) \Delta y }$ ${ \approx \frac{f _{X,Y} (x _i, y) \Delta y}{\int f _{X,Y} (x _i, y) dy} }$ suggesting ${ f _{Y \vert X} (y \vert x _i) }$ ${ = \frac{f _{X,Y} (x _i, y)}{\int f _{X,Y} (x _i, y) dy} .}$
Intuitively, conditional PMF ${ p _{X \vert Y} (\cdot \vert y) }$ is such that ${ p _{X \vert Y} (x _i \vert y) }$ is limit of ${ \mathbb{P}(X = x _i \vert Y \in [y, y + \Delta y]) }$ as ${ \Delta y \to 0 ^{+} }.$ But ${ \mathbb{P}(X = x _i \vert Y \in [y, y + \Delta y]) }$ ${ \approx \frac{f _{X,Y} (x _i, y) \Delta y }{ \sum _i f _{X,Y} (x _i, y) \Delta y} }$ suggesting ${ p _{X \vert Y} (x _i \vert y) }$ ${ = \frac{ f _{X,Y} (x _i, y)}{\sum _i f _{X,Y} (x _i, y) }. }$
>
To summarise, marginals are ${ p _{X} (x _i) = \int f _{X,Y} (x _i, y) \, dy }$ and ${ f _Y (y) = \sum _i f _{X,Y} (x _i, y) },$ and conditionals are ${ f _{Y \vert X} (y \vert x _i) = \frac{f _{X,Y}(x _i, y)}{p _{X} (x _i)} }$ and ${ p _{X \vert Y} (x _i \vert y) = \frac{f _{X,Y} (x _i, y)}{f _Y (y)} .}$
>
Especially ${ X, Y }$ are independent implies ${ f _{Y \vert X} (y \vert x _i) = f _Y (y) }$ i.e. ${ f _{X,Y} (x _i, y) = p _{X} (x _i) f _Y (y) },$ and vice versa. We also have Bayes rule ${ f _{Y \vert X} (y \vert x _i) }$ ${ = \frac{p _{X \vert Y} (x _i \vert y) f _Y (y) }{p _X (x _i)} }.$
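As a concrete sanity check (an illustrative example of my own, with ${ \varphi }$ the standard normal density): take ${ X \sim \mathrm{Bernoulli}(p) }$ and ${ Y \mid X = x \sim N(\mu_x, 1) }.$ Then the formulas above give

```latex
% joint "density" over {0,1} x R
f_{X,Y}(x,y) = p^{x}(1-p)^{1-x}\,\varphi(y-\mu_x), \qquad x \in \{0,1\}
% marginals
p_X(x) = \int f_{X,Y}(x,y)\,dy = p^{x}(1-p)^{1-x}
f_Y(y) = \sum_{x\in\{0,1\}} f_{X,Y}(x,y) = (1-p)\,\varphi(y-\mu_0) + p\,\varphi(y-\mu_1)
% conditional PMF via Bayes rule
p_{X\mid Y}(1\mid y) = \frac{p\,\varphi(y-\mu_1)}{(1-p)\,\varphi(y-\mu_0) + p\,\varphi(y-\mu_1)}
```

which is exactly the posterior class probability in a two-component Gaussian mixture.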
| null | CC BY-SA 4.0 | null | 2023-04-13T21:52:44.767 | 2023-04-13T21:52:44.767 | null | null | 319581 | null |
612868 | 2 | null | 612788 | 3 | null | I think the question here is why/whether the loglikelihood function is asymptotically locally a parabola with its maximum at $\hat\theta$ (and so the likelihood function is locally proportional to $\exp(-(\theta-\hat\theta)^2/2)$, suitably scaled). Basically, it's just Taylor series: any smooth function is locally a parabola, and any smooth function with zero derivative at a point is locally a parabola with a max or min at that point. It isn't quite just Taylor series -- we do need the Law of Large Numbers and also the rate of convergence of an average to its expected value to be sure that the terms in the Taylor expansion are of the sizes we need them to be.
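To spell out the expansion (a standard sketch consistent with the argument above; $I$ denotes Fisher information):

```latex
% Second-order Taylor expansion of the log-likelihood around \hat\theta:
\ell(\theta) \;=\; \ell(\hat\theta)
  + \underbrace{\ell'(\hat\theta)}_{=\,0}\,(\theta-\hat\theta)
  + \tfrac{1}{2}\,\ell''(\hat\theta)\,(\theta-\hat\theta)^{2}
  + o\!\bigl((\theta-\hat\theta)^{2}\bigr)
% By the LLN, -\ell''(\hat\theta)/n \to I(\theta_0), so for large n
L(\theta) \;\propto\; \exp\!\Bigl(-\tfrac{n\,I(\hat\theta)}{2}\,(\theta-\hat\theta)^{2}\Bigr)
```

i.e. a Gaussian shape in $\theta$ centered at $\hat\theta$ with standard deviation $1/\sqrt{n\,I(\hat\theta)}.$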
One way of assessing if there is a non-random effect of the campaign is to compare the observed revenue under treatment to the distribution of all possible counterfactual revenues (as derived by my counterfactual model). If the observed treatment revenue was in the tail of such distribution (i.e., I get a low p-value), then I would conclude the campaign had an effect. This is the inferential approach adopted, for example, by Google’s library CausalImpact.
I wonder if this is enough, however. The observed revenue in the treated units could have been different from the value actually measured, due to random noise. The above approach does not take into account this uncertainty in observed treatment, but this uncertainty feels like something that needs to be accounted for - as we do, for example, in classic A/B test where we have both treated and control outcomes and we run a statistical test on the difference between them based on both their distribution.
So I wonder if I should somehow try to measure the uncertainty in my observed treatment revenue and compare its distribution to the counterfactual distribution (effectively running a test for the difference in means between treatment and synthetic control). Any suggestions on whether this is a better approach indeed?
| Statistical inference in A/B testing: is it enough to compare observed test outcome to control distribution? | CC BY-SA 4.0 | null | 2023-04-13T23:16:03.373 | 2023-04-13T23:16:03.373 | null | null | 385653 | [
"hypothesis-testing",
"causality",
"ab-test",
"causalimpact"
] |
612866 | 1 | null | null | 0 | 11 | I am modelling change in the mass of an animal over time with a GAM. These animals are caught at one of two ages (1 or 2, but 1's never become 2's in the subsequent year). I also have a large number of sites in my analysis.
The most simple model I am considering is:
```
library(mgcv)  # for gamm()
mm <- gamm(MASS ~ s(YEAR.s, SITE, bs = "fs", by = AGE, k = 5), data = df)
```
Where YEAR.s is a continuous variable that has been centered and scaled, and SITE and AGE are factors.
(Partial) autocorrelation plots show a lot of autocorrelation in the time series.
```
acf(resid(mm$lme, type = "normalized"))
pacf(resid(mm$lme, type = "normalized"))
```
[](https://i.stack.imgur.com/NrdKZ.png)
[](https://i.stack.imgur.com/5oFD0.png)
I know there are three different correlation structure classes (corAR1, corARMA, corCAR1) that I could add to my model, but I'm not certain about the best one to use for this situation.
Also, I'm assuming that I would include both SITE and AGE as grouping factors, but I'm not sure if it will be an issue that not all AGEs are found at all SITEs (e.g., SITE A might have AGE 1 & 2, but SITE B only has AGE 1 and SITE C only has AGE 2). Between the two factors, making sure that I've included SITE is probably more important than AGE.
Finally, not all sites are monitored in each year of the timeseries (min. 20 years of data over a ~50 year timeseries).
| Determining best temporal autocorrelation structure | CC BY-SA 4.0 | null | 2023-04-13T23:16:06.650 | 2023-04-13T23:16:06.650 | null | null | 182146 | [
"time-series",
"autocorrelation",
"generalized-additive-model"
] |
612867 | 1 | null | null | 0 | 23 |
### The Forecasting Model
I have developed a proprietary forecasting model (call it AMA) that forecasts an expected range in the price movement of a particular security. For example, it might give an expected price range movement of \$10 for a \$100 stock within the next hour (or any given time interval or period).
For an overview using the above example, we'll ignore the possibility of the stock's price falling and focus only on price rising to avoid over-complicating things.
In the model, a stock's current high price $H_{t_i}$ is forecasted to print a price high by adding the Differential High $DH_t$ to the previous period's closing price, $C_{(t-1)}$, giving us $E(H_{t})$.
Let
$t_i$ = 1-hour period, divided into 10 subintervals where $i = 1,2,3,\ldots,10$
$C_{t_i}$ = current price in period $t$
$H_{t_i}$ = current high price in period $t$
$DH_t$ = Differential High
$C_{(t-1)}$ = previous closing price
$E(H_{t}) = C_{(t-1)}+DH_t$; AMA / expected high price of the current period
Assume $C_{t_i}$= $H_t$ for this problem.
### The Problem
However, I'm confused about how to calculate the probability that the price reaches $100\%$ of the AMA before the time interval ends: whether to use Bayes' theorem or a Poisson model. I have a basic understanding of probability, so Poisson is a bit out of my domain.
Intuitively though, the probability of reaching $E(H_{t})$ should diminish over time the closer we get to the end of the hour, given that $H_t=c$, where $c$ is a fixed constant price throughout the period (stagnant). Conversely, this would also imply that the probability of reaching $E(H_{t})$ increases when $H_t$ moves closer to the expected high price.
### Test Solutions
The way I tried to solve the question is by constructing a counting table to compute the probability. However I am not confident of my probability test calculations given the nature of time decay of the problem (which sounds a lot like Poisson). However, the thought process makes sense intuitively as I have mentioned above given the calculations I did which is prone to error.
A snippet of my calculations in the picture below is constructed behind the premise of basic probability where $$P(A)=\frac{number\,of\,favourable\,outcomes}{total\,number\,of\,outcomes}$$
$$A = \mathrm{Event\,of\,reaching\,100\%\,AMA\,occuring}$$
---
[Test Solution #1: "Modified" Basic Probability](https://i.stack.imgur.com/XnFiy.png)
The confirmation bias is strong with this solution as it's not mathematically sound, i.e. I manipulated the formula to make it display an increasing probability of event $A$ occurring when $H_{t_i}$ increases towards $100\%$ AMA within a time interval, and a decreasing probability as $H_{t_i}$ remains the same throughout the whole period $t_{1,2,\ldots,10}$. Regardless, I'm describing the formula anyway.
$P(A)=1-\frac{number\,of\,favourable\,outcomes}{total\,number\,of\,outcomes}$
---
[Test Solution #2: Basic Probability](https://i.stack.imgur.com/ArgYu.png)
This one is mathematically sound and doesn't involve any manipulation to fit my presumptions; the formula is the basic probability calculation described earlier. Intuitively, however, this doesn't make sense, as it assumes that the probability of event $A$ occurring is the same given AMA $=10\%,20\%,\dots,100\%$ for $t_{1,2,3,\dots,10}$.
---
[Test Solution #3: Hybrid 1D Random Walk](https://i.stack.imgur.com/9InyH.png)
Note that the random walk in this model can only move north or east (along the y- and x-axes, respectively) due to the time-decay nature of the problem. It assumes that, since there are only 2 possible outcomes at each node, there is a $50\%$ chance for the price either to reach a new AMA high along the y-axis or to exhaust the time interval and move on to the next node along the x-axis, where $H_{t_i}=c$.
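For what it's worth, here is the kind of Monte Carlo check I have in mind for comparing these solutions (a sketch only: the step size, the 60 one-minute steps per hour and the symmetric up/down moves are my own assumptions, not part of the model above):

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps = 100_000, 60  # e.g. 60 one-minute steps per hour (assumption)
target = 5.0                    # distance from the current high to E(H_t),
                                # measured in units of one step (assumption)

# Symmetric +/-1 random walk; the running maximum plays the role of the high so far
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
running_max = np.maximum.accumulate(np.cumsum(steps, axis=1), axis=1)

# Fraction of paths whose running high has reached the target by each step
p_hit_by_step = (running_max >= target).mean(axis=0)
print(p_hit_by_step[-1])  # probability of hitting the target within the hour
```

Under these assumptions the probability of having hit the target is non-decreasing in the number of elapsed steps, which matches the intuition above that the chance of still reaching $E(H_t)$ decays as the hour runs out while $H_t$ stays flat.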
---
### Final Notes
Given all the Test Solutions I've presented, how do I determine the correct one to use (as a starting point) so as to develop a working probability model for this problem?
TLDR version: I need a viable way to calculate the probability that the price reaches the AMA (the expected high) before the time interval ends. Three test solutions are given; which one (if any) is the correct one to base a repaired probability model on?
| What is the probability that price reaches expected target within a time interval? | CC BY-SA 4.0 | null | 2023-04-12T15:56:18.700 | 2023-04-13T23:22:52.653 | null | null | null | [
"probability",
"finance",
"discrete-time"
] |
612868 | 2 | null | 612788 | 3 | null | I think the question here is why/whether the loglikelihood function is asymptotically locally a parabola with its maximum at $\hat\theta$ (and so the likelihood function is locally $\exp(-(\theta-\hat\theta)^2/2)$). Basically, it's just Taylor series: any smooth function is locally a parabola, and any smooth function with zero derivative at a point is locally a parabola with a max or min at that point. It isn't quite just Taylor series: we do need the Law of Large Numbers, and also the rate of convergence of an average to its expected value, to be sure that the terms in the Taylor expansion are of the sizes we need them to be.
In this case (and relevant other cases) the log-likelihood function is smooth. By the definition of the MLE, the loglikelihood has a maximum at $\hat\theta$ and the derivative is zero at $\hat\theta$. Therefore,
$$\ell(\theta)=\ell(\hat\theta) + 0\times (\theta-\hat\theta) + \tfrac{1}{2}\ell''(\hat\theta)(\theta-\hat\theta)^2 +\text{remainder}$$
Since the derivatives of $\ell$ are proportional to $n$, we need $\theta-\hat\theta= O_p(n^{-1/2})$ to keep the remainder small, but that's exactly the range we're interested in. On that range,
$\ell(\hat\theta)-\ell(\theta)$ is asymptotically a parabola centered at $\hat\theta$, so $\exp(\ell(\theta))$ is asymptotically the same shape as a Gaussian likelihood function.
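To see this numerically, here is a small sketch (my own illustration, using a Bernoulli likelihood as the example): over a $\pm 3$ standard-error grid around $\hat\theta$, the exact log-likelihood and its quadratic Taylor approximation are nearly indistinguishable.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
x = rng.binomial(1, 0.3, size=n)   # Bernoulli sample
theta_hat = x.mean()               # MLE

def loglik(theta):
    # Bernoulli log-likelihood as a function of theta
    return n * (theta_hat * np.log(theta) + (1 - theta_hat) * np.log(1 - theta))

# Observed information: -ell''(theta_hat) = n / (theta_hat * (1 - theta_hat))
info = n / (theta_hat * (1 - theta_hat))

# Evaluate on a +/- 3 standard-error grid, i.e. the O(n^{-1/2}) range
thetas = theta_hat + np.linspace(-3, 3, 61) / np.sqrt(info)
exact = loglik(thetas) - loglik(theta_hat)
quad = -0.5 * info * (thetas - theta_hat) ** 2   # the local parabola
print(np.max(np.abs(exact - quad)))              # remainder stays small
```

The parabola spans several log-likelihood units over this range while the remainder stays a small fraction of one unit, which is the sense in which $\exp(\ell(\theta))$ locally looks Gaussian.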
| null | CC BY-SA 4.0 | null | 2023-04-13T23:24:58.830 | 2023-04-13T23:24:58.830 | null | null | 249135 | null |
612870 | 1 | null | null | 1 | 29 | I'm working on a machine learning classification project and I have run into some difficulties: all of my features are distributed like this: [](https://i.stack.imgur.com/mAeJj.png)
I'm not sure what I should do. Should I use any scalers or other preprocessing methods? All features are counts of some events for a specific person. What problems can this distribution cause after fitting the models? Also, is there a name for this kind of distribution?
| data preprocessing with zeros dominating | CC BY-SA 4.0 | null | 2023-04-13T23:51:14.707 | 2023-04-13T23:51:14.707 | null | null | 385656 | [
"machine-learning",
"distributions",
"data-preprocessing"
] |
612873 | 1 | null | null | 0 | 10 | I have a set of $k$ random variables,
$y_i(x) = f_i(x) + \epsilon_i, y \in \mathbb{R}$
where,
$\epsilon_i \sim \mathcal{N}(0, \sigma_n^2)$ (a noise term)
$x \sim \mathcal{U}(-\infty,\infty; -\infty,\infty)$ (a 2D uniform variable)
$f_i(x) = \varphi(x,\mu_i,\Sigma_s), \mu_i \in \mathbb{R}^2, \Sigma_s = \begin{bmatrix}\sigma_s^2 & 0 \\ 0 & \sigma_s^2\end{bmatrix}$ (i.e., the 2D multivariate normal P.D.F.)
I am interested in the joint posterior probability, $P(x|y)$, and so I am attempting to use Bayes' theorem,
$P(x|y) = \frac{P(y|x) P(x)}{P(y)}$
Since $P(x)$ is uniform, it only has the effect of scaling the result. However, I am unsure of how to arrive at the other terms, or even if I can expect a closed-form solution. I can easily simulate these results for small $k$, but I would like an analytic solution when $k$ is large.
One observation is that $P(y_i | f_i(x)) = P(\epsilon_i = y_i - f_i(x)) = \varphi(y_i, f_i(x), \sigma_n^2)$. Thus, given a realization of $f_i(x)$ I can estimate the probability of $y_i$. I also know that $P(x|f_i(x))\neq0$ defines a circle in $\mathbb{R}^2$ (or more generally an ellipsoid), so I was thinking that I need some generalization of the delta function. In this way I think I can go from $P(x) \rightarrow P(f_i(x)) \rightarrow P(y_i|f_i(x)) \rightarrow P(y_i|x)$. However, I am not sure how to arrive at $P(y|x)$ from $P(y_i|x)$, since the $y_i$ are not conditionally independent.
I am even less sure of how to approach $P(y)$. The range of $f_i(x)$ is strictly positive, but I don't know its probability distribution, much less $P(y_i)$ or the joint $P(y)$. However I imagine since I am working with the normal distribution that if there is a solution it has been studied before.
Is there a solution to this problem? Are there similar problems that do have solutions (e.g., adding a scale factor or other transformation to $f_i$, discretizing $x$)?
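For concreteness, this is the sort of simulation I mean for small $k$ (a sketch with $k=2$; the centers, $\sigma_s$, $\sigma_n$ and the finite grid standing in for the improper uniform prior are all arbitrary choices, and I treat the $y_i$ as conditionally independent given $x$, which holds if the $\epsilon_i$ are independent):

```python
import numpy as np

rng = np.random.default_rng(6)
sigma_s, sigma_n = 1.0, 0.002
mus = np.array([[0.0, 0.0], [2.0, 0.0]])   # k = 2 centers mu_i (arbitrary)

def f(x, mu):
    # 2D isotropic normal pdf phi(x; mu, sigma_s^2 * I)
    d2 = ((x - mu) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma_s ** 2)) / (2 * np.pi * sigma_s ** 2)

x_true = np.array([1.0, 0.5])
y = np.array([f(x_true, m) for m in mus]) + rng.normal(0, sigma_n, size=2)

# Grid approximation to P(x | y), proportional to P(y | x) under a flat prior
# on the grid; each y_i constrains x to lie near a circle around mu_i
g = np.linspace(-4.0, 4.0, 201)
gx, gy = np.meshgrid(g, g)
grid = np.stack([gx, gy], axis=-1).reshape(-1, 2)
log_lik = np.zeros(len(grid))
for yi, m in zip(y, mus):
    log_lik += -(yi - f(grid, m)) ** 2 / (2 * sigma_n ** 2)
post = np.exp(log_lik - log_lik.max())
post /= post.sum()
x_map = grid[post.argmax()]  # lands near an intersection of the two circles
```

With small noise the posterior concentrates on the (two) intersection points of the circles, here $(1, \pm 0.5)$, which illustrates why I don't expect a single-point answer without more structure.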
| Bayesian inference on a set of functions of random variables | CC BY-SA 4.0 | null | 2023-04-14T01:07:32.643 | 2023-04-14T01:07:32.643 | null | null | 385662 | [
"probability",
"bayesian",
"multivariate-normal-distribution"
] |
612874 | 1 | null | null | 1 | 75 | This is probably a very crude question and I've been thinking about it for a while. Is the odds ratio interpreted the same way for generalized additive models (GAMs) as for generalized linear models (GLMs)? If not, how can I interpret odds ratios in a logistic GAM? I found many links about odds ratios for GAMMs but not for GAMs.
| Odds ratio interpretation for generalized additive models | CC BY-SA 4.0 | null | 2023-04-14T01:21:31.790 | 2023-04-16T03:18:31.360 | 2023-04-16T03:15:41.207 | 345611 | null | [
"regression",
"logistic",
"generalized-additive-model",
"odds-ratio",
"mgcv"
] |
612875 | 1 | null | null | 0 | 18 | People sometimes include the interactions between their instruments and region/year or the interactions between different instruments in the first stage of a 2SLS regression. I wonder if the effect hierarchy principle applies to the instruments in 2SLS.
I'm asking this question because this [paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2977074) selects instruments using LASSO without considering the effect hierarchy principle and ends up selecting a few interaction terms without the main effects. Does this make sense?
| Should instrumental variables abide by the effect hierarchy principle? | CC BY-SA 4.0 | null | 2023-04-14T01:23:28.953 | 2023-04-14T01:23:28.953 | null | null | 385663 | [
"machine-learning",
"causality",
"lasso",
"instrumental-variables"
] |
612876 | 1 | null | null | 4 | 184 | A colleague wants to analyze two outcomes (X1, X2). They believe the two outcomes measure the same construct or something similar. They decide to Z-score both outcomes in separate datasets and combine them (X3). They assume that because they are Z-scored they are now on the same scale, and because they are both measuring the same outcome, they can be combined.
A real-world example could be that a person has depressive symptoms measured by the Center for Epidemiological Studies Depression Scale and by the Beck Depression Inventory, in two different datasets. They want to combine these, but the measures are on different scales, so they decide to Z-score and concatenate. But this is inappropriate, no?
I am aware that simply standardizing scores does not necessarily make variables interchangeable or equivalent. However, I am unclear whether there is any simulation-based evidence or literature that can make this point clear to my colleague so that they believe me. I want to make clear that two variables assumed to measure the same construct on different metrics/scales cannot simply be combined because they have been Z-scored. What is the reason for this? What is a good way to communicate that X1 and X2 both being Z-scored does not mean they can be combined?
| Can you combine two different metrics on different scales by Z-scoring? | CC BY-SA 4.0 | null | 2023-04-14T01:29:31.637 | 2023-04-14T06:49:46.043 | null | null | 241198 | [
"standardization",
"z-score"
] |
612877 | 2 | null | 612876 | 4 | null | My quick answer is: if the properties of the two scales are different, then combining the two scales by z-scoring will likely introduce hidden errors into any subsequent analysis. For example,
- if one scale is skewed and another symmetric, z-scoring will still result in one scale being skewed and the other symmetric
- if one scale favours certain conditions and the other favours different kinds of conditions
- etc.
then z-scaling will not magically make this disappear and make them equivalent! I'm not saying that it's always inappropriate, but you should carefully consider the properties of the two scales being combined (from both a theoretical, and an empirical perspective) before making this decision.
---
In R, a quick example:
- I'm generating two sets of scores, one Normal and the other right-skewed
- I'm then z-scaling the scores separately, and then plotting them
Would I combine these two z-scores? Probably not at this stage, since the way that they are distributed post-transformation is still very different.
```
set.seed(123)
scores_group_1 <- rnorm(n = 100, mean = 50, sd = 20)
scores_group_2 <- rgamma(n = 100, shape = 1, scale = 10)
z_group_1 <- scale(scores_group_1, center = TRUE, scale = TRUE)
z_group_2 <- scale(scores_group_2, center = TRUE, scale = TRUE)
par(mfrow = c(2,1))
hist(z_group_1, breaks = seq(-4.75, 4.75, by = 0.5))
hist(z_group_2, breaks = seq(-4.75, 4.75, by = 0.5))
par(mfrow = c(1,1))
```
[](https://i.stack.imgur.com/yy5fl.png)
| null | CC BY-SA 4.0 | null | 2023-04-14T02:17:31.617 | 2023-04-14T06:49:46.043 | 2023-04-14T06:49:46.043 | 369002 | 369002 | null |
612878 | 1 | null | null | 0 | 17 | Suppose I have three variables ($x$, $y$, $z$) and I am calculating their regression coefficients.
How can I calculate the Z-score between two variables (Z-score_xy) if the Z-scores of each of those variables with a third variable are known (Z-score_xz, Z-score_yz)?
If you can show the calculation with betas, that would also be helpful.
Please give a reference.
| How to calculate z-score between two variable if their z-score with a third variable are known? | CC BY-SA 4.0 | null | 2023-04-14T02:32:51.740 | 2023-04-14T03:18:46.747 | 2023-04-14T03:18:46.747 | 169706 | 169706 | [
"regression",
"z-score"
] |
612879 | 1 | null | null | 0 | 11 | I ran a second order growth curve on Mplus and got a very high covariance between intercept and slope and significant variance for the slope and intercept.
Assuming from the results that there are different trajectories for different starting points, I want to make Mplus get different trajectories according to the level of intercept; 1 STD above the mean intercept as "high", 1 STD below the mean as "low " and all others who fall between as "moderate".
I'm figuring that it will be coded as a GMM where I set intercept ranges for each group, but I don't know how. I will post the code and some parts of the output that I think are relevant if it helps anyone that could help me out. Thank you so much in advance!
```
! 2nd order growth factors (No T7,8)
iGAE sGAE|AE1@0 AE2@1 AE3@2 AE4@3 AE5@4 AE6@5 AE9@8;
[iGAE sGAE]; iGAE sGAE; ![intercepts]; variance;
!Class 1 (high intercept [above 1 STD])
%C#1%
[iGAE-sGAE]; ! Estimate means of the intercept and slope factors
!i-s(V1-V2); ! Estimate variance of the intercept and slope factors
!I WITH S (COV1) !Estimate covariance of Class 1
!Class 2 (average intercept)
%C#2%
[iGAE-sGAE]; ! Estimate means of the intercept and slope factors
!i-s(V3-V4); ! Estimate variance of the intercept and slope factors
!I WITH S (COV2) !Estimate covariance of Class 1
!Class 3 (low intercept [below 1 STD])
%C#3%
[iGAE-sGAE]; ! Estimate means of the intercept and slope factors
!i-s(V3-V4); ! Estimate variance of the intercept and slope factors
!I WITH S (COV2) !Estimate covariance of Class 1
```
(from output of the unconditional model)
unstandardized
```
Two-Tailed
Estimate S.E. Est./S.E. P-Value
```
Means
```
I 3.961 0.041 97.537 0.000
```
Variances
```
I 0.516 0.048 10.843 0.000
```
| How do I code for different trajectories for different intercepts for a second order growth model in Mplus | CC BY-SA 4.0 | null | 2023-04-14T03:10:18.593 | 2023-04-19T22:53:55.943 | 2023-04-19T22:53:55.943 | 385667 | 385667 | [
"growth-model",
"mplus",
"growth-mixture-model"
] |
612880 | 1 | null | null | 0 | 17 | I am trying to develop a strategy to deal with outliers.
Below is the residual boxplot that I have generated for the test data, train data and both the test and train data.
My question is, what is the best way to deal with these outliers? Is it ok to just leave them if I had looked at them closely and there is no evidence to show that they are outliers due to mistake? What is the best way to go about this sort of problem?
Thank you!
[](https://i.stack.imgur.com/Uf5LN.png)
| Residual Analysis in R | CC BY-SA 4.0 | null | 2023-04-14T03:11:39.797 | 2023-04-14T03:11:39.797 | null | null | 385668 | [
"residuals",
"boxplot"
] |
612881 | 1 | 612894 | null | 4 | 200 | Question
A drug maker wants to design a study in which two medications are compared. The first medication has shown an improvement in 40% of patients. Researchers want to see if the newer drug will improve this by 10 percentage points, so they design a study assuming a power of 80% and a two-sided significance level of 5%. How many participants are required to detect a difference of 10 percentage points between the two groups?
My attempt and understanding
From the information gathered above there are two critical values $Z_{1-\alpha/2} = 1.96$ and $Z_{\beta}=0.84$ given the significance level of $\alpha = 0.05$. From the proportions, $p_{0} = 0.4$ and the proposed new drug should have the second group $p_{1} = 0.5$. The difference $\theta = p_{1}-p_{0} = 0.1$. The hypotheses
\begin{align}
H_{0}:\theta &= 0 \\
H_{1}: \theta &\neq 0
\end{align}
However, I am unsure what formula has been used to calculate this. I have run the following command in R, which gives me $\approx 388$ per group.
```
> power.prop.test(p1 = 0.4, p2 = 0.5, alternative = "two.sided",
+ sig.level = 0.05,
+ power = 0.80)
Two-sample comparison of proportions power calculation
n = 387.3385
p1 = 0.4
p2 = 0.5
sig.level = 0.05
power = 0.8
alternative = two.sided
NOTE: n is number in *each* group
```
However, I am unsure what formula has been used to calculate the sample size. From my understanding, this is a comparison of two binomial proportions. If I could get any assistance in understanding how this was calculated, it would be greatly appreciated!
| Two proportion sample size calculation | CC BY-SA 4.0 | null | 2023-04-14T03:12:16.520 | 2023-04-14T07:35:34.533 | 2023-04-14T07:35:34.533 | 164936 | 376744 | [
"self-study",
"sample-size",
"statistical-power",
"proportion"
] |
612885 | 2 | null | 612670 | 3 | null | You are keeping bins with 0 counts and assigning them an uncertainty of 0. When you measure 0 counts, 0 is a poor estimate of the uncertainty: 0 is a valid outcome even when you expect a few counts. Common practices are to remove those bins or to assign them a non-zero value for the standard deviation (I would use 1.14, since with an expected count of 1.14 you observe 0 with about 32% probability).
Your chi2 value should be much closer to the expectation value, and you will be able to use a chi2 test to determine goodness of fit.
| null | CC BY-SA 4.0 | null | 2023-04-14T04:44:47.290 | 2023-04-14T04:44:47.290 | null | null | 385588 | null |
612887 | 2 | null | 612609 | 1 | null | This question was answered by @NikoNyrh [here](https://ai.stackexchange.com/questions/40021/blurring-of-image-in-generative-model-using-diffusion-probabilistic-method). The 2-d swiss roll data was represented as a list of $(x,y)$ coordinates, not as a raster image with a rectangular grid of pixel data. Therefore, each $(x,y)$ coordinate undergoes a random walk, which causes the overall image to diffuse as described in the paper.
| null | CC BY-SA 4.0 | null | 2023-04-14T05:21:08.603 | 2023-04-14T05:21:08.603 | null | null | 385459 | null |
612888 | 2 | null | 612694 | 1 | null | The main issue with the chi2 test you are performing is the treatment of empty bins where your estimate of the std is very poor.
Another issue is that the chi2 test assumes normality, but you have counts, so theoretically the test won't work; in practice it might still give you useful answers. If you are able to simulate your data around the best fit (for example, use the best fit as the expected value for each bin and draw counts from a Poisson distribution), then you can base your chi2 test on the simulated distribution instead of the expectation for Gaussian data.
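A minimal sketch of that parametric-bootstrap idea (the expected counts here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
expected = np.array([12.0, 7.5, 3.2, 1.1, 0.4])  # best-fit expected counts (made up)
observed = rng.poisson(expected)                 # stand-in for the real data

def chi2_stat(obs, exp):
    # Pearson chi-squared statistic
    return ((obs - exp) ** 2 / exp).sum()

stat_obs = chi2_stat(observed, expected)

# Simulate many histograms from the best fit and use them as the
# reference distribution instead of the asymptotic chi2 distribution
sims = rng.poisson(expected, size=(10_000, len(expected)))
stats = ((sims - expected) ** 2 / expected).sum(axis=1)
p_value = (stats >= stat_obs).mean()
print(p_value)
```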
| null | CC BY-SA 4.0 | null | 2023-04-14T05:24:27.390 | 2023-04-14T05:24:27.390 | null | null | 385588 | null |
612889 | 1 | null | null | 0 | 24 | I am considering the following log log model :
$$
log(y_t) = log(\beta_0) + \sum_{i=1}^{K}\beta_ilog(x_{i,t}) + \sum_{j=K+1}^{L}\beta_jx_{j,t} + log(\epsilon_t)
$$
to explain the sales of a company given several factors. After estimating the model, I would like to "decompose" the dependent variable (the sales) by computing the contribution of each explanatory variable. Ideally, what I have in mind would be percentage contributions summing to 1, but I don't know how to go about it since some of my variables are in logs and others are not; I don't even know if it's possible. So I would like your help on this, please.
Thanks a lot!
| How to decompose the dependent variable in a log log model | CC BY-SA 4.0 | null | 2023-04-14T06:10:33.390 | 2023-04-14T06:10:33.390 | null | null | 375362 | [
"regression",
"econometrics",
"interpretation",
"nonlinear-regression"
] |
612891 | 2 | null | 612881 | 4 | null | It isn't precisely from a formula. According to the code inside `power.prop.test`, the power for any value of `n` is calculated as
```
pnorm((sqrt(n) * abs(p1 - p2) - (qnorm(sig.level/2,lower.tail = FALSE) *
sqrt((p1 + p2) * (1 - (p1 + p2)/2))))/
sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
```
or in traditional notation
$$\Phi\left(\frac{\sqrt{n}|p_1-p_2|-z_{\alpha/2}\sqrt{(p_1+p_2)(1-(p_1+p_2)/2)}}{\sqrt{p_1(1-p_1)+p_2(1-p_2)}} \right) $$
and numerical root-finding methods are used to get the value of `n` that makes this equal to 80%.
That's not necessarily how you'd do the computations by hand.
| null | CC BY-SA 4.0 | null | 2023-04-14T06:35:06.510 | 2023-04-14T06:35:06.510 | null | null | 249135 | null |
612894 | 2 | null | 612881 | 3 | null | A formula based on a large sample test for the equality of two independent proportions is:
\begin{align}
n_1 &= \kappa n_2 \\
n_2 &= \frac{(z_{\alpha/2} + z_{\beta})^2}{(p_1 - p_2)^2}\left[\frac{p_1(1 - p_1)}{\kappa} + p_2(1 - p_2)\right]
\end{align}
Where $p_1$ and $p_2$ are the proportions, $\kappa$ is the allocation ratio, $\alpha$ is the significance level and $\beta$ is one minus the power.
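As a quick numerical check (a sketch I added, with the question's inputs $p_1=0.4$, $p_2=0.5$, $\kappa=1$, $\alpha=0.05$, power $=0.8$), this formula gives roughly 385 per group, close to the 387.3 from `power.prop.test`; the small difference arises because `power.prop.test` uses a pooled variance term for the null-hypothesis part of the calculation:

```python
from statistics import NormalDist
from math import ceil

norm = NormalDist()

def n2(p1, p2, kappa=1.0, alpha=0.05, beta=0.2):
    # Group-2 sample size from the formula above; n1 = kappa * n2
    z = norm.inv_cdf(1 - alpha / 2) + norm.inv_cdf(1 - beta)
    return z ** 2 / (p1 - p2) ** 2 * (p1 * (1 - p1) / kappa + p2 * (1 - p2))

print(ceil(n2(0.4, 0.5)))  # per-group sample size, rounded up
```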
Reference
Chow S-C, Shao J, Wang H, Lokhnygina Y (2018): Sample size calculations in clinical research. 3rd ed. CRC Press.
| null | CC BY-SA 4.0 | null | 2023-04-14T07:08:21.603 | 2023-04-14T07:08:21.603 | null | null | 21054 | null |
612895 | 1 | null | null | 0 | 28 | I'm trying to understand the proofs in the paper [The information bottleneck method](https://arxiv.org/abs/physics/0004057) by Tishby, Pereira and Bialek, without luck. In particular, the second term in the functional derivative of the mutual information, i.e.,
$$
\frac{\partial}{\partial p(\tilde{x} | x)} I(X, \tilde{X}) = \frac{\partial}{\partial p(\tilde{x} | x)} \left[ \sum_{x, \tilde{x}} p(\tilde{x} | x) p(x) \big( \log p(\tilde{x} | x) - \log p(\tilde{x}) \big) \right].
$$
In equations 10 and 25, when they take the functional derivative $\frac{\partial}{\partial p(\tilde{x} | x)} \sum_{x, \tilde{x}} p(\tilde{x} | x) p(x) \log p(\tilde{x})$, i.e. the part of the mutual-information derivative $I(X, \tilde{X})$ corresponding to the denominator inside the $\log$, they find the derivative in a way I cannot understand.
In (10),
$$\frac{\partial}{\partial p(\tilde{x} | x)} \sum_{x, \tilde{x}} p(\tilde{x} | x) p(x) \log p(\tilde{x}) = p(x) \left[ \log p(\tilde{x}) + \frac{1}{p(\tilde{x})} \sum_{x'} p(x') p(\tilde{x} | x') \right].$$
But shouldn't the second term $\sum_{x'} p(x') p(\tilde{x} | x')$ be the functional derivative of $p(\tilde{x})$ w.r.t. $p(\tilde{x} | x)$, due to the chain rule? Additionally, I think it is missing the factor $p(\tilde{x} | x)$ from the product rule. (See my attempt at this derivation below.) In (25), they do something similar (I guess), since the term accompanying $\log p(\tilde{x})$ is $1$.
My attempt is
$$\begin{align}
\frac{\partial}{\partial p(\tilde{x} | x)} \sum_{x, \tilde{x}} p(\tilde{x} | x) p(x) \log p(\tilde{x}) &= p(x) \left[ \log p(\tilde{x}) + \frac{p(\tilde{x} | x)}{p(\tilde{x})} \frac{\partial p(\tilde{x})}{\partial p(\tilde{x} | x)} \right] \\
&= p(x) \left[ \log p(\tilde{x}) + \frac{p(\tilde{x} | x)}{p(\tilde{x})} p(x) \right] \\
&= p(x) \left[ \log p(\tilde{x}) + p(x | \tilde{x}) \right]
\end{align}$$
So, I was wondering if somehow $p(x | \tilde{x})=1$ given the $x$ and $\tilde{x}$ after the functional derivative, due to $\tilde{x}$ being considered the quantized version of $x$ (the codeword in the codebook); and if this is the case, why?
What am I doing wrong?
| Trying to understand the derivation in the Information Bottleneck Method | CC BY-SA 4.0 | null | 2023-04-14T07:12:45.760 | 2023-04-14T07:12:45.760 | null | null | 74093 | [
"information-theory",
"mutual-information",
"derivative"
] |
612896 | 1 | null | null | 0 | 10 | I have data with a large proportion of zeros that we are committed to analysing with an interrupted time series (ITS) design (to fit in with other parts of a project), and I have been advised that the best approach would be to convert to a binary outcome. I've done so and fit a binomial GLM to the data, but (with experience of more standard count/rate-based ITS analyses) I am not sure whether the outcome should be reported as the coefficients of the model as normal. I understand that these would be odds ratios, but I wondered whether the coefficients were also less relevant here?
| Interrupted time series analysis binary outcome variable | CC BY-SA 4.0 | null | 2023-04-14T08:00:29.870 | 2023-04-14T08:00:29.870 | null | null | 343051 | [
"interpretation",
"binomial-distribution",
"binary-data"
] |
612897 | 1 | null | null | 0 | 14 | I have run a binomial generalised linear mixed model (GLMM) via the lme4 package in R. My optimal model comes from this syntax: `fm1 <- glmer(answer ~ (1|subj) + (1|item) + seeconversationmask, data = analysis1, family = binomial, control = glmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 2e5)))`
I have three independent variables, all categorical: the first has two categories, the second has two categories and the third has three categories. The optimal model is the one with the three-way interaction, which means all variables are needed to explain the finding.
This is an example of the output I've got from the emmeans package for pairwise comparisons, to see whether there are significant differences in each contrast:
[](https://i.stack.imgur.com/rOYcr.png)
An example of my interpretation is: 'The participants received significantly lower scores in the ao, clear, dm context than in the ao, con, dm context (b = -4.74, SE = 0.60, p < 0.01).'
However, after running the syntax for the odds ratio, it gives me this:
[](https://i.stack.imgur.com/e7QOP.png)
So we can see that the results from the odds-ratio formula do not match the results from the pairwise comparisons. Hence, I cannot use the odds ratio from this formula, as it doesn't give me an odds ratio for each pairwise contrast. Do you have any other solution? Is there another way to get an odds ratio for each pairwise contrast with three independent variables? Sorry for posting this question again; the first time it wasn't answered.
| How to get odds ratios for pairwise comparisons from a binomial GLMM? | CC BY-SA 4.0 | null | 2023-04-14T08:14:00.733 | 2023-04-14T08:14:00.733 | null | null | 40023 | [
"multiple-comparisons",
"glmm",
"odds-ratio",
"lsmeans"
] |
612898 | 2 | null | 502531 | 2 | null | I found [this article](https://distill.pub/2019/visual-exploration-gaussian-processes/) very helpful towards building an intuition of GPs.
The mean and covariance functions you define for your prior distribution set up the distribution that you can sample functions from, where the covariance affects the shape (wiggliness, trend, periodicity) of the functions.
[](https://i.stack.imgur.com/mAwjE.png)
Given observation points, you define a conditional distribution which must now pass through (or close to) your observation points. So if you have observed points $Y$, you are interested in the distribution of $X$ given $Y$, and you have assumed a Gaussian joint prior distribution with mean $\begin{pmatrix} \mu_X & \mu_Y \end{pmatrix}^\top$ and covariance matrix $\begin{pmatrix} \Sigma_{XX} & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_{YY}\end{pmatrix}$, then the conditional distribution of $X$ given the observations $Y$, $X|Y$, is given by
$$X|Y \sim \mathcal{N} (\mu_X + \Sigma_{XY}\Sigma_{YY}^{-1}(Y-\mu_Y), \Sigma_{XX} - \Sigma_{XY}\Sigma_{YY}^{-1}\Sigma_{YX}).$$
This distribution now looks more like this:
[](https://i.stack.imgur.com/Za8dT.png)
Once again it is a distribution, so functions can be sampled from it.
I also found [this YouTube video](https://www.youtube.com/watch?v=UBDgSHPxVME) helpful for building an intuition for kernel functions effect on shape.
(Images used from [this article](https://distill.pub/2019/visual-exploration-gaussian-processes/))
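To make the conditioning formula concrete, here is a small numpy sketch (my own illustration, with zero prior means and an assumed squared-exponential kernel):

```python
import numpy as np

def rbf(a, b, length=1.0):
    # Squared-exponential kernel k(a, b) = exp(-(a - b)^2 / (2 * length^2))
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

X_obs = np.array([-2.0, 0.0, 1.5])  # inputs of the observed points Y
y_obs = np.array([0.5, -1.0, 0.8])  # observed values (zero prior mean assumed)
X_new = np.linspace(-3.0, 3.0, 7)   # inputs where we want the conditional

K_yy = rbf(X_obs, X_obs) + 1e-9 * np.eye(len(X_obs))  # tiny jitter for stability
K_xy = rbf(X_new, X_obs)                              # Sigma_XY
K_xx = rbf(X_new, X_new)                              # Sigma_XX

# Conditional mean and covariance from the formula above (mu_X = mu_Y = 0)
mean = K_xy @ np.linalg.solve(K_yy, y_obs)
cov = K_xx - K_xy @ np.linalg.solve(K_yy, K_xy.T)

# It is still a distribution, so functions can be sampled from it
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean, cov + 1e-9 * np.eye(len(X_new)), size=5)
```

At inputs that coincide with observed points the conditional mean reproduces the observed values and the conditional variance collapses to (essentially) zero, which is exactly the "must pass through the observations" behaviour shown in the second figure.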
| null | CC BY-SA 4.0 | null | 2023-04-14T08:17:53.567 | 2023-04-14T08:17:53.567 | null | null | 363176 | null |
612899 | 1 | null | null | 1 | 20 | Can a Mann-Whitney test be done on samples of different sizes, one of about 18 and the other of about 32? The problem looks at the difference in views between tourists and locals about the environmental quality of a location.
| Can I do the Mann-Whitney test with different sample sizes 18 and 32? | CC BY-SA 4.0 | null | 2023-04-14T08:24:27.973 | 2023-04-14T09:01:42.867 | 2023-04-14T09:01:42.867 | 22047 | 385691 | [
"sample-size",
"wilcoxon-mann-whitney-test"
] |
612900 | 1 | null | null | 2 | 38 | I have a general population of, let's say, 100,000 people, 10,000 of whom are on a register following a diagnosis of kidney disease (this involves looking at a blood result and coding a diagnosis). However, I hypothesize there may be more undiagnosed people in the 100,000 general population (blood results indicate disease but there is no diagnosis code). I wish to test whether, if there are undiagnosed people, their number is statistically significant. My null hypothesis is that the kidney disease register is robust and the number of undiagnosed people is minimal. I'm at a loss as to which statistical test to use. Any help would be much appreciated. Thanks
| what statistical test to use for the following research question? | CC BY-SA 4.0 | null | 2023-04-14T08:30:11.797 | 2023-04-14T16:55:01.833 | null | null | 385693 | [
"hypothesis-testing"
] |
612901 | 2 | null | 612250 | 1 | null | I was able to generate these plots using the `ggeffects` package, as suggested by dipetkov in the comments. An example is below. It is possible to alter the size of the confidence intervals with `ci.lvl` within the `ggpredict()` function, so I was able to create multiple CI bands simply by generating two objects of class `"ggeffects"` (one with the CI set to .95 and one to .999) and plotting both with `geom_ribbon()`, using a lower alpha for the wider CI band.
[](https://i.stack.imgur.com/mKbCv.jpg)
| null | CC BY-SA 4.0 | null | 2023-04-14T08:33:37.387 | 2023-04-14T08:33:37.387 | null | null | 347134 | null |
612902 | 1 | null | null | 0 | 35 | I am trying to identify a list of features that are significantly overrepresented in one population compared to the other. The experiment is designed such that I have two groups of individuals, A and B, each with 20 individuals. Over 2 million features have been analyzed in these two groups, and for each feature its presence/absence has been counted. As a result I have a table that looks like this:
```
A | B
yes/no | yes/no
8/12 | 19/1
2/18 | 3/17
...
```
The problem is that if I apply Fisher's exact test to each row and correct the obtained p-values with the Bonferroni correction, the small within-row counts mean my p-values are not low enough to survive the correction: all corrected p-values after the lowest one quickly become 1. How do I properly correct for multiple testing in the scenario described above?
| Proper correction for multiple Fisher's exact tests | CC BY-SA 4.0 | null | 2023-04-14T08:43:28.877 | 2023-04-14T08:43:28.877 | null | null | 385695 | [
"hypothesis-testing",
"statistical-significance",
"fishers-exact-test",
"bonferroni"
] |
612903 | 1 | null | null | 1 | 44 | Is it safe to run a logistic regression with time as the independent variable? For example, I want to test whether a certain outcome (say, blue or red) changes over time. I know that, in general, one should be careful with time series because of autocorrelation.
EDIT: the dataset I have contains hundreds of observations per month, with a decrease in the blue outcome in favour of red towards the end of the time span. Note that the data might not be completely independent, as I have multiple observations for each month coming from the same individual, but the individuals are different for each month (this is not a consequence of poor experimental design; it is impossible to get an observation from the same individual in different months).
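For concreteness, here is a simulation sketch of the kind of data and trend I have in mind (all numbers invented: 24 months, 300 observations per month, and a steady drift from blue to red; the least-squares line through the empirical log-odds is only a crude stand-in for the actual logistic regression):

```python
import numpy as np

rng = np.random.default_rng(4)
months = np.arange(24)
true_p_red = 1 / (1 + np.exp(-(-1.0 + 0.08 * months)))  # red share rising
n_per_month = 300
red_counts = rng.binomial(n_per_month, true_p_red)

# Crude trend check: least-squares line through the empirical log-odds
p_hat = red_counts / n_per_month
logit = np.log(p_hat / (1 - p_hat))
slope, intercept = np.polyfit(months, logit, 1)
print(slope)  # recovers the underlying per-month drift in the log-odds
```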
| Logistic linear regression and time series | CC BY-SA 4.0 | null | 2023-04-14T08:49:46.980 | 2023-04-14T12:56:02.480 | 2023-04-14T12:56:02.480 | 372559 | 372559 | [
"time-series",
"logistic",
"autocorrelation",
"trend"
] |
612904 | 1 | 612930 | null | 22 | 1605 | Why do some definitions of the Kullback-Leibler divergence include extra terms $-p_i + q_i$? For example, [kl_div()](https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.kl_div.html) (in the Python `scipy.special` module) defines the Kullback-Leibler divergence as
$$
\sum_i p_i \ln\frac{p_i}{q_i} - p_i + q_i.
$$
The documentation says:
>
The origin of this function is in convex programming; see [1] for details. This is why the function contains the extra terms over what might be expected from the Kullback-Leibler divergence.
I don't have the referenced book at hand. What is the justification or motivation for the additional $-p_i + q_i$ terms?
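One property I can verify numerically myself (a numpy sketch, not from the referenced book): for proper distributions the extra terms sum to zero, so the total is unchanged, while for unnormalized nonnegative vectors they keep every summand nonnegative.

```python
import numpy as np

def kl_div_term(p, q):
    # Generalized form with the extra terms: p*log(p/q) - p + q
    return p * np.log(p / q) - p + q

def rel_entr_term(p, q):
    # Plain relative-entropy summand: p*log(p/q)
    return p * np.log(p / q)

# Proper distributions: the extra terms cancel, since sum(-p + q) = -1 + 1 = 0
p = np.array([0.2, 0.5, 0.3])
q = np.array([0.1, 0.6, 0.3])
print(kl_div_term(p, q).sum(), rel_entr_term(p, q).sum())  # equal totals

# Unnormalized nonnegative vectors: every generalized term is still >= 0
u = np.array([2.0, 0.5])
v = np.array([1.0, 1.5])
print(kl_div_term(u, v))    # all entries nonnegative
print(rel_entr_term(u, v))  # can have negative entries
```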
Anti-closing note: This is not a question about software, but about the concept behind it.
| Why are there extra terms $-p_i+q_i$ in SciPy's implementation of Kullback-Leibler divergence? | CC BY-SA 4.0 | null | 2023-04-14T08:53:17.540 | 2023-04-17T16:17:46.143 | 2023-04-17T16:17:46.143 | 22228 | 169343 | [
"optimization",
"information-theory",
"kullback-leibler",
"convex"
] |
612905 | 1 | 612913 | null | 0 | 39 | Assume we have $n$ non-iid standard normal random variables $X_i$. I'm interested in the distribution of $Z=\sum X_i^2$. It is clear to me, that the sum of independent $n$ squared standard normal variables will follow a chi-squared distribution with n degrees of freedom. However, as said above, we assume the $X_i$ to be pairwise correlated. To be more precise:
Assume we have $n$ non-iid standard normal random variables $X_i (i=1,...,n)$ with mean vector $\vec{\mu}= \vec 0$ and covariance matrix $\Sigma$. Is it possible to calculate the distribution of $Z$ as a function of $\sigma_{ij}$? Or of $\rho_{ij}$ (the pairwise correlation coefficient)?
| Distribution sum of squared correlated normal random variables | CC BY-SA 4.0 | null | 2023-04-14T09:26:15.163 | 2023-04-14T10:56:51.007 | 2023-04-14T09:42:31.507 | 362671 | 385698 | [
"distributions",
"multivariate-normal-distribution"
] |
612907 | 1 | null | null | 0 | 12 | I have an experiment where I take a set of samples, and test them. If I then take a second set of samples and repeat the experiment, how can I test if there is a significant difference between the sample sets?
More specifically, I take 25 rods of a material and subject them to a breakage test at five different strain rates, with five replicates per strain rate. I can plot breakage stress vs strain rate as a scatter plot and calculate the standard errors, etc.
Now if I take a different 25 rods of a treated material and repeat the experiment under the same conditions and same strain rates, I get a second scatter plot.
I know from other work that the breakage stress at each strain rate is (almost!) normally distributed.
How can I tell if there is a difference due to the treatment?
I could take each strain rate separately and do a (paired?) t-test giving 5 p-values or just put all the 25 results together for untreated and treated, then do a two sample t-test but either of these seem to be ignoring the structure of the data.
| Appropriate technique for comparing two experiments with replicates | CC BY-SA 4.0 | null | 2023-04-14T09:37:29.470 | 2023-04-14T09:37:29.470 | null | null | 17907 | [
"t-test"
] |
612910 | 2 | null | 612796 | 0 | null | A few thoughts.
First, I'm not sure how useful a "restricted mean survival time" is in this situation. You presumably are looking at times to failure of a shunt after implantation. That's not very useful if I were a patient contemplating a shunt, as that value lumps all those with shunts lasting more than 12 months together at 12 months. Your RMST values around 11 months could be due to ~90% lasting beyond 12 months with ~10% failing very soon after implantation, or everyone failing at 11 months, or lots of possibilities in between. So the RMST doesn't provide very useful information about the risks of having a shunt implanted. I'd be more interested in the probability of a shunt lasting at least 12 months.
Second, the power of a survival model comes primarily from the number of events, not from the number of cases. The number of events would thus seem to be a better choice for getting the standard deviation from the standard error. Meta-analysis is outside my comfort zone, so check what's the standard choice in the field for survival models.
Third, note that your Prediction Interval goes out to more than 13 months, while a restricted mean survival up to 12 months can't be more than 12 months. Something in the calculation isn't respecting the censoring at 12 months.
Instead of focusing on RMST, it might make more sense in your meta-analysis to combine survival-fraction estimates at different times after implantation, and/or estimated survival times to some survival fraction greater than 50%. I don't have experience with the [IPDfromKM package](https://cran.r-project.org/package=IPDfromKM), but its `survreport()` function can:
>
report the survival times with confidence intervals for a given vector of survival probabilities, as well as the landmark survival probabilities of interest.(for example, if set interval=6, the survival probability will be reported at every six months)
That would seem to be a more straightforward use of the individual patient estimates than working with RMST.
| null | CC BY-SA 4.0 | null | 2023-04-14T10:09:40.090 | 2023-04-14T16:41:46.180 | 2023-04-14T16:41:46.180 | 28500 | 28500 | null |
612911 | 2 | null | 612808 | 0 | null | Your "not-significant" log-rank test could be due to imbalance between treatment groups in terms of other outcome-associated variables. Also, in a Cox model, omitting any outcome-associated variable will tend to bias the magnitudes of the coefficients for included predictors toward 0. The log-rank analysis does just that: the only predictor you include is treatment, so the log-rank test might be underestimating the true treatment effect. Thus you typically want to adjust in some way for as many outcome-related predictors as reasonable.
The Cox multiple regression model is one way to adjust for other predictors. There are other ways that might better assess the causal effect of the treatment. The [treatment-effect tag](https://stats.stackexchange.com/tags/treatment-effect/info) on this site labels many useful pages.
| null | CC BY-SA 4.0 | null | 2023-04-14T10:27:26.587 | 2023-04-14T10:27:26.587 | null | null | 28500 | null |
612912 | 2 | null | 610287 | 1 | null | What I need is called "continual learning". I am searching through the scientific literature and I have found [this](https://arxiv.org/pdf/2101.10423.pdf) paper that gives many information about the theme I have posted. It defines in a formal way the problem, furnishes a review of the state of art methods, and presents a comparison between different approaches in handling this problem.
| null | CC BY-SA 4.0 | null | 2023-04-14T10:40:03.877 | 2023-04-14T10:40:03.877 | null | null | 379875 | null |
612913 | 2 | null | 612905 | 2 | null | The distribution of $Z$ will depend on the full joint distribution of $X = (X_1,X_2,...,X_n)^T$. If you assume for example that $X$ has a multivariate normal distribution with covariance matrix $\Sigma$, then by performing a unitary transformation that diagonalizes the covariance matrix you can express $Z$ as a weighted sum of independent $\chi^2_1$ random variables :
$$ Z = X^TX = \lambda_1 u_1^2 + \lambda_2 u_2^2 + ... + \lambda_n u_n^2$$
$$ u_i^2 \overset{\mathrm{iid}}{\sim} \chi^2_1$$
where $\lambda_i$ are the eigenvalues of $\Sigma$.
For $n=2$ you can find a [closed form](https://math.stackexchange.com/questions/1324382/sum-of-weighted-chi-square-distributions) for the distribution of $Z$, but not for the general case (see e.g. [here](https://arxiv.org/abs/2203.11940)).
You can however find the moments of $Z$, for example:
$$ Var(Z) = \sum_i \lambda_i^2 Var(u_i^2) = 2 \sum_i \lambda_i^2$$
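For a quick sanity check, here is a Monte Carlo sketch in Python (the example covariance matrix is my own choice) verifying the eigenvalue representation and the moment formula above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Example covariance matrix for n = 3 correlated standard normals
# (unit diagonal, so the marginals are standard normal)
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])

# Eigenvalues of Sigma are the weights of the chi-squared mixture
lam = np.linalg.eigvalsh(Sigma)

# Simulate Z = X^T X directly from the multivariate normal
X = rng.multivariate_normal(np.zeros(3), Sigma, size=200_000)
Z = (X ** 2).sum(axis=1)

# Moment checks: E[Z] = sum(lam) = trace(Sigma), Var(Z) = 2 * sum(lam^2)
print(Z.mean(), lam.sum())            # both close to 3.0
print(Z.var(), 2 * (lam ** 2).sum())  # close to each other
```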
| null | CC BY-SA 4.0 | null | 2023-04-14T10:51:16.047 | 2023-04-14T10:56:51.007 | 2023-04-14T10:56:51.007 | 348492 | 348492 | null |
612914 | 1 | null | null | 0 | 13 | I ran into a problem interpreting z values in my GLMM and its post-hoc tests. Since what I am modeling is counts, I used the Poisson family in glmer, and I got a z value of 2.278. But after I put the model into post-hoc analysis ("Dunn" adjustment) with "emmeans", the z value became -3.350. Why would the z value change? I'm concerned about the reliability of the results. If the two are different, which value should I trust?
The code in the model runs like this: summary(m1 <- glmer(y ~ fixed1*fixed2+(1|random1)+(1|random2), family="poisson", data=data,control=glmerControl(optimizer = "bobyqa"))). Fixed1 represents for fixed factor one, fixed2 for fixed factor 2, random1 for random factor 1, and random2 for random factor 2. While for the post-hoc analysis, I used: pairs(emmeans(m1, pairwise~fixed1),adjust="Dunn"), in order to adjust p value. But the z values are different from the two. So how shall I interpret the results? Thanks!
| Different z values after adjusting the p values with post-hoc analysis? | CC BY-SA 4.0 | null | 2023-04-14T11:12:44.620 | 2023-04-14T11:12:44.620 | null | null | 385042 | [
"r",
"z-score"
] |
612915 | 1 | null | null | 0 | 13 | I'm new to Bayesian statistics. I'm running a linear mixed effects model in R (using `lmer`), and I want to report Bayes Factors using these results, as demonstrated in Silver, Dienes & Wonnacott (2021). This paper uses an R script version of Dienes' Bayes Factor calculator found here: [http://www.lifesci.sussex.ac.uk/home/Zoltan_Dienes/inference/Bayes.htm](http://www.lifesci.sussex.ac.uk/home/Zoltan_Dienes/inference/Bayes.htm)
I have a question about specifying the model of the alternative hypothesis. In this paper, the authors state that this can be defined with a normal distribution with mean (effect size) 0 and that this makes 'a more stringent test' as it is then harder to discriminate H1 and H0. However, surely an effect size of 0 is impossible in the model of the alternative hypothesis?
| Specifying Model of Alternative Hypothesis for Calculating Bayes Factors | CC BY-SA 4.0 | null | 2023-04-14T11:15:26.430 | 2023-04-14T11:15:26.430 | null | null | 379020 | [
"bayesian",
"mixed-model",
"bayes-factors"
] |
612916 | 2 | null | 612729 | 1 | null | I wasn't entirely sure about what you mean by all possible comparisons, but one thing you could do would be to compare the continuous predictor's slopes between different levels of your categorical predictors. For this, you can use emmeans emtrends:
(Va_k seems to be the continuous predictor. I'll use the Shifttype as the example categorical predictor):
```
library(emmeans)
em<-emtrends(model, "ShiftType", var="Va_k")
pairs(em)
```
Edited to add: for 3-way interactions you can similarly use
```
em2<-emtrends(model, ~ShiftType|TrialType, var="Va_k")
pairs(em2)
```
but I suspect there may be some better way to do the latter (better than just running all possible contrasts). For instance [this](https://stats.stackexchange.com/questions/355611/pairwise-comparisons-with-emmeans-for-a-mixed-three-way-interaction-in-a-linear) and [this](https://stats.stackexchange.com/questions/563765/probing-categorical-3-levels-x-continuous-var-with-emmeans) response here might be useful.
| null | CC BY-SA 4.0 | null | 2023-04-14T11:16:04.070 | 2023-04-14T12:29:03.513 | 2023-04-14T12:29:03.513 | 357710 | 357710 | null |
612917 | 1 | null | null | 0 | 101 | I have an AR(3)-GJR-GARCH(2,2,2) model. How can I test the presence of ‘leverage effects’ (i.e. asymmetric responses of the conditional variance to the positive and negative shocks) with 5% significance level?
Below is my code for the model:
```
startdate = '2009-01-01'
enddate = '2021-12-31'
data = yf.download('GD', start = startdate, end = enddate)
data.rename(columns={"Adj Close": "price"}, inplace = True)
log_returns = np.log(data['price']/data['price'].shift(1))*100 # Log return in %
log_returns.dropna(inplace = True)
startdate = '2010-01-01'
enddate = '2018-12-31'
in_sample_return = log_returns.loc[startdate:enddate]
gjr_garch = arch_model(in_sample_return,mean='AR',lags=3,vol='GARCH',p=2,o=2,q=2,dist='t').fit(update_freq=5)
```
What do I do next to check if ‘leverage effects’ is present at 5% significance level?
As I understand it, the gamma parameter captures the leverage effect, and a non-zero gamma means the model has a leverage effect; the problem is that this model has two gamma parameters.
I thought checking the gamma coefficients would be enough, but since a "5% significance level" is specified, I believe a p-value needs to be calculated, and I'm not sure how to do it.
| GARCH model analysis using python | CC BY-SA 4.0 | null | 2023-04-14T11:26:50.870 | 2023-04-14T11:52:56.267 | 2023-04-14T11:48:39.080 | 53690 | 385690 | [
"python",
"garch",
"volatility"
] |
612920 | 2 | null | 612917 | 0 | null | You need to assess the joint significance of the two gamma coefficients. For that you will need to extract the estimated covariance matrix of the parameters from the fitted model object and do an $F$-test (or a $\chi^2$ test if the sample is large enough). I am not sure if there is a function that does the $F$-test given just the fitted ARMA-GJR-GARCH model object or the covariance matrix and the two point estimates. However, you could always do this by hand. Conceptually, it is no more difficult than doing an $F$-test in a multiple regression model by hand. Finding worked out examples for the latter is probably quite easy.
| null | CC BY-SA 4.0 | null | 2023-04-14T11:52:56.267 | 2023-04-14T11:52:56.267 | null | null | 53690 | null |
612922 | 1 | null | null | 0 | 21 | I am trying to model the following and would be happy for input on whether it's correct.
I want to fit a model that checks for differences in Statuses between the data sets plotted here on the graph:
The question is: are there significantly more poor/moderate/good statuses in one data origin compared to the others?

my data format is the following:
```
> head(data$Nutritional.Status)
[1] Moderate Moderate Moderate Poor-very poor Moderate Moderate
Levels: Good Moderate Poor-very poor
> head(data$Nutritional.Status.olr)
[1] 2 2 2 3 2 2
Levels: 1 2 3
> head(data$Data.origin)
[1] IR.recent IR.recent IR.recent IR.recent IR.recent IR.recent
Levels: IR.historic IR.recent UK
```
I have tried to compute an ordinal logistic regression from [here](https://www.analyticsvidhya.com/blog/2016/02/multinomial-ordinal-logistic-regression/):
```
> data$Data.origin <- as.factor(data$Data.origin)
> m <- polr(data$Nutritional.Status.olr ~ Data.origin, data = data, Hess=TRUE)
> summary(m)
Call:
polr(formula = data$Nutritional.Status.olr ~ Data.origin, data = data,
Hess = TRUE)
Coefficients:
Value Std. Error t value
Data.originIR.recent 1.2417 0.4716 2.633
Data.originUK -0.8151 0.4362 -1.869
Intercepts:
Value Std. Error t value
1|2 -0.2478 0.4251 -0.5831
2|3 1.4957 0.4339 3.4475
Residual Deviance: 895.0317
AIC: 903.0317
(428 observations deleted due to missingness)
> ctable <- coef(summary(m))
> ctable
Value Std. Error t value
Data.originIR.recent 1.2417349 0.4716366 2.6328213
Data.originUK -0.8150987 0.4361647 -1.8687866
1|2 -0.2478473 0.4250569 -0.5830922
2|3 1.4957495 0.4338635 3.4475116
> p <- pnorm(abs(ctable[, "t value"]), lower.tail = FALSE) * 2
> ctable <- cbind(ctable, "p value" = p)
> ctable
Value Std. Error t value p value
Data.originIR.recent 1.2417349 0.4716366 2.6328213 0.008467888
Data.originUK -0.8150987 0.4361647 -1.8687866 0.061652505
1|2 -0.2478473 0.4250569 -0.5830922 0.559831226
2|3 1.4957495 0.4338635 3.4475116 0.000565776
> ci <- confint(m)
Waiting for profiling to be done...
> ci
2.5 % 97.5 %
Data.originIR.recent 0.3241307 2.18429517
Data.originUK -1.6670721 0.05759485
> exp(coef(m))
Data.originIR.recent Data.originUK
3.4616139 0.4425956
> exp(cbind(OR = coef(m), ci))
OR 2.5 % 97.5 %
Data.originIR.recent 3.4616139 1.382828 8.884384
Data.originUK 0.4425956 0.188799 1.059286
```
I am not sure why IR.historic is no longer mentioned anywhere in the outputs. The IR.historic data has far fewer points overall due to the nature of the sample collection; maybe it was excluded for that reason?
see here my data summarized:
```
> counts2 <- table(data$Nutritional.Status, data$Data.origin)
> counts2
IR.historic IR.recent UK
Good 12 12 251
Moderate 4 37 106
Poor-very poor 7 34 36
```
Is my final output (ctable) telling me that the IR.recent origin has significantly more status 3 than statuses 1 & 2 in comparison to the UK? That is my desired outcome.
| Ordinal logistic regression correctly applied / is the model correct to answer my question | CC BY-SA 4.0 | null | 2023-04-14T12:03:12.527 | 2023-04-14T16:39:50.553 | 2023-04-14T16:39:50.553 | 385710 | 385710 | [
"r",
"logistic",
"ordered-logit",
"polr"
] |
612923 | 1 | null | null | 0 | 23 | Despite my searching, I cannot find a clear answer to the question:
Do dynamic logistic regression models require data points to be temporally independent?
An answer, a hint, and/or a reference would be appreciated, if anyone knows. Thank you.
| Do dynamic logistic models require data points to be temporally independent? | CC BY-SA 4.0 | null | 2023-04-14T12:28:39.197 | 2023-04-14T12:45:52.660 | 2023-04-14T12:45:52.660 | 11233 | 11233 | [
"logistic",
"time-varying-covariate"
] |
612924 | 2 | null | 612904 | 21 | null | The referenced book has a free pdf on Boyd's site: [https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf](https://web.stanford.edu/%7Eboyd/cvxbook/bv_cvxbook.pdf)
On page 90, formula 3.17 gives this definition. I suspect the reason for the added terms is that in convex optimization, the two vectors needn't be probability distributions; the authors say
>
Note that the relative entropy and the Kullback-Leibler divergence
are the same when $u$ and $v$ are probability vectors
When they are, in the sum the extra terms cancel. But when they aren't, the added terms ensure that the total is nonnegative.
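A small numerical illustration of both cases, using `scipy.special.kl_div` together with `scipy.special.rel_entr` (which omits the extra terms):

```python
import numpy as np
from scipy.special import kl_div, rel_entr

# Probability vectors: the extra -p_i + q_i terms sum to zero,
# so the two totals agree
p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
print(kl_div(p, q).sum(), rel_entr(p, q).sum())  # equal

# Arbitrary nonnegative vectors: the plain sum of p_i*log(p_i/q_i)
# can be negative, but the extra terms keep kl_div nonnegative
u = np.array([0.1, 0.2])
v = np.array([0.5, 0.9])
print(rel_entr(u, v).sum())  # negative
print(kl_div(u, v).sum())    # nonnegative
```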
| null | CC BY-SA 4.0 | null | 2023-04-14T12:46:50.790 | 2023-04-14T13:00:53.650 | 2023-04-14T13:00:53.650 | 232706 | 232706 | null |
612925 | 1 | null | null | 1 | 30 | Research Problem
We are trying to help universities recruit as many students as possible for different projects for a good cause. To be eligible for any project offered by their university, students can self-register online via a brief form that asks for contact information and some more data. Some questions are mandatory, others aren’t. We see that a lot of students drop out while filling in the form. (BTW this is not my real research problem, but will serve as a good analogy).
So our hypothesis is:
The number (and type) of questions influence the drop out rate above and beyond other predictors. The more (mandatory) fields, the lower the probability of a student submitting the form.
Data Structure
Our research question seems quite simple, but the structure of our data has some pitfalls:
- Each row is a student that opened an application form with their respective university (e.g. city), project (e.g. project category) and individual level variables (e.g. sex, device they opened the form with)
- The two predictor variables of interest are on yet another level, the application form level (see details below). These are the only numeric predictors, all confounding variables are categorical.
- The outcome is whether the student submitted the application form or not (Boolean)
- The data is hierarchical:
We have a total of ~80 universities with a total of ~4,000 different projects and ~30,000 rows (students).
All distributions are skewed: Some universities have hundreds of projects, some have only two. Some projects have hundreds of student applicants, some have less than 10.
We know from exploratory data analysis that within each university and project, submission rates do share a high proportion of variance so we want to consider that.
- The application forms do not vary a lot between universities / projects and if they do, they are mostly the same within any university:
Each university can have their own application form, but many use the same default form that contains only 4 mandatory fields
Most universities do use one form for all student recruitment, but some use different form types for different project categories
This means that also in the predictor variables, we tend to have unbalanced classes / little variance in the no. of fields
So overall, we have a lot of multicollinearity in the data if we use it row-wise.
Question / Current Approach
Our question is IF AND HOW we can robustly measure the effect of the number of fields on the outcome (students submits form or not) in such a way that it does not only reflect the differences between the universities (or projects).
We were thinking about either including university and project IDs as confounding variables into our logistic regression model, or hierarchical modelling. But is this even possible with so many different universities and projects?
Alternatively, we could aggregate the data on project level and compute dummy variables for the individual level variables, e.g. “ratio_male” = 0.27. We could then use the submission rates per project as the outcome variable for a (weighted) linear regression. But wouldn’t that make us lose lots of information? And in any case, the data would still be hierarchical (projects nested in universities)…
Thanks so much in advance for any input!
| Logistic regression for hierarchical / nested (3 level) data | CC BY-SA 4.0 | null | 2023-04-14T12:51:09.080 | 2023-04-14T12:51:09.080 | null | null | 363995 | [
"logistic",
"multilevel-analysis",
"multicollinearity",
"weighted-regression"
] |
612926 | 2 | null | 388011 | 1 | null | I find this resource very helpful, it also contains methods for cross-sectional nested data:
[https://methodenlehre.github.io/intro-to-rstats/hierarchical-linear-models.html](https://methodenlehre.github.io/intro-to-rstats/hierarchical-linear-models.html)
| null | CC BY-SA 4.0 | null | 2023-04-14T13:00:45.077 | 2023-04-14T13:00:45.077 | null | null | 363995 | null |
612927 | 1 | null | null | 1 | 25 | [](https://i.stack.imgur.com/5wb2m.png)
The agent has two actions, a0 and a1, whose effects in each state σ0; . . . ; σ3 are described in Figure 1. The edges from actions are labeled with the probability that this transition occurs. For example, Pr[st+1 = σ2 | st = σ0; at = a1] = 1; similarly, Pr[st+1 = σ0 | st = σ1, at = a0] = 1-p. If there is no edge from a state to
an action, that action is not allowed in that state. Thus, choosing either a0 or a1 in σ3 is not allowed ,and σ3 is a sink state; similarly, action a1 cannot be taken in state σ1. The rewards in each state are action-independent, and are r(σ0) = r(σ2) = 0; r(σ1) = 1; r(σ3) = 10.
Q1) How many possible (deterministic) policies are there for this MDP? When counting, ignore
“degenerate” actions, i.e. ones that are not allowed in a given state.
Doubt - Is the policy choosing a0 at σ0 and a1 at σ2 a deterministic policy? If yes, how do I write it mathematically as we have transition probabilities included with 2 states? Also, how do I write the value function for this policy?
| Can two states have different actions in a deterministic policy? How to specify states which have probability linked with them in the policy? | CC BY-SA 4.0 | null | 2023-04-14T13:01:23.013 | 2023-04-14T13:18:46.967 | 2023-04-14T13:18:46.967 | 385714 | 385714 | [
"reinforcement-learning",
"markov-decision-process",
"deterministic",
"deterministic-policy"
] |
612929 | 1 | null | null | 0 | 60 | $x_t=(1+\theta L)\varepsilon_t$, where $\varepsilon_t \sim \text{iid } N(0,\sigma^2)$.
What is the long-run variance of $x_t$? Does it depend on $\theta$? If so, how?
| What is the long-run variance of xt? Does it depend on θ? If so, how? | CC BY-SA 4.0 | null | 2023-04-14T13:31:44.987 | 2023-04-14T15:57:22.343 | 2023-04-14T14:23:00.067 | 385717 | 385717 | [
"time-series",
"variance"
] |
612930 | 2 | null | 612904 | 26 | null | The other answer tells us why we don't usually see the $-p_i+q_i$ term: $p$ and $q$ are usually residents of the simplex and so sum to one, so this leads to $\sum - [p_i - q_i] = \sum - p_i + \sum q_i = -1 + 1 = 0$.
In this answer, I want to show why those terms are there in the first place, by viewing KL divergence as the [Bregman divergence](https://en.wikipedia.org/wiki/Bregman_divergence) induced by the (negative) [Entropy](https://en.wikipedia.org/wiki/Entropy_(information_theory)) function.
Given some differentiable function $\psi$, the Bregman divergence induced by it is a binary function on the domain of $\psi$:
$$
B_\psi(p,q) = \psi(p)-\psi(q)-\langle\nabla\psi(q),p-q\rangle
$$
Intuitively, the Bregman divergence measures the difference between $\psi$ evaluated at $p$ and the linear approximation to $\psi$ (about $q$) evaluated at $p$. When $\psi$ is convex, this is guaranteed to be nonnegative, and thus so is the Bregman divergence.
Noting that if $\psi(p) = \sum_i p_i \log p_i$, $\nabla\psi(p) = [\log p_i + 1]$, the entropic Bregman divergence is thus:
$$
B_e(p,q) = \sum_i p_i \log p_i - \sum_i q_i \log q_i - \sum_i [\log q_i + 1][p_i-q_i]\\
= \sum_i p_i \log p_i - \sum_i q_i \log q_i - \sum_i [\log q_i (p_i-q_i) + p_i-q_i]\\
= \sum_i p_i \log p_i - \sum_i q_i \log q_i - \sum_i p_i \log q_i + \sum_i q_i\log q_i - \sum_i[p_i-q_i]\\
= \sum_i p_i \log p_i - \sum_i p_i \log q_i - \sum_i[p_i-q_i]\\
= \sum_i p_i \log \frac{p_i}{q_i} + \sum_i[-p_i+q_i]
$$
which we recognize as the KL divergence you mentioned.
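A quick numerical check of this identity (the `bregman_neg_entropy` function name is mine; `kl_div` is SciPy's):

```python
import numpy as np
from scipy.special import kl_div

def bregman_neg_entropy(p, q):
    """Bregman divergence induced by psi(x) = sum_i x_i log x_i."""
    psi = lambda x: np.sum(x * np.log(x))
    grad_q = np.log(q) + 1.0            # gradient of psi at q
    return psi(p) - psi(q) - grad_q @ (p - q)

# Arbitrary positive vectors (deliberately not probability vectors)
p = np.array([0.3, 1.2, 0.7])
q = np.array([0.5, 0.9, 1.1])

# Matches SciPy's generalized KL divergence
print(bregman_neg_entropy(p, q), kl_div(p, q).sum())
```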
| null | CC BY-SA 4.0 | null | 2023-04-14T13:34:43.417 | 2023-04-16T19:00:20.893 | 2023-04-16T19:00:20.893 | 82893 | 82893 | null |
612931 | 1 | null | null | 0 | 29 | I have been studying linear models recently and I'm confused why $cov(Y) = cov(\epsilon)$ holds for $Y = X\beta + \epsilon$. This was just kinda assumed in my course notes
[](https://i.stack.imgur.com/olug4.png)
I was looking at this section specifically
| Covariance of linear models with Second Order Assumptions | CC BY-SA 4.0 | null | 2023-04-14T13:40:18.150 | 2023-04-15T12:27:12.827 | 2023-04-15T12:27:12.827 | 385718 | 385718 | [
"linear-model",
"covariance"
] |
612932 | 1 | null | null | 0 | 5 | I'm learning about recommendation systems and I'm trying to build one using an item-based collaborative filtering approach. I have a dataset in which the rows correspond to the items and the columns to the users. The matrix is filled with the ratings [1, 5] of the items. I chose the sklearn.neighbors.NearestNeighbors class to find the cosine similarities of the items.
If I'm right, this dataset doesn't have labels. In other words, it is an unsupervised task (clustering). I would like to know if I have to do the train/test split in this case. I'm asking because in all the coding videos I've watched on YouTube people don't do this, but in some theory videos they say this step of the pipeline should be done.
Thanks!
| Item based collaborative filtering. Since is an unsupervised task should I do training/test split or not? | CC BY-SA 4.0 | null | 2023-04-14T13:40:45.127 | 2023-04-14T13:40:45.127 | null | null | 369891 | [
"machine-learning",
"recommender-system",
"train-test-split"
] |
612933 | 1 | null | null | 2 | 41 | I am thinking about a problem in which features come in batches, at different moments in time and in different environments.
My current problem set is the following:
Let's say you have n datasets with an arbitrary number of features. Is it possible to tell the boosting optimization to use only the first dataset in the first k estimators, and then to use all the features from datasets 1 and 2 for estimators k to k+p?
| Is there any gradient boosting package that allows controlling the features set used in each estimator? | CC BY-SA 4.0 | null | 2023-04-14T13:41:26.663 | 2023-04-23T19:04:33.270 | null | null | 220627 | [
"machine-learning",
"boosting"
] |
612934 | 1 | 612936 | null | 0 | 46 | Yes, this has been asked before [here](https://stats.stackexchange.com/questions/276526/why-is-the-expectation-step-in-the-em-algorithm-called-this-way), but for different reasons. In the E-step nothing is calculated; we simply define the function, and once it is defined it is defined once and for all. We could even define it in a way such that it would not need to be redefined in each iteration, by accepting a second parameter. This makes it look like the EM algorithm is a two-step algorithm, while all it does is maximise the expectation in each iteration. I ask this question because I am not sure whether I misunderstood something, whether the description and presentation of the algorithm really is that bad, or whether there is a historical (or other) reason for the chosen description.
| Why is the E-step in the EM algorithm called this way? | CC BY-SA 4.0 | null | 2023-04-14T13:51:00.767 | 2023-04-14T14:18:07.277 | null | null | 367053 | [
"maximum-likelihood",
"terminology",
"expectation-maximization"
] |
612935 | 1 | null | null | 1 | 43 | Suppose that we have a random vector $X \in \Bbb R^n$ and a random variable $Y \in \Bbb R$, and that the joint density $f(x, y)$ is known. For a given $x \in \Bbb R^n$, what is the most efficient way in the statistics community to compute (or approximate) $E[Y| X = x]$?
| Computation of conditional expectation | CC BY-SA 4.0 | null | 2023-04-14T14:15:06.190 | 2023-04-14T14:15:06.190 | null | null | 175565 | [
"machine-learning",
"mathematical-statistics",
"monte-carlo",
"conditional-expectation",
"density-estimation"
] |
612936 | 2 | null | 612934 | 3 | null | Let's use an example of a Gaussian mixture model. The model describes the distribution of your data in terms of $k$ clusters, such that each cluster is normally distributed. The model is
$$
f(x; \mu, \sigma,\pi) = \sum_{i=1}^K \pi_i\,\mathcal{N}(x\mid\mu_i,\sigma_i)
$$
where $\mu = (\mu_1, \mu_2, \dots, \mu_K)$ is the vector of means, $\sigma = (\sigma_1, \sigma_2, \dots, \sigma_K)$ is the vector of variances, and $\pi = (\pi_1, \pi_2, \dots, \pi_K)$ is the vector of mixing proportions such that $\forall_i\,\pi_i \ge 0$ and $\sum_{i=1}^K \pi_i = 1$.
When we want to estimate the parameters of such a model, there is a chicken-and-egg problem: we could easily estimate the $\mu_i, \sigma_i,\pi_i$ per each cluster only if we knew to which cluster each observation belongs, and we could find out to which cluster the observation belongs if we knew the parameters.
Here the [E-M algorithm comes into play](http://alexminnaar.com/2017/05/22/EM-Algorithm-and-Mixture-of-Gaussians-Clustering.html). You start with randomly initialized parameters and then iterate between the two steps. In the E step, you use the parameters to find the probabilities of each sample belonging to each cluster. In the M step, you use those probabilities as weights to estimate and update the parameters. The iterations are repeated until the estimates stop changing. In each step, you calculate something that wouldn't be possible to calculate without taking the other step.
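To make the two steps concrete, here is a minimal E-M sketch for a 1-D, two-component Gaussian mixture (the toy data and initialization are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: two 1-D Gaussian clusters (true means -2 and 3)
x = np.concatenate([rng.normal(-2, 0.7, 300), rng.normal(3, 1.0, 200)])

# Rough initialization of the K = 2 component parameters
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

def normal_pdf(t, m, s):
    return np.exp(-0.5 * ((t - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(50):
    # E step: responsibility of each component for each point
    dens = pi * normal_pdf(x[:, None], mu, sigma)   # shape (n, K)
    resp = dens / dens.sum(axis=1, keepdims=True)

    # M step: responsibility-weighted parameter updates
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(x)

print(np.sort(mu))   # close to the true means (-2, 3)
print(np.sort(pi))   # close to the true proportions (0.4, 0.6)
```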
| null | CC BY-SA 4.0 | null | 2023-04-14T14:18:07.277 | 2023-04-14T14:18:07.277 | null | null | 35989 | null |
612939 | 1 | 612946 | null | 1 | 31 | I'm running/ reporting lmer analysis for the first time and have a hypothetical question related to a few of my models. I'm using R lmer package - my models are generally of the formula:
```
model=lmer(dependantVar ~ continuousVar*catagoricalVar+ (1|Subject), data=data, REML = FALSE);
anova(model)
```
In some instances, I have an interaction (sig p value) - this is fairly straightforward as I interpret that and "ignore" the main effects.
However, when I have no interaction but a main effect of the continuousVar only, I'm not sure what to do in terms of reporting the findings. Do I report the main-effect estimates from the model above, or do I refit the model without the interaction terms first? i.e.
```
modelreduced=lmer(dependantVar ~ continuousVar + (1|Subject), data=data, REML = FALSE);
```
or
```
modelreduced2=lmer(dependantVar ~ continuousVar + catagoricalVar + (1|Subject), data=data, REML = FALSE);
```
and report these estimates.
It's not as if they change a great deal, but I'm trying to gauge from the literature what people do, and there seem to be examples of both. Although I feel like leaving the interaction in is incorrect, I have come across instances of it being reported.
Any pointers to the "proper way" (if there is one) would be appreciated.
I should say that my main interest is the effect of continuousVar on dependantVar, but I did believe
catagoricalVar may affect the steepness of the slopes, even by a small amount, so I wanted to capture this in the model. So even if it's not significant (if we defer to p values here), should I leave it in anyway?
Also to note, whilst there is a slight reduction in AIC values:
```
lrtest(model,modelreduced)
lrtest(model,modelreduced2)
```
suggests no significant difference in these models.
Thank you.
| No significant interaction in lmer - do you refit the model without interaction? | CC BY-SA 4.0 | null | 2023-04-14T14:53:30.157 | 2023-04-14T15:50:26.710 | 2023-04-14T15:50:26.710 | 314255 | 314255 | [
"lme4-nlme",
"interaction",
"linear-model",
"fitting"
] |
612940 | 1 | 613167 | null | 3 | 49 | Suppose that four variables of $X$, $Y$, $L$, and $C$ have the following relationships in the form of directed acyclic graph.
[](https://i.stack.imgur.com/h4XmE.png)
$X$, $Y$, and $C$ are observable variables while $L$ is a latent (unobservable) variable. If one is interested in modeling the relationship between $X$ and $Y$ through, for example, regression, is it reasonable to include $C$ as a covariate and condition the $X$-$Y$ association on $C$? Alternatively, how about when one is interested in the association between $C$ and $Y$?
| Causal modeling in the presence of a latent variable | CC BY-SA 4.0 | null | 2023-04-14T14:55:22.780 | 2023-04-16T23:45:08.783 | 2023-04-14T15:21:01.977 | 76484 | 1513 | [
"regression",
"causality",
"latent-variable",
"dag",
"causal-diagram"
] |
612941 | 2 | null | 101560 | 0 | null | Generally speaking, $\tanh$ has two main advantages over a sigmoid function:
- It has a slightly bigger derivative than the sigmoid (at least for the area around 0), which helps it to cope a bit better with the “vanishing gradients” problem of deep neural networks. Here is a plot of the derivatives of both functions:
[](https://i.stack.imgur.com/06Fc1.png)
- It is symmetric around 0, which helps it avoid the “bias shift” problem that sigmoids suffer from (which causes the weight vectors to move in diagonals, or “zig-zag”, which slows down learning).
The sigmoid has one main advantage over $\tanh$: it can represent a binary probability, and hence can be used as the output of the final layer in binary classification problems.
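As a quick numeric check of the derivative gap mentioned in the first point (a small sketch, not part of the original answer):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_deriv(x):
    s = sigmoid(x)
    return s * (1.0 - s)            # sigma'(x) = sigma(x) * (1 - sigma(x))

def tanh_deriv(x):
    return 1.0 - math.tanh(x) ** 2  # tanh'(x) = 1 - tanh(x)^2

# At 0, tanh's derivative (1.0) is four times sigmoid's (0.25),
# so gradients shrink more slowly through stacked tanh layers.
print(sigmoid_deriv(0.0))  # 0.25
print(tanh_deriv(0.0))     # 1.0
```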
You can check out [this video I made on YouTube](https://youtu.be/S7nr9YnFE30) which explains a bit further about these problems.
Elaboration on the bias shift problem:
Consider activation functions, like sigmoids, that only output positive values. Now let’s focus on a single layer’s activations $a_l$, and look at the weight vector feeding the first neuron of the next layer: $z_{(l+1),1}=W_{l,1}\cdot a_l + b_{l,1}$.
[](https://i.stack.imgur.com/Wv5AD.png)
The gradient w.r.t. this vector will be (by the chain rule) $a_l \cdot \frac{\partial \mathcal L}{\partial z_{(l+1),1}}$: the gradient up to $z_{(l+1),1}$ (a scalar) times the gradient of $z_{(l+1),1}$ w.r.t. $W_{l,1}$, which is just $a_l$.
We know that the $a_l$ neurons are all $\ge 0$, so the updates to $W_{l,1}$ depend only on $sign(\frac{\partial \mathcal L}{\partial z_{(l+1),1}}).$
This means that all elements of the vector either increase together or decrease together $\Rightarrow$ it can only move in zig-zag / diagonals, which is not very efficient.
This is sometimes called the “bias shift” problem. It also happens when the activations output values that are far from 0 (though to a lesser extent).
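A tiny numeric sketch of the sign argument (hypothetical values, just for illustration):

```python
import random

random.seed(0)

# Sigmoid-like activations: strictly positive outputs of the previous layer.
a = [random.uniform(0.1, 1.0) for _ in range(5)]

for dL_dz in (0.7, -0.3):  # upstream scalar gradient, one of each sign
    grad_W = [ai * dL_dz for ai in a]  # chain rule: dL/dW = a * dL/dz
    signs = {1 if g > 0 else -1 for g in grad_W}
    # Every component of the weight gradient shares the sign of dL/dz,
    # so the weight vector can only move diagonally ("zig-zag").
    print(dL_dz, signs)
```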
| null | CC BY-SA 4.0 | null | 2023-04-14T15:15:46.940 | 2023-04-14T15:15:46.940 | null | null | 117705 | null |
612943 | 1 | null | null | 3 | 55 | I have just calculated an odds ratio between the variable vaccine group (experimental and control) and the variable flu (got the flu or did not get the flu). I found the odds ratio to be 0.25. Is it correct to interpret this in the following way?
The odds of developing influenza are 0.25 times less for individuals who had the experimental vaccine than for individuals who had the control vaccine.
If this is incorrect, what is a proper way to write this sentence?
| How to interpret a negative odds ratio? | CC BY-SA 4.0 | null | 2023-04-14T15:17:51.317 | 2023-04-14T16:46:12.477 | 2023-04-14T15:43:21.517 | 383610 | 383610 | [
"probability",
"interpretation",
"odds-ratio"
] |
612944 | 2 | null | 612940 | 4 | null | In your first scenario, you certainly can include $C$ on the RHS of your linear regression model, but you shouldn't need to if your DAG is correct. If your DAG is correct, then $C$ does not cause $Y$ in any way, and is not a confounder (does not set up a back door path). All of the causal effect of $X$ on $Y$ is mediated through $L:$ that's fine. There's nothing special you need to do, just regress $Y$ on $X.$
In your second scenario - finding the association between $C$ and $Y$ - you have a problem: $L$ is a confounding variable. Moreover, if you have no other variables present anywhere, then I don't see a way to estimate the causal effect of $C$ on $Y.$ $X$ is not an instrument, because if it were, it would have to be pointing into $C$ and nothing else. You're not even really sure that $C$ has a causal effect on $Y.$ Technically, of course, your DAG says it doesn't. I'd say you're stuck here.
| null | CC BY-SA 4.0 | null | 2023-04-14T15:20:41.013 | 2023-04-14T15:20:41.013 | null | null | 76484 | null |
612945 | 1 | null | null | 1 | 19 | In [this example](https://stats.oarc.ucla.edu/r/faq/how-can-i-calculate-standard-errors-for-variance-components-from-mixed-models/):
what is the third term in `attr(,"Pars")` within the `apVar` of the `lme` object (`model.c$apVar` in this example)? I know it is related to the correlation of the two random effects, but how is this calculated? How can I calculate this manually from the correlation in the output of `VarCorr(model.c)`?
| What's exactly inside the attribute 'Pars' in apVar of the lme object in R | CC BY-SA 4.0 | null | 2023-04-14T15:26:00.797 | 2023-04-14T15:27:00.367 | 2023-04-14T15:27:00.367 | 224302 | 224302 | [
"mixed-model",
"lme4-nlme",
"covariance-matrix"
] |
612946 | 2 | null | 612939 | 1 | null | Chapter 4 of Frank Harrell's [Regression Modeling Strategies](https://hbiostat.org/rmsc/multivar.html) covers that and many other issues relevant here. The general strategy is to decide on how many coefficient values you can try to estimate without overfitting ("degrees of freedom"), decide how many degrees of freedom to spend on each predictor (including interactions), then spend those degrees of freedom in the model and report the results.
Thus it's best to decide beforehand on whether to include an interaction, based on your understanding of the subject matter. If you decide to include an interaction, then report the model with the interaction. If you later refit the model based on the "significance" results of the interaction term, then you've used the outcome to choose the (new) model in a way that violates the assumptions behind significance testing.
There is, however, another place where you might want to spend more degrees of freedom: your modeling of `continuousVar`. As Harrell explains, a strictly linear association of a continuous predictor with outcome is unlikely to hold in practice. A more flexible model, like a regression spline, might be called for; you would include an interaction of the spline with `categoricalVar`. That would be wise, if your data set is large enough to allow you to spend more degrees of freedom. Again, you would pre-specify the model with the spline and report the results, not re-fitting even if the nonlinearity is "not statistically significant."
| null | CC BY-SA 4.0 | null | 2023-04-14T15:45:23.860 | 2023-04-14T15:45:23.860 | null | null | 28500 | null |
612948 | 2 | null | 612929 | 0 | null | Hi Jefferson: It's best to write out the expression without the lag operator
and calculate the variance directly using properties of variance. The MA(1) is
:
$x_t = \epsilon_t + \theta \epsilon_{t-1} $
The variance of the first term is $\sigma^2$ and the variance of the second term is $\theta^2 \sigma^2$. There is no covariance between the two terms so they just add and one obtains $\sigma^2 + \theta^2 \sigma^2 = \sigma^2(1+\theta^2)$. If you're not familar with the steps, it's probably best to take a
mathematical statistics class where these concepts are covered.
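A quick simulation check of this result (a sketch with arbitrary values of $\theta$ and $\sigma$, not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma = 0.6, 1.0
n = 1_000_000

eps = rng.normal(0.0, sigma, size=n + 1)
x = eps[1:] + theta * eps[:-1]   # MA(1): x_t = eps_t + theta * eps_{t-1}

print(x.var())                     # sample variance, close to...
print(sigma**2 * (1 + theta**2))   # ...the theoretical value sigma^2 (1 + theta^2)
```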
| null | CC BY-SA 4.0 | null | 2023-04-14T15:57:22.343 | 2023-04-14T15:57:22.343 | null | null | 64098 | null |
612949 | 1 | null | null | 0 | 11 | I would like to simulate a dataset of a Discrete Choice Experiment (DCE) that would be analysed with a simple multinomial logit model. For example, the DCE could concern consumers preferences for two different food items with different attributes (e.g., calories, fat content, price). I was thinking to use a Monte Carlo simulation, but on the forum I have read different suggestion. Any help?
Also I was thinking to include some interaction term such as ATTRIBUTE X AGE, ATTRIBUTE X GENDER.
| Simulating DataSet for a DCE | CC BY-SA 4.0 | null | 2023-04-14T16:12:28.717 | 2023-04-14T16:25:31.707 | 2023-04-14T16:25:31.707 | 362671 | 385728 | [
"multinomial-logit"
] |
612950 | 2 | null | 47590 | 0 | null | There are two distinct ideas in this heuristic:
- Initialize the weights to be small - in addition to Douglas Zare's excellent answer about sigmoid activations, the problem is more general. Even when the gradients are of "good" magnitude (e.g., using ReLU activations), training is hampered by big weights. Think about 2 neurons whose true weights should be $(3, -2)$. If you initialize them close to $0$, the maximal "distance" the weights have to traverse is roughly $3.6$ (Euclidean distance; $5$ in Manhattan distance), whereas if you initialize them e.g. from $U(-3,3)$ you run the risk that, in the worst case, the initial weights will be set to $(-3,3)$, in which case the distance the weights have to traverse is roughly $7.8$ (Euclidean; $11$ Manhattan).
- Keep the variance of each weight $\propto \frac{1}{d}$ - the input to the next layer will then have variance $\propto 1$ (as it is a sum of $d$ neurons times their respective weights). Why do we want this? We want to keep the magnitude of the inputs to the layers the same: we don't want inputs from a layer with many hidden units to be much bigger than inputs from a layer with fewer hidden units. If we add many inputs, we want the weights to be relatively smaller in magnitude, and if we're adding fewer inputs we want them to be larger.
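The second point can be sketched numerically (my own illustration of the heuristic, not from the original answer): drawing weights from $U(-\frac{1}{\sqrt d}, \frac{1}{\sqrt d})$ gives each weight variance $\frac{1}{3d}$, so a neuron summing $d$ unit-variance inputs has pre-activation variance around $\frac{1}{3}$ regardless of $d$:

```python
import numpy as np

rng = np.random.default_rng(7)

for d in (10, 100, 1000):
    a = rng.normal(size=(10_000, d))        # d unit-variance inputs, 10k samples
    limit = 1.0 / np.sqrt(d)
    w = rng.uniform(-limit, limit, size=d)  # U(-1/sqrt(d), 1/sqrt(d))
    z = a @ w                               # pre-activation of one neuron
    # Var[w_i] = (2*limit)^2 / 12 = 1/(3d), so Var[z] ~ d * 1 * 1/(3d) = 1/3
    print(d, z.var())
```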
The $\frac{1}{3}$ variance constant in the $U(-\frac{1}{\sqrt d}, \frac{1}{\sqrt d})$ heuristic is actually problematic. To keep information flowing, we would like $\mathbb V[a_l] = \mathbb V[a_{l-1}]$, i.e. the variance of the activation inputs / outputs to stay more or less the same across layers, and similarly for backprop, $\mathbb V[\frac{\partial \mathcal L}{\partial z_l}] = \mathbb V[\frac{\partial \mathcal L}{\partial z_{l+1}}]$, i.e. the variance of the backpropagated derivatives to stay more or less the same. The $\frac{1}{3}$ factor gets in our way. You can see this in the following graphs (taken from the original [Xavier Glorot init paper](https://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf)):
[](https://i.stack.imgur.com/vrN3V.png)
Here you can see that the activations follow $\mathbb V[a_l] \approx \frac{1}{3}\mathbb V[a_{l-1}] $. And:
[](https://i.stack.imgur.com/wsu6K.png)
Here you can see that the derivatives follow $\mathbb V[\frac{\partial \mathcal L}{\partial z_l}] \approx \frac{1}{3}\mathbb V[\frac{\partial \mathcal L}{\partial z_{l+1}}]$
In both cases - the narrow parts are bad: in the forward pass (activations) it means each neuron is basically calculating the same thing, and also that we are not really taking advantage of the activation function non-linearity ($\tanh$ were used in this network); in the backprop it means we are not really learning in the early layers.
Xavier Glorot init fixed this by changing the distribution to $U(-\frac{\sqrt 3}{\sqrt n}, \frac{\sqrt 3}{\sqrt n})$, which eliminates the $\frac{1}{3}$ factor. Also, since we care about both the previous layer's number of neurons (for the forward pass) and the next layer's number of neurons (for backprop), Xavier Glorot init uses their harmonic mean as a compromise. Using this, the same network now looks like:
[](https://i.stack.imgur.com/sveqU.png)
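A minimal sketch of a Glorot/Xavier-style uniform initializer (my own illustration, with linear layers so the variance check stays clean):

```python
import numpy as np

rng = np.random.default_rng(3)

def glorot_uniform(fan_in, fan_out):
    # U(-sqrt(6/(fan_in+fan_out)), +sqrt(6/(fan_in+fan_out))):
    # gives Var[w] = 2/(fan_in+fan_out), a compromise between preserving
    # forward-pass variance (1/fan_in) and backward-pass variance (1/fan_out).
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

x = rng.normal(size=(10_000, 256))
h = x
for _ in range(10):                  # 10 stacked 256-unit linear layers
    h = h @ glorot_uniform(256, 256)

print(np.std(x), np.std(h))          # activation scale is roughly preserved
```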
If you want to learn more, check out my YouTube videos on the topic: [Part 1](https://youtu.be/eoNVmZDnn9w) and [Part 2](https://youtu.be/1Rf7BVQ-z0M).
| null | CC BY-SA 4.0 | null | 2023-04-14T16:16:44.173 | 2023-04-14T16:16:44.173 | null | null | 117705 | null |
612951 | 1 | null | null | 0 | 28 | I'm running GLM(M)s on proportional data ([0,...,1] ) using a binomial family and weighted to number of trials.
```
ProportionFlowertoPod_Site.b = glmmTMB(PropFlowtoPod ~ Site_ID,
                                       family = binomial,
                                       weights = RepEffort,
                                       data = RepSucPodPlant)
```
Where PropFlowtoPod is proportion of flowers on an inflorescence that developed into pods, site is a categorical factor, and RepEffort is trials per inflorescence. I run a similar GLMM where range position is a categorical factor, two levels; edge/core.
```
ProportionFlowertoPod_RangePos_site.b = glmmTMB(PropFlowtoPod ~ RangePosition + (1|Site_ID),
                                                family = binomial,
                                                weights = RepEffort,
                                                data = RepSucPodPlant)
```
I've tried the following dispersion tests: `testDispersion`, `check_overdispersion`, and Bolker's `overdisp_fun`, but I get conflicting values.
```
testDispersion(ProportionFlowertoPod_RangePos_site.b)  # Not overdispersed

	DHARMa nonparametric dispersion test via sd of residuals fitted vs. simulated

data:  simulationOutput
dispersion = 1.3178, p-value = 0.208
alternative hypothesis: two.sided

check_overdispersion(ProportionFlowertoPod_RangePos_site.b)

	Overdispersion test

dispersion ratio = 2.388
Pearson's Chi-Squared = 902.750
p-value = < 0.001

Overdispersion detected.
```
I'm pretty sure DHARMa's testDispersion test is the only one that works on binomial GLMMs, but does it work for GLMs? And does it work for weighted binomial GLM(M)s?
| Overdispersion tests for weighted binomial GLM(M)s | CC BY-SA 4.0 | null | 2023-04-14T16:17:47.600 | 2023-04-14T16:17:47.600 | null | null | 385727 | [
"generalized-linear-model",
"binomial-distribution",
"glmm",
"overdispersion"
] |