Id stringlengths 1 6 | PostTypeId stringclasses 7 values | AcceptedAnswerId stringlengths 1 6 ⌀ | ParentId stringlengths 1 6 ⌀ | Score stringlengths 1 4 | ViewCount stringlengths 1 7 ⌀ | Body stringlengths 0 38.7k | Title stringlengths 15 150 ⌀ | ContentLicense stringclasses 3 values | FavoriteCount stringclasses 3 values | CreationDate stringlengths 23 23 | LastActivityDate stringlengths 23 23 | LastEditDate stringlengths 23 23 ⌀ | LastEditorUserId stringlengths 1 6 ⌀ | OwnerUserId stringlengths 1 6 ⌀ | Tags list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
611103 | 4 | null | null | 0 | null | AdaDelta is a gradient descent variant introduced in "Adadelta: An Adaptive Learning Rate Method" by Zeiler in 2012. Like similar variants, it adapts the global learning rate to each parameter using information associated with that specific parameter. It is mainly used in training neural networks and is implemented in both Keras and PyTorch. | null | CC BY-SA 4.0 | null | 2023-03-29T11:07:05.250 | 2023-03-29T11:53:26.083 | 2023-03-29T11:53:26.083 | 117705 | 117705 | null |
611106 | 1 | null | null | 1 | 54 | Here's my idea and my doubt:
There are two purposes of machine learning: inference and prediction. In prediction, we are interested in finding a model that gives us the best accuracy when we forecast a new data point. In inference, the idea is to understand the relationship between input variables and output variables.
However, for a particular problem, we assume there is a single underlying data-generating process. In that case, wouldn't fitting different models for inference and prediction be a violation of this assumption? For example, we can fit a linear regression for inference and a normal distribution-based model for prediction, but the underlying DGP can only be one (in the best possible scenario). In that case, does it make sense to do so?
| Do models differ for prediction and inference for same training data? | CC BY-SA 4.0 | null | 2023-03-29T11:27:35.307 | 2023-04-03T07:50:02.127 | 2023-04-03T04:54:46.847 | 11887 | 266998 | [
"machine-learning",
"predictive-models",
"inference"
] |
611107 | 1 | 611549 | null | 1 | 126 | I am trying to perform an a priori power analysis to estimate the sample size for a Poisson regression model. The background is that an RCT is proposed to compare the rate of pill consumption between two methods (standard vs new) of administering drugs. Suppose that in the standard method people usually consume 2 pills per day, and we want to detect a 10% reduction in pill consumption using the new method. I believe that we specify the base rate (exp(B0)) as 2 and then exp(B1) = 0.9. We also set desired power = 0.8 and alpha = 0.05. However, I have noticed that the G*Power software requires the "mean exposure" to be entered. I am unsure of what this is, and it is not mentioned in other software, e.g. [https://rdrr.io/cran/WebPower/man/wp.poisson.html](https://rdrr.io/cran/WebPower/man/wp.poisson.html)
In this video tutorial:
[https://www.youtube.com/watch?v=OJvDiQUI56c](https://www.youtube.com/watch?v=OJvDiQUI56c)
it seems to suggest that mean exposure represents how long the study will last. However, that doesn't seem quite right to me.
In the G*Power manual, page 76,
[https://www.psychologie.hhu.de/fileadmin/redaktion/Fakultaeten/Mathematisch-Naturwissenschaftliche_Fakultaet/Psychologie/AAP/gpower/GPowerManual.pdf](https://www.psychologie.hhu.de/fileadmin/redaktion/Fakultaeten/Mathematisch-Naturwissenschaftliche_Fakultaet/Psychologie/AAP/gpower/GPowerManual.pdf)
it gives an example of power calculation for Poisson regression looking at rates of infection in swimmers over a whole season (i.e. many days) and they simply specify the mean exposure as 1 (presumably meaning 1 season?)
Here is a screen capture of my set up in G*Power
[](https://i.stack.imgur.com/D1124.png)
With this setup, the estimated sample size is 1175. However, if the advice in the YouTube video (above link) is correct, I'm worried that maybe I need to change the mean exposure to e.g. 365 if we record the pill consumption rate every day for a whole year.
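For intuition about what "mean exposure" does, here is a rough Wald-approximation power sketch (a Python illustration of my own, not G*Power's exact procedure, so its numbers will differ somewhat from G*Power's output). The key point it demonstrates is that the expected count per subject is the product of the rate and the exposure, so only that product matters:

```python
import math
from statistics import NormalDist

def poisson_wald_power(n, base_rate, rate_ratio, exposure, alpha=0.05):
    """Approximate power of a two-group Wald test for the log rate ratio in a
    Poisson model, with n subjects split equally between the two groups.
    The expected count per subject is rate * exposure."""
    z = NormalDist()
    n_group = n / 2.0
    mu0 = base_rate * exposure                 # expected count, standard method
    mu1 = base_rate * rate_ratio * exposure    # expected count, new method
    se = math.sqrt(1.0 / (n_group * mu0) + 1.0 / (n_group * mu1))
    z_crit = z.inv_cdf(1.0 - alpha / 2.0)      # two-sided critical value
    return z.cdf(abs(math.log(rate_ratio)) / se - z_crit)

# 2 pills/day, 10% reduction, exposure = 1 day of observation per subject
p_one_day = poisson_wald_power(1175, 2.0, 0.9, 1.0)
# Same rates observed over 365 days per subject: power is essentially 1
p_one_year = poisson_wald_power(1175, 2.0, 0.9, 365.0)
```

Because only the product of rate and exposure enters, doubling the exposure while halving the rate leaves the power unchanged, which is consistent with the swimmers example simply setting the exposure to 1 season and expressing the rate per season.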
| Statistical power and sample size for Poisson regression: specifying the "mean exposure"? | CC BY-SA 4.0 | null | 2023-03-29T11:30:10.430 | 2023-04-02T10:24:14.613 | null | null | 167591 | [
"sample-size",
"statistical-power",
"poisson-regression",
"gpower"
] |
611109 | 2 | null | 611100 | 1 | null | From your description of the output, I'm led to believe the right approach is
```
library(tidyverse)
library(marginaleffects)
x1 <- rnorm(1000, 2, 1)
x2 <- rnorm(1000, -2, 1)
y <- 2*x1 - x2 + 0.5*x1*x2 + rnorm(1000)
d <- tibble(x1, x2, y)
fit <- lm(y~x1*x2, data=d)
comparisons(
fit,
variables = c('x1', 'x2'),
newdata = datagrid(x1=quantile(x1), x2=quantile(x2)),
cross = TRUE
)
```
Here, I have specified that the marginal effects should be computed at the quantiles of each variable (you're free to specify your own values). This returns 25 marginal effects, estimated at each combination of the variables specified in `newdata = datagrid(x1=quantile(x1), x2=quantile(x2))`.
If you edit your question to include the Stata output and a reproducible example in R, then I can verify this is the right approach.
| null | CC BY-SA 4.0 | null | 2023-03-29T12:06:53.520 | 2023-03-29T12:06:53.520 | null | null | 111259 | null |
611113 | 2 | null | 547410 | 0 | null | Here's how it may happen: the AUC-ROC calculation is based on Sensitivity and Specificity values, both of which are based on the correctly predicted values, both Positive and Negative:
>
Sensitivity = True Positive Rate = TPos / (TPos + FNeg)
Specificity = True Negative Rate = TNeg / (TNeg + FPos)
Precision and Recall, on the other hand, are based on the True Positive values:
>
Precision = Positive Predictive Value = TPos / (TPos + FPos)
Recall (same as Sensitivity) = True Positive Rate = TPos / (TPos + FNeg)
Note that AUC-ROC metric takes in consideration all 4 values (TPos, TNeg, FPos and FNeg).
However, for Precision and Recall only 3 of those values are considered for the calculation (TPos, FPos and FNeg), while the count of True Negatives is disregarded.
This means that your second model may have slightly improved its True Positive detection rate while decreasing its True Negative detection rate to a much greater extent. This would result in better Precision and Recall, as these metrics ignore TNeg, but would negatively affect the overall AUC-ROC value, especially if the gain in TPos detection is outweighed by an even larger loss in TNeg detection.
The fact that your dataset is imbalanced may enhance this effect even further.
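To make the dependence on the four counts concrete, here is a small illustration (Python, with made-up counts) showing that Precision and Recall are completely unchanged when only the True Negative count varies, while Specificity (and hence ROC-based summaries) moves:

```python
def metrics(tp, fn, fp, tn):
    """The summary metrics discussed above, computed from the four counts."""
    return {
        "sensitivity/recall": tp / (tp + fn),  # uses TPos and FNeg only
        "specificity": tn / (tn + fp),         # uses TNeg and FPos
        "precision": tp / (tp + fp),           # uses TPos and FPos; TNeg ignored
    }

model_a = metrics(tp=70, fn=30, fp=100, tn=9800)
model_b = metrics(tp=70, fn=30, fp=100, tn=5000)  # only TNeg has changed
```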
| null | CC BY-SA 4.0 | null | 2023-03-29T12:27:37.370 | 2023-03-29T12:27:37.370 | null | null | 360512 | null |
611114 | 1 | 611138 | null | 0 | 47 | My project is based on a clinical trial in which we measured gene expression in three groups (OO, NUTS, LFD). The individuals (n = 151) are almost equally distributed across groups, and the variables are measured at baseline and 12 months after the intervention.
I am not an expert, but I have read about PCA. The idea of using PCA is to observe clusters according to the upregulation or downregulation of these genes. Until now I have run PCA on just the genes, expressed as numeric, continuous variables, and scaled.
The way I set up the PCA is a matrix in which the columns are the variables (= genes), the categorical variable is in the 1st column, and the rownames are the individuals.
- I am not sure if I am overthinking this: should I exchange the order of columns and rows, putting the genes in the rows and the individuals in the columns? If I understand correctly, the eigenvalues are calculated across columns, and they are depicted there, so the initial approach is the one I considered correct.
- Besides that, I plan to explore the contribution split across the categorical variable (groups) to observe whether the contribution of variables changes among the 3 groups. Does this make sense? I have used this approach for the contributions; has anybody used this or something different in this context?
```
fviz_pca_var(res.PCA, col.var = "cos2",
gradient.cols = c("#00AFBB", "#E7B800", "#FC4E07"),
repel = TRUE # Avoid text overlapping
)
```
This is my database
```
head(PCAcomp[, 1:8], n = 6)
group ppara ppard pparg nr1h3 nr1h2 rxra rxrb
50109018 LFD 1.9100000 0.654 1.137 0.631 -0.217 0.486 -0.020
50109019 LFD 0.0960000 -0.123 -0.027 0.282 0.547 0.101 -0.347
50109025 LFD -0.3190000 0.157 0.215 -0.131 -0.476 -0.091 0.716
50109026 NUTS 0.2755359 0.177 0.177 0.167 -0.794 -0.061 0.386
50109027 LFD -0.6283524 -0.390 -0.761 -1.076 -0.880 -0.263 0.299
50118001 OO 0.5441151 0.864 0.454 0.577 0.336 0.306 0.507
```
1) Clustering all 3 groups at once and 2) genes per group
[](https://i.stack.imgur.com/kYTVg.png)
[](https://i.stack.imgur.com/JtoU2.png)
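Regarding the first bullet: your initial layout matches the usual convention, with individuals in rows and variables (genes) in columns, and the eigen-decomposition computed on the covariance of the columns. A dependency-free Python sketch of that convention with two toy "genes" (the numbers are illustrative, not your data):

```python
import math

# Toy matrix: rows = individuals, columns = variables (genes)
X = [[1.9, 0.65], [0.10, -0.12], [-0.32, 0.16],
     [0.28, 0.18], [-0.63, -0.39], [0.54, 0.86]]
n = len(X)

# Center each COLUMN (variable): PCA works on the variables in the columns
means = [sum(row[j] for row in X) / n for j in range(2)]
Xc = [[row[j] - means[j] for j in range(2)] for row in X]

# 2x2 sample covariance matrix of the columns
def cov(j, k):
    return sum(r[j] * r[k] for r in Xc) / (n - 1)
c00, c01, c11 = cov(0, 0), cov(0, 1), cov(1, 1)

# Eigenvalues of the symmetric 2x2 matrix: the principal-component variances
tr, det = c00 + c11, c00 * c11 - c01 * c01
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
```

The eigenvalues sum to the total variance of the (column) variables, which is the property PCA plots such as `fviz_pca_var` rely on; transposing the matrix would instead decompose a covariance among individuals.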
| Is this approach of PCA correct? | CC BY-SA 4.0 | null | 2023-03-29T12:28:20.957 | 2023-03-29T15:16:46.960 | 2023-03-29T15:16:46.960 | 339186 | 339186 | [
"r",
"pca"
] |
611116 | 1 | null | null | 1 | 22 | I am currently studying the Naive Bayes method (a classification method) and I am having quite some trouble classifying it as a hard or soft classifier. Below follows a quick introduction on the method and my thoughts about this question.
One of the disadvantages of Bayesian Decision Theory is the fact that we need to manipulate multivariate density functions, which can lead to some problems. This is why Naive Bayes is so important: in this method, we simplify the problem by assuming that the variables $x_1 , \dots , x_D$ are (conditionally) independent of one another. Therefore,
$$ P(C_k \, | \,x ) = \frac{P(C_k) \cdot f(x \, | \, C_k )}{f(x)} = \frac{P(C_k)\cdot \prod_{d=1}^D f(x_d \, | \, C_k)}{\prod_{d=1}^D f(x_d)}.$$
With this in mind, we can easily adapt the minimum error Bayes' decision rule to our Naive Bayes method, which yields
$$ \text{ Decide } C_j \text{ if } P(C_j)\prod_{d=1}^D f(x_d \, | \, C_j) = \max_{k = 1,\dots,M} P(C_k)\prod_{d=1}^D f( x_d \, | \, C_k), $$
where $M$ represents the total number of groups in our problem.
My problem. How can I figure out if Naive Bayes is a hard or a soft classifier? As far as I know, we define a hard classifier as a classifier that directly estimates a group, i.e., given an observation $\mathbb x$, our classifier tells us that $\mathbb x$ belongs to some class $C$ of our problem. On the other hand, we say a classifier is a soft classifier if it estimates the a posteriori probabilities.
In my mind, Naive Bayes does both. For example, if I give some observation $\mathbb x$ to Naive Bayes, it is going to return which class $\mathbb x$ should belong to. At the same time, it also estimates a posteriori probabilities (this is what we calculate to decide which class $\mathbb x$ belongs to). So how can I decide if Naive Bayes is a hard or soft classifier?
Are my general concepts about hard and soft classifiers wrong or incomplete?
Thanks for any help in advance!
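One way to see it concretely: a toy, dependency-free Gaussian Naive Bayes first produces posterior probabilities (the "soft" output), and the "hard" label is just the argmax applied on top of them, so the hard rule is derived from a fundamentally soft classifier. A Python sketch with invented parameters:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def naive_bayes_posterior(x, priors, params):
    """Posterior P(C_k | x) under the naive per-class independence assumption.
    params[k] is a list of (mu, sigma) pairs, one per feature."""
    scores = []
    for prior, ps in zip(priors, params):
        s = prior
        for xi, (mu, sigma) in zip(x, ps):
            s *= normal_pdf(xi, mu, sigma)
        scores.append(s)
    total = sum(scores)
    return [s / total for s in scores]          # the "soft" output

priors = [0.5, 0.5]
params = [[(0.0, 1.0), (0.0, 1.0)],             # class 0 feature distributions
          [(2.0, 1.0), (2.0, 1.0)]]             # class 1 feature distributions
post = naive_bayes_posterior([1.8, 2.2], priors, params)
label = max(range(len(post)), key=lambda k: post[k])   # the "hard" decision
```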
| What type of classifier is Naive Bayes? | CC BY-SA 4.0 | null | 2023-03-29T12:53:00.427 | 2023-03-29T12:53:00.427 | null | null | 383130 | [
"machine-learning",
"bayesian",
"classification",
"naive-bayes"
] |
611117 | 1 | null | null | 0 | 68 | I have trouble understanding the documentation for the glmmTMB package ([https://cran.r-project.org/web/packages/glmmTMB/glmmTMB.pdf](https://cran.r-project.org/web/packages/glmmTMB/glmmTMB.pdf))
On page 26, in the details on the beta-binomial distribution, it says:
>
Beta-binomial distribution: parameterized according to Morris (1997).
$V = \mu(1 - \mu)(n(\phi + n)/(\phi + 1))$.
I consulted Morris' paper (Morris W (1997). "Disentangling Effects of Induced Plant Defenses and Food Quantity on Herbivores by Fitting Nonlinear Models." American Naturalist 150:299-327), but I'm still not sure I understand. Can someone explain to me what $\phi$ means here? And what does $n$ represent in the calculation of the variance? The number of replicates?
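If (as in the common mean-dispersion parameterization) the success probability is drawn as $p \sim \mathrm{Beta}(\mu\phi, (1-\mu)\phi)$, so that $\phi$ is the sum of the two Beta shape parameters, then $n$ is the binomial size (the number of trials per observation), not the number of replicates, and the quoted variance formula follows. A quick simulation check in Python (parameter choices are my own):

```python
import random

random.seed(1)
mu, phi, n_size = 0.3, 5.0, 10
a, b = mu * phi, (1 - mu) * phi   # Beta shape parameters

draws = []
for _ in range(40000):
    p = random.betavariate(a, b)                          # per-observation probability
    k = sum(1 for _ in range(n_size) if random.random() < p)   # Binomial(n, p)
    draws.append(k)

mean_sim = sum(draws) / len(draws)
var_sim = sum((k - mean_sim) ** 2 for k in draws) / (len(draws) - 1)

# Variance formula quoted from the glmmTMB documentation
var_formula = n_size * mu * (1 - mu) * (phi + n_size) / (phi + 1)
```

The factor $(\phi + n)/(\phi + 1)$ is the overdispersion relative to a plain binomial; as $\phi \to \infty$ it tends to 1 and the binomial variance is recovered.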
| Parameterization of the beta-binomial family Glmmtmb | CC BY-SA 4.0 | null | 2023-03-29T12:56:18.793 | 2023-03-29T16:20:55.333 | 2023-03-29T13:17:19.827 | 77222 | 384405 | [
"beta-binomial-distribution",
"glmmtmb"
] |
611118 | 1 | null | null | 0 | 17 | I am currently working on a logistic regression, where innovation is my independent variable and correlation is my dependent variable. I have some other independent variables and interaction terms but this is my main effect.
How do I test for endogeneity? And is there a syntax in R?
Thank you a lot in advance.
| Endogeneity in Logistic Regression | CC BY-SA 4.0 | null | 2023-03-29T12:57:39.673 | 2023-03-29T12:57:39.673 | null | null | 384406 | [
"endogeneity"
] |
611119 | 1 | null | null | 1 | 28 | When conducting a 2-way fixed effects ANOVA, there are essentially four null hypothesis statistical tests (NHSTs) that can reasonably be conducted with just the ANOVA tables. The first is the omnibus test: is the overall model statistically significant? The next three are about the effects: are the two main effects and the interaction effect statistically significant?
Most reputable textbooks will mention that you cannot entertain the last three if you fail to reach significance with the omnibus NHST.
However, I have yet to locate a textbook that suggests we need to do some type of alpha adjustment (say a Bonferroni adjustment) when looking at the significance of the three effects. Yet, these books will mention that you do need to account for Type-I inflation when doing any follow-up (post hoc) comparisons.
So, if the effect NHSTs serve as the omnibus tests prior to the MCPs, which require alpha adjustment, why is it not the same for the omnibus NHST prior to the analysis of the effects?
As an addition to this, I have the same question for general multiple regression models as well...but I thought it would be easier to present in the 2-way ANOVA context as a question on this forum.
| Why is an alpha adjustment not required for factorial ANOVA? | CC BY-SA 4.0 | null | 2023-03-29T13:06:53.610 | 2023-03-29T13:06:53.610 | null | null | 199063 | [
"statistical-significance",
"anova",
"multiple-comparisons",
"bonferroni"
] |
611120 | 1 | null | null | 0 | 19 | I am trying to forecast using the SARIMA model. I have used
```
auto.arima(ts,seasonal=T,trace=T)
```
but the prediction I'm getting is just the previous year's values shifted up:
[](https://i.stack.imgur.com/wUudo.jpg)
Clearly the 2020 values are just the 2019 values but raised. I'm not sure why this is happening.
The fitted model was SARIMA(5,1,4)(0,1,0)[365]. The data is river flow data recorded daily.
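This behaviour is exactly what a seasonal I(1) component with no seasonal AR or MA terms produces. In the simplest case, ARIMA(0,1,0)(0,1,0)_s, the recursion $y_t = y_{t-1} + y_{t-s} - y_{t-s-1}$ with future innovations set to zero forecasts last season's values plus a constant offset. A small dependency-free Python illustration with a toy season length of 4:

```python
def sarima_010_010_forecast(y, s, h):
    """Zero-innovation forecast recursion for ARIMA(0,1,0)(0,1,0)_s:
    y_t = y_{t-1} + y_{t-s} - y_{t-s-1}."""
    hist = list(y)
    for _ in range(h):
        hist.append(hist[-1] + hist[-s] - hist[-s - 1])
    return hist[len(y):]

# Toy series with season length 4: the same pattern, drifting up by 1 per cycle
y = [0, 10, 5, 2, 1, 11, 6, 3, 2, 12, 7, 4]
fc = sarima_010_010_forecast(y, s=4, h=4)   # exactly last season + constant shift
```

The actual fitted model also has nonseasonal (5,1,4) terms that perturb this slightly, but the dominant pattern is the same: with no seasonal AR or MA terms there is nothing to damp the repetition of last season. For a long seasonality like 365, Fourier terms or a model with a seasonal MA component are often suggested instead.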
| SARIMA forecast is just the previous year shifted | CC BY-SA 4.0 | null | 2023-03-29T13:07:43.590 | 2023-03-29T15:29:25.817 | 2023-03-29T15:29:25.817 | 384409 | 384408 | [
"r",
"time-series",
"forecasting",
"arima"
] |
611122 | 1 | null | null | 1 | 49 | I'm wondering how to go about fitting the following linear mixed model in R. Suppose I have three variables: response, subject and treatment. I observe 4 responses per subject, and a subject is either assigned to treatment or isn't (binary indicator). I want to fit the linear mixed model with covariance structure
$$
\text{Cov}(Y_{is}, Y_{jt}) = \sigma^2_0 \delta_{ij} \delta_{st} + \sigma^2_1 \delta_{ij} 1\{t(i)=1\} + \sigma^2_2 \delta_{ij} 1\{t(i)=0\}.
$$
Here, $i$ ranges over subjects, and $s$ ranges over different observations made on the subject. $\delta_{ij} = 1$ if $i=j$, and $t(i)$ is the value of the treatment assigned to subject $i$.
In words, I want to have a variance component for treated subjects and another for untreated subjects. How should this be coded up in R, say using lmer?
| random effect formula in lmer | CC BY-SA 4.0 | null | 2023-03-29T13:18:02.900 | 2023-03-30T11:27:36.080 | null | null | 55946 | [
"r",
"regression",
"mixed-model"
] |
611124 | 1 | null | null | 0 | 24 | I want to run a stepped wedge cluster randomised trial ([link](https://www.bmj.com/content/350/bmj.h391)), over $N \sim 1000$ cities during $M = 4$ weeks. When I analyze the data of the experiment, at which level should I define clusters (for the clustered-standard-errors) to have a well-calibrated type-I error?
- city
- city $\times$ week
- city $\times$ period
where a period is a sequence of weeks with constant treatment. Is there literature for this?
Is the OLS or GEE method with clustered standard errors valid in this case? Is the point estimate unbiased? Are the confidence intervals valid?
| Clusters in stepped wedge cluster randomized trial | CC BY-SA 4.0 | null | 2023-03-29T13:36:11.427 | 2023-04-26T13:19:38.547 | 2023-04-26T13:19:29.103 | 205730 | 205730 | [
"generalized-estimating-equations",
"clustered-standard-errors",
"robust-standard-error"
] |
611125 | 1 | null | null | 0 | 64 | I am currently working on developing an LSTM model using six time series as inputs, with the objective of predicting one of them. However, the data contain missing values that need to be addressed. I am using Keras to create the model and have found that using a Keras masking layer can enable the model to handle these missing values. Based on my research, this masking layer allows the model to omit masked values when calculating the loss function.
To normalize my data, I utilized the MinMaxScaler() function from sklearn, resulting in all values falling between 0 and 1. To handle NaN values, I transformed them to -1 and informed the Keras masking layer of this mask value. I assumed that the model would disregard these -1 values; however, when I plotted the predicted versus real observations, it appeared that the model was trying to predict the -1 values as well. Here is the graphic:
[](https://i.stack.imgur.com/sFlcl.png)
Any help in understanding why this is happening and how to address it would be greatly appreciated. Thank you!
| Managing Missing Values in LSTM Time-Series Model using Keras Masking Layer | CC BY-SA 4.0 | null | 2023-03-29T13:52:41.360 | 2023-03-29T13:52:41.360 | null | null | 384412 | [
"time-series",
"missing-data",
"lstm",
"keras"
] |
611126 | 1 | null | null | 1 | 23 | Suppose that you had subjects learn a series of sentences and then recall the subject noun from each sentence. You manipulated the type of imagery included in the sentence (Factor A): ordinary and bizarre imagery. You also manipulated the amount of complexity of the sentences (Factor B): low and high complexity. Each subject had 30 sentences to learn; thus a maximum of 30 nouns could be recalled. Since this design is a 2 by 2 factorial design, you have four experimental conditions. 20 subjects participated in each of the four conditions.
- Write out your null and alternative hypotheses for each of the statistics of interest, both in words and as a mathematical expression. (For example, for a b-coefficient in a categorical regression, in words: a) H0: the mean of group 1 is not different from the mean of group 2; and b) as a mathematical expression: b1 = 0.)
| How would I write the Null and Alternative Hypothesis for this research question? | CC BY-SA 4.0 | null | 2023-03-29T13:53:06.570 | 2023-03-29T13:53:06.570 | null | null | 384413 | [
"hypothesis-testing"
] |
611127 | 2 | null | 610947 | 2 | null | This question asks how to evaluate the integral
$$\int_{\Theta} \frac{1}{\sqrt{2\pi\tau^2}}e^{\frac{-1}{2}\big(\frac{\theta - 0}{\tau}\big)^2} \prod_{m=1}^T \frac{\frac{1}{\sqrt{2\pi\sigma^2}} e^{\frac{-1}{2}\big(\frac{Z_m - \theta}{\sigma}\big)^2}}{\frac{1}{\sqrt{2\pi\sigma^2}} e^{\frac{-1}{2}\big(\frac{Z_m - \theta_0}{\sigma}\big)^2}} \,\mathrm d\theta$$
By adopting a suitable notation and aiming to put the integral into a particularly convenient form, we can make short work of its evaluation. It all comes down to completing the square, which is a matter of adding up the coefficients of powers of $\theta$ appearing in the exponentials.
---
The rules of exponentiation tell us this integrand must reduce to the exponential of a quadratic form $-Q(\theta)/2,$ which is (as we shall see) most conveniently expressed by "completing the square" as
$$Q(\theta) = s\left[\frac{1}{\beta^2}\theta^2 - \frac{2\alpha}{\beta^2}\theta + \left(\frac{\alpha^2}{\beta^2}+\gamma\right)\right] = s\left(\frac{\theta - \alpha}{\beta}\right)^2 + s\gamma$$
(with $\beta \gt 0$ and $s=\pm 1$) because the change of variable $x = (\theta-\alpha)/\beta,$ entailing $\mathrm d\theta = \beta\,\mathrm dx,$ yields
$$\int_\Theta e^{-Q(\theta)/2}\,\mathrm d\theta = \int_\Theta \exp\left(-s\left(\frac{\theta - \alpha}{\beta}\right)^2/2 - s\gamma/2\right)\,\mathrm d\theta = \beta\, e^{-s\gamma/2}\int_{\mathcal X} e^{-sx^2/2}\,\mathrm dx.$$
where $\mathcal X$ is the image of $\Theta$ under the mapping $\theta\to (\theta-\alpha)/\beta = x.$ Assuming the sign of the form is $s=+1,$ the integral of $\exp(-sx^2/2)$ over all the real numbers is $C=\sqrt{2\pi}.$ (The case $s=-1$ will not arise in these probability calculations and I have excluded the possibility that $Q$ is a linear function of $\theta$ because that reduces the integrand to an exponential, which is elementary to compute.)
We can therefore relate this integral to a probability measure by introducing this normalizing factor of $C,$ resulting in
>
$$\int_\Theta e^{-Q(\theta)/2}\,\mathrm d\theta = \left(\beta\,e^{-\gamma/2}\sqrt{2\pi}\right) \frac{1}{\sqrt{2\pi}}\int_{\mathcal X} e^{-x^2/2}\,\mathrm dx = \beta\,e^{-\gamma/2}\,\sqrt{2\pi}\, \Phi(\mathcal X)\tag{*}$$
where $\Phi$ is the standard Normal probability measure.
In many applications $\Theta = \mathbb R,$ whence $\mathcal X = \mathbb R$ without any further computation and $\Phi(\mathcal X) = 1$ because it is a probability measure. The right hand side reduces to a simple function of $\beta$ and $\gamma$ -- you don't even have to compute $\alpha.$
---
To answer the question specifically, then, we compute the coefficients appearing in $Q.$ By inspection, and writing $Z = Z_1 + Z_2 + \cdots + Z_T$ for that sum, the coefficient of $\theta^2$ is
$$\frac{1}{\beta^2} = \frac{1}{\tau^2} + \sum_{m=1}^T \frac{1}{\sigma^2} = \frac{1}{\tau^2} + \frac{T}{\sigma^2}, \tag{1}$$
the coefficient of $\theta$ is
$$\frac{-2\alpha}{\beta^2} = 0+ \sum_{m=1}^T \frac{-2Z_m}{\sigma^2} = -\frac{2 Z}{\sigma^2},\tag{2}$$
the constant term (within the arguments of the exponentials) is
$$\frac{\alpha^2}{\beta^2}+\gamma = 0 + \sum_{m=1}^T \frac{Z_m^2}{\sigma^2} - \left(\frac{Z_m-\theta_0}{\sigma}\right)^2 = \frac{1}{\sigma^2}\left(2Z\theta_0-T\theta_0^2\right)\tag{3},$$
and a constant term from the factors that multiply the exponentials is just $1/\sqrt{2\pi\tau^2}.$
Simply solve for $\beta,$ $\alpha,$ and then $\gamma$ in that order and plug that into $(*),$ multiplying the result afterwards by $1/\sqrt{2\pi\tau^2}.$
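As a numerical sanity check on $(*)$ together with $(1)$-$(3)$ (my own toy parameter values, with the integral taken over all of $\mathbb R$ so that $\Phi(\mathcal X) = 1$):

```python
import math

def integrand(theta, tau, sigma, theta0, Z):
    # N(0, tau^2) prior density times the likelihood ratio from the question
    val = math.exp(-0.5 * (theta / tau) ** 2) / math.sqrt(2 * math.pi * tau ** 2)
    for z in Z:
        val *= math.exp(-0.5 * ((z - theta) / sigma) ** 2
                        + 0.5 * ((z - theta0) / sigma) ** 2)
    return val

def closed_form(tau, sigma, theta0, Z):
    T, Zsum = len(Z), sum(Z)
    beta = (1.0 / tau ** 2 + T / sigma ** 2) ** -0.5             # eq. (1)
    alpha = (Zsum / sigma ** 2) * beta ** 2                      # eq. (2)
    const = (2 * Zsum * theta0 - T * theta0 ** 2) / sigma ** 2   # eq. (3)
    gamma = const - alpha ** 2 / beta ** 2
    # (*) times the 1/sqrt(2*pi*tau^2) prefactor
    return beta * math.exp(-gamma / 2.0) * math.sqrt(2 * math.pi) / math.sqrt(2 * math.pi * tau ** 2)

tau, sigma, theta0 = 1.5, 0.8, 0.4
Z = [0.2, 1.1, -0.3, 0.7]

# Trapezoidal quadrature over a generously wide range
lo, hi, n = -20.0, 20.0, 20001
h = (hi - lo) / (n - 1)
vals = [integrand(lo + i * h, tau, sigma, theta0, Z) for i in range(n)]
num_int = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
cf = closed_form(tau, sigma, theta0, Z)
```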
| null | CC BY-SA 4.0 | null | 2023-03-29T14:06:52.020 | 2023-03-29T14:06:52.020 | null | null | 919 | null |
611128 | 1 | null | null | 0 | 47 | I was given a dataset consisting of 80 curves which are given as individual points. The curves represent the decline of batteries until they are no longer useful. The objective is to model a curve which fits this data best and to predict the lifetime of a battery.
[](https://i.stack.imgur.com/k2MNq.png)
I have tried polynomial regression so far, in what other ways could I model a curve?
| How do I fit a curve to a dataset consisting of multiple curves given as individual datapoints | CC BY-SA 4.0 | null | 2023-03-29T14:04:36.190 | 2023-03-29T15:01:00.603 | 2023-03-29T14:44:09.433 | 68149 | null | [
"r"
] |
611129 | 1 | null | null | 0 | 6 | I’m currently working on a conjoint analysis (CBC) about e-commerce fulfillment. The idea is to find out (among other things) whether customer characteristics (gender, age, home-presence, …) influence the utilities. The conjoint tool calculates individual part worth utilities for each respondent in case it is needed.
In the end, I want to know if certain characteristics significantly influence a particular part worth utility.
How can this be tested statistically?
I’m very grateful for any help. If further information is needed or anything is unclear, please let me know. Thank you!
| Influence of demographics/background on part worth utilities in conjoint analysis | CC BY-SA 4.0 | null | 2023-03-29T14:24:30.590 | 2023-03-30T08:49:44.887 | 2023-03-30T08:49:44.887 | 384418 | 384418 | [
"controlling-for-a-variable",
"conjoint-analysis"
] |
611131 | 2 | null | 611128 | 2 | null | If you have the lifetime of multiple batteries, you can treat that as time-to-event - that is, you have the time it takes for a battery to run dead. In such case, you can perform a survival analysis.
If you have only the time-to-event, you can draw a Kaplan-Meier survival curve. If you have other variables that can explain why some batteries last longer than others, you can try a Cox regression to evaluate the effect of those variables in the lifetime of batteries in your dataset.
[Edit] Now looking at your plot, it seems that you are trying to model an "average" curve learned from all 80 curves. In that case, a possible approach would be to find the parameters of each individual curve (e.g. modeling each one of them individually) and then inferring the true parameters of the population of curves, e.g. by Bayesian inference. As a result of this process you would get the most likely (maximum likelihood) parameters for a general curve representing what you know about the lifetime of your batteries.
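For reference, the Kaplan-Meier estimate mentioned above is simple enough to sketch from scratch (shown in Python purely for illustration; in R the `survival` package's `survfit` does this). At each observed failure time, the survival estimate is multiplied by 1 - d/n, where d is the number of failures at that time among the n units still at risk:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.
    events[i] is 1 if the unit failed at times[i], 0 if it was censored.
    Returns (time, estimated survival) pairs at the failure times."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        deaths = sum(e for (tt, e) in data if tt == t)
        leaving = sum(1 for (tt, _) in data if tt == t)
        if deaths > 0:
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= leaving
    return curve

# Four batteries, all observed to failure (no censoring)
curve = kaplan_meier([2, 4, 4, 6], [1, 1, 1, 1])
```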
| null | CC BY-SA 4.0 | null | 2023-03-29T14:28:26.757 | 2023-03-29T15:01:00.603 | 2023-03-29T15:01:00.603 | 360512 | 360512 | null |
611132 | 2 | null | 357466 | 6 | null | I generally agree with your premise that there is an over-fixation on balancing classes, and that it is usually not necessary to do so. Your examples of when it is appropriate to do so are good ones.
However, I disagree with your statement:
>
I conclude that unbalanced classes are not a problem, and that oversampling does not alleviate this non-problem, but gratuitously introduces bias and worse predictions.
The problem in your predictions is not the oversampling procedure; it is the failure to correct for the fact that the base rate for positives in the "over-sampled" (50/50) regression is 50%, while in the data it is closer to 2%.
Following King and Zeng ("Logistic Regression in Rare Events Data", 2001, Political Analysis, [PDF here](https://gking.harvard.edu/files/gking/files/0s.pdf)), let the population base rate be given by $\tau$. We estimate $\tau$ as the proportion of positives in the training sample:
$$
\tau = \frac{1}{N}\sum_{i=1}^N y_i
$$
And let $\bar{y}$ be the proportion of positives in the over-sampled set, $\bar{y}=0.5$. This is by construction since you use a balanced 50/50 sample in the over-sampled regression.
Then, after using the `predict` command to generate predicted probabilities $P(y|x,d)$ we adjust these probabilities using the formula in King and Zeng, appendix B.2 to find the probability under the population base rate. This probability is given by $P(y=1|x,d)A_1B$. In the case of two classes:
$$
P(y=1|x,d)A_1B = \frac{P(y=1|x,d) \frac{\tau}{\bar{y}}}{P(y=1|x,d) \frac{\tau}{\bar{y}} + P(y=0|x,d) \frac{1-\tau}{1-\bar{y}}}
$$
Since $\bar{y}=0.5$ this simplifies to:
$$
P(y=1|x,d)A_1B = \frac{P(y=1|x,d) \tau}{P(y=1|x,d) \tau + P(y=0|x,d) (1-\tau)}
$$
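In code, the two-class correction is only a few lines (a Python sketch; the function and variable names are mine):

```python
def prior_correct(p, tau, ybar):
    """Prior correction of King & Zeng (2001, appendix B.2), two-class case:
    map a probability fitted on a resampled set with positive share `ybar`
    back to the population base rate `tau`."""
    num = p * tau / ybar
    den = num + (1.0 - p) * (1.0 - tau) / (1.0 - ybar)
    return num / den

# A 50% score from a balanced (ybar = 0.5) model, population base rate 2%
p_adj = prior_correct(0.5, tau=0.02, ybar=0.5)   # -> 0.02
```

Note that when $\tau = \bar y$ the formula is the identity, so applying it to a model fitted on the unbalanced data changes nothing.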
Modifying your code in the relevant places, we now have very similar Brier scores between the two approaches, despite the fact that the over-sampled training sample uses an order of magnitude less data than the raw training sample (in most cases, roughly 450 data points vs. 10,000).
So, in this Monte Carlo study, we see that balancing the training sample does not harm predictive accuracy (as judged by Brier score), but it also does not provide any meaningful increase in accuracy. The only benefit of balancing the training sample in this particular application is to reduce the computational burden of estimating the binary predictor. In the present case, we only need ~450 data points instead of 10,000. The reduction in computational burden would be much more substantial if we were dealing with millions of observations in the raw data.
[](https://i.stack.imgur.com/IzNCQ.png)
[](https://i.stack.imgur.com/XwcZI.png)
The modified code is given below:
```
library(randomForest)
library(beanplot)
nn_train <- nn_test <- 1e4
n_sims <- 1e2
true_coefficients <- c(-7, 5, rep(0, 9))
incidence_train <- rep(NA, n_sims)
model_logistic_coefficients <-
model_logistic_oversampled_coefficients <-
matrix(NA, nrow=n_sims, ncol=length(true_coefficients))
brier_score_logistic <- brier_score_logistic_oversampled <-
brier_score_randomForest <-
brier_score_randomForest_oversampled <-
rep(NA, n_sims)
pb <- txtProgressBar(max=n_sims)
for ( ii in 1:n_sims ) {
setTxtProgressBar(pb,ii,paste(ii,"of",n_sims))
set.seed(ii)
while ( TRUE ) { # make sure we even have the minority
# class
predictors_train <- matrix(
runif(nn_train*(length(true_coefficients) - 1)),
nrow=nn_train)
logit_train <-
cbind(1, predictors_train)%*%true_coefficients
probability_train <- 1/(1+exp(-logit_train))
outcome_train <- factor(runif(nn_train) <=
probability_train)
if ( sum(incidence_train[ii] <-
sum(outcome_train==TRUE))>0 ) break
}
dataset_train <- data.frame(outcome=outcome_train,
predictors_train)
index <- c(which(outcome_train==TRUE),
sample(which(outcome_train==FALSE),
sum(outcome_train==TRUE)))
model_logistic <- glm(outcome~., dataset_train,
family="binomial")
model_logistic_oversampled <- glm(outcome~.,
dataset_train[index, ], family="binomial")
model_logistic_coefficients[ii, ] <-
coefficients(model_logistic)
model_logistic_oversampled_coefficients[ii, ] <-
coefficients(model_logistic_oversampled)
model_randomForest <- randomForest(outcome~., dataset_train)
model_randomForest_oversampled <-
randomForest(outcome~., dataset_train, subset=index)
predictors_test <- matrix(runif(nn_test *
(length(true_coefficients) - 1)), nrow=nn_test)
logit_test <- cbind(1, predictors_test)%*%true_coefficients
probability_test <- 1/(1+exp(-logit_test))
outcome_test <- factor(runif(nn_test)<=probability_test)
dataset_test <- data.frame(outcome=outcome_test,
predictors_test)
prediction_logistic <- predict(model_logistic, dataset_test,
type="response")
brier_score_logistic[ii] <- mean((prediction_logistic -
(outcome_test==TRUE))^2)
prediction_logistic_oversampled <-
predict(model_logistic_oversampled, dataset_test,
type="response")
# Adjust probabilities based on appendix B.2 in King and Zeng (2001)
p1_tau1 = prediction_logistic_oversampled*(incidence_train[ii]/nn_train)
p0_tau0 = (1-prediction_logistic_oversampled)*(1-incidence_train[ii]/nn_train)
prediction_logistic_oversampled_adj <- p1_tau1/(p1_tau1+p0_tau0)
brier_score_logistic_oversampled[ii] <-
mean((prediction_logistic_oversampled_adj -
(outcome_test==TRUE))^2)
prediction_randomForest <- predict(model_randomForest,
dataset_test, type="prob")
brier_score_randomForest[ii] <-
mean((prediction_randomForest[,2]-(outcome_test==TRUE))^2)
prediction_randomForest_oversampled <-
predict(model_randomForest_oversampled,
dataset_test, type="prob")
# Adjust probabilities based on appendix B.2 in King and Zeng (2001)
p1_tau1 = prediction_randomForest_oversampled*(incidence_train[ii]/nn_train)
p0_tau0 = (1-prediction_randomForest_oversampled)*(1-incidence_train[ii]/nn_train)
prediction_randomForest_oversampled_adj <- p1_tau1/(p1_tau1+p0_tau0)
brier_score_randomForest_oversampled[ii] <-
mean((prediction_randomForest_oversampled_adj[, 2] -
(outcome_test==TRUE))^2)
}
close(pb)
hist(incidence_train, breaks=seq(min(incidence_train)-.5,
max(incidence_train) + .5),
col="lightgray",
main=paste("Minority class incidence out of",
nn_train,"training samples"), xlab="")
ylim <- range(c(model_logistic_coefficients,
model_logistic_oversampled_coefficients))
beanplot(data.frame(model_logistic_coefficients),
what=c(0,1,0,0), col="lightgray", xaxt="n", ylim=ylim,
main="Logistic regression: estimated coefficients")
axis(1, at=seq_along(true_coefficients),
c("Intercept", paste("Predictor", 1:(length(true_coefficients)
- 1))), las=3)
points(true_coefficients, pch=23, bg="red")
beanplot(data.frame(model_logistic_oversampled_coefficients),
what=c(0, 1, 0, 0), col="lightgray", xaxt="n", ylim=ylim,
main="Logistic regression (oversampled): estimated
coefficients")
axis(1, at=seq_along(true_coefficients),
c("Intercept", paste("Predictor", 1:(length(true_coefficients)
- 1))), las=3)
points(true_coefficients, pch=23, bg="red")
beanplot(data.frame(Raw=brier_score_logistic,
Oversampled=brier_score_logistic_oversampled),
what=c(0,1,0,0), col="lightgray", main="Logistic regression:
Brier scores")
beanplot(data.frame(Raw=brier_score_randomForest,
Oversampled=brier_score_randomForest_oversampled),
what=c(0,1,0,0), col="lightgray",
main="Random Forest: Brier scores")
```
| null | CC BY-SA 4.0 | null | 2023-03-29T14:28:58.213 | 2023-04-11T05:19:55.857 | 2023-04-11T05:19:55.857 | 1352 | 384374 | null |
611133 | 1 | 611171 | null | 0 | 64 | I draw a single sample from a given standard normal distribution: $Y \sim N(0, 1)$. I also draw $n$ samples from a different normal distribution: $X_i \sim N(\mu, \sigma^2)$ for $i=1...n$. What is the probability distribution of the rank of $Y$ among all $(n + 1)$ samples? i.e. what are the probabilities that: $Y$ is larger than all of $\\{X_i\\}$, larger than all but one of $\\{X_i\\}$, larger than all but two of $\\{X_i\\}$, ... smaller than all of $\\{X_i\\}$?
Note this is not a homework exercise, I'm at work and it's a step in a real-world application!
| Distribution of the rank of a sample from a normal distribution among a set of samples from a different normal distribution | CC BY-SA 4.0 | null | 2023-03-29T14:31:04.717 | 2023-03-29T19:54:32.377 | null | null | 449 | [
"normal-distribution",
"ranking"
] |
611134 | 2 | null | 611060 | 20 | null | The other answers have this exactly correct, but I'll explain why it seems so surprising. The trick is that the way the problem is posed hides the goalposts a little bit. We know we have a tiny sample and a high-confidence CI, but the problem sort of glosses over the fact that when choosing even just 5 individuals, the width of the "max-min" range will usually be quite large. It should not be terribly surprising that we can confidently claim that the median is within some very large range. We are likely treading into the territory of "statistically significant, but practically useless". Even very small samples can be used to make conclusions of arbitrary statistical confidence simply by relaxing the width of the tested interval. Here, the sampling approach naturally gives us a large interval, which might be the surprising part.
A knee-jerk reaction might be to think that a small sample size and high-confidence CI are incompatible and cannot be observed together. But given any sample size at all, you can build a CI of any confidence you want, so long as you make it wide enough. What's surprising here is just how wide of a range you get, on average, when selecting only 5 individuals from the population. Choosing 5 individuals from any distribution at all results in a range that covers, on average, the middle two thirds of the population! And since this method tends to put the range nearer the middle than the extremes of possible values, the chance of containing the median individual is even higher than percentage of the population covered.
With that knowledge, it shouldn't be surprising to define a range using a method that usually covers a majority of the population, and be quite confident that the median is in that range. Yes, we have a method that reliably generates a range that contains the median, but that range is so large that it usually contains most other observed values, too. It's already unlikely to pick 5 individuals and find a range that covers less than 50 percentiles of the population, and even less likely to have that sub-majority range land entirely on one side of the median, which is the only way you can avoid containing the median.
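Both facts above (the min-max range of 5 draws usually contains the median, and on average covers the middle two thirds of the population) are easy to check with a short simulation sketch; any continuous distribution gives the same answer, so a standard normal with median 0 is used here:
```
# Coverage of the median by the min-max range of n = 5 draws,
# and the average fraction of the population inside that range
set.seed(1)
sims <- replicate(1e5, {
  x <- rnorm(5)                      # median of N(0, 1) is 0
  c(min(x) < 0 && max(x) > 0,        # does the range contain the median?
    pnorm(max(x)) - pnorm(min(x)))   # fraction of the population covered
})
rowMeans(sims)   # close to 1 - 2*(1/2)^5 = 0.9375 and (5-1)/(5+1) = 2/3
```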
| null | CC BY-SA 4.0 | null | 2023-03-29T14:31:41.977 | 2023-03-30T18:31:40.547 | 2023-03-30T18:31:40.547 | 76825 | 76825 | null |
611136 | 1 | null | null | 1 | 26 | I have an MLR model created in R, which regresses the dependent variable `y` against the following explanatory variables: age (numerical), hair colour and eye colour (categorical with 5 categories each), birth month (categorical with 12 levels), and gender (male/female).
In building the model, I am only concerned with the coefficient for the gender variable. The reason for including all of the other variables is so that I can see how gender impacts the dependent variable, without the influence of the other factors.
The default in R is to include the first category per variable within the intercept of the model. In my case, this corresponds to, blonde hair, green eyes, January birth month, and female.
In this output, this gave a `male` coefficient of `-10`, with respect to an `intercept` of `200`. This is a relative difference of -5%.
However, I want to refactor the variables so that the default levels used are those that are the most prevalent within my sample (i.e., blue eyes, brown hair, December birth month).
When doing so, I am expecting my coefficients for the variables as well as the intercept to change. This was the case for all variables other than gender. The `male` coefficient is still `-10` but the `intercept` is `150`. This gives a relative difference of -6.66%
As the purpose of this model is to look at the relative influence of gender without the influence of the other factors, I am now unsure if looking at the coefficient relative to the intercept is correct, and if it is, why this differs based on the levels used.
I am fully expecting the gender coefficient to vary, as the raw data for `y` does differ when looking at brown-haired, blue eyed males, compared with blonde-haired, brown eyed, males.
The interpretation of an MLR coefficient is to the best of my knowledge:
>
The change in the response based on a 1-unit change in the corresponding explanatory variable keeping all other variables held constant.
So surely the impact of gender should update following an update to the 'base' variables in the `intercept`?
| Coefficients stayed the same after re-levelling MLR variables | CC BY-SA 4.0 | null | 2023-03-29T14:43:03.927 | 2023-03-29T16:16:57.457 | 2023-03-29T15:09:21.553 | 96005 | 96005 | [
"multiple-regression",
"regression-coefficients",
"dependent-variable",
"explanatory-models"
] |
611137 | 1 | null | null | 0 | 18 | I have N time series of shape `(30*36)` (time step, feature). For each time series, 500 parameters that can be seen as the history of the series have to be added. These parameters are different for each time series.
For now, the way I found to feed a ML algorithm with the time series plus the parameters is to create 500 new constant features per time series corresponding to these parameters.
The final N time series would have the shape `(30*536)`, where the last 500 features are constant in time.
However, this approach has two main issues:
- Important features (the parameters) are constant
- I am increasing the size of my data by a factor of roughly 15 (dimension-wise)
Any ideas of how I could deal with these parameters?
| Time series : add constants for each time series | CC BY-SA 4.0 | null | 2023-03-29T14:51:59.900 | 2023-03-29T14:51:59.900 | null | null | 357352 | [
"machine-learning",
"time-series",
"data-transformation"
] |
611138 | 2 | null | 611114 | 1 | null |
- If I write $\rm{X}$ for the matrix of the last 7 numerical columns, then a PCA should find the eigenvalues and eigenvectors of $\rm{X}^TX$, a 7x7 square matrix corresponding to your 7D parameter space. Whether that or the converse computation on the transposed matrix happens depends on the software you're using, but either way you'll see very quickly how many eigenvalues you get: either 7 or your number of rows...
- Your initial PCA should help you see whether the 3 groups are clustered in different parts of the parameter space, hopefully finding subspace dimensions to which you can attribute a meaning. Individual PCAs on the separate groups will describe the (group-specific) correlation between gene expressions, a valid but completely different question.
In short, you can (and probably should) do both. Keep in mind though that the PCA is sensitive to the normalization of your variables; your choices there will be markedly reflected in the results.
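A minimal sketch of the first, pooled PCA in R (simulated data stands in for the real expression table; the column layout and group sizes are hypothetical):
```
# Pooled PCA on the 7 numeric columns, centered and scaled
set.seed(1)
df <- data.frame(group = factor(rep(1:3, each = 20)),
                 matrix(rnorm(60 * 7), ncol = 7))
pca <- prcomp(df[, -1], center = TRUE, scale. = TRUE)
length(pca$sdev)   # 7 eigenvalues, one per dimension of the parameter space
# plot(pca$x[, 1:2], col = df$group)   # do the 3 groups separate in PC space?
```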
| null | CC BY-SA 4.0 | null | 2023-03-29T15:03:26.847 | 2023-03-29T15:03:26.847 | null | null | 137705 | null |
611139 | 1 | null | null | 2 | 22 | I am faced with a classification problem where I wish to predict 2 binary variables, say x and y. x and y are dependent in the sense that one conditional probability vanishes: P(y=1 | x=0) = 0. I am looking for a model that is able to incorporate this assumption, which is, of course, reflected in my data.
As of now, there are two approaches I can think of:
- Fit 1 model that predicts P(x) on the entire training data. Fit a second model that predicts P(y | x=1 ) by using only the subset of the training data where x=1. To obtain P(y) for the test data, simply multiply the outputs of the two models, i.e. P_hat(y=1) = P_hat(x=1) * P_hat(y=1 | x=1)
- Fit a multi class classification model, modifying the loss function such that predictions of the form (x=0, y=1) increase the loss substantially
I lean towards the first approach as it seems simpler and more theoretically sound in my eyes. The caveat is that, the overall sample is different to the subsample of x=1 data points, and so it would be invalid to apply the second model to the overall data.
Are there any other methods that could treat this problem well? I should note that my data only has a few thousand rows and is somewhat imbalanced (10-20% have x=1, and then ~30% of x=1 data points have y=1)
| Is there a multi-class classification model which can incorporate partly known conditional probabilities of the targets? | CC BY-SA 4.0 | null | 2023-03-29T15:04:59.833 | 2023-03-29T15:04:59.833 | null | null | 360676 | [
"machine-learning",
"classification",
"conditional-probability",
"multi-class"
] |
611141 | 2 | null | 611122 | 3 | null | I'm unsure about lme4, but this is straightforward with nlme.
The syntax is
```
lme(response ~ 1, data = data,
random = list(subject = pdDiag(form = ~ treatment - 1)))
```
where `data` is a data frame with columns `response` ($Y_{is}$), `subject` ($i$), and `treatment` ($t(i)$).
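A toy usage sketch (all names and numbers hypothetical; each subject receives a single treatment, and the random-effect standard deviation differs between treatments):
```
library(nlme)
set.seed(1)
d <- expand.grid(subject = factor(1:20), rep = 1:3)
d$treatment <- factor(ifelse(as.integer(d$subject) <= 10, "A", "B"))
u <- rnorm(20, sd = rep(c(1, 3), each = 10))   # subject effects, variance by treatment
d$response <- u[as.integer(d$subject)] + rnorm(nrow(d))
fit <- lme(response ~ 1, data = d,
           random = list(subject = pdDiag(form = ~ treatment - 1)))
VarCorr(fit)   # separate random-effect variances for treatments A and B
```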
| null | CC BY-SA 4.0 | null | 2023-03-29T15:24:42.707 | 2023-03-30T11:27:36.080 | 2023-03-30T11:27:36.080 | 219012 | 238285 | null |
611142 | 1 | null | null | 0 | 35 | I have three datasets of continuous data. Is there a convenient metric for the "binnedness" of the data? How "lumpy" it is?
I'd like a single number to allow me to distinguish between these three datasets (generated by the R code below).
[](https://i.stack.imgur.com/GZKyz.png)
```
library(tidyverse)
sds <- c(0, 2, 6)
make_dataset <- function(sd) {
  c(
    rnorm(25, mean = 20, sd = sd),
    rnorm(25, mean = 40, sd = sd),
    rnorm(25, mean = 60, sd = sd),
    rnorm(25, mean = 80, sd = sd)
  )
}
dataset_1 <- make_dataset(sds[1])
dataset_2 <- make_dataset(sds[2])
dataset_3 <- make_dataset(sds[3])
tibble(
dataset_1,
dataset_2,
dataset_3
) %>%
pivot_longer(cols = everything(), names_to = "dataset", values_to = "x") %>%
ggplot(aes(x, fill = dataset)) + geom_dotplot(binwidth = 1) +
facet_wrap(~dataset, ncol = 1)
```
| Descriptive statistics: a metric of the "lumpiness" of numeric vector | CC BY-SA 4.0 | null | 2023-03-29T15:30:46.243 | 2023-03-29T16:13:01.447 | 2023-03-29T15:33:41.170 | 384426 | 384426 | [
"r",
"descriptive-statistics",
"entropy",
"binning"
] |
611143 | 1 | 611378 | null | 1 | 46 | I'm trying to regress one set of hurricane numbers per year onto another, as a way to estimate the proportion of the hurricanes that hit the US, and the uncertainty around that proportion, all in the context of using a Poisson distribution for the distribution of the number hurricanes hitting the US.
I'm using the R command:
`reg1=glm(y~0+x,family=poisson(link="identity"),start=0.5)`
where x is the number of hurricanes, and y is the number of hurricanes that hit the US.
This seems to me to be the obvious statistical model to use.
I've got various data-sets. It works for most of them, but fails for those data-sets in which the number of hurricanes in the y variable is quite low. For instance, here's one that fails:
```
x= c(1,1,3,0,1,3,2,1,2,1,1,0,5,6,1,3,5,3,4,2,3,6,7,2,2,5,2,5,4,2,0,2,2,4,6,2,3,7,4)
y= c(1,0,1,0,0,0,1,0,0,1,1,0,1,1,0,0,1,0,0,0,0,3,4,0,0,0,0,0,0,0,0,0,0,0,2,1,0,2,1)
```
The error I get is:
"Error: cannot find valid starting values: please specify some"
But whatever starting values I specify, it fails.
From a mathematical/statistical point of view, it seems to me to be a reasonable question to ask, that should have a solution.
Any suggestions for:
a) how I might get glm to work
b) any alternative model I could use?
Things I've tried:
a) gaussian regression: that works, but doesn't really make sense since y is non-negative integers.
b) poisson regression with log link: that works, but that doesn't make sense either since it doesn't use a constant proportion.
c) I can estimate the proportion mean p and standard error s using
```
p=sum(y)/sum(x)
q=1-p
s=sqrt(p*q/sum(x))
```
but that doesn't really take the Poisson context into account, so doesn't seem like the best solution (and it gives lower results for the standard error s than the glm does in the cases where it works).
thanks
Steve
| Why does my Poisson regression fail? I'm using the R command glm(), x and y are non-negative integers, I'm using a linear link | CC BY-SA 4.0 | null | 2023-03-29T15:44:08.960 | 2023-03-31T13:11:13.707 | null | null | 331423 | [
"regression",
"generalized-linear-model",
"linear",
"poisson-regression"
] |
611144 | 2 | null | 557483 | 0 | null | First of all, if I may be a bit pedantic: you don't calculate the MLE "for the Binomial" because we usually don't estimate distributions, but rather parameters of distributions. In your case you wish to estimate the "p parameter" of the Binomial, i.e. the probability of "success" in the underlying Bernoulli trials.
The estimate $\hat{p} = \frac{\bar{x}}{n}$ seems correct to me, because the MLE of the population mean $\mu$ is the sample mean $\bar{x}$, and the population mean of the Binomial is $\mu = n p$, where $n$ is the number of Bernoulli trials, which I assume to be known in this case.
| null | CC BY-SA 4.0 | null | 2023-03-29T15:44:54.367 | 2023-03-29T15:44:54.367 | null | null | 43120 | null |
611146 | 1 | null | null | 0 | 17 | Can Kaplan-Meier plots be used for feature selection when building a multivariable survival model (e.g. Cox PH)? Would a visual assessment (no separation/crossing curves) suffice or would it have to be based on something more formal like log-rank test?
I'm in a situation where I have access to very large amounts of clinical data and I know that most of it has some relevance in predicting my outcome of interest. I will be using PCA to remove collinearity and reduce the number of features—but am limited in the model I can use to an extension of the Cox proportional hazards model (my outcome is interval-censored...). Can I use the outputs of a Kaplan Meier plot to decide if a variable is worth including or not?
Or is it just as misguided as when multiple regression models are built using features that give p values < 0.05 in univariate models?
| Kaplan-Meier plots as feature selection method? | CC BY-SA 4.0 | null | 2023-03-29T15:48:12.693 | 2023-03-29T16:19:49.650 | 2023-03-29T16:19:49.650 | 44269 | 265390 | [
"survival",
"feature-selection",
"cox-model",
"kaplan-meier"
] |
611147 | 2 | null | 530925 | 1 | null | The earliest reference I can find is a 1997 paper as shown; PubMed is a powerful tool for analyzing trends in the academic literature. I obviously can't search "early stopping" apropos of nothing because it's apparently quite intuitive to terminate any laborious procedure early when the desired precision is achieved, whether in ecological sampling, chemical and biochemical processes, medical imaging, and so on.
But which terms to combine to address the specific question becomes vague, and a matter of capturing exogenous trends. Note: Cross-validation is not an algorithm which guarantees any form of "convergence". "Early stopping" according to these searches is not well defined. You may, for instance, put a "cap" on the number of iterations allowed, or allow a more generous "convergence" criterion. Regularization and cross-validation are different concepts, too. This question is tagged "machine learning" which, even itself, is a late entry to the field of methods.
Prior to ML, there are, for instance, "one-step" estimators: estimators such as GLS, GLMs, NLMMs, and non-linear least squares, which are computed by iterative processes such as EM, Newton-Raphson, and BFGS, and whose theoretical results are known. A one-step estimator merely performs one iteration of the algorithm and, in some cases, produces an interesting and well-behaved estimator whose variance is easy to express and which provides consistent tests of hypotheses. Theoretical work on one-step estimators has been explored since the 1980s.
[](https://i.stack.imgur.com/3mdy2.png)
| null | CC BY-SA 4.0 | null | 2023-03-29T15:50:52.373 | 2023-03-29T15:50:52.373 | null | null | 8013 | null |
611148 | 2 | null | 611136 | 1 | null | The coefficient estimate on male is simply the marginal effect of male, relative to female. It doesn't matter if you change the baseline hair, eye color, etc., it will still give you the same marginal effect of male (in this case, being male is associated with 10 units less of y).
If you want an estimate of percent difference, you need to take the natural log of y, $y^* = \ln(y)$. Note that in order to log y, all values of y must be greater than 0.
If you log y and not x (which will be the case here since you can't log x=0), then you can interpret the effect as a percentage in the following way. Let $\beta_{male}$ be the estimated coefficient on male from the regression using $\ln(y)$ as the dependent variable. Then the percent change in y associated with being male is $100 \times \left(\exp\{\beta_{male}\}-1 \right)$. This estimated percentage should be the same regardless of how you identify the base category for hair, eye color etc.
When choosing which regression to use, y or $\ln(y)$, you should inspect the residuals and choose the model that conforms most closely to the OLS assumptions (homoskedastic, etc.)
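For instance, the conversion to a percentage is a one-liner (the coefficient value here is purely hypothetical):
```
beta_male <- -0.069                # hypothetical coefficient from lm(log(y) ~ ...)
100 * (exp(beta_male) - 1)         # about -6.7: percent change in y for males
```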
| null | CC BY-SA 4.0 | null | 2023-03-29T15:52:12.310 | 2023-03-29T16:16:57.457 | 2023-03-29T16:16:57.457 | 384374 | 384374 | null |
611149 | 1 | null | null | 0 | 26 | I am attempting to test if the correlational structure of a model-fit parameter is enough to explain some effects in my data. To do this, I have fit my model and then wish to generate a new set of model parameters with the same correlation matrix as the fit data, but that are otherwise random. The problem I am running into is that my model parameters are fit for the range [0,1], and whenever I use methods to generate correlated data (like via [this method](https://stats.stackexchange.com/questions/32169/how-can-i-generate-data-with-a-prespecified-correlation-matrix)) I've found here on CV, my newly generated matrix is no longer in the original range [0,1]. While it does have a very similar correlation matrix, its range is way off. Is there a simple step to fix this I'm missing, or is this an issue fundamental to the method of projecting by the cholesky decomposed matrix? If so, is there an alternative method I can use that preserves range?
Here's my very simple matlab code so far:
```
corr_real = corr(model_params')
generated_model_params = rand(numcolumns,numrows) * chol(nearestPSD(corr_real))
generated_corr = corr(generated_model_params)
```
I've tried making my initial, random generated matrix with something like `datasample(model_params,numcolumns)` to make sure the random matrix has the same mean and variance as the original, since my model parameters are not uniformly distributed, and this has not worked. I've also tried normalizing the resulting matrix after the fact with `generated_model_params_normalized = normalize(generated_model_params,'range')` and this totally screws up the correlation matrix.
| Generating random data with same correlation matrix, mean, and range as original data | CC BY-SA 4.0 | null | 2023-03-29T15:56:09.060 | 2023-03-29T15:57:53.197 | 2023-03-29T15:57:53.197 | 384431 | 384431 | [
"correlation",
"matlab"
] |
611150 | 1 | 611159 | null | 1 | 30 | I coded a version of the adaptive lasso that does model selection for ARCH(q) (hopefully GARCH(q,p) soon) processes.
It optimizes over a grid of 4 parameters.
Right now it works with:
```
LamdaT <- seq(.5,1.7,by=.2)
gamma0 <- 2#seq(.25,1.75,by=.25)
gamma1 <- seq(.25,1.75,by=.25)
gamma2 <- seq(.25,1.75,by=.25)
```
Is there a way to narrow down the range or the step size so that the runtime decreases? Or is trial and error the best I can do?
| is there a way to narrow down the grid of a Lasso regression? | CC BY-SA 4.0 | null | 2023-03-29T15:59:25.623 | 2023-03-29T17:23:58.833 | null | null | 384310 | [
"optimization",
"lasso",
"iteration-methods",
"grid-approximation"
] |
611151 | 2 | null | 611143 | 2 | null | This is probably because there are issues with estimating parameters that are on the boundaries, eg $\lambda=0$ for a Poisson($\lambda$). with a log link this doesn't become an issue since $\log\lambda =x$ is 0 only for $x\to -\infty$, but for identity link like you use this becomes an issue.
| null | CC BY-SA 4.0 | null | 2023-03-29T16:06:27.580 | 2023-03-29T16:06:27.580 | null | null | 17661 | null |
611152 | 1 | null | null | 1 | 34 | Say a policy was introduced in 1 Jan 2013. I have yearly data from 2012 to 2019 and would like to implement a difference-in-differences model. I am exploiting variation in female intensiveness of firms prior to the implementation of the policy given by $ShareFemale_{i2012}$. $PostAmendment_t$ is a dummy variable which takes the value of 1 if the year is after 2013.
Would it be expressed as such:
$$
Y_{it} = \beta_1 ShareFemale_{i2012} + \beta_2 \sum_{t = 2012}^{2019} PostAmendment_t +\sum_{t = 2012}^{2019} \delta (ShareFemale_{i2012} * PostAmendment_t) + \alpha_i + \tau_t + \epsilon_{it}
$$
or without the time fixed effects $\tau_t$. Thank you.
| How to express as an equation DiD with multiple time periods with one treatment timing | CC BY-SA 4.0 | null | 2023-03-29T16:09:44.390 | 2023-03-29T16:09:44.390 | null | null | 384434 | [
"econometrics",
"difference-in-difference"
] |
611153 | 2 | null | 611142 | 0 | null | It's not quite a single number, but you could do a quick cluster analysis and look at the within- and between-cluster sums of squares.
From the output of `kmeans(dataset_1, centers = 4, nstart = 25)` (and similarly for the other two datasets), we have:
```
Within cluster sum of squares by cluster:
[1] 0 0 0 0
(between_SS / total_SS = 100.0 %)
```
```
Within cluster sum of squares by cluster:
[1] 127.60912 76.78191 121.07307 69.15073
(between_SS / total_SS = 99.2 %)
```
```
Within cluster sum of squares by cluster:
[1] 593.5224 722.0721 481.3305 449.7612
(between_SS / total_SS = 95.5 %)
```
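If a single number per dataset is needed, the ratio can be wrapped in a small helper (a sketch; `lumpiness` is an ad hoc name, and the simulated data below mimic the question's four clumps with increasing spread):
```
lumpiness <- function(x, k = 4) {
  km <- kmeans(x, centers = k, nstart = 25)
  km$betweenss / km$totss        # close to 1 for strongly binned data
}
set.seed(1)
sapply(c(0.01, 2, 6), function(s)
  lumpiness(rnorm(100, mean = rep(c(20, 40, 60, 80), each = 25), sd = s)))
```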
| null | CC BY-SA 4.0 | null | 2023-03-29T16:13:01.447 | 2023-03-29T16:13:01.447 | null | null | 238285 | null |
611154 | 2 | null | 611146 | 0 | null | If you have large amounts of survival data, in the sense of having a large number of events, then you should include as many features as is reasonable without overfitting. You usually can estimate 1 coefficient per 15 or so events in the data set, but some features (e.g., continuous predictors, those involved in interactions) might require more than 1 coefficient for proper modeling.
The large model has the best chance of making sure that you are adjusting properly for outcome-associated variables that might not be of primary interest, and to avoid the [omitted-variable bias](https://en.wikipedia.org/wiki/Omitted-variable_bias) that can occur in a survival model if any outcome-associated variable isn't included in the model.
In that context, pre-selection of individual features based on Kaplan-Meier curves isn't a good idea.
I'd recommend careful study of Frank Harrell's [Regression Modeling Strategies](https://hbiostat.org/rmsc/), which covers these issues both in the general context of regression and specifically for survival models.
Briefly, the idea is to decide first how many coefficients you can try to estimate without overfitting, based on your number of events. That's how many "degrees of freedom" you have to spend on your model. Then decide, based on subject-matter understanding, which predictors need the most flexibility in fitting and which might be combined together in what he calls "data reduction." Spend those "degrees of freedom" accordingly in setting up the model structure. Then run the full model, perhaps with some later simplification.
| null | CC BY-SA 4.0 | null | 2023-03-29T16:15:11.603 | 2023-03-29T16:15:11.603 | null | null | 28500 | null |
611156 | 1 | null | null | 1 | 21 | Here is a simple case of my problem. I want to predict the temperature at 5 meters for the next days. I am using two features, temperature and humidity (I assumed that I have the humidity for tomorrow). So, I have a problem with normalization.
Here is the shape of my data set. The training data covers 10 days. So, what is the best way to normalize the data? In my model, the predicted value is also in the range (0-1); how can I un-normalize the predicted value? Can you help me with that? Thank you.
```
temperature_train = np.random.randint(3, size = (5, 24*10))
humidity_train = np.random.randint(3, size = (5, 24*10))
humidity_test = np.random.randint(3, size = (5, 24))
```
| Normalize the input to dl model and then un-normalize the predicted values | CC BY-SA 4.0 | null | 2023-03-29T16:50:25.343 | 2023-03-29T16:50:25.343 | null | null | 356849 | [
"machine-learning",
"neural-networks"
] |
611157 | 1 | null | null | 0 | 40 | I made a generalized linear model with a Gaussian family and an inverse link.
```
glm(lone_total ~ class + age + basic_needs_covered_id,
data = mod_data_lone,
family = gaussian(link = "inverse")
)
```
I have chosen this family and link because of the diagnostic plots, which were bad for `inverse.gaussian(link="1/mu^2")` or `Gamma(link="log")` (See distribution of outcome variable in the image below)
The results with the following coefficients:
```
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.784873 0.066836 11.743 < 2e-16 ***
class2 -0.070565 0.067936 -1.039 0.300
class3 0.171242 0.203703 0.841 0.401
age 0.003912 0.003444 1.136 0.257
basic_needs_covered_id -0.110093 0.026307 -4.185 4.1e-05 ***
```
How do I interpret the estimates? I tried to interpret them like betas from Gamma or Poisson models, or to apply log() and exp() to them, but nothing really made sense to me.
Descriptives of the Data
```
> str(mod_data_lone)
'data.frame': 229 obs. of 5 variables:
$ lone_total : num 0.01 3 3 1 1 1 0.01 3 0.01 0.01 ...
$ class : Factor w/ 3 levels "1","2","3": 3 2 1 3 1 1 1 1 1 2 ...
$ age : num -14.44 -8.27 7.38 NA 13.99 ...
$ basic_needs_covered_id: num 0 4 0 2 1 2 0 1 0 1 ...
$ education_id : num 6 8 6 6 6 2 6 6 6 6 ...
```
The following image shows the barplot and density distributions and means by class of the observations:
[](https://i.stack.imgur.com/qponb.png)
## -
Another table I got is the following, which shows different types of B (none of which really make sense to me...)
[](https://i.stack.imgur.com/haw75.png)
## -
Thanks for any help!
| How to interpret beta coefficients of a generalized linear model using inverse gaussian | CC BY-SA 4.0 | null | 2023-03-29T17:07:20.910 | 2023-03-30T12:31:50.527 | 2023-03-30T09:40:50.873 | 383360 | 383360 | [
"generalized-linear-model",
"interpretation",
"regression-coefficients",
"inverse-gaussian-distribution"
] |
611158 | 1 | 611194 | null | 0 | 76 | I am struggling with effect size calculations for the Wilcoxon test for one sample in R.
My data:
` item1 <- c(1,5,3,4,4,2,3,2,1,4,5,4,3,1)`
` dt <- data.frame(id = 1:14, item1 = item1)`
My Hypothesis:
$H_0: \theta \le 3 \quad vs. \quad H_1: \theta > 3$
With this data I get 3 null differences ($item1-3$) and thus $n$ becomes $11$. Hand calculations following Wilcoxon's original paper give $W=\min(W^+,W^-)=32$.
If I run the Wilcox_test from package `rstatix` using
` dt|>rstatix::wilcox_test(item1~1, mu=3, alternative="greater")`
I get:
$n=14; w=32; p=0.555$ (first question: shouldn't $n$, after discounting the $D=item1-3=0$ cases, be 11 instead of 14?)
If I run the effect size as
```
dt|>rstatix::wilcox_effsize(item1~1, mu=3)
```
I get $effsize=0.0085$ with $n=14$.
This is $r=\frac{Z}{\sqrt n}$ where $z=\dfrac{W-n*(n+1)/4}{\sigma_w}$ where $\sigma_W=\sqrt{\frac{n(n+1)(2 n+1)}{24}-\sum_{i=1}^g \frac{e_i^3-e_i}{48}}$
My questions:
- Should the effect size use only the n<-sum(Di!=0) that is $11$, instead of n<-nrow(dt) =$14$?
- If I calculate the rank biserial correlation with rob<-(2 * (W / totalRankSum)) - 1 I get $rob=-0.030$ which is close to the $r=-0.027$ calculated with $n=11$, not $n=14$.
- So, I think that the $Z$ and $r$ should be calculated with sample size corrected for $Di=0$. I can't find a definitive answer in Maciej Tomczak and Ewa Tomczak. The need to report effect size estimates revisited. An overview of some recommended measures of effect size. Trends in Sport Sciences. 2014; 1(21):19-25. They say that $n$ should be the $n$ used for the $Z$ calculation, but say nothing about correcting for null differences....
What do you think? Can you give me a final reference to settle the question of which $n$ to use?
| One Sample Wilcoxon effect size | CC BY-SA 4.0 | null | 2023-03-29T17:15:37.107 | 2023-03-30T14:18:52.703 | 2023-03-29T23:19:50.067 | 44269 | 212308 | [
"effect-size",
"wilcoxon-signed-rank"
] |
611159 | 2 | null | 611150 | 2 | null | The best approach would be not to use a grid search at all. Nearly any other algorithm would be more efficient, starting from [random search](https://stats.stackexchange.com/q/160479/35989) and ending with specialized optimization algorithms doing Bayesian optimization or using other techniques (see e.g. [hyperopt](https://github.com/hyperopt/hyperopt) or [optuna](https://optuna.org/)). They would usually do the job faster, giving you better-quality results.
If a priori you don't have a good idea about the grid to search, the way to go would be to test some values and use them to narrow down the search space, then repeat the procedure recursively. Roughly speaking, this is what those specialized algorithms would be doing, but in a way that is proven to be optimal (vs ad hoc).
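For the grid in the question, a random-search sketch could look as follows (`fit_loss` is a hypothetical stand-in for whatever criterion the adaptive-lasso fit minimizes):
```
set.seed(42)
n_trials <- 50
cand <- data.frame(LamdaT = runif(n_trials, 0.5, 1.7),
                   gamma1 = runif(n_trials, 0.25, 1.75),
                   gamma2 = runif(n_trials, 0.25, 1.75))
# losses <- apply(cand, 1, function(p) fit_loss(p[1], p[2], p[3]))
# cand[which.min(losses), ]   # best point; optionally search again around it
```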
| null | CC BY-SA 4.0 | null | 2023-03-29T17:17:29.317 | 2023-03-29T17:23:58.833 | 2023-03-29T17:23:58.833 | 35989 | 35989 | null |
611160 | 2 | null | 611032 | 1 | null | This issue is addressed in chapter 11 of the book you referenced ([https://otexts.com/fpp3/hierarchical.html](https://otexts.com/fpp3/hierarchical.html)).
Basically, if you care about forecasts for the sub-series, you can (1) forecast the total and split it amongst the sub-series, or (2) forecast each sub-series individually and aggregate them to the total. Or, you can do both at once in the unified method discussed in that chapter.
If you go with approach (2), a simple starting place might be pooling all of the series together and estimating one common ARIMA model. Of course, this may not be a good idea if your series have very different ACFs. The opposite (separate ARIMA models for each series) would be the other extreme. Most likely, there is some way to group series and estimate their AR coefficient jointly.
One alternative to explicit grouping/pooling is to use a factor model (or principal components), extracting the first $k<N$ factors (where $N$ is the total number of series), and using those to form forecasts for the individual series.
For two series ($y_1$ and $y_2$), with one factor ($f_t$) we would have:
$$
\begin{bmatrix}y_{1,t} \\ y_{2,t} \end{bmatrix} = \begin{bmatrix}\alpha_1 \\ \alpha_2 \end{bmatrix} + \begin{bmatrix}\beta_1 \\ \beta_2 \end{bmatrix} f_t + \varepsilon_t \\
f_t = \phi f_{t-1} + v_t
$$
Basically, the two series are driven by the same unobserved common factor. Once you estimate the system (either by MLE or Bayesian methods) you can forecast the factor and plug those forecasts into the top equation. A rough estimate of this model (I can't speak to its consistency properties) would be to extract the first PC of $Y = [y_1 \ y_2]$ and plug that into the top equation. Then estimate an AR(1) on the first PC. Forecast the first PC according to the AR(1), and plug in the forecasts to the top equation.
If you were using the first $k$ factors in a larger context ($N>2$), you would simply extract the first $k$ factors, use them to get estimates of the parameters in the top equation, fit AR(1) models to each of the $k$ factors, forecast each of the $k$ factors, and then plug those forecasts into the estimated top equation to get forecasts for each of your series.
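A rough numerical sketch of that PC-plus-AR(1) recipe, on simulated data (the sample size, loadings, AR coefficient, and horizon are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
T, h = 200, 5  # sample length and forecast horizon (illustrative)

# Simulate two series driven by a common AR(1) factor.
f = np.zeros(T)
for t in range(1, T):
    f[t] = 0.8 * f[t - 1] + rng.normal()
Y = np.column_stack([1.0 + 2.0 * f, -0.5 + 1.5 * f]) + rng.normal(size=(T, 2))

# Step 1: first principal component of the centered panel.
Yc = Y - Y.mean(axis=0)
_, _, Vt = np.linalg.svd(Yc, full_matrices=False)
fhat = Yc @ Vt[0]

# Step 2: regress each series on the estimated factor (the top equation).
X = np.column_stack([np.ones(T), fhat])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)  # rows: alphas, betas

# Step 3: AR(1) on the factor, then iterate forecasts forward.
phi = (fhat[:-1] @ fhat[1:]) / (fhat[:-1] @ fhat[:-1])
f_fc = [fhat[-1]]
for _ in range(h):
    f_fc.append(phi * f_fc[-1])
f_fc = np.array(f_fc[1:])

# Step 4: plug the factor forecasts into the top equation.
Y_fc = coef[0] + np.outer(f_fc, coef[1])
print(Y_fc.shape)  # h-step forecasts for both series
```

The sign of the first PC is arbitrary, but the estimated AR coefficient and the resulting forecasts are unaffected by a sign flip, since the loadings flip with it.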
| null | CC BY-SA 4.0 | null | 2023-03-29T17:21:03.780 | 2023-03-29T17:21:03.780 | null | null | 384374 | null |
611161 | 2 | null | 610733 | 2 | null | I believe you can use simple t-tests from the following regressions.
First, to test the $k^{\text{th}}$ ACF value, fit a regression
```
lm(Y~1+Ylag)
```
where `Ylag` is the $k^{\text{th}}$ lag you are interested in testing. Then look at the p-value for the coefficient on `Ylag`.
Second, to test the $k^{\text{th}}$ PACF value, fit an AR(k) model:
```
Arima(Y,order=c(k,0,0))
```
And check the p-value on the coefficient on the $k^{\text{th}}$ lag.
However, I would not use statistical significance to determine variable inclusion. Instead, using AIC or BIC will typically result in a better model. You may want to set the following options in your `auto.arima` call:
```
auto.arima(Y,stepwise=FALSE,approximation=FALSE,trace=TRUE)
```
- stepwise=FALSE will fit every possible ARIMA model under the default max p, q, and p+q constraints, not taking any shortcuts.
- approximation=FALSE will use the actual likelihood instead of an approximation to the likelihood when computing AIC or BIC.
- trace=TRUE will print the model results as it tries each model. It may take up to a few minutes (especially if it is trying seasonal models), so this is helpful as it shows you that the command is actually running and making progress.
| null | CC BY-SA 4.0 | null | 2023-03-29T17:43:52.480 | 2023-03-29T17:43:52.480 | null | null | 384374 | null |
611162 | 1 | null | null | 0 | 19 | I have i.i.d. training data of the form $(X_i,A_i,Y_i)$, where $X_i$ is drawn from $\mathcal{N}(0,9)$, $A_i$ is drawn from the uniform distribution on $\left\{ 1,2,3,4\right\}$, and $Y_i$ is drawn, conditionally to $A_i$ and $X_i$, from $\mathcal{N}(A_i * X_i,1)$.
I want to estimate the 0.1-quantile of the distribution of $Y$ given $X$ and $A$. I wanted to do this by using the QuantileRegressor method from sklearn ([https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.QuantileRegressor.html](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.QuantileRegressor.html)), with $(X_i,A_i)$ as the input and $Y_i$ as the output. But this does not seem to work: my estimated quantile appears to only depend on $X$ but not on $A$ (as if the model ignored the training data $A_i$ when regressing).
Below is my code. I first generate my training data of size $100$, and then attempt to estimate the conditional quantiles with QuantileRegressor.
```
import numpy as np
from sklearn.linear_model import QuantileRegressor

n = 100
x = np.random.normal(loc=0, scale=9, size=n)
a, y = np.random.choice([1, 2, 3, 4], n, p=[1/4, 1/4, 1/4, 1/4]), np.zeros(n)
for i in range(n):
    y[i] = np.random.normal(loc=x[i] * a[i], scale=1)
dataset = (x, a, y)
X, Y = np.stack((dataset[0], dataset[1]), axis=1), dataset[2]
reg = QuantileRegressor(quantile=0.1).fit(X, Y)
```
Then, for instance, `reg.predict([[2,1],[2,3]])` outputs `array([-10.11344969, -10.11344969])`, which suggests that the prediction does not depend on $A$. This does not make sense since $Y_i$ is drawn from $\mathcal{N}(A_i * X_i,1)$. What am I doing wrong?
| QuantileRegressor from sklearn seems to ignore part of my input data | CC BY-SA 4.0 | null | 2023-03-29T18:00:42.487 | 2023-03-29T18:01:53.750 | 2023-03-29T18:01:53.750 | 384404 | 384404 | [
"scikit-learn",
"quantile-regression"
] |
611163 | 2 | null | 335983 | 2 | null | To a large extent, the desire to apply artificial balancing comes from using improper scoring rules, chiefly accuracy.
In particular, it seems that people realized that a model fit to strongly imbalanced data could achieve an impressive-looking $98\%$ accuracy yet underperform a model that always predicts the majority category, such as when the majority category represents $99\%$ of all observations. Consequently, people seem to have changed the class ratio so that such a high accuracy would be more reflective of strong performance: if you balance the classes, then predicting one class every time results in $50\%$ accuracy (or worse, if there are three or more classes), so scoring $98\%$ would be quite an improvement.
I see this as a major drawback of accuracy, and a simple remedy, [comparison of error rates](https://stats.stackexchange.com/a/605451/247274), makes it more comparable to $R^2$ in regression and might be a more useful measure of performance. I show in the link what happens when you have an accuracy score that looks high but underperforms predicting the majority class every time, and this statistic indeed flags that as poor performance.
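A minimal sketch of the idea, assuming one simple form of the comparison (the statistic in the linked answer may be defined slightly differently):

```python
def error_rate_skill(acc_model, majority_share):
    """Compare the model's error rate to the always-predict-the-majority
    baseline: positive means better than baseline, negative means worse."""
    err_model = 1 - acc_model
    err_baseline = 1 - majority_share
    return 1 - err_model / err_baseline

# 98% accuracy when the majority class is 99% of observations:
print(error_rate_skill(0.98, 0.99))  # negative: worse than the naive baseline
```

Here the impressive-sounding $98\%$ accuracy gets flagged as poor performance, exactly as described above, because it doubles the baseline error rate.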
However, accuracy, comparison of error rates, and some classics like sensitivity, specificity, and $F_1$ score, all have the downside of being based on hard classifications. Most "classification" models, such as logistic regressions and neural networks, do not output predicted class labels. Instead, they output values on a continuum that often can be interpreted as a probability, and then you can make decisions based on those probabilities. Importantly, those decisions can depend on factors other than the probabilities (such as features in the model... one might be more willing to make a certain decision for the usual people than for a high-roller) and can be [more numerous than the categories](https://stats.stackexchange.com/questions/312119/reduce-classification-probability-threshold/312124#312124).
There are exceptions to the upcoming statement, such as in [data collection](https://stats.stackexchange.com/a/559317/247274) or perhaps for [computational reasons when it comes to numerical optimization of neural networks](https://stats.stackexchange.com/a/610042/247274), but the apparent problems when it comes to class imbalance largely do not manifest when models are evaluated on the continuous predictions.
You are correct to point out that representative samples and oversampling contradict each other, but oversampling is largely a solution to a non-problem. For the most part, I am with you that it makes sense to develop models on representative data. If a category is rare, then we should be skeptical that an observation belongs to it by assigning a low [prior probability](https://stats.stackexchange.com/a/583115/247274) and making the features have to shine through to prove that there is a strong chance that the observation indeed belongs to that category.
This link is already elsewhere in this answer, but many of the claimed issues with class imbalance are debunked [here](https://stats.stackexchange.com/questions/357466/are-unbalanced-datasets-problematic-and-how-does-oversampling-purport-to-he) by our Stephan Kolassa.
| null | CC BY-SA 4.0 | null | 2023-03-29T18:24:09.390 | 2023-03-29T18:24:09.390 | null | null | 247274 | null |
611164 | 2 | null | 285801 | 2 | null | I share your skepticism that there is a tradeoff. A typical way to think about the bias-variance decomposition of MSE, such as in regularized regression, is that we accept a bit of bias in our estimator in exchange for a large reduction in variance. However, we do this to achieve lower MSE, not to maintain the MSE. Thus, while I understand the "trade" being used to describe trading your unbiased, high variance estimator for a slightly biased, low variance estimator, "tradeoff" to me implies keeping MSE constant, and I try not to describe it as a "tradeoff", preferring to refer to a bias-variance "decomposition".
| null | CC BY-SA 4.0 | null | 2023-03-29T18:40:50.733 | 2023-03-29T18:40:50.733 | null | null | 247274 | null |
611165 | 1 | null | null | 0 | 10 | TLDR Question:
What can the AUC performance of randomly permuting class labels in a classification tell us?
Setup:
I have a classification task where I am predicting a clinical outcome considering 62 features and 103 patients. Classes are fairly balanced (45-55). I am using forward feature selection with logistic regression to choose 5 features. I evaluate performance using 400 iterations of bootstrapping. The out of bag mean AUC of these iterations is 0.76.
Experiment:
As a way to form a null hypothesis test, I considered the same approach as above, but I randomly permuted the class labels 60 times. Then, I considered the mean AUCs of these 60 iterations. The mean and std of these 60 were 0.686 and 0.036. I compared these 60 AUCs against the above 0.76 and found that only 1 (1.667%) was larger. This is smaller than the traditional 5% boundary, and thus it seems the biomarker trained above is a true discovery.
Questions:
- Permuting the class labels has an AUC > 0.5. Does that difference tell me something about my setup (bias/variance)?
- Does the difference between the null mean AUC and the true label AUC tell me anything? For example, might it say anything about what the future performance may be in an independent cohort?
[Something similar was previously asked](https://stats.stackexchange.com/questions/201415/randomizing-class-labels-during-classification-to-asses-the-feature-selection-re), but I'm not sure if the original (and my) question was answered (or at least I didn't understand the answer).
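For what it's worth, the comparison described above ("only 1 of 60 was larger") is usually turned into a p-value with the add-one permutation estimator; a sketch with made-up stand-ins for the numbers in the question (the null AUCs are simulated here only for illustration; in practice they come from the label permutations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the quantities in the question.
observed_auc = 0.76
null_aucs = rng.normal(0.686, 0.036, size=60)

# The usual "add one" permutation p-value: the observed statistic is counted
# as one of the permutations, so p can never be exactly zero.
p = (1 + np.sum(null_aucs >= observed_auc)) / (1 + len(null_aucs))
print(round(p, 3))
```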
| What can the AUC performance of randomly permuting class labels in a classification tell us? | CC BY-SA 4.0 | null | 2023-03-29T18:41:22.397 | 2023-03-29T18:41:22.397 | null | null | 26242 | [
"hypothesis-testing",
"logistic"
] |
611166 | 2 | null | 549153 | 0 | null | The ROC curve measures how well the model can distinguish between the two categories: the higher the AUC score, the better the ability to distinguish (at least a bit loosely speaking). This comes from [the fact that the ROCAUC is related to running a two-sample Wilcoxon Mann-Whitney U test on the continuous predictions made by your model](https://stats.stackexchange.com/q/206911/247274), such as predicted probabilities or log-odds of a logistic regression.
You can get the test statistic for the hypothesis test from knowing the AUC and the number of members of each category. Then you can get a viable p-value.
There are other ways to test if your categories have different distributions of the features, but this seems totally legitimate to me. If the model struggles to distinguish between the categories, then you get a low AUC close to $0.5$ and a high p-value, as you should. If the features provide valuable information for distinguishing between the categories, model performance will be strong, leading to a high AUC and a small p-value.
For this particular use case, you might need more than just a small improvement over baseline (AUC $=0.5$) performance to warrant clinical use. The specific predicted probabilities might be of use there, and you can run tests directly on those (such as a test of a full logistic regression against the intercept-only model nested within it).
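A sketch of that AUC-to-test-statistic conversion (normal approximation to the Mann-Whitney U statistic, with no tie correction; the helper name is made up):

```python
from math import erf, sqrt

def auc_pvalue(auc, n_pos, n_neg):
    """Two-sided p-value for H0: AUC = 0.5, via the normal approximation
    to the Mann-Whitney U statistic (ignores ties; a rough sketch)."""
    u = auc * n_pos * n_neg
    mu = n_pos * n_neg / 2
    sd = sqrt(n_pos * n_neg * (n_pos + n_neg + 1) / 12)
    z = (u - mu) / sd
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * P(Z > |z|)
    return z, p

print(auc_pvalue(0.5, 50, 50))  # z = 0, p = 1: no discrimination
print(auc_pvalue(0.9, 50, 50))  # z is about 6.9: clearly better than chance
```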
| null | CC BY-SA 4.0 | null | 2023-03-29T18:54:01.267 | 2023-03-29T18:59:49.963 | 2023-03-29T18:59:49.963 | 247274 | 247274 | null |
611169 | 1 | 611260 | null | 0 | 27 | Dears,
Let's assume, that I have a study like this:
- longitudinal, 3 time points, T1, T2, T3. Let's assume T1 is post-intervention.
- 2 interventions, A and B
- 2 categorical covariates: Cov1 (2 levels: Cov1.1, Cov1.2) and Cov2 (2 levels: Cov2.1, Cov2.2)
The covariates DO NOT interact with each other.
I want the interaction between Time and Intervention and the covariates to study the impact of the covariates on the intervention at each time point:
- Whether there are differences between interventions at each time point
- Whether these differences at all time points are affected by the baseline values of Cov1 and Cov2 (the baseline covariates may have a different impact at subsequent time points, but they themselves don't change over time).
I thought about something like:
`Response ~ Intervention * Time * Cov1 + Intervention * Time * Cov2`
I also saw somewhere also this shortcut: `Response ~ (Intervention * Time) * (Cov1 + Cov2)`
Or maybe it should be `Response ~ Intervention * Time + Intervention * Time * Cov1...` ?
or `Response ~ Intervention * Time + Intervention : Time : Cov1 + Cov1 + Intervention : Time : Cov2 + Cov2` ?
This is exploratory analysis I will be doing with GEE followed by the analysis of contrasts (in R: emmeans). My data are complete.
My output should look like (using emmeans):
```
Strata: Time T1:
Cov1.1-TrtA vs. Cov1.1-TrtB
Cov1.2-TrtA vs. Cov1.2-TrtB
Cov2.1-TrtA vs. Cov2.1-TrtB
Cov2.2-TrtA vs. Cov2.2-TrtB
```
PS: later the pairs for each covariate Cov1 (and Cov2) will be compared to see if in overall Cov1 (and Cov2) affects the effect TrtA vs. TrtB), I mean: `[Cov1.1-TrtA vs. Cov1.1-TrtB] vs. [Cov1.2-TrtA vs. Cov1.2-TrtB ]` and same for Cov2.
```
Strata: Time T2:
Cov1.1-TrtA vs. Cov1.1-TrtB
Cov1.2-TrtA vs. Cov1.2-TrtB
Cov2.1-TrtA vs. Cov2.1-TrtB
Cov2.2-TrtA vs. Cov2.2-TrtB
Strata: Time T3:
Cov1.1-TrtA vs. Cov1.1-TrtB
Cov1.2-TrtA vs. Cov1.2-TrtB
Cov2.1-TrtA vs. Cov2.1-TrtB
Cov2.2-TrtA vs. Cov2.2-TrtB
```
You know my goals, now I need your advice of which formula best reflects my needs. I cannot provide the data, it's just about general idea. I used R syntax, because it's easiest for me to understand the syntax. But if you use any other software, SAS, SPSS, you can provide your own. I need the general idea on how to set up the interactions properly.
---
EDIT: I just checked, that `Response ~ Intervention*Time*(Cov1 + Cov2)` gives me sensible model coefficients and it's equivalent to `Response ~ Intervention + Time + Intervention:Time + Cov1 + Cov1:Time + Cov1:Intervention + Cov1:Time:Intervention + (same for Cov2)`
The second options with `Response ~ Intervention*Time + Cov1 + Intervention:Time:Cov1 + Cov2 + Intervention:Time:Cov2` gives me weird coefficients, I missed certain higher-level interactions.
| How to define interaction between 2 categorical covariates and Time and Intervention in a longitudinal model? (in R) | CC BY-SA 4.0 | null | 2023-03-29T19:13:54.687 | 2023-03-30T13:12:52.853 | 2023-03-30T01:05:01.307 | 384446 | 384446 | [
"regression",
"generalized-linear-model",
"interaction",
"generalized-estimating-equations"
] |
611170 | 1 | 611627 | null | 4 | 182 | TL;DR: I am working with binary classifications. I have different models I want to compare their performance out of the box. I read that accuracy is a poor metric, and Brier score or log loss should be used instead. However, I also read that the Brier score should not be used when comparing logistic regression vs. random forest, and it should be mainly used as a metric when tuning/changing the parameters of a single model. Is this statement true? Is it wrong to use Brier to compare the performance of different models/approaches?
---
Full background to my research question:
Hi all,
I have a dataset composed of two groups (disease type 1 vs. type 2) and 50 samples per group. For each sample, I have around 7000 features being measured. Importantly, identifying type 2 is key, and I am willing to "pay the price" of getting some type 1 as false positives.
My initial plan was to run feature selection and machine learning to classify these groups. After reading a bunch of stuff here, I realize that my approach may not be ideal for my dataset. For instance, ML with 100 samples is far from ideal. In addition, my dataset is 50/50 while the real-world prevalence of both disease types is 70/30; thus, any model I come up with will most likely underperform in the future.
I am aware of these limitations (and there are probably many more), but since the data is already in my hands right now, I wish I could "play" with it to see what I can get. I plan to run repeated k-fold cross-validation (10-fold with 10 repetitions). Inside each fold, I am performing mRMR feature selection and fitting a few classification models. For example, logistic regression, random forest, SVM, XGBoost, and a few more. I want to compare the performance of each model and then spend more time optimizing the one that performed the best out of the box.
At first, I was going to compare log reg and the ML models using accuracy, but great posts by Frank Harrell, Stephan Kolassa, and others are changing my mind. Right now, I am planning to use Brier Score, at least in this initial stage where an overall screening is needed. However, I read that the Brier Score should not be used to compare logistic regression vs. random forest, as they are two different models. It seemed like Brier score should be used only for the same model under different parameters, for example, when evaluating the gains for hyperparameter tuning. How much of that is actually true?
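For reference, the Brier score itself is model-agnostic: it is just the mean squared error of predicted probabilities against the 0/1 outcomes, so it can be computed for any model that outputs probabilities. A quick sketch:

```python
import numpy as np

def brier(y_true, p_pred):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    y_true = np.asarray(y_true, dtype=float)
    p_pred = np.asarray(p_pred, dtype=float)
    return np.mean((p_pred - y_true) ** 2)

# A sharp, well-calibrated model scores low; hedging at 0.5 scores 0.25.
print(brier([1, 0, 1, 0], [0.9, 0.1, 0.8, 0.2]))  # ~0.025
print(brier([1, 0, 1, 0], [0.5, 0.5, 0.5, 0.5]))  # 0.25
```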
| Is Brier Score appropriate when comparing different classification models? | CC BY-SA 4.0 | null | 2023-03-29T19:16:32.667 | 2023-04-03T02:24:58.330 | 2023-03-29T23:22:32.773 | 346628 | 346628 | [
"machine-learning",
"logistic",
"accuracy",
"scoring-rules"
] |
611171 | 2 | null | 611133 | 3 | null | [Correcting my daft initial answer - thank you to @whuber!]
I'm assuming that $Y$ and the $X_i$ are all independent of each other.
Conditional on $Y=y$, the probability that any one of the $X_i$ is less than $Y$ is
$$
P(X_i < Y|Y=y) = P(X_i < y)= \Phi\left(\frac{y-\mu}{\sigma}\right)
$$
where $\Phi(.)$ is the CDF of the standard Normal.
Now let $U$ be the rank of $Y$, so $U=0$ if $Y$ is less than all of the $X_i$, $U=1$ if $Y$ is less than exactly one of them, etc. Notice that we have $(U | Y=y) \sim \mathrm{Bin}\left(n,\Phi\left(\frac{y-\mu}{\sigma}\right) \right)$.
Putting everything together, we have
$$
\begin{align}
P(U=k) & = \int_{-\infty}^{\infty} P(U=k|Y=y) f_Y(y) dy \\
& = \int_{-\infty}^{\infty} {n \choose k} \Phi\left(\frac{y-\mu}{\sigma}\right)^k \left(1-\Phi\left(\frac{y-\mu}{\sigma}\right) \right)^{n-k} \, \phi(y) dy
\end{align}
$$
where $\phi(.)$ is the PDF of the standard Normal.
| null | CC BY-SA 4.0 | null | 2023-03-29T19:25:47.617 | 2023-03-29T19:25:47.617 | null | null | 238285 | null |
611172 | 1 | null | null | 0 | 16 | Let the following multilevel problem, where we try to predict the credit card balance of individuals $y_i$:
$$
x_{i 1}= \begin{cases}1 & \text { if } i \text { th person is from the South } \\ 0 & \text { if } i \text { th person is not from the South },\end{cases}
$$
and the second should be
$$
x_{i 2}= \begin{cases}1 & \text { if } i \text { th person is from the West } \\ 0 & \text { if } i \text { th person is not from the West. }\end{cases}
$$
Then both of these variables can be used in the regression equation, in order to obtain the model
$$
y_i=\beta_0+\beta_1 x_{i 1}+\beta_2 x_{i 2}+\epsilon_i= \begin{cases}\beta_0+\beta_1+\epsilon_i & \text { if } i \text { th person is from the South } \\ \beta_0+\beta_2+\epsilon_i & \text { if } i \text { th person is from the West } \\ \beta_0+\epsilon_i & \text { if } i \text { th person is from the East. }\end{cases}
$$
My problem now is to fit this by linear regression, given the concern that adding the intercept introduces collinearity among the column vectors. One solution is to add one constraint:
$$
\beta_0 + \beta_1 + \beta_2 = 0
$$
and use restricted least squares. But what is the interpretation of adding this constraint (outside of the technical reason)? I suppose that we can no longer say that $\beta_0$ can be interpreted as the average credit card balance for individuals from the East:
$$
y_i=\beta_0+\beta_1 x_{i 1}+\beta_2 x_{i 2}+\epsilon_i= \begin{cases}-\beta_2+\epsilon_i & \text { if } i \text { th person is from the South } \\ -\beta_1+\epsilon_i & \text { if } i \text { th person is from the West } \\ -\beta_1 - \beta_2+\epsilon_i & \text { if } i \text { th person is from the East. }\end{cases}
$$
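To make the mechanics concrete, here is a sketch on simulated data (the region means and noise level are made up). Substituting $\beta_0 = -\beta_1 - \beta_2$ turns the restricted fit into an ordinary no-intercept regression on shifted dummies, and the display above shows the fitted East mean is then forced to equal the sum of the fitted South and West means:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated balances by region (numbers chosen only for illustration).
region = rng.choice(["South", "West", "East"], size=300)
means = {"South": 520.0, "West": 610.0, "East": 480.0}
y = np.array([means[r] for r in region]) + rng.normal(0, 50, size=300)

x1 = (region == "South").astype(float)
x2 = (region == "West").astype(float)

# Under beta0 = -beta1 - beta2 the model is y = b1*(x1-1) + b2*(x2-1) + eps,
# i.e. an ordinary least-squares fit with no intercept on shifted dummies.
X = np.column_stack([x1 - 1.0, x2 - 1.0])
(b1, b2), *_ = np.linalg.lstsq(X, y, rcond=None)
b0 = -b1 - b2

south_fit, west_fit, east_fit = b0 + b1, b0 + b2, b0
# The constraint forces east_fit == south_fit + west_fit, so the three
# fitted group means are no longer free to match the three sample means.
print(round(south_fit, 1), round(west_fit, 1), round(east_fit, 1))
```

This makes visible that the constraint is a genuine restriction on the fitted values, not a mere reparameterization: with three group means but only two free parameters, the fitted means can drift far from the sample means.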
| Dummy coding of linear regression, intercept and constraint | CC BY-SA 4.0 | null | 2023-03-29T19:35:34.650 | 2023-03-29T19:35:34.650 | null | null | 271843 | [
"multiple-regression",
"categorical-encoding"
] |
611173 | 1 | null | null | 0 | 30 | I have a dataset consisting of m rows and 2 columns. The m rows denote a patient and the columns denote the label assigned to them. The label size can be arbitrarily long, with labelers given the liberty of introducing a new label if none of the existing labels describe the issue. Finally, two among n labelers are randomly assigned to a patient.
Based on [Wikipedia](https://en.wikipedia.org/wiki/Fleiss%27_kappa#:%7E:text=This%20contrasts%20with%20other%20kappas,would%20be%20expected%20by%20chance.), I believe that Fleiss' Kappa coefficient will be appropriate in this case. However, I'd like to hear from the Stats community before making a decision.
Edit 1: I just saw this line in Wikipedia - "Whereas Scott's pi and Cohen's kappa work for only two raters, Fleiss' kappa works for any number of raters giving categorical ratings, to a fixed number of items, at the condition that for each item raters are randomly sampled." In my case, two among n labelers are randomly assigned to a patient. Does that fall under random sampling? I am unsure because I am fixing the constraint of having 2 labelers per patient.
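For reference, Fleiss' kappa only requires a per-item table of category counts with a constant number of ratings per item (here, 2), so the two-raters-per-patient design is computable in any case; whether it satisfies the random-sampling condition is the statistical question above. A sketch (not a substitute for a vetted implementation):

```python
import numpy as np

def fleiss_kappa(counts):
    """counts: items x categories matrix of rating counts,
    with the same number of ratings on every item."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                    # ratings per item
    p_j = counts.sum(axis=0) / counts.sum()      # overall category shares
    P_i = (np.sum(counts ** 2, axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.sum(p_j ** 2)    # observed vs chance agreement
    return (P_bar - P_e) / (1 - P_e)

# 2 raters per patient, 3 categories, perfect agreement on every patient:
print(fleiss_kappa([[2, 0, 0], [0, 2, 0], [2, 0, 0]]))  # 1.0
```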
| What is the correct Kappa coefficient to use when more than two labelers are present? | CC BY-SA 4.0 | null | 2023-03-29T19:35:48.700 | 2023-03-31T14:40:36.533 | 2023-03-31T14:40:36.533 | 262849 | 262849 | [
"categorical-data",
"agreement-statistics",
"cohens-kappa",
"labeling"
] |
611174 | 1 | 611975 | null | 1 | 71 | I am asking a question akin to this one: [https://stackoverflow.com/questions/71884457/what-does-the-y-axis-effect-mean-after-using-gratiadraw-for-a-gam](https://stackoverflow.com/questions/71884457/what-does-the-y-axis-effect-mean-after-using-gratiadraw-for-a-gam) but am wondering the same question for parametric terms not smooths.
My data looks like this:
```
df<-structure(list(spreg = structure(c(2L, 2L, 2L, 2L, 2L, 2L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L), levels = c("n", "y"), class = c("ordered",
"factor")), Landings = c(48974, 16933, 18389, 16433, 5720, 3775,
1388, 97109, 148609, 104267, 77454, 128938, 108096, 126957, 102396,
16165, 59423, 2892, 4728, 3783, 4785, 11359, 5323, 6106, 167,
568, 480, 2208, 4378, 1908), year = c(2007, 2009, 2011, 2013,
2015, 2018, 2007, 2007, 2007, 2012, 2015, 2018, 2007, 2007, 2012,
2015, 2018, 2008, 2010, 2006, 2008, 2011, 2008, 2011, 2007, 2010,
2007, 2014, 2015, 2014)), row.names = c(1L, 2L, 3L, 4L, 5L, 6L,
7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L, 16L, 17L, 18L, 19L,
20L, 21L, 22L, 23L, 24L, 25L, 26L, 28L, 29L, 30L, 31L), class = "data.frame")
```
My code looks like this:
```
library(mgcv)
library(gratia)
gam<-gam(Landings~s(year)+spreg,data=df)
draw(parametric_effects(gam))
```
Partial effect plot looks like this:
[](https://i.stack.imgur.com/VOPGU.png)
This is what `summary(gam)` looks like:
```
Family: gaussian
Link function: identity
Formula:
Landings ~ s(year) + spreg
Parametric coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 30460 11178 2.725 0.0112 *
spreg.L -16964 15961 -1.063 0.2974
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Approximate significance of smooth terms:
edf Ref.df F p-value
s(year) 1.374 1.652 0.229 0.798
R-sq.(adj) = -0.00914 Deviance explained = 7.35%
GCV = 2.6732e+09 Scale est. = 2.3726e+09 n = 30
```
I want to report these partial effect plots for parametric terms but I am having trouble finding a good description of the partial effect plots for fixed effects. Is this the estimate and 95% credible interval like the smooth partial effect plots?
| How to interpret the partial effect plots for parametric terms while using gratia::draw() and gratia::parametric_effects()? | CC BY-SA 4.0 | null | 2023-03-29T19:45:21.877 | 2023-04-05T13:04:37.400 | 2023-04-03T20:22:39.877 | 348874 | 348874 | [
"r",
"fixed-effects-model",
"generalized-additive-model",
"ggplot2",
"parametric"
] |
611175 | 2 | null | 380599 | 0 | null | Short Answer
While it could be possible to do something like this, in many cases you are probably better off forecasting time series using a more manual approach.
Long Answer
The approach you describe is similar to what is seen in the machine learning community where a tremendous amount of focus is put on model selection and parameter estimation. For example, there are countless papers on how to optimize a neural net to attain strong results on the ImageNet dataset. Part of the reason there is such an emphasis on this, is that in the research process it is important to compare your model to other models on benchmark datasets. To make sure your results are comparable to others' reported results you cannot manually bring in outside information to improve your model's performance.
While this automatic-forecasting approach is not wrong per se, in the time series setting often the most important step in an analysis is bringing in exogenous variables to help explain a time series. For example, if one is attempting to forecast a time series of average house prices in New York, what would matter most in forecasting is determining what factors influence house prices (e.g. population growth, interest rates, unemployment) and then coming up with a reasonable forecast of these variables to inform your forecast of house prices. The time dependence of the residuals (which is what most traditional statistical methods attempt to model), while still important to consider in the model, would likely be far less important in improving a forecast than the selection and forecasting of the aforementioned exogenous variables.
| null | CC BY-SA 4.0 | null | 2023-03-29T19:54:19.497 | 2023-03-29T19:54:19.497 | null | null | 234732 | null |
611177 | 1 | null | null | 0 | 49 | If there are only 2 groups for each manipulation (2x2 factorial design), how can you do planned contrast or post-hoc analysis comparing between two specific groups? Do you just do t-test?
| Planned contrast in 2x2 factorial design | CC BY-SA 4.0 | null | 2023-03-29T19:59:35.980 | 2023-04-07T07:01:30.303 | null | null | 302906 | [
"anova",
"t-test"
] |
611178 | 1 | null | null | 0 | 16 | This more an academic question than practical. I like that ;)
I have been thoroughly trained in frequentist statistics. Zealously, actually - to the point that no one even told me it’s called “frequentist” and that there exists an alternative.
For quality control, frequentist is OK. We are mostly interested in long-term, population behavior. Except for 1 thing, which has always bugged me.
I have no problem figuring out the probability that I misclassify any inspection, given a measured result, Type I or II error. Textbooks are full of ways to find this for different combinations of types of underlying quality distributions and distributions of measurement uncertainty. But this does not help me.
What we want to know is “will we get a complaint if we ship this one to a customer?” and not “how many complaints will we get when we ship a thousand of these to customers?”.
Let’s suppose I have a well controlled process. Probability of getting a product anywhere near the spec limits (which is a requirement for any appreciable risk of Type I/II error) is then extremely low. If I ever do get a product near the limits, I strongly suspect this one is an outlier, coming from a whole different distribution. Given my well controlled process, this is a rare event. Meaning: I have no way of knowing any population parameters, not even any (frequentist) way of estimating or inferring them.
All my frequentist math breaks down right there.
So what can I tell my boss when he asks me to evaluate his risk of complaints, or his risk of spending loads of money reworking something that wasn’t broken to begin with? The question may be academic - the consequences are very real!
I recently encountered Bayesian statistics. It feels like it could help with this situation, but I have no idea how to apply it. Part of it is to take the probability of Type I or II error. I got that covered. But in all the examples I found until now, there was some reasonable way of determining the underlying quality distribution. If I understand correctly, that would be called the prior. I’m at a loss on what to choose there; certainly not the distribution I THOUGHT I had before this (suspected) outlier came across my desk…
So in short, I guess the main question is: What will I take as prior? What would be a good way to approach that?
| How to apply Bayesian statistics to this (general) Quality Control problem | CC BY-SA 4.0 | null | 2023-03-29T20:20:08.963 | 2023-03-29T20:20:08.963 | null | null | 356008 | [
"bayesian",
"frequentist",
"type-i-and-ii-errors",
"quality-control"
] |
611179 | 1 | null | null | 0 | 38 | I am trying to forecast using the SARIMA model. I have used
```
auto.arima(ts,seasonal=T,trace=T)
```
but the prediction I'm getting is just the previous years values shifted up:
[](https://i.stack.imgur.com/FTeLg.jpg)
Clearly the 2020 values are just the 2019 values but raised. I'm not sure why this is happening? The model it fitted was SARIMA(5,1,4)(0,1,0)365, so
$(1-\sum_{i=1}^5 \theta_i B^i)(1-B)(1-B^{365})y_t = (1+\sum_{i=1}^4\phi_i B^i)\epsilon_t$
in backshift notation where $By_t = y_{t-1}$. I think I understand why the values are just the year before's, but not why they're so much higher.
| SARIMA forecast is just the previous year shifted | CC BY-SA 4.0 | null | 2023-03-29T10:58:50.753 | 2023-04-12T16:43:26.553 | 2023-04-12T16:43:26.553 | 384409 | 384409 | [
"r",
"forecasting",
"arima"
] |
611180 | 1 | null | null | 0 | 44 | I am dealing with multi-target binary classifications (I have two targets). I need to use a sampling strategy. I have tried imblearn.pipeline but I'm getting the same error as this time when I'm trying to resample before training.
```
for dataset in dataset_list:
    smote = SMOTE()
    X_resampled, y_resampled = smote.fit_resample(dataset.X_train, dataset.y_train)
    model = clone(model_class)
    model.fit(X_resampled, y_resampled)
    preds = model.predict(dataset.X_test)

# Error raised by fit_resample:
# Imbalanced-learn currently supports binary, multiclass, and binarized
# encoded multiclass targets. Multilabel and multioutput targets are not
# supported.
```
Any suggestions for that? I searched a lot and most articles are about multi-class and not multi-targets.
| Sampling strategies in multi-target classification | CC BY-SA 4.0 | null | 2023-03-29T20:42:17.410 | 2023-03-29T22:02:54.137 | 2023-03-29T22:02:54.137 | 247274 | 78427 | [
"machine-learning",
"classification",
"unbalanced-classes",
"multilabel",
"smote"
] |
611181 | 1 | null | null | 0 | 17 | Thinking it should be exponential, but I could be wrong.
| What is the approximate distribution of the distance between gas stations along a highway? | CC BY-SA 4.0 | null | 2023-03-29T20:56:44.067 | 2023-03-29T20:56:44.067 | null | null | 384452 | [
"distributions",
"poisson-distribution",
"exponential-distribution",
"operations-research"
] |
611182 | 1 | null | null | 0 | 14 | I have a data set of 139 individuals who provided responses to a series of questionnaires at 5 time points (0, 3, 6, 12 and 18 months) across treatment. Time is included as a covariate in the model (centered at the intercept). My outcome (dependent) variables are two forms of functioning. My independent variables (measured at all time points) include:
- Personality disorder severity
- Level of emotion dysregulation
- Depression severity
- Number of substances used
- Socioeconomic status
- Level of antisocial personality disorder
- If the person’s caregivers are in employment (dichotomous, 1 = yes, 2 = no)
- If the individual is in employment or education themselves (dichotomous, 1 = yes, 2 = no)
- PTSD diagnosis (dichotomous, 1 = yes, 2 = no)
- Age
- Gender
All variables except Gender and Age (because I am keeping it constant) vary over time. The first three variables (personality, emotion regulation and depression) change systematically over time, approximating a linear or quadratic curve. The other independent variables appear to fluctuate over time but don't show a systematic pattern.
I am interested in understanding whether individuals who experience an increase in their IV at time 1 show a subsequent decrease in their level of functioning at time 2.
My initial plan was to develop a series of time-lagged multilevel models and correct for type 1 errors using the method here ([https://onlinelibrary.wiley.com/doi/full/10.1002/sim.6461](https://onlinelibrary.wiley.com/doi/full/10.1002/sim.6461)). I plan to person-intercept center at Level 1 (X - Person's intercept) and grand-intercept center at Level 2 (X - Group mean intercept) so that if I see a significant within-person main effect I can make the statement "when a participant's emotion dysregulation increased beyond their initial values, their functioning declined at the subsequent time point".
However, I am now concerned that this is the wrong analysis for the independent variables above which change systematically over time. A Curran & Bauer paper (2011; https://pubmed.ncbi.nlm.nih.gov/19575624/) seems to suggest that if I have independent variables which change systematically over time, the method of disaggregating the levels (they suggest grand-mean and person-mean centering) will provide inaccurate results, as it doesn't account for the sources of variance within the time-varying predictor (intercept, slope and residual). I'm a little confused about this, as a number of papers have completed a time-lagged MLM with lagged predictors that vary systematically (e.g., linearly) over time (e.g., [https://www.sciencedirect.com/science/article/abs/pii/S000579671830024X](https://www.sciencedirect.com/science/article/abs/pii/S000579671830024X); [https://pubmed.ncbi.nlm.nih.gov/32333727/](https://pubmed.ncbi.nlm.nih.gov/32333727/)). I have noticed that these papers assess the main effects of the time-lagged predictor on the outcome at the between and within level, but only the interaction between the lagged predictor and time (slope) on the outcome at the between level.
My questions are:
- Can I complete a time-lagged MLM with an independent variable and dependent variable that both systematically change over time?
- If I was to do this is the way I have centered the variables okay? AND is it only possible to assess the main effects and between level interaction with time (as above) in my model or can I also assess the interaction with time at the within level? If so, why is this the case?
Thank you very much for your time!
| Is it possible to complete a Time-Lagged Multilevel Model with time-variant predictors that change systematically with time? | CC BY-SA 4.0 | null | 2023-03-29T21:07:20.433 | 2023-03-29T21:07:20.433 | null | null | 379607 | [
"multilevel-analysis",
"lags",
"time-varying-covariate",
"centering"
] |
611183 | 1 | null | null | 0 | 31 | Imagine a simple website where a user can click on a button that will refresh the page. On average, 50 000 requests are made to the webserver of this website in a month. We can assume that the number of accesses to the website (the first request) in a month follows a Poisson distribution, but we know that, after accessing the page, over an interval of 20s, on average, the user will refresh the page 9 times (make 9 more requests). What would be the distribution of the requests to the server over a month then?
PS: One more piece of information: The website has a delay of 30ms to respond, so the user needs to wait at least 30ms to refresh the page.
It's a really interesting problem that came to my mind I dare say, basically a sequence of events in which the start follows a Poisson distribution, any idea how to solve this?
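For what it's worth, the total described is a compound Poisson count (each Poisson arrival brings along a random batch of extra requests), so its mean is $\lambda(1+9)$. A quick simulation sketch, assuming purely for illustration that refreshes per visit are Poisson with mean 9 and a hypothetical $\lambda$ of 5,000 first visits per month:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 5_000          # hypothetical mean number of first visits per month
mean_refresh = 9     # average refreshes per visit, as in the question
months = 2_000

n_visits = rng.poisson(lam, size=months)
# If each visit's refreshes are Poisson(9), the refreshes of n independent
# visits sum to a Poisson(9 * n) variable, so the monthly total is:
totals = n_visits + rng.poisson(mean_refresh * n_visits)

# The mean should be close to lam * (1 + mean_refresh) = 50,000
```

The compound-Poisson structure survives if the refresh count per visit has some other distribution (only the variance of the total changes), so the same simulation scheme applies.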
| Distribution for a sequence of events which the first one follows a Poisson distribution | CC BY-SA 4.0 | null | 2023-03-29T21:09:25.400 | 2023-03-29T21:25:44.617 | 2023-03-29T21:25:44.617 | 327798 | 327798 | [
"probability",
"distributions",
"poisson-distribution",
"poisson-process"
] |
611184 | 1 | null | null | 1 | 10 | I would like to know how an out-of-sample pseudo prediction is obtained in a mixed autoregressive model from a mathematical point of view, and to understand why it is not possible to do the same thing out of sample in the model in question.
| I would like to know how an out-of-sample pseudo prediction is obtained in a mixed autoregressive model | CC BY-SA 4.0 | null | 2023-03-29T21:20:45.500 | 2023-04-04T00:38:18.643 | 2023-04-04T00:38:18.643 | 11887 | 384454 | [
"econometrics",
"autoregressive"
] |
611185 | 1 | 611192 | null | 1 | 49 | Let's say I have a regression dataset (paired x and y) such that the response variable (y) has an unknown distribution (but definitely not Gaussian) and the sample is large enough that the central limit theorem holds. If I do OLS with bootstrapping and observe that the bootstrap standard error of the coefficient is far from the ML estimate of the standard error, does that indicate that the Gauss-Markov assumptions are violated? If so, let's then assume that homoskedasticity is violated and thus OLS is no longer BLUE. If I use an approach like adjusting the OLS with robust standard errors, should I expect both standard error estimates to agree?
| Why would bootstrap OLS standard errors differ from ML estimate? | CC BY-SA 4.0 | null | 2023-03-29T21:33:03.743 | 2023-03-31T20:38:42.190 | 2023-03-30T01:55:22.770 | 247274 | 261708 | [
"least-squares",
"bootstrap",
"heteroscedasticity",
"blue",
"gauss-markov-theorem"
] |
611189 | 1 | 611208 | null | 2 | 50 | Given a quantile function $Q(p)=F^{-1}(p)$ where $F(x)$ is the CDF, one can easily calculate the expected value as
$E[X]=\int_0^1Q(p)dp$. (see e.g., [here](https://stats.stackexchange.com/questions/164788/expected-value-as-a-function-of-quantiles))
Is there a similar way to get the Variance from the quantile function $Q(p)$? I tried deriving a similar function, but the best I can do is
$Var[X]=\int_0^1(Q(p))^2dp-(\int_0^1Q(p)dp)^2$,
simply by using the equality $Var[X]=E[X^2]-(E[X])^2$. But I fail to find any simpler formula based on the quantiles.
Are there any simplifications that could be performed on this?
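As a numerical sanity check of that formula (a sketch using Exp(1), for which $Q(p)=-\log(1-p)$, $E[X]=1$ and $Var[X]=1$; the integration grid stops slightly short of $p=1$ because $Q$ diverges there):

```python
import numpy as np

def trap(y, x):
    """Simple trapezoidal rule (avoids relying on np.trapz vs np.trapezoid naming)."""
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

p = np.linspace(0.0, 1.0 - 1e-6, 1_000_001)   # stop just short of p = 1
Q = -np.log1p(-p)                              # quantile function of Exp(1)

EX = trap(Q, p)            # ∫ Q(p) dp   ≈ E[X]   = 1
EX2 = trap(Q**2, p)        # ∫ Q(p)^2 dp ≈ E[X^2] = 2
var = EX2 - EX**2          # ≈ Var[X] = 1
```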
| Variance as function of quantiles | CC BY-SA 4.0 | null | 2023-03-29T22:21:07.080 | 2023-03-30T09:12:05.060 | null | null | 7194 | [
"self-study",
"variance",
"quantiles"
] |
611190 | 1 | null | null | 0 | 44 |
# Background
I'm performing a feature selection process on a fraud dataset.
The dataset is made up of roughly 300 columns and 40,000 rows. It has a single binary indicator for a target.
A lot of the columns are constant or nearly constant (80%+ one value, 2-10 unique values). I've already removed the constant columns, but I'd like to remove most of the nearly constant columns too.
To do this, I'm [one-hot-encoding](https://en.wikipedia.org/wiki/One-hot) each column and then calculating the correlation between the dummy and target variables.
I'd like to remove all columns below a correlation of $X$, where $X$ is decided by a statistical test that is dependent on the percentage of 1s in the dummy and target columns.
# What I've done so far
I've created a function in Python that calculates an expected distribution if a variable is linked to the target column `x%` of the time. It gives me a distribution like this:
[](https://i.stack.imgur.com/i3hrG.png)
where x is the correlation level, and y is the binned number of times that correlation range occurs.
As you can see, this distribution is centred around a correlation of `0`, which is the expected mean when the link between the target and dummy variable column is `0%`. To produce this distribution, I created pairs of columns, with 1s randomly positioned in `y%` of the rows, and calculated their correlation. This is a sample of the true distribution of all columns with `y%` as 1s.
This function returns me the mean and the standard deviation of an expected distribution for a given link-percentage.
I'd like to use this distribution to calculate the statistical probability that the true correlation is above/ below (one-tailed test) the distribution's mean. For example, my null hypothesis, $H_0$, would be:
>
The correlation between the dummy variable and the target column is $X$.
The alternative hypothesis, $H_A$, would be:
>
The correlation between the dummy variable and the target column is $<X$.
# Question
I'm confused about what statistical test to use to calculate the probability. I feel like I have a reasonable estimate of the population mean and standard deviation through my sampling for the true distribution. That makes me think I should use the [Z-Test](https://en.wikipedia.org/wiki/Z-test).
However, when I calculate the correlation between my dummy variable column and the target column, I have a sample of size 1, which lowers the power of the Z-Test. On the other hand, the standard deviation of my expected distribution is small; therefore, a difference should still be easy to pick out.
Given all of the above, is the Z-Test a good way to calculate the probability of a single value being lower than the expected mean?
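As a sketch of the simulation described above (with hypothetical sizes: 40,000 rows and 5% ones), the null distribution of the correlation between two unrelated binary columns can be generated directly, and a single observed correlation can then be located in it via a z-score — which is essentially the one-observation comparison being asked about:

```python
import numpy as np

rng = np.random.default_rng(42)
n_rows, frac, reps = 40_000, 0.05, 500   # hypothetical sizes for illustration

def random_binary(n, frac):
    """A column with exactly frac*n ones placed at random rows."""
    col = np.zeros(n)
    col[rng.choice(n, int(n * frac), replace=False)] = 1.0
    return col

# Null distribution: correlation between two *independent* such columns
corrs = np.array([
    np.corrcoef(random_binary(n_rows, frac), random_binary(n_rows, frac))[0, 1]
    for _ in range(reps)
])
mu, sd = corrs.mean(), corrs.std()

# Locating one observed correlation in the null distribution:
r_observed = 0.02                     # hypothetical value
z = (r_observed - mu) / sd
```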
| Can you use a Z-Test with a sample size of 1? | CC BY-SA 4.0 | null | 2023-03-29T22:28:06.413 | 2023-03-30T10:09:28.023 | 2023-03-30T10:09:28.023 | 363857 | 363857 | [
"hypothesis-testing",
"feature-selection",
"statistical-power",
"categorical-encoding",
"z-test"
] |
611191 | 2 | null | 440832 | 1 | null | In a perfect world, you would be able to fit a half-million observations in memory and run the neural network on everything. This has the advantage of using a representative data set and letting the neural network learn from reality instead of fiction.
In this imperfect world, you might be right that you do not lose much performance. If you are able to build a useful model, that counts for a lot.
If you do reduce the number of non-fraud training cases to make the data fit in memory, use representative data when you evaluate your model. If you have a $95/5$ class imbalance in reality, you should have about that kind of class imbalance when you evaluate your model on unseen data. If the model trained on undersampled data only performs well in the fictional setting where the classes are rather balanced, there is limited reason to expect good performance in production.
Our Demetri Pananos discusses [here](https://stats.stackexchange.com/a/558950/247274) what to do if you fiddle with the class ratio, and the King paper discussed [here](https://stats.stackexchange.com/a/559317/247274) is related. Depending on your task, you might be more or less interested in the fraud probabilities (I would think that would be interesting for this task), and giving an incorrect prior probability of fraud by downsampling the majority class to fit the data into memory leads to an incorrect posterior probability of fraud, though that Pananos answer I linked discusses a possible remedy. That is, depending on what you value in predictions, you might find out that your model trained on downsampled data does not perform as well as you thought.
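To make the posterior-correction point concrete: if the model was trained after keeping only a fraction of the negatives, its predicted odds are inflated by exactly that factor and can be rescaled back. This is a sketch of the odds/intercept-correction idea discussed in those links (with hypothetical numbers), not the exact code from either source:

```python
import numpy as np

def correct_downsampled_probs(p_sampled, keep_frac):
    """Rescale probabilities from a model trained on data where the negative
    class was downsampled (keeping a fraction keep_frac of negatives) back to
    the original base rate, by multiplying the predicted odds by keep_frac."""
    odds = p_sampled / (1.0 - p_sampled)
    odds_true = odds * keep_frac
    return odds_true / (1.0 + odds_true)

# A model trained with 10% of negatives kept predicts p = 0.5; at the true
# class balance this corresponds to odds 0.1, i.e. p = 1/11 ≈ 0.091.
p_corrected = correct_downsampled_probs(np.array([0.5]), keep_frac=0.10)
```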
| null | CC BY-SA 4.0 | null | 2023-03-29T22:33:24.630 | 2023-03-30T12:18:13.153 | 2023-03-30T12:18:13.153 | 247274 | 247274 | null |
611192 | 2 | null | 611185 | 2 | null | You might just as well ask, "Why is the sample correlation between two independent variables always non-zero?" and the answer is the same: samples never conserve the properties of the probability models that generate them unless they are, somehow, perfectly balanced.
Consider this example where I generate data with errors taken straight from empirical normal quantiles; that is, the ECDF of the residuals very closely matches the normal distribution.
```
library(boot)
set.seed(123)
x <- seq(-3, 3, 0.1)
e <- qnorm(1:99/100)
data <- expand.grid('x'=x,'e'=e)
data$y <- data$x + data$e
f <- lm(y ~ x, data=data)
confint(f)
betahat <- function(data,i) cov(data[i, 'x'],data[i, 'y'])/var(data[i, 'x'])
b <- boot(data, betahat, R=1000, stype='i')
boot.ci(b, type='norm')
```
Gives:
```
> confint(f)
2.5 % 97.5 %
(Intercept) -0.02422861 0.02422861
x 0.98623908 1.01376092
> boot.ci(b, type='norm')
BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 1000 bootstrap replicates
CALL :
boot.ci(boot.out = b, type = "norm")
Intervals :
Level Normal
95% ( 0.9861, 1.0140 )
Calculations and Intervals on Original Scale
```
Which agrees out to nearly 4 decimal places. The asymptotic result is that these are the same, but in finite samples, you'll never have perfectly normal residuals.
It's true that Gauss-Markov does not require that errors are normally distributed. It is an asymptotic result. But the question bears consideration: if the bootstrapped error estimate is very different from the model-based error estimate, should you go with the bootstrapped error estimate because the probability model is misspecified? It can go both ways. Change the above to a simulation to consider the 80% coverage of the CIs.
```
library(boot)
set.seed(123)
`%has%` <- function(ci, true) true > ci[1] & true < ci[2]
res <- replicate(1000, {
x <- seq(-3, 3, 1)
e <- rnorm(7)
data <- data.frame(x=x, e=e)
data$y <- data$x + data$e
f <- lm(y ~ x, data=data)
betahat <- function(data,i) cov(data[i, 'x'],data[i, 'y'])/var(data[i, 'x'])
b <- boot(data, betahat, R=1000, stype='i')
c(
olscover = confint(f, level = 0.8, 'x') %has% 1,
bscover = boot.ci(b, type='norm', conf=0.8)$normal[-1] %has% 1
)
})
```
Gives:
```
> rowMeans(res)
olscover bscover
0.795 0.782
```
In other words, the bootstrap fails to produce CIs that cover at the nominal 80% rate. The difference is not substantial, and the simulation isn't exactly fast, but you can play with it to understand the risks. The bootstrap is just slower to converge, so in small sample like this N=7 regression, when the OLS assumptions hold, the OLS is the better answer. On the other hand, you don't know when those assumptions are true, and if you have decent power, the bootstrap is a robust safeguard against model misspecifications.
| null | CC BY-SA 4.0 | null | 2023-03-29T22:41:22.047 | 2023-03-31T20:38:42.190 | 2023-03-31T20:38:42.190 | 8013 | 8013 | null |
611193 | 1 | null | null | 1 | 10 | In training/test, we used the median for filling missing data. We got the median after the split, so each set had its own values.
In production, every month new data will go through the model.
Do we use the new data median for imputation or do we use the training set median?
| Filling missing data in production | CC BY-SA 4.0 | null | 2023-03-29T23:06:42.090 | 2023-03-29T23:06:42.090 | null | null | 384458 | [
"machine-learning"
] |
611194 | 2 | null | 611158 | 1 | null | My recommendation would be to use the rank biserial correlation coefficient, rather than the r that is the z value divided by the square root of N.
A reference for the calculation is King, B.M., P.J. Rosopa, and E.W. Minium. 2000. Statistical Reasoning in the Behavioral Sciences, 6th ed. Wiley.
Although it appears you already know how to do this. They show the matched-pairs case, but this can be adapted to the one-sample case.
The r statistic has a few limitations as an effect size statistic. It won't reach 1 or -1 in some cases where we think it should.
If you do use the r statistic, I would decrease N to account for values that equal the designated mu. To me, this seems to give more reasonable results.
There is a question of how to deal with these zero-difference observations with the rank biserial correlation coefficient also (or in the Wilcoxon test itself).
I have some examples of these properties with the relevant functions in the rcompanion package, with the caveat that I wrote them.
I can't speak to the rstatix package.
Here, with item2, and mu=3, we would probably expect the effect size to be -1, but it isn't with the r statistic.
```
library(rcompanion)
item2 = c(1,2,1,2,1,2,1,2,1,2,1,2,1,2)
wilcoxonOneSampleR(item2, mu=3)
### r
### -0.906
```
The rank biserial correlation coefficient is -1.
```
wilcoxonOneSampleRC(item2, mu=3)
### rc
### -1
```
Here, with zero-difference values, the default for r is to decrease N by the number of zero-difference values.
```
item3 = c(1,2,1,2,1,2,1,2,1,2,3,3,3,3)
wilcoxonOneSampleR(item3, mu=3)
### r
### -0.911
wilcoxonOneSampleR(item3, mu=3, adjustn=FALSE)
### r
### -0.77
```
The rank biserial correlation coefficient function also has a verbose option, which may be helpful.
```
wilcoxonOneSampleRC(item3, mu=3, verbose=TRUE)
### zero.method: Wilcoxon
### n kept = 10
### Ranks plus = 55
### Ranks minus = 0
### T value = 0
###
### rc
### -1
```
The function also has a zero.method option that can be set to "Pratt" or "none" for different ways to deal with zero-difference values. (See the documentation.)
| null | CC BY-SA 4.0 | null | 2023-03-29T23:12:20.017 | 2023-03-30T14:18:52.703 | 2023-03-30T14:18:52.703 | 166526 | 166526 | null |
611195 | 2 | null | 610872 | 1 | null | You could start with some visualization:
[](https://i.stack.imgur.com/3w12b.png)
R code for the plot:
```
# Assign data frame in question to variable dat, then
dat0 <- reshape2::melt(dat,id.vars=1, measure.vars=2:5, variable.name="Group",
value.name="Count")
library(ggplot2)
dat0 |> ggplot(aes(days, Count, color=Group, group=Group)) + geom_point() + geom_line()
```
| null | CC BY-SA 4.0 | null | 2023-03-29T23:45:41.190 | 2023-03-29T23:45:41.190 | null | null | 11887 | null |
611196 | 2 | null | 610870 | 1 | null | There are advantages when it comes to experimental design in balancing the categories, and if there is a budget to get more observations, sure, it could be reasonable to allocate that to observing members of the smallest class instead of the largest, but equal sample size is not an inherent assumption of regression on categories.
| null | CC BY-SA 4.0 | null | 2023-03-29T23:50:25.100 | 2023-03-29T23:50:25.100 | null | null | 247274 | null |
611197 | 1 | 611404 | null | 0 | 31 | I have some data with categorical predictors and I'm wondering about comparing the confidence intervals of difference in means of one pairwise comparison in my TukeyHSD analysis vs that of a two-sample t-test. Which one will have a wider confidence interval and will it always be that way for every pairwise comparison?
I just don't understand what's happening, doesn't a lower p-value mean a wider CI? And if so, then why would we be able to expect TukeyHSD to have a wider/skinnier CI if the p-value for each pairwise comparison in TukeyHSD seems to be either high or low? TukeyHSD adjusts for the probability of making type 1 errors. Bonferroni requires a lower P-value for each individual comparison to lower the family-wise error rate which I assume Tukey does as well, so if the p-value goes down, shouldn't the CI go up?
My CI's for the pairwise comparison $D-C$ for example were:
TukeyHSD: $(-16.8, -4.93)$ and
Two-sample t-test: $(-14.91, -6.82)$
So the CI for the TukeyHSD was wider. Why and will it always be like this?
| Confidence interval for difference of means for TukeyHSD vs two-sample t-test? | CC BY-SA 4.0 | null | 2023-03-30T00:20:16.820 | 2023-03-31T16:50:05.767 | 2023-03-30T01:41:35.680 | 266384 | 266384 | [
"confidence-interval",
"t-test",
"tukey-hsd-test"
] |
611198 | 1 | 611203 | null | 4 | 176 | I've seen these two definitions of almost sure convergence:
- $\mathbb{P}\left(\lim _{n \rightarrow \infty} X_n=X\right)=1$
- The sequence $X_n$ converges almost surely to $X$ if there exists a sequence of random variables $\Delta_n$ such that $d\left(X_n, X\right) \leq \Delta_n$ and $\Delta_n \stackrel{\text { as }}{\rightarrow} 0$.
The first of these came from Wikipedia and the second came from Asymptotic Statistics by A.W. Van der Vaart. Are these equivalent? I'm a little confused.
| Almost sure convergence definitions | CC BY-SA 4.0 | null | 2023-03-30T01:06:01.833 | 2023-03-30T03:33:36.057 | null | null | 232845 | [
"mathematical-statistics",
"convergence",
"central-limit-theorem"
] |
611201 | 1 | null | null | 0 | 28 | I get everything but the two sentences are confusing me.
An offset is a variable that serves to account for the different exposures of different observations. When properly included, the mean of the target variable is directly proportional to the exposure: mu = w*lambda, where mu is the mean of the target variable, w is the exposure, and lambda is the rate.
Then, the offset enters on the log scale: the offset is ln(w), and the log of the mean of the target variable is ln(mu).
Do you know what the difference is between the mean of the target variable and the target variable in general?
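For what it's worth, the two notions can be separated with a tiny sketch (hypothetical rate and exposures): the target variable is the observed count $y_i$, which scatters randomly around its mean $\mu_i = w_i\lambda$; with a log link, $\log\mu_i = \log w_i + \beta_0$, which is why $\ln(w)$ enters as an offset. For the intercept-only Poisson model with offset, the MLE of the rate reduces to total events over total exposure:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 3.0                                  # hypothetical true rate per unit exposure
w = rng.uniform(0.5, 2.0, size=10_000)     # exposures, one per observation

# The *target variable* is the observed count y_i; its *mean* is mu_i = w_i * lam.
y = rng.poisson(w * lam)

# With a log link: log(mu_i) = log(w_i) + beta0, i.e. log(w) is an offset
# (a regressor with coefficient fixed at 1). For the intercept-only model,
# the maximum-likelihood estimate of the rate is total events / total exposure:
lam_hat = y.sum() / w.sum()
```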
| Offset using Poisson distribution | CC BY-SA 4.0 | null | 2023-03-30T02:41:05.947 | 2023-03-30T07:25:17.050 | 2023-03-30T04:15:59.600 | 362671 | 382257 | [
"generalized-linear-model",
"poisson-distribution",
"offset"
] |
611202 | 1 | null | null | 1 | 25 | Suppose you want to compute the 25% quantile of some random variable. You can do this by first computing the median, and then computing the median of everything less than the median. You can repeat this process as many times as you want, going recursively in either direction from the last computed median, to compute any value in the quantile function to any desired level of accuracy.
The question is: what happens if you replace the median with the mean above? You get, basically, some kind of "mean version" of the quantile function, or something which fits the metaphor of median:quantile function::mean:________. Does there exist a name for this?
| Iterated mean as a variant of the quantile function | CC BY-SA 4.0 | null | 2023-03-30T03:11:23.307 | 2023-03-30T03:11:23.307 | null | null | 150335 | [
"mean",
"terminology",
"quantiles",
"median"
] |
611203 | 2 | null | 611198 | 5 | null | As I commented under the question, the second definition looks more contrived because it wants to accommodate the situation that $X_n$ and $X$ are not measurable or $d(X_n, X)$ is not measurable (for more details, see Section 18.2 in Asymptotic Statistics). In this situation, the set $[\lim_{n \to \infty}X_n = X]$ may not be an event so that the notation "$P[\lim_{n \to \infty}X_n = X]$" may become meaningless.
That said, the two definitions are indeed equivalent when $X_n$, $X$, and $d(X_n, X)$ are all measurable. For example, when $X_n$ and $X$ are real-valued random variables and $d(x, y) = |x - y|$. This case can be proved as follows.
In [this thread](https://stats.stackexchange.com/questions/604904/weak-law-vs-strong-law-of-large-numbers-intuition/604919#604919), it was shown that $Y_n$ converges to $Y$ almost surely if and only if for any $\epsilon > 0$, it holds that
\begin{align}
P[\cap_{k \geq 1}\cup_{n \geq k}[|Y_n - Y| \geq \epsilon]] = 0. \tag{1}
\end{align}
Suppose $|X_n - X| \leq \Delta_n$ and $\Delta_n$ converges to $0$ almost surely, it then follows by $(1)$ that (set $Y_n = \Delta_n$, $Y = 0$) for any $\epsilon > 0$:
\begin{align}
P[\cap_{k \geq 1}\cup_{n \geq k}[|X_n - X| \geq \epsilon]]
\leq & P[\cap_{k \geq 1}\cup_{n \geq k}[\Delta_n \geq \epsilon]] = 0,
\end{align}
which implies (by the other direction of $(1)$) that $X_n$ converges to $X$ almost surely. This shows that the second definition implies the first definition.
Conversely, suppose $X_n$ converges to $X$ almost surely, then simply taking $\Delta_n = |X_n - X|$ (or if you don't want to be that extreme, take $\Delta_n = 2|X_n - X|$, say) meets all the requirements in the second definition. This shows the first definition implies the second definition. In conclusion, these two definitions are equivalent.
| null | CC BY-SA 4.0 | null | 2023-03-30T03:33:36.057 | 2023-03-30T03:33:36.057 | null | null | 20519 | null |
611204 | 1 | null | null | 0 | 11 | I am new to meta-analysis and `metafor`. I am doing a meta-analysis of partial correlation. I've got bivariate correlation matrix from several studies, and partial correlation matrix from others. I have transformed the bivariate correlation matrix into the partial correlation matrix, and I need to combine these partial correlation coefficients into one coefficient with CI to see the global correlational relationship.
I've read the manual of `metafor`, and the methods for combining the partial correlation are within the context of regression model. So can I use the following code to combine the partial correlation in my situation?
```
x <- escalc(measure="PCOR"...)
agg(x,...)
```
The `df` of correlation information between each pairs of variables will be like the following table (for example, this is the correlation information of different studies between variable A and variable B):
|study_name |N |partial_correlation |
|----------|-|-------------------|
|Smith_one_2001 |30 |0.3 |
|Jones_two_2002 |43 |-0.1 |
|... |... |... |
Sorry if it's a dumb question, and thanks in advance for any answer or discussion.
| How to combine partial correlation coefficients from different studies | CC BY-SA 4.0 | null | 2023-03-30T04:04:56.180 | 2023-03-30T04:04:56.180 | null | null | 264753 | [
"correlation",
"meta-analysis",
"partial-correlation",
"metafor"
] |
611207 | 1 | null | null | 1 | 23 | Suppose that $(X_1,\dots, X_n)$ is an iid random sample from $X\sim f(x;\alpha, \beta)$ and
$$
f(x;\alpha, \beta)=\frac{\alpha x^{\alpha-1}}{\beta^{\alpha}}, \, 0<x\le \beta, \alpha>0, \beta>0,
$$
Show that the MLE of $\alpha$ is consistent by definition.
---
My work:
Note that the log-likelihood:
$$
\ell(\alpha, \beta)=\left(n\log\alpha-n\alpha\log\beta+(\alpha-1)\sum_{i=1}^n \log x_i\right) I[0<X_{(1)}\le X_{(n)}\le \beta]
$$
where $X_{(1)}\le X_{(2)}\le \dots \le X_{(n)}$.
Then the MLE of $\beta$ is
$$
\hat{\beta}=X_{(n)}.
$$
By the invariance property of the MLE and solving $\frac{\partial \ell(\alpha, X_{(n)})}{\partial \alpha}=0$, the MLE of $\alpha$ is
$$
\hat{\alpha}=\frac{1}{\log X_{(n)}-\frac{1}{n}\sum \log X_i}
$$
By the weak law of large numbers, we have
$$
\frac{1}{n}\sum \log X_i \to E[\log X_1]=\log \beta-\frac{1}{\alpha}.
$$
Note that for every $\epsilon>0$,
$$
P(|X_{(n)}-\beta|>\epsilon)=P(\beta-X_{(n)}>\epsilon)=\frac{(\beta-\epsilon)^{n\alpha}}{\beta^{n\alpha}}\to 0
$$
as $n\to \infty$. (Because $\frac{\beta-\epsilon}{\beta}<1$)
Hence, $\hat{\beta}=X_{(n)}\to \beta$ in probability.
Hence, by the continuous mapping theorem,
$$
\hat{\alpha}=\frac{1}{\log X_{(n)}-\frac{1}{n}\sum \log X_i}\to \alpha
$$
Is my proof right?
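A quick simulation sketch supporting the algebra, sampling via the inverse CDF ($F(x) = (x/\beta)^\alpha$ on $(0,\beta]$, so $X = \beta U^{1/\alpha}$), with hypothetical $\alpha = 2$, $\beta = 5$:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, n = 2.0, 5.0, 200_000

# Inverse-CDF sampling: F(x) = (x / beta) ** alpha on (0, beta]
x = beta * rng.uniform(size=n) ** (1.0 / alpha)

# MLEs as derived above
beta_hat = x.max()
alpha_hat = 1.0 / (np.log(beta_hat) - np.log(x).mean())
```

For large n both estimates land very close to the true values, consistent with the claimed convergence in probability.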
| Show that the MLE of $\alpha$ is consistent by definition | CC BY-SA 4.0 | null | 2023-03-30T04:55:07.350 | 2023-03-30T04:55:07.350 | null | null | 334918 | [
"self-study",
"consistency"
] |
611208 | 2 | null | 611189 | 2 | null | Quantiles of a distribution are not in direct relation with the distribution variance since the former always exist while the variance may be infinite. A generic connection between the cdf and the variance is as follows.
Assuming the finiteness of the variance of $F$ and a support for $F$ included in $\mathbb R^+$,
\begin{align}
\int_0^\infty x^2 \text dF(x)&=-\int_0^\infty x^2 \text d(1-F)(x)\\
&=-\underbrace{[x^2(1-F(x))]_0^\infty}_{0}+\int_0^\infty 2x(1-F(x))\text dx\tag{1}\\&=-\int_0^\infty 2x\frac{d}{dx}\left\{\mathbb E[X]-\int_0^x(1-F(y))\text dy\right\}\text dx\\
&=-\left[2x\left\{\mathbb E[X]-\int_0^x(1-F(y))\text dy\right\}\right]_0^\infty\\
&\qquad +2\int_0^\infty \left\{\mathbb E[X]-\int_0^x(1-F(y))\text dy\right\}\text dx\\
&=2\int_0^\infty \left\{{\int_0^\infty (1-F(y))\text dy}-\int_0^x(1-F(y))\text dy\right\}\text dx\\
&=2\int_0^\infty \int_x^\infty (1-F(y))\text dy\text dx\tag{2}
\end{align}
by successive integrations by parts. Note that (1) and (2) are identical by Fubini, i.e. by inverting the order of integration in (2). The extension to an arbitrary support in $\mathbb R$ proceeds by breaking
$$\int x^2 \text dF(x)=\int^0_{-\infty} x^2 \text dF(x)+\int_0^\infty x^2 \text dF(x)$$
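As a numerical check of step (1) — the single-integral form $E[X^2] = \int_0^\infty 2x(1-F(x))\,dx$ — here is a sketch using Exp(1), where $1-F(x)=e^{-x}$ and $E[X^2]=2$ (the double-integral form (2) gives the same value by Fubini):

```python
import numpy as np

# E[X^2] = ∫ 2x (1 - F(x)) dx for X ~ Exp(1): 1 - F(x) = exp(-x), E[X^2] = 2
x = np.linspace(0.0, 50.0, 2_000_001)        # the tail beyond 50 is negligible
integrand = 2.0 * x * np.exp(-x)
ex2 = float(np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(x)))
```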
| null | CC BY-SA 4.0 | null | 2023-03-30T05:13:55.623 | 2023-03-30T09:12:05.060 | 2023-03-30T09:12:05.060 | 7224 | 7224 | null |
611212 | 2 | null | 611075 | 0 | null | The edf are calculated by `lme4:::npar.merMod`. If you study the source code, you see that it is the sum of the number of fixed-effect parameters (`test1@beta`), the number of covariance parameters (`test1@theta`, i.e., one per random effect) + 1 if there is a scaling parameter (which is always the case for `lmer` models according to `help("lmList", "lme4")`).
The AIC is calculated as $-2 \cdot L + k \cdot edf$ where $L$ is the log-likelihood and usually $k = 2$. As you see, the estimated degrees of freedom are used for a penalty term for the number of parameters in a model. This term counters the effect that more complex models can always fit the data better than simpler models (at least if the models are nested). So, the whole point of AIC is that it allows comparing models with different numbers of parameters. Otherwise, you could just compare likelihoods directly.
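To illustrate the formula with something self-contained (a sketch in Python using a plain Gaussian linear model rather than a merMod; for lmer fits, the edf would additionally count one parameter per random-effect covariance term, as described above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

# OLS fit with an intercept
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
sigma2 = (resid ** 2).mean()                 # ML estimate of the error variance

# Gaussian log-likelihood at the ML estimates
loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)

# edf: 2 mean parameters + 1 scale parameter
edf = X.shape[1] + 1
aic = -2.0 * loglik + 2.0 * edf
```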
| null | CC BY-SA 4.0 | null | 2023-03-30T05:55:27.320 | 2023-03-30T05:55:27.320 | null | null | 11849 | null |
611215 | 1 | null | null | 1 | 13 | I am trying to understand whether it is acceptable practice to match treatment units with control units (either one-to-one or one-to-many) between a pre-period (control) group and a post-period (experimental) group.
| Propensity Score Matching Between Pre and Post Group | CC BY-SA 4.0 | null | 2023-03-30T06:36:24.380 | 2023-03-30T06:36:24.380 | null | null | 345721 | [
"propensity-scores"
] |
611216 | 1 | null | null | 0 | 22 | What is the best statistical test to be done to determine the best drawing of three drawings?
Data were collected on a Likert scale for various characteristics of the drawings, with questions on characteristics like colours, size, brush stroke, etc.
(Are the colours nice? Strongly agree, agree, neutral, disagree, strongly disagree)
Respondents to the questionnaire are divided into groups (city, age, gender, social class).
Top two boxes, top three boxes, bottom two boxes and bottom three boxes are also given.
So there are tables of data displaying the number of people who checked each box, given that they live in a certain city x and are of age y.
Which statistical test can be used to determine the winning drawing using this data?
| Quantitative analysis Statistical test of Likert Scale to determine best option from multiple options | CC BY-SA 4.0 | null | 2023-03-23T08:10:20.700 | 2023-06-03T07:41:51.163 | 2023-06-03T07:41:51.163 | 121522 | null | [
"probability",
"likert"
] |
611217 | 2 | null | 360244 | 0 | null |
- You can fit any ARMA(p,q)-GARCH(h,s) model directly by maximum likelihood before applying Filtered Historical Simulation.
- You should call the $Z_t$ “innovations” and the estimated $\hat{Z}_t$ standardized residuals.
- The innovations must be iid, otherwise FHS cannot be applied. You should check the ACF and PACF of the standardized residuals and their squared values. If they are not iid, the conditional model is not correctly specified and it must be modified.
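To illustrate the iid check (my own NumPy sketch, using simulated white noise in place of actual standardized residuals): for iid innovations, the sample ACF of both the residuals and their squares should stay within roughly $\pm 2/\sqrt{n}$ at all nonzero lags.

```python
import numpy as np

def sample_acf(x, lag):
    """Sample autocorrelation of x at a given lag."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    return float(np.dot(xc[:-lag], xc[lag:]) / np.dot(xc, xc))

rng = np.random.default_rng(0)
z = rng.standard_normal(5000)  # stand-in for iid standardized residuals

# Both series should show ACF ~ 0 (well within the ~2/sqrt(n) bands)
for lag in (1, 2, 3):
    print(lag, round(sample_acf(z, lag), 3), round(sample_acf(z**2, lag), 3))
```

If either series showed large, systematic autocorrelations instead, that would signal a misspecified conditional model.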
| null | CC BY-SA 4.0 | null | 2023-03-30T06:53:38.727 | 2023-03-30T06:53:38.727 | null | null | 296201 | null |
611218 | 2 | null | 611057 | 3 | null | Doing what you propose, i.e. adding a regularization term other than the KLD to the loss, is totally feasible. You can find many classical autoencoder architectures that incorporate a variety of regularization terms alongside the reconstruction term; see the [wiki](https://en.wikipedia.org/wiki/Autoencoder) entry on autoencoders, for example.
Now the critical point is that it would not be correct to refer to your new model as a VAE, because a VAE is strictly derived in the variational Bayes setting by constructing a [lower bound on the log-likelihood](https://en.wikipedia.org/wiki/Evidence_lower_bound). If you replace the regularizing KLD term without proper care, you blindly modify the loss and can no longer assert that you still have an ELBO, whose maximization is at the core of VAE training.
| null | CC BY-SA 4.0 | null | 2023-03-30T06:57:23.677 | 2023-03-30T06:57:23.677 | null | null | 244367 | null |
611220 | 2 | null | 611201 | 1 | null | An offset is a model term with a coefficient of 1. So, in a Poisson regression with an intercept, an offset $\log t_i$ (where $t_i$ is e.g. the observation time for record $i=1,\ldots,I$) and the log-link function, we have
$$\log E Y_i = \beta + \log t_i$$
and $Y_i \sim \text{Poisson}(e^{\beta + \log t_i})$.
You can re-write that as
$$\log \frac{E Y_i}{t_i} = \beta.$$
I.e., the intercept (or, if you add additional model terms, the whole regression equation) describes the logarithm of the expected number of events per unit of time.
Does this clarify it?
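To make it concrete, here is a small simulation (my own sketch, not from the original answer): for an intercept-only Poisson model with offset $\log t_i$, the MLE satisfies $e^{\hat\beta} = \sum_i Y_i / \sum_i t_i$, i.e. events per unit of exposure.

```python
import numpy as np

rng = np.random.default_rng(42)
beta_true = 0.5                      # log event rate per unit of time
t = rng.uniform(1.0, 10.0, 20000)    # observation times t_i
y = rng.poisson(np.exp(beta_true) * t)

# MLE for intercept-only Poisson regression with offset log(t):
# exp(beta_hat) = sum(y) / sum(t)
beta_hat = np.log(y.sum() / t.sum())
print(beta_hat)  # close to 0.5
```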
| null | CC BY-SA 4.0 | null | 2023-03-30T07:25:17.050 | 2023-03-30T07:25:17.050 | null | null | 86652 | null |
611221 | 1 | 611348 | null | 0 | 49 | There is a contradiction in my understanding of Sequential Monte Carlo for estimating Bayesian evidence for model comparison:
Marginal likelihood (aka normalizing constant, aka Bayesian evidence) estimates are supposed to be directly "apples to apples" comparable between different simulations of different models with different parameters. This is the basis for Bayesian Model Selection (e.g. as described in E.T. Jaynes chapter 20.)
But,
SMC permits me to use an unnormalized likelihood function and, at least in the context of Bayesian parameter estimation using likelihood tempering, it seems like I can "spike" the marginal likelihood estimate to any value at all via my choice of unnormalized likelihood function.
This is a contradiction and it suggests a fundamental misunderstanding on my part. But what?
Concretely, suppose I choose an unnormalized likelihood function that just uniformly returns a big number:
```
L = 1000
```
Then I used a five-step likelihood tempering to derive a series of incremental likelihood functions:
```
L0 = L^0 = 1
L1 = L^0.25 ~= 6
L2 = L^0.5 ~= 31
L3 = L^0.75 ~= 178
L4 = L = 1000
```
Since the likelihood is uniform it doesn't matter how many particles I have: they would all be equivalent anyway. So let's suppose there is just one.
SMC will do importance sampling between the likelihood functions and compute these weights:
```
W0 = L0 ~= 1
W1 = L1/L0 = 6/1 ~= 6
W2 = L2/L1 = 31/6 ~= 5.2
W3 = L3/L2 = 178/31 ~= 5.74
W4 = L4/L3 = 1000/178 ~= 5.61
```
and the marginal likelihood will just be the product of these incremental weights:
```
ML = 6 * 5.2 * 5.74 * 5.61 ~= 1000
= L
```
So there we have it: SMC estimates the normalizing constant / marginal likelihood to be the value of the unnormalized likelihood function, L, but that's just a number that I made up. It doesn't have any absolute/normalized meaning as a basis for comparison with other simulations from other models.
So where did I go wrong? How do I fix this approach so that the marginal likelihood value will be valid and practical for comparison between simulations of different models?
| Bayesian evidence with Sequential Monte Carlo and an unnormalized likelihood function: a contradiction? | CC BY-SA 4.0 | null | 2023-03-30T08:02:11.310 | 2023-03-31T07:37:18.770 | null | null | 167476 | [
"bayesian",
"model-selection",
"model-comparison",
"particle-filter",
"sequential-monte-carlo"
] |
611222 | 1 | null | null | 0 | 53 | I have read that you should use a Z-test when your sample size is $n > 30$, because this is roughly the point where the sampling distribution of the sample mean becomes approximately normal.
The Z-Test equation has more power when you increase the sample size:
$$Z = \frac{\bar X - \mu_0}{\frac{\sigma}{\sqrt{n}}}$$
where $\bar X$ is the sample mean, $\mu_0$ is the population mean, $\sigma$ is the standard deviation of the population, and $n$ is the sample size.
Do you still need to worry about your sample size if you have a good estimate of $\mu_0$ and $\sigma$? Or can you replace the $\frac{\sigma}{\sqrt{n}}$ term with $\sigma$?
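For reference, here is a direct computation of the statistic above (my own numeric illustration with arbitrary numbers), showing how $|Z|$ grows with $n$ for a fixed difference in means:

```python
import math

def z_statistic(xbar, mu0, sigma, n):
    """Z = (xbar - mu0) / (sigma / sqrt(n))."""
    return (xbar - mu0) / (sigma / math.sqrt(n))

# Same observed difference (52 vs 50) and sigma = 10:
print(z_statistic(52, 50, 10, 25))   # 1.0
print(z_statistic(52, 50, 10, 100))  # 2.0 -- larger n, larger |Z|
```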
| Are large samples always needed for a Z-Test? | CC BY-SA 4.0 | null | 2023-03-30T08:04:29.160 | 2023-03-30T10:19:45.977 | 2023-03-30T10:19:45.977 | 363857 | 363857 | [
"hypothesis-testing",
"statistical-significance",
"standard-deviation",
"z-test"
] |
611223 | 1 | 611256 | null | 3 | 60 | I wanted to check if these reasonings are correct.
The formula for a multilinear regression, input $X_{s,i}$, where $s$ is the sample and $i$ the features, and output $Y$, is given by:
$$\beta=(X^\top X)^{-1}X^\top Y$$
Now if we define:
$Q_{ij}=\sum_s X_{si}X_{sj},\qquad M_j=\sum_s X_{sj}Y_s$
then the equation is equivalent to:
$$\beta_i=\sum_{j} [Q^{-1}]_{ij}M_{j}$$
Now we can add the number of samples into this formula $N_s$:
$$\beta_i=\sum_{j} [Q/N_s]^{-1}_{ij}[M/N_s]_{j}$$
and observe that we have some sample averages:
$Q_{ij}/N_s=\sum_s \left( X_{si}X_{sj}\right) /N_s \rightarrow E[X_iX_j]$
and:
$M_j/N_s=\sum_s \left( X_{sj}Y_s \right)/N_s \rightarrow E[X_jY]$,
so that in the large sample limit we expect:
$$\beta_i \rightarrow \sum_j \left[(E[X_*X_*])^{-1}\right]_{ij}E[X_jY]$$
- Question 1. Is this result expected/trivial/correct ?
Now we can try to get a large-sample limit for ridge regression. From what I found online, this implies that $Q$ must be replaced with $Q+\lambda I$, and the formula becomes, after inserting the number of samples as before:
$$\beta^R_i=\sum_{j} [(Q+\lambda I)/N_s]^{-1}_{ij}[M/N_s]_{j}$$
But here it seems that, as before:
$(Q+\lambda I)/N_s = Q/N_s + (\lambda/N_s) I \rightarrow E[X_iX_j]$
because the added factor becomes negligible. If we want to have a different limit we should have $\lambda$ scaling with the samples : $\lambda=\rho N_s$ and in this case:
$(Q+\rho N_s I)/N_s = Q/N_s + \rho I \rightarrow E[X_iX_j]+\rho I$
, so that:
$$ \beta^R_i \rightarrow \sum_j\left[(E[X_*X_*]+\rho I)^{-1}\right]_{ij}E[X_jY] $$
- Question 2. Is this result expected/trivial/correct ?
- Question 3. If yes, is it correct that the ridge regression factor should scale with $N_s$ to get a finite value ?
- Question 4. Can we understand something about the behavior of Ridge regression looking at the large sample limit formulas?
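As a sanity check of Questions 2-3, here is a small simulation (my own sketch): with standardized Gaussian features, $E[X_*X_*]=I$ and $E[X_jY]=\beta_j$, so with $\lambda=\rho N_s$ the predicted limit is $(I+\rho I)^{-1}\beta_{\text{true}} = \beta_{\text{true}}/(1+\rho)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, rho = 200_000, 3, 0.5
beta_true = np.array([1.0, -2.0, 0.5])

X = rng.standard_normal((n, p))          # so E[X X^T] = I
y = X @ beta_true + rng.standard_normal(n)

lam = rho * n                            # lambda scaling with sample size
Q = X.T @ X
M = X.T @ y
beta_ridge = np.linalg.solve(Q + lam * np.eye(p), M)

# Predicted large-sample limit: beta_true / (1 + rho)
print(beta_ridge)  # ~ [0.667, -1.333, 0.333]
```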
| Large sample limit of linear and ridge regression | CC BY-SA 4.0 | null | 2023-03-30T08:22:19.890 | 2023-03-30T12:36:56.230 | 2023-03-30T08:43:24.280 | 70458 | 70458 | [
"self-study",
"mathematical-statistics",
"linear-model",
"ridge-regression"
] |
611224 | 1 | null | null | 0 | 19 | I've been following a polytomous latent class regression routine in R called 'poLCA.'
My goal is to generate a visual analogue to the attached visual using the LCA coefficients and the formula:
$$p_{ri} = pr(x_i;\beta) = \frac{e^{x_i\beta_r}}{\sum_{q=1}^{R} e^{x_i\beta_q}}$$
However, at the moment, I am struggling with putting all the information together from the poLCA routine.
Therefore, my goal is not to generate the visual using code (this is available in the vignette). But if anyone could show me a numerical example of how one line on the plot may be generated, this would be greatly appreciated.
All information taken from Eq.11, and Section 6.2 of the vignette. [https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=a95a9fd3ee69c007e8ba23ee69aa7f8f81e94187](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=a95a9fd3ee69c007e8ba23ee69aa7f8f81e94187)
The attached example has:
- 3 latent classes.
- manifest variables = a set of political factors.
- 1 numeric covariate, Party (1–7): ~1 = Democratic preference; 3–5 = Independent preference; ~7 = Republican preference.
Coefficients:
[](https://i.stack.imgur.com/RLPtv.png)
Visualisation of three class probabilities:
[](https://i.stack.imgur.com/hhD1W.png)
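In the meantime, here is a minimal numeric sketch of Eq. 11 that I put together with made-up coefficients (the real ones are in the coefficients table above). With class 1 as the reference class ($\beta_1 = 0$), one point per class on the plot is the softmax evaluated at a given value of Party:

```python
import math

# Hypothetical multinomial-logit coefficients (intercept, slope on Party);
# class 1 is the reference class with beta_1 = (0, 0).
betas = [(0.0, 0.0), (1.0, -0.6), (-2.0, 0.5)]

def class_probs(party):
    """Eq. 11: p_r = exp(x b_r) / sum_q exp(x b_q), with x = (1, party)."""
    scores = [math.exp(b0 + b1 * party) for b0, b1 in betas]
    total = sum(scores)
    return [s / total for s in scores]

# One point per class curve, evaluated at Party = 4:
print(class_probs(4))  # three probabilities summing to 1
```

Sweeping `party` from 1 to 7 and plotting each class's probability would reproduce one curve per class, as in the attached visual.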
| Latent Class Regression - Probability of Class Membership | CC BY-SA 4.0 | null | 2023-03-30T08:58:23.327 | 2023-03-30T09:12:20.780 | 2023-03-30T09:12:20.780 | 196520 | 196520 | [
"regression",
"machine-learning",
"probability",
"latent-class"
] |
611225 | 2 | null | 611100 | 1 | null | As per @Vincent's suggestion, the correct translation of the pairwise Stata behavior would be:
```
pred = predictions(ols,
  by = c("x1", "x2"),
  newdata = datagridcf(x1 = x1quants, x2 = x2quants)
)
```
| null | CC BY-SA 4.0 | null | 2023-03-30T08:59:18.000 | 2023-03-30T08:59:18.000 | null | null | 346599 | null |
611226 | 1 | null | null | 3 | 247 | In many machine learning papers, there are notations like N(0,I). That notation seems to describe a normal distribution; if so, why not write N(0,1)?
Could someone explain the difference between the distributions N(0,I) and N(0,1)? Many thanks.
| normal distribution in machine learning | CC BY-SA 4.0 | null | 2023-03-30T08:53:12.690 | 2023-03-30T09:19:31.340 | null | null | 384562 | [
"machine-learning",
"normal-distribution"
] |
611227 | 2 | null | 611226 | 7 | null | At first it can be hard to see what the $I$ stands for. In general, this notation designates a square matrix with $1$s on the diagonal and $0$s elsewhere; $I$ is called the identity matrix.
$N(\mathbf{0},I)$ is not the univariate normal distribution, and here $0$ is a vector of zeros; let us write it in bold as $\mathbf{0}$ to distinguish it from a scalar.
$N(\mathbf{0},I)$ belongs to a more general family, the multivariate normal distribution. The multivariate normal distribution on $\mathbb{R}^n$ is characterized by two parameters: the mean $\mathbf{\mu} \in \mathbb{R}^n$ and the covariance matrix $\Sigma \in \mathbb{R}^{n \times n}$, with $\Sigma$ a positive definite matrix. The probability density function of $N(\mathbf{\mu},\Sigma)$ is given by:
$$
f(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}\det(\Sigma)^{1/2}}e^{ -\frac{1}{2}(\mathbf{x} - \mathbf{\mu})^t \Sigma^{-1}(\mathbf{x} - \mathbf{\mu}) }.
$$
Then, if you replace $\mathbf{\mu}$ by $\mathbf{0}$ and $\Sigma$ by $I$, you have your distribution with density function:
$$
f(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}}e^{ -\frac{1}{2}\mathbf{x}^t \mathbf{x} }.
$$
For $n= 1$, the previous expression corresponds to the density function of a standard normal distribution.
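As a quick empirical illustration (my own addition, using NumPy): sampling from $N(\mathbf{0}, I)$ gives a sample mean close to $\mathbf{0}$ and a sample covariance matrix close to $I$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3  # dimension
samples = rng.multivariate_normal(mean=np.zeros(n), cov=np.eye(n), size=100_000)

print(np.round(samples.mean(axis=0), 2))  # ~ zero vector
print(np.round(np.cov(samples.T), 2))     # ~ identity matrix
```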
I hope this answers your question.
| null | CC BY-SA 4.0 | null | 2023-03-30T09:19:31.340 | 2023-03-30T09:19:31.340 | null | null | 383929 | null |
611228 | 1 | 611233 | null | 3 | 103 | Let $X_1,\dots, X_n$ be an iid sample from $X\sim f(x;\theta)=(\theta+1)x^\theta$, $0<x<1$, for $\theta>0$. Find the MLE $\hat{\theta}$ and the asymptotic distribution of $\hat{\theta}$.
---
My work: The log-likelihood function is that
$$
\ell(\theta)=n\log(\theta+1)+\sum \theta\log X_i.
$$
The MLE is that
$$
\hat{\theta}=-\left(\frac{n}{\sum\log X_i}+1\right).
$$
Now, the asymptotic distribution of $\hat{\theta}$ should be
$$
\sqrt{n}(\hat{\theta}-\theta_0)\to N(0, I^{-1}(\theta_0))
$$
where $$I(\theta)=E\left[\left(\frac{\partial \ell(\theta)}{\partial \theta}\right)^2\right].$$
Here $\hat{\theta}\to \theta$ in probability.
So the result is that $$\sqrt{n}(\hat{\theta}-\theta)\to N\left(0, \frac{(\theta+1)^2}{n}\right)$$?
| Find the MLE say $\hat{\theta}$ and the asymptotic distribution of $\hat{\theta}$ | CC BY-SA 4.0 | null | 2023-03-30T09:21:08.440 | 2023-04-25T09:52:16.600 | 2023-04-25T09:52:16.600 | 56940 | 334918 | [
"self-study",
"mathematical-statistics",
"maximum-likelihood"
] |
611231 | 2 | null | 611228 | 2 | null | So far so good. Just one thing you need to be careful with:
In your expression for the asymptotic distribution of $\sqrt{n}(\hat{\theta}-\theta_0)$, the term $\mathcal{I}(\theta)$ is the information contained in a single observation.
I prefer the notation $\mathcal{I}_{X_1}(\theta)$ for this quantity, and $\mathcal{I}_{\mathbf{X}}(\theta) = n \mathcal{I}_{X_1}(\theta)$ for the information contained in a random sample of size $n$.
Also, remember that there is an alternative formula for the Fisher information,
$$
\mathcal{I}_{\mathbf{X}}(\theta)= -\mathbb{E}\left(\frac{\partial^2}{\partial \theta^2} \ell(\theta)\right)
$$
This is much easier to evaluate in this situation.
| null | CC BY-SA 4.0 | null | 2023-03-30T09:46:45.223 | 2023-03-30T09:46:45.223 | null | null | 238285 | null |
611232 | 2 | null | 599188 | 0 | null | I guess there is a misunderstanding here because the output of the decoder network theoretically also corresponds to the mean parameter of the conditional likelihood $p(x|z)$.
And in fact, in practice, as your title mentions, we directly take this predicted mean as an output sample from the network.
To really get a feel of what is happening, consider the Continuous Bernoulli VAE ([article](https://proceedings.neurips.cc/paper/2019/file/f82798ec8909d23e55679ee26bb26437-Paper.pdf)). Here, as opposed to the Gaussian distribution, the parameter $\lambda$ of the CB distribution used as conditional likelihood does not correspond to the mean of the CB distribution, hence the sample is obtained after a further transformation (Equation 8 of the paper).
| null | CC BY-SA 4.0 | null | 2023-03-30T09:47:13.567 | 2023-03-30T09:47:13.567 | null | null | 244367 | null |
611233 | 2 | null | 611228 | 6 | null | Your work is almost fine and a bit incomplete. To complete the answer you'll have to compute the Fisher information
\begin{align}
I_1(\theta) & = -E\left(\frac{d^2\ell(\theta;X_1)}{d\theta^2}\right)
\end{align}
where $\ell(\theta;X_1) = \log f(X_1;\theta)$ is the log-likelihood for a single observation, here chosen to be the first. You can also use the equivalent expression based on the squared score function, though that may lead to slightly longer computations.
Remark You can also use the asymptotically equivalent version based on the observed information $J(\theta) = -\frac{d^2\ell(\theta)}{d\theta^2}$. The rationale for this choice is that, under broad regularity conditions, $J(\theta)/n$ converges in probability to $I_1(\theta)$ as $n\to\infty$.
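As a quick numerical check of the result (my own simulation sketch): since $F(x)=x^{\theta+1}$ on $(0,1)$, draws can be generated by inverse-CDF as $U^{1/(\theta+1)}$, and the sampling variance of $\hat\theta$ should be close to $\mathcal I_{X_1}(\theta)^{-1}/n=(\theta+1)^2/n$.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 500, 4000

u = rng.uniform(size=(reps, n))
x = u ** (1.0 / (theta + 1.0))   # inverse-CDF draws from f(x) = (t+1) x^t

# MLE per replication: theta_hat = -n / sum(log x) - 1
theta_hat = -n / np.log(x).sum(axis=1) - 1.0

print(theta_hat.mean())     # ~ theta = 2
print(theta_hat.var() * n)  # ~ (theta + 1)^2 = 9
```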
| null | CC BY-SA 4.0 | null | 2023-03-30T09:47:18.007 | 2023-03-30T10:10:48.540 | 2023-03-30T10:10:48.540 | 56940 | 56940 | null |