Id stringlengths 1 6 | PostTypeId stringclasses 7 values | AcceptedAnswerId stringlengths 1 6 ⌀ | ParentId stringlengths 1 6 ⌀ | Score stringlengths 1 4 | ViewCount stringlengths 1 7 ⌀ | Body stringlengths 0 38.7k | Title stringlengths 15 150 ⌀ | ContentLicense stringclasses 3 values | FavoriteCount stringclasses 3 values | CreationDate stringlengths 23 23 | LastActivityDate stringlengths 23 23 | LastEditDate stringlengths 23 23 ⌀ | LastEditorUserId stringlengths 1 6 ⌀ | OwnerUserId stringlengths 1 6 ⌀ | Tags list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
615524 | 1 | null | null | 0 | 11 | [This source](https://www.scribbr.com/frequently-asked-questions/difference-quota-and-stratified-sampling/) says the difference between quota sampling and stratified sampling is that the units in each group are drawn in a non-random manner. What are the statistical implications of the fact that the units are drawn non-randomly?
| Statistical implications of quota sampling | CC BY-SA 4.0 | null | 2023-05-11T07:04:24.267 | 2023-05-11T07:04:24.267 | null | null | 172814 | [
"sampling",
"survey"
] |
615525 | 1 | null | null | 0 | 12 | I have a three-level model: level 1 is time points of measurements, level 2 is measurement day, and level 3 is participant. When I run a correlation to understand the within-person (between-day) association, the correlation is significantly positive; however, if I run a multilevel analysis following a growth curve model, the result is significantly negative. How is this possible? Did I do something wrong? Any recommendation to understand this result?
I got opposite results between the two analyses; I expected similar results, either both negative or both positive.
| How come my correlation table has a opposite result from my growth curve model? | CC BY-SA 4.0 | null | 2023-05-11T07:22:37.927 | 2023-05-11T07:22:37.927 | null | null | 387716 | [
"multilevel-analysis",
"growth-mixture-model"
] |
615526 | 1 | null | null | 0 | 12 | For my research on island biology I am currently studying the difference in seed fall between islands.
For this research I collected plants on the islands and dropped 5 seeds for each plant.
This means that seeds are nested in plants and plants are nested in islands.
My data looks as follows:
|Drop time |Seed ID |PlantID |Distance to coast(cm) |plant height(cm) |Island |
|---------|-------|-------|---------------------|----------------|------|
|5 |1 |1 |20 |100 |Vis |
|8 |2 |1 |100 |20 |Bishevo |
I used the following code:
lm10 <- lmer(Drop.time ~ Island*Distance.to.coast*plant.height + (1 + plant.height + Distance.to.coast | Seed.ID:PlantID) + (1 + plant.height + Distance.to.coast | PlantID:Island))
Unfortunately I get the following error messages:
fixed-effect model matrix is rank deficient so dropping 4 columns / coefficients
boundary (singular) fit: see help('isSingular')
Error in lmer(Gemiddelde.valtijd ~ Eiland * Distance.to.the.Coast..meter. * :
non-numeric argument to binary operator
In addition: Warning messages:
1: Some predictor variables are on very different scales: consider rescaling
2: In Plant:Eiland :
numerical expression has 120 elements: only the first used
3: In Plant:Eiland :
numerical expression has 120 elements: only the first used
Now I am scared that the statistical analysis is not usable.
Can I ignore these errors?
I already wrote some discussion of this data, but I do not know how correct using this data is.
| Double nested lmer | CC BY-SA 4.0 | null | 2023-05-11T07:38:43.140 | 2023-05-11T07:58:06.090 | null | null | 378046 | [
"r",
"regression",
"mixed-model",
"lme4-nlme",
"nested-data"
] |
615527 | 1 | 615535 | null | 3 | 232 | I have a problem regarding regression with systematically missing data. I cannot describe the exact setting, so I will make up a situation that captures everything that is essential.
Let's imagine the following experiment setting: I am doing research on the probability of individuals participating in the labour market (binary variable). One of the independent variables in my dataset is the age of the first kid. When the individual has a kid, then the variable is simply its age. When the individual doesn't have offspring, it is NA. Note that the data is not missing at random: it reflects the observable characteristic that some people have kids while others don't. More importantly, it divides the sample into two sub-samples, in a way that is potentially very correlated with the outcome variable.
Now, how can I use this variable in a logit model? I cannot simply fill the column with zeroes. I also cannot use the variable has_children x age_of_firstborn, since its value would be the same as just filling with zeroes. I also thought about running two regressions, one for people with kids and one for those without, but I think it would be a pity, because I would lose generality. I am not even sure whether this can be considered truncated data. Any ideas?
| Systematically Missing Data | CC BY-SA 4.0 | null | 2023-05-11T07:38:51.230 | 2023-05-11T20:06:43.007 | 2023-05-11T20:06:43.007 | 919 | 387715 | [
"regression",
"logistic",
"inference",
"missing-data"
] |
615529 | 1 | null | null | 0 | 20 | I have been working on coding a simple Neural Network, and I have come across a question that I would like to discuss with you. I am trying to approximate two functions with the Neural Network: $f_1(x) = x^2$ and $f_2(x) = (1-x)^2$. To calculate the local losses, I am using the following equations:
$L_1 = \|NN(x) - x^2\|$ and $L_2 = \|NN(x) - (1 - x)^2\|$
My question is regarding the optimization process. Should I perform the optimization for each condition separately rather than cumulatively? Or should I accumulate the gradient over all conditions before backpropagating? I am not sure which approach is more appropriate.
[](https://i.stack.imgur.com/nZ4Dt.png)
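To make the two options concrete, here is a tiny toy sketch (a stand-in linear model $NN(x) = wx$, with the norms treated as squared errors for differentiability — both are assumptions made only for illustration). By linearity of differentiation, accumulating the gradients of both conditions before the update coincides with backpropagating the summed loss once:

```python
import numpy as np

# Stand-in "network" NN(x) = w * x with squared local losses (assumptions
# made only to illustrate the two options).
w, x = 0.7, 0.3

def grad_L1(w):
    # d/dw (w*x - x**2)**2
    return 2 * (w * x - x**2) * x

def grad_L2(w):
    # d/dw (w*x - (1 - x)**2)**2
    return 2 * (w * x - (1 - x) ** 2) * x

# Option A: accumulate the gradients of both conditions, then step once.
g_accumulated = grad_L1(w) + grad_L2(w)

# Option B: backpropagate the summed loss L1 + L2 (numerical derivative).
eps = 1e-6
L = lambda v: (v * x - x**2) ** 2 + (v * x - (1 - x) ** 2) ** 2
g_summed = (L(w + eps) - L(w - eps)) / (2 * eps)

print(np.isclose(g_accumulated, g_summed))  # True: they coincide for one step
```

So "accumulate, then update once" and "sum the losses, then backpropagate" give the same step; optimizing each condition separately differs only because the second update starts from an already-modified parameter.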
| Accumulate the gradient on all conditions before or after backpropagating | CC BY-SA 4.0 | null | 2023-05-11T07:58:41.557 | 2023-05-11T09:07:05.593 | 2023-05-11T09:07:05.593 | 362981 | 362981 | [
"neural-networks",
"backpropagation"
] |
615530 | 2 | null | 615412 | 3 | null | If you have some parametric function $F$ for a survival curve that predicts the time $T$ of some event
$$P(T\leq t | \theta_1,\theta_2) = F(t;\theta_1,\theta_2)$$
and the parameters themselves are random as well
$$\boldsymbol{\theta} \sim MVN(\boldsymbol{\mu}, \boldsymbol{\sigma})$$
then you can express the probability by integrating over all cases:
$$P(T\leq t) = \iint_{\forall \theta_1,\theta_2} P(T\leq t | \theta_1,\theta_2) f(\theta_1,\theta_2) d\theta_1 d\theta_2$$
Possibly this might be evaluated analytically or approximated. Your question seems to use the approach of simulations. In that case the result is the average of your simulated curves.
---
Computational example:
Let the waiting time for the event be exponentially distributed
$$T \sim Exp(\lambda)$$
with a variable rate $\lambda$
$$\lambda \sim N(1,0.04)$$
Then the survival curve can be computed as the average
$$S(t) = E[exp(-\lambda t)]$$
and this follows a [log normal distribution](https://en.m.wikipedia.org/wiki/Log-normal_distribution) (approximately, because the case here is truncated at zero) with mean parameter $-\mu t$ and scale parameter $\sigma t$; thus we have
$$S(t) \approx exp(-\mu t + 0.5 \sigma^2 t^2)$$
(the formula breaks down for large $\sigma$ or $t$, when the approximation of the truncated distribution by a non-truncated distribution fails).
The simulation below shows that this approximation can work
[](https://i.stack.imgur.com/ubfJj.png)
```
set.seed(1)
### generate data from exponential distribution
### with variable rate
n = 10^5
lambda = rnorm(n,1,0.2)
t = rexp(n,lambda)
### order data for plotting as
### empirical survival curve
t = t[order(t)]
p = c(1:n)/n
### plotting
plot(t, 1-p, ylab = "P(T>t)", main = "empirical survival curve \n t ~ exp(lambda)\n lambda ~ N(1,0.04)", type = "l", log = "y")
### compare two models
lines(t,exp(-t+0.2^2/2*t^2), col = 4, lty = 2)
lines(t,exp(-(1)*t), col = 2, lty = 2)
```
| null | CC BY-SA 4.0 | null | 2023-05-11T08:31:50.483 | 2023-05-14T20:31:14.130 | 2023-05-14T20:31:14.130 | 164061 | 164061 | null |
615531 | 1 | null | null | 2 | 67 | I'm currently concerned with the topic of Gaussian processes. To compute the covariance matrix of the conditional distribution, we have to compute the inverse $K_{XX}^{-1}$, where $K_{XX}$ is the matrix of a kernel function $k(x_i, x_j)$ evaluated on all pairs of training samples.
For a matrix to always have an inverse, it has to be positive definite (pd.).
However, when we check for the validity of kernels, it is often based on Mercer's Theorem and this seems to only ensure positive semi-definiteness (psd.).
[https://las.inf.ethz.ch/courses/introml-s20/tutorials/tutorial5-kernels2.pdf](https://las.inf.ethz.ch/courses/introml-s20/tutorials/tutorial5-kernels2.pdf) (ETH Zürich, Slide 13).
[https://people.eecs.berkeley.edu/~jordan/courses/281B-spring04/lectures/lec3.pdf](https://people.eecs.berkeley.edu/%7Ejordan/courses/281B-spring04/lectures/lec3.pdf) (Berkeley, Page 4).
But positive semi-definite kernels produce positive semi-definite matrices.
Could someone explain where my misconception is?
Are there valid kernels for which we get a psd matrix that is not invertible, so that the Gaussian process fails?
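To illustrate the worry with a small numpy sketch (my own toy example, assuming a squared-exponential kernel): duplicated training inputs already make $K_{XX}$ exactly singular, and a common practical remedy is adding a small "jitter" term to the diagonal before inverting.

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0):
    # Squared-exponential kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 l^2))
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * lengthscale**2))

X = np.array([[0.0], [1.0], [1.0]])    # two identical training points
K = rbf_kernel(X)
print(np.linalg.matrix_rank(K))        # 2, not 3: K is psd but singular

K_jitter = K + 1e-6 * np.eye(3)        # jitter makes it strictly pd
print(np.linalg.cond(K_jitter) < 1e12) # True: now safely invertible
```

So psd-but-singular kernel matrices do occur in practice, which is (I assume) part of why GP implementations typically add a noise/jitter term by default.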
Edit: I know the kernels we classically use are pd as listed here: [https://en.wikipedia.org/wiki/Positive-definite_kernel](https://en.wikipedia.org/wiki/Positive-definite_kernel).
This seems to be related to the unanswered post from [Is a valid kernel function have to be positive definite?](https://stats.stackexchange.com/questions/433249/is-a-valid-kernel-function-have-to-be-positive-definite)
| Why does a valid Kernel only have to be positive semi-definite instead of positive definite? | CC BY-SA 4.0 | null | 2023-05-11T08:34:54.987 | 2023-05-12T17:46:59.010 | null | null | 387721 | [
"gaussian-process",
"kernel-trick",
"matrix-inverse"
] |
615534 | 1 | null | null | 2 | 18 | This was my professor's interpretation but he didn’t provide an example:
there could be training points at the same distance from x such that more than k points are closest to x. In this case, we proceed by ranking the training points by their distance from x and then taking the k′ closest points, where k′ is the smallest integer greater than or equal to k such that the (k′+1)-th point in the ranking has distance from x strictly larger than the k′-th point. If no such k′ exists, then we take all the points
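The rule above can be sketched in a few lines (my own illustration in Python, not code from the course):

```python
import numpy as np

def knn_with_ties(distances, k):
    """Indices of the k' nearest points, extending k to include ties."""
    order = np.argsort(distances, kind="stable")
    d_sorted = distances[order]
    # k' is the smallest integer >= k such that the (k'+1)-th ranked point
    # is strictly farther than the k'-th; if none exists, take all points.
    kp = k
    while kp < len(d_sorted) and d_sorted[kp] == d_sorted[kp - 1]:
        kp += 1
    return order[:kp]

d = np.array([1.0, 2.0, 2.0, 2.0, 3.0, 4.0])
print(knn_with_ties(d, k=2))  # [0 1 2 3]: all three tied points at 2.0 are kept
```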
| what does the k-NN Algo do with equidistant training points from the test point? | CC BY-SA 4.0 | null | 2023-05-11T08:53:27.973 | 2023-05-11T09:08:50.560 | null | null | 387723 | [
"k-nearest-neighbour"
] |
615535 | 2 | null | 615527 | 7 | null | There are a lot of techniques for dealing with missing values (multiple imputation, EM, weighting, etc.). However, none of these apply to your situation, as your missing values are not missing values as these techniques understand them. For a value to be missing, it first needs to exist. In your case the age of the child just does not exist for people without children, so it cannot be missing. Unfortunately for you, they still have a missing value code in your data. So you need to do something, but those techniques are not applicable to your situation.
Let's say you want to explain a variable $y$ with the respondent's education ($educ$), which is observed for everybody, and the age of the youngest child ($chage$), which is only observed for people with children. The child's age cannot influence $y$ for persons without children; this characteristic just does not exist for those persons. So for these persons the logistic regression model is $P(y=1|x)=\Lambda(\beta_0 + \beta_1educ)$, where $\Lambda(\cdot)= \frac{\exp(\cdot)}{1+\exp(\cdot)}$. For people with children, your model is $P(y=1|x)=\Lambda(\beta_0^* + \beta_1educ + \beta_2chage)$. It is reasonable to suspect that just the fact of having children has its own effect on top of the children's age, so $\beta_0\neq\beta_0^*$.
You can estimate this model. The first step is to create a new variable indicating whether or not a person has children (in all likelihood you don't need to create that variable as it is probably already in your data). Let's call that variable $child$. The second step is to change the variable $chage$ to 0 for all persons without children. Then you can add $child$ and $chage$ to your model. So we have $P(y=1|x)=\Lambda(\beta_0 + \beta_1educ + \beta_2chage + \beta_3 child )$
If someone does not have children, then $child = 0$ and $chage = 0$. So the model becomes $P(y=1|x)=\Lambda(\beta_0 + \beta_1educ + \beta_2 0 + \beta_3 0 ) = \Lambda(\beta_0 + \beta_1educ )$, which is what we wanted.
If someone has children, then $child = 1$. So the model becomes $P(y=1|x)=\Lambda(\beta_0 + \beta_1educ + \beta_2chage + \beta_3 1 ) = \Lambda(\underbrace{\beta_0 + \beta_3}_{\beta_0^*} + \beta_1educ + \beta_2chage )$, which is again what we wanted.
Notice that the number of variables in your model differs depending on whether $chage$ had a missing value or not. In your case that is exactly what we want, because quite sensibly we don't want to control for characteristics that don't exist for an individual. However, this also means that this trick does not work for the general case of missing values, i.e. when the value exists but we have not observed it.
| null | CC BY-SA 4.0 | null | 2023-05-11T08:56:17.590 | 2023-05-11T08:56:17.590 | null | null | 23853 | null |
615537 | 2 | null | 615534 | 1 | null | In practice this is usually solved by taking arbitrary points, e.g. the first $k$ points or a random $k$ points. The rationale is that it should not make that big a difference, and if it does, then maybe you should pick a larger $k$ parameter. $k$NN is intended to be a simple algorithm, but it can be computationally demanding with a lot of data, so adding unnecessary complexity to it is not desirable.
| null | CC BY-SA 4.0 | null | 2023-05-11T09:08:50.560 | 2023-05-11T09:08:50.560 | null | null | 35989 | null |
615538 | 1 | null | null | 0 | 21 | We are performing an independent external validation of 2 published diagnostic risk prediction tools (logistic regression models), which estimate the risk of having a specific disease (model A and model B). We have done the following.
- Assessed discrimination by calculating c-index
- Evaluated model calibration using calibration plots
- Compared the two models using decision curves
Doing so told us that discrimination of A and B was OK (c-index around 0.80 for both), and calibration was better for B (close to ideal, whereas model A underestimated the risk for most patients). Finally, model B had the highest net benefit across a wide range of thresholds.
My specific questions:
- Given sufficient statistical power, does it make sense to do the above for subgroups of patients (e.g. women, men, age-groups etc) to identify individuals in which the model works best? I saw this being done by prediction model experts here. Would this be possible for all subgroups or only for subgroups based on covariates in the model (e.g. can I only look at the performance of women/men separately if sex is a predictor)?
- I was able to compare the performance of the models using decision curves. As they are based on different covariates, does it (in theory) make sense to evaluate the performance of applying both models sequentially?
Many thanks for your help.
| External validation of diagnostic model: Performance in subgroups | CC BY-SA 4.0 | null | 2023-05-11T09:38:41.923 | 2023-05-11T09:51:32.240 | 2023-05-11T09:51:32.240 | 305011 | 305011 | [
"regression",
"predictive-models",
"validation",
"regression-strategies",
"diagnosis"
] |
615539 | 2 | null | 614382 | 1 | null | I have figured out an (almost) correct answer to my question, so I will post it here and leave room for others to weigh in and improve it.
Answer to the first question
Apparently, there is no consensus as to the definition of the standard error of the weighted mean. Even different statistical software packages use different definitions. However, the most coherent answer that I keep seeing is the following unbiased estimate of the standard error of a weighted mean:
$$
se= \frac{s_w}{\sqrt{\sum_i^n w_i}}
$$
where the $s_w$ is the unbiased estimator of the standard deviation of the random variable $X$ and $\sum_i^n w_i$ is the sum of the individual weights that contribute to the unbiased estimation of $X$. The following [link](https://www.analyticalgroup.com/download/weighted_mean.pdf) is a statistical note that compares how it is computed in SPSS vs WinCross and SPSS uses the sum of weights as the denominator (which happens to be almost the same as the sample size $n$ in their example). So in the example I provided in my question, the sum of weights is $\sum_i^n w_i = 92$.
Answer to the second question
I came up with the following formulas for recursive computation of the weighted mean, weighted standard deviation and the standard error on the weighted mean:
Given that the current known data points are $n$ and the next data point that triggers the update is denoted as $n+1$, we can express the weighted stats like so:
Recursive weighted mean:
$$
\bar{x}_{w,n+1} = \frac{(\sum_{i=1}^{n} w_i) \bar{x}_{w,n} + w_{n+1} \times x_{n+1}}{\sum_{i=1}^{n} w_i + w_{n+1}}
$$
Recursive weighted standard deviation:
$$
s_{w,n+1} = \sqrt{\frac{(\sum_{i=1}^{n} w_i) \times (s_{w,n}^2 + [\bar{x}_{w,n} - \bar{x}_{w,n+1}]^2) + w_{n+1} (x_{n+1} - \bar{x}_{w,n+1})^2}{\sum_{i=1}^{n} w_i + w_{n+1}}}
$$
Standard error of the weighted mean
$$
se_w = \frac{s_{w,n+1}}{\sqrt{\sum_{i=1}^{n} w_i + w_{n+1}}}
$$
Python's `statsmodels` implements a class that computes all sorts of weighted statistics, including the standard deviation and standard error (the method named `std_mean` in their [source](https://www.statsmodels.org/dev/_modules/statsmodels/stats/weightstats.html#DescrStatsW) code). As we can see from their implementation, their unbiased estimator of the standard error with the degrees-of-freedom parameter set to $1$ is the formula that I wrote above. This answers my first question as to what I should take as the denominator when computing the unbiased estimate of the standard error of my weighted mean.
Using Python, I was able to verify my implementation of the above recursive estimators against `statsmodels`'s weighted stats function, which knows the full history of the data, like so:
```
import numpy as np
from statsmodels.stats.weightstats import DescrStatsW
def update_weighted_mean_se(current_sum_weights, current_weighted_avg, current_weighted_std, new_weight, new_x):
    '''
    Update the weighted statistics (mean, weighted standard deviation and weighted standard error) given the previous
    sum of weights, previous weighted mean, previous weighted standard deviation, new weight, and new x value.
    '''
    # new weighted mean and weighted standard deviation recursively
    new_sum_weights = current_sum_weights + new_weight
    new_weighted_avg = (current_sum_weights*current_weighted_avg + new_weight*new_x) / new_sum_weights
    new_weighted_std = np.sqrt((current_sum_weights*(current_weighted_std**2 + (current_weighted_avg-new_weighted_avg)**2) + new_weight*(new_x-new_weighted_avg)**2)/new_sum_weights)
    # new standard error on the weighted mean
    se_w = new_weighted_std / np.sqrt(new_sum_weights)
    return new_weighted_avg, new_weighted_std, se_w
# define the x measurements and their weights
x = np.array([10, 12, 15.2, 12.5, 11])
w = np.array([100, 120, 108, 80, 98])
# calculate the unbiased estimators of avg, std and se (with ddof=1)
sum_w = np.sum(w)
avg_w = np.sum(w * x) / sum_w
std_w = np.sqrt(np.sum(w*(x-avg_w)**2) / (sum_w-1))
se_w = std_w / np.sqrt(sum_w)
# add new values and compute weighted stats iteratively
new_x_array = np.array([20, 30])
new_weights_array = np.array([200, 150])
for new_x, new_w in zip(new_x_array, new_weights_array):
    avg_w, std_w, se_w = update_weighted_mean_se(sum_w, avg_w, std_w, new_w, new_x)
    sum_w += new_w
# verify new weighted stats using the formula (with ddof=1)
weighted_stats = DescrStatsW(np.concatenate([x, new_x_array]), weights=np.concatenate([w, new_weights_array]), ddof=1)
print('iterative weighted avg = %0.5f' %avg_w)
print('iterative weighted std = %0.5f' %std_w)
print('iterative weighted se = %0.5f' %se_w)
print('statsmodels weighted avg = %0.5f' %weighted_stats.mean)
print('statsmodels weighted std = %0.5f' %weighted_stats.std)
print('statsmodels weighted se = %0.5f' %weighted_stats.std_mean)
>>> OUTPUT:
iterative weighted avg = 17.12570
iterative weighted std = 6.88164
iterative weighted se = 0.23521
statsmodels weighted avg = 17.12570
statsmodels weighted std = 6.88539
statsmodels weighted se = 0.23534
```
My implementation yields the correct weighted average, but the standard deviation (and by extension the standard error) is only accurate to $2$ or $3$ decimal places. This means that my implementation of the standard deviation is not exactly the same as `statsmodels`'s and there is room for improvement. A likely culprit is the degrees-of-freedom correction rather than numerical precision: the recursive update divides by $\sum_i w_i$, while `statsmodels` with `ddof=1` divides by $\sum_i w_i - 1$.
| null | CC BY-SA 4.0 | null | 2023-05-11T09:53:17.073 | 2023-05-23T08:24:12.507 | 2023-05-23T08:24:12.507 | 346672 | 346672 | null |
615540 | 1 | null | null | 0 | 21 | I am working out a percentage survival between 2 conditions (e.g. in the presence/absence of an antibiotic). For both mean values I have a standard deviation. How do I represent the error in the % survival value in a way that accounts for both SDs?
| How to represent 2 standard deviation values as one error | CC BY-SA 4.0 | null | 2023-05-11T10:12:30.667 | 2023-05-11T10:12:30.667 | null | null | 387731 | [
"standard-deviation",
"error"
] |
615541 | 2 | null | 615421 | 5 | null | Unfortunately, I do not think your "compact" data format would be any more beneficial computationally (at least as in the example code you showed), and it is not the same representation as the original.
You could think of the original (intractable) problem as estimating the model
$$ Y_i = m(X_i) + \epsilon_i \qquad (i=1,\dots, n)$$
where $X_i \in \mathbb{R}^{50000000}$. What you are proposing is to define some new categorical variable $Z_{ij}$ that indicates column $j$ of the $i$th row with value $V_{ij}$, then estimating the model
$$ Y_{ij} = m(Z_{ij}, V_{ij}) + \xi_{ij} \qquad (ij=1,\dots, n*50000000) $$
---
Firstly, the number of "samples" in the "compact" dataset is now 50M times larger. While it's true that you could train a model in mini-batches (say 50000 rows in each batch), why not just train on the original dataset with a much smaller batch size (say 1-2 rows per batch)? Note that you do not need to load the whole dataset into memory at once, just keep it on disk and read in 1-2 rows and train via an incremental ML algorithm.
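A minimal sketch of that incremental idea (synthetic stand-in data; a hand-rolled logistic SGD so nothing beyond numpy is assumed — in practice you would read the rows from disk and could use any library with an incremental-fit API):

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 50
w = np.zeros(n_features)

def stream_batches(n_batches=500, batch_size=2):
    # Stand-in for reading 1-2 rows at a time from a file on disk.
    for _ in range(n_batches):
        X = rng.normal(size=(batch_size, n_features))
        y = (X[:, 0] > 0).astype(float)  # toy target: sign of feature 0
        yield X, y

lr = 0.1
for X, y in stream_batches():
    p = 1 / (1 + np.exp(-X @ w))         # predicted probabilities
    w -= lr * X.T @ (p - y) / len(y)     # one SGD step on the log loss

# The full dataset was never held in memory, yet the model picks up the
# signal: the weight on the informative feature dominates the rest.
print(w[0] > np.abs(w[1:]).max())
```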
A much more important point however is that most standard implementations of ML algorithms assume that each row are i.i.d. realizations from some DGP. Using the "compact" representation, we lose all of the information related to the joint distribution of $X_i$ - for example, correlations between the variables and (possibly complicated) combinations of them that are predictive of $Y_i$.
Now as for the fact that you have 50M features...I would strongly suggest you explore the data a bit more before considering any actual modeling. Are the features highly correlated (this may be tough to compute)? Are they sparse? Are there duplicates? Dimensionality reduction can be very useful here both in terms of making it computationally feasible and likely improving the performance of the model.
If you really do not want to do any dimensionality reduction for whatever reason (though you should), there are certain ML models that can work well with only a subset of the features. Random forests come to mind, where you could pass a small random subset of the features to each split. Though you will likely have to code this up yourself.
| null | CC BY-SA 4.0 | null | 2023-05-11T10:22:34.243 | 2023-05-11T10:22:34.243 | null | null | 269723 | null |
615542 | 1 | null | null | 0 | 23 | I have this burning question: my variables are not significantly correlated with one another, whereas the other study I refer to keeps claiming that those variables (bullying victimization and perpetration) are highly correlated.
At first, I thought the researchers transformed the data by taking logarithms since the variables were highly skewed. So I did the same by taking logs and transforming the data. When I put the transformed data into SPSS for correlation, the victimization and perpetration variables are not correlated at all.
I also tried the natural log and the square root since log10 did not work. Kurtosis and skewness went down significantly after taking logs, but how come the variables are not correlated as the previous study says?
It's weird because I'm using the exact same data as the previous study. Can anybody help? Any feedback is appreciated.
G8V1 is victimization type 1. There are 4 victimization types and 4 perpetration types, so I separated them in my correlation. But when I summed and transformed the existing data, it still did not show any correlation like the one I posted.
[](https://i.stack.imgur.com/zXuM7.jpg)
[](https://i.stack.imgur.com/wAwb1.jpg)
The authors mentioned in the article that "Before the SEM, any problems with the normality assumption of the three indicators of the latent variables, which were parceled for the stability of the model were examined (Little et al., 2002). The errors of the variables were not normally distributed. To deal with these problems, those parcels were inversely transformed (Kline, 2005). After the transformation, the non-normality problem was resolved. These transformed data were used for SEM with the FIML technique and raw data were used for SEM with MLR." Does it mean that they conducted Parcelling function in Mplus before conducting correlation?
| Insignificant correlation between the variables that were tested to be highly significant in the other study | CC BY-SA 4.0 | null | 2023-05-11T10:35:47.400 | 2023-05-11T12:07:26.740 | 2023-05-11T12:07:26.740 | 387734 | 387734 | [
"correlation"
] |
615543 | 1 | 617203 | null | 0 | 50 | Suppose $r \geq 1$ distinct books are distributed at random among
$n \geq 3$ children.
(a) For each $j \in \{0, 1, 2, \dots, r\}$, compute the probability that
the first child gets exactly $j$ books.
(b) Let $X$ be the number of children who do not get any book,
and $Y$ be the number of children who get exactly one book.
Show that $$Cov(X, Y)=\frac{r(n-1)(n-2)^{r-1}}{n^{r-1}}-\frac{r(n-1)^{2r-1}}{n^{2r-2}}$$
For part (a) we can view the distribution of books as independent Bernoulli trials: each book independently goes to the first child with probability $\frac{1}{n}$, and so the probability that the first child gets exactly $j$ books equals ${r \choose j} (\frac{1}{n})^j(\frac{n-1}{n})^{r-j}$.
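A quick Monte Carlo sanity check of this formula (a sketch with arbitrarily chosen small values of $n$, $r$, $j$):

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
n, r, j = 5, 4, 2
trials = 200_000

# Each of the r distinct books goes to a uniformly random child;
# count how many books the first child (index 0) receives per trial.
counts = (rng.integers(0, n, size=(trials, r)) == 0).sum(axis=1)
empirical = (counts == j).mean()
exact = comb(r, j) * (1 / n) ** j * ((n - 1) / n) ** (r - j)
print(empirical, exact)  # both close to 0.1536
```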
For part (b) define $X_{ij}=1$, if the ith student gets the jth book and otherwise zero. Then we have $$X=\sum^n_{i=1}\mathbf{1}(\sum^r_{j=1}X_{ij}=0), Y=\sum^n_{i=1}\mathbf{1}(\sum^r_{j=1}X_{ij}=1)$$. So, $$Cov(X, Y)=\sum^n_{p=1}\sum^n_{q=1}Cov(\mathbf{1}(\sum^r_{j=1}X_{pj}=1), \mathbf{1}(\sum^r_{j=1}X_{qj}=0))\\ =\sum^n_{p=1}\sum^n_{q=1}E[\mathbf{1}(\sum^r_{j=1}X_{pj}=1)\mathbf{1}(\sum^r_{j=1}X_{qj}=0)]-E[\mathbf{1}(\sum^r_{j=1}X_{pj}=1)]E[\mathbf{1}(\sum^r_{j=1}X_{qj}=0)] \\ =\sum^n_{p=1}\sum^n_{q=1}\mathbb{P}(\sum^r_{j=1}X_{pj}=1, \sum^r_{j=1}X_{qj}=0)-\mathbb{P}(\sum^r_{j=1}X_{pj}=1)\mathbb{P}(\sum^r_{j=1}X_{qj}=0) \\ =\sum^n_{p=1}\sum^n_{q=1}\mathbb{P}(\sum^r_{j=1}X_{pj}=1|\sum^r_{j=1}X_{qj}=0)\mathbb{P}(\sum^r_{j=1}X_{qj}=0)-\frac{r(n-1)^{2r-1}}{n^{2r}} \\ =\sum^n_{p=1}\sum^n_{q=1}{r\choose1}\frac{1}{n-1}(\frac{n-2}{n-1})^{r-1}(\frac{n-1}{n})^r-\frac{r(n-1)^{2r-1}}{n^{2r}} \\ =\frac{r(n-2)^{r-1}}{n^{r-2}}-\frac{r(n-1)^{2r-1}}{n^{2r-2}}$$.
I want to know where I am going wrong.
| Covariance of two Random Variables | CC BY-SA 4.0 | null | 2023-05-11T10:37:31.853 | 2023-05-29T12:03:41.377 | 2023-05-11T11:28:49.310 | 376295 | 376295 | [
"probability",
"mathematical-statistics",
"random-variable",
"covariance"
] |
615544 | 1 | 615548 | null | 3 | 151 | I want to use a multiple logistic regression to model the relationship between two experimental groups (test and control) and accuracy of a procedure, controlling for the experience (in years) of the participants.
```
outcome ~ group + experience
```
The design I am using is paired in the sense that every participant is tested twice, so there are no differences in baseline characteristics between groups (since they are the same individuals). If I were only testing for differences in time, a paired t-test would suffice, but I need to control for experience, hence a regression model is being built.
Time is measured in seconds until the procedure is completed, and accuracy is defined as completing it within a pre-specified threshold (the outcome is 1 if the time is less than or equal to 6 minutes and 0 otherwise). It is expected that time and experience are negatively correlated, or, equivalently, that experienced practitioners take less time to complete the procedure.
I would like to test for interactions in this model, but I don't think it makes much sense to interact the group with experience.
```
outcome ~ group * experience
```
I am considering including time in the model and testing for an interaction with experience.
```
outcome ~ group + experience*time
```
Since time is used in the definition of the response of the logistic model I expect it to be significant even with a small sample size. However it seems to me that including time in this model would be circular reasoning.
```
outcome ~ group*time + experience
```
Q1: Is this a correct interpretation?
Q2: If I try interactions between time and the group instead, would that tell me that time is modifying the effect attributed to the group?
Q3: Does it make sense to test for interactions between experience and group in this setting?
EDIT: I understand Douglas Altman's point that, while unnecessary dichotomization of a continuous variable is prevalent in medical research, it leads to loss of estimation precision (at the very least). I was able to make the case for a linear model of `time ~ group + experience` for this experiment as a secondary endpoint, but the main goal needs to remain accuracy, which is why the outcome is a dichotomization of time. This practice is prevalent for a reason :)
| Does it make sense to include a predictor that is by definition related to the response variable in a regression model? | CC BY-SA 4.0 | null | 2023-05-11T11:00:53.323 | 2023-05-11T18:02:33.020 | 2023-05-11T12:59:58.453 | 101724 | 101724 | [
"regression",
"logistic",
"interaction"
] |
615545 | 2 | null | 610940 | 2 | null | A brief summary of what I've learned:
Response Surface Methodology (RSM) is a type of adaptive or sequential design. Sequential designs involve multiple rounds of experimentation, with the choice of treatments in each later round being dependent on the data accumulated from completed rounds. RSM is usually used to identify some sort of optimum, often for industrial production. Sequential designs also exist outside of the RSM framework, however, such as Bayesian Adaptive Experimental Design.
Fractional Factorial Designs (FFDs) are a specific group of experimental designs. They allow useful information to be extracted from relatively tiny experiments, especially in situations with a large number of predictors. The central (quite reasonable) assumption is that higher-order interactions are less important than lower-order ones and main effects, which is called the "sparsity-of-effects principle". Experimental designs can therefore be scaled down by neglecting the higher-order interactions. This scaling down involves intentionally confounding (‘aliasing’) combinations of factors relative to the full factorial experiment, with the consequence that one cannot estimate each separate main effect and interaction term. We can still learn a great deal of useful information from the data despite this limitation, though.
How RSM and FFDs interact:
FFDs are most often used as a component of RSM. But they can in principle be used independently, either as part of a one-shot experiment, or as part of a sequential design approach that does not involve RSM. They seem unlikely to be very useful when used in a one-shot experiment, which is probably why they are so tightly connected to RSM.
Additionally, RSM can take as its input experimental designs that are not FFDs. Central Composite Designs and Box-Behnken designs are two other commonly-used designs.
---
My thanks to kjetil b halvorsen and Gregg H for their input and the suggested resources. I was not able to lay my hands on the Montgomery book but found the Box, Hunter & Hunter book very useful.
| null | CC BY-SA 4.0 | null | 2023-05-11T11:24:29.450 | 2023-05-30T12:23:54.913 | 2023-05-30T12:23:54.913 | 121522 | 121522 | null |
615546 | 1 | null | null | 3 | 39 | Could somebody direct me to some literature dealing with this issue? We have $X = U\Sigma V^{T}$ and $M \odot X = U'\Sigma'V'^{T}$ with
\begin{equation}
M_{i,j} \in \{0, 1\}.
\end{equation}
$X$ and $M$ can be symmetric, and $X$ could be the adjacency matrix of a graph (so: how does the spectral decomposition change if I delete a vertex?).
I know general perturbation bounds, and bounds that use the fact that $M$ is a matrix with random entries. My question is whether there is an exact relationship, i.e. a formula with an equality, relating these two quantities (their singular vectors and/or their singular values). Maybe this is related to an entrywise updated calculation of an SVD, for example. If there are perturbation bounds specifically for graphs, mentioning them would be helpful too! Thanks!
| Relationship between SVD of Matrix and SVD of same Matrix with deleted entries (Matrix can be Adjacency Matrix of a Graph) | CC BY-SA 4.0 | null | 2023-05-11T11:30:55.030 | 2023-05-11T15:00:48.367 | 2023-05-11T15:00:48.367 | 386816 | 386816 | [
"linear-algebra",
"svd"
] |
615547 | 1 | null | null | 0 | 15 | Quite straightforward: I'm looking for a function that can mass-search combinations of predictors for GAMs.
The function leaps() does this for ordinary linear regression: it returns the best model at every number of predictors (at 3 predictors, at 4 predictors, etc.).
The function stepAIC allows us to specify a floor (predictors that have to be included), a ceiling (largest model that you allow), and searches for a model in between, stepwise (adding or removing one predictor at a time).
However, I'm unaware of any function that can do this for GAMs; in the two classes I've taken we had to construct 10+ models manually, severely limiting the number of models we could search.
| Is there an equivalent of leaps() or stepAIC() for GAM in R? | CC BY-SA 4.0 | null | 2023-05-11T11:55:46.863 | 2023-05-11T12:24:51.977 | null | null | 342779 | [
"r",
"generalized-additive-model"
] |
615548 | 2 | null | 615544 | 6 | null | `outcome` in this situation is fully determined by `time`. Another way to say this is that it is simply a re-expression of `time` on a binary scale. So if you were to model `outcome` and use `time` as a predictor, the other predictors would not matter - all the variation in `outcome` would be fully explained by `time`.
Actually, that's an oversimplification - it would be worse than this. You would presumably be using a logistic regression, and would encounter the problem of [perfect separation](https://stats.stackexchange.com/a/254266/121522).
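To see why separation is guaranteed here, note that by construction the two outcome classes cannot overlap on the `time` axis. A toy sketch (the numbers and the threshold of 10 are made up for illustration):

```python
# 'outcome' is just a thresholding of 'time' (threshold assumed at 10),
# so the two outcome classes are perfectly separated along the time axis.
times = [4.2, 7.5, 9.9, 10.1, 12.0, 15.3]
outcomes = [1 if t < 10 else 0 for t in times]  # 1 = "accurate enough"

fast = [t for t, o in zip(times, outcomes) if o == 1]
slow = [t for t, o in zip(times, outcomes) if o == 0]
separated = max(fast) < min(slow)  # True: classic perfect separation
```

Any logistic regression of `outcomes` on `times` would therefore fail to converge to finite coefficient estimates.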
I would add that `outcome` does not seem very useful as a response variable. You could just model `time` as a response; turning a continuous value into a binary one throws away useful information. It's very unlikely that the binary `outcome` variable is a better metric of 'accuracy' than `time`.
EDIT:
Yes, including `group*experience` makes sense. As EdM says, it would tell you whether the effect of the treatment (`group`) on the `outcome` changes with `experience`. This is easier to understand if you plot the model output.
Also, if you've measured each participant more than once, you will need to model the non-independence of data points. A mixed model (random intercept and perhaps random slope for participant) would help address this.
| null | CC BY-SA 4.0 | null | 2023-05-11T12:13:48.580 | 2023-05-11T18:02:33.020 | 2023-05-11T18:02:33.020 | 121522 | 121522 | null |
615549 | 1 | null | null | 0 | 19 | I am using a model for a multi-classification / ranking task. However, for each choice problem it associates the different options with a number in a range that includes negative values, with the caveat that the smallest number corresponds to the preferred choice. Typically, though, softmax assigns the highest probability to the highest score. How could I adapt softmax so that the smallest score is associated with the highest probability? I have thought of using exp(-x), with x being the predicted number, but it does not work well and in my view there is no real mathematical justification for doing so. In short, I would like a softmax-like function that maps a set of n values, some of which may be negative, to a set of normalized probabilities (the sum of the n probabilities should be 1). The difference compared to the standard softmax function is that the smallest value should be associated with the highest probability.
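For concreteness, here is a minimal sketch of the $\exp(-x)$-based normalization I mean, i.e. a softmax over the negated values, with a temperature parameter added as one possible knob (the input numbers are illustrative only):

```python
import math

def softmax_neg(scores, temperature=1.0):
    """Softmax over negated scores: the smallest raw score gets the
    largest probability. Subtracting the max of the negated scores
    keeps the exponentials numerically stable; the temperature
    controls how peaked the resulting distribution is."""
    neg = [-s / temperature for s in scores]
    m = max(neg)
    exps = [math.exp(v - m) for v in neg]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax_neg([-2.0, 0.5, 3.0])
# probs sums to 1, and the smallest score (-2.0) gets the largest probability
```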
| How to calculate softmax for decreasing values? | CC BY-SA 4.0 | null | 2023-05-11T12:18:17.797 | 2023-05-11T13:59:28.223 | 2023-05-11T13:59:28.223 | 319948 | 319948 | [
"machine-learning",
"predictive-models",
"softmax"
] |
615550 | 2 | null | 615391 | 1 | null | A key here is your statement:
>
over 300 cases who had missing value for reason for termination (these patients might still be in the registry for all we know, there's no way of knowing what happened with them)...
If you don't even know whether those individuals have left the registry, then you have only a lower limit on their time durations within the registry, namely the time between entry and the last follow-up; that is, their durations are right-censored. That situation calls for some type of survival analysis.
Your dichotomy of solutions isn't quite as stark as you might think. The choice depends on the nature of your observation times. If they are effectively continuous, then a competing-risks survival model would be a reasonable choice. If instead you have evaluations of all individuals at, say, regular 6-month intervals, you would have a "discrete-time" survival model that is essentially (in your case) a multinomial regression on a "person-period" data set. For each time interval you would have one row for each individual at risk during that interval, with the covariate values in place during that interval and an indicator of the outcome during that interval (with no record of a terminating event being a possibility).
The R [competing risks vignette](https://cran.r-project.org/web/packages/survival/vignettes/compete.pdf) outlines the procedure for continuous time. The counting-process `Surv(startTime, stopTime, eventType)` data format allows for time-varying covariate values. With at most one event possible per individual, you don't need to treat this formally as a repeated-measures analysis if your model (like a proportional hazards model) only evaluates covariates at event times. See [this page](https://stats.stackexchange.com/a/596069/28500). There are many pages on this site dealing with discrete-time survival; [this page](https://stats.stackexchange.com/q/57191/28500) contains some references for further study. Discrete-time survival models are binomial regressions if there is only one type of event, but they can be extended to multinomial regression to handle your situation.
| null | CC BY-SA 4.0 | null | 2023-05-11T12:18:46.093 | 2023-05-11T12:18:46.093 | null | null | 28500 | null |
615552 | 2 | null | 615547 | 0 | null | So there is `step.gam`; see [here](https://rdrr.io/cran/gam/man/step.gam.html). The syntax, as far as I can tell, is the same as for stepAIC. This is still not as ideal as a hypothetical `leaps.gam` that searches exhaustively, but it is still an immense help.
| null | CC BY-SA 4.0 | null | 2023-05-11T12:24:51.977 | 2023-05-11T12:24:51.977 | null | null | 342779 | null |
615553 | 1 | null | null | 1 | 159 | [CONSEQUENCES OF HETEROSCEDASTICITY ](https://www.google.fr/books/edition/Regression_Analysis/dRQAkwHHmtwC?hl=en&gbpv=1&dq=The%20presence%20of%20heteroscedasticity%20causes%20the%20OLS%20to%20underestimate%20the%20variances%20of%20the%20coefficients&pg=PA92&printsec=frontcover)
$\textbf{1}$. The presence of heteroscedasticity does not make the OLS estimates of coefficients biased, but it causes the variances of OLS estimates to increase.
$\textbf{2}$. The presence of heteroscedasticity causes the OLS to ${\color{Red} {\text{underestimate}}}$ the variances of the coefficients.
---
I don't understand why the OLS to underestimate the variances of the coefficients in $\textbf{2}$.
The following is my thoughts :
Let $\mathbf{X}$ has full column rank,
$$\text{the homoscedasticity model} (1):\begin{cases}
\mathbf{y}=\mathbf{X} \boldsymbol{\beta}+\varepsilon \\
E(\varepsilon)=\mathbf{0}, \operatorname{Var}(\varepsilon)=\sigma^{2} \mathbf{I}
\end{cases};$$
$$\text{the heteroscedasticity model} (2):\begin{cases}
\mathbf{y}=\mathbf{X} \boldsymbol{\beta}+\varepsilon \\
E(\varepsilon)=\mathbf{0}, \operatorname{Var}(\varepsilon)=\sigma^{2} \mathbf{V}
\end{cases}, \quad \text{where } \mathbf{V} \text{ is diagonal but with unequal diagonal elements.}$$
$\\$
When the heteroscedasticity model $(2)$ is true,
then the weighted least-squares estimator of $\hat{\boldsymbol{\beta}}_{WLS}$ is an unbiased estimator of $\boldsymbol{\beta}$ and $$\operatorname{Var}(\hat{\boldsymbol{\beta}}_{WLS})=\sigma^{2}\left(\mathbf{X}^{\prime} \mathbf{V}^{-1} \mathbf{X}\right)^{-1}.$$
$\\$
The ordinary least-squares estimator $ \hat{\boldsymbol{\beta}}_{OLS}=\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime} \mathbf{y}$ is no longer appropriate in model $(2)$.If the ordinary least squares is used in this case, the resulting estimator $\hat{\boldsymbol{\beta}}_{OLS}=\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime} \mathbf{y}$ is still unbiased. However, the ordinary least-squares estimator is no longer a minimum variance estimator. That is, the covariance matrix of the ordinary least-squares estimator is
$$\operatorname{Var}(\hat{\boldsymbol{\beta}}_{OLS})=\sigma^{2}\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime} \mathbf{V} \mathbf{X}\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1}$$ and the covariance matrix of the weighted least-squares estimator $\sigma^{2}\left(\mathbf{X}^{\prime} \mathbf{V}^{-1} \mathbf{X}\right)^{-1}$ gives smaller variances for the regression coefficients. So OLS should overestimate the variances of the coefficients.
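To explore this numerically, I wrote a small simulation (with an arbitrarily chosen error structure in which the error standard deviation grows with $x$) comparing the true sampling standard deviation of the OLS slope with the average standard error reported by the homoscedastic formula:

```python
import math
import random

random.seed(1)

def ols_slope_and_se(x, y):
    """Slope estimate and its conventional (homoscedastic) standard error."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    b0 = my - b1 * mx
    s2 = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    return b1, math.sqrt(s2 / sxx)

x = [i / 10 for i in range(1, 101)]
slopes, reported_ses = [], []
for _ in range(2000):
    # error sd grows with x: a deliberately heteroscedastic design
    y = [1 + 2 * xi + random.gauss(0, 0.2 + xi) for xi in x]
    b1, se = ols_slope_and_se(x, y)
    slopes.append(b1)
    reported_ses.append(se)

mean_b1 = sum(slopes) / len(slopes)
true_sd = math.sqrt(sum((b - mean_b1) ** 2 for b in slopes) / (len(slopes) - 1))
avg_reported_se = sum(reported_ses) / len(reported_ses)
# Here avg_reported_se < true_sd: the homoscedastic formula under-reports
# the actual sampling variability of the slope.
```

In this particular setup (error variance positively related to the spread of $x$) the conventional formula under-reports the slope's true variability; the direction of the misestimation in general depends on how $\mathbf{V}$ relates to $\mathbf{X}$.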
| Why the OLS underestimates the variances of the coefficients | CC BY-SA 4.0 | null | 2023-05-11T12:37:32.290 | 2023-05-13T07:01:01.670 | 2023-05-13T06:07:41.353 | 371966 | 371966 | [
"regression",
"linear-model",
"heteroscedasticity",
"generalized-least-squares"
] |
615555 | 1 | null | null | 0 | 17 | I'm reading the sequential estimation section 2.3.5 of PRML, where Bishop introduces the Robbins-Monro algorithm to calculate the root of $f(\theta) = E[z \mid \theta] = \int z p(z|\theta) dz$, where $(z, \theta) \sim p(z, \theta)$.
>
The Robbins-Monro procedure then defines a sequence of successive estimates of the root $\theta^\star$ given by
$$θ^{(N)} = θ^{(N−1)} + a_{N−1}z(θ^{(N−1)}) \tag{2.129}$$
where $z(θ^{(N)})$ is an observed value of $z$ when $θ$ takes the value $θ^{(N)}$. The coefficients
$\{a_N\}$ represent a sequence of positive numbers that satisfy certain conditions. (I skip the conditions since they are not essential here.)
However, after some derivation and after observing the asymptotic property of the equation for the log likelihood (Equation 2.134), Bishop gives the procedure to estimate the parameters of a normal distribution:
>
We can therefore apply the Robbins-Monro procedure, which now takes the form
$$
θ^{(N)} = θ^{(N−1)} + a_{N−1}\frac{\partial}{\partial \theta^{(N-1)}}\ln p(x_N|\theta^{(N-1)}) \tag{2.135}
$$
where $p(x|\theta)$ is the normal density.
I'm confused about why it is $x_N$ rather than $x_{N-1}$. Is this a typo, or did I not catch the idea of the Robbins-Monro algorithm?
---
UPDATE
Another confusion: why does equation (2.129) not incorporate the newly observed data $x_N$? I think the algorithm is similar to the gradient descent method.
| PRML: Why does the sequential estimation algorithm of normal parameters use $X_N$ but $X_{n-1}$? | CC BY-SA 4.0 | null | 2023-05-11T12:58:49.557 | 2023-05-11T13:55:36.550 | 2023-05-11T13:55:36.550 | 376068 | 376068 | [
"normal-distribution",
"estimation",
"sequential-analysis"
] |
615556 | 2 | null | 615379 | 1 | null | It's not entirely clear whether you are looking for a classifier or a density estimator, but I think you want a conditional variational autoencoder (cVAE) or a conditional GAN.
A cVAE is a neural network that takes as input a sample from your distribution, maps it to a known distribution (together with a conditioning variable), and then maps it back to the original sample.
That way, you can sample from the latent space, whose distribution is known, and obtain samples from your distribution.
Check this paper [https://arxiv.org/pdf/1812.04405.pdf](https://arxiv.org/pdf/1812.04405.pdf) for cVAE and this paper [https://arxiv.org/pdf/1411.1784.pdf](https://arxiv.org/pdf/1411.1784.pdf) for cGAN
| null | CC BY-SA 4.0 | null | 2023-05-11T13:06:26.550 | 2023-05-11T13:06:26.550 | null | null | 346940 | null |
615557 | 1 | null | null | 0 | 6 | I have a dataset of year (1930-2020) and volume of sediment, and I have to predict the volume for the next 50 years. While trying ARIMA in R, I tried different models like (1,0,0), (1,1,0), (2,2,0), etc. But the forecast summary shows the same volume throughout, except for (1,0,1). So I am not sure whether ARIMA(1,0,1) exists, as I found no articles or topics talking about (1,0,1), and I am confused about whether I can use it or not.
Can someone tell me what ARIMA(1,0,1) means and how to interpret it? Would it even make sense to use (1,0,1), given that its forecast gives varying values while all the other models give the same value?
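For context, my current understanding (please correct me if wrong) is that ARIMA(1,0,1) is simply an ARMA(1,1) model on the undifferenced series. Here is a minimal Python sketch of the recursion such a model describes (coefficients are made up, not estimated from my data):

```python
import random

random.seed(0)

# ARIMA(1,0,1) = ARMA(1,1) on the raw (undifferenced) series:
#   y[t] = c + phi * y[t-1] + theta * e[t-1] + e[t]
# Coefficients below are illustrative only.
phi, theta, c = 0.6, 0.3, 10.0

y = []
y_prev = c / (1 - phi)   # start at the unconditional mean
e_prev = 0.0
for _ in range(200):
    e = random.gauss(0, 1)
    y_t = c + phi * y_prev + theta * e_prev + e
    y.append(y_t)
    y_prev, e_prev = y_t, e
# Point forecasts from such a model decay geometrically
# toward the unconditional mean c / (1 - phi).
```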
| Does Arima (1,0,1) exist? Are there any articles station Arima (1,0,1)? How to intrpret Arima (1,0,1) summary results? | CC BY-SA 4.0 | null | 2023-05-11T13:12:50.400 | 2023-05-11T13:12:50.400 | null | null | 387713 | [
"time-series",
"forecasting",
"predictive-models",
"arima",
"model"
] |
615558 | 1 | null | null | 1 | 14 | I want to bootstrap my model with two random factors, subject and item, which are crossed. I have specified my model as follows:
```
lmer(outcome ~ predictor_level1 + predictor_level2 + (1 | subject) + (1 | item), data = df)
```
I know that `bootMer` (from `lme4`) and the `lmeresampler` package are two R tools that make bootstrapping easy, but it seems that `lmeresampler` only supports the parametric bootstrap for crossed random factors. I could not find more information about `bootMer`'s support for semi-parametric (or residual) bootstrapping with crossed random factors. Since residual bootstrapping seems to be more robust, I would like to use this method. So here are my questions:
- Does anyone have some information, if bootMer supports residual bootstrap for crossed random factors? If not, can anyone recommend an other R package that supports this option for crossed random factors?
- Does anyone know which semi-parametric bootstrap method is implemented in bootMer? Is it the one by Carpenter, Goldstein and Rasbash (2003)?
- Has anyone some literature tips on bootstrapping for crossed random factors? This would be highly appreciated, since I'm not sure how bootstrapping for crossed factors differs from normal level 2 nested models.
| Bootstrap for a model with crossed random factors in R (bootMer) | CC BY-SA 4.0 | null | 2023-05-11T13:15:10.483 | 2023-05-11T13:35:02.470 | 2023-05-11T13:35:02.470 | 338919 | 338919 | [
"r",
"mixed-model",
"lme4-nlme",
"bootstrap"
] |
615559 | 2 | null | 615409 | 0 | null | The score residuals in a Cox model are based on the "[score function](https://stats.stackexchange.com/a/560234/28500)" that is solved to get the coefficient estimates and the [martingale residuals](https://stats.stackexchange.com/q/589511/28500) that underlie the Cox model and its extensions. The "score residual" is actually a matrix of residuals, with an entry for each individual and each covariate coefficient estimated in the model.
For each covariate, you need its [risk-weighted average](https://stats.stackexchange.com/a/538894/28500) $\bar x(t)$ among all cases at risk at each event time (used in solving the score function) and the (change in the) martingale residual for each individual at each event time. The martingale residuals in turn depend on the overall cumulative hazard estimated from the Cox model, $\hat \Lambda(t)$, for individual $i$: $\widehat M_i= \delta_i -r \hat \Lambda(t_i)$, where $\delta_i$ is the event indicator and $r$ is the relative risk.
Then the score residual for an individual and a covariate $x$ can be written:
$$\int_0^\infty (1- \bar x(t)) d \hat M_i(t), $$
which represents a sum over all event times. $d \hat M_i(t)$ is the change in the individual's martingale residual at each event time for which the individual is at risk.
[Therneau and Grambsch](https://www.springer.com/us/book/9780387987842) work through a simple example in their Appendix E. Even that simple example with a single covariate and only 6 observations requires a fair amount of effort to work through in detail. For R `coxph` models the calculations are coded in C; you could examine the source code for more details.
| null | CC BY-SA 4.0 | null | 2023-05-11T13:18:00.817 | 2023-05-11T13:18:00.817 | null | null | 28500 | null |
615560 | 2 | null | 615205 | 5 | null | In cases like yours, I've reported the median and the quartiles, as mkt [suggested](https://stats.stackexchange.com/q/615205/169343).
But, inspired by the OverLordGoldDragon's [answer](https://stats.stackexchange.com/a/615273/169343) and motivated by the wish to keep the idea of the mean and sd and, at the same time, not to deviate too much from established statistical practices, I propose an alternative. I don't know whether it's been used so far, so I'll call it the "decomposed standard deviation". It also allows you to report the results as three numbers, in the form $\overline x ~ (+sd_A; -sd_B)$.
Standard deviation is:
$$
sd = \sqrt{\frac{1}{N-1}\sum_i (x_i - \overline x)^2}.
$$
The sum can be decomposed into the sum over the elements above and below $\overline x$:
$$
sd = \sqrt{\frac{1}{N-1} \left( \sum_{i:x_i \gt \overline x} (x_i - \overline x)^2 + \sum_{i:x_i < \overline x} (x_i - \overline x)^2 \right)}
$$
(I've left out the summation over $i: x_i = \overline x$, as it evaluates to zero).
Define:
$$
\begin{align}
sd_A &= \sqrt{\frac{1}{N_A + \frac{N_0-1}{2}} \sum_{i:x_i \gt \overline x} (x_i - \overline x)^2 }, \\
sd_B &= \sqrt{\frac{1}{N_B + \frac{N_0-1}{2}} \sum_{i:x_i \lt \overline x} (x_i - \overline x)^2 }
\end{align}
$$
with $N_A$, $N_B$, and $N_0$ being the number of elements "above", "below" and "equal to" the mean, respectively. Then, the standard deviation can be rewritten as:
$$
sd = \sqrt{\frac{(N_A + \frac{N_0-1}{2})sd_A^2 + (N_B + \frac{N_0-1}{2})sd_B^2} {N-1} }.
$$
If no values are exactly equal to $\overline x$, which is very likely in practice, the formulae simplify to:
$$
\begin{align}
sd_A &= \sqrt{\frac{1}{N_A - 0.5} \sum_{i:x_i \gt \overline x} (x_i - \overline x)^2 }, \\
sd_B &= \sqrt{\frac{1}{N_B - 0.5} \sum_{i:x_i \lt \overline x} (x_i - \overline x)^2 }, \\
sd &= \sqrt{\frac{(N_A -0.5)sd_A^2 + (N_B - 0.5)sd_B^2} {N-1} }.
\end{align}
$$
It is easy to show that for perfectly symmetric data, $sd$, $sd_A$, and $sd_B$ are exactly the same. For asymmetric, they differ. Also, it is easy to see that for non-negative data, $\overline x - sd_B$ is always non-negative.
Below is a simple graphical example:
[](https://i.stack.imgur.com/34PKe.png)
and you'd report the result as $1.56 ~ (+3.08; -0.93)$. This makes the asymmetry in the data explicit and, at the same time, avoids the implication that data can be negative.
Below is the Python code to reproduce the figure and play with the data:
```
import matplotlib.pyplot as plt
import numpy as np
def decomposed_std(x):
m = x.mean()
xA = x[x > m]
xB = x[x < m]
nA = len(xA)
nB = len(xB)
n0 = len(x[x == m])
sA = np.sqrt(np.sum((xA-m)**2) / (nA + (n0-1)/2))
sB = np.sqrt(np.sum((xB-m)**2) / (nB + (n0-1)/2))
# the two are equal:
# np.sqrt((sA**2 * (nA + (n0-1)/2) + sB**2 * (nB + (n0-1)/2)) / (n-1))
# x.std(ddof=1)
return sA, sB
np.random.seed(0)
x = np.exp(np.random.normal(0, 1, 1000))
m = x.mean()
x = np.hstack([x, [m, m, m, m, m]]) # append some averages
s = x.std(ddof=1)
sA, sB = decomposed_std(x)
h = plt.hist(x, bins=20, fc='skyblue', ec='steelblue')
y_top = max(h[0])
x_right = max(h[1])
plt.vlines(x.mean(), 0, 1.1*y_top, colors='chocolate')
plt.plot([m-sB, m], [1.025*y_top]*2, '-', color='seagreen')
plt.plot([m+sA, m], [1.025*y_top]*2, '-', color='firebrick')
plt.grid(linestyle=':')
plt.text(0.8*m, 1.05*y_top, f'$sd_B = {sB:.2f}$', horizontalalignment='right')
plt.text(1.2*m, 1.05*y_top, f'$sd_A = {sA:.2f}$', horizontalalignment='left')
plt.text(x_right, 1.1*y_top,
'$\overline{x} = ' f'{m:.2f}$\n'
'$sd = ' f'{s:.2f}$',
horizontalalignment='right', verticalalignment='top')
plt.title('Histogram with decomposed standard deviation')
plt.xlim(-2, 1.05*x_right)
plt.show()
```
| null | CC BY-SA 4.0 | null | 2023-05-11T13:31:07.497 | 2023-05-12T13:56:14.060 | 2023-05-12T13:56:14.060 | 169343 | 169343 | null |
615561 | 1 | null | null | 0 | 26 | Suppose we have a sample $S$ of IID data and two different real-valued functions of $S$, say $\theta(S)$ and $f(S)$, where the latter is intended to estimate the value of the former. For example, the former may be the out-of-sample prediction performance of a machine learning model trained with $S$, and the latter a performance evaluation method carried out on $S$, such as cross-validation. One could think of the former as the estimand and the latter as its estimator, whose bias, variance, and other properties could be analyzed. However, since the former also depends on the same sample as the latter, I presume that one cannot consider it an estimand in the usual sense.
My question is whether the correct interpretation would be to consider 0 (i.e. the constant zero) as the actual estimand and $f(S) - \theta(S)$ as its estimator. Then the estimand would be a constant independent of the sample, as it should be, and, for example, the bias of the estimator could be defined in the usual way as $E[f(S) - \theta(S)]$.
| Estimand dependent on the sample | CC BY-SA 4.0 | null | 2023-05-11T13:33:01.257 | 2023-05-11T13:33:01.257 | null | null | 387726 | [
"bias",
"sample",
"estimators"
] |
615562 | 1 | null | null | 0 | 5 | I have conducted a repeated measures ANOVA with a 3 (scenario) x 2 (person) design. I get a significant interaction and a significant main effect of person. However, the main effect of scenario is completely blank in SPSS, and JASP suggests a very small F of $1.657 \times 10^{-29}$. However, when I run a repeated measures ANOVA with just the data for scenario, I get a very significant p-value (p = .00000000000000015, etc.) with an F-value of approximately 39.
I am confused as to why I do not get any main effect in the 2 x 3 design, but get such a strong one in the repeated measures ANOVA with just the 3 levels of scenario.
I have checked the datasheet for inaccuracies but cannot find any. The data points are created from raw data and summed into total scores using a script in SPSS (which has been checked for correctness).
I am stumped and would appreciate any solutions or potential statistical reasons why this may be the case.
Thanks
| Repeated measure missing main effect | CC BY-SA 4.0 | null | 2023-05-11T13:33:16.663 | 2023-05-11T13:33:16.663 | null | null | 387745 | [
"anova",
"repeated-measures",
"spss",
"jasp"
] |
615563 | 1 | null | null | 3 | 66 | Let's say we have 10 calibrated reference samples of chemical products with a known concentration $x_i$ of a certain chemical component A.
$x_i$ is different for each sample.
[](https://i.stack.imgur.com/JmnyQ.png)
We are building a chemical test workflow to detect the concentration of A in any chemical product, and we get a measurement $y_i$ for each of the 10 samples.
Note: if there is no "component A" in the chemical sample ($x_i=0$), then $y_i$ should be close to zero as well, but in practice it isn't, due to real-world conditions.
Now here is the question:
We do know that there is a linear relation between the real concentration $x_i$ and our measurement $y_i$. Therefore we do a least-square regression $y = a x + b$, let's say:
$$y = 1.2 x + 100$$
Now if we change some workflow settings in our chemical test (modification in our chemistry protocol), our raw measurements data values change, and we get another regression $y = c x + d$, let's say
$$y = 2.3 x + 150$$
Question: how to evaluate the "quality" of these 2 different detection tests? (except using the $R^2$ parameter which is always close to 0.98 or 0.99)
Intuitively it seems that the higher $$Q = \frac{a}{b},$$ the more sensitive our test is to the presence of the chemical component A. But does this quantity $Q$ have a name in the context of a statistical regression?
Example: if we had $$y=0.001x + 50,$$ empirically, we would have very similar measurement values $y_i$ even if $x_i$ changes a lot from $1$ to $10$. Here, the ratio $Q = \frac{a}{b}$ is very low, indicating a poor sensitivity in the detection of the chemical component A.
Are there other "quality factors" for comparing multiple workflows (each of them giving different $a$ and $b$ in the $y=ax+b$ regression) that show a relationship between a known real-world quantity $x_i$ and a measurement $y_i$?
---
TL;DR: we have 3 different chemistry protocols for detecting the amount $x_i$ of a component A in chemical products. Our measurements are noted $y_i$. For each protocol we have a different relationship between $x_i$ and $y_i$:
$$\begin{align}y & = 1.2 x + 100\\
y & = 2.3 x + 150\\
y & = 0.001 x + 50\end{align}$$
with similar $R^2$ coefficients. How to find which protocol is the most sensitive?
| What is $\frac{a}{b}$ called in a $y=a x+b$ regression in the context of a physical detection test? | CC BY-SA 4.0 | null | 2023-05-11T13:38:51.257 | 2023-05-12T14:43:18.553 | 2023-05-11T15:30:33.557 | 44269 | 102252 | [
"regression",
"sensitivity-specificity"
] |
615565 | 1 | null | null | 0 | 13 | I want to create a simple representation of "rate of change" for a number of different metrics, which aren't really comparable between each-other. To illustrate my problem, let's say I have the following metrics: temperature in Celsius, temperature in Fahrenheit, and air pressure in mbar.
Each metric supplies a continuous stream of data points.
For each of these points, I want to illustrate the "trend", i.e., if the metric is going up or down, and ideally by how much. However, for each such calculation I only have access to two values - the most recent value (let's call it `x0`) and the previous value (let's call it `x1`). I know the time in between these two data points.
So first I thought I'd just illustrate the rate of change by the time derivative, so I did `(x1 - x0) / (t1 - t0)`. This of course gives an indication of how this metric has changed (and possibly where it's heading).
However, this value for rate of change isn't really comparable between different metrics, because e.g. a change of 10 for Celsius temperature represents almost twice as much as a change of 10 for Fahrenheit temperature, and isn't at all comparable to a change of 10 for air pressure in mbar.
So then I thought I'd use the fraction of change, so I used `(x1 - x0) / x0`, which tells you by how much the value changed relative to its previous value. But there are at least two problems here:
- This ignores how quickly the change was made - were the two values 1 minute apart or 1 hour apart?
- This is also skewed as a comparison between different metrics: a change in Celsius temperature from 10 to 20°C is a 100% increase, but represents the same physical temperature change of 50 to 68°F, which is just 36%. This also makes the rate of change values impossible to compare to each other.
This also illustrates that saying "it's twice as warm as yesterday" doesn't really make sense neither in Celsius nor Fahrenheit.
Even using a combination of these to use "percentage of change per time unit" doesn't fix problem 2.
So I guess I need to normalize my rate of change value somehow. For temperature, perhaps actually using the difference in K is the only thing that makes "physical" sense (as it's proportional to the kinetic energy). But when comparing completely different metrics such as temperature vs air pressure, I just don't have any clue.
Am I making this too complicated or is it just a bad idea to even try to have a comparable rate of change more refined than having three values "going up", "going down" and "the same"? Is it feasible to come up with something that makes it possible to illustrate that a temperature is going "rapidly up" while air pressure has just gone "slightly up" in the last hour without having custom formulas for each metric (such as percentage of change measured Kelvin per time unit for temperature)?
| Useful value for rate of change | CC BY-SA 4.0 | null | 2023-05-11T13:53:31.253 | 2023-05-11T14:16:16.927 | null | null | 387737 | [
"time-series",
"trend",
"derivative"
] |
615566 | 2 | null | 615371 | 3 | null | The qualitative evolution of such a population depends on the dynamics of reproduction. Let's look at some of the possibilities.
Rather than offer a mathematical analysis, I present some simulations to show what can happen. Although these make simplifying assumptions, they faithfully reflect the behavior of realistic systems.
We begin with a population of mothers (the males play no role, alas). It is represented by a vector indexed by the distinct genotypes whose values count the numbers of each genotype in the population. The initial population posited in the question therefore corresponds to the vector $(1,1,\ldots, 1)$ with one thousand ones: a thousand distinct genotypes.
The simulation is simple. A "generational clock" ticks. At each successive moment, each mother in the population is replaced by zero or more offspring of the same genotype according to a specified random distribution.
What turns out to govern the general behavior of this population's evolution are (a) the chance of dying with no offspring; (b) the chance of having a single child surviving to reproduce next time; and (c) the expected number of surviving children. Clearly, when this expectation is less than $1,$ the population will die out (fairly steadily). When this expectation is close to $1,$ the total population size will behave like a random walk and (therefore) will still eventually die out. When the expectation exceeds $1,$ the population size ought to increase exponentially on average.
We are interested in the numbers of each genotype over time and the number of distinct genotypes. I therefore plot these for each simulation, along with total population size. To make the computation feasible for large populations, I limit the total population size by randomly (and uniformly) killing people off if necessary. I also resort to an approximate sampling method (based on a Normal approximation to the distribution) once the numbers of any given genotype grow large.
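For readers who want to experiment, here is a minimal sketch of the generational mechanism just described (the offspring probabilities are illustrative and differ from the scenarios plotted below; this toy version omits the thinning and the Normal approximation):

```python
import random

random.seed(42)

# Illustrative probabilities of a mother leaving 0, 1, 2, or 3
# surviving offspring (mean = 1.05, so the population grows on average).
offspring_probs = [0.25, 0.50, 0.20, 0.05]

def step(counts):
    """One generational tick: each mother is replaced by a random
    number of daughters of her own genotype."""
    new = []
    for n in counts:
        if n == 0:
            new.append(0)
            continue
        kids = random.choices([0, 1, 2, 3], weights=offspring_probs, k=n)
        new.append(sum(kids))
    return new

counts = [1] * 1000            # 1000 distinct genotypes, one mother each
for _ in range(50):
    counts = step(counts)

surviving = sum(1 for n in counts if n > 0)
# Even though the expected number of offspring exceeds 1, a large
# fraction of the genotypes has already died out after 50 generations.
```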
The figure below presents four scenarios. In all of them
- The initial population is one thousand distinct genotypes.
- Each mother can produce 0, 1, 2, or 3 surviving offspring. The chances of each are given as a vector in the plot titles below.
- The simulation extends for 10,000 generations (at 20 years or so per human generation, that's 200,000 years).
- The growth rate per generation is 1.001 in each scenario. Thus, the expected population size after 10,000 generations is approximately $1000 \exp((0.001)\times 10000),$ which is about 22 million. I thin large generations to keep them below ten million.
The scenarios differ according to (a), the chance of dying with no offspring. I have varied these from $0.4$ down to $0.001$ to provide a range of behaviors.
Rather than interpret these in detail, I leave it to you to consider anything that might interest you. But I will point out a few salient features:
- It's practically inevitable, under any of these scenarios, that the majority of genotypes will eventually disappear.
- The scenarios differ according to how many distinct genotypes persist after 10,000 generations. Notice, in particular, that in the first scenario (with $0.4$ chance of each mother having no surviving offspring) 99.7% of the genotypes have disappeared rapidly. (Starting with a smaller population, it's likely all will have died out.)
- Among the surviving genotypes there is a large variation in subpopulation size. Some genotypes remain rare while a few appear to dominate the population.
- Nevertheless, because originally each genotype is identically situated -- they are all present in the same numbers and are subject to identical reproduction and thinning -- every genotype has the same chance of surviving and each one has the same expected proportion of the population at any generation.
Therein lies the point of this simulation: to show how in any actual example of the evolution of a population, the characteristics of that population are likely to differ markedly from the expected values. That's not a paradox, because the expectation is taken over all possible ways the population could have evolved and therefore does not have to reflect any specific history.
These insights are applicable to more than human populations. As an example, comparable simulations might help us understand financial markets: when we study the returns of an asset over time, with the intention of selecting something for an investment, we must bear in mind that the existing assets only reflect the survivors. The stocks and bonds of defunct companies are no longer in the market.
[](https://i.stack.imgur.com/GeC23.png)
In all these plots, generation 0 is the initial population.
---
The following `R` code produced these simulations and is flexible enough to produce a wide variety of other simulations. It defaults to an initial population of 100 which is evolved for 100 generations: the calculation will be almost instantaneous. (The calculations for the figure, which required one thousand times this effort, took six minutes.)
It's worth pointing out that the heart of the code is truly short and simple: the tally of genotypes `x` in one generation is updated using a probability vector `probs` to the next generation by looping over the counts `m` and executing one line:
```
sum(sample.int(length(probs), m, replace = TRUE, prob = probs) - 1)
```
`sample.int` draws the multinomial sample (effectively representing each individual among these `m` individuals) to find the number of offspring of each mother and, of course, `sum` adds them up. The code simply repeats this in an outer loop while keeping track of any population properties of interest along the way. Most of the code is devoted to plotting the results.
```
next.generation <- function(x, probs) {
# `x` tallies the genotypes;
# `probs` gives the chances of each mother having 0, 1, ... daughters.
sapply(x, \(m) {
if (m <= 5 / min(probs[probs > 0])) {
# Multinomial sampling
sum(sample.int(length(probs), m, replace = TRUE, prob = probs) - 1)
} else {
# Normal approximation to the multinomial
k <- 0:(length(probs) - 1)
mu <- sum(k * probs)
sigma <- sqrt(m * sum((k - mu)^2 * probs))
pmax(0, round(rnorm(1, mu * m, sigma)))
}
})
}
thin <- function(x, n.max) {
# The population tallied in `x` is randomly and uniformly reduced to
# total approximately `n.max` individuals.
lambda <- n.max / sum(x)
if (lambda < 1) x <- rbinom(length(x), x, lambda)
x
}
p_ <- function(x, y, rho = 0.001) {
# Create a probability distribution with generational growth rate `1+rho` where
  # `x` = Pr(0 children) and `y` = Pr(1 child). Only 0, 1, 2, or 3
# children are possible.
w <- rho + 2*x + y - 1
z <- 1 - x - y - w
probs <- c(x, y, z, w)
if (any(probs < 0)) warning("No solution.")
probs
}
p <- function(x, rho = 0.001) {
# A special sort-of-symmetric case of `p_`.
zapsmall(p_(x, 1 - 2*x, rho))
}
#
# Run some illustrative simulations.
#
Probs <- list(
p(0.4), # Genotypes rapidly die out -- and so does the population
  p(0.1),   # Usually a few are left after 10,000 generations
p(0.01), # Several are left after 10,000 generations
p(0.001) # Many (30 - 40%) are left after 10,000 generations
)
# set.seed(17)
p.start <- rep(1, 1e2) # Initial population of `p.start[i]` individuals of genotype `i`.
n.generations <- 1e2 # Maximum length of simulation
pop.max <- 1e7 # Limiting population size (approximately)
# pdf("CV.pdf", width = 10, height = 13) # For large simulations run in the background
par(mfrow = c(length(Probs), 3))
for (probs in Probs) {
probs <- probs / sum(probs)
stitle <- paste0("(", paste(signif(probs, 2), collapse = ", "), ")")
# sum(probs * 0:(length(probs)-1)) # Growth rate per generation
#
# Run the simulation.
#
population <- p.start
Simulation <- cbind(population,
sapply(seq_len(n.generations), \(g)
population <<- thin(next.generation(population, probs), pop.max)
))
#
# Compute summaries by generation.
#
N <- colSums(Simulation) # Population sizes by generation
Simulation <- Simulation[, N > 0] # Remove empty generations
N <- N[N > 0]
# -- The population proportions are jittered for plotting
Proportions <- t(t(Simulation + rbeta(prod(dim(Simulation)), 3/2, 3/2) - 1/2) / N)
Genotypes <- colSums(Simulation > 0) # Number of distinct genotypes per generation
#
# Plot the simulation.
#
plot(c(0, ncol(Proportions)-1), c(0, max(Proportions, na.rm = TRUE)), type = "n",
main = paste("Proportions for", stitle),
ylab = "Proportion", xlab = "Generation",
family = "Informal")
# -- Trace each genotype's proportion of the population over time, distinguishing
# them by color.
invisible(sapply(seq_len(nrow(Proportions)),
\(i) {
y <- Proportions[i, ]
j <- y > 0 & !is.na(y)
lines(which(j) - 1, y[j],
col = hsv(0.9 * i / nrow(Proportions), 1, 0.8, 0.5))
}))
plot(seq_along(Genotypes) - 1, Genotypes, ylim = c(0, length(population)),
main = paste("Genotypes for", stitle),
ylab = "# Distinct Genotypes", xlab = "Generation",
family = "Informal")
plot(seq_along(N) - 1, N, log = if(min(N) == 0) "" else "y",
main = paste("Population for", stitle),
ylab = "Population Size", xlab = "Generation",
family = "Informal")
}
par(mfrow = c(1, 1))
# dev.off() # If outputting to pdf
```
| null | CC BY-SA 4.0 | null | 2023-05-11T14:02:42.740 | 2023-05-11T14:02:42.740 | null | null | 919 | null |
615567 | 1 | null | null | 0 | 41 | As the question says, can a machine learning algorithm learn to differentiate a function? (E.g., if we give such a network a function $f(x)$, it should output its derivative $f'(x)$.)
Clearly, a simple feed-forward neural network should not be able to do so, but I have a strong feeling that recurrent neural networks can.
Has this been done before? How complex does the network have to be?
| Can a machine learning algorithm learn to differentiate a function? | CC BY-SA 4.0 | null | 2023-05-11T14:03:50.070 | 2023-05-11T15:56:29.473 | 2023-05-11T15:56:29.473 | 382923 | 382923 | [
"neural-networks",
"recurrent-neural-network"
] |
615568 | 2 | null | 615565 | 0 | null | If you have all the data and are analyzing it retrospectively, you could calculate all of the individual measures' interval rates of change, and then express them relative to the average rate of change for that measure. To do this, calculate $m_i=(x_i-x_{i-1})/(t_i-t_{i-1})$ for one measure for every interval, find the average rate of change over all intervals ($\bar m$), and then express interval rates of change as $m_i/ \bar m$. This gives you measure-specific values as to whether the measure is changing more or less than average, which could be a reasonable way to express "changing a lot" or "changing a little".
You may have to weight the formula if your intervals are not of equal length. There may also be numerical issues if most of your intervals exhibit no change, as even small fluctuations will look relatively huge compared to no change at all. This also requires retrospective analysis of the data, as you need the full history to compute the average. You could alter this for prospective analysis if you have domain knowledge about roughly what's an "average" rate of change - simply replace $\bar m$ with whatever value you choose, and this becomes your new setpoint for comparison for what constitutes a big or little change.
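A minimal Python sketch of this calculation (the times and values below are made up for illustration):

```python
import numpy as np

def relative_rates(t, x):
    """Interval rates of change for one measure, relative to their average."""
    m = np.diff(x) / np.diff(t)   # m_i = (x_i - x_{i-1}) / (t_i - t_{i-1})
    return m / m.mean()           # > 1 means changing more than average

# Made-up measurements at unequally spaced times
t = np.array([0.0, 1.0, 2.0, 4.0])
x = np.array([10.0, 12.0, 13.0, 19.0])
print(relative_rates(t, x))       # [1.  0.5 1.5]
```

Here the middle interval changes at half the average rate and the last interval at 1.5 times it.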
| null | CC BY-SA 4.0 | null | 2023-05-11T14:10:38.180 | 2023-05-11T14:16:16.927 | 2023-05-11T14:16:16.927 | 76825 | 76825 | null |
615570 | 1 | 615579 | null | 1 | 29 | I've been examining fitting the Weibull and lognormal distributions with the `survreg()` function of the `survival` package. Fitting the Weibull distribution took some transformation for standard parameterization (per R `dweibull()`) as shown here: [How to generate multiple forecast simulation paths for survival analysis?](https://stats.stackexchange.com/questions/614198/how-to-generate-multiple-forecast-simulation-paths-for-survival-analysis)
I'm now moving on to the exponential distribution. [See https://stats.stackexchange.com/questions/616351/how-to-assign-reasonable-scale-parameters-to-randomly-generated-intercepts-for-t for an example of the exponential distribution.] Could someone please confirm if the exponential distribution is being correctly fit in the R code posted at the bottom and as illustrated in the following image? If not, how do I correctly fit exponential? I only use the `lung` dataset for ease of example even though it doesn't provide good fit: Weibull provides the best fit.
[](https://i.stack.imgur.com/czyHL.png)
Code:
```
library(survival)
time <- seq(0, 1000, by = 1)
fit <- survreg(Surv(time, status) ~ 1, data = lung, dist = "exponential")
survival <- 1 - pexp(time, rate = 1 / fit$coef)
plot(time, survival, type = "l", xlab = "Time",ylab = "Survival Probability",col = "red", lwd = 3)
lines(survfit(Surv(time, status) ~ 1, data = lung), col = "blue")
legend("topright",legend = c("Fitted exponential","Kaplan-Meier" ),col = c("red", "blue"),lwd = c(3, 1),bty = "n")
```
| Confirm that the exponential distribution is correctly being used with the survreg() function of the survival package? | CC BY-SA 4.0 | null | 2023-05-11T14:21:42.647 | 2023-05-21T09:37:31.227 | 2023-05-21T09:37:31.227 | 378347 | 378347 | [
"r",
"survival",
"exponential-distribution"
] |
615571 | 1 | null | null | 0 | 16 | Imagine there are two parties that compete with each other for votes. If Party A wins votes in a certain town, it's most often from winning over voters from Party B.
In my scenario, Party A is a large mainstream conservative party and Party B is a smaller "niche" right-wing party. In my project overall, I am trying to qualitatively assess the extent to which Party B has pushed Party A to the right. In the 2006 election, Party B was quite small, but in the 2010 election it had a major breakthrough and rode an extreme right-wing platform. The period from 2006-2010 also coincided with Party B's efforts to improve its organization, mobilize voters at the grassroots, and develop a media network. Since 2010, Party A and Party B have been directly competing for the conservative vote.
I have data for four parliamentary elections (2006, 2010, 2014, and 2018), showing the breakdown of support for each party in each town in each election (so an observation is an election-town). If I plot Party A's vote share change from one election to the next against Party B's vote share change from the same election to the next (i.e. change between 2010 and 2014, 2014 and 2018, etc), I get a strong negative correlation, as would be expected since the parties are competing directly for the same voters. Party A's win is Party B's loss and vice versa.
Party A has moved far to the right, largely in an effort to capture Party B's votes (as I claim qualitatively), but I want to provide some statistical support for this claim. I found a pretty striking positive correlation between Party A's vote share change from 2014-2018 (the period Party A shifted far to the right and had lots of electoral success) and Party B's change in vote share from 2006 to 2010 (which represents how popular Party B's right wing message was in a certain town).
[](https://i.stack.imgur.com/TH7qp.png)
I want to be able to say that this provides evidence that Party B's right-wing messaging early on likely made voters more receptive to Party A's hard-right turn in the 2018 election. (Disclaimer: This is political science research; I want descriptive evidence to support my argument rather than a causally robust natural experiment). But I am concerned that this relationship is just an arithmetical fact. If Party A and Party B compete directly for votes, then places where Party B gained a lot of votes in 2010 are areas that Party A had a lot of potential voters that it could "flip" toward itself in 2018. So it would make sense that the change in vote share would be higher in these towns.
So I was wondering:
a) Are my concerns about inferring anything from the data I've discussed valid or am I excessively concerned?
b) Might a linear regression strategy be possible? Imagine I regress Party A's change in vote share from 2014-2018 on Party B's change in vote share from 2006-2010 and Party A's vote share in 2010. That way, would I be controlling for bias that could emerge from Party A having an initially low vote share in the 2010 elections in towns that they could then "flip" in the 2018 elections? I worry about collinearity but VIFs in these regressions are generally around 1.
| Inference and Linear Regression when dependent and independent variables are political party vote shares | CC BY-SA 4.0 | null | 2023-05-11T14:26:01.847 | 2023-05-11T14:26:01.847 | null | null | 382389 | [
"regression",
"logistic",
"least-squares",
"descriptive-statistics"
] |
615572 | 1 | 615575 | null | 3 | 284 | If we have two events, $A$ and $B$, it is sure that
$$P(A\mid B) + P(A^C\mid B) = 1$$
However, it seems me that
$P(A\mid B) + P(A \mid B^C) = 1$ is false in general
I don't remember why it is so. How can this fact be explained in the simplest way?
| Complementary events and conditioning | CC BY-SA 4.0 | null | 2023-05-11T14:26:03.737 | 2023-05-12T08:29:54.667 | 2023-05-12T04:35:49.087 | 509 | 106229 | [
"probability",
"conditional-probability"
] |
615573 | 1 | null | null | 0 | 11 | Intuitively speaking, what's the impact of changing the $d_k$ (and $d_{kv}$) for transformers?
My understanding is that each attention head is effectively a lower-dimensional projection of a higher-dimensional representation of the sequence, and each head effectively looks at different subspaces. Hence, more heads lead to a lower $d_{kv}$. Presumably this means that for some tokens the model has to learn how to compress it to such a low dimension that it might not capture all the nuances of that token with respect to others within the larger sentence? Is this the best way to think about this?
| Effect of dK (key vector dimension) in transformers | CC BY-SA 4.0 | null | 2023-05-11T14:28:54.117 | 2023-05-11T14:28:54.117 | null | null | 288378 | [
"neural-networks",
"transformers",
"attention"
] |
615574 | 1 | null | null | 1 | 20 | Suppose a student takes 3 tests on a given day and that the probability that she passes any particular test is .6, the probability that she passes any particular pair of tests is .4, and the probability that she passes all 3 tests is .3. What is the probability that she passes at least one test ?
My idea for this question is to model it in terms of P(A) = probability of passing test A, with tests A, B, C. So P(passes at least one test) = 1 - P(passing no tests)
P(passing no tests) = P((AUBUC)') = 1 - P(AUBUC)
P(AUBUC) = inclusion exclusion theory which is = 0.6 * 3 - 0.4 * 3 - 0.3 = 0.3
P(passing no tests) = 1 - 0.3 = 0.7
Obviously I'm missing something here, anyone can help me with this one?
| Probability of passing at least one test | CC BY-SA 4.0 | null | 2023-05-11T14:46:45.613 | 2023-05-11T15:26:54.947 | 2023-05-11T15:26:54.947 | 56940 | 387750 | [
"probability"
] |
615575 | 2 | null | 615572 | 7 | null | All you need is just one counterexample.
Consider $\Omega = \{1, 2, 3\}$ with $P(\{1\}) = P(\{2\}) = P(\{3\}) = \frac{1}{3}$, $A = \{1, 2\}, B = \{2, 3\}$, then
\begin{align}
& P(A|B) = \frac{P(\{2\})}{P(\{2, 3\})} = \frac{1/3}{2/3} = \frac{1}{2}, \\
& P(A|B^c) = \frac{P(\{1\})}{P(\{1\})} = \frac{1/3}{1/3} = 1,
\end{align}
which results in $P(A|B) + P(A|B^c) = \frac{3}{2} > 1$.
Another more extreme counterexample is for any sample space $\Omega$, take $A = \Omega$, and $B$ be any event with $0 < P(B) < 1$, then $P(A|B) + P(A|B^c) = 1 + 1 = 2 > 1$.
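For concreteness, the first counterexample can be checked mechanically with exact arithmetic (a small Python sketch):

```python
from fractions import Fraction

# Mechanical check of the first counterexample: Omega = {1,2,3}, uniform
omega = {1, 2, 3}
P = {w: Fraction(1, 3) for w in omega}
A, B = {1, 2}, {2, 3}

def cond(E, F):
    """P(E | F) for events in a finite sample space."""
    return sum(P[w] for w in E & F) / sum(P[w] for w in F)

total = cond(A, B) + cond(A, omega - B)
print(total)  # 3/2
```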
---
More in-depth analysis: As you used the notation $P(A|B)$ and $P(A|B^c)$, which are meaningless unless $0 < P(B) < 1$, it can be assumed that $0 < P(B) < 1$. Under this condition,
\begin{align}
& P(A|B) + P(A|B^c) = \frac{P(A \cap B)}{P(B)} + \frac{P(A \cap B^c)}{P(B^c)} = 1 \tag{$*$}
\end{align}
if and only if
\begin{align}
P(A \cap B)P(B^c) + P(A \cap B^c)P(B) = P(B)P(B^c). \tag{1}
\end{align}
If $P(B) = \frac{1}{2}$, then $(1)$ implies $P(A) = \frac{1}{2}$. Hence for any event $A$ whose probability is not $\frac{1}{2}$, the statement $(*)$ is false.
If $P(B) \neq \frac{1}{2}$, let $p := \max(P(B), P(B^c))$. Hence the left-hand side of $(1)$ is bounded above by $pP(A)$ and below by $(1 - p)P(A)$; $(1)$ thus implies that $pP(A) \geq p(1 - p) \geq (1 - p)P(A)$, i.e., $P(A) \in [1 - p, p]$, which means that for any event $A$ whose probability is outside $[1 - p, p]$, $(*)$ is false.
$(*)$ is not always false, of course. For example, if $\Omega = A \cup B$, $A \cap B = \varnothing$, then $A = B^c$, whence
\begin{align}
P(A|B) + P(A|B^c) = P(A|B^c) = \frac{P(B^c)}{P(B^c)} = 1.
\end{align}
| null | CC BY-SA 4.0 | null | 2023-05-11T14:55:07.513 | 2023-05-11T21:27:43.193 | 2023-05-11T21:27:43.193 | 20519 | 20519 | null |
615576 | 1 | null | null | 0 | 12 | I am using the nlme package in R to perform nonlinear regression.
The aim of the model is to estimate parameters related with gaseous weight loss from different silages.
I have defined my function as:
```
profileLag <- function(Day,B,C,L) {B*(1-exp(-C*(Day-L)))}
```
B is the asymptote (total weight loss), C is the fractional rate of weight loss per day, and L is the lag time prior to onset of fermentation.
I would like to add a lower bound to my parameter L (lagtime before onset of fermentation).
I should be able to do this by:
```
nlme(....control=nlmeControl(opt="nlminb",lower=c(B=-Inf,C=-Inf,L=0)))
```
However, this does not work. I do not get any error messages, the limits are just simply ignored by the model.
Does anybody know what the problem is and how I can fix it/add limits to my L parameter?
I have built my model as follows:
```
#First model with estimated start values
GWL.DM.F2.M1<-nlme(GWL_g_kgDM~profileLag(Day,B,C,L),data=GWL.F2.grp,fixed=list(B~1,C~1,L~1),
random=pdDiag(B+C+L~1),correlation=corAR1(0.8,form=~Day|BagIDuq),
start=c(B=160,C=0.05,L=0))
# Second model - heterogeneous variance
GWL.DM.F2.M2<- update(GWL.DM.F2.M1,weights=varIdent(form=~1|MixID))
# Fixed effects from second model (new start values)
fe2.F2<-fixef(GWL.DM.F2.M2)
# Final model - parameters depend on type of silage (MixID)
GWL.DM.F2.M3<-update(GWL.DM.F2.M2,fixed=list(B~MixID-1,C~MixID-1,L~MixID-1),
start=c(B=rep(fe2.F2["B"],4),C=rep(fe2.F2["C"],4),L=rep(fe2.F2["L"],4)))
```
The limits should be added to the final model. I have also tried adding it to the first and second model and to the two other parameters (just to check), but they are always ignored by the model.
In the final model the limits should be added as:
```
nlme(....control=nlmeControl(opt="nlminb",lower=c(B=rep(-Inf,4),C=rep(-Inf,4),L=rep(0,4))))
```
| Lower limit for NLME regression | CC BY-SA 4.0 | null | 2023-05-11T14:59:14.997 | 2023-05-11T15:06:35.330 | 2023-05-11T15:06:35.330 | 387751 | 387751 | [
"r",
"optimization",
"nonlinear-regression"
] |
615577 | 1 | null | null | 0 | 16 | I am being asked what the units are for variable importance using the `varImp()` function for a caret package random forest model. My best answer, based on previous posts, is percent, but is that accurate? Would %MSE be a better interpretation, something else entirely, or is it unitless and just showing relative magnitude? The caret documentation does not say and previous posts are not definitive on this.
| Units for importance from the varimp function applied to random forest model in caret? | CC BY-SA 4.0 | null | 2023-05-11T15:00:47.820 | 2023-05-11T15:00:47.820 | null | null | 325355 | [
"random-forest",
"importance"
] |
615578 | 1 | null | null | 0 | 18 | Most pretrained architectures accept an input image size of 224x224, but do we always have to resize our images to 224x224?
I tried resizing to both 224x224 and 512x512; the first resolution gives lousy image quality, while the second gives better image quality (my original image size is 4288x2488). Because it's a segmentation task, I think the 224x224 quality can affect the segmentation results: the image is not clear enough, so the segmented part is not clear either.
So do we have to resize the image to 224x224 for following the pretrained rules or not?
| Input Image to U Net with Pretrained ResNet34 Encoder | CC BY-SA 4.0 | null | 2023-05-11T15:05:50.207 | 2023-05-11T15:34:44.837 | 2023-05-11T15:34:44.837 | 387494 | 387494 | [
"conv-neural-network",
"image-processing",
"transfer-learning"
] |
615579 | 2 | null | 615570 | 2 | null | You've gotten trapped by [location-scale modeling](https://stats.stackexchange.com/a/615237/28500) again. The model you fit is:
$$\log(T)\sim \beta_0 + W, $$
where $\beta_0$ is your `fit$coef ` (location) and $W$ represents a standard minimum extreme value distribution. The scale factor multiplying $W$ for a corresponding Weibull model is set exactly to 1 for an exponential model.
Thus $\beta_0$ represents a value in the log scale of time. For linear time, you need to exponentiate it to get the `rate` argument to supply to `pexp()`.
```
1/exp(fit$coef)
# (Intercept)
# 0.002370928
```
Try that.
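A quick numerical sanity check of the scale relationship (a Python sketch on simulated uncensored data, not the `lung` data): for uncensored exponential data the maximum-likelihood intercept equals the log of the mean survival time, so `1/exp(intercept)` recovers the rate:

```python
import numpy as np

# Hypothetical rate, chosen close to the fitted value above
rng = np.random.default_rng(0)
rate = 0.0024
T = rng.exponential(1 / rate, size=100_000)   # uncensored exponential times
intercept = np.log(T.mean())                  # MLE intercept on the log-time scale
print(1 / np.exp(intercept))                  # approximately 0.0024
```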
| null | CC BY-SA 4.0 | null | 2023-05-11T15:18:26.733 | 2023-05-11T15:18:26.733 | null | null | 28500 | null |
615580 | 1 | 615581 | null | 0 | 17 | I trained a 1-unit, 1-layer network (which I assume is limited to being a linear model) on temperature data, which follows a sinusoidal pattern over time. I expected this limited model to just produce a line around the mean, but if I predict values over time I get this stepped pattern [](https://i.stack.imgur.com/BcRKN.png)
The model creation:
```
body = tf.keras.Sequential([
layers.Dense(1)
])
```
And the input preprocessing:
[](https://i.stack.imgur.com/kvfB2.png)
How is it able to produce this non-linear output? Or am I grossly misunderstanding what Dense(1) creates, or what a linear model can predict?
If I add hidden layers and more units it follows a smooth curve without the steps.
| How is this linear model producing non-linear output? | CC BY-SA 4.0 | null | 2023-05-11T15:25:36.877 | 2023-05-11T15:29:17.890 | 2023-05-11T15:26:35.363 | 387675 | 387675 | [
"linear-model",
"tensorflow"
] |
615581 | 2 | null | 615580 | 0 | null | The seasonality of the output is very clear. That happens because your input is not only "day" as a number, but rather that value as a category (so 31 levels), plus a concatenation of three more features. In particular, you can see the output changes level by month.
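A minimal sketch of this effect (Python, with a hypothetical one-hot binning rather than the original preprocessing): a linear model applied to one-hot categorical time features can only produce piecewise-constant output:

```python
import numpy as np

# Hypothetical preprocessing: a one-hot "month" category derived from day
days = np.arange(365)
month = days // 31                        # crude 12-level binning
X = np.eye(month.max() + 1)[month]        # one-hot encoding
w = np.linspace(-1.0, 1.0, X.shape[1])    # arbitrary linear weights
yhat = X @ w

# The prediction changes only when the category changes -> stepped output
same_month = month[1:] == month[:-1]
print(len(np.unique(yhat)), np.all(np.diff(yhat)[same_month] == 0))  # 12 True
```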
| null | CC BY-SA 4.0 | null | 2023-05-11T15:29:17.890 | 2023-05-11T15:29:17.890 | null | null | 60613 | null |
615582 | 1 | null | null | 0 | 29 | My dataset is high dimensional (sample size is 200 with 300 features) and imbalanced. The imbalance ratio is 80:20 in the training set and 88:12 in the held-out test set (collected at a different time point). I am working on a binary classification problem. The performance on the held-out test set is very low (low recall and precision for the minority class, with an AUC around 50-60%). I am trying multiple machine learning algorithms (e.g. logistic regression, random forest, XGBoost, etc.), but all models are prone to overfitting regardless of hyperparameter tuning. I am using `SMOTETomek` to address the training dataset's class imbalance.
- Does the difference in the imbalance ratios in the training & testing sets affect the performance in the held-out test set?
- How can I reduce overfitting and increase my performance?
| Class-Imbalance: How to handle different class distributions in training and held-out test data? | CC BY-SA 4.0 | null | 2023-05-11T15:43:14.013 | 2023-05-11T16:31:03.957 | 2023-05-11T16:31:03.957 | 247274 | 336916 | [
"machine-learning",
"unbalanced-classes",
"overfitting",
"high-dimensional"
] |
615583 | 2 | null | 615519 | 1 | null | There is a slight error in how you programmed the DR estimator. Although `dr_weight` is the clever covariate, it is actually a function of the treatment. That means when you do g-computation on the DR outcome model, you need to set the value of the treatment to the specified value not just in the dataset but in the clever covariate, too, rather than leaving the clever covariate as is. Here is what the estimator should look like:
```
p <- predict(ip_mod, type = "response")
dr_mod <- lm(Y ~ A + L + I(A/p - (1-A)/(1-p)), data = d)
E_0 <- mean(predict(dr_mod, newdata = d0))
E_1 <- mean(predict(dr_mod, newdata = d1))
dr_est <- E_1 - E_0
```
That is, instead of defining the clever covariate outside the model and fixing its values, we let it be a function of the treatment and propensity score, i.e., `I(A/p - (1 - A)/(1 - p))`. This way, in the g-computation step, the value of `A` that is set in `newdata` is inserted into the clever covariate as well.
| null | CC BY-SA 4.0 | null | 2023-05-11T16:07:58.810 | 2023-05-11T16:07:58.810 | null | null | 116195 | null |
615584 | 2 | null | 615553 | 0 | null | OLS assumes there is no heteroscedasticity, i.e. $V=I$; thus $\operatorname{Var}\left(\hat{\boldsymbol{\beta}}_{O L S}\right)$ is not the one you mentioned. Instead, $\operatorname{Var}\left(\hat{\boldsymbol{\beta}}_{O L S}\right)=\sigma^2\left(\boldsymbol{X}^T \boldsymbol{X}\right)^{-1}$. However, if the assumption is incorrect, i.e. heteroscedasticity exists and $V\neq I$, this OLS formula is wrong and will often underestimate the variances of the coefficients (for example, when the error variance is larger at high-leverage points).
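To see this concretely, here is a deterministic Python sketch (a hypothetical design with error variance growing in $x$) comparing the correct sandwich variance with what the classical formula reports on average:

```python
import numpy as np

# Hypothetical design: error s.d. grows with x (heteroscedastic)
n = 50
x = np.linspace(1.0, 10.0, n)
X = np.column_stack([np.ones(n), x])
V = np.diag((0.5 * x**2) ** 2)                 # true error covariance, V != sigma^2 I

XtX_inv = np.linalg.inv(X.T @ X)
# Correct ("sandwich") variance of the OLS estimator:
sandwich = XtX_inv @ X.T @ V @ X @ XtX_inv
# What the classical formula reports on average: E[s^2] (X'X)^{-1},
# with E[s^2] = tr(MV)/(n - p) under heteroscedasticity
M = np.eye(n) - X @ XtX_inv @ X.T
classical = (np.trace(M @ V) / (n - 2)) * XtX_inv

# The classical formula understates the slope variance in this design
print(sandwich[1, 1] > classical[1, 1])        # True
```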
| null | CC BY-SA 4.0 | null | 2023-05-11T16:11:21.743 | 2023-05-11T16:11:21.743 | null | null | 387373 | null |
615585 | 1 | null | null | 0 | 11 | I have some variables regarding companies' workers, and I'd like to come up with a way of rankings these companies according to this data.
Say I have the following variables: number of technicians, number of engineers, number of accountants.
One problem that arises when doing PCA is the following: say the loadings for the first component for the variables above mentioned are `-0.5, 0.25, 0.75`, respectively.
A company with one worker of each type would then get a score of `-0.5*1 + 0.25*1 + 0.75*1 = 0.5`.
A company with just one technician would get a score of `-0.5*1 + 0.25*0 + 0.75*0 = -0.5`.
And a company with no employees (for wathever reason -maybe this is not the best example) would get a score of `-0.5*0 + 0.25*0 + 0.75*0 = 0`.
If I were to rank the companies according to their scores, the third company would be "better" than the second one, but this is clearly not true (we value more having an employee than not).
How can I get around this, or what can I do in this scenario?
| Synthetic variable with PCA | CC BY-SA 4.0 | null | 2023-05-11T16:39:45.587 | 2023-05-11T16:39:45.587 | null | null | 386434 | [
"pca",
"ranking"
] |
615586 | 2 | null | 615527 | 3 | null | What about binning the children's ages so that you have
$Y=B_0+B_1(no\_child) + B_2(child0-2)+B_3(child2-4)...etc$
I realize it's a logit but my latex skills are weak and the idea is the same even if this is the wrong syntax.
| null | CC BY-SA 4.0 | null | 2023-05-11T16:41:23.337 | 2023-05-11T16:41:23.337 | null | null | 24521 | null |
615588 | 1 | null | null | 0 | 24 | I am reading a paper and it has the following simple model:
Y = a + b1 + b2 + b1*b2 + e.
The author seems to be interested in b1, not the interaction term.
How do we interpret b1 in this case? I'm a little confused because most people would be interested in the interaction effect. By including the interaction term as a control variable, how does our interpretation of b1 change?
Thank you!
| Logic of including an interaction term as a control variable? | CC BY-SA 4.0 | null | 2023-05-11T16:51:04.520 | 2023-05-11T17:03:00.070 | null | null | 355204 | [
"regression",
"interaction",
"controlling-for-a-variable"
] |
615589 | 2 | null | 615588 | 2 | null | The coefficient on b1 corresponds to the change in the expected value of the outcome for a unit increase in b1 when b2 = 0. If b2 being 0 is meaningless or uninformative, then the coefficient on b1 will be meaningless as well.
Often people center b2 at its mean, so that when the centered version is 0, the original variable is at its mean. This gives a nice interpretation for the coefficient on b1: the change in the expected value of the outcome for a unit increase in b1 when b2 is at its mean (i.e., when centered b2 is 0).
When the interaction is included (whether b2 is centered or not), each individual has a different value of the slope of b1 on Y. If you were to take the average of these slopes, you get a summary measure that some would consider something like the "main effect" of b1. It turns out that in a linear model, this averaged slope is equal to the coefficient on b1 when b2 is centered.
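This equality is easy to verify numerically (a Python sketch with simulated data; the variable names mirror the question):

```python
import numpy as np

# Simulated illustration: the coefficient on b1 when b2 is centered equals
# the average of the per-observation slopes of b1 from the uncentered fit.
rng = np.random.default_rng(1)
n = 500
b1 = rng.normal(size=n)
b2 = rng.normal(loc=3.0, size=n)
y = 1 + 2 * b1 + 0.5 * b2 + 1.5 * b1 * b2 + rng.normal(size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Uncentered model: y ~ b1 + b2 + b1:b2
bu = ols(np.column_stack([np.ones(n), b1, b2, b1 * b2]), y)
# Centered model: y ~ b1 + b2c + b1:b2c
b2c = b2 - b2.mean()
bc = ols(np.column_stack([np.ones(n), b1, b2c, b1 * b2c]), y)

avg_slope = np.mean(bu[1] + bu[3] * b2)   # average individual slope of b1
print(np.isclose(bc[1], avg_slope))       # True
```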
| null | CC BY-SA 4.0 | null | 2023-05-11T17:03:00.070 | 2023-05-11T17:03:00.070 | null | null | 116195 | null |
615593 | 1 | null | null | 0 | 14 | I'm trying to implement a custom objective function in XGBoost. I read [the docs](https://xgboost.readthedocs.io/en/stable/tutorials/custom_metric_obj.html) on this topic. I am not sure if I need to define a "reverse link function" (aka inverse link function) to properly implement my custom objective, and the documentation is pretty difficult to decipher.
In my objective, the raw output of the regression model is $f(x_i) = f_i = log(\sigma_i)$, the logarithm of the estimated (conditional) standard deviation. The loss is the negative log-likelihood of the targets under the assumption that each target $y_i$ comes from a normal distribution with zero mean and standard deviation $\sigma_i$. The loss, gradient, and hessian are (ignoring constant $c$):
$$
loss(y,f) = c +\sum_i\left(f_i+\frac{y_i^2}{2e^{2f_i}}\right) $$
$$
grad_i(y_i, f_i) = 1-y_i^2/e^{2f_i}
$$
$$
hess_i(y_i,f_i) = 2y_i^2/e^{2f_i}
$$
Again, I'm not sure if XGBoost requires me to also define a reverse link function. I suppose my link function would be $\sigma_i= e^{f_i}$. I am fine working with the untransformed $f_i$, but I'm not sure if XGBoost is.
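As a sanity check of the derivatives above (a numpy-only sketch, independent of XGBoost), the gradient and hessian can be verified against central finite differences:

```python
import numpy as np

def loss(y, f):   # per-observation negative log-likelihood (constant dropped)
    return f + y**2 / (2.0 * np.exp(2.0 * f))

def grad(y, f):   # 1 - y^2 / e^{2f}
    return 1.0 - y**2 / np.exp(2.0 * f)

def hess(y, f):   # 2 y^2 / e^{2f}
    return 2.0 * y**2 / np.exp(2.0 * f)

rng = np.random.default_rng(0)
y = rng.normal(size=200)
f = rng.normal(scale=0.5, size=200)
eps = 1e-5
num_grad = (loss(y, f + eps) - loss(y, f - eps)) / (2 * eps)
num_hess = (grad(y, f + eps) - grad(y, f - eps)) / (2 * eps)
print(np.allclose(grad(y, f), num_grad, atol=1e-6),
      np.allclose(hess(y, f), num_hess, atol=1e-6))  # True True
```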
| XGBoost custom objective/loss: when is a Reverse (inverse) Link Function required? | CC BY-SA 4.0 | null | 2023-05-11T17:11:01.277 | 2023-05-11T17:11:01.277 | null | null | 125259 | [
"boosting",
"loss-functions",
"gradient-descent",
"link-function"
] |
615594 | 1 | null | null | 0 | 25 | I am trying to compare the fit of two nested multiple-group path analysis models (Models A & B).
Model A is the model that does not have any equality constraints across genders (i.e., path coefficients and correlations are freely estimated separately for males and females).
Model B differs from Model A only by placing equality constraints for genders for the path coefficients and correlations.
I am using Mplus with the MLR estimator. So, to compare these nested models, I conducted scaled chi-square difference testing using the models' loglikelihood values. The result from that test is statistically significant, which suggests the unconstrained model fits better than the constrained model.
(Chi-square = 33.79, df = 21, p < .05) .
I calculated cd to be 1.015 and TRd to be 33.78 based on the following from the two models:
Model A
```
Number of Free Parameters                 70

Loglikelihood
    H0 Value                      -15820.276
    H0 Scaling Correction Factor      1.0020
      for MLR
    H1 Value                      -15820.276
    H1 Scaling Correction Factor      1.0020
      for MLR

Information Criteria
    Akaike (AIC)                   31780.551
    Bayesian (BIC)                 32131.462
    Sample-Size Adjusted BIC       31909.125
      (n* = (n + 2) / 24)
```
Model B
```
Number of Free Parameters                 49

Loglikelihood
    H0 Value                      -15837.415
    H0 Scaling Correction Factor      0.9966
      for MLR
    H1 Value                      -15820.276
    H1 Scaling Correction Factor      1.0019
      for MLR

Information Criteria
    Akaike (AIC)                   31772.830
    Bayesian (BIC)                 32018.468
    Sample-Size Adjusted BIC       31862.831
      (n* = (n + 2) / 24)
```
Although the scaled chi-square difference test says the unconstrained model fits better, the values for AIC, BIC, and sample-size adjusted BIC from the constrained model are smaller compared to their values for the unconstrained model, respectively.
In summary, the result (and following decision) using the scaled chi-square difference test is inconsistent with the result (and following decision) using AIC, BIC, and sample-size adjusted BIC.
Can someone please explain why the scaled chi-square difference test favors the model with more parameters (i.e., Model A) while the AIC, BIC, and Sample-Size adjusted BIC favor the model with fewer parameters (i.e., Model B)?
Which fit statistic(s) should I rely on and report in this situation?
Is it misplaced to use AIC, BIC, and sample-size adjusted BIC for nested multiple-group models when the goal is to freely estimate many more parameters in one model compared to the other model?
Is the penalty correction for AIC, BIC, and sample-size BIC in this context too harsh?
Which model is better to use (i.e., Model A based on the statistical test or Model B based on parsimony)?
Thank you and Regards,
Aaron
| Result from scaled chi-square difference test and AIC/BIC are inconsistent for comparison of nested models | CC BY-SA 4.0 | null | 2023-05-11T17:21:25.420 | 2023-05-12T14:09:43.530 | null | null | 387757 | [
"structural-equation-modeling"
] |
615595 | 1 | null | null | 0 | 15 | Let me preface my question with a summary of my understanding of ANOVA.
- One-way ANOVA: When the researcher is interested in the effect of one independent variable (such as treatment) on a dependent variable. There may be other variables that affect the dep. variable, but that effect is not removed in any way in one-way ANOVA.
- Two-way ANOVA: When the researcher is interested in the effect of two independent variables on the dep. variable. The P-values that you get from a two-way ANOVA have mathematically taken into consideration the other independent variable.
My question: Since we are told that if we are only interested in the effect of 1 independent variable we should use a one-way ANOVA, is there any way to remove variability of a 2nd ind. variable? Could we perhaps do a two-way ANOVA with that 2nd variable even though we aren't interested in how it affects the dep. variable. We would simply want to remove the variability of the 2nd ind. variable from the P-value of our 1st ind. variable.
Notes: Please let me know if my understanding of ANOVA is flawed. I have looked in many places for clarification on this question and have struggled with it for years.
| Difference between one-way and two-way ANOVA | CC BY-SA 4.0 | null | 2023-05-11T17:22:22.667 | 2023-05-11T17:22:22.667 | null | null | 339558 | [
"anova"
] |
615596 | 1 | null | null | 0 | 16 | I am trying to write out the model for the following situation. Suppose you have a nested design (B nested within A), but the number of levels of B can change depending on the level of A. In addition, the sample sizes may differ. So, for the $i$th level of A, there are $b_i$ levels
of B, and $n_{ij}$ replicates in the $ij$th cell.
This is the model I am thinking it would be: $y_{ij} = \mu + A_i + B_{ij} + e_{ij}$.
Can anyone help me?
| Nested Model building | CC BY-SA 4.0 | null | 2023-05-11T17:58:25.070 | 2023-05-11T18:12:24.923 | null | null | 387758 | [
"regression",
"mixed-model",
"anova",
"nested-models"
] |
615598 | 2 | null | 376085 | 1 | null | >
We are given that each factor is three levels, two replicates, and that the treatment sum of squares is 1200.
The interpretation of this old question might be a bit ambiguous. I'll take it to mean that there were 2 separate experiments (replications), with 1 observation made for each of the 27 combinations of `A`, `B` and `C` (each a factor with 3 levels) within each `replication`. Thus there were 54 observations, for 53 overall degrees of freedom.
The `replication` would then be thought of as a 2-level fixed effect that isn't involved in interactions with `A`, `B` or `C`. It could account for an overall difference in means between the two replications.
Then work backwards. In that interpretation, there is 1 degree of freedom associated with `replication`.
Each 3-level factor uses up 2 degrees of freedom. Thus the missing SS for `A` is 240, the missing MS for `B` is 160, and the missing SS for `C` is 120.
Each 2-way interaction between two 3-level factors uses up $2 \times 2=4$ degrees of freedom. The missing SS for `AB` is 220. The missing MS values for `AC` and `BC` are 40 and 25, respectively.
By my count, that's a total SS of 1160 for `A`, `B`, `C` and their 2-way interactions. If the total "treatment" SS for all combinations is 1200, that leaves 40 for the SS of the 3-way interactions. The three-way interactions among all 3 factors use up $2 \times 2 \times 2=8$ degrees of freedom, for a MS of only 5 for the 3-way interactions.
What's left for Error? For SS, you have 2000 total minus 1200 for treatments, and also minus 300 for `replication`: 500.
For df, you have 53 total minus 1 for `replication`, minus 6 for the individual `A`, `B`, `C` combined (2 df each), minus 12 for their two-way interactions (4 df each), and minus 8 for the 3-way interaction: 26 df are left for Error. MS Error is thus 19.2, approximately.
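To double-check the bookkeeping, here is a small script that replays the arithmetic above (all values are taken from the reasoning in this answer):

```python
# Degrees of freedom: 2 replications x 27 treatment combinations = 54 observations
n_obs = 2 * 3 * 3 * 3
df_total = n_obs - 1                         # 53
df_error = df_total - 1 - 3 * 2 - 3 * 4 - 8  # minus replication, mains, 2-way, 3-way
print(df_error)                              # 26

# Sums of squares for A + B + C + AB + AC + BC (MS values times their df where needed)
ss_known = 240 + 160 * 2 + 120 + 220 + 40 * 4 + 25 * 4
print(ss_known)                              # 1160
ss_3way = 1200 - ss_known                    # 40 -> MS = 40 / 8 = 5
ss_error = 2000 - 1200 - 300                 # total - treatments - replication = 500
print(round(ss_error / df_error, 1))         # MS Error ~ 19.2
```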
| null | CC BY-SA 4.0 | null | 2023-05-11T18:12:49.257 | 2023-05-11T18:12:49.257 | null | null | 28500 | null |
615599 | 1 | null | null | 0 | 52 | I need your help with some work I am doing.
Some context first:
I am writing a dissertation for my master. The topic is about perceived trust in Smart Home technology. I launched a survey with a closed ended questions for demographic data, and likert scale that asks 8 Questions on a scale of 1 to 5. I gathered 159 responses in total.
The 8 Questions in ther likert scale are actually 4 different dependent variables. Q1/Q2 make dependent variable1, Q3/Q4 dependent variable 2 etc.
Since it's a Likert scale, the data are not interval-scaled, so what I did is take the sum of Q1 and Q2 and divide it by 2, which gave me a mean. This mean is one of the 4 dependent variables. I did this an additional 3 times for the other 3.
Here is an example of the likert scale:[](https://i.stack.imgur.com/Hl6dL.png)
The IV: Age (integer from 18 to 99), Gender (0 = male, 1= female), educational level (0 = low, 1 =mid , 2 = high), income ( they're ranges (below 24.999 => 0 = low, 25000 -39.999 => 1 = mid, more than 40000 2 = high), household size.
DV => Predictability of the technology, Dependability of the technology, Faith in the technology, Technology usefulness
I have 4 different hypotheses for this. One for each dependent variable, here is an example: There is a relationship between at least one of the independent variables and predictability of smart home technology.
The idea is to test each one of these dependent variables and see if they can be predicted with the independent variables (and control variables) that I have ( age, gender, educational attainment, household size and income).
For that I read that a multiple linear regression would be enough. So I started reading about that method and I saw that there were some assumptions that needed to be met before I could use it. For normality, 3 of the 4 dependent variables were normally distributed, but the last one was not quite normally distributed. Secondly, it seems that testing the four variables for linearity resulted in all of them not being linear.
Now I need to start the analysis part of my dissertation, but I have no clue which method I should use since the assumptions of the multiple linear regression are not met.
I know about non-parametric tests, but I can't find any non-parametric alternative to the multiple linear regression.
If you need more info about the variables etc let me know, I will provide them!
Thanks for your help and time.
| Multiple Linear regression unmet assumptions, what can I do? | CC BY-SA 4.0 | null | 2023-05-11T18:16:42.543 | 2023-05-12T07:37:03.900 | 2023-05-11T18:52:25.950 | 359836 | 359836 | [
"regression",
"multiple-regression",
"nonparametric",
"stata",
"nonlinear-regression"
] |
615600 | 1 | 615628 | null | 4 | 414 | >
Suppose a random vector $(X, Y)$ has joint probability density function $f(x, y) = 3y$ on the triangle bounded by the lines $y = 0$, $y = 1 - x$, and $y = 1 + x$. Compute $E(Y \mid X \le 1/2)$.
I'm confused about how to write the range of integral for obtaining the marginal distribution of $x$. After plotting it, I found that $y$ varies from $0$ to $1$. But again, it also depends on the value of $x$, like when $x=0.5, y=0.5$. What am I missing and what is a general method for obtaining the ranges?
| Finding the conditional expectation given the joint density function | CC BY-SA 4.0 | null | 2023-05-11T18:16:42.607 | 2023-05-12T16:11:08.750 | 2023-05-11T18:34:00.283 | 5176 | 339153 | [
"conditional-expectation",
"joint-distribution",
"triangular-distribution"
] |
615601 | 1 | null | null | 0 | 4 | [](https://i.stack.imgur.com/kzoRA.png)
The above picture is an excerpt from a paper I am reading (Roads and Loans, Review of Financial Studies). I think, based on how the authors report their results, that beta1 is their coefficient of interest. But my question is: why would the authors not include [500-h<pop<500+h] separately, as they did with the 1000 threshold?
| Trouble interpreting a discontinuity model | CC BY-SA 4.0 | null | 2023-05-11T18:37:10.417 | 2023-05-11T18:37:10.417 | null | null | 355204 | [
"interpretation",
"regression-discontinuity"
] |
615602 | 1 | null | null | 0 | 5 | I'm testing a longitudinal model with two time points (X1 predicting Y2 while controlling for Y1). The standardized stability coefficient (Y2 on Y1) is b = .1. It is safe to assume that much of the residual variance is due to measurement error and the rank order stability of Y is likely much higher, though unknown. This is common with this type of data (fMRI). My question is, if X1 significantly predicts Y2 while controlling for Y1 (it does at b ~ .1***), is this model any better than cross-sectional, given that Y1 explains so little of the variance in Y2? I am inclined to believe that controlling for outcome stability is not helpful here and this model is only marginally better (if at all) than cross-sectional for making causal claims. Thoughts? Experiences? Thanks.
| Can one make causal inferences when outcome construct stability is low (i.e., standardized b = .1)? | CC BY-SA 4.0 | null | 2023-05-11T18:48:34.413 | 2023-05-11T18:48:34.413 | null | null | 387761 | [
"inference",
"panel-data"
] |
615603 | 2 | null | 615425 | 1 | null | Assuming your Poisson and negative binomial models used the (natural) log as the link function, yes, you would transform those coefficients by exponentiating them. It is certainly possible to have a 'negative' relationship between $X$ and $Y$ such that increasing values of $X$ are associated with decreasing values of $Y$. On the scale of the linear predictor, that will show up as a negative coefficient. Note that on the scale of the linear predictor, everything is additive—that is, every time you go up $1$ unit on $X$, you go up $\hat{\beta}_X$ units on (the log of the mean of) $Y$. When you transform those coefficients, you are no longer on the scale of the linear predictor, and things are no longer additive—now, they are multiplicative in nature. Every time you go up $1$ unit on $X$, the former mean is multiplied by $\exp(\hat{\beta}_X)$ to get (the new mean of) $Y$. Moreover, when you exponentiate a negative number, you get a value between $0$ and $1$. To overemphasize this, you get a value less than one. So every time you multiply a number by such a coefficient, the product is smaller than the former value. Thus, you still have a 'negative' relationship between $X$ and $Y$.
| null | CC BY-SA 4.0 | null | 2023-05-11T18:50:28.273 | 2023-05-11T18:50:28.273 | null | null | 7290 | null |
615604 | 2 | null | 102931 | 0 | null | This is an old question, but I want to add what I think other answers lack.
Graphical Models:
They describe how a probability distribution over some $N$ variables factorizes using a parent-child relationship. Given its parents, each random variable is independent of its non-descendants. Example:
[](https://i.stack.imgur.com/clp4e.png)
This graph describes a distribution $f$ over $X_1,X_2,X_3,X_4,X_5,X_6$ which factorizes as,
\begin{equation}
f(X_1,X_2,X_3,X_4,X_5,X_6) = p(X_1)\cdot p(X_3)\cdot p(X_2 | X_1,X_3)\cdot p(X_5| X_1)\cdot p(X_4|X_5,X_3)\cdot p(X_6 | X_1,X_3)
\end{equation}
Markov Chains: A discrete-time Markov chain defines a sequence of random variables that can take several discrete values called "states." A graph for a Markov chain usually represents how these states evolve into each other. Example:
[](https://i.stack.imgur.com/9eqXQ.png)
Here a random variable, say $X_0$ can take 4 values $\{A,B,C,D\}$. Given we know the value $X_0$ takes, we describe a probability distribution over states for the random variable $X_1$, where the corresponding weighted edges give the probability of each state.
However, we can draw a different graph for Markov Chains, one which describes a probability distribution over $\{X_t\}, t\geq 0$
[](https://i.stack.imgur.com/FQa1y.png)
So, the distribution factorizes as
\begin{equation}
f(X_0,X_1,\dots,X_{t-1},X_t,\dots) = p(X_0)\cdot \prod_{t>0}p(X_t | X_{t-1})
\end{equation}
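To make the factorization concrete, here is a small sketch that samples a path $X_0, X_1, \dots$ by repeatedly drawing $X_t$ from $p(\cdot \mid X_{t-1})$ (the 4-state transition probabilities below are made up, not the ones in the figure):

```python
import random

states = ["A", "B", "C", "D"]
# Hypothetical transition kernel p(X_t | X_{t-1}); each row sums to 1.
P = {
    "A": {"A": 0.1, "B": 0.6, "C": 0.3, "D": 0.0},
    "B": {"A": 0.0, "B": 0.2, "C": 0.5, "D": 0.3},
    "C": {"A": 0.4, "B": 0.0, "C": 0.4, "D": 0.2},
    "D": {"A": 0.5, "B": 0.5, "C": 0.0, "D": 0.0},
}

def sample_path(x0, T, rng):
    # Draw X_1, ..., X_T given X_0 = x0, one conditional draw at a time.
    path = [x0]
    for _ in range(T):
        prev = path[-1]
        path.append(rng.choices(states, weights=[P[prev][s] for s in states])[0])
    return path

print(sample_path("A", 10, random.Random(0)))
```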
| null | CC BY-SA 4.0 | null | 2023-05-11T18:56:16.967 | 2023-05-11T18:56:16.967 | null | null | 387759 | null |
615605 | 2 | null | 615600 | 6 | null | I would refer to the marginal distribution of (capital) $X$ rather than to the marginal distribution of (lower-case) $x.$ The former is the random variable, and those are the things that have distributions.
For any fixed value of $x,$ the other variable $y$ goes from $0$ to $1+x$ if $x\le0$ or from $0$ to $1-x$ if $x\ge0.$ So
\begin{align}
\Pr(X\le 1/2) & = \Pr(X\le0) + \Pr(0<X\le1/2) \\[8pt]
& = \frac12 + \Pr(0<X\le1/2), \text{ by symmetry} \\[8pt]
& = \frac12 + \int_0^{1/2} \left( \int_0^{1-x} 3y \,dy \right) \,dx \\[8pt]
& = \frac12 + \int_0^{1/2} \frac{3(1-x)^2} 2 \, dx \\[8pt]
& = \frac12 + \frac7{16} = \frac{15}{16}.
\end{align}
So the conditional joint density of $(X,Y)$ given $X\le1/2$ is
\begin{align}
& f_{X,Y\,\mid\,X\,\le\,1/2} (x,y) \\[8pt]
= {} & \begin{cases} (16/5)y & \text{if } \big({-1}<x<0 \text{ and } 0<y<1+x\big) \\ & \text{or } \big( 0<x<1/2 \text{ and } 0<y<1-x\big), \\[8pt] 0 & \text{otherwise.} \end{cases}
\end{align}
And so
\begin{align}
& \operatorname E(Y\mid X\le1/2) \\[8pt]
= {} & \iint\limits_\text{triangle} y\cdot f_{X,Y\,\mid\,X\,\le\,1/2}(x,y) \, d(x,y) \\[8pt]
= {} & \int_{-1}^0 \left( \int_0^{1+x} y\cdot\frac{16}5 y \, dy \right) \, dx + \int_0^{1/2} \left( \int_0^{1-x} y \cdot \frac{16}5 y \, dy \right) \, dx.
\end{align}
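Evaluating the remaining integrals gives $\operatorname E(Y\mid X\le 1/2) = 31/60 \approx 0.517$, which a quick rejection-sampling sketch confirms:

```python
import random

rng = random.Random(42)
num, den = 0.0, 0
for _ in range(400_000):
    # Proposal: uniform on the bounding box [-1, 1] x [0, 1];
    # accept with probability y (proportional to f(x, y) = 3y on the triangle).
    x = rng.uniform(-1.0, 1.0)
    y = rng.uniform(0.0, 1.0)
    in_triangle = (y < 1 + x) if x < 0 else (y < 1 - x)
    if in_triangle and rng.random() < y:
        if x <= 0.5:           # condition on X <= 1/2
            num += y
            den += 1
print(round(num / den, 3))     # should be close to 31/60 ~ 0.517
```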
| null | CC BY-SA 4.0 | null | 2023-05-11T19:06:06.250 | 2023-05-12T13:18:45.913 | 2023-05-12T13:18:45.913 | 5176 | 5176 | null |
615606 | 1 | null | null | 0 | 10 | I am trying to understand the sample sizes required to estimate Cohen's kappa with a given precision. I am aware that the traditional confidence intervals do not achieve the nominal coverage in small samples, though I am confused by the wide range of findings regarding that. More importantly, however, I am struggling to find an approach that accounts for the asymmetry inherent in a correlation coefficient.
When I bootstrap confidence intervals for Cohen's kappa, they are (obviously) not symmetric but bounded at 1 and/or right-skewed. For Pearson's r, the way to account for that would be to transform it to z-scores, create the confidence interval, and transform it back. Is something similar possible and/or appropriate for Cohen's kappa? It would seem necessary, but I have not yet seen anything in that direction.
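For context, this is the kind of bootstrap I mean — a minimal pure-Python percentile bootstrap for kappa, resampling rated units jointly (an illustrative sketch only):

```python
import random

def cohens_kappa(a, b):
    # a, b: category labels from the two raters for the same n units
    n = len(a)
    cats = sorted(set(a) | set(b))
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_exp = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)

def bootstrap_ci(a, b, n_boot=2000, alpha=0.05, seed=0):
    # Percentile bootstrap: resample (a_i, b_i) pairs with replacement.
    rng = random.Random(seed)
    n = len(a)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        ra, rb = [a[i] for i in idx], [b[i] for i in idx]
        try:
            stats.append(cohens_kappa(ra, rb))
        except ZeroDivisionError:   # degenerate resample with p_exp == 1
            continue
    stats.sort()
    lo = stats[int((alpha / 2) * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi
```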
| Asymmetric confidence intervals for Cohen's kappa | CC BY-SA 4.0 | null | 2023-05-11T19:07:09.517 | 2023-05-11T19:07:09.517 | null | null | 240420 | [
"confidence-interval",
"agreement-statistics",
"cohens-kappa"
] |
615607 | 2 | null | 615599 | 0 | null | As mkt said in a comment, linear regression does not assume the dependent variable is itself normally distributed, but only the residuals.
Regarding the non-linearity, you can consider a non-linear transformation of the predictor variables. For continuous variables like the age, taking a logarithm often helps. You can also try a polynomial, or splines, e.g. restricted cubic splines.
For discrete variables, whose values are just arbitrary encodings, you are free to change the encoding. E.g. instead 0, 1, and 2 for low, mid, and high, you may take 0, 1, and 5, if it makes the dependency more linear.
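A tiny sketch of both ideas — log-transforming a continuous predictor and re-coding an ordinal one (the re-coded values are arbitrary illustrations, not recommendations):

```python
import math

rows = [
    {"age": 23, "income": 0},   # hypothetical survey rows
    {"age": 47, "income": 1},
    {"age": 65, "income": 2},
]

income_recode = {0: 0, 1: 1, 2: 5}   # e.g. stretch the 'high' category

# Build the transformed design matrix columns
design = [[math.log(r["age"]), income_recode[r["income"]]] for r in rows]
print(design)
```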
Update:
"Normality" is a mathematical ideal which can never be truly achieved in practice. Even for your data where the Shapiro-Wilk test is negative! The negative test result does not confirm that the data are normally distributed; it just fails to reject that hypothesis.
So, the true question which you should be interested in is whether the violation of normality assumption is so strong than an alternative gives better prediction than the ordinary least squares regression.
As of alternatives, I can think of [mean absolute deviation (MAD)](https://stats.stackexchange.com/a/388346) or [SVM regression](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html) with a linear kernel (which is basically a MAD regression with some tolerance $\epsilon$), but I don't know whether they are available in Stata.
| null | CC BY-SA 4.0 | null | 2023-05-11T19:19:53.070 | 2023-05-12T07:37:03.900 | 2023-05-12T07:37:03.900 | 169343 | 169343 | null |
615609 | 2 | null | 615572 | 5 | null | The conditional probability that you don't have red hair, given that you are male, is more than $0.9.$
The conditional probability that you don't have red hair, give that you are not male, is more than $0.9.$
The sum of those two probabilities is not $1.$
| null | CC BY-SA 4.0 | null | 2023-05-11T19:24:18.470 | 2023-05-11T21:35:29.853 | 2023-05-11T21:35:29.853 | 919 | 5176 | null |
615610 | 1 | null | null | 0 | 13 | I ran some OLS regressions with standardised coefficients, but to test the robustness of my results, I also want to run an ordinal logistic regression, as well as a Poisson model, using count data.
My dependent variable is a big ranking, which explains why I can use both OLS and ordinal regression (for the count model, I won't go much into details, but it is possible to use it if I tweak my dependent variable a bit). I can give more context if needed, but I think it would just add unnecessary details.
I want to be able to compare my coefficients across my three models; for this, I wanted to use standardised data for my ordinal and count models.
Can I do this? Or is it not recommended to standardize data when the dependent variable is ordinal/count?
| Can I use standardised coefficients in ordinal logistic and Poisson regressions? | CC BY-SA 4.0 | null | 2023-05-11T19:24:36.137 | 2023-05-11T19:24:36.137 | null | null | 382870 | [
"standardization"
] |
615611 | 2 | null | 615503 | 1 | null | >
Which model should I use to test this hypothesis (Hypothesis: Volatility in the European stock market increases on ECB Monetary policy announcement days). If I use this model, how do I test my hypothesis?
Using a GARCH model with a dummy in the conditional variance equation sounds like a good approach. The conditional variance equation is
$$
\sigma_t^2=\omega+\alpha_1\varepsilon_{t-1}^2+\beta_1\sigma_{t-1}^2+\gamma d_t
$$
where $d_t$ is the dummy variable. Test $H_0\colon \gamma=0$ using a $t$-test. If you reject $H_0$, you have an indication for presence of an effect on the conditional variance.
>
Is it possible for me to add other variables than the dummy variables for my event days? I would also like to test if shocks in monetary policy or changes in interest rates has an effect.
Yes, you can add more variables to the equation.
>
Perhaps I can also do simple OLS regressions with a measure of volatility as my $y$ variable and the event day as my dummy variable?
You could do this, but I think GARCH is a more elegant approach. Obtaining an accurate and precise measure of volatility to use as $y$ in an OLS regression is quite challenging, especially in presence of external shocks that you wish to test for. Meanwhile, in GARCH the measure is embedded in the model itself. (I do not have a good way to explain this now, but I will update the answer if I come up with anything relevant.)
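To illustrate the mechanism (not as an estimation recipe), here is a simulation sketch with made-up parameters: GARCH(1,1) returns whose conditional variance receives a $\gamma d_t$ bump on "announcement" days. Squared returns on dummy days come out clearly larger on average:

```python
import math
import random

rng = random.Random(1)
omega, alpha, beta, gamma = 0.1, 0.1, 0.8, 1.0   # hypothetical parameters
n = 5000
dummy = [1 if t % 10 == 0 else 0 for t in range(n)]  # every 10th day is an event day

sigma2, eps = omega / (1 - alpha - beta), 0.0
r2_dummy, r2_other = [], []
for t in range(n):
    sigma2 = omega + alpha * eps**2 + beta * sigma2 + gamma * dummy[t]
    eps = math.sqrt(sigma2) * rng.gauss(0.0, 1.0)
    (r2_dummy if dummy[t] else r2_other).append(eps**2)

print(sum(r2_dummy) / len(r2_dummy), sum(r2_other) / len(r2_other))
```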
| null | CC BY-SA 4.0 | null | 2023-05-11T19:38:44.260 | 2023-05-11T19:48:23.553 | 2023-05-11T19:48:23.553 | 53690 | 53690 | null |
615612 | 1 | 615834 | null | 4 | 62 | Killeen (1) published several papers about P-rep. It can be easily computed from a p-value (for a two-group comparison) and has an easy interpretation. It is the estimated probability that future repeat experiments (same sample size, same assumptions, only random sampling differs) will result in a difference that goes in the same direction (so the difference has the same sign). That includes all differences in the same direction, including tiny ones, so includes some future studies that result in a large p-value.
Lecoutre (2) went further and improved the equations, and showed how P-rep fits in the Bayesian world. It seems like a useful way to think about reproducibility. It is a simple function of the p-value so can't provide additional information, but it seems like a useful way to interpret results.
My question is whether this value (P-rep) has been used much in various fields (beyond psychology) and found to be useful. I can't find citations in the last decade. Do statisticians agree it is useful, or find that it is somehow incorrect or misleading?
Another minor question is: Lecoutre et al mentioned a webpage with equations, code and an Excel file. But that URL no longer works. Please let me know if it can be found!
- Killeen, P. R. An Alternative to Null-Hypothesis Significance Tests. Psychol Sci 16, 345–353 (2004). DOI: 10.1111/j.0956-7976.2005.01538.x
- Lecoutre, B., Lecoutre, M.-P. & Poitevineau, J. Killeen’s Probability of Replication and Predictive Probabilities: How to Compute, Use, and Interpret Them. Psychol Methods 15, 158–171 (2010). DOI: 10.1037/a0015915
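For reference, Killeen's basic formula (as in reference 1, without Lecoutre's refinements) is easy to compute from a two-tailed $p$-value: $p_{rep} = \Phi\!\big(\Phi^{-1}(1 - p/2)/\sqrt{2}\big)$. A sketch:

```python
from statistics import NormalDist

def p_rep(p_two_tailed):
    # Killeen's probability of replication from a two-tailed p-value
    nd = NormalDist()
    z = nd.inv_cdf(1.0 - p_two_tailed / 2.0)
    return nd.cdf(z / 2.0 ** 0.5)

print(round(p_rep(0.05), 3))   # 0.917, the value usually quoted for p = .05
```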
| What is the current status of Killeen’s probability of replication? | CC BY-SA 4.0 | null | 2023-05-11T19:41:26.693 | 2023-05-14T18:44:03.213 | 2023-05-14T14:19:49.980 | 25 | 25 | [
"p-value",
"references",
"reproducible-research",
"repeatability"
] |
615613 | 1 | null | null | 0 | 13 | I produced forecasts from a combination model using the full data set. The problem is that my accuracy code works fine when I use the test data set, but when I apply the same code to the combination model fitted to the full data set, it does not work.
Coding:
```
combo.model.full <- yosemite.new %>%
model(ets=ETS(yosemite1),
lets=ETS(log(yosemite1)),
larima=ARIMA(log(yosemite1)),
arima=ARIMA(yosemite1)) %>%
mutate(combination.1=(larima+arima)/2)
combo.full.fore <- forecast(combo.model.full, h=12)
combo.full.fore %>% autoplot(level=NULL)+
labs(title="Comparing forecast with full data", x="month",
y="number of visit")
accuracy(combo.full.fore, yosemite.new) %>% arrange(RMSE)
```
but it shows the comment :
```
Warning message:
The future dataset is incomplete, incomplete out-of-sample data will be treated as missing.
12 observations are missing between 2023 Jan and 2023 Dec .
```
Can anyone tell me how to find the best model?
| Accuracy for forecasted value in R | CC BY-SA 4.0 | null | 2023-05-11T19:51:43.803 | 2023-05-12T01:25:38.203 | 2023-05-12T01:25:38.203 | 2958 | 387764 | [
"forecasting",
"forecast-combination"
] |
615614 | 2 | null | 200534 | 3 | null | My answer aligns with the majority: Do not remove outliers unless you are certain they are erroneous. What I add is:
- A brief overview of published papers on this topic (those that I am aware of and primarily those published in psychology. There are many more).
- Based on that, an answer to the question: What method can be used instead of removing outliers when one knows that there are many incorrect data points but not which ones are incorrect?
Overview
It is well-documented that the removal of outliers invalidates statistical results. [Wilcox 1998](https://psycnet.apa.org/doi/10.1037/0003-066X.53.3.300) states: "This approach fails [removing outliers before a standard analysis] because it results in using the wrong standard error. Briefly, if extreme values are thrown out, the remaining observations are no longer independent, so conventional methods for deriving expressions for standard errors no longer apply." For a more detailed explanation, see the paper. [Bakker et al. (2014)](https://psycnet.apa.org/record/2014-14633-001) demonstrated one of the effects of this: substantially inflated type I error rates. Recently, [Andre (2022)](https://psycnet.apa.org/record/2014-14633-001) argued that this is only a problem when the model/hypothesis is considered for removing outliers. To provide a concrete example: He stated that while removing outliers within a group is problematic due to the invalid standard errors, removing outliers across groups is valid. More recently, [Karch (2023)](https://psycnet.apa.org/doiLanding?doi=10.1037%2Fxge0001357)(disclaimer: that's me) demonstrated that removing outliers across groups is equally problematic: Among other things, if there are group differences, it almost always invalidates confidence intervals and parameter estimates.
What can be used with noisy data?
All the papers cited so far recommend robust methods for handling noisy data (as suggested in some answers). Importantly, contrary to what is claimed in other answers, robust methods do not always yield the same results as outlier removal + standard methods. Briefly, robust methods use the correct standard errors, while outlier removal + standard methods do not (refer to Wilcox for details).
For the situation the original question asks about (comparing two groups), either the Yuen-Welch test or the Brunner-Munzel test and their corresponding confidence intervals seem like they could be applicable. The Yuen-Welch test is essentially the robust version of Welch's t-test. It's important to note that it considers trimmed means instead of normal means, which can be very different for asymmetric distributions (see example by AdamO). Brunner-Munzel's test is essentially the robust alternative to the Wilcoxon-Mann-Whitney test (see [https://stats.stackexchange.com/a/579604/30495](https://stats.stackexchange.com/a/579604/30495)). Both tests are readily available in R (see [https://rdrr.io/cran/WRS2/man/yuen.html](https://rdrr.io/cran/WRS2/man/yuen.html), and [https://cran.r-project.org/web/packages/brunnermunzel/index.html](https://cran.r-project.org/web/packages/brunnermunzel/index.html))
| null | CC BY-SA 4.0 | null | 2023-05-11T20:08:36.843 | 2023-05-11T20:08:36.843 | null | null | 30495 | null |
615615 | 1 | null | null | 0 | 13 | Low Blood Pressure (BP), below a threshold, say 90 mmHg, is associated with worse outcomes in surgery. The depth (how low the BP gets) and the time spent at that depth are both associated with worse outcomes. The association between BP and outcome is also non-linear. I have the time spent at each BP value < 90 for each patient; however, these variables are highly correlated, and each is more highly correlated with the values above it than with those below it. A patient with a BP = 60 usually also has BPs of 58, 59, 61, and 62. If I include all BPs in the model, I get odds ratio (OR) results like
| BP | OR |
| -- | --- |
| 60 | 1.2 |
| 59 | 1.4 |
| 58 | 0.6 |
| 57 | 1.5 |
Binning the BPs (e.g., 51-55, 56-60, ...) produces the same pattern - every several values there is one BP or one BP bin that is way out of whack from its neighbors. I don't want to combine the BPs into a summary statistic (e.g., time spent below 90 or area under the 90 curve). any help on how to analyze this would be greatly appreciated. Thank you.
| How do I use many highly correlated continuous variables in a logistic regression? | CC BY-SA 4.0 | null | 2023-05-11T20:11:10.940 | 2023-05-11T20:11:10.940 | null | null | 173292 | [
"logistic",
"correlation"
] |
615616 | 1 | null | null | 0 | 29 | I have just developed a formula (I can give the elements that compose it if need be) for quantifying an individual's tolerance to exposure. This gives me an index that can fluctuate from -1 to 1 (-1: the individual is not tolerant at all; 1: very tolerant; 0: no effect of the disturbance on tolerance). I applied this formula to all my individuals. What I would like to know is whether there is a threshold value allowing me to assert statistically that, beyond or below this threshold, the value of my index is different from 0.
For example, for individuals whose tolerance index = 0.09 or = -0.12, the values being close to 0, I would like to know if I can consider them as very slightly tolerant and slightly intolerant respectively, or if the values are too close to 0 and that it is therefore necessary to consider that there is no effect of the disturbance on the tolerance for these individuals.
To clarify: what I would like to know is what test to perform to determine whether values very close to 0, positive or negative, are significantly far enough from 0, or too close to it, in which case the result should be interpreted as if it were 0.
I don't really know which statistical test in R is the most relevant to use in this case, so any suggestions on how to apply one would be welcome.
| Index value considered significantly different from 0? | CC BY-SA 4.0 | null | 2023-05-11T20:11:57.880 | 2023-05-18T15:26:57.580 | 2023-05-18T15:26:57.580 | 11887 | 387778 | [
"r",
"hypothesis-testing"
] |
615617 | 1 | 615619 | null | 2 | 38 | Suppose that you have a rectangle in a 2D space.
I'm looking for the probability of being in this rectangle, knowing only the probability of being on one side or the other of each line extending each edge of the rectangle, shown by the colored probabilities in the following figure. [](https://i.stack.imgur.com/Unw7k.png)
Is it possible to get the probability of being in the rectangle given that piece of information, or is it not enough? The issue I suppose is that we don't know the probabilities of being into the corners (for instance $P(A\wedge B)$ for the top left corner) and I don't know if I can say that $P(\sim A\wedge\sim B\wedge\sim C\wedge\sim D)=(1-P(A))(1-P(B))(1-P(C))(1-P(D))$ since events might not be independent?
| Probability of being in a rectangle given probabilities of being on one side or the other of each line extending each edge of the rectangle | CC BY-SA 4.0 | null | 2023-05-11T20:20:58.907 | 2023-05-12T05:07:06.417 | 2023-05-11T21:24:22.227 | 919 | 387763 | [
"probability"
] |
615618 | 2 | null | 509447 | 1 | null | Both the Huber and the Pseudo-Huber allow control over the treatment of "inliers" vs. "outliers" via one required user-specified parameter. The interpretation of the Huber's parameter is more straightforward, as it is just the max distance between the predicted value and the output value that you would still consider as an inlier (and thus use quadratic loss instead of the L1 loss for outliers). So it is quite easy to assign a decent Huber parameter value with only a cursory look at your input data. For the Pseudo-Huber, the transition between inlier and outlier regions is much more gradual and therefore the interpretation of the Pseudo-Huber parameter is less clear.
The main advantage of the Pseudo-Huber is that it is twice differentiable (i.e. "smooth"), whereas the Huber is only differentiable once. While a twice differentiable loss function is not required for vanilla SGD, other solvers and SGD variants may require the second derivative's existence to converge properly.
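For concreteness, here is a minimal sketch of the two losses as usually defined (the parameter `delta` plays the role of the user-specified parameter discussed above, fixed at 1.0 as in the plots below):

```python
import math

def huber(r, delta=1.0):
    # quadratic for inliers (|r| <= delta), linear beyond that distance
    if abs(r) <= delta:
        return 0.5 * r * r
    return delta * (abs(r) - 0.5 * delta)

def pseudo_huber(r, delta=1.0):
    # smooth everywhere; behaves like 0.5*r^2 for small r and like delta*|r| for large r
    return delta**2 * (math.sqrt(1.0 + (r / delta)**2) - 1.0)
```

Both behave quadratically near zero; for large residuals the Huber grows exactly linearly, while the Pseudo-Huber approaches the same slope smoothly, which is what makes it twice differentiable.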
Below I have plotted an example of both functions and their derivatives, fixing the parameters at 1.0:
[](https://i.stack.imgur.com/DrraL.png)
[](https://i.stack.imgur.com/xpX2n.png)
Lastly, I found the image below online which illustrates changing the Pseudo-Huber parameter, as compared to L1 and L2 loss functions:
[](https://i.stack.imgur.com/FiWO2.png)
| null | CC BY-SA 4.0 | null | 2023-05-11T20:53:49.517 | 2023-05-11T22:24:14.420 | 2023-05-11T22:24:14.420 | 186477 | 186477 | null |
615619 | 2 | null | 615617 | 2 | null | A full description of the probabilities would consist of nine non-negative numbers (one for each region) summing to unity. (This assumes the probability of being on any line is zero, for otherwise the description needs even more parameters.) You suppose only four linear combinations of those nine numbers are known. Together with the sum-to-unity restriction, that leaves up to $9-4-1=4$ dimensions of solutions. Thus, unless the probabilities you specify are special (so that the solutions are limited to the boundary of this 4D space), the answer must be no.
With this in mind, let's consider two of the simplest possible situations.
- Suppose the chance of each of the nine regions is $1/9.$ Then each boundary line divides the plane into pieces of probabilities $1/3$ and $2/3.$ With $p=1/9,$ we may draw this schematically as $$\begin{array}{c|c|c}p&p&p\\\hline p&p&p\\\hline p&p&p\end{array}$$ Outside each line the probability is $p+p+p=3p$ in all cases.
In the next array the probabilities determined by the boundary lines are still the same: $$\begin{array}{c|c|c}2p&0&p\\\hline 0&p&0\\\hline p&0&2p\end{array}$$ because each side has $3p$ probability, just as before.
- Here, as another example, are infinitely many solutions where all probabilities are $1/2.$ The number $t$ is anywhere from $-1/4$ through $1/4.$ $$\begin{array}{c|c|c}\frac{1}{4}+t&0&\frac{1}{4}-t\\\hline 0&0&0\\\hline \frac{1}{4}-t&0&\frac{1}{4}+t\end{array}$$
Illustrating the possibility that in special cases the solution is unique, suppose the probability outside each line is $1/4.$ The only way this can happen is $$\begin{array}{c|c|c}0&\frac{1}{4}&0\\\hline\frac{1}{4}&0&\frac{1}{4}\\\hline0&\frac{1}{4}&0\end{array}$$ That's because the laws of probability imply the chance of the middle strip is $1-1/4-1/4=1/2$ and that can happen only if the probability of being in the interior square itself is $0,$ thereby placing $1/4$ into the left and right cells of the middle row, and this solution readily follows as the only one possible.
| null | CC BY-SA 4.0 | null | 2023-05-11T21:07:53.533 | 2023-05-11T21:22:19.740 | 2023-05-11T21:22:19.740 | 919 | 919 | null |
615620 | 1 | null | null | 1 | 23 | I have a network meta-analysis where all treatment are compared to placebo.
Obviously this is not properly a disconected network but consistency cannot be assessed.
Is it problematic?
| Network meta-analysis with placebo only comparison | CC BY-SA 4.0 | null | 2023-05-11T21:19:01.847 | 2023-05-11T21:19:01.847 | null | null | 354185 | [
"meta-analysis",
"consistency",
"network-meta-analysis"
] |
615622 | 1 | null | null | 4 | 45 | I'm not a statistician, but I do have a basic understanding of biostatistics in the context of medicine and clinical trials. However, recently I came across a trial that is using a statistical method that I am very unfamiliar with and was hoping someone could help.
Here is the study; the method appears in the "Statistical Analysis" section of Clark, D., et al., (2021). [Clinically relevant activity of the novel RASP inhibitor reproxalap in allergic conjunctivitis: the Phase 3 ALLEVIATE trial](https://www.ajo.com/article/S0002-9394(21)00222-1/fulltext). American Journal of Ophthalmology, 230, 60-67.
Where I'm getting very confused is with this sentence: "Reproxalap was compared to vehicle via a MIXED EFFECT MODEL for Repeated Measures with baseline area under the curve as a covariate and treatment group and minutes post-challenge as factors. A generalized estimating equation procedure, with baseline area under the curve as a covariate and treatment group and minutes post-challenge as factors, was used to compare responder proportions for the key secondary endpoint."
I'm trying to understand what the terms in this entire paragraph actually mean and why this would be a valid statistical method to use for this given trial.
If someone could explain, non-mathematically, the actual intuition and reasoning behind what a "mixed effect model for repeated measures" is (and what the significance of the covariate and factors are in this model), with some examples, or point to some sources that explain it well to someone with a basic understanding of stats, and perhaps then proceed to explain it mathematically, I'd be very grateful.
| Explaining a Mixed Effect Model to a Non Statistician/Mathematician | CC BY-SA 4.0 | null | 2023-05-11T22:05:08.877 | 2023-05-11T22:13:49.470 | 2023-05-11T22:13:49.470 | 44269 | 387773 | [
"hypothesis-testing",
"statistical-significance",
"mixed-model",
"biostatistics",
"clinical-trials"
] |
615624 | 1 | null | null | 0 | 18 | I need to run an analysis of efficiency of automated recommendations for call-centre agents on how to handle customer support tickets. It works like that:
- A ticket gets 0, 1 or more recommendations based on rules that use the ticket's or customer's attributes as input.
- Different recommendations may be correlated to various extents, both due to overlapping rules and because the customer's and ticket's attributes may not be independent.
- An agent may or may not "engage" with each recommendation (click on it, read it, etc.). This is not random in general: some agents may be more prone to ignoring recommendations.
- Sometimes a recommendation is "triggered" (i.e., its rule is satisfied) but not created, to make the ticket part of a "control group".
- Agents are assigned to tickets non-randomly: some of them work on specific types of tickets, so recommendation subsets and the agents that see them are not independent either.
We need to measure the effect of each recommendation type on a continuous outcome variable (some real positive number).
What kind of model/observational study design would be appropriate here? Ideally, I think, we'd compare the outcome only between groups of tickets where exactly the same subset of recommendations was triggered, but we don't have enough data points with the various combinations of recommendations "used"/"not used" (i.e., when a ticket is part of the control group, or when the agent did not "engage").
Is it possible to use different groups of tickets to estimate individual effects (e.g., use tickets with recommendations A, A+B, and A+C together to estimate the effect of A)?
Also, how to take into account that different groups of agents work on different types of tickets (but not necessarily exclusively)? Is it something that linear mixed models can help with?
| Effects of multiple non-independent treatments | CC BY-SA 4.0 | null | 2023-05-11T22:34:45.687 | 2023-05-11T23:26:36.693 | 2023-05-11T23:26:36.693 | 387774 | 387774 | [
"mixed-model",
"inference",
"modeling",
"linear",
"observational-study"
] |
615625 | 1 | null | null | 2 | 54 | >
Let $U$ be uniformly distributed on the interval $(0, 2)$ and let $V$ be an independent
random variable which has a discrete uniform distribution on $\{0, 1, \ldots, n\}$, i.e. $P\{V = i\} =\frac{1}{n + 1}$ for $i = 0, 1, \ldots, n.$ Find the cumulative distribution function of $X = U + V$.
I did the following; how do I calculate the minimum?
$P(X \leq x)=P(U+V\leq x)=\sum_{v=0}^nP(U\leq x-v\mid V=v)P(V=v)=\sum_{v=0}^nP(U\leq x-v)P(V=v)=\sum_{v=0}^n \big[\int_0^{\min(x-v,2)} \frac{1}{2}du\big]P(V=v)=\sum_{v=0}^n\frac{\min(x-v,2)}{2(n+1)}$
| Distribution of sum of a discrete uniform and a uniform on (0,2) | CC BY-SA 4.0 | null | 2023-05-11T22:45:55.163 | 2023-05-12T06:15:48.770 | null | null | 339153 | [
"self-study",
"cumulative-distribution-function"
] |
615626 | 2 | null | 510994 | 0 | null | In same cases when the correlation is high you can transform your x, usually by centering it, that is making:
```
x* = x - mean(x)
```
Then the model is formulated with the transformed x*:
```
y = α * x*^2 + β * x*
```
Of course, you need to take care when making inferences on the coefficients, but usually those do not change dramatically, but the collinearity would be very low compared to the original x. See more information [here](https://online.stat.psu.edu/stat501/lesson/12/12.6).
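As a quick numeric illustration of the collinearity reduction (my own sketch; a pure-Python Pearson correlation to avoid extra dependencies):

```python
def pearson(a, b):
    # sample Pearson correlation between two equal-length sequences
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

x = list(range(1, 11))
xc = [xi - sum(x) / len(x) for xi in x]          # centered: x* = x - mean(x)

r_raw      = pearson(x,  [xi ** 2 for xi in x])   # correlation of x with x^2: high
r_centered = pearson(xc, [xi ** 2 for xi in xc])  # correlation of x* with x*^2: ~0
```

For values symmetric about their mean the centered correlation is exactly zero; in general it is merely much smaller than for the raw variable.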
| null | CC BY-SA 4.0 | null | 2023-05-11T22:51:55.833 | 2023-05-11T22:51:55.833 | null | null | 77852 | null |
615627 | 1 | null | null | 0 | 36 | I had a few classes related to time-series econometrics, however most were theory heavy. I would like to practice this, so I will try to analyse few stock prices, however I am not fully sure about the steps, so would appreciate some advice. I will use intraday stock data (very high frequency for a 6-7 year time-period, so several 100 thousand rows of data)
The main goal would be forecasting and predicting based on other time-series.
1, First is the Box-Jenkins methodology: we make sure about stationarity, white noise, etc. This part is clear.
2, Now I'm not sure about this one. I will have to end up using the ARCH-GARCH model family, but do I have to fit the ARIMA model first? In the R code I think the ARMA parameters go into the GARCH specification, but I did not see that being the case in theory. So is the AR(I)MA step required, or can I safely skip this part and jump to GARCH?
3, Is there any formula to determine which GARCH is the best, or would I have to trial and error a few that I suspect to be fitting? (most likely Threshold (-> GJR) for financial data).
4, Something was mentioned that there is a difference between aggregating the intraday data and using it by itself. This relates to the HAR/realized volatility models, but I'm not sure what the exact difference between the two usages is. When should I use each of these two methods? Similarly, if I do HAR, do I need to do the ARIMA/GARCH process before?
5, What's the main difference between multivariate GARCH and VAR/VECM models? From what I understood, multivariate GARCH models are computationally intensive and harder to interpret compared to VAR/VECM. Any reason to use them?
EDIT: Also I would be open to any books/resources that explain these models in detail, with explanations on what each parameter means in the equations etc.
| Full time-series analysis steps | CC BY-SA 4.0 | null | 2023-05-11T23:32:45.200 | 2023-05-12T02:42:01.403 | 2023-05-12T02:42:01.403 | 320240 | 320240 | [
"time-series",
"garch",
"frequency"
] |
615628 | 2 | null | 615600 | 8 | null | If you know the equation
\begin{align}
E[Y|X \leq 1/2] = \frac{E[YI_{[X \leq 1/2]}]}{P[X \leq 1/2]}, \tag{1}
\end{align}
you can skip the step of finding the conditional density $f_{Y|X \leq 1/2}$ by working with the joint density $f(x, y) = 3y$ over the shaded region below (call it $S$ hereafter) directly:
[](https://i.stack.imgur.com/4cytL.png)
Specifically, just evaluate the following two integrals:
- Integrating $f(x, y) = 3y$ over $S$ to get $P[X \leq 1/2]$.
- Integrating $yf(x, y) = 3y^2$ over $S$ to get $E[YI_{[X \leq 1/2]}]$.
By the linearity of integrals, for each task it is equivalent to integrate the integrand over regions $S_1$ and $S_2$ and then add the results, where $S_1$ and $S_2$ are the shaded regions below, respectively:
 
After setting up this roadmap, now let's deal with the calculations:
\begin{align}
& P[X \leq 1/2] = \iint_{S_1}3ydxdy + \iint_{S_2}3ydxdy \\
=& \int_{-1}^0\left[\int_0^{1 + x}3ydy\right]dx +
\int_0^{1/2}\left[\int_0^{1 - x}3ydy\right]dx \\
=& \frac{3}{2}\int_{-1}^0 (1 + x)^2dx +
\frac{3}{2}\int_0^{1/2}(1 - x)^2dx \\
=& \frac{1}{2} + \frac{7}{16} \\
=& \frac{15}{16}. \tag{2}
\end{align}
\begin{align}
& E[YI_{[X \leq 1/2]}] = \iint_{S_1}3y^2dxdy + \iint_{S_2}3y^2dxdy \\
=& \int_{-1}^0\left[\int_0^{1 + x}3y^2dy\right]dx +
\int_0^{1/2}\left[\int_0^{1 - x}3y^2dy\right]dx \\
=& \int_{-1}^0 (1 + x)^3dx +
\int_0^{1/2}(1 - x)^3dx \\
=& \frac{1}{4} + \frac{15}{64} \\
=& \frac{31}{64}. \tag{3}
\end{align}
Substituting $(2)$ and $(3)$ into $(1)$ then gives
\begin{align}
E[Y|X \leq 1/2] = \frac{31/64}{15/16} = \frac{31}{60}.
\end{align}
| null | CC BY-SA 4.0 | null | 2023-05-11T23:33:05.120 | 2023-05-12T16:11:08.750 | 2023-05-12T16:11:08.750 | 20519 | 20519 | null |
615630 | 1 | null | null | 0 | 16 | I have some code that I'm trying to convert from working with floats to numpy arrays for performance reasons. I have it down to one last function, which is below and requires the `mvn.mvndst` function from `scipy.stats`. After some digging, it looks like the function is written in Fortran ([https://github.com/scipy/scipy/blob/main/scipy/stats/mvndst.f](https://github.com/scipy/scipy/blob/main/scipy/stats/mvndst.f)), but I'm wondering if there's another option to get this to work with numpy.
Here's the original code:
```
import numpy as np
from scipy.stats import mvn
def cbnd(a, b, rho):
lower = np.array([0, 0])
upper = np.array([a, b])
infin = np.array([0, 0])
correl = rho
error, value, inform = mvn.mvndst(lower, upper, infin, correl)
return value
# this works
a = 1
b = 1
rho = 1
cbnd(a, b, rho)
# this does not work
a = np.arange(0,1,step=0.1)
b = np.arange(0,1,step=0.1)
rho = np.arange(0,1,step=0.1)
cbnd(a, b, rho)
```
The last part gives this error:
```
ValueError: failed in converting 2nd argument `upper' of _mvn.mvndst to C/Fortran array
```
A few things I tried were:
- Matching the shape of lower and infin to the shape of upper with a and b as arrays.
- Transposing the above in various ways
- List comprehension (still fairly slow)
I did see this: [https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.multivariate_normal.html](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.multivariate_normal.html) but I'm not really sure how to apply that compared to `mvn.mvndst`.
Any help is appreciated.
| Scipy.stats.mvn.mvndst function for 3d inputs | CC BY-SA 4.0 | null | 2023-05-12T00:07:08.827 | 2023-05-12T00:12:53.000 | 2023-05-12T00:12:53.000 | 263656 | 263656 | [
"python",
"multivariate-normal-distribution",
"scipy"
] |
615631 | 1 | null | null | 0 | 47 | I've come across several approximations for Mills ratio, but I haven't found any good ones for the Inverse Mills ratio. Is there any known closed-form approximation for the Inverse Mills ratio ([link](https://rpubs.com/FJRubio/IMR#:%7E:text=The%20Inverse%20Mills%20Ratio%20(IMR,x)%2Cx%E2%88%88R.)) specifically for a normal random variable, denoted as $\frac{\phi(x)}{\Phi(x)}$ for "$x>0$"? (ratio of normal pdf to normal cdf)
| Approximation on Inverse Mills ratio for the normal R.V | CC BY-SA 4.0 | null | 2023-05-12T00:08:18.417 | 2023-05-12T17:29:26.640 | 2023-05-12T17:29:26.640 | 384284 | 384284 | [
"normal-distribution",
"density-function",
"cumulative-distribution-function",
"approximation"
] |
615632 | 1 | null | null | 0 | 48 | We are trying to calculate a CI around an inverse proportion.
The sample is women who have had one caesarean section in the past and are now having a normal (vaginal) birth of another baby
The binomial outcome is a ruptured uterus during the second birth (yes/no)
The observed proportion is 0.5% (78 ruptures of sample size 15587)
Using SE = sqrt(p(1-p)/n) we get 95%CI = p +/- 1.96*SE = 0.39% to 0.61%
We wish to express the proportion as its inverse (as this may be easier for patients to understand) and the CI around that and have used:
point estimate = 1 in 200 (1/0.5%)
95% CI = 1 in 164 to 1 in 257 (inverting the limits of the 95%CI for proportions)
We thought this would be valid as the inverse is a monotonic function of the proportion
Then we started doubting this approach illustrated by the following example:
Imagine a random sample of people's heights from a large population which showed a mean height of 1.70m and 95%CI 1.60m to 1.80m
Then imagine we wish to report on a transformed variable height-squared (silly but couldn't think of a better example).
Using the above approach we get mean height-squared = 1.7^2 = 2.89 with 95%CI 2.56 to 3.24 and thus the CI is not symmetrical about the point estimate.
We could alternatively 'measure' height-squareds' directly and we would get the same mean of 2.89 but the 95%CI would be symmetrical about this point estimate (+/- 1.96xSE)
Given 2 different 95%CIs, which of these 2 approaches would be incorrect? (or with a big enough sample would they give similar results?)
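For reference, the arithmetic described above can be reproduced in a few lines (this simply re-traces the question's Wald-interval calculation and the endpoint inversion; it is not an endorsement of either approach):

```python
import math

ruptures, n = 78, 15587
p = ruptures / n                       # observed proportion, about 0.5%
se = math.sqrt(p * (1 - p) / n)        # Wald standard error
lo, hi = p - 1.96 * se, p + 1.96 * se  # roughly 0.39% to 0.61%

# inverting the (monotone) transformation t -> 1/t swaps the endpoints:
one_in_point = 1 / p                   # roughly 1 in 200
one_in_hi, one_in_lo = 1 / lo, 1 / hi  # roughly 1 in 257 and 1 in 164
```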
| Confidence intervals for a transformed | CC BY-SA 4.0 | null | 2023-05-12T00:08:47.363 | 2023-05-14T01:48:42.837 | 2023-05-12T05:37:01.540 | 11887 | 209996 | [
"confidence-interval",
"data-transformation"
] |
615633 | 2 | null | 402523 | 2 | null | Before testing any statistical hypotheses, we need to settle down the probabilistic model for data first. In this problem, it is reasonable to assume $(X_1, X_2, X_3)$ is a random sample from the distribution (i.e., [the geometric distribution](https://en.wikipedia.org/wiki/Geometric_distribution)):
\begin{align}
f_\theta(x) = P_\theta(X = x) = (1 - \theta)^x\theta, \quad x = 0, 1, 2, \ldots, \tag{1}
\end{align}
where $\theta \in (0, 1)$ is the probability of having a girl. Having set the stage as $(1)$, the hypotheses testing problem can be formulated as:
\begin{align}
H: \theta = \frac{1}{2} \text{ v.s. } K: \theta < \frac{1}{2}. \tag{2}
\end{align}
However, since we have three independent observations (as opposed to a single observation), we need to consider the joint density of $(X_1, X_2, X_3)$, which is given by
\begin{align}
p_\theta(x_1, x_2, x_3) = (1 - \theta)^{x_1 + x_2 + x_3}\theta^3, \quad x_i \in \{0, 1, 2, \ldots\}, i = 1, 2, 3.
\end{align}
By rewriting $p_\theta(x_1, x_2, x_3)$ as (where $T(\mathbf{x}) = x_1 + x_2 + x_3$)
\begin{align}
p_\theta(x_1, x_2, x_3) = \exp(\log(\theta^3(1 - \theta)^{x_1 + x_2 + x_3})) = \theta^3\exp(T(\mathbf{x})\log(1 - \theta)),
\end{align} it is clear that $p_\theta$ belongs to the one-parameter exponential family, whence by Corollary $3.4.1$ in Testing Statistical Hypotheses by Lehmann and Romano the UMP test (of size $\alpha$) $\phi$ for testing $(2)$ exists, which is
\begin{align}
\phi(\mathbf{x}) = \begin{cases}
1 & T(\mathbf{x}) > C, \\
\gamma & T(\mathbf{x}) = C, \\
0 & T(\mathbf{x}) < C, \tag{3}
\end{cases}
\end{align}
where $\gamma$ and $C$ are determined by $E_{1/2}(\phi(T(\mathbf{X})) = \alpha$. [Since](https://en.wikipedia.org/wiki/Negative_binomial_distribution#Distribution_of_a_sum_of_geometrically_distributed_random_variables) the sum of $r$ i.i.d. geometric $G(p)$ random variables has negative binomial distribution $NB(r, p)$, under $H$, it follows that $T(\mathbf{X}) \sim NB(3, 1/2)$, whence $E_{1/2}(\phi(T(\mathbf{X})) = \alpha$ becomes
\begin{align}
\sum_{k > C} \binom{k + 2}{k}(0.5)^{k + 3} + \gamma\binom{C + 2}{C}(0.5)^{C + 3} = \alpha. \tag{4}
\end{align}
For $\alpha = 0.05$, as `pnbinom(8, 3, 0.5) = 0.9672852` and `pnbinom(7, 3, 0.5) = 0.9453125`, a good choice of $C$ to solve $(4)$ is $C = 8$, for which
$\gamma = \frac{0.05 - (1 - 0.9672852)}{0.02197266} = 0.7866667$ (in fact, this $(C, \gamma)$ is the unique solution to $(4)$ because $C$ must be a positive integer and $\gamma$ must be in $(0, 1)$). Hence $(3)$ can be written explicitly as
\begin{align}
\phi(\mathbf{x}) = \begin{cases}
1 & T(\mathbf{x}) > 8, \\
0.7866667 & T(\mathbf{x}) = 8, \\
0 & T(\mathbf{x}) < 8.
\end{cases}
\end{align}
For the observed data $\mathbf{x} = (0, 3, 2)$, $T(\mathbf{x}) = 0 + 3 + 2 = 5 < 8$. Hence the null hypothesis $H$ is not rejected at $\alpha = 0.05$.
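The constants $C = 8$ and $\gamma \approx 0.7867$ can be reproduced with a short pure-Python computation of the negative binomial CDF (my own check; `nb_cdf(8)` corresponds to R's `pnbinom(8, 3, 0.5)`):

```python
from math import comb

def nb_cdf(c, r=3, p=0.5):
    # P(T <= c) for T ~ NB(r, p) counting failures:
    # sum over k of C(k + r - 1, k) * p^r * (1 - p)^k
    return sum(comb(k + r - 1, k) * p**r * (1 - p)**k for k in range(c + 1))

alpha = 0.05
C = 8
tail = 1 - nb_cdf(C)                                   # P(T > 8) under H
gamma = (alpha - tail) / (nb_cdf(C) - nb_cdf(C - 1))   # randomization probability
```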
| null | CC BY-SA 4.0 | null | 2023-05-12T00:54:49.820 | 2023-05-14T17:25:54.797 | 2023-05-14T17:25:54.797 | 20519 | 20519 | null |
615634 | 2 | null | 134748 | 2 | null | The answer provided by Piotr is mostly correct, but there is a tiny mistake.
The maximum Tsallis entropy value is actually given by the expression: $(1-N^{1-\alpha})/(\alpha-1)$.
The denominator order is reversed.
This is because this expression is obtained when dealing with a uniform probability distribution $ P = \{\frac{1}{N}, \frac{1}{N}, ..., \frac{1}{N}\}$
Considering the Tsallis entropy defined by the expression:
\begin{align}
T &= \frac{1}{\alpha-1} \sum_{j=1}^{N}(P_{j} - (P_{j})^\alpha)\\
\end{align}
In this particular scenario, all instances of $ P_{j} $ will be equal to $\frac{1}{N}$. This means the sum can be reduced to $N\times(\frac{1}{N}-(\frac{1}{N})^\alpha)$. Therefore, we can rewrite this in the following way:
\begin{align}
T_{\max} &= \frac{1}{\alpha-1}\times\left \{N\times\left[\frac{1}{N}-\left(\frac{1}{N}\right)^\alpha\right]\right\}\\
T_{\max} &= \frac{1}{\alpha-1}\times (1-N^{1-\alpha})\\
T_{\max} &= \frac{1-N^{1-\alpha}}{\alpha-1}\\
\end{align}
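A quick numerical check of this closed form (the values of $N$ and $\alpha$ below are chosen arbitrarily):

```python
def tsallis(p, alpha):
    # Tsallis entropy: (1 / (alpha - 1)) * sum_j (P_j - P_j^alpha)
    return sum(pj - pj**alpha for pj in p) / (alpha - 1)

N, alpha = 4, 2.0
uniform = [1 / N] * N
t_direct = tsallis(uniform, alpha)                 # direct evaluation at the uniform distribution
t_closed = (1 - N**(1 - alpha)) / (alpha - 1)      # the corrected maximum-entropy expression
```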
| null | CC BY-SA 4.0 | null | 2023-05-12T01:02:37.707 | 2023-05-23T00:30:18.997 | 2023-05-23T00:30:18.997 | 387784 | 387784 | null |
615635 | 2 | null | 615632 | 3 | null | You're correct that the CI you get by transforming the endpoints is not symmetric about the point estimate, but there's no reason to expect it to be; CIs are in general not symmetric (at least not in the sense that the endpoints of the interval are equidistant from the usual point estimate).
You're used to using a normal approximation to produce a CI for a proportion, which does produce a symmetric interval, but many methods for producing binomial proportion confidence intervals produce endpoints that are not equidistant from the ordinary sample proportion.
[https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval](https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval)
Which is to say, you shouldn't necessarily think an interval for the proportion ought to be symmetric to start with.
Indeed insisting on that kind of symmetry in confidence intervals will in some situations - including with binomial proportions - do things you won't like, such as having your interval include impossible values for the parameter.
Even when you do have that kind of symmetry in an interval for a parameter, you shouldn't expect it when you consider an interval for a monotonic transformation of that parameter.
So for example, when you considered an interval for $\mu^2$ in the height example, you got an interval that was not symmetric about the estimate of $\mu^2$ you used. There's no reason to expect it to be symmetric. That example offers no basis to reject the asymmetric interval for the inverse of a proportion.
>
We could alternatively 'measure' height-squareds' directly and we would get the same mean of 2.89
Well, no, we would not. The mean of the squares is larger than the square of the mean (unless the variance is 0, which won't be the case for real height data).
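This is easy to verify on any non-degenerate sample; the excess of the mean of squares over the square of the mean is exactly the (population) variance. A toy sketch with made-up numbers:

```python
heights = [1.55, 1.62, 1.70, 1.78, 1.85]  # hypothetical sample, in metres
n = len(heights)
mean_h = sum(heights) / n
mean_sq = sum(h * h for h in heights) / n
# mean of squares exceeds square of mean by exactly the population variance
variance = sum((h - mean_h) ** 2 for h in heights) / n
```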
>
but the 95%CI would be symmetrical about this point estimate (+/- 1.96xSE)
Why does it necessarily make sense that the interval for squared-height should be symmetric? It will be if you assume it's normal, but it cannot be the case that both height and height squared are actually normal. In fact neither can be normal since both are necessarily non-negative, but the distribution of height is less skewed than that of height-squared, and a normal-based interval for the population mean height will work reasonably well at somewhat smaller sample sizes than it would for the mean of its square.
A suitable small-sample interval for height-squared would perhaps be better based on a non-normal model for the population distribution. It might or might not be the case that some interval for the mean was symmetric, depending on how you generated the interval. I'd probably be inclined to use something like a shifted Weibull model (or possibly a shifted gamma) as a decent approximation to the distribution of squared heights, though other approaches could be used.
If the sample was not small, the interval for the mean of a moderately skew variable like the square of height based on a normal model would work reasonably well, nonetheless.
[1]: Incidentally, there's some reason to consider whether $\hat{\mu}^2$ (which you seem to be thinking of using to estimate $\mu^2$) is necessarily the point estimate you want to use to estimate $\mu^2$. That is, should $\widehat{\mu^2}=(\hat{\mu})^2$, necessarily? A good answer to that will depend on the properties you want for your estimator. If you wanted unbiasedness, you might not choose that one $-$ not that I am suggesting that you need to use unbiased estimators at all. On the other hand, if you like MLEs, then such direct transformation will be appropriate, as long as you started with an MLE, but then you should definitely not expect intervals to be symmetric about the point estimate.
| null | CC BY-SA 4.0 | null | 2023-05-12T01:14:55.980 | 2023-05-14T01:48:42.837 | 2023-05-14T01:48:42.837 | 805 | 805 | null |
615636 | 1 | 615710 | null | 1 | 31 | I am reading the second edition of Crawley's Statistics: An Introduction Using R and in the Pseudoreplication section of chapter 1 (pg. 15), he provides the following experiment structure:
>
"There are 20 plots, 10 sprayed and 10 not sprayed. [...] In fact
there are 10 replicates in this experiment. There are 10 sprayed plots
and 10 unsprayed plots, and each plot will yield a datum to the
response variable (the proportion of leaf area consumed by insects, for
example). Thus, there are 9 degrees of freedom within each treatment,
and 2x9=18 degrees of freedom for error in the experiment as a whole.
Isn't that actually 20 replicates? 10 for each field? Wouldn't there be 20 rows in this dataframe? Aren't the rows the replicates? I believe I understand the degrees of freedom concept.
I have no problem with math or programming and even have a little probability theory, but I am a rank neophyte in statistics, and I apologize if this is a silly question.
| How do I count replicates in chapter 1 of Crawley's book? | CC BY-SA 4.0 | null | 2023-05-12T01:38:05.183 | 2023-05-12T18:18:44.580 | null | null | 387785 | [
"pseudorepliction"
] |
615637 | 1 | null | null | 0 | 12 | I'm running this on databricks, using python, spark, pipeline, mlflow, etc. Can use whichever library I need to though
I have a simple Linear Regression script. I separately have a Random Forest script.
The data set has 10 features and 1 label to predict. All columns are normalized doubles.
```
assembler = VectorAssembler(inputCols=in_cols, outputCol='features')
rf = RandomForestRegressor(labelCol='label')
#lr = LinearRegression(labelCol='label', featuresCol='features')
evaluator = RegressionEvaluator(labelCol='label', predictionCol='prediction', metricName='r2')
stages = [assembler, rf]
pipeline = Pipeline(stages=stages)
cv = CrossValidator(estimator=pipeline, evaluator=evaluator, numFolds=3, seed=42)
cv.fit(train).transform(test)
```
How can I have the random forest output be a vector and feed its output into the linear regression and have the model fit. Similar to how a neural network layer's output would be the input to the next layer.
I'm aware this isn't really the ideal model, but want to know how it can be done
| Spark Pipeline - Chain Regressors Together | CC BY-SA 4.0 | null | 2023-05-12T02:34:04.027 | 2023-05-12T02:34:04.027 | null | null | 69383 | [
"regression",
"neural-networks",
"python",
"scikit-learn",
"spark-mllib"
] |
615638 | 2 | null | 615625 | 2 | null | The hint in comments notes that you might find it easier to break things down by looking at the distribution over each unit interval. To do this formally, we can use the alternative decomposition:
$$X = U_* + R + V
\quad \quad \quad
U_* \sim \text{U}(0,1)
\quad \quad \quad
R \sim \text{U} \{ 0,1 \}
\quad \quad \quad
V \sim \text{U} \{ 0,n \}.$$
(This decomposition breaks things up so that $U = R + U_*$ so that we are dealing with three random variables all with support within the unit interval.) To facilitate analysis, let us define the following notation (which is an extension of the idea of [Macaulay brackets](https://en.wikipedia.org/wiki/Macaulay_brackets)):
$$\{ u|a \}
\equiv \max(0, \min(u, a))
= \begin{cases}
0 & & & \text{if } u < 0, \\[6pt]
u & & & \text{if } 0 \leqslant u \leqslant a, \\[6pt]
a & & & \text{if } u > a. \\[6pt]
\end{cases}$$
We will make use of the CDF of the uniform random variable $U_*$ over the unit interval, which can be written as $F(u) \equiv \mathbb{P}(U_* \leqslant u) = \{ u|1 \}$. Using the [law of total probability](https://en.wikipedia.org/wiki/Law_of_total_probability), we have:
$$\begin{align}
F_X(x)
&\equiv \mathbb{P}(X \leqslant x) \\[18pt]
&= \mathbb{P}(R+U_*+V \leqslant x) \\[12pt]
&= \sum_{r=0}^1 \sum_{v=0}^n \mathbb{P}(R+U_*+V \leqslant x | R=r, V=v) \cdot \mathbb{P}(R=r) \cdot \mathbb{P}(V=v) \\[6pt]
&= \frac{1}{2(n+1)} \sum_{r=0}^1 \sum_{v=0}^n \mathbb{P}(R+U_*+V \leqslant x | R=r, V=v) \\[6pt]
&= \frac{1}{2(n+1)} \bigg[ \quad \sum_{v=0}^n \mathbb{P}(R+U_*+V \leqslant x | R=0, V=v) \\[6pt]
&\quad \quad \quad \quad \quad \quad + \sum_{v=0}^n \mathbb{P}(R+U_*+V \leqslant x | R=1, V=v) \bigg] \\[6pt]
&= \frac{1}{2(n+1)} \bigg[ \sum_{v=0}^n \mathbb{P}(U_* \leqslant x-v) + \sum_{v=0}^n \mathbb{P}(U_* \leqslant x-v-1) \bigg] \\[6pt]
&= \frac{1}{2(n+1)} \bigg[ \sum_{v=0}^n \mathbb{P}(U_* \leqslant x-v) + \sum_{v=1}^{n+1} \mathbb{P}(U_* \leqslant x-v) \bigg] \\[6pt]
&= \frac{1}{2(n+1)} \bigg[ \sum_{v=0}^{n+1} \mathbb{P}(U_* \leqslant x-v) + \sum_{v=1}^n \mathbb{P}(U_* \leqslant x-v) \bigg] \\[6pt]
&= \frac{1}{2(n+1)} \bigg[ \sum_{v=0}^{n+1} \{ x-v|1 \} + \sum_{v=1}^n \{ x-v|1 \} \bigg] \\[6pt]
&= \frac{1}{2(n+1)} \bigg[ \sum_{v=0}^{n+1} \{ x-v|1 \} + \sum_{v=0}^{n-1} \{ (x-1)-v|1 \} \bigg] \\[6pt]
&= \frac{1}{2(n+1)} \bigg[ \{ x|n+2 \} + \{ x-1|n \} \bigg]. \\[6pt]
\end{align}$$
Thus we can write the CDF of $X$ in a simple succinct form as:
$$\begin{align}
F_X(x)
&= \frac{\{ x|n+2 \} + \{ x-1|n \}}{2n+2}. \\[6pt]
\end{align}$$
(See if you can understand the penultimate step in this working, where we sum a set of terms using extended Macaulay brackets into a single term using extended Macaulay brackets.)
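The closed form can be checked against a direct conditioning-on-$V$ computation (my own sketch):

```python
def clip(u, a):
    # the extended Macaulay bracket {u | a} = max(0, min(u, a))
    return max(0.0, min(u, a))

def cdf_closed(x, n):
    # F_X(x) = ({x | n+2} + {x-1 | n}) / (2n + 2)
    return (clip(x, n + 2) + clip(x - 1, n)) / (2 * n + 2)

def cdf_direct(x, n):
    # P(U + V <= x) = sum_v P(V = v) * P(U <= x - v), with U ~ Uniform(0, 2)
    return sum(clip((x - v) / 2, 1) for v in range(n + 1)) / (n + 1)
```

The two functions agree to floating-point precision for all $x$ and small $n$, which is a reassuring check on the Macaulay-bracket bookkeeping.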
| null | CC BY-SA 4.0 | null | 2023-05-12T03:02:37.447 | 2023-05-12T06:15:48.770 | 2023-05-12T06:15:48.770 | 173082 | 173082 | null |
615640 | 2 | null | 615617 | -1 | null | In short, no. You have a 2D plane so there are two degrees of freedom for position in that plane. You did not specify anything about the joint distribution of these two coordinates. Thus you can't say anything definitive about the probabilities in the corners or the center. If you work in typical x/y coordinates and assume they are independent, then it becomes pretty straight forward to calculate the other probabilities.
| null | CC BY-SA 4.0 | null | 2023-05-12T05:07:06.417 | 2023-05-12T05:07:06.417 | null | null | 375464 | null |
615641 | 1 | null | null | 0 | 21 | I want to classify diabetic retinopathy grades (normal, mild, moderate, severe, PDR) using SVM.
But the problem is I don't know which type of SVM I should use, because I extract three lesion features (MA, HM, and EX), and sometimes one image can contain all of these lesions but belong to just one class (e.g., severe).
Can someone enlighten me on whether I should use a multiclass or multilabel SVM in my case? Thank you!
| Multilabel Classification Task using SVM | CC BY-SA 4.0 | null | 2023-05-12T05:37:25.127 | 2023-05-12T06:40:52.353 | null | null | 387494 | [
"svm",
"multi-class",
"multilabel"
] |