Id stringlengths 1 6 | PostTypeId stringclasses 7 values | AcceptedAnswerId stringlengths 1 6 ⌀ | ParentId stringlengths 1 6 ⌀ | Score stringlengths 1 4 | ViewCount stringlengths 1 7 ⌀ | Body stringlengths 0 38.7k | Title stringlengths 15 150 ⌀ | ContentLicense stringclasses 3 values | FavoriteCount stringclasses 3 values | CreationDate stringlengths 23 23 | LastActivityDate stringlengths 23 23 | LastEditDate stringlengths 23 23 ⌀ | LastEditorUserId stringlengths 1 6 ⌀ | OwnerUserId stringlengths 1 6 ⌀ | Tags list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
617076 | 2 | null | 280031 | 1 | null | In the example below, I am able to score chance-level accuracy at a particular threshold (needed to convert your continuous predictions to discrete categories) despite what looks like a low square loss.
```
# Taken from: https://stats.stackexchange.com/a/46525/247274
library(pROC)
library(MLmetrics)
set.seed(2023)
N <- 1000 # sample size
x1 <- rnorm(N) # some continuous variables
x2 <- rnorm(N)
z <- 1 + 2*x1 + 3*x2 # linear combination with a bias
pr <- 1/(1 + exp(-z)) # pass through an inv-logit function
y <- rbinom(N, 1, pr) # Bernoulli outcome variable
L <- glm(y ~ x1 + x2, family = "binomial")
preds <- 1/(1 + exp(-predict(L)))
r <- pROC::roc(y, preds)
thresholds <- r$thresholds
accuracies <- rep(NA, length(thresholds))
for (i in 1:length(thresholds)){
  idx1 <- which(preds > r$thresholds[i])
  yhat <- rep(0, N)
  yhat[idx1] <- 1
  accuracies[i] <- MLmetrics::Accuracy(yhat, y)
}
plot(thresholds, accuracies)
abline(h = mean(y))
mean(y) # 0.593, so the chance-level accuracy is 59.3%
accuracies[811] # 0.593, which is chance-level accuracy
thresholds[811] # 0.9868599 is the threshold giving chance-level accuracy
mean((y - preds)^2) # 0.08799204 looks pretty low to me
r$auc # Area under the curve: 0.9468
```
Overall, this logistic regression model, along with a decision rule of using $0.9868599$ as the threshold for categorizing outcomes, scores chance-level accuracy. However, the square loss of the logistic regression model seems rather low at $0.08799204$, and the ROCAUC is quite high at $0.9468$.
Consequently, I am inclined to believe your results to be possible.
| null | CC BY-SA 4.0 | null | 2023-05-27T17:50:03.497 | 2023-05-27T17:50:03.497 | null | null | 247274 | null |
617077 | 1 | null | null | 1 | 25 | I'm doing binary classification in Python with an SVM classifier, and I implemented stratified repeated cross validation to have more robust results.
I would like to calculate confidence intervals for the mean accuracy, but under a normal-distribution assumption (if my code is correct) I obtain unrealistically narrow CIs.
I know that another approach is to use bootstrapping, but I have some confusion and I am unsure if I can use bootstrapping together with cross-validation.
Below is a toy example with the breast_cancer dataset, where the accuracy is relatively stable from one run to another. However, I have data with a smaller sample size, where the accuracy can vary a lot.
```
from sklearn.datasets import load_breast_cancer
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
import scipy.stats as stats
from sklearn.metrics import accuracy_score
df = load_breast_cancer(as_frame=True)
X = df['data']
y = df['target']
n_runs = 5
accuracies = []
for i in range(n_runs):
    # Split the data into training and testing sets using stratified KFold
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=None)
    for train_index, test_index in skf.split(X, y):
        X_train, X_test = X.iloc[train_index], X.iloc[test_index]
        y_train, y_test = y[train_index], y[test_index]
        svclassifier = SVC(kernel='linear', random_state=None)
        svclassifier.fit(X_train, y_train)
        y_pred = svclassifier.predict(X_test)
        accuracy = accuracy_score(y_test, y_pred)
        accuracies.append(accuracy)
mean_accuracy = np.mean(accuracies)
std_accuracy = np.std(accuracies)
n_samples = len(accuracies)
z_score = stats.norm.ppf(0.975)
margin_of_error = z_score * np.sqrt((mean_accuracy*(1-mean_accuracy))/n_samples)
accuracy_CI = (mean_accuracy - margin_of_error, mean_accuracy + margin_of_error)
```
EDIT:
I found a way to compute the confidence interval, calculating the difference between the mean_accuracy and each value of accuracy obtained from each run of the classifier. That should be a bootstrap approach.
Is that correct?
```
result_tot = []
for i in accuracies:
    result = i - mean_accuracy
    result_tot.append(result)
pct_05 = np.percentile(result_tot, 2.5)
pct_95 = np.percentile(result_tot, 97.5)
print("0.05 percentile:", mean_accuracy+pct_05)
print("0.95 percentile:", mean_accuracy+pct_95)
```
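For comparison, my understanding is that a conventional nonparametric percentile bootstrap would resample the per-fold accuracies with replacement rather than just recentering them; here is a sketch (the `accuracies` values are placeholders standing in for the scores from the loop above):

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder per-fold accuracies; in practice these come from the CV loop
accuracies = np.array([0.93, 0.95, 0.91, 0.96, 0.94])

# Resample the fold accuracies with replacement; record each bootstrap mean
boot_means = [rng.choice(accuracies, size=len(accuracies), replace=True).mean()
              for _ in range(10_000)]

ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
print(ci_low, ci_high)
```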
| Confidence intervals for binary classification | CC BY-SA 4.0 | null | 2023-05-27T17:58:15.740 | 2023-05-28T18:26:41.970 | 2023-05-28T18:26:41.970 | 375245 | 375245 | [
"confidence-interval",
"bootstrap",
"accuracy"
] |
617078 | 2 | null | 547602 | 0 | null | In terms of optimization theory, these are equivalent: dividing the loss by a positive constant is a monotonic transformation and does not change the $\arg\min$ that you aim to find (that $\arg\min$ gives the network weight and bias values).
However, to do any empirical work, we need to use computers to approximate these optimizations, and the sum and mean do not behave quite the same way. For example, [this answer](https://stats.stackexchange.com/a/539140/247274) discusses why the mean might be preferred to the sum for numerical reasons when it comes to doing the calculations on a computer like you have to do. Thus, despite the theoretical guarantees of the sum and mean having the same $\arg\min$, the numerical considerations matter for doing applied work.
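As a quick numerical illustration of the identical $\arg\min$ (a Python sketch with made-up data, using a constant prediction in place of a network's output):

```python
import numpy as np

y = np.array([1.0, 2.0, 4.0, 7.0])        # toy targets
w_grid = np.linspace(0.0, 10.0, 10_001)   # candidate constant predictions

sum_loss  = np.array([np.sum((y - w) ** 2)  for w in w_grid])
mean_loss = np.array([np.mean((y - w) ** 2) for w in w_grid])

# The two losses differ only by the constant factor len(y), so the
# minimising parameter is identical: the sample mean of y.
w_sum  = w_grid[np.argmin(sum_loss)]
w_mean = w_grid[np.argmin(mean_loss)]
print(w_sum, w_mean)
```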
| null | CC BY-SA 4.0 | null | 2023-05-27T17:59:01.797 | 2023-05-27T17:59:01.797 | null | null | 247274 | null |
617079 | 2 | null | 616569 | 0 | null | Scaling is typically done on a variable-by-variable basis:
- Consider the first variable.
- Calculate its mean and standard deviation
- Subtract the mean from every value, and then divide every value by the standard deviation
- Move to the next variable and repeat the process, then the next, then the next...
(You could also proceed by subtracting the minimum value and then dividing by the range. This puts the values in the interval $[0,1]$, while the approach given above gives variables with means of zero and variances of one. There are pros and cons to each approach.)
Consequently, it does not matter that your variables come from different sources and have different ranges of values. The whole point is to do a kind of unit conversion for each variable to give an equivalence across the variables. In the approach given above, a scaled value of $1$ always means the value is one standard deviation above the mean, no matter the variable.
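A minimal sketch of the variable-by-variable procedure above (in Python with hypothetical data; your question was not tied to a particular language):

```python
import numpy as np

# Two variables on very different scales (hypothetical data)
X = np.array([[170.0, 0.01],
              [180.0, 0.03],
              [160.0, 0.02],
              [175.0, 0.04]])

# Standardise each column separately: subtract its mean, divide by its sd
Z = (X - X.mean(axis=0)) / X.std(axis=0)

print(Z.mean(axis=0))  # each column now has mean ~0
print(Z.std(axis=0))   # ... and standard deviation ~1
```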
| null | CC BY-SA 4.0 | null | 2023-05-27T18:05:23.590 | 2023-05-27T18:05:23.590 | null | null | 247274 | null |
617080 | 2 | null | 73360 | 0 | null | It is completely reasonable to consider the individual pixels (to continue the analogy to images) as features, giving you $2500$ features from a $50\times 50$ image. It is not clear why there should be opposition to this. The relationships between pixels in the image are captured by dependence between features, so you are not losing information by working this way, and this is probably the easiest way to proceed.
There are techniques from image processing to extract features from images, and you might find success in considering such feature extraction techniques. One of the comments mentions [histogram of oriented gradients](https://en.wikipedia.org/wiki/Histogram_of_oriented_gradients) and [scale-invariant feature transform](https://en.wikipedia.org/wiki/Scale-invariant_feature_transform). To that list, I will add Fourier and wavelet transforms. I know that I have played with wavelet transforms of MNIST digits in `R`.
However, you are not obviously making a mistake by considering the original features. Even the convolutional neural networks that exploit the spatial orientation of pixels in images [can be viewed as fully-connected networks that use weight-sharing and parameter dropping](https://stats.stackexchange.com/a/409172/247274).
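To make the first paragraph concrete, flattening is all it takes to treat pixels as features (a Python/NumPy sketch with random stand-in images of the $50\times 50$ size mentioned above):

```python
import numpy as np

# A batch of 3 hypothetical 50x50 grayscale "images"
images = np.random.default_rng(1).random((3, 50, 50))

# Each image becomes one row of 2500 pixel features
X = images.reshape(len(images), -1)
print(X.shape)  # (3, 2500)
```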
| null | CC BY-SA 4.0 | null | 2023-05-27T18:24:18.293 | 2023-05-27T18:24:18.293 | null | null | 247274 | null |
617081 | 2 | null | 611523 | 0 | null |
- They are all pdfs, you are basically applying Bayes' Rule. Since the first line has an exact equality, if you integrate over the full domain of $x_{t-1}$ for fixed $x_t$ and $x_0$, you will get 1, as you said. But the integration is intractable in such a high-dimensional space (e.g., even for an image as small as $32\times 32$).
- The main assumption of diffusion models is that the noise is Gaussian. So, yes, you can think of the distributions of $x_i$ as Gaussian.
- Those are random variables. Every $x_i$ is an image that is processed through the Markov chain. So, you are basically saying that the images are Gaussian, given the initial image and/or the previous image in the Markov chain.
Hope this helps.
| null | CC BY-SA 4.0 | null | 2023-05-27T18:29:27.693 | 2023-05-27T18:29:27.693 | null | null | 388580 | null |
617082 | 2 | null | 605231 | 1 | null | I also searched long for an answer to this question but managed to convince myself that this is similar to the StyleGAN model. In StyleGAN, we also sample from a Gaussian distribution (just as we do in DDPM, $x_T$). Then, when we are passing the model through the network, we add some noise to induce some fine details such as the details of human hair.
I don't have a rigorous explanation, but it seemed slightly similar to me. Hope I can convince you as well :)
| null | CC BY-SA 4.0 | null | 2023-05-27T18:35:31.497 | 2023-05-27T18:35:31.497 | null | null | 388580 | null |
617083 | 1 | null | null | 0 | 16 | I have some hierarchical data and I'm struggling to figure out how to code it in my model. The data was taken on 20 individuals from 18 total sites: 9 sites in year 1 and 9 sites in year 2 and each site is in one of three treatments. So site is nested in year and in treatment, but year and treatment are crossed. I want to include site as a random factor, but I'm not sure how to account for the fact that it's nested within two other factor variables.
| Random factor nested in multiple other factors | CC BY-SA 4.0 | null | 2023-05-27T19:05:54.780 | 2023-05-27T19:05:54.780 | null | null | 388956 | [
"mixed-model",
"nested-data"
] |
617086 | 1 | null | null | 2 | 56 | I am reading the Introduction to Linear Algebra 5th edition, section 7.3: Principal Component Analysis. The section contains the following sentence
>
The first eigenvector $u_1$ of $S$ points in the most significant direction of the data.
What does the "direction of data" mean over here? I am guessing it implies a subspace. Also, why would an eigenvector point to it? Does it mean that the eigenvector is a part of the subspace?
| What does the "direction of data" mean in the context of Principal Component Analysis? | CC BY-SA 4.0 | null | 2023-05-27T20:09:17.033 | 2023-05-31T05:21:21.417 | 2023-05-28T13:32:33.573 | 262849 | 262849 | [
"machine-learning",
"pca",
"multivariate-analysis",
"linear-algebra"
] |
617087 | 1 | null | null | 1 | 10 | I wonder if case-control matching will bring a new confounding bias into the matched design. In the following figure, $L$ is a confounder, $E$ is the exposure, $D$ is the disease outcome. In the matched design, $L$ can be correlated with $D$ through 1) $L\rightarrow E\rightarrow D$; 2) $L\rightarrow D$ (initial confounding); 3) $L\leftarrow S\rightarrow D$ (selection bias). Since $L$ and $D$ are marginally unassociated, the overall effect over 1)+2)+3) must be 0. However, when we assess the effect of $E$ on $D$, $E$ is controlled for relative to $L$. So 1) is blocked for $L$. 2)+3) is not zero. Thus, $L$ and $D$ are correlated via paths 2)+3) in the matched design. Not adjusting for $L$ will lead to a biased estimate of the exposure effect (Mansournia et al., IJE, 2013).
I understand 2) is the initial confounding effect of $L$; 3) is the selection effect by $L$. Why can 3) not be seen as a new confounding effect as well? $L$ is a confounder and it is connected to $Y$ via path 3), which is a criterion for being a confounder.
So the general question is: can we see the backdoor selection bias path by matched confounders as a confounding bias path also? matched confounders are still correlated with $Y$ and $E$ in the matched design and the resulting bias is a confounding bias, right?
[](https://i.stack.imgur.com/X9IYh.jpg)
| Can selection bias lead to confounding bias? | CC BY-SA 4.0 | null | 2023-05-27T20:13:06.243 | 2023-05-27T20:13:06.243 | null | null | 56456 | [
"bias",
"confounding",
"case-control-study"
] |
617089 | 1 | null | null | 1 | 46 | My task is to run a logistic regression that models the probability that revenue is non-zero (in logistic regression parlance, a positive revenue is a success). The revenue variable is numeric and expressed in thousands of euro.
My code looks like this:
```
my_data <- dataset
my_data$success <- ifelse(my_data$revenue > 0, 1, 0)
logistic_model <- glm(success ~ age + initial_capital + industry,
data = my_data, family = "binomial")
summary(logistic_model)
```
Is this correct, or should the revenue variable itself be used when estimating the equation?
I want to know whether the equation is correct, because it gives different values when I use either `revenue` or `success` in the equation.
| regression modeling - logistic | CC BY-SA 4.0 | null | 2023-05-27T15:55:02.587 | 2023-05-28T07:40:20.190 | null | null | 388980 | [
"r",
"regression",
"logistic"
] |
617090 | 2 | null | 617086 | 1 | null | Just above the line you quoted is another explanation:
>
The leading eigenvector $u_1$ shows the direction that you see in the scatter graph of Figure 7.2.
...
>
The SVD of A (centered data) shows the dominant direction in the scatter plot.
The first eigenvector is the direction of the maximum variance of the data. It is not really a subspace of the data, but rather a rotation of the axes of the data such that the first axis (the first eigenvector) is aligned with the direction of maximum variance. (Each eigenvector is a linear transformation of the original data space.) The second eigenvector is then the direction of the maximum remaining variance conditioned on being orthogonal to the first eigenvector.
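A small sketch of this (Python/NumPy with simulated, made-up data): the leading eigenvector of the sample covariance matrix recovers the direction along which the data were stretched.

```python
import numpy as np

rng = np.random.default_rng(0)
# 2-D data stretched mainly along the direction (1, 1)/sqrt(2)
t = rng.normal(scale=3.0, size=500)
noise = rng.normal(scale=0.3, size=500)
X = np.column_stack([t + noise, t - noise])
X = X - X.mean(axis=0)                 # centre the data

S = np.cov(X, rowvar=False)            # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(S)   # eigh: eigenvalues in ascending order
u1 = eigvecs[:, -1]                    # leading eigenvector

print(u1)  # close to +/-(0.707, 0.707): the direction of maximum variance
```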
| null | CC BY-SA 4.0 | null | 2023-05-27T20:42:26.350 | 2023-05-31T01:56:20.037 | 2023-05-31T01:56:20.037 | 36206 | 36206 | null |
617091 | 1 | null | null | 0 | 11 | Let's say I want to compare the correlation of BMI and blood pressure in two independent groups and see, whether there is a difference between the correlations (whether the correlation is stronger in one group than in another). This can be readily analysed with the 'cocor' package (and Fisher's z) in R.
But what if, instead of BMI, we were to compare the association of an ordinal variable - such as an academic grade (A, B, C, D, F) - and blood pressure in two independent groups to see whether there is a difference between the associations (whether the association is stronger in one group than in another)?
If I've understood correctly, we cannot compute the Pearson correlation coefficient if one (or more) of the variables is ordinal (not continuous), and thus, we cannot use the 'cocor' package to approach this question.
| Correlations can be compared with the Comparing Correlations ('cocor') package or R - but what if one of the variables is ordinal? | CC BY-SA 4.0 | null | 2023-05-27T20:46:55.513 | 2023-05-27T20:46:55.513 | null | null | 388683 | [
"correlation",
"ordinal-data",
"pearson-r",
"association-measure",
"fisher-transform"
] |
617092 | 1 | null | null | 1 | 14 | I have conducted a randomized 2x2 cross-over trial of 8 participants measuring the effect of a specific diet (intervention) vs normal diet (control) on the number of sleep hours.
The study design includes one week periods: run-in, intervention/control, washout, intervention/control, and then washout. Thus in total 5 weeks. Thus:
| |Period 1 |Period 2 |
|---|---|---|
|Sequence AB |Treatment A |Treatment B |
|Sequence BA |Treatment B |Treatment A |
The data looks like this:
|subject |sleep_hours |sequence |period |treatment |
|-------|-----------|--------|------|---------|
|1 |4.3 |AB |runin |0 |
|2 |6.5 |AB |runin |0 |
|3 |5.2 |AB |runin |0 |
|4 |4.4 |AB |runin |0 |
|5 |4.2 |BA |runin |0 |
|6 |6.5 |BA |runin |0 |
|7 |5.2 |BA |runin |0 |
|8 |4.6 |BA |runin |0 |
|1 |5.2 |AB |1 |A |
|2 |4.1 |AB |1 |A |
|3 |6.5 |AB |1 |A |
|4 |4.4 |AB |1 |A |
|1 |7.1 |AB |2 |B |
|2 |8.7 |AB |2 |B |
|3 |6.5 |AB |2 |B |
|4 |7.4 |AB |2 |B |
|5 |7.2 |BA |1 |B |
|6 |8.3 |BA |1 |B |
|7 |6.9 |BA |1 |B |
|8 |7.4 |BA |1 |B |
|5 |4.8 |BA |2 |A |
|6 |5.1 |BA |2 |A |
|7 |4.2 |BA |2 |A |
|8 |6.6 |BA |2 |A |
Here is a reproducible code of the above data:
```
db <- structure(list(subject = c(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 1L,
2L, 3L, 4L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 5L, 6L, 7L, 8L),
sleep_hours = c(4.3, 6.5, 5.2, 4.4, 4.2, 6.5, 5.2, 4.6, 5.2,
4.1, 6.5, 4.4, 7.1, 8.7, 6.5, 7.4, 7.2, 8.3, 6.9, 7.4, 4.8,
5.1, 4.2, 6.6), sequence = structure(c(1L, 1L, 1L, 1L, 2L,
2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L,
2L, 2L, 2L, 2L), levels = c("AB", "BA"), class = "factor"),
period = structure(c(3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 1L,
1L, 1L, 1L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L
), levels = c("1", "2", "runin"), class = "factor"),
treatment = structure(c(1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L,
3L, 3L, 3L, 3L, 2L, 2L, 2L, 2L), levels = c("0", "A", "B"
), class = "factor")), row.names = c(NA, -24L),
class = "data.frame")
```
I used a Linear mixed-effects model like to see if the intervention (treatment A) affects sleep_hours compared with the control (treatment B):
```
install.packages("lme4")
install.packages("lmerTest")
library(lmerTest)  # loads lme4 and adds p-values to lmer() summaries
model <- lmer(sleep_hours ~ treatment * period + sequence +
(1|subject), data = db)
summary(model)
```
This gave these results:
```
Linear mixed model fit by REML. t-tests use Satterthwaite's method ['lmerModLmerTest']
Formula: sleep_hours ~ treatment * period + sequence + (1 | subject)
Data: db
REML criterion at convergence: 57.7
Scaled residuals:
Min 1Q Median 3Q Max
-1.0220 -0.6944 -0.1703 0.3407 1.5199
Random effects:
Groups Name Variance Std.Dev.
subject (Intercept) 0.0000 0.000
Residual 0.9101 0.954
Number of obs: 24, groups: subject, 8
Fixed effects:
Estimate Std. Error df t value Pr(>|t|)
(Intercept) 5.100000000000009415 0.477005998098789574 17.999999999553889296 10.692 0.00000000316 ***
treatmentA -0.050000000000007615 0.674588351844622736 17.999999999553857322 -0.074 0.94173
treatmentB 2.325000000000000178 0.674588351844622736 17.999999999553857322 3.447 0.00288 **
period2 -0.000000000000009721 0.954011996197578593 17.999999999553843111 0.000 1.00000
sequenceBA 0.024999999999988885 0.674588351844622736 17.999999999553867980 0.037 0.97085
treatmentA:period2 0.100000000000017311 1.652397248444412492 17.999999999616743906 0.061 0.95241
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
(Intr) trtmnA trtmnB perid2 sqncBA
treatmentA -0.707
treatmentB 0.000 0.000
period2 -0.500 0.354 -0.707
sequenceBA -0.707 0.500 -0.500 0.707
trtmntA:pr2 0.577 -0.612 0.612 -0.866 -0.816
fit warnings:
fixed-effect model matrix is rank deficient so dropping 4 columns / coefficients
optimizer (nloptwrap) convergence code: 0 (OK)
boundary (singular) fit: see help('isSingular')
```
I also tried to extract the P value and 95% confidence intervals for the effects of treatment A, inspired by the code of @BenBolker [here](https://stackoverflow.com/a/66716632/6152554):
```
model_p <- pvalue(coef(summary(as(model,
"lmerModLmerTest")))[2,5])
model_ci <- paste0(ciformat1(coef(summary(as(model,
"lmerModLmerTest")))[2,1]), " (", ciformat1(confint(model,
method="Wald")[4,1]), " to ", ciformat1(confint(model,
method="Wald")[4,2]), ")")
```
Which gave these results:
| |Intervention period |P |
|---|---|---|
|Sleep hours (hours) |-0.1 (-1.4 to 1.3) |.942 |
My interpretation: treatment A did not affect sleep differently than treatment B, mean difference -0.1 hours (95% CI -1.4 to 1.3 hours), P = .942.
Is my code and interpretation correct?
| Interpret results of a mixed effects model analyzing a 2x2 cross-over study? | CC BY-SA 4.0 | null | 2023-05-27T13:43:41.033 | 2023-05-31T20:50:32.247 | 2023-05-31T20:50:32.247 | 11887 | 346798 | [
"r",
"mixed-model",
"crossover-study"
] |
617093 | 1 | null | null | 1 | 12 | I am trying to train a NN model in MATLAB to predict the amount of overflow for flooded junctions in an urban runoff system, and I have 45 samples and 15 features. The issue is, I don't think 45 samples are quite enough for predictive modeling. Is there anything I can do to have a good model without the need to augment my dataset (maybe training another model instead of a NN is a better choice?), or should I just jump right into data augmentation techniques? Is data augmentation even a wise choice?
| small dataset for a regression task | CC BY-SA 4.0 | null | 2023-05-27T21:10:23.503 | 2023-05-27T22:13:56.007 | null | null | 388959 | [
"regression",
"neural-networks",
"matlab",
"small-sample",
"data-augmentation"
] |
617094 | 1 | null | null | 0 | 9 | I am using PCA to inspect the data. The first 3 PCs explain nearly 82% of the total variance. Suppose the number of features is $n$. I found that 4 of the features have similar PCA feature importances for the first 3 PCs, as shown in the attached figure:
[](https://i.stack.imgur.com/guoLb.png)
`evap_c_5`, `evap_c_7`, `w_s_5`, `w_s_7` have similar feature importance for `pc_1`, `pc_2` and `pc_3`. So can I safely conclude that those 4 variables are similar in the original space? If so, could you provide a theoretical derivation, or a literature reference? Thanks!
| Do similar PCA feature importance in first few top PCs mean these variables are nearly same in the original space? | CC BY-SA 4.0 | null | 2023-05-27T21:38:22.200 | 2023-05-27T21:38:22.200 | null | null | 303835 | [
"pca",
"feature-selection",
"feature-engineering",
"eigenvalues",
"importance"
] |
617095 | 1 | 617102 | null | 1 | 30 | I am trying, without success, to calculate the log-likelihood of the most basic logistic regression model - a constant probability model (i.e. only $\beta_0 \ne 0$).
For the simplest model with 1 coefficient (i.e. constant probability):
$$
E(y) = \pi = \frac{1}{1 + e^{-\beta_0}}
$$
The maximum likelihood estimate of $\pi$ is $y/n$. This means that my likelihood is:
$$
\begin{aligned}
\ln(L(y_1, \dots, y_n)) &= \ln \left[ \left(\frac{y}{n}\right)^{y}\left(1 - \frac{y}{n} \right)^{n-y} \right] \\
&= y[\ln(y) - \ln(n)] + (n-y)\ln(1 - \frac{y}{n}) \\
&= y\ln(y) - y\ln(n) + (n-y)[\ln(n - y) - \ln(n)] \\
&= y\ln(y) - y\ln(n) - n\ln(n) + y\ln(n) + (n-y) \ln(n-y) \\
&= y\ln(y) - n\ln(n) + (n-y)\ln(n-y)
\end{aligned}
$$
An example can be found from the book Introduction to Linear Regression Analysis by Montgomery on p. 426 as given below.
A 1959 article in the journal Biometrics presents data concerning the proportion of
coal miners who exhibit symptoms of severe pneumoconiosis and the number of
years of exposure. The response variable of interest, $y$ , is the proportion of miners who have severe symptoms. A reasonable probability model for the number of severe cases is the binomial, so we will fit a logistic regression model to the data.
```
exposure = c(5.8, 15.0, 21.5, 27.5, 33.5, 39.5, 46.0, 51.5)
cases = c(0,1,3,8,9,8,10,5)
miners = c(98,54,43,48,51,38,28,11)
y = sum(cases)
n = sum(miners)
print(y * log(y) - n * log(n) + (n - y)*log(n - y))
```
The above prints -135.0896. However, the log-likelihood of a constant probability model is:
```
m0 = glm(cbind(cases, miners - cases) ~ 1, family=binomial(link='logit'))
logLik(m0)
```
The above prints -39.8646.
I don't understand where I'm going wrong with this.
| Calculate log-likelihood of logistic regression | CC BY-SA 4.0 | null | 2023-05-27T22:02:45.040 | 2023-05-27T22:54:10.163 | null | null | 137303 | [
"r",
"regression",
"multiple-regression",
"generalized-linear-model",
"likelihood"
] |
617096 | 1 | 617105 | null | 1 | 24 | This is a question seeking to follow up on [this post](https://stats.stackexchange.com/questions/462489/multi-group-sem-analysis-regression-paths).
I have a multigroup SEM with a mix of observed and latent variables.
In the measurement model to inspect latent variables, metric invariance holds (loadings), but scalar invariance (intercepts) does not hold.
The regression is one manifest variable (Std_LC) and one latent variable predicting a latent outcome.
I wish to determine whether the regression coefficients are significantly different across three groups. If I hold "loadings" and "regressions" invariant, there are no significant differences across groups. But if intercepts are not also constrained when testing for differences, aren't the regression coefficients representing different values?
Basically, I understand configural, metric, and scalar invariance, but I don't understand if "intercepts" being constrained here refers to intercepts of indicators loading onto latent factors, or also intercepts of variables modelled in the regression.
Many thanks in advance for any advice you could offer!
```
modelx <- '
FluentWR =~ Std_WI +Std_ORF + Raw_RANN
RC =~ Std_PC + Std_WC
RC ~ FluentWR + Std_LC
FluentWR ~~ Std_LC '
```
| Multi-group SEM - constraints and regression paths | CC BY-SA 4.0 | null | 2023-05-27T22:12:37.220 | 2023-05-28T18:44:15.553 | null | null | 249468 | [
"structural-equation-modeling",
"lavaan",
"invariance"
] |
617097 | 2 | null | 617093 | 0 | null | [As I wrote yesterday](https://stats.stackexchange.com/a/617031/247274), there are issues with data augmentation. If you have a small sample size, you put yourself at similar risk of overfitting as you would be fitting a complex model, and if you have a large sample size where that is not such a concern, then I question the need to synthesize artificial data.
With just $45$ observations, you lack the sample size to do sophisticated modeling like neural networks (unless you just want to learn the mechanics of writing neural network code). Unless this overflow follows a simple pattern, consistently strong performance is unlikely.
My suggestion is to work with a simple model like a linear regression on a few features, perhaps three to five features, following a rule-of-thumb for using one feature per $10$-$15$ observations. This is unlikely to achieve the kind of performance that you would get from sophisticated modeling on a large data set, but this is probably all your data will allow.
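As a sketch of what such a deliberately simple model might look like (Python/NumPy here, with purely hypothetical data standing in for the runoff measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 45, 3          # roughly one feature per 15 observations

# Hypothetical features and response standing in for the runoff data
X = rng.normal(size=(n, p))
beta = np.array([1.5, -2.0, 0.5])
y = X @ beta + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # intercept followed by the three slope estimates
```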
| null | CC BY-SA 4.0 | null | 2023-05-27T22:13:56.007 | 2023-05-27T22:13:56.007 | null | null | 247274 | null |
617098 | 1 | null | null | 0 | 8 | I'm building a search engine of technical documents based on Word2Vec, using cosine similarity metric. This search engine is very specific because it is meant to work with technical writings written in both French and English, to be used by end-users who don't necessarily know the exact technical words to look for. Using context through word embedding makes it possible to infer synonyms, which is great for this purpose.
The issue is I have only a few thousands (rather short) documents to train the word embedding, so the signal/noise ratio is not great and there is a fair deal of manual cleanup to do to help the model converge to something general enough.
To give the Word2Vec learning some reference points between languages, I resorted to conservatively stemming words with regexes. That's because French and English share a fair deal of words ("action", "profession", etc.) and word roots (activity/activité, apply/appliquer), so if you prune enough suffixes from both languages, you converge to a fairly similar synthetic language that gives Word2Vec a bit more generality to work with, where translations should be seen as synonyms, and thus neighbours in the model.
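To illustrate the idea (the suffix list below is a toy example, not my actual rule set):

```python
import re

# Toy, conservative suffix-pruning rule; a real rule set would be tuned to
# the corpus. Pruning shared Latin suffixes maps French/English cognates
# onto the same token, giving Word2Vec cross-language anchor points.
SUFFIXES = re.compile(r"(ities|ité|ity|iser|ize|er|es|s)$")

def crude_stem(word: str) -> str:
    return SUFFIXES.sub("", word.lower())

print(crude_stem("activity"), crude_stem("activité"))  # both "activ"
```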
For words outside the dictionary, I use [Peter Norvig's spell-checking algorithm](https://norvig.com/spell-correct.html) (based on Levenshtein distance) against the Word2Vec vocabulary, and use the vector of the closest word found in the dictionary. In this context, this serves as a typo corrector as well as a fuzzy "nearest translation".
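For reference, the distance involved is the classic dynamic-programming Levenshtein edit distance; a minimal sketch:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance (insertions, deletions, substitutions),
    computed row by row with dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                  # delete ca
                            curr[j - 1] + 1,              # insert cb
                            prev[j - 1] + (ca != cb)))    # substitute
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```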
I have not been able to find literature on a way to directly merge morphological similarity between words (as measured by Levenshtein distance) into the cosine similarity of the fitted Word2Vec vectors. Is there such a thing?
| Word2Vec with some Levenshtein metric | CC BY-SA 4.0 | null | 2023-05-27T22:16:48.207 | 2023-05-27T22:23:47.780 | 2023-05-27T22:23:47.780 | 388960 | 388960 | [
"word2vec",
"cosine-similarity",
"levenshtein-distance"
] |
617099 | 2 | null | 617089 | 1 | null | I have my doubts that you should model revenue in this kind of binary way. However, if that if what you want to do, then the revenue value cannot be a predictor variable. When it comes time to make predictions in the future, you will lack the revenue value and be unable to make a prediction.
| null | CC BY-SA 4.0 | null | 2023-05-27T22:21:32.753 | 2023-05-28T07:40:20.190 | 2023-05-28T07:40:20.190 | 247274 | 247274 | null |
617100 | 1 | null | null | 0 | 21 | There are 197,058 record holders (the population) with an average of K shares each. I don't know K, and the sample collected so far of ~24,000 observations appears to be non-normal (inverse Gaussian). I'm wondering how big the sample size would need to be for the sample mean to be approximately equal to the population mean. There are many online statistics calculators, but none that factor in non-normal distributions.
Ultimately, I’m looking for K x 197,058 = total number of shares held by record holders.
| Adequate sample size for non-normal population | CC BY-SA 4.0 | null | 2023-05-27T22:27:35.053 | 2023-05-27T22:27:35.053 | null | null | 388962 | [
"sample-size",
"inverse-gaussian-distribution"
] |
617102 | 2 | null | 617095 | 1 | null | A likelihood is a strange thing: it is not a probability and does not need to sum or integrate to $1$. In fact, likelihood is only measured up to proportionality, with a positive multiplicative constant that becomes an additive constant if you take the logarithm. So you can find the ratios of likelihoods of different parameter values, i.e. the differences of log-likelihoods, and you can find which value of the parameter maximises the likelihood, but the likelihood itself is not fixed absolutely; in Bayesian analysis the constant of proportionality integrates out.
If you are just looking at the overall figures from your data, you get $\frac{y}{n}=\frac{44}{371}\approx 0.1186$ and this is going to be the maximum likelihood proportion $\hat p$. In R:
```
phat <- y/n
```
The differences in the log-likelihoods come from binomial coefficients involving $y$ and $n$ but not $\hat p$. Here are three ways of calculating it, giving three different values:
- Yours: $$\log_e\left(\hat p^y (1-\hat p)^{n-y}\right)$$
log(phat^y * (1-phat)^(n-y))
# -135.0896
- the binomial: $$\log_e\left({n \choose y} \hat p^y (1-\hat p)^{n-y}\right) = \log_e{n \choose y}+\log_e\left(\hat p^y (1-\hat p)^{n-y}\right)$$
log(choose(n,y) * phat^y * (1-phat)^(n-y))
# -2.749837
- the binomials for each set of miners as used by glm: $$\log_e\left(\prod\limits_i{\text{miners}_i \choose \text{cases}_i} \hat p^{\text{cases}_i} (1-\hat p)^{\text{miners}_i-\text{cases}_i}\right)\\ = \left(\sum\limits_i \log_e {\text{miners}_i \choose \text{cases}_i}\right) + \log_e\left(\hat p^y (1-\hat p)^{n-y}\right)$$
log(prod(choose(miners,cases) * phat^cases * (1-phat)^(miners-cases)))
# -39.8646
These are all log-likelihoods from the same data, with the difference between the first and second being `log(choose(n,y))` about $132.3398$, and between the first and third `sum(log(choose(miners,cases)))` about $95.22504$, none of which involve $\hat p$.
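For completeness, all three values can be reproduced numerically; here is a Python sketch of the same calculations (using `lgamma` for the log binomial coefficients, with the miners data from the question):

```python
from math import lgamma, log

def log_choose(n: int, k: int) -> float:
    # log of the binomial coefficient via log-gamma
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

cases  = [0, 1, 3, 8, 9, 8, 10, 5]
miners = [98, 54, 43, 48, 51, 38, 28, 11]
y, n = sum(cases), sum(miners)   # 44 successes out of 371
phat = y / n

kernel  = y * log(phat) + (n - y) * log(1 - phat)                        # 1.
single  = log_choose(n, y) + kernel                                      # 2.
grouped = sum(log_choose(m, c) for m, c in zip(miners, cases)) + kernel  # 3.
print(kernel, single, grouped)  # ~ -135.0896, -2.7498, -39.8646
```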
| null | CC BY-SA 4.0 | null | 2023-05-27T22:54:10.163 | 2023-05-27T22:54:10.163 | null | null | 2958 | null |
617103 | 1 | null | null | 0 | 31 | Given a bivariate joint distribution of random variables $X$ and $Y$, $P(X,Y)$, consider the expectation value $E[(X-Y)^n]$ for different values of $n$.
If one observes that the moments become smaller as $n$ is increased within the even and the odd $n$ series separately, while the even values are hierarchically larger than the odd ones, what does this say about the bivariate distribution I am considering? Is there a nice intuition for this?
That is, precisely, say one sees something like
$ E[(X-Y)^5]< E[(X-Y)^6]\lesssim E[(X-Y)^3] < E[(X-Y)^4] \lesssim E[(X-Y)] < E[(X-Y)^2]$
and a similar pattern continues for higher powers. Note that here I wrote $E[(X-Y)^4] \lesssim E[(X-Y)]$ and $E[(X-Y)^6]\lesssim E[(X-Y)^3]$. Although $E[(X-Y)^4]$ (or $E[(X-Y)^6]$) seems smaller than $E[(X-Y)]$ (or $E[(X-Y)^3]$), it does not seem much smaller, so I am not sure whether this is generally true. That is, $E[(X-Y)^4]$ (or $E[(X-Y)^6]$) is suppressed because it is a higher power, while $E[(X-Y)]$ (or $E[(X-Y)^3]$) is suppressed because it is odd.
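To illustrate what I mean concretely, here is a toy Monte Carlo check (the distribution below is purely illustrative, not my actual data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000

# A toy bivariate pair: Y equals X plus small symmetric noise, so D = X - Y
# is concentrated near zero (NOT my actual distribution)
x = rng.normal(0.0, 1.0, n_samples)
y = x + rng.normal(0.0, 0.5, n_samples)
d = x - y

moments = {k: np.mean(d ** k) for k in range(1, 7)}
# Here the noise is symmetric, so the odd moments are near zero while the
# even moments are positive and shrink with the power (|D| < 1 mostly)
```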
| Behavior of higher moments in the bivariate distribution | CC BY-SA 4.0 | null | 2023-05-27T23:23:57.140 | 2023-05-28T00:52:35.533 | 2023-05-28T00:52:35.533 | 388965 | 388965 | [
"probability",
"distributions",
"mathematical-statistics",
"bivariate"
] |
617104 | 1 | null | null | 0 | 30 | A common explanation for mass shootings is the "copycat effect," or that mass shootings inspire more mass shootings. If the copycat effect is true, we would expect the incidence of mass shootings to be dependent (?).
Suppose I wanted to test if the incidence of mass shootings per year in the United States were independent. I would have year-level data with # of mass shootings in the US for a given year. How would I go about this? I've thought about conducting a GoF hypothesis test to see if the data approximates a Poisson distribution, which assumes the events are independent. So, if the data fits a Poisson distribution, perhaps the events are independent (though I realize Poisson also assumes the events occur at a constant mean rate ... so it's unclear whether it's independence of events or the mean rate that's driving the distribution) (?).
Any ideas would be greatly appreciated! I've put question marks around statements I'm not completely confident on.
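For concreteness, here is a minimal sketch of the kind of check I have in mind (the counts are made-up, purely illustrative); this is the classical Poisson dispersion test, which targets the equal-mean-variance property rather than independence directly:

```python
import numpy as np
from scipy import stats

# Hypothetical yearly counts of incidents (made-up numbers, for illustration only)
counts = np.array([3, 5, 2, 4, 6, 3, 7, 5, 4, 2, 6, 5, 3, 4, 5])
n = len(counts)

# Poisson dispersion (index-of-dispersion) test: under H0 the counts are
# i.i.d. Poisson, and D follows approximately a chi-square with n-1 df
D = (n - 1) * counts.var(ddof=1) / counts.mean()
p_value = 2 * min(stats.chi2.cdf(D, df=n - 1), stats.chi2.sf(D, df=n - 1))
```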
| Testing independence of events (mass shootings in the US) | CC BY-SA 4.0 | null | 2023-05-27T23:46:22.643 | 2023-05-28T01:20:19.740 | 2023-05-27T23:47:03.557 | 388966 | 388966 | [
"hypothesis-testing",
"independence"
] |
617105 | 2 | null | 617096 | 0 | null | The regression slope coefficients in your structural (latent variable) model only involve the covariance structure (latent variances and covariances). Therefore, loading (metric/weak) invariance is sufficient to meaningfully test whether the regression slope coefficients in the structural model are equal across groups.
The intercepts in the structural regression model involve the mean structure of the latent variables. To meaningfully compare structural intercepts, you need to have at least strong/scalar invariance in your measurement model (i.e., equal factor loadings and equal indicator intercepts). Otherwise, the mean structure of the latent variables is not comparable across groups.
| null | CC BY-SA 4.0 | null | 2023-05-28T00:06:52.280 | 2023-05-28T00:06:52.280 | null | null | 388334 | null |
617110 | 1 | null | null | 0 | 13 | In regression models such as linear and multiple regression, there are several conditions that must be met, such as normality, non-autocorrelation, and homoscedasticity of the errors. Does an ANN also have conditions that must be met? If so, what are they?
| are there any conditions on the data in ANN classification? | CC BY-SA 4.0 | null | 2023-05-28T01:33:48.050 | 2023-05-28T01:33:48.050 | null | null | 388875 | [
"neural-networks",
"classification",
"assumptions",
"conditional"
] |
617111 | 1 | null | null | 0 | 35 | Question
Suppose there are two groups on treatments and an individual follows a Weibull distribution with the following probability density function.
\begin{align}
f(x; \alpha, \lambda) = \alpha \lambda x^{\alpha - 1}e^{-\lambda x^{\alpha}}, \quad \text{where } x \geq 0 \; \text{and } \alpha, \lambda > 0
\end{align}
It is to be noted that $\alpha_1$ and $\alpha_2$ are known but are not equal to $1$. Participants in the first group, denoted by $x_i$, are modeled by a Weibull distribution $f(x_i; \alpha_1, \lambda_1)$, and participants in the second group, denoted by $y_j$, are modeled by a Weibull distribution $f(y_j; \alpha_2, \lambda_2)$.
What are the likelihood function and log-likelihood function for the parameters $(\lambda_1, \lambda_2)$? Omit terms that do not depend on $\lambda_1$ or $\lambda_2$ to facilitate solving the likelihood equations to find the MLEs of $\lambda_1$ and $\lambda_2$.
Attempt
For participant group 1, $x_i$,
\begin{align*}
L_{1}(\lambda_1) = \prod_{i = 1}^{n_1}(\alpha_1\lambda_{1}x_i^{\alpha_1 - 1}e^{-\lambda_1 x_i^{\alpha_1}})
\end{align*}
For participant group 2, $y_j$,
\begin{align*}
L_{2}(\lambda_2) = \prod_{j = 1}^{n_2}(\alpha_2\lambda_{2}y_j^{\alpha_2 - 1}e^{-\lambda_2 y_j^{\alpha_2}})
\end{align*}
The joint likelihood function would then be.
\begin{align}
L(\lambda_1, \lambda_2) &= L_{1}(\lambda_1) \cdot L_{2}(\lambda_2) \\
&= \prod_{i = 1}^{n_1}(\alpha_1\lambda_{1}x_i^{\alpha_1 - 1}e^{-\lambda_1 x_i^{\alpha_1}}) \cdot \prod_{j = 1}^{n_2}(\alpha_2\lambda_{2}y_j^{\alpha_2 - 1}e^{-\lambda_2 y_j^{\alpha_2}}) \\
&= \left(\alpha_1^{n_1}\lambda_{1}^{n_1}\left(\prod_{i = 1}^{n_1} x_i\right)^{\alpha_1 - 1}e^{-\lambda_1\sum_{i = 1}^{n_1} x_i^{\alpha_1}}\right) \cdot \left(\alpha_2^{n_2}\lambda_{2}^{n_2}\left(\prod_{j = 1}^{n_2} y_j\right)^{\alpha_2 - 1}e^{-\lambda_2\sum_{j = 1}^{n_2} y_j^{\alpha_2}}\right)
\end{align}
The log-likelihood equation would then be with the omission of terms that do not depend on $\lambda_1, \lambda_2$ are.
\begin{align*}
\ell(\lambda_1, \lambda_2) = n_1\ln(\lambda_1) - \lambda_1\sum_{i = 1}^{n_1} x_i^{\alpha_1} + n_2\ln(\lambda_2) - \lambda_2\sum_{j = 1}^{n_2} y_j^{\alpha_2}
\end{align*}
My concerns
I understand that, to find the MLEs for our parameters of interest $\lambda_1, \lambda_2$, we need to differentiate the log-likelihood function with respect to each parameter of interest and set the result equal to $0$. However, I am unsure whether the approach taken to obtain the likelihood function is correct. Any help would be appreciated.
Update
Upon further commentary, it has become clear to me that we cannot simply collapse the product over the random variables $x_i$ (or $y_j$) into a single power. Taking the natural log, the relevant term becomes the following.
\begin{align}
\prod_{i = 1}^{n_1} x_i^{\alpha_1 - 1} \rightarrow \left(\alpha_1 - 1\right) \sum_{i = 1}^{n_1} \ln x_i
\end{align}
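Setting the derivative of $\ell$ with respect to $\lambda_1$ to zero suggests the candidate $\hat\lambda_1 = n_1/\sum_{i} x_i^{\alpha_1}$ (and analogously for $\hat\lambda_2$). A numerical sanity check of that candidate on simulated data (the numbers below are illustrative, not part of the original problem):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
alpha1, lam1_true, n1 = 1.5, 2.0, 5000

# Inverse-CDF sampling for the parameterization S(x) = exp(-lambda * x^alpha)
u = rng.uniform(size=n1)
x = (-np.log(u) / lam1_true) ** (1.0 / alpha1)

# Closed-form candidate MLE: d ell / d lambda1 = n1/lambda1 - sum(x_i^alpha1) = 0
lam1_hat = n1 / np.sum(x ** alpha1)

# Numerical maximization of the lambda1 part of the log-likelihood agrees
neg_ll = lambda lam: -(n1 * np.log(lam) - lam * np.sum(x ** alpha1))
lam1_num = minimize_scalar(neg_ll, bounds=(1e-6, 50.0), method="bounded").x
```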
| Likelihood and log-likelihood for Weibull distribution | CC BY-SA 4.0 | null | 2023-05-28T03:03:59.683 | 2023-05-28T23:22:31.597 | 2023-05-28T23:22:31.597 | 376744 | 376744 | [
"self-study",
"likelihood",
"weibull-distribution"
] |
617112 | 2 | null | 617089 | 1 | null | >
Is it correct or there should be revenue variable itself when estimating the equation?
By this, I'm assuming you mean including the numeric value as a predictor. Something like
`logistic_model <- glm(success ~ age + initial_capital + industry, data = my_data, family = "binomial")`
Clearly, were you to do this then the successes and failures would be completely separable and hence your model would fail to fit.
| null | CC BY-SA 4.0 | null | 2023-05-28T03:33:00.287 | 2023-05-28T03:33:00.287 | null | null | 111259 | null |
617113 | 1 | null | null | 1 | 122 | I am trying to implement a variational extension of some kind of Bayesian network estimation method. The main goal is to improve speed, since the current method is pretty slow due to MCMC.
My question should be pretty simple for veterans in the field: "How does one obtain the optimal variational density?" "Is it fixed a priori, or derived analytically from the data?"
But before that, I think it's important to validate my understanding of what variational inference is and why we use it. This post is an attempt to summarize what I think I understand. Correct me if I made any mistakes along the way.
DISCLAIMER: I am a frequentist-trained biostatistician. This is my first Bayesian attempt in research.
What I think I understand:
Assume we are in a general framework (linear regression, for example) with a parameter vector $\beta$ and a parameter $\sigma$, each following a given prior distribution.
From a bayesian perspective we are looking for the posterior distribution of some parameters $\beta, \sigma$ given data $Y$ in order to do inferences. Formally:
$$
p(\beta, \sigma|Y) = \frac{p(\beta)p(\sigma)p(Y|\beta, \sigma)}{p(Y)} \propto p(\beta)p(\sigma)p(Y|\beta, \sigma)
$$
Since the denominator is an integral over all parameters, $\int_{\beta}\int_{\sigma} p(Y, \beta, \sigma) d_{\beta} d_{\sigma}$, in the presence of many parameters this density is often intractable.
One approach to estimating this quantity is by sampling (MCMC), but that is not the focus here.
Another approach is variational inference (VI). VI turns the approximation problem into an optimization one. Let's go through VI more formally and check whether I understand all the concepts correctly.
In VI we consider a proxy parametric distribution for the parameters, $q(\beta, \sigma)$, which is easy to compute while being flexible enough to capture the true posterior. Traditionally, researchers try to minimize some metric in order to determine the best $q(.)$. The most common is the Kullback-Leibler divergence, which is the expectation under $q$ of the log ratio between $q(\beta, \sigma)$ and $p(\beta, \sigma|Y)$. Formally:
$$
KL(q(\beta, \sigma) || p(\beta, \sigma|Y)) = \int q(\beta, \sigma) \log(\frac{q(\beta, \sigma)}{p(\beta, \sigma|Y)})d_{\beta, \sigma}
$$
From
$$
p(Y) = \int_{\beta}\int_{\sigma}p(Y, \beta, \sigma) d_{\beta}d_{\sigma}
$$
$$
= \int_{\beta}\int_{\sigma} q(\beta, \sigma) \frac{p(Y, \beta, \sigma)} {q(\beta, \sigma)} d_{\beta}d_{\sigma} = \int_{\beta}\int_{\sigma} q(\beta, \sigma) \frac{p(\beta, \sigma |Y) p(Y)} {q(\beta, \sigma)} d_{\beta}d_{\sigma}
$$
$$ \log(P(Y)) = \log\left(\int_{\beta}\int_{\sigma} q(\beta, \sigma) \frac{p(\beta, \sigma |Y) p(Y)} {q(\beta, \sigma)} d_{\beta}d_{\sigma}\right) \geq \int_{\beta}\int_{\sigma} q(\beta, \sigma) \log\left(\frac{p(\beta, \sigma |Y) p(Y)} {q(\beta, \sigma)}\right) d_{\beta}d_{\sigma}$$
The last inequality is obtained from Jensen's inequality. The right term of the inequality is the evidence lower bound (ELBO). After simple algebraic manipulations, the ELBO can be re-expressed as:
$$
\int_{\beta}\int_{\sigma} q(\beta, \sigma) \log(\frac{p(\beta, \sigma |Y) p(Y)} {q(\beta, \sigma)}) d_{\beta}d_{\sigma} = \int_{\beta}\int_{\sigma} q(\beta, \sigma) (\log(p(\beta, \sigma |Y)) - \log(q(\beta, \sigma)) + \log(p(Y))) d_{\beta}d_{\sigma} \propto -KL(q(\beta,\sigma)||p(\beta, \sigma |Y))
$$
Since $p(Y)$ is constant over $q(\beta,\sigma)$, minimizing the KL is the same as maximizing the ELBO.
From the mean-field theory $q(\beta,\sigma)$ can be rewritten as $q(\beta) q(\sigma)$. Thus,
$$
ELBO = \int_{\beta}\int_{\sigma} q(\beta, \sigma) \log\left(\frac{p(\beta, \sigma |Y)}{q(\beta, \sigma)}\right) d_{\beta}d_{\sigma} = \int_{\beta}\int_{\sigma} q(\beta)q(\sigma) \log\left(\frac{p(\beta, \sigma |Y)}{q(\beta) q(\sigma)}\right) d_{\beta}d_{\sigma}
$$
Rewriting the ELBO we obtain:
$$
\int_{\beta}\int_{\sigma} q(\beta)q(\sigma) \log(\frac{(p(\beta)p(\sigma)P(Y|\beta, \sigma))}{q(\beta) q(\sigma)} ) d_{\beta}d_{\sigma}
$$
$$
= \int_{\beta}\int_{\sigma} q(\beta)q(\sigma) \log(p(\beta))d_{\beta}d_{\sigma} + \int_{\beta}\int_{\sigma}q(\beta)q(\sigma)
\log(p(\sigma))d_{\beta}d_{\sigma} + \int_{\beta}\int_{\sigma}q(\beta)q(\sigma) \log(p(Y|\beta, \sigma))d_{\beta}d_{\sigma} - \int_{\beta}\int_{\sigma}q(\beta)q(\sigma) \log(q(\beta))d_{\beta}d_{\sigma} - \int_{\beta}\int_{\sigma}q(\beta)q(\sigma) \log (q(\sigma)) d_{\beta}d_{\sigma}
$$
EDIT:
We can rewrite it in terms of expectations:
$$
E_{q(\beta, \sigma)}\log(p(\beta)) + E_{q(\beta, \sigma)}\log(p(\sigma)) + E_{q(\beta, \sigma)}\log(p(Y| \beta, \sigma)) - E_{q(\beta, \sigma)}\log(q(\beta)) - E_{q(\beta, \sigma)}\log(q(\sigma))
$$
From my understanding we have two general approaches to optimize the ELBO.
- Analytically, where each optimal $q(z)^*$ is derived for the particular problem at hand. Following Blei et al., 2017, for the $j$th parameter $q_j(z_j)^* \propto \exp\{E_{-j}(\log(p(Z,X)))\}$. This quantity is the expectation of the log joint density taken over all parameters except the $j$th, keeping the $j$th free. But practically speaking, I am not sure what that means.
- Approximately, where there is no context-specific optimal $q(z)^*$; instead we use an approximating distribution, such as a Gaussian or any distribution close to the prior. I know that in PyMC3 or Stan, Gaussian mean-field approximations are used. If I translate it correctly, this is the same as setting $q(z_j) = N(z_j; \mu_s, \sigma_s)$, where $\mu_s, \sigma_s$ are the corresponding variational parameters. I know that other kinds of approximations are available depending on the context.
My questions
Now this is the difficult part for me. I am not sure which method is better in which context.
Can the Gaussian mean-field approach be extended to other parametric families? I am thinking about the Laplace distribution, for example.
Also, I would appreciate a high-level explanation of the algorithm needed to obtain the variational parameters. Some authors talk about "EM-like" algorithms, but I am not sure what is done at each iteration.
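For reference, here is a toy conjugate example I used to check my understanding of the Gaussian mean-field idea: a normal-mean model with known unit variances, where the exact posterior is available in closed form, so the fitted $q$ can be compared against it (purely illustrative, not the network model I described):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=50)  # data: y_i ~ N(mu, 1), with prior mu ~ N(0, 1)
n = len(y)

def neg_elbo(params):
    # q(mu) = N(m, s^2), parameterized by m and log(s) for unconstrained optimization
    m, log_s = params
    s2 = np.exp(2.0 * log_s)
    e_log_prior = -0.5 * (m**2 + s2)                    # E_q[log p(mu)], up to a constant
    e_log_lik = -0.5 * (np.sum((y - m) ** 2) + n * s2)  # E_q[log p(y|mu)], up to a constant
    entropy = log_s                                     # entropy of q, up to a constant
    return -(e_log_prior + e_log_lik + entropy)

res = minimize(neg_elbo, x0=np.array([0.0, 0.0]))
m_hat, s_hat = res.x[0], np.exp(res.x[1])

# Exact conjugate posterior: mu | y ~ N(n*ybar/(n+1), 1/(n+1)); q recovers it here
m_exact = n * y.mean() / (n + 1)
s_exact = np.sqrt(1.0 / (n + 1))
```

Because this toy model is conjugate and the variational family contains the exact posterior, the optimizer recovers it; in general the mean-field $q$ only approximates the posterior.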
| Conceptual questions about the proxy distribution in variational inference | CC BY-SA 4.0 | null | 2023-05-28T03:41:23.840 | 2023-06-03T21:34:36.237 | 2023-05-31T16:18:38.803 | 223713 | 223713 | [
"bayesian",
"kullback-leibler",
"variational-bayes"
] |
617114 | 1 | null | null | 2 | 54 | The code below produces the following error:
```
Error in mean(abs((y_true - y_pred)/y_true)) :
  argument "y_true" is missing, with no default
```
I've seen MAPE used on forecasts. Can one use this and similar methods on models? Or is there some other R package that would work?
Edit 1: It seems like `MAPE` wants a forecast. While my `arima` object:
```
> class(arima)
[1] "forecast_ARIMA" "ARIMA" "Arima"
```
might be a forecast, `MAPE` does not work on it. Using `summary`, I get:
```
Training set error measures:
ME RMSE MAE MPE
Training set 0.003245791 1.473989 1.051721 -0.001574146
MAPE MASE ACF1
Training set 0.5306977 0.9908161 -0.001193281
```
Doing a ??MAP gets me nine packages (with many different names for methods and parameters). This is confusing. Is there any documentation on which types of objects work with what?
Original code:
```
symbols <- c("SPY","EFA", "IJS", "EEM","AGG")
prices <- getSymbols(symbols,
src = 'yahoo',
from = "2012-12-31",
to = "2017-12-31",
auto.assign = TRUE,
warnings = FALSE)
ts<-SPY$SPY.Close
autoplot(ts)
arima = auto.arima(ts)
library(MLmetrics)
MAPE(arima)
```
| How to use MAPE with auto.arima model? | CC BY-SA 4.0 | null | 2023-05-28T03:52:52.243 | 2023-05-28T21:21:20.273 | 2023-05-28T21:21:20.273 | 174445 | 174445 | [
"r",
"arima",
"mape"
] |
617115 | 2 | null | 381149 | 0 | null | Covariance only measures linear relationships, and it classifies the slope of the linear relationship into one of three cases: (1) positive, (2) negative, (3) no trend.
If two random variables X and Y are not linearly related, we can still calculate their covariance. However, the result is not meaningful: we cannot interpret a positive covariance as meaning that bigger X values are associated with bigger Y values. Similarly, a negative covariance does not necessarily mean that bigger X values are associated with smaller Y values; nor can we say that if the covariance is 0, there is no trend between X and Y.
Why is that? Consider the case Y = X^2. There is a clear pattern between the two variables, but the covariance can still be 0. Here is how to get cov(X, Y) = 0: take four symmetric data points from this curve, say (-2, 4), (-1, 1), (1, 1), and (2, 4). Calculating the covariance from its formula gives 0.
To summarize, if two variables do not show a linear trend, then it is not valid to use covariance to characterize their relationship. That is why we say covariance only measures linear relationships.
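A quick numerical check of this example (Python used only for illustration):

```python
import numpy as np

x = np.array([-2.0, -1.0, 1.0, 2.0])
y = x ** 2  # a clear nonlinear relationship

# The sample covariance is exactly zero despite the perfect dependence
cov_xy = np.cov(x, y)[0, 1]
print(cov_xy)  # 0.0
```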
| null | CC BY-SA 4.0 | null | 2023-05-28T03:55:25.910 | 2023-05-28T03:55:36.770 | 2023-05-28T03:55:36.770 | 388973 | 388973 | null |
617116 | 1 | 617358 | null | 2 | 30 | I have a series (sensor readings) with different events and I'm trying to segment/detect those events for further analysis using change point detection in python.
In general, the series consists of a mean with a fairly small standard deviation. Occasionally, an outlier reading will occur for one, maybe two points, and then usually revert to the same mean. Another common pattern is the series shifting to a new mean, either abruptly or over the course of several readings. I'd like the method I use to capture both kinds of changes.
Additionally, I'm going to be doing this online so I'd like the results to be somewhat stable. It's fine if new data makes a mean-shift more obvious and a new change point is detected at N-3, but I don't want a method that causes a change point to be detected at N-20.
It seems like [ruptures](https://centre-borelli.github.io/ruptures-docs/) is the most common change point library in python (I also looked at [changefinder](https://pypi.org/project/changefinder/)), all the methods are offline but I'd be happy re-running for every new data point if it worked. However, I've had trouble getting satisfactory results.
```
import numpy as np
import ruptures as rpt
import matplotlib.pyplot as plt
signal = [9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 28.0, 13.0, 9.0, 10.0, 10.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 10.0, 10.0, 10.0, 9.0, 9.0, 9.0, 9.0, 9.0, 10.0, 31.0, 31.0, 35.0, 35.0, 37.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0]
def detect_events(signal):
    signal = np.array(signal)
    # Also tried, with similar results:
    # algo = rpt.Pelt(model="rbf", min_size=1, jump=1).fit(signal)
    # result = algo.predict(pen=1)
    # algo = rpt.Binseg(model="l2", jump=10).fit(signal)
    # result = algo.predict(pen=20)
    algo = rpt.Window(model='l2', width=2, min_size=1).fit(signal)
    result = algo.predict(pen=5)
    change_points = [i for i in result if i < len(signal)]
    return change_points
change_points = detect_events(signal)
optimal_change_points = [7,9, 38, 43]
print(change_points)
for cp in change_points:
    print(signal[cp-2:cp+2])

f, ax = plt.subplots(1, 1)
ax.plot(signal)
for cp in change_points:
    ax.axvline(x=cp, color='r', linestyle='--')
for cp in optimal_change_points:
    ax.axvline(x=cp, color='g', linestyle=':')
```
[Series with optimal change points vs detected changepoints](https://i.stack.imgur.com/kkfhl.png)
It seems whatever combination of search method I try with ruptures I'm either missing significant events, or detecting loads of spurious events, or getting a change point at weird spots, like well into the sequence of -1 instead of at the start.
I'm getting to the point where I'm probably going to make my own method (I'm primarily concerned with mean-shifts so I think it's feasible), though I can see that turning into a giant pile of edge-cases. The problem feels standard enough that I'd just be re-inventing a much more functional wheel but I don't know where that wheel is.
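For reference, here is the sort of minimal hand-rolled detector I have in mind, a two-sided CUSUM (the slack and threshold values are illustrative and untuned):

```python
import numpy as np

def cusum_changepoints(signal, k=0.5, h=5.0):
    """Two-sided CUSUM; returns indices where a mean shift is flagged.
    k is the slack and h the decision threshold, both in units of a crude
    global standard-deviation estimate."""
    x = np.asarray(signal, dtype=float)
    cps, start = [], 0
    while start < len(x) - 1:
        seg = x[start:]
        mu = seg[0]                     # crude baseline: first point of the segment
        sd = max(np.std(x), 1e-9)       # crude global scale estimate
        s_pos = s_neg = 0.0
        flagged = None
        for i, v in enumerate(seg[1:], start=1):
            z = (v - mu) / sd
            s_pos = max(0.0, s_pos + z - k)
            s_neg = max(0.0, s_neg - z - k)
            if s_pos > h or s_neg > h:
                flagged = start + i
                break
        if flagged is None:
            break
        cps.append(flagged)
        start = flagged                 # restart the baseline after each detected shift
    return cps
```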
| Detecting Events in a series | CC BY-SA 4.0 | null | 2023-05-28T04:00:43.160 | 2023-05-30T20:37:56.913 | null | null | 388974 | [
"python",
"change-point"
] |
617117 | 1 | null | null | 0 | 5 | I'm a newbie to the world of deep learning. I know that in practice what a lot of people do while training these models is to decay the learning rate as the number of iterations increases.
In a similar fashion, when we are trying to minimize a function via Newton's method, we use the inverse of the second derivative as the "learning rate".
Google searches on the connection between these concepts didn't yield much so I was wondering if they are related:
- By decaying the learning rate, are we assuming (or giving?) the loss function a "stronger" curvature? Are we making our parameters more identifiable by doing this?
Thanks
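To make the analogy concrete for myself, on a 1-D quadratic f(x) = (a/2)x^2 a gradient step with learning rate 1/f''(x) is exactly the Newton step (a toy illustration only):

```python
# Toy 1-D quadratic f(x) = (a/2) * x**2, so f'(x) = a*x and f''(x) = a
def f_prime(x, a=4.0):
    return a * x

def f_double_prime(x, a=4.0):
    return a

x0 = 3.0
x_newton = x0 - f_prime(x0) / f_double_prime(x0)      # Newton step
x_gd = x0 - (1.0 / f_double_prime(x0)) * f_prime(x0)  # gradient step with lr = 1/f''
print(x_newton, x_gd)  # 0.0 0.0 (the minimum, reached in one step)
```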
| Learning rate decay in neural networks and inverse hessians in Newton's algorithm | CC BY-SA 4.0 | null | 2023-05-28T04:25:49.130 | 2023-05-28T04:25:49.130 | null | null | 115120 | [
"neural-networks",
"optimization",
"identifiability",
"learning"
] |
617118 | 1 | null | null | 0 | 9 | Is there any reference about backpropagation of the Transformer's multi-head layer or multi-head attention (MHA)? I have searched various journals but have not found one yet.
| Is there any reference about backpropagation of the Transformer's multi-head layer? | CC BY-SA 4.0 | null | 2023-05-28T04:26:36.460 | 2023-05-28T04:26:36.460 | null | null | 387019 | [
"backpropagation",
"transformers"
] |
617119 | 1 | null | null | -1 | 23 | I am trying to simulate a microgrid that has EV charging stations. My goal is to use a random generator in Python to generate random arrival times that resemble the observed charging behavior, which has increased arrivals at certain times of day. I used one year of prior power data that I averaged for each minute of the day (all 0:00 readings averaged together, then all 0:01, and so on) to get an "average day". From that I noticed three bell-curve-like peaks, so I created a compound (three-component) Gaussian using numpy and pandas. While the output does look like the original data (step-like power data where you can see the switching on and off), the arrival-time means seem off compared to what I programmed in the code.
My professor suggested that I use Poisson or discrete arrival times instead of Gaussian, but I am not sure if that will work, since I am using 1440 different arrival times (minutes in a day), so at that size it is basically Gaussian anyway. Does anyone know a better method, like Monte Carlo, or why my random tri-Gaussian does not have the same peaks as the original data? Also, I did manage to curve-fit the data, but I am not sure if that is useful for random generation. Thank you for all the help, and apologies for any statistics terminology errors; I am a power systems engineer.
Yearly EV charger Output data
[](https://i.stack.imgur.com/8bflA.png)
Avg Day Charging (Time Domain)
[](https://i.stack.imgur.com/QxnrQ.png)
Avg Day Charging (Minute Domain)
[](https://i.stack.imgur.com/9As4m.png)
Avg Day Curve Fit
[](https://i.stack.imgur.com/4ucC9.png)
Random Generator Avg Day
[](https://i.stack.imgur.com/fMlfC.png)
Random Generator Yearly Output
[](https://i.stack.imgur.com/4CnLr.png)
My code for generating random EV sessions:
Function
```
import numpy as np
import pandas as pd
# Creates the Gaussian for number of random arrivals for a day (Scalar value)
def number_of_arrivals(mean, std_dev, size=1):
    num_arriv = np.array([])
    for i in range(len(mean)):
        num_arriv = np.append(num_arriv, np.random.default_rng().normal(mean[i], std_dev[i], size))
    num_arriv = np.sort(np.absolute(num_arriv))
    return (num_arriv)
# Creates a compound Gaussian with user-inputted size (vector length)
# A vector is expected for both the mean and the std dev
# [1 ... N] creates an Nth-order compound Gaussian, e.g. mean = [10, 20, 30], std_dev = [1, 2, 3] creates a tri-Gaussian
def compound_gauss(mean, std_dev, size):
    if len(mean) == len(std_dev):
        cpd_gauss = np.array([])
        for i in range(len(mean)):
            if size[i] <= 1:
                size[i] = 1
            cpd_gauss = np.append(cpd_gauss, np.random.default_rng().normal(mean[i], std_dev[i], int(size[i])))
        cpd_gauss = np.sort(cpd_gauss)
        return (cpd_gauss)
# Creates the Gaussians for arrival and leave times in minutes in a day
def random_ev_arrivals_times(arriv_mean, arriv_std_dev, cpd_mean, cpd_std_dev, charge_mean, charge_std_dev):
    # Size (vector length) is determined by the number_of_arrivals function
    size = number_of_arrivals(arriv_mean, arriv_std_dev)
    # Input the times when peaks occur
    arriv_time = compound_gauss(cpd_mean, cpd_std_dev, size)
    # Input the average duration of charging for each peak
    leave_time = compound_gauss(charge_mean, charge_std_dev, size)
    return [arriv_time, leave_time]
# Creates a dataframe with the arrival times, and adds a time column
# If multiple charging sessions occur they add up
def date_time_arrival_times(start_date, num_days, arriv_mean, arriv_std_dev, cpd_mean, cpd_std_dev, charge_mean, charge_std_dev):
    # Create a dataframe to be populated
    df = pd.DataFrame(columns=['arriv_time', 'leave_time'])
    # Timestamp for each day counter
    current_date = start_date
    # Populates the dataframe with random arrivals for each day
    # Each day is added one at a time in the for loop
    for i in range(num_days):
        # Create a temporary dataframe for that day that will be added to the main dataframe
        temp = pd.DataFrame(columns=['arriv_time'])
        # Call the random arrival time function for each day
        arriv_time, leave_time = random_ev_arrivals_times(arriv_mean, arriv_std_dev, cpd_mean, cpd_std_dev, charge_mean, charge_std_dev)
        # Converts random arrival times (minutes in a day) into timedeltas which are added to that day's timestamp,
        # turning the random arrival times into pandas timestamps
        temp['arriv_time'] = current_date + pd.to_timedelta(arriv_time, unit='min')
        # Converts random leave times (durations) into timestamps by adding the leave-time timedeltas to the arrival times
        temp['leave_time'] = temp['arriv_time'] + pd.to_timedelta(leave_time, unit='min')
        # Add the daily dataframe to the main dataframe
        df = pd.concat([df, temp])
        # Update the current day timestamp by adding one day
        current_date = current_date + pd.Timedelta(1, "d")
    # Ensures that arrival times are in order
    df = df.sort_values(by=['arriv_time'])
    # Reset index to the organized arrival times
    df = df.reset_index(drop=True)
    # Separate arrival times and leave times to create charging sessions
    df1 = df[['arriv_time']]
    # Set an arrival event to 1 (a charging session starting)
    df1['value'] = 1
    df1['time'] = df1['arriv_time']
    df2 = df[['leave_time']]
    # Set a leave event to -1 to offset an arrival and end a charging session
    df2['value'] = -1
    df2['time'] = df2['leave_time']
    # Combine the separated dataframes to create charging sessions
    ev = pd.concat([df1, df2])
    # Sort the combined dataframe by time
    ev = ev.sort_values(by=['time'])
    # Reset index to the organized times
    ev = ev.reset_index(drop=True)
    # Adds the 1s and -1s to get the number of charging sessions occurring at every time interval
    ev['fuzz'] = ev.value.cumsum()
    # 0 - 4 charging sessions can occur since there are 4 charging stations
    ev['fuzz'] = ev['fuzz'].clip(lower=0, upper=4)
    # Multiply by 5 kW, which is the power consumption for each EV charger
    ev['charge'] = ev['fuzz'] * 5
    # Organize columns
    ev = ev[['time', 'value', 'fuzz', 'charge']]
    return [df, ev]
```
Calling the function
```
arriv_mean = [6, 3, 1] # Average number of car arrivals
arriv_std_dev = [4, 2, 1] # Car arrivals standard deviation
cpd_mean = [558, 872, 1235] # Arrival times peaks
cpd_std_dev = [332, 358, 300] # Arrival times standard deviation
ev_arriv = np.array([])
start_date = pd.Timestamp(2023, 1, 1, 0) # Start date for the time deltas
charge_mean = [90, 90, 90] # Average length of charging session
charge_std_dev = [30, 30, 30] # Standard deviation of charging length
num_days = 365 # Number of days of outputted random data
# Call the function
df, ev = date_time_arrival_times(start_date, num_days, arriv_mean, arriv_std_dev,
cpd_mean, cpd_std_dev, charge_mean, charge_std_dev)
# Display dataframe
ev
```
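Following my professor's Poisson suggestion, here is a sketch I am considering: treat arrivals as a nonhomogeneous Poisson process whose intensity over the day has the three observed peaks, sampled by Lewis thinning (the weights, peak locations, and widths below are placeholders, not fitted values):

```python
import numpy as np

def tri_gauss_rate(t, means=(558.0, 872.0, 1235.0),
                   sds=(60.0, 60.0, 60.0), weights=(6.0, 3.0, 1.0)):
    """Arrival intensity (expected arrivals per minute) at minute t of the day.
    The weights are the expected number of arrivals contributed by each peak."""
    rate = 0.0
    for m, s, w in zip(means, sds, weights):
        rate += w * np.exp(-0.5 * ((t - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return rate

def sample_arrival_minutes(rate_fn, t_max=1440.0, rng=None):
    """Lewis thinning for a nonhomogeneous Poisson process on [0, t_max]."""
    rng = rng or np.random.default_rng()
    grid = np.linspace(0.0, t_max, 1441)
    lam_max = max(rate_fn(t) for t in grid)  # dominating constant rate
    t, arrivals = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_max:
            break
        if rng.uniform() < rate_fn(t) / lam_max:  # accept with prob rate(t)/lam_max
            arrivals.append(t)
    return np.array(arrivals)

arrivals = sample_arrival_minutes(tri_gauss_rate, rng=np.random.default_rng(42))
```

With this approach the daily arrival count is automatically Poisson with mean equal to the integral of the intensity, and the arrival minutes follow the peaked shape by construction.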
| Best way to generate random ev charging sessions using Python | CC BY-SA 4.0 | null | 2023-05-28T04:33:44.383 | 2023-05-28T04:33:44.383 | null | null | 388968 | [
"normal-distribution",
"python",
"random-generation",
"poisson-process",
"interarrival-time"
] |
617120 | 2 | null | 617114 | 4 | null | MAPE is just a calculation. The functions from `MLmetrics` might not be specifically designed for time series, but if you give the `MLmetrics::MAPE` function the true and predicted values, you will get the desired mean absolute percent error for the predictions made by your ARIMA model.
Note that MAPE, despite what seems to be a nice interpretation, [has some problem that are worth knowing](https://stats.stackexchange.com/a/299713/247274).
Since it is fine to use the function from `MLmetrics`, you do not have to write your own MAPE function. However, if you have defined the `y_true` and `y_pred` variables in your code, you seem to have given the right calculation (though it seems you have not defined `y_true` anywhere, which would explain your error).
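The calculation itself is tiny; as a language-neutral illustration (Python shown here; `MLmetrics::MAPE(y_pred, y_true)` computes the same quantity in R):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error; undefined when any y_true is 0."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs((y_true - y_pred) / y_true))

print(mape([100.0, 400.0], [150.0, 300.0]))  # 0.375
```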
| null | CC BY-SA 4.0 | null | 2023-05-28T04:43:26.420 | 2023-05-28T04:43:26.420 | null | null | 247274 | null |
617122 | 1 | null | null | 1 | 78 | I have a similar recurring use case at work whereby we want to understand attributes/features of datasets, such as 'how many customers talk about this' or 'how many customers do that', and generally the only knowns that I think are relevant are the population sizes.
In the current example, we have a dataset that contains 9636 survey responses and another, previously analysed dataset of customer feedback that has been categorised. The categorised dataset is different from the current survey dataset, but we are running on the assumption that the categories will be similar. What we are seeking to understand is the proportion of the 9636 surveys that can be categorised into 1 of the 12 categories from the second dataset.
Is there a general sample size calculation method for this type of use case, and if so, can someone please detail the method for calculating the sample size? I intend to do the calculation in R, so if anyone has any R code snippets that would help here, I would also greatly appreciate it.
Thank you in advance! This is a recurring requirement at work (calculating sample sizes that will give our stakeholders confidence that the results are relevant to the population in question). As a side note, my stakeholders often ask 'what sample size will be statistically significant' - could anyone additionally provide feedback on whether that is technically the right question to be asking for the use case(s) I have outlined in this post.
Thanks again. I appreciate any help or insight.
EDIT 1 - adding details to clarify: Dataset 1 is a list of reasons why customers might need to speak to our contact centre; there is 1 category with 12 levels. Dataset 2 is the survey responses, and we will classify the reasons for low scores using the same 12 levels from dataset 1 (the underlying assumption being that the reasons for being unhappy with the product or service that result in a call to our contact centre will be the same reasons customers state in a survey for being unhappy with the product or service).
EDIT 2 Dataset 2 is a set of survey responses that we need to categorise into (hopefully, if our assumptions hold up) 1 of 12 categories. Dataset 1 is where those 12 categories were initially defined, but it was not a survey dataset - it was a dataset of transcripts from contact centre calls (which we analysed in order to categorise the 'root cause of their issue'), so the customers in dataset 2 are unlikely, for the most part, to be the same customers as in dataset 1.
EDIT 3 Essentially what we are trying to answer/test is 'are the reasons for customer calls the same reasons why customers are detractors in an NPS survey'. I am not sure what test I would run for this, but my initial question is at the top of this process, namely: 'how do we calculate the size of the sample we need to analyse in dataset 2 in order to be confident that whatever we find is likely to be representative of the whole population (of survey respondents)?'
| Calculating Sample Size When Wanting To Analyse Proportions & Proportions Sizes are Unknown | CC BY-SA 4.0 | null | 2023-05-28T08:47:35.880 | 2023-06-02T18:52:37.193 | 2023-05-29T08:14:52.080 | 388990 | 388990 | [
"r",
"sampling"
] |
617123 | 2 | null | 605071 | 0 | null | First, I'm just a graduate student who has also only recently started reading about this topic, and I have been known to get things wrong the first time, so don't put too much weight on what I say. But since you asked for references, my answer may still be of some help.
I don't know where the quoted text is from, but I think we are mixing two viewpoints here.
The quoted text wants to use leverage scores to quantify the influence of each data point. If the score is high, the data point has a huge influence on the regression model, and deleting or perturbing it can change the solution completely.
So one can either (a) assume that the data point with a high leverage score is an outlier that arose from wrong measurements and should be excluded from further computations (deterministic method).
Or (b) one assumes that the measurements are right, and that especially this point should be considered, due to its great influence. This is important when applying randomized data reduction techniques like sampling. If you want to reduce the number of rows, say from $n$ to $k$, you could just draw $k$ row indices from $\{1,\ldots,n\}$ with replacement (chosen for the independence assumption; otherwise sampling without replacement would make more sense). This is uniform sampling, i.e. every row has probability $1/n$ of being drawn in one trial. This approach surely doesn't incorporate the different influence of each row.
A more sophisticated approach is to calculate the statistical leverage scores and then choose, $k$ times, the $i$-th row with the so-called importance sampling probability $p_i = \ell_i/m$ (the leverage scores of an $n \times m$ matrix of full column rank sum to $m$), and then rescale the row by an appropriate factor. So a row that is influential in terms of leverage scores has a much higher probability of being included in the sample. This is what the matrix $S_L$ does in your linked lecture notes. They then proceed to show, via a Chernoff bound, that $S_L$ embeds the data into a, let's say simplified, appropriate subspace which doesn't perturb the solution "too much".
Unfortunately, naively implemented, the computation of the leverage scores for $A\in \mathbb{R}^{n \times m}$ takes $O(nm^2)$ and thus is no faster than solving, for example, an overconstrained least-squares problem by classic methods like QR / SVD / normal-equation CG.
Thus one wants to approximate the leverage scores, or reduce the so-called coherence of the matrix.
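To make the sampling step concrete, here is a minimal numpy sketch (my own toy example, not code from the lecture notes): it computes exact leverage scores from the thin SVD, samples rows with probability proportional to them, rescales each sampled row by $1/\sqrt{k p_i}$, and compares the sampled least-squares solution to the full one.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 1000, 5, 200
A = rng.standard_normal((n, m))
b = rng.standard_normal(n)

# Exact leverage scores from the thin SVD: l_i = ||U_i||^2, and sum_i l_i = m.
U, _, _ = np.linalg.svd(A, full_matrices=False)
lev = (U ** 2).sum(axis=1)
p = lev / lev.sum()                    # importance sampling probabilities

# Draw k row indices with replacement and rescale each row by 1/sqrt(k p_i).
idx = rng.choice(n, size=k, p=p)
scale = 1.0 / np.sqrt(k * p[idx])
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
x_samp, *_ = np.linalg.lstsq(A[idx] * scale[:, None], b[idx] * scale, rcond=None)
print(np.linalg.norm(x_full - x_samp))  # small: the sketch preserves the LS solution
```

The rescaling keeps the sampled Gram matrix an unbiased estimate of $A^TA$, which is why the sampled solution stays close to the full one even for $k \ll n$.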
References:
- Introduction to importance sampling : https://arxiv.org/abs/1104.5557
- approximate leverage scores: https://arxiv.org/abs/1109.3843
- more randomized LS approaches : https://arxiv.org/abs/1712.08880
- extensive intro: https://www.cs.ubc.ca/~nickhar/Book2.pdf
I only included resources which I think are suited for a first reading at graduate level, and not the important work of those who came up with the ideas. Just ask if you need more references.
| null | CC BY-SA 4.0 | null | 2023-05-28T09:37:28.730 | 2023-05-28T10:18:42.440 | 2023-05-28T10:18:42.440 | 166179 | 166179 | null |
617124 | 1 | null | null | 2 | 16 | When we compute self-attention in a decoder model, we compute, for an embedding $x$, the tensors $Q = W_Q x, K = W_K x, V = W_V x$.
However, in the next step, we compute a dot product on the embedding dimension:
$(Q|K) = x^T W_Q^T W_K x$.
In this way, the matrices $W_Q, W_K$ only appear through the product $W_Q^T W_K$. So why do we need to train two matrices if all that matters is this product?
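For concreteness, here is a small numpy check (my own toy sizes and names) that the attention scores indeed depend only on the product $W_Q^T W_K$: reparametrizing $(W_Q, W_K) \to (R W_Q, R^{-T} W_K)$ for any invertible $R$ leaves every score unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_k, T = 8, 4, 5
X = rng.standard_normal((T, d_model))      # T token embeddings as rows
W_Q = rng.standard_normal((d_k, d_model))
W_K = rng.standard_normal((d_k, d_model))

# A second, different pair with the SAME product W_Q^T W_K:
R = rng.standard_normal((d_k, d_k)) + 3 * np.eye(d_k)  # some invertible matrix
W_Q2 = R @ W_Q
W_K2 = np.linalg.inv(R).T @ W_K

scores1 = (X @ W_Q.T) @ (X @ W_K.T).T      # entries x_i^T W_Q^T W_K x_j
scores2 = (X @ W_Q2.T) @ (X @ W_K2.T).T
print(np.max(np.abs(scores1 - scores2)))   # ~0: the scores only see the product
```

So the two parameterizations are observationally identical, which is exactly what the question is asking about.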
| Necessity of keys and queries in a decoder-only transformer model | CC BY-SA 4.0 | null | 2023-05-28T09:45:24.917 | 2023-05-28T09:45:24.917 | null | null | 388994 | [
"neural-networks",
"transformers",
"attention"
] |
617125 | 1 | 617314 | null | 3 | 55 | I read this post [How is the minimum of a set of IID random variables distributed?](https://stats.stackexchange.com/questions/220/how-is-the-minimum-of-a-set-of-iid-random-variables-distributed) where I can find how to compute the density distribution of the minimum between $N$ positive random variables. If the cumulative of these random variables is $F(x)$ the cumulative of the minimum between $N$ of these random variables is:
$$ \mathbb{P}(\text{min}\le x)=1-\big(1-F(x)\big)^N$$
From this result it is easy to compute the mean value of the minimum (simply using integration by parts):
$$
\overline{\text{min}}=\int_{0}^{\infty}x\,\frac{d \mathbb{P}(\text{min}\le x)}{dx}dx=\int_{0}^{\infty}\big(1-F(x)\big)^Ndx
$$
At this point I can compute $\overline{\text{min}}$ for a general cumulative distribution $F(x)$.
I now want to get the scaling of $\overline{\text{min}}$ for $N\rightarrow\infty$ when $F(x)=\frac{1}{r!}\int_{0}^{x}l^re^{-l}dl$ for $r\in\mathbb{Z}$ and $r>-1$. (A simple case is $r=0$, where the density of the variables is exponential; in this case the average min value scales as $1/N$.)
In the general case I can arrive to:
$$
\overline{\text{min}}=\int_{0}^{\infty}\left(1-\frac{1}{r!}\int_0^{x}l^re^{-l}dl\right)^Ndx
$$
but I don't know how to approximate it. For general $r$ it can be useful to approximate $\frac{l^re^{-l}}{r!}\simeq \frac{l^r}{r!}\propto l^r$, since the important contribution for the average minimum value will be $\ll1$. In this approximation my result is:
$$
\overline{\text{min}}=\int_{0}^{\infty}\left(1-\int_0^{x}l^rdl\right)^Ndx
$$ but I still can't figure out the scaling at large $N$.
The result should be:
$$
\overline{\text{min}}\simeq N^{-\frac{1}{r+1}}
$$
I am studying the following paper: [https://hal.science/jpa-00232897/document](https://hal.science/jpa-00232897/document) .
Here the authors find the scaling of the mean value of what I suppose is the minimum of the random variables, which in this case are $l_{ij}$, called “distances”. I cannot find the scaling found after Eq.(5), can anyone help me?
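For what it's worth, a quick Monte Carlo check (my own sketch) is consistent with the conjectured scaling: for gamma-distributed variables with shape $r+1$ (so $F$ is the CDF above), regressing $\log \overline{\text{min}}$ on $\log N$ gives a slope close to $-1/(r+1)$.

```python
import numpy as np

rng = np.random.default_rng(0)
r = 2                          # F is the Gamma(r+1) CDF; predicted slope -1/(r+1)
Ns = [100, 1000, 10000]
means = []
for N in Ns:
    # 2000 replications of the minimum of N gamma(r+1) draws
    mins = rng.gamma(shape=r + 1, size=(2000, N)).min(axis=1)
    means.append(mins.mean())

# fitted slope of log(mean min) against log N
slope = np.polyfit(np.log(Ns), np.log(means), 1)[0]
print(slope)   # close to -1/3 for r = 2
```

(This is only numerical evidence, of course, not the asymptotic argument I am after.)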
| Average value of the minimum between $N$ positive random variables | CC BY-SA 4.0 | null | 2023-05-28T09:54:05.723 | 2023-05-30T13:51:15.673 | 2023-05-28T11:30:30.640 | 388995 | 388995 | [
"probability",
"distributions"
] |
617126 | 1 | null | null | 1 | 16 | I'm confused as to how 'patience' works in Keras. As far as I know, if we set patience=10, then if the loss doesn't decrease significantly over the last 10 epochs, or even keeps increasing, training stops. But the results I got show otherwise.
```
#TRIAL
import numpy as np
import random as python_random
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import EarlyStopping

def reset_seeds():
    np.random.seed(0)
    python_random.seed(0)
    tf.random.set_seed(0)

reset_seeds()

model3 = Sequential()
model3.add(Dense(30, input_dim=Train_X2_Tfidf.shape[1], activation='sigmoid'))
model3.add(Dense(1, activation='sigmoid'))

opt = tf.keras.optimizers.RMSprop(learning_rate=0.0001)
model3.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
model3.summary()

es = EarlyStopping(monitor="val_loss", mode='min', patience=10)
history3 = model3.fit(Train_X2_Tfidf, Train_Y2, epochs=2000, verbose=1,
                      validation_split=0.2, batch_size=32, callbacks=[es])
```
the following results I got:
[](https://i.stack.imgur.com/bL0hI.png)
val_loss decreases up to epoch 1185 and shows a constant value for the following epochs; shouldn't training then stop at epoch 1195? But I found that it continued until epoch 1243.
[](https://i.stack.imgur.com/lJGsC.png)
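EDIT: to check my understanding of the mechanism, here is my own sketch of the counter logic I believe EarlyStopping uses with mode='min' and the default min_delta=0 (a re-implementation of the idea, not the Keras source): any strictly smaller val_loss, even by an amount too small to show up in the printed 4-decimal logs, resets the patience counter.

```python
# Sketch of EarlyStopping bookkeeping for mode='min', min_delta=0 (my names).
def stopped_epoch(losses, patience):
    best, wait = float("inf"), 0
    for epoch, loss in enumerate(losses):
        if loss < best:            # with min_delta=0, ANY decrease counts
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                return epoch       # training halts after this epoch
    return None                    # early stopping never triggered

# val_loss values that all *print* as 0.4170 but improve in the 7th decimal:
creeping = [0.4170 - 1e-7 * i for i in range(50)]
print(stopped_epoch(creeping, patience=10))   # None

# a truly constant val_loss does trigger the callback:
flat = [0.4170] * 50
print(stopped_epoch(flat, patience=10))       # 10
```

So is the explanation simply that my "constant" logged losses were still improving in decimal places the log doesn't show?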
| why does epoch sometimes not stop at the set patience? | CC BY-SA 4.0 | null | 2023-05-28T10:13:25.287 | 2023-05-28T12:51:40.983 | 2023-05-28T12:51:40.983 | 388875 | 388875 | [
"neural-networks",
"loss-functions",
"tensorflow",
"keras"
] |
617127 | 2 | null | 592633 | 0 | null | This sounds like a situation for paired testing. You have the performance of each technique on the first task, so take the difference in performance. Then do the same for the second task, third task…
This gives you $100$ differences that you can test for significance. If you want to show that the two techniques have equivalent performance, you must first define what you mean by equivalent, but once you do, various forms of equivalence testing could be appropriate, the easiest of which to understand is [two one-sided testing (TOST)](https://stats.stackexchange.com/a/500121/247274).
In my view, if technique A is better $60$ out of $100$ times and by about the same magnitude as B when B outperforms A, that makes it seem like A is performing better, but you might have a different sense of what constitutes equivalence (which is fine).
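As a sketch of what TOST could look like here (illustrative numbers, a large-sample normal approximation rather than a $t$ reference since $n = 100$, and an equivalence margin you must choose on subject-matter grounds):

```python
from statistics import NormalDist, mean, stdev
import random

random.seed(1)
# hypothetical paired differences, technique A minus technique B, one per task
d = [random.gauss(0.005, 0.05) for _ in range(100)]

delta = 0.02                          # declared equivalence margin (your choice)
n = len(d)
se = stdev(d) / n ** 0.5
p_lower = 1 - NormalDist().cdf((mean(d) + delta) / se)  # H0: true mean diff <= -delta
p_upper = NormalDist().cdf((mean(d) - delta) / se)      # H0: true mean diff >= +delta
p_tost = max(p_lower, p_upper)        # equivalence declared if p_tost < alpha
print(p_tost)
```

The margin $\delta$ is exactly the "definition of equivalent" you must commit to before running the test.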
| null | CC BY-SA 4.0 | null | 2023-05-28T11:49:55.717 | 2023-05-28T11:49:55.717 | null | null | 247274 | null |
617128 | 1 | null | null | 1 | 9 | Sorry if the question sounds stupid. I am new to this. Consider the following dataset:
```
| Month | Temperature |
| ----- | ----------- |
| Jan   | 10          |
| Feb   | 12          |
| Mar   | ?           |
| Apr   | 16          |
| May   | ?           |
| Jun   | ?           |
| Jul   | 22          |
| Aug   | ?           |
| Sep   | 18          |
| Oct   | ?           |
| Nov   | 14          |
| Dec   | 12          |
```
Now, can we evaluate "?" for this dataset? I cannot find documentation for the Kriging method for datasets that are not spatial.
| Can kriging method be used for dataset that is not spatial? | CC BY-SA 4.0 | null | 2023-05-28T11:55:10.410 | 2023-05-28T12:21:54.160 | 2023-05-28T12:21:54.160 | 22047 | 388999 | [
"mathematical-statistics",
"python",
"data-imputation",
"interpolation",
"kriging"
] |
617129 | 1 | null | null | 1 | 8 | Could you please help write down the exact equation?
It is clear for the GARCH part, but it is not clear how to add the ARFIMA(1,0,1) - here effectively just ARMA(1,1) - mean model to the equation specification.
Should we also write the formula for our distribution, which here is normal?
How does the equation change for the same model but with a Student-t distribution?
[](https://i.stack.imgur.com/BicZb.png)
| GARCH (sGARCH) with ARFIMA (ARIMA) model in Rugarch equation formula output | CC BY-SA 4.0 | null | 2023-05-28T11:55:35.943 | 2023-05-28T11:55:35.943 | null | null | 388998 | [
"regression",
"machine-learning",
"time-series",
"variance",
"garch"
] |
617130 | 1 | null | null | 0 | 9 | A spline of order $r$ (with $K$ knots) can be written as:
$$f(x) = \sum_{i=0}^{r-1}\beta_i x^{i} + \sum_{j=1}^K b_j(x-\kappa_{j})^{r-1}_+$$
A natural spline is defined as a spline that has order $2r$ inside $\left[ x_{(1)}, x_{(n)} \right]$, and is a polynomial of order $r$ outside of $\left[ x_{(1)}, x_{(n)} \right]$.
But what does the equation for a natural spline look like? If we take:
$$f_{\texttt{natural}}(x) = \sum_{i=0}^{r-1}\beta_i x^{i} + \sum_{j=1}^K b_j(x-\kappa_{j})^{2r-1}_+,$$
then this satisfies first extra condition to be a natural spline, however it only satisfies the second extra condition on $(-\infty, x_{(1)})$. I do not know how to "turn off" the $(x-\kappa_j)_+$ terms on $(x_{(n)}, \infty)$, they are a polynomial of order $2r$ there.
| Equation of a natural spline vs just a spline | CC BY-SA 4.0 | null | 2023-05-28T12:06:05.747 | 2023-05-28T12:06:05.747 | null | null | 342779 | [
"splines"
] |
617131 | 2 | null | 587261 | 0 | null | The rationale is as follows:
If you know nothing about the data other than the fact that there are $20$ instances of category $0$ for every one instance of category $1$, the most sensible prediction about the probability of a new observation belonging to category $1$ is $1/21$.
This kind of comparison to a naïve model is routine in other forms of modeling. For instance, the usual $R^2$ in OLS linear regression can be seen as a comparison of your model performance to the performance of a model that predicts the overall mean every time. The approach here is the same: use the [prior](https://stats.stackexchange.com/a/583115/247274) probability as your best guess of the posterior probability, since you have no features to help you make a better prediction of the posterior probability.
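As a quick illustration (numbers are hypothetical), the prior/base rate is the best constant probability forecast under log loss, which is why it is the natural naïve benchmark:

```python
import math

n0, n1 = 2000, 100                 # 20 negatives for every positive
prior = n1 / (n0 + n1)             # = 1/21

def log_loss(p):
    # mean negative log-likelihood of predicting the constant p for every case
    return -(n1 * math.log(p) + n0 * math.log(1 - p)) / (n0 + n1)

candidates = [0.01, prior, 0.10, 0.50]
best = min(candidates, key=log_loss)
print(best == prior)               # True: no other constant beats the base rate
```

The same holds under Brier score: the base rate is again the optimal constant forecast.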
| null | CC BY-SA 4.0 | null | 2023-05-28T12:07:13.573 | 2023-05-28T12:07:13.573 | null | null | 247274 | null |
617132 | 1 | 617155 | null | 1 | 45 | Suppose I have the Bayesian network in the figure and the corresponding conditional probability table for each node, where A and B are the hidden variables and C and D are the observed variables. What probabilistic inference algorithm can I use to get all the conditional probabilities in Table 1? Can I use the likelihood weighting sampling inference algorithm? If the network becomes the bottom one, is the likelihood weighting sampling inference algorithm still appropriate?
| Inference in Bayesian networks with hidden variables | CC BY-SA 4.0 | null | 2023-05-28T12:15:05.417 | 2023-05-30T04:02:35.740 | 2023-05-30T04:02:35.740 | 389001 | 389001 | [
"bayesian",
"inference",
"bayesian-network"
] |
617133 | 2 | null | 578324 | 0 | null | IT MEANS YOU ARE CRUSHIN’ IT, BUT WATCH OUT FOR OVERFITTING
If you are able to achieve a high true positive rate without having a high false positive rate, this means that your model is quite good at distinguishing between the two categories.
In your case, you seem to get a perfect true positive rate when the false positive rate is $2\%$. This sounds to me like a case where you achieve a high true positive rate while also having a low false positive rate.
If you plot the ROC curve across all false positive rates like a standard ROC curve plot does, you will see a very steep upward slope to the left that levels out at a perfect true positive rate for almost the entire plot. This corresponds with the area under the curve being close to one, and one indicates perfect separation between the classes.
There are always overfitting concerns when you do the kind of modeling that you are doing, and performance that seems “too good to be true” should lead you to be skeptical. However, the goal is to be able to separate your two categories. If you are able to get the nearly perfect separation indicated by the area under the ROC curve being close to one, this indicates good performance!
| null | CC BY-SA 4.0 | null | 2023-05-28T12:32:06.360 | 2023-05-28T12:32:06.360 | null | null | 247274 | null |
617134 | 1 | null | null | 1 | 35 | I am a bit confused about how people calculate p-value when calculating odds-ratios.
The log-odds ratio (LOR) for a contingency table with two entries is $L = \log \frac{p_{1}}{p_{0}}$ and has a consistent estimator using sampled frequencies: $\hat{L} = \log \frac{n_{1}}{n_{0}}$. This estimator has asymptotic standard error $\sqrt{\frac{1}{n_1} + \frac{1}{n_0}}$, which allows you to assign confidence intervals to the estimated LOR. If you also want to assign a p-value to the observed sample LOR, then you'd need the standard error under the null hypothesis of a LOR of zero, which in this case, since $n_1+n_0=N$ and $n_1 = n_0$ under the null, is equal to $\frac{2}{\sqrt{N}}$. This is independent of the population parameters since it only depends on the total number of samples, which makes the statistic pivotal. This means you can shift the distribution to zero to calculate probabilities under the null hypothesis of a LOR of zero, and assign p-values. No problems there.
However, the LOR for a contingency table with four entries is $L = \log \frac{p_{11}p_{00}}{p_{10}p_{01}}$ and has a consistent estimator using sampled frequencies: $\hat{L} = \log \frac{n_{11}n_{00}}{n_{10}n_{01}}$. This estimator has asymptotic standard error $\sqrt{\frac{1}{n_{11}} + \frac{1}{n_{00}} + \frac{1}{n_{01}} + \frac{1}{n_{10}}}$.
While this still allows you to construct a confidence interval, it is (if I understand correctly) no longer a pivotal statistic: the variance depends on the observed frequencies and thus the population parameters.
Still, I see people calculate p-values associated with nonzero LORs (see for example this discussion: [How to calculate the p.value of an odds ratio in R?](https://stats.stackexchange.com/questions/156861/how-to-calculate-the-p-value-of-an-odds-ratio-in-r)). How is that possible? Am I missing something? Are there hidden assumptions?
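To be concrete, the calculation I see people do looks like this Wald-style recipe, where the *estimated* standard error is simply plugged into a $z$ statistic (my own made-up $2\times 2$ table):

```python
from math import log, sqrt
from statistics import NormalDist

n11, n10, n01, n00 = 30, 10, 15, 45       # made-up 2x2 table
lor = log(n11 * n00 / (n10 * n01))        # estimated log-odds ratio
se = sqrt(1/n11 + 1/n10 + 1/n01 + 1/n00)  # plug-in (Woolf) standard error
z = lor / se                              # Wald statistic
p = 2 * (1 - NormalDist().cdf(abs(z)))    # two-sided p-value
print(lor, se, p)
```

If I understand correctly, the implicit assumption is asymptotic: $(\hat{L} - L)/\widehat{\text{SE}}$ is approximately standard normal in large samples even though $\widehat{\text{SE}}$ is data-dependent, so the statistic is approximately pivotal - but that is exactly the step I would like spelled out.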
| How to calculate the p-value of a log-odds ratio, given that the variance depends on the observed frequencies? | CC BY-SA 4.0 | null | 2023-05-28T12:40:03.703 | 2023-05-28T19:38:12.683 | null | null | 236452 | [
"confidence-interval",
"variance",
"p-value",
"odds-ratio",
"contingency-tables"
] |
617137 | 1 | 617140 | null | 0 | 24 | I have this exercise where [](https://i.stack.imgur.com/WfMjY.png) is the answer. Using this annex(text at the top means tenths of the x), [](https://i.stack.imgur.com/5RJ06.png)the value becomes [](https://i.stack.imgur.com/Qz7fD.png) . What I don't understand is how it becomes 2,58. Our first answer is 0,495, so I'm looking on the left searching the 0,4 table. After that, I'm looking at the tenths of our answer, but there is no number equivalent to 2,58. Is the exercise wrong or am I not understanding the annex right?
| Not fully understanding gaussian-laplace table | CC BY-SA 4.0 | null | 2023-05-28T13:43:19.203 | 2023-05-28T21:24:11.137 | null | null | 389004 | [
"mathematical-statistics",
"laplace-distribution",
"tables"
] |
617139 | 1 | null | null | 4 | 181 | In the first derivation of dL/dW, I use the rule for the derivative of a constant with respect to a matrix and then apply the chain rule.
\begin{gather*}
Y\ =\ XW\ +\ B\\
X=\begin{bmatrix}
x_{0} & x_{1} & x_{2}
\end{bmatrix} ,\ Y=\begin{bmatrix}
y_{0} & y_{1}
\end{bmatrix} ,\ W=\begin{bmatrix}
w_{00} & w_{01}\\
w_{10} & w_{11}\\
w_{20} & w_{21}
\end{bmatrix} ,\ B=\begin{bmatrix}
b_{0} & b_{1}
\end{bmatrix}
\end{gather*}
When it comes to the derivative with respect to a vector, the [rules](https://en.wikipedia.org/wiki/Matrix_calculus) I found assume column vectors. Are the rules the same for row vectors? (For [numerator layout](https://en.wikipedia.org/wiki/Matrix_calculus#Layout_conventions), dY/dL is a column vector. However, they don't say that Y has to be a column vector, instead they say "If the numerator y is of size m and the denominator x of size n")
\begin{gather*}
\left(\frac{\partial L}{\partial W}\right)^{T} =\begin{bmatrix}
\frac{\partial L}{\partial w_{00}} & \frac{\partial L}{\partial w_{00}}\\
\frac{\partial L}{\partial w_{10}} & \frac{\partial L}{\partial w_{11}}\\
\frac{\partial L}{\partial w_{20}} & \frac{\partial L}{\partial w_{21}}
\end{bmatrix} =\begin{bmatrix}
\color{red}{\frac{\partial L}{\partial Y}}\frac{\partial Y}{\partial w_{00}} & \color{red}{\frac{\partial L}{\partial Y}}\frac{\partial Y}{\partial w_{01}}\\
\color{red}{\frac{\partial L}{\partial Y}}\frac{\partial Y}{\partial w_{10}} & \color{red}{\frac{\partial L}{\partial Y}}\frac{\partial Y}{\partial w_{11}}\\
\color{red}{\frac{\partial L}{\partial Y}}\frac{\partial Y}{\partial w_{20}} & \color{red}{\frac{\partial L}{\partial Y}}\frac{\partial Y}{\partial w_{21}}
\end{bmatrix}\\
\\
Focus\ on\ one\ term:\\
y_{0} \ =\ w_{00} x_{0} +w_{10} x_{1} +w_{20} x_{2} \ +b_{0}\\
y_{1} \ =\ w_{01} x_{0} +w_{11} x_{1} +w_{21} x_{2} +b_{1}\\
\\
\frac{\partial Y}{\partial w_{00}} =\ \begin{bmatrix}
\frac{\partial y_{0}}{\partial w_{00}}\\
\frac{\partial y_{1}}{\partial w_{00}}
\end{bmatrix} =\begin{bmatrix}
x_{0}\\
0
\end{bmatrix}\\
\color{red}{\frac{\partial L}{\partial Y}}\frac{\partial Y}{\partial w_{00}} =\ \begin{bmatrix}
\color{red}{\frac{\partial L}{\partial y_{0}}} & \color{red}{\frac{\partial L}{\partial y_{1}}}
\end{bmatrix}\begin{bmatrix}
x_{0}\\
0
\end{bmatrix} =\color{red}{\frac{\partial L}{\partial y_{0}}} \ x_{0} \ +\ \color{red}{\frac{\partial L}{\partial y_{1}}} *\ 0\ =\color{red}{\frac{\partial L}{\partial y_{0}}} \ x_{0}\\
\\
\frac{\partial Y}{\partial w_{10}} =\ \begin{bmatrix}
\frac{\partial y_{0}}{\partial w_{10}}\\
\frac{\partial y_{1}}{\partial w_{10}}
\end{bmatrix} \ =\begin{bmatrix}
x_{1}\\
0
\end{bmatrix} ,\ \frac{\partial Y}{\partial w_{01}} =\ \begin{bmatrix}
\frac{\partial y_{0}}{\partial w_{01}}\\
\frac{\partial y_{1}}{\partial w_{01}}
\end{bmatrix} \ =\begin{bmatrix}
0\\
x_{0}
\end{bmatrix} ,\ \frac{\partial Y}{\partial w_{11}} =\ \begin{bmatrix}
\frac{\partial y_{0}}{\partial w_{11}}\\
\frac{\partial y_{1}}{\partial w_{11}}
\end{bmatrix} \ =\begin{bmatrix}
0\\
x_{1}
\end{bmatrix} ,\\
\frac{\partial Y}{\partial w_{20}} =\ \begin{bmatrix}
\frac{\partial y_{0}}{\partial w_{20}}\\
\frac{\partial y_{1}}{\partial w_{20}}
\end{bmatrix} \ =\begin{bmatrix}
x_{2}\\
0
\end{bmatrix} ,\ \frac{\partial Y}{\partial w_{21}} =\ \begin{bmatrix}
\frac{\partial y_{0}}{\partial w_{21}}\\
\frac{\partial y_{1}}{\partial w_{21}}
\end{bmatrix} \ =\begin{bmatrix}
0\\
x_{2}
\end{bmatrix}\\
\\
Finally:\\
\left(\frac{\partial L}{\partial W}\right)^{T} =\begin{bmatrix}
\frac{\partial L}{\partial y_{0}} \ x_{0} & \frac{\partial L}{\partial y_{1}} \ x_{0}\\
\frac{\partial L}{\partial y_{0}} \ x_{1} & \frac{\partial L}{\partial y_{1}} \ x_{1}\\
\frac{\partial L}{\partial y_{0}} \ x_{2} & \frac{\partial L}{\partial y_{1}} \ x_{2}
\end{bmatrix} =\begin{bmatrix}
x_{0}\\
x_{1}\\
x_{2}
\end{bmatrix}\begin{bmatrix}
\frac{\partial L}{\partial y_{0}} & \frac{\partial L}{\partial y_{1}}
\end{bmatrix} =\ X^{T}\color{red}{\frac{\partial L}{\partial Y}}
\end{gather*}
Is dY/dW, the derivative of a vector with respect to a matrix, a third-order tensor? Am I allowed to do the following derivation? (writing a 3d tensor as a vector of 2d matrices)
\begin{gather*}
\frac{\partial L}{\partial W} =\ \color{red}{\frac{\partial L}{\partial Y}}\frac{\partial Y}{\partial W} =\begin{bmatrix}
\color{red}{\frac{\partial L}{\partial y_{0}}} & \color{red}{\frac{\partial L}{\partial y_{1}}}
\end{bmatrix}\begin{bmatrix}
\frac{\partial y_{0}}{\partial W}\\
\frac{\partial y_{1}}{\partial W}
\end{bmatrix} =\color{red}{\frac{\partial L}{\partial y_{0}}}\frac{\partial y_{0}}{\partial W} +\color{red}{\frac{\partial L}{\partial y_{1}}}\frac{\partial y_{1}}{\partial W}\\
=\ \color{red}{\frac{\partial L}{\partial y_{0}}}\begin{bmatrix}
\frac{\partial y_{0}}{\partial w_{00}} & \frac{\partial y_{0}}{\partial w_{01}}\\
\frac{\partial y_{0}}{\partial w_{10}} & \frac{\partial y_{0}}{\partial w_{11}}\\
\frac{\partial y_{0}}{\partial w_{20}} & \frac{\partial y_{0}}{\partial w_{21}}
\end{bmatrix}^{T} +\color{red}{\frac{\partial L}{\partial y_{1}}}\begin{bmatrix}
\frac{\partial y_{1}}{\partial w_{00}} & \frac{\partial y_{1}}{\partial w_{01}}\\
\frac{\partial y_{1}}{\partial w_{10}} & \frac{\partial y_{1}}{\partial w_{11}}\\
\frac{\partial y_{1}}{\partial w_{20}} & \frac{\partial y_{1}}{\partial w_{21}}
\end{bmatrix}^{T}\\
=\ \color{red}{\frac{\partial L}{\partial y_{0}}}\begin{bmatrix}
x_{0} & 0\\
x_{1} & 0\\
x_{2} & 0
\end{bmatrix}^{T} +\color{red}{\frac{\partial L}{\partial y_{1}}}\begin{bmatrix}
0 & x_{0}\\
0 & x_{1}\\
0 & x_{2}
\end{bmatrix}^{T} =\begin{bmatrix}
\color{red}{\frac{\partial L}{\partial y_{0}}} x_{0} & \color{red}{\frac{\partial L}{\partial y_{1}}} x_{0}\\
\color{red}{\frac{\partial L}{\partial y_{0}}} x_{1} & \color{red}{\frac{\partial L}{\partial y_{1}}} x_{1}\\
\color{red}{\frac{\partial L}{\partial y_{0}}} x_{2} & \color{red}{\frac{\partial L}{\partial y_{1}}} x_{2}
\end{bmatrix}^{T}\\
=\ \left( X^{T}\color{red}{\frac{\partial L}{\partial Y}}\right)^{T}
\end{gather*}
Edit: Found a [similar question](https://math.stackexchange.com/a/4459069), but the final answer is different.
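Edit 2: for completeness, here is my own numerical check of the final formula, using the convention that $\frac{\partial L}{\partial W}$ has the same shape as $W$ (so that it equals $X^{T}\frac{\partial L}{\partial Y}$ with no extra transpose). It compares that expression to central finite differences for an arbitrary scalar loss:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1, 3))      # row vector x, as in the question
W = rng.standard_normal((3, 2))
B = rng.standard_normal((1, 2))

def loss(W):
    Y = X @ W + B
    return 0.5 * np.sum(Y ** 2)      # any scalar loss; for this one dL/dY = Y

Y = X @ W + B
analytic = X.T @ Y                   # candidate: X^T (dL/dY), laid out in W's shape

# central finite differences, one weight at a time
eps = 1e-6
numeric = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        E = np.zeros_like(W)
        E[i, j] = eps
        numeric[i, j] = (loss(W + E) - loss(W - E)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # agrees up to floating-point error
```

So the remaining transposes in the two derivations, and the difference from the linked answer, appear to be purely layout (numerator vs denominator) conventions.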
| Derivative of vector Y=XW with respect to matrix W | CC BY-SA 4.0 | null | 2023-05-28T14:23:38.527 | 2023-06-02T22:38:09.193 | 2023-06-02T22:38:09.193 | 389006 | 389006 | [
"machine-learning",
"neural-networks",
"matrix-calculus"
] |
617140 | 2 | null | 617137 | 2 | null | Note to readers: On this forum, $\Phi(x)$ usually denotes the cumulative distribution function of the standard normal random variable, and thus $\Phi(x) = \Phi_{\text{CDF}}(x) = \int_{-\infty}^x \frac{1}{\sqrt{2\pi}}e^{-t^2/2} \mathrm dt$, which increases from $0$ at $x=-\infty$ to $1$ at $x=\infty$. In contrast, the OP's notation uses $\Phi(x)$ (I will call it $\Phi_{\text{OP}}(x)$) to mean $\int_0^x \frac{1}{\sqrt{2\pi}}e^{-t^2/2} \mathrm dt$, which increases from $0$ at $x=0$ to $0.5$ at $x=\infty$. Thus, $\Phi_{\text{CDF}}(x) = 0.5+ \Phi_{\text{OP}}(x)$. The table shows the values of $\Phi_{\text{OP}}(x)$.
You need to find the value of $x$ for which the "table entry" is $0.4950$. Unfortunately, there is no such entry; the closest that we can come is $x=2.57$ for which the table entry is $0.4949$ or $x=2.58$ for which the table entry is $0.4951$. What is really needed is the smallest value of $x$ for which we are guaranteed that the "table entry" is $0.495$ or larger. By the way, that's an exact number, no round-off etc: if you write the number as a decimal, all the digits after that $5$ are 0; $0.495000000000000\cdots$. Now, since $x=2.58$ clearly satisfies the desired criterion $(0.4951 > 0.495)$, we choose that value as the answer.
An alternative would be to do linear interpolation and say that $x=2.575$, midway between $x=2.57$ and $x=2.58$, is the right answer, but this would be incorrect. The value of $x$ at which $\Phi_{\text{CDF}}(x) = 0.9950$, denoted $x_{0.9950}$, is just a little smaller than $2.58$. According to the tables on pages 968-971 of Abramowitz and Stegun, Handbook of Mathematical Functions, $\Phi_{\text{CDF}}(2.58) = 0.9950599642\cdots$ (and so $\Phi_{\text{OP}}(2.58) = 0.4950599642\cdots$), and so the exact value of $x_{0.9950}$ is definitely larger than the $2.575$ value obtained by linear interpolation.
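For readers who prefer software to tables, Python's standard library gives the quantile directly, confirming that the linearly interpolated $2.575$ undershoots:

```python
from statistics import NormalDist

x = NormalDist().inv_cdf(0.995)   # solves Phi_CDF(x) = 0.995
print(round(x, 4))                # 2.5758
```

So $x_{0.9950} \approx 2.5758$, between the interpolated $2.575$ and the safe table answer $2.58$.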
| null | CC BY-SA 4.0 | null | 2023-05-28T14:55:04.323 | 2023-05-28T21:24:11.137 | 2023-05-28T21:24:11.137 | 6633 | 6633 | null |
617141 | 1 | null | null | -2 | 17 | I started to talk to a girl whom I used to go to high school with on June 3rd 2013. We both share the same year and class of of 2003. She lives in Austin , Tx and I live in San Antonio, TX.
Talking to her prompted the recalling of one of her ex boyfriends, Issac. During this time, his place of residency is in Seattle Tx.
Since June 3rd 2013 , I have been thinking about Issac due to something I won’t go into .
It is June 7th, a Friday, and I'm driving to New Braunfels, TX to go visit a friend of mine, Matt, along with his wife, Sarah, whom he recently married on April 27, 2013.
Both Matt and Sarah wanted to go to a brand-new, recently opened restaurant named Willy's, but the wait was too long and it was packed.
We decided to go to a different one -
BJ's, to be exact. It was walking distance from the theatre, which was a plus since we also had plans to make an 8:15 showing of Star Trek Part 2.
As we put our debit cards in the cashier's wallet, I was still heavy in heart, with Issac's face etched in my mind, as he serves as an archetype of wrongs/forgiveness.
I excused myself to go to the restroom while we waited on the waiter to pick up our cards.
5 seconds after I begin walking , something caught my eye . 20 feet in front of me, slightly to the left .
It was Issac.
He said he and some buds where there for a wedding
Calculate the probability of running into someone you've been thinking of for four days but haven't seen in 10 years, at a BJ's restaurant in New Braunfels, TX, right before you leave.
Data: you live in San Antonio tx
the restaurant is in New Braunfels
Issac lives in Seattle
loni lives in Austin
Time : 7:30 pm
Start date and time of thinking about Issac: 06/03/2013 @0900
Date and time running into Issac : 06/07/2013 @1930
| This is complex, try to follow along | CC BY-SA 4.0 | null | 2023-05-28T14:59:47.840 | 2023-05-28T14:59:47.840 | null | null | 389009 | [
"statistical-significance"
] |
617142 | 1 | null | null | 0 | 7 | Long story short, I'm seeing in the literature that linear instrumental variables models are identifiable, even in the presence of unobserved confounders. The unobserved confounding aspect befuddles me, since it is not clear where this insight came from.
Briefly, given a linear instrumental variables setup where
$X = Z\beta + U\theta + \epsilon, \; \epsilon \sim N(0, \sigma_{\epsilon})$
$Y = X\alpha + U\phi + \delta, \; \delta \sim N(0, \sigma_{\delta})$
where $Z\in \mathbb{R}^z$ are instruments, $X \in \mathbb{R}^x$ are the endogenous regressors (treatments), $Y \in \mathbb{R}^y$ are the outcomes and $U \in \mathbb{R}^u$ are the unobserved confounders. In the instrumental variables setup, the average causal effect $\mathbb{E}[Y|do(x)]$ is of interest.
If $U$ is observed, I can see how this works out. But it isn't clear to me how an unobserved $U$ gets dropped / marginalized out in the linear setting, and I have not been successful finding the original proof, whether it be in Bowden + Turkington 1984, Pearl 2008 or Rubin + Imbens 2015. Any references or pointers would be appreciated.
P.S. I just realized that there are very few causality-related questions on this stack-exchange - if there are recommendations on a more appropriate forum, those are also welcome.
P.P.S. Thanks to @kjetilbhalvorsen , I was able to find a [similar question](https://stats.stackexchange.com/a/550944/79569) on stats.stackexchange -- $X$ can be treated as a collider.
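For anyone who lands here later: a quick simulation of the model above with scalar variables (my own toy numbers) shows the point numerically - OLS of $Y$ on $X$ is biased by the unobserved $U$, while the IV (Wald/2SLS) ratio recovers the causal coefficient $\alpha$ without ever observing $U$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
alpha = 2.0                        # true causal effect of X on Y

Z = rng.standard_normal(n)         # instrument: affects X, not Y directly
U = rng.standard_normal(n)         # unobserved confounder
X = Z + U + rng.standard_normal(n)
Y = alpha * X + U + rng.standard_normal(n)

ols = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)   # confounded: tends to alpha + 1/3 here
iv = np.cov(Z, Y)[0, 1] / np.cov(Z, X)[0, 1]   # Wald/2SLS ratio: tends to alpha
print(ols, iv)
```

The key identity is $\operatorname{cov}(Z, Y) = \alpha \operatorname{cov}(Z, X)$, which holds because $Z$ is independent of $U$ and of the noise terms - so $U$ drops out of the ratio.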
| Instrumental variable identifiability in the presence of unobserved confounders | CC BY-SA 4.0 | null | 2023-05-24T13:35:34.817 | 2023-05-31T21:01:09.437 | 2023-05-31T21:01:09.437 | 11887 | 79569 | [
"least-squares",
"instrumental-variables"
] |
617143 | 1 | null | null | 2 | 45 | I am doing some research on a provider's network geographical distribution. If my data looks like this:
|Country |Quantity |Continent |
|-------|--------|---------|
|Ireland |5 |Europe |
|Singapore |9 |Asia |
|Canada |25 |North America |
|UK |43 |Europe |
|USA |50 |North America |
but for 70 countries and 2,000 servers. How could I best visually display this? The relationship I'm trying to show is the potentially even or uneven geographic distribution of servers, while also taking into consideration the fact that the USA would have more servers than a country the size of Ireland or Hungary. This seems like the least complicated way of controlling for size rather than getting into population, etc.
Would a Lorenz curve or Gini coefficient be meaningful in this instance?
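For concreteness, here is how a Gini coefficient over the server counts could be computed (a sketch on the five example countries above; whether it is meaningful here depends on what "even distribution" should mean, e.g. per country vs. per capita):

```python
def gini(xs):
    # Gini via the rank formula on ascending-sorted values (1-based ranks):
    # G = 2 * sum_i i * x_(i) / (n * sum_i x_i) - (n + 1) / n
    xs = sorted(xs)
    n, total = len(xs), sum(xs)
    return 2 * sum(i * x for i, x in enumerate(xs, 1)) / (n * total) - (n + 1) / n

counts = [5, 9, 25, 43, 50]        # Ireland, Singapore, Canada, UK, USA
print(round(gini(counts), 3))      # 0.376 - moderate concentration
```

A value of 0 would mean every country hosts the same number of servers, 1 would mean one country hosts them all.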
| Best type of chart or plot to visualize geographic distribution of servers | CC BY-SA 4.0 | null | 2023-05-28T15:12:53.013 | 2023-05-31T17:31:43.197 | null | null | 389010 | [
"data-visualization"
] |
617144 | 1 | null | null | 1 | 17 | After making a neural network using ReLU as the activation function throughout, I had a look at the input layer activations and noticed that about 10% of the neurons are dead on initialization (never activated). At first I assumed it was caused by the large magnitude of my inputs and/or large input layer weights, but scaling those down made no difference (which in hindsight seems obvious for ReLU).
Then I realized that the input feature vectors have high inter-example correlation, and that the linear part of the input layer is essentially taking the dot/inner product of this correlated bundle of vectors, with other random vectors. From this perspective, it seems unsurprising that some of those random vectors will have a negative dot product with the input vectors from the entire example set, and that there's nothing I can do to fix this other than changing the activation function, or de-correlating the input feature vectors by preprocessing.
Am I missing something, or are ReLU-activated input neurons unsuitable when the dataset's inputs are highly correlated between examples?
Or should I perhaps just subtract the population mean input from all the input vectors to at least center the population at the origin? (Or perhaps learn a bias parameter for each input feature before feeding it in?)
| Is ReLU activation function unsuitable for input layer if the input data has high inter-example correlation? | CC BY-SA 4.0 | null | 2023-05-28T15:16:03.717 | 2023-05-28T15:16:03.717 | null | null | 49397 | [
"neural-networks",
"data-preprocessing",
"activation-function",
"relu"
] |
617145 | 1 | null | null | 0 | 8 | In semi-supervised learning, when a portion of unlabelled data is combined with labelled data during the training process, I'm wondering how one can perform inference specifically on the unlabelled data used for training. What are the recommended strategies or techniques for conducting inference on this subset of unlabelled data?
Additionally, considering a scenario where there is a substantial amount of unlabelled data available, how do you determine the appropriate dataset size to utilize in a semi-supervised training procedure? Are there any established methodologies or best practices for defining the size of the dataset used in such scenarios?
| Inference on Unlabeled Data in Semi-Supervised Learning and Determining Dataset Size | CC BY-SA 4.0 | null | 2023-05-28T15:18:20.533 | 2023-05-28T15:34:56.320 | 2023-05-28T15:34:56.320 | 362671 | 386413 | [
"methodology",
"semi-supervised-learning"
] |
617146 | 1 | null | null | 1 | 13 | I am trying to combine GNN and CNN in my model.
Every node of my graph has 2d space coordinates, so as a whole it's like an irregular 2d mesh.
I think we can't use deep GNNs due to oversmoothing. So I am thinking of sampling this 2d mesh into a regular one so that I can use CNNs.
So the first part of the model is a GNN, whose information is finally aggregated into a new four-node rectangle.
Then I create an image from this rectangle. All pixels are linearly interpolated from the four nodes.
Finally come the CNNs.
What I am not sure about is that my image is created from the four nodes, but it is not obvious that the interpolation carries gradient information. Also, my convolution kernels are not directly related to those four nodes. I am afraid that the path of the gradients is blocked.
So how can I connect these two parts into an unblocked model, so that the gradients can propagate without problems?
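For what it's worth, linear interpolation from the four nodes is itself a linear (hence differentiable) map, so gradients can flow back to the node features; the backward pass is just the transpose of the interpolation matrix. A minimal numpy sketch (hypothetical shapes and names):

```python
import numpy as np

H = W_ = 8  # output image size
# Bilinear weights: pixel (i, j) = sum_k A[i*W_+j, k] * corners[k]
ys, xs = np.meshgrid(np.linspace(0, 1, H), np.linspace(0, 1, W_), indexing="ij")
A = np.stack([(1 - ys) * (1 - xs),   # top-left corner weight
              (1 - ys) * xs,         # top-right
              ys * (1 - xs),         # bottom-left
              ys * xs], axis=-1).reshape(-1, 4)

corners = np.array([0.0, 1.0, 2.0, 3.0])   # node features coming from the GNN
image = (A @ corners).reshape(H, W_)       # forward pass: interpolation

dL_dimage = np.ones(H * W_)                # pretend upstream CNN gradient
dL_dcorners = A.T @ dL_dimage              # backward pass: just A^T
print(dL_dcorners)
```

Any autodiff framework (PyTorch, TensorFlow, JAX) will derive this backward pass automatically as long as the interpolation is written with differentiable tensor operations, so the gradient path is not blocked.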
| How can I put GNN and CNN together? | CC BY-SA 4.0 | null | 2023-05-28T15:22:21.797 | 2023-05-28T15:22:21.797 | null | null | 368202 | [
"neural-networks",
"conv-neural-network",
"graph-neural-network"
] |
617147 | 2 | null | 617143 | 0 | null | Perhaps you could make a xy scatter plot, with number of servers on the y-axis and some useful comparison property on the x-axis (population size, internet users, GDP, etc).
Hans Rosling's graphs may provide inspiration:
[https://duckduckgo.com/?t=ffab&q=Hans+Rosling+graphs&atb=v101-1&iax=images&ia=images](https://duckduckgo.com/?t=ffab&q=Hans+Rosling+graphs&atb=v101-1&iax=images&ia=images)
Another possibility is to go with a map, but distort it by something, for example population size:
[https://medium.com/google-news-lab/tilegrams-make-your-own-cartogram-hexmaps-with-our-new-tool-df46894eeec1](https://medium.com/google-news-lab/tilegrams-make-your-own-cartogram-hexmaps-with-our-new-tool-df46894eeec1)
You could make one map scaled by the number of servers, and compare it to one scaled by the population size.
| null | CC BY-SA 4.0 | null | 2023-05-28T15:34:20.260 | 2023-05-28T15:34:20.260 | null | null | 9496 | null |
617149 | 2 | null | 243564 | 0 | null | If you manage to use a Kalman filter, you can marginalize or optimize out the state at each time analytically. Thus the remaining likelihood is much simpler, having only the ARMA process variables, i.e., tens of parameters.
If you use the direct variables, you have one (or more) parameters per state, so if your time series has 1000 entries, you have a 1000-dimensional likelihood.
High-dimensional spaces are hard to explore.
| null | CC BY-SA 4.0 | null | 2023-05-28T15:39:53.557 | 2023-05-28T15:39:53.557 | null | null | 9496 | null |
617150 | 1 | null | null | 0 | 30 | I have a dataset with 96 records in the following format:
```
State Period WHC WVC
Aguascalientes 2010-2011 79,333 118,192
Aguascalientes 2015-2016 79,802 60,427
Aguascalientes 2020-2021 89,301 73,652
Baja California 2010-2011 178,822 247,565
Baja California 2015-2016 169,765 101,350
Baja California 2020-2021 174,783 124,263
```
'State' is one of the 32 Mexican states; 'Period' is one of three sample periods; 'WHC' is a count of women with moderate-severe food insecurity; and 'WVC' is a count of women experiencing violence.
A Pearson correlation run in Python against all 96 records for WHC and WVC yields the following:
```
PearsonRResult(statistic=0.8286902722280343, pvalue=2.0050767218356561e-25)
```
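For reproducibility, here is how such a result is obtained with SciPy; the numbers below use only the six rows shown above, not the full 96-record dataset, so the values will differ from the output quoted:

```python
from scipy.stats import pearsonr

# Stand-ins for the WHC/WVC counts, taken from the six rows shown above
whc = [79333, 79802, 89301, 178822, 169765, 174783]
wvc = [118192, 60427, 73652, 247565, 101350, 124263]

r, p = pearsonr(whc, wvc)
print(r, p)
```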
With this information, I believe I am correct in asserting:
- There is a strong (0.83) linear relationship between hunger and violence nationally.
- It is extremely unlikely the relationship occurred by chance (null hypothesis rejected).
When I run regression on the data at a state level (three samples each) I see as expected a wide range of r-values, and only three of the state samples have a p-value < .05.
Couple of questions:
- With more data collected (i.e. filling in the gaps between years), would we expect to see more significant correlations at a state level?
- Would we expect with 95% confidence the strength of those
relationships (slope of blue line) to fall within the shaded interval below (default value regplot)?
- Do I need to think about this differently since the samples were taken at different times?
[](https://i.stack.imgur.com/1jsvZ.png)
Of course I'm ignoring confounding variables here (like poverty) that will be part of the discussion.
Edit: responding to @ute comments below
Thanks for the good questions - they made me think more about this and do some digging. The states are very different in size of course, with the largest being 14% of the total population (across all three time periods) and the smallest being 0.6%. I have rates available; however, if I use those, then I would be treating all the sample sizes as equal. Running Pearson on the rates I get a very weak correlation (r=.16, p=.11), but that would shift the focus to analyzing rate differences by state which is not my research question.
As you suggested, there is indeed an interesting variance in the data by time, with total violence reports in 2010 more than 2x of those in 2015 (hunger does not follow this pattern):
[](https://i.stack.imgur.com/l2KF4.png)
I checked to see if this was an outlier in the data, but the effect is seen across all the states, leading me to think it is either a measurement change or some other less likely phenomenon:
[](https://i.stack.imgur.com/PkHML.png)
Finally I reran Pearson against all three time periods (on total counts) to see what I could learn, and all three produced strong, significant correlations:
```
Period r-value p-value
2010-2011 0.957437 9.733679e-18
2015-2016 0.871691 8.366862e-11
2020-2021 0.901341 1.994013e-12
```
Am I on the right track with my method and findings?
Edit: Responding to second round of comments. I can see that the larger states impact the Pearson results (did it by hand to see), but conversely considering just the rates seems equally problematic and vulnerable to small sample variance. I tried two approaches:
- Run Spearman instead of Pearson by period to reduce the impact of outliers; this produces slightly weaker but still strong, significant correlations.
- Add weights by multiplying rate x count, then running Pearson and Spearman by period. Now results are further weakened.
Here are the results:
```
Pearson by Period - Counts
r-value p-value
Period
2010-2011 0.957437 9.733679e-18
2015-2016 0.871691 8.366862e-11
2020-2021 0.901341 1.994013e-12
Spearman by Period - Counts
r-value p-value
Period
2010-2011 0.920088 9.605107e-14
2015-2016 0.874633 6.030124e-11
2020-2021 0.874267 6.284060e-11
Pearson by Period - Counts x Rates
r-value p-value
Period
2010-2011 0.900054 2.400698e-12
2015-2016 0.616009 1.744375e-04
2020-2021 0.686044 1.463182e-05
Spearman by Period - Counts x Rates
r-value p-value
Period
2010-2011 0.766862 3.077595e-07
2015-2016 0.707845 5.862840e-06
2020-2021 0.755499 5.781732e-07
```
Are either of these reasonable approaches?
| How to frame subsets of statistically significant sample? | CC BY-SA 4.0 | null | 2023-05-28T16:22:28.573 | 2023-05-29T19:21:00.593 | 2023-05-29T19:21:00.593 | 388952 | 388952 | [
"python",
"pearson-r"
] |
617151 | 2 | null | 492998 | 0 | null | mgcv package seems very slow and space inefficient. I was only able to estimate the binary model. The multinomial model could not complete the estimation.
| null | CC BY-SA 4.0 | null | 2023-05-28T16:28:01.157 | 2023-05-28T16:28:01.157 | null | null | 389012 | null |
617154 | 2 | null | 616872 | 2 | null | Ultimately, if you have the same reasonably large number of events and the underlying parametric model is adequate, the distribution of coefficient estimates from repeated sampling of the underlying distribution should be close to the multivariate normal distribution from the original fit.
There's no need to put this in the $\log T = \alpha + W$ accelerated failure time format. Once you have fixed `shape` and `rate` parameters in the form expected by `pgamma()` and `rgamma()`, just sample directly from `rgamma()` for event-time simulations. With your `shape` and `rate` values from the fit to the `lung` data set, to mimic the 165 events in that data set and overlay onto the Kaplan-Meier curve:
```
plot(survfit(Surv(time, status) ~ 1, data = lung))
newGammaData <- rgamma(165,shape=shape,rate=rate)
newGammaFit <- flexsurvreg(Surv(newGammaData, rep(1,165))~1,dist="gamma")
lines(1-pgamma(time, exp(newGammaFit$coef["shape"]), exp(newGammaFit$coef["rate"])),col="red")
```
Repeat the last 3 lines as often as you'd like.
To compare against the variability of coefficient estimates from the original model, sample from the bivariate normal distribution of the coefficients as they are reported by `flexsurvreg()`, exponentiate (as you do) to get values that are expected by `pgamma()`, then add lines as you show. Try the following:
```
newCoef <- MASS::mvrnorm(1,coef(fit),vcov(fit))
lines(1-pgamma(time,exp(newCoef[["shape"]]),exp(newCoef[["rate"]])),col="blue")
```
and repeat both code lines as often as you would like.
Direct comparison of covariances among fits to simulated data and covariance matrix of coefficient estimates
To illustrate how well the asymptotically bivariate normal distribution of coefficient estimates corresponds to what you find by multiple fits to data drawn from the modeled survival distribution, get the gamma fit to the original data (with 165 events), then combine fit results from 200 samples (of 165 uncensored simulated event times each) from the modeled gamma fit.
```
## fit lung data
library(flexsurv)
lungGammaFit <- flexsurvreg(Surv(time, status) ~ 1, data = lung, dist = "gamma")
## get coefficients in form for rgamma()
shape <- exp(lungGammaFit$coef["shape"])
rate <- exp(lungGammaFit$coef["rate"])
## initialize for accumulation over reps
bar <- rep(0,2) ## to keep coefficient estimates
cov <- matrix(0,nrow=2,ncol=2) ## to keep covariances
set.seed(104)
## do 200 samples of 165 each and fit
for (fitCount in 1:200) {newData <- rgamma(165,shape=shape,rate=rate);
newFit <- flexsurvreg(Surv(newData, rep(1,165))~1,dist="gamma");
cof<-coef(newFit);
bar <- bar+cof;
cof<-as.matrix(cof);
cov<-cov+cof%*%t(cof)}
## take average over 200 reps
(barAve <- bar/200)
# shape rate
# 0.4001778 -5.5644051
## expected good agreement with original fit
coef(lungGammaFit)
# shape rate
# 0.3908883 -5.5839805
## multivariate coefficient covariance estimate
barAve <- as.matrix(barAve)
newVcov <- (cov-200*barAve %*% t(barAve))/199
newVcov
# shape rate
# shape 0.01090990 0.01145029
# rate 0.01145029 0.01592581
## even covariances are very close
vcov(lungGammaFit)
# shape rate
# shape 0.009105897 0.01050332
# rate 0.010503320 0.01603337
```
That illustrates the first paragraph of the answer: the variability among fits to multiple samples from the gamma-distribution fit to the lung data is essentially what you would get from the original variance-covariance matrix of coefficient estimates, if the number of events is the same.
| null | CC BY-SA 4.0 | null | 2023-05-28T17:34:33.350 | 2023-05-29T21:16:50.103 | 2023-05-29T21:16:50.103 | 28500 | 28500 | null |
617155 | 2 | null | 617132 | 0 | null | Well I don't think sampling is needed here (unless I misunderstand your question / diagram). I believe what is intended is to expand the probabilities using something like the product rule, so that:
\begin{align}
P(c1,d1\mid a1,b1) &= P(d1 \mid c1, a1, b1)\cdot P(c1\mid a1,b1) \\
&=P(d1 \mid c1)\cdot P(c1\mid a1,b1)
\end{align}
and you have $P(d1 \mid c1)$ and $P(c1\mid a1,b1)$ in the tables already so you multiply them together.
I assume this is a homework question, so I won't give you all the reasons why I did what I did, but I will leave you with some things to think about:
(1) How did I apply the probability product rule? I.e., how does the product rule work?
(2) Why did I expand the product rule with respect to $c1$ and not $d1$?
(3) Why did $a1$ and $b1$ disappear in the probability $P(d1 \mid c1, a1, b1)$?
Good luck!
| null | CC BY-SA 4.0 | null | 2023-05-28T17:34:57.293 | 2023-05-28T17:41:33.800 | 2023-05-28T17:41:33.800 | 117574 | 117574 | null |
617157 | 1 | null | null | 0 | 9 | I am having difficulty understanding how the skip connections occur in resnet50 when the input and output layers are different in shape.
For example, in the first residual block, a 56*56*64 matrix needs to be added to a 56*56*256 matrix. In the paper they mention that when the input and output sizes are different, they use a projection, giving the formula `y = F(x, {Wi}) + Ws*x`.
Does that mean we need to initialize a random matrix of size 64*256 and do a matrix multiplication `matmul(x, Ws)` so that we get a 56*56*256 matrix, and then add it to the earlier 56*56*256 matrix? Is my understanding correct?
But in that case, how will the identity mapping happen? Aren't we going to lose the identity, because we are already multiplying `x` by `Ws`?
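A minimal numpy sketch of what such a projection shortcut looks like, treating `Ws` as a per-pixel matmul over the channel dimension (in ResNet-50, `Ws` is realized as a learned 1x1 convolution, not a fixed random matrix; the shapes and values here are just illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(56, 56, 64))      # shortcut input, 64 channels
f_x = rng.normal(size=(56, 56, 256))   # residual branch output, 256 channels

Ws = rng.normal(size=(64, 256)) * 0.05 # 1x1-conv weights (learned, not fixed)

# numpy matmul broadcasts over the leading spatial dims:
# (56, 56, 64) @ (64, 256) -> (56, 56, 256)
projected = x @ Ws
y = f_x + projected                    # y = F(x) + Ws * x
print(y.shape)                         # (56, 56, 256)
```

Because `Ws` is learned jointly with the rest of the network, the shortcut is no longer an exact identity, but it stays a cheap linear map that preserves the shortcut's role of passing information (and gradients) around the residual branch.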
| How the skip connections occur in resnet50 architecture when input output layers are different? | CC BY-SA 4.0 | null | 2023-05-28T18:23:18.520 | 2023-05-28T18:23:18.520 | null | null | 389018 | [
"conv-neural-network",
"residual-networks"
] |
617158 | 2 | null | 617096 | 0 | null | Yes, probably the most appropriate way to approach this would be to test an item-level multigroup CFA (measurement model) first to see what level of measurement equivalence holds across the groups. Depending on the level of measurement equivalence, you can then test the equivalence of the structural regression coefficients across groups using multigroup SEM or path analysis.
| null | CC BY-SA 4.0 | null | 2023-05-28T18:44:15.553 | 2023-05-28T18:44:15.553 | null | null | 388334 | null |
617159 | 1 | null | null | 1 | 10 | I am using SAS PROC Reg to develop an OLS regression equation for a random sample of 190 observations. These observations contain the Old_Area of_oF_Polygons (var X) and the true area of GIS polygons (var Y). The REG equation is then used to score a larger dataset with PROC Score. In cases where there is no real dependent or independent variable in the calssic sense, it seems that the Line-of_Organic Correlation (LOC)-- which minimizes the sum of right triangles formed by vertical and horizontal lines--might be more appropriate.
Is there a SAS PROC that can do that? Would there be much difference between this and the OLS REG approcah ?
| SAS procedure for calculating LOC Line-of-Organic-Correlation? | CC BY-SA 4.0 | null | 2023-05-28T18:56:17.277 | 2023-05-28T18:56:17.277 | null | null | 389019 | [
"regression"
] |
617160 | 1 | null | null | 0 | 46 | There seems to be a bit of confusion regarding these terminologies. E.g., in the Wikipedia article on [Path Analysis](https://en.wikipedia.org/wiki/Path_analysis_(statistics)) it's stated that:
>
...path analysis is SEM with a structural model, but no measurement model
If that was the case, then I don't see any difference between Path Analysis and the wikipedia definition of [Simultaneous Equations](https://en.wikipedia.org/wiki/Simultaneous_equations_model) - which is exactly this: a structural model without a measurement model - or a model that assumes that the observed variables are perfect measures of the latent variables.
Reading Bollen's 1989 "[Structural Equations with Latent Variables](https://onlinelibrary.wiley.com/doi/book/10.1002/9781118619179)" provides some historical context. Path analysis (Wright, 1918) is basically the historical predecessor of SEM (Jöreskog, 1973; Keesling, 1972; and Wiley, 1973). The main advances were in the methods of estimation, and not so much in the actual modelling (except maybe some subtleties). He also treats simultaneous equations as another way of representing the same model, though he acknowledges that the econometric folks will probably use them without latent variables. The title of the book emphasizes that you can incorporate latent variables into SE(M).
Personally, I think both Wikipedia articles are wrong, and that generally speaking Path Analysis = SEM = Simultaneous Equations. I do realize people in certain fields are used to their own unique terminology with their unique field subtleties, but from a statistics/mathematics point of view they should be treated all the same.
| Path Analysis vs. Structural Equations vs. Simultaneous Equations | CC BY-SA 4.0 | null | 2023-05-28T19:10:07.843 | 2023-05-28T19:26:56.017 | null | null | 117705 | [
"factor-analysis",
"structural-equation-modeling",
"simultaneous-equation"
] |
617161 | 2 | null | 617122 | 0 | null | To restate things and make sure there's no misunderstanding:
You have a "reference dataset" (i.e. your "previously analyzed dataset") with a categorical distribution among 12 levels, maybe looking like that (in R):
```
#note that this vector is sorted by decreasing order, the maximum proportion being 30%, and the smallest 1%
reference_distribution = c(0.30, 0.15, 0.12, 0.08, 0.08, 0.07, 0.06, 0.04, 0.03, 0.03, 0.03, 0.01)
#But let's make sure this is sorted in decreasing order anyway - this will be important for coding purposes later.
reference_distribution = sort(reference_distribution, decreasing=TRUE)
```
You plan to collect data from another source (or have already collected the data), and want to compare this new sample to the reference distribution above, to see if their discrepancies (that will inevitably happen) are concerning. You want to make sure that the size of your new sample is large enough to get a reliable result relative to these differences, or if you need to collect additional data.
In short: when you need to calculate a sample size, you have to, in this order:
- determine the statistical test adequate for your purpose;
- determine the minimal effect size you're interested in;
- calculate the required sample size.
## 1- Choosing an appropriate test
An appropriate statistical test here to compare your new sample to the reference distribution is almost certainly a [chi-square goodness-of-fit test](https://en.wikipedia.org/w/index.php?title=Pearson%27s_chi-squared_test), which is meant to compare categorical distributions. In short, the chi-squared test answers the question: "If the sample really comes from the reference distribution, how likely is it for this sample to look like this?".
Had your data been paired (that is, if the same customers answered the first and second survey, and you wanted to check whether these customers changed their minds between the two surveys), a more appropriate test would have been the [Stuart-Maxwell test](https://search.r-project.org/CRAN/refmans/DescTools/html/StuartMaxwellTest.html). It would have implied a different (and probably more complicated) procedure for calculating the required sample size. Here, we'll stick to the chi-square goodness-of-fit test, as it seems more appropriate for your data.
You'll find a short documentation for the chi-squared goodness-of-fit test in R [on this page](https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/chisq.test). This is an extremely common statistical test, and many free online resources (including this very website) are available on how to interpret correctly the result of this test. So I don't think it is necessary to explain it in detail in this answer.
If the chi-squared test shows that something is going on (i.e. that there's an overall difference between your sample and the reference distribution), you might also want to examine the residuals to see which levels are driving this overall difference. You'll find a technical but useful discussion on residuals in [this article](https://scholarworks.umass.edu/pare/vol20/iss1/8/) by D. Sharpe (2015).
But this short presentation of the chi-squared test was just an aside, and using this test will come after you collected your data. The important step here was simply to identify in advance the test to use after the data has been collected. It is important to identify the correct test beforehand, because it affects the way to calculate the required sample size.
## 2- Defining a minimal effect size you care about
The ability of the chi-square goodness-of-fit test to detect a difference if it exists is called its [power](https://en.wikipedia.org/wiki/Power_of_a_test). As the sample size increases, power increases. A consequence of that is if the sample size is too small, you might miss a difference you're interested in, and this is probably what your stakeholders are worried about. But on the other hand, if the sample size is too large, the test may detect differences that actually do not matter to you, so it might make you worry when there's no reason to (a note on that at the end of this answer).
So you want to run an a priori power analysis for the chi-square goodness-of-fit test, which will give you an appropriate sample size to detect the minimal effect you're interested in.
In R, the [pwr](https://cran.r-project.org/web/packages/pwr/vignettes/pwr-vignette.html) library may help you conducting this kind of analysis for the chi-square test. Other statistical software or languages generally also offer this kind of feature, e.g. GPower which is specialized in power analysis, or the `statsmodels` library in Python.
To conduct a power analysis, you have to define beforehand the magnitude of the difference you want to be able to detect between the sample and its hypothesized distribution.
For that, you can ask yourself what kind of difference would be problematic to miss.
To take an example, let's say you're interested in detecting a deviation/difference of at least $\pm8\%$ in at least one level/cell (because for whatever reasons specific to your business, you know that deviations smaller than 8% would not really matter).
```
diff = 0.08 #this is the difference we want to be able to detect in at least one cell, so let's enter that in R.
```
"A deviation of 8% in at least one cell" is a very specific criteria, and in the following 3 or 4 paragraphs, I'm going to detail quite a bit how to deal with it. But you don't have to use this kind of criteria or to follow exactly the specific workflow I'm going to suggest. It's just an example to give you an idea of how to think about it. You may have better ideas of criteria relevant to your situation. The main idea here is that smaller the difference you care about, the bigger the sample size will have to be - so you have to think carefully about what kind of difference/deviation is of interest to you.
Returning to our example, one of the "worst case scenarios" would be if this difference of $\pm8\%$ happened in the largest proportion cell of the reference distribution (in our example above, 0.3), because differences in largest cells are more difficult to detect for the chi-square goodness-of-fit test. Here, it would mean a proportion of 0.22 for this level in the new sample (it could have been 0.38 too, but let's take the example of 0.22).
An "even worst case scenario" is if the 8% you'd have to add elsewhere are added to each remaining level in a proportional way (it makes the deviation even harder to detect).
Let's create a hypothetical sample to simulate this "worst case scenario":
```
new_sample = c( reference_distribution[1] - diff) #we subtract 8% from 30%. For the moment, this vector is just c(0.22)
#let's add 8% proportionally to the remaining levels
remaining_prop = reference_distribution[2:length(reference_distribution)] #we retrieve a vector containing only the remaining levels
remaining_prop = remaining_prop + (remaining_prop/sum(remaining_prop)* diff) #we add the 8% proportionally to each remaining level
new_sample= c(new_sample, remaining_prop )
new_sample
[1] 0.22000000 0.16714286 0.13371429 0.08914286 0.08914286 0.07800000 0.06685714 0.04457143 0.03342857 0.03342857 0.03342857 0.01114286
```
So we now have a reference distribution and a hypothetical "worst-case scenario" sample, that look like that if we compare their distributions side by side:
|level |Reference distribution |Hypothetical "worst-case scenario" sample |
|-----|----------------------|-----------------------------------------|
|A |0.30 |0.22 |
|B |0.15 |0.16714286 |
|C |0.12 |0.13371429 |
|D |0.08 |0.08914286 |
|E |0.08 |0.08914286 |
|F |0.07 |0.07800000 |
|G |0.06 |0.06685714 |
|H |0.04 |0.04457143 |
|I |0.03 |0.03342857 |
|J |0.03 |0.03342857 |
|K |0.03 |0.03342857 |
|L |0.01 |0.01114286 |
From this hypothetical "worst-case" sample and from the reference distribution, we are now able to calculate an [effect size](https://en.wikipedia.org/wiki/Effect_size), essentially measuring the deviation of the sample from the reference distribution. We will then plug this effect size into the appropriate `pwr` method to get the required sample size.
The appropriate effect size for a chi-square goodness-of-fit test is called Cohen's $\omega$ (this is the Greek letter omega, but some people simply call it "Cohen's w").
Here's how to calculate the effect size Cohen's $\omega$ with the `pwr` library, using the reference distribution and the hypothetical sample we created before:
```
library(pwr)
effect_size_w = ES.w1(reference_distribution, new_sample)
effect_size_w
>>>0.1745743
```
Here, $\omega = 0.1745743$. (In case you want to calculate $\omega$ manually, you can find the formula on [the Wikipedia article about effect sizes](https://en.wikipedia.org/w/index.php?title=Effect_size&oldid=1156174961#Cohen%27s_omega_(%CF%89))).
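To double-check the `ES.w1` result by hand, the formula $\omega = \sqrt{\sum_i (p_{1i} - p_{0i})^2 / p_{0i}}$ is easy to compute directly; here is a short Python sketch rebuilding the two distributions above:

```python
import math

reference = [0.30, 0.15, 0.12, 0.08, 0.08, 0.07, 0.06,
             0.04, 0.03, 0.03, 0.03, 0.01]

# "Worst-case" sample: subtract 8% from the largest cell and
# redistribute it proportionally over the remaining cells.
diff = 0.08
rest = sum(reference[1:])
sample = [reference[0] - diff] + [p + p / rest * diff for p in reference[1:]]

w = math.sqrt(sum((p1 - p0) ** 2 / p0 for p0, p1 in zip(reference, sample)))
print(w)  # ~0.1746, matching ES.w1 above
```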
The higher $\omega$ is, the more different the two distributions are; the lower it is, the more similar they are. Here, 0.1745743 is the smallest possible $\omega$ we can get given the criterion of "a difference of at least 8% in at least one cell". If we want to err on the side of caution, we can round it down to 0.17:
```
effect_size_w = round(effect_size_w , 2)
```
Now that we have the smallest possible effect size we care about, we're almost ready to calculate the sample size required to detect an effect size of at least this magnitude.
## 3- Calculating the required sample size
We still have to define an [alpha level](https://en.wikipedia.org/w/index.php?title=Statistical_significance), and the power of the test. Choosing 0.05 for alpha is very common, but it entirely depends on your use case and you may want to choose a smaller level (see [this discussion](https://stats.stackexchange.com/questions/245431/significance-test-how-to-define-alpha-levels-other-than-the-standard-0-10-0-05)). A power of 0.8 is also very common to choose (but here again, you may want a higher level).
So we just have to plug into `pwr` the effect size we calculated earlier ($\omega=0.17$), the alpha level, the power, and the degrees of freedom (number of levels minus one). With an alpha level of 0.01 and a power of 0.9, here is what we get:
```
pwr.chisq.test(w=effect_size_w, df=(12-1), sig.level=0.01, power=0.9)
Chi squared power calculation
w = 0.17
N = 961.857
df = 11
sig.level = 0.01
power = 0.9
NOTE: N is the number of observations
```
So this gives a required sample size (N) of 962 for this specific example.
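For readers working in Python, the same a priori calculation should be obtainable with statsmodels' `GofChisquarePower` (note that `n_bins` is the number of levels, from which the degrees of freedom n_bins - 1 are derived):

```python
from statsmodels.stats.power import GofChisquarePower

# Solve for the sample size (nobs), given effect size, alpha, and power
n = GofChisquarePower().solve_power(effect_size=0.17, n_bins=12,
                                    alpha=0.01, power=0.9)
print(n)  # ~962, matching pwr.chisq.test above
```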
You may want to adjust this number upward to account for possible problems like non-responses, duplicate answers, and other shenanigans, depending on your knowledge of the people you're surveying.
## 4- After collecting the data
We collected our data, and maybe we have a sample looking like that:
```
real_sample = c(209, 98, 79, 66, 115, 87, 58, 60, 77, 65, 22, 26)
#in proportions, this is c(0.21725572, 0.10187110, 0.08212058, 0.06860707, 0.11954262, 0.09043659, 0.06029106, 0.06237006, 0.08004158, 0.06756757, 0.02286902, 0.02702703)
```
If we compare our collected sample to the reference distribution, we have something like that:
|level |Reference distribution |real collected sample |
|-----|----------------------|---------------------|
|A |0.30 |0.21725572 |
|B |0.15 |0.10187110 |
|C |0.12 |0.08212058 |
|D |0.08 |0.06860707 |
|E |0.08 |0.11954262 |
|F |0.07 |0.09043659 |
|G |0.06 |0.06029106 |
|H |0.04 |0.06237006 |
|I |0.03 |0.08004158 |
|J |0.03 |0.06756757 |
|K |0.03 |0.02286902 |
|L |0.01 |0.02702703 |
We can run the chi-square test to check how likely it is that this sample comes from the reference distribution:
```
chisq.test(real_sample, p=reference_distribution)
Chi-squared test for given probabilities
data: real_sample
X-squared = 241.53, df = 11, p-value < 2.2e-16
```
Here, with a very small p-value (< 2.2e-16), there is no way this sample comes from the reference distribution. So something is going on: the two distributions are too different to ignore, given the criteria we previously defined. Maybe the reference distribution did not correctly reflect the population, maybe there were many categorization errors in the second dataset, or maybe something changed for customers between the times the first and second datasets were collected. There are many possible explanations that should be investigated, but in any case something is going on here.
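As a cross-check outside R, the same goodness-of-fit test can be run with SciPy, using the counts above:

```python
from scipy.stats import chisquare

reference = [0.30, 0.15, 0.12, 0.08, 0.08, 0.07, 0.06,
             0.04, 0.03, 0.03, 0.03, 0.01]
observed = [209, 98, 79, 66, 115, 87, 58, 60, 77, 65, 22, 26]

n = sum(observed)
expected = [n * p for p in reference]  # expected counts under the reference

stat, pvalue = chisquare(f_obs=observed, f_exp=expected)  # df = 12 - 1 = 11
print(stat, pvalue)
```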
## Caveat about a too-large sample size
If you already collected data before conducting sample size calculations, and you realize that you have really much more data than what you actually need, it may be a problem - but nothing unsolvable.
Indeed, in this case, the chi-square test may return a very small p-value even if the difference between the sample and the reference distribution is actually of no practical interest to you. If you're in this situation (sample size much larger than what you calculated) and are unsure on how to interpret the small p-value you get from your test, I'd suggest to have a look at the following pages: [Sample size too large?](https://stats.stackexchange.com/questions/125750/sample-size-too-large) and [Are large data sets inappropriate for hypothesis testing?](https://stats.stackexchange.com/questions/2516/are-large-data-sets-inappropriate-for-hypothesis-testing)
## Bibliography
You can find a thorough discussion of power calculations for the chi-squared tests in the chapter 7 of: Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed). Routledge.
If you want to use other tools than R, there are many online video tutorial explaining on to do that using [Gpower](https://www.psychologie.hhu.de/arbeitsgruppen/allgemeine-psychologie-und-arbeitspsychologie/gpower). You'll also find a comparison on how to perform power calculations in R and Python here, section "Chi-square goodness-of-fit": Perktold, J. (2013, March 17). joepy: Statistical Power in Statsmodels. Joepy. [http://jpktd.blogspot.com/2013/03/statistical-power-in-statsmodels.html](http://jpktd.blogspot.com/2013/03/statistical-power-in-statsmodels.html)
| null | CC BY-SA 4.0 | null | 2023-05-28T19:17:39.913 | 2023-05-29T12:53:20.477 | 2023-05-29T12:53:20.477 | 164936 | 164936 | null |
617162 | 2 | null | 617160 | 0 | null | I agree that the "distinction" between path analysis and SEM is confusing as both use (typically) multiple equations/multiple dependent variables and a "structural" model. Perhaps it would be clearer to use the term "latent variable model" or "latent variable SEM" for a simultaneous equation model that includes a measurement (CFA) model with latent variables in addition to a "structural" (latent variable) regression or path model.
| null | CC BY-SA 4.0 | null | 2023-05-28T19:26:56.017 | 2023-05-28T19:26:56.017 | null | null | 388334 | null |
617163 | 2 | null | 617134 | 0 | null | If you use a likelihood-based binomial regression, as suggested by [Frank Harrell](https://stats.stackexchange.com/a/156913/28500) and [Ben Bolker](https://stats.stackexchange.com/a/156863/28500) on the [page you cite](https://stats.stackexchange.com/questions/156861/how-to-calculate-the-p-value-of-an-odds-ratio-in-r), or use [log-linear analysis](https://en.wikipedia.org/wiki/Log-linear_analysis) of counts in a contingency table, the p-values are based on the [asymptotic normality of the maximum-likelihood estimator](https://en.wikipedia.org/wiki/Maximum_likelihood_estimation#Efficiency). The test statistic is then a pivotal z-statistic from which confidence intervals can be calculated. There remains a question of whether there are enough cases to be close enough to asymptotic normality, but that's an issue for all maximum-likelihood estimation.
Agresti devotes Chapter 3 of the second edition of [Categorical Data Analysis](https://onlinelibrary.wiley.com/doi/book/10.1002/0471249688) to "Inference for Contingency Tables." Sections 3.5 and 3.6 discuss relative advantages of different methods for small samples, where the highly discrete nature of the data poses particular problems.
| null | CC BY-SA 4.0 | null | 2023-05-28T19:38:12.683 | 2023-05-28T19:38:12.683 | null | null | 28500 | null |
617164 | 1 | null | null | 1 | 10 | I'm currently working on specifying a seasonal ARIMA model with an exogenous variable. I'm using the forecast package developed by Hyndman for this. I have specified the following:
```
Arima(endogenous, order = c(1,1,0), seasonal = c(0,1,0), xreg = exogenous)
```
Q: I was wondering if the seasonal differencing is also applied to the exogenous variable?
In relation to the order Hyndman does mention: "The order argument specifies the order of the ARIMA error model. If differencing is specified, then the differencing is applied to all variables in the regression model before the model is estimated" [https://otexts.com/fpp2/regarima.html](https://otexts.com/fpp2/regarima.html)
However, it is unclear to me if the seasonal differencing, and any other seasonal parameters that are set are also applied to the exogenous variable.
Thank you!
| Seasonal differencing applied to exogenous variable (xreg)? Forecast package R by Hyndman | CC BY-SA 4.0 | null | 2023-05-28T19:38:41.067 | 2023-05-28T19:38:41.067 | null | null | 389021 | [
"r",
"time-series",
"forecasting",
"arima",
"seasonality"
] |
617165 | 1 | null | null | 0 | 25 | I have a control variable, firm size, in my regression model. I want to examine the moderating role of firm size, so I added a firm-type variable coded 1 for small-sized companies, 2 for medium-sized companies and 3 for large-sized companies. In this situation, should I remove firm size as the control variable before running my model, or do I have to keep it?
Thanks for your help
| Moderator Variable | CC BY-SA 4.0 | null | 2023-05-28T19:48:57.887 | 2023-05-28T19:48:57.887 | null | null | 366529 | [
"interaction"
] |
617167 | 1 | null | null | 1 | 9 | I understand that with multiple components, the result will be coefficients that lead to maximally independent series.
When requesting only one component I'm unclear if it actually does optimization and what that optimization is. Since there is only one component, there is nothing to measure an independence against? My intuition is that it might return the coefficients that result in the maximum entropy series, but I have no idea.
Specifically I'm asking about sklearn's FastICA, but a general answer would be great too.
| What does ICA using only one component return? | CC BY-SA 4.0 | null | 2023-05-28T20:32:58.757 | 2023-05-28T20:32:58.757 | null | null | 389023 | [
"scikit-learn",
"entropy",
"information-theory",
"independent-component-analysis"
] |
617169 | 2 | null | 617139 | 4 | null | The choice of layout notation and the treatment of vector variables (i.e. row/column) is usually tricky and may yield inconsistent results, so it's hard to say that the rules are the same for both. For example, in some cases the multiplication terms in the chain rule go from right to left, as opposed to the left-to-right order we're accustomed to.
For your second question, the matrix multiplication is not valid because the LHS, $\partial L/\partial Y$, has dimension $1\times 2$, while the RHS has dimension $4\times 3$. But, the expression
$$\frac{\partial L}{\partial W}=\frac{\partial L}{\partial y_0}\frac{\partial y_0}{\partial W}+\frac{\partial L}{\partial y_1}\frac{\partial y_1}{\partial W}$$
is correct due to chain rule in multivariate calculus. Therefore, the rest follows.
| null | CC BY-SA 4.0 | null | 2023-05-28T20:35:52.043 | 2023-06-02T06:21:06.590 | 2023-06-02T06:21:06.590 | 56940 | 204068 | null |
617171 | 1 | null | null | 0 | 31 | Let's assume that I have two groups that I would like to compare in terms of death rates. One group (group A) got a specific treatment and the other group (group B) did not get any treatment. I matched the groups using full propensity score matching because the groups differed systematically from one another. Now I would like to perform a log-rank test on the basis of the matched groups and I cannot find how to do that in SPSS or R. I matched the groups with R and am left with the propensity score, weights and subclasses. Now I would like to compare the mean time until death.
Who can help?
| Log-Rank test with propensity score matched data | CC BY-SA 4.0 | null | 2023-05-28T20:55:19.083 | 2023-05-29T09:24:12.817 | null | null | 389026 | [
"survival",
"propensity-scores",
"matching",
"logrank-test"
] |
617172 | 1 | null | null | 3 | 293 | Statisticians and Data Analysts sometimes draw conclusions from surveys given to people.
Often there is a question with 2 to 5 options to choose from if you are a survey respondent.
One of the mistakes people make when writing these surveys is asking survey respondents to choose exactly one option at the same time as providing answer options which are not mutually exclusive.
What are examples of real-world survey questions which have been put to people which had non-mutually exclusive answers to choose from?
| What are several examples of survey questions which have response options designed as mutually exclusive when they are not? | CC BY-SA 4.0 | null | 2023-05-28T21:12:08.240 | 2023-05-29T16:52:24.857 | 2023-05-29T02:06:14.857 | 164936 | 290074 | [
"experiment-design",
"survey",
"research-design"
] |
617173 | 2 | null | 617172 | 0 | null | This is a community-editable wiki.
Feel free to insert additional examples at the bottom.
---
# Example One: Video Content
>
The Question
Which of the following are you?
Video Content Creator
Business Video Creator
Educational Video Creator
You Create Videos for Personal Use, and/or Family and/or Your Friends.
One problem with mutual exclusivity is that the word "content" is an all-encompassing umbrella-like or blanket-like term.
All of the following are sub-categories of content:
>
business content
educational content
videos for personal use, family & friends.
All videos, of any kind, constitute video content.
---
# Example Two : [insert your description here]
>
The Question
Are you willing to edit this wiki?
Yes
No
Uncertain
?
??
The problem is that ...
| null | CC BY-SA 4.0 | null | 2023-05-28T21:12:08.240 | 2023-05-28T21:12:08.240 | null | null | 290074 | null |
617174 | 2 | null | 617172 | 3 | null | A common example is discrete counts with overlapping class-limits. For example: How many days did you visit Stack Exchange in the last month?
- 0
- 1
- 2-5
- 5-10
- more than 10
The problem is this: which option do I pick if I visited exactly 5 days?
| null | CC BY-SA 4.0 | null | 2023-05-28T21:19:49.943 | 2023-05-29T16:52:24.857 | 2023-05-29T16:52:24.857 | 206203 | 199063 | null |
617176 | 1 | null | null | 1 | 16 | I am new to working with time series and have tried several methods on my data including SARIMAX, Croston and forecastHybrid. The most accurate result I've gotten so far is with stlf(), based on [Explain the croston method of R](https://stats.stackexchange.com/questions/127337/explain-the-croston-method-of-r) and [https://laurentlsantos.github.io/forecasting/seasonal-and-trend-decomposition-with-loess-forecasting-model-stlf.html](https://laurentlsantos.github.io/forecasting/seasonal-and-trend-decomposition-with-loess-forecasting-model-stlf.html)
For reproducibility, the data for this project can be found here: [https://github.com/Rtse716/Time-Series-Data/blob/main/test_ts.csv](https://github.com/Rtse716/Time-Series-Data/blob/main/test_ts.csv)
Without getting into detail, the project contains daily data for the purpose of predicting observations of an event within a geographical grid cell. The data provided are for one of the cells, which all have sporadic rates of observations and long periods of zero observations. The columns in the csv include the date, number of observations and arctic sea ice extent.
Here is the code for my project:
Convert data to daily time series
```
g1_ts2 <- ts(test_ts, frequency=365)
```
choose 80% of the data to be the training data:
```
library(rsample)  # initial_split()/training()/testing() come from rsample
data_split <- initial_split(g1_ts2, prop = .80)
train <- training(data_split)
test <- testing(data_split)
```
Apply loess.as function, which returned 0.7700653
```
LoessOptim<-fANCOVA::loess.as(train[,3], train[,2], user.span =
NULL,
plot = FALSE)
```
Check residuals:
```
forecast::checkresiduals(LoessOptim$residuals)
```
[](https://i.stack.imgur.com/EBmZT.jpg)
Model:
```
stlf_model <- stlf(ts(train[,2], frequency=365),s.window=365, robust=TRUE, t.window =
0.7700653,method = c("arima"))
stlf_model$mean <- pmax(stlf_model$mean,0)
fc_stlf <- forecast(stlf_model, robust=TRUE)
accuracy(fc_stlf[["mean"]], g1_ts2[,2])
summary(fc_stlf)
```
Accuracy came out to be 0.6712329.
Summary results are:
```
Error measures:
ME RMSE MAE MPE MAPE MASE ACF1
Training set 10.88896 11745.37 4076.881 NaN Inf 0.7001504 0.2055367
```
Here is what the forecasted plot looks like (true values are in red)
`autoplot(fc_stlf)+autolayer(g1_ts2[,2])`
[](https://i.stack.imgur.com/wZRsn.jpg)
What can I do to improve the forecast? Is there a way to include exogenous variables with stlf (such as the extent values in the csv)?
I would greatly appreciate any input. Thank you.
| How to improve the accuracy of time series forecast using stlf() | CC BY-SA 4.0 | null | 2023-05-28T21:53:01.940 | 2023-05-28T21:53:01.940 | null | null | 389027 | [
"r",
"time-series",
"forecasting"
] |
617177 | 1 | null | null | 1 | 32 | Suppose $\mathbf{X}$ is a vector of $n$ iid Bernoulli variables with fixed success probability $p$. The variance of the sum $1^T\mathbf{X}$ is $np(1-p)$.
Now suppose I am interested in the conditional probability of $s$ successes given the value of a weighted sum of the Bernoulli RVs, formally $P(1^TX=s \mid w^TX = w^Tx)$. What would that pmf look like?
In particular, how could I prove that $Var(1^TX) \geq Var(1^TX \mid w^TX = w^Tx)$?
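To make the second question concrete, here is a quick simulation sketch (the coarsening of $w^TX$ into bins is purely for illustration). By the law of total variance, $Var(1^TX) = E[Var(1^TX \mid w^TX)] + Var(E[1^TX \mid w^TX])$, so at least the *average* conditional variance cannot exceed the marginal one:

```python
import numpy as np

# Simulation sketch: S = 1'X, W = w'X (rounded so the conditioning bins
# are populated). The weighted average of the conditional variances
# should come out below the marginal variance of S.
rng = np.random.default_rng(0)
n, p, reps = 10, 0.3, 200_000
w = rng.uniform(size=n)
X = rng.binomial(1, p, size=(reps, n))
S = X.sum(axis=1)
W = np.round(X @ w, 1)

cond_vars, bin_weights = [], []
for v in np.unique(W):
    mask = W == v
    if mask.sum() > 1:
        cond_vars.append(S[mask].var())
        bin_weights.append(mask.mean())

marginal = S.var()                                    # close to n*p*(1-p) = 2.1
avg_conditional = np.average(cond_vars, weights=bin_weights)
print(marginal, avg_conditional)
```

This does not settle the pointwise claim for every value of $w^Tx$, but it suggests the direction of the inequality in expectation.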
| Binomial distribution conditional on the weighted sum? | CC BY-SA 4.0 | null | 2023-05-28T21:59:13.647 | 2023-05-28T21:59:13.647 | null | null | 370912 | [
"probability",
"mathematical-statistics",
"conditional-probability",
"binomial-distribution",
"conditional"
] |
617179 | 1 | null | null | 1 | 43 | I know that random variables can take negative values, so can independent and identically distributed random variables be negative?
| Can i.i.d. random variables be negative? | CC BY-SA 4.0 | null | 2023-05-29T00:06:18.107 | 2023-05-29T01:48:07.770 | 2023-05-29T01:48:07.770 | 69508 | 369300 | [
"random-variable",
"iid"
] |
617181 | 2 | null | 617179 | 1 | null | YES
Consider a distribution $D$ that takes either $-1$ or $-2$ with equal probability.
Let $X_1,\dots,X_n\overset{iid}{\sim}D$. Then the $X_i$ are $iid$ yet only take negative values.
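A two-line sketch of such draws (using NumPy just for convenience):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.choice([-1, -2], size=10)  # iid draws from D; every value is negative
print(x)
```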
| null | CC BY-SA 4.0 | null | 2023-05-29T00:45:19.610 | 2023-05-29T00:45:19.610 | null | null | 247274 | null |
617182 | 2 | null | 333697 | 5 | null | No, they are not well-calibrated. The predicted probabilities are likely not outright horrible as we would expect from an SVM classifier but they are not usually very well-calibrated. For that matter the estimated probability deciles are not even guaranteed to be monotonic. In Caruana et al. (2004) ["Ensemble Selection from Libraries of Models"](http://www.niculescu-mizil.org/papers/shotgun.icml04.revised.rev2.pdf) boosted trees have some of the "worst" calibration performance scores. Similarly in Niculescu-Mizil & Caruana (2005) ["Predicting good probabilities with supervised learning"](https://www.cs.cornell.edu/%7Ealexn/papers/calibration.icml05.crc.rev3.pdf), boosted trees have "the predicted values massed in the center of the histograms, causing a sigmoidal shape in the reliability plots". Importantly, these findings don't even touch upon the scenarios of up-sampling, down-sampling or re-weighting our data; in those cases it is very unlikely that our predicted probabilities have a direct interpretation at all.
Do note that "badly" calibrated probabilities are not synonymous with a useless model but I would urge one doing an extra calibration step (i.e. [Platt scaling](https://en.wikipedia.org/wiki/Platt_scaling), [isotonic regression](https://en.wikipedia.org/wiki/Isotonic_regression) or [beta calibration](https://github.com/betacal/betacal.github.io)) if using the raw probabilities is of importance. Similarly, looking at Guo et al. (2017) "[On Calibration of Modern Neural Networks](http://proceedings.mlr.press/v70/guo17a.html)" can be helpful as it provides a range of different metrics (Expected Calibration Error (ECE), Maximum Calibration Error (MCE), etc.) that can be used to quantify calibration discrepancies.
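As a toy illustration of the isotonic-regression option, here is a minimal pool-adjacent-violators (PAV) sketch in plain NumPy. The data are simulated with a sigmoidal distortion like the one described above; in practice you would use a library implementation rather than this hand-rolled version:

```python
import numpy as np

def pava(y):
    """Isotonic (non-decreasing) least-squares fit via pool-adjacent-violators."""
    out = []  # list of [block_mean, block_size]
    for v in np.asarray(y, dtype=float):
        out.append([v, 1])
        # merge backwards while the last two blocks violate monotonicity
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, s2 = out.pop()
            m1, s1 = out.pop()
            out.append([(m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2])
    return np.concatenate([[m] * s for m, s in out])

rng = np.random.default_rng(0)
p = np.sort(rng.uniform(size=300))                       # raw scores, sorted
y = rng.binomial(1, 1 / (1 + np.exp(-6 * (p - 0.5))))    # distorted outcomes
calibrated = pava(y)  # monotone calibrated probabilities at the sorted scores
```

Fitting `pava` on the outcomes ordered by raw score gives a monotone mapping from scores to calibrated probabilities, which is exactly what the isotonic recalibration step does.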
| null | CC BY-SA 4.0 | null | 2023-05-29T01:26:59.437 | 2023-05-29T01:26:59.437 | null | null | 11852 | null |
617184 | 2 | null | 617179 | 2 | null | Take any set of iid values that are strictly positive.
For each one, put a minus sign in front of it. You now have a new set of i.i.d. variables that are negative.
| null | CC BY-SA 4.0 | null | 2023-05-29T01:43:55.783 | 2023-05-29T01:43:55.783 | null | null | 805 | null |
617185 | 1 | 617221 | null | 0 | 34 | Question
Suppose the responses in two treatment groups can be modelled by Weibull distributions with probability density function
\begin{align}
f(x; \alpha, \lambda) = \alpha \lambda x^{\alpha - 1}e^{-\lambda x^{\alpha}}, \quad \text{where } x \geq 0 \; \text{and } \alpha, \lambda > 0
\end{align}
In the study, $n_1$ participants are assigned to treatment 1, with observations $x_i$, and $n_2$ participants are assigned to treatment 2, with observations $y_j$. We assume $\alpha$ is known but not necessarily equal to 1. The $x_i$ have a Weibull distribution with pdf $f(x_i; \alpha_1, \lambda_1)$ and the $y_j$ have a Weibull distribution with pdf $f(y_j; \alpha_2, \lambda_2)$.
Derive the likelihood equations and solve to find the MLEs of $\lambda_1$ and $\lambda_2$.
Approach
Because the two samples are independent, the joint likelihood for the combined sample $(x_1, x_2, \ldots, x_{n_1}, y_1, y_2, \ldots, y_{n_2})$, which depends on the parameters $(\lambda_1, \lambda_2)$, factorizes as follows:
\begin{align*}
L(\lambda_1, \lambda_2) &= L_{1}(\lambda_1) \cdot L_{2}(\lambda_2) \\
&= \prod_{i = 1}^{n_1}(\alpha_1\lambda_{1}x_i^{\alpha_1 - 1}e^{-\lambda_1 x_i^{\alpha_1}}) \cdot \prod_{j = 1}^{n_2}(\alpha_2\lambda_{2}y_j^{\alpha_2 - 1}e^{-\lambda_2 y_j^{\alpha_2}}) \\
&= \alpha_1^{n_1}\lambda_1^{n_1}\prod_{i = 1}^{n_1} x_i^{\alpha_1 - 1}e^{-\lambda_1 \sum_{i = 1}^{n_1} x_i^{\alpha_1}} \cdot \alpha_2^{n_2}\lambda_2^{n_2}\prod_{j = 1}^{n_2}y_j^{\alpha_2 - 1} e^{-\lambda_2 \sum_{j = 1}^{n_2} y_j^{\alpha_2}}
\end{align*}
This produces the following log-likelihood (keeping only the terms that involve $\lambda_1$ or $\lambda_2$):
\begin{align}
\ell(\lambda_1, \lambda_2) = n_1\ln(\lambda_1) + n_2\ln(\lambda_2) - \lambda_1\sum_{i = 1}^{n_1} x_i^{\alpha_1} -\lambda_2\sum_{j = 1}^{n_2} y_j^{\alpha_2}
\end{align}
The partial derivatives for $\lambda_1, \lambda_2$
For $\lambda_1$:
\begin{align}
\frac{\partial{\ell(\lambda_1, \lambda_2)}}{\partial{\lambda_1}} = \frac{n_1}{\lambda_1} - \sum_{i = 1}^{n_1} x_i^{\alpha_1}
\end{align}
For $\lambda_2$:
\begin{align}
\frac{\partial{\ell(\lambda_1, \lambda_2)}}{\partial{\lambda_2}} = \frac{n_2}{\lambda_2} - \sum_{j = 1}^{n_2} y_j^{\alpha_2}
\end{align}
My question.
From here, do I set the equations equal to 0 and solve independently for each to obtain $\hat{\lambda}_1, \hat{\lambda}_1$ or do I solve by setting both equal to 0 and solving in terms of each other.
For example independently:
For $\lambda_1$:
\begin{align*}
0 &= \frac{\partial\ell(\lambda_1, \lambda_2)}{\partial\lambda_1} \\
0 &= \frac{n_1}{\lambda_1} - \sum_{i = 1}^{n_1} x_i^{\alpha_1} \\
\hat{\lambda}_1 &= \frac{n_1}{\sum_{i = 1}^{n_1} x_i^{\alpha_1}}
\end{align*}
For $\lambda_2$:
\begin{align*}
0 &= \frac{\partial\ell(\lambda_1, \lambda_2)}{\partial\lambda_2} \\
0 &= \frac{n_2}{\lambda_2} - \sum_{j = 1}^{n_2} y_j^{\alpha_2} \\
\hat{\lambda}_2 &= \frac{n_2}{\sum_{j = 1}^{n_2} y_j^{\alpha_2}}
\end{align*}
Solving together:
For $\lambda_1$:
\begin{align}
\hat{\lambda}_{1} = \frac{n_1\lambda_2}{\lambda_2\left(\sum_{i = 1}^{n_1} x_i^{\alpha_1} -\sum_{j = 1}^{n_2}y_j^{\alpha_2} \right) + n_2}
\end{align}
For $\lambda_2$:
\begin{align}
\hat{\lambda}_{2} = \frac{n_2\lambda_1}{\lambda_1\left(\sum_{i = 1}^{n_1} x_i^{\alpha_1} -\sum_{j = 1}^{n_2}y_j^{\alpha_2} \right) + n_1}
\end{align}
Thank you for your help. This might be a trivial question but I would also like to understand why.
| Maximum Likelihood Estimator Multiple Parameters | CC BY-SA 4.0 | null | 2023-05-29T02:17:18.507 | 2023-05-29T15:19:28.887 | 2023-05-29T02:26:26.317 | 376744 | 376744 | [
"self-study",
"maximum-likelihood",
"weibull-distribution"
] |
617186 | 1 | null | null | 0 | 14 | I want to use the window function to analyze some time series like the code below does:
```
# https://towardsdatascience.com/a-guide-to-forecasting-in-r-6b0c9638c261
library(forecast)
library(MLmetrics)
data=AirPassengers #Create samples
training=window(data, start = c(1949,1), end = c(1955,12))
validation=window(data, start = c(1956,1))
```
My time series will be stock market data and may have different data formats and periods. I would like to say something like `window(ts,c(.7,.2,.1))` and get 3 windows for my training, validation and test sets.
Does anyone know of a way to do this?
| How to specify time series window in r as a fraction of the length? | CC BY-SA 4.0 | null | 2023-05-29T04:12:13.803 | 2023-05-29T04:53:56.573 | 2023-05-29T04:53:56.573 | 362671 | 174445 | [
"r",
"machine-learning",
"time-series"
] |
617188 | 1 | null | null | -1 | 11 | why am I getting the error message that my R could not find the function odd ratio? although I have used them recently.
| Issues with generating Odds ratio in R | CC BY-SA 4.0 | null | 2023-05-29T04:49:26.613 | 2023-05-29T04:49:26.613 | null | null | 389036 | [
"regression",
"logistic",
"chi-squared-test"
] |
617189 | 1 | null | null | 4 | 362 | I'm quite new to neural networks and optimization. While reading references, I found this paper: [Wang et al 2018](https://www.matec-conferences.org/articles/matecconf/pdf/2018/91/matecconf_eitce2018_03007.pdf).
The paper states:
>
One disadvantage of SGD
is that it scales the gradient uniformly in all directions;
this can be particularly detrimental for ill-scaled
problems. This also makes the process of tuning the
learning rate α circumstantially laborious
What does it mean by "it scales the gradient uniformly in all directions"?
| What is the meaning of "SGD scales the gradient uniformly in all directions"? | CC BY-SA 4.0 | null | 2023-05-29T07:23:23.737 | 2023-05-30T11:01:28.607 | 2023-05-29T07:43:16.040 | 362671 | 388875 | [
"neural-networks",
"optimization",
"gradient-descent",
"gradient",
"stochastic-gradient-descent"
] |
617190 | 2 | null | 616756 | 0 | null | You have this backwards. We are optimizing for $\min_w \frac{||w||^2}{2} + C\mathcal L(w,X,Y)$ where I've simplified notation. Clearly, if we set $C \rightarrow 0$ then we will be only optimizing for $||w||^2$ in which case $w = 0$, and the margin will tend to infinity. Likewise, as $C \rightarrow \infty$, the second term dominates, and $||w||$ can increase arbitrarily as its contribution to the objective becomes negligible, resulting in the margin decreasing.
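A tiny numerical sketch of this trade-off, on a hypothetical 1-D problem with points $x = \pm 1$, labels $y = \pm 1$, and classifier $f(x) = wx$ (here the minimizer can be found by grid search):

```python
import numpy as np

# Objective: 0.5*w^2 + C * sum of hinge losses. For this symmetric pair of
# points both hinge terms equal max(0, 1 - w), and the minimizer works out
# to w* = min(2C, 1), so the margin 2/|w*| blows up as C -> 0 and bottoms
# out at 2 as C grows.
def objective(w, C):
    hinge = 2.0 * max(0.0, 1.0 - w)
    return 0.5 * w**2 + C * hinge

ws = np.linspace(0.0, 2.0, 20001)
results = {}
for C in (0.01, 0.1, 10.0):
    w_star = ws[np.argmin([objective(w, C) for w in ws])]
    results[C] = 2.0 / w_star  # the margin
print(results)  # margin shrinks as C increases
```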
| null | CC BY-SA 4.0 | null | 2023-05-29T07:51:57.003 | 2023-05-29T07:51:57.003 | null | null | 99688 | null |
617191 | 2 | null | 616408 | 0 | null | The point is that $\arg\max_w \frac{2}{||w||} = \arg\min_w \frac{||w||^2}{2}$ - in other words maximizing the margin is equivalent to minimizing the norm in terms of the solution.
| null | CC BY-SA 4.0 | null | 2023-05-29T07:53:09.303 | 2023-05-29T07:53:09.303 | null | null | 99688 | null |
617192 | 2 | null | 615811 | 0 | null | The bounds for the individual $\alpha_i$ are $0 \leq \alpha_i \leq C_i$.
| null | CC BY-SA 4.0 | null | 2023-05-29T07:54:22.137 | 2023-05-29T07:54:22.137 | null | null | 99688 | null |
617193 | 2 | null | 617189 | 5 | null | The gradient vector is multiplied by a scalar constant called learning rate, i.e.
$$\theta_{t+1}=\theta_t-\alpha\nabla_\theta L$$
where $\theta$ is the parameter, $L$ is the loss. This means every dimension of the gradient vector is multiplied by the same constant, i.e. scaled uniformly in all directions.
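In code, the uniform scaling is just the single scalar $\alpha$ multiplying every component of the gradient. A minimal NumPy sketch on a made-up ill-scaled quadratic:

```python
import numpy as np

# Loss L(x, y) = x^2 + 100*y^2: the curvature differs by a factor of 100
# between the two directions, but SGD rescales both gradient components
# with the same scalar learning rate alpha.
def grad(theta):
    return np.array([2.0 * theta[0], 200.0 * theta[1]])

theta = np.array([1.0, 1.0])
alpha = 0.005
theta = theta - alpha * grad(theta)
print(theta)  # [0.99, 0.0] -- a tiny step in x, a huge step in y
```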
| null | CC BY-SA 4.0 | null | 2023-05-29T08:07:28.973 | 2023-05-29T08:07:28.973 | null | null | 204068 | null |
617194 | 2 | null | 617171 | 0 | null | Instead of using matching, you can calculate inverse probability of treatment weights from your estimated propensity scores. These weights can then be used to perform a weighted log-rank test, as described in this article:
Xie, Jun and Chaofeng Liu (2005). “Adjusted Kaplan-Meier Estimator and Log-Rank Test with Inverse Probability of Treatment Weighting for Survival Data”. In: Statistics in Medicine 24, pp. 3089–3110.
This is implemented in the `ipw.log.rank` function in the `RISCA` R-package. Adjusted survival curves can be plotted in a similar fashion using the `ipw.survival` function of that same package. Note that the latter function is rather limited in terms of customizability (it does not support confidence interval calculation, etc.). You can find a more flexible implementation in the `adjustedCurves` R-package I developed.
| null | CC BY-SA 4.0 | null | 2023-05-29T09:24:12.817 | 2023-05-29T09:24:12.817 | null | null | 305737 | null |
617195 | 2 | null | 616847 | 2 | null | The $\chi^2$-statistic can be derived in (at least) three ways.
- In the question Why does chi-square testing use the expected count as the variance?, two ways occur:
  - Consider $n$ independent Poisson distributed variables with an additional constraint.
  - Consider $n$ dependent Binomial distributed variables (also the approach in the previous version of this answer).
- Consider $n-1$ dependent Binomial distributed variables.
The third case has been the approach by Pearson in 1900 and will be explained below in the case of the 3-sided dice.
---
Pearson considered multivariate normal distributions in terms of the Mahalanobis distance $\chi$:
$$f(x) \propto \exp \left(- \frac{\chi^2}{2} \right) \qquad \text{where $\chi^2 = (x-\mu)^t \Sigma^{-1} (x-\mu)$}$$
and when applying this to a multivariate normal approximation of the multinomial distribution only $n-1$ variables are considered instead of $n$. The variables are confined to a plane by the constraint $\sum_{i=1}^3 O_i = N$, so we can use the joint distribution of two variables $O_1$ and $O_2$, which are a sufficient statistic, together with $O_3 = N - O_1 - O_2$.
The covariance matrix of the multinomial distribution, with 3 levels and probabilities $p_1,p_2,p_3$ is
$$\Sigma_{O_1,O_2,O_3} = N \begin{bmatrix} p_1(1- p_1) & -p_1p_2 & -p_1p_3 \\
- p_2p_1 & p_2(1-p_2) & -p_2p_3 \\
- p_3p_1 & -p_3p_2 & p_3(1-p_3) \end{bmatrix}$$
The covariance matrix of the normalized variables $Z_i = \frac{O_i-E_i}{\sqrt{N \frac{E_i}{N}\left(1-\frac{E_i}{N}\right)}}$ is
$$\Sigma_{Z_1,Z_2,Z_3} = \begin{bmatrix} 1 & -\sqrt{r_1r_2} & -\sqrt{r_1r_3} \\
-\sqrt{r_2r_1} & 1 & -\sqrt{r_2r_3} \\
-\sqrt{r_3r_1} & -\sqrt{r_3r_2} & 1
\end{bmatrix}$$
where we define $q_i = 1- p_i$ and $r_i = p_i/q_i$.
Now, for the description of the density in terms of $\chi^2$ we only use two of the variables, e.g. $Z_1$ and $Z_2$ and their covariance matrix is:
$$\Sigma_{Z_1,Z_2} = \begin{bmatrix} 1 & -\sqrt{r_1r_2} \\
-\sqrt{r_2r_1} & 1 \\
\end{bmatrix}$$
whose inverse is
$$\Sigma_{Z_1,Z_2}^{-1} = \frac{1}{1-r_1r_2} \begin{bmatrix} 1 & \sqrt{r_1r_2} \\
\sqrt{r_2r_1} & 1 \\
\end{bmatrix}$$
Note that for the variables $Z_i$ the constraint is
$$\sqrt{p_1q_1}Z_1 + \sqrt{p_2q_2}Z_2 + \sqrt{p_3q_3}Z_3 = 0$$
and
$$\frac{\sqrt{p_1q_1}}{ \sqrt{p_3q_3}}Z_1 + \frac{\sqrt{p_2q_2}}{ \sqrt{p_3q_3}}Z_2 = -Z_3$$
And $\chi^2$ is equal to
$$\begin{array}{}
\begin{bmatrix} Z_1 , Z_2 \end{bmatrix} \Sigma_{Z_1,Z_2}^{-1} \begin{bmatrix} Z_1 \\ Z_2 \end{bmatrix} &=& \frac{1}{1-r_1r_2} Z_1^2 + \frac{1}{1-r_1r_2} Z_2^2 + 2 \frac{\sqrt{r_1r_2}}{1-r_1r_2} Z_1Z_2 \\
&=& \left(q_1 + \frac{p_1q_1}{p_3}\right) Z_1^2 + \left(q_2+\frac{p_2q_2}{p_3}\right) Z_2^2 + 2 \frac{\sqrt{p_1p_2q_1q_2}}{p_3} Z_1Z_2 \\
&=& q_1 Z_1^2 + q_2 Z_2^2 + \left(\sqrt{\frac{p_1q_1}{p_3}} Z_1 + \sqrt{ \frac{p_2q_2}{p_3}}Z_2\right)^2\\
&=& q_1 Z_1^2 + q_2 Z_2^2 + \left(-\sqrt{q_3}Z_3\right)^2\\
&=& q_1 Z_1^2 + q_2 Z_2^2 + q_3 Z_3^2\\
\end{array}$$
| null | CC BY-SA 4.0 | null | 2023-05-29T09:28:16.317 | 2023-05-30T11:27:17.280 | 2023-05-30T11:27:17.280 | 164061 | 164061 | null |
617196 | 2 | null | 617189 | 6 | null | What it's getting at, is the step size you use should depend on the curvature , ie how the gradient changes in each direction.
Imagine a narrow u-shaped sloping valley.in the direction of the U, you want to take a very small step size or you will shoot up the other side ( the gradient has actually changed direction). Along the valley, in the perpendicular direction - with low curvature - you want a large step size.
the step size you need should be large in directions that curvature is small and gradient stays roughly the same, and small in directions that the gradient is changing rapidly.
with gradient descent, you have a fixed stepsize in all directions. So in the situation of "ill-scaled" problems, where the curvature is not the same in all directions (eg u-shape valley example). You need to choose a very small step size so you don't constantly overshoot, which means that you will travel very slowly along the valley floor.
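A sketch of the valley picture on a made-up quadratic with curvatures 1 and 100: any step size large enough to make progress along the flat direction overshoots in the steep one.

```python
import numpy as np

# L = 0.5*(x^2 + 100*y^2). Gradient descent contracts x by (1 - alpha)
# per step and y by (1 - 100*alpha); with alpha = 0.025 the y factor is
# -1.5, so y oscillates across the valley and diverges while x crawls.
def grad(theta):
    return np.array([theta[0], 100.0 * theta[1]])

theta = np.array([1.0, 1.0])
alpha = 0.025
for _ in range(20):
    theta = theta - alpha * grad(theta)
print(theta)  # x is only partway to 0; |y| has exploded
```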
| null | CC BY-SA 4.0 | null | 2023-05-29T09:33:13.440 | 2023-05-30T11:01:28.607 | 2023-05-30T11:01:28.607 | 27556 | 27556 | null |
617197 | 1 | null | null | 2 | 28 | I am using the SGD & Adam optimizers in a simple NN for a binary classification problem.
The model converges every time, but when I run the same model 100 times, I notice that I don't get the same coefficients every time. There is a deviation in coefficient values from benchmark coefficients (that I obtained from Statsmodels) of around 8-10%.
Is there a specific reason why this happens, or are there any results on why SGD/Adam do not converge to the same coefficient values on every run?
I am implementing a simple NN model using TF library. Below are the Hyper-parameters I am using:
Epoch: 300
Optimizer: ADAM/SGD
Activation: Sigmoid
Learning Rate: 0.005
Batch Size: n/4 (n is # of data points)
Regularization: L1/L2
| Why doesn't SGD optimizer converge to same Coefficients for multiple runs but Statsmodels does? | CC BY-SA 4.0 | null | 2023-05-29T09:53:46.503 | 2023-05-29T13:17:44.823 | null | null | 389052 | [
"machine-learning",
"neural-networks",
"optimization"
] |
617198 | 2 | null | 501835 | 1 | null | Fisher is said to have given the interpretion of $p$-values as a "measure of surprise", given you believe in the null hypothesis. This may actually be confusing, since low $p$-value then indicates strong surprise.
Instead, we can introduce $p$-values as "measure of
compatibility with the null". (suggested by Christian Hennig)
Then: low p = low compatibility
In Cox And Hinkley's 1974 text, they use the p-value "as a measure of the consistency of the data with the null hypothesis" (p.66). Earlier, Cox (1958) described a significance test as "concerned with the extent to which the data are consistent with the null hypothesis".(p. 362)
| null | CC BY-SA 4.0 | null | 2023-05-29T10:54:04.203 | 2023-05-31T11:52:59.457 | 2023-05-31T11:52:59.457 | 343075 | 237561 | null |
617199 | 1 | 617213 | null | 1 | 36 | Q: If $X_t$ is an AR(2) process, what is $Y_t := X_t - X_{t-1}$?
Attempted solution:
$X_t = \phi_1 X_{t-1} + \phi_2 X_{t-2} + W_t$, where $W_t$ is white noise.
\begin{equation} \begin{split} Y_t &:= X_t - X_{t-1} = \phi_1 X_{t-1} + \phi_2 X_{t-2} + W_t - \phi_1 X_{t-2} - \phi_2 X_{t-3} - W_{t-1} \\
\end{split} \end{equation}
But what can we say about $Y_t$?
| If $X_t$ is an AR(2) process, what is $Y_t := X_t - X_{t-1}$? | CC BY-SA 4.0 | null | 2023-05-29T11:03:26.880 | 2023-05-29T18:29:10.910 | 2023-05-29T18:29:10.910 | 53690 | 384994 | [
"time-series",
"arima",
"autoregressive",
"moving-average",
"differencing"
] |