Id stringlengths 1 6 | PostTypeId stringclasses 7 values | AcceptedAnswerId stringlengths 1 6 ⌀ | ParentId stringlengths 1 6 ⌀ | Score stringlengths 1 4 | ViewCount stringlengths 1 7 ⌀ | Body stringlengths 0 38.7k | Title stringlengths 15 150 ⌀ | ContentLicense stringclasses 3 values | FavoriteCount stringclasses 3 values | CreationDate stringlengths 23 23 | LastActivityDate stringlengths 23 23 | LastEditDate stringlengths 23 23 ⌀ | LastEditorUserId stringlengths 1 6 ⌀ | OwnerUserId stringlengths 1 6 ⌀ | Tags list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
606915 | 1 | 611310 | null | 4 | 238 | I'm curious as to why the Negative Log Likelihood (NLL) loss is used for classification tasks in PyTorch (see [here](https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html)). The negative log likelihood is a much more general notion than a measurement of error in a classification problem.
Yes, the negative log likelihood of a Categorical distribution can be minimized (with respect to some parameters) to do maximum likelihood estimation, but it is not reserved for the Categorical distribution.
The negative log likelihood is a function we can write down for any distribution. For example, minimizing the negative log likelihood of a Gaussian distribution amounts to maximum likelihood estimation in a simple regression problem.
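To illustrate the generality being described (a toy sketch in plain Python, not tied to PyTorch): for a fixed-variance Gaussian, minimizing the negative log likelihood of a constant model picks out exactly the same value as minimizing squared error.

```python
import math

data = [1.2, 0.8, 1.5, 0.9, 1.1]

def gaussian_nll(mu, xs, sigma=1.0):
    # Negative log likelihood of xs under N(mu, sigma^2)
    return sum(0.5 * math.log(2 * math.pi * sigma**2)
               + (x - mu)**2 / (2 * sigma**2) for x in xs)

def mse(mu, xs):
    return sum((x - mu)**2 for x in xs) / len(xs)

# Minimize both criteria over the same grid of candidate means
grid = [i / 1000 for i in range(0, 2001)]
mu_nll = min(grid, key=lambda m: gaussian_nll(m, data))
mu_mse = min(grid, key=lambda m: mse(m, data))

print(mu_nll, mu_mse)  # both equal the sample mean, 1.1
```

The same NLL machinery applies unchanged if the Gaussian is swapped for a Categorical, which is the sense in which the term is far more general than one loss function.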
Is there any reason PyTorch decided to do this? Is the negative log likelihood a commonly abused term to refer to a classification objective?
| Negative Log Likelihood (NLL) reserved for a classification in PyTorch is weird? | CC BY-SA 4.0 | null | 2023-02-28T16:29:54.080 | 2023-04-01T14:48:58.150 | null | null | 381061 | [
"machine-learning",
"loss-functions"
] |
606919 | 1 | null | null | 0 | 19 | I'm not sure what question to ask, so I'll start by explaining what I'm trying to do.
#### The data
I have a hierarchical dataset that contains multiple batches (5); data points for each batch have three further levels of hierarchy, with groups of size 2–15, 1–10, and 3–10 at each level.
Let's call the levels
- (lvl 1) "batches",
- (lvl 2) "subjects",
- (lvl 3) "sets", and
- (lvl 4) "trials",
respectively. Each trial yields measurements, call them $(x, y, \mathbf z)$, where $\mathbf z = \{ z_1, z_2, \dots, z_k \}$. In aggregate, there are about 1000 trials.
#### The philosophy
This is an under-powered design. I don't expect to draw conclusions about the population of batches. The question I want to answer is something like "does this effect probably exist within this particular batch?", extended to "assuming inter-batch effects can be neglected, does this effect probably exist within these five batches?".
I've accepted the ansatz that the batches (level 1) are identical, and will report results with this caveat.
I've also accepted that effects related to level 3 ("sets") might exist, but I'm not interested in identifying them, and am happy to treat this as an unknown dependence between the trials. This is similar to a problem often faced when working with time-series data with unknown (and possibly non-stationary) autocorrelations, e.g. tests taken throughout the day. (To make matters worse, there are almost surely multiple different sources of dependence between "trials", so trying to identify them is futile).
In summary, I've simplified my model for dependence into "subjects", with "trials" that aren't independent, but the impact of this dependence really only matters for faithfully reporting confidence intervals / p-values.
### The specific test that brings me here today:
I want to see how much of the explained variance from $x \to y$ might be attributable to other variables $\mathbf z$, which are also correlated with $x$ and $y$. Partial correlation, semipartial (a.k.a. part) correlation, and coefficient of partial determination, all seem like appropriate ways of exploring this question. The data are not Gaussian and the dependences aren't linear, so I'll be working with rank correlations.
Don't do it this way:
My first approach was to directly measure the difference in R² when using $(x)$ as a predictor compared to $(x,\mathbf z)$. The variables $\mathbf z$ were fairly heterogeneous, and I was tempted toward model selection. This got messy, computational performance was abysmal, and I stopped being able to reason about what was happening. (This is the approach I've used in papers with black-box models in the past; It works, but on bad days you'll find yourself waiting weeks and looking for high-performance computing solutions to run your stepwise-greedy-selection using doubly-crossvalidated lasso and shuffle tests and hierarchical bootstrapped intervals for each step along the way).
### There has to be a better way! (especially if I only care about correlations).
I found the [pingouin](https://pingouin-stats.org/build/html/generated/pingouin.partial_corr.html#pingouin.partial_corr) package, which has a nice partial (and semipartial) correlation function.
To remove per-subject effects, I converted $(x,y,\mathbf z)$ to ranks within each subject separately before aggregating. Now, unsurprisingly, if I just toss in all 1000 trials, I get some weak partial correlations with absurdly small ($<10^{-20}$) p-values.
This seems embarrassingly spurious, and I dare not report it in a publication.
My intuition is this:
- I don't know how pingouin.partial_corr is computing p-values for its partial (and semipartial) correlations, but it's probably using a normal approximation assuming $N$ independent trials (very likely assuming $N-1$ degrees of freedom)
- What I probably want are confidence intervals and p-values that use $M-1$ degrees of freedom, where $M$ is the number of subjects.
- I suspect that I can just rescale p-values along the lines of norm.cdf(norm.ppf(pvalue)*sqrt((M-1)/(N-1))) (scipy.stats.norm).
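For concreteness, the rescaling idea in the last bullet can be sketched as follows (plain Python via `statistics.NormalDist` rather than scipy; this merely implements the ad-hoc formula above and carries no more rigour than it does):

```python
import math
from statistics import NormalDist

def rescale_pvalue(p, N, M):
    """Ad-hoc shrinkage of a p-value computed under N independent
    observations toward one based on M effective observations.
    Mirrors norm.cdf(norm.ppf(p) * sqrt((M-1)/(N-1))) from the
    question; it is not an established procedure."""
    z = NormalDist().inv_cdf(p)  # map the p-value to a z-score
    return NormalDist().cdf(z * math.sqrt((M - 1) / (N - 1)))

# Example: a tiny p-value from ~1000 trials, deflated to ~30 subjects
p_adj = rescale_pvalue(1e-20, N=1000, M=30)
print(p_adj)
```

As the bullets anticipate, even an extreme p-value becomes unremarkable once deflated this aggressively, which is exactly why a principled effective-sample-size argument (rather than this guess) is needed.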
There are two problems with this:
1. I just made it up, and need something with actual scholarly rigour (i.e. citation needed!);
2. It makes all my results non-significant.
Naturally (1) is the bigger problem, but I have real reason to believe that (2) is also wrong. I know that there are dependencies between trials per subject, but they aren't so dependent that I need to pretend that all trials within a subject are no better than a single data point. This is wasteful of statistical power.
I've also considered fitting the model per subject, then aggregating. I could, for example, treat each (semi)partial correlation $\rho$ with standard error $s$ as $\sim \mathcal N( \rho, s^2)$, and multiply these distributions together à la Bayes' rule. This is a bit iffy, as per-subject data are limited enough that I worry about normality assumptions. I could also aggregate p-values using Fisher's method (a bit dicey if correlations don't have a consistent sign?).
Really I just want something that is fast, easy to run, perfectly defensible, suitably powerful, and simple enough to explain that I don't have to add a new sub-section to Methods for what should be an incidental analysis. (Also I probably won't be able to expediently use solutions involving R or STATA for this paper, but happy to learn new skills for the future.)
Any ideas?
| Correct degrees-of-freedom for a (semi)partial-correlation analysis that aggregates multi-level data with unknown dependencies? | CC BY-SA 4.0 | null | 2023-02-28T17:14:22.450 | 2023-02-28T17:14:22.450 | null | null | 102631 | [
"mixed-model",
"multilevel-analysis",
"aggregation",
"semi-partial"
] |
606921 | 2 | null | 606813 | 1 | null | There typically is greater dispersion in sequencing count data than would be expected from a Poisson distribution, where the variance must equal the mean. [Negative-binomial regression](https://stats.oarc.ucla.edu/r/dae/negative-binomial-regression/) often is used to evaluate such data, as in the Bioconductor [edgeR](https://doi.org/doi:10.18129/B9.bioc.edgeR) and [DESeq2](https://doi.org/doi:10.18129/B9.bioc.DESeq2) packages.
It will probably be most efficient to adapt the tools in a package like one of those to analyze all of your data together.* They can accept complex experimental designs, although you might have to do some work to get the models specified properly. Using all of your data together is typically more powerful than a set of separate species-by-species tests. If there's one particular species of primary interest, however, you certainly can examine it with a standard negative-binomial regression.
You will have to account for the multiple measurements within plots. In principle you could treat the plots as separate fixed effects, but in your case that leads to a lot of extra coefficients to estimate. The [limma package](https://doi.org/doi:10.18129/B9.bioc.limma) allows you to specify things like plots as random effects for count data, greatly decreasing the number of coefficients.
With respect to [compositional-data](https://stats.stackexchange.com/tags/compositional-data/info) in [metabarcoding](https://en.wikipedia.org/wiki/Metabarcoding), that might not be something you need to worry about in your application. If all sequencing reads could be mapped to a species, then you could only independently evaluate 1 fewer than the total number of species. In that case, the results on the last species would be completely determined by the results on all the others. I suspect that many reads aren't successfully mapped, however, so that you aren't stuck with that limitation.
---
*Try looking for packages specific to analyzing metabarcoding data on [Bioconductor](http://www.bioconductor.org).
| null | CC BY-SA 4.0 | null | 2023-02-28T17:45:32.967 | 2023-02-28T17:45:32.967 | null | null | 28500 | null |
606922 | 1 | null | null | 0 | 30 | I am trying to come up with priors for the intercept and coefficients of a Bayesian regression model on the log scale, but I can't get the resulting distribution of the prior to be diffuse once I log transform it. The model attempts to estimate the abundance of several bird species at a number of points, and assumes that the mean abundance of each bird species follows a normal distribution from some grand mean, and there is a random effect on abundance at each point.
Here's a simulation of 100 runs for priors I have tried (in R):
```
### Priors
mean_alpha0 <- rnorm(100, 0, 31) # Prior for log abundance grand mean (sd of 31 = precision of 0.001 in JAGS)
sd_alpha0 <- runif(100, 0, 10) # Prior for standard deviation on log abundance
alpha0 <- rnorm(100, mean_alpha0, sd_alpha0) # Mean log abundance for each species
sd_eps <- runif(100, 0, 10) # Prior for standard deviation of random point effect on log abundance
eps <- rnorm(100, 0, sd_eps) # Random point effect on log abundance
### Abundance model
loglambda <- alpha0 + eps # Log of expected abundance at each point
N <- rpois(100, exp(loglambda)) # Realized abundance at each point
hist(N)
```
[Distribution of priors after log transformation](https://i.stack.imgur.com/0n1BL.png)
As you can see from the histogram, the resulting distribution does not appear diffuse at all, with most of its weight in a range that is completely unrealistic for bird abundance. What are some priors I can use that will create a more diffuse distribution on the log scale for this model? Also, if anybody has resources on good systems for determining priors for transformed scales, please let me know, because what I've found online has been very sparse.
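To see the mechanics of the problem (a quick illustrative sketch in Python, mirroring the R simulation above; the specific quantiles are illustrative only): exponentiating draws from a Normal(0, 31) prior puts essentially all of the mass at absurd abundances, while a much tighter prior such as Normal(0, 1) stays in a plausible range.

```python
import math
import random

random.seed(42)

def prior_abundance_quantiles(sd, n=100_000):
    # Exponentiate draws from a Normal(0, sd) prior on log-abundance
    draws = sorted(math.exp(random.gauss(0, sd)) for _ in range(n))
    return draws[n // 2], draws[int(0.975 * n)]  # median, 97.5th pct

med_wide, hi_wide = prior_abundance_quantiles(31)
med_tight, hi_tight = prior_abundance_quantiles(1)
print(med_wide, hi_wide)    # astronomically large upper quantile
print(med_tight, hi_tight)  # plausible-looking abundances
```

The wide prior's 97.5th percentile is on the order of $e^{1.96 \times 31}$, which is why the realized Poisson counts look nothing like bird data.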
| What Bayesian priors are diffuse on the log scale? | CC BY-SA 4.0 | null | 2023-02-28T17:46:46.193 | 2023-02-28T17:46:46.193 | null | null | 381067 | [
"r",
"distributions",
"bayesian",
"mathematical-statistics",
"jags"
] |
606923 | 1 | 606926 | null | 4 | 78 | I want to estimate the following regression equation:
$y = a + \frac{b}{r x + 1}$
$x$ is the independent variable, and $a$, $b$ and $r$ are parameters to be estimated. I have been told that the model is not identified, I suppose because I am trying to estimate both $b$ and $r$ with just one independent variable. Theoretically, what I am most interested in is $a$, which is the asymptotic value of $y$ as $x$ approaches infinity.
However, I was wondering whether this model can be identified in a nonlinear regression procedure, for example by using the nl command in Stata or the nlstools package in R?
| Can nonlinear regression identify this equation? | CC BY-SA 4.0 | null | 2023-02-28T17:49:32.213 | 2023-03-01T12:54:46.957 | 2023-03-01T12:54:46.957 | 381070 | 381070 | [
"regression",
"nonlinear",
"identifiability"
] |
606924 | 1 | null | null | 1 | 19 | I am new to multivariate analysis.
I have been given a set of survey answers with around 120 binary questions. My goal is to first perform cluster analysis to find the optimal number of respondent "types".
In the next step, I want to analyze what distinguishes the different clusters: correlations between different "trait" occurrences, and which traits (i.e. survey answers) often appear together within each cluster.
I am new to this type of ordination analysis, and wondered if anyone can point me in the direction of good resources. I can imagine that community ecology (species presence/absence data) might do similar things.
| Analyzing binary survey data - after clustering, how move forward in multivariate/ordination analysis? | CC BY-SA 4.0 | null | 2023-02-28T17:58:16.910 | 2023-02-28T21:06:32.897 | 2023-02-28T21:06:32.897 | 16974 | 381071 | [
"clustering",
"binary-data"
] |
606925 | 2 | null | 549371 | 2 | null | This is not entirely a statistics question, and mainly a programming one. To answer the statistics aspect, this is very simple once you understand what sklearn is doing: the `chi2` function performs a [goodness-of-fit test](https://en.wikipedia.org/w/index.php?title=Goodness_of_fit&oldid=1117713304#Pearson%27s_chi-square_test) on your data, not a chi-squared test of independence. This is why you end up with different results.
But since you probably need some additional explanation not to take my answer at face value, and since we're at it anyway, here is a short explanation on the programming aspect (i.e. how sklearn performs this goodness-of-fit test): the observed values are the observed counts of 0/1 in $y$ filtered by $x=1$, and the expected values are the proportions of 0/1 in $y$ multiplied by the sum of 1s in $x$.
Below is a piece of code to reconcile scipy and sklearn, which will probably make the explanation above much clearer. The variable `data` comes from the code you provided in your question, and the scipy `chisquare` function performs a goodness-of-fit test:
```
> from scipy import stats
> observed = data[data["x"] == 1]["y"].value_counts()
> print(observed)
0 42
1 33
Name: y, dtype: int64
> expected = data["y"].value_counts(normalize=True)*observed.sum()
> print(expected)
0 45.0
1 30.0
Name: y, dtype: float64
> result = stats.chisquare(observed, f_exp=expected)
> print(result)
Power_divergenceResult(statistic=0.5, pvalue=0.47950012218695337)
```
which is in line with the sklearn output that you mention in your question.
| null | CC BY-SA 4.0 | null | 2023-02-28T18:11:37.910 | 2023-02-28T18:11:37.910 | null | null | 164936 | null |
606926 | 2 | null | 606923 | 4 | null | Yes, nonlinear least squares regression can estimate this. The idea is similar to linear least squares regression. You find estimates $\hat a$ of $a$, $\hat b$ of $b$, and $\hat r$ of $r$ such that, for $\hat y = \hat a +\frac{\hat b}{1 + \hat r x}$, the sum of squared residuals,$\overset{N}{\underset{i=1}{\sum}}\left(y_i - \hat y_i\right)^2$, is minimized. As usual, $\left(y_i - \hat y_i\right)^2$ is the residual for true value $y_i$ and prediction $\hat y_i$.
Unlike ordinary least squares linear regression, however, there is not necessarily a clean formula to calculate the parameter estimates like OLS has $\hat\beta_{ols} = (X^TX)^{-1}X^Ty$. Consequently, numerical methods will be required. Fortunately, software exists to do just that, as I demonstrate below with some `R` code that you might find yourself using.
```
set.seed(2023)
N <- 1000
a <- 1
b <- 200
r <- 3
x <- runif(N, 0, 20)
y <- a + (b)/(1 + r*x) + rnorm(N, 0, 3)
model <- nls(
y ~ a + b/(1 + r*x),
start = list(a = 1, b = 100, r = 1)
)
model
summary(model)
```
You have to pick starting guesses for your parameters, which is what I do in the `start` line. You can fiddle with the starting parameters to see how sensitive your estimates are.
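If you'd rather see the mechanics without a built-in optimizer, here is a dependency-free Python sketch (an illustration of the idea, not what `nls` does internally): for any fixed $r$ the model is linear in $a$ and $b$, so we can profile the residual sum of squares over a grid of $r$ values and solve the small least-squares problem at each one.

```python
def fit_profiled(xs, ys, r_grid):
    """For each candidate r, regress y on z = 1/(r*x + 1) by
    closed-form OLS (intercept a, slope b), and keep the r with
    the smallest residual sum of squares."""
    best = None
    n = len(xs)
    for r in r_grid:
        zs = [1.0 / (r * x + 1.0) for x in xs]
        zbar = sum(zs) / n
        ybar = sum(ys) / n
        szz = sum((z - zbar) ** 2 for z in zs)
        szy = sum((z - zbar) * (y - ybar) for z, y in zip(zs, ys))
        b = szy / szz
        a = ybar - b * zbar
        rss = sum((y - (a + b * z)) ** 2 for y, z in zip(ys, zs))
        if best is None or rss < best[0]:
            best = (rss, a, b, r)
    return best[1:]  # (a, b, r)

# Noiseless data from a=1, b=200, r=3, echoing the R example above
xs = [0.1 * i for i in range(1, 201)]
ys = [1 + 200 / (3 * x + 1) for x in xs]
a_hat, b_hat, r_hat = fit_profiled(xs, ys, [0.5 * k for k in range(1, 21)])
print(a_hat, b_hat, r_hat)
```

With noiseless data this recovers the true parameters; with noise, the grid for $r$ controls the resolution, and you could refine the grid around the best value.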
| null | CC BY-SA 4.0 | null | 2023-02-28T18:14:09.087 | 2023-02-28T18:14:09.087 | null | null | 247274 | null |
606927 | 1 | null | null | 0 | 27 | Given a random variable $X$ with known pdf $f$ and some computer simulation model $g(x): \mathcal
R \rightarrow \mathcal R$, mapping samples $x \sim f$ to a scalar metric $g$, we can estimate the probability $P(g(x) > 0)$ using Direct Monte Carlo Simulation:
$\hat{p} = 1/N \sum_{i=1}^N I(x_i)$, where the indicator function $I(x)$ is $1$ if $g(x)>0$ and $0$ otherwise.
The variance of this estimator is:
${\rm var}[\hat{p}] = \frac{p(1-p)}{N}$, which we estimate to be ${\hat{\rm{var}}}[\hat{p}] = \frac{\hat{p}(1-\hat{p})}{N}$.
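The estimator and its variance estimate above can be sketched directly (plain Python; the standard-normal $X$ and $g(x) = x - 1$ below are illustrative choices, not from the question):

```python
import math
import random

random.seed(0)

def g(x):
    return x - 1.0  # illustrative limit-state function

N = 200_000
hits = sum(1 for _ in range(N) if g(random.gauss(0.0, 1.0)) > 0)
p_hat = hits / N
var_hat = p_hat * (1 - p_hat) / N  # estimated variance of p_hat

# True value for this toy g is P(X > 1) = 1 - Phi(1) ~ 0.1587
print(p_hat, math.sqrt(var_hat))
```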
Now suppose that we do not always observe the outcome of $I$ correctly, i.e., we occasionally record $I=1$ although in reality $g(x) < 0$. This might for instance be the result of using some approximation $\hat g$ in place of $g$.
Assuming that the error between $g(x_i)$ and $\hat g(x_i)$ is normally distributed for all samples $x_i \sim f$ with known mean and variance, is there a way to "propagate" the uncertainty stemming from $\hat g$ to the estimated variance of our estimator ${\hat{\rm{var}}}[\hat{p}]$?
| Effect of Simulation Error on the Monte Carlo Estimator | CC BY-SA 4.0 | null | 2023-02-28T18:23:56.987 | 2023-02-28T18:30:09.847 | 2023-02-28T18:30:09.847 | 353621 | 353621 | [
"simulation",
"monte-carlo",
"asymptotics",
"error-propagation"
] |
606928 | 1 | 606932 | null | 0 | 65 | I'm trying to understand this formula for the variance in Kohler's PGM text. It's theorem 7.3:
Let $Y$ be a linear Gaussian of parents $X_1, \dots, X_k$:
$$p(Y|X) = \mathcal{N}(\beta_0 + \beta^T\mathbf{x}; \sigma^2)$$
Assume that $X_1, \dots, X_k$ are jointly Gaussian with distribution $\mathcal{N}(\boldsymbol{\mu}, \Sigma)$. Then:
The distribution of $Y$ is such that:
$$\sigma^2_{Y} = \sigma^2 + \beta^T \Sigma \beta$$
How is this variance derived? I assume the variance of a sum of Gaussians equals the weighted sum of their variances plus the covariance terms. In this case, it seems like we're summing weighted variances, where the coefficients are the weights. However, why is it then not just:
$$\sigma^2_{Y} = \sigma^2 + \beta^2 \Sigma$$
That would follow the usual variance formula $\operatorname{Var}(aX + bY) = a^2\operatorname{Var}(X) + b^2\operatorname{Var}(Y)$ (for uncorrelated $X$ and $Y$).
| Derivation of the formula of the variance of a linear gaussian | CC BY-SA 4.0 | null | 2023-02-28T18:24:43.563 | 2023-02-28T19:26:09.933 | 2023-02-28T19:24:05.633 | 362671 | 43080 | [
"normal-distribution",
"variance",
"linear",
"graphical-model"
] |
606929 | 1 | null | null | 4 | 46 | Let's say I have some regression model, say $\mathbb E[y]=\beta_0+\beta_1x_1+\beta_2x_2$. However, I am fitting this regression because I want to estimate $\beta_1$ while "controlling for" the effect of $x_2$, as is common.
A typical approach might be to fit the regression with a method like least squares or maximum likelihood estimation in order to estimate the entire parameter vector $\beta = (\beta_0,\beta_1,\beta_2)$. However, if I do not care about making an accurate estimate of $\beta_2$ or even $\beta_0$, is there a way to trade off some accuracy in the estimation of those coefficients in order to improve accuracy in estimating the $\beta_1$ of interest?
| Can we sacrifice the estimation accuracy of some regression coefficients in order to gain accuracy in estimating the coefficients of primary interest? | CC BY-SA 4.0 | null | 2023-02-28T18:31:09.770 | 2023-02-28T18:31:09.770 | null | null | 247274 | [
"regression",
"mathematical-statistics",
"estimation",
"inference"
] |
606930 | 2 | null | 70501 | 0 | null | As long as the treatment occurs at the same time for all units, the estimator of DiD is equivalent to the one from panel data (usually called Two-way fixed effects), as shown by Jeffrey Wooldridge in [this paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3906345), section 5. To show it, we can generate some random data, apply an effect and estimate with both methods.
```
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from linearmodels.panel import PanelOLS

np.random.seed(1)
n=10 #number of units
ID=np.arange(n) #list of IDs
t=10 #periods of time
#Create dataset
data=pd.DataFrame({'ID':np.tile(ID,t),'year':np.repeat(np.arange(t),n),
                   'after_treatment':np.repeat([0,1],n*t//2),'treatment':np.zeros(n*t)})
data.loc[data['ID']<3, 'treatment']=1 #Mark these units as treated in the dataset
data['interaction']=data['after_treatment']*data['treatment'] #Create interaction variable
ui=np.random.normal(0,2,n*t) #Error term centered at zero with constant variance
data['out']=2*data['after_treatment']+2*data['treatment']+3*data['interaction']+ui #generate outcome
```
First, let's estimate with the classic diff-in-diff equation:
$$Y_{it} = \beta_0 + \beta_1 \textrm{Post}_t + \beta_2 \textrm{Treated}_i + \beta_3 \textrm{Treated}_i \textrm{Post}_t + e_{it}$$
```
print(smf.ols('out~after_treatment*treatment', data=data).fit().params['after_treatment:treatment'])
```
The result is 2.8685.
If we estimate this as panel data including time and unit fixed effects, we are doing:
$$y_{it}=\alpha_{i}+ \gamma_{t} +\tau \, \textrm{Treated}_i \times \textrm{Post}_t +\epsilon_{it}$$
```
df_panel=data.set_index(['ID', 'year'])
Y=df_panel["out"] #Dependent variable
X=df_panel[[ "interaction"]] #Independent variables
X=sm.add_constant(X) #adding constant
model=PanelOLS(Y,X, entity_effects=True, time_effects=True).fit()
print(model.params['interaction'])
```
The result is the same, 2.8685
| null | CC BY-SA 4.0 | null | 2023-02-28T18:33:51.160 | 2023-02-28T18:33:51.160 | null | null | 208995 | null |
606931 | 2 | null | 503365 | 6 | null | Edit: after revisiting this answer, I realized that some of the language I used previously was not precise enough and could confuse people. In particular, using the term data-generative process when our model is discriminative. I've attempted to fix such issues.
Two years too late but here's my go at it :)
DISCLAIMER: this follows Bishop's amazing book [Pattern Recognition and Machine Learning](https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf). Some sentences are directly copied (particularly in the deterministic approach section). I have not used quotations because it could break up the flow of the words, as I often make additional statements in between. I'm not sure if that's not good practice, so please let me know if that is not the way to do it.
## Setup
We observe a real-valued input variable $x$ and we wish to use this scalar observation to predict the value of a real-valued target variable $t$. To this end, we have gathered a set of $N$ realizations $x_i$ of $x$ together with a corresponding set of realizations $t_i$ of $t$
\begin{equation}
\mathbf{x} = (x_1, x_2,...x_N)^T \; , \; \mathbf{t} = (t_1, t_2,...t_N)^T
\end{equation}
In this synthetic curve fitting example, the data are generated by the function $\text{sin}(2 \pi x)$, with random noise included in the target values, so we can write
\begin{equation}
t = \text{sin}(2 \pi x) + \epsilon, \qquad \epsilon \sim N(0, \beta^{-1}).
\end{equation}
Our goal is to exploit the training set in order to make predictions of the value $\hat{t}$ of the target variable for some new value $\hat{x}$ of the input variable. This involves implicitly trying to find the underlying function which we know to be $\text{sin}(2 \pi x)$.
## Deterministic Approach: Least Squares
### Model Setup
In the deterministic--or least squares--approach, we simply consider our output $t$ to be a parameterized function of the input variable $x$ and weights $\mathbf{w}$. For this example, we will consider a hypothesis set of all polynomial functions of the following form:
\begin{equation}
y(x, \mathbf{w}) = w_0 + w_1x + w_2x^2 + \dots + w_Mx^M = \sum_{j=0}^M w_jx^j \tag{1}
\end{equation}
where $M$ is the order of the polynomial, $x^j$ denotes $x$ raised to the power of $j$, and the polynomial coefficients $w_0,...,w_M$ are collectively denoted by the vector $\mathbf{w}$. Although $y(x, \mathbf{w})$ is a nonlinear function of $x$, it is a linear function of the coefficients $\mathbf{w}$. Functions that are linear in the unknown weights have special properties and are called linear models.
### Inference
We want to find $\mathbf{w}$ so that $(1)$ fits the data. To do so, we find a $\mathbf{w}$ that minimizes some notion of "error". The fit to the data is constrained by the family of functions we've decided to consider: polynomial functions of order $M$.
Our notion of error can be mathematically expressed as an error function that measures the misfit between the function $y(x, \mathbf{w})$ for a given value of $\mathbf{w}$, and the training data points. A widely used --and simple-- error function is given by the sum-of-squares of the errors between predictions $y(x_n, \mathbf{w})$ and the corresponding target values $t_n$, so that we minimize
\begin{equation}
E(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^N \left( y(x_n, \mathbf{w})- t_n \right)^2 \tag{2}
\end{equation}
where $\frac{1}{2}$ is for convenience later on. A visual representation of the sum-of-squares error function in $(2)$ is shown here:
[](https://i.stack.imgur.com/xZOWP.png)
The sum-of-squares error function in $(2)$ is computed by taking one half the sum of the squared distances of each data point from the function $y(x, \mathbf{w})$. These displacements are shown in red.
Solving for $\mathbf{w}$ in this setting is fairly straightforward. Because the error function $(2)$ is a quadratic function of the coefficients $\mathbf{w}$, its derivative with respect to $\mathbf{w}$ will be linear in the elements of $\mathbf{w}$. Therefore, the minimization of the error function has a unique solution, denoted $\mathbf{w}^{\star}$ which can be found in closed form. The resulting polynomial function is then $y(x, \mathbf{w}^{\star})$.
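As a minimal sketch of that closed-form solution (plain Python, restricted to the simplest case $M = 1$, i.e. fitting $y(x, \mathbf w) = w_0 + w_1 x$; the data below are made up):

```python
def ols_line(xs, ts):
    """Closed-form least-squares fit of t ~ w0 + w1*x, obtained by
    setting dE/dw0 = dE/dw1 = 0 and solving the linear system."""
    n = len(xs)
    xbar = sum(xs) / n
    tbar = sum(ts) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxt = sum((x - xbar) * (t - tbar) for x, t in zip(xs, ts))
    w1 = sxt / sxx
    w0 = tbar - w1 * xbar
    return w0, w1

xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ts = [2.0 + 3.0 * x for x in xs]  # exact line, no noise
w0, w1 = ols_line(xs, ts)
print(w0, w1)  # recovers 2.0 and 3.0
```

For general $M$, the same derivative-equals-zero argument leads to a linear system in all $M+1$ coefficients (the normal equations), which is why the minimizer $\mathbf{w}^{\star}$ is unique and available in closed form.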
## Maximum Likelihood Approach
In the Least Squares approach we viewed curve fitting purely as an error minimization problem. Now we take a first step in viewing it from a probabilistic perspective so that we can express uncertainty in our predictions.
### Model Setup
As in the least-squares approach we could consider $x$ to be related to $t$ through a parameterized function $y(x, \mathbf{w})$. However, we may not want to make such a strong statement as saying $t$ is exactly equal to $y(x, \mathbf{w})$. This could be because we think there is noise in the observations $\mathbf{t}$, for example due to measurement error. To introduce such uncertainty, we need to place a distribution over the target variable $t$. A sensible distributional assumption is to place a Gaussian distribution over $t$, with its mean given by the parameterized function $y(x, \mathbf{w})$, and its variance being fixed and unknown. This is visualized here:
[](https://i.stack.imgur.com/LNFyD.png)
Illustration of a Gaussian conditional distribution over $t$ given $x$, where the mean is given by some function of $x$, and the variance is fixed.
A perhaps more intuitive perspective is to think of our model as describing a process that produces $t$ from $x$ and parameters $\mathbf{w}$. In the least-squares approach, this process was a deterministic function $y(x, \mathbf{w})$. In this section, we extend the process by assuming that each $t$ is the result of $y(x, \mathbf{w})$ plus some additive uncertainty, where that uncertainty takes the form of a zero-mean Gaussian distribution with unknown variance. This is to say that we are assuming, to have gotten a particular instance of $t$:
- we are given an instance of $x$;1
- nature used this instance of $x$ to get the output of the parameterized function $y(x, \mathbf{w})$;
- nature sampled from a zero-mean Gaussian, with fixed and unknown variance, and added it to the output.
This leads to the following model
\begin{equation}
t = y(x, \mathbf{w}) + \epsilon, \quad \epsilon \sim N(0, \beta^{-1}) \quad \Longleftrightarrow \quad p(t | x, \mathbf{w}, \beta) = N\left( t \mid y(x, \mathbf{w}), \beta^{-1} \right) \tag{4}
\end{equation}
where adding the deterministic term $y(x, \mathbf{w})$ simply shifts the mean of the zero-mean Gaussian noise, and we've defined $\beta = \frac{1}{\sigma^2}$. $(4)$ is referred to as the Gaussian noise model.
### Inference
As the name suggests, in order to use the training data $( \mathbf{x}, \mathbf{t})$ to determine the values of the unknown parameters $\mathbf{w}$ and $\beta^{-1}$, we will search for a setting of $\mathbf{w}$ that maximizes the likelihood of the data $\mathbf{t}$. In other words, we've defined a process that produces $t$ from $x$, and we want to find the setting of the parameters $\mathbf{w}$ that maximizes the likelihood of our process having created our observed $\mathbf{t}$ from our observed $\mathbf{x}$.
Assuming the data $(\mathbf{x}, \mathbf{t})$ were independently sampled from $(4)$, the likelihood function is simply the product of each conditional distribution and is evaluated for a particular setting of $\mathbf{w}$:
\begin{equation}
p(\mathbf{t}|\mathbf{x}, \mathbf{w}, \beta) = \prod_{n=1}^N N(t_n|y(x_n, \mathbf{w}), \beta^{-1}) \tag{5}
\end{equation}
Each time we choose a setting for $\mathbf{w}$ and plug it into our model, we are defining a conditional distribution--in particular the one in $(4)$. This conditional distribution may agree with the data we have, or it may not. Examples of agreement and disagreement are shown in Figure 3.
[](https://i.stack.imgur.com/lCZah.png)
A Gaussian noise model shown for a handful of $x$, with two different settings for $\mathbf{w}$ and $\beta$. On the left is a setting of $\left( \mathbf{w}, \beta \right)$ that yields a model that disagrees with our observed data. On the right is a setting of $\left( \mathbf{w}, \beta \right)$ that yields a model that agrees much better with our observed data. Maximum likelihood looks for the setting of $\left( \mathbf{w}, \beta\right)$ that best agrees with our observed data.
We begin by finding the maximum likelihood estimates for $\mathbf{w}$. For this example, this amounts to taking the derivative of the likelihood $(5)$, setting it equal to zero, and then solving for $\mathbf{w}$. So again, the maximum likelihood setting for $\mathbf{w}$ can be found in closed form. For numerical stability and convenience, it is common to maximize the log likelihood instead of $(5)$, which can be written as
\begin{equation}
\text{ln} p(\mathbf{t}|\mathbf{x}, \mathbf{w}, \beta^{-1}) = -\frac{\beta}{2} \sum_{n=1}^N \left(y(x_n, \mathbf{w}) - t_n \right)^2 + \frac{N}{2} \text{ln} \beta - \frac{N}{2} \text{ln}(2 \pi) \tag{6}
\end{equation}
For the purpose of taking the derivative of $(6)$ with respect to $\mathbf{w}$, we can omit the last two terms as they do not depend on $\mathbf{w}$. We can also replace the coefficient $\frac{\beta}{2}$ with $\frac{1}{2}$ since scaling $(6)$ by a constant won't change the location of the maximum with respect to $\mathbf{w}$. Lastly, we can equivalently minimize the negative log likelihood. This leaves us with minimizing the following:
\begin{equation}
\frac{1}{2} \sum_{n=1}^N \left(y(x_n, \mathbf{w}) - t_n \right)^2 \tag{7}
\end{equation}
and so we see that the sum-of-squares error function has arisen as a consequence of maximizing the likelihood under the assumption of a Gaussian noise distribution.
Once we've found the maximum likelihood estimate for $\mathbf{w}$, which we will denote $\mathbf{w}_{\text{ML}}$, we can use it to find the setting for the precision parameter $\beta$ of the Gaussian conditional distribution. Maximizing $(6)$ with respect to $\beta$ gives
\begin{equation}
\frac{1}{\beta_{\text{ML}}} = \frac{1}{N} \sum_{n=1}^N \left(y(x_n, \mathbf{w}_{\text{ML}}) - t_n \right)^2
\end{equation}
and so we see that the maximum likelihood procedure yields a variance $\sigma^2_{\text{ML}} = \beta_{\text{ML}}^{-1}$ equal to the average squared deviation between the observed data points and the fitted $y(x, \mathbf{w}_{\text{ML}})$.
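To make the two-step recipe concrete (a toy sketch in plain Python with made-up data, not from the original text): first find $\mathbf{w}_{\text{ML}}$ by least squares, then set $\beta_{\text{ML}}^{-1}$ to the mean squared residual.

```python
import random

random.seed(1)

# Synthetic data: t = 1 + 2x + Gaussian noise with true variance 0.25
xs = [i / 100 for i in range(100)]
ts = [1 + 2 * x + random.gauss(0, 0.5) for x in xs]

# Step 1: w_ML for a degree-1 polynomial is the least-squares fit
n = len(xs)
xbar, tbar = sum(xs) / n, sum(ts) / n
w1 = sum((x - xbar) * (t - tbar) for x, t in zip(xs, ts)) / \
     sum((x - xbar) ** 2 for x in xs)
w0 = tbar - w1 * xbar

# Step 2: 1/beta_ML is the mean squared residual around the fit
inv_beta_ml = sum((t - (w0 + w1 * x)) ** 2 for t, x in zip(ts, xs)) / n
print(w0, w1, inv_beta_ml)  # roughly 1, 2, and 0.25
```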
---
Footnotes:
1 An important distinction is that we are not modelling how we are given $x$. To do so would be to model $p(x)$, thereby making our model a generative one instead of a discriminative one. Here, we are only modelling $p(t\vert x)$, not $p(t,x) = p(t\vert x) p(x)$.
| null | CC BY-SA 4.0 | null | 2023-02-28T18:36:37.107 | 2023-03-12T18:11:30.090 | 2023-03-12T18:11:30.090 | 381061 | 381061 | null |
606932 | 2 | null | 606928 | 3 | null | What you need is the law of total variance formula $\operatorname{Var}(Y) = \operatorname{Var}(E[Y|X]) + E[\operatorname{Var}(Y|X)]$.
By assumption, $E[Y|X] = \beta_0 + \beta'X, \operatorname{Var}(Y|X) = \sigma^2$. It then follows by $X \sim N(\mu, \Sigma)$ that $\beta_0 + \beta'X \sim N(\beta_0 + \beta'\mu, \beta'\Sigma\beta)$, whence
\begin{align}
\operatorname{Var}(Y) = \operatorname{Var}(E[Y|X]) + E[\operatorname{Var}(Y|X)] = \beta'\Sigma\beta + \sigma^2.
\end{align}
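A quick Monte Carlo check of this identity in the one-dimensional special case (all constants below are arbitrary): with $X \sim N(\mu, \tau^2)$ and $Y|X \sim N(\beta_0 + \beta_1 X, \sigma^2)$, the formula gives $\operatorname{Var}(Y) = \beta_1^2\tau^2 + \sigma^2$.

```python
import random

random.seed(1)

# One-dimensional special case of the identity: X ~ N(mu, tau^2) and
# Y | X ~ N(b0 + b1*X, sigma^2) give Var(Y) = b1^2 * tau^2 + sigma^2.
mu, tau, b0, b1, sigma = 3.0, 2.0, 1.0, 0.5, 1.5
n = 200_000

ys = []
for _ in range(n):
    x = random.gauss(mu, tau)
    ys.append(random.gauss(b0 + b1 * x, sigma))

y_bar = sum(ys) / n
var_y = sum((y - y_bar) ** 2 for y in ys) / (n - 1)
print(var_y, b1**2 * tau**2 + sigma**2)  # both close to 3.25
```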
| null | CC BY-SA 4.0 | null | 2023-02-28T18:37:39.420 | 2023-02-28T19:26:09.933 | 2023-02-28T19:26:09.933 | 20519 | 20519 | null |
606933 | 2 | null | 606928 | 1 | null | The reason you need to put the covariance between the two betas, and why one needs to be transposed (the superscript "T" on the first one) is because covariance is a matrix, and beta is a vector, whereas variance is always a scalar (except in situations where Y is itself a matrix of random variables). Specifically, if there are n Gaussians, the covariance is nxn and beta is a n-dimensional column vector.
In your suggested formula, "squaring" beta--which amounts to a dot product of beta with itself, would give a scalar, which when multiplied by the nxn covariance matrix would give you an nxn matrix, which isn't what you want.
If you expand out the matrix product in the second term of the actual result, you in fact get a sum of nxn weighted terms, like your second term, one for each pair of Gaussians (including pairs where both are the same Gaussian, which come from the diagonal elements of the covariance). These diagonal terms give you a weighted sum of the individual variances, while the off-diagonal terms give you the "extra" variance due to cross-correlation between different Gaussians.
| null | CC BY-SA 4.0 | null | 2023-02-28T19:04:41.830 | 2023-02-28T19:04:41.830 | null | null | 381026 | null |
606934 | 2 | null | 606484 | 0 | null | I'll provide a computational response and a conceptual response. Computationally, I agree with the Yashaswi that findings account for baseline measures when those baseline measures are included in the analytic model. If a direct analysis of the difference between the post-minus-pre averages of the two groups is desired, a difference test (e.g., t-test) could be performed. However, conceptually, because there was not sampling (i.e., it sounds like there was random assignment to treatment but not random sampling of individuals from the population to participate in the study), there is not a need for inferential statistics. That is, the population values are known and so do not need to be estimated. Similarly, differences among groups can be calculated through basic arithmetic and do not need to be inferred using standard errors (which is a sampling statistic). Findings can be reported in terms of average change in strenuous versus relaxed groups; the impact of the groups not being equivalent prior to treatment is not known. Replications and alternative designs could provide insight into unanswered questions.
Re: @num_39's assertion that this response is incorrect: I disagree but would agree that industry standard would be to attempt to generalize from the individuals in the study by making strong assumptions that either the specific individuals are representative of all possible individuals who could have been selected for the study or the specific combination of individuals assigned to treatment and control conditions are representative of all possible combinations. In this study, my bet would be that neither of those assumptions hold. So I would recommend describing the actual findings, rather than speculating about what would happen with different individuals or combinations of individuals. See the replication crisis literature in any field for the problems with relying on strong assumptions and the rampant over-generalization of findings.
| null | CC BY-SA 4.0 | null | 2023-02-28T19:07:53.780 | 2023-03-01T16:40:29.097 | 2023-03-01T16:40:29.097 | 380516 | 380516 | null |
606935 | 2 | null | 606866 | 2 | null | Given the density is
$$f(x; a, \theta) := \theta a^{\theta} x^{-(\theta+1)}\boldsymbol 1_{(a,\infty)}(x).\tag 1\label 1$$
The pdf of the first order statistic $X_{(1)}$ can be easily shown to be $$g(x_1):= n\theta a^{n\theta}{x_1}^{-(n\theta+1)} \boldsymbol 1_{(a,\infty)}(x_1)\tag 2\label 2.$$
Observation $1.$ For known $\theta,~\hat a:= X_{(1)} $ is sufficient and complete for $a.$
Use $\eqref 2$ to express the joint pdf of the sample in the form $$f(\mathbf x) = g\left(\hat a; a\right)\times \left\{\frac1n \theta ^{n-1}\left(\prod x_i\right)^{-(\theta+1)}{\hat a}^{n\theta+1}\right\} .\tag 3$$ Sufficiency follows from Neyman factorization theorem.
Consider an arbitrary Borel measurable function $\varphi$ such that $$\mathbb E\left[\varphi\left(\hat a\right)\right] = 0, ~\forall a\in (0,\infty).\tag 4\label 4$$
$\eqref 4$ implies $$\int_a^\infty\varphi\left(\hat a\right){\hat a}^{-(n\theta+1)}~\mathrm d\hat a = 0, ~\forall a\in (0,\infty).\tag 5\label 5$$ Define (denoting $\varphi:= \varphi^+-\varphi^-$) $\nu^{\pm}(A) :=\int_A\varphi^{\pm}\left(\hat a\right){\hat a}^{-(n\theta+1)}~\mathrm d\hat a; ~~A:= [a, \infty); $ from $\eqref 5, ~\nu^+(A) = \nu^-(A)$ whence $\varphi^+ = \varphi^- ~\textrm{a.s.} ~[\lambda].$ The result follows.
$\blacksquare$
Observation $2.$ Define $Z:= \sum_{i=1}^n\ln\frac{Y_i}{Y_1} $ where $Y_i$ are the order statistics. Then $\hat \theta:= Z$ and $\hat a = Y_1$ are stochastically independent.
The approach would be to show $M_Z(t),$ the moment generating function of $Z$ doesn't depend on $a,$ whence by Observation $1.$ and [Basu's theorem](https://math.stackexchange.com/a/3577124/1074816), $Z$ would be independent of $Y_1.$
$$M_Z(t) = n!\int_a^\infty\int_a^{y_n}\cdots\int_a^{y_2}\exp\left(t\sum_{i=1}^n\ln\frac{Y_i}{Y_1}\right)\theta^na^{n\theta}\prod_{i=1}^n y_i^{-(\theta+1)}~\mathrm dy_i ;$$
substitute $y_i\mapsto \frac a{y_i}~\forall i\in\{1,2,\ldots, n\}.$ It is easy to see it is one-to-one and $|\mathcal J| = a^n.$ Therefore, $M_Z(t)$ reduces to a form that doesn't depend on $a.$
$\blacksquare$
Now, we would concentrate on the distribution of $Z.$ Note that $\sum_{i=1}^n\ln\frac{X_i}{X_1}$ doesn't depend on the ordering of $x_2, x_3, \ldots, x_n.$ So assuming $x_1< x_2, \ldots, x_n, ~~\sum_{i=1}^n\ln\frac{X_i}{X_1} = \sum_{i=1}^n\ln\frac{Y_i}{Y_1}.$ Also
$$g(x_2, \ldots, x_n\mid x_1) = \frac{\prod_{i=2}^nf(x_i)}{[1- F(x_1)]^{n-1}} .\tag 6$$ Therefore, the characteristic function of $Z$ given $X_1 = x_1$
\begin{align}\phi(t) &= \mathbb E\left[\exp\left(it\sum_{i=1}^n\ln\frac{X_i}{X_1}\right)\bigg \vert ~x_1\right]\\ &=\left[\int_{x_1}^\infty\frac{\exp\left(it\ln\frac{x_2}{x_1}\right)f(x_2)}{1- F(x_1)}~\mathrm dx_2\right]^{n-1} \tag 7.\end{align}
From this, the pdf of $Z$ is (by Observation $2.$)
\begin{align}f(z) &= \frac1{2\pi}\int_{-\infty}^{\infty}\exp{(-itz)}\phi(t)~\mathrm dt.\end{align}
Simplify it to deduce (cf. $\rm [II]$)
$$f(z) = \frac{\theta}{\Gamma{(n-1)}}z^{n-2}e^{-\theta z}.\tag 8 \label 8$$
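As a numerical check of $\eqref 8$ (a simulation sketch with arbitrarily chosen $a$, $\theta$ and $n$): if $Z \sim \operatorname{Gamma}(n-1, \theta)$ then $\mathbb E[Z] = (n-1)/\theta$, regardless of $a$.

```python
import math
import random

random.seed(2)

# Z = sum_i ln(Y_i / Y_1) should follow Gamma(n - 1, theta) by (8),
# so E[Z] = (n - 1)/theta, for any value of a.
a, theta, n, reps = 2.0, 1.5, 10, 50_000

zs = []
for _ in range(reps):
    # Inverse-CDF sampling from the Pareto density (1): X = a * U^(-1/theta).
    xs = [a * (1.0 - random.random()) ** (-1.0 / theta) for _ in range(n)]
    x1 = min(xs)
    zs.append(sum(math.log(x / x1) for x in xs))

z_bar = sum(zs) / reps
print(z_bar, (n - 1) / theta)  # both close to 6.0
```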
Observation $3.$ $\hat \theta$ is complete.
For arbitrary Borel measurable function $\varphi,$
\begin{align}\mathbb E\left[\varphi\left(\hat \theta\right)\right] &= 0, ~\forall \theta\in (0, \infty)\\\implies \int_0^\infty \varphi\left(\hat \theta\right){\hat\theta}^{n-2}\exp{\left(-\theta\hat\theta\right)}~\mathrm d\hat\theta &= 0; \tag 9\label 9\end{align}
$\eqref 9$ resembles the kernel of a one-dimensional exponential family. So, $\hat \theta$ is complete for $\theta.$
$\blacksquare$
Theorem $1.$ The family $\{F\left(\hat\theta, \hat a;\theta, a\right): (\theta, a)\in \mathbb R_+\times \mathbb R_+\}$ is complete.
From $\eqref 2, \eqref 8$ and by Observation $2.,$ write down the density $f\left(\hat\theta, \hat a;\theta, a\right).$
For an arbitrary Borel measurable function $\varphi\left(\hat\theta, \hat a\right)$
\begin{align}\mathbb E\left[\varphi\left(\hat\theta, \hat a\right)\right] &= 0, ~\forall (\theta, a)\in \mathbb R_+\times \mathbb R_+\\ \implies \int_0^\infty\int_k^\infty f\left(\hat\theta, \hat a;\theta, a\right)\varphi\left(\hat\theta, \hat a\right)~\mathrm d\hat\theta\mathrm d\hat a & = 0\\ \overset{\textrm{Fubini}}{\implies} \int_k^\infty n\theta a^{n\theta}{\hat a}^{-(n\theta+1)}\left[\underbrace{\int_0^\infty\varphi\left(\hat\theta, \hat a\right) \frac{\theta}{\Gamma{(n-1)}}{\hat\theta}^{n-2}e^{-\theta \hat\theta}~\mathrm d\hat\theta}_{:= h\left(\hat a, \theta\right)}\right] ~\mathrm d\hat a & =0 ;\tag{ 10}\label{ 10}\end{align}
by Observation $1.$ and $\eqref{ 10}, ~\lambda \{\hat a: h\left(\hat a, \theta\right) \ne 0\} = 0, ~\forall \theta > 0.$ This means $\lambda\otimes\lambda\{\left(\theta,\hat a\right): h\left(\hat a, \theta\right) \ne 0\} = 0 $ whence $\lambda\{\left(\theta,\hat a\right): h\left(\hat a, \theta\right) \ne 0\} = 0, ~\forall \hat a~\textrm{a.s.}.$ By continuity of $h\left(\hat a, \cdot\right), ~h\left(\hat a, \cdot\right) = 0,~\forall \hat a~\textrm{a.s.}\overset{\textrm{Obs.}~3.}\implies \lambda\left\{\hat a: \varphi\left(\hat\theta,\hat a\right) \ne 0\right\} = 0, ~\forall \hat a~\textrm{a.s.}\implies \lambda\otimes\lambda\left\{\left(\hat\theta,\hat a\right): \varphi\left(\hat\theta,\hat a\right) \ne 0\right\} = 0.$
$\blacksquare$
---
## References:
$\rm [I]$ Best Unbiased Estimators for the Parameters of a
Two-Parameter Pareto Distribution, S.K. Saksena, A.M. Johnson, Metrika, Volume $31,~ 1984,$ page $77-83.$
$\rm[II] $ Estimation of the Parameters
of the Pareto Distribution, H. J. Malik, Skandinavisk
Aktuarietidskrift $49, ~1966, ~144-157.$
| null | CC BY-SA 4.0 | null | 2023-02-28T19:13:29.977 | 2023-03-01T13:17:36.530 | 2023-03-01T13:17:36.530 | 362671 | 362671 | null |
606936 | 1 | null | null | 5 | 63 | I am working on a project where I am attempting to estimate the causal impact of civil war peace agreements (treatment) on levels of violence (outcome). While developing the DAG, I came across an issue where some of the nodes cause each other (pictured below is a simplified version of the DAG designed to only include the nodes where this became a problem).
[](https://i.stack.imgur.com/RMW7k.png)
Peace agreements sometimes result in an agreement to authorize a third-party peacekeeping operation (PKO), but, in other cases, PKOs facilitate the establishment of a peace agreement. Sometimes, mediation between warring parties leads to the authorization of a PKO. In other cases, a PKO itself facilitates mediated talks between warring parties. There is no natural ordering to these phenomena. PKOs do not always precede mediation and mediation does not always precede PKOs (it depends on the context). Likewise, a similar relationship exists between peace agreements and PKOs.
One way of getting around this (I think, could be wrong) is to separate these three nodes into six nodes. Instead of PKO, Peace Agreement, and Mediation, I would have:
- Mediation t-1 (Mediation preceding PKOs)
- Mediation t (Mediation following PKOs)
- PKO t-1 (PKOs preceding mediation or peace agreements)
- PKO t (PKOs following mediation or peace agreements)
- Peace Agreement t-1 (Peace Agreement preceding PKOs)
- Peace Agreement t (Peace Agreement following PKOs)
Here is the DAG with the new lagged nodes incorporated:
[](https://i.stack.imgur.com/7n590.png)
I'm not sure that this actually solves the issue, however. After all, aren't PKO(t-1) and Mediation (t-1) & Peace Agreement (t-1) and PKO (t-1) mutually directed in the same way that their (t) counterparts are?
I was wondering if anyone could help with this issue.
Update on this question
Is another way to get around this to assign an unobservable node to the dual-directed relationships? For example, consider the image below:
[](https://i.stack.imgur.com/ycfS7.png)
Take the PKO -> Mediation/Mediation -> PKO example. Mediation (t-1) can cause PKO (t) and PKO (t-1) can cause Mediation (t). Perhaps these can both be collapsed within the unobservable node given that the causal mechanism driving the variation in ordering is unknown?
| DAGs with Ambiguous Temporal Ordering Between Nodes | CC BY-SA 4.0 | null | 2023-02-28T19:17:32.040 | 2023-03-01T22:56:22.953 | 2023-03-01T22:56:22.953 | 360805 | 360805 | [
"causality",
"dag"
] |
606937 | 1 | 607027 | null | 4 | 265 | I'm trying to understand the basics of Gaussian Distribution. I struggle to visualice how the variance of the conditional probability of say P (Y|X) changes when X is fixed (given X and Y have a joint gaussian distribution). So, I have two pictures , in picture A from many sources shows that the variances dont change but the mean does. But it seems reasonable to me also to consider the conditional as a cut of de bivariate along one axis but now the variances does change, why is it so? I'm I wrongly thinking Picture B is the conditional?
Picture A
[](https://i.stack.imgur.com/ocDp3m.png)
[Picture updated thanks to contribution]
Picture B
Cuts along one variable. It seems that the variance (spread of black lines) increases or decreases according to a fixed X.
[](https://i.stack.imgur.com/bCgeFm.png)
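To make this concrete, here is a small pure-Python experiment I tried (the parameters are arbitrary). It takes a slice of the joint density at several fixed values of $X$ and computes the variance of each slice after renormalizing it. The raw slices do shrink with $x$ (that is what Picture B shows), but after dividing by $p(x)$ every conditional has the same variance $\sigma_y^2(1-\rho^2)$:

```python
import math

# Bivariate Gaussian (zero means, arbitrary constants): a slice of the joint
# density at X = x is an *unnormalized* Gaussian in y; after dividing by p(x),
# every slice has the same variance sy^2 * (1 - rho^2).
sx, sy, rho = 1.0, 2.0, 0.7

def joint(x, y):
    q = (x / sx) ** 2 - 2 * rho * (x / sx) * (y / sy) + (y / sy) ** 2
    return math.exp(-q / (2 * (1 - rho**2)))

def conditional_var(x, lo=-20.0, hi=20.0, steps=4000):
    h = (hi - lo) / steps
    ys = [lo + i * h for i in range(steps + 1)]
    w = [joint(x, y) for y in ys]              # the raw slice (Picture B)
    z = sum(w) * h                             # proportional to p(x)
    mean = sum(wi * yi for wi, yi in zip(w, ys)) * h / z
    return sum(wi * (yi - mean) ** 2 for wi, yi in zip(w, ys)) * h / z

for x in (0.0, 1.0, 2.5):
    print(x, conditional_var(x))  # all close to sy**2 * (1 - rho**2) = 2.04
```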
| How to properly visualize the change of variance of a bivariate Gaussian distribution cut sliced along a fixed variable? | CC BY-SA 4.0 | null | 2023-02-28T19:34:49.380 | 2023-03-01T23:38:24.230 | 2023-03-01T15:34:40.620 | 919 | 381075 | [
"probability",
"normal-distribution",
"variance",
"data-visualization"
] |
606938 | 1 | null | null | 0 | 69 | I am reading the following paper:
[https://arxiv.org/abs/2301.06941](https://arxiv.org/abs/2301.06941)
The authors in Eq.(8) have obtained a relation which has the mutual information, $i$, in the exponent of the exponential on the RHS of the 1st line. At the end of this equation, it seems mutual information is replaced by minus conditional entropy.
How does such a thing hold and what is the implication of that?
Equation (8):
$$P(X) = \frac{e^{\int^X -\beta (1-e^{i(Y:\tilde{X})})d\tilde{X}}}{Z} \overset{i(Y:X) \to 0}{=} \frac{e^{\int^X \beta i(Y:\tilde{X})d\tilde{X}}}{Z} = \frac{e^{-\beta \int^X S(Y|X=\tilde{X})d\tilde{X}}}{Z},$$
where $\beta$ is some constant, $i$ the mutual information, $S$ conditional entropy, and $Z$ is the normalization constant of the the probability $P$.
| How can the mutual information be equal to minus conditional entropy? | CC BY-SA 4.0 | null | 2023-02-28T19:35:37.003 | 2023-03-01T21:15:01.443 | 2023-03-01T21:15:01.443 | null | null | [
"probability",
"entropy",
"information-theory",
"mutual-information"
] |
606941 | 1 | null | null | 1 | 23 | In Measuring nominal scale agreement among many raters, where the authors define Fleiss kappa, the agreement by chance is defined as
[](https://i.stack.imgur.com/Vx5NK.png)
where $p_j$ is defined as the proportion of ratings which were to the $j$th category.
Can someone explain how the authors arrived at this equation?
An additional reference for the equations can be found on [Wikipedia](https://en.wikipedia.org/wiki/Fleiss%27_kappa)
| Chance Agreement for Fleiss Kappa | CC BY-SA 4.0 | null | 2023-02-28T20:31:24.720 | 2023-02-28T22:19:22.413 | null | null | 325454 | [
"reliability",
"agreement-statistics",
"cohens-kappa"
] |
606942 | 1 | null | null | 0 | 20 | In Mathematics for Machine Learning (page 336), the authors state that centering the data (subtracting from the data its the empirical mean) reduces the risk of numerical problems.
Which numerical problems are likely to arise if the data is not centered?
How does centering mitigate the numerical problems that would otherwise arise?
| How does centering the data reduce the risk of numerical problems when doing PCA? | CC BY-SA 4.0 | null | 2023-02-28T20:40:59.357 | 2023-02-28T20:40:59.357 | null | null | 243542 | [
"pca",
"data-transformation",
"data-preprocessing"
] |
606943 | 1 | null | null | 0 | 68 | I understand that the quality of representation of an individual by a certain axis is measured by the cosine of the angle between the axis and the individual; the more the vector representing the individual is "close" to the axis the more the latter represents better the former. However, I found it challenging to understand the concept behind the quality of representation of a variable. I tried scanning through [Making sense of principal component analysis, eigenvectors & eigenvalues](https://stats.stackexchange.com/questions/2691/making-sense-of-principal-component-analysis-eigenvectors-eigenvalues) but didn't find anything related. The word "quality" isn't even mentioned in the post. I also tried to search elsewhere but to no avail.
| What does the quality of representation of a variable mean in PCA? | CC BY-SA 4.0 | null | 2023-02-28T21:05:55.407 | 2023-03-15T20:40:05.857 | 2023-03-15T20:40:05.857 | 376166 | 376166 | [
"pca",
"random-variable",
"intuition",
"quality-control",
"representative"
] |
606944 | 2 | null | 606484 | 0 | null | There is no perfect way to deal with a study looking for a treatment effect when the baselines of the intervention groups are different. There are a few approaches that are common, but none can be used to support inferences without lots of thought. I'll give you a few scenarios that might help you see the issues.
Scenario 1: Imagine that the intervention increases the average reaction time by about 4 units in every individual. If you subtract the baseline from everyone then you get a good measure of the effect of the intervention. Easy. That is what is assumed implicitly when people reach for the change from baseline as their index of response. In this case it will not matter if the two experimental groups have different baselines because the effect you are interested in is independent of the baseline as long as you analyse the change from baseline. However, if you express the changes as a fraction of the individual baselines then you will disadvantage the group with the higher average pre-intervention average baseline.
Scenario 2: Imagine that the intervention increases the reaction time by about 10% of the baseline. Now you should ideally express the responses as a fractional change. If you use change from baseline then your index of response will have extra noise from the influence of the variation in the individual baselines. In this case it will matter quite a lot that the two experimental groups have different baselines unless you express the data as a fraction of the pre-intervention baselines.
Do you know if your intervention is likely to have an additive or multiplicative effect? If not then you might wish to examine the data both as change from baseline and as fractional change. If the two expressions support the same conclusions then it might be OK to make an inference, but if they point in opposite directions then you need a better study. (It is, of course, possible that the effect of intervention is a mixture of additive and multiplicative, or that the arithmetical nature is better expressed in some other way.)
Scenario 3: Imagine that your intervention is roughly additive but that there is a ceiling to the biological phenomenon that you are measuring. (No-one can have a zero reaction time, for example.) In that case the group with the longer pre-intervention reaction times has the opportunity to change more than the group with the shorter pre-intervention times.
Scenario 4: Imagine that the change to average reaction times is mostly due to changes to the times of the individuals who are normally slow to react, or those who are normally fast to react. It is common to assume that a population-level change is due to a roughly equivalent change in all members of the population, but biology doesn't often work like that. Individuals can be more or less susceptible to interventions of all kinds. In this scenario the difference in pre-intervention baselines should be thought about as a difference in distribution of fast and slow individuals. You might look for the nature of the correlations between pre- and post-intervention reaction times.
If you have lots of data then you can probably explore the relevance of the various scenarios that I have listed (and others), but I doubt that your study is large enough. If it is large enough for such an exploration to be helpful then it is unlikely that you would have found a substantial difference in group baselines in actually randomly assigned groups. (I would worry about the randomisation procedures, keeping in mind that haphazard allocation is not random allocation.)
If you have relatively little data then you have done a preliminary study that can serve to help you design a better subsequent study. Do not make firm conclusions at this stage. Examine the data with a variety of graphs, making sure that the baseline and post-intervention values are visible and connected.
| null | CC BY-SA 4.0 | null | 2023-02-28T21:07:12.177 | 2023-02-28T21:07:12.177 | null | null | 1679 | null |
606945 | 2 | null | 606828 | 0 | null | If the treatment impacts only a tiny fraction of users, you should analyze just the impacted subset because even a large effect on a small set of users could be diluted until it becomes statistically undetectable. In other words, your analysis sample should include only users eligible for personalization.
One way to think of this is that personalization changes the experience for the 1700 users but not the 483K. If you average the effect for the entire population of users, you are mixing the effect for 1700 with 483K zeros.
Instead, you should randomly split the 1700 personalization-eligible users in half. The first group should be sent the personalized e-mail, and the second the unpersonalized e-mail (or perhaps nothing, depending on what you actually plan to do). The analysis would compare the mean outcomes between the two groups. Since personalization is costly, you may want to make the null that the average/total effect is greater or equal to the average/total cost of personalization rather than zero.
More broadly, this approach is called "triggering" and can improve the sensitivity (statistical power) of the test. The tradeoff is that the estimated effect does not apply to your total user base, just the folks eligible for this particular personalization.
A good reference is Chapter 20 of Kohavi, Tang, and Xu's [Trustworthy Online Controlled Experiments: A Practical Guide to A/B Testing](https://experimentguide.com/)
| null | CC BY-SA 4.0 | null | 2023-02-28T21:18:49.587 | 2023-02-28T21:57:09.993 | 2023-02-28T21:57:09.993 | 7071 | 7071 | null |
606946 | 2 | null | 225580 | 1 | null | I think it is worth elaborating a little bit the step of how to reach $g = 0$ a.e. $\lambda$ from the integral equation in @Tan's and @whuber's answer, as it demonstrates some classical measure theory techniques that could be used in other similar completeness proving problems.
Denote $g(x, y)(y - x)^{n - 2}$ by $f(x, y)$, then Tan's answer showed that
\begin{align}
\int_a^b\int_a^y f(x, y)dxdy = 0\; \text{ for all } a < b. \tag{1}
\end{align}
whuber's clever geometric argument shows that $(1)$ implies that
\begin{align}
\int_I f(x, y)dxdy = 0\; \text{ for all } I \in \mathscr{I}. \tag{2}
\end{align}
where $\mathscr{I}$ is the class of bounded rectangles in $\mathbb{R}^2$:
\begin{align}
\mathscr{I} = \{(a_1, b_1] \times (a_2, b_2]: a_1 < b_1, a_2 < b_2\}.
\end{align}
Our goal is to prove that $f = 0$ a.e. on $\mathbb{R}^2$ under the condition $(2)$ (which obviously implies $g = 0$ a.e. on $\mathbb{R}^2$). This is implied$^\dagger$ by $f = 0$ a.e. on $I_M := (-M, M] \times (-M, M]$ for arbitrary $M > 0$, which we prove below.
It is well known that $\mathscr{I}$ generates the Borel $\sigma$-field $\mathscr{R}^2$ on $\mathbb{R}^2$, hence $\mathscr{I} \cap I_M$ generates the $\sigma$-field $\mathscr{R}^2 \cap I_M := \{A \cap I_M: A \in \mathscr{R}^2\}$ in $I_M$ (see Theorem 10.1 in Probability and Measure by Patrick Billingsley for the proof). Hence if we can show that the class
\begin{align}
\mathscr{A} := \left\{A \in \mathscr{R}^2 \cap I_M: \int_A f(x, y)dxdy = 0\right\} \tag{3}
\end{align}
is a $\sigma$-field in $I_M$, then $\mathscr{I} \cap I_M \subset \mathscr{A}$ (which is implied by $(2)$) and $\sigma(\mathscr{I} \cap I_M) = \mathscr{R}^2 \cap I_M$ together imply that $\mathscr{R}^2 \cap I_M \subset \mathscr{A}$. Since $f$ is measurable, $A_1 := \{(x, y): f(x, y) > 0\} \in \mathscr{R}^2$ and $A_2 := \{(x, y): f(x, y) < 0\} \in \mathscr{R}^2$, whence $A_1 \cap I_M \in \mathscr{A}$, $A_2 \cap I_M \in \mathscr{A}$. It then follows by $\mathscr{I} \cap I_M \subset \mathscr{A}$ and $(3)$ that
\begin{align}
\int_{A_1 \cap I_M}f(x, y)dxdy = 0, \quad \int_{A_2 \cap I_M} -f(x, y)dxdy = 0. \tag{4}
\end{align}
Note that $fI_{A_1} = f^+ = \max(f, 0) \geq 0$ and $-fI_{A_2} = f^- = \max(-f, 0) \geq 0$, $(4)$ can be rewritten as
\begin{align}
\int_{I_M}f^+(x, y)dxdy = \int_{I_M}f^-(x, y)dxdy = 0,
\end{align}
which implies that $f^+ = f^- = 0$ a.e. on $I_M$. Therefore, $f = f^+ - f^- = 0$ a.e. on $I_M$.
Therefore, to complete the proof, it remains to show that $\mathscr{A}$ is a $\sigma$-field. In fact, because $\mathscr{I} \cap I_M$ is a $\pi$-system, it suffices to show that $\mathscr{A}$ is a $\lambda$-system in view of [Dynkin's $\pi$-$\lambda$ theorem](https://en.wikipedia.org/wiki/Dynkin_system#Sierpi%C5%84ski%E2%80%93Dynkin%27s_%CF%80-%CE%BB_theorem). To this end:
- $I_M \in \mathscr{A}$. This follows from $(2)$ directly.
- If $A \in \mathscr{A}$, then
\begin{align}
\int_{I_M - A}f(x, y)dxdy = \int_{I_M}f(x, y)dxdy - \int_A f(x, y)dxdy = 0 - 0 = 0,
\end{align}
which shows $I_M - A$ lies in $\mathscr{A}$. Therefore $\mathscr{A}$ is closed under the formation of complementation.
- If $A_n$ lies in $\mathscr{A}$ for all $n$ and $A_1, A_2, \ldots$ are disjoint, then $|fI_{\cup_n A_n}| \leq |f|$ (note that $\int_{I_M}fd\lambda = 0$ implies that $f$ is integrable in $I_M$) and Lebesgue's dominated convergence theorem imply that
\begin{align}
\int_{\cup_n A_n} fd\lambda = \sum_n \int_{A_n}fd\lambda = 0,
\end{align}
which shows $\cup_n A_n$ lies in $\mathscr{A}$. Therefore $\mathscr{A}$ is closed under the formation of countable disjoint unions.
This completes the proof.
---
$^\dagger$: For each $m \in \mathbb{N}$, define $B_m = \{x \in I_m: f(x) \neq 0\}$. Then the sequence $\{B_m\}$ is increasing and converges to the set $\cup_{m = 1}^\infty B_m = \{x \in \mathbb{R}^2: f(x) \neq 0\}$. It then follows by the continuity from below of $\lambda$ and $f = 0$ a.e. on every $I_m$ that $\lambda(\cup_{m = 1}^\infty B_m) = \lim_{m \to \infty}\lambda(B_m) = 0$, i.e., $f = 0$ a.e. on $\mathbb{R}^2$.
| null | CC BY-SA 4.0 | null | 2023-02-28T21:32:50.613 | 2023-03-04T14:30:57.483 | 2023-03-04T14:30:57.483 | 20519 | 20519 | null |
606947 | 1 | 606963 | null | 1 | 58 | I came across this post in using GAM with the default Gaussian family to examine the significant difference of the patterns across two categorical variables
[Statistical differences between two hourly patterns](https://stats.stackexchange.com/questions/179947/statistical-differences-between-two-hourly-patterns)
I am wondering why the Gaussian family is used here instead of the Gamma family, since MET (metabolic rate) does not seem to include negative values.
Is it because of the purpose of the task, which is not to predict, but rather to examine the significance of the pattern across 2 categorical variables?
In other words, is the default Gaussian family fine to use when it comes to examining the significance of the pattern of the data across 2 categorical variables, when we do not worry about the predictions at all? We are just concerned about the significance of the PATTERN of the data. And hence, is the family used not a concern as long as the R-squared is reasonably good?
I am new to GAM and would appreciate some advice. Thank you.
| Using GAM Gaussian family to Examine Significance in Data Pattern Across Two Categorical Variables | CC BY-SA 4.0 | null | 2023-02-28T22:15:11.907 | 2023-03-01T10:20:30.603 | 2023-03-01T01:43:11.527 | 345611 | 362150 | [
"r",
"regression",
"generalized-additive-model",
"mgcv"
] |
606948 | 2 | null | 606941 | 0 | null | Say two raters respond randomly, with probability of yes = 0.9 and probability of no = 0.1, we can put the results into this table.
```
       R1
R2     Yes  | No
Yes         |     | 0.9
No          |     | 0.1
---------------------------
Total  0.9  | 0.1 | 1.0
```
Best case (perfect agreement):
```
R1
R2 Yes | No
Yes 0.9 | 0.0 | 0.9
No 0.0 | 0.1 | 0.1
---------------------------
Total 0.9 | 0.1 | 1.0
```
Chance case (raters responding independently):
```
R1
R2 Yes | No
Yes 0.81 | 0.09 | 0.9
No 0.09 | 0.01 | 0.1
---------------------------
Total 0.9 | 0.1 | 1.0
```
How much agreement do we have just by chance? 0.9^2 + 0.1^2 = 0.82
That's the formula.
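In code, the same computation for this two-category example:

```python
# Chance agreement for the two-category example above: with marginal
# proportions p = (0.9, 0.1), two independent raters agree with
# probability sum_j p_j^2.
p = [0.9, 0.1]
p_e = sum(pj**2 for pj in p)
print(round(p_e, 2))  # 0.82
```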
| null | CC BY-SA 4.0 | null | 2023-02-28T22:19:22.413 | 2023-02-28T22:19:22.413 | null | null | 17072 | null |
606949 | 1 | 616955 | null | 11 | 598 | Quantile regression allows to estimate a conditional quantile for y (like e.g. the median of y,...) from data x.
I do not see any distributional assumptions about y being made. This seems in contrast to maximum likelihood estimation, which starts with making an assumption about the distribution of y (e.g., a Gaussian distribution).
Therefore the question: is quantile regression a maximum likelihood method?
If not, what is the broader term for methods like quantile regression?
---
Additional rewording:
What is the rationale of the quantile loss function in quantile regression (of arbitrarily complex models)? Does it rely on the specification of the distributional form of the response variable? And, specifically, is the quantile loss (somehow) a (log) likelihood function?
| Is quantile regression a maximum likelihood method? | CC BY-SA 4.0 | null | 2023-02-28T22:20:16.380 | 2023-05-26T09:42:52.137 | 2023-03-01T22:38:49.970 | 298651 | 298651 | [
"regression",
"maximum-likelihood",
"quantile-regression"
] |
606950 | 1 | null | null | 0 | 19 | This might be a basic question but here it goes:
I have a differentiable (arbitrary) function $f(t)$ on some interval $[0, T]$ that has mean $T^{-1}\int_0^T f(t)\,\mathrm{d}t = 0$. I want to sample a set $S$ of $n > 1$ points such that the sample average is as close to $0$ as possible, on average. I think I would formulate this as wanting to find the distribution $p$ from which to sample $S$ that minimizes the sample mean's expected deviation from $0$:
$$p = \arg \min_{p'} \mathbb{E}_{S \sim p'}[|n^{-1}\sum_{t \in S}f(t)|]$$
My impression is that usually one wants to use uniform sampling to get a representative sample. But intuitively for this problem I feel like one would want to sample a random set of $n$ evenly spaced points, rather than uniform sampling of $n$ points. At least when I think of a function such as $f(t) = \sin(t)$. But I don't know if that is correct, if it is true in general, and how to formalize it?
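To make this concrete, here is a small simulation I tried for $f(t) = \sin(t)$ on $[0, 2\pi]$ (a pure-Python sketch; the sample size and repetition count are arbitrary). It compares plain uniform sampling with a randomly offset, evenly spaced ("systematic") sample; for this periodic example the systematic sample cancels exactly, while uniform sampling does not:

```python
import math
import random

random.seed(3)

# f(t) = sin(t) on [0, 2*pi] has interval mean 0.  Compare plain uniform
# sampling with a randomly offset, evenly spaced ("systematic") sample.
T, n, reps = 2 * math.pi, 8, 20_000

def sample_mean(points):
    return sum(math.sin(t) for t in points) / len(points)

err_uniform = 0.0
err_systematic = 0.0
for _ in range(reps):
    uniform = [random.uniform(0.0, T) for _ in range(n)]
    u = random.uniform(0.0, T / n)            # one random offset
    systematic = [u + k * T / n for k in range(n)]
    err_uniform += abs(sample_mean(uniform)) / reps
    err_systematic += abs(sample_mean(systematic)) / reps

print(err_uniform, err_systematic)  # systematic error is essentially 0 here
```

Whether this advantage generalizes beyond nicely periodic functions is exactly what I am unsure how to formalize.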
| Sampling arbitrary differential function with mean $0$ so that sample mean's deviation from $0$ is minimized | CC BY-SA 4.0 | null | 2023-02-28T22:36:03.063 | 2023-02-28T23:54:32.163 | 2023-02-28T23:54:32.163 | 288693 | 288693 | [
"sampling"
] |
606951 | 2 | null | 606949 | 2 | null | It depends on the loss function you're trying to minimize. In MLE you're not always assuming that the distribution is gaussian. For instance, there's a relation between the distribution you assume and the loss function you try to minimize
- Gaussian distribution $\propto e^{-(x-\mu)^2}$ implies $L^2$ loss
- Laplace distribution $\propto e^{-|x-\mu|}$ implies $L^1$ loss
In the case of Gaussian distribution
$$
P(y|x) = N(y| f(x;\theta), \sigma^2)
$$
where $\sigma$ is fixed. Therefore the maximum likelihood estimate is
$$
\theta^* = \text{arg max}_\theta \prod_i P(y_i|x_i) = \text{arg max}_\theta \sum_i \log P(y_i | x_i)
$$
and therefore
$$
\theta^* = \text{arg max}_\theta \; -n \log \sigma - \frac{n}{2} \log (2\pi) -\frac{1}{2} \sum_i \left(\frac{y_i - f(x_i, \theta)}{\sigma}\right)^2
$$
which removing constant terms is
$$
\theta^* = \text{argmax}_\theta -\sum_i \left(y_i - f(x_i, \theta)\right)^2
$$
which is the standard $L^2$ loss function for regression problems. So, as you can see by defining a loss function you're also defining which distribution you assume of your data.
The case of quantile regression is the same: depending on your loss function, you'll implicitly be specifying an underlying distribution.
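As a hedged pure-Python sketch of this correspondence (the data values below are just illustrative), a grid search over a location parameter $\mu$ shows that the Gaussian-implied $L^2$ objective is minimized by the sample mean, while the Laplace-implied $L^1$ objective is minimized by the sample median:

```python
# Illustrative sketch: Gaussian NLL ~ sum of squares (minimized by the mean),
# Laplace NLL ~ sum of absolute deviations (minimized by the median).
ys = [1.0, 2.0, 2.0, 10.0]

def l2_loss(mu):
    return sum((y - mu) ** 2 for y in ys)

def l1_loss(mu):
    return sum(abs(y - mu) for y in ys)

grid = [i / 100 for i in range(0, 1201)]  # candidate values 0.00 .. 12.00
mu_l2 = min(grid, key=l2_loss)
mu_l1 = min(grid, key=l1_loss)
print(mu_l2)  # 3.75, the sample mean
print(mu_l1)  # 2.0, the sample median
```

Note how the single outlier at 10 pulls the $L^2$ solution but not the $L^1$ one — exactly the robustness difference implied by the two distributional assumptions.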
| null | CC BY-SA 4.0 | null | 2023-02-28T22:43:19.837 | 2023-02-28T23:07:23.913 | 2023-02-28T23:07:23.913 | 350686 | 350686 | null |
606952 | 2 | null | 604463 | 2 | null | The Cochran's Rule that is often simplified to a "minimum of 5 expected counts per cell" is originally meant for contingency tables with degrees of freedom > 1 ([see P. Kroonenberg & A. Verbeek, 2018](https://www.tandfonline.com/doi/full/10.1080/00031305.2017.1286260)). So it's not that evident that you should merge the "sample" categories together based on this rule, though merging them could prevent you from running underpowered tests – but maybe the question of merging or not these categories should be treated as a separate question, as you seem to focus this question on what kind of following-up tests you could use or not.
After conducting your goodness-of-fit test, two options appropriate for a chi-squared goodness-of-fit test are mentioned by D. Sharpe in [Chi-Square Test is Statistically Significant: Now What?](https://scholarworks.umass.edu/pare/vol20/iss1/8/) (2015):
- calculating residuals,
- comparing observed and expected cells.
I won't rewrite here what Sharpe already wrote very clearly in his open-access paper (see pages 2 and 3 [of the pdf](https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1269&context=pare)), but be sure to apply corrections to any following-up analysis you conduct after a chi-squared test, to limit the [multiple testing problem](https://en.wikipedia.org/wiki/Multiple_comparisons_problem). (Sharpe mentions some possible corrections in the paper).
---
References:
P. M. Kroonenberg & Albert Verbeek (2018) "The Tale of Cochran's Rule: My Contingency Table has so Many Expected Values Smaller than 5, What Am I to Do?", The American Statistician, 72:2, 175-183, DOI: [https://doi.org/10.1080/00031305.2017.1286260](https://doi.org/10.1080/00031305.2017.1286260)
Sharpe, Donald (2015) "Chi-Square Test is Statistically Significant: Now What?," Practical Assessment, Research, and Evaluation: Vol. 20, Article 8.
DOI: [https://doi.org/10.7275/tbfa-x148](https://doi.org/10.7275/tbfa-x148)
| null | CC BY-SA 4.0 | null | 2023-02-28T22:51:22.960 | 2023-02-28T23:02:06.507 | 2023-02-28T23:02:06.507 | 164936 | 164936 | null |
606953 | 2 | null | 604949 | 2 | null | The question of robustness of the posterior with respect to perturbations of prior/likelihood/model/data is crucial for Bayesian inference, but also very delicate, since things really depend on which metric(s) you consider, and there are positive and negative results.
I am not aware of any positive results if the likelihood perturbation is measured by total variation, on the contrary, there is a negative result in [this paper](https://epubs.siam.org/doi/pdf/10.1137/130938633) by Owhadi, Scovel and Sullivan "On the Brittleness of Bayesian Inference".
Hope this helps.
| null | CC BY-SA 4.0 | null | 2023-02-28T23:01:45.337 | 2023-02-28T23:01:45.337 | null | null | 255382 | null |
606954 | 2 | null | 606949 | 12 | null | You seem to confuse two closely-related yet very different concepts: a regression model (which is a specification of a statistical model) and a parameter estimation method (which essentially is a data-based objective function formulation and subsequent numerical procedures).
For simplicity, we restrict ourselves to the linear parametric family. A quantile regression (model) models (i.e., approximates) the $\tau$-th conditional quantile of the response $y$ given predictors $x$ as a linear function of parameters:
\begin{align}
Q_\tau(y|x) = \alpha + \beta'x. \tag{1}
\end{align}
Likewise, a mean regression (model) models the conditional mean of the response $y$ given predictors $x$ as a linear function of parameters:
\begin{align}
E(y|x) = \alpha + \beta'x. \tag{2}
\end{align}
In principle (admittedly a somewhat narrow view), $(1)$ standalone is the heart of "quantile regression" -- we do not need to understand how parameters $\alpha$ and $\beta$ will be estimated to specify a quantile regression model.
The parameter estimation problem kicks in when a sample $\{(y_i, x_i): i = 1, \ldots, n\}$ is observed. As you may already know, typically $\alpha$ and $\beta$ are estimated by minimizing the sum of check-function losses:
\begin{align}
(\hat{\alpha}, \hat{\beta}) = \operatorname{argmin}_{\alpha, \beta}\sum_{i = 1}^n \rho_\tau(y_i - \alpha - \beta'x_i), \tag{3}
\end{align}
which is numerically implemented by the linear programming or interior point algorithm. Of course, this is one of many parameter estimation methods (when $\tau = 0.5$, this is usually referred to as Least Absolute Deviation Estimation), which, as you stated, does not require any distributional assumption of $y$.
The MLE, as another parameter estimation method, on the other hand, can be carried out only if one specifies the complete conditional distribution of $y$. That means, to do maximum likelihood estimation, you will need to specify a statistical model that is more granular than regression models such as $(1)$ or $(2)$ (it is more granular because the complete distribution function contains much more information than the quantile function or the mean function only. In fact, both $Q_\tau(y|x)$ and $E(y|x)$ can be derived probabilistically from the distribution of $y|x$). For example, a statistical model like
\begin{align}
y | x \sim f(\alpha + \beta'x; \theta), \tag{4}
\end{align}
where $f$ is some known density function with additional parameter $\theta$. Model $(4)$ then entails the likelihood function (assuming the observations are i.i.d.)
\begin{align}
L(\alpha, \beta; \theta) = \prod_{i = 1}^n f(\alpha + \beta'x_i; \theta),
\end{align}
which can be maximized over the parameter space to determine MLE.
It is well-known that when $f$ is Gaussian, Model $(4)$ implies the mean regression model $(2)$, and when $f$ is (asymmetrical) Laplacian, Model $(4)$ implies the quantile regression model $(1)$. For other conditional distributions $f$, in general $(1)$ or $(2)$ are not nested in $(4)$ (that is, neither $Q_\tau(y|x)$ nor $E(y|x)$ admits the simple linear form $\alpha + \beta'x$ under $(4)$).
In summary, the question "Is quantile regression a maximum likelihood method?" is somewhat ill-posed because the former is a statistical model while the latter is a parameter estimation method that depends on a more granular statistical model than the quantile regression model. In this sense, I do not think these two concepts are comparable. If by "what is the broader term for methods like quantile regression?", you meant "what is the parameter estimation method typically used to estimate $(\alpha, \beta)$ in $(1)$?", then the answer is $(3)$ -- I am not sure if there is a universally accepted term for this minimization problem, but it may be OK to call it "least $L^1$ estimation", in view of $\rho_\tau(t) = t(\tau - I_{(-\infty, 0)}(t))$ is a piecewise linear function.
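As a hedged illustration of the estimation problem $(3)$ in the simplest intercept-only case (the data and $\tau$ below are chosen arbitrarily), minimizing the sum of check-function losses over a grid recovers the empirical $\tau$-quantile:

```python
# Sketch: rho_tau(t) = t * (tau - 1{t < 0}); minimizing its sum over a
# location parameter q yields the tau-th sample quantile.
def rho(t, tau):
    return t * (tau - (1.0 if t < 0 else 0.0))

ys = list(range(1, 10))  # 1, 2, ..., 9
tau = 0.25
grid = [i / 100 for i in range(0, 1001)]  # candidates 0.00 .. 10.00
q_hat = min(grid, key=lambda q: sum(rho(y - q, tau) for y in ys))
print(q_hat)  # 3.0 -- the empirical 0.25-quantile of 1..9
```

No distributional assumption enters anywhere in this minimization, which is the point of the distinction drawn above.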
| null | CC BY-SA 4.0 | null | 2023-02-28T23:04:31.150 | 2023-03-01T05:41:04.613 | 2023-03-01T05:41:04.613 | 20519 | 20519 | null |
606955 | 1 | null | null | 0 | 3 | I have two dataset X= [r1 x f1] and Y = [r2 x f2]
Here f1 and f2 are the features such that f1>>f2 and the common features between f1 and f2 is around ~200.
I am interested to know a common or transferable latent space between Y and X. Could it be possible to learn such latent space via NMF?
I was thinking first to find a common feature matrix X1=[r1 x fc], and Y1=[r2 x fc]
then performed the NMF factorization X1=W1H1 and Y2=W2H2
such that H1~=H2. If number of factors is 5, then W1 size is r1x5, W2 size is r2x5, H1 and H2 sizes are 5xfc
What approach should I follow so that I should get a common latent space between X and Y such that H1~=H2
| Transfer factors information from one matrix to another in the non-negative matrix factorization | CC BY-SA 4.0 | null | 2023-02-28T23:13:16.450 | 2023-02-28T23:13:16.450 | null | null | 251125 | [
"machine-learning",
"non-negative-matrix-factorization"
] |
606956 | 2 | null | 606950 | 0 | null | This seems like it may well be an unsolvable problem. The set of all functions on an interval with zero mean is infinite-dimensional, as is the set of all distributions on that interval.
If you remove the requirement that f be differentiable (I assume that's what you mean by "differential"), then in my understanding it's actually possible to prove that ALL distributions are in some sense "equally good". For suppose that for a function f1(t), one sample set S1 of n points gives a mean m1 and another sample set S2 gives a mean m2, with |m1| < |m2|. Then, construct another function f2(t) where the values at ONLY those 2n values of t are swapped. Now, S1 has mean m2 and S2 has mean m1, so now S2 is a "better" sample, and these functions are in effect "equally likely" to be your f because they have equal mean. It would take some more work to make this truly rigorous, as we're talking about two random samples from different distributions, rather than ONE sample each, but my intuition says it still holds.
Now, requiring differentiability (and hence smoothness) MAY make it possible to say more about a good sampling strategy, but this doesn't HAVE to be true. The most that can be said without some real heavy math (well beyond elementary statistics courses) is that for a uniform distribution, the sample mean approaches 0 as n gets very large.
| null | CC BY-SA 4.0 | null | 2023-02-28T23:22:11.197 | 2023-02-28T23:22:11.197 | null | null | 381026 | null |
606957 | 1 | 606962 | null | 4 | 128 | In my textbook, the autocovariance of the AR(1) model is derived as such:
$$Y_t=\phi Y_{t-1}+e_t$$
After multiplying both sides by $Y_{t-k}\;(k=1,2,...)$ and taking expected values, you get:
$$E(Y_{t-k}Y_t)=\phi E(Y_{t-k}Y_{t-1})+E(e_tY_{t-k})$$
which implies that
$$\gamma_k=\phi\gamma_{k-1}+E(e_tY_{t-k})$$
However, I don't understand how $E(Y_{t-k}Y_t)$ becomes $\gamma_k$ and how $\phi E(Y_{t-k}Y_{t-1})$ becomes $\phi\gamma_{k-1}$.
| Derivation of Autocovariance Function of First-Order Autoregressive Process | CC BY-SA 4.0 | null | 2023-02-28T23:31:47.337 | 2023-03-01T02:51:57.600 | null | null | 366720 | [
"time-series"
] |
606958 | 2 | null | 606950 | 1 | null | The way your question is formulated, the function $f$ is fixed.
In this case there is a root $t_{\ast} \in [0,T]$ of $f$ such that $f(t_{\ast}) = 0$.
So the optimal distribution $p$ on $[0,T]$ would be the one that gives all mass to the value $t_{\ast}$, i.e. the Dirac measure corresponding to $(t_{j}=t_{\ast})_{j=1,\dots,n}$ (choose all sample points to equal $t_{\ast}$ with probability 1).
I suspect that this is not what you want, you rather want the same distribution $p$ to work for all/many/typical functions $f$ and the answer really depends on the way you (re-)formulate your question.
- Gaussian quadrature provides the optimal convergence rate in a certain sense depending on the smoothness of the function class considered. Again, this is "deterministic", so $p$ would be concentrated on one specific set of sample points.
- If you want a probabilistic approach, then you might define a probability distribution on the set of functions $f$ and try to minimize the corresponding expected value of the error. Note that even in this case the answer might be a "deterministic" choice of the sample $S$.
A final remark: The condition $T^{-1}\int_0^T f(t)\,dt = 0$ in the problem statement is not really necessary. If $T^{-1}\int_0^T f(t)\,dt =: \bar{f} \neq 0$ you can minimize
$|\bar{f} - n^{-1}\sum_{t \in S}f(t)|$ instead and you will always obtain exactly the same result, since shifting everything by $\bar{f}$ does not make any difference.
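To make the intuition from the original question concrete, here is a small hedged sketch (assuming the illustrative choice $f(t) = \sin(t)$ on $[0, 2\pi]$ with $n = 5$): a randomly phased but evenly spaced sample has mean exactly 0, whereas an i.i.d. uniform sample generally does not.

```python
# Evenly spaced sample of sin over a full period: the sine terms cancel
# exactly; a uniform random sample of the same size typically does not.
import math, random

random.seed(1)
T, n = 2 * math.pi, 5

phase = random.uniform(0, T / n)
even = [phase + k * T / n for k in range(n)]
mean_even = sum(math.sin(t) for t in even) / n

unif = [random.uniform(0, T) for _ in range(n)]
mean_unif = sum(math.sin(t) for t in unif) / n

print(abs(mean_even))  # ~1e-16: exact cancellation up to rounding
print(abs(mean_unif))  # typically far from 0 for small n
```

This is a "deterministic up to phase" design, consistent with the remark above that the optimal $p$ may concentrate on specific sample configurations.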
| null | CC BY-SA 4.0 | null | 2023-02-28T23:48:43.213 | 2023-02-28T23:48:43.213 | null | null | 255382 | null |
606959 | 2 | null | 572642 | 0 | null | One of the advantages of the log link is that it stabilizes the variance of data with a constant coefficient of variation. By doing so, one could run ordinary least squares on the log-transformed data. However, your intercepts would be biased by the offset -0.5*$v$ where $v$ is the coefficient of variation. Restating what Gordon touched upon in the comments, links such as the sqrt link are often not justifiable. Even the canonical inverse link is not practical.
| null | CC BY-SA 4.0 | null | 2023-03-01T00:09:11.003 | 2023-03-01T00:10:15.613 | 2023-03-01T00:10:15.613 | 381083 | 381083 | null |
606960 | 2 | null | 606891 | 0 | null | I would advise building a model with just the total and observing the significance of that model. Your situation often happens when one predictor explains your response similar to the other predictor's ability to explain the response.
Variance Inflation Factors are generally a better metric than correlations, especially if you plan on adding more predictors (any VIF over 10 is considered dangerous). If significance tests are not helping, you could compare models and attempt to minimize AIC/BIC. You can then check if adding an interaction term (even if a main effect is left out) improves the fit.
After this, you will have some more clarity on what is driving reelection in your data.
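To make the VIF threshold mentioned above concrete, here is a minimal sketch for the two-predictor case, where the VIF has a closed form in terms of the predictors' correlation $r$ (the example correlations are hypothetical):

```python
# With exactly two predictors, VIF_1 = VIF_2 = 1 / (1 - r^2), where r is
# the correlation between the predictors.
def vif_two_predictors(r):
    return 1.0 / (1.0 - r * r)

print(round(vif_two_predictors(0.95), 2))  # 10.26 -- past the rule-of-thumb cutoff of 10
print(round(vif_two_predictors(0.70), 2))  # 1.96 -- usually considered unproblematic
```

With more predictors, each VIF is computed the same way from the $R^2$ of regressing that predictor on all the others.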
| null | CC BY-SA 4.0 | null | 2023-03-01T00:45:16.850 | 2023-03-01T00:45:16.850 | null | null | 381083 | null |
606961 | 1 | null | null | 0 | 73 | I am performing a random forest model in R using caret = rf method. I have 20 explanatory variables and most are continuous but a few are categorical and numeric. For example, there are 6 categories of soil drainage coded as 1-6 class = integer. Before running the model I converted these categorical variables to factors using as.factor().
```
df1000 <- transform(df1000, LU1945=as.factor(LU1945),
LU1960=as.factor(LU1960), HSG=as.factor(HSG),
Drainage=as.factor(Drainage))
```
Should this type of variable (nominal categorical) be one-hot encoded instead?
| Numeric categorical variables as factors or one hot encoded before using random forest? | CC BY-SA 4.0 | null | 2023-03-01T00:49:49.250 | 2023-03-02T17:43:39.270 | 2023-03-02T17:43:39.270 | 325355 | 325355 | [
"random-forest",
"categorical-encoding"
] |
606962 | 2 | null | 606957 | 5 | null | Firstly, we note that $\gamma_k \equiv \mathbb{Cov}(Y_t, Y_{t-k})$ is the covariance function, which holds for all $t$ and $k$. Secondly, there is no mean term in the model, so we have $\mathbb{E}(Y_t)=0$ for all $t$. Putting these together and using a [standard decomposition for the covariance](https://en.wikipedia.org/wiki/Covariance#Definition), for any $t$, $k$ and $\ell$ we have:
$$\begin{align}
\mathbb{E}(Y_{t-k}Y_{t-\ell})
&= \mathbb{Cov}(Y_{t-k}, Y_{t-\ell}) + \mathbb{E}(Y_{t-k}) \mathbb{E}(Y_{t-\ell}) \\[6pt]
&= \gamma_{(t-\ell)-(t-k)} + 0 \times 0 \\[6pt]
&= \gamma_{k-\ell}. \\[6pt]
\end{align}$$
This result encompasses both the results you are considering.
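A quick simulation sketch (with the arbitrary illustrative values $\phi = 0.5$ and unit-variance noise) confirms the resulting recursion $\gamma_k = \phi\,\gamma_{k-1}$ for $k \ge 1$:

```python
# Simulate an AR(1) Y_t = phi*Y_{t-1} + e_t and check gamma_k ~ phi * gamma_{k-1}.
import random

random.seed(0)
phi, n, burn = 0.5, 200_000, 1_000
y, series = 0.0, []
for t in range(n + burn):
    y = phi * y + random.gauss(0, 1)
    if t >= burn:          # discard burn-in so the series is (near) stationary
        series.append(y)

m = sum(series) / n

def gamma_hat(k):
    return sum((series[t] - m) * (series[t + k] - m) for t in range(n - k)) / n

g0, g1, g2 = gamma_hat(0), gamma_hat(1), gamma_hat(2)
print(g0)       # close to sigma^2 / (1 - phi^2) = 4/3
print(g1 / g0)  # close to phi = 0.5
print(g2 / g1)  # close to phi = 0.5
```

The ratios of successive estimated autocovariances hover around $\phi$, as the derivation predicts.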
| null | CC BY-SA 4.0 | null | 2023-03-01T01:20:44.000 | 2023-03-01T02:51:57.600 | 2023-03-01T02:51:57.600 | 173082 | 173082 | null |
606963 | 2 | null | 606947 | 3 | null | The two most important questions you had here are below:
>
Why is Gaussian used here, instead of Gamma family since MET (metabolic rate) seems to not include negative values ?
Unfortunately we can't read the minds of other posters here, but the logic that gamma should be used purely because of positive values is not that useful an assertion in my opinion. Consider these two distributions I just simulated from gamma and Gaussian families, which both only have positive values:
[](https://i.stack.imgur.com/mnYju.png)
Only the shape of the distribution matters and we can only infer that from the data.
>
The family used is not a concern as long as the R-square is reasonably good?
A great example of where this is problematic is a logistic GAM, which is constrained to only have values between 0 and 1. Running a Gaussian regression on binary data, with its assumption of normally distributed errors, [has a number of problems](https://thestatsgeek.com/2015/01/17/why-shouldnt-i-use-linear-regression-if-my-outcome-is-binary/#:%7E:text=With%20binary%20data%20the%20variance,the%20residual%20errors%20is%20constant.) and should be avoided in a linear model. This changes to a degree with a GAM model, but still has its own issues. See below two GAM models I fit to the same data in `gamair`, one logistic and the other Gaussian:
```
library(mgcv)
library(gamair)
data("wesdr")
fit.log <- gam(ret
~s(dur,k=5),
data=wesdr,
family=binomial,
method="REML")
fit.gauss <- gam(ret
~s(dur,k=5),
data=wesdr,
family=gaussian,
method="REML")
```
If you plot them as is:
```
par(mfrow=c(1,2))
plot(fit.log,
main = "Logistic GAM")
plot(fit.gauss,
main = "Gaussian GAM")
```
You'll notice they are remarkably similar:
[](https://i.stack.imgur.com/2jqeL.png)
However, the y-axis for the logistic regression is plotted by its logit link, whereas the Gaussian model to the right is plotted literally by decimal values between 0 and 1. Already we can see that the Gaussian model isn't that interpretable, as we want to know the outcome for what should be a binary response. This isn't made that useful by having logistic data in logit form either, so we can make one subtle change to the plotting to get a meaningful result...by plotting instead by logistic probability using `trans=plogis`:
```
par(mfrow=c(1,2))
plot(fit.log,
trans = plogis,
main = "Logistic GAM")
plot(fit.gauss,
main = "Gaussian GAM")
```
You can see now that the logistic model now has much more meaningful interpretation, as the y-axis has been replaced with predicted probability values:
[](https://i.stack.imgur.com/r07XO.png)
Now we can make an approximation that the outcome has a certain probability based off the `dur` variable. For example, we know that a duration of about 10 years of diabetes yields around a 60% chance of getting retinopathy. We can't really get anything from the plot to the right because its values have no real interpretable meaning. So in short, specifying your distribution is always important for regression, though the consequences of that decision vary based off what you're doing.
## Edit
As pointed out by Gavin Simpson in the comments, you would need to include the intercept into the predicted probability plot. You can do this with the following code:
```
plot(fit.log,
trans=plogis,
shift=coef(fit.log)[1],
main = "Logistic GAM Shifted Intercept")
```
[](https://i.stack.imgur.com/MNLpm.png)
| null | CC BY-SA 4.0 | null | 2023-03-01T01:42:42.733 | 2023-03-01T10:20:30.603 | 2023-03-01T10:20:30.603 | 345611 | 345611 | null |
606964 | 2 | null | 606007 | 2 | null |
## Separable case
If you want to get the weights for the separable case, look at the transformed dataset:
\begin{bmatrix}
&1&\sqrt{2}&1\\
&1&-\sqrt{2}&1\\
&1&-\sqrt{2}&1\\
&1&\sqrt{2}&1\\
\end{bmatrix}
and your y values are basically $[1, -1, -1, 1]$.
You have to apply the SVM method in the transformed space, i.e., your equation would be: $$\alpha_i(y_i(W.\phi(x_i)+b)-1)=0.$$
Notice that if you take the second coordinate of the $\phi(x_i),$ you can have a direct relationship with the y values. So take W to be $(0, 1, 0)$ which picks the second coordinate. Now your $W.\phi(x_i)$ values are $\sqrt{2}\ [1,-1,-1,1].$
If you scale the $W$ to be $(0, 1/\sqrt{2}, 0)$ and look carefully, the $y_i * (W.\phi(x_i))$ values are all $1.$
Hence, $b=0$ and any real values of $\alpha_i>0$ would work in this case.
## Non-Separable case
Now you are in two-dimensional space, and your quadrants 1 and 3 basically have the 1's, and the rest have the -1's. First, take any line visually and check that it cannot divide them: if the first and the fourth points are on the same side, at least one other point will be on that side too.
Mathematically, why?
Take any line in the form $f(x_1,x_2)=x_2 - m x_1 - c = 0.$ Now if a point $(x,y)$ satisfies $f(x,y)>0,$ it will lie on one side, and if the value is $<0,$ it will lie on the other side. The values of $f$ for your four points are, respectively: $-1+m-c, 1+m-c, -1-m-c, 1-m-c.$
If $-1+m-c<0,$ and $1-m-c<0,$ both happen (i.e., first and fourth points are on the same side), then $1-c<m<c+1.$ If $c>0,$ the point $(1,-1)$ will be less than 0 too, i.e., that is on the same side of $(1,1) (-1,-1).$ If $c<0,$ the other point will be on the same side of them. Similarly, you can show that if $-1+m-c>0,$ and $1-m-c>0,$ both happen, one of the other points will be on the same side.
Also, note that the y-axis can not separate them.
You can translate this whole argument in the form of hyperplanes too.
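Here is a small hedged sketch of both claims (pure brute force over an illustrative grid of candidate lines — not an SVM solver): no line in the original plane strictly separates the four points, but the transformed coordinate proportional to $\sqrt{2}\,x_1x_2$ does.

```python
# XOR-style points: quadrants 1 and 3 are labeled +1, the rest -1.
import itertools

pts = [(1, 1), (-1, 1), (1, -1), (-1, -1)]
ys  = [1, -1, -1, 1]

def separable_2d():
    # crude grid search over lines w1*x1 + w2*x2 + b = 0
    rng = [i / 2 for i in range(-8, 9)]
    for w1, w2, b in itertools.product(rng, rng, rng):
        if all(y * (w1 * p[0] + w2 * p[1] + b) > 0 for p, y in zip(pts, ys)):
            return True
    return False

print(separable_2d())                                        # False
print(all(y * (p[0] * p[1]) > 0 for p, y in zip(pts, ys)))   # True
```

The second check is exactly the observation above: the sign of the $x_1x_2$ feature matches the labels, so a hyperplane picking that coordinate separates the transformed data.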
| null | CC BY-SA 4.0 | null | 2023-03-01T02:48:09.610 | 2023-03-01T02:48:09.610 | null | null | 327787 | null |
606965 | 2 | null | 606726 | -1 | null | I get zero by two methods, a Mathematica simulation, and a counting argument.
Mathematica simulation:
```
results = {}; results2={};
Module[{n = 10000000, outer, inner},
AbsoluteTiming[
For[k = 1, k <= 5, ++k,
sum1 = 0;
For[outer = 1, outer <= n, ++outer,
sum2 = 0;
For[inner = 1, inner <= 5, ++inner,
sum2 += RandomChoice[{-1, 1}]
];
AppendTo[results2, sum2];
sum1 += sum2/5.0;
];
Print["{n, sum2, sum1/n}", {n, sum2, sum1/N[n]}];
AppendTo[results, {n, sum2, sum1/N[n]}];
]
]
]
results (columns: n = number of 5-flip trials; last 5-flip sum; average of the n 5-flip means):
10000, 1, 0.0006400000000000063
10000, -3, -0.007840000000000005
10000, -1, -0.004319999999999987
10000, -3, -0.0022799999999999947
10000, 1, 0.00899999999999997
100000, -1, 0.0005719999999999979
100000, 3, 0.0025119999999999774
100000, -1, -0.000683999999999993
100000, -3, -0.0034999999999999923
100000, -1, 0.0016439999999999894
1000000, -3, -0.00005719999999999856
1000000, 1, 0.0005504000000000307
1000000, 1, 0.000036799999999996686
1000000, -1, 0.00013999999999999622
1000000, 1, 0.0006572000000000153
10000000, -1, -0.00001588000000000591
10000000, 3, -2.8800000000038654e-6
10000000, 1, -0.00011363999999998875
10000000, 3, 0.00022167999999998797
10000000, -1, 0.000014039999999994434
```
The counting argument:
only six sums are possible
-5,-3,-1,+1,+3,+5 and the averages are those divided by 5.
Sums of equal absolute value occur with equal frequency.
Since the negative and positive values cancel each other out, the average is zero.
Here is a count of the number of sums of each kind:
Tally[results2] ->
{{-1, 312475}, {1, 312595}, {5, 31378}, {-3, 155964}, {3,
156121}, {-5, 31467}}
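The same counting argument can be cross-checked by exact enumeration — a hedged pure-Python sketch, independent of the Mathematica code above: list all $2^5$ equally likely flip sequences, tally their sums, and compute the expected average exactly.

```python
from itertools import product
from collections import Counter

# Every equally likely sequence of five +/-1 flips.
sums = [sum(flips) for flips in product((-1, 1), repeat=5)]
print(sorted(Counter(sums).items()))
# [(-5, 1), (-3, 5), (-1, 10), (1, 10), (3, 5), (5, 1)] -- symmetric counts

total = sum(sums)                # exact integer arithmetic
print(total)                     # 0: positive and negative sums cancel
print(total / (5 * len(sums)))   # 0.0, the expected average of 5 flips
```

The binomial counts 1, 5, 10, 10, 5, 1 are symmetric about 0, which is why the expectation is exactly zero rather than merely close to it.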
| null | CC BY-SA 4.0 | null | 2023-03-01T02:59:15.750 | 2023-03-01T02:59:15.750 | null | null | 381090 | null |
606966 | 1 | null | null | 0 | 23 | Let's say that we have a population of 1000 people, and every day people eat some number of oranges. We run a survey to find how many oranges each individual has eaten in each of the last 360 days. We then bucket those 360 responses into 30-day periods, noting the mean and stdev for each, and mix them all into the same pool.
My question is a) is the mean value of the 30-day-bucket-means a valid approximation of the population mean of oranges per day? And b) is the mean value of the 30-day-bucket-variances a valid approximation of the population variance of oranges per day? I think they should be, because both the sampled means and sampled variances (ought to?) obey the central limit theorem.
| Are the sampled mean and variance approximations of population mean and variance? | CC BY-SA 4.0 | null | 2023-03-01T03:30:40.047 | 2023-03-01T03:30:40.047 | null | null | 2251 | [
"inference",
"central-limit-theorem"
] |
606967 | 1 | null | null | 0 | 6 | I am training an eye landmark localizer using the [One Millisecond Face Alignment with an Ensemble of Regression Trees](https://www.csc.kth.se/%7Evahidk/face/KazemiCVPR14.pdf) approach. It has a good number of [hyper-parameters](https://pyimagesearch.com/2019/12/23/tuning-dlib-shape-predictor-hyperparameters-to-balance-speed-accuracy-and-model-size/) to be tuned. Apart from this, some hyper-parameters can make the model very expensive to train and/or predict, as the time and/or memory consumption increases sometimes exponentially. But I am mostly interested to make the model localize the landmark with lower loss in real-time.
So, I need to compromise on some hyper-parameters: I cannot choose values for these hyper-parameters that could potentially make my model slow. This is why I have generated a correlation plot to figure out which hyper-parameters are less important for my data.
[](https://i.stack.imgur.com/n3nNL.png)
Here if we observe the `test_error` row, we can see that `tree_depth`, `nu`, and `cascade_depth` have very little influence on `test_error`. So would it be logical to choose values in a way that doesn't make the model run slowly? For example, setting `tree_depth` to some small number so that the model doesn't become slow?
| Can I compromise on a hyper-parameter if it has very little correlation with test loss? | CC BY-SA 4.0 | null | 2023-03-01T03:39:40.177 | 2023-03-01T03:39:40.177 | null | null | 245577 | [
"regression",
"hyperparameter",
"ensemble-learning"
] |
606968 | 2 | null | 606865 | 5 | null |
#### Use the queue function in the utilities package
This type of problem can be dealt with as a [queueing problem](https://en.wikipedia.org/wiki/Queueing_theory), which is a class of problem studied in statistical theory. For the case where you have the inputs for the use of the facilities by a set of users you can use a deterministic function to turn this into a set of queuing metrics. Typically, for each user you would have an input for the time they arrive, the amount of time they need to use the facilities, and a description of their waiting behaviour (i.e., how long they are willing to wait for the facility before giving up and leaving).
Assuming you have all this information, you can use the `queue` function in the [utilities package](https://CRAN.R-project.org/package=utilities) in `R` to compute the queueing metrics under various numbers of beds in your facility (see [O'Neill 2021](https://arxiv.org/abs/2111.07064) for further explanation). Below I show an example of this function showing queueing results from using `n = 3` facilities for a set of twenty users with random arrival-times and use-times. By varying the parameter `n` you can see the queueing results using different numbers of amenities. As you can see, the function allows inputs for a range of aspects of the problem, including revival-times and close-times for the facilities.
```
#Set parameters for queuing model
lambda <- 1.5
mu <- 6
alpha <- 5
beta <- 2
#Generate arrival-times, use-times and patience-times
set.seed(1)
K <- 20
ARRIVE <- cumsum(rexp(K, rate = 1/lambda))
USE.FULL <- 2*mu*runif(K)
WAIT.MAX <- function(kappa) { alpha*exp(-kappa/beta) }
#Compute and print queuing information with n = 3
library(utilities)
QUEUE <- queue(arrive = ARRIVE, use.full = USE.FULL,
wait.max = WAIT.MAX, n = 3, revive = 2,
close.arrive = 30, close.full = 35)
#View the queue results
plot(QUEUE)
QUEUE
```
This code produces the following queueing output showing information for each of the users and facilities in the queueing problem. The plot shows this same information graphically, which gives a clear visualisation of the waiting-times, use-times, etc.
[](https://i.stack.imgur.com/vDnC0.jpg)
```
Queue Information
Model of an amenity with 3 service facilities with revival-time 2
Service facilities close to new arrivals at closure-time = 30
Service facilities close to new services at closure-time = 35
Service facilities end existing services at closure-time = 35
Users are allocated to facilities on a 'first-come first-served' basis
----------------------------------------------------------------------
User information
arrive wait use leave unserved F
user[1] 1.132773 0.000000 1.295324 2.428096 0.0000000 1
user[2] 2.905237 0.000000 8.684531 11.589768 0.0000000 2
user[3] 3.123797 0.000000 4.935293 8.059090 0.0000000 3
user[4] 3.333490 1.094606 9.851356 14.279452 0.0000000 1
user[5] 3.987593 3.032653 0.000000 3.987593 7.7647223 NA
user[6] 8.330046 1.729045 9.395193 19.454283 0.0000000 3
user[7] 10.174389 3.032653 0.000000 10.174389 6.6364357 NA
user[8] 10.983913 1.839397 0.000000 10.983913 6.3566350 NA
user[9] 12.418764 1.171004 9.472275 23.062043 0.0000000 2
user[10] 12.639333 3.032653 0.000000 12.639333 0.2799744 NA
user[11] 14.725436 1.554016 5.726761 22.006213 0.0000000 1
user[12] 15.868481 3.032653 0.000000 15.868481 8.7877649 NA
user[13] 17.724886 3.032653 0.000000 17.724886 8.3127787 NA
user[14] 24.360787 0.000000 5.731435 30.092223 0.0000000 1
user[15] 25.942602 0.000000 9.057398 35.000000 1.2771158 2
user[16] 27.495468 0.000000 5.257165 32.752633 0.0000000 3
user[17] 30.309521 0.000000 0.000000 30.309521 2.9375673 NA
user[18] 31.291641 0.000000 0.000000 31.291641 0.8481486 NA
user[19] 31.797041 0.000000 0.000000 31.797041 1.1935939 NA
user[20] 32.679761 0.000000 0.000000 32.679761 3.7952605 NA
----------------------------------------------------------------------
Facility information
open end.service use revive
F[1] 0 30.09222 22.60488 8
F[2] 0 35.00000 27.21420 6
F[3] 0 32.75263 19.58765 6
```
| null | CC BY-SA 4.0 | null | 2023-03-01T03:52:58.620 | 2023-03-01T03:52:58.620 | null | null | 173082 | null |
606969 | 2 | null | 606937 | 7 | null | Suppose that $X$ and $Y$ are jointly normal with correlation coefficient $\rho \in (-1,1)$ and identical marginal $\mathcal N(0,1)$ distributions. Then, the joint density is
$$f_{X,Y}(x,y) = \frac{1}{2\pi \sqrt{1-\rho^2}}\exp\left[-\frac{1}{2(1-\rho^2)}\left(x^2-2\rho xy + y^2\right)\right]\tag{1}$$
and for any fixed value $x_0$ for $X$, we have that
$$f_{X,Y}(x_0,y) = \frac{1}{2\pi \sqrt{1-\rho^2}}\exp\left[-\frac{1}{2(1-\rho^2)}\left(x_0^2-2\rho x_0y + y^2\right)\right].\tag{2}$$
The OP has drawn pictures of Eq. $(2)$ for different values of $x_0$ and seems to think that these cross-sections of the joint pdf solid are the various conditional densities of $Y$ given that $X$ has those specific values. He seems to think that the variances of these conditional densities increase as $|x_0|$ increases because the cross-sections seem more spread out. But, $f_{Y\mid X = x_0}(y\mid X=x_0)$, the conditional density of $Y$ given that $X=x_0$, is not given by $(2)$. In fact, $f_{X,Y}(x_0,y)$ is a valid pdf only in rare circumstances. Note that, in general and without any assumptions about normality etc,
the integral $\int_{-\infty}^\infty f_{X,Y}(x_0,y) \,\mathrm dy$ equals $f_X(x_0)$ and so while $f_{X,Y}(x_0,y)$ is indeed nonnegative (as all pdfs must be), the area under the curve does not equal $1$ except when $x_0$ is such that $f_X(x_0)$ serendipitously happens to have value $1$. Note that $\dfrac{f_{X,Y}(x_0,y)}{f_X(x_0)}$, which we recognize immediately as the formula for the conditional pdf of $Y$ given that $X$ has value $x_0$, is indeed a valid pdf. Thus,
\begin{align}
f_{Y\mid X = x_0}(y\mid X=x_0) &= \dfrac{f_{X,Y}(x_0,y)}{f_X(x_0)}\\
&= \dfrac{\frac{1}{2\pi \sqrt{1-\rho^2}}\exp\left[-\frac{1}{2(1-\rho^2)}\left(x_0^2-2\rho x_0y + y^2\right)\right]}{\frac{1}{\sqrt{2\pi}}\exp\left[-\frac{1}{2}x_0^2\right]}\\
&= \frac{1}{\sqrt{1-\rho^2}\sqrt{2\pi}}\exp\left[-\frac{1}{2}\left(\frac{y-\rho x_0}{\sqrt{1-\rho^2}}\right)^2\right]
\end{align}
which shows that the conditional density of $Y$ given that $X=x_0$ is a normal density with fixed variance $1-\rho^2$ (not depending on the given value $x_0$ of $X$ at all) but with mean $\rho x_0$ which does vary with $x_0$. This is what several of the comments on the OP's question are trying to point out:
>
the cross-section of the joint pdf solid at $x_0$ is not the conditional density of $Y$ given $X=x_0$ (though it is related). The conditional density has fixed variance (not depending on $x_0$) but variable mean (depending on $x_0$).
It is possible to read too much into the aphorism that the bivariate normal distribution is like a piece of bologna in the sense that no matter how you slice it, it is still bologna. Every cross-section (not necessarily parallel to the axes) of the bivariate normal density solid is proportional to a normal density but is not exactly a normal density unless the cross-section passes through the mean point of the bivariate distribution.
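A quick numerical check (sketched here in Python/NumPy purely for illustration; the variable names are mine) makes this concrete: slicing simulated bivariate normal pairs near several values of $x_0$ shows the conditional variance staying near $1-\rho^2$ while the conditional mean shifts with $x_0$.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8
n = 2_000_000

# Simulate (X, Y) bivariate normal with standard normal margins and correlation rho,
# via the representation Y = rho*X + sqrt(1 - rho^2)*Z with Z independent of X.
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

# Sample variance of Y within thin slices around several x0 values
cond_vars = {x0: y[np.abs(x - x0) < 0.05].var() for x0 in (0.0, 1.0, 2.0)}
print(cond_vars)  # each value is close to 1 - rho**2 = 0.36
```

Each slice has (approximately) the same spread even though the slice at $x_0=2$ contains far fewer points — exactly the distinction between the cross-section of the joint density and the conditional density.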
| null | CC BY-SA 4.0 | null | 2023-03-01T03:53:42.983 | 2023-03-01T23:38:24.230 | 2023-03-01T23:38:24.230 | 6633 | 6633 | null |
606972 | 2 | null | 606937 | 5 | null | If I plot a 95% ellipse for a bivariate normal distribution, it might look like the plot below.
[](https://i.stack.imgur.com/c28nj.png)
When I take a vertical slice through that distribution, I might get a slice like the lighter red line, a, in the center, or I might get a slice like the darker red line, b, on the right. It looks like a is longer than b (and it is). That is, the vertical spread from the bottom to the top of the ellipse at a given point on $X$ changes as you move from side to side. This is simply not the same as the conditional variance. As a first attempt to understand the bivariate normal and the idea of homoscedasticity, just looking at the vertical spread is reasonable, but it isn't quite right. It's simply not the case that the lengths of those lines are the variances of the conditional distributions of $Y$ given $X=x_i$.
With a bivariate normal, you will have more data, and more density in the center (either from side to side, or equally, from top to bottom) than at the extremes. Where there is more data, you will have more extreme values. For instance, only 5% will be more than $2$ SDs from the mean; where you have 100 data points, that would be 5, but where you have 1,000, that will be 50. The result is that the density is higher with more data and it looks like it is spreading out further, but in both cases, the variance is the same. To illustrate, consider the very simple simulation below. Even though the standard deviation is the same, with more data, the value at the 2.5th percentile is further out:
```
set.seed(1) # makes this reproducible
quantile(sort(rnorm( 200, mean=0, sd=1)), probs=.025)
# 2.5%
# -1.641215
set.seed(1)
quantile(sort(rnorm(2000, mean=0, sd=1)), probs=.025)
# 2.5%
# -2.092716
```
| null | CC BY-SA 4.0 | null | 2023-03-01T04:37:22.090 | 2023-03-01T12:33:57.110 | 2023-03-01T12:33:57.110 | 7290 | 7290 | null |
606973 | 1 | null | null | 3 | 53 | I have 1,100 objects to inspect for whether they meet the standard (a yes-or-no question), of which 100 have been inspected already and 99% of them passed the test.
Due to resource constraints, we can't inspect all the remaining 1000 objects and hence need to take a random sample of them. How can I calculate the minimum sample size required to test the hypothesis that at least 98% of them will pass the test?
Note: 95% Significance Level & 99% Confidence Level
| Sample Size Calculation for One Sided Hypothesis Testing | CC BY-SA 4.0 | null | 2023-03-01T04:38:45.450 | 2023-03-01T09:17:54.033 | 2023-03-01T05:03:01.200 | 44269 | 155036 | [
"hypothesis-testing",
"statistical-significance",
"sampling",
"inference",
"sample-size"
] |
606974 | 1 | null | null | 0 | 17 | I want to run a regression with panel data involving variables that are collected at different periods. Is there a way to do this, and if so, is it legitimate, or should panel data regressions only be done when the time periods align so that there is no missing data in particular periods? I've added an image of how the data looks when arranged.
[](https://i.stack.imgur.com/Dkg7Y.png)
| Panel data regression with time periods that are unaligned | CC BY-SA 4.0 | null | 2023-03-01T04:57:47.357 | 2023-03-01T04:57:47.357 | null | null | 381097 | [
"r",
"regression",
"panel-data"
] |
606975 | 1 | 606993 | null | 0 | 105 | I have heard people say, "One of the disadvantages of neural networks is that they are generally less interpretable". But I wonder, how is another model, such as XGBoost, more interpretable than neural networks? In XGBoost, one could use feature importance, SHAP, or PDP to explain the model, which I believe can also apply to neural networks?
| which one between XGBoost and neural networks is more interpretable? | CC BY-SA 4.0 | null | 2023-03-01T05:45:16.583 | 2023-03-01T21:25:24.983 | 2023-03-01T21:25:24.983 | 381101 | 381101 | [
"machine-learning",
"neural-networks",
"interpretation",
"boosting",
"explainable-ai"
] |
606976 | 1 | 607011 | null | 3 | 122 | I am working on Bayesian estimation: suppose that $X_1,\dots, X_n$ is an iid sample from Uniform$[0,\theta]$. Assume a Pareto prior for $\theta\sim Pareto(\alpha,\beta)$, i.e.
$$
f(\theta)=\frac{\alpha\beta^\alpha}{\theta^{\alpha+1}}, \, \theta\ge \beta, \alpha>0, \beta>0
$$
What is the Bayes estimator of $\theta$? Do the prior and posterior belong to the same family of distributions (i.e., conjugate prior)?
(2) What does this estimator converge to as $n\to \infty$?
---
My work is as follows.
The prior distribution:
$$
\pi(\theta)=\frac{\alpha\beta^\alpha}{\theta^{\alpha+1}}I[\theta\ge \beta]
$$
and the likelihood function is
$$
f(X|\theta)=\prod_{i=1}^n f(x_i;\theta)=\frac{1}{\theta^n}I[0\le X_{(1)}\le X_{(n)}\le \theta]
$$
where $X_{(1)}\le \dots \le X_{(n)}$ are the order statistics.
Then the posterior distribution is
$$
\pi(\theta|X)\propto \pi(\theta)f(X|\theta)=\frac{\alpha\beta^\alpha}{\theta^{n+\alpha+1}}I[0\le X_{(1)}\le X_{(n)}\le \theta]I[\theta\ge \beta]
$$
So for $\beta\ge X_{(n)}$, $\pi(\theta|X)\propto \frac{\alpha\beta^\alpha}{\theta^{n+\alpha+1}}I[\theta\ge \beta]$.
For $\beta<X_{(n)}$, $\pi(\theta|X)\propto \frac{\alpha\beta^\alpha}{\theta^{n+\alpha+1}}I[\theta\ge X_{(n)}]$.
But how do I identify the distribution of $\theta|X$? Does it mean $\theta|X\sim Pareto(n+\alpha,\beta)$?
---
For the Bayes estimator,
$$
\hat{\theta}=E[\theta|X]=\int_{\mathbb R} \theta\, \pi(\theta|x)\,d\theta
$$
As $\beta\ge X_{(n)}$,
$$
\hat{\theta}=E[\theta|X]=\int_\beta^\infty \theta \frac{(n+\alpha)\beta^{n+\alpha}}{\theta^{n+\alpha+1}}d\theta=\frac{(n+\alpha)\beta}{n+\alpha-1}
$$
As $\beta< X_{(n)}$, the pdf of $\theta|X$ is
$$
g(\theta|x)=\frac{(n+\alpha)(X_{(n)})^{n+\alpha}}{\theta^{n+\alpha+1}}I[\theta\ge X_{(n)}],
$$
then
$$
\hat{\theta}=E[\theta|X]=\frac{(n+\alpha)X_{(n)}}{n+\alpha-1}
$$
It seems that this result is not right because the Bayes estimator should be a weighted average of the prior mean ($E[\theta]=\frac{\alpha\beta}{\alpha-1}$) and the sample mean $\bar{X}$.
| Bayesian estimation of iid sample from Uniform$[0,\theta]$ and a Pareto$(\alpha,\beta)$ prior for $\theta$ | CC BY-SA 4.0 | null | 2023-03-01T06:47:15.790 | 2023-03-02T08:02:10.147 | 2023-03-02T07:47:51.607 | 67799 | 334918 | [
"bayesian",
"estimation",
"uniform-distribution",
"conjugate-prior"
] |
606977 | 1 | null | null | 1 | 34 | I have a linear regression task where I am trying to predict in the interval of roughly [-100, 100]. I'd like to build a GBM (though it doesn't have to be one) that allows me to see the probability distribution of potential outcomes instead of the point prediction.
My end goal is to sample from that distribution for a simulation exercise.
If it simplifies things greatly, it would be reasonable (for my task) to assume the error structure is normal, that is, a specified mean and variance would be enough to generate a distribution I can sample from.
I explored quantile regression a bit, but it's not directly suited to the simulation task (since you need all quantiles for simulation, and would have to calculate or impute them).
| Probablistic linear regression predictions for simulation tasks | CC BY-SA 4.0 | null | 2023-03-01T07:03:26.310 | 2023-03-01T07:03:26.310 | null | null | 71965 | [
"regression",
"distributions",
"simulation",
"boosting"
] |
606978 | 1 | null | null | 1 | 31 | I was linked a youtube video where the first two characters were underscores, how could I calculate the probability of that happening? 64 possible characters, 11 character long string, and only two underscores appearing twice at the beginning. Is this possible?
| Probability of words appearing at specific area in string | CC BY-SA 4.0 | null | 2023-03-01T07:13:14.567 | 2023-03-01T08:25:01.057 | 2023-03-01T08:25:01.057 | 362671 | 355935 | [
"probability",
"distributions",
"normal-distribution",
"binomial-distribution"
] |
606979 | 1 | null | null | 2 | 20 | I'm running a likelihood ratio test to check whether or not a condition has a significant outcome for subjects performing an experiment, and I'm using lmer in R to do this. So I run something like
```
> mod = lmer(outcome~ 1 + (1 | subject_id),data=df)
> mod2 = lmer(outcome~ 1 + (1 | subject_id) + condition,data=df)
> anova(mod,mod2)
```
Which results in something like
```
npar AIC BIC logLik deviance Chisq Df Pr(>Chisq)
mod 3 -6020.6 -6002.0 3013.3 -6026.6
mod2 4 -6019.5 -5994.7 3013.7 -6027.5 0.9067 1 0.341
```
My question is whether the P-value I obtain this way is one-sided or two-sided.
| If I use anova() in R to compare two nested models, do I get a one-sided or two-sided p-value? | CC BY-SA 4.0 | null | 2023-03-01T07:17:52.017 | 2023-03-01T07:17:52.017 | null | null | 381103 | [
"r",
"anova",
"lme4-nlme",
"p-value"
] |
606981 | 2 | null | 606978 | 1 | null | If you assume that all characters are equally likely to appear at any position (not likely for any "real" language, but probably a reasonable assumption for auto-generated URLs), then the probability for an underscore at any particular position is $\frac{1}{64}$.
If you also assume that characters are independent (again see above), then the probability for two underscores is $\frac{1}{64}\times\frac{1}{64}\approx 0.0002$.
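A one-line sanity check of that arithmetic (in Python, purely illustrative):

```python
# Probability of an underscore at each of the first two positions,
# assuming 64 equally likely, independent characters
p_one = 1 / 64
p_two = p_one ** 2
print(p_two)  # 0.000244140625, i.e. 1 in 4096
```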
| null | CC BY-SA 4.0 | null | 2023-03-01T07:37:54.630 | 2023-03-01T07:37:54.630 | null | null | 1352 | null |
606982 | 1 | null | null | 3 | 171 | I'm writing a paper that uses a random forest algorithm. I want to represent the model in the paper.
But I was wondering how to illustrate a random forest model using an equation in a paper, with the target variable, y and features, x.
I hope this makes sense.
| Represent a random forest model as an equation in a paper | CC BY-SA 4.0 | null | 2023-03-01T07:39:04.503 | 2023-03-28T15:33:16.067 | 2023-03-28T15:33:16.067 | 247274 | 187921 | [
"machine-learning",
"random-forest"
] |
606983 | 1 | null | null | 7 | 162 | Consider the model [1]
$$y_n=X_n\beta_n+\epsilon_n$$
$$\beta_i|\sigma^2,v_i \sim \mathcal{N}(0,\sigma^2 v_i), i=1,\ldots,p$$
$$v_i \sim \beta^\prime(a,b)$$
$$\sigma^2 \sim \mathcal{IG}(c,d)$$
where $\beta^\prime$ is the beta prime distribution and $\mathcal{IG}$ the inverse gamma distribution.
In [1], the authors prove posterior consistency in the high-dimensionality regime where $p\rightarrow \infty$ as $n\rightarrow \infty$. Is there a way to show posterior consistency for fixed/small $p$ as $n\rightarrow \infty$?
[1] Bai, R. and Ghosh, M., 2021. On the beta prime prior for scale parameters in high-dimensional bayesian regression models. Statistica Sinica, 31(2), pp.843-865.
| Posterior consistency for scale-mixture shrinkage priors in low dimension? | CC BY-SA 4.0 | null | 2023-03-01T07:39:42.557 | 2023-03-08T09:15:32.070 | 2023-03-08T07:14:10.717 | 191260 | 191260 | [
"regression",
"probability",
"bayesian",
"large-data",
"consistency"
] |
606984 | 1 | 607021 | null | 4 | 123 | Let $(U_i,V_i)$ be i.i.d samples from a bivariate distribution. Show that $$\mathbb{E}[(1-U^2_1)\cdot(1-U^2_{2})\cdot|V_1-V_2|]=\mathbb{E}\left\{\mathbb{E}[(1-U^2_1)|V_1]\cdot\mathbb{E}[(1-U^2_2)|V_2]\cdot|V_1-V_2| \right\}$$
I have no idea how to start. Are there some properties of conditional expectation I should use?
Could you give me some hints?
| The equality about conditional expectation | CC BY-SA 4.0 | null | 2023-03-01T08:03:23.660 | 2023-03-01T19:36:01.540 | null | null | 73778 | [
"probability",
"conditional-expectation"
] |
606986 | 2 | null | 606982 | 7 | null | You won't be able to write down the full specification of the fitted RF the way you could, e.g., write down a regression equation. The best you can likely do is to explain what software you used (with version), and the exact parameters you provided to your fitting method (ideally also the RNG seed). Then in principle anyone who has your exact data should be able to replicate your model. Plots of the fitted response against important predictors are also often helpful.
| null | CC BY-SA 4.0 | null | 2023-03-01T08:30:08.393 | 2023-03-01T08:30:08.393 | null | null | 1352 | null |
606988 | 1 | null | null | 3 | 149 | I would like to compute the ratio of proportions coming from a Dirichlet distribution. My understanding is that each proportion should be treated as a random variable and therefore I should use a Taylor expansion to compute the mean and variance of the ratio. However, when I follow this procedure I seem to obtain very different results between the first- and second-order Taylor expansions, and more importantly the second-order Taylor results are difficult to interpret (i.e., they don't seem to make sense). As I am not very familiar with the Dirichlet distribution, I was wondering whether I missed something.
Here is an illustration of the problem on simulated data (using R and the DirichletReg package for the modelling) - In this example, we simulate 6 proportions for 10,000 individuals using a Dirichlet distribution.
```
# load R package
library(DirichletReg)
# simulate data (6 proportions, 10000 individuals)
pct0 = (1:6)/sum(1:6)
set.seed(12345)
data = rdirichlet(10000,pct0)
data = as.data.frame(data)
colnames(data) = paste0('pct',1:6)
# model estimation
data$Y = DR_data(data[,paste0('pct',1:6)])
fit = DirichReg(Y ~ 1, data=data, model='common')
# retrieve model estimates
mle = as.numeric(fit$coefficients)
names(mle) = paste0('pct',1:6)
# compute average
a0 = sum(exp(mle))
pct1 = exp(mle)/a0
# compare estimated vs true proportions
cbind(pct0,pct1)
# compute mean (mu), variance (va) and covariance (cov) of Dirichlet distribution for proportion_1 and proportion_2
# source: https://en.wikipedia.org/wiki/Dirichlet_distribution
mu1 = pct1[1]
mu2 = pct1[2]
va1 = (mu1*(1-mu1))/(1+a0)
va2 = (mu2*(1-mu2))/(1+a0)
cov12 = -(mu1*mu2)/(1+a0)
# compute average of ratio of proportion_1/proportion_2
# source: https://www.stat.cmu.edu/~hseltman/files/ratio.pdf
# with first order taylor expansion
muratio1 = mu1/mu2
# with second order taylor expansion
muratio2 = (mu1/mu2) - (cov12/(mu2**2)) + (mu1*va1/(mu2**3))
# compare results
c(muratio1,muratio2)
```
I would have 2 questions for you:
(1) Is it ok to obtain average proportions from the Dirichlet regression (DR) that are quite different from the true values? For example, true proportion for proportion_1 was 0.04761905, and when we compute the mean over 10000 individuals we obtain 0.04866258, but with the DR model the corresponding average would be 0.09224526 (so almost twice the size of the true value!).
(2) The average ratio obtained with 1st Taylor expansion is 0.7919314 and I would say is in line with the true values (The ratio of true values is 0.5). However the 2nd Taylor expansion gives us a very different result (3.239524). If I understand this value correctly, this is telling us that average of proportion_1/proportion_2 ratio is 3.24, which would suggest that on average the proportion_1 is larger than proportion_2, but we know that this is not true.
| Computation of ratio with Dirichlet distribution | CC BY-SA 4.0 | null | 2023-03-01T08:35:16.747 | 2023-03-08T17:16:21.080 | 2023-03-05T01:25:29.847 | 11887 | 239552 | [
"distributions",
"ratio",
"dirichlet-distribution",
"taylor-series",
"dirichlet-regression"
] |
606989 | 2 | null | 606973 | 0 | null | The minimum number of additional samples you need to test is $135$.
---
In your problem you have a Bernoulli random variable $X$ with an unknown parameter $p$. Your current estimate for $p$ is that it is $.99$, of course, because of uncertainty we are not $95\%$ sure that this estimate is correct.
If we assume a uniform prior on $p$, then the posterior distribution for $p$, after gathering data of $99$ successes and $1$ failure, is given by the function,
$$ f(p) = \frac{101!}{(99!\times 1!)}p^{99}(1-p)^1 $$
The minimum number of additional samples $n$ will occur when they are all successes (otherwise you will need to keep on gathering more data). If we gather $n$ more samples, and if they are all successes, then the posterior distribution changes to,
$$ f(p) = \frac{(101+n)!}{(99+n)!\times 1!}p^{99+n}(1-p)^1 = (101+n)(100+n)p^{99+n}(1-p) $$
You want to be $95\%$ sure that the true value of $p$ is at least $.98$.
Therefore, you need to find the minimum $n$ such that,
$$ \int_{.98}^1 (101+n)(100+n)p^{99+n}(1-p) ~ dp \geq .95 $$
Using WolframAlpha,
```
integrate (101+n)(100+n) x^(99+n) (1-x) dx from .98 to 1 where n = 134
```
We find this integral computes to $.949$, however at $n=135$ it will exceed $.95$.
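The integral above also has a closed form, $1-(101+n)(0.98)^{100+n}+(100+n)(0.98)^{101+n}$, so the minimum $n$ can be found with a short script instead of WolframAlpha (sketched in Python here, purely for illustration):

```python
def prob_at_least_098(n):
    # Closed form of the integral of (101+n)(100+n) p^(99+n) (1-p) dp from 0.98 to 1
    return 1 - (101 + n) * 0.98 ** (100 + n) + (100 + n) * 0.98 ** (101 + n)

n = 0
while prob_at_least_098(n) < 0.95:
    n += 1
print(n)  # 135
```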
| null | CC BY-SA 4.0 | null | 2023-03-01T09:17:54.033 | 2023-03-01T09:17:54.033 | null | null | 68480 | null |
606990 | 1 | null | null | 0 | 15 | Given that we usually have sample data, it makes sense to describe statistical significance in relation to the size of our sample and so on. But with, for example, country-level data, we might have a complete survey rather than a sample.
Let's say that I have complete country level data on
- per capita fuel consumption (dependent variable), which I think is conditioned on population density, vehicle fleet attributes and a few other control variables that I also have aggregated country-level data on.
I might conduct a difference-in-difference regression in order to determine the causal impact of a policy intended to lower fuel consumption. If I had individual-level data on these variables it would make sense to report standard errors to determine if the sample belongs to the target population. But in this case, I have complete population data.
Am I still restricted to the same assumptions about sample size and so on that you would be with individual-level data?
And does it make sense to apply DiD in such a scenario? A tangential question: can my treatment group only consist of one country, while the control group consists of many? The famous Card and Krueger (1994) study on minimum wages has this setup, but on firm-level data. Does having aggregated data change anything?
| Questions on Difference in difference and statistical significance with aggregated data | CC BY-SA 4.0 | null | 2023-03-01T09:21:01.820 | 2023-03-01T09:21:01.820 | null | null | 360425 | [
"statistical-significance",
"econometrics",
"difference-in-difference"
] |
606991 | 1 | null | null | 2 | 26 | Define
$I_n(\lambda_j) = \frac{1}{2\pi n} |\sum_{t = 1}^n Z_t e^{it\lambda_j}|^2 = \frac{1}{2 \pi} \sum_{h = - \infty}^{\infty} \hat{\gamma}_n(h) e^{ih\lambda_j}$
where $Z_t$ is a $WN \sim (0, \sigma^2)$ and $\gamma(h) = cov(Z_{t}, Z_{t - h})$ is the autocovariance of the process at lag $h$, $\hat{\gamma}_n(h)$ is sample counterpart. Denote, $\boldsymbol{\gamma}_h = (..., \gamma(0), \gamma(1), ...)^T$, we know that the vector $\sqrt{n}(\hat{\boldsymbol{\gamma}_h} - \boldsymbol{\gamma}_h)$ is asymptotically normally distributed with a covariance which is of order $O(\frac{1}{n})$.
I would like to prove a CLT for sums of the $I_n(\lambda_j)$, i.e.
$\begin{aligned} \sum_{j=1}^m I_n\left(\lambda_j\right) & =\frac{1}{2 \pi n} \sum_{j=1}^m \sum_{h=-\infty}^{\infty} \hat{\gamma}(h) e^{i h \lambda_j} =\frac{1}{2 \pi n} \sum_{h=-\infty}^{\infty} \hat{\gamma}(h) \sum_{j=1}^m e^{i h \lambda_j} \end{aligned}.$
Since the whole vector $\boldsymbol{\hat{\gamma}_h}$ is A.N. we can expect that the infinite sums is A.N as well, however I do not know how to prove it. Any ideas?
| CLT for sums of Fourier transform of white noises r.v | CC BY-SA 4.0 | null | 2023-03-01T10:05:45.133 | 2023-03-01T10:37:38.743 | 2023-03-01T10:37:38.743 | 365245 | 365245 | [
"central-limit-theorem",
"fourier-transform",
"white-noise",
"slutsky-theorem"
] |
606992 | 1 | null | null | 0 | 14 | I have a dataset with 200,000 obs and the following variables:
score (continuous: 0-100), pred (binary: 0/1).
I want to create a binary variable, pred2, that behaves as follows:
a) if score is high, pred2=1
b) if score is middle, pred2=pred
c) if score is low, pred2=0
I want to define when the score is high/middle/low as follows:
The threshold for a high score should be such, that the total number of observations where pred2=1 is 7,000. And with the total number, I mean across both high/middle/low groups, i.e., in total 7,000 observations out of the total 200,000 observations should have pred2=1.
I want 50% of all observations (i.e. 100,000) to be in group high/low, i.e. the threshold for a low score should simply be 100,000 observations minus the number of observations in the high group.
However, I have a lot of trouble with this, since it seems like a dynamic problem - when changing the threshold for the high score group, it affects the threshold for the low group. And this in turn affects the distribution of 1s and 0s in the middle group, since this group takes the value of pred.
Of course, I can find a more or less manual way to keep adjusting the thresholds back and forth, but there must exist a more elegant way.
| Setting threshold with dynamic feedback | CC BY-SA 4.0 | null | 2023-03-01T10:20:20.237 | 2023-03-05T01:18:42.597 | 2023-03-05T01:18:42.597 | 11887 | 322537 | [
"threshold",
"binning"
] |
606993 | 2 | null | 606975 | 2 | null | Both (sufficiently large) neural networks and XGBoost are not interpretable on their own (they are not "intrinsically interpretable") and are [typically](https://www.nature.com/articles/s42256-019-0048-x) [seen](https://dl.acm.org/doi/10.1145/3359786) as part of the same "not interpretable" category. Both can be interpreted post-hoc using various methods such as feature importance, SHAP, PDP, [saliency maps](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9369420), [counterfactual explanations](https://christophm.github.io/interpretable-ml-book/counterfactual.html), etc.
As Stephan Kolassa already mentioned, the "more interpretable" traditional models neural networks are often compared with are much simpler models like logistic regression or simple (non-boosted, non-ensemble) decision trees. (These [often perform surprisingly well](https://stats.stackexchange.com/a/582072/131402) when compared with modern neural networks in a proper evaluation. Even in image processing, logistic regression based on interpretable hand-crafted features [can very well outperform modern CNNs](https://arxiv.org/abs/2204.01737).)
In a separate branch of research, there is [a push to make neural networks intrinsically interpretable](https://www.nature.com/articles/s42256-019-0048-x).
| null | CC BY-SA 4.0 | null | 2023-03-01T10:29:19.727 | 2023-03-01T10:29:19.727 | null | null | 131402 | null |
606994 | 2 | null | 592957 | 0 | null | >
What does it mean that each category contributes to this dimension or that dimension?
You might find answers to your questions on this page : [http://www.sthda.com/english/articles/31-principal-component-methods-in-r-practical-guide/114-mca-multiple-correspondence-analysis-in-r-essentials/](http://www.sthda.com/english/articles/31-principal-component-methods-in-r-practical-guide/114-mca-multiple-correspondence-analysis-in-r-essentials/)
See for example:
>
The variable categories with the larger value, contribute the most to
the definition of the dimensions. Variable categories that contribute
the most to Dim.1 and Dim.2 are the most important in explaining the
variability in the data set.
| null | CC BY-SA 4.0 | null | 2023-03-01T10:39:20.520 | 2023-03-01T10:39:20.520 | null | null | 381113 | null |
606995 | 1 | null | null | 0 | 9 | I have looked through all the online material about fixed vs random effects, and I have not been able to make head or tail of most of what I have read. I am not a statistician, and unfortunately, when R was taught, MEMs were only very briefly discussed.
I have data that has not yet been analysed: the continuous outcome variable is a vowel's formant frequency (in Hz) that has been normalised by [-1,1] min-max normalisation; this indicates how back or front in the mouth (respectively) the vowel is pronounced.
I am unsure how to specify that one of my fixed effects (word in which the vowel was produced, "word": categorical, 14 levels) is a nested effect in another fixed effect (word type; there is a priori reason to suspect that words of a similar type tend to behave similarly, although some words behave opposite to the rest of the words of the same type; categorical, 4 levels) using R's lme4 package.
Additionally, there are some other grouping factors that could have possible effects: age (discrete, many levels), sex (categorical, 2 levels), dialect background (categorical, many levels), and, of course, the random effect of participant (categorical, n levels).
How do I know which effects interact, and which cross, when random slope/interaction should be employed vs fixed, and when intercepts vary across crossed random effects?
My best guess so far, if the most complicated model is the best fit, is the following:
```
model <- lmer(normalised_frequency ~ word*wordtype + sex + (1|age) + (1|background) + (word|subject), data = dat)
```
If anyone could help me improve this code, or help me understand when e.g. a slope should be correlated with an effect or when to let effects interact, then that would be great.
| Not sure how to define effects in MEM in R | CC BY-SA 4.0 | null | 2023-03-01T10:47:57.277 | 2023-03-01T10:47:57.277 | null | null | 381110 | [
"r",
"mixed-model",
"multiple-regression",
"nested-data"
] |
606996 | 1 | null | null | 1 | 84 | I want to examine the relative importance of the predictors in my model.
I know the `domir` package in `R` allows for the examination of relative importance for `glm` but my model is a `nb.glm`.
Is there a way that I can use to examine the relative importance in this situation?
| Can I use Dominance analysis (using r package domir for instance) with a negative binomial (nb.glm)? | CC BY-SA 4.0 | null | 2023-03-01T10:59:50.377 | 2023-04-03T22:21:39.897 | 2023-04-03T22:21:39.897 | 203199 | 381117 | [
"r",
"generalized-linear-model",
"negative-binomial-distribution",
"importance"
] |
606997 | 1 | 606999 | null | 1 | 36 | Lets suppose we have a time series with monthly data (frequency=12)
```
my_tS <- ts(my_monthly_data, start=c(2002,1), frequency=12)
```
When I plot it, I can see a seasonality every 6 months. So, to remove it, I should compute a differencing every 6 months:
```
my_tS_stationary <- diff(my_tS, 6)
```
I check it with KPSS and it's stationary. Now, I want to model it with ARIMA(p,d,q)(P,D,Q)[12]. Which value should I use for D? A value of 1 is for a 12-month seasonality (the frequency of my TS), but mine is every 6 months...
Edit: As requested, I add the images of the graphs I made to check seasonality
[](https://i.stack.imgur.com/Weu80.png)
[](https://i.stack.imgur.com/Y59nn.png)
[](https://i.stack.imgur.com/al1Rn.png)
EDIT AGAIN: Added season plot... Data are flights through the year. Now, with the season plot, I think it has a 12-month seasonality with two peaks in the year. Thanks, Stephan and Richard, for guiding me to find this!
[](https://i.stack.imgur.com/PSDyJ.png)
| ARIMA D coefficient for seasonality lesser than frequency | CC BY-SA 4.0 | null | 2023-03-01T11:24:43.297 | 2023-03-02T09:15:30.617 | 2023-03-02T09:15:30.617 | 381118 | 381118 | [
"r",
"arima",
"seasonality",
"differencing"
] |
606998 | 1 | null | null | 0 | 15 | I have a question about determining the shape of a curve (linear vs. quadratic) in latent growth curve models within a structural equation model framework.
If you are including time-varying covariates in an unconditional model (see example below), should these be included when determining the shape of the curve? Or should you determine the shape first, using fit indices and LRT tests, before adding unconditional time-varying covariates?
Thanks in advance for any help!
[](https://i.stack.imgur.com/pYCdS.png)
| Latent growth curve models (SEM): determining the shape of curve with unconditional time-varying covariates | CC BY-SA 4.0 | null | 2023-03-01T11:39:17.447 | 2023-03-01T11:39:17.447 | null | null | 323168 | [
"panel-data",
"structural-equation-modeling",
"fitting",
"curve-fitting",
"time-varying-covariate"
] |
606999 | 2 | null | 606997 | 1 | null | The `D` parameter governs the number of seasonal differences, where "seasonal" is understood to be as per the underlying time series frequency. What you are implicitly doing is considering your time series as having a frequency not of 12 (monthly data with yearly seasonality), but of 6 (monthly data with half-yearly seasonality).
So the best solution would be to recode your time series with a new `frequency=6` attribute, then feed it into `auto.arima()` (with or without [a hard setting for the D parameter](https://stackoverflow.com/q/37046275/452096)):
```
> library(forecast)
> auto.arima(ts(AirPassengers,frequency=6))
Series: ts(AirPassengers, frequency = 6)
ARIMA(4,1,2) with drift
Coefficients:
ar1 ar2 ar3 ar4 ma1 ma2 drift
0.2243 0.3689 -0.2567 -0.2391 -0.0971 -0.8519 2.6809
s.e. 0.1047 0.1147 0.0985 0.0919 0.0866 0.0877 0.1711
sigma^2 = 706.3: log likelihood = -670.07
AIC=1356.15 AICc=1357.22 BIC=1379.85
```
| null | CC BY-SA 4.0 | null | 2023-03-01T11:40:35.170 | 2023-03-01T16:43:33.883 | 2023-03-01T16:43:33.883 | 381118 | 1352 | null |
607000 | 2 | null | 606982 | 6 | null | Realistically, this cannot be done. It would require you to give every decision tree in your random forest, and that’s a lot of trees! The best you can realistically do is, as Stephan Kolassa wrote, say exactly what you did. If you publish your code on a personal website, GitHub, or journal website, that’s even better!
If you just want to illustrate the idea of a random forest in your paper, however, it might make sense to make up three small decision trees, show how an observation makes its way down each tree, and then calculate the mean.
| null | CC BY-SA 4.0 | null | 2023-03-01T11:41:40.807 | 2023-03-01T17:08:15.427 | 2023-03-01T17:08:15.427 | 247274 | 247274 | null |
607001 | 1 | null | null | 3 | 177 | I have a mixed-ANOVA design: 2 groups (experimental & control) x 4 time points (pre, post, post2, post3), and am using G*Power to calculate the required sample size.
My understanding is that the "ANOVA Repeated measures, between factors" should be used if I am interested in the sample size required to detect the main effect of the group factor, and that the "ANOVA Repeated measures, within-between interaction" should be used if I am interested in the sample size required to detect the interaction effect (group*time).
However, the sample size required when I used "ANOVA Repeated measures, within-between interaction" is lower than the sample size required when using "ANOVA Repeated measures, between factors", which is weird as we generally require larger sample size to detect an interaction effect.
Am I missing something here?
| G*Power - Difference between ANOVA Repeated measures, between factors and ANOVA: Repeated measures, within-between interaction | CC BY-SA 4.0 | null | 2023-03-01T11:43:15.567 | 2023-03-01T16:43:04.340 | null | null | 381122 | [
"anova",
"repeated-measures",
"interaction",
"statistical-power",
"gpower"
] |
607002 | 1 | null | null | 0 | 21 | We apply the Student’s t distribution instead of the normal when the standard deviation is also being estimated (for example, in a t-test).
But what happens with other probability distributions when we do a hypothesis test? Say for example we are using a Cauchy distribution with all its parameters being estimated.
| Student’s t distribution counterpart for non-normally distributed outcomes | CC BY-SA 4.0 | null | 2023-03-01T11:44:47.763 | 2023-03-01T11:44:47.763 | null | null | 152490 | [
"probability",
"hypothesis-testing"
] |
607003 | 1 | null | null | 1 | 30 | I have a data set on a large group of long-term stroke survivors. I need to determine whether there is a difference in ability (on a scale of 1-10) between their left and right arms. The ordinal data is heavily skewed towards scores of 9/10.
I'm wavering between the Wilcoxon matched-pairs test and the McNemar test; which should I use?
Thanks in advance,
Harry
| I'm having trouble choosing between Wilcoxon matched pairs test and McNemar test for a data set | CC BY-SA 4.0 | null | 2023-03-01T11:56:29.427 | 2023-03-01T12:14:59.593 | 2023-03-01T12:14:59.593 | 3277 | 380069 | [
"hypothesis-testing",
"nonparametric",
"paired-data",
"wilcoxon-signed-rank",
"mcnemar-test"
] |
607004 | 1 | null | null | 1 | 62 | Is it recommended to check for autocorrelation in a Bayesian Structural Time Series model (using CausalImpact)?
If so, could I use the Durbin Watson significance table for models with no intercept to determine the lower and upper bounds for the critical values? [https://www3.nd.edu/~wevans1/econ30331/Durbin_Watson_tables.pdf](https://www3.nd.edu/%7Ewevans1/econ30331/Durbin_Watson_tables.pdf)
| Should I check for autocorrelation in a Bayesian Structural Time Series Model (CausalImpact)? | CC BY-SA 4.0 | null | 2023-03-01T11:57:49.563 | 2023-03-01T11:57:49.563 | null | null | 360160 | [
"bayesian",
"autocorrelation",
"structural-equation-modeling",
"causalimpact",
"durbin-watson-test"
] |
607005 | 1 | null | null | 1 | 28 | Why do I not get the same p-value when comparing the marginal means for 3rd and 2nd person using the pairs() or contrast() function? (see code below)
[](https://i.stack.imgur.com/tKydl.png)
| Why different p-value when using pairs() or a contrast()? | CC BY-SA 4.0 | null | 2023-03-01T12:00:17.590 | 2023-03-05T01:16:38.067 | 2023-03-05T01:16:38.067 | 11887 | 381123 | [
"anova",
"p-value"
] |
607006 | 1 | 616289 | null | 2 | 57 | I have 32 months of historical data and I am testing forecasting methodologies. I assume that only 12 months of data are available, I forecast months 13-32, and then compare actuals for months 13-32 with my forecasts for months 13-32. The data at hand shows the number of monthly deaths, starting with a fixed population at time 0, and it follows a nice exponential decay curve over time. In the plots, the y-axis shows the number of deaths and the x-axis shows the months elapsed since time 0.
I’ve used traditional time series forecasting and have gotten good results with exponential state space models (ETS function from R package `feasts`), with results that encompass the variability I’ve seen with this type of data. But I’m exploring other methodologies and am currently studying survival analysis, since I have a lot of variables that correlate with the probability of death and I have data for each study element showing progression stage each month.
So far in survival analysis I see that it is very useful for showing any effect of those variables on death rates (multivariate analysis, etc.), but at this stage I’m only interested in forecasting and simulating future curve paths in the hypothetical scenario of only having a partial curve to work with. Is survival analysis appropriate for forecasting from a partial curve? If so, how does one forecast future curve shape using survival probabilities and hazard rates (ignoring the variates)? Are there other methodologies besides ETS and survival that I should be exploring?
The below images show the survival probabilities plots, and the ETS model forecasts, with this dataset. Basically, is it possible to derive the sort of estimates using survival models that I’ve done with ETS?
Survival probabilities:
[](https://i.stack.imgur.com/z6bV1.png)
Time-series forecasting using ETS:
[](https://i.stack.imgur.com/2IMKi.png)
| Is survival analysis appropriate for forecasting time series data? | CC BY-SA 4.0 | null | 2023-03-01T12:04:21.020 | 2023-05-19T05:01:18.207 | null | null | 378347 | [
"r",
"time-series",
"forecasting",
"survival"
] |
607007 | 2 | null | 305310 | 0 | null | The universe is the broadest notion. In research, the universe is the overall domain of your study, the population is the set of units within that universe with the specific characteristics of interest, and the samples are units selected from that population.
| null | CC BY-SA 4.0 | null | 2023-03-01T12:15:49.423 | 2023-03-01T12:38:34.910 | 2023-03-01T12:38:34.910 | 362671 | 381124 | null |
607008 | 1 | null | null | 0 | 12 | I am doing a linear regression analysis of IMU data (accelerometer, gyroscope, magnetometer).
When I do 10-fold walk forward cross-validation, the first split gives the best results.
When I do 5-fold walk forward cross-validation, the fourth split gives the best results.
My expectation was that each successive split would give better results than the previous one, due to having more training data.
I am confused about how I should interpret or investigate this.
| First split in walk forward cross-validation gives the best results in linear regression | CC BY-SA 4.0 | null | 2023-03-01T13:00:20.020 | 2023-03-01T13:00:20.020 | null | null | 339131 | [
"regression",
"multiple-regression",
"cross-validation"
] |
607011 | 2 | null | 606976 | 2 | null | The Pareto is indeed conjugate to the uniform, see e.g. [Aside from the exponential family, where else can conjugate priors come from?](https://stats.stackexchange.com/questions/192554/aside-from-the-exponential-family-where-else-can-conjugate-priors-come-from/192675#192675).
The posterior mean looks right, see also [https://en.wikipedia.org/wiki/Pareto_distribution](https://en.wikipedia.org/wiki/Pareto_distribution) (in Wikipedia's notation, $\alpha>1$ is guaranteed as the present $\alpha$ is positive and the sample size $n\geq1$).
The result that the posterior mean is a weighted average of prior mean and MLE (which the sample mean is not, though, so that I am not sure why to expect that in the first place?) is restricted to certain parametrizations in exponential families (and the uniform is not a member). See e.g. [Can the posterior mean always be expressed as a weighted sum of the maximum likelihood estimate and the prior mean?](https://stats.stackexchange.com/questions/498250/can-the-posterior-mean-always-be-expressed-as-a-weighted-sum-of-the-maximum-like) or [How does Prior Variance Affect Discrepancy between MLE and Posterior Expectation](https://stats.stackexchange.com/questions/486602/how-does-prior-variance-affect-discrepancy-between-mle-and-posterior-expectation).
We have that the maximum $X_{(n)}$ is consistent for $\theta$. This follows from, e.g., [https://math.stackexchange.com/questions/2905482/expectation-and-variance-of-y-maxx-1-ldots-x-n-where-x-is-uniformly-dis](https://math.stackexchange.com/questions/2905482/expectation-and-variance-of-y-maxx-1-ldots-x-n-where-x-is-uniformly-dis) (slightly adapting the argument from a uniform on $[0,1]$ to one on $[0,\theta]$; essentially, work with cdf $y/\theta$ on $[0,\theta]$ instead of cdf $y$ on $[0,1]$) and noting that $E(X_{(n)})\to\theta$ and $V(X_{(n)})\to0$, so mean square convergence which implies consistency.
Also, since $(n+\alpha)/(n+\alpha-1)\to1$, the posterior mean will tend to either the true $\theta$ or, when $\beta\geq X_{(n)}$, to $\beta$. [One could additionally consider the variance of the Pareto posterior, which is $\mathcal{O}(n^{-2})$.]
Asymptotically, the latter only seems possible when $\beta$ is larger than the true $\theta$ in view of consistency of $X_{(n)}$ for $\theta$. In that case, the support of the prior does not include the true parameter so that the posterior mean cannot concentrate on the true value.
Here is a little plot with posteriors for different $n$, with one prior choice of $\beta$ smaller than (solid lines) and one larger than (dashed lines) the true upper bound of the uniform (vertical black bar). We notice how the posterior concentrates around either the sample maximum or $\beta$.
[](https://i.stack.imgur.com/MNwj2.jpg)
```
library(EnvStats)  # provides dpareto()

theta <- 1           # true upper bound of the uniform
beta.low <- 0.8      # prior scale below the true theta
beta.high <- 1.03    # prior scale above the true theta
alpha <- 0.5
n <- c(10, 20, 30, 50)
x <- runif(n[4], 0, theta)
x.n <- sapply(n, function(i) max(x[1:i]))  # sample maxima for each n
alpha.n <- alpha + n                       # posterior shape parameters
beta.nlow <- max(x.n, beta.low)            # posterior scale, prior beta below theta
beta.nhigh <- max(x.n, beta.high)          # posterior scale, prior beta above theta
theta.ax <- seq(0.95, 1.1, by=.0001)

cols <- c("chartreuse", "orange", "lightblue", "deeppink4")
plot(theta.ax, dpareto(theta.ax, beta.nlow, alpha.n[4]), type="n",
     xlab="theta", ylab="posterior density")
for (i in seq_along(n)) {
  lines(theta.ax, dpareto(theta.ax, beta.nlow, alpha.n[i]), lwd=2, col=cols[i])         # solid: beta < theta
  lines(theta.ax, dpareto(theta.ax, beta.nhigh, alpha.n[i]), lwd=2, col=cols[i], lty=2) # dashed: beta > theta
}
abline(v=theta, lwd=4)  # true upper bound
```
Quibbles: The posterior mean is only "the" Bayes estimator when you are working with the squared-error loss function.
Also, you could omit the lower indicator in the likelihood function since you know that all $X_i$ are nonnegative.
To indicate that the posterior is proportional to some kernel of a distribution, it is more common to use $\propto$ than $\approx$.
| null | CC BY-SA 4.0 | null | 2023-03-01T13:07:38.477 | 2023-03-02T08:02:10.147 | 2023-03-02T08:02:10.147 | 67799 | 67799 | null |
607012 | 2 | null | 604321 | 0 | null |
- no: by doing that you change the state distribution, which causes a ton of problems, like a change in the value estimation... without the correction, your policy might keep learning from states that it will never see again, because it already knows that they are very bad
- the IS is the correction
- the agent will learn the Q value for the starting and ending state, thus the update will pretty much produce a 0 TD error, causing no issue, and the new priority will be close to 0
| null | CC BY-SA 4.0 | null | 2023-03-01T13:29:22.663 | 2023-03-01T13:29:22.663 | null | null | 346940 | null |
607013 | 1 | null | null | 0 | 26 | I am trying to figure out whether I can trust OLS results in this situation.
I have possibly simultaneous equations that follow the models:
$ y = k_1 + \gamma_1 x + \beta_1 z + u_1 $
$ z = k_2 + \beta_2 x + ... + u_2 $
where $ z $ is exogenous in the first equation, but partially determined by x.
$u_1 , u_2$ are the error terms in the equations (random, mean zero and normal distr).
I am interested in consistent estimation of $ \beta_1 $ through OLS: the impact of z on y.
do I have OLS biased estimates of beta_1 if I introduce x in the first equation?
what if instead I also have a third equation:
$ x = k_3 + \beta_3 z + ... + u_3 $
I am trying to understand whether the equation for y can be estimated through OLS despite the endogeneity of z-x equations. Additionally, in the remote case it is a consistent estimator, the appearance of y in any of the two equations (the second and third) would bias the estimator right?
| Simultaneity biases | CC BY-SA 4.0 | null | 2023-03-01T13:41:13.297 | 2023-03-01T13:41:13.297 | null | null | 357809 | [
"least-squares",
"econometrics",
"simultaneous-equation"
] |
607014 | 2 | null | 604021 | 1 | null | Your LLM will give you a categorical distribution as output, from which you can sample, and thus use RL to estimate the gradient...
What you are suggesting looks more like a GAN in which, instead of a discriminator, you have an ordinal NN, through which you take the gradient to maximize the ordinal output... however, this is unstable, and usually the generator (the LLM) will have a very easy time maximizing the discriminator's output, which is known as mode collapse...
You can probably fine tune a LLM with a loss like:
$$
\nabla L(\theta) = \nabla(-D_{kl}(\pi_{\theta}(y|x)||\pi_{orig}(y|x))) + \nabla D(\pi_{\theta}(y|x)|x)
$$
where the second term is the gradient flowing from the conditional discriminator (whose maximization should give you more human-like responses), and the first one is just a penalty term that keeps the model from drifting too far from the pre-trained one
However, in my opinion, this will just make the LLM overfit the discriminator...
| null | CC BY-SA 4.0 | null | 2023-03-01T13:44:31.007 | 2023-03-01T13:44:31.007 | null | null | 346940 | null |
607015 | 1 | null | null | 0 | 15 | [](https://i.stack.imgur.com/9ZaMw.png) I have datasets X, Y and Z, each a matrix of variables. I was wondering whether canonical correlation analysis tells you, in a direct way, something about the overall correlation between two datasets, in comparison with another pair of datasets.
For example, is it possible to establish whether X is more correlated with Y than with Z? If yes, how? Are the correlations between the linear combinations informative in that sense? (Example in the figure: given these correlations between canonical variables, can we say that the blue case (X vs Y) is more correlated than the red case (X vs Z)?)
Thanks!
| Canonical correlation analysis for direct comparison between datasets | CC BY-SA 4.0 | null | 2023-03-01T13:51:52.580 | 2023-03-01T13:51:52.580 | null | null | 381132 | [
"multivariate-analysis",
"canonical-correlation"
] |
607016 | 1 | null | null | 0 | 27 | I'm interested in constructing a risk index by indexing a large number of identified risk factors into a composite measure (that ideally then has sub-dimensions that can be explored further if one so chooses --- especially potentially linking these sub-dimensions in the form of cascading risks).
I'm wondering if you all have any suggestions on best practices or (ideally), have come across literature that identifies innovative approaches to this task (especially one that maybe integrates some qualitative information though this isn't a pre-requisite), or summarizes existing work / best practices on this topic.
A few examples just to give better context / get the conversation going ...
- https://www.anticipation-hub.org/news/multi-hazard-risk-analysis-methodologies#c2737
- https://mr-hn.github.io/pcaIndex/
- https://academic.oup.com/heapol/article/21/6/459/612115?login=false
| Recommendations on best papers / blogs / existing literature on constructing risk and vulnerability indices? | CC BY-SA 4.0 | null | 2023-03-01T14:05:54.700 | 2023-03-03T02:52:52.647 | 2023-03-03T02:52:52.647 | 11887 | 325815 | [
"pca",
"references",
"dimensionality-reduction",
"risk",
"index-decomposition"
] |
607018 | 2 | null | 604021 | 2 | null | Supervised LLM training only gives the model positive examples, i.e. ones it should produce. It does not provide the negative ones, and a naive attempt to do so would probably fail due to the sheer volume of negatives in the space of possible outputs.
Indeed, you probably could somehow penalize the model for producing outputs like "afsjkafnkfkasfjk nasjfasfas" but that would be a poor negative sample, as the model would probably not produce this gibberish in the first place. Coming up with a particular set of useful negative examples is hard and probably depends on the particular model. This is where RL comes in: it allows you to operate on the model's outputs themselves, which is exactly the thing you want to improve.
| null | CC BY-SA 4.0 | null | 2023-03-01T14:30:10.670 | 2023-03-01T14:30:10.670 | null | null | 62549 | null |
607019 | 1 | null | null | 2 | 74 | I have built a linear regression model with three predictor variables: the model predicts forest growth (y = stand volume) with stand age, stand basal area and site type (x1, x2 and x3 respectively). However, my goal is to predict the forest volume so that I only know the forest age and site type. Is there any way to make the prediction with the model when the stand basal area is unknown? I have also tried to build the model with only two predictor variables but it doesn't produce good values if the stand basal area is removed.
Do you have any tips? I couldn't find anything by googling except this: [https://stackoverflow.com/questions/28528703/ols-predict-using-only-a-subset-of-explanatory-variables](https://stackoverflow.com/questions/28528703/ols-predict-using-only-a-subset-of-explanatory-variables). It suggests setting the value of unused variable as 0.
EDIT:
So I have already trained my model and then tested it with a testing dataset using the predict() function. BOTH the training AND the testing data sets indeed contained the value for area. But I'm now confused about the case when I would have to find the volume for a forest of a certain age (in the future), as I don't know the area of the forest after, for example, 50 years. Thus, I cannot add it to the model as a predictor variable. I added this in case someone thought that I hadn't done the model predictions with a testing data set yet.
| Can I predict values from a multiple regression model with fewer predictors than there are in the model? | CC BY-SA 4.0 | null | 2023-03-01T11:04:18.747 | 2023-03-01T15:52:31.910 | 2023-03-01T15:52:31.910 | 919 | 378827 | [
"r",
"regression",
"predictive-models"
] |
607020 | 2 | null | 607019 | 0 | null | That's very unusual; I can't see how one could be expected to estimate a volume without an area. I would suggest running predictions with a number of example values of area. For example, choose three values of area that you think are reasonable given your context, then run the model three times - each with a fixed area value (and variable x1 and x3 values). Then present the prediction results as a range of volumes that could reasonably be expected from the stand ages and site types.
| null | CC BY-SA 4.0 | null | 2023-03-01T11:20:21.620 | 2023-03-01T11:20:21.620 | null | null | 335396 | null |
607021 | 2 | null | 606984 | 4 | null | The joint distribution of $(U_1,U_2,V_2,V_2)$ factorises as
$$f(u_1,v_1)f(u_2,v_2)$$ by assumption.
(i) Since
$$(U_1,U_2)|(V_1,V_2) \sim p(u_1,u_2|v_1,v_2)\propto f(u_1,v_1)f(u_2,v_2)$$
the factorisation shows that $U_1$ and $U_2$ are independent given $(V_1,V_2)$, hence
$$\mathbb E[(1 - U_1^2)(1 - U_2^2)|V_1, V_2] = \mathbb E[(1 - U_1^2)|V_1, V_2]\mathbb E[(1 - U_2^2)|V_1, V_2]$$
(ii) Since $(i=1,2)$
$$U_i|(V_1,V_2) \sim q(u_i|v_1,v_2)\propto f(u_i,v_i)\int f(u_{3-i},v_{3-i})\,\text du_{3-i}\propto f(u_i,v_i)$$
$U_i$ is independent of $V_{3-i}$ given $V_i$, hence
$$\mathbb E[(1-U_i^2)|V_1,V_2]=\mathbb E[(1-U_i^2)|V_i]$$
| null | CC BY-SA 4.0 | null | 2023-03-01T15:01:28.117 | 2023-03-01T15:28:03.303 | 2023-03-01T15:28:03.303 | 7224 | 7224 | null |
607022 | 2 | null | 607019 | 0 | null | Your regression models the mean conditional on having feature values, and if you don’t have values for those features, the conditional mean calculation breaks down. Thus, you need some value for that third covariate, and the predictions will not be the same for all such values.
However, you’re allowed to input whatever you want into the equation, so you can pick a value or even several values. For instance, you might want to make a prediction based on your two feature values and the mean value of that third feature (maybe the median value, maybe both). You might want to give a range of values, such as saying the range when you set that third feature to its $10$th and $90$th percentiles. You might consider extreme scenarios, such as what happens if that third feature takes its lowest or highest values (maybe go even a bit more extreme).
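As a sketch of this idea (Python with synthetic data, since no data accompany the question), one can plug the training mean and the 10th/90th percentiles of the missing feature into the fitted model:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# Synthetic training data: the third column plays the role of the missing feature.
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(0, 0.1, 200)
model = LinearRegression().fit(X, y)

x_known = [0.5, -0.2]  # the two feature values we actually have
# Predictions at the mean and at the 10th/90th percentiles of the unknown feature.
for x3 in (X[:, 2].mean(), *np.percentile(X[:, 2], [10, 90])):
    print(f"x3={x3:+.2f}: prediction {model.predict([x_known + [x3]])[0]:.2f}")
```

The spread between the percentile-based predictions gives a rough sense of how sensitive the answer is to the unknown covariate.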
| null | CC BY-SA 4.0 | null | 2023-03-01T15:06:42.957 | 2023-03-01T15:06:42.957 | null | null | 247274 | null |
607023 | 1 | null | null | 0 | 21 | [](https://i.stack.imgur.com/Sfqsb.png)
This is a multi-class classification problem. The ROC curve does not completely reach the upper-left corner of the plot, so why is the AUC equal to 1? The testing set is classified perfectly, with an accuracy equal to 1. How can this situation be explained?
Thank you.
| The correlation between ROC and AUC curve | CC BY-SA 4.0 | null | 2023-03-01T15:07:02.637 | 2023-03-01T15:07:02.637 | null | null | 381137 | [
"neural-networks",
"model-evaluation",
"image-processing",
"auc"
] |
607024 | 1 | null | null | 0 | 7 | I'm currently doing a project that studies the resource curse and whether greater institutional quality (INSTQ) can help alleviate the curse. The focus is on Sub-Saharan Africa, and I have a sample of 18 resource-rich countries from this region from 2002-2020 (only Angola is shown below). I'm not entirely sure which sort of regression to run (I'm using Stata), and I also want to include an interaction term between NR (natural resource abundance) and INSTQ (institutional quality), as the coefficient on this term will help me determine whether greater INSTQ can help overcome the RC. Any recommendations?
[](https://i.stack.imgur.com/FzpJ9.png)
| Model to assess resource curse and whether greater institutional quality (INSTQ) can help alleviate the curse | CC BY-SA 4.0 | null | 2023-03-01T15:07:07.593 | 2023-03-01T15:14:24.137 | 2023-03-01T15:14:24.137 | 362671 | 381138 | [
"regression",
"econometrics",
"interaction",
"stata"
] |
607026 | 1 | 608197 | null | 2 | 46 | I have data where 400 participants rated a set of 8 given scenarios on the scales "valence" and "emotional arousal". The scenarios were designed in a 2 (setting: Europe vs. Asia) x 2 (complexity: low vs. high) x 2 (density: low vs. high) design (experimental manipulation).
I would like to model the fixed effects of complexity and density on valence. Would it be wrong to include random intercepts for scenario if the only variance left in scenario is known to be due to the different levels in setting?
```
lmer(valence ~ complexity * density + (1|participant) + (1|scenario), data = df)
```
Should I instead include random intercepts for different levels of setting?
```
lmer(valence ~ complexity * density + (1|participant) + (1|setting), data = df)
```
This seems wrong to me since there are only two levels of setting (which are also part of an experimental manipulation - I somehow remember that such variables should not be chosen as levels in mixed models since they are not random). It also raises the question of why I do not include setting as a further fixed effect, which would make random intercepts for scenario obsolete, but I do not want to include fixed effects for setting since this is not the focus of this analysis.
In my data, the AIC for the second version (1|setting) indicates worse fit in comparison to the inclusion of (1|scenario). However, the fixed effects of complexity and density completely disappear in this model (same estimate but high p-values). When I use (1|setting) or even no random intercept on scenario level at all, all p values are < .0001.
How do I specify random intercepts correctly in this case?
| Linear mixed model: Include random effect for trial if most trial characteristics already included as fixed effect? | CC BY-SA 4.0 | null | 2023-03-01T15:29:51.790 | 2023-03-06T13:59:56.810 | 2023-03-01T15:49:09.577 | 335073 | 335073 | [
"mixed-model",
"lme4-nlme",
"multilevel-analysis"
] |
607027 | 2 | null | 606937 | 8 | null | In the spirit of my original post at [What is the intuition behind conditional Gaussian distributions?](https://stats.stackexchange.com/a/71303/919), we should look at the situation and consider it geometrically.
Begin with the bottom illustration, which slices the bivariate density in the vertical direction (that is, through fixed values of $x$):
[](https://i.stack.imgur.com/bCgeFm.png)
Here is a plot of the slices through the positive $x$ values (those closest to you in the previous image) on common axes:
[](https://i.stack.imgur.com/q9DWH.png)
In this example, because the correlation is negative, as $x$ increases (1) the center of the slice shifts leftwards and (2) the height decreases. The height is the original bivariate density. I use the original color scheme to emphasize this.
You want to visualize the conditional densities: these represent the distribution of the response, $y,$ for any given value of $x.$ They are obtained by scaling the slices in the vertical (density) direction to give them a unit area, which is a basic requirement of any probability density. Here is an example of scaling the slice through $x=1.5$ appearing at the very front edge of the original graphic:
[](https://i.stack.imgur.com/2G37E.png)
The arrows show how each original point is moved to a new point. The colors now represent the (new) conditional density while the heights represent both the original bivariate density and the conditional density (ignoring the fact that the two densities use different units of measurement!).
The conditional variance describes the spread of the conditional density. Evaluate the spread by examining the rescaled densities. Here is the plot of the original slices, all rescaled into conditional densities:
[](https://i.stack.imgur.com/EnBRs.png)
(The color continues to depict the original bivariate density.)
It is now evident that (in this example) all the conditional distributions are translates of a common distribution. Clearly they all have the same spread, whence their variances are equal.
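This equal-spread claim is easy to verify numerically. A small simulation (a sketch with an assumed correlation of $\rho=-0.6$, not tied to the particular plots above) shows the variance of $y$ within narrow slices of $x$ matching the common conditional variance $\sigma_y^2(1-\rho^2)$ everywhere:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = -0.6
x, y = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=1_000_000).T

# Empirical variance of y within narrow vertical slices of x ...
for lo in (-1.5, 0.0, 1.5):
    sel = (x > lo) & (x < lo + 0.1)
    print(f"slice x in ({lo}, {lo + 0.1}): var(y) = {y[sel].var():.3f}")
# ... all close to the common theoretical conditional variance 1 - rho**2.
print(1 - rho**2)
```

Each slice, once rescaled to unit area, has the same spread regardless of where it was cut.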
| null | CC BY-SA 4.0 | null | 2023-03-01T15:30:24.877 | 2023-03-01T20:35:35.357 | 2023-03-01T20:35:35.357 | 919 | 919 | null |
607029 | 1 | null | null | 0 | 24 | I have two Bayes models, and let's say that model A is more complex than model B. I know a priori that model A is a good model for the data. I want to make the claim that model B is applicable as well. One idea that comes to mind is the use of WAIC. Is there any acceptable score classification that says that if the WAIC difference is below some level, the models are considered close? For example, we can say that models with the Bayes factor below three are similar. Does it make sense to compare distance (like L2) between distributions of observables (marginalized over parameters of each model space)?
| Bayes model comparison | CC BY-SA 4.0 | null | 2023-03-01T15:32:39.403 | 2023-03-01T15:32:39.403 | null | null | 206143 | [
"bayesian",
"model-selection"
] |
607030 | 2 | null | 596151 | 0 | null | In the lavaan forum I was pointed towards a further solution: cluster-robust standard errors. This allows fitting a flat model (without between-participant effects) while still accounting for the hierarchical nature of the data. However, as far as I have understood, they are only suitable for the case where all variables in the model are on level 1. If that is the case, cluster-robust standard errors can be utilized in lavaan by specifying the cluster argument in the cfa()-function; e.g.: `cfa(model, data = df, cluster = "participant_id")`. The model syntax does not differ from that of a regular flat cfa model.
| null | CC BY-SA 4.0 | null | 2023-03-01T15:34:39.600 | 2023-03-01T15:34:39.600 | null | null | 335073 | null |