Column schema (one field per line, repeated in this order for every record below):
Id: string (1-6 chars)
PostTypeId: string (7 distinct values)
AcceptedAnswerId: string (1-6 chars)
ParentId: string (1-6 chars)
Score: string (1-4 chars)
ViewCount: string (1-7 chars)
Body: string (0-38.7k chars)
Title: string (15-150 chars)
ContentLicense: string (3 distinct values)
FavoriteCount: string (3 distinct values)
CreationDate: string (23 chars)
LastActivityDate: string (23 chars)
LastEditDate: string (23 chars)
LastEditorUserId: string (1-6 chars)
OwnerUserId: string (1-6 chars)
Tags: list
608161
1
null
null
0
26
I want to compare the medians of an interrupted time series, for example, years 2019-2020 with 2021-2022. The data are not normally distributed and the distributions in the two periods are not similar. Would quantile regression be appropriate here? Should I expect to see similar results from Mood's median test and quantile regression? I am looking at the rate of antibiotics (abx) prescribed (standardized by patient-days) in a group of hospitals. The research question is: was the rate of abx prescribed different between the two time periods (before vs after the interruption)? I am doing a full ITS - this is just a preliminary/descriptive part. @user2974951 Thanks-
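A minimal sketch of what the two approaches could look like in R; the data frame `d`, its columns, and the rates below are hypothetical stand-ins, not the actual hospital data:

```
library(quantreg)

set.seed(1)
# hypothetical stand-in for prescribing rates in the two periods
d <- data.frame(
  rate   = c(rexp(50, 1/10), rexp(50, 1/12)),
  period = factor(rep(c("2019-2020", "2021-2022"), each = 50))
)

# median (tau = 0.5) regression: the period coefficient is the difference in medians
fit <- rq(rate ~ period, tau = 0.5, data = d)
summary(fit, se = "boot")

# Mood's median test on the same comparison
above <- d$rate > median(d$rate)
chisq.test(table(above, d$period))
```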
quantile regression vs mannU (median difference)
CC BY-SA 4.0
null
2023-03-02T13:37:51.730
2023-03-02T14:00:24.647
2023-03-02T14:00:24.647
365631
365631
[ "time-series", "median", "quantile-regression" ]
608162
2
null
463879
1
null
Yes: see [Slice sampling covariance hyperparameters of latent Gaussian models by Murray and Adams (2010)](https://proceedings.neurips.cc/paper/2010/file/4d5b995358e7798bc7e9d9db83c612a5-Paper.pdf). They use ESS as a step in their MCMC algorithm. [Clarification after comment:] Well, yes and no. The ESS is used as a step to sample the latent $f$ vector given the hyperparameters $\theta$, while their paper actually develops a new (non-elliptical!) slice sampler for updating covariance hyperparameters. So the ESS is just an auxiliary step. That being said, I have used ESS with success for GP regression problems that have unbounded hyperparameters; simply transform them to $N(0,I)$ using some sequence of bijectors. Then ESS can be used in principle. But this does not guarantee success, since "any distribution [on the reals] can be factored [that way]" [(Murray+ 2010)](https://arxiv.org/abs/1001.0175).
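For reference, a minimal sketch of a single elliptical slice sampling update in R. This is the generic ESS step for a $N(0,\Sigma)$ prior and an arbitrary log-likelihood, not the Murray and Adams hyperparameter sampler; the function and variable names are mine:

```
ess_step <- function(f, Sigma, loglik) {
  nu    <- drop(t(chol(Sigma)) %*% rnorm(length(f)))  # draw from the N(0, Sigma) prior
  log_y <- loglik(f) + log(runif(1))                  # slice level
  theta <- runif(1, 0, 2 * pi)
  lo <- theta - 2 * pi; hi <- theta
  repeat {
    f_prop <- f * cos(theta) + nu * sin(theta)        # point on the ellipse
    if (loglik(f_prop) > log_y) return(f_prop)        # accept
    if (theta < 0) lo <- theta else hi <- theta       # shrink the bracket towards 0
    theta <- runif(1, lo, hi)
  }
}
```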
null
CC BY-SA 4.0
null
2023-03-02T14:09:18.830
2023-03-02T16:28:59.097
2023-03-02T16:28:59.097
136341
136341
null
608164
1
608302
null
2
39
I am interested in understanding the asymptotic distribution of the likelihood ratio (LR) test statistic for model specification. I am focusing on the case in which the null hypothesis is of the form (i.e. to assess model specification): $$ \mathcal{H}_0:\theta_j=0, \qquad \text{for any} \; \theta_j \in \theta \\ \mathcal{H}_1:\theta_j\neq 0, \qquad \text{for any} \; \theta_j \in \theta $$ This is a test where $\theta=\{\theta_1,\theta_2,\dots,\theta_n\} \in \Theta$ is the set of all parameters for a model $f(\theta,x)$. In particular, I am trying to understand two things: - Whether the asymptotic approximation by $\chi^2_l$ remains valid (where $l$ are the degrees of freedom) - If so, how to correctly define the critical value at $1-\alpha$ confidence (i.e. $\chi^2_{l,1-\alpha}$) To support my question, assume that we have a parametric model of the following form: $$ f(\theta,x) = a_0 + a_1x^{b_1} + a_2x^{b_2} $$ Case 1: $\mathcal{H}_0:a_0=0$ In this case, I obtain the estimates for the unrestricted case (i.e. $\hat{\theta}_{UR}$) and the estimates for the restricted case (in which $\mathcal{H}_0$ holds, i.e. $\hat{\theta}_{R}$). In this example (and if I understand correctly), $\hat{\theta}_{UR} \in \mathbb{R}^{5}$ and $\hat{\theta}_{R} \in \mathbb{R}^{4}$ (Note: all parameters in $\theta$ can take real values, including 0). Given the null $\mathcal{H}_0$, we lose 1 degree of freedom. Thus when calculating the LR statistic, we need to compare it with the critical value from $\chi^2_{k-l}=\chi^2_{5-4}=\chi^2_{1}$ at $1-\alpha$ confidence. This case works as expected. Case 2: $\mathcal{H}_0:a_1=0,a_2=0$ Similarly to Case 1, I obtain the estimates $\hat{\theta}_{UR},\hat{\theta}_{R}$. Here lies my question: in this case and under the null, the restricted (R) model is the following: $$ f(\theta,x) = a_0 + 0 + 0 $$ Even though the null contains a restriction on 2 parameters (i.e. $a_1=0,a_2=0$), the role of the parameters $b_1,b_2$ vanishes, as they do not have any impact on the values of $f(\theta,x)$ because $a_1=a_2=0$. Thus: - Is the dimensionality of $\theta_R$ equal to $l = 3$ (because there are 3 remaining parameters with no restrictions, i.e. $a_0,b_1,b_2$) or $l=1$ in this case (because we have only 1 parameter left, $a_0$, to try to "fit the data")? - Can I still directly apply the LR test in this case? It has come to my attention, after going through this book, that the null in this case may be at the boundary, so the classic $\chi^2$ asymptotic approximation no longer applies directly here. Also, I've seen similar questions where replies suggest the use of a chi-square mixture to compute critical values, but I am not sure how to compute that critical value. My main problem is to come up with the correct critical value for my simulation experiment. Currently, I am using $\chi^2_{l=2}$ at 95% confidence, which leads to over-rejection when $\mathcal{H}_0$ is true in the data generating process (DGP). My intuition is that there must be some adjustment that I am not considering in this case, either to make the critical value larger or to use another distribution as a proxy for the asymptotics, so that I can do proper statistical inference. Thank you!
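For what it is worth, a hedged R sketch of how a chi-bar-square critical value can be obtained by simulation. The 1/4, 1/2, 1/4 weights below are an assumption (the Self-Liang weights that apply when two parameters lie on a boundary and their estimators are asymptotically independent); they do not address the loss of identifiability of $b_1,b_2$ under the null, for which simulating the LR statistic under the null DGP may be the safer route:

```
set.seed(1)
n <- 1e6
comp  <- sample(0:2, n, replace = TRUE, prob = c(0.25, 0.5, 0.25))
draws <- numeric(n)
draws[comp == 1] <- rchisq(sum(comp == 1), df = 1)
draws[comp == 2] <- rchisq(sum(comp == 2), df = 2)
quantile(draws, 0.95)   # mixture critical value, smaller than...
qchisq(0.95, df = 2)    # ...the plain chi-square(2) value of about 5.99
```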
Likelihood ratio test for model specification with boundary Null
CC BY-SA 4.0
null
2023-03-02T14:32:10.560
2023-03-03T17:18:14.250
2023-03-02T15:56:00.553
139901
139901
[ "inference", "likelihood", "asymptotics", "likelihood-ratio", "quasi-likelihood" ]
608165
2
null
608127
3
null
Fisher's approach was ingenious and intuitively clear: separate the populations $\Pi_i$ based on the linear function $\mathbf a^\top \mathbf x$ that maximizes the ratio of between-groups sum of squares to within-groups sum of squares: Consider $\mathbf y=\mathbf X\mathbf a, ~\mathbf X$ being the data matrix. The total sum of squares $\mathbf y^\top \mathbf H\mathbf y=\mathbf a^\top \mathbf X^\top\mathbf H\mathbf X\mathbf a$ can be decomposed into the within-groups sum of squares $\sum\mathbf y_i^\top\mathbf H_i\mathbf y_i=\sum \mathbf a^\top \mathbf X_i^\top\mathbf H_i\mathbf X_i\mathbf a:= \mathbf a^\top \mathbf W\mathbf a, $ (here $\mathbf H_i$ being the centering matrices) and the between-groups sum of squares $\sum_i n_i(\bar y_i-\bar y)^2 :=\mathbf a^\top\mathbf B\mathbf a. $ We seek to find $\mathbf a$ that maximizes the ratio (as remarked by [statmerkur](https://stats.stackexchange.com/questions/608127/is-this-description-of-linear-discriminant-analysis-lda-correct#comment1128501_608127)) $$ \frac{\mathbf a^\top\mathbf B\mathbf a}{\mathbf a^\top\mathbf W\mathbf a}$$ which is nothing but the eigenvector of $\mathbf W^{-1}\mathbf B$ associated with its largest eigenvalue. --- Coming to the stated blog: while I won't say it is outright wrong, it is indeed problematic: $\bullet$ The author defined $\mathbf S_b$ in terms of $\boldsymbol\mu_i,$ the population means. But then used the sample within-groups matrix $\sum_i (n_i-1) \mathbf S_i=\sum_i\sum_{j=1}^{n_i}(\mathbf x_{ij} -\bar{\mathbf x}_i) (\mathbf x_{ij} -\bar{\mathbf x}_i)^\top.$ The former is used when the population parameters $(\boldsymbol\mu_i, \mathbf\Sigma) $ are known. Otherwise, in place of $\mathbf S_b, ~\sum_i(\mathbf{\bar x}_i-\mathbf{\bar x}) (\mathbf{\bar x}_i-\mathbf{\bar x}) ^\top$ has to be substituted. The author is plausibly confusing things with Fisher's sample LDs. $\bullet$ > Maximizing the distance between the means of two classes; Now they are talking about two classes instead of $g$ classes. For $g=2, $ the interpretation would be to find $\mathbf a$ such that maximum separation is achieved between $\bar y_1$ and $\bar y_2, $ measured in standard deviation units. --- ## References: $\rm [I]$ Applied Multivariate Statistical Analysis, Wolfgang Karl Härdle, Léopold Simar, Springer-Verlag, $2015, $ sec. $14.2, $ pp. $418-419.$ $\rm [II]$ Applied Multivariate Statistical Analysis, Richard A. Johnson, Dean A. Wichern, Pearson, $2013, $ sec. $11.6, $ pp. $622-623; ~590.$
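A minimal numerical sketch of the construction above in R, using the built-in iris data; this is only an illustration of $\mathbf W$, $\mathbf B$ and the leading eigenvector of $\mathbf W^{-1}\mathbf B$, not the blog's code:

```
X <- as.matrix(iris[, 1:4]); g <- iris$Species
xbar <- colMeans(X)
groups <- split(as.data.frame(X), g)

# within-groups SSCP matrix W and between-groups SSCP matrix B
W <- Reduce(`+`, lapply(groups, function(Xi)
  crossprod(scale(as.matrix(Xi), center = TRUE, scale = FALSE))))
B <- Reduce(`+`, lapply(groups, function(Xi) {
  d <- colMeans(as.matrix(Xi)) - xbar
  nrow(Xi) * tcrossprod(d)
}))

# first discriminant direction: leading eigenvector of W^{-1} B
a <- Re(eigen(solve(W) %*% B)$vectors[, 1])
a
```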
null
CC BY-SA 4.0
null
2023-03-02T14:39:00.257
2023-03-02T15:02:47.297
2023-03-02T15:02:47.297
362671
362671
null
608166
1
608187
null
0
46
I would appreciate it if you could let me know how to draw conclusions from the modified Diebold-Mariano test when the alternative hypothesis is "less". How about "greater"?

```
library(forecast)

forecast  <- ts(c( 96, 99,102, 96,105, 99, 99,103, 98,106))
observed  <- ts(c( 96,101,107,108, 93,103, 99,105,103, 98))
forecast2 <- ts(c(105, 94,107,101,111,115,104,111,111,116))

print(dm.test((forecast-observed), (forecast2-observed), alternative = "two.sided"))
print(dm.test((forecast-observed), (forecast2-observed), alternative = "less"))
print(dm.test((forecast-observed), (forecast2-observed), alternative = "greater"))
```
How to interpret modified Diebold and Mariano test?
CC BY-SA 4.0
null
2023-03-02T14:41:06.427
2023-03-02T17:22:20.020
null
null
94909
[ "r", "time-series", "forecasting", "diebold-mariano-test" ]
608167
1
null
null
0
38
According to most sources I see (e.g. [Wikipedia](https://en.wikipedia.org/wiki/Causal_Markov_condition)), the Markov assumption states that: > every node in a Bayesian network is conditionally independent of its nondescendants, given its parents Two questions: - Should the statement actually say "given only its parents"? For example, in the image below, Raining is conditionally independent of Sprinkler given Cloudy, but I believe they are not conditionally independent given Cloudy and Grass Wet. If I know both Cloudy and Grass Wet, then Sprinkler and Raining are not d-separated. Wikipedia suggests that the "Markov condition is equivalent to the global Markov condition, which states that d-separations in the graph also correspond to conditional independence relations." So my guess is that the Markov assumption also requires that we don't know the value of non-parent nodes. Is that correct? - Is it fair to say that A is conditionally independent of B "given all of A's parents or all of B's parents"? Or would it be "given all of A's parents and all of B's parents"? Since conditional independence is symmetric, it has to be one of them, right? And I would assume the former is correct. For example, Sprinkler is independent of Raining given nothing. We know all of Sprinkler's parents (none), so it doesn't matter if we know all of Raining's parents. Correct? [](https://i.stack.imgur.com/UZjCj.png)
Markov Assumption: Given parents or given *only* parents
CC BY-SA 4.0
null
2023-03-02T14:41:26.257
2023-03-05T00:58:28.883
2023-03-05T00:58:28.883
11887
145816
[ "bayesian-network" ]
608168
1
608172
null
5
138
The definition of the [hypoexponential distribution](https://en.wikipedia.org/wiki/Hypoexponential_distribution) (HD) requires that: $$f(x)=\sum_i^d \left(\prod_{j=1,i\neq j}^{d}\frac{\lambda_j}{\lambda_j-\lambda_i}\right)\lambda_i e^{(-\lambda_ix)},\quad x>0 $$ is well-defined, which imposes that all $\lambda_i \neq \lambda_j$. As such, it seems to impose a significant limitation on the underlying exponentially distributed $X_i$'s: if, say, $X_i$ and $X_j$ have $\lambda_i=\lambda_j$, the distribution is no longer HD, nor can it be Erlang, which requires that all $\lambda_i$ be the same! I'm sure I'm missing something! A second question: I tried to plot the HD using the syntax below for $\lambda = .1:20$, but I am not sure if it's the right syntax. Maybe you have experience with this R package or can suggest another one for working with the HD (generalized Erlang)?

```
library(sdprisk)

r <- seq(0.1, 20, 1)   # rate vector of lambda_i (assumed here; `r` was not defined in the original snippet)
x <- seq(0.1, 20, 0.05)
y <- dhypoexp(x, rate = r, log = FALSE)
plot(x, y)
```

[](https://i.stack.imgur.com/vNIDH.png)
Hypoexponential distribution is stuck because of $\lambda_i \neq \lambda_j$?
CC BY-SA 4.0
null
2023-03-02T14:48:07.760
2023-03-04T15:25:56.050
2023-03-03T08:06:40.443
7224
55158
[ "r", "distributions", "exponential-distribution" ]
608169
1
null
null
0
52
I transformed a 2-mode matrix into a 1-mode co-occurrence matrix and removed all the values on the diagonal (because I thought they were meaningless). I wanted to use the Jaccard method to calculate the similarity matrix of this co-occurrence matrix, but found that the Jaccard equation is as follows: Jaccard coefficient = C_ij / (C_i + C_j - C_ij) Does this mean that I must fill in all the diagonal values? If I can't, do I have to use another method? I am sorry that I don't have much knowledge about this. By the way, if the Jaccard method is not the best choice, what should I do to deal with a co-occurrence matrix without diagonal values? Pearson correlation? I'm calculating the usage of common hashtags in posts, and this is part of my table: [](https://i.stack.imgur.com/A5HOs.png) The values in the cells are large. Could you help me with that? Thank you a lot!
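A minimal R sketch of computing the pairwise Jaccard index directly from the two-mode (post x hashtag) binary matrix, so the diagonal counts $C_i$ are never lost; the toy matrix below is made up, not the questioner's data:

```
set.seed(1)
M <- matrix(rbinom(20 * 5, 1, 0.3), nrow = 20)   # 20 posts, 5 hashtags (toy data)

C  <- crossprod(M)                        # 1-mode co-occurrence matrix; diag(C) = C_i
Ci <- diag(C)
jaccard <- C / (outer(Ci, Ci, "+") - C)   # C_ij / (C_i + C_j - C_ij)
round(jaccard, 2)
```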
How to calculate the Jaccard index of co-occurence matrix without diagonal values?
CC BY-SA 4.0
null
2023-03-02T15:02:36.913
2023-03-02T15:02:36.913
null
null
382194
[ "matrix", "similarities", "jaccard-similarity", "cooccurrence" ]
608170
2
null
608148
2
null
(cross-posted from [here](https://stats.stackexchange.com/a/500797/1352)) Section 3.2 in [the following paper](https://doi.org/10.1007/s10618-005-0039-x) offers a possibility for determining the length of the seasonal cycle: ``` Wang, X, Smith, KA, Hyndman, RJ (2006) "Characteristic-based clustering for time series data", _Data Mining and Knowledge Discovery_, *13*(3), 335-364. ``` However, this is only one aspect in a paper that is more comprehensive in its aims, so the specific issue of determining a seasonal length is not treated at great depth. Also note that this was never included in the `forecast::auto.arima()` function (whose author is Hyndman), although this function does use other methods from that paper (for instance, `auto.arima()` decides whether to apply seasonal differencing for known seasonal cycle length based on an estimate of seasonal strength as also given in Wang et al.). I do not know why this was never included. It may have been because it was unstable, varying and hard to automate. After all, you need to identify peaks and troughs in the ACF, and what constitutes a "peak" or a "trough" in a noisy ACF series would need to be operationalized. Alternatively, perhaps there simply never was any demand for it, since users presumably know their seasonal cycle length. So if you want to use the cycle length determination per Wang et al., you would need to code it yourself.
null
CC BY-SA 4.0
null
2023-03-02T15:10:32.717
2023-03-02T15:10:32.717
null
null
1352
null
608171
2
null
608168
3
null
The distribution in question is a mixture of Exponential distributions $\mathcal E(\lambda_i)$ with specified weights, i.e. its density is $$f(x) = \sum_{i=1}^d p_i\,\lambda_i\exp\{-\lambda_i x\}\quad x\ge 0\tag{1}$$ with $$p_i=\prod_{j=1\\ j\ne i}^d\dfrac{\lambda_j}{\lambda_j-\lambda_i}\quad i=1,\ldots,d$$ There is therefore a single realisation produced by this density (rather than a sequence of $x_i$'s). Note that, while $$\sum_{i=1}^d p_i= \sum_{i=1}^d \prod_{j=1\\ j\ne i}^d\dfrac{\lambda_j}{\lambda_j-\lambda_i}=1$$ (by the Cauchy determinant formula), the weights can be negative, which makes (1) a signed mixture. Now, if $\lambda_1\approx\lambda_2$ (wlog), starting with the case $d=2$ leads to the Gamma $\mathcal G(2,\lambda_1)$ distribution: $$\lim_{\epsilon\to 0}\lambda_1(\lambda_1+\epsilon)\dfrac{e^{-\lambda_1x}-e^{-[\lambda_1+\epsilon]x}}{\epsilon}=\lambda_1^2xe^{-\lambda_1x}$$ $-$by L'Hospital's rule$-$as expected since this is the distribution of the sum of two iid Exponential $\mathcal E(\lambda_1)$ random variates. In the general case $d>2$, the undefined term in the density $$\lim_{\epsilon\to 0}\dfrac{e^{-\lambda_1x}}{\epsilon}\underbrace{\prod_{j=3}^d\dfrac{1}{\lambda_j-\lambda_1}}_\rho -\dfrac{e^{-[\lambda_1+\epsilon]x}}{\epsilon}\prod_{j=3}^d\dfrac{1}{\lambda_j-\lambda_1-\epsilon}$$ is equal to $$\lim_{\epsilon\to 0}\frac{\rho e^{-\lambda_1x}}{\epsilon\prod_{j=3}^d[\lambda_j-\lambda_1-\epsilon]} \left\{\prod_{j=3}^d[\lambda_j-\lambda_1-\epsilon]-e^{-\epsilon x}\rho^{-1}\right\} =\rho e^{-\lambda_1x}\left\{ x - \sum_{j=3}^d (\lambda_j-\lambda_1)^{-1}\right\}$$ by L'Hospital's rule.
null
CC BY-SA 4.0
null
2023-03-02T15:13:28.460
2023-03-04T15:25:56.050
2023-03-04T15:25:56.050
7224
7224
null
608172
2
null
608168
3
null
> The definition of the hypoexponential distribution (HD) requires that: $$f(x)=\sum_i^d \left(\prod_{j=1,i\neq j}^{d}\frac{\lambda_j}{\lambda_j-\lambda_i}\right)\lambda_i e^{(-\lambda_ix)},\quad x>0 $$ that expression is only a special case that holds when $\lambda_i \neq \lambda_j$. It is not a requirement for the hypoexponential distribution in general. The more general hypoexponential distribution can be expressed as a [phase-type distribution](https://en.wikipedia.org/wiki/Phase-type_distribution) $$f(x) = -\boldsymbol{\alpha}e^{x \boldsymbol{\Theta}} \boldsymbol{\Theta} \mathbf{1} $$ With $$\boldsymbol{\alpha} = (1,0,0,\dots,0,0)$$ and $$\boldsymbol{\Theta} = \begin{bmatrix} -\lambda_1 & \lambda_1 & 0 & \dots & 0 & 0 \\ 0 & -\lambda_2 & \lambda_2 & \dots & 0 & 0 \\ \vdots & \ddots & \ddots & \ddots & \ddots & \vdots\\ 0 & 0 & \ddots & -\lambda_{d-2} & \lambda_{d-2} & 0\\ 0 & 0 & \dots & 0 & -\lambda_{d-1} & \lambda_{d-1}\\ 0 & 0 & \dots & 0 & 0 & -\lambda_{d}\\ \end{bmatrix}$$ That involves the [exponentiation of a matrix](https://en.wikipedia.org/wiki/Matrix_exponential), which can be approximated with a sum using a Taylor series. You can see the matrix as modelling a sort of Markov chain process with non-discrete time steps. A sum of exponentially distributed variables is like waiting for several consecutive transitions whose waiting times are each exponentially distributed. Those transitions relate to the Markov chain. The closed-form expression can give problems not only when $\lambda_i = \lambda_j$ but also, numerically, when $\lambda_i \neq \lambda_j$, as demonstrated in this question: [Why my cdf of the convolution of n exponential distribution is not in the range(0,1)?](https://stats.stackexchange.com/questions/566588)
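A minimal sketch of evaluating this phase-type form numerically in R, assuming the `expm` package; the function name and the example rates are mine. Because it never divides by $\lambda_j-\lambda_i$, it also works when some rates coincide:

```
library(expm)

dhypoexp_pt <- function(x, rates) {
  d <- length(rates)
  Theta <- diag(-rates, nrow = d)
  if (d > 1) Theta[cbind(1:(d - 1), 2:d)] <- rates[-d]   # sub-generator matrix
  alpha <- c(1, rep(0, d - 1))                           # start in the first phase
  sapply(x, function(t) -alpha %*% expm(t * Theta) %*% Theta %*% rep(1, d))
}

dhypoexp_pt(c(0.5, 1, 2), rates = c(1, 1, 2))   # repeated rates are fine here
```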
null
CC BY-SA 4.0
null
2023-03-02T15:51:38.050
2023-03-03T09:17:43.943
2023-03-03T09:17:43.943
164061
164061
null
608173
1
null
null
1
22
I am estimating a time series 2SLS, ECM model, for electricity consumption. The system has a demand equation: $$\Delta \ln(Q_t)=\alpha_0+\alpha_1\Delta \ln(P_t)+\alpha_2\, Temp_t+\alpha_3\ln(Q_{t-1})+\alpha_4\ln(P_{t-1})+u_t$$ The price is endogenous in the demand equation, and therefore, I also estimate a price equation using instruments: $$\Delta \ln(P_t)=\beta_0+\beta_1\Delta \ln(Q_t)+\beta_2\, PrecipEnergy+\beta_3\Delta \ln(Wind_t)+\beta_4\Delta \ln(Coal_t)+\beta_5\ln(Wind_{t-1})+\beta_6\ln(Coal_{t-1})+\beta_7\ln(Q_{t-1})+\beta_8\ln(P_{t-1})+v_t$$ A paper I have been reading makes an interesting point about this setup. The price function is essentially an inverted supply function. That begs the question: can I get from the price equation to the supply equation by a simple rewrite of equation 2? Simply put, just by switching sides for price and quantity? Anyway, I'm estimating the long run relationship (elasticity) between demand and price, which for this setup is given by the fraction: $$E=-(\alpha_4 / \alpha_3)$$ In the price equation, the elasticity of price with respect to demand is: $$E=-(\beta_7 / \beta_8)$$ It's not the purpose of the model, but I can find that the estimated elasticity of price with respect to demand (quantity) is 0.109. I.e., price increases 0.1% for a 1% increase in quantity. Is it possible (and valid) to turn this around to express it as the elasticity of supply wrt. price? Essentially doing something like: $$E_s^p=-\left(\frac{\beta_7}{\beta_8}\right)^{-1}$$
"Inverting back" the inverted supply equation in an ECM. Possible?
CC BY-SA 4.0
null
2023-03-02T15:53:39.783
2023-03-02T17:09:38.257
2023-03-02T17:09:38.257
366596
366596
[ "time-series", "econometrics", "elasticity", "ecm", "economics" ]
608174
1
null
null
4
388
Suppose that we have observations $x_1,\dots,x_n$ for some process. We want to fit an AR(k) model to these observations. I do not understand why the naïve OLS approach to estimating our AR(k) coefficients would be inappropriate when the process has a unit root. From some simulations, it seems to recover the coefficients no matter the location of the roots of the lag polynomial. If estimating the AR coefficients is not the problem, then what is? A unit root means that uncertainty around your forecast grows with time, but that's not a problem from a mathematical point of view; it is just something to keep in mind when interpreting the forecasted values. Basically I'm asking which part of the mathematical analysis breaks down in the presence of a unit root?
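One thing that does break down is the usual asymptotic distribution of the test statistics. A hedged R sketch (my own toy simulation) showing that the OLS t-statistic for $\rho-1$ in an AR(1) with a true unit root is not centred at zero and not close to N(0,1), which is exactly why Dickey-Fuller critical values exist:

```
set.seed(1)
tstat <- replicate(5000, {
  y   <- cumsum(rnorm(200))                 # random walk: true rho = 1
  fit <- lm(diff(y) ~ 0 + head(y, -1))      # regress Delta y_t on y_{t-1}
  summary(fit)$coefficients[1, 3]           # t-statistic for (rho - 1)
})
quantile(tstat, c(0.025, 0.5, 0.975))       # clearly shifted below 0, unlike N(0,1)
```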
Why, exactly, is a unit root a problem?
CC BY-SA 4.0
null
2023-03-02T15:55:41.823
2023-03-04T18:12:23.113
2023-03-02T16:00:16.023
11887
376142
[ "time-series", "stationarity", "unit-root" ]
608175
2
null
608115
3
null
For exponential distributions, working with the survival function $S(t) = e^{-\alpha t}$ is usually slightly more convenient than working with the distribution function $F(t) = 1 - e^{-\alpha t}$. Apart from that, my answer below is almost the same as User1865345's. For any $t > 0$, by the independence of $T_1$ and $T_2$: \begin{align} & P[|T_1 - T_2| > t] = P[T_1 > T_2 + t] + P[T_2 > T_1 + t] \\ =& \int_0^{\infty}P[T_1 > s + t]f_{T_2}(s)ds + \int_0^\infty P[T_2 > s + t]f_{T_1}(s)ds \\ =& 2\int_0^{\infty}e^{-\alpha(s + t)}\alpha e^{-\alpha s}ds \\ =& e^{-\alpha t} \int_0^\infty 2\alpha e^{-2\alpha s}ds \\ =& e^{-\alpha t}, \end{align} which is the survival function of an $\exp(\alpha)$ r.v., hence the result.
null
CC BY-SA 4.0
null
2023-03-02T15:56:18.413
2023-03-02T15:56:18.413
null
null
20519
null
608177
1
608178
null
4
329
I am working on a project with a call center. Long story short, I am analyzing the data on incoming calls to this call center in order to eventually use a queueing model. A queueing model is one to which you provide certain inputs, such as the average service time of calls (the average of how long calls last), and which at the end of the day tells us how many agents need to be scheduled to answer incoming calls. My question is: should I feed the mean value or the median value of my service time data into this queueing model? (The data follow a lognormal distribution.) Note: 1- I cleaned my data of outliers, so I suppose a mean and a median value are both usable. 2- The answer is probably going to depend on what my end goal is, which is why I tried to explain my goal in the above paragraph. Could someone shed some light on this? Thank you!
Should I use the mean or median of my data for queueing models?
CC BY-SA 4.0
null
2023-03-02T16:05:10.560
2023-03-03T17:34:29.577
null
null
382239
[ "mean", "median", "lognormal-distribution", "queueing" ]
608178
2
null
608177
11
null
The standard results in queuing theory predict mean wait times, so you will find means easier to work with. But let me ask something. You say "I cleaned my data from outliers". Do you mean that you dropped the records of really long waits? That sounds like a mistake: those are the most important records for this application, aren't they? You wouldn't invest based on a model that ignores stock market crashes.
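To make concrete why the mean is what the standard formulas consume, here is a hedged R sketch of the Erlang C (M/M/c) formulas, which take the arrival rate and the mean service time as inputs. The numbers in the example call are made up, and M/M/c assumes exponential service, so for lognormal service times treat it as an approximation:

```
erlang_c <- function(lambda, mean_service, c) {
  a <- lambda * mean_service                       # offered load in Erlangs
  stopifnot(a < c)                                 # otherwise the queue is unstable
  k <- 0:(c - 1)
  top    <- (a^c / factorial(c)) * c / (c - a)
  p_wait <- top / (sum(a^k / factorial(k)) + top)  # probability a caller must wait
  list(p_wait    = p_wait,
       mean_wait = p_wait * mean_service / (c - a))
}

erlang_c(lambda = 100 / 60, mean_service = 4, c = 9)  # 100 calls/hour, 4-minute calls, 9 agents
```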
null
CC BY-SA 4.0
null
2023-03-02T16:23:09.367
2023-03-03T17:34:29.577
2023-03-03T17:34:29.577
53690
188928
null
608179
1
null
null
0
15
I came across the following problem and I'm unsure whether a solution exists or whether it has been proven that what I'm trying to achieve is impossible. I tried to google for papers on the topic but I haven't found anything related. Any help is much appreciated :) Summary of the task: I'm trying to train a NN on MNIST, but instead of directly predicting the labels Y from the images X, I want to use a generative network. My network (which uses only feedforward layers), therefore, expresses the function X = f(Y, w), where w are the network's parameters. To train, I'm using MSE loss on the generated images. Once the network is trained, I want to use it for classification, by using activation maximization: - I pick an image A from the test set, fix w, and choose an initial input vector (I tried several, but let's stick with [0,0,0,...,0]) representing the label. - I perform gradient descent on the input label and minimise the loss between the output of the network and A - Once the optimisation has converged, the input represents the label predicted for A. In other words, I want to see which label minimises the error between the network prediction and the chosen image. Problem: With this method I get a relatively low accuracy, even on MNIST: ~86%. Am I missing something? Or can the accuracy of the normal classification network (i.e., predicting labels from images, > 95%) not be reached this way? I imagine that if this is not possible it is due to the different dimensionality of images/labels and to the fact that NNs are not invertible; however, I'm looking for a formal result/proof, or a direction to follow.
Classification via "Activation Maximization"
CC BY-SA 4.0
null
2023-03-02T16:25:32.970
2023-03-02T16:25:32.970
null
null
264552
[ "machine-learning", "classification", "generative-models" ]
608180
2
null
608174
-1
null
The process is very different in the presence of a unit root. Many processes increase or decrease exponentially. But if there is a unit root, the process is a random walk, which is different.
null
CC BY-SA 4.0
null
2023-03-02T16:26:22.190
2023-03-02T16:26:22.190
null
null
188928
null
608181
2
null
608155
0
null
Firstly, on the general statistical question, "what sample size do I need to estimate the mean of an unknown distribution to a given level of certainty?": this has no answer. In fact, some distributions don't even have a mean, e.g. the Cauchy distribution. Most statistics, or data modelling, depends on prior knowledge. Secondly, on distributions of code running time: there is a lot of prior knowledge available to you. If you want to look at a complex case, read about the performance of the JVM. While often it is very good, it is also very unpredictable. It depends on whether and when the JIT kicks in, and also on whether and when the garbage collector kicks in. Even at a low level there are complexities, e.g. the state of the cache lines. So you need to do some data modelling, appropriate to the actual code that interests you, before you can decide on the sample sizes.
null
CC BY-SA 4.0
null
2023-03-02T16:39:03.930
2023-03-02T16:39:03.930
null
null
188928
null
608182
1
null
null
4
58
One of the benefits of DAGs is that they openly state the causal assumptions a researcher is making, allowing for greater transparency. This is nice in theory. However, in practice, the DAGs I have personally created are a tangled web of nodes and arrows. Visually speaking, as the complexity of a DAG increases, I find that the transparency benefit of DAGs decreases. With that being said, does anyone have an idea of how to present the DAG (or the information that the DAG generates) in a visually friendly manner? The idea is to include a DAG in a peer-reviewed journal that, by and large, does not publish overtly causal research. A highly complex DAG is therefore going to be a hard sell. I am just wondering how I can retain the visual transparency of a DAG without bombarding the reader with a highly complex figure. There is no prior example in my field for presenting DAGs in a journal article (beyond educational material where a researcher uses a very simple DAG as a teaching element), so I do not have precedent to guide me.
Presenting DAGs in Journal-Quality Research
CC BY-SA 4.0
null
2023-03-02T16:45:16.407
2023-03-17T17:57:08.487
2023-03-02T18:48:59.723
11887
360805
[ "causality", "reporting", "dag" ]
608183
2
null
508627
0
null
The coefficients of a logistic regression are on the log-odds (logit) scale, so they should be exponentiated with np.exp(coef_) to turn them into odds ratios. For example:

```
lr coefs          [ 2.20394292e+00 -4.63322645e-02 -9.06940848e-01 -3.98100719e-04]
np.exp(lr coefs)  [ 9.060669       0.954725        0.403757        0.999602]
```
null
CC BY-SA 4.0
null
2023-03-02T16:47:25.253
2023-03-02T16:49:38.850
2023-03-02T16:49:38.850
362671
382242
null
608184
1
null
null
1
32
I am trying to predict a binary outcome. My sample size is very small (n=160) and has a high-class imbalance (80:20). All the variables are highly correlated, and the dataset is high dimensional (the number of variables is 96, and the minority class has 32 samples only). - Can I only use repeated or nested cross-validation instead of using a held-out test set (20% of data) for the final evaluation? - Or should I use cross-validation for hyper-parameter optimization only and then do the final testing on the held-out test set? - What feature selection methods are appropriate for high-correlated, high-dimensional data?
Can I only use cross-validation when sample size is very small or do I still need a held-out test set?
CC BY-SA 4.0
null
2023-03-02T16:51:39.300
2023-03-02T16:51:39.300
null
null
336916
[ "classification", "cross-validation", "feature-selection", "high-dimensional" ]
608185
1
null
null
1
24
I would like to know how small of a beta coefficient my meta-regression is powered to detect. Thank you for any help.
How can I calculate the power (number of studies) needed for a meta-regression?
CC BY-SA 4.0
null
2023-03-02T17:08:01.087
2023-03-02T17:08:01.087
null
null
364121
[ "meta-analysis", "meta-regression", "network-meta-analysis" ]
608187
2
null
608166
0
null
If you happen to reject the null hypothesis of equal expected predictive loss, $H_0\colon E(L(e_1))=E(L(e_2))$, then - under $H_{1a}\colon E(L(e_1))\neq E(L(e_2))$, you favor the view that the expected losses are unequal (alternative="two.sided"); - under $H_{1b}\colon E(L(e_1))> E(L(e_2))$, you favor the view that the expected loss from the first forecast is larger (alternative="less"); - under $H_{1c}\colon E(L(e_1))< E(L(e_2))$, you favor the view that the expected loss from the second forecast is larger (alternative="greater"). You can find the same interpretation phrased another way in the function's [help file](https://www.rdocumentation.org/packages/forecast/versions/8.20/topics/dm.test).
null
CC BY-SA 4.0
null
2023-03-02T17:22:20.020
2023-03-02T17:22:20.020
null
null
53690
null
608188
1
null
null
0
11
I am using GLMMs to model the influence of weather variables on bird counts, and using model averaging to generate parameter estimates for each predictor. Standardizing (centralizing) predictors is recommended for this method by [Grueber et al. (2011)](https://onlinelibrary.wiley.com/doi/10.1111/j.1420-9101.2010.02210.x) and [Gelman (2008)](https://onlinelibrary.wiley.com/doi/10.1002/sim.3107), but since I am not modelling interaction terms, is it necessary to perform this step if my continuous predictors are not on widely-different scales? Thank you!
Is it necessary to standardize predictors when model averaging without interactions?
CC BY-SA 4.0
null
2023-03-02T17:24:36.257
2023-03-02T17:53:03.007
2023-03-02T17:53:03.007
286723
286723
[ "glmm", "standardization", "predictor", "model-averaging" ]
608189
1
null
null
2
60
I have a continuous response variable (leaf length), which has been repeatedly measured in 45 experimental plots over several growing seasons. My goal is to properly estimate on which day of the year (DOY) certain events happened in each year (e.g., the passing of 20% or 80% of annual leaf growth). I fitted different smoothers for each plot using the gam function of mgcv. Here are some examples for one year of 16 plots (x-axis: DOY; y-axis: leaf length in percent): [](https://i.stack.imgur.com/Sh2wq.jpg) I then extracted the DOY and its confidence interval (CI) for the passing of each threshold by sampling the posterior according to this post: [Can I use bootstrapping to estimate the uncertainty in a maximum value of a GAM?](https://stats.stackexchange.com/questions/190348/can-i-use-bootstrapping-to-estimate-the-uncertainty-in-a-maximum-value-of-a-gam/191489#191489) So far so good. I have an estimate of the mean DOY with a lower and upper boundary of the CI for each plot and threshold. But now I would like to fit a mixed effect model to assess whether the timing of these events differs between treatments. I was looking for a way to take the uncertainty coming from each plot-level estimate into account. My question: is it correct to calculate the inverse of the squared standard error (derived from the CI) as a weight and use this with lme as follows?

```
lme1 <- lme(DOY ~ year * treatment,
            random = ~ 1 | plot,
            weights = varFixed(~ weight),
            data = leaf_growth)
```

It plays a large role whether or not these weights are included. OR is there an entirely different way to get good estimates for the passing of such thresholds?
Include confidence intervals of samples in nlme model
CC BY-SA 4.0
null
2023-03-02T17:55:53.403
2023-03-03T17:43:48.180
2023-03-03T17:43:48.180
120119
120119
[ "time-series", "lme4-nlme", "mgcv", "weighted-regression", "error-propagation" ]
608190
1
608366
null
2
50
I am trying to model data $\{Y_t,Q_t\}_{t=1}^T$, where the model is parameterized by $\theta$. $Y_t$ is a quantity where the model prediction can be solved in closed form, $\hat{Y}_t(\theta)$, where the model prediction of $Q_t$, $\hat{Q}_t(Y_t,\theta)$, can only be simulated via Monte Carlo. The simulation results in an estimate of the mean, $\bar{Q}_t(Y_t,\theta)$, and Monte Carlo variance, $\sigma_t(Y_t,\theta)^2$. Thus, $\hat{Q}_t(Y_t,\theta) \sim N\left(\bar{Q}_t\left(Y_t,\theta\right),\sigma_t\left(Y_t,\theta\right)^2\right)$. I would like to compute the likelihood. For a specific $t$, the contribution to the likelihood is $$p(Y_t, Q_t | \theta) = p(Y_t|\theta)p(Q_t|Y_t, \theta)$$. How do I incorporate the uncertainty of $\hat{Q}_t(Y_t,\theta)$ caused by the Monte Carlo simulation in the likelihood? Setting $\hat{Q}_t(Y_t,\theta) = \bar{Q}_t(Y_t,\theta)$ would ignore the uncertainty. My instinct is to try to integrate out the noise, but I am not sure if that is technically correct: $$p(Y_t, Q_t | \theta) = p(Y_t|\theta)p(Q_t|Y_t,\theta) = p(Y_t|\theta) \int_{\mathbb{R}}p\left(Q_t| \hat{Q}_t\left(Y_t, \theta\right) = X,Y_t, \theta\right)p\left(\hat{Q}_t\left(Y_t,\theta\right) = X | Y_t,\theta\right) dX$$
Help Deriving Likelihood Term When the Target is Known Probabilistically
CC BY-SA 4.0
null
2023-03-02T17:56:22.327
2023-03-04T18:26:16.253
2023-03-02T20:36:14.103
98420
98420
[ "maximum-likelihood", "conditional-probability", "probabilistic-programming" ]
608191
1
null
null
2
36
In R when you run anova on a multiple regression you get a table like this ``` Analysis of Variance Table Response: y Df Sum Sq Mean Sq F value Pr(>F) x1 1 9.56 9.56 2.4237 0.13160 x2 1 28.24 28.24 7.1591 0.01273 * x3 1 1201.42 1201.42 304.6223 7.081e-16 *** Residuals 26 102.54 3.94 ``` I understand that the Sum of Squares for each row is the additional sum of squares explained by this predictor when added to the model in this order $SS(\beta_1|\beta_0)=9.56$ $SS(\beta_2|\beta_0,\beta_1)=28.24$ And I know how the F statistics are calculated in the table $$F = \frac{SS(row)/df(row)}{MSE}$$ Further I get that the final row's F statistic is the F stat you would get if you were testing $$H_0: \beta_k=0 | \beta_0,\ldots,\beta_{k-1}$$ What I'm not clear about is how to interpret the F statistics in the other rows of the table. Because each of them use MSE from the full model in their calculation, so I don't see how to phrase the null hypothesis in these cases. Any help would be appreciated. I'm currently of the mind that only the F stat in the final row has a "meaningful" hypothesis test associated with it, and the others should be ignored, or they are approximations somehow.
Additional Sum of Squares Table in R - how in the world do we interpret the F statistics?
CC BY-SA 4.0
null
2023-03-02T18:00:09.493
2023-03-02T19:09:12.423
null
null
213506
[ "regression", "hypothesis-testing", "anova" ]
608192
1
null
null
4
202
Two characteristics, $X_1$ and $X_2$, are measured on population elements. We assume that $X = (X_1, X_2)\sim N_2 (\mu, \Sigma)$ where $\mu$ and $\Sigma$ are parameters. The sample size is equal to $20$. It was calculated that (for the sample) $$ \bar x = \begin{bmatrix}0 \newline 1 \end{bmatrix} $$ $$ s = \begin{bmatrix} 5&-3 \newline -3&9 \end{bmatrix} $$ The above values are the sample mean vector and the sample covariance matrix. At the $\alpha = 0.05$ level, verify the hypothesis system: $H_0: $ $\mu_1 = 2\mu_2$ $H_1: $ $\mu_1 \neq 2\mu_2$. I don't know how to approach this task. I have never encountered such an arrangement of hypotheses.
Statistical test for vector of expected values
CC BY-SA 4.0
null
2023-03-02T18:01:58.073
2023-03-03T04:50:01.600
2023-03-03T04:50:01.600
362671
382245
[ "hypothesis-testing", "self-study", "mathematical-statistics" ]
608194
1
608205
null
1
27
So I am using `HistGradientBoostingRegressor` (scikit-learn) to predict temperature values. After training and testing, the model seems to provide predictions that stagnate beyond certain values, even if the actual values go beyond them (see the following figure). Is this a limitation of the model or am I missing something? [](https://i.stack.imgur.com/BxGMp.jpg)
why GradientBoosting Regressor predictions stagnate within a specific interval of values?
CC BY-SA 4.0
null
2023-03-02T18:22:35.723
2023-03-02T20:06:45.603
2023-03-02T19:35:28.187
11887
151299
[ "regression", "predictive-models", "scikit-learn", "boosting" ]
608195
2
null
608150
4
null
Using Bayesian methods, you could start with a conjugate Dirichlet prior for the probabilities of the six sides, update it with your observations, and then find the probability from the Dirichlet posterior that side five has the highest underlying probability of the six sides. This will be affected slightly by the prior you choose and substantially by the actual observations. It may produce slightly counter-intuitive results for small numbers of observations. To take a simpler example with a biased coin, - if you start with a uniform prior for the probability of it being heads then toss it once and see heads, the posterior probability of it being biased towards heads would be $0.75$ and towards tails $0.25$; - if instead you tossed it $200$ times and see heads $101$ times then the posterior probability of it being biased towards heads would be about $0.556$; - if you tossed it $200$ times and see heads $115$ times then the posterior probability of it being biased towards heads would be about $0.983$. I do not see a simple way of doing the integration with six-sided dice to find the probability a given face is most probable, but simulation will get close enough. The following uses R and a so-called uniform Dirichlet prior for the biases, supposing you observed $21$ dice throws of $2$ ones, $3$ twos, $4$ threes, $5$ fours, $6$ fives and $1$ six:

```
library(gtools)

probmostlikely <- function(obs, prior = rep(1, length(obs)), cases = 10^6) {
  posterior <- prior + obs
  sims <- rdirichlet(cases, posterior)
  table(apply(sims, 1, function(x) which(x == max(x))[1])) / cases
}

set.seed(2023)
probmostlikely(c(2, 3, 4, 5, 6, 1))
#        1        2        3        4        5        6
# 0.027885 0.072102 0.152415 0.279509 0.460498 0.007591
```

so suggesting that the die is biased most towards five with posterior probability about $0.46$ (and most towards six with posterior probability just under $0.008$). Seeing that pattern of observations ten times as often would increase the posterior probability that the die is biased most towards five to just under $0.82$ (and reduces those for one and six to something so small that they never appeared as most likely in a million simulations).

```
probmostlikely(c(20, 30, 40, 50, 60, 10))
#        2        3        4        5
# 0.000222 0.013702 0.167825 0.818251
```
null
CC BY-SA 4.0
null
2023-03-02T18:25:57.267
2023-03-05T16:44:52.010
2023-03-05T16:44:52.010
11887
2958
null
608196
1
null
null
0
43
I am working through the "Microeconometrics and MATLAB" book, translating the code to Python. Here is my implementation for estimating the parameters of a binary logit model ([Colab](https://colab.research.google.com/drive/189dz0SZSIps0WyFSJUq9SbG5aBtvJE9T?usp=sharing) link). I am facing two issues: - ML estimation: the signs of the parameters are the reverse of the true values - SML estimation: the estimated parameters are completely off. For a simple case with true parameters of [0.5, 0.5], the results are (respectively): - [-4.6, -4.6] - [-2.352e-01, 5.417e-01].

```
def simulate_binary_logit(x, beta):
    beta = beta.reshape(-1,1)
    N = x.shape[0]
    J = beta.shape[0]
    # epsilon = -np.log(-np.log(a))  # a draw from Type 1 extreme value
    epsilon = np.random.gumbel(size=(N,J))
    Beta_augmented = np.hstack([beta, np.zeros_like(beta)])
    utility = x @ Beta_augmented + epsilon
    choice_idx = np.argmax(utility, axis=1)
    return (choice_idx).reshape(-1,1)
```

Exact log-likelihood:

```
def binary_logit_log_likelihood(beta, y, x):
    beta = beta.reshape(-1,1)
    lambda_xb = np.exp(x@beta) / (1 + np.exp(x@beta))
    ll_i = y * np.log(lambda_xb) + (1-y) * np.log(1-lambda_xb)
    ll = -np.sum(ll_i)
    return ll
```

Simulated log-likelihood:

```
def binary_logit_simulated_likelihood(beta, y, x, r):
    #np.random.seed(1)
    N = y.shape[0]
    simulated_y = np.zeros((N,r))
    beta = beta.reshape(-1,1)
    for count in range(r):
        simulated_y[:,count] = simulate_binary_logit(x, beta).ravel()
    sim_prob = np.mean(simulated_y, axis=1).reshape(-1,1)
    ll_i = y * np.log(sim_prob) + (1-y) * np.log(1-sim_prob)
    ll = -np.sum(ll_i)
    return ll
```

Estimate using the analytical log-likelihood:

```
res = opt.minimize(fun=binary_logit_log_likelihood, args=(y,x), x0=(beta_init),
                   method='CG', options={'disp':True})
```

which gives [-4.665e-01 -4.513e-01], i.e., the wrong sign. And the estimate from SML is completely wrong:

```
res = opt.minimize(fun=binary_logit_simulated_likelihood, args=(y,x,r), x0=(beta_init),
                   method='SLSQP', options={'disp':True})
```

giving [-2.352e-01 5.417e-01].
Maximum simulated likelihood of binary logit model implementation in python
CC BY-SA 4.0
null
2023-03-02T18:48:01.897
2023-03-02T20:41:16.700
2023-03-02T20:41:16.700
42176
42176
[ "logistic", "python", "maximum-likelihood", "simulation" ]
608197
2
null
607026
2
null
To summarise my answer in the comments: This is a 2 x 2 x 2 factorial experiment with multiple measurements on the 400 participants. A logical place to start analysing this would be as a mixed model with all three factors (`setting`, `complexity`, and `density`) as fixed effects and `participant` as a random intercept. From your code, it looks like you'd like to include all possible interactions. This may be reasonable, but I would also consider whether random slopes might be important here. It's unclear to me whether 400 participants would be sufficient data to support that level of model complexity, but it might be. Why not use random effects for `setting` or even all the different 'scenarios'? Common advice is that random effects don't work well when you have fewer than 6 levels per factor (sometimes you will hear higher numbers, like 15 or 20). For 8 levels (i.e. if you treat all scenarios as equally distinct instead of 2 x 2 x 2), it's borderline, but it smacks of trying to fit a square peg into a round hole. The simpler option works, so I would stick with it. The fact that you've preregistered a different model is a wrinkle, but I'd be inclined to fit the more appropriate model and defend the change from the preregistered plan. Or to do a [multiverse analysis](https://explorablemultiverse.github.io/) exploring the consequences of different model structures. Useful reading: [https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#should-i-treat-factor-xxx-as-fixed-or-random](https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#should-i-treat-factor-xxx-as-fixed-or-random) [What is the minimum recommended number of groups for a random effects factor?](https://stats.stackexchange.com/questions/37647/what-is-the-minimum-recommended-number-of-groups-for-a-random-effects-factor)
null
CC BY-SA 4.0
null
2023-03-02T18:48:56.890
2023-03-06T13:59:56.810
2023-03-06T13:59:56.810
121522
121522
null
608198
1
null
null
1
15
In our lab we have developed a non-parametric permutation test which assesses whether a feature $f:G\to \mathbb{R}$ respects a given symmetric binary relation $A\subset G\times G$. Regard $A$ as being equipped with the uniform distribution (except for points on the diagonal, which are discarded). Then $f(A_1),f(A_2)$ are random variables. Write $c(f)$ for the Pearson correlation coefficient of $f(A_1)$ and $f(A_2)$. Now randomly choose $N$ permutations $\pi$ of $G$ and compute $c(f\circ\pi)$ for each $\pi$. If $c(f)$ lies in the top 5% of the distribution, reject the null hypothesis that $f$ is independent of $A$. This makes sense to me, but my PI says we can extend this permutation test to handle covariates. Given two features $f, g$ on $G$, plot $(c(f\circ\pi), c(g\circ\pi))$ for $N$ randomly chosen permutations $\pi$, and draw a line of best fit. If the residual $c(f) - \widehat{c(f)}$ is in the upper 5% of residuals, we conclude that $f$ still respects $A$ after accounting for/controlling for $g$. I am not convinced that this is an appropriate way of controlling for $g$ or "regressing out" $g$. I don't understand why it makes sense to draw a line of best fit through this scatter plot and interpret the residuals. I am unable to interpret this. He claims that this technique is standard.
Does it make sense to "regress out on one variable" during permutation testing?
CC BY-SA 4.0
null
2023-03-02T18:52:21.630
2023-03-02T18:52:21.630
null
null
381000
[ "permutation-test", "controlling-for-a-variable" ]
608199
2
null
608191
0
null
There is a misunderstanding here. The order of variables entered into the model does not change how each F-statistic is interpreted. The F-statistic for x3 is also not testing if any of the variables in the model are significant. Each individual F-statistic is $$F_{\beta_k} = \frac{MS_{\beta_k}}{MSE}$$ The hypothesis test you are thinking of, $H_0: \beta_k =0, \forall k=1,2,3$, is the model test, that is, whether any of the variables are significant. $$F_{model} = \frac{MS_{model}}{MSE}$$ Where $MS_{model} = \frac{\sum_1^k SS_k}{\sum_1^k df_k}$, which we can calculate here: $$F_{model} = \frac{1239.22/3}{3.94} \approx 104.8$$ Note that this should be the same F-statistic that you get from R when you get the summary from a multiple regression. So then the hypothesis test for each variable is $$H_0: \beta_1 = 0 \\ H_1: \beta_1 \neq 0$$ This is testing if there are differences in any level of the variable. Sometimes this is called a group level test. So for your interpretations here: There is no significant effect of x1. There is a statistically significant effect of x2 at the $p< 0.05$ level. This gives some evidence of an effect. There is a statistically significant effect of x3 at the $p < 0.0001$ level. This gives strong evidence that there is an effect. In my experience, the order that variables are entered into a model is rarely important to actual interpretations, in modern data sets that have large dimensionality. A fun side note, but if the variables are "balanced", i.e. orthogonal categorical variables, the order of variables in the model makes no difference in the sums of squares.
null
CC BY-SA 4.0
null
2023-03-02T19:09:12.423
2023-03-02T19:09:12.423
null
null
254780
null
608200
2
null
606023
1
null
When testing two or more hypotheses simultaneously, there is always the risk of inconsistent results. Consistency between results of different tests is simply not a part of the theory of hypothesis testing! This has been discussed here before, see - Contradictory test results and links therein, for instance [The Difference Between "Significant" and "Not Significant" is not Itself Statistically Significant](http://www.stat.columbia.edu/%7Egelman/research/published/signif4.pdf). In your case you want to ask one question of the data: does the intervention have an effect or not? That is properly tested with an interaction, as you say (for details, [Best practice when analysing pre-post treatment-control designs](https://stats.stackexchange.com/questions/3466/best-practice-when-analysing-pre-post-treatment-control-designs)). Testing separately for the treatment and control groups whether there is a change from baseline is different; there might, for instance, be a difference from baseline in both groups simply because there is a change with time. A test of the interaction looks at whether the change from baseline is different between the groups, and so will work even if there is a trend with time in both groups.
null
CC BY-SA 4.0
null
2023-03-02T19:21:08.830
2023-03-02T19:21:08.830
null
null
11887
null
608201
2
null
436403
2
null
Statistical significance only makes sense for prespecified hypotheses. If your analysis begins by testing for significant associations among possible adjustments and the outcome for inclusion in the model, then you already have a multiple testing issue that's being ignored. A principle of regression modeling is that: changing the number and type of adjustments to the model changes the hypothesis. As such, the space of possible hypotheses you generate by testing 5 covariates is $2^5$! In general, overadjustment attenuates the power of a logistic analysis. So even if all 5 covariates are truly prognostic variables (but perhaps not confounders) the resulting analysis may be biased not just based on the number of predictors versus the sample size, but based on the overall data structure and the possibility of sparse strata. If the 5 variables are in fact confounders, then you should include them in the analysis regardless, even if a small sample correction is required, so that the desired interpretation of effect is achieved. Then there is the added nuance of performing a subgroup analysis according to $L$. Now you have added yet another complication. Usually, subgroup analyses do not correspond to primary or secondary analyses, but rather are meant as a sensitivity analysis. They are useful to detect a specific form of interaction where the effect is "consistent with the null" in one group, and "not consistent with the null" in another group. An example might be a targeted chemotherapy that works only in particular cancers with a genetic mutation. If a potential patient has wild-type disease, the drug will do nothing and it's better that they do not take it at all. Note these analyses (as well as analyses of interactions as @whuber mentioned) can at times produce contradictory results: where one subgroup fails to reject the null but not the other, yet the test of interaction is non-significant, and vice versa. The meaning of a sensitivity analysis is to generate hypotheses, not confirm them. So, knowing very little about the problem, it would make sense not to deviate from the original hypothesis which does not depend on $L$.
null
CC BY-SA 4.0
null
2023-03-02T19:28:05.847
2023-03-02T19:28:05.847
null
null
8013
null
608202
2
null
608192
8
null
The problem can be solved using a slightly modified paired $t$-test in which, the null is $H_0:\mu_1 -2\mu_2 = 0$ and the alternative is $H_1:\mu_1 -2\mu_2 \neq 0$. The procedure is thus to apply a $t$ to the difference data $Y_1,\ldots,Y_n$, where $Y_i = X_{1i}-2X_{2i}$, as noted by @whuber. The test statistic is $$\frac{\sqrt{n}\bar Y}{\sqrt{s_Y^2}}\sim t_{n-1}.$$ For this test thus you only require $s_Y^2$, the sample variance of $Y_1,\ldots,Y_n$, and of course, its sample average $\bar Y = \bar{X_1}-2\bar{X_2}$.
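As a hedged numeric check (assuming the matrix $s$ in the question is the sample covariance matrix), plugging the question's numbers into this statistic in R:

```
n    <- 20
ybar <- 0 - 2 * 1                      # xbar_1 - 2 * xbar_2
s2_Y <- 5 + 4 * 9 - 2 * 2 * (-3)       # Var(X1) + 4 Var(X2) - 4 Cov(X1, X2) = 53
tstat <- sqrt(n) * ybar / sqrt(s2_Y)   # about -1.23
2 * pt(-abs(tstat), df = n - 1)        # two-sided p-value, about 0.23
```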
null
CC BY-SA 4.0
null
2023-03-02T19:42:33.547
2023-03-02T21:27:45.430
2023-03-02T21:27:45.430
56940
56940
null
608203
2
null
607033
1
null
You have both an error: `Error in fixed.only && random.only : invalid 'x' type in 'x && y'` For the former, you could check that both `age` and `cci` are numeric rather than factor objects? I can't see any other clues as to this error message, which seems to crop up in lots of different settings. And a warning: `In addition: Warning message:` `In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :` ` Model is nearly unidentifiable: large eigenvalue ratio` ` - Rescale variables?` This may be due to `age` in particular needing centering/scaling before analysis? Though apparently `rescale=TRUE` is the default option.
null
CC BY-SA 4.0
null
2023-03-02T19:56:12.027
2023-03-02T19:56:12.027
null
null
16974
null
608204
1
null
null
0
16
I am working with a normal model $X \sim N(0, \sigma^2(\theta))$, where $\sigma^2(\theta) = \frac{1}{e}\cos^2(\theta)+e\sin^2(\theta)$. My goal is the point estimation of $\theta$ within the range $[0, 2\pi]$. However, the maximum likelihood estimator is not consistent because this model produces a likelihood function with 4 maxima. I am wondering if there is a reparametrization or another method that could help solve the non-identifiability of $\theta$, given that I don't have prior information about it. Thank you in advance for any suggestions or advice you may have.
How to solve non-identifiability problem in point estimation
CC BY-SA 4.0
null
2023-03-02T20:04:13.063
2023-03-02T20:04:13.063
null
null
163553
[ "maximum-likelihood", "identifiability", "point-estimation" ]
608205
2
null
608194
0
null
This is a limitation that's inherent to the particular version you are using (which is the most common one). What you have there is multiple trees (the overall prediction is a weighted average of the predictions of all the trees) and each tree has branches. Each observation "goes along" branches and at each branching point either goes right or left. The decision is based on some split of feature values that occur in the training data (i.e. there will not be any splits once you are beyond the feature range of the training data, and there will not be splits at multiple values between two adjacent feature values). After several splits, you eventually reach a leaf node and the prediction is a single value at that leaf node. See e.g. [this notebook](https://www.kaggle.com/code/bjoernholzhauer/will-rose-or-jack-survive-lightgbm-and-shap) for some illustrations in a simple setting. So, inherently, once you go beyond the feature values seen in training, predictions will stay constant (such tree-based models basically don't extrapolate beyond the training data, or interpolate between training data points). That's why it's really good if there is a lot of varied training data and if the training data covers the whole range of the feature space of interest for practical predictions. Alternative versions might e.g. build some simple model (e.g. something like a (penalized?) linear regression with only certain features) at each leaf node; such models would extrapolate to an extent.
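A small hedged illustration of this behaviour in R with a single regression tree (rpart used here as a stand-in for the boosted trees; the toy data are made up):

```
library(rpart)

set.seed(1)
train <- data.frame(x = 1:100)
train$y <- 2 * train$x + rnorm(100)

fit <- rpart(y ~ x, data = train)
predict(fit, newdata = data.frame(x = c(90, 100, 150, 1000)))  # flat beyond x = 100
```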
null
CC BY-SA 4.0
null
2023-03-02T20:06:45.603
2023-03-02T20:06:45.603
null
null
86652
null
608206
1
null
null
1
33
I am trying to grapple with the following question as I self-study from the chapter on probability in my quantum mechanics textbook (Ballentine's Quantum Mechanics: A Modern Development). Unfortunately, it has been a long time since I've studied probability and my memory is hazy. > A source emits particles at an average rate of $\alpha$ particles per second; however, each emission is stochastically independent of all previous emission events. Calculate the probability that exactly $n$ particles will be emitted within a time interval $t$. Now I have the vague notion that I should be able to situate this question in the context of a Poisson process but I'm not quite sure how to go about it. In particular, all I can deduce so far is the following: > We interpret the problem statement as saying that the probability of an emission in any infinitesimal interval $dt$ of time is $\alpha \ dt$. Then we can argue that the probability $P_1$ of the first emission happening in the interval $[t,t+dt)$ is the probability of the event in which no emissions happen in the interval $[0,t)$ and we have an emission in the interval $[t,t+dt)$. That is, we have $$P_1 = \left(\lim_{N \to \infty} \left(1-\frac{\alpha t}{N}\right)^N\right)\left(\alpha \ dt \right) = \alpha e^{-\alpha t}dt := f(t)dt$$ where $f(t)$ is the corresponding pdf (and we've just shown that it's exponential). It's not clear to me how to generalize this statement about the random variable $T_1$ (time until first emission) to the pmf/pdf of the random variable $n_t$ (the number of emissions in a time interval $t$). To use the theory of point Poisson processes I think I need to show the definition: "a Poisson point process has the property that the number of points in a bounded region of the process's underlying space is a Poisson-distributed random variable." But it's not clear to me how I can use my $f(t)$ (for the RV $T_1$) to show this fact. Any help with showing this fact (and ideally basically helping me work through the whole problem) would be greatly appreciated.
Radioactive decay as a Poisson process
CC BY-SA 4.0
null
2023-03-02T20:09:16.540
2023-03-02T20:09:16.540
null
null
382189
[ "probability", "self-study", "poisson-distribution", "independence", "poisson-process" ]
608207
2
null
436403
1
null
The presence or absence of "statistical significance" is a very blunt tool for examining the potential role of the L status in your data, and it is rarely useful by itself in preliminary investigations. I would say that, as a bare minimum, you should instead be looking at the actual p-values as indices of the evidence in the data. If the association observed with L=0 is weak but nonetheless "significant" at your pre-specified threshold for significance (p<0.05? Hopefully you've been more thoughtful than that.), and the association is a little weaker with L=1, then you would end up with one being "significant" and the other not, without any real support for the associations being different! That may well not be close to the case with your data, but it does point to the inappropriateness of applying the significant/not significant dichotomy to this type of analysis. If you want to compare the associations, then don't look to see if there is a difference between their significant/not significant status, but test the difference between those associations directly. You've mentioned in a comment another problem with comparing the significant/not significant status of two associations. One association might be in one direction and the other in the opposite, and yet both might be statistically significant at your chosen threshold. In fact, there will always be some threshold at which two such associations are both significant, and other thresholds at which they fall on opposite sides of the arbitrary boundary between significant and not significant. Instead of asking whether there is an association or not, ask how big the associations are. Have you examined a graphical display of the relevant associations? If there is an important difference it will likely be discernible by eye.
null
CC BY-SA 4.0
null
2023-03-02T20:17:28.103
2023-03-02T20:17:28.103
null
null
1679
null
608208
1
null
null
1
26
DEoptim is [described as](https://cran.r-project.org/web/packages/DEoptim/index.html) > DEoptim implements the Differential Evolution algorithm for global optimization of a real-valued function of a real-valued parameter vector as described in Mullen et al. (2011) <doi:10.18637/jss.v040.i06> I am wondering about the following case: if we have parameters that live on vastly different scales from one another (e.g., one parameter takes on values between .001 and .00001; another that takes on values between 1000 and 100000), would it help convergence to reparameterize the search space for the parameters such that the search space for the reparameterized parameters is the same (e.g., that both of the aforementioned parameters are rescaled such that they take on values between 0 and 1), given how the mutations occur from one iteration to the next? Any recommendations on how to speed convergence (beyond increasing NP and lowering F) would be greatly appreciated.
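For what it's worth, here is a sketch of the rescaling idea in R (the objective `obj` and the bounds are made up for illustration): let `DEoptim` search the unit cube and map back to the natural scales inside a wrapper, so mutation and crossover operate on comparable scales for all parameters. A log-scale map may be preferable when a parameter spans several orders of magnitude.

```r
library(DEoptim)

lower_nat <- c(1e-5, 1e3)    # natural-scale lower bounds (hypothetical)
upper_nat <- c(1e-3, 1e5)    # natural-scale upper bounds (hypothetical)

obj <- function(par) sum(((par - c(5e-4, 2e4)) / upper_nat)^2)  # toy objective

obj_unit <- function(u) {
  par <- lower_nat + u * (upper_nat - lower_nat)  # map [0,1]^2 -> natural scale
  obj(par)
}

fit <- DEoptim(obj_unit, lower = rep(0, 2), upper = rep(1, 2),
               control = DEoptim.control(NP = 40, itermax = 200, trace = FALSE))
lower_nat + fit$optim$bestmem * (upper_nat - lower_nat)  # solution on the natural scale
```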
Can reparameterizing the parameters for DEoptim improve convergence: whether/how to do it?
CC BY-SA 4.0
null
2023-03-02T20:19:13.513
2023-03-03T16:39:29.023
2023-03-03T16:39:29.023
164061
18040
[ "optimization", "genetic-algorithms" ]
608209
1
null
null
0
6
Suppose we have a set of n heterogeneous study participants that accrue salaries over time independently of one another. We now want to evaluate a treatment of the study participants that is expected to raise salaries (e.g. one-time career coaching), and want to find out how many study participants are needed. However, we can also decide for how long the study participants should be followed after the intervention (we receive salary information monthly; salary information before the intervention is available as well for all possible participants). A natural way of testing this is performing a two-sample t-test. We can specify a minimum detectable effect size, as well as an alpha and power level. We can also aggregate the participant-level salaries for a given (past) time frame, and then calculate the mean and variance to obtain a minimum participant sample size for a study length (into the future) equivalent to the past time frame chosen. While we do not gain participants by increasing study length, my assumption would be that we do gain more certainty in the observed differences between the groups (treatment and no treatment), and would therefore need fewer participants going into the study if we conducted it for a longer time frame. However, I cannot reproduce this by calculating the sample size needed for different time frames, as the variance of the aggregated salaries grows strongly. How can we produce accurate sample size estimates for different proposed study lengths that, in accordance with intuition, reduce the needed number of participants the longer the study is undertaken? Do we need to use the monthly salary data differently than simply aggregating it?
Sample Size Estimation for Two Sample T-Test with Fixed Sample Size and Time-Varying Covariates
CC BY-SA 4.0
null
2023-03-02T20:28:09.257
2023-03-02T20:28:09.257
null
null
292148
[ "t-test", "sample-size", "time-varying-covariate" ]
608210
1
null
null
1
37
I have two linear regression models predicting the same outcome with the same data, but the predictors are different and the sample sizes slightly vary due to missing values on some predictors but not others. One model has an R^2 of .41 and the other an R^2 of .10. I want to say that the first model explains significantly more variance in the outcome than the second. Is this possible? Is there a package or function in R that I could use to test whether these values are significantly different from each other? I initially tried anova(model1, model2) but realized that was only appropriate for hierarchical models. I need something that can compare R^2 values from models with different Ns and different predictors. For clarification: model1 <- lm(DV ~ a + b, data=data) (933 df) model2 <- lm(DV ~ w + x + y + z, data=data) (888 df) The model1 R^2 is .10 and the model2 R^2 is .41. The dataset is the same, but the sample sizes/degrees of freedom are different because of missing values (which I can clean up, if needed). Variables a and b are conceptually different from variables w-z, so I don't want to include them all in the same model.
Statistically compare R^2 values from linear regression models with same outcome but different predictors and Ns
CC BY-SA 4.0
null
2023-03-02T20:51:08.457
2023-03-02T22:22:20.257
2023-03-02T22:22:20.257
382255
382255
[ "r", "regression", "r-squared", "model-comparison" ]
608211
2
null
608123
3
null
Since the co-occurrence table is a square matrix that is symmetric around the main diagonal, it doesn't make any difference if you read it by rows or by columns. The main diagonal seems to be zero everywhere, which makes sense since it's unlikely that one repeats the same hashtag within the same tweet. My suggestion is to normalize row-wise (or column-wise), that is, to divide each row by its total. In this way, the $i$th row of the table would represent the relative frequency distribution of co-occurrences for the hashtag $i$. You can now compare this frequency distribution with other frequency distributions from other tables or for other tags. In my opinion, it doesn't make sense to compute quantiles or moment-based summaries on these distributions (so the z-score is not meaningful either) since the variable in question is qualitative, e.g. having modalities "#1 vs #1", ..., "#1 vs #n". You can instead use the mode (i.e. the co-occurrence with highest relative frequency) as a measure of location. As a measure of "variability" you may use the Shannon entropy. If we denote by $p_{j|i}$ the relative frequency of the co-occurrence "#i vs #j", for $j = 1,\ldots,k$, the Shannon entropy is $$ H = -\sum_{j=1}^k p_{j|i}\log p_{j|i}, $$ with $p_{j|i}\log p_{j|i} = 0$ if $p_{j|i}=0$. $H$ attains its minimum value of zero when all the mass is concentrated on a single co-occurrence. Furthermore, it can be shown that $H\leq \log k$, with the maximum attained by the uniform distribution; thus if you use $H$ to compare frequency distributions with different $k$'s, it's better to use its normalized version $$ H_n = \frac{H}{\log(k)}. $$
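For illustration, a small R sketch of the row-wise normalisation, the mode and the normalised entropy for one hypothetical row of co-occurrence counts:

```r
row_counts <- c(0, 12, 3, 0, 7, 1)            # co-occurrences of one hashtag with the others
p <- row_counts / sum(row_counts)             # relative frequencies p_{j|i}
H <- -sum(ifelse(p > 0, p * log(p), 0))       # Shannon entropy, with 0*log(0) := 0
H_n <- H / log(length(p))                     # normalised entropy in [0, 1]
c(mode = which.max(p), entropy = H, normalised = H_n)
```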
null
CC BY-SA 4.0
null
2023-03-02T20:52:52.387
2023-03-02T20:52:52.387
null
null
56940
null
608212
2
null
607076
1
null
You never really "lose a category" in regression against a categorical predictor, you just have to get the comparisons that you want from the model results. In a Cox model it's best to work with the regression coefficients first, then convert to hazard ratios by exponentiating. The regression coefficient for the reference level ("A" here) is 0 in a Cox model. For a comparison against the average among groups, you can find the average of the 3 regression coefficients (0 for "A" and the individual values found for for "B" and "C") and evaluate the differences of each coefficient against that average. For standard errors you would have to use the formula for the [variance of a weighted sum of correlated variables](https://en.wikipedia.org/wiki/Variance#Weighted_sum_of_variables) along with the variance-covariance matrix of the model coefficient estimates. Instead of doing this by hand, take advantage of the post-modeling tools provided by packages like [emmeans](https://cran.r-project.org/package=emmeans). You can then specify the comparisons/contrasts that you desire and display results in either the original coefficient scale or in the hazard-ratio scale. After you fit your data with the `coxph()` function, the "eff" contrast method in `emmeans` provides what you want in the hazard-ratio scale: ``` library(survival) cox1 <- coxph(Surv(a,c)~b,data=df) library(emmeans) emmeans(cox1,eff~b,type="response")$contrasts # contrast ratio SE df null z.ratio p.value # A effect 1.071 0.207 Inf 1 0.357 0.7212 # B effect 1.339 0.250 Inf 1 1.567 0.1834 # C effect 0.697 0.163 Inf 1 -1.545 0.1834 # # P value adjustment: fdr method for 3 tests # Tests are performed on the log scale ``` If you remove the `type="response"` and the `$contrasts` from that function call, you can get a better idea of what's going on under the hood: in the "contrasts" you get comparisons of each of the original regression coefficients (the "emmeans") against their mean. ``` emmeans(cox1,eff~b) # $emmeans # b emmean SE df asymp.LCL asymp.UCL # A 0.000 0.000 Inf 0.000 0.000 # B 0.223 0.299 Inf -0.363 0.810 # C -0.430 0.386 Inf -1.187 0.327 # # Results are given on the log (not the response) scale. # Confidence level used: 0.95 # # $contrasts # contrast estimate SE df z.ratio p.value # A effect 0.0689 0.193 Inf 0.357 0.7212 # B effect 0.2922 0.186 Inf 1.567 0.1834 # C effect -0.3611 0.234 Inf -1.545 0.1834 # # Results are given on the log (not the response) scale. # P value adjustment: fdr method for 3 tests ```
null
CC BY-SA 4.0
null
2023-03-02T21:10:15.797
2023-03-02T21:10:15.797
null
null
28500
null
608213
1
608216
null
3
114
What is the conditional $\operatorname{Var}(XY|Y)$ given $X$ and $Y$ are independent? Is it: $$\operatorname{Var}(XY|Y)= Y^2\operatorname{Var}(X|Y) = Y^2\operatorname{Var}(X)?$$
What is the conditional $\operatorname{Var}(XY|Y)$ given that $X$ and $Y$ are independent?
CC BY-SA 4.0
null
2023-03-02T21:15:17.203
2023-04-04T18:29:59.127
2023-03-07T15:44:56.600
362671
382225
[ "probability", "variance", "conditional-probability", "independence" ]
608214
2
null
608139
3
null
I think @Henry's [point](https://stats.stackexchange.com/questions/608139/fitting-simple-model-in-r#comment1128532_608139) is the answer here. You seem to assume that a 'good' fit should have $\hat{y} = 1.0$ for the 2006 datum. There is no statistically grounded reason to believe that (there might be a scientific reason to want that, but that's a different issue). From a statistical point of view, your software tries to find the candidate parameter values that maximize the likelihood of the data, assuming a conditional Gamma distribution with a log link as a linear function of those three variables. A fitted function running through $(1.0,\, 1.0,\, 1.0,\, 1.0)$ won't do that for these data and assumptions. That is, such a function would have a lower likelihood. You could crowbar the model into doing so, but it would yield a worse statistical fit. At first glance, the fit doesn't seem problematic: [](https://i.stack.imgur.com/7s00G.png) [](https://i.stack.imgur.com/7EOCd.png) [](https://i.stack.imgur.com/VmNpy.png) [](https://i.stack.imgur.com/xHpOk.png)
null
CC BY-SA 4.0
null
2023-03-02T21:21:42.870
2023-03-03T01:20:28.813
2023-03-03T01:20:28.813
7290
7290
null
608215
1
null
null
0
59
[This exercise document](https://www.soa.org/4a18f6/globalassets/assets/files/edu/2022/spring/solutions/2022-04-14-exam-pa-model-sol.pdf#page=25) asks the following question (page 25): > Your assistant, A, builds a decision tree to investigate which variables have a significant impact on response time. The variable day, when used as a categorical variable, is deemed important by the tree-based model. A knows from experience, and from testing other models, that day is not actually a significant variable. Explain why a decision tree model may emphasize day which is a less important categorical variable with 31 levels, when used as a categorical variable, despite it not being an important variable. They give the following answer, but I am not sure why this holds: > Decision trees tend to create splits on categorical variables with many levels because it is easier to choose a split where the information gain is large. However, splitting on these variables will likely to lead to overfitting. Can you please help me explain this?
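As an illustration of the claim in the quoted answer, here is a small simulation sketch in R (the data are pure noise, so any splits are spurious): a tree grown without much regularisation tends to assign inflated importance to the 31-level factor simply because it offers so many candidate partitions.

```r
library(rpart)
set.seed(1)
n   <- 300
dat <- data.frame(
  y   = rnorm(n),                                 # response: pure noise
  x   = rnorm(n),                                 # numeric noise predictor
  day = factor(sample(1:31, n, replace = TRUE))   # 31-level categorical noise predictor
)
fit <- rpart(y ~ x + day, data = dat,
             control = rpart.control(cp = 0, minsplit = 20))
fit$variable.importance   # compare the (spurious) importance of 'day' vs 'x'
```

Neither variable carries any signal here, so any importance the tree reports is pure overfitting; rerunning with different seeds, the many-level factor usually comes out on top.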
Decision trees and splits on categorical variables with many levels?
CC BY-SA 4.0
null
2023-03-02T21:22:00.597
2023-03-05T00:51:36.337
2023-03-05T00:51:36.337
11887
382257
[ "self-study", "categorical-data", "multilevel-analysis", "many-categories" ]
608216
2
null
608213
5
null
Your reasoning shows that you have a good understanding of conditioning. Formally, you can derive it from the definition$^\dagger$ of conditional variance and basic properties of conditional expectation: \begin{align} & \operatorname{Var}(XY|Y) \\ =& E[X^2Y^2 | Y] - (E[XY|Y])^2 \tag{definition}\\ =& Y^2E[X^2|Y] - (YE[X|Y])^2 \tag{pulling out known factors} \\ =& Y^2E[X^2] - Y^2(E[X])^2 \tag{independence}\\ =& Y^2\operatorname{Var}(X). \end{align} --- $^\dagger$ For two random variables $\xi$ and $\eta$ such that $E[\xi^2] < \infty$, the conditional variance of $\xi$ given $\eta$ is \begin{align} \operatorname{Var}(\xi|\eta) = E[(\xi - E[\xi|\eta])^2|\eta] = E[\xi^2|\eta] - (E[\xi|\eta])^2. \end{align}
null
CC BY-SA 4.0
null
2023-03-02T21:32:42.317
2023-04-04T18:29:59.127
2023-04-04T18:29:59.127
20519
20519
null
608217
2
null
608103
7
null
You just don't have enough data to see whether homoscedasticity is violated with only 9 data points. Maybe you could include a hint that homoscedasticity might be violated but you just don't have enough information to tell.
null
CC BY-SA 4.0
null
2023-03-02T21:34:22.017
2023-03-03T12:46:31.417
2023-03-03T12:46:31.417
22047
369002
null
608218
1
null
null
0
45
I want to predict using residuals. Let's say we used regression of J against H (assume all positive values), giving us residuals (Jres) that express, in theory, J, with the effect of H removed. We then regress those residuals (Jres) against another variable (K). We get a slope and intercept (let's say it is significant). Now we want to predict an unknown Jres from a known K using our new slope and intercept. The problem - since residuals are centered on 0, the predicted Jres has to be adjusted to be in the original scale of J. What do we add to the Y-intercept so that the new predicted can be in a scale comparable to the original J? In other words, let's say you have a population that smokes and lives near a coal-powered factory, and you see cancer cells in their blood. You want to regress away the effect of smoking (from the cancer data) and then use the distance from the factory in a new regression to see if distance predicts tumor cells independent of the smoking. If so, how far away would you have to be to reduce the cancer cells to 0? It looks like this, using residuals against K (all data are hypothetical): [](https://i.stack.imgur.com/uDur8.png)
regression - using residuals to predict
CC BY-SA 4.0
null
2023-03-02T21:37:13.417
2023-03-02T22:10:14.213
2023-03-02T22:10:14.213
382256
382256
[ "regression", "residuals" ]
608219
2
null
608210
1
null
You could compute confidence intervals for both $R^2$s. With 95% CIs, if they don't overlap then the difference is significant with $p<0.05$ (this is a conservative check: overlapping CIs do not necessarily mean the difference is non-significant). But, if both the samples and the predictors differ, then I don't think this is a very good comparison. It would be better to hold out participants who have all the variables. Then build the two linear models in the remaining participants and predict your outcome in the held-out participants. At that point you can compare the two Pearson's $r$s (or whatever performance measure you want to use) to determine which is better. Since the held-out sample is the same for both, it's a fair comparison.
null
CC BY-SA 4.0
null
2023-03-02T21:55:57.263
2023-03-02T22:13:07.287
2023-03-02T22:13:07.287
288142
288142
null
608220
2
null
608103
7
null
I believe that tests of assumptions are very often "essentially useless" (see: [Why use normality tests if we have goodness-of-fit tests?](https://stats.stackexchange.com/a/576487/7290), e.g.). Box said, "All models are wrong, but some are useful." In that spirit, homoscedasticity is a model, and the idea that it is perfectly met is implausible. A test of a false null can return either a correct decision or a type II error (because you don't have enough data). It is much better to assess the apparent magnitude and type of deviations from perfectly met assumptions than to conduct formal tests. The best way to do this is generally to look at appropriate plots. For assessing possible heteroscedasticity, the scale-location plot is better than the plot of residuals vs fitted values. In neither case does it look like you have a magnitude of heteroscedasticity that is likely to cause problems. On the other hand, it looks like you have a curvilinear relationship between Note and Duree (but don't have enough data to establish that with a conventional degree of confidence).
null
CC BY-SA 4.0
null
2023-03-02T22:01:38.573
2023-03-05T13:20:50.893
2023-03-05T13:20:50.893
7290
7290
null
608221
2
null
604926
0
null
The distribution of $x$ can be approximated with a [logit-normal](https://en.wikipedia.org/wiki/Logit-normal_distribution) distribution for a large number of steps. The distribution will concentrate at 0 or 1, depending on $\mu$: $$\mu=(1-\lambda)\ln\bigg(\frac{1-\pi_H}{1-\pi_L}\bigg)+\lambda\ln\bigg(\frac{\pi_H}{\pi_L}\bigg)$$ - $\mu>0$: $x\rightarrow1$ - $\mu<0$: $x\rightarrow0$ - $\mu=0$: $P(x\rightarrow0)=0.5$; $P(x\rightarrow1)=0.5$ --- I'll assume $\lambda_{right}+\lambda_{left}=1$ and use $\lambda\equiv\lambda_{right}$. Let $x_{a,b}$ denote the position of a particle that has moved a total of $a$ times to the left and $b$ times to the right. First, notice that the sequence of the $n=a+b$ moves does not affect the value of $x_{a,b}$ (e.g., left-left-right-right results in the same position as left-right-right-left). $$x_{a,b}=\bigg[1+\bigg(\frac{1-\pi_L}{1-\pi_H}\bigg)^a\bigg(\frac{\pi_L}{\pi_H}\bigg)^b\bigg]^{-1}$$ A [logit](https://en.wikipedia.org/wiki/Logit) transformation on $x_{a,b}$ results in: $$y_{a,b}=a\ln\bigg(\frac{1-\pi_H}{1-\pi_L}\bigg)+b\ln\bigg(\frac{\pi_H}{\pi_L}\bigg)$$ Possible values of $y_{a,b}$ are equally spaced from $n\ln\Big(\frac{1-\pi_H}{1-\pi_L}\Big)$ to $n\ln\Big(\frac{\pi_H}{\pi_L}\Big)$ with intervals: $$d=\ln\Big(\frac{\pi_H}{\pi_L}\Big)-\ln\Big(\frac{1-\pi_H}{1-\pi_L}\Big)$$ $b\sim\text{Bin}(n,\lambda)$, so $y_{a,b}$ follows a scaled and shifted binomial distribution: $$b=\frac{y_{a,b}-n\ln\Big(\frac{1-\pi_H}{1-\pi_L}\Big)}{d}\sim\text{Bin}(n,\lambda)$$ We can use the normal approximation to the binomial as $n$ gets large: $$y_{a,b}\sim\mathcal{N}(n\mu,d^2n\lambda(1-\lambda))$$ Equivalently, $x_{a,b}$ can be approximated with the logit-normal distribution. As $n$ increases, the magnitude of the ratio of the mean to the standard deviation increases if $\mu\neq0$, so $x$ will tend to either 0 or 1. For $\mu=0$, the mean of $x$ remains at 0.5, and, although the most probable positions are near 0.5, they make up an increasingly smaller proportion of the total probability.
null
CC BY-SA 4.0
null
2023-03-02T22:31:13.923
2023-03-03T20:52:28.033
2023-03-03T20:52:28.033
214015
214015
null
608222
2
null
545162
1
null
Yes, you definitely can. Moreover, it does not have to be any linear kind of regression. In the most general case, the following method will allow you to effectively turn any parametric or even non-parametric probability density estimation into a regressor for the desired output (i.e., the 3rd variable in your case, which is salary). The following method is from the book: Deep Learning, by Goodfellow, Bengio, and Courville, 2016 (page 103 or 104, from section 5.1.3), [https://www.deeplearningbook.org/contents/ml.html](https://www.deeplearningbook.org/contents/ml.html). Note: Despite the name of the book, the following method is completely general, and suitable beyond deep learning or any other neural network, or even other machine learning methods, for that matter. The method: - Assume you've modeled the probability distribution over the input vector $\textbf{v}$ as $p(\textbf{v})$ by any parametric or non-parametric probability density estimation technique. In your example: $\textbf{v}$ = [height, weight, salary] $\in \mathbb{R}^3$. - Now, you decide to estimate one component $y$ of the vector $\textbf{v}$ from the remaining "input" components $\textbf{x}$, i.e., estimate salary from height and weight. Let's denote: $\textbf{v} = (\textbf{x}, y)$, where $y$ is the desired component to be estimated, and $\textbf{x}$ are the remaining "input" components. - Using the definition of conditional probability, the estimation of the probability of $y$ given the other components $\textbf{x}$ is: $$p(y | \textbf{x}) = \frac{p(\textbf{x}, y)}{p(\textbf{x})} = \frac{p(\textbf{v})}{\sum_{y'}{p(\textbf{x}, y')}}$$ where: $$p(\textbf{x}) = \sum_{y'}{p(\textbf{x}, y')}$$ by the law of total probability, over quantized (discretized) values $y'$ of the component $y$. But you can use un-quantized continuous values using a suitable 1D integration technique of your choice: $$p(\textbf{x}) = \int_{y'}{p(\textbf{x}, y')}dy'$$ by the law of total probability for continuous values $y'$ of the component $y$. Observe that instead of a specific point-estimate of $y$ you obtain a posterior probability density estimate $p(y | \textbf{x})$ of $y$ given the $\textbf{x}$ inputs. If you only need a point-estimate, you can simply choose for example: $$\hat{y} = \operatorname*{argmax}_{y'}\, p(y' | \textbf{x})$$ where $\hat{y}$ is the maximum a posteriori point-estimate.
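As a toy illustration of this recipe (my own sketch, not from the book), here is a 2-D version in R with one input $x$ and one target $y$, using a nonparametric kernel estimate of the joint density; the 3-D salary example works the same way, only with one more input dimension to condition on.

```r
library(MASS)
set.seed(1)
x <- rnorm(500)
y <- 2 * x + rnorm(500)                     # the "target" component
joint <- kde2d(x, y, n = 100)               # estimate of the joint density p(x, y) on a grid
x0 <- 1                                     # query value of the input
ix <- which.min(abs(joint$x - x0))          # grid column closest to x0
post <- joint$z[ix, ] / sum(joint$z[ix, ])  # discretised p(y | x = x0) via the conditional formula
joint$y[which.max(post)]                    # maximum a posteriori point estimate; near 2*x0 = 2
```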
null
CC BY-SA 4.0
null
2023-03-02T22:32:30.943
2023-03-05T14:45:01.103
2023-03-05T14:45:01.103
366449
366449
null
608224
1
null
null
0
32
I am using GLMMs to examine the influence of 5 weather variables on different biological count variables in R. With n=5 weather variables and modelling main effects only, I have 32 candidate models (incl. intercept-only and global models) for each response variable. I have been reading literature and tutorials re: model selection and averaging with GLMMs, and I understand that my selection results have a high degree of uncertainty (e.g., none of my top candidate models have an Akaike weight >0.26). Due to this uncertainty, [Symonds & Moussalli (2011)](https://psycnet.apa.org/record/2011-00050-002) - based on [Burnham & Anderson (2002)](https://link.springer.com/book/10.1007/b97636) - recommend the full-model averaging approach, whereby parameter averages are calculated from all possible candidate models. My question is: when model averaging, is it necessary to include models that rank worse than the intercept-only model? For example, the intercept-only model ranks between 7-15 (out of 32) by AICc, but if I follow the best practices described by Symonds & Moussalli, I would include several worse-ranking models that technically do not explain variation in my responses. These worse-ranking models have Akaike weights between 0 and 0.10, so they would affect my parameter averages to some degree. Conceptually, I would think it is counterintuitive to include any model that ranks worse than the null. Thank you for any clarification!
Inclusion of candidates in model averaging with GLMMs
CC BY-SA 4.0
null
2023-03-02T23:01:18.270
2023-03-02T23:01:18.270
null
null
286723
[ "r", "glmm", "glmmtmb", "model-averaging" ]
608225
1
null
null
0
21
I recently started learning about the Gaussian Process for a GP machine learning project so my understanding is relatively limited. However, from what I have read/watched so far you have a prior GP and then train it to get a posterior GP. But, I am receiving conflicting information on how informative the prior should be. In the video below it talks about how you want your prior to be informative so that if your model tries to predict a faraway point, it is still somewhat able to give an estimate. Yet in other readings they talk about how having a prior that is not informative gives the model flexibility so that is more preferable. Is there something I am missing that links the two different interpretations? [https://www.youtube.com/watch?v=dkd9mQDkwOo](https://www.youtube.com/watch?v=dkd9mQDkwOo) {Timestamp: 36:40 - 37:05} Also, when trying to answer my question I came across this question which says: "I am reading on gaussian processes and there are multiple resources that say how the parameters of the prior (kernel, mean) can be fitted based on data, specifically by choosing those that maximize the marginal likelihood. However, if we use an expression of our data to fit the parameters of the prior, doesn't this defeat the point of a prior? Wouldn't it be the same to fit the parameters that best explain the data and use them directly, instead of Bayesian inference?" I believe their question relates to what I am asking so I wanted more clarification. [https://mathoverflow.net/questions/389658/gaussian-process-kernel-parameter-tuning](https://mathoverflow.net/questions/389658/gaussian-process-kernel-parameter-tuning) Thanks in advance!
How informative should a Gaussian Process prior be?
CC BY-SA 4.0
null
2023-03-02T23:08:40.673
2023-03-02T23:09:21.663
2023-03-02T23:09:21.663
382264
382264
[ "machine-learning", "bayesian", "gaussian-process", "prior", "posterior" ]
608226
1
null
null
4
145
During my statistics course, we studied bootstrap and its operational principles. We explored the potential application of bootstrap in cases where we have limited knowledge about the distribution. Specifically, we utilized this technique to approximate certain statistics such as the variance of the mean. However, I encountered some difficulties in comprehending the process of constructing confidence intervals (CI) through the utilization of bootstrap. My professor mentioned three methods, and I would greatly appreciate some guidance in comprehending each approach: 1 - Normal approximation: if I'm not mistaken, this works only when our statistic is normally distributed (asymptotically at least). If so, we can somehow use bootstrap to approximate the standard error $se$. Perhaps this works like any classical bootstrap where we generate many subsets of the dataset and just calculate the statistic for each one, like the variance in this case, and then find the $se$. I just didn't get why it would only work when the statistic is normally distributed? 2 - Pivot (pivotal intervals): This one confuses me the most. It has been defined like this: "A function $Q(X_1,...X_n;\theta)$ is a pivot if the distribution of Q does not depend on $\theta$". However, I am uncertain as to how this method helps in determining the CI, and whether it implies that the statistic itself is a pivot. 3 - Percentile intervals: as I understand it, this involves generating multiple subsets of the data with $n$ samples in each one, from the original data distribution (or the empirical data distribution if the original is unknown). For each subset, we compute the statistic of interest. Next, we sort the computed statistics in ascending order, and the lower (left) bound of the confidence interval (CI) is determined by the $\alpha/2$ percentile, such as the 0.025 percentile, while the upper (right) bound is defined by the $1-\alpha/2$ percentile, such as the 0.975 percentile. Although I have not fully understood the proof behind this approach, I believe that sorting the statistics generated using the bootstrap is the fundamental step in constructing the percentile intervals. 3.5 - Parametric: This is not a new approach. The parametric approach involves generating subsets of data from a known underlying distribution. We can generate the data subsets from this distribution. If it's a Normal distribution, for example, we can calculate the mean for each subset. And then, what do we do? If we only want the mean of the normal distribution we take the mean of the means? It's crucial that I fully understand those three approaches and how to use them. The pivotal intervals are, I think, the approach causing me the most trouble. It's not very intuitive, unlike the percentile approach which makes a lot of sense to me. The normal approximation is also weird to me; I would appreciate any help I can get. If you could also provide some examples of the usage of those approaches, that would be a very big help.
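To make point 3 (and the standard-error part of point 1) concrete, here is a short R sketch for the sample median of some made-up data; it only illustrates the mechanics, not the theory behind each interval.

```r
set.seed(1)
x <- rexp(50)                                              # the original sample
B <- 5000
stat <- replicate(B, median(sample(x, replace = TRUE)))    # bootstrap replicates of the statistic

quantile(stat, c(0.025, 0.975))                 # percentile 95% interval (point 3)
median(x) + c(-1, 1) * 1.96 * sd(stat)          # normal-approximation interval (point 1),
                                                # using sd(stat) as the bootstrap standard error
```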
Understanding different approaches to construct confidence intervals with bootstrap
CC BY-SA 4.0
null
2023-03-02T23:28:41.777
2023-03-11T15:14:39.293
2023-03-07T20:32:24.823
357522
357522
[ "mathematical-statistics", "confidence-interval", "sampling", "bootstrap" ]
608228
1
null
null
2
40
If I have three variables: - $x$ - $y$ - $z$ ...how would I go about calculating an "adjusted $z$" measure that has the variation in $z$ that is explained by $x$ and $y$ removed?
Removing variance in $y$ explained by $x$
CC BY-SA 4.0
null
2023-03-02T23:41:18.417
2023-03-03T04:01:03.237
2023-03-03T04:01:03.237
345611
382266
[ "regression", "multiple-regression", "residuals", "predictor" ]
608229
2
null
605952
3
null
There are 2 cases: - If you do not know the type of probability density distribution of the errors (i.e., the orange density plot), as you say, then I'd recommend using a robust fitting procedure, such as Theil-Sen or RANSAC, rather than ordinary Least Squares (OLS). A robust estimator will reduce the chance that your estimation will go astray due to the unknown and skewed tails of the errors. - However, if you do know the type of probability density distribution of the errors (i.e., the orange density plot), then theoretically you can do much better by using Maximum Likelihood. Assume the errors are distributed according to a known probability density $f(\epsilon; \theta)$, where $\epsilon$ are the error values and $\theta$ are a set of parameters of the family of the distribution $f$. What you need to do is mathematically express the errors and parameters as a linear regression model (as implied by the scatter-plot), and estimate all unknown parameters using Maximum Likelihood. To clarify this, see the example below. Example for clarifying the 2nd case: Let's assume the known type of probability density distribution of the errors is an exponential distribution, which is obviously skewed in one direction, so: $$f(\epsilon; \theta) = \theta \exp({-\theta \cdot \epsilon}) \cdot U(\epsilon)$$ where $U(\epsilon)$ is the Heaviside step function, which is zero for negative $\epsilon$ and 1 otherwise. But we need to somehow incorporate the linear regression model implied by the scatter-plot, so for this example, I've chosen to assume: $y = a \cdot x + b + \epsilon$, where $(x, y)$ are the horizontal and vertical coordinates of a specific data-point. The error term $\epsilon$ between the $y$-axis of a data-point and its linear model is then: $\epsilon = y - (a \cdot x + b) = y - ax - b$. Plugging this into the probability density distribution of the error for a single data-point at $(x, y)$, we have the likelihood: $$f(x, y; \theta, a, b) = \theta \exp({-\theta \cdot (y - ax - b)}) \cdot U(y - ax - b)$$ $$= \theta \exp({-\theta y + \theta ax + \theta b}) \cdot U(y - ax - b)$$ where all unknown parameters which we'll need to estimate are: $\theta, a, b$. Let's further assume the errors $\epsilon_n$ are statistically independent for each point $(x_n, y_n)$ of the $N$ data-points, so the joint probability density of the complete dataset $(\mathbf{x}, \mathbf{y})$ of $N$ data-points is: $$f(\mathbf{x}, \mathbf{y}; \theta, a, b) = \prod_{n=1..N}{f(x_n, y_n; \theta, a, b)}$$ In order to proceed, it would be extremely convenient to take the (natural) logarithm, so we'll have a log-likelihood function. The log-likelihood function is differentiated with respect to the unknown parameters $\theta, a, b$ in order to maximize it for the given data-points. An iterative numerical procedure is necessary for maximization in this specific example; perhaps it will also be necessary to run it from different initializations in order to guarantee that the discovered maximum is the global one.
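To make the last step concrete, here is one way (my own sketch) to carry out the maximisation for the exponential-error example. For a fixed slope $a$ the log-likelihood is increasing in $b$, so the constrained optimum is $b(a) = \min_i(y_i - a x_i)$, and then $\theta(a)$ is the reciprocal of the mean residual; profiling over $a$ reduces the problem to one dimension. Boundary MLEs like this have non-standard behaviour, so treat it as an illustration only.

```r
set.seed(1)
n <- 200
a_true <- 1.5; b_true <- 2; theta_true <- 4
x <- runif(n, 0, 10)
y <- a_true * x + b_true + rexp(n, rate = theta_true)   # one-sided (exponential) errors

prof_negloglik <- function(a) {
  b     <- min(y - a * x)              # largest intercept satisfying all constraints
  res   <- y - a * x - b               # non-negative residuals
  theta <- 1 / mean(res)               # MLE of theta given (a, b)
  -(n * log(theta) - theta * sum(res)) # negative profile log-likelihood
}

a_hat     <- optimize(prof_negloglik, interval = c(0, 5))$minimum
b_hat     <- min(y - a_hat * x)
theta_hat <- 1 / mean(y - a_hat * x - b_hat)
c(a = a_hat, b = b_hat, theta = theta_hat)   # should land near the true (1.5, 2, 4)
```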
null
CC BY-SA 4.0
null
2023-03-02T23:54:00.587
2023-03-04T10:38:24.160
2023-03-04T10:38:24.160
366449
366449
null
608230
1
608245
null
0
51
Suppose $(X_n)$ are iid with mean $\mu$ and variance $\sigma^2$. Then by the CLT $\sqrt{n}(\bar{X}_n-\mu) \overset{D}{\rightarrow} N(0,\sigma^2)$, and using the delta method we can get the asymptotic distribution for $\bar{X}_n^k$, that is, $\sqrt{n}(\bar{X}_n^k-\mu^k) \overset{D}{\rightarrow} k\mu^{k-1}N(0,\sigma^2)$, where $k \in \mathbb{N}$, assuming $\mu \neq 0$. My question is how to obtain the asymptotic distribution for $\bar{X}_n^k$ when $\mu=0$? Do I need to use the "k-th order" delta method, i.e. a k-th order Taylor approximation?
Asymptotic distribution of $\bar{X}_n^k$
CC BY-SA 4.0
null
2023-03-02T23:59:35.477
2023-03-03T05:23:42.773
null
null
344717
[ "distributions", "self-study", "estimation", "asymptotics" ]
608231
1
null
null
1
44
I have a sample of stroke survivors, I'm trying to determine if there is a difference in age group (<59, 59, 60, >60) as to whether they are employed or not (yes, no). I'm unsure which hypothesis test I need, I'm considering Kruskal-Wallis, Pearson's Chi-squared, and Chi-squared test for trend. I believe the data fills assumptions for all 3. What do you think? Thanks in advance, Warm regards, Harry
Which hypothesis test should I use in this scenario?
CC BY-SA 4.0
null
2023-03-03T00:08:13.753
2023-03-04T21:13:22.483
null
null
380069
[ "hypothesis-testing", "mathematical-statistics", "statistical-significance" ]
608232
1
null
null
0
14
I have a theoretical question regarding forecasting: let's say I have 100 different time series data sets (each set of data is unique but within a single domain, say historical sales of 100 different types of cell phones), each containing 1000 time series observations. For the sake of this argument, let's say they are all univariate forecasts, and all data is intact. I build a distinct forecast model for each dataset (so, 100 ARIMA models, let's say), and then compute the MAPE on a same-size hold-out sample for each model. Now someone asks, ok - let's see how good your models are, and wants to compute an "average" MAPE across all 100 models and a confidence interval on that mean MAPE (n = 100). I argue: averaging MAPE over models is invalid and nonsensical. MAPE is not a "measurement", it has no distribution, so what does an "average" MAPE over all these models even mean? (nothing) Can someone here argue that this is indeed a valid thing to do? Thank you!
Aggregating MAPE
CC BY-SA 4.0
null
2023-03-03T00:22:39.257
2023-03-03T00:22:39.257
null
null
242764
[ "machine-learning", "mathematical-statistics", "forecasting" ]
608233
2
null
608228
2
null
It depends on what exactly you mean by the variance of $Z$ after you account for the variance explained by $X$ and $Y$. However, a pretty reasonable interpretation of this relates to the $R^2$ of a linear regression. Use your $X$ and $Y$ to predict $Z$, as in the following model. $$ \mathbb E[Z_i] = \beta_0 + \beta_1X_i + \beta_2 Y_i $$ Then use ordinary least squares to estimate the $\beta$ coefficients. Next, calculate the $R^2$ of this model. Under these conditions with a linear model estimated with ordinary least squares, $R^2$ is interpreted as the proportion of variance of $Z$ that is explained by the regression, so by your predictor variables $X$ and $Y$. This is explained in a standard regression book like Agresti (2015). Consequently, $\left(1-R^2\right)$ is the proportion of variance in $Z$ that is not explained by the regression. You have $Z$, so you can calculate $\text{var}(Z)$. Now multiply $\left(1-R^2\right)\text{var}(Z)$ to get the variance of $Z$ that remains unexplained. This can be performed in software. I get that the unexplained variance of $Z$ is $0.9792228$. ``` set.seed(2023) N <- 100 # Sample size x <- rnorm(N) # Variable X y <- rnorm(N) # Variable Y z <- x - y + rnorm(N) # Variable Z L <- lm(z ~ x + y) # Fit the linear model, estimated via # ordinary least squares r2 <- summary(L)$r.squared # Calculate R-squared of the regression var(z) * (1 - r2) # Variance of Z that is not explained by X and Y ``` However, this is equal to the variance of the residuals, calculated via `var(resid(L))`. This is because $R^2$ involves the variance of the residuals (estimated variance of $Z$, conditional on $X$ and $Y$) divided by the unconditional (marginal or pooled) variance of $Z$. $$ R^2 = 1-\dfrac{ \text{var}(Z\vert X, Y) }{ \text{var}(Z) } $$ Since the fraction has the same units in the numerator and denominator, the units cancel and give a unitless $R^2$. The math of what I described and simulated in my code is: $$ (1-R^2)\times(\text{var}(Z))=\\ \left[ 1 -\left( 1-\dfrac{ \text{var}(Z\vert X, Y) }{ \text{var}(Z) } \right) \right]\times(\text{var}(Z))=\\ \dfrac{ \text{var}(Z\vert X, Y) }{ \text{var}(Z) }\times(\text{var}(Z))=\\ \text{var}(Z\vert X, Y) $$ REFERENCE Agresti, Alan. Foundations of linear and generalized linear models. John Wiley & Sons, 2015. EDIT Some issues to consider: - What if the true relationship between $Z$ and $X$ or $Y$ (or both) is not linear, say $Z = X^2 + Y$, but you only fit the regression to $X$ and $Y?$ You are getting at the variance of $Z$ that is or is not explained by $X$ and $Y$ as they enter the regression, yes, but it could be argued (I think strongly) that you aren't really accounting for the entire way in which $X$ explains the variance of $Z$. Dealing with this sort of situation is what leads to regression strategies like spline basis functions. - If you want to draw inferences to something greater than your observed $Z$, such as a population, there's more to the story, even if fiddling with $R^2$ and the residual variance is the beginning of that story.
null
CC BY-SA 4.0
null
2023-03-03T00:35:29.597
2023-03-03T00:52:11.817
2023-03-03T00:52:11.817
247274
247274
null
608236
1
null
null
0
16
I'm trying to build a graph-based Variational-Autoencoder, which should be able to generate graph structures (adjacency matrices). So far, all the papers and models I've seen use a fixed latent vector size to encode the input graphs, resulting in graphs where only the edge positions and edge numbers are different. I followed the approach used in this paper: [https://arxiv.org/pdf/2010.04408.pdf](https://arxiv.org/pdf/2010.04408.pdf). I wondered how to approach graph generation with altered node numbers and edges. A solution I could think of is an additional distribution for the size of the generated adjacency matrix. Is this a feasible approach? I'm pretty new to VAE and would appreciate any help.
Graph based variational Autoencoder with variable latent size
CC BY-SA 4.0
null
2023-03-03T01:15:18.733
2023-03-03T01:15:18.733
null
null
380166
[ "machine-learning", "autoencoders", "graphical-model", "variational-bayes", "variational-inference" ]
608238
1
null
null
0
16
I've heard that ensemble methods have been used to mixed effect on time series models. Their biggest issue is that they cannot predict something they haven't seen before, so they struggle with trends, whereas they're a bit better equipped for seasonality. My question is, is there any research in predicting the slope between adjacent points? For example, $X = [5, 7, 6, 8, 7, 9]$ has $N$ elements. But the slopes $S$ would have $N-1$ elements, one for each adjacent pair: $[\frac{7-5}{t}, \frac{6-7}{t}, ...]$ where $t$ is the interval between observations (presumably 1). If an ensemble method, for example XGBoost, was supposed to not predict $P(X_t|X_{t-1...t-n})$ but the slopes, $P(S_{(t,t-1)}|S_{(t-1, t-2), (t-2, t-3) ...})$, then the issue of what the model has or hasn't seen before would theoretically be mitigated. It would require $X_t$ and $t$ to determine how to augment the most recent $X_{t-1}$ by the predicted slope. The biggest drawback is that the loss function will be evaluated in terms of predicted slopes, not the actual prediction of $X_t$. I'm curious if this has been asked or researched before?
Tree based time series forecasts, inference on slopes?
CC BY-SA 4.0
null
2023-03-03T01:42:34.230
2023-03-03T01:42:34.230
null
null
288172
[ "time-series", "boosting" ]
608240
1
null
null
1
60
Consider a model of the probability of a binary, yes/no-type of event. The event is infrequent, say it happens only once every thousand times. In that regard, the prior probability of the event is $0.001$. Consequently, if a predictive model like a logistic regression predicts a probability of $0.05$ given a certain situation (the model features), while there is still only a $5\%$ chance of the event happening, the event is $50$-times more likely to occur than usual. $$ \dfrac{0.05}{0.001} $$ If that event is something catastrophic, I would want to know if the chance of it happening is $50$ times higher than usual, even if the event remains unlikely ($5\%$). What drawbacks might there be to looking at predicted probability in this way? My reservation is that I don't want to get hung up on something like, "The chance of it happening is up from ultra-super-duper-unlikely to ultra-unlikely," something like a change in probability from a prior of $0.000001$ to $0.0001$. At the same time, a $100$-fold increase in event probability seems like a big deal, even if the event remains unlikely.
Binary probability models: considering event probability above the prior probability
CC BY-SA 4.0
null
2023-03-03T03:37:38.890
2023-04-06T14:30:03.570
null
null
247274
[ "regression", "machine-learning", "probability", "classification", "unbalanced-classes" ]
608241
2
null
67721
1
null
Glivenko–Cantelli theorem [(Wikipedia Link)](https://en.wikipedia.org/wiki/Glivenko%E2%80%93Cantelli_theorem) This theorem says, loosely speaking, that as the number of samples grows, the empirical distribution tends toward ("converges") the true distribution. In that sense, if there truly is a nonzero probability of an event happening, enough observations should lead to you seeing it happen, since your empirical CDF has to tend toward the true CDF that gives such an event positive, even if small, probability.
null
CC BY-SA 4.0
null
2023-03-03T03:43:47.927
2023-03-03T03:43:47.927
null
null
247274
null
608242
1
608247
null
0
29
I know that my 95% confidence interval can be calculated for a proportion using: $$ 1.96 \times \sqrt{p(\frac{1-p}{n})} $$ where $p$ is the proportion and $n$ is the number of trials. But if my data is collected in a series of datasets (say, annual data collections), does this change my $n$? For example, if my data is framed: |Year |TRUE |n | |----|----|-| |2010 |14 |25 | |2011 |17 |25 | |2012 |15 |25 | |2013 |11 |25 | |2014 |15 |25 | Do I calculate a total $p=\frac{14+17+15+11+15}{25\times5}=0.576$ and use: - $n=5$ because data was collected in five seperate experiments? $$ 1.96 \times \sqrt{0.576(\frac{1-0.576}{5})} $$ - $n=125$ because data was collected on 125 events? $$ 1.96 \times \sqrt{0.576(\frac{1-0.576}{125})} $$ Furthermore, say there was an extra variable, $x$, for which I wanted to calculate a seperate proportion for ($\frac{\text{TRUE}}{x}$). Say $x$ represents the total number of job openings available and $n$ is the total applications, so $\frac{\text{TRUE}}{n}$ would be the job acceptance rate and $\frac{\text{TRUE}}{x}$ would be the positions filled rate: |Year |TRUE |n |x | |----|----|-|-| |2010 |14 |25 |20 | |2011 |17 |25 |20 | |2012 |15 |25 |20 | |2013 |11 |25 |20 | |2014 |15 |25 |20 | Would my $n$ for calculating my confidence interval for $\frac{\text{TRUE}}{x}$ still use the total from the $n$ column (either $5$ or $125$) or would it be the total from the $x$ column (either $5$ or $100$)?
Correct n to use in calculating confidence interval of a proportion
CC BY-SA 4.0
null
2023-03-03T04:47:05.827
2023-03-03T05:44:34.990
2023-03-03T05:18:17.547
195845
195845
[ "confidence-interval" ]
608243
1
null
null
0
47
I have a control group and treatment group and the treatment is introduced in the second phase after time point `t`. But the treatment can be of different types at different time points after time point `t`. Let's suppose we have 3 different types of such treatment. Here are some examples of the data: - Subject a in the treatment group may receive type 1 treatment at time t+1, and type 3 treatment at time t+2, so on and so forth. - Subject b in the treatment group may receive type 1 and type 2 treatment at time t+1, and type 2 at time t+2, so on and so forth. - Subject c in the treatment group may receive type 1 and type 3 treatment at time t+1, and type 2 and type 3 at time t+2, so on and so forth. - Any subject in the control group will never receive any treatment. In the dataset, I have variables - Treatment_Group: a dummy variable representing whether the subject is in the control or treatment group. - After: a dummy variable representing whether the subject is in the phase before or after the treatment is introduced. - Treatment_Type: a categorical variable representing different types of the treatment. If I do not want to distinguish different types of treatment, I understand how to run a DID model with `Treatment_Group` and `After` variables. What if I want to further evaluate the effect of different types of treatment in this context? How shall I integrate `Treatment_Type` into the DID model? Or would any other model be more appropriate to identify the effect?
Difference-in-difference: how to integrate different types of treatment at different time points
CC BY-SA 4.0
null
2023-03-03T04:51:46.470
2023-03-03T04:51:46.470
null
null
79616
[ "econometrics", "fixed-effects-model", "difference-in-difference" ]
608245
2
null
608230
2
null
In general, this situation (i.e. $g'$ vanishing at the limit of $\bar{X}_n$) is what the higher-order delta method is for. However, in this case, $g(x) = x^k$ is sufficiently simple that we can compute the limiting distribution directly. By the CLT, $\sqrt{n} \bar{X}_n \stackrel{\mathrm{d}}{\rightarrow} \mathcal{N}(0, \sigma^2)$, from which we find the limiting distribution $$ n^{k/2} \bar{X}_n^k = (\sqrt{n} \bar{X}_n)^k \stackrel{\mathrm{d}}{\rightarrow} \sigma^k Z^k, $$ where $Z \sim \mathcal{N}(0, 1)$. As an exercise, you can verify that the $k$th order delta method performs this exact calculation, but with some superfluous extra steps.
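A quick Monte Carlo check of this limit (with arbitrary choices $k = 3$, $\sigma = 2$, $\mu = 0$): the simulated quantiles of $n^{k/2}\bar{X}_n^k$ should roughly agree with those of $\sigma^k Z^k$.

```r
set.seed(1)
k <- 3; sigma <- 2; n <- 2000; reps <- 5000
stat <- replicate(reps, (sqrt(n) * mean(rnorm(n, 0, sigma)))^k)  # n^{k/2} * xbar^k
ref  <- (sigma * rnorm(reps))^k                                  # sigma^k * Z^k
round(rbind(statistic = quantile(stat, c(.1, .25, .5, .75, .9)),
            reference = quantile(ref,  c(.1, .25, .5, .75, .9))), 2)
```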
null
CC BY-SA 4.0
null
2023-03-03T05:23:42.773
2023-03-03T05:23:42.773
null
null
335519
null
608247
2
null
608242
2
null
The model for this experiment is $$ X_i \sim \mathrm{Bernoulli}(p) $$ where the $X_i$ are the total outcomes of all of the trials. If you had access to the whole data (i.e., the outcomes of all 125 trials), your estimate of $p$ would be $$ \hat{p} = \frac{1}{125} \sum_{i=1}^{125} X_i. $$ Fortunately, you can compute this estimate from the data you do have, which is $$ \frac{1}{125} \Bigl( \sum_{i=1}^{25} X_i + \sum_{i=26}^{50} X_i + \dotsc + \sum_{i=101}^{125} X_i \Bigr), $$ where each of these sums are the total yearly outcomes that you have access to. Now, by the central limit theorem, $\hat{p}$ has an approximate distribution of $$ \hat{p} \stackrel{\mathrm{d}}{\approx}N\Bigl(p, \frac{p(1-p)}{125}\Bigr), $$ from which we have the standard error estimate $$ \mathrm{se}(\hat{p}) = \sqrt{\frac{\hat{p}(1-\hat{p})}{125}}, $$ and so the confidence interval $$ \hat{p} \pm 1.96 \cdot \sqrt{\frac{\hat{p}(1-\hat{p})}{125}}. $$ The point here is that $n$ is the total number of independent trials, of which you have $125$. If your data were the outcomes of the $125$ trials, you probably wouldn't be confused, and would confidently use $125$ as your $n$ value. Well, it turns out that, even though you don't have access to the whole data, you still have all of the information that you need to compute this same $\hat{p}$, so the situation is the same!
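Plugging in the counts from the question ($14+17+15+11+15 = 72$ successes in $125$ trials), the interval works out to roughly $0.576 \pm 0.087$:

```r
p_hat <- (14 + 17 + 15 + 11 + 15) / 125      # 0.576
se    <- sqrt(p_hat * (1 - p_hat) / 125)     # about 0.044
p_hat + c(-1, 1) * 1.96 * se                 # roughly (0.489, 0.663)
```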
null
CC BY-SA 4.0
null
2023-03-03T05:44:34.990
2023-03-03T05:44:34.990
null
null
335519
null
608248
1
null
null
2
71
I have data which can be classified into two groups. As you see, Figure(a) shows that they are easily classified into group A and B. However, sometimes they overlap and it is impossible to set a line between group A and B. Figure(b) shows the two groups overlapping. Fortunately, I have group A's distribution on its own, and Figure(c) shows it. [](https://i.stack.imgur.com/KBKRE.png) And Figure(d) shows group A's distribution along the x and y axes. Even though groups A and B overlap, I think there has to be a way to know the distribution of group B, because we know group A's distribution and it does not change much. Would you please tell me how to solve this problem? I don't expect an exact solution; just give me an idea or a proper mathematical approach or something I should study. I hope the many clever people here can help me. Thank you :) [](https://i.stack.imgur.com/8w5f5.png)
When two distributions overlap, how can I separate one distribution from the mixture distribution if the other distribution is known?
CC BY-SA 4.0
null
2023-03-03T05:54:27.603
2023-03-03T09:04:11.463
2023-03-03T09:04:11.463
53690
294144
[ "machine-learning", "clustering", "mixture-distribution", "overlapping-data", "source-separation" ]
608249
1
null
null
0
34
I want to compare three versions of the same count response variable using Poisson regression. Can I make comparisons based on how significant the predictor coefficient is, and can I put the results in the same plot? The significance of the coefficients in each model suggest (or so I'd like to think) that grass richness and not forb richness can be predicted based on the canopy cover. Is there a better approach? ``` y1 = c(14, 20, 25, 32, 7, 12) # all plant richness (total number of species) y2 = c(11, 18, 17, 25, 2, 3) # grass species richness y3 = c( 3, 2, 8, 7, 5, 9) # forb richness # note that y2 + y3 is equal to the sum of y1 for each record canopy = c(30, 42, 60, 75, 20, 25) # canopy cover (%) df<-data.frame(richness=c(y1, y2, y3), canopy=rep(canopy, 3), type=c(rep('all plants', 6), rep('grasses', 6), rep('forbs',6))) m1<-glm(y1 ~ canopy, family = 'poisson') m2<-glm(y2 ~ canopy, family = 'poisson') m3<-glm(y3 ~ canopy, family = 'poisson') summary(m1) summary(m2) summary(m3) library(ggplot2) ggplot(df, aes(x= canopy, y=richness, color=type, linetype = type)) + geom_point(aes(shape = type)) + scale_shape_manual(values = c(1,2,0)) + geom_smooth(method = "glm", method.args = list(family = "poisson"), size=.5) + xlab("canopy cover %") + ylab("richness") + theme_classic() + theme(legend.title=element_blank()) library(stargazer) stargazer(m1,m2,m3, type = 'text') ``` [](https://i.stack.imgur.com/GTUk8.png) ``` =============================================== Dependent variable: ----------------------------- y1 y2 y3 (m1) (m2) (m3) ----------------------------------------------- canopy 0.021*** 0.029*** 0.006 (0.005) (0.006) (0.008) =============================================== Note: *p<0.1; **p<0.05; ***p<0.01 ```
Compare full set and subsets of response variable in poisson regression?
CC BY-SA 4.0
null
2023-03-03T06:13:08.970
2023-03-07T01:27:10.130
null
null
382277
[ "modeling", "regression-coefficients", "poisson-regression" ]
608250
1
null
null
0
9
Given some set of points sampled from the distribution $X \sim N(\mu_1, \sigma^2_1) + N(\mu_2, \sigma^2_2)$, how would one find the value of $\sigma_1$? Is this easier if we assume $\mu_1 = \mu_2$? I'm not sure how to approach this problem. My approach was to approximate the inflection point of the sum of the two distributions and approximate each $\sigma$ from this, but this seems incorrect/inelegant. Is there a better way to solve this problem?
Find each $\sigma$ given points sampled from sum of two normal distributions
CC BY-SA 4.0
null
2023-03-03T06:26:36.247
2023-03-03T06:27:02.603
2023-03-03T06:27:02.603
382283
382283
[ "distributions", "normal-distribution" ]
608251
1
null
null
0
45
Likelihood function: $$L(\zeta) = \zeta^{\alpha} \exp\left[\zeta\sum_{i=1}^{\alpha}(x_i - x)\right]$$ Prior function: $$p(\zeta) = \frac{1}{\sqrt{2\pi}\,\sigma_\zeta} \exp\left[-\frac{(\zeta-\zeta_0)^2}{2\sigma_\zeta^2}\right]$$ I am trying to find the log posterior of $\zeta$. By Bayes' theorem: $$\log(\text{posterior}) \stackrel{\log}{\propto} \log(L(\zeta))+\log(p(\zeta))$$ Solving this I get: $$\log(\text{posterior}) \stackrel{\log}{\propto}\log[\zeta^{\alpha}] +\zeta\sum_{i=1}^{\alpha}(x_i - x)-\frac{(\zeta-\zeta_0)^2}{2\sigma_{\zeta}^2}-\log(\sigma_{\zeta})$$ but I do not know how to proceed. Edit: Here - $x_{i} \leq x$ where $x$ is a limiting constant - We have $x_1,\dots,x_{\alpha}$, an independently and identically distributed random sample.
Posterior of exponential likelihood and gaussian prior
CC BY-SA 4.0
null
2023-03-03T06:32:57.700
2023-03-03T16:10:28.660
2023-03-03T07:53:53.503
7224
382265
[ "bayesian", "posterior" ]
608252
1
null
null
0
51
Holt-Winters forecasting equations for quarterly observations are $\alpha_t = \alpha.\frac{y_t}{s_{t-4}} + (1-\alpha) . (\alpha_{t-1} + g_{t-1} )$ $g_t=\gamma.(\alpha_{t}-\alpha_{t-1})+(1-\gamma).g_{t-1}$ $s_t = \delta.\frac{y_t}{\alpha_{t}} + (1-\delta) . s_{t-4}$ The prediction formula for k period ahead at period T is $\hat{y_{T+k}}=(\alpha_T+k.g{T}).s_{T-4+k}$ Suppose $\alpha=\gamma=\delta=0.2$, $\alpha_4=2$, $g_4=2$, and $S_1=0.8, s_2=1, s_3=1, s_4=1.2$ Forecast $y_5$ to $y_8$ at $T=4$. Also assuming $y_5= 10$, find $a_5$, $g_5$ and $s_5$. Answer: Calculating $\alpha_5$: $\alpha_5 = \alpha y_5s_1 + (1-\alpha)(\alpha_4+g_4) = 0.2100.8 + 0.8(2+2) = 2.24$ Calculating $g_5$: $g_5 = \gamma*(\alpha_5-\alpha_4) + (1-\gamma)g_4 = 0.2(2.24-2) + 0.8*2 = 2.04$ Calculating $s_5$: $s_5 = \delta y_5\alpha_5 + (1-\delta)s_1 = 0.2102.24 + 0.80.8 = 1.216$ Therefore, $\alpha_5=2.24, g_5=2.04$, and $s_5=1.216$. Now we can use the prediction formula to forecast $y_5$ to $y_8$: $\hat {y_5} = (\alpha_5g_5)s_1 = 4.5936$ $\hat{y_6} = (\alpha_6g_6)s_2 = (0.24.5936+0.82.04)(1.2) = 5.37312$ $\hat{y_7} = (\alpha_7g_7)s_3 = (0.25.37312+0.82.04)(1) = 2.870752$ $\hat{y_8} = (\alpha_8g_8)s_4 = (0.22.870752+0.82.04)*(1.2) = 3.3515392$ I don't know how $T=4$ affects the results. Also, $K$?
Holt-Winters forecasting method
CC BY-SA 4.0
null
2023-03-03T06:47:55.697
2023-03-03T15:14:24.300
2023-03-03T15:14:24.300
94909
94909
[ "time-series", "forecasting", "exponential-smoothing" ]
608253
2
null
608248
1
null
In UV-VIS spectroscopy, a 1-dimensional variant of this is common. Absorbance peaks often overlap. The spectra are “deconvoluted” by peak fitting. You have to be a bit careful about the lingo; many spectroscopists use this term without really knowing why it’s called this way. After all the inherent (routine, tacit) transforms to get to an absorbance spectrum it turns out that in spectroscopy the absorbance peaks are additive. So, it’s a matter of finding models to describe the peaks and then performing a fit routine to find the relative contributions. Something similar may work here. You say you know A so you can put that distribution as constant. You probably have a sense of how to describe B. Then it is a matter of fitting the parameters for B so that the observed data comes out. The 2-dimensionality makes it a bit more difficult, there are some additional things to consider (such as covariance). I have never had to do it that way so I can’t help you there. If you can modify the problem into “2x 1-D”, that would definitely be easier. But you require independence and I can’t tell if your data/problem allows that.
null
CC BY-SA 4.0
null
2023-03-03T06:52:59.217
2023-03-03T06:52:59.217
null
null
356008
null
608254
1
null
null
1
69
I understand the concepts of Bayesian linear regression and regular neural networks separately, but I cannot wrap my head around how to combine the two. In a general setting, let's say I have a (deterministic) regular neural network, and I would like to build a Bayesian linear regression on top of it and train them together. If it were not Bayesian linear regression, we could easily train them with backpropagation. But with the two together, I do not understand how to train the neural network using gradient descent. For my more specific problem setting, I have a set of vectors, and I want to first apply the same linear transformation to each of them, and then find the best linear combination of them to fit a target vector. The linear transformation is a square matrix that maps each vector to another vector of the same dimension. The linear combination part can be formulated as a linear regression problem, and the whole formula is as follows: $\operatorname*{argmin}_{\theta, W} ||\theta^T(XW^T)-y||^2 $ where specifically, $X \in R^{m\times d}, W \in R^{d\times d}, \theta \in R^m, y \in R^d$, with $W$ the transformation matrix and $\theta$ the coefficients of the linear regression. I would like to place a prior on $\theta$ and thus make it Bayesian, but let $W$ remain deterministic. Maybe it is because of my lack of understanding of Bayesian networks, but I have searched the internet and cannot find the material I am looking for. I may be over-complicating things too. Please give me a direction or keywords that point to the solution of my problem. Thank you very much.
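For concreteness, the stated objective can be written out in a few lines of NumPy; this is a sketch of the deterministic objective only, with invented shapes and random placeholder data, and it does not address the Bayesian part of the question.

```python
import numpy as np

m, d = 8, 5
rng = np.random.default_rng(0)
X = rng.normal(size=(m, d))
y = rng.normal(size=d)
W = rng.normal(size=(d, d))      # shared linear transformation
theta = rng.normal(size=m)       # combination weights

def objective(theta, W):
    # || theta^T (X W^T) - y ||^2
    return np.sum((theta @ (X @ W.T) - y) ** 2)

print(objective(theta, W))
```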
Bayesian Linear Regression on the top of deterministic neural network
CC BY-SA 4.0
null
2023-03-03T07:51:46.263
2023-03-03T13:04:27.140
2023-03-03T09:03:04.097
1390
332146
[ "regression", "neural-networks", "bayesian", "optimization", "bayesian-network" ]
608255
2
null
606863
0
null
This is how the continuous bag of words is defined. It is not a language model but a model for training word embeddings. Continuous means having a (small) sliding window over the training corpus. The bag of words part means that the model disregards the order of the words in the context window. Neither of these features is particularly good for language modeling where the order of words matters, and the longer the context, the better. One way to view Word2Vec training objectives is that you know that language modeling provides a good training signal for word embeddings. However, it is too computationally expensive when you only care about the embeddings. You want a solution that scales for hundreds of thousands of word forms when computing on the CPU. The solution is to simplify the language modeling objective as much as possible while still getting a reasonable training signal for the embeddings, which leads to CBOW and Skip-gram.
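If it helps to see the switch between the two objectives in practice, here is a tiny sketch using gensim, assuming gensim version 4 or later; the toy corpus is made up.

```python
from gensim.models import Word2Vec

toy_corpus = [["the", "cat", "sat", "on", "the", "mat"],
              ["dogs", "chase", "cats"]]

# sg=0 -> CBOW (order-agnostic context window), sg=1 -> skip-gram
cbow = Word2Vec(sentences=toy_corpus, vector_size=16, window=2, min_count=1, sg=0)
skipgram = Word2Vec(sentences=toy_corpus, vector_size=16, window=2, min_count=1, sg=1)

print(cbow.wv["cat"][:4], skipgram.wv["cat"][:4])
```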
null
CC BY-SA 4.0
null
2023-03-03T08:04:52.070
2023-03-03T08:04:52.070
null
null
249611
null
608256
2
null
608109
1
null
### A derivation of $\mathbf{E[X_0X_k] = 1/4}$ The solutions of this special case of the logistic map can also be written parameterized by $\theta$ $$x_n(\theta) = \frac{1}{2} - \frac{1}{2} \cos(2^n\pi \theta )$$ and the case $X_0 \sim Beta(\frac{1}{2},\frac{1}{2})$ corresponds to $\theta \sim U(0,1)$ (the usual transformation between the uniform distribution and the arcsine distribution). Below are some graphs of this transformation from $\theta$ to $x_n$ for different $n$ [](https://i.stack.imgur.com/cHYzE.png) The expectation of $x_kx_n$ can be written as the integral $$\begin{array}{} E[x_kx_n] &=& \int_{0}^1 x_k(\theta) x_n(\theta)f(\theta) \,\text{d}\theta\\& = &\int_{0}^1 \left( \frac{1}{2} - \frac{1}{2} \cos(2^k\pi \theta ) \right) \left( \frac{1}{2} - \frac{1}{2} \cos(2^n\pi \theta ) \right) \text{d}\theta \\ &=& \begin{cases} \frac{1}{4} & \quad \text{if $k\neq n$} \\ \frac{3}{8} & \quad \text{if $k = n$} \end{cases} \end{array}$$ where the last line follows because cosines with different frequencies are orthogonal on $[0,1]$, so the cross term integrates to zero. ### Other distributions than $\mathbf{X_0 \sim Beta[1/2,1/2]}$ Even if we start with a distribution for $\theta$ other than $U(0,1)$, as long as it is a continuous distribution we eventually approach the same result. We can cut the domain of $\theta$ into $2^n$ evenly spaced intervals, and the image of each interval will eventually be arcsine distributed. This is because the image under $x_n$ of an interval $\theta \in (k \frac{1}{2^n}, (k+1) \frac{1}{2^n})$ is the same as the image under $x_0$ of $\theta \in (0,1)$. Then, by letting $n \to \infty$, the domain of any continuous distribution can eventually be split up into intervals that all map to the arcsine distribution, apart from a part whose probability approaches zero. ### What if $\mathbf{x_0 = 1/3}$ I don't think that the above result is strong enough to say anything about specific cases. We already have, for the set $x_0 = q^2$ with $q$ a rational number between zero and one, that the $x_n$ will show cycling behaviour. In addition, we may have $\rho(m) \neq 0$ for other particular numbers $x_0$. The above result only tells us that $E[\rho(m)] = 0$ when we average over all starting values. We could have a set of numbers, with non-zero density, for which $\rho(m) \neq 0$, as long as the negative and positive cases cancel out in an average over the distribution.
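A quick Monte Carlo check of the $1/4$ and $3/8$ values; this sketch assumes the map in question is the $r=4$ logistic map $x_{n+1}=4x_n(1-x_n)$, which has the closed form quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws, k = 200_000, 5

x0 = rng.beta(0.5, 0.5, n_draws)   # arcsine-distributed starting points
x = x0.copy()
for _ in range(k):
    x = 4.0 * x * (1.0 - x)        # iterate the r = 4 logistic map

print(np.mean(x0 * x))   # should be close to 1/4
print(np.mean(x0 * x0))  # E[x_0^2] = 3/8 for the arcsine distribution
```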
null
CC BY-SA 4.0
null
2023-03-03T09:04:00.863
2023-03-03T15:41:44.197
2023-03-03T15:41:44.197
164061
164061
null
608257
1
null
null
0
43
We commonly fit regression models to estimate the conditional expectation function $\mathrm{E}(Y|X=x)$. We fit models by minimizing a loss function (e.g. sum of squared errors) on the training sample. An intuitive motivation for this is that we want a model which minimizes said loss function on the population, but since we only have access to the sample we minimize the loss on that instead. If a given sample $x_s,y_s$ is representative of the population (i.e. if $x_s,y_s \sim F_{X,Y}$, where $F_{X,Y}$ is the joint distribution of the population) then this approach makes sense and there's a lot of theory, e.g. maximum likelihood, to back it up. My question is about the situation where we have an unrepresentative sample $x_u,y_u$, where $x_u$ has not been drawn from $F_X$. Unrepresentative samples can easily arise through non-random sampling techniques, such as those based on availability of data. Obviously, if the marginal population distribution $F_X$ is unknown then the best we can do is fit to the sample and hope for the best, but what about the situation where $F_X$ is known? Intuitively it would seem that some kind of importance sampling could be applied here. Is there a name for this situation, and is there a standard approach for dealing with it? EDIT: I found that this problem is known as "covariate shift" in the machine learning community. Proposed solutions are based on the idea of importance sampling, but the more successful methods avoid explicit density estimation (e.g. [this paper](https://proceedings.neurips.cc/paper/2007/file/be83ab3ecd0db773eb2dc1b0a17836a1-Paper.pdf)).
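To sketch the importance-weighting intuition, here is a toy example in which both the population density and the sampling density are assumed known Gaussians; the choices are invented for illustration, and explicit density ratios like this can be unstable in practice, which is part of why the covariate-shift literature tries to avoid them.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# unrepresentative sample: x drawn from N(2, 1) while the population has x ~ N(0, 2)
x = rng.normal(2, 1, 500)
y = 1.0 + 0.5 * x - 0.05 * x**2 + rng.normal(0, 0.3, 500)

# importance weights w(x) = f_pop(x) / f_sample(x), both densities assumed known here
w = stats.norm.pdf(x, 0, 2) / stats.norm.pdf(x, 2, 1)

# weighted least squares for a linear fit: solve (X' W X) b = X' W y
X = np.column_stack([np.ones_like(x), x])
b = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
print(b)
```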
Regression with an unrepresentative sample
CC BY-SA 4.0
null
2023-03-03T09:10:38.490
2023-03-03T13:13:51.500
2023-03-03T13:13:51.500
211876
211876
[ "regression", "sampling" ]
608258
2
null
423811
1
null
The problem of finding the partitioning that minimizes the trace of the within-group scatter matrix (this is the target criterion that k-means tries to minimize) has been shown to be NP-hard. For a proof, see > Drineas, Frieze, Kannan, Vempala, Vinay: "Clustering Large Graphs via the Singular Value Decomposition." Machine Learning 56, pp. 9-33 (2004) This means that there is no other way than brute force to find the global optimum. The article above, however, also presents an algorithm that finds a solution that is guaranteed to be less than two times the optimum criterion. From a practical point of view, you should follow the suggestions in the comments to your question and use a primitive Monte Carlo algorithm by trying out different starting points for k-means. This is actually what the R function `kmeans` does (see its argument `nstart`).
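The same multi-start idea is exposed in scikit-learn's `KMeans` through `n_init`, which plays a role analogous to R's `nstart`; a sketch with made-up data:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)),
               rng.normal(5, 1, (100, 2)),
               rng.normal(10, 1, (100, 2))])

# run k-means from 50 random initializations and keep the best local optimum found
km = KMeans(n_clusters=3, n_init=50, random_state=0).fit(X)
print(km.inertia_)  # within-cluster sum of squares of the best run
```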
null
CC BY-SA 4.0
null
2023-03-03T09:11:41.937
2023-03-03T09:11:41.937
null
null
244807
null
608260
1
608261
null
3
54
I'm running an ANCOVA. My dependent variable is IQ, and one of my covariates is sex. I know that to perform an ANCOVA you assume there is a linear relationship between your dependent variable and each of your covariates. I'm wondering if I even have to bother with the linearity assumption with respect to the relationship between IQ and sex, given that sex is a categorical variable. So in general my question is whether we ever have to worry about the linearity assumption with respect to the relationship between a dependent variable and a categorical covariate. Thanks!! FBH
Question about linearity assumption in ANCOVA
CC BY-SA 4.0
null
2023-03-03T09:43:13.290
2023-03-03T10:32:48.230
null
null
128883
[ "linear", "ancova" ]
608261
2
null
608260
1
null
The linearity assumption only applies to continuous covariates, so the answer to your question is no. In a linear model, each level of a factor covariate is assumed to have an additive effect on the expected response.
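To see this in a fitted model, here is a sketch with an invented data frame; `C()` marks the covariate as categorical, so each level gets its own additive shift rather than a slope.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "iq": rng.normal(100, 15, 200),
    "age": rng.normal(40, 10, 200),
    "sex": rng.choice(["F", "M"], 200),
})

# linearity is assumed only for the continuous covariate (age);
# sex enters as a dummy-coded shift between its levels
fit = smf.ols("iq ~ age + C(sex)", data=df).fit()
print(fit.params)
```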
null
CC BY-SA 4.0
null
2023-03-03T10:32:48.230
2023-03-03T10:32:48.230
null
null
211876
null
608262
1
null
null
0
13
We have a survey consisting of two waves ($N=4000$). In Wave 1, we ask question block $M_1$. In Wave 2, we ask question blocks $W$, $Y$, $M_2$. The order of questions is as follows: - $W \rightarrow Y \rightarrow M_2$ (half of respondents; treatment $T=0$) - $Y \rightarrow M_2 \rightarrow W$ (half of respondents; treatment $T=1$) So we are looking at the priming effect of question block $W$ on $Y$ and $M_2$. The following is true: - $T$ causally affects $M_{2}$, conditional on $M_1$ - $T$ does not affect $W$ - For those respondents for whom $W>0$, it is true that $T$ affects $Y$, and, moreover, this effect is mediated by the effect of $T$ on $M_2$ Is result 3 legitimate? The problem is that in half of the observations $Y$ is causally prior to $W$. Can additional tests/robustness checks be made to make this a legitimate result?
Can we draw causal conclusions if the mediator follows the dependent variable?
CC BY-SA 4.0
null
2023-03-03T10:42:11.157
2023-03-03T10:42:11.157
null
null
198400
[ "multivariate-analysis", "experiment-design", "random-allocation", "research-design" ]
608263
2
null
558695
0
null
As a base reference you could try to directly merge the three data sets by interlacing them and then analyzing this new data set. First you need to do a linear transformation on each data set individually so they have the same mean and variance (mean=0 and variance=1 is the simplest choice). Then you transform the x-scales of the three data sets by $x \mapsto 3x+1$, $x \mapsto 3x+2$, and $x \mapsto 3x+3$, respectively. If you now look at the union of the three data sets, you have created one new data set merged out of the three individual ones. Next I would apply the moving average again (with a window length of now 6000) and compare the numbers and locations of peaks and troughs with the results on the three individual data sets and see whether this gives anything useful.
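A rough Python sketch of this merging recipe; the series lengths are placeholders and the three inputs stand in for the individual data sets, represented as pandas Series indexed by position.

```python
import numpy as np
import pandas as pd

def standardize(s):
    return (s - s.mean()) / s.std()

# s1, s2, s3 stand in for the three individual data sets (toy data here)
rng = np.random.default_rng(0)
s1, s2, s3 = (pd.Series(rng.normal(size=20_000)) for _ in range(3))

frames = []
for offset, s in enumerate([s1, s2, s3], start=1):
    z = standardize(s)                           # common mean 0, variance 1
    z.index = 3 * np.arange(len(z)) + offset     # x -> 3x + 1, 2, 3 respectively
    frames.append(z)

merged = pd.concat(frames).sort_index()
smoothed = merged.rolling(window=6000, center=True).mean()
print(smoothed.dropna().describe())
```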
null
CC BY-SA 4.0
null
2023-03-03T10:55:27.560
2023-03-03T10:55:27.560
null
null
181468
null
608264
1
null
null
0
67
I have two datasets with values: s1 = [25, 75, 25, 75, 25, 75, 25, 75, 25, 75] and s2 = [46.24, 73.16, 27.13, 74.30, 25, 72.52, 53.15, 75, 40.15, 69.86] Pearson correlation coefficient = 0.90 and Spearman correlation coefficient = 0.90 (calculated using a Python library). How can the correlation coefficient be 0.90? For it to be 0.90, as per my understanding, s1 and s2 should be: s1 = [25, 75, 25, 75, 25, 75, 25, 75, 25, 75] and s2 = [20, 75, 25, 75, 25, 75, 25, 75, 25, 75] i.e. 9 data points should be similar and 1 different. Is my understanding correct here?
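For reference, the two coefficients can be recomputed directly; this sketch just reproduces the calculation with SciPy and does not assert any particular value.

```python
from scipy.stats import pearsonr, spearmanr

s1 = [25, 75, 25, 75, 25, 75, 25, 75, 25, 75]
s2 = [46.24, 73.16, 27.13, 74.30, 25, 72.52, 53.15, 75, 40.15, 69.86]

print(pearsonr(s1, s2))   # Pearson r and its p-value
print(spearmanr(s1, s2))  # Spearman rho and its p-value
```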
Correlation between two datasets
CC BY-SA 4.0
null
2023-03-03T11:08:13.647
2023-03-03T11:41:49.850
2023-03-03T11:41:49.850
362671
381018
[ "correlation", "pearson-r", "spearman-rho" ]
608265
2
null
598692
0
null
Sorry, but gunes made some mistakes that I have to point out. For each output pixel, we actually have 5 x 5 x 192 multiplications. The output size is 28 x 28 x 32 (not 28 x 28 x 192). The thing you should keep in mind is that the number of kernels equals the number of output channels. Hence, the total number of multiplications is (28 x 28 x 32) x (5 x 5 x 192).
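Spelled out, that count is $$(28 \times 28 \times 32)\times(5 \times 5 \times 192) = 25{,}088 \times 4{,}800 = 120{,}422{,}400$$ multiplications.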
null
CC BY-SA 4.0
null
2023-03-03T11:22:56.147
2023-03-03T11:22:56.147
null
null
382296
null
608266
1
null
null
0
23
I want to validate an algorithm with binary outputs, where two of the requirements are that the false positive rate (FPR) and the false negative rate (FNR) should be lower than some threshold levels. The algorithm is within a "black-box" framework so I don't have access to more than the inputs and outputs. With threshold levels $c_1, c_2 \in [0,1]$ in classic hypothesis testing we have \begin{align*} H_{0_1}: FPR \geq c_1 \text{ vs } H_{a_1}: FPR < c_1 \end{align*} and \begin{align*} H_{0_2}: FNR \geq c_2 \text{ vs } H_{a_2}: FNR < c_2 \end{align*} that is, we want to reject each null hypothesis to validate my algorithm. For the true negatives $TN$, we have the distribution \begin{align} TN \sim \text{Bin}(N_{negatives}, 1-FPR) \end{align} where $N_{negatives}$ is the total number of negatively labelled samples. We can reject $H_{0_1}$ if the observed number of false positives is small, that is, if the number of true negatives $TN$ is large. So we calculate the $p$-value as \begin{align} p_1 &= \text{Prob}(\text{Observe these TN or more} | H_{0_1} \text{ true}) \\ &= \text{Prob}(TN_{observed} \leq TN \text{ | } TN \sim \text{ Bin}(N_{negatives},1-c_1)) \end{align} and in a similar way we calculate the $p$-value for the second hypothesis $H_{0_2}$ as \begin{align} p_2 &= \text{Prob}(\text{Observe these TP or more} | H_{0_2} \text{ true}) \\ &= \text{Prob}(TP_{observed} \leq TP \text{ | } TP \sim \text{ Bin}(N_{positives},1-c_2)) \end{align} Is this a correct way of evaluating the hypothesis tests I want to perform? I basically want to set up statistical tests for the $FPR$ and the $FNR$, but I'm unsure if it is done correctly. The family-wise error rate (or the false discovery rate) can be ignored for now. Thanks in advance.
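A small sketch of how those binomial tail probabilities could be computed with SciPy; all counts and thresholds below are placeholders.

```python
from scipy.stats import binom

# placeholder numbers: 1000 negative samples, 970 observed true negatives,
# and a threshold c1 = 0.05 on the false positive rate
n_neg, tn_obs, c1 = 1000, 970, 0.05

# p-value: P(TN >= tn_obs) when TN ~ Bin(n_neg, 1 - c1)
p1 = binom.sf(tn_obs - 1, n_neg, 1 - c1)
print(p1)
```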
Hypothesis model for testing false positive rate (FPR) and false negative rate (FNR)
CC BY-SA 4.0
null
2023-03-03T11:25:07.480
2023-03-08T11:09:20.687
2023-03-08T11:09:20.687
302186
302186
[ "hypothesis-testing", "p-value", "modeling" ]
608267
2
null
608264
2
null
That is not what correlation means. Correlation, either Pearson or Spearman, can be $1$ when all of the values are different. Further, either correlation can take values below zero, so this “proportion of equal values” interpretation cannot apply to such a situation. Pearson correlation is defined as the covariance between the two variables divided by the product of their standard deviations. Spearman correlation first transforms the variables to ranks (this number is the lowest, this next number is third-lowest, this next number is second-lowest, etc) and then applies Pearson correlation to the ranks. Applied to data, this equals: $$ \rho(X,Y)=\dfrac{ \overset{n}{\underset{i=1}{\sum}} \left[ (X_i-\bar X)(Y_i-\bar Y) \right] }{ \sqrt{ \overset{n}{\underset{i=1}{\sum}} (X_i-\bar X)^2 }\sqrt{ \overset{n}{\underset{i=1}{\sum}} (Y_i-\bar Y)^2 } } $$ Take $X$ and $Y$ to be the raw values for Pearson correlation or the ranks for Spearman correlation. An example of perfect Pearson correlation with no equal values is $X=(1,2,3)$, $Y=(11,21,31)$, which also gives perfect Spearman correlation. An example of perfect Spearman correlation, imperfect Pearson correlation, and no equal values is $X=(1,2,3)$, $Y=(11, 12, 15)$.
null
CC BY-SA 4.0
null
2023-03-03T11:29:52.943
2023-03-03T11:29:52.943
null
null
247274
null
608270
1
null
null
0
32
I have a small dataset in which I want to look for anomalies with an isolation forest. In all the papers I have read, I see that sub-sampling is presented as an advantage (though I'm not sure I fully understand why). Since my dataset is small, all data points are used in training the isolation forest. Is this a disadvantage?
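For context, the subsampling size is an explicit argument in scikit-learn's implementation; a sketch with toy data follows, and by default `max_samples='auto'` caps each tree's sample at min(256, n).

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))          # small toy dataset
X[:5] += 6                             # a few obvious outliers

# with max_samples >= n, every tree sees the full (small) dataset
iso = IsolationForest(max_samples=200, random_state=0).fit(X)
print(iso.decision_function(X)[:10])   # lower scores = more anomalous
```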
Isolation forest subsampling
CC BY-SA 4.0
null
2023-03-03T12:22:44.933
2023-03-03T12:22:44.933
null
null
378435
[ "subsampling", "isolation-forest" ]
608272
2
null
606380
0
null
As per the comment by @ttnphns, the Hopkins statistic answered my question. In the genetic algorithm optimization process, minimizing the Hopkins statistic would converge to solutions with a more uniform distribution.
null
CC BY-SA 4.0
null
2023-03-03T12:29:28.107
2023-03-03T12:29:28.107
null
null
380656
null
608273
1
null
null
0
40
I want to compare the antiviral effect of different drug concentrations to the untreated control. The highest concentration of my drug reduces the infectious virus titer from 5.2E6 to 3.3E2. However, very surprisingly, the effect is not significant after performing an ANOVA and a Dunnett's post hoc test. I assume that the large standard deviation of my control (2.5E6, mean: 5.2E6) is skewing the statistics, even though the test for normality was positive. I had the idea of using the log of my data. The statistics are now more sensitive and make sense with the graphical representation. Now my question is whether I can use ANOVA again + Dunnett's post hoc test (tests for normal and lognormal distribution are positive) or whether I have to somehow take the log transformation into account in the result. In the following you find the summary results of the test. [https://i.stack.imgur.com/j2epW.png](https://i.stack.imgur.com/j2epW.png) [https://i.stack.imgur.com/dMBUV.png](https://i.stack.imgur.com/dMBUV.png)
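A minimal sketch of the log-scale analysis; the titer replicates below are placeholder values, SciPy's one-way ANOVA is shown, and the Dunnett step is left to whatever post hoc routine is already being used.

```python
import numpy as np
from scipy.stats import f_oneway

# placeholder infectious-titer replicates for control and two drug concentrations
control = np.array([5.2e6, 2.8e6, 7.5e6])
conc_low = np.array([4.0e5, 6.1e5, 2.9e5])
conc_high = np.array([3.3e2, 5.0e2, 2.1e2])

groups_log = [np.log10(g) for g in (control, conc_low, conc_high)]
print(f_oneway(*groups_log))  # one-way ANOVA on log10 titers
```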
ANOVA and post hoc test of log data
CC BY-SA 4.0
null
2023-03-03T12:43:44.970
2023-03-06T14:43:02.730
2023-03-06T11:45:06.757
382303
382303
[ "anova", "biostatistics", "logarithm" ]
608276
1
null
null
0
26
I want to perform a regression where my independent variables all sum to 1. The independent variables are proportions of money invested in different categories. What I have done: - When checking the VIF of the regression I get an error that 'there are aliased coefficients in the model', which means I have a multicollinearity problem. - I remove one of the independent variables and after running the VIF there is no problem any more (all values < 5). I know VIF is not the only way to test for multicollinearity. - I have checked for autocorrelation and I do not have that problem. Is this enough? Should I be running another type of regression instead? I am no expert, which is why I do not know if what I have done is enough. (I will remove it if not appropriate) - Edit: I added an image with a sample of my data (the column missing is the last Industry that has the rest of the proportion invested so that each column adds to 100%). [](https://i.stack.imgur.com/joyv1.png)
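A sketch of the VIF check after dropping one proportion column, using statsmodels; the column names and the simulated proportions are placeholders for the investment categories.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
raw = rng.dirichlet(np.ones(4), size=300)            # 4 proportions summing to 1
props = pd.DataFrame(raw, columns=["ind_a", "ind_b", "ind_c", "ind_d"])

# drop one category to break the exact sum-to-one dependence, then add a constant
X = sm.add_constant(props.drop(columns="ind_d"))
for i, col in enumerate(X.columns):
    print(col, variance_inflation_factor(X.values, i))
```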
Regression with proportion values in the independent variables
CC BY-SA 4.0
null
2023-03-03T13:02:28.423
2023-03-05T00:40:30.993
2023-03-05T00:40:30.993
11887
263426
[ "regression", "multicollinearity", "proportion", "predictor", "compositional-data" ]
608277
2
null
608254
0
null
There's a simple option if you are fine with maximum-a-posteriori estimation and prediction but don't care so much about uncertainty. I guess you don't, because it's hard to see how you'd get that with the preceding "deterministic" neural network in the mix. In that case you can just penalize the parameters of your last linear layer according to the prior distributions you want (i.e. add that penalty to the loss function). If you want a fuller posterior (but are happy to ignore that the inputs are really also model estimates from the deterministic NN, the uncertainty about which does not end up being propagated properly), then perhaps look at O'Hagan 2010 or the like, where you can see how a Bayesian linear regression is just solving some linear algebra, too. That is, as long as you are fine to work with conjugate priors (or mixtures of them, which can approximate most things). I wonder whether you can convert this into the final layer of your neural network, but that's much more involved (no idea how complex this gets). - O'Hagan, A., 2010. Kendall's Advanced Theory of Statistics 2B. John Wiley & Sons.
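A minimal PyTorch-style sketch of the "penalize the last layer" idea; the architecture, the data, and the prior precision are invented placeholders, and an isotropic Gaussian prior on the head weights corresponds to an L2 penalty on them.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(10, 32), nn.ReLU())   # deterministic network
head = nn.Linear(32, 1)                                   # final linear layer with a prior
prior_precision = 1.0                                     # 1 / prior variance (assumed)

opt = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()), lr=1e-3)
x, y = torch.randn(64, 10), torch.randn(64, 1)

for _ in range(100):
    opt.zero_grad()
    pred = head(backbone(x))
    nll = nn.functional.mse_loss(pred, y)                  # Gaussian likelihood up to constants
    log_prior = 0.5 * prior_precision * head.weight.pow(2).sum()
    (nll + log_prior).backward()                           # MAP objective for the head weights
    opt.step()

print(head.weight)
```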
null
CC BY-SA 4.0
null
2023-03-03T13:04:27.140
2023-03-03T13:04:27.140
null
null
86652
null
608279
1
null
null
0
5
I have raster data that provides annual air quality measurements over my study area (Toronto). I also have monthly air quality data from four monitoring stations in Toronto. One of my jobs is to develop monthly raster data using these two data sources -- how do I go about doing this?
Modelling monthly data from annual raster data and monthly point sources
CC BY-SA 4.0
null
2023-03-03T13:38:10.933
2023-03-03T13:38:10.933
null
null
382305
[ "spatial" ]