| Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
608523 | 1 | 608636 | null | 2 | 27 | Suppose we have two categorical random variables $X$ and $Y$, both with $k$ levels.
We assume $Y$ is generated causally from $X$ ($X \to Y$). We also assume the causal Markov condition is satisfied, which means $X$ and $Y$ are probabilistically dependent.
My question is:
- Is there always a way to re-discretize $Y$ into a new va... | Can we preserve dependence in discretization? | CC BY-SA 4.0 | null | 2023-03-06T12:57:08.157 | 2023-03-07T13:22:59.153 | 2023-03-07T13:22:03.123 | 345167 | 345167 | [
"causality",
"discrete-data",
"non-independent",
"discrete-distributions"
] |
608524 | 1 | null | null | 1 | 28 | Here's a pretty general question, so I do not expect precise answers; instead, I hope to read ideas and suggestions.
I have a deployed proprietary deep learning model that makes retail price predictions for pre-owned cars.
Data are shaped like this: car characteristics, selling date and selling price, and I'm receiv... | How to make a DL price prediction model aware of market fluctuations | CC BY-SA 4.0 | null | 2023-03-06T13:06:53.203 | 2023-03-09T11:49:09.883 | 2023-03-09T11:49:09.883 | 350354 | 350354 | [
"regression",
"neural-networks",
"trend"
] |
608525 | 1 | 608626 | null | 2 | 75 | I'm trying to run a linear mixed model (in R) but my model either never seems to finish running or (with a simpler random effects structure) there is a warning about singular effects. My full model is below (this is the version that runs for ages and never completes):
```
RT_lme <- lmer(RT ~ Condition * HighLow * Corre... | Choosing Random Effects to Include in a Linear Mixed Model | CC BY-SA 4.0 | null | 2023-03-06T13:14:04.757 | 2023-03-07T12:10:55.283 | 2023-03-07T11:36:36.220 | 345611 | 379020 | [
"r",
"regression",
"mixed-model",
"lme4-nlme",
"singular-matrix"
] |
608526 | 2 | null | 606988 | 4 | null | For Dirichlet random variates $x_1,...x_K$ with concentration parameters $\alpha_1...\alpha_K$,
$$y_{i,j}=\frac{x_i}{x_i+x_j}\sim\text{Beta}(\alpha_i,\alpha_j)$$
and
$$\frac{x_i}{x_j}=\frac{y_{i,j}}{1-y_{i,j}}\sim\beta'(\alpha_i,\alpha_j)$$
which is the [beta prime distribution](https://en.wikipedia.org/wiki/Beta_prime_... | null | CC BY-SA 4.0 | null | 2023-03-06T13:27:16.817 | 2023-03-08T17:16:21.080 | 2023-03-08T17:16:21.080 | 214015 | 214015 | null |
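A quick Monte Carlo sanity check of these two facts (a sketch, not part of the original answer; it uses the standard normalized-Gamma construction of the Dirichlet, and the example concentration parameters are made up):

```python
import random

# Sketch: draw Dirichlet(2, 3, 5) variates via normalized Gamma draws,
# then check the two stated facts empirically:
#   y = x_i/(x_i + x_j) ~ Beta(alpha_i, alpha_j), mean alpha_i/(alpha_i + alpha_j)
#   x_i/x_j ~ BetaPrime(alpha_i, alpha_j),        mean alpha_i/(alpha_j - 1)
random.seed(0)
alpha = [2.0, 3.0, 5.0]
n = 100_000

y_sum = r_sum = 0.0
for _ in range(n):
    g = [random.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    x = [gi / s for gi in g]              # one Dirichlet(2, 3, 5) draw
    y_sum += x[0] / (x[0] + x[1])
    r_sum += x[0] / x[1]

print(round(y_sum / n, 1))   # Beta(2, 3) mean: 2/(2+3) = 0.4
print(round(r_sum / n, 1))   # BetaPrime(2, 3) mean: 2/(3-1) = 1.0
```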
608527 | 1 | null | null | 0 | 32 | I am trying to generate time-to-event data for two treatments, with 25 patients in each arm. I generated the survival time in the treatment arm using an exponential distribution with parameter 0.95, and in the control arm using parameter 1.0. I assumed that all patients would experience an event without censoring. The ... | Why is the actual hazard ratio of the simulated time-to-event data very different from the expected value? | CC BY-SA 4.0 | null | 2023-03-06T13:27:46.477 | 2023-03-16T10:46:41.973 | null | null | 364419 | [
"survival",
"simulation",
"cox-model",
"exponential-distribution",
"hazard"
] |
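A minimal simulation sketch of the setup this question describes suggests why a single dataset can land far from the true hazard ratio. Here the HR is estimated from the exponential-rate MLEs rather than a Cox fit, which is an assumption of this sketch:

```python
import random
import statistics

# Sketch of the question's setup: exponential survival times with rate
# 0.95 (treatment) vs 1.0 (control), 25 patients per arm, no censoring.
# With full follow-up the exponential-rate MLE is 1/mean, so a simple
# hazard-ratio estimate is the ratio of rate MLEs. True HR = 0.95/1.0.
random.seed(1)

def estimated_hr(n=25):
    trt = [random.expovariate(0.95) for _ in range(n)]
    ctl = [random.expovariate(1.00) for _ in range(n)]
    return (1 / statistics.mean(trt)) / (1 / statistics.mean(ctl))

hrs = [estimated_hr() for _ in range(2000)]
# With only 25 events per arm, the sampling spread of the HR estimate
# is several times larger than the true effect of 0.05, so a single
# simulated dataset can easily land far from 0.95.
print(round(statistics.mean(hrs), 2), round(statistics.stdev(hrs), 2))
```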
608528 | 1 | 608657 | null | 1 | 70 | This is probably a straightforward question but I can't find a straightforward answer. The topic is new to me.
I am performing parametric survival analysis, e.g. estimating a survival function from survival data and some covariates.
It seems like it would be intuitive to check the residuals of the model in the same w... | residuals in parametric survival analysis | CC BY-SA 4.0 | null | 2023-03-06T13:40:19.623 | 2023-06-03T07:49:27.663 | 2023-06-03T07:49:27.663 | 121522 | 72174 | [
"survival",
"residuals",
"parametric"
] |
608529 | 1 | null | null | 0 | 32 | Model framework:
Suppose that the loss function is given by the Kullback-Leibler divergence (KLD) as follows:
\begin{equation}
\text{KL}(\Theta \parallel \hat{\Theta}) = \text{KL}\big(f(x;\Theta) \parallel f(x; \hat{\Theta})\big) = \int_{\mathcal{A}}\log\frac{f(x; \Theta)}{f(x; \hat{\Theta})}f(x; \Theta) dx,
\end{equa... | Bayes estimate of mixture of exponential under the the Kullback-Leibler divergence loss function | CC BY-SA 4.0 | null | 2023-03-06T13:44:08.053 | 2023-03-06T13:44:08.053 | null | null | 351356 | [
"bayesian",
"estimation",
"loss-functions",
"kullback-leibler",
"mixture-distribution"
] |
608530 | 1 | null | null | 2 | 20 | In the introduction of this paper
[Bifurcations in a predator–prey model with general logistic
growth and exponential fading memory](https://doi.org/10.1016/j.apm.2016.12.003)
The authors claim
[](https://i.stack.imgur.com/HKeYF.png)
I didn't know that this model was well known.
Can anyone point to a general reference ... | Who formulated this Generalized Lotka-Volterra model? | CC BY-SA 4.0 | null | 2023-03-06T13:45:17.430 | 2023-03-06T14:00:23.017 | 2023-03-06T14:00:23.017 | 382497 | 382497 | [
"references",
"growth-model",
"differential-equations"
] |
608531 | 1 | 608532 | null | 3 | 53 | Suppose we have two discrete random variables $X$ and $Y$, both of which take values from $\{1,2,...,k\}$.
$Y$ is generated from $X$ via a transition probability matrix (also known as the [stochastic matrix](https://en.wikipedia.org/wiki/Stochastic_matrix)), which is defined as:
$$P:=[P_{ij}]_{k \times k}$$ with $P_{ij... | What kind of transition probability matrix indicates dependence/independence? | CC BY-SA 4.0 | null | 2023-03-06T13:50:27.880 | 2023-03-06T14:19:15.860 | null | null | 345167 | [
"probability",
"stochastic-processes",
"markov-process",
"discrete-data",
"discrete-distributions"
] |
608532 | 2 | null | 608531 | 4 | null | Since the sample space of both $X$ and $Y$ is finite, the independence boils down to finite number of constraints namely $P_{ij}$ as defined in the question should only depend on $j$ or in other words every row of $P$ should be the same (the distribution of $X$). It can be seen easily (for eg. using law of total probab... | null | CC BY-SA 4.0 | null | 2023-03-06T14:19:15.860 | 2023-03-06T14:19:15.860 | null | null | 342327 | null |
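The row condition in this answer can be checked mechanically; a small sketch with made-up numbers:

```python
# Sketch of the criterion in this answer: X and Y (values 1..k) are
# independent iff every row of the transition matrix P is identical,
# because then P(Y=j | X=i) does not depend on i.
def joint_and_marginal(px, P):
    k = len(px)
    joint = [[px[i] * P[i][j] for j in range(k)] for i in range(k)]
    py = [sum(joint[i][j] for i in range(k)) for j in range(k)]
    return joint, py

px = [0.2, 0.3, 0.5]                 # distribution of X
row = [0.1, 0.6, 0.3]
P_same = [row, row, row]             # identical rows
joint, py = joint_and_marginal(px, P_same)

independent = all(
    abs(joint[i][j] - px[i] * py[j]) < 1e-12
    for i in range(3) for j in range(3)
)
print(independent)  # True: the joint factorizes into the marginals
```
The marginal of $Y$ comes out equal to the common row (up to float rounding), matching the parenthetical remark that each row is the distribution shared by all conditionals.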
608535 | 2 | null | 608273 | 4 | null | It's somewhat easy to present results when using a log transformation and traditional tests of the mean like anova, because the results are essentially on the geometric means of the data. That is, results could be presented as "The geometric means of Group A and Group B were statistically different.". And then back-t... | null | CC BY-SA 4.0 | null | 2023-03-06T14:43:02.730 | 2023-03-06T14:43:02.730 | null | null | 166526 | null |
608536 | 1 | null | null | 0 | 22 | Assume that a device is traveling at a constant expected speed $\mu$, subject to random variation with standard deviation $\sigma$, for a measurement period of duration $t$. Then the expected travel distance will be $\mu t$ over duration $t$.
I am only able to measure the distance that the device has traveled and not i... | Detection limit for a drop in speed, due to stopping between measurements | CC BY-SA 4.0 | null | 2023-03-06T14:46:20.437 | 2023-03-06T15:02:18.430 | 2023-03-06T15:02:18.430 | 36229 | 36229 | [
"hypothesis-testing",
"likelihood-ratio",
"measurement",
"bayes-factors"
] |
608537 | 2 | null | 561483 | 0 | null |
# 1) WSS processes
Here is the definition of cyclic autocorrelation function from Wikipedia:
$$
R_x^{n/T_0}(\tau) = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} R_x(t,\tau)e^{-j2\pi\frac{n}{T_0}t} \mathrm{d}t.
$$
We can evaluate this quantity for a WSS process ($R_x^0(\tau)\ne 0$) for the special case of $n/T_0 = 0$ and get
... | null | CC BY-SA 4.0 | null | 2023-03-06T14:59:40.930 | 2023-03-06T14:59:40.930 | null | null | 375862 | null |
608539 | 2 | null | 608515 | 2 | null | This is not true. Set the probability space $(\Omega, \mathscr{F}, P)$ to be $((0, 1], \mathscr{B}, \lambda)$ and define $X_n = \sqrt{n}I_{(0, n^{-1})}(x)$, $n = 1, 2, \ldots$, $X \equiv 0$, then
\begin{align}
E[|X_n|] = \frac{1}{\sqrt{n}} \to 0 = E[|X|]
\end{align}
as $n \to \infty$.
Let $f(x) = x^2$, which is contin... | null | CC BY-SA 4.0 | null | 2023-03-06T15:09:07.797 | 2023-03-06T15:09:07.797 | null | null | 20519 | null |
608540 | 2 | null | 24799 | 0 | null | There is actually [a paper](https://academic.oup.com/biomet/article/107/2/489/5715611) that connects CV and EB:
E Fong, C C Holmes, On the marginal likelihood and cross-validation, Biometrika, Volume 107, Issue 2, June 2020, Pages 489–496
If I understand correctly, the paper claims that the marginal likelihood is simil... | null | CC BY-SA 4.0 | null | 2023-03-06T15:28:50.180 | 2023-03-06T15:28:50.180 | null | null | 348832 | null |
608541 | 1 | 608545 | null | 0 | 32 | I just fitted a glm with the binomial family in R and to my surprise I didn't need to specify the $m_i$ values. Checking my notes, we simply assumed that $Y =_d \text{Binom}(m_i, p_i)$ where $p_i = H(x_i^T \beta)$, and thus our predictions are $\mathbb{E}[Y|X_i] = m_i p_i$.
My response was binary to begin with, and to ... | In binomial regression, how do we know what is $m_i$ (number of trials in the binomial distribution)? | CC BY-SA 4.0 | null | 2023-03-06T15:32:32.087 | 2023-03-06T15:55:40.720 | null | null | 342779 | [
"regression",
"generalized-linear-model"
] |
608542 | 1 | null | null | 0 | 14 | I have this type of diagram
[](https://i.stack.imgur.com/NVo6V.png)
Where my main focus is the latent variable A and I want to know which factors impact the most on it.
To make things more clear I will describe some made-up context that fits this situation.
Imagine that A is a concentration of a compound in solvent A, ... | How to specify this diagram type in lavaan | CC BY-SA 4.0 | null | 2023-03-06T15:35:28.040 | 2023-03-06T15:35:28.040 | null | null | 382508 | [
"r",
"python",
"structural-equation-modeling",
"latent-variable",
"lavaan"
] |
608543 | 2 | null | 608503 | 1 | null | >
> Because I have found outliers in my dataset (both in terms of Mahalanobis distance and generalized Cook's distance
These were calculated by treating your 6-point Likert scale as a continuum, right? So 1 means 1, 2 means 2, ...
> Because of the ordinal nature of the data, I am using ULS instead of ML, and the ord... | null | CC BY-SA 4.0 | null | 2023-03-06T15:40:54.973 | 2023-03-06T15:40:54.973 | null | null | 335062 | null |
608544 | 1 | null | null | 0 | 27 | In R I have fitted a logistic model for polygenic risk scores in cases and controls as well as a number of covariates :
```
model <- glm(phenotype ~ prs + var1 + var2 + var3...var10, family = binomial(link = 'logit'), data = data.prs)
```
The PRS column have values from around -0.1 to 0.2
The summary of my model looks like this:
[](h... | Way of making odds ratios more manageable from logistic regression | CC BY-SA 4.0 | null | 2023-03-06T15:46:48.417 | 2023-03-06T15:46:48.417 | null | null | 300090 | [
"r",
"regression",
"logistic"
] |
608545 | 2 | null | 608541 | 1 | null | I think this is what is happening. From [the documentation](https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/family):
> For the binomial and quasibinomial families the response can be specified in one of three ways:
> ...
> As a numerical vector with values between 0 and 1, interpreted as the proportio... | null | CC BY-SA 4.0 | null | 2023-03-06T15:50:23.650 | 2023-03-06T15:55:40.720 | 2023-03-06T15:55:40.720 | 22311 | 22311 | null |
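The proportion-plus-weights coding works because, for $k_i$ successes in $m_i$ trials, the binomial log-likelihood kernel can be rewritten in terms of the proportion $y_i = k_i/m_i$ with weight $m_i$. A sketch with made-up numbers:

```python
import math

# Binomial kernel two ways (illustrative data, not from the question):
#   counts form:     sum_i [ k_i log p_i + (m_i - k_i) log(1 - p_i) ]
#   proportion form: sum_i m_i [ y_i log p_i + (1 - y_i) log(1 - p_i) ]
k = [3, 7, 5]          # successes
m = [10, 12, 8]        # trials, the m_i from the question
p = [0.3, 0.6, 0.55]   # arbitrary fitted probabilities

ll_counts = sum(ki * math.log(pi) + (mi - ki) * math.log(1 - pi)
                for ki, mi, pi in zip(k, m, p))
ll_weighted = sum(mi * ((ki / mi) * math.log(pi)
                        + (1 - ki / mi) * math.log(1 - pi))
                  for ki, mi, pi in zip(k, m, p))
print(abs(ll_counts - ll_weighted) < 1e-9)  # True: identical kernels
```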
608547 | 1 | null | null | 0 | 13 | I am trying to fit an Arima model to the following time series of yearly incidence data from an infectious disease which presents a cyclical behaviour, meaning outbreaks in a non-predictable manner. The log-transformed time series is stationary based on KPSS test (function unitroot_kpss in fpp3 package). Using the ARIM... | Arima model for infectious disease with random outbreaks (cyclic behaviour) | CC BY-SA 4.0 | null | 2023-03-06T15:59:03.563 | 2023-03-06T15:59:03.563 | null | null | 25032 | [
"forecasting",
"arima"
] |
608548 | 1 | null | null | 1 | 25 | I have data where people are given a limited number of tokens (1 - 10) which they can assign across five different items. The number of tokens they allocate to an item represents the importance of that item to them over the others. In other words they have to make make more trade-offs if they are given fewer tokens to ... | What kind of regression for predicting limited resource allocation across items | CC BY-SA 4.0 | null | 2023-03-06T15:59:06.163 | 2023-03-07T09:17:21.403 | 2023-03-07T09:17:21.403 | 282624 | 282624 | [
"regression",
"statistical-significance",
"multinomial-logit",
"choice-modeling"
] |
608549 | 1 | null | null | 1 | 30 | dataset is a time series of index values from 2015 to 2020.
i would like to do see if arima is an adequate forecasting tool for this index.
in a first step i am trying to figure out whether my data is stationary or needs transformation.
looking at the plot i see no seasonality and a constant mean over time, however, va... | Stationarity/ADF vs. KPSS Test | CC BY-SA 4.0 | null | 2023-03-06T15:39:56.257 | 2023-03-07T17:42:05.730 | 2023-03-07T17:42:05.730 | 11887 | 384355 | [
"r",
"time-series"
] |
608550 | 1 | null | null | 0 | 28 | There exists a classical method for solving a certain computational problem related to random sampling. It is the "gold standard", so to speak.
I'm working on an algorithm that aims to solve the same problem more efficiently. It relies on partitioning the data used in the classical method and processing it in parallel.... | Hypothesis testing for two samples from discrete distributions | CC BY-SA 4.0 | null | 2023-03-06T16:12:21.667 | 2023-03-06T16:12:21.667 | null | null | 300849 | [
"hypothesis-testing",
"chi-squared-test",
"kolmogorov-smirnov-test",
"discrete-distributions"
] |
608551 | 1 | null | null | 0 | 41 | Suppose I've sampled $x_0,\ldots,x_{n-1}$ and want to calculate the variance of these samples. What is a good (numerically stable) algorithm for this? And does the answer change, if we impose assumptions on the correlation of the samples (like assuming that they are samples drawn from a Markov chain)?
I've seen that th... | Numerically stable computation of the variance | CC BY-SA 4.0 | null | 2023-03-06T16:14:52.867 | 2023-03-06T17:00:58.863 | null | null | 222528 | [
"variance",
"references",
"markov-chain-montecarlo",
"markov-process"
] |
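One standard answer to this question is Welford's one-pass algorithm; the sketch below (with an arbitrary large-offset example) compares it against Python's exact `statistics.variance`:

```python
import random
import statistics

# Welford's one-pass algorithm, a standard numerically stable way to
# compute a sample variance (a sketch, not addressing the correlated-
# samples part of the question).
def welford_variance(xs):
    mean = m2 = 0.0
    n = 0
    for x in xs:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)   # uses the *updated* mean
    return m2 / (n - 1)            # sample variance (ddof = 1)

random.seed(0)
# A large offset is where the naive sum(x^2)/n - mean^2 formula suffers
# catastrophic cancellation; Welford's recurrence does not.
xs = [1e9 + random.random() for _ in range(10_000)]
print(abs(welford_variance(xs) - statistics.variance(xs)) < 1e-4)  # True
```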
608552 | 1 | null | null | 0 | 8 | I need to find the best variance estimator of my parameter $\theta$ using complex sampling data. My survey data of dimension N are drawn with two-stage stratified sampling.
Starting from my dataset, I draw 1000 samples of dimension $n<N$ replicating the original sampling scheme, and I estimate for each sample... | Compare variance estimators in complex sampling designs by simulations | CC BY-SA 4.0 | null | 2023-03-06T16:18:55.430 | 2023-03-06T16:51:44.617 | 2023-03-06T16:51:44.617 | 382514 | 382514 | [
"variance",
"simulation",
"monte-carlo",
"estimators",
"weighted-sampling"
] |
608554 | 1 | null | null | 1 | 7 | If we have a fully connected MRF with three random variables $a, b, c$, what probabilistic assumption would we make if we break the joint potential of the three into pairwise potentials?
$$
\phi(a,b,c) = \phi(a,b)\phi(b,c)\phi(a,c).
$$
| What type of probabilistic assumption do we make in a MRF if we break a clique by pairwise potentials? | CC BY-SA 4.0 | null | 2023-03-06T16:23:02.100 | 2023-03-06T16:23:02.100 | null | null | 382515 | [
"conditional-probability",
"graphical-model",
"markov-random-field"
] |
608557 | 2 | null | 608297 | 4 | null | I tend to have data with inherent structure, such as multiple samples per patient, multiple measurements per sample and the like.
In that situation, the statistically independent unit is typically the patient rather than the row of the data matrix. (In some cases, independence is even more complicated, with multiple to... | null | CC BY-SA 4.0 | null | 2023-03-06T17:30:36.057 | 2023-03-10T21:33:40.393 | 2023-03-10T21:33:40.393 | 4598 | 4598 | null |
608558 | 1 | null | null | 1 | 19 | Given two probability densities $p(x)$ and $p(y)$, define the dot-product of their log-likelihood gradients, also sometimes known as "scores", $\langle \nabla_x \log p(x), \nabla_y \log p(y) \rangle $. I was wondering if this pops up in a specific definition of a distance measure between distributions, is there a conne... | Dot-product of log-likelihood gradients ("scores") | CC BY-SA 4.0 | null | 2023-03-06T17:31:22.737 | 2023-03-06T17:31:22.737 | null | null | 133692 | [
"machine-learning",
"probability",
"distributions",
"distance"
] |
608559 | 1 | 608562 | null | 1 | 39 | To start I'll say I am relatively new to data analytics.
How is this data set distributed? I'm debating whether I can use the Mann-Whitney test on it along with gender (M/F).
I've been assuming it's very highly skewed but a couple of things are making me question whether it is considered skewed:
- The means and median... | How would you describe the distribution of this data? | CC BY-SA 4.0 | null | 2023-03-06T17:32:49.033 | 2023-03-06T18:01:02.340 | null | null | 382520 | [
"hypothesis-testing",
"mathematical-statistics",
"statistical-significance"
] |
608561 | 1 | null | null | 2 | 150 | $$
\newcommand{\pset}[1]{2^{#1}}
\newcommand{\NN}{\mathbb{N}}
\newcommand{\PP}{\mathbf{P}}
\newcommand{\OO}{\Omega}
\newcommand{\oo}{\omega}
\newcommand{\sal}{$\sigma$-algebra\xspace} % Sigma algebra
\newcommand{\sals}{$\sigma$-algebras\xspace} % Sigma algebras (plural)
$$
Background:
I'm tr... | Help with rigorous derivation of multinomial distribution | CC BY-SA 4.0 | null | 2023-03-06T17:43:25.687 | 2023-03-11T23:17:30.497 | 2023-03-07T01:01:33.840 | 173082 | 273784 | [
"probability",
"distributions",
"mathematical-statistics",
"multinomial-distribution"
] |
608562 | 2 | null | 608559 | 1 | null | First, you should use a more descriptive title, if possible. You have ordinal data (scores between 1 and 10?), where the data is mostly saturating the scale (getting the top value).
Your data seems skewed, because one tail is longer than the other, but I think that the actual issue with your data is that most of it is ... | null | CC BY-SA 4.0 | null | 2023-03-06T18:01:02.340 | 2023-03-06T18:01:02.340 | null | null | 134438 | null |
608563 | 1 | null | null | 1 | 17 | The data I have is essentially the number of times two students were seen together.
So students 1 and 3 were seen together most often, then 2 and 3, then 1 and 2.
The correlation matrix looks like this:
         [1]  [2]  [3]
    [1]   0   0.4  0.7
[2... | I want to conduct some test on a large correlation matrix in r | CC BY-SA 4.0 | null | 2023-03-06T18:22:58.290 | 2023-03-06T18:22:58.290 | null | null | 382528 | [
"fisher-transform"
] |
608564 | 1 | null | null | 4 | 67 | In Lemma 1 of [these lecture notes](https://ocw.mit.edu/courses/14-382-econometrics-spring-2017/resources/mit14_382s17_lec1/), Chernozhukov and Fernández-Val write that partialing out with the Frisch-Waugh-Lovell theorem has an "adaptivity" property. Namely, suppose we regress $Y$ on $D$ and $W$ and partial out $W$.
Th... | Why does the "infeasible" Waugh-Frisch-Lovell estimator agree with the usual one? | CC BY-SA 4.0 | null | 2023-03-06T18:26:56.633 | 2023-03-07T16:45:09.667 | null | null | 382529 | [
"regression",
"asymptotics"
] |
608565 | 2 | null | 608111 | 4 | null |
- That looks curved to me. No, it is not wiggling all over the place, but there is curvature upon visual inspection.
- A highly significant p-value despite only modest curvature can be interpreted just like any other hypothesis test that catches a modest effect. I will venture a guess that you have a fairly large sa... | null | CC BY-SA 4.0 | null | 2023-03-06T18:31:52.807 | 2023-03-06T19:12:56.943 | 2023-03-06T19:12:56.943 | 247274 | 247274 | null |
608566 | 2 | null | 582236 | 1 | null | I think that the article ["Evaluating the density of ratios of noncentral quadratic forms in normal variables"](https://%20https://www.sciencedirect.com/science/article/pii/S0167947308005070) may be what you are looking for, although there doesn't seem to be a nice tidy expression there.
In Proposition 1 in the paper, ... | null | CC BY-SA 4.0 | null | 2023-03-06T18:33:58.323 | 2023-03-16T14:17:08.903 | 2023-03-16T14:17:08.903 | 134438 | 134438 | null |
608567 | 2 | null | 581493 | 1 | null | I'm going to break ranks with others complaining that you can't fit a continuous distribution to a discrete valued sample. We actually do this all the time, and it's an interesting problem in statistical computing and asymptotics to consider what happens when the distributional assumptions aren't exactly met. So... you... | null | CC BY-SA 4.0 | null | 2023-03-06T18:34:58.940 | 2023-03-06T18:46:12.420 | 2023-03-06T18:46:12.420 | 8013 | 8013 | null |
608568 | 1 | null | null | 0 | 44 | I have been working to try to get means adjusted for covariates. I've seen examples such as [this](https://stats.stackexchange.com/questions/567116/calculate-covariate-adjusted-means-and-95cis-for-treatment-and-control-group-se) one, but I haven't seen an example that has Dr. Lumley's `survey` package in R. The code I ... | Means adjusted for covariates | CC BY-SA 4.0 | null | 2023-03-06T19:37:45.897 | 2023-03-06T19:37:45.897 | null | null | 254436 | [
"r",
"survey",
"geometric-mean"
] |
608569 | 2 | null | 608514 | 3 | null | A common approach is to look at the standardized residuals (a.k.a. Pearson residuals), or to the adjusted standardized residuals. See Donald Sharpe's paper "[Chi-Square Test is Statistically Significant: Now What?](https://doi.org/10.7275/tbfa-x148)" (2015), that is a short review of residual analysis and other methods... | null | CC BY-SA 4.0 | null | 2023-03-06T19:53:45.980 | 2023-03-06T19:53:45.980 | null | null | 164936 | null |
608570 | 2 | null | 594060 | 1 | null | I think your professor means that $e^x$ cannot be approximated by a single layer of ReLU globally (on the entire $\mathbb{R}$), which seems correct because whatever the output of a single layer of ReLU is, it grows linearly, not exponentially.
This means that the output of your network cannot be equal to a function that g... | null | CC BY-SA 4.0 | null | 2023-03-06T19:58:37.373 | 2023-03-06T19:58:37.373 | null | null | 214510 | null |
608572 | 1 | 608581 | null | 1 | 37 | After an ANOVA in R, do you know if it is possible to get the group differences using Bonferroni correction?
If I use the iris dataset
```
a <- aov(Sepal.Width ~ Species, data = iris)
```
When I run a TukeyHSD, I get the group differences directly
```
TukeyHSD(a)
Tukey multiple comparisons of means
95% fami... | How to get the group differences after Bonferroni correction in multiple comparison? | CC BY-SA 4.0 | null | 2023-03-06T20:15:51.147 | 2023-03-06T23:13:14.107 | null | null | 261354 | [
"r",
"anova",
"post-hoc",
"bonferroni",
"tukey-hsd-test"
] |
608574 | 2 | null | 608478 | 2 | null | What you are looking at is the [law of total probability](https://en.wikipedia.org/wiki/Law_of_total_probability), [law of total expectation](https://en.wikipedia.org/wiki/Law_of_total_expectation), etc. These laws follow directly from the definition of conditional probability and conditional probability densities. I... | null | CC BY-SA 4.0 | null | 2023-03-06T21:59:14.877 | 2023-03-06T21:59:14.877 | null | null | 173082 | null |
608575 | 1 | null | null | 0 | 25 | I have 2 questions.
- How can I calculate the probability of getting a specific sequence of heads vs tails?
- If I have a given sequence of tosses, how can I use this information to help me guess the next flip?
Let's suppose that I have a fair, 2 sided coin.
My attempts for 1: Let's suppose I want to know what th... | Conditional probability for a given sequence | CC BY-SA 4.0 | null | 2023-03-06T22:09:55.157 | 2023-03-06T22:09:55.157 | null | null | 382535 | [
"probability"
] |
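A minimal sketch of the two sub-questions, assuming a fair coin as stated:

```python
from fractions import Fraction

# 1) Any *specific* length-n sequence has probability (1/2)^n.
# 2) Tosses are independent, so the observed sequence carries no
#    information about the next flip: P(next = H | sequence) = 1/2.
def p_sequence(seq):
    return Fraction(1, 2) ** len(seq)

print(p_sequence("HTHHT"))   # 1/32
print(p_sequence("HHHHH"))   # 1/32 -- same as any other length-5 sequence
print(p_sequence("HTHHTH") / p_sequence("HTHHT"))   # 1/2
```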
608578 | 2 | null | 608394 | 1 | null | It doesn't make sense to use $f(x)=\frac{1}{2\pi}\int \exp(-\mathrm itx) \varphi_X(t)~\mathrm dt$ when $\int|\varphi_X(t) |~\mathrm dt=\infty, $ for the former is true if $\int|\varphi_X(t) |~\mathrm dt<\infty.$
The general inversion formula (assuming $a,~b\in \mathcal C(\mathrm F) ;~a<b$) is
$$ \mathrm F(b) - \mathrm ... | null | CC BY-SA 4.0 | null | 2023-03-06T22:57:34.027 | 2023-03-07T03:02:29.780 | 2023-03-07T03:02:29.780 | 362671 | 362671 | null |
608579 | 2 | null | 606983 | 0 | null | Based on your reference I believe that you are estimating the vector $\boldsymbol{\beta}$ of size $p_n$ with a posterior distribution based on the observation of the vector $\mathbf{Y}$ of size $n$ in the model $$\mathbf{Y} = \mathbf{X} \boldsymbol{\beta} + \boldsymbol{\epsilon}$$ where $\mathbf{X}$ is a fixed regress... | null | CC BY-SA 4.0 | null | 2023-03-06T23:02:04.947 | 2023-03-08T09:15:32.070 | 2023-03-08T09:15:32.070 | 164061 | 164061 | null |
608580 | 1 | null | null | 0 | 26 | Why is the value of MSE in cross-validation always less than 1 among the MSE values of the hidden nodes selected by the MLP function?
[](https://i.stack.imgur.com/Lbcp9.png)
This is the result of the code
```
fit2 <- mlp(fit, auto.hd.type = "cv")
fit2$MSEH
```
The values of my input have a mean of 9000.
Further, the M... | MSE on cross validation of MLP | CC BY-SA 4.0 | null | 2023-03-06T23:05:33.293 | 2023-03-06T23:05:33.293 | null | null | 367146 | [
"time-series"
] |
608581 | 2 | null | 608572 | 1 | null | This can be done using the `emmeans` package, which allows easy working with contrasts.
We can specify a Bonferroni-adjusted series of pairwise tests with
```
fit <- aov(Sepal.Width ~ Species, data = iris)
fit_em_bonf <- emmeans::emmeans(
fit,
specs = pairwise ~ Species,
adjust = "bonf"
)
```
and then extrac... | null | CC BY-SA 4.0 | null | 2023-03-06T23:13:14.107 | 2023-03-06T23:13:14.107 | null | null | 335519 | null |
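For reference, the adjustment that `adjust = "bonf"` applies is just multiplication by the number of comparisons, capped at 1; a sketch with made-up p-values:

```python
# Bonferroni adjustment under the hood: each raw p-value is multiplied
# by the number of comparisons and capped at 1. The raw p-values below
# are illustrative, not real emmeans output.
def bonferroni(pvals):
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

raw = [0.0021, 0.0417, 0.6803]   # three pairwise comparisons
print([round(p, 4) for p in bonferroni(raw)])  # [0.0063, 0.1251, 1.0]
```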
608582 | 1 | null | null | 2 | 62 | I am trying to calculate the odds ratio using the epitools package. The counts in 2 cells of my 2x2 table are $< 10$ so I keep receiving the following error "Error in chisq.test(xx, correct = correction) : (converted from warning) Chi-squared approximation may be incorrect". I tried defining the method as "fisher" and ... | Epitools oddsratio error in chisq.test | CC BY-SA 4.0 | null | 2023-03-06T23:22:11.597 | 2023-03-09T16:17:22.893 | 2023-03-09T15:32:11.490 | 11887 | 382539 | [
"chi-squared-test",
"odds-ratio",
"epidemiology"
] |
608583 | 1 | null | null | 0 | 20 | In my problem, I have a condition in which I need to compute the joint distribution of two dependent distributions. The first distribution is normal and the second one is a beta distribution. How can I get the joint distribution function of these two distributions? Any help would be appreciated.
Update
Actually, I am tes... | How can I combine and get cdf of joint distribution of normal and beta distributions on the same set of data? | CC BY-SA 4.0 | null | 2023-03-06T23:26:02.393 | 2023-03-06T23:26:02.393 | null | null | 365295 | [
"wilcoxon-mann-whitney-test",
"joint-distribution",
"kolmogorov-smirnov-test"
] |
608584 | 1 | null | null | 5 | 635 | I am reading [this article](https://towardsdatascience.com/horseshoe-priors-f97672b4f7cb) about the horseshoe prior and how it is better than lasso and ridge priors. The author makes several points that I don't understand. One of them is "The ideal prior distribution will put a probability mass on zero to reduce varian... | How does an ideal prior distribution need a probability mass on zero to reduce variance, and have fat tails to reduce bias? | CC BY-SA 4.0 | null | 2023-03-06T23:32:25.283 | 2023-03-08T09:38:39.630 | 2023-03-07T08:16:55.283 | 53690 | 362604 | [
"regression",
"bayesian",
"prior"
] |
608585 | 2 | null | 608561 | 6 | null | Even under the rigorous measure-theoretic framework, your proof is overly verbose, probably due to that you confused the underlying probability space $(\Omega, \mathscr{F}, P)$, where $X_1, X_2, \ldots, X_n$ and $Y = (Y_1, \ldots, Y_m)$ are defined, with their image space $(\mathbb{R}^1, \mathscr{R}^1)$. In particular... | null | CC BY-SA 4.0 | null | 2023-03-06T23:49:01.043 | 2023-03-07T12:21:20.927 | 2023-03-07T12:21:20.927 | 20519 | 20519 | null |
608586 | 2 | null | 608584 | 6 | null | The idea is that you want your regularisation procedure to set small parameter estimates to zero and leave large estimates unchanged.
Now, lasso does zero out small estimates (ridge doesn't even do that), but both lasso and ridge shrink large estimates towards zero, which is a significant source of bias in the two proc... | null | CC BY-SA 4.0 | null | 2023-03-06T23:56:19.037 | 2023-03-06T23:56:19.037 | null | null | 335519 | null |
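This contrast can be sketched with the scalar soft- and hard-thresholding rules (illustrative values only; hard thresholding stands in here for the "ideal" selection-like behavior):

```python
# Lasso's soft-thresholding shrinks every estimate toward zero by lam,
# while hard thresholding zeroes small estimates and leaves large ones
# untouched.
def soft_threshold(z, lam):
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def hard_threshold(z, lam):
    return z if abs(z) > lam else 0.0

lam = 1.0
print(soft_threshold(0.5, lam), hard_threshold(0.5, lam))  # 0.0 0.0
print(soft_threshold(5.0, lam), hard_threshold(5.0, lam))  # 4.0 5.0
```
The small estimate is zeroed by both rules (variance reduction); the large estimate is shrunk by `lam` under soft thresholding (bias) but kept intact under hard thresholding.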
608587 | 2 | null | 608584 | 6 | null |
### Probability mass at zero
> How does the normal distribution have a zero probability mass at zero?
The normal distribution has a non zero density at zero but the probability (mass) is zero $P[X=0] = 0$.
By placing a probability mass at zero the prior is expressing more strongly the believe that a parameter is ... | null | CC BY-SA 4.0 | null | 2023-03-06T23:56:24.470 | 2023-03-08T09:38:39.630 | 2023-03-08T09:38:39.630 | 362671 | 164061 | null |
608588 | 1 | null | null | 0 | 20 | I am reading [Inference on Counterfactual Distributions](https://arxiv.org/pdf/0904.0951.pdf) and my knowledge of distribution functions is rusty.
> Suppose we would like to analyze the wage differences between men and women. Let 0 denote the population of men and 1 the population of women. $Y_j$ denotes wages and $X_... | Integrating a conditional CDF WRT another distribution with nested support | CC BY-SA 4.0 | null | 2023-03-07T00:11:05.217 | 2023-03-07T00:11:05.217 | null | null | 382540 | [
"probability",
"distributions",
"conditional-probability"
] |
608589 | 2 | null | 608584 | 9 | null |
#### The MAP estimator can have non-zero probability mass at a point (even if the posterior distribution is always continuous)
The linked article is actually a bit misleading on this point, since even under the stipulated model all the relevant distributions are still continuous, so there is still zero probability m... | null | CC BY-SA 4.0 | null | 2023-03-07T00:22:37.470 | 2023-03-07T21:23:25.287 | 2023-03-07T21:23:25.287 | 173082 | 173082 | null |
608590 | 1 | null | null | 1 | 22 | I am using `lagsarlm` from the `spdep` package in R to estimate a spatial Durbin (mixed) model by
```
m1 <- lagsarlm(f, data = d, wlist, type = "mixed")
```
and using predict function
```
pred <- predict(m1,newdata = d, listw = wlist)
```
with the original data and spatial weight list to estimate the dependent variable. Th... | predict returns different values from fitted.value | CC BY-SA 4.0 | null | 2023-03-07T00:53:30.703 | 2023-03-08T00:59:36.813 | 2023-03-08T00:59:36.813 | 382545 | 382545 | [
"r",
"spatial-correlation"
] |
608591 | 1 | null | null | 2 | 81 | I have fitted a GAM to generated data with binomial responses. The problem is that, while running this GAM many times with different bootstrap samples, an error often occurred.
| Why does using "REML" in mgcv give error for generalized additive models? | CC BY-SA 4.0 | null | 2023-03-07T00:53:58.060 | 2023-03-23T06:25:31.760 | 2023-03-23T06:25:31.760 | null | null | [
"convergence",
"aic",
"generalized-additive-model",
"mgcv",
"reml"
] |
608592 | 2 | null | 608561 | 4 | null |
#### Your present attempted proof is question-begging
Firstly, well done on your initial attempt. You appear to have a basic idea of how you would like to proceed, and you are making an attempt to set this out rigorously, which is no easy task.
I see a couple of problems with your proof. The first problem is that ... | null | CC BY-SA 4.0 | null | 2023-03-07T00:57:36.263 | 2023-03-11T23:17:30.497 | 2023-03-11T23:17:30.497 | 173082 | 173082 | null |
608593 | 2 | null | 608582 | 0 | null | The issue is that `oddsratio`, `oddsratio.fisher` and `oddsratio.small` all calculate a "regular" chi-squared statistic for reporting each time: you can see this in the example output for `oddsratio.fisher(mat)` below (the `$p.value` table, `chi.square` column).
So each of these variants of oddsratio is triggering the ... | null | CC BY-SA 4.0 | null | 2023-03-07T01:04:49.567 | 2023-03-09T15:32:37.773 | 2023-03-09T15:32:37.773 | 11887 | 16974 | null |
608594 | 2 | null | 608249 | 1 | null | Standardizing the coefficients may help. You could also try using canopy cover as the dependent variable, adding grass and forb counts as independent variables, and just using a linear model rather than Poisson.
| null | CC BY-SA 4.0 | null | 2023-03-07T01:27:10.130 | 2023-03-07T01:27:10.130 | null | null | 382547 | null |
608595 | 2 | null | 605619 | 0 | null | You may find the answer quite intuitive. Consider the sketch below. As cosine similarity deals with the angle between $\bf{X}$ and $\bf{w}$, the question boils down to projecting $\bf{X}$ on a sphere passing through the center of the distribution of $\bf{X}$, that is, through $E[\bf{X}]$.
From the symmetry of the distr... | null | CC BY-SA 4.0 | null | 2023-03-07T01:43:29.787 | 2023-03-07T01:43:29.787 | null | null | 382548 | null |
608596 | 1 | 616605 | null | 0 | 34 | I am currently reading up on RealNVP, which has the following transformations according [Lilian Weng](https://lilianweng.github.io/posts/2018-10-13-flow-models/):
$$
\begin{aligned}
\mathbf{y}_{1:d} &= \mathbf{x}_{1:d} \\
\mathbf{y}_{d+1:D} &= \mathbf{x}_{d+1:D} \odot \exp({s(\mathbf{x}_{1:d})}) + t(\mathbf{x}_{1:d})
... | Normalizing Flows Invertibility | CC BY-SA 4.0 | null | 2023-03-07T02:22:29.203 | 2023-05-22T17:50:46.887 | null | null | 269616 | [
"probability",
"neural-networks",
"mathematical-statistics",
"generative-models",
"normalizing-flow"
] |
608597 | 1 | null | null | 0 | 15 | I have trained a small Bayesian neural network by splitting a dataset into the usual training and test sets.
Now I also have an analytical model whose parameters need to be estimated by MCMC methods on the same dataset.
I would like to compare the quality of the predictions.
Should I split the dataset into test and ... | comparing the performance of a neural network with the prediction of an analytical model: question on the test dataset | CC BY-SA 4.0 | null | 2023-03-07T02:42:27.657 | 2023-03-07T02:42:27.657 | null | null | 275569 | [
"regression",
"bayesian",
"model-comparison"
] |
608598 | 1 | null | null | 1 | 11 | I have a phylogenetic tree of samples, and some samples belong to one of two groups. I'm trying to show that the mean pairwise distance of samples in one group is larger than that of the other group.
I computed the pairwise genetic distance of all samples in a tree using cophenetic.phylo and obtaining the mean p... | Distribution of genetic distances in a phylogenetic tree | CC BY-SA 4.0 | null | 2023-03-07T02:45:21.813 | 2023-03-07T02:45:21.813 | null | null | 8089 | [
"distributions",
"phylogeny"
] |
608599 | 1 | 608647 | null | 2 | 48 | I am trying to create a database with a dichotomized dependent variables and a bunch of binary (1/0) independent variables. I'd like to pre-set some associations between the independent variables and the outcome. Doing so is relatively easy - after generating the random binary covariates, I run a binomial function to p... | switching from probability to classification while maintaining exact ORs | CC BY-SA 4.0 | null | 2023-03-07T03:33:19.103 | 2023-03-07T14:53:36.133 | 2023-03-07T14:24:21.453 | 292896 | 292896 | [
"machine-learning",
"logistic",
"simulation",
"odds-ratio"
] |
608600 | 1 | 608618 | null | 3 | 26 | In Causality - Models, Reasoning, And Inference by Pearl, definition 2.3.3 reads as follows -
>
One latent structure $L$ = $\langle D,O \rangle$ is preferred to
another $L^{'}$ = $\langle D^{'},O \rangle$ (written $L \preceq L^{'}$)
if and only if $D^{'}$ can mimic $D$ over $O$ - that is, if
and only if for every $\Th... | In what sense is one latent causal structure "preferred to" another? Definition 2.3.3 from Causality by Pearl | CC BY-SA 4.0 | null | 2023-03-07T03:40:24.743 | 2023-03-07T10:03:47.627 | null | null | 331772 | [
"causality",
"graphical-model",
"bayesian-network",
"causal-diagram"
] |
608601 | 2 | null | 157582 | 0 | null | Here is a simulation to demonstrate that @soakley's confidence interval works for a normally distributed random variable.
- It takes $10^4$ values of $\mu$ in $[-10,10]$ and of $\sigma$ in $[0,10]$ and
- for each of those pairs, it generates $10^6$ single observations $x$ and
- sees what proportion of the correspon... | null | CC BY-SA 4.0 | null | 2023-03-07T03:43:28.657 | 2023-03-07T03:48:53.750 | 2023-03-07T03:48:53.750 | 2958 | 2958 | null |
608602 | 1 | null | null | 0 | 8 | I've been asked to obtain a suitable differential equation describing the concentration profile of the oxygen gas in the wastewater column.
Things I know so far:
- Wastewater is stored in a large cuboidal tank, which is in contact with oxygen gas
- Oxygen diffuses into the water isothermally
- Height of wastewater i... | Am I missing a generation/consumption/accumulation term in this mass balance? | CC BY-SA 4.0 | null | 2023-03-07T04:51:11.673 | 2023-03-07T04:51:11.673 | null | null | 382554 | [
"modeling"
] |
608604 | 1 | null | null | 1 | 14 | I am doing a Before/After analysis, which is aiming at evaluating the effect of a change. Assume I have made a change at the beginning of June-2022, and I want to evaluate the effect of the change based on a one-month interval. This results in two time series, related to two distinct months, i.e., the first one is for ... | How to apply a seasonal index into an eCDF? | CC BY-SA 4.0 | null | 2023-03-07T04:57:32.647 | 2023-03-07T23:57:55.030 | 2023-03-07T23:57:55.030 | 371243 | 371243 | [
"time-series",
"seasonality",
"empirical-cumulative-distr-fn",
"wasserstein"
] |
608605 | 1 | null | null | 1 | 21 | In [https://www.jmlr.org/papers/volume9/zhang08a/zhang08a.pdf](https://www.jmlr.org/papers/volume9/zhang08a/zhang08a.pdf), a Maximal Ancestral Graph (MAG) is defined as:
```
a mixed-edge graph that:
i) does not contain any directed or almost directed cycles (ancestral) and
ii) there is no inducing path between any tw... | Does a PAG (partial ancestral graph) have almost directed cycles with circular endpoints? | CC BY-SA 4.0 | null | 2023-03-07T04:59:30.787 | 2023-03-08T00:51:44.913 | 2023-03-08T00:51:44.913 | 106439 | 106439 | [
"causality",
"graphical-model",
"causal-diagram"
] |
608606 | 1 | null | null | 0 | 27 | I have recently been encountering a challenge with BTYD, specifically with the Pareto/NBD model. From the papers that I read by Fader, there are a few assumptions underlying this model, and the first and foremost is:
i) Customers go through two stages in their “lifetime” with a specific firm: they are “alive” for some period of ... | BTYD prior model tweaking | CC BY-SA 4.0 | null | 2023-03-07T05:00:34.577 | 2023-03-07T05:00:34.577 | null | null | 382556 | [
"customer-lifetime-value"
] |
608607 | 2 | null | 608347 | 1 | null | I was able to achieve my goal by using entropy. Since I want the distribution to look uniform on a larger scale, I used binning and then calculated the entropy, applying this approach with multiple bin sizes together.
| null | CC BY-SA 4.0 | null | 2023-03-07T05:07:08.200 | 2023-03-07T05:07:08.200 | null | null | 319408 | null |
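A minimal Python sketch of this binning-then-entropy approach (the data and bin sizes below are illustrative, not from the original post):

```python
import numpy as np

# Bin the values, then compute the Shannon entropy of the bin proportions;
# repeat with several bin sizes, as the answer above suggests.
def binned_entropy(x, n_bins):
    counts, _ = np.histogram(x, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                           # drop empty bins before taking logs
    return float(-(p * np.log2(p)).sum())  # Shannon entropy in bits

rng = np.random.default_rng(1)
uniform = rng.uniform(size=10_000)            # spread out across [0, 1]
clumped = rng.normal(0.5, 0.05, size=10_000)  # concentrated around 0.5

# A uniform sample is close to the maximum entropy log2(n_bins);
# a clumped sample scores noticeably lower at every bin size.
for b in (8, 16, 32):
    print(b, binned_entropy(uniform, b), binned_entropy(clumped, b))
```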
608608 | 1 | null | null | 4 | 277 | I'm trying to plan a study based on previous data and I need to know the sample size required for a given effect size.
Previous data looks like this:
|Rating |1 |2 |3 |
|------|-|-|-|
|control |0 |20 |11 |
|treatment |6 |14 |12 |
where these are counts of the number of samples given a particular ranking at some ti... | Question about sample size indicated from power analysis for chi-square analysis in Python | CC BY-SA 4.0 | null | 2023-03-07T05:35:54.470 | 2023-03-07T15:12:14.937 | 2023-03-07T08:28:12.777 | 164936 | 367293 | [
"mathematical-statistics",
"statistical-significance",
"python",
"statistical-power"
] |
608609 | 1 | 608791 | null | 2 | 168 | I've been asked to fit a ZeroInflatedPoisson model on a dataset to predict Y (count data) for an assignment.
First, I did this manually:
- Create a binary variable (Y_IND) based on Y where Y_IND = 0 if Y = 0, and 1 if Y >=1.
- Fit a statsmodels Logistic Regression model using X variables to predict the binary variabl... | Statsmodels ZeroInflatedPoisson - Unable To Converge | CC BY-SA 4.0 | null | 2023-03-07T06:06:24.167 | 2023-03-08T18:05:00.970 | null | null | 336714 | [
"logistic",
"poisson-regression",
"zero-inflation",
"statsmodels"
] |
608610 | 1 | null | null | 0 | 51 | For $X_1,\dots, X_n$ an i.i.d. sample from $X\sim \mathrm{Bernoulli}(p)$, I try to verify that the estimator $\hat{p}=\bar{X}$ (the sample mean) is the UMVUE for the unknown parameter $p$.
I know that $\hat{p}$ is unbiased. I try to show that $Var[\hat{p}]=CRLB$ (Cramer-Rao lower bound), which is
$$
CRLB=\frac{[I'(\theta)]^2}{nI(\theta)},
$... | Which Fisher information should I use for Cramer-Rao lower bound? | CC BY-SA 4.0 | null | 2023-03-07T07:59:47.757 | 2023-03-07T09:43:52.993 | 2023-03-07T09:43:52.993 | 362671 | 334918 | [
"mathematical-statistics",
"estimation",
"inference",
"likelihood",
"cramer-rao"
] |
608611 | 1 | null | null | 1 | 45 | I have a very specific work-related question which I would like a second opinion on.
We have a device that can be powered via battery and AC mains. We have identified a specific failure mode where, suddenly, the battery cannot provide power to the device. Let's call the probability of this event P(F).
In order to pose a risk to the user, ... | Probability of failure event? | CC BY-SA 4.0 | null | 2023-03-07T08:09:29.023 | 2023-03-07T08:09:29.023 | null | null | 382568 | [
"probability"
] |
608612 | 2 | null | 608608 | 4 | null | The boundary for what is "just statistically significant" (i.e. the p-value is just below some "significance threshold" such as 0.05) is, if everything you observe is the true state of nature, around the point where you would have 50% power with that sample size. This is rather obvious, when you think about it: If what... | null | CC BY-SA 4.0 | null | 2023-03-07T08:12:29.010 | 2023-03-07T08:12:29.010 | null | null | 86652 | null |
608614 | 2 | null | 608608 | 5 | null | As you don't provide a lot of details about the goal of your study, from the outside it looks a bit like your null hypothesis may be ill-defined:
- why use a chi-squared test, when the variable Rating probably has an order? Don't you want to know whether the treatment tends to increase or decrease the rating? The approach... | null | CC BY-SA 4.0 | null | 2023-03-07T09:01:30.783 | 2023-03-07T15:12:14.937 | 2023-03-07T15:12:14.937 | 164936 | 164936 | null |
608615 | 2 | null | 135739 | 0 | null | My 2p since this question has resurfaced to the top after few years with no answer...
In my opinion, it pays off to invest in a workflow manager. I'm very happy with [snakemake](https://snakemake.github.io/) and it's been a game-changer after having spent quite some time hacking together README files, bash scripts, and... | null | CC BY-SA 4.0 | null | 2023-03-07T09:13:33.237 | 2023-03-07T09:18:44.830 | 2023-03-07T09:18:44.830 | 31142 | 31142 | null |
608617 | 2 | null | 608610 | 3 | null | Cramér-Rao Lower Bound would be of the form $\operatorname{Var}_\theta(T(\mathbf X) ) \geq \mathscr I(\theta)^{-1}. $ For exponential family, $\mathscr I(\theta) =\mathbb E_\theta\left[-\partial^2_\theta \ln f(\mathbf x;\theta)\right],$ which for $X_i\overset{\text{i.i.d.}}{\sim}\mathrm{Ber}(p) $ is
\begin{align}\maths... | null | CC BY-SA 4.0 | null | 2023-03-07T09:41:35.873 | 2023-03-07T09:41:35.873 | null | null | 362671 | null |
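The bound in the answer above can be checked numerically: with per-observation Fisher information $I(p) = 1/(p(1-p))$, the CRLB for an unbiased estimator of $p$ from $n$ observations is $p(1-p)/n$, which the sample mean attains. A quick Python sketch (the parameter values are illustrative):

```python
import numpy as np

# Check that Var(p_hat) attains the CRLB for i.i.d. Bernoulli(p) data:
# I(p) = 1 / (p * (1 - p)) per observation, so CRLB = p * (1 - p) / n.
rng = np.random.default_rng(0)
p, n, reps = 0.3, 50, 200_000

# The sample mean of n Bernoulli(p) draws equals Binomial(n, p) / n
phat = rng.binomial(n, p, size=reps) / n
var_phat = phat.var()
crlb = p * (1 - p) / n

print(var_phat, crlb)  # the two numbers agree closely
```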
608618 | 2 | null | 608600 | 1 | null |
## Definition 2.3.3 is in essence a statement about "excess edges" or "excess dependencies"
Let us for a second assume there are no hidden variables. Then, a structure $L'$ that can, with the right parametrization $\Theta'_{D'}$, mimic all probability distributions of an alternative structure $L$ (such that $P_{[O]}... | null | CC BY-SA 4.0 | null | 2023-03-07T10:03:47.627 | 2023-03-07T10:03:47.627 | null | null | 250702 | null |
608619 | 1 | 608628 | null | 0 | 31 | Suppose I have a categorical dataset and I'm doing data pre-processing.
What is the correct order of applying these 3 techniques?
- Train Test split
- SMOTEN to over sampler the minority class
- Categorical encoding of variables (a mix of one hot, label encoding and target encoding)
| Order of pre-processing the dataset | CC BY-SA 4.0 | null | 2023-03-07T10:09:12.843 | 2023-03-07T11:50:28.727 | null | null | 376559 | [
"categorical-data",
"dataset",
"categorical-encoding",
"data-preprocessing",
"smote"
] |
608620 | 1 | null | null | 3 | 60 | I would like to generate a propensity score for a continuous treatment in R, and then control for the propensity score in a structural equation model (using the lavaan package). I'm aware that the twangContinuous package can generate propensity scores for continuous treatments, but it doesn't seem there is an option fo... | How to generate a propensity score for a continuous treatment in R? | CC BY-SA 4.0 | null | 2023-03-07T10:15:23.290 | 2023-03-07T10:15:23.290 | null | null | 269031 | [
"structural-equation-modeling",
"propensity-scores",
"lavaan"
] |
608622 | 1 | null | null | 1 | 60 | Using R, I'm performing a backtest on a time series by using quantile regression (quantreg::rq) on a number of features.
These features are selected based on a condition such as p-values <= 5%.
If I run the routine multiple times, I always end up with the same betas/coefficients on the features, however p-values are un... | p-values unstable using quantreg::rq in R | CC BY-SA 4.0 | null | 2023-03-07T10:31:37.457 | 2023-03-07T11:35:47.327 | null | null | 382577 | [
"r",
"regression",
"time-series",
"p-value",
"quantile-regression"
] |
608623 | 2 | null | 608525 | 2 | null | Your model has an overly complicated random-effects structure. I would suggest first thinking about your research question, i.e., which associations between `RT` and the variables `Condition`, `HighLow`, and `Correct` do you want to study, and translate this to the specific fixed effects terms to include in the model. ... | null | CC BY-SA 4.0 | null | 2023-03-07T10:56:43.323 | 2023-03-07T10:56:43.323 | null | null | 219012 | null |
608624 | 1 | null | null | 0 | 34 | I am very new to analyzing and forecasting time-series data, so apologies if this question has an obvious answer.
I am trying to find the residuals between stationarized price data and a white-noise ARIMA(0,0,0).
```
library(forecast)
library(tseries)
DJT = read.csv("DJTA.csv")
DJT_ts <- ts(DJT$DJTA, start=c(2013,1)... | ARIMA(0,0,0) residuals are the same as the timeseries data | CC BY-SA 4.0 | null | 2023-03-07T11:04:51.730 | 2023-03-07T17:53:55.100 | 2023-03-07T17:53:55.100 | 53690 | 382583 | [
"forecasting",
"arima",
"residuals",
"white-noise"
] |
608625 | 2 | null | 536308 | 0 | null | I just came across this question and found a simulation study (Valente et al., 2021) showing that permuting all the data before CV is correct.
And here is the reason.
>
A theoretical insight into why the other resampling schemes result in an inflation of false positives can be gained from (Bengio and Grandvalet, 2004), ... | null | CC BY-SA 4.0 | null | 2023-03-07T11:22:35.157 | 2023-03-07T11:23:36.587 | 2023-03-07T11:23:36.587 | 382584 | 382584 | null |
608626 | 2 | null | 608525 | 1 | null | The singular effects error is no accident and is fairly common in fitting complicated interaction models (Meteyard & Davies, 2020). This typically happens when a mixed effects model has a random effects structure specified that doesn't fit the data well. For example, there may be very little variance in the way you mod... | null | CC BY-SA 4.0 | null | 2023-03-07T11:35:10.833 | 2023-03-07T12:10:55.283 | 2023-03-07T12:10:55.283 | 345611 | 345611 | null |
608627 | 2 | null | 608622 | 0 | null | This is an example of the instability of feature selection and a drawback of the threshold-based approach that you take.
There’s basically no difference between the $0.049$ and $0.0536$ p-values you give, yet your approach to feature selection treats those as dramatically different.
It is typical to see instability in ... | null | CC BY-SA 4.0 | null | 2023-03-07T11:35:47.327 | 2023-03-07T11:35:47.327 | null | null | 247274 | null |
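A small Python sketch of this instability (the data-generating process is made up, and a normal approximation stands in for the t-test to keep it numpy-only): refitting on bootstrap resamples shows a borderline feature flickering in and out of "significance".

```python
import numpy as np
from math import erfc, sqrt

# OLS p-value for the third column of X (two-sided, normal approximation)
def pvalue_x2(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = X.shape[0] - X.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
    z = abs(beta[2] / se[2])
    return erfc(z / sqrt(2))  # equals 2 * (1 - Phi(z))

rng = np.random.default_rng(0)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 0.5 * x1 + 0.15 * x2 + rng.normal(size=n)  # x2 has a weak, borderline effect
X = np.column_stack([np.ones(n), x1, x2])

# Count how often x2 passes the p < 0.05 threshold across bootstrap refits
n_boot = 500
selected = sum(
    pvalue_x2(X[idx], y[idx]) < 0.05
    for idx in (rng.integers(0, n, size=n) for _ in range(n_boot))
)
print(f"x2 'selected' in {selected}/{n_boot} bootstrap refits")
```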
608628 | 2 | null | 608619 | 0 | null | While one hot and label encoding can be applied dataframe wise before splitting (using e.g. pandas routines), it's better to split first and build a proper pipeline which would simplify the input of the new data without extra manual steps.
Target encoding should always be done after splitting, otherwise it creates a hu... | null | CC BY-SA 4.0 | null | 2023-03-07T11:50:28.727 | 2023-03-07T11:50:28.727 | null | null | 361202 | null |
608629 | 2 | null | 608498 | 7 | null | This is a consequence of the (general) fact that (auto)correlation matrices like
$$
\begin{pmatrix}
1&\rho_1&\rho_2\\
\rho_1&1&\rho_1\\
\rho_2&\rho_1&1\\
\end{pmatrix}
$$
are positive [semi-definite](https://stats.stackexchange.com/questions/69114/why-does-correlation-matrix-need-to-be-positive-semi-definite-and-what-d... | null | CC BY-SA 4.0 | null | 2023-03-07T11:51:37.370 | 2023-03-10T07:47:38.300 | 2023-03-10T07:47:38.300 | 67799 | 67799 | null |
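A quick numerical illustration of that constraint with NumPy (the $\rho$ values here are illustrative):

```python
import numpy as np

# An autocorrelation matrix must be positive semi-definite, which restricts
# which (rho1, rho2) pairs can occur together.
def acf_matrix(rho1, rho2):
    return np.array([[1.0, rho1, rho2],
                     [rho1, 1.0, rho1],
                     [rho2, rho1, 1.0]])

# rho1 = 0.9 with rho2 = 0.0 is impossible: the matrix has a negative eigenvalue
min_eig_bad = np.linalg.eigvalsh(acf_matrix(0.9, 0.0)).min()
# rho1 = 0.9 with rho2 = 0.7 is admissible: all eigenvalues are non-negative
min_eig_ok = np.linalg.eigvalsh(acf_matrix(0.9, 0.7)).min()
print(min_eig_bad, min_eig_ok)
```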
608630 | 1 | null | null | 0 | 91 | I am trying to interpret an NMDS analysis. And for that I did a shepard plot. I understand that ideally the points should follow a monotonic line, which is not really the case in my example. But I do not really understand what Non-metric fit and metric fit correspond to. What can I extract from these squared-r? Does it... | Interpretation of a shepard plot for NMDS | CC BY-SA 4.0 | null | 2023-03-07T12:19:30.930 | 2023-03-07T17:51:37.453 | 2023-03-07T17:51:37.453 | 234629 | 234629 | [
"multivariate-analysis",
"dimensionality-reduction",
"r-squared"
] |
608632 | 2 | null | 46591 | 1 | null | There are competing factors.
On the one hand, multicollinearity inflates standard errors. On the other hand, removing a variable to remove the multicollinearity can lead to omitted-variable bias, and it is not clear that you are better off with a narrow standard error for a biased estimate than you would be with the m... | null | CC BY-SA 4.0 | null | 2023-03-07T12:22:25.643 | 2023-03-07T12:22:25.643 | null | null | 247274 | null |
608633 | 2 | null | 45259 | 0 | null | This does not make sense to me. The point of regression is to use the variation in covariates ($x$ variables) to explain the variation in some variable of interest ($y$). If you have no variability in a covariate, it isn’t helping to accomplish that goal.
What you might find interesting is to fit a quantile regression ... | null | CC BY-SA 4.0 | null | 2023-03-07T12:27:11.717 | 2023-03-07T12:27:11.717 | null | null | 247274 | null |
608635 | 2 | null | 608528 | 0 | null | Survival times are sometimes modelled as
$$Y_i = e^{\beta X_i} \cdot \epsilon_i$$
where the $\epsilon_i$ are exponential distributed. The term $e^{\beta X_i}$ relates to the risk and increases or decreases the mortality rate.
[Cox Snell residuals](https://www.jstor.org/stable/2984505) are in this case defined as the so... | null | CC BY-SA 4.0 | null | 2023-03-07T13:08:47.117 | 2023-03-07T13:08:47.117 | null | null | 164061 | null |
608636 | 2 | null | 608523 | 0 | null | After some thinking, I believe the answer to my question is yes: there is always such a re-discretization as long as $k>l\geq2$.
The proof is as follows:
Denote the transition probability matrix from $X$ to $Y$ as $P(Y|X):=[pr(y_i|x_j)]_{i,j}$.
Then, $X \not \perp Y$ means at least two columns of $P(Y|X)$ are different... | null | CC BY-SA 4.0 | null | 2023-03-07T13:22:59.153 | 2023-03-07T13:22:59.153 | null | null | 345167 | null |
608638 | 1 | null | null | 0 | 30 | Balls fall into a box at a certain rate $R$. There is no limit to the number of balls the box can hold, but each ball leaves the box at rate $\gamma$, and when two balls hit each other they both leave the box. The rate at which two balls hit each other is $\beta$.
One can build a Markov chain to describe this proces... | stationary distribution of a continuous time markov chain | CC BY-SA 4.0 | null | 2023-03-07T13:34:22.187 | 2023-03-07T13:34:22.187 | null | null | 382593 | [
"probability",
"distributions",
"stationarity",
"markov-process",
"transition-matrix"
] |
608639 | 2 | null | 608297 | 4 | null | There are situations where you can bootstrap. If your sample size is large, then bootstrapping when possible is really convenient for a number of reasons:
- If it works, then it works for almost any metric you can define, while frequentist analytical solutions tend to be derived for one metric at a time (bad luck if you... | null | CC BY-SA 4.0 | null | 2023-03-07T13:59:31.650 | 2023-03-07T14:08:15.847 | 2023-03-07T14:08:15.847 | 86652 | 86652 | null |
608640 | 1 | null | null | 0 | 18 | I tried to measure the entropy of a binary matrix like the one below, using the code at [https://github.com/cosmoharrigan/matrix-entropy](https://github.com/cosmoharrigan/matrix-entropy)
(I already saw the question : Measuring entropy/ information/ patterns of a 2d binary matrix)
[](https://i.stack.imgur.com/WAPez.png)
(red implies 1 a... | Measuring entropy of a binary matrix with biased probability | CC BY-SA 4.0 | null | 2023-03-07T14:15:23.753 | 2023-03-07T14:15:23.753 | null | null | 366997 | [
"algorithms",
"matrix",
"entropy",
"information-theory",
"pattern-recognition"
] |
608642 | 2 | null | 23197 | 1 | null | `aov3` in the sasLM package in R will give the same results as SAS Type III. (Continued after output.)
```
library(sasLM)
aov3(Y ~ T*B, Data) # Data is defined in question
```
giving:
```
Response : Y
Df Sum Sq Mean Sq F value Pr(>F)
MODEL 5 77.900 15.580 8.9029 0.02733 *
T ... | null | CC BY-SA 4.0 | null | 2023-03-07T14:27:16.660 | 2023-03-08T13:44:27.543 | 2023-03-08T13:44:27.543 | 4704 | 4704 | null |