| Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
606790 | 2 | null | 606745 | 0 | null | I think it is useful to be able to look at a formula with no context and reason about what it might mean. A few facets of this formula signal to me what might be happening.
The $n-2$ is a signal to me that this has something to do with a simple linear regression, where the degrees of freedom equal the sample size $(n)$ ... | null | CC BY-SA 4.0 | null | 2023-02-27T14:20:53.880 | 2023-02-27T14:36:45.100 | 2023-02-27T14:36:45.100 | 247274 | 247274 | null |
606791 | 2 | null | 606726 | 9 | null | In addition to the excellent answers by Sextus Empiricus and Dave, when you don't know how to approach a problem, a naive but very effective way to get an approximate answer is by just simulating the process you describe.
In R you could do this as follows:
```
set.seed(1234)
MC <- 1e5
x <- numeric(MC)
for(i in 1:MC){
... | null | CC BY-SA 4.0 | null | 2023-02-27T14:23:37.373 | 2023-02-27T18:41:15.643 | 2023-02-27T18:41:15.643 | 296197 | 176202 | null |
606792 | 2 | null | 156861 | 2 | null | An even simpler code is to use directly a poisson regression with
```
library(tidyverse)
library(broom)
d <- data.frame(g = factor(1:2),
                s = c(25, 75),
                f = c(38, 162))
d_long = pivot_longer(d, cols = 2:3)
tidy(glm(value ~ name*g, data=d_long, family = 'poisson'))
```
Some explanations
If you... | null | CC BY-SA 4.0 | null | 2023-02-27T14:38:42.233 | 2023-02-28T11:02:15.890 | 2023-02-28T11:02:15.890 | 83585 | 83585 | null |
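A base-R check of what the interaction term captures (a sketch that rebuilds the long data by hand to avoid the tidyverse/broom dependencies of the snippet above): in this saturated log-linear model, the `name:g` interaction coefficient equals the log odds ratio of the underlying 2×2 table of counts.

```r
# Saturated Poisson log-linear model on the 2x2 table of counts:
# the interaction coefficient is the log odds ratio of the table.
d_long <- data.frame(
  g     = factor(rep(1:2, each = 2)),
  name  = factor(rep(c("s", "f"), times = 2)),
  value = c(25, 38, 75, 162)
)
fit <- glm(value ~ name * g, data = d_long, family = poisson)
# With treatment coding (baselines name = "f", g = "1"):
# coef = log(75) - log(162) - log(25) + log(38) = log odds ratio
unname(coef(fit)["names:g2"])
```

The Wald z-test on this coefficient is then a test of whether the success/failure split differs between the two groups.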
606793 | 2 | null | 407038 | 2 | null |
### 4-fold rotational symmetry
The distribution looks like
```
n = 10^3
x = rnorm(n,0,1)
y = x*(rbinom(n,1,0.5)*2-1)
plot(x,y, pch=21,
col=rgb(0,0,0,0.1),bg=rgb(0,0,0,0.1))
```
[](https://i.stack.imgur.com/o280mm.png)
It has a 4-fold rotational symmetry which means that a quarter rotation leaves the distribut... | null | CC BY-SA 4.0 | null | 2023-02-27T14:38:48.080 | 2023-02-28T15:08:54.613 | 2023-02-28T15:08:54.613 | 164061 | 164061 | null |
606794 | 1 | null | null | 1 | 96 | I've been trying to create a function for the Irwin Hall distribution that doesn't face the same issue as the unifed package implementation. Because the function suffers from numerical issues, I figured using Rmpfr to be able to specify precision would help. I was wrong. I also attempted to use as.brob() from the Brobd... | Algorithm for Irwin Hall Distribution | CC BY-SA 4.0 | null | 2023-02-27T14:42:55.367 | 2023-03-08T21:37:02.290 | 2023-03-08T21:37:02.290 | 32301 | 32301 | [
"r",
"distributions",
"algorithms",
"computational-statistics",
"numerics"
] |
606795 | 2 | null | 44201 | 0 | null | From Proakis, Digital communication 5th ed., the straightforward relationship is
$$
\varphi(\omega) = M(j\omega)
$$
and
$$
M_x(t) = \varphi_x(-j t)
$$
| null | CC BY-SA 4.0 | null | 2023-02-27T15:11:31.140 | 2023-02-27T15:11:31.140 | null | null | 380969 | null |
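As a quick sanity check of these two identities, take the standard normal case, where both functions are available in closed form:
$$X \sim N(0,1):\qquad M_X(t) = e^{t^2/2},\qquad \varphi_X(\omega) = M_X(j\omega) = e^{(j\omega)^2/2} = e^{-\omega^2/2},$$
and conversely $M_X(t) = \varphi_X(-jt) = e^{-(-jt)^2/2} = e^{t^2/2}$.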
606796 | 1 | null | null | 0 | 18 | I'm performing a 3-step analysis in Latent Gold 5.1. In step 3 I estimate a model with an ordinal distant outcome. The documentation of Latent Gold gives advice for when to use ML (categorical dependent) or BCH (continuous and count dependent). But it does not explicitly state which method to use when modelling an ordi... | When performing a 3-step analysis in Latent Gold, should I use ML or BCH to model an ordinal outcome? | CC BY-SA 4.0 | null | 2023-02-27T15:22:34.643 | 2023-03-15T20:04:57.553 | null | null | 108040 | [
"ordered-logit",
"latent-class"
] |
606797 | 2 | null | 564611 | 0 | null | Even though this analysis might not be relevant (see comment from @mkt), this code works:
`mod <- lme(Climate_variable ~ Scenario, correlation = corExp(form = ~(Lon+Lat)/Scenario), random = list(~1|Site))`
| null | CC BY-SA 4.0 | null | 2023-02-27T15:30:54.860 | 2023-02-27T15:30:54.860 | null | null | 211755 | null |
606798 | 2 | null | 606135 | 1 | null | For option 1, you wouldn't proceed with a set of t-tests. When you have several samples to compare against a control, "[Dunnett's test](https://en.wikipedia.org/wiki/Dunnett%27s_test) takes into consideration the special structure of comparing treatment against control, yielding narrower confidence intervals."
You shou... | null | CC BY-SA 4.0 | null | 2023-02-27T15:36:46.967 | 2023-02-27T15:36:46.967 | null | null | 28500 | null |
606801 | 2 | null | 606726 | 5 | null | There are 32 different equally probable outcomes of 5 throws, of which two have an absolute value of the sum of throws equal to 5, 10 have 3, and 20 have 1, so the total sum over all these cases is $5\times2 + 3\times10 + 1\times20 = 60$. To get the average, we need to divide 60 by the number of total cases, which is 32. $\frac{60}{32} = \f... | null | CC BY-SA 4.0 | null | 2023-02-27T15:52:58.333 | 2023-02-27T16:00:17.887 | 2023-02-27T16:00:17.887 | 380972 | 380972 | null |
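The exact value can also be confirmed by simulation; a minimal R sketch, assuming the game is 5 independent fair throws of $\pm 1$ and the quantity of interest is the mean absolute value of their sum:

```r
# Monte Carlo check of the exact answer 60/32 = 15/8 = 1.875
set.seed(1234)
MC <- 1e5
x <- numeric(MC)
for (i in 1:MC) {
  throws <- sample(c(-1, 1), 5, replace = TRUE)  # five fair +1/-1 throws
  x[i] <- abs(sum(throws))
}
mean(x)  # close to 15/8 = 1.875
```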
606802 | 1 | null | null | 0 | 15 | I am trying to wrap my head around this basic idea. In the YOLO (2016) paper they run a standard CNN followed by 2 fully connected regression layers and get a 7x7x30 output layer.
What is stunning to me is that the 7x7 output seems to be spatially related to the initial input, as if you were dividing the input in a 7... | CNN output is directly related to an area of the input in Yolo (2016) paper | CC BY-SA 4.0 | null | 2023-02-27T15:59:24.657 | 2023-02-27T16:28:34.547 | 2023-02-27T16:28:34.547 | 22311 | 379919 | [
"neural-networks",
"conv-neural-network"
] |
606803 | 1 | null | null | 1 | 157 | I know similar questions were asked before. However, none of the existing answers help me with my problem:
I have a gls model with $y=\beta_0+\beta_1X_1+\beta_2X_2+\beta_3X_1X_2+e$. The VIF values for $X_1$ and the interaction effect $X_1X_2$ are 413.67 and 414.08 respectively. The VIF value for $X_2$ is only 1.044729. Furthermore, the resulting coefficie... | Multicollinearity and Interaction Effects | CC BY-SA 4.0 | null | 2023-02-27T16:19:46.177 | 2023-02-28T14:49:30.437 | null | null | 380975 | [
"interaction",
"multicollinearity",
"generalized-least-squares",
"variance-inflation-factor"
] |
606804 | 1 | null | null | 1 | 38 | I'm not sure how to word the title best for this context. But I am creating a metric with data that is correlated. I have first, the number of a specific dish prepped for the day. Second, I have the time to sell that dish out. The underlying assumption here is that, of course, a dish that has a higher prepped quantity ... | What does dividing one variable by another actually do? | CC BY-SA 4.0 | null | 2023-02-27T16:23:48.783 | 2023-02-27T17:05:00.817 | null | null | 380976 | [
"descriptive-statistics"
] |
606806 | 1 | null | null | 1 | 72 | In my data, I have about 10K predictive features (genes), and one target feature (age). I want to predict the ages according to the genes. The rows in the data are the patients. To do so I plan to use Regression Random Forest.
I don't want to use this many predictive features, so I want to do some feature selection fir... | Find correlation between continuous predictive features and a continuous target feature | CC BY-SA 4.0 | null | 2023-02-27T16:46:56.130 | 2023-05-02T12:19:56.047 | 2023-05-02T12:09:32.750 | 22047 | 357522 | [
"r",
"regression",
"machine-learning",
"predictive-models",
"feature-selection"
] |
606807 | 2 | null | 606803 | 1 | null | The advice you've heard that one does not have to consider multicollinearity in interactions probably comes from the fact that mean-centering your variables prior to computing the interaction term will sometimes (but not [always](https://journals.sagepub.com/doi/10.1177/0013164418817801)) reduce multicollinearity betwe... | null | CC BY-SA 4.0 | null | 2023-02-27T16:53:37.090 | 2023-02-28T14:49:30.437 | 2023-02-28T14:49:30.437 | 288142 | 288142 | null |
606809 | 1 | null | null | 0 | 37 | Motivated by an experimental investigation in pure mathematics, I am interested in the following type of reinforcement learning problems:
- The state is a collection of N integers, where there are no bounds on the values (or the bounds are very large);
- The possible actions are given by a collection of M integers, w... | Reinforcement learning with simple integer arithmetic actions | CC BY-SA 4.0 | null | 2023-02-27T16:58:54.557 | 2023-03-01T06:59:27.660 | 2023-03-01T06:59:27.660 | 380970 | 380970 | [
"machine-learning",
"reinforcement-learning",
"arithmetic"
] |
606810 | 2 | null | 606804 | 1 | null | It would certainly seem simpler to do so [take a ratio], because you wind up with one variable instead of two. However, you are invoking certain assumptions (which I'm not sure apply) based on your problem description, and the probability model of a ratio can be extremely poorly behaved whereas a regression model can h... | null | CC BY-SA 4.0 | null | 2023-02-27T17:05:00.817 | 2023-02-27T17:05:00.817 | null | null | 8013 | null |
606811 | 1 | null | null | 0 | 89 | I am looking to simulate the results for MLB hitters in terms of their FanDuel and DraftKings fantasy scores. I'm wondering if this is doable given only the following information per player:
```
Mean
Median
Standard Deviation
```
For normally distributed outcomes, this would be fairly easy. The issue is that I know th... | Simulating Outcomes for Skewed Data | CC BY-SA 4.0 | null | 2023-02-27T17:09:23.467 | 2023-02-28T13:49:05.730 | 2023-02-28T13:49:05.730 | 380978 | 380978 | [
"mathematical-statistics",
"simulation",
"skewness"
] |
606812 | 2 | null | 606778 | 3 | null | I want to add that a necessary and sufficient condition for $X$ and $X^2$ to be independent is that $X^2$ is degenerate.
The sufficiency is trivial. Conversely, suppose $X$ and $X^2$ are independent. Since $X^2$ is a function of $X$, it then follows that $X^2$ and $X^2$ are independent, whence
\begin{align}
E[X^4] = E[X^2 \... | null | CC BY-SA 4.0 | null | 2023-02-27T17:12:02.443 | 2023-02-27T17:12:02.443 | null | null | 20519 | null |
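The truncated display presumably completes along the standard lines:
$$E[X^4] = E[X^2 \cdot X^2] = E[X^2]\,E[X^2] = \left(E[X^2]\right)^2,$$
so $\operatorname{Var}(X^2) = E[X^4] - \left(E[X^2]\right)^2 = 0$, and a random variable with zero variance is almost surely constant, i.e. $X^2$ is degenerate.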
606813 | 1 | 606921 | null | 1 | 30 | I have DNA metabarcoding sequencing data in the following format:
|plot |Time_point |reads_species_A |Reads_species_B |reads_species_C |
|----|----------|---------------|---------------|---------------|
|1 |T1 |0 |245 |65 |
|2 |T1 |48 |455 |0 |
|3 |T1 |15 |5 |10 |
|1 |T3 |153 |23 |564 |
|2 |T3 |448 |468 |48 |
|3 |T3 ... | How to test differences (over time and between treatments) of a specific species in DNA metabarcoding sequencing data? | CC BY-SA 4.0 | null | 2023-02-27T17:12:41.690 | 2023-02-28T17:45:32.967 | 2023-02-28T17:00:48.880 | 374476 | 374476 | [
"chi-squared-test",
"poisson-regression",
"bioinformatics",
"sequence-analysis",
"compositional-data"
] |
606814 | 2 | null | 606803 | 2 | null | I think that you might be getting misled by the p-values of the individual coefficients for `X1` and the interaction term. Multicollinearity often isn't a big problem, particularly for predictive models. It just can make it hard to get precise estimates of individual coefficients.
When two predictors are highly correla... | null | CC BY-SA 4.0 | null | 2023-02-27T17:25:01.803 | 2023-02-27T17:36:31.103 | 2023-02-27T17:36:31.103 | 247274 | 28500 | null |
606815 | 1 | null | null | 0 | 8 | This is the data that I have; I would like to determine whether there was a significant difference between the groups in average age at death.
I do not have the original raw data
| Class of occupation | Number | Average age at death |
|---|---|---|
| Medical | 244 | 67.31 |
| Lawyer | ... |
 | I have a number of different groups with their populations and the average life expectancy, and I need a formula to determine statistical difference | CC BY-SA 4.0 | null | 2023-02-27T17:25:33.967 | 2023-02-27T17:25:33.967 | null | null | 380979 | [
"probability"
] |
606816 | 1 | null | null | 0 | 22 | The jump chain of a continuous-time Markov chain is a discrete-time Markov chain. I know that the existence of an invariant distribution for one chain does not imply the existence of an invariant distribution for the other. But I was wondering: are their invariant distributions connected in any way?
| invariant distribution in discrete time markov chain and continuous time markov chain | CC BY-SA 4.0 | null | 2023-02-27T17:42:50.320 | 2023-02-27T17:42:50.320 | null | null | 349806 | [
"markov-process"
] |
606817 | 1 | null | null | 2 | 46 | I'm trying to fit an ECM model with variance following a GARCH-DCC model (GARCH with dynamic cross correlation). It has 16 parameters for 2 assets (ECM : 4 gammas, 2 lambda, GARCH: 2 alphas, 2 beta, 2 omega, DCC: alpha and beta). Since I'm pretty new in econometrics, I don't know really much what amount of time this sh... | Accelerate the fitting of an ECM-GARCH model by computing MLE gradient numerically? | CC BY-SA 4.0 | null | 2023-02-27T17:45:44.030 | 2023-02-27T18:12:30.503 | 2023-02-27T18:12:30.503 | 372184 | 372184 | [
"optimization",
"garch",
"gradient",
"ecm"
] |
606819 | 1 | null | null | 0 | 32 | I am analysing a randomised controlled trial using a two level linear mixed model. This is a two arm, pre-post design, where the post test is the primary outcome and we control for pre-test as a covariate.
$L1: Y_{ik} = \beta_{0k} + \beta_{1k}Y0_{ik} + \beta_{2k}X1_{ik}+ r_{ik}, r_{ik}\sim N(0,\sigma^{2}_{|X})$
$L2:\... | How to extract pre/post correlations at each level of a multilevel model? (Cluster RCT - two levels - children clustered within schools) | CC BY-SA 4.0 | null | 2023-02-27T17:55:33.410 | 2023-02-28T16:44:45.847 | 2023-02-28T16:44:45.847 | 86393 | 86393 | [
"lme4-nlme",
"multilevel-analysis",
"r-squared",
"clinical-trials",
"intraclass-correlation"
] |
606820 | 2 | null | 497036 | 0 | null | Recall the error propagation formula for the addition and subtraction
operations:
$s[x+y]^2 = s[x]^2 + s[y]^2 + 2 s[x,y]$
and
$s[x-y]^2 = s[x]^2 + s[y]^2 - 2 s[x,y]$
Therefore:
$s[x+y]^2 - s[x-y]^2 = 4 s[x,y]$
Expanding the left hand side of this equation:
\begin{align*}
s[x+y]^2 - s[x-y]^2 & = \frac{1}{n-1} \sum_{i}... | null | CC BY-SA 4.0 | null | 2023-02-27T18:20:13.790 | 2023-02-27T18:20:13.790 | null | null | 297141 | null |
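The resulting identity $s[x,y] = \left(s[x+y]^2 - s[x-y]^2\right)/4$ can be verified numerically in R; `var` and `cov` both use the $n-1$ denominator, so the identity holds exactly up to floating point (a sketch with arbitrary simulated data):

```r
# Numeric check of cov(x, y) = (var(x + y) - var(x - y)) / 4
set.seed(42)
x <- rnorm(50)
y <- 0.5 * x + rnorm(50)
lhs <- (var(x + y) - var(x - y)) / 4
all.equal(lhs, cov(x, y))  # TRUE
```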
606822 | 1 | null | null | 8 | 281 | Given $n+1$ variables $p_0, p_1, \ldots, p_n$ defined over $\mathbb{R}^{+}$ so that $\sum_{i=0}^np_i=1$, and given a real number $1<x<n$, I want to generate random solutions of the equation so that every solution is equiprobable (or close enough to be equiprobable) $$
0p_0 + 1p_1+\ldots+np_n=x
$$
Since all variables ar... | Generating uniformly distributed random solutions of a linear equation | CC BY-SA 4.0 | null | 2023-02-27T18:28:04.347 | 2023-02-27T20:53:42.013 | 2023-02-27T20:34:11.427 | 7224 | 349199 | [
"uniform-distribution",
"random-generation"
] |
606823 | 1 | 606827 | null | 1 | 21 | I am attempting to determine statistical significance of various customer programs for farmers use of sustainable products. Farm sales of sustainable products follows roughly an exponential curve, where many people purchase little but a few purchase a lot. The customer programs tend to target the larger purchasing cust... | Determining statistical significance on exponential variables and timeseries | CC BY-SA 4.0 | null | 2023-02-27T18:30:19.397 | 2023-02-27T19:12:49.380 | null | null | 380971 | [
"hypothesis-testing",
"mathematical-statistics",
"exponential-distribution"
] |
606824 | 1 | null | null | 0 | 38 | I am conducting research on nonprofit cultural organisations for my master thesis.
I have 100 units of analysis (organization's financial statements) coming from 23 organisations. Therefore, I have panel data. Specifically, unbalanced panel data. Statements range from 2011 to 2019.
The following schema shows the compos... | The independent variable is a share of a part constituting the dependent variable | CC BY-SA 4.0 | null | 2023-02-27T18:52:36.580 | 2023-03-03T03:13:48.583 | 2023-03-03T03:13:48.583 | 11887 | 380986 | [
"mixed-model",
"predictor",
"dependent-variable",
"economics"
] |
606827 | 2 | null | 606823 | 1 | null | Your basic question is how to deal with an exponentially distributed time series in a regression. The generalized linear model provides a powerful and flexible regression framework. For an exponentially distributed outcome, a GLM with a Gamma family, that is an inverse link and a ^2 variance, meets the probabilistic re... | null | CC BY-SA 4.0 | null | 2023-02-27T19:12:49.380 | 2023-02-27T19:12:49.380 | null | null | 8013 | null |
606828 | 1 | 606945 | null | 0 | 51 | I am researching if it makes sense to create personalized emails based on past purchase behavior, but I do not know how to design A/B tests. The starting data I have is a list of about 500,000 subscribed email addresses, about 1,700 of which have a common attribute: they've purchased a certain category of product. To d... | A/B test: measure likelihood of outcomes based on historical events | CC BY-SA 4.0 | null | 2023-02-27T19:22:32.103 | 2023-02-28T21:57:09.993 | 2023-02-27T21:26:05.177 | 368908 | 368908 | [
"statistical-significance",
"experiment-design"
] |
606829 | 2 | null | 606822 | 0 | null | Here's on possible approach, in R. It takes random samples from a simplex as starting points, and then uses optimisation to find nearby points that meet your criteria. I'm not sure how to verify that the resulting values are still "uniformly" distributed in the admissible parameter space.
```
find_solution = function(n... | null | CC BY-SA 4.0 | null | 2023-02-27T19:30:29.410 | 2023-02-27T19:30:29.410 | null | null | 42952 | null |
606830 | 2 | null | 606811 | 1 | null | Basically kluged this together, but it seems to work okay when comparing it to last season's data. I was using last season's data to make the avg, med, stdv. Just kind of started with the regular values and modified weights and things until it looked right. Quick and dirty until you or anyone else on here gets to the e... | null | CC BY-SA 4.0 | null | 2023-02-27T19:34:21.997 | 2023-02-27T19:34:21.997 | null | null | 380988 | null |
606832 | 1 | null | null | 1 | 28 | I am familiar with formulae for the paired-proportions Z test when the crosstab table is available. However, if only the two proportions are available, I am having trouble figuring out the S.E. of $P_1 - P_2$.
If the following crosstab is available
| |Success |Failure |Total |
|---|-------|-------|-----|
|Suce... | Z test for paired proportions when only the two proportions are available | CC BY-SA 4.0 | null | 2023-02-27T19:40:53.277 | 2023-02-27T19:46:22.833 | 2023-02-27T19:46:22.833 | 212308 | 212308 | [
"proportion",
"z-test"
] |
606834 | 2 | null | 423319 | 1 | null | It seems to me that you are finding this difficult because you are focussed on only two aspects where there are (at least) four that are important: sample size; effect size; false positive errors; and false negative errors. You might find the section 2.5 of this chapter helpful when thinking about experimental power an... | null | CC BY-SA 4.0 | null | 2023-02-27T20:03:37.873 | 2023-02-27T20:03:37.873 | null | null | 1679 | null |
606835 | 1 | null | null | 0 | 13 | Suppose that we have observations in a 2-dimensional principal coordinate space. Let's denote one observation $\mathbf{y}=(y_1,y_2)$ in our PC space. Further, suppose that we have another observation $\mathbf{y}'=(y_1',y_2')$. Consider two scenarios:
- $y_1'=y_1$, $y_2' = y_2 + c,$
- $y_1' = y_1 + c$, $y_2'=y_2,$
w... | In a principal component space, is a change in any coordinate going to result in the same distance between PC-projected observations? | CC BY-SA 4.0 | null | 2023-02-27T20:08:35.673 | 2023-02-27T20:08:35.673 | null | null | 257939 | [
"pca"
] |
606836 | 1 | null | null | 0 | 60 | I have a problem deriving one of the variational inference equations for LDA.
I am trying to derive the batch variational inference presented in the following paper:
[https://www.jmlr.org/papers/volume14/hoffman13a/hoffman13a.pdf](https://www.jmlr.org/papers/volume14/hoffman13a/hoffman13a.pdf)
This is the stochastic va... | Explanation of LDA variational inference derivation | CC BY-SA 4.0 | null | 2023-02-27T20:16:37.310 | 2023-02-27T20:16:37.310 | null | null | 351918 | [
"bayesian",
"mathematical-statistics",
"inference",
"latent-dirichlet-alloc",
"variational-inference"
] |
606837 | 2 | null | 275328 | 1 | null | The semipartial correlation of a predictor measures the (square root) of the decrease in R² when said predictor is removed from the full model.
If, for some reason, you have highly correlated predictors (also known as independent variables, regressors, features, or covariates), the semipartial correlation will often be small.
The... | null | CC BY-SA 4.0 | null | 2023-02-27T20:25:38.363 | 2023-02-27T20:25:38.363 | null | null | 102631 | null |
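This can be made concrete in R with simulated data (a sketch; the correlation structure below is an assumption for illustration): the squared semipartial correlation of a predictor equals the drop in $R^2$ when that predictor is removed.

```r
# Squared semipartial correlation of x1 = R^2(full) - R^2(without x1)
set.seed(123)
n  <- 200
x2 <- rnorm(n)
x1 <- 0.8 * x2 + rnorm(n)    # deliberately correlated predictors
y  <- x1 + x2 + rnorm(n)
r2_full    <- summary(lm(y ~ x1 + x2))$r.squared
r2_reduced <- summary(lm(y ~ x2))$r.squared
# Semipartial: correlate y with the part of x1 not explained by x2
sp <- cor(y, resid(lm(x1 ~ x2)))
sp^2 - (r2_full - r2_reduced)  # essentially zero
```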
606838 | 1 | null | null | 4 | 67 | This is a situation that arises commonly in my area (medicine).
- Suppose there is an inherently continuous variable $y$
- Suppose there is some normal range for this variable, say 80 - 120
- Suppose there is a dichotomous categorization of $y$ as "within normal limits", and "outside normal limits" that are commonly... | Logistic vs. linear regression for "inherently continous" variable - comparing probability | CC BY-SA 4.0 | null | 2023-02-27T20:29:13.200 | 2023-02-28T05:54:50.240 | null | null | 164675 | [
"bayesian",
"logistic",
"categorical-data",
"posterior",
"continuous-data"
] |
606839 | 1 | 606841 | null | 1 | 55 | Suppose
$$\sqrt{n} (\hat{\beta} - \beta) \overset{d}{\rightarrow} N(0, \sigma^2)$$
Then I know that for some constant $\alpha$ that a linear combination of normal is normal:
$$\sqrt{n} (\alpha\hat{\beta} - \alpha\beta) \overset{d}{\rightarrow} N(0, \sigma^2\alpha^2).$$
Now suppose that $\hat{\alpha} \rightarrow \alpha$... | Linear combination of normal distribution with Slutsky's theorem | CC BY-SA 4.0 | null | 2023-02-27T20:41:33.590 | 2023-02-27T20:55:53.330 | 2023-02-27T20:46:14.787 | 45971 | 45971 | [
"probability",
"normal-distribution",
"convergence",
"central-limit-theorem",
"slutsky-theorem"
] |
606840 | 2 | null | 606822 | 6 | null | The intersection of the $n+1$ simplex
$$\{\mathbf{p}\in\mathbb R_+^{n+1};\ \mathbf{p}^\top\mathbf 1_{n+1}=1\}$$
where $\mathbf 1_{n+1}=(1,\ldots,1)^\top$, and of the constrained hyperplane
$$\{\mathbf{p}\in\mathbb R_+^{n+1};\ \mathbf{p}^\top\mathbf \iota_{n+1}=x\}$$
where $\iota_{n+1}=(0,1,\ldots,n)^\top$, is within a [$(n−... | null | CC BY-SA 4.0 | null | 2023-02-27T20:53:42.013 | 2023-02-27T20:53:42.013 | null | null | 7224 | null |
606841 | 2 | null | 606839 | 0 | null | Your conjecture is not true. As a counterexample, let $\alpha = 1$, $\hat{\alpha} = 1 + n^{-1/2}$ (non-random), then $\hat{\alpha} \to \alpha$, but
\begin{align}
\sqrt{n}(\hat{\alpha}\hat{\beta} - \alpha\beta) = \sqrt{n}(\hat{\beta} - \beta) + \hat{\beta},
\end{align}
which, by Slutsky's theorem (note that $\sqrt{n}(... | null | CC BY-SA 4.0 | null | 2023-02-27T20:55:53.330 | 2023-02-27T20:55:53.330 | null | null | 20519 | null |
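The truncated conclusion presumably runs as follows: by the assumption and Slutsky's theorem,
$$\sqrt{n}(\hat{\beta} - \beta) + \hat{\beta} \overset{d}{\rightarrow} N(0, \sigma^2) + \beta = N(\beta, \sigma^2),$$
which differs from the conjectured limit $N(0, \sigma^2\alpha^2) = N(0, \sigma^2)$ whenever $\beta \neq 0$.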
606844 | 2 | null | 606838 | 4 | null | Turning a continuous variable into a binary variable is almost always a bad idea. It has been termed dichotomization. Frank Harrell (@FrankHarrell)[explains and gives some examples.](https://www.fharrell.com/post/errmed/#catg) Steven Senn gave it the name dichotomania: [Dichotomania: an obsessive-compulsive disorder th... | null | CC BY-SA 4.0 | null | 2023-02-27T21:07:22.703 | 2023-02-27T21:50:03.377 | 2023-02-27T21:50:03.377 | 25 | 25 | null |
606845 | 1 | null | null | 0 | 26 | I'm trying to understand this subject and I'm stuck in a question found in a book. [](https://i.stack.imgur.com/VMmdX.png)
I understand the white noise characterization, but I'm having some problems with this... maybe it's too much theory? I kind of get lost when I try to prove that the new variable $Y$ is also MA, and do... | Time series question from book (ma) | CC BY-SA 4.0 | null | 2023-02-27T21:13:36.177 | 2023-02-27T21:13:36.177 | null | null | 380993 | [
"time-series",
"econometrics"
] |
606846 | 1 | 606853 | null | 3 | 58 | I want to generate $M$ random numbers following a Poisson distribution with mean $k$, and then write down how many times the number 0 appears, how many times the number 1 appears, and so on, up to some number $n$, which is the highest number I'm interested in.
Let's call $C(j)$ the number of times the number $j$ wou... | What is the statistical distribution of the number of times a number i would appear after M trials of a poisson distribution? | CC BY-SA 4.0 | null | 2023-02-27T21:35:12.167 | 2023-02-27T23:39:00.537 | 2023-02-27T21:40:18.990 | 349199 | 349199 | [
"poisson-distribution"
] |
606848 | 2 | null | 590543 | 1 | null | Bayesian neural networks are a fake technology. Take any example that shows the distribution of the target and make the data multimodal: just split it into two blocks with a gap between them, run the BNN, and it returns the same unimodal distribution. A BNN only correctly returns the expectation and variance. But you can get sam... | null | CC BY-SA 4.0 | null | 2023-02-27T21:38:40.433 | 2023-02-27T21:40:56.757 | 2023-02-27T21:40:56.757 | 380066 | 380066 | null |
606849 | 2 | null | 606569 | 1 | null | The general approach of using a change of variables does work, with a slight modification of the strategy:
- Engineer the change of variables so that the lower bound of the integral approaches $\mu$ as $\alpha \to \infty$.
Specifically, consider the change of variables $u = \alpha (- \ln t)$, e.g. $\mathrm{d}t = -\f... | null | CC BY-SA 4.0 | null | 2023-02-27T21:44:33.123 | 2023-02-27T21:44:33.123 | null | null | 113090 | null |
606852 | 2 | null | 158444 | 1 | null | What is an interesting effect size is ultimately up to your judgement, there are no universal guidelines that apply to any situation. Some guidelines taken from Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed) are sometimes followed (see [this question](https://stats.stackexchange.com/q... | null | CC BY-SA 4.0 | null | 2023-02-27T22:54:20.940 | 2023-02-28T07:53:58.133 | 2023-02-28T07:53:58.133 | 164936 | 164936 | null |
606853 | 2 | null | 606846 | 3 | null | You ask about the order statistics of any finite discrete probability distribution. Remembering that the order statistics form a Markov chain, we can immediately code a two-line solution.
---
Let the possible values of your distribution be $x_0 \lt x_1 \lt \cdots \lt x_{n}$ with corresponding probabilities $p_i\gt ... | null | CC BY-SA 4.0 | null | 2023-02-27T23:16:17.130 | 2023-02-27T23:39:00.537 | 2023-02-27T23:39:00.537 | 919 | 919 | null |
606855 | 1 | null | null | 0 | 13 | I'm having trouble understanding how to specify a model for Bayesian inference.
I have a set of measurements, each with known uncertainty, and I would like to infer the mean of the measurements with a Gaussian model. The mean is known to be within the interval `[0,1]`, so I use an appropriate prior to enforce that. But I... | Bayesian inference for simple gaussian model with measurement error | CC BY-SA 4.0 | null | 2023-02-28T00:25:52.190 | 2023-02-28T00:25:52.190 | null | null | 349400 | [
"bayesian"
] |
606857 | 1 | null | null | 0 | 15 | I have a question about interpreting conditional effects for more complex models than models with two-way interactions. Specifically...
If you have an equation like:
Y = B0 + B1X1 + B2X2 + e
Then the interpretation of X1 would be: the change in Y for every 1-unit change in X1
But if you modeled an interaction...
Y = B0 + B1X1... | How do you interpret conditional lower-order effects in a three-way or higher interaction? | CC BY-SA 4.0 | null | 2023-02-28T00:42:41.957 | 2023-02-28T00:42:41.957 | null | null | 241198 | [
"regression",
"multiple-regression",
"generalized-linear-model",
"interaction"
] |
606858 | 1 | 606859 | null | 0 | 17 | I used the following cross-validation control and expand-grid function to train a caret rf model.
```
control <- trainControl(method = 'repeatedcv',
                        number = 10,
                        repeats = 3)
```
Is it possible to know what percentage of the training data is used in the cross-validation to generate rmse... | What percentage of data is used for cross validation with trainControl() | CC BY-SA 4.0 | null | 2023-02-28T00:43:06.160 | 2023-02-28T01:06:07.813 | null | null | 325355 | [
"cross-validation"
] |
606859 | 2 | null | 606858 | 0 | null | From the documentation for `trainControl`:
```
p For leave-group out cross-validation: the training percentage
```
Above that, you can see that the default is `p = 0.75`. Note, however, that `p` applies only to leave-group-out cross-validation; with `method = 'repeatedcv'` and `number = 10`, each resample trains on 9/10 (90%) of the data.
| null | CC BY-SA 4.0 | null | 2023-02-28T01:06:07.813 | 2023-02-28T01:06:07.813 | null | null | 141956 | null |
606860 | 1 | null | null | 0 | 11 | My experiment was conducted in multiple years. Each year, plants were sown in an infested field and then harvested after a certain time, sown again next year, so this is not a time series data. There was a weather station to record weather data. I would to investigate the effect of weather variables on plant `disease_s... | Taking into account difference in trial duration while investigating the effect of weather variables on response variable? | CC BY-SA 4.0 | null | 2023-02-28T01:30:44.787 | 2023-02-28T22:00:46.297 | 2023-02-28T22:00:46.297 | 346283 | 346283 | [
"regression",
"hypothesis-testing",
"mathematical-statistics",
"inference",
"biostatistics"
] |
606861 | 1 | null | null | 0 | 68 | How can we calculate the expected R-squared for a linear regression model without intercept where the predictor variable x and the response variable y are both 100-dimensional standard normal vectors?
I tried the following R code and got the output. It looks complicated to compute the expectation of $$1-R^2 = SS_{\math... | Expected value of $R^2$ for linear regression of two normal vectors | CC BY-SA 4.0 | null | 2023-02-28T01:33:25.010 | 2023-02-28T05:21:52.337 | 2023-02-28T05:21:52.337 | 145991 | 145991 | [
"regression",
"probability"
] |
606863 | 1 | null | null | 0 | 10 | The continuous bag of words model has the following log probability for observing a sequence of words: $$\log P(\textbf{w})=\sum_{c=1}^{C}\log{P(w_c|w_{c-m},...w_{c-1}, w_{c+1},...,w_{c+m}})$$
I don't fully understand this. Probabilistically, shouldn't it be defined as:
$$\log P(\textbf{w})=\sum_{c=1}^{C}\log{P(w_c|w_{c-... | Continuous Bag of Words derivation | CC BY-SA 4.0 | null | 2023-02-28T01:53:37.070 | 2023-03-03T08:04:52.070 | null | null | 253488 | [
"natural-language",
"language-models",
"bag-of-words"
] |
606864 | 1 | null | null | 0 | 114 | I am trying to fit a multiple piecewise linear regression model, where I coerce the model to have the exact terms I want.
For example, let's say I want to fit the exact model below, with predictors $x_1$, $x_2$, and $x_3$, to response $y$:
$$y = b_0 + b_1 x_1 + b_2 x_2 + b_3\max(0, x_3 - c_3)$$
I am solving for t... | Fitting multiple piecewise linear regression model (with hinges) | CC BY-SA 4.0 | null | 2023-02-28T01:54:06.760 | 2023-03-09T15:09:09.140 | 2023-03-09T15:09:09.140 | 348534 | 348534 | [
"r",
"multiple-regression",
"python",
"linear-model",
"piecewise-linear"
] |
606865 | 1 | null | null | 5 | 107 | I have recently started working on bed forecasting project. I don't have data and I am asked to figure out a way to forecast the bed occupancy. As per the discussion with my supervisor/boss, I have to use probability to find out how many beds the hospital needs to start.
I would appreciate some opinions on how to get s... | forecasting without data | CC BY-SA 4.0 | null | 2023-02-28T02:10:19.253 | 2023-03-09T04:32:57.353 | 2023-03-09T04:32:57.353 | 369492 | 369492 | [
"time-series",
"forecasting",
"arima"
] |
606866 | 1 | null | null | 2 | 234 | Let $(X_1,...,X_{n})$ be a random sample from the Pareto distribution
with pdf $\theta a^{\theta} x^{-(\theta+1)}I_{(a,\infty)}(x),$ where $\theta>0$ and $a>0$.
$\textbf{(i)}$ Show that when $\theta$ is known, $X_{(1)}$ is complete and sufficient for $a$.
$\textbf{(ii)}$ Show that when both $a$ and $\theta$ are u... | Verifying the statistics are complete and sufficient for two parameter Pareto distribution | CC BY-SA 4.0 | null | 2023-02-28T02:12:56.457 | 2023-03-01T13:17:36.530 | 2023-02-28T19:38:38.363 | 362671 | 73778 | [
"self-study",
"mathematical-statistics",
"sufficient-statistics",
"pareto-distribution",
"complete-statistics"
] |
606867 | 2 | null | 60815 | 0 | null | For that many classes and classes with very few samples, I would try triplet loss.
For every sample, choose another sample of the same class and another from a different class, and train with the goal of minimizing the distance between samples of the same class while maximizing the distance between samples of different classes.
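That training objective can be sketched as a hinge-style triplet loss; a minimal NumPy version (the function and example values are illustrative, not from the original answer):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss: small when the same-class pair is closer than the
    different-class pair by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)  # distance to same-class sample
    d_neg = np.linalg.norm(anchor - negative)  # distance to other-class sample
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
print(triplet_loss(a, np.array([0.0, 1.0]), np.array([3.0, 0.0])))  # 0.0
print(triplet_loss(a, np.array([0.0, 1.0]), np.array([1.0, 0.0])))  # 1.0
```

In practice a deep-learning framework's built-in version (e.g. a triplet margin loss) would be used so the embedding network can be trained by backpropagation.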
You create a cluster space where... | null | CC BY-SA 4.0 | null | 2023-02-28T02:45:15.927 | 2023-02-28T02:45:15.927 | null | null | 374528 | null |
606868 | 1 | null | null | 1 | 16 | I have a large set of objects that each generate a multivariate time series at a daily to hourly resolution. As an example, let's say that they are weather stations generating variables for temperature, humidity, etc. Generally speaking, for each variable, the data from each object changes similarly over time. Here's a... | Visualize difference between time series lines, with similar changes over time and missing data | CC BY-SA 4.0 | null | 2023-02-28T03:17:45.727 | 2023-02-28T03:17:45.727 | null | null | 258502 | [
"time-series",
"data-visualization",
"missing-data",
"normalization"
] |
606869 | 2 | null | 122225 | 2 | null | I had a similar problem and came across Andrew Ng's slide, which I found helpful, although there are good answers here as well.
As highlighted by other answers, the key is remembering the confusion matrix.
[](https://i.stack.imgur.com/LBLsV.png)
Positives are on the first row and negatives are on the bottom row.
Andrew ... | null | CC BY-SA 4.0 | null | 2023-02-28T03:32:02.660 | 2023-02-28T08:15:18.750 | 2023-02-28T08:15:18.750 | 128404 | 128404 | null |
606870 | 1 | null | null | 1 | 17 | Gauss uniquely characterised the 1D normal distribution by asking for a distribution that:
- is symmetric
- is decreasing on either side of some center point $\mu$
- has the data likelihood maximized by moving the center $\mu$ to be the value that minimizes the sum of square distances of the data points to $\mu$, (i... | If the most likely value is that which minimizes squared-error, what are the possible distributions? | CC BY-SA 4.0 | null | 2023-02-28T04:30:55.193 | 2023-02-28T04:30:55.193 | null | null | 264898 | [
"machine-learning",
"normal-distribution",
"maximum-likelihood",
"mse"
] |
606871 | 1 | null | null | 0 | 43 | Imagine we have a large system of images. We are trying to search for an image and images that are closer to it. One strategy I could think of is to embed the image say into a 100-dimensional vector. Then we can use a clustering algorithm like $k$-means. However, our number of categories can grow as we might have a gro... | Clustering algorithm on undefined number of classes | CC BY-SA 4.0 | null | 2023-02-28T04:33:57.320 | 2023-02-28T05:01:41.910 | null | null | 309731 | [
"classification",
"clustering"
] |
606874 | 1 | null | null | 0 | 21 | I'm having trouble using the output of a model to find OR and 95% CI.
I've created two models. Model 1 has no random effects (clm) and model 2 includes one random effect, participant ID.
```
model1 <- clm(as.factor(Total_daily_symptoms_N)~ as.factor(`Flow_evident?`) + as.factor(Simple_MS),
data=data... | Does including random effects in CLMM alter the calculation of OR? | CC BY-SA 4.0 | null | 2023-02-28T05:25:34.000 | 2023-02-28T05:25:34.000 | null | null | 381014 | [
"r",
"logistic",
"ordinal-data",
"odds-ratio",
"ordered-logit"
] |
606875 | 2 | null | 606838 | 0 | null | Medical doctor and researcher here. I have often thought about this as well. I believe there are 2 main reasons, but of course these are personal opinions and open to discussion:
First, the medical world, including doctors and patients, is mainly interested in whether or not the tests are normal. As a basic example, do... | null | CC BY-SA 4.0 | null | 2023-02-28T05:54:50.240 | 2023-02-28T05:54:50.240 | null | null | 155375 | null |
606876 | 1 | null | null | 2 | 32 | I made up this example myself: Three geoscientists from China, USA and Europe were interested in exploring the mineralogical and textural characteristics and organic carbon composition of carbonate concretions. A concretion is a hard, compact mass of matter formed by the precipitation of mineral cement within the space... | One-way ANOVA F-test is significant but none of the Tukey pairwise test is significant | CC BY-SA 4.0 | null | 2023-02-28T05:56:10.783 | 2023-02-28T06:16:59.207 | null | null | 380451 | [
"r",
"anova",
"multiple-comparisons",
"post-hoc",
"tukey-hsd-test"
] |
606877 | 2 | null | 606876 | 0 | null | It is certainly plausible to have a significant omnibus ANOVA test and non-significant pairwise comparisons. This is because the ANOVA is more sensitive to overall variance across groups, as it literally accounts for the mean sum of squares for all of the grouping variables, as shown by the formula below:
$$
F = \frac{\... | null | CC BY-SA 4.0 | null | 2023-02-28T06:16:59.207 | 2023-02-28T06:16:59.207 | null | null | 345611 | null |
606878 | 2 | null | 495458 | 0 | null | Your first drawing is a feature space; the second drawing should be a record space, where each vector represents the similarity of one column to another. Identical vectors mean the columns are numerically identical.
| null | CC BY-SA 4.0 | null | 2023-02-28T06:43:46.497 | 2023-02-28T06:43:46.497 | null | null | 381017 | null |
606880 | 1 | null | null | 0 | 22 | I am new to econometrics and tried to find the answer on google, but couldn't find any. I've learned that Fixed-effects models depend on there being variation within each higher-level unit of analysis. For example, if there is no variation within a company's observations, it can't be used in the model.
- This is proba... | Intuition on why Fixed-effects models depend on there being variation within the unit of analysis | CC BY-SA 4.0 | null | 2023-02-28T07:10:58.913 | 2023-02-28T07:10:58.913 | null | null | 355204 | [
"fixed-effects-model"
] |
606881 | 2 | null | 563464 | 1 | null | A solution I can think of is adding a function of $\theta$ that tends to infinity to the loss function. For example, $\frac{1}{\theta+1}+\frac{1}{\theta-1}$ would be a good choice. For your example, swap 1 with 0.75.
This is an interesting question.
If anyone knows a much better solution, let me know.
| null | CC BY-SA 4.0 | null | 2023-02-28T07:42:20.343 | 2023-02-28T07:44:44.230 | 2023-02-28T07:44:44.230 | 381022 | 381022 | null |
606883 | 1 | null | null | 0 | 12 | My question is along the lines of this: [Clustering based on interaction between the variables](https://stats.stackexchange.com/questions/311557/clustering-based-on-interaction-between-the-variables). However, my situation is kind of the reverse--the independent variable is binary and the response variable is one of ex... | Clustering data points by difference in distributions of a response variable? | CC BY-SA 4.0 | null | 2023-02-28T08:13:24.047 | 2023-02-28T08:13:24.047 | null | null | 381026 | [
"distributions",
"clustering",
"data-mining"
] |
606884 | 1 | null | null | 0 | 36 | So I wanted your help with this:
We have a model with two exogenous variables: $$y_t = \beta_1 + \beta_2 X_{2t} + \beta_3 X_{3t} + \mu_t$$
Let $r_{y2}$, $r_{y3}$, $r_{23}$ be the linear correlation coefficients between the endogenous variable and $X_2$, the endogenous variable and $X_3$, and finally between $X_2$ and $X_3$, respectively.
We know that $$R² = [(ry2)²+(ry... | Multiple regression : Relation between the model's R² and the linear correlation coefficients | CC BY-SA 4.0 | null | 2023-02-28T08:30:09.643 | 2023-03-01T06:53:24.993 | 2023-03-01T06:53:24.993 | 67799 | 381025 | [
"regression",
"correlation",
"multiple-regression",
"econometrics",
"r-squared"
] |
606885 | 1 | null | null | 1 | 30 | I am working on a problem that is similar to the following.
Suppose I am interested in understanding what is causing the mean price of apples to change from month to month.
My dataset has a similar structure to the following:
|Region |Apple Vendor |Apple Size |Date |Price |
|------|------------|----------|----|-----|... | Understanding the effect of sample composition changes on mean over time | CC BY-SA 4.0 | null | 2023-02-28T08:39:41.390 | 2023-03-01T08:34:05.747 | 2023-03-01T08:34:05.747 | 381023 | 381023 | [
"regression",
"time-series",
"hypothesis-testing",
"mean",
"group-differences"
] |
606886 | 1 | null | null | 1 | 14 | I have measured one parameter with two identical devices. Unfortunately one device shows higher values on average. Also maximum and minimum of the values differ by exactly the same value like the average. Can I just substract the mean difference from values measured with the device which measured slightly too high valu... | How to treat data of one parameter measured with two same devices that show slight differences in averge | CC BY-SA 4.0 | null | 2023-02-28T09:00:47.997 | 2023-02-28T09:00:47.997 | null | null | 381032 | [
"measurement"
] |
606887 | 2 | null | 380996 | 1 | null | So we have a formula to calculate the dimensions of the new image, i.e. after it has passed through a convolution layer.
The formula is ((n-f+2p)/s)+1
- where n is the width/height of the input image in pixels, i.e. 32
- f is the kernel (filter) size; in our case it is a 5*5 kernel, which means f = 5
- p is the padding, p = 0
- s is the stride, s = 1
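As a quick check, the formula can be evaluated directly (assuming a stride of 1 for the 32-pixel example, since a stride of 0 would make the division undefined):

```python
def conv_output_size(n, f, p, s):
    """Spatial output size of a conv layer: ((n - f + 2p) / s) + 1."""
    return (n - f + 2 * p) // s + 1

# 32x32 input, 5x5 kernel, no padding, stride 1 -> 28x28 output
print(conv_output_size(32, 5, 0, 1))  # 28
```

The same formula also covers pooling layers, e.g. a 2x2 pool with stride 2 on a 28x28 map gives 14x14.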
I... | null | CC BY-SA 4.0 | null | 2023-02-28T09:56:20.587 | 2023-02-28T09:59:14.380 | 2023-02-28T09:59:14.380 | 381038 | 381038 | null |
606888 | 2 | null | 604022 | 1 | null | Turning my comment into an answer:
Interesting question! One complicated option could be to use the standard errors to simulate a large number of datasets, generate density plots/histograms of each (with the same bandwidth/interval parameters), and overlay them with very high transparency. The final result may look lik... | null | CC BY-SA 4.0 | null | 2023-02-28T09:57:39.873 | 2023-02-28T09:57:39.873 | null | null | 121522 | null |
606889 | 1 | null | null | 1 | 33 | Imagine the following study where we want to teach a group of $N$ patients to improve their capacity to hold their breath underwater. We have data for the patients before and after teaching them. Something like
```
patient_id | duration before (s) | duration after (s)
--------------------------------... | How to evaluate patient significant improvement? | CC BY-SA 4.0 | null | 2023-02-28T10:28:40.550 | 2023-03-05T02:23:09.740 | 2023-03-05T01:43:35.210 | 11887 | 350686 | [
"statistical-significance",
"paired-data",
"pre-post-comparison"
] |
606890 | 2 | null | 606772 | 1 | null | My hypothesis that `model.kneighbors` returns 1 minus the real cosine similarity score appears correct.
I took a small 10 document x 2000 tf_idf score matrix from the original data and computed cosine similarity
using
```
# import cosine_similarity
from sklearn.metrics.pairwise import cosine_similarity
# compute cosine similarity
... | null | CC BY-SA 4.0 | null | 2023-02-28T11:13:46.650 | 2023-02-28T11:21:43.707 | 2023-02-28T11:21:43.707 | 380777 | 380777 | null |
606891 | 1 | 606960 | null | 0 | 36 | I am trying to estimate the impact on a politician's re-election of bills dealing with immigration.
Suppose that the number of bills on immigration represents a high share of total bills and the two counts are correlated (correlation is around 0.7). If I run a logit model like this
$$
Reelection = \beta_{1} immigratio... | Assessing which variable is more predictive in a logit model | CC BY-SA 4.0 | null | 2023-02-28T11:18:32.263 | 2023-03-01T01:05:02.527 | 2023-03-01T01:05:02.527 | 11887 | 379413 | [
"regression",
"logistic"
] |
606892 | 1 | null | null | 0 | 23 | I’ve had a long search through this site and surprisingly haven’t been able to find anything that directly answers this question. I’m interested both in the case where Pearson’s r is used for an correlation between two ordinal variables, and also the case where it's used for a correlation between one ordinal and one co... | Why is Pearson's r suboptimal for a correlations involving an ordinal variable? | CC BY-SA 4.0 | null | 2023-02-28T11:32:31.170 | 2023-02-28T11:32:31.170 | null | null | 9162 | [
"correlation"
] |
606893 | 1 | null | null | 0 | 26 | A certain experimental design foresees collecting 100 datapoints under condition A and exactly 1 datapoint under condition B. It may be assumed that in both cases the data is normally distributed. Is it possible to test the null hypothesis that both datasets have originated from the same distribution?
- I have conside... | Test if a single sample is likely to have come from the same distribution as a dataset | CC BY-SA 4.0 | null | 2023-02-28T11:42:19.653 | 2023-02-28T11:53:12.117 | 2023-02-28T11:53:12.117 | 200664 | 200664 | [
"t-test",
"small-sample"
] |
606894 | 1 | null | null | 0 | 13 | When doing a weighted regression, should the weights be based on the estimates of the SDs of each predicted value or on the estimates of the SDs of each residual?
| question about the weights in weighted regression | CC BY-SA 4.0 | null | 2023-02-28T11:44:27.080 | 2023-02-28T11:44:27.080 | null | null | 378747 | [
"regression",
"weights"
] |
606895 | 1 | 606902 | null | 1 | 79 | This question has similar answers somewhere, but I still do not understand them:
- Read but do not understand
- Read but do not understand
In the notes [here](http://mitliagkas.github.io/ift6085-2020/ift-6085-lecture-7-notes.pdf), we see the definition of PAC Learning:
Definition 3 (Generalization Gap). Given an sa... | In learning theory, why can't we use Hoeffding's Inequality as our final bound if the learnt hypothesis is part of $\mathcal{H}$? | CC BY-SA 4.0 | null | 2023-02-28T11:48:24.823 | 2023-02-28T15:09:46.930 | 2023-02-28T15:09:46.930 | 253215 | 253215 | [
"machine-learning",
"probability"
] |
606896 | 1 | null | null | 0 | 44 | The EM algorithm is guaranteed to increase the log-likelihood at each iteration. If the log-likelihood is concave, is it guaranteed to converge to the maximum of the likelihood, that is, will we get the MLE?
I am asking this in the context of using the EM algorithm to find the MLE for the Factor Model, when we assume ... | Is the EM algorithm guaranteed to converge if the log likelihood is concave | CC BY-SA 4.0 | null | 2023-02-28T12:07:04.550 | 2023-02-28T12:07:04.550 | null | null | 283493 | [
"maximum-likelihood",
"inference",
"optimization",
"factor-analysis",
"expectation-maximization"
] |
606898 | 1 | null | null | 0 | 21 | I am trying to compare various unsupervised machine learning models to detect anomalous water consumption in each user's house. Now I have 10 datasets (minutely data, no anomalous points) that have no labels. I created a method that would generate random anomalous water usage, like dripping, leaking, running faucet, e... | Correctly evaluating unsupervised learning model | CC BY-SA 4.0 | null | 2023-02-28T12:16:08.333 | 2023-02-28T12:16:08.333 | null | null | 381040 | [
"machine-learning",
"model-evaluation",
"unsupervised-learning",
"auc",
"precision-recall"
] |
606899 | 2 | null | 519727 | 1 | null | It's unclear why you'd want to do what you are doing. If all you want to do is to transform your data onto (0,1), then why don't you just take the rank and divide by the number of observations + 1?
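The suggested rank transform is a one-liner; a minimal NumPy sketch (ignoring ties, which would need average ranks, e.g. via `scipy.stats.rankdata`):

```python
import numpy as np

def rank_to_unit_interval(x):
    """Map data onto (0, 1) via rank / (n + 1)."""
    x = np.asarray(x)
    ranks = np.empty(len(x))
    ranks[np.argsort(x)] = np.arange(1, len(x) + 1)  # 1 = smallest value
    return ranks / (len(x) + 1)

print(rank_to_unit_interval([10.0, 3.0, 7.0]).tolist())  # [0.75, 0.25, 0.5]
```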
If your data were already normally distributed, then what you described would achieve something quite similar (up to some ... | null | CC BY-SA 4.0 | null | 2023-02-28T12:18:03.693 | 2023-02-28T12:18:03.693 | null | null | 86652 | null |
606901 | 1 | null | null | 1 | 63 | I am working on confidence intervals for transformed parameters in a dose-response log-logistic model.
For simplicity, let's assume a 2-parameter regression model with normal errors, where $\theta=(a,b)$ are the parameters:
$$E[y|x] = \frac{1}{1 + \exp(a(\log x -\log b))}$$
Now, I am interested in finding the confidence in... | Confidence interval for transformed parameter | CC BY-SA 4.0 | null | 2023-02-28T12:57:15.957 | 2023-03-06T05:01:23.107 | 2023-03-05T21:23:44.790 | 11887 | 381043 | [
"hypothesis-testing",
"confidence-interval"
] |
606902 | 2 | null | 606895 | 1 | null | Does the equation below mean/suggest that amongst all the $h \in \mathcal{H}$, there is a "worst" $h$?
>
$$
\max _{h \in \mathcal{H}}\left|\hat{R}_S\left[h_S\right]-R\left[h_S\right]\right|
$$
The gap differs for different hypotheses.
In particular, a model that is always wrong, will have $\hat{R}_S\left[h_S\right]-... | null | CC BY-SA 4.0 | null | 2023-02-28T13:03:42.163 | 2023-02-28T13:28:10.410 | 2023-02-28T13:28:10.410 | 164061 | 164061 | null |
606903 | 1 | null | null | 0 | 31 | Apologies if this is a repeat question; I couldn’t find another post that asked this specifically.
I’m running a linear mixed effects model with a continuous outcome (reaction time) and categorical predictors as fixed effects (correct/incorrect response, experimental condition, experimental group). I have random effect... | Centering Categorical Predictors in Mixed Models | CC BY-SA 4.0 | null | 2023-02-28T13:12:56.873 | 2023-03-05T21:22:51.110 | 2023-03-05T21:22:51.110 | 11887 | 379020 | [
"mixed-model",
"categorical-data",
"lme4-nlme",
"convergence",
"centering"
] |
606904 | 1 | null | null | 0 | 17 | I have data from an intervention study and I would appreciate any help analyzing them. The intervention is the following: it's a cluster-randomized intervention with 200 people which ran for 12 months. Half of them received the intervention the first 6 months, while the others did not. After that the control group rece... | Repeated measures exposures and outcomes in an intervention study | CC BY-SA 4.0 | null | 2023-02-28T13:44:47.783 | 2023-02-28T14:47:50.420 | 2023-02-28T14:47:50.420 | 345611 | 381048 | [
"regression",
"mixed-model",
"repeated-measures",
"intervention-analysis"
] |
606905 | 2 | null | 606865 | 4 | null | First off, I would not put ARIMA at the top of my list. On the one hand, as you write, you need data to fit an ARIMA model. On the other hand, ARIMA allows for non-integer and negative values, which does not make a lot of sense in your situation, but this is usually not a major problem. On the third hand, I see little ... | null | CC BY-SA 4.0 | null | 2023-02-28T13:45:56.183 | 2023-02-28T13:45:56.183 | null | null | 1352 | null |
606906 | 1 | null | null | 0 | 14 | I am working on a classification problem in which the positive class is very rare.
The dataset consists of categorical variables, as shown in the example below.
The variables are hierarchical, in the sense that the values which var_i may take depend on the value of var_i-1. In the mock data below, for instance, va... | Decision trees: Measure of split quality which takes into account rare values | CC BY-SA 4.0 | null | 2023-02-28T13:58:18.880 | 2023-02-28T13:58:18.880 | null | null | 139509 | [
"regularization",
"cart",
"pruning"
] |
606907 | 1 | null | null | 0 | 45 | I have 4 mixed-effects models that I want to compare - each one has exactly the same structure but a different outcome variable (see summary plot below). I need a statistical test of whether the effects are different from one another.
[](https://i.stack.imgur.com/Jg3e2.png)
I have rejigged the dataframe using pivot_longer... | Interpreting interaction coefficients from categorical predictors in lmer | CC BY-SA 4.0 | null | 2023-02-28T12:25:51.780 | 2023-02-28T16:59:28.577 | 2023-02-28T16:59:28.577 | 253871 | 253871 | [
"r",
"lme4-nlme",
"interaction"
] |
606908 | 1 | null | null | 1 | 34 | I am fairly new to using clustering. On the data science course I am on, we recently covered agglomerative clustering and k means clustering. I have created a toy example to see if I can use R to cluster data correctly using the method of kmeans. I have created five sets of two points that are very close to each other:... | Elbow plot and plot of average silhouette width disagree with each other | CC BY-SA 4.0 | null | 2023-02-28T15:08:28.783 | 2023-02-28T15:08:28.783 | null | null | 357899 | [
"r",
"machine-learning",
"clustering"
] |
606909 | 1 | null | null | 1 | 12 | Let's say we have two questionnaires for two different age groups. One has 100 items, the other 150 items, where we can answer each question with 0 - 2. Then we calculate the mean value of all answers given. So for the first questionnaire this is a value between 0 and 200 and for the second a value between 0 and 300.
How can... | Score Questionnaire for different age groups combined | CC BY-SA 4.0 | null | 2023-02-28T15:11:21.057 | 2023-02-28T15:11:21.057 | null | null | 380302 | [
"clinical-trials"
] |
606910 | 1 | null | null | 1 | 32 | So I'm currently conducting some research where I'm using item response theory (IRT) to estimate the difficulty of school subjects in Norway. It turns out that a two-dimensional, simple-structure model where social science and natural science subjects load separately provides a much better fit. Now I'm a little unsure ... | Comparing scores on different dimensions in factor models | CC BY-SA 4.0 | null | 2023-02-28T15:28:16.153 | 2023-02-28T15:32:32.580 | 2023-02-28T15:32:32.580 | 110833 | 337572 | [
"factor-analysis",
"item-response-theory",
"philosophical"
] |
606911 | 1 | null | null | 0 | 15 | I am trying to perform a generalized linear mixed-effects model to understand if exposure to pollutants in specific time-points is associated with clinical severity of a given disease. All to be done using R.
My dependent variable is clinical severity, and is binomial.
As predictor variables I have longitudinal data fo... | GLMM to identify associations between longitudinal predictor variables and binomial outcome | CC BY-SA 4.0 | null | 2023-02-28T15:58:29.760 | 2023-03-05T21:19:10.133 | 2023-03-05T21:19:10.133 | 11887 | 147345 | [
"logistic",
"panel-data",
"biostatistics",
"glmm"
] |
606912 | 1 | null | null | 2 | 38 | I'm trying to evaluate the influence of a single binary explanatory variable on a 0-1 scale response, with one grouping factor. The response variable is generally 0-1 inflated. The simplest solution would be to collapse the data to a 2x2 contingency table, however, the magnitude of 0-1 inflation generally seems to depe... | Explanatory model for zero-one inflated bimodal data with random effect and binary indepentent variable | CC BY-SA 4.0 | null | 2023-02-28T16:15:46.920 | 2023-03-01T09:01:30.133 | 2023-03-01T09:01:30.133 | 191705 | 191705 | [
"regression",
"mixed-model",
"effect-size",
"zero-inflation",
"beta-regression"
] |
606914 | 1 | null | null | 1 | 33 | I developed a new forecasting model with the aim of replacing an older model. I conducted 12 backtests and need to predict only one quarter ahead. Overall, my new model performs better than the older model, but there are instances where it performs worse. I believe that performing a paired t-test would be appropriate t... | Can I use a paired t-test to assess whether there are statistically significant differences between the backtesting results of two forecasting models? | CC BY-SA 4.0 | null | 2023-02-28T16:20:20.490 | 2023-02-28T16:55:09.273 | 2023-02-28T16:55:09.273 | 381059 | 381059 | [
"time-series",
"forecasting",
"t-test",
"cross-validation",
"model-comparison"
] |