Dataset schema (16 columns per row, in this order):

- Id: string, length 1–6
- PostTypeId: string, 7 classes
- AcceptedAnswerId: string, length 1–6
- ParentId: string, length 1–6
- Score: string, length 1–4
- ViewCount: string, length 1–7
- Body: string, length 0–38.7k
- Title: string, length 15–150
- ContentLicense: string, 3 classes
- FavoriteCount: string, 3 classes
- CreationDate: string, length 23
- LastActivityDate: string, length 23
- LastEditDate: string, length 23
- LastEditorUserId: string, length 1–6
- OwnerUserId: string, length 1–6
- Tags: list
617424
2
null
616925
0
null
## It seems to be about how far one can move away from the support In the synthetic control setting, the hypothetical value that would have been observed without a treatment is approximated using a linear combination of subjects thought to be similar to the treatment subject. The authors stress that the weights in t...
null
CC BY-SA 4.0
null
2023-05-31T11:46:36.483
2023-05-31T11:46:36.483
null
null
250702
null
617425
1
null
null
1
23
I have an experiment with time series data (spike rates). A Python script calculating their autocorrelation with `statsmodels.tsa.stattools.acf` was apparently giving different answers than an implementation of equivalent logic in Matlab, using the same bins and 99 lags in each case. The answers had the same pattern, ...
Python's `acf` and Matlab's `xcorr` apparently give different magnitude (but same pattern) answers for some data
CC BY-SA 4.0
null
2023-05-31T11:52:35.650
2023-05-31T15:40:46.073
2023-05-31T15:40:46.073
245642
245642
[ "python", "autocorrelation", "matlab", "sparse" ]
617426
1
null
null
0
15
You'll often see the goal of a statistical estimation problem as being to fit a model such that it $\approx p_{*}(y|x)$ where $p_{*}(y|x)$ is the "true distribution of the data". My question is: what uncertainty is possessed in this "true distribution of the data"? Does it assume infinite training data...in which case ...
Where does the uncertainty of the "true" $p_{*}(y|x)$ come from?
CC BY-SA 4.0
null
2023-05-31T12:23:22.303
2023-05-31T12:23:22.303
null
null
381061
[ "machine-learning", "bayesian", "assumptions", "posterior" ]
617427
2
null
588751
0
null
A fully convolutional network is independent of the number of pixels in the input if the output size is allowed to have a different number of pixels as well. This is due to the fact that the number of parameters in a convolutional layer is independent of the number of pixels in the input. However, the same convolution ...
null
CC BY-SA 4.0
null
2023-05-31T12:28:01.197
2023-05-31T12:28:01.197
null
null
95000
null
617428
2
null
605756
0
null
I think that in your case, if it is "Cured", the event of interest will never happen and the duration converges to infinity. If that's the case, you just put your duration as the duration column, the timeline as the maximum timeline you want to observe, and then "event" (not cured) as the event_col (0/1). The model will do the re...
null
CC BY-SA 4.0
null
2023-05-31T12:46:24.390
2023-05-31T12:48:11.617
2023-05-31T12:48:11.617
389264
389264
null
617429
1
null
null
1
46
I am struggling with the following problem (Casella & Berger 4.30(b)): $$ \text{Suppose that} \;\;\;Y|X=x \sim normal(x,x^2) \;\; \text{and} \;\; X\sim uniform\,(0,1).\\\text{Prove that} \;\;\frac{Y}{X} \;\text{and}\; X \;\text{are independent.} $$ My attempt: $$ \text{Let} \; u= y/x \;\; \text{and}\;\;v=x. \\ \text{Th...
Prove that two random variables are independent
CC BY-SA 4.0
null
2023-05-31T12:49:06.613
2023-05-31T19:48:34.930
null
null
389258
[ "mathematical-statistics", "inference", "random-variable", "independence" ]
617430
2
null
617046
0
null
## The difference is in the fact that $X_2$ is an effect of the target This is a really cool question that I hadn't thought enough about! A linear regression of the sort $Y\sim T+X_1+X_2$ will aim to find the coefficients that explain the most variation in the target. In your first example, controlling for $X_2$ ...
null
CC BY-SA 4.0
null
2023-05-31T13:06:22.973
2023-05-31T13:06:22.973
null
null
250702
null
617431
2
null
617429
1
null
You have $$f_{U,V}(u,v) =\frac{v}{\sqrt{2\pi}v^2}e^{-\frac{1}{2v^2}(uv-v)^2}I_{\mathscr Y}((u,v))$$ though I think you should extend the square root to $\dfrac{v}{\sqrt{2\pi v^2}}e^{-\frac{1}{2v^2}(uv-v)^2}I_{\mathscr Y}(u,v)$. There are $v$s you can cancel to give $$f_{U,V}(u,v) =\frac{1}{\sqrt{2\pi }}e^{-\frac{1}{2}...
null
CC BY-SA 4.0
null
2023-05-31T13:08:17.283
2023-05-31T13:08:17.283
null
null
2958
null
617432
1
null
null
1
13
From a total of $N$ words I have the following dataset where the first column represents the ranks and the second the frequency. For example $$\begin{array}{cc} 1 & 4300 \\ 2 & 3100 \\ 3 & 2500 \\ 4 & 1900 \\ \vdots & \vdots \end{array} $$ I want to find the constant that satisfies $$cf_i =\frac{\text{const}}{i}$$ whe...
Given the rank and frequency find the constant in Zipfs law
CC BY-SA 4.0
null
2023-05-31T13:22:04.153
2023-05-31T16:49:53.687
2023-05-31T16:49:53.687
5176
389267
[ "power-law", "zipf" ]
617433
1
null
null
0
18
- A "new broom" in the modeling department has swept clean the existing 5-figure number of dictionary geo features (kept in a key/value store), replacing them with just their key (more precisely, by exact latitude and longitude of client's zip code - a pair of multi-level features that together can probably proxy for ...
Replacing 10k+ geo features with just their key (zipcode coordinates) in a GBDT model - a sound idea?
CC BY-SA 4.0
null
2023-05-31T13:22:10.157
2023-05-31T13:29:33.700
2023-05-31T13:29:33.700
325325
325325
[ "machine-learning", "feature-selection", "boosting", "feature-engineering", "geography" ]
617434
1
null
null
1
32
I did a power analysis to calculate the sample size in GPower. Now I'd like to do the same in R. However, I am not able to figure out how... I found [ss.2way](https://rdrr.io/cran/pwr2/src/R/ss.2way.R) but that seems to require different inputs. Is there any way to calculate the sample size for a 2x3 design in R? Thank...
Power Analysis for 2-Way-Anova in R
CC BY-SA 4.0
null
2023-05-31T13:26:12.577
2023-05-31T13:28:54.750
2023-05-31T13:28:54.750
389269
389269
[ "r", "anova", "statistical-power", "gpower" ]
617435
2
null
617375
0
null
The confusion might come from the [multiple parameterizations of the Weibull distribution](https://en.wikipedia.org/wiki/Weibull_distribution#Alternative_parameterizations). Note that the first hazard function can be written in the form $\lambda(t)=\lambda_0(t) \exp(\eta_i)$, where $\eta_i$ is the linear predictor for ...
null
CC BY-SA 4.0
null
2023-05-31T13:35:19.033
2023-05-31T13:35:19.033
null
null
28500
null
617436
1
null
null
0
16
Is it true that for a square symmetric matrix such as the covariance matrix, the singular values are equal to the eigenvalues? The eigen decomposition for covariance is the same as singular value decomposition?
Singular values and eigenvalues for a square matrix
CC BY-SA 4.0
null
2023-05-31T13:37:46.370
2023-05-31T13:50:33.683
null
null
388783
[ "eigenvalues" ]
617437
1
null
null
0
13
Let's say I am sampling from a population with an unknown distribution to approximate the mean of the population. I am trying to figure out how large my sample size n has to be in order to guarantee with 95% confidence that my sample mean is within, say, 1% of the population mean. I know I can get the 95% confidence inte...
Estimate potential distance from population mean given only sample size?
CC BY-SA 4.0
null
2023-05-31T13:42:40.207
2023-05-31T13:42:40.207
null
null
76851
[ "confidence-interval", "sample" ]
617438
2
null
617436
1
null
For a real symmetric positive semi-definite matrix like a covariance matrix, the singular values are equal to the eigenvalues (each singular value is the nonnegative square root of an eigenvalue of $S^\top S = S^2$, which is just an eigenvalue of $S$ itself). The eigenvectors are also equal to the left and right singular vectors. This is because, for these types of matrices, the eigendecomposition and the SVD give equ...
null
CC BY-SA 4.0
null
2023-05-31T13:44:38.433
2023-05-31T13:50:24.093
2023-05-31T13:50:24.093
53580
53580
null
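A quick numerical illustration of the answer in post 617438: for a symmetric positive semi-definite matrix, the eigenvalues are nonnegative and coincide with the singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 5))
S = np.cov(A, rowvar=False)                  # symmetric PSD covariance matrix

eigvals = np.linalg.eigvalsh(S)              # real eigenvalues, ascending
svals = np.linalg.svd(S, compute_uv=False)   # singular values, descending

# For a symmetric PSD matrix, singular values == eigenvalues
assert np.allclose(np.sort(svals), np.sort(eigvals))
```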
617439
2
null
617420
0
null
@Frank Harrell I'm not a statistics major, but I'll try to present a few motivations for using IPTW from a non-expert perspective. Assuming that by "covariate adjustment" you mean multiple covariate Cox regression: - IPTW allows for adjustment of more parameters. I once attended a lecture on your modeling strategy, an...
null
CC BY-SA 4.0
null
2023-05-31T13:45:57.150
2023-05-31T13:45:57.150
null
null
388935
null
617440
1
null
null
0
7
I have pooled several rounds of annual cross-sectional survey data to create 5 synthetic-cohorts to assess differences in (say smoking) prevalence between cohorts at the same age. The age-range (25-60) for the cohorts does not overlap completely - the most recent cohort has rates for ages 25-38, the oldest cohort 45-60...
Best model to use for computing prevalence rate ratios in cross-sectional data with binary outcome?
CC BY-SA 4.0
null
2023-05-31T13:50:28.157
2023-05-31T13:59:16.823
2023-05-31T13:59:16.823
345611
389270
[ "logistic", "binomial-distribution", "cox-model", "prevalence", "synthetic-cohort" ]
617441
1
617444
null
2
31
On page 64 of [Bayesian Data Analysis](http://www.stat.columbia.edu/%7Egelman/book/) by Gelman et al. they write > ... sensible vague prior density for µ and σ, assuming prior independence of location and scale parameters, is uniform on ($\mu$, $\log~\sigma$) or, equivalently, $p(\mu, \sigma^2) \propto 1/\sigma^2$. ...
Derive the prior on variance scale if uniform prior placed on logarithm scale
CC BY-SA 4.0
null
2023-05-31T13:57:53.433
2023-05-31T14:30:51.373
null
null
43842
[ "bayesian", "prior" ]
617442
1
null
null
2
48
I'm a physicist trying to finally get a hold on practical statistics for particle physics and am having a problem with the following -- I apologize for the lack of formality below. Suppose the number of events within a single channel is governed by a Poisson distribution $P(N,\mu)$, whose parameter for Null ($\mu_b$) and...
Likelihood Ratio vs Modified Frequentist Approach (CLs)
CC BY-SA 4.0
null
2023-05-31T14:04:39.737
2023-06-02T16:50:20.283
2023-05-31T20:23:04.390
389271
389271
[ "hypothesis-testing", "confidence-interval", "likelihood-ratio" ]
617443
2
null
617333
1
null
You can think of it like this: a [function](https://en.wikipedia.org/wiki/Function_(mathematics)) is a mapping $f: x \to y$. We use Gaussian Processes to model random functions $f \sim \mathcal{GP}$, where the mapping is non-deterministic. GP takes some points $x$ and the realizations of the functions $f(x) = y$ to lea...
null
CC BY-SA 4.0
null
2023-05-31T14:18:49.190
2023-05-31T14:18:49.190
null
null
35989
null
617444
2
null
617441
2
null
Your error is going from $\text{Let}~ Y = \log \sigma^2$ to $\dfrac{dY}{d\sigma^2} = 2/\sigma$ You should have: $\dfrac{dY}{d\sigma^2} = 1/{\sigma^2}$ (simple derivative of a logarithm) though perhaps you tried $\dfrac{dY}{d\sigma} = 2\sigma \frac{1}{\sigma^2}= 2/{\sigma}$ (chain rule). This gives you: $\text{If}~ X =...
null
CC BY-SA 4.0
null
2023-05-31T14:22:54.700
2023-05-31T14:30:51.373
2023-05-31T14:30:51.373
2958
2958
null
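The corrected derivative in post 617444 can be checked numerically: with $Y = \log \sigma^2$, $dY/d\sigma^2 = 1/\sigma^2$, which is the Jacobian factor that turns a flat prior on $\log \sigma$ into $p(\sigma^2) \propto 1/\sigma^2$.

```python
import numpy as np

# Central-difference check of dY/d(sigma^2) for Y = log(sigma^2)
def dY(s2, h=1e-6):
    return (np.log(s2 + h) - np.log(s2 - h)) / (2 * h)

for s2 in [0.5, 1.0, 4.0]:
    assert np.isclose(dY(s2), 1.0 / s2, rtol=1e-4)
```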
617446
1
null
null
0
19
I am trying to implement open set classification and from my research, softmax (usually with temperature scaling) can be used to create a confidence metric. However, for a complete outlier input which is not part of any of the known classes, the temperature scaled softmax assigns a probability of 1 to the middle class ...
Softmax gives high value for middle class when seeing outlier data
CC BY-SA 4.0
null
2023-05-31T14:36:47.433
2023-05-31T14:36:47.433
null
null
389274
[ "machine-learning", "tensorflow", "computer-vision", "artificial-intelligence", "softmax" ]
617447
1
null
null
0
12
I am working with different transformations of my response. I use two error metrics which normalize by the range of the data, in order to make comparisons between different models based on these transformations. Does anyone know a source which discusses the implications of normalizing by the range when the distribution i...
Implications of normalizing by the range of data when comparing evaluation metrics for different distributions
CC BY-SA 4.0
null
2023-05-31T15:13:33.563
2023-05-31T16:46:44.573
2023-05-31T16:46:44.573
320876
320876
[ "distributions", "normalization", "error", "model-evaluation" ]
617448
1
null
null
0
17
What kind of deep learning is the generation of numerical features (Y) from objects (X) used to compute a score (f(.), differentiable) that is to be maximized directly? Basically NN$\theta$(x) = y, f(y) = score, so $\frac{d score}{d\theta} = \frac{dy}{d\theta} \frac{d f(y)}{dy}$ can be used for backpropagation; f(.) ca...
What kind of learning is feature generation for score maximization?
CC BY-SA 4.0
null
2023-05-31T15:25:06.880
2023-05-31T16:51:57.627
2023-05-31T16:51:57.627
389279
389279
[ "machine-learning", "generative-models" ]
617449
2
null
617419
4
null
The event* $$0 \in \left[Y-\log\left(\frac{1-\alpha_2}{\alpha_2}\right),Y-\log\left(\frac{\alpha_1}{1-\alpha_1}\right)\right]$$ is equivalent to the event $$\theta \in \left[X-\log\left(\frac{1-\alpha_2}{\alpha_2}\right),X-\log\left(\frac{\alpha_1}{1-\alpha_1}\right)\right]$$ so if you can show that the first even...
null
CC BY-SA 4.0
null
2023-05-31T15:34:29.807
2023-05-31T19:19:57.517
2023-05-31T19:19:57.517
164061
164061
null
617450
1
null
null
1
18
I want to characterize the relation of a few input parameters to a single output parameter. The problem I have is that my data is collected from several groups. The groups are defined both by the input parameters and by how the input parameters interact with the output parameter. I don't know the identity or proportion...
Regression with unlabeled data from several clusters
CC BY-SA 4.0
null
2023-05-31T15:34:50.507
2023-05-31T18:42:39.910
2023-05-31T18:42:39.910
52004
52004
[ "regression", "clustering", "unsupervised-learning" ]
617451
1
null
null
0
4
I am using R. I have a dataset that looks like this using `str()`: ``` 'data.frame': 233 obs. of 3 variables: $ Design : Factor w/ 4 levels "Crossover","Observational",..: 2 3 3 3 4 3 3 1 3 2 ... $ Status : Factor w/ 3 levels "Active","Passive",..: 1 1 2 2 2 2 1 1 2 1 ... $ Outcome: Ord.factor w/ 3 levels "Positi...
Testing difference of counts of one category across a combination of the other two categories
CC BY-SA 4.0
null
2023-05-31T15:54:46.943
2023-05-31T16:06:03.730
2023-05-31T16:06:03.730
378020
378020
[ "categorical-data", "many-categories" ]
617452
1
null
null
4
156
I want to perform a paired t-test to check if there's some effect; I have the distribution of "before" and the distribution of "after" the manipulation. Do I need to assume the population variances of the two distributions are equal?
equal *population* variances in paired t test
CC BY-SA 4.0
null
2023-05-31T16:17:54.197
2023-06-01T07:21:36.193
2023-06-01T06:23:31.740
53690
389283
[ "hypothesis-testing", "distributions", "variance", "t-test", "paired-data" ]
617454
2
null
617452
8
null
Any assumptions you make in a paired test would have to do with the paired differences. After all, a paired test is a one-sample test in disguise. Therefore, NO, you do not need to assume equal variances of the two groups for a paired t-test.
null
CC BY-SA 4.0
null
2023-05-31T16:31:44.597
2023-06-01T07:21:36.193
2023-06-01T07:21:36.193
247274
247274
null
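The answer in post 617454 ("a paired test is a one-sample test in disguise") can be verified directly with scipy: the paired test on two arrays and the one-sample test on their differences give identical statistics and p-values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
before = rng.normal(10.0, 2.0, size=30)
after = before + rng.normal(0.5, 1.0, size=30)   # paired measurements

# A paired t-test is a one-sample t-test on the differences:
t_rel, p_rel = stats.ttest_rel(after, before)
t_one, p_one = stats.ttest_1samp(after - before, 0.0)

assert np.isclose(t_rel, t_one)
assert np.isclose(p_rel, p_one)
```

Note that no assumption about the two marginal variances is used anywhere; only the differences enter the calculation.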
617455
1
null
null
0
13
As I am reading about recommender systems in Machine Learning, UV decomposition caught my eye ([click](https://stats.stackexchange.com/questions/189730/what-is-uv-decomposition) for an explanation or see below). So I have two questions: Question 1: what are the drawbacks of trying to UV-decompose a 1 by m vector into a...
Is it possible to apply matrix decomposition to a vector, injecting additional information to UV decomposition?
CC BY-SA 4.0
null
2023-05-31T16:48:05.837
2023-05-31T16:48:05.837
null
null
389285
[ "machine-learning", "linear-algebra", "recommender-system", "svd", "matrix-decomposition" ]
617456
1
617551
null
2
18
Define $\pi_i$ as the probability that person $i$ will be missing from your sample and $Y_i = 1$ denotes that a subject is missing. Say we're in a missing at random (MAR) scenario where $\pi_i$ depends on two known continuous variables $X_1$ and $X_2$: $$logit(\pi_i) = \beta_1 x_1 + \beta_2 x_2$$ Let's say that I intro...
Baseline rate of missing values. Can missing values be MAR and MCAR?
CC BY-SA 4.0
null
2023-05-31T16:49:09.203
2023-06-01T15:15:55.903
2023-06-01T15:15:55.903
45453
45453
[ "logistic", "mathematical-statistics", "missing-data" ]
617457
1
null
null
0
16
It has the KM and model-predicted curves overlaid for a repeated-event model produced by the code below. I am struggling to interpret this plot. Any help appreciated, thanks. 1) Is this for the first event or all events (counting-event method) regardless of subject? fmods = flexsurvreg(Surv(START,STOP,EVENT) ~ 1,data=data,dist="weibull") p=g...
Does ggflexsurvplot overlay the KM curve with the model-predicted curve for the first event or all events in a repeated-event analysis?
CC BY-SA 4.0
null
2023-05-31T16:53:04.007
2023-05-31T21:42:51.800
2023-05-31T21:42:51.800
297005
297005
[ "survival" ]
617458
1
null
null
0
26
Suppose I have m observations of $y$ vectors of varying dimensions $y_1=(y_{11},\dots, y_{1n_1}),\dots,y_m=(y_{m1},\dots, y_{mn_m})$, where $y_i$ is of dimension $n_i\geq 300$ for $1\leq i\leq m$. Let $X_i$ be corresponding covariate matrix of $y_i$ of dimension $n_i\times p$. I will denote $D=(y_1,X_1,\dots, y_m,X_m)$...
Was there any mistake in my derivation in Gibbs sampling?
CC BY-SA 4.0
null
2023-05-31T16:57:08.053
2023-05-31T19:58:30.650
2023-05-31T19:58:30.650
79469
79469
[ "regression", "bayesian", "markov-chain-montecarlo", "model", "gibbs" ]
617459
1
null
null
2
36
I have a known distribution for my population, and it is very right skewed. Let's say Lognormal with mu = 0 and sigma = 3. The mean of this distribution is about 90, and the median is 1. For a given sample, I am interested in knowing the ratio of values in excess of a certain threshold (let's say 90) to the total sum of...
Distribution Estimator dependent on sample size
CC BY-SA 4.0
null
2023-05-31T16:58:47.690
2023-05-31T19:26:33.600
2023-05-31T19:26:33.600
389281
389281
[ "distributions", "mathematical-statistics", "expected-value" ]
617460
2
null
617459
0
null
It seems like you want an example where the expected value of an estimator depends on the sample size. Such examples certainly exist. Consider the mean $\mu$ and an estimator $\hat\mu = \bar X + \dfrac{1}{\sqrt{n}}$, where $\bar X$ is the usual sample mean. Then $\mathbb E\left[\hat\mu\right] = \mu + \dfrac{1}{\sqrt{n}...
null
CC BY-SA 4.0
null
2023-05-31T17:08:50.807
2023-05-31T19:16:24.583
2023-05-31T19:16:24.583
247274
247274
null
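A simulation of the estimator proposed in post 617460, showing its expectation depends on the sample size:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n, reps = 5.0, 25, 200_000

# the answer's estimator: sample mean plus a deterministic 1/sqrt(n) term
samples = rng.normal(mu, 1.0, size=(reps, n))
mu_hat = samples.mean(axis=1) + 1.0 / np.sqrt(n)

# E[mu_hat] = mu + 1/sqrt(n) = 5.2 here; the bias shrinks as n grows
assert abs(mu_hat.mean() - (mu + 1.0 / np.sqrt(n))) < 0.01
```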
617461
2
null
617143
1
null
An option is to use proportional colored circles (or squares), showing simultaneously absolute numbers and ratios. If you want to show the absolute number of servers, while taking into account the "size of the country" (e.g. the number of inhabitants, the total number of computers in this country, or whatever you think...
null
CC BY-SA 4.0
null
2023-05-31T17:12:21.080
2023-05-31T17:21:21.847
2023-05-31T17:21:21.847
164936
164936
null
617462
1
null
null
0
11
What does the StandardScaler() command do when called in a pipeline, other than the individual subcommands? Here are two code examples where I get a different ML score, from which I conclude that the standardization must be different. Standardization with fit and transform ``` clf = KNeighborsClassifier(n_neighbors=3) X_Scale_trai...
What is the difference between StandardScaler() in a pipeline and a separate StandardScaler().fit_transform, given the different ML scores
CC BY-SA 4.0
null
2023-05-31T17:18:09.813
2023-05-31T17:31:31.767
2023-05-31T17:31:31.767
389289
389289
[ "machine-learning", "python", "scikit-learn", "standardization", "multidimensional-scaling" ]
617463
1
null
null
0
19
I am planning a study where we have a low number of observations. We know we need to control for at least two variables, but other variables also exist that we can control for. It seems to me that adding variables to an analysis is always best. By controlling for the variables that have the largest effect, we can reduc...
What is the effect of adding variables to an analysis on type 1 and type 2 error?
CC BY-SA 4.0
null
2023-05-31T17:27:11.600
2023-05-31T17:27:11.600
null
null
338681
[ "experiment-design", "type-i-and-ii-errors" ]
617464
2
null
617419
4
null
The good thing about the pivotal method is that you can actually find a distribution of the observations independent of the unknown parameter $\theta$, and then implicitly, through that distribution, construct the confidence interval for $\theta$. So, the goal is to create a $(1-a_{1}-a_{2})$ confidence interval for $\t...
null
CC BY-SA 4.0
null
2023-05-31T17:31:40.480
2023-05-31T17:31:40.480
null
null
208406
null
617465
1
null
null
0
29
### There are two classifiers; - Classifier_A -> [A] Classifier_A outputs a binary variable either A or nothing (implicitly not A). - Classifier_B -> [B, C, D] Classifier_B outputs any n-combination of B, C, D. All three variables are booleans. And a lack of an output implicitly implies that that output is false. ...
Creating a test set where predicted variables are independent, ground truth variables are mutually exclusive. Two different classifiers
CC BY-SA 4.0
null
2023-05-31T17:38:02.543
2023-06-01T11:18:32.387
2023-06-01T11:18:32.387
386952
386952
[ "classification", "categorical-data", "binary-data" ]
617467
2
null
617208
0
null
In this question, the aim is to make inference for the average monthly rate $p$ of faulty items in a production line ($p$=number of faulty items per total). Daily count data are available over a long time period (2 years). The total number of items per day is large (thousands), and the failure rate is not very small (a...
null
CC BY-SA 4.0
null
2023-05-31T18:03:28.643
2023-05-31T18:19:25.113
2023-05-31T18:19:25.113
237561
237561
null
617468
1
null
null
0
14
I am working on a multilevel analysis aiming to investigate factors that impact student GPA. The data comes from 16 different schools. To include school effects, we are using a mixed effect model (with random intercepts). However, I would like to understand the effect of school tuition on student GPA and I am confused ...
How to structure higher level effect (between clusters) in mixed-effect models
CC BY-SA 4.0
null
2023-05-31T11:52:28.717
2023-06-01T13:12:49.440
2023-05-31T21:02:12.420
11887
284325
[ "r", "regression", "mixed-model", "lme4-nlme", "multilevel-analysis" ]
617469
1
null
null
0
6
In a role-playing board game, there are various kinds of dice. Dice may have 4, 6, 8, 10 or 12 sides. On any throw, we may toss 1-5 dice of the same kind. Dice are fair. Given a sample of 100 throws, I need to determine the most likely number of faces and dice per throw. I suppose I am searching for a goodness of fit m...
Goodness of fit test, variable dice faces and count
CC BY-SA 4.0
null
2023-05-31T18:05:50.913
2023-05-31T18:05:50.913
null
null
389290
[ "chi-squared-test", "goodness-of-fit", "discrete-data" ]
617470
2
null
617442
1
null
If I'm understanding you correctly, $P(N\mid \mu)$ means the value at $N$ of the Poisson probability mass function with expectation $\mu.$ To find this, you need the value of $N.$ If you observe the value of $N$ you can find $Q.$ The quantity you called $\mathrm{CL}_\mathrm{s}$ can be used if you know $N\le N_0$ but yo...
null
CC BY-SA 4.0
null
2023-05-31T18:23:25.060
2023-06-02T16:50:20.283
2023-06-02T16:50:20.283
5176
5176
null
617471
1
null
null
0
10
I think I understand what average precision is: the area under the precision-recall curve. The curve is constructed by calculating the precision and recall metrics at each threshold. There are a few methods for how you actually calculate/approximate the area, but that is not the focus of my question. For me it was clear how AP i...
Average precision in classification vs in object detection
CC BY-SA 4.0
null
2023-05-31T18:52:24.097
2023-05-31T18:52:24.097
null
null
389064
[ "neural-networks", "classification", "model-evaluation", "object-detection", "average-precision" ]
617472
1
null
null
0
14
I have difficulties better understanding what we commonly call a sampler, especially how to produce a covariance matrix between parameters during an MCMC code run. In MCMC, I know that we start from "guess" values and then iterate by choosing a random value and computing the Chi2 from the experimental data. If...
Subtleties of the MCMC method and more generally about covariance matrices and samplers
CC BY-SA 4.0
null
2023-05-30T18:08:50.073
2023-06-02T16:09:42.110
2023-06-02T16:09:42.110
11887
389017
[ "normal-distribution", "markov-chain-montecarlo", "covariance-matrix" ]
617473
1
null
null
0
12
I have been using GAMMs to analyse time series data and I have included a smoothing term (hour of day by season) and I can't seem to find the results for the winter season. I have the proper information (edf, Ref.df, F, and p value) for all my smoothed terms and each season except for winter. I am using the summary func...
How to interpret smoothing effects in the summary output of a generalised additive mixed effect model GAMM
CC BY-SA 4.0
null
2023-05-31T18:07:15.737
2023-06-02T16:10:35.350
2023-06-02T16:10:35.350
11887
null
[ "r", "modeling" ]
617474
2
null
343146
0
null
From the `sklearn` [documentation](https://github.com/scikit-learn/scikit-learn/blob/1495f6924/sklearn/metrics/classification.py#L500): $$ \kappa = (p_o - p_e) / (1 - p_e) $$ > where $p_o$ is the empirical probability of agreement on the label assigned to any sample (the observed agreement ratio), and $p_e$ is the exp...
null
CC BY-SA 4.0
null
2023-05-31T19:14:02.527
2023-05-31T22:40:29.353
2023-05-31T22:40:29.353
247274
247274
null
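The kappa formula quoted in post 617474 can be computed directly from a (hypothetical) confusion matrix: observed agreement from the diagonal, chance agreement from the products of the marginal proportions.

```python
import numpy as np

# hypothetical confusion matrix between two raters
C = np.array([[20, 5],
              [10, 15]], dtype=float)
n = C.sum()

p_o = np.trace(C) / n                    # observed agreement ratio
p_e = (C.sum(0) / n) @ (C.sum(1) / n)    # expected agreement from margins
kappa = (p_o - p_e) / (1 - p_e)

# p_o = 0.7, p_e = 0.5, so kappa = 0.4 for this matrix
assert np.isclose(kappa, 0.4)
```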
617475
2
null
617338
0
null
In the simpler case of independent data points, a simple two-sample t-test here would give too-low p-values because you choose the change point to create the pair of datasets with the largest possible t-statistic. Suppose we generate a time series of 26 $N(0, 1)$ observations, with no change points. If ...
null
CC BY-SA 4.0
null
2023-05-31T19:16:11.380
2023-05-31T19:16:11.380
null
null
78857
null
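The simulation suggested in post 617475 can be sketched as follows: generate 26 $N(0,1)$ observations with no change point, take the split that maximizes $|t|$, and count how often it exceeds the pointwise 5% critical value. The rejection rate comes out well above the nominal 5% (the minimum group size of 3 is an arbitrary choice here).

```python
import numpy as np

rng = np.random.default_rng(0)
reps, n, crit = 2000, 26, 2.064   # t_{0.975, 24} ~ 2.064

def t_stat(a, b):
    """Pooled two-sample t statistic."""
    na, nb = len(a), len(b)
    sp = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                 / (na + nb - 2))
    return (a.mean() - b.mean()) / (sp * np.sqrt(1 / na + 1 / nb))

hits = 0
for _ in range(reps):
    x = rng.normal(size=n)                    # no change point at all
    # pick the split that maximizes |t|, as a change-point search does
    best = max(abs(t_stat(x[:k], x[k:])) for k in range(3, n - 2))
    hits += best > crit

rate = hits / reps
assert rate > 0.10   # far above the nominal 5% level
```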
617476
2
null
575314
0
null
> When evaluating a machine learning (or other statistical model) against multiple evaluation metrics, is there a standardized way to choose the "best" model? NO It depends on what you value from your predictions. In your example, if you value a high $F_1$ score over a high dice score, you might be inclined to go wi...
null
CC BY-SA 4.0
null
2023-05-31T19:29:44.840
2023-05-31T20:04:34.557
2023-05-31T20:04:34.557
247274
247274
null
617477
2
null
501835
1
null
I've found that some of my students are helped by thinking of the p-value as a percentile. They are familiar with the concepts of being in the top 10% of a class by GPAs, or "among the 1%" in terms of wealth. So for your example, a p-value of 0.04 means "Our observed value of the test statistic $T$ was among the top 4%...
null
CC BY-SA 4.0
null
2023-05-31T19:31:24.873
2023-05-31T19:31:24.873
null
null
17414
null
617478
2
null
405872
1
null
(This seems to be a near-duplicate of a question I [answered](https://stats.stackexchange.com/a/577858/247274) a year ago.) $R^2$ is often defined as a comparison of the sum of squared residuals for the model of interest vs the sum of squared residuals for a model that only has an intercept. With this in mind, I would ...
null
CC BY-SA 4.0
null
2023-05-31T19:40:57.790
2023-05-31T19:40:57.790
null
null
247274
null
617479
1
null
null
1
3
In my team we do a study on a group of patients undergoing abdominal surgery and we evaluate the correlation between frailty amongst patients and complications. Background knowledge: We use a frailty-score (Clinical Frailty Score, CFS) where 1 is non-frail and 9 is severe frail. Patients are categorized into three grou...
Multiple groups with multiple events of different character
CC BY-SA 4.0
null
2023-05-31T19:48:17.930
2023-05-31T19:48:17.930
null
null
388632
[ "recurrent-events" ]
617480
2
null
617429
0
null
Conditioned on $X$ having value $x$, the distribution of $Y$ is $N(x,x^2)$. The conditional distribution of $Z = \dfrac YX$ given that $X=x$ is the same as the distribution of $\dfrac Yx$ which, as you have discovered, is an $N(1,1)$ distribution. Thus, $$f_{Z \mid X=x}(\alpha \mid X=x) = \frac{\exp\left(-\frac{(\alp...
null
CC BY-SA 4.0
null
2023-05-31T19:48:34.930
2023-05-31T19:48:34.930
null
null
6633
null
617481
1
null
null
1
15
What is the relation between the vector X used to create a Gaussian process prior, the X used to 'train' the GP, i.e. giving it some observations (X,y), and the X* (used to make predictions of y* values)?
Gaussian Process prior, posterior, and predictive x vectors?
CC BY-SA 4.0
null
2023-05-31T19:52:59.963
2023-05-31T19:52:59.963
null
null
389294
[ "bayesian", "normal-distribution", "gaussian-process" ]
617482
2
null
503081
0
null
An important consideration is that your models are not giving categories. They are giving values on a continuum that are binned according to a threshold to give discrete categories (above the threshold is one category, below the threshold is the other). Moving this threshold around is what yields ROC curves. A similar ...
null
CC BY-SA 4.0
null
2023-05-31T20:02:06.007
2023-05-31T20:02:06.007
null
null
247274
null
617483
2
null
616677
1
null
In the `emmeans` call, you can specify only predictor (independent) variables. Seems like you want `Persona` there. The dependent variable is understood from the model. You mention that `Prominence` is a moderator, and if you think that it is influenced by `Persona`, consider adding `cov.reduce = Prominence~Persona` to...
null
CC BY-SA 4.0
null
2023-05-31T20:21:10.417
2023-05-31T20:21:10.417
null
null
52554
null
617484
1
null
null
-1
32
I am trying to better understand the importance of "matching" in medical studies. For example, suppose I have a dataset that has different covariates (e.g. height, weight, sex, employment, place of residence, smoking history, etc.) for a large group of people, and a response variable if a person has asthma or not (let'...
Understanding the Need for "Matching" in Medical Studies
CC BY-SA 4.0
null
2023-05-31T20:25:39.387
2023-05-31T20:25:39.387
null
null
77179
[ "regression" ]
617485
1
null
null
0
16
Let's say I have $n$ samples which are vectors of length $p$. I know that the $p \times p$ sample covariance matrix is singular if $n \leq p$. Is there another estimator for the covariance that results in a non-singular matrix when $n \leq p$? My goal is to estimate covariance from many datasets and then quickly sample...
Is there an alternate estimator for a sample covariance matrix when n < p such that the estimator is not singular
CC BY-SA 4.0
null
2023-05-31T20:35:27.580
2023-05-31T20:35:27.580
null
null
261708
[ "covariance", "estimators", "multivariate-normal-distribution", "svd", "singular-matrix" ]
617486
2
null
615790
0
null
Is there a clear and precise explanation of why minimising the variance of the weights in SIS with respect to a proposal ensures that the samples generated from the empirical distribution induced by the normalised weights will be closer to the posterior/target distribution? I tend to think of this problem in terms of t...
null
CC BY-SA 4.0
null
2023-05-31T20:45:03.887
2023-05-31T20:45:03.887
null
null
78857
null
617487
2
null
616613
0
null
I think you have a nested fixed-effects structure, where `group` is nested in `sub_type`. Did not `emmeans` auto-detect this? You can make this structure explicit by omitting any term where `group` does not interact with `sub_type`: ``` mod2 <- lm(value ~ (sub_type + sub_type:group)*study_day*gender, data = dd) ``` `e...
null
CC BY-SA 4.0
null
2023-05-31T20:48:24.410
2023-05-31T20:48:24.410
null
null
52554
null
617488
1
null
null
1
26
I am trying to understand how the values of the IRF plots are estimated. I read the following page: [https://www.statsmodels.org/stable/vector_ar.html](https://www.statsmodels.org/stable/vector_ar.html) But I don't understand how the values of the impulse response are estimated. I have a model that I fit with an order of 3. ``...
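For context on what such a routine computes: the impulse responses are the coefficient matrices of the MA($\infty$) representation, obtained recursively from the fitted VAR coefficients via $\Psi_0 = I$, $\Psi_s = \sum_{i=1}^{\min(s,p)} A_i \Psi_{s-i}$. A hedged univariate sketch of that recursion (scalars instead of matrices, made-up coefficients, not statsmodels' actual code):

```python
# Impulse responses of a VAR(p) come from the MA(infinity) recursion
#   Psi_0 = I,  Psi_s = sum_{i=1..min(s,p)} A_i @ Psi_{s-i}.
# Univariate sketch mirroring an order-3 fit; coefficients are made up.

def impulse_responses(coefs, horizon):
    psi = [1.0]                       # Psi_0 = 1 (identity in 1-d)
    for s in range(1, horizon + 1):
        psi.append(sum(coefs[i - 1] * psi[s - i]
                       for i in range(1, min(s, len(coefs)) + 1)))
    return psi

coefs = [0.5, 0.2, 0.1]               # hypothetical A_1, A_2, A_3
print(impulse_responses(coefs, 4))
```

In the multivariate case each scalar product becomes a matrix product, and `irf.plot()` simply draws the entries of the $\Psi_s$ matrices against $s$.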
Impulse response values in VAR (statsmodels)
CC BY-SA 4.0
null
2023-05-31T21:02:11.623
2023-06-01T06:21:49.397
2023-06-01T06:21:49.397
53690
246234
[ "python", "vector-autoregression", "statsmodels", "impulse-response" ]
617489
2
null
526583
1
null
Here is a drawing of a two-layer neural network. [](https://i.stack.imgur.com/AkvZD.png) The blue, red, purple, and grey lines represent network weights, and the black line is a bias. Assume the pink output neuron has a sigmoid activation function, so it gives a predicted probability. Then the equation is: $$ p = \text...
null
CC BY-SA 4.0
null
2023-05-31T21:21:33.053
2023-05-31T21:21:33.053
null
null
247274
null
617490
1
617499
null
0
26
I have the following result from a hierarchical model.[](https://i.stack.imgur.com/c3dz2.png) I know how to write the equation for a multiple regression model. Is it possible to write a similar mathematical equation using the coefficients from this hierarchical regression model?
How to write the results of a hierarchical regression into an equation?
CC BY-SA 4.0
null
2023-05-31T21:36:06.777
2023-06-02T02:36:29.187
null
null
250576
[ "regression", "mixed-model", "lme4-nlme" ]
617493
1
null
null
0
39
I was always taught to use $p\times(1-p)\times n$ for binomial variance. In a textbook for actuarial problems, I have: ``` probability of death benefit A .01 200,000 B .05 100,000 ``` Using $Var(A) = .01\times 200000^2 - (.01\times 200000)^2 = 396000000$ I get the same answer with...
When to use $p\times (1-p)\times n^2$ for variance?
CC BY-SA 4.0
null
2023-05-31T23:32:07.170
2023-06-01T07:44:26.710
2023-06-01T00:25:28.083
44269
114193
[ "variance" ]
617495
2
null
617493
1
null
You are right about the variance of a binomial random variable. In your example, the number of deaths would be modelled as a binomial variable. In the example of your textbook, the quantity of interest is, however, not the number of deaths, but apparently the benefit paid for one particular person in one year. This is ...
null
CC BY-SA 4.0
null
2023-05-31T23:56:09.783
2023-05-31T23:56:09.783
null
null
237561
null
617496
1
null
null
0
19
I am looking to build a multi-state model. Some packages in `R`, for instance, are for panel or intermittently observed data (the `msm` package), which I believe would be interval-censored data, and others can be used to fit models where transition times are known (packages such as `mstate` and `flexsurv`). My question is to...
Defining states in Multi-State model
CC BY-SA 4.0
null
2023-06-01T00:27:00.737
2023-06-01T15:00:59.523
2023-06-01T01:35:21.743
281323
281323
[ "r", "survival", "censoring", "interval-censoring", "competing-risks" ]
617497
1
null
null
0
13
I did ANN classification using SMOTE random oversampling in Python, but I found strange loss and accuracy plots. This is my code: ``` #With SMOTE sm = SMOTE(random_state=42) Train_X2_Smote, Train_Y2_Smote = sm.fit_resample(Train_X2_Tfidf, Train_Y2) #TRIAL 4 def reset_seeds(): np.random.seed(0) python_random.s...
ANN with Python SMOTE random oversampling
CC BY-SA 4.0
null
2023-06-01T00:50:05.163
2023-06-01T00:50:05.163
null
null
375024
[ "neural-networks", "python", "data-visualization", "oversampling", "smote" ]
617498
1
null
null
0
7
Can someone explain to me the difference between these approaches? If you want, I can provide the results, but since they are quite extensive, I could attach them on demand. I'm working with the Theory of Planned Behavior. Let's say - a1,a2,a3,a4 are for construct A - s1,s2,s3,s4 are for construct S - p1,p2,p3,p4 are for cons...
Difference between SEM and OLS+PCA
CC BY-SA 4.0
null
2023-06-01T01:10:40.007
2023-06-01T01:10:40.007
null
null
376081
[ "least-squares", "structural-equation-modeling", "lavaan" ]
617499
2
null
617490
2
null
If LaTeX output is OK, try `equatiomatic::extract_eq(mdl)`. For example, on this model: ``` library(lme4) (fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)) equatiomatic::extract_eq(fm1) ``` I get the LaTeX: $$ \begin{aligned} \operatorname{Reaction}_{i} &\sim N \left(\alpha_{j[i]} + \beta_{1j[i]}(\opera...
null
CC BY-SA 4.0
null
2023-06-01T01:24:04.603
2023-06-02T02:36:29.187
2023-06-02T02:36:29.187
369002
369002
null
617500
1
null
null
0
23
As in the title, I'm curious: is including only consecutive observations of an ID in longitudinal data a prerequisite for estimating an ID-level fixed effect? For instance, I have longitudinal data with a firm_id-year structure. There is one value of firm_id, say, 'Corp_Umbrella,' with the value of `sales` at ...
Are consecutive observations a prerequisite for fixed-effect estimation in panel data?
CC BY-SA 4.0
null
2023-06-01T01:25:52.863
2023-06-01T02:58:01.550
2023-06-01T02:58:01.550
362671
130153
[ "regression", "panel-data", "fixed-effects-model" ]
617501
1
null
null
2
61
I am running a statistical test to determine if females are more influenced by the framing effect. I designed a survey with three overall questions, each with a "positive frame" and a "negative frame". Each participant would be randomly chosen to answer either the positive or negative frame, and each frame would have a...
Compare and find p-value between two t-tests
CC BY-SA 4.0
null
2023-06-01T01:30:05.373
2023-06-03T02:57:21.553
2023-06-03T02:57:21.553
389304
389304
[ "hypothesis-testing", "t-test", "multiple-comparisons", "difference-in-difference", "group-differences" ]
617502
1
null
null
-1
24
We'd like to test if two rates (number of occurrences / number of days) are statistically different for a paper. However, for one of the rates (let's say Rate A), we have uncertainty around the exposure (number of days). We have several different estimates for the exposure of Rate A. But the exposure variability is not...
Confidence interval for a rate where there is uncertainty around the exposure (number of days)
CC BY-SA 4.0
null
2023-06-01T02:04:55.017
2023-06-01T04:36:04.813
2023-06-01T02:54:38.023
362671
389305
[ "statistical-significance", "confidence-interval", "uncertainty" ]
617503
1
null
null
0
18
I have been using GAMMs to analyze time series data and I have included a smoothing term (hour of day by season) and I can't seem to find the results for the winter season. I have the proper information (edf, Ref.df, F, and p-value) for all my smoothed terms and each season except for winter. I am using the summary fun...
Why am I missing the result of a smoothing effect in a GAMM while interpreting results from summary command
CC BY-SA 4.0
null
2023-06-01T02:38:53.557
2023-06-01T14:27:41.707
2023-06-01T14:27:41.707
389307
389307
[ "r", "modeling", "mgcv" ]
617504
1
null
null
2
14
Suppose that I observe a bi-variate joint distribution over two random variables, $(X_1,X_2)$. I want to represent this joint distribution as arising from a function $F$ applied to i.i.d. uniform random variables, that is, I want to find $F:[0,1]^2\to\mathbb R^2$ such that when $U_1,U_2$ are i.i.d. $Uniform(0,...
Uniqueness of a Latent Representation Under Monotonicity Condition?
CC BY-SA 4.0
null
2023-06-01T03:26:35.630
2023-06-01T03:26:35.630
null
null
188356
[ "uniform-distribution", "copula", "latent-variable", "identifiability" ]
617505
1
null
null
1
9
When I do causal mediation analysis with the R package mediation::mediate(), I need to print out the standard error of the indirect-effect estimate. However, I suppose there is no such output? I have two questions, as follows: - Is there any way to get the SE based on this function output? - The output gives me the CI. An...
The CI and standard error of indirect effect in Mediation analysis
CC BY-SA 4.0
null
2023-06-01T04:12:28.883
2023-06-01T04:43:52.847
2023-06-01T04:43:52.847
386760
386760
[ "confidence-interval", "bootstrap", "standard-error", "mediation" ]
617506
1
null
null
0
5
$U \colon (0,\infty) \to (0,\infty)$ is called a $\rho$-varying function if $\frac{U(xt)}{U(t)} \to x^{\rho}$ as $t \to \infty$. Here, we assume that $U$ is a $\rho$-varying function with $\rho>-1$. Furthermore, we assume $U$ is locally integrable and that it is integrable on any interval of the form $(0,b), b < \infty$. In t...
For $\rho$-varying function with $\rho>-1$, $\lim_{t \to \infty} \frac{\int_0^t U(sx)ds }{\int_{N}^t U(sx)ds} = 1$
CC BY-SA 4.0
null
2023-06-01T04:15:56.797
2023-06-01T04:15:56.797
null
null
260660
[ "extreme-value", "measure-theory" ]
617507
2
null
617270
2
null
$\mathbf X$ is a random $n$-vector that models the observation $\mathbf x$ in $\mathbb R^n$. The decision rule is specified by partitioning $\mathbb R^n$ into disjoint subsets $R_1, R_2, \cdots$: if the observation $\mathbf X$ is an element of $R_j$, then we decide or declare that $\mathsf H_j$ is the true hypothesis: ...
null
CC BY-SA 4.0
null
2023-06-01T04:15:59.127
2023-06-01T20:52:27.123
2023-06-01T20:52:27.123
6633
6633
null
617508
2
null
617502
0
null
I'm not sure there's a "best way" for this. All your results will have to be taken with a big pinch of salt, given that you are basically guessing the exposure time, and you don't know if your estimates are biased. Some options are: - Use your own knowledge of how the exposure times were guessed, to choose the best on...
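The "range of estimates" option above can be sketched as a crude sensitivity analysis: compute a CI for the rate under each candidate exposure and report the union as a bound. A minimal sketch using the Wald interval on the log rate (all numbers made up for illustration):

```python
# Sketch: sensitivity analysis for a rate with uncertain exposure.
# For each candidate exposure, build a Wald CI for log(rate) assuming
# Poisson counts, then take the union across candidates as a bound.
import math

events = 40
exposures = [365, 400, 450]           # candidate exposure days (made up)

def wald_ci(events, exposure, z=1.96):
    rate = events / exposure
    se_log = 1 / math.sqrt(events)    # SE of log(rate) for a Poisson count
    return (rate * math.exp(-z * se_log), rate * math.exp(z * se_log))

cis = [wald_ci(events, e) for e in exposures]
lo, hi = min(c[0] for c in cis), max(c[1] for c in cis)
print(round(lo, 4), round(hi, 4))     # conservative overall bounds
```

This does not turn guessed exposures into real uncertainty quantification, but it makes the dependence on the guess explicit, which matches the spirit of the options listed above.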
null
CC BY-SA 4.0
null
2023-06-01T04:36:04.813
2023-06-01T04:36:04.813
null
null
369002
null
617509
1
null
null
0
12
Let W be a discrete random variable with cdf $$ F_{W}(w)= 1-\left(\frac{1}{2}\right)^{\lfloor w \rfloor}\ \text{if}\ w>0\ (0\ \text{otherwise}), $$ where $\lfloor w \rfloor$ is the largest integer less than or equal to $w$. How can I get the pmf of $Y=W^2$?
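The general technique: for a discrete variable, the pmf is recovered by differencing the cdf at its jump points, and a one-to-one transform just relabels the support. A minimal Python sketch of that technique (the function names are mine):

```python
# Sketch: recover a pmf from a cdf by differencing at the support points,
# then push it through y = w**2, which is one-to-one on w = 1, 2, ...
import math

def F_W(w):
    return 1 - 0.5 ** math.floor(w) if w > 0 else 0.0

def pmf_W(k):                       # P(W = k) = F_W(k) - F_W(k - 1)
    return F_W(k) - F_W(k - 1)

# P(Y = k**2) = P(W = k) since the transform is injective on the support
pmf_Y = {k**2: pmf_W(k) for k in range(1, 6)}
print(pmf_Y[1], pmf_Y[4], pmf_Y[9])   # 0.5 0.25 0.125
```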
How to get the pmf of Y=W^2 when the CDF is given
CC BY-SA 4.0
null
2023-06-01T05:11:53.567
2023-06-01T05:11:53.567
null
null
389309
[ "density-function", "cumulative-distribution-function" ]
617510
1
null
null
0
6
A stated disadvantage of SGD is that it scales the gradient equally in all directions, and Adam fixes this. How can that be? What would an example look like if depicted in a graph?
How does Adam fix SGD's disadvantage of scaling the gradient equally in all directions?
CC BY-SA 4.0
null
2023-06-01T05:26:31.600
2023-06-01T05:26:31.600
null
null
375024
[ "optimization", "gradient-descent", "gradient", "stochastic-gradient-descent", "adam" ]
617511
1
null
null
2
56
We know that if $X \sim N_p(\mu,\Sigma)$ then $(X-\mu)^T \Sigma^{-1}(X-\mu) \sim \chi^2_p$, does the converse hold? Is it possible for a non-multivariate Gaussian random variable to satisfy $(X-E(X))^T (cov(X))^{-1}(X-E(X)) \sim \chi^2_p$?
Does $(X-E(X))^T (cov(X))^{-1}(X-E(X)) \sim \chi^2_p$ imply normality?
CC BY-SA 4.0
null
2023-06-01T05:36:29.340
2023-06-01T13:30:10.293
null
null
68301
[ "distributions" ]
617512
2
null
617511
6
null
A super simple counterexample: Let $X \sim \mathcal{N}(0, 1)$, but let $Y = |X|$. Well, what's the distribution of $Y^2$?
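A quick Monte Carlo illustration of this counterexample (a sketch; the sample size and tolerances are arbitrary):

```python
# Sanity check of the counterexample: Y = |X| with X ~ N(0, 1).
# Y is clearly not normal (it is nonnegative, with mean sqrt(2/pi)
# rather than 0), yet Y**2 = X**2 is chi-squared with 1 df (mean 1).
import math, random

random.seed(0)
ys = [abs(random.gauss(0.0, 1.0)) for _ in range(200_000)]

mean_y = sum(ys) / len(ys)                    # ~ sqrt(2/pi) ~ 0.798
mean_y2 = sum(y * y for y in ys) / len(ys)    # ~ 1, the chi^2_1 mean
print(round(mean_y, 2), round(mean_y2, 2))
```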
null
CC BY-SA 4.0
null
2023-06-01T05:45:39.373
2023-06-01T13:30:10.293
2023-06-01T13:30:10.293
8013
8013
null
617513
1
null
null
0
12
I am learning the weighted majority algorithm in "Foundations of Machine Learning" by Mohri. But I cannot understand the conclusion from the book and other references. It has the statement > No deterministic algorithm can achieve a regret $R_T = o(T)$ over all sequences. How can we prove this? The book provides a scen...
Regret for deterministic algorithm
CC BY-SA 4.0
null
2023-06-01T05:57:43.050
2023-06-01T05:57:43.050
null
null
157934
[ "machine-learning", "online-algorithms" ]
617514
1
null
null
0
16
64,810 women were screened for cervical cancer with a Pap-smear test. Suppose 132 of the 177 women diagnosed with cancer using colonoscopy tested positive, and another 983 women tested positive through the screening program. Construct a 2x2 table and answer the following: b. What is the prevalence of disease in this population? c. Calcu...
How to construct a 2x2 table: 64,810 women screened for cervical cancer, 132/177 diagnosed with cancer by colonoscopy, 983 positive in the screening program?
CC BY-SA 4.0
null
2023-06-01T06:43:03.690
2023-06-01T06:43:03.690
null
null
389313
[ "biostatistics" ]
617515
2
null
617493
0
null
The squared terms probably do not refer to the parameter $n$ in a binomial distribution. Instead, your computation* and the formula with a square relate to a scaled [Bernoulli distribution](https://en.wikipedia.org/wiki/Bernoulli_distribution) with support $x \in \lbrace 0, w \rbrace$. This has variance $w^2\cdot p(1-p...
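A quick numeric check with the question's figures, showing that the moment computation and the scaled-Bernoulli formula agree:

```python
# Variance of a scaled Bernoulli: X = w with probability p, else 0.
# Route 1: E[X^2] - (E[X])^2.  Route 2: w^2 * p * (1 - p).
p, w = 0.01, 200_000

var_moments = p * w**2 - (p * w) ** 2
var_formula = w**2 * p * (1 - p)

print(round(var_moments), round(var_formula))   # 396000000 396000000
```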
null
CC BY-SA 4.0
null
2023-06-01T06:53:56.110
2023-06-01T07:44:26.710
2023-06-01T07:44:26.710
164061
164061
null
617516
1
617518
null
0
21
I understand the analytic proof that lasso regularisation tends to shrink coefficients to zero. However, from a practical standpoint, most of those methods are combined with gradient optimisation (like SGD). For this reason, the gradient of the penalty term w.r.t. each parameter is $\lambda\texttt{sign}(w_i)$, where $\lambda$ is t...
From a computational perspective, how does the lasso regression shrink coefficients to 0?
CC BY-SA 4.0
null
2023-06-01T07:01:07.523
2023-06-01T07:53:59.090
2023-06-01T07:03:34.313
389315
389315
[ "regression", "lasso", "regularization" ]
617518
2
null
617516
2
null
The coefficients start at zero. That is, the algorithm starts by applying a sufficiently large penalty that all the coefficient estimates are exactly zero. As the penalty is progressively decreased, coefficients start moving away from zero, one at a time. The problem you point out is one reason that starting from a h...
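One way to see the mechanism concretely: in coordinate descent (the usual lasso solver, rather than SGD), each update applies the soft-thresholding operator, whose flat spot at zero is what produces exact zeros. A minimal sketch (not any particular library's implementation):

```python
# Sketch of why lasso produces exact zeros: the coordinate-wise update
# is the soft-thresholding operator, which has a flat spot at 0 --
# unlike a plain gradient step on lambda * sign(w), which oscillates.

def soft_threshold(z, lam):
    """argmin_w 0.5*(w - z)**2 + lam*|w|  (closed form)."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0            # everything within [-lam, lam] snaps to zero

print(soft_threshold(3.0, 1.0))    # 2.0
print(soft_threshold(-3.0, 1.0))   # -2.0
print(soft_threshold(0.4, 1.0))    # 0.0  <- exact zero, no oscillation
```

As the penalty `lam` is decreased along the regularisation path, each coefficient stays pinned at zero until its unpenalised update `z` exceeds `lam` in magnitude, which is exactly the "one at a time" behaviour described above.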
null
CC BY-SA 4.0
null
2023-06-01T07:53:59.090
2023-06-01T07:53:59.090
null
null
249135
null
617519
1
null
null
0
5
How do I keep `normalmixEM` from printing the number of iterations it required? I am using it in a call inside a bootstrap and the resulting dynamic report in RMarkdown becomes a beast as it prints out the required number of iterations for each bootstrap resample fit, as per the 3-line sample below: '## number of itera...
Silencing output of `normalmixEM` from R package `mixtools`
CC BY-SA 4.0
null
2023-06-01T08:14:36.590
2023-06-01T08:19:35.143
2023-06-01T08:19:35.143
110833
180421
[ "r" ]
617520
2
null
617511
2
null
There are various artificial solutions to this - Let $Y_1$ be any zero-mean variable that is lighter-tailed than Normal and has variance at most 1. Take $Q\sim \chi^2_2$ correlated with $Y_1^2$ so that $Q-Y_1^2$ is always non-negative and then take $Y_2=\sqrt{Q}$ with a random $\pm$ sign (so that its mean is zero). ...
null
CC BY-SA 4.0
null
2023-06-01T08:18:29.793
2023-06-01T08:18:29.793
null
null
249135
null
617521
1
null
null
0
24
An often-cited advantage of Structural Equation Modeling (SEM) is that it is able to account for measurement error in the observed indicator variables, therefore allowing for consistent estimates in the presence of errors-in-variables (in contrast to standard linear regression). It is not clear to me, however, what type...
Types of measurement error and their implications in SEM
CC BY-SA 4.0
null
2023-06-01T08:22:09.370
2023-06-01T09:56:13.223
null
null
321797
[ "structural-equation-modeling", "measurement-error" ]
617522
1
null
null
0
15
I have a dataset where `treatment` and `subject` play a role in how the data behave. I am fitting a linear model where I am modeling feature abundance as a function of both covariates. My aim is to perform comparisons at the `treatment` level while removing the effect of `subject` on the data. However, in some cases I h...
Linear models: can we trust estimated coefficients when some are not estimable?
CC BY-SA 4.0
null
2023-06-01T08:42:29.573
2023-06-01T08:42:29.573
null
null
59647
[ "multiple-regression", "linear-model" ]
617523
1
null
null
-2
17
[](https://i.stack.imgur.com/gGt4q.jpg) [enter image description here](https://i.stack.imgur.com/iUIld.jpg) [](https://i.stack.imgur.com/yal3y.jpg) [](https://i.stack.imgur.com/Zz20l.jpg) [](https://i.stack.imgur.com/8qRrS.jpg) Need interpretation.
Can you interpret my graph?
CC BY-SA 4.0
null
2023-06-01T08:46:49.907
2023-06-01T08:46:49.907
null
null
389320
[ "regression", "interpretation", "linear" ]
617525
1
null
null
1
28
I wanted to ask a conceptual question about what to do with main effects. Assume I have two randomly assigned, equal groups (let's say CBT and Control; N1=25, N2=25). I collected depression levels at three time points (pre, post, and follow-up). At the pre level, using an independent-samples t-test, the groups did not differ from each other signifi...
Which is the correct way to deal with insignificant main effect of Group condition? Stick with interaction effect in RM-Anova or perform ANCOVA?
CC BY-SA 4.0
null
2023-06-01T09:09:44.397
2023-06-01T20:03:54.300
2023-06-01T20:03:54.300
389177
389177
[ "anova", "repeated-measures", "ancova", "random-allocation", "main-effects" ]
617526
2
null
616808
0
null
## The causal effect can be identified with the right methodology An instrumental variable (IV) can be used to estimate the causal effect even under hidden confounding. However, one has to use a suitable estimation procedure and how to best estimate effects in an IV setting is a research question of its own. The Wik...
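The simplest such procedure, with a single instrument, is the Wald/2SLS estimator `beta_IV = cov(z, y) / cov(z, x)`. A minimal simulation sketch showing it recovering the causal effect under hidden confounding, where OLS does not (all numbers are made up for illustration):

```python
# Sketch of the Wald / single-instrument 2SLS estimator:
#   beta_IV = cov(z, y) / cov(z, x).
# Simulated data with a hidden confounder u: OLS is biased, IV is not.
import random

random.seed(1)
n, beta = 50_000, 2.0
z = [random.gauss(0, 1) for _ in range(n)]        # instrument
u = [random.gauss(0, 1) for _ in range(n)]        # hidden confounder
x = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
y = [beta * xi + 3.0 * ui + random.gauss(0, 1) for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

beta_ols = cov(x, y) / cov(x, x)   # biased upward by the confounder
beta_iv = cov(z, y) / cov(z, x)    # consistent for beta = 2
print(round(beta_ols, 2), round(beta_iv, 2))
```

With more instruments or covariates one would use a proper 2SLS routine rather than this ratio, but the sketch shows why validity of the instrument (independence from `u`) is what buys identification.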
null
CC BY-SA 4.0
null
2023-06-01T09:22:28.343
2023-06-01T09:22:28.343
null
null
250702
null
617527
2
null
358766
1
null
Just a small edit to Kevin's answer: I think there's a small typo as the derivative of the expression $\frac{p}{1-p}$ written above reaches a stationary point at $x=\frac{0.0265}{2 * 0.000462}$. So $57.36$ should be divided by 2.
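A quick numeric check of the corrected arithmetic, assuming the quadratic form from the linked answer (the sign convention below is my assumption):

```python
# Check of the corrected stationary point: for a concave quadratic
# b*x - a*x**2 (with b = 0.0265, a = 0.000462, as in the linked answer),
# the maximum is at x = b / (2 * a), not b / a.
b, a = 0.0265, 0.000462
x = b / (2 * a)
print(round(x, 2))   # 28.68, i.e. 57.36 / 2
```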
null
CC BY-SA 4.0
null
2023-06-01T09:30:44.667
2023-06-01T09:30:44.667
null
null
389325
null
617528
1
null
null
0
7
How can I convert 17 joint points into pose parameters in the SMPL model?
How to convert 17 joint points in the human 3.6 dataset into pose parameters for 24 nodes in the SMPL model?
CC BY-SA 4.0
null
2023-06-01T09:33:24.410
2023-06-01T09:33:24.410
null
null
389326
[ "machine-learning", "forecasting" ]
617529
2
null
616904
2
null
If you had a large number of data points, I'd strongly recommend simply fitting a random forest while keeping your response continuous. Random forests can deal with possible nonlinearities and are structurally quite robust to overfitting. There's no need to dichotomise your continuous variable - it throws away information...
null
CC BY-SA 4.0
null
2023-06-01T09:38:47.173
2023-06-01T11:21:49.797
2023-06-01T11:21:49.797
121522
121522
null
617530
1
null
null
1
24
I am working with a dataset of 110.000 rows. Each row only has categorical data, most of which is also nominal. Each of these rows represents an event that has several parameters (again, nominal) and an outcome. The question I'm trying to answer is what combination of parameters gives the best result. In this case, the...
Way to find the best performing combination of categorical parameters
CC BY-SA 4.0
null
2023-06-01T09:55:47.027
2023-06-01T09:55:47.027
null
null
388715
[ "machine-learning", "categorical-data" ]
617531
2
null
617521
0
null
It depends on your measurement design and model(s) what types of measurement "error" (systematic vs. unsystematic) you can account for and whether the different sources of "error" can be separated from one another. For example, to separate random error in self report measures of depression, you need at least two measur...
null
CC BY-SA 4.0
null
2023-06-01T09:56:13.223
2023-06-01T09:56:13.223
null
null
388334
null