Id: stringlengths 1–6
PostTypeId: stringclasses, 7 values
AcceptedAnswerId: stringlengths 1–6
ParentId: stringlengths 1–6
Score: stringlengths 1–4
ViewCount: stringlengths 1–7
Body: stringlengths 0–38.7k
Title: stringlengths 15–150
ContentLicense: stringclasses, 3 values
FavoriteCount: stringclasses, 3 values
CreationDate: stringlengths 23–23
LastActivityDate: stringlengths 23–23
LastEditDate: stringlengths 23–23
LastEditorUserId: stringlengths 1–6
OwnerUserId: stringlengths 1–6
Tags: list
608762
2
null
608757
8
null
The default linear model implied by `lm` assumes that the residual standard deviation in sepal length is the same for all plants. Under this assumption, the residual standard deviation for any plant is estimated as 0.5148, and so the standard error for the difference between two groups is $$ 0.5148\times\sqrt{\frac{1}{n_1}+\frac{1}{n_2}}$$ For all of the pairwise comparisons in this dataset, $n_1=n_2=50$, and so the standard error is the same $(0.5148\times\sqrt{2/50}=0.103)$. If you were to estimate the difference between setosa and versicolor separately (without the virginica data) then it would have a different standard error based only on the residual standard deviation in those species. As it is in your model, the setosa vs versicolor test is being influenced by the variation in the virginica group! This isn't necessarily wrong, but it's important to understand this and to be happy with the assumption that the pooled standard deviation can be found in this way. This is partly why we say that equal residual variances across all groups is an important assumption for linear regression. If it's not true, even in some groups, it can affect all of the tests based on that model. Is it OK in this case? The standard deviations across groups do vary here, but not enormously. There may be a case for a log-transformation before estimating this model, which would stabilise the variances across groups, or the use of a different model. ``` > aggregate(data=iris, Sepal.Length ~ Species, sd) Species Sepal.Length 1 setosa 0.3524897 2 versicolor 0.5161711 3 virginica 0.6358796 ``` [](https://i.stack.imgur.com/5XfEk.png)
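The standard-error arithmetic above is easy to double-check numerically. A quick sketch (in Python rather than the R used above), plugging in the residual SD and group sizes quoted in the answer:

```python
import math

sigma = 0.5148          # pooled residual SD from the lm fit quoted above
n1 = n2 = 50            # group sizes in the iris data
se = sigma * math.sqrt(1 / n1 + 1 / n2)
print(round(se, 3))     # 0.103, matching the value in the answer
```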
null
CC BY-SA 4.0
null
2023-03-08T12:21:04.473
2023-03-08T16:35:22.730
2023-03-08T16:35:22.730
68149
68149
null
608763
1
null
null
0
43
I am trying to understand a model, and I feel a bit stuck and I think I am missing something easy and fundamental. The model in question is rather strange and of the form $$ f(x_1,x_2,...,x_{n-1},x_n) = C_0 (1+x_1C_1) (1+x_2 C_2)... (1+x_{n-1}C_{n-1})(1+x_nC_n) $$ the variables ($x_i$) are mainly categoricals (i.e. one-hot encoded to either 0 or 1), and the coefficients ($C_i$) are then usually small and thus modify the first coefficient ($C_0$, kind of like a base level, since it is $(1+x_iC_i)$). I cannot remember ever having seen such a model, and it is very difficult to constrain the coefficients since it is all interactions. 1. Is there a name for this kind of model? 2. Perhaps there is a transformation that I am missing that I can do to make the fitting easier (log(0) is not great though)? 3. Are there any recommendations for which model I should replace this with that is easier to constrain (a linear model with interactions included)? Technically the model is even stranger and also contains a linear part, like $$ f(x_1,x_2,...,x_{n-1},x_n) = A (B + x_1C_1)(1+x_2C_2) (1+x_3 C_3)... (1+x_{n-1}C_{n-1})(1+x_nC_n) $$ I did not know how to express the coefficients here, but I think it is pretty clear anyway: $A, B,$ and $C_i$ are coefficients, and $x_i$ are variables.
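On question 2, one observation (a sketch, not necessarily the intended answer): because the $x_i$ are 0/1, taking logs makes the first model exactly linear, $\log f = \log C_0 + \sum_i x_i \log(1+C_i)$, since $\log(1+x_iC_i) = x_i\log(1+C_i)$ when $x_i\in\{0,1\}$. The $\log(0)$ worry only bites if some $1+C_i \le 0$. A minimal numeric check with made-up coefficient values:

```python
import math

C0 = 10.0
C = [0.05, -0.03, 0.08]        # hypothetical small coefficients
x = [1, 0, 1]                  # one-hot inputs

# The multiplicative model, evaluated directly:
f = C0
for xi, ci in zip(x, C):
    f *= (1 + xi * ci)

# The log-linear form: with x_i in {0,1}, log(1 + x_i*C_i) = x_i * log(1 + C_i),
# so log f is linear in x with slopes b_i = log(1 + C_i).
log_f = math.log(C0) + sum(xi * math.log(1 + ci) for xi, ci in zip(x, C))
```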
Help with strange, strictly multiplicative, model definition
CC BY-SA 4.0
null
2023-03-08T12:26:52.330
2023-03-08T13:13:21.893
2023-03-08T13:13:21.893
267824
267824
[ "regression", "mathematical-statistics" ]
608764
1
608767
null
1
103
I am running linear mixed models analyses for the second time. My categorical variable random1 for group is a dummy variable (1-0), and the output has this error: - b Parameter is set to zero because it is redundant. [](https://i.stack.imgur.com/ebHmf.png) [](https://i.stack.imgur.com/SzLWg.png) How can I interpret my results? Do I need to fix my data/coding to get reliable results? Why do I encounter this problem? I have run similar analyses before with the same dependent variable and fixed factors including my dummy variable (but other covariates); this is the first time I have encountered this problem. I have tried running the analysis without an intercept, but it does not work (I still get the error). I have read a lot about the dummy trap, but struggle to solve my problem. Everything seems incomprehensible.
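The "redundant parameter" message is the dummy-variable trap made visible: an intercept plus indicators for both levels of a two-level factor are linearly dependent, so the software has to fix one parameter at zero. A small rank check (illustrative made-up data, sketched in Python rather than SPSS) shows the dependence:

```python
import numpy as np

g = np.array([1, 1, 0, 0, 1, 0])                    # a two-level dummy (made-up data)
X_trap = np.column_stack([np.ones(6), g, 1 - g])    # intercept + indicators for BOTH levels
X_ok = np.column_stack([np.ones(6), g])             # intercept + one indicator only

# The columns of X_trap satisfy col0 = col1 + col2, so it is rank-deficient:
print(np.linalg.matrix_rank(X_trap), "of", X_trap.shape[1], "columns")   # 2 of 3
print(np.linalg.matrix_rank(X_ok), "of", X_ok.shape[1], "columns")       # 2 of 2
```

Only two parameters are estimable either way, which is why dropping one indicator (or letting SPSS zero one out) loses nothing.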
Redundant parameters, interpretation of Estimates of fixed effects in SPSS
CC BY-SA 4.0
null
2023-03-08T12:34:56.933
2023-03-08T13:15:01.323
null
null
382677
[ "mixed-model", "spss", "redundancy-analysis" ]
608766
1
null
null
0
55
this question is similar to another on this platform, but that one addresses the general question of rooting the variance, while this question is about rooting d.f. Furthermore, that question has an elaborate answer which is too complex for me to interpret. So, generally to get the std.dev., the square root of the variance is calculated. In doing so, this includes taking the root of the degrees of freedom (n-1). This seems very strange to me. Maybe I'm just not thinking clearly about this. So, std.dev = √(variation/d.f.) I'd think this ought to be: std.dev = √(variation) / d.f. Why is this not the case? Why do we root the degrees of freedom to get to the std.dev (which, afaik, should reflect the average deviation from the mean)? Edit with a bit more info: When calculating any average, we take the sum total of all values, and divide by the amount of values. Now, for std.dev. commonly d.f. is taken instead of n, which is also useful for my use case. So, when calculating the average distance from the mean, I'd expect to first get the total distance from the mean, and then divide that by d.f. to get the average. As a workaround for the problem with negative/positive values in calculating a total distance to the mean, the total distance would be calculated as the variation (i.e. the sum of squares). But then, to go back to the same unit of measurement that we started with, the root of this sum of squares should be taken. Lastly, we divide this total distance from the mean by d.f. If we follow the existing formula for std.dev., then we use std.dev = √(variation)/√(d.f.). I don't get why we also take the root of d.f. here. Compared to a basic calculation of a mean, I'd compare the existing std.dev. formula to: √(sum of values^2)/√(number of values), which seems very strange to me, and I can't really wrap my head around why this is done.
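The crux is that the division by d.f. happens inside the square root: we first average the squared deviations (divide SS by n−1), and only then take the root of that whole average, which is why √(n−1) appears rather than n−1. A quick numeric check with made-up data, compared against the library value:

```python
import math
import statistics

x = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # made-up data
m = statistics.fmean(x)
ss = sum((xi - m) ** 2 for xi in x)             # the "variation" (sum of squares)
n = len(x)

sd_correct = math.sqrt(ss / (n - 1))            # = sqrt(SS) / sqrt(n - 1); matches statistics.stdev(x)
sd_wrong = math.sqrt(ss) / (n - 1)              # the formula proposed in the question
```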
In getting to the standard deviation, why do we take the square root of d.f.?
CC BY-SA 4.0
null
2023-03-08T13:09:00.017
2023-03-10T14:12:18.313
2023-03-10T14:12:18.313
382680
382680
[ "variance", "standard-deviation", "dispersion" ]
608767
2
null
608764
1
null
This is not an error. SPSS will always take one of the dummy levels as the starting point, and this is always the highest value of your dummy variable. In this case the starting point is Random1 = 1. Here the beta for random1 = 0 is the change in BDI when compared to random1 = 1, when controlling for all the other variables and interactions. The same applies to the interactions with a dummy variable. You can recode variables if you would like to compare the other way around, or if you have three groups. I also suggest not adding too many interactions if your data set is not that big. The std. error is almost as big as your beta, indicating the analysis might be underpowered.
null
CC BY-SA 4.0
null
2023-03-08T13:13:29.130
2023-03-08T13:15:01.323
2023-03-08T13:15:01.323
382664
382664
null
608768
1
null
null
1
30
As a start, I appreciate any advice you can offer me with this question, and want to thank you for your assistance in advance. I have A-level maths knowledge, although did not like statistics. Statistics are not in my daily activities and this is the first time I have attempted to conduct statistical analysis in over 4 years - so it will be clear that I am not entirely sure what I am doing! I have conducted a questionnaire which was completed pre- and post- a mandatory activity (for all participants) in their education and would like to assess whether there has been a statistically significant change in rating of statements using Likert data: 1 “Strongly disagree” 2 “Somewhat disagree” 3 “Neither agree nor disagree” 4 “Somewhat agree” 5 “Strongly agree” I hypothesise that post-intervention there should be an increase in confidence (and therefore stronger agreement with the statements). Firstly, I believe that my Likert scale data should be analysed as ordinal rather than interval data, as the intervals are not even between "strongly, somewhat, neither". Secondly, is there a dependent variable? Is the Likert data the 'outcome' data that is being measured, and therefore dependent on the intervention that was carried out (even though everyone received it)? Thirdly, does the fact that there is a different number of respondents pre- and post-intervention matter, and that they are not necessarily the same participants, as long as I recognise this in my methods? I cannot say that the same people responded, so can I really say that the intervention made a significant difference, or am I rather commenting on the general trend? Before writing this I thought that: - I am testing for correlation (within a group) - There is no dependent variable - My data is continuous, ordered & categorical data - And therefore should be analysed using Spearman's rho correlation.
However, when researching further to write this question, I now believe that: - I am checking for difference (between groups) - I have an ordinal dependent variable. - I have a repeated measurement (pre- and post-intervention) - I only have 1 group - And therefore should use a Mann-Whitney test. Any assistance with this would be hugely appreciated. I hope it makes sense - please ask for more information if necessary.
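For reference, the Mann–Whitney U statistic the question arrives at has a simple form: count, over all cross-group pairs, how often a pre rating is below a post rating, with ties counted as ½. A minimal sketch with made-up Likert ratings (this illustrates the statistic's mechanics, not whether it is the right test here):

```python
def mann_whitney_u(a, b):
    # U = number of pairs (x, y) with x < y; ties contribute 1/2
    return sum(1.0 if x < y else 0.5 if x == y else 0.0
               for x in a for y in b)

pre = [2, 3, 3, 4]     # hypothetical pre-intervention ratings
post = [3, 4, 5, 5]    # hypothetical post-intervention ratings
u = mann_whitney_u(pre, post)   # out of len(pre) * len(post) = 16 pairs
```

In practice one would use a library routine, which also supplies the p-value from the U distribution.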
Am I checking for correlation or difference? If so, how shall I do it?
CC BY-SA 4.0
null
2023-03-08T13:32:09.470
2023-03-14T02:11:34.233
2023-03-14T02:11:34.233
11887
382681
[ "hypothesis-testing", "likert" ]
608769
1
null
null
2
90
I am working on a binomial mixed model. I want to analyse the use of a certain construction by students. My response is CONSTRUCTION (two levels: THAT/NO_THAT). My predictors are both categorical (type of INSTRUCTION with five levels (A, B, C, D, E) and L2 with three levels (Italian, Spanish, French)). I am using lme4 and sum coding (contr.sum) because I do not want one level of a variable to be the reference level. I started with a maximal model and used LRTs (anova()) to remove non-significant predictors. This is my final model: ``` m1 <- glmer(CONSTRUCTION ~ L2 * INSTRUCTION + (1|ID), data=data2, family="binomial", glmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 100000))) ``` I am wondering what the p-values given by summary() for the individual regressors exactly mean. I have seen several publications in which these p-values are reported and taken as criteria to decide whether a regressor is significant. [](https://i.stack.imgur.com/ZcRGs.png) In my case this would mean that regarding the interaction L2 * INSTRUCTION only the term L22:INSTRUCTION2 is significant. However, how do I know if the levels that are not shown in the summary (e.g. L23:INSTRUCTION1) are significant? Is it really correct to use these p-values to evaluate the significance of individual regressors? What is the difference between these p-values shown by summary() and e.g. the following output from the emmeans package: ``` emm_1 <- emmeans(m1, "INSTRUCTION", by = "L2") pairs(emm_1) ``` [](https://i.stack.imgur.com/80Itl.png) Which p-values should be used to evaluate if the regressors of a model are significant? Thank you very much! Edit: Just to make sure that I understand the meaning of the p-values provided by summary() correctly: Does the p-value of a coefficient correspond to a test that tests if the coefficient is significantly different from the grand mean (i.e. for L21, if the first level of L2 (Italian) is significantly different from 0, i.e. the grand mean)?
And how does that meaning of the p-value change if I use treatment coding instead? Then this would be the output of summary(): [](https://i.stack.imgur.com/VN0Tb.jpg) Do significant p-values here indicate that the coefficients are significantly different from the reference level (i.e. for L2Spanish it corresponds to the null hypothesis that the difference in the log odds of using NO_THAT between Spanish and Italian (in INSTRUCTION A) is equal to 0)? And for the p-value of the interaction L2Spanish:INSTRUCTIONB, is the null hypothesis that for students with L2 Spanish there is no difference in the log odds of using NO_THAT between INSTRUCTION A (the reference level) and INSTRUCTION B?
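On the mechanics behind those summary() p-values: for a glmer fit each one is a Wald test, i.e. the estimate divided by its standard error gives a z statistic, and the two-sided p-value comes from the standard normal CDF. That arithmetic can be reproduced from any coefficient table (sketch in Python; the estimate and SE below are placeholders, not the asker's output):

```python
import math

def wald_p(estimate, se):
    """Two-sided Wald p-value from a coefficient and its standard error."""
    z = estimate / se
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))   # standard normal CDF at |z|
    return 2 * (1 - phi)

p = wald_p(estimate=-0.98, se=0.50)   # placeholder numbers; z = -1.96, p ~ 0.05
```

The emmeans pairwise comparisons answer a different question (differences between predicted means on a chosen grid), which is why their p-values need not match the coefficient table.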
What is the difference between p-values in summary() and p-values given e.g. by emmeans?
CC BY-SA 4.0
null
2023-03-08T13:41:34.490
2023-03-09T20:10:18.323
2023-03-09T20:10:18.323
380499
380499
[ "regression", "mixed-model", "lme4-nlme", "p-value", "lsmeans" ]
608772
1
null
null
0
17
I'm in charge of a year-long, stratified sample where the population is estimated based on previous years but which is also quite dynamic, changing in unpredicted ways every year (furthermore, past population data are aggregated by year, making it impossible to know in advance if there are seasonal spikes or drops). Obtaining a sample response is very resource intensive, so only the bare minimum sample size is sought and the stakeholders are hyper-interested in monitoring the sample daily to ensure no single stratum subsample has 'taken off'. Sometimes a stratum subsample will exceed its annual estimated total early (e.g. Stratum #3 has hit its annual expectation of 200 samples even though we're only halfway into the year). When this happens, they immediately want to shut down all sampling for that stratum, even if other strata are lagging behind expectations (thus putting at risk obtaining the overall sample size goal). What is the proper way to monitor and govern an in-progress sample like this? Are there any objective, statistical 'threshold' tests to govern such decisions (e.g. Stratum #2 has reached 125% of the estimated annual stratum subsample before the 8-month mark in the sample, therefore further sampling in that stratum should be shut down)?
When monitoring a sample, how to gauge when a substratum significantly exceeds sample size estimates and should therefore have its rates adjusted?
CC BY-SA 4.0
null
2023-03-08T14:09:44.063
2023-03-08T14:09:44.063
null
null
64572
[ "sampling", "sample-size", "sample", "stratification", "monitoring" ]
608773
1
null
null
2
187
I'm trying to understand the differences between RNNs and State Space Models (SSMs). I know that SSMs can take on different definitions depending on who you ask, but here I define it as in [Learning Latent Dynamics for Planning from Pixels](https://arxiv.org/pdf/1811.04551.pdf). Although this paper is about a new method, they do compare the graphical models associated with RNNs vs. SSMs: [](https://i.stack.imgur.com/78Bg3.png) Beneath this diagram they say: " Circles represent stochastic variables and squares deterministic variables. Solid lines denote the generative process and dashed lines the inference model". I understand that RNNs have deterministic states unlike SSMs (as they define them here). However, I often see RNNs visualized by an input/output diagram (unfolded RNN diagram) like so: [](https://i.stack.imgur.com/1iY87.png) Ignoring the actions $a_t$ and rewards $r_t$ (the paper is an RL paper), considering $x_t$ to be $o_t$, and supposing that $y_t$ is a prediction for the next state $h_{t+1}$, it seems as though the canonical diagram of an RNN is simply the "inference model" of a state space model where the states are deterministic. So can we say that the relationship between RNN and SSMs is that RNNs directly model the "inference model" $p(h_{t+1}|h_t, x_t)$, while SSMs model the generative model (transition model + observation model) and then "do inference" by computing the necessary posteriors? note: I've been looking over these diagrams for a bit so the differing notation doesn't bother me. However, I'm not sure if it is very annoying as a viewer. If so, please comment and I will try and edit.
Recurrent neural networks vs. State space models
CC BY-SA 4.0
null
2023-03-08T14:25:23.740
2023-03-08T17:56:24.460
2023-03-08T17:56:24.460
381061
381061
[ "reinforcement-learning", "recurrent-neural-network", "state-space-models" ]
608774
1
null
null
0
27
I was wondering how I would report this? My ANOVA is significant, but when I look at the model and the individual predictors, none of them is significant. [](https://i.stack.imgur.com/hbP82.png)
ANOVA is significant but coefficients are not?
CC BY-SA 4.0
null
2023-03-08T14:31:10.653
2023-03-13T18:45:41.647
null
null
382685
[ "regression", "anova", "regression-coefficients" ]
608775
1
null
null
0
41
I'm currently trying to decide what test to run in SPSS for my data. I am comparing the effects of 3 different interventions on trauma scores. I have both pre- and post-scores. I have created a difference score variable and will use this to run a one-way ANOVA. I'm trying to run normality tests to see if I instead need to run the non-parametric version, but I'm unsure which statistics I should use to test normality. Should I analyse the normality of the pre- and post-scores, or should I analyse the normality of the obtained difference score? I ask as I have run all 3 and have obtained results that suggest the data is normally distributed for all my conditions in pre and post. However, when testing normality using the difference score, one of my conditions (drug intervention) appears to not be normally distributed. Would this mean I need to run a non-parametric version?
Testing for normality of difference between pre-post scores
CC BY-SA 4.0
null
2023-03-08T14:31:55.557
2023-03-08T15:10:28.457
null
null
382686
[ "anova", "nonparametric", "normality-assumption", "pre-post-comparison" ]
608777
1
null
null
1
52
The max entropy method is a way of deriving a probability distribution given only the information you know about how the data is distributed and nothing more. For example, the normal distribution can be derived by constraining the mean and variance to be fixed. This is a very intuitive and principled way of choosing how to model your data. I'm trying to wrap my head around the constraints for the gamma distribution, which do not seem as intuitive. These constraints are $$\mathbf E[x] = k\theta,$$ $$\mathbf E[\log(x)] = \psi(k) + \log(\theta),$$ where $k$ is the shape parameter, $\theta$ is the scale parameter, and $\psi$ is the digamma function. There is also the implicit constraint that $x \ge 0$. One possibility I see here is that perhaps it's less about the specifics of what values these constraints take, and more about the fact that they are fixed -- the values are circumstantial. But then is there any intuition behind these values? If the specific values do matter, under what circumstances would you assume the mean to be a product of two parameters? Given that the scale parameter is included, are we implicitly making a constraint on the variance? Also, I've noticed that the second term looks like one of the constraints for the max entropy derivation that generates the beta distribution. What is the connection there? Any insight here would be helpful. I've scoured the web and have found little to help me out here.
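For concreteness, the standard Lagrange-multiplier argument (sketched here, following the usual maximum-entropy derivation) supports the "values are circumstantial" reading: fixing $\mathbf E[x]$ and $\mathbf E[\log x]$ on $x \ge 0$ forces the maximum-entropy density into the exponential-family form

```latex
p(x) \;\propto\; \exp\!\left(-\lambda_1 x + \lambda_2 \log x\right)
     \;=\; x^{\lambda_2}\, e^{-\lambda_1 x},
     \qquad x \ge 0,
```

which is the gamma density with $k = \lambda_2 + 1$ and $\theta = 1/\lambda_1$. So the specific values $k\theta$ and $\psi(k)+\log\theta$ are not chosen in advance: they are simply whatever the two constrained moments happen to be, re-expressed in terms of the resulting parameters. The connection to the beta distribution is the shared $\mathbf E[\log x]$ constraint: the beta arises as the maximum-entropy distribution on $[0,1]$ when $\mathbf E[\log x]$ and $\mathbf E[\log(1-x)]$ are fixed.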
What is the reasoning behind max entropy constraints for the gamma distribution?
CC BY-SA 4.0
null
2023-03-08T14:58:16.940
2023-03-09T18:46:21.937
null
null
250158
[ "distributions", "gamma-distribution", "maximum-entropy" ]
608779
2
null
608775
0
null
The normality assumption concerns the residuals of a regression analysis, not the unmodeled data, so looking at the changes within group, or at absolute numbers, or anything like that does not make sense. There are plenty of previous questions addressing that testing for normality on the data under analysis is problematic (e.g. type I error inflation, if the analysis is changed in response to the assessment) and that it is better to assess the appropriateness of a normality assumption on previous data (though not with a null hypothesis test, but rather by visual inspection of plots of the residuals). Also note that randomized trials (I assume this is one, since you plan to do a simple group comparison) are reasonably robust to small/moderate deviations from normal residuals, esp. for larger sample sizes. If this study is not a randomized trial, then a simple comparison of treatment groups is of course completely inappropriate, and the normality assumption would have to be looked at in the context of an appropriate analysis. E.g. if you are doing matching/stratifying for propensity scores, then the residuals of a regression with matching/stratification would be one thing one could look at.
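To connect this to the question: for a one-way comparison of difference scores, the model residuals are just each score minus its own group mean, so "normality of residuals" means looking at the within-group deviations pooled across groups, not at each group's raw scores separately. A minimal sketch (made-up difference scores, Python rather than SPSS):

```python
from statistics import fmean

groups = {                       # hypothetical pre-post difference scores per intervention
    "drug":    [1.0, 2.0, 0.5, 1.5],
    "therapy": [0.0, 1.0, -0.5, 0.5],
}
# Residual = difference score minus its group mean; pool these across groups
# and inspect them (e.g. with a QQ plot) rather than testing each group's raw scores.
residuals = [d - fmean(ds) for ds in groups.values() for d in ds]
```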
null
CC BY-SA 4.0
null
2023-03-08T15:10:28.457
2023-03-08T15:10:28.457
null
null
86652
null
608781
1
null
null
0
31
I'm attempting to implement an adaptive kernel Kalman filter following this paper [https://arxiv.org/abs/2203.08300](https://arxiv.org/abs/2203.08300), but I'm struggling to find a method of evaluating the feature mapping for a polynomial kernel $K(x,y)=(x^{T}y+c)^{d}$ when the input (state) dimension is larger than 2. For the scenarios I intend to use the filter for, the state could be $\mathbf{x}\in\mathbb{R}^{10}$, which leads to some fairly high-dimensional feature mappings. The idea is to map the filter state to a kernel mean embedding to perform the filter update. The notes I've found have feature mappings $\phi(x)$ of different dimensions for the homogeneous quadratic kernel, but I want to see if there is a simpler method of finding the explicit feature mapping. My question is: is there a formula for generating the feature map for the polynomial kernel given an n-dimensional input and a kernel of degree d? Probably a bit of a newbie question, but cheers for any help in advance!
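There is a standard construction (the general recipe for polynomial kernels, not specific to the paper): fold the constant in as an extra coordinate $\sqrt{c}$, then take one feature per degree-$d$ monomial of the augmented vector, each weighted by the square root of its multinomial coefficient. The multinomial theorem then gives $\phi(x)\cdot\phi(y) = (x^{T}y+c)^{d}$ exactly, and the map has $\binom{n+d}{d}$ coordinates for an $n$-dimensional input. A stdlib sketch with made-up vectors:

```python
import itertools
import math

def poly_feature_map(x, c=1.0, d=2):
    """Explicit feature map phi with phi(x).phi(y) = (x.y + c)^d.

    Appending sqrt(c) as an extra coordinate reduces the inhomogeneous kernel
    to the homogeneous one; each feature is a degree-d monomial weighted by
    the square root of its multinomial coefficient.
    """
    z = list(x) + [math.sqrt(c)]
    n = len(z)
    feats = []
    for idx in itertools.combinations_with_replacement(range(n), d):
        counts = [idx.count(i) for i in range(n)]       # the multi-index alpha, |alpha| = d
        coef = math.factorial(d)
        for a in counts:
            coef //= math.factorial(a)                  # multinomial coefficient d!/alpha!
        feats.append(math.sqrt(coef) * math.prod(z[i] for i in idx))
    return feats

x, y = [1.0, 2.0, -1.0], [0.5, 1.0, 3.0]                # made-up inputs
c, d = 1.0, 3
lhs = sum(a * b for a, b in zip(poly_feature_map(x, c, d), poly_feature_map(y, c, d)))
rhs = (sum(a * b for a, b in zip(x, y)) + c) ** d       # lhs == rhs up to rounding
```

For the $\mathbb{R}^{10}$ case with, say, $d=3$, this gives $\binom{13}{3}=286$ features, which is still quite manageable.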
Method of evaluating the feature map of a polynomial kernel feature mapping
CC BY-SA 4.0
null
2023-03-08T15:26:15.193
2023-03-09T08:46:26.657
2023-03-09T08:46:26.657
382694
382694
[ "kernel-trick", "kalman-filter", "kernel-mean-embedding" ]
608782
1
null
null
0
21
I have two random variables $X_1$ and $X_2$. They are dependent. How can I construct (with which formula) a hypothesis test where $H_0: \textrm{mean}(X_1|Y) \leq 2\times \textrm{mean}(X_2|Y)$?
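One common route, assuming the observations are paired and a mean-based test is appropriate (a sketch of one possibility, not a definitive answer): define $D = X_1 - 2X_2$, so the hypothesis becomes $H_0: \mathbf E[D] \le 0$ vs. $H_1: \mathbf E[D] > 0$, and apply a one-sided one-sample t-test to the $D$ values; the pairing automatically absorbs the dependence between $X_1$ and $X_2$. Conditioning on $Y$ would mean doing this within strata of $Y$, which the sketch below (with made-up data) ignores:

```python
import math
import statistics

x1 = [5.1, 6.0, 4.8, 7.2]        # made-up paired observations
x2 = [2.0, 2.5, 2.6, 3.0]
d = [a - 2 * b for a, b in zip(x1, x2)]   # D = X1 - 2*X2

n = len(d)
t = statistics.fmean(d) / (statistics.stdev(d) / math.sqrt(n))
# Reject H0: E[D] <= 0 when t exceeds the upper t_{n-1} critical value.
```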
Hypothesis test of dependent variables
CC BY-SA 4.0
null
2023-03-08T15:59:39.043
2023-03-08T16:08:07.360
2023-03-08T16:08:07.360
362671
355603
[ "hypothesis-testing" ]
608785
1
null
null
0
14
I know that input data is not of much use, but I still want to understand how we are going to split the data (in the decision tree) in this case (because all the features should end up giving the same amount of information gain). Any thoughts on this?
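The premise can be made concrete with the entropy arithmetic: if every label is identical, the node entropy is already 0, so every candidate split has zero information gain, and a standard implementation simply returns a single leaf predicting that label. A stdlib sketch:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label multiset."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

y = [1] * 10    # all true labels identical
# entropy(y) == 0.0, so information gain = 0 - (weighted child entropies) <= 0
# for every possible split: no split helps, and the tree is a single leaf.
```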
What's the behaviour of decision tree classification if the Y (i.e true labels) have the same values
CC BY-SA 4.0
null
2023-03-08T17:02:01.203
2023-03-08T17:02:01.203
null
null
292848
[ "classification", "cart" ]
608786
1
null
null
0
19
I feel this is probably a stupid question; I have 3 time series (x, y, z). Time series x and y have a more or less time^2 pattern (i.e., curvilinear effect of time), whereas time series z is basically a mirror image of it (i.e., curvilinear in the opposite direction - so when x and y move up, z moves down and vice versa). I did run ADF tests, which did not show evidence for unit roots. I want to detrend this data. I can probably run it with a lowess or loess to get rid of the time effect. My question is: do I need to use the same time trend (e.g., loess) to detrend each time series (i.e., a shared underlying time trend)? Or can I fit a loess to each individual series? I'm fine with reverse scoring the z time series so it basically follows the same pattern, if that makes things easier. My goal is to investigate the lagged effects of x on z, and of z on y.
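On the mechanics of the per-series option: fit a trend to each series and subtract it. The sketch below substitutes a quadratic fit for the loess (since the series are described as roughly time²); a loess would slot in the same way. The data are made up:

```python
import numpy as np

t = np.arange(100, dtype=float)
z = 5.0 - 0.002 * (t - 50.0) ** 2        # made-up series with a curvilinear time trend

coef = np.polyfit(t, z, deg=2)           # per-series quadratic trend (a loess could be used instead)
detrended = z - np.polyval(coef, t)      # what's left after removing that series' own trend
```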
Detrending multiple predictors
CC BY-SA 4.0
null
2023-03-08T17:12:54.873
2023-03-08T17:12:54.873
null
null
378010
[ "time-series", "arima", "trend", "differencing", "armax" ]
608787
1
null
null
2
113
For any distribution, we can subtract two random variables and find the distribution of the difference. But what about the reverse? Is the statement below true in general, or under some condition on the distribution? For any probability distribution function, there exists another distribution such that the difference of two independent random variables from the second distribution has the first distribution
Existence of distribution that its difference of two iid RVs becomes a desired distribution
CC BY-SA 4.0
null
2023-03-08T17:13:14.767
2023-03-09T07:12:55.157
2023-03-08T18:13:12.633
43969
43969
[ "probability", "distributions", "random-variable" ]
608788
2
null
592572
1
null
One obvious solution (esp. if you only need to calculate this once) is to analyze your data with a mixed effects model, where there is a random player effect. Each match has two players, so you need a multi-membership (always exactly 2 memberships) version of such a model. See e.g. [this blog post](https://bbolker.github.io/mixedmodels-misc/notes/multimember.html) on how to fit such a model in R, but it's even easier to do in something like the `brms` R package: ``` library(tidyverse) library(brms) library(posterior) library(patchwork) set.seed(1234) data <- tibble(player1 = sample(x=1L:20L, size=500, replace = T), player2 = sample(x=1L:20L, size=500, replace = T), score = rnorm(n=500,mean=player1-player2,sd=1), w1=1L, w2=-1L) %>% filter(player1 != player2) p1 <- data %>% ggplot(aes(x=player1, y=score, col=factor(player2))) + theme_bw(base_size=18) + geom_jitter(height=0, width=0.1, alpha=0.5) + ylab("Observed score difference") # multi-membership model with two members per group and equal weights fit1 <- brm(score ~ 0 + (1|mm(player1, player2, weights=cbind(w1, w2), scale=FALSE)), family = gaussian(), data = data) summary(fit1) p2 <- fit1 %>% as_draws_df() %>% pivot_longer(cols=everything(), names_to="parameter", values_to="value") %>% filter(str_detect(parameter, "r_mmplayer")) %>% mutate(playerno = as.integer(str_remove_all(str_extract(parameter, "\\[[0-9]+\\,"), "\\[|\\,"))) %>% ggplot(aes(x=playerno, y=value)) + theme_bw(base_size=18) + geom_jitter(width=0.1, height=0, alpha=0.3) + ylab("Estimated player skill") p1 + p2 ``` [](https://i.stack.imgur.com/MIdO5.png) This version fits this as a random effects model, which induces shrinkage on the skill of players with very little data, which may often make sense. If you don't like that, just set a prior that fixes the between-player standard deviation to a very large number (`prior = prior(class="sd", constant(1000))` or the like). Of course, one concern could be that the residuals would not be normal, e.g. 
that there would be more large scores than one would expect, then a Student-t error could be chosen. Or perhaps you worry that in very one-sided matches the better player tries less hard (for which they should perhaps have their estimated skill reduced, but you might also feel that you'd not want to do that). In such a setting, you could introduce a term based on the estimated skill differential that says that the expected score is reduced based on the difference in skill (that might require a more bespoke model, I guess) or something like that. You could also add an intercept to the model, if being player 1 is an advantage vs. being player 2 (e.g. like in chess where it's better to be white rather than black).
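Stripped of the Bayesian machinery, the core of the model above is just: expected score difference = skill(player1) − skill(player2). That can be sketched as an ordinary least-squares fit on a ±1 design matrix (made-up, noise-free data; skills are only identifiable up to an additive constant, hence the centering):

```python
import numpy as np

true_skill = np.array([0.0, 1.0, 2.5, -1.0])       # hypothetical player skills
matches = [(i, j) for i in range(4) for j in range(4) if i != j]

X = np.zeros((len(matches), 4))
y = np.zeros(len(matches))
for r, (i, j) in enumerate(matches):
    X[r, i], X[r, j] = 1.0, -1.0                   # +1 for player1, -1 for player2
    y[r] = true_skill[i] - true_skill[j]           # observed score difference

est, *_ = np.linalg.lstsq(X, y, rcond=None)        # minimum-norm solution of the rank-deficient system
est -= est.mean()                                  # pin down the free additive constant
```

Unlike the random-effects fit, this applies no shrinkage, so it is closest to the "very large between-player SD" variant described above.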
null
CC BY-SA 4.0
null
2023-03-08T17:40:46.830
2023-03-08T17:40:46.830
null
null
86652
null
608789
2
null
608787
4
null
This is not the case if the two variables from the second distribution are independent. For example, the uniform distribution over $[-1,1]$ cannot be expressed as the difference of two i.i.d. random variables. To see this, consider the characteristic function of $Z\sim U[-1,1]$: $$\varphi_Z(\theta) = \frac{\sin \theta}{\theta}$$ If $X$ and $Y$ are i.i.d. then $$\varphi_{X-Y}(\theta)=\varphi_X(\theta)\varphi_Y(-\theta)=\varphi_X(\theta)\overline{\varphi_Y(\theta)}=\Vert\varphi_X(\theta)\Vert^2$$ However, then $$\frac{\sin \theta}{\theta} =\Vert\varphi_X(\theta)\Vert^2,$$ which is a contradiction since the LHS is sometimes negative whilst the RHS is nonnegative. This example is exercise E16.1 from Williams's Probability with Martingales.
null
CC BY-SA 4.0
null
2023-03-08T17:58:12.857
2023-03-09T07:12:55.157
2023-03-09T07:12:55.157
362671
355731
null
608790
2
null
608663
3
null
The CMP distribution generalizes the Poisson distribution. It is a discrete distribution on the natural numbers $0,1,2,\ldots$ with probabilities $$\Pr(x;\lambda,\nu)\ \propto\ \frac{\lambda^x}{\left(x!\right)^\nu}$$ where $\lambda \gt 0$ and $\nu \ge 0$ are its parameters. When $\nu = 1,$ this is the Poisson distribution with parameter $\lambda.$ Here are some examples, plotted as probability mass functions (bar heights equal the probabilities): [](https://i.stack.imgur.com/xi0Jv.png) The constant of proportionality for the probabilities is not easily computed, especially when $\nu$ is not integral. The challenging cases to simulate are for $\nu\ll 1,$ as I will show with an example. Here is a plot of the counts in a simulation of a million iid values from the CMP distribution with parameters $\lambda = 50, \nu = 1/4:$ [](https://i.stack.imgur.com/qfroA.png) The red curve plots the expected counts (it's proportional to the underlying probabilities). The range of the simulated values is from less than 6.23 million to over 6.27 million, comprising over 30,000 distinct values. Within this range the individual probabilities range by five orders of magnitude from almost $0.000\,08$ to less than $0.000\,000\,000\,1.$ Nevertheless, this sample was produced in a fraction of a second using the general function `s` posted at [https://stats.stackexchange.com/a/606853/919](https://stats.stackexchange.com/a/606853/919). That algorithm scales well: generating a larger sample of $10^{30}$ values required only one more second. The computation time depends primarily on the expected range of the sample, not on the sample size. The implementation challenge is twofold: first, to overcome the lack of a closed form for the normalizing constant; second, to handle potential numerical overflow. 
The raw values given by the formula are largest at the mode of $6\,250\,000,$ where they get as large as $$\Pr(6250000;50,1/4)\ \propto\ \frac{50^{6250000}}{(6250000!)^{1/4}} \approx 10^{678584.2}.$$ That will overflow your calculator ;-). The solution (coded in `R` as the `dCMPZ` and `dCMP` functions) given below overcomes these difficulties by searching from the mode (which occurs near $\lceil \lambda ^ {1/\nu}\rceil$) outwards into each tail until the individual probabilities are tiny compared to the probability at the mode. These are taken to be the entire support of the distribution, essentially truncating it at lower and upper limits beyond which your simulation is unlikely to produce any values. To overcome the numerical overflow problems, these probabilities are first divided by the modal probability (the largest possible one) and then summed up -- and all this is done on a logarithmic scale. Finally, if you need an iid sequence of values for the simulation, rather than a tabulation of the frequencies of its values, it's straightforward to convert this tabulated output into the individual values with their multiplicities and randomly permute them. Just don't try this with a sample of size $10^{30}$! --- ``` # # Un-normalized CMP probability at `x`. # Requires lambda > 0, nu >= 0. # dCMP <- function(x, lambda = 1, nu = 1, log.p = FALSE) { q <- x * log(lambda) - nu * lfactorial(x) if(!isTRUE(log.p)) q <- exp(q) q } # # Return essentially all the probabilities, ignoring probabilities less than `tol` # compared to the mode. `n.max` assures the function won't hang up when the # search is too long: it limits the range of the search. # dCMPZ <- function(lambda = 1, nu = 1, tol = 1e-16, n.max = 1e6) { m <- ceiling(lambda ^ (1/nu)) # A mode + 1 p.max <- dCMP(m, lambda, nu, log.p = TRUE) if (is.infinite(p.max)) stop("CMP parameters cause logarithmic overflow.") # # Search into the upper tail. 
# It proceeds in blocks of values of length `stride` to capitalize on # vectorized computation. These blocks are (asymptotically in lambda) # one SD long. # v <- lambda^(1/nu) / nu * (1 + (nu^2 - 1)/(24 * nu^2) * lambda^(-2/nu)) if (is.na(v) | is.infinite(v) | v <= 0) v <- 1 stride <- min(ceiling(n.max/2), m + 1, ceiling(sqrt(v))) m.start <- m - 1 x <- m.start + seq_len(stride) p <- p1 <- dCMP(x, lambda, nu, log.p = TRUE) p.min <- p.max + log(tol) while(isTRUE(p1[stride] > p.min) && isTRUE(length(x) < n.max)) { m.start <- m.start + stride x1 <- m.start + seq_len(stride) p1 <- dCMP(x1, lambda, nu, log.p = TRUE) x <- c(x, x1) p <- c(p, p1) } if (length(x) >= n.max) warning("The problem is too large; a partial distribution is returned.") # # Compute the lower tail. # This assumes all the CMP distributions have positive skewness, implying # it suffices to search no further out into the lower tail than into the # upper tail. # x0 <- seq(max(0, m - x[length(x)] + x[1]), x[1] - 1) x <- c(x0, x) p <- c(dCMP(x0, lambda, nu, log.p = TRUE), p) - p.max # # Order by increasing probability for maximum precision in the summation. # i <- order(p, decreasing = FALSE) x <- x[i] p <- exp(p[i]) # Convert from logs to relative probabilities (max is 1). p <- p / sum(p) # The normalized probability distribution. # # Discard probabilities reduced to zero by underflow in the normalization. # j <- which(p > 0) x <- x[j] p <- p[j] # # Return the values, their probabilities, and a record of the caller's # arguments to the function. # return(list(x = x, probs = p, tol = tol, lambda = lambda, nu = nu)) } # # Examples of dCMPZ. 
# theta <- list(c(1e-2, 1e-4), c(1, 2), c(5, 1/2), c(1e6, 3)) pars <- par(mfrow = c(2,2)) for (q in theta) with(dCMPZ(q[1], q[2], tol = 1e-6), plot(x, probs, type = "h", col = hsv(0.01, 1, 0.7), xlab = "Value", ylab = "Probability", main = bquote(paste(lambda == .(q[1]), ", ", nu == .(q[2]))))) par(pars) #==============================================================================# # # Sample from a discrete distribution on the values 1, 2, 3, ..., length(p). # Returns the *unordered* sample as a vector tabulation of the counts. # s <- function(m, p) { # Initialization. P <- rev(cumsum(rev(p))) k <- rep(0, length(p)) # The algorithm. for (i in seq_along(p)) { k[i] <- rbinom(1, m, p[i]/P[i]) m <- m - k[i] } k } #==============================================================================# # # Simulating from a CMP distribution. # set.seed(17) # For reproducibility n <- 1e6 # Sample size lambda <- 50; nu <- 1/4 system.time({ obj <- dCMPZ(lambda, nu) # Generate the full distribution k <- s(n, obj$probs) # Sample from it and summarize the sample }) i <- k > 0 # Remove zero counts plot(obj$x[i], k[i], col = gray(.9), xlab = "Value", ylab = "Count", main = bquote(paste("CMP Simulation for ", lambda == .(lambda), " and ", nu == .(nu)))) j <- order(obj$x) # Need to place values in order for drawing a curve! lines(obj$x[j], n * obj$probs[j], lwd = 2, col = hsv(0.01, 1, .9)) #==============================================================================# # # How to generate an ordered *iid* sample. # obj <- dCMPZ(5, 0.7) # Precompute the distribution k <- s(5000, obj$probs) # Sample from it x <- sample(rep(obj$x, k)) # Replicate each value and randomly permute the vector # -- Display a histogram of the sample to check. hist(x, freq = FALSE, breaks = seq(0, length(k)+1) - 1/2) j <- order(obj$x) lines(obj$x[j], obj$probs[j], lwd = 2, col = hsv(0.01, 1, .9)) ```
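The key subroutine `s`, which draws the count for each value from a binomial conditioned on the remaining mass, is language-agnostic. Here is a minimal Python translation (using Bernoulli sums instead of `rbinom`, so it is only practical for modest sample sizes):

```python
import random

def sample_counts(n, probs, rng=None):
    """Tabulate a sample of size n from a discrete distribution `probs`
    by successive conditional binomials, as in the R function `s`:
    k_i ~ Binomial(n_remaining, p_i / P_i), with P_i the upper-tail mass."""
    rng = rng or random.Random(17)
    counts = []
    tail = sum(probs)
    for p in probs:
        q = min(1.0, p / tail) if tail > 0 else 0.0
        k = sum(rng.random() < q for _ in range(n))  # a Binomial(n, q) draw
        counts.append(k)
        n -= k
        tail -= p
    if n > 0:
        counts[-1] += n  # guard against float round-off in the last cell
    return counts

counts = sample_counts(1000, [0.1, 0.2, 0.3, 0.4])
```

Because only counts are returned, the cost depends on the number of distinct values, not on the sample size, which is why the R version can "generate" astronomically large samples.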
null
CC BY-SA 4.0
null
2023-03-08T17:59:17.303
2023-03-08T17:59:17.303
null
null
919
null
608791
2
null
608609
2
null
In this case the warning is noise and can be ignored. The final model converges and the results look good. The warning comes from estimating a simplified model for starting parameters, which is not run until convergence (I had forgotten to silence it). In general, models that are strongly misspecified for the given data will often have convergence problems. This is the case for zero-inflated models if there is no zero inflation or even a deficit of zeros. Similarly, a NegativeBinomial model is unlikely to converge if there is no overdispersion relative to Poisson. In those cases it is better to estimate the basic model first and check that the deviation or misspecification is in the direction of the alternative model. The following illustrates that for zero-inflation. ``` import statsmodels.discrete.count_model as cm from statsmodels.discrete.discrete_model import Poisson rhs = 'AGE + COHES + ESTEEM + GRADES + SATTACH' mod = Poisson.from_formula("STRESS ~" + rhs, data) res = mod.fit() diag = res.get_diagnostic() ``` The hypothesis test that there is no zero-inflation (or deficit) is strongly rejected: ``` diag.test_poisson_zeroinflation().pvalue # 3.9959363587290166e-33 ``` As a quick diagnostic for how well the model fits the data, we can compare the observed frequency of each count with the expected probability of each count, both averaged over the estimation sample. We can see that more zeros are observed than predicted. ``` diag.plot_probs() ``` [](https://i.stack.imgur.com/IuE97.png) So, estimating the Poisson model indicates that there is an excess of zeros. Next, estimate the zero-inflated Poisson model. It shows that the estimation converged. Comparing observed and expected frequencies shows that the ZIP model fits the data much better than the Poisson model. ``` mod_zip = cm.ZeroInflatedPoisson.from_formula("STRESS ~" + rhs, data) res_zip = mod_zip.fit() res_zip.converged # True diag_zip = res_zip.get_diagnostic() diag_zip.plot_probs() ``` [](https://i.stack.imgur.com/cpK7T.png) (Note: `get_diagnostic` will be available in the next statsmodels release)
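The logic behind the first diagnostic, comparing the observed zero frequency with the zero frequency the fitted Poisson model implies, can be sketched without statsmodels (a crude check in the same spirit, not the actual score test that `test_poisson_zeroinflation` performs):

```python
import math

def zero_excess(y, mu_hat):
    """Observed minus model-implied frequency of zeros under Poisson fits:
    P(Y_i = 0) = exp(-mu_i), averaged over the sample."""
    observed = sum(1 for v in y if v == 0) / len(y)
    expected = sum(math.exp(-m) for m in mu_hat) / len(mu_hat)
    return observed - expected

# toy data with an obvious excess of zeros relative to Poisson(2)
y = [0] * 60 + [1] * 15 + [2] * 15 + [3] * 10
excess = zero_excess(y, [2.0] * len(y))  # clearly positive here
```

A clearly positive value points in the direction of the zero-inflated alternative, which is the check recommended above before fitting the more complicated model.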
null
CC BY-SA 4.0
null
2023-03-08T18:05:00.970
2023-03-08T18:05:00.970
null
null
14187
null
608794
1
null
null
1
11
I have a problem where I want to predict "when is the next action happening" based on the time. Example problem: Imagine you have a dataset of transactions per user; your goal is to predict when the user is going to do another transaction. Minimal example data: ``` transaction_id,user_id,last_transaction,next_transaction 0,1,'2018-02-04 22:31:25','2018-02-05 02:35:11' 1,1,'2018-02-05 02:35:11','2018-02-06 14:58:54' 2,2,'2018-02-06 15:50:50','2018-02-06 16:24:22' ``` The labels come directly from the data. 1. How would you frame this problem using machine learning if you want to be precise in minutes? I want the output to be a probability distribution over all future minutes (e.g. the next 7 days; predictions later than 7 days are not that important but have to be part of the problem). 2. After framing the problem, what kind of methods would you investigate/dive deeper into? At first I was thinking of using classification of time buckets, but this gives me no flexibility. If I have a time bucket 'next transaction is next day 16:00-17:00' and somebody asks what the probability of the next transaction between 15:45-16:45 is, I am unable to tell. Another idea of mine was regression trees, and I think what I want to achieve goes in the direction of probabilistic forecasting. Do you have other ideas that will direct me towards my goal?
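For illustration, if the gap to the next transaction is modeled with a continuous distribution, the probability of any interval (aligned with predefined buckets or not) follows by integration. A deliberately oversimplified sketch with an exponential gap model and placeholder numbers; a real solution would condition on features and likely use a richer family:

```python
import math

def interval_prob(rate_per_min, a, b):
    """P(next transaction occurs between a and b minutes from now)
    under an exponential gap model with the given rate."""
    return math.exp(-rate_per_min * a) - math.exp(-rate_per_min * b)

gaps_minutes = [244.0, 2184.0, 34.0]                  # toy inter-transaction gaps
rate = 1.0 / (sum(gaps_minutes) / len(gaps_minutes))  # MLE of the exponential rate
p = interval_prob(rate, 15 * 60, 16 * 60)             # any window, e.g. 15-16 h ahead
```

The point is only that a continuous-time model removes the bucket-alignment problem described above.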
Problem formulation of future timeframe prediction based on current time
CC BY-SA 4.0
null
2023-03-08T18:14:09.693
2023-03-08T18:14:09.693
null
null
206294
[ "machine-learning", "time-series", "probability", "forecasting", "probabilistic-forecasts" ]
608796
1
null
null
1
13
I made a lmer with an interaction between odour and concentration and random effect of date. I'm getting very high df in the emmeans output below. It's actually higher than the number of observations in my df (954). Why? Should I be concerned? Also, can I specify "side = '>'" when I do the dunnett trt.vs.ctrl instead of what I am doing here? ``` summary1 <- emmeans(mod1, specs = trt.vs.ctrl ~ conc|odour) summary(summary1) test(summary1, side = ">",adjust = "tukey", type = "response") $contrasts odour = A: contrast estimate SE df t.ratio p.value 0.01 - 0 0.290 0.252 292 1.149 0.3320 0.1 - 0 0.594 0.214 933 2.775 0.0084 1 - 0 0.506 0.256 876 1.980 0.0702 odour = B: contrast estimate SE df t.ratio p.value 0.01 - 0 0.770 0.253 379 3.046 0.0037 0.1 - 0 0.706 0.259 302 2.721 0.0103 1 - 0 0.852 0.265 328 3.214 0.0022 odour = C: contrast estimate SE df t.ratio p.value 0.01 - 0 0.328 0.184 718 1.782 0.1085 0.1 - 0 0.658 0.230 275 2.858 0.0069 1 - 0 0.759 0.282 332 2.694 0.0111 odour = D: contrast estimate SE df t.ratio p.value 0.01 - 0 0.536 0.250 245 2.146 0.0484 0.1 - 0 0.659 0.292 381 2.261 0.0361 1 - 0 0.284 0.283 283 1.004 0.4034 odour = E: contrast estimate SE df t.ratio p.value 0.01 - 0 0.157 0.215 910 0.733 0.5470 0.1 - 0 0.508 0.256 291 1.988 0.0699 1 - 0 0.693 0.361 440 1.918 0.0814 odour = F: contrast estimate SE df t.ratio p.value 0.01 - 0 0.496 0.220 899 2.251 0.0365 0.1 - 0 1.282 0.213 983 6.018 <.0001 1 - 0 1.144 0.232 978 4.922 <.0001 odour = G: contrast estimate SE df t.ratio p.value 0.01 - 0 0.312 0.242 263 1.288 0.2698 0.1 - 0 0.952 0.225 509 4.229 <.0001 1 - 0 0.997 0.204 473 4.881 <.0001 Degrees-of-freedom method: kenward-roger P value adjustment: sidak method for 3 tests P values are right-tailed ```
emmeans df > observations
CC BY-SA 4.0
null
2023-03-08T19:32:15.853
2023-03-08T19:55:34.150
2023-03-08T19:55:34.150
8013
347235
[ "lsmeans" ]
608797
1
null
null
0
43
I have a binary dependent variable and a continuous independent variable. When I apply logistic regression to build the prediction model, I use the p-value to know whether the independent variable is a significant predictor of the dependent variable. I would like to repeat the same study using two other ML techniques: classification and regression trees and random forests. For each of these techniques, how do I know whether the prediction model is significant?
How to know if classification and regression tree prediction model is significant?
CC BY-SA 4.0
null
2023-03-08T19:46:45.580
2023-03-09T11:51:44.233
null
null
382709
[ "regression", "logistic", "statistical-significance" ]
608798
1
609403
null
0
29
My team is conducting a pre/post-intervention comparison of health outcomes in treatment and control groups, and the question came up whether it's a good idea to condition/match on a deceased flag for subjects after the post-study period. Instinctively, I know contaminating your model with future information introduces bias when you're trying to predict a future event (or the causal effect of a treatment on a future event). However, in the context of causal diagrams, how do we know conditioning on a future variable is not good? Based on the diagram below, the Deceased variable is a collider which introduces unwanted bias if included by opening the Y -> D <- A backdoor path, is that correct? [](https://i.stack.imgur.com/yGVWX.png)
Conditioning on a future (post intervention/treatment and post outcome) event in a causal diagram?
CC BY-SA 4.0
null
2023-03-08T19:55:39.953
2023-03-14T12:37:38.547
null
null
13634
[ "treatment-effect", "causal-diagram", "collider", "backdoor-path" ]
608799
1
null
null
0
12
I am doing regression analysis for 2 different dependent variables, A and B. I want to see what explains A and what explains B, hypothesizing that it might be agricultural productivity. I then want to validate the explanatory model by later adding the wealth level of the households. I have x_1 to x_20: sociodemographic variables and household characteristics. I then proceed basically as follows. For A: A ~ x_1 + ... + x_20 + farm production; A ~ x_1 + ... + x_20 + segregated farm production; A ~ x_1 + ... + x_20 + farm production + income source; A ~ x_1 + ... + x_20 + farm production + segregated income source. For B: B ~ x_1 + ... + x_20 + farm production; B ~ x_1 + ... + x_20 + segregated farm production; B ~ x_1 + ... + x_20 + farm production + income source; B ~ x_1 + ... + x_20 + farm production + segregated income source. Should I perform a multiple-testing correction? If so, what counts as the number of repetitions: the number of models for each outcome variable, i.e. 4? Thank you
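If the eight fitted specifications (four per outcome) are treated as one family of tests, a standard familywise correction such as Holm's step-down can be applied to their p-values. A sketch with placeholder p-values:

```python
def holm_reject(pvals, alpha=0.05):
    """Holm step-down procedure: returns which hypotheses are rejected
    while controlling the family-wise error rate at alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one comparison fails, all larger p-values fail too
    return reject

# eight placeholder p-values, one per fitted model
flags = holm_reject([0.001, 0.20, 0.004, 0.03, 0.60, 0.008, 0.04, 0.15])
```

Whether the eight models really form one family (versus four per outcome) is the substantive question the post is asking.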
Do I need to do a multiple testing in my analysis?
CC BY-SA 4.0
null
2023-03-08T20:23:13.490
2023-03-08T20:23:13.490
null
null
382706
[ "regression", "p-value" ]
608800
1
null
null
1
17
I am working with a dataset that has several variables querying time spent using media each day. Each variable queries a different type of media use, with the same answer options. An example question is, "how much time did you spend today talking on your phone?" The answer options are: a) < 30 Min b) 30 Min - 1 Hr c) 1 - 2 Hrs d) 2 - 3 Hrs e) 3+ Hrs There are 7 questions in total. I would like to create a composite of these 7 ordinal variables. My idea was to take the midpoint of each answer and create an average. For example, if someone answered A, D, E, B, C, A, then D, that would translate to 0.25 hours, 2.5 hours, 3 hours, 0.75 hours, 1.5 hours, 0.25 hours, and 2.5 hours, with the average being 1.54 hours spent on media use per day. Is this considered an "appropriate" way to combine the variables? If not, is there a more appropriate way?
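The midpoint arithmetic described above is straightforward to check; for the example responses:

```python
# midpoint (in hours) assigned to each answer option
midpoint = {"a": 0.25, "b": 0.75, "c": 1.5, "d": 2.5, "e": 3.0}

answers = ["a", "d", "e", "b", "c", "a", "d"]   # the example in the question
hours = [midpoint[x] for x in answers]
average = sum(hours) / len(hours)               # 10.75 / 7, about 1.54 hours
```

This reproduces the 1.54 hours quoted in the question.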
Combining ordinal scales where each level is numerical range
CC BY-SA 4.0
null
2023-03-08T20:35:24.863
2023-03-10T20:47:50.213
2023-03-10T20:47:50.213
44269
382713
[ "ordinal-data", "composite" ]
608801
1
null
null
2
21
In stochastic convex optimization, if $F(w) = E[l(w^Tx,y)]$, where $l$ is a convex, L-Lipschitz loss function, it can be optimized using SGD such that $E[F(\bar{w}_T)] \leq \frac{1}{T} \sum_{t=1}^{T} E[F(w_t)] \leq \min_w F(w) + \frac{RL}{\sqrt{T}},$ assuming that $\|w\| \leq R$, when the step size is chosen to be $\eta = \frac{R}{L\sqrt{T}}$. SGD's convergence proof relies on access to gradient samples from an unknown distribution D. However, in practice, I have only finite train and test sets. Often, samples are drawn uniformly without replacement from the train set. My question is, what do you do when you use all available samples? Shuffle them and calculate another epoch? How does this affect convergence? In general, I understand the concepts of empirical and true risks, but I'm not clear on how they relate to finite sets in practice.
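For context, common practice is exactly what the question suggests: reshuffle the finite sample and run further epochs ("random reshuffling"). Note that the classical guarantee quoted above is for independent draws and does not directly cover reshuffling, though reshuffling typically performs at least as well empirically. A toy sketch on a one-dimensional convex problem (least squares for the mean, whose empirical-risk minimizer is the sample mean):

```python
import random

def sgd_mean(data, epochs, lr0, rng=None):
    """SGD with reshuffled epochs on f(w) = mean((w - x_i)^2 / 2),
    whose empirical minimizer is the sample mean."""
    rng = rng or random.Random(0)
    w, t = 0.0, 0
    for _ in range(epochs):
        order = data[:]
        rng.shuffle(order)                   # a fresh permutation each epoch
        for x in order:
            t += 1
            w -= (lr0 / t ** 0.5) * (w - x)  # step along -grad of (w - x)^2 / 2
    return w

w = sgd_mean([1.0, 2.0, 3.0, 4.0], epochs=200, lr0=0.5)  # drifts toward 2.5
```

With a finite set, SGD can only minimize the empirical risk; the gap to the true risk is then a generalization question, separate from optimization.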
SGD on finite datasets
CC BY-SA 4.0
null
2023-03-08T20:48:20.460
2023-03-08T20:48:20.460
null
null
343082
[ "stochastic-gradient-descent" ]
608802
1
608950
null
1
38
I am working on a survival analysis in R using the `survival` package (and trying to switch it to a Bayesian analysis but that may be a different question). Either way, I'd eventually like to incorporate time-varying covariates, so I'm trying to set the data up as a counting process. But I'm getting a little lost in "time". I have a set of wildlife telemetry data where we relocated individuals approximately once per month, though the day of the month varies. I've seen examples in the literature where a monthly relocation schedule means they are alive on the first, and stay alive through the month, and if they die that month, it is assigned to the last day of the month. So, I have my start and stop times set up according to months. For example: ``` id enter exit event year <dbl> <dbl> <dbl> <dbl> <dbl> 1 1 2 9 0 2012 2 1 0 8 0 2013 3 1 11 12 0 2013 4 1 0 1 0 2014 5 2 2 7 0 2012 6 3 2 9 0 2012 7 3 0 12 0 2013 ``` where 11=November, 12=December, etc., and the first row indicates we relocated that animal every month starting in March, but then lost it for a few months after hearing it in September. The timescale is also set up to be recurrent, so I've split individuals into rows according to year. Just as a starting point, I ran a Cox PH model on these data: ``` fit_1 <- coxph(Surv(enter, exit, event) ~ 1, data=dat) ``` And the survival estimates by the end of the year are much lower than published estimates. So, I'm wondering if my data aren't set up correctly. In the `survival` vignette for the counting process, the start and stop times are in days. ``` subject time1 time2 death creatinine 1 5 0 90 0 0.9 2 5 90 120 0 1.5 3 5 120 185 1 1.2 ``` So, my questions are: - Does it matter that I've set my data up according to months? Or do I need to somehow put these times into days? (E.g., should the first row have enter=60 and exit=274 to account for those months in days? But that seems wrong since we only relocated the animal for 1 day during a month.) 
- Am I getting low survival estimates because I'm using the counting process format without time-varying covariates? I've run the same dataset through survfit(Surv(exit, event) ~ 1, dat=dat) and coxph(Surv(exit, event) ~ 1, data=dat), which give similar results to each other, but are different than what is produced from Surv(enter, exit, event).
How to properly format monthly relocations for counting process version of Cox Proportional Hazards?
CC BY-SA 4.0
null
2023-03-08T21:06:18.760
2023-03-10T03:30:43.813
null
null
378482
[ "survival", "cox-model" ]
608803
1
null
null
0
88
For a dataset with mean = 67 and the standard deviation = 6, how could I find the p-value associated with the true population mean being 79 OR greater in R?
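One common reading of this (ignoring the unspecified sample size) is the upper-tail probability of a value of 79 or greater under a normal model with mean 67 and standard deviation 6; in R that is `pnorm(79, mean = 67, sd = 6, lower.tail = FALSE)`. The same number can be sanity-checked in Python:

```python
from statistics import NormalDist

# P(X >= 79) for X ~ Normal(67, 6); z = (79 - 67) / 6 = 2
p_value = 1 - NormalDist(mu=67, sigma=6).cdf(79)  # about 0.0228
```

If instead a sample mean is intended, the standard deviation must be replaced by the standard error, which requires the sample size.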
R function to calculate the p-value
CC BY-SA 4.0
null
2023-03-08T20:52:43.890
2023-03-08T21:09:30.770
null
null
null
[ "r", "p-value" ]
608804
1
null
null
0
5
As the title says, I want to predict the time (with a wide error range) of a main event’s first occurrence based on previous sub events that vary in importance. These previous ‘predictor’ events are different in nature; some are more indicative of the main event occurring. The scale of time would be months/years. I'm lucky to have access to a good set of data, with a bunch of observations (110k), so I believe I can make some sort of predictive model. It does not need to be pinpoint accurate; I would like to give an error range of 3-6 months on both sides. Can also optimize for precision. Here are some of the events, y-axis is frequency, x-axis is days before main event. Plotted w/ matplotlib. [](https://i.stack.imgur.com/1h8YI.png) This is only one subset of "sub events"; there will be some more similar on top. Also - a lot of the smaller "predictor" events generally follow a poisson distribution with "days before main event" on the x-axis: [](https://i.stack.imgur.com/a9UZs.png) ON TOP OF THAT (IMPORTANT), there is other sentiment data that would be significant here; each data point has a certain class that would cause variation in main event time. Some techniques I came across that might be right - but wanted to be very specific here with my problem: XGBoost model: binary prediction of whether the main event will occur for a company in the next six months (maybe have rolling window feature of how many events happened in previous 180 days, and then a special feature for more indicative features) Hidden Markov Model Poisson Point Process Would love any advice here, about which model might work, what other techniques to consider, but mainly about how to approach/model this.
Main event time prediction based on different sub events
CC BY-SA 4.0
null
2023-03-08T21:14:04.787
2023-03-08T21:14:04.787
null
null
382510
[ "time-series", "boosting", "markov-process", "hidden-markov-model", "poisson-process" ]
608805
1
null
null
2
29
I wanted to know whether one needs to check for violations of the ANOVA assumptions before running an ANOVA model on a big dataset (the dataset has 57 million rows). Thanks!
ANOVA Assumptions on Big data
CC BY-SA 4.0
null
2023-03-08T21:18:03.060
2023-03-08T21:18:03.060
null
null
382718
[ "r", "anova", "dataset", "assumptions", "large-data" ]
608806
1
null
null
3
34
Let's suppose I have three random variables: $X$, $Y$, and $Z$. I can use the [Spearman Rank Correlation](https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient?oldformat=true) to measure the degree of monotonic relation between any two of them. However, what if I want to compare the monotonic relation between $X$ and $Y$ to that of $X$ and $Z$ (i.e. answer the question: do $X$ and $Y$ have a stronger monotonic relation than $X$ and $Z$?). Is there a principled way to go about this?
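One principled route is to bootstrap the triples jointly, resampling whole rows so the dependence of both correlations on the shared variable $X$ is preserved, and then look at the distribution of $\rho_s(X,Y) - \rho_s(X,Z)$. A self-contained sketch (function names are mine; no tie handling, so it assumes effectively continuous data; dedicated tests for dependent correlations also exist):

```python
import random

def ranks(v):
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def spearman(a, b):
    return pearson(ranks(a), ranks(b))  # Pearson on the ranks

def boot_diff_ci(x, y, z, B=1000, rng=None):
    """Percentile bootstrap interval for rho_s(x, y) - rho_s(x, z)."""
    rng = rng or random.Random(7)
    n, diffs = len(x), []
    for _ in range(B):
        idx = [rng.randrange(n) for _ in range(n)]
        xs = [x[i] for i in idx]
        diffs.append(spearman(xs, [y[i] for i in idx])
                     - spearman(xs, [z[i] for i in idx]))
    diffs.sort()
    return diffs[int(0.025 * B)], diffs[int(0.975 * B)]
```

If the interval excludes zero, that is evidence one monotonic relation is stronger than the other.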
How to statistically compare two rank correlations?
CC BY-SA 4.0
null
2023-03-08T21:18:28.907
2023-03-08T21:18:28.907
null
null
349988
[ "correlation", "statistical-significance", "spearman-rho", "ranks" ]
608807
1
610732
null
2
122
I am running a Monte Carlo simulation that results in a heavy-tailed distribution. The image below shows the distribution of 1,200 runs of the Monte Carlo simulation, where each run consists of integrating over $M$ = 12,000 randomly drawn paths of $ \mathcal{X}_m = \left\{X_{n,m},s_{n,m}\right\}_{n=1}^N$. ![Monte Carlo Distribution](https://i.imgur.com/E64MT57.png) The quantity I am simulating is the expectation of a definite sum of exponentials, where I know the sum converges as $N \rightarrow \infty$. $$\mathbb{E} \left[ S\left(\mathcal{X}\right) \middle| X_1, s_1 \right] = \mathbb{E} \left[\exp\left( X_1\right) + \exp\left(X_1 + X_2\right) + \cdots + \exp\left(X_1 + X_2 + \cdots + X_N\right) \middle| X_1, s_1 \right]. $$ $X_n$ is a Markov-switching Autoregressive process with Gaussian errors: $$X_n = \alpha_{s_n} + \rho_{s_n} X_{n-1} + \sigma_{s_n} \epsilon_n,$$ where $\epsilon_n \sim N(0,1)$, $s_n \in \{1,2\} \sim \Pi$, and $\Pi$ is a transition matrix. For a single realization of what is presented in the histogram, I compute $S(\mathcal{X}_m)$ for each of the $M$ simulated paths. And then I simply take an average over the $M$ realizations, $$\mathbb{E}\left[S\left(\mathcal{X}\right) \middle| X_1, s_1 \right] \approx \frac{\sum_{m=1}^M S(\mathcal{X}_m)}{M}.$$ I believe Monte Carlo should result asymptotically in a normal distribution, but this resembles a log-normal distribution. How would I diagnose this issue? How should I change my simulation strategy? I've proved the sum converges. The proof boils down to the unconditional mean of $X_n$ being $<0$, and the sum inside the exponential going to negative infinity faster than one-half the unconditional variance. The instability occurs when the sum inside the exponential is greater than 0 for a few periods (the process is persistent): even though it eventually converges to $-\infty$, it can blow up temporarily.
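The behavior in the histogram is consistent with the CLT holding only asymptotically: when the summands $S(\mathcal{X}_m)$ are heavy-tailed, the distribution of the Monte Carlo average remains visibly skewed at any practical $M$. A toy illustration with lognormal summands, chosen only because averages of heavy-tailed draws approach normality very slowly:

```python
import math
import random

def mc_average(M, rng):
    """Average of M iid lognormal draws, mimicking one Monte Carlo run
    whose summands are heavy-tailed."""
    return sum(math.exp(rng.gauss(0.0, 2.0)) for _ in range(M)) / M

rng = random.Random(3)
runs = [mc_average(500, rng) for _ in range(200)]  # 200 replicated MC runs
# a histogram of `runs` stays right-skewed even though each run averages 500 draws
```

So a lognormal-looking histogram of run averages does not by itself indicate a bug; it may simply mean $M$ is far from the asymptotic regime for this summand distribution.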
Monte Carlo Integration Results in Heavy Tailed Distribution
CC BY-SA 4.0
null
2023-03-08T21:36:43.080
2023-03-25T22:40:15.490
2023-03-15T17:46:49.277
98420
98420
[ "monte-carlo", "heavy-tailed" ]
608808
1
null
null
1
36
How to compare the effect of receiving treatment on a dependent variable measured using a Likert scale (0-7), using such data: ``` # Group Rating at baseline Rating at endline 1 Treatment 2 5 2 Treatment 3 3 3 Control 4 7 4 Control 5 9 ```
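With only the structure shown, one simple summary (treating the ratings as interval-scaled, which a Likert scale may not justify) is the difference in mean change scores between the groups:

```python
rows = [("Treatment", 2, 5), ("Treatment", 3, 3),
        ("Control",   4, 7), ("Control",   5, 9)]

def mean_change(group):
    """Average endline-minus-baseline change within a group."""
    changes = [post - pre for g, pre, post in rows if g == group]
    return sum(changes) / len(changes)

did = mean_change("Treatment") - mean_change("Control")  # 1.5 - 3.5 = -2.0
```

Methods that respect the ordinal scale (e.g. ordinal regression on endline with baseline as a covariate) are the more defensible alternatives the answers may discuss.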
Estimating treatment effect on a variable measured on a Likert scale at two points in time: How?
CC BY-SA 4.0
null
2023-03-08T22:00:31.117
2023-03-09T14:42:07.320
2023-03-08T22:12:58.930
919
382720
[ "hypothesis-testing" ]
608809
2
null
605966
1
null
Based on the Conditional Likelihood defined in [https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.2517-6161.1996.tb02101.x](https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.2517-6161.1996.tb02101.x), the conditional log-likelihood for $\phi$ of $Y$ conditioned on $Z$, dropping terms that don't involve $\phi$, is: $$l_{Y|Z=z}(\mathbf y; \phi)=\log f(\mathbf y;\mu,\phi)-\log f_{z}(z;n\mu,n^{-1}\phi)\\ =\sum\log \Gamma(y_i+\phi^{-1}) - n \log\Gamma(\phi^{-1}) + \sum y_i\log\left({\phi\mu \over 1 + \phi\mu}\right) + n\phi^{-1}\log\left({1 \over 1 + \phi\mu}\right)\\ -\log\Gamma(z+n\phi^{-1})+\log\Gamma(n\phi^{-1})-z\log\left({\phi\mu \over 1 + \phi\mu}\right) - n\phi^{-1}\log\left({1 \over 1 + \phi\mu}\right)\\ =\left[\sum_{i=1}^{n}\log\Gamma(y_i+\phi^{-1})\right]+\log\Gamma(n\phi^{-1})-\log\Gamma(z+n\phi^{-1})-n\log\Gamma(\phi^{-1})$$
null
CC BY-SA 4.0
null
2023-03-08T22:07:28.033
2023-03-08T22:07:28.033
null
null
256516
null
608810
2
null
606440
1
null
The relation between spline estimators and kernel estimators is far from trivial. The "best starting reference" is likely: Lin et al. (2004) [Equivalent kernels of smoothing splines in nonparametric regression for clustered/longitudinal data ](https://academic.oup.com/biomet/article/91/1/177/218863) but it is a very technical paper. Maybe one finds Nychka (1995) [Splines as Local Smoothers](https://projecteuclid.org/journals/annals-of-statistics/volume-23/issue-4/Splines-as-Local-Smoothers/10.1214/aos/1176324704.full) easier to follow, but I personally found the exposition in Lin et al. cleaner and the examples more helpful. In general, I think one should first build some familiarity with the asymptotic optimality of Generalised Cross-Validation. An obvious reference on the matter is Li's (1987) [Asymptotic Optimality for $C_p$, $C_L$, Cross-Validation and Generalized Cross-Validation: Discrete Index Set](https://projecteuclid.org/journals/annals-of-statistics/volume-15/issue-3/Asymptotic-Optimality-for-C_p-C_L-Cross-Validation-and-Generalized-Cross/10.1214/aos/1176350486.full) where the connection with nearest-neighbour nonparametric regression is discussed in detail. Following that, Simonoff (1996) [Smoothing Methods in Statistics](https://link.springer.com/book/10.1007/978-1-4612-4026-6) Sect. 5.8 "Comparing Non-parametric Regression Methods" has a short casual discussion, and it is the only place I have seen a book commenting on how local polynomial estimators generalise to arbitrary likelihoods via the idea of a local likelihood. Finally, we get to the papers mentioned in the beginning, and particularly to Sections 3 & 4 of Lin et al.
null
CC BY-SA 4.0
null
2023-03-08T22:08:02.300
2023-03-09T00:07:02.870
2023-03-09T00:07:02.870
11852
11852
null
608811
1
null
null
0
7
I want to see if there is a proper way to represent this problem. I have an initial Bernoulli trial: Level 1: P(A) = f(x), P(B) = 1-f(x); success is "A". Level 2: P(C|A) = 100%, P(C|B) = 50%, P(D|B) = 50%; success is "C". The first-level Bernoulli splits between "A" and "B" as f(x). At the second level, Bernoulli "A" always goes to "C", while Bernoulli "B" is a random 50/50 event. I can describe the above using the math above. I guess I'm trying to ask if there is a more formal way to explain the collapse of this information to a P(C) = Thanks. D
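The "collapse" being asked about is the law of total probability: P(C) = P(C|A)P(A) + P(C|B)P(B) = 1·f(x) + 0.5·(1 − f(x)). As a sketch:

```python
def p_success(p_a):
    """P(C) for the two-level scheme: A always leads to C,
    B leads to C with probability 1/2."""
    return 1.0 * p_a + 0.5 * (1.0 - p_a)

p_success(0.3)  # 0.65
```

The function collapses the nested trials into a single Bernoulli success probability.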
1 level nested Bernoulli trial with fixed probabilities for each run
CC BY-SA 4.0
null
2023-03-08T22:14:06.327
2023-03-08T22:14:06.327
null
null
382721
[ "bernoulli-process" ]
608813
1
null
null
1
69
I have this problem to solve: There are ten people in a class. Ari and Jamaal are twins in this class. At random, two people will be chosen as the class representatives. What are the odds that Ari and Jamaal will both be chosen? I can guess the solution: For the first pick, there is a 2 out of 10 probability. For the second pick, there is a 1 out of 9 probability. Therefore, 20% * 11% = 2.2%. However, I am not understanding the logic. Can someone help?
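The sequential logic in the question can be checked against a counting argument: there is exactly one favorable pair out of C(10, 2) = 45 equally likely pairs, and this matches (2/10)·(1/9):

```python
from math import comb

p_counting = comb(2, 2) / comb(10, 2)   # one favorable pair out of 45
p_sequential = (2 / 10) * (1 / 9)       # either twin first, the other twin second
```

Both routes give 1/45, approximately 2.22%.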
How to select 2 specific people from a 10 people group
CC BY-SA 4.0
null
2023-03-08T22:42:10.943
2023-03-20T13:59:24.350
null
null
382725
[ "conditional-probability" ]
608814
1
null
null
0
21
I am learning probabilistic forecasting and there are three ways to do it: quantile regression, prediction intervals, and probability density forecasts. Can I perform quantile regression with Gaussian process regression? Do these two have the same interpretation?
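For context: a standard GP regression with Gaussian noise yields a full Gaussian predictive distribution at each input, so any quantile falls out of its inverse CDF, but that is not the same as quantile regression, which models each quantile directly without assuming a symmetric predictive shape. A sketch of the GP side:

```python
from statistics import NormalDist

def gp_predictive_quantile(mu, sigma, tau):
    """tau-quantile of the Gaussian predictive distribution N(mu, sigma^2)
    that a standard GP regression produces at a test point."""
    return NormalDist(mu=mu, sigma=sigma).inv_cdf(tau)

q50 = gp_predictive_quantile(10.0, 2.0, 0.5)   # median == mean for a Gaussian
q90 = gp_predictive_quantile(10.0, 2.0, 0.9)
```

The Gaussian quantiles are forced to be symmetric around the mean, which is exactly the restriction quantile regression avoids.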
Can you do Quantile regression for probabilistic forecast with Gaussian process regression (GPR)?
CC BY-SA 4.0
null
2023-03-08T23:15:28.467
2023-03-08T23:15:28.467
null
null
380179
[ "gaussian-process", "quantile-regression", "probabilistic-forecasts" ]
608816
1
null
null
2
34
I’m doing some matching followed by difference-in-difference regression to look at the impact of a certain disease on people's income. The matching is done, and I’m preparing to go into the next stage, setting up a difference in difference regression and achieving double robustness by including the same covariates I used during matching. Currently, I matched individuals in the treatment and the controls based on the quarter that the treatment had their diagnosis. The covariates I included in the propensity score model include age, ethnicity, area of residence and previous health care utilization, etc. My variables for previous healthcare utilization include the number of GP visits in the previous year, the number of nights in the hospital in the previous year etc. The data structure is cross-sectional. So, I used this cross-sectional data to do the matching, and then have the IDs of the treatment and the matched controls. I now plan to grab additional information based on the IDs of the treatment and the matched controls to estimate the difference in difference regression. For the DiD regression, I’m going to get monthly data for these matched IDs, and that turns my data into a panel data structure. (The outcome of interest is monthly income) My question is, given that I matched the treatment and the controls using cross-sectional data, with variables like the number of GP visits in the previous year before the cancer diagnosis, how do I include this variable in the DiD regression? In other words, when the matching was done, I included some variables that already have a built-in time dimension, how do I then include that in the DiD regression to achieve double robustness? Do I still calculate how many GP visits they’ve had in the previous year by each month? Or do I include the number of GP visits in that specific month? I guess what I'm trying to ask here is how important is it that I use the same variables during matching and DiD regression. 
Do they have to be specified the same or can I use different measures for the same variable? Thanks so much for your help.
Difference in difference regression after propensity score matching
CC BY-SA 4.0
null
2023-03-08T23:25:27.077
2023-03-08T23:25:27.077
null
null
382726
[ "econometrics", "difference-in-difference", "matching", "treatment-effect" ]
608817
1
null
null
1
41
So I need to compare three linear mixed effects models using the `anova` function in R. My advisor oversaw me build the actual models themselves so I am fairly sure the syntax for the models themselves is correct but I get a weird result. Here is my data set for references ``` structure(list(X = 1:6, sub = c("59917f16e339120001fb8c21_fvHlk:5fbd11ca7025930168297956", "59917f16e339120001fb8c21_uK9Bt:5fbd11ca7025930168297956", "59917f16e339120001fb8c21_fvHlk:5fbd11ca7025930168297956", "59917f16e339120001fb8c21_uK9Bt:5fbd11ca7025930168297956", "59917f16e339120001fb8c21_fvHlk:5fbd11ca7025930168297956", "59917f16e339120001fb8c21_uK9Bt:5fbd11ca7025930168297956"), subject = c("59917f16e339120001fb8c21", "59917f16e339120001fb8c21", "59917f16e339120001fb8c21", "59917f16e339120001fb8c21", "59917f16e339120001fb8c21", "59917f16e339120001fb8c21"), event = c(68L, 56L, 72L, 37L, 71L, 48L), timestamp = c("11/24/20 14:27", "11/24/20 14:11", "11/24/20 14:27", "11/24/20 14:09", "11/24/20 14:27", "11/24/20 14:10" ), profile = c("mean", "odd", "mean", "odd", "mean", "odd"), rating = c(4L, 3L, 4L, 3L, 4L, 5L), rt_ms = c(2053L, 2370L, 3044L, 1568L, 1112L, 1732L), image = c("accordion_1", "accordion_3", "apple_01", "apple_03", "asian_01", "asian_02"), trial = c(7L, 55L, 11L, 36L, 10L, 47L), version = c(1L, 1L, 1L, 1L, 1L, 1L), onset_s = c(542.394, 418.52, 570.127, 283.328, 563.481, 358.924), profile_rating = c(3L, 3L, 3L, 3L, 4L, 5L), block = c(2L, 1L, 2L, 1L, 2L, 1L), sub_num = c(179L, 154L, 179L, 154L, 179L, 154L), session = c(1L, 2L, 1L, 2L, 1L, 2L), sub_num2 = c(2L, 2L, 2L, 2L, 2L, 2L), own_pref = c(4L, 3L, 4L, 4L, 4L, 5L), cat1 = c(1L, 1L, 3L, 3L, 3L, 3L), cat2 = c(2L, 2L, 10L, 10L, 11L, 11L), item_num = 1:6, learning_run = c(1L, 2L, 1L, 2L, 1L, 2L), own_pref_nan = c(4L, 3L, 4L, 4L, 4L, 5L), profile_rating_new = c(2L, 2L, 2L, 2L, 3L, 3L), PE = c(2L, 1L, 2L, 1L, 1L, 2L), PE_si = c(2L, 1L, 2L, 1L, 1L, 2L), se_PE = c(0L, 0L, 0L, 1L, 0L, 0L), pro_PE = c(2L, 1L, 2L, 2L, 1L, 2L), mean_p1 = 
c(3.177631579, 2.868421053, 4.421052632, 4.526315789, 4.440789474, 4.0625), mean_p2 = c(2.864197531, 2.728395062, 4.283950617, 4.259259259, 3.956790123, 4.104938272 ), med_p = c(3, 3, 4.5, 5, 5, 5), learn_prof = c(1L, 1L, 1L, 1L, 1L, 1L)), row.names = c(NA, 6L), class = "data.frame") ``` When I run this data through three models below: ``` td_pref_own <- lmer(rating ~ learn_prof * own_pref * profile_rating_new * profile + (1|sub) + (1|image), data=td_pref_all) summary(td_pref_own) td_pref_mean_adu <- lmer(rating ~ learn_prof * mean_p1 * profile_rating_new * profile + (1|sub) + (1|image), data=td_pref_all) summary(td_pref_mean_adu) td_pref_mean_kid <- lmer(rating ~ learn_prof * mean_p2 * profile_rating_new * profile + (1|sub) + (1|image), data=td_pref_all) summary(td_pref_mean_kid) ``` Basically, I am replacing the predictors own, mean_p1, and mean_p2 in each model. And then compare the models with this: ``` anova(td_pref_mean_adu, td_pref_own) anova(td_pref_own, td_pref_mean_kid) anova(td_pref_mean_adu,td_pref_mean_kid) ``` I get the result below. It seems weird to me that there is no p-value and I'm not really sure how to interpret my results. 
``` Data: td_pref_all Models: td_pref_mean_adu: rating ~ learn_prof * mean_p1 * profile_rating_new * profile + (1 | sub) + (1 | image) td_pref_own: rating ~ learn_prof * own_pref * profile_rating_new * profile + (1 | sub) + (1 | image) npar AIC BIC logLik deviance Chisq Df Pr(>Chisq) td_pref_mean_adu 19 73305 73458 -36634 73267 td_pref_own 19 72805 72958 -36384 72767 500.43 0 Data: td_pref_all Models: td_pref_own: rating ~ learn_prof * own_pref * profile_rating_new * profile + (1 | sub) + (1 | image) td_pref_mean_kid: rating ~ learn_prof * mean_p2 * profile_rating_new * profile + (1 | sub) + (1 | image) npar AIC BIC logLik deviance Chisq Df Pr(>Chisq) td_pref_own 19 72805 72958 -36384 72767 td_pref_mean_kid 19 73306 73459 -36634 73268 0 0 Data: td_pref_all Models: td_pref_mean_adu: rating ~ learn_prof * mean_p1 * profile_rating_new * profile + (1 | sub) + (1 | image) td_pref_mean_kid: rating ~ learn_prof * mean_p2 * profile_rating_new * profile + (1 | sub) + (1 | image) npar AIC BIC logLik deviance Chisq Df Pr(>Chisq) td_pref_mean_adu 19 73305 73458 -36634 73267 td_pref_mean_kid 19 73306 73459 -36634 73268 0 0 ``` Does anyone know what this might mean?
Comparing two linear mixed effect models in ANOVA resulting in no p-value
CC BY-SA 4.0
null
2023-03-08T22:12:58.760
2023-03-08T23:37:34.647
null
null
null
[ "r" ]
608818
2
null
608817
1
null
Looking at your three models, the differences are in the fixed effects rather than the random effects. On the other hand, the default estimation method in `lmer()` is REML, which estimates the variance components after removing the fixed effects. Therefore, ML is the preferred estimation method when comparing models whose fixed effects differ, which is the case here. So my advice is to add the argument `REML = FALSE` to your `lmer()` calls. For more details on choosing ML or REML, I would suggest reading [this](https://stats.stackexchange.com/questions/116770/reml-or-ml-to-compare-two-mixed-effects-models-with-differing-fixed-effects-but) and [this](https://stats.stackexchange.com/questions/41123/reml-vs-ml-stepaic) post. Hope it helps
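A minimal sketch of the suggested refit, reusing the formulas from the question (this assumes the full `td_pref_all` data frame is in scope; the six-row excerpt above is far too small to actually fit these models):

```r
library(lme4)

# Refit by maximum likelihood so the fixed-effect parts are comparable
td_pref_own <- lmer(rating ~ learn_prof * own_pref * profile_rating_new * profile +
                      (1 | sub) + (1 | image),
                    data = td_pref_all, REML = FALSE)

td_pref_mean_adu <- lmer(rating ~ learn_prof * mean_p1 * profile_rating_new * profile +
                           (1 | sub) + (1 | image),
                         data = td_pref_all, REML = FALSE)

anova(td_pref_mean_adu, td_pref_own)
```

Note that `anova()` only produces a likelihood-ratio p-value for nested models; two models with the same number of parameters give `Df = 0`, in which case the AIC/BIC columns are the relevant comparison.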
null
CC BY-SA 4.0
null
2023-03-08T23:10:28.640
2023-03-08T23:10:28.640
null
null
351643
null
608819
1
null
null
0
21
I have panel data and I want to model how changes in some quantity $Q$, caused by changes in district boundaries, affect another quantity of interest in these districts. I want to create a variable for district $d$ that is the difference between quantity $Q$ at time $t$ and at time $t-1$. In other words, my variable is $X_{dt} = Q_{dt} - Q_{d,t-1}$. Is it appropriate to model a dependent variable $Y$ in a linear regression as $Y_{dt} = \beta_0 + \beta_1 X_{dt} + \epsilon_{dt}$? I cannot find anything saying it's not, but something tells me this may be incorrect. I'm having a hard time even figuring out what to search for, so any guidance you all might provide is greatly appreciated!
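For what it's worth, constructing the differenced variable is straightforward in base R. A minimal sketch with a made-up data frame (the column names `district`, `year`, and `Q` are hypothetical):

```r
# Hypothetical panel: two districts observed over three years
panel <- data.frame(
  district = rep(c("A", "B"), each = 3),
  year     = rep(2001:2003, times = 2),
  Q        = c(10, 12, 15, 7, 7, 9)
)

# Sort so that lagging within district is well defined
panel <- panel[order(panel$district, panel$year), ]

# X_dt = Q_dt - Q_{d,t-1}, computed within each district
panel$X <- ave(panel$Q, panel$district,
               FUN = function(v) c(NA, diff(v)))
panel
```

The first observation of each district has no lag, so its `X` is `NA` and drops out of any regression.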
Is it acceptable to create an independent variable in panel data that's the difference between a variable in time t and time t-1?
CC BY-SA 4.0
null
2023-03-08T23:57:01.247
2023-03-08T23:57:01.247
null
null
171667
[ "regression", "time-series", "panel-data" ]
608820
1
null
null
0
51
Consider a well-defined function $\psi(x,\theta)$. Assume that it is a smooth function, differentiable, with a finite expectation and a finite second moment. $E\dfrac{\partial \psi(X,\theta)}{\partial \theta} \neq 0$ and is finite as well. The textbook then provides the following hint: $E \sup \biggl({\dfrac{1}{n}\sum_{i=1}^{n} \left|\dfrac{\partial\psi(X_i,\theta)}{\partial\theta} - \dfrac{\partial\psi(X_i,\theta(P))}{\partial\theta(P)}\right|:|\theta-\theta(P)|\leq \epsilon_n}\biggr) \leq E \sup \biggl({\left|\dfrac{\partial\psi(X_1,\theta)}{\partial\theta} - \dfrac{\partial\psi(X_1,\theta(P))}{\partial\theta(P)}\right|:|\theta-\theta(P)|\leq \epsilon_n}\biggr)$ EDIT: Assume $X_1, \ldots, X_n$ are iid random variables. The parameter $\theta(P)$ is given by the solution of $\int{\psi(x) f(x,\theta) dx} = 0$, whereas $\theta$ is a free variable. How does the above relation hold? Does it always hold?
Regularity conditions hint
CC BY-SA 4.0
null
2023-03-09T00:00:21.777
2023-06-03T07:49:16.647
2023-06-03T07:49:16.647
121522
165434
[ "probability", "self-study", "mathematical-statistics" ]
608821
1
null
null
0
64
I got curious about deep learning models on sets / learning representations of sets (paper by Zaheer et al.: [https://proceedings.neurips.cc/paper/2017/file/f22e4747da1aa27e363d86d40ff442fe-Paper.pdf](https://proceedings.neurips.cc/paper/2017/file/f22e4747da1aa27e363d86d40ff442fe-Paper.pdf)). I'm under the impression that their goal in this paper is to construct a neural network architecture that takes set feature vectors as input (with set vector representations as output), and that this NN architecture has both permutation-invariant and permutation-equivariant characteristics. Because sets are unordered and have arbitrary ordering, any dataset whose data type is a set is pretty much applicable to this specific Deep Sets architecture. My question is whether the input data ought to have an arbitrary ordering. Since sets don't follow a specific permutation (i.e., `{1,2,3}` should give the same output as `{2,1,3}`, perhaps permuted due to the invariance and equivariance properties), can we input a dataset where we know the specific ordering a priori (because we know that specific ordering would yield the best results)? For example, the task of getting the top `n` numbers in a set is most easily solved when the numbers are sorted in decreasing order from highest to lowest. I guess we could just turn to sequence-based models (transformers, LSTMs, RNNs, etc.) for datasets where we know the order of the data a priori, but I wondered whether it would also work for Deep Sets (i.e., we know a little bit of extra information, the ordering, which could help solve the problem and improve the performance of the Deep Sets architecture).
Deep Sets - Deep Learning for Sets (does it also work for ordered data?)
CC BY-SA 4.0
null
2023-03-09T00:23:01.737
2023-03-09T00:23:01.737
null
null
337274
[ "machine-learning", "neural-networks", "dataset" ]
608822
1
null
null
1
13
I've just started practicum work for a SaaS company, and I am trying to build a customer attrition (binary classification) model for an enterprise SaaS product, with a target variable indicating whether or not a customer churned after their annual renewal cycle, encoded 1/0 based on their historical renewal status. At a high level there are 3 types of features: adoption features, engagement features, and fixed characteristics. The training dataset contains monthly snapshots of adoption and engagement features. Thus every customer will have at least 12 records in our system if they churned after their first annual subscription; if a customer renewed for another 12 months, they will have at least 24 records, etc. Adoption features are known to be the most important features for predicting customer churn outcomes; features in this category are things like % of activated accounts, monthly active user count, etc. Engagement features are things like the number of self-service online videos customers in a particular account have watched, the number of certificates earned, and the number of interactions a customer had with customer success product experts. The problem is that using the base classification model construct of `churn(1/0) ~ adoption features + engagement features`, I'm not seeing any significant contributions from engagement features, but I think there is some lag effect where improvements in engagement features lead to improvements in adoption features. For example, customer success interactions lead to some improvements in adoption features, but not during the exact months when the interactions happen; rather, in the next 1~2 months after the interactions. Are there any models that would better represent this dynamic? From my initial research I might use a technique like vector autoregression to model this, but I'm not sure if that's the way to go. Any help would be greatly appreciated!
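One simple first step for the suspected lag effect is to add explicitly lagged engagement features, so that this month's outcome is regressed on engagement one and two months back. A base-R sketch with hypothetical column names (`account`, `month`, `engagement` are made up for illustration):

```r
# Hypothetical monthly snapshots for two accounts
snap <- data.frame(
  account    = rep(c("acct1", "acct2"), each = 4),
  month      = rep(1:4, times = 2),
  engagement = c(5, 8, 2, 6, 1, 1, 4, 3)
)
snap <- snap[order(snap$account, snap$month), ]

# Shift a vector forward by k positions, padding with NA
lag_by <- function(v, k) c(rep(NA, k), head(v, -k))

# Engagement lagged by one and two months, computed within account
snap$engagement_lag1 <- ave(snap$engagement, snap$account, FUN = function(v) lag_by(v, 1))
snap$engagement_lag2 <- ave(snap$engagement, snap$account, FUN = function(v) lag_by(v, 2))
```

The lagged columns can then enter the churn model alongside (or instead of) the contemporaneous engagement features.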
How to better model customer attrition?
CC BY-SA 4.0
null
2023-03-09T01:09:05.080
2023-03-09T01:09:05.080
null
null
382731
[ "regression", "time-series", "econometrics", "churn" ]
608823
1
null
null
1
17
I have two layers with different layer sizes (hidden states). How can I perform encoder-decoder-type attention on these layers if the layer sizes are different? Since attention involves a dot product, how can this work? Consider that I have two layers: ``` lstm1 = LSTM(20, return_sequences=True, name='lstm') lstm2 = LSTM(40, return_sequences=True, name='lstm') ``` How is it possible to apply attention here, since the dot product would not be possible? Thanks
Attention mechanism with different hidden states length?
CC BY-SA 4.0
null
2023-03-09T01:18:04.430
2023-03-09T01:18:04.430
null
null
377324
[ "machine-learning", "neural-networks", "natural-language", "attention" ]
608826
2
null
608820
3
null
Written in that way, it seems complicated. But what the hint really says is just the following simple [inequality](https://math.stackexchange.com/questions/207335/prove-supfg-le-sup-f-sup-g): \begin{align} \sup_{\theta \in \Theta}(f(\theta) + g(\theta)) \leq \sup_{\theta \in \Theta}f(\theta) + \sup_{\theta \in \Theta}g(\theta), \end{align} for any real functions $f, g$ and non-empty set $\Theta$. Now use the linearity of expectation and the i.i.d. assumption of $X_1, \ldots, X_n$.
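Spelling out how the hint follows, writing $g_i(\theta)$ for the absolute difference of derivatives inside the supremum: \begin{align} E\sup_{|\theta-\theta(P)|\leq\epsilon_n}\frac{1}{n}\sum_{i=1}^{n} g_i(\theta) \leq E\left(\frac{1}{n}\sum_{i=1}^{n}\sup_{|\theta-\theta(P)|\leq\epsilon_n} g_i(\theta)\right) = \frac{1}{n}\sum_{i=1}^{n} E\sup_{|\theta-\theta(P)|\leq\epsilon_n} g_i(\theta) = E\sup_{|\theta-\theta(P)|\leq\epsilon_n} g_1(\theta), \end{align} where the first step applies the displayed inequality term by term (extended to $n$ summands by induction), the second uses linearity of expectation, and the last uses the fact that the $X_i$ are identically distributed.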
null
CC BY-SA 4.0
null
2023-03-09T02:31:56.543
2023-03-09T02:49:17.260
2023-03-09T02:49:17.260
362671
20519
null
608827
1
null
null
2
32
I want to include both parents' education variables as control variables in my estimation of the effects of maternal bargaining power on children's educational attainment. They are coded as categorical variables ranging from 0 for "not receiving education" to 4 for "tertiary-level education". Many previous papers on my topic have also included these as controls, and most of them find significant results for the main independent variable (maternal bargaining power). But when I did the same, my independent variable lost significance, and it turns out the variables are highly correlated with each other. Is there any way to get around this and still include these variables in the model without sacrificing significance?
Controlling for parents' education when the variables correlate with each other
CC BY-SA 4.0
null
2023-03-09T02:32:02.970
2023-03-10T04:20:07.917
null
null
382735
[ "statistical-significance", "p-value", "controlling-for-a-variable" ]
608828
1
null
null
0
27
I use variables, independent as well as dependent, which are inherently not normally distributed. Except for a few "extremists", nobody scores far above the mean on these scales, which are validated in peer-reviewed journals and widely used (e.g. rape myths scales). I ran an experiment with two experimental conditions, measures of two non-normal IVs, and measures of two non-normal outcomes. As the rookie I am, I ran a general linear model to look at main and interaction effects. I found significant results, but with wide 95% CIs. I then realized that none of these measures were normally distributed. I have sometimes read that normality, whether of the IVs or the DV, is not required to run a GLM. However, I have also read the opposite, suggesting a switch to non-parametric methods (impossible here) or a log (or other) transformation of the variables. What is your opinion (and sources) about this, please? I admit I feel a bit terrified of seeing my project destroyed because of systematically non-normal measures... And I can't find answers in published papers, because almost all of them use these variables in correlational studies with structural equation models that allow correcting for non-normality. EDIT: Here are the assumption checks from the GLM. For IV 1: Levene p=.073 Kolmogorov p=.258 Shapiro p=.002 [](https://i.stack.imgur.com/yAX03.png) [](https://i.stack.imgur.com/p4yaA.png) [](https://i.stack.imgur.com/CB4Lg.png) For IV 2: Levene p=.025 Kolmogorov p<.001 Shapiro p<.001 [](https://i.stack.imgur.com/bJ9qu.png) [](https://i.stack.imgur.com/qV4cr.png) [](https://i.stack.imgur.com/2TNvn.png)
Non normal independent and dependent variable - GLM
CC BY-SA 4.0
null
2023-03-09T02:32:40.157
2023-03-09T03:41:15.243
2023-03-09T03:41:15.243
382736
382736
[ "multiple-regression", "anova", "generalized-linear-model", "linear-model", "normality-assumption" ]
608829
1
null
null
1
43
I'm working on a project in R where I'm looking at California's census tract-level demographic data in an explanatory logistic regression model. I have 6 demographic variables of interest: percent below 150% poverty line, percent minority, percent unemployed, percent disabled, percent with no high school diploma, and percent without health insurance. I also want to control for population density since it varies so much amongst census tracts. My binary exposure variable is if the census tract contains a large animal farming operation (1=yes, 0=no). Here's an example of the model I coded in R: ``` cali_logit <- glm(exposure ~ percent_unemployed + percent_minority + percent_no_diploma + percent_uninsured + percent_under150 + percent_disabled + pop_density, family = "binomial", data = cali_cafos) ``` I checked all variables' variance inflation factors and all are under 5, so multicollinearity is not a problem. From my (long ago) stats classes, I know that when we add all of our data into a model we are adjusting the model to control for potential confounding effects. However, do I need to "control" for census tract level data? It's not quite clicking with me how percentages of other categories are confounders or need to be adjusted for in the model. If I want to see the odds of being in an exposed tract given a 1% increase in x,y,z demographic of interest, and only control for population density, should I just do an individual glm for each variable with only pop_density included? What would be the reason for adding all variables into a logistic regression model with census tract data?
Should I be controlling for all independent variables in my logistic regression model?
CC BY-SA 4.0
null
2023-03-09T02:02:20.510
2023-03-09T18:01:00.150
2023-03-09T18:01:00.150
11887
382776
[ "r", "regression", "logistic" ]
608830
2
null
608813
1
null
You might see the logic with conditional probability statements: $A_1$ is the event Ari is selected in the first pick, $A_2$ in the second. $J_1$ is the event Jamal is selected in the first pick, $J_2$ in the second. $P(A_1) = P(J_1) = 1/10$ $P(A_2 | \bar{A_1}) = P(J_2 | \bar{J_1}) = 1/9$ $P([A_1 \cap J_2] \cup [J_1 \cap A_2]) = P(A_1)P(J_2|A_1) + P(J_1)P(A_2|J_1) - P([A_1 \cap J_2] \cap [J_1 \cap A_2]) = (1/10)(1/9) + (1/10)(1/9) - 0 = 2/90$
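A quick sanity check via counting: the ordered pair of the first two picks is uniform over $10 \times 9 = 90$ equally likely possibilities, of which exactly two (Ari then Jamal, or Jamal then Ari) are favorable, so $$P = \frac{2}{90} = \frac{1}{45} \approx 0.022,$$ agreeing with the conditional-probability computation.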
null
CC BY-SA 4.0
null
2023-03-09T02:46:57.390
2023-03-20T13:19:39.247
2023-03-20T13:19:39.247
212798
212798
null
608831
1
null
null
0
12
I know that conditioning never increases the ordinary entropy: $H(Y) \geq H(Y|X)$. But does the same hold for the cross-entropy? Do we have $H_c(Y;Y') \geq H_c(Y|X;Y'|X)$?
Conditioning decreases cross-entropy
CC BY-SA 4.0
null
2023-03-09T02:53:02.573
2023-03-09T03:14:33.277
2023-03-09T03:14:33.277
362671
382739
[ "cross-entropy", "conditioning" ]
608832
2
null
608829
0
null
Vanilla OLS linear regression can suffer from coefficient bias when predictors are omitted that are correlated with included predictors. Logistic regression is worse, because omitting a predictor that is totally independent of the included predictors can still lead to biased coefficients (okay, yes, the usual estimation is biased, but the bias is even worse). Consequently, including relevant predictors can help you get better point estimates of your coefficients of interest. Further, better performance can help tighten up confidence intervals on your coefficients. If you have enough data to support a model with many relevant variables, including all of them can have some serious advantages.
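A small simulation sketch of the logistic point (all numbers here are made up for illustration): omitting $z$, even though it is generated independently of $x$, shrinks the estimated coefficient on $x$ toward zero.

```r
set.seed(1)
n <- 1e5
x <- rnorm(n)
z <- rnorm(n)                         # independent of x by construction
y <- rbinom(n, 1, plogis(x + 2 * z))  # true coefficient on x is 1

b_full    <- coef(glm(y ~ x + z, family = binomial))["x"]  # near 1
b_omitted <- coef(glm(y ~ x,     family = binomial))["x"]  # attenuated toward 0

c(full = b_full, omitted = b_omitted)
```

This is the non-collapsibility of the odds ratio at work; in OLS, omitting a predictor that is independent of the included ones would not bias the slope this way.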
null
CC BY-SA 4.0
null
2023-03-09T02:56:24.530
2023-03-09T02:56:24.530
null
null
247274
null
608833
1
null
null
0
35
I am learning latent processes and reading the paper ["Estimation of Extended Mixed Models Using Latent Classes and Latent Processes: The R Package lcmm"](https://www.jstatsoft.org/article/view/v078i02). I'm not a professional statistician, and there's a lot I can't understand, so I am here for some help. I have a data set of 5,000 people, including their physical examination data over 20 years. The purpose of my study is to analyze the trajectory of nonfasting triglycerides and the risk of CVD using the Jointlcmm function in the lcmm package in R. Here is part of my data set and the code (Jointlcmm function) I am trying. Please also let me know if there are any errors in the dataset and code. > ``` latent.j.nonfasting.fit <- lcmm::Jointlcmm( tg.log2 ~ poly(age.40, degree = 3, raw = TRUE) + sexc, random = ~ poly(age.40, degree = 3, raw = TRUE), survival = Surv(age.first, age.censored, icvd) ~ sexc, hazard = "Weibull", hazardtype = "PH", subject = "ID", data = dt, ng = 1 ) ``` Here are my questions. - I need multiple physical examination records to fit trajectories, but some people only had one or two participations over 20 years. Should I exclude those with fewer visits? If so, what should the minimum number of participations be? - I have read "6.5. Jointlcmm examples" on page 41. Covariates can be adjusted for in the risk model, but only time-independent ones ("The original text is "Note that the survival model only handles time-independent covariates." on page 41"). For example, for someone who participated in 5 physical examinations, I can adjust for their sex, but I cannot adjust for their diabetes status during follow-up. If I want to adjust for diabetes, I need to adjust for baseline diabetes. Is that right? - The Jointlcmm function can handle competing risks. In my data set, the CVD variable is 1 when CVD occurs and 2 when death occurs. Death is a competing risk. So can the model be written as Surv(age.first, age.40, icvd) ~ cause1(sexc)? 
That is to say, cause1 is used to specify the covariate effect for the cause coded 1. Is this understanding correct? (See the R documentation for the Jointlcmm function.) - On page 42 of the paper ["Estimation of Extended Mixed Models Using Latent Classes and Latent Processes: The R Package lcmm"](https://www.jstatsoft.org/article/view/v078i02), there is the following content: "Here, we first see that for each additional latent class, there is a 6-parameter increase. This corresponds to the additional class-specific parameters: the proportion of the class, the two Weibull parameters, and the three fixed effects for the quadratic trajectory (intercept, time and time squared)...... This illustrates once again that default initial values do not necessarily lead to a global maximum (and a convergence), and that multiple sets of initial values should be systematically tried. The models were thus reestimated with various sets of initial values specified in B. For example, the following code illustrates a reestimation of the three-class model using estimates of the two-class model as initial values along with arbitrary initial values for an additional class:" ``` Binit <- rep(0, length(mj2$best) + 6) Binit[c(2, 5:10, 12, 13, 15, 16,18, 19:(length(Binit)))] <- mj2$best Binit[c(1, 3, 4, 11, 14, 17)] <- c(0, 0.11, 4, 70, 0, 0) ``` What I don't understand is how to determine where the parameters need to be added, and the logic of adding them. In the example in the text, the parameters are added at positions c(1, 3, 4, 11, 14, 17), and the added values are c(0, 0.11, 4, 70, 0, 0). I don't understand what the basis for these numbers is. I'm not sure if these questions should be asked here. I would be very grateful if someone could help me. Thanks.
Some questions about Latent Classes and Latent Processes using R
CC BY-SA 4.0
null
2023-03-09T03:01:03.847
2023-03-09T03:39:47.037
2023-03-09T03:39:47.037
375971
375971
[ "r", "latent-class" ]
608834
1
608876
null
0
64
A study followed 1000 patients over a 5-year period, but some patients may have been lost to follow-up or relocated during the study. The study's results indicate that: - Kaplan-Meier survival rate at 1 year: 0.80 - Kaplan-Meier survival rate at 2 years: 0.60 - Kaplan-Meier survival rate at 5 years: 0.50 Questions: - Is it valid to interpret this as 200 deaths out of 1000 participants at 1 year, 400 deaths at 2 years, and 500 deaths at 5 years? - Does that interpretation assume that no patients were censored? - What is the best approach to estimating the number of deaths out of 1000 participants at 1, 2 and 5 years? Can you share your thoughts, pointing to a few good references?
Interpretation and assumptions of Kaplan-Meier survival rates
CC BY-SA 4.0
null
2023-03-09T03:15:32.830
2023-03-09T13:20:48.390
null
null
305274
[ "regression", "survival", "inference", "interpretation" ]
608835
1
608837
null
0
31
In the life table, two values are related to death. One is the death rate, by definition, $$ m_x = \frac{d_x}{n_x}$$ The other is the probability of dying: $$ q_x = \frac{d_x}{l_x}$$ The numerators of these two formulas are the same, but the denominators differ. I cannot understand the difference between $l_x$ and $n_x$ even though I have read the notes many times... Can someone explain it to me in a more straightforward way?
Difference of death rate and probability of death
CC BY-SA 4.0
null
2023-03-09T03:20:58.383
2023-03-09T03:41:21.230
2023-03-09T03:21:34.123
362671
368723
[ "demography" ]
608836
1
null
null
0
6
I came across a paper that writes the following: The first set of independent variables, $\text{REPRESSION}(1)_{i,t-1}$, $\text{REPRESSION}(2)_{i,t-1}$, $\text{REPRESSION}(3)_{i,t-1}$, $\text{REPRESSION}(4)_{i,t-1}$, are binary indicators measuring a state's previous level of repression. They are included in place of the standard lagged dependent variable to account for dependence across the categories of the dependent variable over time. Because $\text{REPRESSION}_{i,t}$ is treated as a nonlinear dependent variable, it is not appropriate to control for autocorrelation through the standard treatment of $\text{REPRESSION}_{i,t-1}$ as a lagged independent variable. The inclusion of four dummy variables is a nice alternative to the problem of correlated categories of repression within a state across time. The model that the author uses is $\text{REPRESSION}_{i,t} = a + \text{REPRESSION}(1)_{i,t-1} + \text{REPRESSION}(2)_{i,t-1} + \text{REPRESSION}(3)_{i,t-1} + \text{REPRESSION}(4)_{i,t-1} + \text{other control variables} + \text{error term}$. The dependent variable $\text{REPRESSION}_{i,t}$ is an ordinal variable, ranging across five levels of repressive behaviors. Can someone explain what this means? Why does the author include 4 different dummy variables as an alternative?
Lagged nonlinear independent variable?
CC BY-SA 4.0
null
2023-03-09T03:29:42.860
2023-03-09T03:29:42.860
null
null
355204
[ "lags", "nonlinear" ]
608837
2
null
608835
1
null
$l_x$ is the number of persons out of the cohort living at the specified age $x.$ $d_x$ is the number of persons out of $l_x$ who die before attaining age $x+1,$ i.e. $d_x=l_x-l_{x+1}.$ So, $q_x$ measures the probability of a person of exact age $x$ dying within one year. $m_x$ in a life table measures the probability that a person whose exact age is not known, but lies in $(x, x+1),$ would die within one year. The denominator is actually (provided the deaths are uniform) $L_x:= \int_0^1 (l_x -td_x) ~\mathrm dt=l_x-\frac12d_x.$ In the case of a stationary population, the number of persons in the age group $(x, x+1)$ would generally be denoted by $n_x.$
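A small numerical example (made-up numbers): take $l_x = 1000$ and $d_x = 20.$ Then $$q_x=\frac{d_x}{l_x}=\frac{20}{1000}=0.02, \qquad L_x=l_x-\tfrac12 d_x=990, \qquad m_x=\frac{d_x}{L_x}=\frac{20}{990}\approx 0.0202.$$ The central death rate $m_x$ is slightly larger than $q_x$ because its denominator counts only the person-years actually lived during the year, not everyone alive at exact age $x.$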
null
CC BY-SA 4.0
null
2023-03-09T03:41:21.230
2023-03-09T03:41:21.230
null
null
362671
null
608839
2
null
608341
0
null
Indeed, your reasoning is sound. Effect sizes such as Cohen's d or Hedges' g can be influenced by sampling variance. When standard deviations are extremely small, effect sizes may be artificially inflated, even if the difference (measured on a non-standardized metric or scale) is not clinically significant. If possible, the best way to deal with this is to check for outliers and replace unrealistically tiny standard deviations with more plausible estimates, preferably obtained from large, well-conducted, low-risk-of-bias studies. Another option is to use the ratio of means, if feasible. To convert the summary effect size back to the original metric or scale, you can multiply d or g by a reliable estimate of the population standard deviation. However, it's important to note that the same d or g may have different magnitudes on the original scale, depending on the level of variability within the populations being studied.
null
CC BY-SA 4.0
null
2023-03-09T04:18:54.427
2023-03-09T04:18:54.427
null
null
305274
null
608840
1
null
null
0
54
My approach is given below. > Let $X_1, \ldots, X_n \stackrel{\text{i.i.d}}{\sim} \text{Geometric}(\theta)$, so that $$f_{\theta}(x) = \theta(1 - \theta)^x, \hspace{10mm} x \in \mathbb{N}.$$ Find the likelihood ratio test for testing $$ \begin{cases} H_0: \log(1 - \theta) = \eta_0, \\[0.25em] H_1: \log(1 - \theta) \neq \eta_0 \end{cases} $$ I know the likelihood ratio is given by $$ \lambda_n = \frac{\sup_{\theta \in \Theta_0} L(\theta)}{\sup_{\theta \in \Theta_0 \cup \Theta_1} L(\theta)}$$ The unrestricted MLE is given by $\hat{\theta} = \frac{1}{1 + \bar{X_n}}$, so the likelihood ratio is given by $$\lambda_n = \frac{(1 - e^{\eta_0})^n \exp\left\{\eta_0\ \sum_{i=1}^n x_i\right\}}{\hat{\theta}^n \exp\left\{\log(1 - \hat{\theta}) \sum_{i=1}^nx_i\right\}}$$ I want to reject the null when $\lambda_n < c$ for some $c$, but the problem I'm facing is that I'm unable to get the LRT into a form such that $\lambda_n < c \iff T(x) \in \mathcal{R}$ for some statistic $T(x)$. I think I should be able to bring the LRT into a form where it is a "clean" function of $\bar{X_n}$, but I am unable to do the manipulation. Please help me out with this!
Find the likelihood ratio test to test $H_0$:$\log(1-\theta)=\eta_0$ against $H_1$:$\log(1-\theta)\neq\eta_0$
CC BY-SA 4.0
null
2023-03-09T04:49:12.183
2023-03-09T05:04:10.073
2023-03-09T05:04:10.073
362671
264629
[ "hypothesis-testing", "mathematical-statistics", "inference", "likelihood-ratio", "geometric-distribution" ]
608841
1
null
null
2
32
I'm planning out a project that involves [lattices](https://en.wikipedia.org/wiki/Lattice_(order)) in the [order theory](https://en.wikipedia.org/wiki/Order_theory) sense of "lattice". I am assuming the number of vertices is known ahead of time. Unfortunately I have only found resources related to [lattices](https://en.wikipedia.org/wiki/Lattice_model_(physics)) in a crystal or (undirected) graph embedding sense. As such it would not be difficult to sample uniformly from Bernoulli variables indicating whether an edge exists and then reject those that do not match the structure of a lattice. This seems inefficient, and possibly non-uniform. Even more non-uniform but possibly more efficient would be to incrementally sample edges to build out the lattice. I was reading in [Deleu et al. 2022](https://arxiv.org/abs/2202.13903) that some improvements have been made in sampling directed acyclic graphs due to a general interest in structural learning. But this only gets as far as directed acyclic graphs in its current state. Maybe with considerably more work I could figure out how their method works and adapt it to lattices, but I am hoping something already exists for my use case. For a really small number of vertices I could generate all of them and explicitly assign probabilities to them. While lattices are a subset of partial orders, [OEIS A001035](https://oeis.org/A001035) is suggestive that this approach will fail for even modest problems. Rather, I need something that generates the lattice as part of the sampling. Are there algorithms for uniformly-sampling lattices given that the number of vertices is known?
Is there an algorithm to uniformly sample (order) lattices without generating all of them?
CC BY-SA 4.0
null
2023-03-09T05:03:29.003
2023-03-09T05:42:34.060
2023-03-09T05:42:34.060
69508
69508
[ "sampling", "algorithms" ]
608843
2
null
558656
1
null
There are conformal prediction approaches that explicitly consider time series analysis. Here is an article that discusses this: [Conformal prediction for time series](https://arxiv.org/abs/2010.09107). The R [caretForecast](https://cran.r-project.org/web/packages/caretForecast/index.html) and the Python [MAPIE](https://mapie.readthedocs.io/en/latest/index.html) packages can implement such methods.
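For intuition, the split-conformal recipe underlying such packages can be sketched in a few lines of base R. This is the plain i.i.d. version (the time-series methods in the paper above adapt the calibration step to handle dependence); all data here are simulated:

```r
set.seed(42)
n <- 1000
x <- runif(n, 0, 10)
y <- 2 * x + rnorm(n)                         # true relationship plus noise

# Split: fit on one half, calibrate on the other
idx    <- sample(n, n / 2)
fit    <- lm(y ~ x, data = data.frame(x = x[idx], y = y[idx]))
calib  <- data.frame(x = x[-idx], y = y[-idx])
scores <- abs(calib$y - predict(fit, calib))  # conformity scores

# 90% prediction interval: widen the point prediction by the
# (finite-sample corrected) 0.9 quantile of the calibration scores
alpha <- 0.1
q     <- quantile(scores, (1 - alpha) * (1 + 1 / length(scores)))
pred  <- predict(fit, data.frame(x = 5))
c(lower = pred - q, upper = pred + q)
```

The coverage guarantee comes from exchangeability of the calibration and test residuals, which is exactly the assumption the time-series variants work to relax.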
null
CC BY-SA 4.0
null
2023-03-09T07:13:49.153
2023-03-09T07:13:49.153
null
null
81392
null
608844
1
null
null
2
119
I have recently started learning about conformal prediction. I am a programmer without a strong mathematical background, but with a strong intuitive, applied background in statistics. I am trying to understand the intuition underlying conformal prediction and it is difficult to find good sources on this. Even the classic "[A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification](https://arxiv.org/abs/2107.07511)" quickly gets too mathematical for me to intuitively follow. Could someone please give an intuitive explanation of the key concepts underlying conformal prediction with minimal mathematical details?
Intuitive explanation of conformal prediction
CC BY-SA 4.0
null
2023-03-09T07:18:51.090
2023-06-03T17:41:36.320
null
null
81392
[ "intuition", "conformal-prediction" ]
608846
1
608933
null
2
116
As a graduate student, I have always used tools that calculate the p-value for me, and I kind of understand what it means. If the p-value is 0.05, there is only a 5% chance that something happens naturally. To me, probability only makes sense when there is a whole population or something that goes in the "denominator". For example, the probability of getting a 2 from a 6-sided die is 1/"6" since there are "6" options. Here, the p-value is also 1/6. I get this. However, I've seen so many p-values in analyses without a population. For example, when you want to know the correlation between English scores and math scores in your class, a tool calculates a p-value here as well, along with the correlation value (r value). What does it mean? What I thought was that one would calculate the r value of every class in the world, see how high the r value of my class is, and determine the p-value based on that. But the tool certainly does not know the English and math scores of all students in the world. The problem is not limited to correlation. When the "denominator" is unknown, a tool still calculates a p-value from the given sample. How is this possible? How can I understand the p-value?
How can I understand p-value?
CC BY-SA 4.0
null
2023-03-09T07:20:46.597
2023-03-14T07:08:45.850
2023-03-09T07:21:16.307
378844
378844
[ "p-value" ]
608848
1
null
null
0
15
[](https://i.stack.imgur.com/mEelj.png) Can someone please walk me through this? I'm really confused; I've tried rereading it, but I can't seem to grasp the intuition behind it. Thanks!
Backpropagation with Softmax and Log-Likelihood Cost from Nielsen book, Chapter 3
CC BY-SA 4.0
null
2023-03-09T07:40:11.813
2023-03-09T07:40:11.813
null
null
382749
[ "machine-learning", "neural-networks", "backpropagation" ]
608849
2
null
608846
0
null
For the Pearson correlation, the null hypothesis is that the correlation coefficient is zero. The p-value is the probability of obtaining a sample statistic at least as extreme as the one calculated, assuming the null hypothesis is true. In other words, it shows how well your sample aligns with the null hypothesis according to the chosen criterion. It is not any kind of quantitative measure of the whole population.
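To connect this to the English/math example from the question: `cor.test()` needs no population "denominator" because the p-value comes from the sampling distribution of $r$ under the null, a $t$ distribution with $n-2$ degrees of freedom. A quick sketch with simulated scores:

```r
set.seed(7)
n <- 30
english <- rnorm(n, mean = 70, sd = 10)
math    <- english + rnorm(n, sd = 12)   # induce a positive correlation

ct <- cor.test(english, math)            # Pearson by default
r  <- unname(ct$estimate)                # sample correlation

# Reproduce the p-value by hand from r and n alone
t_stat <- r * sqrt(n - 2) / sqrt(1 - r^2)
p_hand <- 2 * pt(-abs(t_stat), df = n - 2)

c(p_from_tool = ct$p.value, p_by_hand = p_hand)  # identical
```

The tool only ever sees the sample; the "denominator" is supplied by the theoretical reference distribution, not by data on the whole population.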
null
CC BY-SA 4.0
null
2023-03-09T07:53:50.077
2023-03-14T07:08:45.850
2023-03-14T07:08:45.850
361202
361202
null
608850
1
608853
null
0
44
I have longitudinal data for several users, so we have a case of repeated measures. The plan is to apply some classification or regression model. While there are other models suited to this, like MLMs and GEEs, I'm interested in using predictive ML methods like the SVM, but the within-subject correlations must be taken care of. It seems there isn't a way to explicitly tell the SVM that a set of measurements belongs to a single user, other than ensuring the same user does not appear in more than one of the training/test/validation sets. Is this correct?
Machine learning models with repeated measures
CC BY-SA 4.0
null
2023-03-09T08:03:24.113
2023-03-09T08:49:02.280
null
null
212311
[ "machine-learning" ]
608851
2
null
608591
2
null
Without having looked in detail, the fact that you can perfectly predict the response suggests you are seeing a complete separation problem: one or more terms in the model allows you to perfectly separate the data into 0s and 1s. At that point the likelihood function will be flat (so there is no well-defined maximum), and despite the model being perfect, the statistical quantities we would compute for the model break down, as you are seeing. The standard errors are huge because they are based on the curvature of the likelihood function at the maximum, which is essentially 0 as the function is flat, rendering you unable to reject the null hypothesis in each of the Wald-like tests in the summary output.
null
CC BY-SA 4.0
null
2023-03-09T08:03:37.623
2023-03-09T08:03:37.623
null
null
1390
null
608852
2
null
508086
3
null
Disclaimer: I just glanced at the linked articles. My answer focuses solely on Gaussian processes per se.

## Your example

If you understand your example $f(x)=x^2+Y$ as a random function of $x$, then it is a Gaussian process if and only if $Y$ is a Gaussian random variable. If it has any other distribution, it is still a stochastic process but not a Gaussian one. But no matter what kind of process, you can determine the mean and covariance function. The mean function is $m(x) = x^2 + \mathbb{E}[Y]$ and the covariance function $C$ is $$C(x_1,x_2)=\text{Cov}(f(x_1), f(x_2))= \text{Cov}(x_1^2 + Y, x_2^2 + Y)=\text{Cov}(Y,Y)=\text{Var}[Y].$$ This is because $x_1$ and $x_2$ are non-random, which means the respective covariance terms are zero. This particular covariance function does not depend on $x$, i.e. it is constant. This reflects the fact that your set of random functions is the parabola $x^2$ which is just randomly shifted up and down as a whole by the values of $Y.$

## Parametric families

If you can express your parametric family as a span of basis functions, as is the case for example for polynomials, you can easily turn them into a Gaussian process. Say your family consists of functions of the form $f(x)=\sum \alpha_i \phi_i(x)$ for a set of basis functions $\phi_i$; then you can turn those into Gaussian random functions simply by turning the coefficients $\alpha_i$ into Gaussian random variables $Y_i$, arriving at the Gaussian process $$ G(x) = \sum_i \phi_i(x) Y_i.$$ This works only because linear combinations of Gaussian variables are again Gaussian variables, and it explains what is so special about the "Gaussian" in Gaussian processes. For other distributions, including discrete ones, this is no longer true. The idea is straightforward for finite-dimensional spaces, but with the proper technical assumptions it is even possible for infinite-dimensional spaces.
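This construction is easy to check numerically. The sketch below (an illustration, assuming the basis $\{1, x, x^2\}$ and i.i.d. standard normal coefficients $Y_i$) compares the empirical covariance of $G(x_1)$ and $G(x_2)$ with the analytic value $\sum_i \phi_i(x_1)\phi_i(x_2)$:

```python
import random

random.seed(0)

# Basis functions phi_i for the illustration.
phis = [lambda x: 1.0, lambda x: x, lambda x: x * x]

def draw(xs):
    """One realization of G evaluated at each x in xs (shared coefficients Y_i)."""
    ys = [random.gauss(0.0, 1.0) for _ in phis]
    return [sum(y * phi(x) for y, phi in zip(ys, phis)) for x in xs]

x1, x2 = 0.5, -1.0
n = 20000
draws = [draw([x1, x2]) for _ in range(n)]
g1 = [d[0] for d in draws]
g2 = [d[1] for d in draws]

m1, m2 = sum(g1) / n, sum(g2) / n
emp_cov = sum((a - m1) * (b - m2) for a, b in zip(g1, g2)) / n

# For independent standard normal Y_i: Cov(G(x1), G(x2)) = sum_i phi_i(x1) phi_i(x2).
analytic = sum(phi(x1) * phi(x2) for phi in phis)
print(emp_cov, analytic)
```

The two numbers agree up to Monte Carlo error, illustrating that the covariance function of such a process is fully determined by the basis.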
## Final remark If you want to discuss how functions look or what they do "on average", you need a way to express probabilities over functions. As demonstrated above, Gaussian processes provide an easy and flexible way to turn any space of functions into a space of random functions. The source and purpose of those functions, i.e. whether they are regression functions or loss functions or time series of beaver dams built, does not matter in the least.
null
CC BY-SA 4.0
null
2023-03-09T08:26:38.247
2023-03-09T08:33:13.643
2023-03-09T08:33:13.643
8298
8298
null
608853
2
null
608850
1
null
There are quite a few other options besides the ones you mention. I tried to roughly classify the options I'm aware of (I'm sure there are other things one can do):

- Feature representations for IDs: embeddings, or encodings based on an individual's features (e.g. target encoding based on personal history, etc.). A lot of these come down to training some explicit (or implicit) model and then using its feature representation as an input to another model. The first model could be a neural network, autoencoder, UMAP or just a GLMM/MMRM. You can even do this in a proper statistical inference setting if you bootstrap the whole process, but usually this is done for prediction purposes.
- Model-internal representations for individuals: an embedding layer for individuals, which has in many Kaggle competitions been a key to winning (or getting close to it), like the famous Rossmann Store Sales example that popularized embedding layers for high-cardinality categorical features; it is also enormously popular for recommender systems. Of course, random effects are a bit like 1-D embeddings. There are various proposals to modify existing algorithms like random forest using random effects (see e.g. this old question and this one and this one). What works for random forest (where trees get averaged) may not work the same way with XGBoost/LightGBM/etc., because trees get added together in a weighted sum. For SVMs, there seems to be some work to explicitly incorporate things like that using Fisher kernels.
- Reflecting it in what the model predicts: either a time series of data for an individual, or all the data for the individual at once (mostly something neural networks can be made to do; tree-based models tend not to be so great with multivariate output). If data are partially missing, don't incur loss for those data points no matter what the model predicts.
- Reflecting individuals in how the models are trained: as you already mentioned, splitting for validation/testing in the way that you will really make predictions in practice (i.e. if you want to predict for new individuals, then you should not have data from the same individuals in more than one of the training, validation, or test sets). E.g. for random forest/boosted trees, where you bootstrap/subsample data, you could only ever subsample whole individuals (not implemented in the major libraries).

The question is of course how much difference it will make. As far as I am aware, the answer is that it depends on what you are trying to do and the specifics of the situation. Explicitly reflecting correlation in the model, or in how the model outputs things, will definitely matter a lot for inference (e.g. getting confidence intervals with good operating characteristics), but may not always matter for making good point predictions. However, we also know that things like good representations for high-cardinality features (such as embeddings of individual ID) can help a lot for making predictions better, and one can probably find circumstances where even with low cardinality it matters a lot. The one good thing is that with an appropriate training/validation/test splitting setup, one can evaluate how different approaches fare.
null
CC BY-SA 4.0
null
2023-03-09T08:49:02.280
2023-03-09T08:49:02.280
null
null
86652
null
608854
2
null
608759
0
null
RE models rely on both within- and between-group variation, while FE models rely only on within-group variation. However, this is only valid if the explanatory variable is independent of the group-specific effects; if this is not the case, the RE estimates are biased.
null
CC BY-SA 4.0
null
2023-03-09T08:57:03.443
2023-03-09T08:57:03.443
null
null
305206
null
608856
2
null
567987
0
null
I think this is the basic asymmetric TSP (travelling salesman problem). You can try the LKH-3 algorithm ([http://webhotel4.ruc.dk/~keld/research/LKH-3/](http://webhotel4.ruc.dk/%7Ekeld/research/LKH-3/))
null
CC BY-SA 4.0
null
2023-03-09T10:04:08.187
2023-03-09T10:04:08.187
null
null
382757
null
608857
1
null
null
2
101
Assume a set of variables $x_1,...,x_p$ and their variable importance indices given by two different predictive models. Regardless of the importance metric itself (as long as we use the same metric for both models), I want to look at the ranking of their importance: which variable is the most important for model A, second most important for A, which is the most important for model B and so on. So now for each model I have a different permutation of the rank vector $(1,...,p)$, denoted $r^A,r^B$ and I want to test whether these rankings are correlated. My null hypothesis would be something like "the vectors $r^A,r^B$ are not correlated" (I relate here only to positive correlation). The proper way of handling rank vectors (all distinct integers, no ties) is probably using Spearman's correlation coefficient: $$r_s=1-\frac{6\sum_i{d_i^2}}{p^3-p},\quad d_i=r_i^A-r_i^B$$ So a convenient formulation for the null hypothesis would be something like $H_0:r_s\le 0.4$. But, we don't really have a convenient distribution for Spearman's. So the way to go is transformations. I've got two possible transformations: - Kruskal's transformation from Spearman's $r_s$ to Pearson's $\rho$: $$\rho=2\sin\left(\frac{\pi}{6}r_s\right)$$ - Fisher's $z$-transformation from Pearson's $\rho$ to normal: $$z'=\tanh^{-1}(\rho),\quad z'\sim\mathcal{N}\left(0,\frac{1}{p-3}\right)$$ I've seen some examples where the $z$ transformation is applied directly on $r_s$, and I'm not feeling like it's the right thing to do. However, Kruskal himself (and citing papers) require a bivariate normal distribution, which I'm not sure I can fulfill. Any thoughts?
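For what it's worth, the chain of computations described above ($r_s$, then the Kruskal transform, then Fisher's $z$) is easy to write down end to end. A sketch in Python (the rank vectors, the null value $0.4$, and $p=10$ are made-up illustration choices, not a recommendation):

```python
import math

def spearman_rs(rank_a, rank_b):
    # Spearman's coefficient from the closed form for untied ranks.
    p = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (p ** 3 - p)

# Illustrative rank vectors for p = 10 variables (no ties).
rA = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
rB = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]

p = len(rA)
rs = spearman_rs(rA, rB)

# Kruskal: Spearman's r_s -> Pearson's rho (assumes bivariate normality).
rho = 2 * math.sin(math.pi / 6 * rs)
rho0 = 2 * math.sin(math.pi / 6 * 0.4)  # transformed null value

# Fisher z-transform of both, then an approximate N(0, 1) statistic.
z_stat = (math.atanh(rho) - math.atanh(rho0)) * math.sqrt(p - 3)
print(rs, rho, z_stat)
```

Whether the bivariate-normality assumption behind Kruskal's transform is defensible for importance ranks is exactly the open question in the post; the code only makes the mechanics explicit.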
Inference for Spearman's Correlation
CC BY-SA 4.0
null
2023-03-09T10:10:39.760
2023-03-12T06:04:24.980
null
null
144600
[ "inference", "spearman-rho", "importance", "ranks" ]
608860
1
null
null
1
46
Assume two paired binary vectors, $v^A,v^B$. We can easily construct a contingency table:

| | $v^A_i=1$ | $v^A_i=0$ | Totals |
|---|---|---|---|
| $v^B_i=1$ | $a$ | $b$ | $a+b$ |
| $v^B_i=0$ | $c$ | $d$ | $c+d$ |
| Totals | $a+c$ | $b+d$ | $a+b+c+d=p$ |

Usually we infer using the $\chi^2$ statistic $$\chi^2=\sum_{\text{cells}}\frac{(\text{observed}-\text{expected})^2}{\text{expected}}$$ or using [McNemar's test](https://en.wikipedia.org/wiki/McNemar%27s_test), where: $$\frac{(b-c)^2}{b+c}\sim\chi^2_1.$$ However, the book [Applied Multivariate Statistical Analysis (Johnson & Wichern, 2013)](https://rads.stackoverflow.com/amzn/click/com/0131877151) (p. 675) offers some other summary statistics derived from the contingency table:

> Table 12.1 Similarity Coefficients for Clustering Items*
>
> | | Coefficient | Rationale |
> |---|---|---|
> | 1. | $\frac{a+d}{p}$ | Equal weights for 1-1 matches and 0-0 matches. |
> | 2. | $\frac{2(a+d)}{2(a+d)+b+c}$ | Double weight for 1-1 matches and 0-0 matches. |
> | 3. | $\frac{a+d}{a+d+2(b+c)}$ | Double weight for unmatched pairs. |
> | 4. | $\frac{a}{p}$ | No 0-0 matches in numerator. |
> | 5. | $\frac{a}{a+b+c}$ | No 0-0 matches in numerator or denominator. (The 0-0 matches are treated as irrelevant.) |
> | 6. | $\frac{2a}{2a+b+c}$ | No 0-0 matches in numerator or denominator. Double weight for 1-1 matches. |
> | 7. | $\frac{a}{a+2(b+c)}$ | No 0-0 matches in numerator or denominator. Double weight for unmatched pairs. |
> | 8. | $\frac{a}{b+c}$ | Ratio of matches to mismatches with 0-0 matches excluded. |
>
> *[$p$ binary variables; see (12-7).]

My null hypothesis is $H_0:v^A,v^B \text{ are independent}$. Assuming that I want to use one of these measures (each fits a somewhat different alternative), what can I assume regarding their distribution? It's clear this ain't $\chi^2$, but what is it? Another possible difficulty might arise from the fact that we don't have the "real" binary vectors $v^A,v^B$ but rather their estimates $\hat{v}^A,\hat{v}^B$. What effect does it have, if any?

---

Just for comparison with Table 12.1 reproduced above, here is an image of the table as it appears originally in the book: [](https://i.stack.imgur.com/izm1T.png)
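To make the table concrete: all eight coefficients are simple functions of the four cell counts. A sketch computing them, and McNemar's statistic, from two binary vectors (the vectors themselves are made-up illustration data):

```python
def cell_counts(vA, vB):
    # a, b, c, d as laid out in the contingency table above
    # (rows indexed by vB, columns by vA).
    a = sum(1 for x, y in zip(vA, vB) if x == 1 and y == 1)
    b = sum(1 for x, y in zip(vA, vB) if x == 0 and y == 1)
    c = sum(1 for x, y in zip(vA, vB) if x == 1 and y == 0)
    d = sum(1 for x, y in zip(vA, vB) if x == 0 and y == 0)
    return a, b, c, d

def similarity_coefficients(vA, vB):
    # The eight coefficients of Table 12.1, keyed by row number.
    a, b, c, d = cell_counts(vA, vB)
    p = a + b + c + d
    return {
        1: (a + d) / p,
        2: 2 * (a + d) / (2 * (a + d) + b + c),
        3: (a + d) / (a + d + 2 * (b + c)),
        4: a / p,
        5: a / (a + b + c),
        6: 2 * a / (2 * a + b + c),
        7: a / (a + 2 * (b + c)),
        8: a / (b + c),  # undefined when b + c == 0
    }

def mcnemar(vA, vB):
    _, b, c, _ = cell_counts(vA, vB)
    return (b - c) ** 2 / (b + c)  # approx. chi-squared with 1 df

vA = [1, 1, 1, 0, 0, 1, 0, 1]
vB = [1, 0, 1, 0, 1, 1, 0, 0]
coefs = similarity_coefficients(vA, vB)
print(coefs, mcnemar(vA, vB))
```

For this toy example $a=3, b=1, c=2, d=2$, so coefficient 1 is $5/8$; the open question about the sampling distribution of these coefficients is of course untouched by the computation.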
Inference for Contingency tables
CC BY-SA 4.0
null
2023-03-09T10:31:25.290
2023-03-12T06:03:48.053
2023-03-12T06:03:48.053
144600
144600
[ "inference", "contingency-tables" ]
608861
1
608882
null
0
87
I built a logistic regression including the following variables, and I tried to compare the effects of temp among group and habitat using `emtrends()` from the `emmeans` R package. I have 3 questions about the following results. First, it feels like there may be a type I error in the results. How can I figure out whether there is a type I error or not? Second, I wonder why df is Inf in my results of `emtrends()`. Third, I ran `emtrends()` separately for the group variable and for the habitat variable. My model included the interaction term between temp and habitat but not a habitat main effect. Then, should I run it like this: `emtrends(fit_1, ~habitat|group, var = "temp")`? ``` > str(dataset) 'data.frame': 1440 obs. of 6 variables: $ habitat : Factor w/ 4 levels "A","B","C",..: 2 2 3 2 1 1 1 1 1 1 ... $ group : Factor w/ 4 levels "red","skyblue","indigo",..: 3 4 4 4 3 4 4 4 1 1 ... $ temp : num 15.5 13.9 13.9 15.4 15.5 ... $ event : Factor w/ 2 levels "0","1": 1 1 1 1 2 1 1 1 1 1 ... $ weight : num 3 3 2.73 3 10 ... $ rain : Factor w/ 2 levels "0","1": 2 2 2 2 2 2 2 2 2 2 ... > fit_1 <- glm(event ~ group*temp + rain + habitat:temp + rain:temp , weights = weight,family=binomial, data=dataset) > summary(fit_1) Call: glm(formula = event ~ group * temp + rain + habitat:temp + rain:temp, family = binomial, data = dataset, weights = weight) Deviance Residuals: Min 1Q Median 3Q Max -3.6849 -1.7761 -1.1416 -0.2509 8.5246 Coefficients: Estimate Std.
Error z value Pr(>|z|) (Intercept) -2.51354 0.63168 -3.979 6.92e-05 *** groupskyblue 2.06381 0.85254 2.421 0.01549 * groupindigo 8.48504 0.97759 8.680 < 2e-16 *** groupblack 4.09039 0.83798 4.881 1.05e-06 *** temp -0.06882 0.05273 -1.305 0.19187 rain1 2.25311 0.89997 2.504 0.01230 * groupskyblue:temp -0.24329 0.06062 -4.013 5.99e-05 *** groupindigo:temp -0.59809 0.06905 -8.662 < 2e-16 *** groupblack:temp -0.35243 0.06355 -5.546 2.92e-08 *** temp:habitatB 0.20523 0.01928 10.644 < 2e-16 *** temp:habitatC 0.20808 0.01925 10.807 < 2e-16 *** temp:habitatD 0.17938 0.01955 9.173 < 2e-16 *** temp:rain1 -0.17861 0.06167 -2.896 0.00378 ** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 9483.6 on 1439 degrees of freedom Residual deviance: 8604.1 on 1427 degrees of freedom AIC: 8630.1 Number of Fisher Scoring iterations: 6 > emt_fit_group <- emtrends(fit_1, ~group, var = "temp") > summary(emt_fit_group) group temp.trend SE df asymp.LCL asymp.UCL red 0.0903 0.0866 Inf -0.0794 0.2601 skyblue -0.2686 0.0719 Inf -0.4096 -0.1276 indigo -0.5297 0.0816 Inf -0.6897 -0.3698 black -0.2192 0.0803 Inf -0.3765 -0.0619 Results are averaged over the levels of: rain, habitat Confidence level used: 0.95 > emt_fit_habitat <- emtrends(fit_1, ~habitat, var = "temp") > summary(emt_fit_habitat) habitat temp.trend SE df asymp.LCL asymp.UCL A -0.380 0.0547 Inf -0.487 -0.2727 B -0.175 0.0534 Inf -0.279 -0.0701 C -0.172 0.0534 Inf -0.277 -0.0672 D -0.201 0.0534 Inf -0.305 -0.0960 Results are averaged over the levels of: group, rain Confidence level used: 0.95 ```
post-hoc analysis for interaction terms in logistic regression: emtrends(), type I error
CC BY-SA 4.0
null
2023-03-09T10:32:23.733
2023-03-10T06:07:54.190
2023-03-10T06:07:54.190
382759
382759
[ "r", "logistic", "interaction", "lsmeans", "type-i-and-ii-errors" ]
608862
1
null
null
1
34
What is not right with a model that produces this kind of residual plot? Does it have to be discarded? My data is egg production (counts, but cumulative) over a period of 55 days for 7 treatments + control, with 10 replicates each. There are repeated measurements of the same individuals over several days, and this is accounted for in the random factors, using a GLMM with the family set to a Poisson distribution. The dependent variable is discrete, and the three fixed factors are categorical with 2 levels: ``` m1 <- glmer(n_cumulative ~ Pyrene*Temp*pH + (Days|ID), family = poisson, data = egg_production) ``` [](https://i.stack.imgur.com/ZlTwe.png)
Zigzag residual plot
CC BY-SA 4.0
null
2023-03-09T10:40:54.650
2023-03-09T10:46:32.300
2023-03-09T10:46:32.300
380763
380763
[ "lme4-nlme", "residuals", "glmm" ]
608863
1
null
null
1
37
I have time series data for 50k customers. I want to forecast at the customer level, but the problem is that it is not feasible to train an ARIMA/ARIMAX model for each customer. Can I train a general time series model on the population that has an effect term for each customer? Or is it somehow computationally possible to have a time series model for each customer?
Build time series models for 50k customers?
CC BY-SA 4.0
null
2023-03-09T11:05:48.060
2023-03-09T11:05:48.060
null
null
382761
[ "machine-learning", "time-series", "forecasting", "arima" ]
608864
1
null
null
0
21
I have a data set with journey-based data from buses: how long it took to travel to a given bus stop (starting from stop 4). I have multiple such journeys recorded, and they are kept in a data frame. In the example below, the columns indicate the stop order and the values are the time it took to travel to that stop. My real data contain several hundred journeys over the span of multiple days. I will use some type of regression model later on, but for now I'm in the stage of deciding my features.

```
j1 <- c(100, 74, 70, 88, 104, 177, 88, 189, 75, 58, 105, 171, 29, 60, 71, 37, 93)
j2 <- c(99, 206, 74, 82, 69, 67, 102, 161, 60, 92, 62, 104, 34, 108, 53, 50, 80)
j3 <- c(70, 77, 76, 105, 115, 78, 139, 160, 52, 97, 81, 206, 33, 88, 49, 44, 89)
dfj <- data.frame(j1, j2, j3)
dfj <- data.frame(t(dfj))
colnames(dfj) <- seq(4, 20)
```

I want to test the assumption that the time it takes to travel to the next stop is in some way influenced by the time it took to travel to the previous stop. To test this assumption, I was planning to make use of the correlation matrix `cor(dfj)`. But as I read here [What's the purpose of autocorrelation?](https://stats.stackexchange.com/questions/427418/whats-the-purpose-of-autocorrelation) > But since everyone's rate of consumption is different, the autocorrelation at the aggregate is so much attenuated that it may not make sense to model it any more. I was a bit hesitant about how to proceed. What would be a sound strategy to test the assumption in my case?
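As a first look at that assumption, one could compute the lag-1 correlation between consecutive segment times, pooled across journeys. A sketch using the toy data above (in Python rather than R, purely to illustrate the computation, not as a substitute for a proper autocorrelation analysis):

```python
# Toy journey data from the question: each row is a journey, each value the
# time taken to travel to stops 4..20.
j1 = [100, 74, 70, 88, 104, 177, 88, 189, 75, 58, 105, 171, 29, 60, 71, 37, 93]
j2 = [99, 206, 74, 82, 69, 67, 102, 161, 60, 92, 62, 104, 34, 108, 53, 50, 80]
j3 = [70, 77, 76, 105, 115, 78, 139, 160, 52, 97, 81, 206, 33, 88, 49, 44, 89]
journeys = [j1, j2, j3]

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Pool (previous-stop time, current-stop time) pairs across all journeys,
# never pairing the last stop of one journey with the first of the next.
prev = [t for j in journeys for t in j[:-1]]
curr = [t for j in journeys for t in j[1:]]
lag1_r = pearson_r(prev, curr)
print(lag1_r)
```

Pooling within journeys keeps the pairs meaningful, but note the concern quoted above still applies: heterogeneity across journeys (time of day, driver, traffic) can attenuate or inflate this pooled estimate.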
Autocorrelation of journeys along a route
CC BY-SA 4.0
null
2023-03-09T11:13:05.583
2023-03-09T15:40:32.170
2023-03-09T15:40:32.170
11887
320876
[ "autocorrelation", "regression-strategies" ]
608865
2
null
591547
2
null
I have found this video, which was very helpful: [https://www.youtube.com/watch?v=zftIxv532hE](https://www.youtube.com/watch?v=zftIxv532hE)
null
CC BY-SA 4.0
null
2023-03-09T11:20:51.047
2023-03-09T11:40:44.973
2023-03-09T11:40:44.973
382768
382768
null
608866
1
null
null
1
45
In section 15.5 of the book 'Machine Learning: A Probabilistic Perspective' by Kevin P. Murphy, it discusses the Gaussian Process Latent Variable Model. The log-likelihood objective function is given by $$l = -\frac{D}{2}\ln|K| - \frac{1}{2}\text{tr}(K^{-1}YY^{T})\tag 1$$ where $K = ZZ^T + \beta^{-1}I$, and the gradient with regard to $Z$ is given by: $$\frac{\partial l}{\partial \mathbf{Z}_{ij}} = \frac{\partial l}{\partial \mathbf{K}}\frac{\partial \mathbf{K}}{\partial \mathbf{Z}_{ij}}\tag 2$$ and $$\frac{\partial l}{\partial K} = K^{-1}YY^TK^{-1} - DK^{-1}\tag 3$$ (I think the author omits the factor $\frac{1}{2}$ here). Anyway, I can get equation (3) by the rules in the [MatrixCookbook](https://www.math.uwaterloo.ca/%7Ehwolkowi/matrixcookbook.pdf). The author then says we can have $$\frac{\partial K}{\partial Z} = Z$$ (I think the author omits a factor of 2 here). Finally, we get $$\frac{\partial l}{\partial Z} = K^{-1}YY^TK^{-1}Z - DK^{-1}Z\tag 4$$ The result matches the result in [Lawrence 2005](https://jmlr.csail.mit.edu/papers/volume6/lawrence05a/lawrence05a.pdf), and there is a similar derivation on this site, [see the answer](https://stats.stackexchange.com/questions/286351/gaussian-process-latent-variable-model-optimisation). It seems that the chain rule ($$\frac{\partial l}{\partial \mathbf{Z}} = \frac{\partial l}{\partial \mathbf{K}}\frac{\partial \mathbf{K}}{\partial \mathbf{Z}}\tag 5$$) works here. But as far as I know, only $$\frac{\partial \text{Tr}[K]}{\partial Z}= 2Z\tag 6$$ and $$\frac{\partial l}{\partial \mathbf{Z}_{ij}} = \text{Tr}\left[\left(\frac{\partial l}{\partial \mathbf{K}}\right)^T\frac{\partial \mathbf{K}}{\partial \mathbf{Z}_{ij}}\right]\tag 7$$ hold. I assume the author omits the 'Tr', but how can I get (4) by using (6) and (7)? And are there any connections between (5) and (7)? As far as I can tell, equation (5) is invalid because the matrix shapes do not match for multiplication.
The confusing derivation in the book 'Machine Learning: A Probabilistic Perspective' by Kevin P. Murphy
CC BY-SA 4.0
null
2023-03-09T11:21:03.320
2023-03-11T04:08:58.290
2023-03-11T04:08:58.290
382696
382696
[ "machine-learning", "maximum-likelihood", "optimization", "derivative" ]
608869
1
null
null
1
9
n = 164 where Group 1 = 58, Group 2 =50, Group 3 = 38 and Group 4 = 18. The expected frequencies are 25% (41). - Would it be appropriate to calculate an Odds Ratio if I wanted to compare Group 1 v Group 4? (58 x 41)/(41 x 18) = 3.22 - Would this suggest it is 3.22 times more likely to belong to Group 1 versus Group 4?
Can Odds Ratio be used for a 2x2 table of observed and expected frequencies?
CC BY-SA 4.0
null
2023-03-09T11:52:52.300
2023-03-09T11:52:52.300
null
null
381147
[ "odds-ratio" ]
608870
2
null
44228
0
null
This sounds like a multi-label problem, which is somewhat different from a multi-class problem. A multi-class problem uses the various features to estimate the probabilities of multiple events, exactly one of which will happen: the subject who is $28$ years old, who lives in New York, and who was exposed to asbestos has a probability of being healthy of $0.8$, a probability of skin cancer of $0.1$, and a probability of psoriasis of $0.1$; if you have more than just those two diseases, include them all to get probabilities of each disease. Then, of all the options (healthy or any one of the diseases), exactly one will happen. The likely outcome here is that the subject will be healthy, but the subject might also be upset to know of a decent probability of developing some nasty diseases, even if they’re not among the most likely outcomes. (Would you do something if it had a $30\%$ chance of causing you an agonizing death? Why not? You’re probably going to be fine.) Multinomial logistic regression is the starting point for this kind of prediction. Such a model gives the probability of each individual disease and the probability of being healthy, the sum of which is one. However, multiple diseases can happen. Someone can have skin cancer and epilepsy. Someone can have depression and diabetes. Someone can have Covid and HIV and liver cancer. Modeling (the probabilities of) categorical outcomes, multiple of which can occur, is a multi-label problem. The idea is to model the individual probabilities of binary events (cancer vs no cancer, depression vs no depression, Covid vs no Covid, etc.), which might be independent but do not have to be.
My answer [here](https://stats.stackexchange.com/a/605387/247274) gets more into the difference between multi-label and multi-class problems, and my question [here](https://stats.stackexchange.com/q/586201/247274) has multiple nice answers that discuss the underlying statistical model of a multi-label classification, with the comment about an Ising distribution being particularly helpful (even if my main question about the prior probability remains unanswered).
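The core numerical difference can be shown in a few lines: a multi-class model pushes scores through a softmax, so the probabilities compete and sum to one, while a multi-label model applies an independent sigmoid per label, so several labels can be likely at once. A sketch with made-up scores (the score values are arbitrary illustration inputs):

```python
import math

scores = {"healthy": 2.0, "skin cancer": 1.5, "depression": 1.2}

# Multi-class: softmax -> probabilities of mutually exclusive outcomes.
z = sum(math.exp(s) for s in scores.values())
multi_class = {k: math.exp(s) / z for k, s in scores.items()}

# Multi-label: one sigmoid per label -> per-label probabilities.
multi_label = {k: 1 / (1 + math.exp(-s)) for k, s in scores.items()}

print(multi_class)  # sums to 1: exactly one outcome happens
print(multi_label)  # need not sum to 1: several labels can co-occur
```

With these scores, every sigmoid probability exceeds $0.7$, so the multi-label view can flag skin cancer and depression simultaneously; the softmax view would have to split the same evidence into competing shares.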
null
CC BY-SA 4.0
null
2023-03-09T11:54:57.913
2023-03-09T11:54:57.913
null
null
247274
null
608871
1
null
null
1
12
I use factor analysis on a set of 15 survey questions (Likert scales). Using the predict command (in Stata), I create 5 factors. Subsequently, I want to use cluster analysis to see if there are "groups" of people with similar scores on the factors. I find it hard to know which distance measure or linkage method to use (e.g. single linkage, Ward's, correlation-based measures). My question is: what distance measure should I use on my data?
Cluster analysis after factor analysis: What distance measure to use?
CC BY-SA 4.0
null
2023-03-09T12:13:57.940
2023-03-09T12:13:57.940
null
null
382770
[ "factor-analysis", "hierarchical-clustering" ]
608872
1
null
null
0
21
If I have an experimental group and a control group in an RCT, and I am trying to estimate the intervention effect on the participants' dependent variables while controlling for several covariates using a multiple regression model (per instructions, not by choice), should I include all cases from both the treatment and control groups in the model?
RCT Multiple regression: should I include the control group in the model?
CC BY-SA 4.0
null
2023-03-09T12:29:20.587
2023-03-09T13:49:18.833
2023-03-09T13:49:18.833
382720
382720
[ "hypothesis-testing" ]
608873
1
null
null
3
209
I am stuck on the first part of problem 8.2 of the book "A Probabilistic Theory of Pattern Recognition" by Luc Devroye: > Show that for any $s > 0$, and any random variable $X$ with $\mathbb{E}(X) = 0$, $\mathbb{E}(X^2) = \sigma^2, X \leq c$, $$ \mathbb{E}(e^{sX}) \leq e^{f(\sigma^2/c^2)}, $$ where $$ f(u) = \log \left( \frac{1}{1+u}e^{-csu} + \frac{u}{1+u}e^{cs} \right). $$ The purpose of the problem is to prove Bennett's inequality. I've searched for how Bennett's is usually proved, and it seems like the usual trick is to expand $\mathbb{E}(e^{sX})$ with the Taylor series, followed by applying an inequality on the terms in $\mathbb{E}(X^k), k \geq 3$. However, this is not what the author has in mind here, and I cannot figure out any way to invoke the term $e^{-csu}$ in any inequality.
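One hint worth checking for this style of bound: a two-point distribution attains it exactly. Take $u=\sigma^2/c^2$ and let $X$ equal $c$ with probability $u/(1+u)$ and $-cu$ with probability $1/(1+u)$; then $X$ has mean $0$, variance $\sigma^2$, and $\mathbb{E}(e^{sX})$ equal to $e^{f(u)}$, so the exercise reduces to showing this law is extremal among those with $X\le c$. A quick numeric sanity check (the values of $c$, $s$, $\sigma$ are arbitrary):

```python
import math

c, s, sigma = 2.0, 0.7, 1.3
u = sigma ** 2 / c ** 2

# Two-point distribution: P(X = c) = u/(1+u), P(X = -c*u) = 1/(1+u).
p_hi, p_lo = u / (1 + u), 1 / (1 + u)
x_hi, x_lo = c, -c * u

mean = p_hi * x_hi + p_lo * x_lo
var = p_hi * x_hi ** 2 + p_lo * x_lo ** 2  # second moment = variance (mean 0)
mgf = p_hi * math.exp(s * x_hi) + p_lo * math.exp(s * x_lo)

# Bound from the exercise: exp(f(u)) with
# f(u) = log( exp(-c*s*u)/(1+u) + u*exp(c*s)/(1+u) ).
bound = math.exp(-c * s * u) / (1 + u) + u * math.exp(c * s) / (1 + u)
print(mean, var, mgf, bound)
```

The equality `mgf == bound` (up to floating point) is what suggests bounding $e^{sx}$ on $(-\infty, c]$ by a function matched to this two-point law, rather than the Taylor-series route.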
Inequality on the moment generating function of a centered random variable which is bounded above
CC BY-SA 4.0
null
2023-03-09T12:59:45.117
2023-03-20T04:29:27.580
2023-03-20T04:29:27.580
11887
382773
[ "probability", "probability-inequalities", "moment-generating-function" ]
608874
1
null
null
0
14
I'm building several GAMs with a dataset that includes several levels of stratification (`animal id`, `animal_status`, `population_id`), and I'm having trouble understanding which model formula and covariates specifications should be used. For now, I'm not considering `population_id` in my models as it seems to make them quite complex, + the populations are distributed at a latitudinal gradient, so I can account for that using a covariate for latitude. So, in one particular case, my goal is to understand how a continuous variable (ft) changes along the year, considering `animal_status` (with 3 levels) as well. `animal_status` is not balanced, the proportion of observations are 73% (a_m), 15% (a_f), and 11.5% (fam). I used a gamma distribution as ft represents time values and it's skewed to the right. -- GAM Nº1 I first fitted a simple GAM, with only month as covariate: ``` month_ds2_ft <- gam(ft ~ s(month, bs="cc", k = 12) + s(id, bs="re"), data=ft_d, family= Gamma(link="log"), method = "REML") ``` And this is the summary output: ``` Family: Gamma Link function: log Formula: ft ~ s(month, bs = "cc", k = 12) + s(id, bs = "re") Parametric coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.90486 0.02174 41.63 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Approximate significance of smooth terms: edf Ref.df F p-value s(month) 5.821 10 6.575 2.98e-07 *** s(id) 73.508 106 3.355 < 2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 R-sq.(adj) = 0.0917 Deviance explained = 10.8% -REML = 6819.3 Scale est. = 0.32906 n = 4334 ``` And here's the visualization: [](https://i.stack.imgur.com/vcMeU.png) - All good here, I don't have questions regarding this model formula. 
-- GAM Nº 2 Then, because I want to see the variation of ft along the year by social status, I built the following model: ``` month_ds2_ft_1 <- gam(ft ~ status + s(month, bs="cc", k = 12, by=status) + s(id, bs="re"), data=ft, family= Gamma(link="log"), method = "REML") ``` And this is the summary output: ``` Family: Gamma Link function: log Formula: ft ~ status + s(month, bs = "cc", k = 12, by = status) + s(id, bs = "re") Parametric coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 1.06241 0.03991 26.620 < 2e-16 *** statusa_m -0.19080 0.04710 -4.051 5.19e-05 *** statusfam -0.18685 0.04950 -3.775 0.000162 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Approximate significance of smooth terms: edf Ref.df F p-value s(month):statusa_f 0.01507 10 0.001 0.3896 s(month):statusa_m 5.89042 10 6.601 3.38e-07 *** s(month):statusfam 2.87179 9 2.090 0.0032 ** s(id) 70.25464 105 2.718 < 2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 R-sq.(adj) = 0.0984 Deviance explained = 11.6% -REML = 6801.4 Scale est. = 0.32665 n = 4334 ``` Here's the visualization: [](https://i.stack.imgur.com/IcG0d.png) - All good here, the outcome makes sense (even though a_f doesn't look the greatest), but I was looking into the model structure, to see if I was using the right formula. Then, I saw in this post that it is also correct to include the continuous variable without the "by" component to get the reference smooth of "month". -- GAM Nº3 Therefore, I tried the following formula: ``` month_ds2_ft_2 <- gam(ft ~ status + s(month, bs="cc", k = 12) + s(month, bs="cc", k = 12, by=status) + s(id, bs="re"), data=ft, family= Gamma(link="log"), method = "REML") ``` Here's the summary: ``` Family: Gamma Link function: log Formula: ft ~ status + s(month, bs = "cc", k = 12) + s(month, bs = "cc", k = 12, by = status) + s(id, bs = "re") Parametric coefficients: Estimate Std. 
Error t value Pr(>|t|)    
(Intercept)  1.04351    0.03958  26.362  < 2e-16 ***
statusa_m   -0.17258    0.04655  -3.707 0.000212 ***
statusfam   -0.13856    0.05129  -2.701 0.006932 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Approximate significance of smooth terms:
                         edf Ref.df     F  p-value    
s(month)           5.971e+00     10 7.038  < 2e-16 ***
s(month):statusa_f 2.618e-03     10 0.000    0.984    
s(month):statusa_m 9.276e-04     10 0.000    0.790    
s(month):statusfam 3.212e+00      9 4.995 2.67e-06 ***
s(id)              6.947e+01    105 2.628  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

R-sq.(adj) =  0.0989   Deviance explained = 11.6%
-REML = 6799.3  Scale est. = 0.32615  n = 4334
```

Here's the visualization:

[](https://i.stack.imgur.com/QzdlY.png)

- Here's where my questions start, but I'll write all of them at the end.

I then read this great paper, which made me wonder whether I was including the variable "social status" correctly - instead of as a fixed factor, I thought that maybe it should be included as a random factor, as I would expect different group-level (social status) smooths and wiggliness (model GI in the paper). Therefore:

-- GAM Nº4

I now added social status as a random effect, not simply as a factor, and kept month separately as before:

```
month_ds2_ft_3 <- gam(ft ~ s(status, bs="re") +
                        s(month, bs="cc", k = 12, by=status) +
                        s(month, bs="cc", k = 12) +
                        s(id, bs="re"),
                      data=ft_d, family= Gamma(link="log"), method = "REML")
```

Here's the summary:

```
Family: Gamma 
Link function: log 

Formula:
ft ~ s(status, bs = "re") + s(month, bs = "cc", k = 12, by = status) +
    s(month, bs = "cc", k = 12) + s(id, bs = "re")

Parametric coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)   0.9371     0.0596   15.72   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Approximate significance of smooth terms:
                         edf Ref.df      F  p-value    
s(status)           1.718274      2 80.018 0.000509 ***
s(month):statusa_f  0.001857     10  0.000 0.991736    
s(month):statusa_m  0.003394     10  0.000 0.817288    
s(month):statusfam  3.292432     10 12.951 0.000434 ***
s(month)            5.996040     10  7.909  < 2e-16 ***
s(id)              69.725833    106  3.750  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

R-sq.(adj) =  0.0991   Deviance explained = 11.6%
-REML = 6797.9  Scale est. = 0.32591  n = 4334
```

And the visualization:

[](https://i.stack.imgur.com/pcOE9.png)

-- AIC Values

```
> AIC(month_ds2_ft, month_ds2_ft_1, month_ds2_ft_2, month_ds2_ft_3)
                               df      AIC
(GAM Nº1) month_ds2_ft   82.61284 13552.63
(GAM Nº2) month_ds2_ft_1 85.97126 13517.08
(GAM Nº3) month_ds2_ft_2 84.91109 13511.50
(GAM Nº4) month_ds2_ft_3 84.91109 13511.50
```

-- QUESTIONS

I find the [paper mentioned above](https://peerj.com/articles/6876/) extremely useful, but I'm still not sure what is happening with my data. So:

1) Why is there such a difference between GAM Nº2 and Nº3 for the level "a_m" in status? Shouldn't it be the same? It looks like the general trend in GAM Nº1 is mostly "moved" to "a_m" (I'm guessing because "a_m" represents 73% of the data), so I would expect "a_m" to be the same in GAM Nº2 and Nº3.

2) Why is the visualization for "a_f" so flat? If I model it separately, the smooth is still not significant, but the representation is not the same. I wouldn't expect it to be exactly the same, but I also wouldn't expect it to look so flat. I'm wondering whether "a_f" is being used as the reference category, and that's why it is so flat?

3) From the AIC values, it seems like there's support for considering both month and social status (although from the deviance explained, it doesn't look so relevant). Then, it looks like adding `month` as a global smoother helps with model performance - but then it doesn't seem relevant to have social status, at least from the model visualization, as only "fam" shows some trend. Any explanations?

4) It also looks like adding `status` as a factor or as a random effect does not change model performance. Would the decision here, on whether to add it as a factor or a random effect, be based on the expected wiggliness? And could it be evaluated with AIC as well, if I understood correctly?

Thank you very much for any clarifications on this!
GAM - different outputs with similar covariates: how to choose the proper model formula?
CC BY-SA 4.0
null
2023-03-09T13:06:26.967
2023-03-09T13:11:49.817
2023-03-09T13:11:49.817
117281
117281
[ "r", "categorical-data", "group-differences", "mgcv" ]
608875
1
null
null
1
14
In my research I am trying to estimate how the introduction of an electricity interconnector between two different markets (A and B) has affected the electricity price in market A. I would ideally like to differentiate the price effect between export and import hours, which should yield negative and positive coefficients, respectively. I have looked into interaction terms and nested interaction terms to capture this effect in a regression model, but I feel like this might be a dead end. Any help or ideas would be highly appreciated. I apologize if this is a very basic question, or if it is not well defined or too broad. Please let me know if I can clarify further.
Estimating the price-effect of an electricity interconnector in exporting- and importing scenarios?
CC BY-SA 4.0
null
2023-03-09T13:16:46.647
2023-03-10T07:16:04.657
2023-03-10T07:16:04.657
382774
382774
[ "regression", "causality" ]
608876
2
null
608834
1
null
Survival analysis tries to represent the distribution of survival times among members of a population. If you took 1000 randomly sampled members of a population of patients and followed each of them from the time of study entry until the event of interest, then in your scenario (with death as the event) your interpretation in Question 1 would be obviously correct. In practice, many individuals in the sample aren't followed all the way until the event occurs. Right censoring (having a lower limit to the time to the event) is common. The advantage of survival models is that they can estimate the survival curve over time for the population of patients even when some survival-time values are censored. In terms of Questions 1 and 2 when there are censored survival times, the Kaplan-Meier method provides an estimate of what you would have found without censoring. You didn't record that many deaths in your data, but those are estimates of how many actually occurred out of the 1000 patients. For Question 3, [this article](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3932959/) might be helpful in terms of how Kaplan-Meier curves are estimated. The main R [survival vignette](https://cran.r-project.org/web/packages/survival/vignettes/survival.pdf) contains concise summaries of other extensions of survival analysis. As of today, this Cross Validated website has nearly [3000 pages](https://stats.stackexchange.com/questions/tagged/survival) on survival analysis. [This page](https://stats.stackexchange.com/q/580095/28500), for example, shows how survival analysis takes censored event times into account.
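As a concrete illustration of how the product-limit method handles censoring, here is a minimal pure-Python sketch of the Kaplan-Meier calculation (the function name and toy data are invented for this example; in practice you would use R's `survival` package or an equivalent library):

```python
def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns (time, survival probability) pairs at each distinct event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        at_t = [e for (tt, e) in data if tt == t]   # all records at this time
        deaths = sum(at_t)
        if deaths > 0:
            # multiply by the conditional survival probability at this time
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= len(at_t)                      # events and censorings leave the risk set
        i += len(at_t)
    return curve

# Toy data: events at times 1, 2, 3; censorings at times 2 and 4
print(kaplan_meier([1, 2, 2, 3, 4], [1, 0, 1, 1, 0]))
```

At each event time the survival estimate is multiplied by (1 − deaths / number at risk); censored observations leave the curve unchanged but shrink the risk set afterwards, which is exactly how censored times are "taken into account".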
null
CC BY-SA 4.0
null
2023-03-09T13:20:48.390
2023-03-09T13:20:48.390
null
null
28500
null
608877
2
null
597597
0
null
There isn't just one $Z^\star$ sample you get from the run. Generally one runs the algorithm for a large number of iterations; after the burn-in period (the first subset of the iterations), you collect samples $Z^\star$ at regular intervals. Once you have decided that your algorithm has converged, you have a distribution of $Z^\star$'s whose normalized counts should in principle follow $p\left(Z|W;\theta\right)$.
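As a toy illustration of that recipe (the three-state target and all settings below are invented for this example, not taken from your model), a Metropolis sampler with burn-in and regular-interval collection looks like this:

```python
import random

def run_chain(n_iter, burn_in, thin, seed=0):
    """Metropolis chain targeting p(Z) ∝ [1, 2, 3] over Z ∈ {0, 1, 2}."""
    rng = random.Random(seed)
    weights = [1.0, 2.0, 3.0]          # unnormalized target p(Z)
    z = 0
    kept = []
    for it in range(n_iter):
        z_prop = rng.randrange(3)      # symmetric proposal over the states
        if rng.random() < min(1.0, weights[z_prop] / weights[z]):
            z = z_prop                 # accept the proposal
        if it >= burn_in and (it - burn_in) % thin == 0:
            kept.append(z)             # collect Z* at regular intervals
    return kept

samples = run_chain(n_iter=60000, burn_in=5000, thin=10)
freqs = [samples.count(k) / len(samples) for k in range(3)]
# freqs should be close to the normalized target [1/6, 2/6, 3/6]
```

The normalized counts of the kept $Z^\star$'s approach the target probabilities as the number of collected samples grows, which is the sense in which the collection follows $p\left(Z|W;\theta\right)$.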
null
CC BY-SA 4.0
null
2023-03-09T13:29:46.570
2023-03-09T13:39:10.737
2023-03-09T13:39:10.737
324982
324982
null
608878
2
null
608711
0
null
Without events in a group you can't get finite regression-coefficient (or corresponding hazard-ratio) point estimates relative to that group in a Cox model even if covariate values are fixed in time. Nevertheless, you can get limits on confidence intervals (CI), in a way that works with time-varying covariates too. That's a useful way to present such results. The CI can be found by calculating the profile (partial) likelihood of the data as you force the model to go through a range of finite values for the "infinite" regression coefficient. Then find the finite coefficient value that corresponds to a 95% confidence interval, given that the other end of the interval is infinite. The process is illustrated in R on [this page](https://stats.stackexchange.com/a/572528/28500). I understand that SAS can do this with built-in functionality.
null
CC BY-SA 4.0
null
2023-03-09T13:44:49.087
2023-03-09T13:44:49.087
null
null
28500
null
608880
1
609573
null
2
68
I'm looking for a multivariable time series architecture that accepts multiple independent variables and produces multiple dependent variables. The context is sales forecasting given item, discount percentage and units sold. So if there were 100 items, a single time-step (row) would have 300 elements: `[(item_token_1, item_discount_pct_1, units_sold_1),..., (item_token_100, item_discount_pct_100, units_sold_100)]` I believe that a transformer architecture might be applicable as it could infer patterns across items. For example, if shampoo performed well on a given week with a discount percentage of 15% then conditioner might perform similarly on the same week given a similar discount percentage. The output should have 100 elements, one predicted units-sold value per item: `[(units_sold_item1),..., (units_sold_item100)]`.
- Any white papers that are similar to what I'm describing?
- Are time series models which produce vector outputs, rather than scalars, common?
Transformers for sales forecasting, vector output
CC BY-SA 4.0
null
2023-03-09T13:52:22.283
2023-03-15T16:32:58.603
2023-03-15T16:32:58.603
53690
288172
[ "time-series", "forecasting", "transformers", "vector" ]
608881
1
null
null
0
43
I have conducted a Bayesian logistic regression, and I would like to compare 2 models: one model with one continuous predictor (M1) and one model without a predictor (M0). The outcome is a binary variable. I have used the LOO estimate from the brms package and find the following results:

    Model comparisons:
                             elpd_diff se_diff
    (M1) Bayes_Model_Binary   0.0       0.0
    (M0) modelLOO            -0.4       1.3

[](https://i.stack.imgur.com/NpRho.png)

But how can I interpret that?
How to interpret elpd_diff of Bayesian LOO estimate in Bayesian Logistic Regression
CC BY-SA 4.0
null
2023-03-09T13:52:28.043
2023-03-09T14:03:59.513
2023-03-09T14:03:59.513
362671
381165
[ "r", "bayesian", "logistic", "brms" ]
608882
2
null
608861
0
null
It's not clear why you think that there is "Type I error" (false positives) in your results. If the assumptions underlying the binomial model are correct, then you shouldn't be having false-positive results unless your model is overfitting the data. If you have more than 15 or so cases in the minority outcome class per coefficient that you are estimating (as you seem to), you are probably OK. You can check by repeating the modeling on multiple bootstrap samples. For the degrees of freedom (`df`), the maximum-likelihood method used to fit the binomial logistic regression is based on asymptotic theory that holds in the limit of an infinite number of observations. Comparisons are then based on a normal distribution of coefficient estimates, the limit of the t distribution as the degrees of freedom approach infinity. For your model, you are asking for trouble when you [omit the individual coefficient](https://stats.stackexchange.com/q/11009/28500) for a predictor (`habitat`) that you are including in an interaction. You might get away with that if there's only 1 interaction term in the model, but you have several. I'd be reluctant to give advice on how to set up the `emtrends` in this situation.
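The bootstrap check suggested above can be sketched generically as follows (pure Python with made-up numbers; with your data the `estimator` would refit the binomial GLM on each resample so you can see how stable the coefficient estimates are):

```python
import random
import statistics

def bootstrap_estimates(data, estimator, n_boot=500, seed=1):
    """Apply `estimator` to n_boot resamples (with replacement) of `data`."""
    rng = random.Random(seed)
    n = len(data)
    return [estimator([data[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)]

# Made-up univariate data standing in for a fitted quantity of interest
data = [0.8, 1.1, 0.9, 1.4, 1.0, 0.7, 1.2, 1.3, 0.95, 1.05]
means = bootstrap_estimates(data, statistics.fmean)
spread = statistics.pstdev(means)   # bootstrap spread of the estimate
```

If the estimates vary wildly (or blow up) across resamples, that is a sign the model is overfitting the available data.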
null
CC BY-SA 4.0
null
2023-03-09T13:59:45.417
2023-03-09T13:59:45.417
null
null
28500
null
608883
2
null
508086
1
null
### Neural Networks as Gaussian Processes

Consider a neural network with only one layer (i.e. no hidden layers, i.e. logistic regression): $$\operatorname{reg}: \mathbb{R}^N \to \mathbb{R}^M : \boldsymbol{x} \mapsto \boldsymbol{s} = \boldsymbol{W} \boldsymbol{x}.$$ If we replace the entries in $\boldsymbol{W} \in \mathbb{R}^{M \times N}$ by random values, such that $w_{ij} \sim \mathcal{N}(0, \sigma_w^2)$, the resulting function will be a random/stochastic process. Now, let $\boldsymbol{w}_i$ be a row of $\boldsymbol{W}$, so that $$s_i = \boldsymbol{w}_i \boldsymbol{x} = \sum_{j=1}^N w_{ij} x_j.$$ Then we can use the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem) to conclude that $s_i$ follows a Gaussian distribution if $N \to \infty$. Therefore, a large number of inputs ($N$) turns the random process into a Gaussian process (because the outputs are now Gaussian). This is exactly the idea presented in your last piece of literature (Lee, 2018). Although Lee et al. write about infinite width in every layer, I would argue that you only really need it in the penultimate layer (i.e. the inputs to the final layer). Having infinite width everywhere just makes the computation of the mean and covariance functions tractable (at least for ReLU networks).

### The Effect of Loss Functions

A loss function by itself will never be a Gaussian process because there is typically no randomness in a loss function. This being said, the combination of neural network and loss function can give rise to a random process. Whether this random process is still Gaussian depends on the loss function itself. I believe that there are no practical loss functions that would preserve Gaussianity. E.g. when using the mean squared error, $(\operatorname{reg}(\boldsymbol{x} \mathbin{;} \boldsymbol{w}) - y)^2,$ it should be clear that the loss values will not be Gaussian.
After skimming the papers that are referenced in your question, I am not entirely sure whether they really talk about loss functions as Gaussian processes: - Pascanu et al. (2014) mention that they use random loss functions, sampled from a Gaussian process. This would be using GPs exactly as how you described them: a distribution of functions. - Choromanska et al. (2015) seem to try to prove that a ReLU network with some loss function that uses randomness is related to a Gaussian process. At least that would be my interpretation since I do not know much about spin-glass models.
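Returning to the first section, the central-limit argument is easy to verify numerically. The following sketch (my own illustration; the width, number of draws, and $1/\sqrt{N}$ weight scaling are arbitrary choices, not taken from the referenced papers) draws many random weight rows for a fixed input $\boldsymbol{x}$ and checks that $s_i = \boldsymbol{w}_i \boldsymbol{x}$ has mean $0$ and variance $\sigma_w^2 \lVert \boldsymbol{x} \rVert^2$:

```python
import random
import statistics

rng = random.Random(42)
N = 500                                  # input width
sigma_w = 1.0 / N ** 0.5                 # weight std dev, standard scaling
x = [rng.uniform(-1.0, 1.0) for _ in range(N)]
x_norm_sq = sum(v * v for v in x)

# Draw many independent weight rows w_i with w_ij ~ N(0, sigma_w^2) and
# record s_i = w_i . x for each of them.
draws = []
for _ in range(2000):
    w = [rng.gauss(0.0, sigma_w) for _ in range(N)]
    draws.append(sum(wi * xi for wi, xi in zip(w, x)))

mean = statistics.fmean(draws)           # should be close to 0
var = statistics.pvariance(draws)        # should be close to sigma_w^2 * ||x||^2
```

The same check with non-Gaussian weights (e.g. uniform entries with matched variance) gives approximately Gaussian $s_i$ for large $N$, which is the content of the CLT argument.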
null
CC BY-SA 4.0
null
2023-03-09T14:11:07.360
2023-03-09T14:11:07.360
null
null
95000
null
608885
2
null
608684
0
null
The issue here is that the values of `X1` change depending on whether or not treatment `Z` was applied before imaging. Thus the association of observed `X1` values with outcome depends on whether `Z` was applied. The regression must take that into account. If `Z` is coded as 0/1 for absence/presence, then Model 1 won't evaluate the association of `X1` with outcome at all when `Z=0`. The interaction term is just the product of the individual predictor values, so the first term in your model will be 0 for `Z=0` regardless of the value of `X1`. Model 2 will provide an individual coefficient for `X1` that represents its association with outcome for `Z=0`, and an interaction coefficient that represents the change in that association if `X1` is measured following `Z`. You might get away with that if the model remains that simple and `X1` values are affected multiplicatively by `Z`. In general, Model 3 is the safest. See [this page](https://stats.stackexchange.com/q/11009/28500) and its links for extensive discussion. The individual coefficient for `Z` in Model 3 will represent the apparent additive association of `Z` with outcome when `X1=0`, and the interaction coefficient allows for a proportional change in `X1` values as a function of `Z`. You might find that the interaction term in that model isn't large. For example, if `Z` just has an additive effect on `X1` values across the entire range, then you might find a corresponding coefficient for `Z` and an insignificant interaction coefficient. All of your models implicitly assume a direct linear association between your continuous `X` values and (a possible transformation of) outcome. That's often not the case, and you should consider more flexible modeling with regression splines or a generalized additive model. That's particularly the case if `Z` has a complicated non-additive or non-proportional effect on `X1` values.
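To see concretely why Model 1 ignores `X1` in the untreated group, here is a tiny sketch (hypothetical numbers) of the design-matrix columns when `Z` is coded 0/1 and the model contains only the product term:

```python
# Each row holds a continuous predictor X1 and a 0/1 treatment indicator Z.
rows = [
    {"X1": 1.2, "Z": 0},
    {"X1": 3.4, "Z": 0},
    {"X1": 2.1, "Z": 1},
    {"X1": 0.7, "Z": 1},
]

# Design columns: (X1, Z, X1:Z interaction). The interaction column is the
# elementwise product, so it is identically 0 whenever Z = 0 — a model with
# only X1:Z carries no information about X1 in the untreated group.
design = [(r["X1"], r["Z"], r["X1"] * r["Z"]) for r in rows]
untreated_interactions = [d[2] for d in design if d[1] == 0]
```

With individual `X1` and `Z` terms included as well (Model 3), the `X1` column still varies in the `Z=0` rows, so the association of `X1` with outcome is estimable in both groups.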
null
CC BY-SA 4.0
null
2023-03-09T14:32:14.410
2023-03-09T14:32:14.410
null
null
28500
null
608886
1
null
null
2
23
This question is about a [magic square](https://en.wikipedia.org/wiki/Magic_square) generator, "relaxed" because - it's only about one vector (row) in the square independent of all other rows; - the individual elements are continuous and not integral. Constraints are $$0 \le i < n, i \in \mathbb{I}, n \in \mathbb{I} $$ $$0 \le v_i \le u_i, v_i \in \mathbb{R}, u_i \in \mathbb{R} $$ $$ \sum_{i=0}^{n-1} v_i = 100 $$ with $\mathbb{I}$ and $\mathbb{R}$ as the set of all integers and the set of all reals respectively. $n$ and $u$ are known and fixed. $v_i$ are the random variables. I want to define a single random distribution that satisfies the above constraints. Naive uniform distribution up to each known, fixed bound $u_i$ will not satisfy the sum constraint, and something like naive uniform for all but $v_{n-1}$ where that's left as a degree of freedom to satisfy the sum constraint will (a) not produce a uniform distribution for that one variable, and (b) will not always be satisfiable for choices of other $v$. Ideally I would like to know which random distribution has one unchanging form over all $v$ that is parameterised based on individual $u_i$. Beyond the distribution, it would be nice to hear if there is a typical algorithm for this situation that is statistically sound. I realise that there are multiple solutions. At the risk of sounding subjective, I would be interested in hearing one or two that are not mathematically complex and that are easy to code for in Python, and that produce means of each $v$ that over several experiments would tend to land between the bounds but not on them.
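To demonstrate the problem with the naive construction described above, here is a Python sketch (all names are mine): $v_0,\dots,v_{n-2}$ are drawn uniformly within their bounds and $v_{n-1}$ absorbs the remainder, which can land outside $[0, u_{n-1}]$.

```python
import random

def naive_draw(u, total=100.0, seed=None):
    """Draw v_0..v_{n-2} ~ U(0, u_i) and force v_{n-1} = total - sum."""
    rng = random.Random(seed)
    v = [rng.uniform(0.0, ui) for ui in u[:-1]]
    v.append(total - sum(v))           # forced by the sum constraint
    return v

u = [60.0, 60.0, 60.0]
failures = 0
for s in range(1000):
    v = naive_draw(u, seed=s)
    if not (0.0 <= v[-1] <= u[-1]):
        failures += 1                   # bound on v_{n-1} violated
# failures > 0 here: the naive scheme is not always satisfiable
```

The sum constraint always holds by construction, but a substantial fraction of draws push $v_{n-1}$ below 0 or above its bound, which is exactly failure mode (b).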
Relaxed magic-square generator distribution
CC BY-SA 4.0
null
2023-03-09T14:32:58.913
2023-03-09T15:18:26.197
2023-03-09T15:18:26.197
76901
76901
[ "distributions", "random-generation", "constraint" ]
608887
1
null
null
0
44
I am looking at factors associated with changes in students' grades over time. My dataset consists of 45 students with Time 1 and Time 2 grades across 7 different subjects, as well as characteristics about these students. As you can see in the sample dataset below, each student has 7 different scores. I want to look at factors associated with changes in math scores, english scores, history scores... separately. I am running 7 different multiple linear regression models (one model for each subject). The outcome for every model has the same form (change in subject score), and the starting set of IVs for each model prior to stepwise selection is the same. I am using backward stepwise selection to arrive at the final model in each case, although for each model different variables show up as significant. All models use the same dataset. For instance, each student has a T1 and T2 score for each subject.

```
student <- c(1,1,1,1,1,2,2,2,2,2)
subject <- c("math", "his", "geo", "eng", "art","math", "his", "geo", "eng", "art")
t1_grade <- c(78,54,78,67,72,89,76,80,99,76)
t2_grade <- c(67,60,65,78,81,87,90,67,92,79)
age <- c(18,18,18,18,18,18,18,18,18,18)
sex <- c("M","M","M","M","M","F","F","F","F","F")

data <- data.frame(student, subject, t1_grade, t2_grade, age, sex)
data$grade_change <- data$t2_grade - data$t1_grade

   student subject t1_grade t2_grade age sex grade_change
1        1    math       78       67  18   M          -11
2        1     his       54       60  18   M            6
3        1     geo       78       65  18   M          -13
4        1     eng       67       78  18   M           11
5        1     art       72       81  18   M            9
6        2    math       89       87  18   F           -2
7        2     his       76       90  18   F           14
8        2     geo       80       67  18   F          -13
9        2     eng       99       92  18   F           -7
10       2     art       76       79  18   F            3
```

1) I assume I have to adjust the p-values for multiple testing. Would my number of tests be 7, because I am running 7 multiple linear regression models?

2) From my reading, to correct using the Bonferroni method I would divide 0.05/7, and my new significance level would be 0.007. Is that correct?

3) If I were to use the Hochberg correction, how would I go about doing this for all 7 models?

Thanks for all your help!
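For reference, here is my understanding of the Hochberg step-up adjustment, sketched in Python with made-up p-values (one per model); in R the same thing would be `p.adjust(pvals, method = "hochberg")`:

```python
def hochberg_adjust(pvals):
    """Hochberg step-up adjusted p-values: working from the largest p-value
    down, multiply the j-th smallest p by (m - j + 1) and take running minima."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i], reverse=True)
    adjusted = [0.0] * m
    running_min = 1.0
    for rank, i in enumerate(order):        # largest p-value first
        multiplier = rank + 1               # = m - j + 1 for the j-th smallest
        running_min = min(running_min, multiplier * pvals[i])
        adjusted[i] = min(1.0, running_min)
    return adjusted

# Seven hypothetical p-values, one per regression model
pvals = [0.001, 0.012, 0.034, 0.04, 0.21, 0.38, 0.62]
adj = hochberg_adjust(pvals)
```

An adjusted p-value below 0.05 is then significant after the Hochberg correction across the 7 models.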
Adjusted p-value for conducting 7 linear regression models?
CC BY-SA 4.0
null
2023-03-09T14:35:58.067
2023-03-09T19:27:48.373
2023-03-09T19:27:48.373
382780
382780
[ "r", "multiple-regression", "linear", "bonferroni" ]