Dataset columns (schema header from the dataset viewer): Id (string, 1-6 chars); PostTypeId (string, 7 classes); AcceptedAnswerId (string, 1-6 chars); ParentId (string, 1-6 chars); Score (string, 1-4 chars); ViewCount (string, 1-7 chars); Body (string, 0-38.7k chars); Title (string, 15-150 chars); ContentLicense (string, 3 classes); FavoriteCount (string, 3 classes); CreationDate (string, 23 chars); LastActivityDate (string, 23 chars); LastEditDate (string, 23 chars); LastEditorUserId (string, 1-6 chars); OwnerUserId (string, 1-6 chars); Tags (list)
9697
2
null
9685
1
null
Here is the code to do the chi-square tests as well as generate a variety of test statistics. However, statistical tests of association of the table margins are useless here; the answer is obvious. No one does a statistical test to see if summer is hotter than winter.

```
Chompy <- matrix(c(30, 10, 1, 31, 20, 10), 3, 2)
Chompy
chisq.test(Chompy)
chisq.test(Chompy, simulate.p.value = TRUE, B = 10000)

chompy2 <- data.frame(matrix(c(30, 10, 1, 31, 20, 10,
                               1, 2, 1, 2, 1, 2,
                               1, 2, 3, 1, 2, 3), 6, 3))
chompy2
chompy2$X2 <- factor(chompy2$X2)
chompy2$X3 <- factor(chompy2$X3)

summary(fit1 <- glm(X1 ~ X2 + X3, data = chompy2, family = poisson))
summary(fit2 <- glm(X1 ~ X2 * X3, data = chompy2, family = poisson))  # oversaturated
summary(fit3 <- glm(X1 ~ 1, data = chompy2, family = poisson))        # null
anova(fit3, fit1)

library(lmtest)
waldtest(fit1)
waldtest(fit2)  # oversaturated

# kruskal.test() accepts a single grouping factor, so combine X2 and X3:
kruskal.test(X1 ~ interaction(X2, X3), data = chompy2)
```
null
CC BY-SA 3.0
null
2011-04-18T19:18:55.227
2011-04-18T19:18:55.227
null
null
1893
null
9698
2
null
9692
5
null
You can use `lm()` instead of `aov()` in this case (the latter is a wrapper of the former). Here is an illustration:

```
n <- 100
A <- gl(2, n/2, n, labels=paste("a", 1:2, sep=""))
B <- gl(2, n/4, n, labels=paste("b", 1:2, sep=""))
# generate fake data for a balanced two-way ANOVA
df <- data.frame(y=rnorm(n), A, B)
summary(lm1 <- lm(y~A+B, data=df))  # compare with summary.aov(...)
predict(lm1, expand.grid(A=levels(A), B=levels(B)), interval="confidence")
```

The latter command gives you predictions for each combination of the A and B factor levels (here, I didn't include the interaction), in the following order:

```
   A  B
1 a1 b1
2 a2 b1
3 a1 b2
4 a2 b2
```

Another option is to use the [effects](http://cran.r-project.org/web/packages/effects/index.html) package.
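A minimal sketch of that last option (assuming the effects package is installed; the fake-data setup mirrors the code above):

```r
library(effects)   # assumed installed: install.packages("effects")
n <- 100
A <- gl(2, n/2, n, labels = paste("a", 1:2, sep = ""))
B <- gl(2, n/4, n, labels = paste("b", 1:2, sep = ""))
df  <- data.frame(y = rnorm(n), A, B)
lm1 <- lm(y ~ A + B, data = df)
allEffects(lm1)         # adjusted means for A and B, with confidence limits
plot(allEffects(lm1))   # the same, as plots with confidence bands
```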
null
CC BY-SA 3.0
null
2011-04-18T19:19:11.087
2011-04-18T19:19:11.087
null
null
930
null
9699
1
9700
null
46
98800
Is there a way to use R through a web interface, without needing to install it? I have only one small script which I would like to run, and I just want to give it a shot without a long installation procedure. Thank you.
Using R online - without installing it
CC BY-SA 3.0
null
2011-04-18T19:26:12.323
2020-09-14T18:19:07.250
null
null
230
[ "r" ]
9700
2
null
9699
23
null
Yes, there are some R web interfaces, like [this one](http://pbil.univ-lyon1.fr/Rweb/) (dead as of September 2020), the rdrr.io [online REPL](https://rdrr.io/snippets/), or [Repl.it](https://repl.it/languages/). Note: installation of the R software is pretty straightforward and quick on any platform.
null
CC BY-SA 4.0
null
2011-04-18T19:35:38.070
2020-09-14T18:19:07.250
2020-09-14T18:19:07.250
930
930
null
9701
2
null
9666
2
null
I saw you said you prefer Python, but there are a bunch of R libraries for this; see the highest density region functions in [http://cran.r-project.org/web/packages/hdrcde/hdrcde.pdf](http://cran.r-project.org/web/packages/hdrcde/hdrcde.pdf). The second iteration of your search for the median wouldn't work, as your modes would balance each other out. You are better off calculating the steepest points of ascent in the CDF.
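A small sketch of what the hdrcde package offers (assuming the package is installed; the bimodal sample is made up):

```r
library(hdrcde)   # assumed installed: install.packages("hdrcde")
set.seed(1)
x <- c(rnorm(500, mean = -2), rnorm(500, mean = 2))  # bimodal toy sample
hdr(x, prob = 95)   # 95% highest-density region: disjoint intervals, one per mode
hdr.den(x)          # density plot with the HDRs marked
```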
null
CC BY-SA 3.0
null
2011-04-18T19:42:11.797
2011-04-18T19:42:11.797
null
null
1893
null
9703
2
null
9662
1
null
You can use logistic regression. In SPSS the categorical variable (group A, B, or C) can be entered as a single variable using the contrast command, in which case one of the three will be designated as the reference category, or if you prefer you can create 2 dummy variables to account for the 3 groups. You would run the regression hierarchically: first enter those risk factors you want to control, then on a separate step enter Group and watch for the coefficients, odds ratios [in SPSS, "Exp(B)"], and/or p-values you obtain for each category as compared to the reference category. For example, if C is the reference and if the odds ratio for A is 1.3, then A has 1.3 times the odds that C has of developing the disease. (Just be careful to distinguish odds from probability.)
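To make this concrete in R rather than SPSS, here is a hedged sketch with made-up data and hypothetical variable names; `relevel()` plays the role of SPSS's reference-category contrast, and `exp(coef(...))` gives the odds ratios [SPSS's "Exp(B)"]:

```r
set.seed(1)
n <- 300
dat <- data.frame(
  age    = rnorm(n, 50, 10),                        # risk factor to control
  smoker = rbinom(n, 1, 0.3),                       # another risk factor
  group  = factor(sample(c("A", "B", "C"), n, replace = TRUE))
)
dat$group   <- relevel(dat$group, ref = "C")        # make C the reference
dat$disease <- rbinom(n, 1, plogis(-2 + 0.02 * dat$age + 0.5 * dat$smoker))

fit <- glm(disease ~ age + smoker + group, data = dat, family = binomial)
summary(fit)      # coefficients and p-values for groupA, groupB vs. reference C
exp(coef(fit))    # odds ratios; e.g. "groupA" compares A's odds with C's
```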
null
CC BY-SA 3.0
null
2011-04-18T19:55:39.293
2011-04-18T19:55:39.293
null
null
2669
null
9704
2
null
9666
2
null
Had to change my answer because I had trouble with `strucchange`, which doesn't seem to like hard changes. Maybe this code will help a bit.

```
library(robfilter)

# Make phoney data...
clock <- ts(rnorm(1000, 1, 0.03) *
            approx(1:10, rgamma(10, 1, 1), seq(0.01, 10, 0.01),
                   method = "constant")$y)
spike <- round(runif(12, 1, 1000))
clock[spike] <- clock[spike] * 10

# Median filter, then difference...
clock.med <- med.filter(clock, 50)$level[, 1]
clock.change <- abs(diff(clock.med))
plot(clock.change, type = "l")
clock.med[clock.change > 0.15]
```
null
CC BY-SA 3.0
null
2011-04-18T20:21:42.377
2011-04-25T20:13:49.233
2011-04-25T20:13:49.233
1764
1764
null
9705
2
null
9626
0
null
Balanced designs really have just one goal: orthogonal treatment effects. An orthogonal design lowers the risk of unobservables sneaking into your effect estimates in an uneven way. See [http://www1.umn.edu/statsoft/doc/statnotes/stat06.txt](http://www1.umn.edu/statsoft/doc/statnotes/stat06.txt) for an excellent discussion of this topic.
null
CC BY-SA 3.0
null
2011-04-18T20:28:17.657
2011-04-18T20:28:17.657
null
null
1893
null
9706
2
null
9541
2
null
The concept of the number of parameters, and hence the df, in an lmer model is kind of fuzzy. Don't bother with it; use AICc instead and you stand on firmer theoretical ground: [http://warnercnr.colostate.edu/~anderson/PDF_files/TESTING.pdf](http://warnercnr.colostate.edu/~anderson/PDF_files/TESTING.pdf)
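For reference, the small-sample correction behind AICc is easy to compute by hand; a sketch for a generic fitted model (the `aicc` helper is made up, illustrated on a plain `lm` fit):

```r
# AICc = AIC + 2k(k+1)/(n - k - 1), with k estimated parameters and n observations
aicc <- function(fit) {
  k <- attr(logLik(fit), "df")
  n <- nobs(fit)
  AIC(fit) + 2 * k * (k + 1) / (n - k - 1)
}

fit <- lm(mpg ~ wt + hp, data = mtcars)
c(AIC = AIC(fit), AICc = aicc(fit))
```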
null
CC BY-SA 3.0
null
2011-04-18T20:33:07.297
2011-04-18T20:33:07.297
null
null
1893
null
9707
2
null
9699
8
null
[Sage](http://www.sagemath.org/) also includes R, with a Python interface. For some years now, the preferred way to run SageMath has been via [CoCalc](https://www.cocalc.com/). It also allows you to run R directly, e.g. in a [Jupyter notebook using the R kernel](https://share.cocalc.com/share/20e4a191-73ea-4921-80e9-0a5d792fc511/test/faithful.ipynb?viewer=share). Example:

```
r.data("faithful")
r.lm("eruptions ~ waiting", data=r.faithful)
```

Output:

```
Call:
lm(formula = sage2, data = sage0)

Coefficients:
(Intercept)      waiting
   -1.87402      0.07563
```
null
CC BY-SA 4.0
null
2011-04-18T20:38:17.917
2018-08-06T16:21:55.720
2018-08-06T16:21:55.720
13176
3911
null
9708
2
null
7775
1
null
Well, $r^2$ is really just the covariance squared over the product of the variances, so you could probably do something like $$\frac{\mathrm{cov}(Y_\text{full}, Y_\text{true})^2}{\mathrm{var}(Y_\text{true})\,\mathrm{var}(Y_\text{full})} - \frac{\mathrm{cov}(Y_\text{red}, Y_\text{true})^2}{\mathrm{var}(Y_\text{true})\,\mathrm{var}(Y_\text{red})}$$ regardless of model type; check to verify that this gives you the same answer in the `lm` case, though. [http://www.stator-afm.com/image-files/r-squared.gif](http://www.stator-afm.com/image-files/r-squared.gif)
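That suggested check is quick to run; a sketch with made-up data confirming that, for an `lm` fit with intercept, cov²/(var·var) between fitted and observed values reproduces R²:

```r
set.seed(42)
x <- rnorm(100)
y <- 2 * x + rnorm(100)
fit <- lm(y ~ x)

r2_lm  <- summary(fit)$r.squared
r2_cov <- cov(fitted(fit), y)^2 / (var(fitted(fit)) * var(y))
all.equal(r2_lm, r2_cov)   # TRUE: the two definitions agree for lm
```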
null
CC BY-SA 3.0
null
2011-04-18T20:45:12.783
2011-04-18T20:45:12.783
null
null
1893
null
9709
2
null
9699
8
null
Also, if you want to provide a solution to other users, you can set up a webserver with [RApache](http://rapache.net/).
null
CC BY-SA 3.0
null
2011-04-18T21:02:49.140
2011-04-18T21:02:49.140
null
null
582
null
9710
2
null
8147
1
null
Sums of independent Bernoulli variables with a common success probability are distributed exactly binomial, so one would often use logistic regression.
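A sketch of what that looks like in R, with made-up grouped data: `glm` accepts the aggregated binomial counts directly via `cbind(successes, failures)`:

```r
# Toy data: three dose groups, a fixed number of trials per group,
# and the number of successes observed in each group.
dose      <- c(0, 1, 2)
trials    <- c(40, 40, 40)
successes <- c(5, 14, 27)

# Aggregated-binomial logistic regression.
fit <- glm(cbind(successes, trials - successes) ~ dose, family = binomial)
coef(fit)   # positive dose coefficient: success odds rise with dose
```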
null
CC BY-SA 3.0
null
2011-04-18T21:04:11.073
2011-04-18T21:04:11.073
null
null
1893
null
9711
2
null
9695
1
null
Not sure it gives a final answer to the question, but I would take a look at [this](http://cscs.umich.edu/~crshalizi/weblog/491.html), especially point 2. See also the discussion in appendix A2 of the [paper](http://arxiv.org/abs/0706.1062).
null
CC BY-SA 3.0
null
2011-04-18T21:06:24.723
2011-04-18T21:06:24.723
null
null
4220
null
9712
1
9713
null
2
2204
I'm in the process of learning R, in the hope of replacing everything I do in SPSS/SigmaPlot with R. It's going well so far :) I've got to the point of running a repeated-measures ANOVA, but have come unstuck when trying to plot the results. I've worked out how to plot a set of means using ggplot2, but now I'm unsure of how to plot the standard error as error bars. I've seen a number of guides with different implementations, and none of them seem appropriate (or even agree with each other). Many people use standard deviations, which is not what I am after. Others have different methods of computing the standard error, so I'm unsure of the best way to proceed. What I have so far is this:

```
qplot(CATEGORIES, means, shape=factor(ANOTHER_CATEGORY), facets=MORE_CATEGORIES ~ ., data=alldata)
```

I was wondering if someone could point me in the right direction in terms of how to get the standard errors from a repeated-measures ANOVA in R, and then how to translate this into error bars in ggplot? Thanks!
How to add standard error to plots in ggplot2 with R?
CC BY-SA 3.0
null
2011-04-18T21:13:05.013
2011-04-18T21:50:00.160
null
null
4204
[ "r", "anova", "ggplot2" ]
9713
2
null
9712
2
null
The reason you're running into multiple methods is that the target variability to visualize in a repeated-measures design is not necessarily straightforward to determine. If you calculate the conventional SE, then what you've done is give an estimate of how well you measured the raw score. However, that generally wasn't the goal of the study in a repeated-measures design. What you are typically looking to do is estimate an effect, and the variability of that effect is much smaller. I generally recommend plotting only your effects and the variability of your effect estimates (better as confidence intervals than SEs). Then the error bar will represent something about what you actually attempted to study. The effect SE will be sqrt(MSe/n), where n is the number of measurements of the effect (not to be confused with the number of subjects).
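As a sketch of how this translates into ggplot2 code: the effect estimates and SEs below are made up; in practice they would come from your ANOVA output, with SE = sqrt(MSe/n) as described above:

```r
library(ggplot2)
# Hypothetical summary of a repeated-measures effect: one row per condition,
# with the effect estimate and its standard error.
eff <- data.frame(
  condition = c("A", "B", "C"),
  estimate  = c(1.2, 1.8, 0.9),
  se        = c(0.15, 0.20, 0.12)
)
ggplot(eff, aes(condition, estimate)) +
  geom_point() +
  geom_errorbar(aes(ymin = estimate - 1.96 * se,     # ~95% confidence bars
                    ymax = estimate + 1.96 * se),
                width = 0.1)
```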
null
CC BY-SA 3.0
null
2011-04-18T21:50:00.160
2011-04-18T21:50:00.160
null
null
601
null
9714
2
null
9693
3
null
Hopefully, modelling the dynamics of tumor progression qualifies for this: Anderson & Quaranta. [Integrative mathematical oncology](http://www.nature.com/nrc/journal/v8/n3/abs/nrc2329.html). Nature Reviews Cancer, 2008.
null
CC BY-SA 3.0
null
2011-04-18T22:04:17.970
2011-04-18T22:04:17.970
null
null
3770
null
9715
1
null
null
24
32579
I ran a multinomial logit model in JMP and got back results which included the AIC as well as chi-squared p-values for each parameter estimate. The model has one categorical outcome and 7 categorical explanatory vars. I then fit what I thought would be the same model in R, using the `multinom` function in the [nnet](http://cran.r-project.org/web/packages/nnet/index.html) package. The code was basically:

```
fit1 <- multinom(y ~ x1 + x2 + ... xn, data=mydata)
summary(fit1)
```

However, the two give different results. With JMP the AIC is 2923.21, and with `nnet::multinom` the AIC is 3116.588. So my first question is: Is one of the models wrong?

The second thing is, JMP gives chi-squared p-values for each parameter estimate, which I need. Running summary on the multinom `fit1` does not - it just gives the estimates, AIC and Deviance. My second question is thus: Is there a way to get the p-values for the model and estimates when using `nnet::multinom`?

I know [mlogit](http://cran.r-project.org/web/packages/mlogit/index.html) is another R package for this, and it looks like its output includes the p-values; however, I have not been able to run `mlogit` using my data. I think I had the data formatted right, but it said I had an invalid formula. I used the same formula that I used for `multinom`, but it seems like it requires a different format using a pipe and I don't understand how that works.
How to set up and estimate a multinomial logit model in R?
CC BY-SA 4.0
null
2011-04-18T22:35:27.610
2022-12-07T13:20:28.120
2022-09-08T03:12:06.697
11887
3984
[ "r", "logistic", "multinomial-distribution", "jmp" ]
9718
1
null
null
3
1673
If the correlation between demographic dissimilarity and satisfaction is $r=-.14$ and the partial correlation, with career development partialled out, between demographic dissimilarity and satisfaction is $r=-.06$ across a very large sample of size $n$, what is the appropriate test to determine if these correlations are significantly different? Steiger (1980) appears to be the authoritative article on this, but that article and most other sources assume one is comparing the correlation between x/y and v/y with dependent groups. This one has stumped our school's statistics resource center.
How to test whether correlation measures differ when controlling or not for a third variable?
CC BY-SA 3.0
null
2011-04-19T01:49:14.960
2011-04-20T14:01:56.807
2011-04-19T14:52:12.420
930
null
[ "correlation", "statistical-significance", "causality" ]
9720
2
null
9627
5
null
You cannot "systemically avoid this problem in the future", because it should not be called a "problem". If the reality of the material world features strong covariates, then we should accept that as fact and adjust our theories and models in consequence. I like the question very much, and hope that what follows will not sound too disappointing. Here are some adjustments that might work for you. You will need to review a regression handbook before proceeding.

- Diagnose the issue, using correlation or post-estimation techniques like the Variance Inflation Factor (VIF). Use the tools mentioned by Peter Flom if you are using SAS or R. In Stata, use `pwcorr` to build a correlation matrix, `gr matrix` to build a scatterplot matrix, and `vif` to detect problematic tolerance levels of 1/VIF < 0.1.
- Measure the interaction effect by adding, for example, `var3*var4` to the model. The coefficient will help you realise how much is at play between var3 and var4. This will only bring you so far as partially measuring the interaction, but it will not rescue your model from its limitations.
- Most importantly, if you detect strong multicollinearity or other issues like heteroscedasticity, you should ditch your model and start again. Model misspecification is the plague of regression analysis (and frequentist methods in general). Paul Schrodt has several excellent papers on the issue, including his recent "Seven Deadly Sins" that I like a lot.

This answers your point on multicollinearity, and a lot of this can be learnt from the [regression handbook](http://www.ats.ucla.edu/stat/stata/webbooks/reg/) over at UCLA Stat Computing. It does not answer your question on causality. Briefly put, regression is never causal. Neither is any statistical model: causal and statistical information are separate species. Read selectively from Judea Pearl ([example](http://ftp.cs.ucla.edu/pub/stat_ser/r373-reprint.pdf)) to learn more on the matter.
All in all, this answer does not cancel out the value of regression analysis, or even of frequentist statistics (I happen to teach both). It does, however, reduce their scope of appropriateness, and also underlines the crucial role of your initial explanatory theory, which really determines the possibility of your model possessing causal properties.
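A small sketch of the diagnosis step in base R (the Stata commands mentioned above have simple R analogues; the data here are made up, with two predictors deliberately built to be collinear):

```r
set.seed(1)
n  <- 200
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.1)   # nearly collinear with x1
y  <- x1 + x2 + rnorm(n)

cor(cbind(x1, x2, y))           # correlation matrix (cf. Stata's pwcorr)
pairs(cbind(x1, x2, y))         # scatterplot matrix (cf. gr matrix)

# VIF for x1 by hand: regress it on the other predictor(s).
r2  <- summary(lm(x1 ~ x2))$r.squared
vif <- 1 / (1 - r2)
c(VIF = vif, tolerance = 1 / vif)   # tolerance (1/VIF) < 0.1 flags trouble
```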
null
CC BY-SA 3.0
null
2011-04-19T02:50:30.023
2011-04-19T02:50:30.023
null
null
3582
null
9721
2
null
9685
4
null
I am going to assume that "100% survival" means that your sites only contained a single organism, so 30 means 30 organisms died, and 31 means 31 organisms didn't. Based on this, the chi-square should be fine, but it will only tell you which hypotheses are not supported by the data - it won't tell you whether one reasonable hypothesis is better than another. I present a probability analysis which does extract this information - it agrees with the chi-square test, but it gives you more information, and a better way to present the results.

The model is a Bernoulli model for the indicator of "death", $Y_{ij}\sim Bin(1,\theta_{ij})$ (where $i$ denotes the cell of the $2\times 3$ table and $j$ denotes the individual unit within the cell). There are two global assumptions underlying the chi-square test:

- within a given cell of the table, the $\theta_{ij}$ are all equal; that is, $\theta_{ij}=\theta_{ik}=\theta_{i}$
- the $Y_{ij}$ are statistically independent, given $\theta_{i}$. This means that the probability parameters tell you everything about $Y_{ij}$ - all other information is irrelevant if you know $\theta_{i}$

Denote $X_{i}$ as the sum of the $Y_{ij}$ (so $X_{1}=30,X_{2}=10,X_{3}=1$) and let $N_{i}$ be the group size (so $N_{1}=61,N_{2}=30,N_{3}=11$). Now we have a hypothesis to test: $$H_{A}:\theta_{1}=\theta_{2},\theta_{1}=\theta_{3},\theta_{2}=\theta_{3}$$ But what are the alternatives? I would say the other possible combinations of equal and unequal rates: $$H_{B1}:\theta_{1}\neq\theta_{2},\theta_{1}\neq\theta_{3},\theta_{2}=\theta_{3}$$ $$H_{B2}:\theta_{1}\neq\theta_{2},\theta_{1}=\theta_{3},\theta_{2}\neq\theta_{3}$$ $$H_{B3}:\theta_{1}=\theta_{2},\theta_{1}\neq\theta_{3},\theta_{2}\neq\theta_{3}$$ $$H_{C}:\theta_{1}\neq\theta_{2},\theta_{1}\neq\theta_{3},\theta_{2}\neq\theta_{3}$$ One of these hypotheses has to be true, given the "global" assumptions above. But note that none of them specify particular values for the rates - so the rates must be integrated out.
Now, given that $H_{A}$ is true, we only have one parameter (because all the rates are equal), and the uniform prior is a conservative choice; denote this and the global assumptions by $I_{0}$. So we have: $$P(X_{1},X_{2},X_{3}|N_{1},N_{2},N_{3},H_{A},I_{0})=\int_{0}^{1}P(X_{1},X_{2},X_{3},\theta|N_{1},N_{2},N_{3},H_{A},I_{0})d\theta$$ $$={N_{1} \choose X_{1}}{N_{2} \choose X_{2}}{N_{3} \choose X_{3}}\int_{0}^{1}\theta^{X_{1}+X_{2}+X_{3}}(1-\theta)^{N_{1}+N_{2}+N_{3}-X_{1}-X_{2}-X_{3}}d\theta$$ $$=\frac{{N_{1} \choose X_{1}}{N_{2} \choose X_{2}}{N_{3} \choose X_{3}}}{(N_{1}+N_{2}+N_{3}+1){N_{1}+N_{2}+N_{3} \choose X_{1}+X_{2}+X_{3}}}$$ which is a hypergeometric distribution divided by a constant. Similarly, for $H_{B1}$ we will have: $$P(X_{1},X_{2},X_{3}|N_{1},N_{2},N_{3},H_{B1},I_{0})=\int_{0}^{1}\int_{0}^{1}P(X_{1},X_{2},X_{3},\theta_{1},\theta_{2}|N_{1},N_{2},N_{3},H_{B1},I_{0})d\theta_{1}d\theta_{2}$$ $$=\frac{{N_{2} \choose X_{2}}{N_{3} \choose X_{3}}}{(N_{1}+1)(N_{2}+N_{3}+1){N_{2}+N_{3} \choose X_{2}+X_{3}}}$$ (the ${N_{1} \choose X_{1}}$ factor cancels against the Beta integral for group 1). You can see the pattern for the others. We can calculate the odds of, say, $H_{A}$ versus $H_{B1}$ by simply dividing the above two expressions. The answer is about $4$, which means the data support $H_{A}$ over $H_{B1}$ by about a factor of $4$ - fairly weak evidence in favour of equal rates. The other posterior probabilities are given below. $$\begin{array}{c|c} \text{Hypothesis} & \text{probability} \\ \hline (H_{A}|D) & 0.018982265 \\ (H_{B1}|D) & 0.004790669 \\ (H_{B2}|D) & 0.051620022 \\ (H_{B3}|D) & 0.484155874 \\ (H_{C}|D) & 0.440451171 \\ \end{array} $$ This shows strong evidence against equal rates, but not strong evidence in favour of a definite alternative. It seems there is strong evidence that the "offshore" rate is different from the other two rates, but the evidence is inconclusive as to whether the "inshore" and "mid-channel" rates differ. This is what the chi-square test won't tell you - it only tells you that hypothesis $H_{A}$ is "crap", not what alternative to put in its place.
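A sketch of this computation in R (not part of the original answer): each hypothesis is treated as a partition of the three groups into blocks of equal rates, its log marginal likelihood is the sum of the Beta-integral terms derived above, and equal prior probabilities over the five hypotheses are assumed:

```r
X <- c(30, 10, 1)    # deaths per group
N <- c(61, 30, 11)   # group sizes

# Log marginal likelihood of one block of groups sharing a common rate,
# under a uniform Beta(1,1) prior on that rate.
block_ml <- function(idx) {
  sX <- sum(X[idx]); sN <- sum(N[idx])
  sum(lchoose(N[idx], X[idx])) - log(sN + 1) - lchoose(sN, sX)
}

hyps <- list(A  = list(1:3),          # all rates equal
             B1 = list(1, 2:3),       # theta2 = theta3 only
             B2 = list(2, c(1, 3)),   # theta1 = theta3 only
             B3 = list(3, 1:2),       # theta1 = theta2 only
             C  = list(1, 2, 3))      # all rates different
logml <- sapply(hyps, function(h) sum(sapply(h, block_ml)))
post  <- exp(logml - max(logml))
post  <- post / sum(post)
round(post, 4)
```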
null
CC BY-SA 3.0
null
2011-04-19T02:54:55.040
2011-04-19T02:54:55.040
null
null
2392
null
9722
2
null
9664
84
null
If the quantity of interest, usually a functional of a distribution, is reasonably smooth and your data are i.i.d., you're usually in pretty safe territory. Of course, there are other circumstances when the bootstrap will work as well.

### What it means for the bootstrap to "fail"

Broadly speaking, the purpose of the bootstrap is to construct an approximate sampling distribution for the statistic of interest. It's not about actual estimation of the parameter. So, if the statistic of interest (under some rescaling and centering) is $\newcommand{\Xhat}{\hat{X}_n}\Xhat$ and $\Xhat \to X_\infty$ in distribution, we'd like our bootstrap distribution to converge to the distribution of $X_\infty$. If we don't have this, then we can't trust the inferences made.

The canonical example of when the bootstrap can fail, even in an i.i.d. framework, is when trying to approximate the sampling distribution of an extreme order statistic. Below is a brief discussion.

### Maximum order statistic of a random sample from a $\mathcal{U}[0,\theta]$ distribution

Let $X_1, X_2, \ldots$ be a sequence of i.i.d. uniform random variables on $[0,\theta]$. Let $\newcommand{\Xmax}{X_{(n)}} \Xmax = \max_{1\leq k \leq n} X_k$. The distribution of $\Xmax$ is
$$ \renewcommand{\Pr}{\mathbb{P}}\Pr(\Xmax \leq x) = (x/\theta)^n \>. $$
(Note that by a very simple argument, this actually also shows that $\Xmax \to \theta$ in probability, and even almost surely, if the random variables are all defined on the same space.)

An elementary calculation yields
$$ \Pr( n(\theta - \Xmax) \leq x ) = 1 - \Big(1 - \frac{x}{\theta n}\Big)^n \to 1 - e^{-x/\theta} \>, $$
or, in other words, $n(\theta - \Xmax)$ converges in distribution to an exponential random variable with mean $\theta$.
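As a quick numerical sanity check of this limit (a simulation sketch, not part of the original argument):

```r
# n*(theta - max) should be approximately Exp(theta) for large n.
set.seed(1)
n <- 1000; theta <- 2
sim <- replicate(5000, n * (theta - max(runif(n, 0, theta))))
c(sample_mean = mean(sim), theoretical_mean = theta)   # should be close
```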
Now, we form a (naive) bootstrap estimate of the distribution of $n(\theta - \Xmax)$ by resampling $X_1, \ldots, X_n$ with replacement to get $X_1^\star,\ldots,X_n^\star$ and using the distribution of $n(\Xmax - \Xmax^\star)$ conditional on $X_1,\ldots,X_n$.

But observe that $\Xmax^\star = \Xmax$ with probability $1 - (1-1/n)^n \to 1 - e^{-1}$, and so the bootstrap distribution has a point mass at zero even asymptotically, despite the fact that the actual limiting distribution is continuous. More explicitly, though the true limiting distribution is exponential with mean $\theta$, the limiting bootstrap distribution places a point mass at zero of size $1-e^{-1} \approx 0.632$, independent of the actual value of $\theta$. By taking $\theta$ sufficiently large, we can make the probability of the true limiting distribution arbitrarily small on any fixed interval $[0,\varepsilon)$, yet the bootstrap will (still!) report that there is at least probability 0.632 in this interval! From this it should be clear that the bootstrap can behave arbitrarily badly in this setting.

In summary, the bootstrap fails (miserably) in this case. Things tend to go wrong when dealing with parameters at the edge of the parameter space.

### An example from a sample of normal random variables

There are other similar examples of the failure of the bootstrap in surprisingly simple circumstances. Consider a sample $X_1, X_2, \ldots$ from $\mathcal{N}(\mu,1)$ where the parameter space for $\mu$ is restricted to $[0,\infty)$. The MLE in this case is $\newcommand{\Xbar}{\bar{X}}\Xhat = \max(\Xbar,0)$. Again, we use the bootstrap estimate $\Xhat^\star = \max(\Xbar^\star, 0)$. Again, it can be shown that the distribution of $\sqrt{n}(\Xhat^\star - \Xhat)$ (conditional on the observed sample) does not converge to the same limiting distribution as $\sqrt{n}(\Xhat - \mu)$.

### Exchangeable arrays

Perhaps one of the most dramatic examples is for an exchangeable array.
Let $\newcommand{\bm}[1]{\mathbf{#1}}\bm{Y} = (Y_{ij})$ be an array of random variables such that, for every pair of permutation matrices $\bm{P}$ and $\bm{Q}$, the arrays $\bm{Y}$ and $\bm{P} \bm{Y} \bm{Q}$ have the same joint distribution. That is, permuting rows and columns of $\bm{Y}$ keeps the distribution invariant. (You can think of a two-way random effects model with one observation per cell as an example, though the model is much more general.)

Suppose we wish to estimate a confidence interval for the mean $\mu = \mathbb{E}(Y_{ij}) = \mathbb{E}(Y_{11})$ (due to the exchangeability assumption described above, the means of all the cells must be the same). McCullagh (2000) considered two different natural (i.e., naive) ways of bootstrapping such an array. Neither of them gets the asymptotic variance of the sample mean correct. He also considers some examples of a one-way exchangeable array and linear regression.

### References

Unfortunately, the subject matter is nontrivial, so none of these are particularly easy reads.

- P. Bickel and D. Freedman, Some asymptotic theory for the bootstrap. Ann. Stat., vol. 9, no. 6 (1981), 1196–1217.
- D. W. K. Andrews, Inconsistency of the bootstrap when a parameter is on the boundary of the parameter space. Econometrica, vol. 68, no. 2 (2000), 399–405.
- P. McCullagh, Resampling and exchangeable arrays. Bernoulli, vol. 6, no. 2 (2000), 285–301.
- E. L. Lehmann and J. P. Romano, Testing Statistical Hypotheses, 3rd ed., Springer (2005). [Chapter 15: General Large Sample Methods]
null
CC BY-SA 3.0
null
2011-04-19T03:32:57.203
2011-04-20T02:08:28.043
2011-04-20T02:08:28.043
2970
2970
null
9723
2
null
4111
3
null
There are companies that specialize in counting people. For instance, [www.lynce.es](http://www.lynce.es)$^\dagger$ (I am not affiliated with, nor have any interest whatsoever in, that company). They hang cameras over the crowds they want to count, shoot pictures, and actually count heads. They only make small adjustments when it comes to estimating people under trees or other objects that prevent direct vision. --- $\dagger$ The archived link can be found [here](https://web.archive.org/web/20110915235737/http://lynce.es/es/index.php).
null
CC BY-SA 4.0
null
2011-04-19T05:09:08.220
2022-12-08T14:15:11.347
2022-12-08T14:15:11.347
362671
892
null
9724
1
13369
null
3
505
I'm hoping to hear from someone who has worked on mouse models or similar biological analyses where there is a tendency to run 'replicates' of an experiment. I know multiple testing is a sizeable kettle of fish which is definitely relevant to this discussion. I have some applications for projects where they talk about running 3 replicates of an experiment, where each experiment has n = 3 to 7. However, there seems to be no mention of what they will do with the multiple sets of results - that is, how they will handle outcomes like success, failure, success vs. failure, failure, success, etc. It seems like this 'replicates' approach is quite common practice in this field. What are your thoughts / experiences with this situation? I know there are different types of replication, technical vs. biological; however, I've found little useful reading on this issue.
Mouse models - 'replicates' and analysis
CC BY-SA 3.0
null
2011-04-19T05:19:00.737
2011-07-22T13:09:00.340
2011-06-20T19:19:31.947
82
4226
[ "repeated-measures", "multiple-comparisons", "experiment-design", "biostatistics" ]
9727
2
null
9507
0
null
To answer the first part of my question - does a flat initial guess lead to a flat result - the answer would be "yes". Not only does a flat guess flatten the result, it also makes it unchanging (a fact I missed thanks to a small error in my algorithm). Here's a proof.

Assume that $\langle R_{r\alpha} \rangle^{(t)} = x$ for all $r$ and $\alpha$. Then:

$\hat{\pi}_{\alpha}^{(t)} = \frac1{L}\sum_{r=1}^{L}\langle R_{r\alpha} \rangle^{(t)} = \frac{Lx}{L} = x$ for each $\alpha$.

$\hat{p}_{i|\alpha}^{(t)} = \frac1{L\hat{\pi}_\alpha^{(t)}} \sum_{r:i(r)=i} \langle R_{r\alpha} \rangle^{(t)} = \frac{n_ix}{Lx} = \frac{n_i}{L}$ for each $\alpha$, where $n_i$ is the number of co-observations for which $i(r)=i$ holds.

$\hat{q}_{j|\alpha}^{(t)} = \frac1{L\hat{\pi}_\alpha^{(t)}} \sum_{r:j(r)=j} \langle R_{r\alpha} \rangle^{(t)} = \frac{n_jx}{Lx} = \frac{n_j}{L}$ for each $\alpha$, where $n_j$ is the number of co-observations for which $j(r)=j$ holds.

From these, we can calculate $\langle R_{r\alpha} \rangle^{(t+1)} = \frac{\pi_{\alpha}^{(t)}\hat{p}_{i(r)|\alpha}^{(t)}\hat{q}_{j(r)|\alpha}^{(t)}}{\sum_{v=1}^K\pi_{v}^{(t)}\hat{p}_{i(r)|v}^{(t)}\hat{q}_{j(r)|v}^{(t)}} = \frac{x(n_{i(r)}n_{j(r)})/L^2}{Kx(n_{i(r)}n_{j(r)})/L^2} = \frac1{K}$ for all $r$ and $\alpha$.

Thus, $R_{r\alpha}$ is again a constant for the next round of iteration.
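A small numeric sketch of this fixed-point argument (base R; the co-observation labels are made up): start the responsibilities flat and verify that one EM update returns the constant $1/K$:

```r
set.seed(1)
L <- 50; K <- 3
i <- sample(1:4, L, replace = TRUE)   # hypothetical row labels per co-observation
j <- sample(1:5, L, replace = TRUE)   # hypothetical column labels
R <- matrix(1 / K, L, K)              # flat initial responsibilities

pi_hat <- colMeans(R)
p_hat  <- sapply(1:K, function(a) tapply(R[, a], i, sum) / (L * pi_hat[a]))
q_hat  <- sapply(1:K, function(a) tapply(R[, a], j, sum) / (L * pi_hat[a]))

num  <- sapply(1:K, function(a) pi_hat[a] * p_hat[as.character(i), a] *
                                q_hat[as.character(j), a])
Rnew <- num / rowSums(num)
range(Rnew)   # both ends equal 1/K: the flat guess is a fixed point
```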
null
CC BY-SA 3.0
null
2011-04-19T07:10:28.757
2011-04-20T06:52:07.723
2011-04-20T06:52:07.723
4141
4141
null
9728
1
null
null
1
295
Under what circumstances would using regression with two given variables not increase accuracy of prediction?
When is there no point in using regression?
CC BY-SA 3.0
null
2011-04-19T07:19:11.553
2011-04-29T05:14:44.927
2011-04-29T05:14:44.927
183
null
[ "regression" ]
9729
1
null
null
1
5321
I was given the following question: A survey found that 89% of a random sample of 1024 American adults approved of cloning endangered animals. Find the margin of error for this survey if we want 90% confidence in our estimate of the percent of American adults who approve of cloning endangered animals. I know that for 90% Confidence, $\text{ME}\sim 0.82/\sqrt{n}$. I attempted using this formula with n equal to both 1024 and (.89)1024. I got 0.025625 and 0.02716, respectively. The answer given for the problem is 1.61%. I do not understand where I went wrong. Perhaps I am using the Margin of Error formula incorrectly? Thanks. :)
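For what it's worth, a quick sketch of the standard large-sample formula $\text{ME} = z\sqrt{\hat p(1-\hat p)/n}$, which uses the observed proportion rather than the conservative $0.5$ behind the $0.82/\sqrt{n}$ shortcut:

```r
p_hat <- 0.89
n     <- 1024
z     <- qnorm(0.95)                       # 1.645 for two-sided 90% confidence
me    <- z * sqrt(p_hat * (1 - p_hat) / n)
round(me, 4)                               # 0.0161, i.e. the 1.61% in the key
```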
How to compute margin of error with a given confidence interval?
CC BY-SA 3.0
null
2011-04-19T08:27:19.990
2011-04-19T12:50:04.660
2011-04-19T12:50:04.660
930
4228
[ "self-study", "sampling", "survey" ]
9731
1
null
null
4
263
> Possible Duplicate: Threshold for correlation coefficient to indicate statistical significance of a correlation in a correlation matrix ### Context I am doing an exploratory study to investigate the relationship between a drug (actually measured in two ways - by direct and indirect methods) and 15 various parameters. There are three groups of individuals, on which I compute correlation statistics separately ($n=11$, $n=11$ and $n=24$ - the first two are paired). Obviously, the number of statistical tests is large, and I would like to control for multiple testing. I am using Spearman correlation as the data are limited and do not (always) follow a normal distribution. I am working in R. ### Questions - How should I control for multiplicity of testing? This is a hypothesis-generating study, and I would rather prefer to use less conservative methods. I have been looking into Benjamini–Hochberg's procedure (known in R as `"BH"` or `"fdr"`), which allows one to control the false discovery rate. However, I am not sure if I would violate dependence assumptions. And I am not sure if this can be used with correlation statistics. Perhaps I should not adjust for multiple testing at all. To my mind, if both direct and indirect methods give the same association, false positives are highly unlikely.
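For reference, applying the BH adjustment in R is a one-liner; a sketch with made-up p-values (in practice these would come from the correlation tests):

```r
pvals <- c(0.001, 0.008, 0.020, 0.041, 0.120, 0.300, 0.550)  # hypothetical
p.adjust(pvals, method = "BH")   # Benjamini-Hochberg adjusted p-values
```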
Adjust a large set of Spearman correlation analyses for multiple testing
CC BY-SA 3.0
null
2011-04-19T08:55:24.803
2011-06-07T04:21:27.120
2017-04-13T12:44:26.710
-1
4229
[ "correlation", "multiple-comparisons", "spearman-rho" ]
9733
2
null
9728
5
null
- When the model assumptions are valid, but the predictor and response are uncorrelated.
- When the model assumptions are invalid (e.g. the noise process is heteroskedastic), in which case a regression model may fit the data very well but provide very poor out-of-sample predictions.

See also the excellent point about extrapolation made by probabilityislogic.
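A one-line illustration of the first case (made-up data):

```r
# When the predictor carries no information about the response, the
# regression adds essentially nothing over predicting with the sample mean.
set.seed(1)
x <- rnorm(200)
y <- rnorm(200)          # independent of x by construction
fit <- lm(y ~ x)
summary(fit)$r.squared   # near zero: predictions barely beat mean(y)
```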
null
CC BY-SA 3.0
null
2011-04-19T09:43:04.257
2011-04-19T14:18:17.637
2011-04-19T14:18:17.637
887
887
null
9734
1
31748
null
4
603
Is it possible to use a continuous predictor in BUGS? The simplest way of testing this would be turning the size variable in the alligators example from discrete to continuous. Both the WinBUGS and JAGS examples use combinations of covariate values as indices, as in

```
X[i,j,] ~ dmulti( p[i,j,] , n[i,j] );
```

where `i` is the lake index (4 possible values) and `j` is the size index (2 possible values). With this approach, a continuous size variable would mean an infinite number of indices. There must be something I'm missing here.
How to model logistic regression with continuous predictor in Bugs?
CC BY-SA 3.0
null
2011-04-19T09:47:27.687
2012-08-05T05:46:13.763
2011-04-19T12:53:18.977
930
3280
[ "bayesian", "logistic", "bugs" ]
9735
1
9866
null
8
1244
... (optional) within the context of Google Web Optimizer. Suppose you have two groups and a binary response variable. Now you get the following outcome: - Original: 401 trials, 125 successful trials - Combination16: 441 trials, 141 successful trials The difference is not statistically significant; however, one can calculate the probability that Combination16 will beat Original. To calculate "Chance to beat Original" I have used a Bayesian approach, i.e. performing a two-dimensional Monte Carlo integration over the Bayesian-style confidence intervals (beta distribution, (0,0) prior). Here is the code: ``` trials <- 10000 resDat<-data.frame("orig"=rbeta(trials,125+1,401-125+1), "opt"=rbeta(trials,144+1,441-144+1)) length(which(resDat$opt>resDat$orig))/trials ``` This results in 0.6764. Which technique would a frequentist use to calculate "Chance to beat ..."? Maybe the power function of Fisher's exact test? Optional: Context of Google Web Optimizer Google Web Optimizer is a tool for controlling multivariate testing or A/B testing. This is only background, since it should not matter for the question itself. The example presented above was taken from the explanation page of Google Web Optimizer (GWO), which you can find [here](https://www.google.com/analytics/siteopt/siteopt/help/techoverview.html#stats) (please scroll down to the section "Estimated Conversion Rate Ranges"), specifically from figure 2. Here GWO delivers 67.8% for "Chance to beat Original", which differs slightly from my result. I guess Google uses a more frequentist-like approach, and I wondered: what could it be? EDIT: Since this question was close to disappearing (I guess because it was too specific), I have rephrased it to be of general interest.
How does a frequentist calculate the chance that group A beats group B regarding binary response
CC BY-SA 3.0
null
2011-04-19T09:53:48.520
2011-05-06T11:17:56.070
2011-05-06T11:17:56.070
264
264
[ "bayesian", "ab-test" ]
9736
1
null
null
5
219
I have a time series (X) representing a natural phenomenon (wind speed, measured every 15 minutes) and I have to create similar time series (up to 20, Xdi, i=1,...,20) with the same structure (same average, same standard deviation, same percentile distribution...) but with a predetermined correlation (about 0.7) between each other. Is there any defined method for this operation? Can you provide a link to a book, a paper, a page, or just the name of the method? Thank you very much, Andrew --- A couple of clarifications: by "pre-defined correlation" I mean that if I have a seed time series X and I want to create three derived time series Xd1, Xd2, Xd3, then the correlation between any two of the time series must be (almost) equal to a chosen value (e.g.: 0.7). For example Correlation(Xdi, Xdj) = 0.7 The comment of charles.y.zheng answers the question (thanks!), but the resulting time series (AZ in the example) does not necessarily have the same autocorrelation as the original seed. (In the original time series 0 tends to be followed by 0 and 1 tends to be followed by 1; the values are clustered.) Sorry not to have specified this requirement as well, but I only noticed it after trying the proposed solution. I guess that I can fix the problem by manipulating the values in Z. This adds a second part to the question: is there a defined method to create a time series with a predetermined autocorrelation? p.s.: sorry if the language is not correct; I'm not a statistician but a programmer. I try to do my best, but if there is something not clear, just ask for details and clarifications.
How to create n time series characterised by a defined average and correlation?
CC BY-SA 3.0
null
2011-04-19T11:00:41.787
2012-03-30T15:53:40.423
2011-04-21T13:57:37.540
4230
4230
[ "time-series", "correlation" ]
9737
2
null
9729
3
null
Because you are dealing with proportions, the variance is given by: $$\frac{p(1-p)}{n}$$ And so the 90% CI margin of error (ME) is $1.645\times \sqrt{\frac{p(1-p)}{n}}=1.645\times \sqrt{\frac{0.89(1-0.89)}{1024}}=0.016$
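A quick way to check this arithmetic in R (a sketch; the values simply restate the numbers above):

```
# Margin of error for a proportion at a given confidence level
p <- 0.89      # sample proportion
n <- 1024      # sample size
conf <- 0.90
z <- qnorm(1 - (1 - conf) / 2)       # 1.645 for a 90% CI
me <- z * sqrt(p * (1 - p) / n)
round(me, 3)                          # 0.016
```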
null
CC BY-SA 3.0
null
2011-04-19T11:39:35.177
2011-04-19T11:39:35.177
null
null
2392
null
9738
1
null
null
5
2334
I am attempting to build a Multinomial Logit model with dummy variables of the following form: - The dependent variable represents 0-8 discrete choices. - Dummy Variable 1: 965 dummy vars - Dummy Variable 2: 805 dummy vars The data set I am using has the dummy columns pre-created, so it's a table of 72,381 rows and 1770 columns. The first 965 columns represent the dummy columns for Variable 1; the next 805 columns represent the dummy columns for Variable 2. I'm on a Sun Grid Machine at my university, so memory won't be an issue... I have been able to generate the factors and generate `mlogit` data using code: ``` mldata<-mlogit.data(mydata, varying=NULL, choice="pitch_type_1", shape="wide") ``` my `mlogit` data looks like: ``` "dependent_var","A variable","B Var","chid","alt" FALSE,"110","19",1,"0" FALSE,"110","19",1,"1" FALSE,"110","19",1,"2" FALSE,"110","19",1,"3" FALSE,"110","19",1,"4" TRUE,"110","19",1,"5" FALSE,"110","19",1,"6" FALSE,"110","19",1,"7" FALSE,"110","19",1,"8" FALSE,"110","19",2,"0" FALSE,"110","19",2,"1" FALSE,"110","19",2,"2" FALSE,"110","19",2,"3" FALSE,"110","19",2,"4" FALSE,"110","19",2,"5" TRUE,"110","19",2,"6" FALSE,"110","19",2,"7" FALSE,"110","19",2,"8" TRUE,"110","561",3,"0" ... ``` The `mldata` contains 651,431 rows. If I try to run this full data set I get the following error: ``` > mlogit.model<- mlogit(dependent_var~0|A+B, data = mldata, reflevel="0") Error in model.matrix.default(formula, data) : allocMatrix: too many elements specified Calls: mlogit ... model.matrix.mFormula -> model.matrix -> model.matrix.default Execution halted ``` Smaller datasets (`mldata` with only 595 rows) and `mlogit` works fine and generates the expected regression output. Is there a problem with `mlogit` and huge datasets? I suppose this is perhaps not the best way to assess this kind of data, but I am trying to replicate a previous analysis that was completed on a similar amount of similar data.
Problem building multinomial logit model formula on huge data in R
CC BY-SA 3.0
null
2011-04-19T12:22:14.087
2014-05-18T00:20:31.843
2014-05-18T00:07:35.877
7291
null
[ "r", "logistic", "multinomial-distribution" ]
9739
1
null
null
13
5738
I have a set of sea surface temperature (SST) monthly data and I want to apply some cluster methodology to detect regions with similar SST patterns. I have a set of monthly data files running from 1985 to 2009 and want to apply clustering to each month as a first step. Each file contains gridded data for 358416 points where approximately 50% are land and are marked with a 99.99 value that will be NA. Data format is: ``` lon lat sst -10.042 44.979 12.38 -9.998 44.979 12.69 -9.954 44.979 12.90 -9.910 44.979 12.90 -9.866 44.979 12.54 -9.822 44.979 12.37 -9.778 44.979 12.37 -9.734 44.979 12.51 -9.690 44.979 12.39 -9.646 44.979 12.36 ``` I have tried CLARA clustering method and got some apparently nice results but it also seems to me that is just smoothing (grouping) isolines. Then I am not sure this is the best clustering method to analyse spatial data. Is there any other clustering method devoted to this type of datasets? Some reference would be good to start reading. Thanks in advance.
Clustering spatial data in R
CC BY-SA 3.0
null
2011-04-19T13:16:03.780
2016-09-19T00:56:11.053
2011-04-20T12:48:42.013
null
4147
[ "r", "clustering", "spatial" ]
9740
2
null
9728
5
null
Considering the OLS case $$Y_{i}=\alpha+\beta X_{i}+\epsilon_{i}$$ One case is when you try to predict using values of $X_{i}$ outside your sample range (extrapolation). Say your data had $1<X_{i}<10$ in the sample, and you try to predict for a new value $X=100$. In OLS you have a prediction interval for a new value $X_{p}$ of: $$\hat{Y}_{p}=\overline{Y}+\hat{\beta}(X_{p}-\overline{X})\pm t_{\alpha/2}^{(n-2)}\hat{\sigma}_{Y|X}\sqrt{1+\frac{1}{n}+\frac{(X_{p}-\overline{X})^{2}}{\sum_{i=1}^{n}(X_{i}-\overline{X})^{2}}}$$ Compare this to a prediction interval without using $X$: $$\hat{Y}_{p}=\overline{Y}\pm t_{\alpha/2}^{(n-1)}\hat{\sigma}_{Y}\sqrt{1+\frac{1}{n}}$$ On comparing the two, if the new covariate $X_{p}$ is far enough away from the mean value $\overline{X}$, the OLS prediction interval gets wider and wider, whereas the unconditional interval remains constant. Further, if the $X$ variable is uncorrelated with $Y$ so that $\hat{\beta}=0$, then $\hat{\sigma}_{Y|X}\geq \hat{\sigma}_{Y}$ and the conditional prediction interval is always the wider of the two.
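A small R sketch of the extrapolation point above, using simulated data (none of the numbers come from the original question):

```
set.seed(1)
x <- runif(50, 1, 10)
y <- 2 + 0.5 * x + rnorm(50)
fit <- lm(y ~ x)

# Prediction intervals at the sample mean of x and far outside the sample range
newdata <- data.frame(x = c(mean(x), 100))
pi <- predict(fit, newdata, interval = "prediction")

# The interval width grows with the distance of the new x from mean(x)
pi[, "upr"] - pi[, "lwr"]
```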
null
CC BY-SA 3.0
null
2011-04-19T13:23:26.303
2011-04-19T13:23:26.303
null
null
2392
null
9741
1
9782
null
5
1995
I'm interested in assessing model performance on data with an ordinal categorical dependent variable. For my use case, the ideal metric would: - Not assume equal intervals between classes or that recoding to a continuous scale is appropriate - Be scale independent - Give preference to models that rank the outcomes accurately, with higher penalties for mis-ranking classes with a larger degree of difference (e.g., Excellent > Poor > Good is better than Excellent > Very Poor > Good) - Accept continuous predictions and be indifferent to their distributions For example, suppose we have the following test set, where "response" is 5-category ordinal response and "pred1", "pred2", and "pred3" are predictions: ``` id response pred1 pred2 pred3 1 Excellent 1.00 150 10 2 Good .80 39 9 3 Good .85 12 5 4 Fair .40 11 4 5 Poor .39 10 3 6 Very Poor .20 3 2 . . . . . . . . . . ``` For my purposes, the ideal metric would score all three predictions as equally accurate since all three perfectly rank the response. What are my options and the benefits/drawbacks to each? Bonus points for references to R packages or functions.
Model performance metrics for ordinal response
CC BY-SA 4.0
null
2011-04-19T13:57:46.837
2018-08-13T17:00:06.823
2018-08-13T17:00:06.823
7290
1611
[ "r", "model-selection", "predictive-models", "ordinal-data" ]
9742
2
null
4884
7
null
If the treatment is randomly assigned, the aggregation won't matter in determining the effect of the treatment (or the average treatment effect). I use lowercase in the following examples to refer to disaggregated items and uppercase to refer to aggregated items. Let's state a priori a model of individual decision making, where $y$ is the outcome of interest, and $x$ represents when an observation received the treatment; $y = \alpha + b_1(x) + b_2(z) + e$ When one aggregates, one is simply summing random variables. So one would observe; $\sum y = \sum\alpha + \beta_1(\sum x) + \beta_2(\sum z) + \sum e$ So why should $\beta_1$ (divided by its total number of elements, $n$) equal $b_1$? Because by the nature of random assignment all of the individual components of $x$ are orthogonal (i.e. the variance of $(\sum x)$ is simply the sum of the individual variances), and all of the individual components are uncorrelated with any of the $z$'s or $e$'s in the above equation. Perhaps an example of summing two random variables will be more informative. So say we aggregate two random variables from the first equation presented. What we observe is; $(y_i + y_j) = (\alpha_1 + \alpha_2) + \beta_1(x_i + x_j) + \beta_2(z_i + z_j) + (e_1 + e_2)$ This can subsequently be broken down into its individual components; $(y_i + y_j) = \alpha_1 + \alpha_2 + b_1(x_i) + b_2(x_j) + b_3(z_i) + b_4(z_j) + e_1 + e_2$ By the nature of random assignment we expect $x_i$ and $x_j$ in the above statement to be independent of all the other parameters ($z_i$, $z_j$, $e_1$, etc.) and of each other. Hence the effect estimated from the aggregated data is equal to the effect from the disaggregated data (or $\beta_1$ equals the sum of $b_1$ and $b_2$ divided by two in this case). This exercise is informative, though, for seeing where aggregation bias will come into play.
Any time the components of the aggregated variable are not independent of the other components, you are creating an inherent confound in the analysis (e.g. you cannot independently identify the effects of each individual item). So, going with your "blue day" scenario, one might have a model of individual behavior; $y = \alpha + b_1(x) + \beta_2(Z) + b_3(x*Z) + e$ Where $Z$ refers to whether the observation was taken on a blue day and $x*Z$ is the interaction of the treatment effect with it being a blue day. It should be fairly obvious why it would be problematic if you take all of your observations on one day. If treatment is randomly assigned, $b_1(x)$ and $\beta_2(Z)$ should be independent, but $b_1(x)$ and $b_3(x*Z)$ are not. Hence you will not be able to uniquely identify $b_1$, and the research design is inherently confounded. You could potentially make a case for doing the data analysis on the aggregated items (aggregated values tend to be easier to work with, easier to find correlations in, less noisy, and tend to have easier distributions to model). But if the real question is to identify $b_1(x)$, then the research design should be structured to identify it appropriately. While I made an argument above for why it does not matter in a randomized experiment, in many settings the assumption that all of the individual components are independent is violated. If you expect specific effects on specific days, aggregating the observations will not help you identify the treatment effect (it is actually a good argument for prolonging the observations to make sure no inherent confounds are present).
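The argument that aggregation does not matter under random assignment can be checked with a small simulation; this is just an illustrative sketch (all coefficients and names are made up):

```
set.seed(42)
n <- 10000                      # individual observations (even number)
x <- rbinom(n, 1, 0.5)          # randomly assigned treatment
z <- rnorm(n)                   # covariate
y <- 1 + 2 * x + 0.5 * z + rnorm(n)

# Individual-level estimate of the treatment effect
coef(lm(y ~ x + z))["x"]

# Aggregate consecutive pairs of observations and re-estimate
agg <- function(v) tapply(v, rep(1:(n / 2), each = 2), sum)
coef(lm(agg(y) ~ agg(x) + agg(z)))["agg(x)"]

# Both estimates should be close to the true effect of 2
```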
null
CC BY-SA 3.0
null
2011-04-19T14:14:37.803
2011-06-23T17:24:00.630
2011-06-23T17:24:00.630
1036
1036
null
9743
2
null
9629
6
null
Using the extra information you gave (being that quite some of the values are 0), it's pretty obvious why your solution returns nothing. For one, you have a probability that is 0, so : - $e_i$ in the solution of Henry is 0 for at least one i - $np_i$ in the solution of @probabilityislogic is 0 for at least one i Which makes the divisions impossible. Now saying that $p=0$ means that it is impossible to have that outcome. If so, you might as well just erase it from the data (see comment of @cardinal). If you mean highly improbable, a first 'solution' might be to increase that 0 chance with a very small number. Given : ``` X <- c(0, 0, 0, 8, 6, 2, 0, 0) p <- c(0.406197174, 0.088746395, 0.025193306, 0.42041479, 0.03192905, 0.018328576, 0.009190708, 0) ``` You could do : ``` p2 <- p + 1e-6 chisq.test(X, p2) Pearson's Chi-squared test data: X and p2 X-squared = 24, df = 21, p-value = 0.2931 ``` But this is not a correct result. In any case, one should avoid using the chi-square test in these borderline cases. A better approach is using a bootstrap approach, calculating an adapted test statistic and comparing the one from the sample with the distribution obtained by the bootstrap. In R code this could be (step by step) : ``` # The function to calculate the adapted statistic. # We add 0.5 to the expected value to avoid dividing by 0 Statistic <- function(o,e){ e <- e+0.5 sum(((o-e)^2)/e) } # Set up the bootstraps, based on the multinomial distribution n <- 10000 bootstraps <- rmultinom(n, size=sum(X), p=p) # calculate the expected values expected <- p*sum(X) # calculate the statistic for the sample and the bootstrap ChisqSamp <- Statistic(X, expected) ChisqDist <- apply(bootstraps, 2, Statistic, expected) # calculate the p-value p.value <- sum(ChisqSamp < sort(ChisqDist))/n p.value ``` This gives a p-value of 0, which is much more in line with the difference between observed and expected. Mind you, this method assumes your data is drawn from a multinomial distribution. 
If this assumption doesn't hold, the p-value doesn't hold either.
null
CC BY-SA 4.0
null
2011-04-19T14:48:08.320
2022-01-02T13:36:16.320
2022-01-02T13:36:16.320
11887
1124
null
9744
1
9793
null
5
1587
I'm trying to understand the following claim: > if the $t$-statistic is greater than zero, it indicates that the variable is explosive... but does that mean it has a unit root? This is in the context of the Dickey–Fuller test.
What is explosive variable?
CC BY-SA 3.0
null
2011-04-19T15:01:13.000
2011-04-20T15:56:42.650
2011-04-20T15:56:42.650
2645
333
[ "hypothesis-testing", "stationarity" ]
9745
1
9746
null
6
4272
How would you go about explaining "Stambaugh bias" in simple, relatively non-technical language?
Stambaugh bias definition
CC BY-SA 4.0
null
2011-04-19T15:23:56.890
2021-02-15T07:23:27.413
2021-02-15T07:23:27.413
53690
333
[ "time-series", "autocorrelation", "bias" ]
9746
2
null
9745
7
null
I'm not sure you can explain this term without using some technical terms, unfortunately. I'll give it my best shot. Some definitions first: - Bias: the difference between the expectation of an estimator and the true value of the parameter you're estimating. - OLS: Ordinary Least Squares; a method for solving a regression problem. - Autoregressive process (AR): (via Wikipedia) Stambaugh bias occurs when you perform regression on a lagged stochastic input. Essentially, when you do this, you have to use an estimate for the input (regressor), which requires estimating autocorrelation coefficients. The bias in the autocorrelation coefficients is then proportional to the bias in the slope coefficient's estimate from the OLS. You can correct for this if you know that your method for computing autocorrelation coefficients is biased. The original paper really isn't too complicated, so long as you know both what an AR process is and how OLS regression works: [Paper](http://finance.wharton.upenn.edu/~stambaugh/bias_reg.pdf).
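The downward small-sample bias in OLS estimates of autocorrelation coefficients (the ingredient driving Stambaugh bias) is easy to see in a simulation; this is just an illustrative sketch, with all settings made up:

```
set.seed(123)
rho <- 0.9; n <- 50; nsim <- 5000
est <- replicate(nsim, {
  x <- as.numeric(arima.sim(list(ar = rho), n = n))
  coef(lm(x[-1] ~ x[-n]))[2]    # OLS estimate of the AR(1) coefficient
})
mean(est) - rho                  # typically clearly negative at this n
```

A bias correction would target this average shortfall, which is roughly proportional to 1/n.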
null
CC BY-SA 3.0
null
2011-04-19T15:37:25.683
2011-04-19T15:37:25.683
null
null
781
null
9747
2
null
9667
14
null
If you're coming from a mathematics background, and you want to learn time series, it's hard to go wrong with a combination of: - The Analysis of Time Series (Chatfield): introduction at the undergraduate level - Fourier Analysis of Time Series (Bloomfield): introduction to Fourier methods at the undergraduate level and after you've gone through those two and learned the basics, proceed to: - Time Series: Theory and Methods (Brockwell & Davis): excellent high-level undergraduate / starting graduate-level book - Spectral Analysis and Time Series (Priestley): excellent graduate-level text and if you become interested in spectrum estimation, the best book I'm aware of is: - Spectral Analysis for Physical Applications (Percival and Walden): more of an engineering flavour, but lots of great examples and carefully written algorithms that you can turn into code. When I want to look up something I've seen before in classical time series methods, I mostly use Priestley. It's not an easy read by any means, but it's very well written, and you can go back to it and learn new things every time. Since you're coming from a mathematics background, you shouldn't have too much issue with any of the probabilistic notation, especially if you've had some measure theory. If I'm reviewing an algorithm for spectral methods, I use Percival & Walden: it's the only good book I'm aware of that covers modern spectrum estimation techniques without diverging too strongly into wavelets or time-frequency methods. I would encourage you to stay away from focused books on econometrics or any area of time series where the focus is on one particular area, as nonstandard notation and terminology tends to develop within these subfields. If it's your first approach to time series, start with a couple of good general undergraduate books (1 and 2 are decent, and have lots of examples that you can work through on your own with R). 
Only after you know the basics should you venture into the world of specific subfields and read books there.
null
CC BY-SA 3.0
null
2011-04-19T15:57:14.973
2011-04-19T15:57:14.973
null
null
781
null
9748
1
9750
null
3
445
First of all, I’m new to statistics and this is the first time I am trying to apply it to a real-world problem. I am analysing a series of observations of a variable over all the weeks of a year. During certain weeks an event happened that I believe has impacted the variable, and I want to check for this. The values are technically a time series; however, I only need to know whether the event has an impact, so I did an unpaired (independent-samples) t-test. My reasoning is that, if I split the series of observations into a group of observations from weeks when the event happened and one from weeks when it didn’t, then I can compare the mean of each group using a t-test. Is this reasoning valid? Many thanks, David @GaBorgulya, I only have ONE value for each week. @Gavin Simpson No, I don't think that the values are autocorrelated. I am working with human behaviour and I believe that the event has an effect, but it is not as deterministic as it would be if the data were autocorrelated. @Wayne, the event is random.
Can I split a series of observations of a variable over time into two groups instead of working with time series?
CC BY-SA 3.0
null
2011-04-19T15:59:49.677
2011-04-20T08:37:56.523
2011-04-20T08:37:56.523
4233
4233
[ "time-series", "statistical-significance", "mean", "t-test" ]
9749
1
null
null
1
3859
I have a set of data that are binomial, and am comparing them across 9 years. The first 5 years have low sample sizes ($~n=20$) and the last 4 have $n>100$. I've run a glm in R with the family set to "binomial", and the results look reasonable. However, when I did the multiple comparisons afterwards using the [multcomp](http://cran.r-project.org/web/packages/multcomp/index.html) package, as shown below ``` cs_comp<-glht(model2, linfct = mcp(bin_Year= "Tukey")) ``` I got some really unexpected results that do not coincide with a bar graph I made of the means $\pm$ SE (essentially, years that were not significantly different from each other that I'd expect to be AND years that were significantly different that I didn't expect to be). I've been told that this is likely due to the unequal sample sizes; of the 9 years of data I have, the last 4 years have 100+ data points, while the first 5 years have 20-25 data points. I've also had the suggestion that I use Wald's CIs to determine pairwise comparisons by looking for overlapping CI's. Does anyone know if this is a better approach to unequal samples sizes of this magnitude, or if there is another (even better) multiple comparison method for this? If so, any tips on how to approach it in R? I have figured out how to make confidence intervals in R, but not Wald's. EDIT: I've found somewhere that one can use `confint.default(model)` to obtain Wald's 95% CI's, however, this results in completely different numbers than were provided to me by a friend in a software program I don't have access to (and can't remember what it is). The reason I'm searching now for this is the CI's that he obtained when running this data set for me are only to 2 decimal places, and I need more to determine if there is true overlap. Furthermore, if I've obtained CI's from the `'confint.default()` code that are negative or higher than 1, I'm almost certain they cannot be correct, since I have a binomial data set with only 0's and 1's...
Binomial GLM post-hoc tests for unequal sample sizes
CC BY-SA 4.0
null
2011-04-19T16:14:33.530
2018-08-11T15:14:28.243
2018-08-11T15:14:28.243
11887
4238
[ "r", "binomial-distribution", "generalized-linear-model", "post-hoc" ]
9750
2
null
9748
2
null
Your reasoning sounds reasonable to me, although I have the feeling you are stretching the independence assumptions of t tests a little. Therefore, you should keep two things in mind. First, the sizes of the two groups (weeks with the event versus weeks without the event) should be comparable; e.g., 20 versus 30 would be fine, I guess. Second, your observations are not independent but follow a rule (weeks follow each other deterministically). Therefore, the occurrence of the events should be uncorrelated with this rule (i.e., the order of the weeks). This is especially important if the dv (your measured variable) is influenced by the order of the weeks. But if you can rule out both of these issues (correlation of the event with the order of the weeks, and of the order with the dv), you are good to go.
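As a sketch of the comparison being discussed (fake weekly data; variable names are made up):

```
set.seed(7)
weeks <- 52
event <- rbinom(weeks, 1, 0.4)           # 1 = the event happened that week
value <- 10 + 2 * event + rnorm(weeks)   # the measured variable

# Independent two-sample (Welch) t-test: event weeks vs non-event weeks
t.test(value ~ event)

# Worth checking the independence assumption, e.g. the lag-1 autocorrelation
acf(value, plot = FALSE)$acf[2]
```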
null
CC BY-SA 3.0
null
2011-04-19T16:25:56.253
2011-04-19T16:25:56.253
null
null
442
null
9751
1
null
null
69
26610
I often hear that post hoc tests after an ANOVA can only be used if the ANOVA itself was significant. - However, post hoc tests adjust $p$-values to keep the global type I error rate at 5%, don't they? - So why do we need the global test first? - If we don't need a global test is the terminology "post hoc" correct? - Or are there multiple kinds of post hoc tests, some assuming a significant global test result and others without that assumption?
Do we need a global test before post hoc tests?
CC BY-SA 3.0
null
2011-04-19T16:51:22.190
2016-09-06T20:40:41.843
2016-09-06T20:40:41.843
49647
4176
[ "anova", "statistical-significance", "post-hoc" ]
9752
1
9754
null
4
398
I have a large set of customer data. For these customers, I have devised a customer loyalty score which is a measure of the loyalty of the customer. I want to find the features that are strongly associated/correlated with this score. Features could be number of purchases at various merchant types. One obvious answer would to be just to calculate the correlation for each feature with the customer loyalty score and see which have the highest correlations. Is this preferred way of doing this or are there better techniques?
Suggestions for identifying key features
CC BY-SA 3.0
null
2011-04-19T17:18:10.903
2011-04-20T12:50:21.000
2011-04-20T12:50:21.000
null
4235
[ "correlation", "feature-selection" ]
9753
2
null
9751
29
null
(1) Post hoc tests might or might not achieve the nominal global Type I error rate, depending on (a) whether the analyst is adjusting for the number of tests and (b) to what extent the post-hoc tests are independent of one another. Applying a global test first is pretty solid protection against the risk of (even inadvertently) uncovering spurious "significant" results from post-hoc data snooping. (2) There is a problem of power. It is well known that a global ANOVA F test can detect a difference of means even in cases where no individual t-test of any of the pairs of means will yield a significant result. In other words, in some cases the data can reveal that the true means likely differ but it cannot identify with sufficient confidence which pairs of means differ.
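A sketch of the usual workflow (global test first, then post hoc comparisons), with simulated data and made-up effect sizes:

```
set.seed(2)
g <- gl(3, 30, labels = c("a", "b", "c"))
y <- rnorm(90, mean = c(0, 0.4, 0.8)[g])

fit <- aov(y ~ g)
summary(fit)     # global F test
TukeyHSD(fit)    # post hoc pairwise comparisons with family-wise adjustment
```

With borderline effect sizes the F test can come out significant while no single Tukey comparison does, which is the power point in (2) above.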
null
CC BY-SA 3.0
null
2011-04-19T17:22:09.467
2011-04-19T17:22:09.467
null
null
919
null
9754
2
null
9752
2
null
I understand that the loyalty score is calculated on the basis of some data. If your features include components that are used in calculating the loyalty score, they will trivially appear influential. Multivariate techniques are probably more useful than pairwise correlations: - they can detect weaker features that may be useful in combination with stronger ones - they can reveal that some features have very similar information content. The simplest way to start could be multiple linear regression, although other methods may be better depending on many conditions.
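A minimal sketch of that multiple-regression starting point (all variable names and data are hypothetical):

```
set.seed(1)
n <- 500
dat <- data.frame(
  n_purchases  = rpois(n, 5),
  n_complaints = rpois(n, 1),
  tenure_years = runif(n, 0, 10)
)
# Fake loyalty score driven by two of the three features
dat$loyalty <- with(dat, 0.5 * n_purchases + 0.3 * tenure_years + rnorm(n))

fit <- lm(loyalty ~ ., data = dat)
summary(fit)   # the t statistics indicate which features carry signal jointly
```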
null
CC BY-SA 3.0
null
2011-04-19T17:58:00.073
2011-04-19T17:58:00.073
null
null
3911
null
9755
2
null
9752
1
null
Sounds like Business Intelligence work (http://en.wikipedia.org/wiki/Business_intelligence). Could you confirm whether it's a customer database or a survey that you ran? Both? Is it from a CRM database? Are customers segmented? Demographically/Psychographically? We need more detail on what you have. If it's a customer database, correlations tell a story about how features load onto your score, but not the only story (cor != cause). If you have transactional information you can run survival analysis and calculate lifetime value (always useful). We need to know a lot more about your variables in order to make recommendations on "what to do with it".
null
CC BY-SA 3.0
null
2011-04-19T18:01:06.623
2011-04-19T18:01:06.623
null
null
776
null
9756
1
null
null
6
724
I have trained an SVM Regression model using training data, $x_1,x_2,\dots,x_N$. I want to perform active learning to improve the model; i.e., I want to add more samples to the training data and relearn a better model, and to choose these new samples in such a way as to maximize the resulting model performance. For an SVM classifier, a useful heuristic for active learning is to choose samples that fall close to the decision boundary; i.e., for a particular sample, the 'confidence' (c) can be computed using the SVM. Samples that have small |c| are more likely to improve the decision boundary in the retrained SVM. Any suggestions on how to do this for SVM regression? (I can generate samples at will, but it is costly to label them, so I want to know if I can use the 'already-trained' regression-SVM to help me decide which ones to label)
Active learning using SVM Regression
CC BY-SA 3.0
null
2011-04-19T18:10:25.063
2013-11-14T04:09:49.783
2011-04-20T13:00:49.357
null
4218
[ "regression", "cross-validation", "svm" ]
9757
2
null
9756
7
null
Active learning requires a compromise between exploration and exploitation. If the model you have so far is bad and you exploit it to decide where to label more data, it will probably suggest bad places, as your current hypothesis is poor. It is a good idea to do some random exploration as well, as that is about the best way to ensure that you will eventually label the data that shows the current hypothesis to be incorrect. For regression models, I would suggest that Gaussian process regression is a better bet for active learning, as it gives you predictive error bars, so you can query the labels for points where the model is most uncertain. See for example [this paper](http://www.computer.org/portal/web/csdl/doi/10.1109/IJCNN.2000.861310), which looks like an interesting place to start. I have worked on active learning in classification, and the results have been rather mixed for all strategies. Often just picking points randomly (i.e. all exploration, no exploitation) works best. I am looking into active learning for regression problems at the moment and intending to use GPs; I'll add to my answer if I find anything that seems to work better than exploration only.
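A toy sketch of the variance-based query strategy mentioned above, with a hand-rolled Gaussian process (the kernel, noise level and data are all illustrative):

```
# Squared-exponential kernel
rbf <- function(a, b, ell = 0.3) exp(-outer(a, b, "-")^2 / (2 * ell^2))

set.seed(3)
x_train <- runif(8)                        # labelled inputs
y_train <- sin(2 * pi * x_train) + rnorm(8, sd = 0.1)
x_pool  <- seq(0, 1, length.out = 200)     # unlabelled candidate points

K   <- rbf(x_train, x_train) + diag(0.1^2, length(x_train))
Ks  <- rbf(x_pool, x_train)
Kss <- rbf(x_pool, x_pool)

# GP posterior predictive variance at each candidate point
post_var <- diag(Kss - Ks %*% solve(K, t(Ks)))

x_pool[which.max(post_var)]   # next point to query the label for
```

In practice you would mix such queries with some purely random ones, per the exploration argument above.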
null
CC BY-SA 3.0
null
2011-04-19T18:20:25.197
2011-04-19T18:20:25.197
null
null
887
null
9758
2
null
9752
1
null
In addition to the suggestions from the previous answers, I would suggest the `catdes` function from the [FactoMineR](ftp://ftp.ccu.edu.tw/pub/languages/CRAN/web/packages/FactoMineR/FactoMineR.pdf) package in R. It gives a description of the categories of one factor by qualitative and/or quantitative variables. The output is briefly explained in the manual, but I think it is worth having a look at the reference mentioned there. The idea is that you get a list of the variables that best characterise the factor, along with a p-value to assess significance. Note 1 I think that the function is particularly used in a "cluster analysis" context. Note 2 It requires you to discretise your "customer loyalty score"... By the way, about three years ago I used that function and I had a question about it. I wrote an email to the author (mentioned in the manual) and he kindly answered me!
null
CC BY-SA 3.0
null
2011-04-19T18:24:28.563
2011-04-19T18:37:16.410
2011-04-19T18:37:16.410
3019
3019
null
9759
1
9798
null
16
13573
I am about to dive into learning R and my learning project will entail applying mixed- or random-effects regression to a dataset in order to develop a predictive equation. I share the concern of the writer in this post [How to choose nlme or lme4 R library for mixed effects models?](https://stats.stackexchange.com/questions/5344/how-to-choose-nlme-or-lme4-r-library-for-mixed-effects-models) in wondering whether NLME or LME4 is the better package to familiarize myself with. A more basic question is: what's the difference between linear and nonlinear mixed-effects modeling? For background, I applied M-E modeling in my MS research (in MATLAB, not R), so I'm familiar with how fixed vs. random variables are treated. But I'm uncertain whether the work I did was considered linear or nonlinear M-E. Is it simply the functional form of the equation used, or something else?
Can someone shed light on linear vs. nonlinear mixed-effects?
CC BY-SA 4.0
null
2011-04-19T18:46:18.587
2019-12-18T22:21:17.723
2019-12-18T22:21:17.723
92235
4237
[ "r", "regression", "random-effects-model" ]
9760
2
null
9759
1
null
For the linear-nonlinear part, see: [CrossValidated article on the topic](https://stats.stackexchange.com/questions/8689/what-does-linear-stand-for-in-linear-regression), particularly the second-ranked answer by Charlie. I don't think there are any changes when dealing with mixed effects.
null
CC BY-SA 3.0
null
2011-04-19T20:00:02.607
2011-04-19T20:00:02.607
2017-04-13T12:44:35.347
-1
1764
null
9763
1
null
null
12
8263
A typical image-processing statistic is the set of [Haralick texture features](http://murphylab.web.cmu.edu/publications/boland/boland_node26.html), of which there are 14. I am wondering about the 14th of these features: Given an adjacency map $P$ (which we can simply view as the empirical joint distribution of two integers $i,j < 256$), it is defined as: the square root of the second eigenvalue of $Q$, where $Q$ is: $Q_{ij} = \sum_k \frac{ P(i,k) P(j,k)}{ [\sum_x P(x,i)] [\sum_y P(k, y)] }$ Even after much googling, I could not find any references for this statistic. What are its properties? What does it represent? (The value $P(i,j)$ above is the normalised number of times that a pixel of value $i$ is found next to a pixel of value $j$.)
What is this "maximum correlation coefficient"?
CC BY-SA 3.0
null
2011-04-19T22:41:27.670
2013-01-02T15:30:52.380
null
null
2067
[ "probability", "computational-statistics" ]
9764
2
null
9752
4
null
One way to reformulate your problem is the following: you want to select a small set of features that predict the loyalty score well, using a linear model for example. This problem is called (best) subset selection. Suppose that you want to pick k features. The first way to do it is to test all subsets of k features, fitting a linear regression on each subset. But for a large dataset, this takes far too long. Another way is to proceed greedily. You start by picking the feature that is most correlated with the score and add it to the (empty) subset. You compute the linear model associated with this subset (in this case, just a coefficient) to predict the loyalty score. Then, you pick the feature that is most correlated with the residual (the difference between the value predicted by your linear model and the true score) and compute the linear model corresponding to your new subset. You continue until you have k features in your set. There are other methods, such as the lasso, for subset selection. For a more complete introduction to subset selection, you should read section 3.3 of [The Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/), which is freely downloadable from the authors' site.
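To make the greedy procedure concrete, here is a minimal sketch in base R. The data, the number of features p, and the subset size k are all invented purely for illustration:

```
# simulated toy data: 20 candidate features, but the score really
# depends on only three of them (columns 1, 7 and 13)
set.seed(42)
n <- 200; p <- 20
X <- matrix(rnorm(n * p), n, p)
score <- 2 * X[, 1] - 1.5 * X[, 7] + X[, 13] + rnorm(n)

k <- 3                      # desired subset size
selected <- integer(0)
residual <- score
for (step in 1:k) {
  candidates <- setdiff(seq_len(p), selected)
  # pick the unused feature most correlated with the current residual
  cors <- abs(cor(X[, candidates], residual))
  best <- candidates[which.max(cors)]
  selected <- c(selected, best)
  # refit the linear model on the chosen subset, update the residual
  fit <- lm(score ~ X[, selected, drop = FALSE])
  residual <- resid(fit)
}
selected
```

With a signal this strong the loop typically recovers the three true features, but the point is only to show the shape of the algorithm, not a production implementation.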
null
CC BY-SA 3.0
null
2011-04-19T23:17:10.557
2011-04-19T23:22:28.073
2011-04-19T23:22:28.073
4241
4241
null
9765
2
null
9718
2
null
For this particular case there's not much of a difference to work with in practical terms, but for the general case, I'm going to go out on a limb and guess that there is no way to conduct a strict test of significance. The partial correlation will be a direct function of the correlations among the 3 variables. Depending on 'career's r with each of the others, we can specify the change from zero-order to partial r for the 2 main variables. So in a sense there's no room for variation and thus no place for a significance test. I think.
null
CC BY-SA 3.0
null
2011-04-20T00:20:29.863
2011-04-20T00:20:29.863
null
null
2669
null
9766
1
9771
null
4
2460
I have a question which asks: > Determine those values of the positive integer n for which a finite nth moment of X about zero exists. How should I approach this question? Does it depend on the number of variables in X? I think that a first moment exists if the mean exists, and the second moment exists if the variance can be determined. Is that general concept correct?
Determine whether a n-th finite moment of X exists
CC BY-SA 3.0
null
2011-04-20T02:46:09.323
2019-08-20T15:14:52.237
2011-04-20T13:00:27.937
null
null
[ "self-study", "moments" ]
9767
1
null
null
6
1874
Could anyone provide some suggestions on how to generate over-dispersed counts data with serial correlations? I am using R software to conduct a simulation study. Any references on this subject will be much appreciated. Thanks for your help.
Generating over-dispersed counts data with serial correlation
CC BY-SA 3.0
null
2011-04-20T03:13:26.073
2011-04-20T13:03:57.170
null
null
2742
[ "r", "time-series", "distributions", "poisson-distribution", "simulation" ]
9768
2
null
9724
1
null
The first thing that comes to my mind when I read of the approach that you describe is that there is a mismatch between the idea of replicating an experiment and the use of "success" and "failure" as descriptors of the outcomes. Presumably a success would be a result that is significant in the Neyman-Pearson paradigm (i.e. P < alpha). However, that paradigm allows control of type I errors and specification of power by assuming that the experimenter will act as if the null hypothesis is false when a significant result is observed. In that case there is no point in testing it again--the determination of type I error rates assumes that you discard the hypothesis and move on. Neyman called the process 'inductive behavior'.

A consequence of inductive behavior is that you can't integrate the information across multiple replications of the overall experiment within the Neyman-Pearson paradigm. There would be no sensible interpretation of any sequence of significant and non-significant results from repeats of the experiment using the Neyman-Pearson paradigm.

If you use instead Fisher's approach of inductive inference, then you can combine P values from multiple experiments testing the same null hypothesis and obtain a composite value. The P values are indices of the strength of evidence against the null hypothesis (and NOT error rates!) and so, just as it is sensible to combine the evidence from multiple sources before judgement, you should combine the P values. However, while there are several (many?) ways to combine the P values, it is perhaps the case that none is perfect.

Arguably the best way to combine evidence from multiple experiments is to use a likelihood-based approach (see Royall 1997 for a very clear exposition). The likelihood functions for each of the experiments can be combined exactly by multiplication to yield a composite function that can support both interval estimation and statement of the probability of the overall set of results having come from a true null hypothesis.

Royall, R. M. (1997). Statistical evidence: a likelihood paradigm. Chapman & Hall.

---

It is unfortunate that Fisher was frequently unclear in his expression, and rather inconsistent in his application of the ideas. However, he was undoubtedly in favor of repeating experiments and hated Neyman's idea of inductive behavior: “...we may be able to validly apply a test of significance to discredit a hypothesis the expectations from which are widely at variance with the ascertained fact. If we use the term rejection for our attitude to such a hypothesis, it would be clearly understood that no irreversible decision has been taken; that, as rational beings, we are prepared to be convinced by future evidence that appearances were deceptive, and that a remarkable and exceptional coincidence had taken place.” Stat Meth & Sci Inf p.37 (Chapter II section 4).
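One classic way of combining independent P values is Fisher's method, which can be sketched in a few lines of R; the helper name `fisher.combine` and the P values themselves are made up for illustration:

```
# Fisher's combination: under the joint null, -2 * sum(log(p_i))
# follows a chi-squared distribution with 2k degrees of freedom
# for k independent P values
fisher.combine <- function(p) {
  stat <- -2 * sum(log(p))
  df   <- 2 * length(p)
  c(statistic = stat, p.value = pchisq(stat, df, lower.tail = FALSE))
}

# three replications of the same experiment,
# none individually significant at the 0.05 level
fisher.combine(c(0.08, 0.11, 0.07))
```

Here three individually non-significant results combine into fairly strong evidence against the shared null, which is exactly the behavior the inductive-inference view calls for.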
null
CC BY-SA 3.0
null
2011-04-20T03:18:45.590
2011-04-20T03:18:45.590
null
null
1679
null
9770
2
null
9766
3
null
It seems to me that the question is ill-posed if there is no additional context about the distribution of $X$, or at least the family of distributions it belongs to (Student $t$, [Pareto](http://en.wikipedia.org/wiki/Pareto_distribution), [Cauchy](http://en.wikipedia.org/wiki/Cauchy_distribution)). For instance, for the normal distribution all moments exist; for the Cauchy, none do. The topic is clearly related to the problem of [heavy tails](http://en.wikipedia.org/wiki/Heavy-tailed_distribution), therefore a general approach would be: - determine what distribution is relevant - estimate the parameters of this distribution (consistent estimators will provide better parameter estimates as more observations of $X$ become available) - decide on the number of finite moments that do exist From a practical point of view, all you need is the behaviour of the tail of the empirical distribution function. There is a great number of works and books on heavy-tailed distributions; I can help to find some relevant methods if you think this approach is acceptable to you.
null
CC BY-SA 3.0
null
2011-04-20T07:14:14.310
2011-04-20T07:14:14.310
null
null
2645
null
9771
2
null
9766
5
null
If you have the probability density function $f$ of the random variable, then it is a matter of checking for which $n$ the integral $$\int_{\mathbb{R}}x^nf(x)dx<\infty$$ This is then a standard exercise in real analysis. Alternatively, if you know the [characteristic function](http://en.wikipedia.org/wiki/Characteristic_function_(probability_theory)) $\phi$ of the random variable, then it is a matter of checking how many derivatives the function $\phi$ has. Yet another alternative, if you have the distribution function, is to look for the highest $n$ for which the following limit is zero: $$\lim_{t\to\infty}t^nP(|X|\ge t)=\lim_{t\to\infty}t^n(1-P(|X|<t))=\lim_{t\to\infty}t^n[1-F(t)+F(-t)]=0$$
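As a concrete example of the density-based check, take a Pareto random variable with density $f(x)=\alpha x^{-(\alpha+1)}$ for $x\ge 1$ and shape parameter $\alpha>0$. Then $$\int_1^\infty x^n\,\alpha x^{-(\alpha+1)}\,dx=\alpha\int_1^\infty x^{n-\alpha-1}\,dx,$$ which is finite exactly when $n-\alpha-1<-1$, i.e. when $n<\alpha$. So for this distribution only the moments of order strictly less than $\alpha$ exist: with $\alpha=2.5$, the mean and variance are finite but the third moment is not.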
null
CC BY-SA 4.0
null
2011-04-20T08:13:14.423
2019-08-20T15:14:52.237
2019-08-20T15:14:52.237
95370
2116
null
9772
2
null
9767
2
null
This is one way to do it:

```
v <- rnorm(1, 30, 10)
for (i in 2:30) v <- c(v, 0.5*v[i-1] + 0.5*rnorm(1, 30, 10))
round(v)
```
null
CC BY-SA 3.0
null
2011-04-20T08:51:38.153
2011-04-20T08:51:38.153
null
null
3911
null
9774
1
null
null
6
258
For many years I have been conducting t-tests on response to mailing activity. Recently I was challenged that we should in fact be conducting tests on profit rather than response. So, let me put this in context. Suppose you have two groups of customers of sample size 10,000 each, to which you mail two different offers. One receives free delivery on goods (group a) and one receives 25% off their next purchase (group b). And assume the two groups responded as follows. - Group a - 10% of the 10,000 customers mailed responded - Group b - 15% of the 10,000 customers mailed responded I would conduct a t-test on the response rates (10% and 15%), which is correct. I have been asked to do a test on the profitability, which I am reluctant to do. Again, to put this into context: - Group a - generated a profit of £1 per customer mailed (i.e. £10,000) - Group b - generated a profit of £1.50 per customer mailed (i.e. £15,000) I have been asked to do a test on the £1 and £1.50 profit, but my concern is that there are many variable factors that contribute to the final profit figures, e.g. margins, cost of the mailing the customer received etc., that are not really considered when setting the groups up. Am I correct in thinking that for this particular scenario testing on profit is not the right thing to do, and that we should instead test on response as we are currently doing and then factor in all the other variable costs when making a final commercial decision?
Given two responses for two groups, how to decide what to test on response or profit?
CC BY-SA 3.0
null
2011-04-20T10:01:59.467
2011-04-21T00:40:09.410
2011-04-20T16:40:37.660
919
null
[ "t-test", "decision-theory" ]
9775
1
9780
null
1
3971
Given a data-frame:

```
d1 <- c("A","B","C","A")
d2 <- c("A","V","C","F")
d3 <- c("B","V","E","F")
d4 <- c("A","B","C","A")
data.frame(d1,d2,d3,d4)
  d1 d2 d3 d4
1  A  A  D  A
2  B  V  B  B
3  C  C  C  C
4  A  F  A  A
```

Also given that each row may have a unique pattern, such that the occurrence of the values A,D,A (first row) represents a unique pattern assigned to class 1, and F,A,A (last row) represents a unique pattern assigned to class 4. I would like to manipulate the data-frame to search for rows that contain such 'unique patterns' and return a new column that classifies them, such that 0 represents rows that do not have any of the patterns. The pattern has to occur exactly as indicated.

```
  d1 d2 d3 d4 class
1  A  A  D  A     1
2  B  V  B  B     0
3  C  C  C  C     0
4  A  F  A  A     4
```

I tried to use a select statement with a concat qualifier using package sqldf, but it does not provide a useful approach. I would appreciate ideas on how to perform the search, or pointers to relevant packages for this type of search. Thank you
Manipulating and searching data-frames
CC BY-SA 3.0
null
2011-04-20T12:02:33.307
2011-04-20T12:55:50.920
2011-04-20T12:13:27.853
2116
18462
[ "r" ]
9777
2
null
9738
2
null
Well, you are just exhausting the RAM on your machine. Generally, you have four options: - Get a bigger computer (rather a bad idea, since it is nearly impossible to put more than a few hundred GB in one node). - Limit your problem. - Look for an HPC version of multinomial logit, probably outside R -- using sparse matrices, parallelizable across multiple nodes, and so on. - Switch to some better-scaling algorithm. While you say that the problem was once solved, probably the way to go is option 3. EDIT: I saw that the problem is in `model.matrix.default`; this seems quite common, since the formula (those statements with `~`) interpretation algorithm in R is not written too well in terms of memory use. If there is a way to run your model without using formulas, try it.
null
CC BY-SA 3.0
null
2011-04-20T12:43:40.830
2011-04-20T12:43:40.830
null
null
null
null
9778
1
null
null
10
577
Most clustering algorithms I've seen start by creating each-to-each distances among all points, which becomes problematic on larger datasets. Is there one that doesn't do that, or does it in some sort of partial/approximate/staggered approach? Which clustering algorithms/implementations take less than O(n^2) space? Is there a list of algorithms and their time and space requirements somewhere?
Space-efficient clustering
CC BY-SA 3.0
null
2011-04-20T12:44:27.060
2012-07-15T09:33:33.173
null
null
595
[ "clustering", "algorithms", "large-data" ]
9779
1
9819
null
1
6559
I asked a question on StackOverflow for which I was advised to use a Kalman filter. The question is as follows: [https://stackoverflow.com/questions/5726358/what-class-of-algorithms-reduce-margin-of-error-in-continuous-stream-of-input/5728373#5728373](https://stackoverflow.com/questions/5726358/what-class-of-algorithms-reduce-margin-of-error-in-continuous-stream-of-input/5728373#5728373) > A machine is taking measurements and giving me discrete numbers continuously like so: 1 2 5 7 8 10 11 12 13 14 18 Let us say these measurements can be off by 2 points and a measurement is generated every 5 seconds. I want to ignore the measurements that may potentially be the same. Like consecutive 2 and 3 could be the same because the margin of error is 2, so how do I partition the data such that I get only distinct measurements? But I would also want to handle the situation in which the measurements are continuously increasing, like so: 1 2 3 4 5 6 7 8 9 10 In this case if we keep ignoring consecutive numbers with a difference of less than 2, then we might lose actual measurements. Now how do I apply a Kalman filter to solve this? All examples I see take multiple error estimations, while I know a single thing: each measurement can be off by a value Q, that's it. And all the examples also work on multi-dimensional vectors, that too multiple vectors.
How to apply Kalman filter to one dimensional data?
CC BY-SA 3.0
null
2011-04-20T12:54:43.323
2011-04-21T09:39:02.513
2017-05-23T12:39:26.523
-1
4251
[ "kalman-filter" ]
9780
2
null
9775
3
null
Suppose the entries of the data.frame contain single uppercase letters. Suppose that we have a vector containing the patterns and that only one pattern can occur in one row.

```
d1 <- c("A","B","C","A")
d2 <- c("A","V","C","F")
d3 <- c("B","V","E","F")
d4 <- c("A","B","C","A")
dd <- data.frame(d1,d2,d3,d4)
> dd
  d1 d2 d3 d4
1  A  A  B  A
2  B  V  V  B
3  C  C  E  C
4  A  F  F  A

pats <- c("ABA","FFA")

pat.fun <- function(r, pats) {
  # collapse the row into one string and test each pattern against it
  rr <- paste(r, collapse="")
  hit <- which(sapply(pats, function(p) grepl(p, rr)))
  if (length(hit) == 0) hit <- 0
  hit
}

dd$class <- apply(dd, 1, pat.fun, pats=pats)
> dd
  d1 d2 d3 d4 class
1  A  A  B  A     1
2  B  V  V  B     0
3  C  C  E  C     0
4  A  F  F  A     2
```

This is an example; the code is certainly not very efficient.
null
CC BY-SA 3.0
null
2011-04-20T12:55:50.920
2011-04-20T12:55:50.920
null
null
2116
null
9781
2
null
9573
60
null
The central limit theorem is less useful than one might think in this context. First, as someone pointed out already, one does not know if the current sample size is "large enough". Secondly, the CLT is more about achieving the desired type I error than about type II error. In other words, the t-test can be uncompetitive power-wise. That's why the Wilcoxon test is so popular. If normality holds, it is 95% as efficient as the t-test. If normality does not hold it can be arbitrarily more efficient than the t-test.
null
CC BY-SA 3.0
null
2011-04-20T12:59:07.080
2011-04-20T12:59:07.080
null
null
4253
null
9782
2
null
9741
12
null
A good measure is Somers' Dxy rank correlation, a generalization of ROC area for ordinal or continuous Y. It is computed for ordinal proportional odds regression in the lrm function in the rms package.
null
CC BY-SA 3.0
null
2011-04-20T13:03:50.337
2011-04-20T13:03:50.337
null
null
4253
null
9783
2
null
9767
6
null
A standard way of generating overdispersed count data is to generate data from a Poisson distribution with a random mean: $Y_i\sim Poisson(\lambda_i)$, $\lambda_i \sim F$. For example, if $\lambda_i$ has a Gamma distribution, you will get the negative binomial distribution for $Y$. You can easily impose serial correlation by imposing correlation on the $\lambda_i$'s. For example, you could have $\log\lambda_i \sim AR(1)$. Implemented in R:

```
N <- 100
rho <- 0.6
log.lambda <- 1 + arima.sim(model=list(ar=rho), n=N)
y <- rpois(N, lambda=exp(log.lambda))

> cor(head(y,-1), tail(y,-1))
[1] 0.4132512
> mean(y)
[1] 4.35
> var(y)
[1] 33.4015
```

Here the $\log\lambda_i$'s come from a normal distribution, so the marginal distribution of $Y$ is not a classic distribution, but you could get more creative. Also note that the correlation of the $y$'s does not equal `rho`, but it is some function of it.
null
CC BY-SA 3.0
null
2011-04-20T13:03:57.170
2011-04-20T13:03:57.170
null
null
279
null
9784
2
null
9778
5
null
K-means and mean-shift use the raw sample descriptors (no need to pre-compute an affinity matrix). Otherwise, for spectral clustering or power iteration clustering, you can use a sparse matrix representation (e.g. Compressed Sparse Rows) of the k-nearest-neighbours affinity matrix (for some distance or affinity metric). If k is small (say 5 or 10), you will get a very space-efficient representation (2 * n_samples * k * 8 bytes for double-precision floating point values).
null
CC BY-SA 3.0
null
2011-04-20T13:38:13.817
2011-04-20T13:38:13.817
null
null
2150
null
9785
1
null
null
8
6497
Rob Tibshirani proposed using the lasso with Cox regression for variable selection in his 1997 paper "The lasso method for variable selection in the Cox model", published in Statistics in Medicine 16:385. Does anyone know of any R package/function or syntax in R that fits a Cox model with the lasso?
Cox model with LASSO
CC BY-SA 3.0
null
2011-04-20T13:46:29.630
2022-02-02T13:46:05.007
2022-02-02T13:46:05.007
53690
null
[ "r", "regression", "survival", "lasso", "cox-model" ]
9786
1
null
null
1
132
I want to test the hypothesis of a decreased level of vitamin D in diabetic subjects. For this I have recorded blood glucose and vitamin D levels in 40 cases and 40 controls. What kind of statistical test can I use to test the above hypothesis?
How to compare vitamin D and glucose levels between patients and controls?
CC BY-SA 3.0
null
2011-04-20T13:47:25.053
2011-04-20T15:28:56.520
2011-04-20T13:52:18.920
930
null
[ "hypothesis-testing" ]
9787
2
null
9785
9
null
Here are two suggestions. First, you can take a look at the [glmnet](http://cran.r-project.org/web/packages/glmnet/index.html) package, from Friedman, Hastie and Tibshirani, but see their JSS 2010 (33) paper, [Regularization Paths for Generalized Linear Models via Coordinate Descent](http://www.jstatsoft.org/v33/i01/paper). Second, although I've never used this kind of penalized model, I know that the [penalized](http://cran.r-project.org/web/packages/penalized/index.html) package implements L1/L2 penalties on GLM and the Cox model. What I found interesting in this package (this was with ordinary regression) was that you can include a set of unpenalized variables in the model. The associated publication is now: > Goeman J.J. (2010). L-1 Penalized Estimation in the Cox Proportional Hazards Model. Biometrical Journal 52 (1) 70-84.
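For reference, a minimal sketch of the glmnet route with simulated data (the data and variable names are invented; glmnet's classic Cox interface takes a two-column response with columns named "time" and "status" — see the package documentation for details):

```
library(glmnet)

# simulated survival data: hazard driven by the first covariate only
set.seed(1)
n <- 100; p <- 10
x <- matrix(rnorm(n * p), n, p)
time   <- rexp(n, rate = exp(0.5 * x[, 1]))
status <- rbinom(n, 1, 0.8)               # some observations censored
y <- cbind(time = time, status = status)

# L1-penalized Cox model over a path of lambda values
fit <- glmnet(x, y, family = "cox")
# pick lambda by cross-validated partial likelihood
cvfit <- cv.glmnet(x, y, family = "cox")
coef(cvfit, s = "lambda.min")
```

The coefficient vector at the selected lambda is sparse, which is what makes this usable for variable selection in the spirit of the 1997 paper.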
null
CC BY-SA 3.0
null
2011-04-20T14:01:49.567
2011-04-20T14:01:49.567
null
null
930
null
9788
2
null
9718
2
null
I don't doubt that a particular test statistic aiming to accomplish what you're asking for exists, but I will offer some alternatives that you may be interested in, which offer different (but probably still interesting) answers given the nature of the question. Like Rolando already stated, the extent to which the partial correlation is moderated depends on the correlation between the three variables. Below I have inserted a picture of the situation in the form of a path diagram (and you can just pretend the box with X is demographic dissimilarity, Z is career development, and Y is satisfaction). The test for whether the direct correlation between X and Y is different from the partial correlation of X and Y controlling for Z amounts to stating whether the blue line and the red line in the picture below are non-zero. So simply examining both of those correlations will be informative. If the product of those correlations is near zero, then the direct correlation between X and Y will be largely unchanged when partialling out the variance of Z. ![enter image description here](https://i.stack.imgur.com/tQmHE.png) Another test you may be interested in would be a [likelihood ratio test](http://en.wikipedia.org/wiki/Likelihood-ratio_test). In this example you have a case of nested models, and hence you can compare the model with only the direct correlation between X and Y and the model that includes both X and Z. This can be more easily extended to multiple Z's as well. But this is not the same as testing whether the effect of X is moderated by Z(s); it tests whether the model including X and Z is a better fit to the data than the model including only X. Some might interpret this question as asking about moderating and mediating effects. Given the names of the variables, though, I wouldn't immediately suggest these are what you are after. [Kline (2010)](http://goo.gl/oBTBg) has a few references regarding these, so those might be a good place to start searching. Although to conduct those tests it appears you need to make much stronger assumptions about causality ([Frazier, Barron, & Tix 2004](http://dx.doi.org/10.1037/0022-0167.51.1.115) [PDF](http://dionysus.psych.wisc.edu/Lit/Topics/Statistics/Mediation/frazier_barron_mediation_jcc.pdf)).
null
CC BY-SA 3.0
null
2011-04-20T14:01:56.807
2011-04-20T14:01:56.807
null
null
1036
null
9789
2
null
9774
5
null
The reason you are conducting this test is to determine which policy is more valuable, and if value is measured in profitability, then it makes no sense to do statistical testing on any other variable. A properly conducted test on profitability gives you all the information needed for your company's decision: once you have the results of this test, results about other variables (e.g. response rate) provide no decision-relevant information.
null
CC BY-SA 3.0
null
2011-04-20T14:05:00.977
2011-04-21T00:40:09.410
2011-04-21T00:40:09.410
3567
3567
null
9790
2
null
7959
3
null
I'd look at quantile regression. You can use it to obtain a parametric estimate of whichever quantiles you want to look at. It makes no assumption of normality, so it handles heteroskedasticity pretty well, and it can be used on a rolling-window basis. It's basically an L1-norm penalized regression, so it's not too numerically intensive, and there are pretty full-featured R, SAS, and SPSS packages, plus a few MATLAB implementations out there. Here are the [main](http://en.wikipedia.org/wiki/Quantile_regression) and the [R](http://en.wikibooks.org/wiki/R_Programming/Quantile_Regression) package wikis for more info. Edited: Check out the math stack exchange crosslink: someone cited a couple of papers that essentially lay out the very simple idea of just using a rolling window of order statistics to estimate quantiles. Literally all you have to do is sort the values from smallest to largest, select which quantile you want, and select the highest value within that quantile. You can obviously give more weight to the most recent observations if you believe they are more representative of actual current conditions. This will probably give rough estimates, but it's fairly simple to do, and you don't have to go through the motions of heavy quantitative lifting. Just a thought.
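The rolling order-statistics idea from the edit can be sketched directly in R; the series, the window length and the quantile level below are arbitrary choices for illustration:

```
set.seed(7)
x <- rnorm(500)      # stand-in for an observed series
window <- 100        # how far back to look
tau <- 0.95          # quantile of interest

# at each time t, estimate the tau-quantile from the last `window` points
roll.q <- sapply(window:length(x), function(t)
  quantile(x[(t - window + 1):t], probs = tau, names = FALSE))

tail(roll.q, 1)      # the current rolling estimate
```

Weighting recent observations more heavily would mean replacing `quantile()` with a weighted-quantile computation, but the unweighted version already captures the basic scheme.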
null
CC BY-SA 3.0
null
2011-04-20T14:39:35.683
2011-04-20T19:09:57.173
2011-04-20T19:09:57.173
3737
3737
null
9791
2
null
9779
0
null
I believe that for the Kalman filter, you'll need to clarify your "can be off by 2 points" into something like "the error is normal with mean 0 and standard deviation of 0.8". Also, I believe that the usual statement of the Kalman filter assumes you have a model that predicts how the actual value changes over time. (Though I guess you can specify a naive model that basically says $x_t = x_{t-1}$.) I found an interesting explanation at [this website](http://www.convict.lu/htm/rob/imperfect_data_in_a_noisy_world.htm) that might be helpful.
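To make the naive-model case concrete, here is a minimal scalar Kalman filter in R using the random-walk state model $x_t = x_{t-1} + w_t$. The noise variances `q` and `r` are assumptions you would have to tune (the default `r = 0.64` corresponds to the 0.8 standard deviation suggested above):

```
kalman1d <- function(z, q = 0.01, r = 0.64, x0 = z[1], p0 = 1) {
  # z: measurements; q: process variance; r: measurement variance
  n <- length(z)
  xhat <- numeric(n)
  xhat[1] <- x0
  p <- p0
  for (t in 2:n) {
    # predict: the state is assumed to stay where it was
    xpred <- xhat[t - 1]
    ppred <- p + q
    # update: blend prediction and new measurement via the Kalman gain
    k <- ppred / (ppred + r)
    xhat[t] <- xpred + k * (z[t] - xpred)
    p <- (1 - k) * ppred
  }
  xhat
}

z <- c(1, 2, 5, 7, 8, 10, 11, 12, 13, 14, 18)  # measurements from the question
round(kalman1d(z), 1)
```

This is only a sketch of the one-dimensional special case; the multi-dimensional formulation replaces the scalars with vectors and matrices but follows the same predict/update loop.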
null
CC BY-SA 3.0
null
2011-04-20T15:00:21.363
2011-04-20T15:00:21.363
null
null
1764
null
9792
2
null
9786
2
null
Generally this sounds like a simple [t-test](http://en.wikipedia.org/wiki/T-test). That is, you have two groups (diabetics and controls) and you measured one variable (vitamin D). However, some more context/information about your data will lead to much better answers. For example, please answer chl's comment. Second, what is the idea behind measuring the glucose level, given your hypothesized relation between diabetes and vitamin D?
null
CC BY-SA 3.0
null
2011-04-20T15:28:56.520
2011-04-20T15:28:56.520
null
null
442
null
9793
2
null
9744
5
null
First of all, it could be useful to read a bit about the [unit root](http://en.wikipedia.org/wiki/Unit_root) problem (you may start from the hypothesis section). So the nature of the explosiveness (exponential growth) is what matters. Roughly, the growth could be explained either by a deterministic part (for example a linear trend) or by a random walk with drift. Dickey-Fuller (not the best choice of unit root test, though; if you are familiar with $R$ I would suggest going for the Zivot and Andrews test, `library(urca)` then `?ur.za`, since this one also accounts for possible structural breaks) is designed to distinguish between the two, so the $t$ statistic will help to decide whether the nature of the explosive behavior is deterministic or not. But bear in mind that the DF test is very sensitive to deviations from the assumptions it was built on; even the ADF would be more robust to apply in practice.
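In R, the Zivot-Andrews test mentioned above would look roughly like this (a sketch only, assuming the urca package is installed; the series here is simulated as a stand-in for your data):

```
library(urca)

# a random walk with drift as a stand-in for an explosive-looking series
set.seed(123)
y <- cumsum(rnorm(200, mean = 0.2))

# allow for a structural break in both intercept and trend
za <- ur.za(y, model = "both")
summary(za)   # compare the test statistic to the reported critical values
```

Rejecting the null here speaks for trend stationarity with a one-time break, while failing to reject leaves the unit-root (stochastic-trend) explanation on the table.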
null
CC BY-SA 3.0
null
2011-04-20T15:31:31.433
2011-04-20T15:31:31.433
null
null
2645
null
9794
1
9816
null
7
313
Let's say I have two time series, one of which updates more frequently than the other: $x_0,x_1,x_2,\dots,x_t,\dots$ $y_0,y_{10},y_{20},\dots,y_{10t},\dots$ I want to fit a model to this that predicts $y$ from $x$ (and possibly from previous values of $y$) at each of the values $1,2,3,\dots$, i.e. it gives a prediction even for values of $y$ for which we won't make an observation (equivalently, assume that there are true values for $y$ at every value of $t$, but we only observe it at $t=0,10,20,\dots$) Is there a canonical way to do this?
Time series factor model with one series more frequent
CC BY-SA 3.0
null
2011-04-20T15:41:51.503
2019-07-23T19:18:54.480
2019-07-23T19:18:54.480
11887
2425
[ "regression", "time-series", "predictive-models", "unevenly-spaced-time-series" ]
9795
1
null
null
1
3118
> Possible Duplicate: Supervised learning with “rare” events, when rarity is due to the large number of counter-factual events I am trying to predict diabetes using the [BRFSS dataset](http://www.cdc.gov/brfss/) by using a supervised learning classification model. But I see that the target variable which is having diabetes or not is skewed. That is 90% of the records are non-diabetic and only 10% of the records are diabetic. How do I handle the skewness in the target variable?
How to handle skewed binary target variables?
CC BY-SA 3.0
null
2011-04-20T16:16:38.850
2012-01-20T01:38:40.197
2017-04-13T12:44:33.310
-1
3897
[ "machine-learning", "sampling", "unbalanced-classes" ]
9796
2
null
9794
4
null
I would cast the model in state-space form. Then there is no problem if one of the variables is observed more frequently than the other, or the observation times are irregular: the Kalman filter deals with missing and partially observed variables gracefully. Without details on the exact kind of relationships you aim to model it is difficult to be more specific.
null
CC BY-SA 3.0
null
2011-04-20T16:56:56.410
2011-04-20T16:56:56.410
null
null
892
null
9797
1
24654
null
8
521
I have the following data, representing the binary state of four subjects at five times; note that it is only possible for each subject to transition $0\to 1$ but not $1\to 0$:

```
testdata <- data.frame(id = c(1,2,3,4,1,2,3,4,1,2,3,4,1,2,3,4,1,2,3,4),
                       day = c(1,1,1,1,8,8,8,8,16,16,16,16,24,24,24,24,32,32,32,32),
                       obs = c(0,0,0,0,0,1,0,0,0,1,1,0,0,1,1,1,1,1,1,1))
```

I can model it with a logistic regression:

```
testmodel <- glm(formula(obs~day, family=binomial), data=testdata)
> summary(testmodel)
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.018890   0.148077  -0.128 0.899907
day          0.032030   0.007555   4.240 0.000493 ***
```

First, how can I account for repeated measures on the same individual within the model? Second, how can I estimate, with uncertainty, the day on which 1/2 of the subjects will have made the transition from $0\to 1$?
How can I estimate the time at which 50% of a binomial variable will have transitioned?
CC BY-SA 3.0
null
2011-04-20T17:20:17.850
2012-03-14T22:31:10.687
2011-11-15T15:28:22.663
1381
1381
[ "logistic", "censoring", "interval-censoring" ]
9798
2
null
9759
18
null
There are several distinctions between linear and nonlinear regression models, but the primary mathematical one is that linear models are linear in the parameters, whereas nonlinear models are nonlinear in the parameters. Pinheiro and Bates (2000, pp. 284-285), authors of the `nlme` R package, elegantly described the more substantive considerations in model selection:

> When choosing a regression model to describe how a response variable varies with covariates, one always has the option of using models, such as polynomial models, that are linear in the parameters. By increasing the order of a polynomial model, one can get increasingly accurate approximations to the true, usually nonlinear, regression function, within the observed range of the data. These empirical models are based only on the observed relationship between the response and the covariates and do not include any theoretical considerations about the underlying mechanism producing the data. Nonlinear models, on the other hand, are often mechanistic, i.e., based on a model for the mechanism producing the response. As a consequence, the model parameters in a nonlinear model generally have a natural physical interpretation. Even when derived empirically, nonlinear models usually incorporate known, theoretical characteristics of the data, such as asymptotes and monotonicity, and in these cases, can be considered as semi-mechanistic models. A nonlinear model generally uses fewer parameters than a competitor linear model, such as a polynomial, giving a more parsimonious description of the data. Nonlinear models also provide more reliable predictions for the response variable outside the observed range of the data than, say, polynomial models would.

There are also some big differences between the nlme and lme4 packages that go beyond the linearity issue. For example, using nlme you can fit linear or nonlinear models and, for either type, specify the variance and correlation structures for within-group errors (e.g., autoregressive); lme4 can't do that. In addition, random effects can be nested or crossed in either package, but it's much easier (and more computationally efficient) to specify and model crossed random effects in lme4.

I would advise first considering a) whether you will need a nonlinear model, and b) whether you will need to specify either the within-group variance or correlation structures. If either answer is yes, then you have to use nlme (given that you're sticking with R). If you work a lot with linear models that have crossed random effects, or complicated combinations of nested and crossed random effects, then lme4 is probably a better choice. You may need to learn to use both packages. I learned lme4 first and then realized I had to use nlme because I almost always work with autoregressive error structures. However, I still prefer lme4 when I analyze data from experiments with crossed factors. The good news is that a great deal of what I learned about lme4 transferred well to nlme. Either way, Pinheiro and Bates (2000) is a great reference for mixed-effects models, and I'd say it's indispensable if you're using nlme.

References

Pinheiro, J.C., & Bates, D.M. (2000). Mixed-effects models in S and S-PLUS. New York: Springer-Verlag.
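To make the package contrast concrete, here is a hedged sketch (the data and variable names are invented for illustration) of the two idioms discussed above — an AR(1) within-group error structure in nlme versus crossed random effects in lme4:

```
library(nlme)

# simulated longitudinal data: 20 subjects, 5 time points each
set.seed(1)
d <- data.frame(subject = factor(rep(1:20, each = 5)),
                time    = rep(1:5, 20))
d$score <- 2 + 0.5 * d$time + rnorm(20)[as.integer(d$subject)] + rnorm(100)

# nlme: random intercept per subject with AR(1) within-subject errors
# (lme4 offers no 'correlation' argument)
fit.nlme <- lme(score ~ time, random = ~ 1 | subject,
                correlation = corAR1(form = ~ time | subject),
                data = d)
summary(fit.nlme)

# lme4: crossed random effects are a one-liner, e.g. (with a suitable
# data set containing subject and item factors):
#   library(lme4)
#   lmer(score ~ condition + (1 | subject) + (1 | item), data = ...)
```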
null
CC BY-SA 3.0
null
2011-04-20T17:21:42.293
2011-04-20T17:21:42.293
null
null
3964
null
9799
2
null
9779
0
null
I found a nice simple introductory example of a Kalman filter (coded in matlab) [here](http://www.mathworks.com/matlabcentral/fileexchange/5377-learning-the-kalman-filter). The example the author provides in this code is on one dimensional data. Hopefully this will at least give you a starting point for figuring out how to apply it to your specific problem.
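For readers without Matlab, the same idea translates directly to R. Below is a hedged sketch of a one-dimensional Kalman filter for estimating a constant hidden state; the process and measurement noise variances `Q` and `R` are assumed known:

```
# Scalar Kalman filter for a constant state observed with noise
kalman1d <- function(z, Q = 1e-5, R = 0.01, x0 = 0, P0 = 1) {
  n <- length(z)
  x <- numeric(n)   # state estimates
  P <- numeric(n)   # estimate variances
  xprev <- x0; Pprev <- P0
  for (k in 1:n) {
    xpred <- xprev                      # predict: state assumed constant
    Ppred <- Pprev + Q
    K <- Ppred / (Ppred + R)            # Kalman gain
    x[k] <- xpred + K * (z[k] - xpred)  # update with measurement z[k]
    P[k] <- (1 - K) * Ppred
    xprev <- x[k]; Pprev <- P[k]
  }
  list(x = x, P = P)
}

z <- rnorm(50, mean = -0.377, sd = 0.1)  # noisy measurements of a constant
res <- kalman1d(z)
tail(res$x, 1)   # the estimate settles near the true value
```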
null
CC BY-SA 3.0
null
2011-04-20T18:21:57.773
2011-04-20T18:21:57.773
null
null
1913
null
9800
2
null
9751
74
null
Since multiple comparison tests are often called 'post tests', you'd think they logically follow the one-way ANOVA. In fact, this isn't so.

> "An unfortunate common practice is to pursue multiple comparisons only when the null hypothesis of homogeneity is rejected." (Hsu, page 177)

Will the results of post tests be valid if the overall P value for the ANOVA is greater than 0.05?

Surprisingly, the answer is yes. With one exception, post tests are valid even if the overall ANOVA did not find a significant difference among means. The exception is the first multiple comparison test invented, the protected Fisher Least Significant Difference (LSD) test. The first step of the protected LSD test is to check if the overall ANOVA rejects the null hypothesis of identical means. If it doesn't, individual comparisons should not be made. But this protected LSD test is outmoded, and no longer recommended.

Is it possible to get a 'significant' result from a multiple comparisons test even when the overall ANOVA was not significant?

Yes, it is possible. The exception is Scheffe's test. It is intertwined with the overall F test. If the overall ANOVA has a P value greater than 0.05, then Scheffe's test won't find any significant post tests. In this case, performing post tests following an overall nonsignificant ANOVA is a waste of time but won't lead to invalid conclusions. But other multiple comparison tests can find significant differences (sometimes) even when the overall ANOVA showed no significant differences among groups.

How can I understand the apparent contradiction between an ANOVA saying, in effect, that all group means are identical and a post test finding differences?

The overall one-way ANOVA tests the null hypothesis that all the treatment groups have identical mean values, so any difference you happened to observe is due to random sampling. Each post test tests the null hypothesis that two particular groups have identical means. The post tests are more focused, so have power to find differences between groups even when the overall ANOVA reports that the differences among the means are not statistically significant.

Are the results of the overall ANOVA useful at all?

ANOVA tests the overall null hypothesis that all the data come from groups that have identical means. If that is your experimental question -- does the data provide convincing evidence that the means are not all identical -- then ANOVA is exactly what you want. More often, your experimental questions are more focused and answered by multiple comparison tests (post tests). In these cases, you can safely ignore the overall ANOVA results and jump right to the post test results. Note that the multiple comparison calculations all use the mean-square result from the ANOVA table. So even if you don't care about the value of F or the P value, the post tests still require that the ANOVA table be computed.
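To see this in R (a sketch with simulated data): `TukeyHSD()` runs directly on the `aov` fit and uses its mean square, and it reports the pairwise comparisons whether or not the overall F test reaches significance:

```
set.seed(1)
g <- gl(3, 20, labels = c("A", "B", "C"))
y <- rnorm(60, mean = rep(c(0, 0.3, 0.6), each = 20))
fit <- aov(y ~ g)
summary(fit)    # overall F test
TukeyHSD(fit)   # post tests, computed from the ANOVA mean square
```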
null
CC BY-SA 3.0
null
2011-04-20T18:35:24.807
2011-04-20T18:35:24.807
null
null
25
null
9801
1
9802
null
10
11550
I'm trying to understand matrix notation, and working with vectors and matrices. Right now I'd like to understand how the vector of coefficient estimates $\hat{\beta}$ in multiple regression is computed. The basic equation seems to be $$ \frac{d}{d\boldsymbol{\beta}} (\boldsymbol{y}-\boldsymbol{X\beta})'(\boldsymbol{y}-\boldsymbol{X\beta}) = 0 \>. $$ Now how would I solve for a vector $\beta$ here? Edit: Wait, I'm stuck. I'm here now and don't know how to continue: $ \frac{d}{d{\beta}} \left( \left(\begin{smallmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{smallmatrix}\right) - \left(\begin{smallmatrix} 1 & x_{11} & x_{12} & \dots & x_{1p} \\ 1 & x_{21} & x_{22} & \dots & x_{2p} \\ \vdots & & & & \vdots \\ 1 & x_{n1} & x_{n2} & \dots & x_{np} \\ \end{smallmatrix}\right) \left(\begin{smallmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_p \end{smallmatrix}\right) \right) ' \left( \left(\begin{smallmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{smallmatrix}\right) - \left(\begin{smallmatrix} 1 & x_{11} & x_{12} & \dots & x_{1p} \\ 1 & x_{21} & x_{22} & \dots & x_{2p} \\ \vdots & & & & \vdots \\ 1 & x_{n1} & x_{n2} & \dots & x_{np} \\ \end{smallmatrix}\right) \left(\begin{smallmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_p \end{smallmatrix}\right) \right) $ $ \frac{d}{d{\beta}} \sum_{i=1}^n \left( y_i - \begin{pmatrix} 1 & x_{i1} & x_{i2} & \dots & x_{ip} \end{pmatrix} \begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_p \end{pmatrix} \right)^2$ With $x_{i0} = 1$ for all $i$ being the intercept: $ \frac{d}{d{\beta}} \sum_{i=1}^n \left( y_i - \sum_{k=0}^p x_{ik} \beta_k \right)^2 $ Can you point me in the right direction?
Analytical solution to linear-regression coefficient estimates
CC BY-SA 3.0
null
2011-04-20T18:39:30.667
2021-11-22T04:52:33.890
2011-04-29T00:54:40.567
3911
2091
[ "regression" ]
9802
2
null
9801
13
null
We have $\frac{d}{d\beta} (y - X \beta)' (y - X\beta) = -2 X' (y - X \beta)$. It can be shown by writing the equation explicitly with components. For example, write $(\beta_{1}, \ldots, \beta_{p})'$ instead of $\beta$. Then take derivatives with respect to $\beta_{1}$, $\beta_{2}$, ..., $\beta_{p}$ and stack everything to get the answer. For a quick and easy illustration, you can start with $p = 2$. With experience one develops general rules, some of which are given, e.g., in [that document](http://www.colorado.edu/engineering/cas/courses.d/IFEM.d/IFEM.AppD.d/IFEM.AppD.pdf). Edit to guide for the added part of the question With $p = 2$, we have $(y - X \beta)'(y - X \beta) = (y_1 - x_{11} \beta_1 - x_{12} \beta_2)^2 + (y_2 - x_{21}\beta_1 - x_{22} \beta_2)^2$ The derivative with respect to $\beta_1$ is $-2x_{11}(y_1 - x_{11} \beta_1 - x_{12} \beta_2)-2x_{21}(y_2 - x_{21}\beta_1 - x_{22} \beta_2)$ Similarly, the derivative with respect to $\beta_2$ is $-2x_{12}(y_1 - x_{11} \beta_1 - x_{12} \beta_2)-2x_{22}(y_2 - x_{21}\beta_1 - x_{22} \beta_2)$ Hence, the derivative with respect to $\beta = (\beta_1, \beta_2)'$ is $ \left( \begin{array}{c} -2x_{11}(y_1 - x_{11} \beta_1 - x_{12} \beta_2)-2x_{21}(y_2 - x_{21}\beta_1 - x_{22} \beta_2) \\ -2x_{12}(y_1 - x_{11} \beta_1 - x_{12} \beta_2)-2x_{22}(y_2 - x_{21}\beta_1 - x_{22} \beta_2) \end{array} \right) $ Now, observe you can rewrite the last expression as $-2\left( \begin{array}{cc} x_{11} & x_{21} \\ x_{12} & x_{22} \end{array} \right)\left( \begin{array}{c} y_{1} - x_{11}\beta_{1} - x_{12}\beta_2 \\ y_{2} - x_{21}\beta_{1} - x_{22}\beta_2 \end{array} \right) = -2 X' (y - X \beta)$ Of course, everything is done in the same way for a larger $p$.
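Setting this derivative to zero yields the normal equations $X'X\beta = X'y$. A quick numerical sanity check in R (with simulated data) that solving them reproduces the least-squares coefficients:

```
set.seed(42)
n <- 100
X <- cbind(1, rnorm(n), rnorm(n))        # design matrix with intercept column
y <- drop(X %*% c(1, 2, -3)) + rnorm(n)  # true coefficients (1, 2, -3)

solve(t(X) %*% X, t(X) %*% y)            # solve the normal equations
coef(lm(y ~ X - 1))                      # same estimates from lm()
```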
null
CC BY-SA 3.0
null
2011-04-20T19:04:57.233
2011-04-21T14:16:35.683
2011-04-21T14:16:35.683
2129
3019
null
9803
2
null
9797
0
null
We know that the $t_1$ transition time (from state 0 to state 1) of subject `id=1` was between two boundaries: $24<t_1<32$. An approximation is to assume that $t_1$ may have taken values within this range with uniform probability. Resampling the $t_i$ values, we can get an approximate distribution of $\text{median}(t_i)$:

```
t = replicate(10000, median(sample(c(runif(1, 24, 32),  # id=1
                                     runif(1, 1, 8),    # id=2
                                     runif(1, 8, 16),   # id=3
                                     runif(1, 16, 24)), # id=4
                                   replace=TRUE)))
c(quantile(t, c(.025, .25, .5, .75, .975)), mean=mean(t), sd=sd(t))
```

Result (repeated):

```
    2.5%       25%       50%       75%     97.5%      mean        sd
4.602999 11.428310 16.005289 20.549056 28.378774 16.085808  6.243129
4.517058 11.717245 16.084075 20.898324 28.031452 16.201022  6.219094
```

Thus an approximation with 95% confidence interval of this median is 16 (5 – 28). EDIT: See whuber's comment on the limitation of this method when the number of observations is small (including n=4 itself).
null
CC BY-SA 3.0
null
2011-04-20T19:17:19.013
2011-04-20T23:40:21.120
2011-04-20T23:40:21.120
3911
3911
null
9805
2
null
8511
6
null
If deviance were proportional to log likelihood, and one uses the definition (see for example McFadden's [here](http://www.ats.ucla.edu/stat/mult_pkg/faq/general/psuedo_rsquareds.htm))

```
pseudo R^2 = 1 - L(model) / L(intercept)
```

then the pseudo-$R^2$ above would be $1 - \frac{198.63}{958.66} = 0.7928$. The question is: is reported deviance proportional to log likelihood?
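As a hedged sketch (simulated data): for ungrouped binary data the saturated log likelihood is 0, so R's reported deviance equals $-2\log L$ exactly, and McFadden's pseudo-$R^2$ can be read off the deviance slots of a `glm` fit:

```
set.seed(7)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(1 - 2 * x))
fit  <- glm(y ~ x, family = binomial)
fit0 <- update(fit, . ~ 1)                                # intercept-only model

1 - fit$deviance / fit$null.deviance                      # from deviances
1 - as.numeric(logLik(fit)) / as.numeric(logLik(fit0))    # from log likelihoods
# the two agree here because deviance = -2 * logLik for binary data
```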
null
CC BY-SA 3.0
null
2011-04-20T20:08:26.750
2017-12-02T22:32:21.847
2017-12-02T22:32:21.847
128677
2849
null
9806
2
null
9797
-1
null
Assuming that you will have more data of the same structure, you will be able to use the [actuarial (life table) method](http://en.wikipedia.org/wiki/Life_table) to estimate median survival.
null
CC BY-SA 3.0
null
2011-04-20T21:47:11.837
2011-04-20T21:56:31.867
2011-04-20T21:56:31.867
919
3911
null
9807
1
null
null
1
30439
I performed a survey using a Likert 1 to 5 scale (totally agree/agree/neutral/disagree/totally disagree) on 12 questions, each split into 3 statements to which the respondent assigns a value between 1 and 5 depending on how much they agree or disagree - there are 36 statements in total. Respondents:

```
Group 1                      Group 2
architects UK  140 replied   Architects US  100 replied
engineers UK   140 replied   Engineers US   100 replied
Contractors UK 140 replied   Contractors US 100 replied
```

The data is in Excel.

### Questions:

- How do I input them in SPSS?
- How do I work out the frequency of replies for each recipient?
- How do I work out the frequency of replies (i.e., agrees/disagrees, etc.) for each group?
- How can I rank each individual question (12 of them)? Remember, there are 3 individual statements to each question.
- How do I compare UK architects to US architects to show congruence or not?
- How would I show correlation between the two groups UK and US?
- Will SPSS develop graphs etc. for me showing frequency or correlation?
Working with Likert scales in SPSS
CC BY-SA 3.0
0
2011-04-20T22:57:58.130
2016-10-25T01:51:39.020
2011-04-22T08:01:01.770
183
4262
[ "spss", "likert" ]
9808
2
null
8511
61
null
Don't forget the [rms](http://cran.r-project.org/web/packages/rms/index.html) package, by Frank Harrell. You'll find everything you need for fitting and validating GLMs. Here is a toy example (with only one predictor):

```
set.seed(101)
n <- 200
x <- rnorm(n)
a <- 1
b <- -2
p <- exp(a+b*x)/(1+exp(a+b*x))
y <- factor(ifelse(runif(n)<p, 1, 0), levels=0:1)
mod1 <- glm(y ~ x, family=binomial)
summary(mod1)
```

This yields:

```
Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)   0.8959     0.1969    4.55 5.36e-06 ***
x            -1.8720     0.2807   -6.67 2.56e-11 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 258.98  on 199  degrees of freedom
Residual deviance: 181.02  on 198  degrees of freedom
AIC: 185.02
```

Now, using the `lrm` function,

```
require(rms)
mod1b <- lrm(y ~ x)
```

You soon get a lot of model fit indices, including Nagelkerke $R^2$, with `print(mod1b)`:

```
Logistic Regression Model

lrm(formula = y ~ x)

                      Model Likelihood     Discrimination    Rank Discrim.    
                         Ratio Test            Indexes          Indexes       

Obs           200    LR chi2      77.96    R2       0.445    C       0.852    
 0             70    d.f.             1    g        2.054    Dxy     0.705    
 1            130    Pr(> chi2) <0.0001    gr       7.801    gamma   0.705    
max |deriv| 2e-08                          gp       0.319    tau-a   0.322    
                                                             Brier   0.150    

          Coef    S.E.   Wald Z Pr(>|Z|)
Intercept  0.8959 0.1969  4.55  <0.0001 
x         -1.8720 0.2807 -6.67  <0.0001 
```

Here, $R^2=0.445$ and it is computed as $\left(1-\exp(-\text{LR}/n)\right)/\left(1-\exp(-(-2L_0)/n)\right)$, where LR is the $\chi^2$ stat (comparing the two nested models you described), whereas the denominator is just the max value for $R^2$. For a perfect model, we would expect $\text{LR}=2L_0$, that is $R^2=1$. By hand,

```
> mod0 <- update(mod1, .~.-x)
> lr.stat <- lrtest(mod0, mod1)
> (1-exp(-as.numeric(lr.stat$stats[1])/n))/(1-exp(2*as.numeric(logLik(mod0)/n)))
[1] 0.4445742
> mod1b$stats["R2"]
       R2 
0.4445742 
```

Ewout W. Steyerberg discussed the use of $R^2$ with GLM, in his book Clinical Prediction Models (Springer, 2009, § 4.2.2 pp. 58-60). Basically, the relationship between the LR statistic and Nagelkerke's $R^2$ is approximately linear (it will be more linear with low incidence). Now, as discussed on the earlier thread I linked to in my comment, you can use other measures like the $c$ statistic which is equivalent to the AUC statistic (there's also a nice illustration in the above reference, see Figure 4.6).
null
CC BY-SA 3.0
null
2011-04-20T23:21:07.663
2011-04-20T23:27:35.910
2011-04-20T23:27:35.910
930
930
null
9809
1
9890
null
5
2002
Does anyone know of a good resource listing known tricks (with examples?) for calculating closed form expressions from messy expectations? (e.g., moment generating function, law of iterated expectations, change of measure, etc.) In a different setting, I've found [Summary of Rules for Identifying ARIMA Models](http://www.duke.edu/~rnau/arimrule.htm) tremendously helpful. I was hoping a list of rules-of-thumb like this would also exist for calculating expectations...right? Unfortunately, I'm not finding anything.
A list of tricks for calculating expectations?
CC BY-SA 3.0
null
2011-04-20T23:35:57.413
2018-10-28T13:26:39.690
2018-10-28T13:26:39.690
11887
3577
[ "references", "expected-value", "moment-generating-function" ]
9810
1
null
null
1
1156
I want to compare whether three groups differ using a non-parametric test. The problem is that two of the groups are paired (prior to treatment and one year after treatment), while the (reference) group consists of other healthy individuals. Is there a simple solution? I am using R.
Which non-parametric test for difference between three groups, of which two are paired?
CC BY-SA 3.0
null
2011-04-20T23:49:40.123
2011-06-17T13:49:51.740
null
null
4229
[ "multiple-comparisons", "nonparametric" ]