Id stringlengths 1 6 | PostTypeId stringclasses 7 values | AcceptedAnswerId stringlengths 1 6 ⌀ | ParentId stringlengths 1 6 ⌀ | Score stringlengths 1 4 | ViewCount stringlengths 1 7 ⌀ | Body stringlengths 0 38.7k | Title stringlengths 15 150 ⌀ | ContentLicense stringclasses 3 values | FavoriteCount stringclasses 3 values | CreationDate stringlengths 23 23 | LastActivityDate stringlengths 23 23 | LastEditDate stringlengths 23 23 ⌀ | LastEditorUserId stringlengths 1 6 ⌀ | OwnerUserId stringlengths 1 6 ⌀ | Tags list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
11473 | 1 | 11488 | null | 6 | 6275 | I'm performing a relatively simple one-way ANOVA as part of a class exercise. I'd like to get pairwise uncorrected p-values out of R, so I can do a sequential FDR test in another package (I realize there are FDR packages in R as well). I've set up my ANOVA as shown below, and it works fine, produces results, but I can't seem to figure out how to get the raw, uncorrected p-values back. Ultimately, I'd like to do the pairwise tests for both FDR and sequential Bonferroni in R, but this seems like a first step.
It looks like `pairwiseCI` or `multcomp` might get me where I'm trying to go, but I'm having a difficult time figuring out which will do what I'm looking for.
```
R> head(d10)
time breed
1 27.4 type.A
2 18.3 type.A
3 24.3 type.B
4 19.6 type.B
5 21.6 type.C
6 30.3 type.D
a10 <- aov(time~breed,data=d10)
# reports the overall significance, but nothing on the pairs
summary(a10)
# reports corrected values only
TukeyHSD(a10)
```
| Uncorrected pairwise p-values for one-way ANOVA? | CC BY-SA 3.0 | null | 2011-06-02T10:45:21.923 | 2015-08-16T21:59:58.573 | 2011-06-02T15:33:20.037 | 3601 | 410 | [
"r",
"anova",
"self-study",
"multiple-comparisons"
] |
11474 | 2 | null | 643 | 4 | null | Playing around with the following code, varying the value of `M` and choosing distributions other than the uniform can be a fun illustration.
```
N <- 10000
M <- 5
meanvals <- replicate(N, expr = {mean(runif(M,min=0, max=1))})
hist(meanvals, breaks=50, prob=TRUE)
```
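The same illustration can be checked numerically in any language; here is a Python analog of the R snippet (assuming numpy is available), which also compares the spread of the sample means to the value the CLT predicts, $\sqrt{\operatorname{Var}(U)/M} = \sqrt{1/(12M)}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 10_000, 5

# N replications of the mean of M Uniform(0, 1) draws
meanvals = rng.uniform(0, 1, size=(N, M)).mean(axis=1)

# The CLT predicts sd(mean) = sqrt(Var(U)/M) = sqrt(1/(12*M))
predicted_sd = np.sqrt(1 / (12 * M))
print(meanvals.std(ddof=1), predicted_sd)  # the two values are close
```

Increasing `M` makes the histogram of `meanvals` look more normal and its spread shrink like $1/\sqrt{M}$.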
| null | CC BY-SA 3.0 | null | 2011-06-02T10:45:50.370 | 2011-06-02T11:08:55.880 | 2011-06-02T11:08:55.880 | 930 | 1806 | null |
11475 | 2 | null | 8148 | 8 | null | Well, clustering techniques are not limited to distance-based methods, where we seek groups of statistical units that are unusually close to each other in a geometrical sense. There is also a range of techniques relying on density (clusters are seen as "regions" in the feature space) or on probability distributions.
The latter case is also known as model-based clustering; psychometricians use the term [Latent Profile Analysis](http://spitswww.uvt.nl/~vermunt/ermss2004f.pdf) to denote this specific case of a [Finite Mixture Model](http://en.wikipedia.org/wiki/Mixture_model), where we assume that the population is composed of different unobserved groups, or latent classes, and that the joint density of all manifest variables is a mixture of these class-specific densities. Good implementations are available in the [Mclust](http://www.stat.washington.edu/mclust/) package or the [Mplus](http://www.statmodel.com/) software. Different class-invariant covariance matrices can be used (in fact, Mclust uses the BIC criterion to select the optimal one while varying the number of clusters).
The standard [Latent Class Model](http://en.wikipedia.org/wiki/Latent_class_model) also makes the assumption that observed data come from a mixture of g multivariate multinomial distributions. A good overview is available in [Model-based cluster analysis: a Defence](http://eric.univ-lyon2.fr/~rias2006/presentations/Celeux.pdf), by Gilles Celeux.
Inasmuch as these methods rely on distributional assumptions, they also make it possible to use formal tests or goodness-of-fit indices to decide on the number of clusters or classes, which remains a difficult problem in distance-based cluster analysis; see the following articles that discuss this issue:
- Handl, J., Knowles, J., and Kell, D.B. (2005). Computational cluster validation in post-genomic data analysis. Bioinformatics, 21(15), 3201-3212.
- Hennig, C. (2007) Cluster-wise assessment of cluster stability. Computational Statistics and Data Analysis, 52, 258-271.
- Hennig, C. (2008) Dissolution point and isolation robustness: robustness criteria for general cluster analysis methods. Journal of Multivariate Analysis, 99, 1154-1176.
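To make the finite-mixture idea concrete, here is a bare-bones two-component 1-D Gaussian EM in Python with simulated data (the mixture, starting values, and iteration count are all made-up illustration choices; this is a sketch of the principle, not a substitute for Mclust or Mplus, which also handle model selection and covariance parameterizations):

```python
import numpy as np

rng = np.random.default_rng(1)
# Two latent classes: N(0, 1) and N(5, 1), mixed 40/60 (values made up)
x = np.concatenate([rng.normal(0, 1, 800), rng.normal(5, 1, 1200)])

# Crude starting values: means from data quantiles, unit sds, equal weights
mu = np.percentile(x, [25, 75]).astype(float)
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

def normal_pdf(z, m, s):
    return np.exp(-0.5 * ((z - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(200):
    # E-step: posterior class-membership probabilities (responsibilities)
    dens = np.column_stack(
        [pi[k] * normal_pdf(x, mu[k], sigma[k]) for k in range(2)]
    )
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted means, sds, and mixing proportions
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(x)

print(np.sort(mu))  # should land near the true class means 0 and 5
```

Each observation gets a posterior probability of belonging to each latent class, which is exactly what distinguishes this from hard distance-based assignment.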
| null | CC BY-SA 3.0 | null | 2011-06-02T10:51:50.320 | 2011-06-02T10:51:50.320 | null | null | 930 | null |
11477 | 2 | null | 11473 | 7 | null | You can use `pairwise.t.test()` with one of the available options for multiple comparison correction in the `p.adjust.method=` argument; see `help(p.adjust)` for more information on the available options for single-step and step-down methods (e.g., `BH` for FDR or `bonf` for Bonferroni). Of note, you can directly give `p.adjust()` a vector of raw p-values and it will give you the corrected p-values.
So, I would suggest to run something like
```
with(d10, pairwise.t.test(time, breed, p.adjust.method="none")) # uncorrected p-values
with(d10, pairwise.t.test(time, breed, p.adjust.method="bonf")) # Bonferroni-corrected p-values
```
The first command gives you t-test-based p-values without controlling for FWER or FDR. You can then use whatever command you like to get corrected p-values.
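The adjustments themselves are simple enough to compute by hand. As a language-neutral illustration, here is a Python sketch re-implementing two of the methods offered by R's `p.adjust()` (the raw p-values are made up, not output from this ANOVA): Bonferroni multiplies each raw p-value by the number of comparisons, while Holm is the step-down version of the same idea.

```python
import numpy as np

def p_adjust(p, method="bonferroni"):
    """Minimal re-implementation of two of R's p.adjust() methods."""
    p = np.asarray(p, dtype=float)
    m = len(p)
    if method == "bonferroni":
        return np.minimum(p * m, 1.0)
    if method == "holm":
        order = np.argsort(p)                    # work on ascending p-values
        scaled = p[order] * (m - np.arange(m))   # multipliers m, m-1, ..., 1
        adj_sorted = np.minimum(np.maximum.accumulate(scaled), 1.0)
        out = np.empty(m)
        out[order] = adj_sorted                  # restore the original order
        return out
    raise ValueError(method)

raw = [0.01, 0.02, 0.04]
print(p_adjust(raw, "bonferroni"))  # [0.03 0.06 0.12]
print(p_adjust(raw, "holm"))        # [0.03 0.04 0.04]
```

These values match what `p.adjust(c(0.01, 0.02, 0.04))` returns in R for the corresponding methods.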
| null | CC BY-SA 3.0 | null | 2011-06-02T11:06:53.443 | 2011-06-02T11:06:53.443 | null | null | 930 | null |
11478 | 1 | null | null | 3 | 1419 | Both the dependent and independent variables I deal with are nonstationary series that become stationary after differencing them once.
The problem is that I assume that the dependent variable has a certain constant value which does not depend on the explanatory variables' changes and should be estimated as the model's constant term.
But the constant term is lost in the process of differencing.
Does anyone know any method of dealing with that kind of problem?
| Constant term in time series econometric models built on 1-st differences | CC BY-SA 3.0 | null | 2011-06-02T11:15:30.280 | 2011-06-02T16:42:25.207 | 2011-06-02T12:45:25.813 | 2116 | 4837 | [
"time-series",
"modeling"
] |
11479 | 2 | null | 11457 | 2 | null | Disclaimer: This is merely a comment but it won't fit as such, so I'll leave it as a CW response.
Everything is already available in Frank Harrell's [rms](http://cran.r-project.org/web/packages/rms/index.html) package (which model to choose, how to evaluate its predictive performance or how to validate it, how not to fall into the trap of overfitting or stepwise approach, etc.), with formal discussion in his textbook, Regression Modeling Strategies (Springer, 2001), and a nice set of handouts on his [website](http://biostat.mc.vanderbilt.edu/wiki/Main/RmS).
Also, I would recommend the following papers if you're interested in predictive modeling:
- Aliferis, C.F., Statnikov, A., Tsamardinos, I., Schildcrout, J.S., Shepherd, B.E., and Harrell, F.E. Jr (2009). Factors Influencing the Statistical Power of Complex Data Analysis Protocols for Molecular Signature Development from Microarray Data. PLoS ONE 4(3): e4922.
- Harrell, F.E. Jr, Margolis, P.A., Gove, S., Mason, K.E., Mulholland, E.K., Lehmann, D., Muhe, L., Gatchalian, S., and Eichenwald, H.F.. (1998). Development of a clinical prediction model for an ordinal outcome: the World Health Organization Multicentre Study of Clinical Signs and Etiological agents of Pneumonia, Sepsis and Meningitis in Young Infants. WHO/ARI Young Infant Multicentre Study Group. Statistics in Medicine, 17(8): 909-44.
- Harrell, F.E. Jr, Lee, K.L., and Mark, D.B. (1996). Multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Statistics in Medicine, 15(4): 361-87.
| null | CC BY-SA 3.0 | null | 2011-06-02T11:23:20.110 | 2011-06-02T11:49:17.280 | 2011-06-02T11:49:17.280 | 930 | 930 | null |
11480 | 2 | null | 11478 | 1 | null | All software I know allows the user to specify whether or not a constant is included in the model. What software are you using? Do you wish to share your data (coded or not) and the model you are trying to estimate, and perhaps I can shed some light on "stuff".
ADDITIONAL MATERIAL ADDED!
When Y and X can be rendered stationary by suitable differencing, we do so to IDENTIFY the relationship between Y and X. Just because Y and X need to be differenced for IDENTIFICATION PURPOSES does not mean the differencing needs to be incorporated into the Transfer Function (econometric model). For example, both Y and X may be non-stationary BUT the relationship between Y and X may be as simple as Y(t) = b0 + b1*X(t) + A(t). If one were naive enough to difference both Y and X, then one would be inducing an ARIMA component into the model, viz. [1-B]Y(t) = b1*[1-B]X(t) + [1-theta*B]*A(t), where the estimated theta would be approximately equal to 1.0. Some programs (you can guess the usual suspects) actually ASSUME that the differencing required for IDENTIFICATION is the same as the differencing in the final ESTIMATED MODEL. Hope my commentary helps you!
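That last point is easy to check numerically. In a Python sketch with simulated data (assuming numpy): if A(t) is white noise, the over-differenced error [1-B]A(t) has a lag-1 autocorrelation near -0.5, which is exactly the signature of an MA(1) term with theta near 1 (since an MA(1) has lag-1 autocorrelation -theta/(1 + theta^2)).

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(size=200_000)   # white noise A(t)
da = np.diff(a)                # over-differenced error [1-B]A(t)

# An MA(1) with theta = 1 has lag-1 autocorrelation -theta/(1+theta^2) = -0.5
r1 = np.corrcoef(da[:-1], da[1:])[0, 1]
print(r1)  # close to -0.5
```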
| null | CC BY-SA 3.0 | null | 2011-06-02T12:33:12.367 | 2011-06-02T16:42:25.207 | 2011-06-02T16:42:25.207 | 3382 | 3382 | null |
11481 | 1 | 11518 | null | 6 | 1617 | I have completed an experiment measuring the magnetic field of a solenoid. The answer I got was $0.0075 \pm 0.0011$T. The uncertainty by the way was generated from Microsoft Excel's Regression under data analysis and is the standard error of the gradient of some graph. The gradient is the magnetic field, B in Tesla.
I would like to compare this with a theoretical value obtained from an approximation formula of $B = 0.0106623 \pm 0.0000007$T.
How do I determine whether my experimental result is "close" to the theoretical one??
I would imagine it being easy if my theoretical result had no uncertainty. Then I could just say that the theoretical result is about three standard errors away. But now the theoretical value has an uncertainty as well. How do I get around this?
Thanks.
| How to tell the "closeness" of two variables | CC BY-SA 3.0 | null | 2011-06-02T12:33:21.893 | 2011-06-03T14:42:39.133 | 2011-06-03T14:42:39.133 | 919 | 4853 | [
"hypothesis-testing",
"normal-distribution"
] |
11482 | 2 | null | 11457 | 8 | null | I really appreciate the pointers to my book and papers and R package. Briefly, stepwise regression is invalid as it destroys all statistical properties of the result as well as faring poorly in predictive accuracy. There is no reason to use ROC curves to guide model selection (if model selection is even a good idea), because we have the optimum measure, the log-likelihood and its variants such as AIC. Thresholds for the dependent variable should be dealt with using ordinal regression instead of making a series of binary models. The Hosmer-Lemeshow test is now considered obsolete by many statisticians as well as the original authors. See the reference below (which proposes a better method, implemented in the rms package).
@ARTICLE{hos97com,
author = {Hosmer, D. W. and Hosmer, T. and {le Cessie}, S. and Lemeshow, S.},
year = 1997,
title = {A comparison of goodness-of-fit tests for the logistic regression
model},
journal = {Statistics in Medicine},
volume = 16,
pages = {965-980},
annote = {goodness-of-fit for binary logistic model;difficulty with
Hosmer-Lemeshow statistic being dependent on how groups are
defined;sum of squares test (see cop89unw);cumulative sum test;invalidity of naive
test based on deviance;goodness-of-link function;simulation setup;see sta09sim}
}
See also
@Article{sal09sim,
author = {Stallard, Nigel},
title = {Simple tests for the external validation of mortality prediction scores},
journal = {Statistics in Medicine},
year = 2009,
volume = 28,
pages = {377-388},
annote = {low power of older Hosmer-Lemeshow test;avoiding grouping of predicted risks;logarithmic and quadratic test;scaled $\chi^2$ approximation;simulation setup; best power seems to be for the logarithmic (deviance) statistic and for the chi-square statistics that is like the sum of squared errors statistic except that each observation is weighted by $p(1-p)$}
}
| null | CC BY-SA 3.0 | null | 2011-06-02T12:41:01.310 | 2011-06-02T12:41:01.310 | null | null | 4253 | null |
11483 | 2 | null | 396 | 4 | null | These are wonderful suggestions. We have assembled a lot of materials [here](http://hbiostat.org/rflow/graphics.html). A group of statisticians in the pharma industry, academia, and the FDA have also created a resource that is useful for clinical trials and related research [here](http://www.ctspedia.org/do/view/CTSpedia/PageOneStatGraph)$^\dagger.$
My personal favorite graphics book is [Elements of Graphing Data](https://rads.stackoverflow.com/amzn/click/com/0534037291) by William Cleveland.
In terms of software, in my opinion it is hard to beat R's `ggplot2` and `plotly`. Stata also supports some excellent graphics.
---
$\dagger$ The site is unfortunately temporarily down.
| null | CC BY-SA 4.0 | null | 2011-06-02T12:47:17.963 | 2022-11-21T12:41:13.030 | 2022-11-21T12:41:13.030 | 4253 | 4253 | null |
11484 | 2 | null | 11478 | 3 | null | As @IrishStat said, it depends on the model. One way of recovering the constant is to use the mean value of the residuals. Note that this method relies strongly on certain assumptions. Here is the illustration. Suppose your model is
$$Y_t=\alpha + \beta X_t + \varepsilon_t$$
with
$$E(\varepsilon_t|X_t)=0$$
and you estimate it by
$$\Delta Y_t=\beta \Delta X_t+\Delta \varepsilon_t.$$
Suppose your estimate $\hat\beta$ is unbiased (or at least consistent). Define
$$\hat{e}_t=Y_t-\hat\beta X_t.$$
Substituting the true model we have
$$\hat{e}_t=\alpha+\varepsilon_t+(\beta-\hat\beta)X_t,$$
hence
$$E(\hat{e}_t|X_t)=\alpha,$$
if $\hat\beta$ is unbiased or
$$\frac{1}{T}\sum_{t=1}^T\hat{e}_t\to \alpha,$$
if $\hat\beta$ is consistent.
So the natural estimate for $\alpha$ is
$$\hat{\alpha}=\frac{1}{T}\sum_{t=1}^T(Y_t-\hat\beta X_t).$$
Note that the model assumption is critical here. However with care this trick can be applied in general.
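The estimator $\hat\alpha$ is easy to check by simulation. Here is a Python sketch (the series, sample size, and parameter values are all made up for illustration): the regressor is a random walk, the levels regression has a true constant, and the differenced regression estimates $\beta$ without it.

```python
import numpy as np

rng = np.random.default_rng(3)
T, alpha, beta = 20_000, 5.0, 2.0

x = np.cumsum(rng.normal(size=T))          # nonstationary regressor (random walk)
eps = rng.normal(scale=0.1, size=T)
y = alpha + beta * x + eps                 # true model with a constant term

# Estimate beta from the differenced regression (the constant drops out)
dx, dy = np.diff(x), np.diff(y)
beta_hat = (dx @ dy) / (dx @ dx)

# Recover the constant as the mean of the level residuals
alpha_hat = np.mean(y - beta_hat * x)
print(beta_hat, alpha_hat)  # close to 2 and 5
```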
| null | CC BY-SA 3.0 | null | 2011-06-02T13:05:09.017 | 2011-06-02T13:05:09.017 | null | null | 2116 | null |
11485 | 1 | 11486 | null | 1 | 644 | Say I have two $n$-vectors $f(t)=(f_1(t), \dots, f_n(t))$ and $f(s)=(f_1(s), \dots, f_n(s))$, both with expected value 0.
Let $\operatorname{Cov}(f(t)) = a \Sigma$ and $\operatorname{Cov}(f(s)) = b \Sigma$ with scalars $a,b$. Further assume that the two vectors are not independent and that $\operatorname{Cov}(f_i(t), f_j(s)) = c \Sigma_{ij} \quad \forall i,j$.
I know that $\operatorname{E}(f(t)' \Sigma f(t)) = a \operatorname{trace}(\Sigma^2)$, but can any of you help me derive an expression for
$\operatorname{E}(f(t)' \Sigma f(s))$?
Background: This is not homework (i'm too old for that...), I'm working on an estimator for the temporal covariance structure of spatially correlated functional data.
| Expected value of non-standard quadratic form | CC BY-SA 3.0 | null | 2011-06-02T13:59:29.967 | 2011-06-02T14:19:12.410 | 2011-06-02T14:00:56.677 | 2116 | 1979 | [
"probability",
"correlation",
"mathematical-statistics",
"expected-value"
] |
11486 | 2 | null | 11485 | 4 | null | The answer is not that hard to get directly (without resorting to references). Denote $\Sigma=(\sigma_{ij})$, $\Omega=(\Sigma_{ij})$. We have
$$Ef(t)'\Sigma f(s)=\sum_{i=1}^n\sum_{j=1}^n\sigma_{ij}Ef_i(t)f_j(s)=\sum_{i=1}^n\sum_{j=1}^n\sigma_{ij}c\Sigma_{ij}$$
Then it is a matter of figuring out what that means:
$$Ef(t)'\Sigma f(s)=c\operatorname{trace}(\Sigma\Omega)$$
where we exploit the fact that $\Sigma$ is symmetric.
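The final trace identity, $\sum_{i,j}\sigma_{ij}\Sigma_{ij} = \operatorname{trace}(\Sigma\Omega)$ for symmetric $\Sigma$, can be verified numerically; a Python sketch with arbitrary test matrices (assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
A = rng.normal(size=(n, n))
Sigma = A + A.T                    # arbitrary symmetric Sigma = (sigma_ij)
Omega = rng.normal(size=(n, n))    # Omega = (Sigma_ij); symmetry not required here

# sum_ij sigma_ij * Omega_ij equals trace(Sigma @ Omega) when Sigma is symmetric
lhs = np.sum(Sigma * Omega)        # elementwise product, then sum
rhs = np.trace(Sigma @ Omega)
print(np.isclose(lhs, rhs))  # True
```

This works because $\operatorname{trace}(\Sigma\Omega) = \sum_{i,j}\sigma_{ij}\Omega_{ji}$, and symmetry of $\Sigma$ lets us swap the indices.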
| null | CC BY-SA 3.0 | null | 2011-06-02T14:19:12.410 | 2011-06-02T14:19:12.410 | null | null | 2116 | null |
11487 | 1 | 11493 | null | 2 | 1182 | I wanted to solve such a regression problem:
$$Y = Xb + e$$
where $X$ is an $m$ by $n$ matrix, resulting in $b = (X'X)^{-1}X'Y$ as a solution. Since $n$ is quite large (2400), I can't use the conventional methods to calculate the inverse of $X'X$. So I thought about using LU decomposition via the Crout method. In this [link](http://mymathlib.webtrellis.net/matrices/linearsystems/crout.html) an implementation of this method is available, which requires an $n$ by $n$ matrix $A$ to solve the equation $Ax=B$. Does it mean that I should use the $X'X$ matrix as the input ($A$)? Please note that the problem is that it requires an $n$ by $n$ matrix instead of an $m$ by $n$ matrix.
| Solving a regression problem | CC BY-SA 3.0 | null | 2011-06-02T14:35:39.513 | 2022-12-03T20:30:09.320 | 2011-06-02T15:07:28.710 | 2885 | 2885 | [
"regression",
"matrix-decomposition",
"matrix-inverse"
] |
11488 | 2 | null | 11473 | 9 | null | For the `multcomp` package, see the help page for `glht`; you want to use the `"Tukey"` option; this does not actually use the Tukey correction, it just sets up all pairwise comparisons. In the example section there's an example that does exactly what you want.
This calculates the estimates and se's for each comparison but doesn't do p-values; for that you need `summary.glht`; on the help page for that, notice in particular the `test` parameter, which allows you to set the actual test that is run. For multiply-adjusted p-values, you use the `adjusted` function for this parameter, and to not multiply-adjust, you use `test=adjusted("none")` (which isn't mentioned specifically in the help page, though it does say it takes any of the methods in `p.adjust`, which is where you'd find `none`).
You can also compute the estimates and se's by hand using matrix multiplication, and then get the p-values however you want; this is what the `glht` function is doing behind the scenes. To get the matrices you need to start you'd use `coef` and `vcov`.
I didn't put complete code as you say it's for a class project (thanks for being honest, by the way!) and the policy here is to provide helpful hints but not solutions.
| null | CC BY-SA 3.0 | null | 2011-06-02T14:57:17.083 | 2011-06-02T14:57:17.083 | null | null | 3601 | null |
11489 | 2 | null | 11370 | 2 | null | Assuming that John's correction to the data is correct, you can use the `ezANOVA` function from the [ez package](http://cran.r-project.org/web/packages/ez/index.html):
```
my_anova = ezANOVA(
data = my_data
, dv = .(recalled_items)
, wid = .(Subject)
, within = .(Task)
, between = .(Order)
)
print(my_anova)
```
However, as I note in a comment below your question, it looks like your data represents the number of items recalled from a list of study items, in which case it would be more appropriate to analyze the raw item-by-item recall data. See my answer to [this question](https://stats.stackexchange.com/questions/11448/is-there-a-way-to-determine-the-significance-of-a-change-in-a-d-score/11452#11452) for a description of how to achieve this. That answer explicitly speaks to recognition memory type scenarios where one typically wants to dissociate response bias from discriminability. If your context is free recall, then you would simply have data labelling each studied word as recalled or not-recalled and this would be your response variable. You would then predict this response as a function of the explanatory variables in your study, and their effect would represent the change in likelihood (on the log-odds scale) of recall.
| null | CC BY-SA 3.0 | null | 2011-06-02T15:46:52.483 | 2011-06-02T15:46:52.483 | 2017-04-13T12:44:39.283 | -1 | 364 | null |
11490 | 1 | 11491 | null | 7 | 3550 | Does a high LL value imply that the model has a high $R^2$? I'm a very beginner to statistics so please excuse my naivete.
| Does high log-likelihood imply high R^2 | CC BY-SA 4.0 | null | 2011-06-02T16:02:58.393 | 2020-11-05T13:42:48.797 | 2020-11-05T13:42:48.797 | 103153 | 4855 | [
"modeling",
"maximum-likelihood",
"r-squared"
] |
11491 | 2 | null | 11490 | 9 | null | No, since for linear regression log likelihood is a sum of squared residuals plus some other terms, log likelihood is scale dependent. So for the same model multiplying the regressors by some constant will change log likelihood but R squared will remain the same.
| null | CC BY-SA 3.0 | null | 2011-06-02T16:13:21.030 | 2011-06-02T16:13:21.030 | null | null | 2116 | null |
11492 | 2 | null | 11454 | 7 | null | There are power calculation functions specifically for proportions such as `power.prop.test`:
```
> power.prop.test(p1=0.4, p2=0.6, power=0.8)
Two-sample comparison of proportions power calculation
n = 96.92364
p1 = 0.4
p2 = 0.6
sig.level = 0.05
power = 0.8
alternative = two.sided
NOTE: n is number in *each* group
```
| null | CC BY-SA 3.0 | null | 2011-06-02T16:27:52.900 | 2011-06-02T16:27:52.900 | null | null | 279 | null |
11493 | 2 | null | 11487 | 5 | null | You say that you need to solve an ordinary least squares problem on 2400 variables.
There are two assumptions that I think you need to revisit:
Assumption 1: that you need to compute the inverse of $X^TX$.
Assumption 2: that solving ordinary least squares on 2400 variables requires specialized methods.
I'll examine them in turn:
Assumption 1: that you need to compute the inverse of $X^TX$.
A better way to solve OLS using normal equations is by computing the Cholesky factorization of $X^TX$. See section 5.3.2 of [Golub and van Loan](https://rads.stackoverflow.com/amzn/click/com/0801854148) for details. They state that the entire algorithm, including computing $X^TX$, $X^Ty$ as well as performing the Cholesky factorization and back-substitution, requires $(m+n/3)n^2$ floating-point operations.
Assumption 2: that solving ordinary least squares on 2400 variables requires specialized methods.
You don't say what kind of hardware you have at your disposal, so I'll assume that you have access to a typical mainstream PC.
First of all, a 2400x2400 matrix of 64-bit floats requires just 44MB of memory.
Secondly, computing Cholesky decomposition of a matrix of this size takes half a second on my desktop PC using Numerical Python ([numpy](https://numpy.org/)). This is the dominant computation once you have computed $X^TX$ and $X^Ty$.
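For instance, here is a Python/numpy sketch of the normal-equations-plus-Cholesky approach (the problem size is made up and kept small so it runs quickly; the same code handles $n = 2400$):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 500, 50                      # small stand-ins for the 2400-column problem
X = rng.normal(size=(m, n))
y = rng.normal(size=m)

# Normal equations via Cholesky: X'X b = X'y, with X'X = L L'
L = np.linalg.cholesky(X.T @ X)
z = np.linalg.solve(L, X.T @ y)     # solve L z  = X'y
b = np.linalg.solve(L.T, z)         # solve L' b = z

# Agrees with a general-purpose least-squares solver
b_ref, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(b, b_ref))  # True
```

At no point is $(X'X)^{-1}$ formed explicitly; the two triangular systems replace the inversion.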
| null | CC BY-SA 4.0 | null | 2011-06-02T16:56:47.763 | 2022-12-03T20:30:09.320 | 2022-12-03T20:30:09.320 | 79696 | 439 | null |
11494 | 1 | 11589 | null | 4 | 7166 | I've run a 2-way ANOVA on growth rate data (grams/day) with the factors of year type (good and poor) and site (A and B). Though the data themselves are non-normal and do not have homogeneous variances, the residuals fall pretty nicely along the qq plot, and they are not heteroskedastic. Running the test shows that there is an interaction between year-type and site. It's been suggested to me that I now must run a series of pairwise comparisons to look for differences because of this interaction effect, which I assumed I'd need to do anyway.
My stats program (Sigmaplot11, which includes the SigmaStat package) automatically does post-hoc tests for any significant results (in this case it reports: "All Pairwise Multiple Comparison Procedures (Holm-Sidak method): Overall significance level = 0.05"). I assume that this is the correct way to conduct the pairwise comparisons rather than making them separately?
Here's why I ask: Using these built-in post-hoc tests, I get significant results in both good years and poor (p<0.001 in each case). However, doing these tests separately (good vs. good and poor vs. poor) as Mann-Whitney rank sum tests (the raw data are non-normal with heterogeneous variances), I get no significant difference between sites within poor years. t-tests show the same as the ANOVA. I assume this is because I'm not comparing means in the case of the Mann-Whitney, but which post-hoc method should I be using in this case?!
EDIT: following @whuber's informative comment, I've taken a look at my other 2-way ANOVAs run in a similar manner. I've done another similar test today comparing adult weight. The t-test from the multiple pairwise comparisons after the 2-way ANOVA shows no difference (t=1.547, p=0.122), but a t-test run outside of the ANOVA shows highly significant difference (t=-4.739, p<0.001). As @chl pointed out, I would expect these 2 t-tests to have the same t-values; note that these 2 tests use exactly the same original data. Any idea why this might be or how I can interpret this?
Thanks for any suggestions you can provide!!
EDIT #2: Just to update anyone who's interested, I've taken a look more closely at the numbers behind the test, and it looks like somehow the software is not doing what I'm asking. It lists a table called the Least Square Means, listing each site, its mean and its standard error of the mean. The overall site means listed in this table are not correct. However, for each site within year type, it does have the correct means in the Least Square Means table. I'm giving up on the main factor of site comparison within this test, and sticking to the t-value given in the t-test for this main factor comparison. I'm still not sure what's going on exactly, but from comments and answers provided (thanks again!), I feel that this is a safe move, and I must move on with other things.
Thanks for all of your help!
| Pairwise comparisons after significant interaction results: parametric or non? | CC BY-SA 3.0 | null | 2011-06-02T17:43:01.070 | 2011-06-06T02:19:21.593 | 2011-06-06T02:19:21.593 | 4238 | 4238 | [
"anova",
"post-hoc",
"nonparametric"
] |
11495 | 2 | null | 11368 | 5 | null | Assuming that in constructing the covariance matrix you are automatically taking care of the symmetry issue, your log-likelihood will be $-\infty$ when $\Sigma$ is not positive definite because of the $\log {\rm det} \ \Sigma$ term in the model, right? To prevent a numerical error when ${\rm det} \ \Sigma$ is not positive, I would precalculate ${\rm det} \ \Sigma$ and, if it is not positive, make the log likelihood equal to -Inf; otherwise continue. You have to calculate the determinant anyway, so this is not costing you any extra calculation.
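A Python sketch of that guard (a zero-mean multivariate-normal log-likelihood with hypothetical data; `slogdet` is used since it gives the sign and log-determinant in one call, which is numerically safer than computing the raw determinant):

```python
import numpy as np

def mvn_loglik(Sigma, xs):
    """Zero-mean MVN log-likelihood; -inf when Sigma is not positive definite."""
    sign, logdet = np.linalg.slogdet(Sigma)
    if sign <= 0:
        return -np.inf            # the optimizer will move away from this region
    Sinv = np.linalg.inv(Sigma)
    n, d = xs.shape
    quad = np.einsum("ij,jk,ik->", xs, Sinv, xs)   # sum_i x_i' Sinv x_i
    return -0.5 * (n * d * np.log(2 * np.pi) + n * logdet + quad)

xs = np.array([[1.0, 0.5], [-0.3, 0.2], [0.1, -1.0]])
good = np.array([[2.0, 0.3], [0.3, 1.0]])   # positive definite
bad = np.array([[1.0, 2.0], [2.0, 1.0]])    # symmetric but indefinite

print(mvn_loglik(good, xs), mvn_loglik(bad, xs))  # finite value, then -inf
```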
| null | CC BY-SA 3.0 | null | 2011-06-02T17:51:30.647 | 2011-06-02T19:42:33.137 | 2011-06-02T19:42:33.137 | 4856 | 4856 | null |
11496 | 2 | null | 6071 | 3 | null | If $\sigma^{2} = {\rm var}(\epsilon)$ is known then you can use the SIMEX method (Stefanski and Cook, 1995) to extrapolate backwards and estimate the effect when $X$ is not measured with error. The basic idea is -
- Generate a grid of $\sigma_{1}, ..., \sigma_{k}$ obtained by adding progressively more measurement error to $X$
- Fit $Y_{i} = \beta X_{i} + \varepsilon_{i}$ for each $\sigma$ and obtain the corresponding $\beta_{1}, ..., \beta_{k}$.
- Fit a regression of $\beta_{1}, ..., \beta_{k}$ vs. $\sigma_{1}, ..., \sigma_{k}$ and extrapolate backward to predict $\beta$ when $\sigma = 0$.
This is a rather crude description of the method, but this is the basic approach. Cook and Stefanski show that, under some conditions, this will work. Have a look at the paper.
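Here is an equally crude numerical sketch of the idea in Python (simulated data; the grid, the quadratic extrapolant, the number of replicates, and all parameter values are arbitrary illustration choices, not the tuning recommended in the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
n, beta, sigma = 50_000, 2.0, 1.0

x = rng.normal(size=n)                       # true covariate (unobserved)
w = x + rng.normal(scale=sigma, size=n)      # observed with measurement error
y = beta * x + rng.normal(scale=0.5, size=n)

def slope(u, v):
    return (u @ v) / (u @ u)   # no-intercept OLS slope (everything is mean zero)

beta_naive = slope(w, y)       # attenuated toward zero by the error in w

# Step 1-2: add extra error with variance lambda * sigma^2, refit the slope
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
betas = []
for lam in lambdas:
    reps = [slope(w + rng.normal(scale=np.sqrt(lam) * sigma, size=n), y)
            for _ in range(20)]
    betas.append(np.mean(reps))

# Step 3: extrapolate the fitted quadratic back to lambda = -1 (zero total error)
coef = np.polyfit(lambdas, betas, deg=2)
beta_simex = np.polyval(coef, -1.0)
print(beta_naive, beta_simex)  # SIMEX moves the estimate back toward beta = 2
```

The extrapolation is still biased here (the true attenuation curve is not quadratic), but the SIMEX estimate is markedly closer to the truth than the naive one.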
| null | CC BY-SA 3.0 | null | 2011-06-02T18:06:41.773 | 2011-06-02T18:06:41.773 | null | null | 4856 | null |
11498 | 1 | null | null | 5 | 1395 | I'm fitting a linear model where the response is a function both of time and of static covariates (i.e. ones that are independent of time). The ultimate goal is to identify significant effects of the static covariates.
Is this the best general strategy for selecting variables (in R, using the `nlme` package)? Anything I can do better?
- Break the data up by groups and plot it against time. For continuous covariates, bin it and plot the data in each bin against time. Use the group-specific trends to make an initial guess at what time terms to include-- time, time^n, sin(2*pi*time)+cos(2*pi*time), log(time), exp(time), etc.
- Add one term at a time, comparing each model to its predecessor, never adding a higher order in the absence of lower order terms. Sin and cos are never added separately. Is it acceptable to pass over a term that significantly improves the fit of the model if there is no physical interpretation of that term?.
- With the full dataset, use forward selection to add static variables to the model and then relevant interaction terms with each other and with the time terms. I've seen some strong criticism of stepwise regression, but doesn't forward selection ignore significant higher order terms if the lower order terms they depend on are not significant? And I've noticed that it's hard to pick a starting model for backward elimination that isn't saturated, or singular, or fails to converge. How do you decide between variable selection algorithms?
- Add random effects to the model. Is this as simple as doing the variable selection using lm() and then putting the final formula into lme() and specifying the random effects? Or should I include random effects from the very start?. Compare the fits of models using a random intercept only, a random interaction with the linear time term, and random interaction with each successive time term.
- Plot a semivariogram to see if an autoregressive error term is needed. What should a semivariogram look like if the answer is 'no'? A horizontal line? How straight, how horizontal? Does including autoregression in the model again require checking potential variables and interactions to make sure they're still relevant?
- Plot the residuals to see if the variance changes as a function of fitted value, time, or any of the other terms. If it does, weight the variances appropriately (for lme(), use the weights argument to specify a varFunc()) and compare to the unweighted model to see if this improves the fit. Is this the right sequence in which to do this step, or should it be done before autocorrelation?.
- Do summary() of the fitted model to identify significant coefficients for numeric covariates. Do Anova() of the fitted model to identify significant effects for qualitative covariates.
| Variable selection for time covariate | CC BY-SA 3.0 | null | 2011-06-02T20:23:19.327 | 2011-06-03T12:18:18.957 | 2011-06-02T22:03:57.473 | null | 4829 | [
"r",
"regression",
"time-series",
"model-selection",
"linear-model"
] |
11499 | 2 | null | 10604 | 5 | null |
## Specifying the Input Variables' ARIMA Models
The ARIMA Procedure uses the results of the first pair(s) of `identify` and `estimate` statements (i.e., the `identify` and `estimate` statements for the input variables) to create models to forecast the values of the input variable(s) (also called exogenous variable(s)) after the last point in time at which each of those input variables is observed. In other words, those statements specify the models that are used whenever values for the input variables are needed for periods not yet observed.
Thus, the model for `VariableY` is specified as
```
identify var=VariableY(PeriodsOfDifferencing);
estimate p=OrderOfAutoregression q=OrderOfMovingAverage;
```
where `VariableY` is modeled as $ARIMA(p,d,q)$ with $p$ = `OrderOfAutoregression`, $d$ = the order of differencing (determined from `PeriodsOfDifferencing`), and $q$ = `OrderOfMovingAverage`.
## Specifying Differencing for the Main and Input Series in the ARIMAX Model
The order(s) of differencing to apply to the input variables are specified in the `crosscorr` option; for modeling `VariableX` with inputs `VariableY` and `VariableZ`, the SAS code is:
```
identify var=VariableX(DifferencingX) crosscorr=( VariableY(DifferencingY) VariableZ(DifferencingZ) );
```
where `DifferencingX`, `DifferencingY`, and `DifferencingZ` are the period(s) of differencing for `VariableX`, `VariableY`, and `VariableZ`, respectively.
## Specifying the Order of Autoregression and the Order of Moving Average for the Main and Input Series in the ARIMAX Model
The number of input variable lags to include in the model is specified in the transfer function (in the `input` option). The beginning of the `estimate` line sets the orders of autoregression and moving average for the main series (i.e., the series for which a model or forecasts are ultimately being sought):
```
estimate p=AutoregressionX q=MovingAverageX
```
where `VariableX` is modeled as $ARIMAX(p,d,q,b)$ with $p$ = `AutoregressionX` and $q$ = `MovingAverageX`.
The `input` option in the same `estimate` statement sets the orders of autoregression and moving average for the ARIMAX model. The numerator factors for a transfer function for an input series are like the MA part of the ARMA model for the noise series. The denominator factors for a transfer function for an input series are like the AR part of the ARMA model for the noise series. (All examples below will simplify the example down to a single input series `VariableY` instead of showing both `VariableY` and `VariableZ`.)
When specified without any numerator or denominator terms, the input variable is treated as a pure regression term (i.e., the value of the input variable in the current period is used without any lags, whether it is forecast by the input variable's ARIMA model or already present as an observed value in the input series): `estimate`...`input=( VariableY );`.
Numerator terms are represented in parentheses before the input variable. `estimate`...`input=( (1 2 3) VariableY );` produces a regression on `VariableY`, `LAG(VariableY)`, `LAG2(VariableY)`, and `LAG3(VariableY)`.
Denominator terms are represented in parentheses after a slash and before the input variable. `estimate`...`input=( / (1) VariableY );` estimates the effect of `VariableY` as an infinite distributed lag model with exponentially declining weights.
Initial shift is represented before a dollar sign; `estimate`...`input=( k $ (` $\omega$-lags `) / (` $\delta$-lags `) VariableY );` represents the form $B^k \cdot \left(\frac{\omega (B)}{\delta (B)}\right) \cdot \text{VariableY}_t$. The value of `k` will be added to the exponent of $B$ for all numerator and denominator terms. To use an AR-like shift in the input variable without including the un-shifted (i.e., un-lagged or pure regression) term, use this operator instead of numerator terms in parentheses. For example, to set a 6, 12, and 18 month shift in the input series `VariableY` without the un-shifted term, the statement would be `estimate`...`input=( 6 $ (6 12) VariableY );` (this results in shifts of 6, 6 + 6 (i.e., 12), and 6 + 12 (i.e., 18)).
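To get a feel for what a specification like `k $ (` $\omega$-lags `) / (` $\delta$-lags `)` implies, it can help to expand $B^k \cdot \omega(B)/\delta(B)$ into its implied lag weights. Below is a rough Python sketch under a simplified sign convention ($\omega(B)=\sum_j \omega_j B^j$, $\delta(B)=1-\sum_i \delta_i B^i$); PROC ARIMA's printed parameterization uses different signs, so treat this as illustration only:

```python
def transfer_weights(omega, delta, k=0, n=10):
    """First n lag weights of B^k * omega(B) / delta(B).

    omega: [omega_0, omega_1, ...];  delta: [delta_1, delta_2, ...]
    (simplified convention: delta(B) = 1 - delta_1*B - delta_2*B^2 - ...)
    """
    v = []
    for j in range(n - k):
        w = omega[j] if j < len(omega) else 0.0
        for i, d in enumerate(delta, start=1):
            if j - i >= 0:
                w += d * v[j - i]
        v.append(w)
    return [0.0] * k + v

# A single denominator lag, as in input=( / (1) VariableY ), yields
# exponentially declining weights -- the "infinite distributed lag" reading.
print(transfer_weights([1.0], [0.5], n=5))  # [1.0, 0.5, 0.25, 0.125, 0.0625]
```

Setting `k` simply shifts the whole weight pattern later in time, matching the description of the dollar-sign operator above.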
## Summary
The first pair(s) of `identify` and `estimate` statements are used to prepare any necessary forecasted values for the input variable(s).
The last pair of `identify` and `estimate` statements run the actual ARIMAX model, and use forecasted values for the input variable(s) (generated from the first pair(s) of `identify` and `estimate` statements) when necessary.
The relationship between the main variable and the input variable(s) is specified in the `crosscorr` option of the `identify` statement and the `input` option of the `estimate` statement. The relationship between the main variable and the input variable(s) can be defined as a run-of-the-mill regression relationship; or it can be defined with differencing, AR term(s), and/or MA term(s).
## Attribution
Although this answer is my own, I was able to come up with the answer based on substantial help (and some quotations) from the official SAS documentation ("[The ARIMA Procedure: Rational Transfer Functions and Distributed Lag Models](http://support.sas.com/documentation/cdl/en/etsug/60372/HTML/default/viewer.htm#etsug_arima_sect014.htm)", "[The ARIMA Procedure: Specifying Inputs and Transfer Functions](http://support.sas.com/documentation/cdl/en/etsug/60372/HTML/default/viewer.htm#etsug_arima_sect037.htm)", "[The ARIMA Procedure: Input Variables and Regression with ARMA Errors](http://support.sas.com/documentation/cdl/en/etsug/60372/HTML/default/viewer.htm#etsug_arima_sect012.htm)", and "[The ARIMA Procedure: Differencing](http://support.sas.com/documentation/cdl/en/etsug/60372/HTML/default/viewer.htm#etsug_arima_sect010.htm)"), and from direction found in [this answer](https://stats.stackexchange.com/questions/10604/how-do-i-ensure-proc-arima-is-performing-the-correct-parameterization-of-input-va/10653#10653) and comments by [IrishStat](https://stats.stackexchange.com/users/3382/irishstat).
| null | CC BY-SA 3.0 | null | 2011-06-02T21:16:45.563 | 2011-06-03T07:32:50.997 | 2017-04-13T12:44:45.783 | -1 | 1583 | null |
11500 | 1 | 11502 | null | 7 | 659 | The problem I'm trying to solve here is very simple but the available data is very limited. That makes it a hard problem to solve.
The available data are as follows:
- I have 100 patients and I need to rank order them in terms of how healthy they are.
- I only have 5 measurements for each patient. Each of the five readings is coded as a numeric value, and the rule is that the bigger the reading, the healthier the patient.
Should I have some sort of doctor's "expert judgement based ranking" I could use that as the target variable and fit some sort of an ordinal logistic regression model trying to predict doctor's assessment. However, I don't have that. The only thing I have is (1) and (2).
How would you come up with a simple "scoring" algorithm which would combine those five measurements into a single score which would be good enough (not perfect) in rank ordering patients?
| Creating an index based on a set of measurements without a target for purpose of rank ordering | CC BY-SA 3.0 | null | 2011-06-02T21:30:14.260 | 2011-06-09T04:20:32.917 | 2011-06-09T04:20:32.917 | 183 | 333 | [
"scales",
"ranking"
] |
11501 | 2 | null | 11500 | 0 | null | I would simply sum them up, weighting each factor if necessary.
| null | CC BY-SA 3.0 | null | 2011-06-02T22:07:12.243 | 2011-06-02T22:07:12.243 | null | null | 4860 | null |
11502 | 2 | null | 11500 | 2 | null | A simple approach would be to calculate the sum score or the mean. Another approach would not assume that all variables are of equal importance and we could calculate a weighted mean.
Let's assume we have the following 10 patients and variables `v1` to `v5`.
```
> set.seed(1)
> df <- data.frame(v1 = sample(1:5, 10, replace = TRUE),
+ v2 = sample(1:5, 10, replace = TRUE),
+ v3 = sample(1:5, 10, replace = TRUE),
+ v4 = sample(1:5, 10, replace = TRUE),
+ v5 = sample(1:5, 10, replace = TRUE))
>
> df
v1 v2 v3 v4 v5
1 2 2 5 3 5
2 2 1 2 3 4
3 3 4 4 3 4
4 5 2 1 1 3
5 2 4 2 5 3
6 5 3 2 4 4
7 5 4 1 4 1
8 4 5 2 1 3
9 4 2 5 4 4
10 1 4 2 3 4
```
1. Sum score and ranks
```
> df$sum <- rowSums(df)
> df$ranks <- abs(rank(df$sum) - (dim(df)[1] + 1))
> df
v1 v2 v3 v4 v5 sum ranks
1 2 2 5 3 5 17 4.0
2 2 1 2 3 4 12 9.5
3 3 4 4 3 4 18 2.5
4 5 2 1 1 3 12 9.5
5 2 4 2 5 3 16 5.0
6 5 3 2 4 4 18 2.5
7 5 4 1 4 1 15 6.5
8 4 5 2 1 3 15 6.5
9 4 2 5 4 4 19 1.0
10 1 4 2 3 4 14 8.0
```
2. Mean score and ranks (note: `ranks` and `ranks2` are equal)
```
> df$means <- apply(df[, 1:5], 1, mean)
> df$ranks2 <- abs(rank(df$means) - (dim(df)[1] + 1))
> df
v1 v2 v3 v4 v5 sum ranks means ranks2
1 2 2 5 3 5 17 4.0 3.4 4.0
2 2 1 2 3 4 12 9.5 2.4 9.5
3 3 4 4 3 4 18 2.5 3.6 2.5
4 5 2 1 1 3 12 9.5 2.4 9.5
5 2 4 2 5 3 16 5.0 3.2 5.0
6 5 3 2 4 4 18 2.5 3.6 2.5
7 5 4 1 4 1 15 6.5 3.0 6.5
8 4 5 2 1 3 15 6.5 3.0 6.5
9 4 2 5 4 4 19 1.0 3.8 1.0
10 1 4 2 3 4 14 8.0 2.8 8.0
```
3. Weighted mean score (i.e. I assume that v3 and v4 are more important than v1, v2 or v5)
```
> weights <- c(0.5, 0.5, 1, 1, 0.5)
> wmean <- function(x, w = weights){weighted.mean(x, w = w)}
> df$wmeans <- sapply(split(df[, 1:5], 1:10), wmean)
> df$ranks3 <- abs(rank(df$wmeans) - (dim(df)[1] + 1))
> df
v1 v2 v3 v4 v5 sum ranks means ranks2 wmeans ranks3
1 2 2 5 3 5 17 4.0 3.4 4.0 3.571429 2.5
2 2 1 2 3 4 12 9.5 2.4 9.5 2.428571 9.0
3 3 4 4 3 4 18 2.5 3.6 2.5 3.571429 2.5
4 5 2 1 1 3 12 9.5 2.4 9.5 2.000000 10.0
5 2 4 2 5 3 16 5.0 3.2 5.0 3.285714 5.0
6 5 3 2 4 4 18 2.5 3.6 2.5 3.428571 4.0
7 5 4 1 4 1 15 6.5 3.0 6.5 2.857143 6.0
8 4 5 2 1 3 15 6.5 3.0 6.5 2.571429 8.0
9 4 2 5 4 4 19 1.0 3.8 1.0 4.000000 1.0
10 1 4 2 3 4 14 8.0 2.8 8.0 2.714286 7.0
```
| null | CC BY-SA 3.0 | null | 2011-06-02T22:38:02.897 | 2011-06-02T22:38:02.897 | null | null | 307 | null |
11503 | 2 | null | 11252 | 4 | null | Unfortunately you're not going to be able to create the exact solution you're looking for. The company's existing system depends on linear relationships between the factors and the final score, which is a proxy for probability. Your logistic model, on the other hand, depends on S-shaped curves rather than linear relationships between factors and the probabilities. The latter are bounded at 0 and 1; if you were to try to use linear weights to compute probabilities, you would no doubt have to assign to certain cases probabilities less than zero or greater than one. This is one of the classic reasons why logistic regression is preferred over linear regression when the outcome variable is binary.
Your best bet, from a statistical point of view, is to create the best logistic model you can and to use that instead of the existing linear weights system. This will give you the best predictive accuracy while also keeping all predicted probabilities within a valid range.
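To make the boundedness point concrete, here is a small Python sketch with made-up scores and an arbitrary linear rule; the linear "probabilities" escape $[0,1]$ at extreme scores while the logistic transform cannot:

```python
import math

def sigmoid(z):
    """Logistic transform: strictly between 0 and 1 for any real z."""
    return 1.0 / (1.0 + math.exp(-z))

scores = [-3.0, -0.5, 0.0, 0.5, 4.0]            # hypothetical risk scores
linear_probs = [0.5 + 0.4 * s for s in scores]  # an arbitrary linear weights rule
logistic_probs = [sigmoid(s) for s in scores]

print(linear_probs)                           # -0.7 and 2.1 are not valid probabilities
print([round(p, 3) for p in logistic_probs])  # every value lies strictly in (0, 1)
```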
| null | CC BY-SA 3.0 | null | 2011-06-03T00:03:52.680 | 2011-06-03T00:03:52.680 | null | null | 2669 | null |
11504 | 1 | null | null | 2 | 139 | In an attempt to improve the results of Bayesian NNT, I transformed the 7 variables that I have into normal scores (subtracted the mean and divided by SD).
Then I used a PCA on the transformed variables to generate new 7 PCs. I used these PCs to run the Bayesian NNT and the classification results improved a little bit.
My question is: is this statistically valid and applicable, or does it violate some fundamental statistical rule?
| Performing PCA for normal score transformed data | CC BY-SA 3.0 | null | 2011-06-03T02:41:07.133 | 2011-06-04T21:56:23.067 | 2011-06-04T21:56:23.067 | null | 4861 | [
"bayesian",
"pca"
] |
11505 | 1 | null | null | 2 | 1951 | Does anyone have Stata code they could share with me so that I can run a two-part model to look at health care expenditures? The first part of the model will determine whether or not an individual had any visit and the second part will determine how much individuals spent (estimated on the subset of those who had a visit). I know that I need to take the product of these to get the expected cost of health care for individual i and also need to do a transformation for the first part but I am not sure how to do this in Stata. Thank you!
| How do I run a two-part model of health care expenditures in Stata? | CC BY-SA 3.0 | null | 2011-06-03T02:47:45.077 | 2018-08-27T08:40:27.440 | 2018-08-27T08:40:27.440 | 11887 | 834 | [
"logistic",
"stata"
] |
11506 | 2 | null | 11458 | 1 | null | Denominators are different in the correlation formula and the autocorrelation formula. (Moved to answer at moderator's request.)
| null | CC BY-SA 3.0 | null | 2011-06-03T04:05:10.890 | 2011-06-03T04:05:10.890 | null | null | 3919 | null |
11507 | 2 | null | 11490 | 3 | null | If all your models have normally distributed errors and are fit to exactly the same data, then there is a straightforward, increasing relationship between likelihood and R^2, since they're both ultimately about the sum of squared errors. But in general, no, you can't look at a pair of log likelihood values and assume much of anything about overall goodness of fit.
The clearest case of this is when you have models fit to two different data sets. The likelihood of getting exactly two heads and one tail from a fair coin after three flips is a lot higher than the likelihood of getting exactly four heads and two tails after six flips, even though the fit is equally good.
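The coin-flip comparison is easy to verify numerically; a quick Python check:

```python
from math import comb

def binom_lik(k, n, p=0.5):
    """Likelihood of exactly k heads in n flips with heads probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binom_lik(2, 3))  # 2 heads, 1 tail in 3 flips: 0.375
print(binom_lik(4, 6))  # 4 heads, 2 tails in 6 flips: 0.234375
```

Both outcomes are the same 2:1 ratio of heads to tails — an "equally good fit" — yet the likelihoods differ substantially.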
| null | CC BY-SA 3.0 | null | 2011-06-03T05:52:22.727 | 2011-06-03T05:52:22.727 | null | null | 4862 | null |
11508 | 1 | 11529 | null | 5 | 1103 | What am I supposed to do when I want to interpret significances, although I know that standard errors are biased because of wrong error term assumptions? I know that there is the possibility to use White estimators, weighted OLS. But my prof told me not to do so.
Maybe some extra information:
1.) I am analyzing an OLS with a whole bunch of dummy variables.
2.) the assumption of normally distributed error terms is wrong (they are t-distributed) and heteroskedasticity occurs.
3.) I am doing cross-sectional analysis.
4.) I have a lot of interaction terms in the model, and most of the variables are not significant (p-values around 0.8, so the coefficients are close to zero). My prof doesn't want me to get rid of these variables even though they are not significant (he said this is not a good approach, because stepwise elimination can delete the wrong variables and depends on choosing the right criteria).
On the one hand I understand why there is no way to interpret the significances. But on the other hand it makes interpreting not easier. Sure I can change the model, but I have to do OLS first, before I am allowed to switch!
| What to do with p-values when standard errors are obviously biased | CC BY-SA 3.0 | null | 2011-06-03T07:38:32.973 | 2013-03-29T10:12:55.007 | 2012-05-26T04:08:25.197 | 4856 | 4496 | [
"regression",
"statistical-significance",
"p-value",
"least-squares"
] |
11510 | 1 | 11511 | null | 5 | 230 | I have a web-site, and I found that the distribution of user numbers over a day has an obvious pattern. It's not just my own site; almost all website usage distributions I see fit a model like this. They look like a sine wave. I would like to use the model to predict how much total bandwidth I will use if I know the peak usage. What kind of formula do you think fits the distribution best?

| What's the best formula to fit the distribution of website user number over a day | CC BY-SA 3.0 | null | 2011-06-03T08:54:38.197 | 2011-06-03T11:30:22.467 | 2011-06-03T11:30:22.467 | 2116 | 4865 | [
"time-series"
] |
11511 | 2 | null | 11510 | 5 | null | You have time series data and one develops an equation for intra-day usage which may use either an auto-projective ARIMA model or a set of fixed dummies (23 in number) to predict hourly expectations. One has to be concerned with detecting "unusual data" so that your model/parameters reflect the main body of data and not being impacted by the exceptions. You might also be concerned with inter-day activity as different days of the week may have different effects. I have found that there are also interaction effects where the hourly distribution depends on the day-of-the-week. Additionally there may be known events/holidays that need to be accounted for. Upon building a suitable model , the residuals provide an estimate as to the expected variability yielding a "safety stock" which can be useful in guidance. The suggested statistical model for this is called a Transfer Function which is a hybrid between regression and ARIMA modelling.
| null | CC BY-SA 3.0 | null | 2011-06-03T10:32:18.730 | 2011-06-03T10:32:18.730 | null | null | 3382 | null |
11512 | 2 | null | 11498 | 1 | null | Fitting models that include time, time-squared, time-cubed , sines , cosines et all are not very useful in my opinion as they assume deterministic structure that often is inappropriate. Using historical values that are lagged values of the output and possibly covariate series is the approach to take. When constructing these models one needs to verify a few things ( the Gaussian Assumptions ) before declaring victory :
1. The mean of the errors is "near zero" for all time intervals; otherwise empirically identifiable structure such as pulses, level shifts, local time trends and/or seasonal pulses might be needed (N.B. this is not guaranteed by simply including a constant in the model).
2. There is no provable auto-correlative structure in the residuals at any lag.
3. The parameters of your model are stable/constant over time.
4. The variance of the errors is constant over time: no structural breaks in variance requiring weighted estimation, no dependence of the error variability on the level of the series, and no stochastic variance adaptation in effect.

The suggested approach is called a Transfer Function, which is a super-set of regression and ARIMA modelling, used in conjunction with Intervention Detection.
| null | CC BY-SA 3.0 | null | 2011-06-03T12:03:32.843 | 2011-06-03T12:03:32.843 | null | null | 3382 | null |
11513 | 2 | null | 11436 | 0 | null | I don't know whether yours is a problem of inference. If the problem is of inferring a vector from $\mathbb{R}^n$ under certain constraints(which should define a closed convex set) when a prior guess say $u$ is given then the vector is inferred by minimizing $\ell_2$-distance from $u$ over the constraint set (if the prior $u$ is not given then its just by minimizing the $\ell_2$-norm). The above principle is justified as the right thing to do under certain circumstances in this paper [http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aos/1176348385](https://projecteuclid.org/journals/annals-of-statistics/volume-19/issue-4/Why-Least-Squares-and-Maximum-Entropy-An-Axiomatic-Approach-to/10.1214/aos/1176348385.full).
| null | CC BY-SA 4.0 | null | 2011-06-03T12:06:15.663 | 2022-09-03T20:02:06.363 | 2022-09-03T20:02:06.363 | 79696 | 3485 | null |
11514 | 2 | null | 11498 | 3 | null | Fully data-driven model selection will result in standard errors and P-values that are too small, confidence intervals that are too narrow, and overstated effects of remaining terms in the model.
For time effects I usually model using restricted cubic splines. A detailed case study in the context of generalized least squares for correlated serial data may be found at [http://biostat.mc.vanderbilt.edu/RmS](http://biostat.mc.vanderbilt.edu/RmS) - see the two attachments at the bottom named course2.pdf and rms.pdf. This uses the R rms package. The case study contains information about the choice of basis functions for the time component.
| null | CC BY-SA 3.0 | null | 2011-06-03T12:18:18.957 | 2011-06-03T12:18:18.957 | null | null | 4253 | null |
11515 | 1 | 11521 | null | -2 | 944 | $A$, $B$, $C$, $D$ are positive integers.
$$A \sim Binomial(p_1, A+B)$$
$$A+C \sim Binomial(p_2, A+B+C+D)$$
My variable of interest is $p_1/p_2$
Could one analytically compute a distribution (preferably exact) for this variable? What would be its mean and variance (how to compute it?).
| Distribution of a ratio of two proportions | CC BY-SA 3.0 | null | 2011-06-03T13:03:23.800 | 2011-06-03T17:11:01.157 | 2011-06-03T17:11:01.157 | 4569 | 4569 | [
"distributions",
"variance",
"binomial-distribution",
"proportion"
] |
11516 | 1 | 11520 | null | 2 | 579 | I am doing Cox regression models and KM plots for a data set where the end point is death. In addition there is information about whether the death was cancer-specific or not. So I have three categories:
- No death, last seen is the date of censor
- Death - cancer-specific
- Death - other cause
What I would like to know is what to do with category 3 data when I am looking at cancer-specific survival as my endpoint. Do I censor that data at the date of death OR do I just remove that data from the dataset?
| Shall I censor or rather remove other causes in cause-specific survival analysis? | CC BY-SA 3.0 | null | 2011-06-03T14:04:30.007 | 2011-06-04T10:52:30.077 | 2011-06-04T10:52:30.077 | null | 1150 | [
"survival",
"cox-model"
] |
11517 | 2 | null | 11500 | 5 | null | Any function $f: \mathbb{R}^5 \to \mathbb{R}$ that is separately increasing in each of its arguments will work. For example, you can select positive parameters $\alpha_i$ and any real parameters $\lambda_i$ and rank the data $(x_1, x_2, x_3, x_4, x_5)$ according to the values of
$$\sum_{i=1}^{5} \alpha_i (x_i^{\lambda_i} - 1) / \lambda_i \text{.}$$
Evidently some criterion is needed to select among such a rich set of distinctly different scores. In particular, the simple obvious solutions (frequently employed, unfortunately) of just summing the scores or first "normalizing" them in some fashion and then summing them will suffer from this lack of grounding in reality. To put it another way: any answer that does not derive its support from additional information is a pure fabrication.
Because this problem is essentially the same as [Creating an index of quality from multiple variables to enable rank ordering](https://stats.stackexchange.com/questions/9358/creating-an-index-of-quality-from-multiple-variables-to-enable-rank-ordering), I refer you to the discussion there for more information.
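Purely for illustration, one member of the family above can be computed as follows (the $\alpha_i$ and $\lambda_i$ values are arbitrary placeholders, not recommendations, and the $\lambda_i = 0$ limiting case, which would be a logarithm, is not handled):

```python
def score(x, alpha, lam):
    """sum_i alpha_i * (x_i**lambda_i - 1) / lambda_i, for positive x and alpha."""
    return sum(a * (xi**l - 1.0) / l for a, xi, l in zip(alpha, x, lam))

alpha = [1.0, 2.0, 0.5, 1.0, 1.5]   # arbitrary positive weights
lam = [1.0, 0.5, 2.0, -1.0, 1.0]    # arbitrary nonzero exponents

patient_a = [3.0, 2.0, 4.0, 1.0, 5.0]
patient_b = [3.0, 2.0, 5.0, 1.0, 5.0]  # strictly higher on the third reading

print(score(patient_a, alpha, lam) < score(patient_b, alpha, lam))  # True
```

Because each $\alpha_i > 0$, the score is increasing in every reading (for positive readings), so any such parameter choice yields a valid — but, as argued above, unjustified — ranking.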
| null | CC BY-SA 3.0 | null | 2011-06-03T14:08:38.533 | 2011-06-03T14:08:38.533 | 2017-04-13T12:44:40.883 | -1 | 919 | null |
11518 | 2 | null | 11481 | 4 | null |
### Solution
When you assume the residuals (vertical deviations in a graph of $n$ data) are independently and identically distributed with some normal distribution of zero mean, the estimate of the slope will have a Student t distribution with $n-2$ degrees of freedom, scaled by the standard error. Because the theoretical value has essentially zero error, we can ignore this complication and treat the theoretical value as a constant. Therefore we refer the ratio
$$t = (0.0106623 - 0.0075) / 0.0011 \approx 2.87$$
to [Student's t distribution](http://en.wikipedia.org/wiki/Student%27s_t-distribution#Table_of_selected_values) (as a two-sided test, because in principle the slope could have been greater or less than the theoretical value and you just want to see whether the difference could be attributed to chance).
Whether this deviation is "significant" depends on your criterion for significance and on the degrees of freedom. For example, if you want 95% or greater significance, then this difference will be significant if and only if you have six or more data values. This conclusion follows from noting that the 95% two-sided critical value with $5-2 = 3$ degrees of freedom is $3.182$, greater than $2.87$, and the critical value with $6-2 = 4$ d.f. is $2.776$, less than $2.87$.
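A quick numerical check of that comparison (the critical values are the two-sided 95% Student-t quantiles quoted above):

```python
t = (0.0106623 - 0.0075) / 0.0011   # estimated minus theoretical slope, over the SE

critical = {3: 3.182, 4: 2.776}     # two-sided 95% critical values for 3 and 4 d.f.
for df in sorted(critical):
    verdict = "significant" if abs(t) > critical[df] else "not significant"
    print(f"n = {df + 2}: |t| = {abs(t):.3f} vs {critical[df]} -> {verdict}")
```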
### Discussion
If the uncertainty in the theoretical value were appreciable compared to the standard error of the slope ($0.0011$) and you had relatively few data points (perhaps 10 or fewer), the problem would become more difficult:
- First, you don't know the distribution of the theoretical error.
- Second, you probably don't know for sure that it is a standard error (people often report confidence limits or two or three standard errors or even standard deviations without clearly specifying what they have computed).
- Third, the sum of a t-distributed value (your error) and another distribution (the theoretical error) can have a mathematically less tractable distribution.
Mitigating these complications, though, is a simple consideration: if the theoretical uncertainty were largish, then it would add to the overall uncertainty in the difference between the theoretical and estimated values, thereby lowering the t-statistic. In some cases such a semi-quantitative result might be good enough. (The addition is in terms of variances: you sum the squares of the two standard errors, obtaining the square of the standard error of the difference, and (therefore) take its square root.)
For instance, if the theoretical uncertainty were equal to the uncertainty of the estimate, the t-statistic would be reduced to $2.03$. The distribution of the difference would be approximately Normal, but with slightly longer tails, so referring the value of $2.03$ to a standard Normal distribution would slightly overestimate the significance. Well, we can compute that $4.2\%$ of the standard Normal distribution is more extreme than $\pm 2.03$. Thus--still in this hypothetical situation with a largish standard error for the theoretical result--you would not conclude the difference is significant if your criterion for significance exceeds $100 - 4.2 = 95.8\%$. Otherwise, the picture is murky and the determination depends on the resolution of the difficulties enumerated above.
| null | CC BY-SA 3.0 | null | 2011-06-03T14:42:22.673 | 2011-06-03T14:42:22.673 | 2020-06-11T14:32:37.003 | -1 | 919 | null |
11519 | 1 | null | null | 4 | 132 | I am starting work on electric vehicles to see how the charging process can impact the local electricity network. I would like to know if there exist public data on driving "habits".
Ideally, I would like time series data for big fleets of vehicles in a relatively big city. For a given car these data could be a series of dates telling when and where the car stops and when it starts again. It could also contain the associated energy consumption and driving distance between two stops. I know I am asking for a lot, but I would like to know if anyone is aware of an intermediate dataset.
Thanks in advance
| Searching for car displacement data | CC BY-SA 3.0 | null | 2011-06-03T14:49:51.200 | 2011-06-06T06:14:50.127 | 2011-06-06T06:14:50.127 | 223 | 223 | [
"dataset",
"spatial",
"networks",
"spatio-temporal"
] |
11520 | 2 | null | 11516 | 8 | null | Category 3 should be censored, not removed. Removing them would be similar to removing those in category 1 instead of censoring. The fact that those people were alive and did not die from cancer is useful information.
You should also look at all cause mortality in addition to the cancer specific. There is a chance that some of the non-cancer deaths were indirectly influenced by the cancer (increased stress leads to heart problems, believing they are going to die anyways leads to risky behavior, etc.)
Competing risks models could also be informative (just make sure that you understand what assumptions you are making).
| null | CC BY-SA 3.0 | null | 2011-06-03T14:54:58.087 | 2011-06-03T14:54:58.087 | null | null | 4505 | null |
11521 | 2 | null | 11515 | 1 | null | It looks like you have a standard 2x2 table and you want to condition on the margins and compare the conditional proportion to the marginal proportion. This is not simple theoretically since they are not going to be independent. Someone has probably solved this problem sometime, probably in a thesis somewhere, but I have no idea where to look.
On the other hand, you can easily simulate to estimate the distribution and answer many questions using that. If you want to condition on the margins and test the null that p1/p2 = 1, then this is just a simple permutation test:
Compute p1/p2, then randomly permute the A/B status of individual points and compute p1/p2 again; repeat a bunch of times and compare your original value to the resulting distribution.
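A sketch of that permutation test in Python. The framing below (one boolean row label and one 0/1 outcome per unit, for the 2×2 table with cells $A, B, C, D$) is my own; note that $p_2$ is unchanged by the permutation, since only the row labels are shuffled:

```python
import random

def p_ratio(row1, success):
    """p1/p2: success rate within row 1 over the overall success rate."""
    in_row1 = [s for r, s in zip(row1, success) if r]
    p1 = sum(in_row1) / len(in_row1)
    p2 = sum(success) / len(success)
    return p1 / p2

def perm_test(row1, success, n_perm=10000, seed=1):
    rng = random.Random(seed)
    observed = p_ratio(row1, success)
    labels = list(row1)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(labels)
        if p_ratio(labels, success) >= observed:
            hits += 1
    return observed, hits / n_perm

# toy table with A=8, B=2, C=2, D=8
row1 = [True] * 10 + [False] * 10
success = [1] * 8 + [0] * 2 + [1] * 2 + [0] * 8
print(perm_test(row1, success))  # observed ratio 1.6 and a small one-sided p-value
```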
If you want to see how your estimates of p1/p2 are distributed based on values of p1 and p2 with them unequal, then just simulate a bunch of times from binomials with the assumed p1 and p2 and do the computations.
Combining both of the above would show the power of the permutation test for a given alternative.
| null | CC BY-SA 3.0 | null | 2011-06-03T15:09:58.293 | 2011-06-03T15:09:58.293 | null | null | 4505 | null |
11522 | 1 | 11525 | null | 5 | 3701 | I have been running correlations for a set of data and several subsamples.
During this analysis I ran into a situation where the $r^2$ for two groups was smaller in each individual group as opposed to when they are grouped together.
- Is there any straight forward explanation for how this could happen?
| Why are the correlations in two groups less than the correlation when the groups are combined? | CC BY-SA 3.0 | null | 2011-06-03T15:23:43.543 | 2023-01-07T04:18:21.403 | 2011-06-03T15:36:22.373 | 183 | 3727 | [
"correlation"
] |
11523 | 2 | null | 11436 | 6 | null | The key here is understanding the "curse of dimensionality" the paper references. From wikipedia: when the number of dimensions is very large,
>
nearly all of the high-dimensional space is "far away" from the centre, or, to put it another way, the high-dimensional unit space can be said to consist almost entirely of the "corners" of the hypercube, with almost no "middle"
As a result, it starts to get tricky to think about which points are close to which other points, because they're all more or less equally far apart. This is the problem in the first paper you linked to.
The problem with high p is that it emphasizes the larger values--five squared and four squared are nine units apart, but one squared and two squared are only three units apart. So the larger dimensions (things in the corners) dominate everything and you lose contrast. So this inflation of large distances is what you want to avoid. With a fractional p, the emphasis is on differences in the smaller dimensions--dimensions that actually have intermediate values--which gives you more contrast.
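Numerically, using the per-coordinate gaps from the squares example ($|25-16| = 9$ and $|4-1| = 3$), the large gap's share of a Minkowski-style sum $\sum_i |x_i - y_i|^p$ grows with $p$, while a fractional $p$ keeps the small gap relevant:

```python
gaps = [9.0, 3.0]  # the per-coordinate differences |25 - 16| and |4 - 1|

for p in (0.5, 1.0, 2.0, 4.0):
    big, small = (g ** p for g in gaps)
    share = big / (big + small)
    print(f"p = {p}: large-gap share of the sum = {share:.3f}")
```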
| null | CC BY-SA 3.0 | null | 2011-06-03T15:26:19.360 | 2011-06-03T15:26:19.360 | null | null | 4862 | null |
11524 | 2 | null | 11519 | 1 | null | If you're looking for data, try:
[http://www.zanran.com](http://www.zanran.com)
===========================
Here are some additional links that might help:
[http://pubs.its.ucdavis.edu/download_pdf.php?id=1387](http://pubs.its.ucdavis.edu/download_pdf.php?id=1387)
[http://www.its.ucdavis.edu/people/faculty/kurani/index.php](http://www.its.ucdavis.edu/people/faculty/kurani/index.php)
[http://www.nrel.gov/docs/fy09osti/46251.pdf](http://www.nrel.gov/docs/fy09osti/46251.pdf)
[http://s3.amazonaws.com/zanran_storage/www.inl.gov/ContentPages/110551078.pdf](http://s3.amazonaws.com/zanran_storage/www.inl.gov/ContentPages/110551078.pdf)
[http://s3.amazonaws.com/zanran_storage/trb.org/ContentPages/520326.pdf](http://s3.amazonaws.com/zanran_storage/trb.org/ContentPages/520326.pdf)
| null | CC BY-SA 3.0 | null | 2011-06-03T15:30:12.840 | 2011-06-03T15:30:12.840 | null | null | 2775 | null |
11525 | 2 | null | 11522 | 9 | null | Here are just a couple of ideas:
- Range restriction is one explanation. Check out this simulation; and this explanation.
- Correlated group mean differences is another related idea. Say group 1 has a mean two standard deviations higher than group 2 on both X and Y, but that there is no correlation between X and Y within each group. When you combine the two groups there would be a strong correlation.
And just for fun, here's a little R simulation
```
# Setup Data
x1 <- rnorm(200, 0, 1)
x2 <- rnorm(200, 2, 1)
y1 <- rnorm(200, 0, 1)
y2 <- rnorm(200, 2, 1)
grp <- rep(1:2, each=200)
x <- data.frame(grp, x=c(x1,x2), y=c(y1,y2))
# Plot
library(lattice)
xyplot(y~x, groups=grp, data=x)
# Correlations
cor(x1, y1)
cor(x2, y2)
cor(x$x, x$y)
```
Which produced these three correlations respectively on my run of the simulation
```
[1] 0.1248730
[1] 0.1027219
[1] 0.56244
```
And the following graph

| null | CC BY-SA 4.0 | null | 2011-06-03T15:46:31.267 | 2023-01-07T04:18:21.403 | 2023-01-07T04:18:21.403 | 362671 | 183 | null |
11526 | 2 | null | 11522 | 5 | null | Sounds like [Simpson's Paradox](http://en.wikipedia.org/wiki/Simpson%27s_paradox).
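The classic kidney-stone treatment table is a concrete numerical instance (the figures commonly quoted alongside the Wikipedia article): within each stone size, treatment A has the higher success rate, yet pooled, B does.

```python
# (successes, trials) by stone size
a = {"small": (81, 87), "large": (192, 263)}    # treatment A
b = {"small": (234, 270), "large": (55, 80)}    # treatment B

def rate(cell):
    s, n = cell
    return s / n

def pooled(table):
    s = sum(si for si, _ in table.values())
    n = sum(ni for _, ni in table.values())
    return s / n

for size in ("small", "large"):
    print(size, round(rate(a[size]), 3), round(rate(b[size]), 3))  # A wins both strata
print("overall", round(pooled(a), 3), round(pooled(b), 3))         # but B wins pooled
```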
| null | CC BY-SA 3.0 | null | 2011-06-03T16:06:16.500 | 2011-06-03T16:06:16.500 | null | null | 2817 | null |
11527 | 1 | null | null | 2 | 1197 | I collected around 10 ants from a colony and introduced the same ants back into the same colony (say Colony-1) at its entrance to see their behaviour (whether they are accepted or rejected). Then I introduced 10 ants from a different colony (say Colony-2, i.e. non-nestmates) into Colony-1 and recorded their rates of acceptance and rejection. Likewise, I introduced another 10 ants from yet another colony (say Colony-3) into Colony-1 to observe the same.
The data I got looks like that:
```
*In Colony-1*
Accepted Rejected Colony
8 2 Colony-1
3 7 Colony-2
2 8 Colony-3
```
Now my question is: how should I arrange these data in SPSS 11.5 and run Fisher's exact test in order to get the p-value and the percentages of acceptance and rejection in Colony-1?
| How to perform Fisher exact test in SPSS? | CC BY-SA 3.0 | null | 2011-06-03T16:32:05.453 | 2011-06-04T20:50:09.950 | 2011-06-04T20:50:09.950 | 930 | 4868 | [
"spss",
"contingency-tables"
] |
11528 | 2 | null | 11505 | 1 | null | See the code from the "Modeling Health Care Costs and Use" presentations by Deb, Manning, and Norton, available via Google or at [http://urban.hunter.cuny.edu/~deb/](http://urban.hunter.cuny.edu/~deb/)
| null | CC BY-SA 3.0 | null | 2011-06-03T16:47:14.343 | 2011-06-03T16:47:14.343 | null | null | 4691 | null |
11529 | 2 | null | 11508 | 1 | null | You can't interpret the $p$-values. The long-tailed errors you're describing often act to underestimate the standard errors, making your $p$-values too small (not to mention that $\hat{\beta}$ isn't normally distributed in finite samples). I suggest a non-parametric bootstrap so you can characterize the sampling distribution of your coefficient estimates without making unwarranted assumptions about the error distribution.
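A minimal sketch of what that might look like with the `boot` package (the data here are simulated, with t-distributed errors standing in for the long tails; all names are mine):

```
library(boot)

# Toy data with heavy-tailed errors (t with 3 df as a stand-in for long tails)
set.seed(1)
d <- data.frame(x = runif(100))
d$y <- 1 + 2 * d$x + rt(100, df = 3)

# Case-resampling bootstrap of the slope coefficient
slope <- function(data, idx) coef(lm(y ~ x, data = data[idx, ]))[2]
b <- boot(d, slope, R = 2000)
boot.ci(b, type = "perc")   # percentile CI for the slope
```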
| null | CC BY-SA 3.0 | null | 2011-06-03T16:57:22.990 | 2012-05-26T04:07:59.580 | 2012-05-26T04:07:59.580 | 4856 | 4856 | null |
11530 | 1 | 11545 | null | 3 | 74 | I've collected a data set from the literature. These data are largely binomial: they are clutch sizes of a bird that lays only 1 or 2 eggs in most areas, and in some areas will lay 3. However the only information I have compiled are means, standard errors and sample sizes (# nests) from various sites. Some sites have multiple values reported from them (from different years), and I have split the sites into 3 categories based on location (A, B and C).
I want to do 2 things that are essentially the same:
- Produce one value (± SE or SD) for each site (so a site mean)
- Produce a location mean ± SE or SD (so an average of all the sites within a given location category).
Is it possible to do this relatively simply while keeping the information contained in the reported standard errors from each site?
Currently, I've taken a mean of the means of each site within a location category, and just calculated the SE from the mean of means, but that's losing all the reported variation from each site.
| Computing descriptives statistics for sites and locations based on literature search with sites having varying numbers of time points | CC BY-SA 3.0 | null | 2011-06-03T18:29:57.233 | 2011-06-04T17:19:25.383 | 2011-06-04T17:19:25.383 | 4238 | 4238 | [
"variance",
"standard-deviation",
"mean",
"standard-error"
] |
11531 | 1 | null | null | 7 | 3717 | I am looking at time-series data in foreign exchange and bond markets (to test for reversion on extreme moves). Unfortunately, "tick" data, i.e. high-frequency data, is prone to many problems, which can obviously mess with the analysis significantly. I'd like to know which R library can help with the following types of fairly frequent data-cleaning problems:
1) one spike:

This is typically created when one market maker prints a wrong quote in one tick, but there would have been no tradability at that price because it lasted for a split second. I'd like to eliminate the spike (but only if there is only one (or maybe 2) prints)
2) bid ask gapping:

In this case the market is fairly illiquid and the data algorithm is jumping between bids and asks (in this case 2bps wide) causing this weird cloud.
Where should I start cleaning this stuff, obviously trying to throw out the least amount of real data? I realise that the maxim of "look at the data" applies here, but when you're looking at 1000 series, each with 100 days of data, you can see how this quickly becomes impractical, so I need some automated help. I'll also look at Python methods if they're available or better.
| High frequency data series cleaning in R | CC BY-SA 3.0 | null | 2011-06-03T19:02:33.940 | 2022-01-08T02:26:32.800 | 2011-06-03T19:07:57.977 | 4705 | 4705 | [
"r",
"finance"
] |
11532 | 1 | null | null | 4 | 772 | I have a number of regular daily measurements in a MySQL database that I'd like to manipulate using R. When it's returned from RMySQL, it looks like this:
```
> memdata
date vsize
1 2011-04-22 3535.178
2 2011-04-23 5680.516
3 2011-04-24 5468.914
4 2011-04-25 4761.044
5 2011-04-26 4403.515
6 2011-04-27 4459.155
7 2011-04-28 4889.884
8 2011-04-29 5290.908
9 2011-04-30 5370.952
> str(memdata)
'data.frame': 9 obs. of 2 variables:
$ date : chr "2011-04-22" "2011-04-23" "2011-04-24" "2011-04-25" ...
$ vsize: num 3535 5681 5469 4761 4404 ...
```
Since many of the time series libraries expect `ts` objects, I'd like to convert the data frame into one, but there doesn't seem to be a straightforward way of doing that. [This site](http://www.stat.pitt.edu/stoffer/tsa2/R_time_series_quick_fix.htm) has a lot of good examples, but none that deal with daily data.
Any help would be greatly appreciated!
| Importing time series from SQL base into R | CC BY-SA 3.0 | null | 2011-06-03T19:34:49.813 | 2011-06-03T23:37:38.850 | 2011-06-03T20:31:43.463 | null | 2659 | [
"r",
"time-series"
] |
11533 | 2 | null | 11531 | 7 | null | There's a package for that. Check out [RTAQ](http://cran.r-project.org/web/packages/RTAQ/index.html).
Small plug: there's a [quantitative finance stack exchange](https://quant.stackexchange.com/) you may be interested in.
| null | CC BY-SA 3.0 | null | 2011-06-03T19:35:01.857 | 2011-06-03T19:35:01.857 | 2017-04-13T12:46:23.127 | -1 | 1657 | null |
11534 | 2 | null | 11532 | 1 | null | Unless I missed something, you want to convert your data.frame into a suitable time-indexed series of measurements. In this case, you can use the [zoo](http://cran.r-project.org/web/packages/zoo/index.html) package as follows:
```
> library(zoo)
> memdata.ts <- with(memdata, zoo(vsize, date))
> str(memdata.ts)
‘zoo’ series from 2011-04-22 to 2011-04-30
Data: Factor w/ 9 levels "3535.178","4403.515",..: 1 9 8 4 2 3 5 6 7
Index: Factor w/ 9 levels "2011-04-22","2011-04-23",..: 1 2 3 4 5 6 7 8 9
```
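Note that in the `str()` output above both the data and the index came through as factors (depending on how the columns were imported), which will break numeric operations. A sketch of a more robust conversion, reconstructing a slice of the question's data frame with the same column names, is:

```
library(zoo)

# Reconstruct a slice of the question's data frame
memdata <- data.frame(date  = c("2011-04-22", "2011-04-23", "2011-04-24"),
                      vsize = c(3535.178, 5680.516, 5468.914),
                      stringsAsFactors = FALSE)

# Coerce the index to Date (and the data to numeric) before building the zoo
# object, so neither ends up as a factor
memdata.ts <- with(memdata, zoo(as.numeric(vsize), as.Date(date)))
str(memdata.ts)
```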
| null | CC BY-SA 3.0 | null | 2011-06-03T19:55:36.677 | 2011-06-03T19:55:36.677 | null | null | 930 | null |
11535 | 2 | null | 11532 | 3 | null | Here's a better reference for that kind of stuff:
[http://cran.r-project.org/web/packages/zoo/index.html](http://cran.r-project.org/web/packages/zoo/index.html)
Take a look at the vignettes.
Edit 1 =========================================
To answer chl's question, I would do the following:
```
library(zoo)
memdata.zoo <- read.zoo(memdata)
```
However, the reason that I pointed to the vignettes is because it is required reading. Trust me, I've screwed up enough code that I am well aware that I don't yet fully understand everything in those articles. There's a lot of subtle stuff in there.
Also, if you use functions like `stl` and `decompose`, be careful to notice that what they want as a `frequency` and what `zoo` wants as a `frequency` may not be what you expect. You'll simply have to play around with it, and refer back to the vignettes.
| null | CC BY-SA 3.0 | null | 2011-06-03T19:57:29.363 | 2011-06-03T23:37:38.850 | 2011-06-03T23:37:38.850 | 2775 | 2775 | null |
11536 | 2 | null | 11531 | 4 | null | To detect an anomaly, you need a model which provides an expectation. Intervention detection answers the question "What is the probability of observing what I observed, before I observed it?" I suggest that you focus on shorter time series and use an automatic modeling algorithm that forms an ARIMA model by separating signal and noise. This ARIMA model can then be used to identify the "unusual". Time-series methods can be used to alert users that the underlying activity has significantly changed. The problem is that you can't catch an outlier without a model (at least a mild one) for your data; else how would you know that a point violated that model? In fact, the process of growing understanding and of finding and examining outliers must be iterative. This isn't a new thought. Bacon, writing in Novum Organum about 400 years ago, said: "Errors of Nature, Sports and Monsters correct the understanding in regard to ordinary things, and reveal general forms. For whoever knows the ways of Nature will more easily notice her deviations; and, on the other hand, whoever knows her deviations will more accurately understand Nature."
| null | CC BY-SA 3.0 | null | 2011-06-03T20:01:00.717 | 2011-06-03T20:01:00.717 | null | null | 3382 | null |
11537 | 1 | 11538 | null | 3 | 430 | I have the observations $X(n)$, where $X(n)$ is the realization of a binomial random variable with probability of success $p(n)$ and $Y(n)$ trials. The observations are independent across $n$. I would like to test the null hypothesis H0: $p(1)=p(2)=\cdots=p(N)=0.5$. Is there a standard recommended test? One approach would be to perform a multiple comparison test with a correction of the significance level, but I wonder if other methods would be possible. If $Y(n)=$const, I could have used a goodness-of-fit test, but this doesn't apply here. Suggestions welcome!
| Tests on binomial distribution | CC BY-SA 3.0 | null | 2011-06-03T20:54:23.293 | 2011-06-03T21:41:23.837 | 2011-06-03T21:17:38.790 | null | 30 | [
"hypothesis-testing",
"binomial-distribution",
"goodness-of-fit"
] |
11538 | 2 | null | 11537 | 4 | null | This is a question of testing if several proportions are equal and equal to a specific value. This is quite standard, and you can do this by a likelihood-ratio test or a [$\chi^2$-test](http://en.wikipedia.org/wiki/Pearson%27s_chi-square_test). In R, the $\chi^2$-test can be computed using [prop.test](http://stat.ethz.ch/R-manual/R-patched/library/stats/html/prop.test.html), and you can specify that you want the vector of proportions to be equal to the vector $(0.5, \ldots, 0.5)$. The computations are, however, not complicated.
| null | CC BY-SA 3.0 | null | 2011-06-03T21:41:23.837 | 2011-06-03T21:41:23.837 | null | null | 4376 | null |
11539 | 1 | 11540 | null | 8 | 2005 | I found a [list of statistical tests along with practical guidance on Wikipedia](http://en.wikipedia.org/wiki/Statistical_hypothesis_testing#Common_test_statistics).
Can anyone point me to something similar, but in a textbook form?
I'm interested in particular in practical guidance (along the lines of "should have n>30 for Z-test")
| Textbook with list of hypothesis tests and practical guidance on use | CC BY-SA 3.0 | null | 2011-06-03T22:06:18.400 | 2011-06-04T21:24:33.003 | 2011-06-04T03:41:33.923 | 183 | 511 | [
"hypothesis-testing",
"references"
] |
11540 | 2 | null | 11539 | 7 | null | [Statistical Rules of Thumb](http://vanbelle.org/) (Wiley, 2002), by van Belle, has a lot of useful rules of thumb for applied statistics.
| null | CC BY-SA 3.0 | null | 2011-06-03T22:48:52.627 | 2011-06-03T22:48:52.627 | null | null | 930 | null |
11541 | 1 | 11542 | null | 9 | 22547 | Let's say there is a population of measurements X, where 50% of those X = 1 and the other 50% = 0.
Therefore, the population mean = 0.5.
Given a random sample of size n, how do you determine the SE of the sample mean?
Or, in layman's terms, if you flip a coin n times, how far can you expect to deviate from 50% heads and 50% tails?
| How to calculate SE for a binary measure, given sample size n and known population mean? | CC BY-SA 3.0 | null | 2011-06-03T23:01:34.050 | 2011-06-04T10:46:51.123 | 2011-06-04T10:46:51.123 | null | 3443 | [
"standard-error",
"binary-data"
] |
11542 | 2 | null | 11541 | 11 | null | Each outcome may be thought of as a Bernoulli trial with success probability $p$. A ${\rm Bernoulli}(p)$ random variable has mean $p$ and variance $p(1-p)$. Therefore the average of $n$ independent ${\rm Bernoulli}(p)$ random variables also has mean $p$, and it has variance $p(1-p)/n$, which is typically estimated by $\hat{p}(1 - \hat{p})/n$. So, in your example the standard error of your mean estimate is $\sqrt{0.5 \cdot 0.5 / n} = 1/\sqrt{4n}$.
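A quick sanity check of this by simulation (the choice of n = 100 is arbitrary):

```
set.seed(1)
n <- 100
p <- 0.5

# Analytic SE of the sample mean: sqrt(p(1-p)/n) = 1/sqrt(4n) when p = 0.5
se.analytic <- sqrt(p * (1 - p) / n)

# Compare against the spread of simulated sample means (n coin flips each)
means <- replicate(10000, mean(rbinom(n, 1, p)))
se.analytic   # 0.05
sd(means)     # close to 0.05
```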
| null | CC BY-SA 3.0 | null | 2011-06-03T23:08:25.943 | 2011-06-03T23:08:25.943 | null | null | 4856 | null |
11543 | 1 | 11550 | null | 3 | 112 | I've taken a few probability classes and now understand how to calculate some statistical measures like mean and confidence intervals. What I don't know is the what, when, and why of using these measures for specific situations. I'm hoping to put together a good collection of each of these measures, what they're used for, and what situations these are good to use. Specifically I'm looking for these (but not limited to):
- Mean (average)
- Standard Deviation
- Variance
- Confidence Intervals
- Median
| How do I tell when and why to use specific statistical measures? | CC BY-SA 3.0 | null | 2011-06-02T18:28:17.337 | 2019-01-03T21:36:33.593 | 2019-01-03T21:36:33.593 | 11887 | 4857 | [
"descriptive-statistics",
"intuition"
] |
11544 | 1 | null | null | 10 | 8267 | >
Is there a standard (or best) method for testing when a given time-series has stabilized?
---
### Some motivation
I have a stochastic dynamic system that outputs a value $x_t$ at each time step $t \in \mathbb{N}$. This system has some transient behavior until time step $t^*$ and then stabilizes around some mean value $x^*$ with some error. None of $t^*$, $x^*$, or the error are known to me. I am willing to make some assumptions (like Gaussian error around $x^*$, for instance), but the fewer a priori assumptions I need, the better. The only thing I know for sure is that there is only one stable point that the system converges towards, and that the fluctuations around the stable point are much smaller than the fluctuations during the transient period. The process is also monotonic-ish: I can assume that $x_0$ starts near $0$ and climbs towards $x^*$ (maybe overshooting by a bit before stabilizing around $x^*$).
The $x_t$ data will be coming from a simulation, and I need the stability test as a stopping condition for my simulation (since I am only interested in the transient period).
### Precise question
Given only access to the time value $x_0 ... x_T$ for some finite $T$, is there a method to say with reasonable accuracy that the stochastic dynamic system has stabilized around some point $x^*$? Bonus points if the test also returns $x^*$, $t^*$, and the error around $x^*$. However, this is not essential since there are simple ways to figure this out after the simulation has finished.
---
### Naive approach
The naive approach that first pops into my mind (which I have seen used as win conditions for some neural networks, for instance) is to pick two parameters $T$ and $E$; then if for the last $T$ timesteps there are no two points $x$ and $x'$ such that $x' - x > E$, we conclude we have stabilized. This approach is easy, but not very rigorous. It also forces me to guess at what good values of $T$ and $E$ should be.
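A minimal sketch of this naive rule (the trajectory and parameter values are invented for illustration):

```
# Naive stopping rule: stop when the range of the last T values is below E
stabilized <- function(x, T = 50, E = 0.01) {
  n <- length(x)
  if (n < T) return(FALSE)
  recent <- x[(n - T + 1):n]
  (max(recent) - min(recent)) <= E
}

# Example: a noisy trajectory converging to 1
set.seed(7)
traj <- 1 - exp(-(1:500) / 50) + rnorm(500, sd = 0.001)
stabilized(traj[1:60], T = 50, E = 0.01)   # still in the transient: FALSE
stabilized(traj,       T = 50, E = 0.01)   # after convergence: TRUE
```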
It seems like there should be a better approach that looks back at some number of steps in the past (or maybe somehow discounts old data), calculates the standard error from this data, and then tests if for some other numbers of steps (or another discounting scheme) the time-series has not been outside this error range. I included such a slightly less naive but still simple strategy as an [answer](https://stats.stackexchange.com/questions/11544/testing-for-stability-in-a-time-series/11750#11750).
---
Any help, or references to standard techniques are appreciated.
### Notes
I also cross posted this question as-is to [MetaOptimize](http://metaoptimize.com/qa/questions/6749/testing-for-stability-in-a-time-series) and in a more simulation-flavored description to [Computational Science](https://scicomp.stackexchange.com/q/627/535).
| Testing for stability in a time-series | CC BY-SA 3.0 | null | 2011-06-04T00:37:18.303 | 2013-10-03T15:56:00.077 | 2020-06-11T14:32:37.003 | -1 | 4872 | [
"time-series",
"machine-learning"
] |
11545 | 2 | null | 11530 | 3 | null | The simplest approach is to use weighted means. Given a set of $n$ experiments that had mean and standard error $(x_i,\sigma_i)$, we can calculate the overall mean and standard error $(x,\sigma)$ as follows:
$\sigma^2 = \frac{1}{(\sum_{i = 1}^n \frac{1}{\sigma_i^2})}$
and
$x = \sigma^2 (\sum_{i = 1}^n \frac{x_i}{\sigma_i^2})$
This is the maximum likelihood estimator of the combined mean under the assumption that the sub-experiments are independent and normally distributed with the same mean. This is often a valid assumption (since most distributions tend towards normal by the central limit theorem), and, for instance, this is the most popular way to quickly combine results of experiments measuring the same quantity in fields like physics.
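A small R sketch of these two formulas (the function name and example numbers are mine):

```
# Inverse-variance weighted combination of site means
# x: vector of site means, se: vector of their standard errors
combine.means <- function(x, se) {
  w <- 1 / se^2
  sigma2 <- 1 / sum(w)              # combined variance
  list(mean = sigma2 * sum(w * x),  # combined mean
       se   = sqrt(sigma2))         # combined standard error
}

# Example: three reported (mean, SE) pairs from one site
combine.means(x = c(1.8, 2.1, 1.9), se = c(0.2, 0.3, 0.25))
```

The combined SE is always smaller than the smallest of the input SEs, which is the sense in which the reported variation is kept rather than discarded.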
| null | CC BY-SA 3.0 | null | 2011-06-04T01:10:08.930 | 2011-06-04T01:10:08.930 | null | null | 4872 | null |
11546 | 1 | 11563 | null | 2 | 5118 | I have a series of (x1,y1) points. I'm using a 3rd party software tool to which I feed these points. The tool then provides a mechanism for me to get back a series of (x2,y2) points that are on a Gaussian curve that has been fit to the data.
I'd like to confirm that the points I'm getting back are correct (because, frankly, sometimes they don't look like they are.)
What would be an easy on-line or Excel procedure that would allow me to enter the X1, Y1 series and then get back a series of X2, Y2 points, plot both series, and maybe even see the s.d.?
It's been 20+ years since I did any stats, so I'm just "cook-booking" this with no understanding of what I'm trying to do.
EDIT
This is a plot of intensity by wavelength. The X-axis is the colors of a rainbow, in the familiar ROYGBIV order. The Y-axis is the measured intensity of each particular color.
The help page from the vendor is here: [http://www.lohninger.com/helpcsuite/calcgaussfit.htm](http://www.lohninger.com/helpcsuite/calcgaussfit.htm)
I call the library with each x1, y1 data point. I then call the method above. I then call the library again with a series of x values (or maybe the x1 series itself), retrieving the y2 values on the fitted curve. I also retrieve the sd of the fit.
EDIT 2
Below is a link to an image of the dataset and Gaussian fit that has me concerned. I think that the peak of the fitted curve is too low.
| Tool to confirm Gaussian fit | CC BY-SA 3.0 | null | 2011-06-04T02:41:34.177 | 2016-12-22T15:34:16.767 | 2011-06-04T21:59:14.813 | null | 2767 | [
"least-squares",
"curve-fitting"
] |
11547 | 1 | 11558 | null | 3 | 580 | I am facing a difficult challenge, given my very low skills at text mining… Basically, I have a list of approx. 200 individuals described in a plain text file following a simple structure:
>
N: (name)
Y: (year of birth)
S: (sibling)
N: (name)
Y: (year of birth)
S: (sibling)
[etc.]
Here's the catch:
- each individual can have missing data, in which case the line is omitted;
- multiple siblings, in which case his/her entry has more than one S: line.
The only real constants in the text file are:
- each individual is separated by a blank file from each other;
- all lines feature the appropriate N:, Y: or S: prefix descriptor.
Is there a method, using pretty much anything from Excel (!) to Stata to R or even [Google Refine](http://code.google.com/p/google-refine/) or [Wrangler](http://vis.stanford.edu/wrangler/), to turn that organised chaos into a standard dataset, by which I mean, one column per descriptor, with `S1, S2... Sn` columns for siblings?
If that is the wrong forum, please redirect me. I just figured that the stats crew would be the most acquainted with text mining.
| Data transposition from 'clustered rows' into columns | CC BY-SA 3.0 | null | 2011-06-04T02:47:59.060 | 2011-07-18T02:59:08.477 | null | null | 3582 | [
"text-mining"
] |
11548 | 1 | null | null | 5 | 1705 | Given the following dataset for two articles on my site:
```
Article 1
2/1/2010 100
2/2/2010 80
2/3/2010 60
Article 2
2/1/2010 20000
2/2/2010 25000
2/3/2010 23000
```
where column 1 is the date and column 2 is the number of pageviews for an article. What is a basic acceleration calculation that can be done to determine if an article is trending upwards or downwards over the days I have pageview data for it?
For example, looking at the numbers I can see Article 1 is trending downwards. How can that be reflected in an algorithm most easily?
Thanks!
| What is the best way to determine if pageviews are trending upward or downward? | CC BY-SA 3.0 | null | 2011-06-04T03:44:05.063 | 2011-06-08T09:01:08.560 | 2011-06-04T04:25:48.267 | 183 | 4875 | [
"statistical-significance",
"trend"
] |
11549 | 2 | null | 11539 | 4 | null | In general, I would have a look at statistics books in your domain of application (e.g., whether it is psychology, ecology, medical, sociology, etc.).
Such books tend to have less rigour.
Instead, such books often try to give useful decision rules to assist researchers where statistics is not the main interest of the researcher.
Here are a few suggestions coming from a behavioural and social sciences perspective.
### Multivariate books
If you want practical tips on techniques like multiple regression, factor analysis, PCA, and so forth, these books are options:
- Hair et al Multivariate data analysis: This has very few formulas but lots of flow charts and simple decision rules designed to assist less-mathematically inclined social scientists implement multivariate stats.
- Tabachnick and Fidell Multivariate Statistics. This arguably has more rigour than Hair et al, but it does have sections devoted to giving practical advice.
### SPSS Cookbook
- I know a lot of psychology research students who are looking for a cookbook approach (perhaps just to get themselves started) to analysing their data turn to the SPSS Survival Manual. However, this is SPSS centric and more about tips on implementing analyses in SPSS.
| null | CC BY-SA 3.0 | null | 2011-06-04T03:53:02.130 | 2011-06-04T03:53:02.130 | null | null | 183 | null |
11550 | 2 | null | 11543 | 3 | null |
### General Advice
- Start analysing data and reading the analyses of other researchers.
This should assist you in mapping statistical techniques onto data analytic problems.
- Read some applied statistics textbooks related to a domain that you are interested in.
- If you have specific questions (e.g., when would you report the SD versus the variance? or when would you use the median rather than the mean), do a search and see if a question already exists on the site. If no such question exists, ask it here.
### Mean, SD, Var, CI, Median
I'd broadly classify the five things you mentioned into
- Measures of central tendency: Mean, Median
There are questions on this site that discuss when to use mean versus median such as this one.
- Measures of spread: SD, Var
- Measures of confidence in parameter estimation: CI
| null | CC BY-SA 3.0 | null | 2011-06-04T04:23:06.383 | 2011-06-04T04:23:06.383 | 2017-04-13T12:44:28.873 | -1 | 183 | null |
11551 | 1 | 11554 | null | 38 | 34616 | I want to browse a .rda file (R dataset). I know about the `View(datasetname)` command. The default R.app that comes for Mac does not have a very good browser for data (it opens a window in X11). I like the RStudio data browser that opens with the `View` command. However, it shows only 1000 rows and omits the remaining. (UPDATE: RStudio viewer now shows all rows) Is there a good browser that will show all rows in the data set and that you like/use.
| Is there a good browser/viewer to see an R dataset (.rda file) | CC BY-SA 3.0 | null | 2011-06-04T04:45:39.070 | 2017-04-10T20:54:37.823 | 2016-08-10T23:43:59.497 | 183 | 4820 | [
"r"
] |
11552 | 2 | null | 11548 | 4 | null |
### General thoughts about pageviews
I think there is a fair amount of domain specific knowledge that can be brought to bear on page views.
From examining my Google Analytics statistics from particular blog posts, I observe the following characteristics:
- Large initial spike in pageviews when an article is first posted related to hits coming from RSS feeds, links from syndication sites, prominence on home page, spikes related to newness and social media.
This effect tends to decline rapidly, but seems to still provide some boost for a few weeks.
- Day of the week effects. At least in my blog on statistics, I get a consistent day of the week effect. There is a lull on the weekend.
The implication is that if I were trying to understand meaningful trends in an article, I would be looking at changes from week to week rather than day to day.
- Seasonal effects: I also get more subtle seasonal effects presumably related to when people are working or holidays and for some posts more than others when university students are studying or not. For example, the week between Christmas and New Years is very quiet.
- After the initial spike, I find most traffic is driven by Google searches, although a few posts derive considerable traffic from links from other blogs or websites. Links from Social media and blog posts tend to lead to abrupt spikes in page views and depending on the medium may or may not lead to a consistent stream over time.
### Implications for identifying upward or downward trends in a page
- The above analysis provides a general model that I use to understand pageviews on my own blog posts.
It is a theory of some of the major factors that influence page views, at least on my site and from my experience.
I think having a model like this, or something similar, helps to refine the research question.
- For instance, presumably you are interested in only some forms of upward and downward trends.
Trends that operate on the whole site such as day of the week and seasonal trends are probably not the main focus.
Likewise, trends related to the initial spike in pageviews and subsequent decline following a posting are relatively obvious and may not be of interest (or maybe they are).
- There is also an issue related to the time frame and functional form of trending.
A page may be gradually increasing in weekly pageviews due to gradual improvements in its positioning in Google's algorithms or general popularity of the topic of the post.
Alternatively, a post may experience an abrupt increase as a result of it being linked to by a high profile website.
- Another issue relates to thresholds for defining trending.
This includes both statistical significance and effect sizes.
I.e., is the trend statistically significantly different from random variation that you might see, and is the change worthy of your attention.
### Simple strategy for detecting interesting trends in pageviews
I'm not an expert in time series analysis, but here are a few thoughts about how I might implement such a tool.
- I'd compute a table that compares pageviews for the preceding 28 days with the 28 days prior to the most recent 28 days.
You could make this more advance by making time frame a variable quantity (e.g., 7 days, 14 days, 56 days, etc.).
The more popular the page (and the site in general), the more likely that you are going to have enough page views in a period to do meaningful comparisons.
Each row of the table would be a page on your site.
You'd start with three columns (page title, current page views, comparison page views)
- Filter out pages that did not exist for the entire comparison period.
- Add columns that assist in the assessment of the effect size of any change, and the statistical significance of any change.
A simple summary statistic to use would be percentage change from comparison to current. You could also include raw change from comparison to current.
Perhaps a chi-square could be used to provide a rough quantification of the significance of any change (although I'm aware that the assumption of independence of observations is often compromised, which also raises the issue of whether you are using pageviews or unique page views).
- I'd then create a composite of the effect size and the significance test to represent "interestingness".
- You could also adopt a cut-off for when a change is sufficiently interesting, and of course classify it as upward or downward.
- You could then apply sorting and filtering tools to answer particular questions.
- In terms of implementation, this could all be done using R and data exported from tools like Google Analytics. There are also some interfaces between R and Google Analytics, but I haven't personally tried them.
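A rough R sketch of the comparison-table idea above (all names, toy counts, and cut-offs are my own choices):

```
# Toy data: pageviews per page for current and comparison 28-day windows
pages <- data.frame(title   = c("post-a", "post-b", "post-c"),
                    current = c(520, 300, 90),
                    compare = c(400, 310, 150))

# Effect size: percentage change from comparison to current window
pages$pct.change <- 100 * (pages$current - pages$compare) / pages$compare

# Rough significance: chi-squared test of current vs comparison counts
pages$p.value <- apply(pages[, c("current", "compare")], 1, function(z) {
  prop.test(z[1], sum(z))$p.value   # H0: views split 50/50 across windows
})

# Flag "interesting" pages: big and significant change (arbitrary cut-offs)
subset(pages, abs(pct.change) > 20 & p.value < 0.05)
```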
| null | CC BY-SA 3.0 | null | 2011-06-04T05:08:00.960 | 2011-06-08T09:01:08.560 | 2017-05-23T12:39:26.167 | -1 | 183 | null |
11553 | 1 | 11559 | null | 11 | 2937 | The classical F-test for subsets of variables in multilinear regression has the form
$$
F = \frac{(\mbox{SSE}(R) - \mbox{SSE}(B))/(df_R - df_B)}{\mbox{SSE}(B)/df_B},
$$
where $\mbox{SSE}(R)$ is the sum of squared errors under the 'reduced' model, which nests inside the 'big' model $B$, and $df$ are the degrees of freedom of the two models. Under the null hypothesis that the extra variables in the 'big' model have no linear explanatory power, the statistic is distributed as an F with $df_R - df_B$ and $df_B$ degrees of freedom.
What is the distribution, however, under the alternative? I assume it is a non-central F (I hope not doubly non-central), but I cannot find any reference on what exactly the non-centrality parameter is. I am going to guess it depends on the true regression coefficients $\beta$, and probably on the design matrix $X$, but beyond that I am not so sure.
| What is the power of the regression F test? | CC BY-SA 3.0 | null | 2011-06-04T05:12:33.510 | 2011-06-06T05:43:34.767 | 2011-06-06T05:43:34.767 | 2116 | 795 | [
"regression",
"hypothesis-testing",
"statistical-power",
"non-central",
"f-distribution"
] |
11554 | 2 | null | 11551 | 22 | null | Here are a few basic options, but like you, I can't say that I'm entirely happy with my current system.
Avoid using the viewer:
- I.e., Use the command line tools to browse the data
- head and tail for showing initial and final rows
- str for an overview of variable types
- dplyr::glimpse() for an overview of variable types of all columns
- basic extraction tools like [,1:5] to show the first five columns
- Use a pager to display and navigate the data (e.g., page(foo, "print")) possibly
in conjunction with some variable extraction tools. This works fairly well on Linux, which uses less. I'm not sure how it goes on Windows or Mac.
Export to spreadsheet software:
- I quite like browsing data in Excel when it's set up as a table.
It's easy to sort, filter, and highlight. See here for the function that I use to open a data.frame in a spreadsheet.
| null | CC BY-SA 3.0 | null | 2011-06-04T05:32:22.973 | 2016-08-10T12:35:19.470 | 2016-08-10T12:35:19.470 | 79643 | 183 | null |
11555 | 2 | null | 11539 | 2 | null |
### Biometry,
by Sokal and Rohlf
has a fairly comprehensive table with such information on the inside of the front and back covers, but these tables apparently didn't make it (perhaps due to placement) into Google's digitized version.
| null | CC BY-SA 3.0 | null | 2011-06-04T05:34:38.423 | 2011-06-04T05:34:38.423 | 2020-06-11T14:32:37.003 | -1 | 1381 | null |
11556 | 2 | null | 11435 | 2 | null | For the case $N=3,\;x_i\sim\mathcal{N}(\mu,\sigma),\;i\in\{1,2,3\}$, I calculated the CDF
$P(y\leq\alpha)=\text{exp}\Big(-\frac{1}{2}(3-\alpha)\big(\frac{\mu}{\sigma}\big)^2\Big)\sqrt{\alpha/3}\;,\quad y=\frac{(x_1+x_2+x_3)^2}{x_1^2+x_2^2+x_3^2},\quad 0\leq\alpha\leq N$.
The other cases $N\neq 3$ are much harder to solve (at least for me).
An idea that helped the solution was rotating the coordinate system so that the former $(x_1, x_2, x_3)\propto(1,1,1)$-direction is aligned to an axis from the new coordinate system, e.g. $(z_1, z_2, z_3)\propto(0,0,1)$. Then
$y=\frac{(x_1+x_2+x_3)^2}{x_1^2+x_2^2+x_3^2}=\frac{3 z_3^2}{z_1^2+z_2^2+z_3^2}$,
which makes integration easier.
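For the central case $\mu = 0$ the expression reduces to $P(y\leq\alpha)=\sqrt{\alpha/3}$, which is easy to check by simulation (a sketch):

```
set.seed(42)

# Simulate y = (x1+x2+x3)^2 / (x1^2+x2^2+x3^2) for x_i ~ N(0, 1)
sim <- replicate(100000, {
  x <- rnorm(3)
  sum(x)^2 / sum(x^2)
})

alpha <- 1.5
mean(sim <= alpha)   # empirical CDF at alpha
sqrt(alpha / 3)      # formula with mu = 0
```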
| null | CC BY-SA 3.0 | null | 2011-06-04T05:56:11.803 | 2011-06-04T06:29:01.780 | 2011-06-04T06:29:01.780 | 4360 | 4360 | null |
11557 | 1 | 11566 | null | 2 | 370 | I am wondering whether orthogonal transforms, such as the Hadamard transform, change the statistics of i.i.d. generated data.
Consider, i.i.d. generated data $\mathbf{X}$ of length $N$,
$$\mathbf{X}_P=\mathbf{P}\mathbf{X}$$
where $\mathbf{P}$ is an orthogonal projector of size $N \times N$ and $\mathbf{X}_P$ is the projected data of length $N$.
It is clear that $\mathbf{P}$ doesn't change the covariance matrix of i.i.d. data, but I am doubtful whether the projected data are still i.i.d. and whether the shape of the distribution is the same: e.g., if $\mathbf{X}$ is generated i.i.d. from a Laplacian, is $\mathbf{X}_P$ still i.i.d. Laplacian?
Thanks a lot in advance
| Do orthogonal projectors change the statistics of i.i.d. generated data? | CC BY-SA 3.0 | null | 2011-06-04T07:02:14.690 | 2011-06-04T15:07:10.027 | 2011-06-04T10:19:08.300 | 4770 | 4770 | [
"distributions"
] |
11558 | 2 | null | 11547 | 3 | null |
## Load data
Assuming `fd.txt` contains the following
```
N: toto
Y: 2000
S: tata
N: titi
Y: 2004
S: tutu
N: toto
Y: 2000
S: tata2
N: toto
Y: 2000
S: tata3
N: tete
Y: 2002
S: tyty
N: tete
Y: 2002
S: tyty2
```
here is one solution in R:
```
tmp <- scan("fd.txt", what="character")
res <- data.frame(matrix(tmp[seq(2, length(tmp), by=2)], nc=3, byrow=TRUE))
```
The first command reads everything into a single character vector, skipping blank lines; then we remove every odd element ("N:", "S:", "Y:"); finally, we arrange them in a data.frame (this is a convenient way to make each column a factor).
The output is
```
X1 X2 X3
1 toto 2000 tata
2 titi 2004 tutu
3 toto 2000 tata2
4 toto 2000 tata3
5 tete 2002 tyty
6 tete 2002 tyty2
```
Please note that if you have some GNU utilities on your machine, you can use `awk`
```
sed 's/[NYS]: //' fd.txt | awk 'ORS=(FNR%4)?FS:RS' > res.txt
```
The first command uses `sed` to filter the descriptor (replace by blank); then `awk` will produce its output (Output Record Separator) as follows: arrange each record using default Field Separator (space), and put a new Record Separator (new line) every 4 fields. Of note, we could filter the data using `awk` directly, but I like separating tasks a little.
The result is written in `res.txt` and can be imported into R using `read.table()`:
```
toto 2000 tata
titi 2004 tutu
toto 2000 tata2
toto 2000 tata3
tete 2002 tyty
tete 2002 tyty2
```
## Process and transform data
I didn't find a very elegant solution in R, but the following works:
```
library(plyr)
tmp <- ddply(res, .(X1,X2), mutate, S=list(X3))[,-3]
resf <- tmp[!duplicated(tmp[,1:2]),]
```
Then, `resf` has three columns, where column `S` lists the levels of the `X3` factor (siblings' names). So, instead of putting siblings in different columns, I concatenated them in a list. In other words,
```
as.character(resf$S[[1]])
```
gives you the name of `tete`'s siblings, which are `tyty` and `tyty2`.
I'm pretty sure there's a better way to do this with `plyr`, but I didn't manage to get a nice solution for the moment.
---
With repeated "S:" motif, here is one possible quick and dirty solution. Say `fd.txt` now reads
```
N: toto
Y: 2000
S: tata
S: tata2
S: tata3
N: titi
Y: 2004
S: tutu
N: tete
Y: 2002
S: tyty
S: tyty2
```
then,
```
tmp <- read.table("fd.txt")
tmp$V1 <- gsub(":","",tmp$V1)
start <- c(which(tmp$V1=="N"), nrow(tmp)+1)
max.col <- max(diff(start))
res <- matrix(nr=length(start)-1, nc=max.col)
for (i in 1:(length(start)-1))
res[i,1:diff(start[i:(i+1)])] <- t(tmp[start[i]:(start[i+1]-1),])[2,]
res <- data.frame(res)
colnames(res) <- c("name","year",paste("S",1:(max.col-2),sep=""))
```
will produce
```
name year S1 S2 S3
1 toto 2000 tata tata2 tata3
2 titi 2004 tutu <NA> <NA>
3 tete 2002 tyty tyty2 <NA>
```
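Outside R, the same record-to-row grouping can be sketched in a few lines (Python here, purely for comparison; the field names are the same as in `fd.txt` above):

```python
# Parse "N:/Y:/S:" records: each "N:" starts a new person; "S:" lines may repeat.
text = """N: toto
Y: 2000
S: tata
S: tata2
S: tata3
N: titi
Y: 2004
S: tutu
N: tete
Y: 2002
S: tyty
S: tyty2"""

records = []
for line in text.splitlines():
    key, _, value = line.partition(": ")
    if key == "N":                    # a new record starts here
        records.append({"name": value, "year": None, "siblings": []})
    elif key == "Y":
        records[-1]["year"] = value
    elif key == "S":                  # append, so repeated "S:" lines accumulate
        records[-1]["siblings"].append(value)

for r in records:
    print(r["name"], r["year"], r["siblings"])
```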
| null | CC BY-SA 3.0 | null | 2011-06-04T07:33:59.703 | 2011-06-04T14:16:20.583 | 2011-06-04T14:16:20.583 | 930 | 930 | null |
11559 | 2 | null | 11553 | 9 | null | The noncentrality parameter is $\delta^{2}$, the projection for the restricted model is $P_{r}$, $\beta$ is the vector of true parameters, $X$ is the design matrix for the unrestricted (true) model, $|| x ||$ is the norm:
$$
\delta^{2} = \frac{|| X \beta - P_{r} X \beta ||^{2}}{\sigma^{2}}
$$
You can read the formula like this: $E(y | X) = X \beta$ is the vector of expected values conditional on the design matrix $X$. If you treat $X \beta$ as an empirical data vector $y$, then its projection onto the restricted model subspace is $P_{r} X \beta$, which gives you the prediction $\hat{y}$ from the restricted model for that "data". Consequently, $X \beta - P_{r} X \beta$ is analogous to $y - \hat{y}$ and gives you the error of that prediction. Hence $|| X \beta - P_{r} X \beta ||^{2}$ gives the sum of squares of that error. If the restricted model is true, then $X \beta$ already is within the subspace defined by $X_{r}$, and $P_{r} X \beta = X \beta$, such that the noncentrality parameter is $0$.
You should find this in Mardia, Kent & Bibby. (1980). Multivariate Analysis.
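The formula is easy to verify numerically. A small sketch (Python; the design matrix, $\beta$, and $\sigma$ are invented for illustration):

```python
import numpy as np

# Unrestricted model: intercept + slope; restricted model: intercept only.
x = np.arange(10.0)
X = np.column_stack([np.ones_like(x), x])     # full design matrix
Xr = X[:, :1]                                 # restricted design (intercept only)
beta = np.array([1.0, 0.5])                   # true parameters
sigma = 1.0

Pr = Xr @ np.linalg.inv(Xr.T @ Xr) @ Xr.T     # projection onto restricted subspace
mu = X @ beta                                 # E(y | X) = X beta
delta2 = np.sum((mu - Pr @ mu) ** 2) / sigma**2

# Here Pr @ mu just repeats the mean of mu, so
# delta2 = beta[1]^2 * sum((x - xbar)^2) = 0.25 * 82.5 = 20.625
print(delta2)
```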
| null | CC BY-SA 3.0 | null | 2011-06-04T10:43:08.553 | 2011-06-05T05:26:24.257 | 2011-06-05T05:26:24.257 | 1909 | 1909 | null |
11560 | 2 | null | 11544 | 6 | null | This short remark is far from complete answer, just some suggestions:
- If you have two periods of time where the behaviour is different, and by "different" I mean differences in model parameters (not relevant in this particular situation), in the mean, in the variance, or in any other expected characteristic of the time-series object ($x_t$ in your case), you can try any method that estimates the time (interval) of structural (or epidemic) change.
- In R there is the strucchange package for structural changes in linear regression models. Though it is primarily used for testing and monitoring changes in linear regression parameters, some of its statistics can be used for general structural changes in time series.
| null | CC BY-SA 3.0 | null | 2011-06-04T12:58:20.283 | 2011-06-04T12:58:20.283 | null | null | 2645 | null |
11562 | 1 | null | null | 5 | 1038 | What is a good statistical test to check for bias in judging when one judge gave extreme scores (a high score for one contestant and very low scores for the rest)? Here is the actual data from the contest:
```
contestant 1 contestant 2 contestant 3 contestant 4
judge 1 83.03 96.5 88.5 90.5
judge 2 67.15 89.9 85.36 89.85
judge 3 72.05 84.6 78.95 85
judge 4 86.95 93.3 88 94.1
judge 5 44 65.15 52.45 96.05
```
Thank you very much!
| Assessing rater bias where one rater has given one very high rating and the remainder very low ratings | CC BY-SA 3.0 | null | 2011-06-04T14:38:53.370 | 2011-06-05T15:40:09.920 | 2011-06-05T09:59:41.547 | 183 | 4880 | [
"reliability",
"agreement-statistics",
"bias"
] |
11563 | 2 | null | 11546 | 2 | null | Your fitting method uses least squares. To check it, set up four parallel columns in the spreadsheet:
- X has the x-values.
- Y has the y-values.
- Fit computes the Gaussian values (based on the x-values and three parameters).
- Residual is the difference between the y-values and the fits.
In order to compute the fit, you need to create three cells holding the three gaussian parameters. The formula for the fit must be identical to that used by the other software so you can compare your results with its. The example below names the three parameters kappa0, kappa1, and kappa2--just as in the documentation. The formula in the second row, where the x-value is in cell `A2`, is
```
=Kappa0 * EXP(-1*(A2 - Kappa1)^2 / Kappa2)
```
It is copied down to all the other rows.
This is enough to check the software's results simply by plugging in its reported values of kappa0, kappa1, and kappa2. To see whether they are correct, compute the sum of squared residuals (SSR). A formula for this uses the `SUMSQ` function ("=SUMSQ(D2:D32)" in the example). If you think a better combination of the parameters will work, plug in that new combination and see whether SSR decreases: if it goes down, the new values are better.

You can have this automated for you using Excel's "Solver" tool. Specify that you want to minimize the SSR by varying the three kappas. Start with the solution given by the other software. Solver will try systematically to improve its solution.

The same method--suitably adapted--works well for least squares, maximum likelihood, and other optimization procedures in statistics, provided the objective function is well-behaved (i.e., differentiable and convex) and you can obtain an excellent starting value. Otherwise, Solver is perfectly capable of reporting inferior solutions or failing altogether: it is wise not to use it as the sole method to solve a problem.
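If you want to double-check the Solver result outside the spreadsheet, the same SSR minimization can be sketched with SciPy (synthetic, noiseless data and made-up parameter values, just to show that the objective being minimized is the same SSR):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, kappa0, kappa1, kappa2):
    # Same formula as the spreadsheet cell: Kappa0 * EXP(-(x - Kappa1)^2 / Kappa2)
    return kappa0 * np.exp(-(x - kappa1) ** 2 / kappa2)

true = (2.5, 1.0, 0.8)                 # made-up "true" kappas
x = np.linspace(-2, 4, 31)
y = gauss(x, *true)                    # noiseless "data" generated from them

popt, _ = curve_fit(gauss, x, y, p0=(2.0, 0.5, 1.0))   # least-squares fit
ssr = np.sum((y - gauss(x, *popt)) ** 2)               # the quantity Solver minimizes
print(popt, ssr)
```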
| null | CC BY-SA 3.0 | null | 2011-06-04T14:50:58.937 | 2011-06-04T14:50:58.937 | null | null | 919 | null |
11564 | 2 | null | 11544 | 0 | null | You might consider testing backward (with a rolling window) for co-integration between `x` and the long term mean.
When `x` is flopping around the mean, hopefully the windowed Augmented Dickey Fuller test, or whatever co-integration test you choose, will tell you that the two series are co-integrated. Once you get into the transition period, where the two series stray away from each other, hopefully your test will tell you that the windowed series are not co-integrated.
The problem with this scheme is that it is harder to detect co-integration in a smaller window. And, a window that is too big, if it includes only a small segment of the transition period, will tell you that the windowed series is co-integrated when it shouldn't. And, as you might guess, there's no way to know ahead of time what the "right" window size might be.
All I can say is that you'll have to play around with it to see if you get reasonable results.
| null | CC BY-SA 3.0 | null | 2011-06-04T14:54:56.763 | 2011-06-04T14:54:56.763 | null | null | 2775 | null |
11565 | 1 | null | null | 9 | 1050 | I have an application for which I need an approximation to the lognormal sum pdf for use as part of a likelihood function. The lognormal sum distribution has no closed form, and there are a bunch of papers in signal processing journals about different approximations. I have been using one of the simplest approximations (Fenton 1960), which involves replacing a sum of lognormals with a single lognormal with matching first and second moments. This is pretty straightforward to code, but judging by the literature on the subject that has been written in the last 50 years, this may not be the best approximation for all applications. I have no intuition for how to identify which approximations will lead to the best MLE estimates.
Does anyone know if
(A) there is a different approximation I should be using for a maximum likelihood application?
(B) there is existing R code for any of the more computationally intensive approximations?
Update: For some background on the problem, see [this review](http://www.soa.org/library/proceedings/arch/2009/arch-2009-iss1-dufresne.pdf)
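For reference, the Fenton (1960) moment-matching step described above amounts to only a few lines of code (sketched in Python here, assuming independent summands):

```python
import numpy as np

def fenton_params(mu, sigma):
    """Match a sum of independent lognormals LN(mu_i, sigma_i^2) with a single
    lognormal LN(mu_z, sigma_z^2) having the same first two moments."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    m = np.sum(np.exp(mu + sigma**2 / 2))                           # E[sum]
    v = np.sum((np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2))  # Var[sum]
    sigma_z2 = np.log(1 + v / m**2)
    mu_z = np.log(m) - sigma_z2 / 2
    return mu_z, sigma_z2

# Sanity check: a "sum" of one lognormal should return its own parameters.
print(fenton_params([0.3], [0.4]))   # approximately (0.3, 0.16)
```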
| Approximating lognormal sum pdf (in R) | CC BY-SA 3.0 | null | 2011-06-04T15:05:16.417 | 2012-04-27T12:11:31.803 | 2011-06-04T21:57:35.843 | null | 4881 | [
"r",
"lognormal-distribution"
] |
11566 | 2 | null | 11557 | 2 | null | The simplest non-trivial example I can construct uses an iid 2-vector of Bernoulli variables and the projection matrix $\{\{1,1\},\{0,0\}\}$. That is, $\mathbf{P}(x_1,x_2)' = (x_1+x_2,\ 0)'$. The first component of the projected vector can take on the values 0, 1, and 2 with positive probability (it has a binomial distribution). Even when you rescale this to be an orthogonal projection, there will still be three distinct values with positive probability. Therefore, because the original Bernoulli variables can only take two distinct values, you cannot generally expect the projected components to have the same distribution as the original variables.
It may be worth making a few additional remarks:
- In most cases, any proper projection (i.e., not the identity) does change the covariance matrix. This is obvious, because the resulting covariance matrix must have a nonzero kernel, implying it is not positive definite.
- When the original distribution is normal, the components of $\mathbf{X}_P$ will be normal or degenerate (that is, constant). If the projection is proper, the components cannot possibly be iid, because the covariance is degenerate.
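The Bernoulli example can be made concrete by enumerating all four outcomes under the rescaled (orthogonal) projection onto the $(1,1)$ direction (a Python sketch, with fair Bernoulli variables for simplicity):

```python
import numpy as np
from itertools import product

# Orthogonal projection onto span{(1,1)}: P = (1/2) * [[1,1],[1,1]]
P = np.array([[0.5, 0.5], [0.5, 0.5]])

# First component of P x over all four Bernoulli outcomes (0,0), (0,1), (1,0), (1,1)
first_components = {(P @ np.array(x))[0] for x in product([0, 1], repeat=2)}
print(sorted(first_components))   # three distinct values, unlike a Bernoulli variable
```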
| null | CC BY-SA 3.0 | null | 2011-06-04T15:07:10.027 | 2011-06-04T15:07:10.027 | null | null | 919 | null |
11567 | 2 | null | 11539 | 0 | null | Could this help?
>
The following table shows general guidelines for choosing a statistical analysis. We emphasize that these are general guidelines and should not be construed as hard and fast rules.
From the UCLA Stata/SAS/R tutorial pages. I use a revised version in class.
| null | CC BY-SA 3.0 | null | 2011-06-04T15:32:06.563 | 2011-06-04T15:32:06.563 | null | null | 3582 | null |
11568 | 1 | 11599 | null | 3 | 544 | I have one data sample of a non-negative random variable $X$ with unknown distribution and a predefined expected value $y$. Is there any test able to check the null hypothesis $\mathbb{E}[X]\geq y$ or $\mathbb{E}[X]\leq y$?
Actual data samples are gathered in real time. More specifically, they are the intervals between HTTP requests coming to a web server from one client. Pearson's test showed that this variable is not normally distributed.
| How to check hypothesis about estimation of random variable with unknown distribution? | CC BY-SA 3.0 | null | 2011-06-04T17:29:53.320 | 2011-06-06T17:19:51.677 | 2011-06-05T11:36:40.143 | 4883 | 4883 | [
"hypothesis-testing",
"expected-value"
] |
11569 | 2 | null | 11562 | 1 | null | You won't be able to demonstrate bias, but you can try to establish whether the 96.05 is an outlier using Dixon's Test for Outliers. If these judges went on to judge these same contestants on another task/domain, you could test for the replicability of this unusual result for Judge 5 and Contestant 4.
| null | CC BY-SA 3.0 | null | 2011-06-04T17:34:35.490 | 2011-06-04T17:34:35.490 | null | null | 2669 | null |
11570 | 2 | null | 11562 | 2 | null | You could measure agreement in ratings across judges with [inter-rater reliability](http://en.wikipedia.org/wiki/Inter-rater_reliability) statistics. This would tell you whether the judging of contestants is consistent across judges.
There may be a more sophisticated way of doing this, but I might naively try dropping each of the five judges individually and looking at how the reliability changes.
But with such a small sample, I don't think you'll get particularly strong answers whatever you do.
| null | CC BY-SA 3.0 | null | 2011-06-04T17:45:48.757 | 2011-06-04T17:45:48.757 | null | null | 3874 | null |
11571 | 2 | null | 11551 | 13 | null | RStudio (RStudio.org) has a built-in data frame viewer that's pretty good. Luckily it's read-only. RStudio is very easy to install once you've installed a recent version of R. If using Linux first install the r-base package.
| null | CC BY-SA 3.0 | null | 2011-06-04T18:24:55.150 | 2011-06-04T18:24:55.150 | null | null | 4253 | null |
11572 | 2 | null | 11551 | 12 | null | Here are some other thoughts (although I am always reluctant to leave Emacs):
- Deducer (with JGR) allows you to view a data.frame with a combined variable/data view (à la SPSS).
- J Fox's Rcmdr also offers editing/viewing facilities, although in an X11 environment.
- J Verzani's Poor Man's GUI (pmg) only allows a quick preview of data.frames and other R objects. I don't know much about Rattle's capabilities.
Below are two screenshots when viewing a 704 by 348 data.frame (loaded as an RData) with Deducer (top) and Rcmdr (bottom).


| null | CC BY-SA 3.0 | null | 2011-06-04T19:48:29.887 | 2011-06-04T19:48:29.887 | null | null | 930 | null |
11573 | 1 | null | null | 2 | 411 | I have a set of images of same color cars which I have users rate on a scale from 1-5 (integers only) based on how attractive they think the car design is. For each image I have a set of parameters about the cars in question, mostly various ratios of dimensions (say height at middle, width at trunk, curviness of hood, etc). I first give the user a set of training images and have them rate them. I would like to use this training set to predict future ratings by a particular user based on the ratios of the car in question. While a number prediction would be great (ie a prediction of what the user will rate the car), I am also willing to settle on a predicting of whether the rating is above or below say, 3. I don't really have a background in stats, but I think this has something to do with logistic regressions and discrete choice? I was wondering what a good reference for this would be.
To add a little more info, there are no strong individual correlations between the ratios and the users' ratings. Moreover, a linear regression is out of the question because there isn't a simple relationship between changing a particular ratio and the attractiveness. Finally, I also do not want to introduce too many ratios, because then they become correlated with each other (say, height/width and height/length give length/width upon division).
| Discrete choice prediction | CC BY-SA 3.0 | null | 2011-06-04T20:27:53.267 | 2011-06-07T20:09:17.800 | 2011-06-04T21:53:09.463 | null | 4886 | [
"logistic",
"multivariate-analysis"
] |
11574 | 2 | null | 11544 | 5 | null | As I read your question ("and the fluctuations around the stable point are much smaller that the fluctuations during the transient period"), what I get out of it is a request to detect whether the variance of the errors has changed and, if so, when! If that is your objective, then you might consider reviewing the work of R. Tsay, "Outliers, Level Shifts and Variance Changes in Time Series", Journal of Forecasting, Vol. 7, 1-20 (1988). I have done considerable work in this area and find it very productive in yielding good analysis. Other approaches (OLS/linear regression analysis, for example), which assume independent observations, no pulse outliers, no level shifts or local time trends, and time-invariant parameters, are insufficient in my opinion.
| null | CC BY-SA 3.0 | null | 2011-06-04T20:28:53.903 | 2011-06-04T20:28:53.903 | null | null | 3382 | null |
11575 | 2 | null | 11562 | 0 | null | You could think of this as a test of variances. Judge 5's scores will get more weight because the variability of the scores is higher.
This test would be for the equality of two variances. It's in most intro stat books, and even in Excel, which provides the following results for judge 5 versus judges 1-4:
```
F-Test Two-Sample for Variances

                     Variable 1    Variable 2
Mean                 64.4125       85.85875
Variance             520.415625    60.13891833
Observations         4             16
df                   3             15
F                    8.653558119
P(F<=f) one-tail     0.001424952
F Critical one-tail  3.287382105
```
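The F and p values above can be reproduced directly from the scores given in the question; here is a sketch in Python/SciPy (used instead of Excel purely for illustration):

```python
import numpy as np
from scipy import stats

judge5 = np.array([44, 65.15, 52.45, 96.05])
judges1_4 = np.array([83.03, 96.5, 88.5, 90.5,
                      67.15, 89.9, 85.36, 89.85,
                      72.05, 84.6, 78.95, 85,
                      86.95, 93.3, 88, 94.1])

F = judge5.var(ddof=1) / judges1_4.var(ddof=1)            # ratio of sample variances
p = stats.f.sf(F, len(judge5) - 1, len(judges1_4) - 1)    # one-tailed p-value
print(F, p)   # matches the Excel output above
```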
This does show judge 5 is significantly more variable than the other judges, but frankly I would be careful of a result like this because of the amount of "fishing" involved. You're looking at this after-the-fact, with several possible hypotheses available (just to start, there are equivalent tests of judge 1 against 2,3,4,5, judge 2 against 1,3,4,5, etc.)
It's also possible that even if you are observing something, it might not be what you think. You might not be observing bias for or against particular contestants, but rather a consistent tendency to view things differently, sort of like the way umpires in American baseball differ in their willingness to call a high strike, with various pitchers tending to use or not use the high strike accordingly.
If you had more contest results, you could compare judge 5 (and others) versus some overall norm. That gets around the fact that with a small sample of judges and ratings (and a posthoc analysis!) you can't really get above the suspicion level.
| null | CC BY-SA 3.0 | null | 2011-06-04T20:39:11.340 | 2011-06-04T20:39:11.340 | null | null | 3919 | null |
11577 | 2 | null | 11539 | 3 | null | For a thorough overview of tests, I can recommend the [Handbook of Parametric and Nonparametric Statistical Procedures](http://rads.stackoverflow.com/amzn/click/158488133X) by David Sheskin.
| null | CC BY-SA 3.0 | null | 2011-06-04T21:24:33.003 | 2011-06-04T21:24:33.003 | null | null | 198 | null |