| Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9140 | 1 | 9158 | null | 12 | 630 | I have heard of survival analysis and life data analysis, but don't quite get the big picture.
I was wondering what topics they cover.
Is it pure statistics, or just the application of statistics to some specific area?
Is life data analysis part of survival analysis?
Thanks and regards!
| Big picture on survival analysis and life data analysis | CC BY-SA 2.5 | null | 2011-04-04T01:09:10.603 | 2021-04-24T22:06:23.637 | 2011-04-04T05:55:45.793 | 1005 | 1005 | [
"survival",
"mathematical-statistics"
] |
9142 | 2 | null | 9131 | 7 | null | @Tal: Might I suggest [Kutner et al](https://rads.stackoverflow.com/amzn/click/com/007310874X) as a fabulous source for linear models.
There is the distinction between
- a prediction of $Y$ from an individual new observation $X_{vec}$,
- the expected value of a $Y$ conditioned on $X_{vec}$, $E(Y|X_{vec})$ and
- $Y$ from several instances of $X_{vec}$
They are all covered in detail in the text.
I think you are looking for the formula for the confidence interval around $E(Y|X_{vec})$, which is $\hat{Y} \pm t_{1-\alpha/2}\,s_{\hat{Y}}$, where $t$ has $n-2$ d.f. and $s_{\hat{Y}}$ is the standard error of $\hat{Y}$, with $s_{\hat{Y}}^{2} = \frac{\sigma^{2}}{n} +(X_{vec}-\bar{X})^{2}\frac{\sigma^{2}}{\sum(X_{i}-\bar{X})^{2}}$
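As a sketch on synthetic data (the values below are made up purely for illustration), R's `predict()` computes exactly this $\hat{Y} \pm t\,s_{\hat{Y}}$ interval:

```r
# Hypothetical example: confidence interval around E(Y|X) in simple regression
set.seed(1)
x <- runif(30, 0, 10)
y <- 2 + 3 * x + rnorm(30)           # synthetic data
fit <- lm(y ~ x)
# predict() returns Yhat together with Yhat +/- t * s(Yhat)
ci <- predict(fit, newdata = data.frame(x = 5),
              interval = "confidence", level = 0.95)
ci
```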
| null | CC BY-SA 4.0 | null | 2011-04-04T01:15:10.187 | 2021-09-27T14:11:17.713 | 2021-09-27T14:11:17.713 | 28436 | 2040 | null |
9143 | 2 | null | 2504 | 19 | null | You cannot be certain without knowing the distribution. But there are certain things you can do, such as looking at what might be called the "partial variance", i.e. if you have a sample of size $N$, you draw the variance estimated from the first $n$ terms, with $n$ running from 2 to $N$.
With a finite population variance, you hope that the partial variance soon settles down close to the population variance.
With an infinite population variance, you see jumps up in the partial variance followed by slow declines until the next very large value appears in the sample.
This is an illustration with Normal and Cauchy random variables (and a log scale)

This may not help if the shape of your distribution is such that a much larger sample size than you have is needed to identify it with sufficient confidence, i.e. where very large values are fairly (but not extremely) rare for a distribution with finite variance, or are extremely rare for a distribution with infinite variance. For a given distribution there will be sample sizes which are more likely than not to reveal its nature; conversely, for a given sample size, there are distributions which are more likely than not to disguise their natures for that size of sample.
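The partial-variance idea above can be sketched in R on synthetic data (a Normal sample versus a Cauchy sample, plotted on a log scale as in the illustration):

```r
# "Partial variance": variance estimated from the first n terms, n = 2..N
set.seed(42)
N <- 2000
xNorm <- rnorm(N)      # finite population variance
xCauchy <- rcauchy(N)  # infinite population variance
partialVar <- function(x) sapply(2:length(x), function(n) var(x[1:n]))
plot(partialVar(xCauchy), type = "l", log = "y", col = "red",
     xlab = "n", ylab = "partial variance (log scale)")
lines(partialVar(xNorm))  # settles down near the population variance
```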
| null | CC BY-SA 2.5 | null | 2011-04-04T01:20:36.233 | 2011-04-04T01:20:36.233 | null | null | 2958 | null |
9144 | 2 | null | 9131 | 30 | null | You will need matrix arithmetic. I'm not sure how Excel will go with that. Anyway, here are the details.
Suppose your regression is written as $\mathbf{y} = \mathbf{X}\mathbf{\beta} + \mathbf{e}$.
Let $\mathbf{X}^*$ be a row vector containing the values of the predictors for the forecasts (in the same format as $\mathbf{X}$). Then the forecast is given by
$$
\hat{y} = \mathbf{X}^*\hat{\mathbf{\beta}} = \mathbf{X}^*(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{Y}
$$
with an associated variance
$$
\sigma^2 \left[1 + \mathbf{X}^* (\mathbf{X}'\mathbf{X})^{-1} (\mathbf{X}^*)'\right].
$$
Then a 95% prediction interval can be calculated (assuming normally distributed errors) as
$$
\hat{y} \pm 1.96 \hat{\sigma} \sqrt{1 + \mathbf{X}^* (\mathbf{X}'\mathbf{X})^{-1} (\mathbf{X}^*)'}.
$$
This takes account of the uncertainty due to the error term $e$ and the uncertainty in the coefficient estimates. However, it ignores any errors in $\mathbf{X}^*$. So if the future values of the predictors are uncertain, then the prediction interval calculated using this expression will be too narrow.
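A numerical sketch of the matrix formulas above, on synthetic data (all names and values are made up for illustration):

```r
# Check the prediction-interval formula with basic matrix arithmetic
set.seed(1)
n <- 50
X <- cbind(1, runif(n), runif(n))      # design matrix, first column = intercept
y <- X %*% c(2, 1, -1) + rnorm(n)      # synthetic response
XtXinv <- solve(t(X) %*% X)
betaHat <- XtXinv %*% t(X) %*% y       # (X'X)^{-1} X'Y
Xstar <- c(1, 0.5, 0.5)                # predictor values for the forecast
yHat <- drop(Xstar %*% betaHat)
s2 <- sum((y - X %*% betaHat)^2) / (n - ncol(X))   # estimate of sigma^2
se <- sqrt(s2 * drop(1 + t(Xstar) %*% XtXinv %*% Xstar))
c(lower = yHat - 1.96 * se, upper = yHat + 1.96 * se)  # 95% prediction interval
```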
| null | CC BY-SA 3.0 | null | 2011-04-04T01:44:16.487 | 2017-10-08T01:31:34.933 | 2017-10-08T01:31:34.933 | 159 | 159 | null |
9146 | 2 | null | 9085 | 7 | null | I program in Python for 95% of my work and the rest in R or MATLAB or IDL/PV-WAVE (and soon SAS). But, I am in an environment where time-to-results is often a huge driver of the analysis chosen and so I often use point-and-click tools as well. In my experience, there is no single, robust, flexible GUI tool for doing analytics, much like there is not a single language. I typically cobble together a collection of the following free and commercial software
- Weka
- KNIME
- Excel and its plugins (like Solver)
- Alteryx
- MVP Stats
I have not used JMP, Stata, Statistica, etc, but would like to.
Using these tools involves learning different GUIs and multiple abstractions of modeling, which is a pain at the time but lets me get faster ad hoc results later. I am in the same boat as the OP because while most of the folks I work with are really smart, they do not care to learn a language, nor multiple GUIs and application-specific terminology. So, I have resigned myself to accepting that Excel drives 90% of analysis in the business world. Accordingly, I am looking into using things like [pyinex](http://code.google.com/p/pyinex/) to let me provide better analytics to the same Excel presentation layer that the vast majority of my colleagues expect.
UPDATE: Continuing down the Do-modeling-with-programming-but-make-Excel-the-presentation-layer theme, I just came across [this guy's website](http://sparklines-excel.blogspot.com/) offering Tufte-style graphics to embed in Excel cells. Simply awesome and free!
| null | CC BY-SA 3.0 | null | 2011-04-04T02:11:11.327 | 2011-11-04T18:51:17.657 | 2011-11-04T18:51:17.657 | 1080 | 1080 | null |
9147 | 2 | null | 5926 | 6 | null | There's no clear answer. The idea is that if you have a correlation that approaches 1 then you essentially have one variable and not multiple variables. So you could test against the hypothesis that $r = 1.00$. With that said, the idea of MANOVA is to give you something more than a series of ANOVA tests. It helps you find a relationship with one test because you're able to lower your mean square error by combining dependent variables. It just won't help if you have highly correlated dependent variables.
| null | CC BY-SA 2.5 | null | 2011-04-04T02:55:43.847 | 2011-04-04T02:55:43.847 | null | null | null | null |
9148 | 1 | 13970 | null | 3 | 507 | I am looking around for datasets mainly in the form of natural language text corpus, that has been redacted by experts. So far I found only the [enron](http://www.cs.cmu.edu/~enron/) dataset, but it appears only a few specific things (like employee mail-id, etc.) were redacted, which will not be beneficial for feature extraction.
Something like declassified government records or company data would be awesome; medical records would probably be comparatively easier to get, but are quite unrelated to the problem. Any idea if such dataset is available?
| Looking for redacted text corpus | CC BY-SA 3.0 | null | 2011-04-04T03:33:38.370 | 2020-06-05T17:33:35.437 | 2020-06-05T17:25:04.037 | 12359 | 2192 | [
"dataset",
"natural-language"
] |
9149 | 2 | null | 9085 | 1 | null | Well, this particular tool is popular in my industry (though it is not industry-specific by design):
[http://www.umetrics.com/simca](http://www.umetrics.com/simca)
It allows you to do latent variable type multivariate analysis (PCA and PLS), and it includes all the attendant interpretative plots / calculations and interrogation tools like contribution plots, variable importance plots, Q2 calculations etc.
It is often used on high-dimensional (and often highly correlated/collinear) industrial datasets where OLS/MLR type methods are unsuitable (e.g. info from a boatload of sensors, log info, etc.).
It operates in a fully GUI environment, and the user does not have to write a single line of code. Unfortunately it is not free, and cannot be extended via programming.
| null | CC BY-SA 2.5 | null | 2011-04-04T03:54:06.550 | 2011-04-04T03:54:06.550 | null | null | 2833 | null |
9150 | 2 | null | 9085 | 1 | null | In my opinion, if you don't code yourself the test, you are prone to errors and misunderstandings of the results.
I think that you should recommend them to hire a statistician that has computer skills.
If it is to do always the same thing, then indeed you can use a small tool (blackbox) that will do the stuff. But I am not sure this is still called data exploration.
| null | CC BY-SA 2.5 | null | 2011-04-04T04:11:24.810 | 2011-04-04T04:11:24.810 | null | null | 1709 | null |
9151 | 2 | null | 9140 | 8 | null | About survival analysis
In survival analysis, or time-to-event analysis, the variable of interest measures the time from a starting point to a point of interest like death due to some disease. So the response variable is a positive variable which is in most cases skewed. As a consequence the usual assumption of normality fails and, for instance, the classical regression techniques are not applicable. (Though, note that sometimes a transformation of the variable could make the situation better). But the main difference is censoring: a very common feature when dealing with time-to-event data. In its most common form (right censoring), you do not know the exact time for a given individual but you do know that it is larger than some value $t^{\star}$. For example, suppose you follow a patient up to death. At time $t=10$ days, he is alive. At time $t=30$ days he is still alive but then he is lost to follow-up. Then you do not know the exact time of death but you do know that $t > 30$. Ignoring censoring is clearly not the best thing to do; instead, you can record $t^{\star} = 30$ as a censored observation. Techniques of survival analysis (e.g., Kaplan-Meier estimator, Cox regression, ...) are specially designed to deal with censoring.
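As a minimal sketch (made-up times), the `survival` package encodes right censoring with a status flag and handles it automatically, e.g. in the Kaplan-Meier estimator:

```r
# status = 1 for an observed death, 0 for a right-censored time
# (e.g. lost to follow-up at t* = 30)
library(survival)
time <- c(30, 45, 12, 60, 25, 80)      # hypothetical follow-up times
status <- c(0, 1, 1, 0, 1, 1)
km <- survfit(Surv(time, status) ~ 1)  # Kaplan-Meier estimator
summary(km)
```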
To my point of view [Modelling Survival Data in Medical Research](http://www.crcpress.com/product/isbn/9781584883258) is a very good choice as a first book in survival analysis... but there are many others.
| null | CC BY-SA 2.5 | null | 2011-04-04T05:53:22.827 | 2011-04-04T05:53:22.827 | null | null | 3019 | null |
9152 | 2 | null | 9109 | 1 | null | The log probability of a GMM is non-convex, which makes it converge only locally. Also, EM scales with the number of points in your dataset - you might want to try online EM if you have a big dataset.
Compared to fitting univariate models (like a single Gaussian) performance is of course horrible, since it's an iterative procedure. To speed it up, a good heuristic is to start off with a couple of iterations of K-Means, use the centers as means and go on from there with the GMM modeling.
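The K-Means initialization heuristic can be sketched as follows (toy data; the resulting centers would then be passed as starting means to whatever EM routine you use):

```r
# A few k-means iterations to get starting means for the EM fit
set.seed(1)
x <- c(rnorm(200, mean = 0), rnorm(200, mean = 5))  # toy two-component data
km <- kmeans(x, centers = 2, iter.max = 5)
km$centers  # use these as initial means for the GMM / EM procedure
```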
I have had the feeling that GMMs converge globally for very simple and low dimensional datasets. I have never checked this for big datasets though.
| null | CC BY-SA 2.5 | null | 2011-04-04T06:35:27.023 | 2011-04-04T06:35:27.023 | null | null | 2860 | null |
9154 | 2 | null | 9129 | 2 | null | State of the art is to use [semantic hashing by Hinton and Salakhutdinov](http://www.cs.toronto.edu/~rsalakhu/papers/semantic_final.pdf). If you have a look into the paper, there are some really impressive 2D plots of several benchmark datasets.
It is a rather advanced algorithm, however. You train a stack of restricted boltzmann machines with contrastive divergence. In the end your representation of a document will be a bit vector. This can be used to do lookups based on the hamming distance.
A lot of machine learning knowledge is required to successfully implement this, and as far as I know there is no out-of-the-box implementation. If you want to do this and you have no prior knowledge of neural networks et al., it will take quite some effort.
| null | CC BY-SA 2.5 | null | 2011-04-04T08:42:40.867 | 2011-04-04T08:42:40.867 | null | null | 2860 | null |
9155 | 1 | 9196 | null | 17 | 18296 | I would like to clarify how [the Granger causality](http://en.wikipedia.org/wiki/Granger_causality) can/should be used in practice, and how to interpret the statistical significance given by the test.
Also, I would like to fill this table with things like "we don't know" or if we know something, what do we know (It will for sure not be causality, but maybe something else?).
```
X Granger cause Y sig. X Granger cause Y not
Y Granger cause X sig. ... ...
Y Granger cause X not ... ...
```
| Interpretation of the Granger causality test | CC BY-SA 3.0 | null | 2011-04-04T10:32:41.897 | 2016-02-20T15:06:21.423 | 2016-02-20T15:06:21.423 | 7290 | 1709 | [
"hypothesis-testing",
"granger-causality"
] |
9156 | 1 | 9157 | null | 4 | 1806 | I would like to propose a single model (decision tree), which is highly variable, and validate it. I chose the parameters after obtaining good quality measures with cross-validation.
I could build the model on the whole data set and show cross-validated measures. But I can't get a special graph (called a Reliability Plot) specific to that model. I should split my data set into training and test sets to obtain that specific graph. The model built on the training set is different from the one optimized on the whole dataset.
Could I choose my training set (50% of the total) so as to obtain the same model as the one built on the whole data set? Is there anything unwise or wrong about this method?
Thanks
| How to choose training and test sets | CC BY-SA 2.5 | null | 2011-04-04T11:02:16.770 | 2011-04-04T20:29:03.047 | 2011-04-04T20:29:03.047 | 2719 | 2719 | [
"cross-validation",
"validation"
] |
9157 | 2 | null | 9156 | 4 | null | If you have tuned the model parameters using cross-validation, then you won't get an unbiased estimate of performance without using some completely new data. Even if you re-cross-validate using a different partition of the data, or make a random test/training split using the data you have used already, this will still bias the performance evaluation.
Note that the "cross-validated measures" you already have are a (possibly heavily) biased performance estimate if you have directly optimised them to choose the (hyper-)parameters.
The thing to do would be to use a nested cross-validation, where the outer cross-validation is used for performance estimation, and the model parameters are tuned independently in each fold via an "inner" cross-validation.
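A rough sketch of the nested scheme on synthetic data (the tuned "hyper-parameter" here is a polynomial degree, chosen purely for illustration):

```r
# Nested CV: inner loop picks the degree, outer loop estimates performance
set.seed(1)
dat <- data.frame(x = runif(100, -2, 2))
dat$y <- sin(dat$x) + rnorm(100, sd = 0.2)   # synthetic data
K <- 5
fold <- sample(rep(1:K, length.out = nrow(dat)))
outerErr <- numeric(K)
for (k in 1:K) {
  train <- dat[fold != k, ]
  test <- dat[fold == k, ]
  # inner CV on the training part only, to choose the degree
  innerFold <- sample(rep(1:K, length.out = nrow(train)))
  cvErr <- sapply(1:4, function(d) mean(sapply(1:K, function(j) {
    fit <- lm(y ~ poly(x, d), data = train[innerFold != j, ])
    mean((predict(fit, train[innerFold == j, ]) - train$y[innerFold == j])^2)
  })))
  fit <- lm(y ~ poly(x, which.min(cvErr)), data = train)
  outerErr[k] <- mean((predict(fit, test) - test$y)^2)  # unseen outer fold
}
mean(outerErr)  # (nearly) unbiased performance estimate
```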
| null | CC BY-SA 2.5 | null | 2011-04-04T11:28:31.587 | 2011-04-04T11:28:31.587 | null | null | 887 | null |
9158 | 2 | null | 9140 | 13 | null | The concept of censoring is the key to survival analysis and life data analysis. This issue can also enter via industrial statistics. When monitoring the length of time it
takes for a sample of units to fail, you can have
- Complete data: the exact time a unit fails is known
- Censored to the right: the time to fail for a unit is beyond the present run time
- Censored to the left: the known time is after the time a unit failed
Other issues that enter the data mix are
- Singly censored: all unfailed units have a common run time
- Multiply censored: the unfailed units have different run times
- Interval censored: the time to fail is known to be between a particular set of times.
- Time censored: the censoring time is fixed
- Failure censored: a test is stopped when a fixed number of units fail
- Competing failure modes: the sample units fail for different reasons
Common distributions capable of handling these situations are: lognormal, Weibull, and extreme value. The issues become interesting because there are graphical procedures to handle analysis as well as MLE and Method of Moments methods.
Systems reliability is an off-shoot of this topic which gets involved with Bayesian methods, renewal theory, and accelerated life testing. Wayne Nelson and Bill Meeker
have several good books on the topics.
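As a hedged sketch (made-up test times), a Weibull life distribution can be fitted by MLE to time-censored data with `survreg()` from the `survival` package; note that `survreg` uses a log-location-scale parameterisation, from which the usual Weibull shape and scale are recovered:

```r
# Weibull fit to a time-censored life test: test stopped at 800 h,
# unfailed units are right-censored (failed = 0)
library(survival)
hours  <- c(120, 210, 340, 430, 560, 700, 800, 800, 800, 800)
failed <- c(  1,   1,   1,   1,   1,   1,   0,   0,   0,   0)
fit <- survreg(Surv(hours, failed) ~ 1, dist = "weibull")
exp(coef(fit))  # Weibull characteristic life (scale)
1 / fit$scale   # Weibull shape
```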
| null | CC BY-SA 2.5 | null | 2011-04-04T12:35:15.843 | 2011-04-04T12:35:15.843 | null | null | 3805 | null |
9159 | 1 | 9166 | null | 15 | 27353 | While reading a few papers, I came across the term "gold set" or "gold standard". What I don't understand is what makes a dataset a gold standard? Is it peer acceptance, citation count, or is it at the liberty of the researcher and the relevance to the problem he is attacking?
| What is the meaning of a gold standard? | CC BY-SA 2.5 | null | 2011-04-04T14:25:22.570 | 2020-07-09T14:40:40.767 | 2016-09-28T09:23:03.227 | 35989 | 2192 | [
"terminology"
] |
9160 | 2 | null | 9159 | 5 | null | A gold standard is a standard accepted as the most valid and the most widely used. You can apply the expression to anything... But you can always accept or criticize the standard, especially in the case of a dataset.
| null | CC BY-SA 2.5 | null | 2011-04-04T14:46:40.257 | 2011-04-04T14:46:40.257 | null | null | null | null |
9161 | 2 | null | 9104 | 17 | null | The terminology is probably not used consistently, so the following is only how I understand the original question. From my understanding, the normal CIs you computed are not what was asked for. Each set of bootstrap replicates gives you one confidence interval, not many. The way to compute different CI-types from the results of a set of bootstrap replicates is as follows:
```
B <- 999 # number of replicates
muH0 <- 100 # for generating data: true mean
sdH0 <- 40 # for generating data: true sd
N <- 200 # sample size
DV <- rnorm(N, muH0, sdH0) # simulated data: original sample
```
Since I want to compare the calculations against the results from package `boot`, I first define a function that will be called for each replicate. Its arguments are the original sample, and an index vector specifying the cases for a single replicate. It returns $M^{\star}$, the plug-in estimate for $\mu$, as well as $S_{M}^{2\star}$, the plug-in estimate for the variance of the mean $\sigma_{M}^{2}$. The latter will be required only for the bootstrap $t$-CI.
```
> getM <- function(orgDV, idx) {
+ bsM <- mean(orgDV[idx]) # M*
+ bsS2M <- (((N-1) / N) * var(orgDV[idx])) / N # S^2*(M)
+ c(bsM, bsS2M)
+ }
> library(boot) # for boot(), boot.ci()
> bOut <- boot(DV, statistic=getM, R=B)
> boot.ci(bOut, conf=0.95, type=c("basic", "perc", "norm", "stud"))
BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 999 bootstrap replicates
CALL :
boot.ci(boot.out = bOut, conf = 0.95, type = c("basic", "perc", "norm", "stud"))
Intervals :
Level Normal Basic Studentized Percentile
95% ( 95.6, 106.0 ) ( 95.7, 106.2 ) ( 95.4, 106.2 ) ( 95.4, 106.0 )
Calculations and Intervals on Original Scale
```
Without using package `boot` you can simply use `replicate()` to get a set of bootstrap replicates.
```
boots <- t(replicate(B, getM(DV, sample(seq(along=DV), replace=TRUE))))
```
But let's stick with the results from `boot.ci()` to have a reference.
```
boots <- bOut$t # estimates from all replicates
M <- mean(DV) # M from original sample
S2M <- (((N-1)/N) * var(DV)) / N # S^2(M) from original sample
Mstar <- boots[ , 1] # M* for each replicate
S2Mstar <- boots[ , 2] # S^2*(M) for each replicate
biasM <- mean(Mstar) - M # bias of estimator M
```
The basic, percentile, and $t$-CI rely on the empirical distribution of bootstrap estimates. To get the $\alpha/2$ and $1 - \alpha/2$ quantiles, we find the corresponding indices to the sorted vector of bootstrap estimates (note that `boot.ci()` will do a more complicated interpolation to find the empirical quantiles when the indices are not natural numbers).
```
> (idx <- trunc((B + 1) * c(0.05/2, 1 - 0.05/2))) # indices for sorted vector of estimates
[1] 25 975
> (ciBasic <- 2*M - sort(Mstar)[idx]) # basic CI
[1] 106.21826 95.65911
> (ciPerc <- sort(Mstar)[idx]) # percentile CI
[1] 95.42188 105.98103
```
For the $t$-CI, we need the bootstrap $t^{\star}$ estimates to calculate the critical $t$-values. For the standard normal CI, the critical value will just be the $z$-value from the standard normal distribution.
```
# standard normal CI with bias correction
> zCrit <- qnorm(c(0.025, 0.975)) # z-quantiles from std-normal distribution
> (ciNorm <- M - biasM + zCrit * sqrt(var(Mstar)))
[1] 95.5566 106.0043
> tStar <- (Mstar-M) / sqrt(S2Mstar) # t*
> tCrit <- sort(tStar)[idx] # t-quantiles from empirical t* distribution
> (ciT <- M - tCrit * sqrt(S2M)) # studentized t-CI
[1] 106.20690 95.44878
```
In order to estimate the coverage probabilities of these CI-types, you will have to run this simulation many times. Just wrap the code into a function, return a list with the CI-results and run it with `replicate()` like demonstrated in [this gist](https://gist.github.com/anonymous/73f64e1b8ff7e972fc3b).
| null | CC BY-SA 3.0 | null | 2011-04-04T15:03:13.660 | 2015-08-03T07:19:09.243 | 2015-08-03T07:19:09.243 | 1909 | 1909 | null |
9162 | 2 | null | 7146 | 1 | null | I'm not sure if somebody has already made this point, but pelwei can actually be forced to work as a two-parameter Weibull function by fixing the lower bound.
Instead of calling `moments<-pelwei(wind.moments)` you should simply call `moments<-pelwei(wind.moments,bound=0)`
You can always check what the zeta (location) value is. If it's not 0 and you're using dweibull, you need to account for it.
| null | CC BY-SA 2.5 | null | 2011-04-04T15:12:40.560 | 2011-04-04T15:12:40.560 | null | null | 4025 | null |
9163 | 2 | null | 9159 | 2 | null | In a challenge, this usually mean the answer to the test set, previously hidden from participants.
| null | CC BY-SA 2.5 | null | 2011-04-04T15:20:46.483 | 2011-04-04T15:20:46.483 | null | null | null | null |
9164 | 2 | null | 9140 | 2 | null | ```
5, 10, 12+, 14, 17, 18+, 20+
```
A first-approximation description of survival analysis: analysing data where the dependent variable has (1) precise values (the complete observations) and (2) values known to be above a given threshold (the censored observations). The above may be a survival data sample: values without `+` are precisely known; values with `+` are known to be larger, but not by how much. (And there are many extensions.)
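The sample above can be encoded directly with R's `survival` package; the printed `Surv` object reproduces the `+` notation for censored values:

```r
# event = 1 for an exact time, 0 for a censored ("+") value
library(survival)
time  <- c(5, 10, 12, 14, 17, 18, 20)
event <- c(1,  1,  0,  1,  1,  0,  0)
Surv(time, event)  # prints: 5 10 12+ 14 17 18+ 20+
```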
| null | CC BY-SA 3.0 | null | 2011-04-04T15:28:25.340 | 2011-04-10T16:02:11.417 | 2011-04-10T16:02:11.417 | 3911 | 3911 | null |
9165 | 1 | null | null | 6 | 233 | I have a survival cancer clinical trials dataset from which I have generated Cox models using forward likelihood ratio testing within R. These models are based on 'traditional' cancer variables (eg. age, histology, metastasis etc).
I would like to extend the model using high dimensional data (where we have measured many thousands of genes - FWIW, this is DNA methylation data, which can range from zero to one, rather than gene expression). Several approaches have been suggested for investigating survival using high-dimensional data, but I am not aware of any approaches that fit my requirements, i.e. adding high-dimensional data to a base multi-variate model constructed using previously identified survival correlates.
As a first step, I am testing for bi-modality and reducing dimensionality by selecting the most bi-modal probes for further analysis. These probes would be the most amenable to testing and verification in the lab.
One approach would simply be to carry on with the forward LR testing, although this would leave me very prone to overfitting.
Another (more sensible, in my opinion) approach would be to aggregate collections of genes into (survival-related) metagenes and then trim the metagenes into a handful of testable genes, so that this could be a usable test clinically, although this may also be prone to overfitting.
The cancer I work on is rare and test/training cohorts are tricky. To put things into perspective, the clinical trials dataset is 135 cases, with a further 55 age-matched non-clinical-trials cases, which show no difference in survival to the clinical trials dataset.
So my question is, what sort of approaches should I be considering and is what I have done so far sensible?
Any advice from this rather rambling question is most appreciated.
Thanks for reading!
Ed
| Adding high-dimensional data to multivariate Cox model | CC BY-SA 2.5 | null | 2011-04-04T15:29:56.450 | 2011-04-05T01:18:24.983 | null | null | 3429 | [
"survival",
"microarray",
"large-data"
] |
9166 | 2 | null | 9159 | 15 | null | Say that you want to measure a certain polluant in drinking water, the golden standard will be the method which detects it with the highest sensitivity and accuracy. Any other method can then be compared to it, knowing that -under certain conditions- the golden standard is the best (e.g.: if you need to measure the polluant on site, the golden standard will not be any method that require huge machinery to be used, as you will not be able to bring it with you).
I think the [Wikipedia article](http://en.wikipedia.org/wiki/Gold_standard_%28test%29) explains it quite well:
>
In medicine and statistics, gold standard test refers to a diagnostic, test or benchmark that is the best available under reasonable conditions. It does not have to be necessarily the best possible test for the condition in absolute terms. For example, in medicine, dealing with conditions that require an autopsy to have a perfect diagnosis, the gold standard test is normally less accurate than the autopsy. Anyway, it is obviously preferred by the patients.
| null | CC BY-SA 2.5 | null | 2011-04-04T15:34:19.880 | 2011-04-04T15:34:19.880 | null | null | 582 | null |
9167 | 2 | null | 9159 | 3 | null | I have observed the term "gold standard" in quotes more times than not, so I take it to mean something that is highly subjective. Even in the Wikipedia article some paragraphs refer to it in quotes. The OP is also referring to a "gold standard dataset" which I take it to mean a "Gold Standard" for descriminant Analysis, as in Fishers' Iris dataset being the "Gold Standard". But I am not 100% sure usage is consistant.
| null | CC BY-SA 2.5 | null | 2011-04-04T15:47:16.553 | 2011-04-04T15:57:54.190 | 2011-04-04T15:57:54.190 | 3489 | 3489 | null |
9168 | 2 | null | 9165 | 1 | null | >
One approach would simply be to carry on with the forward LR testing, although this would leave me very prone to overfitting.
You could penalise model complexity to avoid overfitting. My favourite is the `stepAIC` function from the `MASS` package that uses AIC (can be configured to use BIC) as a goodness of fit.
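A minimal sketch of the suggested call, on synthetic data (a plain `lm()` stands in here for the Cox model purely for illustration; `stepAIC()` should also accept `coxph` fits):

```r
# Penalised stepwise selection; k = log(n) switches the penalty to BIC
library(MASS)
set.seed(1)
dat <- as.data.frame(matrix(rnorm(135 * 6), ncol = 6,
                            dimnames = list(NULL, c("y", paste0("g", 1:5)))))
full <- lm(y ~ ., data = dat)          # start from the full model
sel <- stepAIC(full, direction = "both", k = log(nrow(dat)), trace = FALSE)
formula(sel)                           # the retained predictors
```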
| null | CC BY-SA 2.5 | null | 2011-04-04T16:17:49.320 | 2011-04-04T16:17:49.320 | null | null | 3911 | null |
9169 | 2 | null | 9165 | 0 | null | Edit: after the comment below from EdS my original answer was not meaningful any more. @EdS, thanks for the further information!
| null | CC BY-SA 2.5 | null | 2011-04-04T16:23:56.663 | 2011-04-05T01:18:24.983 | 2011-04-05T01:18:24.983 | 3911 | 3911 | null |
9170 | 2 | null | 9076 | 1 | null | You may also be interested in the 'auto.arima' function in the [forecast package](http://robjhyndman.com/software/forecast/) for [r](http://www.r-project.org/) as an example of a way to programmatically identify ARIMA models. It probably doesn't find the model in the exact same way you would, but some of the code/ideas might be useful to you.
| null | CC BY-SA 2.5 | null | 2011-04-04T18:01:02.420 | 2011-04-04T18:01:02.420 | null | null | 2817 | null |
9171 | 1 | 9185 | null | 25 | 100617 | I'm brand new to this R thing but am unsure which model to select.
- I did a stepwise forward regression selecting each variable based on the lowest AIC. I came up with 3 models that I'm unsure which is the "best".
Model 1: Var1 (p=0.03) AIC=14.978
Model 2: Var1 (p=0.09) + Var2 (p=0.199) AIC = 12.543
Model 3: Var1 (p=0.04) + Var2 (p=0.04) + Var3 (p=0.06) AIC= -17.09
I'm inclined to go with Model #3 because it has the lowest AIC (I heard negative is ok) and the p-values are still rather low.
I've ran 8 variables as predictors of Hatchling Mass and found that these three variables are the best predictors.
- My next forward stepwise I choose Model 2 because even though the AIC was slightly larger the p values were all smaller. Do you agree this is the best?
Model 1: Var1 (p=0.321) + Var2 (p=0.162) + Var3 (p=0.163) + Var4 (p=0.222) AIC = 25.63
Model 2: Var1 (p=0.131) + Var2 (p=0.009) + Var3 (p=0.0056) AIC = 26.518
Model 3: Var1 (p=0.258) + Var2 (p=0.0254) AIC = 36.905
thanks!
| AIC or p-value: which one to choose for model selection? | CC BY-SA 3.0 | null | 2011-04-04T18:25:56.523 | 2022-10-22T18:15:10.073 | 2012-01-13T14:26:01.260 | null | 4027 | [
"model-selection",
"aic",
"stepwise-regression"
] |
9174 | 2 | null | 9171 | 36 | null | Looking at individual p-values can be misleading. If you have variables that are collinear (have high correlation), you will get big p-values. This does not mean the variables are useless.
As a quick rule of thumb, selecting your model with the AIC criteria is better than looking at p-values.
One reason one might not select the model with the lowest AIC is when your variable to datapoint ratio is large.
Note that model selection and prediction accuracy are somewhat distinct problems. If your goal is to get accurate predictions, I'd suggest cross-validating your model by separating your data in a training and testing set.
A paper on variable selection: [Stochastic Stepwise Ensembles for Variable Selection](https://arxiv.org/abs/1003.5930v2)
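For instance, candidate fits can be compared directly with `AIC()` (synthetic data, made up for illustration):

```r
# Compare nested candidate models by AIC; smaller is better,
# and negative AIC values are perfectly fine
set.seed(1)
d <- data.frame(y = rnorm(50), v1 = rnorm(50), v2 = rnorm(50), v3 = rnorm(50))
m1 <- lm(y ~ v1, data = d)
m2 <- lm(y ~ v1 + v2, data = d)
m3 <- lm(y ~ v1 + v2 + v3, data = d)
AIC(m1, m2, m3)
```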
| null | CC BY-SA 4.0 | null | 2011-04-04T18:59:19.727 | 2022-10-22T18:15:10.073 | 2022-10-22T18:15:10.073 | 362671 | 3834 | null |
9175 | 1 | null | null | 2 | 610 | When you make a normal probability plot, the plot may have curved bounds. The plot should then be roughly linear and the data should lie within the bounds provided by the software. In the examples I have seen, normality was rejected because some data points were not within the bounds. Can you provide an example where the data lie between the bounds, but are not linear?
| Linearity of the normal probability plot | CC BY-SA 2.5 | null | 2011-04-04T19:08:50.480 | 2011-04-26T09:44:55.127 | 2011-04-05T07:19:51.750 | 449 | 3454 | [
"normality-assumption"
] |
9180 | 2 | null | 9175 | 2 | null | This is a synthetic example. Here the data are close to q1=q2 but stepwise rather than linear. The example draws a sample from a normal distribution (right panel) then rounds the numbers (left panel).
```
set.seed(100)
x=rnorm(100, sd=3)
par(mfrow=(c(1, 2)))
qqnorm(round(x))
qqnorm(x)
```

| null | CC BY-SA 3.0 | null | 2011-04-04T20:10:31.270 | 2011-04-26T09:44:55.127 | 2011-04-26T09:44:55.127 | 3911 | 3911 | null |
9182 | 1 | 9205 | null | 4 | 5154 | I have multiple variables (here: weight, horizontal diameter, price and dummy) related to different factors (here: Apple, Orange, Banana and Avocado):
```
Fruit Weight HorDiam Price Dummy
Apple 60 60 5 4
Apple 50 70 8 6
Orange 80 75 7 2
Orange 72 70 9 8
Banana 40 30 3 1
Banana 45 35 4 2
Banana 80 50 8 3
Avocado 100 60 13 8
Avocado 95 70 14 6
```
I need to test if I can group some species together: are apples and oranges significantly different? ANOVA tells me if weight (or horizontal diameter, or price) is significantly different among species. Tukey test gives me if weight of one species is significantly different from weight of another one (pairwise). Clustering seems only able to group individual observations together, not species. I can't find the appropriate test (or algorithm) to tell me if, for a single variable (weight) or for all of them (weight, horDiam and price), apples can be grouped with oranges and/or with bananas. Any suggestion?
I created a R code for this example:
```
### CREATE TABLE
Fruit<-c("Apple","Apple","Orange","Orange","Banana","Banana","Banana","Avocado","Avocado")
Weight<-c(60,50,80,72,40,45,85,90,95)
horDiam<-c(60,70,75,70,30,35,50,60,70)
Price<-c(5,8,7,9,3,4,8,13,14)
Dummy<-c(4,6,2,8,1,2,3,8,6)
myData<-data.frame(Fruit=Fruit, Weight=Weight, horDiam=horDiam, Price=Price, Dummy=Dummy)
rownames(myData)<-c("Apple1","Apple2","Orange1","Orange2","Banana1","Banana2","Banana3","Avocado1","Avocado2")
### ANOVA
fit.aov<-list()
summaryAOV<-list()
for (i in 1:3){
  fit.aov[[i]]<-aov(myData[,i+1]~myData[,1])
  summaryAOV[[i]]<-summary(fit.aov[[i]])
}
### TUKEY
par(mfrow=c(1,3))
testTukey<-list()
mainTukey<-c("Weight", "Horiz. Diameter", "Price")
for (i in 1:3){
  testTukey[[i]]<-TukeyHSD(fit.aov[[i]], conf.level = 0.95)
  plot(testTukey[[i]], main=mainTukey[i])
}
### CLUSTERING (numeric columns only: dist() needs numeric data, so drop the Fruit factor)
plot( hclust(dist(myData[,-1]), method="ward") )
### CLUSTERING WITH P-VALUE
library(pvclust)
fit <- pvclust(t(myData[,-1]), method.hclust="ward", method.dist="euclidean")
plot(fit)
pvrect(fit, alpha=0.95)
```
| Multivariate grouping: clustering, anova, tukey | CC BY-SA 2.5 | null | 2011-04-04T23:00:51.540 | 2011-04-06T11:46:01.810 | null | null | 4028 | [
"r",
"hypothesis-testing",
"clustering"
] |
9183 | 2 | null | 8881 | 0 | null | As the comments suggest, it is only by fully understanding and specifying your research design that you will establish what regression method best corresponds to your data.
In the case where your DV is a categorical variable, which seems likely if you are dealing with social data, I would recommend reading extensively from [Long and Freese](http://www.stata.com/bookstore/regmodcdvs.html) to make an informed choice. Long and Freese use Stata, but equivalent commands exist in both R and SPSS.
| null | CC BY-SA 2.5 | null | 2011-04-05T00:20:39.873 | 2011-04-05T00:20:39.873 | null | null | 3582 | null |
9184 | 2 | null | 9182 | 2 | null | Are you looking for [MANOVA](http://en.wikipedia.org/wiki/Multivariate_analysis_of_variance)? This is the multivariate generalization of ANOVA where you are testing for differences between mean vectors. In R the manova function will fit this model.
| null | CC BY-SA 2.5 | null | 2011-04-05T00:39:42.973 | 2011-04-05T00:39:42.973 | null | null | 2040 | null |
9185 | 2 | null | 9171 | 25 | null | AIC is a goodness-of-fit measure that favours smaller residual error in the model but penalises the inclusion of further predictors, which helps to avoid overfitting. In your second set of models, model 1 (the one with the lowest AIC) may perform best when used for prediction outside your dataset. A possible explanation for why adding Var4 to model 2 results in a lower AIC but higher p-values is that Var4 is somewhat correlated with Var1, 2 and 3. The interpretation of model 2 is thus easier.
| null | CC BY-SA 2.5 | null | 2011-04-05T01:20:55.760 | 2011-04-05T01:20:55.760 | null | null | 3911 | null |
9187 | 2 | null | 8823 | 1 | null | Okay, so rather than go and re-derive Saunder's equation (5), I will just state it here. Conditions 1 and 2 imply the following equality:
$$\prod_{j=1}^{m}\left(\sum_{k\neq i}h_{k}d_{jk}\right)=\left(\sum_{k\neq i}h_{k}\right)^{m-1}\left(\sum_{k\neq i}h_{k}\prod_{j=1}^{m}d_{jk}\right)$$
where
$$d_{jk}=P(D_{j}|H_{k},I)\;\;\;\;h_{k}=P(H_{k}|I)$$
Now we can specialise to the case $m=2$ (two data sets) by taking $D_{1}^{(1)}\equiv D_{1}$ and relabeling $D_{2}^{(1)}\equiv D_{2}D_{3}\dots D_{m}$. Note that these two data sets still satisfy conditions 1 and 2, so the result above applies to them as well. Now expanding in the case $m=2$ we get:
$$\left(\sum_{k\neq i}h_{k}d_{1k}\right)\left(\sum_{l\neq i}h_{l}d_{2l}\right)=\left(\sum_{k\neq i}h_{k}\right)\left(\sum_{l\neq i}h_{l}d_{1l}d_{2l}\right)$$
$$\rightarrow\sum_{k\neq i}\sum_{l\neq i}h_{k}h_{l}d_{1k}d_{2l}=\sum_{k\neq i}\sum_{l\neq i}h_{k}h_{l}d_{1l}d_{2l}$$
$$\rightarrow\sum_{k\neq i}\sum_{l\neq i}h_{k}h_{l}d_{2l}(d_{1k}-d_{1l})=0\;\;\;\;\;\;\; (i=1,\dots,n)$$
The term $(d_{1a}-d_{1b})$ occurs twice in the above double summation, once when $k=a$ and $l=b$, and once again when $k=b$ and $l=a$. This will occur as long as $a,b\neq i$. The coefficients of the two occurrences are $d_{2b}$ and $-d_{2a}$. Now, because this equation holds for each $i=1,\dots,n$, we can actually remove the restriction $k,l\neq i$. To illustrate, take $i=1$: this gives all conditions except those where $a=1,b=2$ or $b=1,a=2$. Now take $i=3$, and these two conditions are recovered (note this assumes at least three hypotheses). So the equation can be re-written as:
$$\sum_{l>k}h_{k}h_{l}(d_{2l}-d_{2k})(d_{1k}-d_{1l})=0$$
Now each of the $h_i$ terms must be greater than zero, for otherwise we are dealing with $n_{1}<n$ hypotheses, and the answer can be reformulated in terms of $n_{1}$. So these can be removed from the above set of conditions:
$$\sum_{l>k}(d_{2l}-d_{2k})(d_{1k}-d_{1l})=0$$
Thus, there are $\frac{n(n-1)}{2}$ conditions that must be satisfied, and each conditions implies one of two "sub-conditions": that $d_{jk}=d_{jl}$ for either $j=1$ or $j=2$ (but not necessarily both). Now we have a set of all of the unique pairs $(k,l)$ for $d_{jk}=d_{jl}$. If we were to take $n-1$ of these pairs for one of the $j$, then we would have all the numbers $1,\dots,n$ in the set, and $d_{j1}=d_{j2}=\dots=d_{j,n-1}=d_{j,n}$. This is because the first pair has $2$ elements, and each additional pair brings at least one additional element to the set*
But note that because there are $\frac{n(n-1)}{2}$ conditions, we must choose at least the smallest integer greater than or equal to $\frac{1}{2}\times\frac{n(n-1)}{2}=\frac{n(n-1)}{4}$ for one of the $j=1$ or $j=2$. If $n>4$ then the number of terms chosen is greater than $n-1$. If $n=4$ or $n=3$ then we must choose exactly $n-1$ terms. This implies that $d_{j1}=d_{j2}=\dots=d_{j,n-1}=d_{j,n}$. Only with two hypotheses ($n=2$) does this not occur. But from the last equation in Saunder's article this equality condition implies:
$$P(D_{j}|\overline{H}_{i})=\frac{\sum_{k\neq i}d_{jk}h_{k}}{\sum_{k\neq i}h_{k}}=d_{ji}\frac{\sum_{k\neq i}h_{k}}{\sum_{k\neq i}h_{k}}=d_{ji}=P(D_{j}|H_{i})$$
Thus, in the likelihood ratio we have:
$$\frac{P(D_{1}^{(1)}|H_{i})}{P(D_{1}^{(1)}|\overline{H}_{i})}=\frac{P(D_{1}|H_{i})}{P(D_{1}|\overline{H}_{i})}=1 \quad\text{OR}\quad \frac{P(D_{2}^{(1)}|H_{i})}{P(D_{2}^{(1)}|\overline{H}_{i})}=\frac{P(D_{2}D_{3}\dots D_{m}|H_{i})}{P(D_{2}D_{3}\dots D_{m}|\overline{H}_{i})}=1$$
To complete the proof, note that if the second condition holds, the result is already proved, and only one ratio can be different from 1. If the first condition holds, then we can repeat the above analysis by relabeling $D_{1}^{(2)}\equiv D_{2}$ and $D_{2}^{(2)}\equiv D_{3}\dots D_{m}$. Then we would have $D_{1},D_{2}$ not contributing, or $D_{2}$ being the only contributor. We would then have a third relabeling when $D_{1}D_{2}$ not contributing holds, and so on. Thus, only one data set can contribute to the likelihood ratio when condition 1 and condition 2 hold, and there are more than two hypotheses.
*NOTE: An additional pair might bring no new terms, but this would be offset by a pair which brought 2 new terms. e.g. take $d_{j1}=d_{j2}$ as first[+2], $d_{j1}=d_{j3}$ [+1] and $d_{j2}=d_{j3}$ [+0], but next term must have $d_{jk}=d_{jl}$ for both $k,l\notin (1,2,3)$. This will add two terms [+2]. If $n=4$ then we don't need to choose any more, but for the "other" $j$ we must choose the 3 pairs which are not $(1,2),(2,3),(1,3)$. These are $(1,4),(2,4),(3,4)$ and thus the equality holds, because all numbers $(1,2,3,4)$ are in the set.
| null | CC BY-SA 2.5 | null | 2011-04-05T07:46:04.800 | 2011-04-05T07:46:04.800 | null | null | 2392 | null |
9190 | 1 | null | null | 10 | 507 | I have to build a multi-user web app which is about traffic measurements, prognoses etc. At this point I know that I'll use bar and pie charts.
Unfortunately, those chart types aren't rich enough to express all the data that I collect and compute.
I'm looking for a collection of graphical chart types. I don't mind buying a book or anything else. I need to find some graphical samples with explanations in order to inspire me.
Do you know of any such resource? Do you have any advice for me?
| Graphics encyclopedia | CC BY-SA 3.0 | null | 2011-04-05T11:51:15.763 | 2015-11-12T13:11:14.290 | 2015-11-12T13:11:14.290 | 22468 | 3856 | [
"data-visualization",
"references"
] |
9191 | 2 | null | 9190 | 7 | null | If you fancy R, you can see the [R graph gallery](http://addictedtor.free.fr/graphiques/).
| null | CC BY-SA 2.5 | null | 2011-04-05T12:21:39.897 | 2011-04-05T12:21:39.897 | null | null | 144 | null |
9192 | 1 | null | null | 7 | 3154 | I'm doing some Machine Learning in R using the `nnet` package. I want to estimate the generalisation performance of my classifier by using k-fold cross-validation.
How should I go about doing this? Are there some built-in functions that do this for me? I've seen `tune.nnet()` in the `e1071` package, but I'm not sure that does quite what I want.
Basically I want to do the cross-validation (split into 10 groups, train on 9, test on the remaining 1, repeat) and then obtain some measure of how well my classifier generalises, but I'm not sure what measure that should be. I guess I want to look at the average of the accuracies across the different cross-validation folds, but I'm not sure how to do that with the `tune.nnet()` function above.
Any ideas?
| How to get generalisation performance from nnet in R using k-fold cross-validation? | CC BY-SA 2.5 | null | 2011-04-05T12:24:30.907 | 2011-04-05T14:16:48.083 | 2011-04-05T12:41:05.260 | 930 | 261 | [
"r",
"machine-learning",
"cross-validation",
"neural-networks"
] |
9193 | 2 | null | 9190 | 6 | null | [Cleveland, William S](http://www.stat.purdue.edu/~wsc/). 1993. [Visualizing data.](http://books.google.com/books?id=V-dQAAAAMAAJ) ISBN 0963488406.
| null | CC BY-SA 2.5 | null | 2011-04-05T12:53:05.117 | 2011-04-05T12:53:05.117 | null | null | 449 | null |
9194 | 2 | null | 9190 | 8 | null | For an online summary, check out A [Periodic Table of Visualization Methods](http://www.visual-literacy.org/periodic_table/periodic_table.html).
| null | CC BY-SA 2.5 | null | 2011-04-05T13:01:16.297 | 2011-04-05T13:01:16.297 | null | null | 930 | null |
9195 | 2 | null | 9192 | 5 | null | If you are planning to tune the network (e.g. select a value for the learning rate) on your training data and determine the generalization error on that same data set, you need to use nested cross-validation, where in each outer fold you tune the model on that 9/10 of the data set (using 10-fold CV). See this post where I asked a similar question and got very good answers ([post](https://stats.stackexchange.com/questions/5918/cross-validation-error-generalization-after-model-selection)).
I am not sure if there is an R function to accomplish this - maybe the [ipred](http://cran.r-project.org/web/packages/ipred/index.html) package could be used by passing the tune function with the call - not sure. It is rather trivial to simply write a loop to accomplish this whole process though.
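A bare-bones version of such a loop might look like the following sketch; the data frame `dat` with class column `y`, the fold assignment, and the tuning grid of hidden-layer sizes are all illustrative assumptions, and the inner tuning loop is only indicated:

```
library(nnet)

k <- 10
folds <- sample(rep(1:k, length.out = nrow(dat)))  # outer fold labels
sizes <- c(2, 4, 8)                                # hypothetical tuning grid
acc <- numeric(k)

for (i in 1:k) {
  train <- dat[folds != i, ]
  test  <- dat[folds == i, ]
  ## inner loop (omitted): run 10-fold CV on 'train' alone to pick the best size
  best.size <- sizes[1]  # placeholder for the inner-CV winner
  fit <- nnet(y ~ ., data = train, size = best.size, trace = FALSE)
  acc[i] <- mean(predict(fit, test, type = "class") == test$y)
}
mean(acc)  # estimate of the generalisation accuracy
```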
As far as what metric to examine, that depends on your problem (classification versus regression) and on whether you are interested in accuracy (classification rate or kappa), in how well the model ranks (e.g. lift), or in the MAE or RMSE for regression.
| null | CC BY-SA 2.5 | null | 2011-04-05T13:11:14.243 | 2011-04-05T13:33:28.780 | 2017-04-13T12:44:44.530 | -1 | 2040 | null |
9196 | 2 | null | 9155 | 16 | null | To begin with, the source you added has almost all you need to get acquainted with the Granger (non-)causality concept (though I like the [scholarpedia](http://www.scholarpedia.org/article/Granger_causality) article more). The most crucial point is that, in practice, G-causality asks: would variable $x$ be useful for predicting variable $y$, meaning that the information contained in $x$'s values up to lag $p$ is statistically significant? Thus G-causality is a purely statistical property of the data, though it may be supported by a theoretically sound hypothesis.
---
Some practical considerations:
- If you have more than two stationary signals, it may happen that they have to be jointly described by a vector autoregressive model (VAR). Therefore pairwise G-causality could be misleading, since you ignore the impacts that come from the other variables.
- Suggestion in $R$: try library(vars) and ?causality for instantaneous and G-causality, when you have more than two variables, and VAR seems meaningful (well it is a separate answer when it really is, some ideas are also related to G-causality concept).
- Previous suggestion is better in multivariate case, comparing to pairwise case library(lmtest) and ?grangertest. On the other hand pairwise case is an option when you do have to work with two variables only. Even in multivariate case you may still perform grangertest just to mark possible useful covariates or decide on statistical possible endogeneity issues. I usually do so when lacking in time, since identification of subsets of variables and hyper-parameters (lag order) selection for VAR models is a not-so-quick task. So for quick useful-predictive-information-containing variables detection it is alright to go pairwise (but do not stop with this results, they are just auxiliary).
- Note that under the null hypothesis you test non-G-causality, so small $p$ values mark G-causal relationships.
- Conclusions from G-causality tests would be: "if $x$ G-causes $y$ statistically significantly, then $x$ contains useful information that helps to predict future values of $y$". However, if we conclude the same about $y$ (a feedback effect), it would mean that $x$ and $y$ are both endogenous and a VAR type of model is needed. You may also conclude that if neither variable G-causes the other, this is one sign that a VAR specification is not necessary, and you may go for separate ARMA models (note that your variables have to be stationary to perform G-causality tests correctly).
- Any other suggestions from the community are welcome, @zik you may try gretl as an alternative to $R$ to implement Granger-causality tests.
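As a concrete illustration of points 2 and 3, assuming two stationary series `x` and `y` (the series names and the lag order 2 are illustrative):

```
library(lmtest)
grangertest(y ~ x, order = 2)    # H0: x does not Granger-cause y

library(vars)
fit <- VAR(cbind(x, y), p = 2)   # joint VAR with lag order p = 2
causality(fit, cause = "x")      # Granger and instantaneous causality tests
```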
| null | CC BY-SA 2.5 | null | 2011-04-05T13:26:13.777 | 2011-04-05T13:26:13.777 | null | null | 2645 | null |
9197 | 2 | null | 9192 | 4 | null | Implementing k-fold CV (with or without nesting) is relatively straightforward in R; and stratified sampling (wrt. class membership or subjects' characteristics, e.g. age or gender) is not that difficult.
About the way to assess one's classifier performance, you can directly look at the R code for the `tune()` function. (Just type `tune` at the R prompt.) For a classification problem, this is the class agreement (between predicted and observed class membership) that is computed.
However, if you are looking for a complete R framework where data preprocessing (feature elimination, scaling, etc.), training/test resampling, and comparative measures of classifiers accuracy are provided in few commands, I would definitely recommend to have a look at the [caret](http://caret.r-forge.r-project.org/Classification_and_Regression_Training.html) package, which also includes a lot of useful vignettes (see also the [JSS paper](http://www.jstatsoft.org/v28/i05/paper)).
Of note, although NNs are among the methods callable from within `caret`, you may want to look at other methods that perform as well as, and often better than, NNs (e.g., Random Forests, SVMs, etc.)
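As an illustration of that workflow (the built-in `iris` data and the default tuning grid are used purely for demonstration):

```
library(caret)
ctrl <- trainControl(method = "cv", number = 10)   # 10-fold CV resampling
fit <- train(Species ~ ., data = iris, method = "nnet",
             trControl = ctrl, trace = FALSE)
fit$results  # resampled Accuracy and Kappa for each candidate model
```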
| null | CC BY-SA 2.5 | null | 2011-04-05T14:16:48.083 | 2011-04-05T14:16:48.083 | null | null | 930 | null |
9198 | 1 | 10558 | null | 19 | 3089 | Seasonal adjustment is a crucial step in preprocessing data for further research. The researcher, however, has a number of options for trend-cycle-seasonal decomposition. The most common rival seasonal decomposition methods (judging by the number of citations in the empirical literature) are X-11(12)-ARIMA, Tramo/Seats (both implemented in [Demetra+](http://circa.europa.eu/irc/dsis/eurosam/info/data/demetra.htm)) and $R$'s [stl](http://www.wessa.net/download/stl.pdf). Seeking to avoid a random choice between the above-mentioned decomposition techniques (or other simple methods like seasonal dummy variables), I would like to know a basic strategy for choosing a seasonal decomposition method effectively.
Several important subquestions (links to a discussion are welcome too) could be:
- What are the similarities and differences, strong and weak points of the methods? Are there any special cases when one method is more preferable than the others?
- Could you provide general guides to what is inside the black-box of different decomposition methods?
- Are there special tricks for choosing the parameters of these methods? (I am not always satisfied with the defaults; stl, for example, has many parameters to deal with, and sometimes I feel I just don't know how to choose them in the right way.)
- Is it possible to suggest some (statistical) criteria for judging whether a time series has been seasonally adjusted efficiently (correlogram analysis, spectral density, small-sample criteria, robustness)?
| Choosing seasonal decomposition method | CC BY-SA 3.0 | null | 2011-04-05T15:02:20.867 | 2017-06-14T21:18:41.800 | 2014-09-30T10:23:03.050 | 53690 | 2645 | [
"time-series",
"data-transformation",
"methodology",
"seasonality"
] |
9199 | 2 | null | 9190 | 4 | null | Systat (Lee Wilkinson) was an early leader in statistical graphics software. It always has had a [nice visual gallery](http://www.systat.com/solutions/ImageGallery.aspx).
| null | CC BY-SA 2.5 | null | 2011-04-05T15:19:15.227 | 2011-04-05T15:19:15.227 | null | null | 919 | null |
9200 | 2 | null | 9190 | 3 | null | A visual [gallery of really creative graphics](http://gallery.wolfram.com/) (but without much organization, unfortunately) is available on the Wolfram site (Mathematica).
| null | CC BY-SA 2.5 | null | 2011-04-05T15:21:14.570 | 2011-04-05T15:21:14.570 | null | null | 919 | null |
9201 | 1 | null | null | 9 | 1462 | I wrote a script that tests the data using `wilcox.test`, but when I got the results, all the p-values were equal to 1.
I read on some websites that you could use jitter before testing the data (to avoid ties, as they said). I did this and now I have an acceptable result.
Is it wrong to do this?
```
test<- function(column,datacol){
library(ggplot2)
t=read.table("data.txt", stringsAsFactors=FALSE)
uni=unique(c(t$V9))
for (xp in uni) {
for(yp in uni) {
testx <- subset(t, V9==xp)
testy <- subset(t, V9==yp)
zz <- wilcox.test(testx[[datacol]],jitter(testy[[datacol]]))
p.value <- zz$p.value
}
}
}
```
---
This is the output of `dput(head(t))`
```
structure(list(V1 = c(0.268912, 0.314681, 0.347078, 0.286945, 0.39562, 0.282182),
    V2 = c(0.158921, 0.210526, 0.262024, 0.322006, 0.133417, 0.283025),
    V3 = c(0.214082, 0.166895, 0.132547, 0.147361, 0.09174, 0.169093),
    V4 = c(0.358085, 0.307898, 0.258352, 0.243688, 0.379224, 0.2657),
    V5 = c(-0.142223, 0.010895, 0.14655, 0.08152, 0.02116, 0.030083),
    V6 = c(0.096408, -0.091896, -0.331229, -0.446603, -0.088493, -0.262037),
    V7 = c(1.680946, 1.649559, 1.534401, 1.130529, 3.441356, 1.211815),
    V8 = c("NC_000834", "NC_000844", "NC_000845", "NC_000846", "NC_000857", "NC_000860"),
    V9 = c("Chordata", "Arthropoda", "Chordata", "Chordata", "Arthropoda", "Chordata"),
    V10 = c("???:???", "Diplostraca", "???:???", "Rheiformes", "Diptera", "Salmoniformes"),
    V11 = c("???:???", "Branchiopoda", "Mammalia", "Aves", "Insecta", "Actinopterygii")),
    .Names = c("V1", "V2", "V3", "V4", "V5", "V6", "V7", "V8", "V9", "V10", "V11"),
    row.names = c(NA, 6L), class = "data.frame")
```
The data set is very large. That is the thread I started, where they told me it might be wrong to do this.
Note: this question comes from tex.SE:
[generating PDFcontain R output inside latex table](https://tex.stackexchange.com/questions/15013/generating-pdfcontain-r-output-inside-latex-table)
| Is it wrong to jitter before performing Wilcoxon test? | CC BY-SA 2.5 | null | 2011-04-05T15:38:55.223 | 2019-06-29T08:16:32.750 | 2019-06-29T08:16:32.750 | 3277 | 4038 | [
"r",
"nonparametric",
"ties"
] |
9202 | 1 | 9211 | null | 46 | 8436 | I am about to try out a BUGS-style environment for estimating Bayesian models. Are there any important advantages to consider in choosing between OpenBugs and JAGS? Is one likely to replace the other in the foreseeable future?
I will be using the chosen Gibbs sampler with R. I don't have a specific application yet; rather, I am deciding which one to install and learn.
| OpenBugs vs. JAGS | CC BY-SA 2.5 | null | 2011-04-05T15:42:20.177 | 2019-09-14T09:13:32.790 | 2012-08-23T01:11:33.680 | 9583 | 3700 | [
"r",
"software",
"bugs",
"jags",
"gibbs"
] |
9203 | 1 | 9204 | null | 6 | 219 | I have a distribution $X$. By playing around with random samples from $X$, I've determined that $Var(X^i) > Var(X)$ where $i > 1$. However, I can't seem to find a formula for the variance of $X^i$, or a reason why it should be greater than $Var(X)$.
Moving away from normal distributions, does it generalize that the scale parameter of $X^i$ will be greater than that of $X$?
| Variance of a distribution's product with itself | CC BY-SA 2.5 | null | 2011-04-05T15:54:06.677 | 2011-04-06T07:24:02.930 | null | null | 287 | [
"variance"
] |
9204 | 2 | null | 9203 | 6 | null | $Var(X^i)$ = $\mathbb{E}[(X^i - \mathbb{E}[X^i])^2]$ = $\mathbb{E}[X^{2i}] - (\mathbb{E}[X^i])^2$ by definition. This expresses the variance of $X^i$ in terms of moments of $X$.
The generalization is false, because $\mathbb{E}[(\lambda X)^{2i}]$ = $|\lambda|^{2i}\mathbb{E}[X^{2i}]$ implies that the scale parameter of $(\lambda X)^i$ will be smaller than the scale parameter of $\lambda X$ when $\lambda$ is sufficiently close to zero and $i \gt 1$.
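As a quick check of both points with a standard normal $X$ (using the moments $\mathbb{E}[X^2]=1$ and $\mathbb{E}[X^4]=3$, and taking $i=2$, $\lambda=\tfrac{1}{2}$):
$$\operatorname{Var}(X^2)=\mathbb{E}[X^4]-\left(\mathbb{E}[X^2]\right)^2=3-1=2>1=\operatorname{Var}(X)$$
$$\operatorname{Var}\!\left((\tfrac{1}{2}X)^2\right)=\tfrac{1}{16}\operatorname{Var}(X^2)=\tfrac{2}{16}=\tfrac{1}{8}<\tfrac{1}{4}=\operatorname{Var}(\tfrac{1}{2}X)$$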
| null | CC BY-SA 2.5 | null | 2011-04-05T16:04:03.533 | 2011-04-05T16:04:03.533 | null | null | 919 | null |
9205 | 2 | null | 9182 | 6 | null | You might have a look at the `betadisper()` function in the `vegan` package. The function implements the PERMDISP2 procedure (Anderson, 2006) for the analysis of multivariate homogeneity of group dispersions. An example using your data might be the following:
```
require(vegan)
distance<-vegdist(myData[,2:5], method="euclidean")
model<-betadisper(distance, myData[,1])
permutest(model, pairwise = TRUE)
Permutation test for homogeneity of multivariate dispersions
No. of permutations: 999
Permutation type: free
Permutations are unstratified
Mirrored permutations?: No
Use same permutation within strata?: No
Response: Distances
Df Sum Sq Mean Sq F N.Perm Pr(>F)
Groups 3 223.28 74.427 0.3523 999 0.84
Residuals 5 1056.34 211.268
Pairwise comparisons:
(Observed p-value below diagonal, permuted p-value above diagonal)
Apple Avocado Banana Orange
Apple 8.0000e-03 8.5600e-01 0.004
Avocado 9.2864e-31 7.9800e-01 0.008
Banana 6.2093e-01 5.6617e-01 0.797
Orange 1.5060e-31 4.0863e-27 5.6544e-01
```
Below I have inserted a plot of groups and distances to the group centroid [`plot(model`)], and a boxplot of the distances to centroid for each group [`boxplot(model)`].


Hope this helps.
### References
Anderson, M. J. (2006) Distance-based tests for homogeneity of multivariate dispersions. Biometrics 62(1): 245–253.
## Edit
On second thought, I would also recommend a more descriptive approach using linear discriminant analysis (LDA), which could help not only to visualise the spread of objects around their group centroids, but also to find the features that contribute to this configuration. The `ade4` package contains the versatile function `discrimin()` that does this as follows:
```
require(ade4)
discr <- discrimin(dudi.pca(myData[,2:5], scan = FALSE), myData[,1], scan = FALSE)
```
Note that the LDA is based on a PCA (function `dudi.pca()`) of the data so you will need to consider its properties when applying it to your task.

The top left plot represents the coefficients of the linear discriminant functions on the first two axes of the DA. The "Cos(variates, canonical variates)" plot shows the covariances between the object properties projected on the first two axes. Then, on the bottom left is the eigenvalue screeplot demonstrating the contribution of each axis to the variation. The main plot, "Scores and Classes", shows the projections of the individuals on the plane defined by the axes of the DA. Groups are displayed by ellipses whose centers are the means and whose extents show the variance within the objects. All plots are the result of the `plot(discr)` command.
A randomisation test (and plot via `plot(randtest.discrimin(discr))`) of the eigenvalue significance is also available:
```
randtest.discrimin(discr)
Monte-Carlo test
Call: randtest.discrimin(xtest = discr)
Observation: 0.5074292
Based on 999 replicates
Simulated p-value: 0.052
Alternative hypothesis: greater
Std.Obs Expectation Variance
1.549410152 0.376074589 0.007187167
```

| null | CC BY-SA 2.5 | null | 2011-04-05T16:09:22.260 | 2011-04-06T11:46:01.810 | 2011-04-06T11:46:01.810 | 3467 | 3467 | null |
9207 | 2 | null | 9159 | 3 | null | The term "gold standard" has been used a lot with respect to No Child Left Behind. One important component of the legislation is that it established the need for the education field to move towards interventions and practices that have been demonstrated to be effective in rigorous studies. In the NCLB materials, research designs have been classified into three categories based on the strength of the conclusions they warrant. Randomized control trials (i.e. randomized or true experiments) are the only designs that provide "strong evidence" of effectiveness. Thus they have been dubbed the "gold standard" in educational research. Well-designed true experiments are given higher precedence in the awarding of federal research grants.
| null | CC BY-SA 2.5 | null | 2011-04-05T16:31:03.737 | 2011-04-05T16:31:03.737 | null | null | 4041 | null |
9208 | 1 | null | null | 8 | 998 | I was wondering how statistics and decision theory are related?
It looks to me as though all statistics problems/tasks can be formulated in decision theory. Likewise, problems in decision theory can be formulated as statistics/probability problems, or in a deterministic way. So is statistics just a part of the problems studied in decision theory?
Or are they just two theories with overlapping scopes, neither falling completely inside the other?
But I have to admit that I don't have a systematic big picture of which topics statistics theory and decision theory cover respectively, and I would like to hear some of your points of view.
Thanks and regards!
| What is the relation between statistics theory and decision theory? | CC BY-SA 2.5 | null | 2011-04-05T16:40:19.540 | 2011-04-05T23:58:15.773 | 2011-04-05T16:44:44.153 | 1005 | 1005 | [
"mathematical-statistics",
"decision-theory"
] |
9209 | 5 | null | null | 0 | null | null | CC BY-SA 2.5 | null | 2011-04-05T16:46:53.373 | 2011-04-05T16:46:53.373 | 2011-04-05T16:46:53.373 | 919 | 919 | null | |
9210 | 4 | null | null | 0 | null | Used to tag questions (often Community Wiki) where a collection of replies is requested, such as a list of references, of quotations, etc. | null | CC BY-SA 2.5 | null | 2011-04-05T16:46:53.373 | 2011-04-05T16:46:53.373 | 2011-04-05T16:46:53.373 | 919 | 919 | null |
9211 | 2 | null | 9202 | 35 | null | BUGS/OpenBugs has a peculiar build system which made compiling the code difficult to impossible on some systems — such as Linux (and IIRC OS X) where people had to resort to Windows emulation etc.
Jags, on the other hand, is a completely new project written with standard GNU tools and hence portable to just about anywhere — and therefore usable anywhere.
So in short, if your system is Windows then you do have a choice, with the potential cost of being stuck with Bugs if you ever move platforms. If you are not on Windows, then Jags is likely to be the better choice.
| null | CC BY-SA 3.0 | null | 2011-04-05T16:49:23.473 | 2017-11-15T11:32:58.197 | 2017-11-15T11:32:58.197 | 97671 | 334 | null |
9212 | 2 | null | 9208 | 5 | null | Statistical decision theory is a subset of statistical theory.
Exploratory statistics is not decision theory but it is statistics.
A theory about how to make (good) decisions is certainly much wider than statistical decision theory. For example, making a good decision in society may have more relation with psychology or even philosophy than with statistics, don't you think?
| null | CC BY-SA 2.5 | null | 2011-04-05T17:01:27.370 | 2011-04-05T17:57:08.913 | 2011-04-05T17:57:08.913 | 919 | 223 | null |
9213 | 1 | 9219 | null | 0 | 232 | Previously I asked how to compute some conditional probabilities, but I am missing this particular case:
let's say we now have 3 variables:
$T$, $L$, $E$:
```
T L
\ /
E
```
So
$E$ depends on $T$ and $E$ depends on $L$
I have these probabilities:
$P(T) =$ 0.0104
$P(L) =$ 0.055
$P(E) =$ 0.0648
$P(E|L,T) =$ 1.0
$P(E|L,\lnot T) =$ 1.0
$P(E|\lnot L,T) =$ 1.0
$P(E|\lnot L,\lnot T) =$ 0.0
How are the probabilities of $T$ and $L$ modified when we observe that $P(E) = 1$?
| variable depending on 2 variables conditional probability | CC BY-SA 2.5 | null | 2011-04-05T17:10:13.750 | 2011-04-05T19:05:10.680 | 2011-04-05T18:59:11.100 | 3681 | 3681 | [
"conditional-probability"
] |
9214 | 1 | 12802 | null | 5 | 197 | Latent semantic indexing seems to work well; e.g. it is independent of language, etc.
However, it appears to use the similarity of frequencies of terms in the corpus to categorize them.
If this understanding is correct, is there a way to measure the size of the dataset that will give optimal performance?
| Estimating sample size required for optimal performance of latent semantic indexing? | CC BY-SA 3.0 | null | 2011-04-05T17:36:40.303 | 2011-07-08T11:17:23.197 | 2011-07-08T11:17:23.197 | null | 2192 | [
"text-mining",
"svd"
] |
9215 | 1 | null | null | 2 | 197 | [Data Mining and Statistical Analysis](https://stats.stackexchange.com/q/1521/2192) has a general discussion of stats vs data mining. If I may narrow down the question a bit: are there any general demarcations that allow you to decide which approach is more suitable, LDA or ARM (other than appreciation for the maths in the former)?
| Choosing between Latent Dirichlet Allocation and Association Rule Mining | CC BY-SA 3.0 | null | 2011-04-05T18:02:10.560 | 2013-10-02T10:18:37.333 | 2017-04-13T12:44:46.680 | -1 | 2192 | [
"data-mining"
] |
9216 | 2 | null | 5926 | 3 | null | I would recommend conducting a MANOVA whenever you are comparing groups on multiple DVs that have been measured on each observation. The data are multivariate, and a multivariate procedure should be used to model that known data situation. I do not believe in deciding whether to use it on the basis of that correlation. So I would use MANOVA in either of those situations. I would recommend reading the relevant portions of the following conference paper by Bruce Thompson (ERIC ID ED429110).
p.s. I believe the 'conceptually related' quote comes from the Stevens book.
| null | CC BY-SA 2.5 | null | 2011-04-05T18:33:24.237 | 2011-04-05T18:33:24.237 | null | null | 4041 | null |
9217 | 1 | null | null | 2 | 109 | I plan to conduct a research study about Interaction Designers.
Prior to settling on sampling methods, I am finding it very hard to get my hands on demographics and statistics, e.g. how many people are registered as employed Interaction Designers, User Experience Designers etc. in a particular territory (the UK in my case).
Any help regarding this would be highly appreciated.
| Estimating the number of registered interaction designers in a given territory | CC BY-SA 2.5 | null | 2011-04-05T18:50:34.460 | 2013-02-05T12:58:24.747 | 2013-02-05T12:58:24.747 | 919 | 4043 | [
"sampling",
"dataset",
"survey"
] |
9218 | 1 | 9221 | null | 2 | 1002 | ```
str(test)
'data.frame': 767 obs. of 2 variables:
$ datefield: Ord.factor w/ 59 levels "1984-04-01"<"1984-07-01"<..: 1 2 3 4 5 6 7 8 9 10
$ somevar : num 43.7 55.6 43.5 54.1 42.8 ...
> str(Italy)
'data.frame': 1008 obs. of 2 variables:
$ year: Ord.factor w/ 48 levels "1951"<"1952"<..: 1 1 1 1 1 1 1 1 1 1 ...
$ gdp : num 8.56 12.26 9.59 8.12 5.54 ...
```
I turned the datefield into an ordered factor because I am trying to reproduce an example with my own data. Now I wonder what the difference between them is; or, to put it differently, what does 1 2 3 4 5 6 7 8 9 10 mean as opposed to 1 1 1 1 1 1 1 1 1 1 ... ?
| What's the difference between these ordered factors? | CC BY-SA 2.5 | null | 2011-04-05T18:58:01.350 | 2011-04-05T19:16:16.503 | null | null | 704 | [
"r",
"ordinal-data"
] |
9219 | 2 | null | 9213 | 2 | null | There is a contradiction. Among ($T \land L \land E$, $T \land L \land \lnot E$, $T \land \lnot L \land E$, $T \land \lnot L \land \lnot E$, $\lnot T \land L \land E$, $\lnot T \land L \land \lnot E$, $\lnot T \land \lnot L \land E$, $\lnot T \land \lnot L \land \lnot E$), the $\lnot T \land \lnot L \land \lnot E$ is the only $\lnot E$ case that has a nonzero probability. According to $Pr(E)= 0.0648$ this should have a probability of $1-0.0648=0.9452$. You also specified however that $Pr(T)=0.0104$, thus $Pr(\lnot T)=0.8996$. As the previously considered $\lnot T \land \lnot L \land \lnot E$ case is a member of the $\lnot T$ cases, this should have a probability less than or equal to $Pr(\lnot T)=0.8996$. Alas $0.9452 \nleq 0.8996$, so your [$P(T) =$ 0.0104, $P(L) =$ 0.055, $P(E) =$ 0.0648, $P(E|L,T) =$ 1.0, $P(E|L,\lnot T) =$ 1.0, $P(E|\lnot L,T) =$ 1.0, $P(E|\lnot L,\lnot T) =$ 0.0] statements cannot be simultaneously true.
I assume that your question is a formulation of a real problem to solve. So you have presumably not used the correct symbolic way to denote the problem. Maybe you could describe the problem in more detail in words?
Maybe you could separate which of your formulas correspond to facts, measurements that may be inaccurate or observations.
| null | CC BY-SA 2.5 | null | 2011-04-05T19:05:10.680 | 2011-04-05T19:05:10.680 | null | null | 3911 | null |
9220 | 1 | 10731 | null | 51 | 24611 | Assume you are given two objects whose exact locations are unknown, but are distributed according to normal distributions with known parameters (e.g. $a \sim N(m, s)$ and $b \sim N(v, t))$. We can assume these are both bivariate normals, such that the positions are described by a distribution over $(x,y)$ coordinates (i.e. $m$ and $v$ are vectors containing the expected $(x,y)$ coordinates for $a$ and $b$ respectively). We will also assume the objects are independent.
Does anyone know if the distribution of the squared Euclidean distance between these two objects is a known parametric distribution? Or how to derive the PDF / CDF for this function analytically?
| What is the distribution of the Euclidean distance between two normally distributed random variables? | CC BY-SA 3.0 | null | 2011-04-05T19:10:30.120 | 2015-12-05T18:45:08.673 | 2015-12-05T15:57:03.523 | 7290 | 1913 | [
"normal-distribution",
"distance-functions"
] |
9221 | 2 | null | 9218 | 2 | null | When you convert a variable to a factor variable using the `factor` function (without using special arguments) the original values are substituted with the codes 1, 2, 3 ..., and the original values are assigned to these codes as labels. See `?factor`.
Your 1 2 3 4 5 6 7 8 9 10 means that the first 10 values of datefield were the 10 smallest values of the datefield variable, and each of them was unique (only appearing once).
The 1 1 1 1 1 1 1 1 1 1 means that the first 10 values of year were 1951, and that 1951 was the lowest value of the year variable.
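A minimal sketch of this behaviour (the dates are made up, echoing the ones in the question):

```r
# factor() sorts the distinct values into levels and stores integer codes
x <- c("1984-07-01", "1984-04-01", "1984-04-01")
f <- factor(x, ordered = TRUE)
as.integer(f)  # codes 2 1 1: each value's position among the sorted levels
levels(f)      # "1984-04-01" "1984-07-01": the labels attached to the codes
```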
| null | CC BY-SA 2.5 | null | 2011-04-05T19:16:16.503 | 2011-04-05T19:16:16.503 | null | null | 3911 | null |
9222 | 2 | null | 9201 | 6 | null | There's a thread on the R-help list about this; see for example: [http://tolstoy.newcastle.edu.au/R/e8/help/09/12/9200.html](http://tolstoy.newcastle.edu.au/R/e8/help/09/12/9200.html)
The first suggestion there is to repeat the test a large number of times with different jittering and then combine the p-values to get an overall p-value, either by taking an average or a maximum. They also suggest that a straightforward permutation test could be used instead (of the two, that's what I'd prefer). See the question [Which permutation test implementation in R to use instead of t-tests (paired and non-paired)?](https://stats.stackexchange.com/q/6127/3601) for some examples of permutation tests.
Elsewhere in that thread, Greg Snow writes:
Adding random noise to data in order to avoid a warning is like removing the batteries from a smoke detector to silence it rather than investigating what is causing the alarm to go off.
(See [http://tolstoy.newcastle.edu.au/R/e8/help/09/12/9195.html](http://tolstoy.newcastle.edu.au/R/e8/help/09/12/9195.html) )
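For illustration, here is a bare-bones two-sample permutation test on heavily tied, made-up data (a sketch, not a drop-in replacement for the implementations discussed in the linked question):

```r
# Two-sided permutation test of a difference in means, on invented tied data
set.seed(1)
x <- c(1, 1, 2, 2, 3)
y <- c(1, 2, 2, 3, 3)
obs <- mean(x) - mean(y)
pooled <- c(x, y)
perm <- replicate(10000, {
  idx <- sample(length(pooled), length(x))  # random relabelling of the pooled data
  mean(pooled[idx]) - mean(pooled[-idx])
})
p <- mean(abs(perm) >= abs(obs))  # two-sided permutation p-value
p
```

Ties cause no trouble here because the test permutes the raw values rather than ranks.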
| null | CC BY-SA 2.5 | null | 2011-04-05T19:24:41.770 | 2011-04-06T14:41:20.167 | 2017-04-13T12:44:41.980 | -1 | 3601 | null |
9223 | 2 | null | 9217 | 2 | null | The recent census (27/March/2011) in the UK had "full and specific job title" as personal question #34. So the Office for National Statistics must have information on this.
| null | CC BY-SA 2.5 | null | 2011-04-05T19:32:52.573 | 2011-04-05T19:32:52.573 | null | null | 3911 | null |
9224 | 2 | null | 9201 | 3 | null | (disclaimer: I didn't check the code, my answer is just based on your description)
I have the feeling that what you want to do is a really bad idea. Wilcoxon is a resampling (or randomization) test for ranks. That is, it takes the rank of the values and compares these ranks to all possible permutations of the ranks (see e.g., [here](http://www2.statistics.com/resources/glossary/w/wmwutest.php)).
So, as you realized, ties are pretty bad as you don't get ranks out of them. However, adding random noise (jitter) to your data will transform all ranks, so that they have random ranks! That is, it distorts your data severely.
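A toy demonstration of this point (data invented):

```r
# Tied values share mid-ranks; jitter breaks the ties into arbitrary distinct ranks
set.seed(1)
x <- c(1, 1, 1, 2, 2)
rank(x)          # mid-ranks: 2.0 2.0 2.0 4.5 4.5
rank(jitter(x))  # some arbitrary permutation of the ranks 1..5
```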
Therefore: It is wrong to do so.
| null | CC BY-SA 2.5 | null | 2011-04-05T19:40:42.630 | 2011-04-05T19:40:42.630 | null | null | 442 | null |
9225 | 1 | null | null | 16 | 12108 | What does it mean for a study to be over-powered?
My impression is that it means that your sample sizes are so large that you have the power to detect minuscule effect sizes. These effect sizes are perhaps so small that they are more likely to result from slight biases in the sampling process than a (not necessarily direct) causal connection between the variables.
Is this the correct intuition? If so, I don't see what the big deal is, as long as the results are interpreted in that light and you manually check and see whether the estimated effect size is large enough to be "meaningful" or not.
Am I missing something? Is there a better recommendation as to what to do in this scenario?
| What does it mean for a study to be over-powered? | CC BY-SA 2.5 | null | 2011-04-05T19:49:48.360 | 2014-09-20T19:09:02.433 | 2011-04-06T03:33:40.073 | 183 | 3836 | [
"statistical-significance",
"sample-size",
"effect-size",
"statistical-power"
] |
9226 | 2 | null | 9225 | 2 | null | Everything you've said makes sense (although I don't know what "big deal" you're referring to), and I esp. like your point about effect sizes as opposed to statistical significance. One other consideration is that some studies require the allocation of scarce resources to obtain the participation of each case, and so one wouldn't want to overdo it.
| null | CC BY-SA 2.5 | null | 2011-04-05T19:56:19.610 | 2011-04-05T19:56:19.610 | null | null | 2669 | null |
9228 | 1 | null | null | 9 | 972 | In SVM (linear kernel) classification analyses of a data-set of gene expression (~400 variables/genes) for ~25 each of cases and controls, I find that the gene expression-based classifiers have very good performance characteristics. The cases and controls do not differ significantly for a number of categorical and continuous clinical/demographic variables (as per Fisher's exact or t tests), but they do differ significantly for age.
Is there a way to show that the classification analysis results are or are not influenced by age?
I am thinking of reducing the gene expression data to principal components, and doing a Spearman correlation analysis of the components against age.
Is this a reasonable approach? Alternatively, can I check for correlation between age and class-membership probability values obtained in the SVM analysis?
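A sketch of the first idea with simulated data (the expression matrix and ages below are invented; in the real analysis they would be the ~50 samples by ~400 genes data described above):

```r
# PCA on the expression matrix, then Spearman correlation of PC1 with age
set.seed(42)
expr <- matrix(rnorm(50 * 400), nrow = 50)  # 50 samples x 400 genes (simulated)
age  <- rnorm(50, mean = 60, sd = 10)       # simulated ages
pc   <- prcomp(expr, scale. = TRUE)
p <- cor.test(pc$x[, 1], age, method = "spearman")$p.value
p  # small p would suggest the leading expression component tracks age
```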
Thanks.
| Correlating continuous clinical variables and gene expression data | CC BY-SA 2.5 | null | 2011-04-05T20:34:38.033 | 2011-04-06T22:23:28.577 | 2011-04-05T20:56:34.987 | 4045 | 4045 | [
"correlation",
"classification",
"pca",
"continuous-data"
] |
9229 | 2 | null | 9217 | 1 | null | One of the problems of a national census is that it takes a long time to get all the data together so it may be a while until it appears but if you want census data try
[https://www.census.ac.uk](https://www.census.ac.uk)
| null | CC BY-SA 2.5 | null | 2011-04-05T20:58:43.643 | 2011-04-05T20:58:43.643 | null | null | 3597 | null |
9231 | 2 | null | 9201 | 2 | null | You've asked several people what you should do now. In my view, what you should do now is accept that the proper p-value here is 1.000. Your groups don't differ.
| null | CC BY-SA 2.5 | null | 2011-04-05T22:15:54.327 | 2011-04-05T22:15:54.327 | null | null | 686 | null |
9233 | 1 | null | null | 14 | 8218 | Here is some context. I am interested in determining how two environmental variables (temperature, nutrient levels) impact the mean value of a response variable over a 11 year period. Within each year, there is data from over 100k locations.
The goal is to determine whether, over the 11 year period, the mean value of the response variables has responded to changes in environmental variables (e.g. warmer temperature + more nutrients would = greater response).
Unfortunately, since the response is the mean value (without looking at the mean, just regular inter-annual variation will swamp the signal), the regression will be 11 data points (1 mean value per year), with 2 explanatory variables. To me, even a positive linear regression will be hard to consider meaningful given that the dataset is so small (it does not even meet the nominal 40 points/variable unless the relationship is super strong).
Am I right to make this assumption? Can anyone offer any other thoughts/perspectives that I may be missing?
PS: Some caveats: There is no way to get more data without waiting additional years. So the data that is available is what we really have to work with.
| Are short time series worth modelling? | CC BY-SA 3.0 | null | 2011-04-05T22:30:48.447 | 2017-06-01T10:28:46.947 | 2017-06-01T09:35:36.260 | 11887 | 1451 | [
"time-series",
"regression",
"sample-size",
"small-sample"
] |
9234 | 2 | null | 9233 | 3 | null | I would say that the validity of the test has less to do with the number of data points and more to do with the validity of the assumption that you have the correct model.
For example, the regression analysis that is used to generate a standard curve may be based on only 3 standards (low, med, and high) but the result is highly valid since there is strong evidence that the response is linear between the points.
On the other hand, even a regression with 1000s of data points will be flawed if the wrong model is applied to the data.
In the first case any variation between the model predictions and the actual data is due to random error. In the second case some of the variation between the model predictions and the actual data is due to bias from choosing the wrong model.
| null | CC BY-SA 3.0 | null | 2011-04-05T23:14:11.423 | 2017-06-01T10:28:46.947 | 2017-06-01T10:28:46.947 | 4048 | 4048 | null |
9236 | 2 | null | 9233 | 8 | null | The small number of data points limits what kinds of models you may fit on your data. However it does not necessarily mean that it would make no sense to start modelling. With few data you will only be able to detect associations if the effects are strong and the scatter is weak.
What kind of model suits your data is another question. You used the word 'regression' in the title. The model should to some extent reflect what you know about the phenomenon. This seems to be an ecological setting, so the previous year may be influential as well.
| null | CC BY-SA 2.5 | null | 2011-04-05T23:25:14.203 | 2011-04-05T23:25:14.203 | null | null | 3911 | null |
9237 | 1 | 9288 | null | 7 | 526 | I have several dependent variables that are measures of racial disproportionality; I've calculated them as:
% of events caused by racial minority group / % of events caused by racial majority group
I have a dependent variable for each racial minority group in my sample. I am running longitudinal Generalized Estimating Equations (GEE) on these models; however, I am somewhat stumped as to which family is appropriate for these dependent variables. The range of my ratios is truncated at 0, as it's not possible to have negative values in my DVs. This makes me question the validity of using a Gaussian family for my models.
The idea behind these variables is that a ratio greater than 1 indicates some level of greater burden of events that a given racial minority is bearing compared to the racial majority, and a ratio less than 1 indicates the opposite.
- What would be the most appropriate family to use for my GEE regressions?
EDIT:
I misspoke about the racial disproportionality measure I was using. The correct formula is:
% events by minority / % of total enrollment that is minority OVER
% events by non-minority / % of total enrollment that is non-minority
Because they are ratios, the number of observations with a value less than 1 is comparable to the number of observations greater than 1, with the lower bound being 0 and no upper bound. Looking at the histograms of my response variables, they definitely seem to fit a negative binomial distribution better than the normal. The QIC (GEE adjustment to AIC) confirms this suspicion. My questions now are:
- Can I trust this evidence to move forward with the negative binomial family?
- If so, how do I possibly interpret the exponentiated coefficients from the resulting models? They don't seem to be Incidence Rate Ratios, as one would interpret them to be from count variables...
| Distribution family for a ratio dependent variable in a generalized estimating equation | CC BY-SA 3.0 | null | 2011-04-05T23:50:43.807 | 2011-04-27T19:17:16.593 | 2011-04-27T19:17:16.593 | 3309 | 3309 | [
"r",
"regression",
"link-function",
"generalized-estimating-equations"
] |
9239 | 2 | null | 9225 | 5 | null | In medical research trials may be unethical if they recruit too many patients. For example if the goal is to decide which treatment is better it's not ethical any more to treat patients with the worse treatment after it was established to be inferior. Increasing the sample size would, of course, give you a more accurate estimate of the effect size, but you may have to stop well before the effects of factors like "slight biases in the sampling process" appear.
It may also be unethical to spend public money on research whose findings are already sufficiently confirmed.
| null | CC BY-SA 2.5 | null | 2011-04-06T00:22:28.443 | 2011-04-06T03:30:28.593 | 2011-04-06T03:30:28.593 | 183 | 3911 | null |
9240 | 1 | null | null | 18 | 3413 | There is a web service where I can request information about a random item.
For every request each item has an equal chance of being returned.
I can keep requesting items and record the number of duplicates and unique. How can I use this data to estimate the total number of items?
| Estimating population size from the frequency of sampled duplicates and uniques | CC BY-SA 2.5 | null | 2011-04-06T00:45:50.147 | 2019-12-08T02:30:41.280 | 2017-05-07T10:13:53.433 | 11887 | 4049 | [
"probability",
"population",
"coupon-collector-problem"
] |
9241 | 2 | null | 9240 | 3 | null | You can use [the capture-recapture method](http://en.wikipedia.org/wiki/Capture-recapture), also implemented as [the Rcapture R package](http://cran.r-project.org/web/packages/Rcapture/index.html).
---
Here is an example, coded in R. Let's assume that the web service has N=1000 items. We will make n=300 requests, drawing a random sample and renumbering the observed items from 1 to k, where k is the number of different items we saw.
```
N = 1000; population = 1:N # create a population of the integers from 1 to 1000
n = 300 # number of requests
set.seed(20110406)
observation = as.numeric(factor(sample(population, size=n,
replace=TRUE))) # a random sample from the population, renumbered
table(observation) # a table useful to see, not discussed
k = length(unique(observation)) # number of unique items seen
(t = table(table(observation)))
```
The result of the simulation is
```
1 2 3
234 27 4
```
thus among the 300 requests there were 4 items seen 3 times, 27 items seen twice, and 234 items seen only once.
Now estimate N from this sample:
```
require(Rcapture)
X = data.frame(t)
X[,1]=as.numeric(X[,1])
desc=descriptive(X, dfreq=TRUE, dtype="nbcap", t=300)
desc # useful to see, not discussed
plot(desc) # useful to see, not discussed
cp=closedp.0(X, dfreq=TRUE, dtype="nbcap", t=300, trace=TRUE)
cp
```
The result:
```
Number of captured units: 265
Abundance estimations and model fits:
abundance stderr deviance df AIC
M0** 265.0 0.0 2.297787e+39 298 2.297787e+39
Mh Chao 1262.7 232.5 7.840000e-01 9 5.984840e+02
Mh Poisson2** 265.0 0.0 2.977883e+38 297 2.977883e+38
Mh Darroch** 553.9 37.1 7.299900e+01 297 9.469900e+01
Mh Gamma3.5** 5644623606.6 375581044.0 5.821861e+05 297 5.822078e+05
** : The M0 model did not converge
** : The Mh Poisson2 model did not converge
** : The Mh Darroch model did not converge
** : The Mh Gamma3.5 model did not converge
Note: 9 eta parameters has been set to zero in the Mh Chao model
```
Thus only the Mh Chao model converged; it estimated $\hat{N}$=1262.7.
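As a rough cross-check (not part of the Rcapture output), the classical Chao1 lower-bound estimator can be computed by hand from the capture frequencies above:

```r
# Chao1 lower bound: observed richness + f1^2 / (2 * f2),
# using the frequencies from the simulated sample above
k  <- 265  # distinct items seen
f1 <- 234  # items seen exactly once
f2 <- 27   # items seen exactly twice
chao <- k + f1^2 / (2 * f2)
chao  # 1279, in the same ballpark as the Mh Chao model's estimate
```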
---
EDIT: To check the reliability of the above method I ran the above code on 10000 generated samples. The Mh Chao model converged every time. Here is the summary:
```
> round(quantile(Nhat, c(0, 0.025, 0.25, 0.50, 0.75, 0.975, 1)), 1)
0% 2.5% 25% 50% 75% 97.5% 100%
657.2 794.6 941.1 1034.0 1144.8 1445.2 2162.0
> mean(Nhat)
[1] 1055.855
> sd(Nhat)
[1] 166.8352
```
| null | CC BY-SA 3.0 | null | 2011-04-06T00:50:08.733 | 2011-04-09T14:40:52.987 | 2011-04-09T14:40:52.987 | 3911 | 3911 | null |
9242 | 1 | 9247 | null | 9 | 122041 | What does it mean if the F value in one-way ANOVA is less than 1?
Remember the F-ratio is
$$\frac{\sigma^2+\frac{r\times\sum_{i=1}^t \tau_i^2}{t-1}}{\sigma^2}$$
| What is the meaning of an F value less than 1 in one-way ANOVA? | CC BY-SA 2.5 | null | 2011-04-06T00:51:24.083 | 2020-10-22T22:12:18.853 | 2011-04-06T05:57:59.773 | 3903 | 3903 | [
"anova",
"experiment-design"
] |
9244 | 2 | null | 9233 | 4 | null | I've seen ecological datasets with fewer than 11 points, so I would say if you are very careful, you can draw some limited conclusions with your limited data.
You could also do a power analysis to determine how small an effect you could detect, given the parameters of your experimental design.
You also might not need to throw out the extra variation per year if you do some careful analysis
| null | CC BY-SA 2.5 | null | 2011-04-06T01:47:20.040 | 2011-04-06T01:47:20.040 | null | null | 2817 | null |
9246 | 2 | null | 9225 | 16 | null | I think that your interpretation is incorrect.
You say "These effect sizes are perhaps so small that they are more likely to result from slight biases in the sampling process than a (not necessarily direct) causal connection between the variables", which seems to imply that the P value in an 'over-powered' study is not the same sort of thing as a P value from a 'properly' powered study. That is wrong. In both cases the P value is the probability of obtaining data as extreme as those observed, or more extreme, if the null hypothesis is true.
If you prefer the Neyman-Pearson approach, the rate of false positive errors obtained from the 'over-powered' study is the same as that of a 'properly' powered study if the same alpha value is used for both.
The needed difference in interpretation is that there is a different relationship between statistical significance and scientific significance for over-powered studies. In effect, the over-powered study will give a large probability of obtaining significance even though the effect is, as you say, minuscule, and therefore of questionable importance.
As long as results from an 'over-powered' study are appropriately interpreted (and confidence intervals for the effect size help such an interpretation) there is no statistical problem with an 'over-powered' study. In that light, the only criteria by which a study can actually be over-powered are the ethical and resource allocation issues raised in other answers.
| null | CC BY-SA 2.5 | null | 2011-04-06T03:19:15.373 | 2011-04-06T03:19:15.373 | null | null | 1679 | null |
9247 | 2 | null | 9242 | 16 | null | The F ratio is a statistic.
When the null hypothesis of no group differences is true, then the expected value of the numerator and denominator of the F ratio will be equal. As a consequence, the expected value of the F ratio when the null hypothesis is true is also close to one (actually it's not exactly one, because of the properties of expected values of ratios).
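Concretely, a standard fact about the $F$ distribution (not stated in the original answer): under the null hypothesis, $F$ follows an $F(d_1, d_2)$ distribution whose mean is

$$E(F) = \frac{d_2}{d_2 - 2}, \qquad d_2 > 2,$$

which is slightly greater than one and approaches one as the denominator degrees of freedom $d_2$ grow.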
When the null hypothesis is false and there are group differences between the means, the expected value of the numerator will be larger than the denominator.
As such the expected value of the F ratio will be larger than under the null hypothesis, and will also more likely be larger than one.
However, the point is that both the numerator and denominator are random variables, and so is the F ratio. The F ratio is drawn from a distribution. If we assume the null hypothesis is true we get one distribution, and if we assume that it is false with various assumptions about effect size, sample size, and so forth we get another distribution. We then do a study and get an F value.
When the null hypothesis is false, it is still possible to get an F ratio less than one.
The larger the population effect size is (in combination with sample size), the more the F distribution will move to the right, and the less likely we will be to get a value less than one.
The following graphic extracted from the [G-Power3](http://www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3/) demonstrates the idea given various assumptions.
The red distribution is the distribution of F when H0 is true.
The blue distribution is the distribution of F when H0 is false given various assumptions.
Note that the blue distribution does include values less than one, yet they are very unlikely.

| null | CC BY-SA 3.0 | null | 2011-04-06T03:53:51.367 | 2011-07-01T01:45:31.517 | 2011-07-01T01:45:31.517 | 183 | 183 | null |
9248 | 5 | null | null | 0 | null | It can be thought of as an analogue of linear regression for binary dependent variables.
| null | CC BY-SA 3.0 | null | 2011-04-06T04:31:59.587 | 2011-08-02T15:32:49.080 | 2011-08-02T15:32:49.080 | 919 | 919 | null |
9249 | 4 | null | null | 0 | null | Logistic regression is a type of regression where the dependent variable is binary. | null | CC BY-SA 3.0 | null | 2011-04-06T04:31:59.587 | 2011-08-02T15:32:49.093 | 2011-08-02T15:32:49.093 | 919 | 2116 | null |
9250 | 5 | null | null | 0 | null | Although ANOVA stands for ANalysis Of VAriance, it is about comparing means of data from different groups. It is part of the general linear model which also includes linear regression and ANCOVA. In matrix algebra form, all three are:
$Y = XB + e$
Where $Y$ is a vector of values for the dependent variable (these must be numeric), $X$ is a matrix of values for the independent variables and $e$ is error.
The chief difference among ANOVA, ANCOVA and linear regression is that they arose in different fields. Also, ANOVA is usually restricted to cases where the independent variables are categorical, ANCOVA where some are numeric but most categorical. Regression (through dummy variables) can handle any type of independent variable.
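As a minimal sketch (toy numbers) of how one-way ANOVA is just this linear model with a dummy-coded factor, in R:

```r
# One-way ANOVA expressed as a linear model (invented data)
d <- data.frame(y = c(1, 2, 3, 6, 7, 8),
                g = factor(rep(c("a", "b"), each = 3)))
fit <- lm(y ~ g, data = d)  # dummy coding: level "a" is the baseline
anova(fit)                  # the one-way ANOVA table
coef(fit)                   # intercept = mean of "a"; gb = difference of group means
```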
| null | CC BY-SA 3.0 | null | 2011-04-06T04:34:48.617 | 2012-09-30T22:45:07.703 | 2012-09-30T22:45:07.703 | 686 | 919 | null |
9251 | 4 | null | null | 0 | null | ANOVA stands for ANalysis Of VAriance, a statistical model and set of procedures for comparing multiple group means. The independent variables in an ANOVA model are categorical, but an ANOVA table can be used to test continuous variables as well. | null | CC BY-SA 3.0 | null | 2011-04-06T04:34:48.617 | 2014-08-18T16:20:57.647 | 2014-08-18T16:20:57.647 | 7290 | 919 | null |
9252 | 2 | null | 9203 | 0 | null | Quick note, you may find useful discussion of why the formula for estimating the standard deviation for a sample uses $n-1$ as opposed to $n$.
- Paul Savory Why divide by (n-1) for sample standard deviation
- Graphpad Why use n-1 when calculating a standard deviation?
- Andrew Hardwick Why there is a Minus One in Standard Deviations
| null | CC BY-SA 2.5 | null | 2011-04-06T07:24:02.930 | 2011-04-06T07:24:02.930 | null | null | 183 | null |
9253 | 1 | 9256 | null | 3 | 2071 | For the $\nu$-SVM (for both classification and regression cases) the $\nu \in (0;1)$ should be selected.
The LIBSVM guide suggests using grid search to identify the optimal value of the $C$ parameter for $C$-SVM; it also recommends trying the following values $C = 2^{-5}, 2^{-3}, \dots, 2^{15}$.
So the question is, are there any recommendations for values of the $\nu$ parameter in case of $\nu$-SVMs?
| $\nu$-svm parameter selection | CC BY-SA 2.5 | null | 2011-04-06T07:34:56.343 | 2016-01-05T12:53:25.330 | null | null | 4051 | [
"machine-learning",
"svm"
] |
9254 | 2 | null | 423 | 153 | null | 
>
'So, uh, we did the green study again and got no link. It was probably a--' 'RESEARCH CONFLICTED ON GREEN JELLY BEAN/ACNE LINK; MORE STUDY RECOMMENDED!'
xkcd: [significant](http://xkcd.com/882/)
| null | CC BY-SA 2.5 | null | 2011-04-06T08:35:46.377 | 2011-04-06T08:35:46.377 | null | null | 442 | null |
9255 | 2 | null | 8881 | 1 | null | The lm() procedure in R handles the entire range of linear models, not just multiple regression. All you have to do is make sure your predictors are set up to be of the right type.
Binary is the special case of nominal where the number of levels is two.
Nominal variables must be set to mode factor. They can be coerced to factors from character variables by using factor(). Note that linear models use one of the levels as a baseline, so it effectively disappears. By default this will be the first in your list of levels. If you don't specify the order of the levels they will be put in alphabetic order. You can change the order using relevel().
For ordinal data you need them to be ordered factors. Use ordered() to coerce characters or factors to ordered factors.
For continuous predictors you want the predictor to be numeric (a double). Use as.numeric() or as.double() to coerce to this.
| null | CC BY-SA 3.0 | null | 2011-04-06T09:30:46.060 | 2016-07-24T11:34:16.580 | 2016-07-24T11:34:16.580 | 28500 | null | null |
9256 | 2 | null | 9253 | 5 | null | Rather than use a grid search, you can just optimise the hyper-parameters using standard numeric optimisation techniques (e.g. gradient descent). If you don't have estimates of the gradients, you can use the [Nelder-Mead simplex method](http://en.wikipedia.org/wiki/Nelder-Mead_simplex_method), which doesn't require gradient information and is vastly more efficient than grid-search methods. I would use the [logit](http://en.wikipedia.org/wiki/Logit) function to map the (0;1) range of $\nu$ onto $(-\infty;+\infty)$ to get an unconstrained optimisation problem.
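A sketch of that reparameterisation in R, where `cv_error` is a hypothetical stand-in for the actual cross-validation error of the $\nu$-SVM:

```r
# Unconstrained Nelder-Mead over theta = logit(nu); plogis maps back to (0, 1)
cv_error <- function(nu) (nu - 0.3)^2          # hypothetical stand-in objective
obj <- function(theta) cv_error(plogis(theta)) # evaluate on the nu scale
fit <- optim(qlogis(0.5), obj, method = "Nelder-Mead")
nu_hat <- plogis(fit$par)  # optimised nu, back on the (0, 1) scale
nu_hat
```

(R warns that one-dimensional Nelder-Mead can be unreliable; in a real tuning problem there would typically be several hyper-parameters.)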
If you really want to use grid search, then just spacing the evaluation points linearly in the range 0 - 1 should be fine.
| null | CC BY-SA 2.5 | null | 2011-04-06T10:06:32.467 | 2011-04-06T10:06:32.467 | null | null | 887 | null |
9257 | 2 | null | 9242 | 9 | null | Your question in the title is an interesting question that crossed my mind today too. I just want to add a correction. The F-ratio is :
$$\frac{MS_{treatment}}{MS_{residual}}=\frac{\frac{SS_{treatment}}{t-1}}{\frac{SS_{residual}}{t(r-1)}}$$
What you wrote is the
$$\frac{E(MS_{treatment})}{E(MS_{residual})}$$
While the first fraction can be less than 1, the second fraction cannot be less than 1. But that's not a problem since it's a quotient of expectations.
| null | CC BY-SA 2.5 | null | 2011-04-06T10:54:17.303 | 2011-04-06T10:54:17.303 | null | null | 3454 | null |
9258 | 2 | null | 9190 | 3 | null | I've also found good material at The Gallery of Data Visualization: The Best and Worst of Statistical Graphics, at
[http://www.datavis.ca/gallery/index.php](http://www.datavis.ca/gallery/index.php)
| null | CC BY-SA 2.5 | null | 2011-04-06T10:55:04.980 | 2011-04-06T10:55:04.980 | null | null | 2669 | null |
9259 | 1 | null | null | 10 | 1078 | I am running a GAM-based regression using the R package [gamlss](http://cran.r-project.org/web/packages/gamlss/index.html) and assuming a zero-inflated beta distribution of the data. I have only a single explanatory variable in my model, so it's basically: `mymodel = gamlss(response ~ input, family=BEZI)`.
The algorithm gives me the coefficient $k$ for the impact of the explanatory variable on the mean ($\mu$) and the associated p-value for $k(\text{input})=0$, something like:
```
Mu link function: logit
Mu Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -2.58051 0.03766 -68.521 0.000e+00
input -0.09134 0.01683 -5.428 6.118e-08
```
As you can see in the above example, the hypothesis of $k(\text{input})=0$ is rejected with high confidence.
I then run the null model: `null = gamlss(response ~ 1, family=BEZI)` and compare the likelihoods using a likelihood-ratio test:
```
p=1-pchisq(-2*(logLik(null)[1]-logLik(mymodel)[1]), df(mymodel)-df(null))
```
In a number of cases, I get $p>0.05$ even when the coefficients at input are reported to be highly significant (as above). I find this quite unusual -- at least it never happened in my experience with linear or logistic regression (in fact, this also never happened when I was using zero-adjusted gamma with gamlss).
My question is: can I still trust the dependence between response and input when this is the case?
| Significance of (GAM) regression coefficients when model likelihood is not significantly higher than null | CC BY-SA 3.0 | null | 2011-04-06T10:56:43.190 | 2017-03-01T12:47:36.970 | 2017-03-01T12:47:36.970 | 11887 | 6649 | [
"nonlinear-regression",
"gamlss"
] |
9260 | 1 | 9265 | null | 1 | 288 | Recent history suggests that one supplier fails to meet this new specification 20% of the time. Assume that the next 15 batches of this alloy are a random sample.
How can I find the expected number of shipments that do meet the new specifications,
and the standard deviation?
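Assuming the batches are independent, the number of conforming shipments can be modelled as binomial; a sketch of the two requested quantities (this outlines the standard approach, not necessarily the intended solution method):

```r
# X ~ Binomial(n = 15, p = 0.8), where p = P(a batch meets the specification)
n <- 15; p <- 0.8
c(mean = n * p, sd = sqrt(n * p * (1 - p)))  # E[X] = np, SD[X] = sqrt(np(1-p))
```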
| Expected number of shipments and its standard deviation | CC BY-SA 2.5 | null | 2011-04-06T11:13:26.087 | 2012-01-15T16:16:36.613 | 2012-01-15T16:16:36.613 | 919 | null | [
"probability",
"estimation",
"self-study",
"standard-deviation"
] |