| Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4044 | 1 | 4047 | null | 37 | 27205 | I'm looking for some good terminology to describe what I'm trying to do, to make it easier to look for resources.
So, say I have two clusters of points A and B, each associated to two values, X and Y, and I want to measure the "distance" between A and B - i.e. how likely is it that they were sampled from the same distribution (I can assume that the distributions are normal). For example, if X and Y are correlated in A but not in B, the distributions are different.
Intuitively, I would get the covariance matrix of A, and then look at how likely each point in B is to fit in there, and vice-versa (probably using something like the Mahalanobis distance).
But that is a bit "ad-hoc", and there is probably a more rigorous way of describing this (of course, in practice I have more than two datasets with more than two variables - I'm trying to identify which of my datasets are outliers).
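To make what I mean concrete, here is a rough R sketch of the "ad-hoc" idea (simulated data, so the numbers are made up):

```r
# Rough sketch: score each point of B against A's covariance structure
# via Mahalanobis distance, and vice versa, then compare the averages.
set.seed(1)
A <- matrix(rnorm(200), ncol = 2)            # cluster A: X and Y uncorrelated
x <- rnorm(100)
B <- cbind(x, x + rnorm(100, sd = 0.5))      # cluster B: X and Y correlated
d_B_in_A <- mahalanobis(B, colMeans(A), cov(A))
d_A_in_B <- mahalanobis(A, colMeans(B), cov(B))
c(mean(d_B_in_A), mean(d_A_in_B))            # compare to the dimension (2),
                                             # the expected value under a match
```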
Thanks!
| Measuring the "distance" between two multivariate distributions | CC BY-SA 2.5 | null | 2010-10-28T13:06:25.417 | 2018-07-12T20:26:01.040 | null | null | 1737 | [
"multivariate-analysis",
"terminology",
"distance-functions"
] |
4045 | 2 | null | 4043 | 8 | null | The [zoo](http://cran.r-project.org/package=zoo) package is very good at that (as is [xts](http://cran.r-project.org/package=xts) which extends it). The [zoo](http://cran.r-project.org/package=zoo) vignettes have e.g. this example:
```
library(zoo)
zr3 <- zooreg(rnorm(9), start=as.yearmon(2000), frequency=12)
zr3
aggregate(zr3, as.yearqtr, mean)
```
A (regular) series is created with monthly frequency, and then averaged by quarter. It works the very same way for `POSIXct` objects at much higher granularity; see the vignette. I suspect that the R-SIG-Finance list archives have plenty of related examples too.
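For instance, a quick sketch at minute-level `POSIXct` granularity, aggregated to hourly means with the same `aggregate()` idiom (flooring each index to the start of its hour):

```r
library(zoo)
# Minute-level series over two hours, aggregated to hourly means by
# flooring each POSIXct index to its hour boundary.
times <- as.POSIXct("2010-10-28 00:00", tz = "UTC") + (0:119) * 60
z <- zoo(rnorm(120), times)
hourly <- aggregate(z, index(z) - as.numeric(index(z)) %% 3600, mean)
hourly
```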
| null | CC BY-SA 2.5 | null | 2010-10-28T13:33:07.927 | 2010-10-28T13:33:07.927 | null | null | 334 | null |
4046 | 2 | null | 4044 | 19 | null | Hmm, the [Bhattacharyya distance](http://en.wikipedia.org/wiki/Bhattacharyya_distance) seems to be what I'm looking for, though the [Hellinger distance](http://en.wikipedia.org/wiki/Hellinger_distance) works too.
| null | CC BY-SA 2.5 | null | 2010-10-28T13:47:54.260 | 2010-10-28T13:47:54.260 | null | null | 1737 | null |
4047 | 2 | null | 4044 | 17 | null | There is also the [Kullback-Leibler](http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) divergence, which is related to the Hellinger Distance you mention above.
| null | CC BY-SA 2.5 | null | 2010-10-28T14:19:15.170 | 2010-10-28T14:27:09.980 | 2010-10-28T14:27:09.980 | 1390 | 1390 | null |
4049 | 2 | null | 3955 | 4 | null | It sounds like you are looking for an "online changepoint detection method." (That's a useful phrase for Googling.) Some useful recent (and accessible) papers are [Adams & MacKay](http://arxiv.org/PS_cache/arxiv/pdf/0710/0710.3742v1.pdf) (a Bayesian approach) and [Keogh et al.](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.23.6570&rep=rep1&type=pdf) You might be able to press the [surveillance](http://surveillance.r-forge.r-project.org/) package for R into service. Isolated large numbers of hits can be found using [statistical process control methods](http://en.wikipedia.org/wiki/Statistical_process_control).
| null | CC BY-SA 2.5 | null | 2010-10-28T15:38:11.823 | 2010-10-28T15:38:11.823 | null | null | 919 | null |
4050 | 2 | null | 4013 | 7 | null | Statistics is the deductive approach to induction. Consider the two main approaches to statistical inference: Frequentist and Bayesian.
Assume you are a Frequentist (in the style of Fisher, rather than Neyman for convenience). You wonder whether a parameter of substantive interest takes a particular value, so you construct a model, choose a statistic relating to the parameter, and perform a test. The p-value generated by your test indicates the probability of seeing a statistic as or more extreme than the statistic computed from the sample you have, assuming that your model is correct. You get a small enough p-value, so you reject the hypothesis that the parameter does take that value. Your reasoning is deductive: assuming the model is correct, either the parameter really does take the value of substantive interest but yours is an unlikely sample to see, or it does not in fact take that value.
Turning from hypothesis tests to confidence intervals: you have a 95% confidence interval for your parameter which does not contain the value of substantive interest. Your reasoning is again deductive: assuming the model is correct, either this is one of those rare intervals that will appear 1 in 20 times when the parameter really does have the value of substantive interest (because your sample is an unlikely one), or the parameter does not in fact have that value.
Now assume you are a Bayesian (in the style of Laplace rather than Gelman). Your model assumptions and calculations give you a (posterior) probability distribution over the parameter value. Most of the mass of this distribution is far from the value of substantive interest, so you conclude that the parameter probably does not have this value. Your reasoning is again deductive: assuming your model to be correct and if the prior distribution represented your beliefs about the parameter, then your beliefs about it in the light of the data are described by your posterior distribution which puts very little probability on that value. Since this distribution offers little support for the value of substantive interest, you might conclude that the parameter does not in fact have the value. (Or you might be content to state the probability it does).
In all three cases you get a logical disjunction to base your action on which is derived deductively/mathematically from assumptions. These assumptions are usually about a model of how the data is generated, but may also be prior beliefs about other quantities.
| null | CC BY-SA 2.5 | null | 2010-10-28T17:45:54.457 | 2010-10-28T17:45:54.457 | null | null | 1739 | null |
4051 | 2 | null | 1164 | 45 | null | So 'classical models' (whatever they are - I assume you mean something like simple models taught in textbooks and estimated by ML) fail on some, perhaps many, real world data sets.
If a model fails then there are two basic approaches to fixing it:
- Make fewer assumptions (less model)
- Make more assumptions (more model)
Robust statistics, quasi-likelihood, and GEE approaches take the first approach by changing the estimation strategy to one where the model does not hold for all data points (robust) or need not characterize all aspects of the data (QL and GEE).
The alternative is to try to build a model that explicitly models the source of contaminating data points, or the aspects of the original model that seem to be false, while keeping the estimation method the same as before.
Some intuitively prefer the former (it's particularly popular in economics), and some intuitively prefer the latter (it's particularly popular among Bayesians, who tend to be happier with more complex models, particularly once they realize they're going to have to use simulation tools for inference anyway).
Fat-tailed distributional assumptions, e.g. using the negative binomial rather than the Poisson, or the t rather than the normal, belong to the second strategy. Most things labelled 'robust statistics' belong to the first strategy.
As a practical matter, deriving estimators for the first strategy for realistically complex problems seems to be quite hard. Not that that's a reason for not doing so, but it is perhaps an explanation for why it isn't done very often.
| null | CC BY-SA 2.5 | null | 2010-10-28T18:14:53.187 | 2010-10-28T18:14:53.187 | null | null | 1739 | null |
4052 | 1 | 4056 | null | 19 | 4080 | Is the power of a logistic regression and a t-test equivalent? If so, they should be "data density equivalent" by which I mean that the same number of underlying observations yields the same power given a fixed alpha of .05. Consider two cases:
- [The parametric t-test]: 30 draws from a binomial observation are made and the resulting values are averaged. This is done 30 times for group A (which has a binomial Pr of .70 of occurring) and 30 times for group B (which has a binomial Pr of .75 of occurring). This yields 30 means per group that represent a summary of 1,800 draws from a binomial distribution. A 58df t-test is performed to compare the means.
- [The logistic regression]: A logistic regression is performed with a dummy coded slope representing group membership and each of the 1,800 draws.
My question has two parts:
- Given a set alpha of .05, will the power of these methodologies be the same or different? Why? How can I prove it?
- Is the answer to question 1 sensitive to the sample sizes going into the t-test, the sample size of each group in the t-test, the underlying binomial probabilities, or some other factor? If so, how can I know (without simulation) that the power is indeed different, and what sort of changes will produce what sort of changes in power? Alternatively, provide worked-out R code that solves the issue using simulation.
| How does the power of a logistic regression and a t-test compare? | CC BY-SA 2.5 | null | 2010-10-28T18:33:14.513 | 2010-11-02T19:19:49.510 | 2010-11-01T20:17:20.833 | 196 | 196 | [
"logistic",
"t-test",
"statistical-power"
] |
4053 | 1 | 4055 | null | 15 | 611 | I am analyzing social networks (not virtual) and I am observing the connections between people. If a person would choose another person to connect with randomly, the number of connections within a group of people would be distributed normally - at least according to the book I am currently reading.
How can we know the distribution is Gaussian (normal)? There are other distributions such as the Poisson, Rice, Rayleigh, etc. The problem with the Gaussian distribution in theory is that its values go from $-\infty$ to $+\infty$ (although the probabilities go toward zero) and the number of connections cannot be negative.
Does anyone know which distribution can be expected in case each person independently (randomly) picks another person to connect with?
| How can the number of connections be Gaussian if it cannot be negative? | CC BY-SA 3.0 | null | 2010-10-28T19:54:31.930 | 2013-08-02T16:18:03.997 | 2013-08-02T16:18:03.997 | 7290 | 315 | [
"distributions",
"networks",
"central-limit-theorem"
] |
4054 | 2 | null | 4053 | 1 | null | The answer is dependent on the assumptions that you are willing to make. A social network constantly evolves over time and hence is not a static entity. Therefore, you need to make some assumptions about how the network evolves over time.
The trivial answer under the stated conditions is: if the network size is $n$, then asymptotically (in the sense of 'as time goes to infinity')
$Prob(\mbox{No of connections for any individual} = n-1) =1$.
If a person selects another person at random to connect to then eventually everyone will be connected.
However, real life networks do not behave this way. People differ in several aspects.
- At any time a person has a fixed network size and the probability of another connection being made is a function of his/her network size (as people introduce other people etc).
- A person has his/her own intrinsic tendency to form a connection (as some are introverted/extroverted etc).
These probabilities change over time, context etc. I am not sure there is a straightforward answer unless we make some assumptions about the structure of the network (e.g., density of the network, how people behave etc).
| null | CC BY-SA 2.5 | null | 2010-10-28T20:18:31.910 | 2010-10-28T20:37:39.610 | 2010-10-28T20:37:39.610 | null | null | null |
4055 | 2 | null | 4053 | 6 | null | When there are $n$ people and the number of connections made by person $i, 1 \le i \le n,$ is $X_i$, then the total number of connections is $S_n = \sum_{i=1}^n{X_i} / 2$. Now if we take the $X_i$ to be random variables, assume they are independent and their variances are not "too unequal" as more and more people are added to the mix, then the [Lindeberg-Levy Central Limit Theorem](http://en.wikipedia.org/wiki/Central_limit_theorem) applies. It asserts that the cumulative distribution function of the standardized sum converges to the cdf of the normal distribution. That means roughly that a histogram of the sum will look more and more like a Gaussian (a "bell curve") as $n$ grows large.
Let's review what this does not say:
- It does not assert that the distribution of $S_n$ is ever exactly normal. It can't be, for the reasons you point out.
- It does not imply the expected number of connections converges. In fact, it must diverge (go to infinity). The standardization is a recentering and rescaling of the distribution; the amount of rescaling is growing without limit.
- It says nothing when the $X_i$ are not independent or when their variances change too much as $n$ grows. (However, there are generalizations of the CLT for "slightly" dependent series of variables.)
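A quick numerical illustration of the convergence (a sketch, using Poisson counts as stand-ins for the $X_i$, which are assumptions of this example rather than a claim about any particular network):

```r
# Sums of skewed, non-negative counts: after standardizing, the histogram
# approaches a bell curve even though each X_i is far from Gaussian.
set.seed(1)
n <- 500
sums <- replicate(5000, sum(rpois(n, lambda = 2)))
z <- (sums - mean(sums)) / sd(sums)
hist(z, breaks = 50, freq = FALSE)   # compare the histogram ...
curve(dnorm(x), add = TRUE)          # ... to the standard normal density
```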
| null | CC BY-SA 2.5 | null | 2010-10-28T20:23:08.333 | 2010-10-28T20:23:08.333 | null | null | 919 | null |
4056 | 2 | null | 4052 | 20 | null | If I have computed correctly, logistic regression asymptotically has the same power as the t-test. To see this, write down its log likelihood and compute the expectation of its Hessian at its global maximum (its negative estimates the variance-covariance matrix of the ML solution). Don't bother with the usual logistic parameterization: it's simpler just to parameterize it with the two probabilities in question. The details will depend on exactly how you test the significance of a logistic regression coefficient (there are several methods).
That these tests have similar powers should not be too surprising, because the chi-square theory for ML estimates is based on a normal approximation to the log likelihood, and the t-test is based on a normal approximation to the distributions of proportions. The crux of the matter is that both methods make the same estimates of the two proportions and both estimates have the same standard errors.
---
An actual analysis might be more convincing. Let's adopt some general terminology for the values in a given group (A or B):
- $p$ is the probability of a 1.
- $n$ is the size of each set of draws.
- $m$ is the number of sets of draws.
- $N = m n$ is the amount of data.
- $k_{ij}$ (equal to $0$ or $1$) is the value of the $j^\text{th}$ result in the $i^\text{th}$ set of draws.
- $k_i$ is the total number of ones in the $i^\text{th}$ set of draws.
- $k$ is the total number of ones.
Logistic regression is essentially the ML estimator of $p$. Its logarithm is given by
$$\log(\mathbb{L}) = k \log(p) + (N-k) \log(1-p).$$
Its derivatives with respect to the parameter $p$ are
$$\frac{\partial \log(\mathbb{L})}{ \partial p} = \frac{k}{p} - \frac{N-k}{1-p} \text{ and}$$
$$-\frac{\partial^2 \log(\mathbb{L})}{\partial p^2} = \frac{k}{p^2} + \frac{N-k}{(1-p)^2}.$$
Setting the first to zero yields the ML estimate ${\hat{p} = k/N}$ and plugging that into the reciprocal of the second expression yields the variance $\hat{p}(1 - \hat{p})/N$, which is the square of the standard error.
The t statistic will be obtained from estimators based on the data grouped by sets of draws; namely, as the difference of the means (one from group A and the other from group B) divided by the standard error of that difference, which is obtained from the standard deviations of the means. Let's look at the mean and standard deviation for a given group, then. The mean equals $k/N$, which is identical to the ML estimator $\hat{p}$. The standard deviation in question is the standard deviation of the draw means; that is, it is the standard deviation of the set of $k_i/n$. Here is the crux of the matter, so let's explore some possibilities.
- Suppose the data aren't grouped into draws at all: that is, $n = 1$ and $m = N$. The $k_{i}$ are the draw means. Their sample variance equals $N/(N-1)$ times $\hat{p}(1 - \hat{p})$. From this it follows that the standard error is identical to the ML standard error apart from a factor of $\sqrt{N/(N-1)}$, which is essentially $1$ when $N = 1800$. Therefore--apart from this tiny difference--any tests based on logistic regression will be the same as a t-test and we will achieve essentially the same power.
- When the data are grouped, the (true) variance of the $k_i/n$ equals $p(1-p)/n$ because the statistics $k_i$ represent the sum of $n$ Bernoulli($p$) variables, each with variance $p(1-p)$. Therefore the expected standard error of the mean of $m$ of these values is the square root of $p(1-p)/n/m = p(1-p)/N$, just as before.
Number 2 indicates the power of the test should not vary appreciably with how the draws are apportioned (that is, with how $m$ and $n$ are varied subject to $m n = N$), apart perhaps from a fairly small effect from the adjustment in the sample variance (unless you were so foolish as to use extremely few sets of draws within each group).
Limited simulations to compare $p = 0.70$ to $p = 0.74$ (with 10,000 iterations apiece) involving $m = 900, n = 1$ (essentially logistic regression); $m = n = 30$; and $m = 2, n = 450$ (maximizing the sample variance adjustment) bear this out: the power (at $\alpha = 0.05$, one-sided) in the first two cases is 0.59 whereas in the third, where the adjustment factor makes a material change (there are now just two degrees of freedom instead of 1798 or 58), it drops to 0.36. Another test comparing $p = 0.50$ to $p = 0.52$ gives powers of 0.22, 0.21, and 0.15, respectively: again, we observe only a slight drop from no grouping into draws (=logistic regression) to grouping into 30 groups and a substantial drop down to just two groups.
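One possible implementation of such a simulation (a sketch, with the iteration count reduced for speed; the guard against constant samples is an implementation detail, not part of the analysis above):

```r
# Power of the two-group comparison when the N = m*n draws per group are
# partitioned into m sets of n draws: simulate, then t-test the per-set
# proportions.
power_sim <- function(pA, pB, m, n, iters = 2000, alpha = 0.05) {
  mean(replicate(iters, {
    a <- rbinom(m, n, pA) / n            # per-set proportions, group A
    b <- rbinom(m, n, pB) / n            # per-set proportions, group B
    if (var(a) + var(b) == 0) FALSE      # guard: t.test fails on constant data
    else t.test(b, a, alternative = "greater")$p.value < alpha
  }))
}
set.seed(1)
p30 <- power_sim(0.70, 0.74, m = 30, n = 30)   # 30 sets of 30 draws per group
p2  <- power_sim(0.70, 0.74, m = 2,  n = 450)  # only two sets per group
c(p30, p2)    # power drops substantially with only two sets
```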
The morals of this analysis are:
- You don't lose much when you partition your $N$ data values into a large number $m$ of relatively small groups of "draws".
- You can lose appreciable power using small numbers of groups ($m$ is small, $n$--the amount of data per group--is large).
- You're best off not grouping your $N$ data values into "draws" at all. Just analyze them as-is (using any reasonable test, including logistic regression and t-testing).
| null | CC BY-SA 2.5 | null | 2010-10-28T20:59:00.237 | 2010-10-29T16:45:45.883 | 2010-10-29T16:45:45.883 | 919 | 919 | null |
4057 | 2 | null | 3390 | 7 | null | I have never seen C-F used for empirical estimates. Why bother? You have outlined a good set of reasons why not. (I don't think C-F "wins" even in case 1 due to the instability of estimates of higher-order cumulants and their lack of resistance.) It is intended for theoretical approximations. Johnson & Kotz, in their [encyclopedic work on distributions](http://rads.stackoverflow.com/amzn/click/0471584940), routinely use C-F expansions to develop approximations to distribution functions. Such approximations were useful to supplement tables (or even create them) before powerful statistical software was widespread. They can still be useful on platforms where appropriate code is not available such as quick-and-dirty spreadsheet calculations.
| null | CC BY-SA 2.5 | null | 2010-10-28T22:34:50.633 | 2010-10-28T22:34:50.633 | null | null | 919 | null |
4058 | 1 | null | null | 15 | 14781 | One of the assumption of logistic regression is the linearity in the logit. So once I got my model up and running I test for nonlinearity using Box-Tidwell test. One of my continuous predictors (X) has tested positive for nonlinearity. What am I suppose to do next?
As this is a violation of the assumptions, shall I get rid of the predictor (X), include a nonlinear transformation (X*X), or transform the variable into a categorical one?
If you have a reference could you please point me to that too?
| Testing nonlinearity in logistic regression (or other forms of regression) | CC BY-SA 4.0 | null | 2010-10-29T01:31:29.823 | 2019-05-03T12:38:54.653 | 2019-05-03T12:38:54.653 | 686 | 10229 | [
"regression",
"logistic",
"references",
"assumptions",
"regression-strategies"
] |
4060 | 2 | null | 4058 | 10 | null | I would suggest to use restricted cubic splines (`rcs` in R, see the [Hmisc](http://cran.r-project.org/web/packages/Hmisc/index.html) and [Design](http://cran.r-project.org/web/packages/Design/index.html) packages for examples of use), instead of adding power of $X$ in your model. This approach is the one that is recommended by Frank Harrell, for instance, and you will find a nice illustration in his handouts (§2.5 and chap. 9) on Regression Modeling Strategies (see the [companion website](http://biostat.mc.vanderbilt.edu/twiki/bin/view/Main/RmS)).
You can compare the results with your Box-Tidwell test by using the `boxTidwell()` in the [car](http://cran.r-project.org/web/packages/car/index.html) package.
Transforming continuous predictors into categorical ones is generally not a good idea, see e.g. [Problems Caused by Categorizing Continuous Variables](http://biostat.mc.vanderbilt.edu/wiki/Main/CatContinuous).
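A base-R sketch of the idea (simulated data; this uses `splines::ns`, which fits natural, i.e. restricted, cubic splines, rather than `rcs` itself, but `rcs` in the Design/Hmisc framework plays the same role and adds direct tests of the nonlinear terms):

```r
library(splines)
set.seed(1)
x <- runif(500, 0, 10)
y <- rbinom(500, 1, plogis(-2 + 0.8 * x - 0.08 * x^2))  # nonlinear in the logit
fit_lin    <- glm(y ~ x, family = binomial)             # linear-in-logit model
fit_spline <- glm(y ~ ns(x, df = 3), family = binomial) # restricted cubic spline
anova(fit_lin, fit_spline, test = "LRT")  # likelihood-ratio test of nonlinearity
```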
| null | CC BY-SA 2.5 | null | 2010-10-29T07:50:26.763 | 2010-10-29T08:09:39.157 | 2010-10-29T08:09:39.157 | 930 | 930 | null |
4061 | 2 | null | 4058 | 6 | null | It may be appropriate to include a nonlinear transformation of x, but probably not simply x × x, i.e x2. I believe you may find this a useful reference in determining which transformation to use:
G. E. P. Box and Paul W. Tidwell (1962). Transformation of the Independent Variables. Technometrics Volume 4 Number 4, pages 531-550. [http://www.jstor.org/stable/1266288](http://www.jstor.org/stable/1266288)
Some consider the Box-Tidwell family of transformations to be more general than is often appropriate for interpretability and parsimony. Patrick Royston and Doug Altman introduced the term fractional polynomials for Box-Tidwell transformations with simple rational powers in an influential 1994 paper:
P. Royston and D. G. Altman (1994). Regression using fractional polynomials of continuous covariates: parsimonious parametric modeling. Applied Statistics Volume 43: pages 429–467. [http://www.jstor.org/stable/2986270](http://www.jstor.org/stable/2986270)
Patrick Royston in particular has continued to work and publish both papers and software on this, culminating in a book with Willi Sauerbrei:
P. Royston and W. Sauerbrei (2008). Multivariable Model-building: A Pragmatic Approach to Regression Analysis Based on Fractional Polynomials for Modelling Continuous Variables. Chichester, UK: Wiley. ISBN 978-0-470-02842-1
| null | CC BY-SA 2.5 | null | 2010-10-29T08:37:57.233 | 2010-10-29T08:37:57.233 | null | null | 449 | null |
4062 | 1 | 160853 | null | 9 | 17282 | I'm referring to something like this:

suggested dataset for showing a solutions:
```
data(mtcars)
plot(hclust(dist(mtcars)))
```
| How to plot a fan (Polar) Dendrogram in R? | CC BY-SA 2.5 | null | 2010-10-29T08:56:14.133 | 2015-07-10T14:05:07.740 | 2010-10-29T12:24:53.710 | 449 | 253 | [
"r",
"data-visualization",
"dendrogram"
] |
4063 | 2 | null | 4058 | 6 | null | Don't forget to check for interactions between X and other independent variables. Leaving interactions unmodeled can make X look like it has a non-linear effect when it simply has a non-additive one.
| null | CC BY-SA 2.5 | null | 2010-10-29T09:02:47.340 | 2010-10-29T09:02:47.340 | null | null | 1739 | null |
4064 | 2 | null | 4062 | 10 | null | In phylogenetics, this is a fan phylogram, so you can convert this to `phylo` and use `ape`:
```
library(ape)
library(cluster)
data(mtcars)
plot(as.phylo(hclust(dist(mtcars))),type="fan")
```
Result:

| null | CC BY-SA 2.5 | null | 2010-10-29T09:43:52.803 | 2010-10-29T09:43:52.803 | null | null | null | null |
4065 | 1 | 4178 | null | 8 | 665 | Hello fellow number crunchers
I want to generate n random scores (together with a class label) as if they had been produced by a binary classification model. In detail, the following properties are required:
- every score is between 0 and 1
- every score is associated with a binary label with values "0" or "1" (latter is positive class)
- the overall precision of the scores should be e.g. 0.1 (<- parameter of the generator)
- the ratio of scores with label "1" should be higher than overall precision in the top-section and lower in the bottom section (<- the "model-quality" should also be a parameter of the generator)
- the scores should be in such a way, that a resulting roc curve is smooth (and not e.g. that a bunch of scores with label "1" are at the top and the rest of the scores with label "1" is at the bottom of the list).
Does anyone have an idea how to approach this? Maybe via generating an ROC curve and then generating the points from that curve? Thanks in advance!
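To illustrate the direction I'm imagining, here is a rough sketch (the Beta parameterization is just my guess at a way to get smooth ROC curves; the function and parameter names are made up):

```r
# Draw label-"1" and label-"0" scores from overlapping Beta distributions;
# "quality" > 1 separates the classes and yields a smooth ROC curve.
gen_scores <- function(n, precision = 0.1, quality = 2) {
  n_pos  <- round(n * precision)             # overall fraction of label "1"
  scores <- c(rbeta(n_pos, quality, 1),      # positives skewed toward 1
              rbeta(n - n_pos, 1, quality))  # negatives skewed toward 0
  data.frame(score = scores, label = rep(c(1, 0), c(n_pos, n - n_pos)))
}
set.seed(1)
d <- gen_scores(1000, precision = 0.1, quality = 2)
```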
| Random generation of scores similar to those of a classification model | CC BY-SA 2.5 | null | 2010-10-29T12:39:52.583 | 2012-04-06T15:09:14.477 | 2010-10-29T16:09:07.507 | null | 264 | [
"machine-learning",
"classification",
"roc",
"random-generation"
] |
4067 | 2 | null | 4040 | 9 | null | Another example using base packages and Tal's example data:
```
DataCov <- do.call( rbind, lapply( split(xx, xx$group),
function(x) data.frame(group=x$group[1], mCov=cov(x$a, x$b)) ) )
```
| null | CC BY-SA 2.5 | null | 2010-10-29T15:23:57.553 | 2010-10-29T15:23:57.553 | null | null | 1657 | null |
4068 | 1 | 4069 | null | 8 | 4225 | I was horrified to find recently that Matlab returns $0$ for the sample variance of a scalar input:
```
>> var(randn(1),0) %the '0' here tells var to give sample variance
ans =
0
>> var(randn(1),1) %the '1' here tells var to give population variance
ans =
0
```
Somehow, the sample variance is not dividing by $0 = n-1$ in this case. R returns a NaN for a scalar:
```
> var(rnorm(1,1))
[1] NA
```
What do you think is a sensible way to define the ~~population~~ sample variance for a scalar? What consequences might there be for returning a zero instead of a NaN?
edit: from the help for Matlab's `var`:
```
VAR normalizes Y by N-1 if N>1, where N is the sample size. This is
an unbiased estimator of the variance of the population from which X is
drawn, as long as X consists of independent, identically distributed
samples. For N=1, Y is normalized by N.
Y = VAR(X,1) normalizes by N and produces the second moment of the
sample about its mean. VAR(X,0) is the same as VAR(X).
```
A cryptic comment in the M-code for `var` states:
```
if w == 0 && n > 1
% The unbiased estimator: divide by (n-1). Can't do this
% when n == 0 or 1.
denom = n - 1;
else
% The biased estimator: divide by n.
denom = n; % n==0 => return NaNs, n==1 => return zeros
end
```
i.e. they explicitly choose not to return a `NaN` even when the user requests a sample variance on a scalar. My question is why they should choose to do this, not how.
edit: I see that I had erroneously asked about how one should define the population variance of a scalar (see strike through line above). This probably caused a lot of confusion.
| How should one define the sample variance for scalar input? | CC BY-SA 2.5 | null | 2010-10-29T17:45:09.530 | 2018-08-27T08:17:18.387 | 2018-08-27T08:17:18.387 | 11887 | 795 | [
"r",
"variance",
"matlab"
] |
4069 | 2 | null | 4068 | 5 | null | Scalars can't 'have' a population variance although they can be single samples from population that has a (population) variance. If you want to estimate that then you need at least: more than one data point in the sample, another sample from the same distribution, or some prior information about the population variance by way of a model.
BTW, R has returned missing (NA), not NaN:
```
is.nan(var(rnorm(1,1)))
[1] FALSE
```
| null | CC BY-SA 2.5 | null | 2010-10-29T18:14:18.963 | 2010-10-29T18:14:18.963 | null | null | 1739 | null |
4070 | 2 | null | 4068 | 3 | null | I am sure people in this forum will have better answers, here is what I think:
I think R's answer is logical. The random variable has a population variance, but it turns out that with 1 sample you don't have enough degrees of freedom to estimate the sample variance, i.e., you are trying to extract information that is NOT there.
Regarding Matlab's answer, I don't know how to justify 0, except that it is from the numerator.
Consequences can be bizarre. But I can't think of anything else related to the estimation.
| null | CC BY-SA 2.5 | null | 2010-10-29T18:15:06.807 | 2010-10-30T22:29:28.380 | 2010-10-30T22:29:28.380 | 1307 | 1307 | null |
4071 | 2 | null | 4068 | 1 | null | I think Matlab is using the following logic for a scalar (analogous to how we define population variance) to avoid having to deal with NA and NAN.
$Var(x) = \frac{(x - \bar{x})^2}{1} = 0$
The above follows as for a scalar: $\bar{x} = x$.
Their definition is probably a programming convention that may perhaps make some aspect of coding easier.
| null | CC BY-SA 2.5 | null | 2010-10-29T18:34:27.813 | 2010-10-29T18:34:27.813 | null | null | null | null |
4072 | 1 | 4074 | null | 3 | 2283 | I'd like to make an assertion about whether individuals in my dataset exceed legal standards for commitment, which is largely determined by estimated risk of recidivism. I have estimated a logit model predicting recidivism. Here is sample code written trying to determine what percentage of individuals had predicted recidivism of at least 75% with 75% confidence:
```
.logit sexrecS [INDEPENDENT VARIABLES]
.predict sr1
.predict stdsr1, stdp
.gen byte sr1C1R1=0
.replace sr1C1R1=1 if 1/(1 + exp(ln((1-sr1)/sr1) - 0.67448975*stdsr1)) > .75
.sum sr1C1R1 if sr1!=. & sexrecS!=.
```
I later replace 0.67448975 with 1.281551566 to reflect a higher legal standard of proof, but the code generates more, not fewer, hits, so I know I've done something wrong.
| How does one calculate confidence intervals on predictions generated by logit in Stata? | CC BY-SA 2.5 | null | 2010-10-29T19:24:27.813 | 2010-10-29T20:44:13.563 | 2010-10-29T19:53:44.330 | 919 | null | [
"confidence-interval",
"stata",
"logit"
] |
4073 | 2 | null | 3907 | 9 | null | The starting time of the study is immaterial: it's just an origin for the clock. What you want to consider are the states in which the subjects can be found and the ages at which they transition from one to another. In this situation a minimum set of states would be
- [Born]: "Born with gene." This always happens at age 0, of course.
- [Enrolled]: "Enrolled in study."
- [Endpoint]: "Cardiovascular event identified."
- Death.
(This framework will allow multiple "endpoint" states to be modeled.)
The multistate analysis supposes there is a transition probability from some of these states to others. The relevant ones would be
- [Born] --> Death. These account for people who never enrolled.
- [Born] --> [Endpoint]. Are you considering these people? Are they even allowed into the study?
- [Born] --> [Enrolled]. These are all the people selected for the study (who haven't died and don't already exhibit the cardiovascular disease).
- [Enrolled] --> [Endpoint]. These are people in the study diagnosed with a cardiovascular disease.
- [Enrolled] --> Death. These people died in the study without a diagnosis of cardiovascular disease.
The [Nelson-Aalen estimator](https://en.wikipedia.org/wiki/Nelson%E2%80%93Aalen_estimator) can be generalized to estimate the rates of these transitions. It's a simple estimator, summing the ratios of events occurring to the numbers of people at risk for them to occur. The conclusion of the recent TAS article [Two Pitfalls in Survival Analyses of Time-Dependent Exposure](https://www.tandfonline.com/doi/abs/10.1198/tast.2010.08259) is that if you get your multistate model wrong, you will miscount the number of people at risk in various states and that will bias the results. Its message is clear: get the multistate model right. If the study truly is prospective--that is, if you identify people with the gene at birth and follow them--then there is no question about the right model. Similarly, if enrollment in the study is independent of the presence of the gene, there will be no bias. Otherwise, this framework calls out for incorporating the study selection probabilities into the model and shows how to account for deaths and prior disease before enrollment was possible.
This paper also illustrates a nice tool for analyzing these subtleties: the [Lexis Diagram](http://www.jlund.dk/files/master_thesis/lexissjs.ps.gz). (Look at the figures in the end of this rather technical paper.) I believe these diagrams can be produced with the [epi](http://cran.r-project.org/web/packages/Epi/index.html) package in R. You might find them helpful for having discussions with your colleagues about the appropriate model to adopt.
ASA members and people with university library privileges probably already have online access to this article: it's worth reading.
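As an aside, the Nelson-Aalen estimator mentioned above is simple enough to compute directly: it accumulates, at each event time, the number of events divided by the number at risk. A minimal Python sketch with invented data (illustrative only, not a full multistate implementation):

```python
def nelson_aalen(event_times, at_risk, events):
    """Cumulative hazard H(t) = sum over event times <= t of d_i / n_i."""
    cum_hazard = 0.0
    estimate = []
    for t, n, d in zip(event_times, at_risk, events):
        cum_hazard += d / n  # d_i events among n_i subjects at risk at time t
        estimate.append((t, cum_hazard))
    return estimate

# Example: 5 subjects, one event at t=1 (5 at risk), one at t=2 (4 at risk)
print(nelson_aalen([1, 2, 3], [5, 4, 3], [1, 1, 0]))
```

The same summation applies per transition type in a multistate model, which is exactly why miscounting the risk sets biases the results.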
| null | CC BY-SA 4.0 | null | 2010-10-29T19:38:10.120 | 2023-03-03T10:44:07.670 | 2023-03-03T10:44:07.670 | 362671 | 919 | null |
4074 | 2 | null | 4072 | 3 | null | Don't try to do too much at once. Note, too:
- 'Predict' without options computes the probabilities. I don't think you want those for this calculation. Use 'predict, xb' to obtain the linear fits.
- Subtract a multiple of the standard error of prediction (obtained via 'predict, stdp') from the linear prediction.
- Now transform the limit with a logistic transformation to convert it to a probability.
- Compare that to .75 (or whatever).
At each step of the way summarize (and even graph) the results to make sure they are behaving as you expect.
It's actually simpler, if not quite as clear, to replace the last two steps by a direct comparison of the limit to the logit of 0.75, defined as ln(.75) - ln(.25) = 1.099.
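The equivalence of the last two steps can be checked numerically; a quick Python sketch (the answer itself uses Stata, so this is only an illustration of the arithmetic):

```python
from math import log, exp

def logit(p):
    """Log-odds of a probability."""
    return log(p) - log(1 - p)

def inv_logit(x):
    """Logistic transform back to a probability."""
    return 1 / (1 + exp(-x))

# Comparing a linear predictor to logit(0.75) is the same as comparing
# the back-transformed probability to 0.75:
print(round(logit(0.75), 3))        # 1.099, as in the answer
print(round(inv_logit(1.099), 2))   # 0.75
```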
| null | CC BY-SA 2.5 | null | 2010-10-29T19:58:28.380 | 2010-10-29T20:44:13.563 | 2010-10-29T20:44:13.563 | 919 | 919 | null |
4075 | 1 | 4079 | null | 15 | 30094 | I have two populations, one with N = 38,704 (number of observations) and the other with N = 1,313,662. These data sets have ~25 variables, all continuous. I took the mean of each variable in each data set and computed the test statistic using the formula
t=mean difference/std error
The problem is the degrees of freedom. By the formula df = N1 + N2 - 2 we'll have far more degrees of freedom than any table can handle. Any suggestions on this? How do I check the t statistic here? I know that the t-test is meant for samples, but what happens when we apply it to very large samples?
| How to perform t-test with huge samples? | CC BY-SA 3.0 | null | 2010-10-30T07:55:10.057 | 2020-05-01T11:36:01.170 | 2018-02-01T14:55:15.570 | 113777 | 1763 | [
"t-test"
] |
4076 | 2 | null | 4075 | 10 | null | The $t$ distribution tends to the $z$ (Gaussian) distribution when $n$ is large (in fact, when $n>30$, they are almost identical, see the picture provided by @onestop). In your case, I would say that $n$ is VERY large, so that you can just use a $z$-test. As a consequence of the sample size, any VERY small differences will be declared significant. So, it is worth asking yourself if these tests (with the full data set) are really interesting.
Just to be sure, as your data set includes 25 variables, you are making 25 tests? If this is the case, you probably need to correct for multiple comparisons so as not to inflate the type I error rate (see related thread on this site).
BTW, the R software would give you the p-values you are looking for, no need to rely on tables:
```
> x1 <- rnorm(n=38704)
> x2 <- rnorm(n=1313662, mean=.1)
> t.test(x1, x2, var.equal=TRUE)
Two Sample t-test
data: x1 and x2
t = -17.9156, df = 1352364, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-0.1024183 -0.0822190
sample estimates:
mean of x mean of y
0.007137404 0.099456039
```
| null | CC BY-SA 2.5 | null | 2010-10-30T08:40:46.110 | 2010-10-30T13:09:02.090 | 2010-10-30T13:09:02.090 | 930 | 930 | null |
4077 | 2 | null | 4075 | 14 | null | Student's t-distribution becomes closer and closer to the standard normal distribution as the degrees of freedom get larger. With 1313662 + 38704 – 2 = 1352364 degrees of freedom, the t-distribution will be indistinguishable from the standard normal distribution, as can be seen in the picture below (unless perhaps you're in the very extreme tails and you're interested in distinguishing absolutely tiny p-values from even tinier ones). So you can use the table for the standard normal distribution instead of the table for the t-distribution.

| null | CC BY-SA 2.5 | null | 2010-10-30T08:41:36.613 | 2010-10-30T13:09:47.210 | 2010-10-30T13:09:47.210 | 930 | 449 | null |
4078 | 2 | null | 1562 | 7 | null | There's nothing wrong with the existing answers, but I suspect that you're looking for a causal sense of dependence rather than an associational one, that is: whether A causes B rather than whether B is more predictable when you know A. The chi^2 test is working with the second sense.
Even in the simplest case of the first kind of dependence you would ideally experimentally manipulate B and observe the effect on A and vice versa. Judea Pearl points out that this is the difference between the ordinary sense of conditional probability
P("I observe that A has value a" | "I observe that B has value b")
and a quite different thing that we might slightly misleadingly write as
P("I observe that A has value a" | "I fix B to have value b")
These need not, of course, be the same number.
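This can be made concrete with a toy structural model in which a confounder C drives both A and B: conditioning on an *observation* of B changes beliefs about A, while *fixing* B by intervention does not. A small Python enumeration (the numbers and the model are invented for illustration, not from Pearl's text):

```python
# Structural model: C ~ Bernoulli(0.5); A = C; B = C with prob 0.9, else 1-C.
p_c1 = 0.5
p_b1_given_c = {1: 0.9, 0: 0.1}  # P(B=1 | C=c)

# Observational: P(A=1 | B=1) = P(C=1 | B=1), by Bayes' rule
p_b1 = p_c1 * p_b1_given_c[1] + (1 - p_c1) * p_b1_given_c[0]
p_a1_given_obs_b1 = p_c1 * p_b1_given_c[1] / p_b1

# Interventional: do(B=1) cuts the C -> B arrow, leaving A untouched
p_a1_given_do_b1 = p_c1

print(p_a1_given_obs_b1)  # 0.9
print(p_a1_given_do_b1)   # 0.5
```

Observing B = 1 makes A = 1 much more likely (0.9), but forcing B to 1 leaves A at its marginal probability (0.5) — the two conditional probabilities are genuinely different numbers.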
| null | CC BY-SA 2.5 | null | 2010-10-30T14:56:29.207 | 2010-10-30T14:56:29.207 | null | null | 1739 | null |
4079 | 2 | null | 4075 | 27 | null | chl already mentioned the trap of multiple comparisons when conducting 25 tests simultaneously with the same data set. An easy way to handle that is to adjust the p-value threshold by dividing it by the number of tests (in this case 25). A more precise formula (the Šidák correction) is: adjusted threshold = 1 - (1 - α)^(1/n). However, the two formulas yield almost the same adjusted threshold.
There is another major issue with your hypothesis testing exercise. You will most certainly run into a Type I error (false positive) whereby you will uncover some really trivial differences that are extremely significant at the 99.9999% level. This is because when you deal with a sample of such a large size (n = 1,313,662), you will get a standard error that is very close to 0. That's because the square root of 1,313,662 = 1,146. So, you will divide the standard deviation by 1,146. In short, you will capture minute differences that may be completely immaterial.
I would suggest you move away from this hypothesis testing framework and instead conduct an Effect Size type analysis. Within this framework the measure of statistical distance is the standard deviation. Unlike the standard error, the standard deviation is not artificially shrunk by the size of the sample. And, this approach will give you a better sense of the material differences between your data sets. Effect Size is also much more focused on confidence interval around the mean average difference which is much more informative than the hypothesis testing focus on statistical significance that often is not significant at all. Hope that helps.
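For completeness, the per-test threshold implied by the adjustment formula above can be compared with the simple division; a quick Python check:

```python
def bonferroni_alpha(alpha, n_tests):
    """Per-test threshold from dividing the overall alpha."""
    return alpha / n_tests

def sidak_alpha(alpha, n_tests):
    """Per-test threshold from the more precise 1 - (1 - alpha)^(1/n)."""
    return 1 - (1 - alpha) ** (1 / n_tests)

# With 25 tests at an overall alpha of 0.05, the two are nearly identical:
print(bonferroni_alpha(0.05, 25))       # 0.002
print(round(sidak_alpha(0.05, 25), 6))  # 0.00205
```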
| null | CC BY-SA 2.5 | null | 2010-10-30T19:37:59.957 | 2010-10-30T19:37:59.957 | null | null | 1329 | null |
4080 | 1 | 4090 | null | 4 | 359 | I asked this question at MathOverflow, but they recommended I ask it here. I need to find the factored joint distribution of the [Tree Augmented Naive Bayes](http://www.cs.huji.ac.il/~nir/Abstracts/FrGG1.html) algorithm. I read the paper but I couldn't figure out the answer. Any help or pointers appreciated.
| Factored Joint Distribution of Tree Augmented Naive Bayes Algorithm | CC BY-SA 2.5 | null | 2010-10-30T21:23:43.047 | 2010-11-01T08:50:52.740 | 2010-11-01T08:50:52.740 | null | null | [
"machine-learning",
"naive-bayes"
] |
4081 | 1 | 4082 | null | 17 | 10985 | I had been using the term "Heywood Case" somewhat informally to refer to situations where an online, 'finite response' iteratively updated estimate of the variance became negative due to numerical precision issues. (I am using a variant of Welford's method to add data and to remove older data.) I was under the impression that it applied to any situation where a variance estimate became negative, either due to numerical error or modeling error, but a colleague was confused by my usage of the term. A google search doesn't turn up much, other than that it is used in Factor Analysis, and seems to refer to the consequences of a negative variance estimate. What is the precise definition? And who was the original Heywood?
| What is the precise definition of a "Heywood Case"? | CC BY-SA 2.5 | null | 2010-10-30T21:31:17.513 | 2010-10-30T22:22:12.180 | null | null | 795 | [
"variance",
"factor-analysis",
"definition",
"online-algorithms"
] |
4082 | 2 | null | 4081 | 17 | null | Googling "[Heywood negative variance](http://www.google.com/search?q=heywood+negative+variance)" quickly answers these questions. Looking at a recent (2008) [paper by Kolenikov & Bollen](http://web.missouri.edu/~kolenikovs/papers/heywood-8.pdf), for example, indicates that:
- " “Heywood cases” [are] negative estimates of variances or correlation estimates greater than one in absolute value..."
- "The original paper (Heywood 1931) considers specific parameterizations of factor analytic models, in which some parameters necessary to describe the correlation matrices were greater than 1."
Reference
"Heywood, H. B. (1931), ‘On finite sequences of real numbers’, Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character 134(824), 486–501."
| null | CC BY-SA 2.5 | null | 2010-10-30T22:22:12.180 | 2010-10-30T22:22:12.180 | null | null | 919 | null |
4083 | 2 | null | 3893 | 5 | null | I thank everyone for their answers, but the question has grown to something I did not intend it to, being mainly an essay on the general notion of causal inference with no right answer.
I initially intended the question to probe the audience for examples of the use of cross-validation for causal inference. I had assumed such methods existed, as the notion of using a test sample and a hold-out sample to assess repeatability of effect estimates seemed logical to me. Like John noted, what I was suggesting isn't dissimilar to bootstrapping, and I would say it resembles other methods we use to validate results, such as subset specificity tests or non-equivalent dependent variables (bootstrapping relaxes parametric assumptions of models, and the subset tests more generally are used as a check that results are logical in varied situations). None of these methods meets any of the other answers' standards of proof for causal inference, but I believe they are still useful for causal inference.
chl's comment is correct in that my assertion for using cross-validation is a check on internal validity to aid in causal inference. But I ask we throw away the distinction between internal and external validity for now, as it does nothing to further the debate. chl's example of genome-wide studies in epidemiology I would consider a prime example of poor internal validity, making strong inferences inherently dubious. I think the genome association studies are actually an example of what I asked for. Do you think the inferences between genes and disease are improved through the use of cross-validation (as opposed to just throwing all markers into one model and adjusting p-values accordingly)?
Below I have pasted a copy of a table in the Berk article I cited in my question. While these tables were shown to demonstrate the false logic of using step-wise selection criteria and causal inference on the same model, let's pretend no model selection criteria were used, and the parameters in both the training and hold-out sample were determined a priori. This does not strike me as an unrealistic result. Although I could not say which estimate is correct and which is false, doesn't the inconsistency in the Assault Conviction and the Gun Conviction estimates between the two models cast doubt that either has a true causal effect on sentence length? Is knowing that variation not useful? If we lose nothing by having a hold-out sample to test our model, why can't we use cross-validation to improve causal inference (or am I missing what we lose by using a hold-out sample)?

| null | CC BY-SA 2.5 | null | 2010-10-31T05:14:12.747 | 2010-10-31T05:22:56.657 | 2010-10-31T05:22:56.657 | 1036 | 1036 | null |
4084 | 1 | 4085 | null | 13 | 7523 | I've got a large set of data (20,000 data points), from which I want to take repeated samples of 10 data points. However, once I've picked those 10 data points, I want them to not be picked again.
I've tried using the `sample` function, but it doesn't seem to have an option to sample without replacement over multiple calls of the function. Is there a simple way to do this?
| How to take many samples of 10 from a large list, without replacement overall | CC BY-SA 2.5 | null | 2010-10-31T14:09:43.783 | 2017-12-22T14:01:40.220 | null | null | 261 | [
"r",
"sample"
] |
4085 | 2 | null | 4084 | 10 | null | You could call sample once on the entire data set to permute it. Then when you want to get a sample you could grab the first 10. If you want another sample grab the next 10. So on and so forth.
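The same idea expressed in Python, purely for illustration (the thread itself uses R, and another answer gives the one-line R version): shuffle once up front, then take consecutive, non-overlapping slices of 10.

```python
import random

data = list(range(100))      # stand-in for the 20,000 data points
random.shuffle(data)         # one global permutation

# consecutive, non-overlapping samples of 10
samples = [data[i:i + 10] for i in range(0, len(data), 10)]

flat = [x for s in samples for x in s]
print(len(samples), len(set(flat)))  # 10 100: disjoint samples covering every point once
```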
| null | CC BY-SA 2.5 | null | 2010-10-31T14:55:22.850 | 2010-10-31T14:55:22.850 | null | null | 1028 | null |
4086 | 1 | 4096 | null | 9 | 1141 | I am not that familiar with the analysis of time series data. However, I have what I think is a simple prediction task to address.
I have about five years of data from a common generating process. Each year represents a monotonically increasing function with a non-linear component. I have counts for each week over a 40 week cycle for each year. The process starts, the function begins at zero, increases rather quickly over the first half of the function, slowing over the second half before leveling during the last five weeks. The process is consistent across years with small differences in rate of change and volume across the segments from year to year.
$$ y_{1}=\{0, N_{t1}, N_{t2}, ... N_{t39}, N_{t40}\} $$
$$ \vdots $$
$$ y_{5}=\{0, N_{t1}, N_{t2}, ... N_{t39}, N_{t40}\} $$
Where $N_{tx}$ equal the count at time x.
The goal is to take $N$ at $tx$ (or better $t0$ to $tx$, or the slope to that point) and predict the $N$ at $t40$. For example, if $N_{t10}$ is 5000 what is the expected value of $N_{t40}$ for that year. So, the question is, how would you model such data? It's easy enough to summarize and visualize. But I'd like a model to facilitate predictions and incorporate a measure of error.
| How to make forecasts for a time series? | CC BY-SA 3.0 | null | 2010-10-31T15:21:13.410 | 2016-09-06T09:45:28.133 | 2016-09-06T09:45:28.133 | 100369 | 485 | [
"time-series",
"forecasting"
] |
4087 | 2 | null | 4084 | 2 | null | This should work:
```
x <- rnorm(20000)
x.copy <- x
samples <- list()
i <- 1
while (length(x) >= 10){
  tmp <- sample(x, 10)     # draw 10 points from what remains
  samples[[i]] <- tmp
  i <- i + 1
  x <- x[-match(tmp, x)]   # remove the drawn points from the pool
}
table(unlist(samples) %in% x.copy)  # sanity check: all drawn points came from the original data
```
However, I don't think that's the most elegant solution...
| null | CC BY-SA 2.5 | null | 2010-10-31T16:32:55.587 | 2010-10-31T16:32:55.587 | null | null | 307 | null |
4088 | 2 | null | 4084 | 10 | null | Dason's thought, implemented in R:
```
samples <- split(sample(datapoints), ceiling(seq_along(datapoints)/10))
samples[[13]] # the thirteenth sample
```
| null | CC BY-SA 2.5 | null | 2010-10-31T18:48:59.330 | 2010-11-01T09:53:00.917 | 2010-11-01T09:53:00.917 | 1739 | 1739 | null |
4089 | 1 | null | null | 39 | 33735 | I'm sure I've come across a function like this in an R package before, but after extensive Googling I can't seem to find it anywhere. The function I'm thinking of produced a graphical summary for a variable given to it, with output including some graphs (a histogram and perhaps a box-and-whisker plot) and some text giving details like the mean, SD, etc.
I'm pretty sure this function wasn't included in base R, but I can't seem to find the package I used.
Does anyone know of a function like this, and if so, what package it is in?
| Graphical data overview (summary) function in R | CC BY-SA 3.0 | null | 2010-10-31T19:17:24.210 | 2016-05-05T16:50:59.280 | 2013-06-19T14:04:09.833 | 7290 | 261 | [
"r",
"data-visualization",
"descriptive-statistics",
"exploratory-data-analysis"
] |
4090 | 2 | null | 4080 | 1 | null | Where $A_1$, $A_2$, ..., $A_n$ are the attribute variables and $C$ is the class variable, it's:
$P(C) \prod_{i=1}^{n} P(A_i | A_{\pi(i)}, C)$.
Where $\pi(i)$ is the index of the parent of $A_i$ in the tree or $0$ if $i$ is the root and $P(A_i | A_0, C)$ is defined to be $P(A_i | C)$.
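Written as code, the factorization multiplies one conditional per attribute, each conditioned on the class and (except for the root) on its tree parent. A toy Python sketch with two binary attributes, where the numbers and table layout are invented purely for illustration:

```python
def tan_joint(p_c, cpts, parent, assignment, c):
    """P(C=c, A_1..A_n) = P(C=c) * prod_i P(A_i | A_parent(i), C=c).

    cpts[i] maps (a_i, a_parent, c) -> probability; the root uses a_parent=None.
    """
    prob = p_c[c]
    for i, a_i in assignment.items():
        a_parent = assignment.get(parent[i])  # None when A_i is the root
        prob *= cpts[i][(a_i, a_parent, c)]
    return prob

# Two attributes: A1 is the root, A2's parent is A1 (made-up probabilities).
p_c = {0: 0.5, 1: 0.5}
parent = {1: None, 2: 1}
cpts = {
    1: {(1, None, 1): 0.8, (0, None, 1): 0.2,
        (1, None, 0): 0.3, (0, None, 0): 0.7},
    2: {(1, 1, 1): 0.9, (0, 1, 1): 0.1, (1, 0, 1): 0.4, (0, 0, 1): 0.6,
        (1, 1, 0): 0.5, (0, 1, 0): 0.5, (1, 0, 0): 0.2, (0, 0, 0): 0.8},
}
print(round(tan_joint(p_c, cpts, parent, {1: 1, 2: 1}, 1), 6))  # 0.36 = 0.5 * 0.8 * 0.9
```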
| null | CC BY-SA 2.5 | null | 2010-10-31T19:26:30.673 | 2010-10-31T19:26:30.673 | null | null | 1756 | null |
4091 | 2 | null | 4086 | 4 | null | What you're asking for is essentially what Box-Jenkins ARIMA modeling does (your yearly cycles would be referred to as seasonal components). Besides looking up materials on your own, I would suggest
[Applied Time Series Analysis for the Social Sciences](http://books.google.com/books?id=D6-CAAAAIAAJ&q=Applied+Time+Series+Analysis+for+the+Social+Sciences&dq=Applied+Time+Series+Analysis+for+the+Social+Sciences&hl=en&ei=QaLNTKD-HcH88Aabt-Al&sa=X&oi=book_result&ct=result&resnum=1&ved=0CC8Q6AEwAA) 1980
by R McCleary ; R A Hay ; E E Meidinger ; D McDowall
Although I can think of reasonable reasons for why you want to forecast further into the future (and hence assess the error when doing so) it is often very difficult in practice. If you have very strong seasonal components it will be more feasible. Otherwise your estimates will likely reach an equilibrium in relatively few future time periods.
If you plan on using R to fit your models you should probably check out Rob Hyndman's [website](http://robjhyndman.com/) (Hopefully he will give you better advice than me!)
| null | CC BY-SA 2.5 | null | 2010-10-31T21:10:25.720 | 2010-10-31T21:17:30.207 | 2010-10-31T21:17:30.207 | 1036 | 1036 | null |
4092 | 2 | null | 4089 | 25 | null | Frank Harrell's [Hmisc](http://cran.r-project.org/web/packages/Hmisc/index.html) package has some basic graphics with options for annotation: check out the `summary.formula()` and related `plot` wrap functions. I also like the `describe()` function.
For additional information, have a look at the [The Hmisc Library](http://lib.stat.cmu.edu/S/Harrell/Hmisc.html) or [An Introduction to S-Plus and the Hmisc and Design Libraries](http://biostat.mc.vanderbilt.edu/twiki/pub/Main/RS/sintro.pdf).
Here are some pictures taken from the on-line help (`bpplt`, `describe`, and `plot(summary(...))`):



Many other examples can be browsed on-line on the [R Graphical Manual](http://rgm3.lab.nig.ac.jp/RGM/), see [Hmisc](http://rgm3.lab.nig.ac.jp/RGM/r_package?p=Hmisc) (and don't miss [rms](http://rgm3.lab.nig.ac.jp/RGM/r_package?p=rms)).
| null | CC BY-SA 3.0 | null | 2010-10-31T21:13:06.657 | 2013-06-19T18:41:21.603 | 2013-06-19T18:41:21.603 | 930 | 930 | null |
4093 | 1 | null | null | 16 | 36132 | Can anyone help me in interpreting PCA scores? My data come from a questionnaire on attitudes toward bears. According to the loadings, I have interpreted one of my principal components as "fear of bears". Would the scores of that principal component be related to how each respondent measures up to that principal component (whether he/she scores positively/negatively on it)?
| Interpreting PCA scores | CC BY-SA 3.0 | null | 2010-10-31T21:18:20.970 | 2017-11-12T10:04:22.853 | 2017-11-12T10:04:22.853 | 101426 | null | [
"pca"
] |
4094 | 2 | null | 4093 | 13 | null | Basically, the factor scores are computed as the raw responses weighted by the factor loadings. So, you need to look at the factor loadings of your first dimension to see how each variable relates to the principal component. Observing high positive (resp. negative) loadings associated with specific variables means that these variables contribute positively (resp. negatively) to this component; hence, people scoring high on these variables will tend to have higher (resp. lower) factor scores on this particular dimension.
Drawing the correlation circle is useful to have a general idea of the variables that contribute "positively" vs. "negatively" (if any) to the first principal axis, but if you are using R you may have a look at the [FactoMineR](http://cran.r-project.org/web/packages/FactoMineR/index.html) package and the `dimdesc()` function.
Here is an example with the `USArrests` data:
```
> data(USArrests)
> library(FactoMineR)
> res <- PCA(USArrests)
> dimdesc(res, axes=1) # show correlation of variables with 1st axis
$Dim.1
$Dim.1$quanti
correlation p.value
Assault 0.918 5.76e-21
Rape 0.856 2.40e-15
Murder 0.844 1.39e-14
UrbanPop 0.438 1.46e-03
> res$var$coord # show loadings associated to each axis
Dim.1 Dim.2 Dim.3 Dim.4
Murder 0.844 -0.416 0.204 0.2704
Assault 0.918 -0.187 0.160 -0.3096
UrbanPop 0.438 0.868 0.226 0.0558
Rape 0.856 0.166 -0.488 0.0371
```
As can be seen from the latest result, the first dimension mainly reflects violent acts (of any kind). If we look at the individual map, it is clear that states located on the right are those where such acts are most frequent.


You may also be interested in this related question: [What are principal component scores?](https://stats.stackexchange.com/questions/222/what-are-principal-component-scores)
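Mechanically, a component score is just the (centered) responses weighted by the loadings; a tiny pure-Python illustration of that projection (this is not FactoMineR's exact computation, which also involves scaling; the data and loadings here are invented):

```python
def component_scores(X, loading):
    """Project the centered rows of X onto a single loading vector."""
    n = len(X)
    means = [sum(col) / n for col in zip(*X)]  # column means for centering
    return [sum((x - m) * w for x, m, w in zip(row, means, loading))
            for row in X]

X = [[1.0, 2.0],
     [3.0, 6.0]]          # two respondents, two variables
loading = [0.6, 0.8]      # hypothetical loadings on one component
print(component_scores(X, loading))  # [-2.2, 2.2]
```

The respondent with above-average values on the positively loaded variables gets the positive score, which is exactly the interpretation asked about in the question.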
| null | CC BY-SA 2.5 | null | 2010-10-31T21:34:24.127 | 2010-10-31T22:01:49.937 | 2017-04-13T12:44:51.217 | -1 | 930 | null |
4095 | 2 | null | 3893 | 21 | null | I think it's useful to review what we know about cross-validation. Statistical results around CV fall into two classes: efficiency and consistency.
Efficiency is what we're usually concerned with when building predictive models. The idea is that we use CV to determine a model with asymptotic guarantees concerning the loss function. The most famous result here is due to [Stone 1977](http://www.jstor.org/pss/2984877) and shows that LOO CV is asymptotically equivalent to AIC. But, Brett provides a good example where you can find a predictive model which doesn't inform you on the causal mechanism.
Consistency is what we're concerned with if our goal is to find the "true" model. The idea is that we use CV to determine a model with asymptotic guarantees that, given that our model space includes the true model, we'll discover it with a large enough sample. The most famous result here is due to [Shao 1993](http://www.jstor.org/pss/2290328) concerning linear models, but as he states in his abstract, his "shocking discovery" is opposite of the result for LOO. For linear models, you can achieve consistency using LKO CV as long as $k/n \rightarrow 1$ as $n \rightarrow \infty$. Beyond linear models, it's harder to derive statistical results.
But suppose you can meet the consistency criteria and your CV procedure leads to the true model: $Y = \beta X + e$. What have we learned about the causal mechanism? We simply know that there's a well defined correlation between $Y$ and $X$, which doesn't say much about causal claims. From a traditional perspective, you need to bring in experimental design with the mechanism of control/manipulation to make causal claims. From the perspective of Judea Pearl's framework, you can bake causal assumptions into a structural model and use the probability based calculus of counterfactuals to derive some claims, but you'll need to satisfy [certain properties](http://bayes.cs.ucla.edu/BOOK-2K/jw.html).
Perhaps you could say that CV can help with causal inference by identifying the true model (provided you can satisfy consistency criteria!). But it only gets you so far; CV by itself isn't doing any of the work in either framework of causal inference.
If you're interested further in what we can say with cross-validation, I would recommend Shao 1997 over the widely cited 1993 paper:
- An Asymptotic Theory for Linear Model Selection (Shao, 1997)
You can skim through the major results, but it's interesting to read the discussion that follows. I thought the comments by Rao & Tibshirani, and by Stone, were particularly insightful. But note that while they discuss consistency, no claims are ever made regarding causality.
| null | CC BY-SA 2.5 | null | 2010-10-31T22:40:29.637 | 2010-10-31T22:40:29.637 | null | null | 251 | null |
4096 | 2 | null | 4086 | 6 | null | Probably the simplest approach is, as Andy W suggested, to use a seasonal univariate time series model. If you use R, try either `auto.arima()` or `ets()` from the [forecast package](http://cran.r-project.org/web/packages/forecast/).
Either should work ok, but a general time series method does not use all the information provided. In particular, it seems that you know the shape of the curve in each year, so it might be better to use that information by modelling each year's data accordingly. What follows is a suggestion that tries to incorporate this information.
It sounds like some kind of sigmoidal curve will do the trick. e.g., a shifted logistic:
\begin{equation}
f_{t,j} = \frac{r_te^{a_t(j-b_t)}}{1+e^{a_t(j-b_t)}}
\end{equation}
for year $t$ and week $j$ where $a_t$, $b_t$ and $r_t$ are parameters to be estimated. $r_t$ is the asymptotic maximum, $a_t$ controls the rate of increase and $b_t$ is the mid-point when $f_{t,j}=r_t/2$. (Another parameter will be needed to allow the asymmetry you describe whereby the rate of increase up to time $b_t$ is faster than that after $b_t$. The simplest way to do this is to allow $a_t$ to take different values before and after time $b_t$.)
The parameters can be estimated using least squares for each year. The parameters each form time series: ${a_1,\dots,a_n}$, ${b_1,\dots,b_n}$ and ${r_1,\dots,r_n}$. These can be forecast using standard time series methods, although with $n=5$ you probably can't do much apart from using the mean of each series for producing forecasts. Then, for year 6, an estimate of the value at week $j$ is simply $\hat{f}(6,j)$ where the forecasts of $a_6$, $b_6$ and $r_6$ are used.
Once data start to be observed for year 6 you will want to update this estimate. As each new observation is obtained, estimate the sigmoidal curve to the data from year 6 (you will need at least three observations to start with as there are three parameters). Then take a weighted average of the forecasts obtained using the data up to year 5 and the forecast obtained using only the data from year 6, where the weights are equal to $(40-t)/36$ and $(t-4)/36$ respectively. That is very ad hoc, and I'm sure it can be made more objective by placing it in the context of a larger stochastic model. Nevertheless, it will probably work ok for your purposes.
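A small Python sketch of the shifted logistic and the ad hoc weighted combination described above (parameter values are invented for illustration; this is not a substitute for actually estimating the parameters by least squares):

```python
from math import exp

def shifted_logistic(j, a, b, r):
    """f(j) = r * exp(a*(j - b)) / (1 + exp(a*(j - b)))."""
    z = exp(a * (j - b))
    return r * z / (1 + z)

def combined_forecast(j_week, f_history, f_current):
    """Weight the history-based and current-year forecasts as in the text:
    (40 - t)/36 on the history, (t - 4)/36 on the current year."""
    w_hist = (40 - j_week) / 36
    w_curr = (j_week - 4) / 36
    return w_hist * f_history + w_curr * f_current

# At the midpoint j = b the curve sits at half its maximum r:
print(shifted_logistic(20, a=0.3, b=20, r=1000))  # 500.0
```

Note the weights sum to 1 for any week, giving all weight to the history at week 4 and all weight to the current year's fit at week 40.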
| null | CC BY-SA 2.5 | null | 2010-11-01T01:18:37.127 | 2010-11-01T03:18:34.553 | 2010-11-01T03:18:34.553 | 159 | 159 | null |
4097 | 2 | null | 3260 | 2 | null | I will outline an approach that requires no "training" at all; it is up to you to determine its utility in this case.
A simple (and nonparametric) hypothetical model is that all datasets are independent, that none has a trend, and that their variations from one time period to the next are mutually independent. This implies the probability with which two pre-specified datasets simultaneously have local minima would equal the product of the probabilities with which each has local minima, with obvious (but more complex) generalizations to three or more pre-specified datasets (which I illustrate below). In particular, you can estimate the probabilities of local minima by means of their observed frequencies in each dataset. From these you can compute the probabilities of simultaneous local minima among 2, 3, or, generally, $k$ or more of the datasets. When the probability for $k$ or more is so low that it is unlikely to occur during the time span you have observed, you can take the simultaneous occurrence of $k$ or more local minima to be "significant" relative to this null hypothesis of independence.
For example, suppose you have five datasets, each observed 100 times, with local minima appearing 8, 9, 10, 11, and 12 times in them. All five would simultaneously exhibit a local minimum (8/100) * (9/100) * (10/100) * (11/100) * (12/100) = 0.00095% of the time, so even within 100 observations the expected number of simultaneous minima (of 100 * 0.00095% = 0.00095) is so ridiculously low that five simultaneous minima surely would be significant evidence of an "interesting" point.
Local minima among the first four datasets (unaccompanied by a local minimum in the fifth) would have an expected frequency of 100 * (8/100) * (9/100) * (10/100) * (11/100) * ((100-12)/100) = .00697. Similarly we could compute the expected frequency of local minima among the other combinations of four of the datasets. The total frequency of exactly four simultaneous minima is 0.04375. Added to the frequency of five simultaneous minima this gives 0.0447 as the expected number of times you would observe four or five simultaneous local minima in 100 observations: still pretty rare and therefore significant if it turns up. A similar computation for the ten combinations of three simultaneous local minima shows that you would expect at least three local minima 0.8452 times out of 100. So, observing one or two such events would not be unusual and you might not consider them significant. Obviously the expected number of two-way minima would be substantial (you should expect to see around 40 of them out of 100) and you would be unlikely to consider any of those significant.
The example illustrates how you could go about computing thresholds for significance in terms of the number of simultaneous local minima for any number of datasets that are observed for any number of time periods.
You can give a more precise accounting of the situation by means of the Poisson distribution. Take the occurrence of four or more simultaneous minima in the example. Under the null hypothesis (of independent datasets), this is rare enough that the actual count should have a Poisson distribution with expectation 0.8452. This implies there is a 94.59% chance of observing two or fewer such events. Thus, if you see three or more three- or four- or five-way minima you could take this to be significant evidence of lack of independence (with about 95% confidence). However, in this case you could not point to a specific time that is significant; you could only say that there are more threefold minima than there should be. Any one of them would be a reasonable candidate for an "interesting" time, but further investigation should ensue before you stipulate that any particular one of these times really demonstrates a departure from independence.
This model might or might not be appropriate for your data. You can check that by examining the data. If your data have trends or exhibit serial correlation you would need a more complex version of this model. Nevertheless, the same kind of analysis can help you decide what constitutes an "interesting" or "significant" syzygy of local minima.
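The combinatorial bookkeeping in the example is easy to automate; a short Python sketch that reproduces the numbers above under the same independence assumption:

```python
from itertools import combinations
from math import prod

def expected_count(p, k, n_obs=100):
    """Expected number of periods in which exactly k of the series show a
    local minimum simultaneously, assuming independence, over n_obs periods."""
    total = 0.0
    for idx in combinations(range(len(p)), k):
        inside = prod(p[i] for i in idx)                                # the k series with minima
        outside = prod(1 - p[i] for i in range(len(p)) if i not in idx)  # the rest without
        total += inside * outside
    return n_obs * total

p = [0.08, 0.09, 0.10, 0.11, 0.12]  # observed minimum frequencies
print(round(expected_count(p, 5), 5))                          # 0.00095
print(round(expected_count(p, 4), 5))                          # 0.04375
print(round(sum(expected_count(p, k) for k in (3, 4, 5)), 4))  # 0.8452
```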
| null | CC BY-SA 2.5 | null | 2010-11-01T03:15:58.853 | 2010-11-01T03:15:58.853 | null | null | 919 | null |
4098 | 2 | null | 573 | 10 | null | The audio application is a one-dimensional simplification of the two-dimensional image classification problem. A phoneme (for example) is the audio analog of an image feature such as an edge or a circle. In either case such features have an essential locality: they are characterized by values within a relatively small neighborhood of an image location or moment of speech. Convolutions are a controlled, regular form of weighted averaging of values within local neighborhoods. From this originates the hope that a convolutional form of a DBN can be successful at identifying and discriminating features that are meaningful.
| null | CC BY-SA 2.5 | null | 2010-11-01T03:29:42.770 | 2010-11-01T03:29:42.770 | null | null | 919 | null |
4099 | 1 | null | null | 17 | 32474 | I am currently assessing multicollinearity in my datasets.
What threshold values of VIF and condition index below/above suggest a problem?
VIF:
I have heard that VIF $\geq 10$ is a problem.
After removing two problem variables, VIF is $\leq 3.96$ for each variable.
Do the variables need more treatment or does this VIF seem fine?
Condition Index:
I have heard that a Condition Index (CI) of 30 or more is a problem.
My highest CI is 16.66. Is this a problem?
Other Issues:
- Are there any other dos/donts that need to be considered?
- Are there any other things that I need to keep in mind?
| VIF, condition Index and eigenvalues | CC BY-SA 3.0 | null | 2010-11-01T10:19:44.367 | 2018-04-15T22:17:24.670 | 2013-02-02T18:35:12.347 | 812 | 1763 | [
"multiple-regression",
"linear-model",
"multicollinearity",
"variance-inflation-factor"
] |
4100 | 2 | null | 2537 | 2 | null | Following up on chl's IRT suggestion and taking a different view of the analysis (and as an answer to the original question 2).
I would see if there was dominance structure in the items, e.g. an item ordering where people that like 2 tend to like 1 but not 3 and people who like 3 tend to like 1 and 2, etc. If so, there's a scale of some kind underneath your items and you might do better to build a measurement model of the underlying score and use those measurements as your dependent variable. Parametric IRT models, e.g. in the R package ltm, would give you an expected score and per person standard error, if you wanted to use that as a weight in the final regression. The mokken package can be used to see if there might be a scale in there in the first place.
The regression model decisions are then a separate but now slightly easier issue, for which existing comments provide a good overview. Personally I'd go for a mixed effects model using lmer, but that's just my preference for more rather than less model.
| null | CC BY-SA 2.5 | null | 2010-11-01T10:49:03.003 | 2010-11-01T10:49:03.003 | null | null | 1739 | null |
4101 | 1 | 4103 | null | 11 | 3516 | I would like to do an intervention analysis to quantify the results of a policy decision on the sales of alcohol over time. I am fairly new to time series analysis, however, so I have some beginners questions.
An examination of the literature reveals that other researchers have used ARIMA to model the time-series sales of alcohol, with a dummy variable as a regressor to model the effect of the intervention. While this seems like a reasonable approach, my data set is slightly richer than those I have encountered in the literature. Firstly, my data set is disaggregated by beverage type (i.e. beer, wine, spirits), and then further disaggregated by geographical zone.
While I could create separate ARIMA analyses for each disaggregated group and then compare the results, I suspect there is a better approach here. Could anyone more familiar with multi-dimensional time-series data provide some pointers or suggestions?
| Intervention analysis with multi-dimensional time-series | CC BY-SA 2.5 | null | 2010-11-01T11:37:13.793 | 2011-04-01T01:07:15.667 | null | null | 179 | [
"time-series",
"multivariate-analysis",
"arima",
"intervention-analysis"
] |
4102 | 2 | null | 1432 | 6 | null | The working residuals are the residuals in the final iteration of any iteratively reweighted least squares method. I take that to mean the residuals from what we treat as the last iteration of fitting the model — a reminder that model fitting is an iterative exercise.
| null | CC BY-SA 3.0 | null | 2010-11-01T12:19:08.010 | 2015-02-16T23:20:32.487 | 2015-02-16T23:20:32.487 | 9007 | 1763 | null |
4103 | 2 | null | 4101 | 10 | null | The ARIMA model with a dummy variable for an intervention is a special case of a linear model with ARIMA errors.
You can do the same here but with a richer linear model including factors for the beverage type and geographical zones.
In R, the model can be estimated using arima() with the regression variables included via the xreg argument. Unfortunately, you will have to code the factors using dummy variables, but otherwise it is relatively straightforward.
| null | CC BY-SA 2.5 | null | 2010-11-01T12:21:36.523 | 2010-11-01T12:21:36.523 | null | null | 159 | null |
4104 | 1 | 4109 | null | 6 | 786 | I would like to use data mining to try to find good workout schemes. The input dataset will contain the parameters of a set of workouts with dates and different performance and medical measures. The problem is that the influence of each individual workout will differ depending on the time that has passed since that workout. For instance, on the day after a workout performance will degrade, but it will eventually improve to a higher level than before the workout. So each performance measure will be the cumulative result of different workouts.
I know that I can run a series of workouts of the same type over a period of time and then use the difference in performance as a predictor. But I was wondering if there are any algorithms that allow the time component to be taken into account and run an analysis of the effect of individual workouts.
| Data mining algorithm suggestion | CC BY-SA 2.5 | null | 2010-11-01T12:24:49.967 | 2019-03-09T21:12:09.497 | 2019-03-09T21:12:09.497 | 11887 | 255 | [
"data-mining",
"algorithms",
"exploratory-data-analysis",
"unevenly-spaced-time-series"
] |
4106 | 2 | null | 4101 | 6 | null | If you wanted to model the sales of drinks types as a vector [sales of wine at t, sales of beer at t, sales of spirits at t], you might want to look at Vector Autoregression (VAR) models. You probably want the VARX variety that have a vector of exogenous variables like region and the policy intervention dummy, alongside the wine, beer and spirits sequences. They are fairly straightforward to fit and you'd get impulse response functions to express the impact of exogenous shocks, which might also be of interest. There's comprehensive discussion in Lütkepohl's book on multivariate time series.
Finally, I'm certainly no economist but it seems to me that you might also think about ratios of these drinks types as well as levels. People probably operate under a booze budget constraint - I know I do - which would couple the levels and (anti-)correlate the errors.
| null | CC BY-SA 2.5 | null | 2010-11-01T16:40:54.657 | 2010-11-01T16:40:54.657 | null | null | 1739 | null |
4108 | 2 | null | 2077 | 7 | null | Could you fit a loess/spline through the data and use the residuals? Would the residuals be stationary?
Seems fraught with issues to consider, and perhaps there would not be as clear an indication of an overly-flexible curve as there is for over-differencing.
| null | CC BY-SA 2.5 | null | 2010-11-01T17:39:33.897 | 2010-11-01T17:39:33.897 | null | null | 1764 | null |
4109 | 2 | null | 4104 | 5 | null | Your first task is to find a reasonable model relating an outcome $Y$ to the sequence of workouts that preceded it. One might start by supposing that the outcome depends quite generally on a linear combination of time-weighted workout efforts $X$, but such a model would be unidentifiable (from having more parameters than data points). One popular simplification is to suppose that the "influence" of a workout at time $t$ on the outcome at time $s$ is
a. proportional to the intensity of the workout,
b. decays exponentially; that is, is reduced by a factor $\exp(-\theta(s-t))$ for some unknown decay rate $\theta$, and
c. independently adds to the influences of all other workouts preceding time $t$.
Of course we must be prepared to allow some deviation between the actual outcome and that predicted by the model; it is natural to model that deviation as a set of independent random variables of zero mean.
This leads to a formal model which can serve as a useful point of departure for EDA. To write it down, let the times be $t_1 \lt t_2 \lt \ldots \lt t_n$ with corresponding workout intensities $x_1, x_2, \ldots, x_n$ and let the outcomes be measured at times $s_1 \lt s_2 \lt \ldots \lt s_m$ with values $y_1, \ldots, y_m$, respectively. The model is
$$y_j =\alpha + \beta \exp(-\theta(s_j - t_{k})) \left( x_{k} + \exp(-\theta \Delta_{k,k-1}) x_{k-1} + \cdots + \exp(-\theta \Delta_{k, 1})x_1 \right) + \epsilon_j$$
where $\alpha$ and $\beta$ are coefficients in a linear relation, $k$ is the index of the most recent workout preceding time $s_j$, $\Delta_{i,j} = t_i - t_j$ is the time elapsed between the $i^\text{th}$ and $j^\text{th}$ workouts, and the $\epsilon_j$ are independent random variables with zero expectations.
This can get messy when workouts and endpoint measurements are unevenly spaced. If to a good approximation the spacing between a workout and the next measurement is constant (say, a time difference of $s$) and--as an expository simplification--if each workout is followed by a measurement (so that $m = n$), then this model suggests some useful EDA procedures. As an abbreviation, let's write (somewhat loosely)
$$f_k(x,t,\theta) = \left( x_{k} + \exp(-\theta \Delta_{k,k-1}) x_{k-1} + \cdots + \exp(-\theta \Delta_{k, 1})x_1 \right)$$
for the weighted sum of the workouts up to and including the $k^\text{th}$ one, whence
$$y_k = \alpha + \gamma f_k(x,t,\theta) + \epsilon_k$$
where $\gamma = \beta \exp(-\theta s)$. Note that this formulation accommodates any irregular time sequence of workouts, so it's not grossly oversimplified.
What you want to know is whether this makes sense: do the data at all behave like this? We're really asking about the possibility of a linear relationship between the $x$'s and the $y$'s. We need that to hold for at least one decay constant $\theta$ with a reasonable value.
One way to check is to note there is a relatively simple relationship between successive terms $f_{k+1}$ and $f_k$; you let $f_k$ decay for additional time $t_{k+1} - t_k$ and add $x_{k+1}$ to it:
$$f_{k+1}(x,t,\theta) = x_{k+1} + \exp(-\theta \Delta_{k+1,k}) f_k(x,t,\theta).$$
(This formula, by the way, provides an efficient way to compute all the $f_k$ by starting at $f_1 = x_1$ and continuing recursively for $k = 2, 3, \ldots, n$--a simple spreadsheet formula. It is a generalization of the weighted running averages used extensively in financial analysis.)
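That recursion is a one-line update. Here is a minimal sketch (in Python/NumPy purely for illustration, with hypothetical workout times and intensities — a spreadsheet or R version is analogous):

```python
import numpy as np

def decayed_sums(t, x, theta):
    """f_k = x_k + exp(-theta * (t_k - t_{k-1})) * f_{k-1}, computed recursively."""
    f = np.empty_like(x)
    f[0] = x[0]
    for k in range(1, len(x)):
        f[k] = x[k] + np.exp(-theta * (t[k] - t[k - 1])) * f[k - 1]
    return f

t = np.array([0.0, 2.0, 3.0, 7.0])  # workout times (hypothetical)
x = np.array([1.0, 2.0, 1.5, 3.0])  # workout intensities (hypothetical)
print(decayed_sums(t, x, theta=0.5))
```

The recursive values agree with computing each weighted sum directly, which is a useful check on the formula.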
Equivalently, we can isolate $x_{k+1}$ by subtracting the right hand term. This suggests exploring the relationship between the adjusted values $z_k = y_{k+1} - \exp(-\theta \Delta_{k+1,k})y_k$ and the workouts $x_{k+1}$, because
$$z_k = (1 - \exp(-\theta \Delta_{k+1,k}))\alpha + \gamma x_{k+1} + \epsilon_{k+1} - \exp(-\theta \Delta_{k+1,k})\epsilon_k.$$
If the workouts are approximately regularly spaced, so that $\Delta_{k+1,k}$ is roughly constant, then for any fixed value of $\theta$ this expression is in the form
$$z = \text{ constant + constant *} x + \text{ error}.$$
The error terms will be positively correlated (in pairs) but still unbiased. It is now clear how to check linearity: Pick a trial value for $\theta$, compute the $z$'s (which depend on it), make a scatterplot of the $z$'s versus the $x$'s, and look for linearity. Vary $\theta$ interactively to search for linear-looking scatterplots. If you can produce one, you already have a reasonable estimate of $\theta$. You can then estimate the other parameters ($\alpha$ and $\beta$) if you like. If you cannot produce a linear scatterplot, use standard EDA techniques to re-express the variables (the $x$'s and $y$'s) until you can. Look for a value of $\theta$ that minimizes the typical sizes of the residuals: that is a rough estimate of the decay rate.
I don't expect this method to be highly stable: there is likely a wide range of values of $\theta$ that will induce linearity and relatively small residuals in the scatterplot. But that is something for you to find out. If you discover that only a narrow range of values accomplishes this, then you can have confidence that the decay effect is there and can be estimated. Use maximum likelihood; it will be convenient to suppose the $\epsilon$'s are normally distributed. (The profile likelihood, with $\theta$ fixed, is an ordinary least squares problem, so it will be easy to fit this model. Alternatively you could try fitting the relationship between $z$ and $x$ directly using generalized least squares, but I think that would be trickier to implement.)
This all might sound complicated but it's actually quite simple. You could set up a spreadsheet in which $\theta$ is the value in a cell, add a $\theta$-varying control to a scatterplot of $x$ and $z$ (computed in the spreadsheet from a column of $t$ values and a column of $y$ values), and simply adjust the control to straighten out the plot.
It will be harder to explore a dataset in which there are fewer (or more) measurements $y_j$ than there are workouts $x_k$ or where the temporal spacing between workouts and measurements varies a lot. You might have to settle for maximum likelihood solutions alone, without the benefit of supporting graphics to verify the reasonableness and the adequacy of this model a priori.
Even if my assumptions do not agree with your situation in all details, I hope that this discussion at least suggests effective approaches for furthering your investigation.
| null | CC BY-SA 2.5 | null | 2010-11-01T18:28:40.480 | 2010-11-01T18:38:55.810 | 2010-11-01T18:38:55.810 | 930 | 919 | null |
4110 | 2 | null | 25 | 2 | null | I'm new here, and perhaps "financial time series" has a specific definition... But given that I don't know it, my question for you would be what you mean: quarterly/monthly economic data, daily market prices, hourly or higher-frequency data, etc? And by "modeling", do you mean working with textbook ARIMA/ARCH solutions, or things a bit more exotic (such as dynamic linear systems), or exotic/custom experimentation?
R is flexible and free, though less GUI-fied than most. It also has packages covering everything from daily stock prices to dynamic linear systems and optimization packages. (In fact, the hard part will be deciding which time series and which financial packages to use.)
GRETL is free and has a reasonable GUI, though it's econometric, not really daily market oriented. I've heard of Oxmetrics, which appears to have a very complete every-possible-variant-of-ARCH package available for it. If you're talking monthly/quarterly economic data, you could also use X12-ARIMA, which is a benchmark of sorts.
I've used all kinds of GUIs for programming/processing data, but for some reason RapidMiner's never really clicked with me. Something strange about its workflow that I've just never gotten.
| null | CC BY-SA 2.5 | null | 2010-11-01T19:45:21.320 | 2010-11-01T19:45:21.320 | null | null | 1764 | null |
4111 | 1 | 4116 | null | 28 | 5284 | A student asked me today, "How do they know how many people attended a large group event, for example, the Stewart/Colbert 'Rally to Restore Sanity' in Washington D.C.?" News outlets report estimates in the tens of thousands, but what methods are used to get those estimates, and how reliable are they?
One article apparently based their estimate on parking permits... but what other techniques do we have? Please note I am not talking about capture/recapture experiments or anything of the like.
I don't have any idea. I would guess in advance that there aren't specific methods for something like this, and whatever's there is very ad hoc (such as how many parking permits were sold). Is this true? For purposes of national security - of course - it would be possible to have an analyst sit down with satellite photographs and physically bean-count the number of people there. I doubt this method is used very often.
| How to estimate how many people attended an event (say, a political rally)? | CC BY-SA 2.5 | null | 2010-11-01T20:00:57.660 | 2022-12-08T14:15:11.347 | 2010-11-02T13:57:31.997 | 8 | null | [
"estimation",
"sampling"
] |
4113 | 2 | null | 964 | 3 | null | I've found that the books I've read tend to mention the "why" behind diff and log. And it's easy to see for yourself. Try this:
```
data(AirPassengers)
plot(AirPassengers)
```
Notice the seasonal pattern, but also notice the upward trend. So try
```
plot(diff(AirPassengers))
```
See how the upward trend is gone? By looking at the change each month instead of the actual data, you're seeing the patterns more clearly. You've stabilized the time series in some sense.
But also note that the pattern gets larger towards the right side of the graph. That's because the pattern is not additive (add amount Y), but rather multiplicative (add a percentage, or multiply). Logs turn multiplication into addition, so:
```
plot(diff(log(AirPassengers)))
```
and you have a stable pattern that you can better perceive what's actually happening over time to the pattern, independently of the trend. A time series with this kind of stability is called "stationary". (Of course, "stationary" has a very technical meaning beyond looking "stable", but let's not go there for now.)
Obviously, the first step to analysis is to understand and preprocess your data and a graph of the raw data is an essential step in the process. Graphing the diff(log(foo)) is just confirming that you understand the data and the results look appropriate, and it gives you a bit more insight into the seasonal patterns of the data.
There are also tests (ADF, etc) and graphs (ACF) that would be used to confirm that differencing of a series is called for, but it can't hurt to look at things as well.
So the transformation reveals the de-trended data (if the transform is indeed called for) for you to look at, and it is the data that you will actually feed into follow-on analysis. (Though some software you use will do the diff and the log under-the-hood for you and you'll only ever see the output which is reversed back to the original data scale)
| null | CC BY-SA 2.5 | null | 2010-11-01T20:22:48.497 | 2010-11-01T20:22:48.497 | null | null | 1764 | null |
4114 | 1 | null | null | 11 | 1707 | I've been reading some about Generalized Least Squares (GLS) and trying to tie it back to my basic econometric background. I recall in grad school using Seemingly Unrelated Regression (SUR) which seems somewhat similar to GLS. One paper I stumbled on even referred to SUR as "special case" of GLS. But I still can't wrap my brain around the similarities and differences.
so the question:
What are the similarities and differences between GLS and SUR? What are the hallmarks of a problem which should use one method over the other?
| Difference between GLS and SUR | CC BY-SA 2.5 | null | 2010-11-01T20:31:17.030 | 2010-11-03T09:55:08.037 | 2010-11-03T09:55:08.037 | 930 | 29 | [
"regression",
"generalized-least-squares"
] |
4115 | 2 | null | 4114 | 9 | null | In a narrow sense, GLS (and in particular Feasible GLS or FGLS) is an estimation method applied to SUR models.
SUR implies a system of m equations that are assumed to have correlated errors, and (F)GLS helps to recover from this -- see [Wikipedia on Seemingly Unrelated Regressions](http://en.wikipedia.org/wiki/Seemingly_unrelated_regressions).
GLS, on the other hand, is a method of incorporating information from the covariance structure of your model. See [Wikipedia on GLS](http://en.wikipedia.org/wiki/Feasible_generalized_least_squares).
To recap, you can use the latter (GLS) to estimate the former (SUR).
| null | CC BY-SA 2.5 | null | 2010-11-01T20:39:05.573 | 2010-11-02T01:18:26.927 | 2010-11-02T01:18:26.927 | null | 334 | null |
4116 | 2 | null | 4111 | 13 | null | You could estimate the people per square meter (use a few areas, of at least a few square meters each to get a good estimate) and multiply this by the size of the area.
Here is an article on this topic: [How is Crowd Size estimated?](https://web.archive.org/web/20110810132244/http://www.lifeslittlemysteries.com/how-is-crowd-size-estimated--1074/)
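The density-times-area arithmetic is simple to sketch; all numbers below are hypothetical, for illustration only:

```python
# Back-of-the-envelope crowd estimate: average density (people per m^2) times area.
densities = [2.1, 1.8, 2.4]  # spot counts from a few sampled squares (hypothetical)
area_m2 = 50_000             # estimated area occupied by the crowd (hypothetical)

avg_density = sum(densities) / len(densities)
estimate = avg_density * area_m2
print(round(estimate))  # 105000
```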
| null | CC BY-SA 4.0 | null | 2010-11-01T20:52:32.800 | 2022-11-26T12:47:52.030 | 2022-11-26T12:47:52.030 | 362671 | 1765 | null |
4117 | 2 | null | 4111 | 0 | null | A police officer told me once that they had rules of thumb to guesstimate attendance at demonstrations (don't ask me for specifics), probably based on what Tim said.
| null | CC BY-SA 2.5 | null | 2010-11-01T21:05:58.687 | 2010-11-01T21:05:58.687 | null | null | 1766 | null |
4118 | 2 | null | 4111 | 4 | null | Tim's linked article is great, though I think the company that counts people in grids is making it out to be easier than it really is.
In the local (DC) papers, I've seen quotes about Metro rider usage (except there were two other major events downtown the same day), attempts to count people at security checkpoints, grid square counting from aerial photos, quoting the numbers put on Park Service event applications, etc, none of which impress me in a large town with many things going on concurrently.
| null | CC BY-SA 2.5 | null | 2010-11-01T22:21:56.600 | 2010-11-01T22:21:56.600 | null | null | 1764 | null |
4119 | 2 | null | 4111 | 1 | null | Here's an idea (but I am not sure it would work in practice): place a free WiFi access point, and count the number of connections (from iPhones, BlackBerrys, ...).
| null | CC BY-SA 2.5 | null | 2010-11-01T22:22:07.313 | 2010-11-01T22:22:07.313 | null | null | null | null |
4120 | 1 | 4130 | null | 7 | 781 | I'm working with a dataset for which I only have means, standard deviations, and sample sizes for different levels of a continuous predictor.
E.g.:
```
 Y  X  SD_Y  N_Y
 5  1   3     4
10  2   6     2
15  3   2     8
```
I would like to determine the regression line that fits this data. I'm wracking my brains to remember how data points should be weighted in a linear regression (I'm also interested in using a generalized linear model as well)- by sample size, variance, SD?
Any pointers?
| Using weighted regression to obtain fit lines for which I only have summary data | CC BY-SA 2.5 | null | 2010-11-01T22:35:51.763 | 2010-11-02T17:24:09.077 | null | null | 101 | [
"regression"
] |
4121 | 1 | 4122 | null | 3 | 3150 | If $X$ and $Y$ are standardized variables and are perfectly positively correlated with each other, how can I prove that $E[(X-Y)^2] = 0$?
| Statistics Proof that $E[(X-Y)^2] = 0$ | CC BY-SA 2.5 | null | 2010-11-01T23:13:00.463 | 2011-06-16T16:40:50.630 | 2010-11-02T13:51:18.837 | 8 | 1395 | [
"mathematical-statistics",
"self-study"
] |
4122 | 2 | null | 4121 | 12 | null | $E[(X-Y)^2] = E(X^2) + E(Y^2) - 2E(XY)$
Use the fact that $X, Y$ are standardized and perfectly correlated to make appropriate substitutions above to get the desired result.
PS: I am not providing the complete solution. Hopefully, the above hints will set you on the right path.
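A numerical sanity check of the claim — not a substitute for the algebra (Python for illustration; a perfectly correlated pair is constructed here by an affine transformation):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
z = 3.0 * x + 7.0  # perfectly positively correlated with x

# Standardize both variables (mean 0, sd 1).
xs = (x - x.mean()) / x.std()
zs = (z - z.mean()) / z.std()

print(np.corrcoef(xs, zs)[0, 1])  # 1 (up to floating point)
print(np.mean((xs - zs) ** 2))    # 0 (up to floating point)
```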
| null | CC BY-SA 2.5 | null | 2010-11-01T23:27:27.143 | 2010-11-01T23:27:27.143 | null | null | null | null |
4123 | 2 | null | 4120 | 0 | null | I think I would calculate normalized variables (z = (x - mean(x)) / sd(x)) and run the regression. Or you could work out a way to generate samples in a bootstrap. I'm not sure if this would be the textbook solution, but intuitively it should work.
| null | CC BY-SA 2.5 | null | 2010-11-02T00:42:39.077 | 2010-11-02T00:42:39.077 | null | null | 1766 | null |
4124 | 2 | null | 4120 | 3 | null | Let the disaggregrate model be:
$Y_{ia} = X_a \beta + \epsilon_i$
where
$\epsilon_i \sim N(0,\sigma^2)$
Your aggregate model is given by:
$Y_a = \frac{\sum_i(Y_{ia})}{n_a}$
where,
$n_a$ is the number of observations you have corresponding to the $a$ index.
Therefore, it follows that:
$Y_a = X_a \beta + \epsilon_a$
where
$\epsilon_a \sim N(0, \frac{\sigma^2}{n_a} )$ and
$a=1, 2, ... A$
Therefore, the OLS estimate would be given by minimizing:
$\sum_a(Y_a - X_a \beta)^2$
Which yields the usual solution. So, I do not think there is any difference as far as the estimate for the slope parameters are concerned.
Edit 1
Here is a small simulation in R which illustrates the above idea (apologies for the flaky code as I am using questions such as the above to learn R).
```
set.seed(1);
n <- c(4,2,8);
x <- c(1,2,3);
data <- matrix(0,14,2)
mean_data <- matrix(0,3,2)
index <- 1;
for (i in 1 : 3)
{
for(obs in 1:n[i])
{
data[index,1] <- x[i];
data[index,2] <- x[i]*8 + 1.5*rnorm(1);
mean_data[i,1] = x[i];
mean_data[i,2] = mean_data[i,2] + data[index,2];
index = index + 1;
}
mean_data[i,2] = mean_data[i,2] / n[i];
}
beta <- lm(mean_data[,2] ~ mean_data[,1]);
```
The above code yields the output when you type `beta`:
```
Call:
lm(formula = mean_data[, 2] ~ mean_data[, 1])
Coefficients:
(Intercept) mean_data[, 1]
-0.03455 7.99326
```
Edit 2
However, OLS is not efficient as error variances are not equal. Thus, using MLE ideas, we need to minimize:
$\sum_a{n_a (Y_a - X_a \beta)^2}$
In other words, we want to minimize:
$\sum_a{(\sqrt{n_a} Y_a - \sqrt{n_a} X_a \beta)^2}$
Thus, the MLE can be written as follows:
Let $W$ be a diagonal matrix with the $\sqrt{n_a}$ along the diagonal. Thus, the MLE estimate can be written as:
$(X' X)^{-1} X' Y$
where,
$Y = W [Y_1,Y_2,...Y_A]'$ and
$X = W [X_1,X_2,...X_A]'$
Another way to think about this is:
Consider the variance of $Y$. The transformation given above for $Y$ ensures that the variance of the individual values of $Y$ are identical thus satisfying the conditions of the [Gauss-Markov theorem](http://en.wikipedia.org/wiki/Gauss%E2%80%93Markov_theorem) that OLS is BLUE.
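To illustrate Edit 2 numerically — a NumPy sketch (rather than R), using the summary data from the question; since those group means happen to lie exactly on a line, the weighted fit recovers intercept 0 and slope 5 regardless of the weights:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])    # predictor levels
y = np.array([5.0, 10.0, 15.0])  # group means of Y
n = np.array([4.0, 2.0, 8.0])    # group sizes, used as weights

# Weighted least squares: scale each row by sqrt(n_a), then run ordinary LS.
w = np.sqrt(n)
X = np.column_stack([np.ones_like(x), x]) * w[:, None]
beta, *_ = np.linalg.lstsq(X, y * w, rcond=None)
print(beta)  # [intercept, slope] ~ [0, 5]
```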
| null | CC BY-SA 2.5 | null | 2010-11-02T00:50:27.137 | 2010-11-02T01:56:12.977 | 2010-11-02T01:56:12.977 | null | null | null |
4125 | 1 | null | null | 13 | 31545 | How do you show that the point of averages $(\bar{x}, \bar{y})$ lies on the estimated regression line?
| Regression Proof that the point of averages (x,y) lies on the estimated regression line | CC BY-SA 2.5 | null | 2010-11-02T01:22:05.690 | 2020-10-02T09:30:54.100 | 2010-11-02T13:50:27.137 | 8 | 1395 | [
"distributions",
"regression",
"proof",
"self-study"
] |
4126 | 2 | null | 964 | 0 | null | Time series analysis is easier on stationary data (more tools are available).
When a time series is non-stationary, you can try to find another series, linked to the initial one, which is stationary.
In many cases taking the difference will be enough (time series integrated of order 1). Sometimes you'll have to take the logarithm first (for instance, for financial time series). The reason is that investment returns are percentages!
Just don't forget that there are also some tools for non-stationary data. Sometimes taking the diff or the log return may lead to spurious regressions!
Fred
| null | CC BY-SA 2.5 | null | 2010-11-02T01:47:17.577 | 2010-11-02T01:47:17.577 | null | null | 1709 | null |
4130 | 2 | null | 4120 | 5 | null | Weight each mean by the number of points that went into computing it. You can later use the estimated standard deviations to test the hypothesis of homoscedasticity that this approach assumes. If the Ns are as small as in your example then this test probably wouldn't have much power unless the SDs vary greatly.
| null | CC BY-SA 2.5 | null | 2010-11-02T06:01:22.763 | 2010-11-02T06:01:22.763 | null | null | 449 | null |
4131 | 1 | null | null | 7 | 1161 | CONTEXT:
I am modelling the relation between time (1 to 30) and a DV for a set of 60 participants. Each participant has their own time series.
For each participant I am examining the fit of 5 different theoretically plausible functions within a nonlinear regression framework.
One function has one parameter; three functions have three parameters; and one function has five parameters.
I want to use a decision rule to determine which function provides the most "theoretically meaningful" fit.
However, I don't want to reward over-fitting.
Over-fitting seems to come in two varieties. One form is the standard sense whereby an additional parameter enables slightly more of the random variance to be explained. A second sense is where there is an outlier or some other slight systematic effect, which is of minimal theoretical interest. Functions with more parameters sometimes seem capable of capturing these anomalies and get rewarded.
I initially used AIC. And I have also experimented with increasing the penalty for parameters.
In addition to using $2k$: [$\mathit{AIC}=2k + n[\ln(2\pi \mathit{RSS}/n) + 1]$];
I've also tried $6k$ (what I call AICPenalised).
I have inspected scatter plots with fit lines imposed and corresponding recommendations based on AIC and AICPenalised. Both AIC and AICPenalised provide reasonable recommendations. About 80% of the time they agree. However, where they disagree, AICPenalised seems to make recommendations that are more theoretically meaningful.
QUESTION:
Given a set of nonlinear regression function fits:
- What is a good criterion for deciding on a best fitting function in nonlinear regression?
- What is a principled way of adjusting the penalty for number of parameters?
| Comparing model fits across a set of nonlinear regression models | CC BY-SA 2.5 | null | 2010-11-02T06:26:26.320 | 2010-11-02T09:41:07.780 | 2010-11-02T08:55:32.367 | 183 | 183 | [
"aic",
"nonlinear-regression"
] |
4133 | 2 | null | 4131 | 1 | null | For each participant, compute the cross-validated (leave one out) prediction error per functional form and assign the participant the form with the smallest one. That should do something to keep the overfitting under control.
That approach ignores higher-level problem structure: the population has groups that are assumed to share a functional form, so data from one participant with a particular form is potentially useful for estimating the parameters of another with the same form. But it's a start, if not a finish, for the analysis.
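A minimal sketch of the per-participant procedure (Python for illustration; polynomial families stand in for the "functional forms" here — the real candidates would be the five theoretical functions):

```python
import numpy as np

def loocv_error(t, y, fit, predict):
    """Leave-one-out CV: refit without point i, predict it, average the squared errors."""
    errs = []
    for i in range(len(t)):
        mask = np.arange(len(t)) != i
        params = fit(t[mask], y[mask])
        errs.append((predict(params, t[i]) - y[i]) ** 2)
    return float(np.mean(errs))

# Candidate "functional forms" (polynomial degrees as hypothetical stand-ins).
candidates = {
    "linear":   (lambda t, y: np.polyfit(t, y, 1), np.polyval),
    "cubic":    (lambda t, y: np.polyfit(t, y, 3), np.polyval),
    "degree-5": (lambda t, y: np.polyfit(t, y, 5), np.polyval),
}

rng = np.random.default_rng(1)
t = np.arange(1.0, 31.0)                                 # 30 time points, as in the question
y = 2.0 + 0.5 * t + rng.normal(scale=1.0, size=t.size)   # truly linear + noise

scores = {name: loocv_error(t, y, f, p) for name, (f, p) in candidates.items()}
best = min(scores, key=scores.get)
print(best)  # CV typically favors the simpler form when the truth is linear
```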
| null | CC BY-SA 2.5 | null | 2010-11-02T09:41:07.780 | 2010-11-02T09:41:07.780 | null | null | 1739 | null |
4134 | 2 | null | 25 | 2 | null | While not exactly cheap, MATLAB is widely used in the financial industry for time series modelling: [http://www.mathworks.com](http://www.mathworks.com)
| null | CC BY-SA 2.5 | null | 2010-11-02T11:17:57.750 | 2010-11-02T11:48:26.543 | 2010-11-02T11:48:26.543 | 439 | 439 | null |
4135 | 2 | null | 25 | 4 | null | I really like to work with R, because in the end you will find almost anything, and you have a very good support with the mailing lists. The downside of R is that helpful bits which fit your specific problems might be spread over a large range of packages, and you might not always be able to find them. Another point may be a lock-in, with that I mean that after a time learning R, you will probably be unmotivated to relearn another software, but this will happen in any system.
With regard to Matlab being expensive - if on a budget, Octave will work just as well, at least it did for the things I needed to do with it, which were rather basic.
| null | CC BY-SA 2.5 | null | 2010-11-02T12:04:30.563 | 2010-11-02T12:15:04.513 | 2010-11-02T12:15:04.513 | 1766 | 1766 | null |
4136 | 2 | null | 4111 | 2 | null | As an alternative to the WiFi idea mentioned by [Uri](https://stats.stackexchange.com/users/1767/uri), you could place Bluetooth scanner(s) in 'strategic' locations of your venue. I attended a presentation during the [MPA workshop](http://www.geo.uzh.ch/~plaube/mpa10/index.html) about [such a development](http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-652/MPA10-05.pdf) in the Netherlands.
| null | CC BY-SA 2.5 | null | 2010-11-02T12:19:33.383 | 2010-11-02T12:19:33.383 | 2017-04-13T12:44:33.977 | -1 | 22 | null |
4137 | 2 | null | 2715 | 6 | null | In a forecasting problem (i.e., when you need to forecast $Y_{t+h}$ given $(Y_t,X_t)$, $t>T$, using a learning set $(Y_1,X_1),\dots, (Y_T,X_T)$), the rules of thumb (to be tried before any complex modelling) are
- Climatology ($Y_{t+h}$ forecast by the mean observed value over the learning set, possibly by removing obvious periodic patterns)
- Persistence ($Y_{t+h}$ forecast by the last observed value: $Y_t$).
What I often do now as a last simple benchmark / rule of the thumb is using randomForest($Y_{t+h}$~$Y_t+X_t$, data=learningSet) in R software. It gives you (with 2 lines of code in R) a first idea of what can be achieved without any modelling.
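A sketch of the two simple benchmarks (in Python for illustration; the randomForest line above is R):

```python
import numpy as np

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(size=200))  # a random-walk-like series (hypothetical data)
h = 1                                # forecast horizon
train, test = y[:150], y[150:]

# Climatology: forecast every future value by the training-set mean.
clim_pred = np.full(test.size, train.mean())

# Persistence: forecast y[t+h] by the last observed value y[t].
pers_pred = y[150 - h : 200 - h]

clim_mse = np.mean((test - clim_pred) ** 2)
pers_mse = np.mean((test - pers_pred) ** 2)
print(clim_mse, pers_mse)  # persistence usually wins on a random walk
```

Any model worth keeping should beat both of these numbers.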
| null | CC BY-SA 3.0 | null | 2010-11-02T13:02:40.370 | 2016-08-05T15:19:02.257 | 2016-08-05T15:19:02.257 | 22047 | 223 | null |
4138 | 1 | null | null | 9 | 1186 | Given an $n$-dimensional multivariate normal distribution $X=(x_i) \sim \mathcal{N}(\mu, \Sigma)$ with mean $\mu$ and covariance matrix $\Sigma$, what is the probability that $\forall j\in \{1,\ldots,n\}:x_1 \geq x_j$?
| What is the probability that random variable $x_1$ is maximum of random vector $X=(x_i)$ from a multivariate normal distribution? | CC BY-SA 2.5 | null | 2010-11-02T13:57:43.190 | 2013-01-10T15:06:45.487 | 2010-11-10T07:40:41.250 | 930 | 767 | [
"probability",
"multivariate-analysis",
"normal-distribution"
] |
4139 | 2 | null | 4138 | 3 | null | Answer updated thanks to remarks from Whuber and Srikant
>
Proposition
Let $C=[C_1,C_2]$ be an $n\times 2$ matrix with columns $C_1,C_2\in\mathbb{R}^n$, and let $X^0=(X^0_i)\sim \mathcal{N} (0,\Sigma)$ be $\mathbb{R}^n$-valued. Let $\Sigma^Y={}^tC\Sigma C=(\sigma^Y_{ij})$. Then, for $u_1,u_2\in\mathbb{R}$
$P(^tC_1X^0\geq u_1\text{ and } ^tC_2X^0\geq u_2)=\mathbb{E}\left [ \bar{\Phi}\left (\frac{u_2-\frac{\sigma^Y_{21}}{\sigma^Y_{11}}^tC_1X^0}{\sqrt{\sigma^Y_{22}-\frac{\sigma^Y_{21}\sigma^Y_{12}}{\sigma^Y_{11}}}} \right )1_{ ^tC_1X^0\geq u_1 } \right ]$
where $\bar{\Phi}(z)=P(\mathcal{N}(0,1)>z)$
Answer to the question when the dimension is 3
Assume $i=1$, $\Sigma=(\sigma_{ij})$.
The probability $P(X_1>X_2 \text{ and }X_1>X_3)$ is obtained using the preceding proposition with $X^0=X-\mu$, $C_1=(1,-1,0)$, $C_2=(1,0,-1)$, $u_1=\mu_2-\mu_1$ and $u_2=\mu_3-\mu_1$. This gives
$\sigma^Y_{11}=\sigma_{11}+\sigma_{22}-2\sigma_{12}$
$\sigma^Y_{22}=\sigma_{11}+\sigma_{33}-2\sigma_{13}$
$\sigma^Y_{12}=\sigma_{11}+\sigma_{23}-\sigma_{31}-\sigma_{21}$
Proof of the proposition
Assume $c\in\mathbb{R}^n$ and $\Sigma$ has full rank. It is easy to show that for any $u\in\mathbb{R}$
$$P(^tcX^0>u)=\bar{\Phi} \left (\frac{u}{\|\Sigma^{1/2}c\|_2} \right )$$
Let us denote $Y_1=^tC_1X^0,Y_2=^tC_2X^0$. From the correlation theorem, since $Y=(Y_1,Y_2)$ is centered gaussian in $\mathbb{R}^2$ with covariance $\Sigma^Y$
then $Y_2|Y_1$ is gaussian with mean $\frac{\sigma^Y_{21}}{\sigma^Y_{11}}Y_1$ and variance $\sigma^Y_{22}-\frac{\sigma^Y_{21}}{\sigma^Y_{11}}\sigma^Y_{12}$.
This, with
$P(Y_1>u_1 \text{ and } Y_2>u_2)=\mathbb{E}\left [\mathbb{E}[1_{Y_2\geq u_2 }|Y_1] 1_{Y_1\geq u_1 }\right ] $
gives the desired result.
How to extend the proposition
If we want to be able to solve the initial problem with dimension larger than $3$,
we need to compute
$P(\forall j \; ^tc_jX^0\geq u_j) $
(for well chosen $u_j$). Set $Y=(Y_1,\dots,Y_n)$ with $Y_j={}^tc_jX^0$, centered $\mathbb{R}$-valued Gaussians.
You can use the correlation theorem iteratively to derive the distribution of $Y_1|Y_{2:n}$, $Y_2|Y_{3:n}$, ... This may give something like a recursive formulation of the solution to the proposition when $C$ is $n\times p$ (recursion on $p$).
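(Not part of the original answer.) A quick Monte Carlo sanity check of the quantity in question: for an exchangeable multivariate normal, the probability that $X_1$ is the maximum must equal $1/n$ by symmetry, so a simulation should return roughly $1/3$ when $n=3$. The covariance matrix below is a hypothetical example:

```python
import numpy as np

# Exchangeable trivariate normal: equal means and equal correlations,
# so P(X_1 is the maximum) = 1/3 by symmetry.
rng = np.random.default_rng(42)
mu = np.zeros(3)
Sigma = np.array([[1.0, 0.3, 0.3],
                  [0.3, 1.0, 0.3],
                  [0.3, 0.3, 1.0]])
X = rng.multivariate_normal(mu, Sigma, size=200_000)
p_hat = np.mean(np.argmax(X, axis=1) == 0)  # estimate of P(X_1 >= X_j for all j)
```

The same simulation can be used to check the proposition's formula for non-exchangeable covariance matrices.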
| null | CC BY-SA 2.5 | null | 2010-11-02T15:22:26.983 | 2010-11-04T12:13:00.860 | 2010-11-04T12:13:00.860 | 223 | 223 | null |
4140 | 1 | null | null | 6 | 254 | I feel I'm pretty new to this, since some time has passed since my last statistics assignment, so please bear with me.
I am analyzing the results of a biological experiment. Basically, I'm looking at some graph over a genome, where each position in the genome has a value, and I'm looking for local minima (peaks).
Now, I have to set some threshold since relatively high local minima also occur by chance. I can simulate the experiment computationally and get new data, but this is quite resource demanding (I can't run 1000 simulations, perhaps 100 or even only 20).
What I'm currently doing is run some simulations; for each simulation: find all local minima, build a CDF for the local minima values. I then average all the simulation CDFs over all simulations so I have one 'average' CDF (CDF_simulations) which is supposed to show how would local minima distribute if everything in my genome is random.
I do the same for the real data: find minima and build CDFs for their values, so I now have two CDFs - one for the real data and one for the average of the simulations.
I now search for the max x such that CDF_simulations(x) / CDF_realdata(x) is < 10%. I report all minima in the real data with value < x as "true".
I think this method should get me to a FP rate of 10%.
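(Added for illustration — not from the original question.) The procedure described above can be sketched as follows, with made-up minima values standing in for the genome data:

```python
import numpy as np

# Made-up minima values: the real data contains an excess of deep minima.
rng = np.random.default_rng(1)
sim_minima = rng.normal(0, 1, 5000)                       # pooled simulations
real_minima = np.concatenate([rng.normal(0, 1, 900),      # background
                              rng.normal(-4, 0.5, 100)])  # true signal

def ecdf(sample, x):
    return np.mean(sample <= x)

# Scan candidate thresholds; keep the largest x for which the estimated
# ratio CDF_simulations(x) / CDF_realdata(x) stays below 10%.
grid = np.linspace(real_minima.min(), real_minima.max(), 500)
threshold = None
for x in grid:
    denom = ecdf(real_minima, x)
    if denom > 0 and ecdf(sim_minima, x) / denom < 0.10:
        threshold = x

n_called = int(np.sum(real_minima < threshold))  # minima reported as "true"
```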
- Does this make sense?
- What is this method called and where can I find more about it?
- How should I scan the CDFs to find the right x? Sometimes for low x's CDF_simulations(x) > CDF_realdata(x).
- Where does the number of simulations come into play? Does it make sense to simply build an averaged CDF as I did?
I think this is quite common, and the name FDR also pops to mind, but when reading about FDR I couldn't exactly figure how to apply it to my situation.
Any comments and references (hopefully user-friendly ones) will be appreciated!
Thanks
Dave
| How can I control the false positives rate? | CC BY-SA 2.5 | 0 | 2010-11-02T15:27:13.997 | 2013-02-12T23:20:04.833 | 2010-11-02T18:25:52.760 | 449 | 634 | [
"multiple-comparisons",
"statistical-significance",
"cumulative-distribution-function"
] |
4141 | 2 | null | 4125 | 19 | null | To get you started: $\bar y = 1/n \sum y_i = 1/n \sum (\hat y_i + \hat \epsilon_i)$; then plug in how the $\hat y_i$ are estimated from the $x_i$, and you're almost done.
EDIT: since no one replied, here the rest for sake of completeness:
the $\hat y_i$ are given by $\hat y_i=\hat \beta_0 + \hat \beta_1 x_{i1} + \ldots + \hat \beta_n x_{in}$, so you get $\bar y = 1/n \sum (\hat \beta_0 + \hat \beta_1 x_{i1} + \ldots + \hat \beta_n x_{in})$ (the $\hat \epsilon_i$ sum to zero) and finally:
$\bar y = \hat \beta_0 + \hat \beta_1 \bar x_{1} + \ldots + \hat \beta_n \bar x_{n}$. And that's it: The regression line goes through the point $(\bar x, \bar y)$
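A quick numeric check of the result (not part of the original answer; uses Python with simulated data):

```python
import numpy as np

# Fit OLS with an intercept and confirm the line passes through (x-bar, y-bar).
rng = np.random.default_rng(7)
x = rng.normal(0, 2, size=50)
y = 3 + 1.5 * x + rng.normal(0, 1, size=50)

slope, intercept = np.polyfit(x, y, 1)          # least-squares line
pred_at_xbar = intercept + slope * x.mean()
# pred_at_xbar equals y.mean() up to floating-point error
```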
| null | CC BY-SA 2.5 | null | 2010-11-02T15:33:09.130 | 2010-11-05T13:59:57.563 | 2010-11-05T13:59:57.563 | 1573 | 1573 | null |
4142 | 1 | null | null | 3 | 1017 | I have two statistical tests which are inverse of each other, meaning that the null hypothesis are reversed. I want to use both the tests to take a decision. For this purpose, I am planning to do the following:
- If both the tests point to same result (by, say, rejecting the null hypothesis in test A and not rejecting the null hypothesis in test B, or vice versa), then I go ahead and take that decision
- On the other hand, if the results of the two tests are conflicting, I measure the difference of the p-values from alpha (say, 0.05) for each test, and go with the one having the largest deviation (I have some tie-breaking rules, but let's leave that here)
It sounds reasonable to me, but is there some statistical ground in interpreting p-values like this? I certainly haven't seen similar application before (in my limited exposure).
Edit: Let me clarify the question some more with the actual context and tests. I am testing for unit roots in time series, in order to determine if the series needs to be differenced to render it mean-stationary. The particular tests are the KPSS test, with the null of stationarity (no unit root), and the ADF test, with the null that a unit root exists. Although the null hypotheses are related (and inverse of each other), the test regressions and the statistics are quite different in my opinion.
| Using the distance of p-value from alpha | CC BY-SA 2.5 | null | 2010-11-02T16:49:01.617 | 2010-11-03T00:10:51.297 | 2010-11-02T18:18:32.637 | 528 | 528 | [
"time-series",
"hypothesis-testing",
"statistical-significance"
] |
4143 | 2 | null | 4142 | 4 | null | I have a question based on what you asked and assuming I understood you correctly.
>
Why do you need two null hypotheses to decide about one decision?
Are you doing this just because null hypotheses can only be rejected and never accepted?
Now, the answer to your question: using the p-value as a measure of the "truth of the null" (similar to measuring the distance of the p-value from $\alpha$) is not justified from a statistical perspective. This can lead to wrong decisions. [Here](http://www.stat.duke.edu/~berger/papers/99-13.html) is a great reference explaining the dangers of using p-values in the way you are proposing.
| null | CC BY-SA 2.5 | null | 2010-11-02T17:12:03.430 | 2010-11-02T17:12:03.430 | null | null | 1307 | null |
4144 | 2 | null | 4120 | 8 | null | This is Analysis of Variance.
After all, consider one of the $y$'s, with standard deviation $s$, and let its predicted value (which depends on the corresponding $x$) be $f$. The original aim is to vary $f$ (within constraints depending on the model; often $f$ is required to be a linear function of $x$) to minimize the sum of squared residuals. Suppose we had the original dataset available. Let the values summarized by a particular $y$ be $y_1, y_2, \ldots, y_k$, so that $y$ is their mean and $s$ is their standard deviation. Their contribution to the sum of squares of residuals equals
$$\sum_{i=1}^k{\left( y_i - f \right)^2} = k \left(y - f \right)^2 + k' s^2 \text{.}$$
(I have written $k'$ because its value depends on how you compute your standard deviations: it is $k$ for one convention and $k-1$ for another.) Because the last term does not depend on $f$, it does not affect the minimization: we can neglect it.
The other term on the right hand side shows that you want to perform a weighted least squares calculation with weights equal to the counts $k$ (the "N_Y" column of the data). Equivalently, you can create a synthetic dataset by making $N_Y$ copies of each datum $(X,Y)$ and performing ordinary least squares regression.
Note that this analysis presumes nothing about the form of the prediction function: it can include any explanatory variables you like and have any form, even nonlinear ones.
Note also that the weighting does not depend on the standard deviations. This is because implicitly we have assumed that the variance of the y's is constant, so that all of the differences among the observed standard deviations are attributed to random fluctuations. This hypothesis can be tested in the usual ways (e.g., with F-tests). For the example data it holds up: those standard deviations do not vary significantly.
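A small numeric illustration of the equivalence (not in the original answer; the group means and counts are made up):

```python
import numpy as np

# Weighted least squares with weights equal to the counts k reproduces
# ordinary least squares on the data set in which each (x, y) pair is
# repeated k times.
x = np.array([1.0, 2.0, 3.0, 4.0])
ybar = np.array([2.1, 3.9, 6.2, 7.8])   # group means (hypothetical)
k = np.array([5, 3, 8, 4])              # group sizes, the "N_Y" column

# WLS via rescaling: multiply each row of [1, x] and each y by sqrt(k).
A = np.column_stack([np.ones_like(x), x])
w = np.sqrt(k)
beta_wls, *_ = np.linalg.lstsq(A * w[:, None], ybar * w, rcond=None)

# Equivalent "expanded" OLS: k copies of each datum.
x_big = np.repeat(x, k)
A_big = np.column_stack([np.ones_like(x_big), x_big])
beta_ols, *_ = np.linalg.lstsq(A_big, np.repeat(ybar, k), rcond=None)
```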
Edit
I see, in retrospect, that this answer merely reiterates @onestop's pithy response. I'm leaving it up because it demonstrates why @onestop is correct.
| null | CC BY-SA 2.5 | null | 2010-11-02T17:15:39.277 | 2010-11-02T17:24:09.077 | 2010-11-02T17:24:09.077 | 919 | 919 | null |
4145 | 2 | null | 2142 | 3 | null | The question is about marginal effects (of X on Y), I think, not so much about interpreting individual coefficients. As folk have usefully noted, these are only sometimes identifiable with an effect size, e.g. when there are linear and additive relationships.
If that's the focus then the (conceptually, if not practically) simplest way to think about the problem would seem to be this:
To get the marginal effect of X on Y in a linear normal regression model with no interactions, you can just look at the coefficient on X. But that's not quite enough, since it is estimated, not known. In any case, what one really wants for marginal effects is some kind of plot or summary that provides a prediction about Y for a range of values of X, and a measure of uncertainty. Typically one might want the predicted mean Y and a confidence interval, but one might also want predictions for the complete conditional distribution of Y for a given X. That distribution is wider than the fitted model's sigma estimate because it takes into account uncertainty about the model coefficients.
There are various closed form solutions for simple models like this one. For current purposes we can ignore them and think instead more generally about how to get that marginal effects graph by simulation, in a way that deals with arbitrarily complex models.
Assume you want the effects of varying X on the mean of Y, and you're happy to fix all the other variables at some meaningful values. For each new value of X, take a size B sample from the distribution of model coefficients. An easy way to do so in R is to assume that it is Normal with mean `coef(model)` and covariance matrix `vcov(model)`. Compute a new expected Y for each set of coefficients and summarize the lot with an interval. Then move on to the next value of X.
It seems to me that this method should be unaffected by any fancy transformations applied to any of the variables, provided you also apply them (or their inverses) in each sampling step. So, if the fitted model has log(X) as a predictor then log your new X before multiplying it by the sampled coefficient. If the fitted model has sqrt(Y) as a dependent variable then square each predicted mean in the sample before summarizing them as an interval.
In short, more programming but less probability calculation, and clinically comprehensible marginal effects as a result. This 'method' is sometimes referred to as CLARIFY in the political science literature, but it is quite general.
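A sketch of the simulation loop (not from the original answer; `coef` and `vcov` below are hypothetical stand-ins for R's `coef(model)` and `vcov(model)`):

```python
import numpy as np

rng = np.random.default_rng(3)
coef = np.array([1.0, 0.5])                      # intercept, slope (made up)
vcov = np.array([[0.04, -0.01],
                 [-0.01, 0.02]])                 # made-up covariance matrix

x_grid = np.linspace(0, 10, 5)                   # values of X to vary
B = 10_000                                       # size of coefficient sample

# Sample B coefficient vectors from N(coef, vcov) ...
draws = rng.multivariate_normal(coef, vcov, size=B)
# ... and compute the expected Y for each draw at each value of X.
X = np.column_stack([np.ones_like(x_grid), x_grid])
pred = draws @ X.T                               # shape (B, len(x_grid))
lo, hi = np.percentile(pred, [2.5, 97.5], axis=0)  # 95% interval per X
```

Transformations of X or Y are applied inside this loop exactly as described in the text.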
| null | CC BY-SA 2.5 | null | 2010-11-02T17:21:52.077 | 2010-11-02T17:21:52.077 | null | null | 1739 | null |
4146 | 1 | 4149 | null | 10 | 1760 | When developing a general purpose time-series software, is it a good idea to make it scale invariant? How would one do that?
I took a time series of around 40 points, multiplied it by factors ranging from 10E-9 to 10E3, and then ran it through the ARIMA functions of Forecast Pro and Minitab. In Forecast Pro, all runs resulted in the same answer (automatic modeling), whereas in Minitab they did not. I am not sure what Forecast Pro does, but it might just scale all the numbers up or down to a certain range (say, the 100s) before running the model. Is this a good idea in general?
| Scale-invariant analysis of time series | CC BY-SA 3.0 | null | 2010-11-02T17:33:15.593 | 2012-02-15T18:02:41.487 | 2012-02-15T18:02:41.487 | 528 | 528 | [
"time-series",
"scale-invariance"
] |
4147 | 2 | null | 4052 | 8 | null | Here is code in R that illustrates the simulation of whuber's [answer](https://stats.stackexchange.com/questions/4052/how-does-the-power-of-a-logistic-regression-and-a-t-test-compare/4056#4056). Feedback on improving my R code is more than welcome.
```
N <- 900          # Total number of data points
m <- 30           # Size of draw per set
n <- 30           # Number of sets
p_null <- 0.70        # Null hypothesis
p_alternate <- 0.74   # Alternate hypothesis

tot_iter <- 10000
set.seed(1)         # Initialize random seed
null_rejected <- 0  # Count of rejections

for (iter in 1:tot_iter) {
  means1 <- numeric(m)
  means2 <- numeric(m)
  for (obs in 1:m) {
    means1[obs] <- mean(rbinom(n, 1, p_null))
    means2[obs] <- mean(rbinom(n, 1, p_alternate))
  }
  if (t.test(means1, means2, alternative = "l")$p.value <= 0.05) {
    null_rejected <- null_rejected + 1
  }
}

power <- null_rejected / tot_iter
```
| null | CC BY-SA 2.5 | null | 2010-11-02T19:19:49.510 | 2010-11-02T19:19:49.510 | 2017-04-13T12:44:45.783 | -1 | null | null |
4148 | 2 | null | 4142 | 4 | null | I think you are confusing the concepts of stationarity vs. unit roots. A unit root implies non-stationarity, but the converse need not hold. Thus, KPSS and ADF are not both tests for unit roots: KPSS tests for stationarity, whereas ADF tests for one particular form of non-stationarity (i.e., the existence of unit roots).
I would suggest going through these [notes](http://www.econ.ku.dk/metrics/Econometrics2_07_I/LectureNotes/unitroottest.pdf); in particular, see Figure 1 below, excerpted from the notes, for examples of non-stationary series whose non-stationarity is not due to unit roots:

| null | CC BY-SA 2.5 | null | 2010-11-02T19:45:15.580 | 2010-11-02T19:45:15.580 | null | null | null | null |
4149 | 2 | null | 4146 | 7 | null | If the software computes the sum of squared errors in optimization (and most will), then you can run into trouble with very large numbers or very small numbers because of how floating point numbers are stored. The same applies to any statistical modelling, not just time series analysis. One way to avoid the problem is to scale the data before running the model, and then re-scale the results. For most time series models, including all linear models, that will work. Some nonlinear models won't scale however.
When I'm analysing data I will often scale the data myself, not just to prevent possible optimization problems but also to make graphs and tables easier to read.
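To illustrate the scale-then-rescale point on a linear model (a sketch, not from the original answer; the data are made up):

```python
import numpy as np

# Data on a huge raw scale (hypothetical).
rng = np.random.default_rng(5)
x = np.arange(40, dtype=float)
y = 1e9 * (2.0 + 0.3 * x + rng.normal(0, 1, size=40))

# Fit on scaled data, then rescale the coefficients back.
s = y.std()
slope_s, icept_s = np.polyfit(x, y / s, 1)
slope, icept = slope_s * s, icept_s * s

# For a linear model this recovers the direct fit.
slope_raw, icept_raw = np.polyfit(x, y, 1)
```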
| null | CC BY-SA 2.5 | null | 2010-11-02T22:39:26.827 | 2010-11-02T22:39:26.827 | null | null | 159 | null |
4150 | 1 | null | null | 4 | 1533 | I have about 1000 data points from some thick-tailed distribution that I would like to fit a parametrized distribution to. From my data, I've made some adjustments and constructed an empirical distribution (so I have percentiles).
What is the best way to fit a mixture of parametrized distribution functions (pareto, lognormal, gamma, etc) to this empirical distribution?
So far I have been using Excel, maximizing the grouped MLE function with Solver, optimizing over the parameters subject to the constraint sum(weights) = 1. I have R as well but am new to it. It is pretty obvious that Excel is getting stuck in local maxima.
How would you maximize a MLE function for a mixture distribution?
| Optimization of MLE for mixture problems | CC BY-SA 2.5 | null | 2010-11-02T23:59:15.377 | 2011-03-27T20:36:29.907 | 2011-03-27T16:02:32.933 | 919 | null | [
"maximum-likelihood",
"curve-fitting",
"expectation-maximization",
"mixture-distribution"
] |
4151 | 2 | null | 4142 | 1 | null | I don't know anything about unit roots, but your problem may be analogous to an equivalence test where, say, a new drug is tested for inferiority, equivalence or superiority compared to another drug. That is a reasonably common issue in clinical testing and so there is quite a literature on it.
You might like to start here: [http://www.graphpad.com/library/BiostatsSpecial/article_182.htm](http://www.graphpad.com/library/BiostatsSpecial/article_182.htm)
| null | CC BY-SA 2.5 | null | 2010-11-03T00:10:51.297 | 2010-11-03T00:10:51.297 | null | null | 1679 | null |
4152 | 2 | null | 4150 | 2 | null | If you want to fit a univariate distribution to your data, try the `fitdistr` function in the MASS package in `R`. [Here](http://rss.acs.unt.edu/Rdoc/library/MASS/html/fitdistr.html) is the information on how to use it. I am assuming that you have the full data set in addition to the quantiles.
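Note that `fitdistr` fits a single parametric family; for a mixture, the standard approach is the EM algorithm (e.g., via the `mixtools` or `flexmix` packages in R). As an illustration of the E/M pattern — not specific to the asker's Pareto/lognormal/gamma components — here is a minimal EM for a two-component normal mixture in Python with simulated data:

```python
import numpy as np

# Simulated data from a two-component mixture.
rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(0, 1, 600), rng.normal(5, 1, 400)])

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Initial guesses for weights, means, standard deviations.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 6.0])
sd = np.array([1.0, 1.0])

for _ in range(200):
    # E-step: posterior responsibility of each component for each point.
    dens = w * np.column_stack([normal_pdf(data, mu[j], sd[j]) for j in range(2)])
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted maximum-likelihood updates.
    nk = resp.sum(axis=0)
    w = nk / data.size
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)
```

Swapping the component density (and its weighted MLE update) gives the same scheme for other families; multiple random restarts help with the local maxima seen with Excel's Solver.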
| null | CC BY-SA 2.5 | null | 2010-11-03T00:20:02.080 | 2010-11-03T00:20:02.080 | null | null | 1307 | null |
4153 | 2 | null | 3911 | 6 | null | I think the premise of this question is flawed because it denies the distinction between the uncertain and the known.
Describing a coin flip provides a good analogy. Before the coin is flipped, the outcome is uncertain; afterwards, it is no longer "hypothetical." Confusing this fait accompli with the actual situation we wish to understand (the behavior of the coin, or decisions that are to be made as a result of its outcome) essentially denies a role for probability in understanding the world.
This contrast is thrown in sharp relief within an experimental or regulatory arena. In such cases the scientist or the regulator know they will be faced with situations whose outcomes, at any time beforehand, are unknown, yet they must make important determinations such as how to design the experiment or establish the criteria to use in determining compliance with regulations (for drug testing, workplace safety, environmental standards, and so on). These people and the institutions for which they work need methods and knowledge of the probabilistic characteristics of those methods in order to develop optimal and defensible strategies, such as good experimental designs and fair decision procedures that err as little as possible.
Confidence intervals, despite their classically poor justification, fit into this decision-theoretic framework. When a method of constructing a random interval has a combination of good properties, such as assuring a minimal expected coverage of the interval and minimizing the expected length of the interval--both of them a priori properties, not a posteriori ones--then over a long career of using that method we can minimize the costs associated with the actions that are indicated by that method.
| null | CC BY-SA 2.5 | null | 2010-11-03T00:38:16.730 | 2010-11-03T00:38:16.730 | null | null | 919 | null |