Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10579 | 1 | null | null | 4 | 249 | I have 4 groups each consisting of 1000 recent college graduates, and I am studying employment immediately after graduation. 2 groups took a business class in college, and 2 groups took a math course in college. Group 1 is a control that took neither a business class nor a math course. I want to conduct a statistical analysis to find out whether either class had a significant effect on a graduate's chances of obtaining a job immediately after school. Here's the data:
Group 1: No Math, No Business
1000 subjects, 50 employed
Group 2: No Math, Yes Business
1000 subjects, 60 employed
Group 3: Yes Math, No Business
1000 subjects, 75 employed
Group 4: Yes Math, Yes Business
1000 subjects, 100 employed
| Testing significant effect in 2 by 2 factor design on a binary outcome | CC BY-SA 3.0 | null | 2011-05-09T22:20:50.180 | 2011-05-11T15:29:29.850 | 2011-05-11T15:29:29.850 | 183 | 4538 | [
"statistical-significance"
] |
10580 | 2 | null | 10432 | 5 | null | It sounds like any linear classifier will do what you need. Suppose you have $N$ features and the value of feature $i$ is $f_i$. Then a linear classifier will compute a score
$$s = \sum_i w_i f_i + o$$ (where $o$ is the offset). Then, if $s > t$ (where $t$ is some threshold), the sample belongs to the class (the group), and if $s < t$, it doesn't. Note that there is a single threshold applied to the entire score (rather than to individual feature values), so indeed a deficiency in one parameter can be compensated for by abundance in another. The weights are intuitively interpretable, in the sense that the higher the weight, the more important (or more decisive) that feature is.
There are a lot of off-the-shelf linear classifiers that can do that, including SVM, LDA (linear discriminant analysis), linear neural networks, and many others. I'd start by running linear SVM because it works well in a lot of cases and can tolerate limited training data. There are also a lot of packages in many environments (like Matlab and R), so you can easily try it. The downside of SVM is that it can be computationally heavy, so if you need to learn a lot of classes, it might be less appropriate.
If you want to preserve some of the threshold behavior you currently have, you can pass the feature values through a sigmoid with the threshold in the right location. E.g. for a feature $i$ for which you currently use a threshold of $t_i$, first compute
$$g_i = \frac{1}{1 + \exp(f_i - t_i)},$$
and then learn a linear classifier using $g$'s rather than $f$'s. This way, the compensating behavior will only happen near the threshold, and things that are too far away from the threshold cannot be compensated for (which is sometimes desirable).
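As a minimal sketch of this squashing step (the feature values, thresholds, weights, and offset below are made up purely for illustration):

```python
import math

def squash(f, t):
    # Map a raw feature value f through a sigmoid centred at its old threshold t.
    # As written in the formula above, this is *decreasing* in f: values far
    # below t saturate near 1, values far above t saturate near 0.
    return 1.0 / (1.0 + math.exp(f - t))

# Hypothetical feature values and per-feature thresholds (illustrative numbers).
features = [4.9, 0.2]
thresholds = [5.0, 1.0]
g = [squash(f, t) for f, t in zip(features, thresholds)]

# A linear score on the squashed features, with made-up weights and offset.
weights, offset = [1.0, 1.0], -1.0
score = sum(w * gi for w, gi in zip(weights, g)) + offset
```

Because the sigmoid saturates, features far from their old thresholds can no longer trade off against other features, which is exactly the behavior described above.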
Another thing that you could try is to use probabilistic classifiers like Naive Bayes or TAN. Naive Bayes is almost like a linear classifier, except it computes
$$s = \sum_i w^i_{f_i}.$$
So there is still a sum of weights. These weights depend on the feature values $f_i$, but not by multiplication like in a usual linear classifier. The score in this case is the log-probability, and the weights are the contributions of the individual features into that log-probability. The disadvantage of using this in your case is that you will need many bins for your feature values, and then learning may become difficult. There are ways around that (for example, using priors), but since you have no experience with this, it might be more difficult.
Regarding terminology: what you called 'test set' is usually called a 'training set' in this context, and what you called 'new data' is called the 'test set'.
For a book, I'd read "Pattern Classification" by Duda, Hart, and Stork. The first chapter is a very good introduction for beginners.
| null | CC BY-SA 3.0 | null | 2011-05-09T22:51:32.127 | 2011-05-09T22:58:25.560 | 2011-05-09T22:58:25.560 | 3369 | 3369 | null |
10581 | 2 | null | 10578 | 39 | null | Here I explain why the asymptotic variance of the maximum likelihood estimator is the Cramer-Rao lower bound. Hopefully this will provide some insight as to the relevance of the Fisher information.
Statistical inference proceeds with the use of a likelihood function $\mathcal{L}(\theta)$ which you construct from the data. The point estimate $\hat{\theta}$ is the value which maximizes $\mathcal{L}(\theta)$. The estimator $\hat{\theta}$ is a random variable, but it helps to realize that the likelihood function $\mathcal{L}(\theta)$ is a "random curve".
Here we assume iid data drawn from a distribution $f(x|\theta)$, and we define the likelihood
$$
\mathcal{L}(\theta) = \frac{1}{n}\sum_{i=1}^n \log f(x_i|\theta)
$$
The parameter $\theta$ has the property that it maximizes the value of the "true" likelihood, $\mathbb{E}\mathcal{L}(\theta)$. However, the "observed" likelihood function $\mathcal{L}(\theta)$ which is constructed from the data is slightly "off" from the true likelihood. Yet as you can imagine, as the sample size increases, the "observed" likelihood converges to the shape of the true likelihood curve. The same applies to the derivative of the likelihood with respect to the parameter, the score function $\partial \mathcal{L}/\partial \theta$. (Long story short, the Fisher information determines how quickly the observed score function converges to the shape of the true score function.)
At a large sample size, we assume that our maximum likelihood estimate $\hat{\theta}$ is very close to $\theta$. We zoom into a small neighborhood around $\theta$ and $\hat{\theta}$ so that the likelihood function is "locally quadratic".
There, $\hat{\theta}$ is the point at which the score function $\partial \mathcal{L}/\partial \theta$ crosses zero. In this small region, we treat the score function as a line, one with slope $a$ and random intercept $b$ at $\theta$. We know from the equation for a line that
$$a(\hat{\theta} - \theta) + b = 0$$
or
$$
\hat{\theta} = \theta - b/a .
$$
From the consistency of the MLE, we know that
$$
\mathbb{E}(\hat{\theta}) = \theta
$$
in the limit.
Therefore, asymptotically
$$
nVar(\hat{\theta}) = nVar(b/a)
$$
It turns out that the slope varies much less than the intercept, and asymptotically, we can treat the score function as having a constant slope in a small neighborhood around $\theta$. Thus we can write
$$
nVar(\hat{\theta}) = \frac{1}{a^2}nVar(b)
$$
So, what are the values of $a$ and $nVar(b)$? It turns out that due to a marvelous mathematical coincidence, they are the very same quantity (modulo a minus sign), the Fisher information.
$$-a = \mathbb{E}\left[-\frac{\partial^2 \mathcal{L}}{\partial \theta^2}\right] = I(\theta)$$
$$nVar(b) = nVar\left[\frac{\partial \mathcal{L}}{\partial \theta}\right] = I(\theta)$$
Thus,
$$
nVar(\hat{\theta}) = \frac{1}{a^2}nVar(b) = (1/I(\theta)^2)I(\theta) = 1/I(\theta)
$$
asymptotically: the Cramer-Rao lower bound. (Showing that $1/I(\theta)$ is the lower bound on the variance of an unbiased estimator is another matter.)
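As a quick numerical illustration (not part of the derivation), here is a Monte Carlo check of $nVar(\hat{\theta}) \approx 1/I(\theta)$ for the Bernoulli family, where the MLE is just the sample mean:

```python
import random
import statistics

# For Bernoulli(theta), the MLE is the sample mean and the per-observation
# Fisher information is I(theta) = 1 / (theta * (1 - theta)).
random.seed(0)
theta, n, reps = 0.3, 200, 5000

mles = []
for _ in range(reps):
    successes = sum(1 for _ in range(n) if random.random() < theta)
    mles.append(successes / n)

n_var = n * statistics.pvariance(mles)  # should approach 1 / I(theta)
crlb = theta * (1 - theta)              # 1 / I(theta) = 0.21
```

With these settings `n_var` lands within Monte Carlo error of `crlb`, in line with the asymptotic argument above.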
| null | CC BY-SA 3.0 | null | 2011-05-09T23:09:17.680 | 2011-05-10T15:09:16.277 | 2011-05-10T15:09:16.277 | 3567 | 3567 | null |
10582 | 2 | null | 10579 | 3 | null | This seems like a classic problem for [logistic regression](http://en.wikipedia.org/wiki/Logistic_regression). Rather than specifying these groups, turn math and business coursework into predictors for employment status. This is easy to code up in pretty much whatever statistical software you have around, although you'll want to look into different contrasts (it sounds like you'd want to use treatment contrasts with "no class" set as the reference level).
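To make this concrete, here is a from-scratch sketch using only the cell counts from the question (in practice you would simply call `glm(cbind(employed, n - employed) ~ math + business, family = binomial)` in R; the hand-rolled Newton-Raphson below is purely illustrative):

```python
import math

# Cell counts from the question: (math, business, employed, total).
cells = [(0, 0, 50, 1000), (0, 1, 60, 1000),
         (1, 0, 75, 1000), (1, 1, 100, 1000)]

def solve3(a, b):
    # Gaussian elimination for a 3x3 system (no pivoting; fine for an SPD matrix).
    a = [row[:] for row in a]
    b = b[:]
    for i in range(3):
        for j in range(i + 1, 3):
            m = a[j][i] / a[i][i]
            for k in range(i, 3):
                a[j][k] -= m * a[i][k]
            b[j] -= m * b[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (b[i] - sum(a[i][k] * x[k] for k in range(i + 1, 3))) / a[i][i]
    return x

beta = [0.0, 0.0, 0.0]  # intercept, math effect, business effect (log-odds)
for _ in range(25):     # Newton-Raphson on the grouped binomial log-likelihood
    grad = [0.0, 0.0, 0.0]
    hess = [[0.0] * 3 for _ in range(3)]
    for m, biz, k, n in cells:
        x = (1.0, float(m), float(biz))
        p = 1.0 / (1.0 + math.exp(-sum(b * xi for b, xi in zip(beta, x))))
        w = n * p * (1.0 - p)
        for i in range(3):
            grad[i] += x[i] * (k - n * p)
            for j in range(3):
                hess[i][j] += w * x[i] * x[j]
    beta = [b + s for b, s in zip(beta, solve3(hess, grad))]

or_math, or_business = math.exp(beta[1]), math.exp(beta[2])
```

On these data both coefficients come out positive, with the math effect the larger of the two (odds ratios roughly 1.6 and 1.3).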
| null | CC BY-SA 3.0 | null | 2011-05-09T23:28:35.240 | 2011-05-09T23:28:35.240 | null | null | 71 | null |
10583 | 2 | null | 10579 | 3 | null | The self-selection problem needs to be addressed here. People who choose to take business classes might be more likely to get a job anyway. So the sample you have is not random, and the inferences you draw might be incorrect. The Heckman procedure is used to correct for self-selection bias; I am not sure whether it is applicable to a discrete dependent variable, though.
| null | CC BY-SA 3.0 | null | 2011-05-09T23:52:24.900 | 2011-05-09T23:52:24.900 | null | null | 4540 | null |
10585 | 2 | null | 10579 | 2 | null | One of the more basic approaches you could take is a [two-way ANOVA](http://udel.edu/~mcdonald/stattwoway.html) (Page from U Delaware), using Math Class = {Y, N} and Business Class = {Y, N} as your two treatments. You would then perform an analysis of variance on the dependent variable (number of people employed) to determine whether taking a math/business class has an impact on how many people were able to get jobs.
One thing you may want to watch out for, however, is the set of assumptions behind ANOVA, which may not be appropriate here. You can find the assumptions on [this wikipedia page](http://en.wikipedia.org/wiki/Analysis_of_variance#Assumptions_of_ANOVA). Normality of the residuals may be questionable if there was no random assignment, in which case there are [non-parametric methods](http://en.wikipedia.org/wiki/Kruskal%E2%80%93Wallis_test).
| null | CC BY-SA 3.0 | null | 2011-05-10T00:59:05.887 | 2011-05-10T00:59:05.887 | null | null | 1118 | null |
10586 | 2 | null | 10579 | 3 | null | If @EEE's concern can be addressed and you proceed with an hypothesis test, then rather than logistic regression I'd recommend a chi-square test. For a person fairly new to statistical testing, it'll be dramatically easier to conduct, interpret, and explain to an audience. Plus I think it'll give you just about as much information.
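For illustration, the chi-square statistic for the question's $4 \times 2$ table can be computed from first principles (a sketch; in R this is just `chisq.test`):

```python
# Chi-square test of independence on the question's 4 x 2 table
# (employed / not employed, by group), computed by hand.
observed = [(50, 950), (60, 940), (75, 925), (100, 900)]

total = sum(a + b for a, b in observed)    # 4000 graduates overall
col_emp = sum(a for a, _ in observed)      # 285 employed overall
col_not = total - col_emp

chi2 = 0.0
for emp, notemp in observed:
    row = emp + notemp
    e_emp = row * col_emp / total          # expected employed in this group
    e_not = row * col_not / total          # expected not employed
    chi2 += (emp - e_emp) ** 2 / e_emp + (notemp - e_not) ** 2 / e_not

# df = (4 - 1) * (2 - 1) = 3; the 5% critical value is about 7.815.
significant = chi2 > 7.815
```

Here the statistic is about 21.4 on 3 degrees of freedom, so the employment rates clearly differ across the four groups; pinning the effect on math vs. business specifically would still need the factorial structure (e.g. logistic regression as suggested in another answer).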
| null | CC BY-SA 3.0 | null | 2011-05-10T00:59:45.170 | 2011-05-10T00:59:45.170 | null | null | 2669 | null |
10587 | 1 | null | null | 3 | 675 | I lack knowledge of statistical terminology, so I'll try to thoroughly explain my predicament and hope that it is understandable:
I am programming software for technical analysis of financial markets.
This software will receive a variable whose value represents current market conditions.
In this particular case the variable represents a measurement, in standard deviations, of the financial vehicle's price in respect to its moving average. The exact nature of this variable is not relevant to my question - the important part is that it has non-uniform distribution.
What I mean by this is that at any given time the variable has a much greater chance of being somewhere around 0 than of being at one of the extremes.
Now, the point of my software is to find and evaluate similar conditions in the market history. If I look for the exact same value (ex. 2.0021) I will not find it; so what I must do is create an interval around 2.0021 and evaluate whatever values fall within the interval.
Then, when I receive a different input value, I will need to create an interval of a different size, such that approximately the same amount of values will be found within this interval.
## Here is a concrete example:
>
CURRENT VALUE Xa = 2.0021
Let's say I found that `100` values of `X` fall within the interval `1.9992 < X < 2.0050`...
(The size of the interval (obviously) is `0.0058`)
Next I receive another value for X:
>
CURRENT VALUE Xb = 4.5010
Values around 4.5 are much rarer than values around 2.0. Therefore, the interval surrounding `4.5010` must also be larger in order for my software to find close to the same number of results as for Xa;
I am trying to create intervals in my data such that each interval contains the same number of samples within its bounds.
In other words:
The `CLOSER TO 0`, the more data points are present, the `SMALLER` the interval needs to be.
The farther from 0, the fewer data points are present, the greater the interval needs to be in order to contain the same amount of results.
And now to the heart of my question:
How can I formulaically calculate the size of the interval around `X` based on the value of `X`?
It is also crucial to mention that I have a ton (500,000) of real `X` values to work from.
This is what I tried - I feel like I'm close but not quite there:
I created a histogram out of my 1D data.
Since the data is symmetric, and centered on 0, I created another histogram based on the data's absolute value.
[My Histograms](https://i.stack.imgur.com/oPKWc.png)
I do not know where to go from here :(
I tried doing an exponential regression to the data; it fits very well but I do not know how to use the formula to create the sort of bounds within my data that I want.
I seriously regret not taking a Stats class in college...
Can anyone help me out please?
| Uniform frequency from non-uniform (exponential) distribution? | CC BY-SA 3.0 | null | 2011-05-10T01:47:23.723 | 2011-05-10T01:58:11.990 | 2020-06-11T14:32:37.003 | -1 | 4542 | [
"distributions",
"probability",
"confidence-interval",
"histogram"
] |
10588 | 2 | null | 10587 | 3 | null | If your data really is [exponentially distributed](http://en.wikipedia.org/wiki/Exponential_random_variable), find the [maximum likelihood estimate of the rate](http://en.wikipedia.org/wiki/Exponential_random_variable#Maximum_likelihood) $\lambda$, then transform the samples $X_n$ to a sequence $Y_n$ uniformly distributed in $[0,1]$ using the equation $Y_n=1-\mathrm e^{-\lambda X_n}$
| null | CC BY-SA 3.0 | null | 2011-05-10T01:58:11.990 | 2011-05-10T01:58:11.990 | null | null | 4479 | null |
10589 | 2 | null | 10578 | 17 | null | One way that I understand the Fisher information is by the following definition:
$$I(\theta)=\int_{\cal{X}} \frac{\partial^{2}f(x|\theta)}{\partial \theta^{2}}dx-\int_{\cal{X}} f(x|\theta)\frac{\partial^{2}}{\partial \theta^{2}}\log[f(x|\theta)]dx$$
The Fisher Information can be written this way whenever the density $f(x|\theta)$ is twice differentiable. If the sample space $\cal{X}$ does not depend on the parameter $\theta$, then we can use the Leibniz integral formula to show that the first term is zero (differentiate both sides of $\int_{\cal{X}} f(x|\theta)dx=1$ twice and you get zero), and the second term is the "standard" definition. I will take the case when the first term is zero. The cases when it isn't zero aren't much use for understanding Fisher Information.
Now when you do maximum likelihood estimation (insert "regularity conditions" here) you set
$$\frac{\partial}{\partial \theta}\log[f(x|\theta)]=0$$
And solve for $\theta$. So the second derivative says how quickly the gradient is changing, and in a sense "how far" $\theta$ can depart from the MLE without making an appreciable change in the right hand side of the above equation. Another way you can think of it is to imagine a "mountain" drawn on the paper - this is the log-likelihood function. Solving the MLE equation above tells you where the peak of this mountain is located as a function of the random variable $x$. The second derivative tells you how steep the mountain is - which in a sense tells you how easy it is to find the peak of the mountain. Fisher information comes from taking the expected steepness of the peak, and so it has a bit of a "pre-data" interpretation.
One thing that I still find curious is that it's how steep the log-likelihood is and not how steep some other monotonic function of the likelihood is (perhaps related to "proper" scoring functions in decision theory? or maybe to the consistency axioms of entropy?).
The Fisher information also "shows up" in many asymptotic analyses due to what is known as the Laplace approximation. This is basically due to the fact that any function with a "well-rounded" single maximum, raised to a higher and higher power, tends to a Gaussian function $\exp(-ax^{2})$ (similar to the Central Limit Theorem, but slightly more general). So when you have a large sample you are effectively in this position and you can write:
$$f(data|\theta)=\exp(\log[f(data|\theta)])$$
And when you taylor expand the log-likelihood about the MLE:
$$f(data|\theta)\approx [f(data|\theta)]_{\theta=\theta_{MLE}}\exp\left(-\frac{1}{2}\left[-\frac{\partial^{2}}{\partial \theta^{2}}\log[f(data|\theta)]\right]_{\theta=\theta_{MLE}}(\theta-\theta_{MLE})^{2}\right)$$
and that second derivative of the log-likelihood shows up (but in "observed" instead of "expected" form). What is usually done here is to make a further approximation:
$$-\frac{\partial^{2}}{\partial \theta^{2}}\log[f(data|\theta)]=n\left(-\frac{1}{n}\sum_{i=1}^{n}\frac{\partial^{2}}{\partial \theta^{2}}\log[f(x_{i}|\theta)]\right)\approx nI(\theta)$$
This amounts to the usually good approximation of replacing a sample average by an expectation (an integral), but it requires that the data be independent. So for large independent samples (given $\theta$), you can see that the Fisher information is how variable the MLE is, for various values of the MLE.
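As a side note, the "coincidence" that the variance of the score equals the expected negative second derivative can be checked numerically. A Monte Carlo sketch for the Poisson family, where $I(\lambda)=1/\lambda$ (the Poisson sampler is a simple stdlib implementation, adequate for small $\lambda$):

```python
import math
import random

random.seed(2)

def poisson(lam):
    # Knuth's multiplication method: multiply uniforms until below exp(-lam).
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

lam, n = 4.0, 100000
xs = [poisson(lam) for _ in range(n)]

# Score: d/dlam log f(x|lam) = x/lam - 1.  Negative second derivative: x/lam^2.
var_score = sum((x / lam - 1.0) ** 2 for x in xs) / n
neg_hess = sum(x / lam ** 2 for x in xs) / n
# Both estimate I(lam) = 1 / lam = 0.25.
```

Both estimates agree with $1/\lambda$ up to Monte Carlo error, illustrating the two equivalent forms of the Fisher information discussed above.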
| null | CC BY-SA 3.0 | null | 2011-05-10T07:19:38.177 | 2018-04-17T04:12:51.770 | 2018-04-17T04:12:51.770 | 14396 | 2392 | null |
10591 | 1 | 10661 | null | 8 | 2438 | The [Anscombe transform](http://en.wikipedia.org/wiki/Anscombe_transform) is $a(x) = 2\sqrt{x+3/8}$.
Can anyone show me how to prove that an Anscombe-transformed version $Y = a(X)$ of a Poisson distributed random variable $X$ is approximately normally distributed (when $\lambda>4$)?
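Not a proof, but a quick Monte Carlo sanity check of the claim (here with $\lambda = 10$; the Poisson sampler is a simple stdlib implementation):

```python
import math
import random

random.seed(3)

def poisson(lam):
    # Knuth's multiplication method; adequate for moderate lam.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

lam, n = 10.0, 100000
# Anscombe transform of Poisson draws: approximately N(2*sqrt(lam + 3/8), 1).
y = [2.0 * math.sqrt(poisson(lam) + 0.375) for _ in range(n)]

mean_y = sum(y) / n
var_y = sum((yi - mean_y) ** 2 for yi in y) / n  # should be close to 1
```

The sample variance comes out close to 1, which is the variance-stabilizing property the 3/8 constant is chosen to achieve.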
| Anscombe transform and normal approximation | CC BY-SA 3.0 | null | 2011-05-10T10:38:24.523 | 2011-05-12T13:02:02.343 | 2011-05-12T13:02:02.343 | 2970 | 4496 | [
"distributions",
"normal-distribution",
"poisson-distribution"
] |
10592 | 1 | 10637 | null | 9 | 1080 | Given a data matrix $X$ of say 1000000 observations $\times$ 100 features,
is there a fast way to build a tridiagonal approximation
$A \approx cov(X)$ ?
Then one could factor $A = L L^T$,
$L$ all 0 except $L_{i\ i-1}$ and $L_{i i}$,
and do fast decorrelation (whitening) by solving
$L x = x_{white}$.
(By "fast" I mean $O( size\ X )$.)
(Added, trying to clarify): I'm looking for a quick and dirty whitener
which is faster than full $cov(X)$ but better than diagonal.
Say that $X$ is $N$ data points $\times Nf$ features, e.g. 1000000$\times$ 100,
with features 0-mean.
1) build $Fullcov = X^T X$, Cholesky factor it as $L L^T$,
solve $L x = x_{white}$ to whiten new $x$ s.
This is quadratic in the number of features.
2) diagonal: $x_{white} = x / \sigma(x)$
ignores cross-correlations completely.
One could get a tridiagonal matrix from $Fullcov$
just by zeroing all entries outside the tridiagonal,
or not accumulating them in the first place.
And here I start sinking: there must be a better approximation,
perhaps hierarchical, block diagonal → tridiagonal ?
---
(Added 11 May): Let me split the question in two:
1) is there a fast approximate $cov(X)$ ?
No (whuber), one must look at all ${N \choose 2}$ pairs
(or have structure, or sample).
2) given a $cov(X)$, how fast can one whiten new $x$ s ?
Well, factoring $cov = L L^T$, $L$ lower triangular, once,
then solving $L x = x_{white}$
is pretty fast; scipy.linalg.solve_triangular, for example, uses Lapack.
I was looking for a yet faster whiten(), still looking.
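For reference, a toy sketch of step 2 (in practice `scipy.linalg.cholesky` and `solve_triangular` do this; the 2-by-2 covariance below is made up for illustration):

```python
import math

def cholesky(a):
    # A = L L^T for a small SPD matrix given as lists of lists.
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def forward_solve(L, x):
    # Solve L y = x by forward substitution.  For dense L this is O(nf^2)
    # per point; if L were lower *bidiagonal* (as it would be for a
    # tridiagonal covariance), the inner sum has one term and it is O(nf).
    y = [0.0] * len(x)
    for i in range(len(x)):
        y[i] = (x[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    return y

cov = [[4.0, 1.0], [1.0, 2.0]]        # toy 2 x 2 "covariance"
L = cholesky(cov)                     # [[2, 0], [0.5, sqrt(1.75)]]
x_white = forward_solve(L, [2.0, 3.0])
```

The point of the bidiagonal case is visible in `forward_solve`: truncating the inner sum to the single subdiagonal term is what would make the per-point whitening linear in the number of features.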
| How to calculate tridiagonal approximate covariance matrix, for fast decorrelation? | CC BY-SA 3.0 | null | 2011-05-10T10:38:49.923 | 2022-05-24T19:37:35.363 | 2011-05-15T09:34:53.137 | 557 | 557 | [
"variance",
"approximation",
"covariance-matrix"
] |
10593 | 2 | null | 10594 | 4 | null | Suppose you have a cumulative distribution function $F$ of the variable in question. Suppose the value given is $x$, and the range is $[r_1,r_2]$ with $x\in[r_1,r_2]$. Then if you fix the number of values $N$ that should fall into that range, the following should hold:
$$F(r_2)-F(r_1)=\frac{N}{500\,000}$$
This is an equation with two unknowns, so we need a restriction to solve it. A popular one would be setting $r_1=x-\varepsilon/2$ and $r_2=x+\varepsilon/2$. This gives an equation in the single variable $\varepsilon$, which can be solved pretty easily using any root-finding or optimisation algorithm.
The only thing you need is the function $F$. You can either model it, or use some non-parametric estimate. In the latter case an optimisation algorithm may be unnecessary, as it should be possible to work out the solution directly.
Note: This is only one possible approach, depending on data it might not work.
| null | CC BY-SA 3.0 | null | 2011-05-10T11:06:58.780 | 2011-05-10T11:06:58.780 | null | null | 2116 | null |
10594 | 1 | 10595 | null | 6 | 11951 | I have 500,000 values for a variable derived from financial markets. This variable has an arbitrary distribution. I need a formula that will allow me to select a range around any value of this variable such that an equal (or nearly equal) number of values falls within that range. From what I understand, this means that I need to convert it from an arbitrary distribution to a uniform distribution. I have read (but barely understood) that what I am looking for is called the "probability integral transform."
Can anyone assist me with some code (Matlab preferred, but it doesn't really matter) to help me accomplish this?
Edit: I uploaded my dataset as a [.csv file](http://furlender.com/data.csv) and [compressed .rar file](http://furlender.com/data.rar)
I used the empirical distribution function within MatLab and got the following plot:
Does this look about right?
Here is a histogram of the raw data for reference:

| Converting arbitrary distribution to uniform one | CC BY-SA 3.0 | null | 2011-05-10T09:23:10.113 | 2011-05-10T15:00:17.433 | 2020-06-11T14:32:37.003 | -1 | 4544 | [
"probability",
"matlab"
] |
10595 | 2 | null | 10594 | 10 | null | If $X$ has the (cumulative) distribution function $F(x)=P(X<x)$, then $F(X)$ has a uniform distribution on $[0,1]$. You don't know what $F$ is, but with N = 500,000 data points you could simply use the empirical distribution function:
$$\hat{F}(x) = \frac{1}{N} \sum_{i=1}^N 1[x_i\leq x]$$
where $1[A]$ is the indicator function, $1[A]=1$ if $A$ is true and $1[A]=0$ if $A$ is false. (The inverse $F^{-1}$ is often called the quantile function.)
---
In coding terms, once you've written your function `F` you now have two objects, `x` containing your data and `q` containing the transformed data, so you could write a function `Finv` which takes a number in [0,1] and returns the value of your sample distribution at that quantile (using linear interpolation or some other appropriate method for filling in the gaps).
Now if you want to take e.g. 5% of the data either side of the value `x0`, your range will be `Finv(F(x0) - 0.05)` to `Finv(F(x0) + 0.05)`.
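A sketch of that recipe in Python using `bisect` on the sorted sample (the data here is a synthetic stand-in for your 500,000 values, and the nearest-rank `Finv` is one simple choice of interpolation):

```python
from bisect import bisect_left, bisect_right

data = sorted(range(1000))  # stand-in for the sorted sample values
N = len(data)

def F(x):
    # Empirical CDF: fraction of sample values <= x.
    return bisect_right(data, x) / N

def Finv(q):
    # Empirical quantile function (nearest rank), with q clamped into range.
    i = min(max(round(q * N), 0), N - 1)
    return data[i]

# Range holding roughly 10% of the sample (5% either side) around x0.
x0 = 500
lo, hi = Finv(F(x0) - 0.05), Finv(F(x0) + 0.05)
count_in_range = bisect_right(data, hi) - bisect_left(data, lo)
```

Both `F` and `Finv` are $O(\log N)$ per query once the data is sorted, so this scales comfortably to 500,000 values.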
| null | CC BY-SA 3.0 | null | 2011-05-10T10:11:43.367 | 2011-05-10T10:26:19.683 | null | null | 2425 | null |
10597 | 1 | null | null | 4 | 2239 | Is there a reasonable way to quantify the amount of local correlation in an image? For example, I want to justify the claim that correlations within a neighbourhood of pixels are much higher than correlations between pixels in entirely different regions of the image.
Would showing xcorr2(A,A), the 2-D autocorrelation of the image, be a valid way to show this, i.e., if there are large values mostly located at the center of the matrix?
| Valid method to analyze spatial correlations in images? | CC BY-SA 3.0 | null | 2011-05-10T13:55:45.370 | 2011-05-10T21:03:06.407 | 2011-05-10T15:47:44.150 | 1036 | 4547 | [
"correlation",
"spatial",
"image-processing"
] |
10598 | 1 | 14771 | null | 9 | 2449 | Various forms of the correlation, e.g.,
$r = \frac{\Sigma_i x_i * y_i}{\sigma_x \sigma_y}$
or
$r = \frac{\Sigma_i (x_i-\bar{x}) * (y_i-\bar{y})}{\sigma_x \sigma_y}$
are popular similarity measures in many applications.
Is there a probabilistic interpretation for this such that either $r$ or $r^2$ is an approximate likelihood for x and y coming from the same or similar distribution? i.e., if we have some form of $P_{\theta_1}(x)$ and $P_{\theta_2}(y)$, then $r$ is related to $P(\theta_1=\theta_2 | x,y)$?
I would guess that the correlation may be the first term in the approximation of some sort of a likelihood measure. But I am unable to arrive at such a model. Assuming $x$ and $y$ to come from a normal, and $\theta$ being the mean, it doesn't really derive that expression.
| Correlation as a likelihood measure | CC BY-SA 3.0 | null | 2011-05-10T14:19:14.667 | 2011-08-24T19:32:05.150 | 2011-05-10T14:30:19.253 | 2728 | 2728 | [
"probability",
"correlation",
"interpretation",
"likelihood"
] |
10599 | 2 | null | 10597 | 2 | null | One straightforward way of doing this is to consider arbitrarily-sized patches of the image. For example, let's say we are interested in all 9*9 regions of pixels that can be taken from the image. Extract each of these image patches, and transform each image patch to a row vector. Consider the entire set of image patches (8464 row vectors for a 100*100 pixel image) as a matrix M.
Compute the correlation (or covariance, corr(M) and cov(M) in Matlab) between each of the columns in M. For your specific question, look at the three columns of the correlation/covariance matrix corresponding to the central pixel in the image. Reshape these back to the size of the image patch, and plot these. For natural images, you should find that the central pixel is highly correlated with adjacent pixels, and that the correlation decreases as distance from the central pixel increases across each of the three color channels.
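As a simplified illustration of the expected decay, here is a 1-D analogue (a smoothed noise signal standing in for an image row; extracting 2-D patches and correlating columns works the same way):

```python
import random

random.seed(4)

# Smooth white noise with a 5-point moving average so that nearby samples
# become correlated, then measure how the correlation decays with lag.
noise = [random.gauss(0.0, 1.0) for _ in range(20000)]
signal = [sum(noise[i - 2:i + 3]) / 5.0 for i in range(2, len(noise) - 2)]

def corr(a, b):
    # Pearson correlation of two equal-length sequences.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5

c1 = corr(signal[:-1], signal[1:])  # adjacent samples: theoretical value 4/5
c8 = corr(signal[:-8], signal[8:])  # beyond the smoothing window: near zero
```

For natural images the same pattern appears: correlation is high between adjacent pixels and falls off with distance, which is what the patch-based procedure above visualizes.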
| null | CC BY-SA 3.0 | null | 2011-05-10T15:14:59.910 | 2011-05-10T15:14:59.910 | null | null | 3595 | null |
10600 | 2 | null | 10526 | 2 | null | I don't think that using CCA will help you. It appears to me that you have a number of endogenous series (the abundances of the $n$ species) and a number of exogenous series (the $m$ food resources). I would suggest constructing $n$ transfer functions, each one optimized to fully utilize the information content in the $m$ supporting series and their lags if appropriate, while incorporating an unspecified stochastic structure with ARMA and unspecified deterministic structure like level shifts/local time trends etc. Having these $n$ equations under a "statistical microscope" might illuminate "commonalities", suggesting further grouping of the $n$ equations into subsets.
| null | CC BY-SA 3.0 | null | 2011-05-10T15:57:42.317 | 2011-05-10T15:57:42.317 | null | null | 3382 | null |
10601 | 2 | null | 10564 | 1 | null | If I understand things correctly, you're testing whether the mean value of predictor $A$ associated with outcome $1$ differs from the mean value of predictor $A$ associated with outcome $2$. Even if they don't differ, this result says nothing about your research question. What it says is only that in your sample, the average value of the predictor $A$ isn't different between successes and failures of the dependent variable. I'll give an example so you can see why.
Imagine that I'm interested in knowing whether education affects the probability of an employee being a manager in a given firm. I collect a random sample of employees, with $200$ employees who are not managers and $70$ who actually are. I run a logistic regression and the effect of education is significant (the usual caveats about p-values apply here as well).
Now, you may wonder if the sample is balanced, that is, whether there is the same variation in education for managers and non-managers. A first check is to test whether their means differ. If their means don't differ, then you know that, on average in your sample, managers and non-managers have the same level of education. You could also check whether there is roughly the same variation of educational level among managers and non-managers. But I think it's pretty clear that the Wilcoxon test (or the Mann-Whitney) has nothing to say about the effect of your predictor on the probability of success (in my example, success means being a manager).
| null | CC BY-SA 3.0 | null | 2011-05-10T16:09:36.753 | 2011-05-10T20:31:29.040 | 2011-05-10T20:31:29.040 | 3058 | 3058 | null |
10602 | 1 | null | null | 0 | 676 | I have a design where birds of 1, 3, and 5 weeks of age are placed in 2 independent chambers with 4 treatments (light technology) for 4 days. For example, x birds of 1 week of age are kept for 4 days, then new birds of 3 weeks of age are placed in both chambers, and so on. Readings are taken for their position, feed consumption/day/bird, etc. I need to calculate the required sample size. I am using PROC GLMPOWER in SAS with a two-way ANOVA to calculate the sample size, but I am not sure how to account for the repeated measurements over the 4 days. Also, can you suggest any other way to do this type of power analysis? I tried G*Power as well but am not quite sure how it works. Thank you.
| Sample size determination for block design with repeated measurement in SAS | CC BY-SA 3.0 | null | 2011-05-10T16:28:07.593 | 2011-05-10T17:14:40.017 | null | null | 4550 | [
"anova"
] |
10603 | 1 | null | null | 2 | 2924 | I have a very basic question on when a discrete distribution might be called a symmetric distribution. Let's say I have a r.v. $X$ that can take two possible values $(x_1, x_2)$ with $x_1 \neq x_2$ and corresponding probabilities $(0.4, 0.6)$. Then can I say that $X$ is symmetric?
Thanks,
| How to characterize symmetric discrete distribution? | CC BY-SA 3.0 | null | 2011-05-10T16:55:47.630 | 2012-05-23T11:33:55.457 | 2011-05-10T20:59:45.283 | 930 | 4551 | [
"distributions",
"discrete-data"
] |
10604 | 1 | 11499 | null | 7 | 8670 | I'm trying to forecast using ARIMAX with two exogenous (input) variables. I'm using PROC ARIMA, but I can't figure out from the SAS documentation whether my code is producing the parameterization I want.
I want to extend an ARI(12,1) model so that it also includes the last 12 terms of each of the two exogenous variables in my forecast. So, using `VariableX` with the two exogenous variables `VariableY` and `VariableZ`, my best attempt at the code is:
```
proc arima;
identify var=VariableY(1) nlag=24;
estimate p=12;
identify var=VariableZ(1) nlag=24;
estimate p=12;
identify var=VariableX(1) nlag=24 crosscorr=( VariableY(1) VariableZ(1) );
estimate p=12 input=( VariableY VariableZ );
forecast id=MonthNumber interval=month alpha=.05 lead=24;
run;
quit;
```
The documentation leads me to believe the first four lines of the procedure are required for setting up the forecast at the end. But when I run the procedure, the output appears to show a forecast using only the last term of each of the two exogenous variables.
## In summary, I'd like to be sure where each of the following are controlled:
- The $p$ of $AR(p)$, and similarly for each of the exogenous variables
- The $d$ of $I(d)$, and similarly for each of the exogenous variables
- The $q$ of $MA(q)$, and similarly for each of the exogenous variables
| How do I ensure PROC ARIMA is performing the correct parameterization of input variables? | CC BY-SA 3.0 | null | 2011-05-10T17:02:08.347 | 2011-06-03T07:32:50.997 | 2011-05-10T17:12:45.243 | 1583 | 1583 | [
"time-series",
"sas",
"dynamic-regression"
] |
10605 | 2 | null | 10603 | 4 | null | No, it only would be symmetric if the corresponding probabilities were (0.5,0.5).
Also, with binary or categorical distributions, the concept of symmetry does not have much meaning.
| null | CC BY-SA 3.0 | null | 2011-05-10T17:11:08.470 | 2011-05-10T20:46:21.960 | 2011-05-10T20:46:21.960 | 3595 | 3595 | null |
10606 | 2 | null | 10602 | 1 | null | When problems get more complicated than the simple cases that have nice canned sample size solutions I turn to simulation. The basic steps:
- Decide what you think your data will look like (including things you may want to change, e.g. sample size(s))
- Decide how you will analyze the data
- Create some code that simulates the data, analyzes it, and returns the p-value or other statistic of interest
- Run the code from step 3 many times under a given set of conditions, then see how often the null hypothesis is rejected; this proportion is your power (or an estimate of the power)
- Change the conditions and run again, ...
In SAS you could probably create the data in proc IML, then send it to proc GLM or such for the analysis, use ODS to save the p-value, then capture this back in IML, etc. However, I think R is more straightforward for this (but I use R much more than SAS these days, so could be biased).
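As a concrete sketch of steps 1-4 (written in Python purely for illustration; the two-sample t-test, the effect size, and the sample sizes are arbitrary choices of mine, not part of the original question):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def estimate_power(n_per_group, effect, n_sims=2000, alpha=0.05):
    """Simulate two-group data, analyze with a t-test, count rejections."""
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)      # step 1: simulate the data
        b = rng.normal(effect, 1.0, n_per_group)
        p = stats.ttest_ind(a, b).pvalue           # steps 2-3: analyze, keep p
        rejections += p < alpha                    # step 4: tally rejections
    return rejections / n_sims                     # estimated power

power = estimate_power(n_per_group=30, effect=1.0)
```

Changing `n_per_group`, `effect`, or the analysis function and re-running is exactly the "change the conditions" step above.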
| null | CC BY-SA 3.0 | null | 2011-05-10T17:14:40.017 | 2011-05-10T17:14:40.017 | null | null | 4505 | null |
10607 | 1 | 10620 | null | 8 | 12128 | I am running a logistic model. In SAS Enterprise Miner, I noticed there's a link function that has three possible options: `logit`, `probit` and `cll` (complementary log-log).
Can you please shed light on the following questions:
- Can we use any of these link function to carry out a logistic regression?
- Are there situations where one would be better than others?
- Is it intuitively possible to get some insight about which kind of function could be useful in which situation? (Just by looking at the formula, the complementary log-log function might be good for normalizing data when the data does not depart (too much) from a normal distribution.)
Any additional pointers would be greatly appreciated.
| How to choose the link function when performing a logistic regression? | CC BY-SA 3.0 | null | 2011-05-10T17:23:17.073 | 2021-12-07T15:31:28.437 | 2021-12-07T15:31:28.437 | 11887 | 1763 | [
"regression",
"logistic",
"sas",
"link-function"
] |
10608 | 1 | 10610 | null | 9 | 2544 | I have a question concerning feature selection and classification. I will be working with R. I should start by saying that I am not very familiar with data mining techniques, aside from a brief glimpse provided by an undergraduate course on multivariate analysis, so forgive me if I am lacking in details regarding my question. I will try my best to describe my problem.
First, a little about my project:
I am working on an image cytometry project, and the dataset is composed of over 100 quantitative features of histological images of cellular nuclei. All of the variables are continuous variables describing features of the nucleus, such as size, DNA amount, etc. There is currently a manual process and an automatic process for obtaining these cellular images. The manual process is (very) slow, but is done by a technician and yields only images that are usable for further analysis. The automatic process is very fast, but introduces too many unusable images - only about 5% of the images are suitable for further analysis, and there are thousands of nuclear images per sample. As it turns out, cleaning the data obtained from the automatic process is actually more time consuming than the manual process.
My goal is to train a classification method, using R, to discriminate between good objects and bad objects from the data obtained from the automatic process. I have an already classified training set that was obtained from the automatic process. It consists of 150,000 rows, of which ~5% are good objects and ~95% are bad objects.
My first question deals with feature selection. There are over 100 continuous explanatory features, and I would like, if possible, to get rid of noise variables to (hopefully) help with the classification. What methods are there for dimensionality reduction with the goal of improving classification? I understand that the need for variable reduction may vary depending on the classification technique used.
Which leads to my second question. I have been reading about different classification techniques, but I feel that I cannot adequately determine the most suitable method for my problem. My main concerns are having a low misclassification rate of good objects over bad objects, and the fact that the prior probability of the good objects is much lower than the prior probability of the bad objects. Having a bad object classified as good is less of a hassle than recovering a good object from the pool of bad objects, but it would be nice if not too many bad objects were classified as being good.
I have read this [post](https://stats.stackexchange.com/questions/3458/alternatives-to-classification-trees-with-better-predictive-e-g-cv-performanc) and I am currently considering Random Forests as per chl's answer. I would like to explore other methods as well, and I would like to collect the suggestions of the good people here at CV. I also welcome any readings on the subject of classification that may be helpful, and suggestions for R packages to use.
Please ask for more details if my post is lacking in details.
| Questions about variable selection for classification, and different classification techniques | CC BY-SA 3.0 | null | 2011-05-10T17:29:14.633 | 2011-05-11T16:23:22.990 | 2017-04-13T12:44:40.883 | -1 | 2252 | [
"r",
"machine-learning",
"classification",
"dimensionality-reduction"
] |
10610 | 2 | null | 10608 | 14 | null | Feature selection does not necessarily improve the performance of modern classifier systems, and quite frequently makes performance worse. Unless finding out which features are the most important is an objective of the analysis, it is often better not even to try and to use regularisation to avoid over-fitting (select regularisation parameters by e.g. cross-validation).
The reason that feature selection is difficult is that it involves an optimisation problem with many degrees of freedom (essentially one per feature) where the criterion depends on a finite sample of data. This means you can over-fit the feature selection criterion and end up with a set of features that works well for that particular sample of data, but not for any other (i.e. it generalises poorly). Regularisation on the other hand, while also optimising a criterion based on a finite sample of data, involves fewer degrees of freedom (typically one), which means that over-fitting the criterion is more difficult.
It seems to me that the "feature selection gives better performance" idea has rather passed its sell-by date. For simple linear unregularised classifiers (e.g. logistic regression), the complexity of the model (VC dimension) grows with the number of features. Once you bring in regularisation, the complexity of the model depends on the value of the regularisation parameter rather than the number of parameters. That means that regularised classifiers are resistant to over-fitting (provided you tune the regularisation parameter properly) even in very high dimensional spaces. In fact that is the basis of why the support vector machine works: use a kernel to transform the data into a high (possibly infinite) dimensional space, and then use regularisation to control the complexity of the model and hence avoid over-fitting.
Having said which, there are no free lunches; your problem may be one where feature selection works well, and the only way to find out is to try it. However, whatever you do, make sure you use something like nested cross-validation to get an unbiased estimate of performance. The outer cross-validation is used for performance evaluation, but in each fold every step in fitting the model (including feature selection) is repeated again independently. A common error is to perform feature selection using all of the data and then cross-validate to estimate performance using the features identified. It should be obvious why that is not the right thing to do, but many have done it as the correct approach is computationally expensive.
My suggestion is to try SVMs or kernel logistic regression or LS-SVM etc. with various kernels, but no feature selection. If nothing else it will give you a meaningful baseline.
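To see why selecting features on all of the data first is dangerous, here is a small numerical sketch (in Python, on pure-noise data; the mean-difference scorer and nearest-centroid classifier are deliberately simple stand-ins of mine, not a recommendation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 60, 2000, 20           # samples, noise features, features kept
X = rng.normal(size=(n, p))      # pure noise: there is no real class signal
y = np.repeat([0, 1], n // 2)

def top_k_features(X, y, k):
    # rank features by absolute difference of class means (a toy scorer)
    diff = np.abs(X[y == 0].mean(axis=0) - X[y == 1].mean(axis=0))
    return np.argsort(diff)[-k:]

def centroid_predict(Xtr, ytr, Xte):
    # classify each test point by the nearer class centroid
    c0, c1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    d0 = ((Xte - c0) ** 2).sum(axis=1)
    d1 = ((Xte - c1) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)

def cv_accuracy(X, y, select_inside_fold):
    folds = np.array_split(rng.permutation(n), 5)
    accs = []
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(n), test_idx)
        if select_inside_fold:               # correct: selection refit per fold
            feats = top_k_features(X[train_idx], y[train_idx], k)
        else:                                # WRONG: selection saw the test data
            feats = top_k_features(X, y, k)
        pred = centroid_predict(X[train_idx][:, feats], y[train_idx],
                                X[test_idx][:, feats])
        accs.append((pred == y[test_idx]).mean())
    return float(np.mean(accs))

biased = cv_accuracy(X, y, select_inside_fold=False)  # optimistic on pure noise
honest = cv_accuracy(X, y, select_inside_fold=True)   # hovers around chance
```

On data with no signal at all, the "leaky" protocol still appears to classify well, while the properly nested protocol stays near chance accuracy.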
| null | CC BY-SA 3.0 | null | 2011-05-10T18:04:24.603 | 2011-05-10T18:04:24.603 | null | null | 887 | null |
10611 | 2 | null | 10608 | 6 | null | On dimensionality reduction, a good first choice might be [principal components analysis](http://en.wikipedia.org/wiki/Principal_components_analysis).
Apart from that, I don't have too much to add, except that if you have any interest in data mining, I strongly recommend you read [the elements of statistical learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/). It's both rigorous and clear, and although I haven't finished it, it would probably give you much insight into the right way to approach your problem. Chapter 4, on linear classifiers, would almost certainly be enough to get you started.
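For reference, PCA amounts to nothing more than an SVD of the centred data matrix; a minimal sketch (in Python, with toy dimensions of my own choosing standing in for the ~100 nuclear features):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 100))   # 500 nuclei x 100 features (made-up data)

Xc = X - X.mean(axis=0)           # centre each feature
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

explained = s**2 / np.sum(s**2)   # proportion of variance per component
n_keep = 10
scores = Xc @ Vt[:n_keep].T       # data projected onto the first 10 PCs
```

Keeping only the components that explain most of the variance gives a lower-dimensional input for whatever classifier follows.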
| null | CC BY-SA 3.0 | null | 2011-05-10T18:04:58.020 | 2011-05-10T18:04:58.020 | null | null | 656 | null |
10612 | 2 | null | 10531 | 4 | null | There is no simple yes or no answer. People constantly attempt to make inferences about causal relationships. The question is what assumptions you have to make, and how sensitive your inferences are to changing those assumptions.
The causal effects you can identify with the fewest assumptions are the effects of the things you randomly assign: A, B, and the interaction A*B, on Y1, Y2, Y3, and Y4.
I'm likely to be skeptical of a claim to have identified the causal effect of any of the non-randomized variables on anything else. The scientific context (which you have not provided) will shape what is considered a reasonable inference.
| null | CC BY-SA 3.0 | null | 2011-05-10T18:22:00.337 | 2011-05-10T18:22:00.337 | null | null | 3748 | null |
10613 | 1 | 10617 | null | 161 | 90704 | Recently, I have found in a [paper by Klammer, et al.](http://pubs.acs.org/doi/abs/10.1021/pr8011107) a statement that p-values should be uniformly distributed. I believe the authors, but cannot understand why it is so.
Klammer, A. A., Park, C. Y., and Stafford Noble, W. (2009) [Statistical Calibration of the SEQUEST XCorr Function](http://pubs.acs.org/doi/abs/10.1021/pr8011107). Journal of Proteome Research. 8(4): 2106–2113.
| Why are p-values uniformly distributed under the null hypothesis? | CC BY-SA 3.0 | null | 2011-05-10T18:26:26.630 | 2022-03-31T13:53:07.677 | 2017-10-19T22:56:56.347 | 44269 | 4552 | [
"p-value",
"uniform-distribution"
] |
10614 | 2 | null | 9299 | 5 | null | You can use this module of the [pysal](http://pysal.org/1.1/library/spreg/ols.html) python library for the spatial data analysis methods I discuss below.
Your description of how each person's attitude is influenced by the attitudes of the people surrounding her can be represented by a [spatial autoregressive model (SAR)](http://www.ecoevol.ufg.br/sam/wiki/Autoregressive_Models_%28SAR_/_CAR%29) (also see my simple SAR explanation from [this SE answer](https://stats.stackexchange.com/questions/277/spatial-statistics-models-car-vs-sar) [2](http://www.ecoevol.ufg.br/sam/wiki/Autoregressive_Models_%28SAR_/_CAR%29)). The simplest approach is to ignore other factors and estimate the strength with which surrounding people affect one another's attitudes by using the [Moran's I](http://en.wikipedia.org/wiki/Moran%27s_I) statistic.
If you want to assess the importance of other factors while estimating the strength of the influence of surrounding people, a more complex task, then you can estimate the parameters of a regression: $y = bx + \rho W y + e$. See the docs [here](http://pysal.org/1.1/library/spreg/ols.html). (Methods of estimating this type of regression come from the field of spatial econometrics and can get much more sophisticated than the reference I gave.)
Your challenge will be to build a spatial weights matrix ($W$). I think each element $w_{ij}$ of the matrix should be 1 or 0 based on whether person $i$ is within whatever distance of person $j$ you feel is required for one to influence the other.
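Building such a binary, distance-based $W$ is straightforward; a sketch in Python (the coordinates, the number of people, and the cutoff distance are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
coords = rng.uniform(0, 10, size=(20, 2))  # 20 people on a 10x10 plane
cutoff = 2.5                               # influence radius (arbitrary)

# pairwise Euclidean distances between all people
diff = coords[:, None, :] - coords[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))

# w_ij = 1 if j is within the cutoff of i; 0 otherwise and on the diagonal
W = ((dist > 0) & (dist < cutoff)).astype(float)

# row-standardise, as is conventional for spatial weights matrices
row_sums = W.sum(axis=1, keepdims=True)
W = np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)
```

Row-standardisation makes $Wy$ the average attitude of each person's neighbours, which is usually what one wants in the SAR specification.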
To get an intuitive idea of the problem, below I illustrate how a spatial autoregressive data generating process (DGP) will make a pattern of values. For the 2 lattices of simulated values the white blocks represent high values and the dark blocks represent low values.
In the first lattice below the grid values have been generated by a normally distributed (Gaussian) random process, where $\rho$ is zero.

In the next lattice below the grid values have been generated by a spatial autoregressive process, where $\rho$ has been set to something high, say .8.

| null | CC BY-SA 3.0 | null | 2011-05-10T18:59:18.107 | 2011-05-11T03:57:17.720 | 2017-04-13T12:44:27.570 | -1 | 4329 | null |
10615 | 1 | null | null | 2 | 482 | I'd be very happy if someone could post a code snippet that explains how to compute the mean and variance of a set of records which contain frequencies.
Suppose we have records like (FORMAT F)
```
- GroupA, 6 x Grade 1, 5 x Grade 2, 10 x Grade 3
- GroupB, 2 x Grade 1, 7 x Grade 2, 18 x Grade 3
- GroupA, 23 x Grade 1, 5 x Grade 2, 1 x Grade 3
- GroupA, 10 x Grade 1, 15 x Grade 2, 12 x Grade 3
```
Goal: Compute each group's mean and variance of grades using SPSS syntax.
Clarification:
Using SPSS, processing cases (FORMAT C) would be simple (as colleagues told me):
```
# Record number 1 (line 1 of the above data)
GroupA, Grade 1
... 6 times
GroupA, Grade 1
GroupA, Grade 2
... 5 times
GroupA, Grade 2
GroupA, Grade 3
... 10 times
GroupA, Grade 3
# Record number 2 (line 2 of the above data)
GroupB, Grade 1
GroupB, Grade 1
GroupB, Grade 2
... 7 times
GroupB, Grade 2
GroupB, Grade 3
... 18 times
GroupB, Grade 3
... and so on
```
Unfortunately, we don't have cases of FORMAT C, but already aggregated data as in FORMAT F.
I probably should add this:
I implemented a solution, which uses (user entered) FORMAT F data using PHP/SQL: The PHP program constructs an SQL query, which computes the mean, the variance and the standard deviation of each Group.
I even tested the solution using well-known sample data. The results are identical. Thus I'm sure the solution is correct.
Since the computed statistical properties are sensible, I'd like to get a verification using a standard statistical package using the real FORMAT F data (not the test case data).
My colleague is going to use SPSS.
| Variance based on given frequencies using SPSS | CC BY-SA 3.0 | null | 2011-05-10T19:28:40.657 | 2011-05-15T22:31:54.813 | 2011-05-15T22:31:54.813 | 4554 | 4554 | [
"spss"
] |
10617 | 2 | null | 10613 | 114 | null | To clarify a bit. The p-value is uniformly distributed when the null hypothesis is true and all other assumptions are met. The reason for this is really the definition of alpha as the probability of a type I error. We want the probability of rejecting a true null hypothesis to be alpha, and we reject when the observed $\text{p-value} < \alpha$; the only way this happens for any value of alpha is when the p-value comes from a uniform distribution. The whole point of using the correct distribution (normal, t, F, chi-squared, etc.) is to transform from the test statistic to a uniform p-value. If the null hypothesis is false then the distribution of the p-value will (hopefully) be more weighted towards 0.
The `Pvalue.norm.sim` and `Pvalue.binom.sim` functions in the [TeachingDemos](http://cran.r-project.org/web/packages/TeachingDemos/index.html) package for R will simulate several data sets, compute the p-values and plot them to demonstrate this idea.
Also see:
>
Murdoch, D, Tsai, Y, and Adcock, J (2008). P-Values are Random
Variables. The American Statistician, 62, 242-245.
for some more details.
# Edit:
Since people are still reading this answer and commenting, I thought that I would address @whuber's comment.
It is true that when using a composite null hypothesis like $\mu_1 \leq \mu_2$ the p-values will only be uniformly distributed when the 2 means are exactly equal, and will not be uniform if $\mu_1$ is any value that is less than $\mu_2$. This can easily be seen using the `Pvalue.norm.sim` function by setting it to do a one-sided test and simulating with the simulation and hypothesized means different (but in the direction that makes the null true).
As far as statistical theory goes, this does not matter. Consider if I claimed that I am taller than every member of your family, one way to test this claim would be to compare my height to the height of each member of your family one at a time. Another option would be to find the member of your family that is the tallest and compare their height with mine. If I am taller than that one person then I am taller than the rest as well and my claim is true, if I am not taller than that one person then my claim is false. Testing a composite null can be seen as a similar process, rather than testing all the possible combinations where $\mu_1 \leq \mu_2$ we can test just the equality part because if we can reject that $\mu_1 = \mu_2$ in favour of $\mu_1 > \mu_2$ then we know that we can also reject all the possibilities of $\mu_1 < \mu_2$. If we look at the distribution of p-values for cases where $\mu_1 < \mu_2$ then the distribution will not be perfectly uniform but will have more values closer to 1 than to 0 meaning that the probability of a type I error will be less than the selected $\alpha$ value making it a conservative test. The uniform becomes the limiting distribution as $\mu_1$ gets closer to $\mu_2$ (the people who are more current on the stat-theory terms could probably state this better in terms of distributional supremum or something like that). So by constructing our test assuming the equal part of the null even when the null is composite, then we are designing our test to have a probability of a type I error that is at most $\alpha$ for any conditions where the null is true.
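The uniformity under a true simple null is easy to check by simulation; here is a sketch in Python analogous to what `Pvalue.norm.sim` does in R (the one-sample t-test and the sample sizes are arbitrary choices of mine):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n = 5000, 25

# Under H0 the data really do come from a normal with mean 0,
# so each simulated p-value is one draw from the null p-value distribution
pvals = np.array([
    stats.ttest_1samp(rng.normal(0.0, 1.0, n), popmean=0.0).pvalue
    for _ in range(n_sims)
])

frac_rejected = float(np.mean(pvals < 0.05))  # should be close to alpha = 0.05
```

A histogram of `pvals` looks flat, and the rejection fraction sits near alpha, which is exactly the uniformity claim.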
| null | CC BY-SA 3.0 | null | 2011-05-10T19:45:10.343 | 2016-03-23T07:19:27.883 | 2016-03-23T07:19:27.883 | null | 4505 | null |
10619 | 1 | 10626 | null | 1 | 704 | I am using Matlab to try and find a good fit for this curve:

None of the built-in formulas seem to work well.
Any suggestions?
| What regression formula would best fit this curve? | CC BY-SA 3.0 | null | 2011-05-10T20:21:47.193 | 2011-05-10T23:37:39.147 | 2011-05-10T20:23:56.783 | 919 | 4544 | [
"distributions",
"matlab"
] |
10620 | 2 | null | 10607 | 4 | null | I don't know SAS, so I'll just answer based on the statistics side of the question. About the software you may ask at the sister site, Stack Overflow.
- If the link function is different (logit, probit or complementary log-log), then you will get different results. For logistic regression, use the logit link.
- About the real differences of these link functions.
Logistic and probit are pretty much the same. To see why, remember that in linear regression the link function is the identity; in logistic regression the link function is the logistic, and in probit the normal.
Formally, you can see this by noting that, in case your dependent variable is binary, you can think of it as following a Bernoulli distribution with a given probability of success.
$Y \sim Bernoulli(p_{i})$
$p_{i} = f(\mu)$
$\mu = XB$
Here, the probability $p_{i}$ is a function of the predictors, just like in linear regression. The real difference is the link function. In linear regression, the link function is just the identity, i.e., $f(\mu) = \mu$, so you can just plug in the linear predictors. In logistic regression, the link function is the cumulative logistic distribution, given by $1/(1+\exp(-x))$. In probit regression, the link function is the (inverse) cumulative normal distribution function. And in the complementary log-log regression, the link function is the complementary log-log distribution.
I have never used the complementary log-log, so I'll abstain from commenting on it here.
You can see that the normal and logistic are very similar in this blog post by John Cook, of The Endeavour: [http://www.johndcook.com/blog/2010/05/18/normal-approximation-to-logistic/](http://www.johndcook.com/blog/2010/05/18/normal-approximation-to-logistic/).
In general I use the logistic because the coefficients are easier to interpret than in a probit regression. In some specific context I use probit (ideal point estimation or when I have to code my own Gibbs Sampler), but I guess they are not relevant to you. So, my advice is, whenever in doubt about probit or logistic, use logistic!
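To put a number on how close the logistic and normal CDFs are (the variance-matching scale $\sqrt{3}/\pi$ used below is one common choice of mine; constants such as 1.702 are also used in the literature):

```python
import numpy as np
from scipy.stats import norm

x = np.linspace(-8, 8, 2001)
logistic_cdf = 1.0 / (1.0 + np.exp(-x))

# Normal CDF rescaled so both distributions have the same variance
# (the standard logistic distribution has variance pi^2 / 3)
scale = np.sqrt(3) / np.pi
normal_cdf = norm.cdf(x * scale)

max_gap = float(np.max(np.abs(logistic_cdf - normal_cdf)))
```

The maximum gap between the two curves is only a couple of percentage points, which is why logit and probit fits rarely lead to different conclusions.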
| null | CC BY-SA 3.0 | null | 2011-05-10T20:30:24.190 | 2011-05-10T21:14:00.837 | 2011-05-10T21:14:00.837 | 3058 | 3058 | null |
10621 | 1 | null | null | 4 | 4072 | I'm interested in fitting a conditional Poisson regression model using PROC GENMOD in SAS to analyze a matched cohort study. However, it's not quite clear to me how I should exactly go about it.
My impression is that a REPEATED statement should be used along with the events/trials syntax, but if so then how does one account for continuous covariates in the model, such as age?
I would be most grateful if anyone had any pointers. Thus far, Google and the internet have disappointed me. Cheers.
| How does one fit conditional Poisson regression in SAS? | CC BY-SA 3.0 | null | 2011-05-10T21:02:18.387 | 2017-03-17T18:23:09.137 | 2011-05-11T07:30:51.817 | 930 | 4555 | [
"regression",
"poisson-distribution",
"sas",
"matching"
] |
10623 | 1 | 10624 | null | 4 | 202 | If I have a sample of k successes and n-k failures, there are standard techniques (Agresti-Coull, Clopper, etc.) for finding a confidence interval of the probability of an individual success. What if I want to find a confidence interval for the probability of getting at least k' out of n' instead? Obviously it can be approximated by the usual
$$\sum_{j\ge k'}{n'\choose j}(k/n)^j(1-k/n)^{n'-j}$$
but this does not take into account the uncertainty of the initial sample, which may be important (the samples are small).
| Binomial testing with probability estimated from sample | CC BY-SA 3.0 | null | 2011-05-10T21:42:26.387 | 2011-05-11T00:46:47.260 | null | null | 1378 | [
"confidence-interval",
"binomial-distribution"
] |
10624 | 2 | null | 10623 | 3 | null | I believe you are looking for the [beta binomial distribution](https://secure.wikimedia.org/wikipedia/en/wiki/Beta-binomial_distribution), which reduces the pmf of interest ($\pi(k\prime)$) to
$\pi(k\prime|n\prime,n,k) = {n\prime \choose k\prime} \frac{B(k\prime+k+1,n\prime-k\prime+n-k+1)}{B(k+1,n-k+1)}$
For motivation, remember that you do not know the $p$ of the process, so you estimate $p$ by its posterior after observing $k$ successes out of $n$ trials. So
$\pi(p|k,n) = \frac{p^k(1-p)^{(n-k)}}{\int x^k(1-x)^{(n-k)} dx}$
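The pmf above is easy to evaluate numerically; a sketch in Python (working in log space via `betaln` for stability, and checking that the probabilities sum to one; the counts are toy numbers of mine):

```python
import math
from scipy.special import betaln

def beta_binom_pmf(k_prime, n_prime, k, n):
    """P(k' successes out of n' | k of n observed), with a uniform prior on p."""
    log_p = (math.log(math.comb(n_prime, k_prime))
             + betaln(k_prime + k + 1, n_prime - k_prime + n - k + 1)
             - betaln(k + 1, n - k + 1))
    return math.exp(log_p)

k, n = 3, 10          # observed: 3 successes in 10 trials (toy numbers)
n_prime = 8           # size of the future sample
pmf = [beta_binom_pmf(j, n_prime, k, n) for j in range(n_prime + 1)]
total = sum(pmf)      # should equal 1, since this is a valid pmf
```

Summing `pmf[j]` for `j >= k_prime` then gives the probability of at least $k\prime$ future successes, with the uncertainty in $p$ integrated out.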
| null | CC BY-SA 3.0 | null | 2011-05-10T21:59:57.487 | 2011-05-10T21:59:57.487 | null | null | 2728 | null |
10625 | 2 | null | 10619 | 1 | null | Check out [Eureqa](http://creativemachines.cornell.edu/eureqa) for a neat evolutionary approach to finding the mathematical form of an otherwise ambiguous function. It's native to Windows but works fine on Linux & Mac via Wine (in which case I'd suggest you use [winebottler](http://winebottler.kronenberg.org)).
| null | CC BY-SA 3.0 | null | 2011-05-10T22:13:21.433 | 2011-05-10T22:13:21.433 | null | null | 364 | null |
10626 | 2 | null | 10619 | 2 | null | Looking at the charts in your first question, this looks slightly like the absolute value of a standard normal distribution, what Wikipedia calls a [half-normal distribution](http://en.wikipedia.org/wiki/Half-normal_distribution), and your curve here looks like the top half of the cdf of a normal distribution.
One way to check is to look at various quantiles of your distribution, compared with those of the absolute value of a standard normal. In R, using your data, this might look like
```
> mydat <- read.csv("http://furlender.com/data.csv", header=FALSE)
> mean(mydat$V1) # compare with sqrt(2/pi) = 0.7978846 for half normal
[1] 1.014772
> median(mydat$V1) # compare with 0.6744898 for half-normal
[1] 0.9162514
> percents <- c(0.01, (1:19)/20, 0.99, 0.999, 0.9999, 0.99999, 0.999999)
> quantile(mydat$V1, percents) / qnorm(percents/2 + 0.5)
1% 5% 10% 15% 20% 25% 30% 35%
1.479777 1.455286 1.443074 1.435248 1.429568 1.421616 1.409148 1.400204
40% 45% 50% 55% 60% 65% 70% 75%
1.386696 1.372990 1.358436 1.341848 1.323763 1.306209 1.287204 1.266550
80% 85% 90% 95% 99% 99.9% 99.99% 99.999%
1.245071 1.223053 1.198247 1.172505 1.148309 1.141927 1.220474 1.412236
99.9999%
1.371697
```
and this suggests that your distribution is slightly more dispersed than the absolute value of a standard normal, perhaps somewhere between 1.15 and 1.4 times. Even the tails fit this description. Drawing the ecdf confirms this compared with scaled normals.
```
> plot.ecdf(mydat$V1, lwd=3)
> curve(2*pnorm(x/1.4 )-1, 0, 7, col="red" , add=TRUE)
> curve(2*pnorm(x/1.15)-1, 0, 7, col="green", add=TRUE)
```

If you wish, you could stop there, and choose some intermediate value (perhaps 1.2718 to give a similar mean). Or you could look for some normal-like symmetric distribution which has even moments about 0 similar to your data, and then take the absolute value. These moments are easily calculated: for example the second and fourth are:
```
> mean(mydat$V1^2)
[1] 1.516602
> mean(mydat$V1^4)
[1] 5.972706
```
suggesting that (before taking the absolute value) the symmetric distribution would be slightly platykurtic even though the extreme tail is normal-like.
| null | CC BY-SA 3.0 | null | 2011-05-10T23:37:39.147 | 2011-05-10T23:37:39.147 | null | null | 2958 | null |
10627 | 2 | null | 73 | 3 | null | I use `lattice`, `ggplot2`, `lubridate`, `reshape`, `boot`, `e1071`, `car`, `forecast`, and `zoo` a lot.
| null | CC BY-SA 3.0 | null | 2011-05-10T23:50:24.527 | 2011-05-10T23:50:24.527 | null | null | 1764 | null |
10629 | 2 | null | 10607 | 2 | null | All 3 link functions are s-shaped and are not going to be that different. Li and Duan showed that if the predictor variables are well behaved (elliptically symmetric predictors are a subset of the well behaved group) then changing the link function will change the coefficients by a multiplicative constant. Even if the predictors are not perfectly well behaved, the differences between similar link functions are unlikely to change the overall inference (the exact coefficients will change, but what is important or significant will remain so under a different link function).
The logit allows you to interpret individual coefficients as log-odds, so it tends to be the most popular these days.
| null | CC BY-SA 4.0 | null | 2011-05-11T00:35:33.890 | 2018-07-24T04:36:38.033 | 2018-07-24T04:36:38.033 | 11887 | 4505 | null |
10630 | 2 | null | 10623 | 3 | null | If you are happy with your confidence interval on the unknown proportion of a single success, then just plug both of those endpoint values into the above formula. Since $k'$ and $n'$ are known constants (rather than random variables) and the probability of $k'$ or more out of $n'$ is monotone in the underlying proportion, you can just transform the ends of your confidence interval to get a confidence interval on the transformed parameter (and your probability above is just a transform of the parameter).
If you want to get fancier you can do a full Bayesian analysis with the Beta-Binomial and get the posterior distribution of the next observation (formally, the posterior predictive distribution), which could be what you want.
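A sketch of the endpoint-transformation idea in Python (I use a Clopper-Pearson interval for the single-success probability; the counts below are invented for illustration):

```python
from scipy import stats

k, n = 7, 50                 # observed successes / trials (toy numbers)
k_prime, n_prime = 3, 10     # event of interest: at least 3 of the next 10
alpha = 0.05

# Clopper-Pearson (exact) confidence interval for p
lo = stats.beta.ppf(alpha / 2, k, n - k + 1)
hi = stats.beta.ppf(1 - alpha / 2, k + 1, n - k)

def tail_prob(p):
    # P(X >= k') for X ~ Binomial(n', p); monotone increasing in p
    return stats.binom.sf(k_prime - 1, n_prime, p)

# monotonicity lets us transform the interval endpoints directly
interval = (tail_prob(lo), tail_prob(hi))
```

Because the tail probability is monotone in $p$, the transformed endpoints form a valid confidence interval for the probability of at least $k'$ successes in $n'$ trials.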
| null | CC BY-SA 3.0 | null | 2011-05-11T00:46:47.260 | 2011-05-11T00:46:47.260 | null | null | 4505 | null |
10631 | 1 | null | null | 3 | 1264 | I have a data set where the relationship of one variable to the other resembles a cubic curve (it rises to some point and then falls to a steady level without rising again). I know in which cases to use log-linear, log-lin, lin-log, and reciprocal or log-reciprocal linear models, but I am not sure what to do here (I have checked all of the above and, not surprisingly, they turned out to be a bad fit). Is there any linear model that would help me in this case?
| What is the best linear regression model to use when the shape of the data resembles a cubic distribution? | CC BY-SA 3.0 | null | 2011-05-11T00:51:20.380 | 2011-05-11T13:31:27.083 | null | null | 4560 | [
"regression",
"econometrics"
] |
10632 | 2 | null | 10579 | 0 | null | You could use the Agresti-Caffo simultaneous confidence intervals (simultaneous score intervals for the difference of proportions) to compare differences in proportions (Agresti et al. 2008. Simultaneous confidence intervals for comparing binomial parameters. Biometrics 64, 1270-1275).
The corresponding R code is available in [http://www.stat.ufl.edu/~aa/cda/software.html](http://www.stat.ufl.edu/~aa/cda/software.html)
| null | CC BY-SA 3.0 | null | 2011-05-11T00:58:52.337 | 2011-05-11T00:58:52.337 | null | null | 4559 | null |
10633 | 2 | null | 10544 | 3 | null | You would need to use the "simultaneous score intervals for the difference of proportions" to solve your question. The reference is Agresti et al. 2008. Simultaneous confidence intervals for comparing binomial parameters. Biometrics 64, 1270-1275.
The corresponding R code is available in [http://www.stat.ufl.edu/~aa/cda/software.html](http://www.stat.ufl.edu/~aa/cda/software.html)
Sincerely,
| null | CC BY-SA 3.0 | null | 2011-05-11T01:09:17.860 | 2011-05-11T01:09:17.860 | null | null | 4559 | null |
10634 | 2 | null | 10562 | 0 | null | You might want to try "model-based clustering". This algorithm uses "BIC" to determine the number of clusters.
Sincerely
| null | CC BY-SA 3.0 | null | 2011-05-11T01:13:16.013 | 2011-05-11T01:13:16.013 | null | null | 4559 | null |
10636 | 2 | null | 10607 | 4 | null | I have a question/comment. I thought that by definition, logistic regression uses the logit link. If you are using the probit or complementary log-log link, then I do not think that is logistic regression.
What you are doing is fitting generalized linear models on a binary outcome, which is assumed to follow a Bernoulli distribution. The 3 usual choices of link functions are the logit, probit, and complementary log-log. If you are using the logit link, that is logistic regression.
| null | CC BY-SA 3.0 | null | 2011-05-11T01:37:41.867 | 2011-05-11T01:37:41.867 | null | null | 2312 | null |
10637 | 2 | null | 10592 | 2 | null | Merely computing the covariance matrix--which you're going to need to get started in any event--is $O(Nf^2)$ so, asymptotically in $N$, nothing is gained by choosing a $O(Nf)$ algorithm for the whitening.
There are approximations when the variables have additional structure, such as when they form a time series or realizations of a spatial stochastic process at various locations. These effectively rely on assumptions that let us relate the covariance between one pair of variables to that between other pairs of variables, such as between pairs separated by the same time lags. This is the conventional reason for assuming a process is [stationary](https://en.wikipedia.org/wiki/Stationary_process) or [intrinsically stationary](https://web.archive.org/web/20101207205144/http://www.nrcse.washington.edu/pdf/trs56_covar.pdf), for instance. Calculations can be $O(Nf\,\log(Nf))$ in such cases (e.g., using the Fast Fourier Transform as in [Yao & Journel 1998](https://doi.org/10.1023/A:1022335100486)). Absent such a model, I don't see how you can avoid computing all pairwise covariances.
| null | CC BY-SA 4.0 | null | 2011-05-11T03:37:27.523 | 2022-05-24T16:44:55.370 | 2022-05-24T16:44:55.370 | 919 | 919 | null |
10638 | 2 | null | 10592 | 2 | null | On a whim, I decided to try computing (in R) the covariance matrix for a dataset of about the size mentioned in the OP:
```
z <- rnorm(1e8)
dim(z) <- c(1e6, 100)
vcv <- cov(z)
```
This took less than a minute in total, on a fairly generic laptop running Windows XP 32-bit. It probably took longer to generate `z` in the first place than to compute the matrix `vcv`. And R isn't particularly optimised for matrix operations out of the box.
Given this result, is speed that important? If N >> p, the time taken to compute your approximation is probably not going to be much less than to get the actual covariance matrix.
| null | CC BY-SA 3.0 | null | 2011-05-11T04:00:43.317 | 2011-05-11T04:00:43.317 | null | null | 1569 | null |
10639 | 1 | 10722 | null | 3 | 3323 | According to [Wikipedia](http://en.wikipedia.org/wiki/Wilks%27_lambda_distribution), Wilks' Lambda distribution generalizes Hotelling's distribution. I am having some problems seeing how this works. I can see how Hotelling's distribution generalizes Student's t-distribution (an RV distributed as Hotelling's distribution with $p=1$ is just the square of an RV distributed as Student's t), but cannot see how to get to Wilks' Lambda. Is there some setting of the parameters $p,m,n$ such that an RV distributed as Wilks' lambda is some transform of a Hotelling RV?
| How exactly does Wilks' Lambda distribution generalize the Hotelling distribution? | CC BY-SA 3.0 | null | 2011-05-11T04:34:34.790 | 2017-04-28T19:39:29.567 | 2017-04-28T19:39:29.567 | 28666 | 795 | [
"distributions",
"t-distribution",
"hotelling-t2"
] |
10640 | 1 | 10709 | null | 5 | 2132 | I'm trying to design an experiment where I measure a variable as a function of 5 two-level factors, labelled A, B, C, D and E.
I'm trying to understand how to best design this experiment so I can conduct it in 8 runs. I've tried to follow the guidance given in Box, Hunter & Hunter, and found two experimental $2^{5-2}$ designs that seem to contradict each other.
One is given in the table p. 272, where D=AB and E=AC. That would yield the following design:
```
ABCDE
+++++
++-+-
+-+-+
+----
-++--
-+--+
--++-
---++
```
The other one is the example whose table is given on p. 236, defined by D=BC and E=ABC:
```
ABCDE
+++++
++---
+-+--
+--++
-+++-
-+--+
--+-+
---++
```
On what criteria should one choose one design over another? Sorry if this sounds like a newbie's question, but I'd really like to understand this issue.
| How to design an 8-run experiment in 5 factors? | CC BY-SA 3.0 | null | 2011-05-11T07:09:00.243 | 2013-09-24T18:50:43.890 | 2011-05-11T15:36:41.373 | 26 | 4370 | [
"experiment-design"
] |
10641 | 2 | null | 10639 | 2 | null | [These NCSU course notes](http://faculty.chass.ncsu.edu/garson/PA765/manova.htm) say
> Multivariate tests in contrast to the overall F test, answer the question, "Is each effect significant?" or more specifically, "Is each effect significant for at least one of the dependent variables?" That is, where the F test focuses on the dependents, the multivariate tests focus on the independents and their interactions. These tests appear in the "Multivariate Tests" table of SPSS output. The multivariate formula for F is based not only on the sum of squares between and within groups, as in ANOVA, but also on the sum of crossproducts - that is, it takes covariance into account as well as group means....
>
> Hotelling's T-Square is the most common, traditional test where there are two groups formed by the independent variables....
>
> Wilks' lambda, U. This is the most common, traditional test where there are more than two groups formed by the independent variables.... The t-test, Hotelling's T, and the F test are special cases of Wilks's lambda....
So I presume that if you take Wilks' lambda, and reduce the number of groups formed by the independent variables to two, then you get something like Hotelling's T-Square.
| null | CC BY-SA 3.0 | null | 2011-05-11T07:19:46.017 | 2011-05-11T07:19:46.017 | null | null | 2958 | null |
10642 | 2 | null | 10640 | 2 | null | I do not have the book at hand, so I cannot comment on the reasoning to find these models.
However, it is reasonable to expect in this type of setting that:
- Different designs might better achieve different optimality criteria: perhaps you want the design with 8 runs that has best overall predictive ability, perhaps you want least maximum variance,...
- There may be more than one design yielding the same optimality (given a criterion)
As such, if you want help for your specific situation, you're going to have to enlighten us on what you want to use this design for (i.e. what is your optimality criterion).
| null | CC BY-SA 3.0 | null | 2011-05-11T07:47:29.803 | 2011-05-11T07:47:29.803 | null | null | 4257 | null |
10643 | 1 | 10686 | null | 2 | 278 | I'm measuring distances of various samples from a reference point. The distance is defined as a non-negative number, where $d=0$ means that the test case is identical to the reference.
My general question is: Given a set of "typical" distances, what is the proper way to tell whether a given $d_1$ "too large", compared to the "typical"?
In my particular case the distance distribution is shown on the following graph

I failed to transform these data to anything symmetrical, so I can't use normal approximation. Any suggestions?
| How to properly analyze distance from a reference? | CC BY-SA 3.0 | null | 2011-05-11T09:01:45.637 | 2011-05-11T23:04:01.823 | 2011-05-11T09:50:04.673 | 930 | 1496 | [
"distributions",
"hypothesis-testing",
"distance-functions"
] |
10644 | 1 | null | null | 10 | 2362 | In PCA, eigenvalues determine the order of components. In ICA I am using kurtosis to obtain the ordering. What are some accepted methods to assess the number of significant components (given I have the order), apart from prior knowledge about the signal?
| Using kurtosis to assess significance of components from independent component analysis | CC BY-SA 3.0 | null | 2011-05-11T09:27:52.943 | 2017-10-15T02:25:18.827 | 2015-02-12T07:02:41.553 | 53618 | 4563 | [
"statistical-significance",
"pca",
"kurtosis",
"independent-component-analysis"
] |
10645 | 2 | null | 10643 | 1 | null | Can you not use the empirical distribution's 95% (or whichever you prefer) confidence limit? If your sample size is big enough, this ought to be a reasonable approximation.
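For instance, a minimal sketch in R (the vector `d` of typical distances is hypothetical, simulated here to mimic a skewed distance distribution):

```r
# hypothetical sample of "typical" distances from the reference
set.seed(1)
d <- rexp(500, rate = 2)        # right-skewed, like the distribution shown

cutoff <- quantile(d, 0.95)     # empirical 95% limit

d1 <- 2.5
d1 > cutoff                     # if TRUE, d1 is "too large" at the 5% level
```

With a large enough sample, this sidesteps any distributional assumption entirely.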
| null | CC BY-SA 3.0 | null | 2011-05-11T09:54:22.177 | 2011-05-11T09:54:22.177 | null | null | 4257 | null |
10646 | 2 | null | 73 | 1 | null | Some packages are very useful in R.
I will just recommend kernlab (Kernel-based Machine Learning Lab), e1071 for SVM, and ggplot2 for graphics.
| null | CC BY-SA 3.0 | null | 2011-05-11T10:03:37.470 | 2011-05-11T10:03:37.470 | null | null | 4531 | null |
10647 | 2 | null | 9756 | 0 | null | I have worked on active learning in classification and in SVM, and I ran into the same problem: if the boundary found by the first model isn't good, the probability of obtaining good labels for new points decreases. If you have any method other than the boundary to label your newly generated points, that can be a good approach, and the accuracy of the newly generated boundary will be better.
| null | CC BY-SA 3.0 | null | 2011-05-11T10:15:49.427 | 2011-05-11T10:15:49.427 | null | null | 4531 | null |
10649 | 1 | null | null | 4 | 6662 | How can one obtain confidence limits of predicted values in ARIMA?
| How to obtain confidence limits of predicted values in ARIMA? | CC BY-SA 3.0 | null | 2011-05-11T12:28:39.347 | 2011-05-11T21:10:48.220 | 2011-05-11T16:51:38.367 | 2970 | 4427 | [
"forecasting"
] |
10650 | 2 | null | 73 | 2 | null | For me
I am using kernlab for Kernel-based Machine Learning Lab and e1071 for SVM and ggplot2 for graphics
| null | CC BY-SA 3.0 | null | 2011-05-11T12:33:30.563 | 2011-05-11T12:33:30.563 | null | null | 4531 | null |
10651 | 2 | null | 10631 | 2 | null | I would have thought a "cubic regression" would work well for a cubic relationship. Call $Y_{i}$ the dependent variable, and $X_{i}$ the independent variable (or regressor). You simply use a polynomial regression:
$$Y_{i}=\left(\sum_{j=0}^{p}\beta_{j}X_{i}^{j}\right)+e_{i}$$
I would use BIC to select the value of $p$. To do this is very easy - calculate the coefficient of determination $R_{p}^{2}$ from a standard OLS regression output. Then a convenient form of BIC is given by:
$$BIC_{p}=n\log(1-R_{p}^{2})+p\log(n)$$
Although this is the standard form, with natural logarithms, a more convenient numerical form is given by
$$BIC10_{p}=-\frac{1}{2}\log_{10}(e)BIC_{p}$$
The reason I say this is that in this form, you get BIC expressed in base-10 log units, and this leads to a very quick interpretation of the actual BIC number. If $BIC10_{p}$ is positive, then the current order $p$ polynomial is more supported by the data (compared to the intercept-only model), and the numerical value in odds form is $10^{BIC10_{p}}$. So if $BIC10_{p}=1$, then the order $p$ polynomial is 10 times more likely than the intercept-only model; if $BIC10_{p}=10$ then the order $p$ polynomial is 10 billion times more likely. BIC10 tells you how many digits are in the odds ratio. So a reasonable way to proceed is to continue to increase the order of the polynomial until $BIC10_{p}$ becomes sufficiently large.
One thing to be careful of though, is that this type of procedure is not likely to work well for extrapolation outside the range of the $X_{i}$ values. This is mainly because this is a data driven procedure.
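As an illustrative sketch of this procedure in R (the simulated data are made up for demonstration; here the true relationship is cubic):

```r
set.seed(1)
x <- runif(100, -2, 2)
y <- 1 + 2 * x - 0.5 * x^3 + rnorm(100)   # simulated cubic relationship

n <- length(y)
bic10 <- sapply(1:6, function(p) {
  # R-squared from an OLS fit of the order-p polynomial
  r2  <- summary(lm(y ~ poly(x, p, raw = TRUE)))$r.squared
  bic <- n * log(1 - r2) + p * log(n)
  -0.5 * log10(exp(1)) * bic              # base-10 form described above
})
round(bic10, 1)
which.max(bic10)                          # most supported polynomial order
```

The order at which `bic10` stops increasing appreciably is the one supported by the data.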
| null | CC BY-SA 3.0 | null | 2011-05-11T12:35:05.150 | 2011-05-11T12:35:05.150 | null | null | 2392 | null |
10652 | 2 | null | 7224 | 3 | null | You need to apply an algorithm that detects which person a picture is referring to. You can build a model based on different portrait pictures of famous personalities and use classifiers to verify that a given picture matches one of the pictures in your database. You need to use a classifier based on different facial parameters, like the distance between the eyes, to raise the accuracy of your model.
There is also skin analysis.
The most important thing is to build a good classifier. This method can be vulnerable.
But there is also a very good project working on face recognition [http://opencv-code.com/Opencv_Face_Detection](http://opencv-code.com/Opencv_Face_Detection)
| null | CC BY-SA 3.0 | null | 2011-05-11T12:53:49.673 | 2011-05-11T12:53:49.673 | null | null | 4531 | null |
10653 | 2 | null | 10604 | -1 | null | I have reviewed the output, and the forecast reflects an AR(12) in the error term, which translates to a 12-period weighted forecast using the last 12 values of both your predictor series, as the AR polynomial acts as a multiplier across all series (X, Y, Z). Without getting into great detail, your model specification, or rather lack of specification, is in my opinion "found wanting".
Unfortunately, the SAS procedure assumes that the differencing operators required to make the original series stationary are the same as the differencing operators in the Transfer Function. Furthermore, the ARIMA component in the Transfer Function is the same as the ARIMA component for the Univariate Analysis of the dependent series. This structure should instead be identified from the residuals of a suitably formed Transfer Function that does not have an ARIMA structure. Finally, your specification (by default) of the ARIMA component in the Transfer Function is a "common factor".
What you need to do is identify the forms of differencing (if any) for all three series and the nature of the response (PDL/ADL/lag structure) for EACH of the two inputs. After estimating such a tentative model, verify that there are no Level Shifts, Local Time Trends, or Seasonal or one-time Pulses in the tentative set of model errors via Intervention Detection schemes. Furthermore, ensure that the model errors have constant variance, that the parameters of the model have not changed over time, that the acf of the model errors is free of any significant structure, AND that these errors are uncorrelated with either of the two pre-whitened input series.
In summary, you are getting what you want, but you might not want what you are getting! You might consider posting the original data for the three series and having the list members (including myself) aid you in constructing a minimally sufficient model.
EDIT: I found some material on the web that might be of help to you.
For illustration, say the non-zero lags are 2 and 4. The process y might be estimated as follows using x as an input:
estimate input=( 2$(2) x )
The input is of the form $cB^{2}+dB^{4}=B^{2}(c+dB^{2})$. It is this latter form that gives the form of the input statement.
| null | CC BY-SA 3.0 | null | 2011-05-11T13:26:05.713 | 2011-05-12T22:10:04.407 | 2011-05-12T22:10:04.407 | 3382 | 3382 | null |
10654 | 2 | null | 10631 | 4 | null | Restricted cubic splines (natural splines) are an excellent choice. These are piecewise cubic polynomials that can fit any shape given enough knots. The following code in R shows how to fit such relationships and to plot the fit with confidence bands.
```
require(rms)
dd <- datadist(mydata); options(datadist='dd')
f <- ols(y ~ rcs(x1, 5)) # 5 knots at default locations
f # print model stats
plot(Predict(f)) # or plot(Predict(f, x1)) # plots over 10th smallest to 10th largest observation
```
| null | CC BY-SA 3.0 | null | 2011-05-11T13:31:27.083 | 2011-05-11T13:31:27.083 | null | null | 4253 | null |
10655 | 1 | 10706 | null | 4 | 2432 | I have a scatter plot of (for example) height against age. How does one calculate for an individual point the percentile of the height for a given age?
Suggestions in R would be most appreciated. Thanks!
| How to calculate a percentile of y for a given x given a series of (x, y)? | CC BY-SA 3.0 | null | 2011-05-11T13:33:07.000 | 2011-05-12T21:10:18.593 | null | null | 1991 | [
"quantiles"
] |
10656 | 1 | null | null | 2 | 902 | I just wanted to know if I can use factor scores and crosstab them with demographics (i.e. gender, age, etc.). I have 69 Likert-scale variables and ran factor analysis in SPSS. It gave me 10 new variables (types of personality). I just want to see the demographics of each new variable. Thanks!!!
| Crosstab factor scores generated by factor analysis | CC BY-SA 3.0 | null | 2011-05-11T13:44:50.520 | 2011-05-11T19:34:00.410 | 2011-05-11T19:34:00.410 | null | 4565 | [
"factor-analysis"
] |
10657 | 1 | null | null | 3 | 196 | In relation to web usage mining from a log file, can you cluster data without performing User and/or Session identification?
I mean,let's say I have these entries:
>
123.234.324.122 [timestamp] "GET /cars/sport/porsche.jpg" 200 23432 "http://topgear.com/cars" "Mozilladsfsd"
120.23.324.122 [timestamp] "GET /bikes/sport/r1.jpg" 200 23432 "http://topgear.com/cars" "Mozilladsfsd"
13.234.324.122 [timestamp] "GET /cars/utility/micra.jpg" 200 23432 "http://topgear.com/cars" "Mozilladsfsd"
So, in this scenario, I just need to cluster based on which cars have been viewed more frequently, etc. Do I need user identification and session identification then? Or can I just consider the URLs and cluster on them?
Because as far as the traditional Web Usage Mining approach goes, and as all the papers I've gone through suggest, you do preprocessing first, then pattern discovery comes along.
My question is: why not jump straight to pattern discovery?
| Can one cluster web log data without performing user or session identification? | CC BY-SA 3.0 | null | 2011-05-11T13:55:41.987 | 2011-05-11T20:07:20.180 | 2020-06-11T14:32:37.003 | -1 | 4402 | [
"clustering"
] |
10658 | 2 | null | 10062 | 5 | null | Package [Mclust](http://cran.r-project.org/web/packages/mclust/mclust.pdf) is nice. The mclust function fits a mixture of normals distribution to data. You can automatically choose the number of components based on BIC (mclustmodel) or specify the number of components. There is also no need to convert your data into a data frame.
Also, package [Mixtools](http://cran.r-project.org/web/packages/mixtools/mixtools.pdf) and the function normalmixEM fits a mixture of normals.
Update: I recently discovered the mixAK package and the NMixMCMC function, and it is terrific. It has many options, including RJMCMC for component selection, right/left censoring, etc.
| null | CC BY-SA 3.0 | null | 2011-05-11T13:57:14.200 | 2011-09-24T02:16:05.507 | 2011-09-24T02:16:05.507 | 2310 | 2310 | null |
10659 | 2 | null | 10655 | 1 | null | 'The' percentile for a given age implies some sort of regression (i.e. you can find 'the' mean predicted height from a given age).
Once you have found this, the result depends on your assumptions: if you want no assumptions (besides the regression's), find how many of the heights in your original data are smaller than the predicted one for your age (=use empirical distribution).
Otherwise, you can fit whatever model you like to the marginal distribution of height and then find the percentile of the predicted height in that distribution.
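A sketch of the empirical version in R (the toy height-vs-age data and the age of interest are assumptions):

```r
# toy height-vs-age data
set.seed(2)
age    <- runif(200, 18, 60)
height <- 150 + 0.3 * age + rnorm(200, sd = 5)

fit  <- lm(height ~ age)
pred <- predict(fit, newdata = data.frame(age = 35))

# empirical percentile of the predicted height among the observed heights
ecdf(height)(pred)
```

Replacing the empirical CDF with a fitted marginal distribution of height gives the model-based alternative described above.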
| null | CC BY-SA 3.0 | null | 2011-05-11T14:18:39.307 | 2011-05-11T14:18:39.307 | null | null | 4257 | null |
10660 | 2 | null | 10657 | 1 | null | If you don't identify your sessions/users, you are clustering different things: one user who is an insane adept of any given car and looks at its picture daily could have a huge impact on your results, though you're probably not interested in that.
| null | CC BY-SA 3.0 | null | 2011-05-11T14:58:55.840 | 2011-05-11T14:58:55.840 | null | null | 4257 | null |
10661 | 2 | null | 10591 | 9 | null | Here is a sketch of a proof which combines three ideas: (a) the delta method, (b) variance-stabilization transformations and (c) the closure of the Poisson distribution under independent sums.
First, let's consider a sequence of iid Poisson random variables $X_1, X_2, \ldots$ with mean $\lambda > 0$. Then, the Central Limit Theorem asserts that
$$\newcommand{\barX}{\bar{X}_n}\newcommand{\convd}{\,\xrightarrow{\,d\,}\,}\newcommand{\Nml}{\mathcal{N}}
\sqrt{n} (\barX - \lambda) \convd \Nml(0,\lambda) \> .
$$
Notice that the asymptotic variance depends on the (presumably unknown) parameter $\lambda$. It would be nice if we could find some function of the data other than $\bar{X}_n$ such that, after centering and rescaling, it had the same asymptotic variance no matter what the parameter $\lambda$ was.
The [delta method](http://en.wikipedia.org/wiki/Delta_method) provides a handy way for determining the distribution of smooth functions of some statistic whose limiting distribution is already known. Let $g$ be a function with continuous first derivative such that $g'(\lambda) \neq 0$. Then, by the delta method (specialized to our particular case of interest),
$$
\sqrt{n}\big(g(\barX) - g(\lambda)\big) \convd \Nml(0, \lambda g'(\lambda)^2) \>.
$$
So, how can we make the asymptotic variance constant (say, the value $1$) for all possible $\lambda$? From the expression above, we know we need to solve
$$g'(\lambda) = \lambda^{-1/2} \>.$$
It is not hard to see that the general antiderivative is $g(\lambda) = 2 \sqrt{\lambda} + c$ for any $c$, and the limiting distribution is invariant to the choice of $c$ (by subtraction), so we can set $c = 0$ without loss of generality. Such a function $g$ is called a variance-stabilizing transformation.
Hence, by the delta method and our choice of $g$, we conclude that
$$
\sqrt{n}\Big(2\sqrt{\barX} - 2\sqrt{\lambda}\Big) \convd \Nml(0, 1) \>.
$$
Now, the Poisson distribution is closed under independent sums. So, if $X$ is Poisson with mean $\lambda$, then there exist random variables $Z_1, \ldots, Z_n$ that are iid Poisson with mean $\lambda/n$ such that $\sum_{i=1}^n Z_i$ has the same distribution as $X$. This motivates the approximation in the case of a single Poisson random variable.
What [Anscombe (1948)](http://www.jstor.org/stable/2984159) found was that modifying the transformation $g$ (slightly) to $\tilde{g}(\lambda) = 2\sqrt{\lambda + b}$ for some constant $b$ actually worked better for smaller $\lambda$. In this case, $b = 3/8$ is about optimal.
Note that this modification "destroys" the true variance-stabilizing property of $g$, i.e., $\tilde{g}$ is not variance-stabilizing in the strict sense. But, it is close and gives better results for smaller $\lambda$.
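A quick simulation in R illustrates the (approximate) variance stabilization for a range of means:

```r
set.seed(42)
sapply(c(2, 5, 20, 100), function(lambda) {
  x <- rpois(1e5, lambda)
  var(2 * sqrt(x + 3/8))   # should be close to 1 for each value of lambda
})
```

For very small means the approximation degrades, which is exactly the regime Anscombe's choice of $b = 3/8$ was designed to improve.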
| null | CC BY-SA 3.0 | null | 2011-05-11T15:28:51.700 | 2011-05-12T12:56:29.837 | 2011-05-12T12:56:29.837 | 2970 | 2970 | null |
10662 | 1 | 10663 | null | 4 | 465 | I'm working with a lot of data that was collected by obstetricians regarding the health of infants (birth weight, gestational age at delivery, mother's BMI), and I am trying to connect this data with geometric measurements performed on microscopic slide scans for each associated placenta (area, perimeter, number of blood vessels). Each mother-infant-placenta trio is identified with a lab ID so it is possible to know which is which, but there are only 27 sets of mother-infant-placenta.
All the clinical data were taken before I arrived on the scene. I was pretty much given the placenta slide images, and an excel sheet of the clinical data. Then I performed the geometric measurements of the placentas. So the data was not taken with my purpose in mind.
My question is, what can I do with this data? I collected measurements with some clinical knowledge that the condition of a placenta is both an influence on and reflection of the infant health outcome. But I desperately need advice on which statistical/data mining techniques I can use to see how my measurements affect/are an indicator of infant health.
Is there any hope for ad-hoc analysis on a small sample size?
| What to do with a small (27) medical dataset? | CC BY-SA 3.0 | null | 2011-05-11T15:37:13.417 | 2011-05-11T19:46:48.087 | 2011-05-11T19:43:51.520 | null | 5129 | [
"hypothesis-testing",
"data-mining",
"small-sample",
"exploratory-data-analysis"
] |
10663 | 2 | null | 10662 | 5 | null | If you're looking for statistical significance I wouldn't hold out hope unless you have a very targeted hypothesis and/or there is a very strong effect. But certainly you could generate some new hypotheses with this data via some exploratory analysis. With 6 variables overall I'm not sure I'd start with any sophisticated modeling. Never underestimate the power of scatterplots and histograms :)
One really simple thing to do would be to run PCA and see if the scores on any of the components have an apparent relationship with the response(s) you're interested in. It's probably a <strike>good</strike> reasonable idea anyhow since your measurements are certainly correlated.
Edit: My thought on using PCA was basically to reduce the area/perimeter/number variables to a single dimension. Not strictly necessary but it might make visualizing the relationships easier.
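A sketch of that step in R, where the data frame `placenta` holding the three geometric measurements is hypothetical:

```r
# hypothetical geometric measurements for the 27 placentas
set.seed(3)
placenta <- data.frame(area      = rnorm(27, 100, 10),
                       perimeter = rnorm(27, 40, 4),
                       n_vessels = rpois(27, 15))

pc <- prcomp(placenta, scale. = TRUE)  # scale: the variables have different units
summary(pc)                            # variance explained by each component
scores <- pc$x[, 1]                    # PC1 scores, to plot against e.g. birth weight
```

Plotting `scores` against birth weight or gestational age is then a one-liner with `plot()`.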
| null | CC BY-SA 3.0 | null | 2011-05-11T15:57:01.233 | 2011-05-11T19:46:48.087 | 2011-05-11T19:46:48.087 | 26 | 26 | null |
10664 | 1 | null | null | 2 | 578 | I am trying to analyse a group of 4 questions that are on a 5-point scale. I need to group the answers for each question based on age. There are three different age groups. How would I go about doing this?
| How to examine group differences on several 5-point items using SPSS? | CC BY-SA 3.0 | null | 2011-05-11T16:17:56.080 | 2011-05-12T10:58:56.187 | 2011-05-12T01:47:22.627 | 183 | 4567 | [
"spss",
"likert"
] |
10665 | 2 | null | 10649 | 4 | null | One idea would be to use the [forecast](http://cran.r-project.org/web/packages/forecast/index.html) package in [R](http://www.r-project.org/):
```
library(forecast)
fit <- auto.arima(WWWusage)
fit
f <- forecast(fit,h=20)
f
plot(f)
```
You can also give auto.arima parameters to use, rather than allowing it to fit its own. I'm not sure how to obtain confidence intervals for the historic period-- you could try 'rolling' through the dataset, producing 1-step ahead forecasts+confidence intervals.
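A rough sketch of that rolling idea (refitting at every step is slow but simple; the starting point of 80 observations is arbitrary):

```r
library(forecast)
y <- WWWusage
ci <- t(sapply(80:(length(y) - 1), function(i) {
  # refit on the first i observations, forecast one step ahead
  f <- forecast(auto.arima(y[1:i]), h = 1)
  c(mean = f$mean[1], lower = f$lower[1, 2], upper = f$upper[1, 2])  # 95% band
}))
head(ci)
```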
| null | CC BY-SA 3.0 | null | 2011-05-11T16:40:47.500 | 2011-05-11T16:48:50.180 | 2011-05-11T16:48:50.180 | 2817 | 2817 | null |
10666 | 2 | null | 10656 | 4 | null | In SPSS, choose Analyze...Descriptive Statistics...Explore. The factor scores will be your dependents, and a demographic grouping will be your "factor" in this procedure. I'd choose Plots Only to start with and request boxplots. You can get either factor levels together or dependents together: you can experiment.
Later, when you want means and standard deviations and such, I'd use the Summarize command, located in the menus at Analyze...Reports...Case Summaries. You can also experiment with "means [var list] by [var list]/stat anova."
However, I'd caution you that some of your 10 factors are likely to be marginal ones that have small eigenvalues and explain little of the total item variance. Maybe consider making your criteria stricter for factor extraction, such as using a subcommand like "/criteria mineigen (1.5)" or 2.0 instead of 1.0 which is so commonly used. It'll also be a plus to reduce your number of factors when it comes to displaying and explaining your results.
| null | CC BY-SA 3.0 | null | 2011-05-11T16:43:02.240 | 2011-05-11T16:43:02.240 | null | null | 2669 | null |
10667 | 1 | 10670 | null | 2 | 1734 | For each product p, I have to assess the associated odds ratio (success/failure). The data are in this table:
```
N_success N_trials p1 p2 p3 p4 p5
5 310 n n n n n
17 700 n y n n y
12 650 y y y n n
27 214 n y n n y
0 87 n n n y n
```
So I did that and I got the odds ratios for each plus the 95% asymptotic confidence intervals:
```
p1= (0.2558322 0.48442194 0.91725993)
p2= (1.1584454 2.9114056 7.316946)
p3= (0.2558322 0.48442194 0.91725993)
p4= nil
p5= (1.738197 3.0642326 5.4018736)
```
Now is this the correct way to assess? There are at least two situations where the lower 95% CI of the OR is above 1 (p2 and p5), but should I not take into account multiple comparisons? What is the best way to do that with odds ratios (Bonferroni, others)? Is it better to use logistic regressions (and how)? Thanks
| Odds ratios multiple comparisons | CC BY-SA 3.0 | null | 2011-05-11T16:58:07.793 | 2011-05-11T18:52:39.843 | 2011-05-11T18:18:03.137 | 919 | 4569 | [
"logistic",
"multiple-comparisons",
"odds-ratio"
] |
10668 | 2 | null | 10369 | 1 | null | Observe that the random variable $i_j$ is a function of $\mathbf{Z} = (Z_1, \ldots, Z_n)$ only. For an $n$-vector, $\mathbf{z}$, we write $i_j(\mathbf{z})$ for the index of the $j$th largest coordinate. Let also $P_z(A) = P(X_1 \in A \mid Z_1 = z)$ denote the conditional distribution of $X_1$ given $Z_1$.
If we break probabilities down according to the value of $i_j$ and disintegrate w.r.t. $\mathbf{Z}$ we get
$$\begin{array}{rcl}
P(X_{i_j} \in A) & = & \sum_{k} P(X_k \in A, i_j = k) \\
& = &\sum_k \int_{(i_j(\mathbf{z}) = k)} P(X_k \in A \mid \mathbf{Z} = \mathbf{z}) P(\mathbf{Z} \in d\mathbf{z}) \\
& = & \sum_k \int_{(i_j(\mathbf{z}) = k)} P(X_k \in A \mid Z_k = z_k) P(\mathbf{Z} \in d\mathbf{z}) \\
& = & \sum_k \int_{(i_j(\mathbf{z}) = k)} P_{z_k}(A) P(\mathbf{Z} \in d\mathbf{z}) \\
& = & \int P_{z}(A) P(Z_{i_j} \in dz) \\
\end{array}$$
Under the assumptions of normal distributions (taking $\sigma_y = 1$) and $Z_k$ being the sum, the conditional distribution of $X_1$ given $Z_1 = z$ is
$$N\left(\frac{\sigma_x^2}{1+\sigma_x^2} z, \sigma_x^2\left(1 - \frac{\sigma_x^2}{1+\sigma_x^2}\right)\right)$$
and @probabilityislogic shows how to compute the distribution of $Z_{i_j}$, hence we have explicit expressions for both the distributions that enter in the last integral above. Whether the integral can be computed analytically is another question. You might be able to, but off the top of my head I can't tell if it is possible. For asymptotic analysis when $\sigma_x \to 0$ or $\sigma_x \to \infty$ it might not be necessary.
The intuition behind the computation above is that this is a conditional independence argument. Given $Z_{k} = z$ the variables $X_{k}$ and $i_j$ are independent.
| null | CC BY-SA 3.0 | null | 2011-05-11T18:22:17.520 | 2011-05-11T18:22:17.520 | null | null | 4376 | null |
10669 | 1 | 10685 | null | 2 | 1089 | I am running a model for which I am getting a very bad percentage detection of events in the confusion matrix (basically my true positives).
Obviously that implies my false negatives are too high.
When I gave this dataset to a neural network node or a decision tree node, I saw that the percentage detection by these two techniques is quite good (65%-67% versus my 15%).
I went to the decision tree diagram and saw the various cuts under which it divides the population.
I obviously understand that the variable which falls at the root of the tree has the highest importance and the leaves have the lowest.
- How can the decision tree "tree" help me create categorical variables or treat continuous variables so that the accuracy of my model improves?
To clarify, if a decision tree can generate a matrix with 65% detection, it would have some rule inside it to get such accuracy.
These rules would display in the tree diagram we get as output.
- Can we use this tree to create our variables in a different way and get close to the accuracy given by the decision tree?
| Decision tree output -- learning | CC BY-SA 3.0 | null | 2011-05-11T18:37:16.257 | 2011-05-12T13:51:27.850 | 2011-05-12T13:51:27.850 | 183 | 1763 | [
"cart"
] |
10670 | 2 | null | 10667 | 0 | null | I believe you could use "simultaneous score confidence interval for OR" to analyze your question. The reference is Agresti et al. 2008 Simultaneous confidence intervals for comparing binomial parameters. Biometrics 64 1270-1275.
The R code is available in [http://www.stat.ufl.edu/~aa/cda/software.html](http://www.stat.ufl.edu/~aa/cda/software.html)
Sincerely,
| null | CC BY-SA 3.0 | null | 2011-05-11T18:52:39.843 | 2011-05-11T18:52:39.843 | null | null | 4559 | null |
10671 | 2 | null | 10662 | 1 | null | I agree with JMS; you will need to plot each of your variables first because PCA requires the normality assumption. If your variables are not normally distributed, then it is not appropriate to use PCA before transforming the variables. I think you will need to ask yourself what you really want to know from this data set (set up your hypothesis); then you will be able to pick the right statistical tests.
It is not good to dichotomize continuous variables into categorical variables because you will lose power to detect differences. However, if that is the case, you could use the "odds ratio", "risk difference", etc. to interpret your data sets.
Sincerely,
| null | CC BY-SA 3.0 | null | 2011-05-11T19:11:30.590 | 2011-05-11T19:11:30.590 | null | null | 4559 | null |
10672 | 1 | 10757 | null | 13 | 1828 | I've read about a number of algorithms for solving n-armed bandit problems like $\epsilon$-greedy, softmax, and UCB1, but I'm having some trouble sorting through what approach is best for minimizing regret.
Is there a known optimal algorithm for solving the n-armed bandit problem? Is there a choice of algorithm that seems to perform best in practice?
| Optimal algorithm for solving n-armed bandit problems? | CC BY-SA 3.0 | null | 2011-05-11T19:57:03.987 | 2012-07-09T16:32:29.370 | 2012-07-09T16:32:29.370 | 4872 | 4281 | [
"machine-learning",
"reinforcement-learning",
"multiarmed-bandit"
] |
10673 | 2 | null | 10657 | 0 | null | Cluster analysis does not involve hypothesis testing per se, but is really just a collection of different similarity algorithms for exploratory analysis. You can force hypothesis testing somewhat but the results are often inconsistent, since cluster changes are very sensitive to changes in parameters. So the answer is yes, you can do it, but be careful about making specific statistical inference.
As to your other point. As in any data analysis, your data should be as clean and as representative as possible, so I would avoid jumping steps.
| null | CC BY-SA 3.0 | null | 2011-05-11T20:07:20.180 | 2011-05-11T20:07:20.180 | null | null | 3489 | null |
10674 | 1 | null | null | 0 | 904 | I have a two-dimensional data set that looks like $(t, x)$ where $t$ is the time in seconds when event $x$ happened. $x$ ranges over $[0, 200]$.
I want to visualize the frequency of each $x$ at time $t$ over some time period. I guess this would be a bar graph with $x$-axis being event #, $y$-axis being frequency, and $z$-axis being time, $t$.
Furthermore, I would like to group all events that happen within say a 5 second interval to count towards the same frequency bar on the $y$-axis.
If there is a way to do this with R that would be even better.
My goal is to get a sense how often some event occurs over the course of a day, and when certain events happen a lot or infrequently. If you know of a better way to understand this information, I am all ears.
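One way to sketch this in R (with simulated data standing in for the real event log — the data frame and column names below are assumptions) is to floor each timestamp to a 5-second bin and tabulate; a heatmap is often easier to read than a 3-D bar chart:

```r
set.seed(1)
events <- data.frame(t = sort(runif(500, 0, 600)),            # time in seconds
                     x = sample(0:200, 500, replace = TRUE))  # event id

events$bin <- floor(events$t / 5) * 5   # group into 5-second intervals
counts <- as.data.frame(table(bin = events$bin, x = events$x))
counts <- subset(counts, Freq > 0)      # drop empty (bin, event) cells

# With ggplot2 (if installed), show frequency as colour over time vs. event id:
# library(ggplot2)
# ggplot(counts, aes(as.numeric(as.character(bin)),
#                    as.numeric(as.character(x)), fill = Freq)) + geom_tile()
```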
| Modeling frequency over time | CC BY-SA 3.0 | null | 2011-05-11T20:07:34.483 | 2013-05-15T04:33:18.643 | 2013-05-15T04:33:18.643 | 805 | 4571 | [
"r",
"data-visualization"
] |
10676 | 1 | 10677 | null | 8 | 10107 | Lets say I have a highly dimensional classification problem with a lot of noise, and I want to improve my results by removing some of the noisy variables. I've [read](http://research.microsoft.com/pubs/69946/tr-2002-63.pdf) [several](http://www.andrew.cmu.edu/user/minhhoan/papers/SVMFeatureWeight_PR.pdf) [papers](http://www.ncbi.nlm.nih.gov/pubmed/16606446) on using SVMs for feature selection, but I'm at a loss as to how to implement this in R. Are there pre-existing packages that do this, or am I going to have to roll my own?
| Using an SVM for feature selection | CC BY-SA 3.0 | null | 2011-05-11T20:32:04.700 | 2011-05-12T09:22:47.270 | null | null | 2817 | [
"r",
"svm"
] |
10677 | 2 | null | 10676 | 5 | null | As I understand them, SVMs have built-in regularization because they tend to penalize large predictor weights, which amounts to favoring simpler models. They're often used with [recursive feature elimination](http://www.brainvoyager.com/bvqx/doc/UsersGuide/WebHelp/Content/MVPATools/Recursive_Feature_Elimination.htm) (in neuroimaging paradigms, at least).
About R specifically, there's the [kernlab](http://cran.r-project.org/web/packages/kernlab/index.html) package, by [Alex Smola](http://alex.smola.org/) who co-authored Learning with Kernels (2002, MIT Press), which implements SVM (in addition to [e1071](http://cran.r-project.org/web/packages/e1071/index.html)). However, if you are after a dedicated framework, I would warmly recommend the [caret](http://caret.r-forge.r-project.org/Classification_and_Regression_Training.html) package.
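For example, caret wraps recursive feature elimination directly. The sketch below assumes a predictor matrix `x` and an outcome factor `y` (both hypothetical) and shows one possible configuration, not the only one:

```r
library(caret)

# Recursive feature elimination with a linear SVM, 5-fold cross-validated
ctrl <- rfeControl(functions = caretFuncs, method = "cv", number = 5)
fit  <- rfe(x, y,
            sizes = c(5, 10, 20),      # candidate subset sizes to evaluate
            rfeControl = ctrl,
            method = "svmLinear")      # passed through to caret::train()
predictors(fit)                        # names of the retained features
```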
| null | CC BY-SA 3.0 | null | 2011-05-11T20:47:54.837 | 2011-05-11T20:47:54.837 | null | null | 930 | null |
10678 | 2 | null | 10649 | 3 | null | The confidence limits for an ARIMA forecast are based upon the psi weights, which are easily computed by representing the ARIMA model as a pure moving-average model. One should not be dependent upon software (any software!) for answers.
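For instance, base R's `ARMAtoMA` returns exactly these psi weights; the model orders and innovation variance below are made up for illustration:

```r
# Psi weights of an ARMA(1,1) via its pure moving-average representation
psi <- ARMAtoMA(ar = 0.5, ma = -0.3, lag.max = 10)

# h-step-ahead forecast variance: sigma^2 * (1 + psi_1^2 + ... + psi_(h-1)^2)
sigma2 <- 1.5                        # hypothetical innovation variance
h <- 3
var_h <- sigma2 * (1 + sum(psi[seq_len(h - 1)]^2))
half_width <- 1.96 * sqrt(var_h)     # 95% limits: point forecast +/- half_width
```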
| null | CC BY-SA 3.0 | null | 2011-05-11T21:10:48.220 | 2011-05-11T21:10:48.220 | null | null | 3382 | null |
10679 | 2 | null | 10664 | 3 | null | Here are three options; square brackets need to be filled in, while braces contain optional subcommands:
```
cross [varlist] by age {/cells count col row}.
means [varlist] by age {/stat anova}.
```
The third option is the summarize command, best obtained through the menus via Analyze...Reports...Case Summaries. Then you may need to double-click the resulting pivot table and use right-click...Pivoting Trays to rearrange.
| null | CC BY-SA 3.0 | null | 2011-05-11T22:19:51.593 | 2011-05-12T01:55:53.997 | 2011-05-12T01:55:53.997 | 183 | 2669 | null |
10680 | 1 | 10724 | null | 15 | 981 | This is sort of an open ended question but I wanna be clear. Given a sufficient population you might be able to learn something (this is the open part) but whatever you learn about your population, when is it ever applicable to a member of the population?
From what I understand of statistics, it's never applicable to a single member of a population; however, all too often I find myself in a discussion where the other person says "I read that 10% of the world population has this disease" and goes on to conclude that every tenth person in the room has this disease.
I understand that ten people in this room is not a big enough sample for the statistic to be relevant, but apparently a lot of people don't.
Then there's this thing about sample size: you only need to probe a large enough sample to get reliable statistics. But isn't the size needed proportional to the complexity of the statistic? If I'm measuring something that's very rare, doesn't that mean I need a much bigger sample to be able to determine the relevance of such a statistic?
The thing is, I truly question the validity of any newspaper or article when statistics are involved, given the way they're used to build confidence.
That's a bit of background.
Back to the question: in what ways can you NOT, or may you NOT, use statistics to form an argument? I negated the question because I'd like to find out more about common misconceptions regarding statistics.
| How to NOT use statistics | CC BY-SA 3.0 | null | 2011-05-11T17:47:50.800 | 2014-07-20T01:01:54.527 | 2014-07-20T00:13:05.307 | 22468 | 4576 | [
"teaching",
"validity"
] |
10681 | 2 | null | 10680 | 9 | null | Unless the people in the room are a random sample of the world's population, any conclusions based on statistics about the world's population are going to be very suspect. One out of every 5 people in the world is Chinese, but none of my five children are...
| null | CC BY-SA 3.0 | null | 2011-05-11T17:56:10.473 | 2011-05-11T17:56:10.473 | null | null | null | null |
10682 | 2 | null | 10680 | 6 | null |
- To address overapplication of statistics to small samples, I recommend countering with well-known jokes ("I am so excited, my mother is pregnant again and my baby sibling will be Chinese." "Why?" "I have read that every fourth baby is Chinese.").
- Actually, I recommend jokes to address all kinds of misconception in statistics, see http://xkcd.com/552/ for correlation and causation.
- The problem with newspaper articles is rarely the fact that they treat a rare phenomenon.
- Simpson's paradox comes to mind as an example that statistics can rarely be used without analysis of the causes.
| null | CC BY-SA 3.0 | null | 2011-05-11T17:57:05.260 | 2011-05-11T17:57:05.260 | null | null | 17823 | null |
10683 | 2 | null | 10680 | 3 | null | There is an interesting article by Mary Gray on misuse of statistics in court cases and things like that...
Gray, Mary W.; Statistics and the Law. Math. Mag. 56 (1983), no. 2, 67–81
| null | CC BY-SA 3.0 | null | 2011-05-11T18:09:30.407 | 2011-05-11T18:09:30.407 | null | null | 42201 | null |
10684 | 2 | null | 10680 | 0 | null | Hypothesis: $A$
(Textbook) Result: Do not reject $A$ ($\alpha = c$)
Your Statement: $A$ holds with probability $\alpha$!
Correct would be: in this case, you know nothing. If you want to "prove" $A$, your hypothesis has to be $\neg A$; reject it at level $\alpha$ to get the desired statement.
| null | CC BY-SA 3.0 | null | 2011-05-11T20:27:27.100 | 2011-05-11T20:27:27.100 | null | null | 12868 | null |
10685 | 2 | null | 10669 | 2 | null | I see that you've accepted answers to just 3 of 9 questions...
Are you using the type of Decision Tree known as CHAID? If so, you will obtain an indication of one main effect and then any number of so-called interaction effects. You can try these effects in a regression, ANOVA, or general linear model. You build in the main effect. You build in the main effect for each variable involved in an interaction. And you build in the interactions. But before you do this, the interaction effects all need to be pre-tested, as I explained in a comment to
[this post](https://stats.stackexchange.com/questions/7815/what-skills-are-required-to-perform-large-scale-statistical-analyses).
| null | CC BY-SA 3.0 | null | 2011-05-11T22:48:58.137 | 2011-05-11T22:48:58.137 | 2017-04-13T12:44:24.677 | -1 | 2669 | null |
10686 | 2 | null | 10643 | 2 | null | My first instinct is to say that it would be silly to make such a determination absent any knowledge of the topic. "Too large" for what, or for whom? But perhaps what you're looking for is really a test for outliers in the distribution--not that you're likely to find any in the one you've shown. Check out Dixon's Test for Outliers (sometimes called the Q-Test). I'm not thrilled with what Wikipedia provides, so you might want to check around further than that. Sorry I don't have a good web reference; I use the guidelines in the book 100 Statistical Tests by Gopal Kanji.
| null | CC BY-SA 3.0 | null | 2011-05-11T23:04:01.823 | 2011-05-11T23:04:01.823 | null | null | 2669 | null |
10687 | 1 | null | null | 22 | 63713 | I know correlation does not imply causation but instead indicates the strength and direction of a relationship. Does simple linear regression imply causation? Or is an inferential statistical test (t-test, etc.) required for that?
| Does simple linear regression imply causation? | CC BY-SA 3.0 | null | 2011-05-11T23:05:00.207 | 2022-05-07T10:19:43.723 | 2011-05-12T06:44:00.000 | 930 | 4572 | [
"regression",
"correlation",
"causality"
] |
10688 | 2 | null | 2770 | 3 | null | I was using "[Fundamentals of Clinical Trials](https://link.springer.com/book/10.1007/978-3-319-18539-2)" when I was in PhD program.
| null | CC BY-SA 4.0 | null | 2011-05-11T23:19:08.927 | 2022-12-06T02:54:37.377 | 2022-12-06T02:54:37.377 | 362671 | 4559 | null |
10689 | 2 | null | 10687 | 24 | null | The quick answer is no. You can easily come up with unrelated data that, when regressed, will pass all sorts of statistical tests. Below is an old picture from Wikipedia (which, for some reason, has recently been removed) that has been used to illustrate data-driven "causality".
We need more pirates to cool the planet?
*(image removed: chart of declining pirate numbers plotted against rising global average temperature)*
For time series, there is a term called "Granger Causality" that has a very specific meaning.
[http://en.wikipedia.org/wiki/Granger_causality](http://en.wikipedia.org/wiki/Granger_causality)
Other than that, "causality" is in the eye of the beholder.
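For a concrete (simulated) illustration, the lmtest package provides a Granger test; the data-generating process here is an assumption made up for the demo:

```r
library(lmtest)

set.seed(42)
x <- as.numeric(arima.sim(list(ar = 0.5), n = 200))
y <- 0.8 * c(0, head(x, -1)) + rnorm(200)   # y depends on lagged x

grangertest(y ~ x, order = 1)   # do lags of x help predict y?
grangertest(x ~ y, order = 1)   # reverse direction, for comparison
```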
| null | CC BY-SA 3.0 | null | 2011-05-11T23:44:00.907 | 2011-05-11T23:57:04.897 | 2011-05-11T23:57:04.897 | 2775 | 2775 | null |