Id stringlengths 1 6 | PostTypeId stringclasses 7 values | AcceptedAnswerId stringlengths 1 6 ⌀ | ParentId stringlengths 1 6 ⌀ | Score stringlengths 1 4 | ViewCount stringlengths 1 7 ⌀ | Body stringlengths 0 38.7k | Title stringlengths 15 150 ⌀ | ContentLicense stringclasses 3 values | FavoriteCount stringclasses 3 values | CreationDate stringlengths 23 23 | LastActivityDate stringlengths 23 23 | LastEditDate stringlengths 23 23 ⌀ | LastEditorUserId stringlengths 1 6 ⌀ | OwnerUserId stringlengths 1 6 ⌀ | Tags list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8571 | 2 | null | 7946 | 1 | null | What kind of graph should I create to illustrate my data?
I would use a scatterplot - it would give you an idea about the type of relationship between the data. It is important to identify if the relationship is linear or not, before calculating correlation between the measurements.
| null | CC BY-SA 2.5 | null | 2011-03-21T15:39:29.850 | 2011-03-21T15:39:29.850 | null | null | 2635 | null |
8572 | 1 | 8876 | null | 24 | 7329 | I know a fair amount about fitting continuous parameters, particularly with gradient-based methods, but not much about fitting discrete parameters.
What are commonly used MCMC algorithms/techniques for fitting discrete parameters? Are there algorithms which are both fairly general and fairly powerful? Are there algorithms which deal with the curse of dimensionality well? For example, I would say Hamiltonian MCMC is general, powerful and scales well.
Sampling from an arbitrary discrete distribution seems more difficult than sampling from a continuous distribution, but I am curious what the state of the art is.
Edit: JMS asked me to elaborate.
I don't have specific applications in mind, but here are some kinds of models I am imagining:
- Model selection between several kinds of continuous regression models. You have a discrete single 'model' parameter
- A continuous model where each observation has a possibility of being an 'outlier' and drawn from a much more dispersed distribution. I suppose this is a mixture model.
I would expect many models to include both continuous and discrete parameters.
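As a toy illustration of the kind of sampler in question (my own sketch, not from the post), here is a minimal random-walk Metropolis chain in Python over a discrete state space, assuming a Poisson(4) target and a symmetric ±1 proposal:

```python
import math
import random

random.seed(0)

def pois_pmf(k, lam=4.0):
    # target pmf on the non-negative integers; Poisson(4) as a stand-in
    if k < 0:
        return 0.0
    return math.exp(-lam) * lam ** k / math.factorial(k)

state, samples = 4, []
for _ in range(50000):
    prop = state + random.choice([-1, 1])      # symmetric random-walk proposal
    # Metropolis acceptance ratio; symmetric proposal, so no Hastings correction
    if random.random() < pois_pmf(prop) / pois_pmf(state):
        state = prop
    samples.append(state)

post_burn = samples[5000:]                     # drop burn-in
est_mean = sum(post_burn) / len(post_burn)     # should be near 4
```

The same accept/reject logic works for any pmf known up to a constant; the proposal just has to stay in (or be rejected outside) the right domain.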
| What MCMC algorithms/techniques are used for discrete parameters? | CC BY-SA 2.5 | null | 2011-03-21T15:51:51.340 | 2011-03-31T16:45:44.557 | 2011-03-31T14:59:59.527 | 1146 | 1146 | [
"bayesian",
"markov-chain-montecarlo"
] |
8573 | 1 | 8580 | null | 2 | 12079 | Probability distribution of two classes is given by $N(5,1)$ and $N(6,1)$ where
$N(\mu,\sigma^2)$:
$$f(x) = \frac{1}{\sqrt{2\pi \sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}} $$
- How to classify them, and see error rate?
I am doing this in MATLAB
Taking 500 samples of each distribution and tagging them depending where they came from.
```
sample1 =
0.8864 -1.0000
0.1560 -1.0000
0.8502 -1.0000
-0.4059 -1.0000
0.9298 -1.0000
sample2 =
-0.0671 1.0000
0.7057 1.0000
0.3310 1.0000
-0.7314 1.0000
-0.4524 1.0000
data =
0.8864 -1.0000
0.1560 -1.0000
0.8502 -1.0000
-0.4059 -1.0000
0.9298 -1.0000
-0.0671 1.0000
0.7057 1.0000
0.3310 1.0000
-0.7314 1.0000
-0.4524 1.0000
```
Now I would like to classify them using Bayes, using `normpdf`. To make this easier I am taking the prior probabilities to be equal, so they are not important in creating the rule. But I do not know how to code this in MATLAB. Any ideas?
```
n=500;
sample1=[randn(n,1) -1*ones(n,1)];
sample2=[randn(n,1) ones(n,1)];
data=[sample1; sample2];
mu1 =5;
sigma1 =1;
mu2 =6;
sigma2 =1;
x1=linspace(mu1-1*sigma1,mu1+1*sigma1,500);
p1=normpdf(x1,mu1,sigma1);
x2=linspace(mu2-1*sigma2,mu2+1*sigma2,500);
p2=normpdf(x2,mu2,sigma2);
plot(x1,p1,x2,p2)
```
Also, is it correct to label them with `-1` and `1`?
If I compute the mean and variance of `sample1` and `sample2`:
```
mean_S1=mean(sample1);
mean_S2=mean(sample2);
var_S1 = var(sample1);
var_S2 = var(sample2);
```
- What is next step?
- for the error rate I'm planning to do a comparison between the original class vector (-1,1) and the result of the classifier, like:
```
errorRate = mean(OriginalClasses ~= ResultOfClassifier);
```
**UPDATED**
```
clear;
clc;
n = 500;
mu1 =5;
sigma1 =1;
mu2 =6;
sigma2 =1;
mu = [mu1,mu2];sigma = [sigma1,sigma2]; %group them
%suppose you get your test data from somewhere.
%for kicks, I put random data in:
%xtest = randn(2*n,1); %OP example code has the labels in the data var; ack
sample1=[randn(n,1) -1*ones(n,1)];
sample2=[randn(n,1) ones(n,1)];
data=[sample1; sample2];
deviance = bsxfun(@minus,data,mu); %tbc
deviance = bsxfun(@rdivide,deviance,sigma); %tbc
deviance = deviance .^ 2; %tbc
deviance = bsxfun(@plus,deviance,2*log(abs(sigma))); %tbc
deviance = deviance + log(2*pi); %not necessary here, actually;
[dummy,mini] = min(deviance,[],2); %find which class;
ResultOfClassifier = 2 * mini - 3; %is now a -1/1
errorRate = mean(data(:,2) ~= ResultOfClassifier)
errorRate =
0.2680
```
| Bayes classifier of two normal distributions in MATLAB | CC BY-SA 2.5 | null | 2011-03-21T16:06:07.347 | 2016-10-17T13:42:52.167 | 2011-03-21T18:32:43.573 | 3681 | 3681 | [
"bayesian",
"classification",
"matlab"
] |
8574 | 2 | null | 8570 | 6 | null | James Gentle's Computational Statistics (2009).
James Gentle's Matrix algebra: theory, computations, and applications in statistics (2007), more so towards the end of the book; the beginning is great too, but it's not exactly what you're looking for.
Christopher M. Bishop's Pattern Recognition (2006).
Hastie et al.'s The elements of statistical learning: data mining, inference, and prediction (2009).
Are you looking for something as low-level as a text that will answer a question such as: "Why is it more efficient to store matrices and higher dimensional arrays as a 1-D array, and how can I index them in the usual M(0, 1, 3, ...) way?" or something like "What are some common techniques used to optimize standard algorithms such as gradient descent, EM, etc.?"?
Most texts on machine learning will provide in-depth discussions of the topic(s) you're looking for.
| null | CC BY-SA 2.5 | null | 2011-03-21T17:29:40.023 | 2011-03-21T17:29:40.023 | null | null | 2660 | null |
8576 | 5 | null | null | 0 | null | [Python](https://www.python.org/) ([Wikipedia page](https://en.wikipedia.org/wiki/Python_%28programming_language%29)) is a general purpose programming language designed for ease of use. It is a commonly used platform for machine learning. Two very popular threads concerned with using Python for statistics and machine learning are:
- Python as a statistics workbench
- Machine Learning using Python
Be aware that Python-based questions are frequently migrated between [Cross Validated](http://stats.stackexchange.com/) (CV) and [Stack Overflow](http://stackoverflow.com/) (SO). CV fields questions with statistical / machine learning content, and SO fields questions of programming and implementation. Python questions can be on topic here when they are centrally about statistics / ML while involving Python either as a critical part of the question or expected answer. However, questions that are just about how to use Python / why it works a certain way, etc., are off topic here. Many such questions can be on topic on SO if they have a [reproducible example](http://stackoverflow.com/help/mcve).
We maintain a [list](http://meta.stats.stackexchange.com/a/1090/) of Python resources available on the internet in our [Internet Support for Statistics Software](http://meta.stats.stackexchange.com/q/793/) meta.CV thread.
There is an extensive wiki for Python on SO [here](http://stackoverflow.com/tags/python/info).
| null | CC BY-SA 3.0 | null | 2011-03-21T17:40:28.580 | 2016-01-15T23:24:07.973 | 2016-01-15T23:24:07.973 | 7290 | -1 | null |
8577 | 4 | null | null | 0 | null | Python is a programming language commonly used for machine learning. Use this tag for any *on-topic* question that (a) involves `Python` either as a critical part of the question or expected answer, & (b) is not *just* about how to use `Python`. | null | CC BY-SA 4.0 | null | 2011-03-21T17:40:28.580 | 2019-10-02T16:00:15.173 | 2019-10-02T16:00:15.173 | 121522 | 2660 | null |
8578 | 5 | null | null | 0 | null | [Principal component analysis](https://en.wikipedia.org/wiki/Principal_component_analysis) is a technique to decompose an array of numerical data into a set of orthogonal vectors (uncorrelated linear combinations of the variables) called principal components. The first few principal components often suffice to grasp nearly all the multivariate variability of the data; therefore PCA is one of the data reduction / dimensionality reduction methods.
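As a hedged illustration of the idea (not part of the original excerpt), here is a minimal PCA sketch in Python/NumPy via an eigendecomposition of the sample covariance, on synthetic correlated data:

```python
import numpy as np

rng = np.random.default_rng(0)
# strongly correlated 2-D toy data: second coordinate mostly copies the first
x = rng.normal(size=(500, 2))
x[:, 1] = 0.9 * x[:, 0] + 0.1 * x[:, 1]

xc = x - x.mean(axis=0)                  # center the variables
cov = xc.T @ xc / (len(xc) - 1)          # sample covariance matrix
evals, evecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
order = np.argsort(evals)[::-1]
explained = evals[order] / evals.sum()   # share of variance per component
scores = xc @ evecs[:, order]            # principal component scores
```

On data like this, the first component captures nearly all the variance, and the component scores are uncorrelated by construction.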
| null | CC BY-SA 3.0 | null | 2011-03-21T17:46:43.223 | 2017-07-23T04:14:37.080 | 2017-07-23T04:14:37.080 | 7290 | 28666 | null |
8579 | 4 | null | null | 0 | null | Principal component analysis (PCA) is a linear dimensionality reduction technique. It reduces a multivariate dataset to a smaller set of constructed variables preserving as much information (as much variance) as possible. These variables, called principal components, are linear combinations of the input variables. | null | CC BY-SA 3.0 | null | 2011-03-21T17:46:43.223 | 2016-03-28T10:21:26.183 | 2016-03-28T10:21:26.183 | 3277 | 2660 | null |
8580 | 2 | null | 8573 | 2 | null | Here's how I have done this in matlab:
```
mu = [mu1,mu2];sigma = [sigma1,sigma2]; %group them
%suppose you get your test data from somewhere.
%for kicks, I put random data in:
xtest = randn(2*n,1); %OP example code has the labels in the data var; ack
deviance = bsxfun(@minus,xtest,mu); %tbc
deviance = bsxfun(@rdivide,deviance,sigma); %tbc
deviance = deviance .^ 2; %tbc
deviance = bsxfun(@plus,deviance,2*log(abs(sigma))); %tbc
deviance = deviance + log(2*pi); %not necessary here, actually;
[dummy,mini] = min(deviance,[],2); %find which class;
ResultOfClassifier = 2 * mini - 3; %is now a -1/1
```
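For reference, a hedged Python sketch (my own, not from the answer above) of the same equal-prior, equal-variance case: the Bayes rule reduces to a midpoint threshold at 5.5, and the theoretical error rate is $\Phi(-0.5) \approx 0.31$. Note the simulated draws must have the class means added, which the question's plain `randn(n,1)` calls omit:

```python
import math
import random

random.seed(1)
mu1, mu2, sigma = 5.0, 6.0, 1.0
boundary = (mu1 + mu2) / 2.0     # equal priors, equal variances: midpoint rule

# theoretical Bayes error: Phi(-|mu2 - mu1| / (2 * sigma)) = Phi(-0.5)
bayes_error = 0.5 * (1.0 + math.erf(-0.5 / math.sqrt(2.0)))

# Monte Carlo check; note the class means are added to the draws
n, errs = 20000, 0
for _ in range(n):
    if random.random() < 0.5:
        x, label = random.gauss(mu1, sigma), -1
    else:
        x, label = random.gauss(mu2, sigma), 1
    pred = -1 if x < boundary else 1
    errs += (pred != label)
mc_error = errs / n
```

The simulated error rate should settle near 0.31, not the 0.268 reported from the unshifted samples.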
| null | CC BY-SA 2.5 | null | 2011-03-21T18:19:28.450 | 2011-03-21T18:19:28.450 | null | null | 795 | null |
8581 | 1 | 8613 | null | 13 | 610 |
### Context:
I'm a Psychology PhD student. As with many psychology PhD students, I know how to perform various statistical analyses using statistical software, up to techniques such as PCA, classification trees, and cluster analysis.
But it's not really satisfying because though I can explain why I did an analysis and what the indicators mean, I can't explain how the technique works.
The real problem is that mastering statistical software is easy, but it is limited. To learn new techniques in articles requires that I understand how to read mathematical equations. At present I couldn't calculate eigenvalues or K-means. Equations are like a foreign language to me.
### Question:
- Is there a comprehensive guide that helps with understanding equations in journal articles?
---
### Edit:
I thought the question would be more self explanatory: above a certain complexity, statistical notation becomes gibberish for me; let's say I would like to code my own functions in R or C++ to understand a technique but there's a barrier. I can't transform an equation into a program.
And really: I don't know the situation in US doctoral schools, but in mine (France), the only courses I can follow are about some 16th century literary movement...
| Transition from using statistical software to understanding mathematical equations? | CC BY-SA 3.0 | null | 2011-03-21T18:33:17.083 | 2017-11-27T13:03:22.367 | 2020-06-11T14:32:37.003 | -1 | 3827 | [
"references",
"notation",
"software"
] |
8583 | 1 | 8584 | null | 17 | 18550 | I have conducted an analysis in which I have modeled different variance components. When reporting the results in a table, it is much more concise to report standard deviations instead of variances.
So, this brings me to the question - is there ever a reason to report variance instead of standard deviation? Is it ever more appropriate to report one over the other?
| When would it be appropriate to report variance instead of standard deviation? | CC BY-SA 2.5 | null | 2011-03-21T19:51:20.863 | 2021-02-17T00:02:48.980 | null | null | 2750 | [
"standard-deviation",
"variance",
"tables"
] |
8584 | 2 | null | 8583 | 20 | null | If you report the mean, then it is more appropriate to report the standard deviation as it is expressed in the same units. Think about dimensional homogeneity in physics.
Moreover, it is easier for the reader to consider confidence intervals (for large n, in order to use the Central Limit Theorem and consider a normal distribution) if the standard deviation is provided rather than the variance.
However, you may consider reporting the variance if you are interested in comparing variance and bias, or giving "different variance components", since the total variance is the sum of the intra and inter variances, while the standard deviations do not sum up.
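A small numeric check of that last point, sketched in Python with toy numbers (population-style variances, equal group sizes): the within- and between-group variances add up to the total variance, while the corresponding standard deviations do not:

```python
# two equal-sized groups with different means
g1 = [4.0, 5.0, 6.0]
g2 = [9.0, 10.0, 11.0]

def pvar(xs):
    # population-style variance (divide by N)
    m = sum(xs) / len(xs)
    return sum((v - m) ** 2 for v in xs) / len(xs)

within = 0.5 * (pvar(g1) + pvar(g2))               # average within-group variance
group_means = [sum(g1) / len(g1), sum(g2) / len(g2)]
between = pvar([group_means[0]] * 3 + [group_means[1]] * 3)
total = pvar(g1 + g2)                              # variance of the pooled data
```

Here `total` equals `within + between` exactly, but `sqrt(total)` is not `sqrt(within) + sqrt(between)`.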
| null | CC BY-SA 2.5 | null | 2011-03-21T19:57:14.420 | 2011-03-21T19:57:14.420 | null | null | 1351 | null |
8585 | 2 | null | 8583 | 10 | null | This is similar (but not equivalent). Nonetheless, standard deviation is expressed in the same units as the variable whereas the units of the variance are those of the variable to the power two. This makes standard deviation easier to interpret.
| null | CC BY-SA 4.0 | null | 2011-03-21T19:57:27.323 | 2021-02-17T00:02:48.980 | 2021-02-17T00:02:48.980 | 311558 | 3019 | null |
8586 | 1 | 8589 | null | 9 | 387 | I have a few hundred estimates of a parameter calculated from two different models and I would like to know if these parameters have different variances.
What is a straightforward test for comparing the variances of these parameters? (straightforward meaning, least assumptions).
| How can I test $H_0:\sigma^2_1=\sigma^2_2$? | CC BY-SA 2.5 | null | 2011-03-21T20:22:14.803 | 2011-04-03T06:51:57.360 | 2011-03-21T21:34:42.577 | 2750 | 2750 | [
"hypothesis-testing",
"variance",
"mean"
] |
8589 | 2 | null | 8586 | 8 | null | For comparing variances, Wilcox suggests a percentile bootstrap method. See [chapter 5.5.1 of 'Introduction to Robust Estimation and Hypothesis Testing'](http://books.google.com/books?id=_tAJr4ooOM8C&lpg=PR1&dq=Introduction%20to%20Robust%20Estimation%20and%20Hypothesis%20Testing&pg=PA170#v=onepage&q&f=false). This is available as `comvar2` from [the wrs package](https://r-forge.r-project.org/R/?group_id=468) in R.
edit: to find the number of bootstrap differences to trim from each side for different values of $\alpha$, one would perform a Monte Carlo study, as suggested by Wilcox. I have a quick and dirty one here in Matlab (duck from thrown shoes):
```
randn('state',0); %to make the results replicable.
alphas = [0.001,0.005,0.01,0.025,0.05,0.10,0.15,0.20,0.25,0.333];
nreps = 4096;
nsizes = round(2.^ (4:0.5:9));
nboots = 599;
cutls = nan(numel(nsizes),numel(alphas));
for ii=1:numel(nsizes)
n = nsizes(ii);
imbalance = nan(nreps,1);
for jj=1:nreps
x1 = randn(n,1);x2 = randn(n,1);
%make bootstrap samples;
x1b = x1(ceil(n * rand(n,nboots)));
x2b = x2(ceil(n * rand(n,nboots)));
%compute stdevs
sig1 = std(x1b,1);sig2 = std(x2b,1);
%compute difference in stdevs
Dvar = (sig1.^2 - sig2.^2);
%compute the minimum of {the # < 0} and {the # > 0}
%in (1-alpha) of the cases you want this minimum to match
%your l number; then let u = 599 - l + 1
imbalance(jj,1) = min(sum(Dvar < 0),sum(Dvar > 0));
end
imbalance = sort(imbalance);
cutls(ii,:) = interp1(linspace(0,1,numel(imbalance)),imbalance(:)',alphas,'nearest');
end
%plot them;
lh = loglog(nsizes(:),cutls + 1);
legend(lh,arrayfun(@(x)(sprintf('alpha = %g',x)),alphas,'UniformOutput',false))
ylabel('l + 1');
xlabel('sample size, n_m');
```
I get the rather unhelpful plot:

A little bit of hackery indicates that a model of the form $l + 0.5 = e^{5.18} \alpha^{0.94} n^{0.067}$ fits my Monte Carlo simulations fairly well, but they do not give the same results that Wilcox quotes in his book. You might be better served running these experiments yourself at your preferred $\alpha$.
edit I ran this experiment again, using many more replicates ($2^{18}$) per sample size. Here's a table of the empirical values of $l$. The first row is a NaN, then the alpha (type I rate). Following that, the first column is the size of the samples, $n$, then the empirical values of $l$. (I would expect that as $n \to \infty$ we would have $l \to 599 \alpha /2$)
```
NaN,0.001,0.005,0.01,0.025,0.05,0.1,0.15,0.2,0.25,0.333
16,0,0,1,4,9,22,35,49,64,88
23,0,0,1,4,10,23,37,51,66,91
32,0,0,1,4,10,24,38,52,67,92
45,0,0,1,5,11,25,39,54,69,94
64,0,0,2,5,12,26,41,55,70,95
91,0,1,2,6,13,27,42,56,71,96
128,0,1,2,6,13,28,42,58,72,97
181,0,1,2,6,13,28,43,58,73,98
256,0,1,2,6,14,28,43,58,73,98
362,0,1,2,7,14,29,44,59,74,99
512,0,1,2,7,14,29,44,59,74,99
```
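For readers without Matlab, here is a rough Python sketch of the plain percentile-bootstrap idea for a difference of variances (a simplification for illustration, not Wilcox's adjusted `comvar2` procedure):

```python
import random

random.seed(0)
n, nboot = 100, 599
x1 = [random.gauss(0.0, 1.0) for _ in range(n)]
x2 = [random.gauss(0.0, 2.0) for _ in range(n)]    # genuinely larger variance

def pvar(xs):
    m = sum(xs) / len(xs)
    return sum((v - m) ** 2 for v in xs) / len(xs)

diffs = []
for _ in range(nboot):
    b1 = [random.choice(x1) for _ in range(n)]     # resample with replacement
    b2 = [random.choice(x2) for _ in range(n)]
    diffs.append(pvar(b1) - pvar(b2))
diffs.sort()

# crude central 95% percentile interval for sigma1^2 - sigma2^2
lo, hi = diffs[14], diffs[-15]
variances_differ = not (lo <= 0.0 <= hi)           # 0 outside -> reject equality
```

With variances 1 vs 4, the interval should sit well below zero; Wilcox's correction replaces the fixed 15-from-each-side trim with calibrated cutoffs like the ones tabulated above.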
| null | CC BY-SA 2.5 | null | 2011-03-21T21:36:33.820 | 2011-03-24T17:17:31.463 | 2011-03-24T17:17:31.463 | 795 | 795 | null |
8590 | 1 | null | null | 13 | 25035 | I come from the social sciences, where p < 0.05 is pretty much the norm, with p < 0.1 and p < 0.01 also showing up, but I was wondering: what fields of study, if any, use lower p-values as a common standard?
| Examples of studies using p < 0.001, p < 0.0001 or even lower p-values? | CC BY-SA 2.5 | null | 2011-03-21T21:39:32.833 | 2021-05-12T07:15:55.597 | null | null | 3582 | [
"statistical-significance",
"p-value"
] |
8591 | 1 | null | null | 7 | 5036 | How do probability distributions of continuous random variables transform under functions?
I.e. I have a random variable, X, drawn from a normal distribution with mean 0 and variance 1. What is the probability distribution associated with sin(X)?

More generally, what are the rules for transforming continuous random variables? If we know the PDF and CDF of two random variables $X, Y$, what is the PDF/CDF of $Z = XY$? How about $Z = X^Y$? How about $Z = \sin(X+Y)+3$?
Are there Computer Algebra Systems which can compute this symbolically? Is this possible generally? If not, for what class of probability distributions and functions is it possible?
Note: excuse the plot. These are obviously noisy histograms and obviously not to scale (areas under the curves do not match and so could not both sum to one). Hopefully the plot does get the point across though. Given the blue distribution describing X, I want the red distribution describing sin(X).
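For context on what an answer might look like: for a monotone transformation $g$, the change-of-variables rule gives $f_Y(y) = f_X(g^{-1}(y)) \left| \tfrac{d}{dy} g^{-1}(y) \right|$; for a non-monotone function like $\sin$, one sums that contribution over every branch of the inverse. Some computer algebra systems (e.g. Mathematica's `TransformedDistribution`, or SymPy's `sympy.stats` module) can handle a subset of such transformations symbolically. Failing that, a Monte Carlo approximation is often the easiest route; a hedged Python sketch for $\sin(X)$ with $X \sim N(0,1)$:

```python
import math
import random

random.seed(0)
# Monte Carlo approximation to the distribution of sin(X), X ~ N(0, 1)
ys = [math.sin(random.gauss(0.0, 1.0)) for _ in range(100000)]

def ecdf(t):
    # empirical CDF: P(sin(X) <= t)
    return sum(y <= t for y in ys) / len(ys)

# sin is odd and X is symmetric about 0, so sin(X) is symmetric about 0
p_half = ecdf(0.0)
```

The samples are confined to [-1, 1] and split evenly around 0, matching the red histogram's shape.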
| Operations on probability distributions of continuous random variables | CC BY-SA 3.0 | null | 2011-03-21T16:57:30.150 | 2013-09-30T19:50:09.047 | 2020-06-11T14:32:37.003 | -1 | 3830 | [
"distributions",
"probability"
] |
8593 | 2 | null | 8590 | 9 | null | My opinion is that it does (and should) not depend on the field of study. For example, you may well work at a lower significance level than $p<0.001$ if, for example, you are trying to replicate a study with historical or well-established results (I can think of several studies on the [Stroop effect](http://en.wikipedia.org/wiki/Stroop_effect), which had led to some controversies in the past few years). That amounts to consider a lower "threshold" within the classical Neyman-Pearson framework for testing hypothesis. However, statistical and practical (or substantive) significance is another matter.
Sidenote.
The "star system" seems to have dominated scientific inquiries as early as the 70's, but see The Earth Is Round (p < .05), by J. Cohen (American Psychologist, 1994, 49(12), 997-1003), despite the fact that what we often want to know is given the data I have observed, what is the probability that $H_0$ is true? Anyway, there's also a nice discussion on "[Why P=0.05?](http://www.jerrydallal.com/LHSP/p05.htm)", by Jerry Dallal.
| null | CC BY-SA 2.5 | null | 2011-03-21T22:14:36.940 | 2011-03-21T22:14:36.940 | null | null | 930 | null |
8595 | 2 | null | 4086 | -2 | null | You have 5 years of data and 40 observations per year. Why don't you post them on the web and allow us to actually answer this at ground zero rather than philosophizing at 500 miles high?
I look forward to the numbers. We have seen data like this before, for example the number of customers who trade in their time-sharing week on a weekly basis. The series each year starts at zero and accumulates to a limiting value.
8596 | 2 | null | 8590 | 8 | null | It might be rare for anyone to use a pre-specified alpha level lower than, say, 0.01, but it is not nearly as rare that people claim an implied alpha of less than 0.01 in the mistaken belief that an observed P value of less than 0.01 is the same as a Neyman-Pearson alpha of less than 0.01.
Fisher's P values are not the same as, or interchangeable with, Neyman-Pearson error rates. $P = 0.0023$ does not mean $\alpha = 0.0023$ unless one has decided to use $0.0023$ as the critical level for significance when the experiment is designed. If you had taken $P = 0.05$ as significant, then $P = 0.0023$ means that there is a $0.05$ probability of a false positive claim.
Have a look at [Hubbard et al. Confusion over Measures of Evidence (p's) versus Errors (α's) in Classical Statistical Testing. The American Statistician (2003) vol. 57 (3)](http://www.jstor.org/stable/30037265)
| null | CC BY-SA 3.0 | null | 2011-03-21T23:26:06.033 | 2013-04-07T15:18:05.777 | 2013-04-07T15:18:05.777 | 7290 | 1679 | null |
8598 | 1 | 8602 | null | 5 | 6379 | I'm working on a problem as follows for a course that I'm auditing:
>
Suppose a 95% symmetric t-interval is applied to estimate a mean, but the
sample data are non-normal. Then the probability that the confidence interval covers
the mean is not necessarily equal to 0.95.
Use a Monte Carlo experiment to
estimate the coverage probability of
the t-interval for random samples of
$\chi^2(2)$ data with sample size $n = 20$.
Here is the current state of my R code:
```
alpha = 0.05;
n = 20;
m = 1000;
UCL = numeric(m);
LCL = numeric(m);
for(i in 1:m)
{
x = rchisq(n, 2);
LCL[i] = mean(x) - qt(alpha / 2, lower.tail = FALSE) * sd(x);
UCL[i] = mean(x) + qt(alpha / 2, lower.tail = FALSE) * sd(x);
}
# This line below is wrong...
mean(LCL > 0 & UCL < 0);
```
The problem is that the result is $0$. Am I approaching this question incorrectly? What exactly does coverage probability mean...?
| Monte Carlo experiment to estimate coverage probability | CC BY-SA 4.0 | null | 2011-03-21T23:43:26.520 | 2019-01-28T07:48:16.197 | 2019-01-28T07:48:16.197 | 128677 | null | [
"r",
"self-study",
"monte-carlo",
"simulation"
] |
8599 | 2 | null | 6498 | -3 | null | An ARIMA model is simply a weighted average. It answers the double question:
- How many periods (k) should I use to compute a weighted average?
- Precisely what are the k weights?
It answers the maiden's prayer of determining how to adjust to previous values (and previous values alone) in order to project the series (which is really being driven by unspecified causal variables). Thus an ARIMA model is a poor man's causal model.
| null | CC BY-SA 2.5 | null | 2011-03-21T23:53:18.980 | 2011-03-21T23:53:18.980 | null | null | 3382 | null |
8601 | 2 | null | 8598 | 1 | null | You have several issues with your code:
- Your mean(UCL < 0 & LCL > 0) is decidedly strange, and in particular always returns 0 because UCL comes out positive, so the condition UCL < 0 is never true. A $\chi^2$ distribution takes only positive values.
- (since solved) You have UCL less than LCL, which is a slightly odd use of upper and lower
- You do not need semicolons in R unless you want more than one instruction on the same line
- (Wrong - as pointed out by mark999) You are dividing by sqrt(n). This wrongly narrows your confidence intervals: it is for finding the standard error of the mean, but you care about the original distribution.
- The question tells you to use "t-interval" but you are using a normal distribution. You might try typing ?qt into R
Try this
```
alpha = 0.05
n = 20
m = 1000
UCL = numeric(m)
LCL = numeric(m)
for(i in 1:m)
{
x = rchisq(n, 2) # compare with x = rnorm(n) + 2
LCL[i] = mean(x) - qt(alpha / 2, df=n-1, lower.tail = FALSE)*sd(x)/sqrt(n)
UCL[i] = mean(x) + qt(alpha / 2, df=n-1, lower.tail = FALSE)*sd(x)/sqrt(n)
}
mean(LCL < 2 & UCL > 2)
```
| null | CC BY-SA 2.5 | null | 2011-03-22T00:36:10.603 | 2011-03-22T07:20:49.733 | 2011-03-22T07:20:49.733 | 2958 | 2958 | null |
8602 | 2 | null | 8598 | 6 | null | I disagree with Henry - I think you should be dividing by sqrt(n), because it's a confidence interval for the mean. You also have to add a `df = n-1` argument to your qt calls.
And the last line should be `mean(LCL < 2 & UCL > 2)`. This is because 2 is the true mean, and you're interested in the condition that 2 is in the confidence interval.
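For comparison, a hedged Python analogue of this simulation (my own sketch): $\chi^2(2)$ is the exponential distribution with mean 2, so the true mean being covered is 2; the 0.975 t quantile for 19 df is hard-coded to avoid external libraries:

```python
import math
import random

random.seed(0)
n, m = 20, 2000
tcrit = 2.093          # 0.975 quantile of t with 19 df (hard-coded)
covered = 0
for _ in range(m):
    # chi^2(2) == exponential with mean 2, via inverse-CDF sampling
    x = [-2.0 * math.log(1.0 - random.random()) for _ in range(n)]
    xbar = sum(x) / n
    s = math.sqrt(sum((v - xbar) ** 2 for v in x) / (n - 1))
    half = tcrit * s / math.sqrt(n)
    covered += (xbar - half <= 2.0 <= xbar + half)
coverage = covered / m
```

Because the parent distribution is skewed and n = 20 is small, the empirical coverage comes out noticeably below the nominal 0.95, which is the point of the exercise.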
| null | CC BY-SA 2.5 | null | 2011-03-22T02:49:38.337 | 2011-03-22T02:49:38.337 | null | null | 3835 | null |
8603 | 2 | null | 8566 | 0 | null | Naive Bayes and Logistic Regression (Classification) are both linear classifiers. If you remove all misclassified instances, then you will allow an infinite number of separators to have 0 training error. In the case of logistic regression, this translates to your information matrix being singular (the information matrix must be inverted at each iteration of GLM).
I don't know if that's what you mean by overfit.
| null | CC BY-SA 2.5 | null | 2011-03-22T02:53:50.613 | 2011-03-22T02:53:50.613 | null | null | 3834 | null |
8604 | 1 | null | null | 54 | 36528 | I admit I'm relatively new to propensity scores and causal analysis.
One thing that's not obvious to me as a newcomer is how the "balancing" using propensity scores is mathematically different from what happens when we add covariates in a regression? What's different about the operation, and why is it (or is it) better than adding subpopulation covariates in a regression?
I've seen some studies that do an empirical comparison of the methods, but I haven't seen a good discussion relating the mathematical properties of the two methods and why PSM lends itself to causal interpretations while including regression covariates does not. There also seems to be a lot of confusion and controversy in this field, which makes things even more difficult to pick up.
Any thoughts on this or any pointers to good resources/papers to better understand the distinction? (I'm slowly making my way through Judea Pearl's causality book, so no need to point me to that)
| How are propensity scores different from adding covariates in a regression, and when are they preferred to the latter? | CC BY-SA 2.5 | null | 2011-03-22T03:41:20.293 | 2022-03-31T17:38:50.513 | null | null | 3836 | [
"regression",
"multivariate-analysis",
"causality",
"propensity-scores"
] |
8605 | 1 | 8612 | null | 26 | 57156 | I would like to perform column-wise normalization of a matrix in R. Given a matrix `m`, I want to normalize each column by dividing each element by the sum of the column. One (hackish) way to do this is as follows:
```
m / t(replicate(nrow(m), colSums(m)))
```
Is there a more succinct/elegant/efficient way to achieve the same task?
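For what it's worth, one idiomatic base-R option is `sweep(m, 2, colSums(m), "/")`. The same idea expressed in Python/NumPy (an illustration, assuming a numeric matrix) is a one-liner thanks to broadcasting:

```python
import numpy as np

m = np.array([[1.0, 2.0],
              [3.0, 6.0]])

# broadcasting: the row vector of column sums divides every row of m
normalized = m / m.sum(axis=0)
```

After this, every column of `normalized` sums to 1.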
| Column-wise matrix normalization in R | CC BY-SA 2.5 | null | 2011-03-22T04:17:39.163 | 2014-04-02T04:30:31.847 | 2011-03-23T20:16:41.383 | 1537 | 1537 | [
"r",
"data-transformation",
"normalization",
"matrix"
] |
8606 | 1 | null | null | 2 | 271 | I am working on a stopping rule for an optimization algorithm that produces an upper bound and lower bound for the objective value of an optimization problem. In my case, the lower bound is deterministic, but the upper bound is an estimate derived from $N$ data points $UB_1, UB_2... UB_N$ with mean $\widehat{UB}$ and standard deviation $\hat{\sigma}_{UB}$, where:
$$\widehat{UB} = \frac{1}{N} \sum_{i=1}^N UB_i$$
$$\hat{\sigma}_{UB}^2 = \frac{1}{N} \sum_{i=1}^{N} (UB_i - \widehat{UB})^2$$
Theoretically speaking, the algorithm needs to stop when the upper bound is equal to the lower bound. The stopping rule that I have in mind is to stop when $\frac{\widehat{UB}}{LB} = 1$. More formally, I would like to test:
$$H_0: \frac{\widehat{UB}}{LB} = 1$$
$$H_A: \frac{\widehat{UB}}{LB} > 1$$
And stop when I fail to reject $H_0$.
Broadly speaking, I would like to control for both type I and type II errors. That is, I would like to specify the probability that I am stopping too early (i.e. $H_0$ is accepted given $H_A$ is true) and also specify that I am stopping too late (i.e. $H_0$ is rejected given that $H_0$ is true).
I'm wondering what the exact formulation should look like in this case? Or what type of test to do. I'm open to any ideas you may have, including a different test altogether so long as we can specify the probabilities I described above in some way.
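One natural formulation is a one-sided one-sample test of $H_0$ at each iteration, with the caveat that repeatedly applying a fixed-level test inflates the overall type I error (the sequential-testing / alpha-spending literature addresses exactly this), and that controlling type II error additionally requires a power calculation against a minimal detectable gap. A hedged Python sketch of the per-iteration test, using a normal approximation (the function name and thresholds are illustrative assumptions, not a standard API):

```python
import math

def should_stop(ub_samples, lb, zcrit=1.645):
    # One-sided test of H0: E[UB] = lb against H_A: E[UB] > lb, using a
    # normal approximation (assumes N is large enough for the CLT).
    # Stop when H0 is NOT rejected at the chosen level (zcrit = 1.645 ~ 5%).
    n = len(ub_samples)
    mean = sum(ub_samples) / n
    sd = math.sqrt(sum((u - mean) ** 2 for u in ub_samples) / (n - 1))
    z = (mean - lb) / (sd / math.sqrt(n))
    return z <= zcrit

# gap between bounds clearly positive: do not stop yet
stop1 = should_stop([1.10 + 0.01 * (i % 5) for i in range(100)], lb=1.0)
# gap statistically indistinguishable from zero: stop
stop2 = should_stop([1.00 + 0.02 * ((i % 5) - 2) for i in range(100)], lb=1.0)
```

A group-sequential design would replace the fixed `zcrit` with boundaries that spend the total alpha across the repeated looks.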
| Designing a stopping rule using a hypothesis test | CC BY-SA 3.0 | null | 2011-03-22T04:28:52.787 | 2011-06-29T01:37:30.327 | 2011-06-28T17:41:17.213 | null | 3572 | [
"hypothesis-testing",
"optimization"
] |
8607 | 2 | null | 8541 | 6 | null | Short answer: Gibbs or Metropolis-Hastings-within-Gibbs (MCMC) should work just fine on joint distributions and full conditional distributions that are mixed products of pmfs and pdfs. If you're doing MCMC, just make sure that sampling from the candidate distributions gives you values in the right domain.
Long answer:
The most comprehensive way we know of to account for probability is to use measure theory. In measure-theoretic probability theory, there is little difference between pmfs and pdfs.
In measure-theoretic probability, pmfs and pdfs are both called "densities." More precisely, they are both Radon-Nikodym (pronounced "RahDOHN-NickohDEEM") derivatives. The only difference is that a pmf is a derivative with respect to a counting measure, and a pdf is a derivative with respect to Lebesgue ("LehBAYG") measure (i.e. n-dimensional volume). When you integrate under a pmf, you integrate with respect to counting measure: you sum over part of its domain. When you integrate under a pdf, you integrate with respect to Lebesgue measure. In every case that Bayesians tend to care about, the latter is equivalent to regular old Riemann integration.
Because pmfs and pdfs are the same kind of thing, they obey the same laws, such as Bayes' law and the product rule. Thus, you can use both pmfs and pdfs in a model and get a meaningful joint density and meaningful conditional densities. But these densities won't necessarily be pdfs or pmfs. In general, they will be Radon-Nikodym derivatives with respect to products of counting and Lebesgue measures.
Riemann integration can't handle Radon-Nikodym derivatives with respect to mixed spaces like that, but Lebesgue integration can. Sampling methods approximately carry out Lebesgue integration, so they just work.
You might be wondering, then, what doesn't just work. Well, the only thing I've mentioned so far are products of densities and integrating under those products. A random variable with a distribution that is the sum of a distribution with a pmf and a distribution with a pdf can cause problems.
An easy way to get a sum like that is to use an "if." For example, say you have this model:
$X \sim \mathrm{Normal}(0,1)$
$Y \sim \mathrm{Geometric}(1/2)$
$B \sim \mathrm{Bernoulli}(1/2)$
and you define a new random variable $Z$ by $Z = X$ if $B = 0$, otherwise $Z = Y$. The distribution of $Z$ can't be represented by a pmf or a pdf: its cdf has a discontinuity at each positive integer.
You can still approximate $Z$'s distribution using samples. You can even use $Z$ as a parameter for another random variable's distribution, and answer conditional queries with a Gibbs sampler. But conditioning on $Z$ in a query will not work without measure-theoretic tools. Also, you might find your current tools, like the density version of Bayes' law, inadequate. (I haven't looked into it enough to say when.)
Another easy way to get a non-pmf/non-pdf distribution is to truncate a random variable; something like $W = X$ if $X \le 0$, otherwise $W = 0$. (Note that truncating $X$'s distribution won't give you $W$'s, because $P[W=0] = 1/2$. Truncated distributions and truncated random variables are very different things.) $W$ would give you the same problems as $Z$.
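To make the $Z$ example concrete, here is a small Python sketch (assuming the geometric is supported on $\{1, 2, \dots\}$; conventions vary) that samples $Z$ and checks its point mass at 1, exactly the kind of atom that rules out a pdf:

```python
import random

random.seed(0)

def draw_z():
    # B ~ Bernoulli(1/2); Z = X ~ N(0,1) if B = 0, else Z = Y ~ Geometric(1/2)
    if random.random() < 0.5:
        return random.gauss(0.0, 1.0)
    k = 1
    while random.random() >= 0.5:
        k += 1
    return k

zs = [draw_z() for _ in range(200000)]

# Z has a genuine point mass: P(Z = 1) = P(B=1) * P(Y=1) = 0.25,
# whereas a continuous draw hits any single point with probability 0
p_exactly_one = sum(z == 1 for z in zs) / len(zs)
```

The empirical mass at 1 sits near 0.25, and similar atoms appear at every positive integer, matching the cdf discontinuities described above.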
Other things off the cuff:
You might have a hard time getting good n-dimensional KDEs that have discrete axes, if you want them.
If you're doing Metropolis-Hastings, you might have a hard time coming up with candidate pmfs for categorical random variables.
--
If you want another way to think about it, consider what would happen if you transformed your model. Convert your pmfs into pdfs and floor the corresponding discrete random variables before using them as parameters in other random variables' distributions. The joint distribution that includes the un-floored random variables (but not the floored ones) obviously has a pdf.
The easiest way to get an equivalent model is to define each new pdf $d$ so that, if its corresponding pmf is $m$, $d(x) = m(\lfloor x \rfloor)$. Now suppose you created an MCMC sampler from the transformed model. It shouldn't take too long to convince yourself that the fractional part of a candidate sample for any originally discrete random variable contributes nothing to the accept/reject decision.
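As a small check of this claim (a Python sketch; the geometric pmf is borrowed from the earlier example, and the candidate points are arbitrary):

```python
import math

def m(k):
    # Geometric(1/2) pmf on {1, 2, ...}, borrowed from the example above
    return 0.5 ** k if k >= 1 else 0.0

def d(x):
    # Continuous density d(x) = m(floor(x)) standing in for the pmf m
    return m(math.floor(x))

# The fractional part of a candidate point contributes nothing to d:
print(d(2.25), d(2.75), m(2))           # 0.25 0.25 0.25

# ... so a Metropolis accept/reject ratio depends only on the floors:
print(d(3.9) / d(1.1) == m(3) / m(1))   # True
```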
| null | CC BY-SA 2.5 | null | 2011-03-22T04:41:00.800 | 2011-03-22T04:50:36.987 | 2011-03-22T04:50:36.987 | 3831 | 3831 | null |
8608 | 1 | 8647 | null | 2 | 1325 | I am reading this example, but could you explain it a little more? I don't get the part where it says "then we Normalize"... I know
```
P(sun) * P(F=bad|sun) = 0.7*0.2 = 0.14
P(rain)* P(F=bad|rain) = 0.3*0.9 = 0.27
```
But where do they get
```
W P(W | F=bad)
-----------------
sun 0.34
rain 0.66
```



Example [from](http://inst.eecs.berkeley.edu/~cs188/fa10/slides/FA10%20cs188%20lecture%2018%20--%20decision%20diagrams%20%286PP%29.pdf)
| Decision network example | CC BY-SA 2.5 | null | 2011-03-22T04:58:01.640 | 2011-04-29T00:58:38.457 | 2011-04-29T00:58:38.457 | 3911 | 3681 | [
"probability",
"bayesian",
"conditional-probability"
] |
8610 | 2 | null | 8604 | 20 | null | The short answer is that propensity scores are not any better than the equivalent ANCOVA model, particularly with regard to causal interpretation.
Propensity scores are best understood as a data reduction method. They are an effective means to reduce many covariates into a single score that can be used to adjust an effect of interest for a set of variables. In doing so, you save degrees of freedom by adjusting for a single propensity score rather than multiple covariates. This presents a statistical advantage, certainly, but nothing more.
>
One question which may arise when using regression adjustment with
propensity scores is whether there is any gain in using the propensity
score rather than performing a regression adjustment with all of the
covariates used to estimate the propensity score included in the
model. Rosenbaum and Rubin showed that the "point estimate of the
treatment effect from an analysis of covariance adjustment for
multivariate X is equal to the estimate obtained from a univariate
covariance adjustment for the sample linear discriminant based on X,
whenever the same sample covariance matrix is used for both the
covariance adjustment and the discriminant analysis". Thus, the
results from both methods should lead to the same conclusions.
However, one advantage to performing the two-step procedure is that
one can fit a very complicated propensity score model with interactions
and higher order terms first. Since the goal of this propensity score
model is to obtain the best estimated probability of treatment
assignment, one is not concerned with over-parameterizing this model.
From:
PROPENSITY SCORE METHODS FOR BIAS REDUCTION IN THE COMPARISON OF A TREATMENT TO A NON-RANDOMIZED CONTROL GROUP
D'Agostino (quoting Rosenbaum and Rubin)
D'Agostino, R.B. 1998. Propensity score matching for bias reduction in the comparison of a treatment to a non-randomized control group. Statistics in Medicine 17: 2265–2281.
| null | CC BY-SA 3.0 | null | 2011-03-22T05:19:25.640 | 2016-01-27T15:06:15.753 | 2016-01-27T15:06:15.753 | 485 | 485 | null |
8611 | 1 | 8638 | null | 4 | 805 | I have two columns of data which are recorded in different configurations, and I want to show users that these two records vary (the data are times in seconds). The datasets are not the same size, as shown below. The end users are all experienced in stats and math. My question is: how can I easily plot a graph, and which kind?
First dataset:
```
0.000223
0.000206
0.000223
0.000193
0.001321
0.000223
...
```
Second dataset:
```
0.076975003
0.076724999
0.076600999
0.0766
0.050742
0.000397
0.000642
0.000522
0.000772
0.000522
0.076725997
0.158800006
0.159801006
0.159426004
...
```
Some of the graphs I came across: boxplot, meantime.
| How to show differences between two univariate datasets graphically? | CC BY-SA 2.5 | null | 2011-03-22T05:21:30.313 | 2011-03-22T17:53:05.827 | 2011-03-22T16:38:59.353 | 919 | 3270 | [
"data-visualization",
"matlab",
"gnuplot"
] |
8612 | 2 | null | 8605 | 45 | null | This is what sweep and scale are for.
```
sweep(m, 2, colSums(m), FUN="/")
scale(m, center=FALSE, scale=colSums(m))
```
Alternatively, you could use recycling, but you have to transpose it twice.
```
t(t(m)/colSums(m))
```
Or you could construct the full matrix you want to divide by, like you did in your question. Here's another way you might do that.
```
m/colSums(m)[col(m)]
```
And notice also caracal's addition from the comments:
```
m %*% diag(1/colSums(m))
```
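For comparison, the same column normalization can be sketched outside R, e.g. with NumPy broadcasting (my addition, not part of the original answer):

```python
import numpy as np

m = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Broadcasting divides each column by its sum, like sweep(m, 2, colSums(m), "/")
result = m / m.sum(axis=0)
print(result.sum(axis=0))   # each column of result sums to 1
```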
| null | CC BY-SA 3.0 | null | 2011-03-22T06:07:21.883 | 2014-04-02T04:30:31.847 | 2014-04-02T04:30:31.847 | 3601 | 3601 | null |
8613 | 2 | null | 8581 | 10 | null |
### Overview:
- My impression is that your experience is common to a lot of students in the social sciences.
- The starting point is a motivation to learn.
- You can go down either self-taught or formal instruction routes.
### Formal instruction:
There are many options in this regard.
You might consider a masters in statistics or just taking a few subjects in a statistics department.
However, you'd probably want to check that you have the necessary mathematical background. Depending on the course, you may find that you need to revisit pre-calculus mathematics, and perhaps some material such as calculus and linear algebra before tackling university-level mathematically rigorous statistics subjects.
### Self-taught
Alternatively, you could go down the self-taught route.
There are heaps of good resources on the internet.
In particular, reading and doing exercises in mathematics text books is important, but probably not sufficient. It's important to listen to instructors talking about mathematics and watch them solve problems.
It's also important to think about your mathematical goals and the mathematical prerequisites required to achieve those goals. If equations are like a foreign language to you, then you may find that you need to first study elementary mathematics.
I've prepared a few resources aimed at assisting people who are making the transition from using statistical software to understanding the underlying mathematics.
- Videos: List of Free Online Mathematics Videos - This post also provides some guidance regarding what would be an appropriate mathematical sequence starting from pre-calculus and working through calculus, linear algebra, probability, and mathematical statistics.
Also see this question on mathematical statistics videos.
- Reading and pronunciation - One of the first challenges is learning how to pronounce and read mathematical equations. I wrote two posts, one on pronunciation and another on tips for reading mathematics for the non-mathematician.
- Writing - Learning to write mathematics can help with reading mathematics. Try learning LaTeX and check out some of the guides on mathematics in LaTeX
- Books: When it comes to learning mathematics, I think it is worth investing in a few good textbooks. However, there are a lot of free online options these days
| null | CC BY-SA 3.0 | null | 2011-03-22T06:18:34.707 | 2017-11-27T13:03:22.367 | 2017-11-27T13:03:22.367 | 22047 | 183 | null |
8614 | 1 | 8622 | null | 1 | 2388 | I am trying to determine whether the noise from a compass sensor is time-correlated (it is supposed to be!), and for that I tried to compute the cross-correlation between the noise signal and the sampling times using Matlab's xcorr() function. However, I am getting a random value indicating that it is not time-correlated, whereas in reality it has to be that way.
Am I doing something wrong? I am not able to find references to determine if a signal is time-correlated, so any ideas would be greatly appreciated!
Thanks!
Imelza
| How to determine if a given signal is time-correlated? | CC BY-SA 2.5 | null | 2011-03-22T06:54:59.047 | 2015-01-25T15:09:19.667 | null | null | null | [
"time-series",
"matlab",
"cross-correlation"
] |
8615 | 2 | null | 8606 | 3 | null | You should take the difference between upper and lower; then a t-test might be more suitable than with ratios...
Anyway, if the probability of wrongly rejecting the null is $1-\alpha$, the probability of wrongly accepting it won't be $\alpha$; if you want to control the error of wrongly accepting the null, you have to specify an alternative, or at least assume it is separated from the null, and use the "extreme" point of the alternative to compute the threshold at the level you want.
I really like this quote from Tukey:
>
All we know about the world teaches us that the effects of A and B
are always different---in some decimal
place---for any A and B. Thus asking
"are the effects different?" is
foolish.
Note that you should work on distinguishing which quantities are parameters (deterministic) and which are random variables in your question. In addition, you should explain a bit more how you obtain these upper and lower bounds; that might help you get a better answer than mine.
| null | CC BY-SA 2.5 | null | 2011-03-22T07:17:13.800 | 2011-03-22T07:37:55.537 | 2011-03-22T07:37:55.537 | 223 | 223 | null |
8616 | 2 | null | 8566 | 6 | null | The following is not restricted to NB + LogRes
Overfitting = Loss of generalization.
When you train a model on a dataset, you generally assume that the data you use for training has a similar structure to the data the model is applied to later (the general assumption of predicting the future from the past). So if you remove parts of the data (namely the misclassified instances) and train a model on this reduced dataset, you effectively change the structure of the data in comparison to the test dataset (and hence violate this assumption). In this case the following can happen (when testing this model on an unreduced test dataset):
In the best case nothing happens, e.g. for one of the following reasons:
- The misclassified instances represented only a tiny subspace of the dataspace (this corresponds to the high accuracy achieved by the first model)
- The model classifies one part of the dataspace better and another one worse, so that they even out.
In the worst case the quality decreases rapidly because of overfitting / loss of generalization power. The model focuses too hard on the part of the dataspace containing the instances correctly classified in the first step, and hence is no longer able to make even an approximate statement about the rest of the dataspace.
---
I think what you are actually looking for is called [Boosting](http://en.wikipedia.org/wiki/Boosting), where one restricts the dataspace to the misclassified instances (i.e. the opposite of your strategy) to refine the model. The procedure tries to avoid overfitting by combining the different (subspace) models afterwards, but overfitting nevertheless remains an issue.
Here is a [plain-text explanation of boosting](https://stats.stackexchange.com/questions/7813/adjusting-sample-weights-in-adaboost/7877#7877) with an illustrative graphic that you might find helpful.
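To make the reweighting idea concrete, here is a minimal sketch of the AdaBoost-style weight update (Python/NumPy; the function name and toy data are mine, and this is only the reweighting step, not a full boosting implementation):

```python
import numpy as np

def adaboost_reweight(weights, misclassified):
    """One AdaBoost-style reweighting step: increase the weight of the
    misclassified instances so the next learner focuses on them."""
    eps = weights[misclassified].sum()            # weighted error rate
    alpha = 0.5 * np.log((1.0 - eps) / eps)       # this learner's vote weight
    sign = np.where(misclassified, 1.0, -1.0)
    new_w = weights * np.exp(alpha * sign)
    return new_w / new_w.sum(), alpha             # renormalize to sum to 1

w = np.full(4, 0.25)                              # uniform starting weights
mis = np.array([True, False, False, False])       # one of four misclassified
w2, alpha = adaboost_reweight(w, mis)
print(w2)   # the misclassified instance now carries half the total weight
```

After the update, the misclassified instances always carry exactly half the total weight, which is what forces the next learner to treat them seriously.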
| null | CC BY-SA 2.5 | null | 2011-03-22T07:39:48.263 | 2011-03-22T07:39:48.263 | 2017-04-13T12:44:32.747 | -1 | 264 | null |
8617 | 1 | 8618 | null | 19 | 1842 | An increase in the number of cases and deaths occurs during epidemics (a sudden increase in numbers) due to virus circulation (like West Nile Virus in the USA in 2002), decreasing resistance in people, contamination of food or water, or an increase in the number of mosquitoes.
These epidemics will present as outliers which can occur every 1 to 5 years.
By removing these outliers we are removing evidence of epidemics which form an important part of forecasting and disease understanding.
Is data cleaning necessary while dealing with outliers caused by epidemics?
Is it going to improve the results or worsen the results of statistical analysis?
| Can data cleaning worsen the results of statistical analysis? | CC BY-SA 2.5 | null | 2011-03-22T07:56:30.263 | 2011-08-16T07:34:59.977 | 2011-03-22T22:55:17.803 | null | 2956 | [
"time-series",
"forecasting",
"epidemiology",
"outliers"
] |
8618 | 2 | null | 8617 | 13 | null | It actually depends on the purpose of your research. In my opinion, there could be several:
- You want to understand the typical factors that cause cases and deaths and that are not affected by epidemic periods, as well as the factors that cause epidemics (so you are interested in typical, not force majeure, probabilities) - in this case you obviously need to remove the epidemic periods from the data, as they are, by the purpose of the research, outliers with respect to what you would like to conclude
- You may want to include epidemic changes in your models (regime-switching models, for instance; any good links and model suggestions from the community are welcome here), because you want to know the probability of an epidemic period occurring (and how long it will last), to test stability, and to forecast - in this case you do not exclude epidemic periods, but look for more complicated models rather than going for the hammer econometric tool $OLS$ or something similar
- Your primary goal IS to detect epidemic periods and monitor for them in real time - it's a special field of econometrics that a number of my colleagues are working on at Vilnius University (and you would definitely like to have a lot of epidemic observations to work with)
So if your primary goal is something like 2, clearing the data will cause wrong conclusions about future forecasts, i.e. inaccurate forecasting performance. It is also true that the 2nd case does not necessarily provide better forecasts, but you could at least draw conclusions about the probabilities of epidemic periods and their length. This IS vitally important for actuarial mathematicians, so maybe you are one?
| null | CC BY-SA 2.5 | null | 2011-03-22T08:51:11.383 | 2011-03-22T08:51:11.383 | null | null | 2645 | null |
8619 | 2 | null | 8617 | 16 | null | I personally wouldn't call this "data cleaning". I think of data cleaning more in the sense of data editing - cleaning up inconsistencies in the data set (e.g. a record has reported age of 1000, or a person aged 4 is a single parent, etc.).
The presence of a real effect in your data does not make it "messy" (to the contrary, the presence of real effects would make it rich) - although it can make your mathematical task more involved. I would suggest that the data be "cleaned" in this way if it is the only feasible way to get a prediction. If there is a feasible way which doesn't throw away information, then use that.
It sounds like you may benefit from some sort of cyclical analysis, given that you say this effect comes around periodically (kind of like a "business cycle").
From my point of view, if you are looking at forecasting something, then removing a genuine effect from that source can only make your predictions worse. This is because you have effectively "thrown away" the very information that you wish to predict!
The other point is that it may be difficult to determine how much of a set of deaths was due to the epidemic, and how much was caused by ordinary fluctuations.
In statistical terminology, it sounds like the epidemic is, from your point of view, a "nuisance" to what you actually want to analyse. So you aren't particularly interested in it, but you need to somehow account for it in your analysis. One "quick and dirty" way to do this in a regression setting is to include an indicator for the epidemic years/periods as a regressor variable. This will give you an average estimate of the effect of epidemics (and implicitly assumes the effect is the same for each epidemic). However, this approach only works for describing the effect, because in forecasting, your regression variable is unknown (you don't know which periods in the future will be epidemic ones).
Another way to account for the epidemic is to use a mixture model with two components: one model for the epidemic part and one model for the "ordinary" part. The model then proceeds in two steps: 1) classify a period as epidemic or normal, then 2) apply the model to which it was classified.
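The indicator-regressor idea can be sketched with simulated data (Python/NumPy; all numbers are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2000, 2020)
epidemic = np.isin(years, [2003, 2008, 2014]).astype(float)  # known epidemic years

# Simulated deaths: baseline trend + a +50 epidemic bump + noise
deaths = 100 + 2 * (years - 2000) + 50 * epidemic + rng.normal(0, 3, years.size)

# OLS with an intercept, a time trend, and the epidemic indicator
X = np.column_stack([np.ones(years.size), years - 2000, epidemic])
beta, *_ = np.linalg.lstsq(X, deaths, rcond=None)
print(beta)   # beta[2] estimates the average epidemic effect (about 50 here)
```

As the answer notes, this only describes the average epidemic effect; it does not tell you which future periods will be epidemic ones.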
| null | CC BY-SA 2.5 | null | 2011-03-22T09:24:20.850 | 2011-03-22T09:24:20.850 | null | null | 2392 | null |
8620 | 2 | null | 8608 | 4 | null | This is a straightforward application of Bayes' theorem. Now Bayes' theorem reads:
$$P(H|F,I)=\frac{P(H|I)P(F|H,I)}{P(F|I)}$$
So in your example you have two "hypotheses", SUN and RAIN. $F$ stands for the forecast, and $I$ for the prior information (or assumptions). The "$I$" has not been included explicitly in your question, but it is always there. This gives two equations:
$$P(SUN|F,I)=\frac{P(SUN|I)P(F|SUN,I)}{P(F|I)}$$
$$P(RAIN|F,I)=\frac{P(RAIN|I)P(F|RAIN,I)}{P(F|I)}$$
Now you have been given all probabilities except for the $P(F|I)$. You can explicitly calculate this, but a quicker way is to get rid of it by taking the odds of SUN to RAIN.
$$\frac{P(SUN|F,I)}{P(RAIN|F,I)}=\frac{\frac{P(SUN|I)P(F|SUN,I)}{P(F|I)}}{\frac{P(RAIN|I)P(F|RAIN,I)}{P(F|I)}}=\frac{P(SUN|I)P(F|SUN,I)}{P(RAIN|I)P(F|RAIN,I)}$$
And this odds ratio is given by $\frac{0.14}{0.27}$. And then you can use the exhaustiveness of the two hypotheses, because $$P(SUN|I)+P(RAIN|I)=1\rightarrow P(SUN|F,I)+P(RAIN|F,I)=1$$
$$\rightarrow P(RAIN|F,I)=1-P(SUN|F,I).$$
And you get:
$$\frac{P(SUN|F,I)}{1-P(SUN|F,I)}=\frac{0.14}{0.27}$$
And re-arranging for $P(SUN|F,I)$ gives:
$$P(SUN|F,I)=\frac{\frac{0.14}{0.27}}{1+\frac{0.14}{0.27}}=\frac{0.14}{0.41}=0.34$$
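The normalization is just dividing each unnormalized product by their sum; a short Python check (my addition) using the numbers from the question:

```python
unnorm = {"sun": 0.7 * 0.2, "rain": 0.3 * 0.9}         # P(W) * P(F=bad | W)
total = sum(unnorm.values())                            # P(F=bad) = 0.41
posterior = {w: p / total for w, p in unnorm.items()}   # normalize
print({w: round(p, 2) for w, p in posterior.items()})   # {'sun': 0.34, 'rain': 0.66}
```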
| null | CC BY-SA 2.5 | null | 2011-03-22T09:45:20.753 | 2011-03-22T09:45:20.753 | null | null | 2392 | null |
8621 | 2 | null | 8611 | 2 | null | Rather than a boxplot, I'd use a strip chart, since both data sets are small and it will be easy to see each data point.
| null | CC BY-SA 2.5 | null | 2011-03-22T10:10:29.543 | 2011-03-22T10:10:29.543 | null | null | 686 | null |
8622 | 2 | null | 8614 | 5 | null | Despite the fact that there was a lot of discussion on related topics (though no answer was provided), I would like to add some ideas from my own teaching and model-building experience. It would also be very useful for you to first read any good textbook on econometrics or time-series analysis (just so we speak the same language).
Sidenote: it would also be great to switch from the jack-of-all-trades Matlab to either [$R$](http://www.r-project.org/) or [gretl](http://gretl.sourceforge.net/), both open-source products that are great for time-series work ($R$ for much more besides).
---
As follows from your description of the problem, you have a time-series process, denote it by $y(t)$, that could be too noisy (low signal-to-noise ratio) for a researcher to see any clear time-dependence by simply eyeballing the data (though a plot of a typical sensor's output would be much appreciated and would improve this answer). Note that time is already present in the definition of the object. From here you have a number of options for what to do next:
- You may go solo, first trying to decompose the time-series object into trend + cycle + seasonal part (for magnetic fields, both cycle and seasonal parts are very likely to be present) + noise. At this stage it could turn out that you have something close to a white-noise sequence. Note also that your trend could be either deterministic (fully predictable) or stochastic. The form of the trend can be tested by a number of unit-root tests. Plotting the data again could be useful for detecting deterministic behaviour.
- Alternatively, you may be interested in the Box-Jenkins approach and try to build the best parsimonious (S)ARIMA model for $y(t)$. A side step in this approach is the analysis of the correlogram (ACF and PACF plots); in any case, plotting these is a good first thing to do with your time-series object.
- You may also find it useful to move from the time to the frequency domain: produce a periodogram and search for deterministic sinusoidal behaviour or trends (again, if the periodogram is flat you are probably indeed dealing with a white-noise process)
- You may search for meaningful covariates like average temperature, solar activity, earthquakes, etc. In this case you could build a theory-based regression model, as opposed to the a-theoretical decomposition and (S)ARIMA models.
Concluding remark: as you may see from this response, time-dependence can be far more complicated than just a linear time trend. There is no short answer to your question (well, the shortest could be: produce a correlogram and apply the [Ljung-Box](http://en.wikipedia.org/wiki/Ljung%E2%80%93Box_test) test; if the series is close to white noise, conclude that there is no time-dependence, otherwise some kind of time-dependence IS present — but are you happy with that?). Good luck.
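For reference, the correlogram/Ljung-Box route can be sketched directly (Python/NumPy; in practice R or gretl would compute this for you, and the simulated series are mine):

```python
import numpy as np

def acf(x, k):
    """Sample autocorrelation of x at lag k."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)

def ljung_box_q(x, h):
    """Ljung-Box Q over lags 1..h; ~ chi-square(h) under white noise."""
    n = len(x)
    return n * (n + 2) * sum(acf(x, k) ** 2 / (n - k) for k in range(1, h + 1))

rng = np.random.default_rng(0)
white = rng.normal(size=500)
walk = np.cumsum(white)             # a random walk: strongly time-dependent

print(ljung_box_q(white, 10))       # small: consistent with no time-dependence
print(ljung_box_q(walk, 10))        # huge: time-dependence is obvious
```

Under white noise, Q is approximately chi-square with 10 degrees of freedom here, so values far above the ~18.3 critical point (5% level) indicate time-dependence.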
| null | CC BY-SA 2.5 | null | 2011-03-22T11:00:05.527 | 2011-03-22T11:00:05.527 | null | null | 2645 | null |
8623 | 2 | null | 8617 | 6 | null | The role of "data cleansing" is to identify when "our laws (model) do not work". Adjusting for outliers or abnormal data points allows us to get "robust estimates" of the parameters of the model we are currently entertaining. If untreated, these "outliers" permit an unwanted distortion of the model parameters, as estimation is "driven to explain these data points" that are "not behaving according to our hypothesized model". In other words, there is a lot of payback in terms of explained sum of squares from focusing on the "baddies". The empirically identified points that require cleansing should be carefully scrutinized in order to potentially develop/suggest causal factors that are not in the current model. The Level Shift identified in STATE1 for the data you presented in the question below is an example of "knowledge waiting to be discovered".
[How to assess effect of intervention in one state versus another using annual case fatality rate?](https://stats.stackexchange.com/questions/8358/how-to-assess-effect-of-intervention-in-one-state-versus-another-using-annual-cas)
To do science is to search for repeated patterns.
To detect anomalies is to identify values that do not follow repeated patterns.
How else would you know that a point violated that model? In fact, the process of growing,
understanding, finding, and examining outliers must be iterative. This isn't a new thought.
Sir Francis Bacon, writing in Novum Organum about 400 years ago, said: “Errors of Nature, Sports and Monsters correct the understanding in regard to ordinary things, and reveal general forms. For whoever knows the ways of Nature will more easily notice her deviations; and, on the other hand, whoever knows her deviations will more accurately describe her ways.”
We change our rules by observing when the current rules fail.
If indeed the identified outliers are all pulses and have similar effects (size), then we suggest the following (quoted from another poster):
"One "quick and dirty" way to do this in a regression setting is to include an indicator for the epidemic years/periods as a regressor variable. This will give you an average estimate of the effect of epidemics (and implicitly assumes the effect is the same for each epidemic). However, this approach only works for describing the effect, because in forecasting, your regression variable is unknown (you don't know which periods in the future will be epidemic ones)."
This of course requires that the individual anomalies (pulse years) have similar effects. If they differ, then the portmanteau variable described above would be incorrect.
| null | CC BY-SA 2.5 | null | 2011-03-22T11:18:37.447 | 2011-03-24T12:21:41.397 | 2017-04-13T12:44:21.160 | -1 | 3382 | null |
8624 | 2 | null | 4604 | 2 | null | To me, it sounds more like you want a conditional frequency, as a conditional probability has no "error" so to speak. The only error in a probability comes either from a mathematical approximation or from a mathematical error in the calculation. Once you make this conceptual distinction, I think finding the exact measure of error is easy. @robin's answer is a very conservative number - it represents the worst-case error.
Anyways, I assume that your data is a $K\times K$ frequency table. The first way (rows) is the previous letter $p_j\;(j=1,\dots,K)$, and the second way (columns) is the current letter $c_i\;(i=1,\dots,K)$. Let the counts in each cell be denoted by $n_{ij}$. So $n_{11}$ is the number of times the character $p_1$ preceded the character $c_1$. Expressing your problem this way also allows a straightforward generalisation to a more general model - just add extra ways into the frequency table to give your process more "memory" (although the notation becomes a bit cumbersome, for you have quantities such as $f_{i|(j_1,j_2,\dots,j_L)}$ for an $L$th-order autoregressive process).
Now we need to write down the model. Because you have a Markov model, the frequencies satisfy $0\leq f_{i|j}\leq 1$: each frequency is a function of $p_j$ (i.e. we are modeling each row of the frequency table separately), and within each row they are constrained to sum to 1, $\sum_{i=1}^{K}f_{i|j}=1\;(j=1,\dots,K)$.
The simplest model is to assume that each frequency is unconnected with any other. In this case you have a $K$-dimensional multinomial distribution for each row
$$p(n_{1j},n_{2j},\dots,n_{Kj}|f_{1|j},f_{2|j},\dots,f_{K|j},I)\propto
f_{1|j}^{n_{1j}}\,f_{2|j}^{n_{2j}}\cdots f_{K|j}^{n_{Kj}}
$$
Now you need a prior distribution for the $f_{i|j}$ values, which expresses what you know about them prior to seeing the data. If you know that each letter is possible prior to seeing the data, then the uniform prior is the appropriate one. This is the one I will use. This gives a [Dirichlet posterior distribution](http://en.wikipedia.org/wiki/Dirichlet_distribution) for the $f_{i|j}$ (same form as above, because of the uniform prior). This has a mode (most likely value) at $f_{i|j}=n_{ij}/\sum_{m}n_{mj}$, as you would expect. Now, this means that each of the marginal posterior distributions has a Beta distribution
$$(f_{i|j}|D,I)\sim Beta(n_{ij}+1,\sum_{m\neq i}n_{mj}+K-1)$$
With a simple form for its mean and variance:
$$E(f_{i|j}|D,I)=\frac{n_{ij}+1}{\sum_{m}n_{mj}+K}$$
$$Var(f_{i|j}|D,I)=\frac{E(f_{i|j}|D)\left[1-E(f_{i|j}|D)\right]}{\sum_{m}n_{mj}+K+1}$$
Also, you can use the Dirichlet distribution to get the correlation between two of the frequencies:
$$Corr(f_{i|j},f_{l|j}|D)=-\sqrt{\frac{E(f_{i|j}|D)}{1-E(f_{i|j}|D)}\,\frac{E(f_{l|j}|D)}{1-E(f_{l|j}|D)}}$$
I think this approach gives you a far better idea of the accuracy in your data, and it shows the value of making a distinction between probability and frequency: a probability can never be observed, only calculated or assigned, while a frequency can never be calculated or assigned, only observed and predicted.
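The posterior summaries above are straightforward to compute from a row of counts (a Python/NumPy sketch; the function name and example counts are mine):

```python
import numpy as np

def posterior_summaries(counts):
    """Posterior mean, variance, and pairwise correlation of the row
    frequencies under the uniform (Dirichlet(1, ..., 1)) prior."""
    counts = np.asarray(counts, dtype=float)
    alpha = counts + 1.0              # Dirichlet posterior parameters
    a0 = alpha.sum()                  # = sum of counts + K
    mean = alpha / a0
    var = mean * (1.0 - mean) / (a0 + 1.0)
    odds = mean / (1.0 - mean)
    corr = -np.sqrt(np.outer(odds, odds))
    np.fill_diagonal(corr, 1.0)
    return mean, var, corr

mean, var, corr = posterior_summaries([8, 1, 1])   # one row of a K = 3 table
print(mean)   # (n_ij + 1) / (sum_m n_mj + K) = [9, 2, 2] / 13
```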
| null | CC BY-SA 2.5 | null | 2011-03-22T11:32:23.003 | 2011-03-22T11:32:23.003 | null | null | 2392 | null |
8625 | 1 | 8671 | null | 16 | 10229 | I was fiddling with PCA and LDA methods and I am stuck at a point; I have a feeling that it is so simple that I can't see it.
Within-class ($S_W$) and between-class ($S_B$) scatter matrices are defined as:
$$
S_W = \sum_{i=1}^C\sum_{t=1}^N(x_t^i - \mu_i)(x_t^i - \mu_i)^T
$$
$$
S_B = \sum_{i=1}^CN(\mu_i-\mu)(\mu_i-\mu)^T
$$
Total scatter matrix $S_T$ is given as:
$$
S_T = \sum_{i=1}^C\sum_{t=1}^N(x_t^i - \mu)(x_t^i - \mu)^T = S_W + S_B
$$
where $C$ is the number of classes, $N$ is the number of samples per class, the $x$ are samples, $\mu_i$ is the $i$th class mean, and $\mu$ is the overall mean.
While trying to derive $S_T$ I came up to a point where I had:
$$
(x-\mu_i)(\mu_i-\mu)^T + (\mu_i-\mu)(x-\mu_i)^T
$$
as a term. This needs to be zero, but why?
---
Indeed:
\begin{align}
S_T &= \sum_{i=1}^C\sum_{t=1}^N(x_t^i - \mu)(x_t^i - \mu)^T \\
&= \sum_{i=1}^C\sum_{t=1}^N(x_t^i - \mu_i + \mu_i - \mu)(x_t^i - \mu_i + \mu_i - \mu)^T \\
&= S_W + S_B + \sum_{i=1}^C\sum_{t=1}^N\big[(x_t^i - \mu_i)(\mu_i - \mu)^T + (\mu_i - \mu)(x_t^i - \mu_i)^T\big]
\end{align}
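Numerically, the identity $S_T = S_W + S_B$ checks out because the cross terms sum to zero over each class ($\sum_{t=1}^N (x_t^i - \mu_i) = 0$, since $\mu_i$ is the class sample mean). A NumPy sketch with made-up data, assuming equal class sizes $N$ as in the formulas above:

```python
import numpy as np

rng = np.random.default_rng(0)
C, N, d = 3, 10, 4                       # classes, samples per class, dimension
X = [rng.normal(i, 1.0, size=(N, d)) for i in range(C)]   # one array per class

mu_i = [x.mean(axis=0) for x in X]       # class means
mu = np.vstack(X).mean(axis=0)           # overall mean

S_W = sum((x - m).T @ (x - m) for x, m in zip(X, mu_i))
S_B = sum(N * np.outer(m - mu, m - mu) for m in mu_i)
S_T = (np.vstack(X) - mu).T @ (np.vstack(X) - mu)

print(np.allclose(S_T, S_W + S_B))       # True: the cross terms cancel
```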
| Deriving total (within class + between class) scatter matrix | CC BY-SA 3.0 | null | 2011-03-22T12:43:24.383 | 2018-01-15T12:16:49.350 | 2018-01-15T12:16:49.350 | 28666 | 760 | [
"discriminant-analysis"
] |
8626 | 2 | null | 8581 | 2 | null | I understand your difficulty, as I have a similar problem when I try to do something new in statistics (I'm also a grad student, but in a different field). I have found examining the R code quite useful for getting an idea of how something is calculated. For example, I have recently been learning how to use `kmeans` clustering and have many basic questions, both conceptual and about how it is implemented. Using an `R` installation (I recommend `R Studio`, [http://www.rstudio.org/](http://www.rstudio.org/), but any installation works), just type `kmeans` into the command line. Here is an example of part of the output:
```
x <- as.matrix(x)
m <- nrow(x)
if (missing(centers))
stop("'centers' must be a number or a matrix")
nmeth <- switch(match.arg(algorithm), `Hartigan-Wong` = 1,
Lloyd = 2, Forgy = 2, MacQueen = 3)
if (length(centers) == 1L) {
if (centers == 1)
nmeth <- 3
k <- centers
if (nstart == 1)
centers <- x[sample.int(m, k), , drop = FALSE]
if (nstart >= 2 || any(duplicated(centers))) {
cn <- unique(x)
mm <- nrow(cn)
if (mm < k)
stop("more cluster centers than distinct data points.")
centers <- cn[sample.int(mm, k), , drop = FALSE]
}
}
```
I'm not sure how practical it is to examine the source every time, but it really helps me get an idea of what is going on, assuming you have some familiarity with the syntax.
A previous question I asked on [stackoverflow](https://stackoverflow.com/questions/5297889/how-do-i-find-the-parameters-used-in-kmeans-to-create-clusters) pointed me in this direction, but also helpfully told me that the comments about the code are sometimes included [here](http://svn.r-project.org/R/trunk/src/library/stats/R/kmeans.R).
---
More generally, the [Journal of Statistical Software](http://www.jstatsoft.org/) illustrates this link between theory and implementation; it frequently covers advanced topics (that I personally have difficulty understanding), but it is useful as an example.
| null | CC BY-SA 2.5 | null | 2011-03-22T13:11:06.113 | 2011-03-22T13:30:20.690 | 2017-05-23T12:39:26.203 | -1 | 2635 | null |
8627 | 2 | null | 8604 | 7 | null | A likely obtuse reference, but if you by chance have access to it I would recommend reading this book chapter ([Apel and Sweeten, 2010](http://dx.doi.org/10.1007/978-0-387-77650-7_26)). It is aimed at social scientists and so perhaps not as mathematically rigorous as you seem to want, but it should go into enough depth to be more than a satisfactory answer to your question.
There are a few different ways people treat propensity scores that can result in different conclusions from simply including covariates in a regression model. When one matches scores one does not necessarily have common support for all observations (i.e. one has some observations that appear to never have the chance to be in the treatment group, and some that are always in the treatment group). Also one can weight observations in various ways that can result in different conclusions.
In addition to the answers here I would also suggest you check out the answers to the [question](https://stats.stackexchange.com/questions/3400/from-a-statistical-perspective-can-one-infer-causality-using-propensity-scores-w/3443#3443) chl cited. There is more substance behind propensity scores than simply a statistical trick to achieve covariate balance. If you read and understand the highly cited articles by Rosenbaum and Rubin, it will be clearer why the approach is different from simply adding covariates to a regression model. I think a more satisfactory answer to your question lies not necessarily in the math behind propensity scores but in their logic.
| null | CC BY-SA 2.5 | null | 2011-03-22T13:14:03.273 | 2011-03-22T13:14:03.273 | 2017-04-13T12:44:41.980 | -1 | 1036 | null |
8628 | 2 | null | 8617 | 6 | null | To give you a general answer to your question, let me paraphrase one of my old general managers: the opportunities for research are found in the outliers of the model you are fitting.
The situation is similar to the experiment performed by Robert Millikan in determining the charge of an electron. Decades after he won the Nobel prize for his experiment, his notes were examined and it was found that he threw out a large number of data points because they disagreed with the results he was looking for. Is that bad science?
If you find a few outliers, then maybe they are due to "statistical aberrations". However,
if you find more than a few outliers, you need to explore your data more closely. If you cannot attribute a cause to the aberrations, then you do not understand the process, and a statistical model will not solve your problem. The purpose of a model is to summarize a process; the model will not magically summarize a process the experimenter does not understand.
| null | CC BY-SA 2.5 | null | 2011-03-22T13:25:28.850 | 2011-03-22T13:25:28.850 | null | null | 3805 | null |
8629 | 2 | null | 8581 | 3 | null | I get the impression that you think you can get insight into a statistical equation
by programming it into either R or C++; you can't. To understand a statistical equation,
find an "undergraduate" textbook that contains the equation and has lots of homework problems at the end of each chapter, and then do the homework for the chapter containing the equation.
For example, to understand PCA you do need a good understanding of linear algebra, and in particular singular value decomposition. While learning quantum computing through Michael Nielsen's book, it became apparent to me that I needed to review linear algebra. I came across Gilbert Strang's videos, which were extremely helpful in establishing a foundational understanding of the concepts. However, the nuances of the material did not sink in until
I found a linear algebra book containing a lot of homework problems, and then worked through them.
| null | CC BY-SA 2.5 | null | 2011-03-22T13:58:45.933 | 2011-03-22T13:58:45.933 | null | null | 3805 | null |
8630 | 1 | null | null | 19 | 3223 | I have carried out a principal components analysis of six variables $A$, $B$, $C$, $D$, $E$ and $F$. If I understand correctly, unrotated PC1 tells me what linear combination of these variables describes/explains the most variance in the data and PC2 tells me what linear combination of these variables describes the next most variance in the data and so on.
I'm just curious -- is there any way of doing this "backwards"? Let's say I choose some linear combination of these variables -- e.g. $A+2B+5C$, could I work out how much variance in the data this describes?
| Principal component analysis "backwards": how much variance of the data is explained by a given linear combination of the variables? | CC BY-SA 3.0 | null | 2011-03-22T14:00:23.313 | 2016-08-24T22:58:15.310 | 2015-01-28T09:21:45.423 | 28666 | 3845 | [
"variance",
"pca",
"r-squared",
"covariance-matrix"
] |
8631 | 1 | null | null | 7 | 571 | I want to understand how to make calculations on the prevalence of a disease in a country population and the impact that the element of average life expectancy (of those suffering with the disease at time of diagnosis) has on this calculation.
Are there any 'best practice' papers on making epidemiology calculations?
| How would life expectancy impact the calculation of disease prevalence? | CC BY-SA 2.5 | null | 2011-03-22T14:12:22.203 | 2012-09-02T15:56:25.523 | 2012-09-02T15:56:25.523 | 919 | 3844 | [
"epidemiology"
] |
8632 | 1 | null | null | 1 | 3274 | The [ISO VIM](http://www.iso.org/sites/JCGM/VIM/JCGM_200e.html) defines them as:
>
measurement method: generic description of a logical organization of operations used in a measurement.
measurement procedure: detailed description of a measurement according to one or more measurement principles and to a given measurement method, based on a measurement model and including any calculation to obtain a measurement result.
But I'm still not sure what their difference is. Can anyone explain this to me, maybe with some examples?
| What's the difference between "measurement method" and "measurement procedure"? | CC BY-SA 2.5 | null | 2011-03-22T14:28:02.757 | 2011-06-30T07:27:45.080 | 2011-03-23T08:23:14.400 | 2645 | 3823 | [
"teaching",
"terminology",
"measurement",
"methodology"
] |
8633 | 1 | null | null | 2 | 640 | If I find that my covariate (reaction time) alters over the length of my experiment (e.g. due to fatigue), can I somehow build that into my model?
So what I am saying is that the effect of my covariate is not constant (between subjects and within subjects).
| ANCOVA with multiple instances of the between-subject covariate | CC BY-SA 2.5 | null | 2011-03-22T14:57:53.010 | 2011-08-25T01:34:29.183 | 2011-03-22T22:51:11.193 | null | 3822 | [
"repeated-measures",
"ancova"
] |
8634 | 1 | 8648 | null | 9 | 7852 | Given two bivariate normal distributions $P \equiv \mathcal{N}(\mu_p, \Sigma_p)$ and $Q \equiv \mathcal{N}(\mu_q, \Sigma_q)$, I am trying to calculate the Jensen-Shannon divergence between them, defined (for the discrete case) as:
$JSD(P\|Q) = \frac{1}{2} (KLD(P\|M)+ KLD(Q\|M))$
where $KLD$ is the Kullback-Leibler divergence, and $M=\frac{1}{2}(P+Q)$
I've found the way to calculate [$KLD$](http://en.wikipedia.org/wiki/Multivariate_normal_distribution#Kullback.E2.80.93Leibler_divergence) in terms of the distributions' parameters, and thus $JSD$.
My doubts are:
- To calculate $M$, I just did $M \equiv \mathcal{N}(\frac{1}{2}(\mu_p + \mu_q), \frac{1}{2}(\Sigma_p + \Sigma_q))$. Is this right?
- I've read in [1] that the $JSD$ is bounded, but that doesn't appear to be true when I calculate it as described above for normal distributions. Does it mean I am calculating it wrong, violating an assumption, or something else I don't understand?
| Jensen-Shannon divergence for bivariate normal distributions | CC BY-SA 2.5 | null | 2011-03-22T16:15:30.263 | 2022-10-17T03:56:20.170 | 2011-03-23T01:53:13.153 | 2970 | 3843 | [
"normal-distribution",
"distance-functions",
"information-theory"
] |
8637 | 2 | null | 2181 | 11 | null | Almost the same question was asked recently on the [ISOSTAT](http://www.lawrence.edu/fast/jordanj/isostat.html) listserver (frequented by college professors):
>
If you had a strong undergraduate student who was interested in learning about various multivariate methods (e.g. PCA, MANOVA, discriminant analysis, ...) is there a good, accessible book you might recommend she/he purchase?
Here are the responses:
- Perhaps "Applied Multivariate Data Analysis", 2nd edition, by Everitt, B. and Dunn, G. (2001), published by Arnold. [Roger Johnson]
- Rencher's Methods of Multivariate Analysis is a great resource. I think a strong undergraduate student could grasp the material. [Philip Yates]. I'm fond of Rencher's approach. He offers good intuition and examples. But the matrix algebra can get pretty thick; I'm not sure "accessible" is an adjective I'd use. Nevertheless, I've taught undergrads successfully with his book. His second edition is a good improvement over the first. [Paul Velleman]
- Applied Multivariate Statistics by Johnson and Wichern. [Brad Hartlaub]
- I haven't done much with it, but I do like the idea of using modern techniques and modern data sets: Modern Multivariate Statistical Techniques by Alan Julian Izenman. (I own the book, it has the topics you are looking for, and the text seems accessible.) [Johanna Hardin]
| null | CC BY-SA 2.5 | null | 2011-03-22T17:21:14.290 | 2011-03-22T17:21:14.290 | null | null | 919 | null |
8638 | 2 | null | 8611 | 3 | null | I would try a q-q plot if you have enough data:
```
% make fake data
x1 = randn(1000, 1) .^ 2;
x2 = (1.3 * randn(2000, 1)) .^ 2;

% which quantiles?
alphas = linspace(0, 1, 100);
alphas = alphas(2:end-1);

% empirical quantiles of each sample
q1 = interp1(linspace(0, 1, numel(x1)), sort(x1), alphas, 'linear');
q2 = interp1(linspace(0, 1, numel(x2)), sort(x2), alphas, 'linear');

scatter(q1, q2)
set(gca(), 'XScale', 'log');
set(gca(), 'YScale', 'log');

% add the y = x reference line
xyline = [min(min(q1), min(q2)), max(max(q1), max(q2))];
hold on;
plot(xyline, xyline, 'r-');
```

Adding a linear fit with stock MATLAB tools is not as simple as a one-liner, regrettably...
| null | CC BY-SA 2.5 | null | 2011-03-22T17:21:44.560 | 2011-03-22T17:53:05.827 | 2011-03-22T17:53:05.827 | 795 | 795 | null |
8639 | 1 | 8641 | null | 4 | 160 | Greetings,
Is it possible to use evidence in a WinBUGS model? For example, a random variable in a model has been observed, and I'd like to update the other variables in the model, much the same update as performed in tools like Smile or other inference software.
Gibbs sampling is supposed to use observed values in the full conditional distributions when observations/evidence are entered into the model, but I am not sure if WinBUGS allows this.
Regards
edit to clarify:
The WinBUGS documentation says a stochastic node is a data node if it has been observed, but a stochastic node is described via a distribution, as in some_var ~ dbin(theta, n)
If it has been observed, then I'd like to tell this to WinBUGS without losing the semantics of the stochastic node, expressing something like "some_var has this distribution, and it has been observed to have this particular value".
So how do I do that? By declaring some_var as I've done above and then setting a value to it, as in some_var = 5? Would that express what I want to express?
In this case, for each observation of a node in a Bayesian network, I'd need to redefine the WinBUGS model, (quite likely) replacing the initial values of the unobserved nodes with the outcomes of the previous simulation.
In short, I'm trying to understand how to perform updates on a Bayesian network similar to message passing in exact inference, but using Gibbs sampling instead, via WinBUGS.
| Can I insert an observation (evidence) to a Winbugs model? | CC BY-SA 2.5 | null | 2011-03-22T17:23:12.153 | 2016-09-12T19:30:54.530 | 2016-09-12T19:30:54.530 | 28666 | 3280 | [
"inference",
"bugs"
] |
8641 | 2 | null | 8639 | 2 | null | Of course it's possible to use evidence from observations in WinBUGS! Try working through any of the examples in the documentation that comes with the program to see how.
| null | CC BY-SA 2.5 | null | 2011-03-22T18:58:10.133 | 2011-03-22T18:58:10.133 | null | null | 449 | null |
8642 | 1 | 8643 | null | 8 | 5966 | I want to perform a single-tail test on a single sample of real numbers (N~100) against an expected value. The population is known to be not normally distributed. So from what I've read about stats, I can do my testing using
- Wilcoxon signed rank test, or
- bootstrap the shifted sample data to obtain the null distribution of the t-statistic (see How to perform a bootstrap test to compare the means of two samples?).
Is that correct?
Which method is preferred for minimizing type I error, and if possible, why please?
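To make sure I understand the bootstrap option, here is a rough pure-Python sketch of what I have in mind (made-up skewed data; `mu0` and the resample count `B` are placeholders, not values from my study):

```python
import math
import random
import statistics

random.seed(0)

# Made-up skewed sample (squared normals) and a hypothesised mean mu0
sample = [random.gauss(0, 1) ** 2 for _ in range(100)]
mu0 = 1.0

def t_stat(xs, mu):
    n = len(xs)
    return (statistics.mean(xs) - mu) / (statistics.stdev(xs) / math.sqrt(n))

t_obs = t_stat(sample, mu0)

# Shift the sample so it satisfies the null (mean exactly mu0),
# then bootstrap the t-statistic to build its null distribution.
shift = mu0 - statistics.mean(sample)
shifted = [x + shift for x in sample]

B = 2000
t_null = []
for _ in range(B):
    resample = random.choices(shifted, k=len(shifted))
    t_null.append(t_stat(resample, mu0))

# One-sided p-value: how often the bootstrap t exceeds the observed t
p_one_sided = sum(t >= t_obs for t in t_null) / B
print(round(p_one_sided, 3))
```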
| What method is preferred, a bootstrapping test or a nonparametric rank-based test? | CC BY-SA 3.0 | null | 2011-03-22T18:58:38.307 | 2023-05-09T17:07:10.133 | 2017-04-13T12:44:28.873 | -1 | 3847 | [
"hypothesis-testing",
"nonparametric",
"bootstrap",
"wilcoxon-signed-rank"
] |
8643 | 2 | null | 8642 | 1 | null | You just described the difference. No one can know the outcome differences in advance, because they depend greatly on the nature of your data.
Do you know the non-normal distribution you're working with? If so, you could simulate some results and see what the typical error rates for the different tests were and how they differed.
| null | CC BY-SA 2.5 | null | 2011-03-22T19:52:01.620 | 2011-03-22T19:52:01.620 | null | null | 601 | null |
8644 | 2 | null | 6772 | 4 | null | Following whuber's [link to Wikipedia](http://en.wikipedia.org/wiki/Fieller%27s_theorem#Case_1) you have
>
Assume that $a$ and $b$ are jointly
normally distributed, and that $b$ is
not too near zero (i.e. more
specifically, that the standard error
of $b$ is small compared to $b$)
$$\operatorname{Var} \left( \frac{a}{b} \right) = \left(
\frac{a}{b} \right)^{2} \left( \frac{\operatorname{Var}(a)}{a^2} +
\frac{\operatorname{Var}(b)}{b^2}\right).$$
though in fact you want $\operatorname{Var} \left( \frac{B}{A} \right)$.
If your 95% CI is $\pm 0.002$ then your variances for $A$ and $B$ are $(0.002/1.96)^2 \approx 0.00000104$, so $\operatorname{Var} \left( \frac{B}{A} \right) \approx 0.00846$. Taking the square root and multiplying by 1.96 you get $$\frac{B}{A} \approx 1.5 \pm 0.18$$
If you must turn this into percentages (I think it confuses more than it enlightens) then it becomes
>
B's proportion is 50% higher than A's, plus or minus 18%, i.e. between 32% higher and 68% higher.
In R you could simulate this by something like
```
> n <- 1000000
> A <- 0.02 + (0.002 / qnorm(0.975)) * rnorm(n)
> B <- 0.03 + (0.002 / qnorm(0.975)) * rnorm(n)
> C <- B / A
> quantile(C, probs = c(0.025, 0.5, 0.975))
2.5% 50% 97.5%
1.333514 1.499955 1.697418
```
which is reasonably close.
| null | CC BY-SA 2.5 | null | 2011-03-22T20:22:25.160 | 2011-03-22T20:22:25.160 | null | null | 2958 | null |
8645 | 2 | null | 8614 | 0 | null | Following DC's excellent summary of available approaches let me add:
The question "is a given noise from a compass sensor time-correlated" raises the issue of how to analyse the data in order to draw a conclusion. In the absence of user-specified support/explanatory series, one is left with approaches that entertain deterministic structure, ARIMA structure, and hybrids of the two. When faced with an unknown underlying frequency, we often scan for hidden stochastic seasonalities by a search process. In the absence of user-specified hints such as "these readings are taken every 24 hours", one must reach out, via trial and error, to deduce the strong (if any) hidden periodicity. Outliers, pulses, level shifts, and local time trends can also play havoc with system identification. We use very aggressive model identification strategies (no neural nets here) to extract the signal, be it stochastic or deterministic or both.
| null | CC BY-SA 2.5 | null | 2011-03-22T21:48:37.590 | 2011-03-31T22:39:28.480 | 2011-03-31T22:39:28.480 | 3382 | 3382 | null |
8647 | 2 | null | 8608 | 5 | null | Research has shown that people have difficulty reasoning in terms of probabilities but can do so accurately when presented with the same questions in terms of frequencies. So, let's consider a closely related setting where the probabilities are expressed as numbers of occurrences:
- In 100 similar situations, it rained 30 times and was sunny 70 times. This matches P(W=Sun) = 0.7 = 70/100 and P(W=Rain) = 0.3 = 30/100.
- From P(F=good|Sun) = 0.8 we compute that 0.8 * 70 = 56 times F will be "good" when W is "sun". Likewise, from P(F=bad|Sun) = 0.2 we compute that 0.2 * 70 = 14 times F will be "bad" when W is "sun".
- From P(F=good|Rain) = 0.1 we compute that 0.1 * 30 = 3 times F will be "good" when W is "rain" and from P(F=bad|Rain) = 0.9 we compute that 0.9 * 30 = 27 times F will be "bad" when W is "rain".
If F is "bad", what can we say? Well, this situation happened 14 + 27 = 41 times. In 14/41 = 0.34 of those times W was "sun"; therefore, we expect P(W=Sun|F=Bad) = 0.34. In the other 27/41 = 0.66 of those times W was "rain"; therefore, P(W=Rain|F=Bad) = 0.66.
Thus, "normalization" means we focus only on those situations where the conditioning event holds (F=bad in the example) and rescale the probabilities to sum to unity (as they must).
This is an archetypal example of [Bayes' Theorem](http://en.wikipedia.org/wiki/Bayes%27_theorem) which in mathematical terms says that to compute conditional probabilities, focus and rescale.
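The focus-and-rescale computation is mechanical enough to script; here is a minimal Python sketch that reproduces the numbers above:

```python
# Prior counts out of 100 situations, exactly as in the text above
counts = {"Sun": 70, "Rain": 30}
p_bad_given = {"Sun": 0.2, "Rain": 0.9}   # P(F=bad | W)

# Focus: counts of the situations where F=bad holds
bad_counts = {w: counts[w] * p_bad_given[w] for w in counts}  # Sun: 14, Rain: 27

# Rescale: divide by the total so the probabilities sum to one
total = sum(bad_counts.values())                              # 41
posterior = {w: bad_counts[w] / total for w in bad_counts}

print({w: round(p, 2) for w, p in posterior.items()})  # prints {'Sun': 0.34, 'Rain': 0.66}
```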
| null | CC BY-SA 2.5 | null | 2011-03-22T22:45:45.143 | 2011-03-22T22:45:45.143 | null | null | 919 | null |
8648 | 2 | null | 8634 | 9 | null | The midpoint measure $\newcommand{\bx}{\mathbf{x}} \newcommand{\KL}{\mathrm{KL}}M$ is a mixture distribution of the two multivariate normals, so it does not have the form that you give in the original post. Let $\varphi_p(\bx)$ be the probability density function of a $\mathcal{N}(\mu_p, \Sigma_p)$ random vector and $\varphi_q(\bx)$ be the pdf of $\mathcal{N}(\mu_q, \Sigma_q)$. Then the pdf of the midpoint measure is
$$
\varphi_m(\bx) = \frac{1}{2} \varphi_p(\bx) + \frac{1}{2} \varphi_q(\bx) \> .
$$
The Jensen-Shannon divergence is
$$
\mathrm{JSD} = \frac{1}{2} (\KL(P\,\|M)+ \KL(Q\|M)) = h(M) - \frac{1}{2} (h(P) + h(Q)) \>,
$$
where $h(P)$ denotes the (differential) entropy corresponding to the measure $P$.
Thus, your calculation reduces to calculating differential entropies. For the multivariate normal $\mathcal{N}(\mu, \Sigma)$, the answer is well-known to be
$$
\frac{1}{2} \log_2\big((2\pi e)^n |\Sigma|\big)
$$
and the proof can be found in any number of sources, e.g., Cover and Thomas (1991), pp. 230-231. It is worth pointing out that the entropy of a multivariate normal is invariant with respect to the mean, as the expression above shows. However, this almost assuredly does not carry over to the case of a mixture of normals. (Think about picking one broad normal centered at zero and another concentrated normal where the latter is pushed out far away from the origin.)
For the midpoint measure, things appear to be more complicated. That I know of, there is no closed-form expression for the differential entropy $h(M)$. [Searching on Google](http://www.google.com/search?q=entropy+of+Gaussian+mixtures) yields a couple potential hits, but the top ones don't appear to give closed forms in the general case. You may be stuck with approximating this quantity in some way.
Note also that the paper you reference does not restrict the treatment to only discrete distributions. They treat a case general enough that your problem falls within their framework. See the middle of column two on page 1859. Here is where it is also shown that the divergence is bounded. This holds for the case of two general measures and is not restricted to the case of two discrete distributions.
The Jensen-Shannon Divergence has come up a couple of times recently in other questions on this site. See [here](https://stats.stackexchange.com/questions/7630/clustering-should-i-use-the-jensen-shannon-divergence-or-its-square) and [here](https://stats.stackexchange.com/questions/6907/an-adaptation-of-the-kullback-leibler-distance).
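As a quick numerical illustration of the boundedness (univariate case, for simplicity), the divergence can be approximated by Monte Carlo using the KL form directly; with base-2 logarithms each sampled term $\log_2(p/m)$ is at most 1 because $m \geq p/2$, so the estimate can approach but never exceed one bit. A pure-Python sketch (the sample size `n` is arbitrary):

```python
import math
import random

random.seed(1)

def npdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def jsd_mc(mu_p, sd_p, mu_q, sd_q, n=50_000):
    """Monte Carlo estimate (in bits) of the Jensen-Shannon divergence
    between two univariate normals, with M the equal-weight mixture."""
    def m(x):
        return 0.5 * npdf(x, mu_p, sd_p) + 0.5 * npdf(x, mu_q, sd_q)
    kl_pm = sum(math.log2(npdf(x, mu_p, sd_p) / m(x))
                for x in (random.gauss(mu_p, sd_p) for _ in range(n))) / n
    kl_qm = sum(math.log2(npdf(x, mu_q, sd_q) / m(x))
                for x in (random.gauss(mu_q, sd_q) for _ in range(n))) / n
    return 0.5 * (kl_pm + kl_qm)

# Even for widely separated normals the estimate approaches, but never exceeds, 1 bit
print(jsd_mc(0, 1, 50, 1))   # close to 1
print(jsd_mc(0, 1, 1, 2))    # well below 1
```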
---
Addendum: Note that a mixture of normals is not the same as a linear combination of normals. The simplest way to see this is to consider the one-dimensional case. Let $X_1 \sim \mathcal{N}(-\mu, 1)$ and $X_2 \sim \mathcal{N}(\mu, 1)$ and let them be independent of one another. Then a mixture of the two normals using weights $(\alpha, 1-\alpha)$ for $\alpha \in (0,1)$ has the distribution
$$
\varphi_m(x) = \alpha \cdot \frac{1}{\sqrt{2\pi}} e^{-\frac{(x+\mu)^2}{2}} + (1-\alpha) \cdot
\frac{1}{\sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2}} \> .
$$
The distribution of a linear combination of $X_1$ and $X_2$ using the same weights as before is, via the [stable](http://en.wikipedia.org/wiki/Stable_distribution) property of the normal distribution is
$$
\varphi_{\ell}(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-(1-2\alpha)\mu)^2}{2\sigma^2}} \>,
$$
where $\sigma^2 = \alpha^2 + (1-\alpha)^2$.
These two distributions are very different, though they have the same mean. This is not an accident and follows from linearity of expectation.
To understand the mixture distribution, imagine that you had to go to a statistical consultant so that she could produce values from this distribution for you. She holds one realization of $X_1$ in one palm and one realization of $X_2$ in the other palm (though you don't know which of the two palms each is in). Now, her assistant flips a biased coin with probability $\alpha$ out of sight of you and then comes and whispers the result into the statistician's ear. She opens one of her palms and shows you the realization, but doesn't tell you the outcome of the coin flip. This process produces the mixture distribution.
On the other hand, the linear combination can be understood in the same context. The statistical consultant merely takes both realizations, multiplies the first by $\alpha$ and the second by $(1-\alpha)$, adds the result up and shows it to you.
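The distinction is also easy to see in simulation; a small Python sketch (with $\mu = 3$ and $\alpha = 0.3$) comparing the sample moments against the formulas implied above:

```python
import random
import statistics

random.seed(2)
mu, alpha, n = 3.0, 0.3, 200_000

# Mixture: flip a biased coin, then reveal one of the two realizations
mixture = [random.gauss(-mu, 1) if random.random() < alpha else random.gauss(mu, 1)
           for _ in range(n)]

# Linear combination: weight both realizations and add them up
linear = [alpha * random.gauss(-mu, 1) + (1 - alpha) * random.gauss(mu, 1)
          for _ in range(n)]

# Same mean (1 - 2*alpha)*mu = 1.2, very different variances:
#   mixture: 1 + 4*alpha*(1-alpha)*mu**2   (here 8.56)
#   linear:  alpha**2 + (1-alpha)**2       (here 0.58)
print(statistics.mean(mixture), statistics.variance(mixture))
print(statistics.mean(linear), statistics.variance(linear))
```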
| null | CC BY-SA 2.5 | null | 2011-03-22T23:34:52.187 | 2011-03-26T14:30:10.350 | 2017-04-13T12:44:53.513 | -1 | 2970 | null |
8649 | 1 | 8650 | null | 10 | 1158 | I'm using a tutorial I found and plotting mean values along with the standard errors to show my data. But I'm having a problem discussing the results. My plot is as shown below: some of the standard errors (shown as a error bar) vary much and some of them are very close to zero.

| What is standard error used for? | CC BY-SA 3.0 | null | 2011-03-23T00:50:44.210 | 2014-03-12T11:51:58.750 | 2013-08-16T18:41:32.250 | 601 | 3270 | [
"data-visualization",
"standard-error"
] |
8650 | 2 | null | 8649 | 10 | null | Error bars in general are there to convince the plot reader that the differences she/he sees on the plot are statistically significant. As an approximation, you may imagine a small gaussian whose $\pm1\sigma$ range is shown as this error bar -- "visual integration" of the product of two such gaussians is more or less the chance that the two values are really equal.
In this particular case, one can see that neither the difference between the red and violet bars nor the difference between the gray and green bars is particularly significant.
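One common way to make that heuristic precise (assuming, as an extra simplification on my part, that the two estimates are independent and roughly normal): the difference of the two means is normal with standard error $\sqrt{s_1^2+s_2^2}$, which gives the usual two-sided p-value. A small Python sketch with made-up numbers:

```python
import math

def overlap_p(m1, s1, m2, s2):
    """Two-sided p-value for 'the two means are equal', treating each
    estimate as normal with the given standard error (independent)."""
    z = abs(m1 - m2) / math.sqrt(s1 ** 2 + s2 ** 2)
    return math.erfc(z / math.sqrt(2))  # equals 2 * (1 - Phi(z))

# Bars that nearly touch (made-up values): the difference is not convincing
print(overlap_p(10.0, 1.0, 12.0, 1.0))
# Much smaller standard errors: the same difference is clearly significant
print(overlap_p(10.0, 0.2, 12.0, 0.2))
```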
| null | CC BY-SA 2.5 | null | 2011-03-23T01:06:09.497 | 2011-03-23T01:06:09.497 | null | null | null | null |
8651 | 1 | null | null | 2 | 342 | Is there a way to find the Spearman correlation between two Weibull distributions? I need it as a parameter in a copula function for the joint Weibull distribution.
I learned that using the Pearson correlation, which I can easily obtain from the variances and cross variances of the given spectra, is not reliable with copulas especially when the data is not normally distributed.
| Spearman correlation of two Weibull distributions | CC BY-SA 3.0 | null | 2011-03-23T02:48:08.760 | 2018-07-22T17:27:49.700 | 2018-07-22T17:27:49.700 | 11887 | 3854 | [
"correlation",
"spearman-rho",
"copula",
"weibull-distribution"
] |
8652 | 2 | null | 8649 | 6 | null | As mbq says, error bars are a way of letting your readers get a feel for whether the differences between two groups are significant - i.e. whether the variation within each of your groups is small enough to believe that the difference you've found between the group means is real.
All else being equal, larger error bars mean more within-group difference, but it looks like the y-axis of your plot is log-transformed, so the lower groups aren't quite on the same scale as the higher ones.
You should be aware, [many of your readers won't understand what error bars represent, even if you explicitly explain it!](http://www.ncbi.nlm.nih.gov/pubmed/16392994) Often you can achieve the same goal with a [jittered dot-plot](http://had.co.nz/ggplot2/geom_jitter.html) or a boxplot (or both together) to achieve the same effect.
| null | CC BY-SA 2.5 | null | 2011-03-23T04:01:23.240 | 2011-03-23T04:01:23.240 | null | null | 3732 | null |
8653 | 1 | null | null | 4 | 94 | I am interested in estimating how many subjects should be included in a brain imaging study. Although the design is a fairly straight forward cross-sectional comparison, there are a number of tweakable image processing steps between the raw image and the processed image in which we carry out pixel-wise comparisons.
I've used the power.t.test function in R to estimate the number of subjects required to reach statistical significance, using hypothetical delta values and sd estimates from a control population. I've run these analyses with varying image processing settings, resulting in a large number of "number of subjects per group" vs "some image processing parameter" plots. So many that it looks kind of messy to provide hundreds of plots.
What I would like is an empirical equation that allows users to estimate the number of subjects per group as a function of effect size, alpha, and a few other image processing specific parameters. That way I only need to report the coefficients for the equation rather than supplying hundreds of graphs. Is this kind of thing possible?
So far if I model N ~ k1/effect.size^2 that looks OK for some values of alpha (for example) but doesn't work so well for others. A log-log plot doesn't model the relationship well either.
I can't seem to find an explicit formula that relates the number of subjects per group with the other factors in a power analysis. Does this exist?
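For reference, the usual normal-approximation formula behind two-sample power calculations (what, as far as I understand, `power.t.test` solves numerically, apart from the t-versus-normal correction) relates the per-group size to the standardized effect size $\Delta = \delta/\sigma$:

$$n \approx \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^2}{\Delta^2}$$

so the $N \propto 1/\Delta^2$ shape I fitted should hold, but the constant $k_1$ depends on both $\alpha$ and the target power, which may be why a single $k_1$ works for some values of $\alpha$ and not others.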
Thanks
| How to model the relationship between number of subjects per group, derived using standard power analysis methods, and study specific parameters | CC BY-SA 2.5 | null | 2011-03-23T05:45:42.340 | 2011-05-26T21:50:31.813 | null | null | 3855 | [
"modeling",
"model-selection",
"statistical-power",
"cross-section"
] |
8655 | 2 | null | 8604 | 25 | null | One big difference is that regression "controls for" those characteristics in a linear fashion. Matching by propensity scores eliminates the linearity assumption, but, as some observations may not be matched, you may not be able to say anything about certain groups.
For example, if you are studying a worker training program, you may have all the enrollees be men, but the control, non-participant population be composed of men and women. Using regression, you could regress, income, say, on a participation indicator variable and a male indicator. You would use all your data and could estimate the income of a female had she participated in the program.
If you were doing matching, you could only match men to men. As a result, you wouldn't be using any women in your analysis and your results wouldn't pertain to them.
Regression can extrapolate using the linearity assumption, but matching cannot. All the other assumptions are essentially the same between regression and matching. The benefit of matching over regression is that it is non-parametric (except you do have to assume that you have the right propensity score, if that is how you are doing your matching).
For more discussion, see [my page here](http://gibbons.bio/courses/ps236.html) for a course that was heavily focused on matching methods. See especially [Causal Effects Estimation Strategy Assumptions](http://gibbons.bio/courses/ps236/StrategyComparisons.pdf).
Also, be sure to check out the [Rosenbaum and Rubin (1983)](http://homes.stat.unipd.it/erich/teaching/eval/papers/rosenbaum_rubin83.pdf) article that outlines propensity score matching.
Lastly, matching has come a long way since 1983. Check out [Jas Sekhon's webpage](http://sekhon.berkeley.edu) to learn about his genetic matching algorithm.
| null | CC BY-SA 4.0 | null | 2011-03-23T06:18:03.407 | 2020-04-12T03:57:27.420 | 2020-04-12T03:57:27.420 | 116587 | 401 | null |
8656 | 2 | null | 8632 | 2 | null | Actually, if we drop "measurement" and narrow the question to "what's the difference between a method and a procedure?", a nice answer can be found [here](http://wiki.answers.com/Q/What_is_the_difference_between_a_method_and_a_procedure). From this answer we may conclude that a method is a wider (more abstract) concept, as opposed to a certain (more concrete) set of actions, which is the procedure. You may think of the procedure as being a subset of the method.
Measurement in this context is just a field where some particular examples live. For me it is easier to think of examples among estimation methods, so any help from the community is welcome.
For example, in econometrics you often need to obtain data through a set of Monte Carlo simulations (that is a method). When I solve a particular simulation problem I first describe the data generating process, set the parameters of the experiment and run the simulations, which are then generalized into some averaged results. All of this sequence is the procedure.
---
Most of the data collection methods are design based. So you have to work with one of the [sampling](http://en.wikipedia.org/wiki/Sampling_%28statistics%29#Sampling_methods) methods. Take the first one, simple random sampling (the method), and apply it to a specific population sampling problem; the way you apply the method (a certain list of actions in this case) will be the procedure for collecting your data.
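To make the method/procedure distinction concrete in code: simple random sampling is the method, while the exact, repeatable steps below (frame, seed, sample size, draw) are one possible procedure implementing it. A hypothetical Python sketch (the frame and sizes are invented):

```python
import random

# Method: simple random sampling without replacement.
# Procedure: the concrete, repeatable steps below.

population = [f"unit_{i:03d}" for i in range(500)]  # step 1: fix the sampling frame
random.seed(42)                                     # step 2: fix the randomization
n = 25                                              # step 3: fix the sample size
sample = random.sample(population, n)               # step 4: draw the sample

print(len(sample), len(set(sample)))  # 25 distinct units
```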
| null | CC BY-SA 3.0 | null | 2011-03-23T07:35:19.457 | 2011-06-30T07:27:45.080 | 2011-06-30T07:27:45.080 | 2116 | 2645 | null |
8657 | 1 | 8679 | null | 0 | 81 | What is the % of the population with voting rights (above 18 years old, for example)? Can you point me to some papers that talk about voting and ages?
| vote population | CC BY-SA 2.5 | null | 2011-03-23T08:37:03.867 | 2011-03-23T18:45:20.533 | null | null | 3856 | [
"population"
] |
8658 | 2 | null | 8632 | 1 | null | If you are familiar with programming, you could perhaps think of it as a method (not to be confused with a logical part of some code) being a brief description of an algorithm in pseudo-code, whereas a procedure is a specific implementation with exact syntax.
Admitted that this is not a perfect metaphor but I think it portrays the difference.
| null | CC BY-SA 2.5 | null | 2011-03-23T08:41:36.297 | 2011-03-23T08:41:36.297 | null | null | 3014 | null |
8659 | 2 | null | 8642 | 6 | null | This answer may be helpful, and/or it may be annoying. You're welcome and my apologies at the same time :)
One thing to remember when using a normal distribution, is that it has a set of sufficient statistics, namely the mean and variance. What this indicates is that only the mean and variance matter in the inference. Any property of your sample besides the mean and variance will be thrown away when you use a normal distribution.
The statement that the "population is not normally distributed" is a bit of a misnomer - the population is not "distributed" at all - there is one and only one population (imaginary data sets and alternate worlds aside). It sounds like what you are actually saying is that your knowledge of the population consists of something other than the average and variance.
So presumably, the only thing to do is to state what this extra/different knowledge is. Perhaps you know the skewness (or you know the skewness is important/relevant for the analysis, and not "noise").
I would suggest that you simply calculate the probability that your hypothesis is true, conditional on the information you have. This would include the data, and whatever "structure" you claim to know about the population that makes it non-normal (something other than the mean and variance of the population). So call your one sided test $T$, then you simply calculate:
$$P(T|D,I)=\frac{P(T|I)P(D|T,I)}{P(D|I)}$$
$P(T|I)$ is the prior probability for the test being "true" or "successful" (what did you know about the test prior to seeing the data?). $P(D|T,I)$ is the "model" or "likelihood" and is similar to a p-value (how likely is the data you observed, given the test is true?). And $P(D|I)$ is often called the "evidence" (how well do any of the hypotheses predict the observed data?) - this quantity does not need to be explicitly assigned, as it can be derived from the requirement that the probability must add to 1.
The good thing about this method is that probability theory will "construct the optimal test for you". You just need to describe your prior information, and then simply do the mathematics. Now you may find that a bootstrap may be necessary in order to evaluate some mathematical formula - you may find that you should do the Wilcoxon test - or probability theory will construct a test which is better than either of them (in terms of the type 1 and type 2 errors you speak of).
| null | CC BY-SA 2.5 | null | 2011-03-23T09:18:53.263 | 2011-03-23T09:18:53.263 | null | null | 2392 | null |
8660 | 2 | null | 8502 | 4 | null | This is almost tailor-made for a Bayesian regression. First of all, there is nothing "fundamentally wrong" with what you suggest. Your result may not be optimal by some mathematical standard, but it will almost certainly be optimal time-wise. Most other methods will involve much more time than a straight multiplication and division.
I would use a normal likelihood $(y_i|\beta,\sigma,x_i,I)\sim N(x_i^T\beta,\sigma^2)$ and the Jeffreys prior $p(\beta,\sigma|x_i,I) \propto \frac{1}{\sigma}$. This gives a posterior for $\beta$ that is a multivariate t-distribution, with scale matrix $s^2(X^TX)^{-1}$, mean vector $\beta_{ols}$, and the standard $n-p$ degrees of freedom. Now you simply use this posterior based on the "A" data set as the prior for the "B" data set. Because you have a "t" prior and a normal likelihood, the posterior for $\beta$ will favour the normal likelihood, since the t has fatter tails - hence less "pulling power". This regression will balance the A and B regressions according to how accurately A was estimated and how well the B estimate fits the data.
An "add-hoc" way that you could add more weight to "B" is by setting the degrees of freedom to 1 in the "A" posterior. But then you may as well save some time and do the multiply the B estimate by two.
I don't think there is a simple analytic expression for this posterior, so you will likely need to simulate. But you only require the estimate from the "A" data set, the covariance matrix from the "A" data set, and the number of observations in the "A" data set. Once you have these quantities, you don't require the original data set.
| null | CC BY-SA 2.5 | null | 2011-03-23T09:47:40.163 | 2011-03-23T09:47:40.163 | null | null | 2392 | null |
8661 | 1 | 8667 | null | 56 | 195140 | I'm trying to undertake a logistic regression analysis in `R`. I have attended courses covering this material using STATA. I am finding it very difficult to replicate functionality in `R`. Is it mature in this area? There seems to be little documentation or guidance available. Producing odds ratio output seems to require installing `epicalc` and/or `epitools` and/or others, none of which I can get to work; they are outdated or lack documentation. I've used `glm` to do the logistic regression. Any suggestions would be welcome.
I'd better make this a real question. How do I run a logistic regression and produce odds ratios in `R`?
Here's what I've done for a univariate analysis:
`x = glm(Outcome ~ Age, family=binomial(link="logit"))`
And for multivariate:
`y = glm(Outcome ~ Age + B + C, family=binomial(link="logit"))`
I've then looked at `x`, `y`, `summary(x)` and `summary(y)`.
Is `x$coefficients` of any value?
| Logistic Regression in R (Odds Ratio) | CC BY-SA 2.5 | null | 2011-03-23T09:59:21.777 | 2022-10-14T12:56:01.700 | 2011-03-23T10:18:41.703 | 2824 | 2824 | [
"r",
"logistic",
"odds-ratio"
] |
8662 | 1 | 8674 | null | 14 | 3929 | I have a sample of a certain signal's registered amplitude maxima, about 15 million values. I produced a histogram of the sample, but cannot guess the distribution from it.
EDIT1: File with raw sample values is here: [raw data](http://hotfile.com/dl/111583549/5c73384/TDETQ_Z2M_ALL_TIM50_N.txt.zip.html)
Can anyone help estimate the distribution with the following histogram:

| Need help identifying a distribution by its histogram | CC BY-SA 2.5 | null | 2011-03-23T10:20:57.830 | 2023-03-13T15:24:53.887 | 2011-03-24T04:54:13.743 | 2820 | 2820 | [
"distributions",
"histogram"
] |
8663 | 1 | null | null | 2 | 1274 | I used `summary.formula` from `Hmisc` with continuous `Age` and binary outcome `O` with `test=TRUE`. This returned a p-value for `Age` predicting `O` (if I understand this correctly).
I then ran a `glm` using `Age` and `O` (univariate logistic regression), which returned a different p-value. I thought that the p-value should be the same?
Edit:
`library(Hmisc)`
`summary.formula(Outcome ~ cut2(Age, seq(15,75,10)), method="reverse", test=TRUE)`
p-value = 0.8
`x = glm(Outcome ~ Age, family=binomial(link="logit"))`
p-value = 0.4
(I'm getting quite confused: the example above which gives the 0.8 p-value is binning the ages, but I was told that the p-value it returns is for all the ages. However, if I run the summary without the binning, the p-value is 0.5.)
| Chi-squared versus logistic regression | CC BY-SA 2.5 | null | 2011-03-23T10:21:41.670 | 2011-03-24T16:12:05.140 | 2011-03-24T16:12:05.140 | null | 2824 | [
"logistic",
"chi-squared-test",
"p-value"
] |
8664 | 1 | null | null | 2 | 2442 | I want to carry out a power analysis on a one group repeated measures experiment using G*power.
I have a group of subjects who tested a set of products. Each subject tested each product once. Hence, I have one observation per cell.
To test the product effect, I used a two-way ANOVA model with subjects as a random effect, using proc glm in SAS.
I would like to calculate the sample size for a given study having the same design. Given an estimate of effect size, alpha, beta, and an estimate of "a correlation among the repeated measures", could I use the ratio variance of subjects / (variance of subjects + error variance), derived from the two-way ANOVA model, as an estimate of this correlation?
| How to estimate correlation among repeated measures? | CC BY-SA 2.5 | null | 2011-03-23T10:35:06.573 | 2011-05-26T20:50:30.690 | 2011-03-23T13:49:22.730 | 2116 | 3858 | [
"correlation",
"repeated-measures"
] |
8665 | 2 | null | 8649 | 2 | null | Plenty of researchers have trouble interpreting these graphs. See [http://scienceblogs.com/cognitivedaily/2008/07/31/most-researchers-dont-understa-1/](http://scienceblogs.com/cognitivedaily/2008/07/31/most-researchers-dont-understa-1/) for a more detailed elaboration.
| null | CC BY-SA 3.0 | null | 2011-03-23T10:54:10.860 | 2013-08-16T18:13:06.457 | 2013-08-16T18:13:06.457 | -1 | 1048 | null |
8666 | 2 | null | 8661 | 46 | null | You are right that R's output usually contains only essential information, and more needs to be calculated separately.
```
N <- 100 # generate some data
X1 <- rnorm(N, 175, 7)
X2 <- rnorm(N, 30, 8)
X3 <- abs(rnorm(N, 60, 30))
Y <- 0.5*X1 - 0.3*X2 - 0.4*X3 + 10 + rnorm(N, 0, 12)
# dichotomize Y and do logistic regression
Yfac <- cut(Y, breaks=c(-Inf, median(Y), Inf), labels=c("lo", "hi"))
glmFit <- glm(Yfac ~ X1 + X2 + X3, family=binomial(link="logit"))
```
`coefficients()` gives you the estimated regression parameters $b_{j}$. It's easier to interpret $exp(b_{j})$ though (except for the intercept).
```
> exp(coefficients(glmFit))
(Intercept) X1 X2 X3
5.811655e-06 1.098665e+00 9.511785e-01 9.528930e-01
```
To get the odds ratio, we need the classification cross-table of the original dichotomous DV and the predicted classification according to some probability threshold that needs to be chosen first. You can also see function `ClassLog()` in package `QuantPsyc` (as chl mentioned in [a related question](https://stats.stackexchange.com/questions/4832/logistic-regression-classification-tables-a-la-spss-in-r/4835#4835)).
```
# predicted probabilities or: predict(glmFit, type="response")
> Yhat <- fitted(glmFit)
> thresh <- 0.5 # threshold for dichotomizing according to predicted probability
> YhatFac <- cut(Yhat, breaks=c(-Inf, thresh, Inf), labels=c("lo", "hi"))
> cTab <- table(Yfac, YhatFac) # contingency table
> addmargins(cTab) # marginal sums
YhatFac
Yfac lo hi Sum
lo 41 9 50
hi 14 36 50
Sum 55 45 100
> sum(diag(cTab)) / sum(cTab) # percentage correct for training data
[1] 0.77
```
For the odds ratio, you can either use package `vcd` or do the calculation manually.
```
> library(vcd) # for oddsratio()
> (OR <- oddsratio(cTab, log=FALSE)) # odds ratio
[1] 11.71429
> (cTab[1, 1] / cTab[1, 2]) / (cTab[2, 1] / cTab[2, 2])
[1] 11.71429
> summary(glmFit) # test for regression parameters ...
# test for the full model against the 0-model
> glm0 <- glm(Yfac ~ 1, family=binomial(link="logit"))
> anova(glm0, glmFit, test="Chisq")
Analysis of Deviance Table
Model 1: Yfac ~ 1
Model 2: Yfac ~ X1 + X2 + X3
Resid. Df Resid. Dev Df Deviance P(>|Chi|)
1 99 138.63
2 96 110.58 3 28.045 3.554e-06 ***
```
| null | CC BY-SA 2.5 | null | 2011-03-23T11:27:58.007 | 2011-03-23T11:27:58.007 | 2017-04-13T12:44:56.303 | -1 | 1909 | null |
8667 | 2 | null | 8661 | 47 | null | if you want to interpret the estimated effects as relative odds ratios, just do `exp(coef(x))` (gives you $e^\beta$, the multiplicative change in the odds of $y=1$ if the covariate associated with $\beta$ increases by 1). For profile likelihood intervals for this quantity, you can do
```
require(MASS)
exp(cbind(coef(x), confint(x)))
```
EDIT: @caracal was quicker...
| null | CC BY-SA 2.5 | null | 2011-03-23T11:28:45.930 | 2011-03-23T11:28:45.930 | null | null | 1979 | null |
8668 | 2 | null | 8662 | 1 | null | I am not sure why you would want to assign a specific distribution to a sample with such a large sample size; parsimony, comparing it to another sample, looking for a physical interpretation of the parameters?
Most statistical packages (R, SAS, Minitab) allow one to plot data on a graph that yields a straight line if the data come from a particular distribution. I have seen graphs that yield a straight line if the data are normal (or log-normal, after a log transformation); Weibull and chi-squared come to mind immediately. This technique will allow you to see outliers and give you the possibility to assign reasons for why data points are outliers.
In R, the normal probability plot is produced by `qqnorm`.
| null | CC BY-SA 2.5 | null | 2011-03-23T11:43:00.033 | 2011-03-23T11:43:00.033 | null | null | 3805 | null |
8669 | 1 | 9376 | null | 2 | 202 | I currently have two sets of input variables say, $X$ and $Y$ with one output variable $Z$. That is:
$$Z = a_0 + a_1X_1 + a_2X_2... + a_{11}X_{11} = b_0 + b_1Y_1 + b_2Y_2 + b_3Y_3 + b_4Y_4$$
I have the independent $X$ and $Y$ values but don't have the dependent variable $Z$ values.
Is there anyway that I can estimate coefficients $a$ and $b$ and also the value of R squared?
| Two sets of input variables for the same unknown dependent variable | CC BY-SA 2.5 | null | 2011-03-23T12:02:12.467 | 2011-04-09T18:43:50.403 | 2011-03-23T13:47:27.210 | 2116 | 3859 | [
"regression"
] |
8670 | 2 | null | 8669 | 0 | null | This sounds to me like problem where a canonical correlation study might help. In canonical correlation, we are given a random vector W that is partitioned into two sub-random vectors X and Y; and the issue is to find linear combinations of the two subvectors that have maximal correlation and are orthogonal to one another.
The end result is the discovery of correlates that will point to relationships that have some physical meaning.
If you are comfortable with matrices, then Google Wolfgang Hardle and Leopold Simar's online book and read Chapter 14 on Canonical Correlation. If you use R, then read Chapter 10 of Paul Hewson's online book "Multivariate Statistics with R".
| null | CC BY-SA 2.5 | null | 2011-03-23T12:21:50.093 | 2011-03-23T12:21:50.093 | null | null | 3805 | null |
8671 | 2 | null | 8625 | 9 | null | If you assume
$$\frac{1}{N}\sum_{t=1}^Nx_t^{i}=\mu_i$$
Then
$$\sum_{i=1}^C\sum_{t=1}^N(x_t^i-\mu_i)(\mu_i-\mu)^T=\sum_{i=1}^C\left(\sum_{t=1}^N(x_t^i-\mu_i)\right)(\mu_i-\mu)^T=0$$
and the formula holds. The second term is dealt with in a similar way.
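A quick numeric check of the resulting identity in the scalar case (two classes of $N$ samples each; the data values are invented), showing that the total scatter splits exactly into within-class plus between-class scatter once the cross term vanishes:

```python
# Verify: sum over classes and samples of (x - mu)^2 equals
# within-class scatter + between-class scatter.
classes = [[1.0, 2.0, 3.0], [6.0, 7.0, 11.0]]   # C = 2 classes, N = 3 each
all_x = [x for cls in classes for x in cls]
mu = sum(all_x) / len(all_x)                     # grand mean
mus = [sum(cls) / len(cls) for cls in classes]   # class means mu_i

total = sum((x - mu) ** 2 for x in all_x)
within = sum((x - m) ** 2 for cls, m in zip(classes, mus) for x in cls)
between = sum(len(cls) * (m - mu) ** 2 for cls, m in zip(classes, mus))
# total == within + between, since the cross term is zero
```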
| null | CC BY-SA 2.5 | null | 2011-03-23T14:12:28.437 | 2011-03-23T14:12:28.437 | null | null | 2116 | null |
8672 | 2 | null | 8661 | 23 | null | The UCLA stats page has [a nice walk-through](https://stats.oarc.ucla.edu/r/dae/logit-regression/) of performing logistic regression in R. It includes a brief section on calculating odds ratios.
| null | CC BY-SA 4.0 | null | 2011-03-23T14:39:55.110 | 2022-10-14T12:56:01.700 | 2022-10-14T12:56:01.700 | 370174 | 124 | null |
8673 | 2 | null | 5346 | 3 | null | I found [this article by Algina & Olejnik (1984)](http://epm.sagepub.com/content/44/1/39.short).
The abstract:
>
The Welch-James procedure may be used to test hypotheses on means, when independent samples from populations with heterogeneous variances are available. Until recently the complexity of the available presentations of this procedure limited the application of this procedure. To resolve this state of affairs, summation formulas for the Welch-James procedure are presented for the 2 x 2 design. In addition, matrix formulas that permit routine application of the procedure to crossed factorial designs are presented.
It frankly looks a little hairy, but I thought it might be a start.
Citation
Algina, J., & Olejnik, S. F. (1984). Implementing the Welch-James procedure with factorial designs. Educational and psychological measurement, 44(1), 39-48.
| null | CC BY-SA 3.0 | null | 2011-03-23T14:55:49.300 | 2016-09-05T14:04:50.877 | 2016-09-05T14:04:50.877 | 100369 | 3861 | null |
8674 | 2 | null | 8662 | 23 | null | Use `fitdistrplus`:
Here's the [CRAN link](http://cran.r-project.org/web/packages/fitdistrplus/index.html) to `fitdistrplus`.
Here's the [old vignette link](https://r-forge.r-project.org/scm/viewvc.php/*checkout*/www/fitdistrplusE.pdf?revision=19&root=riskassessment&pathrev=21) for `fitdistrplus`.
If the vignette link doesn't work, do a search for "Use of the library `fitdistrplus` to specify a distribution from data".
The vignette does a good job of explaining how to use the package. You can look at how various distributions fit in a short period of time. It also produces a Cullen/Frey Diagram.
```
#Example from the vignette
library(fitdistrplus)
x1 <- c(6.4, 13.3, 4.1, 1.3, 14.1, 10.6, 9.9, 9.6, 15.3, 22.1,
13.4, 13.2, 8.4, 6.3, 8.9, 5.2, 10.9, 14.4)
plotdist(x1)
descdist(x1)
f1g <- fitdist(x1, "gamma")
plot(f1g)
summary(f1g)
```


| null | CC BY-SA 4.0 | null | 2011-03-23T15:04:06.113 | 2023-03-13T15:24:53.887 | 2023-03-13T15:24:53.887 | 11887 | 2775 | null |
8675 | 2 | null | 8642 | 1 | null | The inferences generated by Wilcoxon vs bootstrapping cannot be compared as they pertain to different data. Wilcoxon is a rank test, thus generates inferences that pertain to ranks. Bootstrapping applies to the raw data, and thus generates inferences that pertain to the raw data. If you dislike bootstrapping but want inferences that pertain to the raw data, then you may want to try a permutation test (sometimes referred to as a randomization test).
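For what it's worth, here is a minimal sketch of such a permutation test on the difference in group means (the data, seed, and permutation count are placeholders):

```python
import random

def permutation_test(x, y, n_perm=10000, seed=1):
    """Two-sided permutation test for a difference in group means."""
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel observations at random
        a, b = pooled[:len(x)], pooled[len(x):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return hits / n_perm
```

For well-separated groups the p-value is small; for identical groups it is 1, since every relabeling is at least as extreme as the observed split.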
| null | CC BY-SA 2.5 | null | 2011-03-23T15:33:25.353 | 2011-03-23T15:33:25.353 | null | null | 364 | null |
8676 | 2 | null | 8630 | 5 | null | Let the total variance, $T$, in a data set of vectors be the sum of squared errors (SSE) between the vectors in the data set and the mean vector of the data set,
$$T = \sum_{i} (x_i-\bar{x}) \cdot (x_i-\bar{x})$$
where $\bar{x}$ is the mean vector of the data set, $x_i$ is the ith vector in the data set, and $\cdot$ is the [dot product](http://en.wikipedia.org/wiki/Dot_product) of two vectors. Said another way, the total variance is the SSE between each $x_i$ and its predicted value, $f(x_i)$, when we set $f(x_i)=\bar{x}$.
Now let the predictor of $x_i$, $f(x_i)$, be the projection of vector $x_i$ onto a unit vector $c$.
$$ f_c(x_i) = (c \cdot x_i)c$$
Then the $SSE$ for a given $c$ is $$SSE_c = \sum_i (x_i - f_c(x_i)) \cdot (x_i - f_c(x_i))$$
I think that if you choose $c$ to minimize $SSE_c$, then $c$ is the first principal component.
If instead you choose $c$ to be the normalized version of the vector $(1, 2, 5, ...)$, then $T-SSE_c$ is the variance in the data described by using $c$ as a predictor.
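A small numerical sketch of this idea (the five data points are invented; they lie exactly on the line $y = x/2$, so the unit vector along that line should leave zero residual SSE, i.e. explain all of $T$):

```python
import math

data = [(2.0, 1.0), (0.0, 0.0), (-2.0, -1.0), (1.0, 0.5), (-1.0, -0.5)]
n = len(data)
mean = (sum(p[0] for p in data) / n, sum(p[1] for p in data) / n)
centered = [(x - mean[0], y - mean[1]) for x, y in data]

# Total variance T: SSE between each point and the mean vector
T = sum(x * x + y * y for x, y in centered)

def sse(c):
    """SSE when predicting each point by its projection onto unit vector c."""
    total = 0.0
    for x, y in centered:
        t = x * c[0] + y * c[1]              # scalar projection c . x_i
        rx, ry = x - t * c[0], y - t * c[1]  # residual vector
        total += rx * rx + ry * ry
    return total

c = (2 / math.sqrt(5), 1 / math.sqrt(5))  # unit vector along y = x/2
# T - sse(c) is the variance explained by direction c; here it equals T.
```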
| null | CC BY-SA 3.0 | null | 2011-03-23T15:34:35.570 | 2015-01-28T09:23:33.247 | 2015-01-28T09:23:33.247 | 28666 | 3864 | null |
8677 | 1 | 8687 | null | 6 | 1037 | I just came by [a post talking about](http://www.investuotojas.eu/?p=464) networks for displaying correlations:

Is this a known method? Can someone shed some insights into it? (I'm wondering about how useful it might be, and when.)
| References for using networks to display correlations? | CC BY-SA 2.5 | null | 2011-03-23T15:47:58.887 | 2011-03-23T19:55:45.100 | 2011-03-23T19:55:45.100 | 26 | 253 | [
"data-visualization",
"correlation"
] |
8678 | 2 | null | 8677 | 4 | null | Surprisingly, as a [search of Google Images](http://www.google.com/images?q=multiple+correlation) indicates, such graphs do not appear to be in common use to study or explain multiple correlations. That's a pity, because I'm sure much of this theory can be reduced to simple operations on graphs.
Nevertheless, this graphical method to display correlations (or their mathematical equivalents, cosines of angles) has been in use a long time ([at least 75 years](http://en.wikipedia.org/wiki/Dynkin_diagram#History)) in the form of a [Coxeter-Dynkin diagram ](http://en.wikipedia.org/wiki/Coxeter%E2%80%93Dynkin_diagram).
For instance, the A3 diagram 0--0--0 represents three variables X, Y, and Z where X and Z (the outer nodes) are uncorrelated and the correlations between X and Y and Z and Y are both -0.5. In the usual applications of these diagrams, certain special "correlations" (angles) are important, so a special method of labeling the edges with the correlations is used, but this functions no differently than using other forms of labeling such as colors.
When you use a "distance" metric monotonically related to correlation, then any 2D [MDS calculation](http://en.wikipedia.org/wiki/Multidimensional_scaling) can be (and usually is) thought of as embedding this graph in the plane so that Euclidean distances reflect the correlations. This illustrates the intimate connection between correlation-based clustering methods and graphs of correlation. As another example in this vein, a [dendrogram](http://www.mathworks.com/help/toolbox/stats/dendrogram.html), when it is derived from a correlation-based similarity matrix, is another network-based way of displaying correlations. (However, it uses vertical position in an essential way to display similarity, and so is not a purely network-based method.)
| null | CC BY-SA 2.5 | null | 2011-03-23T16:36:48.973 | 2011-03-23T16:36:48.973 | null | null | 919 | null |
8679 | 2 | null | 8657 | 3 | null | You can find minimum voting ages in [Wikipedia](http://en.wikipedia.org/wiki/Voting_age). Most large countries use 18, except for Brazil and Indonesia.
You can find country population by age in the [U.S Census Bureau International Data Base](http://www.census.gov/ipc/www/idb/). It does not seem to use 18 as a break point, but you should be able to divide the 15-19 age group safely. This suggests slightly less than 69% of the world population is 18+ in 2011.
Voter turnout by age is harder, though you may not be interested. I would expect there to be national figures in scattered sources but not a central collection. The [International Institute for Democracy and Electoral Assistance](http://www.idea.int/vt/by_age.cfm) tried but only managed to list three countries.
| null | CC BY-SA 2.5 | null | 2011-03-23T16:42:38.700 | 2011-03-23T18:45:20.533 | 2011-03-23T18:45:20.533 | 930 | 2958 | null |
8680 | 2 | null | 8502 | 1 | null | Maybe you should look into "stacking". Or even "feature-weighted stacking".
The former uses cross-validation to determine the weights for linearly stacking the models. The latter uses "meta-features" to give even more insight into how to weight the models depending on what is being predicted. This is a method that the #2 Netflix competition team developed. [http://arxiv.org/abs/0911.0460](http://arxiv.org/abs/0911.0460)
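A minimal sketch of plain linear stacking (not the feature-weighted variant from the paper): given two base models' predictions on a held-out set, solve for the blend weights by least squares. All the numbers below are invented.

```python
def stack_weights(p1, p2, y):
    """Solve min_w || w1*p1 + w2*p2 - y ||^2 via the 2x2 normal equations."""
    a11 = sum(a * a for a in p1)
    a12 = sum(a * b for a, b in zip(p1, p2))
    a22 = sum(b * b for b in p2)
    b1 = sum(a * t for a, t in zip(p1, y))
    b2 = sum(b * t for b, t in zip(p2, y))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Held-out predictions from two hypothetical base models, and the truth
p1 = [1.0, 2.0, 3.0]
p2 = [2.0, 1.0, 4.0]
y = [0.3 * a + 0.7 * b for a, b in zip(p1, p2)]
w = stack_weights(p1, p2, y)  # recovers the blend weights (0.3, 0.7)
```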
| null | CC BY-SA 2.5 | null | 2011-03-23T16:55:09.020 | 2011-03-23T16:55:09.020 | null | null | 3834 | null |
8681 | 2 | null | 6920 | 1 | null | The problem is more easily solved when you rewrite things a little bit:
Y = y
X = [x, 1 ]
then
Y = A*X
A one-time solution is found by calculating
V = X' * X
and
C = X' * Y
Note that V has size N-by-N and C has size N-by-M.
The parameters you're looking for are then given by:
A = inv(V) * C
Since both V and C are calculated by summing over your data, you can calculate
A at every new sample. This has a time complexity of O(N^3), however.
Since V is square and positive semi-definite, an LU decomposition exists, which makes inverting V numerically more stable.
There are algorithms to perform rank-1 updates to the inverse of a matrix. Find those and you'll have the efficient implementation you're looking for.
The rank-1 update algorithms can be found in "Matrix computations" by Golub and van Loan. It's tough material, but it does have a comprehensive overview of such algorithms.
Note:
The method above gives a least-squares estimate at each step. You can easily add weights to the updates of V and C. When the values of V and C grow too large, they can be scaled by a single scalar without affecting the result.
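Here is a scalar sketch of the scheme described above (plain Python; the simple re-solve at each step, not the rank-1 update itself): accumulate V = X'X and C = X'Y as sums over the samples, then solve for the parameters at any point. With rows X = [x, 1], V is 2-by-2 and can be inverted in closed form.

```python
class OnlineLinearFit:
    """Accumulate V = X'X and C = X'Y for rows X = [x, 1], one sample at a time."""
    def __init__(self):
        self.sxx = self.sx = self.sxy = self.sy = 0.0
        self.n = 0

    def update(self, x, y):
        self.sxx += x * x
        self.sx += x
        self.sxy += x * y
        self.sy += y
        self.n += 1

    def solve(self):
        # A = inv(V) * C with V = [[sxx, sx], [sx, n]] and C = [sxy, sy]
        det = self.sxx * self.n - self.sx * self.sx
        slope = (self.n * self.sxy - self.sx * self.sy) / det
        intercept = (self.sxx * self.sy - self.sx * self.sxy) / det
        return slope, intercept

fit = OnlineLinearFit()
for x in range(5):
    fit.update(float(x), 2.0 * x + 1.0)  # samples from y = 2x + 1
# fit.solve() can be called after every update; here it returns (2.0, 1.0)
```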
| null | CC BY-SA 2.5 | null | 2011-03-23T17:29:07.060 | 2011-03-23T17:29:07.060 | null | null | 3867 | null |
8682 | 2 | null | 8663 | 1 | null | First, you need to know what a p-value is. A p-value is the probability that you would observe results as extreme, or more extreme, than the ones you have, if the null hypothesis were in fact true.
The reason you aren't getting the same p-value in your two tests is that you aren't examining the same null hypotheses under the same assumptions. Pick up a stats textbook.
If a relationship between two variables is approximately linear, binning will reduce statistical power, which probably explains why you get a lower p-value without binning. Think of it this way, binning treats different values within a bin as identical. The information about the differences within a bin may be valuable, but this information is ignored by statistical tests.
| null | CC BY-SA 2.5 | null | 2011-03-23T17:41:08.740 | 2011-03-23T17:41:08.740 | null | null | 3748 | null |
8683 | 2 | null | 7897 | 0 | null | The lasso is indeed a good choice. Simple approaches, such as starting with no features and adding them one by one in order of 'usefulness' (assessed via cross-validation), also work quite well in practice.
This is usually called forward stepwise selection.
Note that the subset selection problem is fairly independent of the type of classification / regression. It's just that nonparametric methods can be slow and therefore require more intelligent methods of selection.
The book 'The Elements of Statistical Learning' by Hastie, Tibshirani, and Friedman gives a nice overview.
| null | CC BY-SA 2.5 | null | 2011-03-23T17:46:42.810 | 2011-03-23T17:46:42.810 | null | null | 3867 | null |
8684 | 2 | null | 8502 | 7 | null | You are retaining $p$ (=3 in this case) values for each regression: the estimated coefficients. If you are willing to retain $p(p+1)$ (=12) values per regression, you can weight your results in a way that is equivalent to having all the data and performing a weighted least squares regression with them en masse.
The analysis is simple: let $X_1$ be the design matrix (i.e., an $n_1$ by $p$ matrix of independent variable values) for the first year and $y_1$ be the $n_1$-vector of dependent values for that year. The estimated coefficients are
$$\hat{\beta}_1 = \left( X_1' X_1 \right)^{-1} X_1' y_1.$$
Let the subscript $2$ designate the same quantities for the second year. Suppose you would like to uniformly weight all observations with (positive) values $w_1^2$ and $w_2^2$ in those two years. The design matrix $X$ is the vertical concatenation of $X_1$ and $X_2$, an $n_1+n_2$ by $p$ matrix, and similarly the vector of dependent values $y$ is the vertical concatenation of $y_1$ and $y_2$. Let $W$ be the diagonal matrix with values $w_1$ along the first $n_1$ places and $w_2$ along the last $n_2$ places. The [weighted least squares](http://en.wikipedia.org/wiki/Least_squares#Weighted_least_squares) solution is
$$\hat{\beta} = \left( (W X)' (W X) \right)^{-1} (W X)' W y.$$
However, $(W X)' (W X) = X' W'W X$ is the sum of $X_1' W_1' W_1 X_1$ and $X_2' W_2' W_2 X_2$, where $W_1 = w_1 I_{n_1}$ and $W_2 = w_2 I_{n_2}$ are the diagonal blocks of $W$. Because both $W_1' W_1$ and $W_2' W_2$ are multiples of identity matrices, they factor through, giving
$$\hat{\beta} = \left( w_1^2 X_1' X_1 + w_2^2 X_2' X_2 \right)^{-1} \left(w_1^2 X_1' y_1 + w_2^2 X_2' y_2\right).$$
Notice that $X_1' X_1$ and $X_2' X_2$ are just $p$ by $p$ matrices and that $X_1' y_1$ and $X_2' y_2$ are just $p$-vectors. Therefore you can obtain $\hat{\beta}$ just from the two $p$ by $p$ matrices, the two $p$-vectors, and the two numbers $w_1$ and $w_2$.
This approach generalizes in an obvious way when more than two regressions are involved. It shows, incidentally, that the weighted combination $w_1^2 \hat{\beta_1} + w_2^2 \hat{\beta_2}$ as suggested in the question will not in general equal the weighted least-squares solution. Therefore, if you are using least squares for any of its optimality properties, you should not want to use this seductively simple solution, because it will be suboptimal.
In conclusion, if you store the 12 numbers $X_i' X_i$ and $X_i' y_i$ each year, then retrospectively (without needing the original data) you can fit any regression on all the data for any set of positive weights without any loss of information.
I would recommend saving some additional values such as the estimated error variances: these will help you detect changes in variability over time (heteroscedasticity).
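As a numerical sanity check of this result in the simplest case (one predictor plus an intercept, so $p = 2$; all data values invented), the coefficients obtained by combining the stored per-year statistics satisfy the weighted normal equations exactly:

```python
def suff_stats(xs, ys):
    """X'X (as (sxx, sx, n)) and X'y (as (sxy, sy)) for design rows [x, 1]."""
    return ((sum(x * x for x in xs), sum(xs), len(xs)),
            (sum(x * y for x, y in zip(xs, ys)), sum(ys)))

def solve2(m, v):
    (a, b, c), (p, q) = m, v      # M = [[a, b], [b, c]], V = [p, q]
    det = a * c - b * b
    return ((c * p - b * q) / det, (a * q - b * p) / det)

x1, y1 = [0.0, 1.0, 2.0], [1.1, 2.9, 5.2]   # "year A" data
x2, y2 = [0.0, 2.0, 4.0], [0.8, 5.1, 8.9]   # "year B" data
m1, v1 = suff_stats(x1, y1)                 # only these need to be stored
m2, v2 = suff_stats(x2, y2)

w1sq, w2sq = 1.0, 4.0                       # squared weights w_1^2, w_2^2
M = tuple(w1sq * a + w2sq * b for a, b in zip(m1, m2))
V = tuple(w1sq * a + w2sq * b for a, b in zip(v1, v2))
slope, intercept = solve2(M, V)             # the weighted least-squares fit
```

The fit can be verified without the original pooled regression: at the weighted least-squares solution the weighted residuals are orthogonal to each column of the design.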
| null | CC BY-SA 2.5 | null | 2011-03-23T17:56:30.697 | 2011-03-23T17:56:30.697 | null | null | 919 | null |