Id stringlengths 1 6 | PostTypeId stringclasses 7 values | AcceptedAnswerId stringlengths 1 6 ⌀ | ParentId stringlengths 1 6 ⌀ | Score stringlengths 1 4 | ViewCount stringlengths 1 7 ⌀ | Body stringlengths 0 38.7k | Title stringlengths 15 150 ⌀ | ContentLicense stringclasses 3 values | FavoriteCount stringclasses 3 values | CreationDate stringlengths 23 23 | LastActivityDate stringlengths 23 23 | LastEditDate stringlengths 23 23 ⌀ | LastEditorUserId stringlengths 1 6 ⌀ | OwnerUserId stringlengths 1 6 ⌀ | Tags list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
12475 | 2 | null | 11248 | 1 | null | What you're looking for, and what the other respondents have proposed, is called the posterior predictive distribution. It takes into account the inherent uncertainty of the parameter estimates.
You can either use the samples from the MCMC run, or you can approximate it from the mean and covariance of the posterior distribution of the parameters by use of the probit function. See pages 218-220 of Chris Bishop's book "Pattern Recognition and Machine Learning" for an overview of how this can be done.
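For concreteness, here is a minimal sketch (in Python, purely for illustration) of the probit-style approximation Bishop describes, compared against averaging the sigmoid over posterior draws, which is what you would do with MCMC samples. The posterior mean `m`, covariance `S`, and test point `x` below are invented numbers, not from any real fit.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def predictive_prob(x, m, S):
    """Posterior predictive P(y=1 | x) for a logistic model with a Gaussian
    (e.g. Laplace) posterior N(m, S) over the weights, via the probit-style
    approximation described in Bishop, PRML, Sec. 4.5.2."""
    mu_a = x @ m          # posterior mean of the activation a = w'x
    var_a = x @ S @ x     # posterior variance of the activation
    kappa = 1.0 / np.sqrt(1.0 + np.pi * var_a / 8.0)
    return sigmoid(kappa * mu_a)

# Made-up posterior mean/covariance and test point, purely for illustration
m = np.array([0.5, -1.0])
S = np.array([[0.20, 0.05],
              [0.05, 0.10]])
x = np.array([1.0, 2.0])

approx = predictive_prob(x, m, S)

# Compare against averaging the sigmoid over draws from the posterior
rng = np.random.default_rng(0)
draws = rng.multivariate_normal(m, S, size=200_000)
mc = sigmoid(draws @ x).mean()
print(approx, mc)   # the two should agree closely
```

The approximation replaces the intractable average of the sigmoid over a Gaussian with a single rescaled sigmoid evaluation, which is why it is attractive when you only have the posterior mean and covariance rather than samples.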
| null | CC BY-SA 3.0 | null | 2011-06-29T12:32:46.913 | 2011-06-29T12:32:46.913 | null | null | 495 | null |
12476 | 2 | null | 12461 | 5 | null | The "frequentist" response is to invent a null hypothesis of the form "not B" and then argue against "not B", as in Steffen's response. This is the logical equivalent of making the argument "You are wrong, therefore I must be right". This is the kind of reasoning politicians use (i.e. the other party is bad, therefore we are good). It is quite difficult to deal with more than one alternative under this sort of reasoning, because the "you are wrong, therefore I am right" argument only makes sense when it is not possible for both to be wrong, which can certainly happen when there is more than one alternative hypothesis.
The "Bayesian" response is to simply calculate the probability of the hypothesis that you are interested in testing, conditional on whatever evidence you have. This always contains prior information, which is simply the assumptions you have made to make your problem well posed (all statistical procedures rely on prior information; Bayesian ones just make it more explicit). It also usually involves some data, and by Bayes' theorem we have
$$P(H_{0}|DI)=\frac{P(H_{0}|I)P(D|H_{0}I)}{\sum_{k}P(H_{k}|I)P(D|H_{k}I)}$$
This form is independent of what is called the "null" and what is called the "alternative", because you have to calculate exactly the same quantities for every hypothesis that you are going to consider - the prior and the likelihood. This is, in a sense, analogous to calculating the "type 1" and "type 2" error rates in Neyman-Pearson hypothesis testing, simply because the "type 2" error rate when $H_0$ is the "null" is the same thing as the "type 1" error rate when $H_0$ is the "alternative". It is only the connotations implied by the words "null" and "alternative" which make them seem different. You can show equivalence in the case of the Neyman-Pearson lemma when there are two hypotheses, for this is simply the likelihood ratio, which is given at once by taking the odds form of Bayes' theorem above:
$$\frac{P(H_{0}|DI)}{P(H_{1}|DI)}=\frac{P(H_{0}|I)}{P(H_{1}|I)}\times\frac{P(D|H_{0}I)}{P(D|H_{1}I)}=\frac{P(H_{0}|I)}{P(H_{1}|I)}\times\Lambda$$
So the decision problems are the same: accept $H_0$ when $\Lambda > \tilde{\Lambda}$ for some cut-off $\tilde{\Lambda}$, and accept $H_1$ otherwise. Thus, the procedures are basically different rationales for choosing the cut-off value, or decision boundary. "Bayesians" would say it should be the product of the prior odds times the loss ratio $\frac{L_2}{L_1}$, where $L_1$ is the "type 1 error loss" and $L_2$ is the "type 2 error loss". These are losses, not probabilities, which describe the relative severity of making each of the two errors. The frequentist criterion is to minimise one of the average error rates, type 1 or 2, while keeping the other fixed. But because they lead to the same form of decision boundary, we can always find an equivalent Bayesian prior-times-loss ratio for every frequentist minimised error rate.
In short, if you are using the likelihood ratio to test your hypothesis, it does not matter what you call the null hypothesis. Switching the null to the alternative just changes the decision to $\Lambda^{-1}<\tilde{\Lambda}^{-1}$ which is mathematically the same thing (you will make the same decision - but based on inverse chi-square cut-off rather than chi-square for your p-value). Playing word games with "failing to reject the null" just doesn't apply to the hypothesis test, because it is a decision, so if there are only two options, then "failing to reject the null" means the same thing as "accepting the null".
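A toy numerical illustration of that symmetry (the prior probabilities and likelihoods below are invented, not from any real problem): relabelling which hypothesis is "null" inverts the odds and the threshold, but produces exactly the same decision.

```python
# Invented prior probabilities and likelihoods, purely for illustration
prior = {"H0": 0.5, "H1": 0.5}
lik = {"H0": 0.02, "H1": 0.10}   # P(D | H) under each hypothesis

Lambda = lik["H0"] / lik["H1"]                     # likelihood ratio, H0 vs H1
post_odds = (prior["H0"] / prior["H1"]) * Lambda   # posterior odds of H0

threshold = 1.0                       # accept H0 iff posterior odds exceed this
accept_H0 = post_odds > threshold

# Relabel: call H1 the "null" instead.  Everything inverts, and the
# decision about H0 comes out the same.
post_odds_swapped = (prior["H1"] / prior["H0"]) / Lambda
accept_H0_swapped = post_odds_swapped < 1.0 / threshold

print(accept_H0, accept_H0_swapped)   # identical decisions
```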
| null | CC BY-SA 3.0 | null | 2011-06-29T14:47:35.053 | 2011-06-29T14:47:35.053 | null | null | 2392 | null |
12478 | 2 | null | 12461 | 1 | null | The null hypothesis should generally assume that differences in a response variable are due to error alone.
For example if you want to test the effect of some factor `A` on response `x`, then the null would be: $H_0$ = There is no effect of `A` on response `x`.
Failing to reject this null hypothesis would be interpreted as:
1) any differences in `x` are due to error alone and not `A` or,
2) that the data are inadequate to detect a difference even though one exists (see Type 2 error below).
Rejecting this null hypothesis would be interpreted as accepting the alternative hypothesis, $H_a$ = There is an effect of `A` on response `x`.
Type 1 and Type 2 errors are related to the use of the null hypothesis, but not really to which hypothesis is designated the null. A Type 1 error occurs when you reject $H_0$ even though it is true - that is, you incorrectly conclude an effect of `A` on `x` when one didn't exist. A Type 2 error occurs when you fail to reject $H_0$ even though it is false - that is, you incorrectly conclude no effect of `A` on `x` even though one exists.
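A quick simulation can make the Type 1 error rate concrete. This Python sketch (using made-up normal data and the large-sample 1.96 cutoff as an approximation to the exact t critical value) simulates a world where $H_0$ is true, so every "significant" result is by definition a Type 1 error:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, reps, n = 0.05, 2000, 30

# H0 is true here: the "A" and "no A" groups have identical distributions,
# so any rejection of H0 is a false positive (Type 1 error).
false_rejections = 0
for _ in range(reps):
    x_a = rng.normal(0, 1, n)   # responses with factor A
    x_b = rng.normal(0, 1, n)   # responses without A (same distribution!)
    se = np.sqrt(x_a.var(ddof=1) / n + x_b.var(ddof=1) / n)
    t = (x_a.mean() - x_b.mean()) / se
    false_rejections += abs(t) > 1.96   # normal approximation to the t cutoff

rate = false_rejections / reps
print(rate)   # should be close to alpha = 0.05
```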
| null | CC BY-SA 3.0 | null | 2011-06-29T15:22:49.130 | 2011-06-29T16:45:48.387 | 2011-06-29T16:45:48.387 | 4048 | 4048 | null |
12479 | 2 | null | 12453 | 2 | null | Create a table that crosstabulates Perceived Attraction (“yes/no”) x Color Choice x Gender x Trial. This should give you a good sense of whether there are main effects or interactions of gender and trial number on color choice or perceived attraction, and you will want to include it in your write-up.
Suppose you are primarily interested in color choice as opposed to perceived attraction. A basic approach that takes into account the dependence within participants is to fit a binomial regression of color choice on gender, trial number, and their interaction, and include a participant-level random effect. You can do this in `R` and test the fixed effects using:
```
library(lme4)  # glmer() fits generalized linear mixed models

fm1 <- glmer(Color ~ Female * Trial + (1 | ID), data=dat, family=binomial)
fm2 <- glmer(Color ~ Female + Trial + (1 | ID), data=dat, family=binomial)
fm3 <- glmer(Color ~ Female + (1 | ID), data=dat, family=binomial)
fm4 <- glmer(Color ~ Trial + (1 | ID), data=dat, family=binomial)
anova(fm1, fm2) # Tests for interaction
anova(fm2, fm3) # Tests for main effect of trial
anova(fm2, fm4) # Tests for main effect of gender
```
where `dat` is your data matrix and all variables are coded as factors. Report the results as you would an ANOVA except that your test statistics will be $\chi^2$ instead of $F$. Test your random effect term for participants:
```
fm5 <- glmer(Color ~ Female * Trial + (1 | ID), data=dat, family=binomial)
fm6 <- glm(Color ~ Female * Trial, data=dat, family=binomial)
pchisq(as.numeric(2 * (logLik(fm5) - logLik(fm6))), df=1, lower.tail=FALSE) # A conservative test
```
Suppose your interaction and the random effect term are significant. Report and interpret the regression output `fm5`. For example, males might be 4 times more likely than females to choose red, and this effect might be strongest for trial 4 vs. 1, etc.
You might further hypothesize that the color preference effect only exists or is strongest for males who were successfully primed by the attraction dialogue. To test such a hypothesis, include perceived attraction and possibly perceived attraction in the previous trial as covariates and reanalyze.
Finally, it is possible that there are effects of your stimuli (dialogue, photos, color-to-photo assignment, orderings of the combination of the aforementioned variables), but I doubt you have enough data to analyze these, so I would just carefully report how you randomized them (I’m assuming you assigned orderings randomly to participants).
| null | CC BY-SA 3.0 | null | 2011-06-29T16:03:25.067 | 2011-06-29T16:03:25.067 | null | null | 3432 | null |
12480 | 2 | null | 12446 | 2 | null | There is the `drm` [package](http://cran.r-project.org/web/packages/drm/index.html) that implements "[l]ikelihood-based marginal regression and association modelling for repeated, or otherwise clustered, categorical responses using dependence ratio as a measure of the association," but I have not tried it.
| null | CC BY-SA 3.0 | null | 2011-06-29T16:12:41.150 | 2011-06-29T16:12:41.150 | null | null | 3432 | null |
12481 | 2 | null | 12446 | 4 | null | You should have a look at `MCMCglmm`, see for example this blog post:
[http://hlplab.wordpress.com/2009/05/07/multinomial-random-effects-models-in-r/](http://hlplab.wordpress.com/2009/05/07/multinomial-random-effects-models-in-r/)
| null | CC BY-SA 3.0 | null | 2011-06-29T16:28:29.093 | 2011-06-29T16:28:29.093 | null | null | 5020 | null |
12482 | 2 | null | 9557 | 1 | null | I have found a decent introduction in "STATISTICS: AN INTRODUCTION USING R" by Michael J. Crawley.
There is a site where you can download pdfs [http://www.bio.ic.ac.uk/research/crawley/statistics/exercises.htm](http://www.bio.ic.ac.uk/research/crawley/statistics/exercises.htm)
in particular [http://www.bio.ic.ac.uk/research/crawley/statistics/exercises/R3Statistics.pdf](http://www.bio.ic.ac.uk/research/crawley/statistics/exercises/R3Statistics.pdf) explains t test and Wilcoxon test.
But I am still looking for a better introduction...
| null | CC BY-SA 3.0 | null | 2011-06-29T17:27:04.713 | 2011-06-29T17:27:04.713 | null | null | 5221 | null |
12483 | 2 | null | 12465 | 2 | null | If you were still in school then you might have a couple points taken off for saying that you accept the null hypothesis. The purists (well frequentist purists) will always say that we never accept the null, just fail to reject it.
As a style thing, generally test functions will return an object with the test statistic, p-value, etc. and not print anything. Then a print method will be used to print the results nicely. But for learning or simple use what you have done is fine (I would go the other route if you are planning on building on this, or doing more of your own tests).
In your last cat statement you use relative_error_warning, but I don't see it defined anywhere, did you mean rel_error_warn? Not that that line is ever likely to be run.
Everything looks correct, both your calculations and the running of t.test, I would expect the only differences you ever see to be rounding error (or due to handling of missing values).
| null | CC BY-SA 3.0 | null | 2011-06-29T17:30:30.517 | 2011-06-29T17:30:30.517 | null | null | 4505 | null |
12484 | 1 | 12511 | null | 21 | 20092 | I have a point (x,y) that I need a linear regressor to pass through given a data set (X,Y). How do I implement this in R?
| Constrained linear regression through a specified point | CC BY-SA 3.0 | null | 2011-06-29T18:34:32.977 | 2021-02-09T19:35:44.470 | 2011-06-29T20:27:15.487 | 8 | 5226 | [
"r",
"regression"
] |
12485 | 1 | 12487 | null | 4 | 1139 | I have a dataset from which I can collect some summary quantities, e.g. sum, mean, variance...
I want to perform a simple regression on column(x,y). According to Wikipedia, the closed form for $\alpha,\beta$ is
\begin{equation}
\alpha=\frac{Cov(x,y)}{Var(x)}
\quad \mbox{and} \quad
\beta=\bar{y}-\alpha\bar{x}.
\end{equation}
However I would also like some more stats from the regression, for example r, r-square, significance...
Basically I want to get the same result that this [apache SimpleRegression](http://commons.apache.org/math/apidocs/org/apache/commons/math/stat/regression/SimpleRegression.html) class produces by only feeding collected quantities instead of raw data points. Any advice on how to do that?
| Perform simple regression without raw data | CC BY-SA 3.0 | null | 2011-06-29T18:40:01.200 | 2011-06-30T07:36:17.700 | 2011-06-29T20:16:31.580 | 8 | 5212 | [
"regression"
] |
12486 | 2 | null | 9557 | 1 | null | By far the best I think:
Design of Experiments, Statistical Principles of Research Design and Analysis, 2nd Ed, by
Robert O. Kuehl
| null | CC BY-SA 3.0 | null | 2011-06-29T19:35:06.853 | 2011-06-29T19:35:06.853 | null | null | 5195 | null |
12487 | 2 | null | 12485 | 1 | null | A standard regression textbook would give the formulas to convert between the different quantities ($r$ is just the covariance divided by the two standard deviations, r-squared is just $r$ squared, the F statistic for a model with a single x is $F = r^2(n-2)/(1-r^2)$, etc.)
Writing a function that will do all of this automatically is on my to do list, but does not exist yet.
A simple alternative is to simulate data from a bivariate (or multivariate) normal that matches your summaries, then just run the regression on the simulated data. The `mvrnorm` function in the `MASS` package for R will simulate such data; set `empirical=TRUE`.
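To make the conversions concrete, here is a sketch (in Python, with simulated raw data standing in only to generate plausible summary quantities) that reproduces the slope, intercept, $r$, $r^2$ and $F$ from sums alone, then checks the slope against a direct least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
x = rng.normal(10, 3, n)
y = 2.5 * x + rng.normal(0, 4, n)

# The "collected quantities": these six numbers are all you need,
# not the raw points, to reproduce SimpleRegression-style output.
sum_x, sum_y = x.sum(), y.sum()
sum_xx, sum_yy, sum_xy = (x * x).sum(), (y * y).sum(), (x * y).sum()

Sxx = sum_xx - sum_x**2 / n        # n * Var(x)
Syy = sum_yy - sum_y**2 / n        # n * Var(y)
Sxy = sum_xy - sum_x * sum_y / n   # n * Cov(x, y)

slope = Sxy / Sxx
intercept = sum_y / n - slope * sum_x / n
r = Sxy / np.sqrt(Sxx * Syy)
r_squared = r**2
F = r_squared * (n - 2) / (1 - r_squared)   # F test of the slope, df = (1, n-2)

# Check against an ordinary least-squares fit on the raw data
b, a = np.polyfit(x, y, 1)
print(slope, b)   # identical up to floating-point error
```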
| null | CC BY-SA 3.0 | null | 2011-06-29T19:36:19.457 | 2011-06-30T07:36:17.700 | 2011-06-30T07:36:17.700 | 2116 | 4505 | null |
12488 | 1 | 12489 | null | 1 | 144 | I ran an ad campaign on Facebook and I have impressions (ad-views), clicks, and sign-ups broken down by gender and age.
Certain demographics have a higher rate of clicks and sign-ups when compared to their respective percent of impressions. What is the best way to determine if these differences represent significant results instead of random noise?
I am trying to see how age and gender predict sign-up.
| Calculating the probability that demographic trends happened by chance | CC BY-SA 3.0 | null | 2011-06-29T19:56:26.133 | 2011-06-30T12:51:41.023 | 2011-06-29T21:12:58.690 | 1514 | 1514 | [
"probability",
"data-mining",
"categorical-data"
] |
12489 | 2 | null | 12488 | 3 | null | I'm not really clear what you are trying to predict, but let's suppose you want to investigate how sign-ups relate to gender and age.
Well, you either sign up or you don't, i.e. it's a binary variable. So you could try [logistic regression](http://en.wikipedia.org/wiki/Logistic_regression). That would allow you to predict the probability of signing up given the person's gender and age (and anything else you have).
| null | CC BY-SA 3.0 | null | 2011-06-29T20:26:26.263 | 2011-06-29T20:26:26.263 | null | null | 8 | null |
12490 | 1 | null | null | 4 | 2234 | I understand the maximum likelihood estimators for $\mu$ and $\sigma$ for the lognormal distribution when data are actual values. However I need to understand how these formulas are modified when data are already grouped or binned (and actual values are not available). Specifically, with actual values the MLE for $\mu$ is $\hat{\mu}=\frac{1}{n}\sum_{i} \log x_i$, and the MLE for $\sigma^2$ is $\hat{\sigma}^2=\frac{1}{n}\sum_{i} (\log x_i - \hat{\mu})^2$. Now suppose the data fall in bins $b_1$ to $b_2$ (the first bin), $b_2$ to $b_3$ (the second bin), and so on. What are the modified estimators of $\mu$ and $\sigma^2$? Thank you.
| Lognormal distribution using binned or grouped data | CC BY-SA 3.0 | null | 2011-06-29T22:00:11.343 | 2011-07-01T14:23:50.713 | 2011-06-29T22:56:40.273 | 919 | 5229 | [
"maximum-likelihood",
"lognormal-distribution"
] |
12491 | 2 | null | 12490 | 6 | null | Let $\Phi$ be the cumulative standard normal distribution function. The probability that a value $Y$ drawn from a lognormal distribution with log mean $\mu$ and log SD $\sigma$ lies in the interval $(b_i, b_{i+1}]$ therefore is
$$\Pr(b_i \lt Y \le b_{i+1}) = \Phi \left( \frac{\log(b_{i+1}) - \mu}{\sigma} \right) - \Phi \left( \frac{\log(b_{i}) - \mu}{\sigma} \right).$$
Call this value $f_i(\mu, \sigma)$.
When the data consist of independent draws $Y_1,Y_2, \ldots, Y_N$, with $Y_i$ falling in bin $j(i)$ and the bin cutpoints are established independently of the $Y_i$, the probabilities multiply, whence the log likelihood is the sum of the logs of these values:
$$\log(\Lambda(\mu, \sigma)) = \sum_{i=1}^{N} \log(f_{j(i)}(\mu, \sigma)).$$
It suffices to count the number of $Y_i$ falling within each bin $j$; let this count be $k(j)$. By collecting the $k(j)$ terms associated with bin $j$ for each bin, the sum condenses to
$$\log(\Lambda(\mu, \sigma)) = \sum_{j} k(j) \log(f_{j}(\mu, \sigma)).$$
The MLEs are the values $\hat{\mu}$ and $\hat{\sigma}$ that together maximize $\log(\Lambda(\mu, \sigma))$. There is no closed formula for them in general: numerical solutions are needed.
### Example
Consider data values known only to lie within the even intervals $[0,2]$, $[2,4]$, etc. I randomly generated 100 of them according to a Lognormal(0,1) distribution. In Mathematica this can be done via
```
With[{width = 2},
data = width {Floor[#/width], Floor[#/width] + 1} & /@
RandomReal[LogNormalDistribution[0, 1], 100]
];
```
Here are their tallies:
```
Interval Count
[0, 2] 77
[2, 4] 16
[4, 6] 5
[6, 8] 1
[16,18] 1
```
Finding the MLE for data like this requires two procedures. First, one to compute the contribution of a list of all 100 intervals to the log likelihood:
```
logLikelihood[data_, m_, s_] :=
With[{f = CDF[LogNormalDistribution[m, s], #] &},
Sum[Log[f[b[[2]]] - f[b[[1]]]], {b, data}]
];
```
Second, one to numerically maximize the log likelihood:
```
mle = NMaximize[{logLikelihood[data, m, s], s > 0}, {m, s}]
```
The solution reported by Mathematica is
```
{-77.0669, {m -> -0.014176, s -> 0.952739}}
```
The first value in the list is the log likelihood and the second (evidently) gives the MLEs of $\mu$ and $\sigma$, respectively. They are comfortably close to their true values.
Other software systems will vary in their syntax, but typically they will work in the same way: one procedure to compute the probabilities and another to maximize the log likelihood determined by those probabilities.
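For instance, here is a sketch of the same two-step recipe in Python with `scipy`, using the tallies above (the function name, the starting values, and the choice of Nelder-Mead are my own; any reasonable optimiser should work):

```python
import numpy as np
from scipy import optimize, stats

# Bin edges and counts tallied above
edges  = [(0, 2), (2, 4), (4, 6), (6, 8), (16, 18)]
counts = [77, 16, 5, 1, 1]

def neg_log_lik(params):
    """Negative binned log likelihood: sum over bins of k(j) * log f_j(mu, sigma)."""
    m, s = params
    if s <= 0:
        return np.inf
    ll = 0.0
    for (lo, hi), k in zip(edges, counts):
        p = stats.norm.cdf((np.log(hi) - m) / s) - \
            (stats.norm.cdf((np.log(lo) - m) / s) if lo > 0 else 0.0)
        ll += k * np.log(p)
    return -ll

res = optimize.minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
m_hat, s_hat = res.x
print(m_hat, s_hat)   # should land near the values reported above
```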
| null | CC BY-SA 3.0 | null | 2011-06-29T22:43:12.103 | 2011-07-01T14:23:50.713 | 2011-07-01T14:23:50.713 | 919 | 919 | null |
12492 | 1 | null | null | 7 | 7530 | Mathematically speaking, for which data does a logistic regression model have a unique solution?
| When does a logistic regression model have a unique solution? | CC BY-SA 3.0 | null | 2011-06-29T23:37:46.787 | 2018-02-01T22:45:55.920 | 2011-06-30T10:26:40.407 | 1390 | 2849 | [
"logistic",
"identifiability"
] |
12493 | 2 | null | 12488 | 1 | null | If you're relatively new to statistics, logistic regression can be pretty difficult to master. An alternative plan would consist of...
- a T-test of the age difference between those who do and don't click (or sign up);
- a Chi-Square test of the independence of clicks (or sign ups) and gender.
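Both tests are one-liners in most packages. Here is a sketch in Python with `scipy`, on entirely invented campaign data (the variable names and the built-in effect of age on sign-up are assumptions for illustration only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Invented data: age of each viewer and whether they signed up
# (older viewers are made more likely to sign up, for illustration)
age = rng.normal(35, 10, 500)
signed_up = rng.random(500) < 1 / (1 + np.exp(-(age - 40) / 5))

# 1) t-test: do sign-ups and non-sign-ups differ in mean age?
t, p_age = stats.ttest_ind(age[signed_up], age[~signed_up])

# 2) chi-square: are sign-ups independent of gender?
gender = rng.integers(0, 2, 500)   # 0 = male, 1 = female (unrelated here)
table = np.array([[np.sum((gender == g) & (signed_up == s))
                   for s in (False, True)] for g in (0, 1)])
chi2, p_gender, dof, _ = stats.chi2_contingency(table)

print(p_age, p_gender)
```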
| null | CC BY-SA 3.0 | null | 2011-06-30T01:38:39.057 | 2011-06-30T12:51:41.023 | 2011-06-30T12:51:41.023 | 8 | 2669 | null |
12494 | 1 | 12496 | null | 2 | 2822 | I'm talking with my advisor about how to compute standard deviations for, say, combined standardized test scores for admissions purposes. For example, we'd be interested to compute the sum of the verbal and quantitative scores from the GRE, which are correlated, and normed to be approximately normal.
More formally, say you have a multivariate normal with vector mean and matrix covariance $X \sim N(\mu, \Sigma)$, with $X = (X_1, X_2, ...)$ (and covariances are non-zero). What is the variance of $\sum{X_i}$? If it's hard to compute in general, I'm happy with bivariate for now, a recursive approach or similar.
| What is the variance of the sum of components of a multivariate normal distribution? | CC BY-SA 3.0 | null | 2011-06-30T01:44:38.100 | 2011-06-30T03:26:12.873 | null | null | 4514 | [
"normal-distribution",
"variance",
"bivariate"
] |
12495 | 1 | null | null | 21 | 37413 | Can anyone point me out a k-means implementation (it would be better if in matlab) that can take the distance matrix in input?
The standard matlab implementation needs the observation matrix in input and it is not possible to custom change the similarity measure.
| k-means implementation with custom distance matrix in input | CC BY-SA 3.0 | null | 2011-06-30T01:52:27.973 | 2021-01-12T17:38:21.483 | 2017-12-14T14:54:45.120 | 11887 | 4809 | [
"clustering",
"matlab",
"k-means"
] |
12496 | 2 | null | 12494 | 5 | null | The variance is the quadratic form
$$
\mathbf{1}'\Sigma\mathbf{1},
$$
where $\mathbf{1}$ is a column vector of ones; equivalently, it is the sum of all the entries of $\Sigma$. In the bivariate case this reduces to $\sigma_1^2+\sigma_2^2+2\sigma_{12}$.
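A quick numerical check of this formula (the GRE-like means, SDs and correlation below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

mu = np.array([500.0, 600.0])                   # e.g. mean verbal and quant scores
Sigma = np.array([[100.0**2, 0.6 * 100 * 90],   # SDs 100 and 90, correlation 0.6
                  [0.6 * 100 * 90, 90.0**2]])

ones = np.ones(len(mu))
var_sum = ones @ Sigma @ ones   # 1' Sigma 1 = sum of all entries of Sigma

# Equivalent bivariate formula: Var(X1 + X2) = s1^2 + s2^2 + 2*Cov(X1, X2)
var_bivariate = Sigma[0, 0] + Sigma[1, 1] + 2 * Sigma[0, 1]

# Sanity check against simulated sums of the two components
draws = rng.multivariate_normal(mu, Sigma, size=200_000)
emp = draws.sum(axis=1).var()
print(var_sum, var_bivariate, emp)
```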
| null | CC BY-SA 3.0 | null | 2011-06-30T02:24:30.140 | 2011-06-30T02:24:30.140 | null | null | 4797 | null |
12497 | 2 | null | 423 | 12 | null | 
John Deering, [Strange Brew](http://www.thecomicstrips.com/store/add_strip.php?iid=45773)
| null | CC BY-SA 3.0 | null | 2011-06-30T02:48:01.283 | 2012-05-04T22:13:33.977 | 2012-05-04T22:13:33.977 | 919 | 3919 | null |
12498 | 1 | null | null | 7 | 2595 | In this case, "generic" being the entire gauntlet of macroeconomic time-series that private and government statistical offices put out.
Some background - I recently started working at a data provider - we collect data releases and repackage them in a presumably more convenient and accessible fashion for our clients, and we have tens of thousands of data series (wouldn't be surprised if we topped a million, actually). As part of our QA process, we run the following outlier detection:
$X_t-X_{t-1} = E_t$
$\sigma^2$ is estimated from the resulting sample of $E_t$, and a z-score is calculated based on the assumption $E_t\sim N(0,\sigma^2)$.
I think we can do better - the math clearly falls apart for everything that isn't a random walk.
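For reference, the current check can be sketched as follows (a Python rendering of the procedure described above; the cutoff of 4 is an arbitrary choice for the example):

```python
import numpy as np

def flag_outliers(x, z_cut=4.0):
    """The QA check described above: difference the series, estimate the
    spread of the differences, and flag points whose jump is extreme.
    Implicitly assumes the series is close to a random walk."""
    e = np.diff(x)                       # E_t = X_t - X_{t-1}
    z = (e - e.mean()) / e.std(ddof=1)   # z-score of each first difference
    return np.where(np.abs(z) > z_cut)[0] + 1   # indices into the original series

rng = np.random.default_rng(5)
x = np.cumsum(rng.normal(0, 1, 300))   # a true random walk
x[150] += 15                            # inject a single large pulse
flags = flag_outliers(x)
print(flags)
```

Note that even on a true random walk, a single pulse contaminates two consecutive differences and so gets flagged twice; on trending, seasonal, or mean-reverting series the flags become far less reliable, which is exactly the "falls apart" problem.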
I initially thought of fitting an ARMA(m,n) based on the peak of the autocorrelation/autocovariance functions of the series and checking the residuals. I'm wary of the robustness of this, and a previous [question](https://stats.stackexchange.com/questions/1207/period-detection-of-a-generic-time-series) seems to indicate that autocorrelation is not particularly robust.
| Outlier detection for generic time series | CC BY-SA 3.0 | null | 2011-06-30T03:16:14.993 | 2017-04-23T16:14:09.437 | 2017-04-23T16:14:09.437 | 11887 | 4485 | [
"time-series",
"autocorrelation",
"outliers",
"winsorizing"
] |
12500 | 2 | null | 12495 | 16 | null | Since k-means needs to be able to find the means of different subsets of the points you want to cluster, it does not really make sense to ask for a version of k-means that takes a distance matrix as input.
You could try [k-medoids](http://en.wikipedia.org/wiki/K-medoids) instead. There are [some matlab implementations](http://www.google.com/search?aq=f&sourceid=chrome&ie=UTF-8&q=k-medoids%20matlab) available.
| null | CC BY-SA 3.0 | null | 2011-06-30T04:50:05.313 | 2011-06-30T04:50:05.313 | null | null | 5179 | null |
12501 | 2 | null | 10540 | 11 | null | Take a look at the
[Cluster Validity Analysis Platform (CVAP) ToolBox](http://www.mathworks.com/matlabcentral/fileexchange/14620-cvap-cluster-validity-analysis-platform-cluster-analysis-and-validation-tool)
And some of the materials (links) from CVAP:
> Silhouette index (overall average silhouette): a larger Silhouette value indicates a better quality of a clustering result [Chen et al. 2002]
- N. Bolshakova, F. Azuaje. 2003. Cluster validation techniques for genome expression data, Signal Processing. V.83. N4, P.825-833.
- E. Dimitriadou, S. Dolnicar, A. Weingessel. An examination of indexes for determining the Number of Cluster in binary data sets. Psychometrika, 67(1):137-160, 2002.
You can also check this [(simple) Tool for estimating the number of clusters](http://www.mathworks.com/matlabcentral/fileexchange/13916-simple-tool-for-estimating-the-number-of-clusters)
Just take a look at the examples of both toolkits (You can also use other cluster validation techniques)
| null | CC BY-SA 3.0 | null | 2011-06-30T05:32:36.207 | 2011-06-30T05:32:36.207 | null | null | 5172 | null |
12502 | 2 | null | 423 | 123 | null | Another from [xkcd #833](http://xkcd.com/833/):
[](http://xkcd.com/833/)
> And if you labeled your axes, I could tell you exactly how MUCH better.
| null | CC BY-SA 3.0 | null | 2011-06-30T05:50:35.560 | 2017-07-21T08:45:28.167 | 2017-07-21T08:45:28.167 | 166832 | 1106 | null |
12503 | 2 | null | 12495 | 11 | null | You could turn your matrix of distances into raw data and input these to K-Means clustering. The steps would be as follows:
- Distances between your N points must be squared Euclidean ones. Perform "double centering" of the matrix:
From each element, subtract its row mean of elements, subtract its column mean of elements, add the matrix mean of elements, and divide by minus 2. (The row, column, and matrix means are from the initial squared distance matrix. The vector of row means and the vector of column means contain, of course, the same values, because the distance matrix is symmetric. The matrix mean scalar should be based on all matrix elements, including the diagonal.)$^1$
The matrix you have now is the SSCP (sum-of-squares-and-cross-product) matrix between your points wherein the origin is put at geometrical centre of the cloud of N points. (Read explanation of the double centering here.)
- Perform PCA (Principal component analysis) on that matrix and obtain the NxN component loading matrix. Some of its last columns are likely to be all 0, so cut them off. What you are left with is actually the principal component scores, the coordinates of your N points on the principal components that pass, as axes, through your cloud. This data can be treated as raw data suitable for K-Means input.
P.S. If your distances aren't geometrically correct squared euclidean ones you may encounter problem: the SSCP matrix may be not positive (semi)definite. This problem can be coped with in several ways but with loss of precision.
$^1$ It is easy to show that the subtrahend from $d_{ij}^2$, the [rowmean + colmean - matrixmean], equals $h_i^2+h_j^2$ of the Euclidean space's [law of cosines](https://stats.stackexchange.com/a/36158/3277): $d_{ij}^2 = h_i^2+h_j^2-2 s_{ij}$, where $s_{ij}$ is the scalar-product similarity between the two vectors. Thus, the double centering operation reverses a (Euclidean) distance into the corresponding angular similarity by that law. Specifically, it is a particular case of that law: the case where we put (via the specific subtrahend) the origin at the centroid of the cloud of points (the vectors' endpoints).
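The recipe can be sketched numerically (Python used for illustration; this is classical multidimensional scaling, with an eigendecomposition standing in for the PCA step, on distances computed from made-up points):

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(20, 3))                         # original points (pretend unknown)
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared Euclidean distances

# Double centering: turns squared distances into the centred SSCP (Gram) matrix
n = D2.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
G = -0.5 * J @ D2 @ J

# Eigendecomposition of G recovers coordinates up to rotation/reflection
w, V = np.linalg.eigh(G)
keep = w > 1e-8                          # drop the (near-)zero eigenvalues
coords = V[:, keep] * np.sqrt(w[keep])   # the "principal component scores"

# These coordinates reproduce the original distances, so running K-Means
# on `coords` is equivalent to clustering the original points.
D2_rec = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
err = np.abs(D2_rec - D2).max()
print(coords.shape, err)
```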
| null | CC BY-SA 4.0 | null | 2011-06-30T07:20:10.307 | 2021-01-12T17:38:21.483 | 2021-01-12T17:38:21.483 | 3277 | 3277 | null |
12504 | 2 | null | 12235 | 0 | null | You might also look into R package qualV and function `generalME`. It provides various comparisons between observed and predicted values of the model. The package [is described](http://www.jstatsoft.org/v22/i08/) in Journal of Statistical Software.
The function `generalME` will work with two vectors, no need to combine them into one data.frame.
| null | CC BY-SA 3.0 | null | 2011-06-30T07:26:15.860 | 2011-06-30T07:26:15.860 | null | null | 2116 | null |
12505 | 2 | null | 12492 | 7 | null | I believe you are looking for the concept of linear independence of the covariates. As soon as one of the covariates can be written as a linear combination of the others, you will not have a unique solution.
As an extreme case: say you have 2 covariates, and one is (in your dataset) always the double of the other, then both
$$logodds(outcome)=\beta_0+\beta_1 X_1$$
and
$$logodds(outcome)=\beta_0+\frac{1}{2}\beta_1 X_2$$
Will yield the same results (regardless of $\beta_1$), and of course there are lots of other solutions.
| null | CC BY-SA 3.0 | null | 2011-06-30T07:47:03.133 | 2011-06-30T07:47:03.133 | null | null | 4257 | null |
12506 | 2 | null | 12492 | 11 | null | The solution of logistic regression is a solution of maximization of certain function, namely log-likelihood:
$$\sum_{i=1}^ny_i\log p_i+(1-y_i)\log(1-p_i),$$
where
$$p_i=\frac{\exp(\beta_0+\beta_1x_{i1}+...+\beta_kx_{ik})}{1+\exp(\beta_0+\beta_1x_{i1}+...+\beta_kx_{ik})},$$
and $(y_i,x_{i1},...,x_{ik})$, $i=1,...,n$ is the data.
So mathematically speaking, a unique solution of logistic regression exists for a given data set if the log-likelihood has a unique maximum. If I am not mistaken, full column rank of the design matrix $X=[1,x_{i1},...,x_{ik}]$ is necessary for that. For more mathematical conditions you might look into [iteratively reweighted least squares](http://en.wikipedia.org/wiki/IRWLS), since maximisation of the log-likelihood function for logistic regression is a special case of IRWLS.
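A small numerical illustration of the rank condition (Python, invented data): when one covariate is an exact multiple of another, distinct coefficient vectors produce literally the same fitted log-odds and hence the same likelihood, so no unique maximiser can exist.

```python
import numpy as np

def log_lik(beta, X, y):
    """Log likelihood of a logistic regression at coefficient vector beta."""
    eta = X @ beta
    p = 1 / (1 + np.exp(-eta))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(7)
x1 = rng.normal(size=100)
X = np.column_stack([np.ones(100), x1, 2 * x1])   # third column = 2 * second: rank deficient
y = (rng.random(100) < 1 / (1 + np.exp(-x1))).astype(float)

# Two different coefficient vectors with identical fitted log-odds
beta_a = np.array([0.0, 1.0, 0.0])
beta_b = np.array([0.0, 0.0, 0.5])
lla = log_lik(beta_a, X, y)
llb = log_lik(beta_b, X, y)
print(lla, llb)   # equal
```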
| null | CC BY-SA 3.0 | null | 2011-06-30T08:03:26.107 | 2011-06-30T14:22:09.643 | 2011-06-30T14:22:09.643 | 2116 | 2116 | null |
12507 | 2 | null | 9259 | 1 | null | I see no immediate reason why this should be related to GAM. The fact is that you are using two tests for the same thing. Since there is no absolute certainty in statistics, it is very well possible to have one give a significant result and the other not.
Perhaps one of the two tests is simply more powerful (but then maybe relies on some more assumptions), or maybe the single significant one is your one-in-twenty type I error.
A good example is tests of whether samples come from the same distribution: you have very parametric tests for that (the t-test is one that can be used for this: if the means are different, so must the distributions be), and also nonparametric ones. It could happen that the parametric one gives a significant result and the nonparametric one doesn't. This could be because the assumptions of the parametric test are false, because the data is simply extraordinary (type I), because the sample size is not sufficient for the nonparametric test to pick up the difference, or, finally, because the aspect of what you really want to test (different distributions) that is checked by each test is just different (different means <-> chance of being "higher than").
If one test result shows significant results, and the other is only slightly non-significant, I wouldn't worry too much.
| null | CC BY-SA 3.0 | null | 2011-06-30T09:49:25.587 | 2011-06-30T09:49:25.587 | null | null | 4257 | null |
12508 | 1 | null | null | 1 | 226 | (I am far from an expert in the field of statistics, so I apologize beforehand if my question is irrelevant or my use of any terms is incorrect)
Let's suppose that we have a discrete variable $X$, and a set of $n+1$ observations $\{x_0, x_1, \dots, x_n\}$. The values of $X$ may not be numeric and all observations are considered equal with no particular order. Therefore, we can transform the observation set above to a set of $k+1$ value/frequency pairs $\{<v_0, f_0>, <v_1, f_1>,\dots,<v_k, f_k>\}$.
Is there a metric that estimates the "importance" of a specific value in this set?
For example, in an observation set where all values occur exactly three times, none can be considered more important than the others. On the other hand, in a set where all but one value occurs 3 times, I would want the one value that occurs 10 times to be differentiated from the rest. The same goes for a value that appears only once.
A simple probability calculation, for example, $P(k) = f_k / (n+1)$ would not be of help, because it does not take into account the other values.
I experimented with various combinations/formulas using the various layman-known metrics (mean, standard deviation, maximum, minimum etc), but anything I came up with seemed too arbitrary for me to trust.
I am currently using the standard score of the value frequency as an estimator, but it has a major issue: it's not bounded, and I am not at all sure how to normalize it without unknowingly biasing any results.
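For concreteness, the standard-score approach described above can be put in a few lines of Python (an illustration of the current approach only, not a recommended metric; the function name is made up):

```python
from collections import Counter
import math

def importance_scores(observations):
    """Standard score (z-score) of each value's frequency, relative to
    the mean and standard deviation of all the frequencies."""
    freqs = Counter(observations)
    counts = list(freqs.values())
    mean = sum(counts) / len(counts)
    sd = math.sqrt(sum((c - mean) ** 2 for c in counts) / len(counts))
    if sd == 0:  # every value is equally frequent: nothing stands out
        return {v: 0.0 for v in freqs}
    return {v: (c - mean) / sd for v, c in freqs.items()}
```

A bounded variant could pass these scores through a squashing function such as tanh, though that choice would be just as arbitrary as the ones mentioned above.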
I would appreciate any reference to a metric that might get me started, or to any terminology that describes what I'd like to do more accurately, so that I can focus my search.
I would especially appreciate any bounded metrics that can be computed incrementally as new samples are added to the observation set :-)
| "Importance" metric for discrete variable value | CC BY-SA 3.0 | null | 2011-06-30T10:27:18.680 | 2011-06-30T14:01:42.847 | 2011-06-30T14:01:42.847 | 919 | 3934 | [
"estimation",
"descriptive-statistics",
"importance"
] |
12509 | 2 | null | 12498 | 4 | null | You are quite right that the ARIMA model you are using (first differences) may not be appropriate for detecting outliers. Outliers can be pulses, level shifts, seasonal pulses or local time trends. You might want to google "INTERVENTION DETECTION IN TIME SERIES" or "AUTOMATIC INTERVENTION DETECTION" to get some reading material on intervention detection. Note that this is not the same as intervention modelling, which often assumes the nature of the outlier and does not identify it empirically. Following mpkitas's remarks, one would include the empirically identified outliers as dummy predictor series in order to accommodate their impact. A lot of work has been done on identifying outliers using a null filter and then identifying the appropriate ARIMA model. Some commercial packages assume that you identify the ARIMA model first (possibly flawed by the outliers) and then identify the outliers. More general procedures examine both strategies. Your current procedure follows the "use the up-front filter first" approach but is also flawed by the assumption of the up-front filter.
Some more reflections:
To detect an anomaly, you need a model which provides an expectation. Intervention detection answers the question "What was the probability of observing what I observed, before I observed it?" An ARIMA model can then be used to identify the "unusual" time series observations. The problem is that you can't catch an outlier without a model (at least a mild one) for your data; otherwise, how would you know that a point violated that model? In fact, the process of growing understanding and finding and examining outliers must be iterative. This isn't a new thought. Bacon, writing in Novum Organum about 400 years ago, said: "Errors of Nature, Sports and Monsters correct the understanding in regard to ordinary things, and reveal general forms. For whoever knows the ways of Nature will more easily notice her deviations; and, on the other hand, whoever knows her deviations will more accurately understand Nature." The single model you are imposing on all your series is clearly an inadequate way to go.
| null | CC BY-SA 3.0 | null | 2011-06-30T11:00:03.650 | 2011-06-30T21:24:01.643 | 2011-06-30T21:24:01.643 | 3382 | 3382 | null |
12510 | 1 | null | null | 4 | 681 | I am trying to estimate the number of unique visitors who visited a given website (an online store). There are hundreds of millions of visits to the store, so this task is too big for my database to handle. Note that one visitor may have more than one visit, so it is not enough to count the number of records; we need to find the number of unique visitors. Even when scaling the data onto a MapReduce cluster this task turns out to be quite difficult.
I decided to sample the data. Each visitor has an MD5 hash (which behaves like a uniform random hexadecimal number), so I can select all users whose hash string ends with, for example, '0'. By doing so I am sampling 1 in 16 users (one hex digit = 16 possibilities).
In this way I limit the number of records, and the database can now count the number of unique visitors (the number of distinct visitor hashes). I simply multiply this number by 16 to get the estimated total number of unique visitors. I did some tests on a small sample of real data and it works pretty well.
I am trying to find out the minimum sample size needed to be 95% confident that the number I arrive at is close to correct. Some pages are visited by millions of visitors while others have only a few hundred visits. If I sample 1/16 of the users on a page which has only 20 users, my estimate may be quite off.
Sample data looks in the following way:
```
Visitor Hash | Website ID | Timestamp | Irrelevant data
009AB730 | 123 | 11111 |
009AB730 | 122 | 11112 |
009AB734 | 122 | 11112 |
0283AB22 | 122 | 11112 |
0283AB22 | 122 | 11112 |
.. repeated 1000 times
0283AB22 | 122 | 11112 |
0283AB20 | 122 | 11112 |
```
After sampling this data based on user hash ending with "0" we get:
```
009AB730 | 123 | 11111 |
009AB730 | 122 | 11112 |
0283AB20 | 122 | 11112 |
```
So it is irrelevant how many times a given user visited the website; what matters is whether his hash/ID ends with 0. The hash should be uniformly random, so we can assume that 1/16 of user IDs end with 0.
Moreover, it is easier to calculate the number of unique IDs in this smaller sample.
That is why I was wondering what my minimum sample should be to achieve less than 5% error. I found [this question](https://stats.stackexchange.com/questions/8072/choosing-sample-size-to-achieve-pre-specified-margin-of-error) but I am not sure if I can use the same calculations, since there is no proportion in my problem.
My first take would be to use the same technique with the worst-case assumption, taking the largest margin of error, which occurs when the proportion is 0.5. Using exactly the same logic: the maximum margin of error at 95% confidence is $m = 0.98 / \sqrt{n}.$ Rearranging gives $n = (0.98/m)^2.$ So if we want a margin of error of 5% = 0.05, then $n = (0.98/0.05)^2 \approx 384$. Therefore we need 384 unique visitors for each webpage for which we want to estimate the number of unique visitors. Since this is the worst-case scenario, I hope this logic is not flawed? I am worried that I did not use the 1/16 sampling in this calculation. What if I looked for users whose ID ends with '00'? I would expect that the minimum sample should be bigger in that case.
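The worst-case calculation above is easy to put in code (a hedged illustration only; whether the proportion model is the right one for hash-suffix sampling is exactly the open question here):

```python
import math

def worst_case_n(margin, z=1.96):
    """Worst-case (p = 0.5) sample size for estimating a proportion:
    margin = z * 0.5 / sqrt(n)  =>  n = (z * 0.5 / margin) ** 2.
    Returns the unrounded n; round up to the next integer in practice."""
    return (z * 0.5 / margin) ** 2
```

On this logic the sampling rate (1/16 or 1/256) does not appear in the formula; it would only affect how much raw traffic is needed before the sample contains that many unique visitors.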
| Sample size to achieve given confidence level | CC BY-SA 3.0 | null | 2011-06-30T11:46:36.467 | 2011-08-11T20:05:50.427 | 2017-04-13T12:44:46.680 | -1 | 1970 | [
"confidence-interval",
"statistical-significance",
"sampling"
] |
12511 | 2 | null | 12484 | 27 | null | If $(x_0,y_0)$ is the point through which the regression line must pass, fit the model $y−y_0=\beta (x−x_0)+\varepsilon$, i.e., a linear regression with "no intercept" on a translated data set. In $R$, this might look like `lm( I(y-y0) ~ I(x-x0) + 0)`. Note the `+ 0` at the end which indicates to `lm` that no intercept term should be fit.
Depending on how easily you are convinced, there are multiple ways to demonstrate that this does, indeed, yield the correct answer. If you want to establish it formally, one simple method is to use Lagrange multipliers.
Whether or not it is actually a good idea to force a regression line to go through a particular point is a separate matter and is problem dependent. Generally, I would personally caution against this, unless there is a very good reason (e.g., very strong theoretical considerations). For one thing, fitting the full model can provide a means for measuring lack of fit. As a second matter, if you are mostly interested in evaluating model explanatory power for values of $x$ and $y$ "far away" from $(x_0,y_0)$, then the relevance of the fixed point becomes suspect.
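The translated no-intercept fit also has a simple closed form, $\hat\beta = \sum_i (x_i-x_0)(y_i-y_0) \big/ \sum_i (x_i-x_0)^2$, which can be sketched without R (Python here, purely for illustration):

```python
def slope_through_point(x, y, x0, y0):
    """Least-squares slope of y - y0 = beta * (x - x0): a no-intercept
    regression on translated data, so the fitted line is forced
    through (x0, y0)."""
    num = sum((xi - x0) * (yi - y0) for xi, yi in zip(x, y))
    den = sum((xi - x0) ** 2 for xi in x)
    return num / den
```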
| null | CC BY-SA 3.0 | null | 2011-06-30T13:11:33.550 | 2011-06-30T13:26:09.693 | 2011-06-30T13:26:09.693 | 2970 | 2970 | null |
12512 | 1 | 12520 | null | 5 | 271 | I have 10 years of backtested simulated performance of some trading strategy (using historical prices), and N months of actual trading performance. I want to compare the two. How big does N have to be to get a statistically significant comparison? I'm comparing returns and Sharpe ratios.
| Sample size for actual vs backtested performance | CC BY-SA 3.0 | null | 2011-06-30T14:03:41.197 | 2011-06-30T21:43:45.697 | 2011-06-30T21:43:45.697 | 4002 | 4002 | [
"hypothesis-testing",
"statistical-significance"
] |
12513 | 1 | null | null | 6 | 636 | I'm using latin hypercube sampling to draw parameter values for an epidemiological simulation model (about 30 parameters, following various distributions), and so far I haven't been able to find any good rule of thumb/heuristics/theory for choosing the number of samples (i.e. the number of equiprobable intervals).
| Choosing the number of latin hypercube samples | CC BY-SA 3.0 | null | 2011-06-30T15:12:33.333 | 2016-02-15T22:15:01.473 | 2016-02-15T22:15:01.473 | 11887 | 3693 | [
"sampling",
"simulation",
"latin-hypercube"
] |
12517 | 1 | 12518 | null | 11 | 40552 | There are two forms for the Gamma distribution, each with different definitions for the shape and scale parameters. Rather than asking which form is used by the [gsl_ran_gamma](http://www.gnu.org/software/gsl/manual/html_node/The-Gamma-Distribution.html) implementation, it's probably easier to ask for the associated definitions of the mean and standard deviation in terms of the shape and scale parameters.
Any pointers to definitions would be appreciated.
| What are the mean and variance for the Gamma distribution? | CC BY-SA 3.0 | null | 2011-06-30T18:46:23.383 | 2017-04-10T12:45:04.733 | 2011-06-30T20:34:09.180 | 930 | 3591 | [
"gamma-distribution"
] |
12518 | 2 | null | 12517 | 16 | null | If the shape parameter is $k>0$ and the scale is $\theta>0$, one parameterization has density function
$$p(x) = x^{k-1} \frac{ e^{-x/\theta} }{\theta^{k} \Gamma(k)}$$
where the argument, $x$, is non-negative. A random variable with this density has mean $k \theta$ and variance $k \theta^{2}$ (this parameterization is the one used on the wikipedia page about the gamma distribution).
An alternative parameterization uses $\vartheta = 1/\theta$ as the rate parameter (inverse scale parameter) and has density
$$p(x) = x^{k-1} \frac{ \vartheta^{k} e^{-x \vartheta} }{\Gamma(k)}$$
Under this choice, the mean is $k/\vartheta$ and the variance is $k/\vartheta^{2}$.
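As a quick numerical sanity check of the first (shape/scale) parameterization, one can integrate the density with a crude Riemann sum (a pure-Python sketch; the grid and upper limit are arbitrary choices that happen to be adequate for these parameter values):

```python
import math

def gamma_pdf(x, k, theta):
    """Shape/scale form: p(x) = x**(k-1) * exp(-x/theta) / (theta**k * Gamma(k))."""
    return x ** (k - 1) * math.exp(-x / theta) / (theta ** k * math.gamma(k))

def raw_moment(power, k, theta, upper=150.0, steps=150000):
    """Crude Riemann sum for the integral of x**power * p(x) over (0, upper)."""
    h = upper / steps
    return sum((i * h) ** power * gamma_pdf(i * h, k, theta) * h
               for i in range(1, steps + 1))

k, theta = 3.0, 2.0
mean = raw_moment(1, k, theta)                   # close to k * theta = 6
variance = raw_moment(2, k, theta) - mean ** 2   # close to k * theta**2 = 12
```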
| null | CC BY-SA 3.0 | null | 2011-06-30T18:53:35.977 | 2017-04-10T12:45:04.733 | 2017-04-10T12:45:04.733 | 150310 | 4856 | null |
12519 | 1 | null | null | 4 | 213 | This is a supervised learning problem. Ideally I would like to work in R, since it gives me an easy way to pre-process the input data, but I could work around that as well.
For each sample, input consists of tens of thousands of features. These are genomics data and will likely need to be reduced to a manageable amount, somehow, before being used to train the classifier.
Supervisory signal consists of 4 dependent continuous values, representing relative composition of the sample.
e.g. continuous between 0 and 1, all 4 summing to 1 for each sample:
```
Sub012 0.5940594 0.26732673 0.07920792 0.059405941
Sub013 0.5102041 0.34693878 0.08163265 0.061224490
Sub014 0.6521739 0.20652174 0.07608696 0.065217391
```
Wanted: a regression function capable of predicting the relative composition of a sample in terms of those same 4 dependent continuous values.
The constraints on the supervisory signal are what give me pause: the dependence of the variables, being constrained between 0 and 1 and summing to 1. I was hoping someone might have attempted something similar and could point me in the right direction: packages or approaches which may work or definitely won't work; all thoughts welcomed.
Thank you.
| Supervised learning approaches which can accommodate a supervisory signal composed of multiple dependent continuous variables? | CC BY-SA 3.0 | null | 2011-06-30T20:00:54.897 | 2011-07-02T17:20:55.793 | 2011-07-02T17:20:55.793 | 5240 | 5240 | [
"r",
"machine-learning",
"classification"
] |
12520 | 2 | null | 12512 | 3 | null | A slightly simpler formulation is as follows: suppose you believe that your trading strategy has a Sharpe ratio of $\psi$, 'annualized' to the time units of your mark frequency (monthly in your case, evidently). Then to perform a 1-sided, 1-sample t-test for the null hypothesis that the expected return of your strategy is zero, you should set
$$N = \frac{2.7}{\psi^2}$$
in order to have a power of 0.5 and a type I rate of 0.05 (the 'magical' value). Note this is just a modification of Lehr's rule (as described by [Van Belle](http://amzn.com/0470144483)).
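In code, the rule is a one-liner (shown only to make the units explicit; `sharpe` is assumed to be annualized to the mark frequency, monthly here):

```python
def months_needed(sharpe):
    """Modified Lehr's rule: number of marks needed for a one-sided,
    one-sample t-test of zero mean return to have power 0.5 at a
    type I rate of 0.05."""
    return 2.7 / sharpe ** 2
```

Halving the Sharpe ratio quadruples the number of months required, which is the painful part of the rule.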
There are a large number of caveats here:
- this only holds for reasonably non-skewed distributions of returns.
- using relative returns (percents) instead of log returns will create a geometric bias.
- having a small number of samples biases the estimate of Sharpe.
There is probably a similar formula for the two sample t-test to compare mean returns, or a test to compare Sharpe ratios, but I don't know them (yet).
| null | CC BY-SA 3.0 | null | 2011-06-30T20:23:53.580 | 2011-06-30T20:23:53.580 | null | null | 795 | null |
12521 | 1 | 12522 | null | 9 | 1150 | I have 10 years of backtested simulated performance of some trading strategy (using historical prices), and N months of actual trading performance. What statistical test can I do find out if I'm on target with the backtesting numbers? (both, in terms of expected annual returns and expected annual Sharpe ratio)
| Comparing backtesting returns with real trading returns | CC BY-SA 3.0 | null | 2011-06-30T21:48:00.857 | 2011-06-30T22:23:45.800 | null | null | 4002 | [
"hypothesis-testing"
] |
12522 | 2 | null | 12521 | 8 | null | Letting $\psi_1, \psi_2$ be the sample Sharpe ratios of the two periods, the difference $\Delta \psi = \psi_1 - \psi_2$ is asymptotically normal. Under the null hypothesis that the population Sharpe ratios in the two periods are equal, the difference is asymptotically mean zero. The standard deviation is approximately $\sqrt{\frac{1}{120} + \frac{1}{n}}$, when your Sharpe ratios are 'annualized' to monthly terms. So the simplest test would be to reject the null if $|\Delta\psi|> 1.96 \sqrt{\frac{1}{120} + \frac{1}{n}}$.
My answer here is just a realization of @drnexus' answer to [this question](https://stats.stackexchange.com/questions/1622/comparing-2-independent-non-central-t-statistics).
| null | CC BY-SA 3.0 | null | 2011-06-30T22:23:45.800 | 2011-06-30T22:23:45.800 | 2017-04-13T12:44:27.570 | -1 | 795 | null |
12523 | 1 | null | null | 3 | 598 | I have the following setup.
- Parameters $W$ with density $\pi(w)$.
- Observed data $X_1,...,X_n$ iid.
- Density of $X_i|W=w$ is $f(x_i|w) = \int_{\Delta(x_i)} f(\mathbf c|w) \,d\mathbf c$.
- The simplex $\Delta(x_i) = \{\mathbf c \geq 0 : c_1 + \cdots + c_m = x_i\}$.
I want to find the distribution of $W|X_1=x_1,\dots,X_n=x_n$.
Although $f(\mathbf c|w)$ is fairly cheap to evaluate, I don't think it is possible (after a lot of effort!) to get a closed or simple form for $f(x_i|w)$. Therefore it's also difficult to find out much about the posterior distribution of $W$ analytically.
Instead I have been considering MCMC for this purpose. It is straightforward (assuming I can find a good enough $q$) to write down a basic Metropolis-Hasting algorithm:
- Come up with a proposal distribution $q(w|w')$ and pick $w_0$.
- For $t=1,2,3,\dots$:
Sample $w$ from a $q(\cdot|w_{t-1})$.
Let $r = \frac{ q(w_{t-1}|w) \pi(w) \prod_{i=1}^n f(x_i|w) }
{ q(w|w_{t-1}) \pi(w_{t-1}) \prod_{i=1}^n f(x_i|w_{t-1}) }$.
Let $w_t = w$ with probability $\min(r,1)$, and let $w_t = w_{t-1}$ otherwise.
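For illustration, here is the loop above in generic form, applied to a toy target (a standard normal) with a symmetric Gaussian random-walk proposal, so the $q$ terms cancel; in the actual problem `log_target` would involve the $n$ numerically integrated terms $f(x_i|w)$, which is precisely the expense at issue:

```python
import math
import random

def metropolis_hastings(log_target, w0, n_iter, step=1.0, seed=1):
    """Random-walk Metropolis: the Gaussian proposal is symmetric, so
    only the (log) target densities of the current and proposed states
    enter the acceptance ratio."""
    rng = random.Random(seed)
    w, logp = w0, log_target(w0)
    samples = []
    for _ in range(n_iter):
        prop = w + rng.gauss(0.0, step)
        logp_prop = log_target(prop)
        # Accept with probability min(1, exp(logp_prop - logp)).
        if logp_prop >= logp or rng.random() < math.exp(logp_prop - logp):
            w, logp = prop, logp_prop
        samples.append(w)
    return samples

# Toy target: a standard normal, up to an additive constant.
draws = metropolis_hastings(lambda w: -0.5 * w * w, w0=0.0, n_iter=20000)
```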
The trouble is that, as was already mentioned above, the only way I know to evaluate $f(x_i|w)$ is via a (multi-dimensional) numerical integration. And in the above algorithm, this has to be done $n$ times per iteration (or $2n$, if the values for $w_{t-1}$ are not cached)!
So my $\textbf{questions}$ are, in order of specificity:
- Is there a way to combine the numerical integration required for $f(x_i|w)$ and the overall MCMC algorithm? (Intuitively, suppose we use an MCMC algorithm to evaluate each integral $f(x_i|w)$, then there would be two levels of nested Markov chains, and perhaps there is a way to combine these chains to get better overall performance.)
- Is there a (obvious, standard, or otherwise) better way to solve my problem than the basic MH algorithm above?
Thanks in advance for any replies!
[P.S. In case it matters, $f(\mathbf c|w)$ is the density of a probability distribution on $\mathbb R^m$.]
| MCMC when the density involves integration over a simplex | CC BY-SA 3.0 | null | 2011-06-30T22:36:37.537 | 2012-06-17T16:01:31.030 | 2011-06-30T23:08:25.993 | 5179 | 5179 | [
"markov-chain-montecarlo",
"metropolis-hastings"
] |
12524 | 1 | null | null | 3 | 104 | Can you please recommend good reads on statistical analysis related to online fraud detection and prevention of account abuse?
Thank you.
| Reading recommendation on using statistical analysis in online fraud prevention | CC BY-SA 3.0 | null | 2011-06-30T23:51:51.223 | 2011-06-30T23:51:51.223 | null | null | 5242 | [
"references",
"fraud-detection"
] |
12525 | 1 | 12532 | null | 21 | 31444 | I'm trying to minimize a custom function. It should accept five parameters and the data set, do all sorts of calculations, and produce a single number as output. I want to find the combination of five input parameters that yields the smallest output of my function.
| Is there a way to maximize/minimize a custom function in R? | CC BY-SA 3.0 | null | 2011-06-30T23:54:42.607 | 2013-07-10T16:39:39.907 | 2011-07-01T04:36:17.623 | 2970 | 333 | [
"r",
"optimization"
] |
12526 | 1 | 12561 | null | 2 | 845 | What is the best imputation method for a dataset consisting of stochastic data? For example, let's say you have a table of security returns. In some cases the missing values are random; in other cases they are not. For example, a new IPO would have a relatively short time series.
Suggestions:
- Expectation-Maximization (EM) algorithm
- Nearest neighbor algorithm
- Linear interpolation (I would want to avoid this since it is deterministic)
Thanks!
| Best imputation method for stochastic noisy data? | CC BY-SA 3.0 | null | 2011-07-01T00:00:48.627 | 2013-11-05T09:51:54.647 | 2011-07-01T23:31:52.853 | 2645 | 8101 | [
"time-series",
"missing-data",
"data-imputation"
] |
12528 | 2 | null | 12525 | 1 | null | Is your function continuous and differentiable? You might be able to use `optim`, either with user-supplied derivatives or numerically approximated ones.
| null | CC BY-SA 3.0 | null | 2011-07-01T00:26:26.200 | 2011-07-01T00:26:26.200 | null | null | 1777 | null |
12529 | 1 | 12552 | null | 4 | 1091 | I have two datasets that I want to compare.
Each dataset contains the weight of 10 different people measured on 3 different days.
I am interested in measuring the probability that the two samples originate from the same population.
People seem to suggest doing a Kolmogorov-Smirnov test, but I need a measurement.
I was thinking of using the EMD to compare the distributions for each day:
EMD(dataset1.day1, dataset2.day1) + EMD(dataset1.day2, dataset2.day2) + EMD(dataset1.day3, dataset2.day3)
where dataset1.day1 is the histogram of the values for day 1 in dataset 1, and so on.
But I could probably also treat each person as a 3-D data point and compute the EMD in 3-D.
Another possibility is the Hausdorff distance, but taking the average of the distances for each point instead of the maximum distance.
The two datasets have very different skewness, so I was also considering the Mann-Whitney-Wilcoxon test.
What are the main differences between these techniques?
| Difference between Hausdorff and earth mover (EMD) distance | CC BY-SA 3.0 | 0 | 2011-07-01T01:13:25.003 | 2011-07-01T19:44:45.110 | 2011-07-01T19:44:45.110 | 5244 | 5244 | [
"hypothesis-testing",
"distance"
] |
12530 | 2 | null | 12529 | 3 | null | The intuitive difference between Hausdorff distance and EMD between sets A and B is:
- EMD tells you the total work required to move all A's mass onto B, under the optimal scheme for doing so.
- Hausdorff tells you the worst-case distance between an element of A and the nearest element of B. If you consider each point to have unit mass, then you can think of Hausdorff as telling you the worst-case amount of work required to move a single element of A onto some element of B, under the optimal scheme for doing so.
Your modification of Hausdorff would have the characterization:
- It tells you the average amount of work required to move each element of A onto some element of B, under the optimal scheme for doing so.
Of course, which one you want depends on your application...
| null | CC BY-SA 3.0 | null | 2011-07-01T01:53:31.960 | 2011-07-01T01:53:31.960 | null | null | 5179 | null |
12531 | 2 | null | 12471 | 1 | null | You don't say why you chose the average linkage method, but since you are only doing that step to generate starting values for k-means, the choice of method in the first step might not be very important. Ward's method, also available in the SPSS CLUSTER procedure, may scale better to larger datasets. If the dataset for the first stage turns out to be too large, you could take a random sample to calculate the initial cluster centers.
HTH,
Jon Peck
| null | CC BY-SA 3.0 | null | 2011-07-01T02:01:56.040 | 2011-07-01T02:01:56.040 | null | null | 5245 | null |
12532 | 2 | null | 12525 | 27 | null | I wrote a post [listing a few tutorials using optim](http://jeromyanglim.blogspot.com/2011/02/r-optimisation-tips-using-optim-and.html).
Here is a quote of the relevant section:
- The combination of the R function optim and a custom created objective function, such as a minus log-likelihood function, provides a powerful tool for parameter estimation of custom models.
- Scott Brown's tutorial includes an example of this.
- Ajay Shah has an example of writing a likelihood function and then getting a maximum likelihood estimate using optim.
- Benjamin Bolker has great material available on the web from his book Ecological Models and Data in R. PDFs, Rnw, and R code for early versions of the chapters are provided on the website: Chapter 6 (likelihood and all that), 7 (the gory details of model fitting), and 8 (worked likelihood estimation examples).
- Brian Ripley has a set of slides on simulation and optimisation in R. In particular it provides a useful discussion of the various optimisation algorithms available using optim.
| null | CC BY-SA 3.0 | null | 2011-07-01T02:37:22.900 | 2013-07-10T16:39:39.907 | 2013-07-10T16:39:39.907 | 442 | 183 | null |
12533 | 2 | null | 4044 | 13 | null | Heuristic
- Minkowski-form
- Weighted-Mean-Variance (WMV)
Nonparametric test statistics
- χ² (Chi-square)
- Kolmogorov-Smirnov (KS)
- Cramer/von Mises (CvM)
Information-theory divergences
- Kullback-Leibler (KL)
- Jensen–Shannon divergence (metric)
- Jeffrey-divergence (numerically stable and symmetric)
Ground distance measures
- Histogram intersection
- Quadratic form (QF)
- Earth Movers Distance (EMD)
| null | CC BY-SA 3.0 | null | 2011-07-01T02:39:26.897 | 2011-07-01T02:47:22.257 | 2011-07-01T02:47:22.257 | 5244 | 5244 | null |
12535 | 1 | null | null | 0 | 209 | I have two datasets that I want to compare.
Each dataset contains the weight of 10 different people measured on 3 different days.
I am interested in measuring the probability that the two samples originate from the same population.
People seem to suggest doing a Kolmogorov-Smirnov test, but I need a measurement.
I was thinking of using the EMD to compare the distributions for each day:
EMD(dataset1-day1, dataset2-day1) + EMD(dataset1-day2, dataset2-day2) + EMD(dataset1-day3, dataset2-day3)
But I could probably also treat each person as a 3-D data point and compute the EMD in 3-D.
Another possibility is the Hausdorff distance, but taking the average of the distances for each point instead of the maximum distance.
What are the main differences between the two techniques?
| Measuring probability that 2 sample originate from the same population | CC BY-SA 3.0 | 0 | 2011-07-01T01:58:21.050 | 2011-07-01T04:04:58.323 | null | null | 5244 | [
"probability"
] |
12536 | 2 | null | 12498 | 0 | null | Winsorization replaces extreme data values with less extreme values.
[http://www.r-bloggers.com/winsorization/](http://www.r-bloggers.com/winsorization/)
| null | CC BY-SA 3.0 | null | 2011-07-01T06:09:23.313 | 2011-07-01T06:09:23.313 | null | null | 1709 | null |
12537 | 2 | null | 3294 | 3 | null | Take a look at the [(HMM) Toolbox for Matlab by Kevin Murphy](http://www.cs.ubc.ca/~murphyk/Software/HMM/hmm.html) and also section Recommended reading on HMMs on this site.
You can also get [Probabilistic modeling toolkit for Matlab/Octave](http://code.google.com/p/pmtk3/) with some examples of using Markov Chains and HMM.
You can also find lectures and labs on HMM, for example:
- Labs
- Lecture1 and Lecture2
| null | CC BY-SA 3.0 | null | 2011-07-01T06:27:39.323 | 2011-07-01T06:27:39.323 | null | null | 5172 | null |
12538 | 2 | null | 12378 | 3 | null | I think the problem may be one of model mis-specification. If your targets are angles wrapped to ±180 degrees, then the "noise process" for your data may be sufficiently non-Gaussian that the Bayesian evidence is not a good way to optimise the hyper-parameters. For instance, consider what happens when "noise" causes the signal to wrap around. In that case it may be wise to perform model selection by minimising the cross-validation error instead (there is a public-domain implementation of the Nelder-Mead simplex method [here](http://theoval.cmp.uea.ac.uk/matlab/#optim) if you don't have the optimisation toolbox). The cross-validation estimate of performance is not so sensitive to model mis-specification, as it is a direct estimate of test performance, whereas the marginal likelihood of the model is the evidence in support of the model given that the model assumptions are correct. See the discussion starting on page 123 of Rasmussen and Williams' book.
Another approach would be to re-code the outputs so that a Gaussian noise model is more appropriate. One thing you could do is some form of unsupervised dimensionality reduction: there are non-linear relationships between your targets (as there are only a limited number of ways in which a body can move), so there will be a lower-dimensional manifold that your targets live on, and it would be better to regress the coordinates of that manifold rather than the angles themselves (there may be fewer targets that way as well).
Also some sort of [Procrustes analysis](http://en.wikipedia.org/wiki/Procrustes_analysis) might be a good idea to normalise the differences between subjects before training the model.
You may find some of the [work](ftp://ftp.dcs.shef.ac.uk/home/neil/gplvmTutorial.pdf) done by Neil Lawrence on human pose recovery of interest. I remember seeing a demo of this at a conference a few years ago and was very impressed.
| null | CC BY-SA 3.0 | null | 2011-07-01T06:54:54.350 | 2011-07-01T06:54:54.350 | null | null | 887 | null |
12540 | 2 | null | 3294 | 5 | null | For bioinformatics applications, the classic text on HMMs would be Durbin, Eddy, Krogh & Mitchison, "[Biological Sequence Analysis](http://rads.stackoverflow.com/amzn/click/0521629713) - Probabilistic Models of Proteins and Nucleic Acids", Cambridge University Press, 1998, ISBN 0-521-62971-3. It is technical, but very clear, and I found it very useful.
For MCMC there is a recent (version of a) book by Robert and Casella, "[Introducing Monte Carlo Methods with R"](http://rads.stackoverflow.com/amzn/click/1441915753), Springer, which looks good, but I haven't had a chance to read it yet (uses R for examples, which is a good way to learn, but I need to learn R first ;o)
| null | CC BY-SA 3.0 | null | 2011-07-01T07:05:58.663 | 2011-07-01T07:05:58.663 | null | null | 887 | null |
12541 | 2 | null | 12519 | 4 | null | The constraint on the output can be achieved using the [softmax](http://en.wikipedia.org/wiki/Softmax_activation_function) inverse-link function used in multi-nomial logistic regression, i.e.
$y_i = \frac{\exp\{\nu_i\}}{\sum_{j=1}^n \exp\{\nu_j\}}$
where $y_i$ is the $i^{th}$ output of the model and $\nu_j$ is the linear combination of the input features for the $j^{th}$ component.
The model can then be fitted by minimising a suitable likelihood. As the targets are constrained, the likelihood won't be Gaussian, which may be a problem. Some sort of Dirichlet likelihood might be more appropriate? There may not be any R software that does this already, so you will probably have some coding to do.
As you have many input variables, it will be vital to use some form of regularisation to avoid over-fitting, the regularisation parameters can probably by tuned very efficiently my minimising the leave-one-out cross-validation error, see e.g. [Generalised Kernel Machines](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.108.4339). WHich reminds me, if you have many more features than patterns, then you can perform the computation cheaply without having to do any feature reduction using kernel methods (see the GKM paper).
Update: You may be able to just use some off-the-shelf code for multi-nomial logistic regression, providing the implementation allows a soft (probabilistic) assignment of the targets (most software assumes that the targets specify that each pattern belongs to one particular class, rather than a probability of belonging to each class, as that is what most people need). It will have the softmax inverse-link function, so the required constraints on the output will be in place. The multinomial loss function is probably wrong, but then again so is a Gaussian. I have used logistic regression models for regressing ratios before, it is a bit of a hack, but it did work pretty well.
| null | CC BY-SA 3.0 | null | 2011-07-01T07:18:23.550 | 2011-07-01T07:33:45.190 | 2011-07-01T07:33:45.190 | 887 | 887 | null |
12542 | 2 | null | 12525 | 12 | null | In addition to Jeromy Anglim's answer, I have some more links.
Next to `optim` there is another function in base R that allows for what you want: `nlminb`. Check `?nlminb` and `?optim` for examples of the usage.
There are a bunch of packages that can do optimizations. What I found most interesting were the packages [optimx](http://cran.r-project.org/web/packages/optimx/) and, quite new, the [neldermead](http://cran.r-project.org/web/packages/neldermead/index.html) package for different versions of the simplex algorithm.
[Furthermore, you might want to have a look at the CRAN Task View on Optimization for more packages](http://cran.r-project.org/web/views/Optimization.html)
Please note that my recommendations all assume that you have a deterministic function (i.e., no random noise). For functions that are not strictly deterministic (or too big) you would need to use algorithms such as simulated annealing or genetic algorithms. But the [CRAN Task View](http://cran.r-project.org/web/views/Optimization.html) should have what you need.
| null | CC BY-SA 3.0 | null | 2011-07-01T07:59:18.400 | 2011-07-01T07:59:18.400 | null | null | 442 | null |
12544 | 2 | null | 11032 | 3 | null | Some material concerning initial estimates for HMMs is given in
[Lawrence R. Rabiner (February 1989). "A tutorial on Hidden Markov Models and selected applications in speech recognition". Proceedings of the IEEE 77 (2): 257–286. doi:10.1109/5.18626](http://www.cs.ubc.ca/~murphyk/Software/HMM/rabiner.pdf) (Section V.C)
You can also take a look at the [Probabilistic modeling toolkit for Matlab/Octave](http://code.google.com/p/pmtk3/), especially the hmmFitEm function, where you can provide your own initial parameters of the model or just use the 'nrandomRestarts' option.
While using 'nrandomRestarts', the first model (at the init step) uses:
- Fit a mixture of Gaussians via MLE/MAP (using EM) for continuous data;
- Fit a mixture of products of discrete distributions via MLE/MAP (using EM) for discrete data;
The second, third, ... models (at the init step) use randomly initialized parameters and as a result converge more slowly, mostly to lower log-likelihood values.
| null | CC BY-SA 3.0 | null | 2011-07-01T11:17:37.883 | 2011-07-01T11:17:37.883 | null | null | 5172 | null |
12545 | 2 | null | 12526 | 2 | null | I suspect the [no-free-lunch](http://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization) [theorems](http://www.no-free-lunch.org/) will apply to this problem, just as they do for supervised learning in general. Essentially, which method works best depends on the nature of the particular dataset, and none is superior a priori. For noisy data, I would advise multiple-imputation methods, as you need to consider the uncertainty of the imputed value in assessing the uncertainty in the prediction. For noisy problems, predictive uncertainty (i.e. conditional variance) is often just as important as the conditional mean.
| null | CC BY-SA 3.0 | null | 2011-07-01T13:49:36.643 | 2011-07-01T13:49:36.643 | null | null | 887 | null |
12546 | 1 | 12564 | null | 17 | 8003 | Is there any software package that solves linear regression with the objective of minimizing the L-infinity norm?
| Software package to solve L-infinity norm linear regression | CC BY-SA 3.0 | null | 2011-07-01T15:07:47.900 | 2020-07-26T15:15:28.497 | 2020-05-05T18:49:56.523 | 11887 | 4670 | [
"regression",
"optimization"
] |
12547 | 1 | null | null | 0 | 241 | I have pairs of values from two runs (replicates) for each sample, along with the total count for each run. I treated each value as a binomial random variable. I used a log-likelihood ratio test to compare each pair, but my data range is wide, e.g. X1, X2 = (0,0), (0,1), (0,2), ..., (20, 20), with total counts for each run (n1, n2) ranging from hundreds to thousands. So my data look like:
```
ID X1 X2 n1 n2
A1 0 0 119 230
A2 0 1 213 185
. . . . .
. . . . .
. . . . .
A200 15 23 2300 1735
```
What test would be appropriate, and what is its power? I have already tried the likelihood ratio test. How can I implement Fisher's exact test in R for this? Is there any other test that uses the distribution of X1 conditional on X1 + X2?
Any help will be appreciated. I want to implement this in R, so R code/functions will be most helpful. Thanks in advance.
| Appropriate test for testing a pair of random binomial variables | CC BY-SA 3.0 | null | 2011-07-01T15:30:01.477 | 2017-11-06T13:16:07.930 | 2017-11-06T13:16:07.930 | 101426 | 4098 | [
"hypothesis-testing",
"binomial-distribution",
"fishers-exact-test"
] |
12548 | 2 | null | 9242 | 4 | null | Note that while values of the F statistic less than 1 can occur by chance when the null hypothesis is true (or nearly true), as others have explained, values close to 0 can indicate violations of the assumptions that ANOVA depends on. Some analysts will look at the area to the left of the statistic in the F-distribution as a p-value for checking assumption violations. Some of the violations that lead to small F-stats include unequal variances, improper randomization, lack of independence, or just faking the data.
| null | CC BY-SA 3.0 | null | 2011-07-01T15:45:42.233 | 2011-07-01T15:45:42.233 | null | null | 4505 | null |
12549 | 1 | 12550 | null | 3 | 1523 | How can I transform a variable (a non-linear transformation) such that its values are more evenly spread, that is, reduce the peak in the middle of the histogram and move more mass into the tails?
| What order preserving transformation makes data more evenly spread, decreasing the peak, and fattening the tails of the distribution? | CC BY-SA 3.0 | null | 2011-07-01T15:48:00.630 | 2011-10-06T10:40:28.413 | 2011-10-06T10:40:28.413 | 183 | 333 | [
"data-transformation",
"kurtosis"
] |
12550 | 2 | null | 12549 | 3 | null | The transform that most evens things out is the rank transform (just replace the data by the ranks). If there are no ties then the result is uniform.
If the data are fairly normal (bell-shaped), then applying the normal CDF will spread the values towards uniform. In fact, any s-shaped curve will tend to do this, including the arctangent and the inverse logit (center the data and choose an appropriate scale first).
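As an illustrative sketch (in Python here, purely to show the effect; the sample and bin count are my choices), the rank transform maps any tie-free sample onto an exactly even grid:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)           # a peaked, bell-shaped sample

# Rank transform: replace each value by its rank, scaled into (0, 1).
ranks = x.argsort().argsort() + 1   # ranks 1..n (continuous data, so no ties)
u = ranks / (len(x) + 1)

# The transformed values form an evenly spaced grid, i.e. a flat histogram:
counts, _ = np.histogram(u, bins=10)
print(counts)
```

Since the ranks are evenly spaced by construction, every one of the ten bins holds the same number of observations, no matter how peaked the original sample was.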
| null | CC BY-SA 3.0 | null | 2011-07-01T16:33:24.643 | 2011-07-01T16:33:24.643 | null | null | 4505 | null |
12552 | 2 | null | 12529 | 1 | null | There are two questions here: (1) how to determine the probability that the two samples are from the same distribution and (2) what kind of distance metric could be used to measure their overlap.
For the first, one simple way would be to determine the distribution of the first sample (perhaps it's multivariate normal?) and then calculate the posterior density of the second sample under the assumption of the distribution of the first. I like this approach because the interpretation is very straightforward.
For the second, I wouldn't do what you're suggesting with EMD unless you have some natural pairing of individuals in samples 1 and 2 (see @whuber's questions above). One common point between Hausdorff and EMD is that both let you specify an arbitrary distance metric for the points, e.g., euclidean or cosine, so you don't have to average the points (I'd go further and say you shouldn't if you use these methods). The downside is that your results will depend on your choice of the distance metric so you need some way of justifying your choice.
Because of the downside of the distance metric being arbitrary, I'd consider instead the [Bhattacharyya distance](http://en.wikipedia.org/wiki/Bhattacharyya_distance) or perhaps mutual information, provided you can make some informed choice about what the distributions are.
| null | CC BY-SA 3.0 | null | 2011-07-01T17:29:00.653 | 2011-07-01T17:29:00.653 | null | null | 1876 | null |
12554 | 1 | 12555 | null | 8 | 3915 | What parameterization of `glmnet` will give the same results as `glm`? (I'm mainly interested in logistic and linear regression, if that matters.)
| How to make glmnet give the same results as glm? | CC BY-SA 3.0 | null | 2011-07-01T05:48:10.597 | 2021-12-11T18:30:41.800 | null | null | 1720 | [
"r",
"generalized-linear-model",
"glmnet"
] |
12555 | 2 | null | 12554 | 13 | null | You will get the same results as `glm` when you pass `alpha=1` (the default) and `lambda=0`. Especially that last one: it means no penalization.
Note that the method of fitting is different in both, so although theoretically you should get the same result, there may still be tiny differences depending on your data.
| null | CC BY-SA 3.0 | null | 2011-07-01T10:40:14.833 | 2011-07-01T10:40:14.833 | null | null | 4257 | null |
12557 | 2 | null | 12547 | 1 | null | One option is the Mantel-Haenszel test. This will test/estimate the odds ratio between x1 and x2 while allowing the margins to vary between the IDs. In R use the `mantelhaen.test` function.
You could also use the glm function to fit a logistic regression with a term indicating x1 vs. x2 and another for ID (and possibly an interaction).
| null | CC BY-SA 3.0 | null | 2011-07-01T18:41:46.160 | 2011-07-01T18:41:46.160 | null | null | 4505 | null |
12558 | 1 | 12559 | null | 11 | 2628 | I have two sets of data with ~250,000 values for 78 and 35 samples.
Some of the samples are members of a family, and this may have an effect on the data. I have calculated pairwise correlations, which vary between 0.7 and 0.95, but I would like to know if there is a significant difference between intra- and inter-family correlation coefficients. What is the best way to do this?
Thanks
| Comparing correlation coefficients | CC BY-SA 3.0 | null | 2011-07-01T19:10:32.050 | 2016-12-02T11:37:58.237 | 2011-07-01T19:56:13.820 | null | 5252 | [
"correlation",
"cross-correlation",
"intraclass-correlation"
] |
12559 | 2 | null | 12558 | 6 | null | A general way to compare two correlation coefficients $\hat{\rho}_{1}, \hat{\rho}_{2}$ is to use Fisher's z-transform method, which says that ${\rm arctanh}(\hat{\rho})$ is approximately normal with mean ${\rm arctanh}(\rho)$ and standard deviation $1/\sqrt{n-3}$. If the samples are independent, then you transform each correlation coefficient and the difference between the two transformed correlations will be normal with mean ${\rm arctanh}(\rho_{1})-{\rm arctanh}(\rho_{2})$ and standard deviation $\sqrt{1/(n_{1}-3) + 1/(n_{2}-3)}$. From this you can form a $z$-statistic and do testing as you would in an ordinary two-sample $z$-test.
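A minimal numerical sketch of this test (in Python for illustration; the correlations and sample sizes are invented):

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    """Two-sample z-test for a difference between two independent correlations."""
    z1, z2 = math.atanh(r1), math.atanh(r2)           # Fisher's z-transform
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))   # sd of the difference
    z = (z1 - z2) / se
    # two-sided p-value from the standard normal CDF
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

z, p = fisher_z_test(0.85, 78, 0.75, 35)
print(round(z, 2), round(p, 3))
```

With these made-up inputs the difference is not significant at the 5% level, which illustrates how little power small samples give for detecting moderate differences between large correlations.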
| null | CC BY-SA 3.0 | null | 2011-07-01T19:31:59.917 | 2011-07-01T19:31:59.917 | null | null | 4856 | null |
12560 | 2 | null | 12558 | 2 | null | Though @Macro's answer is nice, it does require an assumption about the (in)dependence of the statistics. Another approach would be to use bootstrapping. The idea would be to keep one variable fixed and shuffle the other variable, calculate the correlation for each of your samples, and take their difference. Repeat many times to get a distribution and use this distribution to test the hypothesis that the correlations are the same. The structure of your data set isn't that clear to me, so it's hard to provide more details.
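One common variant of this idea, sketched below in Python, shuffles which pairs belong to which group and recomputes the difference in correlations each time (the data setup is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Hypothetical data: two groups of paired measurements.
x1, y1 = rng.normal(size=(2, 60))
y1 = 0.8 * x1 + 0.6 * y1            # group 1: strongly correlated
x2, y2 = rng.normal(size=(2, 60))   # group 2: uncorrelated

observed = corr(x1, y1) - corr(x2, y2)

# Null distribution: shuffle pairs between the two groups and recompute.
pairs = np.concatenate([np.column_stack([x1, y1]), np.column_stack([x2, y2])])
diffs = []
for _ in range(2000):
    rng.shuffle(pairs)
    g1, g2 = pairs[:60], pairs[60:]
    diffs.append(corr(g1[:, 0], g1[:, 1]) - corr(g2[:, 0], g2[:, 1]))

p_value = np.mean(np.abs(diffs) >= abs(observed))
print(p_value)
```

A small p-value indicates the two correlations differ by more than shuffling pair membership can explain; note this keeps each (x, y) pair intact and only permutes group labels.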
| null | CC BY-SA 3.0 | null | 2011-07-01T19:38:05.900 | 2011-07-01T19:38:05.900 | null | null | 401 | null |
12561 | 2 | null | 12526 | 3 | null | I think Dikran (+1) is right in pointing to the no-free-lunch theorems and the ad hoc nature of working with missing-value imputation. The best method is indeed highly dependent on the particular case you deal with. Moreover, the optimality criterion is unclear: even if you do some Monte Carlo simulations fixing the data-generating process, the conclusions won't prove optimality. You might state, though, that the data does not (yet) contradict the claim that a particular imputation is superior for this particular data set (this is the only thing you can show by simulations). Thus I can only give some recommendations based on personal recent experience.
---
It seems that Expectation-Maximization (EM) for time-series imputation based on data-rich data sets (in the context of factor models, to be more precise) returns visually acceptable results for scaled (standardized) data. The imputed data may be easily unscaled back to the original units, which also speaks in favor of the EM method as applied to time series. To speed up the convergence of the EM method, I would recommend imputing NAs by interpolation (in R you may consider `na.approx` for linear and `na.spline` for cubic-spline approximations) and then running the iterations. The described method worked pretty well for macroeconomic time series; for (ultra-)high-frequency financial data the nature of the missing values may be important. I also vote for Dikran's suggestion regarding [multiple imputation](http://www.stat.psu.edu/~jls/mifaq.html) (MI). You may need the additional flexibility that multiple imputation provides. The only thing that is left unclear is how both methods work with a low signal/noise ratio, as the common signal may be dominated by the idiosyncratic volatility within the response time series.
| null | CC BY-SA 3.0 | null | 2011-07-01T20:19:18.307 | 2011-07-01T23:35:14.427 | 2011-07-01T23:35:14.427 | 2645 | 2645 | null |
12562 | 1 | 12565 | null | 37 | 29251 | I am new to Machine Learning, and am trying to learn it on my own. Recently I was reading through [some lecture notes](https://web.archive.org/web/20111202153913/http://www.cs.cmu.edu/~epxing/Class/10701/recitation/recitation3.pdf) and had a basic question.
Slide 13 says that "Least Square Estimate is same as Maximum Likelihood Estimate under a Gaussian model". It seems like it is something simple, but I am unable to see this. Can someone please explain what is going on here? I am interested in seeing the Math.
I will later try to see the probabilistic viewpoint of Ridge and Lasso regression also, so if there are any suggestions that will help me, that will be much appreciated also.
| Equivalence between least squares and MLE in Gaussian model | CC BY-SA 3.0 | null | 2011-07-01T21:31:15.127 | 2017-10-02T12:01:50.953 | 2017-10-02T12:01:50.953 | 7290 | 3301 | [
"regression",
"bayesian",
"least-squares"
] |
12564 | 2 | null | 12546 | 25 | null | Short answer: Your problem can be formulated as a linear program (LP), leaving you to choose your favorite LP solver for the task. To see how to write the problem as an LP, read on.
This minimization problem is often referred to as Chebyshev approximation.
Let $\newcommand{\y}{\mathbf{y}}\newcommand{\X}{\mathbf{X}}\newcommand{\x}{\mathbf{x}}\newcommand{\b}{\mathbf{\beta}}\newcommand{\reals}{\mathbb{R}}\newcommand{\ones}{\mathbf{1}_n} \y = (y_i) \in \reals^n$, $\X \in \reals^{n \times p}$ with row $i$ denoted by $\x_i$ and $\b \in \reals^p$. Then we seek to minimize the function $f(\b) = \|\y - \X \b\|_\infty$ with respect to $\b$. Denote the optimal value by
$$
f^\star = f(\b^\star) = \inf \{f(\b): \b \in \reals^p \} \>.
$$
The key to recasting this as an LP is to rewrite the problem in epigraph form. It is not difficult to convince oneself that, in fact,
$$
f^\star = \inf\{t: f(\b) \leq t, \;t \in \reals, \;\b \in \reals^p \} \> .
$$
Now, using the definition of the function $f$, we can rewrite the right-hand side above as
$$
f^\star = \inf\{t: -t \leq y_i - \x_i \b \leq t, \;t \in \reals, \;\b \in \reals^p,\; 1 \leq i \leq n \} \>,
$$
and so we see that minimizing the $\ell_\infty$ norm in a regression setting is equivalent to the LP
$$
\begin{array}{ll}
\text{minimize} & t \\
\text{subject to} & \y-\X \b \leq t\ones \\
& \y - \X \b \geq - t \ones \>, \\
\end{array}
$$
where the optimization is done over $(\b, t)$, and $\ones$ denotes a vector of ones of length $n$. I leave it as an (easy) exercise for the reader to recast the above LP in standard form.
Relationship to the $\ell_1$ (total variation) version of linear regression
It is interesting to note that something very similar can be done with the $\ell_1$ norm. Let $g(\b) = \|\y - \X \b \|_1$. Then, similar arguments lead one to conclude that
$$\newcommand{\t}{\mathbf{t}}
g^\star = \inf\{\t^T \ones : -t_i \leq y_i - \x_i \b \leq t_i, \;\t = (t_i) \in \reals^n, \;\b \in \reals^p,\; 1 \leq i \leq n \} \>,
$$
so that the corresponding LP is
$$
\begin{array}{ll}
\text{minimize} & \t^T \ones \\
\text{subject to} & \y-\X \b \leq \t \\
& \y - \X \b \geq - \t \>. \\
\end{array}
$$
Note here that $\t$ is now a vector of length $n$ instead of a scalar, as it was in the $\ell_\infty$ case.
The similarity in these two problems and the fact that they can both be cast as LPs is, of course, no accident. The two norms are related in that that they are the dual norms of each other.
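A minimal numerical check of the $\ell_\infty$ LP above, using Python's `scipy.optimize.linprog` on simulated data (the coefficients and noise level are invented):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, -1.0])            # invented for illustration
y = X @ beta_true + rng.uniform(-0.5, 0.5, size=n)

# Variables are (beta_1, ..., beta_p, t); the objective is to minimize t.
c = np.r_[np.zeros(p), 1.0]
A_ub = np.block([[ X, -np.ones((n, 1))],          #  X b - t 1 <=  y
                 [-X, -np.ones((n, 1))]])         # -X b - t 1 <= -y
b_ub = np.r_[y, -y]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * p + [(0, None)])
beta_hat, t_hat = res.x[:p], res.x[p]
print(t_hat)  # the minimized L-infinity norm of the residuals
```

At the optimum, `t_hat` equals the largest absolute residual, and with bounded noise it cannot exceed the noise half-width achieved by the true coefficients.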
| null | CC BY-SA 3.0 | null | 2011-07-01T22:30:21.127 | 2011-07-02T01:51:13.073 | 2011-07-02T01:51:13.073 | 2970 | 2970 | null |
12565 | 2 | null | 12562 | 37 | null | In the model
$ Y = X \beta + \epsilon $
where $\epsilon \sim N(0,\sigma^{2})$, the log-likelihood of $Y|X$ for a sample of $n$ subjects is (up to an additive constant)
$$ \frac{-n}{2} \log(\sigma^{2}) - \frac{1}{2 \sigma^{2}} \sum_{i=1}^{n} (y_{i}-x_{i} \beta)^{2} $$
Viewed as a function of $\beta$ only, the maximizer is exactly the value that minimizes
$$ \sum_{i=1}^{n} (y_{i}-x_{i} \beta)^{2} $$
Does this make the equivalence clear?
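A small numerical illustration (in Python; the design and coefficients are invented) confirming that the least-squares solution is the likelihood maximizer:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 3.0]) + rng.normal(scale=0.5, size=n)

# Least-squares estimate (closed form).
beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

# Negative Gaussian log-likelihood in beta (sigma^2 held fixed); up to
# constants it is RSS / (2 sigma^2), so its minimizer is the LS estimate.
def neg_loglik(beta, sigma2=0.25):
    r = y - X @ beta
    return 0.5 * n * np.log(sigma2) + 0.5 * np.sum(r**2) / sigma2

# Setting the gradient -X'r / sigma^2 to zero gives the normal equations
# X'X beta = X'y, i.e. exactly the least-squares estimator:
beta_mle = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_ls, beta_mle)
```

The two vectors agree to machine precision, and perturbing the estimate in any direction can only increase the negative log-likelihood.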
| null | CC BY-SA 3.0 | null | 2011-07-01T23:37:56.503 | 2011-07-01T23:37:56.503 | null | null | 4856 | null |
12566 | 1 | null | null | 1 | 367 | I have an experiment that perturbs variable x and causes a change in variable z. There is a concurrent change in variable y. How can I determine whether variable y is on the causal path between x and z or if it is an irrelevant epiphenomenon?
| What's on the causal path? | CC BY-SA 3.0 | null | 2011-07-01T23:40:08.723 | 2020-09-03T02:32:59.523 | null | null | 3033 | [
"causality",
"linear-model",
"graphical-model"
] |
12567 | 1 | 12571 | null | 12 | 1197 | In the wikipedia article on [Credible Interval](http://en.wikipedia.org/wiki/Credible_interval), it says:
> For the case of a single parameter and data that can be summarised in a single sufficient statistic, it can be shown that the credible interval and the confidence interval will coincide if the unknown parameter is a location parameter (i.e. the forward probability function has the form Pr(x | μ) = f(x − μ) ), with a prior that is a uniform flat distribution;[5] and also if the unknown parameter is a scale parameter (i.e. the forward probability function has the form Pr(x | s) = f(x / s) ), with a Jeffreys' prior [5] — the latter following because taking the logarithm of such a scale parameter turns it into a location parameter with a uniform distribution. But these are distinctly special (albeit important) cases; in general no such equivalence can be made."
Could people give specific examples of this? When does the 95% CI actually correspond to "95% chance", thus "violating" the general definition of CI?
| Examples of when confidence interval and credible interval coincide | CC BY-SA 3.0 | null | 2011-07-02T01:01:01.813 | 2015-06-10T11:00:16.097 | 2011-07-02T09:37:36.837 | null | 1764 | [
"confidence-interval",
"credible-interval"
] |
12568 | 2 | null | 1555 | 1 | null | Was going to leave this as a comment, but it was getting too long...
While the chi-square statistic may not be significant, its numerical value is large. You can interpret $\frac{1}{2}\chi^{2}$ as an approximate log-likelihood ratio against the best alternative in the Bernoulli class. So the data supports the best alternative (Observed=Expected) to the independence hypothesis by a factor of $\exp\left(\frac{1}{2}\chi^{2}\right)=6,507,722$. So you are justified in looking for a better hypothesis, as your intuition suggests.
The problem is that with $10*8-1=79$ degrees of freedom, there are a huge number of alternatives, so you are at risk of finding spurious relationships in your data (i.e. "fitting the noise"). So you basically need some good prior information to back up this better alternative.
Column 1 and Column 2 are clearly different in terms of how they are represented in the data as your test appears to indicate (column1 has many more, and column2 has almost zero observations). If you separate off these, the Chi-square for the remaining $6$ columns is now just $15.83$ for an approximate likelihood ratio of $2,741$. However, this could well be just an artifact of the sampling scheme.
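The arithmetic behind these evidence factors can be sketched as follows (in Python, using the figures quoted above):

```python
import math

def evidence_factor(chi2):
    """exp(chi2 / 2): approximate likelihood ratio for the best alternative."""
    return math.exp(chi2 / 2.0)

# The chi-square of 15.83 for the remaining 6 columns:
print(evidence_factor(15.83))          # on the order of a few thousand

# Back out the chi-square implied by the 6,507,722 factor for the full table:
print(2.0 * math.log(6507722.0))       # roughly 31.4
```

This makes the point of the answer concrete: a chi-square that is "not significant" with 79 degrees of freedom can still correspond to a large likelihood ratio against independence.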
| null | CC BY-SA 3.0 | null | 2011-07-02T01:02:12.770 | 2011-07-02T01:10:51.207 | 2011-07-02T01:10:51.207 | 2392 | 2392 | null |
12569 | 1 | 12578 | null | 3 | 605 | Could someone compare k-means and kernel PCA in the domain of vector quantization (memory, speed, effectiveness, ...)?
| Kernel PCA vs. k-means | CC BY-SA 3.0 | null | 2011-07-02T01:22:08.810 | 2011-07-02T13:55:03.513 | 2011-07-02T10:10:53.627 | null | 5258 | [
"machine-learning"
] |
12571 | 2 | null | 12567 | 15 | null |
## Normal distribution:
Take a normal distribution with known variance. We can take this variance to be 1 without losing generality (by simply dividing each observation by the square root of the variance). This has sampling distribution:
$$p(X_{1}...X_{N}|\mu)=\left(2\pi\right)^{-\frac{N}{2}}\exp\left(-\frac{1}{2}\sum_{i=1}^{N}(X_{i}-\mu)^{2}\right)=A\exp\left(-\frac{N}{2}(\overline{X}-\mu)^{2}\right)$$
Where $A$ is a constant which depends only on the data. This shows that the sample mean is a sufficient statistic for the population mean. If we use a uniform prior, then the posterior distribution for $\mu$ will be:
$$(\mu|X_{1}...X_{N})\sim Normal\left(\overline{X},\frac{1}{N}\right)\implies \left(\sqrt{N}(\mu-\overline{X})|X_{1}...X_{N}\right)\sim Normal(0,1)$$
So a $1-\alpha$ credible interval will be of the form:
$$\left(\overline{X}+\frac{1}{\sqrt{N}}L_{\alpha},\overline{X}+\frac{1}{\sqrt{N}}U_{\alpha}\right)$$
Where $L_{\alpha}$ and $U_{\alpha}$ are chosen such that a standard normal random variable $Z$ satisfies:
$$Pr\left(L_{\alpha}<Z<U_{\alpha}\right)=1-\alpha$$
Now we can start from this "pivotal quantity" for constructing a confidence interval. The sampling distribution of $\sqrt{N}(\mu-\overline{X})$ for fixed $\mu$ is a standard normal distribution, so we can substitute this into the above probability:
$$Pr\left(L_{\alpha}<\sqrt{N}(\mu-\overline{X})<U_{\alpha}\right)=1-\alpha$$
Then re-arrange to solve for $\mu$, and the confidence interval will be the same as the credible interval.
## Scale parameters:
For scale parameters, the pdfs have the form $p(X_{i}|s)=\frac{1}{s}f\left(\frac{X_{i}}{s}\right)$. We can take $(X_{i}|s)\sim Uniform(0,s)$, which corresponds to $f(t)=1$. The joint sampling distribution is:
$$p(X_{1}...X_{N}|s)=s^{-N}\;\;\;\;\;\;\;0<X_{1}...X_{N}<s$$
From which we find the sufficient statistic to be equal to $X_{max}$ (the maximum of the observations). We now find its sampling distribution:
$$Pr(X_{max}<y|s)=Pr(X_{1}<y,X_{2}<y...X_{N}<y|s)=\left(\frac{y}{s}\right)^{N}$$
Now we can make this independent of the parameter by taking $y=qs$. This means our "pivotal quantity" is given by $Q=s^{-1}X_{max}$ with $Pr(Q<q)=q^{N}$ which is the $beta(N,1)$ distribution. So, we can choose $L_{\alpha},U_{\alpha}$ using the beta quantiles such that:
$$Pr(L_{\alpha}<Q<U_{\alpha})=1-\alpha=U_{\alpha}^{N}-L_{\alpha}^{N}$$
And we substitute the pivotal quantity:
$$Pr(L_{\alpha}<s^{-1}X_{max}<U_{\alpha})=1-\alpha=Pr(X_{max}L_{\alpha}^{-1}>s>X_{max}U_{\alpha}^{-1})$$
And there is our confidence interval. For the Bayesian solution with the Jeffreys prior we have:
$$p(s|X_{1}...X_{N})=\frac{s^{-N-1}}{\int_{X_{max}}^{\infty}r^{-N-1}dr}=N (X_{max})^{N}s^{-N-1}$$
$$\implies Pr(s>t|X_{1}...X_{N})=N (X_{max})^{N}\int_{t}^{\infty}s^{-N-1}ds=\left(\frac{X_{max}}{t}\right)^{N}$$
We now plug in the confidence interval, and calculate its credibility
$$Pr(X_{max}L_{\alpha}^{-1}>s>X_{max}U_{\alpha}^{-1}|X_{1}...X_{N})=\left(\frac{X_{max}}{X_{max}U_{\alpha}^{-1}}\right)^{N}-\left(\frac{X_{max}}{X_{max}L_{\alpha}^{-1}}\right)^{N}$$
$$=U_{\alpha}^{N}-L_{\alpha}^{N}=Pr(L_{\alpha}<Q<U_{\alpha})$$
And presto, we have $1-\alpha$ credibility and coverage.
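A quick simulation (in Python; the sample size, scale, and quantile split are my choices) confirming the 95% coverage of the interval $(X_{max}U_{\alpha}^{-1},\,X_{max}L_{\alpha}^{-1})$ derived above:

```python
import numpy as np

rng = np.random.default_rng(0)
s_true, N, alpha, reps = 3.0, 10, 0.05, 20000

# One valid choice of beta(N, 1) quantiles with U^N - L^N = 1 - alpha:
L = (alpha / 2) ** (1 / N)
U = (1 - alpha / 2) ** (1 / N)

# Simulate many Uniform(0, s) samples and record the sufficient statistic.
x_max = rng.uniform(0, s_true, size=(reps, N)).max(axis=1)
lo, hi = x_max / U, x_max / L        # the interval (X_max U^-1, X_max L^-1)
coverage = np.mean((lo < s_true) & (s_true < hi))
print(coverage)
```

The empirical coverage sits right at 0.95, matching the posterior credibility computed analytically in the answer.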
| null | CC BY-SA 3.0 | null | 2011-07-02T01:45:50.613 | 2015-06-10T11:00:16.097 | 2015-06-10T11:00:16.097 | 12100 | 2392 | null |
12572 | 2 | null | 12566 | 1 | null | Experimental design rather than statistics alone will give you a solid answer. You may well have already thought about this and ruled it out as a possibility, but can you design an experiment that tests whether Z changes as a function of Y when X is held constant? Further, perhaps you could build in trials that test the extent to which Z changes as a function of X when Y is held constant. Isolating predictors in this way is your best bet of capturing the causal relationships.
| null | CC BY-SA 3.0 | null | 2011-07-02T02:52:43.043 | 2011-07-04T01:16:13.457 | 2011-07-04T01:16:13.457 | 2669 | 2669 | null |
12573 | 1 | 14319 | null | 7 | 5717 | The transfer entropy, from information theory, is an effective way to measure the one-way information dependence between two variables. A nice high-level summary is here:
[http://lizier.me/joseph/presentations/20060503-Schreiber-MeasuringInfoTransfer.pdf](http://lizier.me/joseph/presentations/20060503-Schreiber-MeasuringInfoTransfer.pdf)
I see that there is a package for entropy and mutual information estimation (http://strimmerlab.org/software/entropy/), but not for the one-way transfer metric.
What is an efficient way to calculate this in R? Perhaps I can use a chart output or metric from the mutual information package as a starting point.
| Calculating the transfer entropy in R | CC BY-SA 3.0 | null | 2011-06-30T00:19:13.453 | 2019-05-10T08:17:41.147 | 2011-07-02T14:00:26.327 | null | 8101 | [
"r",
"mathematical-statistics",
"entropy",
"information-theory"
] |
12574 | 2 | null | 12460 | 3 | null | If the aim is to discover the best model with four terms (e.g. polynomial, Fourier series, Taylor series), then it is essentially a model-selection issue and an optimisation problem; as such it is almost certainly covered by the no-free-lunch theorem, so the model that is most accurate will depend on the particular dataset you have at hand. That means, unless the problem is restricted to a particular kind of dataset, there will be no better approach than to fit all the models and find out.
| null | CC BY-SA 3.0 | null | 2011-07-02T08:09:43.700 | 2011-07-02T08:09:43.700 | null | null | 887 | null |
12575 | 1 | null | null | 2 | 496 | Here I am again. From this script:
```
model1 <- zelig(Decision ~ as.factor(age) + as.factor(income) + as.factor(town.type) +
tag(town.type|town), model="logit.mixed", data=Fish)
```
Does it mean that `town.type` is treated as both a fixed effect and a random effect, and that `town` is treated as a random effect nested in `town.type`?
I am currently analyzing the effects of `age` and `income` on the decision of fishers to exit the fishery. In addition, I added `town.type` (whether the respondent comes from a developed (coded 2) or less developed (coded 1) town) as a fixed effect. We surveyed many towns, each of which we categorized into one of the two town types.
I'm sorry, I am new to R. Hope you can help me out. Thanks a lot.
| Hierarchical modeling in R | CC BY-SA 3.0 | null | 2011-07-02T09:50:50.960 | 2011-07-02T10:10:01.303 | 2011-07-02T10:10:01.303 | 307 | 4848 | [
"r",
"multilevel-analysis"
] |
12576 | 1 | null | null | 1 | 1683 | First off, let me say that I'm extremely poorly versed in statistics, and this is a question purely about terminology.
I have a distribution for some quantity (height of person, say) and the most likely 95% of outcomes are within some range $h_0$ to $h_1$. I want to know what to call (ideally a 2 or 3 word phrase) this range. It is not the case that the distribution is normal, though we can safely assume that it is unimodal.
Clarification:
Let's say I have a coin, which I think is biased for heads with $p_h=0.6$. Now I toss it $n$ times and I get $a$ heads. I want to write a sentence which basically states that the result is in some 95% "likelihood region", (something like "middle 95 percentile" seems a little clunky, if not ambiguous --- I'm not taking a region about some median!). Now, it might be that my $p_h$ is actually a prediction based on some data, and I have some uncertainties in it, but I don't want to include those uncertainties! I want a term that refers to the fact that because I didn't toss the coin an infinite number of times, there is some variance in the ratio $a/n$.
| What is the name for a "typical" range of values? | CC BY-SA 3.0 | null | 2011-07-02T12:14:07.307 | 2011-07-03T14:27:40.363 | 2011-07-03T14:27:40.363 | 919 | 5259 | [
"terminology"
] |
12577 | 2 | null | 12576 | 3 | null | This is interval estimation and it sounds like a "prediction interval".
| null | CC BY-SA 3.0 | null | 2011-07-02T12:32:22.403 | 2011-07-03T14:22:50.647 | 2011-07-03T14:22:50.647 | 919 | 2392 | null |
12578 | 2 | null | 12569 | 2 | null | Kernel PCA is a dimensionality reduction/visualisation algorithm, it isn't really suitable for vector-quantization. For a kernel approach to vector quantisation, see the paper by [Tipping and Scholkopf.](http://www.gatsby.ucl.ac.uk/aistats/aistats2001/files/tipping140.pdf). The example where you can still read the registration number of the car for the kernel algorithm, but not with the LBG algorithm (similar to k-means), is really neat (figure 2).
| null | CC BY-SA 3.0 | null | 2011-07-02T13:55:03.513 | 2011-07-02T13:55:03.513 | null | null | 887 | null |
12579 | 2 | null | 12576 | 4 | null | I'm not sure the middle 95% has a name, but the middle 50% does: "interquartile range".
The middle 80% does as well: interdecile range.
...
Actually, a bit of poking around on google turned up "95% interpercentile range" and "2.5-97.5 interpercentile range."
I hadn't heard those terms before (they don't seem to be common), but if I saw them in a paper, especially the second one, I would immediately know what the author meant. So I think you should be just fine using them.
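To make this concrete, a small sketch (in Python; the height distribution is invented) computing a 2.5-97.5 interpercentile range:

```python
import numpy as np

rng = np.random.default_rng(0)
heights = rng.normal(loc=170, scale=10, size=100_000)  # hypothetical heights

# The "95% interpercentile range": the middle 95% of the distribution.
h0, h1 = np.percentile(heights, [2.5, 97.5])
print(h0, h1)  # near 170 - 1.96 * 10 and 170 + 1.96 * 10
```

For a normal distribution this recovers the familiar mean ± 1.96 sd band, but the percentile definition works for any unimodal (or even multimodal) distribution.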
| null | CC BY-SA 3.0 | null | 2011-07-02T14:14:20.160 | 2011-07-02T14:14:20.160 | null | null | 4862 | null |
12580 | 1 | 12613 | null | 0 | 2844 | I used the following example code from latticeExtra to understand two-way clustering in R
```
library(latticeExtra)
data(mtcars)
x <- t(as.matrix(scale(mtcars)))
dd.row <- as.dendrogram(hclust(dist(x)))
row.ord <- order.dendrogram(dd.row)
dd.col <- as.dendrogram(hclust(dist(t(x))))
col.ord <- order.dendrogram(dd.col)
library(lattice)
levelplot(x[row.ord, col.ord],
aspect = "fill",
scales = list(x = list(rot = 90)),
colorkey = list(space = "left"),
legend =
list(right =
list(fun = dendrogramGrob,
args =
list(x = dd.col, ord = col.ord,
side = "right",
size = 10)),
top =
list(fun = dendrogramGrob,
args =
list(x = dd.row,
side = "top",
size = 10))))
```
and this is what I got

Joining of both row and column entities makes sense to me, but I'm confused by the different color shades of the heatmap.
Questions
- Does the joining of the row variables also take into account the column variables, and vice versa?
- What do the different colors in the heatmap mean for the clustering of the row variables as well as for the column variables? Specifically, focus on the cyl and disp row variables.
| Interpretation of two-way clustering in R | CC BY-SA 3.0 | null | 2011-07-02T17:22:40.927 | 2011-07-04T10:32:48.560 | null | null | 3903 | [
"r",
"clustering"
] |
12581 | 2 | null | 12580 | 1 | null | The colours are fairly simple: blue means more, purple means less, when compared to the distribution of that variable across all the cars.
So for `cyl` there are three possibilities: 8 (light blue), 6 (pale pink) and 4 (purple).
Similarly for `disp` the Lincoln Continental and Chrysler Imperial have over 400 so are blue while the Toyota Corolla, Fiat 128, Fiat X1-9 and Honda Civic have under 80 and are purple.
As for the clustering, the rows are clustered using the column variables, so those cars with similar values are more likely to be clustered together. The columns are clustered according to how much similarity there is in the information they give about the cars. Displacement depends on the number of cylinders and the dimensions of each cylinder, so gives similar information to the number of cylinders, and this makes it likely they will be clustered together.
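To make the colour coding concrete, here is a minimal sketch of what R's `scale()` does to a column (Python used for illustration; the car lineup is invented):

```python
import numpy as np

# Toy column in the spirit of mtcars' cyl (8-, 6- and 4-cylinder cars).
cyl = np.array([8.0, 8.0, 6.0, 6.0, 4.0, 4.0, 4.0])

# scale() centers and standardizes each column; the heatmap colours encode
# these z-scores: 8-cylinder cars sit above the mean (blue),
# 4-cylinder cars below it (purple).
z = (cyl - cyl.mean()) / cyl.std(ddof=1)
print(z.round(2))
```

Since each column is standardized separately, the colours are only comparable within a column relative to the other cars, not across columns in the original units.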
| null | CC BY-SA 3.0 | null | 2011-07-02T19:17:17.787 | 2011-07-02T19:22:34.613 | 2011-07-02T19:22:34.613 | 2958 | 2958 | null |
12582 | 1 | null | null | 3 | 111 | This is crossposted from [StackOverflow](https://stackoverflow.com/questions/6558941/finding-average-settling-value-after-step-response). Someone suggested I should post it here, though I don't have enough karma to post images.
I'm gathering data from a biological monitoring system. They need to know the average value of the plateaus after changes to the system are made.

This is data for about 4 minutes. There is decent lag time between the event and the steady state response.
These values won't always be at this level. They want me to find where the steady-state response starts and average the values during that time. My boss, who is a biologist, said there may be overshoot and random fluctuations... and that I might need to use a z-transform. Unfortunately he wasn't more specific than that.
I feel decently competent as a programmer, but wasn't sure what the most efficient way would be to go about finding these values.
Any algorithms, insights or approaches would be greatly appreciated. Thanks.
| Finding average settling value after step response | CC BY-SA 3.0 | null | 2011-07-02T19:32:08.710 | 2011-07-03T16:11:37.567 | 2017-05-23T12:39:27.620 | -1 | 3089 | [
"algorithms",
"signal-processing"
] |
12583 | 2 | null | 4667 | 1 | null | See the bottom 2 thirds of [http://cran.fhcrc.org/other-docs.html](http://cran.fhcrc.org/other-docs.html) (or other cran site).
| null | CC BY-SA 3.0 | null | 2011-07-02T19:44:26.100 | 2011-07-02T19:44:26.100 | null | null | 4505 | null |
12584 | 2 | null | 12582 | 1 | null | What you have is a time series interrupted by level shifts. These level shifts are often (in your case definitely!) not known a priori. I suggest that you investigate by googling "automatic intervention detection". This and other intelligent searches for intervention detection in time series should yield some results. The bottom line is to characterize/model your time series with both ARIMA and outlier detection. Outliers can be pulses, level shifts, seasonal pulses and/or local time trends. Care should be taken to investigate the detection of interventions both using the observed data to build an ARIMA model first and, alternatively, subsequent to the detection of the interventions. You might review some of my other postings on the subjects of time series, outliers, and exception reporting, particularly [Outlier detection for generic time series](https://stats.stackexchange.com/questions/12498/outlier-detection-for-generic-time-series/12509#12509).
| null | CC BY-SA 3.0 | null | 2011-07-02T21:00:51.197 | 2011-07-03T16:11:37.567 | 2017-04-13T12:44:25.243 | -1 | 3382 | null |
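As a toy illustration of the intervention-detection idea in the answer above, one can scan every candidate breakpoint and score it with a two-sample t-like statistic; real intervention-detection procedures do considerably more (ARIMA modelling, multiple intervention types), so treat this as a sketch only. The function name and `min_seg` guard are my own choices:

```python
import numpy as np

def best_level_shift(y, min_seg=5):
    """Score every candidate breakpoint for a single level shift.

    At each split, compare the means of the two segments with a
    two-sample t-like statistic and return the split with the
    largest absolute score. min_seg keeps segments non-degenerate.
    """
    y = np.asarray(y, dtype=float)
    best_t, best_stat = None, 0.0
    for t in range(min_seg, len(y) - min_seg):
        left, right = y[:t], y[t:]
        # Standard error of the difference in segment means.
        pooled = np.sqrt(left.var(ddof=1) / len(left)
                         + right.var(ddof=1) / len(right))
        stat = abs(right.mean() - left.mean()) / pooled
        if stat > best_stat:
            best_t, best_stat = t, stat
    return best_t, best_stat
```

A large returned statistic at some split is evidence of a level shift there; repeated application to the residual segments extends this to multiple shifts.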
12585 | 2 | null | 2379 | 28 | null | Michael Jordan has a short article called [What are the Open Problems in Bayesian Statistics?](http://bayesian.org/sites/default/files/fm/bulletins/1103.pdf), in which he polled a bunch of statisticians for their views on the open problems in statistics. I'll summarize (aka, copy-and-paste) a bit here, but it's probably best just to read the original.
# Nonparametrics and semiparametrics
- For what problems is Bayesian nonparametrics useful and worth the trouble?
- David Dunson: "Nonparametric Bayes models involve infinitely many parameters and priors are typically chosen for convenience with hyperparameters set at seemingly reasonable values with no proper objective or subjective justification."
- "It was noted by several people that one of the appealing applications of frequentist nonparametrics is to semiparametric inference, where the nonparametric component of the model is a nuisance parameter. These people felt that it would be desirable to flesh out the (frequentist) theory of Bayesian semiparametrics."
# Priors
- "Elicitation remains a major source of open problems."
- 'Aad van der Vaart turned objective Bayes on its head and pointed to a lack of theory for "situations where one wants the prior to come through in the posterior" as opposed to "merely providing a Bayesian approach to smoothing."'
# Bayesian/frequentist relationships
- "Many respondents expressed a desire to further hammer out Bayesian/frequentist relationships. This was most commonly evinced in the context of high-dimensional models and data, where not only are subjective approaches to specification of priors difficult to implement but priors of convenience can be (highly) misleading."
- 'Some respondents pined for non-asymptotic theory that might reveal more fully the putative advantages of Bayesian methods; e.g., David Dunson: "Often, the frequentist optimal rate is obtained by procedures that clearly do much worse in finite samples than Bayesian approaches."'
# Computation and statistics
- Alan Gelfand: "If MCMC is no longer viable for the problems people want to address, then what is the role of INLA, of variational methods, of ABC approaches?"
- "Several respondents asked for a more thorough integration of computational science and statistical science, noting that the set of inferences that one can reach in any given situation are jointly a function of the model, the prior, the data and the computational resources, and wishing for more explicit management of the tradeoffs among these quantities. Indeed, Rob Kass raised the possibility of a notion of “inferential solvability,” where some problems are understood to be beyond hope (e.g., model selection in regression where “for modest amounts of data subject to nontrivial noise it is impossible to get useful confidence intervals about regression coefficients when there are large numbers of variables whose presence or absence in the model is unspecified a priori”) and where there are other problems (“certain functionals for which useful confidence intervals exist”) for which there is hope."
- "Several respondents, while apologizing for a certain vagueness, expressed a feeling that a large amount of data does not necessarily imply a large amount of computation; rather, that somehow the inferential strength present in large data should transfer to the algorithm and make it possible to make do with fewer computational steps to achieve a satisfactory (approximate) inferential solution."
# Model Selection and Hypothesis Testing
- George Casella: "We now do model selection but Bayesians don’t seem to worry about the properties of basing inference on the selected model. What if it is wrong? What are the consequences of setting up credible regions for a certain parameter $β_1$ when you have selected the wrong model? Can we have procedures with some sort of guarantee?"
- Need for more work on decision-theoretic foundations in model selection.
- David Spiegelhalter: "How best to make checks for prior/data conflict an integral part of Bayesian analysis?"
- Andrew Gelman: "For model checking, a key open problem is developing graphical tools for understanding and comparing models. Graphics is not just for raw data; rather, complex Bayesian models give opportunity for better and more effective exploratory data analysis."
| null | CC BY-SA 4.0 | null | 2011-07-02T22:03:50.030 | 2019-10-13T08:40:34.290 | 2019-10-13T08:40:34.290 | 11887 | 1106 | null |
12587 | 1 | null | null | 0 | 728 | I have already asked a [question on this forum](https://stats.stackexchange.com/questions/12453/what-test-do-i-use-in-order-to-analyze-a-within-participants-repeated-measure-exp) about this, but I have since changed my experiment a bit and I am still confused.
I conducted an experiment in which 25 men and 25 women listened to an attractive conversation and picked a photo (between a woman in a red shirt and a woman in a green shirt); next, they heard a neutral dialogue and did exactly the same, picking a photo of either a woman in red or a woman in green. My hypothesis is that men are much more attracted to women in red, in contrast to the female participants. I was thinking of using repeated measures ANOVA as both men and women were 'examined' in the same experimental conditions. So, I guess that my columns are: `gender` 2 levels (0 for males and 1 for females), `attraction` 2 levels (0 for no and 1 for yes) and `color` 2 levels (0 for green, i.e. no red, and 1 for red).
My problem is how do I show that each participant did this twice (i.e., there were two dialogues)? (Note: I am using SPSS.)
| What test do I use / how do I show in the analysis that I measured everyone twice? | CC BY-SA 3.0 | null | 2011-07-03T01:09:15.980 | 2013-07-04T02:31:04.047 | 2017-04-13T12:44:39.283 | -1 | 5218 | [
"hypothesis-testing",
"anova",
"spss",
"repeated-measures"
] |
12588 | 1 | null | null | 5 | 3528 | I conducted a t-test / ANOVA (both with repeated measures) and I want to represent the difference in means via a bar graph.
There are several different views about the appropriate error bars for repeated-measures designs: personal preference (Field, 2009), [Root Mean Square Error](https://doi.org/10.3758/BF03210790) (Estes, 1997), or [Statistical Significance Bars](https://www.lrdc.pitt.edu/schunn/SSB/) (Schunn, 1999).
What is the best solution?
| Appropriate error bars for repeated-measurements designs | CC BY-SA 4.0 | null | 2011-07-03T07:59:00.110 | 2022-06-20T14:18:59.610 | 2022-06-20T14:18:59.610 | 361019 | 5267 | [
"standard-error"
] |
12590 | 1 | null | null | 9 | 1154 | I would like to do something in R that SAS can do using SAS's proc mixed (there is also a way to do it in Stata), namely fitting the so-called bivariate model of Reitsma et al. (2005). This model is a special mixed model where the variance depends on the study (see below). Googling and talking to some people familiar with the model did not yield an approach that is both straightforward and fast (i.e. a nice high-level model-fitting function). I am nevertheless sure there is something fast in R that one can build on.
In a nutshell one is faced with the following situation: Given pairs of proportions $(p_1,p_2)$ in $[0,1]^2$ one would like to fit a bivariate normal to the logit-transformed pairs. Since the proportions come from 2x2 tables (i.e. binomial data), each logit-transformed observed proportion has a variance estimate, say $(s_1, s_2)$, that is to be included in the fitting process. So one would like to fit a bivariate normal to the pairs, where the covariance matrix of each pair depends on the observation, i.e.
$(\text{logit}(p_1),\text{logit}(p_2)) \sim N\left((\mu_1, \mu_2), \Sigma + S\right),$
where $S$ is the diagonal matrix with entries $(s_1, s_2)$; $S$ depends entirely on the data and varies from observation to observation, while $\mu$ and $\Sigma$ are the same for all observations.
Right now I am using a call to `optim()` (using BFGS) to estimate the five parameters ($\mu_1$, $\mu_2$, and three parameters for $\Sigma$). Nevertheless this is painfully slow, and especially unsuitable for simulation. Also, one of my aims is to introduce regression coefficients for $\mu$ later, increasing the number of parameters.
I tried speeding up fitting by supplying starting values and I also thought about computing gradients for the five parameters. Since the likelihood becomes quite complex due to the addition of $S$, I felt the risk of introducing errors this way was too big and did not attempt it yet, nor did I see a way to check my calculations.
Is the calculation of the gradients typically worthwhile? How do you check them?
I am aware of other optimizers besides `optim()`, e.g. `nlm()`, and I also know about the CRAN Task View on Optimization. Which ones are worth a try?
What kind of tricks are there to speed up `optim()` besides reducing accuracy?
I would be very grateful for any hints.
| Fitting a special mixed model in R - alternatives to optim() | CC BY-SA 3.0 | null | 2011-06-30T21:16:11.453 | 2018-03-15T22:42:48.133 | 2011-07-03T09:19:00.883 | null | 2916 | [
"r",
"optimization"
] |
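For readers who want to see the likelihood described in the question above written out: a minimal sketch of the five-parameter fit with per-observation covariance $\Sigma + S_i$, in Python rather than R (the function names and the unconstrained parameterisation via log standard deviations and atanh of the correlation are my own choices, not the poster's code):

```python
import numpy as np
from scipy.optimize import minimize

def fit_bivariate(logit_p, s):
    """Fit N(mu, Sigma + diag(s_i)) to pairs by maximum likelihood.

    logit_p: (n, 2) array of logit-transformed proportion pairs.
    s:       (n, 2) array of known per-observation variances.
    Five parameters: mu1, mu2, log sd1, log sd2, atanh(rho); the
    unconstrained parameterisation keeps Sigma positive definite.
    """
    def nll(theta):
        mu = theta[:2]
        sd = np.exp(theta[2:4])
        rho = np.tanh(theta[4])
        Sigma = np.array([[sd[0] ** 2, rho * sd[0] * sd[1]],
                          [rho * sd[0] * sd[1], sd[1] ** 2]])
        total = 0.0
        for x, si in zip(logit_p, s):
            V = Sigma + np.diag(si)  # observation-specific covariance
            d = x - mu
            total += 0.5 * (np.log(np.linalg.det(V)) + d @ np.linalg.solve(V, d))
        return total

    res = minimize(nll, np.zeros(5), method="BFGS")
    return res.x[:2], res
```

The loop over observations is what an analytic gradient would speed up; vectorising the 2x2 algebra across observations is usually the cheaper first win.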