Id stringlengths 1 6 | PostTypeId stringclasses 7 values | AcceptedAnswerId stringlengths 1 6 ⌀ | ParentId stringlengths 1 6 ⌀ | Score stringlengths 1 4 | ViewCount stringlengths 1 7 ⌀ | Body stringlengths 0 38.7k | Title stringlengths 15 150 ⌀ | ContentLicense stringclasses 3 values | FavoriteCount stringclasses 3 values | CreationDate stringlengths 23 23 | LastActivityDate stringlengths 23 23 | LastEditDate stringlengths 23 23 ⌀ | LastEditorUserId stringlengths 1 6 ⌀ | OwnerUserId stringlengths 1 6 ⌀ | Tags list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
11688 | 2 | null | 2717 | 0 | null | Co-clustering is one answer, I think, though I'm no expert here. Co-clustering isn't a new method, so you can find some algorithms in R; the Wikipedia article presents the concepts well. Another method that isn't mentioned is graph partitioning (but I see that the graph wouldn't be sparse; graph partitioning would be useful if your matrix were dominated by values meaning maximum distance, i.e. no similarity between the nodes).
| null | CC BY-SA 3.0 | null | 2011-06-07T22:55:36.453 | 2011-06-07T22:55:36.453 | null | null | 4908 | null |
11689 | 1 | 15248 | null | 4 | 3979 | I'm reading about the Linear Discriminant Analysis by Fisher and I have a couple of questions about its usage.
- If you have k>2 classes in a two-dimensional space you find k−1 vectors that you need to use to project the sample data. Is it possible that one sample is closer to different means along different vectors?
- Suppose you have, for example, 3 classes in a 3-dimensional space. You get two vectors, but now you need planes to be able to project your samples. Do you create those by combining the vectors obtained by LDA?
| Usage of LDA with more than two classes | CC BY-SA 3.0 | null | 2011-06-07T23:05:30.967 | 2011-09-06T20:22:54.613 | 2011-09-06T17:01:18.673 | 223 | 4889 | [
"machine-learning",
"clustering",
"classification",
"discriminant-analysis"
] |
11690 | 2 | null | 11531 | 0 | null | Maybe try a "moving deleter": in a window of p observations, compute the standard deviation, then delete observations whose absolute difference from the previous observation is x times bigger than the standard deviation in that window. This method may not work with densely packed outliers (one after another), as shown in the second picture.
P.S. Which program produced those pictures?
| null | CC BY-SA 3.0 | null | 2011-06-07T23:25:43.797 | 2011-06-07T23:25:43.797 | null | null | 4908 | null |
11691 | 1 | 11702 | null | 88 | 78312 | How would you know if your (high-dimensional) data exhibits enough clustering that the results from k-means or another clustering algorithm are actually meaningful?
For k-means algorithm in particular, how much of a reduction in within-cluster variance should there be for the actual clustering results to be meaningful (and not spurious)?
Should clustering be apparent when a dimensionally-reduced form of the data is plotted, and are the results from kmeans (or other methods) meaningless if the clustering cannot be visualized?
| How to tell if data is "clustered" enough for clustering algorithms to produce meaningful results? | CC BY-SA 3.0 | null | 2011-06-08T00:04:43.590 | 2015-02-09T02:07:12.587 | null | null | 2973 | [
"clustering",
"k-means"
] |
11692 | 2 | null | 11687 | 2 | null | If you want to model the data and the dependent categorical variable has no ordering (nominal) then you must use a multinomial logit model. If the dependent variable does have an ordering (ordinal) then you can use a cumulative logit model (proportional odds model).
For me personally, I find the results much easier to interpret for a proportional odds model compared to a multinomial model, especially when you want to report the results to someone not statistically knowledgeable.
These are not the only models you can use but they are very typical.
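For reference, the cumulative logit (proportional odds) model mentioned above can be written in the standard form (sign conventions for $\beta$ vary across texts; this is independent of any particular software):

$$\operatorname{logit} P(Y \le j \mid x) = \alpha_j - \beta^\top x, \qquad j = 1, \dots, k-1,$$

so a single slope vector $\beta$ is shared across all $k-1$ cumulative splits. That is exactly what makes its results easier to interpret than a multinomial logit, where each non-reference category gets its own slope vector $\beta_j$.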
| null | CC BY-SA 3.0 | null | 2011-06-08T00:49:52.040 | 2011-06-08T02:59:47.810 | 2011-06-08T02:59:47.810 | 2310 | 2310 | null |
11693 | 2 | null | 11687 | 4 | null | If you ignore the ordered nature of the variables, the appropriate methods will still provide a correct analysis, but the advantage of using methods for ordered data is that they provide greater information about the order and magnitude of the significant variables.
| null | CC BY-SA 3.0 | null | 2011-06-08T00:55:19.533 | 2011-06-08T00:55:19.533 | null | null | 4927 | null |
11694 | 2 | null | 11691 | 6 | null | I have just started using clustering algorithms recently, so hopefully someone more knowledgeable can provide a more complete answer, but here are some thoughts:
'Meaningful', as I'm sure you're aware, is very subjective. So whether the clustering is good enough is completely dependent upon why you need to cluster in the first place. If you're trying to predict group membership, it's likely that any clustering will do better than chance (and no worse), so the results should be meaningful to some degree.
If you want to know how reliable this clustering is, you need some metric to compare it to. If you have a set of entities with known memberships, you can use discriminant analysis to see how good the predictions were. If you don't have a set of entities with known memberships, you'll have to know what variance is typical of clusters in your field. Physical attributes of entities with rigid categories are likely to have much lower in-group variance than psychometric data on humans, but that doesn't necessarily make the clustering 'worse'.
Your second question alludes to 'What value of k should I choose?' Again, there's no hard answer here. In the absence of any a priori set of categories, you probably want to minimize the number of clusters while also minimizing the average cluster variance. A simple approach might be to plot 'number of clusters' vs 'average cluster variance', and look for the "elbow"-- where adding more clusters does not have a significant impact on your cluster variance.
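The elbow heuristic just described can be sketched numerically. Below is a hedged illustration in plain numpy (the toy three-cluster dataset and the deliberately simple k-means implementation are assumptions for the sketch, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(0)
# toy data: three well-separated 2-D clusters
data = np.vstack([rng.normal(c, 0.3, size=(100, 2))
                  for c in ([0, 0], [4, 0], [2, 3])])

def kmeans_wss(X, k, iters=50, restarts=5):
    """Plain k-means (Lloyd's algorithm); returns the best total
    within-cluster sum of squares over a few random restarts."""
    best = np.inf
    for _ in range(restarts):
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(1)
            centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                                else centers[j] for j in range(k)])
        best = min(best, float(((X - centers[labels]) ** 2).sum()))
    return best

wss = [kmeans_wss(data, k) for k in range(1, 7)]
# wss drops sharply up to k=3 and then flattens: the 'elbow' is at k=3
```

Plotting `range(1, 7)` against `wss` gives exactly the "number of clusters vs. average cluster variance" curve described above.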
I wouldn't say the results from k-means are meaningless if they cannot be visualized, but it's certainly appealing when the clusters are visually apparent. This, again, just leads back to the question: why do you need to do clustering, and how reliable do you need it to be? Ultimately, this is a question that you need to answer based on how you will use the data.
| null | CC BY-SA 3.0 | null | 2011-06-08T02:08:11.113 | 2011-06-08T02:08:11.113 | null | null | 1977 | null |
11695 | 1 | null | null | 2 | 365 | What is power in logistic regression? Is it the ability of the test to reject the null hypothesis when it is actually false?
Second, if you're trying to maximize your statistical power when doing a logistic regression, is it better to use predictor values that are only high or low or a range of predictor values?
| Logistic regression - power and predictor values | CC BY-SA 3.0 | null | 2011-06-08T03:37:16.950 | 2011-06-08T05:40:59.183 | 2011-06-08T03:51:10.397 | 183 | 4928 | [
"logistic",
"statistical-power"
] |
11696 | 1 | null | null | 0 | 1386 | Can you please suggest a good model-based learning algorithm for recommending items to users? Is there any open-source implementation of a model-based learning algorithm available? I am fairly sure Apache Mahout hasn't implemented any model-based learning algorithms.
| Model-based learning algorithm for recommendation engine | CC BY-SA 3.0 | null | 2011-06-08T05:07:14.653 | 2017-10-24T14:12:32.040 | 2011-06-08T08:41:15.707 | null | 4665 | [
"machine-learning",
"recommender-system"
] |
11697 | 2 | null | 11695 | 2 | null | Power is, by definition, what you wrote: the ability to reject a false null hypothesis. That is, it measures how assertively a model can say that a predictor x has something to do with the dependent variable y. Power is a probability, so the closer it is to 1, the better.
For the second question, there is no fixed answer; it depends on the data. Let's say you are predicting defaulters among people, and one variable is delay in payment. If you do a high/low categorization, you are essentially saying that everybody (say) who is above 90 days has beta = 0.4 and everybody below 90 days has beta = 0.15. In essence you are trying to segregate an 89-day delay from a 90-day delay, while a different view could be that both are the same kind of person. So in my view, the choice between low/high and a range of values should come from a curve-fitting analysis in which you watch for inflections in the curve: where is it peaking? Where does it show almost equal behavior in defaulters/non-defaulters? The final aim is to multiply a person's delay by the right beta, not an imposed beta.
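To make the power part of this answer concrete, here is an illustrative simulation (plain numpy; the Newton-Raphson logistic fit, the effect sizes, and the two designs are all assumptions for the sketch, not from the original answer). It estimates the power of the Wald z-test on the slope under a "spread of values" design and an "extremes only" design:

```python
import numpy as np

rng = np.random.default_rng(1)

def logit_fit(x, y, iters=25):
    """Newton-Raphson fit of logit(p) = b0 + b1*x; returns (b1_hat, se_b1)."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ b)))
        W = p * (1.0 - p)
        H = X.T @ (X * W[:, None])          # Fisher information
        b = b + np.linalg.solve(H, X.T @ (y - p))
    return b[1], np.sqrt(np.linalg.inv(H)[1, 1])

def estimate_power(xs, b0=0.0, b1=0.6, nsim=200):
    """Fraction of simulated datasets where the Wald z-test on b1
    rejects at the two-sided 5% level."""
    hits = 0
    for _ in range(nsim):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * xs)))
        y = rng.binomial(1, p).astype(float)
        est, se = logit_fit(xs, y)
        hits += abs(est / se) > 1.96
    return hits / nsim

n = 100
spread = rng.uniform(-2.0, 2.0, n)                     # predictor over a range
extremes = np.where(np.arange(n) % 2 == 0, -2.0, 2.0)  # only low/high values
p_spread, p_extreme = estimate_power(spread), estimate_power(extremes)
```

Comparing `p_spread` and `p_extreme` for your own effect sizes is one way to see how the design choice interacts with the shape of the fitted curve, which is the point made above.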
| null | CC BY-SA 3.0 | null | 2011-06-08T05:40:59.183 | 2011-06-08T05:40:59.183 | null | null | 1763 | null |
11698 | 2 | null | 11691 | 10 | null | Surely, the ability to visually discern the clusters in a plottable number of dimensions is a doubtful criterion for the usefulness of a clustering algorithm, especially if this dimension reduction is done independently of the clustering itself (i.e., in a vain attempt to find out whether clustering will work).
In fact, clustering methods have their highest value in finding the clusters where the human eye/mind is unable to see the clusters.
The simple answer is: do clustering, then find out whether it worked (with any of the criteria you are interested in, see also @Jeff's answer).
| null | CC BY-SA 3.0 | null | 2011-06-08T07:01:16.137 | 2011-06-08T07:01:16.137 | null | null | 4257 | null |
11699 | 1 | 139428 | null | 1 | 4759 | I am trying to use SPSS to build a linear regression on historical data (dependent and independent variables) and then apply this to new data (independent variables only) to generate predicted values and associated prediction intervals.
I've looked in detail at the documentation on the `REGRESSION` procedure within SPSS, and while it is obvious how I would get the prediction and interval for the data used to build the regression (using a `/SAVE` subcommand to save the temporary variables `PRED`, `LICIN` and `UICIN`) I'm not seeing any functionality that would allow me to apply this to new data.
Essentially I'm looking for the equivalent of `PROC SCORE` in SAS, or `predict.lm` in R.
| How do you apply a linear regression built in SPSS to new data and generate prediction intervals | CC BY-SA 3.0 | null | 2011-06-08T07:34:51.063 | 2015-02-26T13:39:26.257 | 2011-06-08T08:50:03.333 | 183 | 4933 | [
"regression",
"spss",
"predictive-models"
] |
11700 | 2 | null | 11689 | 2 | null | I'm not sure I understand what you mean by projecting your sample data, but:
The result per set of 2 classes of LDA is always a linear form in the coordinates of your space (e.g. `3x_1-x_2+2`). Hence it also defines a hyperplane (a line in 2D, a plane in 3D,...), where this linear form is zero, and the 'discriminating' between these two classes works by looking at the sign of the linear form.
The coefficients of this linear form always define a vector that is orthogonal to this hyperplane, and is defined up to a factor (in my example above, this vector is `(3,-1)`, but could also be `(-1, 1/3)`). This should answer your second question.
Your first question is very confusing, but: for every pair of classes (given that they don't completely overlap), you should be able to decide which of two gets the better vote (i.e. which of these 2 classes your LDA would appoint this sample to). Due to the nature of the hyperplanes, it is impossible to have the choices for three classes so that A > B, B > C and C > A if that's what you are asking. Of course, it is possible that a sample is exactly on the discriminating hyperplane for two classes (or even on the intersection of two discriminating hyperplanes): for these points, the LDA has no way of deciding between the classes involved (the sample is 'just as close to either mean' then).
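For concreteness, here is a minimal two-class sketch of the linear form described above (plain numpy; the class means and spreads are illustrative assumptions). The Fisher direction is $w = S_w^{-1}(m_a - m_b)$, orthogonal to the discriminating hyperplane, and here the hyperplane is placed through the midpoint of the two class means:

```python
import numpy as np

rng = np.random.default_rng(5)
# two illustrative Gaussian classes in 2-D
a = rng.normal([0.0, 0.0], 0.5, size=(200, 2))
b = rng.normal([3.0, 1.0], 0.5, size=(200, 2))

# Fisher direction: w is orthogonal to the discriminating hyperplane
Sw = np.cov(a.T) + np.cov(b.T)               # pooled within-class scatter
w = np.linalg.solve(Sw, a.mean(0) - b.mean(0))

# the hyperplane passes through the midpoint of the two class means
c = w @ (a.mean(0) + b.mean(0)) / 2.0

# discriminate by the sign of the linear form w.x - c
pred_a = a @ w > c        # True -> assigned to class a
pred_b = b @ w > c
```

Points with `a @ w` exactly equal to `c` sit on the hyperplane itself, which is precisely the "just as close to either mean" case mentioned above.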
| null | CC BY-SA 3.0 | null | 2011-06-08T07:35:07.850 | 2011-06-08T07:35:07.850 | null | null | 4257 | null |
11701 | 2 | null | 11418 | 2 | null | I'm still stuck with this problem. I have received some suggestions from the R mailing list (thanks to Christian Hennig) that I attach here:
>
Have you considered the dbscan function in library fpc, or was it
another one? The fpc::dbscan() function doesn't have a "distance"
parameter but several options, one of which may resolve your memory
problem (look up the documentation of the "memory" parameter).
Using a distance matrix for hundreds of thousands of points is a
recipe for disaster (memory-wise). I'm not sure whether the function
that you used did that, but fpc::dbscan() can avoid it.
It is true that fpc::dbscan() requires tuning constants that the
user has to provide. There is unfortunately no general rule how to do
this; it would be necessary to understand the method and the meaning
of the constants, and how this translates into the requirements of
your application.
You may try several different choices and do some cluster validation
to see what works, but I can't explain this in general terms easily
via email.
I have made some attempts with my data but without any success:
"Yes, I have tried dbscan from fpc but I'm still stuck on the memory problem. Regarding your answer, I'm not sure which memory parameter should I look at. Following is the code I tried with dbscan parameters, maybe you can see if there is any mistake.
```
> sstdat=read.csv("sst.dat",sep=";",header=F,col.names=c("lon","lat","sst"))
> library(fpc)
> sst1=subset(sstdat, sst<50)
> sst2=subset(sst1, lon>-6)
> sst2=subset(sst2, lon<40)
> sst2=subset(sst2, lat<46)
> dbscan(sst2$sst, 0.1, MinPts = 5, scale = FALSE, method = c("hybrid"),
seeds = FALSE, showplot = FALSE, countmode = NULL)
Error: no se puede ubicar un vector de tamaño 858.2 Mb
## (Spanish-locale R message: "Error: cannot allocate vector of size 858.2 Mb")
> head(sst2)
lon lat sst
1257 35.18 24.98 26.78
1258 35.22 24.98 26.78
1259 35.27 24.98 26.78
1260 35.31 24.98 26.78
1261 35.35 24.98 26.78
1262 35.40 24.98 26.85
```
In this example I only apply `dbscan()` to the temperature values, not lon/lat, so the `eps` parameter is 0.1. As it is a gridded data set, any point is surrounded by eight data points, so I thought that at least 5 of the surrounding points should be within the reachability distance. But I'm not sure this is the right approach: by considering only the temperature value, I may be missing the spatial information. How should I deal with longitude and latitude data?
Dimensions of `sst2` are: 152243 rows x 3 columns "
I share these mail messages here in case any of you can shed some light on R and DBSCAN. Thanks again.
| null | CC BY-SA 3.0 | null | 2011-06-08T07:57:01.557 | 2011-08-07T20:36:34.503 | 2011-08-07T20:36:34.503 | 930 | 4147 | null |
11702 | 2 | null | 11691 | 86 | null | About k-means specifically, you can use the gap statistic. Basically, the idea is to compute a goodness-of-clustering measure based on the average dispersion compared to a reference distribution, for an increasing number of clusters.
More information can be found in the original paper:
>
Tibshirani, R., Walther, G., and
Hastie, T. (2001). Estimating the
numbers of clusters in a data set via
the gap statistic. J. R. Statist.
Soc. B, 63(2): 411-423.
The answer that I provided to a [related question](https://stats.stackexchange.com/questions/9671/how-can-i-assess-how-descriptive-feature-vectors-are) highlights other general validity indices that might be used to check whether a given dataset exhibits some kind of a structure.
When you don't have any idea of what you would expect to find if there were noise only, a good approach is to use resampling and study cluster stability. In other words, resample your data (via bootstrap or by adding small noise to it) and compute the "closeness" of the resulting partitions, as measured by [Jaccard](http://en.wikipedia.org/wiki/Jaccard_index) similarities. In short, this lets you estimate the frequency with which similar clusters were recovered in the data. This method is readily available in the [fpc](http://cran.r-project.org/web/packages/fpc/index.html) R package as `clusterboot()`.
It takes as input either raw data or a distance matrix, and allows you to apply a wide range of clustering methods (hierarchical, k-means, fuzzy methods). The method is discussed in the linked references:
>
Hennig, C. (2007) Cluster-wise
assessment of cluster stability.
Computational Statistics and Data Analysis, 52, 258-271.
Hennig, C. (2008) Dissolution point
and isolation robustness: robustness
criteria for general cluster analysis
methods. Journal of Multivariate
Analysis, 99, 1154-1176.
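As a minimal illustration of the Jaccard similarity used here to compare recovered clusters (a hedged Python sketch, independent of the R tooling; the index sets are made up):

```python
def jaccard(a, b):
    """Jaccard similarity between two clusters given as sets of observation indices."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# e.g. a cluster found on the full data vs. its best match on a bootstrap sample
original = [0, 1, 2, 3, 4, 5]
resampled = [1, 2, 3, 4, 5, 6, 7]
sim = jaccard(original, resampled)   # 5 shared indices out of 8 -> 0.625
```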
Below is a small demonstration with the k-means algorithm.
```
sim.xy <- function(n, mean, sd) cbind(rnorm(n, mean[1], sd[1]),
rnorm(n, mean[2],sd[2]))
xy <- rbind(sim.xy(100, c(0,0), c(.2,.2)),
sim.xy(100, c(2.5,0), c(.4,.2)),
sim.xy(100, c(1.25,.5), c(.3,.2)))
library(fpc)
km.boot <- clusterboot(xy, B=20, bootmethod="boot",
clustermethod=kmeansCBI,
krange=3, seed=15555)
```
The results are quite positive in this artificial (and well structured) dataset since none of the three clusters (`krange`) were dissolved across the samples, and the average clusterwise Jaccard similarity is > 0.95 for all clusters.
Below are the results on the 20 bootstrap samples. As can be seen, statistical units tend to stay grouped into the same cluster, with few exceptions for those observations lying in between.

You can extend this idea to any validity index, of course: choose a new series of observations by bootstrap (with replacement), compute your statistic (e.g., silhouette width, cophenetic correlation, Hubert's gamma, within sum of squares) for a range of cluster numbers (e.g., 2 to 10), repeat 100 or 500 times, and look at the boxplot of your statistic as a function of the number of cluster.
Here is what I get with the same simulated dataset, but using Ward's hierarchical clustering and considering the cophenetic correlation (which assesses how well the distance information is reproduced in the resulting partitions) and the silhouette width (a combined measure assessing intra-cluster homogeneity and inter-cluster separation).
The cophenetic correlation ranges from 0.6267 to 0.7511 with a median value of 0.7031 (500 bootstrap samples). Silhouette width appears to be maximal when we consider 3 clusters (median 0.8408, range 0.7371-0.8769).

| null | CC BY-SA 3.0 | null | 2011-06-08T08:43:28.373 | 2011-06-08T14:50:08.427 | 2017-04-13T12:44:25.283 | -1 | 930 | null |
11703 | 1 | null | null | 3 | 7551 | Assume the following easy example of a glm regression with an offset:
```
numberofdrugs <- rpois(84, 10)
healthvalue <- rpois(84,75)
age <- rnorm(84,50,5)
test <- glm(healthvalue~age, family=poisson, offset=log(numberofdrugs))
summary(test)
fitted(test) # How to get one of these values manually?
```
- How can I compute the fitted values manually?
- Also, why is there no estimate of log(numberofdrugs)? (In the book Generalized Linear Models, on pages 205-207, there is an example where the offset is estimated. It was done to see whether the coefficient is close to one. It's 0.903 (see page 207 if you have this classic book), from which it follows that there is a nearly constant rate in the number of damage incidents!)
Previous related questions asked:
- When to use an offset?
- Whether to use an offset when predicting hockey scores?
| How to estimate and interpret an offset correctly in a Poisson regression? | CC BY-SA 3.0 | null | 2011-06-08T10:00:11.857 | 2017-09-18T19:30:48.897 | 2017-09-18T17:17:23.237 | 7290 | 4496 | [
"r",
"regression",
"poisson-distribution",
"count-data",
"offset"
] |
11705 | 2 | null | 11703 | 1 | null | About the practical part -- outputs of `glm` or `summary` are just lists which are pretty-printed for user convenience. You can see their full structure calling `unclass` on them and extract single values as usual, with a help of `$`, `[[]]` and `[]` operators.
| null | CC BY-SA 3.0 | null | 2011-06-08T10:52:42.993 | 2011-06-08T10:52:42.993 | null | null | null | null |
11706 | 2 | null | 11676 | 9 | null | R gives null and residual deviance in the output to `glm` so that you can make exactly this sort of comparison (see the last two lines below).
```
> x = log(1:10)
> y = 1:10
> glm(y ~ x, family = poisson)
>Call: glm(formula = y ~ x, family = poisson)
Coefficients:
(Intercept) x
5.564e-13 1.000e+00
Degrees of Freedom: 9 Total (i.e. Null); 8 Residual
Null Deviance: 16.64
Residual Deviance: 2.887e-15 AIC: 37.97
```
You can also pull these values out of the object with `model$null.deviance` and `model$deviance`
| null | CC BY-SA 3.0 | null | 2011-06-08T11:26:42.833 | 2013-12-10T21:06:39.087 | 2013-12-10T21:06:39.087 | 4862 | 4862 | null |
11707 | 1 | null | null | 75 | 79327 | According to the Wikipedia article on [unbiased estimation of standard deviation](http://en.wikipedia.org/wiki/Unbiased_estimation_of_standard_deviation) the sample SD
$$s = \sqrt{\frac{1}{n-1} \sum_{i=1}^n (x_i - \overline{x})^2}$$
is a biased estimator of the SD of the population. It states that $E(\sqrt{s^2}) \neq \sqrt{E(s^2)}$.
NB. Random variables are independent and each $x_{i} \sim N(\mu,\sigma^{2})$
My question is two-fold:
- What is the proof of the biasedness?
- How does one compute the expectation of the sample standard deviation
My knowledge of maths/stats is only intermediate.
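For concreteness, the bias can be checked numerically. Below is a hedged sketch (plain numpy; the sample size and $\sigma$ are arbitrary) comparing the Monte Carlo average of $s$ with the known closed form $E(s) = \sigma\, c_4(n)$, where $c_4(n) = \sqrt{2/(n-1)}\,\Gamma(n/2)/\Gamma((n-1)/2)$ for normal data:

```python
import numpy as np
from math import gamma, sqrt

rng = np.random.default_rng(2)
n, sigma, nsim = 5, 2.0, 200_000
samples = rng.normal(0.0, sigma, size=(nsim, n))
s = samples.std(axis=1, ddof=1)      # usual sample SD with the n-1 divisor

# for normal data, E(s) = sigma * c4(n) with
# c4(n) = sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2) < 1
c4 = sqrt(2.0 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2)
print(s.mean(), sigma * c4)          # both about 1.88, noticeably below sigma = 2
```

Since $c_4(n) < 1$ for every finite $n$, the average sample SD falls systematically short of $\sigma$, which is exactly the bias the Wikipedia article describes.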
| Why is sample standard deviation a biased estimator of $\sigma$? | CC BY-SA 3.0 | null | 2011-06-08T12:28:05.087 | 2021-07-25T02:29:55.573 | 2012-07-07T10:01:29.047 | 930 | 4937 | [
"estimation",
"standard-deviation"
] |
11708 | 1 | null | null | 1 | 1425 | I have a large dataset that has many variables. I'm trying to determine which variables correlate strongly with one specific variable. When you look at the entire dataset as a whole, the correlation of different variables is pretty weak.
I know, however, that within certain subsets of the data the correlation is strong. For example:
When variable A is between X1 and Y1 and when variable B is between X2 and Y2, that resulting subset has unusually large instances of the variable I'm trying to optimize.
How can I determine using R which subsets of data have unusually large instances of the optimization variable when there are hundreds of variables to test against?
| Determining correlation in certain subsets of a dataset in R | CC BY-SA 3.0 | null | 2011-06-08T12:30:08.830 | 2011-06-08T14:35:23.717 | 2011-06-08T14:35:23.717 | 183 | 4936 | [
"r",
"regression",
"correlation",
"large-data"
] |
11709 | 2 | null | 11703 | 3 | null | There should not be an estimate of the offset: this offset is (could be) different for every observation (the whole idea is that you monitor the number of events within a (linear) 'timemeasure' (here apparently `numberofdrugs`).
There is no one 'population' offset you could estimate: person 1 is going to have 5 drugs administered, person 2 maybe 10, and you assume that the number of events (`healthvalue`?) is linear with this person's `numberofdrugs` (as per the answer to your previous question).
I don't have the book at hand, and Amazon won't let me look at the pages you mention, but I suppose something else is happening there (maybe simply the average `numberofdrugs` in the population)?
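On the "how can I compute the fitted values manually" part of the question: with a log link, each fitted value is $\hat{\mu}_i = \exp(\hat{\beta}_0 + \hat{\beta}_1\,\mathrm{age}_i + \log(\mathrm{numberofdrugs}_i))$, i.e. the offset multiplies the fitted rate. A hedged Python sketch (the coefficients below are made-up stand-ins for `coef(test)` from the R fit):

```python
import numpy as np

rng = np.random.default_rng(3)
age = rng.normal(50.0, 5.0, 84)
numberofdrugs = rng.poisson(10, 84) + 1     # +1 so the log offset is defined

# made-up stand-ins for the intercept and slope returned by coef(test)
b0, b1 = 4.0, 0.002

# the offset enters the linear predictor with a coefficient fixed at 1
eta = b0 + b1 * age + np.log(numberofdrugs)
fitted = np.exp(eta)

# equivalently, on the response scale the offset multiplies the rate
assert np.allclose(fitted, numberofdrugs * np.exp(b0 + b1 * age))
```

The fixed coefficient of 1 on the log offset is what distinguishes it from an estimated predictor.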
| null | CC BY-SA 3.0 | null | 2011-06-08T12:31:30.467 | 2011-06-08T12:31:30.467 | null | null | 4257 | null |
11710 | 2 | null | 11687 | 10 | null | There are major power and precision gains from treating Y as ordinal when appropriate. This arises from the much lower number of parameters in the model (by a factor of k where k is one less than the number of categories of Y). There are several ordinal models. The most commonly used are the proportional odds and continuation ratio ordinal logistic models.
| null | CC BY-SA 3.0 | null | 2011-06-08T12:41:40.190 | 2011-06-08T12:41:40.190 | null | null | 4253 | null |
11711 | 2 | null | 11708 | 4 | null | It is a bit unclear what is your aim behind this, but maybe you just need a feature selection?
Try for instance training a Random Forest predicting the value you optimize from the other ones and extract its importance scores. What it does is almost explicitly a search for hyper-rectangles in your feature space with smallest possible variance and selecting those dimensions on which meaningful intervals are most frequently made.
| null | CC BY-SA 3.0 | null | 2011-06-08T12:46:18.803 | 2011-06-08T12:46:18.803 | null | null | null | null |
11712 | 2 | null | 11708 | 1 | null | Very interesting problem. With my limited experience, my first comment is that there are few shortcuts here. However, I have done this kind of exercise, and I would suggest the following:
1) Make a list of the variables that could be related. This means: don't try to relate every x with y in your mind. Make a business case and ask yourself why there could be a pattern between x and y. For example, age could show a pattern with salary (generally, people who are older earn more), but, say, in insurance "my age may not be related to my agent's age" (though actually it could be).
2) Make cross tabs of these x and y -- all possible Xs and Ys.
3) Read carefully through these cross tabs to see whether a trend emerges in the population. For example, has the mean salary gone up with age?
4) Finally, make categories of those X (though this step is not necessary) and compare them against the whole dataset; you might see a different trend in these few rows compared to the whole dataset.
Hope this helps.
| null | CC BY-SA 3.0 | null | 2011-06-08T12:51:53.063 | 2011-06-08T13:01:39.073 | 2011-06-08T13:01:39.073 | 1763 | 1763 | null |
11713 | 1 | null | null | 7 | 3354 | This follows on from [my previous question on assessing reliability](https://stats.stackexchange.com/questions/11628/assessing-reliability-of-a-questionnaire-dimensionality-problematic-items-and).
I designed a questionnaire (six 5-points Likert items) to evaluate the attitude of a group of users toward a product. I would like to estimate the reliability of the questionnaire for example computing Cronbach's alpha or lambda6. So, I need to check the dimensionality of my scale. I have seen that some people use PCA to find out the number of dimensions (e.g. the principal components), other people prefer to use EFA.
- Which is the most suitable approach?
- Further, if I find more than one principal component or more than one latent factor does this mean that I am measuring more than one construct or several aspects of the same construct?
| Whether to use EFA or PCA to assess dimensionality of a set of Likert items | CC BY-SA 3.0 | null | 2011-06-08T12:57:46.427 | 2011-09-30T20:53:05.520 | 2017-04-13T12:44:29.013 | -1 | 4903 | [
"pca",
"factor-analysis",
"scales",
"reliability",
"likert"
] |
11714 | 1 | 11719 | null | 3 | 3190 | I'm having a hard time understanding what the authors of [this paper (pdf)](http://www.sciamachy.org/validation/documentation/proceedings_ES2007/463103me.pdf) want to tell me with this graph (Fig. 2,3 (shown below) and 4, right):

[Caption: A comparison between the standard deviation of the differences (green line) and the standard deviation of all GOMOS (red line) and LIDAR (blue line) ozone profiles.]
Note: contrary to convention, the measured quantity is plotted on the x-axis, not y-axis.
The problem at hand is as follows: two instruments measure the same physical quantity (in this case the vertical distribution of ozone in the atmosphere). Their corresponding (instrument-characterized) standard deviations (averaged, if I understand correctly) are displayed in red and blue. The difference between the two datasets is computed, and the standard deviation of this difference dataset is plotted in green.
What does this graph mean? Is this a way to say that the two datasets are in agreement? If yes, on what basis?
What would other extremes mean?
- stdev of the difference is much larger than the measurement stdevs
I think: measurement stdevs are too optimistic.
- stdev of the difference is much smaller than the measurement stdevs
I think: measurement stdevs are too conservative.
Maybe an example would help me to understand this better. Thanks
| Comparing two datasets (of the same physical quantity) - what do I learn from this graph? | CC BY-SA 3.0 | null | 2011-06-08T13:28:20.487 | 2011-06-08T19:02:18.293 | 2011-06-08T13:36:20.213 | 4373 | 4373 | [
"dataset",
"standard-deviation",
"standard-error",
"error-propagation",
"measurement-error"
] |
11716 | 2 | null | 11699 | 4 | null | If you have SPSS Version 19, I believe they introduced "Scoring Wizard" under Utilities that apparently can accomplish this sort of task. That said, I have tried to get it to work and do not have the desire to debug the errors I am getting since it is very easy to do in R.
I echo @Jeromy's response; if you need to stay within SPSS, I would use the R plugin and the ?predict function.
| null | CC BY-SA 3.0 | null | 2011-06-08T14:31:53.123 | 2011-06-08T14:31:53.123 | null | null | 569 | null |
11717 | 1 | null | null | 6 | 347 | Consider the following survey question:
>
Q: How would you classify the importance for you of the following 5 items:
A
B
C
D
E
Assign to each item a number in the set {1,2,3,4,5}, with 1 meaning the highest importance and 5 meaning the lowest importance; the number used to an item cannot be used in any other item.
- How can one test whether item A is the most important item for a sample with 100 individuals?
| Testing the importance of an item among a finite set of items | CC BY-SA 3.0 | null | 2011-06-08T14:58:04.123 | 2018-06-09T20:09:43.350 | 2020-06-11T14:32:37.003 | -1 | 6245 | [
"hypothesis-testing",
"ordinal-data",
"ranking",
"paired-data",
"psychometrics"
] |
11718 | 2 | null | 11699 | 0 | null | Why would you use linear regression on time series in the first place? If you have time series data, lags may be required for all series, along with adjustments for pulses, level shifts, seasonal pulses, and/or local time trends. Additionally, you might have parameters that change over time (N.B. this is not rectified by an ARIMA structure) and/or an error variance that changes over time (N.B. not necessarily rectified by power transforms such as reciprocal square roots, logs, et al.). You might need to update your tool set, as you are abusing the methodology of linear regression by applying it to time series data incorrectly.
You should be using Transfer Function Models (Chapter 10 in the seminal Box-Jenkins text on time series analysis). Routine implementation of these procedures facilitates re-use of models, re-estimation of parameters, and even augmenting the older model with newly identified structure from the "new data". Try googling "Transfer Functions" or "Automatic Transfer Functions".
| null | CC BY-SA 3.0 | null | 2011-06-08T15:02:20.717 | 2011-06-08T15:12:11.830 | 2011-06-08T15:12:11.830 | 3382 | 3382 | null |
11719 | 2 | null | 11714 | 5 | null | This appears to be an unconventional way to report correlation (or lack thereof). It focuses more on the variability of the measurements (across the earth at each fixed altitude) than on the correlation among them. As such the graphic may be of physical interest but it's an obscure way (at best) of comparing two measurement systems.
At each vertical position (an estimated altitude based on a pressure reading) the plots summarize between 3 and 18 pairs of data obtained over fixed stations on the earth's surface. The summaries consist of sample standard deviations normalized by the LIDAR readings (the reference measurement).
When comparing measurements, one is usually interested in assessing their correlations. We need to do a little math to relate this graphic to those correlations. Let $(X,Y)$ be a random variable representing the (LIDAR, GOMOS) readings. Let the variance of $X$ be $\sigma^2$, the variance of $Y$ be $\tau^2$, and their correlation equal $\rho$. Then
$$Var(X-Y) = Var(X) + Var(Y) - 2Covar(X,Y) = \sigma^2 + \tau^2 - 2 \rho \sigma \tau.$$
Consequently we can recover the correlation from the covariances:
$$\rho = \frac{1}{2\sigma \tau}(Var(X) + Var(Y) - Var(X-Y)).$$
Let the LIDAR mean be $m$. The plots depict estimates of $\sigma/m$ (relative LIDAR SD): call this $s$; $\tau/m$ (relative GOMOS SD): call this $t$; and $\sqrt{Var(X-Y)}/m$ (relative SD of difference): call this $r$. Plug the estimates into the preceding formula:
$$\rho = \frac{1}{2(s m)(t m)}\left((s m)^2 + (t m)^2 - (r m)^2\right);$$
$$\rho = \frac{s^2 + t^2 - r^2}{2 s t}.$$
These are, of course, estimates of $\rho$, subject to sampling uncertainty.
We can now qualitatively identify several portions of the plot:
- $s = t = r$, approximately, between 35 and 45 km. From the formula we estimate $\rho \sim 1/2$. This is modest correlation--not very good for two measurements of the same thing.
- One of $s$ and $t$ is small relative to the other and $r$ is comparable to the larger. This occurs from 20 to about 25 km and 45 to 50 km. The formula indicates $\rho \sim 0$. This is lack of correlation.
- $r$ is small and $s$ and $t$ are comparable (between 28 and 35 km). Now we estimate $\rho \sim 1$. This is what one hopes to see for two consistently comparable measurements.
In short, good correlation occurs when the green line lies substantially to the left of the red or blue lines and there is lack of correlation wherever the green line approximates (or exceeds) either or both of the red and blue lines. Overall, correlation is poor except between 27 and 34 km.
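For example, plugging representative relative SDs into this formula reproduces the three qualitative cases (a quick sketch in Python; the 0.10/0.02 values are made-up stand-ins for what one would read off such a plot):

```python
def rho(s, t, r):
    """Correlation recovered from the relative SDs of X, Y and X - Y."""
    return (s ** 2 + t ** 2 - r ** 2) / (2 * s * t)

print(round(rho(0.10, 0.10, 0.10), 2))  # 0.5: s = t = r, modest correlation
print(round(rho(0.02, 0.10, 0.10), 2))  # 0.1: r comparable to the larger SD, ~no correlation
print(round(rho(0.10, 0.10, 0.02), 2))  # 0.98: r small, consistently comparable measurements
```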
| null | CC BY-SA 3.0 | null | 2011-06-08T15:09:59.537 | 2011-06-08T15:09:59.537 | null | null | 919 | null |
11720 | 2 | null | 11717 | 6 | null | The naive approach would be to compute the marginal distribution of rankings (e.g., mean score for each item), but it would throw away a lot of information as it does not account for the within-person relationship between ranks.
As an extension to [paired preference models](http://en.wikipedia.org/wiki/Pairwise_comparison) (e.g., the Bradley-Terry model, described in Agresti's CDA, pp. 436-439), there exist models for ordinal or Likert-type comparison data, with or without subject covariates, as well as models for ranking data (basically, they rely on log-linear models). Here is a [short intro](http://statmath.wu-wien.ac.at/people/hatz/preference/tag1/) to the package, and a mathematical explanation in this technical report: [Fitting Paired Comparison Models in R](http://epub.wu.ac.at/740/1/document.pdf). You will find everything you need in the [prefmod](http://cran.r-project.org/web/packages/prefmod/index.html) R package; see the `pattR.fit()` function, which expects data in the form you described:
```
The responses have to be coded as consecutive integers starting
with 1. The value of 1 means highest rank according to the
underlying scale. Each column in the data file corresponds to one
of the ranked objects. For example, if we have 3 objects denoted
by ‘A’,‘B’,and ‘C’, with corresponding columns in the data matrix,
the response pattern ‘(3,1,2)’ represents: object ‘B’ ranked
highest, ‘C’ ranked second, and ‘A’ ranked lowest. Missing values
are coded as ‘NA’, ties are not allowed (in that case use
‘pattL.fit’. Rows with less than 2 ranked objects are removed
from the fit and a message is printed.
```
For additional information (about and beyond your particular study), you might find useful the following papers:
- Böckenholt, U. and Dillon, W.R. (1997). Modelling within-subject dependencies in ordinal paired comparison data. Psychometrika, 62, 411-434.
- Dittrich, R., Francis, B., Hatzinger, R., and Katzenbeisser, W. (2006). Modelling dependency in multivariate paired comparisons: A log-linear approach. Mathematical Social Sciences, 52, 197-209.
- Maydeu-Olivares, A. (2004). Thurstone's Case V model: A structural equations modeling perspective. In K.van Montfort et al. (eds), Recent Developments on Structural Equation Models, 41-67.
| null | CC BY-SA 3.0 | null | 2011-06-08T15:28:01.107 | 2011-06-08T15:28:01.107 | null | null | 930 | null |
11721 | 2 | null | 10900 | 4 | null | The Laplace (aka double exponential) distribution has relatively light tails - exponential in fact :). The Laplace and t/Cauchy distributions are part of a larger family of scale mixtures of normals, which are distributions that can be written as an infinite mixture like so:
$$p(x) = \int Nor(x; 0, r^2s^2)p(s^2)ds^2$$
$r$ is an additional scale parameter; it can also be absorbed into $p(s^2)$. The t family has an inverse-gamma mixing distribution on $s^2$; the Laplace distribution has an exponential mixing distribution on $s^2/2$. The parameters of $p(s^2)$ will control the scale and tail behavior of the resulting distribution. Since it sounds like you only need to sample from this distribution, you can basically pick any mixing distribution you like. A recommendation for which distribution would work well requires more information about your problem, for the reasons @whuber gave.
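For instance, sampling the Laplace member of the family just means drawing the mixing variable first (a plain-Python sketch; with $s^2/2 \sim \mathrm{Exp}(1)$ and $r = 1$ the resulting Laplace has unit scale, hence variance 2):

```python
import math
import random

random.seed(1)

def laplace_via_mixture(n):
    """Draw n Laplace(0, 1) variates as a scale mixture of normals:
    s^2 / 2 ~ Exponential(1), then x | s^2 ~ Normal(0, s^2)."""
    out = []
    for _ in range(n):
        s2 = 2.0 * random.expovariate(1.0)        # s^2/2 ~ Exp(1)
        out.append(math.sqrt(s2) * random.gauss(0.0, 1.0))
    return out

x = laplace_via_mixture(100_000)
m = sum(x) / len(x)
v = sum((xi - m) ** 2 for xi in x) / len(x)
# The sample mean should be near 0 and the sample variance near 2.
print(round(m, 1), round(v, 1))
```

Swapping the exponential draw for an inverse-gamma draw would give t variates by the same recipe.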
| null | CC BY-SA 3.0 | null | 2011-06-08T16:06:44.050 | 2011-06-08T16:06:44.050 | null | null | 26 | null |
11722 | 1 | 14782 | null | 6 | 792 | Repeating an experiment ([about which I asked before](https://stats.stackexchange.com/questions/10407/probability-for-finding-a-double-as-likely-event)) with $n$ possible outcomes $t$ times independently, where all but one of the outcomes have probability $\frac{1}{n+1}$ and the remaining outcome has double that probability, $\frac{2}{n+1}$, I also obtain independent information in which the outcome with double probability shows up 50% more often than the other outcomes (i.e., with probability $\frac{3}{2n+1}$, while the others' probability is $\frac{2}{2n+1}$).
How do I combine the two results?
Alternative formulation: Given is the probability space $(\Omega\times\Omega, \mathcal{P}(\Omega\times\Omega), \mathrm{p})$ with
$$\mathrm{p}(\omega, \omega') = \left\{\begin{array}{cc}
\frac{6}{(n+1)(2n+1)} & \mbox{ if } \omega = \omega_0 \mbox{ and } \omega' = \omega_0\\
\frac{4}{(n+1)(2n+1)} & \mbox{ if } \omega = \omega_0 \mbox{ and } \omega' \ne \omega_0\\
\frac{3}{(n+1)(2n+1)} & \mbox{ if } \omega \ne \omega_0 \mbox{ and } \omega' = \omega_0\\
\frac{2}{(n+1)(2n+1)} & \mbox{ otherwise }
\end{array}\right.$$
where $\omega_0\in\Omega$ is unknown (and $|\Omega| = n$). The goal is to find $\omega_0$ given $t$ samples from $\Omega\times\Omega$.
Currently I'm counting how often each value shows up in the first coordinate and add the number of times it shows up in the second coordinate multiplied by a weighting factor of $\log_2(\frac{3}{2})$. The value with the highest number should be $\omega_0$ (if $t$ is big enough).
Is this the correct weighting factor, resp. the correct way to find $\omega_0$?
PS: I'm also thankful for anyone finding better tags for my question.
| How to combine two independent repeated experiments with different success probabilities? | CC BY-SA 3.0 | null | 2011-06-08T16:13:48.513 | 2011-08-24T22:38:48.597 | 2017-04-13T12:44:29.013 | -1 | 565 | [
"probability",
"sampling"
] |
11723 | 2 | null | 11672 | 2 | null | Sophie and I discussed this earlier (she is a student at my university) and I am still not satisfied with any of my suggestions so far. Here are two possibilities for the winner/loser data (assuming you always have a winner).
1) Compete each yellow against each red (64 competitions) and record which colour won. Test whether the proportion of fights won by yellow males is significantly different from what you'd expect if colour has no effect on competitive ability (i.e. significantly different from a binomial distribution with p=q=0.5). This is very simple and ignores weight.
2) Compete each fish against every other fish, regardless of colour (120 competitions). Construct a dominance hierarchy (see, for example, [Bang et al. 2009 Anim. Behav. 79:631](http://repository.ias.ac.in/23713/)). Test either a) whether there is a significant difference in median dominance rank between the two colour morphs (e.g. Mann-Whitney test) or b) whether red and yellow are randomly dispersed through the hierarchy using a randomisation test. Better still, see if you can find a bespoke test for effects of phenotypic variables on dominance in the literature.
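The test in option 1 is simple enough to run directly (sketched here in Python rather than R; `k` is a hypothetical number of the 64 fights won by yellow):

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: add up the probabilities of every
    outcome no more likely than the observed count k."""
    probs = [comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(n + 1)]
    return sum(q for q in probs if q <= probs[k] + 1e-12)

print(round(binom_two_sided_p(32, 64), 2))  # 1.0: 32/64 is exactly the fair split
print(binom_two_sided_p(45, 64) < 0.05)     # True: 45/64 suggests a colour effect
```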
| null | CC BY-SA 3.0 | null | 2011-06-08T16:28:56.170 | 2011-06-08T16:28:56.170 | null | null | 266 | null |
11724 | 1 | 11742 | null | 9 | 34926 | I'm running a binary logistic regression with 3 numerical variables. I'm suppressing the intercept in my models as the probability should be zero if all input variables are zero.
What's the minimal number of observations I should use?
| Minimum number of observations for logistic regression? | CC BY-SA 3.0 | null | 2011-06-08T18:33:53.903 | 2019-12-19T16:24:51.600 | 2019-12-19T16:24:51.600 | 11887 | 333 | [
"regression",
"logistic",
"sample-size"
] |
11725 | 2 | null | 11500 | -1 | null | How about generating a synthetic binary target variable first and then running a logistic regression model?
The synthetic variable should be something like... "If the observation is in the top decile on all of the input variable distributions, flag it as 1, else 0."
Having generated the binary target variable, run logistic regression to come up with a probabilistic metric from 0 to 1 assessing how far into the tails of the multiple distributions an observation lies.
| null | CC BY-SA 3.0 | null | 2011-06-08T18:43:52.830 | 2011-06-08T18:43:52.830 | null | null | 333 | null |
11726 | 2 | null | 11609 | 2 | null | In frequentist statistics, the event $E$ is fixed -- the parameter either lies in $[a, b]$ or it doesn't. Thus, $E$ is independent of $C$ and $C'$ and so both $P(E|C) = P(E)$ and $P(E|C') = P(E)$.
(In your argument, you seem to think that $P(E|C) = 1$ and $P(E|C') = 0$, which is incorrect.)
| null | CC BY-SA 3.0 | null | 2011-06-08T18:56:31.923 | 2011-06-08T19:12:04.370 | 2011-06-08T19:12:04.370 | 1106 | 1106 | null |
11727 | 2 | null | 11609 | 31 | null | I think the fundamental problem is that frequentist statistics can only assign a probability to something that can have a long run frequency. Whether the true value of a parameter lies in a particular interval or not doesn't have a long run frequency, because we can only perform the experiment once, so you can't assign a frequentist probability to it. The problem arises from the definition of a probability. If you change the definition of a probability to a Bayesian one, then the problem instantly disappears as you are no longer tied to discussion of long run frequencies.
See my (rather tongue-in-cheek) answer to a related question [here](https://stats.stackexchange.com/questions/22/bayesian-and-frequentist-reasoning-in-plain-english/1602#1602):
"A Frequentist is someone that believes probabilities represent long run frequencies with which events occur; if needs be, he will invent a fictitious population from which your particular situation could be considered a random sample so that he can meaningfully talk about long run frequencies. If you ask him a question about a particular situation, he will not give a direct answer, but instead make a statement about this (possibly imaginary) population."
In the case of a confidence interval, the question we normally would like to ask (unless we have a problem in quality control for example) is "given this sample of data, return the smallest interval that contains the true value of the parameter with probability X". However a frequentist can't do this as the experiment is only performed once and so there are no long run frequencies that can be used to assign a probability. So instead the frequentist has to invent a population of experiments (that you didn't perform) from which the experiment you did perform can be considered a random sample. The frequentist then gives you an indirect answer about that fictitious population of experiments, rather than a direct answer to the question you really wanted to ask about a particular experiment.
Essentially it is a problem of language, the frequentist definition of a population simply doesn't allow discussion of the probability of the true value of a parameter lying in a particular interval. That doesn't mean frequentist statistics are bad, or not useful, but it is important to know the limitations.
Regarding the major update
I am not sure we can say that "Before we calculate a 95% confidence interval, there is a 95% probability that the interval we calculate will cover the true parameter." within a frequentist framework. There is an implicit inference here that the long run frequency with which the true value of the parameter lies in confidence intervals constructed by some particular method is also the probability that the true value of the parameter will lie in the confidence interval for the particular sample of data we are going to use. This is a perfectly reasonable inference, but it is a Bayesian inference, not a frequentist one, as the probability that the true value of the parameter lies in the confidence interval that we construct for a particular sample of data has no long run frequency, as we only have one sample of data. This is exactly the danger of frequentist statistics: common sense reasoning about probability is generally Bayesian, in that it is about the degree of plausibility of a proposition.
We can however "make some sort of non-frequentist argument that we're 95% sure the true parameter will lie in [a,b]", that is exactly what a Bayesian credible interval is, and for many problems the Bayesian credible interval exactly coincides with the frequentist confidence interval.
"I don't want to make this a debate about the philosophy of probability", sadly this is unavoidable, the reason you can't assign a frequentist probability to whether the true value of the statistic lies in the confidence interval is a direct consequence of the frequentist philosophy of probability. Frequentists can only assign probabilities to things that can have long run frequencies, as that is how frequentists define probability in their philosophy. That doesn't make frequentist philosophy wrong, but it is important to understand the bounds imposed by the definition of a probability.
"Before I've entered the password and seen the interval (but after the computer has already calculated it), what's the probability that the interval will contain the true parameter? It's 95%, and this part is not up for debate:" This is incorrect, or at least in making such a statement, you have departed from the framework of frequentist statistics and have made a Bayesian inference involving a degree of plausibility in the truth of a statement, rather than a long run frequency. However, as I have said earlier, it is a perfectly reasonable and natural inference.
Nothing has changed before or after entering the password, because neither event can be assigned a frequentist probability. Frequentist statistics can be rather counter-intuitive as we often want to ask questions about degrees of plausibility of statements regarding particular events, but this lies outside the remit of frequentist statistics, and this is the origin of most misinterpretations of frequentist procedures.
| null | CC BY-SA 3.0 | null | 2011-06-08T18:57:51.263 | 2011-06-09T09:42:40.690 | 2017-04-13T12:44:55.360 | -1 | 887 | null |
11728 | 2 | null | 11714 | 1 | null | Good questions. I scanned over the paper and have a couple of general thoughts...
First, with respect to
>
Note: contrary to convention, the measured quantity is plotted on the x-axis, not y-axis.
I like the unconventional orientation in this setting: with the Y-axis being altitude it lets me easily visualize that as I go higher into the atmosphere the new GOMOS instrument has less variable measurements of the same quantity as the traditional LIDAR method (and vice versa). This could be helpful in choosing instrumentation depending on where my particular data collection will take place, or may make me struggle with the idea of using two instruments (and their costs) to improve data quality.
Second, the left-most plots in Figures 2 and 3 visually show that the measurements are "very close", and the authors do call out the bias/variability differences with respect to altitude (which the plot you posted helps convey).
Finally, about the extremes, I think your "I think" statements imply that the two measurements are both valid, but it may be the case that at certain altitudes the bias and variability associated with a particular method mean it should be abandoned in favour of the other.
As for the statistical assessment of agreement, like you, I want to digest whuber's response.
| null | CC BY-SA 3.0 | null | 2011-06-08T19:02:18.293 | 2011-06-08T19:02:18.293 | null | null | 1080 | null |
11730 | 2 | null | 11691 | 3 | null | To tell whether a clustering is meaningful, you can run an algorithm to count the number of clusters, and see if it outputs something greater than 1.
Like chl said, one cluster-counting algorithm is the gap statistic algorithm. Roughly, this computes the total cluster variance given your actual data, and compares it against the total cluster variance of data that should not have any clusters at all (e.g., a dataset formed by sampling uniformly within the same bounds as your actual data). The number of clusters $k$ is then chosen to be the $k$ that gives the largest "gap" between these two cluster variances.
Another algorithm is the prediction strength algorithm (which is similar to the rest of chl's answer). Roughly, this performs a bunch of k-means clusterings, and computes the proportion of points that stay in the same cluster. $k$ is then chosen to be the smallest $k$ that gives a proportion higher than some threshold (e.g., a threshold of 0.8).
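The gap idea is easy to illustrate in one dimension, where the best 2-cluster split is just a threshold (a toy sketch, not the full gap statistic): well-separated data shrink their within-cluster dispersion far more in going from $k=1$ to $k=2$ than a structureless uniform reference does.

```python
import random

def w1(xs):
    """Total within-cluster sum of squares for k = 1."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def w2(xs):
    """Best within-cluster SS for k = 2 (in 1-D, scan all threshold splits)."""
    xs = sorted(xs)
    return min(w1(xs[:i]) + w1(xs[i:]) for i in range(1, len(xs)))

random.seed(0)
# Clearly bimodal data: two tight clumps far apart.
data = [random.uniform(0, 1) for _ in range(50)] + \
       [random.uniform(10, 11) for _ in range(50)]
# Structureless reference drawn uniformly over the same range.
ref = [random.uniform(0, 11) for _ in range(100)]

# Real clusters close most of the dispersion at k = 2; the reference doesn't.
print(w2(data) / w1(data) < w2(ref) / w1(ref))  # True
```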
| null | CC BY-SA 3.0 | null | 2011-06-08T19:09:12.110 | 2011-06-08T19:09:12.110 | null | null | 1106 | null |
11731 | 2 | null | 11724 | 9 | null | There isn't really a minimum number of observations. Essentially the more observations you have the more the parameters of your model are constrained by the data, and the more confident the model becomes. How many observations you need depends on the nature of the problem and how confident you need to be in your model. I don't think it is a good idea to rely too much on "rules of thumb" about this sort of thing, but use the all the data you can get and inspect the confidence/credible intervals on your model parameters and on predictions.
| null | CC BY-SA 3.0 | null | 2011-06-08T19:10:58.603 | 2011-06-08T19:10:58.603 | null | null | 887 | null |
11732 | 2 | null | 10182 | 18 | null | Both methods rely on the same idea, that of decomposing the observed variance into different parts or components. However, there are subtle differences in whether we consider items and/or raters as fixed or random effects. Apart from saying what part of the total variability is explained by the between factor (or how much the between variance departs from the residual variance), the F-test doesn't say much. At least this holds for a one-way ANOVA where we assume a fixed effect (and which corresponds to the ICC(1,1) described below). On the other hand, the ICC provides a bounded index when assessing rating reliability for several "exchangeable" raters, or homogeneity among analytical units.
We usually make the following distinction between the different kind of ICCs. This follows from the seminal work of Shrout and Fleiss (1979):
- One-way random effects model, ICC(1,1): each item is rated by different raters who are considered as sampled from a larger pool of potential raters, hence they are treated as random effects; the ICC is then interpreted as the % of total variance accounted for by subjects/items variance. This is called the consistency ICC.
- Two-way random effects model, ICC(2,1): both factors -- raters and items/subjects -- are viewed as random effects, and we have two variance components (or mean squares) in addition to the residual variance; we further assume that raters assess all items/subjects; the ICC gives in this case the % of variance attributable to raters + items/subjects.
- Two-way mixed model, ICC(3,1): contrary to the one-way approach, here raters are considered as fixed effects (no generalization beyond the sample at hand) but items/subjects are treated as random effects; the unit of analysis may be the individual or the average ratings.
This corresponds to cases 1 to 3 in their Table 1. An additional distinction can be made depending on whether we consider that observed ratings are the average of several ratings (they are called ICC(1,k), ICC(2,k), and ICC(3,k)) or not.
In sum, you have to choose the right model (one-way vs. two-way), and this is largely discussed in Shrout and Fleiss's paper. A one-way model tends to yield smaller values than the two-way model; likewise, a random-effects model generally yields lower values than a fixed-effects model. An ICC derived from a fixed-effects model is considered as a way to assess raters' consistency (because we ignore rater variance), while for a random-effects model we talk of an estimate of raters' agreement (whether raters are interchangeable or not). Only the two-way models incorporate the rater x subject interaction, which might be of interest when trying to unravel untypical rating patterns.
The following illustration is essentially a copy/paste of the example from `ICC()` in the [psych](http://cran.r-project.org/web/packages/psych/index.html) package (data come from Shrout and Fleiss, 1979). The data consist of 4 judges (J) assessing 6 subjects or targets (S) and are summarized below (I will assume that it is stored as an R matrix named `sf`)
```
J1 J2 J3 J4
S1 9 2 5 8
S2 6 1 3 2
S3 8 4 6 8
S4 7 1 2 6
S5 10 5 6 9
S6 6 2 4 7
```
This example is interesting because it shows how the choice of the model might influence the results, therefore the interpretation of the reliability study. All 6 ICC models are as follows (this is Table 4 in Shrout and Fleiss's paper)
```
Intraclass correlation coefficients
type ICC F df1 df2 p lower bound upper bound
Single_raters_absolute ICC1 0.17 1.8 5 18 0.16477 -0.133 0.72
Single_random_raters ICC2 0.29 11.0 5 15 0.00013 0.019 0.76
Single_fixed_raters ICC3 0.71 11.0 5 15 0.00013 0.342 0.95
Average_raters_absolute ICC1k 0.44 1.8 5 18 0.16477 -0.884 0.91
Average_random_raters ICC2k 0.62 11.0 5 15 0.00013 0.071 0.93
Average_fixed_raters ICC3k 0.91 11.0 5 15 0.00013 0.676 0.99
```
As can be seen, considering raters as fixed effects (hence not trying to generalize to a wider pool of raters) would yield a much higher value for the homogeneity of the measurement. (Similar results could be obtained with the [irr](http://cran.r-project.org/web/packages/irr/index.html) package (`icc()`), although we must play with the different options for model type and unit of analysis.)
What does the ANOVA approach tell us? We need to fit two models to get the relevant mean squares:
- a one-way model that considers subject only; this allows us to separate the targets being rated (between-group MS, BMS) and to get an estimate of the within-error term (WMS)
- a two-way model that considers subject + rater + their interaction (when there are no replications, this last term will be confounded with the residuals); this allows us to estimate the rater main effect (JMS), which can be accounted for if we want to use a random effects model (i.e., we'll add it to the total variability)
No need to look at the F-test, only MSs are of interest here.
```
library(reshape)
sf.df <- melt(sf, varnames=c("Subject", "Rater"))
anova(lm(value ~ Subject, sf.df))
anova(lm(value ~ Subject*Rater, sf.df))
```
Now, we can assemble the different pieces in an extended ANOVA Table which looks like the one shown below (this is Table 3 in Shrout and Fleiss's paper):
[](https://i.stack.imgur.com/8QszL.png)
(source: [mathurl.com](http://mathurl.com/3pzv9m2.png))
where the first two rows come from the one-way model, whereas the next two ones come from the two-way ANOVA.
It is easy to check all formulae in Shrout and Fleiss's article, and we have everything we need to estimate the reliability for a single assessment. What about the reliability for the average of multiple assessments (which often is the quantity of interest in inter-rater studies)? Following Hays and Revicki (2005), it can be obtained from the above decomposition by just changing the total MS considered in the denominator, except for the two-way random-effects model for which we have to rewrite the ratio of MSs.
- In case of ICC(1,1)=(BMS-WMS)/(BMS+(k-1)•WMS), the overall reliability is computed as (BMS-WMS)/BMS=0.443.
- For the ICC(2,1)=(BMS-EMS)/(BMS+(k-1)•EMS+k•(JMS-EMS)/N), the overall reliability is (N•(BMS-EMS))/(N•BMS+JMS-EMS)=0.620.
- Finally, for the ICC(3,1)=(BMS-EMS)/(BMS+(k-1)•EMS), we have a reliability of (BMS-EMS)/BMS=0.909.
Again, we find that the overall reliability is higher when considering raters as fixed effects.
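As a cross-check, the single-rater ICCs in the table above can be reproduced from the four mean squares with nothing but the formulas just given (a plain-Python sketch of the same computation the R packages perform):

```python
# Shrout & Fleiss (1979) example: 6 subjects (rows) rated by 4 judges (columns).
sf = [[9, 2, 5, 8], [6, 1, 3, 2], [8, 4, 6, 8],
      [7, 1, 2, 6], [10, 5, 6, 9], [6, 2, 4, 7]]
n, k = len(sf), len(sf[0])                   # 6 subjects, 4 raters
grand = sum(map(sum, sf)) / (n * k)

ss_total = sum((x - grand) ** 2 for row in sf for x in row)
ss_subj  = k * sum((sum(row) / k - grand) ** 2 for row in sf)
ss_rater = n * sum((sum(col) / n - grand) ** 2 for col in zip(*sf))

BMS = ss_subj / (n - 1)                      # between-subject MS
WMS = (ss_total - ss_subj) / (n * (k - 1))   # within MS (one-way model)
JMS = ss_rater / (k - 1)                     # between-rater MS
EMS = (ss_total - ss_subj - ss_rater) / ((n - 1) * (k - 1))  # residual MS

icc1 = (BMS - WMS) / (BMS + (k - 1) * WMS)
icc2 = (BMS - EMS) / (BMS + (k - 1) * EMS + k * (JMS - EMS) / n)
icc3 = (BMS - EMS) / (BMS + (k - 1) * EMS)
print(round(icc1, 2), round(icc2, 2), round(icc3, 2))  # 0.17 0.29 0.71
```

These match the `Single_raters_absolute`, `Single_random_raters` and `Single_fixed_raters` rows of the `ICC()` output.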
## References
- Shrout, P.E. and Fleiss, J.L. (1979). Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86, 420-428.
- Hays, R.D. and Revicki, D. (2005). Reliability and validity (including responsiveness). In Fayers, P. and Hays, R.D. (eds.), Assessing Quality of Life in Clinical Trials, 2nd ed., pp. 25-39. Oxford University Press.
| null | CC BY-SA 4.0 | null | 2011-06-08T19:53:52.143 | 2019-02-17T01:36:16.990 | 2019-02-17T01:36:16.990 | 79696 | 930 | null |
11734 | 1 | null | null | 2 | 449 | I have implemented a three-way ANOVA with type III sums of squares in C++. Since some of my experiments (observations) are more important (more informative), I want to give them a higher weight in my analysis. For example, an experiment which is very important has a weight of 10, a relatively important one has a weight of 5, and so on. To implement this, I repeat each such observation according to its weight: 10 times, 5 times, ...
I used the same concept in the two-way ANOVA, but there I used the conventional sums-of-squares formulas, because my design was balanced. So there I just multiplied the value of each observation, and the count of how many times I had seen it, by the weight.
Here, repeating the items makes the design matrix very big and increases the computational complexity. Now the problem is: what if I don't want to repeat them, but use a weight matrix instead? And what if the weights are not integer values (so I really can't repeat an item 0.3 times)?
I found this formula [here](http://www.stat.umn.edu/pub/macanova/docs/manual/manchp03.pdf):
```
H = X(X'WX)^(-1) X'W
```
So I put my weights into W matrix and used this formula. To check if it works, I used the conventional method and repeated observations as often as the weight value, and gave it to MATLAB. But I got different SS values.
Could you please kindly tell me how I should change this formula?
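For what it's worth, here is a minimal check (plain Python, not my C++ code) that the weighted normal equations $X'WX\,b = X'Wy$ reproduce the repeat-the-rows coefficients on a toy one-predictor design, which suggests the discrepancy lies in how the SS are assembled rather than in the weighting itself:

```python
import math

def wls_fit(data, w):
    """Solve the weighted normal equations (X'WX) b = X'W y for an
    intercept-plus-slope model, written out by hand as a 2x2 system."""
    sw   = sum(w)
    swx  = sum(wi * x     for (x, y), wi in zip(data, w))
    swxx = sum(wi * x * x for (x, y), wi in zip(data, w))
    swy  = sum(wi * y     for (x, y), wi in zip(data, w))
    swxy = sum(wi * x * y for (x, y), wi in zip(data, w))
    det = sw * swxx - swx * swx
    return ((swxx * swy - swx * swxy) / det,   # intercept
            (sw * swxy - swx * swy) / det)     # slope

data = [(0.0, 1.0), (1.0, 2.0), (2.0, 2.0)]
w = [2, 1, 3]
# The repeat-the-rows trick, for comparison:
rep = [p for p, wi in zip(data, w) for _ in range(wi)]

b_w = wls_fit(data, w)
b_r = wls_fit(rep, [1] * len(rep))
print(all(math.isclose(a, b) for a, b in zip(b_w, b_r)))  # True
```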
| How to implement a weighted 3-way ANOVA in unbalanced design? | CC BY-SA 4.0 | null | 2011-06-08T20:34:35.620 | 2021-04-12T03:16:58.320 | 2021-04-12T03:16:58.320 | 11887 | 2885 | [
"anova",
"sums-of-squares",
"weighted-sampling"
] |
11736 | 2 | null | 11724 | 0 | null | Update: I didn't see the above comment, by @David Harris, which is pretty much like mine. Sorry for that. You guys can delete my answer if it is too similar.
I'd second Dikran Marsupial's post and add my two cents.
Take into consideration your prior knowledge about the effects that you expect from your independent variables. If you expect small effects, then you will need a huge sample. If the effects are expected to be big, then a small sample can do the job.
As you might know, standard errors are a function of sample size, so the bigger the sample size, the smaller the standard errors. Thus, if effects are small, i.e., near zero, only a small standard error will be able to detect this effect, i.e., show that it is significantly different from zero. On the other hand, if the effect is big (far from zero), then even a large standard error will produce significant results.
If you need some reference, take a look at Andrew Gelman's blog.
| null | CC BY-SA 3.0 | null | 2011-06-08T22:03:32.000 | 2011-06-08T22:03:32.000 | null | null | 3058 | null |
11737 | 1 | 11741 | null | 3 | 1880 | This question's context is time series forecasting using regression, with multivariate training data. With a regularization method like LARS w/ LASSO, elastic net, or ridge, we need to decide on the model complexity or regularization parameters. For example, the ridge $\lambda$ penalty or the number of steps to go along the LARS w/ LASSO algorithm before hitting the OLS solution.
My first instinct is to use cross-validation to infer a decent value of the regularization parameter. For LARS w/ LASSO, I would infer the (effective) degrees of freedom that optimizes some fitness function like $\frac{1}{n}\sum_{i{\le}n}|\hat{y}_i-y_i|$. However with time series data, we should cross-validate out-of-sample. (No peeking into the future!) Say there are two feature time series $x_1$ and $x_2$ and I am forecasting a time series $y$. For each step of time $t$, train with $x_{1,1}$ through $x_{1,t}$ and $x_{2,1}$ through $x_{2,t}$ — and then forecast $\hat{y}_{t+1}$ and compare with the actual $y_{t+1}$.
This framework makes sense from an out-of-sample perspective, but I worry that earlier cross-validation steps (low $t$) will be overemphasized when averaging over all the equally-weighted steps. Should the first few time series cross-validation steps, the ones that use much less training data, be suppressed when inferring (regularization) model parameters? I might prefer a model complexity (regularization) level that "did better" on those later cross-validation steps using more training data.
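To make the setup concrete, here is a minimal expanding-window loop of the kind described (a Python sketch; the one-feature ridge fit and the `min_train` cutoff are illustrative placeholders for whichever regularized model is actually used):

```python
def walk_forward_errors(x, y, lam, min_train=10):
    """Expanding-window CV: at each step t, fit on observations up to t
    and score the one-step-ahead forecast of y[t].  Steps with fewer than
    `min_train` observations are skipped entirely, since models fit on a
    handful of points say little about the right amount of regularization."""
    errors = []
    for t in range(min_train, len(y)):
        xs, ys = x[:t], y[:t]
        # toy one-feature ridge fit (no intercept): b = sum(xy) / (sum(x^2) + lam)
        b = sum(u * v for u, v in zip(xs, ys)) / (sum(u * u for u in xs) + lam)
        errors.append(abs(b * x[t] - y[t]))
    return errors

x = list(range(1, 31))
y = [2.0 * u for u in x]
errs = walk_forward_errors(x, y, lam=0.0)
# One error per forecastable step; later entries come from larger training
# sets and can be up-weighted when comparing candidate values of lam.
print(len(errs))  # 20
```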
| Cross-validating for model parameters with time series | CC BY-SA 3.0 | null | 2011-06-08T22:13:31.840 | 2011-06-09T02:10:15.290 | 2011-06-08T23:13:14.547 | null | 4942 | [
"time-series",
"model-selection",
"cross-validation",
"regularization"
] |
11738 | 2 | null | 11609 | 3 | null | The way you pose the problem is a little muddled. Take this statement: Let $E$ be the event that the true parameter falls in the interval $[a,b]$. This statement is meaningless from a frequentist perspective; the parameter is the parameter and it doesn't fall anywhere, it just is. P(E) is meaningless, P(E|C) is meaningless and this is why your example falls apart. The problem isn't conditioning on a set of measure zero either; the problem is that you're trying to make probability statements about something that isn't a random variable.
A frequentist would say something like: Let $\tilde E$ be the event that the interval $(L(X), U(X))$ contains the true parameter. This is something a frequentist can assign a probability to.
Edit: @G. Jay Kerns makes the argument better than me, and types faster, so probably just move along :)
| null | CC BY-SA 3.0 | null | 2011-06-08T22:37:56.597 | 2011-06-08T22:48:35.597 | 2011-06-08T22:48:35.597 | 26 | 26 | null |
11739 | 1 | null | null | 3 | 122 | A treatment was given to one hand of each subject, and a single outcome metric is measured for both hands, twice pre- and several times post-treatment.
What is best practice for assessing effectiveness of treatment?
Treated and Untreated "groups" really are paired.
| Pre and Post, treated and un treated but from same subject | CC BY-SA 3.0 | null | 2011-06-08T23:58:58.097 | 2011-06-09T02:02:48.033 | 2011-06-09T02:02:48.033 | 183 | 4944 | [
"repeated-measures",
"clinical-trials"
] |
11740 | 2 | null | 11739 | 3 | null | For each time the metric is measured, take the difference of the measurements between the two hands. This gives you just one variable measured over time, which you can analyse as repeated measures. You hypothesize that the mean value of this difference across subjects will change (or won't change) after the treatment.
| null | CC BY-SA 3.0 | null | 2011-06-09T01:29:28.610 | 2011-06-09T01:29:28.610 | null | null | 3874 | null |
11741 | 2 | null | 11737 | 3 | null | You can include a "minimum" number of observations that you think you need to fit your model, and exclude n < this number from cross-validation. Obviously, you can't fit a model using just the 1st sample, and you can't really fit a model using the 1st 2 samples. At some reasonable point (5? 10?) you'll have enough observations to fit a valid model, so start at that point.
| null | CC BY-SA 3.0 | null | 2011-06-09T02:10:15.290 | 2011-06-09T02:10:15.290 | null | null | 2817 | null |
11742 | 2 | null | 11724 | 22 | null | There is one way to get at a solid starting point. Suppose there were no covariates, so that the only parameter in the model were the intercept. What is the sample size required to allow the estimate of the intercept to be precise enough so that the predicted probability is within 0.1 of the true probability with 95% confidence, when the true intercept is in the neighborhood of zero? The answer is n=96. What if there were one covariate, and it was binary with a prevalence of 0.5? One would need 96 subjects with x=0 and 96 with x=1 to have an upper bound on the margin of error for estimating Prob[Y=1 | X=x] not exceed 0.1. The general formula for the sample size required to achieve a margin of error of $\delta$ in estimating a true probability of $p$ at the 0.95 confidence level is $n = (\frac{1.96}{\delta})^{2} \times p(1-p)$. Set $p = 0.5$ for the worst case.
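As a quick numerical check of the formula (a Python sketch; `sample_size` is just an illustrative name), $\delta = 0.1$ and the worst case $p = 0.5$ reproduce the $n = 96$ figure:

```python
import math

def sample_size(delta, p=0.5, z=1.96):
    """n needed so the 95% margin of error for estimating p is at most delta."""
    return (z / delta) ** 2 * p * (1 - p)

print(round(sample_size(0.1)))   # worst case p = 0.5 -> 96
```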
| null | CC BY-SA 3.0 | null | 2011-06-09T02:45:10.820 | 2011-06-09T02:45:10.820 | null | null | 4253 | null |
11743 | 2 | null | 11609 | 11 | null | OK, now you're talking! I've voted to delete my previous answer because it doesn't make sense with this major-updated question.
In this new, updated question, with a computer that calculates 95% confidence intervals, under the orthodox frequentist interpretation, here are the answers to your questions:
- No.
- No.
- Once the interval is observed, it is not random any more, and does not change. (Maybe the interval was $[1,3]$.) But $\theta$ doesn't change, either, and has never changed. (Maybe it is $\theta = 7$.) The probability changes from 95% to 0% because 95% of the intervals the computer calculates cover 7, but 100% of the intervals $[1,3]$ do NOT cover 7.
(By the way, in the real world, the experimenter never knows that $\theta = 7$, which means the experimenter can never know whether the true probability that $[1,3]$ covers $\theta$ is zero or one. (S)he can only say that it must be one or the other.) That, plus the experimenter can say that 95% of the computer's intervals cover $\theta$, but we knew that already.
The spirit of your question keeps hinting back to the observer's knowledge, and how that relates to where $\theta$ lies. That (presumably) is why you were talking about the password, about the computer calculating the interval without your seeing it yet, etc. I've seen in your comments to answers that it seems unsatisfactory/unseemly to be obliged to commit to 0 or 1, after all, why couldn't we believe it is 87%, or $15/16$, or even 99%??? But that is exactly the power - and simultaneously the Achilles' heel - of the frequentist framework: the subjective knowledge/belief of the observer is irrelevant. All that matters is a long-run relative frequency. Nothing more, nothing less.
As a final BTW: if you change your interpretation of probability (which you intentionally have elected not to do for this question), then the new answers are:
- Yes.
- Yes.
- The probability changes because probability = subjective knowledge, or degree of belief, and the knowledge of the observer changed. We represent knowledge with prior/posterior distributions, and as new information becomes available, the former morphs into the latter (via Bayes' Rule).
(But for full disclosure, the setup you describe doesn't match the subjective interpretation very well. For instance, we usually have a 95% prior credible interval before even turning on the computer, then we fire it up and employ the computer to give us a 95% posterior credible interval which is usually considerably skinnier than the prior one.)
| null | CC BY-SA 3.0 | null | 2011-06-09T03:19:31.060 | 2011-06-09T11:58:12.827 | 2011-06-09T11:58:12.827 | null | null | null |
11744 | 2 | null | 11609 | 16 | null | Major update, major new answer. Let me try to clearly address this point, because it's where the problem lies:
"If you argue that "after seeing the interval, the notion of probability no longer makes sense", then fine, let's work in an interpretation of probability in which it does make sense."
The rules of probability don't change but your model for the universe does. Are you willing to quantify your prior beliefs about a parameter using a probability distribution? Is updating that probability distribution after seeing the data a reasonable thing to do? If you think so then you can make statements like $P(\theta\in [L(X), U(X)]| X=x)$. My prior distribution can represent my uncertainty about the true state of nature, not just randomness as it is commonly understood - that is, if I assign a prior distribution to the number of red balls in an urn that doesn't mean I think the number of red balls is random. It's fixed, but I'm uncertain about it.
Several people, including me, have said this, but if you aren't willing to call $\theta$ a random variable then the statement $P(\theta\in [L(X), U(X)]| X=x)$ isn't meaningful. If I'm a frequentist, I'm treating $\theta$ as a fixed quantity AND I can't ascribe a probability distribution to it. Why? Because it's fixed, and my interpretation of probability is in terms of long-run frequencies. The number of red balls in the urn doesn't ever change. $\theta$ is what $\theta$ is. If I pull out a few balls then I have a random sample. I can ask what would happen if I took a bunch of random samples - that is to say, I can talk about $P(\theta\in [L(X), U(X)])$ because the interval depends on the sample, which is (wait for it!) random.
But you don't want that. You want $P(\theta\in [L(X), U(X)]| X=x)$ - what's the probability that this interval I constructed with my observed (and now fixed) sample contains the parameter. However, once you've conditioned on $X=x$ then to me, a frequentist, there is nothing random left and the statement $P(\theta\in [L(X), U(X)]| X=x)$ doesn't make sense in any meaningful way.
The only principled way (IMO) to make a statement about $P(\theta\in [L(X), U(X)]| X=x)$ is to quantify our uncertainty about a parameter with a (prior) probability distribution and update that distribution with new information via Bayes Theorem. Every other approach I have seen is a lackluster approximation to Bayes. You certainly can't do it from a frequentist perspective.
That isn't to say that you can't evaluate traditional frequentist procedures from a Bayesian perspective (often confidence intervals are just credible intervals under uniform priors, for example) or that evaluating Bayesian estimators/credible intervals from a frequentist perspective isn't valuable (I think it can be). It isn't to say that classical/frequentist statistics is useless, because it isn't. It is what it is, and we shouldn't try to make it more.
Do you think it's reasonable to give a parameter a prior distribution to represent your beliefs about the universe? It sounds like it from your comments that you do; in my experience most people would agree (that's the little half-joke I made in my comment to @G. Jay Kerns's answer). If so, the Bayesian paradigm provides a logical, coherent way to make statements about $P(\theta\in [L(X), U(X)]| X=x)$. The frequentist approach simply doesn't.
| null | CC BY-SA 3.0 | null | 2011-06-09T03:39:05.227 | 2011-06-09T03:46:45.040 | 2011-06-09T03:46:45.040 | 26 | 26 | null |
11745 | 1 | 11779 | null | 4 | 1121 | When characterizing an information measure one desires to have the following 'Grouping' property (cf., Cover&Thomas, Ch.2 exercise 46)
$$H(p_1, p_2,\dots, p_n)=H(p_1+p_2, p_3,\dots, p_n)+(p_1+p_2)H(\frac{p_1}{p_1+p_2},\frac{p_2}{p_1+p_2})$$
(a.k.a. recursive). An analogous Grouping axiom is employed for Rényi entropy in [Jizba, Arimitsu](http://arxiv.org/abs/cond-mat/0207707).
Can anybody give an intuitive meaning of it and why it is desired?
Also, in the axiomatic characterization of inference based on entropy measures one has a set of properties (or axioms) (cf. [Shore and Johnson](http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01056144)).
Does the above Grouping property have any connection with any of the axioms of entropy based inference?
In general, is there any connection between the axioms in the axiomatic characterization of entropy (information) measures and the axioms in the axiomatic characterization of inference based on entropy measures? I know some of the connections: e.g., the symmetry property of entropy is related to the invariance property of inference, the additivity of entropy is related to the system independence of inference, etc.
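For Shannon entropy, at least, the Grouping property is an exact identity, which a small Python sketch makes concrete (the Rényi case needs the modified grouping axiom from the Jizba-Arimitsu paper):

```python
import math

def H(*p):
    """Shannon entropy (bits) of a probability vector."""
    return -sum(x * math.log2(x) for x in p if x > 0)

p1, p2, p3, p4 = 0.1, 0.2, 0.3, 0.4
s = p1 + p2
lhs = H(p1, p2, p3, p4)
rhs = H(s, p3, p4) + s * H(p1 / s, p2 / s)
print(abs(lhs - rhs) < 1e-12)   # the two sides agree to float precision
```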
| Property of entropy | CC BY-SA 3.0 | null | 2011-06-09T05:58:45.603 | 2011-06-10T08:55:44.707 | 2011-06-10T08:55:44.707 | 3485 | 3485 | [
"inference",
"entropy",
"information-theory"
] |
11746 | 1 | 11747 | null | 25 | 16973 | The Pearson coefficient between two variables is quite high (r=.65). But when I rank the variable values and run a Spearman's correlation, the coefficient value is much lower (r=.30).
- What is the interpretation of this?
| What could cause big differences in correlation coefficient between Pearson's and Spearman's correlation for a given dataset? | CC BY-SA 3.0 | null | 2011-06-09T07:14:24.973 | 2017-02-11T13:14:23.617 | 2011-06-09T08:06:27.420 | 183 | 3671 | [
"correlation",
"spearman-rho"
] |
11747 | 2 | null | 11746 | 44 | null |
### Why the big difference
- If your data is normally distributed or uniformly distributed, I would think that Spearman's and Pearson's correlation should be fairly similar.
- If they are giving very different results as in your case (.65 versus .30), my guess is that you have skewed data or outliers, and that outliers are leading Pearson's correlation to be larger than Spearman's correlation. I.e., very high values on X might co-occur with very high values on Y.
- @chl is spot on. Your first step should be to look at the scatter plot.
- In general, such a big difference between Pearson and Spearman is a red flag suggesting that
  - the Pearson correlation may not be a useful summary of the association between your two variables, or
  - you should transform one or both variables before using Pearson's correlation, or
  - you should remove or adjust outliers before using Pearson's correlation.
### Related Questions
Also see these previous questions on differences between Spearman and Pearson's correlation:
- How to choose between Pearson and Spearman correlation?
- Pearson's or Spearman's correlation with non-normal data
### Simple R Example
The following is a simple simulation of how this might occur.
Note that the case below involves a single outlier, but that you could produce similar effects with multiple outliers or skewed data.
```
# Set Seed of random number generator
set.seed(4444)
# Generate random data
# First, create some normally distributed correlated data
x1 <- rnorm(200)
y1 <- rnorm(200) + .6 * x1
# Second, add a major outlier
x2 <- c(x1, 14)
y2 <- c(y1, 14)
# Plot both data sets
par(mfrow=c(2,2))
plot(x1, y1, main="Raw no outlier")
plot(x2, y2, main="Raw with outlier")
plot(rank(x1), rank(y1), main="Rank no outlier")
plot(rank(x2), rank(y2), main="Rank with outlier")
# Calculate correlations on both datasets
round(cor(x1, y1, method="pearson"), 2)
round(cor(x1, y1, method="spearman"), 2)
round(cor(x2, y2, method="pearson"), 2)
round(cor(x2, y2, method="spearman"), 2)
```
Which gives this output
```
[1] 0.44
[1] 0.44
[1] 0.7
[1] 0.44
```
The correlation analysis shows that without the outlier Spearman and Pearson are quite similar, and with the rather extreme outlier, the correlation is quite different.
The plot below shows how treating the data as ranks removes the extreme influence of the outlier, thus leading Spearman to be similar both with and without the outlier whereas Pearson is quite different when the outlier is added.
This highlights why Spearman is often called robust.

| null | CC BY-SA 3.0 | null | 2011-06-09T07:32:00.293 | 2011-06-09T12:32:19.720 | 2017-04-13T12:44:26.710 | -1 | 183 | null |
11749 | 1 | null | null | 1 | 179 | I would like to simulate the appearance of posts in a forum, and I need to know the probability distribution of new questions being asked. In my first simulation I used a normal distribution, but I think an exponential distribution may be a better fit. | Probability distribution of questions in a forum | CC BY-SA 3.0 | null | 2011-06-09T09:05:26.687 | 2011-06-10T16:21:24.647 | 2011-06-10T06:46:57.370 | 2116 | 4953 | [
| Probability distribution of questions in a forum | CC BY-SA 3.0 | null | 2011-06-09T09:05:26.687 | 2011-06-10T16:21:24.647 | 2011-06-10T06:46:57.370 | 2116 | 4953 | [
"distributions",
"probability"
] |
11750 | 2 | null | 11544 | 1 | null | I was thinking more about the question and thought I would give a slight enhancement of the naive approach as an answer, in the hope that people can suggest further ideas in this direction. It also allows us to eliminate the need to know the size of the fluctuations.
---
The easiest way to implement it is with two parameters $(T,\alpha)$. Let $y_t = x_{t + 1} - x_{t}$ be the change in the time series between timestep $t$ and $t + 1$. When the series is stable around $x^*$, $y$ will fluctuate around zero with some standard error. Here we will assume that this error is normal.
Take the last $T$ values $y_t$ and fit a Gaussian with confidence $\alpha$ using a function like Matlab's [normfit](http://www.mathworks.com/help/toolbox/stats/normfit.html). The fit will give us a mean $\mu$ with $\alpha$ confidence error on the mean $E_\mu$ and a standard deviation $\sigma$ with corresponding error $E_\sigma$. If $0 \in (\mu - E_\mu, \mu + E_\mu)$, then you can accept. If you want to be extra sure, then you can also renormalize the $y_t$s by the $\sigma$ you found (so that you now have standard deviation $1$) and test with the [Kolmogorov-Smirnov](http://www.mathworks.com/help/toolbox/stats/kstest.html) test at the $\alpha$ confidence level.
---
The advantage of this method is that unlike the naive approach, you no longer need to know anything about the magnitude of the thermal fluctuations around the mean. The limitation is that you still have an arbitrary $T$ parameter, and we had to assume a normal distribution on the noise (which is not unreasonable). I am not sure if this can be modified by some weighted mean with discounting. If a different distribution is expected to model the noise, then normfit and the Kolmogorov-Smirnov test should be replaced by their equivalents for that distribution.
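A rough Python translation of the confidence-interval part of this procedure, with SciPy standing in for Matlab's `normfit` (the Kolmogorov-Smirnov refinement could be added with `scipy.stats.kstest`); the function name and parameter defaults are just illustrative:

```python
import numpy as np
from scipy import stats

def is_stable(x, T=50, alpha=0.05):
    """Accept stability if 0 lies in the alpha-level CI for the mean
    of the last T increments y_t = x_{t+1} - x_t."""
    y = np.diff(x)[-T:]
    mu, s = y.mean(), y.std(ddof=1)
    half = stats.t.ppf(1 - alpha / 2, T - 1) * s / np.sqrt(T)
    return bool(mu - half <= 0 <= mu + half)

rng = np.random.default_rng(0)
flat = 5 + rng.normal(0, 0.1, 200)             # fluctuates around x* = 5
drift = np.cumsum(rng.normal(0.5, 0.1, 200))   # still trending upward
print(is_stable(flat), is_stable(drift))       # True False
```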
| null | CC BY-SA 3.0 | null | 2011-06-09T09:07:54.933 | 2011-06-09T09:07:54.933 | null | null | 4872 | null |
11752 | 1 | null | null | 2 | 1901 | Does anybody know if there are commonly known disadvantages of negbin regression? In my opinion it seems to fit every problem pretty well (measured with the estimated dispersion parameter). So why not always use it?
| Disadvantages of negbin regression | CC BY-SA 3.0 | null | 2011-06-09T11:16:12.717 | 2011-08-28T08:48:03.403 | null | null | 4496 | [
"regression",
"generalized-linear-model",
"negative-binomial-distribution"
] |
11753 | 1 | 11757 | null | 3 | 1842 | How do you estimate degrees of freedom for derived measurements?
I want to assess the significance of the distance of an independent data point to a regression line. I can easily calculate the (vertical) distance between the data point and the regression line, and I get the uncertainty of the distance from the uncertainties of slope and intercept of the linear regression via Gaussian error propagation. However, what are the degrees of freedom?
The linear regression line has been calculated from `n` data points, thus its degrees of freedom is `n-2`. The additional measurement is independent, so I get another degree of freedom, bringing the total to `n-1`?
Also, should I estimate the uncertainty of the independent measurement using the variance of the residuals of the regression, since the measurement process is the same for both the data that went into the fit and the independent data point? I guess this would reduce the degrees of freedom again?
| Distance to a regression line, and degrees of freedom | CC BY-SA 3.0 | null | 2011-06-09T11:55:47.333 | 2011-06-09T13:16:40.683 | 2011-06-09T12:18:20.830 | 198 | 198 | [
"regression",
"degrees-of-freedom"
] |
11754 | 1 | 11787 | null | 6 | 447 | I would like to estimate a multi level model in Stata or R (using lmer) where the first level coefficients are the same for all observations, but the coefficients within observation are correlated.
An example would look something like this:
$$Y_i = \beta_1 x_{1i} + \beta_2 x_{2i} + \beta_3 x_{3i} + ... + \varepsilon_{0i}$$
$$\beta_1=\gamma_1 z_1 + \gamma_2 z_2 + \varepsilon_{1}$$
$$\beta_2=\gamma_1 z_1 + \gamma_3 z_3 + \varepsilon_{2}$$
$$\beta_3=\gamma_2 z_2 + \gamma_3 z_3 + \varepsilon_{3}$$
and so on, with equations for each beta.
Clearly, I'd make a distributional assumption for the $\varepsilon$'s... like $\varepsilon \sim N(0,\sigma^2)$
The x variables vary by observation, but the z variables do not vary between observations. Thus, the parameters $\gamma$ and $\beta$ are also the same for all observations.
This differs from most hierarchical models I have seen in that parameters are related within an observation, rather than depending on observation-level characteristics.
As a specific application, consider a model where the dependent variable $Y$ is a student's test scores. The x variables are measures of performance in previous classes, and the $z$ variables are characteristics of those classes. Students have taken the same set of classes, but there are few students in each class, so I'd like to pool estimation of the coefficients $\beta$. Because the classes have similar characteristics, there may be far fewer $\gamma$ parameters than $\beta$ parameters, and pooling estimates to those lower level class characteristics may yield more precise estimates of $\beta$ than estimation without the 2nd level model.
At the same time, I'd like estimates of the $\beta$ parameters, so substituting in and estimating y as a function of $\gamma$ and x only gets me half way there.
What is the best way to estimate this type of model? I typically program in R, Stata and Python.
| Estimating correlated parameters with multi-level model | CC BY-SA 3.0 | null | 2011-06-09T12:29:07.600 | 2017-04-29T21:13:56.563 | 2017-04-29T21:13:56.563 | 28666 | 3700 | [
"r",
"multilevel-analysis",
"lme4-nlme"
] |
11755 | 2 | null | 11753 | 1 | null | Simplest way would be to include the new data point in the regression and add an indicator (dummy) variable to the model that takes the value 1 for your new data point and 0 for all the rest. Then simply look at the t-statistic and p-value for the indicator variable.
This approach assumes the residual variance for the new data point is the same as for the rest, and shows that the degrees of freedom for the comparison are $n-2$ (where $n$ is the number of original data points, i.e. not counting the new one): the new point's residual is absorbed by its own indicator, so the residual variance is estimated from the original fit alone, with $(n+1) - 3 = n-2$ degrees of freedom.
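A minimal numerical sketch of this trick (Python with plain least squares here; in R it would be one `lm()` call with a dummy column, and all data values below are made up for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 20
x = np.linspace(0, 10, n)
y = 2 + 3 * x + rng.normal(0, 1, n)        # the n original points
x_new, y_new = 5.0, 30.0                   # new point, well above the line

# design matrix columns: intercept, x, indicator (1 only for the new point)
X = np.column_stack([np.ones(n + 1),
                     np.append(x, x_new),
                     np.append(np.zeros(n), 1.0)])
Y = np.append(y, y_new)
beta = np.linalg.lstsq(X, Y, rcond=None)[0]
df = (n + 1) - 3                           # residual df: n + 1 observations, 3 parameters
resid = Y - X @ beta
s2 = resid @ resid / df
se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[2, 2])
t = beta[2] / se                           # t-statistic for the indicator coefficient
p = 2 * stats.t.sf(abs(t), df)
print(df, round(float(t), 1), p < 0.001)
```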
| null | CC BY-SA 3.0 | null | 2011-06-09T12:55:51.920 | 2011-06-09T12:55:51.920 | null | null | 449 | null |
11757 | 2 | null | 11753 | 6 | null | There is a well established theory of prediction intervals in the context of linear regression. New values at $x=x_0$ have a normal distribution with mean $\alpha+\beta x_0$ (not surprisingly) and variance $\sigma^2\left(1+\frac{1}{n} + \frac{(x_0-\bar{x})^2}{\sum{(x_i-\bar{x})^2}}\right)$.
After plugging in the estimated versions of the parameters, the standardized distribution will be a $t$ distribution with $n-2$ degrees of freedom. That's because the estimate of $\sigma^2$ has that many degrees of freedom, and the df of the chi-squared term in the denominator drives the degrees of freedom.
Intuitively, you can think that you are not using the new data point for estimating anything, so you are not gaining any degrees of freedom.
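As a concrete sketch of that interval (Python; `prediction_interval` is an illustrative name implementing the formula above):

```python
import numpy as np
from scipy import stats

def prediction_interval(x, y, x0, conf=0.95):
    """Prediction interval for a new observation at x0 in simple linear regression."""
    n = len(x)
    b, a = np.polyfit(x, y, 1)                      # slope, intercept
    resid = y - (a + b * x)
    s2 = resid @ resid / (n - 2)                    # sigma^2 estimate, n - 2 df
    se = np.sqrt(s2 * (1 + 1 / n +
                       (x0 - x.mean()) ** 2 / ((x - x.mean()) ** 2).sum()))
    t = stats.t.ppf((1 + conf) / 2, n - 2)
    m = a + b * x0
    return m - t * se, m + t * se

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 30)
y = 1 + 2 * x + rng.normal(0, 1, 30)
lo, hi = prediction_interval(x, y, 5.0)
print(round(lo, 2), round(hi, 2))   # should bracket the true mean, 11
```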
| null | CC BY-SA 3.0 | null | 2011-06-09T13:16:40.683 | 2011-06-09T13:16:40.683 | null | null | 279 | null |
11758 | 2 | null | 3713 | 32 | null | A quote from Hastie, Tibshirani and Friedman,
[Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/),
p. 506:
> "An appropriate dissimilarity measure is far more important in obtaining success with clustering than choice of clustering algorithm. This aspect of the problem ... depends on domain specific knowledge and is less amenable to general research."
(That said, wouldn't it be nice if (wibni) there were a site where students could try a few algorithms and metrics on a few small standard datasets?)
| null | CC BY-SA 3.0 | null | 2011-06-09T13:33:16.750 | 2011-06-20T10:17:35.320 | 2011-06-20T10:17:35.320 | 557 | 557 | null |
11759 | 1 | 11760 | null | 5 | 1375 | I want to generate series of 0s and 1s that exhibit some clustering. By this I mean that 1s should tend to occur together, and likewise 0s. So I envisage series of 0s and 1s that will exhibit similar clustering of these elements, and not just random series of 0s and 1s.
In essence, for a single time series, I would go about that by thresholding a Markov chain with a 2x2 transition matrix, with some stochastic noise added to it. Now, I'm not too certain on how to do this, but since I would like to produce several of these series, I was wondering whether there's something straightforward that I have missed.
I plan to use these series to simulate data availability (0 or 1) in a data acquisition system and to do some Monte Carlo simulations of how this affects what we can do with the data.
In order for the above simulations to be realistic, I would like to fit real observations to this model, so that the simulated time series share temporal correlation with the data. I would initially do this by calculating lag autocorrelations of both series and tweaking model parameters until I get something that resembles my observations, but I am unsure whether this is the best way.
Thanks!
| How can I generate correlated timeseries made up of 0s and 1s? | CC BY-SA 3.0 | null | 2011-06-09T14:05:18.013 | 2011-06-14T14:07:31.290 | 2011-06-14T14:07:31.290 | 4955 | 4955 | [
"time-series",
"simulation",
"markov-process"
] |
11760 | 2 | null | 11759 | 7 | null | A standard method is to begin by generating an autocorrelated Gaussian process $z_i$. (It doesn't have to be Gaussian, but such processes are easy to generate.) Take the logistic (inverse logit) of the values, producing a series of numbers $p_i = 1/\left(1 + \exp(-z_i)\right)$ in the interval $(0,1)$. Independently draw values from Bernoulli($p_i$) distributions to create a series of $0$ and $1$ values. Clustering will tend to occur with positive autocorrelation.
As a bonus, this procedure allows you to perform two stages of simulation: you can fix the underlying realization of the Gaussian process and iterate the second stage of Bernoulli draws. Or you can generate a separate realization of the Gaussian process each time.
There are probably R packages to do all this directly. The `geoRGLM` package performs this simulation in two dimensions (using a Matern autocorrelation function, which includes exponential and Gaussian autocorrelations as special cases); you could simulate along a straight line (or $1$ by $n$ grid) to obtain a time series.
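A rough Python sketch of the two-stage recipe, using an AR(1) process for the autocorrelated Gaussian stage (parameter values are illustrative); re-drawing only the Bernoulli stage for a fixed `z` gives the two-stage simulation mentioned above:

```python
import numpy as np

def clustered_binary(n, phi=0.9, scale=2.0, seed=0):
    """Bernoulli(p_t) with p_t = logistic(scale * z_t), z_t an AR(1) Gaussian.
    phi near 1 gives strong positive autocorrelation, hence longer runs."""
    rng = np.random.default_rng(seed)
    z = np.zeros(n)
    for t in range(1, n):
        z[t] = phi * z[t - 1] + rng.normal()
    p = 1.0 / (1.0 + np.exp(-scale * z))      # inverse logit
    return (rng.random(n) < p).astype(int)    # independent Bernoulli draws

b = clustered_binary(500)
r = np.corrcoef(b[:-1], b[1:])[0, 1]          # lag-1 autocorrelation of the 0/1 series
print(r > 0.1)                                # clustering shows up as positive autocorrelation
```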
| null | CC BY-SA 3.0 | null | 2011-06-09T14:15:29.530 | 2011-06-09T14:15:29.530 | null | null | 919 | null |
11761 | 2 | null | 6978 | 5 | null | You might look into the [Vowpal Wabbit project](https://github.com/JohnLangford/vowpal_wabbit/wiki), from John Langford at Yahoo! Research. It is an online learner that does specialized gradient descent on a few loss functions. VW has some killer features:
- Installs on Ubuntu trivially, with "sudo apt-get install vowpal-wabbit".
- Uses the hashing trick for seriously huge feature spaces.
- Feature-specific adaptive weights.
- Most importantly, there is an active mailing list and community plugging away on the project.
The Bianchi & Lugosi book [Prediction, Learning and Games](http://rads.stackoverflow.com/amzn/click/0521841089) gives a solid, theoretical foundation to online learning. A heavy read, but worth it!
| null | CC BY-SA 3.0 | null | 2011-06-09T14:45:29.813 | 2011-06-09T16:09:25.037 | 2011-06-09T16:09:25.037 | 4942 | 4942 | null |
11762 | 1 | null | null | 2 | 298 | My question is very general. I am learning extreme value theory to examine tail behavior. The concept of regular variation is still too vague to me. Could anyone provide more information to clarify it? Any thoughts on its importance in probability theory?
| More info needed on second order regular variation in extreme value theory | CC BY-SA 3.0 | null | 2011-06-09T14:49:25.973 | 2011-06-11T05:43:42.933 | 2011-06-10T05:00:01.503 | 919 | 4497 | [
"probability"
] |
11763 | 2 | null | 643 | 7 | null | Often when mathematicians talk about probability they start with a known probability distribution then talk about the probability of events. The true value of the central limit theorem is that it allows us to use the normal distribution as an approximation in cases where we do not know the true distribution. You could ask your father a standard statistics question (but phrased as math) about what is the probability that the mean of a sample will be greater than a given value if the data comes from a distribution with mean mu and sd sigma, then see if he assumes a distribution (which you then say we don't know) or says that he needs to know the distribution. Then you can show that we can approximate the answer using the CLT in many cases.
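For example (an illustrative Python comparison, with an exponential standing in for the "unknown" distribution):

```python
import numpy as np
from scipy import stats

# P(mean of 40 draws from Exp(1) exceeds 1.2): the CLT says the sample mean is
# approximately N(1, 1/40), even though the data are far from normal
z = (1.2 - 1.0) / np.sqrt(1.0 / 40)
clt = stats.norm.sf(z)

rng = np.random.default_rng(0)
means = rng.exponential(1.0, (100_000, 40)).mean(axis=1)
mc = (means > 1.2).mean()              # Monte Carlo answer from the true distribution
print(round(clt, 3), round(mc, 3))     # the CLT approximation comes out close
```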
For comparing math to stats, I like to use the mean value theorem of integration (which says that for an integral from a to b there exists a rectangle from a to b with the same area and the height of the rectangle is the average of the curve). The mathematician looks at this theorem and says "cool, I can use an integration to compute an average", while the statistician looks at the same theorem and says "cool, I can use an average to compute an integral".
I actually have cross stitched wall hangings in my office of the mean value theorem and the CLT (along with Bayes theorem).
| null | CC BY-SA 3.0 | null | 2011-06-09T15:53:24.603 | 2011-06-09T15:53:24.603 | null | null | 4505 | null |
11764 | 1 | 11790 | null | 38 | 8670 | Neural networks are often treated as "black boxes" due to their complex structure. This is not ideal, as it is often beneficial to have an intuitive grasp of how a model is working internally. What are methods of visualizing how a trained neural network is working? Alternatively, how can we extract easily digestible descriptions of the network (e.g. this hidden node is primarily working with these inputs)?
I am primarily interested in two layer feed-forward networks, but would also like to hear solutions for deeper networks. The input data can either be visual or non-visual in nature.
| How to visualize/understand what a neural network is doing? | CC BY-SA 3.0 | null | 2011-06-09T17:19:19.360 | 2016-02-17T04:10:17.037 | null | null | 2965 | [
"data-visualization",
"neural-networks"
] |
11765 | 1 | null | null | 2 | 704 | The intercoder reliability statistic Krippendorff's alpha is nice because it can be used across many different types of data: nominal, ordinal, interval, ratio, circular, etc. To do so, you just substitute a different distance metric into the reliability calculation. See [wikipedia](http://en.wikipedia.org/wiki/Krippendorff%27s_Alpha) for a good description.
My question: where do these distance metrics come from? Are they uniquely determined for a given data type? Are there certain desiderata they satisfy? I've read Krippendorff's content analysis book, and everything else I could find on the topic, and no luck...
| Where do the distance metrics for the Krippendorff's alpha statistic come from? | CC BY-SA 3.0 | null | 2011-06-09T17:47:34.477 | 2018-08-15T08:04:28.927 | 2018-08-15T08:04:28.927 | 11887 | 4110 | [
"distance",
"reliability",
"metric",
"agreement-statistics"
] |
11766 | 2 | null | 9396 | 1 | null | The significance of modeling the cumulative sum of residuals is to better approximate the [Ornstein-Uhlenbeck process](http://en.wikipedia.org/wiki/Ornstein%E2%80%93Uhlenbeck_process) of equation $(12)$ with discrete real-life data.
This process $X_i(t)$ represents the idiosyncratic above- or below- market fluctuations of the particular stock. More specifically, it is the difference between the stock's return and that of its industry sector (ETF). The expected value of the infinitesimal increment $dX_i(t)$ of the $X_i(t)$ process is based on the previous value of the process:
$$
E[dX_i(t)|X_i(s),s{\le}t] = {\kappa}_i(m_i-X_i(t))dt
$$
Note the $X_i(t)$ on the right-hand side, suggesting a cumulative process.
The authors approximate a stock's $X_i(t)$ process with actual market data by first regressing the stock on its industry ETF (top of p.45), and then summing the residuals up to a certain point in time. This represents the cumulative above- or below- market return of the stock before the end of the regression time window.
| null | CC BY-SA 3.0 | null | 2011-06-09T18:03:20.480 | 2011-06-09T18:03:20.480 | null | null | 4942 | null |
11767 | 2 | null | 11764 | 13 | null | Estimate feature importance by randomly bumping every value of a single feature, and recording how your overall fitness function degrades.
So if your first feature $x_{1,i}$ is continuously-valued and scaled to $[0,1]$, then you might add $rand(0,1)-0.5$ to each training example's value for the first feature. Then look for how much your $R^2$ decreases. This effectively excludes a feature from your training data, but deals with cross-interactions better than literally deleting the feature.
Then rank your features by fitness function degradation, and make a pretty bar chart. At least some of the most important features should pass a gut-check, given your knowledge of the problem domain. And this also lets you be nicely surprised by informative features that you may not have expected.
This sort of feature importance test works for all black-box models, including neural networks and large CART ensembles. In my experience, feature importance is the first step in understanding what a model is really doing.
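A minimal sketch of this perturbation test (Python; a plain least-squares fit stands in for the black-box model, and all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.random((n, 3))                                   # three features scaled to [0, 1]
y = 5 * X[:, 0] + 1 * X[:, 1] + rng.normal(0, 0.1, n)    # feature 2 is irrelevant

# "black box": here a linear fit, but any trained model with a predict() works
beta = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)[0]
predict = lambda M: beta[0] + M @ beta[1:]

def r2(y, yhat):
    return 1 - ((y - yhat) ** 2).sum() / ((y - y.mean()) ** 2).sum()

base = r2(y, predict(X))
importance = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] += rng.random(n) - 0.5                      # bump every value of feature j
    importance.append(base - r2(y, predict(Xp)))         # fitness degradation
print([round(v, 3) for v in importance])                 # largest for feature 0
```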
| null | CC BY-SA 3.0 | null | 2011-06-09T18:23:18.693 | 2011-06-09T18:23:18.693 | null | null | 4942 | null |
11768 | 1 | null | null | 6 | 444 | In class, we've been learning a myriad of really interesting techniques to sample from a given distribution, filter online data, particle filters, etc.
My issue is that when I take some real-world data and plot it, the distribution is clearly not Gaussian. So, I need to estimate some distribution. Or, in the case of an online filter (particle, etc.) I need to estimate some form of transition kernel.
How do people normally do this? What would be considered "best practices" for developing some distribution to fit empirical data? What are some reliable "goodness of fit" tests?
| Which distribution to use with MCMC and empirical data? | CC BY-SA 3.0 | null | 2011-06-09T18:25:37.017 | 2017-09-28T18:27:35.047 | 2017-09-28T18:27:35.047 | 60613 | 2566 | [
"markov-chain-montecarlo"
] |
11769 | 1 | null | null | 16 | 1105 | As a student in physics, I have experienced the "Why I am a Bayesian" lecture perhaps half a dozen times. It is always the same -- the presenter smugly explains how the Bayesian interpretation is superior to the frequentist interpretation allegedly employed by the masses. They mention Bayes rule, marginalization, priors and posteriors.
What is the real story?
Is there a legitimate domain of applicability for frequentist statistics? (Surely in sampling or rolling a die many times it must apply?)
Are there useful probabilistic philosophies beyond "bayesian" and "frequentist"?
| Is there more to probability than Bayesianism? | CC BY-SA 3.0 | null | 2011-06-07T16:47:38.303 | 2020-03-18T01:01:01.113 | 2020-03-18T01:01:01.113 | 11887 | 3334 | [
"probability",
"bayesian",
"frequentist",
"philosophical"
] |
11770 | 2 | null | 11768 | 6 | null | The Kolmogorov-Smirnov test is always a good way to see if an arbitrary distribution fits. You can use the test cited below to see if two sets of data came from the same distribution:
> Li, Q., E. Maasoumi and J.S. Racine (2009), “A Nonparametric Test for Equality of Distributions with Mixed Categorical and Continuous Data,” Journal of Econometrics, 148, pp. 186–200.
This test is available in the [np](http://cran.r-project.org/web/packages/np/index.html) package in R as the `npdeneqtest()` function.
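For readers working in Python rather than R, the classical two-sample Kolmogorov–Smirnov test (not the mixed-data test above, which as far as I know has no SciPy equivalent) can be sketched with `scipy.stats.ks_2samp`:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
same = rng.normal(size=1000)
other = rng.normal(size=1000)              # drawn from the same distribution
shifted = rng.normal(loc=2.0, size=1000)   # clearly different distribution

stat_same, p_same = stats.ks_2samp(same, other)
stat_diff, p_diff = stats.ks_2samp(same, shifted)
# a large statistic / tiny p-value is evidence the two samples differ
```

For the shifted sample the KS statistic is large and the p-value essentially zero, while two samples from the same distribution give a small statistic.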
Choosing a good distribution is always difficult; what does your data look like? [Gamma distributions](http://en.wikipedia.org/wiki/Gamma_distribution) are rather flexible for positive data, most data can be reasonably approximated with mixtures of Gaussians, the Beta distribution is extremely flexible for data between zero and one.
| null | CC BY-SA 3.0 | null | 2011-06-09T19:06:04.097 | 2011-06-10T10:20:25.863 | 2011-06-10T10:20:25.863 | 930 | 1893 | null |
11771 | 2 | null | 11768 | 5 | null | Note that goodness-of-fit tests can only rule out distributions; they don't prove which distribution the data came from. And in many cases they may have low power to rule out some distributions, so you really don't know whether the data come from that distribution or you just don't have the power.
But note that you can have a population that follows a normal distribution exactly (or at least closely enough), while data sampled randomly from that distribution does not look nicely bell shaped (the same goes for any other distribution). The population distribution is more important than the sample distribution. One thing to try is to plot several samples and see how different they are, then see if your data fits into that variation scheme. This idea is detailed in:
> Buja, A., Cook, D., Hofmann, H., Lawrence, M., Lee, E.-K., Swayne, D.F. and Wickham, H. (2009) Statistical inference for exploratory data analysis and model diagnostics. Phil. Trans. R. Soc. A 367, 4361–4383. doi:10.1098/rsta.2009.0120
If you still feel the need to find a transformation to get to normality, then consider using the Box-Cox transformations. The boxcox function in the MASS package for R will find an optimal transform, but it also gives a confidence interval so that you can bring outside knowledge into the decision, for example the "best" value of lambda may be 0.4, but if a square root transform has scientific merit and 0.5 is in the confidence interval, then that is probably more reasonable than going with the 0.4.
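The same idea is available outside R. As a sketch (assuming SciPy is installed), `scipy.stats.boxcox` chooses lambda by maximum likelihood and, with the `alpha` argument, also reports a confidence interval in the spirit of MASS's `boxcox`:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=2000)  # right-skewed, positive data

# lmbda=None -> lambda chosen by maximum likelihood; alpha asks for a CI
transformed, lam, (lo, hi) = stats.boxcox(x, alpha=0.05)
# for lognormal data the optimal lambda should be near 0 (the log transform)
```

As the answer suggests, rather than blindly using the "best" lambda, one can check whether a scientifically meaningful value (0, 0.5, etc.) falls inside the reported interval.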
A lot of this also depends on what you plan to do with your data or the transform of it. Often we can apply the Central Limit Theorem and the distribution of the population then does not matter (as long as we believe that it is not overly skewed or has extreme outliers). Or there are non-parametric methods that don't rely on assumptions about the population distribution. So the best approach depends on what you plan to do with this data.
| null | CC BY-SA 3.0 | null | 2011-06-09T19:07:00.947 | 2011-06-09T19:07:00.947 | null | null | 4505 | null |
11772 | 2 | null | 11769 | 12 | null | The Bayesian interpretation of probability suffices for practical purposes. But even given a Bayesian interpretation of probability, there is more to statistics than probability, because the foundation of statistics is decision theory, and decision theory requires not only a class of probability models but also the specification of an optimality criterion for a decision rule. Under the Bayes criterion, optimal decision rules can be obtained through Bayes' rule; but many frequentist methods are justified under minimax and other decision criteria.
| null | CC BY-SA 3.0 | null | 2011-06-09T19:24:29.173 | 2011-06-09T19:24:29.173 | null | null | 3567 | null |
11773 | 1 | 19499 | null | 3 | 188 | In the following bioinformatics paper, ["Quantifying environmental adaptation of metabolic pathways in metagenomics"](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2629784/), Gianoulis et al. employ the use of two tools to detect multivariate relationships between environmental features and microbiomic features:
- Regularized Canonical Correlation Analysis
- Discriminative Partition Matching (DPM)
The paper references several statistics papers and books, including a book by R. Wichern, but it is unclear which of these is the reference for DPM. A Google search for "Discriminative Partition Matching" pulls up several applied papers, including the Gianoulis paper, but no direct expositions of the method.
Where can I find information on "Discriminative Partition Matching"?
| What is discriminative partition matching? | CC BY-SA 3.0 | null | 2011-06-09T20:05:42.557 | 2011-12-07T17:51:23.760 | 2011-06-12T08:09:46.040 | null | 3567 | [
"machine-learning"
] |
11774 | 2 | null | 11754 | 0 | null | How is this advantageous over a normal varying coefficient model such as:
```
fit<-lmer(score~1+vector of class_attributes+vector of student attributes
+(1+vector of class attributes+vector of student attributes)
+(1+vector of student attributes|class)
+(1+vector of class attributes|student))
```
?
In this example, there is an overall intercept and attribute effect, but each class has a different coefficient possible which can be viewed by typing ranef(fit)
Section 3.2 of the Bates book on lme4 seems exactly analogous to your situation.
```
https://r-forge.r-project.org/scm/viewvc.php/*checkout*/www/lMMwR/lrgprt.pdf?revision=656&root=lme4&pathrev=656
```
Update (I updated the line of code above):
I also ran these lines to try to simulate your situation, but without any student attributes
```
library(lme4)
n<-100 #class size
pool<-200 #student pool size
class=c(rep(1,n), rep(2,n), rep(3,n))
min_in_class=c(rep(45,n), rep(60,n), rep(90,n))
min_hw=c(rep(90,n), rep(60,n), rep(60,n))
student_id=c(sample(1:pool,n), sample(1:pool,n), sample(1:pool,n))
performance=55+10*class +.1*min_in_class +.2*min_hw+ -.001*min_in_class*min_hw +rnorm(3*n, 0,10)
df<-data.frame(class=as.factor(class), min_in_class, min_hw, student_id=as.factor(student_id), performance)
library(reshape2)
melted<-melt(df, id.vars=c('student_id', 'class'))
casted<-dcast(melted, student_id~class+variable)
casted$score<-rowMeans(casted[,c(4,7,10)],na.rm=T)+rnorm(nrow(casted),0,5)
df$score<-casted$score[match(df$student_id, casted$student_id)]
```
I thought what you needed trying to do was this:
```
fit<-lmer(score~1+min_in_class+min_hw+(1|class)+(1+min_in_class+min_hw|student_id), data=df)
```
I ran it with various class sizes and pools and didn't get the results I was expecting; but perhaps with more than a few classes, things will look better.
| null | CC BY-SA 3.0 | null | 2011-06-09T20:14:53.080 | 2011-06-09T22:39:42.577 | 2011-06-09T22:39:42.577 | 1893 | 1893 | null |
11775 | 2 | null | 11634 | 3 | null | In some sense this depends on what you mean by $x$ and $\delta x$. Usually people mean that they are modeling $X$ as a random variable with mean $x$ and variance $(\delta x)^2$. Sometimes they mean the stronger condition that $X$ is actually Gaussian, and sometimes they have a broader meaning in which $x$ and $\delta x$ can possibly be other measures of the center and the spread.
A bit of calculus and handwaving shows that for small variations that are also approximately Gaussian, with $X$ and $Y$ independent, $f(X, Y)$ can be approximately described as having mean $f(x, y)$, and $(\delta f)^2 = (\delta x)^2 (\frac{\partial f}{\partial x})^2 + (\delta y)^2 (\frac{\partial f}{\partial y})^2$.
We can do the same thing for $a(m, r) = m /r$, where $a$ is the calculated age, $m$ is the mass, and $r$ is the rate.
$$
\begin{align*}
(\delta a)^2 &= (\delta m)^2 / r^2 + (\delta r)^2 m^2 / r^4 \\
a^2 &= m^2 / r^2 \\
(\delta a)^2/a^2 &= (\delta m)^2/m^2 + (\delta r)^2 / r^2 \\
(\delta a)/a &= \sqrt{(\delta m)^2/m^2 + (\delta r)^2 / r^2} \\
\end{align*}
$$
This matches the formula you have. You just have to convert between absolute errors and relative errors to be able to use it.
*EDIT*ed to add (incorporating comments): To convert the sedimentation rate to relative error, just use $(\delta r)/r = 10\% = 0.1$. You need to find the $\delta m$ = [standard error for the mean](http://en.wikipedia.org/wiki/Standard_error_(statistics)#Standard_error_of_the_mean). It's not clear whether you have $\delta m_i$ for each individual sediment core measurement. If you do, you want to find $m$ with a weighted mean, and calculating the standard error is a bit tricky, but the prescription given above for general $f$ extends fine to three arguments. If not, the standard mean can be used and the variance in the sample can be used to calculate the standard error of the mean. The relative error is of course just $(\delta m)/m$.
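As an illustrative check (a Python sketch with made-up numbers, not the asker's actual data), the relative-error formula can be compared against a direct Monte Carlo simulation of $a = m/r$:

```python
import numpy as np

def relative_age_error(rel_err_mass, rel_err_rate):
    # (delta a)/a = sqrt((delta m / m)^2 + (delta r / r)^2) for a = m / r
    return np.hypot(rel_err_mass, rel_err_rate)

rng = np.random.default_rng(0)
m, r = 100.0, 2.0                # hypothetical mass and sedimentation rate
dm_rel, dr_rel = 0.03, 0.10      # 3% and 10% relative errors
masses = rng.normal(m, dm_rel * m, 200_000)
rates = rng.normal(r, dr_rel * r, 200_000)
ages = masses / rates
mc_rel = ages.std() / ages.mean()              # simulated relative error
analytic = relative_age_error(dm_rel, dr_rel)  # about 0.104
```

The simulated relative error agrees with the quadrature formula to within the small higher-order corrections expected for 10% variations.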
| null | CC BY-SA 3.0 | null | 2011-06-09T20:23:13.360 | 2011-06-17T23:04:41.153 | 2011-06-17T23:04:41.153 | 4925 | 4925 | null |
11776 | 2 | null | 11428 | 3 | null | Yes, Gary Becker discusses this at length famously in "Crime and Punishment: An Economic Approach". You can find it at
```
http://www.nber.org/chapters/c3625.pdf
```
and in his Nobel lecture section on crime at
```
http://faculty.smu.edu/millimet/classes/eco4361/readings/quantity%20section/becker.pdf
```
Typically, any model of the level of a fine will be intrinsically married to the probability of detection, as in:
```
http://140.247.200.140/faculty/shavell/pdf/81_Amer_Econ_Rev_618.pdf
```
More recently, Harold Winter wrote a book in 2008 titled The Economics of Crime: An Introduction to Rational Crime Analysis. There is a chapter on setting fines, and one can straightforwardly form econometric models from the framework provided.
An example of a rather elementary game theoretic approach is here:
```
http://bal.buu.ac.th/bal2010/sites/default/files/Research%20report%202008.07.pdf
```
| null | CC BY-SA 3.0 | null | 2011-06-09T20:38:13.227 | 2011-06-09T20:38:13.227 | null | null | 1893 | null |
11777 | 2 | null | 11595 | 15 | null | An offset model is modeling goals per game, as one can see here:
```
log(goals/games) = a+bx
```
is equivalent to
```
log(goals) -log(games) = a+bx
```
is equivalent to
```
log(goals)= a+bx +log(games) <-this is an offset model, assumes coef on the last term =1
```
See slide 35 here:
[http://www.ed.uiuc.edu/courses/EdPsy490AT/lectures/4glm3-ha-online.pdf](http://www.ed.uiuc.edu/courses/EdPsy490AT/lectures/4glm3-ha-online.pdf)
If you think a+bx is related to the log ratio of goals to games (the rate), use an offset. If you think there is a more complicated game effect, perhaps from accumulating experience, do not. For more discussion, see this: [http://ezinearticles.com/?The-Exposure-and-Offset-Variables-in-Poisson-Regression-Models&id=2155811](http://ezinearticles.com/?The-Exposure-and-Offset-Variables-in-Poisson-Regression-Models&id=2155811)
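To make the offset idea concrete, here is a hedged sketch (in Python with a hand-rolled Newton–Raphson fit and simulated data; in R the equivalent would be `glm(goals ~ x + offset(log(games)), family = poisson)`) of fitting `log(goals) = a + bx + log(games)` with the coefficient on `log(games)` fixed at 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
games = rng.integers(1, 80, n)             # exposure: games played
x = rng.normal(size=n)
a_true, b_true = 0.5, 0.3
goals = rng.poisson(games * np.exp(a_true + b_true * x))

# Poisson regression with log(games) as an offset:
#   log E[goals] = a + b*x + log(games)
X = np.column_stack([np.ones(n), x])
offset = np.log(games)
beta = np.zeros(2)
for _ in range(25):                        # Newton-Raphson on the log-likelihood
    mu = np.exp(X @ beta + offset)
    grad = X.T @ (goals - mu)              # score
    hess = X.T @ (X * mu[:, None])         # Fisher information
    beta += np.linalg.solve(hess, grad)
```

With the offset in place, the fit recovers the true rate parameters `a` and `b`, exactly as the derivation above implies.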
| null | CC BY-SA 3.0 | null | 2011-06-09T20:48:07.277 | 2014-08-31T21:32:50.907 | 2014-08-31T21:32:50.907 | 1970 | 1893 | null |
11778 | 2 | null | 11754 | 1 | null | How about just writing out the likelihood function and maximizing?
| null | CC BY-SA 3.0 | null | 2011-06-09T20:52:53.827 | 2011-06-09T20:52:53.827 | null | null | 3601 | null |
11779 | 2 | null | 11745 | 4 | null | There is a simple interpretation of the above grouping property. Suppose your alphabet is $A, B, C, \ldots$ where the letters have frequency $p_1, p_2, p_3, \ldots$ Now let $S$ be a random sequence of large length in your alphabet. Introduce a modified alphabet in which the letters $A$ and $B$ are merged into a new letter, $\alpha$. Thus $\alpha$ has frequency $p_1 + p_2$. Now let $S'$ be a random sequence of large length in the modified alphabet. The grouping property stipulates that the entropy $H(S)$ be equal to the entropy $H(S')$ plus the conditional entropy of predicting whether $\alpha$ corresponded originally to $A$ or $B$ in the old alphabet.
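A quick numeric check of this grouping property (a Python sketch with arbitrary frequencies):

```python
import math

def H(ps):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

# alphabet A, B, C with frequencies p1, p2, p3
p1, p2, p3 = 0.2, 0.3, 0.5
lhs = H([p1, p2, p3])

# merge A and B into alpha, then add the conditional entropy of
# recovering A vs B given alpha (weighted by alpha's frequency)
pa = p1 + p2
rhs = H([pa, p3]) + pa * H([p1 / pa, p2 / pa])
```

The two sides agree to floating-point precision, which is exactly the identity $H(p_1,p_2,p_3) = H(p_1+p_2,\,p_3) + (p_1+p_2)\,H\!\left(\tfrac{p_1}{p_1+p_2},\tfrac{p_2}{p_1+p_2}\right)$.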
| null | CC BY-SA 3.0 | null | 2011-06-09T21:09:41.120 | 2011-06-09T21:09:41.120 | null | null | 3567 | null |
11780 | 2 | null | 11769 | 6 | null | There are non-Bayesian systems or philosophies of probability -- Baconian & Pascalian, e.g. If you are into epistemology & philosophy of science you might enjoy the debates--otherwise, you'll shake your head & conclude that in fact the Bayesian interpretation is all there is.
For good discussions,
- Cohen, L.J. An Introduction to the Philosophy of Induction and Probability (Clarendon Press; Oxford University Press, Oxford/New York, 1989)
- Schum, D.A. The Evidential Foundations of Probabilistic Reasoning (Wiley, New York, 1994)
| null | CC BY-SA 3.0 | null | 2011-06-09T21:10:18.623 | 2011-06-09T21:10:18.623 | null | null | 11954 | null |
11781 | 2 | null | 11609 | 1 | null | If I say the probability that the Knicks scored between xbar - 2sd(x) and xbar + 2sd(x) in some given past game is about .95, that is a reasonable statement given some particular assumption about the distribution of basketball scores. If I gather scores from some sample of games and calculate that interval, the probability that they scored in that interval on some given day in the past is clearly zero or one, and you can google the game result to find out. To the frequentist, the only sense in which it keeps a probability other than zero or one comes from repeated sampling; the realization of the interval estimate from a particular sample is the magic point at which either it happened or it didn't. It isn't the point where you type in the password; it is the point where you decide to take a single sample that you lose the continuity of possible probabilities.
This is what Dikran argues above, and I have voted up his answer. The point at which repeated samples are out of consideration is the point in the frequentist paradigm where a non-degenerate probability becomes unobtainable: not when you type in the password as in your example above, or when you google the result in my example of the Knicks game, but the point when your number of samples is 1.
| null | CC BY-SA 3.0 | null | 2011-06-09T21:29:55.963 | 2011-06-09T22:52:53.693 | 2011-06-09T22:52:53.693 | 1893 | 1893 | null |
11782 | 2 | null | 643 | 2 | null | In my experience the CLT is less useful than it appears. One never knows in the middle of a project whether n is large enough for the approximation to be adequate to the task. And for statistical testing, the CLT helps you protect the type I error but does little to keep the type II error at bay. For example, the t-test can have arbitrarily low power for large n when the data distribution is extremely skewed.
| null | CC BY-SA 3.0 | null | 2011-06-09T21:40:25.677 | 2011-06-09T21:40:25.677 | null | null | 4253 | null |
11783 | 2 | null | 11609 | 6 | null | I'll throw in my two cents (maybe redigesting some of the former answers). To a frequentist, the confidence interval itself is in essence a two-dimensional random variable: if you would redo the experiment a gazillion times, the confidence interval you would estimate (i.e.: calculate from your newly found data each time) would differ each time. As such, the two boundaries of the interval are random variables.
A 95% CI, then, means nothing more than the assurance (given all your assumptions leading to this CI are correct) that this set of random variables will contain the true value (a very frequentist expression) in 95% of the cases.
You can easily calculate the confidence interval for the mean of 100 draws from a standard normal distribution. Then, if you draw 10000 times 100 values from that standard normal distribution, and each time calculate the confidence interval for the mean, you will indeed see that 0 is in there about 9500 times.
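That experiment can be sketched directly (in Python here; using the normal-approximation interval with z = 1.96, so coverage comes out very slightly under 95%):

```python
import numpy as np

rng = np.random.default_rng(42)
reps, n, z = 10_000, 100, 1.96
hits = 0
for _ in range(reps):
    x = rng.normal(size=n)                       # true mean is 0
    half = z * x.std(ddof=1) / np.sqrt(n)        # half-width of the CI
    hits += (x.mean() - half) <= 0 <= (x.mean() + half)
coverage = hits / reps                           # close to 0.95
```

Over the 10,000 replications, the interval, viewed as a random variable, contains the true mean of 0 in roughly 9,500 cases, even though each individual realized interval either contains it or does not.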
The fact that you have created a confidence interval just once (from your actual data) does indeed reduce the probability of the true value being in that interval to either 0 or 1, but it doesn't change the probability of the confidence interval as a random variable to contain the true value.
So, bottom line: the probability of any (i.e. on average) 95% confidence interval containing the true value (95%) doesn't change, and neither does the probability of a particular interval (CI or whatever) containing the true value (0 or 1). The probability of the interval the computer knows but you don't is actually 0 or 1 (because it is a particular interval), but since you don't know it (and, in frequentist fashion, are unable to recalculate this same interval infinitely many times from the same data), all you have to go on is the probability of any interval.
| null | CC BY-SA 3.0 | null | 2011-06-09T21:58:35.820 | 2011-06-09T22:20:34.857 | 2011-06-09T22:20:34.857 | 4257 | 4257 | null |
11784 | 2 | null | 11769 | 7 | null | Take a look at [this paper](http://www.stat.columbia.edu/~gelman/research/unpublished/philosophy.pdf) by Cosma Shalizi and Andrew Gelman about philosophy and Bayesianism. Gelman is a prominent Bayesian and Shalizi a frequentist!
Take a look also at [this short criticism](http://cscs.umich.edu/~crshalizi/weblog/664.html) by Shalizi, where he points the necessity of model checking and mock the dutch book argument used by some Bayesians.
And last, but not least, I think that, since you are a physicist, you may like [this text](http://www.scottaaronson.com/democritus/lec15.html), where the author points to “computational learning theory” (about which I frankly know nothing at all), which could be an alternative to Bayesianism, as far as I can understand it (not much).
ps.: If you follow the links, especially the last one, and have an opinion about the text (and the [discussions that followed the text at the blog of the author](http://yolanda3.dynalias.org/tsm/ae_bayes.html)), I'd be curious to hear it.
ps.2: My own take on this: Forget about the issue of objective vs subjective probability, the likelihood principle and the argument about the necessity of being coherent. Bayesian methods are good when they allow you to model your problem well (for instance, using a prior to induce unimodal posterior when there is a bimodal likelihood etc.) and the same is true for frequentist methods. Also, forget about the stuff about the problems with p-value. I mean, p-value sucks, but in the end they are a measure of uncertainty, in the spirit of how Fisher thought of it.
| null | CC BY-SA 4.0 | null | 2011-06-09T22:46:29.547 | 2019-10-31T10:44:42.600 | 2019-10-31T10:44:42.600 | 11887 | 3058 | null |
11785 | 2 | null | 11657 | 5 | null | At a very high-level view, latent topics are formed from words that often appear together in the same documents.
Your examples don't have a clear set of topics, so let's use the following documents instead:
```
Doc1: After I eat my breakfast of apples, oranges, bananas, and grapes, I'm going to go snowboarding in the Alps if it's not too cold outside.
Doc2: Apples, oranges, bananas, and grapes make good smoothies.
Doc3: Apples, oranges, bananas, and grapes are tasty fruits.
Doc4: Snowboarding in the Alps is a lot of fun, but cold.
Doc5: My boyfriend lives in the Alps, where he teaches snowboarding.
```
Suppose we say there are two latent topics that we want to discover. The topics that we discover are likely to be:
- Topic 1 (the "fruit" topic): represented most strongly by apples, oranges, bananas, grapes.
- Topic 2 (the "Alps" topic): represented most strongly by Alps, snowboarding, cold.
Doc 1 is then about an equal mix of topic 1 and topic 2, docs 2-3 are mostly topic 1, docs 4-5 are mostly topic 2.
Here's an interesting example of latent dirichlet allocation applied to the WikiLeaks CableGate: [http://idea.ed.ac.uk/topics/cables/browser/cables.html](http://idea.ed.ac.uk/topics/cables/browser/cables.html) (The set of topics are on the left.)
Also, I wasn't sure if you wanted a high-level view or a more technical algorithmic explanation, so if it's the latter you were looking for, just say so and I can add a more technical explanation.
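For readers who want to try this on the toy documents above, here is a sketch using scikit-learn's LDA implementation (assuming scikit-learn is installed; with so little data the exact topics recovered can vary from run to run):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "apples oranges bananas grapes snowboarding alps cold",  # mix of both topics
    "apples oranges bananas grapes smoothies",
    "apples oranges bananas grapes tasty fruits",
    "snowboarding alps fun cold",
    "boyfriend alps snowboarding teaches",
]
X = CountVectorizer().fit_transform(docs)              # word-count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)  # one row per document; each row sums to 1
```

Each row of `doc_topics` is a document's mixture over the two latent topics, mirroring the description above (doc 1 a mix, docs 2-3 mostly one topic, docs 4-5 mostly the other).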
| null | CC BY-SA 3.0 | null | 2011-06-09T23:16:07.517 | 2011-06-09T23:16:07.517 | null | null | 1106 | null |
11787 | 2 | null | 11754 | 1 | null | Have you tried to use Bugs or Jags, calling one of them from R? The model you seem to be estimating is a simple varying slope model, with predictors at the second level.
I'd rewrite your model as:
Let $i = 1, \ldots, n$ index students and $k = 1, \ldots, K$ index classes. Assuming your data are in student-class form (i.e. repeated measures), your model is:
$y_{i} \sim N(\beta_{[k]}\,x_{1,i} + \delta_{[k]}\,x_{2,i} + \ldots,\ \sigma^{2})$
$\beta_{[k]} \sim N(\gamma_{1}\,Z_{1,k},\ \sigma_{\beta 1}^{2})$
...
This model is quite easy to estimate using Bugs or JAGS, and you can call them from R with the rjags or bugs functions; they're in the R2jags package, and [here](http://www.stat.columbia.edu/~gelman/bugsR/runningbugs.html) is a simple example of fitting a multilevel model (with WinBUGS) in R.
| null | CC BY-SA 3.0 | null | 2011-06-09T23:28:16.390 | 2011-06-09T23:28:16.390 | null | null | 3058 | null |
11788 | 1 | 11838 | null | 4 | 1311 | I have compiled a very small set of summary data from the literature, and I wish to compare the variances between aspects of the literature-based data, and to some of my own data. The summary data includes the mean, standard deviation and sample size.
In earlier tests, I compared the variances of one continuous dependent variable among 2 age classes and 2 years. I used the Fligner-Killeen test since I have extreme values in the data and I'm not sure it was normal (can't remember now!). I followed up this broad test with pairwise multiple comparisons using F-tests in R `var.test(x~age)`
What would be a good way to compare the variances of the literature-sourced data? I've searched through the help files in R, and came up with this method, which I believe generates a random set of numbers with the specified sample size, mean and standard deviation, and then I used those datasets to conduct the F-test:
```
herring_year1<-rnorm(10,mean=10.5,sd=0.51)
herring_year2<-rnorm(15,mean=10.9,sd=0.43)
var.test(herring_year1,herring_year2)
```
Would this be a good approach? If not, can you suggest what might be?
If so, how can I then compare these variances to my own data set? Should I essentially use the summary data from my own set in the same manner? Or generate the random data for the summary data from the literature and stick it in a file to compare to my raw data?
Also, do I need a broad test initially, or can I go straight to the pairwise comparisons?
| How to compare the variance from published summary statistics with own data? | CC BY-SA 3.0 | null | 2011-06-10T01:31:52.463 | 2011-06-11T21:40:33.070 | 2011-06-11T20:47:27.090 | 4238 | 4238 | [
"r",
"variance",
"descriptive-statistics"
] |
11789 | 2 | null | 11769 | 7 | null | "Bayesian" and "frequentist" aren't "probabilistic philosophies". They're schools of statistical thought and practice concerned mainly with quantifying uncertainty and making decisions, although they're often associated with particular interpretations of probability. Probably the most common perception, although it is incomplete, is that of probability as subjective quantification of belief versus probabilities as long-run frequencies. But even these aren't really mutually exclusive. And you may not be aware of this but there are avowed Bayesians who don't agree on particular philosophical issues about probability.
Bayesian statistics and frequentist statistics aren't orthogonal either. It seems like "frequentist" has come to mean "not Bayesian" but that's incorrect. For example, it's perfectly reasonable to ask questions about the properties of Bayesian estimators and confidence intervals under repeated sampling. It's a false dichotomy perpetuated at least in part by a lack of a common definition of the terms Bayesian and frequentist (we statisticians have no one to blame but ourselves for that).
For an amusing, pointed and thoughtful discussion I would suggest Gelman's "Objections to Bayesian Statistics", the comments, and the rejoinder, available here:
[http://ba.stat.cmu.edu/vol03is03.php](http://ba.stat.cmu.edu/vol03is03.php)
There is even some discussion about confidence intervals in physics IIRC. For more in-depth discussions you could walk back through the references therein. If you want to understand the principles behind Bayesian inference, I would suggest Bernando & Smith's book but there are many, many other good references.
| null | CC BY-SA 3.0 | null | 2011-06-10T02:12:09.267 | 2011-06-10T02:12:09.267 | null | null | 26 | null |
11790 | 2 | null | 11764 | 12 | null | Neural networks are sometimes called "differentiable function approximators". So what you can do is to differentiate any unit with respect to any other unit to see what their relationshsip is.
You can check how sensitive the error of the network is wrt to a specific input as well with this.
Then, there is something called "receptive fields", which is just the visualization of the connections going into a hidden unit. This makes it easy to understand what particular units do for image data, for example. This can be done for higher levels as well. See [Visualizing Higher-Level Features of a Deep Network](http://www.iro.umontreal.ca/~lisa/publications2/index.php/publications/show/247).
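As a minimal sketch of "differentiating one unit with respect to another" (a tiny fixed one-hidden-layer network in Python; the weights are arbitrary), the analytic gradient of the output with respect to each input can be checked against finite differences:

```python
import numpy as np

# A tiny fixed 2-input, 2-hidden-unit, 1-output network.
W1 = np.array([[0.5, -1.0], [2.0, 0.3]])
w2 = np.array([1.0, -0.5])

def net(x):
    return w2 @ np.tanh(W1 @ x)

def grad(x):
    # chain rule: d/dx [w2 . tanh(W1 x)] = W1^T (w2 * (1 - tanh^2(W1 x)))
    h = np.tanh(W1 @ x)
    return W1.T @ (w2 * (1 - h**2))

x0 = np.array([0.1, 0.2])
g = grad(x0)                 # sensitivity of the output to each input

# finite-difference check of the same sensitivities
eps = 1e-6
fd = np.array([(net(x0 + eps * e) - net(x0 - eps * e)) / (2 * eps)
               for e in np.eye(2)])
```

The magnitude of each entry of `g` tells you how sensitive the output is to that input near `x0`, which is the per-unit relationship described above.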
| null | CC BY-SA 3.0 | null | 2011-06-10T06:29:05.517 | 2011-06-10T21:56:42.070 | 2011-06-10T21:56:42.070 | 2860 | 2860 | null |
11791 | 2 | null | 11768 | 2 | null | There is no definitive answer to your second question, since all the methods in statistics are dedicated to developing distributions that fit empirical data. So the "best practice" would be finding the appropriate statistical model, which might have generated the data.
| null | CC BY-SA 3.0 | null | 2011-06-10T06:45:00.600 | 2011-06-10T06:45:00.600 | null | null | 2116 | null |
11793 | 2 | null | 11769 | 6 | null | For me, the important thing about Bayesianism is that it regards probability as having the same meaning we apply intuitively in everyday life, namely the degree of plausibility of the truth of a proposition. Very few of us really use probability to mean strictly a long run frequency in everyday use, if only because we are often interested in particular events that have no long run frequency, for example what is the probability that fossil fuel emissions are causing significant climate change? For this reason, Bayesian statistics are much less prone to misinterpretation than frequentist statistics.
Bayesianism also has marginalisation, priors, maxent, transformation groups etc. that all have their uses, but for me the key benefit is that the definition of probability is more appropriate for the kinds of problems I want to address.
That doesn't make Bayesian statistcs better than frequentist statistics. It seems to me that frequentist statistics are well suited to problems in quality control (where you do have repeated sampling from populations) or where you have designed experiments, rather than analysis of pre-collected data (although that lies rather beyond my expertise, so it is just intuition).
As an engineer, it is a matter of "horses for courses" and I have both sets of tools in my toolbox and I use both on a regular basis.
| null | CC BY-SA 3.0 | null | 2011-06-10T07:08:17.753 | 2011-06-10T07:08:17.753 | null | null | 887 | null |
11794 | 1 | null | null | 2 | 1213 | I need to perform a computation of reliability of a 5-point Likert scale having 6 items. From a factor analysis I found that my scale is a multidimensional scale (3 factors), so I cannot use Cronbach's alpha to compute reliability. I have seen in several papers that it is possible to use the multidimensional extension of the McDonald's omega. Does the `omega` function in the psych package allow to do this?
Also, is there an R function to compute the Stratified alpha?
| How to compute multidimensional omega with R | CC BY-SA 3.0 | null | 2011-06-10T08:16:34.113 | 2015-12-15T04:22:41.220 | 2011-06-10T09:21:19.067 | 2116 | 4903 | [
"r",
"reliability",
"likert"
] |
11795 | 1 | 11796 | null | 2 | 6930 | If
$$E[f(x)]=0$$
can we derive that
$$E[f'(x)]=0?$$
For example, $f(x)$ is some noise with zero mean and a Gaussian distribution.
| Is it possible to differentiate in expectation? | CC BY-SA 3.0 | null | 2011-06-10T08:31:26.720 | 2011-06-10T17:56:54.840 | 2011-06-10T09:11:53.237 | 2116 | 4898 | [
"distributions",
"expected-value"
] |
11796 | 2 | null | 11795 | 9 | null | With your definitions, no. Suppose we have a random variable $X$; you are asking whether it is possible to derive
$$Ef'(X)=0$$
from
$$Ef(X)=0.$$
Take $f(x)=x$. Then $Ef(X)=EX=0$ and this means that variable $X$ has zero mean. Now $f'(x)=1$, and
$$Ef'(X)=E[1]=1,$$
hence the original statement does not hold for all functions $f$.
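A quick simulation of the counterexample (a Python sketch with $X$ standard normal and $f(x)=x$):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1_000_000)    # X is zero-mean Gaussian noise
ef = x.mean()                     # estimate of E[f(X)] with f(x) = x, near 0
efprime = np.ones_like(x).mean()  # f'(x) = 1, so E[f'(X)] = 1 exactly
```

The sample mean of $f(X)$ is essentially zero while the mean of $f'(X)$ is exactly one, confirming that $E[f(X)]=0$ does not imply $E[f'(X)]=0$.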
| null | CC BY-SA 3.0 | null | 2011-06-10T09:18:19.240 | 2011-06-10T09:18:19.240 | null | null | 2116 | null |
11797 | 1 | 11841 | null | 3 | 872 | I have built an unrestricted co-occurrence network of words from a songs corpus. To convert it to a restricted network, Ramon Ferrer Cancho and Ricard V. Sole describe the following approach in their paper [The small world of human language](http://complex.upf.es/~ricard/SWPRS.pdf):
>
The technique can be improved by choosing only pairs of consecutive words, the mutual co-occurrence of which is larger than expected by chance. This can be measured with the condition $p_{ij} > p_i*p_j$, which defines the presence of correlations beyond that expected from a random ordering of words. If a pair of words co-occurs less than expected when independence between such words is assumed, the pair is considered to be spurious. Graphs in which this condition is used will be called restricted (unrestricted otherwise).
Word pairs are considered as co-occurring if they occur within a maximum distance of 2 within a song. Here, $i$ and $j$ are the two words; $p_{ij}$ is the probability that the two words co-occur. If $i$ and $j$ are statistically independent, then the probability that they co-occur is given by the product $p_{i}\cdot p_{j}$. If they are not independent, and they have a tendency to co-occur, then $p_{ij}$ will be greater than $p_{i}\cdot p_{j}$.
I am confused about how to calculate $p_{ij}$, $p_{i}$, and $p_{j}$. For example, to calculate $p_{i}$, can I take the count of all occurrences of $i$ divided by the total number of songs in my corpus? Alternatively, $p_{i}$ could be calculated by dividing the count of all occurrences of $i$ by the total number of words in the corpus. Which of the two would be the right approach?
Similarly, to calculate $p_{ij}$, can I take the count of co-occurences of $i$ and $j$ divided by total number of words or total number of possible co-occurring word pairs?
EDIT: Moreover, if $i$ and $j$ are statistically independent, shouldn't the probability that they co-occur be given by $4\,p_{i}\cdot p_{j}$, since word pairs are considered as co-occurring if they occur within a maximum distance of 2 within a song?
| How to convert an unrestricted co-occurrence network to a restricted one? | CC BY-SA 3.0 | null | 2011-06-10T10:00:13.303 | 2011-06-12T00:19:51.727 | 2011-06-11T08:11:25.157 | 4966 | 4966 | [
"text-mining",
"networks"
] |