Id: stringlengths, 1–6
PostTypeId: stringclasses, 7 values
AcceptedAnswerId: stringlengths, 1–6
ParentId: stringlengths, 1–6
Score: stringlengths, 1–4
ViewCount: stringlengths, 1–7
Body: stringlengths, 0–38.7k
Title: stringlengths, 15–150
ContentLicense: stringclasses, 3 values
FavoriteCount: stringclasses, 3 values
CreationDate: stringlengths, 23–23
LastActivityDate: stringlengths, 23–23
LastEditDate: stringlengths, 23–23
LastEditorUserId: stringlengths, 1–6
OwnerUserId: stringlengths, 1–6
Tags: list
11798
2
null
11795
3
null
I agree with Mpiktas. Way 1 to think about it: in general, $E(f(X)) = \int f(x)p(x)\,dx$ while $E(f'(X)) = \int \frac{df(x)}{dx}\,p(x)\,dx$. Thinking of it mathematically, a $d/dx$ operator comes inside the integral and cancels part of the integral's effect, so it makes sense that the two are not equal. Way 2: if the integral is zero, then $f(1)p(1)+ f(2)p(2)+ \cdots = 0$, meaning that the function is rising and falling; the slope of that function will not rise and fall in the same way.
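A quick Monte Carlo illustration (my own sketch, assuming $f(x)=x^2$ and $X \sim N(0,1)$, so $E[f(X)]=1$ while $E[f'(X)]=E[2X]=0$):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)

# E[f(X)] with f(x) = x^2: equals Var(X) = 1 for a standard normal
e_f = np.mean(x ** 2)
# E[f'(X)] with f'(x) = 2x: equals 2*E[X] = 0
e_fprime = np.mean(2 * x)

print(e_f, e_fprime)  # roughly 1.0 and 0.0: the two expectations clearly differ
```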
null
CC BY-SA 3.0
null
2011-06-10T10:10:07.917
2011-06-10T17:56:54.840
2011-06-10T17:56:54.840
2902
1763
null
11799
1
null
null
0
200
I want to derive the equation $\dot P = K'P+PK'^T + Cov(Kv-Gw)$ from the system $\dot x=Fx+Gw \ \ \ \ \ \ w \sim N(0,Q)$ $z=Hx+v \ \ \ \ \ \ v \sim N(0,R)$ given $\dot {\hat x}=K'\hat x+Kz$ $K'=F-KH$ $\dot {\hat x}=F\hat x+K(z-H\hat x)$ $d\tilde x/dt = K'\tilde x + Kv -Gw$ $\tilde x = \hat x - x$ $P(t)=E[\tilde x\tilde x^T]$ where $K'$ and $K$ are to be chosen to optimize the estimate $\hat x$. Here is how I tried (and failed) to show this: $P(t)=E[\tilde x\tilde x^T]$ $\dot P(t)=E[(K'\tilde x+Kv-Gw)\tilde x^T+\tilde x(K'\tilde x+Kv-Gw)^T]$ $=E[(K'\tilde x)\tilde x^T+\tilde x(K'\tilde x)^T]+E[(Kv-Gw)\tilde x^T+\tilde x(Kv-Gw)^T]$ $=K'P+PK'^T+E[(Kv-Gw)\tilde x^T+\tilde x(Kv-Gw)^T]$ I cannot figure out how to proceed further. Any help will be appreciated.
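As a numeric sanity check on the covariance term above (a scalar sketch with illustrative values of my own for $K$, $G$, $Q$, $R$; it assumes $v$ and $w$ are independent, in which case $\operatorname{Cov}(Kv-Gw)=KRK^T+GQG^T$):

```python
import numpy as np

rng = np.random.default_rng(1)
K, G = 0.8, 1.5   # illustrative gains (scalars for simplicity)
R, Q = 2.0, 0.5   # measurement and process noise variances

v = rng.normal(0.0, np.sqrt(R), 500_000)
w = rng.normal(0.0, np.sqrt(Q), 500_000)

sample_cov = np.var(K * v - G * w)   # Monte Carlo Cov(Kv - Gw)
theory_cov = K**2 * R + G**2 * Q     # K R K^T + G Q G^T in the scalar case

print(sample_cov, theory_cov)
```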
How to derive $P$ in continuous Kalman filter?
CC BY-SA 3.0
null
2011-06-10T10:54:07.903
2011-06-12T14:50:20.747
2011-06-12T14:50:20.747
null
4898
[ "kalman-filter" ]
11800
1
11804
null
4
2975
Say we want to perform a logistic regression analysis (although my question pertains to regressions in general) on sports results to determine the effects of various factors on who wins and who loses. We have the background information we want on the teams and players and now just need a random sample. So we decide to take the published results over the past couple of years as our sample. The sample we collect is in the following form: Result, Team 1, Team 2, ... The result is always 0-1 or 1-0 (no draws). We can start preparing the data by converting Result into a binary variable: Result = 1 if Team 1 wins, = 0 if Team 2 wins. The problem is that this doesn't give us a valid regression. The reason will take a bit of explaining. Say one of our observations is: Result = 1; Team 1 = Man.U.; Team 2 = Chelsea This observation can be rewritten: Result = 0; Team 1 = Chelsea; Team 2 = Man.U. And it is the exact same observation and all the information is still the same and perfectly correct. And this actually changes the results of our regression! One quick way to prove this is to consider what happens if we rewrite all of the observations so that Team 1 always wins. Then our dependent variable will always be Result = 1. Thus Var(Result) = 0 and the estimates for our parameters will all be 0 (except for the constant, of course). If we flip half of the observations so that half the time Result = 1 and half the time Result = 0 and we run the regression on that, we will get non-zero estimates for our parameters. This bothers me because we are regressing the same data but getting wildly different results based on the order the teams are written in. If our results can change based on the order we decided to put the teams down when recording our observations, then they can't be valid. So what is the best way to prepare this data for analysis so that we can get valid results?
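One common remedy worth noting (a sketch of my own, not part of the question: encode each game as the difference of the two teams' feature vectors and fit with no intercept; then swapping the teams negates the features and flips the label, and the logistic likelihood of the observation is unchanged, so the recording order no longer matters):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
w = rng.normal(size=4)       # any coefficient vector (no intercept)
team1 = rng.normal(size=4)   # hypothetical team feature vectors
team2 = rng.normal(size=4)

x = team1 - team2            # difference encoding of the matchup
# Likelihood contribution if team 1 wins (y = 1):
lik_original = sigmoid(w @ x)
# Same game written the other way round: features negated, label flipped (y = 0):
lik_flipped = 1.0 - sigmoid(w @ (-x))

print(lik_original, lik_flipped)  # agree to floating-point precision
```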
How should we convert sports results data to perform a valid logistic regression?
CC BY-SA 3.0
null
2011-06-10T12:36:21.297
2012-07-07T13:29:50.977
2012-07-07T13:29:50.977
2970
4968
[ "regression", "dataset", "logistic", "games" ]
11801
2
null
11800
1
null
I would be tempted to use a resampling approach, where in each iteration the presentation of each observation is chosen randomly. That way the data for each model are still i.i.d., and the uncertainty due to the presentation of the observations is taken into account by averaging over the resampled datasets. You can then look at the distribution of parameter values to get an idea of the importance of the explanatory variables.
null
CC BY-SA 3.0
null
2011-06-10T12:41:39.333
2011-06-10T12:41:39.333
null
null
887
null
11803
2
null
11800
4
null
Rather than logistic regression, I would consider trying the techniques in > Dixon, M.J. and S.G. Coles, 1997. Modelling Association Football Scores and Inefficiencies in the Football Betting Market. Applied Statistics. In this paper, they use Poisson regression to model football scores. Basically, the number of goals a team can score is modelled using a Poisson distribution, adjusted for: - a home advantage - an attack rating - a defence rating --- For non-British readers: football == soccer.
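A minimal simulation in the spirit of that model (a sketch with invented parameter values, not Dixon and Coles' fitted estimates): home goals are Poisson with rate $\exp(\text{home} + \text{attack}_1 - \text{defence}_2)$.

```python
import numpy as np

rng = np.random.default_rng(3)
home, att1, def2 = 0.3, 0.2, 0.1  # illustrative parameters, not fitted values

lam_home = np.exp(home + att1 - def2)    # home side's scoring rate
goals = rng.poisson(lam_home, 200_000)   # simulated home goals over many games

print(goals.mean(), lam_home)  # sample mean sits near the model rate
```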
null
CC BY-SA 3.0
null
2011-06-10T12:57:54.083
2011-06-10T12:57:54.083
null
null
8
null
11804
2
null
11800
3
null
A simple solution is to incorporate the hometown advantage (that is, if your data holds this info). This makes it possible to give a definite meaning to your outcome. So if you have that data, it'll likely be a better model and solves your problem: go there! Right now, your outcome's definition depends on the order, but your data doesn't. A possible solution (though I haven't checked this completely) would be to duplicate every record in your data, but change the order and the outcome (so for every observation, both representations are in your dataset), and then do a weighted logistic regression, giving every observation a weight of 1/2 (I think this correctly adjusts your variances, but I'd have to check). Another option is to change your outcome so it is not dependent on the order anymore (i.e. whether the alphabetically former team wins or not), or to always code your two teams in a steady order (i.e. ensure that the alphabetically former team is always in column 1). These things are bound to be a bit harder to interpret, though...
null
CC BY-SA 3.0
null
2011-06-10T13:05:39.187
2011-06-10T13:05:39.187
null
null
4257
null
11806
2
null
11749
2
null
The exponential distribution might be a good starting point for the waiting time between new posts. This would be equivalent to assuming a Poisson distributed number of posts in a given time period. There are some pretty strong assumptions behind a model like that, but it might make sense for your application.
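The equivalence is easy to check numerically (a sketch of my own with an arbitrary rate: exponential waiting times between posts imply Poisson counts in a fixed window):

```python
import numpy as np

rng = np.random.default_rng(4)
rate, window = 3.0, 10.0  # arbitrary: 3 posts per hour, 10-hour window

def count_in_window():
    """Count events in one window when gaps are exponential."""
    t, n = 0.0, 0
    while True:
        t += rng.exponential(1.0 / rate)  # waiting time to the next post
        if t > window:
            return n
        n += 1

counts = np.array([count_in_window() for _ in range(20_000)])
print(counts.mean(), rate * window)  # sample mean near the Poisson mean 30
```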
null
CC BY-SA 3.0
null
2011-06-10T16:21:24.647
2011-06-10T16:21:24.647
null
null
26
null
11807
1
null
null
11
1945
I'm working with some large data sets using the gbm package in R. Both my predictor matrix and my response vector are pretty sparse (i.e. most entries are zero). I was hoping to build decision trees using an algorithm that takes advantage of this sparseness, as was done [here](http://pubs.acs.org/doi/full/10.1021/ci9903049). In that paper, as in my situation, most items have only a few of the many possible features, so they were able to avoid a lot of wasted computation by assuming that their items lacked a given feature unless the data explicitly said otherwise. My hope is that I could get a similar speedup by using this sort of algorithm (and then wrapping a boosting algorithm around it to improve my predictive accuracy). Since they didn't seem to publish their code, I was wondering if there were any open-source packages or libraries (in any language) that are optimized for this case. Ideally, I'd like something that could take a sparse matrix directly from R's `Matrix` package, but I'll take what I can get. I've looked around and it seems like this sort of thing should be out there: - Chemists seem to run into this issue a lot (the paper I linked above was about learning to find new drug compounds), but the implementations I could find were either proprietary or highly specialized for chemical analysis. It's possible one of them could be re-purposed, though. - Document classification also seems to be an area where learning from sparse feature spaces is useful (most documents don't contain most words). For instance, there's an oblique reference to a sparse implementation of C4.5 (a CART-like algorithm) in this paper, but no code. - According to the mailing list, WEKA can accept sparse data, but unlike the method in the paper I linked above, WEKA isn't optimized to actually take advantage of it in terms of avoiding wasted CPU cycles. Thanks in advance!
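As an aside on data handling (a sketch of my own in Python rather than R, since SciPy's sparse formats play the role of R's `Matrix` classes; it is not one of the tree learners asked about, just an illustration of how little a sparse representation stores):

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(7)
dense = rng.random((2_000, 1_000))
dense[dense < 0.99] = 0.0     # about 99% of entries zero, like a sparse feature matrix

X = sparse.csr_matrix(dense)  # compressed sparse row storage
print(X.nnz, dense.size)      # stored non-zeros vs. total entries

# Algorithms written against this format only visit the non-zeros:
col_sums = np.asarray(X.sum(axis=0)).ravel()
```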
Are there any libraries available for CART-like methods using sparse predictors & responses?
CC BY-SA 3.0
null
2011-06-10T18:50:10.903
2014-12-26T23:17:13.293
2011-06-14T03:29:05.067
4862
4862
[ "r", "regression", "machine-learning", "classification", "cart" ]
11808
1
11819
null
12
7012
In a previous question, I inquired about fitting distributions to some non-Gaussian empirical data. It was suggested to me offline that I might try the assumption that the data are Gaussian and fit a Kalman filter first. Then, depending on the errors, decide if it is worth developing something fancier. That makes sense. So, with a nice set of time series data, I need to estimate several variables for a Kalman filter to run. (Sure, there is probably an R package somewhere, but I want to actually learn how to do this myself.)
How to estimate parameters for a Kalman filter
CC BY-SA 3.0
null
2011-06-10T19:55:41.443
2011-07-11T00:09:56.020
2011-06-10T20:09:21.307
2116
2566
[ "kalman-filter" ]
11809
1
null
null
7
408
When we perform a principal components analysis (PCA) on a multivariate data set we are interested in finding orthogonal components that explain maximal variance in the data set. We can form a biplot of the data using the scores and the loadings, and the locations of the sample points in the biplot are an approximation of the Euclidean distance between the samples. In PLS, we extract orthogonal components from a predictor data set that have maximal covariance with the response (vector or matrix). We also get scores and loadings as part of the analysis and can draw a biplot of these scores. What, if any, dissimilarity is represented by the Euclidean distances on the biplot between sample points? One of the reasons I ask is that with PCA, we can apply a transformation to the data prior to applying PCA such that the Euclidean distance between samples on the biplot approximates the Euclidean distance between samples in the transformed data, but in the untransformed data the distance represented is some other distance. For example, by applying the Hellinger transformation (rows are standardised by their row sum and then a square root transformation is applied) to the raw data, a PCA applied to the transformed data will reflect the Hellinger distances between the observations. I wonder if a similar principle might hold for PLS?
What, if any, dissimilarity is preserved in partial least squares (PLS)?
CC BY-SA 3.0
null
2011-06-10T20:03:31.707
2011-06-17T09:41:42.640
null
null
1390
[ "pca", "data-transformation", "partial-least-squares", "distance", "biplot" ]
11811
2
null
11808
1
null
The usual method is to use [Maximum Likelihood Estimation](http://en.wikipedia.org/wiki/Maximum_likelihood). Basically, you write down the likelihood function for your data and then run a standard optimizer (such as R's `optim`) to maximize it.
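A worked sketch of that recipe (my own toy example, in Python rather than R: a scalar local-level model whose noise variances are recovered by maximizing the Kalman prediction-error likelihood with `scipy.optimize.minimize`; the initialization at the first observation is a heuristic of mine):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n, Q_true, R_true = 400, 0.3, 1.0
x = np.cumsum(rng.normal(0, np.sqrt(Q_true), n))  # latent random walk
y = x + rng.normal(0, np.sqrt(R_true), n)         # noisy observations

def neg_loglik(params):
    """Negative prediction-error log-likelihood of a local-level model."""
    q, r = np.exp(params)   # optimize on the log scale to keep variances positive
    m, p = y[0], r          # rough start: first observation, variance r
    ll = 0.0
    for t in range(1, len(y)):
        p_pred = p + q           # predict: x_t = x_{t-1} + w_t
        f = p_pred + r           # innovation variance
        e = y[t] - m             # innovation
        ll += -0.5 * (np.log(2 * np.pi * f) + e * e / f)
        k = p_pred / f           # Kalman gain
        m = m + k * e            # update mean
        p = (1 - k) * p_pred     # update variance
    return -ll

res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
Q_hat, R_hat = np.exp(res.x)
print(Q_hat, R_hat)  # should land in the neighbourhood of 0.3 and 1.0
```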
null
CC BY-SA 3.0
null
2011-06-10T20:43:35.040
2011-06-10T20:43:35.040
null
null
1764
null
11812
1
11814
null
26
22551
I am using a ranksum test to compare the median of two samples ($n=120000$) and have found that they are significantly different with: `p = 1.12E-207`. Should I be suspicious of such a small $p$-value or should I attribute it to the high statistical power associated with having a very large sample? Is there any such thing as a suspiciously low $p$-value?
Sanity check: how low can a p-value go?
CC BY-SA 4.0
null
2011-06-10T21:04:04.857
2018-05-09T21:03:03.700
2018-05-09T19:52:46.167
28666
4054
[ "hypothesis-testing", "p-value", "sample-size", "statistical-power" ]
11814
2
null
11812
32
null
P-values on standard computers (using IEEE double precision floats) can get as low as approximately $10^{-303}$. These can be legitimately correct calculations when effect sizes are large and/or standard errors are low. Your value, if computed with a T or normal distribution, corresponds to an effect size of about 31 standard errors. Remembering that standard errors usually scale with the reciprocal square root of $n$, that reflects a difference of less than 0.09 standard deviations (assuming all samples are independent). In most applications, there would be nothing suspicious or unusual about such a difference. Interpreting such p-values is another matter. Viewing a number as small as $10^{-207}$ or even $10^{-10}$ as a probability is exceeding the bounds of reason, given all the ways in which reality is likely to deviate from the probability model that underpins this p-value calculation. A good choice is to report the p-value as being less than the smallest threshold you feel the model can reasonably support: often between $0.01$ and $0.0001$.
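The floating-point limits described here can be checked directly (a sketch of my own using SciPy's normal survival function; the exact underflow point depends on the implementation):

```python
from scipy.stats import norm

# Upper-tail p-value for a z-score of 31, roughly the effect size in the question
p31 = norm.sf(31)
print(p31)  # on the order of 1e-211: tiny, but still representable in a double

# Far enough out, the p-value underflows to exactly zero
print(norm.sf(50))
```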
null
CC BY-SA 3.0
null
2011-06-10T21:17:52.253
2011-06-10T21:17:52.253
null
null
919
null
11815
1
null
null
3
744
I have some data on the number of times each of my machines turned off (due to an error) in a particular time period. There are about 6 different classes of machines being used to construct a total population of 50 machines. I wanted to analyze the stability of the 6 classes of machine relative to each other. An acquaintance suggested that I could do some heavy-hitter analysis during a brief chat to determine if some machines are shutting down more than others. Can someone tell me how to systematically perform this analysis or if there is a formal name for this kind of analysis?
What is heavy hitter analysis?
CC BY-SA 3.0
null
2011-06-10T21:27:25.890
2011-10-03T23:32:09.577
null
null
2164
[ "clustering", "multivariate-analysis", "dataset", "large-data" ]
11816
2
null
11812
17
null
There is nothing suspicious -- extremely low p-values like yours are pretty common when sample sizes are large (as yours is for comparing medians). As whuber mentioned, normally such p-values are reported as being less than some threshold (e.g. <0.001). One thing to be careful about is that p-values only tell you whether the difference in medians is statistically significant. Whether the difference is significant enough in magnitude is something you will have to decide: e.g. for large sample sets, extremely small differences in means/medians can be statistically significant, but they might not mean very much.
null
CC BY-SA 3.0
null
2011-06-10T21:57:52.757
2011-06-10T21:57:52.757
null
null
2973
null
11817
2
null
11762
2
null
The concepts of slowly varying, regularly varying and second order regularly varying functions are used in extreme value statistics to provide regularity conditions on the behavior of the tail of a distribution function, so that theorems can be proved. They can be thought of as smoothness conditions on the tail at infinity. The concepts are crucial to extreme value statistics, where you need some assumptions about the tail of the distribution function. One could just assume that the distribution function was from a parametric class, a [Frechet distribution](http://en.wikipedia.org/wiki/Fr%C3%A9chet_distribution), for example, but this distribution may not fit the data, and one may only be interested in the tail of the distribution, in which case one would not want to make assumptions about the entire distribution function. Regular variation is used to give a semi-parametric class of distribution functions where we, for instance, know how the extremes behave according to the [Fisher-Tippett-Gnedenko Theorem](http://en.wikipedia.org/wiki/Fisher%E2%80%93Tippett%E2%80%93Gnedenko_theorem), which classifies distributions into what are called the domains of attraction of one of the three [extreme value distributions](http://en.wikipedia.org/wiki/Generalized_extreme_value_distribution). The tail index can then be estimated, the classical estimator being the Hill estimator. More regularity, like second order regular variation, comes into the picture when we want to prove distributional results about the estimators. These conditions are technical and somewhat difficult to comprehend from an intuitive point of view, but they assure that the tail of the distribution function is "sufficiently nice". A good book I can recommend is [Extreme Value Theory: An Introduction](http://books.google.com/books?id=catZCl17d7gC&printsec=frontcover&dq=extreme+value+analysis+an+introduction&hl=en&ei=Zp3yTe1mlL6wA-2FvboL&sa=X&oi=book_result&ct=result&resnum=2&ved=0CC8Q6AEwAQ#v=onepage&q=regular%20variation&f=false) by Laurens de Haan and Ana Ferreira.
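A bare-bones Hill estimator (a sketch of my own: exact Pareto data with tail index $\alpha = 2$, so the estimated extreme value index should sit near $1/\alpha = 0.5$):

```python
import numpy as np

rng = np.random.default_rng(6)
alpha = 2.0
x = 1.0 + rng.pareto(alpha, 50_000)  # Pareto(alpha) sample on [1, inf)

def hill(data, k):
    """Hill estimator of the extreme value index from the k largest order statistics."""
    xs = np.sort(data)[::-1]                       # descending order
    return np.mean(np.log(xs[:k])) - np.log(xs[k])

print(hill(x, 1_000))  # close to 1/alpha = 0.5 for exact Pareto data
```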
null
CC BY-SA 3.0
null
2011-06-10T23:35:23.300
2011-06-11T05:43:42.933
2011-06-11T05:43:42.933
4376
4376
null
11818
2
null
11707
57
null
You don't need normality. All you need is that $$s^2 = \frac{1}{n-1} \sum_{i=1}^n(x_i - \bar{x})^2$$ is an unbiased estimator of the variance $\sigma^2$. Then use that the square root function is strictly concave such that (by a strong form of [Jensen's inequality](http://en.wikipedia.org/wiki/Jensen%27s_inequality#Proofs)) $$E(\sqrt{s^2}) < \sqrt{E(s^2)} = \sigma$$ unless the distribution of $s^2$ is degenerate at $\sigma^2$.
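A quick numerical illustration of this inequality (a sketch of my own: $\sigma = 1$ and normal samples of size 5, for which the expected value of $s$ is known to be about $0.94$, strictly below $\sigma$):

```python
import numpy as np

rng = np.random.default_rng(8)
sigma = 1.0
samples = rng.normal(0, sigma, size=(100_000, 5))
s = samples.std(axis=1, ddof=1)  # square root of the unbiased variance, per sample

print(s.mean())  # noticeably below sigma = 1, as Jensen's inequality predicts
```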
null
CC BY-SA 3.0
null
2011-06-10T23:54:30.733
2011-06-10T23:54:30.733
null
null
4376
null
11819
2
null
11808
7
null
Max Welling has a nice [tutorial](http://www.cs.ucl.ac.uk/staff/S.Prince/4C75/WellingKalmanFilter.pdf) that describes all of the Kalman Filtering and Smoothing equations as well as parameter estimation. This may be a good place to start.
null
CC BY-SA 3.0
null
2011-06-10T23:56:35.130
2011-06-10T23:56:35.130
null
null
1913
null
11820
2
null
11768
1
null
Without some extra context the question is difficult to answer. What is your real-world data? Models (a theoretical distribution for your data) come from applications, not vacuums. There isn't one best way to approximate an unknown distribution in practice. There isn't even one "best". As a general comment, you can get a long way mixing normal distributions. But without assuming something you're going to have a hard time, particularly when the data aren't iid.
null
CC BY-SA 3.0
null
2011-06-11T07:13:06.747
2011-06-11T07:13:06.747
null
null
26
null
11821
1
11828
null
6
2091
Assume that you have a regression with a whole set of variables and you know that the residuals are not normally distributed. So you just estimate a regression using OLS to find the best linear fit; for this you drop the assumption of normally distributed error terms. After the estimation you have 2 "significant" coefficients. But how can anyone interpret these coefficients? So there is no way to say "these coefficients are significant", although the hypothesis $\beta=0$ can be rejected with a high t-statistic (because we dropped the normal-error assumption). But what to do in this case? How would you argue?
Interpret t-values when not assuming normal distribution of the error term
CC BY-SA 3.0
null
2011-06-11T09:55:12.953
2011-06-11T20:22:45.043
2011-06-11T10:24:46.670
2116
4496
[ "regression", "linear-model" ]
11822
2
null
11821
1
null
If the errors are not normally distributed, asymptotic results can be used. Suppose your model is $$y_i=x_i'\beta+\varepsilon_i$$ where $(y_i,x_i',\varepsilon_i)$, $i=1,...,n$ is an iid sample. Assume \begin{align*} E(\varepsilon_i|x_i)&=0 \\ E(\varepsilon_i^2|x_i)&=\sigma^2 \end{align*} and $$rank(Ex_ix_i')=K,$$ where $K$ is the number of coefficients. Then the usual OLS estimate $\hat\beta$ is asymptotically normal: $$\sqrt{n}(\hat\beta-\beta)\to N\left(0,\sigma^2[E(x_ix_i')]^{-1}\right)$$ Practical implications of this result are that the usual t-statistics become z-statistics, i.e. their distribution is normal instead of Student. So you can interpret t-statistics as usual, only p-values should be adjusted for the normal distribution. Note that since this result is asymptotic, it does not hold for small sample sizes. Also the assumptions used can be relaxed.
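A small simulation of this result (a sketch of my own: skewed, centered-exponential errors and a true slope of zero; with $n=200$ the usual t-ratio rejects at close to the nominal 5% level):

```python
import numpy as np

rng = np.random.default_rng(9)
n, reps = 200, 2_000
rejections = 0
for _ in range(reps):
    x = rng.normal(size=n)
    eps = rng.exponential(1.0, n) - 1.0  # non-normal errors, mean zero
    y = 0.0 * x + eps                    # true slope is zero
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)                       # residual variance
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])    # slope standard error
    if abs(beta[1] / se) > 1.96:
        rejections += 1

print(rejections / reps)  # close to the nominal 0.05
```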
null
CC BY-SA 3.0
null
2011-06-11T10:36:59.673
2011-06-11T20:22:45.043
2011-06-11T20:22:45.043
2116
2116
null
11823
1
11824
null
4
2884
I have tested a regression framework's robustness to noise and I have noticed that in some cases adding noise improves the prediction performance and in other cases the performance degrades. What could be the reasons for this? If there are multiple reasons, how do I determine which is the cause? Edit: Some more details about what I am doing. The framework uses ridge regression. The inputs are vectors of extracted image features. The outputs are vectors of angles (in degrees, -180 to 180). To test for robustness to noise I am applying 3 levels of noise (white additive Gaussian noise) to the angles (targets), proportional to the individual angle variances (2%, 5%, and 10% of the variance of each angle). I have noticed that in some observations, adding a small amount of noise (2-5%) leads to a small improvement in performance and in one case, all levels of noise give improvement. In my tests, the regularisation term is fixed across all noise levels, and I have run each noise level test several times to take into account the fluctuations of the random noise. Also, I have two broad sets of observation data. The first set was observed relatively accurately; however, the second set was more complex (significantly more heterogeneous, leading to notable performance degradation relative to the first set) and exhibited a number of minor errors due to the observation technique being more limited than that which was used with the first set. In the first set, the phenomenon of better performance through adding noise did not occur. However, sometimes more noise was better than less noise. If more information is required to better answer the question, I'd be happy to provide it.
Why does noisy data result in better prediction performance?
CC BY-SA 3.0
null
2011-06-11T11:51:39.733
2013-02-09T07:22:06.400
2011-06-11T14:16:58.987
3052
3052
[ "regression", "white-noise" ]
11824
2
null
11823
8
null
Your description is quite sketchy. Adding noise can (seem to) improve prediction if the method of developing the predictions is overfitting. Likewise if you are overfitting you can improve prediction by deleting progressively more of your data. Depending on your sample size, "improvements" are best demonstrated by bootstrapping in 100-fold repeats of 10-fold cross-validation.
null
CC BY-SA 3.0
null
2011-06-11T12:14:35.133
2011-06-11T12:14:35.133
null
null
4253
null
11828
2
null
11821
3
null
If the residuals are not normal (and note that this applies to the theoretical residuals rather than the observed residuals), but not overly skewed or with outliers, then the Central Limit Theorem applies and the inference on the slopes (t-tests, confidence intervals) will be approximately correct. The quality of the approximation depends on the sample size and the degree and type of non-normality in the residuals. The CLT works fine for the inference on the slopes, but does not apply to prediction intervals for new data. If you're not happy with the CLT argument (small sample sizes, skewness, just not sure, want a second opinion, want to convince a skeptic, etc.) then you can use bootstrap or permutation methods which do not depend on the normality assumption.
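A pairs-bootstrap sketch for a slope (my own illustration with simulated skewed errors, not the poster's data: resample observations with replacement and take percentile limits):

```python
import numpy as np

rng = np.random.default_rng(10)
n = 100
x = rng.normal(size=n)
y = 2.0 * x + rng.exponential(1.0, n) - 1.0  # true slope 2, skewed errors

def slope(xv, yv):
    return np.polyfit(xv, yv, 1)[0]

boots = []
for _ in range(2_000):
    idx = rng.integers(0, n, n)   # resample (x, y) pairs with replacement
    boots.append(slope(x[idx], y[idx]))

lo, hi = np.percentile(boots, [2.5, 97.5])
print(lo, hi)  # a 95% percentile bootstrap interval for the slope
```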
null
CC BY-SA 3.0
null
2011-06-11T14:45:14.073
2011-06-11T14:45:14.073
null
null
4505
null
11829
1
11851
null
4
7375
Is it possible to take the log of an independent variable in a Poisson regression? What do I have to be aware of when doing so? (The results get better when the independent variable is entered on the log scale.)
Take the log of an independent variable in a Poisson regression
CC BY-SA 3.0
null
2011-06-11T15:10:43.740
2011-06-12T19:58:56.200
2011-06-11T17:44:07.480
919
4496
[ "regression", "poisson-distribution", "count-data" ]
11830
2
null
11829
7
null
There is no problem with taking the log or another transform of predictor/independent variables in a Poisson regression, so long as the transformation is possible (no 0's or negative numbers) and makes sense given the science.
null
CC BY-SA 3.0
null
2011-06-11T16:08:46.110
2011-06-11T16:08:46.110
null
null
4505
null
11832
1
11844
null
2
1880
I just need a little bit of a push in the right direction. I'm working my way through Hayashi's Econometrics and hit a snag in section 1.4. Review question 7 asks: > Show that, under Assumptions 1.1-1.5, $Var(s^2|X)=\frac{2\sigma^4}{n-K}$ Hint: If a random variable is distributed as $\chi^2(m)$, then its mean is $m$ and variance $2m$. I figure this needs to be broken into two parts – the first showing that $s^2$ follows a $\chi^2$ distribution, and the second part showing that the mean is the expression above sans the 2. The book gives a couple of hints about the kinds of things that follow $\chi^2$ distributions. Here's a footnote on page 41: > Fact: Let x be an $m$ dimensional random vector. If $x\sim N(\mu,\Sigma)$ with $\Sigma$ nonsingular, then $(x-\mu)'\Sigma^{-1}(x-\mu)\sim\chi^2(m)$. This doesn't do me much good though. Secondly, there's this bit on page 37: > Fact: If $x\sim N(0,I_n)$ and $A$ is idempotent, then $x'Ax$ has a chi-squared distribution with degrees of freedom equal to the rank of $A$. But $\varepsilon$ (measurement error) doesn't follow the standard normal distribution – its variance is $\sigma^2$, so this isn't much use to me either. I'm just starting out and not really sure how to tackle this. Could someone give me a hand?
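Not a proof, but the claimed value is easy to check by simulation (a sketch of my own: normal errors, $n=20$, $K=3$, $\sigma^2=1$, so the claim is $Var(s^2|X) = 2/17 \approx 0.118$; it uses the fact that OLS residuals are $(I-H)\varepsilon$ regardless of the true $\beta$):

```python
import numpy as np

rng = np.random.default_rng(11)
n, K, sigma2, reps = 20, 3, 1.0, 40_000
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])  # fixed design
H = X @ np.linalg.inv(X.T @ X) @ X.T                            # hat matrix

s2 = np.empty(reps)
for i in range(reps):
    eps = rng.normal(0, np.sqrt(sigma2), n)
    resid = eps - H @ eps            # OLS residuals: (I - H) eps
    s2[i] = resid @ resid / (n - K)  # the unbiased variance estimator

print(s2.var(), 2 * sigma2**2 / (n - K))  # both near 2/17
```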
How do you derive the conditional variance for $s^2$, the OLS estimator of $\sigma^2$?
CC BY-SA 3.0
null
2011-06-11T17:00:14.327
2021-02-07T12:04:26.980
2021-02-07T12:04:26.980
11887
2251
[ "distributions", "variance", "econometrics", "conditional-probability", "chi-squared-distribution" ]
11833
1
null
null
11
626
I teach an introductory economic geography course. To help my students develop a better understanding of the kinds of countries found in the contemporary world economy and an appreciation of data reduction techniques, I want to construct an assignment that creates a typology of different kinds of countries (e.g., high-income high-value-added mfg long life expectancy; high-income natural resource exporter mid-high life expectancy; with Germany being an element of the first type, and Yemen an example of the second type). This would use publicly available UNDP data (which if I recall correctly contains socioeconomic data on a bit less than 200 countries; sorry, no regional data are available). Prior to this assignment would be another which asks them (using the same --- largely interval or ratio level --- data) to examine correlations between these same variables. My hope is that they would first develop an intuition for the kinds of relationships between different variables (e.g., a positive relationship between life expectancy and [various indicators of] wealth; a positive relationship between wealth and export diversity). Then, when using the data reduction technique, the components or factors would make some intuitive sense (e.g., factor / component 1 captures the importance of wealth; factor / component 2 captures the importance of education). Given that these are second to fourth year students, often with limited exposure to analytical thinking more generally, what single data reduction technique would you suggest as most appropriate for the second assignment? These are population data, so inferential statistics (p-values, etc.) are not really necessary.
Data reduction technique to identify types of countries
CC BY-SA 3.0
null
2011-06-11T17:37:52.547
2023-01-06T04:45:36.723
2011-12-05T21:13:34.090
930
4980
[ "pca", "factor-analysis", "dimensionality-reduction" ]
11834
2
null
11833
10
null
As an exploratory method, PCA is a good first choice for an assignment like this IMO. It'd also be nice for them to get exposed to it; it sounds like many of them won't have seen principal components before. In terms of data I'd also point you to the World Bank Indicators, which are remarkably complete: [http://data.worldbank.org/indicator](http://data.worldbank.org/indicator).
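For a concrete starting point, PCA reduces to a few lines of linear algebra (a sketch of my own on random stand-in numbers; the real assignment would substitute the UNDP or World Bank indicators):

```python
import numpy as np

rng = np.random.default_rng(12)
data = rng.normal(size=(150, 6))  # stand-in: 150 countries, 6 indicators

Z = (data - data.mean(axis=0)) / data.std(axis=0)  # standardize each indicator
U, svals, Vt = np.linalg.svd(Z, full_matrices=False)

scores = U * svals                       # country scores on each component
loadings = Vt.T                          # how each indicator loads on components
explained = svals**2 / np.sum(svals**2)  # share of variance per component

print(explained)  # decreasing by construction: component 1 explains the most
```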
null
CC BY-SA 3.0
null
2011-06-11T17:50:10.890
2011-06-11T17:50:10.890
null
null
26
null
11835
1
null
null
7
15987
I have consulted two texts on how to calculate confidence intervals when N is small and the population standard deviation is unknown. There are some differences in the formulas they give and the end result varies depending on which text I follow (although not by a large amount). Text one says: - Calculate the mean - Calculate the standard deviation using the formula $s=\sqrt{\sum X^2/N-\bar{X}^2}$ - Calculate the standard error of the mean using the formula $s/\sqrt{N-1}$ - Determine the value of T from the t-table - Obtain the margin of error by multiplying the standard error of the mean by the value obtained in step 4. - Add and subtract this product from the sample mean to obtain the C.I. Steps 1, 4, 5 & 6 are exactly the same in the second text. However it gives different formulas for steps 2 & 3. It says: - Calculate the standard deviation using the formula $s=\sqrt{\sum X^2/(N-1)-\bar{X}^2}$. The difference is that they reduce N by one. - Calculate the standard error of the mean using the formula $s/\sqrt{N}$. The difference is that N is not reduced by 1. Can anyone explain why the different formulas are used and why? Thanks. Anne S
Formula for confidence intervals for small samples and unknown population standard deviation
CC BY-SA 3.0
null
2011-06-11T19:33:57.497
2018-08-23T13:47:08.133
2011-06-11T22:02:25.383
4498
4498
[ "confidence-interval" ]
11836
1
null
null
4
140
I am trying to determine the probability of a "mixed panel" assignment (i.e., a panel of judges w/at least 1 woman). Consider the following: A court has a total of 20 judges, 8 of whom are women. Panels of 5 judges are randomly drawn to decide any given case. What is the probability of drawing a panel on which at least 1 of the judges is female? I realize this is a "simple" question, but it is beyond me. I would greatly appreciate a discussion of the steps required for its solution. Thank you in advance.
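For what it's worth, the standard route sketched in code (my own arithmetic, not part of the original question: by the complement rule, P(at least one woman) = 1 − P(no women), and P(no women) is a hypergeometric count):

```python
from math import comb

total, women, panel = 20, 8, 5
men = total - women

# Probability that a random panel of 5 contains no women at all:
# choose all 5 from the 12 men, out of all ways to choose 5 from 20
p_no_women = comb(men, panel) / comb(total, panel)
p_at_least_one = 1 - p_no_women

print(p_at_least_one)  # 1 - 792/15504, about 0.949
```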
Probability of panel assignment
CC BY-SA 3.0
null
2011-06-11T20:25:24.340
2011-06-12T17:27:43.327
null
null
4982
[ "probability" ]
11837
2
null
11833
5
null
I agree with JMS, and PCA seems like a good idea after examining the initial correlations and scatterplots between the variables for each country. [This thread](https://stats.stackexchange.com/q/2691/1036) has some useful suggestions to introduce PCA in non-mathematical terms. I would also suggest utilizing small multiple maps to visualize the spatial distributions of each of the variables (and there are some good examples in [this question](https://gis.stackexchange.com/q/4568/751) on the gis.se site). I think these work particularly well if you have a limited number of areal units to compare and you use a good color scheme (like [this example](https://statmodeling.stat.columbia.edu/2011/04/04/irritating_pseu/) on Andrew Gelman's blog). Unfortunately the nature of any "world countries" dataset I suspect would frequently result in sparse data (i.e. a lot of missing countries), making geographic visualization difficult. But such visualization techniques should be useful in other situations as well for your course.
null
CC BY-SA 4.0
null
2011-06-11T20:25:39.440
2023-01-06T04:45:36.723
2023-01-06T04:45:36.723
362671
1036
null
11838
2
null
11788
3
null
I don't think the Fligner-Killeen test (nor the Brown-Forsythe test) is appropriate since you don't know the median in the published data (if you do have it and simply didn't mention it then never mind). I wouldn't suggest simulation of the data either unless you're sure the samples follow a specific distribution. Since you don't have the median and the distribution is uncertain, [Levene's Test](http://en.wikipedia.org/wiki/Levene%27s_test) would be appropriate. I've never run the test in R before, but there is a description of it [here](http://hosho.ees.hokudai.ac.jp/~kubo/Rdoc/library/car/html/levene.test.html). If you're having a lot of trouble getting the R code to work, though, I'd just compute it by hand given the summary statistics from the literature and your own data. As the Wikipedia article indicates, the statistic is F-distributed, so [you'll need a table](http://www.statsoft.com/textbook/distribution-tables/#f) if you don't have one.
null
CC BY-SA 3.0
null
2011-06-11T21:17:49.697
2011-06-11T21:40:33.070
2011-06-11T21:40:33.070
4325
4325
null
11839
2
null
11835
5
null
There are some good notes on the standard deviation and the standard error of the mean [here](http://www.cms.murdoch.edu.au/areas/maths/statsnotes/samplestats/stdevmore.html). The Wackerly et al. text computes small-sample confidence intervals in section 8.8 (page 430); you can see their formula [here](http://books.google.com/books?id=ZvPKTemPsY4C&lpg=PP1&dq=wackerly&pg=PA430#v=snippet&q=Small-Sample%20confidence%20intervals&f=false). Confidence interval: $\bar{y} \pm t_{\alpha/2} \cdot \frac{S}{\sqrt{n}}$ where $\bar{y} = \frac{1}{n}\sum y_{i}$ (the sample mean) and $S = \sqrt{\frac{1}{n-1}\sum(y_{i}-\bar{y})^2}$ (the sample standard deviation). $t_{\alpha/2}$ is the critical value for a given value of $\alpha$ (e.g., .1, .05, etc.) with n-1 degrees of freedom, where n is the sample size; you'd find it in a [table](http://www.statsoft.com/textbook/distribution-tables/#t). Now if your sample is a large proportion of a known finite population size there is something called a [population correction factor](http://www.childrensmercy.org/stats/size/population.asp), but for basic needs you probably don't have to worry about this.
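As a quick illustration (my own sketch, not from the linked texts), this interval can be computed in a few lines of Python. The standard library has no t-distribution, so the critical value $t_{.025,9} \approx 2.262$ is hard-coded here for a 95% interval with n = 10; the data are made up:

```python
import math
import statistics

def t_interval(sample, t_crit):
    """Small-sample CI: ybar +/- t * s / sqrt(n)."""
    n = len(sample)
    ybar = statistics.mean(sample)
    s = statistics.stdev(sample)          # uses the n-1 denominator
    half_width = t_crit * s / math.sqrt(n)
    return ybar - half_width, ybar + half_width

# n = 10, so df = 9; t_{.025, 9} ~ 2.262 for a 95% interval
sample = [4.1, 5.2, 6.3, 4.8, 5.0, 5.9, 4.4, 5.5, 6.1, 4.7]
low, high = t_interval(sample, 2.262)
```

With a statistics library available (e.g. scipy's `t.ppf`), you would look the critical value up instead of hard-coding it.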
null
CC BY-SA 4.0
null
2011-06-11T22:53:04.550
2018-08-23T13:47:08.133
2018-08-23T13:47:08.133
7290
4325
null
11840
1
11882
null
5
578
I have been investigating the possibility of using the interval between uncommon events to test for changes in the frequency of such events over time. As an example, say that the event is breaking a record in some sporting competition. This might occur at most a few times a year, and the data segmentation problem (whether an event falls within a certain interval of observation) causes the usual events-per-year analysis, usually a GLM with a Poisson link, to be strongly affected by even a single event unless the series is very long. I first thought of sampling different observation intervals, and then decided to discard the intervals entirely and see if the intervals between events were associated with their serial position. This seemed to be a more powerful test in the data I had. Apparently this was an accepted procedure some 50-60 years ago, but I have not been able to find much in the recent literature. I'm pretty sure that this is related to analyzing the frequency of extreme weather events and the like, but I am not familiar with that field. Does anyone out there know whether this type of test has been superseded by something?
Analysis of intervals between events
CC BY-SA 3.0
null
2011-06-11T23:49:41.347
2011-06-13T19:39:35.467
2011-06-12T00:02:30.703
null
4983
[ "time-series" ]
11841
2
null
11797
5
null
One usually estimates probabilities with frequencies: [according to Laplace](http://en.wikipedia.org/wiki/Probability_interpretations) (1814), > The ratio of this number [of "favorable cases"] to that of all the cases possible is the measure of this probability... This is justified by an urn model (or "tickets in a box" model) of probability: print the text on paper, cut out each word, and drop them into a box. Imagine creating "sentences" by randomly drawing one piece of paper from the box, writing the word seen on it, returning the paper to the box (to leave its contents unchanged), and repeating. The number of "favorable cases" for any word is the number of slips of paper on which it is written. The number of "all cases possible" is the total number of slips of paper in the box. Therefore, to compute $p_i$, you count two things: $n$, the number of words in a text, and $n_i$, the number of words that match word $i$. Then $p_i = n_i/n$. For example, in the sentence > I once had a girl; or should I say, she once had me. there are $n=13$ words. We would compute $p_{\text{once}} = 2/13$, $p_{\text{she}} = 1/13$, $p_{\text{boy}} = 0/13$, etc. With this model we can compute the probability of a "co-occurrence." This (in your situation) is the chance that word $i$ is followed by word $j$ in a random sequence of three words drawn as described. Continuing the example, let's compute the probability of co-occurrence of "she had." This can be done with a probability tree: - The chance that the first word is "she" equals $1/13$. - Conditional on the first word being "she", there is a $2/13$ chance that the second word is "had". Thus there is a $1/13 \cdot 2/13$ chance of "she had ...". - Conditional on the first word being "she" and the second not being "had", there is a $2/13$ chance that the third word is "had". Because the chance of the second word not being "had" is $11/13$, this conditional probability equals $1/13 \cdot 11/13 \cdot 2/13$. It is the chance of "she ... 
had" where "..." is not "had." - Conditional on the first word not being "she", the chance that the second is "she" and the third is "had" equals $1/13 \cdot 2/13$. Because the chance of the first word not being "she" is $12/13$, the conditional chance equals $12/13 \cdot 1/13 \cdot 2/13$. It is the chance of "... she had" where "..." is not "she." These three conditional events are mutually exclusive, allowing us to add their chances. Whence the chance of co-occurrence of "she had" is $$p_{ij} = 1/13 \cdot 2/13 + 1/13 \cdot 11/13 \cdot 2/13 + 12/13 \cdot 1/13 \cdot 2/13 = 72/2197 = 0.032772.$$ Note that $p_i \cdot p_j = 1/13 \cdot 2/13 = 2/169 = 26/2197 = 0.0118343.$ In particular, it is definitely not the case that $p_{ij} = p_i \cdot p_j$ under this probability model. Moreover, the ratio $p_{ij} / (p_i p_j) = 36/13$ is none of the intuitively "obvious" values $1$, $3$, or $4$ (it is slightly less than $3$). Comparing a co-occurrence probability to frequencies within an actual text is challenging because the $n-2$ sequences of three words that do appear in the text are not independent. For instance, consider co-occurrences of "once had" in the preceding example. The initial sequence "I once had" is a co-occurrence. It guarantees that the second sequence, "once had a" also is a co-occurrence. However, it lowers the chances that the third sequence is a co-occurrence, because the third sequence must begin with "had," making it impossible to begin with "once." People often address this by computing expected values of the frequencies. Returning to the probability tree calculation, we find the expected number of co-occurrences of "she had" in a random sequence of three words, counting "she she had" as just one co-occurrence, is $72/2179$. Therefore the expected number of such co-occurrences in a sentence of 13 words, which contains 11 such sequences, equals $11 \cdot 72/2179 = 792/2179 = 0.36$. That does not depart significantly from the observed number, $1$. 
What these considerations teach us is that any research that applies probability to co-occurrence networks needs to be clear and specific about (a) how probability is being applied: that is, what probability model is used; and (b) how co-occurrences will be identified and their frequencies computed. It is evident, though, that the formula $p_{ij} = p_i \cdot p_j$ (for independently drawn words $i$ and $j$) is unlikely to be even approximately true when a "co-occurrence" can include one intermediate word between $i$ and $j$.
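As a sanity check on the probability tree above, one can enumerate all $13^3 = 2197$ equally likely three-word draws from the box and count the favorable cases (a brute-force sketch of the urn model described above, not code from the answer):

```python
from fractions import Fraction
from itertools import product

# The 13 "tickets in the box" (punctuation stripped)
words = "I once had a girl or should I say she once had me".split()
assert len(words) == 13

def cooccurs(seq):
    """True if 'she' precedes 'had' with at most one word in between."""
    for i, w in enumerate(seq):
        if w == "she" and "had" in seq[i + 1:i + 3]:
            return True
    return False

# Enumerate every possible 3-word draw (with replacement) and count hits
hits = sum(cooccurs(seq) for seq in product(words, repeat=3))
p_cooccur = Fraction(hits, 13 ** 3)
```

The count comes out to exactly 72 of the 2197 sequences, matching the probability-tree result $72/2197$.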
null
CC BY-SA 3.0
null
2011-06-12T00:19:51.727
2011-06-12T00:19:51.727
null
null
919
null
11842
2
null
11836
6
null
There is a large and rich branch of mathematics, [combinatorics](http://en.wikipedia.org/wiki/Combinatorics), devoted to solving such problems. The most important step used here is to recognize that "at least one female" is more simply characterized as "not all males." Details follow. --- The name for the number of distinct 5-member panels from a pool of 20 judges is the "binomial coefficient," $\binom{20}{5}$. We will worry later about how to compute this. All probabilities involving 5-member panels of these judges will be fractions with this number in the denominator. Computing a probability is a matter of counting which panels are described by an event; that will go in the numerator. The all-male panels are drawn from a smaller pool of just 20-8 = 12 judges. There are therefore $\binom{12}{5}$ of them. The remainder of the possible panels, equal in number to $\binom{20}{5} - \binom{12}{5}$, have at least one female. Therefore the desired probability is $$\Pr(\text{Panel with a female judge}) = \frac{\binom{20}{5} - \binom{12}{5}}{\binom{20}{5}}.$$ To compute the binomial coefficients, consider that the number of ordered sequences of 5 people out of 20 equals $20 \cdot 19 \cdot 18 \cdot 17 \cdot 16$ because there are 20 ways to pick the first in the panel, 19 remaining people from whom to choose the second, and so on. Any panel of 5 people determines $5! = 5\cdot 4 \cdots 2 \cdot 1$ such orderings. Therefore $\binom{20}{5} = 20\cdots 16 / (5 \cdots 1) = 15504.$ Likewise $\binom{12}{5} = 792$. The desired probability is $$\frac{\binom{20}{5} - \binom{12}{5}}{\binom{20}{5}} = \frac{15504 - 792}{15504} = \frac{613}{646}.$$
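The computation can be verified directly (a small sketch using Python's `math.comb` for the binomial coefficients; this is my addition, not part of the answer):

```python
from fractions import Fraction
from math import comb

total_panels = comb(20, 5)   # all 5-judge panels from 20 judges
all_male = comb(12, 5)       # panels drawn only from the 12 male judges

# "At least one female" is the complement of "all males"
p_at_least_one_woman = Fraction(total_panels - all_male, total_panels)
```

`Fraction` reduces the answer automatically, confirming $613/646 \approx 0.949$.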
null
CC BY-SA 3.0
null
2011-06-12T01:37:45.843
2011-06-12T17:27:43.327
2011-06-12T17:27:43.327
919
919
null
11844
2
null
11832
4
null
Browsing around in the online Google version of the book it seems to me that Assumption 1.5 is the normality assumption. In that case the proof of Proposition 1.3 says that $q|X \sim \chi^2(n-K)$ where $q = (n-K)s^2/\sigma^2$. Thus $$\begin{array}{rcl} \text{Var}(s^2|X) & = & \text{Var}(\sigma^2 q/(n-K)|X) \\ & = & \frac{\sigma^4}{(n-K)^2} \text{Var}(q|X) \\ & = & \frac{\sigma^4}{(n-K)^2} 2(n-K) \\ & = & \frac{2\sigma^4}{n-K} \end{array}$$ where we used the hint for the third equality.
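A quick Monte Carlo check of this result (my sketch, not from the textbook): for the simplest instance, an intercept-only model with $K = 1$, $n = 10$ and $\sigma^2 = 1$, the proposition gives $\text{Var}(s^2) = 2\sigma^4/(n-K) = 2/9 \approx 0.222$:

```python
import random
import statistics

random.seed(1)
n, trials = 10, 50_000

# Draw many samples of size n from N(0, 1) and record s^2 each time;
# statistics.variance uses the n-1 (here n-K with K=1) denominator.
s2_draws = [
    statistics.variance([random.gauss(0.0, 1.0) for _ in range(n)])
    for _ in range(trials)
]
var_s2 = statistics.variance(s2_draws)   # should be near 2/(n-1) = 2/9
```

The simulated variance of $s^2$ lands close to $2/9$, consistent with the chi-squared derivation above.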
null
CC BY-SA 3.0
null
2011-06-12T07:30:38.100
2011-06-12T07:30:38.100
null
null
4376
null
11845
2
null
11807
1
null
There is probably little chance of finding existing code that would take advantage of that -- you would rather need to write something on your own. However, another option is to transform your data to reduce its size by removing redundant information. It is hard to tell how without information about your data, but maybe you can merge some features which you know do not overlap, PCA parts of it, or change the representation of some descriptors? Also, if you say your response is sparse as well, maybe it is reasonable to downsample the objects with 0 in the response?
null
CC BY-SA 3.0
null
2011-06-12T08:25:58.907
2011-06-12T08:25:58.907
null
null
null
null
11846
2
null
11628
20
null
I think @Jeromy already covered the essentials, so I shall concentrate on measures of reliability. Cronbach's alpha is a sample-dependent index used to ascertain a lower bound on the reliability of an instrument. It is no more than an indicator of variance shared by all items considered in the computation of a scale score. Therefore, it should not be confused with an absolute measure of reliability, nor does it apply to a multidimensional instrument as a whole. In effect, the following assumptions are made: (a) no residual correlations, (b) items have identical loadings, and (c) the scale is unidimensional. This means that the sole case where alpha will be essentially the same as reliability is the case of uniformly high factor loadings, no error covariances, and a unidimensional instrument (1). Since its precision depends on the standard error of the item intercorrelations, alpha reflects the spread of the item correlations regardless of the source or sources of that spread (e.g., measurement error or multidimensionality). This point is discussed at length in (2). It is worth noting that when alpha is 0.70, a widely referred-to reliability threshold for group comparison purposes (3,4), the standard error of measurement will be over half (0.55) a standard deviation. Moreover, Cronbach's alpha is a measure of internal consistency; it is not a measure of unidimensionality and can't be used to infer unidimensionality (5). Finally, we can quote L.J. Cronbach himself, > Coefficients are a crude device that does not bring to the surface many subtleties implied by variance components. In particular, the interpretations being made in current assessments are best evaluated through use of a standard error of measurement. --- Cronbach & Shavelson, (6) There are many other pitfalls that have been discussed at length in several papers over the last 10 years (e.g., 7-10). 
Guttman (1945) proposed a series of 6 so-called lambda indices to assess a similar lower bound for reliability, and Guttman's $\lambda_3$ lower bound is strictly equivalent to Cronbach's alpha. If instead of estimating the true variance of each item as the average covariance between items we consider the amount of variance in each item that can be accounted for by the linear regression of all other items (aka, the squared multiple correlation), we get the $\lambda_6$ estimate, which might be computed for multi-scale instruments as well. More details can be found in William Revelle's forthcoming textbook, [An introduction to psychometric theory with applications in R](http://personality-project.org/r/book/) (chapter 7). (He is also the author of the [psych](http://cran.r-project.org/web/packages/psych/index.html) R package.) You might be interested in reading sections 7.2.5 and 7.3, in particular, as they give an overview of alternative measures, like McDonald's $\omega_t$ or $\omega_h$ (instead of using the squared multiple correlation, we use item uniqueness as determined from an FA model) or Revelle's $\beta$ (replace FA with hierarchical cluster analysis; for a more general discussion see (12,13)), and provide a simulation-based comparison of all indices. ## References - Raykov, T. (1997). Scale reliability, Cronbach’s coefficient alpha, and violations of essential tau-equivalence for fixed congeneric components. Multivariate Behavioral Research, 32, 329-354. - Cortina, J.M. (1993). What Is Coefficient Alpha? An Examination of Theory and Applications. Journal of Applied Psychology, 78(1), 98-104. - Nunnally, J.C. and Bernstein, I.H. (1994). Psychometric Theory. McGraw-Hill Series in Psychology, Third edition. - De Vaus, D. (2002). Analyzing social science data. London: Sage Publications. - Danes, J.E. and Mann, O.K. (1984). Unidimensional measurement and structural equation models with latent variables. Journal of Business Research, 12, 337-352. - Cronbach, L.J. 
and Shavelson, R.J. (2004). My current thoughts on coefficient alpha and successor procedures. Educational and Psychological Measurement, 64(3), 391-418. - Schmitt, N. (1996). Uses and Abuses of Coefficient Alpha. Psychological Assessment, 8(4), 350-353. - Iacobucci, D. and Duhachek, A. (2003). Advancing Alpha: Measuring Reliability With Confidence. Journal of Consumer Psychology, 13(4), 478-487. - Shevlin, M., Miles, J.N.V., Davies, M.N.O., and Walker, S. (2000). Coefficient alpha: a useful indicator of reliability? Personality and Individual Differences, 28, 229-237. - Fong, D.Y.T., Ho, S.Y., and Lam, T.H. (2010). Evaluation of internal reliability in the presence of inconsistent responses. Health and Quality of Life Outcomes, 8, 27. - Guttman, L. (1945). A basis for analyzing test-retest reliability. Psychometrika, 10(4), 255-282. - Zinbarg, R.E., Revelle, W., Yovel, I., and Li, W. (2005). Cronbach's $\alpha$, Revelle's $\beta$, and McDonald's $\omega_h$: Their relations with each other and two alternative conceptualizations of reliability. Psychometrika, 70(1), 123-133. - Revelle, W. and Zinbarg, R.E. (2009). Coefficients alpha, beta, omega and the glb: comments on Sijtsma. Psychometrika, 74(1), 145-154.
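As an illustration of the index under discussion (my own sketch, not taken from any of the cited papers), the usual computational formula $\alpha = \frac{k}{k-1}\left(1-\frac{\sum_i \sigma^2_i}{\sigma^2_{\text{total}}}\right)$, where $k$ is the number of items, is easy to code directly:

```python
import statistics

def cronbach_alpha(items):
    """items: list of k item-score lists, all the same length (one score per person)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]     # per-person total score
    item_vars = sum(statistics.variance(it) for it in items)
    return k / (k - 1) * (1 - item_vars / statistics.variance(totals))

# Two toy examples: perfectly parallel items, then only partially consistent items
alpha_perfect = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
alpha_partial = cronbach_alpha([[1, 2, 3, 4], [2, 1, 4, 3]])
```

Identical items give $\alpha = 1$; the second toy matrix gives $\alpha = 0.75$ by hand calculation, illustrating how shared variance drives the index.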
null
CC BY-SA 3.0
null
2011-06-12T10:23:38.533
2011-06-14T15:02:56.760
2011-06-14T15:02:56.760
930
930
null
11847
1
null
null
5
2017
I have the following dataset (triplicate values of 5 independent measurements and duplicate values of a control): ``` Sample 1 Sample 2 Sample 3 Sample 4 Sample 5 C
 181.8     58.2    288.9    273.2    290.9    53.9
 120.3    116.8    108.9    281.3    446      39.6
  86.1    148.5     52.9    126      150.3 ``` The six conditions are independent production of a chemical in six different microorganisms. My aim was to find if the production of the chemical is significantly higher in Samples 1 - 5 than in C (control). At first, I carried out mean, SD and t-test (one-tail). Although SD error bars are large, two of the samples have mean values that are significantly higher than that of C. I decided to carry out median and Wilcoxon-Mann-Whitney test because of my worry about the size of my data and the high variations in the replicates. I was surprised, however, that median and Wilcoxon-Mann-Whitney tests did not reveal statistically significant results. I will be happy if anyone could advise me on the best way to analyse this small dataset.
Help with data analysis of small datasets
CC BY-SA 3.0
null
2011-06-12T10:36:50.057
2011-06-13T07:39:06.533
2011-06-12T14:22:47.513
2970
4986
[ "hypothesis-testing" ]
11848
1
20475
null
9
564
Suppose you have a casino with n poker players. Each player has a win rate - the amount of money he wins or loses per hand. We assume that these win rates are normally distributed with a mean of 0. (We also assume that the players don't pay the casino any money.) Our goal is to estimate the variance V of the distribution. For each player x, we have observed a number of hands; we know how much money x has won or lost on each of these hands. How would you go about estimating V? Can we get a better estimate if we add some empirical assumptions (in the vein of "in the long run, no-one can sustain a win rate of more than 1$/hand")? EDIT: Let me try to clarify what I mean by "win rate". If a player wins 500.000$ by playing a million hands then his observed win rate is 0.5$/hand. With a million hands it's also likely that his actual win rate is close to 0.5$/hand. The idea is that a player has an actual win rate which cannot be observed directly, but which is a function of the player's skill. For example, if all players are equally skilled they will all have an actual win rate of 0; in this case, we also have V=0. The question above is concerned with actual win rates. EDIT: My motivation for asking this question was to estimate how many players have an actual win rate of, say, more than 0.3$/hand. If you disagree with the assumptions made above, feel free to base your estimate on other assumptions.
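One standard way to attack this (a sketch of a method-of-moments / random-effects idea, not something stated in the question; all numbers below are made up) is to note that the variance of the observed win rates equals $V$ plus the average sampling variance $\sigma^2/n$, and subtract the latter:

```python
import math
import random
import statistics

random.seed(7)

V_true = 0.04        # variance of actual win rates, ($/hand)^2 -- what we try to recover
sigma = 10.0         # per-hand SD of results, assumed known and equal for all players
hands = 100_000      # hands observed per player (taken equal for simplicity)
players = 2_000

# Simulate observed rates: actual rate + sampling noise of SD sigma/sqrt(hands)
observed = [
    random.gauss(0.0, math.sqrt(V_true)) + random.gauss(0.0, sigma / math.sqrt(hands))
    for _ in range(players)
]

# Method of moments: Var(observed) = V + sigma^2 / hands  =>  solve for V
V_hat = statistics.variance(observed) - sigma ** 2 / hands
```

With unequal hand counts you would subtract the average of the per-player sampling variances $\sigma^2/n_i$ instead; empirical-Bayes shrinkage builds on the same decomposition.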
Estimating the variance of poker win rates
CC BY-SA 3.0
null
2011-06-12T14:35:30.007
2012-01-02T14:25:17.040
2011-06-13T11:41:14.013
4988
4988
[ "estimation", "normal-distribution", "sampling", "variance" ]
11849
2
null
11847
4
null
There is a time to make formal statistical inferences from sample to population, and a time to simply report on your descriptive results and let your audience make informal inferences--or not, as they see fit. This looks like the latter. With two control values, you are one step away from having no variation on which to base any findings. "I was surprised however, that median and Wilcoxon-Mann- Whitney tests did not reveal statistical significant results." I recommend that you familiarize yourself with the literature on statistical power.
null
CC BY-SA 3.0
null
2011-06-12T14:58:31.647
2011-06-12T14:58:31.647
null
null
2669
null
11850
1
null
null
10
15081
I'm really having trouble finding out how to compare ARIMA and regression models. I understand how to evaluate ARIMA models against each other, and different types of regression models (e.g., regression vs. dynamic regression with AR errors) against each other, but I cannot see many commonalities between ARIMA model and regression model evaluation metrics. The only two metrics they share are the SBC & AIC. ARIMA output produces neither a root MSE figure nor an r^2 statistic. I'm not too sure whether the standard error estimate of an ARIMA model is directly equivalent (or comparable) to anything within regression outputs. If anyone could point me in the right direction that would be great, as I'm really confused here. I feel like I'm trying to compare apples with oranges. I'm using SAS, by the way, in conducting this analysis.
Model comparison between an ARIMA model and a regression model
CC BY-SA 3.0
null
2011-06-12T17:03:17.913
2011-06-14T05:51:01.037
null
null
4989
[ "arima", "model-comparison", "dynamic-regression" ]
11851
2
null
11829
7
null
Thanks for the clarification. I agree with @Greg Snow that any transformation should make sense in the context of the problem. Why are you considering a log transform? Have you tried standardizing your predictors? You want to keep in mind how the transformation changes the assumptions in your model. I'll use $\beta = (\beta_2, \dots, \beta_p)'$ and $X = (X_2, \dots, X_p)$. Your two models are Log transform model: $E(Y|X_1,X) = \exp(\tilde\beta_1\log(X_1) + X\beta) = X_1^{\tilde\beta_1}\exp(X\beta)$ Original model: $E(Y|X_1, X) = \exp(\beta_1X_1 + X\beta)$. For convenience I've overloaded $\beta$ slightly, in that their estimates would obviously be different under each model (in general). A simple way to compare the two models is through their relative risk. Suppose we have two observations $y_i, y_j$ with the same covariate values except that $x_{i1} - 1 = x_{j1}$ ($x_{i1}$ is one unit greater than $x_{j1}$). The relative risk $RR=E(Y|X_1=x_{i1},X)/E(Y|X_1=x_{i1}-1,X)$ is then the multiplicative change in the rate caused by increasing $x_1$ by one unit. The $RR$ is given by Log transform: $RR = \left(\frac{x_{i1}}{x_{i1} - 1}\right)^{\tilde\beta_1}$ Original model: $RR = \exp(\beta_1)$ RR under the log transform varies over the range of $x_{i1}$ (unless of course $\tilde \beta_1 = 0$). Does that make sense in your problem? In the original model the effect of a unit change in $x_1$ doesn't vary with its magnitude (i.e. increasing $X_1$ one unit has the same effect on the rate whether we move from 4 to 5, or 0 to 1, or 100 to 101, etc). Does that make sense in your problem? The coefficient in the log transformed model is harder to interpret, so unless there is a good reason for the transformation I would pass. You didn't say by what criterion the results are getting better, so it's hard to know for sure that any improvement in fit is "real". But even if it is, it might just be an indication that a Poisson regression is inappropriate. 
In particular the log transform removes the implicit proportional hazards assumption in the original model. Unfortunately it does so in a very rigid way, so while the overall fit might improve that doesn't necessarily mean you have a good model. Edit: A couple of points re: your comments. Your reference gives another way to interpret the coefficients via partial derivatives. Here, to compare the two models above we would look at $\frac{dE(Y|X)}{dx_1}$. So let's do that: Log transform: $\frac{dE(Y|X_1,X)}{dx_1} = \frac{\tilde\beta_1}{x_1}\exp(\tilde\beta_1\log(x_1)+X\beta) = \tilde\beta_1x_1^{\tilde\beta_1-1}\exp(X\beta)$ Original: $\frac{dE(Y|X_1,X)}{dx_1} = \beta_1\exp(\beta_1 x_1+X\beta) = \beta_1\exp(\beta_1 x_1)\exp(X\beta)$ Again, these are different models: compare the terms $\tilde\beta_1x_1^{\tilde\beta_1-1}$ and $\beta_1\exp(\beta_1 x_1)$. You can't interpret the log transformed model in the same way as the original model. However, you could apply that interpretation to $\log(X_1)$; the question is whether or not that's meaningful/reasonable/etc (a percent/unit change on the log scale is very different, obviously). (basically @Greg Snow's original point). If the only reason for the transformation is to reduce the excess variance or improve the residuals then I would look at other aspects of the model first. In terms of decreasing the Pearson residuals: This isn't always a plus. You may be overfitting the data for one, and for another my original point applies - the log transformed predictor might be compensating for a misspecified model, perhaps in a less-than-obvious way. What are the sample mean and variance of $Y$ - are the data over/underdispersed? Have you considered another model, a negative binomial regression for example?
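To make the contrast concrete, here is a small numeric sketch (my addition; the coefficient value 0.3 is made up) of how the one-unit relative risk behaves under each model:

```python
import math

b1 = 0.3   # hypothetical coefficient, used for both models

# Original model: RR for a one-unit increase in x1 is constant everywhere
rr_original = math.exp(b1)

# Log-transform model: RR depends on where the unit step occurs
def rr_log_model(x1, beta):
    """Relative risk of moving from x1 - 1 to x1 under the log-x1 model."""
    return (x1 / (x1 - 1)) ** beta

rr_at_2 = rr_log_model(2, b1)      # step from 1 to 2
rr_at_101 = rr_log_model(101, b1)  # step from 100 to 101
```

Under the log-transform model the same unit step is a much bigger multiplicative change at small $x_1$ than at large $x_1$, while the original model gives $e^{\beta_1}$ regardless.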
null
CC BY-SA 3.0
null
2011-06-12T17:03:27.230
2011-06-12T19:58:56.200
2011-06-12T19:58:56.200
26
26
null
11852
2
null
11847
7
null
One could use a non-parametric version of ANOVA: this is called the [Kruskal-Wallis](http://en.wikipedia.org/wiki/Kruskal%E2%80%93Wallis_one-way_analysis_of_variance) test. It is based on ranking all 17 results and computing the mean ranks within each group. The mean rank of 2.0 among the controls is obviously smaller than any other mean rank (which range from 7 to 12). However, the test p-value is only 0.0937 (based on a chi-squared approximation). Within each group (including the control), the SDs are approximately one-half the means (the "trend" in the figure). ![scatterplot of SDs versus means by group](https://i.stack.imgur.com/C4Zp2.png) This suggests (and justifies) basing the analysis on the logarithms of the concentrations, for which the group SDs will be approximately stable. This provides 11 degrees of freedom for estimating variation, so having just two or three measurements per group is not a limitation. This observation (that using logarithms may stabilize the residuals) is important in its own right, because it indicates how best to make estimates, how to carry out future analyses on continuations of this experiment, and supports the perception that the standard deviation of the control measurements really is relatively small. (That otherwise is a weak conclusion because there are only two control measurements.) Regression (or equivalently, ANOVA) of the logs against the group identifiers has an overall p-value of 0.0521 (using an F-test with 5 and 11 degrees of freedom). This is suggestive but not quite low enough to be taken as "significant" by most journals. However, this is a two-sided test, whereas your hypothesis is one-sided. A crude adjustment is to halve the p-value to reflect this and report the result as "significant" with p approximately equal to 0.026. Because this crude adjustment is merely an approximation, you might drive the point home with a permutation test. 
Returning to the idea of a non-parametric analysis, we ask for the chance that the average rank of the control measurement is 2.0 or less under the null hypothesis that all 17 results were randomly associated with the 17 measurements. This is equivalent to the control having either the first and second or the first and third smallest concentrations out of the 17. The chance of the first event is $\binom{2}{2}/\binom{17}{2} = 1/136$ and the chance of the second event is the same, for a total chance of $2/136$, or 1.47%. You could even conservatively characterize the results as "all the control concentrations were among the lowest three." The chance of this is $\binom{3}{2} / \binom{17}{2}$, equal to 2.2%. In any case you have a significant result for $p \lt 0.022$.
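These counting arguments are easy to confirm (a small sketch of my own using `math.comb`):

```python
from fractions import Fraction
from math import comb

pairs = comb(17, 2)   # ways to place the 2 control ranks among the 17 ranks

# Control mean rank <= 2.0: control ranks are {1, 2} or {1, 3}
p_mean_rank_2 = Fraction(2, pairs)

# Conservative version: both controls anywhere among the three lowest values
p_lowest_three = Fraction(comb(3, 2), pairs)
```

The two probabilities come out to $2/136 \approx 1.47\%$ and $3/136 \approx 2.2\%$, matching the answer.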
null
CC BY-SA 3.0
null
2011-06-12T19:03:21.303
2011-06-12T19:03:21.303
null
null
919
null
11853
2
null
11850
6
null
If we exclude the ARIMAX models, which are ARIMA with regressors, ARIMA and regression models take different approaches. ARIMA tries to model the variable only with information about the past values of the same variable. Regression models, on the other hand, model the variable with the values of other variables. Since these approaches are different, it is natural that the models are not directly comparable. On the other hand, since both models try to model one variable, they both produce modelled values of this variable. So the question of model comparison is identical to the comparison of modelled values to true values. For more information on how to do that, the seventh chapter of [Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/) by Hastie et al. is an enlightening read. Update: Note that I do not advocate comparing only in-sample fit, just that when models are different the natural way to compare them is to compare their outputs, disregarding how they were obtained.
null
CC BY-SA 3.0
null
2011-06-12T19:52:14.573
2011-06-14T05:51:01.037
2011-06-14T05:51:01.037
2116
2116
null
11854
2
null
11850
1
null
You could use the MSE/AIC/BIC of the ARIMA model and compare it to the MSE/AIC/BIC of the regression model. Just make sure that the number of fitted values is the same, otherwise you might be making a mistake. For example, if the ARIMA model has a lag structure of, say, sp+p (a seasonal difference of order sp and an autoregressive structure of order p), you lose the first sp+p data points and only NOB-SP-P values are actually fit. If the regression model has no lags then you have NOB fitted points, or fewer depending upon your specification of the lagged values for the inputs. So one has to realize that the MSEs may not be based on the same historical actual values. One approach would be to compute the MSE of the regression model on the last NOB-SP-P values to put the models on an equal footing. You might want to Google "regression vs box-jenkins" to get some pointers on this and more. In closing, one would normally never just fit a regression model to time series data, as there may be information in the lags of the causals and the lags of the dependent variable justifying the step up from regression to a transfer function model, a.k.a. an ARMAX model. If you didn't step up, then one or more of the Gaussian assumptions would be violated, making your F/t tests meaningless and irrelevant. Furthermore, there may be violations of the constancy of the error term, requiring the incorporation of level shifts/local time trends and either pulse or seasonal pulse variables to render the error process one with a "mean of 0.0 everywhere".
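The alignment idea can be made concrete (my sketch, not SAS code): score both models only on the actuals that the ARIMA model actually fits, i.e. drop the first sp+p points from the regression comparison too.

```python
def mse(actual, fitted):
    """Mean squared error over paired values."""
    return sum((a - f) ** 2 for a, f in zip(actual, fitted)) / len(actual)

def aligned_mse(actual, reg_fitted, arima_fitted, lost):
    """Compare models on the same support: ARIMA loses its first `lost` = sp + p
    points, so score the regression fit on the last NOB - lost points as well.
    `arima_fitted` is assumed to already start at index `lost`."""
    reg = mse(actual[lost:], reg_fitted[lost:])
    arima = mse(actual[lost:], arima_fitted)
    return reg, arima
```

For example, with 4 actuals and an ARIMA model that loses the first point, both MSEs are computed over the last 3 observations only.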
null
CC BY-SA 3.0
null
2011-06-12T20:50:07.973
2011-06-12T22:09:59.310
2011-06-12T22:09:59.310
3382
3382
null
11855
1
null
null
0
805
I have 5 checklists with different perfect scores. Say I have checklists A-E with their corresponding perfect scores: A = 24, B = 17, C = 38, D = 41, E = 25. Each item in all the checklists is worth 1 point. I want to compare one item from one checklist to another item in the other checklists. How can I make the weight of each item equal across all the checklists?
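One simple convention (a sketch of one possible approach, not an established recommendation) is to rescale every item, or whole score, by its checklist's perfect score, so that all checklists live on a common 0-1 scale:

```python
perfect = {"A": 24, "B": 17, "C": 38, "D": 41, "E": 25}

def normalized_item_value(checklist):
    """Value of one checked item as a share of its checklist's perfect score."""
    return 1 / perfect[checklist]

def normalized_score(checklist, raw_score):
    """A whole checklist score rescaled to the common 0-1 range."""
    return raw_score / perfect[checklist]
```

Under this scheme an item on the short checklist B carries more of its checklist's total than an item on the long checklist D; whether that equalization is what you want depends on what the checklists measure.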
How to equalize the weight of each item in multiple checklists?
CC BY-SA 3.0
null
2011-06-13T01:19:21.967
2011-10-25T17:04:54.573
2011-06-27T11:44:04.527
-1
4992
[ "normalization" ]
11856
1
11873
null
24
43495
SPSS provides the output "confidence interval of the difference in means." I have read in some places that it means "95 times out of 100, our sample mean difference will be between these bounds." I find this unclear. Can anyone suggest clearer wording to explain "confidence interval of the difference in means"? This output appears in the context of a one-sample t-test.
How to interpret confidence interval of the difference in means in one sample T-test?
CC BY-SA 3.0
null
2011-06-13T02:47:36.607
2013-02-18T20:38:47.020
2011-06-15T04:25:29.470
4498
4498
[ "confidence-interval" ]
11857
2
null
11856
5
null
From a pedantic technical viewpoint, I personally don't think there is a "clear wording" of the interpretation of confidence intervals. I would interpret a confidence interval as: there is a 95% probability that the 95% confidence interval covers the true mean difference. An interpretation of this is that if we were to repeat the whole experiment $N$ times, under the same conditions, then we would have $N$ different confidence intervals. The confidence level is the proportion of these intervals which contain the true mean difference. My own personal quibble with the logic of such reasoning is that this explanation of confidence intervals requires us to ignore the other $N-1$ samples when calculating our confidence interval. For instance, if you had a sample size of 100, would you then go and calculate 100 "1-sample" 95% confidence intervals? But note that this is all in the philosophy. Confidence intervals are best left vague in the explanation, I think. They give good results when used properly.
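The repeated-experiment reading is easy to simulate (my own sketch; it uses the normal critical value 1.96 rather than the exact t value, a reasonable approximation at n = 50):

```python
import math
import random
import statistics

random.seed(42)
true_mean, n, sims = 10.0, 50, 10_000

covered = 0
for _ in range(sims):
    # One "repeat of the whole experiment": a fresh sample and its interval
    sample = [random.gauss(true_mean, 2.0) for _ in range(n)]
    m = statistics.mean(sample)
    half = 1.96 * statistics.stdev(sample) / math.sqrt(n)
    if m - half <= true_mean <= m + half:
        covered += 1

coverage = covered / sims   # the fraction of intervals covering the true mean
```

The empirical coverage lands near 0.95, which is exactly the "proportion of intervals containing the true value" reading described above.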
null
CC BY-SA 3.0
null
2011-06-13T04:50:18.203
2011-06-18T12:24:46.743
2011-06-18T12:24:46.743
2392
2392
null
11858
2
null
11856
-2
null
My interpretation: if you conduct the experiment N times (where N tends to infinity), then out of this large number of experiments, 95% will have confidence intervals which lie within these 95% limits. More clearly, let's say those limits are "a" and "b"; then 95 out of 100 times your sample mean difference will lie between "a" and "b". I assume you understand that different experiments can have different samples drawn from the whole population.
null
CC BY-SA 3.0
null
2011-06-13T05:16:57.427
2011-06-13T05:16:57.427
null
null
1763
null
11859
1
11866
null
108
110019
What is the difference between a multiclass problem and a multilabel problem?
What is the difference between Multiclass and Multilabel Problem
CC BY-SA 3.0
null
2011-06-13T05:35:36.353
2022-01-19T05:50:29.577
2016-08-28T13:17:12.573
12359
4290
[ "classification", "clustering", "terminology", "multi-class", "multilabel" ]
11861
2
null
11850
1
null
Cross-validation would probably be good here. To do this you split your data set into two parts: you use the first part to fit both models, and then use the fitted models to predict the second part. This can be justified as an approximation to a fully Bayesian approach to model selection. We have the likelihood of a model $M_{i}$ $$p(d_{1}d_{2}\dots d_{N}|M_{i}I)=p(d_{1}|M_{i}I)\times p(d_{2}|d_{1}M_{i}I)\times p(d_{3}|d_{1}d_{2}M_{i}I)\times\dots\times p(d_{N}|d_{1}d_{2}\dots d_{N-1}M_{i}I)$$ which can be seen heuristically as a sequence of predictions, followed by learning from mistakes. You predict the first data point with no training. Then you predict the second data point after learning about the model from the first one. Then you predict the 3rd data point after using the first two to learn about the model, and so on. Now if you have a sufficiently large data set, then the parameters of the model will become well determined beyond a certain amount of data, and we will have, for some value $k$: $$p(d_{k+2}|d_{1}\dots d_{k}d_{k+1}M_{i}I)\approx p(d_{k+2}|d_{1}\dots d_{k}M_{i}I)$$ The model can't "learn" any more about the parameters, and is basically just predicting based on the first $k$ observations. So I would choose $k$ (the size of the first group) to be large enough that you can accurately fit the model; $20$-$30$ data points per parameter is probably enough. You also want to choose $k$ large enough that the dependence in $d_{k+1}\dots d_{N}$ which is being ignored does not make the approximation useless. Then I would simply evaluate the likelihoods of each prediction and take their ratio, interpreted as a likelihood ratio. If the ratio is about $1$, then neither model is particularly better than the other. If it is far from $1$, this indicates that one of the models is outperforming the other. A ratio under 5 is weak evidence, 10 is strong, 20 very strong, and 100 decisive (with the corresponding reciprocals for ratios below 1).
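A toy numeric sketch of this split-fit-predict likelihood ratio (all numbers are made up, and the second "model" is a deliberately mis-specified normal, shifted by two standard deviations, purely for illustration):

```python
import numpy as np

def norm_logpdf(x, mu, sigma):
    """Log density of a normal distribution, written out by hand."""
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=300)
train, test = data[:150], data[150:]   # k = 150 points to pin the parameters down

mu, s = train.mean(), train.std(ddof=1)
# two candidate models: the fitted one, and a deliberately shifted one
log_lr = (norm_logpdf(test, mu, s) - norm_logpdf(test, mu + 2.0, s)).sum()
```

Here `log_lr` far exceeds log(100), i.e. "decisive" on the scale quoted above, as one would hope for so gross a mis-specification.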
null
CC BY-SA 3.0
null
2011-06-13T05:58:30.507
2011-06-13T05:58:30.507
null
null
2392
null
11865
2
null
11847
2
null
This is an interesting data set. It seems like a good idea to follow @whuber's advice and do the analysis on the log scale. However, there is more than one hypothesis here. For you could have the hypothesis $$H_{0}:\text{samples 1-5 have the same mean and variance on the log scale,}$$ $$\text{and this is different from the control mean}$$ But you could also have: $$H_{1}:\text{samples 1-3 have the same mean and variance on the log scale,}$$ $$\text{and this is different from the control mean, and samples 4-5 have}$$ $$\text{the same mean and variance but different from both control group}$$ $$\text{and samples 1-3}$$ $H_{1}$ appears to be the most plausible hypothesis to me as judged by eye. You can also have: $$H_{2}:\text{samples 1-5 have different means and variances on the log}$$ $$\text{scale, and are different from the control group}$$ Each of these hypotheses, if true, would constitute some sort of "significant" result. In any case, once you have decided that they are different, the interest then shifts to saying "well, exactly how are they different?" I think you have a less significant result because you are testing $H_{2}$, which has many parameters. For $H_0$ we have $\text{mean}\pm\text{standard dev}$ of $5.0\pm 0.62$ and $3.8\pm 0.22$, showing a clear difference; the Behrens-Fisher statistic is $$T=\frac{5.0-3.8}{\sqrt{\frac{0.62^2}{15}+\frac{0.22^2}{2}}}=5.39$$ The two-sample T statistic is about $2.64$, but the assumption of equal variance is not supported by the data, especially as the control group had by far the lowest variance, nearly a third of that of the pooled sample. More later as I have to stop for now...
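Plugging the quoted summary numbers back into the statistic (the small discrepancy from 5.39 comes from rounding in the reported means and standard deviations):

```python
import math

# Behrens-Fisher-type statistic from the summary figures above
T = (5.0 - 3.8) / math.sqrt(0.62 ** 2 / 15 + 0.22 ** 2 / 2)
```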
null
CC BY-SA 3.0
null
2011-06-13T07:39:06.533
2011-06-13T07:39:06.533
null
null
2392
null
11866
2
null
11859
89
null
I suspect the difference is that in multi-class problems the classes are mutually exclusive, whereas for multi-label problems each label represents a different classification task, but the tasks are somehow related (so there is a benefit in tackling them together rather than separately). For example, in the famous Leptograpsus crabs [dataset](http://www.stats.ox.ac.uk/pub/PRNN/) there are examples of males and females of two colour forms of crab. You could approach this as a multi-class problem with four classes (male-blue, female-blue, male-orange, female-orange) or as a multi-label problem, where one label would be male/female and the other blue/orange. Essentially in multi-label problems a pattern can belong to more than one class.
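The distinction can be sketched in terms of target encodings (a toy illustration using the crab example, not any particular library's format):

```python
import numpy as np

classes = ["male-blue", "male-orange", "female-blue", "female-orange"]

# multi-class target: one-hot rows, exactly one 1 per pattern
y_multiclass = np.array([[1, 0, 0, 0],
                         [0, 0, 0, 1]])

# multi-label target: independent binary columns (here: is_male, is_blue);
# in general a row may contain any number of 1s
y_multilabel = np.array([[1, 1],
                         [0, 0]])
```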
null
CC BY-SA 3.0
null
2011-06-13T09:50:21.037
2011-06-13T10:00:16.603
2011-06-13T10:00:16.603
887
887
null
11867
1
null
null
5
2342
How can I compare the following mutual information values? I'm wondering what's the most appropriate way to display them in my report table. I'm computing them with this formula ![http://d.pr/chkK](https://i.stack.imgur.com/g1i2L.png) where e and c are clusters and the intersection is the number of elements they have in common. For each pair e and c I have a value I (the mutual information). I then average over all e belonging to the same category (not shown in the formula) and end up with a table like:

```
cat1 0.0123
cat2 0.0012
cat3 0.0009
cat4 0.0100
...
```

The mutual information values are usually very low (around 0.01), because n (the total number of documents in the collection) is very high. Should I use another measure, or... what do you suggest? Thanks
How can I compare the following mutual information values?
CC BY-SA 3.0
null
2011-06-13T12:55:06.773
2012-09-04T10:25:02.923
2011-06-13T18:13:35.187
null
3941
[ "clustering", "mutual-information" ]
11868
1
11874
null
5
963
Assume that you have a Poisson model with overdispersion. Besides negative binomial models, what are other appropriate count-data modeling regression techniques?
What count-data models to choose besides negative binomial model when overdispersion occurs?
CC BY-SA 3.0
null
2011-06-13T13:04:46.723
2011-08-28T08:51:46.007
2011-06-13T13:15:19.347
2116
4496
[ "regression", "poisson-distribution", "count-data", "negative-binomial-distribution" ]
11869
1
11884
null
5
9350
I have a broad question about sliding window validation. Specifically, I am looking at using RapidMiner to predict future values of a financial series using "lagged" values of that series and other covariates. I have been experimenting with the windowing operator in this software and lagging the values to prepare for modeling. What I am confused about (and I suspect this is a general process, not something specific to RapidMiner, which is why I ask it here) is the sliding window training/evaluation process.

- Does anyone have sources to recommend for learning about sliding window processes for building data mining models on time series?
- Specifically, when building a model, I think I understand that k instances are used to train a model (e.g. SVM) and the performance of this model is determined by predicting the next m records. Then the window is slid forward some amount, the next k records are used for training, and the evaluation is done on the subsequent m records. This continues until the end of the data. Is my understanding correct? How is a final model built for use on future data? Is it always re-trained on the last k records, and would these last k records only be used to create the final model?
Sliding window validation for time series
CC BY-SA 3.0
null
2011-06-13T13:09:15.977
2011-06-14T17:40:26.777
2011-06-13T18:13:55.420
null
2040
[ "time-series", "data-mining", "rapidminer" ]
11871
1
11880
null
5
844
The "[Introductory Statistics with R](http://www.springer.com/statistics/computanional+statistics/book/978-0-387-79053-4)" book contains a section that deals with correlations (section 6.4 in the second edition). The book shows Pearson, Spearman and Kendall correlation coefficients computed on the `blood.glucose` and `short.velocity` columns of the [thuesen](http://www.oga-lab.net/RGM2/func.php?rd_id=ISwR%3athuesen) data set. The p-values associated with these coefficients are 0.048, 0.139 and 0.119, respectively. The book then says the following: > Notice that neither of the two nonparametric correlations is significant at the 5% level, which the Pearson correlation is, albeit only borderline significant. I have several problems with this paragraph. First of all, my naive guess would be that since the non-parametric coefficients do not imply linearity, they will tend to be "significant" more frequently than Pearson's r. Am I right? Secondly, and more importantly, is such a comparison between p-values of different tests applied to the same data legitimate? (I'm talking about real-life comparisons, not trivial textbook examples.) If it is, how should one interpret the notion that the linear correlation is "significant" while the rank or concordance correlation isn't?
Interpreting p-values associated with correlation measurements
CC BY-SA 3.0
null
2011-06-13T13:31:51.697
2011-06-13T18:36:35.977
null
null
1496
[ "correlation", "statistical-significance", "references", "mathematical-statistics" ]
11872
1
null
null
16
7712
I would be interested in finding ways in R for efficiently updating a linear model when an observation or a predictor is added. biglm has an updating capability when adding observations, but my data are small enough to reside in memory (although I do have a large number of instances to update). There are ways to do this with bare hands, e.g., to update the QR factorization (see ["Updating the QR Factorization and the Least Squares Problem", by Hammarling and Lucas from 2008](http://eprints.maths.manchester.ac.uk/1192/1/qrupdating_12nov08.pdf)), but I am hoping for an existing implementation.
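For illustration, one way the "update on a new observation" idea can be sketched in plain NumPy is a rank-1 Sherman-Morrison update of $(X'X)^{-1}$. This is less numerically stable than updating the QR factorization (which the linked paper covers), but it shows the recursion; all data here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=50)

# initial fit via the normal equations
A_inv = np.linalg.inv(X.T @ X)
b = X.T @ y
beta = A_inv @ b

# fold in one new observation with a rank-1 Sherman-Morrison update of
# (X'X)^{-1}, avoiding a full refit
x_new = rng.normal(size=3)
y_new = float(x_new @ beta_true)
Av = A_inv @ x_new
A_inv = A_inv - np.outer(Av, Av) / (1.0 + x_new @ Av)
b = b + x_new * y_new
beta_updated = A_inv @ b
```

The updated coefficients match a fresh least-squares fit on the augmented data, but the per-observation cost is only $O(p^2)$ rather than $O(np^2)$.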
Updating linear regression efficiently when adding observations and/or predictors in R
CC BY-SA 4.0
null
2011-06-13T14:01:30.360
2021-01-12T12:05:48.820
2021-01-12T12:05:48.820
28436
30
[ "r", "regression", "computational-statistics", "linear-model" ]
11873
2
null
11856
14
null
This is not an easy thing, even for respected statisticians. Look at one recent attempt by [Nate Silver](http://fivethirtyeight.blogs.nytimes.com/2010/09/29/the-uncanny-accuracy-of-polling-averages-part-i-why-you-cant-trust-your-gut/): > ... if I asked you to tell me how often your commute takes 10 minutes longer than average — something that requires some version of a confidence interval — you’d have to think about that a little bit, ... (from the FiveThirtyEight blog in the New York Times, 9/29/10.) This is not a confidence interval. Depending on how you interpret it, it's either a tolerance interval or a prediction interval. (Otherwise there's nothing the matter with Mr. Silver's excellent discussion of estimating probabilities; it's a good read.) Many other web sites (particularly those with an investment focus) similarly confuse confidence intervals with other kinds of intervals. The New York Times has made efforts to clarify the meaning of the statistical results it produces and reports on. The fine print beneath many polls includes something like this: > In theory, in 19 cases out of 20, results based on such samples of all adults will differ by no more than three percentage points in either direction from what would have been obtained by seeking to interview all American adults. (e.g., [How the Poll Was Conducted](http://www.nytimes.com/2011/05/03/business/economy/03method.html), 5/2/2011.) A little wordy, perhaps, but clear and accurate: this statement characterizes the variability of the sampling distribution of the poll results. That's getting close to the idea of confidence interval, but it is not quite there. One might consider using such wording in place of confidence intervals in many cases, however. When there is so much potential confusion on the internet, it is useful to turn to authoritative sources. One of my favorites is Freedman, Pisani, & Purves' time-honored text, Statistics. 
Now in its fourth edition, it has been used at universities for over 30 years and is notable for its clear, plain explanations and focus on classical "frequentist" methods. Let's see what it says about interpreting confidence intervals: > The confidence level of 95% says something about the sampling procedure... [at p. 384; all quotations are from the third edition (1998)]. It continues, > If the sample had come out differently, the confidence interval would have been different. ... For about 95% of all samples, the interval ... covers the population percentage, and for the other 5% it does not. [p. 384]. The text says much more about confidence intervals, but this is enough to help: its approach is to move the focus of discussion onto the sample, at once bringing rigor and clarity to the statements. We might therefore try the same thing in our own reporting. For instance, let's apply this approach to describing a confidence interval of [34%, 40%] around a reported percentage difference in a hypothetical experiment: > "This experiment used a randomly selected sample of subjects and a random selection of controls. We report a confidence interval from 34% to 40% for the difference. This quantifies the reliability of the experiment: if the selections of subjects and controls had been different, this confidence interval would change to reflect the results for the chosen subjects and controls. In 95% of such cases the confidence interval would include the true difference (between all subjects and all controls) and in the other 5% of cases it would not. Therefore it is likely--but not certain--that this confidence interval includes the true difference: that is, we believe the true difference is between 34% and 40%." (This is my text, which surely can be improved: I invite editors to work on it.) A long statement like this is somewhat unwieldy. 
In actual reports most of the context--random sampling, subjects and controls, possibility of variability--will already have been established, making half of the preceding statement unnecessary. When the report establishes that there is sampling variability and exhibits a probability model for the sample results, it is usually not difficult to explain a confidence interval (or other random interval) as clearly and rigorously as the audience needs.
null
CC BY-SA 3.0
null
2011-06-13T14:14:13.190
2011-06-13T21:19:33.360
2011-06-13T21:19:33.360
919
919
null
11874
2
null
11868
6
null
If you're willing to impose an upper bound on your counts, the beta-binomial works well. Its story is that the binomial probability for each of your count responses is drawn from a beta distribution, which is bounded between zero and one and is used to model binomial probabilities in a Bayesian context. There is also a negative beta-binomial, which is more or less what it sounds like. At the bottom of any Wikipedia page on a count distribution, there is a box with links to other distributions, including some fun count distributions. Click the 'show' button in the bluish box near the bottom of [this page](http://en.wikipedia.org/wiki/Beta_negative_binomial_distribution) to open it. But beta-binomial and negative binomial are both chosen for computational reasons as much as anything else. There's no reason Poisson processes have to have gamma-distributed means and binomials have to have beta-distributed means other than that someone figured out how to do the math for them by hand. If you're using a more sophisticated multilevel modeling package like BUGS, you can make any mixture distribution you want. You could have a count variable whose mean is drawn from a uniform distribution like a die roll, or from anything else. So you can think about your problem and see if any of the existing methods are good enough for your purposes, with the comfort of knowing you can always build your own if you need to.
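A quick simulation sketch of the beta-binomial story described above (the beta parameters are arbitrary, chosen only for illustration): drawing each trial's binomial probability from a beta produces counts that are bounded by n but overdispersed relative to a plain binomial with the same mean probability.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 100_000

p = rng.beta(2.0, 5.0, size=m)    # each unit's binomial probability ~ Beta(2, 5)
y = rng.binomial(n, p)            # beta-binomial counts, bounded by n

# overdispersion relative to a plain binomial with the same mean probability
pbar = p.mean()
print(y.var(), n * pbar * (1 - pbar))
```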
null
CC BY-SA 3.0
null
2011-06-13T14:15:34.577
2011-06-13T14:15:34.577
null
null
4862
null
11875
2
null
11872
3
null
Why don't you try the update capability of the linear model object:

```
update.lm(lm.obj, formula, data, weights, subset, na.action)
```

Take a look at these links:

- For a general explanation of the update function: [http://stat.ethz.ch/R-manual/R-devel/library/stats/html/update.html](http://stat.ethz.ch/R-manual/R-devel/library/stats/html/update.html)
- For a particular explanation of update.lm: [http://www.science.oregonstate.edu/~shenr/Rhelp/update.lm.html](http://www.science.oregonstate.edu/~shenr/Rhelp/update.lm.html)
null
CC BY-SA 3.0
null
2011-06-13T14:44:34.367
2011-06-13T14:44:34.367
null
null
2902
null
11876
1
11877
null
7
13199
I'm estimating some count data. I have counts for, say, $m=100$ individuals. Unfortunately, when using Poisson regression, overdispersion occurs. So I was thinking of fitting a negbin model. But this is not appropriate in my case. So I assume that I cannot fit a Poisson regression, because the way the Poisson distribution arises is not appropriate in my case ($n$ is not growing to infinity and $p$ is not converging to zero). So I found the beta-binomial model. But quite honestly I'm not at all familiar with estimating beta-binomial models in R. First of all: does it make sense to fit a beta-binomial model when one wants to estimate counts? Btw: if it makes sense, does anybody know a good book where the application is described?
Fitting a beta-binomial model in the case of overdispersion in R
CC BY-SA 3.0
null
2011-06-13T14:44:40.063
2011-06-14T06:06:46.793
2011-06-14T06:06:46.793
2116
4496
[ "r", "regression", "count-data", "beta-binomial-distribution", "overdispersion" ]
11877
2
null
11876
8
null
Beta binomial does sound like a good choice. Ben Bolker has a nice example of how to do it with his bbmle package [here](http://cran.r-project.org/web/packages/bbmle/vignettes/mle2.pdf). I believe his book has more, some kind of tadpole-related example. You can get preprints of the book [here](http://www.math.mcmaster.ca/~bolker/emdbook/). Hope this helps!
null
CC BY-SA 3.0
null
2011-06-13T15:57:52.590
2011-06-13T15:57:52.590
null
null
4862
null
11878
1
null
null
13
780
I'm trying to fit a hierarchical model using JAGS and the rjags package. My outcome variable is y, which is a sequence of Bernoulli trials. I have 38 human subjects who perform under two categories: P and M. Based on my analysis, every speaker has a probability of success in category P of $\theta_p$ and a probability of success in category M of $\theta_p\times\theta_m$. I'm also assuming that there is some community-level hyperparameter for P and M: $\mu_p$ and $\mu_m$. So, for every speaker: $\theta_p \sim beta(\mu_p\times\kappa_p, (1-\mu_p)\times\kappa_p)$ and $\theta_m \sim beta(\mu_m\times\kappa_m, (1-\mu_m)\times\kappa_m)$ where $\kappa_p$ and $\kappa_m$ control how peaked the distribution is around $\mu_p$ and $\mu_m$. Also $\mu_p \sim beta(A_p, B_p)$, $\mu_m \sim beta(A_m, B_m)$. Here's my JAGS model:

```
model{
  ## y = N bernoulli trials
  ## Each speaker has a theta value for each category
  for(i in 1:length(y)){
    y[i] ~ dbern( theta[ speaker[i], category[i] ])
  }

  ## Category P has theta Ptheta
  ## Category M has theta Ptheta * Mtheta
  ## No observed data for pure Mtheta
  ##
  ## Kp and Km represent how similar speakers are to each other
  ## for Ptheta and Mtheta
  for(j in 1:max(speaker)){
    theta[j,1] ~ dbeta(Pmu*Kp, (1-Pmu)*Kp)
    catM[j] ~ dbeta(Mmu*Km, (1-Mmu)*Km)
    theta[j,2] <- theta[j,1] * catM[j]
  }

  ## Priors for Pmu and Mmu
  Pmu ~ dbeta(Ap,Bp)
  Mmu ~ dbeta(Am,Bm)

  ## Priors for Kp and Km
  Kp ~ dgamma(1,1/50)
  Km ~ dgamma(1,1/50)

  ## Hyperpriors for Pmu and Mmu
  Ap ~ dgamma(1,1/50)
  Bp ~ dgamma(1,1/50)
  Am ~ dgamma(1,1/50)
  Bm ~ dgamma(1,1/50)
}
```

The issue I have is that when I run this model with 5000 iterations for adapting, then take 1000 samples, `Mmu` and `Km` have converged to single values. I've been running it with 4 chains, and the chains don't all have the same value, but within each chain there is just a single value. I'm pretty new to fitting hierarchical models using MCMC methods, so I'm wondering how bad this is.
Should I take this as a sign that this model is hopeless to fit, that something is wrong with my priors, or is this par for the course? Edit: In case it matters, the value for $\mu_m$ it converged to (averaged across chains) was 0.91 and $\kappa_m$ was 1.78
MCMC converging to a single value?
CC BY-SA 3.0
null
2011-06-13T16:41:55.423
2015-10-29T07:55:17.897
2011-06-13T17:17:36.207
287
287
[ "markov-chain-montecarlo", "multilevel-analysis", "jags" ]
11879
2
null
11018
4
null
Lucky for me, Andrew Gelman decided to discuss [this topic](http://www.stat.columbia.edu/~cook/movabletype/archives/2011/06/sampling_design.html) on his blog last week! There I found the following books recommended in the comments: [Applied Survey Data Analysis](http://rads.stackoverflow.com/amzn/click/1420080660) by Heeringa, West & Burglund [Sampling: Design and Analysis](http://rads.stackoverflow.com/amzn/click/0495105279) by Sharon Lohr [Survey Methodology](http://rads.stackoverflow.com/amzn/click/0470465468) by Groves, et. al. [Struggles with Survey Weighting and Regression Modeling](http://arxiv.org/PS_cache/arxiv/pdf/0710/0710.5005v1.pdf) by Andrew Gelman [Comments from lots of people and Rejoinder from Gelman](http://healthyalgorithms.wordpress.com/2011/05/24/journal-culture/)
null
CC BY-SA 3.0
null
2011-06-13T16:44:57.480
2011-07-15T19:58:50.013
2011-07-15T19:58:50.013
3748
3748
null
11880
2
null
11871
5
null
One explanation is that outliers, even mild ones, can affect the results of a Pearson correlation. If the outlier is a legitimate point (not a typo or other error) then it should increase the significance of the correlation (as you see), but will not change much in the other two, so it is easy for the Pearson correlation to be larger and more significant. In real data analysis, seeing this would suggest looking for outliers that are influencing the results (you should be plotting the data anyway). What to do next depends on what question you are asking and what assumptions are reasonable given the science.
null
CC BY-SA 3.0
null
2011-06-13T16:51:20.857
2011-06-13T16:51:20.857
null
null
4505
null
11881
2
null
11871
5
null
@Greg Snow is on the money about your first question. In regard to your second, comparing the two tests is misleading since two hypotheses are different even though the scientific question is (ostensibly) the same. This is a case where it's really important to be explicit about what hypothesis test you're using. To be explicit, the test using $r$ is testing something like $H_0: r=0$ vs $H_1: r \neq 0$. For Spearman's rho, you're testing $H_0: \rho=0$ vs $H_1: \rho \neq 0$. Using $r$ presumes a linear relationship, while using $\rho$ presumes a more general monotonic relationship since it's based on the observed ranks (which is also where it gets its robustness). The two hypotheses are actually quite different.
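A tiny illustration of why the two hypotheses differ: for a perfectly monotone but non-linear relationship, Spearman's rank correlation is exactly 1 while Pearson's $r$ is well below 1 (the ranks are computed by hand here rather than with any particular package):

```python
import numpy as np

x = np.arange(1.0, 11.0)
y = np.exp(x)                          # perfectly monotone, strongly non-linear

def rank(v):
    """Ranks 0..n-1 for distinct values."""
    return np.argsort(np.argsort(v))

r = np.corrcoef(x, y)[0, 1]                  # Pearson's r
rho = np.corrcoef(rank(x), rank(y))[0, 1]    # Spearman's rho = Pearson on ranks
```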
null
CC BY-SA 3.0
null
2011-06-13T18:36:35.977
2011-06-13T18:36:35.977
null
null
26
null
11882
2
null
11840
3
null
This is a fairly common problem but a very tricky one. It can be found by googling "intermittent demand", "sparse data analysis" and some other names. We deal with time series data where there are two random variables: the interval and the actual demand at each point. We have not experimented with cases like yours where the demand is ALWAYS a "1", but we see no reason that our approach wouldn't work. ![plot of demand and forecasts over time](https://i.stack.imgur.com/oKiAy.jpg) What we have found to be useful is to model the rate (in your case 1/interval) as it can be predicted from the interval. This approach allows you to detect level changes in rate and, of course, unusual rates (pulses). Try googling "analysis of time series with many zero values" and you will get some information on dealing with this tricky problem.
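As an aside, a standard textbook baseline for intermittent demand is Croston-style smoothing, which tracks the two random variables mentioned above (demand sizes and inter-demand intervals) separately. This is only a rough sketch of that idea, not the approach described in this answer:

```python
def croston(demand, alpha=0.1):
    """Very simplified Croston-style forecast for intermittent demand:
    exponentially smooth non-zero demand sizes and inter-demand intervals
    separately, and forecast their ratio."""
    z = None        # smoothed demand size
    p = None        # smoothed inter-demand interval
    q = 1           # periods since the last non-zero demand
    forecast = []
    for d in demand:
        forecast.append(z / p if z is not None else 0.0)
        if d > 0:
            z = d if z is None else z + alpha * (d - z)
            p = q if p is None else p + alpha * (q - p)
            q = 1
        else:
            q += 1
    return forecast

# a steady "3 units every 3 periods" series: the forecast settles at 1.0/period
print(croston([0, 0, 3, 0, 0, 3, 0, 0, 3]))
```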
null
CC BY-SA 3.0
null
2011-06-13T19:10:52.507
2011-06-13T19:39:35.467
2011-06-13T19:39:35.467
3382
3382
null
11883
1
22670
null
9
1345
I am analyzing data from two surveys that I merged together:

- School staff survey, for years 2005-06 and 2007-08
- School students survey, for years 2005-06 through 2008-09

For both of these data sets, I have observations (at the student or staff level) from 3 different school districts, each having representative samples per year within their distinct school district. For analysis, I combined the student data into two 2-year periods (2005-07 and 2007-09). Then I 'ddply'-ed each data set to obtain the percentages of staff or students that responded to questions according to cutoffs (e.g., whether they answered in the affirmative, "Agreed", or whether the student marked that they used alcohol, etc.). So when I merged the staff- and student-level data sets together, the school is the unit of analysis, and I only have 1 observation per school per 2-year time period (given that the school wasn't missing data for a given time period). My goal is to estimate associations between staff and student responses. So far, my plan was to obtain Pearson correlation coefficients between all the variables (as they're all continuous responses representing percentages) for each school district separately (as this eliminates the generalizability assumption for the other districts in this data set). To do this, I would average the district data over the two years anyway to get just one observation per school.

Questions:

- Is this an appropriate analysis plan? Is there some other method I may use that could provide better inference or power?
- If my plan is appropriate, should I obtain weighted correlations based on each school's enrollment (as there are more small schools than large ones, which would contribute disproportionately to the correlation coefficients)?

I have asked the data administrator about this, and he mentioned that the main factors that determine the necessity for weighting my data are whether or not I think school size affects the degree of correlation and whether my interpretation will be at the student or school level. I think my interpretation will be at the school level (e.g., "a school with this percentage of staff answering this way is correlated with this percentage of students responding this way...").
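If weighting by enrollment is the route taken, an enrollment-weighted Pearson correlation can be sketched as follows (all numbers here are hypothetical; with equal weights it reduces to the ordinary Pearson correlation):

```python
import numpy as np

def weighted_corr(x, y, w):
    """Pearson correlation with observation weights (e.g. school enrollment)."""
    w = np.asarray(w, dtype=float) / np.sum(w)
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    return cov / np.sqrt(np.sum(w * (x - mx) ** 2) * np.sum(w * (y - my) ** 2))

# toy illustration: staff and student percentages for 4 schools,
# weighted by enrollment
staff = np.array([40.0, 55.0, 60.0, 70.0])
students = np.array([10.0, 20.0, 25.0, 30.0])
enrollment = np.array([200, 800, 500, 1200])
print(weighted_corr(staff, students, enrollment))
```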
On the use of weighted correlations in aggregated survey data
CC BY-SA 3.0
null
2011-06-13T19:48:52.427
2012-02-12T08:53:40.407
2012-02-12T08:53:40.407
930
3309
[ "correlation", "survey", "multilevel-analysis" ]
11884
2
null
11869
7
null
Your understanding about sliding window analysis is generally correct. You may find it helpful to separate the model validation process from the actual forecasting. In model validation, you use $k$ instances to train a model that predicts "one step" forward. Make sure each of your $k$ instances uses only information available at that particular time. This can be subtle, because it is easy to accidentally peek ahead into the future and pollute your out-of-sample test. For example, you might accidentally use the entire time series history in feature selection, and then use those features to test the model at every step of time. This is cheating, and will give you an overestimate of accuracy. This is mentioned in [Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/), but outside the sliding window time series context. It is also easy to accidentally pollute with future information if some of your independent variables are asset returns. Say I use the return on an asset from time $t=21$ days to $t=28$ days to test at $t=21$ days. In this case, I have also polluted the out-of-sample test. Instead I would want to train with instances up to $t=21$ days, and test with one step at $t=28$ days. When you have validated your model, and are happy with the parameters and feature selection, then you typically train with all of your data and forecast into the actual future.
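The train/slide/evaluate bookkeeping described above can be sketched as an index generator (the sizes here are arbitrary; in practice $k$ and $m$ come from your validation design):

```python
def sliding_windows(n, k, m, step):
    """Yield (train_idx, test_idx) pairs: k training points followed by
    m out-of-sample test points, advancing the window by `step` each time."""
    start = 0
    while start + k + m <= n:
        yield range(start, start + k), range(start + k, start + k + m)
        start += step

# e.g. 10 observations, train on 4, test on the next 2, slide by 2
windows = list(sliding_windows(n=10, k=4, m=2, step=2))
```

Each test range lies strictly after its training range, which is the property that keeps future information out of the fit.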
null
CC BY-SA 3.0
null
2011-06-13T20:56:19.137
2011-06-14T17:40:26.777
2011-06-14T17:40:26.777
4942
4942
null
11885
1
null
null
1
2060
In the [card game Pitch](http://en.wikipedia.org/wiki/Pitch_%28card_game%29), how do I calculate the probability that my opponents, who hold 12 cards out of the 52, have the Ace, King or Queen of a given suit? I assume there is only about a 22% chance of the Ace being in their hands, but I don't know how to add the other two cards. I want my Jack to be the high card, or my partner to have it. We pay double for moons, and I'm wanting to know if I should moon with just the Jack for high, with 3 cards out there against me.
Odds for high in the card game pitch
CC BY-SA 3.0
null
2011-06-13T23:18:32.027
2019-02-07T15:33:39.777
2018-09-03T19:44:12.330
22311
5001
[ "probability", "games" ]
11886
2
null
11885
2
null
You have 6 cards (out of 52) and you want to know if another set of 12 (from the same 52) have at least one of three particular cards which you do not have. It is easier to work out the probability they do not have any of the three, which is $$\frac{43}{46}\times\frac{42}{45}\times\frac{41}{44}\times\frac{40}{43}\times\frac{39}{42}\times\frac{38}{41}\times\frac{37}{40}\times\frac{36}{39}\times\frac{35}{38}\times\frac{34}{37}\times\frac{33}{36}\times\frac{32}{35}$$ $$= \frac{34 \times 33 \times 32}{46 \times 45 \times 44} \approx 0.39$$ so subtracting this from 1 (and multiplying by 100 to get a percentage) means your opponents have a chance of about 61% of having at least one of the three particular cards. Even if they do have one or two of these cards, it is possible, though less likely, that your partner has an even higher one.
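The same number can be checked combinatorially: the opponents' 12 cards are a random subset of the 46 cards you cannot see, so the "no honour" hands number $\binom{43}{12}$ out of $\binom{46}{12}$:

```python
from math import comb

# opponents hold 12 of the 46 unseen cards; honour-free hands avoid 3 cards
p_none = comb(43, 12) / comb(46, 12)
p_at_least_one = 1 - p_none
print(round(p_none, 2), round(p_at_least_one, 2))   # 0.39 0.61
```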
null
CC BY-SA 3.0
null
2011-06-14T00:08:32.417
2011-06-14T00:18:10.447
2011-06-14T00:18:10.447
2958
2958
null
11887
1
15400
null
5
1941
I collect blood from one human (donor), separate leukocytes and put $2\times10^6$ of them into each of five wells:

- well 1: $2\times10^6$ Lk (leukocytes)
- well 2: $2\times10^6$ Lk
- well 3: $2\times10^6$ Lk
- well 4: $2\times10^6$ Lk
- well 5: $2\times10^6$ Lk

(Lk consists of a mix of different sets of cells, and one of them are lymphocytes (Lph).) Then I incubate the cells under different conditions, with the following substances in the same concentration (with the exception of well #1, of course):

- well 1: saline (serves as a zero control)
- well 2: substance A (a native compound)
- well 3: substance B (a chemically modified analogue of A)
- well 4: substance C (a chemically modified analogue' of A)
- well 5: substance D (a chemically modified analogue'' of A)

Then I incubate the leukocytes for 24 hours. Then I run FACS and get the percentages of lymphocytes expressing a receptor of interest (CD69) (Lph'), where the percentage is the ratio of Lph' to all Lph. Then I repeated the experiment 7 times: i.e. in total I collected blood from 8 different donors (so $n = 8$). With this experiment I want to know: does the structure of a molecule affect the percentage/proportion of activated lymphocytes (Lph') (i.e. Lph bearing the CD69 molecule)?
My example data (in percentages, %) are:

    DonorID cntrl substanceA substanceB substanceC substanceD
    20      5.1   12.42      10.10       9.58       8.54
    21      4.07   9.96      14.12      10.79      12.24
    22      3.94   4.92      13.04      15.96       9.37
    25      0.60   3.24       8.94       0.61       3.62
    26      1.72  13.96       2.48       3.44       3.12
    27      0.53   3.36       1.40       4.00       0.81
    28      0.97   3.88       2.33       3.77       3.31
    31      0.15   4.05       1.45       2.44       1.47
    32      0.58   1.92       2.47       2.33       4.92
    33      1.02   6.03       4.40       4.80       3.88

---

SUMMARY: the experimental design can be characterized by the following terms:

- ANOVA, as more than 2 groups are tested;
- RM, as the statistical/experimental unit ("Lph") for every case is measured under different conditions;
- one-way, as there is one fixed (within-subject) factor / independent variable ("conditions") with 5 levels;

if I'm not mistaken, in the light of linear mixed models the "DonorID" might be interpreted as an additional random factor ("donor").

---

Many thanks for the answers!
What experimental design is this?
CC BY-SA 3.0
null
2011-06-14T01:07:19.033
2016-04-06T17:46:20.330
2016-04-06T17:46:20.330
2798
5003
[ "anova", "repeated-measures", "experiment-design" ]
11888
1
11891
null
3
583
Why is coordinate ascent so successful for the lasso, when for most other problems standard Quasi-Newton approaches seem to be preferred? I have a vague geometric idea that it might have to do with the shape of the $L_1$ ball, but I haven't really been able to formalize it.
The effectiveness of coordinate ascent
CC BY-SA 3.0
null
2011-06-14T03:47:37.230
2011-06-14T06:47:25.070
2011-06-14T05:47:14.330
2116
5007
[ "optimization", "lasso" ]
11890
2
null
11888
2
null
Trevor Hastie has some ideas starting on page 19 [here.](http://www.stanford.edu/~hastie/TALKS/glmnet.pdf) A big part of the answer has to do with being able to simply ignore large portions of the data during updates, either because things are sparse or because we're only looking at one variable at a time or because solutions from a few steps back can be re-used without worrying too much that they're out of date.
null
CC BY-SA 3.0
null
2011-06-14T05:34:39.483
2011-06-14T05:34:39.483
null
null
4862
null
11891
2
null
11888
3
null
There are several things to keep in mind here. Lasso for linear regression is optimization of a quadratic function with an $\ell_1$-norm penalty term. The latter is non-smooth, and one can put the problem into the context of quadratic optimization with linear constraints. General-purpose solvers turn out to be less than optimal for this particular problem, which has special properties. The coordinate descent algorithm relies on these properties for convergence; see e.g. [this paper](http://www.springerlink.com/content/36132n1667120303/) for the concept of separability. Because this is optimization of a non-smooth function, methods that rely on smoothness, such as Quasi-Newton, are not appropriate. For the linear regression problem the coordinate descent algorithm is particularly fast because there are several tricks that save computations. However, there are other fast algorithms like [lars](http://www.stanford.edu/~hastie/Papers/LARS/) aimed directly at the regression problem. In the comparisons I have seen, the implementation of coordinate descent in the R package `glmnet` is faster than the implementation in the `lars` package, but the former is also really optimized Fortran code. For more general $\ell_1$-penalized optimization problems I have found the following to be important for speed:

- Warm starts. For a decreasing sequence of penalty parameter values ($\lambda$'s) we use the parameter from a previous $\lambda$-value as the starting guess for the next $\lambda$-value.
- The coordinate-wise update splits into two parts. First we check, with minimal computations, whether the coordinate should stay zero (this happens a lot, and requires no extra computations when it is the case). If not, we find the non-zero update of the coordinate.
- Sparseness is preserved. The algorithm does not only produce a result with many zeroes but keeps many coordinates at zero for the entire algorithm.
Whether this matters for speed depends on the problem, but often sparseness can be exploited in the other computations. Whether the non-zero update of one coordinate at a time is best for a particular problem will depend on the problem. It works very well for the linear regression problem and generalized linear models because each update is the minimization of a simple quadratic function, which is very fast.
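As an illustration of the coordinate-wise update (the soft-thresholding step), here is a minimal sketch in Python rather than the optimized Fortran of `glmnet`; the function names and toy data are mine, and none of the speed tricks above (warm starts, active sets) are included:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    r = y.copy()                      # residual y - Xb, maintained incrementally
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]       # remove coordinate j's contribution
            rho = X[:, j] @ r / n     # correlation of x_j with the partial residual
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * b[j]       # restore with the updated coefficient
    return b

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.normal(size=100)
b = lasso_cd(X, y, lam=0.1)
print(np.round(b, 2))
```

Each one-dimensional sub-problem is a quadratic plus an absolute value, whose exact minimizer is the soft-thresholding formula — this is why no smoothness-based machinery is needed.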
null
CC BY-SA 3.0
null
2011-06-14T06:47:25.070
2011-06-14T06:47:25.070
null
null
4376
null
11892
1
null
null
4
1000
I have a dataset with 4025 participants across two time points. I have scored them on a three-point categorical variable (`Unlikely, Possible, Probable`) at each time point. I would like to visualize the various patterns of change (e.g. going from `Unlikely` at T1 to `Possible` at T2, or going from `Possible` at T1 to `Unlikely` at T2). I would also like some way of representing the number of participants in each of these clusters on the graph (somehow weighting by `N` and representing this by line thickness, number of lines, etc.). The data are currently in the form:

```
id1, id2, variable_t1, variable_t2
1    500  0            1
2    501  1            0
...
```

Any suggestions for how to do this? I have tried using ggplot2 and geom_line, grouping by id, but this graph looks very messy. I am looking for something more along the lines of a [clustergram](http://www.r-statistics.com/2010/06/clustergram-visualization-and-diagnostics-for-cluster-analysis-r-code/), but am open to suggestions.

Update: I recently discovered [Parallel Sets](http://eagereyes.org/blog/2009/parallel-sets-released.html), which is very close to what I would like to achieve. The only downside to this program is that it allows very little customization of the resulting plot (e.g. rotating the plot, adding titles and axis labels, manually adjusting size, etc.), although this is possible with a bit of post-processing of the png file that the program can export. (Now, is there a way of achieving the same with R and ggplot?)

Solution: thanks to a reposting of my question by Tal ([of R-bloggers fame](http://www.r-bloggers.com/)) [here](https://stats.stackexchange.com/questions/12029/is-it-possible-to-create-parallel-sets-plot-using-r), there is now a solution for this question using R and lattice.
Plotting changes in a three-valued ordinal variable across two time points using R
CC BY-SA 3.0
null
2011-06-14T07:26:45.253
2011-06-20T08:34:35.890
2017-04-13T12:44:29.013
-1
913
[ "r", "data-visualization", "categorical-data", "ggplot2" ]
11893
2
null
10185
5
null
I ran your model with the rjags package. I have not provided any initial values since JAGS can produce them for you. You can see the error below:

```
> m <- jags.model(file = "model.txt", n.chain = 1)
Compiling model graph
Resolving undeclared variables
Deleting model
Error in jags.model(file = "model.txt", n.chain = 1) :
RUNTIME ERROR: Index out of range for node mu_dk
```

You have specified the codes[n, k] variable in the likelihood to have a Bernoulli distribution as

```
codes[n, k] ~ dbern(p[n, k])
```

which should have values of 0 or 1, but in the data you have

```
> codes
      Answer.1  Answer.2
 [1,]     0.00 0.6666667
 [2,]     0.25 0.0000000
 [3,]     1.00 0.3333333
 [4,]     0.00 0.6666667
 [5,]     0.25 1.0000000
 [6,]     0.00 0.0000000
 [7,]     0.00 0.6666667
 [8,]     0.25 0.6666667
 [9,]     0.00 1.0000000
[10,]     0.50 0.0000000
[11,]     0.75 0.3333333
[12,]     0.25 0.6666667
[13,]     0.50 1.0000000
[14,]     0.00 0.0000000
[15,]     1.00 0.3333333
[16,]     0.00 0.6666667
[17,]     0.25 0.0000000
[18,]     0.50 0.3333333
```

How can codes[n, k] have a Bernoulli distribution?
null
CC BY-SA 3.0
null
2011-06-14T07:47:41.560
2011-06-14T07:47:41.560
null
null
4618
null
11894
2
null
11892
2
null
One of your options is to use a sunflowerplot for each combination. This is available from a default installation of R. For some datasets, a sunflowerplot is not particularly clear, so I have used colour coding instead. If R is your thing, the code below should get you going with the colour coding (just copy-and-pasted from an old utility function I had lying around, so it may require some editing):

```
plotTwoCats2 <- function (x, y = NULL, type = "p", xlim = NULL, ylim = NULL,
    log = "", main = NULL, sub = NULL, xlab = NULL, ylab = NULL,
    ann = par("ann"), axes = FALSE, frame.plot = axes,
    panel.first = NULL, panel.last = NULL, asp = NA, ...)
{
  ttt <- table(data.frame(x, y))
  tt <- data.frame(ttt)
  #print(names(tt))
  tt$x <- as.factor(tt$x)
  tt$y <- as.factor(tt$y)
  usr <- par("usr"); on.exit(par(usr))
  minFreq <- min(tt$Freq)
  maxFreq <- max(tt$Freq)
  numX <- length(levels(tt$x))
  numY <- length(levels(tt$y))
  #par(usr=c(0, numX, 0, numY))
  ppin <- par("pin")
  xsize <- ppin[1] / (length(levels(x)) + 1)
  ysize <- ppin[2] / (length(levels(y)) + 1)
  plot(as.numeric(x), as.numeric(y),
       xlim = c(0.5, length(levels(x)) + 0.5),
       ylim = c(0.5, length(levels(y)) + 0.5),
       log = log, main = main, sub = sub, xlab = xlab, ylab = ylab,
       ann = ann, axes = axes, frame.plot = frame.plot,
       panel.first = panel.first, panel.last = panel.last, asp = asp, ...)
  sapply(seq(numX), function(xi) {
    sapply(seq(numY), function(yi) {
      frq <- tt$Freq[(as.numeric(tt$x) == xi) & (as.numeric(tt$y) == yi)]
      freqpct <- (frq - minFreq) / (maxFreq - minFreq)
      clr <- rgb(freqpct, 1 - freqpct, 0, 1)
      rect(-0.5 + xi, -0.5 + yi, 0.5 + xi, 0.5 + yi, col = clr)
      text(xi, yi, labels = frq)
      invisible()
    })
    invisible()
  })
  invisible()
}
```
null
CC BY-SA 3.0
null
2011-06-14T07:55:35.317
2011-06-14T07:55:35.317
null
null
4257
null
11895
1
null
null
2
271
In the extreme case where all of the components of an $M$-variate observation are pairwise independent of each other, a multivariate normal distribution can be decomposed into the product of $M$ univariate normal distributions. For example, $$p \left( X_{1},X_{2},X_{3};[\mu_{1},\mu_{2},\mu_{3}]^{T},\left[\begin{array}{ccc} \sigma_{1} & 0 & 0\\ 0 & \sigma_{2} & 0\\ 0 & 0 & \sigma_{3}\end{array}\right]\right)=\prod_{i=1}^{3}\ p(X_{i};\mu_{i},\sigma_{i}).$$ The sample complexity of a univariate distribution is less than that of a full-covariance $M$-variate normal distribution, and thus fewer observations are required to get a good approximation of this model, since only univariate distributions must be estimated. With zero-mean observations $\{x^{(1)},\cdots,x^{(N)}\}$, the covariance MLE for the case where the covariance matrix is diagonal (i.e. all observation components -- denoted with subscripts -- are pairwise independent) is simply $\hat{\Sigma}=\mathrm{diag}(\hat{\sigma}_{1},\hat{\sigma}_{2},\cdots,\hat{\sigma}_{M})$ where $\hat{\sigma}_{i}=\frac{1}{N}\sum_{j=1}^{N}\left(x_{i}^{(j)}\right)^{2}$. This pairwise independence is represented as zeros in the precision matrix $\Sigma^{-1}$, which is diagonal when $\Sigma$ is diagonal. As mentioned above, this is identical to (i.e. gives the same result as) separately estimating $M$ univariate distributions, and taking their product. My question is: in the in-between case, where there is some (not fully connected) dependency graph $\mathcal{G}$ between observation components, what does the covariance MLE look like then? 
As a specific example, if we say that $X_{1},X_{2}\perp X_{3}$, then the precision matrix must always have the form $$\Sigma^{-1} = \left[\begin{array}{ccc} \lambda_{11} & \lambda_{12} & 0\\ \lambda_{21} & \lambda_{22} & 0\\ 0 & 0 & \lambda_{33}\end{array}\right].$$ Therefore, what would the decomposition, and the covariance (or inverse covariance) MLE look like for a normal distribution such as the following? : $$ p\left(X_{1},X_{2},X_{3};[\mu_{1},\mu_{2},\mu_{3}]^{T},\left[\begin{array}{ccc} \lambda_{11} & \lambda_{12} & 0\\ \lambda_{21} & \lambda_{22} & 0\\ 0 & 0 & \lambda_{33}\end{array}\right]^{-1}\right)$$ For a given number of observations, the MLE should give the same result as the decomposed version of the pdf represented by $\mathcal{G}$.
Is there a covariance MLE which takes into account independence relationships?
CC BY-SA 3.0
null
2011-06-14T08:29:35.377
2011-06-15T15:04:15.530
2011-06-15T15:04:15.530
3691
3691
[ "normal-distribution", "maximum-likelihood", "independence" ]
11896
1
null
null
3
1114
Short question: what happens to the beta-binomial distribution when $n$ increases to infinity? Is there a limiting count distribution, as there is for the classical binomial distribution?
What happens with the beta-binomial distribution, when n approaches infinity?
CC BY-SA 3.0
null
2011-06-14T10:01:51.670
2011-06-14T13:46:37.730
2011-06-14T12:40:27.723
2116
4496
[ "distributions", "mathematical-statistics", "beta-binomial-distribution", "proof" ]
11897
1
null
null
11
255
You may have heard about the recent enterohaemorrhagic E. coli ([EHEC](http://en.wikipedia.org/wiki/EHEC)) [outbreak in Germany](http://en.wikipedia.org/wiki/2011_E._coli_O104%3aH4_outbreak). What questions would a statistician ask about EHEC analysis? I'm thinking of Q+As between reporters / public officials ↔ non-experts, say teachers and engineers with a Diplom / Master's degree but at most a smattering of statistics. (Is a picture, a map of EHEC land showing various strains of EHEC and the coverage of various tests, possible?)

Monday 20 June: I thought that the EHEC outbreak would be an area where statistics really matters in the world at large: what's the evidence for various causes, and how can these be communicated to the public? So, starting a bounty.
What questions would a statistician ask about analysis of E. coli outbreak?
CC BY-SA 3.0
null
2011-06-14T10:53:01.613
2011-06-21T18:10:47.383
2011-06-20T10:25:58.717
557
557
[ "data-visualization", "teaching" ]
11898
1
11902
null
4
1083
I have calculated the repeatability of individuals' responses to a stimulus using the methodology of [Lessells & Boag (1987) Auk 104:116](http://www.univet.hu/users/jkis/education/Kutatastervezes/Lessells_Boag_Auk_87_Unrepeatable_repeatabilities_-_a_common_mistake.pdf), where repeatability r = among-groups variance component / (among-groups variance component + within-groups variance component). How do I assign confidence intervals to my estimate of r?
Confidence intervals for repeatability
CC BY-SA 3.0
null
2011-06-14T10:59:48.373
2011-06-14T14:43:35.997
null
null
266
[ "confidence-interval", "repeated-measures", "repeatability" ]
11899
1
11911
null
9
17912
What approaches are there to perform FA on data that is clearly ordinal (or nominal for that matter) by nature? Should the data be transformed or are there readily available `R` packages that can handle this format? What if the data is of a mixed nature, containing both numerical, ordinal and nominal data? The data is from a survey where subjects have answered questions of many types: yes/no; continuous; scales. My aim is to use FA as a method for analyzing the underlying factors. I do not yet know what factors I'm looking for. However, condensing the underlying factors into a manageable number of factors is important. EDIT: Also, can I approximate a survey question answered on the Likert-type scale as a continuous variable? Thank you.
Factor analysis on mixed (continuous/ordinal/nominal) data?
CC BY-SA 3.0
null
2011-06-14T13:23:48.300
2019-05-15T16:16:56.790
2011-06-15T12:12:28.120
3401
3401
[ "r", "factor-analysis", "categorical-data", "ordinal-data" ]
11900
2
null
11896
2
null
Consider the [urn model](http://en.wikipedia.org/wiki/Beta-binomial_distribution#Beta-binomial_as_an_urn_model) for the beta-binomial: > ... imagine an urn containing α red balls and β black balls, where random draws are made. If a red ball is observed, then two red balls are returned to the urn. Likewise, if a black ball is drawn, it is replaced and another black ball is added to the urn. If this is repeated n times, then the probability of observing k red balls follows a beta-binomial distribution with parameters n,α and β. When both $\alpha$ and $\beta$ are very large, the additional balls introduced do not appreciably change the chances of red or black. This is just a binomial experiment (with fixed probabilities $\alpha / (\alpha + \beta)$ of red and $\beta/ (\alpha + \beta)$ of black).
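A quick numerical check of this limit, holding $\alpha/(\alpha+\beta) = 0.3$ fixed while scaling both parameters up (a stdlib sketch; the beta-binomial pmf is computed via log-gamma):

```python
from math import comb, lgamma, exp

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(k, n, a, b):
    # P(k red in n draws) = C(n,k) * B(k+a, n-k+b) / B(a,b)
    return comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

n, p = 10, 0.3
dists = []
for scale in (1, 100, 10000):
    a, b = scale * p, scale * (1 - p)          # keeps a / (a + b) = p fixed
    dists.append(max(abs(betabinom_pmf(k, n, a, b) - binom_pmf(k, n, p))
                     for k in range(n + 1)))
print([round(d, 5) for d in dists])            # shrinks toward 0
```

The largest pointwise difference from Binomial(n, p) collapses as $\alpha, \beta \to \infty$ with the ratio fixed, matching the urn intuition.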
null
CC BY-SA 3.0
null
2011-06-14T13:46:37.730
2011-06-14T13:46:37.730
null
null
919
null
11901
2
null
11872
7
null
If the algorithm you are looking for is indeed something like [Applied Statistics 274, 1992, Vol 41(2)](https://www.jstor.org/stable/2347583) then you could just use [biglm](http://cran.r-project.org/package=biglm) as it does not require you to keep your data in a file.
null
CC BY-SA 4.0
null
2011-06-14T13:55:57.360
2021-01-12T12:05:40.433
2021-01-12T12:05:40.433
28436
334
null
11902
2
null
11898
3
null
I would go for bootstrap to compute 95% CIs. This is what is generally done with the coefficient of heritability or intraclass correlation. (I found no other indication in Falconer's book.) There is an example in the [gap](http://cran.r-project.org/web/packages/gap/index.html) package of a handmade bootstrap (see `help(h2)`) in the case of the correlation-based heritability coefficient, $h^2$. IMO, you're better off computing the variance components yourself, and using the [boot](http://cran.r-project.org/web/packages/boot/index.html) package. Briefly, the idea is to write a small function that returns your MSs ratio and then call the `boot()` function, e.g.

```
library(boot)
repeat.boot <- function(data, x) {
  foo(data[x,])$ratio
}
res.boot <- boot(yourdata, repeat.boot, 500)
boot.ci(res.boot, type="bca")
```

where `foo(x)` is a function that takes a data.frame, computes the variance ratio, and returns it as `ratio`. Sidenote: I just checked on [http://rseek.org](http://rseek.org) and found this project, [rptR: Repeatability estimation for Gaussian and non-Gaussian data](http://rptr.r-forge.r-project.org/). I don't know if the above is not simpler.
null
CC BY-SA 3.0
null
2011-06-14T14:43:35.997
2011-06-14T14:43:35.997
null
null
930
null
11903
1
null
null
5
135
## Background

Conventional approaches to fitting a priori models to observed data seek to find those model parameters that maximize the likelihood of the data. For more complicated models, this typically necessitates an iterative search across a reasonable parameter space, computing the likelihood of the data given each candidate parameter set and selecting from amongst the evaluated candidates the set that makes the data most likely. When comparing different families of models with regards to their ability to account for the observed data, a gold-standard for the metric of comparison is cross-validated prediction performance. That is, for each model family of interest and each sub-set of the observed data, maximum likelihood estimates of that family's parameters are obtained given the data omitting that subset, and the likelihood of the omitted subset given these parameters is evaluated. Aggregating likelihood estimates across subsets yields a measure of fit that permits comparison of model families that is uncontaminated by potential differences between families in their ability to fit noise over-and-above their ability to fit phenomena of interest in the data.

## Foreground

It strikes me that this hierarchically iterated approach, searching for ML parameter estimates repeatedly for each cross-validation subset, may be computationally simplified to a one-stage approach. Specifically, I wonder if a non-self-terminating genetic algorithm like that employed by [DEoptim](http://cran.r-project.org/web/packages/DEoptim/index.html) might be supplied with an objective function that randomly samples (with replacement) the observed data before evaluating the adequacy of a given candidate parameter set of a given model. (Thus, the "search" is now for the candidate parameter set that maximizes the likelihood of data likely to be observed in the future.) 
After a large number of generations, the latest surviving generation of parameter values (or some aggregation of the N most recent generations) is chosen as the final estimates, and the adequacy of the model (for comparison to other model families) is evaluated by computing the likelihood of the observed data (no resampling) given this model.

## Questions

- Is this idea new?
- Is this idea blatantly not worthwhile? (If so, why?)
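As a toy illustration of the proposal (plain hill climbing rather than DEoptim's differential evolution; all names and constants here are hypothetical), every fitness evaluation below uses a fresh bootstrap resample of the data, and the surviving estimate ends up hovering near the ordinary one-shot MLE:

```python
import random

random.seed(1)
data = [random.gauss(5.0, 1.0) for _ in range(200)]

def loglik(mu, sample):
    # normal log-likelihood up to constants, unit variance assumed
    return -sum((x - mu) ** 2 for x in sample)

mu, step = 0.0, 1.0
for _ in range(2000):
    boot = random.choices(data, k=len(data))   # fresh resample every generation
    cand = mu + random.gauss(0.0, step)        # mutate the incumbent
    if loglik(cand, boot) > loglik(mu, boot):  # both judged on the same resample
        mu = cand
    step = max(0.02, step * 0.999)             # slowly anneal the mutation size

mle = sum(data) / len(data)                    # the plain one-shot MLE
print(round(mu, 2), round(mle, 2))
```

Note the non-convergence issue raised in the answers: because the objective is re-randomized each generation, the estimate never settles exactly, it only stabilizes within the resampling noise around the MLE.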
Estimation by future likelihood maximization
CC BY-SA 3.0
null
2011-06-14T16:05:00.390
2011-06-17T05:55:52.093
2011-06-14T21:59:16.870
null
364
[ "model-selection", "maximum-likelihood", "cross-validation" ]
11904
2
null
2272
12
null
Always fun to engage in a bit of philosophy. I quite like Keith's response, however I would say that he is taking the position of "Mr Forgetful Bayesian". The bad coverage for types B and C can only come about if (s)he applies the same probability distribution at every trial, and refuses to update his (her) prior. You can see this quite clearly, since the type A and type D jars make "definite predictions" so to speak (for 0-1 and 2-3 chips respectively), whereas the type B and C jars basically give a uniform distribution of chips. So, on repetitions of the experiment with some fixed "true jar" (or if we sampled another biscuit), a uniform distribution of chips will provide evidence for the type B or C jars.

And from the "practical" viewpoint, types B and C would require an enormous sample to distinguish between them. The KL divergences between the two distributions are $KL(B||C) \approx 0.006 \approx KL(C||B)$. This is a divergence equivalent to two normal distributions both with variance $1$ and a difference in means of $\sqrt{2\times 0.006}=0.11$. So we cannot be expected to discriminate on the basis of one sample (for the normal case, we would require a sample size of about 320 to detect this difference at the 5% significance level). So we can justifiably collapse types B and C together, until such time as we have a big enough sample.

Now what happens to those credible intervals? We actually now have 100% coverage of "B or C"! What about the frequentist intervals? The coverage is unchanged, as all intervals contained both B and C or neither, so it is still subject to the criticisms in Keith's response - 59% and 0% for 3 and 0 chips observed. But let's be pragmatic here. If you optimise something with respect to one function, it can't be expected to work well for a different function. However, both the frequentist and Bayesian intervals do achieve the desired credibility/confidence level on the average. 
We have $(0+99+99+59+99)/5=71.2$ - so the frequentist has appropriate average credibility. We also have $(98+60+66+97)/4=80.3$ - the Bayesian has appropriate average coverage.

Another point I would like to stress is that the Bayesian is not saying that "the parameter is random" by assigning a probability distribution. For the Bayesian (well, at least for me anyway) a probability distribution is a description of what is known about that parameter. The notion of "randomness" does not really exist in Bayesian theory, only the notions of "knowing" and "not knowing". The "knowns" go into the conditions, and the "unknowns" are what we calculate the probabilities for, if of interest, and marginalise over if a nuisance. So a credible interval describes what is known about a fixed parameter, averaging over what is not known about it. So if we were to take the position of the person who packed the cookie jar and knew that it was type A, their credibility interval would just be [A], regardless of the sample, and no matter how many samples were taken. And they would be 100% accurate!

A confidence interval is based on the "randomness" or variation which exists in the different possible samples. As such the only variation that they take into account is that in a sample. So the confidence interval is unchanged for the person who packed the cookie jar and knew that it was type A. So if you drew the biscuit with 1 chip out of the type A jar, the frequentist would assert with 70% confidence that the type was not A, even though they know the jar is type A! (if they maintained their ideology and ignored their common sense). To see that this is the case, note that nothing in this situation has changed the sampling distribution - we have simply taken the perspective of a different person with "non-data" based information about a parameter. Confidence intervals will change only when the data changes or the model/sampling distribution changes. 
Credibility intervals can change if other relevant information is taken into account. Note that this crazy behavior is certainly not what a proponent of confidence intervals would actually do; but it does demonstrate a weakness in the philosophy underlying the method in a particular case. Confidence intervals work their best when you don't know much about a parameter beyond the information contained in a data set. And further, credibility intervals won't be able to improve much on confidence intervals unless there is prior information which the confidence interval can't take into account, or finding the sufficient and ancillary statistics is hard.
null
CC BY-SA 3.0
null
2011-06-14T16:37:10.570
2011-06-14T16:37:10.570
null
null
2392
null
11906
1
null
null
3
216
I am tasked with analyzing data to find "triggers" for an event. Specifically, this is transaction data from a bank (e.g., checking account daily balances, daily overdraft fees, daily number of checks cleared, etc.) and the event of interest is the checking account being closed by the customer. It sounds like I really need to do feature selection (important features might be something like "having three overdrafts in 6 months"). I was thinking about Cox regression (possibly with time-varying covariates) and some variable selection (the paper [here](http://dblab.mgt.ncu.edu.tw/%E6%95%99%E6%9D%90/2008%20DM/36.pdf) was an inspiration). In this way significant variables could be considered triggers. The business will use these triggers in monitoring software: if a trigger event is detected, flag the customer as an attrition risk, and then do something to keep them from leaving.

A couple of questions:

- Does this seem like the correct approach, or are there others?
- It seems like it will be important to construct variables to determine the best triggers. For example, is the best trigger:
  a. number of overdrafts in 3 months?
  b. number of overdrafts in the last 7 days?
  c. 3 or more overdrafts in the past 2 months?
  d. (...)

Any suggestions on how to determine these triggers?
Use of survival analysis for trigger mining
CC BY-SA 3.0
null
2011-06-14T19:12:15.087
2011-09-24T01:27:58.047
2020-06-11T14:32:37.003
-1
2040
[ "data-mining", "survival", "cart" ]
11907
1
null
null
3
513
The "exclusive or" function has a long and arduous history in the AI/machine learning communities. From my understanding of "association rule learning", xor would appear to be a problem for this type of learning. That is, suppose we have the following data:

```
A B C
0 0 0
0 1 1
1 0 1
1 1 0
```

Clearly the rule I would seek from this data is that $A\oplus B = C$. However, it is my understanding that association rule learning techniques would instead discover the rules $A \Rightarrow C$ and $B \Rightarrow C$, each with 50% confidence. Is my assessment correct that this is a known issue within association rule learning, and if so, are there standard ways of handling such issues? I can imagine some workarounds, but I'm not sure they fit within the context of association rule learning.
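The 50% figure can be checked directly by computing rule confidence over the four rows; a minimal hand-rolled sketch (not an apriori implementation):

```python
rows = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # columns: A, B, C

def confidence(antecedent, consequent):
    """conf(X => Y) = support(X and Y) / support(X); indices 0=A, 1=B, 2=C."""
    has_x = [r for r in rows if all(r[i] == 1 for i in antecedent)]
    has_xy = [r for r in has_x if all(r[i] == 1 for i in consequent)]
    return len(has_xy) / len(has_x)

print(confidence([0], [2]))     # A => C : 0.5
print(confidence([1], [2]))     # B => C : 0.5
print(confidence([0, 1], [2]))  # A,B => C : 0.0 -- the XOR structure
```

Note that the joint rule A,B ⇒ C has confidence 0, which is exactly the interaction that single-antecedent rules miss.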
What is the association rule learning approach to the logical XOR problem?
CC BY-SA 3.0
null
2011-06-14T19:33:06.610
2013-05-27T09:06:46.213
2011-06-14T20:35:46.947
930
2485
[ "machine-learning", "data-mining" ]
11908
2
null
11903
2
null
The idea looks related to using the bootstrap, see Section 7.11 in [ESL](http://www-stat.stanford.edu/~tibs/ElemStatLearn/), as an alternative to cross-validation. The bootstrap also resamples subsets for training and uses the original data for estimation of the generalization error (evaluation of the model). The difference, as far as I can see, is that when using the bootstrap we train the model on the subsampled data sets, whereas the suggestion here is "merely" to evaluate a given parameter on a subsampled data set. The training happens inside the magic genetic algorithm (that I don't know anything about) by modifications of the parameters before the next evaluation.

For the bootstrap, various issues arise from the fact that subsamples share observations with the data set used for evaluation, and the average number of distinct observations in a subsample is somewhat smaller than the size of the data set. This leads to strange corrections like the .632-estimator, see (7.56) in ESL, and I don't think that the bootstrap has any edge over cross-validation.

I have no reason to rule out the idea presented, but I would be skeptical about the final evaluation on the full data set. As I understand the suggested idea, the algorithm will produce a sequence of parameter estimates that get better and better when evaluated on subsampled data. Because you constantly subsample, I don't see that the algorithm can be convergent, but it may stabilize after a number of iterations. I would imagine that the parameters would then be "around" the MLEs for the subsampled data (but I may be wrong here), and that there would be issues similar to those for the bootstrap with evaluating parameters/models on the full data set. By the way, if I remember correctly, the fitting of e.g. generalized additive models by the R package `mgcv` uses an algorithm where everything is optimized in one go.
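The "somewhat smaller" average number of distinct observations is the familiar $1-(1-1/n)^n \to 1 - e^{-1} \approx 0.632$ fraction behind the .632-estimator; a short simulation (in Python, for illustration) confirms it:

```python
import random
from math import exp

random.seed(0)
n, reps = 500, 2000
# fraction of distinct indices in each bootstrap sample of size n
fracs = [len({random.randrange(n) for _ in range(n)}) / n for _ in range(reps)]
avg = sum(fracs) / reps

theory = 1 - (1 - 1 / n) ** n   # -> 1 - e^{-1} ~ 0.632 as n grows
print(round(avg, 3), round(theory, 3), round(1 - exp(-1), 3))
```

So on average roughly 63.2% of the observations appear in a bootstrap training sample, which is what the .632 correction compensates for.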
null
CC BY-SA 3.0
null
2011-06-14T20:49:08.887
2011-06-17T05:55:52.093
2011-06-17T05:55:52.093
4376
4376
null
11909
1
null
null
5
620
It's hard to think of a more eloquent way of phrasing this question - I'm basically wondering if a classifier trained on data where examples of some of the classes are infrequent/rare would be a bad model? I'm mainly interested in decision trees (C4.5). I think the answer is no, but that you will get a high error, because you will usually classify members of the infrequent classes as instances of the more frequent classes. This has been my experience so far. I'm also wondering when it's okay to remove these examples and when it's considered bad practice (i.e. doing it just to lower the error). I'm guessing that it's okay to remove these if there's a good reason to do so, and you explain that reasoning when you report your results. I'm not really interested in building the best classifier, I'm more interested in understanding relationships between the variables and the structure of the data. But all my variables are categorical and it's non-linear data, so decision trees have so far been the best tool I've found to do this. (SVMs and ensemble methods are more accurate, but you can't really see the internal model structure, which you get with decision trees.) thanks.
Do infrequent examples screw up classifiers? If so, when is it okay to remove the infrequent examples from the data?
CC BY-SA 3.0
0
2011-06-14T22:15:46.340
2011-09-13T00:51:09.427
null
null
3984
[ "machine-learning", "classification", "cart" ]
11910
2
null
11679
1
null
After a few days, I decided it may be best to use an alternative method. What I did was sample the data such that it reflected the reported distributions in the population. I repeated this a number of times, each time randomly sampling in appropriate proportions, and took the average performance on the classifier. I continued to use the case-control design to find the features that I wanted, however in the validation step and subsequent performance reporting I used the sampling method. This seemed to me a simpler and more straight forward alternative to using a Bayes Factor.
null
CC BY-SA 3.0
null
2011-06-14T22:27:19.433
2011-06-14T22:27:19.433
null
null
4673
null
11911
2
null
11899
3
null
Particularly if you have nominal indicators along with the ordinal & continuous ones, this is probably a good candidate for latent class factor analysis. Take a look at this -- [http://web.archive.org/web/20130502181643/http://www.statisticalinnovations.com/articles/bozdogan.pdf](http://web.archive.org/web/20130502181643/http://www.statisticalinnovations.com/articles/bozdogan.pdf)
null
CC BY-SA 3.0
null
2011-06-14T22:34:41.453
2015-12-13T19:26:16.700
2015-12-13T19:26:16.700
22228
11954
null
11912
2
null
11807
1
null
Have you looked at the [caret package](http://caret.r-forge.r-project.org/Classification_and_Regression_Training.html) in R? It provides an interface that makes it easier to use a variety of models, including some for recursive partitioning such as [rpart](http://cran.r-project.org/web/packages/rpart/index.html), [ctree](http://cran.r-project.org/web/packages/party/index.html) and [ctree2](http://cran.r-project.org/web/packages/party/index.html).
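caret's main appeal is a single train/predict interface over many model types. A rough Python analogue (my substitution for illustration, not part of caret itself) is scikit-learn's shared estimator API:

```python
# Swap models behind one fit/score interface, caret-style.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, random_state=0)

results = {}
for name, model in [("tree", DecisionTreeClassifier(random_state=0)),
                    ("forest", RandomForestClassifier(random_state=0))]:
    # same fit/score calls regardless of the underlying model
    results[name] = model.fit(X, y).score(X, y)
print(results)
```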
null
CC BY-SA 3.0
null
2011-06-14T22:39:44.687
2011-06-14T22:39:44.687
null
null
3984
null
11913
2
null
11909
3
null
By 'infrequent example', I assume you mean that the class label occurs infrequently (i.e., the points to which you've assigned that class label occur with very low frequency in your data). Hiding them from your classifier in essence removes any opportunity the classifier would have had to learn to assign that class label to data points in your test set; but if you don't care about that class, then I think it makes sense to remove the data assigned to that irrelevant class.

But what if you do care about training your classifier to assign data to that class? The paradigmatic example is fraud prediction: the data points are, e.g., transactions, and the classifier is trained to assign one of two class labels to each transaction, "fraud" or "not fraud". The representation of the fraud class in the training and test data is often much less than one percent. In fact, rather than eliminating the data with the low-frequency class label, it's common to give this small population of data points much higher weights, so that the mis-classification penalty is greater for a 'false negative' (e.g., mis-classifying a fraudulent transaction as 'not fraud').

I suppose you could also have used the term "infrequent example" to refer to an outlier. Absent knowledge or reasonable belief that the value is an artifact rather than an accurate measurement, it's of course bad form to reject outliers just because they are outliers.
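The up-weighting idea can be seen in a deliberately inseparable toy example (scikit-learn's `class_weight` is used as the weighting mechanism; the weights themselves are arbitrary, not values from any real fraud system):

```python
# Six "transactions" with identical features: five legitimate, one fraudulent.
# No split can separate them, so the single leaf's (weighted) vote decides.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.zeros((6, 1))
y = np.array([0, 0, 0, 0, 0, 1])  # 1 = "fraud", the rare class

plain = DecisionTreeClassifier().fit(X, y)
weighted = DecisionTreeClassifier(class_weight={0: 1, 1: 10}).fit(X, y)

print(plain.predict([[0.0]])[0])     # 0: unweighted majority says "not fraud"
print(weighted.predict([[0.0]])[0])  # 1: up-weighted fraud class wins the leaf
```

With the weight of 10, the single fraud case outvotes the five legitimate ones (weighted counts 10 vs. 5), flipping the prediction without any change to the data.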
null
CC BY-SA 3.0
null
2011-06-14T23:29:16.463
2011-06-14T23:29:16.463
null
null
438
null