Id stringlengths 1 6 | PostTypeId stringclasses 7 values | AcceptedAnswerId stringlengths 1 6 ⌀ | ParentId stringlengths 1 6 ⌀ | Score stringlengths 1 4 | ViewCount stringlengths 1 7 ⌀ | Body stringlengths 0 38.7k | Title stringlengths 15 150 ⌀ | ContentLicense stringclasses 3 values | FavoriteCount stringclasses 3 values | CreationDate stringlengths 23 23 | LastActivityDate stringlengths 23 23 | LastEditDate stringlengths 23 23 ⌀ | LastEditorUserId stringlengths 1 6 ⌀ | OwnerUserId stringlengths 1 6 ⌀ | Tags list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10918 | 1 | 10919 | null | 4 | 371 | I'm trying to do a mixture discriminant analysis for a mid-sized data.frame, and bumped into a problem: all my predictions are NA.
After tracing through way too much code, I figured it had something to do with the fact that some of the coefficients in the mda turn out to be NA. I've created a smaller data.frame that still has the problem:
```
dfr<-structure(list(min_GCs_last_3_bases = structure(c(1L, 2L, 1L,
1L, 2L, 1L, 2L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 1L, 1L, 2L,
1L), .Label = c("zero", "one"), class = "factor"), cq = c(-0.334138707578632,
-0.586150643906373, -0.474578712720667, -0.220268433139143, -0.486876877353103,
-0.0912154554410563, 0.00341593805213764, 0.713424582672338,
-0.448914652233824, 2.94156773625266, -0.0954835698859817, -0.238125375419562,
-0.448914652233824, -0.0290299100261503, -0.479666688891261,
-0.596316272919155, -0.0919665779183363, -0.394427887135795,
-0.396749521580081, -0.338156015653477), Coverage = structure(c(2L,
2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L,
1L, 2L, 2L), .Label = c("failure", "success"), class = "factor")), .Names = c("min_GCs_last_3_bases",
"cq", "Coverage"), class = "data.frame", row.names = c("SMURF2_16_1",
"MI0000437", "SMAD6_4_1", "SMAD2_8_1", "MI0003561", "SKP1_6_1",
"MI0003556", "LTBP1_19_1", "MI0006350", "MI0006351", "TGFBR1_6_1",
"MI0000772_1", "MI0003176", "MI0005758", "MI0003639", "MI0003641",
"CUL1_5_1", "APC_3_1", "MI0005769", "KPNB1_5_1"))
```
When I use this data.frame like so (after loading the package mda):
```
mda(Coverage~.,data=dfr,subclasses=2)$fit$coef
```
I get this result:
```
[,1] [,2]
Intercept -0.90574883 -0.42382446
min_GCs_last_3_baseszero 1.80667985 0.85786905
min_GCs_last_3_basesone -0.02302343 0.04884007
cq NA NA
```
What troubles me about this is:
- Why are there coefficients for both values of the factor min_GCs_last_3_bases?
- Why are the coefficients of cq NA?
I'm willing to assume I have completely misunderstood mda. Can someone please enlighten me? Or did I just bump into a bug?
(note: I've checked whether the names of the columns have an influence: they don't)
| Problem with mixture discriminant analysis in R returning NA for predictions | CC BY-SA 3.0 | 0 | 2011-05-17T19:55:21.100 | 2011-05-18T04:52:58.743 | 2011-05-18T04:52:58.743 | 183 | 4257 | [
"r"
] |
10919 | 2 | null | 10918 | 1 | null | You only have 2 failures. Why were you thinking you could estimate more than two coefficients?
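To see this in the posted data (assuming the `dfr` from the question has been loaded):

```
table(dfr$Coverage)
##
## failure success
##       2      18
```

With only two observations in the `failure` class, there is very little information in that class, which is the point being made here.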
| null | CC BY-SA 3.0 | null | 2011-05-17T20:09:15.070 | 2011-05-17T20:09:15.070 | null | null | 2129 | null |
10920 | 1 | 10925 | null | 3 | 691 | I am looking for a statistical method to define the variance/ diversity / inequality in a set of observations.
For example:
If I have the following (n=4) observations, each using 4 data points, the diversity / variance / inequality here is zero.
```
A-B-C-D
A-B-C-D
A-B-C-D
A-B-C-D
```
In the following (n=4) observations using 7 data points, there is x amount of diversity / variance / inequality in the observations. What method can I use to derive a diversity / variance score?
```
A-B-B-D
Q-D-C-B
B-C-B-A
B-Z-F-A
```
My real data is derived from a database of 10K data points, and observations usually range from 2 to 1000s. Currently I am looking at [Gini Impurity](http://en.wikipedia.org/wiki/Gini_coefficient#Advantages_of_Gini_coefficient_as_a_measure_of_inequality) as a potential method. Do you think Gini impurity is well suited to this type of data? Do you know of any better method that can derive a score while considering the order of the data? Looking forward to your suggestions.
| Statistical method to quantify diversity / variance / inequality | CC BY-SA 3.0 | null | 2011-05-17T21:49:53.323 | 2018-10-10T10:19:13.543 | 2018-10-10T10:19:13.543 | 11887 | 529 | [
"variance",
"diversity"
] |
10921 | 2 | null | 10832 | 2 | null | You can use PhyFi web server for generating dendrograms from Newick files.
Sample output using your data from PhyFi:

| null | CC BY-SA 3.0 | null | 2011-05-17T21:54:51.667 | 2011-05-17T21:54:51.667 | null | null | 529 | null |
10922 | 2 | null | 10904 | 16 | null | This is an important question that I have given some thoughts over the years in my own teaching, and not only regarding distributions but also many other probabilistic and mathematical concepts. I don't know of any research that actually targets this question so the following is based on experience, reflection and discussions with colleagues.
First it is important to realize that what motivates students to understand a fundamentally mathematical concept, such as a distribution and its mathematical properties, may depend on a lot of things and vary from student to student. Among math students in general I find that mathematically precise statements are appreciated and too much beating around the bush can be confusing and frustrating (hey, get to the point, man). That is not to say that you shouldn't use, for example, computer simulations. On the contrary, they can be very illustrative of the mathematical concepts, and I know of many examples where computational illustrations of key mathematical concepts could help the understanding, but where the teaching is still old-fashioned math oriented. It is important, though, for math students that the precise math gets through.
However, your question suggests that you are not so much interested in math students. If the students have some kind of computational emphasis, computer simulations and algorithms are really good for quickly getting an intuition about what a distribution is and what kind of properties it can have. The students need to have good tools for programming and visualizing, and I use R. This implies that you need to teach some R (or another preferred language), but if this is part of the course anyway, that is not really a big deal. If the students are not expected to work rigorously with the math afterwards, I feel comfortable if they get most of their understanding from algorithms and simulations. I teach bioinformatics students like that.
Then for the students who are neither computationally oriented nor math students, it may be better to have a range of real and relevant data sets that illustrate how different kinds of distributions occur in their field. If you teach survival distributions to medical doctors, say, the best way to get their attention is to have a range of real survival data. To me, it is an open question whether a subsequent mathematical treatment or a simulation based treatment is best. If you haven't done any programming before, the practical problems of doing so can easily overshadow the expected gain in understanding. The students may end up learning how to write if-then-else statements but fail to relate this to the real life distributions.
As a general remark, I find that one of the really important points to investigate with simulations is how distributions transform. In particular, in relation to test statistics. It is quite a challenge to understand that this single number you computed, the $t$-test statistic, say, from your entire data set has anything to do with a distribution. Even if you understand the math quite well. As a curious side effect of having to deal with multiple testing for microarray data, it has actually become much easier to show the students how the distribution of the test statistic pops up in real life situations.
| null | CC BY-SA 3.0 | null | 2011-05-17T21:57:27.727 | 2011-05-17T21:57:27.727 | null | null | 4376 | null |
10925 | 2 | null | 10920 | 4 | null | I can't give a specific answer - mainly because the question isn't specific enough (happy to edit in due course though, given more information). If the values A, B, C, D, E, etc. have a definite ordering that is meaningful to you (such as $A>C>B>E>Z>\dots$) and you only want to describe the diversity that exists within your observations, then the Gini coefficient makes sense.
However, if the values A, B, C, etc. are just arbitrary labels (i.e. they carry no information apart from distinguishing one type from another type), then a Gini coefficient makes less sense - what is the relevant analogy to "rich" and "poor" if being "rich" can't be seen to be greater than being "poor"? For the Gini coefficient to work, you have to be able to order the observations.
A further consideration is if you wish to make inference about something which exists outside your sampled values. For example, if the first set of 4 observations (which are the same) are part of a larger group which was not observed and you want to make a statement about the larger group. Such a statement might be: the sample is not diverse, therefore the group that the sample came from is not diverse. The Gini coefficient may or may not be appropriate in this case, depending on what you know about how the larger group is related to the sample, and on how big your sample is compared to the larger group.
One way to think about the problem which may help is to consider the following scenarios. If you were told that a set of observations A is diverse, but not given any observations from A, what would you predict them to be? What if you were given the first observation only? If you were told that the set A is more diverse than another set B, and you were given the observations from set B, what would you predict the observations in set A to be? Thinking about the problem this way will help you to describe the features that your diversity measure should have (and features that it shouldn't have).
If your data are categorical, then you could use tests based on a multinomial distribution with the number of trials equal to the number of categories per observation (4 in your examples) and the number of probability parameters equal to the number of data points (4 in your first example, and 7 in your second). So, taking the second example we have:
$$x_{i}\sim Multinomial(4,\theta_{1},\theta_{2},\dots,\theta_{7})$$
And a homogeneity score can be created by calculating the probability that all of the $\theta_{j}$ are equal. However, in calculating this probability (which is not difficult) you will also end up calculating the probability of several other hypotheses (which are also useful), such as that $\theta_{1}$ is different and all other $\theta_{j}$ are equal, that 2 are different, that 3 are different, and so on up to all 7 being different. I can post how you would do this, but I want to make sure this is something that you actually want first! If you also cared about positioning (so that $A-B-C-D$ is considered different to $A-C-B-D$), then this can be incorporated by creating $28$ $\theta$ parameters, and doing hypothesis tests about them. So you would have:
$$x_{ik}\sim Multinomial(1,\theta_{1k},\theta_{2k},\dots,\theta_{7k})\;\;\;\;\;\;\;\;k=1,2,3,4$$
Admittedly you will need a reasonably sized data set in order to do this kind of test. And you would have hypotheses about the $k$ index indicating "ordering" diversity, and hypotheses about the $j$ index indicating "composition" diversity.
| null | CC BY-SA 3.0 | null | 2011-05-18T03:43:24.807 | 2011-05-18T03:43:24.807 | null | null | 2392 | null |
10926 | 1 | 10929 | null | 7 | 39467 | As question, I have found something similar [here](http://www.graphpad.com/quickcalcs/ConfInterval1.cfm), but how to do it in R?
| How to calculate confidence interval for count data in R? | CC BY-SA 3.0 | null | 2011-05-18T04:44:30.650 | 2016-04-03T11:16:15.797 | 2016-04-03T11:16:15.797 | 2910 | 588 | [
"r",
"confidence-interval",
"count-data"
] |
10927 | 2 | null | 10910 | 2 | null | Here are two posts where I describe the process of computing scale scores for multiple-item, multiple-scale tests:
- Scale construction and item reversal for multiple item scale in SPSS
- Calculating scale scores in SPSS
There are many things to consider when creating scales (e.g., should the items be equally weighted? do you have missing data? do you want a mean or sum? etc.), and there are several tricks for doing it efficiently and reliably (e.g., using loops, using syntax, automatically generating syntax from metadata), but the posts above describe this in detail.
| null | CC BY-SA 3.0 | null | 2011-05-18T05:03:38.017 | 2011-05-18T05:03:38.017 | null | null | 183 | null |
10928 | 1 | null | null | 1 | 165 | If I test two hypotheses, one of which has a null that is -in fact- false and the other a null that is -in fact- true, I want to know the probability that the first test will obtain a p value less than that of the second, given other parameters such as delta, sigma, and sample size. I am not interested in whether one, the other, both, or neither is above some threshold; I'm interested in the probability that one is larger than the other.
I can simulate the situation in R, and come up with a reasonable estimate that way, but I want to know how I can derive the answer exactly.
Using the program R:
```
> a=vector()
> b=vector()
> for (i in 1:1000) {
+ ai = rnorm(108, mean = 7, sd = 20)
+ a = c(a,t.test(ai)$p.value)
+ bi = rnorm(108)
+ b = c(b,t.test(bi)$p.value)
+ }
> q=rep(0,length=length(a))
> q[a < b]=1
> mean(q)
[1] 0.978
```
So, I find that approximately 98% of the time the truly alternative hypothesis yields a lower p value, but how can I derive this result exactly? I want to prove it mathematically.
Thanks for your help,
Nick
@Nick Sabbe
What I think you are saying is that if the p value from the false null hypothesis equals a, then the probability that the p value from the true null hypothesis is less than a is a. I get that, but what if I don't know the p value from the false null hypothesis test? What if I have two p values, and I know that exactly one of them comes from a hypothesis test in which the null is -in fact- false: what is the probability that it is the lower of the two p values? Assume all I know are the population parameters: delta, sigma, and N.
| Probability that X<=Y when X and Y are p values from two different hypotheses | CC BY-SA 3.0 | null | 2011-05-18T07:17:28.760 | 2011-05-19T20:48:57.070 | 2011-05-19T17:50:06.617 | 4647 | 4647 | [
"probability",
"hypothesis-testing"
] |
10929 | 2 | null | 10926 | 10 | null | You are looking for a confidence interval around the count from a Poisson process. If you put, for example, 42 into your linked calculator, you get:
> You observed 42 objects in a certain volume or 42 events in a certain time period.
>
> Exact Poisson confidence interval:
>
> The 90% confidence interval extends from 31.94 to 54.32
> The 95% confidence interval extends from 30.27 to 56.77
> The 99% confidence interval extends from 27.18 to 61.76
You can get this in R using `poisson.test`. For example
```
> poisson.test(42, conf.level = 0.9 )
Exact Poisson test
data: 42 time base: 1
number of events = 42, time base = 1, p-value < 2.2e-16
alternative hypothesis: true event rate is not equal to 1
90 percent confidence interval:
31.93813 54.32395
sample estimates:
event rate
42
```
and similarly the other values by changing `conf.level`. If you do not want all the background information, try something like
```
> poisson.test(42, conf.level = 0.95 )$conf.int
[1] 30.26991 56.77180
attr(,"conf.level")
[1] 0.95
```
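If you prefer to see where those numbers come from: the exact interval reported by `poisson.test` is the classical Garwood interval, which can be written with chi-squared quantiles. A quick sketch:

```
x <- 42
alpha <- 0.05
lower <- qchisq(alpha / 2, 2 * x) / 2            # exact lower bound
upper <- qchisq(1 - alpha / 2, 2 * (x + 1)) / 2  # exact upper bound
c(lower, upper)  # matches the 95% interval above
```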
| null | CC BY-SA 3.0 | null | 2011-05-18T07:25:38.310 | 2011-05-18T07:25:38.310 | null | null | 2958 | null |
10931 | 2 | null | 10537 | 4 | null | Because you are dealing with a normal-normal model, it's not too hard to work out analytically what's going on. Now, the standard argument for "diffuse" priors is usually $\frac{1}{\sigma}$ for variance parameters (the "Jeffreys" prior). But you will be able to see that if you were to use the Jeffreys prior for both parameters, you would have an improper posterior. Note, however, that the main justification for using the Jeffreys prior is that it is for a scale parameter - and you can show for your model that neither parameter sets the scale of the problem.
If we consider the marginal model, with $\theta_{i}$ integrated out. It is a well-known result that if you integrate a normal with another normal, you get a normal. So we can skip the integration, and just work out the expectation and variance. We then get:
$$E(y_{i}|\mu\sigma\sigma_{\theta})=E\left[E(y_{i}|\mu\sigma\sigma_{\theta}\theta_{i})\right]=E\left[\theta_{i}|\mu\sigma\sigma_{\theta}\right]=\mu$$
$$V(y_{i}|\mu\sigma\sigma_{\theta})=E\left[V(y_{i}|\mu\sigma\sigma_{\theta}\theta_{i})\right]+V\left[E(y_{i}|\mu\sigma\sigma_{\theta}\theta_{i})\right]=\sigma^{2}+\sigma_{\theta}^{2}$$
And hence we have the marginal model:
$$(y_{i}|\mu\sigma\sigma_{\theta})\sim N(\mu,\sigma^{2}+\sigma_{\theta}^{2})$$
And this does show an identifiability problem with this model - the data cannot distinguish between the two variances, it can only give information about their sum. You may have been able to see this intuitively. For example, we can always take $\theta_{i}=y_{i}$ for all $i$ and hence this will set $\sigma=0$. Alternatively we can set $\theta_{i}=\mu$ for all $i$ and this will set $\sigma_{\theta}=0$. Both of these scenarios will be indistinguishable by the data - in the sense that if I were to generate two data sets, one from the first case, and one from the second (but ensured that $\sigma^{2}+\sigma_{\theta}^{2}$ was the same in both cases), you would not be able to tell which data set came from which case. This suggests that it is fundamentally the sum that sets the scale, and so we should apply the Jeffreys prior to the parameter $\tau^{2}=\sigma^{2}+\sigma_{\theta}^{2}$. Now, supposing that $\tau^{2}$ were known, I would have thought a non-informative choice of prior for $\sigma^{2}$ would be uniform between $0$ and $\tau^{2}$ (for a more informative choice I would use a re-scaled beta distribution over this range). So we have the prior:
$$p(\tau^{2},\sigma^{2})\propto\frac{1}{\tau^{2}}\frac{I(0<\sigma^{2}<\tau^{2})}{\tau^{2}}$$
If we make the change of variables from $(\sigma^{2},\tau^{2})$ to $(\sigma,\sigma_{\theta})$, we then get:
$$p(\sigma_{\theta},\sigma)\propto\frac{1}{(\sigma^{2}+\sigma_{\theta}^{2})^{2}}|\frac{\partial\sigma^{2}}{\partial\sigma}\frac{\partial\tau^{2}}{\partial\sigma_{\theta}}-\frac{\partial\sigma^{2}}{\partial\sigma_{\theta}}\frac{\partial\tau^{2}}{\partial\sigma}|
=\frac{2\sigma\sigma_{\theta}}{(\sigma^{2}+\sigma_{\theta}^{2})^{2}}$$
Note that the non-identifiability is preserved in this prior because it is symmetric in its arguments. Another, less obvious, symmetry is that if you integrate out either one of the variance parameters, you are left with the Jeffreys prior for the other one:
$$\int_{0}^{\infty}\frac{2\sigma\sigma_{\theta}}{(\sigma^{2}+\sigma_{\theta}^{2})^{2}}d\sigma=\frac{1}{\sigma_{\theta}}$$
Hence, all you are required to input is the prior range for one of the parameters, as this will stop you from getting into trouble with improper priors. Call this $0<L_{\sigma}<\sigma<U_{\sigma}<\infty$. It is then easy to sample from the joint density using the inverse CDF method, for we have:
$$F_{\sigma}(x)=\frac{\log\left(\frac{x}{L_{\sigma}}\right)}{\log\left(\frac{U_{\sigma}}{L_{\sigma}}\right)}\implies F^{-1}_{\sigma}(p)=\frac{U_{\sigma}^{p}}{L_{\sigma}^{p-1}}$$
$$F_{\sigma_{\theta}|\sigma}(y|x)=1-\frac{x^{2}}{y^{2}+x^{2}}\implies F^{-1}_{\sigma_{\theta}|\sigma}(p|x)=x\sqrt{\frac{p}{1-p}}$$
So you sample two independent uniform random variables $q_{1b},q_{2b}$, and then your random value of $\sigma^{(b)}=U_{\sigma}^{q_{1b}}L_{\sigma}^{1-q_{1b}}$ and your random value of $\sigma^{(b)}_{\theta}=U_{\sigma}^{q_{1b}}L_{\sigma}^{1-q_{1b}}\sqrt{\frac{q_{2b}}{1-q_{2b}}}$. Combine this with the usual flat prior for $-\infty<L_{\mu}<\mu<U_{\mu}<\infty$ generated by a third random uniform variable $\mu^{(b)}=L_{\mu}+q_{3b}(U_{\mu}-L_{\mu})$ and you have all the ingredients to do monte carlo posterior simulation - note that this is much better than "Gibbs sampling" because each simulation is independent, so no need to wait for convergence (and also less need for a large number of simulations) - and you are dealing with proper priors - so divergence is impossible (however some moments may or may not exist, but all quantiles exist).
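As an illustration, the whole sampler described above fits in a few lines (the prior bounds below are placeholders you would set for your own problem):

```
B <- 10000
L.sigma <- 0.01; U.sigma <- 100   # assumed prior range for sigma
L.mu <- -10;    U.mu    <- 10     # assumed prior range for mu

q1 <- runif(B); q2 <- runif(B); q3 <- runif(B)
sigma       <- U.sigma^q1 * L.sigma^(1 - q1)   # inverse CDF of F_sigma
sigma.theta <- sigma * sqrt(q2 / (1 - q2))     # inverse CDF of F_{sigma_theta | sigma}
mu          <- L.mu + q3 * (U.mu - L.mu)       # flat prior for mu
```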
| null | CC BY-SA 3.0 | null | 2011-05-18T07:37:57.803 | 2011-05-18T07:37:57.803 | null | null | 2392 | null |
10932 | 2 | null | 10928 | 4 | null | The p-values for the test of the true null hypothesis should be uniformly distributed (see amongst others [q10613](https://stats.stackexchange.com/questions/10613/why-p-values-are-uniformly-distributed)). If your two tests are independent (which they seem to be from your example), then given that the p-value for the false null hypothesis equals $a$, the chance that the true null's p-value falls below $a$ is simply $a$.
So, if you know the distribution of a, you may be able to integrate this out to find an analytical solution. But this depends upon your alternative, and upon which test you are using (for the false null hypothesis).
Extending the comment by @Henry and abusing notation somewhat:
$P(a<b) = \int P(a<b \mid a)\, f(a)\, da = \int (1-a)\, f(a)\, da = E(1-a)$
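As a sketch of how this plays out for the question's concrete setup (two-sided one-sample $t$-test with $n = 108$, $\delta = 7$, $\sigma = 20$; under the false null the $t$ statistic follows a noncentral $t$ distribution), the expectation $E(1-a)$ can be evaluated numerically:

```
n <- 108; delta <- 7; sigma <- 20
df <- n - 1
ncp <- delta * sqrt(n) / sigma   # noncentrality parameter of the t statistic

# E(a): integrate the two-sided p-value against the noncentral t density
f <- function(t) 2 * pt(-abs(t), df) * dt(t, df, ncp = ncp)
Ea <- integrate(f, -Inf, Inf)$value
1 - Ea   # P(a < b) = E(1 - a); should land near the simulated 0.978
```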
| null | CC BY-SA 3.0 | null | 2011-05-18T07:45:35.023 | 2011-05-18T08:21:47.847 | 2017-04-13T12:44:39.283 | -1 | 4257 | null |
10933 | 2 | null | 1252 | 8 | null | Here is a collection page:
[http://sasdataguru.blogspot.com/2011/05/online-statistics-cheat-sheet.html](http://sasdataguru.blogspot.com/2011/05/online-statistics-cheat-sheet.html)
| null | CC BY-SA 3.0 | null | 2011-05-18T08:10:16.340 | 2011-05-18T08:10:16.340 | null | null | 4648 | null |
10934 | 2 | null | 10418 | 10 | null | Exchangeability is not an essential feature of a hierarchical model (at least not at the observational level). It is basically a Bayesian analogue of "independent and identically distributed" from the standard literature. It is simply a way of describing what you know about the situation at hand - namely, that "shuffling" does not alter your problem. One way I like to think of this is to consider the case where you were given $x_{j}=5$ but you were not told the value of $j$. If learning that $x_{j}=5$ would lead you to suspect particular values of $j$ more than others, then the sequence is not exchangeable. If it tells you nothing about $j$, then the sequence is exchangeable. Note that exchangeability is "in the information" rather than "in reality" - it depends on what you know.
While exchangeability is not essential in terms of the observed variables, it would probably be quite difficult to fit any model without some notion of exchangeability, because without exchangeability you basically have no justification for pooling observations together. So my guess is that your inferences will be much weaker if you don't have exchangeability somewhere in the model. For example, consider $x_{i}\sim N(\mu_{i},\sigma_{i})$ for $i=1,\dots,N$. If $x_{i}$ are completely exchangeable then this means $\mu_{i}=\mu$ and $\sigma_{i}=\sigma$. If $x_{i}$ are conditionally exchangeable given $\mu_{i}$ then this means $\sigma_{i}=\sigma$. If $x_{i}$ are conditionally exchangeable given $\sigma_{i}$ then this means $\mu_{i}=\mu$. But note that in either of these two "conditionally exchangeable" cases, the quality of inference is reduced compared to the first, because there are an extra $N$ parameters that get introduced into the problem. If we have no exchangeability, then we basically have $N$ unrelated problems.
Basically, exchangeability means we can make the inference $x_{i}\to \text{parameters}\to x_{j}$ for any $i$ and $j$ that are partly exchangeable.
| null | CC BY-SA 3.0 | null | 2011-05-18T09:16:25.957 | 2011-05-18T09:16:25.957 | null | null | 2392 | null |
10942 | 2 | null | 173 | 2 | null | Escape from traditional enumerative statistics as Deming would suggest and venture into traditional analytical statistics - in this case, control charts. See any books by Donald Wheeler PhD, particularly his "Advanced Topics in SPC" for more info.
| null | CC BY-SA 3.0 | null | 2011-05-18T14:52:56.227 | 2011-05-18T14:52:56.227 | null | null | 4652 | null |
10943 | 1 | null | null | 33 | 2313 | I have a mixed effect model (in fact a generalized additive mixed model) that gives me predictions for a time series. To counter the autocorrelation, I use a corCAR1 model, given that I have missing data. The data is supposed to give me a total load, so I need to sum over the whole prediction interval. But I should also get an estimate of the standard error on that total load.
If all predictions would be independent, this could be easily solved by :
$Var(\sum^{n}_{i=1}E[X_i]) = \sum^{n}_{i=1}Var(E[X_i])$
with $Var(E[X_i]) = SE(E[X_i])^2$
Problem is, the predicted values are coming from a model, and the original data has autocorrelation. The whole problem leads to following questions :
- Am I correct in assuming that the SE on the calculated predictions can be interpreted as the root of the variance on the expected value of that prediction? I tend to interpret the predictions as "mean predictions", and hence sum a whole set of means.
- How do I incorporate the autocorrelation in this problem, or can I safely assume that it wouldn't influence the results too much?
This is an example in R. My real dataset has about 34,000 measurements, so scalability is a problem. That's the reason why I model the autocorrelation within each month; otherwise the calculations aren't possible any more. It's not the most correct solution, but the most correct one isn't feasible.
```
set.seed(12)
require(mgcv)
Data <- data.frame(
dates = seq(as.Date("2011-1-1"),as.Date("2011-12-31"),by="day")
)
Data <- within(Data,{
X <- abs(rnorm(nrow(Data),3))
Y <- 2*X + X^2 + scale(Data$dates)^2
month <- as.POSIXlt(dates)$mon+1
mday <- as.POSIXlt(dates)$mday
})
model <- gamm(Y~s(X)+s(as.numeric(dates)),correlation=corCAR1(form=~mday|month),data=Data)
preds <- predict(model$gam,se=T)
Total <- sum(preds$fit)
```
Edit:
Lesson to learn: first go through all the examples in all the help files before panicking. In the help file of predict.gam, I can find:
```
#########################################################
## now get variance of sum of predictions using lpmatrix
#########################################################
Xp <- predict(b,newd,type="lpmatrix")
## Xp %*% coef(b) yields vector of predictions
a <- rep(1,31)
Xs <- t(a) %*% Xp ## Xs %*% coef(b) gives sum of predictions
var.sum <- Xs %*% b$Vp %*% t(Xs)
```
Which seems to be close to what I want to do. This still doesn't tell me exactly how it is done. I could get as far as the fact that it's based on the linear predictor matrix. Any insights are still welcome.
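For what it's worth, the algebra behind that help-file snippet appears to be the standard result for linear functions of the coefficient estimates: the predictions are $X_{p}\hat{\beta}$, so with $\mathbf{a}$ a vector of ones their sum is $\mathbf{a}^{T}X_{p}\hat{\beta}$, and therefore

$$Var\left(\sum^{n}_{i=1}\hat{y}_{i}\right)=\mathbf{a}^{T}X_{p}V_{p}X_{p}^{T}\mathbf{a}$$

where $V_{p}$ is the covariance matrix of the estimated coefficients (`b$Vp`). Because the full matrix $V_{p}$ enters the calculation, the covariances between individual predictions are included automatically, rather than being assumed zero as in the independent-predictions formula above.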
| Variance on the sum of predicted values from a mixed effect model on a timeseries | CC BY-SA 3.0 | null | 2011-05-18T14:59:03.320 | 2014-08-20T22:08:25.843 | 2011-05-19T14:12:19.330 | 1124 | 1124 | [
"mixed-model",
"variance",
"random-variable"
] |
10944 | 2 | null | 10907 | 1 | null | The basic idea that you want is either the confidence interval on a predicted mean, or the prediction interval on an individual point. Both formulas are found in any standard regression textbook and probably many places on the web.
Though deriving the correct pieces that you need for those formulas is probably a lot more work than it is worth. Gnuplot is a fine plotting program, but it is not a full statistics package. A statistics package will give you the predictions fairly straightforwardly. The [R statistical package](http://www.r-project.org/) is the same general price as gnuplot. In R you can fit your regression using the `lm` (linear model) function, then use the `predict` function to get either confidence or prediction intervals. You could generate the intervals for a whole sequence of values, then either plot them directly in R, or transfer the predictions back to gnuplot and add them to your plot there.
| null | CC BY-SA 3.0 | null | 2011-05-18T15:34:41.480 | 2011-05-18T15:34:41.480 | null | null | 4505 | null |
10945 | 1 | null | null | 8 | 195 | If an exploratory factor analysis is done with some 1-5 agreement items and some 0/1 "choose all that apply" items, theoretically how much of a spurious tendency would there be for the 1-5 items to load on one or two factors and the 0/1 items to load on a separate set of one or two factors? (I've heard arguments both for and against the idea that correlations tend to be much higher among items of a similar scale. My own experimentation/simulation has not found much of an effect.)
| Should factor loadings be dominated by items' ranges of answer options? | CC BY-SA 3.0 | null | 2011-05-18T16:20:16.897 | 2011-12-05T18:09:43.737 | 2011-07-25T06:33:26.847 | 183 | 2669 | [
"correlation",
"factor-analysis"
] |
10946 | 2 | null | 10897 | 2 | null | Let the rod have length $L$ and fix a segment of length $x$. The chance that any single breakpoint misses the segment equals the proportion of the rod not occupied by the segment, $1−x/L$. Because the breakpoints are independent, the chance that all of them miss it is the product of $n$ such chances, $(1 - x/L)^n$.
From comments following the question, it appears that $x$ is intended to be small compared to the rod's length: $x/L \ll 1$. Let $\xi = L/x$ (assumed to be large) and rewrite $n = \xi(n/\xi)$, leading (purely via substitutions) to
$$\Pr(\text{all miss}) = (1 - x/L)^n = (1 - 1/\xi)^{\xi(n/\xi)} = \left((1-1/\xi)^\xi\right)^{n/\xi}\text{.}$$
Asymptotically $\xi \to \infty$. If we assume that $n$ varies in a way that makes $n/\xi$ converge to a constant, this probability approaches a computable limit. Let this constant be some value $\lambda$ times $x$. It is the limiting value of $n/\xi/x = n/L$: notice how the length of the rod is involved here and effectively is incorporated in $\lambda$. Because $\exp(-1) = 1/e$ is the limiting value of $(1-1/\xi)^\xi$ and raising to (positive) powers is a continuous function, it follows readily that the limit is
$$\Pr(\text{all miss}) \to e^{-\lambda x}.$$
One application is when $n$ is a constant, entailing $\lambda = n/L$, and $x \ll L$. We obtain $$e^{-nx/L}$$ as a good approximation for the probability that all breaks miss the segment. This analysis shows that the approximation fails as $x$ grows large: the approximation is only as good as the approximation $1/e \sim (1-1/\xi)^\xi$. Finally, if you set $x = L$, the approximation is clearly wrong because it gives $e^{-n}$ instead of the correct answer, $0$.
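A quick numerical illustration of how good the approximation is when $x \ll L$ (the values below are arbitrary):

```
n <- 10; L <- 1; x <- 0.01
(1 - x/L)^n      # exact: probability that all n breaks miss the segment
exp(-n * x / L)  # approximation e^{-nx/L}; nearly identical while x << L
```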
| null | CC BY-SA 3.0 | null | 2011-05-18T16:41:17.197 | 2011-05-18T16:41:17.197 | null | null | 919 | null |
10947 | 1 | 10948 | null | 17 | 9356 | I am trying to figure out how R computes lag-k autocorrelation (apparently, it is the same formula used by Minitab and SAS), so that I can compare it to using Excel's CORREL function applied to the series and its k-lagged version. R and Excel (using CORREL) give slightly different autocorrelation values.
I'd also be interested to find out whether one computation is more correct than the other.
| Formula for autocorrelation in R vs. Excel | CC BY-SA 3.0 | null | 2011-05-18T17:00:10.830 | 2011-05-18T18:42:20.133 | null | null | 1945 | [
"r",
"sas",
"autocorrelation",
"excel"
] |
10948 | 2 | null | 10947 | 20 | null | The exact equation is given in: Venables, W. N. and Ripley, B. D. (2002) Modern Applied Statistics with S. Fourth Edition. Springer-Verlag. I'll give you an example:
```
### simulate some data with AR(1) where rho = .75
xi <- 1:50
yi <- arima.sim(model=list(ar=.75), n=50)
### get residuals
res <- resid(lm(yi ~ xi))
### acf for lags 1 and 2
cor(res[1:49], res[2:50]) ### not quite how this is calculated by R
cor(res[1:48], res[3:50]) ### not quite how this is calculated by R
### how R calculates these
acf(res, lag.max=2, plot=F)
### how this is calculated by R
### note: mean(res) = 0 for this example, so technically not needed here
c0 <- 1/50 * sum( (res[1:50] - mean(res)) * (res[1:50] - mean(res)) )
c1 <- 1/50 * sum( (res[1:49] - mean(res)) * (res[2:50] - mean(res)) )
c2 <- 1/50 * sum( (res[1:48] - mean(res)) * (res[3:50] - mean(res)) )
c1/c0
c2/c0
```
And so on (e.g., `res[1:47]` and `res[4:50]` for lag 3).
| null | CC BY-SA 3.0 | null | 2011-05-18T17:23:01.437 | 2011-05-18T17:23:01.437 | null | null | 1934 | null |
10949 | 1 | 11130 | null | 5 | 2507 | I want to estimate the impact of having health insurance on health care expenditures and have a hunch that it is best to use a two-part model to first estimate the probability of using any health care and then estimating the amount spent by those who used health care. However, I am not clear on the advantage of using a two-part model over just running ordinary least squares regression on the subsample of people who used any health care. Any insight into the advantages of the 2-part model over OLS on the subsample of users, and the differences in these two approaches would be much appreciated!
| What is the added value of using a 2-part model over an OLS model on a subsample when estimating health care expenditures? | CC BY-SA 3.0 | null | 2011-05-18T18:36:14.577 | 2011-05-22T19:19:30.557 | null | null | 834 | [
"regression",
"probability",
"logistic"
] |
10950 | 2 | null | 10947 | 11 | null | The naive way to calculate the autocorrelation (and possibly what Excel uses) is to create 2 copies of the vector, then remove the first n elements from the first copy and the last n elements from the second copy (where n is the lag you are computing). Then pass those 2 vectors to the function that calculates the correlation. This method is OK and will give a reasonable answer, but it ignores the fact that the 2 vectors being compared are really measures of the same thing.
The improved version (as shown by Wolfgang) is a similar function to the regular correlation, except that it uses the entire vector for computing the mean and variance.
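For concreteness, the two computations can be sketched as small R functions (the names are my own; `x` is a numeric series and `k` the lag):

```
### naive version: ordinary correlation of the two shifted copies
acf_naive <- function(x, k) cor(x[1:(length(x) - k)], x[(k + 1):length(x)])
### R's version: both copies centered by the overall mean, and the
### denominator uses the full series (the 1/n factors cancel in c_k/c_0)
acf_r <- function(x, k) {
  n <- length(x)
  m <- mean(x)
  sum((x[1:(n - k)] - m) * (x[(k + 1):n] - m)) / sum((x - m)^2)
}
```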
| null | CC BY-SA 3.0 | null | 2011-05-18T18:42:20.133 | 2011-05-18T18:42:20.133 | null | null | 4505 | null |
10951 | 1 | null | null | 26 | 14067 | I have a question concerning random variables. Let us assume that we have two random variables $X$ and $Y$. Let's say $X$ is Poisson distributed with parameter $\lambda_1$, and $Y$ is Poisson distributed with parameter $\lambda_2$.
When you form the ratio $Z = X/Y$, how is this random variable distributed, and what is its mean? Is it $\lambda_1/\lambda_2$?
| What is the distribution of the ratio of two Poisson random variables? | CC BY-SA 3.0 | null | 2011-05-18T19:36:32.787 | 2018-03-06T21:53:38.330 | 2011-05-18T20:06:01.823 | 930 | 4496 | [
"random-variable",
"poisson-distribution"
] |
10952 | 2 | null | 10951 | 13 | null | I think you're going to have a problem with that. Because $Y$ takes the value zero with positive probability, $X/Y$ will have some undefined values, so you won't get a proper distribution.
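A quick R illustration of the point, with an assumed $\lambda_2 = 2$:

```
set.seed(1)
y <- rpois(1e5, lambda = 2)
### the fraction of draws with Y = 0 is close to exp(-2), about 0.135,
### so X/Y is undefined with positive probability
mean(y == 0)
```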
| null | CC BY-SA 3.0 | null | 2011-05-18T19:47:22.720 | 2011-05-18T19:47:22.720 | null | null | 2775 | null |
10953 | 1 | 10971 | null | 2 | 1476 | I have a data set consisting of interactions between male-female dyads within a group in two conditions. The values range from -1 to 1 and indicate how responsible the female was for maintaining the relationship (Hinde's index). What I want to see is if the female responsibility changes in the two conditions. My understanding is that the Wilcoxon test is inappropriate because I have both paired and repeated measurements: each individual is part of several dyads.
Female1-Male1
F1-M2
F1-M3
F2-M1....etc
My other option was to run a permutation test, and I've been searching for one in R (I'm not conversant enough with R to create one myself), but so far the only ones I've found want my data to be integers, which is most definitely not the case.
So I suppose my two questions are
- Am I even on the right track here or should I be looking for a different test?
- Can anybody suggest a permutation test that can deal with decimal places?
| Paired permutation test for repeated measures | CC BY-SA 4.0 | null | 2011-05-18T20:18:52.783 | 2020-07-23T16:14:08.727 | 2020-07-23T16:14:08.727 | 11887 | 4655 | [
"repeated-measures",
"permutation-test",
"dyadic-data"
] |
10954 | 2 | null | 10949 | 1 | null | Sarah,
I was a little puzzled thinking about whether there is selection bias here. It seems that health insurance is bundled with other job benefits, so there is no self-selection. However, those people who do not have health insurance are very different from those who do.
If there is selection bias, OLS is biased and inconsistent. Selection bias can be interpreted as an omitted-variables problem, which can be corrected by including the inverse Mills ratio in your OLS.
Heckman's procedure gets unbiased OLS parameter estimates. Testing their significance requires additional work, since the estimates of the covariance matrix of the parameter estimates are inconsistent - use bootstrapping to derive the appropriate asymptotic standard errors and test statistics.
See [http://en.wikipedia.org/wiki/Heckman_correction](http://en.wikipedia.org/wiki/Heckman_correction).
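As a hedged sketch, one way to run this in R is via the `sampleSelection` package (not mentioned above; all variable names here are invented for illustration):

```
library(sampleSelection)
### selection equation: who uses any health care at all;
### outcome equation: expenditures among users
fit <- heckit(selection = used_care ~ age + income + insured,
              outcome   = expenditure ~ age + income,
              data = dat)
summary(fit)
```

Note the exclusion restriction: the selection equation should contain at least one variable that is excluded from the outcome equation.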
M
| null | CC BY-SA 3.0 | null | 2011-05-18T20:47:46.283 | 2011-05-18T20:47:46.283 | null | null | 4617 | null |
10955 | 2 | null | 10953 | 2 | null | Check out the `ezPerm` function from the [ez package](http://cran.r-project.org/web/packages/ez/index.html) for R. For example, assuming your data is in "long format" (see the [reshape2](http://cran.r-project.org/web/packages/reshape2/index.html) package):
```
ezPerm(
data = my_data
, dv = .(the_dv)
, wid = .(dyad_num)
, within = .(sex)
, perms = 1e3
)
```
| null | CC BY-SA 3.0 | null | 2011-05-18T22:41:35.020 | 2011-05-18T22:41:35.020 | null | null | 364 | null |
10957 | 2 | null | 10949 | 1 | null | I think you are assuming something like this. People use health care if they are sick. If they use health care, then they spend more money on it. But sick people will see more value in insurance and will be more likely to be insured. So you will find that insured people spend more money on health care, but this may be due to selection bias.
And you seem to think that, if you use only the subsample of people who used health care, then you don't have this problem. However, consider this example. Assume you have a woman who wants to become pregnant and a man who doesn't want to have a child, both in their 30s. The woman then gets insured, but the man does not. Both get sick and use health care, so they are both in your subsample. However, they had different probabilities of being insured, and they will spend different amounts of money if she becomes pregnant. Thus selection bias will occur anyway.
So what I'm saying is that restricting your subsample will not solve the selection bias problem. You still have to account for the different probabilities of people getting insurance.
| null | CC BY-SA 3.0 | null | 2011-05-19T04:26:13.363 | 2011-05-19T04:26:13.363 | null | null | 3058 | null |
10958 | 1 | 11002 | null | 5 | 381 | I am trying to increase my model accuracy by taking into account interaction effect of relevant variables.
I am choosing variables to interact based more on the common sense than on trying every combination.
So far, most interaction effects with a good p-value ($p<.0001$) and chi-square have resulted in an increase in true positives.
However, the last interaction effect, while having good p-values and chi-square values, reduced the number of true positives by 10.
- Should this happen?
- How do I interpret this?
- Shouldn't a variable which is significant always increase my true positives?
| Why does adding statistically significant interaction reduce true positives? | CC BY-SA 3.0 | null | 2011-05-19T06:01:21.953 | 2011-05-19T21:55:58.340 | 2011-05-19T06:58:31.413 | 183 | 1763 | [
"interaction"
] |
10959 | 2 | null | 10958 | 6 | null | No, it should not. E.g., for logistic regression, which appears to be the case here, it may be that the better p-value (I'm assuming you mean the right kind of p-value here, like one from a likelihood ratio test) comes from increasing the odds for (correctly predicted) observations that are already relatively extreme (e.g. all observations that had predicted probability 0.8 become 0.9 and all those that had 0.2 become 0.1), while the ones that were only just on the right side of the 50% threshold are now just on the other side. As a result, the extremes are now predicted with more confidence, but there are more misclassifications.
In general, good fit does not guarantee good prediction (or the other way around) - even though that's the way most 'scientific' publications work these days :-(
I would advise you to look into a more advanced technique like the LASSO or the elastic net for variable selection... It will also easily allow you to optimize for a predictive measure like misclassification. In R, try `glmnet`.
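A minimal sketch of that suggestion, assuming `X` is a numeric predictor matrix (including any interaction columns) and `y` a 0/1 outcome:

```
library(glmnet)
### LASSO-penalized logistic regression, with lambda tuned by
### cross-validated misclassification error
cvfit <- cv.glmnet(X, y, family = "binomial", type.measure = "class")
pred  <- predict(cvfit, newx = X, s = "lambda.min", type = "class")
```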
| null | CC BY-SA 3.0 | null | 2011-05-19T06:51:10.817 | 2011-05-19T06:51:10.817 | null | null | 4257 | null |
10960 | 2 | null | 10953 | 1 | null | Since you say each pair was observed in both conditions, you can use the basic trick of the paired t-test: subtract the values for the same pair in both conditions and then test for this to be zero (or if that's more relevant for this kind of measure: divide them and test for this to be 1).
At the least, this reduces the complexity of your problem.
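A sketch of the trick, assuming `hinde_A` and `hinde_B` hold each dyad's index in the two conditions, in the same dyad order:

```
d <- hinde_A - hinde_B   ### within-dyad difference between conditions
t.test(d)                ### tests whether the mean difference is zero
```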
| null | CC BY-SA 3.0 | null | 2011-05-19T07:41:35.067 | 2011-05-19T07:41:35.067 | null | null | 4257 | null |
10961 | 1 | null | null | 3 | 578 | I am trying to model `steel` prices using `brent` prices with following model:
$steel_t=\alpha + \beta steel_{t-1}+\gamma brent_t + \epsilon_t$
I have monthly data. I fitted the parameter values with `lm` (R, is that reasonable?). Now I want to see the effect of a "shock event" in one year, e.g. what happens to steel prices if brent prices go to USD 50 in one year (it is about USD 100 nowadays). How can I do it? Does it make sense to use an autoregressive model for such analysis?
| Autoregressive model and shock events | CC BY-SA 3.0 | null | 2011-05-19T08:23:46.003 | 2011-05-19T11:55:52.667 | 2011-05-19T08:26:25.030 | 2116 | 1443 | [
"r",
"time-series"
] |
10963 | 1 | 10967 | null | 2 | 1206 | I have data like Person $A$ like movies `['X','Y', 'Z']` and he dislikes `['V']`. Person $B$ like movies `['X','L','V']` and dislikes `['Y']`. like wise so many users. What could be a good algorithm to find mean difference of users' tastes?
| Algorithm to calculate difference in users' tastes | CC BY-SA 3.0 | null | 2011-05-19T10:14:30.060 | 2012-02-02T07:31:19.903 | 2012-02-02T07:31:19.903 | 264 | 4665 | [
"algorithms",
"matching",
"recommender-system"
] |
10964 | 1 | 10972 | null | 10 | 2166 | I was presenting proofs of WLLN and a version of SLLN (assuming bounded 4th central moment) when somebody asked which measure the probability is with respect to, and I realised that, on reflection, I wasn't quite sure.
It seems that it is straightforward, since in both laws we have a sequence of $X_{i}$'s, independent RVs with identical mean and finite variance. There is only one random variable in sight, namely the $X_{i}$, so the probability must be w.r.t the distribution of the $X_{i}$, right? But then that doesn't seem quite right for the strong law since the typical proof technique is then to define a new RV $S_{n} := \sum_{i=1}^{n} X_{i}$ and work with that, and the limit is inside the probability:
$$\Pr \left[\lim_{n \rightarrow \infty}\frac{1}{n}\sum_{i=1}^{n} X_{i} = E[X_{i}]\right]=1$$
So now it looks as if the RV is the sums over $n$ terms, so the probability is over the distribution of the sums $S_{n}$, where $n$ is no longer fixed. Is that correct? If it is, how would we go about constructing a suitable probability measure on the sequences of partial sums?
Happy to receive intuitive responses as to what is going on as well as formal ones using e.g. real or complex analysis, undergrad probability/statistics, basic measure theory. I've read [Convergence in probability vs. almost sure convergence](https://stats.stackexchange.com/questions/2230) and associated links, but find no help there.
| In convergence in probability or a.s. convergence w.r.t which measure is the probability? | CC BY-SA 3.0 | null | 2011-05-19T10:43:13.270 | 2011-05-19T13:28:30.873 | 2017-04-13T12:44:24.667 | -1 | 3248 | [
"random-variable",
"probability"
] |
10965 | 1 | null | null | 0 | 1718 | I'm using the `ets` forecast function in R.
When I fit a model to some timeseries t1:
```
model<-ets(t1) [36 periods]
```
and the calculate forecasts from that model:
```
f1 <- forecast(model,10)
```
so I get 10 forecasts, for periods 37-46.
So my question is: are these 10 point forecasts one-step-ahead forecasts which
have their seeds at $t, t+1, t+2, \ldots$ with $t=37$,
or are they forecasts with their seed only at $t=37$ and forecast horizons $h=1,2,3,4,\ldots$?
| Problem with ets from R forecast package | CC BY-SA 3.0 | null | 2011-05-19T11:03:36.530 | 2011-05-21T18:45:41.890 | 2011-05-21T18:45:41.890 | 930 | 4666 | [
"r",
"forecasting"
] |
10966 | 2 | null | 10963 | 1 | null | If you represent each movie as a categorical variable with 3 levels (like, unspecified, dislike) you can do any type of clustering analysis on your users with these covariates.
| null | CC BY-SA 3.0 | null | 2011-05-19T11:07:59.960 | 2011-05-19T11:07:59.960 | null | null | 4257 | null |
10967 | 2 | null | 10963 | 2 | null | What you want to do is called "Collaborative Filtering". Searching the web will offer you a tremendous amount of resources for this topic, but I truly recommend this paper:
[Xiaoyuan Su & Taghi M. Khoshgoftaar: A Survey of Collaborative Filtering Techniques](http://www.hindawi.com/journals/aai/2009/421425/)
In section 3. Memory-Based Collaborative Filtering Techniques you'll find the basic techniques to find users with similar taste. They generally consist of selecting a metric for a nearest-neighbor approach plus some modifications of the user-item matrix (how to deal with missing values, how to treat items which have not been rated very often, etc.). This is a good point to start.
After grasping the basic ideas you may want to try out some sophisticated techniques like Singular Value Decomposition, which has been successfully applied in the [Netflix Prize](http://en.wikipedia.org/wiki/Netflix_Prize) (you'll find a link to this and other techniques in section 4. Model-Based Collaborative Filtering Techniques of the recommended paper).
If you have some bucks to spend, I also recommend "Programming Collective Intelligence" by Toby Segaran, which approaches this topic in a very very practical way.
| null | CC BY-SA 3.0 | null | 2011-05-19T11:28:41.813 | 2011-05-19T11:45:45.760 | 2011-05-19T11:45:45.760 | 264 | 264 | null |
10968 | 2 | null | 10965 | 3 | null | They can't possibly be one-step forecasts because you haven't provided any data for t>36. They are forecasts of times 37,...,46 based on data up to time 36 (i.e., horizons 1,2,3,...,10).
| null | CC BY-SA 3.0 | null | 2011-05-19T11:28:45.260 | 2011-05-19T11:28:45.260 | null | null | 159 | null |
10970 | 2 | null | 10961 | 1 | null | First of all, I'd suggest using the `arima` function instead and explicitly fitting an AR(1) model. I don't think it will change your result, but it should properly handle the error correlations.
Once you have that, you can set up your predicted values of `brent` and just run the model on that.
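A hedged sketch of that approach; `steel` and `brent` are assumed to be your monthly series, and the brent path below is invented for illustration:

```
### AR(1) model for steel with brent as an external regressor
fit <- arima(steel, order = c(1, 0, 0), xreg = brent)
### hypothetical shock: brent declines to USD 50 over the next 12 months
brent_shock <- seq(from = 100, to = 50, length.out = 12)
predict(fit, n.ahead = 12, newxreg = brent_shock)
```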
| null | CC BY-SA 3.0 | null | 2011-05-19T11:55:52.667 | 2011-05-19T11:55:52.667 | null | null | 1397 | null |
10971 | 2 | null | 10953 | 3 | null | There is the possibility of using the `coin` package for this type of stuff. [See its webpage](http://cran.r-project.org/web/packages/coin/index.html) and [the accepted answer to this question](https://stats.stackexchange.com/questions/6127/which-permutation-test-implementation-in-r-to-use-instead-of-t-tests-paired-and).
An implementation for this type of stuff would be the following.
```
#load the package
require(coin)
# Some toy data:
s.data <- data.frame(dyad = c("F1-M1", "F1-M2","F2-M1","F2-M2"), condition = c(rep("A", 4), rep("B", 4)), dv = runif(8,0,1))
# Make sure the factors are really factors!
str(s.data)
# here goes the permutation test.
oneway_test(dv ~ condition | dyad, distribution = approximate(B=10000), data = s.data)
```
(Note that you need to use a high number for B in your real example.)
Furthermore, you could also use a paired t-test. I don't see any reason against it:
```
t.test(subset(s.data, condition == "A", "dv", drop = TRUE), subset(s.data, condition == "B", "dv", drop = TRUE), paired = TRUE)
```
---
I think there is one important thing you haven't discussed so far: with this implementation you would have controlled for the dyads, but not for the individuals (e.g., F1, independent of her male interaction partner). This is a serious threat to your analysis.
I know that there is a bunch of work on dyadic analyses in psychology and related fields. Unfortunately I cannot give you a real pointer. But you should definitely check this stuff out before finishing your analyses. A quick search on rseek.org returns at least a package called `dyad` and [this webpage](http://www.davidakenny.net/dyad.htm).
| null | CC BY-SA 3.0 | null | 2011-05-19T13:00:42.123 | 2011-05-19T13:00:42.123 | 2017-04-13T12:44:55.360 | -1 | 442 | null |
10972 | 2 | null | 10964 | 13 | null | The probability measure is the same in both cases, but the question of interest is different between the two. In both cases we have a (countably) infinite sequence of random variables defined on a single probability space $(\Omega,\mathscr{F},P)$. We take $\Omega$, $\mathscr{F}$ and $P$ to be the infinite products in each case (care is needed here; this works because we are dealing only with probability measures, and we can run into trouble otherwise).
For the SLLN, what we care about is the probability (or measure) of the set of all $\omega = (\omega_{1},\omega_{2},\ldots)$ where the scaled partial sums DO NOT converge. This set has measure zero (w.r.t. $P$), says the SLLN.
For the WLLN, what we care about is the behavior of the sequence of projection measures $\left(P_{n}\right)_{n=1}^{\infty}$, where for each $n$, $P_{n}$ is the projection of $P$ onto the finite-dimensional measurable space $\Omega_{n} = \prod_{i=1}^{n} \Omega_{i}$. The WLLN says that the (projected) probability of the cylinders (that is, events involving $X_{1},\ldots,X_{n}$), on which the scaled partial sums do not converge, goes to zero in the limit as $n$ goes to infinity.
In the WLLN we are calculating probabilities which appear removed from the infinite product space, but it never actually went away - it was there all along. All we were doing was projecting onto the subspace from 1 to $n$ and then taking the limit afterward. That such a thing is possible, that it is possible to construct a probability measure on an infinite product space such that the projections for each $n$ match what we think they should, and do what they're supposed to do, is one of the consequences of [Kolmogorov's Extension Theorem](http://en.wikipedia.org/wiki/Kolmogorov_extension_theorem) .
If you'd like to read more, I've found the most detailed discussion of subtle points like these in "Probability and Measure Theory" by Ash, Doleans-Dade. There are a couple others, but Ash/D-D is my favorite.
| null | CC BY-SA 3.0 | null | 2011-05-19T13:28:30.873 | 2011-05-19T13:28:30.873 | null | null | null | null |
10973 | 2 | null | 3 | 3 | null |
- clusterPy for analytical regionalization or geospatial clustering
- PySal for spatial data analysis.
| null | CC BY-SA 4.0 | null | 2011-05-19T13:31:00.567 | 2022-11-27T23:10:50.513 | 2022-11-27T23:10:50.513 | 362671 | 4329 | null |
10974 | 1 | 10980 | null | 7 | 970 | For an upcoming study of about 200 (rare) cancer cases, we would like to determine the power of detection for a hypothetical marker, present in 10% of cases, i.e. 20 cases with a hazard ratio of 3.0. Cases will be followed up for at least 5 years. We will have the option to validate any identified markers in an independent additional 200 member cohort.
In the past, I have used the program [PS](http://biostat.mc.vanderbilt.edu/twiki/bin/view/Main/PowerSampleSize) to run power calculations, which is fairly idiot-proof for a biologist like me. However, the cancer we are studying tends to have an overall survival rate of 60%, so the calculation of a median survival time doesn't really make sense (at least to me).
What I wanted to know is whether there are approaches available for the calculation of power which can overcome this limitation, and if so, how can I easily implement them in, ideally, R.
I have googled around to find an answer, but haven't met with success so far.
| Calculation of power of survival study | CC BY-SA 3.0 | null | 2011-05-19T13:34:08.113 | 2011-05-19T15:15:25.510 | 2011-05-19T14:59:26.263 | 183 | 3429 | [
"survival",
"median",
"statistical-power"
] |
10975 | 1 | 10979 | null | 22 | 30489 | Is there a (stronger?) alternative to the arcsin square root transformation for percentage/proportion data? In the data set I'm working on at the moment, marked
heteroscedasticity remains after I apply this transformation, i.e. the plot of residuals vs. fitted values is still very much rhomboid.
Edited to respond to comments: the data are investment decisions by experimental participants who may invest 0-100% of an endowment in multiples of 10%. I have also looked at these data using ordinal logistic regression, but would like to see what a valid glm would produce. Plus I could see the answer being useful for future work, as arcsin square root seems to be used as a one-size-fits-all solution in my field and I hadn't come across any alternatives being employed.
| Transforming proportion data: when arcsin square root is not enough | CC BY-SA 4.0 | null | 2011-05-19T13:48:08.743 | 2018-10-22T17:41:57.780 | 2018-10-22T17:41:57.780 | 11887 | 266 | [
"data-transformation",
"generalized-linear-model",
"heteroscedasticity"
] |
10976 | 2 | null | 6314 | 14 | null | I don't have a full answer to this question, but I can give a partial answer on some of the analytical aspects. Warning: I've been working on other problems since the first paper below, so it's very likely there is other good stuff out there I'm not aware of.
First I think it's worth noting that despite the title of their paper "When is `nearest neighbor' meaningful", Beyer et al actually answered a different question, namely when is NN not meaningful. We proved the converse to their theorem, under some additional mild assumptions on the size of the sample, in [When Is 'Nearest Neighbor' Meaningful: A Converse Theorem and Implications. Journal of Complexity, 25(4), August 2009, pp 385-397.](http://www.cs.bham.ac.uk/~durranrj/JCOM_article.pdf) and showed that there are situations when (in theory) the concentration of distances will not arise (we give examples, but in essence the number of non-noise features needs to grow with the dimensionality so of course they seldom arise in practice).
The references 1 and 7 cited in our paper give some examples of ways in which the distance concentration can be mitigated in practice.
A paper by my supervisor, Ata Kaban, looks at whether these distance concentration issues persist despite applying dimensionality reduction techniques in [On the Distance Concentration Awareness of Certain Data Reduction Techniques. Pattern Recognition. Vol. 44, Issue 2, Feb 2011, pp.265-277.](http://www.cs.bham.ac.uk/~axk/dred2.pdf). There's some nice discussion in there too.
A recent paper by Radovanovic et al [Hubs in Space: Popular Nearest Neighbors in High-Dimensional Data. JMLR, 11(Sep), September 2010, pp:2487−2531. ](http://jmlr.csail.mit.edu/papers/volume11/radovanovic10a/radovanovic10a.pdf) discusses the issue of "hubness", that is when a small subset of points belong to the $k$ nearest neighbours of many of the labelled observations. See also the first author's PhD thesis, which is on the web.
| null | CC BY-SA 3.0 | null | 2011-05-19T14:06:47.853 | 2011-05-19T14:06:47.853 | null | null | 3248 | null |
10977 | 1 | 10978 | null | 4 | 1989 | I am looking for an easy to use stand alone software that is able to construct [Bayesian belief networks](http://en.wikipedia.org/wiki/Bayesian_network) out of data. The software should (of course ;-) be free.
Can anybody recommend something? Thank you!
| Software for learning Bayesian belief networks | CC BY-SA 3.0 | null | 2011-05-19T14:23:29.600 | 2015-10-30T20:29:58.063 | null | null | 230 | [
"bayesian",
"software"
] |
10978 | 2 | null | 10977 | 2 | null | Your own link contains a load of free tools (near the bottom: software resources), and you can check the bayesian task view at [CRAN](http://cran.r-project.org/).
| null | CC BY-SA 3.0 | null | 2011-05-19T14:46:47.357 | 2011-05-19T14:46:47.357 | null | null | 4257 | null |
10979 | 2 | null | 10975 | 32 | null | Sure. John Tukey describes a family of (increasing, one-to-one) transformations in [EDA](https://rads.stackoverflow.com/amzn/click/B0007347RW). It is based on these ideas:
- To be able to extend the tails (towards 0 and 1) as controlled by a parameter.
- Nevertheless, to match the original (untransformed) values near the middle ($1/2$), which makes the transformation easier to interpret.
- To make the re-expression symmetric about $1/2.$ That is, if $p$ is re-expressed as $f(p)$, then $1-p$ will be re-expressed as $-f(p)$.
If you begin with any increasing monotonic function $g: (0,1) \to \mathbb{R}$ differentiable at $1/2$ you can adjust it to meet the second and third criteria: just define
$$f(p) = \frac{g(p) - g(1-p)}{2g'(1/2)}.$$
The numerator is explicitly symmetric (criterion $(3)$), because swapping $p$ with $1-p$ reverses the subtraction, thereby negating it. To see that $(2)$ is satisfied, note that the denominator is precisely the factor needed to make $f^\prime(1/2)=1.$ Recall that the derivative approximates the local behavior of a function with a linear function; a slope of $1=1:1$ thereby means that $f(p)\approx p$ (plus a constant $-1/2$) when $p$ is sufficiently close to $1/2.$ This is the sense in which the original values are "matched near the middle."
Tukey calls this the "folded" version of $g$. His family consists of the power and log transformations $g(p) = p^\lambda$ where, when $\lambda=0$, we consider $g(p) = \log(p)$.
Let's look at some examples. When $\lambda = 1/2$ we get the folded root, or "froot," $f(p) = \sqrt{1/2}\left(\sqrt{p} - \sqrt{1-p}\right)$. When $\lambda = 0$ we have the folded logarithm, or "flog," $f(p) = (\log(p) - \log(1-p))/4.$ Evidently this is just a constant multiple of the logit transformation, $\log(\frac{p}{1-p})$.

In this graph the blue line corresponds to $\lambda=1$, the intermediate red line to $\lambda=1/2$, and the extreme green line to $\lambda=0$. The dashed gold line is the arcsine transformation, $\arcsin(2p-1)/2 = \arcsin(\sqrt{p}) - \arcsin(\sqrt{1/2})$. The "matching" of slopes (criterion $(2)$) causes all the graphs to coincide near $p=1/2.$
The most useful values of the parameter $\lambda$ lie between $1$ and $0$. (You can make the tails even heavier with negative values of $\lambda$, but this use is rare.) $\lambda=1$ doesn't do anything at all except recenter the values ($f(p) = p-1/2$). As $\lambda$ shrinks towards zero, the tails get pulled further towards $\pm \infty$. This satisfies criterion #1. Thus, by choosing an appropriate value of $\lambda$, you can control the "strength" of this re-expression in the tails.
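A small R sketch of this family, following the definitions above (the function name is mine):

```
### folded re-expression f(p) = (g(p) - g(1-p)) / (2 g'(1/2)),
### with g(p) = p^lambda and the log at lambda = 0
folded <- function(p, lambda) {
  if (lambda == 0) {
    (log(p) - log(1 - p)) / 4                          ### the "flog"
  } else {
    (p^lambda - (1 - p)^lambda) / (2 * lambda * (1/2)^(lambda - 1))
  }
}
folded(0.3, 1)    ### equals 0.3 - 1/2 = -0.2
folded(0.5, 0.5)  ### 0 at p = 1/2, by symmetry
```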
| null | CC BY-SA 4.0 | null | 2011-05-19T14:49:19.280 | 2018-10-22T16:20:59.933 | 2018-10-22T16:20:59.933 | 919 | 919 | null |
10980 | 2 | null | 10974 | 6 | null | In the R Hmisc package, see the `cpower` and `spower` functions. `spower` does simulations for complex situations (late treatment effect, drop-in, drop-out, etc.), whereas `cpower` uses normal approximations for simpler cases such as yours.
| null | CC BY-SA 3.0 | null | 2011-05-19T15:15:25.510 | 2011-05-19T15:15:25.510 | null | null | 4253 | null |
10981 | 1 | 17139 | null | 9 | 2364 | I'm re-posting a question from [math.stackexchange.com](https://math.stackexchange.com/questions/32569/a-question-about-linear-regression), I think the current answer in math.se is not right.
Select $n$ numbers from the set $\{1,2,...,U\}$; $y_i$ is the $i$th number selected, and $x_i$ is the rank of $y_i$ among the $n$ numbers. The selection is without replacement, and $n$ is always smaller than $U$. The rank is the position of a number after the $n$ numbers are sorted in ascending order.
We can get $n$ data points $(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)$, and a best-fit line for these data points can be found by linear regression. $r_{xy}$ (the correlation coefficient) measures the goodness of the fitted line; I want to calculate $\mathbb{E}(r_{xy})$ or $\mathbb{E}(r_{xy}^2)$ (the coefficient of determination).
If $\mathbb{E}[r_{xy}]$ cannot be calculated exactly, an estimate or a lower bound is still OK.
Updated:
By calculating the sample correlation coefficient on randomly generated data, we can see that $r_{xy}$ is quite close to 1, so I want to prove this from a theoretical point of view, or show theoretically that data generated by the above method are very nearly linear.
Updated:
Is it possible to get the distribution of the sample correlation coefficient?
| Computing mathematical expectation of the correlation coefficient or $R^2$ in linear regression | CC BY-SA 3.0 | null | 2011-05-19T15:21:04.487 | 2011-10-28T15:55:33.437 | 2017-04-13T12:19:38.853 | -1 | 4670 | [
"regression",
"correlation"
] |
10982 | 1 | null | null | 7 | 4696 | I am a newbie in stat. I am working on the Laplace distribution for my algorithm.
- Could you tell me what the first four moments of the Laplace distribution are?
- Does it have infinite tails like the Cauchy distribution?
- What is the empirical rule?
| Moments of Laplace distribution | CC BY-SA 3.0 | null | 2011-05-19T15:21:22.930 | 2019-02-02T02:34:28.533 | 2011-05-19T22:10:05.820 | 930 | 4319 | [
"distributions",
"moments"
] |
10984 | 1 | null | null | 2 | 218 | I'm having a hard time explaining this (hence the weird and long title); also, I'm not a mathematician. I have this data lying around in a database and was wondering how I could visualise it (and predict the future).
Say I gave you the following data for one user. It could show anything; for example, say it shows whether someone ran on a certain day (or completed any other task):
```
Date Done?
19/05/2011 yes
18/05/2011 no
17/05/2011 no
16/05/2011 no
15/05/2011 no
14/05/2011 yes
13/05/2011 yes
12/05/2011 yes
11/05/2011 yes
10/05/2011 no
9/05/2011 yes
8/05/2011 no
7/05/2011 no
6/05/2011 yes
5/05/2011 no
4/05/2011 yes
```
I have a couple of questions:
- What data could you derive from this? How could it be represented (other than pie charts, bar charts and totals)?
- Is there any way to predict what they would do tomorrow?
- I used the lower bound of the Wilson score confidence interval for a Bernoulli parameter to figure out a score... what does this mean? Is it useful in any way?
| How can you predict the likelihood of someone doing something given previous data? | CC BY-SA 3.0 | null | 2011-05-19T16:11:28.713 | 2012-03-02T13:44:45.177 | 2011-05-19T20:56:03.663 | null | null | [
"time-series",
"binomial-distribution",
"predictive-models"
] |
10985 | 1 | 11042 | null | 2 | 9037 | Can anybody show me why the dispersion parameter of the negative binomial distribution is taken to be one? In the Poisson case you can show that $E(y)/V(y)=\mu/\mu=1$, which is called equidispersion. But how can you show that the dispersion parameter of the negative binomial distribution is 1 as well?
| Dispersion parameter of negbin distribution | CC BY-SA 3.0 | null | 2011-05-19T16:23:47.003 | 2013-06-20T18:43:34.087 | 2013-06-20T18:43:34.087 | 24617 | 4496 | [
"negative-binomial-distribution",
"proof",
"overdispersion"
] |
10986 | 1 | 18837 | null | 3 | 3182 | I am trying to find some help with something that is called an "Adjusted Analysis" (or also Covariate Adjusted Logistic Regression); a typical response has been that I might just want multivariable logistic regression, but this is not quite what I am looking for. The trouble I have is with what exactly an "adjusted" analysis is.
As an example, I have at my disposal a software suite that performs this type of adjusted analysis. We have some genes and various clinical variables from patients; what the method seems to do is adjust the p-values of the genes. But I can't figure out why, or how. So I am trying to move outside of this software suite to truly understand what the underlying mathematics of this statistical technique is.
When I've posted this question in [other places](https://stackoverflow.com/questions/6061305/using-r-for-covariate-adjusted-logistic-regression) the response has been that I should just take more courses in statistics. So while acknowledging my shortcomings, I would like to ask if anyone can point me in a somewhat correct direction. I have been trying to find resources to help; however, I think I am not posing my question clearly enough. As an aside, I have a background in computer science and more recently I am branching into biostatistics. I don't like using black-box software, so I would eventually like to re-implement this technique in R.
Thank you for any help that can be offered. Please let me know if there is a way I can pose my question clearer.
| Covariate Adjusted Logistic Regression ("Adjusted Analysis") | CC BY-SA 3.0 | null | 2011-05-19T17:36:23.553 | 2015-03-23T00:13:25.600 | 2017-05-23T12:39:26.523 | -1 | 4673 | [
"r",
"logistic",
"genetics",
"clinical-trials"
] |
10987 | 1 | 10998 | null | 23 | 21907 | I am looking for input on how others organize their R code and output.
My current practice is to write code in blocks in a text file as such:
```
#=================================================
# 19 May 2011
date()
# Correlation analysis of variables in sed summary
load("/media/working/working_files/R_working/sed_OM_survey.RData")
# correlation between estimated surface and mean perc.OM in epi samples
cor.test(survey$mean.perc.OM[survey$Depth == "epi"],
         survey$est.surf.OM[survey$Depth == "epi"])
#==================================================
```
I then paste the output into another text file, usually with some annotation.
The problems with this method are:
- The code and the output are not explicitly linked other than by date.
- The code and output are organized chronologically and thus can be hard to search.
I have considered making one Sweave document with everything since I could then make a table of contents but this seems like it may be more hassle than the benefits it would provide.
Let me know of any effective routines you have for organizing your R code and output that would allow for efficient searching and editing the analysis.
| What are efficient ways to organize R code and output? | CC BY-SA 3.0 | null | 2011-05-19T17:42:19.817 | 2017-07-09T10:45:09.697 | 2017-05-18T21:16:44.490 | 28666 | 4048 | [
"r",
"project-management"
] |
10988 | 2 | null | 6314 | 4 | null | You might also be interested in [neighbourhood components analysis](http://www.google.de/url?sa=t&source=web&cd=1&ved=0CB0QFjAA&url=http://eprints.pascal-network.org/archive/00001570/01/nca6.pdf&ei=jFzVTf_xAczPsgaurv2QDA&usg=AFQjCNG-rlschiAWiAEtgyL-dhdsi2H3ng&sig2=CV-NnOB_Obf6Et_4EWijdQ) by Goldberger et al.
Here, a linear transformation is learned to maximize the expected number of correctly classified points via a stochastic nearest neighbour selection.
As a side effect the (expected) number of neighbours is determined from the data.
| null | CC BY-SA 3.0 | null | 2011-05-19T18:10:44.623 | 2011-05-19T18:10:44.623 | null | null | 2860 | null |
10989 | 2 | null | 10271 | 0 | null | In the OP's response to my prior answer he has posted his data to the web: [60 readings per hour for 24 hours for 6 days](https://i.stack.imgur.com/XhSQH.jpg). Since this is a time series, cross-sectional tools like DBSCAN have limited relevance, as the data has temporal dependence. With data like this one normally looks for intra-hour and intra-day structure. In addition to these kinds of structure, one can pursue the detection of anomalies, which can be either one-time-only (pulse) or systematic in nature (level shift), using methods that are well documented (see the literature of Tsay, Tiao, Chen, et al.). These procedures yielded the following "anomalies". Note that a level shift is essentially suggestive of separate "clusters". 
```
HOUR/MINUTE TIME
```
| null | CC BY-SA 3.0 | null | 2011-05-19T18:17:05.830 | 2011-05-19T18:56:20.710 | 2011-05-19T18:56:20.710 | 3382 | 3382 | null |
10990 | 2 | null | 10987 | 6 | null | I for one organize everything into four directories for every project or analysis:
(1) 'code' Where I store text files of R functions.
(2) 'sql' Where I keep the queries used to gather my data.
(3) 'dat' Where I keep copies (usually csv) of my raw and processed data.
(4) 'rpt' Where I store the reports I've distributed.
ALL of my files are named using very verbose names such as 'analysis_of_network_abc_for_research_on_modified_buffer_19May2011'
I also write detailed documentation up front where I organize the hypothesis, any assumptions, inclusion and exclusion criteria, and steps I intend to take to reach my deliverable. All of this is invaluable for repeatable research and makes my annual goal setting process easier.
| null | CC BY-SA 3.0 | null | 2011-05-19T18:31:02.617 | 2016-08-20T13:13:22.373 | 2016-08-20T13:13:22.373 | 22468 | 4675 | null |
10991 | 2 | null | 10928 | -1 | null | Found the answer. Seems to work well except when power is low.
$$ P(X\le Y) = \int_0^1 \text{pnorm}((\delta\sqrt{N})/\sigma - \text{qnorm}(x))dx $$
Thanks for your help guys, couldn't have gotten there without you.
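The integral can be evaluated numerically in R; the sketch below uses arbitrary illustrative values for $\delta$, $N$, and $\sigma$. As a sanity check, treating $\text{qnorm}(x)$ as a standard normal draw shows the integral has the closed form $\Phi(\delta\sqrt{N}/(\sigma\sqrt{2}))$:

```r
# Numerically evaluate P(X <= Y) = integral_0^1 pnorm(d*sqrt(N)/s - qnorm(x)) dx
# (illustrative parameter values, not from the original problem)
delta <- 0.5; N <- 25; sigma <- 1
c0  <- delta * sqrt(N) / sigma
val <- integrate(function(x) pnorm(c0 - qnorm(x)), 0, 1)$value
# Substituting z = qnorm(x) gives the closed form pnorm(c0 / sqrt(2))
all.equal(val, pnorm(c0 / sqrt(2)), tolerance = 1e-4)
```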
| null | CC BY-SA 3.0 | null | 2011-05-19T18:33:15.893 | 2011-05-19T20:48:57.070 | 2011-05-19T20:48:57.070 | 919 | 4647 | null |
10992 | 2 | null | 10985 | 8 | null | Based on your previous question using `glm.nb`, I'll take a wild guess that you are referring to the text in the output of that function:
```
> library(MASS)
> a <- glm.nb(Days ~ Eth + Sex, data=quine)
> summary(a)
Call:
glm.nb(formula = Days ~ Eth + Sex, data = quine, init.theta = 1.171409701,
link = log)
Deviance Residuals:
Min 1Q Median 3Q Max
-2.5901 -1.0235 -0.3985 0.3414 2.3438
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 2.9630 0.1351 21.936 < 2e-16 ***
EthN -0.5738 0.1588 -3.613 0.000303 ***
SexM 0.2135 0.1594 1.340 0.180380
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for Negative Binomial(1.1714) family taken to be 1)
Null deviance: 182.14 on 145 degrees of freedom
Residual deviance: 168.12 on 143 degrees of freedom
AIC: 1112.9
Number of Fisher Scoring iterations: 1
Theta: 1.171
Std. Err.: 0.145
2 x log-likelihood: -1104.855
```
In this output, the sentence about the dispersion parameter is an artifact of the approach that `glm.nb` is using for fitting the negative binomial distribution. It repeatedly calls `glm` for fixed values of the shape parameter `theta`, then updates `theta` based on the results of the fit. The sentence means that the distribution used by the final model is not overdispersed compared to the NegBin(1.1714) distribution, which is what you want, since that is the final fitted distribution. It says nothing about the overdispersion of NegBin(1.1714) compared to anything else.
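To make the mechanism concrete, here is a sketch (of the idea, not the actual internals of `glm.nb`): for a fixed `theta`, the fit is an ordinary `glm()` with MASS's `negative.binomial` family, here using the theta estimated above:

```r
# For a *fixed* theta, a negative binomial GLM is an ordinary glm() with
# family negative.binomial(theta); glm.nb alternates such fits with
# updates of theta itself.
library(MASS)
fit <- glm(Days ~ Eth + Sex, data = quine,
           family = negative.binomial(theta = 1.1714))
coef(fit)  # essentially the glm.nb coefficients shown above
```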
| null | CC BY-SA 3.0 | null | 2011-05-19T18:37:53.050 | 2011-05-20T13:46:09.087 | 2011-05-20T13:46:09.087 | 279 | 279 | null |
10993 | 2 | null | 10981 | 3 | null | If you only want to show $r^2_{xy}$ must be close to 1, and compute a lower bound for it, it's straightforward, because that means for given $U$ and $n$ you only need to maximize the variance of the residuals. This can be done in exactly four symmetric ways. The two extremes (lowest and highest possible correlations) are illustrated for $U=20, n=9$.

For large values of $U$ and appropriate values of $n$, $r^2_{xy}$ can actually get close to 0. For example, with $n=100$ and very large values of $U \gg n$, $r^2_{xy} \sim 0.03$ in the worst case.
| null | CC BY-SA 3.0 | null | 2011-05-19T18:52:43.147 | 2011-05-19T18:52:43.147 | null | null | 919 | null |
10994 | 2 | null | 9659 | 3 | null | How accurate does your posterior cdf need to be? You might consider replacing the continuous prior with a discrete approximation:
$p^*(\theta) \propto p(\theta) 1(\theta\in t_1, \dots, t_k)$
where $p(\theta)$ is your original continuous prior.
Then to compute the posterior you just calculate likelihood x prior
$p(\theta|x) \propto p^*(\theta)p(x|\theta)$
over the support of the prior $t_1, \dots, t_k$ and renormalize.
This is called "griddy Gibbs" by some. It can be quite effective if you have an informative prior in which case you can choose the grid points non-uniformly (and, of course, if you can live with a discrete approximation coarse enough to be computationally feasible).
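A minimal sketch of this grid approximation in R, assuming (purely for illustration) a Beta(2, 2) prior and a Binomial likelihood with 7 successes in 10 trials, so the exact posterior is Beta(9, 5):

```r
theta <- seq(0.0005, 0.9995, length.out = 2000)  # grid points t_1, ..., t_k
prior <- dbeta(theta, 2, 2)                      # p*(theta), up to a constant
lik   <- dbinom(7, size = 10, prob = theta)      # p(x | theta)
post  <- prior * lik
post  <- post / sum(post)                        # renormalize over the grid
sum(theta * post)                                # close to the exact mean 9/14
cdf <- cumsum(post)                              # discrete posterior cdf
```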
| null | CC BY-SA 3.0 | null | 2011-05-19T19:02:59.720 | 2011-05-19T19:02:59.720 | null | null | 26 | null |
10995 | 2 | null | 10987 | 2 | null | Now that I've made the switch to Sweave I never want to go back. Especially if you have plots as output, it's so much easier to keep track of the code used to create each plot. It also makes it much easier to correct one minor thing at the beginning and have it ripple through the output without having to rerun anything manually.
| null | CC BY-SA 3.0 | null | 2011-05-19T20:02:44.443 | 2011-05-19T20:02:44.443 | null | null | 3601 | null |
10996 | 1 | 10999 | null | 21 | 12589 | I want to test a multi-stage path model (e.g., A predicts B, B predicts C, C predicts D) where all of my variables are individual observations nested within groups. So far I've been doing this through multiple unique multilevel analysis in R.
I would prefer to use a technique like SEM that lets me test multiple paths at the same time (A -> B -> C -> D) and still properly handle the 2-levels (individuals in groups).
I understand that MPLUS can handle this. Is there an R package I can use?
| R package for multilevel structural equation modeling? | CC BY-SA 3.0 | null | 2011-05-19T20:33:12.903 | 2020-05-25T21:09:51.267 | 2011-05-20T02:13:11.003 | 307 | 4677 | [
"r",
"multilevel-analysis",
"structural-equation-modeling",
"path-model"
] |
10998 | 2 | null | 10987 | 23 | null | You are not the first person to ask this question.
- Managing a statistical analysis project – guidelines and best practices
- A workflow for R
- R Workflow: Slides from a Talk at Melbourne R Users by Jeromy Anglim (including another much longer list of webpages dedicated to R Workflow)
- My own stuff: Dynamic documents with R and LATEX as an important part of reproducible research
- More links to project organization: How to efficiently manage a statistical analysis project?
| null | CC BY-SA 3.0 | null | 2011-05-19T20:53:08.020 | 2013-06-25T17:42:31.833 | 2017-04-13T12:44:25.283 | -1 | 307 | null |
10999 | 2 | null | 10996 | 20 | null | It seems that [OpenMx](http://openmx.psyc.virginia.edu/) (based on Mx but it's now an R package) can do what you are looking for: ["Multi Level Analysis"](http://openmx.psyc.virginia.edu/thread/485)
| null | CC BY-SA 3.0 | null | 2011-05-19T20:59:58.840 | 2011-05-19T20:59:58.840 | null | null | 307 | null |
11000 | 1 | 11028 | null | 44 | 170685 | I'd like to regress a vector B against each of the columns in a matrix A. This is trivial if there are no missing data, but if matrix A contains missing values, then my regression against A is constrained to include only rows where all values are present (the default na.omit behavior). This produces incorrect results for columns with no missing data. I can regress the column matrix B against individual columns of the matrix A, but I have thousands of regressions to do, and this is prohibitively slow and inelegant. The na.exclude function seems to be designed for this case, but I can't make it work. What am I doing wrong here? Using R 2.13 on OSX, if it matters.
```
A = matrix(1:20, nrow=10, ncol=2)
B = matrix(1:10, nrow=10, ncol=1)
dim(lm(A~B)$residuals)
# [1] 10 2 (the expected 10 residual values)
# Missing value in first column; now we have 9 residuals
A[1,1] = NA
dim(lm(A~B)$residuals)
#[1] 9 2 (the expected 9 residuals, given na.omit() is the default)
# Call lm with na.exclude; still have 9 residuals
dim(lm(A~B, na.action=na.exclude)$residuals)
#[1] 9 2 (was hoping to get a 10x2 matrix with a missing value here)
A.ex = na.exclude(A)
dim(lm(A.ex~B)$residuals)
# Throws an error because dim(A.ex)==9,2
#Error in model.frame.default(formula = A.ex ~ B, drop.unused.levels = TRUE) :
# variable lengths differ (found for 'B')
```
| How does R handle missing values in lm? | CC BY-SA 3.0 | null | 2011-05-19T21:03:54.640 | 2015-10-06T15:02:29.630 | 2011-05-20T19:00:34.397 | 1699 | 1699 | [
"r",
"missing-data",
"linear-model"
] |
11001 | 2 | null | 4840 | 1 | null | Since all you have is the data for the series you are trying to predict, the approach should be to construct a robust ARIMA model. A robust ARIMA model can reflect not only auto-projective structure (the ARIMA component) but also changes in levels and/or trends over time. The parameters of this model should be shown not to have changed over time, and the same is true for the variance of the errors. There may be a change in any required "seasonal dummies" reflecting a deterministic component, as compared to a memory component. If you load your data up on the web and share it with the list, perhaps you can get an understanding of the analytics and the analytical tools required. Similar work has been done in forecasting a highly seasonal monthly series, the "airline passengers" series. You might google "airline passenger series Box-Jenkins" to get some clues as to how a similar series was handled, and severely mishandled by NOT correctly identifying omitted deterministic structure.
| null | CC BY-SA 3.0 | null | 2011-05-19T21:20:30.913 | 2011-05-19T21:20:30.913 | null | null | 3382 | null |
11002 | 2 | null | 10958 | 4 | null | "True positives", like the proportion classified correctly, require an arbitrary and information-losing categorization of the predicted values. These are improper scoring rules. An improper scoring rule is a criterion that, when optimized, leads to a bogus model. Also, watch out when using P-values in any way to guide model selection; this will greatly distort the inference from the final model.
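A tiny R illustration of the difference (the probabilities and outcomes are made up): the Brier score, a proper scoring rule, uses the predicted probabilities directly, while accuracy throws that information away at an arbitrary 0.5 cutoff:

```r
p <- c(0.9, 0.6, 0.4, 0.2)        # predicted probabilities
y <- c(1, 1, 0, 0)                # observed 0/1 outcomes
brier <- mean((p - y)^2)          # proper: rewards well-calibrated probabilities
acc   <- mean((p > 0.5) == y)     # improper: only the side of 0.5 matters
c(brier = brier, accuracy = acc)  # brier = 0.0925, accuracy = 1
```

Note that the weak prediction 0.6 and the confident one 0.9 are penalized very differently by the Brier score, yet count identically toward accuracy.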
| null | CC BY-SA 3.0 | null | 2011-05-19T21:55:58.340 | 2011-05-19T21:55:58.340 | null | null | 4253 | null |
11003 | 1 | 22621 | null | 1 | 374 | I have a mixed ANOVA with one between and one within factor:
between factor: control versus treatment
within factor: ad1, ad2 (measures click rates on 2 ads)
```
aov(repdat~(within*between)+Error(subjcts/(withincontrasts))+(between),data.rep)
```
However my outcome repdat has a Poisson distribution and contains 0s (since sometimes users just don't click). How do I model this in R?
I considered:
1.) using the Zelig package with a Poisson model, but I don't know how to properly write the mixed model formula here. Is it the same as for the mixed ANOVA?
2.) using the lme4 package with a Poisson model. Can I use both a between and a within factor, or even treat "ads" as a random variable?
It would be great if someone could quickly show me example code for both 1. and 2.
thanks
| R statistics Mixed ANOVA but outcome Poisson distributed - which model in R? | CC BY-SA 3.0 | null | 2011-05-19T22:55:08.337 | 2012-02-11T08:29:11.487 | 2012-02-11T08:29:11.487 | 7972 | 4679 | [
"r",
"poisson-distribution",
"lme4-nlme"
] |
11004 | 1 | 11005 | null | 2 | 146 | I was wondering how to get the arguments of R functions. Is there an R function which can be used to get all arguments of a certain function?
For example, the R function glm has the following arguments:
```
glm(formula, family = gaussian, data, weights, subset,
na.action, start = NULL, etastart, mustart, offset,
control = list(...), model = TRUE, method = "glm.fit",
x = FALSE, y = TRUE, contrasts = NULL, ...)
```
How can these arguments be obtained in the R console? Thanks
| Checking R function arguments | CC BY-SA 3.0 | null | 2011-05-19T23:04:30.417 | 2011-05-19T23:44:51.157 | 2011-05-19T23:19:32.777 | 3903 | 3903 | [
"r"
] |
11005 | 2 | null | 11004 | 4 | null | You could use something like `formals()` or `args()`, e.g. `formals(glm)` gives:
```
> formals(glm)
$formula
$family
gaussian
$data
$weights
$subset
$na.action
$start
NULL
$etastart
$mustart
$offset
$control
list(...)
$model
[1] TRUE
$method
[1] "glm.fit"
$x
[1] FALSE
$y
[1] TRUE
$contrasts
NULL
$...
```
| null | CC BY-SA 3.0 | null | 2011-05-19T23:44:51.157 | 2011-05-19T23:44:51.157 | null | null | 307 | null |
11006 | 2 | null | 10984 | 1 | null | You can have a look at [this](http://seed.ucsd.edu/~mindreader/) "Mind Reading" game and at the details of its implementation. I think it is very relevant to your second question.
| null | CC BY-SA 3.0 | null | 2011-05-20T00:16:26.300 | 2011-05-20T00:16:26.300 | null | null | 4337 | null |
11007 | 1 | null | null | 1 | 308 | I am performing this simple experiment: I have one variety of grass and 8 different fungi (say #1 to #8). I am going to put 10-20 grass plants in each of 18 containers; then I will put each of the fungi in 2 containers and leave 2 with no fungus ("negative control").
After some time I will count how many plants have been killed by each one of the fungi in each one of two replicates for that fungus.
I want to compare the results for each of the fungi with respect to the negative control. I think I can do that using Fisher's exact test, but I don't know if I have to combine the counts of the two replicates into a single count (as if using a single container with all the plants together) or if I can keep those counts separate.
Also, I would like to know if I have to run the test doing pairwise comparisons (controls vs #n) or if I should run a single test considering all the results at the same time. May I do that also with the Fisher exact test? Do I have to make a multiple comparison adjustment?
| Test for biological replicates | CC BY-SA 4.0 | null | 2011-05-20T00:30:20.703 | 2022-06-08T17:25:03.340 | 2022-06-08T17:25:03.340 | 11887 | 4680 | [
"statistical-significance",
"multiple-comparisons",
"biostatistics"
] |
11008 | 1 | null | null | 4 | 13580 | I made a comparison of hatch success between 2 populations of birds using R's `prop.test()` function:
```
prop.test(c(#hatched_site1, #hatched_site2),c(#laid_site1, #laid_site2))
```
It gave me the proportions of each site as part of the summary. How can I calculate the standard error for each proportion?
| How can I calculate the standard error of a proportion? | CC BY-SA 3.0 | null | 2011-05-20T00:39:14.580 | 2011-05-20T11:06:57.907 | 2011-05-20T11:06:57.907 | 307 | 4238 | [
"r",
"standard-deviation",
"proportion"
] |
11009 | 1 | 11080 | null | 114 | 107798 | Is it ever valid to include a two-way interaction in a model without including the main effects? What if your hypothesis is only about the interaction, do you still need to include the main effects?
| Including the interaction but not the main effects in a model | CC BY-SA 3.0 | null | 2011-05-20T01:19:45.107 | 2023-04-24T23:42:46.460 | 2015-02-05T16:52:02.207 | 36515 | 2310 | [
"regression",
"modeling",
"interaction",
"regression-coefficients"
] |
11011 | 2 | null | 10996 | 2 | null | Try searching for "structural equation modeling" on [http://rseek.org](http://rseek.org). You'll find several helpful links, including links to several possible packages.
You might also check out the Task View for the social sciences, there's a section for structural equation modeling maybe a third of the way down. See [http://cran.r-project.org/web/views/SocialSciences.html](http://cran.r-project.org/web/views/SocialSciences.html).
One package in particular you might find helpful is John Fox's `sem` package.
[http://cran.r-project.org/web/packages/sem/index.html](http://cran.r-project.org/web/packages/sem/index.html)
| null | CC BY-SA 3.0 | null | 2011-05-20T01:52:07.213 | 2011-05-20T04:06:40.047 | 2011-05-20T04:06:40.047 | 3601 | 3601 | null |
11012 | 1 | null | null | 3 | 1292 | I understand that SPSS can create a contingency table, and at the same time perform the chi-square test.
However, is it possible that when we already have the contingency table, to have SPSS do the chi-square test?
| Can SPSS perform chi-square test on an existing contingency table? | CC BY-SA 3.0 | null | 2011-05-20T02:43:38.253 | 2012-03-02T23:28:25.467 | null | null | 1663 | [
"spss",
"contingency-tables"
] |
11013 | 2 | null | 2230 | 51 | null | I know this question has already been answered (and quite well, in my view), but there was a different question [here](https://stats.stackexchange.com/questions/10964/in-convergence-in-probability-or-a-s-convergence-w-r-t-which-measure-is-the-prob) which had a comment @NRH that mentioned the graphical explanation, and rather than put the pictures [there](https://stats.stackexchange.com/questions/10964/in-convergence-in-probability-or-a-s-convergence-w-r-t-which-measure-is-the-prob) it would seem more fitting to put them here.
So, here goes. It's not as cool as an R package. But it's self-contained and doesn't require a subscription to JSTOR.
In the following we're talking about a simple random walk, $X_{i}= \pm 1$ with equal probability, and we are calculating running averages,
$$
\frac{S_{n}}{n} = \frac{1}{n}\sum_{i = 1}^{n}X_{i},\quad n=1,2,\ldots.
$$

The SLLN (convergence almost surely) says that we can be 100% sure that this curve stretching off to the right will eventually, at some finite time, fall entirely within the bands forever afterward (to the right).
The R code used to generate this graph is below (plot labels omitted for brevity).
```
n <- 1000; m <- 50; e <- 0.05
s <- cumsum(2*(rbinom(n, size=1, prob=0.5) - 0.5))
plot(s/seq.int(n), type = "l", ylim = c(-0.4, 0.4))
abline(h = c(-e,e), lty = 2)
```
---

The WLLN (convergence in probability) says that a large proportion of the sample paths will be in the bands on the right-hand side at time $n$ (for the above it looks like around 48 or 49 out of 50). We can never be sure that any particular curve will be inside at any finite time, but looking at the mass of noodles above it'd be a pretty safe bet. The WLLN also says that we can make the proportion of noodles inside as close to 1 as we like by making the plot sufficiently wide.
The R code for the graph follows (again, skipping labels).
```
x <- matrix(2*(rbinom(n*m, size=1, prob=0.5) - 0.5), ncol = m)
y <- apply(x, 2, function(z) cumsum(z)/seq_along(z))
matplot(y, type = "l", ylim = c(-0.4,0.4))
abline(h = c(-e,e), lty = 2, lwd = 2)
```
| null | CC BY-SA 3.0 | null | 2011-05-20T02:47:18.010 | 2011-05-20T12:06:18.283 | 2017-04-13T12:44:37.420 | -1 | null | null |
11015 | 1 | null | null | 2 | 146 | The latest general frameworks I know of for MCMC-based wrapper methods (doing variable selection and clustering simultaneously) are the paper "[Bayesian variable selection in clustering high-dimensional data](http://www18.georgetown.edu/data/people/mgt26/publication-29809.pdf)" by Tadesse et al. (2005) and the paper "[Variable selection in clustering via Dirichlet process mixture models](http://www18.georgetown.edu/data/people/mgt26/publication-29804.pdf)" by Kim et al. (2006).
I wonder if there are any new developments in this area?
In particular, has anyone tried to extend the model of Tadesse?
| New development in variable selection in clustering using MCMC? | CC BY-SA 3.0 | null | 2011-05-20T03:29:33.120 | 2016-12-09T08:41:34.543 | 2016-12-09T08:41:34.543 | 113090 | 4683 | [
"clustering",
"references",
"feature-selection",
"markov-chain-montecarlo"
] |
11016 | 2 | null | 11009 | 9 | null | Arguably, it depends on what you're using your model for. But I've never seen a reason not to run and describe models with main effects, even in cases where the hypothesis is only about the interaction.
| null | CC BY-SA 3.0 | null | 2011-05-20T03:42:34.577 | 2011-05-20T03:42:34.577 | null | null | 3748 | null |
11017 | 2 | null | 10975 | 7 | null | One way to include is to include an indexed transformation. One general way is to use any symmetric (inverse) cumulative distribution function, so that $F(0)=0.5$ and $F(x)=1-F(-x)$. One example is the standard student t distribution, with $\nu$ degrees of freedom. The parameter $v$ controls how quickly the transformed variable wanders off to infinity. If you set $v=1$ then you have the arctan transform:
$$x=arctan\left(\frac{\pi[2p-1]}{2}\right)$$
This is much more extreme than arcsine, and more extreme than logit transform. Note that logit transform can be roughly approximated by using the t-distribution with $\nu\approx 8$. SO in some way it provides an approximate link between logit and probit ($\nu=\infty$) transforms, and an extension of them to more extreme transformations.
The problem with these transforms is that they give $\pm\infty$ when the observed proportion is equal to $1$ or $0$. So you need to somehow shrink these somehow - the simplest way being to add $+1$ "successes" and $+1$ "failures".
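The family above can be sketched directly with R's t quantile function (the counts used for shrinkage below are illustrative):

```r
# Transform a shrunken proportion with t quantiles of varying nu:
# nu = 1 is the Cauchy/arctan transform, nu ~ 8 roughly tracks the logit
# (up to scale), and nu = Inf is the probit. Smaller nu has heavier tails.
p <- (0 + 1) / (10 + 2)   # 0 successes in 10 trials, with +1/+1 shrinkage
qt(p, df = 1)             # most extreme
qt(p, df = 8)             # approximately proportional to the logit transform
qnorm(p)                  # probit, i.e. qt(p, df = Inf)
```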
| null | CC BY-SA 3.0 | null | 2011-05-20T03:46:23.310 | 2011-05-20T03:46:23.310 | null | null | 2392 | null |
11018 | 1 | null | null | 9 | 3707 | Let's aim for some at an introductory level, some articles and some textbooks. Applied is more helpful, including R code is great. Thanks!
| Recommend references on survey sample weighting | CC BY-SA 3.0 | null | 2011-05-20T03:54:39.277 | 2017-09-19T15:29:55.590 | 2017-09-19T15:18:01.470 | 5739 | 3748 | [
"sampling",
"references",
"survey-weights",
"survey-sampling"
] |
11019 | 1 | 11046 | null | 3 | 3023 | I am doing text classification, and have been playing around with different classifiers. However I have a pretty basic question: what if a new unseen document comes in and it happens not to belong to any of the pre-existing classes? The classifiers that I have seen (in WEKA, libsvm, etc.) still go ahead and put the unseen document in one of the existing classes anyway.
This situation comes up pretty frequently in my work. How can I handle it sensibly?
On a related note, is there a way I could get a sense of how confident a classifier is regarding its classification decision?
================================Edit 1======================================
After reading the comments here, I think I am going to try one of the two things:
(1) Use the approach described in [Zadrozny,Elkan](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.13.7457) paper that Steffen pointed me to, in order to get the probability estimates. If the probabilities are less than a magic threshold, I could then simply discard the unseen instance as noise.
(2) I am increasingly starting to think that I could instead handle this as n 1-class problems. Let's say that I have n classes in my data. I could build and train n 1-class classifiers that give a yes/no decision (whether an instance belongs to a particular class or not). That way, when a new instance comes in, I could just pass it through each of these 1-class classifiers.
Thoughts? Any implementations/packages anyone is aware of that would let me do (2)?
===========================================================================
| How does a classifier handle unseen documents that do not belong to any of the pre-existing classes? | CC BY-SA 3.0 | null | 2011-05-20T04:07:25.990 | 2011-05-20T15:25:44.807 | 2011-05-20T14:37:42.537 | 3301 | 3301 | [
"machine-learning",
"classification",
"text-mining"
] |
11020 | 2 | null | 11009 | 20 | null | The reason to keep the main effects in the model is for identifiability. Hence, if the purpose is statistical inference about each of the effects, you should keep the main effects in the model. However, if your modeling purpose is solely to predict new values, then it is perfectly legitimate to include only the interaction if that improves predictive accuracy.
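A small simulated sketch of the identifiability point (two 2-level factors with hypothetical effect sizes): with factors, the interaction-only model is a cell-means-style reparameterization that spans the same fitted values as the full model, so predictions agree, but its coefficients no longer separate the main effects from the interaction:

```r
# Hypothetical main effects and interaction for two 2-level factors.
set.seed(42)
A <- gl(2, 50, labels = c("a1", "a2"))
B <- gl(2, 25, 100, labels = c("b1", "b2"))
y <- 0.5 * (A == "a2") - 0.3 * (B == "b2") +
     1.0 * (A == "a2") * (B == "b2") + rnorm(100, sd = 0.2)
full <- lm(y ~ A * B)   # main effects + interaction: effects identified
only <- lm(y ~ A:B)     # interaction term only: same model space
all.equal(fitted(full), fitted(only))   # identical predictions
coef(only)  # cell-mean-like contrasts (possibly with an NA from rank
            # deficiency) that mix main effects with the interaction
```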
| null | CC BY-SA 3.0 | null | 2011-05-20T04:51:11.040 | 2011-05-20T04:51:11.040 | null | null | 1945 | null |
11021 | 1 | 11158 | null | 6 | 1560 | It has come to my attention that companies will rank resumes based on buzz words and only look at those that have high scores assuming enough people submit for the position.
I don't like this system but still want a job, so instead of inserting these meaningless terms into my actual resume I've compiled a list of such terms and am going to incorporate the list into my resume in a text box with white text.
The problem is that I don't know how those scoring engines work (is there an application most employers use?), and so I can't be sure whether I should maximize the terms I put in, repeat terms, or maximize the effective terms put in, since the number of words matters.
I have entered my resume on [Rezscore](http://rezscore.com/) with and without the buzz word metadata and found my resume score significantly lowered with buzz words because it was so verbose.
Does anyone have more knowledge of these resume ranking systems?
| Resume buzz words | CC BY-SA 3.0 | null | 2011-05-20T04:57:27.690 | 2022-12-02T15:42:33.643 | null | null | 4685 | [
"data-mining",
"careers"
] |
11022 | 2 | null | 11009 | 4 | null | This one is tricky and happened to me in my last project. I would explain it this way: lets say you had variables A and B which came out significant independently and by a business sense you thought that an interaction of A and B seems good. You included the interaction which came out to be significant but B lost its significance. You would explain your model initially by showing two results. The results would show that initially B was significant but when seen in light of A it lost its sheen. So B is a good variable but only when seen in light of various levels of A (if A is a categorical variable). Its like saying Obama is a good leader when seen in the light of its SEAL army. So Obama*seal will be a significant variable. But Obama when seen alone might not be as important. (No offense to Obama, just an example.)
| null | CC BY-SA 3.0 | null | 2011-05-20T05:31:47.087 | 2016-07-28T15:39:32.670 | 2016-07-28T15:39:32.670 | 41294 | 1763 | null |
11023 | 2 | null | 3484 | 0 | null | when i joined analytics industry( just out of my own interest) after serving software for 5 yrs..I didnt know SAS either..I got some version from somewhere and started writing codes on my own. Yes, I had programming background before that..I knew SQL, I knew general programming. I would suggest you visit tutorials and start writing codes yourself. The version is something you should get first but. Read SQL. Know everything from select * to joins to merge..tommorow if some interviewer gives you a loop or a join(left,right, full)..or some function like 1) contains 2) coalesce 3) sum, min, max, average
4) merge( in=a) (in=b) ..bla bla..you should be in decent condition that you are gonna ace it. These are just some bits from my side..apart from this you could also focus on reading things like regression analysis, MLE and OLS methods..this would show the interviewer that though this guy didnt have SAS facility he is good on general concepts..All I am preaching here is what i practiced.
| null | CC BY-SA 3.0 | null | 2011-05-20T06:18:32.570 | 2011-05-20T06:18:32.570 | null | null | 1763 | null |
11024 | 2 | null | 11019 | 2 | null | In a lot of the classification algorithms (note: I do mean [classification](http://en.wikipedia.org/wiki/Statistical_classification) in the 'classical' sense here), it is silently assumed that the classes are complete, i.e. every observation must belong to one of the classes. So in that sense, the situation you are describing should not occur.
A kind of solution, if your classes are not complete, is to provide a 'garbage' class. This may or may not work, depending on your situation: the items that get assigned to this class can be of a diverse nature, so if you take this class along in your classification algorithm, it may not work as expected. If your 'real' classes are strongly predicted, it could still work.
If you roll your own algorithm, you could exclude this class from the algorithm and just throw the non-fitting observations 'in the garbage', but I guess you would be violating some assumptions then (so guarantees provided by some methods would no longer be valid).
A relatively easy way to assess the confidence of a classifier is to do [cross-validation](http://en.wikipedia.org/wiki/Cross-validation_%28statistics%29) and look at the misclassification rate.
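As a rough sketch of the 'garbage'/rejection idea (the 0.7 cutoff and the logistic model below are my own arbitrary choices, not something from the question): instead of forcing every observation into a class, you can refuse to classify observations whose predicted class probability is low.

```r
## Sketch: reject low-confidence predictions instead of forcing a class.
set.seed(1)
n <- 100
x <- rnorm(n)
y <- rbinom(n, 1, plogis(2 * x))            # two 'real' classes
fit  <- glm(y ~ x, family = binomial)
p    <- predict(fit, type = "response")     # P(class == 1)
conf <- pmax(p, 1 - p)                      # confidence in the chosen class
pred <- ifelse(conf < 0.7, NA, as.integer(p > 0.5))  # NA = 'garbage'
table(pred, useNA = "ifany")
```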
| null | CC BY-SA 3.0 | null | 2011-05-20T07:01:08.583 | 2011-05-20T07:01:08.583 | null | null | 4257 | null |
11025 | 2 | null | 11000 | 5 | null | I can think of two ways. One is to combine the data, use `na.exclude`, and then separate the data again:
```
A = matrix(1:20, nrow=10, ncol=2)
colnames(A) <- paste("A",1:ncol(A),sep="")
B = matrix(1:10, nrow=10, ncol=1)
colnames(B) <- paste("B",1:ncol(B),sep="")
C <- cbind(A,B)
C[1,1] <- NA
C.ex <- na.exclude(C)
A.ex <- C.ex[,colnames(A)]
B.ex <- C.ex[,colnames(B)]
lm(A.ex~B.ex)
```
Another way is to use the `data` argument and create a formula.
```
Cd <- data.frame(C)
fr <- formula(paste("cbind(",paste(colnames(A),collapse=","),")~",paste(colnames(B),collapse="+"),sep=""))
lm(fr,data=Cd)
Cd[1,1] <-NA
lm(fr,data=Cd,na.action=na.exclude)
```
If you are doing a lot of regressions, the first way should be faster, since less background magic is performed. If you need only coefficients and residuals, I suggest using `lsfit`, which is much faster than `lm`. The second way is a bit nicer, but on my laptop trying to call `summary` on the resulting regression throws an error. I will try to see whether this is a bug.
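To illustrate the `lsfit` suggestion, here is a minimal sketch (with my own made-up data) showing that `lsfit` returns the same coefficients as `lm` while skipping the formula/model-frame machinery:

```r
## lsfit() vs lm(): same least-squares answer, less overhead.
set.seed(7)
X <- matrix(rnorm(200), nrow = 100, ncol = 2)
y <- drop(X %*% c(1, -2)) + rnorm(100)
fit_ls <- lsfit(X, y)                 # adds an intercept by default
fit_lm <- lm(y ~ X)
all.equal(unname(fit_ls$coefficients), unname(coef(fit_lm)))  # should be TRUE
```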
| null | CC BY-SA 3.0 | null | 2011-05-20T07:28:22.837 | 2011-05-20T07:28:22.837 | null | null | 2116 | null |
11026 | 1 | null | null | 2 | 93 | My problem is to figure out an algorithm for summarising wind statistics over an area. The statistics are (24x) hourly but I want to end up with 2-3 'blocks' that summarise the conditions for the whole day.
My data is in the form of separate magnitude and direction NumPy arrays but I could convert them to u,v or polar coordinates or whatever if it made the solution easier.
There are other constraints, such as each block having to be at least 3 hours long, but the general problem is the trade-off between accuracy and precision: in a way, I want to maximise the similarity of the statistics inside each 'block' and the difference between the 'blocks'.
Anyway, I don't have a background in statistics and am wondering if there is an array-based approach I could use, maybe involving something like a minimum bounding box or convex hull?
I'm not looking for a complete algorithm but some hints as to what I should be searching for to do further research.
| Summarising wind vector statistics over time | CC BY-SA 3.0 | null | 2011-05-20T08:02:31.720 | 2011-05-20T08:02:31.720 | null | null | 4686 | [
"algorithms",
"python",
"descriptive-statistics"
] |
11028 | 2 | null | 11000 | 31 | null | Edit: I misunderstood your question. There are two aspects:
a) `na.omit` and `na.exclude` both do casewise deletion with respect to both predictors and criteria. They differ only in that extractor functions like `residuals()` or `fitted()` will pad their output with `NA`s for the omitted cases with `na.exclude`, thus having an output of the same length as the input variables.
```
> N <- 20 # generate some data
> y1 <- rnorm(N, 175, 7) # criterion 1
> y2 <- rnorm(N, 30, 8) # criterion 2
> x <- 0.5*y1 - 0.3*y2 + rnorm(N, 0, 3) # predictor
> y1[c(1, 3, 5)] <- NA # some NA values
> y2[c(7, 9, 11)] <- NA # some other NA values
> Y <- cbind(y1, y2) # matrix for multivariate regression
> fitO <- lm(Y ~ x, na.action=na.omit) # fit with na.omit
> dim(residuals(fitO)) # use extractor function
[1] 14 2
> fitE <- lm(Y ~ x, na.action=na.exclude) # fit with na.exclude
> dim(residuals(fitE)) # use extractor function -> = N
[1] 20 2
> dim(fitE$residuals) # access residuals directly
[1] 14 2
```
b) The real issue is not this difference between `na.omit` and `na.exclude`: you don't seem to want casewise deletion that takes criterion variables into account, which both do.
```
> X <- model.matrix(fitE) # design matrix
> dim(X) # casewise deletion -> only 14 complete cases
[1] 14 2
```
The regression results depend on the matrices $X^{+} = (X' X)^{-1} X'$ (the pseudoinverse of the design matrix $X$; coefficients $\hat{\beta} = X^{+} Y$) and the hat matrix $H = X X^{+}$ (fitted values $\hat{Y} = H Y$). If you don't want casewise deletion, you need a different design matrix $X$ for each column of $Y$, so there's no way around fitting separate regressions for each criterion. You can try to avoid the overhead of `lm()` by doing something along the lines of the following:
```
> Xf <- model.matrix(~ x) # full design matrix (all cases)
# function: manually calculate coefficients and fitted values for single criterion y
> getFit <- function(y) {
+ idx <- !is.na(y) # throw away NAs
+ Xsvd <- svd(Xf[idx , ]) # SVD decomposition of X
+ # get X+ but note: there might be better ways
+ Xplus <- tcrossprod(Xsvd$v %*% diag(Xsvd$d^(-2)) %*% t(Xsvd$v), Xf[idx, ])
+ list(coefs=(Xplus %*% y[idx]), yhat=(Xf[idx, ] %*% Xplus %*% y[idx]))
+ }
> res <- apply(Y, 2, getFit) # get fits for each column of Y
> res$y1$coefs
[,1]
(Intercept) 113.9398761
x 0.7601234
> res$y2$coefs
[,1]
(Intercept) 91.580505
x -0.805897
> coefficients(lm(y1 ~ x)) # compare with separate results from lm()
(Intercept) x
113.9398761 0.7601234
> coefficients(lm(y2 ~ x))
(Intercept) x
91.580505 -0.805897
```
Note that there might be numerically better ways to calculate $X^{+}$ and $H$; you could use a $QR$ decomposition instead. The SVD approach is explained [here on SE](https://stats.stackexchange.com/questions/6731/using-singular-value-decomposition-to-compute-variance-covariance-matrix-from-lin/6732#6732). I have not timed the above approach with big matrices $Y$ against actually using `lm()`.
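For completeness, here is a sketch of the $QR$ route for a single criterion (on simulated data of my own; `qr.coef()` and `qr.fitted()` solve the least-squares problem without ever forming $X^{+}$ or $H$ explicitly):

```r
## QR-based fit for one criterion column with NAs thrown away,
## matching lm()'s default na.omit handling.
set.seed(1)
N  <- 20
x  <- rnorm(N)
y1 <- 2 + 0.5 * x + rnorm(N)
y1[c(1, 3, 5)] <- NA                       # some missing criterion values
idx <- !is.na(y1)
Xf  <- model.matrix(~ x)                   # full design matrix
qrX <- qr(Xf[idx, , drop = FALSE])         # QR of the reduced design matrix
coefs <- qr.coef(qrX, y1[idx])             # beta-hat
yhat  <- qr.fitted(qrX, y1[idx])           # H y without forming H
all.equal(unname(coefs), unname(coef(lm(y1 ~ x))))  # should be TRUE
```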
| null | CC BY-SA 3.0 | null | 2011-05-20T09:30:15.167 | 2011-05-20T22:46:53.610 | 2017-04-13T12:44:28.813 | -1 | 1909 | null |
11029 | 2 | null | 2377 | 0 | null | I would rather model this data set using a leptokurtic distribution than use data transformations. I like the sinh-arcsinh distribution from Jones and Pewsey (2009), Biometrika.
| null | CC BY-SA 3.0 | null | 2011-05-20T09:56:36.523 | 2011-05-20T09:56:36.523 | null | null | 4688 | null |
11030 | 2 | null | 11018 | 6 | null | I guess one could start with Thomas Lumley's webpage "[Survey analysis in R](http://faculty.washington.edu/tlumley/survey/)". He is the author of an R package called `survey` and he has recently published a book about "[Complex Surveys: a guide to analysis using R](http://faculty.washington.edu/tlumley/svybook/)".
| null | CC BY-SA 3.0 | null | 2011-05-20T10:56:14.390 | 2011-05-20T10:56:14.390 | null | null | 307 | null |
11032 | 1 | null | null | 11 | 579 | What are the benefits of giving certain initial values to the transition probabilities in a hidden Markov model? Eventually the system will learn them, so what is the point of giving values other than random ones? Does the underlying algorithm, such as Baum–Welch, make a difference?
If I know the transition probabilities very accurately at the beginning, and my main purpose is to predict the output probabilities from the hidden states to the observations, what would you advise?
| Significance of initial transition probabilities in a hidden Markov model | CC BY-SA 3.0 | null | 2011-05-20T11:10:50.573 | 2011-07-01T11:17:37.883 | 2011-05-20T11:57:38.207 | null | 4001 | [
"machine-learning",
"expectation-maximization",
"hidden-markov-model"
] |
11033 | 1 | 11035 | null | 17 | 9931 | I have a model fitted (from the literature). I also have the raw data for the predictive variables.
What's the equation I should be using to get probabilities? Basically, how do I combine raw data and coefficients to get probabilities?
| How can I use logistic regression betas + raw data to get probabilities | CC BY-SA 3.0 | null | 2011-05-20T11:29:45.040 | 2011-05-27T07:01:11.133 | 2011-05-27T07:01:11.133 | 2116 | 333 | [
"regression",
"logistic"
] |
11034 | 2 | null | 11033 | 20 | null | The link function of a logistic model is $f: x \mapsto \log \tfrac{x}{1 - x}$. Its inverse is $g: x \mapsto \tfrac{\exp x}{1 + \exp x}$.
In a logistic model, the left-hand side is the logit of $\pi$, the probability of success:
$f(\pi) = \beta_0 + x_1 \beta_1 + x_2 \beta_2 + \ldots$
Therefore, if you want $\pi$ you need to evaluate $g$ at the right-hand side:
$\pi = g( \beta_0 + x_1 \beta_1 + x_2 \beta_2 + \ldots)$.
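In R, for instance, the inverse link $g$ is available as `plogis()`, so the computation is a one-liner (the coefficient and data values below are made up for illustration):

```r
## Probability from betas + raw data: p = g(eta) with g = inverse logit.
b0 <- -0.87; b1 <- -1.08          # hypothetical coefficients from the literature
x1 <- 1                           # a raw data value
eta   <- b0 + b1 * x1             # linear predictor, f(pi)
p_hat <- plogis(eta)              # g(eta) = exp(eta) / (1 + exp(eta))
p_hat
```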
| null | CC BY-SA 3.0 | null | 2011-05-20T11:39:22.537 | 2011-05-20T11:39:22.537 | null | null | 3019 | null |
11035 | 2 | null | 11033 | 15 | null | Here is the applied researcher's answer (using the statistics package R).
First, let's create some data, i.e. I am simulating data for a simple bivariate logistic regression model $log(\frac{p}{1-p})=\beta_0 + \beta_1 \cdot x$:
```
> set.seed(3124)
>
> ## Formula for converting logit to probabilities
> ## Source: http://www.statgun.com/tutorials/logistic-regression.html
> logit2prop <- function(l){exp(l)/(1+exp(l))}
>
> ## Make up some data
> y <- rbinom(100, 1, 0.2)
> x <- rbinom(100, 1, 0.5)
```
The predictor `x` is a dichotomous variable:
```
> x
[1] 0 1 1 1 1 1 0 1 0 1 0 1 0 0 1 1 1 0 1 1 1 1 1 1 0 0 1 1 1 1 0 0 1 0 0 1 0 0 0 1 1 1 0 1 1 1 1
[48] 1 1 0 1 0 0 0 0 1 0 0 1 1 0 0 0 0 1 0 0 1 1 1 0 0 1 0 0 0 0 1 1 0 1 0 1 0 1 1 1 1 1 0 1 0 0 0
[95] 1 1 1 1 1 0
```
Second, estimate the intercept ($\beta_0$) and the slope ($\beta_1$). As you can see, the intercept is $\beta_0 = -0.8690$ and the slope is $\beta_1 = -1.0769 $.
```
> ## Run the model
> summary(glm.mod <- glm(y ~ x, family = "binomial"))
[...]
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.8690 0.3304 -2.630 0.00854 **
x -1.0769 0.5220 -2.063 0.03910 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
[...]
```
Third, R, like most statistical packages, can compute the fitted values, i.e. the probabilities. I will use these values as reference.
```
> ## Save the fitted values
> glm.fitted <- fitted(glm.mod)
```
Fourth, this step directly refers to your question: We have the raw data (here: $x$) and we have the coefficients ($\beta_0$ and $\beta_1$). Now, let's compute the logits and save them in `glm.rdcm`:
```
> ## "Raw data + coefficients" method (RDCM)
## logit = -0.8690 + (-1.0769) * x
glm.rdcm <- -0.8690 + (-1.0769)*x
```
The final step is a comparison of the fitted values based on R's `fitted`-function (`glm.fitted`) and my "hand-made" approach (`logit2prop.glm.rdcm`). My own function `logit2prop` (see first step) converts logits to probabilities:
```
> ## Compare fitted values and RDCM
> df <- data.frame(glm.fitted, logit2prop(glm.rdcm))
> df[10:25,]
glm.fitted logit2prop.glm.rdcm.
10 0.1250000 0.1250011
11 0.2954545 0.2954624
12 0.1250000 0.1250011
13 0.2954545 0.2954624
14 0.2954545 0.2954624
15 0.1250000 0.1250011
16 0.1250000 0.1250011
17 0.1250000 0.1250011
18 0.2954545 0.2954624
19 0.1250000 0.1250011
20 0.1250000 0.1250011
21 0.1250000 0.1250011
22 0.1250000 0.1250011
23 0.1250000 0.1250011
24 0.1250000 0.1250011
25 0.2954545 0.2954624
```
| null | CC BY-SA 3.0 | null | 2011-05-20T12:30:31.097 | 2011-05-20T13:42:54.187 | 2011-05-20T13:42:54.187 | 307 | 307 | null |
11036 | 2 | null | 10965 | 0 | null | A point of clarification: in my opinion, all ARIMA forecasts are "one-step-ahead forecasts". In the fitting region (t<37) the fitted values are one-period-ahead forecasts. In the absence of data beyond t=36, the procedure is to use the PREDICTED VALUE as if it were the ACTUAL VALUE, thus bootstrapping the forecast. For example, the forecast for period 38 will be a one-step-ahead forecast in the sense that it treats the predicted value for period 37 as the "actual" for period 37. This is precisely why forecast accuracy measured from a single origin is tainted: the forecasts are correlated because they are obtained from a recursive scheme. If a one-period-out forecast is "wrong", subsequent "wrong" forecasts may ensue. One is always better off looking at forecast accuracies from a number of origins in order to assess model performance.
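One way to sketch the multiple-origins idea (the model order and simulated data here are my own toy choices, not from the thread): refit at each origin and collect the one-step-ahead errors.

```r
## Rolling-origin evaluation of one-step-ahead ARIMA forecasts.
set.seed(123)
y <- arima.sim(model = list(ar = 0.6), n = 60)
origins <- 40:55                           # several forecast origins
errors <- sapply(origins, function(t0) {
  fit <- arima(y[1:t0], order = c(1, 0, 0))
  fc  <- predict(fit, n.ahead = 1)$pred    # one-step-ahead forecast
  y[t0 + 1] - fc                           # forecast error at this origin
})
sqrt(mean(errors^2))                       # rolling-origin RMSE
```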
| null | CC BY-SA 3.0 | null | 2011-05-20T13:22:43.093 | 2011-05-20T13:40:57.760 | 2011-05-20T13:40:57.760 | 3382 | 3382 | null |