| Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9261 | 2 | null | 9240 | 10 | null | This is essentially a variant of the coupon collector's problem.
If there are $n$ items in total and you have taken a sample size $s$ with replacement then the probability of having identified $u$ unique items is
$$ Pr(U=u|n,s) = \frac{S_2(s,u) n! }{ (n-u)! n^s }$$
where $S_2(s,u)$ denotes the [Stirling numbers of the second kind](http://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind).
Now all you need is a prior distribution $Pr(N=n)$; apply Bayes' theorem and you get a posterior distribution for $N$.
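If you want to evaluate this likelihood numerically, here is a small sketch (Python, purely for illustration; the Stirling numbers are computed with the usual recurrence $S_2(i,j) = j\,S_2(i-1,j) + S_2(i-1,j-1)$):

```python
from math import factorial

def stirling2(s, u):
    """Stirling number of the second kind, S2(s, u), via the recurrence."""
    row = [1] + [0] * u                      # row for i = 0: S2(0, 0) = 1
    for i in range(1, s + 1):
        new = [0] * (u + 1)
        for j in range(1, min(i, u) + 1):
            new[j] = j * row[j] + row[j - 1]
        row = new
    return row[u]

def prob_unique(u, n, s):
    """Pr(U = u | n, s): u unique items after s draws with replacement from n."""
    return stirling2(s, u) * factorial(n) / (factorial(n - u) * n ** s)

# Sanity check: with n = 2 items and s = 2 draws, the two draws are
# distinct with probability 1/2
print(prob_unique(2, 2, 2))
```

Multiplying these likelihoods by a prior over $n$ and normalising then gives the posterior.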
| null | CC BY-SA 2.5 | null | 2011-04-06T11:14:25.150 | 2011-04-06T11:14:25.150 | null | null | 2958 | null |
9262 | 2 | null | 9233 | 3 | null | I am not sure I understand this bit: "Unfortunately, since the response is the mean value (without looking at the mean, just regular inter-annual variation will swamp the signal)"
With careful modelling, it seems to me you could gain a lot by modelling this as panel data. Depending on the spatial scope of your data, there may be large differences in the temperatures that your data points were exposed to within any given year. Averaging all these variations seems costly.
| null | CC BY-SA 2.5 | null | 2011-04-06T11:50:08.220 | 2011-04-06T11:50:08.220 | null | null | 2044 | null |
9263 | 1 | 9270 | null | 6 | 752 | I am using SPSS and having some trouble with a research question which is analogous to the hypothetical question:
Is there a longitudinal Relationship between Happiness and Chocolate Consumption?
Let’s say I take a sample of people and contact them when they are aged 14 and aged 18 and ask them:
a) What is your chocolate consumption in grams per day?
b) Are you happy?
I have my fictitious data in the following wide format:
```
ID  HAPPY.14  HAPPY.18  CHOC.14  CHOC.18
1   YES       YES       100      5
2   YES       NO        50       30
3   NO        YES       30       50
etc.
```
I would like to know if the mean chocolate consumption per day is higher among happy people than those who are not happy while accounting for the fact that I have taken repeated measures of both chocolate and happiness at the two time points.
Approach 1
I suppose one way of doing this would be to do an ANCOVA, using time (before/after) as a grouping variable and controlling for happiness status. However, I think this may be inadvisable, as the correlation between the two time points would be neglected.
Approach 2
I understand that one valid approach for this should be a repeated-measures ANOVA. I'm just not sure how to do this correctly in SPSS. I have specified my within-subjects factor as chocolate consumption, with the age 14 data as level 1 and the age 18 data as level 2.
What I'm uncertain about is the next step – specifying covariates and between-subjects factors. I have the option of adding HAPPY.14 and/or HAPPY.18 as a between-subjects factor. If I add both, the output tells me about the effect of HAPPY.14 and HAPPY.18, as you'd expect, not about the "effect" of happy (YES/NO) per se.
I realise it's a basic question. Any feedback on either of the two approaches would be greatly appreciated.
| Longitudinal relationship between chocolate consumption and happiness: repeated measures ANOVA? | CC BY-SA 2.5 | null | 2011-04-06T12:20:49.227 | 2011-04-07T01:53:19.273 | 2011-04-06T14:27:19.523 | 183 | 4054 | [
"anova",
"spss",
"ancova"
] |
9264 | 2 | null | 9263 | 2 | null | You may think about happiness as the dependent variable, and you could use logistic regression with chocolate consumption as a predictor. Some people may be generally happier or less happy independently from chocolate consumption. This can be modelled by including subject id as a random effect categorical predictor. Age might also influence happiness. After these the model would look like this: `logit(happy) ~ choc + age + id`, where age is either 14 or 18, and the data are in the long format, a mixed effect logistic regression including a random categorical, a fixed categorical and a continuous predictor. (As an analogue of the repeated measures approach you could use a covariance pattern model, where id is not a predictor, but used in the specification of the covariance.)
Alternatively chocolate consumption can be regarded as dependent variable. `choc ~ happy + age + id` could be the model (long data format), where id is a random effect, mixed effect ANOVA; or `choc ~ happy + age`, where repeated measures are considered, repeated measures ANOVA.
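Either way you will want the data in long format first; a minimal sketch of the reshape (Python, standard library only, purely for illustration, using the fictitious numbers from the question):

```python
# Hypothetical wide-format records, as in the question
wide = [
    {"ID": 1, "HAPPY.14": "YES", "HAPPY.18": "YES", "CHOC.14": 100, "CHOC.18": 5},
    {"ID": 2, "HAPPY.14": "YES", "HAPPY.18": "NO",  "CHOC.14": 50,  "CHOC.18": 30},
    {"ID": 3, "HAPPY.14": "NO",  "HAPPY.18": "YES", "CHOC.14": 30,  "CHOC.18": 50},
]

# One row per person-age combination, ready for a mixed-model fit
long = [
    {"ID": r["ID"], "age": age,
     "happy": r["HAPPY.%d" % age], "choc": r["CHOC.%d" % age]}
    for r in wide for age in (14, 18)
]
for row in long:
    print(row)
```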
I have no idea if happiness causes increased/decreased chocolate consumption or vice versa. You are safe asking about a "relationship" between the two.
| null | CC BY-SA 2.5 | null | 2011-04-06T13:03:53.853 | 2011-04-06T14:10:11.800 | 2011-04-06T14:10:11.800 | 3911 | 3911 | null |
9265 | 2 | null | 9260 | 3 | null | A simple approach would be to assume that each shipment will meet the specification with probability $p = 1 - 20\% = 80\%$. The number of shipments meeting the specification, $k$, will then follow a [binomial distribution](http://en.wikipedia.org/wiki/Binomial_distribution): $k \sim B(15, 0.8)$, and the expected value will be $n \cdot p = 15 \cdot 80\% = 12$. The standard error of this estimate is the standard deviation of the binomial distribution, $\sqrt{np(1-p)} = 1.55$; note, however, that $k$ is not normally distributed.
A more complicated approach would account for the fact that the 20% fail rate is only an estimate based on "recent history". The actual fail rate may be somewhat lower or higher, so the above approach underestimates the uncertainty of the expected value. As we don't know exactly where the 20% came from, we cannot quantify this extra uncertainty.
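For what it's worth, the arithmetic of the simple approach (a minimal sketch):

```python
import math

n, p = 15, 0.80                    # shipments and per-shipment success rate
mean = n * p                       # expected number meeting the spec
sd = math.sqrt(n * p * (1 - p))    # binomial standard deviation
print(mean, round(sd, 2))          # 12.0 and 1.55
```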
| null | CC BY-SA 2.5 | null | 2011-04-06T13:25:03.273 | 2011-04-06T13:25:03.273 | null | null | 3911 | null |
9266 | 1 | null | null | 3 | 270 | Given a collection of sets, which have an inherent but unknown (at runtime) hierarchy, I would like to cluster them based on the sub-/super-set relationships with respect to their elements. Let me try and illustrate this with an overly simplified example:
```
Set 1 = {a, b, c, d, e, f}
Set 2 = {a, b}
Set 3 = {a, b, c, d}
Set 4 = {a, c, d, f, g, h}
Set 5 = {d, f}
```
In this example, there would be two main clusters with the following relations:
cluster 1:
Set 1 $\supset$ Set 3 $\supset$ Set 2;
Set 1 $\supset$ Set 5
... and cluster 2:
Set 4 $\supset$ Set 5
The way I see it, the complications here compared to a standard clustering approach are:
1) I cannot come up with a good measure of correlation between the sets that are to be clustered. I was initially thinking of using the number of common elements, but then the following scenario (which is essentially rather likely) complicates things:
$|Set_1 \cap Set_2| = 10$
$|Set_1 \cap Set_3| = 10$
$|Set_3 \cap Set_2| = 0$
2) In theory there is no reason for a small set to not be sub-set under more than one superset. This effectively makes any Tree-based data structure unusable, or am I mistaken on this point?
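To make the structure concrete, here is a small sketch (Python rather than Java, purely to illustrate the idea) that recovers exactly the relations listed above from the example sets. The transitive reduction gives the direct edges, and the result is indeed a DAG rather than a tree, since Set 5 sits under both Set 1 and Set 4:

```python
# The example sets from above
sets = {
    "Set 1": {"a", "b", "c", "d", "e", "f"},
    "Set 2": {"a", "b"},
    "Set 3": {"a", "b", "c", "d"},
    "Set 4": {"a", "c", "d", "f", "g", "h"},
    "Set 5": {"d", "f"},
}

# All strict subset relations -- a DAG, not a tree
edges = {(sup, sub) for sup in sets for sub in sets
         if sup != sub and sets[sub] < sets[sup]}

# Transitive reduction: drop (a, c) when (a, b) and (b, c) both exist
hasse = {(a, c) for (a, c) in edges
         if not any((a, b) in edges and (b, c) in edges for b in sets)}
print(sorted(hasse))
```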
I did some googling and checked both StackOverflow and here briefly, but haven't really found anything useful. Before I start implementing something in Java from scratch, I was wondering if anyone had ideas or previous experience with something like this. If there are libraries/functions one can use for this purpose that would be pretty cool, although I doubt there is something like that written in Java.
I know that most of you use R, but as I said, the rest of the software is written in Java so I'd prefer to keep things there, if at all possible.
Thanks,
EDIT: Following @whuber's comments I'll try and clarify the question further. I believe a significant portion of the reasoning behind the question got lost when I tried to generalize and abstract the concept.
So here it goes:
The sets mentioned above are gene/protein sets, and the elements are genes/proteins. As these entities work in connection with one another, one speaks of functional groups/sets. However, the databases that hold this data usually have a high degree of redundancy, in the sense that Set A often has all the elements of Sets B, C, etc. My whole project is based on analyzing these sets, and when I am done with the analysis I have a long list of these sets with associated scores. High-scoring sets sometimes cluster together, but they may or may not be in the same super-sets. Hence the need/desire to cluster them in a structure like a dendrogram, so that one can overlay the scoring data with the hierarchy data.
On a side-note: I was recommended by a colleague of mine to consider spectral clustering, on which I will read more in the coming days to see whether or not the method can be used here or not.
I hope these notes make things clearer; I'll do my best to develop the ideas further if necessary.
Thanks again!
| A smart way of clustering a collection of sets based on an inherent hierarchy | CC BY-SA 2.5 | null | 2011-04-06T13:52:40.207 | 2011-04-06T20:07:17.577 | 2011-04-06T18:37:19.810 | 3014 | 3014 | [
"clustering",
"algorithms"
] |
9267 | 1 | 9274 | null | 7 | 4085 | I'm in the process of running a repeated-measures ANOVA on two groups of participants from an experiment. In one group, there are 10 participants, and in the other group, there is only 1 participant. This is because the 1 participant is a patient who is being compared to the 10 other participants.
Now, my question is: is it fair/allowable to run an ANOVA comparing the patient with the other participants? I wasn't sure, so I ran it in SPSS, and it seems to work, with a significant result. Is this naughty?
| What's the minimum number of individuals in a group for repeated-measures ANOVA? | CC BY-SA 2.5 | null | 2011-04-06T14:16:14.613 | 2013-06-22T08:48:34.200 | 2011-04-06T19:24:02.030 | 3911 | 4055 | [
"anova",
"small-sample"
] |
9268 | 2 | null | 9240 | 4 | null | I have already given a suggestion based on Stirling numbers of the second kind and Bayesian methods.
For those who find Stirling numbers too large or Bayesian methods too difficult, a rougher method might be to use
$$E[U|n,s] = n\left( 1- \left(1-\frac{1}{n}\right)^s\right)$$
$$var[U|n,s] = n\left(1-\frac{1}{n}\right)^s + n^2 \left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)^s - n^2\left(1-\frac{1}{n}\right)^{2s} $$
and back-calculate using numerical methods.
For example, taking GaBorgulya's example with $s=300$ and an observed $U = 265$, this might give us an estimate of $\hat{n} \approx 1180$ for the population.
If that had been the population then it would have given us a variance for $U$ of about 25, and an arbitrary two standard deviations either side of 265 would be about 255 and 275 (as I said, this is a rough method). 255 would have given us an estimate for $n$ of about 895, while 275 would have given about 1692. The example's 1000 is comfortably within this interval.
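The numbers above can be reproduced with a short sketch (Python for illustration, with simple bisection standing in for the numerical back-calculation):

```python
import math

def e_u(n, s):
    """E[U | n, s]: expected number of unique items seen."""
    return n * (1 - (1 - 1 / n) ** s)

def var_u(n, s):
    """var[U | n, s], term by term as in the formula above."""
    return (n * (1 - 1 / n) ** s
            + n * n * (1 - 1 / n) * (1 - 2 / n) ** s
            - n * n * (1 - 1 / n) ** (2 * s))

# Back-calculate n so that E[U | n, 300] = 265, by bisection
# (e_u is increasing in n for fixed s)
s, u_obs = 300, 265
lo, hi = float(u_obs), 1e6
while hi - lo > 1e-6:
    mid = (lo + hi) / 2
    if e_u(mid, s) < u_obs:
        lo = mid
    else:
        hi = mid
print(round(lo), round(var_u(round(lo), s), 1))
```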
| null | CC BY-SA 2.5 | null | 2011-04-06T14:38:25.303 | 2011-04-06T14:38:25.303 | null | null | 2958 | null |
9269 | 1 | 9271 | null | 4 | 274 | I can't seem to do a comparison with a sum of numbers and get the right answer (lines alternate between my input and R output):
```
(.6 + .3 + .1) == 1
[1] FALSE
(.6 + .3 + .1)
[1] 1
1
[1] 1
```
I've tried with and without parenthesis, tried comparing to 1.0, tried using as.numeric, but can't get it to work.
| Why is R giving a bad result of addition? | CC BY-SA 2.5 | null | 2011-04-06T14:44:19.773 | 2011-04-06T15:24:10.977 | 2020-06-11T14:32:37.003 | -1 | null | [
"r"
] |
9270 | 2 | null | 9263 | 5 | null | You may need to clarify what you mean by

> "accounting for the fact that I have taken repeated measures..."

You say that

> "I would like to know if the mean chocolate consumption per day is higher among happy people than those who are not happy..."
This suggests to me that time is not really relevant to your research question. Thus, you could do one of the following.
- You could correlate mean happiness ([time1 + time2] / 2) with mean chocolate consumption.
- You could correlate happiness with chocolate consumption at a given time point.
- You could correlate happiness with chocolate consumption across times (e.g., 1 with 2).
A variant on the above would involve performing a regression or other predictive model predicting one variable from the other.
Alternatively, you may find that you can rephrase your research question more clearly to incorporate what you are interested in with regards to the effect of time.
- You could correlate chocolate change scores with happiness change scores.
- You could predict time 2 chocolate from time 1 chocolate and time 1 happiness to see whether time 1 happiness predicts over and above time 1 chocolate.
As a side point, while it may be an artificial example, it seems strange to measure happiness as a Yes / No variable. I would measure it as a scale. It is also a little strange talking about the mean of a Yes / No variable.
| null | CC BY-SA 2.5 | null | 2011-04-06T14:47:31.160 | 2011-04-07T01:53:19.273 | 2011-04-07T01:53:19.273 | 183 | 183 | null |
9271 | 2 | null | 9269 | 9 | null | You've fallen into the "Floating Point Trap." See the first chapter of the R Inferno: [http://www.burns-stat.com/pages/Tutor/R_inferno.pdf](http://www.burns-stat.com/pages/Tutor/R_inferno.pdf)
You want to test floating point numbers with `all.equal` instead, like this:
```
> all.equal(.6+.3+.1, 1)
[1] TRUE
```
| null | CC BY-SA 2.5 | null | 2011-04-06T14:57:58.367 | 2011-04-06T14:57:58.367 | null | null | 3601 | null |
9272 | 2 | null | 9269 | 9 | null | It is floating point rounding. You get
```
> round(.6 + .3 + .1, digits=16) == 1
[1] FALSE
> round(.6 + .3 + .1, digits=15) == 1
[1] TRUE
> (.6 + .3 + .1) - 1
[1] -1.11022302462516e-16
```
| null | CC BY-SA 2.5 | null | 2011-04-06T15:04:18.797 | 2011-04-06T15:04:18.797 | null | null | 2958 | null |
9273 | 2 | null | 9242 | 1 | null | The issue here is that hypothesis testing involves a null AND an alternative hypothesis, and therefore the rejection region is determined by both hypotheses.
Consider a simpler example. If you are investigating a process that possibly has a MEAN of zero, but could not have a mean of less than zero, then you might be interested in performing the following test
\begin{equation}
\begin{array}{c}
H_{0}: \mu = 0 \\
H_{1}: \mu > 0
\end{array}
\nonumber
\end{equation}
at level $\alpha$. Your rejection region for the null hypothesis is to the right of zero.
It is not impossible to get a sample mean that is negative, albeit with small probability. If you were to get a negative sample mean in your experiment, you would not question the veracity of the experiment.
Now consider your question. The reason that the rejection region for the F-statistic is on
the right is because of the alternative hypothesis in the one-way ANOVA. You are testing the hypothesis that
\begin{equation}
\begin{array}{c}
H_{0}: \sum \tau_{i}^{2} = 0 \\
H_{1}: \sum \tau_{i}^{2} \ne 0
\end{array}
\nonumber
\end{equation}
The null hypothesis dictates that you use the central F distribution, and the alternative hypothesis, which pushes the F distribution to the right when it is true, means that all of the Type I error probability must be located on the right.
Is it possible for the test statistic to be less than one? When the null hypothesis is true, it is certainly possible; just as in the previous example where it was possible for the test statistic to be negative even if the MEAN of the data is zero.
| null | CC BY-SA 2.5 | null | 2011-04-06T15:37:55.743 | 2011-04-06T15:43:48.640 | 2011-04-06T15:43:48.640 | 3805 | 3805 | null |
9274 | 2 | null | 9267 | 11 | null | Performing the ANOVA assumes that the nature and amount of variation in the hypothetical population represented by the one patient are the same as the variation in the hypothetical population represented by the ten other patients, and that all were obtained randomly and independently from their populations.
Under these assumptions a standard approach is to use data from the ten patients to compute a [prediction interval](http://en.wikipedia.org/wiki/Prediction_interval#Unknown_mean.2C_unknown_variance) for a single additional patient. The significance test only has to check whether the lone patient falls within that interval.
The prediction interval is simple to justify and compute. It depends only on the estimated mean $\bar{m}$ from the ten patients and on their estimated variance $s^2$. Let $x$ be the value of the lone patient. Under the null hypothesis (that all patients are drawn independently from a single population), the variance of $x - \bar{m}$ equals $s^2 + s^2/10$. If you further assume all variation is Normal--that's critical here because it's hard to check with just ten patients in a group--then $\frac{x - \bar{m}}{\sqrt{s^2 + s^2/10}}$ has a Student's t distribution with 9 degrees of freedom (9 is the df used to estimate $s^2$), allowing you to erect a confidence interval in the standard way for $x$.
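As a concrete illustration with made-up scores (the constant 2.262 is the 97.5% point of Student's t with 9 degrees of freedom):

```python
import math
import statistics

# Hypothetical scores: ten controls and one lone patient (made-up numbers)
controls = [12.1, 9.8, 11.4, 10.2, 13.0, 9.5, 10.8, 11.9, 10.1, 11.2]
patient = 16.5

m = statistics.mean(controls)
s2 = statistics.variance(controls)        # sample variance, 9 df
se = math.sqrt(s2 + s2 / len(controls))   # sd of x - m under the null
t_crit = 2.262                            # Student's t quantile, 97.5%, 9 df
lo, hi = m - t_crit * se, m + t_crit * se
print((round(lo, 2), round(hi, 2)), patient < lo or patient > hi)
```

Here the patient falls outside the 95% prediction interval, so the null (that the patient comes from the same population as the controls) would be rejected at the 5% level.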
If this prediction interval test disagrees with SPSS's result, I would not trust the SPSS ANOVA in this case.
| null | CC BY-SA 2.5 | null | 2011-04-06T15:56:21.583 | 2011-04-06T15:56:21.583 | null | null | 919 | null |
9275 | 1 | null | null | 2 | 1128 | I'm concerned about treating my data as gold, especially in areas of low data support, so I would like to apply [additive smoothing](http://en.wikipedia.org/wiki/Additive_smoothing). I'm then doing several things with this data, and one of them is [Pearson's chi-square test for independence](http://en.wikipedia.org/wiki/Pearson%27s_chi-square_test#Test_of_independence). Is it still acceptable to first do the smoothing and then do the chi-square test? The former is very Bayesian and the latter is very frequentist, so this at least feels somewhat strange.
However, it seems like smoothing should curtail the need to worry much about the data support or the need for something like Fisher's exact test.
My reasoning for wanting to use these competing philosophies is because the audience really wants to see p-values, but the Bayesian inside of me is fighting to be let loose. As such I came up with this compromise.
EDIT: To clarify how I envision the smoothing taking place, consider the following contingency table:
```
X=0 X=1
+-------+
Y=0| 2| 2|
+---+---+
Y=1| 3| 3|
+---+---+
```
So without smoothing we have:
$P(X=0)=\frac{5}{10}$, $P(X=1)=\frac{5}{10}$, $P(Y=0)=\frac{4}{10}$, $P(Y=1)=\frac{6}{10}$
With additive smoothing and say $\alpha=1$ we get:
$P(X=0)=\frac{6}{12}$, $P(X=1)=\frac{6}{12}$, $P(Y=0)=\frac{5}{12}$, $P(Y=1)=\frac{7}{12}$
I haven't yet decided how to handle the contingency table. One way would be to smooth each cell:
```
X=0 X=1
+-------+
Y=0| 3| 3|
+---+---+
Y=1| 4| 4|
+---+---+
```
| Is it OK to do additive smoothing before applying Pearson's chi-square test for independence? | CC BY-SA 4.0 | null | 2011-04-06T16:04:07.350 | 2023-02-05T05:16:27.797 | 2023-02-05T05:16:27.797 | 345611 | 2485 | [
"bayesian",
"chi-squared-test",
"smoothing",
"frequentist"
] |
9276 | 1 | null | null | 12 | 4256 | Background
I am designing a Monte Carlo simulation that combines the outputs of series of models, and I want to be sure that the simulation will allow me to make reasonable claims about the probability of the simulated outcome and the precision of that probability estimate.
The simulation will find the probability that a jury drawn from a specified community will convict a certain defendant. These are the steps of the simulation:
- Using existing data, generate a logistic probability model (M) by regressing “juror first ballot vote” on demographic predictors.
- Use Monte Carlo methods to simulate 1,000 versions of M (i.e., 1000 versions of the coefficients for the model parameters).
- Select one of the 1,000 versions of the model (Mi).
- Empanel 1,000 juries by randomly selecting 1,000 sets of 12 “jurors” from a “community” (C) of individuals with specified demographic characteristic distributions.
- Deterministically calculate the probability of a first ballot guilty vote for each juror using Mi.
- Render each "juror’s" probable vote into a determinate vote (based on whether it is greater or less than a randomly selected value between 0 and 1).
- Determine each "jury’s" “final vote” by using a model (derived from empirical data) of the probability a jury will convict, conditional on the proportion of jurors voting for conviction on the first ballot.
- Store the proportion of guilty verdicts for the 1000 juries (PGi).
- Repeat steps 3-8 for each of the 1,000 simulated versions of M.
- Calculate the mean value of PG and report that as the point estimate of the probability of conviction in C.
- Identify the 2.5 & 97.5 percentile values for PG and report that as 0.95 confidence interval.
I am currently using 1,000 jurors and 1,000 juries on the theory that 1,000 random draws from a probability distribution—demographic characteristics of C or versions of M—will fill out that distribution.
Questions
Will this allow me to accurately determine the precision of my estimate? If so, how many juries do I need to empanel for each PGi calculation to cover C's probability distribution (so I avoid selection bias); may I use fewer than 1,000?
Thank you so much for any help!
| Finding precision of Monte Carlo simulation estimate | CC BY-SA 2.5 | null | 2011-04-06T17:35:50.953 | 2011-04-11T16:37:21.353 | null | null | 4059 | [
"confidence-interval",
"monte-carlo",
"standard-error",
"simulation"
] |
9277 | 1 | 9284 | null | 5 | 1088 | I'm trying to use the `nnet` library in R, and can't seem to work out how to use the `reltol` parameter. It says in the docs:
>
Stop if the optimizer is unable to reduce the fit criterion by a factor of at least 1 - reltol.
I assume this means that if `reltol = 0.5` then it will stop if it can't halve the error rate, and if `reltol = 0.1` it will stop if it can't reduce the error rate by 10%. Is that correct? The experiments that I've done don't seem to work like that...
| Meaning and use of `reltol` in `nnet` library in R | CC BY-SA 2.5 | null | 2011-04-06T18:41:33.737 | 2011-04-06T21:36:21.523 | null | null | 261 | [
"r",
"machine-learning",
"neural-networks"
] |
9278 | 1 | null | null | 4 | 414 | Very basic question, I suspect. It's sort of the inverse of [this one](https://stats.stackexchange.com/questions/5092/representative-sampling), and the same disclaimers about my own ignorance apply, but here goes:
I have a sampling technology. I know the total population, and I know the number of samples I've done. What's the best way to calculate and express how confident I am that my results are correct?
A few more points in case they make a difference:
- My test checks whether an item is in one of three states (and they are the only possible states).
- I'm assuming that the test is truly a random distribution.
- In general, my total population is > 1,000,000 and my sample is ~20% of that, but it can vary quite a bit.
And a wrinkle (I hesitate to even put this in because I want to keep things simple):
- There's a slight error where my reported Total Population value might be slightly smaller than the actual total (by 5-10%). I can get the actual total if need be, but won't bother if it doesn't really make a difference.
Rough and simple / clear is probably better than more accurate if there are different ways of expressing this. I.e., something like "we're 95% confident that this is correct" is probably better than "we're 95% confident that this is correct to within 3%". (Am I talking about P?)
UPDATE:
Hmm... I think my "wrinkle" may have introduced confusion. I'm not trying to find out how accurate the total is. I can get the actual total, but I have to use a more convoluted method. My main question is: if I have 10,000 samples out of 1,000,000 items, how confident can I be that my sampled distribution among the three states is correct?
My secondary question (which is where the wrinkle comes in) is: given that the actual total is 5-10% larger than the one I'm using, how much of a difference does that make to my confidence? In other words, I'm saying 1,000,000 items, but it might actually be 1,100,000. Should I bother to go through the convoluted process to get the actual total, just to compute the confidence? To me, that seems unlikely to make a significant difference to my confidence level, but I thought I should check.
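On the secondary question, a rough normal-approximation calculation with a finite population correction (all numbers taken from the question) suggests the undercount barely matters:

```python
import math

def margin(N, n, p=0.5, z=1.96):
    """95% margin of error for a proportion, with finite population correction."""
    fpc = math.sqrt((N - n) / (N - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

moe = margin(1_000_000, 10_000)       # the stated total
moe_big = margin(1_100_000, 10_000)   # the actual total, 10% larger
print(round(moe, 4), round(moe_big, 4))
```

The margin changes only in the fifth decimal place when the population grows by 10%, so going through the convoluted process just to get the exact total is probably not worth it for this purpose.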
| How to check reliability of a given sampling technology? | CC BY-SA 3.0 | null | 2011-04-06T19:25:10.097 | 2011-08-24T12:45:53.140 | 2017-04-13T12:44:52.660 | -1 | 1531 | [
"sampling"
] |
9279 | 2 | null | 9266 | 1 | null | If I understood you correctly, you have a list of scores for subsets, and you want to identify the items that contribute most to a high score, but you cannot score arbitrary subsets.
This corresponds to a high-dimensional binary regression problem with features $item\in Subset$. You can run a linear or logistic regression on the dataset.
If you want a multiset or partition instead of an itemwise regression, you'll have to specify the goal and scoring model further.
| null | CC BY-SA 2.5 | null | 2011-04-06T20:01:32.527 | 2011-04-06T20:07:17.577 | 2011-04-06T20:07:17.577 | 2456 | 2456 | null |
9280 | 1 | 9669 | null | 4 | 812 | I have a spatial error model that I have estimated and would like to create a figure with the prediction interval. By spatial error model, I mean that the errors are spatially correlated and thus OLS is unbiased but inefficient (not a spatially auto-regressive model).
I used the geostatistical approach to spatial statistics and estimated a semivariogram for my data and used the semivariogram estimates to create a weighting matrix to account for the spatial autocorrelation in the data. I think the answer to this question though would help anyone using weights in their regression regardless of the exact structure of their weighting matrix.
I see from this [question](https://stats.stackexchange.com/questions/9131/obtaining-a-formula-for-prediction-limits-in-a-linear-model) that the formula for a 95% prediction interval for OLS is
$$
\hat{y} \pm 1.96 \hat{\sigma} \sqrt{1 + \mathbf{X}^* (\mathbf{X}'\mathbf{X})^{-1} (\mathbf{X}^*)'}.
$$
Given my limited intuition, my guess would be that the prediction interval for the case with weights would be this (where $\Omega$ is the weighting matrix):
$$
\hat{y} \pm 1.96 \hat{\sigma} \sqrt{1 + \mathbf{X}^* (\mathbf{X}'\mathbf{\Omega}^{-1}\mathbf{X})^{-1} (\mathbf{X}^*)'}
$$
and was hoping that I could get some feedback from the community here about the correctness or if I am even in the right ballpark.
| Formula for prediction interval for spatial regression | CC BY-SA 2.5 | null | 2011-04-06T20:54:39.407 | 2019-11-14T07:27:53.847 | 2017-04-13T12:44:20.840 | -1 | 3713 | [
"regression",
"spatial",
"prediction-interval"
] |
9281 | 1 | 9293 | null | 13 | 7079 | Hope this newbie question is the right question for this site:
Suppose I would like to compare the composition of ecological communities at two sites, A and B. I know both sites have dogs, cats, cows, and birds, so I sample their abundances at each site (I don't really have an "expected" abundance for each animal at each site).
If I count, say, five of each animal at each site, then A and B are very "similar" (in fact, they're the "same").
But if I find 100 dogs, 5 cats, 2 cows, and 3 birds at site A, and 5 dogs, 3 cats, 75 cows, and 2 birds at site B, then I would say that sites A and B are "dissimilar", even though they have exactly the same species composition.
(I read up on the Sorensen and Bray-Curtis indices, but it looks like they only consider the absence/presence of the dogs, cats, etc., and not their abundances.)
Is there a statistical test to determine this?
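One standard option is a chi-square test of homogeneity on the site-by-species count table; a sketch with the counts from the question (Python, standard library only):

```python
# Rows = sites A and B; columns = dogs, cats, cows, birds
table = [[100, 5, 2, 3],
         [5, 3, 75, 2]]
rows = [sum(r) for r in table]
cols = [sum(c) for c in zip(*table)]
total = sum(rows)

chi2 = sum((table[i][j] - rows[i] * cols[j] / total) ** 2
           / (rows[i] * cols[j] / total)
           for i in range(2) for j in range(4))
print(round(chi2, 1))
```

With 3 degrees of freedom the 5% critical value is about 7.81, so these two sites would be declared very different. (The small expected counts for cats and birds would argue for an exact test in practice; this is only to show the mechanics.)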
| What test to compare community composition? | CC BY-SA 3.0 | null | 2011-04-06T21:00:00.867 | 2017-01-02T16:44:46.163 | 2017-01-02T16:44:46.163 | 11887 | 2830 | [
"hypothesis-testing",
"distributions",
"correlation",
"multinomial-distribution",
"compositional-data"
] |
9282 | 2 | null | 9280 | 3 | null | Assuming $\Omega$ has been estimated with no appreciable error--which is rarely the case--the correct formula is given by the equation for the [kriging prediction error](http://en.wikipedia.org/wiki/Kriging). It needs to replace everything under the square root sign. What is missing from the second formula you give is any explicit accounting of the covariance between the value at the estimation location and the data values.
Some recent approaches, such as presented in Diggle & Ribeiro's "[Model-based Geostatistics](http://www.leg.ufpr.br/mbgbook/)" (implemented as an R package) have the additional merit of incorporating the estimation error of $\Omega$ in the prediction uncertainty.
| null | CC BY-SA 2.5 | null | 2011-04-06T21:13:52.667 | 2011-04-06T21:13:52.667 | null | null | 919 | null |
9283 | 1 | 9297 | null | 18 | 15588 | Could you recommend an easy to use or comprehensive conjoint analysis package for R?
| Conjoint Packages for R | CC BY-SA 2.5 | null | 2011-04-06T21:16:01.777 | 2017-05-19T09:42:06.330 | 2011-04-07T02:29:22.393 | 183 | 776 | [
"r",
"conjoint-analysis"
] |
9284 | 2 | null | 9277 | 4 | null | That is what it is supposed to mean.
A `reltol` parameter is common in optimisation functions in R and other mathematical and statistical systems.
In `nnet`, it has the default `reltol = 1.0e-8`. In the more often used `optim` in the `stats` library the statement is:
> reltol: Relative convergence tolerance. The algorithm stops if it is unable to reduce the value by a factor of `reltol * (abs(val) + reltol)` at a step. Defaults to `sqrt(.Machine$double.eps)`, typically about 1e-8.

You can see why it has `sqrt`, since it might be looking at an amount of about `reltol^2`.
| null | CC BY-SA 2.5 | null | 2011-04-06T21:36:21.523 | 2011-04-06T21:36:21.523 | null | null | 2958 | null |
9285 | 5 | null | null | 0 | null | In [statistical hypothesis testing](http://en.wikipedia.org/wiki/Statistical_hypothesis_testing), the size is the largest chance of rejecting the null when the null is true (a "false positive" error). The power is the chance of rejecting the null when it is false; it depends on the "effect size" (a measure of how far reality actually departs from the null). Ceteris paribus, the Type I and Type II error rates are inversely related (making the test more stringent lowers the false positive rate but raises the false negative rate), so considerations often focus on size, which is simpler to analyze.
When more than one hypothesis test is performed to make a binary decision, the chance of a false positive is usually greater than the size of any of the tests used for that decision. For example, suppose groups of "control" and "treatment" subjects are randomly selected from the same population and each subject is given a questionnaire comprising 20 yes-no questions. Let the groups be compared separately for each question using a test of size .05. If the comparisons are independent, then the chance of at least one of them rejecting the null equals $1 - (1 - 0.05)^{20} = 0.64$. Thus a nominal false positive rate of 0.05 in each test is inflated to a decision false positive rate of 0.64.
To avoid unacceptably large chances of reaching mistaken conclusions in such "multiple comparisons" cases, either an overall test of significance is initially conducted or the sizes of the individual tests leading up to the decision are decreased (that is, the tests are made more stringent). Examples of the former are the [F-test in an ANOVA](http://en.wikipedia.org/wiki/Analysis_of_variance#Follow_up_tests) setting and [Tukey's HSD test](http://en.wikipedia.org/wiki/Tukey%27s_range_test). Exemplary of the latter approach is the [Bonferroni correction](http://en.wikipedia.org/wiki/Bonferroni_correction).
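The inflation and the Bonferroni repair are easy to verify numerically (a minimal sketch):

```python
alpha, m = 0.05, 20

familywise = 1 - (1 - alpha) ** m      # chance of at least one false positive
per_test = alpha / m                   # Bonferroni-corrected per-test size
corrected = 1 - (1 - per_test) ** m    # familywise rate after correction
print(round(familywise, 2), round(corrected, 3))
```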
| null | CC BY-SA 2.5 | null | 2011-04-06T21:41:34.313 | 2011-04-06T21:41:34.313 | 2011-04-06T21:41:34.313 | 919 | 919 | null |
9286 | 4 | null | null | 0 | null | Signals situations where one is concerned about achieving intended power and size when more than one hypothesis test is performed. | null | CC BY-SA 2.5 | null | 2011-04-06T21:41:34.313 | 2011-04-06T21:41:34.313 | 2011-04-06T21:41:34.313 | 919 | 919 | null |
9287 | 2 | null | 9228 | 3 | null | There are at least two possibilities for this data. One possibility is that your microarrays contain no disease markers whatsoever. But, they do contain information about age, and since in your case the sick and control populations are of different age, you get the illusion of good classification performance. Another possibility is that the microarrays do contain disease markers, and, moreover, these markers are exactly what the SVM focuses on.
It seems like the principal components of the data may be correlated with age in both of these possibilities. In the first case it will be because age is what the data expresses. In the second case it will be because disease is what the data expresses, and this disease is itself correlated with age (for your dataset). I don't think there is an easy way to look at the correlation value and conclude which case it is.
I could think of several ways to assess the effect differently. One option is to split your training set into groups of equal age. In this case, for 'young' ages the normal class will have more training examples than the disease class, and vice versa for the older ages. But as long as there are enough examples, this should not be a problem. Another option is to do the same with the test sets, i.e. see whether the classifier tends to say 'sick' more often for older patients. Both of these options could be difficult since you don't have that many examples.
One more option is to train two classifiers. In the first, the only feature will be the age. It seems this has AUC of 0.82. In the second, there will be age and the microarray data. (It seems that currently you train a different classifier which only uses the microarray data, and it gives you AUC 0.95. Adding the age feature explicitly is likely to improve performance, so AUC will be even higher.) If the second classifier performs better than the first, this indicates that age is not the only thing of interest in this data. Based on your comment, the improvement in AUC is 0.13 or more, which seems fair.
| null | CC BY-SA 2.5 | null | 2011-04-06T22:15:08.573 | 2011-04-06T22:23:28.577 | 2011-04-06T22:23:28.577 | 3369 | 3369 | null |
9288 | 2 | null | 9237 | 1 | null | Maximum entropy is a good way to go here. With maximum entropy, you specify the "structure" that your model is to depend on, and it does the rest. It has a very similar form to a generalised estimating equation. So we have an unknown (or "random") variable $x$ that is the object of inference (may be a vector). It can take on $n$ values $x_1,x_2,\dots,x_n$. We have $m$ "model functions" $g_{k}(x)$ $(k=1,\dots,m)$ and "constraints" $G_{k}$, and the two are related by:
$$\sum_{i=1}^{n}p_i g_{k}(x_i)=G_{k}$$
Where $p_i=Pr(x=x_i)$ is the "unknown" (more appropriately "unassigned") probability distribution. By choosing the distribution with maximum entropy $-\sum_i p_i log(p_i)$ you get:
$$Pr(x=x_i|G_{1},\dots,G_{m})=p_i=\exp\left(-\lambda_0-\sum_{k=1}^{m}\lambda_{k}g_{k}(x_i)\right)$$
Where $\lambda_{k}$ are chosen so that the above constraints are satisfied, and that the $p_i$ sum to $1$. This requires a discrete sample space; for the continuous case, it is the same form with 2 exceptions:
- The summation is replaced by an integral with respect to $dx$
- The probability is multiplied by the invariant measure $m(x)$, which in your case is the probability transform of the improper $Dir(0,\dots,0)$ into "odds" form.
So you have:
$$p(x|G_{1},\dots,G_{m})=m(x)\exp\left(-\lambda_0-\sum_{k=1}^{m}\lambda_{k}g_{k}(x)\right)dx$$
Now what you will find is that in solving for the lagrange multipliers $\lambda_{k}$ you will get a set of equations which look identical to the GEE equations. Thus the MaxEnt algorithm basically tells you "which probability model" is the most likely to be consistent with your GEE equations. You can then use this distribution to make predictions and give an indication of their accuracy.
Note if the constraints $G_{k}$ are not known, this is solved by multiplying the above $p(x|G_{1},\dots,G_{m})$ by a prior density for $G_{k}$ and then integrating out $G_{k}$ from the probability - this allows for the structure induced by $G_{k}$ to be incorporated into the model, as well as the uncertainty about the true value of $G_{k}$.
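As a concrete (if toy) illustration of solving for the $\lambda_{k}$, here is a Python sketch for a single mean constraint on a six-sided die - Jaynes' classic example. The constraint value 4.5 and the bisection solver are my own choices, not part of the answer above:

```python
import math

def maxent_probs(values, target_mean, lo=-50.0, hi=50.0, iters=100):
    """Maximum-entropy distribution on `values` under a single mean constraint.
    p_i is proportional to exp(-lam * x_i); the multiplier lam is found by
    bisection, exploiting the fact that the constrained mean decreases in lam."""
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in values]
        z = sum(w)
        return sum(wi * x for wi, x in zip(w, values)) / z

    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if mean_for(mid) > target_mean:
            lo = mid  # mean too high: lam must be larger
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(-lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# Jaynes' die: faces 1..6 constrained to have mean 4.5.
p = maxent_probs([1, 2, 3, 4, 5, 6], 4.5)
```

With several constraints the same idea applies, but the multipliers must be solved jointly (e.g. with a multivariate root-finder), which is where the GEE-like structure shows up.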
| null | CC BY-SA 2.5 | null | 2011-04-06T22:38:22.240 | 2011-04-06T22:38:22.240 | null | null | 2392 | null |
9289 | 2 | null | 9281 | 2 | null | Since you mention Sorensen index, it seems you need a dissimilarity score rather than a dissimilarity test. (A score will give a numerical value indicating how different they are. A test will tell you whether the difference is significant with a given probability.)
You can represent species abundance at each location by a histogram. This histogram can be normalized if you just care about relative abundance (e.g. that cats are twice as abundant as dogs), or unnormalized if you care about absolute numbers as well.
There are many ways to measure dissimilarity of histograms. Some of the popular ones are:
- Chi-squared statistic
- L2 distance
- Histogram intersection distance
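All three are easy to compute once each site is summarized as a histogram; a Python sketch (the example abundances are hypothetical):

```python
def chi2_distance(p, q):
    # Chi-squared statistic between two histograms (bins with both zero are skipped).
    return sum((a - b) ** 2 / (a + b) for a, b in zip(p, q) if a + b > 0)

def l2_distance(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def intersection_similarity(p, q):
    # 1 for identical normalized histograms, 0 for completely disjoint ones.
    return sum(min(a, b) for a, b in zip(p, q))

# Hypothetical relative abundances of three species at two locations.
site1 = [0.5, 0.3, 0.2]
site2 = [0.2, 0.3, 0.5]
```

Note that intersection is a similarity (higher means more alike), so use 1 minus it if you want a dissimilarity on normalized histograms.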
| null | CC BY-SA 2.5 | null | 2011-04-06T22:54:23.313 | 2011-04-06T22:54:23.313 | null | null | 3369 | null |
9290 | 2 | null | 9278 | 1 | null | This sounds like a simple case of hyper-geometric sampling. So you have a sampling distribution of:
$$p(r_1,r_2,r_3|R_1,R_2,R_3,I)=\frac{{R_1 \choose r_1}{R_2 \choose r_2}{R_3 \choose r_3}}{{R_1+R_2+R_3 \choose r_1+r_2+r_3}}$$
Capital letters denote population totals, and small letters denote sampled numbers. $I$ is the prior information about the sampling. You want to "invert" this to get a distribution for $R_{i}$. Just use Bayes' theorem:
$$p(R_1,R_2,R_3|r_1,r_2,r_3,I)=p(R_1,R_2,R_3|I)\frac{p(r_1,r_2,r_3|R_1,R_2,R_3,I)}{p(r_1,r_2,r_3|I)}$$
And you now have a statement about the accuracy of the population totals!
UPDATE
In response to the comment, the values of $R_{j}$ are the unknown population totals for each of the three states - so if you had sampled/tested every item, you would get $R_{j}$ as the numbers in each state. I assume that these quantities are the target of your inference - this is what you would like to know.
Now what do you know? The population total $N=R_{1}+R_{2}+R_{3}\approx 1,000,000$ (we can account for possible error later). You also know the sampled numbers $r_{1},r_{2},r_{3}$ (the number which tested positive for each state in your sample). The total sample size is $n=r_{1}+r_{2}+r_{3}$.
the quantity $p(R_1,R_2,R_3|I)$ is called the prior, and you assign it based on what is known about the population totals beyond the data from the sample.
Now you have stated that $N$ is known, which constrains but does not determine the prior. One way to determine the prior is to break up the three propositions into mutually exclusive and exhaustive pieces, and assign equal probabilities to the ones which add to $N$, and zero to everything else. A quick counting exercise shows there are $\frac{(N+1)(N+2)}{2}$ combinations of $R_1,R_2,R_3$ which add up to $N$, so the joint prior is:
$$p(R_1,R_2,R_3|N,I)=\frac{2}{(N+1)(N+2)}\delta(N-R_1-R_2-R_3)$$
Where $\delta(x)=1$ if $x=0$ and $\delta(x)=0$ if $x\neq 0$. And you can work out the normalising constant $P(r_1,r_2,r_3|N,I)$ by adding up the prior and the sampling probabilities over the $R_{j}$, so we get:
$$p(r_1,r_2,r_3|N,I)=\sum_{R_1=0}^{N}\sum_{R_2=0}^{N}\sum_{R_3=0}^{N}p(R_1,R_2,R_3|N,I)p(r_1,r_2,r_3|R_1,R_2,R_3,N,I)$$
$$=\sum_{R_1=0}^{N}\sum_{R_2=0}^{N-R_1}\frac{2}{(N+1)(N+2)}\frac{{R_1 \choose r_1}{R_2 \choose r_2}{N-R_1-R_2 \choose n-r_1-r_2}}{{N \choose n}}$$
Now $(N+1)(N+2){N \choose n}=(n+1)(n+2){N+2 \choose n+2}$ and
$$\sum_{R_1=0}^{N}\sum_{R_2=0}^{N-R_1}{R_1 \choose r_1}{R_2 \choose r_2}{N-R_1-R_2 \choose n-r_1-r_2}={N+2 \choose n+2}$$
So we get:
$$p(r_1,r_2,r_3|N,I)=\frac{2}{(n+1)(n+2)}$$
And thus the posterior distribution is:
$$p(R_1,R_2,R_3|r_1,r_2,r_3,N,I)=\frac{2}{(N+1)(N+2)}\frac{\frac{{R_1 \choose r_1}{R_2 \choose r_2}{R_3 \choose r_3}}{{N \choose n}}}{\frac{2}{(n+1)(n+2)}}=\frac{{R_1 \choose r_1}{R_2 \choose r_2}{R_3 \choose r_3}}{{N+2 \choose n+2}}$$
The last form shows very easily how to generalise, for those interested (noting that $2=3-1$). This posterior has expectation for $R_1$ of:
$$E([R_1+1]|r_1,r_2,r_3,N,I)=\sum_{R_1=0}^{N}\sum_{R_2=0}^{N-R_1}\frac{(R_1+1){R_1 \choose r_1}{R_2 \choose r_2}{R_3 \choose r_3}}{{N+2 \choose n+2}}=\frac{(r_1+1)(N+3)}{n+3}$$
$$\implies E(R_1|r_1,r_2,r_3,N,I)=\frac{(r_1+1)(N-n+n+3)-(n+3)}{n+3}$$
$$=r_1+(N-n)\hat{p}$$
where $\hat{p}=\frac{r_1+1}{n+3}$. This is the number observed in category 1 plus an estimate of the number remaining unobserved in category 1. Now for the accuracy we can take the variance, which, using the same trick, we calculate from
$$E([R_1+1][R_1+2])=\frac{(r_1+1)(r_1+2)(N+3)(N+4)}{(n+3)(n+4)}=E([R_1+1]^2)+E(R_1+1)$$
and noting that $var(R_1)=var(R_1+1)$, we get
$$var(R_1)=E([R_1+1][R_1+2])-E(R_1+1)-[E(R_1+1)]^2$$
$$=\frac{(r_1+1)(N+3)}{n+3}\left[\frac{(r_1+2)(N+4)}{n+4}-1-\frac{(r_1+1)(N+3)}{n+3}\right]$$
which after some tedious manipulations you get:
$$var(R_1)=\frac{\hat{p}(1-\hat{p})}{n+4}(N-n)(N+3)$$
You could also calculate the mean and variance of the fraction remaining $F=\frac{R_1-r_1}{N-n}$ which are given by:
$$E(F|r_1,r_2,r_3,N,I)=\hat{p}\;\;\;\;\;var(F|r_1,r_2,r_3,N,I)=\frac{\hat{p}(1-\hat{p})}{n+4}\left(1+\frac{n+3}{N-n}\right)$$
And these quantities are approximately independent of $N$ - so the accuracy of $N$ is not important for inferring the proportions in each category - but the sampling fraction is.
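These closed forms can be sanity-checked by enumerating the posterior directly for a small population (a Python sketch; the sample numbers below are made up):

```python
from math import comb

def posterior_moments(r1, r2, r3, N):
    """Mean and variance of R1 under the posterior
    p(R1,R2,R3) proportional to C(R1,r1)*C(R2,r2)*C(R3,r3) with R1+R2+R3 = N,
    computed by brute-force enumeration."""
    z = s1 = s2 = 0
    for R1 in range(r1, N - r2 - r3 + 1):
        for R2 in range(r2, N - R1 - r3 + 1):
            R3 = N - R1 - R2
            w = comb(R1, r1) * comb(R2, r2) * comb(R3, r3)
            z += w
            s1 += w * R1
            s2 += w * R1 * R1
    mean = s1 / z
    return mean, s2 / z - mean ** 2

# Made-up sample numbers and a small population so enumeration is cheap.
r1, r2, r3, N = 3, 2, 1, 30
n = r1 + r2 + r3
p_hat = (r1 + 1) / (n + 3)
mean_formula = r1 + (N - n) * p_hat
var_formula = p_hat * (1 - p_hat) / (n + 4) * (N - n) * (N + 3)
mean_exact, var_exact = posterior_moments(r1, r2, r3, N)
```

The enumerated mean and variance agree with the closed forms above, which is a useful check before trusting them at $N\approx 1,000,000$.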
One way to incorporate the uncertainty about $N$ is to use a uniform prior between two bounds $L_N<N<U_N$, and then "average out" the value of $N$ from the posterior:
$$p(R_1,R_2,R_3|r_1,r_2,r_3,I)=\frac{1}{U_N-L_N}\sum_{N=L_N}^{U_N}p(R_1,R_2,R_3|r_1,r_2,r_3,N,I)$$
But unless the terms in this summation are appreciably different, the result won't change that much. It won't in this case, as shown above.
| null | CC BY-SA 3.0 | null | 2011-04-06T23:24:22.640 | 2011-04-10T01:58:04.600 | 2011-04-10T01:58:04.600 | 2392 | 2392 | null |
9291 | 2 | null | 9281 | 2 | null | Bray-Curtis and other similar indices do incorporate differences in species abundance. In addition to Legendre and Legendre, I would also recommend Charles Krebs' book, Ecological Methodology (1999).
| null | CC BY-SA 2.5 | null | 2011-04-06T23:37:49.453 | 2011-04-07T03:14:05.010 | 2011-04-07T03:14:05.010 | 3774 | 3774 | null |
9292 | 1 | null | null | 5 | 267 | What is a good book that covers mixed distributions?
Most statistics books either only briefly mention them or do not cover the topic at all.
I'd like to have a comprehensive resource covering issues of joint distributions with both discrete and continuous variables, conditional and marginal distributions in the mixed case, etc.
| Books for mixed distributions (continuous and discrete)? | CC BY-SA 3.0 | null | 2011-04-06T23:49:38.163 | 2015-10-30T23:27:26.220 | 2015-10-30T23:27:26.220 | 22468 | 3280 | [
"references",
"continuous-data",
"discrete-data",
"joint-distribution",
"marginal-distribution"
] |
9293 | 2 | null | 9281 | 4 | null | I will concur with what has been mentioned that Bray-Curtis can handle abundance as well as presence/absence, also to add another good book to the mix: Analysis of Ecological Communities by McCune and Grace.
There are a lot of factors to consider as you compare ecological communities and I don't think there is a single test that will do the job. The appropriateness of the test will depend a lot on the type of question you are asking about the communities and the nature of your dataset. Common approaches include ordination techniques, which array sites within a multidimensional taxon-space. However, if you truly only have 2 sites then this is not likely to work. Mantel tests correlate a distance matrix based on composition (e.g., the pairwise Bray-Curtis distance across all sites) with a distance matrix based on other potential factors of influence. The simplest case can be just the Euclidean distance between the sites in space. Cluster analysis groups sites with respect to their community composition.
In general I would take the approach of using a subset of the many statistical tools described in any of the above books to provide a statistical description of the differences between the communities in question. There is no single measure of the difference in community composition so the stats are used to synthesize multidimensional data into a more easily interpretable form.
EDIT: I also just thought of this paper which lays out many of the different options pretty clearly and thoroughly.
Anderson, M. J. et al. 2011. Navigating the multiple meanings of Beta diversity: a roadmap for the practicing ecologist. Ecology Letters 14:19-28
| null | CC BY-SA 2.5 | null | 2011-04-07T00:17:04.250 | 2011-04-07T11:42:05.100 | 2011-04-07T11:42:05.100 | 4048 | 4048 | null |
9295 | 1 | 9320 | null | 5 | 301 | This is a more "statistical" version of my [ Math.SE problem ](https://math.stackexchange.com/questions/31267/mean-chord-length-in-a-convex-polygon).
Question Let $\mathbf{P}$ be a convex polygon of $m$ sides. Pick two of its edges at random, and further pick a random point on each of these edges, and compute the length of the chord we get. If we repeat this process $n$ times, and average the chord lengths thus obtained, we get an estimate $\hat{\ell}_n$ of the expected chord length $\ell$ of $\mathbf{P}$. Then, how large should $n$ be to ensure that $\hat{\ell}_n$ differs from $\ell$ by (say) at most 5%, with a confidence level of (say) 95%, and precisely how should we collect our samples?
Note that determining the expected chord length analytically leads to intractable integrals as shown in my SE post. Also, note that there are multiple inequivalent ways to define the expected chord length of $\mathbf{P}$; the one we need here is $\ell = \lim_{n\rightarrow \infty} \hat{\ell}_n$.
| Monte-Carlo estimation of the mean chord length in a polygon | CC BY-SA 2.5 | null | 2011-04-07T01:50:31.040 | 2011-04-07T18:41:45.193 | 2017-04-13T12:19:38.800 | -1 | 4011 | [
"monte-carlo"
] |
9296 | 2 | null | 9233 | 4 | null | Modeling the data fundamentally (especially for time series) assumes that you have collected data at a sufficient enough frequency to capture the phenomena of interest.
Simplest example is for a sine wave - if you are collecting data at a frequency of n*pi
where n is an integer then you will not see anything but zeros and miss the sinusoidal
pattern altogether. There are articles on sampling theory which discuss how often
should data be collected.
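A minimal numeric illustration of this aliasing point (Python):

```python
import math

# Sampling sin(t) only at t = n*pi (n an integer) returns zero every time,
# so the sinusoid is completely invisible on that sampling schedule.
samples = [math.sin(n * math.pi) for n in range(10)]
print(max(abs(s) for s in samples))  # ~0 up to floating-point rounding
```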
| null | CC BY-SA 2.5 | null | 2011-04-07T02:04:49.847 | 2011-04-07T02:04:49.847 | null | null | null | null |
9297 | 2 | null | 9283 | 14 | null | I've never used R for conjoint analysis, but here are a couple of things I found when I hunted around.
- Aizaki and Nishimura (2008) have an article "Design and Analysis of Choice Experiments Using R: A brief introduction" (Free PDF available here).
Perhaps check out the following packages:
- AlgDesign for constructing choice sets
- prefmod for analysing paired comparison data
- conf.design for constructing factorial designs
| null | CC BY-SA 2.5 | null | 2011-04-07T02:21:47.643 | 2011-04-07T02:21:47.643 | null | null | 183 | null |
9298 | 1 | 9303 | null | 4 | 1441 | a journal article has a method for designing experiments to be fit to a 4-parameter logistic model. The model used is $y= D + \frac{A - D}{1 + (\frac{x}{C}) ^ B}$
A = upper asymptote
B = maximum slope
C = x value when y = 50% of maximum (i.e. 1/2 of upper asymptote)
D = lower asymptote
Using pilot data, further experiments are optimally designed by plugging the preliminary parameter estimates into equations presented in the article.
However, the nonlinear modeling software that I have access to parameterizes the 4-parameter logistic model differently. The model used is
$y = D + \frac{A - D}{1 + e^{B(x-C)}}$
Once I estimate the parameters from the software, how do I translate these to the parameterization used in the article? Thank you.
| Reconciling different parameterizations of the same nonlinear model | CC BY-SA 2.5 | null | 2011-04-07T02:42:00.870 | 2011-04-07T07:02:17.783 | 2011-04-07T07:02:17.783 | null | 2473 | [
"nonlinear-regression"
] |
9299 | 1 | null | null | 12 | 576 | I'm given an $n\times n$ grid of positive integer values. These numbers represent an intensity that should correspond to the strength of belief of a person occupying that grid location (a higher value indicating a higher belief). A person will in general have an influence over multiple grid cells.
I believe that the pattern of intensities should "look Gaussian" in that there will be a central location of high intensity, and then the intensities taper off radially in all directions. Specifically, I'd like to model the values as coming from a "scaled Gaussian" with a parameter for the variance and another for the scale factor.
There are two complicating factors:
- the absence of a person will not correspond to a zero value, because of background noise and other effects, but the values should be smaller. They can be erratic though, and at a first approximation might be difficult to model as simple Gaussian noise.
- The intensity range can vary. For one instance, the values might range between 1 and 10, and in another, between 1 and 100.
I'm looking for an appropriate parameter estimation strategy, or pointers to relevant literature. Pointers to why I'm approaching this problem the wrong way altogether would also be appreciated :). I've been reading about kriging, and Gaussian processes, but that seems like very heavy machinery for my problem.
| Estimating parameters for a spatial process | CC BY-SA 2.5 | null | 2011-04-07T03:02:36.983 | 2011-05-11T03:57:17.720 | null | null | 139 | [
"estimation",
"normal-distribution",
"spatial"
] |
9301 | 1 | 9302 | null | 2 | 157 | REF: [http://en.wikipedia.org/wiki/Order_of_integration](http://en.wikipedia.org/wiki/Order_of_integration)
What is the interpretation/intuition behind a time series integrated of order 0? I am reading something on cointegration and this is not yet clear to me.
| Interpretation of a time series integrated of order zero | CC BY-SA 2.5 | null | 2011-04-07T04:00:19.483 | 2011-04-07T06:59:22.310 | 2011-04-07T06:59:22.310 | null | 862 | [
"time-series",
"cointegration"
] |
9302 | 2 | null | 9301 | 2 | null | Commonly, a time series is said to be $I(0)$ if the time series itself is stationary (no need to difference it to obtain stationarity).
The Wikipedia page you mention says that not all $I(0)$ time series are stationary. I didn't know this and I think that indeed many authors do not make the distinction. The paper from Engle and Granger (1987) says that all $I(0)$ series are stationary.
Two times series $X^1_t$ and $X^2_t$ are said to be cointegrated if
$\exists n,d>0$ and $\beta \in \mathbb{R}$ so that
- $X^1_t \sim I(n)$
- $X^2_t \sim I(n)$
- $X^1_t+\beta X^2_t \sim I(n-d)$
See Engle, Granger (1987) [http://www.ntuzov.com/Nik_Site/Niks_files/Research/papers/stat_arb/EG_1987.pdf](http://www.ntuzov.com/Nik_Site/Niks_files/Research/papers/stat_arb/EG_1987.pdf)
If you reach $n-d=0$, your linear combination of the time series is indeed stationary.
| null | CC BY-SA 2.5 | null | 2011-04-07T04:56:57.423 | 2011-04-07T06:24:38.387 | 2011-04-07T06:24:38.387 | 919 | 1709 | null |
9303 | 2 | null | 9298 | 5 | null | These models are not the same, because the first is a rational function of $x$ and the second is an exponential function. The second one is truly a "logistic" model but the first is not. Moreover, the claims about $B$ and $C$ in the first model are not true. The maximum slope depends on $C$, which is a scale parameter, not a location parameter. Moreover, although the maximum slope does depend on $B$, it does so in a complicated fashion. For instance, when $B = 2$, the maximum slope equals $\frac{9}{8\sqrt{3}}$.
If you write $x = \log(z)$ and $C = \log(\gamma)$ then the second model is
$$y = D + \frac{A-D}{1 + \exp(B(\log(z)-\log(\gamma)))} = D + \frac{A-D}{1 + (z/\gamma)^B},$$
which does have the form of the first model. In other words, in the first model the logarithm of $z$ has a logistic form.
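A quick numerical check of this identity (a Python sketch; the parameter values are arbitrary):

```python
import math

def model_rational(z, A, B, C, D):
    """First parameterization: y = D + (A - D) / (1 + (z / C) ** B)."""
    return D + (A - D) / (1 + (z / C) ** B)

def model_logistic(x, A, B, C, D):
    """Second parameterization: y = D + (A - D) / (1 + exp(B * (x - C)))."""
    return D + (A - D) / (1 + math.exp(B * (x - C)))

# With x = log(z) and the rational model's scale set to exp(C_logistic),
# the two parameterizations agree pointwise.
A, B, C_log, D = 1.0, 2.0, 0.3, 0.1  # arbitrary values
for z in (0.1, 0.5, 1.0, 2.0, 10.0):
    y1 = model_rational(z, A, B, math.exp(C_log), D)
    y2 = model_logistic(math.log(z), A, B, C_log, D)
    assert abs(y1 - y2) < 1e-12
```

So the translation between fits is: the slopes $B$ and asymptotes $A$, $D$ carry over directly, while the scale parameters are related by $\gamma = e^{C}$ (equivalently $C = \log\gamma$), provided one model is fit on $z$ and the other on $x=\log z$.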
| null | CC BY-SA 2.5 | null | 2011-04-07T05:03:26.757 | 2011-04-07T06:18:15.007 | 2011-04-07T06:18:15.007 | 919 | 919 | null |
9304 | 1 | 9316 | null | 6 | 76 | A colleague just asked me this question:
### Context:
A psychological study had
- 2 groups of participants (between subjects)
- 4 contexts (within subjects)
- each participant provided a response in each of the four contexts
- there were three categorically distinct response options (lets call them $A$, $B$, and $C$)
### Question
- What is an appropriate statistical model for analysing the effect of group and context on response?
- What would you tell someone who analysed this data as a 2 by 4 by 3 log-linear model? (e.g., problems with assuming independence of observations)
| Modelling the effect of a 2 by 4 mixed design on a three-level nominal dependent variable | CC BY-SA 2.5 | null | 2011-04-07T06:20:27.157 | 2011-04-07T15:20:29.337 | null | null | 183 | [
"modeling"
] |
9305 | 1 | null | null | 3 | 99 | Consider a conjugate prior in the k-parameter canonical exponential family:
$$p(w|n_0,y_0)=c(n_0,y_0) \exp(n_0(y_0 w'-b(w))) \, ,$$
where $n_0>0, y_0 \in Y$ is the pseudo observation vector, and $Y$ is an open subset of $\mathbb{R}^k$.
I have to prove that the normalization constant $c(n_0,y_0)$ goes to zero when $y_0$ tends to the boundary of $Y$. Any suggestions?
I would like to find a proof that works both when $Y$ is bounded and unbounded (e.g. $Y=\mathbb{R}^k$).
| The boundary property of the conjugate exponential family | CC BY-SA 3.0 | null | 2011-04-07T07:38:02.097 | 2021-01-30T21:53:56.903 | 2021-01-30T21:53:56.903 | 11887 | 4065 | [
"self-study",
"normalization",
"exponential-family"
] |
9306 | 1 | 11658 | null | 9 | 971 | I am using the `evolfft` function in the `RSEIS` R package to do a STFT analysis of a signal.
The signal is one hour long and it was acquired during 3 different conditions, in particular 0-20' control, 20'-40' stimulus, 40'-60' after stimulus.
Visually, I see a change in the spectrogram during these 3 periods, with higher frequency and increased FFT power during the treatment, but I was wondering whether there was some kind of statistical analysis I could do to "put some numbers" on it.
Any suggestion?
EDIT: as suggested I will add an example of the data I'm dealing with

The treatment is between 20' and 40', as you can see it produces an increase in the power of the FFT over a fairly wide range of frequencies.
I have 50-60 of these STFT for each experiment (for 10 total experiments).
I can average the spectra for each experiments and still get a similar type of pattern. Now, my problem is how to correctly quantify the data I have and possibly do some stats to compare before, during and after the treatment.
| STFT statistical analysis | CC BY-SA 3.0 | null | 2011-04-07T08:30:58.900 | 2011-06-07T07:40:36.277 | 2011-06-07T07:40:36.277 | 223 | 582 | [
"time-series",
"hypothesis-testing",
"anova",
"fourier-transform"
] |
9307 | 1 | 9308 | null | 1 | 1099 | What is a family of functions which goes from 0 to 1 and has one or more parameters that alters the rate at which a value of 1 is approached?
| Family of functions ranging from zero to one with parameter influencing rate of approach to one | CC BY-SA 2.5 | null | 2011-04-07T10:56:48.250 | 2011-04-07T14:29:27.167 | 2011-04-07T14:29:27.167 | 183 | 333 | [
"function"
] |
9308 | 2 | null | 9307 | 2 | null | How do you like the [Sigmoid Function](http://en.wikipedia.org/wiki/Sigmoid_function)? By adding a parameter k one gets
$f_k(x)=\frac{1}{1+\exp(-kx)}$.
By setting k you can control how steep the function is and hence how fast it approaches 1.
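A quick illustration in Python (the evaluation point is arbitrary):

```python
import math

def sigmoid(x, k):
    return 1.0 / (1.0 + math.exp(-k * x))

# At the same x, a larger k gives a value closer to the asymptote 1.
for k in (1, 2, 5):
    print(k, sigmoid(2.0, k))
```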
| null | CC BY-SA 2.5 | null | 2011-04-07T11:16:04.993 | 2011-04-07T11:16:04.993 | null | null | 264 | null |
9309 | 2 | null | 9307 | 1 | null | There's an incredible amount of those functions... and it really depends on what you want to do with it.
In general, any bounded function $f(x)$ can be rescaled to $[0,1]$ by simply using $\frac{f(x)-\min}{\max-\min}$ (where $\min$ and $\max$ are the minimum and maximum of the function).
One commonly used function is the [sigmoid](http://en.wikipedia.org/wiki/Sigmoid_function).
| null | CC BY-SA 2.5 | null | 2011-04-07T11:16:42.047 | 2011-04-07T11:16:42.047 | null | null | 582 | null |
9311 | 1 | 9338 | null | 57 | 17559 | I can see that there are a lot of formal differences between Kullback–Leibler vs Kolmogorov-Smirnov distance measures.
However, both are used to measure the distance between distributions.
- Is there a typical situation where one should be used instead of the other?
- What is the rationale to do so?
| Kullback–Leibler vs Kolmogorov-Smirnov distance | CC BY-SA 2.5 | null | 2011-04-07T11:39:13.870 | 2022-11-17T20:38:49.603 | 2011-04-07T14:26:13.410 | 183 | 3592 | [
"distributions",
"distance-functions",
"kolmogorov-smirnov-test",
"kullback-leibler"
] |
9312 | 1 | null | null | 3 | 35604 | SPSS returns lower and upper bounds for reliability. While calculating the standard error of measurement, should we use the lower and upper bounds or continue using the reliability estimate?
I am using the formula :
$$\text{SEM}\% =\left(\text{SD}\times\sqrt{1-R_1} \times \frac{1}{\text{mean}}\right) \times 100$$
where SD is the standard deviation, $R_1$ is the intraclass correlation for a single measure (one-way ICC).
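One way to see how much the choice matters is to evaluate the formula at the point estimate and at both confidence bounds (a Python sketch; all numbers below are hypothetical):

```python
import math

def sem_percent(sd, icc, mean):
    """SEM% = (SD * sqrt(1 - R1) / mean) * 100, with R1 the single-measure ICC."""
    return sd * math.sqrt(1.0 - icc) / mean * 100.0

# Hypothetical values: SD = 4.2, mean = 35, ICC point estimate 0.90,
# bracketed by hypothetical 95% confidence bounds (0.80, 0.96).
for icc in (0.80, 0.90, 0.96):
    print(icc, round(sem_percent(4.2, icc, 35.0), 2))
```

Reporting the SEM% at the point estimate together with the values at the two bounds gives a sensitivity range rather than a single number.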
| How to compute the standard error of measurement (SEM) from a reliability estimate? | CC BY-SA 3.0 | null | 2011-04-07T12:36:40.230 | 2021-05-25T12:07:04.960 | 2011-04-08T01:15:46.230 | 930 | null | [
"spss",
"reliability"
] |
9313 | 1 | 9366 | null | 6 | 702 | Can you suggest a good review of case control matching algorithms? Algorithms that can be used to set up the matched pairs of one case and one control, or matched blocks of a case and multiple controls.(Preferably a paper, book chapter or website discussing recent developments.)
| Review of case control matching algorithms? | CC BY-SA 3.0 | null | 2011-04-07T14:32:47.600 | 2011-11-01T13:42:38.983 | 2011-04-08T11:28:14.527 | 3911 | 3911 | [
"experiment-design",
"matching",
"case-control-study"
] |
9315 | 1 | 9479 | null | 19 | 11727 | I have used LDA on a corpus of documents and found some topics. The output of my code is two matrices containing probabilities; one of doc-topic probabilities and the other of word-topic probabilities. But I actually don't know how to use these results to predict the topic of a new document. I am using Gibbs sampling. Does anyone know how? Thanks!
| Topic prediction using latent Dirichlet allocation | CC BY-SA 3.0 | null | 2011-04-07T14:42:24.517 | 2012-07-15T20:57:39.640 | 2012-07-15T20:57:39.640 | 8413 | 2986 | [
"text-mining",
"topic-models"
] |
9316 | 2 | null | 9304 | 6 | null | A log-linear model or any model that fails to model the dependence of responses could underestimate (or overestimate) standard errors because it does not take into account potential subject-level association of responses. For example, if some subjects are likely to have response patterns like (A,A,A,A) and others like (C,C,C,C), treating responses as independent is problematic.
An appropriate model would be the multinomial logit with subject-level random intercepts.
Depending on your colleague's modeling goals, another approach might be latent class regression, which estimates class probabilities and class-conditional response probabilities for $k$ latent classes. If you expect strong clustering in subjects' responses, this might be a particularly nice approach because you get regression estimates for each of $k$ fuzzy classes, which might have meaningful psychological labels. Identifiability is an issue here because of the large number of parameters. See `poLCA` in R and the PDF write-up [here](http://userwww.service.emory.edu/~dlinzer/poLCA/).
`drm` in R is another [package](http://cran.r-project.org/web/packages/drm/index.html) which is supposed to be able to model clustered categorical responses, but I have not tried it.
Finally, for very specific applications/hypotheses, you could implement resampling methods by resampling entire vectors of responses -- e.g., a permutation test on odds ratios across groups by permuting group labels without replacement.
| null | CC BY-SA 2.5 | null | 2011-04-07T15:20:29.337 | 2011-04-07T15:20:29.337 | null | null | 3432 | null |
9317 | 1 | 9319 | null | 5 | 445 | I was reading a Wikipedia article on [the birthday paradox](http://en.wikipedia.org/wiki/Birthday_problem) and stumbled upon the following statement:
>
...the pairings in a group of 23 people are not statistically equivalent to 253 pairs chosen independently...
Could you explain what does it mean and why does it matter in this case?
---
Here is the quotation in its original context:
>
Although the pairings in a group of 23 people are not statistically equivalent to 253 pairs chosen independently, the birthday paradox becomes less surprising if a group is thought of in terms of the number of possible pairs, rather than as the number of individuals.
| Statistical equivalence in the birthday paradox | CC BY-SA 2.5 | null | 2011-04-07T15:45:55.837 | 2016-09-23T19:26:36.780 | 2016-09-23T19:26:36.780 | 35989 | 4068 | [
"probability",
"birthday-paradox"
] |
9318 | 1 | null | null | 5 | 3852 | I am planning to write a program that performs MDS. Any pointers to where I can access the pseudo-code for MDS?
Thanks!
| Multidimensional scaling pseudo-code | CC BY-SA 2.5 | null | 2011-04-07T15:46:39.943 | 2011-04-07T21:28:15.390 | 2011-04-07T21:28:15.390 | 930 | 1417 | [
"algorithms",
"multidimensional-scaling"
] |
9319 | 2 | null | 9317 | 9 | null | In a group of 23 people, all pairs must involve just those 23 people: the pairs are thus mathematically (and statistically) dependent. On the other hand, 253 pairs chosen independently and randomly from the 366*365/2 possible pairs will typically involve around 100 separate people. This (strong) dependency means we cannot use simple formulas for combining probabilities.
This vague statement is in the Wikipedia article to counter the false intuition some people have that birthday collisions in small groups must be rare. It is, as the article notes, not at all rigorous.
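Both calculations are easy to carry out (a Python sketch): the exact probability for 23 people versus the naive "253 independent pairs" approximation. They are close but not equal, which is precisely the point above.

```python
def p_shared_birthday(n, days=365):
    """Exact chance that at least two of n people share a birthday."""
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (days - i) / days
    return 1.0 - p_all_distinct

exact = p_shared_birthday(23)                  # about 0.507
approx_pairs = 1.0 - (1.0 - 1.0 / 365) ** 253  # about 0.500: close, but not equal
```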
| null | CC BY-SA 2.5 | null | 2011-04-07T16:25:28.517 | 2011-04-07T16:25:28.517 | null | null | 919 | null |
9320 | 2 | null | 9295 | 4 | null | This is really a question about numerical integration because it reduces immediately to the problem of finding the mean distance between two line segments in the plane.
Monte-Carlo integration would be wasteful and inefficient compared to other methods that are readily available, including
- Direct analytic integration. This is messy but can be done.
- Numeric quadrature. Adaptive methods will be accurate.
- Analytic integration of the mean distance between a point and a line segment and numeric quadrature of that mean distance.
- Discrete approximations, such as Riemann sums, the Trapezoidal Rule, etc.
Because the mean distance function in (3) is differentiable and varies relatively slowly, simple numerical methods such as Simpson's Rule work well. It's best to do the numeric integration over the shorter of the two line segments. In experiments I find that using the endpoints and midpoint in Simpson's Rule typically gives better than 1% accuracy and using five equally spaced points is far better than that. (The toughest cases occur where two sides are equally long, parallel, and close together.) If you need to, you can do the analysis to estimate the Simpson's Rule error and adaptively decrease the point spacing.
For the original problem, the approximations sometimes overestimate and sometimes underestimate the mean distance, so we could expect some cancellation of errors.
At any rate, you will definitely achieve 5% accuracy using method (3). For an $n$ sided polygon it requires $n$ computations (each of constant cost) per vertex and $n-1$ computations per midpoint for a cost of $O(n^2)$. The individual computations of mean point-segment distances require some basic arithmetic, four square roots, and two logarithms, so they're quick and easy. (Mathematica computes about 7,500 mean segment-segment distances per core per second.)
If you want to avoid all but the simplest arithmetic, you can approximate each segment-segment integral with a double Simpson's Rule pass, one for each segment (method 4). With three division points (requiring the computation of nine distances) you will achieve 5% accuracy. With five division points (25 distances) you will be better than 1% accurate (usually better than 0.05%) except for extremely close parallel segments. In Mathematica, with its interpreted overhead, this technique actually is only half as fast as method (3).
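Method 4 above can be sketched in a few lines. The segment coordinates below are illustrative (two well-separated sides of a hypothetical polygon), and a dense grid average is included only as a reference to check the quadrature:

```python
from math import dist  # Euclidean distance between 2-D points (Python 3.8+)

def simpson_points_weights(p0, p1, n=4):
    """Composite Simpson nodes and weights along the segment p0 -> p1
    (n must be even); the weights sum to 1, so the weighted sum is a mean."""
    h = 1.0 / n
    pts = [tuple(p0[k] + (p1[k] - p0[k]) * i * h for k in range(2))
           for i in range(n + 1)]
    w = [(h / 3) * (1 if i in (0, n) else 4 if i % 2 else 2)
         for i in range(n + 1)]
    return pts, w

def mean_segment_distance(a0, a1, b0, b1, n=4):
    """Method 4: double Simpson's Rule, one pass over each segment."""
    pa, wa = simpson_points_weights(a0, a1, n)
    pb, wb = simpson_points_weights(b0, b1, n)
    return sum(wi * wj * dist(p, q)
               for p, wi in zip(pa, wa) for q, wj in zip(pb, wb))

def mean_segment_distance_grid(a0, a1, b0, b1, m=101):
    """Brute-force reference: average distance over an m x m parameter grid."""
    ts = [i / (m - 1) for i in range(m)]
    pa = [(a0[0] + (a1[0] - a0[0]) * t, a0[1] + (a1[1] - a0[1]) * t) for t in ts]
    pb = [(b0[0] + (b1[0] - b0[0]) * t, b0[1] + (b1[1] - b0[1]) * t) for t in ts]
    return sum(dist(p, q) for p in pa for q in pb) / (m * m)

est = mean_segment_distance((0, 0), (1, 0), (0, 3), (1, 3))
ref = mean_segment_distance_grid((0, 0), (1, 0), (0, 3), (1, 3))
```

For separated segments like these the integrand is smooth, so the 5-point-by-5-point Simpson estimate agrees with the dense reference to well under 1%; as noted above, accuracy degrades for close, parallel segments.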
---

This plot shows (negative) log (base 10) relative errors for 1000 segments randomly distributed near the endpoint of a unit segment (which is the larger of the two). E.g., a value of 2 is a 1% error; a value of 6 is a 0.0001% error. This is a severe test, because the worst performances occur when the segments are crossing or close to doing so, situations that are impossible or unlikely in convex polygons. (Note that the smallest possible mean distance is 1/4 (for an infinitesimally short segment near the middle of the unit segment).)
Where the approximation is high, the error is plotted on the right, and where the approximation is low, the error is plotted on the left. Blue dots are the three-point by three-point Simpson's Rule calculation (method 4) and red dots are the three-point Simpson's Rule calculation (method 3).
Typically method 3 (red) has much better than 2% accuracy and is better than method 4 (blue), which in a few cases has almost 10% error. Method 3 has a tendency to overestimate slightly (more of its values are at the right). Both methods almost always obtain better than 0.1% accuracy (a height of 3 or better on the plot), especially at mean distances greater than the longer side. This implies that if you want to improve the accuracy, focus first on nearby edges of a convex polygon.

In this plot of the 1000 segments, the brighter, thicker, redder ones have the greatest relative error for Method 3. The black segment is the reference (unit) segment.
| null | CC BY-SA 2.5 | null | 2011-04-07T16:46:35.707 | 2011-04-07T18:41:45.193 | 2011-04-07T18:41:45.193 | 919 | 919 | null |
9321 | 2 | null | 9318 | 5 | null | If you have the Statistics Toolbox in MATLAB you can read the source code of `mdscale.m`. While it's not pseudocode, it will definitely help you understand MDS better and gives you one approach to coding it.
In MATLAB you can type
`
edit mdscale
`
and that will open up an editor window that shows you the `mdscale.m` script that does the work. If you don't have MATLAB, check out [Scikits.learn](http://scikits.appspot.com/learn). It has some code for MDS. A lot of times reading Python code is very similar to reading pseudocode!
| null | CC BY-SA 2.5 | null | 2011-04-07T17:20:37.147 | 2011-04-07T17:20:37.147 | null | null | 2660 | null |
9322 | 1 | 15472 | null | 12 | 10845 | From Econometrics, by Fumio Hayashi (Chpt 1):
Unconditional Homoskedasticity:
- The second moment of the error terms E(εᵢ²) is constant across the observations
- The functional form E(εᵢ²|xᵢ) is constant across the observations
Conditional Homoskedasticity:
- The restriction that the second moment of the error terms E(εᵢ²) is constant across the observations is lifted
Thus the conditional second moment E(εᵢ²|xᵢ) can differ across the observations through possible dependence on xᵢ.
So then, my question:
How does Conditional Homoskedasticity differ from Heteroskedasticity?
My understanding is that there is heteroskedasticity when the second moment differs across observations (xᵢ).
| Conditional homoskedasticity vs heteroskedasticity | CC BY-SA 3.0 | null | 2011-04-07T18:43:13.910 | 2015-06-07T01:07:35.317 | 2015-06-06T16:08:34.283 | 53690 | 4072 | [
"regression",
"econometrics",
"heteroscedasticity",
"assumptions"
] |
9323 | 2 | null | 9299 | 2 | null | Your model is a two-dimensional random field $X[i,j]$, and you are trying to estimate the joint distribution of the integer-values random variables $X[i,j]$. You will want to assume spatial stationarity: that is, the joint distribution of $(X[i_1,j_1],...,X[i_m,j_m])$ is the same as the joint distribution of $(X[i_1+k,j_1+l]...,X[i_m+k,j_m+l])$. In particular, the marginal distribution is the same for every cell. A simple question to ask is the autocorrelation structure of the field. That is, what is $corr(X[i_1,j_1],X[i_2,j_2])$ given the distance $d([i_1,j_1],[i_2,j_2])$? We represent this as a function $\rho(d)$. A simple model for the autocorrelation structure is $\rho(d)=kd^{-1}$, where $k$ is a constant.
A 'gaussian' effect corresponds to a quadratic distance function, but there are many other distance functions you should consider, such as the taxicab norm $d([i_1,j_1],[i_2,j_2]) = |i_1-i_2|+|j_1-j_2|$. Once you have decided on a distance function and the form of your model for autocorrelation it is simple enough to estimate $\rho(d)$ e.g. via maximum likelihood. For more ideas, look for "random field".
| null | CC BY-SA 2.5 | null | 2011-04-07T19:45:16.923 | 2011-04-07T19:59:54.003 | 2011-04-07T19:59:54.003 | 3567 | 3567 | null |
9324 | 1 | 9356 | null | 1 | 473 | I'd love a check if anyone is willing!
I am trying to see if there is a statistical difference in female size between sites. Over the years females were repeatedly sampled within sites. I have sampled females opportunistically, meaning that females were sampled a different number of times between and within sites.
My formula is:
```
> lmerfit1<-lmer(size ~ (1|FEMALE), data=Data)
> lmerfit2<-lmer(size ~ SITE+(1|FEMALE), data=Data)
> anova(lmerfit1, lmerfit2)
Data: Data
Models:
lmerfit1: size ~ (1 | FEMALE)
lmerfit2: size ~ SITE + (1 | FEMALE)
Df AIC BIC logLik Chisq Chi Df Pr(>Chisq)
lmerfit1 3 2167.8 2179.6 -1080.9
lmerfit2 4 2169.8 2185.5 -1080.9 0 1 **1**
```
A p-value of 1 leaves me concerned. The other female traits I ran through this same formula made sense.
thanks!
| Check of R command and output of unbalanced repeated measures ANOVA | CC BY-SA 3.0 | 0 | 2011-04-07T20:34:34.710 | 2011-04-08T14:43:53.077 | 2011-04-08T02:47:19.363 | 183 | 4027 | [
"r",
"repeated-measures",
"lme4-nlme"
] |
9325 | 2 | null | 8738 | 5 | null | Searching the SAS documentation for ARIMAX (based on [Patrick's answer](https://stats.stackexchange.com/questions/8738/what-is-the-term-for-a-time-series-regression-having-more-than-one-predictor/8741#8741)), I found (in "[The ARIMA Procedure: Input Variables and Regression with ARMA Errors](http://support.sas.com/documentation/cdl/en/etsug/60372/HTML/default/viewer.htm#etsug_arima_sect012.htm)") that they listed the following terms:
- ARIMAX
- Box-Tiao
- Transfer Function Model
- Dynamic Regression
- Intervention Model
- Interrupted Time Series Model
- Regression Model with ARMA Errors
The term ARIMAX was used about as often as the term Transfer Function Model. Intervention Model and Interrupted Time Series Model appear to refer to a different kind of model than the others.
| null | CC BY-SA 2.5 | null | 2011-04-07T21:01:14.247 | 2011-04-07T21:01:14.247 | 2017-04-13T12:44:20.903 | -1 | 1583 | null |
9326 | 2 | null | 9318 | 8 | null | There are different kind of MDS (e.g., see this [brief review](http://forrest.psych.unc.edu/teaching/p208a/mds/mds.html)). Here are two pointers:
- the smacof R package, developed by Jan de Leeuw and Patrick Mair has a nice vignette, Multidimensional Scaling Using Majorization: SMACOF in R (or see, the Journal of Statistical Software (2009) 31(3)) -- R code is available, of course.
- there are some handouts on Multidimensional Scaling, by Forrest Young, where several algorithms are discussed (including INDSCAL (Individual Difference Scaling, or weighted MDS) and ALSCAL, with Fortran source code by the same author) -- these two keywords should help you find other source code (mostly Fortran, C, or Lisp).
You can also look for "Manifold learning" which should give you a lot of techniques for dimension reduction (Isomap, PCA, MDS, etc.); the term was coined by the Machine Learning community, among others, and they probably have a different view on MDS compared to psychometricians.
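As a complement to these pointers, the simplest member of the family — classical (Torgerson) scaling, not SMACOF or INDSCAL — can be sketched in a few lines of Python/NumPy via double-centering and an eigendecomposition; the input points below are illustrative:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) scaling: recover a k-dimensional configuration
    from a matrix of pairwise Euclidean distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered squared distances
    vals, vecs = np.linalg.eigh(B)        # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]      # keep the top k
    L = np.sqrt(np.maximum(vals[idx], 0))
    return vecs[:, idx] * L               # n x k coordinate matrix

# Round trip: 2-D points -> distance matrix -> recovered configuration.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X = classical_mds(D, k=2)
D_rec = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
```

When the distances really are Euclidean distances of a k-dimensional configuration, the recovered coordinates reproduce them exactly (up to rotation/reflection).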
| null | CC BY-SA 2.5 | null | 2011-04-07T21:27:47.123 | 2011-04-07T21:27:47.123 | null | null | 930 | null |
9327 | 1 | null | null | 4 | 5360 | EDIT: Thanks for the help so far. I have updated the question based on further work.
I am interested in finding any difference in response of counts of two competing species to dryness and elevation. Both are caught in the same type of trap, but trap placement (bushes vs buildings) probably biases toward one species or the other, so each trap placement is rated according to indoors/outdoors on a scale of 1-3. A trap does not move once it is placed. Traps are nested in sites, dryness is measured at each site/visit combination, and elevation is of course constant at each site.
My data includes the number of each species caught in 4-12 traps, at 8-12 sites during 6-8 visits to each site. The visits and sites are far enough spaced that I assume they are independent. Most data are zeros, especially for Sp_A which is rare. Below is the structure of the data.
```
R> str(mydata)
'data.frame': 300 obs. of 8 variables:
$ count : num 0 5 1 0 1 1 0 0 0 0 ...
$ species : Factor w/ 2 levels "a","b": 1 1 1 1 1 1 1 1 1 1 ...
$ elevation: int 1 1 1 1 1 1 1 1 1 1 ...
$ dryness : num 0.179 0.179 0.179 0.179 0.179 ...
$ site : Factor w/ 5 levels "a","b","c","d",..: 1 1 1 1 1 1 1 1 1 1 ...
$ trap : Factor w/ 50 levels "MT10a","MT10b",..: 6 11 16 21 26 31 36 41 46 1 ...
$ visit : Factor w/ 3 levels "1","2","3": 1 1 1 1 1 1 1 1 1 1 ...
$ in_out : num 1 1 1 1 1 1 1 1 1 1 ...
```
As an initial attempt, I had no trouble using R function `lmer` to model Sp_A presence/absence with Sp_B as a covariate: `(SpA>0) ~ Sp_B + elevation + (1|site/trap) + (1|visit)`
However on suggestion from R-sig-mixed-models list, and also "ils" from this site, I have reshaped the data as seen above, and tried to model count with species as a factor in the model. I have used two different packages (lme4 and glmmADMB). The functions handle the data ok in simple models, until I add the factor "species." Below are the R code and error messages. These same errors also happen when I use simpler dummy data. Any ideas on syntax or other packages?
```
require(lme4)
glmera1 <- glmer(count~elevation*species*in_out
+ (1|visit) + (1|site/trap),
data=mydata, family="poisson")
Error in asMethod(object) : matrix is not symmetric [1,2]
In addition: Warning messages:
1: In mer_finalize(ans) :
Cholmod warning 'not positive definite' at file:../Cholesky/t_cholmod_rowfac.c, line 432
2: In mer_finalize(ans) :
Cholmod warning 'not positive definite' at file:../Cholesky/t_cholmod_rowfac.c, line 432
3: In mer_finalize(ans) : singular convergence (7)
require(glmmADMB)
admba1 <- glmm.admb(count~elevation*species*in_out, random=~visit,
group="site", data=mydata, family="nbinom")
Error in glmm.admb(count ~ elevation * species * in_out, random = ~visit, :
The function maximizer failed
In addition: Warning message:
running command './nbmm -maxfn 500 ' had status 1
```
| Count data with mixed effects | CC BY-SA 3.0 | null | 2011-04-07T21:30:43.483 | 2013-03-16T00:26:58.203 | 2013-03-16T00:26:58.203 | 805 | 2993 | [
"r",
"mixed-model",
"repeated-measures",
"poisson-distribution"
] |
9328 | 2 | null | 9327 | 3 | null | Check out this question: [Negative values for AICc (corrected Akaike Information Criterion)](https://stats.stackexchange.com/questions/486/negative-values-for-aicc-corrected-akaike-information-criterion)
The same explanation holds for BIC.
| null | CC BY-SA 2.5 | null | 2011-04-07T21:51:47.123 | 2011-04-07T21:51:47.123 | 2017-04-13T12:44:33.310 | -1 | null | null |
9329 | 1 | null | null | 12 | 3395 | Say I've got a predictive classification model based on a random forest (using the randomForest package in R). I'd like to set it up so that end-users can specify an item to generate a prediction for, and it'll output a classification likelihood. So far, no problem.
But it would be useful/cool to be able to output something like a variable importance graph, but for the specific item being predicted, not for the training set. Something like:
Item X is predicted to be a Dog (73% likely)
Because:
Legs=4
Breath=bad
Fur=short
Food=nasty
You get the point. Is there a standard, or at least justifiable, way of extracting this information from a trained random forest? If so, does anyone have code that will do this for the randomForest package?
| Is there a way to explain a prediction from a random forest model? | CC BY-SA 2.5 | null | 2011-04-07T22:21:29.077 | 2019-10-03T08:48:20.570 | 2011-04-09T00:36:01.243 | null | 6 | [
"machine-learning",
"random-forest"
] |
9330 | 1 | null | null | 7 | 788 | We have a series of experiments where we measure virus transmission to plants when exposed to virus-infected insects for different time periods, so all of the experiments have similar types of independent and dependent variables. In one experiment, there are 6 time periods (1 to 24 hours) and 25 plants were tested (individually) for each time period. The response for each plant is yes or no (Plants are scored as virus infected). For 2 of the time intervals, all of the plants were negative for virus infection (0/25 for each time interval).
I am using PROC GLIMMIX in SAS for the analyses. For all of the other experiments, using a binary distribution in the model statement gives reasonable results. For the experiment where two of the time intervals had 0 positive plants, if I use a binary distribution in the model statement, the standard errors for the two groups with 0 transmissions are huge, thus distorting the results.
If I use a negative binomial distribution (based on counts of virus positive plants) the results seem reasonable. Since the same number of plants were tested for each time interval, using this approach works, but it differs from the other experiments.
Is there a method to adjust/account for the zeros in treatment groups that would allow the binary distribution return reasonable results?
| Binomial data analysis with all 0 responses for some treatment groups | CC BY-SA 3.0 | null | 2011-04-08T01:35:42.510 | 2017-11-03T15:22:33.990 | 2017-11-03T15:22:33.990 | 11887 | null | [
"logistic",
"binomial-distribution",
"sas",
"isotonic"
] |
9331 | 1 | 9339 | null | 6 | 3529 | I am new to the use of cubic splines for regression purposes and wanted to find out
1) What is a good source (besides ESL which I read but am still uncertain) to learn about splines for regression?
2) How would you calculate the basis of a given natural cubic spline solution on new data? Specifically if one were to do the following:
```
data(iris)
colnames(iris)
Sepal.Length.ns<-ns(iris$Sepal.Length,df=5)
Sepal.Length.ns
```
How would you take the information in Sepal.Length.ns (knots, boundaries) and compute the values for a new observation? The reason is to code this process outside of R, once fit in R initially (i.e. to put a regression model using cubic splines into a production system).
For example I can do this in R, but want to understand the calculation:
```
#three new observations to predict
newVector<-c(4.45,3.35,2.2)
pred.new<-predict(Sepal.Length.ns,newVector)
```
Thanks!
| Calculation of natural cubic splines in R | CC BY-SA 3.0 | null | 2011-04-08T01:36:55.427 | 2011-04-08T20:21:18.013 | 2011-04-08T08:22:37.160 | null | 2040 | [
"r",
"splines"
] |
9332 | 2 | null | 9324 | 1 | null | How many sites did you have? The models only differ by one df, so either you only have two sites or you treated site as a continuous variable when it should have been categorical. If it should have been a factor, use `factor(SITE)` instead of `SITE`.
Also, try plotting the data (always a good idea!) -- do you see any visual differences?
| null | CC BY-SA 3.0 | null | 2011-04-08T02:21:51.427 | 2011-04-08T02:21:51.427 | null | null | 3601 | null |
9333 | 1 | null | null | 3 | 95 |
### Context:
I have a series of figures for car sales that show me (a) the usual number of car sales for particular models and (b) the number of car sales by a particular car salesperson for each model.
Let's say the values are:
```
Model | Sales by all Salespeople | Sales by Salesperson x
A | 100 | 20
B | 50 | 40
C | 50 | 0
```
I want to find out if Salesperson x is significantly more responsible for sales of a particular model.
My first thought is to determine the rate of sales per model vs the rate of sales per model for the salesperson, i.e.:
```
Model | % Total Sales for all Salespeople | % Total Sales for Salesperson x
A | 50% | 33%
B | 25% | 66%
C | 25% | 0%
```
However, naturally every salesperson has a significant variation in the cars they sell.
### Question:
- How do I determine if salesperson variation from the mean is statistically significant?
### Initial Thoughts:
I think the Pearson correlation has some bearing, as perhaps may chi-square distribution, but I really don't have the background yet to understand why, so introductory help is appreciated.
| Determining the relationship between salesperson and products sold? | CC BY-SA 3.0 | null | 2011-04-08T02:27:00.023 | 2017-11-12T21:25:35.270 | 2017-11-12T21:25:35.270 | 11887 | 4078 | [
"correlation",
"variance"
] |
9334 | 1 | 9337 | null | 20 | 15276 |
### Context:
From a question on Mathematics Stack Exchange [(Can I build a program)](https://math.stackexchange.com/questions/31649/can-i-build-a-program-that-will-tell-me-if-a-real-world-data-set-looks-linear-lo/), someone has a set of $x-y$ points, and wants to fit a curve to it, linear, exponential or logarithmic.
The usual method is to start by choosing one of these (which specifies the model), and then do the statistical calculations.
But what is really wanted is to find the 'best' curve out of linear, exponential or logarithmic.
Ostensibly, one could try all three, and choose the best fitted curve of the three according to the best correlation coefficient.
But somehow I feel this is not quite kosher. The generally accepted method is to pick your model first, one of those three (or some other link function), and then calculate the coefficients from the data. Picking the best of all post facto is cherry picking. But to me, whether you're determining a function or coefficients from the data, it is still the same thing: your procedure is discovering the best... thing (let's say that which function to use is also another coefficient to be discovered).
### Questions:
- Is it appropriate to choose the best fitting model out of linear, exponential, and logarithmic models, based on a comparison of fit statistics?
- If so, what is the most appropriate way to do this?
- If regression helps find parameters (coefficients) in a function, why can't there be a discrete parameter to choose which of three curve families the best would come from?
| Determining best fitting curve fitting function out of linear, exponential, and logarithmic functions | CC BY-SA 3.0 | null | 2011-04-08T02:46:48.813 | 2021-02-22T12:02:49.157 | 2017-04-13T12:19:38.853 | -1 | 3186 | [
"regression",
"predictive-models",
"model-selection",
"curve-fitting"
] |
9335 | 2 | null | 9333 | 1 | null | Chi square test, but I'd set it up as follows:
Model | this salesman | all other salesmen
A | 20 | 80
B | 40 | 10
etc.
Chi square test of independence, with expected values equal to row total * column total / grand total.
Null hypothesis: this salesman = same as all other.
Most stat books will have this test.
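The test is plain arithmetic on the table above; a sketch in Python (a stats library would also report the p-value — on 2 df the 5% critical value is 5.99, so this statistic is decisive):

```python
# Chi-square test of independence for the salesman-by-model table above.
observed = [
    [20, 80],   # model A: this salesman, all other salesmen
    [40, 10],   # model B
    [0, 50],    # model C
]

row_tot = [sum(r) for r in observed]
col_tot = [sum(c) for c in zip(*observed)]
grand = sum(row_tot)

# Expected count = row total * column total / grand total
expected = [[rt * ct / grand for ct in col_tot] for rt in row_tot]
chi2 = sum((o - e) ** 2 / e
           for orow, erow in zip(observed, expected)
           for o, e in zip(orow, erow))
df = (len(observed) - 1) * (len(observed[0]) - 1)   # (3-1)*(2-1) = 2
# chi2 ~ 85.7 on 2 df: reject "this salesman sells the same mix as the rest"
```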
| null | CC BY-SA 3.0 | null | 2011-04-08T02:55:10.523 | 2011-04-08T02:55:10.523 | null | null | 3919 | null |
9336 | 2 | null | 9334 | 5 | null | Since plenty of people routinely explore the fit of various curves to their data, I don't know where your reservations are coming from. Granted, there is the fact that a quadratic will always fit at least as well as a linear, and a cubic, at least as well as a quadratic, so there are ways to test the statistical significance of adding such a nonlinear term and thus to avoid needless complexity. But the basic practice of testing many different forms of a relationship is just good practice. In fact, one might start with a very flexible loess regression to see what is the most plausible kind of curve to fit.
| null | CC BY-SA 3.0 | null | 2011-04-08T03:19:19.230 | 2011-04-08T03:19:19.230 | null | null | 2669 | null |
9337 | 2 | null | 9334 | 12 | null |
- You might want to check out the free software called Eureqa. It has the specific aim of automating the process of finding both the functional form and the parameters of a given functional relationship.
- If you are comparing models, with different numbers of parameters, you will generally want to use a measure of fit that penalises models with more parameters. There is a rich literature on which fit measure is most appropriate for model comparison, and issues get more complicated when the models are not nested. I'd be interested to hear what others think is the most suitable model comparison index given your scenario (as a side point, there was recently a discussion on my blog about model comparison indices in the context of comparing models for curve fitting).
- From my experience, non-linear regression models are used for reasons beyond pure statistical fit to the given data:
 - Non-linear models make more plausible predictions outside the range of the data
 - Non-linear models require fewer parameters for equivalent fit
 - Non-linear regression models are often applied in domains where there is substantial prior research and theory guiding model selection.
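As an illustration of comparing candidate forms, here is a sketch in Python on toy data generated from a logarithmic law. All three candidates have two parameters here, so raw SSE on the original scale is a fair criterion; with different parameter counts you would need a penalized index as discussed in point 2:

```python
from math import log, exp

# Toy data generated from a logarithmic law y = 1 + 2*ln(x).
xs = list(range(1, 11))
ys = [1 + 2 * log(x) for x in xs]

def ols(u, v):
    """Simple least squares v ~ a + b*u; returns (a, b)."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    b = (sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
         / sum((ui - mu) ** 2 for ui in u))
    return mv - b * mu, b

def sse(pred):
    return sum((p - y) ** 2 for p, y in zip(pred, ys))

a, b = ols(xs, ys)                       # linear: y = a + b*x
sse_lin = sse([a + b * x for x in xs])
a, b = ols([log(x) for x in xs], ys)     # logarithmic: y = a + b*ln(x)
sse_log = sse([a + b * log(x) for x in xs])
a, b = ols(xs, [log(y) for y in ys])     # exponential: ln(y) = a + b*x (needs y > 0)
sse_exp = sse([exp(a + b * x) for x in xs])

best = min([("linear", sse_lin), ("logarithmic", sse_log),
            ("exponential", sse_exp)], key=lambda t: t[1])[0]
```

On these data the logarithmic fit wins by construction; the point is only that "which family" can be treated as one more thing to be estimated, exactly as the question suggests.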
| null | CC BY-SA 3.0 | null | 2011-04-08T03:41:09.710 | 2011-04-08T03:41:09.710 | null | null | 183 | null |
9338 | 2 | null | 9311 | 38 | null | The KL-divergence is typically used in information-theoretic settings, or even Bayesian settings, to measure the information change between distributions before and after applying some inference, for example. It's not a distance in the typical (metric) sense, because of lack of symmetry and triangle inequality, and so it's used in places where the directionality is meaningful.
The KS-distance is typically used in the context of a non-parametric test. In fact, I've rarely seen it used as a generic "distance between distributions", where the $\ell_1$ distance, the Jensen-Shannon distance, and other distances are more common.
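The lack of symmetry is easy to verify numerically; a small sketch with made-up discrete distributions:

```python
from math import log

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions
    on the same support (requires q[i] > 0 wherever p[i] > 0)."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.9, 0.05, 0.05]          # a peaked distribution
q = [1 / 3, 1 / 3, 1 / 3]      # the uniform distribution

# D(p||q) and D(q||p) are both positive but clearly different,
# which is why the directionality of the KL-divergence is meaningful.
```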
| null | CC BY-SA 3.0 | null | 2011-04-08T04:07:17.223 | 2011-04-08T04:07:17.223 | null | null | 139 | null |
9339 | 2 | null | 9331 | 4 | null | Wikipedia has a [nice explanation](http://en.wikipedia.org/wiki/Spline_interpolation) of spline interpolation
I posted the code to create cubic [Bezier splines](http://en.wikipedia.org/wiki/B%C3%A9zier_spline) on [Rosettacode](http://rosettacode.org/wiki/Bitmap/B%C3%A9zier_curves/Cubic#R) a while ago.
Also, you can have a look at [this discussion](https://stackoverflow.com/questions/881143/cubic-spline-extrapolation) on SO about spline extrapolation.
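As a complement to these links, natural cubic spline interpolation (second derivative zero at both boundary knots) can be coded directly. Note that this interpolates through given points; it is not the B-spline regression basis that R's `ns()` constructs, and the knot values below are purely illustrative:

```python
from bisect import bisect_right

def natural_cubic_spline(xs, ys):
    """Interpolating natural cubic spline; returns a callable.
    Knots xs must be strictly increasing."""
    n = len(xs)
    h = [xs[i + 1] - xs[i] for i in range(n - 1)]
    # Tridiagonal system for the interior second derivatives M[1..n-2].
    a = [h[i - 1] for i in range(1, n - 1)]
    b = [2 * (h[i - 1] + h[i]) for i in range(1, n - 1)]
    c = [h[i] for i in range(1, n - 1)]
    d = [6 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
         for i in range(1, n - 1)]
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, len(b)):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    M = [0.0] * n                       # M[0] = M[n-1] = 0 (natural conditions)
    for i in reversed(range(len(b))):
        M[i + 1] = (d[i] - c[i] * M[i + 2]) / b[i]

    def s(t):
        i = min(max(bisect_right(xs, t) - 1, 0), n - 2)
        hi = h[i]
        A, B = xs[i + 1] - t, t - xs[i]
        return ((M[i] * A ** 3 + M[i + 1] * B ** 3) / (6 * hi)
                + (ys[i] / hi - M[i] * hi / 6) * A
                + (ys[i + 1] / hi - M[i + 1] * hi / 6) * B)
    return s

# Sanity check: a natural cubic spline reproduces linear data exactly.
lin = natural_cubic_spline([0.0, 1.0, 2.0, 4.0, 7.0],
                           [1.0, 3.0, 5.0, 9.0, 15.0])
```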
| null | CC BY-SA 3.0 | null | 2011-04-08T05:48:32.833 | 2011-04-08T17:55:56.330 | 2017-05-23T12:39:26.523 | -1 | 582 | null |
9340 | 2 | null | 9330 | 4 | null | The problem with zeroes, is that the data do not rule out arbitrarily small proportions. So your prior information must be assessed more carefully, because in this case it still matters. Details which are irrelevant when the prior information is "swamped" by the data can be important. In this type of problem, the population size $N$ becomes important, but a binomial assumes $N\to\infty$, which gives absurd results, if this limit is applied too soon in the calculations (as your standard errors indicates).
In this case, there is relatively straight-forward approximation, you just replace $\frac{0}{25}$ with $\frac{1}{27}$, which is a Bayesian estimate based on a uniform prior for the true fraction of "positive infections". Given that you are using GLIMMIX - doing anything more sophisticated will likely wreck your SAS program.
To be consistent, it may be worthwhile to replace all proportions $\frac{r}{n}$ with $\frac{r+1}{n+2}$ - however it shouldn't influence your results too much.
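The adjustment amounts to one line of code; a sketch using the 0/25 case from the question:

```python
# Bayesian (uniform-prior) point estimate for a binomial proportion:
# replace r/n with (r+1)/(n+2), which stays strictly inside (0, 1).
def adjusted_proportion(r, n):
    return (r + 1) / (n + 2)

est = adjusted_proportion(0, 25)   # the 0/25 groups become 1/27, about 0.037
```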
| null | CC BY-SA 3.0 | null | 2011-04-08T07:11:22.210 | 2011-04-08T07:11:22.210 | null | null | 2392 | null |
9341 | 1 | 9344 | null | 5 | 287 | I have a multiple linear regression problem $y=X\beta+\epsilon$. The number of observations $m$ is large, so by the time the data gets to me it's been summarized into:
- $m$
- $X^TX$
- $X^Ty$
- $y^Ty$
- $\sum_{i=1}^m{y_i}$
The above list appears sufficient to compute the OLS estimate. But can it be used to compute a regularized estimate of some sort (e.g. ridge regression)? If not, can the list be augmented in the same spirit to enable regularization?
| Regularized fit from summarized data | CC BY-SA 3.0 | null | 2011-04-08T07:18:45.413 | 2011-04-08T11:29:40.227 | 2011-04-08T11:29:40.227 | 439 | 439 | [
"regression",
"lasso",
"regularization",
"ridge-regression"
] |
9342 | 1 | null | null | 19 | 8208 | I am using hierarchical clustering to analyze time series data. My code is implemented using the Mathematica function `DirectAgglomerate[...]`, which generates hierarchical clusters given the following inputs:
- a distance matrix D
- the name of the method used to determine inter-cluster linkage.
I have calculated the distance matrix D using Manhattan distance:
$$d(x,y) = \sum_i|x_i - y_i|$$
where $i = 1,\cdots, n$ and $n \approx 150$ is the number of data points in my time series.
My question is, is it ok to use Ward's inter-cluster linkage with a Manhattan distance matrix? Some sources suggest that Ward's linkage should only be used with Euclidean distance.
Note that `DirectAgglomerate[...]` calculates Ward's linkage using the distance matrix only, not the original observations. Unfortunately, I am unsure how Mathematica modifies Ward's original algorithm, which (from my understanding) worked by minimizing the error sum of squares of the observations, calculated with respect to the cluster mean. For example, for a cluster $c$ consisting of a vector of univariate observations, Ward formulated the error sum of squares as:
$$\sum_j||c_j - \mathrm{mean}(c)||_2^2$$
(Other software tools such as Matlab and R also implement Ward's clustering using just a distance matrix so the question isn't specific to Mathematica.)
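For reference, computing the Manhattan distance matrix fed to the linkage step is straightforward; a sketch with made-up toy series:

```python
# Pairwise Manhattan (L1) distance matrix for a few short "time series".
series = {
    "a": [1, 2, 3],
    "b": [2, 2, 2],
    "c": [0, 0, 0],
}

def manhattan(x, y):
    """d(x, y) = sum_i |x_i - y_i|"""
    return sum(abs(xi - yi) for xi, yi in zip(x, y))

names = sorted(series)
D = [[manhattan(series[p], series[q]) for q in names] for p in names]
# D is symmetric with a zero diagonal, ready to pass to a
# hierarchical-clustering routine that accepts a distance matrix.
```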
| Is it ok to use Manhattan distance with Ward's inter-cluster linkage in hierarchical clustering? | CC BY-SA 3.0 | null | 2011-04-08T07:47:43.787 | 2020-11-17T15:32:35.653 | 2017-05-12T19:05:56.270 | 37647 | 4079 | [
"clustering",
"distance-functions",
"ward"
] |
9343 | 1 | null | null | 9 | 196 | I am looking at extremely non linear data for which the ARMA/ARIMA models do not work well. Though, I see some autocorrelation, and I suspect to have better results for non linear autocorrelation.
1/ is there an equivalent of the PACF for rank correlation? (in R?)
2/ is there an equivalent of ARMA model for non linear / rank correlation (in R?)
| Is there an equivalent of ARMA for rank correlation? | CC BY-SA 3.0 | null | 2011-04-08T08:14:07.460 | 2018-07-20T09:25:33.440 | 2017-03-14T22:21:06.843 | 28666 | 1709 | [
"r",
"correlation",
"nonparametric",
"garch",
"arma"
] |
9344 | 2 | null | 9341 | 5 | null | It would be nice to have in addition the covariance matrix of the residuals $\hat{\Sigma}$ to draw common inferences about the significance of estimated parameters, or if you are sure it is homoscedastic, then just $\hat{\sigma}^2$.
As for the regularizations of generalised least squares type (probably including instrumental variables estimators) the answer will be no. You need the original data matrices (though if you are supplied with $X^T \Omega^{-1}X$ and $X^T\Omega^{-1}y$ you may do GLS, but you lose control over the choice of $\Omega$ anyway).
For general non-linear [lasso](http://www-stat.stanford.edu/~tibs/lasso/lasso.pdf) regularization it would be even more complicated. Luckily it may be approximated by ridge regression (see p. 273 in the reference) of a special type.
Regarding ordinary ridge regression it is sufficient, since all you need to do in this case is just to add elements to the diagonal $X^TX+\delta I$, where $I$ is an identity matrix. Thus in this particular case it works well.
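That last point can be sketched with hypothetical 2x2 summaries: ridge needs nothing beyond $X^TX$ and $X^Ty$, since $\hat\beta_{ridge}=(X^TX+\delta I)^{-1}X^Ty$. Here the small system is solved by Cramer's rule to keep the sketch dependency-free:

```python
# Hypothetical summaries from the (unavailable) raw data:
XtX = [[4.0, 2.0],
       [2.0, 3.0]]   # X'X
Xty = [10.0, 8.0]    # X'y
delta = 0.5          # ridge penalty (illustrative value)

# Add delta to the diagonal, then solve (X'X + delta*I) beta = X'y.
A = [[XtX[0][0] + delta, XtX[0][1]],
     [XtX[1][0], XtX[1][1] + delta]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
beta = [(Xty[0] * A[1][1] - A[0][1] * Xty[1]) / det,
        (A[0][0] * Xty[1] - Xty[0] * A[1][0]) / det]
```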
| null | CC BY-SA 3.0 | null | 2011-04-08T08:23:08.517 | 2011-04-08T08:38:34.733 | 2011-04-08T08:38:34.733 | 2645 | 2645 | null |
9345 | 1 | null | null | 3 | 103 | When doing multivariate regression, it's often the case that some predictors often have many zero values - dichotomous inputs, dummy coding of polychotomous inputs, interval coding, etc. The fraction of nonzero observations for a predictor is especially decreased when interacting these variables.
It seems obvious that only the observations of a predictor for which it is nonzero (after any transformations) will affect the estimation of its coefficient.
When doing multivariate regression I like to use weakly informative priors (like Gaussian or Cauchy) for regularization. I've noticed that sometimes, predictors which are usually zero still take on implausible regression coefficients, like ~6 for a logistic regression model. This seems to be because a variable that is nonzero for only a handful of observations is still overcoming the prior because it's getting full credit for all of its zero observations, and thus getting a huge coefficient, which in many cases is obviously overfitting and hurts generalization performance for prediction.
Multiplying the prior's scale by the fraction of observations of a predictor that are nonzero fixes this and greatly improves generalization performance, makes coefficients plausible, and seems to be common sense.
So - why don't more people do this? I've tried a cursory search in common textbooks (various machine learning books, applied and theoretical Bayesian statistics), R packages on CRAN, some machine learning packages, and I don't see evidence of anyone else doing this... isn't this just common sense? Or is there something wrong with this approach that I'm missing?
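The adjustment being described can be sketched in a few lines (the toy design matrix and base scale below are arbitrary illustrative values):

```python
# Shrink each predictor's prior scale by its fraction of nonzero observations.
X = [
    [1, 0, 0.3],
    [0, 0, 1.2],
    [1, 0, 0.0],
    [0, 1, 2.1],
]
base_scale = 2.5  # e.g. a weakly informative Cauchy scale

n = len(X)
frac_nonzero = [sum(1 for row in X if row[j] != 0) / n
                for j in range(len(X[0]))]
prior_scale = [base_scale * f for f in frac_nonzero]
# Rarely-nonzero predictors now get tighter priors, so a handful of
# observations can no longer push their coefficients to implausible values.
```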
| Why doesn't it seem to be standard to multiply prior scale by fraction of non-zero predictor observations? | CC BY-SA 3.0 | null | 2011-04-08T11:01:21.077 | 2011-04-08T15:24:16.877 | null | null | 1119 | [
"regression",
"bayesian"
] |
9347 | 2 | null | 7314 | 0 | null | My bad. I am trying to use too many frameworks at once, and failing to cover documentation properly. The asia example in winbugs does exactly what I've been asking for.
| null | CC BY-SA 3.0 | null | 2011-04-08T11:45:51.657 | 2011-04-08T11:45:51.657 | null | null | 3280 | null |
9348 | 2 | null | 9342 | 6 | null | I can't think of any reason why Ward should favor any metric. Ward's method is just another option to decide which clusters to fusion next during agglomeration. This is achieved by finding the two clusters whose fusion will minimize a certain error ([examplary source for the formula](http://www.stat.psu.edu/online/courses/stat505/18_cluster/09_cluster_wards.html)).
Hence it relies on two concepts:
- The mean of vectors which (for numerical vectors) is generally calculated by averaging over every dimension separately.
- The distance metric itself i.e. the concept of similarity expressed by this metric.
So: As long as the properties of the choosen metric (like e.g. rotation,translation or scale invariance) satisfy your needs (and the metric fits to the way the cluster mean is calculated), I don't see any reason to not use it.
I suspect that most people suggest the Euclidean metric because they
- want to increase the weight of the differences between a cluster mean and a single observation vector (which is done by squaring)
- or because it came out as best metric in the validation based on their data
- or because it is used in general.
| null | CC BY-SA 3.0 | null | 2011-04-08T11:46:37.697 | 2011-04-08T11:46:37.697 | null | null | 264 | null |
9350 | 2 | null | 9299 | 3 | null | Here is a simple idea which might work. As I've said in the comments, if you have a grid with intensities, why not fit the density of a bivariate distribution to it?
Here is the sample graph to illustrate my point:

Each grid point is displayed as a square, colored according to intensity. Superimposed on the graph is a contour plot of a bivariate normal density. As you can see, the contour lines expand in the direction of decreasing intensity. The center is controlled by the mean of the bivariate normal and the spread of the intensity by the covariance matrix.
To estimate the mean and covariance matrix, simple numerical optimisation can be used: compare the intensities with the values of the density function, treating the mean and the covariance matrix as parameters, and minimise the discrepancy to get the estimates.
This is of course strictly speaking not a statistical estimate, but at least it will give you an idea how to proceed further.
Here is the code for reproducing the graph:
```
require(mvtnorm)
require(reshape)   # provides melt()
require(ggplot2)
mu<-c(0.5,0.5)
sigma<-cbind(c(0.1,0.7*0.1),c(0.7*0.1,0.1))
x<-seq(0,1,by=0.01)
y<-seq(0,1,by=0.01)
z<-outer(x,y,function(x,y)dmvnorm(cbind(x,y),mean=mu,sigma=sigma))
mz<-melt(z)
mz$X1<-(mz$X1-1)/100
mz$X2<-(mz$X2-1)/100
colnames(mz)<-c("x","y","z")
mz$intensity<-round(mz$z*1000)
ggplot(mz, aes(x,y)) + geom_tile(aes(fill = intensity), colour = "white") +
  scale_fill_gradient(low = "white", high = "steelblue") +
  geom_contour(aes(z=z),colour="black")
```
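The estimation step described above (compare the grid intensities with density values and minimise the squared discrepancy) can also be sketched in Python; the synthetic grid, starting values, and optimiser settings below are my own illustrative choices, not part of the original example:

```python
import numpy as np
from scipy.optimize import minimize

def bvn_density(x, y, mx, my, sx, sy, rho):
    # bivariate normal density evaluated elementwise on arrays
    q = ((x - mx)**2 / sx**2
         - 2 * rho * (x - mx) * (y - my) / (sx * sy)
         + (y - my)**2 / sy**2)
    return np.exp(-q / (2 * (1 - rho**2))) / (2 * np.pi * sx * sy * np.sqrt(1 - rho**2))

# synthetic stand-in for the observed intensity grid
gx, gy = np.meshgrid(np.linspace(0, 1, 30), np.linspace(0, 1, 30))
obs = bvn_density(gx, gy, 0.5, 0.5, 0.15, 0.15, 0.4)

def loss(theta):
    # parameterize sds via log and correlation via tanh to keep them valid
    mx, my, lsx, lsy, arho = theta
    fit = bvn_density(gx, gy, mx, my, np.exp(lsx), np.exp(lsy), np.tanh(arho))
    return np.sum((obs - fit)**2)

res = minimize(loss, x0=[0.4, 0.6, np.log(0.2), np.log(0.2), 0.0],
               method="Nelder-Mead", options={"maxiter": 5000, "fatol": 1e-12})
mx_hat, my_hat = res.x[:2]   # estimated centre of the intensity surface
```

With a noiseless synthetic grid the optimiser recovers the centre closely; on real data the same fit gives a rough estimate rather than a formal statistical one, as noted above.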
| null | CC BY-SA 3.0 | null | 2011-04-08T12:39:56.850 | 2011-04-08T12:39:56.850 | null | null | 2116 | null |
9351 | 2 | null | 8784 | 4 | null | The short answer is "yes you can" - but you should compare the maximum likelihood estimates (MLEs) from the "big model" containing all covariates from either model, fitted to both.
This is a "quasi-formal" way to get probability theory to answer your question.
In the example, $Y_{1}$ and $Y_{2}$ are the same type of variables (fractions/percentages) so they are comparable. I will assume that you fit the same model to both. So we have two models:
$$M_{1}:Y_{1i}\sim Bin(n_{1i},p_{1i})$$
$$log\left(\frac{p_{1i}}{1-p_{1i}}\right)=\alpha_{1}+\beta_{1}X_{i}$$
$$M_{2}:Y_{2i}\sim Bin(n_{2i},p_{2i})$$
$$log\left(\frac{p_{2i}}{1-p_{2i}}\right)=\alpha_{2}+\beta_{2}X_{i}$$
So you have the hypothesis you want to assess:
$$H_{0}:\beta_{1}>\beta_{2}$$
And you have some data $\{Y_{1i},Y_{2i},X_{i}\}_{i=1}^{n}$, and some prior information (such as the use of logistic model). So you calculate the probability:
$$P=Pr(H_0|\{Y_{1i},Y_{2i},X_{i}\}_{i=1}^{n},I)$$
Now $H_0$ doesn't depend on the actual value of any of the regression parameters, so they must be removed by marginalisation.
$$P=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} Pr(H_0,\alpha_{1},\alpha_{2},\beta_{1},\beta_{2}|\{Y_{1i},Y_{2i},X_{i}\}_{i=1}^{n},I) d\alpha_{1}d\alpha_{2}d\beta_{1}d\beta_{2}$$
The hypothesis simply restricts the range of integration, so we have:
$$P=\int_{-\infty}^{\infty} \int_{\beta_{2}}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} Pr(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2}|\{Y_{1i},Y_{2i},X_{i}\}_{i=1}^{n},I) d\alpha_{1}d\alpha_{2}d\beta_{1}d\beta_{2}$$
Because the probability is conditional on the data, it will factor into two separate posteriors, one for each model:
$$Pr(\alpha_{1},\beta_{1}|\{Y_{1i},X_{i},Y_{2i}\}_{i=1}^{n},I)Pr(\alpha_{2},\beta_{2}|\{Y_{2i},X_{i},Y_{1i}\}_{i=1}^{n},I)$$
Now, because there are no direct links between $Y_{1i}$ and $\alpha_{2},\beta_{2}$ - only indirect links through $X_{i}$, which is known - it drops out of the conditioning in the second posterior. The same holds for $Y_{2i}$ in the first posterior.
From standard logistic regression theory, and assuming uniform prior probabilities, the posterior for the parameters is approximately bi-variate normal with mean equal to the MLEs and covariance equal to the inverse of the information matrix, denoted by $V_{1}$ and $V_{2}$ - which do not depend on the parameters, only the MLEs. So you have straightforward normal integrals with known variance matrix. $\alpha_{j}$ marginalises out with no contribution (as would any other "common variable") and we are left with the usual result (I can post the details of the derivation if you want, but it's pretty "standard" stuff):
$$P=\Phi\left(\frac{\hat{\beta}_{1,MLE}-\hat{\beta}_{2,MLE}}{\sqrt{V_{1:\beta,\beta}+V_{2:\beta,\beta}}}\right)$$
Where $\Phi()$ is just the standard normal CDF. This is the usual comparison of normal means test. But note that this approach requires the use of the same set of regression variables in each. In the multivariate case with many predictors, if you have different regression variables, the integrals will become effectively equal to the above test, but from the MLEs of the two betas from the "big model" which includes all covariates from both models.
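A hedged numerical sketch of this comparison (pure numpy, with made-up data; the simple Newton-Raphson fitter below stands in for any standard logistic-regression routine, and $\Phi$ is computed via the error function):

```python
import numpy as np
from math import erf, sqrt

def fit_logistic(X, y, iters=25):
    # Newton-Raphson MLE for logistic regression with an intercept; returns
    # the coefficients and the inverse Fisher information (approx. posterior covariance)
    Xd = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))
        H = Xd.T @ (Xd * (p * (1 - p))[:, None])   # Fisher information
        beta = beta + np.linalg.solve(H, Xd.T @ (y - p))
    return beta, np.linalg.inv(H)

rng = np.random.default_rng(0)
x = rng.normal(size=400)
y1 = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + 2.0 * x))))   # steep true slope
y2 = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + 0.5 * x))))   # shallow true slope

b1, V1 = fit_logistic(x, y1)
b2, V2 = fit_logistic(x, y2)
z = (b1[1] - b2[1]) / sqrt(V1[1, 1] + V2[1, 1])
P = 0.5 * (1 + erf(z / sqrt(2)))   # approximate posterior Pr(beta_1 > beta_2)
```

With the true slopes well separated, P comes out near 1, exactly as the normal-means comparison suggests.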
| null | CC BY-SA 3.0 | null | 2011-04-08T12:43:31.697 | 2011-04-08T12:43:31.697 | null | null | 2392 | null |
9352 | 1 | 9353 | null | 6 | 375 | I'm playing with support vector machines (SVM) using the e1071::svm() function in R, and I encountered a scenario where I asked it for a leave-one-out cross-validated classification of a 2-category response and obtained a total accuracy of 38% (35/90), which, given 90 samples, ends up with a 95% confidence interval that is below chance. Should I consider this a fluke, and if not, how is it possible for an SVM to become anti-predictive?
In case it matters, I used default values for the cost and gamma parameters, and the data predicting the response was an 8192-element vector representing 500 milliseconds of electroencephalogram data collected across 64 electrodes.
| Can anyone explain why I have obtained an anti-predictive Support Vector Machine? | CC BY-SA 3.0 | null | 2011-04-08T13:12:09.120 | 2011-04-08T13:17:12.443 | null | null | 364 | [
"cross-validation",
"svm"
] |
9353 | 2 | null | 9352 | 10 | null | It is very probably the settings for the hyper-parameters that are the issue, leading to severe over-fitting of the data. Without proper tuning of the hyper-parameters, an SVM can perform arbitrarily badly, especially for high-dimensional data (it is the tuning of the regularisation parameter that gives robustness against over-fitting in high-dimensional spaces). I would suggest nested cross-validation, with the outer (leave-one-out) cross-validation used for performance estimation and the hyper-parameters tuned independently in each fold by minimising a cross-validation based model selection criterion (I use the Nelder-Mead simplex method rather than grid search).
The short answer is: never use default hyper-parameter values; always tune them afresh for each new (partition of the) dataset.
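A minimal sketch of that nested scheme, using a toy k-nearest-neighbour classifier in place of an SVM (the data, candidate k values, and fold scheme are invented purely for illustration); the inner loop tunes the hyper-parameter, the outer loop estimates performance:

```python
import numpy as np

def knn_predict(Xtr, ytr, Xte, k):
    # brute-force k-nearest-neighbour majority vote (labels are 0/1, k odd)
    d = ((Xte[:, None, :] - Xtr[None, :, :])**2).sum(axis=-1)
    nn = np.argsort(d, axis=1)[:, :k]
    return (ytr[nn].mean(axis=1) > 0.5).astype(int)

def cv_accuracy(X, y, k, folds=5):
    # inner cross-validation: score one hyper-parameter setting
    idx = np.arange(len(y))
    accs = []
    for f in range(folds):
        te = idx % folds == f
        accs.append((knn_predict(X[~te], y[~te], X[te], k) == y[te]).mean())
    return np.mean(accs)

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 3))
y = (X[:, 0] > 0).astype(int)   # only the first feature carries signal

outer_scores = []
idx = np.arange(len(y))
for f in range(5):                          # outer loop: unbiased performance estimate
    te = idx % 5 == f
    Xtr, ytr = X[~te], y[~te]
    best_k = max([1, 3, 5, 7], key=lambda k: cv_accuracy(Xtr, ytr, k))  # tune inside the fold
    outer_scores.append((knn_predict(Xtr, ytr, X[te], best_k) == y[te]).mean())
accuracy = float(np.mean(outer_scores))
```

The key point is that the test fold of the outer loop is never touched while choosing k, so `accuracy` is not optimistically biased by the tuning.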
| null | CC BY-SA 3.0 | null | 2011-04-08T13:17:12.443 | 2011-04-08T13:17:12.443 | null | null | 887 | null |
9354 | 1 | 9355 | null | 9 | 2987 | The wonderful libsvm package provides a python interface and a file "easy.py" that automatically searches for learning parameters (cost & gamma) that maximize the accuracy of the classifier. Within a given candidate set of learning parameters, accuracy is operationalized by cross-validation, but I feel like this undermines the purpose of cross-validation. That is, insofar as the learning parameters themselves can be chosen in a manner that might over-fit the data, I feel like a more appropriate approach would be to apply cross-validation at the level of the search itself: perform the search on a training data set, then evaluate the ultimate accuracy of the SVM resulting from the finally-chosen learning parameters on a separate testing data set. Or am I missing something here?
| How does one appropriately apply cross-validation in the context of selecting learning parameters for support vector machines? | CC BY-SA 3.0 | null | 2011-04-08T13:29:59.237 | 2013-02-01T08:29:08.153 | null | null | 364 | [
"cross-validation",
"svm"
] |
9355 | 2 | null | 9354 | 10 | null | If you learn the hyper-parameters in the full training data and then cross-validate, you will get an optimistically biased performance estimate, because the test data in each fold will already have been used in setting the hyper-parameters, so the hyper-parameters selected are selected in part because they suit the data in the test set. The optimistic bias introduced in this way can be unexpectedly large. See [Cawley and Talbot, "On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation", JMLR 11(Jul):2079−2107, 2010.](http://jmlr.csail.mit.edu/papers/v11/cawley10a.html) (Particularly section 5.3). The best thing to do is nested cross-validation. The basic idea is that you cross-validate the entire method used to generate the model, so treat model selection (choosing the hyper-parameters) as simply part of the model fitting procedure (where the parameters are determined) and you can't go too far wrong.
If you use cross-validation on the training set to determine the hyper-parameters and then evaluate the performance of a model trained using those parameters on the whole training set, using a separate test set, that is also fine (provided you have enough data for reliably fitting the model and estimating performance using disjoint partitions).
| null | CC BY-SA 3.0 | null | 2011-04-08T13:40:43.197 | 2011-04-08T13:40:43.197 | null | null | 887 | null |
9356 | 2 | null | 9324 | 4 | null | Firstly, you need to set `REML=F`:
```
lmerfit1<-lmer(size ~ (1|FEMALE), data=Data, REML=F)
lmerfit2<-lmer(size ~ SITE+(1|FEMALE), data=Data, REML=F)
anova(lmerfit1, lmerfit2)
```
This will use MLE instead of REML, which is necessary because likelihoods from mixed models with different fixed effects are not comparable when REML is used.
Secondly, you could do the following quick checks:
```
summary(lmerfit2) # To see the size of the SITE coefficient
summary(lm(size ~ SITE, data=Data)) # To check the fixed-effects estimates
plot(size ~ SITE, data=Data) # Box plot
library(lattice) # dotplot() comes from the lattice package
dotplot(size ~ SITE, data=Data) # Another visual check
```
But given the non-significance of `SITE` in your reported test, and the lack of visual difference you reported in your comment, I'm guessing there is no significant main effect of `SITE`.
| null | CC BY-SA 3.0 | null | 2011-04-08T14:43:53.077 | 2011-04-08T14:43:53.077 | null | null | 3432 | null |
9357 | 1 | 9364 | null | 66 | 35783 | When you are trying to fit models to a large dataset, the common advice is to partition the data into three parts: the training, validation, and test dataset.
This is because the models usually have three "levels" of parameters: the first "parameter" is the model class (e.g. SVM, neural network, random forest), the second set of parameters are the "regularization" parameters or "hyperparameters" (e.g. lasso penalty coefficient, choice of kernel, neural network structure) and the third set are what are usually considered the "parameters" (e.g. coefficients for the covariates.)
Given a model class and a choice of hyperparameters, one fits the parameters by minimizing error on the training set. Given a model class, one tunes the hyperparameters by minimizing error on the validation set. One selects the model class by performance on the test set.
But why not more partitions? Often one can split the hyperparameters into two groups, and use a "validation 1" to fit the first and "validation 2" to fit the second. Or one could even treat the size of the training data/validation data split as a hyperparameter to be tuned.
Is this already a common practice in some applications? Is there any theoretical work on the optimal partitioning of data?
| Why only three partitions? (training, validation, test) | CC BY-SA 3.0 | null | 2011-04-08T14:45:04.197 | 2018-07-03T07:24:50.143 | 2018-07-03T07:24:50.143 | 128677 | 3567 | [
"machine-learning",
"model-selection",
"data-mining"
] |
9358 | 1 | 9361 | null | 24 | 35968 | I have four numeric variables. All of them are measures of soil quality. Higher the variable, higher the quality. The range for all of them is different:
- Var1: from 1 to 10
- Var2: from 1000 to 2000
- Var3: from 150 to 300
- Var4: from 0 to 5
I need to combine four variables into single soil quality score which will successfully rank order.
My idea is very simple. Standardize all four variables, sum them up, and whatever you get is the score, which should rank-order. Do you see any problem with this approach? Is there any other (better) approach that you would recommend?
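As a concrete sketch of the proposed approach (the five rows of measurements below are invented purely for illustration):

```python
import numpy as np

# hypothetical measurements for five soil samples (columns = Var1..Var4)
X = np.array([
    [3.0, 1200.0, 160.0, 1.0],
    [9.0, 1900.0, 280.0, 4.5],
    [5.0, 1500.0, 200.0, 2.0],
    [1.0, 1050.0, 155.0, 0.5],
    [7.0, 1700.0, 250.0, 3.5],
])

z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each variable
score = z.sum(axis=1)                      # equal-weight composite score
ranking = np.argsort(-score)               # sample indices, best quality first
# here sample 1 ranks first (it is highest on every variable) and sample 3 last
```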
Thanks
Edit:
Thanks guys. A lot of discussion went into "domain expertise"... agriculture stuff... whereas I expected more stats talk. In terms of the technique I will be using, it will probably be simple z-score summation plus logistic regression as an experiment. Because the vast majority of samples (90%) has poor quality, I'm going to combine the 3 quality categories into one and basically have a binary problem (some quality vs. no quality). I kill two birds with one stone: I increase my sample in terms of event rate, and I make use of experts by getting them to classify my samples. The expert-classified samples will then be used to fit a logistic regression model to maximize the level of concordance/discordance with the experts... How does that sound to you?
| Creating an index of quality from multiple variables to enable rank ordering | CC BY-SA 3.0 | null | 2011-04-08T15:01:30.430 | 2020-02-11T09:24:47.123 | 2013-06-28T13:24:43.540 | 919 | 333 | [
"ranking",
"valuation"
] |
9359 | 2 | null | 9345 | 4 | null | It is difficult to understand quite what you are saying here and it would be helpful to have an illustration. Take for example the values $x_1$, $x_2$ and $y$ here
```
x_1 x_2 y
--- --- ---
1 1 8
-1 1 -1
-3 0 -15
2 0 9
0 0 -1
2 0 10
-1 0 -6
```
OLS regression would give $\hat{y} = 4.9 x_1 + 4.1 x_2 - 0.6$. I don't see how the $+4.1$ can be said to be implausible. What is true is that it is a more uncertain estimate than the estimate of $+4.9$ since there is less information about the impact of variation of $x_2$ than about the impact of variation of $x_1$. [When I generated these numbers, I was aiming for $5 x_1 + 3 x_2$.]
This is not the same as saying $x_2$ has too many zeros. Let $x_3 = 1 - x_2$, so $x_3$ is zero only twice, rather than five times. Then OLS regression of $y$ on $x_1$ and $x_3$ would give $\hat{y} = 4.9 x_1 - 4.1 x_3 +3.5$, much as you might expect. But the new coefficient of $-4.1$ is as good (in any sense) as the earlier $+4.1$ even though there are fewer zeros.
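Both fits are easy to verify numerically, for instance in Python:

```python
import numpy as np

x1 = np.array([1, -1, -3, 2, 0, 2, -1], dtype=float)
x2 = np.array([1, 1, 0, 0, 0, 0, 0], dtype=float)
y = np.array([8, -1, -15, 9, -1, 10, -6], dtype=float)

# OLS of y on (1, x1, x2): coefficients come out as (-0.6, 4.9, 4.1)
b, *_ = np.linalg.lstsq(np.column_stack([np.ones(7), x1, x2]), y, rcond=None)

# refit with x3 = 1 - x2: coefficients come out as (3.5, 4.9, -4.1)
b3, *_ = np.linalg.lstsq(np.column_stack([np.ones(7), x1, 1 - x2]), y, rcond=None)
```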
On a different issue, if you are using a prior distribution which is not scale-free and you have other prior information that suggests to you that the dispersion of plausible values is going to be narrower or wider than suggested by your distribution then you can (indeed you should) adjust your prior distribution accordingly.
| null | CC BY-SA 3.0 | null | 2011-04-08T15:24:16.877 | 2011-04-08T15:24:16.877 | null | null | 2958 | null |
9360 | 2 | null | 9330 | 0 | null | You could try exact logistic regression with proc logistic, but then you cannot specify random effects somewhere in your model which you probably do now as you're using proc mixed. You'd have to switch to fixed effects but you could keep the binomial error distribution.
| null | CC BY-SA 3.0 | null | 2011-04-08T15:35:04.183 | 2011-04-08T15:35:04.183 | null | null | 1573 | null |
9361 | 2 | null | 9358 | 23 | null | The proposed approach may give a reasonable result, but only by accident. At this distance--that is, taking the question at face value, with the meanings of the variables disguised--some problems are apparent:
- It is not even evident that each variable is positively related to "quality." For example, what if a 10 for 'Var1' means the "quality" is worse than the quality when Var1 is 1? Then adding it to the sum is about as wrong a thing as one can do; it needs to be subtracted.
- Standardization implies that "quality" depends on the data set itself. Thus the definition will change with different data sets or with additions and deletions to these data. This can make the "quality" into an arbitrary, transient, non-objective construct and preclude comparisons between datasets.
- There is no definition of "quality". What is it supposed to mean? Ability to block migration of contaminated water? Ability to support organic processes? Ability to promote certain chemical reactions? Soils good for one of these purposes may be especially poor for others.
- The problem as stated has no purpose: why does "quality" need to be ranked? What will the ranking be used for--input to more analysis, selecting the "best" soil, deciding a scientific hypothesis, developing a theory, promoting a product?
- The consequences of the ranking are not apparent. If the ranking is incorrect or inferior, what will happen? Will the world be hungrier, the environment more contaminated, scientists more misled, gardeners more disappointed?
- Why should a linear combination of variables be appropriate? Why shouldn't they be multiplied or exponentiated or combined as a posynomial or something even more esoteric?
- Raw soil quality measures are commonly re-expressed. For example, log permeability is usually more useful than the permeability itself and log hydrogen ion activity (pH) is much more useful than the activity. What are the appropriate re-expressions of the variables for determining "quality"?
One would hope that soils science would answer most of these questions and indicate what the appropriate combination of the variables might be for any objective sense of "quality." If not, then you face a [multi-attribute valuation problem](http://en.wikipedia.org/wiki/Multi-criteria_decision_analysis). The Wikipedia article lists dozens of methods for addressing this. IMHO, most of them are inappropriate for addressing a scientific question. One of the few with a solid theory and potential applicability to empirical matters is [Keeney & Raiffa's multiple attribute valuation theory](http://books.google.com/books?hl=en&lr=&id=GPE6ZAqGrnoC&oi=fnd&pg=PR11&dq=Keeney%20&%20Raiffa%20multiple%20attribute%20valuation%20theory&ots=EkCGJz7pwx&sig=LVkWUJRADGezaSKENx0-giUehcs#v=onepage&q&f=false) (MAVT). It requires you to be able to determine, for any two specific combinations of the variables, which of the two should rank higher. A structured sequence of such comparisons reveals (a) appropriate ways to re-express the values; (b) whether or not a linear combination of the re-expressed values will produce the correct ranking; and (c) if a linear combination is possible, it will let you compute the coefficients. In short, MAVT provides algorithms for solving your problem provided you already know how to compare specific cases.
| null | CC BY-SA 3.0 | null | 2011-04-08T15:42:31.763 | 2013-04-15T20:50:06.113 | 2013-04-15T20:50:06.113 | 919 | 919 | null |
9362 | 2 | null | 9358 | 0 | null | One other thing you did not discuss is the scale of the measurements. V1 and V5 look like they are on a rank-order scale, and the others seem not to be, so standardization may be skewing the score. You may be better off transforming all of the variables into ranks and determining a weight for each variable, since it is highly unlikely that they all have the same weight. Equal weighting is more of a "know-nothing" default. You might want to do some correlation or regression analysis to come up with some a priori weights.
9363 | 2 | null | 9358 | -3 | null | Following up on Ralph Winters' answer, you might use PCA (principal component analysis) on the matrix of suitably standardized scores. This will give you a "natural" weight vector that you can use to combine future scores.
Do this also after all scores have been transformed into ranks. If the results are very similar, you have good reasons to continue with either method. If there are discrepancies, this will lead to interesting questions and a better understanding.
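A small numpy sketch of this suggestion (the 50 samples are simulated from a single latent factor purely for illustration): the first principal component of the correlation matrix supplies a "natural" weight vector for combining the standardized scores.

```python
import numpy as np

rng = np.random.default_rng(42)
# simulate standardized scores for 50 samples on 4 variables sharing one latent factor
latent = rng.normal(size=(50, 1))
Z = latent + 0.5 * rng.normal(size=(50, 4))
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)

R = np.corrcoef(Z, rowvar=False)
vals, vecs = np.linalg.eigh(R)        # eigh returns eigenvalues in ascending order
w = vecs[:, -1]                       # loadings of the first principal component
w = w * np.sign(w.sum())              # fix the arbitrary sign so weights are positive
score = Z @ w                         # composite score for each sample
```

The same weights can then be applied to future standardized scores, as suggested above.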
| null | CC BY-SA 3.0 | null | 2011-04-08T17:43:33.113 | 2011-04-08T17:43:33.113 | null | null | 4062 | null |
9364 | 2 | null | 9357 | 88 | null | First, I think you're mistaken about what the three partitions do. You don't make any choices based on the test data. Your algorithms adjust their parameters based on the training data. You then run them on the validation data to compare your algorithms (and their trained parameters) and decide on a winner. You then run the winner on your test data to give you a forecast of how well it will do in the real world.
You don't validate on the training data because that would overfit your models. You don't stop at the validation step's winner's score because you've iteratively been adjusting things to get a winner in the validation step, and so you need an independent test (that you haven't specifically been adjusting towards) to give you an idea of how well you'll do outside of the current arena.
Second, I would think that one limiting factor here is how much data you have. Most of the time, we don't even want to split the data into fixed partitions at all, hence CV.
| null | CC BY-SA 3.0 | null | 2011-04-08T17:58:07.700 | 2011-04-08T17:58:07.700 | null | null | 1764 | null |
9365 | 1 | null | null | 31 | 6958 | What are some good papers describing applications of statistics that would be fun and informative to read? Just to be clear, I'm not really looking for papers describing new statistical methods (e.g., a paper on least angle regression), but rather papers describing how to solve real-world problems.
For example, one paper that would fit what I'm looking for is the climate paper from the [second Cross-Validated Journal Club](https://stats.meta.stackexchange.com/questions/685/second-cross-validated-journal-club). I'm kind of looking for more statistics-ish papers, rather than machine learning papers, but I guess it's kind of a fuzzy distinction (I'd classify the Netflix Prize papers as a bit borderline, and a paper on sentiment analysis as something I'm not looking for).
I'm asking because most of the applications of statistics I've seen are either the little snippets you seen in textbooks, or things related to my own work, so I'd like to branch out a bit.
| What are some interesting and well-written applied statistics papers? | CC BY-SA 3.0 | null | 2011-04-08T19:01:11.850 | 2019-03-02T12:41:01.320 | 2017-03-16T16:02:09.620 | -1 | 1106 | [
"references",
"application"
] |
9366 | 2 | null | 9313 | 5 | null | For an overview of some matching algorithms as well as clear examples of applications in everything from education to medical experiments, I would suggest:
- Paul R. Rosenbaum (2010). Design of Observational Studies. Springer.
Rosenbaum's earlier book provides a more technical review, though because matching is such a hot topic at the moment, it may not cover the most current techniques:
- Paul R. Rosenbaum (2002). Observational Studies, Second Edition. Springer.
Even if Rosenbaum doesn't hit on a particular topic of interest, his chapter bibliographies are an excellent resource (particularly those in Design). He has also done some very valuable work on matching sensitivity analyses, which are covered extensively in these books.
Of course, you would probably also be served by going directly to the source (I haven't read this myself):
- Donald B. Rubin (2006). Matched Sampling for Causal Effects. Cambridge University Press.
As mentioned above, matching is something of a hot topic. So, generally, I would look through the citations of more recent books and articles. Besides work by Donald Rubin and Paul Rosenbaum, I would look for work by Alberto Abadie and Guido Imbens (both at Harvard) and James Heckman (Chicago), probably in that order. Of course, depending on your particular research interests, others may be equally as important.
| null | CC BY-SA 3.0 | null | 2011-04-08T19:16:32.820 | 2011-04-08T19:16:32.820 | null | null | 3265 | null |