| Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10360 | 2 | null | 10271 | 1 | null | The graph of the "original series" does not have to exhibit any pre-defined structure. What is critical is that the graph of the "residuals from a suitable model" needs to exhibit a Gaussian structure. This Gaussian structure can usually be obtained by incorporating one or more of the following "transformations":
1. an arima MODEL
2. Adjustments for Local Level Shifts or Local Time Trends or Seasonal Pulses or Ordinary Pulses
3. a weighted analysis exploiting proven variance heterogeneity
4. a possible power transformation (logs etc.) to deal with a specific variance heterogeneity
5. the detection of points in time where the model/parameters may have changed.
Intervention Detection will yield a statement about the statistical significance of the most recent event, suggesting either normalcy or an anomaly.
| null | CC BY-SA 3.0 | null | 2011-05-05T14:56:16.697 | 2011-05-05T14:56:16.697 | null | null | 3382 | null |
10361 | 1 | 12695 | null | 5 | 618 | My homework question:
>
An inspector suspects that the food in the factory she is inspecting has been contaminated with a harmful chemical c. Such chemical contamination occurs in 5% of factories producing this food. The inspector has a test A for the chemical which registers positive with 100% certainty when the chemical is present, but the test also registers positive in 10% of cases where the chemical is not present. She decides to use this test to help her decide whether there is contamination.
Assume that the prior probability of contamination is equal to the base rate, and that the
inspector’s test shows a positive result. Compute the posterior probability of contamination.
The inspector has another test B for chemical c which only registers positive 50% of the time when c is present, but has the advantage of never giving a false positive (i.e., if c is not present, the test will never say it is). The results of the two tests, A and B, are independent given the presence or absence of c. It turns out that when the inspector uses test B, the results are negative. In addition, the inspector knows that the factory is poorly maintained. The rate of contamination in factories with poor maintenance is twice as high as the rate in factories overall. Compute the posterior probability of contamination.
I know these should be rather basic, but I'm getting stuck. For #1 I've reached an answer, but I'm not sure it's correct:
Using Bayes rule the probability should be
$P(c|positive) = \frac{P(positive|c)P(c)}{P(positive)}$
Now I think that $P(positive)$ should be:
$P(positive) = P(positive,c)+P(positive,\neg c) = P(positive|c)P(c) + P(positive|\neg c)P(\neg c)$
Thus:
$P(c|positive) = \frac{P(positive|c)P(c)}{P(positive|c) P(c) + P(positive|\neg c) P(\neg c)}$
$P(c|positive) = \frac{1 * 0.05}{1 * 0.05 + 0.1 * 0.95} = 0.34$
Is this correct?
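A quick numeric sanity check of the arithmetic (a hedged sketch in Python; the numbers are the ones given in the problem statement):

```python
# Quantities from the problem statement
p_c = 0.05   # prior probability of contamination (the base rate)
tp = 1.0     # P(positive | c): test A always fires when c is present
fp = 0.10    # P(positive | not c): test A's false-positive rate

# Bayes' rule with the denominator expanded by total probability
posterior = tp * p_c / (tp * p_c + fp * (1 - p_c))
print(round(posterior, 4))  # 0.3448, i.e. ~0.34 as computed above
```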
In part #2 I thought since they are supposed to be independent this must hold:
$ P(A_{positive} \cap B_{negative}|c) = P(A_{positive}|c)*P(B_{negative}|c) $
First I adjusted the base rate and recalculated #1:
$P(c|A_{positive}) = \frac{1 * 0.1}{1 * 0.1 + 0.1 * 0.9} = 0.52$
The question says: "the results are negative". So B should be:
$P(c|B_{negative}) = \frac{P(B_{negative}|c)P(c)}{P(B_{negative}|c) P(c) + P(B_{negative}|\neg c) P(\neg c)} = \frac{0.5 * 0.1}{0.5 * 0.1 + 1 * 0.9} = 0.05$
$ P(c|A_{positive} \cap B_{negative}) = \frac{P(A_{positive} \cap B_{negative}|c) * P(c)}{P(A_{positive} \cap B_{negative})} = \frac{P(A_{positive}|c) * P(B_{negative}|c) * P(c)}{P(A_{positive} \cap B_{negative})} $
What should I do next?
| Bayes rule and base rate | CC BY-SA 3.0 | null | 2011-05-05T15:13:41.393 | 2011-07-06T16:44:37.127 | 2020-06-11T14:32:37.003 | -1 | 2261 | [
"self-study"
] |
10362 | 2 | null | 10356 | 3 | null | Asymptotically, the ratio of positive to negative patterns is essentially irrelevant. The problem arises principally when you have too few samples of the minority class to adequately describe its statistical distribution. Making the dataset larger generally solves the problem (where that is possible).
If this is not possible, the best thing to do is to re-sample the data to get a balanced dataset, and then apply a multiplicative adjustment to the output of the classifier to compensate for the difference between training set and operational relative class frequencies. While you can calculate the (asymptotically) optimal adjustment factor, in practice it is best to tune the adjustment using cross-validation (as we are dealing with a finite practical case rather than an asymptotic one).
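The multiplicative adjustment described above can be sketched as follows (a hedged illustration with hypothetical class names: each class score is multiplied by the ratio of operational to training priors, then renormalised):

```python
def adjust_posteriors(scores, train_priors, op_priors):
    """Rescale class scores from a model trained under `train_priors`
    so they reflect the operational class frequencies `op_priors`."""
    adjusted = {c: scores[c] * op_priors[c] / train_priors[c] for c in scores}
    total = sum(adjusted.values())
    return {c: v / total for c, v in adjusted.items()}

# A model trained on a 50/50 resampled set is agnostic between classes;
# after adjustment the output reflects the true 10/90 frequencies.
balanced = {"pos": 0.5, "neg": 0.5}
out = adjust_posteriors(balanced,
                        train_priors={"pos": 0.5, "neg": 0.5},
                        op_priors={"pos": 0.1, "neg": 0.9})
print(out)  # {'pos': 0.1, 'neg': 0.9}
```

In practice, as noted above, the factor is better tuned by cross-validation than taken from the priors directly.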
In this sort of situation, I often use a committee of models, where each is trained on all of the minority patterns and a different random sample of the majority patterns of the same size as the minority patterns. This guards against bad luck in the selection of a single subset of the majority patterns.
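The committee construction amounts to repeated undersampling of the majority class; a minimal sketch (standard library only, with the actual classifier training omitted):

```python
import random

def committee_training_sets(majority, minority, n_models, seed=0):
    """Each committee member sees all minority examples plus a different
    random majority subsample of equal size."""
    rng = random.Random(seed)
    return [rng.sample(majority, len(minority)) + list(minority)
            for _ in range(n_models)]

# 1000 majority examples, 3 minority examples, 5 committee members
sets = committee_training_sets(list(range(1000)), ["m1", "m2", "m3"],
                               n_models=5)
```

One model would then be trained per balanced set and their outputs averaged or voted.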
| null | CC BY-SA 3.0 | null | 2011-05-05T15:15:12.283 | 2014-02-11T12:18:34.357 | 2014-02-11T12:18:34.357 | 22047 | 887 | null |
10363 | 1 | null | null | 35 | 6203 | I'm curious about repeatable procedures that can be used to discover the functional form of the function `y = f(A, B, C) + error_term` where my only input is a set of observations (`y`, `A`, `B` and `C`). Please note that the functional form of `f` is unknown.
Consider the following dataset:
AA BB CC DD EE FF
== == == == == ==
98 11 66 84 67 10500
71 44 48 12 47 7250
54 28 90 73 95 5463
34 95 15 45 75 2581
56 37 0 79 43 3221
68 79 1 65 9 4721
53 2 90 10 18 3095
38 75 41 97 40 4558
29 99 46 28 96 5336
22 63 27 43 4 2196
4 5 89 78 39 492
10 28 39 59 64 1178
11 59 56 25 5 3418
10 4 79 98 24 431
86 36 84 14 67 10526
80 46 29 96 7 7793
67 71 12 43 3 5411
14 63 2 9 52 368
99 62 56 81 26 13334
56 4 72 65 33 3495
51 40 62 11 52 5178
29 77 80 2 54 7001
42 32 4 17 72 1926
44 45 30 25 5 3360
6 3 65 16 87 288
In this example, assume that we know that `FF = f(AA, BB, CC, DD, EE) + error term`, but we're not sure about the functional form of `f(...)`.
What procedure/what methods would you use to arrive at finding the functional form of `f(...)`?
(Bonus point: What is your best guess at the definition of `f` given the data above? :-) And yes, there is a "correct" answer that will yield an `R^2` in excess of 0.99.)
| Data mining: How should I go about finding the functional form? | CC BY-SA 3.0 | null | 2011-05-05T16:26:00.037 | 2015-12-18T08:11:22.717 | 2011-05-06T09:57:11.100 | 914 | 914 | [
"regression",
"machine-learning",
"algorithms",
"model-selection",
"data-mining"
] |
10364 | 2 | null | 10361 | 0 | null | From your independence statement,
\begin{equation}
P( A_{+} \bigcap B_{+} ) = P(A_{+} ) P(B_{+})
\nonumber
\end{equation}
and the definitions
\begin{equation}
P(A_{+} \bigcap B_{+} | c) \equiv \frac{P(A_{+} \bigcap B_{+} \bigcap c)}{P(c)}
\nonumber
\end{equation}
\begin{equation}
P(A_{+} | c) \equiv \frac{P(A_{+} \bigcap c)}{P(c)}
\nonumber
\end{equation}
and
\begin{equation}
P(B_{+} | c) \equiv \frac{P(B_{+} \bigcap c)}{P(c)}
\nonumber
\end{equation}
can you algebraically show that
\begin{equation}
P(A_{+} \bigcap B_{+} | c) \equiv \frac{P(A_{+} \bigcap B_{+} \bigcap c)}{P(c)}
= \frac{P(A_{+} \bigcap c)}{P(c)} \frac{P(B_{+} \bigcap c)}{P(c)} \equiv P(A_{+} | c) P(B_{+} | c)
\nonumber
\end{equation}
| null | CC BY-SA 3.0 | null | 2011-05-05T18:09:20.020 | 2011-05-05T18:09:20.020 | null | null | 3805 | null |
10366 | 1 | null | null | 1 | 3414 | I am having difficulty getting $\bar u$ (denoting the average of the values $u_i$) as the x-axis label in the output EPS figure.
Any help would be appreciated!
Many thanks!
| Using LaTeX expression in gnuplot | CC BY-SA 3.0 | null | 2011-05-05T19:00:36.900 | 2011-05-07T18:40:09.827 | 2011-05-06T12:11:09.423 | null | 3172 | [
"gnuplot"
] |
10367 | 2 | null | 10363 | -3 | null | "All models are wrong, but some are useful." (G. E. P. Box)
```
Y(T) = -4709.7
       + 102.60*AA(T) - 17.0707*AA(T-1)
       + 62.4994*BB(T)
       + 41.7453*CC(T)
       + 965.70*ZZ(T)

where ZZ(T) = 0 for T = 1,...,10
            = 1 otherwise
```
There appears to be a "lagged relationship" between Y and AA, and an explained shift in the mean for observations 11-25.
Curious results if this is not chronological or spatial data.
| null | CC BY-SA 3.0 | null | 2011-05-05T19:20:09.780 | 2011-05-05T19:48:41.600 | 2011-05-05T19:48:41.600 | 3382 | 3382 | null |
10368 | 2 | null | 10363 | 5 | null | Your question needs refining because the function `f` is almost certainly not uniquely defined by the sample data. There are many different functions which could generate the same data.
That being said, Analysis of Variance (ANOVA) or a "sensitivity study" can tell you a lot about how your inputs (AA..EE) affect your output (FF).
I just did a quick ANOVA and found a reasonably good model: `FF = 101*A + 47*B + 49*C - 4484`.
The function does not seem to depend on DD or EE linearly. Of course, we could go further with the model and add quadratic and mixture terms. Eventually you will have a perfect model that over-fits the data and has no predictive value. :)
| null | CC BY-SA 3.0 | null | 2011-05-05T19:21:25.287 | 2011-05-05T19:32:49.257 | 2011-05-05T19:32:49.257 | 2260 | 2260 | null |
10369 | 1 | 10668 | null | 9 | 253 | Suppose I have paired observations drawn i.i.d. as $X_i \sim \mathcal{N}\left(0,\sigma_x^2\right), Y_i \sim \mathcal{N}\left(0,\sigma_y^2\right),$ for $i=1,2,\ldots,n$. Let $Z_i = X_i + Y_i,$ and denote by $Z_{i_j}$ the $j$th largest observed value of $Z$. What is the (conditional) distribution of $X_{i_j}$? (or equivalently, that of $Y_{i_j}$)
That is, what is the distribution of $X_i$ conditional on $Z_i$ being the $j$th largest of $n$ observed values of $Z$?
I am guessing that as $\rho = \frac{\sigma_x}{\sigma_y} \to 0$, the distribution of $X_{i_j}$ converges to just the unconditional distribution of $X$, while as $\rho \to \infty$, the distribution of $X_{i_j}$ converges to the unconditional distribution of the $j$th order statistic of $X$. In the middle, though, I am uncertain.
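A quick Monte Carlo sketch (Python, standard library only) supports this limiting intuition: the mean of the $X$ attached to the largest $Z$ is far from 0 when $X$ dominates the variance of $Z$, and near 0 when it contributes almost nothing:

```python
import random
import statistics

def mean_x_given_max_z(sigma_x, sigma_y, n=20, trials=4000, seed=1):
    """Mean of X_i at the index i where Z_i = X_i + Y_i is largest (j = 1)."""
    rng = random.Random(seed)
    picked = []
    for _ in range(trials):
        xs = [rng.gauss(0, sigma_x) for _ in range(n)]
        zs = [x + rng.gauss(0, sigma_y) for x in xs]
        i = max(range(n), key=zs.__getitem__)  # index of the largest Z
        picked.append(xs[i])
    return statistics.mean(picked)

high = mean_x_given_max_z(1.0, 1.0)    # clearly positive (roughly 1.3 for n = 20)
tiny = mean_x_given_max_z(0.01, 1.0)   # essentially 0
```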
| Distribution of 'unmixed' parts based on order of the mix | CC BY-SA 3.0 | 0 | 2011-05-05T19:28:23.433 | 2011-05-11T18:22:17.520 | 2011-05-05T21:22:53.483 | 919 | 795 | [
"distributions",
"order-statistics",
"regularization"
] |
10370 | 1 | null | null | 12 | 4366 | I want to train a classifier, say SVM, or random forest, or any other classifier. One of the features in the dataset is a categorical variable with 1000 levels. What is the best way to reduce the number of levels in this variable. In R there is a function called `combine.levels()` in the Hmisc package, which combines infrequent levels, but I was looking for other suggestions.
| Reducing number of levels of unordered categorical predictor variable | CC BY-SA 3.0 | null | 2011-05-05T19:33:30.583 | 2018-05-03T11:13:26.407 | 2018-05-03T11:13:26.407 | 128677 | 616 | [
"classification",
"svm",
"random-forest",
"many-categories"
] |
10371 | 2 | null | 10363 | 28 | null | To find the best-fitting functional form (so-called free-form or symbolic regression) for the data, try this tool - to the best of my knowledge this is the best one available (at least I am very excited about it)...and it's free :-)
[http://creativemachines.cornell.edu/eureqa](http://creativemachines.cornell.edu/eureqa)
EDIT: I gave it a shot with Eureqa and I would go for:
$$AA + AA^2 + BB*CC$$ with $R^2=0.99988$
I would call it a perfect fit (Eureqa gives other, better-fitting solutions, but these are also a little more complicated; Eureqa favours this one, so I chose it) - and Eureqa did everything for me in a few seconds on a normal laptop ;-)
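For the curious, the proposed form can be checked directly against the data from the question, without fitting any coefficients (a Python sketch; rows are AA, BB, CC, DD, EE, FF):

```python
rows = [
    (98, 11, 66, 84, 67, 10500), (71, 44, 48, 12, 47, 7250),
    (54, 28, 90, 73, 95, 5463), (34, 95, 15, 45, 75, 2581),
    (56, 37, 0, 79, 43, 3221), (68, 79, 1, 65, 9, 4721),
    (53, 2, 90, 10, 18, 3095), (38, 75, 41, 97, 40, 4558),
    (29, 99, 46, 28, 96, 5336), (22, 63, 27, 43, 4, 2196),
    (4, 5, 89, 78, 39, 492), (10, 28, 39, 59, 64, 1178),
    (11, 59, 56, 25, 5, 3418), (10, 4, 79, 98, 24, 431),
    (86, 36, 84, 14, 67, 10526), (80, 46, 29, 96, 7, 7793),
    (67, 71, 12, 43, 3, 5411), (14, 63, 2, 9, 52, 368),
    (99, 62, 56, 81, 26, 13334), (56, 4, 72, 65, 33, 3495),
    (51, 40, 62, 11, 52, 5178), (29, 77, 80, 2, 54, 7001),
    (42, 32, 4, 17, 72, 1926), (44, 45, 30, 25, 5, 3360),
    (6, 3, 65, 16, 87, 288),
]

pred = [aa + aa**2 + bb * cc for aa, bb, cc, dd, ee, y in rows]
ys = [r[-1] for r in rows]
mean_y = sum(ys) / len(ys)
ss_res = sum((y - p) ** 2 for y, p in zip(ys, pred))
ss_tot = sum((y - mean_y) ** 2 for y in ys)
r2 = 1 - ss_res / ss_tot
print(round(r2, 4))  # ~0.9999 even with raw, unfitted coefficients
```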
| null | CC BY-SA 3.0 | null | 2011-05-05T19:41:52.350 | 2011-05-09T18:58:37.057 | 2011-05-09T18:58:37.057 | 230 | 230 | null |
10373 | 2 | null | 10370 | 9 | null | How best to do this is going to vary tremendously depending on the task you're performing, so it's impossible to say what will be best in a task-independent way.
There are two easy things to try if your levels are ordinal:
- Bin them. E.g., 0 = (0-250), 1 = (251-500), etc. You may want to select the limits so each bin has an equal number of items.
- You can also take a log transform of the levels. This will squish the range down.
If the levels are not ordinal you can cluster the levels based on other features/variables in your dataset and substitute the cluster ids for the previous levels. There are as many ways to do this as there are clustering algorithms, so the field is wide open. As I read it, this is what `combine.levels()` is doing. You could do similarly using `kmeans()` or `prcomp()`. (You could/should subsequently train a classifier to predict the clusters for new datapoints.)
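The two ordinal tricks are essentially one-liners (a hedged sketch; the bin width and the integer-floored log bins are arbitrary illustrative choices):

```python
import math

def width_bins(levels, width=250):
    """Equal-width binning: 0-249 -> 0, 250-499 -> 1, ..."""
    return [lvl // width for lvl in levels]

def log_bins(levels):
    """Log transform squashes the range; floor to get discrete bins."""
    return [int(math.log1p(lvl)) for lvl in levels]

print(width_bins([10, 300, 600, 999]))  # [0, 1, 2, 3]
print(log_bins([1, 10, 100, 1000]))     # [0, 2, 4, 6]
```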
| null | CC BY-SA 3.0 | null | 2011-05-05T20:54:18.127 | 2011-05-05T20:54:18.127 | null | null | 1876 | null |
10374 | 2 | null | 423 | 46 | null | From [xkcd](http://xkcd.com/893/):

This is data analysis in the form of a cartoon, and I find it particularly poignant.
>
The universe is probably littered with the one-planet graves of cultures which made the sensible economic decision that there's no good reason to go into space--each discovered, studied, and remembered by the ones who made the irrational decision.
| null | CC BY-SA 3.0 | null | 2011-05-05T21:04:31.380 | 2011-10-25T20:57:04.763 | 2011-10-25T20:57:04.763 | 5880 | 2817 | null |
10375 | 1 | 10380 | null | 10 | 17201 | I have two datasets from genome-wide association studies. The only
information available is the odds ratio and the p-value for the first
data set. For the second data set I have the Odds Ratio, p-value and allele frequencies (AFD= disease, AFC= controls) (e.g: 0.321). I'm trying to do a meta-analysis of these data but I
don't have the effect size parameter to perform this. Is there a
way to calculate the SE and the 95% CI of the OR for each of
these studies using only the information provided?
Thank you in advance
example:
Data available:
```
Study SNP ID P OR Allele AFD AFC
1 rs12345 0.023 0.85
2 rs12345 0.014 0.91 C 0.32 0.25
```
With these data can I calculate the SE and CI95% OR ?
Thanks
| How to calculate Standard Error of Odds Ratios? | CC BY-SA 3.0 | null | 2011-05-05T22:18:43.513 | 2011-10-21T06:58:21.333 | 2011-05-06T13:45:19.837 | 930 | 4483 | [
"meta-analysis",
"genetics"
] |
10376 | 2 | null | 10366 | 1 | null | I have not tried the following, but they might work for you*
You could try the Baltic letter `ū` (u with macron) either directly or with Unicode `U+016B` or html `ū`
Or you could follow the [advice here](http://www.fnal.gov/docs/products/gnuplot/tutorial/), which seems to imply that something like
```
set xlabel "$\bar{u}$"
```
might work.
| null | CC BY-SA 3.0 | null | 2011-05-05T22:26:02.227 | 2011-05-05T22:26:02.227 | null | null | 2958 | null |
10377 | 1 | null | null | 6 | 434 | I am currently working towards a Statistics BSc and am frequently having to organise my research.
I am currently using a mixture of a MediaWiki and a bibliography browser plugin to make an archive of my online data sources and references to PDFs with useful information, but it's becoming a bit of a mess.
Is there any research tool that is orientated towards statistics?
would-likes include
- //TODO, task tracking so I can have a central list of outstanding tasks
- some cloud storage, so I can access my archive from any machine
some suggestions have been
- org-mode
- zotero
- mediawiki
(I guess that anything I could store the files as text, I can distribute my work using SVN/git etc)
| What research tool to use when researching a project? | CC BY-SA 3.0 | null | 2011-05-05T23:27:01.870 | 2011-05-06T10:38:58.583 | 2011-05-06T05:49:44.827 | 2116 | 4484 | [
"software"
] |
10378 | 1 | 10422 | null | 10 | 13630 | [I first posted this question to Stack Overflow [here](https://stackoverflow.com/questions/5866850/using-holt-winters-for-forecasting-in-python) but didn't get any replies, so I thought I'd try over here. Apologies if reposting isn't allowed.]
I've been trying to use [this implementation of the Holt-Winters algorithm](http://adorio-research.org/wordpress/?p=1230) for time series forecasting in Python but have run into a roadblock... basically, for some series of (positive) inputs, it sometimes forecasts negative numbers, which should clearly not be the case. Even if the forecasts are not negative, they are sometimes wildly inaccurate - orders of magnitude higher/lower than they should be. Giving the algorithm more periods of data to work with does not appear to help, and in fact often makes the forecast worse.
The data I'm using has the following characteristics, which might be problems:
- Very frequently sampled (one data point every 15 minutes, as opposed to monthly data as the example uses) - but from what I've read, the Holt-Winters algorithm shouldn't have a problem with that. Perhaps that indicates a problem with the implementation?
- Has multiple periodicities - there are daily peaks (i.e. every 96 data points) as well as a weekly cycle of weekend data being significantly lower than weekday data - for example weekdays can peak around 4000 but weekends peak at 1000 - but even when I only give it weekday data, I run into the negative-number problem.
Is there something I'm missing with either the implementation or my usage of the Holt-Winters algorithm in general? I'm not a statistician so I'm using the 'default' values of alpha, beta, and gamma indicated in the link above - is that likely to be the problem? What is a better way to calculate these values?
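(To make experimenting with alpha, beta, and gamma easier, here is a minimal, self-contained sketch of additive Holt-Winters in Python; it is not the linked implementation, and the defaults below are illustrative, not tuned. `m` is the season length in samples, e.g. 96 for a daily cycle at 15-minute sampling.)

```python
def holt_winters_additive(series, m, alpha=0.2, beta=0.1, gamma=0.1,
                          horizon=4):
    """Minimal additive Holt-Winters; needs at least two full seasons."""
    # crude initialisation from the first two seasons
    level = sum(series[:m]) / m
    trend = (sum(series[m:2 * m]) - sum(series[:m])) / (m * m)
    season = [series[i] - level for i in range(m)]
    for t in range(m, len(series)):
        last_level = level
        s = season[t % m]
        level = alpha * (series[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % m] = gamma * (series[t] - level) + (1 - gamma) * s
    return [level + (h + 1) * trend + season[(len(series) + h) % m]
            for h in range(horizon)]

# On a clean repeating pattern the forecasts reproduce the cycle
fc = holt_winters_additive([10.0, 20.0, 30.0, 20.0] * 10, m=4)
print([round(v, 3) for v in fc])  # [10.0, 20.0, 30.0, 20.0]
```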
Or ... is there a better algorithm to use here than Holt-Winters? Ultimately I just want to create sensible forecasts from historical data here. I've tried single- and double-exponential smoothing but (as far as I understand) neither support periodicity in data.
I have also looked into using the R [forecast](http://robjhyndman.com/software/forecast/) package instead through rpy2 - would that give me better results? I imagine I would still have to calculate the parameters and so on, so it would only be a good idea if my current problem lies in the implementation of the algorithm...?
Any help/input would be greatly appreciated!
| Using Holt-Winters for forecasting in Python | CC BY-SA 3.0 | null | 2011-05-05T23:46:05.043 | 2011-05-06T17:02:29.550 | 2017-05-23T12:39:26.143 | -1 | 4470 | [
"forecasting",
"python"
] |
10379 | 2 | null | 10378 | 7 | null | The problem might be that Holt-Winters is a specific model form and may not be applicable to your data. The HW Model assumes among other things the following.
1. one and only one trend
2. no level shifts in the data, i.e. no intercept changes
3. seasonal parameters that do not vary over time
4. no outliers
5. no autoregressive structure or adaptive model structure
6. model errors that have constant variance
And of course
7. that history alone causes the future, i.e. no incorporation of price/promotion/event variables as helping variables
From your description it appears to me that a mixed-frequency approach might be needed. I have seen time series problems where the hour-of-the-day effects and the day-of-the-week effects have significant interaction terms. You are trying to force your data into an inadequate, i.e. not generalized enough, structure. Estimating parameters and choosing from a small set of models does not replace Model Identification. You might want to read a piece on the different approaches to Automatic Modeling at www.autobox.com/pdfs/catchword.pdf. In terms of a more general approach I would suggest that you consider an ARMAX model, otherwise known as a Transfer Function, which relaxes the aforementioned assumptions.
| null | CC BY-SA 3.0 | null | 2011-05-06T00:25:11.470 | 2011-05-06T00:25:11.470 | null | null | 3382 | null |
10380 | 2 | null | 10375 | 16 | null | You can calculate/approximate the standard errors via the p-values. First, convert the two-sided p-values into one-sided p-values by dividing them by 2. So you get $p = .0115$ and $p = .007$. Then convert these p-values to the corresponding z-values. For $p = .0115$, this is $z = -2.273$ and for $p = .007$, this is $z = -2.457$ (they are negative, since the odds ratios are below 1). These z-values are actually the test statistics calculated by taking the log of the odds ratios divided by the corresponding standard errors (i.e., $z = log(OR) / SE$). So, it follows that $SE = log(OR) / z$, which yields $SE = 0.071$ for the first and $SE = .038$ for the second study.
Now you have everything to do a meta-analysis. I'll illustrate how you can do the computations with R, using the metafor package:
```
library(metafor)
yi <- log(c(.85, .91)) ### the log odds ratios
sei <- c(0.071, .038) ### the corresponding standard errors
res <- rma(yi=yi, sei=sei) ### fit a random-effects model to these data
res
Random-Effects Model (k = 2; tau^2 estimator: REML)
tau^2 (estimate of total amount of heterogeneity): 0 (SE = 0.0046)
tau (sqrt of the estimate of total heterogeneity): 0
I^2 (% of total variability due to heterogeneity): 0.00%
H^2 (total variability / within-study variance): 1.00
Test for Heterogeneity:
Q(df = 1) = 0.7174, p-val = 0.3970
Model Results:
estimate se zval pval ci.lb ci.ub
-0.1095 0.0335 -3.2683 0.0011 -0.1752 -0.0438 **
```
Note that the meta-analysis is done using the log odds ratios. So, $-0.1095$ is the estimated pooled log odds ratio based on these two studies. Let's convert this back to an odds ratio:
```
predict(res, transf=exp, digits=2)
pred se ci.lb ci.ub cr.lb cr.ub
0.90 NA 0.84 0.96 0.84 0.96
```
So, the pooled odds ratio is .90 with 95% CI: .84 to .96.
| null | CC BY-SA 3.0 | null | 2011-05-06T01:03:12.977 | 2011-10-21T06:58:21.333 | 2011-10-21T06:58:21.333 | 1934 | 1934 | null |
10381 | 1 | null | null | 4 | 306 | I'm really confused. I never took statistics but am trying to read up as much as I can. I didn't think much about statistical analysis until after doing the experiment, unfortunately, but maybe someone can help me out.
I have 4 birds. Each bird must fly in 7 different wind conditions. In each wind condition, I measure things like wing stroke amplitude, flapping frequency, etc. ~40 times. I am pretty sure that those things are normally distributed. I would like to test whether the wind condition affects any of the kinematic variables I measure. The wind conditions I use are winds of different magnitudes and directions, so that 4 m/s and 3 m/s forward winds will probably cause greater correlation in kinematic variables than a 4 m/s forward wind and a 4 m/s backward wind. I hope this makes sense!
So, from what I can understand, it seems as if I should do a repeated measures multivariate analysis. However, I am concerned with my sample size of 4 birds. A plot of say, individual bird's stroke amplitude with wind condition shows an obvious effect (when it must fly faster, amplitude increases), but I just don't know how/if I can show that this is statistically significant. Any input would be appreciated!
| Is repeated measures ANOVA appropriate for my experiment and is my sample size large enough? | CC BY-SA 3.0 | null | 2011-05-06T01:08:11.353 | 2011-05-06T01:48:28.767 | 2011-05-06T01:48:28.767 | 2970 | 4486 | [
"repeated-measures",
"sample-size"
] |
10382 | 1 | null | null | 4 | 1587 | I am going to make an environmental index to be used as an explanatory variable in a regression model. For making this index, I asked respondents a set of questions about their environmental attitudes. Each question has 5 response options from 1=“completely agree” to 5=“completely disagree”.
I'm going to summarize each of these categories into one index.
What general advice can be offered for creating such an index?
| General advice on forming an index of attitude to the environment from a set of Likert items | CC BY-SA 3.0 | null | 2011-05-06T01:11:14.787 | 2018-05-08T20:17:37.590 | 2013-07-21T11:38:10.530 | 183 | 4487 | [
"factor-analysis",
"scales",
"reliability"
] |
10385 | 1 | null | null | 3 | 375 | I am trying to fit an ordinal regression model using the `logit` link function in R using `ordinal` package; the response variables have five levels.
The number of explanatory variables is much larger than the number of samples ($p \gg n$)
Could any one help me with the following problem:
- Start with a model that contains only the intercept.
- For the current model, explore the improvement in fit by adding additional variables.
- Add the baseline for the variables that performed the best (using AIC, deviance, etc.)
- Go back to step 2 until the maximal number of variables in the model is reached.
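The stepwise loop itself is model-agnostic; a minimal sketch (in Python for illustration; in practice `score(subset)` would fit the ordinal model, e.g. `clm()` from the `ordinal` package, on that subset of variables and return its AIC):

```python
def forward_select(variables, score, max_vars=None):
    """Forward selection: `score(subset)` returns a criterion to minimise
    (e.g. AIC). Starts from the empty, intercept-only model."""
    if max_vars is None:
        max_vars = len(variables)
    selected = []
    best = score(selected)               # intercept-only model
    while len(selected) < max_vars:
        candidates = [v for v in variables if v not in selected]
        if not candidates:
            break
        trial_best, trial_var = min((score(selected + [v]), v)
                                    for v in candidates)
        if trial_best >= best:           # no candidate improves the fit
            break
        selected.append(trial_var)
        best = trial_best
    return selected

# Toy criterion: x1 and x2 genuinely help; every extra term costs 1
def toy_aic(subset):
    return 10 - 3 * ("x1" in subset) - 2 * ("x2" in subset) + len(subset)

chosen = forward_select(["x1", "x2", "x3"], toy_aic)
print(chosen)  # ['x1', 'x2']
```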
Unfortunately, `glmnet` cannot handle ordinal regression; otherwise it would have been great. Is there a way of reducing the ordinal regression problem to multinomial regression using indicator variables? This would be of great benefit, as I could use `glmnet` for variable selection.
This is sample data (in my case $n \sim 100$, and $p \sim 10000$):
```
structure(list(resp = structure(c(1L, 1L, 2L, 2L, 2L), .Label = c("a",
"b"), class = c("ordered", "factor")), x1 = 1:5, x2 = c(0.1,
0.2, 0.3, 0.4, 0.5), x3 = c(0.01, 0.04, 0.09, 0.16, 0.25), x4 = c(1,
4, 9, 16, 25), x5 = c(0.001, 0.002, 0.003, 0.004, 0.005), x6 = c(-5,
-4, -3, -2, -1), x7 = c(-0.5, -0.4, -0.3, -0.2, -0.1), x8 = c(0.25,
0.16, 0.09, 0.04, 0.01), x9 = c(25, 16, 9, 4, 1), x10 = c(0.0316227766016838,
0.0447213595499958, 0.0547722557505166, 0.0632455532033676, 0.0707106781186548
)), .Names = c("resp", "x1", "x2", "x3", "x4", "x5", "x6", "x7",
"x8", "x9", "x10"), row.names = c(NA, -5L), class = "data.frame")
```
| How to add variables sequentially in ordinal package in R | CC BY-SA 4.0 | 0 | 2011-05-06T04:04:21.720 | 2018-07-21T22:11:44.307 | 2018-07-21T22:11:44.307 | 11887 | 1307 | [
"r",
"regression",
"ordinal-data"
] |
10386 | 1 | null | null | 5 | 1667 | I would like to measure the amplitude of waves in a noisy time series on-line. I have a time series that models a noisy wave function and undergoes shifts in amplitude. Say, for example, something like this:
```
set.seed <- 1001
x <- abs(sin(seq(from = 0, t = 100, by = 0.1)))
x <- x + (runif(1001, 0, 1) / 5)
x <- x * c(rep(1.0, 500), rep(2.0, 501))
```
The resulting data looks like this:
```
> head(x, n = 30)
[1] 0.1581530 0.1329728 0.3911897 0.4104984 0.4774424 0.5118123 0.6499325
[8] 0.6837706 0.8520770 0.8625692 0.8441520 0.9960601 1.1119514 1.1414032
[15] 1.1153601 1.1456799 1.0843497 1.1141201 1.1290904 0.9906415 0.9836052
[22] 0.9369836 0.9493608 0.7484588 0.7588435 0.6467422 0.5787302 0.4665009
[29] 0.4643982 0.3398427
> plot(x)
> lines(x)
```

As you can see, because of the noise in the series the data does not increase monotonically between the waves' troughs and crests.
I'm looking for a way to estimate the amplitude of each wave's peak on-line in a computationally untaxing way. I can probably find a way to measure the maximum magnitude of the noise term. I'm not sure if the frequency of the waves is constant, so I'd be interested both in answers that assume a constant (known) wave frequency or a variable wave frequency. The real data is also sinusoidal.
I'm sure that this is a common problem with well-known solutions, but I am so new to this that I don't even know what terms to search for. Also, apologies if this question would be more appropriate to stackoverflow, I can ask there if that is preferred.
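One cheap on-line approach worth sketching (a hedged Python illustration, not a canonical solution): track a rolling maximum over roughly one wave period, so the running output hugs the crest level regardless of the noise between crests. The monotonic-deque trick keeps this O(1) amortised per sample.

```python
import math
import random
import statistics
from collections import deque

def rolling_max(stream, window):
    """Online rolling maximum via a monotonic deque."""
    dq = deque()  # (index, value), values kept decreasing
    for i, v in enumerate(stream):
        while dq and dq[-1][1] <= v:
            dq.pop()
        dq.append((i, v))
        if dq[0][0] <= i - window:
            dq.popleft()
        yield dq[0][1]

# Synthetic data in the spirit of the question: |sin| plus uniform noise,
# with the amplitude doubling halfway through.
rng = random.Random(1001)
x = [abs(math.sin(t * 0.1)) + rng.uniform(0, 1) / 5 for t in range(1001)]
x = [v * (1.0 if i < 500 else 2.0) for i, v in enumerate(x)]

# Window of ~one crest-to-crest spacing of |sin(0.1 t)| (pi/0.1 ~ 31
# samples), so every window is guaranteed to contain a crest.
peaks = list(rolling_max(x, window=35))
first = statistics.median(peaks[100:480])    # before the shift
second = statistics.median(peaks[550:990])   # after the shift
print(round(second / first, 2))              # ~2.0: the doubling is recovered
```

If the frequency is unknown or varying, the window would have to be adapted, e.g. from the spacing between detected crests.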
| Online method for detecting wave amplitude | CC BY-SA 3.0 | null | 2011-05-06T04:51:28.657 | 2011-05-06T22:16:54.750 | 2011-05-06T06:21:06.680 | 179 | 179 | [
"time-series",
"signal-processing",
"online-algorithms"
] |
10387 | 1 | 10393 | null | 6 | 1121 | I'm using supervised classification algorithms from [mlpy](https://mlpy.fbk.eu/data/doc/classification.html) to classify things into two groups for a question-answering system. I don't really know how these algorithms work, but they seem to be doing vaguely what I want.
I would like to get some measure of confidence out of the classifiers. I can get "real-valued predictions" from the classifiers. These appear to be values of what I would call a link function. Here's some sample output from my system.
```
Predictions as to whether an answer is correct
for random data from various models
Results for one run
----------------------------------------------
Model Result Confidence? ("Real value")
----- ------ ---------------------------------
SVM [True, 0.10396502611075412]
FDA [True, 3.3052963597375227]
SRDA [False, 0.34205901959526142]
PDA [True, 3.8857018468328794]
----------------------------------------------
Results for another run
----------------------------------------------
Model Result Confidence? ("Real value")
----- ------ ---------------------------------
SVM [False, -0.0059697528841203369]
FDA [False, -0.15660355802446979]
SRDA [False, 1.2465697042600801]
PDA [True, 0.23122963338708608]
----------------------------------------------
```
The real values are generally more positive for classifications as "True" and more negative for classifications as "False", but the link is a bit more complex than that. Can I turn these real values into confidence measures? If so, how?
And I'm playing with the following classifiers.
- Support Vector Machines (SVMs)
- K Nearest Neighbor (KNN)
- Fisher Discriminant Analysis (FDA)
- Spectral Regression Discriminant Analysis (SRDA)
- Penalized Discriminant Analysis (PDA)
- Diagonal Linear Discriminant Analysis (DLDA)
Update: Having thought about this more, I realize that I actually just need the rank of the confidences. Does it seem right that the answers with the highest real values are the ones most confidently categorized in the True group? I'd still like to understand this a bit better, but an answer to that would be nice in the short term.
| What do "real values" refer to in supervised classification? | CC BY-SA 3.0 | null | 2011-05-06T05:36:53.120 | 2011-05-06T11:16:24.200 | 2011-05-06T08:16:05.417 | 3874 | 3874 | [
"machine-learning",
"classification",
"svm",
"discriminant-analysis"
] |
10388 | 1 | 10402 | null | 5 | 3781 | I have a data set
```
> head(data)
id centre u time event x c1 c2 c3 c4 c5 c6 c7 c8 c9 c10 c11 c12 c13 c14
1 1 0.729891 0.3300478 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
2 1 0.729891 7.0100000 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
3 1 0.729891 7.0150000 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0
4 1 0.729891 1.3616940 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
5 1 0.729891 7.0250000 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
6 1 0.729891 5.0824055 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
```
and I want to fit a proportional hazards model.
IN R
I first defined `formula`:
```
> formula
Surv(time, event) ~ x + c1 + c2 + c3 + c4 + c5 + c6 + c7 + c8 +
c9 + c10 + c11 + c12 + c13 + c14
```
and I used the `coxph` function:
```
> library(survival)
> mod <- coxph(formula, data=data)
Error in fitter(X, Y, strats, offset, init, control, weights = weights, :
  NA/NaN/Inf in foreign function call (arg 6)
In addition: Warning message:
In fitter(X, Y, strats, offset, init, control, weights = weights, :
Ran out of iterations and did not converge
```
IN SAS
I used `proc phreg`:
```
proc phreg data=data;
model time*event(0) = x c1-c14 / ties=breslow;
run;
```
Partial output:
```
Convergence Status
Convergence criterion (GCONV=1E-8) satisfied.
```
### Question:
- Do you have any suggestion to make it work in R?
I have tried to increase `iter.max` but the problem is still there... If needed, I can provide the data (600 rows).
| How to get convergence using coxph (R) given that model converges using proc phreg (SAS) | CC BY-SA 3.0 | null | 2011-05-06T06:07:07.653 | 2011-05-06T13:01:48.383 | 2011-05-06T06:25:18.100 | 183 | 3019 | [
"r",
"sas",
"cox-model"
] |
10389 | 1 | null | null | 5 | 952 | According to normal distribution theory, for $n$ independent,
identically distributed, standard normal random variables $\xi_j$, the expected absolute maximum is approximately
$E(\max|\xi_j|) \approx \sqrt{2 \ln n}$
Given this, why do we need to multiply the above estimate by $\sigma$ (the standard deviation) in order to obtain the expected absolute maximum for a zero-mean normal random variable with standard deviation $\sigma$?
| Calculating the distribution of maximal value of $n$ draws from a normal distribution | CC BY-SA 3.0 | null | 2011-05-06T06:11:52.817 | 2011-05-06T13:47:26.630 | 2011-05-06T12:18:03.990 | null | 4286 | [
"normal-distribution",
"extreme-value"
] |
10390 | 2 | null | 10389 | 2 | null | Intuitively: values of the standard normal distribution (including the absolute maximum) 'tend to be' 1 SD = 1 away from 0.
In a non-standard zero-mean normal distribution, the data 'tend to be' 1 SD = $\sigma$ away from 0.
You could say that as long as you're doing linear operations, all distances to zero scale by a factor of $\sigma$.
| null | CC BY-SA 3.0 | null | 2011-05-06T06:22:13.960 | 2011-05-06T06:22:13.960 | null | null | 4257 | null |
10392 | 2 | null | 10389 | 5 | null | If $\zeta_j = \sigma \xi_j$ for some $\sigma > 0$, then
$$E[\max|\zeta_j|] = E[\max|\sigma \xi_j |] = E[\sigma \max| \xi_j|]= \sigma E[ \max| \xi_j|]$$
and this tells us how to move from a standard normal with mean $0$ and standard deviation $1$ to a normal distribution with mean $0$ and standard deviation $\sigma$.
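As a quick numerical sanity check, the scaling identity used above holds sample by sample, since $|\sigma x| = \sigma |x|$ for $\sigma > 0$; a minimal sketch (the values below are arbitrary stand-ins for normal draws):

```python
# Scaling a sample by sigma scales its absolute maximum by exactly sigma.
sigma = 2.5
xi = [-1.3, 0.7, 2.1, -0.4]            # stand-ins for standard normal draws
zeta = [sigma * x for x in xi]          # "draws" from N(0, sigma^2)

lhs = max(abs(z) for z in zeta)         # max |zeta_j|
rhs = sigma * max(abs(x) for x in xi)   # sigma * max |xi_j|
assert lhs == rhs
```

Taking expectations of both sides then gives the displayed result.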
| null | CC BY-SA 3.0 | null | 2011-05-06T07:27:01.087 | 2011-05-06T13:47:26.630 | 2011-05-06T13:47:26.630 | -1 | 2958 | null |
10393 | 2 | null | 10387 | 6 | null | Regarding the "Real Values"
The "Real Values" are better called "confidences" or (from my pov the most common term) "scores".
Such scores are often normalized so that they sum up to 1 across all classes. They represent a measure of how, well, confident the model is that the presented example belongs to a certain class. They are highly dependent on the general strategy and the properties of the algorithm. For example, in KNN the score for a class $i$ is calculated by averaging the distance to those examples which both belong to the k nearest neighbors and have class $i$. Then the score is sum-normalized across all classes.
Regarding your question
I suppose with "converting into confidences" you actually mean "probability estimates". E.g. if an example has probability 0.3 for class "1", then 30% of all examples with similar values should belong to class "1" and 70% should not.
As far as I know, this task is called "calibration". For this purpose some general methods exist (e.g. binning the scores and mapping them to the class fraction of the corresponding bin) and some classifier-dependent ones (like [Platt scaling](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.1639), which was invented for SVMs). A good place to start is:
[Bianca Zadrozny, Charles Elkan: Transforming Classifier Scores into Accurate Multiclass Probability Estimates](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.13.7457)
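To make the binning idea concrete, here is a minimal sketch in Python (the function name and toy data are made up for illustration; scores are assumed to lie in [0, 1]):

```python
def bin_calibrate(scores, labels, n_bins=10):
    """Histogram binning: map each score to the empirical positive
    fraction of its bin, turning raw scores into probability estimates."""
    bins = [[] for _ in range(n_bins)]
    for s, y in zip(scores, labels):
        idx = min(int(s * n_bins), n_bins - 1)  # clamp s = 1.0 into last bin
        bins[idx].append(y)
    frac = [sum(b) / len(b) if b else None for b in bins]

    def calibrated(s):
        return frac[min(int(s * n_bins), n_bins - 1)]
    return calibrated

# Toy example: low scores are half positive, high scores all positive
cal = bin_calibrate([0.05, 0.08, 0.95, 0.9], [0, 1, 1, 1], n_bins=2)
```

`cal(0.1)` then returns the positive fraction of the low-score bin. For empty bins a real implementation would need interpolation or wider bins.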
EDIT after Question-Edit:
@Thomas wrote: Does it seem right the answers with the highest real values be the ones that are most confidently categorized in the True group?
Yes, in general this is correct (by the same argument as above). I suggest creating a ROC plot to see whether this also applies to the mlpy package; [ROCR](http://rocr.bioinf.mpi-sb.mpg.de/) is a good tool for this purpose.
| null | CC BY-SA 3.0 | null | 2011-05-06T07:53:39.133 | 2011-05-06T11:16:24.200 | 2011-05-06T11:16:24.200 | 264 | 264 | null |
10394 | 1 | null | null | 12 | 1658 | In the case of robust estimators, what does Gaussian efficiency mean? For example, $Q_n$ has 82% Gaussian efficiency and a 50% breakdown point.
The reference is: Rousseeuw, P.J., and Croux, C. (1993). "Alternatives to the Median Absolute Deviation." J. American Statistical Assoc., 88, 1273-1283.
| What does Gaussian efficiency mean? | CC BY-SA 3.0 | null | 2011-05-06T08:33:03.437 | 2020-12-06T11:18:29.397 | 2011-05-16T09:38:37.290 | 4286 | 4286 | [
"normal-distribution",
"scales",
"robust"
] |
10395 | 2 | null | 10386 | 8 | null | More than a complete solution this is meant to be a very rough series of "hints" on how to implement one using FFT, there are probably better methods but, if it works...
First of all, let's generate a wave with varying frequency and amplitude:
```
freqs <- c(0.2, 0.05, 0.1)
x <- NULL
y <- NULL
for (n in 1:length(freqs))
{
tmpx <- seq(n*100, (n+1)*100, 0.1)
x <- c(x, tmpx)
y <- c(y, sin(freqs[n] * 2*pi*tmpx))
}
y <- y * c(rep(1:5, each=(length(x)/5)))
y <- y + rnorm(length(x), 0, 0.2)
plot(x, y, "l")
```
Which gives us this

Now, if we calculate the FFT of the wave using `fft` and then plot it (I used the `plotFFT` function I posted [here](https://stackoverflow.com/questions/3485456/useful-little-functions-in-r/3486696#3486696)) we get:

Note that I overplotted the 3 frequencies (0.05, 0.1 and 0.2) with which I generated the data. As the data is sinusoidal the FFT does a very good job in retrieving them. Note that this works best when the y-values are 0 centered.
Now, if we do a sliding FFT with a window of 50, we get

As expected, at the beginning we only get the 0.2 frequency (first two plots, so between 0 and 100); as we go on we get the 0.05 frequency (100-200), and finally the 0.1 frequency comes about (200-300).
The power of the FFT function is proportional to the amplitude of the wave. In fact, if we write down the maximum in each window we get:
```
1 Max frequency: 0.2 - power: 254
2 Max frequency: 0.2 - power: 452
3 Max frequency: 0.04 - power: 478
4 Max frequency: 0.04 - power: 606
5 Max frequency: 0.1 - power: 1053
6 Max frequency: 0.1 - power: 1253
```
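For readers without R at hand, the per-window peak detection can be sketched with a naive DFT in pure Python (O(N²), fine for small windows; this is only an illustration, not the code used above):

```python
import cmath
import math

def dft_magnitudes(x):
    """Magnitudes of the DFT for frequency bins 0 .. N/2 - 1 (naive O(N^2))."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N)))
            for k in range(N // 2)]

def peak_frequency(x, dt):
    """Frequency (Hz) of the strongest non-DC bin for a window sampled every dt."""
    mags = dft_magnitudes(x)
    k = max(range(1, len(mags)), key=lambda i: mags[i])
    return k / (len(x) * dt)

# Toy check: a 0.2 Hz sine sampled at dt = 0.1 over 100 points
dt = 0.1
window = [math.sin(2 * math.pi * 0.2 * n * dt) for n in range(100)]
f_peak = peak_frequency(window, dt)
```

As with the R version, the magnitude of the peak bin is proportional to the wave's amplitude in that window.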
===
This can also be achieved using an STFT (short-time Fourier transform), which is basically the same thing I showed you before, but with overlapping windows.
This is implemented, for instance, by the `evolfft` function of the `RSEIS` package.
It would give you:
```
stft <- evolfft(y, dt=0.1, Nfft=2048, Ns=100, Nov=90, fl=0, fh=0.5)
plotevol(stft)
```

This, however, may be trickier to analyze, especially online.
hope this helps somehow
| null | CC BY-SA 3.0 | null | 2011-05-06T09:07:19.593 | 2011-05-06T09:07:19.593 | 2017-05-23T12:39:27.620 | -1 | 582 | null |
10396 | 1 | 10399 | null | 4 | 5072 | I want to do a regression where my dependent variable has four categories `(1,2,3,4)` representing the number of dependents. Can I do this with logistic regression? I read somewhere that the `link=glogit` option is useful here; can somebody please shed some light? I am new to this.
Writing the syntax here would be very useful for me.
| Regression for dependent variable with 4 categories | CC BY-SA 3.0 | null | 2011-05-06T09:30:12.217 | 2011-05-06T13:23:06.260 | 2011-05-06T09:51:59.813 | 2116 | 1763 | [
"logistic",
"sas"
] |
10397 | 2 | null | 10356 | 7 | null | I disagreed with the other answers in the comments, so it's only fair I give my own. Let $Y$ be the response (good/bad accounts), and $X$ be the covariates.
For logistic regression, the model is the following:
$\log\left(\frac{p(Y=1|X=x)}{p(Y=0|X=x)}\right)= \alpha + \sum_{i=1}^k x_i \beta_i $
Think about how the data might be collected:
- You could select the observations randomly from some hypothetical "population"
- You could select the data based on $X$, and see what values of $Y$ occur.
Both of these are okay for the above model, as you are only modelling the distribution of $Y|X$. These would be called prospective studies.
Alternatively:
- You could select the observations based on $Y$ (say 100 of each), and see the relative prevalence of $X$ (i.e. you are stratifying on $Y$). This is called a retrospective or case-control study.
(You could also select the data based on $Y$ and certain variables of $X$: this would be a stratified case-control study, and is much more complicated to work with, so I won't go into it here).
There is a nice result from epidemiology (see [Prentice and Pyke (1979)](http://biomet.oxfordjournals.org/content/66/3/403.short)) that for a case-control study, the maximum likelihood estimates for $\beta$ can be found by logistic regression, that is using the prospective model for retrospective data.
So how is this relevant to your problem?
Well, it means that if you are able to collect more data, you could just look at the bad accounts and still use logistic regression to estimate the $\beta_i$'s (but you would need to adjust the $\alpha$ to account for the over-representation). Say it costs \$1 for each extra account; then this might be more cost-effective than simply looking at all accounts.
But on the other hand, if you already have ALL possible data, there is no point to stratifying: you would simply be throwing away data (giving worse estimates), and then be left with the problem of trying to estimate $\alpha$.
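For concreteness, the intercept adjustment mentioned above commonly takes the following form (see for instance King and Zeng's rare-events work; treat this as a sketch rather than the only option):

$$\alpha = \hat\alpha - \ln\left[\left(\frac{1-\tau}{\tau}\right)\left(\frac{\bar y}{1-\bar y}\right)\right]$$

where $\tau$ is the population fraction of bad accounts and $\bar y$ is their fraction in the case-control sample; the $\beta_i$ estimates themselves need no adjustment.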
| null | CC BY-SA 3.0 | null | 2011-05-06T10:11:27.033 | 2011-05-06T10:11:27.033 | null | null | 495 | null |
10398 | 2 | null | 10377 | 1 | null | Steffen's comment and link to the question above are very useful.
I would say that it depends. I am going to assume that you are using LaTeX, at least (for mathematical typesetting), and possibly R (because it's free, and great).
In that case, especially given that you have mentioned org-mode, I would suggest using Emacs to organise your statistical analysis.
The advantages are as follows:
- It unifies your statistical analysis (Emacs Speaks Statistics) and your paper writing (LaTeX); Emacs's Sweave support is also the best I have found.
- You also have Ebib to manage references, and this is very nice.
- Emacs is so customizable that anything you need to do can be done from within it. However, it has quite a steep learning curve, and you may need to unlearn some shortcuts and ways of dealing with programs that are commonplace elsewhere.
- Emacs also has integrated version control. I cannot speak to its quality as I have not used it.
On reference management, I personally didn't like Zotero, and use JabRef. JabRef is nice for the GUI and its simplicity, and could support you while you learn enough about Emacs to be productive within it. It also has a cite-while-you-write plugin for OpenOffice, if you like that kind of thing.
HTH.
| null | CC BY-SA 3.0 | null | 2011-05-06T10:38:58.583 | 2011-05-06T10:38:58.583 | null | null | 656 | null |
10399 | 2 | null | 10396 | 1 | null | You'll want to look up the literature on multinomial logistic regression, a.k.a. nominal regression. It's an expanded version of the usual logistic regression. Coefficients and odds ratios obtained deal with the likelihood of the outcome being A, B, or C as opposed to D, the reference category. Thus for 4 levels of a dependent variable you'll have 3 tables with coefficients, odds ratios, etc. You'll want to make sure you choose that reference category intentionally: to which result will you and your readers most want to make comparisons? Sorry I can't provide syntax but SAS documentation should be a help there.
| null | CC BY-SA 3.0 | null | 2011-05-06T11:12:23.657 | 2011-05-06T11:12:23.657 | null | null | 2669 | null |
10400 | 2 | null | 10396 | 1 | null | I'd suggest to you to consider using decision trees (such as CART) for such problems.
I see that SAS Enterprise Miner has some functions for decision trees:
[http://sas-x.com/2011/01/decision-trees-in-sas-enterprise-miner-and-spss-clementine/](http://sas-x.com/2011/01/decision-trees-in-sas-enterprise-miner-and-spss-clementine/)
| null | CC BY-SA 3.0 | null | 2011-05-06T11:25:06.253 | 2011-05-06T11:25:06.253 | null | null | 253 | null |
10401 | 2 | null | 10396 | 2 | null | I think this document: [logistic](http://www.biostat.umn.edu/~melanie/PH7402/2009/NOTE/logistic.pdf) holds all the information you need (with pointers to how you can do it in both R and SAS). It explains the concept of the proportional odds model and indicates that glogit is indeed the way to go.
If you need more, just google for "SAS proportional odds logistic regression"...
| null | CC BY-SA 3.0 | null | 2011-05-06T11:44:23.293 | 2011-05-06T11:44:23.293 | null | null | 4257 | null |
10402 | 2 | null | 10388 | 4 | null | What about changing the starting values? Supply to `init` values similar to the output of SAS and see what happens. In general this is not a good strategy for achieving convergence, but it helps to see whether the problem is in the algorithm or just in the starting values.
| null | CC BY-SA 3.0 | null | 2011-05-06T12:08:22.810 | 2011-05-06T12:08:22.810 | null | null | 2116 | null |
10403 | 2 | null | 423 | 77 | null | Another one from [xkcd](http://xkcd.org/892/):

Alt-text:
>
Hell, my eighth grade science class managed to conclusively reject it just based on a classroom experiment. It's pretty sad to hear about million-dollar research teams who can't even manage that.
| null | CC BY-SA 3.0 | null | 2011-05-06T12:13:40.790 | 2013-08-21T01:39:08.423 | 2013-08-21T01:39:08.423 | 9007 | 565 | null |
10404 | 2 | null | 6630 | 1 | null | I believe that you are trying to use statistical methods that are appropriate for independent observations while you have correlated data, both temporally and spatially. If you have observations, say, for 5 hours and decide to re-state this as 241 observations taken every minute, you really don't have 240 degrees of freedom with respect to the mean of these 241 values. Autocorrelation potentially yields an overstatement of the size of "N" and thus creates false uncertainty statements. What you need to do is find someone/some textbook/some web site/.... to teach you about time series data and its analysis. One way to start is to GOOGLE "help me understand time series" and start to read/learn. There is a lot of material available on the web. One available trove of time series information is something I helped create at [http://www.autobox.com/AFSUniversity/afsuFrameset.htm](http://www.autobox.com/AFSUniversity/afsuFrameset.htm). I mention this as I am still associated with this firm and its products, thus my comments are "biased and opinionated" but not solely self-serving.
| null | CC BY-SA 3.0 | null | 2011-05-06T12:32:01.043 | 2011-05-06T12:32:01.043 | null | null | 3382 | null |
10405 | 2 | null | 10388 | 2 | null | How many events are in the dataset? The fit from SAS may not be meaningful if there are fewer than a few multiples of the number of covariates (14).
In general with Cox regression I have not had to specify starting values but have occasionally played around with the convergence criteria.
| null | CC BY-SA 3.0 | null | 2011-05-06T13:01:48.383 | 2011-05-06T13:01:48.383 | null | null | 4253 | null |
10406 | 2 | null | 10096 | 2 | null | In my opinion, you might need to add/detect day-of-the-week, week-of-the-year, and holiday effects (lead, contemporaneous, and lag effects); possible level shifts and/or local time trends; possible fixed-days-of-the-month effects; and pulse/outlier correction via intervention detection schemes; AND then INTRODUCE a set of (K-1) possible predictors reflecting EACH of the K algorithmic change effects. Additionally, you might need to incorporate an ARMA component into your predictors to render your error process Gaussian. Care should also be taken to ensure that the parameters of your final model did not significantly change over time and that the residuals from your final model have constant variance. Heterogeneous error variance can be caused by structural changes in variance at particular time points, coupling of the error dispersion with the level of the output series, and/or a pure stochastic variance change over time.
| null | CC BY-SA 3.0 | null | 2011-05-06T13:02:33.687 | 2011-05-06T13:02:33.687 | null | null | 3382 | null |
10407 | 1 | 10830 | null | 7 | 436 | Repeating an experiment with $n$ possible outcomes $t$ times independently, where all but one of the outcomes have probability $\frac{1}{n+1}$ and the remaining outcome has double that probability, $\frac{2}{n+1}$, is there a good approximate formula for the probability that the outcome with the higher probability occurs more often than any other one?
For me, $n$ is typically some hundreds, and $t$ is chosen depending on $n$ such that the probability that the most likely outcome occurs most often is between 10% and 99.999%.
At the moment I use a small program that calculates a crude approximation by assuming that the counts of how often each outcome shows up in $t$ trials are independent, approximating the counts using the Poisson distribution. How can I improve on this?
EDIT: I'd strongly appreciate comments/votes on the two (maybe soon more) answers given.
EDIT 2: As neither of the two answers convinces me, but as I don't want the 100-point bounty to vanish (and as nobody voted for/against either answer), I'll just pick one of them. I'd still appreciate other answers.
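A brute-force Monte Carlo baseline is easy to set up and gives something to check any approximation against; a minimal sketch in Python (parameter values purely illustrative, smaller than the $n$ in the hundreds mentioned above):

```python
import random
from collections import Counter

def estimate_prob(n, t, trials, seed=0):
    """Estimate P(the outcome with probability 2/(n+1) occurs strictly
    more often than every other outcome over t independent repetitions)."""
    rng = random.Random(seed)
    weights = [2] + [1] * (n - 1)   # outcome 0 is the double-probability one
    hits = 0
    for _ in range(trials):
        counts = Counter(rng.choices(range(n), weights=weights, k=t))
        c0 = counts.get(0, 0)
        if all(c < c0 for k, c in counts.items() if k != 0):
            hits += 1
    return hits / trials

p_hat = estimate_prob(n=50, t=1000, trials=400)
```

For large $n$ and $t$ this gets slow, but unlike the independent-Poisson approximation it converges to the exact probability as the number of trials grows.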
| Probability for finding a double-as-likely event | CC BY-SA 3.0 | null | 2011-05-06T13:16:53.577 | 2011-05-15T19:28:58.837 | 2011-05-15T19:28:58.837 | 919 | 565 | [
"probability",
"approximation"
] |
10408 | 2 | null | 1459 | 1 | null | The idea of examining pre-whitened cross-correlations was originally suggested by Box and Jenkins. Liu and Hanssens published a paper (L.-M. Liu and D.M. Hanssens (1982). "Identification of Multiple-Input Transfer Function Models." Communications in Statistics A 11: 297-314) that suggested a common-filter approach to deal effectively with multiple inputs whose pre-whitened series exhibit cross-correlative structure. They even created a 2-input model data set to demonstrate their solution.
After we programmed that approach and compared it to the Box-Jenkins pre-whitening approach as iteratively implemented by us, we decided not to use either the Pankratz approach or the Liu-Hanssens approach. We would be glad to share the Liu-Hanssens test data with you if you wish me to post it to the list.
| null | CC BY-SA 3.0 | null | 2011-05-06T13:17:11.287 | 2011-05-06T13:17:11.287 | null | null | 3382 | null |
10409 | 2 | null | 10396 | 2 | null | If you need imputation (as your comment suggests), look into `PROC MI`. It is specifically designed for this purpose. One of the many options it has is imputation of an ordinal outcome.
For example, the following code will use proportional odds regression to impute `ndependents` based on `x1`, `x2` and their interaction.
```
proc mi;
class ndependents;
var x1 x2;
monotone logistic(ndependents= x1 x2 x1*x2);
run;
```
You can do only one imputation, but multiple imputations are preferable. After the analysis you can use `PROC MIANALYZE` to combine the results.
| null | CC BY-SA 3.0 | null | 2011-05-06T13:23:06.260 | 2011-05-06T13:23:06.260 | null | null | 279 | null |
10411 | 1 | 10413 | null | 2 | 2749 | I have a few datasets of "interactions" between pairs of elements like so:
```
element1 element2 1
element2 element3 1
element4 element5 1
...
element505535 element4 2
```
where the value in the 3rd column is the "strength" of interaction. Almost all of these strengths are "1." A strength of 1 means that this interaction was observed one time, a strength of 2 means twice, etc. I have actually gone one step further and normalized each of my datasets by the total number of interactions observed in that dataset, so that interaction values can be compared across datasets.
There are 5-6 million interactions listed in each file, and each dataset is obviously under-sampled since there are ~500k elements (making a square matrix of ~250 billion positions).
I would like to cluster these datasets so that I can make statements about which types of elements tend to cluster with which other types of elements. Obviously, robustness of clustering will be a factor—but this is partially ameliorated by the fact that I will make biological replicates of the data.
I have tried a few different "naive" clustering approaches just to see what I could do easily with the data. I fully realize that these are problematic ways of clustering, either because they are not robust or because they rely on the data being very undersampled, but here is what I've done:
- Clustering elements together as long as there is at least one interaction between each element in the cluster and at least one other element in the cluster. When I do this, all the elements end up in a single cluster. This was important to do because it tells me that there are no pairs of elements that are totally isolated from the rest of the group.
- Finding "superclusters"—that is, clusters where every member of the cluster interacts with every other member of the cluster (e.g. a triangle for a cluster of 3 and a box with an X in the middle for a cluster of 4, etc). This yields almost exclusively clusters with 2 and 3 elements after about 10% of the data has been analyzed (this is still running).
I would love to be able to do some sort of hierarchical clustering using my "interaction strength" values as the distance measure between each pair of elements (unobserved interactions have a strength of 0). Does anyone know of a way to do HC on this sort of large, sparse data—or know of a clustering method that might be more appropriate? I've used R up until now.
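As an aside, the first naive approach above (merging any elements linked by a chain of interactions) is exactly connected components, which union-find computes in near-linear time; a minimal sketch (illustrative only, not tied to the actual files):

```python
def connected_components(edges):
    """Union-find over interaction pairs: elements linked by any chain of
    interactions end up in the same cluster (path-halving find)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    clusters = {}
    for x in list(parent):
        clusters.setdefault(find(x), set()).add(x)
    return list(clusters.values())

# Toy interaction list in the same spirit as the data above
clusters = connected_components([("e1", "e2"), ("e2", "e3"), ("e4", "e5")])
```

On 5-6 million pairs this runs comfortably in memory, which is why approach #1 is quick to verify.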
| Clustering large and sparse datasets | CC BY-SA 3.0 | null | 2011-05-06T13:34:49.283 | 2011-05-06T15:42:24.923 | 2011-05-06T15:42:24.923 | 3561 | 3561 | [
"r",
"clustering",
"bioinformatics"
] |
10412 | 2 | null | 10150 | 2 | null | You might try using ARIMA models making sure that you incorporate any identifiable Level Shifts and/or Local Time Trends culminating in an ARMAX model. Changes in parameters/variance of the errors should also be tested and remedied if necessary. As compared to NN, these approaches challenge the data rather than simply believing/fitting the data.
| null | CC BY-SA 3.0 | null | 2011-05-06T14:43:19.223 | 2011-05-07T11:45:48.963 | 2011-05-07T11:45:48.963 | 3382 | 3382 | null |
10413 | 2 | null | 10411 | 4 | null | Does this happen to be protein interaction data? Regardless, there are many algorithms you can try. I am the author of [mcl](http://micans.org/mcl/), a clustering algorithm fairly often used in bioinformatics. You could easily try it, after installing, by giving it the data in exactly the format you describe; the command line would be "mcl yourdata --abc" (the abc option tells mcl to expect this particular format). You need not, and cannot, specify the number of clusters, as cluster structure arises in an emergent fashion from the process computed by mcl.

Protein interaction data can be quite voluminous and noisy, and other data sources as well, of course. Especially the presence of nodes of very high degree (say, with hundreds to thousands of neighbours) may obscure cluster structure. In such a scenario, in the absence of weights, it can be really worthwhile to try to reduce the input graph by e.g. a k-NN transform taking into account the two-step graph incidence relation. mcl also provides means to do this, as well as a tool that lists various graph traits under a series of ever more stringent transformations, so that it is possible to choose such a transformation in an informed manner.

As for hierarchical clustering, it is possible to achieve clusterings at different levels of granularity by varying mcl's so-called inflation parameter. You could, for example, supply three values (e.g. 1.4, 2, and 5) to obtain clusterings at different levels of granularity. It is also possible to arrange clusterings thus obtained into a strictly hierarchical data structure.
| null | CC BY-SA 3.0 | null | 2011-05-06T15:01:22.680 | 2011-05-06T15:01:22.680 | null | null | 4495 | null |
10414 | 2 | null | 9385 | 2 | null | The plot at [http://i.stack.imgur.com/kxU4t.jpg](http://i.stack.imgur.com/kxU4t.jpg) reflects a questioning of the highly unusual Oct 2009 value (130 Oct-09 2301.41). Time series analysis actually challenges the data rather than fitting a presumed set of models.
The residuals from the model shown at [http://i.stack.imgur.com/Q4W5h.jpg](http://i.stack.imgur.com/Q4W5h.jpg) more closely exhibit the Gaussian structure required for t-tests to be valid.
I apologize in advance for the unnecessary repetition of the forecast graph. I will have to go to wiki school to learn how to include images in my posts.
The forecasts for the next 24 months, shown at [http://i.stack.imgur.com/OUc5a.jpg](http://i.stack.imgur.com/OUc5a.jpg), are then robust to the identified anomalies.
| null | CC BY-SA 3.0 | null | 2011-05-06T15:14:38.527 | 2011-05-06T15:14:38.527 | null | null | 3382 | null |
10415 | 2 | null | 9429 | 3 | null | I'll imagine a concrete example, with more context, to make things easy. Assume you measure the score on a test of 3,000 students in 200 schools, and you measured each student at 4 time points (say, at each quarter). You have a covariate at the student level that doesn't vary with time (like sex), which you called pred1.obs, and a covariate per school that varies with time (say, the number of meetings between teachers and parents up to that moment in time). If this example resembles your study, then I think you have to set up a three-level model (individual level, group level, and a time level for the groups):
i = 1 ... 3000 individuals
t = 1... 4 periods
g = 1... 200 groups
The model would be:
```
y_i ~ N(a + b_[groups_g] + b.ind*pred.obs1_i, sigma^2) # 1st level
b_g = N(gamma + gamma_[time] + gamma.g[time_t]*pred2.grp, sigma.b^2) # 2nd level
gamma.g_t = N(0, sigma.gamma^2) # 3rd level
```
Note that you would have the slope at the second level (group level) varying by time, which makes sense, since you expect that the effect of schools on the performance of students may vary with time, depending on the value of the covariate at the school level.
I'm not that sure how to estimate this with lmer (I know how to estimate a Bayesian model using WinBUGS or JAGS, calling them from R). In any case, here is my suggestion.
In lme4, I'd try:
First, expand pred2.grp (the covariate at group level that varies with time) to the individual level; then you would have repeated measures by individuals at the group and time level. Then:
```
lmer(outcome ~ pred1.obs + pred2.grp + (1|group))
```
| null | CC BY-SA 3.0 | null | 2011-05-06T15:27:14.023 | 2011-05-06T15:27:14.023 | null | null | 3058 | null |
10416 | 2 | null | 4089 | 15 | null | I highly recommend the function [chart.Correlations](http://braverock.com/brian/R/PerformanceAnalytics/html/chart.Correlation.html) in the package [PerformanceAnalytics](http://cran.r-project.org/web/packages/PerformanceAnalytics/index.html). It packs an amazing amount of information into a single chart: kernel-density plots and histograms for each variable, and scatterplots, lowess smoothers, and correlations for each variable pair. It's one of my favorite graphical data summary functions:
```
library(PerformanceAnalytics)
chart.Correlation(iris[,1:4],col=iris$Species)
```

| null | CC BY-SA 3.0 | null | 2011-05-06T15:35:24.853 | 2011-05-06T15:49:38.020 | 2011-05-06T15:49:38.020 | 2817 | 2817 | null |
10417 | 2 | null | 10356 | 0 | null | There are many ways in which you can think of logistic regression. My favorite way is to think that your response variable, $y_i$, follows a Bernoulli distribution with probability $p_i$, and $p_i$, in turn, is a function of some predictors. More formally:
$$y_i \sim \text{Bernoulli}(p_i)$$
$$p_i = \text{logit}^{-1}(a + b_1x_1 + ... + b_nx_n)$$
where $\text{logit}^{-1}(x) = \frac{\exp(x)}{1+\exp(x)}$
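As a small aside, the inverse logit is straightforward to compute in code; a minimal sketch:

```python
import math

def inv_logit(x):
    """Inverse logit (logistic) function: maps the linear predictor to (0, 1)."""
    return math.exp(x) / (1 + math.exp(x))
```

For large positive x this form can overflow; a numerically safer variant uses 1 / (1 + exp(-x)) when x >= 0.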
Now does it matter if you have a low proportion of failures (bad accounts)? Not really, as long as your sample data are balanced, as some people have already pointed out. However, if your data are not balanced, then getting more data may be almost useless if there are selection effects you are not taking into account. In this case, you should use matching, but the lack of balance may make matching pretty useless. Another strategy is trying to find a natural experiment, so you can use an instrumental variable or a regression discontinuity design.
Last, but not least, if you have a balanced sample or there is no selection bias, you may be worried by the fact that bad accounts are rare. I don't think 5% is rare, but just in case, take a look at [the paper by Gary King](http://gking.harvard.edu/files/abs/0s-abs.shtml) about running a rare-events logistic regression. In the Zelig package, in R, you can run a rare-events logistic regression.
| null | CC BY-SA 3.0 | null | 2011-05-06T16:16:21.933 | 2014-02-11T14:37:02.673 | 2014-02-11T14:37:02.673 | 22311 | 3058 | null |
10418 | 1 | 10934 | null | 10 | 1961 | Why the exchangeability of random variables is essential for the hierarchical Bayesian modeling?
| Why the exchangeability of random variables is essential in hierarchical bayesian models? | CC BY-SA 3.0 | null | 2011-05-06T16:28:31.677 | 2017-08-29T21:22:43.157 | 2017-08-29T21:22:43.157 | 11887 | 3125 | [
"bayesian",
"multilevel-analysis",
"exchangeability"
] |
10419 | 1 | 10442 | null | 35 | 47795 | I've got a question concerning a negative binomial regression: Suppose that you have the following commands:
```
require(MASS)
attach(cars)
mod.NB<-glm.nb(dist~speed)
summary(mod.NB)
detach(cars)
```
(Note that cars is a dataset which is available in R, and I don't really care if this model makes sense.)
What I'd like to know is: how can I interpret the parameter `theta` (as returned at the bottom of a call to `summary`)? Is this the shape parameter of the negative binomial distribution, and is it possible to interpret it as a measure of skewness?
| What is theta in a negative binomial regression fitted with R? | CC BY-SA 4.0 | null | 2011-05-06T16:32:49.483 | 2018-06-19T08:50:56.607 | 2018-06-19T08:50:56.607 | 128677 | 4496 | [
"regression",
"generalized-linear-model",
"negative-binomial-distribution"
] |
10420 | 1 | null | null | 5 | 34669 | I have large simulated loss data (from catastrophe models developed at my school) from which I need to calculate some extreme quantiles. Previously, non-parametric methods were used to do this (finding point estimates for these extreme quantiles and their CIs).
I am using parametric models (extreme value theory, fat-tailed distributions, etc.) to do it. I have been thinking about the pros and cons of these two methods.
I would appreciate it if someone could provide a summary of parametric and non-parametric models and their advantages and disadvantages.
| Advantages and disadvantages of parametric and non-parametric models | CC BY-SA 3.0 | null | 2011-05-06T16:37:04.897 | 2013-03-19T07:26:38.507 | 2011-07-11T22:24:07.243 | 930 | 4497 | [
"nonparametric"
] |
10421 | 1 | null | null | 4 | 113 | Recently I've been working EM algorithms for MAP estimation in a problem where the expectation is intractable, but the maximization is easy. Further, draws from the distribution in the E-step are easily available through MCMC, so I've been experimenting with stochastic versions of EM. Let $X$ be the observed data, $Z$ be the missing data, and $\Theta$ be the parameters to be estimated. I'll use $\Theta_t$ for the current estimate of $\Theta$. Specifically I've looked at stochastic approximation EM and Monte Carlo EM, which estimate the E step like so:
1) MCEM: The $Q$ function is approximated with an average over $m_t$ MC draws: $\hat Q(\Theta; \Theta_t) = \frac{1}{m_t}\sum_{i=1}^{m_t}\log p(X,Z_i|\Theta)$, where $Z_i$ is drawn from $p(Z|X,\Theta_t)$.
2) SAEM: $\hat Q(\Theta; \Theta_t) = \gamma_t \hat Q(\Theta; \Theta_{t-1}) + (1-\gamma_t)\frac{1}{m_t}\sum_{i=1}^{m_t}\log p(X,Z_i|\Theta)$. In the context of exponential families, this amounts to using a weighted average of the "new" and "old" sufficient statistics at each step, compared to MCEM, which takes a completely new average (i.e. $\gamma_t=0$).
Both have the same M step, with $\Theta_{t+1}$ chosen to maximize the approximate $Q$ function. My question is when does it make sense to average over the sequence of $\Theta_t$'s? Presumably this would further reduce the Monte Carlo error, and my understanding is that the convergence is (theoretically) unaffected since the $\Theta_t$'s are converging to a stationary point with appropriate conditions on $m_t$ and $\gamma_t$.
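To make the blending in the SAEM E-step concrete, here is a one-function sketch (names illustrative; in the parameterization above, $\gamma_t = 0$ recovers plain MCEM):

```python
def saem_q(q_prev, mc_logliks, gamma):
    """Stochastic-approximation blend of the previous Q estimate with a
    fresh Monte Carlo average of complete-data log-likelihood values."""
    mc_avg = sum(mc_logliks) / len(mc_logliks)
    return gamma * q_prev + (1 - gamma) * mc_avg
```

With this parameterization, letting gamma approach 1 over iterations (so new draws get a shrinking weight) progressively damps the Monte Carlo noise.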
What I'm looking for is more practical advice, since averaging too early will bias the (finite sample) estimates and averaging too late is wasteful. Also the related question of assessing convergence; if we consider convergence criteria like $|\Theta_t - \Theta_{t-1}| \leq \epsilon$ should the difference be taken instead between the two averaged estimates? (Not the best criterion, I know, but super easy :) )
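For concreteness, here is a toy R sketch of the two averaging schemes (made-up scalar values stand in for the MC averages of $\log p(X,Z_i|\Theta)$; this is not a full EM, just the E-step bookkeeping, with `gamma` following the convention above of weighting the old estimate):

```r
## Toy comparison of the MCEM and SAEM E-step updates on a scalar statistic.
set.seed(1)
Q.mc <- 0; Q.sa <- 0
for (t in 1:200) {
  S <- mean(rnorm(5, mean = 2))            # stand-in for the m_t-draw MC average
  gamma <- 1 - 1/t                         # weight on the old estimate (-> 1)
  Q.mc <- S                                # MCEM: gamma_t = 0, new average only
  Q.sa <- gamma * Q.sa + (1 - gamma) * S   # SAEM: weighted average
}
c(MCEM = Q.mc, SAEM = Q.sa)                # the SAEM value is far less noisy around 2
```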
| Averaged estimators in stochastic versions of EM | CC BY-SA 3.0 | null | 2011-05-06T16:46:51.740 | 2011-05-06T16:46:51.740 | null | null | 26 | [
"monte-carlo",
"expectation-maximization"
] |
10422 | 2 | null | 10378 | 4 | null | I think the R forecast package you mentioned is a better fit for this problem than just using Holt-Winters. The two functions you are interested in are [ets()](http://www.oga-lab.net/RGM2/func.php?rd_id=forecast%3aets) and [auto.arima()](http://www.oga-lab.net/RGM2/func.php?rd_id=forecast%3aauto.arima). ets() will fit an exponential smoothing model, including Holt-Winters and several other methods. It will choose parameters (alpha, beta, and gamma) for a variety of models and then return the one with the lowest AIC (or BIC if you prefer). auto.arima() works similarly.
However, as IrishStat pointed out, these kinds of models may not be appropriate for your analysis. In that case, try calculating some covariates, such as dummy variables for weekends, holidays, and their interactions. Once you've specified covariates that make sense, use auto.arima() to find an ARMAX model, and then [forecast()](http://www.oga-lab.net/RGM2/func.php?rd_id=forecast%3aforecast) to make predictions. You will probably end up with something much better than a simple Holt-Winters model in Python with default parameters.
You should also note that both ets() and auto.arima() can fit seasonal models, but you need to format your data as a seasonal time series. Let me know if you need any help with that.
You can read more about the forecast package [here](http://www.jstatsoft.org/v27/i03).
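As a minimal sketch, using the built-in `AirPassengers` series in place of your data:

```r
library(forecast)
fit.ets   <- ets(AirPassengers)          # picks an exponential smoothing model by AIC
fit.arima <- auto.arima(AirPassengers)   # picks a seasonal ARIMA the same way
plot(forecast(fit.ets, h = 24))          # 24-month-ahead forecast with intervals
```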
| null | CC BY-SA 3.0 | null | 2011-05-06T17:02:29.550 | 2011-05-06T17:02:29.550 | null | null | 2817 | null |
10423 | 1 | null | null | 30 | 49089 | Are there any papers/books/ideas about the relationship between the number of features and the number of observations one needs to have to train a "robust" classifier?
For example, assume I have 1000 features and 10 observations from two classes as a training set, and 10 other observations as a testing set. I train some classifier X and it gives me 90% sensitivity and 90% specificity on the testing set. Let's say I am happy with this accuracy and based on that I can say it is a good classifier. On the other hand, I've approximated a function of 1000 variables using 10 points only, which may seem to be not very... robust?
| Number of features vs. number of observations | CC BY-SA 3.0 | null | 2011-05-06T17:12:19.433 | 2011-05-08T18:51:15.643 | null | null | 4337 | [
"machine-learning"
] |
10424 | 1 | null | null | 4 | 255 | I'm a journalist turned developer who hobbies in APIs and analysis of web traffic. I've always enjoyed learning about stats, but as I learned, I realized that I have misapplied some basic concepts in the past. I now know a bit better, and know to double-check my ideas with those who are smarter than me -- I'm hoping to find a great answer to whether my idea is going to work.
A while back I was using traffic data to do basic summary statistics and standard deviation as a basic measure of variability. Where I went wrong was that I tried to create a confidence interval -- wrong because I presumed my dataset was Gaussian when in fact I now believe it to resemble something more like a power law. So I could sample the data and create standard deviations of the data all day, but they wouldn't be good models.
Recently I've been thinking about this problem. Given a limited set of data, how can I try to model the traffic for a website in question? I have built an API on top of a wonderful dataset, and that dataset is used in all kinds of pages around an actively used site. My idea: use the API access pattern as a small sample of my larger data set.
If you're following, what I'd do is basically treat each data request as a sample, maybe grouping requests by hour and logging the total, standard deviation, number of samples n, etc. Then I'd use the standard deviation of those samples to create the model, under the assumption that the Central Limit Theorem says that the distribution will be normal.
As I understand it, even though my dataset is not normal the statistics I derive from that dataset should be. Is that the case? If so, can I create a confidence interval from that data?
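To illustrate the idea being asked about, a quick simulation (with a lognormal standing in for power-law-ish traffic) shows that means of repeated samples look far more normal than the raw values:

```r
set.seed(1)
raw <- rlnorm(1e5, sdlog = 1.5)                  # skewed "traffic" values
mns <- replicate(2000, mean(sample(raw, 500)))   # means of repeated samples
par(mfrow = c(1, 2))
hist(raw, breaks = 50, main = "raw values")      # heavily right-skewed
hist(mns, breaks = 50, main = "sample means")    # close to bell-shaped
```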
| Does the Central Limit Theorem allow one to create confidence intervals from a web traffic dataset? | CC BY-SA 3.0 | null | 2011-05-06T17:13:49.510 | 2011-05-06T18:10:09.203 | 2011-05-06T17:22:16.637 | 3198 | 3198 | [
"confidence-interval",
"central-limit-theorem"
] |
10425 | 1 | 50221 | null | 31 | 5887 | I use the [auto.arima()](http://www.oga-lab.net/RGM2/func.php?rd_id=forecast:auto.arima) function in the [forecast](http://cran.r-project.org/web/packages/forecast/index.html) package to fit ARMAX models with a variety of covariates. However, I often have a large number of variables to select from and usually end up with a final model that works with a subset of them. I don't like ad-hoc techniques for variable selection because I am human and subject to bias, but [cross-validating time series is hard](https://stats.stackexchange.com/questions/8807/cross-validating-time-series-analysis), so I haven't found a good way to automatically try different subsets of my available variables, and am stuck tuning my models using my own best judgement.
When I fit glm models, I can use the elastic net or the lasso for regularization and variable selection, via the [glmnet](http://cran.r-project.org/web/packages/glmnet/index.html) package. Is there an existing toolkit in R for using the elastic net on ARMAX models, or am I going to have to roll my own? Is this even a good idea?
edit: Would it make sense to manually calculate the AR and MA terms (say up to AR5 and MA5) and then use glmnet to fit the model?
edit 2: It seems that the [FitAR](http://cran.r-project.org/web/packages/FitAR/index.html) package gets me part, but not all, of the way there.
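Regarding the first edit, a rough sketch of the lagged-regressor idea is below (`y` is a simulated stand-in series; note glmnet cannot handle the MA terms directly, since they involve unobserved errors):

```r
library(glmnet)
set.seed(1)
y <- as.numeric(arima.sim(list(ar = c(0.5, -0.2)), n = 200))
p <- 5
X <- embed(y, p + 1)                     # columns: y_t, y_{t-1}, ..., y_{t-p}
fit <- cv.glmnet(X[, -1], X[, 1], alpha = 0.5)   # elastic net on the AR terms
coef(fit, s = "lambda.min")              # shrunken AR coefficients
```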
| Fitting an ARIMAX model with regularization or penalization (e.g. with the lasso, elastic net, or ridge regression) | CC BY-SA 3.0 | null | 2011-05-06T17:17:52.510 | 2013-02-18T08:56:03.297 | 2017-04-13T12:44:28.873 | -1 | 2817 | [
"r",
"time-series",
"lasso",
"regularization",
"elastic-net"
] |
10426 | 2 | null | 10423 | 23 | null | What you've hit on here is [the curse of dimensionality](http://en.wikipedia.org/wiki/Curse_of_dimensionality) or the p>>n problem (where p is predictors and n is observations). There have been many techniques developed over the years to solve this problem. You can use [AIC](http://en.wikipedia.org/wiki/Akaike_information_criterion) or [BIC](http://en.wikipedia.org/wiki/Bayesian_information_criterion) to penalize models with more predictors. You can choose random sets of variables and asses their importance using [cross-validation](http://en.wikipedia.org/wiki/Cross-validation_%28statistics%29). You can use [ridge-regression](http://en.wikipedia.org/wiki/Tikhonov_regularization), [the lasso](http://en.wikipedia.org/wiki/Lasso_%28statistics%29#LASSO_method), or the [elastic net](http://www.stanford.edu/~hastie/TALKS/enet_talk.pdf) for [regularization](http://en.wikipedia.org/wiki/Regularization_%28mathematics%29). Or you can choose a technique, such as a [support vector machine](http://en.wikipedia.org/wiki/Support_vector_machine) or [random forest](http://en.wikipedia.org/wiki/Random_forest) that deals well with a large number of predictors.
Honestly, the solution depends on the specific nature of the problem you are trying to solve.
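For instance, a tiny simulated $p \gg n$ demo of the lasso mentioned above (requires glmnet; everything here is made up):

```r
library(glmnet)
set.seed(1)
n <- 50; p <- 1000
X <- matrix(rnorm(n * p), n, p)
y <- X[, 1] - X[, 2] + rnorm(n, sd = 0.5)   # only 2 of 1000 predictors matter
fit <- cv.glmnet(X, y)                      # lasso (alpha = 1) + cross-validation
sum(as.matrix(coef(fit, s = "lambda.1se")) != 0)  # only a handful of nonzero terms
```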
| null | CC BY-SA 3.0 | null | 2011-05-06T17:38:58.600 | 2011-05-06T17:38:58.600 | null | null | 2817 | null |
10427 | 1 | null | null | 10 | 64864 | I am using a fixed effect model for my panel data (9 years, 1000+ obs), since my Hausman test indicates a value $(Pr>\chi^2)<0.05$. When I add dummy variables for the industries that my firms belong to, they always get omitted. I know there is a big difference when it comes to the DV (disclosure index) among the different industry groups. But I am not able to get them into my model when using Stata.
Any suggestions how to solve this? And why are they omitted?
| How to deal with omitted dummy variables in a fixed effect model? | CC BY-SA 3.0 | null | 2011-05-06T17:40:57.943 | 2016-12-03T21:01:34.723 | 2014-08-09T10:16:09.673 | 26338 | 4394 | [
"stata",
"panel-data",
"fixed-effects-model",
"hausman"
] |
10428 | 2 | null | 10423 | 11 | null | I suspect that no such rules of thumb will be generally applicable. Consider a problem with two Gaussian classes centered on $\vec{+1}$ and $\vec{-1}$, both with covariance matrix of $0.000001*\vec{I}$. In that case, you only need two samples, one from each class, to get perfect classification, almost regardless of the number of features. At the other end of the spectrum, if both classes are centered on the origin with covariance $\vec{I}$, no amount of training data is going to give you a useful classifier. At the end of the day, the number of samples you need for a given number of features depends on how the data are distributed. In general, the more features you have, the more data you will need to adequately describe the distribution of the data (exponential in the number of features if you are unlucky - see the curse of dimensionality mentioned by Zach).
If you use regularisation, then in principle, (an upper bound on) the generalisation error is independent of the number of features (see Vapnik's work on the support vector machine). However, that leaves the problem of finding a good value for the regularisation parameter (cross-validation is handy).
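The first extreme is easy to simulate with a nearest-centroid rule and a single training point per class:

```r
## Two tight Gaussian classes: 2 samples beat 1000 features.
set.seed(1)
p <- 1000
a   <- rnorm(p, mean =  1, sd = 0.001)   # lone training point, class +1
b   <- rnorm(p, mean = -1, sd = 0.001)   # lone training point, class -1
new <- rnorm(p, mean =  1, sd = 0.001)   # unseen point from class +1
sum((new - a)^2) < sum((new - b)^2)      # TRUE: classified correctly
```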
| null | CC BY-SA 3.0 | null | 2011-05-06T17:49:09.447 | 2011-05-06T17:49:09.447 | null | null | 887 | null |
10429 | 1 | null | null | 9 | 14385 | I'm wondering how to fit a multivariate linear mixed model and find the multivariate BLUPs in R. I'd appreciate it if someone could provide an example and R code.
Edit
I wonder how to fit multivariate linear mixed model with `lme4`. I fitted univariate linear mixed models with the following code:
```
library(lme4)
lmer.m1 <- lmer(Y1 ~ A*B + (1|Block) + (1|Block:A), data=Data)
summary(lmer.m1)
anova(lmer.m1)
lmer.m2 <- lmer(Y2 ~ A*B + (1|Block) + (1|Block:A), data=Data)
summary(lmer.m2)
anova(lmer.m2)
```
I'd like to know how to fit multivariate linear mixed model with `lme4`. The data is below:
```
Block A B Y1 Y2
1 1 1 135.8 121.6
1 1 2 149.4 142.5
1 1 3 155.4 145.0
1 2 1 105.9 106.6
1 2 2 112.9 119.2
1 2 3 121.6 126.7
2 1 1 121.9 133.5
2 1 2 136.5 146.1
2 1 3 145.8 154.0
2 2 1 102.1 116.0
2 2 2 112.0 121.3
2 2 3 114.6 137.3
3 1 1 133.4 132.4
3 1 2 139.1 141.8
3 1 3 157.3 156.1
3 2 1 101.2 89.0
3 2 2 109.8 104.6
3 2 3 111.0 107.7
4 1 1 124.9 133.4
4 1 2 140.3 147.7
4 1 3 147.1 157.7
4 2 1 110.5 99.1
4 2 2 117.7 100.9
4 2 3 129.5 116.2
```
| Fitting multivariate linear mixed model in R | CC BY-SA 4.0 | null | 2011-05-06T17:53:42.990 | 2022-07-12T01:30:18.867 | 2022-07-12T01:30:18.867 | 11887 | 3903 | [
"r",
"mixed-model"
] |
10430 | 2 | null | 10424 | 1 | null | There are a few variants of the Central Limit Theorem; an important point, however, is whether or not you have an independent and identically distributed sample. Referrer engines and SEO seem to always be changing the sample, even though your theoretical target population may remain the same. You cannot make this assumption when sampling from a web site blindly. There is some flexibility to the CLT, but at some point it will break. You might be better off studying your population first; hopefully you will be able to explain who they are or where they come from.
| null | CC BY-SA 3.0 | null | 2011-05-06T18:10:09.203 | 2011-05-06T18:10:09.203 | null | null | 3489 | null |
10431 | 2 | null | 10429 | 2 | null | Try the R package `nlme`
You can find some examples, theory and further documentation in:
[http://cran.r-project.org/doc/contrib/Fox-Companion/appendix-mixed-models.pdf](http://cran.r-project.org/doc/contrib/Fox-Companion/appendix-mixed-models.pdf)
The `nlme` package is able to calculate pooled estimates (the so-called BLUP = best linear unbiased predictor).
Once you've downloaded the package, type in R console: `help(predict.lme)`
For more information, look at page 17 in Fox's paper. There you can find an example on how to pool information across subjects.
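On the data posted in the question, a minimal sketch might look like this (one response at a time; a genuinely multivariate fit would need `Y1` and `Y2` stacked in long format, and `A`, `B`, `Block` converted to factors first):

```r
library(nlme)
Data <- within(Data, {
  Block <- factor(Block); A <- factor(A); B <- factor(B)
})
fit <- lme(Y1 ~ A * B, random = ~ 1 | Block/A, data = Data)  # Block and Block:A effects
ranef(fit)                  # the BLUPs of the random effects
predict(fit, level = 0:2)   # population-, Block- and Block:A-level predictions
```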
Hope this helps :)
| null | CC BY-SA 3.0 | null | 2011-05-06T18:24:23.917 | 2011-05-06T18:24:23.917 | null | null | 2902 | null |
10432 | 1 | 10580 | null | 9 | 333 | First, let me say that I am a bit out of my depth here, so if this question needs to be re-phrased or closed as a duplicate, please let me know. It may simply be that I don't have the proper vocabulary to express my question.
I am working on an image processing task in which I identify features in an image, and then classify them based on their properties, including shape, size, darkness, etc. I'm quite experienced with the image processing portion of this, but think I could improve the methods I use for classification of the features.
Right now, I set thresholds for each of the parameters measured, and then classify features according to some simple logic based on which thresholds the feature has crossed. For example (the actual properties and groupings are more complex, but I'm trying to simplify irrelevant portions of my project for this question), let's say I'm grouping features into the groups "Big and Dark," "Big and Light" and "Small". Then a feature $A$ will be in "Big and Dark" iff (size($A$)>sizeThreshold) & (darkness($A$)>darknessThreshold).
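For concreteness, the rigid rule described above could be sketched like so (the thresholds are made up):

```r
sizeThreshold <- 10; darknessThreshold <- 0.7
classify <- function(size, darkness) {
  if (size > sizeThreshold && darkness > darknessThreshold) "Big and Dark"
  else if (size > sizeThreshold) "Big and Light"
  else "Small"
}
classify(10.1, 0.69)  # "Big and Light": barely too light, and size cannot compensate
```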
The goal is for the classification to agree with the classification done by an expert-level human, so I can set the thresholds to produce the best match between the groupings made by human and computer on some test set, and then hope that the classification works well with new data.
This is already working pretty well, but I see one particular failure mode which I think may be fixable. Let's say feature $A$ is known to belong to "Big and Dark." The human classified it this way because, while it was just barely big enough, it was very very dark, which made up somewhat for the lack of "bigness." My algorithm would fail to classify this feature properly, because classification is based on rigid binary logic and requires all thresholds to be crossed.
I would like to improve this failure by making my algorithm better mimic the human guided process, in which a deficiency in one parameter can be compensated by an abundance of another. To do this, I would like to take each of the base properties of my features, and convert them into some sort of score which would be a predictor of the group to which the feature belongs. I have thought of many ways of doing this, but they are mostly ad hoc ideas, based on my background in vector calculus and physics. For example, I've considered treating each feature as a vector in the N-D space of feature properties, and calculating the projection of each feature along certain vectors, each of which would measure the degree to which a feature belongs in the group.
I am sure there is a more rigorous and better established technique for doing this sort of thing, but my background is relatively weak in statistical analysis, so I'm looking for a shove in the right direction. Even the name of a technique, or a link to a textbook would be helpful.
TL;DR:
What techniques are useful in classifying objects based on a large number of descriptive parameters?
| Categorization/Segmentation techniques | CC BY-SA 3.0 | null | 2011-05-06T19:10:28.690 | 2011-05-09T22:58:25.560 | null | null | 2426 | [
"classification"
] |
10433 | 1 | 10440 | null | 8 | 26215 | I hope you can
help me with a question regarding calculating mean age from grouped census data. If the
age categories used were [0–4], [5–9], [10–14], and [15–19] years,
how would you calculate the midpoints? I initially assumed the midpoints would be 2, 7,
and so on.
However, I read in a worked example that the midpoint should be 2.5 when the age range is 0 to 4. I am assuming this has something to do with the babies not actually being
zero years, but I am not exactly sure why the midpoint would be 2.5.
Can anyone assist? Many thanks
| Calculating mean age from grouped census data | CC BY-SA 3.0 | null | 2011-05-06T19:32:08.850 | 2015-11-30T11:28:25.733 | 2011-05-06T20:02:14.477 | 930 | 4498 | [
"mean"
] |
10434 | 1 | null | null | 2 | 377 | I am not that familiar with statistics and was wondering about the best way to quantify the difference between a set of data and a function.
My scenario is that I have a data set with maximum heart rate data from around 60 people, along with their ages. I'm trying to see which of all the "mathematical models" out there that describe the relation between maximum heart rate and age is the least wrong.
I'm much better at understanding the biology behind this, not the mathematics.
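One simple, hedged sketch of what "least wrong" could mean — comparing candidate formulas by residual sum of squares — with simulated data standing in for the 60 measurements:

```r
set.seed(42)
age   <- runif(60, 20, 70)
hrmax <- 208 - 0.7 * age + rnorm(60, sd = 8)   # simulated stand-in for the real data
rss <- function(pred) sum((hrmax - pred)^2)    # smaller = less wrong
c(rule220 = rss(220 - age),                    # classic "220 minus age" rule
  tanaka  = rss(208 - 0.7 * age))              # Tanaka et al. formula
```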
| What is the best way to measure goodness of fit between data and functions? | CC BY-SA 4.0 | null | 2011-05-05T21:24:40.070 | 2018-09-02T11:43:11.063 | 2018-09-02T11:43:11.063 | 11887 | null | [
"regression",
"medicine"
] |
10435 | 2 | null | 7535 | 4 | null | I absolutely agree with what Matt said: first you have to think about the background of the data. It doesn't make any sense to fit ZI models when there are no zero-generating triggers in the population! The advantage of NB models is that they can capture unobserved heterogeneity through a gamma-distributed random variable. Technically, the main reasons for overdispersion are unobserved heterogeneity and zero inflation. I do not believe that your fit is bad. By the way, to assess goodness of fit you should always compare the deviance with the degrees of freedom of your model. If the deviance D is higher than n-(p+1) (this is the df) then you should look for a better model, although there are mostly no better models than ZINB for getting rid of overdispersion.
If you want to fit a ZINB with R, get the package `pscl` and try the command `zeroinfl(<model>, dist = "negbin")`. For further information see `?zeroinfl` after loading the required package!
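A minimal sketch of such a call, using the `bioChemists` data that ships with pscl (the formula is illustrative only):

```r
library(pscl)
data("bioChemists")
## count part (negbin) before the bar, zero-inflation part (logit) after it:
fit <- zeroinfl(art ~ fem + ment | ment, data = bioChemists, dist = "negbin")
summary(fit)
```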
| null | CC BY-SA 3.0 | null | 2011-05-06T19:48:18.143 | 2011-05-06T19:48:18.143 | null | null | 4496 | null |
10436 | 2 | null | 10433 | 6 | null | The 0-4 years group refers to the following age interval: $0 \leq x < 5$, i.e. a child which is 4 years and 364 days old still belongs to this group. So, let's compute the midpoint for that range:
```
> ((365+365+365+365+364)/2)/365
[1] 2.49863
```
| null | CC BY-SA 3.0 | null | 2011-05-06T19:53:46.040 | 2011-05-06T19:53:46.040 | null | null | 307 | null |
10437 | 2 | null | 10418 | 1 | null | It isn't! I'm no expert here, but I'll give my two cents.
In general when you have a hierarchical model, say
$y|\Theta_{1} \sim \text{N}(X\Theta_{1},\sigma^2)$
$\Theta_{1}|\Theta_{2} \sim\text{N}(W\Theta_{2},\sigma^2)$
We make conditional independence assumptions, i.e., conditional on $\Theta_{2}$, the $\Theta_{1}$ are exchangeable. If the second level is not exchangeable, then you can include another level that makes it exchangeable. But even in the case that you can't make an assumption of exchangeability, the model may still be a good fit to your data at the first level.
Last, but not least, exchangeability is important only if you want to think in terms of De Finetti's representation theorem. You might just think of priors as regularization tools that help you to fit your model. In this case, the exchangeability assumption is only as good as your model's fit to the data. In other words, if you think of a Bayesian hierarchical model as a way to get a better fit to your data, then exchangeability is not essential in any sense.
| null | CC BY-SA 3.0 | null | 2011-05-06T20:00:20.750 | 2011-05-06T20:16:49.073 | 2011-05-06T20:16:49.073 | 3058 | 3058 | null |
10438 | 2 | null | 10418 | 4 | null | "Essential" is too vague. But suppressing the technicalities, if the sequence $X=\{X_i\}$ is exchangeable then the $X_i$ are conditionally independent given some unobserved parameter(s) $\Theta$ with a probability distribution $\pi$. That is, $p(X) = \int \prod_i p(X_i|\Theta)\,d\pi(\Theta)$. $\Theta$ needn't be univariate or even finite dimensional and may be further represented as a mixture, etc.
Exchangeability is essential in the sense that these conditional independence relationships allow us to fit models we almost certainly couldn't otherwise.
| null | CC BY-SA 3.0 | null | 2011-05-06T20:04:43.487 | 2011-05-06T20:04:43.487 | null | null | 26 | null |
10439 | 1 | 10453 | null | 13 | 3589 | Suppose I have a quadratic regression model
$$
Y = \beta_0 + \beta_1 X + \beta_2 X^2 + \epsilon
$$
with the errors $\epsilon$ satisfying the usual assumptions (independent, normal, independent of the $X$ values). Let $b_0, b_1, b_2$ be the least squares estimates.
I have two new $X$ values $x_1$ and $x_2$, and I'm interested in getting a confidence interval for $v = E(Y|X = x_2) - E(Y|X=x_1) = \beta_1 (x_2 - x_1) + \beta_2 (x_2^2 - x_1^2)$.
The point estimate is $\hat{v} = b_1 (x_2 - x_1) + b_2 (x_2^2 - x_1^2)$, and (correct me if I'm wrong) I can estimate the variance by $$\hat{s}^2 = (x_2 - x_1)^2 \text{Var}(b_1) + (x_2^2 - x_1^2)^2 \text{Var}(b_2) + 2 (x_2 - x_1)(x_2^2 - x_1^2)\text{Cov}(b_1, b_2)$$ using the variance and covariance estimates of the coefficients provided by the software.
I could use a normal approximation and take $\hat{v} \pm 1.96 \hat{s}$ as a 95% confidence interval for $v$, or I could use a bootstrap confidence interval, but is there a way to work out the exact distribution and use that?
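For what it's worth, because $\hat v$ is a linear combination of the least squares coefficients, $(\hat v - v)/\hat s$ has an exact t distribution with the residual degrees of freedom under the stated assumptions. A sketch with simulated data:

```r
set.seed(1)
x <- runif(50); y <- 1 + 2*x - 3*x^2 + rnorm(50)
fit <- lm(y ~ x + I(x^2))
x1 <- 0.2; x2 <- 0.7
a  <- c(0, x2 - x1, x2^2 - x1^2)            # contrast vector
v  <- sum(a * coef(fit))                    # point estimate v-hat
se <- sqrt(drop(t(a) %*% vcov(fit) %*% a))  # its standard error s-hat
ci <- v + c(-1, 1) * qt(0.975, df.residual(fit)) * se
ci                                          # exact 95% interval
```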
| Confidence interval for difference of means in regression | CC BY-SA 4.0 | null | 2011-05-06T20:10:53.717 | 2019-06-07T08:17:11.677 | 2019-06-07T08:17:11.677 | 128677 | 3835 | [
"regression",
"confidence-interval"
] |
10440 | 2 | null | 10433 | 9 | null | As @Bernd has pointed out, 2.5 really is the midpoint of the 0 to 4 year age group, etc. However, using midpoints at either end of the population distribution introduces bias. For instance, the midpoint of the 80 - 90 year group is approximately 83, because most people in this group are nearer 80 than 90. If this nicety matters (and perhaps it does, if you are agonizing over a half-year difference), read on.
Demographers make their estimates using various methods of monotonic interpolation. A classic method is [Sprague's Formula](http://books.google.com/books?id=SuXrAAAAMAAJ&pg=PA688&lpg=PA688&dq=sprague%27s%20formula&source=bl&ots=REEU1bw97W&sig=myjV4EznXpojTh5qnfOPel7rQ9Q&hl=en&ei=a1bETfK-Hsjc0QHJw8i1CA&sa=X&oi=book_result&ct=result&resnum=3&ved=0CCMQ6AEwAg#v=onepage&q=sprague%27s%20formula&f=false). This is well described in their literature; for an overview see Hubert Vaughan, Symmetry in Central Polynomial Interpolation, [JIA 80, 1954](http://www.actuaries.org.uk/research-and-resources/documents/symmetry-central-polynomial-interpolation). This method as published requires equally-spaced age groups but it can be adapted to variable spacings. @Rob Hyndman was the co-author of a nice paper on monotonic splines (Smith, Hyndman, & Wood, Spline Interpolation for Demographic Variables: The Monotonicity Problem, [J. Pop. Res. 21 #1, 2004](http://www.google.com/url?sa=t&source=web&cd=7&ved=0CEMQFjAG&url=http://citeseer.ist.psu.edu/viewdoc/download;jsessionid=DBDD57BFA879A27200EB2AC370EABEDA?doi=10.1.1.154.8979&rep=rep1&type=pdf&rct=j&q=Smith,%20Hyndman,%20&%20Wood,%20%2aSpline%20Interpolation%20for%20Demographic%20Variables%3a%20The%20Monotonicity%20Problem,&ei=0FXETd63FYXd0QG9r7mKCA&usg=AFQjCNEE7eAGilKXJe3lCmt-BzZQKvLniA&sig2=87e74oR_76nJulTI3hLJbw&cad=rja)). The paper mentions R code for the "Hyman filter." It is still available on [Rob's Web site](http://robjhyndman.com/Rfiles/interpcode.R).
Once you have an interpolated age distribution you can compute moments (and any other properties) according to the standard definitions. For instance, the mean is estimated by numerically integrating the age with respect to the distribution.
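As a base-R sketch of the interpolation step (the counts are made up; `splinefun`'s `"hyman"` method implements the Hyman filter mentioned above):

```r
age <- c(0, 5, 10, 15, 20)
cum <- cumsum(c(0, 120, 140, 130, 110))      # cumulative counts at the bin edges
F   <- splinefun(age, cum, method = "hyman") # monotone interpolant of the CDF
N   <- max(cum)
## mean age via E[age] = integral of the survival function 1 - F(a)/N:
mean.age <- integrate(function(a) 1 - F(a)/N, 0, 20)$value
mean.age
```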
| null | CC BY-SA 3.0 | null | 2011-05-06T20:13:02.327 | 2011-05-06T20:13:02.327 | null | null | 919 | null |
10441 | 1 | 10445 | null | 18 | 8412 | I'm running an experiment where I'm gathering (independent) samples in parallel; I compute the variance of each group of samples and now I want to combine them all to find the total variance of all the samples.
I'm having a hard time finding a derivation for this as I'm not sure of terminology. I think of it as a partition of one RV.
So I want to find $Var(X)$ from $Var(X_1)$, $Var(X_2)$, ..., and $Var(X_n)$, where $X$ = $[X_1, X_2, \dots, X_n]$.
EDIT: The partitions are not the same size/cardinality, but the sum of the partition sizes equal the number of samples in the overall sample set.
EDIT 2: There is a formula for a [parallel computation here](http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Parallel_algorithm), but it only covers the case of a partition into two sets, not $n$ sets.
| How to calculate the variance of a partition of variables | CC BY-SA 3.0 | null | 2011-05-06T20:27:01.463 | 2020-09-30T05:41:58.510 | 2011-05-09T12:50:54.850 | 4499 | 4499 | [
"variance"
] |
10442 | 2 | null | 10419 | 26 | null | Yes, `theta` is the shape parameter of the negative binomial distribution, and no, you cannot really interpret it as a measure of skewness. More precisely:
- skewness will depend on the value of theta, but also on the mean
- there is no value of theta that will guarantee you lack of skew
If I did not mess it up, in the `mu`/`theta` parametrization used in negative binomial regression, the skewness is
$$
{\rm Skew}(NB) = \frac{\theta+2\mu}{\sqrt{\theta\mu(\theta+\mu)}}
= \frac{1 + 2\frac{\mu}{\theta}}{\sqrt{\mu(1+\frac{\mu}{\theta})}}
$$
In this context, $\theta$ is usually interpreted as a measure of overdispersion with respect to the Poisson distribution. The variance of the negative binomial is $\mu + \mu^2/\theta$, so $\theta$ really controls the excess variability compared to Poisson (which would be $\mu$), and not the skew.
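A quick numeric sanity check of this formula in base R ($\mu = 3$, $\theta = 2$):

```r
set.seed(1)
mu <- 3; theta <- 2
x <- rnbinom(1e6, size = theta, mu = mu)
emp  <- mean((x - mean(x))^3) / sd(x)^3                 # empirical skewness
thry <- (theta + 2*mu) / sqrt(theta * mu * (theta + mu))
c(empirical = emp, formula = thry)                      # both close to 1.46
```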
| null | CC BY-SA 3.0 | null | 2011-05-06T20:28:31.947 | 2014-06-16T18:05:20.277 | 2014-06-16T18:05:20.277 | 7290 | 279 | null |
10444 | 1 | null | null | 17 | 11464 | In general, I standardize my independent variables in regressions, in order to properly compare the coefficients (this way they have the same units: standard deviations). However, with panel/longitudinal data, I'm not sure how I should standardize my data, especially if I estimate a hierarchical model.
To see why it can be a potential problem, assume you have $i = 1, \ldots, n$ individuals measured over $t=1,\ldots, T$ periods, and you measured a dependent variable, $y_{i,t}$, and one independent variable, $x_{i,t}$. If you run a complete pooling regression, then it's ok to standardize your data in this way: $x.z = (x- \text{mean}(x))/\text{sd}(x)$, since it will not change the t-statistics. On the other hand, if you fit an unpooled regression, i.e., one regression for each individual, then you should standardize your data by individual only, not over the whole dataset (in R code):
```
for (i in 1:n) {
for (t in 1:T) x.z[i,t] = (x[i,t] - mean(x[i,]))/sd(x[i,])
}
```
However, if you fit a simple hierarchical model with a varying intercept by individual, then you are using a shrinkage estimator, i.e., you are estimating a model between the pooled and unpooled regressions. How should I standardize my data? Using the whole dataset, as in a pooled regression? Using only each individual's data, as in the unpooled case?
| Is it good practice to standardize your data in a regression with panel/longitudinal data? | CC BY-SA 3.0 | null | 2011-05-06T20:46:27.680 | 2016-12-15T23:56:31.713 | 2011-07-30T00:52:22.233 | 3058 | 3058 | [
"r",
"regression",
"standardization"
] |
10445 | 2 | null | 10441 | 25 | null | The formula is fairly straightforward if all the sub-samples have the same sample size. If you had $g$ sub-samples of size $k$ (for a total of $gk$ samples), then the variance of the combined sample depends on the mean $E_j$ and variance $V_j$ of each sub-sample:
$$ Var(X_1,\ldots,X_{gk}) = \frac{k-1}{gk-1}(\sum_{j=1}^g V_j + \frac{k(g-1)}{k-1} Var(E_j)),$$ where $Var(E_j)$ denotes the variance of the sample means.
A demonstration in R:
```
> x <- rnorm(100)
> g <- gl(10,10)
> mns <- tapply(x, g, mean)
> vs <- tapply(x, g, var)
> 9/99*(sum(vs) + 10*var(mns))
[1] 1.033749
> var(x)
[1] 1.033749
```
If the sample sizes are not equal, the formula is not so nice.
EDIT: formula for unequal sample sizes
If there are $g$ sub-samples, each with $k_j, j=1,\ldots,g$ elements for a total of $n=\sum{k_j}$ values, then
$$ Var(X_1,\ldots,X_{n}) = \frac{1}{n-1}\left(\sum_{j=1}^g (k_j-1) V_j + \sum_{j=1}^g k_j (\bar{X}_j - \bar{X})^2\right), $$
where $\bar{X} = (\sum_{j=1}^gk_j\bar{X}_j)/n$ is the weighted average of all the means (and equals the mean of all values).
Again, a demonstration:
```
> k <- rpois(10, lambda=10)
> n <- sum(k)
> g <- factor(rep(1:10, k))
> x <- rnorm(n)
> mns <- tapply(x, g, mean)
> vs <- tapply(x, g, var)
> 1/(n-1)*(sum((k-1)*vs) + sum(k*(mns-weighted.mean(mns,k))^2))
[1] 1.108966
> var(x)
[1] 1.108966
```
By the way, these formulas are easy to derive by writing the desired variance as the scaled sum of $(X_{ji}-\bar{X})^2$, then introducing $\bar{X}_j$: $[(X_{ji}-\bar{X}_j)+(\bar{X}_j-\bar{X})]^2$, expanding the square, and simplifying.
| null | CC BY-SA 3.0 | null | 2011-05-06T20:50:07.443 | 2011-05-10T16:34:37.680 | 2011-05-10T16:34:37.680 | 4499 | 279 | null |
10446 | 2 | null | 10423 | 9 | null | You are probably too much under the impression of classical modelling, which is vulnerable to [Runge paradox](http://en.wikipedia.org/wiki/Runge%27s_phenomenon)-like problems and thus requires some parsimony tuning in post-processing.
However, in the case of machine learning, the idea of including robustness as an aim of model optimization is at the very core of the whole domain (often expressed as accuracy on unseen data). So, as long as you know your model works well (for instance from CV), there is probably no reason to worry.
The real problem with $p\gg n$ in the case of ML is irrelevant attributes -- mostly because some set of them may become more useful for reproducing the decision than the truly relevant ones, due to random fluctuations. Obviously this issue has nothing to do with parsimony, but, same as in the classical case, it ends up in a terrible loss of generalization power. How to solve it is a different story, called feature selection -- but the general idea is to pre-process the data to kick out the noise rather than putting constraints on the model.
| null | CC BY-SA 3.0 | null | 2011-05-06T22:08:35.283 | 2011-05-06T22:08:35.283 | null | null | null | null |
10447 | 2 | null | 4089 | 5 | null | I have found this function helpful... the [original author's handle is respiratoryclub](http://gossetsstudent.wordpress.com/2010/08/02/159/).

```
f_summary <- function(data_to_plot)
{
## univariate data summary
require(nortest)
#data <- as.numeric(scan ("data.txt")) #commenting out by mike
data <- na.omit(as.numeric(as.character(data_to_plot))) #added by mike
dataFull <- as.numeric(as.character(data_to_plot))
# first job is to save the graphics parameters currently used
def.par <- par(no.readonly = TRUE)
par("plt" = c(.2,.95,.2,.8))
layout( matrix(c(1,1,2,2,1,1,2,2,4,5,8,8,6,7,9,10,3,3,9,10), 5, 4, byrow = TRUE))
#histogram on the top left
h <- hist(data, breaks = "Sturges", plot = FALSE)
xfit<-seq(min(data),max(data),length=100)
yfit<-dnorm(xfit,mean=mean(data),sd=sd(data))
yfit <- yfit*diff(h$mids[1:2])*length(data)
plot (h, axes = TRUE, main = paste(deparse(substitute(data_to_plot))), cex.main=2, xlab=NA)
lines(xfit, yfit, col="blue", lwd=2)
leg1 <- paste("mean = ", round(mean(data), digits = 4))
leg2 <- paste("sd = ", round(sd(data),digits = 4))
count <- paste("count = ", sum(!is.na(dataFull)))
missing <- paste("missing = ", sum(is.na(dataFull)))
legend(x = "topright", c(leg1,leg2,count,missing), bty = "n")
## normal qq plot
qqnorm(data, bty = "n", pch = 20)
qqline(data)
p <- ad.test(data)
leg <- paste("Anderson-Darling p = ", round(as.numeric(p[2]), digits = 4))
legend(x = "topleft", leg, bty = "n")
## boxplot (bottom left)
boxplot(data, horizontal = TRUE)
leg1 <- paste("median = ", round(median(data), digits = 4))
lq <- quantile(data, 0.25)
leg2 <- paste("25th percentile = ", round(lq,digits = 4))
uq <- quantile(data, 0.75)
leg3 <- paste("75th percentile = ", round(uq,digits = 4))
legend(x = "top", leg1, bty = "n")
legend(x = "bottom", paste(leg2, leg3, sep = "; "), bty = "n")
## the various histograms with different bins
h2 <- hist(data, breaks = (0:20 * (max(data) - min (data))/20)+min(data), plot = FALSE)
plot (h2, axes = TRUE, main = "20 bins")
h3 <- hist(data, breaks = (0:10 * (max(data) - min (data))/10)+min(data), plot = FALSE)
plot (h3, axes = TRUE, main = "10 bins")
h4 <- hist(data, breaks = (0:8 * (max(data) - min (data))/8)+min(data), plot = FALSE)
plot (h4, axes = TRUE, main = "8 bins")
h5 <- hist(data, breaks = (0:6 * (max(data) - min (data))/6)+min(data), plot = FALSE)
plot (h5, axes = TRUE,main = "6 bins")
## the time series, ACF and PACF
plot (data, main = "Time series", pch = 20, ylab = paste(deparse(substitute(data_to_plot))))
acf(data, lag.max = 20)
pacf(data, lag.max = 20)
## reset the graphics display to default
par(def.par)
#original code for f_summary by respiratoryclub
}
```
| null | CC BY-SA 3.0 | null | 2011-05-06T22:08:51.820 | 2011-12-02T23:45:06.743 | 2011-12-02T23:45:06.743 | 3748 | 3748 | null |
10448 | 2 | null | 10386 | 1 | null | In answer to another question on this forum I referred the OP to [this site](http://ta-lib.org/function.html), where there is open source code for a function called HT_PERIOD to measure the instantaneous period of a time series. There are also functions called HT_PHASE, HT_PHASOR and HT_SINE to respectively measure the phase, measure the phasor components of the sine wave/cyclic component, and extract the sine wave itself (scaled between -1 and 1) of a time series. As the calculations in these functions are causal, they would be appropriate for an on-line function that updates as new data comes in. The code for these functions might be of help to you.
| null | CC BY-SA 3.0 | null | 2011-05-06T22:16:54.750 | 2011-05-06T22:16:54.750 | null | null | 226 | null |
10449 | 2 | null | 10359 | 1 | null | One way is to use transformations of random variables. It's easy to generate dependent Gaussians; then transform them to uniform variates with the CDF of the Gaussian, and then transform the uniform variates to your target distribution with its inverse CDF.
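A brief sketch of this recipe in R, with exponential margins as an assumed example target (the correlation value and sample size are arbitrary choices):

```r
## Dependent Gaussians -> uniforms (Gaussian CDF) -> target margins (inverse CDF).
set.seed(42)
rho <- 0.7
Sigma <- matrix(c(1, rho, rho, 1), 2, 2)
R <- chol(Sigma)                           # Sigma = R'R
Z <- matrix(rnorm(2000), ncol = 2) %*% R   # rows are dependent N(0, Sigma) pairs
U <- pnorm(Z)                              # dependent uniform variates
X <- qexp(U, rate = 1)                     # dependent exponential variates
cor(X)                                     # off-diagonal entry is positive, below rho
```

Note that the induced Pearson correlation of the target variates is generally not equal to rho; only the copula (the rank dependence) is carried over exactly.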
| null | CC BY-SA 3.0 | null | 2011-05-06T23:23:47.537 | 2011-05-06T23:23:47.537 | null | null | 3567 | null |
10450 | 1 | null | null | 3 | 737 | Can anyone suggest a method for generating a random correlation matrix with $90\%$ of the off-diagonal entries between $[-0.3, 0.3]$? The other $10\%$ should be larger than $0.3$ or smaller than $-0.3$.
| How to generate a bounded random correlation matrix? | CC BY-SA 3.0 | null | 2011-05-07T00:30:21.807 | 2014-11-19T11:26:50.427 | 2014-11-19T11:26:50.427 | 28666 | 4383 | [
"correlation",
"random-generation"
] |
10451 | 2 | null | 10450 | 0 | null | If all you care about is the proportion of entries between $\pm 0.3$ then sure - generate a random correlation matrix, compute the proportion of entries which are greater than $0.3$ in absolute value, and if there are too many pick some at random and reassign them to random values between $\pm 0.3$. Similarly if there are too few.
Edit: Never you mind, this won't work; see the comments...
| null | CC BY-SA 3.0 | null | 2011-05-07T00:41:16.913 | 2011-05-07T01:54:17.850 | 2011-05-07T01:54:17.850 | 26 | 26 | null |
10452 | 2 | null | 2181 | 7 | null | "An Introduction to Multivariate Statistical Analysis", third edition, by T. W. Anderson.
Wiley Series in Probability and Statistics.
| null | CC BY-SA 3.0 | null | 2011-05-07T03:55:02.787 | 2011-05-07T04:00:18.503 | 2011-05-07T04:00:18.503 | 4116 | 4116 | null |
10453 | 2 | null | 10439 | 11 | null | The general result you are looking for (under the stated assumptions) looks like this:
For linear regression with $p$ predictor variables (you have two, $X$ and $X^2$) and an intercept, then with $n$ observations, $\mathbf{X}$ the $n \times (p+1)$ design matrix, $\hat{\beta}$ the $p+1$ dimensional estimator and $a \in \mathbb{R}^{p+1}$
$$ \frac{a^T\hat{\beta} - a^T \beta}{\hat{\sigma}
\sqrt{a^T(\mathbf{X}^T\mathbf{X})^{-1}a}} \sim t_{n-p-1}.$$
The consequence is that you can construct confidence intervals for any linear combination of the $\beta$ vector using the same $t$-distribution you use to construct a confidence interval for one of the coordinates.
In your case, $p = 2$ and $a^T = (0, x_2 - x_1, x_2^2 - x_1^2)$. The denominator in the formula above is the square root of what you compute as the estimate of the standard error (provided that this is what the software computes ...). Note that the variance estimator, $\hat{\sigma}^2$, is supposed to be the (usual) unbiased estimator, where you divide by the degrees of freedom, $n-p-1$, and not the number of observations $n$.
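As an illustration (not part of the original answer), the interval can be computed in R for a quadratic fit; the points $x_1 = 10$, $x_2 = 15$ and the built-in `cars` data are arbitrary choices:

```r
## CI for a'beta with a = (0, x2 - x1, x2^2 - x1^2) in y ~ x + x^2.
fit <- lm(dist ~ speed + I(speed^2), data = cars)
x1 <- 10; x2 <- 15
a   <- c(0, x2 - x1, x2^2 - x1^2)
est <- sum(a * coef(fit))
se  <- sqrt(sum(a * (vcov(fit) %*% a)))  # vcov(fit) is sigma^2-hat (X'X)^{-1}
ci  <- est + c(-1, 1) * qt(0.975, df.residual(fit)) * se
ci                                       # 95% CI for the difference in fitted means
```

Note that `vcov()` already uses the unbiased variance estimator with $n - p - 1$ degrees of freedom, matching the answer.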
| null | CC BY-SA 3.0 | null | 2011-05-07T07:30:36.927 | 2011-05-07T18:07:15.263 | 2011-05-07T18:07:15.263 | 4376 | 4376 | null |
10454 | 1 | 10456 | null | 5 | 6480 | I usually use `plot(lm())` or `plot(glm())` (combined with `par(mfrow=c(2,2))`) to analyze residuals. Unfortunately this is not possible when estimating a `tobit()` (package [AER](http://cran.r-project.org/web/packages/AER/index.html)) or `zeroinfl()` (package [pscl](http://cran.r-project.org/web/packages/pscl/index.html)) model. Furthermore, no Cook's distance command works. Either it is not possible to do the residual analysis in a convenient way, or I just don't understand it. Any help is appreciated!
Here is a short example of what I am talking about:
```
require(AER)
require(stats)
attach(cars)
tob<-tobit(dist~speed)
summary(tob)
par(mfrow=c(2,2))
plot(tob) #doesn't work
cooks.distance(tob) #doesn't work
detach(cars)
```
| How to get Cook's distance and carry out residual analysis for non-lm() and non-glm() models in R? | CC BY-SA 4.0 | null | 2011-05-07T08:48:57.553 | 2019-12-05T14:47:24.250 | 2019-12-05T14:47:24.250 | 92235 | 4496 | [
"r",
"outliers",
"residuals",
"cooks-distance"
] |
10455 | 2 | null | 10427 | 13 | null | Fixed-effect panel regression models involve subtracting group means from the regressors. This means that you can only include time-varying regressors in the model. Since firms usually belong to a single industry, the dummy variable for industry does not vary with time. Hence it is excluded from your model by Stata: after subtracting the group mean, such a variable is identically zero.
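A tiny made-up illustration of why this happens: the within (demeaning) transformation maps any variable that is constant within a firm to exactly zero.

```r
## Two firms, three periods each; industry is constant within firm.
d <- data.frame(firm = rep(1:2, each = 3),
                industry = rep(c(1, 0), each = 3),
                x = c(1, 2, 3, 2, 4, 6))
within <- function(v, g) v - ave(v, g)   # subtract group means
within(d$industry, d$firm)               # all zeros -> dropped by the FE estimator
within(d$x, d$firm)                      # time-varying, so it survives
```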
Note that the Hausman test is a bit tricky, so you cannot base your model selection (fixed vs. random effects) solely on it. Wooldridge explains this very nicely (in my opinion) in [his book](http://books.google.com/books?id=cdBPOJUP4VsC&lpg=PP1&dq=wooldridge%20panel%20data&hl=fr&pg=PA289#v=onepage&q&f=false).
| null | CC BY-SA 3.0 | null | 2011-05-07T09:05:21.043 | 2011-05-07T09:05:21.043 | null | null | 2116 | null |
10456 | 2 | null | 10454 | 6 | null | As described in the on-line help, the `cooks.distance()` function expects an object of class `lm` or `glm`, so it is not possible to get it to work with other types of models. It is defined in `src/library/stats/R/lm.influence.R` in the R sources, so you can browse the code directly and build your own function if nothing exists elsewhere. A quick way of seeing what it does is to type `stats:::cooks.distance.lm` at the R prompt, though.
Also, as `tobit` is nothing more than a wrapper for `survreg`, all methods attached to the latter kind of R object can be used. For example, there is a `residuals.survreg` S3 method (in the [survival](http://cran.r-project.org/web/packages/survival/index.html) package) for extracting residuals from objects inheriting from class `survreg`.
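Because of that inheritance, residual extraction can be sketched as follows. Since `tobit()` just wraps `survreg()`, the fit is shown here with `survreg()` directly; the left-censoring at zero mimics the tobit default, and the `cars` data are only an example:

```r
## Deviance residuals from a gaussian survreg fit (i.e. a tobit-type model).
library(survival)
fit <- survreg(Surv(dist, dist > 0, type = "left") ~ speed,
               data = cars, dist = "gaussian")
r <- residuals(fit, type = "deviance")   # residuals.survreg S3 method
head(r)
plot(cars$speed, r, xlab = "speed", ylab = "deviance residual")
abline(h = 0, lty = 2)
```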
| null | CC BY-SA 3.0 | null | 2011-05-07T09:25:35.793 | 2011-05-07T09:50:17.383 | 2011-05-07T09:50:17.383 | 930 | 930 | null |
10457 | 1 | null | null | 3 | 1507 | I've got a question concerning a negbin (negative binomial) regression: suppose you run the following commands
```
require(MASS)
attach(cars)
mod.NB<-glm.nb(dist~speed)
summary(mod.NB)
detach(cars)
```
(note that `cars` is a dataset available in R; never mind whether this model makes sense). What I'd like to know is how to interpret the parameter theta (at the bottom of the summary). Is this the shape parameter of the negbin distribution, and can it be interpreted as a measure of skewness? I appreciate every thought!
| Interpreting negative binomial regression output in R | CC BY-SA 3.0 | null | 2011-05-06T14:41:22.950 | 2011-05-07T11:50:28.077 | 2011-05-07T11:46:52.683 | 2116 | null | [
"r",
"regression"
] |
10458 | 2 | null | 10457 | 4 | null | If $Y$ is NB with mean $\mu$ then $var(Y) = \mu + \mu^2/\theta$. Skewness would not be an appropriate interpretation; rather, the extra term $\mu^2/\theta$, which vanishes as $\theta \to \infty$ (the Poisson limit), is often taken as "extra-Poisson" dispersion.
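A quick numerical check of that variance relation (the values of $\mu$ and $\theta$ are arbitrary), using R's `rnbinom()` parameterization in which `size` plays the role of $\theta$:

```r
## Var(Y) = mu + mu^2/theta for the negative binomial.
set.seed(1)
mu <- 4; theta <- 2
y <- rnbinom(1e5, mu = mu, size = theta)
c(empirical = var(y), theoretical = mu + mu^2 / theta)  # both close to 12
```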
| null | CC BY-SA 3.0 | null | 2011-05-06T20:11:54.580 | 2011-05-07T11:50:28.077 | 2011-05-07T11:50:28.077 | 2116 | 2129 | null |
10459 | 1 | 10465 | null | 46 | 26351 | Can you suggest some good movies which involve math, probability, etc.? One example is [21](http://en.wikipedia.org/wiki/21_%282008_film%29). I would also be interested in movies that involve algorithms (e.g. text decryption). In general, I'm looking for "geeky" movies featuring famous scientific theories, but no science fiction or documentaries. Thanks in advance!
| Are there any good movies involving mathematics or probability? | CC BY-SA 3.0 | null | 2011-05-07T11:13:51.243 | 2020-12-30T15:36:22.567 | 2011-05-07T12:34:34.433 | 930 | 4504 | [
"probability",
"references"
] |
10460 | 2 | null | 10459 | 13 | null | '[A Beautiful Mind](http://www.imdb.com/title/tt0268978/)' naturally has a bit of game theory in it.
| null | CC BY-SA 3.0 | null | 2011-05-07T13:05:04.590 | 2011-05-07T13:05:04.590 | null | null | 4360 | null |
10462 | 2 | null | 10444 | 10 | null | I can't see that standardization is a good idea in ordinary regression or with a longitudinal model. It makes predictions harder to obtain and usually doesn't solve a problem that needs solving. And what if you have $x$ and $x^2$ in the model? How do you standardize $x^2$? What if you have a continuous variable and a binary variable in the model? How do you standardize the binary variable? Certainly not by its standard deviation, which would give low-prevalence variables greater importance.
In general it's best to interpret model effects on the original scale of $x$.
| null | CC BY-SA 3.0 | null | 2011-05-07T13:23:49.177 | 2011-05-07T13:23:49.177 | null | null | 4253 | null |
10463 | 1 | 22624 | null | 4 | 559 | Possible [Duplicate](https://stats.stackexchange.com/questions/8130/sample-size-required-to-determine-which-of-a-set-of-advertisements-has-the-highes)
First, I am not a statistician (though I'd like to be one), but I am trying to understand how different tests can be utilized to examine samples.
Let's say that I have five Google AdWords ads and I'm trying to develop a test to show which has been most effective at getting users to click on that ad.
```
January February
Ad A - 850 5000
Ad B - 900 5300
Ad C - 880 5100
Ad D - 880 5100
Ad E - 800 5000
```
In terms of frequency, Ad B has the highest number of user click-throughs in both January and February. (Is it fine just to say that one has a higher frequency and is therefore the more effective ad, without relying on any significance test?)
However, for each month, I want to use a statistical test to determine which ad was most effective. Initially, I assumed a chi-square (Pearson's) test would be appropriate, but with five separate ads, I'm not sure if that's the appropriate plan of action.
Can anyone help me understand what action to take and why, or is it enough just to rely on the frequency distribution of the five ads?
| Determining which AdWords have the highest amount of user click throughs | CC BY-SA 3.0 | null | 2011-05-07T13:59:28.740 | 2012-02-11T07:16:38.017 | 2017-04-13T12:44:33.310 | -1 | 3310 | [
"statistical-significance",
"chi-squared-test"
] |
10464 | 1 | 10467 | null | 7 | 1477 | I've been taught that binning a continuous variable into categories is almost never a good idea, because you lose information in the process. But now I'm facing a situation where I have an age variable that is "mostly continuous", which is to say that about 90% of the values represent age in years, and the remaining 10% are recorded as an ordered factor with haphazard levels, e.g., >18, 18-34, >50 ...
I want to use age as a predictor of a continuous outcome variable, but I'm not sure how to proceed. Should I make this into an ordered factor, even though this will mean throwing away information in 90% of the cases? If not, what do I do with the 10% of the cases where age is already an ordered factor? Any suggestions will be appreciated.
| What to do with almost-continuous variable in regression? | CC BY-SA 3.0 | null | 2011-05-07T14:13:51.813 | 2011-07-04T15:17:28.820 | 2011-07-04T15:17:28.820 | null | 4506 | [
"regression",
"categorical-data",
"continuous-data"
] |
10465 | 2 | null | 10459 | 16 | null | [Pi](http://www.imdb.com/title/tt0138704/)
| null | CC BY-SA 3.0 | null | 2011-05-07T14:19:42.887 | 2011-05-07T14:19:42.887 | null | null | 226 | null |
10466 | 2 | null | 10425 | 3 | null | I was challenged by a client to solve this problem in an automatic, i.e. turnkey, way. I implemented an approach that, for each pair (i.e. $y$ and a candidate $x$), prewhitens the series, computes cross-correlations of the prewhitened series, and identifies the PDL (or ADL, autoregressive distributed lag) model, including any dead time, while incorporating intervention detection to yield robust estimates; it then develops a "measure" for this structure. After doing this for all candidate regressors, rank them by the "measure" and select the top $k$ regressors based upon it. This is sometimes referred to as linear filtering. We successfully incorporated this heuristic into our commercially available time series package. You should be able to "roll your own" if you have sufficient time, statistical programming skills, and/or available modules to implement some of these tasks.
| null | CC BY-SA 3.0 | null | 2011-05-07T15:02:08.107 | 2011-05-07T15:26:53.670 | 2011-05-07T15:26:53.670 | 3382 | 3382 | null |
10467 | 2 | null | 10464 | 4 | null | I'd say interact continuous age with a dummy "continuous age is available", and categorical age with a dummy "continuous age is not available". That way you'll be using as much of the information you have as possible.
Of course if the effect of age is something you'd like to be able to summarize with just one point estimate, you'll have to think a bit more (though the coefficient on continuous age should be a pretty good approximation for that).
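A hypothetical sketch of this setup with simulated data (the variable names and the coding scheme are made up for illustration): continuous age enters the model only where it is available, and the ordered factor only where it is not.

```r
set.seed(7)
n <- 200
age_num <- round(runif(n, 18, 80))
cont <- rbinom(n, 1, 0.9)                      # 1 when continuous age was recorded
age_fac <- cut(age_num, c(17, 34, 50, 80),
               labels = c("18-34", "35-50", ">50"))
y <- 2 + 0.05 * age_num + rnorm(n)
d <- data.frame(y, cont,
                age_c = ifelse(cont == 1, age_num, 0),  # zero where only the factor is known
                age_f = age_fac)
## continuous age interacted with availability, the factor with its complement
fit <- lm(y ~ cont:age_c + I(1 - cont):age_f, data = d)
coef(fit)
```

The coefficient on `cont:age_c` then plays the role of the single-point summary mentioned above.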
| null | CC BY-SA 3.0 | null | 2011-05-07T15:14:25.560 | 2011-05-07T15:14:25.560 | null | null | 2044 | null |