Id stringlengths 1 6 | PostTypeId stringclasses 7 values | AcceptedAnswerId stringlengths 1 6 ⌀ | ParentId stringlengths 1 6 ⌀ | Score stringlengths 1 4 | ViewCount stringlengths 1 7 ⌀ | Body stringlengths 0 38.7k | Title stringlengths 15 150 ⌀ | ContentLicense stringclasses 3 values | FavoriteCount stringclasses 3 values | CreationDate stringlengths 23 23 | LastActivityDate stringlengths 23 23 | LastEditDate stringlengths 23 23 ⌀ | LastEditorUserId stringlengths 1 6 ⌀ | OwnerUserId stringlengths 1 6 ⌀ | Tags list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9033 | 1 | null | null | 2 | 136 | Going through some old notes of mine, I found a formula whose purpose seemed statistical, but whose provenance and use I have dumbly forgotten to write down. I am asking if anybody might remember this one:
\begin{align*}
-N-\frac1{N}\sum_{j=1}^N (2j-1)(A_j+B_j),
\end{align*}
where
\begin{align*}
A_j=\ln\left(\frac{1}{2}+\int_{\bar{x}}^{x_j} \exp\left(-\frac{(t-\bar{x})^2}{2s^2}\right)\,\mathrm{d}t\right)
\end{align*}
and
\begin{align*}
B_j=\ln\left(\frac{1}{2}-\int_{\bar{x}}^{x_{N-j+1}} \exp\left(-\frac{(t-\bar{x})^2}{2s^2}\right)\,\mathrm{d}t\right)
\end{align*}
and where $x_j,\ j=1,\dots,N$ are the given data, $\bar{x}$ is the mean, and $s$ is the standard deviation.
Can anybody help?
| Identifying a statistical formula | CC BY-SA 2.5 | null | 2011-04-01T09:19:56.027 | 2011-04-01T18:11:38.237 | 2011-04-01T11:46:09.357 | null | null | [
"normal-distribution"
] |
9034 | 2 | null | 7166 | 4 | null | Predicting hourly data has become my main interest. This problem arises regularly in call-center forecasting. One needs to be concerned with hourly patterns within the day, different daily patterns across the week, and seasonal patterns across the year (monthly/weekly indicators). In addition there can be, and I have seen, interaction between hourly patterns and daily patterns. A transfer function (a generalization/super-set of regression for time-series data) can easily accommodate the mentioned structures. Additionally, events during the year (Xmas, Easter etc.) may need to be included using lead, contemporaneous and/or lag structures. In time-series analysis we need to validate, via intervention-detection schemes, that there are no pulses, level/step shifts, seasonal pulses and/or local time trends remaining in the error process suggesting an augmentation to the model. If the residual series suggests autoregressive structure, then one simply adds a suitable ARIMA structure. Care should be taken when selecting a resource to deal with this problem. I recently analyzed and developed forecasts for a similar problem: the number of passengers in the Paris subway system by hour and by day. IMHO this is a problem of constructing a useful equation from the data which can then be used to simulate possible scenarios, which in turn can be used to evaluate queue length etc.
| null | CC BY-SA 2.5 | null | 2011-04-01T10:57:40.617 | 2011-04-01T10:57:40.617 | null | null | 3382 | null |
9035 | 2 | null | 8800 | 2 | null | OK, as far as I can see, your central question can be summarized as follows:
>
I transform one feature F in my dataset, but only for those rows where the label is "flower"; the rest of the rows remain unchanged. I evaluate NB, C4.5, and Logistic Regression before and after the transformation. The results obtained via NB and C4.5 did not change, but the result via Logistic Regression did. Why?
In general, the transformation of the values of one feature (and additionally, restricted only to a subset of the data) may cause nearly anything. Imagine for example that one replaces the old values by a constant, or by values mainly occurring in another class. In these cases the predictive power of the feature may decrease rapidly. On the other hand, if the values do not overlap with the values of F of the other classes before AND after the transformation, the predictive power is not affected at all.
So in general it is hard to explain your results without knowing the whole dataset + algorithm parameterization/implementation. I wonder, on the other hand, whether the results differ significantly, given that the flower class occurs only 15 times in 5000. Did you analyse the confusion matrix?
However, I can provide some hints:
- NB in my experience is rather robust, so it may be that F has less impact because other features have more predictive power and have already sealed the deal ;). You can e.g. calculate the InformationGainRatio to estimate the predictive power before and after the transformation.
- C4.5: Do the trees differ? If F is not selected as a split node, or is removed during pruning, the results won't be affected at all. If F has been selected, it would be interesting to see whether the new split point can be inverse-transformed to recover the old one. Maybe the overlap of classes at this point (i.e. given the restriction of the data space when F is selected as a split node) has not changed.
- Logistic Regression: My knowledge is not thaaat solid here; in the implementations I have used, the coefficients are estimated via an evolutionary approach. Maybe the results differ because the optimizer ran into a local minimum? In this case one can rerun the learner multiple times (before and after the transformation) to see if/how the results change.
---
Regarding the second question: since the two questions differ so much, I recommend opening a new question for IDS. Meanwhile you may find some of [these links](http://www.google.de/#hl=de&source=hp&q=intrusion+detection+survey&aq=f&aqi=&aql=&oq=&fp=2cdc0fce7b07738a) interesting.
| null | CC BY-SA 2.5 | null | 2011-04-01T11:26:39.950 | 2011-04-01T11:26:39.950 | null | null | 264 | null |
9036 | 1 | 9037 | null | 4 | 2440 | Using R, I noticed that I get different results if I use `fisher.test()` with raw data (as factors) versus a contingency table. The documentation states that `fisher.test(x,y)` will compute a contingency table from `x, y` by treating them as factors.
My example is a baseline statistics exercise: comparing the gender split in a placebo group to the experimental group.
Here's my example for running `fisher.test` on factors:
```
> expData <- data.frame(Placebo=factor(c("Y","Y","Y","Y","N","N","N","N")),
Gender=factor(c("F","F","F","M","M","M","M","F")))
> expData
Placebo Gender
1 Y F
2 Y F
3 Y F
4 Y M
5 N M
6 N M
7 N M
8 N F
> fisher.test(expData[expData$Placebo=="Y","Gender"],
expData[expData$Placebo=="N","Gender"])
Fisher's Exact Test for Count Data
data: expData[expData$Placebo == "Y", "Gender"] and
expData[expData$Placebo == "N", "Gender"]
p-value = 0.25
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
0.00000 13.00002
sample estimates:
odds ratio
0
```
Here's my example for building a contingency table representing the above data and running `fisher.test` on that:
```
> contingency <- matrix(c(3,1,1,3),nrow=2,
dimnames = list(tx=c("Placebo","Exper"), gender=c("F","M")))
> contingency
gender
tx F M
Placebo 3 1
Exper 1 3
> fisher.test(contingency)
Fisher's Exact Test for Count Data
data: contingency
p-value = 0.4857
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
0.2117329 621.9337505
sample estimates:
odds ratio
6.408309
```
Am I using `fisher.test` in the wrong way? Or maybe building my contingency table wrong?
| In R, fisher.test returns different results if I use vectors vs contingency table | CC BY-SA 3.0 | null | 2011-04-01T11:31:34.533 | 2013-07-22T16:41:54.237 | 2013-07-22T16:41:54.237 | 7290 | 1138 | [
"r",
"hypothesis-testing",
"p-value",
"contingency-tables",
"fishers-exact-test"
] |
9037 | 2 | null | 9036 | 5 | null | The first test should read
```
> fisher.test(expData[,1], expData[,2])
Fisher's Exact Test for Count Data
data: expData[, 1] and expData[, 2]
p-value = 0.4857
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
0.001607888 4.722931239
sample estimates:
odds ratio
0.156047
```
as per the doc: `fisher.test(x, y)` expects `x` and `y` to be two factors observed on the same individuals (here, treatment group and gender), not the outcomes of the two groups passed as separate vectors.
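For completeness, passing the cross-tabulation of the two factors gives the same answer; this is a small sketch reusing the `expData` frame from the question:

```r
# Rebuild the example data from the question
expData <- data.frame(Placebo = factor(c("Y","Y","Y","Y","N","N","N","N")),
                      Gender  = factor(c("F","F","F","M","M","M","M","F")))

# Cross-tabulating the two factors reproduces the 2x2 contingency table,
# and fisher.test() on that table matches fisher.test(x, y) on the
# paired vectors (p = 0.4857)
tab <- table(expData$Placebo, expData$Gender)
fisher.test(tab)$p.value
fisher.test(expData[, 1], expData[, 2])$p.value   # identical
```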
| null | CC BY-SA 2.5 | null | 2011-04-01T11:39:40.613 | 2011-04-01T11:39:40.613 | null | null | 930 | null |
9038 | 1 | 9051 | null | 2 | 155 | I'm using [ENSO dataset](http://www.itl.nist.gov/div898/strd/nls/data/LINKS/DATA/ENSO.dat) from the NIST Statistical Reference Datasets as a test for nonlinear regression code.
The data are monthly averaged atmospheric pressure differences between Easter Island and Darwin, called [Southern Oscillation](http://en.wikipedia.org/wiki/El_Ni%C3%B1o-Southern_Oscillation) Index (SOI). In the test, the data are fitted with three sine cycles ([like this](http://fityk.nieto.pl/_images/fityk-1.0.1-osx-so.png)).
I got curious about the Southern Oscillations and I started looking for more SOI data. I learned that there are a few different methods of calculating the SOI, but none of the other sources that I found (e.g. [here](http://www.bom.gov.au/climate/current/soihtm1.shtml)) gives the same numbers as in the NIST StRD.
There is a reference in the dataset, Numerical Methods and Software by Kahaner, Moler and Nash, where I found only that the data are for years 1962-1975.
So my question is not really about statistics: how is the ENSO dataset related to the SOI data from other sources?
| The source of ENSO data from NIST | CC BY-SA 2.5 | null | 2011-04-01T11:40:34.837 | 2011-04-01T14:50:47.687 | 2011-04-01T11:44:21.793 | null | 3981 | [
"dataset",
"nonlinear-regression"
] |
9039 | 2 | null | 8567 | -2 | null | From Wikipedia: "Given a data set, several candidate models may be ranked according to their AIC values. From the AIC values one may also infer that e.g. the top two models are roughly in a tie and the rest are far worse. Thus, AIC provides a means for comparison among models—a tool for model selection. AIC does not provide a test of a model in the usual sense of testing a null hypothesis; i.e. AIC can tell nothing about how well a model fits the data in an absolute sense. Ergo, if all the candidate models fit poorly, AIC will not give any warning of that."
Thus the AIC should not be used to do model selection. After reading up on DLM, it appears to me that this approach is MODEL-BASED rather than DATA-BASED. One is assuming a model when using DLM. A DLM model uses a mixture of high-order polynomials in time and fixed trigonometric structures, which might be useful for certain physical time series. This approach does not lend itself to generating a set of residuals free of structure which meet the Gaussian requirements. A useful alternative is to form the model empirically, detecting as needed any auto-projective structure and any deterministic structure (Pulses, Level Shifts, Seasonal Pulses and/or Local Time Trends) via INTERVENTION DETECTION, also referred to as OUTLIER DETECTION.
| null | CC BY-SA 2.5 | null | 2011-04-01T11:45:03.477 | 2011-04-01T11:45:03.477 | null | null | 3382 | null |
9040 | 1 | 9048 | null | 7 | 1061 | First off, I'm a programmer but my experience with true statistics ended at A-Level so I'm looking to all of you for help with a little side project I've been tinkering with.
At home I use Plex Media Center to display all of my movies. I built an export tool for this to generate an HTML file containing information on your library so that others can view it online. After I made this tool I realised I now had access to a wealth of data about films and the actors in them. And this is where you guys (and gals) hopefully come in.
I want to visualize the relationships between actors and movies somehow. Initially I just used a node graph library to map all actors who have been in more than one movie to all their movies and ended up with this: [http://www.flickr.com/photos/dachande663/5574979625/](http://www.flickr.com/photos/dachande663/5574979625/) [section of a 5000x2500px image]
The problem is, with anything more than 250 movies it just turns into a mess of spaghetti that's impossible to follow. I've looked into arc diagrams but think it would just be even more confusing.
My question therefore is: how do I visualize this? Size isn't too much of an issue as I'd love to print this out on a large canvas and actually hang it up. Also, I'll eventually replace the text with images of the respective movies and actors. What I'm trying to avoid is having a million lines snaking everywhere. I've tried to find the most important movies and place them more centrally but at the moment that's more guess work than actual logic.
Are there libraries that can do a better job of this, or even a better way of displaying the data (dropping actors as nodes and adding them as edge labels)? I'm currently using Dracula graph, which provides an okay-starting point but can change as needed.
Any input will be much appreciated. Cheers.
| Visualize movie/actor relationships | CC BY-SA 2.5 | null | 2011-04-01T11:54:19.400 | 2011-05-04T13:50:39.353 | null | null | 3988 | [
"data-visualization"
] |
9041 | 2 | null | 9040 | 3 | null | Graphviz can optimise the layout, see something similar [here](http://www.graphviz.org/content/siblings).
| null | CC BY-SA 2.5 | null | 2011-04-01T12:10:43.893 | 2011-04-01T12:10:43.893 | null | null | 3911 | null |
9042 | 2 | null | 9020 | 2 | null | Hi,
Take a look at the `varclus` function in Frank Harrell's Hmisc package.
```
require(Hmisc)
? varclus
```
| null | CC BY-SA 2.5 | null | 2011-04-01T12:48:36.933 | 2011-04-01T12:48:36.933 | null | null | 2028 | null |
9043 | 1 | 9044 | null | 1 | 387 | I'm helping my father to translate a text from my own native Latvian language to English. Even though I'm a fair English speaker and have a bit of mathematics in my past, I never did have a firm grasp of statistics. :(
The term I'm looking for has something to do with error calculation. A GPS device is measuring coordinates and then trying to estimate the error. In wikipedia I've come across two different likely terms - [Mean squared error](http://en.wikipedia.org/wiki/Mean_squared_error) and [Root mean square error](http://en.wikipedia.org/wiki/Root_mean_square_deviation). From what I can understand, they are not one and the same thing, but what each of them means is beyond me.
A direct translation from Latvian language would be "average (mean?) squared error estimate". No idea what that means either, I'm afraid.
Perhaps someone here has a better idea?
| What is the English name for a statistics term that I'm looking for? | CC BY-SA 2.5 | null | 2011-04-01T08:53:26.107 | 2011-04-01T13:10:24.590 | null | null | 2590 | [
"terminology"
] |
9044 | 2 | null | 9043 | 3 | null | Google translate gives ["rms error of assessment"](http://translate.google.co.uk/?hl=en&tab=wT#lv%7Cen%7Cvid%C4%93jo%20kvadr%C4%81tisko%20k%C4%BC%C5%ABdu%20nov%C4%93rt%C4%93jums) which (as root mean square error) seems to be the intended meaning.
The difference between mean square error and root mean square error is similar to the difference between variance and standard deviation: if you want the average error to be reported in the same units as the measurement (metres or degrees or whatever), then you need to take the square root.
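A tiny numerical illustration of that difference, with made-up error values just to show the units:

```r
# Five hypothetical measurement errors, in metres
errors <- c(-2, 1, 0, 3, -1)

mse  <- mean(errors^2)  # mean squared error: 3, in square metres
rmse <- sqrt(mse)       # root mean square error: about 1.73, back in metres
```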
| null | CC BY-SA 2.5 | null | 2011-04-01T09:51:37.740 | 2011-04-01T09:51:37.740 | null | null | 2958 | null |
9045 | 2 | null | 9043 | 0 | null | I would guess it is a least mean squared error (LMSE) estimator. [publication1](http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5090171) [publication2](http://web.cecs.pdx.edu/~ssp/Reports/2006/Monaghan.pdf) [wikipedia](http://en.wikipedia.org/wiki/Mean_squared_error)
The GPS is trying to estimate coordinates; it does so by minimizing a squared-error function, so it reports the coordinate that is best in terms of MSE.
| null | CC BY-SA 2.5 | null | 2011-04-01T12:21:40.590 | 2011-04-01T12:29:20.880 | null | null | 4060 | null |
9048 | 2 | null | 9040 | 6 | null | N.B.: This was previously a (long) comment that I've converted to an answer. Hopefully I'll be able to post an example of what I describe below within a day or two.
Why not try something like a heatmap? Have movies as rows and actors as columns. Maybe sort each of them in terms of the number of actors in the movie and number of movies each actor has been in. Then color each cell where there is a match. This is basically a visualization of the adjacency matrix. The proposed sorting should make some interesting patterns and the right use of color could make it both artistic and more informative. Maybe color by movie type or Netflix rating or proportion of male to female actors (or viewers!), etc.
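A rough sketch of the idea in R; the movie/actor pairs here are made up and stand in for the real library data:

```r
# Made-up movie/actor pairs (one row per appearance)
edges <- data.frame(movie = c("A", "A", "B", "B", "C", "C", "C"),
                    actor = c("x", "y", "y", "z", "x", "y", "z"))

# Adjacency (incidence) matrix: movies as rows, actors as columns
adj <- unclass(table(edges$movie, edges$actor))

# Sort rows and columns by how connected they are
adj <- adj[order(rowSums(adj), decreasing = TRUE),
           order(colSums(adj), decreasing = TRUE)]

# Draw it as a heatmap; image() wants the matrix transposed
image(seq_len(ncol(adj)), seq_len(nrow(adj)), t(adj),
      xlab = "actors", ylab = "movies", axes = FALSE)
```

With real data the rows and columns could then be coloured by genre, rating, etc., as suggested above.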
| null | CC BY-SA 2.5 | null | 2011-04-01T13:58:06.120 | 2011-04-01T13:58:06.120 | null | null | 2970 | null |
9049 | 2 | null | 9040 | 1 | null | I wouldn't know how you'd go about constructing this, but I liked the method using hyperbolic geometry:
[http://www.newscientist.com/data/images/ns/cms/dn19420/dn19420-1_800.jpg](http://www.newscientist.com/data/images/ns/cms/dn19420/dn19420-1_800.jpg)
[http://www.newscientist.com/article/dn19420-escherlike-internet-map-could-speed-online-traffic.html](http://www.newscientist.com/article/dn19420-escherlike-internet-map-could-speed-online-traffic.html)
| null | CC BY-SA 2.5 | null | 2011-04-01T14:30:18.650 | 2011-04-01T14:30:18.650 | null | null | null | null |
9050 | 1 | null | null | 11 | 4650 | I'd like to obtain a graphic representation of the correlations in articles I have gathered so far to easily explore the relationships between variables. I used to draw a (messy) graph but I have too much data now.
Basically, I have a table with:
- [0]: name of variable 1
- [1]: name of variable 2
- [2]: correlation value
The "overall" matrix is incomplete (e.g., I have the correlation of V1*V2, V2*V3, but not V1*V3).
Is there a way to represent this graphically?
| How to display a matrix of correlations with missing entries? | CC BY-SA 2.5 | null | 2011-04-01T14:40:54.433 | 2011-04-01T17:02:00.770 | 2011-04-01T16:27:54.967 | 3827 | 3827 | [
"r",
"data-visualization",
"correlation"
] |
9051 | 2 | null | 9038 | 3 | null | You would not expect your two sets of numbers to be the same: the NIST numbers seem to have been published to [test statistical software](http://www.itl.nist.gov/div898/strd/general/bkground.html) while the Australian BOM numbers are designed to [identify whether the ENSO cycle is above or below average](http://www.bom.gov.au/climate/glossary/soi.shtml).
In particular the BOM numbers have been standardised to an index to have monthly seasonal factors removed, and for the remaining numbers to have a mean of around 0 and standard deviation around 10. They may also have had corrections or improved sources used since the NIST numbers were published.
The NIST numbers have not had these processes applied: in fact, one of the purposes of the illustrated analysis is identifying the mean (coefficient `b1`) and the magnitude of the annual cycle (coefficients `b2` and `b3`). It is possible that the NIST numbers are in real physical units (perhaps of pressure).
You might have hoped that there was a linear relationship between the two sets of numbers, for example between the January BOM numbers and data points 1, 13, 25, ... , 157 of the NIST numbers. I did not spot it, though I did not try very hard.
| null | CC BY-SA 2.5 | null | 2011-04-01T14:50:47.687 | 2011-04-01T14:50:47.687 | null | null | 2958 | null |
9052 | 2 | null | 9050 | 5 | null | Your data may be like
```
name1 name2 correlation
1 V1 V2 0.2
2 V2 V3 0.4
```
You can rearrange your long table into a wide one with the following R code
```
# recreate the long table
d = structure(list(name1 = c("V1", "V2"), name2 = c("V2", "V3"),
    correlation = c(0.2, 0.4)), .Names = c("name1", "name2",
    "correlation"), row.names = 1:2, class = "data.frame")
k = d[, c(2, 1, 3)]   # the same pairs with name1 and name2 swapped
names(k) = names(d)
e = rbind(d, k)       # both orderings of every pair
# reshape from long to wide format: one row per variable
x = with(e, reshape(e[order(name2),], v.names="correlation",
    idvar="name1", timevar="name2", direction="wide"))
x[order(x$name1),]
```
You get
```
name1 correlation.V1 correlation.V2 correlation.V3
1 V1 NA 0.2 NA
3 V2 0.2 NA 0.4
4 V3 NA 0.4 NA
```
Now you can use techniques for visualizing correlation matrices (at least ones that can cope with missing values).
| null | CC BY-SA 2.5 | null | 2011-04-01T15:10:47.950 | 2011-04-01T15:10:47.950 | null | null | 3911 | null |
9053 | 1 | 9055 | null | 45 | 54369 | Why does a cross-validation procedure overcome the problem of overfitting a model?
| How does cross-validation overcome the overfitting problem? | CC BY-SA 2.5 | null | 2011-04-01T16:26:57.173 | 2020-07-19T23:07:44.370 | 2020-07-19T23:07:44.370 | 12359 | 3269 | [
"regression",
"cross-validation",
"model-selection",
"overfitting"
] |
9054 | 2 | null | 9050 | 12 | null | Building upon @GaBorgulya's response, I would suggest trying a fluctuation or level plot (a.k.a. heatmap display).
For example, using [ggplot2](http://had.co.nz/ggplot2/):
```
library(ggplot2, quietly=TRUE)
k <- 100
rvals <- sample(seq(-1,1,by=.001), k, replace=TRUE)
rvals[sample(1:k, 10)] <- NA
cc <- matrix(rvals, nr=10)
ggfluctuation(as.table(cc)) + opts(legend.position="none") +
labs(x="", y="")
```
(Here, missing entries are displayed in plain gray, but the default color scheme can be changed, and you can also put "NA" in the legend.)

or
```
ggfluctuation(as.table(cc), type="color") + labs(x="", y="") +
scale_fill_gradient(low = "red", high = "blue")
```
(Here, missing values are simply not displayed. However, you can add a `geom_text()` and display something like "NA" in the empty cell.)

| null | CC BY-SA 2.5 | null | 2011-04-01T16:38:04.397 | 2011-04-01T17:02:00.770 | 2011-04-01T17:02:00.770 | 930 | 930 | null |
9055 | 2 | null | 9053 | 35 | null | I can't think of a sufficiently clear explanation just at the moment, so I'll leave that to someone else; however cross-validation does not completely overcome the over-fitting problem in model selection, it just reduces it. The cross-validation error does not have a negligible variance, especially if the size of the dataset is small; in other words you get a slightly different value depending on the particular sample of data you use. This means that if you have many degrees of freedom in model selection (e.g. lots of features from which to select a small subset, many hyper-parameters to tune, many models from which to choose) you can over-fit the cross-validation criterion as the model is tuned in ways that exploit this random variation rather than in ways that really do improve performance, and you can end up with a model that performs poorly. For a discussion of this, see [Cawley and Talbot "On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation", JMLR, vol. 11, pp. 2079−2107, 2010](http://jmlr.csail.mit.edu/papers/v11/cawley10a.html)
Sadly cross-validation is most likely to let you down when you have a small dataset, which is exactly when you need cross-validation the most. Note that k-fold cross-validation is generally more reliable than leave-one-out cross-validation as it has a lower variance, but may be more expensive to compute for some models (which is why LOOCV is sometimes used for model selection, even though it has a high variance).
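A small simulation sketch of this effect (purely synthetic data): with enough candidate features, the best leave-one-out CV score looks impressive even though no feature has any real predictive power.

```r
set.seed(1)
n <- 30; p <- 200
y <- rnorm(n)
X <- matrix(rnorm(n * p), n, p)   # pure noise: no feature truly predicts y

# Leave-one-out CV mean squared error of a one-feature linear model,
# using the closed-form shortcut e_i / (1 - h_i) for least squares
loocv_mse <- function(x, y) {
  fit <- lm(y ~ x)
  mean((residuals(fit) / (1 - hatvalues(fit)))^2)
}

scores <- apply(X, 2, loocv_mse, y = y)
min(scores)      # the "winning" feature looks far better ...
median(scores)   # ... than a typical one, purely by chance
```

Selecting the feature with the smallest score here is exactly the over-fitting of the selection criterion described above.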
| null | CC BY-SA 3.0 | null | 2011-04-01T16:51:53.760 | 2015-12-10T17:36:21.880 | 2015-12-10T17:36:21.880 | 887 | 887 | null |
9056 | 2 | null | 9050 | 3 | null | The `corrplot` package provides a useful function for visualizing correlation matrices. It accepts a correlation matrix as the input object and has several options for displaying the matrix itself. A nice feature is that it can reorder your variables using hierarchical clustering or PCA methods.
See the accepted answer in [this thread](https://stats.stackexchange.com/questions/2086/quickly-evaluate-visually-correlations-between-ordered-categorical-data-in-r) for an example visualization.
| null | CC BY-SA 2.5 | null | 2011-04-01T16:52:54.727 | 2011-04-01T16:52:54.727 | 2017-04-13T12:44:25.283 | -1 | 3309 | null |
9057 | 2 | null | 9029 | 6 | null | Agresti 2007 discusses them. They're in chapter 9 and 10. The 2002 edition probably discusses them too, as @suncoolsu mentioned.
Agresti refers to the group of response variables as a cluster and discusses according analysis with marginal models, conditional models and generalized estimating equations.
| null | CC BY-SA 3.0 | null | 2011-04-01T16:57:26.190 | 2014-10-18T17:16:56.540 | 2014-10-18T17:16:56.540 | 3874 | 3874 | null |
9058 | 2 | null | 9033 | 4 | null | When correctly rewritten (as indicated in a comment), it will be the [Anderson-Darling](http://en.wikipedia.org/wiki/Anderson%E2%80%93Darling_test) statistic (for a normality test).
| null | CC BY-SA 2.5 | null | 2011-04-01T18:11:38.237 | 2011-04-01T18:11:38.237 | null | null | 919 | null |
9059 | 2 | null | 9053 | 18 | null | My answer is more intuitive than rigorous, but maybe it will help...
As I understand it, overfitting is the result of model selection based on training and testing using the same data, where you have a flexible fitting mechanism: you fit your sample of data so closely that you're fitting the noise, outliers, and all the other variance.
Splitting the data into a training and testing set keeps you from doing this. But a static split is not using your data efficiently and your split itself could be an issue. Cross-validation keeps the don't-reward-an-exact-fit-to-training-data advantage of the training-testing split, while also using the data that you have as efficiently as possible (i.e. all of your data is used as training and testing data, just not in the same run).
If you have a flexible fitting mechanism, you need to constrain your model selection so that it doesn't favor "perfect" but complex fits somehow. You can do it with AIC, BIC, or some other penalization method that penalizes fit complexity directly, or you can do it with CV. (Or you can do it by using a fitting method that is not very flexible, which is one reason linear models are nice.)
Another way of looking at it is that learning is about generalizing, and a fit that's too tight is in some sense not generalizing. By varying what you learn on and what you're tested on, you generalize better than if you only learned the answers to a specific set of questions.
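As a small sketch of the AIC route on made-up data: the raw residual error always falls as the model gets more flexible, while AIC charges for the extra coefficients.

```r
set.seed(42)
x <- runif(80)
y <- sin(2 * pi * x) + rnorm(80, sd = 0.3)   # made-up noisy data

degrees <- 1:10
rss <- sapply(degrees, function(d) deviance(lm(y ~ poly(x, d))))
aic <- sapply(degrees, function(d) AIC(lm(y ~ poly(x, d))))

all(diff(rss) <= 0)   # residual error never increases with degree (nested fits)
which.min(aic)        # AIC instead trades fit against the number of coefficients
```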
| null | CC BY-SA 2.5 | null | 2011-04-01T18:23:29.910 | 2011-04-01T18:23:29.910 | null | null | 1764 | null |
9060 | 2 | null | 9030 | 2 | null | Some people have started to look at this issue in the chemometrics literature. For instance, about 20 years ago Robert Gibbons started to do statistical analyses suggesting instrument responses (for low-level measurement of chemicals) were nonlinear, heteroscedastic, and had non-normal (perhaps lognormal) error distributions. I found an [abstract of one of those papers](https://link.springer.com/article/10.1007/BF00680298) on Springer's site, Some statistical and conceptual issues in the detection of low-level environmental pollutants (JEES 1995).
| null | CC BY-SA 4.0 | null | 2011-04-01T20:39:18.377 | 2022-06-20T12:23:38.553 | 2022-06-20T12:23:38.553 | 919 | 919 | null |
9061 | 2 | null | 8911 | 0 | null | R would be my first vote. Another free option would be gretl. If you happen to know BUGS, JAGS makes sense and is free. And I really don't like its syntax, but if you have some knowledge of Matlab, the free alternative Octave runs on MacOS X as well.
| null | CC BY-SA 2.5 | null | 2011-04-01T21:35:45.077 | 2011-04-01T21:35:45.077 | null | null | 1764 | null |
9062 | 1 | 9096 | null | 6 | 8032 | I have a 2x3 contingency table - the row variable is a factor, the column variable is an ordered factor (ordinal level). I'd like to apply either symmetrical or asymmetrical association technique. What do you recommend me to do? Which technique do you find the most appropriate?
| Measure of association for 2x3 contingency table | CC BY-SA 2.5 | null | 2011-04-01T21:45:15.740 | 2011-04-02T16:46:40.420 | null | null | 1356 | [
"correlation",
"categorical-data",
"contingency-tables",
"association-measure"
] |
9063 | 2 | null | 9062 | 3 | null | On a 2x3 contingency table where the three-level factor is ordered you may use rank correlation (Spearman or Kendall) to assess association between the two variables.
You may also think about the data as an ordered variable observed in two groups. A corresponding significance test could be the Mann-Whitney test (with many ties). This has an associated measure of association, the [WMW odds, related to Agresti’s generalized odds ratio](http://www2.sas.com/proceedings/sugi31/209-31.pdf).
Both for rank correlation coefficients and WMW odds confidence intervals can be calculated. I find odds more intuitive, otherwise I believe both kinds of measures are appropriate.
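A sketch of the rank-correlation route on a made-up 2×3 table (the counts are hypothetical), by expanding the table back into one observation per row:

```r
# Hypothetical 2x3 table: rows are the two groups,
# columns are the ordered levels 1 < 2 < 3
tab <- matrix(c(10, 5, 3,
                 4, 8, 9), nrow = 2, byrow = TRUE)

# Expand the counts into individual observations
counts <- as.vector(t(tab))
group <- rep(rep(1:2, each = 3), counts)
level <- rep(rep(1:3, times = 2), counts)

cor(group, level, method = "spearman")   # positive: group 2 sits at higher levels
wilcox.test(level ~ group)               # Mann-Whitney test; R warns about the many ties
```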
| null | CC BY-SA 2.5 | null | 2011-04-01T22:39:37.960 | 2011-04-01T22:39:37.960 | null | null | 3911 | null |
9064 | 1 | 9084 | null | 6 | 2376 | I've heard a bit about the 'kernel trick' for support vector machines, and I was wondering:
- How do you identify problems that might benefit from the kernel trick?
- How to implement it in R?
Thank you
| Implementing the 'kernel trick' for a support vector machine in R | CC BY-SA 2.5 | null | 2011-04-01T23:15:45.397 | 2015-04-19T20:48:43.750 | 2015-04-19T20:48:43.750 | 9964 | 2817 | [
"r",
"machine-learning",
"svm",
"kernel-trick"
] |
9065 | 5 | null | null | 0 | null | Overview
From The Discipline of Machine Learning by Tom Mitchell:
>
The field of Machine Learning seeks to answer the question "How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?" This question covers a broad range of learning tasks, such as how to design autonomous mobile robots that learn to navigate from their own experience, how to data mine historical medical records to learn which future patients will respond best to which treatments, and how to build search engines that automatically customize to their user's interests. To be more precise, we say that a machine learns with respect to a particular task T, performance metric P, and type of experience E, if the system reliably improves its performance P at task T, following experience E. Depending on how we specify T, P, and E, the learning task might also be called by names such as data mining, autonomous discovery, database updating, programming by example, etc.
High level machine learning problems include:
- supervised learning (tag);
- unsupervised learning (tag);
- semi-supervised learning (tag);
- outlier or anomaly detection (tag); and
- reinforcement learning (tag).
References
The following threads have details of references on the subject:
- Can you recommend a book to read before Elements of Statistical Learning?
- Machine learning cookbook / reference card / cheatsheet?
The following journals are dedicated to research in Machine Learning:
- Journal of Machine Learning Research (Open Access)
- Machine Learning
- International Journal of Machine Learning and Cybernetics
- International Journal of Machine Learning and Applications (Open Access)
- International Journal of Machine Learning and Computing (Open Access)
| null | CC BY-SA 3.0 | null | 2011-04-01T23:43:56.183 | 2017-08-30T17:14:06.607 | 2017-08-30T17:14:06.607 | 171895 | 7365 | null |
9066 | 4 | null | null | 0 | null | Machine learning algorithms build a model of the training data. The term "machine learning" is vaguely defined; it includes what is also called statistical learning, reinforcement learning, unsupervised learning, etc. ALWAYS ADD A MORE SPECIFIC TAG. | null | CC BY-SA 4.0 | null | 2011-04-01T23:43:56.183 | 2019-04-08T14:17:29.407 | 2019-04-08T14:17:29.407 | 28666 | 7365 | null |
9067 | 1 | 9088 | null | 1 | 9404 | I am taking a class in data mining and I am working on a term project using the [BRFSS](http://www.cdc.gov/brfss/) dataset. I have a huge dataset with 405 columns and 12,000 rows. There are many columns which are completely empty. I was trying to remove the empty columns using SAS, R or Excel, but it didn't work. Could you suggest a method to remove the empty columns, or any tutorial that will help me clean up the data? There are a lot of missing cells too. I am using KNIME to train my data and it doesn't work if there are missing values. How can I handle the missing values?
| Removing empty columns from a dataset | CC BY-SA 2.5 | null | 2011-04-02T00:07:35.167 | 2012-12-14T15:19:22.137 | 2011-04-02T13:09:16.883 | null | 3897 | [
"excel"
] |
9068 | 1 | 9090 | null | 12 | 13951 | I am having a problem computing the Pearson correlation coefficient of data sets with possibly zero standard deviation (i.e. all data points have the same value).
Suppose that I have the following two data sets:
```
float x[] = {2, 2, 2, 3, 2};
float y[] = {2, 2, 2, 2, 2};
```
The correlation coefficient "r" would be computed using the following equation:
```
float r = covariance(x, y) / (std_dev(x) * std_dev(y));
```
However, because all data in data set "y" has the same value, the standard deviation std_dev(y) would be zero and "r" would be undefined.
Is there any solution for this problem? Or should I use other methods to measure data relationship in this case?
| Pearson correlation of data sets with possibly zero standard deviation? | CC BY-SA 2.5 | null | 2011-04-01T14:57:10.287 | 2011-04-29T01:08:20.040 | 2011-04-29T01:08:20.040 | 3911 | 3993 | [
"correlation"
] |
9069 | 2 | null | 9062 | 2 | null | One way to incorporate the ordering of the column factor into your analysis is to use the cumulative frequencies instead of the cell frequencies. So in your table you have:
$$f_{ij}=\frac{n_{ij}}{n_{\bullet\bullet}}\;\;\;\; i=1,2\;\;j=1,2,3$$
where a "$\bullet$" indicates a sum over that index. So I suggest modeling instead:
$$g_{ij}=\sum_{k=1}^{j}f_{ik}$$
Now you basically have a simple hypothesis for association, that the index $i$ doesn't matter. So you have:
$$E(g_{ij}|H_{0})=\sum_{k=1}^{j}\frac{n_{\bullet k}}{n_{\bullet\bullet}}$$
And then use the good old "entropy" test statistic:
$$T(H_{0})=n_{\bullet\bullet}\sum_{i,j}g_{ij}log\left(\frac{g_{ij}}{E(g_{ij}|H_{0})}\right)$$
Plugging in the numbers gives:
$$T(H_{0})=\sum_{i,j}\left(\sum_{k=1}^{j}n_{ik}\right)log\left(\frac{\sum_{k=1}^{j}n_{ik}}{\sum_{k=1}^{j}n_{\bullet k}}\right)$$
You reject if this number is too big; it should be interpreted as a "log-odds" ratio, which will help with choosing cut-offs.
| null | CC BY-SA 2.5 | null | 2011-04-02T00:25:13.240 | 2011-04-02T00:25:13.240 | null | null | 2392 | null |
9070 | 2 | null | 9062 | 1 | null | You could use the Jonckheere Terpstra test. In SAS, you can get this in PROC FREQ with the /JT option on the tables statement. I didn't see a function for it in R, but there may be one out there.
| null | CC BY-SA 2.5 | null | 2011-04-02T01:14:14.767 | 2011-04-02T01:14:14.767 | null | null | 686 | null |
9071 | 1 | 9073 | null | 18 | 6714 | If I have two normally distributed independent random variables $X$ and $Y$ with means $\mu_X$ and $\mu_Y$ and standard deviations $\sigma_X$ and $\sigma_Y$ and I discover that $X+Y=c$, then (assuming I have not made any errors) the conditional distribution of $X$ and $Y$ given $c$ are also normally distributed with means
$$\mu_{X|c} = \mu_X + (c - \mu_X - \mu_Y)\frac{ \sigma_X^2}{\sigma_X^2+\sigma_Y^2}$$ $$\mu_{Y|c} = \mu_Y + (c - \mu_X - \mu_Y)\frac{ \sigma_Y^2}{\sigma_X^2+\sigma_Y^2}$$
and standard deviation
$$\sigma_{X|c} = \sigma_{Y|c} = \sqrt{ \frac{\sigma_X^2 \sigma_Y^2}{\sigma_X^2 + \sigma_Y^2}}.$$
It is no surprise that the conditional standard deviations are the same: given $c$, if one goes up the other must come down by the same amount. It is interesting that the conditional standard deviation does not depend on $c$.
What I cannot get my head round are the conditional means, where they take a share of the excess $(c - \mu_X - \mu_Y)$ proportional to the original variances, not to the original standard deviations.
For example, if they have zero means, $\mu_X=\mu_Y=0$, and standard deviations $\sigma_X =3$ and $\sigma_Y=1$ then conditioned on $c=4$ we would have $E[X|c=4]=3.6$ and $E[Y|c=4]=0.4$, i.e. in the ratio $9:1$ even though I would have intuitively thought that the ratio $3:1$ would be more natural. Can anyone give an intuitive explanation for this?
This was provoked by [a Math.SE question](https://math.stackexchange.com/questions/30365/posterior-distribution-after-having-partial-information-on-some-linear-combinatio)
| Intuitive explanation of contribution to sum of two normally distributed random variables | CC BY-SA 2.5 | null | 2011-04-02T01:28:12.757 | 2011-04-02T03:05:20.290 | 2017-04-13T12:19:38.800 | -1 | 2958 | [
"normal-distribution",
"conditional-probability"
] |
9072 | 2 | null | 9068 | 0 | null | The correlation is undefined in that case. If you must define it, I would define it as 0, but consider a simple mean absolute difference instead.
| null | CC BY-SA 2.5 | null | 2011-04-02T02:07:54.087 | 2011-04-02T02:07:54.087 | null | null | 2456 | null |
9073 | 2 | null | 9071 | 17 | null | The question readily reduces to the case $\mu_X = \mu_Y = 0$ by looking at $X-\mu_X$ and $Y-\mu_Y$.
Clearly the conditional distributions are Normal. Thus, the mean, median, and mode of each are coincident. The modes will occur at the coordinates of a local maximum of the bivariate PDF of $X$ and $Y$ constrained to the curve $g(x,y) = x+y = c$. This implies the contour of the bivariate PDF at this location and the constraint curve have parallel tangents. (This is the theory of Lagrange multipliers.) Because the equation of any contour is of the form $f(x,y) = x^2/(2\sigma_X^2) + y^2/(2\sigma_Y^2) = \rho$ for some constant $\rho$ (that is, all contours are ellipses), their gradients must be parallel, whence there exists $\lambda$ such that
$$\left(\frac{x}{\sigma_X^2}, \frac{y}{\sigma_Y^2}\right) = \nabla f(x,y) = \lambda \nabla g(x,y) = \lambda(1,1).$$

It follows immediately that the modes of the conditional distributions (and therefore also the means) are determined by the ratio of the variances, not of the SDs.
This analysis works for correlated $X$ and $Y$ as well and it applies to any linear constraints, not just the sum.
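The 9:1 split from the example in the question ($\sigma_X=3$, $\sigma_Y=1$, $c=4$) can also be checked numerically; the following is a quick rejection-sampling sketch (the window width 0.05 and the sample size are arbitrary choices):

```python
import random

random.seed(1)

# X ~ N(0, 3^2), Y ~ N(0, 1^2); condition on X + Y ≈ 4 by keeping
# only draws with |X + Y - 4| < 0.05, then average the accepted X's.
accepted = []
for _ in range(500_000):
    x = random.gauss(0, 3)
    y = random.gauss(0, 1)
    if abs(x + y - 4) < 0.05:
        accepted.append(x)

mean_x = sum(accepted) / len(accepted)
# Theory: E[X | X+Y=4] = 4 * 9 / (9 + 1) = 3.6 (variance ratio, not SD ratio)
```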
| null | CC BY-SA 2.5 | null | 2011-04-02T03:05:20.290 | 2011-04-02T03:05:20.290 | null | null | 919 | null |
9074 | 1 | 9082 | null | 7 | 11660 | I'm wondering how to implement two-way clustering, as explained in [Statistica documentation](http://www.statsoft.com/textbook/cluster-analysis/#twotwo) in R. Any help in this regard will be highly appreciated. Thanks
| Two-way clustering in R | CC BY-SA 2.5 | null | 2011-04-02T05:39:09.150 | 2011-04-02T13:07:14.437 | 2011-04-02T13:07:14.437 | null | 3903 | [
"r",
"clustering",
"multivariate-analysis"
] |
9075 | 2 | null | 9064 | 3 | null | you should take a look at kernlab R package. They even have a very nice [vignette](http://cran.r-project.org/web/packages/kernlab/vignettes/kernlab.pdf).
| null | CC BY-SA 2.5 | null | 2011-04-02T06:08:42.183 | 2011-04-02T06:08:42.183 | null | null | 223 | null |
9076 | 1 | 9078 | null | 2 | 547 | I am trying to programmatically identify an ARIMA model for a series of data and forecast values.
Currently the problem I am facing is finding a way to evaluate the partial autocorrelation function (PACF). I have been looking for methods to calculate the PACF for quite a long time now, but in vain.
Please provide some online resources which can help me in this matter.
Thank you.
| Methods for evaluating partial autocorrelation for identification of ARIMA models | CC BY-SA 2.5 | null | 2011-04-02T06:29:34.960 | 2011-04-04T18:01:02.420 | 2011-04-02T10:22:43.337 | 930 | 3972 | [
"autocorrelation",
"arima"
] |
9077 | 2 | null | 9067 | 2 | null | If they really are just completely blank columns then in R...
```
read.table( 'myBigFile', strip.white = TRUE)
```
might do what you want. You will have to set other arguments of the `read.table()` command as needed, in particular the `sep` argument specifying the column delimiter your file actually uses.
| null | CC BY-SA 2.5 | null | 2011-04-02T07:11:25.833 | 2011-04-02T07:24:39.017 | 2011-04-02T07:24:39.017 | 601 | 601 | null |
9078 | 2 | null | 9076 | 3 | null | Use the Durbin-Levinson algorithm. It will be explained in any good time series book such as [Brockwell and Davis](http://rads.stackoverflow.com/amzn/click/0387953515). Here are some online explanations:
- http://www.stat.ufl.edu/~berg/sta4853/files/sta4853-4.pdf
- http://amath.colorado.edu/courses/4540/2008Spr/HandOuts/DurbinLevinson.pdf
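A bare-bones sketch of the Durbin–Levinson recursion for the PACF (taking the autocorrelations as given; the function and variable names are my own, and `nlags >= 1` is assumed):

```python
def pacf_durbin_levinson(acf, nlags):
    # acf[k] is the autocorrelation at lag k, with acf[0] == 1;
    # requires len(acf) > nlags and nlags >= 1.
    # Returns [1.0, pacf(1), ..., pacf(nlags)].
    phi = [[0.0] * (nlags + 1) for _ in range(nlags + 1)]
    pacf = [1.0]
    phi[1][1] = acf[1]
    pacf.append(acf[1])
    for k in range(2, nlags + 1):
        num = acf[k] - sum(phi[k - 1][j] * acf[k - j] for j in range(1, k))
        den = 1.0 - sum(phi[k - 1][j] * acf[j] for j in range(1, k))
        phi[k][k] = num / den
        # update the remaining coefficients at order k
        for j in range(1, k):
            phi[k][j] = phi[k - 1][j] - phi[k][k] * phi[k - 1][k - j]
        pacf.append(phi[k][k])
    return pacf
```

As a sanity check, an AR(1) process with $\rho_k = \phi^k$ should give PACF $\phi$ at lag 1 and 0 at all higher lags.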
| null | CC BY-SA 2.5 | null | 2011-04-02T07:12:52.580 | 2011-04-02T07:12:52.580 | null | null | 159 | null |
9079 | 2 | null | 9068 | 6 | null | I agree with sesqu that the correlation is undefined in this case. Depending on your type of application you could e.g. calculate the Gower Similarity between both vectors, which is:
$gower(v1,v2)=\frac{\sum_{i=1}^{n}\delta(v1_i,v2_i)}{n}$ where $\delta$ represents the [kronecker-delta](http://en.wikipedia.org/wiki/Kronecker_delta), applied as function on $v1,v2$.
So, for instance, if all values are equal, gower(.,.)=1. If, on the other hand, they differ in only one of $n=10$ dimensions, gower(.,.)=0.9. If they differ in every dimension, gower(.,.)=0, and so on.
Of course this is no measure of correlation, but it allows you to calculate how close the vector with s>0 is to the one with s=0. You can apply other metrics, too, if they serve your purpose better.
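A literal translation of this matching-based similarity (here in Python; note this is the all-categorical special case — the full Gower coefficient for mixed-type data is more involved):

```python
def gower_match(v1, v2):
    # Fraction of positions where the two vectors agree
    # (the Kronecker-delta form given above).
    assert len(v1) == len(v2) and len(v1) > 0
    matches = sum(1 for a, b in zip(v1, v2) if a == b)
    return matches / len(v1)
```

For the vectors from the question, `gower_match([2, 2, 2, 3, 2], [2, 2, 2, 2, 2])` gives 0.8.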
| null | CC BY-SA 2.5 | null | 2011-04-02T08:31:46.503 | 2011-04-02T08:31:46.503 | null | null | 264 | null |
9080 | 1 | null | null | 1 | 304 | The objective of this research was to investigate the long-term effects of irrigation with treated waste water on some chemical soil properties.
The investigation was carried out by comparison of soil properties in two different fields: one irrigated with the effluent from Parkan Waste water Treatment Plant over a period of six years, and the other one irrigated with water over the same period of time. Soil samples were taken from different depths of 0-15, 15-30, 30-60, 60-100 and 100-150 cm in both fields, and analyzed for various chemical properties.
For visual summaries, I am going to plot depths of soil and element (with line and bar plots). However, I also need to fit a linear model with one categorical and one continuous predictor. How can this be done in R?
Thanks for your answer!
| How to do a linear model in R? | CC BY-SA 2.5 | null | 2011-04-02T09:09:33.990 | 2011-04-02T12:45:38.977 | 2011-04-02T10:36:18.717 | 930 | 3996 | [
"r",
"regression"
] |
9081 | 2 | null | 9080 | 5 | null | The syntax is actually absurdly simple:
```
mymodel <- lm(dependentvariable ~ continuousvariable + categoricalvariable,
              data = yourdata)
```
You can then call `summary()` to get the coefficients, and `plot()` to examine the residuals.
HTH.
| null | CC BY-SA 2.5 | null | 2011-04-02T09:16:40.097 | 2011-04-02T10:37:25.003 | 2011-04-02T10:37:25.003 | 930 | 656 | null |
9082 | 2 | null | 9074 | 9 | null | Generally speaking, you should always find useful pointers by looking at the relevant CRAN TAsk Views, in this case the one that deals with [Cluster](http://cran.r-project.org/web/views/Cluster.html) packages, or maybe [Quick-R](http://www.statmethods.net/).
It's not clear to me whether the link you gave referenced standard clustering techniques for $n$ (individuals) by $k$ (variables) matrix of measures where we impose constraints on the resulting heatmap displays, or two-mode clustering or [biclustering](http://en.wikipedia.org/wiki/Biclustering).
In the first approach, we could, for example,
- compute a measure of (dis)similarity between individuals, or correlation between variables, and show the resulting $n\times n$ or $k\times k$ matrix where rows and columns are rearranged by some kind of partitioning or ordering technique -- this help highlighting possible substructures in the association matrix, and you will find more information in this related question;
- compute the correlation between two blocks of data observed on the same individuals, and reorder the pattern of correlations following an external ordination technique (e.g., hierarchical clustering) -- it amounts to show a heatmap of the observed statistics reordered by rows and columns.
As proposed in an [earlier response](https://stats.stackexchange.com/questions/6890/plotting-a-heatmap-given-a-dendrogram-and-a-distance-matrix-in-r/6893#6893), the latter is readily available in the `cim()` function from the [mixOmics](http://cran.r-project.org/web/packages/mixOmics/index.html) package. From the on-line help, we can end up with something like that:

Please, note that this is just a two-step process to conveniently display summary measures of association: clustering of rows (individuals or variables) and columns (individuals or variables) is done separately.
In the second approach (biclustering), which I'm inclined to favour, I know of only one R package, [biclust](http://cran.r-project.org/web/packages/biclust/index.html), which is largely inspired by research in bioinformatics. Some pointers were also given in an [earlier thread](https://stats.stackexchange.com/questions/7419/getting-started-with-biclustering). (But there are even some papers in the psychometrics literature.) In this case, we need to put some constraints during clustering because we want to cluster both individuals and variables at the same time.
Again, you can display the resulting structure as heatmaps (see `help(heatmapBC)`), as shown below

| null | CC BY-SA 2.5 | null | 2011-04-02T10:20:17.737 | 2011-04-02T10:27:54.797 | 2017-04-13T12:44:21.160 | -1 | 930 | null |
9083 | 2 | null | 9080 | 2 | null | First you have to enter your data into R, see [this class note](http://www.ats.ucla.edu/stat/r/notes/entering.htm). You can follow the steps of [this tutorial](http://data.princeton.edu/R/linearModels.html) in the analysis, section 4.4 has a very similar example. In visualization you could do something similar as the `qplot(wt, mpg, data = mtcars, colour = factor(cyl))` example of [this tutorial](http://had.co.nz/ggplot2/geom_point.html).
| null | CC BY-SA 2.5 | null | 2011-04-02T11:24:37.767 | 2011-04-02T12:45:38.977 | 2011-04-02T12:45:38.977 | 3911 | 3911 | null |
9084 | 2 | null | 9064 | 9 | null |
- Basically anything that is not separable with a line (ok, hyperplane), for instance 2D data like this:
The kernel trick will effectively project such data into a (higher-dimensional) space in which linear separation is possible; see this movie for the effect of a Gaussian kernel on similar data.
- Look for a kernel argument in your svm function ;-) Note that using a kernel usually introduces new parameters to the outer optimization.
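As a toy illustration of the projection idea (an assumed setup in plain Python, not tied to any particular SVM library): points inside a circle versus points in a surrounding ring are not linearly separable in $(x, y)$, but under the quadratic feature map implied by the polynomial kernel $k(u, v) = (u \cdot v)^2$ they are separated by a plane.

```python
import math
import random

random.seed(0)

def sample(label, n):
    # label +1: inside radius 0.8; label -1: ring between radii 1.2 and 2.0
    pts = []
    while len(pts) < n:
        x, y = random.uniform(-2, 2), random.uniform(-2, 2)
        r = math.hypot(x, y)
        if (label == 1 and r < 0.8) or (label == -1 and 1.2 < r < 2.0):
            pts.append((x, y, label))
    return pts

data = sample(1, 50) + sample(-1, 50)

# Feature map phi(x, y) = (x^2, y^2, sqrt(2)*x*y): the plane z1 + z2 = 1
# in feature space corresponds to the circle x^2 + y^2 = 1 in input space,
# so a linear rule in feature space separates the two classes perfectly.
def separated_by_plane(data):
    return all((x * x + y * y < 1.0) == (label == 1) for x, y, label in data)
```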
| null | CC BY-SA 2.5 | null | 2011-04-02T13:21:19.140 | 2011-04-02T13:21:19.140 | null | null | null | null |
9085 | 1 | 9146 | null | 20 | 3156 | In my attempts to fight spreadsheet mayhem, I am often evangelical in pushing for more robust tools such as true statistics software (R, Stata, and the like). Recently, I was challenged on this view by someone who stated flat out that they simply will not learn to program. I would like to provide them with data analysis tools that require no programming (but ideally which would extend to programming if they decide to dip a toe into the water later). What packages are out there for data exploration that I can recommend with a straight face?
| Software for easy-yet-robust data exploration | CC BY-SA 2.5 | null | 2011-04-02T13:32:36.553 | 2015-07-06T00:51:14.617 | null | null | 3488 | [
"data-visualization",
"software"
] |
9086 | 2 | null | 9085 | 3 | null | A new software system that looks promising for this purpose is [Deducer](http://www.r-bloggers.com/r-ready-to-deduce-you/), built on top of R. Unfortunately, being new, I suspect it does not yet cover the breadth of questions that people might ask, but it does meet the toe-in-the-water criterion of leading people towards a true package should they so decide later.
I've also used JMP in the past, which had a nice interactivity to it. I am worried that some of the interface might be too complicated for these purposes. And it's non-free, which makes it harder for potential spreadsheet refugees to try out on a whim.
---
There's also [Rattle](http://datamining.togaware.com/survivor/) which looks somewhat promising.
| null | CC BY-SA 3.0 | null | 2011-04-02T13:35:47.410 | 2012-07-29T00:03:30.923 | 2012-07-29T00:03:30.923 | 3488 | 3488 | null |
9087 | 1 | 9091 | null | 7 | 277 | Let $X = N(0,\frac{1}{\alpha})$, $Y = 2X + 8 + N_{y}$, and $N_{y}$ be a noise $N_{y} = N(0,1)$. Then, $P(y|x) = \frac{1}{\sqrt{2\pi}}exp\{ -\frac{1}{2}(y - 2x - 8)^{2} \}$
and $P(x) = \sqrt{\frac{\alpha}{2\pi}}exp\{-\frac{\alpha x^{2}}{2}\} $.
The mean vector is:
$$\mathbf{\mu} = \left( \begin{array}{c}
\mu_{x}\\
\mu_{y}\end{array} \right)= \left( \begin{array}{c}
0\\
8\end{array} \right).$$
The question is how to calculate the variance of Y.
I know that the correct answer is
$$\frac{4}{\alpha} + 1, $$
but don't know how to get from
$$var(Y) = E[(Y-\mu_{y})^{2}] = E[(2X+N_{y})^{2}] $$
to
$$\frac{4}{\alpha} + 1. $$
Can anybody help?
UPDATE:
Thank you all for the answers.
| Inference with Gaussian Random Variable | CC BY-SA 2.5 | null | 2011-04-02T13:49:55.227 | 2011-04-29T00:42:48.390 | 2011-04-29T00:42:48.390 | 3911 | 1371 | [
"normal-distribution",
"variance",
"random-variable",
"inference"
] |
9088 | 2 | null | 9067 | 3 | null | Empty columns contain `NA`s only or `""`s only, so they have no variability. This code removes all columns without variability (which is probably a plus in this case).
```
d=data.frame(r=seq(1, 5), a=rep('a', 5), n=rep(NA, 5), n1=c(NA, NA, 3, 3, 3))
homogenous = apply(d, 2, function(var) length(unique(var)) == 1)
d[, !homogenous]
```
| null | CC BY-SA 2.5 | null | 2011-04-02T14:05:43.993 | 2011-04-02T14:05:43.993 | null | null | 3911 | null |
9089 | 2 | null | 9087 | 6 | null | The solution to this homework is a straightforward application of simple algebra and the independence of $X$ and $N_y$: $\mathbb{E} (2 X + N_y)^2 = 4 \mathbb{E} X^2 + 4 \mathbb{E} X \mathbb{E} N_y + \mathbb{E} N_y^2 = 4 Var X + 0 + Var N_y = \frac{4}{\alpha} + 1$.
| null | CC BY-SA 2.5 | null | 2011-04-02T14:05:58.993 | 2011-04-02T16:02:38.683 | 2011-04-02T16:02:38.683 | 2645 | 2645 | null |
9090 | 2 | null | 9068 | 9 | null | The "sampling theory" people will tell you that no such estimate exists. But you can get one, you just need to be reasonable about your prior information, and do a lot harder mathematical work.
If you specified a Bayesian method of estimation, and the posterior is the same as the prior, then you can say the data say nothing about the parameter. Because things may get "singular" on us, we cannot use infinite parameter spaces. I am assuming that because you use Pearson correlation, you have a bivariate normal likelihood:
$$p(D|\mu_x,\mu_y,\sigma_x,\sigma_y,\rho)=\left(\sigma_x\sigma_y\sqrt{2\pi(1-\rho^2)}\right)^{-N}exp\left(-\frac{\sum_{i}Q_i}{2(1-\rho^2)}\right)$$
where
$$Q_i=\frac{(x_i-\mu_x)^2}{\sigma_x^2}+\frac{(y_i-\mu_y)^2}{\sigma_y^2}-2\rho\frac{(x_i-\mu_x)(y_i-\mu_y)}{\sigma_x\sigma_y}$$
Now to indicate that one data set may be the same value, write $y_i=y$, and then we get:
$$\sum_{i}Q_i=N\left[\frac{(y-\mu_y)^2}{\sigma_y^2}+\frac{s_x^2 + (\overline{x}-\mu_x)^2}{\sigma_x^2}-2\rho\frac{(\overline{x}-\mu_x)(y-\mu_y)}{\sigma_x\sigma_y}\right]$$
where
$$s_x^2=\frac{1}{N}\sum_{i}(x_i-\overline{x})^2$$
And so your likelihood depends on four numbers, $s_x^2,y,\overline{x},N$. So you want an estimate of $\rho$, so you need to multiply by a prior, and integrate out the nuisance parameters $\mu_x,\mu_y,\sigma_x,\sigma_y$. Now to prepare for integration, we "complete the square"
$$\frac{\sum_{i}Q_i}{1-\rho^2}=N\left[\frac{\left(\mu_y-\left[y-(\overline{x}-\mu_x)\frac{\rho\sigma_y}{\sigma_x}\right]\right)^2}{\sigma_y^2(1-\rho^{2})}+\frac{s_x^2}{\sigma_{x}^{2}(1-\rho^{2})} + \frac{(\overline{x}-\mu_x)^2}{\sigma_x^2}\right]$$
Now we should err on the side of caution and ensure a properly normalised probability. That way we can't get into trouble. One such option is to use a weakly informative prior, which just places restrictions on the range of each. So we have $L_{\mu}<\mu_x,\mu_y<U_{\mu}$ for the means with a flat prior and $L_{\sigma}<\sigma_x,\sigma_y<U_{\sigma}$ for the standard deviations with a Jeffreys prior. These limits are easy to set with a bit of "common sense" thinking about the problem. I will take an unspecified prior for $\rho$, and so we get (uniform should work ok; if not, truncate the singularity at $\pm 1$):
$$p(\rho,\mu_x,\mu_y,\sigma_x,\sigma_y)=\frac{p(\rho)}{A\sigma_x\sigma_y}$$
Where $A=2(U_{\mu}-L_{\mu})^{2}[log(U_{\sigma})-log(L_{\sigma})]^{2}$. This gives a posterior of:
$$p(\rho|D)=\int p(\rho,\mu_x,\mu_y,\sigma_x,\sigma_y)p(D|\mu_x,\mu_y,\sigma_x,\sigma_y,\rho)d\mu_y d\mu_x d\sigma_x d\sigma_y$$
$$=\frac{p(\rho)}{A[2\pi(1-\rho^2)]^{\frac{N}{2}}}\int_{L_{\sigma}}^{U_{\sigma}}\int_{L_{\sigma}}^{U_{\sigma}}\left(\sigma_x\sigma_y\right)^{-N-1}exp\left(-\frac{N s_x^2}{2\sigma_{x}^{2}(1-\rho^{2})}\right) \times$$
$$\int_{L_{\mu}}^{U_{\mu}}exp\left(-\frac{N(\overline{x}-\mu_x)^2}{2\sigma_x^2}\right)\int_{L_{\mu}}^{U_{\mu}}exp\left(-\frac{N\left(\mu_y-\left[y-(\overline{x}-\mu_x)\frac{\rho\sigma_y}{\sigma_x}\right]\right)^2}{2\sigma_y^2(1-\rho^{2})}\right)d\mu_y d\mu_x d\sigma_x d\sigma_y$$
Now the first integration over $\mu_y$ can be done by making a change of variables $z=\sqrt{N}\frac{\mu_y-\left[y-(\overline{x}-\mu_x)\frac{\rho\sigma_y}{\sigma_x}\right]}{\sigma_y\sqrt{1-\rho^{2}}}\implies dz=\frac{\sqrt{N}}{\sigma_y\sqrt{1-\rho^{2}}}d\mu_y$ and the first integral over $\mu_y$ becomes:
$$\frac{\sigma_y\sqrt{2\pi(1-\rho^{2})}}{\sqrt{N}}\left[\Phi\left(
\frac{U_{\mu}-\left[y-(\overline{x}-\mu_x)\frac{\rho\sigma_y}{\sigma_x}\right]}{\frac{\sigma_y}{\sqrt{N}}\sqrt{1-\rho^{2}}}
\right)-\Phi\left(
\frac{L_{\mu}-\left[y-(\overline{x}-\mu_x)\frac{\rho\sigma_y}{\sigma_x}\right]}{\frac{\sigma_y}{\sqrt{N}}\sqrt{1-\rho^{2}}}
\right)\right]$$
And you can see from here, no analytic solutions are possible. However, it is also worthwhile to note that the value $\rho$ has not dropped out of the equations. This means that the data and prior information still have something to say about the true correlation. If the data said nothing about the correlation, then we would be simply left with $p(\rho)$ as the only function of $\rho$ in these equations.
It also shows how passing to the limit of infinite bounds for $\mu_y$ "throws away" some of the information about $\rho$, which is contained in the complicated looking normal CDF function $\Phi(.)$. Now if you have a lot of data, then passing to the limit is fine, you don't lose much, but if you have very scarce information, such as in your case - it is important to keep every scrap you have. It means ugly maths, but this example is not too hard to do numerically. So we can evaluate the integrated likelihood for $\rho$ at values of say $-0.99,-0.98,\dots,0.98,0.99$ fairly easily. Just replace the integrals by summations over small enough intervals - so you have a triple summation
| null | CC BY-SA 2.5 | null | 2011-04-02T14:06:14.687 | 2011-04-02T14:06:14.687 | null | null | 2392 | null |
9091 | 2 | null | 9087 | 8 | null | The law of iterated expectations can help here. We have:
$$Var[Y]=E(Var[Y|X])+Var[E(Y|X)]$$
Now conditional on $X$ the expected value of $Y$ is $2X+8$, and its variance is $1$. So we have:
$$Var[Y]=E(1)+Var[2X+8]=1+4 Var[X]=1+\frac{4}{\alpha}$$
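A quick Monte Carlo sanity check of this result (with $\alpha = 2$ chosen arbitrarily, so the theory says $Var[Y] = 1 + 4/2 = 3$ and $E[Y] = 8$):

```python
import random

random.seed(42)

alpha = 2.0
n = 200_000
ys = []
for _ in range(n):
    x = random.gauss(0, (1 / alpha) ** 0.5)   # X ~ N(0, 1/alpha)
    noise = random.gauss(0, 1)                # N_y ~ N(0, 1)
    ys.append(2 * x + 8 + noise)

mean_y = sum(ys) / n
var_y = sum((v - mean_y) ** 2 for v in ys) / (n - 1)
# Theory: Var[Y] = 4/alpha + 1 = 3, E[Y] = 8
```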
| null | CC BY-SA 2.5 | null | 2011-04-02T14:25:22.247 | 2011-04-02T14:25:22.247 | null | null | 2392 | null |
9092 | 2 | null | 9068 | 0 | null | This question is coming from programmers, so I'd suggest plugging in zero. There's no evidence of a correlation, and the null hypothesis would be zero (no correlation). There might be other context knowledge that would provide a "typical" correlation in one context, but the code might be re-used in another context.
| null | CC BY-SA 2.5 | null | 2011-04-02T15:04:59.187 | 2011-04-02T15:04:59.187 | null | null | 3919 | null |
9093 | 2 | null | 9085 | 8 | null | Some people think of programming as simply entering a command line statement. At that point then perhaps you are a bit lost in encouraging them. However, if they are using spreadsheets already then they already have to enter formulas. These are akin to command line statements. If they really mean they don't want to do any programming in the sense of logical and automated analysis then you can tell them that they can still do the analyses in R or Stata without any programming at all.
If they can do their stats in the spreadsheet... all that they want to do... then all of the statistical analyses they wish to accomplish can be done without 'programming' in R or Stata as well. They could arrange and organize the data in the spreadsheet and then just export it as text. Then the analysis is carried out without any programming at all.
That's how I do intro to R sometimes. No programming is required to do the data analysis you could do in a spreadsheet.
If you get them hooked that way then just reel the fish in slowly... :) In a couple of years compliment them on what a good programmer they've become.
You might also want to show [this](http://www.burns-stat.com/pages/Tutor/spreadsheet_addiction.html) document to your colleagues or at least read it yourself to better make your points.
| null | CC BY-SA 2.5 | null | 2011-04-02T15:11:16.043 | 2011-04-04T17:09:56.883 | 2011-04-04T17:09:56.883 | 601 | 601 | null |
9094 | 2 | null | 9085 | 8 | null | As far as exploratory (possibly interactive) data analysis is concerned, I would suggest to take a look at:
- Weka, originally targets data-mining applications, but can be used for data summaries.
- Mondrian, for interactive data visualization.
- KNIME, which relies on the idea of building data flows and is compatible with Weka and R.
All three accept data in `arff` or `csv` format.
In my view, Stata does not require so much programming expertise. This is even part of its attractiveness, in fact: Most of basic analysis can be done by point-and-click user actions, with dialog boxes for customizing specific parameters, say, for prediction in a linear model. The same applies, albeit to a lesser extent, to R when you use external GUIs like [Rcmdr](http://socserv.mcmaster.ca/jfox/Misc/Rcmdr/), Deducer, etc. as said by @gsk3.
| null | CC BY-SA 2.5 | null | 2011-04-02T15:11:34.827 | 2011-04-02T15:11:34.827 | null | null | 930 | null |
9096 | 2 | null | 9062 | 5 | null | Linear or monotonic trend tests--$M^2$ association measure, WMW test cited by @GaBorgulya, or the Cochran-Armitage trend test--can also be used, and they are well explained in Agresti ([CDA](http://www.stat.ufl.edu/~aa/cda/cda.html), 2002, §3.4.6, p. 90).
The latter is actually equivalent to a score test for testing $H_0:\; \beta = 0$ in a logistic regression model, but it can be computed from the $M^2$ statistic, defined as $(n-1)r^2$ ($\sim\chi^2(1)$ for large sample), where $r$ is the sample correlation coefficient between the two variables (the ordinal measure being recoded as numerical scores), by replacing $n-1$ with $n$ (ibid., p. 182). It is easy to compute in any statistical software, but you can also use the [coin](http://cran.r-project.org/web/packages/coin/index.html) package in R (I provided an example of use for a [related question](https://stats.stackexchange.com/questions/8774/what-is-the-difference-between-independence-test-in-r-and-cott/8979#8979)).
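For concreteness, here is a sketch of how $M^2$ can be computed from a contingency table with chosen scores (my own illustration, not from Agresti; the function name and the 0/1 integer scores in the example are arbitrary choices, and both margins are assumed nondegenerate):

```python
import math

def m2_trend_statistic(table, row_scores, col_scores):
    # M^2 = (n - 1) * r^2, where r is the sample correlation between the
    # row and column scores, weighted by the cell counts.
    n = sum(sum(row) for row in table)
    sx = sy = sxx = syy = sxy = 0.0
    for i, row in enumerate(table):
        for j, count in enumerate(row):
            x, y = row_scores[i], col_scores[j]
            sx += count * x
            sy += count * y
            sxx += count * x * x
            syy += count * y * y
            sxy += count * x * y
    r = (sxy - sx * sy / n) / math.sqrt((sxx - sx * sx / n) * (syy - sy * sy / n))
    return (n - 1) * r * r
```

For a perfectly concordant 2x2 table with 20 observations, $r = 1$ and $M^2 = 19$; for a table with no association, $M^2 = 0$.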
Sidenote
If you are using R, you will find useful resources in either Laura Thompson's [R (and S-PLUS) Manual to Accompany Agresti’s Categorical Data Analysis (2002)](https://home.comcast.net/~lthompson221/Splusdiscrete2.pdf), which shows how to replicate Agresti's results with R, or the [gnm](http://cran.r-project.org/web/packages/gnm/index.html) package (and its companion packages, [vcd](http://cran.r-project.org/web/packages/vcd/index.html) and [vcdExtra](http://cran.r-project.org/web/packages/vcdExtra/index.html)) which allows to fit row-column association models (see the vignette, [Generalized nonlinear models in R: An overview of the gnm package](http://cran.r-project.org/web/packages/gnm/vignettes/gnmOverview.pdf)).
| null | CC BY-SA 2.5 | null | 2011-04-02T16:46:40.420 | 2011-04-02T16:46:40.420 | 2017-04-13T12:44:46.083 | -1 | 930 | null |
9097 | 2 | null | 5690 | 6 | null | One thing to keep in mind with the Kaplan-Meier survival curve is that it is basically descriptive and not inferential. It is just a function of the data, with an incredibly flexible model lying behind it. This is a strength because it means there are virtually no assumptions that might be broken, but a weakness because it is hard to generalise and because it fits "noise" as well as "signal". If you want to make an inference, then you basically have to introduce something that is unknown that you wish to know.
Now one way to compare the median survival times is to make the following assumptions:
- I have an estimate of the median survival time $t_{i}$ for each of the $i$ states, given by the kaplan meier curve.
- I expect the true median survival time, $T_{i}$ to be equal to this estimate. $E(T_{i}|t_{i})=t_{i}$
- I am 100% certain that the true median survival time is positive. $Pr(T_{i}>0)=1$
Now the "most conservative" way to use these assumptions is the principle of maximum entropy, so you get:
$$p(T_{i}|t_{i})= K exp(-\lambda T_{i})$$
Where $K$ and $\lambda$ are chosen such that the PDF is normalised, and the expected value is $t_{i}$. Now we have:
$$1=\int_{0}^{\infty}p(T_{i}|t_{i})dT_{i}
=K \int_{0}^{\infty}exp(-\lambda T_{i})dT_{i}
$$
$$=K \left[-\frac{exp(-\lambda T_{i})}{\lambda}\right]_{T_{i}=0}^{T_{i}=\infty}=\frac{K}{\lambda}\implies K=\lambda
$$
and now we have $E(T_{i})=\frac{1}{\lambda}\implies \lambda=t_{i}^{-1}$
And so you have a set of probability distributions for each state.
$$p(T_{i}|t_{i})= \frac{1}{t_{i}} exp\left(-\frac{T_{i}}{t_{i}}\right)\;\;\;\;\;(i=1,\dots,N)$$
Which give a joint probability distribution of:
$$p(T_{1},T_{2},\dots,T_{N}|t_{1},t_{2},\dots,t_{N})= \prod_{i=1}^{N}\frac{1}{t_{i}} exp\left(-\frac{T_{i}}{t_{i}}\right)$$
Now it sounds like you want to test the hypothesis $H_{0}:T_{1}=T_{2}=\dots=T_{N}=\overline{t}$, where $\overline{t}=\frac{1}{N}\sum_{i=1}^{N}t_{i}$ is the mean median survivial time. The severe alternative hypothesis to test against is the "every state is a unique and beautiful snowflake" hypothesis $H_{A}:T_{1}=t_{1},\dots,T_{N}=t_{N}$ because this is the most likely alternative, and thus represents the information lost in moving to the simpler hypothesis (a "minimax" test). The measure of the evidence against the simpler hypothesis is given by the odds ratio:
$$O(H_{A}|H_{0})=\frac{p(T_{1}=t_{1},T_{2}=t_{2},\dots,T_{N}=t_{N}|t_{1},t_{2},\dots,t_{N})}{
p(T_{1}=\overline{t},T_{2}=\overline{t},\dots,T_{N}=\overline{t}|t_{1},t_{2},\dots,t_{N})}$$
$$=\frac{
\left[\prod_{i=1}^{N}\frac{1}{t_{i}}\right] exp\left(-\sum_{i=1}^{N}\frac{t_{i}}{t_{i}}\right)
}{
\left[\prod_{i=1}^{N}\frac{1}{t_{i}}\right] exp\left(-\sum_{i=1}^{N}\frac{\overline{t}}{t_{i}}\right)
}
=exp\left(N\left[\frac{\overline{t}}{t_{harm}}-1\right]\right)$$
Where
$$t_{harm}=\left[\frac{1}{N}\sum_{i=1}^{N}t_{i}^{-1}\right]^{-1}\leq \overline{t}$$
is the harmonic mean. Note that the odds will always favour the perfect fit, but not by much if the median survival times are reasonably close. Further, this gives you a direct way to state the evidence of this particular hypothesis test:
assumptions 1-3 give maximum odds of $O(H_{A}|H_{0}):1$ against equal median survival times across all states
Combine this with a decision rule, loss function, utility function, etc. which says how advantageous it is to accept the simpler hypothesis, and you've got your conclusion!
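Expressed in code, this final odds ratio is a one-liner given the estimated state medians (a sketch; it assumes all medians are strictly positive, as assumption 3 requires):

```python
import math

def odds_against_equal_medians(medians):
    # O(H_A | H_0) = exp(N * (arithmetic_mean / harmonic_mean - 1)),
    # which is 1 when all medians are equal and grows as they spread out.
    n = len(medians)
    arith = sum(medians) / n
    harm = n / sum(1.0 / t for t in medians)
    return math.exp(n * (arith / harm - 1.0))
```

Equal medians give odds of exactly 1; for example, medians of 1 and 4 give $\exp(2(2.5/1.6 - 1)) = \exp(1.125) \approx 3.08$ against the "all equal" hypothesis.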
There is no limit to the number of hypotheses you can test, and give similar odds for. Just change $H_{0}$ to specify a different set of possible "true values". You could do "significance testing" by choosing the hypothesis as:
$$H_{S,i}:T_{i}=t_{i},T_{j}=T=\overline{t}_{(i)}=\frac{1}{N-1}\sum_{j\neq i}t_{j}$$
So this hypothesis is verbally "state $i$ has a different median survival rate, but all other states are the same". Then re-do the odds ratio calculation I did above. You should be careful, though, about what the alternative hypothesis is. Any one of the options below is "reasonable" in the sense that they might be questions you are interested in answering (and they will generally have different answers):
- my $H_{A}$ defined above - how much worse is $H_{S,i}$ compared to the perfect fit?
- my $H_{0}$ defined above - how much better is $H_{S,i}$ compared to the average fit?
- a different $H_{S,k}$ - how much is state $k$ "more different" compared to state $i$?
Now one thing which has been overlooked here is correlations between states - this structure assumes that knowing the median survival rate in one state tells you nothing about the median survival rate in another state. While this may seem "bad", it is not too difficult to improve on, and the above calculations are good initial results which are easy to calculate.
Adding connections between states will change the probability models, and you will effectively see some "pooling" of the median survival times. One way to incorporate correlations into the analysis is to separate the true survival times into two components, a "common part" or "trend" and an "individual part":
$$T_{i}=T+U_{i}$$
And then constrain the individual part $U_{i}$ to have average zero over all units and unknown variance $\sigma$ to be integrated out, using a prior describing what knowledge you have of the individual variability prior to observing the data (or a Jeffreys prior if you know nothing, and a half-Cauchy if the Jeffreys prior causes problems).
| null | CC BY-SA 2.5 | null | 2011-04-02T16:47:55.793 | 2011-04-02T23:46:01.627 | 2011-04-02T23:46:01.627 | 2392 | 2392 | null |
9098 | 2 | null | 5690 | 3 | null | First I would visualize the data: calculate confidence intervals and standard errors for the median survivals in each state and show CIs on a forest plot, medians and their SEs using a funnel plot.
The “mean median survival all across the country” is a quantity that is estimated from the data and thus has uncertainty, so you cannot take it as a sharp reference value during significance testing. Another difficulty with the mean-of-all approach is that when you compare a state median to it you are comparing the median to a quantity that already includes that median as a component. So it is easier to compare each state to all other states combined. This can be done by performing a log-rank test (or one of its alternatives) for each state. (Edit after reading the answer of probabilityislogic: the log-rank test does compare survival in two (or more) groups, but it is not strictly the median that it compares. If you are sure it is the median that you want to compare, you may rely on his equations or use resampling here, too.)
You labelled your question [multiple comparisons], so I assume you also want to adjust (increase) your p values in a way that if you see at least one adjusted p value less than 5% you could conclude that “median survival across states is not equal” at the 5% significance level. You may use generic and overly conservative methods like Bonferroni, but the optimal correction scheme will take the correlations of the p values into consideration. I assume that you don't want to build any a priori knowledge into the correction scheme, so I will discuss a scheme where the adjustment is multiplying each p value by the same C constant.
As I don't know how to derive the formula for the optimal C multiplier, I would use [resampling](http://en.wikipedia.org/wiki/Resampling_%28statistics%29). Under the null hypothesis the survival characteristics are the same across all states, so you can permute the state labels of the cancer cases and recalculate the medians. After obtaining many resampled vectors of state p values, I would numerically find the C multiplier below which fewer than 95% of the vectors include no significant p values and above which more than 95% do. While the range remains wide, I would repeatedly increase the number of resamples by an order of magnitude.
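One concrete way to implement the label-permutation step (made-up data; a max-statistic variant, sketched in Python for illustration, with the per-state statistic simplified to the absolute gap between a state's median survival and the overall median):

```python
import random

random.seed(1)

# Made-up data: survival times with state labels
states = ["A"] * 40 + ["B"] * 40 + ["C"] * 40
times = [random.expovariate(1 / 20.0) for _ in states]

def state_stats(labels):
    """Absolute gap between each state's (crude) median and the overall median."""
    overall = sorted(times)[len(times) // 2]
    gaps = {}
    for s in set(labels):
        grp = sorted(t for t, l in zip(times, labels) if l == s)
        gaps[s] = abs(grp[len(grp) // 2] - overall)
    return gaps

observed = state_stats(states)

# Permutation distribution of the *maximum* gap: permuting the labels
# enforces the null that state membership carries no survival information
n_perm, max_gaps = 2000, []
for _ in range(n_perm):
    perm = states[:]
    random.shuffle(perm)
    max_gaps.append(max(state_stats(perm).values()))

# Family-wise adjusted p value for each state
adj_p = {s: sum(g >= obs for g in max_gaps) / n_perm
         for s, obs in observed.items()}
print(adj_p)
```

This max-statistic adjustment automatically respects the correlation between the per-state p values, which is what a single C multiplier approximates.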
| null | CC BY-SA 2.5 | null | 2011-04-02T17:07:50.450 | 2011-04-03T01:21:43.877 | 2011-04-03T01:21:43.877 | 3911 | 3911 | null |
9099 | 1 | 52350 | null | 5 | 4778 | Diallel Analysis using the Griffing and Hayman approach is so common in plant breeding and genetics. I'm wondering if someone can share R worked example on Diallel Analysis. Is there any good referenced book which covered worked examples? Thanks
References:
Griffing B (1956) Concept of general and specific combining ability in relation to diallel crossing systems. Aust J Biol Sci 9:463-493 [[pdf](http://www.publish.csiro.au/?act=view_file&file_id=BI9560463.pdf)]
Hayman BI (1954) The analysis of variance of diallel tables. Biometrics 10:235-244 [[JSTOR](http://www.jstor.org/stable/3001877)]
Hayman BI (1954) The theory and analysis of diallel crosses. Genetics 39:789-809 [[pdf](http://www.genetics.org/content/39/6/789.full.pdf)]
| How to perform diallel analysis in R? | CC BY-SA 3.0 | null | 2011-04-02T17:18:59.177 | 2013-12-03T09:22:09.447 | 2011-08-21T07:02:46.460 | 5862 | 3903 | [
"r",
"experiment-design",
"genetics"
] |
9100 | 2 | null | 9053 | 4 | null | From a Bayesian perspective, I'm not so sure that cross validation does anything that a "proper" Bayesian analysis doesn't do for comparing models. But I am not 100% certain that it does.
This is because if you are comparing models in a Bayesian way, then you are essentially already doing cross validation. This is because the posterior odds of model A $M_A$ against model B $M_B$, with data $D$ and prior information $I$ has the following form:
$$\frac{P(M_A|D,I)}{P(M_B|D,I)}=\frac{P(M_A|I)}{P(M_B|I)}\times\frac{P(D|M_A,I)}{P(D|M_B,I)}$$
And $P(D|M_A,I)$ is given by:
$$P(D|M_A,I)=\int P(D,\theta_A|M_A,I)d\theta_A=\int P(\theta_A|M_A,I)P(D|M_A,\theta_A,I)d\theta_A$$
Which is called the prior predictive distribution. It basically says how well the model predicted the data that was actually observed, which is exactly what cross validation does, with the "prior" being replaced by the "training" model fitted, and the "data" being replaced by the "testing" data. So if model B predicted the data better than model A, its posterior probability increases relative to model A. It seems from this that Bayes' theorem will actually do cross validation using all the data, rather than a subset. However, I am not fully convinced of this - it seems like we get something for nothing.
Another neat feature of this method is that it has a built-in "Occam's razor", given by the ratio of normalisation constants of the prior distributions for each model.
However, cross validation seems valuable for the dreaded old "something else", or what is sometimes called "model misspecification". I am constantly torn by whether this "something else" matters or not, for it seems like it should matter - but it leaves you paralyzed with no solution at all when it apparently matters. Just something to give you a headache, but nothing you can do about it - except for thinking of what that "something else" might be, and trying it out in your model (so that it is no longer part of "something else").
And further, cross validation is a way to actually do a Bayesian analysis when the integrals above are ridiculously hard. And cross validation "makes sense" to just about anyone - it is "mechanical" rather than "mathematical". So it is easy to understand what is going on. And it also seems to get your head to focus on the important part of models - making good predictions.
| null | CC BY-SA 2.5 | null | 2011-04-02T17:40:55.537 | 2011-04-02T17:40:55.537 | null | null | 2392 | null |
9101 | 2 | null | 8567 | 1 | null | I do not know the answer, so I can only offer some thoughts hoping that someone else can throw light on the issue.
It seems to me that there is no problem in computing the likelihood. To compute the value of the AIC criterion (which may or may not make sense in this context), what would be required is the number of fitted parameters.
Here is where things become slippery. To take the simplest example, the (univariate) local level model requires fitting two parameters (the variances of the state and of the noise), but those are really best described as metaparameters. If the variance of the state is zero, you are fitting a single mean (1 parameter). If the variance of the state goes to infinity, you are effectively fitting one parameter per observation.
One way out is to define the "number of equivalent parameters" as in,
Hodges, J. S. and Sargent, D. J. (2001) Counting Degrees of Freedom in Hierarchical and Other Richly-Parameterised Models, Biometrika, 88(2), p. 367-379.
and this is what I have done in my own work (on purely heuristic grounds and with some trepidation!). See for instance,
[Pérez-Castroviejo, P. and Tusell, F. (2007) Using redundant and incomplete time series for the estimation of cost of living indices, Review of Income and Wealth, vol. 53, p. 673-691.](http://www.et.bs.ehu.es/~etptupaf/nuevo/ficheros/papiros/riw.pdf)
There are alternative ways of calculating "equivalent parameters"; if you want to follow this route I can give you some references.
| null | CC BY-SA 2.5 | null | 2011-04-02T17:53:53.967 | 2011-04-02T17:53:53.967 | null | null | 892 | null |
9102 | 2 | null | 8898 | 5 | null | As it is described in the original post, the experiment is a randomized block.
- Pathologist (4 levels) is a blocking factor; the experiment is repeated within each pathologist.
- Instrument (3 levels) and the true result (2 levels) of the test are the two treatments, which I assume were assigned randomly.
- Consider the different specimens to be replications of each treatment combination.
- The one response variable is whether the pathologist's diagnosis is correct (2 levels).
Because the result is categorical, the link function will need to be something like logit or probit. Here's some R code that does that. It may need to be extended depending on your friend's hypotheses.
```
library(lme4)
glmer(correctness ~ instrument*trueresult + (1|pathologist),
family = binomial)
```
The coefficients from a logit model can be interpreted in relation to odds ratios. For a particular combination of predictors, the model estimates an odds ratio. The individual coefficients indicate how the odds ratio changes depending on the predictors.
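For instance, a logit coefficient of 0.7 on one of the instrument dummies (a made-up number, just to illustrate the interpretation) would multiply the odds of a correct diagnosis by about 2:

```python
import math

coef = 0.7                       # hypothetical fitted logit coefficient
odds_multiplier = math.exp(coef) # change in odds of a correct diagnosis
print(odds_multiplier)           # about 2.01
```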
If your friend doesn't care about distinguishing between type I and type II error, he or she can drop the true result predictor from the model.
```
library(lme4)
glmer(correctness ~ instrument+trueresult + (1|pathologist),
family = binomial)
```
The measurements in multiple sessions may form an incomplete block design, so your friend should look into incomplete block designs if he or she is concerned about the assumption of independence among measurements.
| null | CC BY-SA 3.0 | null | 2011-04-02T19:09:55.037 | 2011-04-24T17:14:47.580 | 2011-04-24T17:14:47.580 | 3874 | 3874 | null |
9103 | 2 | null | 9067 | 0 | null | Hard for me to think of this as a huge dataset, but these things are relative ;)
To do this in Excel (a version like 2007 or 2010 which has enough columns), you could insert two rows at the top. In the first row, just have consecutive integers: 1 in column A, 2 in column B, etc. You can do this with a function [I think it's COLUMN()] and then convert it to values.
In the second row, you need some logical expression which will tell you if the column is empty. This might be as simple as +max(a3:A12002)<>0 if all variables which are non-empty contain a nonzero value. Without knowing the characteristics of your data I can't be sure that will work. +max(a3:A12002)<>0 should be true if there is data in the column, false if it isn't. Now, use sort to sort columns based on row 2, true/false. Next, delete all the false columns. Next, sort on row 1 to put the data back in the original order. Finally, delete rows 1 and 2 and save the file (probably as .CSV).
I'm writing this from memory; should be right.
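(If the file ever ends up outside Excel, the same empty-column pruning is one call in pandas, assuming blank cells load as NaN:)

```python
import pandas as pd
import numpy as np

# Toy frame with one completely empty column
df = pd.DataFrame({"a": [1, 2], "b": [np.nan, np.nan], "c": [3, 4]})

trimmed = df.dropna(axis=1, how="all")   # drops columns that are entirely empty
print(list(trimmed.columns))             # ['a', 'c']
```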
| null | CC BY-SA 2.5 | null | 2011-04-02T20:36:24.113 | 2011-04-02T20:36:24.113 | null | null | 3919 | null |
9104 | 1 | 9161 | null | 12 | 19326 | I have the following question for a course I'm working on:
>
Conduct a Monte Carlo study to estimate the coverage probabilities of
the standard normal bootstrap confidence interval and the basic bootstrap confidence
interval. Sample from a normal population and check the empirical coverage rates for the
sample mean.
Coverage probabilities for the standard normal bootstrap CI are easy:
```
n = 1000;
B = 1000;   # number of bootstrap replicates
alpha = c(0.025, 0.975);
x = rnorm(n, 0, 1);
mu = mean(x);
sqrt.n = sqrt(n);
LNorm = numeric(B);
UNorm = numeric(B);
for(j in 1:B)
{
smpl = x[sample(1:n, size = n, replace = TRUE)];
xbar = mean(smpl);
s = sd(smpl);
LNorm[j] = xbar + qnorm(alpha[1]) * (s / sqrt.n);
UNorm[j] = xbar + qnorm(alpha[2]) * (s / sqrt.n);
}
mean(LNorm < 0 & UNorm > 0); # Approximates to 0.95
# NOTE: it is not good enough to look at overall coverage
# Must compute separately for each tail
```
From what I've been taught for this course, the basic bootstrap confidence interval can be calculated like this:
```
# Using x from previous...
library(boot);
R = boot(data = x, R=1000, statistic = function(x, i){ mean(x[i]); });
result = 2 * mu - quantile(R$t, alpha, type=1);
```
That makes sense. What I don't understand is how to calculate coverage probabilities for the basic bootstrap CI. I understand that the coverage probability would represent the proportion of times that the CI contains the true value (in this case `mu`). Do I simply run the `boot` function many times?
How can I approach this question differently?
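To make the question concrete, this is the kind of outer Monte Carlo loop I have in mind (sketched in Python with a plain resampling loop standing in for `boot`; sample and replicate counts kept small for speed):

```python
import random
import statistics

random.seed(42)

def basic_bootstrap_ci(x, B=200, alpha=0.05):
    """Basic bootstrap CI for the mean: (2*xbar - q_hi, 2*xbar - q_lo)."""
    xbar = statistics.mean(x)
    boots = sorted(
        statistics.mean(random.choices(x, k=len(x))) for _ in range(B)
    )
    lo = boots[int((alpha / 2) * B)]
    hi = boots[int((1 - alpha / 2) * B) - 1]
    return 2 * xbar - hi, 2 * xbar - lo

# Outer Monte Carlo loop: a fresh N(0,1) sample each time, counting how
# often the interval covers the true mean 0
M, n, hits = 300, 30, 0
for _ in range(M):
    x = [random.gauss(0, 1) for _ in range(n)]
    L, U = basic_bootstrap_ci(x)
    hits += (L < 0 < U)
print(hits / M)   # empirical coverage, should be near 0.95
```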
| Coverage probabilities of the basic bootstrap confidence Interval | CC BY-SA 3.0 | null | 2011-04-02T20:42:02.777 | 2015-12-15T23:32:22.733 | 2015-12-15T23:32:22.733 | 4253 | 1894 | [
"r",
"confidence-interval",
"self-study",
"bootstrap",
"monte-carlo"
] |
9105 | 1 | null | null | 16 | 1288 | for the following 3 values 222,1122,45444
[WolframAlpha gives](http://www.wolframalpha.com/input/?i=skew+222%2C1122%2C45444+) 0.706
Excel, using `=SKEW(222,1122,45444)` gives 1.729
What explains the difference?
| Why do Excel and WolframAlpha give different values for skewness | CC BY-SA 2.5 | null | 2011-04-02T21:57:45.523 | 2011-11-20T04:41:40.967 | 2011-11-20T04:41:40.967 | 1381 | 276 | [
"excel",
"software",
"descriptive-statistics",
"mathematica"
] |
9106 | 2 | null | 9105 | 19 | null | They are using different methods to compute the skew. Searching in the help pages for `skewness()` within the R package `e1071` yields:
```
Joanes and Gill (1998) discuss three methods for estimating skewness:
Type 1:
g_1 = m_3 / m_2^(3/2). This is the typical definition used in many older textbooks.
Type 2:
G_1 = g_1 * sqrt(n(n-1)) / (n-2). Used in SAS and SPSS.
Type 3:
b_1 = m_3 / s^3 = g_1 ((n-1)/n)^(3/2). Used in MINITAB and BMDP.
All three skewness measures are unbiased under normality.
#Why are these numbers different?
> skewness(c(222,1122,45444), type = 2)
[1] 1.729690
> skewness(c(222,1122,45444), type = 1)
[1] 0.7061429
```
Here's a link to the paper referenced if someone has the credentials to get it for further discussion or education: [http://onlinelibrary.wiley.com/doi/10.1111/1467-9884.00122/abstract](http://onlinelibrary.wiley.com/doi/10.1111/1467-9884.00122/abstract)
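The two numbers can be reproduced directly from the Type 1 and Type 2 formulas quoted above (Python used just for the arithmetic):

```python
x = [222, 1122, 45444]
n = len(x)
mean = sum(x) / n

m2 = sum((v - mean) ** 2 for v in x) / n   # second central moment
m3 = sum((v - mean) ** 3 for v in x) / n   # third central moment

g1 = m3 / m2 ** 1.5                        # Type 1: WolframAlpha's value
G1 = g1 * (n * (n - 1)) ** 0.5 / (n - 2)   # Type 2: Excel's SKEW()
print(round(g1, 3), round(G1, 3))          # 0.706 1.73
```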
| null | CC BY-SA 2.5 | null | 2011-04-02T22:07:09.333 | 2011-04-02T22:07:09.333 | null | null | 696 | null |
9107 | 1 | 9108 | null | 4 | 281 | I would like to plot the attached [income distribution dataset](https://i.stack.imgur.com/acNhK.jpg) (rendered as an image) as an area chart.
As you can see, personal income is divided into 26 intervals of varying width. I also have the median and mean income in the intervals.
To convey a truthful area graphic of this data, I wonder what my options really are?
Plotting the ordinal categorical data at hand would yield a big hump in the area chart for the 400-499 interval. But this is only because that interval is wider and the user can hence be misguided by the shape. Another issue with the categorical data is that the average of the "1000+ interval" is very far from 1000 (= 1644). An area graphic not taking this into account would do a bad job in showing the actual distribution.
How would you go about this, and is there any way in which I can use the median/mean to "convert the categorical scale to a continuous scale"?
| Categorical or continuous scale for area chart? | CC BY-SA 2.5 | null | 2011-04-02T22:08:57.993 | 2011-04-03T12:15:46.313 | 2011-04-03T09:10:18.443 | 930 | 4003 | [
"distributions",
"data-visualization",
"data-transformation",
"categorical-data"
] |
9108 | 2 | null | 9107 | 4 | null | This is not exactly what you asked, but still may be helpful.
You may rely on the `plot.histogram` command in R. The usual use is to run a `hist` command, which prepares an object of class `histogram` and passes it to `plot.histogram`. You may prepare a customized `histogram` object yourself and plot it with `plot.histogram`.
The following code prints a `histogram` object:
```
data(cars); dput(hist(cars$dist, plot=FALSE))
```
You can make a similar object, and plot it:
```
k = structure(list(breaks = c(0, 5, 10, 15, 25, 50, 75), counts = NULL,
intensities = NULL, density = c(0.01, 0.018, 0.011, 0.006, 0.004,
0.001), mids = NULL, xname = "dist", equidist = FALSE), .Names = c("breaks",
"counts", "intensities", "density", "mids", "xname", "equidist"
), class = "histogram")
plot(k)
```

| null | CC BY-SA 2.5 | null | 2011-04-02T22:43:06.963 | 2011-04-03T12:15:46.313 | 2011-04-03T12:15:46.313 | 3911 | 3911 | null |
9109 | 1 | null | null | 2 | 1700 | I have finally been able to wrap my head around the mechanics of how to initialize and train a multivariate Gaussian mixture model using expectation maximization algorithm. So I wonder how difficult this GMM and EM task is in comparison to all other common algorithms and models in machine learning. I appreciate any feedback. Thank you.
| How difficult is it to train a gaussian mixture model compared to other models? | CC BY-SA 2.5 | null | 2011-04-02T22:51:25.520 | 2011-04-04T06:35:27.023 | 2011-04-03T09:19:38.667 | 930 | 2729 | [
"mixed-model",
"normal-distribution",
"maximum-likelihood"
] |
9110 | 2 | null | 9085 | 2 | null | Anyone who answers R, or any of it's "GUIs" didn't read the question.
There is a program specifically designed for this and it's called JMP. Yes, it's expensive, though it has a free trial, and is incredibly cheap for students or college staff (like $50 cheap).
There is also RapidMiner, which is a workflow-based GUI for data mining and statistical analysis. It's free and open source.
| null | CC BY-SA 2.5 | null | 2011-04-02T22:53:03.193 | 2011-04-02T22:53:03.193 | null | null | 74 | null |
9111 | 1 | 9112 | null | 9 | 27399 | I am assuming R has this built-in. How do I reference it?
| In R how do I reference\lookup in the cdf of standard normal distribution table? | CC BY-SA 2.5 | null | 2011-04-02T19:10:34.457 | 2011-04-08T20:42:13.073 | 2011-04-08T20:42:13.073 | 919 | 4008 | [
"r",
"normal-distribution"
] |
9112 | 2 | null | 9111 | 13 | null | The functions you are looking for are either `dnorm`, `pnorm` or `qnorm`, depending on exactly what you are looking for.
`dnorm(x)` gives the density function at `x`.
`pnorm(x)` gives the probability that a random value is less than `x`.
`qnorm(p)` is the inverse of `pnorm`, giving the value of `x` for which getting a random value less than `x` has probability `p`.
See the help page for these functions to see how to change the parameters and values.
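As a quick numeric cross-check of the same table lookups (computed here with SciPy; in R the corresponding calls are `pnorm(1.96)` and `qnorm(0.975)`):

```python
from scipy.stats import norm

print(norm.cdf(1.96))    # ~0.975: P(Z < 1.96), what pnorm(1.96) returns
print(norm.ppf(0.975))   # ~1.96:  the inverse lookup, what qnorm(0.975) returns
```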
| null | CC BY-SA 2.5 | null | 2011-04-02T20:21:03.230 | 2011-04-03T01:43:38.397 | 2011-04-03T01:43:38.397 | 72 | 72 | null |
9114 | 2 | null | 9109 | 3 | null | [this paper](http://www.uv.es/~bernardo/Kernel.pdf) is a small gem about fitting mixtures of gaussians to the data using Bayesian methods.
Big plus: no maximising algorithms are required, so it should be quick compared to the EM algorithm. The paper also has test data, so you can benchmark each method against this one if you want. And it outlines a procedure for choosing the number of mixture components.
The drawback is that it is based on a non-informative prior (so you must throw away info which isn't data), and a common variance (or kernel width).
| null | CC BY-SA 2.5 | null | 2011-04-03T01:42:41.927 | 2011-04-03T01:42:41.927 | null | null | 2392 | null |
9115 | 2 | null | 9107 | 3 | null | A histogram with a continuous scale as described by GaBorgulya is clearly the way to go. When the blocks are wider, you need to adjust the density appropriately: the block 380-399 with 42246 people should be about 1.6 times the density of the block 400-499 with 132485 people.
Except for the extremes of 0 and 1000+, you can just use the blocks you have, with the densities (number of people divided by width of block) as height. You can get even closer to the distribution by dividing each block at the medians: so for example you have 58700 in the interval "600 to 799 tkr" (i.e to just under 800), for a density of 293.5. Or you could divide this at the median of 672.6 into two blocks representing 29350 each, to have an interval from 600 to 672.6 of density 404.3 and an interval from 672.6 to 800 of density 230.4. You could go further and also take into account the means in each interval, but I don't think it is a priority.
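The density arithmetic above is just people divided by interval width; a quick check of the quoted figures (Python used only for the division):

```python
# 58700 people spread over the 600-800 interval
print(58700 / (800 - 600))               # 293.5

# split into halves of 29350 people at 672.6
print(round(29350 / (672.6 - 600), 1))   # 404.3
print(round(29350 / (800 - 672.6), 1))   # 230.4
```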
The extreme of 1000+ (23143 people, median 1281.0, mean 1644.2) is slightly harder but you can use the median to give you an interval from 1000 to 1281 with density 41.2. Now it is worth using the mean. You could for example have the top interval from 1281 to 3014.8
with density 6.7. This is not realistic as the maximum income is likely to be higher than 3014.8 and the curve is likely to be decreasing rather than flat, but it does illustrate the issue.
Illustrating the extreme at 0 is even harder. Depending on how wide you make the block, you can have a spike there as high as you like. Here is an example from [Households Below Average Income](http://research.dwp.gov.uk/asd/hbai/hbai_2009/pdf_files/full_hbai10.pdf) where they used £10 blocks. It has other design features, some of which you may find interesting, such as a cutoff at the top end and words describing how many were cut off.

| null | CC BY-SA 2.5 | null | 2011-04-03T01:51:49.713 | 2011-04-03T01:51:49.713 | null | null | 2958 | null |
9116 | 1 | 9118 | null | 0 | 4822 | First off, be warned: I am a complete stats novice. I'd like to learn, but at the moment I have an intense business problem to solve that I think (hope?) is straightforward enough that it could be answered easily. I will try to explain as simply as I can so I don't muck it up.
In short, I'm trying to find the right way to display a collection of pie charts. These pie charts are part of a proposal for new business; they measure incumbent capture across multiple contracts from different teaming partners.
Data I'm working with:
- Company A: [a1] percentage of capture across [b1] contracts
- Company B: [a2] percentage of capture across [b2] contracts
- Company C: [a3] percentage of capture across [b3] contracts
- Total: [aT] percentage of capture across [bT] contracts
I have the data for sets 1, 2, and 3 -- the goal is to derive the data for set "T".
Essentially, we'd like to say "business A had 30% capture across 100 contracts, business B had 50% against 70 contracts, and business C had 40% against 500 contracts". The individual pie charts are of course simple enough (even I can do that one ;) ), but I would like them all to feed into a pie chart which shows the average percentage across all contracts with the total of contracts.
To try to articulate more bluntly, my question is: how can I ensure that the individual percentages for company performance are weighted according to the number of contracts they are referencing?
Thank you in advance for any help you can give!
All the best,
--Sean
| How to properly weight contributing percentages? | CC BY-SA 2.5 | null | 2011-04-03T03:04:46.487 | 2011-04-03T03:18:51.280 | 2011-04-03T03:18:29.550 | 4004 | 4004 | [
"pie-chart"
] |
9117 | 2 | null | 9085 | 7 | null | I can recommend Tableau as a good tool for data exploration and visualization, simply because of the different ways that you can explore and view the data, simply by dragging and dropping. The graphs are fairly sharp and you can easily output to PDF for presentation purposes. If you want you can extend it with some "programming". I regularly use this tool along with "R" and SAS and they all work together well.
| null | CC BY-SA 2.5 | null | 2011-04-03T03:07:13.240 | 2011-04-03T03:07:13.240 | null | null | 3489 | null |
9118 | 2 | null | 9116 | 1 | null | It might be easier than I thought -- please let me know if this is correct or on the right path.
Let's assume the following data:
- Company A: 70 percentage of capture across 10 contracts
- Company B: 60 percentage of capture across 100 contracts
- Company C: 50 percentage of capture across 40 contracts
First, I figured out the percentage of each contractors contribution to the total amount of contracts, so:
- Company A: 6.67% of total contracts
- Company B: 66.67% of total contracts
- Company C: 26.67% of total contracts
I then took the company's capture rate and multiplied it by their percentage of total contracts, so:
- Company A: 4.67% contributing to total percentage
- Company B: 40% contributing to total percentage
- Company C: 13.33% of contributing to total percentage
Which makes the total:
- 58.00% across 150 contracts
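A quick sanity check of the arithmetic (the weighted average is the same as total captured contracts over total contracts; Python used just to verify):

```python
companies = [(0.70, 10), (0.60, 100), (0.50, 40)]   # (capture rate, contracts)

total_contracts = sum(n for _, n in companies)
captured = sum(rate * n for rate, n in companies)    # contracts actually captured

print(total_contracts)                               # 150
print(round(100 * captured / total_contracts, 2))    # 58.0
```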
Please let me know if this is correct (or close enough). Thanks!
| null | CC BY-SA 2.5 | null | 2011-04-03T03:18:51.280 | 2011-04-03T03:18:51.280 | null | null | 4004 | null |
9119 | 2 | null | 8567 | 3 | null | The dlmMLE function in the dlm package will compute the likelihood. Then
AIC = -2 log(likelihood) + 2p
where p is the number of parameters estimated. You might like to read the [vignette for the dlm package](http://cran.r-project.org/web/packages/dlm/vignettes/dlm.pdf) which contains a lot of helpful information and examples.
However, a much simpler approach is to use the `StructTS` function in the `stats` package (which is automatically loaded). It will fit the model you want, and returns the loglikelihood.
| null | CC BY-SA 2.5 | null | 2011-04-03T04:34:24.763 | 2011-04-03T04:34:24.763 | null | null | 159 | null |
9120 | 2 | null | 2504 | 6 | null | In order to test such a vague hypothesis, you need to average out over all densities with finite variance, and all densities with infinite variance. This is likely to be impossible, you basically need to be more specific. One more specific version of this and have two hypothesis for a sample $D\equiv Y_{1},Y_{2},\dots,Y_{N}$:
- $H_{0}:Y_{i}\sim Normal(\mu,\sigma)$
- $H_{A}:Y_{i}\sim Cauchy(\nu,\tau)$
One hypothesis has finite variance, one has infinite variance. Just calculate the odds:
$$\frac{P(H_{0}|D,I)}{P(H_{A}|D,I)}=\frac{P(H_{0}|I)}{P(H_{A}|I)}\frac{\int P(D,\mu,\sigma|H_{0},I)d\mu d\sigma}{\int P(D,\nu,\tau|H_{A},I)d\nu d\tau}
$$
Where $\frac{P(H_{0}|I)}{P(H_{A}|I)}$ is the prior odds (usually 1)
$$P(D,\mu,\sigma|H_{0},I)=P(\mu,\sigma|H_{0},I)P(D|\mu,\sigma,H_{0},I)$$
And
$$P(D,\nu,\tau|H_{A},I)=P(\nu,\tau|H_{A},I)P(D|\nu,\tau,H_{A},I)$$
Now you normally wouldn't be able to use improper priors here, but because both densities are of the "location-scale" type, if you specify the standard non-informative prior with the same range $L_{1}<\mu,\nu<U_{1}$ and $L_{2}<\sigma,\tau<U_{2}$, then we get for the numerator integral:
$$\frac{\left(2\pi\right)^{-\frac{N}{2}}}{(U_1-L_1)log\left(\frac{U_2}{L_2}\right)}\int_{L_2}^{U_2}\sigma^{-(N+1)}\int_{L_1}^{U_1} exp\left(-\frac{N\left[s^{2}+(\overline{Y}-\mu)^2\right]}{2\sigma^{2}}\right)d\mu d\sigma$$
Where $s^2=N^{-1}\sum_{i=1}^{N}(Y_i-\overline{Y})^2$ and $\overline{Y}=N^{-1}\sum_{i=1}^{N}Y_i$. And for the denominator integral:
$$\frac{\pi^{-N}}{(U_1-L_1)log\left(\frac{U_2}{L_2}\right)}\int_{L_2}^{U_2}\tau^{-(N+1)}\int_{L_1}^{U_1} \prod_{i=1}^{N}\left(1+\left[\frac{Y_{i}-\nu}{\tau}\right]^{2}\right)^{-1}d\nu d\tau$$
And now taking the ratio we find that the important parts of the normalising constants cancel and we get:
$$\frac{P(D|H_{0},I)}{P(D|H_{A},I)}=\left(\frac{\pi}{2}\right)^{\frac{N}{2}}\frac{\int_{L_2}^{U_2}\sigma^{-(N+1)}\int_{L_1}^{U_1} exp\left(-\frac{N\left[s^{2}+(\overline{Y}-\mu)^2\right]}{2\sigma^{2}}\right)d\mu d\sigma}{\int_{L_2}^{U_2}\tau^{-(N+1)}\int_{L_1}^{U_1} \prod_{i=1}^{N}\left(1+\left[\frac{Y_{i}-\nu}{\tau}\right]^{2}\right)^{-1}d\nu d\tau}$$
And all integrals are still proper in the limit so we can get:
$$\frac{P(D|H_{0},I)}{P(D|H_{A},I)}=\left(\frac{2}{\pi}\right)^{-\frac{N}{2}}\frac{\int_{0}^{\infty}\sigma^{-(N+1)}\int_{-\infty}^{\infty} exp\left(-\frac{N\left[s^{2}+(\overline{Y}-\mu)^2\right]}{2\sigma^{2}}\right)d\mu d\sigma}{\int_{0}^{\infty}\tau^{-(N+1)}\int_{-\infty}^{\infty} \prod_{i=1}^{N}\left(1+\left[\frac{Y_{i}-\nu}{\tau}\right]^{2}\right)^{-1}d\nu d\tau}$$
The denominator integral cannot be analytically computed, but the numerator can, and we get for the numerator:
$$\int_{0}^{\infty}\sigma^{-(N+1)}\int_{-\infty}^{\infty} exp\left(-\frac{N\left[s^{2}+(\overline{Y}-\mu)^2\right]}{2\sigma^{2}}\right)d\mu d\sigma=\sqrt{\frac{2\pi}{N}}\int_{0}^{\infty}\sigma^{-N} exp\left(-\frac{Ns^{2}}{2\sigma^{2}}\right)d\sigma$$
Now make change of variables $\lambda=\sigma^{-2}\implies d\sigma = -\frac{1}{2}\lambda^{-\frac{3}{2}}d\lambda$ and you get a gamma integral:
$$\sqrt{\frac{2\pi}{N}}\cdot\frac{1}{2}\int_{0}^{\infty}\lambda^{\frac{N-1}{2}-1} exp\left(-\lambda\frac{Ns^{2}}{2}\right)d\lambda=\sqrt{\frac{\pi}{2N}}\left(\frac{2}{Ns^{2}}\right)^{\frac{N-1}{2}}\Gamma\left(\frac{N-1}{2}\right)$$
And we get as a final analytic form for the odds for numerical work:
$$\frac{P(H_{0}|D,I)}{P(H_{A}|D,I)}=\frac{P(H_{0}|I)}{P(H_{A}|I)}\times\frac{\pi^{\frac{N+1}{2}}N^{-\frac{N}{2}}s^{-(N-1)}\Gamma\left(\frac{N-1}{2}\right)}{2\int_{0}^{\infty}\tau^{-(N+1)}\int_{-\infty}^{\infty} \prod_{i=1}^{N}\left(1+\left[\frac{Y_{i}-\nu}{\tau}\right]^{2}\right)^{-1}d\nu d\tau}$$
So this can be thought of as a specific test of finite versus infinite variance. We could also bring a T distribution into this framework to get another test (testing the hypothesis that the degrees of freedom are greater than 2).
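The gamma-integral building block used above can be checked numerically (SciPy used just for the check, with arbitrary values of $N$ and $s$): the claim is $\int_{0}^{\infty}\sigma^{-N}\exp(-Ns^{2}/(2\sigma^{2}))d\sigma=\frac{1}{2}(2/(Ns^{2}))^{(N-1)/2}\Gamma(\frac{N-1}{2})$.

```python
import math
from scipy.integrate import quad
from scipy.special import gamma

N, s = 5, 1.3

def integrand(sig):
    # value underflows to 0 well before sigma reaches 1e-6
    if sig < 1e-6:
        return 0.0
    return sig ** (-N) * math.exp(-N * s * s / (2 * sig * sig))

numeric, _ = quad(integrand, 0, math.inf)
closed = 0.5 * (2 / (N * s * s)) ** ((N - 1) / 2) * gamma((N - 1) / 2)
print(numeric, closed)   # the two agree
```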
| null | CC BY-SA 2.5 | null | 2011-04-03T05:03:28.207 | 2011-04-03T22:43:02.550 | 2011-04-03T22:43:02.550 | 2392 | 2392 | null |
9121 | 1 | 11424 | null | 3 | 1675 | I've seen couple of articles reporting individuals standard deviations and/or standard errors for groups even after implementing ANOVA. My understanding is that groups SE's should be based on experimental error mean square. Any comment?
| Individuals standard deviations and/or standard errors for groups after implementing ANOVA? | CC BY-SA 2.5 | null | 2011-04-03T06:28:11.013 | 2011-06-01T10:11:22.020 | 2011-06-01T10:11:22.020 | 930 | 3903 | [
"anova",
"standard-deviation",
"experiment-design",
"standard-error"
] |
9124 | 1 | null | null | 2 | 4679 | In S-plus estimates of percentiles for a survival function can be obtained using the `qkaplanMeier` function (on the results of a call to kaplanMeier) like that:
```
kfit <-kaplanMeier(censor(TIME,STATUS)~1)
qkaplanMeier(kfit, c(.25, .5, .75))
```
How can I do this in R? Those functions do not exist anymore. What if I also want the (asymptotic) confidence intervals for the percentiles? How can I get the mean survival time?
| Estimates and C.I. of percentiles for a survival function | CC BY-SA 2.5 | null | 2011-04-03T11:06:21.313 | 2023-04-29T06:23:20.560 | 2011-04-03T11:18:23.073 | 339 | 339 | [
"r",
"confidence-interval",
"survival"
] |
9125 | 2 | null | 9124 | 3 | null | The [CRAN task view on survival analysis](http://cran.r-project.org/web/views/Survival.html) says:
Kaplan-Meier: The `survfit` function from the [survival](http://cran.r-project.org/web/packages/survival/index.html) package computes the Kaplan-Meier estimator for truncated and/or censored data. [rms](http://cran.r-project.org/web/packages/rms/index.html) (replacement of the [Design$^\dagger$](http://cran.r-project.org/web/packages/Design/index.html) package) proposes a modified version of the `survfit` function. The [prodlim](http://cran.r-project.org/web/packages/prodlim/index.html) package implements a fast algorithm and some features not included in [survival](http://cran.r-project.org/web/packages/survival/index.html). Various confidence intervals and confidence bands for the Kaplan-Meier estimator are implemented in the [km.ci](http://cran.r-project.org/web/packages/km.ci/index.html) package. `plot.Surv` of package [eha](http://cran.r-project.org/web/packages/eha/index.html) plots the Kaplan-Meier estimator. `svykm` in [survey](http://cran.r-project.org/web/packages/survey/index.html) provides a weighted Kaplan-Meier estimator. `nested.km` in [NestedCohort$^\dagger$](http://cran.r-project.org/web/packages/NestedCohort/index.html) estimates the survival curve for each level of categorical variables with missing data. The `kaplan.meier` function in [spatstat](http://cran.r-project.org/web/packages/spatstat/index.html) computes the Kaplan-Meier estimator from histogram data. The [MAMSE$^\dagger$](http://cran.r-project.org/web/packages/MAMSE/index.html) package permits computing a weighted Kaplan-Meier estimate. The `KM` function in package [rhosp](http://cran.r-project.org/web/packages/rhosp/index.html) plots the survival function using a variant of the Kaplan-Meier estimator in a hospitalisation risk context. The [survPresmooth](http://cran.r-project.org/web/packages/survPresmooth/index.html) package computes presmoothed estimates of the main quantities used for right-censored data, i.e., survival, hazard and density functions.
---
$^\dagger$ Respective packages have now been removed from CRAN repository.
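To address the question directly, a minimal sketch with the `survival` package (the data frame `mydata` with `TIME` and `STATUS` columns is an assumption, mirroring the S-PLUS call; `quantile.survfit` returns confidence limits when `conf.int = TRUE`):

```r
library(survival)

# Kaplan-Meier fit, analogous to kaplanMeier(censor(TIME, STATUS) ~ 1) in S-PLUS
kfit <- survfit(Surv(TIME, STATUS) ~ 1, data = mydata)  # mydata is assumed

# Percentiles of the survival distribution with (asymptotic) confidence limits
quantile(kfit, probs = c(0.25, 0.5, 0.75), conf.int = TRUE)

# Restricted mean survival time, printed with its standard error
print(kfit, print.rmean = TRUE)
```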
| null | CC BY-SA 4.0 | null | 2011-04-03T11:40:56.150 | 2023-04-29T06:23:20.560 | 2023-04-29T06:23:20.560 | 362671 | 2958 | null |
9126 | 2 | null | 9124 | 2 | null | The `bootkm()` function in [Hmisc](http://cran.r-project.org/web/packages/Hmisc/index.html) provides bootstrapped estimates of the probability of survival, as well as of quantiles of the survival distribution (through either `describe` or `quantile` applied to the result of `bootkm`).
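For instance, a sketch (the `TIME`/`STATUS` columns are assumed from the question; `B` is the number of bootstrap resamples):

```r
library(Hmisc)
library(survival)

S <- Surv(TIME, STATUS)                  # TIME/STATUS assumed from the question
boot.med <- bootkm(S, q = 0.5, B = 500)  # bootstrap estimates of the median survival
quantile(boot.med, c(0.025, 0.975), na.rm = TRUE)  # percentile confidence interval
```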
| null | CC BY-SA 2.5 | null | 2011-04-03T12:44:55.260 | 2011-04-03T12:44:55.260 | null | null | 930 | null |
9127 | 1 | null | null | 3 | 2684 | Where can I obtain all hourly weather data available?
Notes:
- Ideally, this would include the history of all data currently published here, here, and here
- I've obtained 10 years of METAR data from wunderground.com like this:
curl -H 'Cookie: Prefs=|SHOWMETAR:1|;' -o data.txt 'http://www.wunderground.com/history/airport/KABQ/2005/05/02/DailyHistory.html?req_city=NA&req_state=NA&req_statename=NA&theprefset=SHOWMETAR&theprefvalue=1&format=1'
but am sure there's a better way.
- I'd love per-minute/etc data too, though I don't think anyone kept
measurements better than hourly back then.
| How to fetch all historically available hourly weather data? | CC BY-SA 4.0 | null | 2011-04-03T14:27:58.373 | 2022-11-22T02:20:05.457 | 2022-11-22T02:20:05.457 | 362671 | null | [
"time-series",
"dataset"
] |
9128 | 2 | null | 9085 | 4 | null | As John said, data exploration doesn't require much programming in R. Here's a list of data exploration commands you can give people. (I just came up with this; you can surely expand it.)
Export the data from whatever package it's in. (Exporting numerical data without quotation marks is convenient.) Then read the data in R.
```
ChickWeight=read.csv('chickweight.csv')
```
Make a table.
```
table(ChickWeight$Diet)
```
Let R guess what sort of graphic to give you. It sometimes works very nicely.
```
plot(ChickWeight)
plot(ChickWeight$weight)
plot(ChickWeight$weight~ChickWeight$Diet)
```
A bunch of specific plotting functions work quite simply on single variables.
```
hist(ChickWeight$weight)
```
Taking subsets
```
plot(subset(ChickWeight,Diet=='2'))
```
SQL-like syntax in case people are used to that (more [here](http://www.r-bloggers.com/make-r-speak-sql-with-sqldf))
```
library(sqldf)
plot(sqldf('select * from ChickWeight where Diet == "2"'))
```
PCA (You'd have more than two variables of course.)
```
princomp(~ ChickWeight$weight + ChickWeight$Time)
```
| null | CC BY-SA 2.5 | null | 2011-04-03T14:49:47.713 | 2011-04-03T14:55:02.353 | 2011-04-03T14:55:02.353 | 3874 | 3874 | null |
9129 | 1 | null | null | 5 | 1303 | I am currently looking for some Information Retrieval techniques.
I have a SQL database table containing strings. It has 1000 records, each being a random sentence I picked from random web sites. I need to compute the term frequencies and represent each string as a vector. I also need to cluster the records, e.g. using k-means.
Does anyone know what is the best way to do this? Are there any tools I can use? I am new to this and looking for a jump off point.
| How to compute term frequency and find clusters in a dataset composed of strings? | CC BY-SA 2.5 | null | 2011-04-03T15:33:54.147 | 2011-09-02T12:37:19.820 | 2011-04-03T16:26:29.463 | 930 | 4020 | [
"clustering",
"information-retrieval"
] |
9130 | 2 | null | 9127 | 3 | null | This may be simplistic, but if you have a consistent directory structure on the NOAA site (they usually are), you can recursive wget the entire thing, then sort through it at your leisure.
```
wget -r http://weather.noaa.gov/pub/SL.us008001/DF.an/
```
This will grab everything recursively from that URL and deeper. It's what I normally use when I want a huge whack of data from some (typically government) site that does silly things like storing it in one file per hour when I need 12 years' worth.
As an aside, I presume you only want historical weather data for the United States? If you're working on this on a slightly longer scale and more global, the Berkeley Earth Surface Temperature Project should be releasing their raw data set in the next few months: see [here](http://berkeleyearth.org/dataset).
| null | CC BY-SA 2.5 | null | 2011-04-03T15:41:42.047 | 2011-04-03T15:41:42.047 | null | null | 781 | null |
9131 | 1 | 9144 | null | 22 | 10010 | Let's take the following example:
```
set.seed(342)
x1 <- runif(100)
x2 <- runif(100)
y <- x1+x2 + 2*x1*x2 + rnorm(100)
fit <- lm(y~x1*x2)
```
This creates a model of y based on x1 and x2, using an OLS regression. If we wish to predict y for a given x_vec, we can simply use the formula we get from `summary(fit)`.
However, what if we want the lower and upper prediction limits for y (for a given confidence level)?
How would we build that formula?
| Obtaining a formula for prediction limits in a linear model (i.e.: prediction intervals) | CC BY-SA 4.0 | null | 2011-04-03T18:24:49.593 | 2021-09-27T14:11:17.713 | 2019-04-15T11:33:54.723 | 253 | 253 | [
"r",
"regression",
"predictive-models",
"prediction-interval"
] |
9132 | 1 | 9133 | null | 2 | 2798 | We are trying to predict values of variable A having N other variables. What we do, we calculate Pearson correlation between A and each of the other N variables, for last M values, using fixed M. We use variable with largest correlation coefficient as predictor.
This scheme works fine when analyzing the N variables over the first 24 months, but its performance decreases between months 25 and 36.
What could come as improvement of Pearson correlation in this context?
Variables are bi-valued.
| Alternatives to Pearson correlation | CC BY-SA 2.5 | null | 2011-04-03T20:02:37.747 | 2011-04-03T20:47:44.887 | 2011-04-03T20:39:22.490 | null | 4010 | [
"correlation"
] |
9133 | 2 | null | 9132 | 3 | null | This is generally hard to tell without knowing exactly what the problem is, but I would advise you to try some machine learning methods. For a start, you could try a random forest, which is almost trivial to apply and will quite probably achieve better accuracy than just using the single best-correlated variable.
Also, it will produce an importance measure that tells you which variables contribute most to the prediction accuracy -- possibly taking into account quite complex multivariate interactions that are invisible to Pearson's correlation.
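A minimal sketch with the `randomForest` package (the data frame `dat` holding the target `A` and the N predictors is an assumption; since the variables are bi-valued, coding `A` as a factor makes this a classification forest):

```r
library(randomForest)

dat$A <- factor(dat$A)  # bi-valued target -> classification forest
rf <- randomForest(A ~ ., data = dat, importance = TRUE)

importance(rf)  # which variables contribute most to prediction accuracy
varImpPlot(rf)
```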
| null | CC BY-SA 2.5 | null | 2011-04-03T20:47:44.887 | 2011-04-03T20:47:44.887 | null | null | null | null |
9134 | 2 | null | 9131 | 9 | null | Are you by chance after the different types of prediction intervals? The `predict.lm` manual page has
```
## S3 method for class 'lm'
predict(object, newdata, se.fit = FALSE, scale = NULL, df = Inf,
interval = c("none", "confidence", "prediction"),
level = 0.95, type = c("response", "terms"),
terms = NULL, na.action = na.pass,
pred.var = res.var/weights, weights = 1, ...)
```
and
> Setting ‘intervals’ specifies computation of confidence or prediction (tolerance) intervals at the specified ‘level’, sometimes referred to as narrow vs. wide intervals.
Is that what you had in mind?
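Using the fitted model from the question, this would look like the following sketch (the new observation is an assumed stand-in for the question's `x_vec`):

```r
new <- data.frame(x1 = 0.5, x2 = 0.5)  # assumed new observation

# Prediction interval for a new individual response
predict(fit, newdata = new, interval = "prediction", level = 0.95)

# Confidence interval for the mean response, for comparison
predict(fit, newdata = new, interval = "confidence", level = 0.95)
```

Each call returns a matrix with columns `fit`, `lwr`, and `upr` giving the point prediction and the interval limits.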
| null | CC BY-SA 2.5 | null | 2011-04-03T21:04:33.583 | 2011-04-03T21:04:33.583 | null | null | 334 | null |
9135 | 1 | null | null | 4 | 205 | Given two point sets, $B$ consisting of blue points, and $R$ of red points, on the plane.
The problem is to formulate a theoretical model to compare the average runtime of the Computational Geometry (CG) algorithm and of [Perceptron Learning (PL)](http://en.wikipedia.org/wiki/Perceptron#Learning_algorithm) for shattering two sets of points by a line. (Briefly, CG tests whether there exists a line $ax+by = c$ such that the system of inequalities $ax_r+by_r\leq c;\ ax_b+by_b \geq c$ for all $(x_b,y_b) \in B$ and $(x_r,y_r) \in R$ is consistent; if so, it finds one such line.) AFAIK, only empirical models are known thus far, where we take a large number of such point sets and test out the performance of CG and PL.
While I can't see any clear solution to this, I am looking for one along the following lines: Choose 2 random integers, $b$ and $r$ in the range $[1,n]$ where $n$ is a large integer, to appear in the answer as a parameter. Choose $b$ blue points and $r$ red points within the unit square. Compute the average runtime performance of CG (essentially determining the complexity of determining consistency of linear equations on a random set of values) and PL (essentially determining the number of weight changes on a random data set) by a multi-fold integration. Further, average across all possible choices of $b$ and $r$ in the range $[1,n]$.
Can you offer more insight or a better formulation?
| Computational geometry vs perceptron for shattering | CC BY-SA 2.5 | null | 2011-04-03T21:12:07.560 | 2021-05-22T00:59:36.957 | 2021-05-22T00:59:36.957 | 11887 | 4011 | [
"neural-networks",
"algorithms",
"geometry"
] |
9136 | 2 | null | 8911 | 0 | null | Try [http://tukhi.com](http://tukhi.com). It is not clear whether or not they have a Mac OS X Excel version, but they have contact info on that site. It is pretty amazing. Heh, you could always run Window in a VM.
| null | CC BY-SA 2.5 | null | 2011-04-03T21:31:49.947 | 2011-04-03T21:31:49.947 | null | null | null | null |
9137 | 1 | 9139 | null | 8 | 2923 | Say there are 3 companies A, B and C. Each company has a quality rating from 0 to 100 and a price in USD.
```
Company Quality Price
A 80 7.9
B 70 8.0
C 75 8.1
```
How do I determine the best quality-price trade-off? What kind of analysis should I use?
| Quality-price trade-off | CC BY-SA 2.5 | null | 2011-04-03T22:17:08.980 | 2013-06-28T13:24:24.787 | 2013-06-28T13:24:24.787 | 919 | 4013 | [
"valuation"
] |
9139 | 2 | null | 9137 | 11 | null | The [Keeney-Raiffa approach to Multi-attribute valuation theory](http://rads.stackoverflow.com/amzn/click/0521438837) is well-grounded practically and theoretically, has been successfully applied to many problems, and--when applied to problems with just two attributes--is particularly simple. It proceeds by systematically exploring the trade-offs you would actually make in hypothetical situations and uses those to deduce two things: (1) an appropriate way to re-express each attribute and (2) a linear combination of the re-expressed attributes that fully reflects an overall value.
Be careful when doing Web research on this. The vast majority of published models of this type appear to ignore (1), which is crucial, and often establish (2) in ad-hoc or arbitrary ways.
Another approach, consistent with (but inferior to) the Keeney-Raiffa theory, establishes an "efficient frontier." Plot quality on one axis and price on another for each of the available alternatives. If you do so with increasing quality to the right and decreasing price upwards, then points lying at the extreme right or above of all the others are the best candidates to consider. In effect this method ignores (1) and uses this "frontier" to avoid specifying the coefficients in (2). It is often used in financial applications where the two attributes are "alpha" (expected rate of return) and "beta" (variance of returns, a surrogate for risk). [Modern portfolio theory](http://en.wikipedia.org/wiki/Modern_portfolio_theory) uses a variant of this approach.
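As a quick base-R illustration of the frontier idea, using the three companies from the original question (a point is on the frontier when no other point offers at least its quality at a lower-or-equal price):

```r
d <- data.frame(company = c("A", "B", "C"),
                quality = c(80, 70, 75),
                price   = c(7.9, 8.0, 8.1))

# A point is dominated if some other point is at least as good on both
# attributes and strictly better on at least one
dominated <- sapply(seq_len(nrow(d)), function(i)
  any(d$quality >= d$quality[i] & d$price <= d$price[i] &
      (d$quality > d$quality[i] | d$price < d$price[i])))

d[!dominated, ]  # here only company A remains: highest quality at the lowest price
```

With these numbers the frontier collapses to a single point; with more alternatives it would typically contain several candidates among which the Keeney-Raiffa trade-offs must still decide.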
| null | CC BY-SA 2.5 | null | 2011-04-03T23:04:03.450 | 2011-04-03T23:04:03.450 | null | null | 919 | null |