Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
11578 | 1 | null | null | 1 | 1130 | I'm a bit stuck on the general concept and calculations of iterated expectations. A simple example:
$$E[E[Y|X]] = E[Y]$$
I'm not sure how or why this is the case. I have a three-line proof in my notes using densities, but is there no 'quicker' way? Especially in longer cases like the one below, writing everything out is difficult. Are there tricks to spot?
Example 2:
$Var[Y] = E[Y^2] - (E[Y])^2$ (I get this line!)
$$= E[E[Y^2|X]] - \bigg(E[E[Y|X]]\bigg)^2 $$
$$= E\left[Var[Y|X]+(E[Y|X])^2\right] - \bigg(E[E[Y|X]]\bigg)^2$$
$$= E[Var[Y|X]] + E[(E[Y|X])^2] - \bigg(E[E[Y|X]]\bigg)^2$$
$$= E[Var[Y|X]] + Var[E[Y|X]]$$
I'm not at all sure how they go from line to line.
| Iterated expectations theory | CC BY-SA 3.0 | null | 2011-06-04T22:39:01.103 | 2014-07-08T05:37:38.360 | 2011-06-14T02:20:12.470 | 2392 | 4624 | [
"self-study"
] |
11579 | 2 | null | 11568 | 1 | null | Let's take an extreme example where you have just one value in your sample. Then you have no information about the dispersion of $X$ either from your knowledge of the distribution or from your sample and so no way of testing your hypotheses.
It does not take much to change this: for example if you know that $X$ is always non-negative then, taking the null hypothesis $\mathbb{E}[X] \le y$, by Markov's inequality you have
$$\Pr(X \geq x) \leq \frac{y}{x}$$
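To make the bound concrete, here is a small numeric check (an illustrative sketch: the exponential distribution and the sample size are arbitrary choices, not part of the argument above):

```python
import numpy as np

# Check Pr(X >= x) <= y/x for a nonnegative X with E[X] = y.
# Illustrative: X ~ Exponential with mean y is an arbitrary choice.
rng = np.random.default_rng(1)
y = 2.0
samples = rng.exponential(scale=y, size=200_000)  # nonnegative, E[X] = y

xs = np.array([3.0, 5.0, 10.0])
empirical = np.array([(samples >= x).mean() for x in xs])
bounds = y / xs   # the Markov bounds
```

The empirical tail probabilities sit well below the Markov bounds, as they must for any nonnegative distribution.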
| null | CC BY-SA 3.0 | null | 2011-06-04T23:59:57.423 | 2011-06-04T23:59:57.423 | null | null | 2958 | null |
11580 | 1 | null | null | 3 | 2682 | Recently, I read [an article regarding the association between age and
lymph node metastases](http://jco.ascopubs.org/content/27/18/2931.long).
The authors stated that:
>
"Because a nonlinear relationship
between age and lymph node involvement
was expected based on existing
literature, lymph node involvement was
also regressed on age using
nonparametric logistic regression
based on locally weighted scatterplot
smoothing (lowess)."
Could someone explain what is nonparametric logistic regression based on locally
weighted scatterplot smoothing (lowess)?
| What is nonparametric logistic regression based on locally weighted scatterplot smoothing? | CC BY-SA 3.0 | null | 2011-06-05T02:55:53.457 | 2022-09-20T01:25:29.720 | 2011-07-16T21:19:21.113 | 930 | 4887 | [
"logistic",
"loess"
] |
11581 | 2 | null | 11553 | 7 | null | I confirmed @caracal's answer with a Monte Carlo experiment. I generated random instances from a linear model (with the problem size randomized), computed the F-statistic, and computed the p-value using the non-centrality parameter
$$
\delta^2 = \frac{||X\beta_1 - X\beta_2||^2}{\sigma^2},
$$
Then I plotted the empirical cdf of these p-values. If the non-centrality parameter (and the code!) is correct, I should get a near uniform cdf, which is the case:

Here is the R code (pardon the style, I'm still learning):
```
# sum of squares
sum2 <- function(x) sum(x * x)

# random integer between n and 2n
rint <- function(n) ceiling(runif(1, min = n, max = 2 * n))

# Generate a random instance from a linear model plus noise:
# n observations of a p2-vector. Regress against all variables and
# against a subset of p1 of them, compute the F-statistic for the test
# of the p2 - p1 marginal variables, and compute the p-value under the
# putative non-centrality parameter.
gend <- function(n, p1, p2, sig = 1) {
  beta2 <- matrix(rnorm(p2, sd = 0.1), nrow = p2)
  beta1 <- matrix(beta2[1:p1], nrow = p1)
  X <- matrix(rnorm(n * p2), nrow = n, ncol = p2)
  yt1 <- X[, 1:p1] %*% beta1
  yt2 <- X %*% beta2
  y <- yt2 + matrix(rnorm(n, mean = 0, sd = sig), nrow = n)
  ncp <- sum2(yt2 - yt1) / sig^2
  bhat2 <- lm(y ~ X - 1)
  bhat1 <- lm(y ~ X[, 1:p1] - 1)
  SSE1 <- sum2(bhat1$residuals)
  SSE2 <- sum2(bhat2$residuals)
  df1 <- bhat1$df.residual
  df2 <- bhat2$df.residual
  Fstat <- ((SSE1 - SSE2) / (df1 - df2)) / (SSE2 / df2)
  pval <- pf(Fstat, df1 = df1 - df2, df2 = df2, ncp = ncp)
  pval
}

# Call the above function, but randomize the problem size (within reason)
genr <- function(n, p1, p2, sig = 1) {
  use.p1 <- rint(p1)
  use.p2 <- use.p1 + rint(p2 - p1)
  gend(n = rint(n), p1 = use.p1, p2 = use.p2, sig = sig + runif(1))
}

ntrial <- 4096
ssize <- 256
z <- replicate(ntrial, genr(ssize, p1 = 4, p2 = 10))
plot(ecdf(z))
```
| null | CC BY-SA 3.0 | null | 2011-06-05T03:41:28.607 | 2011-06-05T03:41:28.607 | null | null | 795 | null |
11582 | 1 | null | null | 1 | 126 | 
The above image represents an article's page views over time. The x-axis is days, with 9 being the most recent day; the y-axis is the number of page views. I'm looking for a decent, not too complex, physics or statistical calculation that would be able to tell me (based on the history of the page views) what the current trend of the page views is for the past n days.
So basically, in the past 5 days is this link trending unusually higher than it usually does and if so by what degree/magnitude?
Ideally the answer would provide an algorithm class that applies to this problem as well as some example of that using the data provided from this chart above.
| How can I get a velocity of how much this link is trending? | CC BY-SA 3.0 | 0 | 2011-06-05T03:42:30.270 | 2011-06-06T18:39:55.273 | null | null | 4875 | [
"time-series",
"statistical-significance",
"mathematical-statistics",
"trend"
] |
11583 | 1 | null | null | 1 | 3311 | Does anyone know the reference/link where I can find the MATLAB implementation of the gap statistic for clustering, as mentioned in [this](http://gremlin1.gdcb.iastate.edu/MIP/gene/MicroarrayData/gapstatistics.pdf) paper?
| Gap statistics MATLAB implementation | CC BY-SA 3.0 | null | 2011-06-05T05:32:14.513 | 2014-04-28T19:32:37.817 | 2011-06-05T12:11:37.537 | null | 4290 | [
"clustering",
"matlab",
"mathematical-statistics"
] |
11584 | 2 | null | 11519 | 1 | null | I have noticed there is a data format [GTFS](http://code.google.com/intl/fr-FR/transit/spec/transit_feed_specification.html) created by Google for public transportation data. There is an [interesting repository](http://www.gtfs-data-exchange.com/) with public data from all around the world. The only thing that is missing is an R package/S4 class that would extend [sp-classes](http://cran.r-project.org/web/packages/sp/vignettes/sp.pdf) with POSIXct and permit reading this type of data. Is anyone motivated to work on that? Is there already ongoing work?
| null | CC BY-SA 3.0 | null | 2011-06-05T06:18:46.220 | 2011-06-05T06:18:46.220 | null | null | 223 | null |
11585 | 1 | null | null | 3 | 354 | I took some measurements of some data (code run time measurements, for those curious) of which I have no idea what the expected value is.
The data is discrete, and I have no idea what type of properties it has or distribution it follows.
The only thing that is known is that the values are more or less independent of each other. I say more or less, because there are some cache effects that can correlate one measurement with another, but I don't know how much they may affect the measurements.
In addition, I do know that each measurement has a certain amount of granularity, since I can only measure to something like $\frac{1}{\text{frequency}}$ accuracy, limited by the speed of my processor.
Given that I have $N$ total samples of data, what sorts of things can I do to say, for example, "I have $X$% confidence that the expected run time of this code should be $Y$ "?
I essentially want to determine how accurate my measurements are and to what degree I'm confident they are. I have no idea how to do this, as I've never had a chance to take a proper statistics class.
| How can I statistically determine if my data on code run time measurements is "good"? | CC BY-SA 3.0 | null | 2011-06-05T06:45:27.440 | 2011-08-05T08:39:00.227 | 2011-06-06T07:44:00.090 | 183 | 4888 | [
"confidence-interval"
] |
11586 | 2 | null | 11444 | 4 | null | My answer: I assume $(X,Y)$ is absolutely continuous, i.e. has a density $f_{(X,Y)}$ with respect to Lebesgue measure in $\mathbb{R}^2$. Everything can be done by using the "weights" argument of R's density function, i.e. with a call to
$$\text{density}\left ((Y_1,\dots,Y_n),\; \text{ weights}=(e^{-|x-X_i|^2/h^2})_{i=1,\dots,n}\right )$$
to estimate $\hat{f}_{Y}^{|X=x}(y)$ (for a given $x$), together with a proper use of the conditioning formulae given at the end of the post (you also need to tune the window parameter $h$).
Development: I assume $X\leadsto \mathcal{U}[0,1]$. Since you know $f_{X}$, you only need to estimate the conditional density $\hat{f}_{Y}^{|X=x}(y)$.
Depending on how many observations you have, there might be different strategies.
The simplest one is to do a regular binning of $[0,1]$, say with $p$ bins and estimate $\hat{f}_{Y}^{|X=x_i}(y)$ ($x_i$ stands for the center of bin $i$) with an histogram (or with function density of R) for each $i=1,\dots,p$. Your final estimate is
$$ \hat{f}_{(X,Y)}(x,y)=\frac{1}{p}\sum_{i}1(x\in B_i)\hat{f}_{Y}^{|X=x_i}(y)$$
$1(x\in B_i)$ is the indicator function that $x$ is in bin $i$.
Obviously you can turn the zero-one weighting scheme ($1(x\in B_i)$) into a smoother one (you need to adapt the calculation of $\hat{f}_{Y}^{|X=x_i}(y)$, but this is easy since R's density function allows for a "weights" parameter). Your weights have to keep the invariance by translation which characterizes the uniform distribution. For example, with exponential weights of window parameter $h>0$ this will give, for a given $x\in [0,1]$, something like:
$$ \hat{f}_{(X,Y)}(x,y)=1(x\in [0,1])\frac{1}{\sqrt{2\pi}h}\int_{t}e^{-|x-t|^2/h^2}\hat{f}_{Y}^{|X=t}(y) dt$$
where $\hat{f}_{Y}^{|X=t}(y)$ is obtained with R's density function and parameter weights=$(e^{-|t-X_i|^2/h^2})_{i=1,\dots,n}$.
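A minimal re-implementation of this weighted-kernel idea (a Python sketch rather than R's density(..., weights=); the bandwidths h and bw and the toy data are illustrative assumptions, not part of the answer):

```python
import numpy as np

# Estimate f(y | X = x) by weighting each Y_i with a Gaussian kernel in x,
# mirroring the density(..., weights=) trick described above.
def cond_density(y_grid, X, Y, x, h=0.1, bw=0.1):
    # Gaussian weights in x (note the minus sign in the exponent)
    w = np.exp(-np.abs(x - X) ** 2 / h ** 2)
    w = w / w.sum()
    # Weighted Gaussian KDE over Y, evaluated on y_grid
    z = (y_grid[:, None] - Y[None, :]) / bw
    kern = np.exp(-0.5 * z ** 2) / (np.sqrt(2.0 * np.pi) * bw)
    return (kern * w[None, :]).sum(axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(size=5000)                    # X ~ U[0, 1]
Y = X + rng.normal(scale=0.05, size=5000)     # Y concentrated near X
grid = np.linspace(-1.0, 2.0, 601)
dens = cond_density(grid, X, Y, x=0.5)        # estimate of f(y | X = 0.5)
```

With data concentrated around $Y \approx X$, the estimated conditional density at $x=0.5$ integrates to roughly one and peaks near $y=0.5$, as expected.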
| null | CC BY-SA 3.0 | null | 2011-06-05T07:11:22.583 | 2011-06-05T11:20:23.733 | 2011-06-05T11:20:23.733 | 930 | 223 | null |
11587 | 2 | null | 11578 | 3 | null | This is a simple, but not so simple, concept to understand. I personally find the example of a table helpful for illustrating what is going on. So we have a $2\times 2$ table with counts in each cell. To keep away from the "abstract" nature of the concept, I'll use real numbers instead of letters. So we have a table of "being sick" against "having a sore throat":
$$\begin{array}{c|c}
& \text{Sick} & \text{Not Sick}\\
\hline
\text{Sore throat} & 12 & 5 \\
\hline
\text{No Sore throat} & 4 & 55
\end{array}$$
So in this table we have 12 people who are both sick and have a sore throat, and 5 people who are not sick but still have a sore throat. Now if we condition on "Sore throat", then the chances of being sick are
$$Pr(\text{Sick}|\text{sore throat})=\frac{12}{12+5}=0.706$$
You can think of this quantity as a part of $E[Y|X]$ in the more abstract sense (not exactly the case, but will do for understanding the concept). Now We can calculate the same probability, but on the condition of no sore throat
$$Pr(\text{Sick}|\text{No sore throat})=\frac{4}{4+55}=0.068$$
This quantity is the "other part" of $E[Y|X]$. Now suppose you want to calculate the chance of someone being sick, regardless of their throat. We can do this in (at least) two ways. The simplest in this case is to calculate the total proportion of people who are sick:
$$Pr(\text{Sick})=\frac{4+12}{4+12+55+5}=0.210$$
But you can also use the law of iterated expectations. It is more cumbersome here, but can be much simpler in more complicated problems. We have, by the product rule and sum rule:
$$Pr(\text{Sick})=Pr(\text{Sick}|\text{Sore throat})Pr(\text{Sore throat})$$
$$+Pr(\text{Sick}|\text{No Sore throat})Pr(\text{No Sore throat})$$
Now we need the marginal probabilities $Pr(\text{No Sore throat})=\frac{4+55}{4+12+55+5}=0.776$ and $Pr(\text{Sore throat})=\frac{12+5}{4+12+55+5}=0.224$. Now we have all the ingredients, we just plug them in:
$$Pr(\text{Sick})=0.706 \times 0.224 + 0.068 \times 0.776=0.210$$
Which is the same result. This is an "empirical proof", so to speak. The law simply says that this holds more generally, for arbitrary distributions.
The variance version comes about by "adding zero" to the equations in the form $0=(E[Y|X])^{2}-(E[Y|X])^{2}$, or by noting that $Var[Y|X]=E[Y^{2}|X]-(E[Y|X])^{2}$, which can be re-arranged to give $E[Y^{2}|X]=Var[Y|X]+(E[Y|X])^{2}$. It's just messy because of all the parentheses, and remembering where the "square" is.
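The "empirical proof" above can also be checked mechanically; a small sketch using the same table, in exact arithmetic:

```python
from fractions import Fraction as F

# The 2x2 table from above, in exact arithmetic.
table = [[F(12), F(5)],    # sore throat:    sick, not sick
         [F(4), F(55)]]    # no sore throat: sick, not sick
total = sum(sum(row) for row in table)

p_sick_direct = (table[0][0] + table[1][0]) / total

p_st = sum(table[0]) / total                      # Pr(sore throat)
p_sick_given_st = table[0][0] / sum(table[0])     # 12/17
p_sick_given_nst = table[1][0] / sum(table[1])    # 4/59

# Law of total probability (the iterated-expectation identity for indicators)
p_sick_iterated = p_sick_given_st * p_st + p_sick_given_nst * (1 - p_st)
print(p_sick_direct, p_sick_iterated)   # both equal 4/19, about 0.2105
```

The direct marginal and the iterated calculation agree exactly, not just to three decimal places.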
| null | CC BY-SA 3.0 | null | 2011-06-05T08:20:21.983 | 2011-06-05T08:20:21.983 | null | null | 2392 | null |
11588 | 2 | null | 11562 | 4 | null |
### Overview
Usually when I think of multiple raters assessing multiple objects, I think of "bias" as a mean difference in expected rating of a particular judge from the mean of a hypothetical population of judges.
This is a rather statistical definition of bias, which does not necessarily correspond to everyday definitions of bias, which would presumably also include the notion of failure to impartially apply relevant standards.
### Basic ideas
Bearing in mind that there is probably an established literature on this, these are the ideas that came to my mind:
- Compare mean rating of each judge
is a given judge harsher or more lenient on average?
- Compare standard deviation or variance of each judge
is the judge differentiating to the extent that is expected or in ways consistent with other judges?
- For each judge, correlate that judge's ratings with the mean of all other judges, and use the correlation as an index of the validity of that judge's ratings
is the judge identifying quality in the same way as other judges?
- Build a model predicting ratings for contestant i by judge j and record the residuals; large absolute residuals could be excluded from some overall rating. The model could be as simple as an ANOVA predicting response for contestant i by judge j using just the main effects (no interaction effects).
is a judge responding in an uncharacteristic manner for a particular contestant?
The mean approach is what I think of as bias.
The residuals approach will capture what you are interested in.
### Basic implementation in R
I hacked this out in a few minutes, so hopefully there aren't any bugs (but use at your own risk).
```
# Import data
x <- structure(list(contestant = c(1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L,
2L, 2L, 3L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 4L), judge = c(1L,
2L, 3L, 4L, 5L, 1L, 2L, 3L, 4L, 5L, 1L, 2L, 3L, 4L, 5L, 1L, 2L,
3L, 4L, 5L), rating = c(83.03, 67.15, 72.05, 86.95, 44, 96.5,
89.9, 84.6, 93.3, 65.15, 88.5, 85.36, 78.95, 88, 52.45, 90.5,
89.85, 85, 94.1, 96.05)), .Names = c("contestant", "judge", "rating"
), class = "data.frame", row.names = c(NA, -20L))
> # Mean: Judge's Mean rating - i.e., bias
round(tapply(x$rating, x$judge, function(X) mean(X)), 1)
1 2 3 4 5
89.6 83.1 80.2 90.6 64.4
```
This shows that judge 5 is harsh, and perhaps also that judges 1 and 4 are too lenient.
```
> # SD: Judge's SD rating i.e., excessive or insufficient variability in ratings
round(tapply(x$rating, x$judge, function(X) sd(X)), 1)
1 2 3 4 5
5.6 10.8 6.1 3.6 22.8
```
This shows that judge 5 is vastly more variable, though the other judges also differ in their variability quite a lot.
```
> # Correlation
judgecor <- list()
for (i in unique(x$judge)) {
contestant_mean <- tapply(
x[x$judge != i, "rating"], x[x$judge != i, "contestant"],
function(X) mean(X))
judgecor[[as.character(i)]] <- cor(x[x$judge == i, "rating"], contestant_mean)
}
round(unlist(judgecor), 2)
1 2 3 4 5
0.70 0.84 0.96 0.95 0.73
```
Judges 1 and 5 are less consistent with the other judges.
```
> # Residuals
fit <- lm(rating~factor(judge)+factor(contestant), x)
xres <- data.frame(x, res=residuals(fit))
xres$absres <- abs(xres$res)
# Overview of problematic ratings
head(xres[order(xres$absres, decreasing=TRUE), ], 5)
contestant judge rating res absres
20 4 5 96.05 22.107 22.107
5 1 5 44.00 -9.479 9.479
15 3 5 52.45 -9.045 9.045
16 4 1 90.50 -8.663 8.663
4 1 4 86.95 7.296 7.296
```
This shows the largest five absolute residuals in ratings after taking out mean contestant and mean rater effects.
It shows clearly that the rating by judge 5 on contestant 4 was an extreme outlier, relative to the other residuals.
| null | CC BY-SA 3.0 | null | 2011-06-05T10:49:37.833 | 2011-06-05T15:40:09.920 | 2011-06-05T15:40:09.920 | 183 | 183 | null |
11589 | 2 | null | 11494 | 5 | null | If I understand your question correctly, you are wondering why you got different p-values from your t-tests when they are carried out as post-hoc tests versus as separate tests. But did you control the [FWER](http://www.utdallas.edu/~herve/Abdi-Bonferroni2007-pretty.pdf) in the second case (because this is what is done with the step-down Sidak-Holm method)? Because, in the case of simple t-tests, the t-values won't change, unless you use a different pooling method for computing the variance in the denominator, but the p-value of the unprotected tests will be lower than the corrected one.
This is easily seen with the Bonferroni adjustment, since we multiply the observed p-value by the number of tests. With step-down methods like [Holm-Sidak](http://www.utdallas.edu/~herve/Abdi-Bonferroni2007-pretty.pdf), the idea is rather to sort the null hypothesis tests by increasing p-values and correct the alpha value with the Sidak correction factor in a stepwise manner ($\alpha’ = 1 - (1 - \alpha)^{1/k}$, with $k$ the number of remaining comparisons, updated after each step). Note that, in contrast to the Bonferroni-Holm method, control of the FWER is only guaranteed when comparisons are independent. A more detailed description of the different kinds of corrections for multiple comparisons is available here: [Pairwise Comparisons in SAS and SPSS](http://www.uky.edu/ComputingCenter/SSTARS/www/documentation/MultipleComparisons_3.htm#b13).
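A sketch of the step-down procedure just described (a hypothetical implementation for illustration; for real use, statsmodels' multipletests offers a vetted 'holm-sidak' method):

```python
import numpy as np

# Step-down Sidak-Holm: test p-values in increasing order against the
# level 1 - (1 - alpha)^(1/(m - step)), stopping at the first failure.
def holm_sidak(pvals, alpha=0.05):
    pvals = np.asarray(pvals, dtype=float)
    m = pvals.size
    reject = np.zeros(m, dtype=bool)
    for step, idx in enumerate(np.argsort(pvals)):
        level = 1 - (1 - alpha) ** (1.0 / (m - step))
        if pvals[idx] <= level:
            reject[idx] = True
        else:
            break   # once one test fails, all larger p-values fail too
    return reject
```

For instance, with p-values (0.001, 0.03, 0.2), only the first survives: the second is compared against $1-(1-0.05)^{1/2} \approx 0.0253$ and fails, which stops the procedure.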
| null | CC BY-SA 3.0 | null | 2011-06-05T11:03:53.713 | 2011-06-05T11:03:53.713 | null | null | 930 | null |
11590 | 1 | null | null | 4 | 2053 | I am searching for a good criterion to measure the "goodness of fit" in generalized linear models. To make clear: I am not searching for a criterion which gives me an answer to the question "does overdispersion occur?". What do you think about Nagelkerke's pseudo R-squared? Any thought would be appreciated!
| Goodness of fit in GLMs | CC BY-SA 3.0 | null | 2011-06-05T11:51:37.503 | 2012-01-06T16:08:13.557 | 2011-06-05T15:27:10.857 | 919 | 4496 | [
"r",
"regression",
"generalized-linear-model",
"goodness-of-fit"
] |
11592 | 1 | null | null | 9 | 2103 | Can anyone tell me the factors that affect the memory requirements of $k$-means clustering with a bit of explanation?
| Memory requirements of $k$-means clustering | CC BY-SA 3.0 | null | 2011-06-05T13:37:48.747 | 2014-10-03T14:51:07.120 | 2013-11-24T15:21:13.467 | 7290 | 4879 | [
"clustering",
"k-means"
] |
11593 | 2 | null | 6794 | 2 | null | RStudio (rstudio.org) makes things quite easy assuming LaTeX is already installed on your system. There is a PDF button that runs the code through Sweave then runs it through pdflatex and launches a pdf viewer.
| null | CC BY-SA 3.0 | null | 2011-06-05T14:07:24.403 | 2011-06-05T14:07:24.403 | null | null | 4253 | null |
11594 | 1 | null | null | 3 | 724 | Let's say I want to calculate the information content of a particular message. What, apart from the message itself, has to be taken into account in doing so, and what data would I need to collect to perform my calculation?
| Calculating the information contained in a message | CC BY-SA 3.0 | null | 2011-06-05T14:20:57.853 | 2011-07-07T04:27:11.073 | null | null | 4879 | [
"machine-learning"
] |
11595 | 1 | 11777 | null | 9 | 9861 | I've got a question concerning whether or not to use an offset. Assume a very simple model, where you want to describe the (overall) number of goals in hockey. So you have goals, the number of games played, and a dummy variable "striker" which is equal to 1 if the player is a striker and 0 otherwise. So which of the following models is correctly specified?
- goals=games+striker , or
- goals=offset(games)+striker
Again, the goals are overall goals and the number of games are overall games for a single player. For example, one player might have 50 goals in 100 games, another 20 goals in 50 games, and so on.
What am I supposed to do when I'd like to estimate the number of goals?
Is it really necessary to use an offset here?
References:
- See this previous question discussing when to use offsets in Poisson regression in general.
| Whether to use an offset in a Poisson regression when predicting total career goals scored by hockey players | CC BY-SA 3.0 | null | 2011-06-05T14:26:51.900 | 2014-08-31T21:32:50.907 | 2017-04-13T12:44:33.237 | -1 | 4496 | [
"r",
"regression",
"poisson-distribution",
"generalized-linear-model",
"count-data"
] |
11596 | 1 | 11601 | null | 10 | 3049 | Especially in the computer-science oriented side of the machine learning literature, AUC (area under the receiver operator characteristic curve) is a popular criterion for evaluating classifiers. What are the justifications for using the AUC? E.g. is there a particular loss function for which the optimal decision is the classifier with the best AUC?
| Rationale of using AUC? | CC BY-SA 3.0 | null | 2011-06-05T14:52:53.700 | 2019-10-10T04:25:54.453 | 2011-06-05T20:50:52.817 | null | 3567 | [
"machine-learning",
"roc"
] |
11597 | 2 | null | 6538 | 3 | null | Regarding the measurement of your knowledge: You could attend some data mining / data analysis competitions, such as [1](http://www.kdnuggets.com/datasets/competitions.html), [2](http://www.kaggle.com), [3](http://www.research-garden.de), [4](http://www.innocentive.com/), and see how you score compared to others.
There are a lot of pointers to textbooks on mathematical statistics in the answers. I would like to add as relevant topics:
- the empirical social research component, which comprise sampling theory, socio-demographic and regional standards
- data management, which includes knowlegde on databases (writing SQL queries, common database schemes)
- communication, how to present results in a way the audience stays awake (visualization methods)
Disclaimer: I am not a statistician; these are just my 2 cents.
| null | CC BY-SA 3.0 | null | 2011-06-05T14:57:27.867 | 2011-06-05T14:57:27.867 | null | null | 573 | null |
11598 | 2 | null | 11595 | 1 | null | A few simple points not directly addressing your question about offsets:
- I'd have a look at whether number of games is correlated with mean goals scored. In many elite goal scoring sports that I can think of (e.g., soccer, Australian rules football, etc.) I would predict that longevity of a career is related to the success of a career. And at least for players in goal scoring roles, success is related to number of goals scored.
If this is true, then number of games would capture two effects. One would relate to the mere fact that more games played means more opportunities to score goals; and the other would capture skill-related effects.
You could examine the relationship between number of games and mean goals scored (e.g., goals / number of games) to explore this. I think this has substantive implications for any modelling that you do.
- My instincts are to convert the dependent variable into mean goals per game. I realise that you would have more precise measurement of a player's skill for those who played more games, so maybe that would be an issue. Depending on the precision in your model that you desire, and the resulting distribution of player means, you might be able to rely on standard linear modelling techniques. But perhaps this is a bit too applied for your purposes, and perhaps you have reasons for wanting to model total goals scored.
| null | CC BY-SA 3.0 | null | 2011-06-05T15:14:42.277 | 2011-06-05T15:14:42.277 | null | null | 183 | null |
11599 | 2 | null | 11568 | 2 | null | The problem with allowing any distribution is that it could have a tiny chance of yielding a huge value. That eliminates any possibility of testing the mean with satisfactory confidence.
Here are the details. Choose a unit of measurement in which $y$ is hugely greater than $1$. Let $\alpha$ be the desired significance for the hypothesis test ($0 \lt \alpha \lt 1$) and $n$ be the sample size. Choose any $p$ for which $0 \lt p \lt 1 - \alpha^{1/n}$ and define $\mu = 1 + y/p$. Consider the two-point distribution for which $1$ has probability $1-p$ and $\mu$ has probability $p$. The chance that a sample of size $n$ from this distribution consists entirely of $1$s is
$$(1-p)^n \gt (\alpha^{1/n})^n = \alpha,$$
yet its expectation is
$$1(1-p) + \mu (p) = (1-p) + (1 + y/p)p = y+1 \gt y.$$
Because $y$ can be made arbitrarily large compared to $1$, no hypothesis test of any positive power will conclude that the true mean exceeds $y$ when $n$ $1$s are observed. Therefore the test will fail to detect that the mean exceeds $y$ with probability greater than $\alpha$ when this two-point distribution is the true distribution. Because this analysis places no restrictions on $\alpha$ or $n$, this proves that no test with positive power, with any amount of sampling, can achieve any positive level of significance.
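A numeric instance of this construction may help (the values of $\alpha$, $n$, and $y$ below are arbitrary choices within the stated constraints):

```python
# Numeric instance of the two-point construction above.
alpha, n, y = 0.05, 10, 1000.0

p_max = 1 - alpha ** (1 / n)      # p must satisfy 0 < p < 1 - alpha^(1/n)
p = 0.25
assert 0 < p < p_max

mu = 1 + y / p                    # the second support point, mu = 4001

# A sample of n ones occurs with probability greater than alpha...
prob_all_ones = (1 - p) ** n      # 0.75**10, about 0.0563 > 0.05
# ...yet the distribution's mean exceeds y.
mean = 1 * (1 - p) + mu * p       # equals y + 1 = 1001.0
```

So with probability above $\alpha$ the sample gives no hint that the mean exceeds $y$, even though it does.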
| null | CC BY-SA 3.0 | null | 2011-06-05T15:18:15.483 | 2011-06-05T15:18:15.483 | null | null | 919 | null |
11600 | 2 | null | 11551 | 3 | null | Recently I started to keep the data in a sqlite database, access the database directly from R using sqldf and view / edit with a database tool named [tksqlite](http://wiki.tcl.tk/17603)
Another option is to export the data and view / edit with [Google Refine](http://code.google.com/p/google-refine/)
| null | CC BY-SA 3.0 | null | 2011-06-05T16:28:49.587 | 2011-06-05T16:28:49.587 | null | null | 573 | null |
11601 | 2 | null | 11596 | 15 | null | For binary classifiers $C$ used for ranking (i.e. for each example $e$ we have $C(e)$ in the interval $[0, 1]$) from which the AUC is measured the AUC is equivalent to the probability that $C(e_1) > C(e_0)$ where $e_1$ is a true positive example and $e_0$ is a true negative example. Thus, choosing a model with the maximal AUC minimizes the probability that $C(e_0) \geq C(e_1)$. That is, minimizes the loss of ranking a true negative at least as large as a true positive.
| null | CC BY-SA 3.0 | null | 2011-06-05T16:43:27.907 | 2011-06-05T16:43:27.907 | null | null | 3232 | null |
11602 | 1 | 26535 | null | 193 | 69786 | TL:DR: Is it ever a good idea to train an ML model on all the data available before shipping it to production? Put another way, is it ever ok to train on all data available and not check if the model overfits, or get a final read of the expected performance of the model?
---
Say I have a family of models parametrized by $\alpha$. I can do a search (e.g. a grid search) on $\alpha$ by, for example, running k-fold cross-validation for each candidate.
The point of using cross-validation for choosing $\alpha$ is that I can check if a learned model $\beta_i$ for that particular $\alpha_i$ had e.g. overfit, by testing it on the "unseen data" in each CV iteration (a validation set). After iterating through all $\alpha_i$'s, I could then choose a model $\beta_{\alpha^*}$ learned for the parameters $\alpha^*$ that seemed to do best on the grid search, e.g. on average across all folds.
Now, say that after model selection I would like to use all the data that I have available in an attempt to ship the best possible model in production. For this, I could use the parameters $\alpha^*$ that I chose via grid search with cross-validation, and then, after training the model on the full ($F$) dataset, I would get a single new learned model $\beta^{F}_{\alpha^*}$
The problem is that, if I use my entire dataset for training, I can't reliably check if this new learned model $\beta^{F}_{\alpha^*}$ overfits or how it may perform on unseen data. So is this at all good practice? What is a good way to think about this problem?
| Training on the full dataset after cross-validation? | CC BY-SA 4.0 | null | 2011-06-05T16:50:50.747 | 2020-05-31T14:38:02.733 | 2020-05-31T14:38:02.733 | 2798 | 2798 | [
"machine-learning",
"cross-validation",
"model-selection"
] |
11604 | 2 | null | 11548 | 2 | null | Simply build an ARIMA MODEL that separate signal from noise incorporating any identifiable deterministic structure such as changes in levels/trends/seaonal pulses/parameter or variance change over time. Develop a prediction for the next 5 days and use the uncertainty in that sum to create possible bounds. Compare the actual sum of the "new five readings" and compute the pobability of yielding a value as "high" or as diverse as this.
| null | CC BY-SA 3.0 | null | 2011-06-05T17:20:43.337 | 2011-06-06T18:39:55.273 | 2011-06-06T18:39:55.273 | 3382 | 3382 | null |
11605 | 2 | null | 11602 | 18 | null | I believe that Frank Harrell would recommend bootstrap validation rather than cross validation. Bootstrap validation would allow you to validate the model fitted on the full data set, and is more stable than cross validation. You can do it in R using `validate` in Harrell's `rms` package.
See the book "Regression Modeling Strategies" by Harrell and/or "An Introduction to the Bootstrap" by Efron and Tibshirani for more information.
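The optimism-corrected bootstrap that validate implements can be sketched as follows (a simplified, numpy-only illustration; the linear model, the R² metric, and all sizes are assumptions for the example, not Harrell's actual implementation):

```python
import numpy as np

# Optimism-corrected bootstrap validation, simplified in the spirit of
# rms::validate: fit on each bootstrap resample, measure how much better
# the resample fit looks on its own data than on the original data, and
# subtract that average optimism from the apparent performance.
rng = np.random.default_rng(2)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta = np.array([2.0, -1.0, 0.5, 0.0, 0.0])
y = X @ beta + rng.normal(size=n)

def fit(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def rsq(X, y, b):
    resid = y - X @ b
    tss = ((y - y.mean()) ** 2).sum()
    return 1 - (resid @ resid) / tss

b_full = fit(X, y)                 # the model fitted on the FULL data set
apparent = rsq(X, y, b_full)

optimism = []
for _ in range(200):
    idx = rng.integers(0, n, size=n)         # bootstrap resample
    b = fit(X[idx], y[idx])
    optimism.append(rsq(X[idx], y[idx], b) - rsq(X, y, b))

corrected = apparent - np.mean(optimism)     # validated estimate for b_full
```

Note that the model being validated is the one trained on all the data, which is exactly what makes this attractive for the question above.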
| null | CC BY-SA 3.0 | null | 2011-06-05T19:13:10.803 | 2011-06-05T19:13:10.803 | null | null | 3835 | null |
11606 | 2 | null | 11602 | 6 | null | What you describe is not cross-validation, but rather some kind of stochastic optimization.
The idea of CV is to simulate performance on unseen data by performing several rounds of building the model on a subset of objects and testing on the remaining ones. The somewhat averaged results of all rounds approximate the performance of a model trained on the whole set.
In your case of model selection, you should perform a full CV for each parameter set, and thus get an on-full-set performance approximation for each setup -- seemingly the thing you wanted to have.
However, note that it is not at all guaranteed that the model with the best approximated accuracy will in fact be the best -- you may cross-validate the whole model selection procedure to see whether there exists some range in parameter space for which the differences in model accuracies are not significant.
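The full-CV-per-parameter-set workflow can be sketched as follows (a toy illustration: the ridge penalty lambda stands in for the abstract parameter $\alpha$, and the data and grid are made up):

```python
import numpy as np

# k-fold CV to choose a parameter (a ridge penalty lambda, standing in for
# alpha), then a final refit on the full data with the winning value.
rng = np.random.default_rng(0)
n, p = 120, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.0, 0.5]
y = X @ beta + rng.normal(size=n)

def ridge_fit(X, y, lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_mse(lam, k=5):
    folds = np.array_split(np.arange(n), k)
    errs = []
    for fold in folds:
        train = np.setdiff1d(np.arange(n), fold)
        b = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((y[fold] - X[fold] @ b) ** 2))
    return np.mean(errs)

lams = [0.01, 0.1, 1.0, 10.0, 100.0]
best = min(lams, key=cv_mse)          # alpha* chosen by grid search + CV
final_beta = ridge_fit(X, y, best)    # beta^F_{alpha*}: refit on all data
```

The CV scores estimate the performance of the procedure at each parameter value; the final refit itself gets no fresh held-out evaluation, which is precisely the tension the question raises.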
| null | CC BY-SA 3.0 | null | 2011-06-05T20:49:08.340 | 2011-06-05T20:49:08.340 | null | null | null | null |
11607 | 1 | 11608 | null | 4 | 2211 | I'm trying to fit two equations with nls() function in R. The two functions are:
$f(x) = c_{1} \exp\left(-\left(\frac{x-\mu}{\sigma_{(x)}}\right)^2\right)$
where $\sigma_{(x)} = \sigma_{11}$ if $x \le \mu$ and $\sigma_{(x)} = \sigma_{12}$ if $x > \mu$
and
$f(x) = a K \exp\left(- \frac{a}{b} \exp\left(-b x\right) - bx\right)$
Below is my attempt with factitious data:
```
x <- seq(from = 17, to = 47, by = 5)
y <- c(26.2, 173.6, 233.9, 185.9, 115.4, 62.0, 21.7)
Data <- data.frame(y, x)
Fit1 <- nls(formula = y ~ if (x <= Mu) Mean <- c1*exp(-((x-Mu)/Sigma11)^2) else Mean <- c1*exp(-((x-Mu)/Sigma12)^2),
data = Data, start = list(c1 = 240, Mu = 25, Sigma11 = 5, Sigma12 = 14), trace = TRUE)
Fit2 <- nls(formula = y~K*a*exp(-(a/b)*exp(-b*x)-b*x), data = Data,
start = list(K=4250, a=10, b=0.1), trace = TRUE)
```
Both codes produce Error and Warning messages. Any help to figure out these problems will be highly appreciated. Thanks
| Fitting conditional functions in nls | CC BY-SA 3.0 | null | 2011-06-05T20:57:37.040 | 2011-06-05T22:07:13.627 | 2011-06-05T21:47:17.080 | null | 3903 | [
"r",
"modeling",
"nonlinear-regression",
"nls"
] |
11608 | 2 | null | 11607 | 4 | null | In the first case, nls will not digest any `if`s or other higher expressions... you may use `ifelse`, however this may make this function too complex to effectively fit it -- `nls` is not a magic wand.
In the second case, the standard algorithm dies on numerical error -- the usual approach in this case is to alter starting point or change the used method; for instance
```
Fit2<-nls(y~K*a*exp(-(a/b)*exp(-b*x)-b*x),Data,
start=list(K=4250,a=10,b=0.1),trace=T,algorithm="port")
```
does converge (consult `?nls` for the list of available algorithms).
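For the first model, a sketch of the `ifelse` approach (untested beyond this toy data; convergence is not guaranteed, and the `"port"` algorithm is just one plausible choice):

```r
x <- seq(from = 17, to = 47, by = 5)
y <- c(26.2, 173.6, 233.9, 185.9, 115.4, 62.0, 21.7)
Data <- data.frame(y, x)

# ifelse() is vectorized, so nls() can evaluate it inside the formula,
# selecting Sigma11 or Sigma12 depending on which side of Mu each x lies
Fit1 <- nls(y ~ c1 * exp(-((x - Mu) / ifelse(x <= Mu, Sigma11, Sigma12))^2),
            data = Data,
            start = list(c1 = 240, Mu = 25, Sigma11 = 5, Sigma12 = 14),
            algorithm = "port", trace = TRUE)
```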
| null | CC BY-SA 3.0 | null | 2011-06-05T22:07:13.627 | 2011-06-05T22:07:13.627 | null | null | null | null |
11609 | 1 | 11727 | null | 49 | 6577 | My current understanding of the notion "confidence interval with confidence level $1 - \alpha$" is that if we tried to calculate the confidence interval many times (each time with a fresh sample), it would contain the correct
parameter a fraction $1 - \alpha$ of the time.
Though I realize that this is not the same as "probability that the true parameter lies in this interval", there's something I want to clarify.
[Major Update]
Before we calculate a 95% confidence interval, there is a 95% probability that the interval we calculate will cover the true parameter. After we've calculated the confidence interval and obtained a particular interval $[a,b]$, we can no longer say this. We can't even make some sort of non-frequentist argument that we're 95% sure the true parameter will lie in $[a,b]$; for if we could, it would contradict counterexamples such as this one: [What, precisely, is a confidence interval?](https://stats.stackexchange.com/questions/6652/what-precisely-is-a-confidence-interval/6801#6801)
I don't want to make this a debate about the philosophy of probability; instead, I'm looking for a precise, mathematical explanation of the how and why seeing the particular interval $[a,b]$ changes (or doesn't change) the 95% probability we had before seeing that interval. If you argue that "after seeing the interval, the notion of probability no longer makes sense", then fine, let's work in an interpretation of probability in which it does make sense.
More precisely:
Suppose we program a computer to calculate a 95% confidence interval. The computer does some number crunching, calculates an interval, and refuses to show me the interval until I enter a password. Before I've entered the password and seen the interval (but after the computer has already calculated it), what's the probability that the interval will contain the true parameter? It's 95%, and this part is not up for debate: this is the interpretation of probability that I'm interested in for this particular question (I realize there are major philosophical issues that I'm suppressing, and this is intentional).
But as soon as I type in the password and make the computer show me the interval it calculated, the probability (that the interval contains the true parameter) could change. Any claim that this probability never changes would contradict the counterexample above. In this counterexample, the probability could change from 50% to 100%, but...
- Are there any examples where the probability changes to something other than 100% or 0% (EDIT: and if so, what are they)?
- Are there any examples where the probability doesn't change after seeing the particular interval $[a,b]$ (i.e. the probability that the true parameter lies in $[a,b]$ is still 95%)?
- How (and why) does the probability change in general after seeing the computer spit out $[a,b]$?
[Edit]
Thanks for all the great answers and helpful discussions!
| Clarification on interpreting confidence intervals? | CC BY-SA 3.0 | null | 2011-06-05T22:41:40.083 | 2019-10-21T16:33:13.347 | 2017-04-13T12:44:51.217 | -1 | 4895 | [
"confidence-interval"
] |
11610 | 2 | null | 11585 | 1 | null | I'm focussing on your second last para and the fact that you haven't taken a stats class, and assuming you're mainly interested in saying something about the average runtime of your code?
The following is the simplest possible approach - I suspect that this is something you already know but it's not entirely clear to me from your post whether that's the case or not:
Run the code $N$ times and average the runtimes over those $N$ runs.
The [Central Limit Theorem](http://en.wikipedia.org/wiki/Central_limit_theorem) then implies that the sample mean you calculated came from a normal distribution (that is, unless the distribution of runtimes is rather strange then the sample means of collections of runtimes will be approximately normally distributed). From there you can use standard confidence bounds on normally distributed data to say the sort of things you'd like to about the runtime ([example](http://en.wikipedia.org/wiki/Confidence_interval#Practical_example)).
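To make this concrete, a minimal sketch in R (the runtimes below are made-up measurements; in practice you would collect them with, e.g., `system.time()`):

```r
# Hypothetical: N = 10 measured runtimes in seconds
runtimes <- c(1.02, 0.98, 1.10, 1.05, 0.97, 1.03, 1.08, 1.01, 0.99, 1.04)

# Sample mean and a t-based 95% confidence interval for the mean runtime
mean(runtimes)
t.test(runtimes, conf.level = 0.95)$conf.int
```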
| null | CC BY-SA 3.0 | null | 2011-06-05T22:59:38.190 | 2011-06-05T23:06:31.350 | 2011-06-05T23:06:31.350 | 3248 | 3248 | null |
11611 | 1 | null | null | 9 | 383 | If there are 0's in the contingency table and we're fitting nested poisson/loglinear models (using R's `glm` function) for a likelihood ratio test, do we need to adjust the data prior to fitting the glm models (e.g. add 1/2 to all the counts)? Obviously some parameters cannot be estimated without some adjustment, but how does the adjustment/lack of adjustment effect the LR test?
| Do zero counts need to be adjusted for a likelihood ratio test of poisson/loglinear models? | CC BY-SA 3.0 | null | 2011-06-06T00:31:48.050 | 2012-05-02T01:58:45.313 | 2011-06-06T05:40:21.567 | 2116 | 4896 | [
"regression",
"poisson-distribution",
"generalized-linear-model",
"likelihood-ratio",
"log-linear"
] |
11612 | 2 | null | 11590 | 1 | null | It will depend on what kind of GLM you're using and your data. For example, the Wald chi-square and likelihood test are good statistics for categorical data.
| null | CC BY-SA 3.0 | null | 2011-06-06T03:13:05.647 | 2011-06-06T03:13:05.647 | null | null | 4897 | null |
11613 | 2 | null | 11609 | 4 | null | The reason that the confidence interval doesn't specify "the probability that the true parameter lies in the interval" is that once the interval is specified, the parameter either lies in it or it doesn't. However, for a 95% confidence interval, for example, you have a 95% chance of creating a confidence interval that does contain the value. This is a pretty difficult concept to grasp, so I may not be articulating it well. See [http://frank.itlab.us/datamodel/node39.html](http://frank.itlab.us/datamodel/node39.html) for further clarification.
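A small simulation sketches this interpretation (the true mean, sample size, and number of replications here are arbitrary choices):

```r
set.seed(1)
mu <- 5  # the "true parameter", known here only because we simulate
covered <- replicate(10000, {
  x <- rnorm(30, mean = mu, sd = 2)
  ci <- t.test(x)$conf.int        # a fresh 95% confidence interval
  ci[1] <= mu && mu <= ci[2]      # did this interval cover mu?
})
mean(covered)  # should be close to 0.95
```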
| null | CC BY-SA 3.0 | null | 2011-06-06T03:21:50.793 | 2011-06-06T03:21:50.793 | null | null | 4897 | null |
11615 | 1 | null | null | 1 | 1206 | My research design is as follows:
I have these Between Subjects IVs:
- Experiment Condition - 5 levels
- Facebook User status - 2 levels (yes/no)
My supervisor also wants me to see if there are significant effects of:
- Gender - 2 levels
- Relationship Status - 2 levels
- Relationship Satisfaction - 2 levels
And these within subjects IVs:
- Mood at Time 1
- Mood at Time 2
I also have three DVs
- Attraction level
- Frequency of Thought
- Mood State
I am looking at the effect of a certain condition on Facebook users, vs non Facebook users. Within this I want to look at whether there are differences between the genders; whether there is a difference between people in a relationship vs. not in a relationship and the effect of relationship satisfaction. It's a big study, but I'm not sure about how to describe the design.
### Question
- How should I describe the design of this study?
### Initial thoughts
I'm thinking it has to be a MANOVA design, however what I am finding confusing are the various IVs.
Is it a 2 x 5 Factorial MANOVA? I'm just so confused with all the IVs flying around.
| How to describe a design with a mix of experimental conditions, predictor variables, and multiple outcome variables? | CC BY-SA 3.0 | null | 2011-06-06T04:33:01.347 | 2011-06-06T08:48:43.040 | 2011-06-06T06:17:50.237 | 183 | 4899 | [
"mixed-model",
"experiment-design",
"manova"
] |
11616 | 1 | null | null | 5 | 956 |
### Context
I was talking to a researcher in the following situation.
- Participants (n = 500) were sampled from schools.
- Participants came from around 50 different schools.
- The number of participants per school varied with some schools supplying 20 or 30 participants, but a few schools only supplying 3 or 4 or 5 participants.
- The researcher was trying to assess whether there was a substantial violation of the independence of observations assumption. Thus, they were looking at the intra-class correlation of core outcome variables based on the effect of school.
### Question
- Is it problematic to include groups in an intra-class correlation analysis with small numbers per group? If so, what are the implications of this? What might a rule of thumb regarding a minimum number for inclusion be?
- To take it to the extreme, what if some groups only supplied a single participant?
- If there are groups with sample sizes below a given threshold, what is a good subsequent course of action? Remove the group from the analysis? Collapse small groups into an "other" group?
- How would any recommendations given relate to the assessment of the independence of observations assumption?
| Assessing independence of observations using intraclass correlation when some groups have small group sample sizes | CC BY-SA 3.0 | null | 2011-06-06T04:38:14.590 | 2011-06-07T16:24:48.803 | 2011-06-07T14:20:50.933 | 183 | 183 | [
"independence",
"intraclass-correlation"
] |
11617 | 2 | null | 4600 | 1 | null |
- Stepwise regression: I generally would not use stepwise regression to analyse experimental data. Generally you are wanting to test quite specific hypotheses based on the factors that you have manipulated. Also, sample sizes are often smaller in experiments. If you do use stepwise regression, you should at least ensure that you adopt some procedure like cross-validation in order to get an unbiased estimate of your model fit.
- Variance explained: In general as @rolando2 has said partial eta squared describes the percentage of variance explained by a factor, and you could use the overall r-squared to describe the overall variance explained by the model. You may also want to look at omega-squared, because r-squared is biased (i.e., sample values on average are larger than true population values) and omega-squared aims not to be biased.
- Alternative effect size measure: In general, I'm not a big fan of variance explained measures of effect size within the context of experimental manipulation. I prefer unstandardised coefficients or d-based (standardised group mean difference) estimates of effect. This is because the variance explained is contingent on the particular levels that you choose for your experiment. Furthermore, it is often the case that manipulating more factors in a factorial design will reduce the variance explained estimate of each factor. d-based measures often have more meaning across contexts.
| null | CC BY-SA 3.0 | null | 2011-06-06T04:49:38.850 | 2011-06-06T04:49:38.850 | null | null | 183 | null |
11618 | 2 | null | 11594 | 2 | null | Maybe one needs to estimate the frequency $p(m)$ of occurrence of the message $m$. Then $\log \frac{1}{p(m)}$ is the information content of the message $m$.
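As a sketch (the message frequencies are made up; base-2 logs give the information content in bits):

```r
p <- c(a = 0.5, b = 0.25, c = 0.25)  # hypothetical message frequencies
log2(1 / p)                          # information content: a = 1 bit, b = c = 2 bits
```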
| null | CC BY-SA 3.0 | null | 2011-06-06T05:45:31.373 | 2011-06-06T05:45:31.373 | null | null | 3485 | null |
11619 | 2 | null | 11615 | 1 | null |
- There is a difference between the design and a statistical test. Your design presumably incorporates random assignment of participants to one of five conditions, actively sampling (perhaps an even number?) of facebook and non-facebook users, and to some extent the study of time.
- You might describe your design as a 5 by 2 by 2 mixed design (in the case of mood) with condition (5 levels) and facebook status (2 levels) as between subjects factors and time (2 levels) as a within subjects factor; and a 5 by 2 factorial design for the dependent variables only measured at time 2.
- Any description of your design should make it clear which between subjects factors were achieved through random allocation.
- In terms of statistical analyses, you may choose to run ANOVAs or MANOVAs and it sounds like you are interested in covariates (i.e., gender, relationship status, relationship satisfaction). I label these covariates because they were not part of either random allocation or the process of sampling participants.
| null | CC BY-SA 3.0 | null | 2011-06-06T06:15:20.720 | 2011-06-06T08:48:43.040 | 2011-06-06T08:48:43.040 | 183 | 183 | null |
11620 | 1 | null | null | 7 | 1522 | One of the data sets I deal with is quite strange. The data warehouse I downloaded the data from has a lot of 999999999 values in one of the variables. Apparently the computer system on which the data warehouse sits does not support storing null values, so they use 999999999 as the "null" value. Now if I just run `pretty` in R on the variable, it gives nonsensical ranges.
- Is there a package with a version of pretty that can deal with outliers by putting them in the range of, say, (100, High)?
| Is there an R package with a pretty function that can deal effectively with outliers? | CC BY-SA 3.0 | null | 2011-06-06T06:57:07.203 | 2011-06-07T07:37:50.610 | 2011-06-06T15:46:01.280 | 919 | 1126 | [
"r",
"outliers",
"missing-data"
] |
11621 | 2 | null | 10787 | 5 | null | If you don't like those options, have you considered using a boosting method instead? Given an appropriate loss function, boosting automatically recalibrates the weights as it goes along. If the stochastic nature of random forests appeals to you, stochastic gradient boosting builds that in as well.
| null | CC BY-SA 3.0 | null | 2011-06-06T09:25:21.090 | 2011-06-06T09:25:21.090 | null | null | 4862 | null |
11622 | 1 | 11625 | null | 13 | 2099 | I am following a course on Bayesian statistics using BUGS and R. Now, I already know BUGS, it's great but I am not really fond of using a separate program rather than just R.
I have read that there are a lot of new Bayesian packages in R. Is there a list or reference describing which packages exist for Bayesian statistics and what they do? And is there an R package alternative with the flexibility of BUGS?
| R only alternatives to BUGS | CC BY-SA 3.0 | null | 2011-06-06T09:44:31.190 | 2011-06-06T18:20:56.943 | null | null | 3094 | [
"r",
"bayesian",
"bugs"
] |
11623 | 1 | null | null | 3 | 180 | I have a set $E_{1}$, with finite cardinality $n$, of rectangular matrices which contain the useful MFCC coefficients generated from $n$ speech signals. Similarly, I have a set $E_{2}$ of the same cardinality as $E_{1}$, which is a collection of vectors of finite dimension containing the LPC of the same set of speech signals used to form $E_{1}$. Now $ D=\{ E_{1},E_{2} \}$ forms the database for the speaker recognition system.
When a test signal is given, MFCC $M_{i}$ and LPC $L_{i}$ are generated and the closest members $M_{j} \in E_{1}$ for MFCC and $L_{j} \in E_{2}$ for LPC are found using a distance function $d$. It is not necessary that $M_{j}$ and $L_{j}$ are the exact members of $E_{1}$ and $E_{2}$ respectively. It depends on the acoustic environment during the test phase.
- What is the distance function used in literature?
- If it's the $L_{2}$ norm, is there any other better measure which is "sensitive", so I can reduce the possibility of misclassification?
| A best measure for speaker recognition | CC BY-SA 3.0 | null | 2011-06-06T09:51:29.607 | 2018-03-02T09:19:29.267 | 2013-06-14T14:23:51.110 | 3826 | 4900 | [
"distance-functions",
"signal-processing",
"mfcc"
] |
11624 | 2 | null | 11622 | 9 | null | [Bayesian CRAN task view](http://cran.r-project.org/web/views/Bayesian.html)
| null | CC BY-SA 3.0 | null | 2011-06-06T10:14:28.820 | 2011-06-06T10:14:28.820 | null | null | 375 | null |
11625 | 2 | null | 11622 | 8 | null | You can take a look at the [MCMCglmm](http://cran.r-project.org/web/packages/MCMCglmm/index.html) package that comes with very nice vignettes. There's also a `bayesglm()` function for fitting Bayesian generalized linear models in the [arm](http://cran.r-project.org/web/packages/arm/index.html) package, by Andrew Gelman. I've also heard of a [future release](http://www.stat.columbia.edu/~cook/movabletype/archives/2011/04/multilevel_regr.html) of `blmer`/`bglmer` functions for hierarchical modeling in the same package.
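For instance, a minimal sketch with `bayesglm()` (assuming the arm package is installed; the data and formula are hypothetical):

```r
library(arm)  # provides bayesglm()

# Hypothetical data: a binary outcome and one predictor
d <- data.frame(x = rnorm(100))
d$y <- rbinom(100, 1, plogis(0.5 * d$x))

fit <- bayesglm(y ~ x, family = binomial, data = d)
summary(fit)
```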
| null | CC BY-SA 3.0 | null | 2011-06-06T10:29:06.210 | 2011-06-06T10:29:06.210 | null | null | 930 | null |
11626 | 2 | null | 11622 | 6 | null | A few people I know have been using [JAGS](http://calvin.iarc.fr/~martyn/software/jags/). The JAGS syntax is similar to BUGS.
| null | CC BY-SA 3.0 | null | 2011-06-06T10:42:47.093 | 2011-06-06T10:42:47.093 | null | null | 8 | null |
11627 | 2 | null | 11620 | 18 | null | If you're importing your data with a command like, say,
```
read.table('yourfile.txt', header=TRUE, ...)
```
you can indicate what values are to be considered as "null" or `NA` values, by specifying `na.strings = "999999999"`. We can also consider different values for indicating `NA` values. Consider the following file (`fake.txt`) where we want to treat "." and "999999999" as NA values:
```
1 2 .
3 999999999 4
5 6 7
```
then in R we would do:
```
> a <- read.table("fake.txt", na.strings=c(".","999999999"))
> a
V1 V2 V3
1 1 2 NA
2 3 NA 4
3 5 6 7
```
Otherwise, you can always filter your data as indicated by @Sacha in his comment. Here, it could be something like
```
a[a=="." | a==999999999] <- NA
```
Edit
In case there are multiple abnormal values that can possibly be observed in different columns with different values, but you know the likely range of admissible values, you can apply a function to each column. For example, define the following filter:
```
my.filter <- function(x, threshold=100) ifelse(x > threshold, NA, x)
```
then
```
a.filt <- apply(a, 2, my.filter)
```
will replace every value > 100 with NA in the matrix `a`.
Example:
```
> a <- replicate(10, rnorm(10))
> a[1,3] <- 99999999
> a[5,6] <- 99999999
> a[8,10] <- 99999990
> summary(a[,3])
Min. 1st Qu. Median Mean 3rd Qu. Max.
-1e+00 0e+00 0e+00 1e+07 1e+00 1e+08
> af <- apply(a, 2, my.filter)
> summary(af[,3])
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
-1.4640 -0.2680 0.4671 -0.0418 0.4981 0.7444 1.0000
```
It can be vector-based of course:
```
> summary(my.filter(a[,3], 500))
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
-1.4640 -0.2680 0.4671 -0.0418 0.4981 0.7444 1.0000
```
| null | CC BY-SA 3.0 | null | 2011-06-06T10:51:04.233 | 2011-06-07T07:37:50.610 | 2011-06-07T07:37:50.610 | 930 | 930 | null |
11628 | 1 | null | null | 17 | 7735 | I am analyzing scores given by participants attending an experiment. I want to estimate the reliability of my questionnaire which is composed of 6 items aimed at estimating the attitude of the participants towards a product.
I computed Cronbach's alpha treating all items as a single scale (alpha was about 0.6) and deleting one item at a time (max alpha was about 0.72). I know that alpha can be underestimated and overestimated depending on the number of items and the dimensionality of the underlying construct. So I also performed a PCA. This analysis revealed that there were three principal components explaining about 80% of the variance. So, my questions are all about how I can proceed now:
- Do I need to perform alpha computation on each of these dimensions?
- Do I have to remove the items affecting reliability?
Further, searching on the web I found there is another measure of reliability: Guttman's lambda 6.
- What are the main differences between this measure and alpha?
- What is a good value of lambda?
| Assessing reliability of a questionnaire: dimensionality, problematic items, and whether to use alpha, lambda6 or some other index? | CC BY-SA 3.0 | null | 2011-06-06T12:32:07.103 | 2016-05-04T06:34:51.360 | 2016-05-04T06:34:51.360 | 1352 | 4903 | [
"pca",
"reliability",
"scales",
"psychometrics",
"cronbachs-alpha"
] |
11629 | 2 | null | 11622 | 0 | null | Performance is the main reason people use WinBUGS / OpenBUGS / JAGS vs. packages like MCMCglmm. It is very hard and not practical to write an efficient Gibbs sampler in native R. There are packages that let you run BUGS models from an R script, notably [RBUGS](http://cran.r-project.org/web/packages/rbugs/index.html) and [BUGSParallel](http://code.google.com/p/bugsparallel/).
| null | CC BY-SA 3.0 | null | 2011-06-06T12:37:53.140 | 2011-06-06T12:37:53.140 | null | null | 4904 | null |
11630 | 2 | null | 11628 | 8 | null | Here are some general comments:
- PCA: The PCA analysis does not "reveal that there are three principal components". You chose to extract three dimensions, or you relied on some default rule of thumb (typically eigenvalues over 1) to decide how many dimensions to extract. In addition eigenvalues over one often extracts more dimensions than is useful.
- Assessing item dimensionality: I agree that you can use PCA to assess the dimensionality of the items. However, I find that looking at the scree plot can provide a better guidance for number of dimensions. You may want to check out this page by William Revelle on assessing scale dimensionality.
- How to proceed?
If the scale is well established, then you may want to leave it as is (assuming its properties are at least reasonable; although in your case 0.6 is relatively poor by most standards).
If the scale is not well established, then you should consider theoretically what the items are intended to measure and for what purpose you want to use the resulting scale. Given that you have only six items, you do not have much room to create multiple scales without dropping to worrying numbers of items per scale. Simultaneously, it is a smart idea to check whether there are any problematic items either based on floor, ceiling, or low reliability issues.
Also, you may want to check whether any items need to be reversed.
I put together some links to general resources on scale development that you may find helpful
The following addresses your specific questions:
- Do I need to perform alpha computation on each of these dimensions?
As you may gather from the above discussion, I don't think you should treat your data as if you have three dimensions. There are a range of arguments that you could make depending on your purposes and the details, so it's hard to say exactly what to do. In most cases, I'd be looking to create at least one good scale (perhaps deleting an item) rather than three unreliable scales.
- Do I have to remove the items affecting reliability?
It's up to you. If the scale is established, then you may choose not to. If your sample size is small, it might be an anomaly of random sampling. However, in general I'd be inclined to delete an item if it was really dropping your alpha from 0.72 to 0.60. I'd also check whether this problematic item isn't actually meant to be reversed.
I'll leave the discussion of lambda 6 ([discussed by William Revelle here](http://www.personality-project.org/r/html/alpha.html)) to others.
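If you work in R, a sketch with the psych package (simulated item responses here; `psych::alpha()` reports both Cronbach's alpha and Guttman's lambda 6 as `G6`):

```r
library(psych)

set.seed(42)
# Hypothetical data: 6 items driven by a single latent trait
latent <- rnorm(100)
items <- as.data.frame(sapply(1:6, function(i) latent + rnorm(100)))

rel <- alpha(items)
rel$total       # raw/standardized alpha and G6 for the full scale
rel$alpha.drop  # alpha if each item is dropped in turn
```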
| null | CC BY-SA 3.0 | null | 2011-06-06T13:31:05.527 | 2011-06-06T13:38:52.477 | 2011-06-06T13:38:52.477 | 183 | 183 | null |
11632 | 1 | 11640 | null | 11 | 9835 | Does anybody know what the formula for Cook's distance is? The original Cook's distance formula uses studentized residuals, so why does R use std. Pearson residuals when computing the Cook's distance plot for a GLM? I know that studentized residuals are not defined for GLMs, but what does the formula to compute Cook's distance look like?
Assume the following example:
```
numberofdrugs <- rcauchy(84, 10)
healthvalue <- rpois(84,75)
test <- glm(healthvalue ~ numberofdrugs, family=poisson)
plot(test, which=5)
```
What is the formula for Cook's distance? In other words, what is the formula to compute the red dashed line? And where does this formula for standardized Pearson residuals come from?

| What kind of residuals and Cook's distance are used for GLM? | CC BY-SA 3.0 | null | 2011-06-06T15:02:09.953 | 2017-03-04T13:19:06.257 | 2017-03-04T13:19:06.257 | 101426 | 4496 | [
"r",
"regression",
"generalized-linear-model",
"residuals",
"cooks-distance"
] |
11633 | 2 | null | 11622 | 5 | null | Second the Bayesian task view. I'd just add a vote for [MCMCpack](http://cran.r-project.org/web/packages/MCMCpack/index.html), a mature package which offers a variety of models. For the most part it's pretty well-documented too.
| null | CC BY-SA 3.0 | null | 2011-06-06T15:03:15.590 | 2011-06-06T18:20:56.943 | 2011-06-06T18:20:56.943 | 26 | 26 | null |
11634 | 1 | 11775 | null | 6 | 2430 | I am calculating the age of lake sediments at the base of a sediment core by dividing the total sediment mass of the core ($\mathrm{mg} \ \mathrm{cm}^{-2}$) by the sediment accumulation rate ($\mathrm{mg} \ \mathrm{cm}^{-2}\ \mathrm{y}^{-1}$).
Both the sediment mass and the accumulation rate have variation associated with them. The sediment mass is a mean of 3 samples and the accumulation rate is reported as $\pm 10\%$.
It is my understanding that I can calculate the error associated with age as:
$\sqrt{\left(\frac{\delta x}{x}\right)^2 + \left(\frac{\delta y}{y}\right)^2}$
where $\delta x$ and $\delta y$ are the relative error of the measurements being divided (i.e., $x$ and $y$).
The error associated with the sedimentation rate is $\pm\ 10\%$ but the error associated with the sediment mass is a standard deviation.
Are these forms of error compatible within the above error propagation formula?
Thank you.
| How do I calculate error propagation with different measures of error? | CC BY-SA 3.0 | null | 2011-06-06T15:08:15.090 | 2013-02-27T21:26:49.387 | 2012-12-01T08:42:40.557 | 17230 | 4048 | [
"error",
"error-propagation"
] |
11636 | 1 | null | null | 12 | 43189 | I was wondering: what are the differences between Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE) in determining the accuracy of a forecast? Which one is better? Thanks
| The difference between MSE and MAPE | CC BY-SA 3.0 | null | 2011-06-06T16:24:10.950 | 2017-03-24T17:36:03.393 | 2016-04-15T09:15:40.360 | 1352 | 4906 | [
"time-series",
"mse",
"mape"
] |
11637 | 2 | null | 10787 | 0 | null | Instead of downsampling the large classes, you can oversample the small classes! If the large classes have many times more observations than the small ones, the bias will be small. I do hope you can handle that supersized dataset.
You may also identify subsets of observations which carry the most information about the large classes; there are many possible procedures, the simplest of which, I think, is based on the nearest-neighbors method -- sampling observations conditioned on the neighborhood graph structure guarantees that the sample will have a probability density more similar to the original one.
randomForest is written in Fortran and C; the source code is available (http://cran.r-project.org/src/contrib/randomForest_4.6-2.tar.gz), but I can't spot the place where the entropy is computed.
PS: Oops -- randomForest uses Gini instead of entropy.
| null | CC BY-SA 3.0 | null | 2011-06-06T17:17:44.910 | 2011-06-07T22:57:09.510 | 2011-06-07T22:57:09.510 | 4908 | 4908 | null |
11638 | 2 | null | 11568 | 2 | null | Intervals between things like requests are often modeled well with exponential, gamma, and Weibull distributions. These can have pretty fat tails, so @whuber's concern is already accounted for, to some extent, when you calculate your confidence intervals.
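As a sketch, such distributions can be fitted by maximum likelihood with `MASS::fitdistr()` (the inter-arrival times below are simulated; in practice you would use your measured intervals):

```r
library(MASS)  # provides fitdistr()

set.seed(7)
intervals <- rgamma(500, shape = 2, rate = 0.5)  # fake inter-arrival times

fitdistr(intervals, "exponential")
fitdistr(intervals, "gamma")
fitdistr(intervals, "weibull")
```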
| null | CC BY-SA 3.0 | null | 2011-06-06T17:19:51.677 | 2011-06-06T17:19:51.677 | null | null | 4862 | null |
11640 | 2 | null | 11632 | 15 | null | If you take a look at the code (simply type `plot.lm`, without parentheses, or `edit(plot.lm)` at the R prompt), you'll see that [Cook's distances](http://en.wikipedia.org/wiki/Cook%27s_distance) are defined on line 44, with the `cooks.distance()` function. To see what it does, type `stats:::cooks.distance.glm` at the R prompt. There you see that it is defined as
```
(res/(1 - hat))^2 * hat/(dispersion * p)
```
where `res` are Pearson residuals (as returned by the `influence()` function), `hat` is the [hat matrix](http://en.wikipedia.org/wiki/Hat_matrix), `p` is the number of parameters in the model, and `dispersion` is the dispersion considered for the current model (fixed at one for logistic and Poisson regression, see `help(glm)`). In sum, it is computed as a function of the leverage of the observations and their standardized residuals.
(Compare with `stats:::cooks.distance.lm`.)
For a more formal reference you can follow references in the `plot.lm()` function, namely
>
Belsley, D. A., Kuh, E. and Welsch, R.
E. (1980). Regression Diagnostics.
New York: Wiley.
Moreover, about the additional information displayed in the graphics, we can look further and see that R uses
```
plot(xx, rsp, ... # line 230
panel(xx, rsp, ...) # line 233
cl.h <- sqrt(crit * p * (1 - hh)/hh) # line 243
lines(hh, cl.h, lty = 2, col = 2) #
lines(hh, -cl.h, lty = 2, col = 2) #
```
where `rsp` is labeled as Std. Pearson resid. in case of a GLM, Std. residuals otherwise (line 172); in both cases, however, the formula used by R is (lines 175 and 178)
```
residuals(x, "pearson") / (s * sqrt(1 - hii))
```
where `hii` contains the hat values (the diagonal of the hat matrix) returned by the generic function `lm.influence()`. This is the usual formula for std. residuals:
$$rs_j=\frac{r_j}{\sqrt{1-\hat h_j}}$$
where $j$ here denotes the $j$th covariate of interest. See e.g., Agresti Categorical Data Analysis, §4.5.5.
The next lines of R code draw a smoother for Cook's distance (`add.smooth=TRUE` in `plot.lm()` by default, see `getOption("add.smooth")`) and contour lines (not visible in your plot) for critical standardized residuals (see the `cook.levels=` option).
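To check the quoted formula against R's built-in computation, using the Poisson example from the question (the comparison should return TRUE up to numerical precision):

```r
res  <- residuals(test, type = "pearson")  # Pearson residuals
h    <- influence(test)$hat                # hat values
p    <- length(coef(test))                 # number of parameters
disp <- 1                                  # dispersion is fixed at 1 for Poisson

cd.manual <- (res / (1 - h))^2 * h / (disp * p)
all.equal(unname(cd.manual), unname(cooks.distance(test)))
```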
| null | CC BY-SA 3.0 | null | 2011-06-06T19:04:56.220 | 2011-06-06T19:11:28.970 | 2011-06-06T19:11:28.970 | 930 | 930 | null |
11641 | 2 | null | 11620 | 1 | null | I encounter this quite frequently when dealing with customer daily time series data. It appears that many accounting systems IGNORE days on which nothing occurred, i.e. no transactions were recorded for that day (time interval/bucket), and don't fill in a "0". Since time series analysis requires a reading for every interval/bucket, we need to inject a "0" for each omitted observation. Intervention detection is essentially a scheme to detect the anomaly and replace it with an expected value based on an identified profile/signal/prediction. If there are many of these "missing values" the system can break down. The problem becomes a little more complex when there is a strong day-of-the-week profile in the historical data and a sequential patch of values is not recorded, suggesting that replacement values be obtained by computing local daily averages as a precursor to fine-tuning these values.
| null | CC BY-SA 3.0 | null | 2011-06-06T19:23:01.337 | 2011-06-06T19:23:01.337 | null | null | 3382 | null |
11642 | 2 | null | 11372 | 12 | null | To sum up, with n=45 subjects you're left with correlation-based and multivariate descriptive approaches. However, since this questionnaire is supposed to be unidimensional, this is always a good start.
What I would do:
- Compute pairwise correlations for your 22 items; report the range and the median -- this will give an indication of the relative consistency of observed items responses (correlations above 0.3 are generally thought of as indicative of good convergent validity, but of course the precision of this estimate depends on the sample size); an alternative way to study the internal consistency of the questionnaire would be to compute Cronbach's alpha, although with n=45 the associated confidence interval (use bootstrap for that) will be relatively large.
- Compute point-biserial correlation between items and the summated scale score; it will give you an idea of the discriminative power of each item (like loadings in FA), where values above 0.3 are indicative of a satisfactory relationship between each item and their corresponding scale.
- Use a PCA to summarize the correlation matrix (it yields an equivalent interpretation to what would be obtained from a multiple correspondence analysis in case of dichotomously scored items). If your instrument behaves as a unidimensional scale for your sample, you should observe a dominant axis of variation (as reflected by the first eigenvalue).
Should you want to use R, you will find useful functions in the [ltm](http://cran.r-project.org/web/packages/ltm/index.html) and [psych](http://cran.r-project.org/web/packages/psych/index.html) packages; browse the CRAN [Psychometrics](http://cran.r-project.org/web/views/Psychometrics.html) Task View for more packages. In case you get 100 subjects, you can try some CFA or SEM analysis with bootstrap confidence intervals. (Bear in mind that loadings should be very large to conclude there is a significant correlation between any item and its factor, since they should be at least two times the standard error of a reliable correlation coefficient, $2(1-r^2)/\sqrt{n}$.)
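As an illustration of the internal-consistency idea in the first bullet, here is a small stdlib-only Python sketch of Cronbach's alpha on made-up item responses (in R, `psych::alpha` does this for you; the toy data below are invented for the example):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns (one list per item)."""
    k = len(items)
    total = [sum(scores) for scores in zip(*items)]    # summated scale score
    item_var = sum(pvariance(col) for col in items)    # sum of item variances
    return k / (k - 1) * (1 - item_var / pvariance(total))

# toy data: 4 items, 6 respondents, fairly consistent responses
items = [[3, 4, 4, 2, 5, 3],
         [3, 5, 4, 2, 4, 3],
         [2, 4, 5, 3, 5, 2],
         [3, 4, 4, 2, 4, 3]]
alpha = cronbach_alpha(items)   # close to 0.92 for these toy numbers
```

With n=45 the point estimate alone is not enough; bootstrap the respondents to get a confidence interval, as suggested above.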
| null | CC BY-SA 4.0 | null | 2011-06-06T20:03:18.063 | 2021-03-24T12:47:04.527 | 2021-03-24T12:47:04.527 | 53690 | 930 | null |
11643 | 1 | 11644 | null | 4 | 1137 | Using Zelig in R I fitted a negative binomial model to my data. The APA standard in psychology demands reporting the overall $R^2$, F-value, and p-value for the whole model.
I looked at the formula for $R^2$ and it uses sums of squares. Since the negative binomial model is not fitted by least squares under a linear model, does it make sense to report any $R^2$ here?
If not, could I argue in my paper that reporting $R^2$ is nonsense or not possible?
If yes, how can I calculate an equivalent $R^2$, and how can I perform the F-test to report the demanded numbers?
Thanks!
| Zelig reports $R^2$ of a negative binomial regression - nonsense? | CC BY-SA 3.0 | null | 2011-06-06T22:04:52.133 | 2011-09-05T22:20:47.027 | 2011-06-06T22:19:43.483 | null | 4679 | [
"r",
"negative-binomial-distribution",
"r-squared"
] |
11644 | 2 | null | 11643 | 5 | null | Incidentally, F values also assume normal errors. I don't think these requirements were made with count data in mind. I'm not sure what to tell you. If APA requirements weren't an issue, I'd report something like the proportion of explained deviance instead of R-squared, along with my regression coefficients and overdispersion parameter, and the improvement (in deviance units or AIC) attributable to including different model terms.
Good luck!
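For instance, the proportion of explained deviance can be computed directly from the null and residual deviances that any GLM fit reports; a minimal sketch (the deviance values below are made up):

```python
def explained_deviance(null_deviance, residual_deviance):
    """Deviance-based pseudo-R^2 for a GLM: the share of the null
    deviance that the fitted model accounts for."""
    return 1.0 - residual_deviance / null_deviance

# hypothetical values copied from a model summary
r2_dev = explained_deviance(null_deviance=320.5, residual_deviance=198.2)
```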
| null | CC BY-SA 3.0 | null | 2011-06-06T22:32:26.490 | 2011-06-06T22:32:26.490 | null | null | 4862 | null |
11645 | 1 | 12102 | null | 9 | 3301 | So our data is structured as follows:
We have $M$ participants, each participant can be categorized into 3 groups ($G \in \{A,B,C\}$), and for each participant we have $N$ samples of a continuous variable.
And we are trying to predict values that are either 0 or 1.
How would we use MATLAB to test for an interaction between the continuous variable and the categorical variable in predicting these values?
| Coding an interaction between a nominal and a continuous predictor for logistic regression in MATLAB | CC BY-SA 3.0 | null | 2011-06-06T23:05:34.870 | 2011-06-22T21:04:41.400 | 2011-06-22T21:04:41.400 | 930 | 2800 | [
"logistic",
"matlab",
"interaction"
] |
11646 | 1 | null | null | 18 | 6777 | Choosing to parameterize the gamma distribution $\Gamma(b,c)$ by the pdf
$g(x;b,c) = \frac{1}{\Gamma(c)}\frac{x^{c-1}}{b^c}e^{-x/b}$
The Kullback-Leibler divergence between $\Gamma(b_q,c_q)$ and $\Gamma(b_p,c_p)$ is given by [1] as
\begin{align}
KL_{Ga}(b_q,c_q;b_p,c_p) &= (c_q-1)\Psi(c_q) - \log b_q - c_q - \log\Gamma(c_q) + \log\Gamma(c_p)\\
&\qquad+ c_p\log b_p - (c_p-1)(\Psi(c_q) + \log b_q) + \frac{b_qc_q}{b_p}
\end{align}
I'm guessing that $\Psi(x):= \Gamma'(x)/\Gamma(x)$ is the digamma function.
This is given with no derivation. I cannot find any reference that does derive this. Any help? A good reference would be sufficient. The difficult part is integrating $\log x$ against a gamma pdf.
[1] W.D. Penny, KL-Divergences of Normal, Gamma, Dirichlet, and Wishart densities, Available at: www.fil.ion.ucl.ac.uk/~wpenny/publications/densities.ps
| Kullback–Leibler divergence between two gamma distributions | CC BY-SA 3.0 | null | 2011-06-06T23:39:37.377 | 2021-10-11T15:36:07.050 | 2012-02-29T20:30:00.747 | 858 | 2952 | [
"kullback-leibler",
"gamma-distribution",
"exponential-family"
] |
11647 | 2 | null | 4603 | 4 | null | [lasso4j](http://code.google.com/p/lasso4j/) is an open source Java implementation of Lasso for linear regression.
| null | CC BY-SA 3.0 | null | 2011-06-07T00:09:52.463 | 2011-06-07T00:09:52.463 | null | null | 4912 | null |
11648 | 2 | null | 4980 | 2 | null | You can also take a look at [lasso4j](http://code.google.com/p/lasso4j/) which is an open source Java implementation of Lasso for linear regression. It is a port of the glmnet package to pure Java.
| null | CC BY-SA 3.0 | null | 2011-06-07T00:12:46.673 | 2011-06-07T00:12:46.673 | null | null | 4912 | null |
11649 | 2 | null | 11636 | 31 | null | MSE is scale-dependent, MAPE is not. So if you are comparing accuracy across time series with different scales, you can't use MSE.
For business use, MAPE is often preferred because apparently managers understand percentages better than squared errors.
MAPE can't be used when percentages make no sense. For example, the Fahrenheit and Celsius temperature scales have relatively arbitrary zero points, and it makes no sense to talk about percentages. MAPE also cannot be used when the time series can take zero values.
MASE is intended to be both independent of scale and usable on all scales.
As @Dmitrij said, the `accuracy()` function in the `forecast` package for R is an easy way to compute these and other accuracy measures.
There is a lot more about forecast accuracy measures in my [2006 IJF paper with Anne Koehler](http://robjhyndman.com/papers/mase.pdf).
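To make the scale-dependence point concrete, here is a small stdlib-only Python sketch of the three measures on toy numbers (in practice, use `accuracy()` in the forecast package; the series values are invented):

```python
def mse(actual, forecast):
    """Mean squared error: scale-dependent."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean absolute percentage error: undefined when any actual value is zero."""
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def mase(actual, forecast, train):
    """Mean absolute scaled error: errors scaled by the in-sample MAE
    of the naive (random-walk) forecast on the training series."""
    scale = sum(abs(train[t] - train[t - 1]) for t in range(1, len(train))) / (len(train) - 1)
    mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    return mae / scale

train = [112, 118, 132, 129, 121, 135]   # in-sample history
actual = [148, 136]                       # out-of-sample observations
forecast = [140, 142]
```

Multiply every series by 1000 and MSE explodes while MAPE and MASE are unchanged, which is exactly the point made above.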
| null | CC BY-SA 3.0 | null | 2011-06-07T01:30:10.137 | 2011-06-07T01:30:10.137 | null | null | 159 | null |
11650 | 1 | 11651 | null | 11 | 3874 | I'm using latent semantic indexing to find similarities between documents ([thanks, JMS!](https://stats.stackexchange.com/q/11102/1977))
After dimension reduction, I've tried k-means clustering to group the documents into clusters, which works very well. But I'd like to go a bit further, and visualize the documents as a set of nodes, where the distance between any two nodes is inversely proportional to their similarity (nodes that are highly similar are close together).
It strikes me that I can't accurately reduce a similarity matrix to a 2-dimensional graph since my data is > 2 dimensions. So my first question: is there a standard way to do this?
Could I just reduce my data to two dimensions and then plot them on the X and Y axes, and would that suffice for a group of ~100-200 documents? If this is the solution, is it better to reduce my data to 2 dimensions from the start, or is there any way to pick the two "best" dimensions from my multi-dimensional data?
I am using Python and the gensim library if that makes a difference.
| Visualizing multi-dimensional data (LSI) in 2D | CC BY-SA 4.0 | null | 2011-06-07T03:17:45.697 | 2018-05-24T07:02:27.237 | 2018-05-24T07:02:27.237 | 128677 | 1977 | [
"data-visualization",
"clustering",
"python",
"multidimensional-scaling"
] |
11651 | 2 | null | 11650 | 7 | null | This is what MDS (multidimensional scaling) is designed for. In short, if you're given a similarity matrix $M$, you want to find the closest rank-2 approximation $S = X X^\top$. Since $M$ is symmetric, this can be done by computing its eigendecomposition $M = V \Lambda V^\top$ and taking $X = V \Lambda^{1/2}$.
Now, assuming that $\Lambda$ is permuted so the eigenvalues are in decreasing order, the first two columns of $X$ are your desired embedding in the plane.
There's lots of code available for MDS (and I'd be surprised if scipy doesn't have some version of it). In any case as long as you have access to some SVD routine in python you're set.
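A self-contained sketch of that recipe in Python (power iteration stands in for a real SVD/eigensolver here, purely to keep the example dependency-free; with scipy you would just take the top two eigenpairs, and the similarity matrix below is made up):

```python
import math

def top_eigs(M, k=2, iters=500):
    """Leading k eigenpairs of a small symmetric matrix via power
    iteration with deflation -- enough for a classical-MDS sketch."""
    n = len(M)
    A = [row[:] for row in M]
    pairs = []
    for _ in range(k):
        v = [1.0 / math.sqrt(n)] * n
        lam = 0.0
        for _ in range(iters):
            w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
            norm = math.sqrt(sum(x * x for x in w))
            v = [x / norm for x in w]
            lam = sum(v[i] * sum(A[i][j] * v[j] for j in range(n)) for i in range(n))
        pairs.append((lam, v))
        for i in range(n):            # deflate: A <- A - lam * v v^T
            for j in range(n):
                A[i][j] -= lam * v[i] * v[j]
    return pairs

def mds_2d(S):
    """2-D coordinates X with X X^T approximating S (PSD similarity assumed)."""
    pairs = top_eigs(S, 2)
    return [[math.sqrt(max(lam, 0.0)) * v[i] for lam, v in pairs]
            for i in range(len(S))]

# toy document-similarity matrix: docs 1 and 2 similar, doc 3 different
S = [[1.0, 0.9, 0.2],
     [0.9, 1.0, 0.3],
     [0.2, 0.3, 1.0]]
coords = mds_2d(S)
```

The resulting coordinates place documents 1 and 2 close together and document 3 far away, mirroring the similarity matrix.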
| null | CC BY-SA 3.0 | null | 2011-06-07T04:15:34.467 | 2011-06-07T04:15:34.467 | null | null | 139 | null |
11653 | 1 | null | null | 5 | 2917 |
### Context
This came up recently in a consulting context. A researcher was performing repeated measures t-tests based on experimental data.
Some of the analyses involved comparing one condition with another. Other analyses involved performing contrasts comparing one or more conditions with one or more other conditions.
### Question
- What effect size measure would you recommend using in relation to (a) repeated measures t-tests (b) repeated measures contrasts comparing one or more group means with one or more other group means?
- If you were reporting a d-based effect size measure, which measure of the within group standard deviation would you use?
- Are there any references that you would recommend?
I have a few thoughts, but I'm keen to get your suggestions.
| Recommended effect size measure for repeated measures t-tests and repeated measures contrasts on experimental data | CC BY-SA 3.0 | null | 2011-06-07T04:18:23.313 | 2015-08-19T20:39:49.890 | 2011-06-07T15:46:48.813 | 183 | 183 | [
"repeated-measures",
"effect-size",
"contrasts"
] |
11654 | 2 | null | 11594 | 3 | null | Using notions like entropy (as in Ashok's answer) only works if you believe the message is coming from a specific distribution. If all you have is a single message, then the only meaningful measure of complexity is the [Kolmogorov complexity](http://en.wikipedia.org/wiki/Kolmogorov_complexity) of the message, which is sadly uncomputable.
| null | CC BY-SA 3.0 | null | 2011-06-07T04:18:42.860 | 2011-06-07T04:18:42.860 | null | null | 139 | null |
11655 | 2 | null | 9425 | 11 | null | In addition to the useful [link](http://itl.nist.gov/div898/handbook/prc/section4/prc473.htm) mentioned in the comments by @schenectady.
I would also add the point that Bonferroni correction applies to a broader class of problems. As far as I'm aware Tukey's HSD is only applied to situations where you want to examine all possible pairwise comparisons, whereas Bonferroni correction can be applied to any set of hypothesis tests.
In particular, Bonferroni correction is useful when you have a small set of planned comparisons, and you want to control the family-wise Type I error rate.
This also permits compound comparisons.
For example, in a one-way ANOVA with six groups you may want to compare the average of groups 1, 2, and 3 with group 4, and compare group 5 with group 6.
To further illustrate, you could apply Bonferroni correction to assessing significance of correlations in a correlation matrix, or the set of main and interaction effects in an ANOVA. However, such a correction is typically not applied, presumably for the reason that the reduction in Type I error rate results in an unacceptable reduction in power.
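Operationally, the correction is a one-liner; here is a minimal Python sketch for a small planned family of tests (the p-values are made up):

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni-adjusted p-values and reject decisions at
    family-wise Type I error rate alpha."""
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]
    reject = [p_adj <= alpha for p_adj in adjusted]
    return adjusted, reject

# a small planned family of four comparisons
adj, rej = bonferroni([0.004, 0.030, 0.020, 0.500])
```

With four tests, only a raw p-value below 0.0125 survives the correction, which illustrates the power cost mentioned above.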
| null | CC BY-SA 3.0 | null | 2011-06-07T04:36:19.457 | 2011-06-07T04:36:19.457 | null | null | 183 | null |
11656 | 2 | null | 11653 | 4 | null | The answer here depends on your situation. Dunlap, Cortina, Vaslow, and Burke (1996) argued that the effect size should be calculated using a SD based on pooled variance from separate conditions, as is typical in independent groups studies, even with repeated measurements. Their argument was that the study may be replicated with a between design and the effect sizes will be more comparable across studies in meta-analysis with that effect size measure. They asserted that the effect size is the effect size and shouldn't be influenced by the correlation in the measurement in a repeated measures design.
Unfortunately, this suggestion has been overgeneralized in some literatures (and in Cortina's book, I believe). When it's not possible to design an experiment any other way than repeated measures, using the between-subjects effect size is a mistake. It will underestimate the size of the effect and be useless in power calculations.
Imagine an attentional cueing study where you need to study a single mental state (e.g., attention oriented in a direction indicated by an arrow) and have to measure the effect by comparing performance at the indicated location with performance at a location that was not indicated. That study has to be done within subjects; there is no other way to do it. In that case, the need for an effect size comparable to independent-groups studies vanishes, because the independent-groups study couldn't occur. The between-subjects effect size would be useless for estimating the number of subjects needed to replicate the study, while the within-subjects effect size would work; the former would tend to vastly underestimate what you actually need to measure, which is the effect within subjects.
| null | CC BY-SA 3.0 | null | 2011-06-07T05:33:05.100 | 2015-08-19T20:39:49.890 | 2015-08-19T20:39:49.890 | 601 | 601 | null |
11657 | 1 | null | null | 3 | 1348 | I am working on understanding various document-ranking algorithms (TF-IDF, LSI, language models, etc.) by actually implementing them. I want to understand LDA and am using various resources to learn the algorithm. What I don't understand is how we come up with the latent (hidden) variables/topics. Can someone please explain it to me using examples like:
```
Doc1: "shipment of gold damaged in a fire",
Doc2: "delivery of silver arrived in a silver truck",
Doc3: "shipment of gold arrived in a truck"};
Query: "gold silver truck"
```
I will really appreciate any help in this regard. Thanks!
| Using latent Dirichlet allocation for information retrieval | CC BY-SA 3.0 | null | 2011-06-07T07:20:47.107 | 2011-09-08T01:35:56.557 | 2011-06-07T07:27:05.123 | null | 4915 | [
"information-retrieval"
] |
11658 | 2 | null | 9306 | 3 | null | I think the use of the spectrogram is visually interesting but not that obvious to exploit because of information redundency along frequencies. What we can see is that the changes between period are obvious. Also I would go back to the initial problem where you have for 3 different time periods indexed by $k=1,2,3$ a set of $n$ ($n=50$) signals of length $T>0$: $ i=1,\dots,n\; \; X^{k}_i\in \mathbb{R}^T$.
From this I would simply do a some sort of "Functional ANOVA" (or "multivariate ANOVA") :
$$X^{k}_i(t)=\mu_k+\beta_k(t)+\epsilon_{k,i}(t)$$
and test for difference in the mean i.e. test $\beta_1-\beta_2=0$ versus $\|\beta_1-\beta_2\|>\rho$.
You might be interested in [this paper](http://www.ima.umn.edu/imaging/W12.5-9.05/activities/Vidakovic-Brani/Abr_Ant_Sap_Vid.pdf); [this paper](http://www3.stat.sinica.edu.tw/statistica/oldpdf/A14n415.pdf) involves a different FANOVA modelling. The difficult point in your real case might be that all the assumptions made in these papers (homoscedasticity, stationarity, ...) may fail, and you might need to build a different "functional" test adapted to your problem.
Note that your idea of using multiscale analysis is not lost here because you can integrate it in the test (if I remember it is what is done in the first paper I mention).
| null | CC BY-SA 3.0 | null | 2011-06-07T07:38:54.357 | 2011-06-07T07:38:54.357 | null | null | 223 | null |
11659 | 1 | 11669 | null | 95 | 7919 | In my job role I often work with other people's datasets, non-experts bring me clinical data and I help them to summarise it and perform statistical tests.
The problem I am having is that the datasets I am brought are almost always riddled with typos, inconsistencies, and all sorts of other problems. I am interested to know if other people have standard tests which they do to try to check any datasets that come in.
I used to draw histograms of each variable just to have a look but I now realise there are lots of horrible errors that can survive this test. For example, I had a repeated measures dataset the other day where, for some individuals, the repeated measure was identical at Time 2 as it was at Time 1. This was subsequently proved to be incorrect, as you would expect. Another dataset had an individual who went from being very severely disordered (represented by a high score) to being problem-free, represented by 0's across the board. This is just impossible, although I couldn't prove it definitively.
So what basic tests can I run on each dataset to make sure that they don't have typos and they don't contain impossible values?
Thanks in advance!
| Essential data checking tests | CC BY-SA 3.0 | null | 2011-06-07T08:19:22.500 | 2016-03-31T15:43:46.780 | 2011-06-07T08:43:56.070 | 223 | 199 | [
"dataset",
"outliers",
"checking"
] |
11660 | 2 | null | 11659 | 10 | null | When you have measures along time ("longitudinal data") it is often useful to check the gradients as well as the marginal distributions. This gradient can be calculated at different scales. More generally you can do meaningful transformations on your data (fourier, wavelet) and check the distributions of the marginals of the transformed data.
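A minimal Python sketch of such a gradient check on one longitudinal record (the series and the jump threshold are made up; in practice the threshold comes from domain knowledge):

```python
def first_differences(series):
    """Per-interval change (discrete gradient) of a longitudinal measurement."""
    return [b - a for a, b in zip(series, series[1:])]

def flag_jumps(series, threshold):
    """Indices where the series changes by more than `threshold` in one step."""
    return [i + 1 for i, d in enumerate(first_differences(series))
            if abs(d) > threshold]

# toy repeated-measures record: the implausible value at index 4
# produces two large jumps, so indices 4 and 5 get flagged
scores = [42, 40, 41, 39, 3, 38]
suspect = flag_jumps(scores, threshold=20)
```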
| null | CC BY-SA 3.0 | null | 2011-06-07T08:42:59.473 | 2011-06-07T14:33:13.617 | 2011-06-07T14:33:13.617 | 919 | 223 | null |
11661 | 1 | 11663 | null | 5 | 6442 | I have two groups (experimental, N=6, and control group, N=20). For each participant I measured a score (let say mean reaction time) 4 times. I would like to check:
- whether these groups differed in the beginning (Time 1)
- whether the score changes in time (for control group)
- compare the change in time for both groups
I use R to analyze the data. What statistical tests can I use, given the small group sizes? I would be grateful for any advice or link. Thank you in advance.
| Comparing means across two groups and over four time points when group sample sizes are very small | CC BY-SA 3.0 | null | 2011-06-07T09:21:37.127 | 2011-06-08T03:33:07.030 | 2011-06-08T03:33:07.030 | 183 | 427 | [
"r",
"hypothesis-testing",
"small-sample"
] |
11662 | 1 | 11665 | null | 3 | 2633 | How can I extract the residuals from the function `cv.lm`?
`cv.lm$ss` gives me the cross-validation sum of squares, but I need the individual residuals from each fold.
Is it possible to call out?
| How to extract residuals from function cv.lm in R? | CC BY-SA 3.0 | null | 2011-06-07T09:25:49.823 | 2013-08-05T14:09:10.330 | 2011-06-07T09:56:43.830 | 183 | 4917 | [
"r",
"cross-validation"
] |
11663 | 2 | null | 11661 | 5 | null |
### Your analyses
One strategy is to use the same techniques as you would with larger sample sizes.
- You could do a 2 by 4 mixed ANOVA with appropriate contrasts to test your effects of interest.
Or you could split your analyses up into discrete tests
- T-test for group differences at time 1
- Repeated measures ANOVA, possibly with linear and perhaps also quadratic contrasts for the effect of change in control group
- Interaction effect of the 2 by 4 mixed ANOVA (or perhaps a linear by group interaction) for whether the change in time differed between the two groups
### General considerations regarding small sample sizes
- You need to be particularly careful with outliers; and for this reason I have heard some researchers recommend using non-parametric tests with small sample sizes. I'm not completely convinced that this is necessary; I think if you are careful about looking for outliers, then standard tests may be okay.
You also need to rely more on prior knowledge when doing assumption tests, because the data itself may be insufficient, for example, to assess whether residuals are normally distributed in the experimental group.
- Do an actual power analysis based on your expected effect sizes. In your case, the comparison between groups at time 1 is likely to have very little statistical power unless the effect size is huge. So you should just acknowledge that you may not be able to answer your research question with your data.
In contrast, changes over time in reaction time with 20 people might still be quite powerful because of the increased power of repeated measures effects and the fact that you are ignoring the tiny n=6 group.
- Do things that focus your question. For example, a test of the linear effect of the four time points may be more powerful than an ANOVA with time as a factor. This is because the linear effect uses only 1 degree of freedom, whereas treating time as a factor would use k-1, i.e., 3 degrees of freedom. Of course, this is a trade-off, because you are assuming that the effect of time is monotonic, if not exactly linear.
- Keep the number of statistical tests to a minimum, which it sounds like you are. Tiny sample sizes combined with testing heaps of hypotheses can lead to some awful data dredging. And if you are being conservative and apply Bonferroni corrections or other adjustments, the minimal power that was present gets even lower.
- Try to get more participants.
- If you know that you can't get more participants, consider at the time of study design whether the research is worth doing if it will be insufficiently powered to answer your research question.
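To back up the power-analysis point numerically, here is a stdlib-only Python sketch using the usual normal approximation to two-sample power (group sizes 6 and 20, as in the question):

```python
import math
from statistics import NormalDist

def two_sample_power(d, n1, n2, alpha=0.05):
    """Approximate power of a two-sided two-sample comparison for
    standardized effect size d, via the normal approximation."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    ncp = abs(d) * math.sqrt(n1 * n2 / (n1 + n2))   # noncentrality
    return (1 - z.cdf(z_crit - ncp)) + z.cdf(-z_crit - ncp)

power = two_sample_power(1.0, 6, 20)
```

Even with a huge standardized effect of d = 1, power for the time-1 comparison comes out at only about 0.57, which is exactly the "acknowledge you may not be able to answer the question" situation described above.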
| null | CC BY-SA 3.0 | null | 2011-06-07T09:39:56.610 | 2011-06-08T03:29:39.307 | 2011-06-08T03:29:39.307 | 183 | 183 | null |
11664 | 1 | null | null | 3 | 1059 | I am looking to compare individual Likert-scale items.
I measured respondents' level of agreement (on a 5-point scale) with several items (e.g., from 1 to 5, how competent is A to treat your condition? How competent is B? C? and so on; 11 items in total). I want to analyse whether the scores for each statement are significantly different.
I am treating the data as ordinal and as such using non-parametric analysis. So far I ran a Friedman test (which should indicate whether there is agreement between k sets of rankings) (p<0.001). So indeed the distributions of the scores of the several items differ. But I also wanted to know which items are statistically significantly different from each other, so I computed Kendall correlations for all items.
- Is this the right approach?
- Can anyone let me know the best way to compare Likert-scale items?
| How do I compare individual likert scale items | CC BY-SA 3.0 | null | 2011-06-07T10:39:11.050 | 2019-05-01T14:02:00.333 | 2018-08-15T07:55:53.553 | 11887 | 4919 | [
"likert",
"ranking",
"kendall-tau",
"agreement-statistics"
] |
11665 | 2 | null | 11662 | 3 | null | Looking at the R code, computation for individual fold are done in the inner loop, starting with
```
for (i in sort(unique(rand))) { # line 37
```
but results are just returned with a `print` statement (line 67-68), if `printit=TRUE` (which is the default). So, you can use what I suggested for a [related question](https://stats.stackexchange.com/questions/10347/making-a-heatmap-with-a-precomputed-distance-matrix-and-data-matrix-in-r/10349#10349) and edit the function in place so that it returns the SS for each fold in a list. That is, use
```
fix(cv.lm)
```
at the R prompt, then add the following three lines in the code
```
...
sumss <- 0
sumdf <- 0
ssl <- list() # (*)
...
ms <- ss/num
ssl[[i]] <- ss # (*)
if (printit)
cat("\nSum of squares =", round(ss, 2), " Mean square =",
...
invisible(c(ss = sumss, df = sumdf,
ss.fold=ssl)) # (*)
}
```
To check that it worked, try
```
> res <- cv.lm(printit=FALSE, plotit=FALSE)
> str(res)
List of 5
 $ ss      : num 59008
 $ df      : num 15
 $ ss.fold1: num 24351
 $ ss.fold2: num 20416
 $ ss.fold3: num 14241
```
You can also return a list of the fold SS by replacing `ss.fold=ssl` with `ss.fold=list(ssl)`, so that the output would look like
```
List of 3
 $ ss     : num 59008
 $ df     : num 15
 $ ss.fold:List of 3
..$ : num 24351
..$ : num 20416
..$ : num 14241
```
| null | CC BY-SA 3.0 | null | 2011-06-07T12:11:32.447 | 2011-06-07T12:25:43.350 | 2017-04-13T12:44:20.840 | -1 | 930 | null |
11666 | 2 | null | 3911 | 5 | null | This is a great discussion. I feel that Bayesian credible intervals and likelihood support intervals are the way to go, as well as Bayesian posterior probabilities of events of interest (e.g., a drug is efficacious). But supplanting P-values with confidence intervals is a major gain. Virtually every issue of the finest medical journals such as NEJM and JAMA has a paper with the "absence of evidence is not evidence of absence" problem in their abstracts. The use of confidence intervals will largely prevent such blunders. A great little text is [http://www.amazon.com/Statistics-Confidence-Intervals-Statistical-Guidelines/dp/0727913751](http://rads.stackoverflow.com/amzn/click/0727913751)
| null | CC BY-SA 3.0 | null | 2011-06-07T12:24:47.820 | 2011-06-07T12:24:47.820 | null | null | 4253 | null |
11667 | 2 | null | 100 | 2 | null | The main aim of windowing in spectral analysis is the ability to zoom into the finer details of the signal rather than looking at the whole signal at once. Short-time Fourier transforms (STFT) are of prime importance in speech signal processing, where information like the pitch or the formant frequencies is extracted by analyzing the signal through a window of specific duration. The width of the windowing function determines how the signal is represented: whether there is good frequency resolution (frequency components close together can be separated) or good time resolution (the time at which frequencies change). A wide window gives better frequency resolution but poor time resolution; a narrower window gives good time resolution but poor frequency resolution. These are called narrowband and wideband transforms, respectively. This is exactly why the wavelet transform was developed: a wavelet transform is capable of giving good time resolution for high-frequency events and good frequency resolution for low-frequency events. This type of analysis is well suited to real signals.
| null | CC BY-SA 3.0 | null | 2011-06-07T12:25:23.473 | 2011-06-07T12:25:23.473 | null | null | 4900 | null |
11668 | 2 | null | 11646 | 16 | null | The KL divergence is a difference of integrals of the form
$$\begin{aligned}
I(a,b,c,d)&=\int_0^{\infty} \log\left(\frac{e^{-x/a}x^{b-1}}{a^b\Gamma(b)}\right) \frac{e^{-x/c}x^{d-1}}{c^d \Gamma(d)}\, \mathrm dx \\
&=-\frac{1}{a}\int_0^\infty \frac{x^d e^{-x/c}}{c^d\Gamma(d)}\, \mathrm dx
- \log(a^b\Gamma(b))\int_0^\infty \frac{e^{-x/c}x^{d-1}}{c^d\Gamma(d)}\, \mathrm dx\\
&\quad+ (b-1)\int_0^\infty \log(x) \frac{e^{-x/c}x^{d-1}}{c^d\Gamma(d)}\, \mathrm dx\\
&=-\frac{cd}{a}
- \log(a^b\Gamma(b))
+ (b-1)\int_0^\infty \log(x) \frac{e^{-x/c}x^{d-1}}{c^d\Gamma(d)}\,\mathrm dx
\end{aligned}$$
We just have to deal with the right hand integral, which is obtained by observing
$$\eqalign{
\frac{\partial}{\partial d}\Gamma(d) =& \frac{\partial}{\partial d}\int_0^{\infty}e^{-x/c}\frac{x^{d-1}}{c^d}\, \mathrm dx\\
=& \frac{\partial}{\partial d} \int_0^\infty e^{-x/c} \frac{(x/c)^{d-1}}{c}\, \mathrm dx\\
=&\int_0^\infty e^{-x/c}\frac{x^{d-1}}{c^d} \log\frac{x}{c} \, \mathrm dx\\
=&\int_0^{\infty}\log(x)e^{-x/c}\frac{x^{d-1}}{c^d}\, \mathrm dx - \log(c)\Gamma(d).
}$$
Whence
$$\frac{b-1}{\Gamma(d)}\int_0^{\infty} \log(x)e^{-x/c}\frac{x^{d-1}}{c^d}\, \mathrm dx = (b-1)\frac{\Gamma'(d)}{\Gamma(d)} + (b-1)\log(c).$$
Plugging into the preceding yields
$$I(a,b,c,d)=\frac{-cd}{a} -\log(a^b\Gamma(b))+(b-1)\frac{\Gamma'(d)}{\Gamma(d)} + (b-1)\log(c).$$
The KL divergence between $\Gamma(c,d)$ and $\Gamma(a,b)$ equals $I(c,d,c,d) - I(a,b,c,d)$, which is straightforward to assemble.
---
### Implementation Details
Gamma functions grow rapidly, so to avoid overflow don't compute Gamma and take its logarithm: instead use the log-Gamma function that will be found in any statistical computing platform (including Excel, for that matter).
The ratio $\Gamma^\prime(d)/\Gamma(d)$ is the logarithmic derivative of $\Gamma,$ generally called $\psi,$ the digamma function. If it's not available to you, there are relatively simple ways to approximate it, as described in [the Wikipedia article](https://en.wikipedia.org/wiki/Digamma_function#Computation_and_approximation).
Here, to illustrate, is a direct `R` implementation of the formula in terms of $I$. This does not exploit an opportunity to simplify the result algebraically, which would make it a little more efficient (by eliminating a redundant calculation of $\psi$).
```
#
# `b` and `d` are Gamma shape parameters and
# `a` and `c` are scale parameters.
# (All, therefore, must be positive.)
#
KL.gamma <- function(a,b,c,d) {
i <- function(a,b,c,d)
- c * d / a - b * log(a) - lgamma(b) + (b-1)*(psigamma(d) + log(c))
i(c,d,c,d) - i(a,b,c,d)
}
print(KL.gamma(1/114186.3, 202, 1/119237.3, 195), digits=12)
```
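For readers without R, the same formula can be checked in pure Python: `math.lgamma` supplies the log-Gamma, the digamma function is approximated by a central difference (accurate enough here), and a Monte Carlo estimate of $E_p[\log p(X) - \log q(X)]$ serves as a sanity check (the parameter values are arbitrary):

```python
import math
import random

def digamma(x, h=1e-5):
    # central-difference approximation to psi(x) = d/dx log Gamma(x)
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def kl_gamma(a, b, c, d):
    """KL(Gamma(scale=c, shape=d) || Gamma(scale=a, shape=b)),
    i.e. I(c,d,c,d) - I(a,b,c,d) from the derivation above."""
    def i(a, b, c, d):
        return (-c * d / a - b * math.log(a) - math.lgamma(b)
                + (b - 1) * (digamma(d) + math.log(c)))
    return i(c, d, c, d) - i(a, b, c, d)

def log_gamma_pdf(x, scale, shape):
    return ((shape - 1) * math.log(x) - x / scale
            - shape * math.log(scale) - math.lgamma(shape))

# Monte Carlo sanity check with X ~ p = Gamma(shape=2, scale=1)
random.seed(1)
a, b, c, d = 2.0, 3.0, 1.0, 2.0
n = 200_000
mc = sum(log_gamma_pdf(x, c, d) - log_gamma_pdf(x, a, b)
         for x in (random.gammavariate(d, c) for _ in range(n))) / n
```

The closed-form value and the Monte Carlo estimate agree to a couple of decimal places, and the divergence of a distribution from itself comes out as zero, as it must.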
| null | CC BY-SA 4.0 | null | 2011-06-07T13:48:30.880 | 2021-10-11T15:36:07.050 | 2021-10-11T15:36:07.050 | 919 | 919 | null |
11669 | 2 | null | 11659 | 79 | null | It helps to understand how the data were recorded.
Let me share a story. Once, long ago, many datasets were stored only in fading hardcopy. In those dark days I contracted with an organization (of great pedigree and size; many of you probably own its stock) to computerize about 10^5 records of environmental monitoring data at one of its manufacturing plants. To do this, I personally marked up a shelf of laboratory reports (to show where the data were), created data entry forms, and contracted with a temp agency for literate workers to type the data into the forms. (Yes, you had to pay extra for people who could read.) Due to the value and sensitivity of the data, I conducted this process in parallel with two workers at a time (who usually changed from day to day). It took a couple of weeks. I wrote software to compare the two sets of entries, systematically identifying and correcting all the errors that showed up.
Boy were there errors! What can go wrong? A good way to describe and measure errors is at the level of the basic record, which in this situation was a description of a single analytical result (the concentration of some chemical, often) for a particular sample obtained at a given monitoring point on a given date. In comparing the two datasets, I found:
- Errors of omission: one dataset would include a record, another would not. This usually happened because either (a) a line or two would be overlooked at the bottom of a page or (b) an entire page would be skipped.
- Apparent errors of omission that were really data-entry mistakes. A record is identified by a monitoring point name, a date, and the "analyte" (usually a chemical name). If any of these has a typographical error, it will not be matched to the other records with which it is related. In effect, the correct record disappears and an incorrect record appears.
- Fake duplication. The same results can appear in multiple sources, be transcribed multiple times, and seem to be true repeated measures when they are not. Duplicates are straightforward to detect, but deciding whether they are erroneous depends on knowing whether duplicates should even appear in the dataset. Sometimes you just can't know.
- Frank data-entry errors. The "good" ones are easy to catch because they change the type of the datum: using the letter "O" for the digit "0", for instance, turns a number into a non-number. Other good errors change the value so much it can readily be detected with statistical tests. (In one case, the leading digit in "1,000,010 mg/Kg" was cut off, leaving a value of 10. That's an enormous change when you're talking about a pesticide concentration!) The bad errors are hard to catch because they change a value into one that fits (sort of) with the rest of the data, such as typing "80" for "50". (This kind of mistake happens with OCR software all the time.)
- Transpositions. The right values can be entered but associated with the wrong record keys. This is insidious, because the global statistical characteristics of the dataset might remain unaltered, but spurious differences can be created between groups. Probably only a mechanism like double-entry is even capable of detecting these errors.
Once you are aware of these errors and know, or have a theory, of how they occur, you can write scripts to troll your datasets for the possible presence of such errors and flag them for further attention. You cannot always resolve them, but at least you can include a "comment" or "quality flag" field to accompany the data throughout their later analysis.
Since that time I have paid attention to data quality issues and have had many more opportunities to make comprehensive checks of large statistical datasets. None is perfect; they all benefit from quality checks. Some of the principles I have developed over the years for doing this include
- Whenever possible, create redundancy in data entry and data transcription procedures: checksums, totals, repeated entries: anything to support automatic internal checks of consistency.
- If possible, create and exploit another database which describes what the data should look like: that is, computer-readable metadata. For instance, in a drug experiment you might know in advance that every patient will be seen three times. This enables you to create a database with all the correct records and their identifiers with the values just waiting to be filled in. Fill them in with the data given you and then check for duplicates, omissions, and unexpected data.
- Always normalize your data (specifically, get them into at least fourth normal form), regardless of how you plan to format the dataset for analysis. This forces you to create tables of every conceptually distinct entity you are modeling. (In the environmental case, this would include tables of monitoring locations, samples, chemicals (properties, typical ranges, etc.), tests of those samples (a test usually covers a suite of chemicals), and the individual results of those tests.) In so doing you create many effective checks of data quality and consistency and identify many potentially missing or duplicate or inconsistent values.
This effort (which requires good data processing skills but is straightforward) is astonishingly effective. If you aspire to analyze large or complex datasets and do not have good working knowledge of relational databases and their theory, add that to your list of things to be learned as soon as possible. It will pay dividends throughout your career.
- Always perform as many "stupid" checks as you possibly can. These are automated verifications of obvious things: that dates fall into their expected periods, that the counts of patients (or chemicals or whatever) always add up correctly, that values are always reasonable (e.g., a pH must be between 0 and 14, and maybe in a much narrower range for, say, blood pH readings), etc. This is where domain expertise can be the most help: the statistician can fearlessly ask stupid questions of the experts and exploit the answers to check the data.
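A minimal Python sketch of two such checks (the keys and ranges are illustrative, not from any particular study): comparing the record keys that should exist, derived from metadata, against those observed, plus a "stupid" range check:

```python
def check_completeness(expected_keys, observed_keys):
    """Compare record keys that *should* exist (from metadata, e.g. three
    visits per patient) with those actually observed."""
    expected, observed = set(expected_keys), list(observed_keys)
    missing = expected - set(observed)
    unexpected = set(observed) - expected
    duplicates = {k for k in observed if observed.count(k) > 1}
    return missing, unexpected, duplicates

def check_range(value, lo, hi):
    """A 'stupid' but effective check: e.g. a pH must lie in [0, 14]."""
    return lo <= value <= hi
```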
Much more can be said of course--the subject is worth a book--but this should be enough to stimulate ideas.
| null | CC BY-SA 3.0 | null | 2011-06-07T14:30:49.173 | 2011-06-07T14:30:49.173 | 2020-06-11T14:32:37.003 | -1 | 919 | null |
11671 | 2 | null | 11616 | 2 | null | I'm not sure about the metric of the ICC itself (I have never seen anyone report this metric for inferential purposes, only for description), but I do not believe many modelling strategies will be greatly impacted by the presence of many small groups. This is because random effects modelling takes into account the sample size of the groups, by "shrinking" the estimated group variances by their sample sizes. As a note, when I refer to fixed or random effects, it is in line with the definitions laid out [here](http://www.stata.com/support/faqs/stat/xtreg.html).
One way to assess this is to examine the outcome of interest as deviations from the group means as opposed to the original metric. So if $y_{ij}$ is the variable $y$ for observation $i$ within group $j$, from this variable subtract the mean of group $j$, and graph a scatterplot of those deviations versus the group size. You would expect this scatterplot to show heteroscedasticity (as the mean of smaller group sizes should be less representative of the observations within the group), and have a wider variance for smaller groups. If the opposite occurs, this suggests that the smaller groups are more homogeneous, and might be evidence that the independence of observations is violated and is directly related to group size (e.g. those in smaller groups tend to be more similar to each other than those in larger groups).
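A Python sketch of the data preparation for that diagnostic (the plotting step itself is omitted, and the grouping structure is invented for illustration):

```python
from statistics import mean

def deviations_vs_group_size(groups):
    """groups: dict mapping group id -> list of y values.
    Returns (group size, |y_ij - group mean|) pairs, ready to scatterplot."""
    pairs = []
    for ys in groups.values():
        m = mean(ys)
        pairs.extend((len(ys), abs(y - m)) for y in ys)
    return pairs
```

Plotting the second coordinate against the first shows whether dispersion around the group mean shrinks or grows with group size.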
If anything, small groups should inflate the ICC. When there is only 1 observation, all of the variance for that observation is attributed to the group level mean, and if all groups only had 1 observation, the ICC would be 1 (i.e. there is no within group variation, only between groups).
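To illustrate that numerically, here is a naive plug-in variance decomposition in Python. It is deliberately simplistic (not the ANOVA/REML estimator a multilevel package such as lme4 would use), but it shows the limiting behavior described above:

```python
from statistics import mean, pvariance

def naive_icc(groups):
    """groups: list of lists of y values.
    ICC = between-group variance / (between + within)."""
    between = pvariance([mean(g) for g in groups])
    within = mean(pvariance(g) for g in groups)
    return between / (between + within)

# With one observation per group the within-group variance is zero,
# so all variance is attributed to the group level and the ICC is 1.
```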
Also as a note, frequently to estimate some relationship it is not that the observations need to be independent, it is simply that the model residuals need to be independent. Hence the whole reason to fit multi-level models.
Stephen Raudenbush has a book chapter, [Many Small Groups](http://psycnet.apa.org/psycinfo/2008-02337-004) that may be of interest (I see a PDF of the whole book can be found [here](http://gen.lib.rus.ec/get?md5=3fb54f61e667475a9cbad9e9c6c13ec6)). The chapter is mainly about how to estimate models with many small groups and potential problems that can arise. This is really only pertinent though if you want to estimate random effects models. If you are simply interested in fixed effects models it is largely unproblematic.
Also I have found the tutorials developed by the [Centre for Multilevel Modelling](http://www.bristol.ac.uk/cmm/learning/) to be very useful introductions to the subject material (very gentle, especially compared to the Raudenbush chapter I just cited!)
| null | CC BY-SA 3.0 | null | 2011-06-07T16:24:48.803 | 2011-06-07T16:24:48.803 | null | null | 1036 | null |
11672 | 1 | null | null | 6 | 506 | I am doing a project on sexual selection (male-male competition) in the turquoise killifish Nothobranchius furzeri.
There are two morphs of male in the population from which my fish are obtained - one has a red tail and the other has a yellow tail.
My null hypothesis is: Tail colour is not related to dominance/competitive ability.
I will be putting one yellow-tailed male and one red-tailed male in an arena and recording the number of aggressive interactions that take place within 5 minutes, and the winner of each. I have 8 red males and 8 yellow males, all of similar colour intensity. I originally thought I had about 30 of each and was going to rank them by size and pair them up (i.e. largest red with largest yellow, etc.), which would control for size (the larger fish are more dominant). However, with a sample size of only 8 this would not be sufficient to get significant results.
How could I redesign the experiment and which statistical tests could I use?
I don’t have access to any more fish.
| Which statistical test should I use for my experiment on aggressive interactions in killifish? | CC BY-SA 3.0 | null | 2011-06-07T16:52:36.367 | 2011-06-16T16:38:20.713 | 2011-06-16T15:57:02.420 | 82 | 4923 | [
"experiment-design",
"categorical-data",
"small-sample",
"biostatistics",
"multiple-comparisons"
] |
11673 | 2 | null | 11659 | 25 | null | @whuber makes great suggestions; I would only add this: Plots, plots, plots, plots. Scatterplots, histograms, boxplots, lineplots, heatmaps and anything else you can think of. Of course, as you've found there are errors that won't be apparent on any plots but they're a good place to start. Just make sure you're clear on how your software handles missing data, etc.
Depending on the context you can get creative. One thing I like to do with multivariate data is fit some kind of factor model/probabilistic PCA (something that will do multiple imputation for missing data) and look at scores for as many components as possible. Data points which score highly on the less important components/factors are often outliers you might not see otherwise.
| null | CC BY-SA 3.0 | null | 2011-06-07T17:04:52.570 | 2011-06-07T17:04:52.570 | null | null | 26 | null |
11674 | 1 | null | null | 8 | 3782 | How does a computer algorithm, set up to take as input an arbitrary bivariate probability density function, generate pairs of numbers from that distribution? I have found a routine called simcontour, part of the LearnBayes package in R, that performs this operation.
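For context, one common approach such a routine could use internally is rejection sampling: draw candidate points uniformly over a bounding box and keep each one with probability proportional to the density there. A minimal Python sketch (the density, box, and bound are purely illustrative, and `fmax` must genuinely bound the density on the box):

```python
import math
import random

def rejection_sample(pdf, xlim, ylim, fmax, n, seed=0):
    """Draw n pairs from an (unnormalized) bivariate density pdf,
    assumed to satisfy pdf(x, y) <= fmax on the box xlim x ylim."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.uniform(*xlim)
        y = rng.uniform(*ylim)
        # Accept the candidate with probability pdf(x, y) / fmax.
        if rng.uniform(0.0, fmax) < pdf(x, y):
            out.append((x, y))
    return out

# Example: a standard bivariate normal kernel, truncated to [-4, 4]^2.
pts = rejection_sample(lambda x, y: math.exp(-0.5 * (x * x + y * y)),
                       (-4, 4), (-4, 4), fmax=1.0, n=500)
```

Rejection sampling can be slow when the density is peaked relative to its box; smarter methods (e.g. MCMC or grid-based sampling, which is what contour-based routines typically do) trade generality for efficiency.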
| Generating random samples from a density function | CC BY-SA 3.0 | null | 2011-06-07T17:24:13.797 | 2012-02-02T09:38:46.240 | 2011-06-07T19:00:42.393 | 930 | 3805 | [
"algorithms",
"random-generation",
"density-function"
] |
11675 | 2 | null | 11548 | 2 | null | Jeromy Anglim and IrishStat both give great answers, but they may be a little more complex than what you're looking for.
- A simpler method could be to perform a linear regression on your data, to get PageViews = a * Date + b for some constants a and b; the constant a is then a measure of the linear "slope" of your data, which you could use to measure how much the link is trending. However, this might not work so well if your data doesn't follow a linear trend (the example in your other link looks pretty linear, but you could imagine that your link has instead been growing exponentially lately).
- So another approach could be to convert your pageviews into ranks (e.g., in article 1, 100 is the lowest value, so convert that into a 1; 80 is the 2nd-lowest value, so convert that into a 2; 60 is the highest value, so convert that into a 3), and then take the correlation of these ranks with (1,2,...,n) (where n is the total number of dates you have).
For example, if your article behaves like
```
Date, PageViews, Rank
June 1, 100, 1
June 2, 120, 3
June 3, 115, 2
June 4, 125, 4
June 5, 150, 5
```
Then you would take the correlation between `(1,3,2,4,5)` and `(1,2,3,4,5)` to get a trending score of 0.9. (Note that under this method, though, pageviews of `(100, 120, 115, 125, 150)` have the same trending score as `(100, 300, 299, 7000, 35000)`, which may or may not be what you want, since the latter is growing faster. In other words, this method tells you how strong the direction of the trend is, but not the magnitude. If you do want to get a sense of the magnitude, then you could just repeat these methods on the day-by-day changes of pageviews, i.e., determine whether the day-by-day changes are trending upwards or downwards.)
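A small Python sketch of the rank-correlation score described above (it assumes no tied page-view values):

```python
import math

def trend_score(views):
    """Pearson correlation between the ranks of `views` and time order 1..n."""
    n = len(views)
    order = sorted(range(n), key=lambda i: views[i])
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    t = range(1, n + 1)
    mr, mt = sum(ranks) / n, sum(t) / n
    cov = sum((a - mr) * (b - mt) for a, b in zip(ranks, t))
    var_r = sum((a - mr) ** 2 for a in ranks)
    var_t = sum((b - mt) ** 2 for b in t)
    return cov / math.sqrt(var_r * var_t)

print(trend_score([100, 120, 115, 125, 150]))  # → 0.9
```

As noted above, this measures the direction of the trend, not its magnitude; applying the same score to day-by-day changes gives a magnitude-sensitive variant.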
| null | CC BY-SA 3.0 | null | 2011-06-07T17:31:27.397 | 2011-06-07T17:31:27.397 | null | null | 1106 | null |
11676 | 1 | null | null | 29 | 72533 | I found a formula for pseudo $R^2$ in the book [Extending the Linear Model with R](http://www.maths.bath.ac.uk/~jjf23/ELM/) by Julian J. Faraway (p. 59).
$$1-\frac{\text{ResidualDeviance}}{\text{NullDeviance}}$$
Is this a common formula for pseudo $R^2$ for GLMs?
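For what it's worth, the computation itself is trivial once the two deviances are extracted from a fitted model; a Python sketch with made-up deviance values:

```python
def pseudo_r2(residual_deviance, null_deviance):
    """Deviance-based pseudo R^2: 1 - D_residual / D_null."""
    return 1 - residual_deviance / null_deviance

print(pseudo_r2(50.0, 200.0))  # → 0.75
```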
| Pseudo R squared formula for GLMs | CC BY-SA 3.0 | null | 2011-06-07T17:34:41.347 | 2020-04-08T23:13:12.077 | 2014-01-11T22:39:22.300 | 7290 | 4496 | [
"r",
"regression",
"generalized-linear-model",
"r-squared"
] |
11677 | 1 | null | null | 12 | 30270 | I am having trouble understanding the different estimators that can be used in an impact evaluation. I know that the intention-to-treat (ITT) estimator compares outcomes between eligible individuals without the program and eligible individuals with the program, regardless of compliance. I thought the average treatment effect (ATE) measured the same thing; however, it seems that the ATE takes compliance into consideration, so that it compares outcomes between those eligible who take up treatment and those who are not eligible. Is this correct?
| What is the difference between ITT and ATE? | CC BY-SA 3.0 | null | 2011-06-07T17:56:29.180 | 2019-11-09T00:54:24.183 | 2011-06-07T19:29:13.750 | 930 | 834 | [
"experiment-design",
"epidemiology"
] |
11678 | 1 | null | null | 8 | 256 | I would like to find a reference, preferably free on the internet, where I can read about the theoretical or practical justification for the use of parametric / analytic probability distributions.
By parametric distributions I mean the named ones like Normal, Weibull, etc.
| Where can I read about the justification for the use of parametric probability distributions? | CC BY-SA 3.0 | null | 2011-06-07T17:56:43.910 | 2013-09-04T15:06:03.857 | 2013-09-04T15:06:03.857 | 27581 | 74 | [
"probability",
"distributions",
"references"
] |
11679 | 1 | 13093 | null | 6 | 220 | I have a nested case-control study that I have been using for analysis. At the end of my work I have deduced a set of variables that I use later to classify new cases. One example of a simple classifier I am using is a naive Bayes, which simply outputs a probability.
So here is my question:
Could I make my probabilities reflect the real world? In my specific example, the condition that I am testing for has a prevalence of 33% in my study, but it has a population prevalence of only 10%. Bayes factors have been suggested to me as a way to achieve this; however, I am a little unsure how to set up the problem.
As an example, I have seen a Bayes factor expressed as a logit between the true vs. study prevalence of the outcome. The classifier there, however, was a logistic regression, and in that case the Bayes factor was just added to the linear predictors. I think that example was very specific, and perhaps an inappropriate method for probabilities from a naive Bayes. Instead what I did was add the logit Bayes factor to the logged probabilities, but I am not convinced this is right either. I also think a simpler solution would be to use Bayes' theorem directly, but there I am not sure how to represent my study vs. population prevalences. The method below isn't quite right, but gets at what I want:
```
p_final = classier_posterior*(population_prev)/(study_prev)
```
I should contextualize that I use the probabilities to establish a threshold for classification downstream.
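One standard way to write the correction (under the assumption that the classifier's likelihood ratio transfers from the study sample to the population, i.e. only the prior/prevalence differs) rescales the posterior odds by the ratio of prior odds; a Python sketch:

```python
def adjust_posterior(p, study_prev, pop_prev):
    """Rescale a posterior probability fitted at prevalence study_prev
    to a population where the prevalence is pop_prev (odds correction)."""
    odds = p / (1.0 - p)
    # Ratio of population prior odds to study prior odds.
    prior_ratio = (pop_prev / (1.0 - pop_prev)) / (study_prev / (1.0 - study_prev))
    adj = odds * prior_ratio
    return adj / (1.0 + adj)
```

Because study prevalence exceeds population prevalence here (33% vs. 10%), the adjusted probabilities are shifted downward, and any downstream classification threshold should be chosen on the adjusted scale.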
| Probabilities in case-controlled studies | CC BY-SA 3.0 | null | 2011-06-07T19:14:29.673 | 2011-09-13T14:52:37.710 | 2011-06-08T08:38:14.803 | null | 4673 | [
"r",
"probability",
"case-control-study"
] |
11680 | 1 | null | null | 5 | 1295 | I'd like to hear your opinions on the following:
- What parameters would you report when estimating different likelihood-based regressions? AIC, BIC, pseudo $R^2$?
- What is the standard to report?
It should be a measure which answers the question of how good the specified model is.
| Which measure of model fit to report when performing likelihood based regression: AIC, BIC, Pseudo R-square? | CC BY-SA 3.0 | null | 2011-06-07T19:38:25.840 | 2012-02-20T01:03:10.477 | 2012-02-20T01:03:10.477 | 4856 | 4496 | [
"regression",
"maximum-likelihood",
"aic",
"bic"
] |
11681 | 2 | null | 11573 | 2 | null | What you are talking about is called [Conjoint Analysis](http://en.wikipedia.org/wiki/Conjoint_analysis_%28marketing%29). "Multivariate Data Analysis" by Hair has a good chapter on this.
[[Discrete] Choice Modeling](http://en.wikipedia.org/wiki/Choice_modelling) is when you have the user compare two (or three) images at once, and ask them to choose their preferred one. In this case you would use conditional logistic regression, or a hierarchical bayesian model, to analyze the results.
[Sawtooth software](http://www.sawtoothsoftware.com/) has a good package, for both conjoint and choice, and [JMP has a nice Choice Modeling package](http://www.jmp.com/software/jmp8/demos.shtml), which uses the bayes technique. Neither are cheap, though JMP has a 30 day trial, and is remarkably discounted if you are associated with a college/university.
| null | CC BY-SA 3.0 | null | 2011-06-07T20:09:17.800 | 2011-06-07T20:09:17.800 | null | null | 74 | null |
11682 | 1 | 12307 | null | 8 | 6481 | I have a multitree that represents the lineages of all the fishgroups in a breeding program. It's stored as a double adjacency table with `fish_id`, `sire_id` and `dam_id`. This means a particular fishgroup is only directly aware of its parents (if any) and knows nothing about descendants. I can conveniently output this data as a [GraphML](http://graphml.graphdrawing.org/primer/graphml-primer.html) doc, but I'm having difficulty finding a viewer that can display a multitree given simple input. [InfoVis](http://thejit.org/) can, but requires a verbose and redundant description of the structure. [Cytoscape Web](http://cytoscapeweb.cytoscape.org/) can take GraphML objects, but can't automatically lay out a multitree (it does its best, but I'd like a hierarchical layout). Does anyone know of another option?
My GraphML looks like this:
```
<?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns
http://graphml.graphdrawing.org/xmlns/1.0/graphml.xsd">
<key id="currentfish" for="node" attr.name="cfish" attr.type="string">
<default>#ffff00</default>
</key>
<key id="parcount" for="edge" attr.name="parcount" attr.type="int"></key>
<key id="parenttype" for="edge" attr.name="parenttype" attr.type="string"></key>
<key id="fish" for="node" attr.name="fish" attr.type='string'></key>
<graph id="G" edgedefault="directed">
<node id='f22'>
<data key='fishname'>ZS6181</data>
</node>
<edge id='es_23' source='f57' target='f23'>
<data key='parcount'>0</data>
<data key='parenttype'>sire</data>
</edge>
<node id='f23'>
<data key='fishname'>90716</data>
</node>
<edge id='ed_23' source='f42' target='f23'>
<data key='parcount'>0</data>
<data key='parenttype'>dam</data>
</edge>
<node id='f24'>
<data key='fishname'>ZS6377</data>
</node>
<node id='f25'>
<data key='fishname'>ZS6375</data>
</node>
<edge id='ed_26' source='f25' target='f26'>
<data key='parcount'>10</data>
<data key='parenttype'>dam</data>
</edge>
<node id='f26'>
<data key='fishname'>ZS6375 F1</data>
</node>
<edge id='es_26' source='f25' target='f26'>
<data key='parcount'>10</data>
<data key='parenttype'>sire</data>
</edge>
<node id='f27'>
<data key='fishname'>ZS56181 F1</data>
</node>
<edge id='es_27' source='f43' target='f27'>
<data key='parcount'>9</data>
<data key='parenttype'>sire</data>
</edge>
<edge id='ed_27' source='f43' target='f27'>
<data key='parcount'>9</data>
<data key='parenttype'>dam</data>
</edge>
<node id='f28'>
<data key='fishname'>ZS6377 F1</data>
</node>
<edge id='es_28' source='f44' target='f28'>
<data key='parcount'>7</data>
<data key='parenttype'>sire</data>
</edge>
<edge id='ed_28' source='f44' target='f28'>
<data key='parcount'>7</data>
<data key='parenttype'>dam</data>
</edge>
<node id='f29'>
<data key='fishname'>100128 AB</data>
</node>
<edge id='es_30' source='f25' target='f30'>
<data key='parcount'>10</data>
<data key='parenttype'>sire</data>
</edge>
<node id='f30'>
<data key='fishname'>ZS6375 F1A</data>
</node>
<edge id='ed_30' source='f25' target='f30'>
<data key='parcount'>10</data>
<data key='parenttype'>dam</data>
</edge>
<edge id='ed_31' source='f45' target='f31'>
<data key='parcount'>0</data>
<data key='parenttype'>dam</data>
</edge>
<node id='f31'>
<data key='fishname'>AB 100223</data>
</node>
<edge id='es_31' source='f25' target='f31'>
<data key='parcount'>0</data>
<data key='parenttype'>sire</data>
</edge>
<edge id='es_32' source='f45' target='f32'>
<data key='parcount'>0</data>
<data key='parenttype'>sire</data>
</edge>
<edge id='ed_32' source='f45' target='f32'>
<data key='parcount'>0</data>
<data key='parenttype'>dam</data>
</edge>
<node id='f32'>
<data key='fishname'>AB 100319</data>
</node>
<edge id='es_33' source='f32' target='f33'>
<data key='parcount'>8</data>
<data key='parenttype'>sire</data>
</edge>
<edge id='ed_33' source='f31' target='f33'>
<data key='parcount'>8</data>
<data key='parenttype'>dam</data>
</edge>
<node id='f33'>
<data key='fishname'>AB 100714</data>
</node>
<edge id='ed_34' source='f28' target='f34'>
<data key='parcount'>10</data>
<data key='parenttype'>dam</data>
</edge>
<node id='f34'>
<data key='fishname'>AB 100715</data>
</node>
<edge id='es_34' source='f59' target='f34'>
<data key='parcount'>10</data>
<data key='parenttype'>sire</data>
</edge>
<edge id='ed_35' source='f27' target='f35'>
<data key='parcount'>4</data>
<data key='parenttype'>dam</data>
</edge>
<node id='f35'>
<data key='fishname'>AB 100722</data>
</node>
<edge id='es_35' source='f28' target='f35'>
<data key='parcount'>5</data>
<data key='parenttype'>sire</data>
</edge>
<node id='f36'>
<data key='fishname'>ZS6377 F2</data>
</node>
<edge id='es_36' source='f28' target='f36'>
<data key='parcount'>8</data>
<data key='parenttype'>sire</data>
</edge>
<edge id='ed_36' source='f28' target='f36'>
<data key='parcount'>6</data>
<data key='parenttype'>dam</data>
</edge>
<node id='f37'>
<data key='fishname'>ZS6375 F2</data>
</node>
<node id='f38'>
<data key='fishname'>100730 AB</data>
</node>
<edge id='es_39' source='f34' target='f39'>
<data key='parcount'>6</data>
<data key='parenttype'>sire</data>
</edge>
<edge id='ed_39' source='f38' target='f39'>
<data key='parcount'>6</data>
<data key='parenttype'>dam</data>
</edge>
<node id='f39'>
<data key='fishname'>110208</data>
</node>
<node id='f40'>
<data key='fishname'>110412</data>
</node>
<edge id='ed_41' source='f27' target='f41'>
<data key='parcount'>2</data>
<data key='parenttype'>dam</data>
</edge>
<node id='f41'>
<data key='fishname'>110413</data>
</node>
<edge id='es_41' source='f24' target='f41'>
<data key='parcount'>5</data>
<data key='parenttype'>sire</data>
</edge>
<node id='f42'>
<data key='fishname'>90318</data>
</node>
<node id='f43'>
<data key='fishname'>ZS56181-90705</data>
</node>
<node id='f44'>
<data key='fishname'>ZS6377-90913</data>
</node>
<node id='f45'>
<data key='fishname'>ZS56181</data>
</node>
<node id='f57'>
<data key='fishname'>90317</data>
</node>
<node id='f59'>
<data key='fishname'>ZS63775 F1</data>
</node>
</graph>
</graphml>
```
| How to visualize a GraphML multitree? | CC BY-SA 3.0 | null | 2011-06-07T20:23:14.947 | 2012-06-19T14:27:31.653 | 2011-06-07T21:01:09.653 | 930 | 1079 | [
"data-visualization",
"software",
"graph-theory"
] |
11683 | 2 | null | 155 | 19 | null | A p value is a measure of how embarrassing the data are to the null hypothesis
Nicholas Maxwell, Data Matters: Conceptual Statistics for a Random World Emeryville CA: Key College Publishing, 2004.
| null | CC BY-SA 3.0 | null | 2011-06-07T20:26:43.583 | 2011-06-07T20:26:43.583 | null | null | 4253 | null |
11684 | 2 | null | 11678 | 4 | null | Nice question. I like Ben Bolker's descriptions from his book, [Ecological Models and Data in R](http://www.math.mcmaster.ca/~bolker/emdbook/index.html) ([preprint of the relevant chapter](http://www.math.mcmaster.ca/~bolker/emdbook/chap4A.pdf); the bestiary of distributions starts on page 19).
For each distribution, he has a few sentences to a page on where it comes from and what it's used for, plus some math and graphs.
| null | CC BY-SA 3.0 | null | 2011-06-07T20:58:32.590 | 2011-06-07T20:58:32.590 | null | null | 4862 | null |
11687 | 1 | 11710 | null | 13 | 786 | There are different methods for prediction of ordinal and categorical variables.
What I do not understand is how this distinction matters. Is there a simple example which can make clear what goes wrong if I drop the order? Under what circumstances does it not matter? For instance, if the independent variables are all categorical/ordinal, too, would there be a difference?
[This related question](https://stats.stackexchange.com/questions/6481/consequence-of-ignoring-the-order-of-a-categorical-variable-with-different-levels) focuses on the type of the independent variables. Here I am asking about outcome variables.
Edit:
I see the point that using the order structure reduces the number of model parameters, but I am still not really convinced.
Here is an example (taken from an [introduction to ordered logistic regression](http://www.ats.ucla.edu/stat/r/dae/ologit.htm)) where, as far as I can see, ordinal logistic regression does not perform better than multinomial logistic regression:
```
library(nnet)
library(MASS)
gradapply <- read.csv(url("http://www.ats.ucla.edu/stat/r/dae/ologit.csv"), colClasses=c("factor", "factor", "factor", "numeric"))
ordered_result <- function() {
train_rows <- sample(nrow(gradapply), round(nrow(gradapply)*0.9))
train_data <- gradapply[train_rows,]
test_data <- gradapply[setdiff(1:nrow(gradapply), train_rows),]
m <- polr(apply~pared+gpa, data=train_data)
pred <- predict(m, test_data)
return(sum(pred==test_data$apply))
}
multinomial_result <- function() {
train_rows <- sample(nrow(gradapply), round(nrow(gradapply)*0.9))
train_data <- gradapply[train_rows,]
test_data <- gradapply[setdiff(1:nrow(gradapply), train_rows),]
m <- multinom(apply~pared+gpa, data=train_data)
pred <- predict(m, test_data)
return(sum(pred==test_data$apply))
}
n <- 100
polr_res <- replicate(n, ordered_result())
multinom_res <- replicate(n, multinomial_result())
boxplot(data.frame(polr=polr_res, multinom=multinom_res))
```
which shows the distribution of the number of right guesses (out of 40) of both algorithms.

Edit 2: When I use the following scoring method
```
return(sum(abs(as.numeric(pred)-as.numeric(test_data$apply))))
```
and penalize "very wrong" predictions, polr still looks bad, i.e. the plot above does not change very much.
| What do I gain if I consider the outcome as ordinal instead of categorical? | CC BY-SA 3.0 | null | 2011-06-07T22:31:07.243 | 2017-05-05T12:33:07.403 | 2017-05-05T12:33:07.403 | 101426 | 573 | [
"logistic",
"multinomial-distribution",
"ordered-logit"
] |