Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
12027 | 2 | null | 12026 | 2 | null | The most related technique I know of is described in a [talk at ACM Data Mining SIG by Ted Dunning](http://fora.tv/2009/10/14/ACM_Data_Mining_SIG_Ted_Dunning).
| null | CC BY-SA 3.0 | null | 2011-06-17T10:15:41.077 | 2011-06-17T10:15:41.077 | null | null | 2150 | null |
12029 | 1 | 12036 | null | 16 | 8874 | Thanks to Tormod question (posted [here](https://stats.stackexchange.com/questions/11892/plotting-changes-in-a-three-valued-ordinal-variable-across-two-time-points-using)) I came across the [Parallel Sets](http://eagereyes.org/blog/2009/parallel-sets-released.html) plot. Here is an example for how it looks:

(It is a visualization of the Titanic dataset, showing, for example, how most of the women who didn't survive belonged to the third class...)
I would love to be able to reproduce such a plot with R. Is that possible to do?
Thanks, Tal
| Is it possible to create "parallel sets" plot using R? | CC BY-SA 3.0 | null | 2011-06-17T11:14:03.430 | 2017-06-19T13:01:42.357 | 2017-04-13T12:44:36.923 | -1 | 253 | [
"r",
"data-visualization",
"categorical-data",
"interactive-visualization"
] |
12030 | 1 | null | null | 12 | 32062 | I have read the Wikipedia pages for [Friedman's](http://en.wikipedia.org/wiki/Friedman_test) and [Kruskal-Wallis'](http://en.wikipedia.org/wiki/Kruskal-Wallis_one-way_analysis_of_variance) test, but I am not sure which one to use. Are there differences in the assumptions?
| Friedman vs Kruskal-Wallis test | CC BY-SA 3.0 | null | 2011-06-17T11:24:16.413 | 2017-06-04T11:25:21.350 | 2011-06-17T12:43:38.683 | null | 5058 | [
"anova",
"kruskal-wallis-test"
] |
12031 | 2 | null | 12026 | 1 | null | You can try recurrent neural networks: neural networks for time series/sequences. I have given an explanation [here](https://stats.stackexchange.com/questions/8000/proper-way-of-using-recurrent-neural-network-for-time-series-analysis/8014#8014).
| null | CC BY-SA 3.0 | null | 2011-06-17T13:15:16.227 | 2011-06-17T13:15:16.227 | 2017-04-13T12:44:41.967 | -1 | 2860 | null |
12032 | 1 | 12038 | null | 2 | 111 | I am new to CrossValidated so hope this question is appropriate... apologies if not.
I am studying the stability of a molecule 'B' and have data on the recorded level at different time points. Molecule B actually increases over time because its parent molecule, AB, is decaying and releasing it (and A): AB -> A + B
If I were measuring AB I believe it would decay along the following [model](http://en.wikipedia.org/wiki/Half-life#Formulas_for_half-life_in_exponential_decay):
$$
N(t) = N_0 e^{-\lambda t}
$$
Where $N_0$ is the start concentration of AB, $\lambda=\ln(2)/t_{1/2}$ and $t$ is time. For example, when $N_0=1000$ and $t_{1/2}=2$ we get the curve below, which, when log-transformed, forms a nice straight line to which real data can easily be fit; see the left-hand images [in this Google spreadsheet](https://spreadsheets.google.com/spreadsheet/pub?hl=en_GB&hl=en_GB&key=0ApdtrT02Tv8sdGpRQmROZFhIOE1vMU1xQ1hGODllNkE&single=true&gid=0&output=html).
However, because I am measuring B, and the parent start concentration is unknown (probably much bigger than B's), I am getting a graph like the top-right graph in the link, which is a mirror of the decay curve of AB, starting at the start concentration of B (known, and in this example 100) and increasing by the amount of AB that has decayed, $N_0 - N(t)$, at each $t$.
However, log-transforming this does not produce a straight line, because the concentration is not halving or doubling but increasing by an amount that depends on the unknown start concentration of AB, producing the not very useful bottom-right graph in the link.
Therefore, I am finding it hard to fit a line to my data. I have tried converting the concentration of B into something that looks like the concentration of AB, which could then easily be turned into a straight line by log-transforming, but had no luck, as I don't know $N_0$ for AB.
I am hoping I've missed something and there is a nice way to transform my data to a straight line, or to model it some other way. Any help and ideas would be very much appreciated!
Thanks
Nick
| Fitting child molecule concentration in parent molecule exponential decay | CC BY-SA 3.0 | null | 2011-06-17T13:33:55.553 | 2011-06-17T18:16:37.890 | 2011-06-17T18:16:37.890 | null | 5061 | [
"modeling",
"exponential-distribution"
] |
12033 | 2 | null | 12020 | 1 | null | An idea is to try a set of different metrics, running them through the same data set and then compare their ratings (places of "interesting things") with those of human classifiers. Then you can select the metric that is the closest to the rating of humans either based on correctly identified instances (true positives) or based on the number of incorrectly missed instanced (false negatives).
Of course, this is basically just guess work. You could also try to approach the task with theory, find out what the numbers mean and what exactly an event of interest is. If you have a good specification of these questions you basically have your classifier.
| null | CC BY-SA 3.0 | null | 2011-06-17T13:35:33.087 | 2011-06-17T13:35:33.087 | null | null | 1048 | null |
12035 | 1 | null | null | 0 | 169 | I have a data table detailing various histological parameters among 3 groups of patients- DLE, SCLE and ACLE.
I would like to know how we could analyze the strength of the association of a parameter with a particular group.
| Correlation analysis of parameters | CC BY-SA 3.0 | null | 2011-06-17T14:04:08.600 | 2012-03-17T06:54:37.933 | 2011-06-17T14:09:25.437 | 2116 | 5063 | [
"multiple-comparisons"
] |
12036 | 2 | null | 12029 | 25 | null | Here's a version using only base graphics, thanks to Hadley's comment.
(For previous version, see edit history).

```
parallelset <- function(..., freq, col="gray", border=0, layer,
                        alpha=0.5, gap.width=0.05) {
  p <- data.frame(..., freq, col, border, alpha, stringsAsFactors=FALSE)
  n <- nrow(p)
  if(missing(layer)) { layer <- 1:n }
  p$layer <- layer
  np <- ncol(p) - 5
  d <- p[ , 1:np, drop=FALSE]
  p <- p[ , -c(1:np), drop=FALSE]
  p$freq <- with(p, freq/sum(freq))
  col <- col2rgb(p$col, alpha=TRUE)
  if(!identical(alpha, FALSE)) { col["alpha", ] <- p$alpha*256 }
  p$col <- apply(col, 2, function(x) do.call(rgb, c(as.list(x), maxColorValue = 256)))
  getp <- function(i, d, f, w=gap.width) {
    a <- c(i, (1:ncol(d))[-i])
    o <- do.call(order, d[a])
    x <- c(0, cumsum(f[o])) * (1-w)
    x <- cbind(x[-length(x)], x[-1])
    gap <- cumsum( c(0L, diff(as.numeric(d[o,i])) != 0) )
    gap <- gap / max(gap) * w
    (x + gap)[order(o),]
  }
  dd <- lapply(seq_along(d), getp, d=d, f=p$freq)
  par(mar = c(0, 0, 2, 0) + 0.1, xpd=TRUE )
  plot(NULL, type="n", xlim=c(0, 1), ylim=c(np, 1),
       xaxt="n", yaxt="n", xaxs="i", yaxs="i", xlab='', ylab='', frame=FALSE)
  for(i in rev(order(p$layer)) ) {
    for(j in 1:(np-1) )
      polygon(c(dd[[j]][i,], rev(dd[[j+1]][i,])), c(j, j, j+1, j+1),
              col=p$col[i], border=p$border[i])
  }
  text(0, seq_along(dd), labels=names(d), adj=c(0,-2), font=2)
  for(j in seq_along(dd)) {
    ax <- lapply(split(dd[[j]], d[,j]), range)
    for(k in seq_along(ax)) {
      lines(ax[[k]], c(j, j))
      text(ax[[k]][1], j, labels=names(ax)[k], adj=c(0, -0.25))
    }
  }
}

data(Titanic)
myt <- subset(as.data.frame(Titanic), Age=="Adult",
              select=c("Survived","Sex","Class","Freq"))
myt <- within(myt, {
  Survived <- factor(Survived, levels=c("Yes","No"))
  levels(Class) <- c(paste(c("First", "Second", "Third"), "Class"), "Crew")
  color <- ifelse(Survived=="Yes","#008888","#330066")
})

with(myt, parallelset(Survived, Sex, Class, freq=Freq, col=color, alpha=0.2))
```
| null | CC BY-SA 3.0 | null | 2011-06-17T16:08:09.200 | 2011-06-21T16:29:52.253 | 2011-06-21T16:29:52.253 | 3601 | 3601 | null |
12037 | 2 | null | 12030 | 15 | null | Kruskal-Wallis' test is a non parametric one way anova. While Friedman's test can be thought of as a (non parametric) repeated measure one way anova.
If you don't understand the difference, I compiled a list of tutorials I found about doing repeated measure anova with R, you can find them [here](http://www.r-statistics.com/2010/04/repeated-measures-anova-with-r-tutorials/)...
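To make the two data layouts concrete, here is a small illustrative sketch (in Python rather than R, with made-up toy data and no ties) of the two statistics computed by hand: Kruskal-Wallis ranks one pooled sample of independent groups, while Friedman ranks within each subject's row.

```python
def ranks(values):
    """Ranks 1..n of a sequence (assumes no ties, to keep the formulas simple)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

def kruskal_wallis(groups):
    """H statistic: rank the pooled sample, then compare group rank sums."""
    flat = [v for g in groups for v in g]
    r = ranks(flat)
    n = len(flat)
    h, pos = 0.0, 0
    for g in groups:
        rank_sum = sum(r[pos:pos + len(g)])
        pos += len(g)
        h += rank_sum ** 2 / len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

def friedman(blocks):
    """Q statistic: rank within each subject's row (repeated measures), then
    compare treatment (column) rank sums."""
    n, k = len(blocks), len(blocks[0])
    col_sums = [0.0] * k
    for row in blocks:
        for j, rank in enumerate(ranks(row)):
            col_sums[j] += rank
    return 12.0 / (n * k * (k + 1)) * sum(s ** 2 for s in col_sums) - 3 * n * (k + 1)

# three independent groups -> Kruskal-Wallis
h = kruskal_wallis([[1, 2], [3, 4], [5, 6]])
# three subjects, each measured under three conditions -> Friedman
q = friedman([[1, 2, 3], [1, 2, 3], [1, 2, 3]])
```

In practice you would of course use ready-made implementations (`kruskal.test` and `friedman.test` in R) and compare the statistics to a chi-squared distribution; the point here is only that the two tests rank the data differently.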
| null | CC BY-SA 3.0 | null | 2011-06-17T16:08:34.800 | 2011-06-17T16:08:34.800 | null | null | 253 | null |
12038 | 2 | null | 12032 | 4 | null | The problem is that $[B]$ behaves like $N_\infty-N_\infty e^{-\lambda t}$, and the log of this is nothing really interesting. You should subtract this from the asymptotic value to get exponential function again and restore linear look on the log-y plot.
Easy to say, but how to get this $N_\infty$? One way is to wait some time (rule of thumb say $>3/\lambda$, but obviously the more the better) and collect some points then and average them (they will be very close to $N_\infty$).
The better idea is not to plot any lines at all and just use non-linear fitting procedure (such as `nls` in R) to fit $N_\infty\left(1-e^{-\lambda (t-t_0)}\right)$ to the data and just get all those parameters nicely approximated with CIs.
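A quick numerical check of the subtraction trick (Python for illustration; the asymptote and rate are made up): $\log B(t)$ is curved, but $\log(N_\infty - B(t))$ is exactly linear with slope $-\lambda$.

```python
import math

N_inf, lam = 1000.0, math.log(2) / 2.0   # made-up asymptote and rate (half-life 2)

def B(t):
    """Concentration of the child molecule, growing toward N_inf."""
    return N_inf * (1.0 - math.exp(-lam * t))

ts = [0.5, 1, 2, 4, 6, 8]
# log(B) is curved, but log(N_inf - B) = log(N_inf) - lam*t is a straight line
logs = [math.log(N_inf - B(t)) for t in ts]
slopes = [(logs[i + 1] - logs[i]) / (ts[i + 1] - ts[i]) for i in range(len(ts) - 1)]
```

With noisy data and an unknown asymptote you would fit $N_\infty$ rather than assume it, as suggested above.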
| null | CC BY-SA 3.0 | null | 2011-06-17T17:59:37.413 | 2011-06-17T17:59:37.413 | null | null | null | null |
12039 | 1 | null | null | 2 | 3870 | I am looking for the formula of the confidence interval for the difference between means in a one sample t-test. I have only been able to locate the formula for a two sample t-test.
Let me give an example: I have the following ten scores
```
10,12,13,11.5,9,11,11.1,11.9,12.1,9.3
```
I want to know if the mean of these scores is significantly different from my population mean of 15.5. When I conduct a one-sample t-test in SPSS I get the following results:
```
t obtained = -10.776
SIG (i.e., P) = 0.000
95% CI of the difference = -5.3363 to -3.4837.
```
I know how SPSS calculated t and P but not how it calculated the 95% CI of the difference. This 95% CI of the difference is not the same as the CI for the mean. The CI for the mean I can obtain by
$$CI =\bar{x} \pm t S/\sqrt{N}.$$
The confidence interval in this case is: 10.16,12.01. I get the same result if I calculate this in SPSS.
So my question is: what is the formula for the CI of the difference which SPSS produces? How do I get that range? I do not want the CI for the mean. Thanks.
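For what it's worth, the interval SPSS labels the "95% CI of the difference" appears to be just $(\bar{x}-\mu_0) \pm t_{0.975,\,n-1}\,s/\sqrt{n}$, i.e. the CI for the mean shifted by the comparison value. A sketch (in Python, purely illustrative; note that with these ten scores the quoted SPSS output corresponds to a comparison value of 15.5):

```python
import math
from statistics import mean, stdev

scores = [10, 12, 13, 11.5, 9, 11, 11.1, 11.9, 12.1, 9.3]
mu0 = 15.5                        # comparison value that reproduces the quoted output
n = len(scores)
xbar = mean(scores)
se = stdev(scores) / math.sqrt(n) # standard error of the mean
t_crit = 2.2622                   # 97.5th percentile of t with 9 df, e.g. qt(.975, 9) in R

t_obt = (xbar - mu0) / se         # about -10.77
lo = (xbar - mu0) - t_crit * se   # about -5.336
hi = (xbar - mu0) + t_crit * se   # about -3.484
```

If this matches the SPSS output, the "CI of the difference" is indeed just the mean CI shifted by the comparison value.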
| Formula confidence interval for difference in means - one sample t-test | CC BY-SA 3.0 | null | 2011-06-17T18:04:53.840 | 2019-02-10T16:40:21.310 | 2018-12-19T12:43:17.000 | 11887 | 4498 | [
"confidence-interval",
"t-test",
"mean"
] |
12040 | 1 | 12042 | null | 5 | 207 | I quote this question from "All of Statistics":
>
Suppose a gene can be type A or type a. There are three types of people (called genotypes): AA, Aa, and aa. Let $(p, q, r)$ denote the fractions of people of each genotype. We assume that everyone contributes one of their two copies of the gene at random to their children. We also assume that mates are selected at random. The latter is not realistic; however, it is often reasonable to assume that you do not choose your mate based on whether they are AA, Aa, or aa. (This would be false if the gene was for eye color and if people chose mates based on eye color.) Imagine if we pooled everyone's genes together. The proportion of A genes is $P = p + (q/2)$ and the proportion of a genes is $Q = r + (q/2)$. A child is AA with probability $P^2$, aA with probability $2PQ$, and aa with probability $Q^2$. Thus, the fraction of A genes in this generation is $P^2 + PQ$.
This is probably a basic question. I'm asking: why is $A = P^2 + PQ$ and not $P^2 + 2PQ$?
| Basic question regarding probability | CC BY-SA 4.0 | null | 2011-06-17T18:04:57.520 | 2018-08-14T10:59:27.040 | 2018-08-14T10:59:27.040 | 128677 | 5057 | [
"probability"
] |
12041 | 1 | null | null | 2 | 1882 | I'm writing Python code to use the Kalman State-space approach to estimate ARMA model coefficients using MLE however, I'm not too clear on how to derive the coefficient estimates standard errors from the process.
I know you have to use the information matrix, invert it to get the variance-covariance matrix and you can derive it from there, but how to derive the information matrix is the challenge I'm currently faced with (from a coding standpoint).
Any links to free papers, tutorials, equations, or references to code is what would be very helpful to me.
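For concreteness, here is the generic recipe sketched in Python on a toy Gaussian likelihood (an illustrative stand-in for an ARMA likelihood; the model, data, and all names are made up): evaluate a finite-difference Hessian of the negative log-likelihood at the MLE, invert it, and take square roots of the diagonal.

```python
import math
import random

def neg_loglik(theta, x):
    """Gaussian NLL with parameters (mu, log_sigma); a stand-in for an ARMA NLL."""
    mu, log_s = theta
    s = math.exp(log_s)
    n = len(x)
    return 0.5 * n * math.log(2 * math.pi) + n * log_s \
        + sum((xi - mu) ** 2 for xi in x) / (2 * s * s)

def numerical_hessian(f, theta, eps=1e-5):
    """Central finite-difference Hessian of f at theta (observed information)."""
    k = len(theta)
    H = [[0.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(k):
            def shifted(di, dj):
                t = list(theta)
                t[i] += di * eps
                t[j] += dj * eps
                return f(t)
            H[i][j] = (shifted(1, 1) - shifted(1, -1)
                       - shifted(-1, 1) + shifted(-1, -1)) / (4 * eps * eps)
    return H

def invert_2x2(H):
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    return [[H[1][1] / det, -H[0][1] / det],
            [-H[1][0] / det, H[0][0] / det]]

random.seed(0)
x = [random.gauss(2.0, 1.5) for _ in range(500)]
# MLE for this toy model is available in closed form (1/n variance)
mu_hat = sum(x) / len(x)
sigma_hat = math.sqrt(sum((xi - mu_hat) ** 2 for xi in x) / len(x))
theta_hat = [mu_hat, math.log(sigma_hat)]

H = numerical_hessian(lambda th: neg_loglik(th, x), theta_hat)
cov = invert_2x2(H)                        # inverse observed information
se = [math.sqrt(cov[0][0]), math.sqrt(cov[1][1])]
```

For the real thing, the same `numerical_hessian` would be applied to the Kalman-filter log-likelihood at the optimizer's solution; in this toy case `se[0]` matches the textbook $\hat\sigma/\sqrt{n}$.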
Thanks!
| ARMA model coefficient standard errors | CC BY-SA 3.0 | null | 2011-06-17T18:30:55.480 | 2011-08-18T16:04:52.067 | 2011-08-18T16:04:52.067 | 5739 | 5068 | [
"time-series",
"estimation",
"maximum-likelihood",
"standard-error",
"arma"
] |
12042 | 2 | null | 12040 | 4 | null | For now, I'm going to assume that you understand everything up until the last sentence of your block quote, i.e. you understand why the probability of Aa is 2PQ. If I'm wrong, reply with a comment and I'll try to explain the earlier stuff.
- So we have a proportion (2PQ) of Aa individuals in the population. Each of those individuals has an A allele and an a allele. So half (one out of two) of their alleles is A. So the proportion of alleles that are both A and in an Aa individual is PQ (2PQ times one half).
- Meanwhile, a different proportion (P^2) of the individuals are AA. 100% (2/2) of their alleles are A. So the proportion of alleles that are in an AA individual and are A is P^2 (P^2 times one).
- No A alleles are found in the aa individuals (Q^2 times zero).
We can find the total proportion of A alleles in the population by adding those three, which gives us PQ from the mixed individuals, P^2 from the AA individuals, and 0 from the aa individuals.
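The bookkeeping above is easy to check numerically (a Python sketch with made-up genotype fractions):

```python
# made-up genotype fractions (p, q, r) summing to 1
p, q, r = 0.3, 0.5, 0.2

P = p + q / 2            # pooled frequency of allele A
Q = r + q / 2            # pooled frequency of allele a

# genotype probabilities in the next generation, under random mating
prob_AA, prob_Aa, prob_aa = P ** 2, 2 * P * Q, Q ** 2

# A alleles among the children: all of AA's alleles plus half of Aa's
next_P = prob_AA + 0.5 * prob_Aa    # = P^2 + P*Q, which equals P since P + Q = 1
```

Since $P + Q = 1$, $P^2 + PQ = P(P+Q) = P$: the allele frequency is unchanged between generations.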
Hope this helps.
| null | CC BY-SA 3.0 | null | 2011-06-17T18:37:14.697 | 2011-06-17T18:37:14.697 | null | null | 4862 | null |
12043 | 2 | null | 11892 | 1 | null | With 3 categories you can use a trilinear plot:
>
Allen, Terry. "Using and Interpreting the Trilinear Plot." Chance 15 (Summer 2002).
The article shows an example of change in 3 categories over time (as well as many other examples).
The `triplot` function in the TeachingDemos package for R does these plots, as do `triangle.plot` in ade4, `ternaryplot` in vcd, `tri` in cwhtool, and `triax.plot` in plotrix (and probably a couple of others as well).
| null | CC BY-SA 3.0 | null | 2011-06-17T19:20:01.190 | 2011-06-17T19:20:01.190 | null | null | 4505 | null |
12044 | 2 | null | 12023 | 3 | null | Yes the 5 was arbitrary, but you can look in textbooks on survey sample design and they have formulas and algorithms for determining sample sizes within clusters and numbers of clusters. These take into account things like the within cluster/person variability, between cluster/person variability (in the example you have an upper bound on the variability since the data has to be in the 1-7 range), cost per cluster/person, cost per measurement within cluster/person, and sometimes more. If the above example had gone into detail on exactly how they came up with 5 using the above then it would have been much longer and would probably have distracted from their main point.
| null | CC BY-SA 3.0 | null | 2011-06-17T19:26:55.037 | 2011-06-17T19:26:55.037 | null | null | 4505 | null |
12045 | 2 | null | 12002 | 7 | null | This is a question of estimation within a linear mixed effects model. The problem is that the variance of the grand mean is a weighted sum of two [variance components](http://www.itl.nist.gov/div898/handbook/prc/section4/prc44.htm) which have to be separately estimated (via an ANOVA of the data). The estimates have different degrees of freedom. Therefore, although one can attempt to construct a confidence interval for the mean using the usual small-sample (Student t) formulas, it is unlikely to attain its nominal coverage because the deviations from the mean will not exactly follow a Student t distribution.
A recent (2010) article by Eva Jarosova, Estimation with the Linear Mixed Effects Model, discusses this issue. (As of 2015 it no longer appears to be available on the Web.) In the context of a "small" dataset (even so, about three times larger than this one), she uses simulation to evaluate two approximate CI calculations (the well-known Satterthwaite approximation and the "Kenward-Roger's method"). Her conclusions include
>
Simulation study revealed that quality of estimation of covariance parameters and consequently adjustment of confidence intervals in small samples can be quite poor.... A poor estimation may influence not only the true confidence level of conventional intervals but it can also make the adjustment impossible. It is obvious that even for balanced data three types of intervals [conventional, Satterthwaite, K-R] may differ substantially. When a striking difference between the conventional and the adjusted intervals is observed, standard errors of covariance parameter estimates should be checked. On the other hand, when the differences between [the three] types of intervals are small, the adjustment seems to be unnecessary.
In short, a good approach seems to be
- Compute a conventional CI by using the estimates of variance components and pretending a t-distribution applies.
- Also compute at least one of the adjusted CIs.
- If the computations are "close," accept the conventional CI. Otherwise, report that there are insufficient data to produce a reliable CI.
| null | CC BY-SA 3.0 | null | 2011-06-17T19:54:50.570 | 2015-01-21T16:01:32.387 | 2015-01-21T16:01:32.387 | 919 | 919 | null |
12046 | 1 | 12089 | null | 11 | 11020 | Is there a way that once a complex classification tree is constructed using rpart (in R), to organize the decision rules produced for each class? So instead of getting one huge tree, we get a set of rules for each of the classes?
(if so, how?)
Here is a simple code example to show examples on:
```
library(rpart)  # the kyphosis data ships with rpart
fit <- rpart(Kyphosis ~ Age + Number + Start, data=kyphosis)
```
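(Not rpart itself, but to illustrate the idea language-agnostically: a Python sketch that flattens a hand-made toy tree, loosely shaped like a kyphosis tree, into one rule list per class. The split values are invented for illustration.)

```python
def collect_rules(node, path=None, rules=None):
    """Walk a nested-dict decision tree; return {class: [condition paths]}."""
    if path is None:
        path, rules = [], {}
    if "leaf" in node:
        rules.setdefault(node["leaf"], []).append(" AND ".join(path) or "TRUE")
        return rules
    test = node["split"]                       # e.g. "Start >= 8.5"
    collect_rules(node["yes"], path + [test], rules)
    collect_rules(node["no"], path + ["not (%s)" % test], rules)
    return rules

# a hand-made stand-in for a fitted tree (structure invented for illustration)
tree = {
    "split": "Start >= 8.5",
    "yes": {
        "split": "Age < 55",
        "yes": {"leaf": "absent"},
        "no": {"leaf": "present"},
    },
    "no": {"leaf": "present"},
}

rules = collect_rules(tree)
for cls, conds in rules.items():
    for c in conds:
        print("IF %s THEN class = %s" % (c, cls))
```

In R itself, `path.rpart` can print the path of splits leading to chosen nodes, which gets most of the way to such per-class rule lists.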
Thanks.
| Organizing a classification tree (in rpart) into a set of rules? | CC BY-SA 3.0 | null | 2011-06-17T20:07:39.447 | 2018-08-03T22:37:37.940 | null | null | 253 | [
"r",
"classification",
"cart",
"rpart"
] |
12047 | 1 | null | null | 3 | 837 | I have a series of observations $\{X_i,Y_i,Z_i\}$ for random variable $X$, $Y$ and $Z$. Now I want to test if $X$, $Y$ and $Z$ are mutual independent, can anyone help me?
Some pairwise independence tests like this one [http://www.gatsby.ucl.ac.uk/~gretton/indepTestFiles/indep.htm](http://www.gatsby.ucl.ac.uk/~gretton/indepTestFiles/indep.htm) are available, but I haven't found such a test for multivariate mutual independence yet.
Thanks in advance.
| Multivariate mutual independence test | CC BY-SA 3.0 | null | 2011-06-17T19:59:05.887 | 2011-06-18T10:09:54.303 | 2011-06-18T10:09:54.303 | null | 3594 | [
"probability"
] |
12048 | 2 | null | 12047 | 2 | null | For discrete variables, [Pearson's chi-squared test](http://en.wikipedia.org/wiki/Pearson%27s_chi-square_test#Test_of_independence) generalizes. Estimate the probabilities in the 3D cells by multiplying the marginal probabilities (which is correct, given the assumption of independence). The degrees of freedom are therefore $(k-1)(m-1)(n-1)$ when the variables have $k$, $m$, and $n$ distinct categories.
For normally distributed variables, use tests of correlation with adjustments for multiple comparisons. (These are all approximate.)
For other variables, you can bin the values into classes defined a priori and apply a chi-squared test. This is successful when it rejects the null hypothesis of independence; it is not so successful otherwise, because some power is lost in the binning.
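As a concrete sketch of the discrete case (Python for illustration, toy 2x2x2 counts made up for the demo): estimate each cell's expected count as $n\,\hat p_x \hat p_y \hat p_z$ from the marginals and sum $(O-E)^2/E$.

```python
from itertools import product

def chi2_mutual_independence(counts):
    """counts maps (x, y, z) category triples to observed counts.
    Returns (statistic, df) for H0: X, Y and Z are mutually independent."""
    n = sum(counts.values())
    xs = sorted({key[0] for key in counts})
    ys = sorted({key[1] for key in counts})
    zs = sorted({key[2] for key in counts})
    px = {x: sum(v for key, v in counts.items() if key[0] == x) / n for x in xs}
    py = {y: sum(v for key, v in counts.items() if key[1] == y) / n for y in ys}
    pz = {z: sum(v for key, v in counts.items() if key[2] == z) / n for z in zs}
    stat = 0.0
    for x, y, z in product(xs, ys, zs):
        expected = n * px[x] * py[y] * pz[z]
        observed = counts.get((x, y, z), 0)
        stat += (observed - expected) ** 2 / expected
    # free cells (kmn - 1) minus estimated marginal parameters
    df = len(xs) * len(ys) * len(zs) - len(xs) - len(ys) - len(zs) + 2
    return stat, df

# perfectly independent 2x2x2 table: statistic 0
indep = {(i, j, k): 125 for i in (0, 1) for j in (0, 1) for k in (0, 1)}
stat0, df = chi2_mutual_independence(indep)
# X and Y identical: a large statistic
dep = {(i, j, k): (250 if i == j else 0) for i in (0, 1) for j in (0, 1) for k in (0, 1)}
stat1, _ = chi2_mutual_independence(dep)
```

With real data the statistic would then be compared to a $\chi^2_{df}$ quantile.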
| null | CC BY-SA 3.0 | null | 2011-06-17T20:20:29.570 | 2011-06-17T20:20:29.570 | null | null | 919 | null |
12049 | 2 | null | 12041 | 0 | null | I have no experience with such models. However, there is a time varying copula model of Patton (2006) that is estimating a dependence between two series and this dependence is non-linear (so, he is using MLE) and follows an ARMA-type process. There he derives the standard errors for the estimates. There is also a reference for the literature provided in his paper. On his website, he has the codes to model that. However, I do not remember, if the codes for the standard errors are there.
Here is the paper [http://econ.duke.edu/~ap172/2stage_published_version_mar06.pdf](http://econ.duke.edu/~ap172/2stage_published_version_mar06.pdf)
check p.152
His codes for this paper are here [http://econ.duke.edu/~ap172/code.html](http://econ.duke.edu/~ap172/code.html)
Hope this information is at least somewhat relevant to your problem.
Good luck.
| null | CC BY-SA 3.0 | null | 2011-06-17T21:17:08.693 | 2011-06-17T21:17:08.693 | null | null | 5071 | null |
12050 | 1 | null | null | 0 | 246 | Using a logarithmic regression tool found at xuru.org ( [http://www.xuru.org/rt/LnR.asp#CopyPaste](http://www.xuru.org/rt/LnR.asp#CopyPaste) ) and the data from below, the curve of the graph for this data is roughly described by y = 31.78303295ln(x) - 36.17569359, which has an RSS value of 10877.59526. My goal is to find an equation that describes the data accurately enough so that I can then multiply it by some smallish constant and the resulting y value will always be above, yet still reasonably close to, the expected y value.
As it is, the errors for the calculated y values stray (+ or -) from the actual value in a roughly sinusoidal manner. The errors for the data below are as follows:
```
Expected   Calculated     Error
0          -37.0769275    37.0769275
1          -14.88358987   15.88358987
7          -1.901319596   8.901319596
8          20.29201803    12.29201803
16         25.22764812    9.227648123
19         33.27428831    14.27428831
20         55.46762593    35.46762593
23         65.98574081    42.98574081
111        68.44989621    42.55010379
112        90.64323384    21.35676616
113        100.2959388    14.70406121
118        109.3971665    8.602833487
121        118.5256062    2.474393843
124        127.5499778    3.549977828
127        137.1795899    10.17958994
130        146.9062597    16.9062597
143        148.3072802    5.307280243
144        170.2548897    26.25488974
170        172.8139194    2.813919413
178        179.674946     1.674946009
181        188.876822     7.876821965
182        209.6750859    27.67508586
208        212.9576731    4.957673121
216        218.3963188    2.396318798
237        226.0826215    10.91737846
261        242.3657911    18.63420891
267        260.7888986    6.211101379
275        266.8441652    8.155834795
278        276.0074897    1.992510289
281        285.2181035    4.218103464
307        289.1736918    17.8263082
310        297.2291502    12.77084978
```
Bearing in mind that I know very little about statistics, my questions are:
Why is it that when I add more data points the RSS value becomes worse? e.g. the next data point following "34239 310" is "35655 323". When added to the set below and regression is done on the updated set, I get y = 32.38336295 ln(x) - 38.48210346 with RSS=11417.26182.
As the value of x increases, the results become increasingly inaccurate (namely, y consistently falls well below the target value). How should I interpret this?
Given that the errors seem to fluctuate in a sine-like manner, is there some way to use this knowledge to improve the results of the function?
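(Added for illustration: the logarithmic fit above is just ordinary least squares of $y$ on $\ln x$, so it can be reproduced without the web tool. A minimal Python sketch, demonstrated on synthetic data where the answer is known exactly:)

```python
import math

def fit_log(xs, ys):
    """Ordinary least squares for y = a*ln(x) + b."""
    ls = [math.log(x) for x in xs]
    n = len(xs)
    mean_l = sum(ls) / n
    mean_y = sum(ys) / n
    sxx = sum((l - mean_l) ** 2 for l in ls)
    sxy = sum((l - mean_l) * (y - mean_y) for l, y in zip(ls, ys))
    a = sxy / sxx
    b = mean_y - a * mean_l
    return a, b

# synthetic check: data generated from y = 3*ln(x) + 1 is recovered exactly
xs = [1, 2, 3, 6, 9, 27, 100]
ys = [3 * math.log(x) + 1 for x in xs]
a, b = fit_log(xs, ys)
```

Running `fit_log` on the data set below should reproduce coefficients close to the quoted $31.78$ and $-36.18$.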
data set:
```
1,0
2,1
3,7
6,8
7,16
9,19
18,20
25,23
27,111
54,112
73,115
97,118
129,121
171,124
231,127
313,130
327,143
649,144
703,170
871,178
1161,181
2223,182
2463,208
2919,216
3711,237
6171,261
10971,267
13255,275
17647,278
23529,281
26623,307
34239,310
```
Edit by @PeterEllis - addition of illustrative plot showing the original fit

| Need to refine results of logarithmic regression | CC BY-SA 3.0 | null | 2011-06-17T20:25:28.297 | 2013-02-05T23:06:38.070 | 2012-06-10T03:51:10.807 | 7972 | null | [
"regression"
] |
12051 | 1 | 12052 | null | 1 | 9991 | how can I add a label to the max value of this plot:
```
library(ggplot2)
qplot(c, data=subset(df, c < 3000), geom="freqpoly")
```

`c` is just integers
| How to add a label to the max value in R's ggplot2? | CC BY-SA 3.0 | null | 2011-06-17T22:42:04.577 | 2011-06-18T00:17:51.133 | null | null | 1762 | [
"r",
"ggplot2"
] |
12052 | 2 | null | 12051 | 3 | null | I think either `annotate` or `geom_text` is what you're looking for. [https://stackoverflow.com/questions/2409357/how-to-nicely-annotate-a-ggplot2-manual](https://stackoverflow.com/questions/2409357/how-to-nicely-annotate-a-ggplot2-manual)
| null | CC BY-SA 3.0 | null | 2011-06-18T00:17:51.133 | 2011-06-18T00:17:51.133 | 2017-05-23T12:39:27.620 | -1 | 2973 | null |
12053 | 1 | 12074 | null | 29 | 59485 | I've learnt that I must test for normality not on the raw data but their residuals. Should I calculate residuals and then do the Shapiro–Wilk's W test?
Are residuals calculated as: $X_i - \text{mean}$ ?
Please see [this previous question](https://stats.stackexchange.com/questions/11887/is-this-design-a-one-way-repeated-measures-anova-or-not) for my data and the design.
| What should I check for normality: raw data or residuals? | CC BY-SA 3.0 | null | 2011-06-18T01:08:55.550 | 2021-07-03T08:56:57.970 | 2017-04-13T12:44:25.283 | -1 | 5003 | [
"normality-assumption",
"residuals",
"assumptions"
] |
12054 | 1 | null | null | 5 | 7054 | I fit a cubic function (in mathematica)
$$
y(x) = a + b x + c x^2 + d x^3
$$
to my data and obtained a function.
I have the error in each coefficient ($\sigma_a$, $\sigma_b$, $\sigma_c$, $\sigma_d$). Now since I need the model to predict $y$ values for a given $x$ value, I also need a corresponding error for the $y$ value (coming from the errors in the coefficients).
The error in the $x$ value (from our measurements) is negligible. I can't figure out the formula I would use when plugging in an $x$ value to determine its corresponding error in $y$.
I've exhausted my googling capabilities and my textbooks in error analysis. Any help is greatly appreciated.
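In case it is useful: the standard first-order error-propagation formula gives $\sigma_y^2 = \nabla y^\top \Sigma\, \nabla y$ with gradient $\nabla y = (1, x, x^2, x^3)$; if the coefficient covariances are unavailable and the coefficients are treated as uncorrelated, this reduces to $\sigma_y^2 = \sigma_a^2 + x^2\sigma_b^2 + x^4\sigma_c^2 + x^6\sigma_d^2$ (which can understate the true error). A Python sketch of both forms (the numbers in the usage line are made up):

```python
import math

def prediction_sigma(x, sigmas, cov=None):
    """Propagated error of y = a + b*x + c*x^2 + d*x^3 at a given x.
    sigmas: (sigma_a, sigma_b, sigma_c, sigma_d); cov: optional full 4x4
    coefficient covariance matrix (preferred when available)."""
    g = [1.0, x, x ** 2, x ** 3]   # gradient of y with respect to (a, b, c, d)
    if cov is None:                # treat coefficients as uncorrelated
        return math.sqrt(sum((gi * si) ** 2 for gi, si in zip(g, sigmas)))
    return math.sqrt(sum(g[i] * cov[i][j] * g[j] for i in range(4) for j in range(4)))

# example usage with made-up coefficient errors
sigma_y = prediction_sigma(2.0, (0.1, 0.05, 0.01, 0.002))
```

At $x=0$ only $\sigma_a$ contributes, which is a quick sanity check on the formula.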
| Propagation of polynomial coefficient errors in fit | CC BY-SA 3.0 | null | 2011-06-18T01:13:54.543 | 2011-06-19T16:23:33.510 | 2011-06-18T09:49:58.953 | null | 5075 | [
"regression",
"error-propagation"
] |
12056 | 2 | null | 12053 | 7 | null | First you can "eyeball it" using a [QQ-plot](http://en.wikipedia.org/wiki/Q-Q_plot) to get a general sense [here is](http://astrostatistics.psu.edu/datasets/R/html/stats/html/qqnorm.html) how to generate one in R.
According to the [R manual](http://stat.ethz.ch/R-manual/R-devel/library/stats/html/shapiro.test.html) you can feed your data vector directly into the shapiro.test() function.
If you would like to calculate the residuals yourself yes each residual is calculated that way over your set of observations. You can see more about it [here](http://en.wikipedia.org/wiki/Errors_and_residuals_in_statistics#Example_with_some_mathematical_theory).
| null | CC BY-SA 3.0 | null | 2011-06-18T02:05:19.143 | 2011-06-18T02:05:19.143 | null | null | 4325 | null |
12058 | 2 | null | 10079 | 2 | null | I agree that power calculators are useful, especially to see the effect of different factors on the power. In that sense, calculators that include more input information are much better. For linear regression, I like the regression calculator [here](http://www.stat.uiowa.edu/~rlenth/Power/) which includes factors such as error in Xs, correlation between Xs, and more.
| null | CC BY-SA 3.0 | null | 2011-06-18T06:06:59.980 | 2011-06-18T06:06:59.980 | null | null | 1945 | null |
12059 | 1 | 12101 | null | 2 | 2424 | Why are the percentages in the y-axis of this bar chart displayed incorrectly (values larger than 100%) and how can I fix it?
```
library(ggplot2)
qplot(tctype, tccount, data=categ, xlab="Type", ylab="", geom="bar") +
  scale_y_continuous(formatter="percent")
```

This is the data frame
```
categ
tctype tccount
1 inthread (10 or less) 16228
2 occasional (10 to 100) 3561
3 addicted (100 to 1000) 327
4 communal(1000+) 10
```
| Why is the y-axis in this R plot showing invalid percentage values? | CC BY-SA 3.0 | null | 2011-06-18T06:24:41.760 | 2011-06-19T16:47:02.330 | null | null | 1762 | [
"r",
"ggplot2"
] |
12060 | 1 | null | null | 1 | 2976 | I have a dataset:
- X variable is date (from April to October)
- Y variable is vegetation biomass data
In my study area, growing season starts around April when vegetation biomass is low and peaks around at the end of August when biomass is highest, and finishes around October.
The purpose is to determine the exact date when vegetation biomass increases fastest at the start of the growing season.
It should be in April.
First, I fitted the data with a four-parameter logistic (sigmoid) curve, which was the best fit for this dataset, and obtained its equation.
Now, how can I calculate the date of maximum increase in vegetation biomass from the fitted curve, to accomplish the purpose mentioned above?
Thanks a lot.
| Curve fitting and max slope calculation | CC BY-SA 3.0 | null | 2011-06-18T07:13:07.997 | 2012-10-03T09:09:48.830 | 2011-06-18T13:48:22.813 | 3454 | 5077 | [
"regression",
"correlation",
"logistic"
] |
12061 | 1 | 12062 | null | 6 | 665 | Feature or bug? Why is it that the tick marker for zero projects is after the bar that represents the count for zero in this plot (instead of being in the middle as I'd have expected):
```
> qplot(projects,data=subset(df,projects<1000),geom="bar")
stat_bin: binwidth defaulted to range/30. Use 'binwidth = x' to adjust this
```

Here is the data I am using:
```
username gender id tenure projects post
1 foo male 123 1566 120 75
2 bar male 456 1565 78 1
3 baz female 678 1564 55 1
```
| Why is the tick marker for zero after the bar in this qplot bar chart? | CC BY-SA 3.0 | null | 2011-06-18T08:56:27.163 | 2011-06-18T09:42:50.190 | 2011-06-18T09:28:54.540 | 1762 | 1762 | [
"r",
"ggplot2"
] |
12062 | 2 | null | 12061 | 7 | null | The reason it appears to the left is that it putting the $0$ projects into a bin $(-33,0]$ which it then treats as negative. To solve this you need `right=FALSE`. You could then have a similar problem at the other end with the $999$ projects put into the bin $[999,1032)$ which would appear above $1000$; so it would be better to have a binwidth which is a factor of $1000$ - I would suggest `binwidth=25`.
For example
```
library(ggplot2)
set.seed(1)
df <- data.frame(projects = rgeom(10000,.005) )
qplot(projects, data=subset(df, projects<1000), geom="bar",
binwidth=25, right=FALSE )
```
produces

| null | CC BY-SA 3.0 | null | 2011-06-18T09:42:50.190 | 2011-06-18T09:42:50.190 | null | null | 2958 | null |
12064 | 2 | null | 12060 | 1 | null | You basically need to differentiate the sigmoid to get some bell-like curve; this is how it looks like for standard sigmoid $1/(1+e^{-t})$:

Then it is just peak finding. You can do this either analytically from the fit, or compute finite differences of the data and look for a peak there.
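A numerical sketch of that peak search (Python for illustration; the four logistic parameters are made up): for a symmetric four-parameter logistic the maximum slope falls at the midpoint parameter, which the grid search recovers.

```python
import math

def logistic4(x, y0, a, x0, b):
    """Four-parameter logistic: baseline y0, amplitude a, midpoint x0, scale b."""
    return y0 + a / (1.0 + math.exp(-(x - x0) / b))

def max_slope_x(f, lo, hi, step=0.01):
    """Grid-search the x at which f increases fastest on [lo, hi]."""
    best_x, best_slope = lo, float("-inf")
    x = lo
    while x <= hi:
        slope = (f(x + 1e-5) - f(x - 1e-5)) / 2e-5   # central difference
        if slope > best_slope:
            best_x, best_slope = x, slope
        x += step
    return best_x

# made-up parameters: biomass rising from 50 to 450 with the midpoint at day 120
f = lambda x: logistic4(x, y0=50.0, a=400.0, x0=120.0, b=15.0)
x_star = max_slope_x(f, 90.0, 200.0)   # close to 120, the inflection point
```

In this symmetric case the analytic answer is simply the fitted midpoint parameter; the numerical search is only needed for asymmetric fits or when differencing the raw data.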
| null | CC BY-SA 3.0 | null | 2011-06-18T10:02:02.043 | 2011-06-18T10:02:02.043 | null | null | null | null |
12065 | 2 | null | 10211 | 4 | null | One point: there is no such thing as a "basic question", you only know what you know, and not what you don't know. asking a question is often the only way to find out.
Whenever you see small samples, you find out who really has "faith" in their models and who doesn't. I say this because small samples are usually where models have the biggest impact.
Being a keen (psycho?) modeller myself, I say go for it! You seem to be adopting a cautious approach, and you have acknowledged potential bias, etc. due to small sample. One thing to keep in mind with fitting models to small data is that you have 12 variables. Now you should think - how well could any model with 12 variables be determined by 42 observations? If you had 42 variables, then any model could be perfectly fit to those 42 observations (loosely speaking), so your case is not too far from being too flexible. What happens when your model is too flexible? It tends to fit the noise - that is, the relationships which are determined by things other than the ones you hypothesize.
You also have the opportunity to put your ego where your model is by predicting what those future 10-20 samples will be from your model. I wonder how your critics will react to a so called "dodgy" model which gives the right predictions. Note that you would get a similar "I told you so" if your model doesn't predict the data well.
Another way you could assure yourself that your results are reliable, is to try and break them. Keeping your original data intact, create a new data set, and see what you have to do to this new data set in order to make your SEM results seem ridiculous. Then look at what you had to do, and consider: is this a reasonable scenario? Does my "ridiculous" data resemble a genuine possibility? If you have to take your data to ridiculous territory in order to produce ridiculous results, it provides some assurance (heuristic, not formal) that your method is sound.
| null | CC BY-SA 3.0 | null | 2011-06-18T10:06:29.973 | 2011-06-18T10:06:29.973 | null | null | 2392 | null |
12066 | 1 | null | null | 1 | 506 | What are the most widely used measures of predictive power of attributes in scoring models?
Motivation: I have a lot of attributes, more than I can study by myself and I want to select somehow the most promising ones. Is IV a good criterion for that? Are there any alternatives?
| Measures of predictive power of attributes in data mining | CC BY-SA 3.0 | null | 2011-06-18T10:12:05.363 | 2011-06-18T10:55:11.383 | 2011-06-18T10:55:11.383 | null | 1643 | [
"data-mining",
"feature-selection"
] |
12067 | 1 | null | null | 3 | 2947 | I've described a typical design for my experiments [in this question](https://stats.stackexchange.com/questions/11887/what-experimental-design-is-this). Well, 1-way RM ANOVA assumes a Gaussian distributed vector.
I tried $y=\arcsin{\sqrt{x}}$, but it works for some data and not for others... So the next step is to find the most powerful/general transformation. The [Freeman — Tukey transformation](https://projecteuclid.org/journals/annals-of-mathematical-statistics/volume-21/issue-4/Transformations-Related-to-the-Angular-and-the-Square-Root/10.1214/aoms/1177729756.full) with $y=\sqrt{x}+\sqrt{x+1}$ is the most powerful and seems appropriate in almost all my cases, but again it sometimes fails to produce normal data.
What should I do in that case? A [Box — Cox transformation](https://stats.stackexchange.com/questions/1601/what-other-normalizing-transformations-are-commonly-used-beyond-the-common-ones)?
Is it OK to transform data using different statistics within a paper?
| Is Freeman — Tukey's transformation the most powerful for percentages? | CC BY-SA 4.0 | null | 2011-06-18T12:11:41.923 | 2022-07-07T10:47:34.957 | 2022-07-07T10:47:34.957 | 5003 | 5003 | [
"data-transformation",
"percentage"
] |
12068 | 1 | 12130 | null | 4 | 9096 | I am using libsvm (which is meant for solving binary classification problems) for multi-class classification. How can I get classification scores / confidences for each class so that I can compare them effectively, given that libsvm can only produce scores for two classes? Desired output:
```
Class 1: score1
Class 2: score2
Class 3: score3
Class 4: score4
```
| Classification score: SVM | CC BY-SA 3.0 | null | 2011-06-18T12:28:31.430 | 2011-07-07T18:24:41.640 | 2011-06-20T07:09:25.333 | 264 | 4317 | [
"machine-learning",
"classification",
"svm"
] |
12069 | 1 | 12076 | null | 3 | 6422 | This is for a friend of mine. As an econometrician used to relying on large samples for inference, I find myself unsure whether the answer I have in mind is the best.
Suppose we have four continuous random variables, X, Y, W and Z. We have a small (5 to 10 observations) iid sample from each.
We want to test whether $\frac{E(X)}{E(Y)} = \frac{E(W)}{E(Z)}$.
My first reaction is to consider taking the logs, so as to think in terms of differences rather than ratios; but I fear this would lower the power of tests that rely on the normality of the variables (X, Y, W, Z are probably approximately normal; but then their logarithm is quite far from normal).
What are your thoughts on this? Any advice would be greatly appreciated!
| Testing the significance of differences between ratios with small samples | CC BY-SA 3.0 | null | 2011-06-18T12:30:54.503 | 2011-06-18T16:31:04.360 | 2011-06-18T13:10:10.553 | 2044 | 2044 | [
"hypothesis-testing"
] |
12070 | 2 | null | 10079 | 12 | null | Your rule of thumb is not particularly good if $m$ is very large. Take $m=500$: your rule says it's OK to fit $500$ variables with only $600$ observations. I hardly think so!
For multiple regression, you have some theory to suggest a minimum sample size. If you are going to be using ordinary least squares, then one of the assumptions you require is that the "true residuals" be independent. Now when you fit a least squares model to $m$ variables, you are imposing $m+1$ linear constraints on your empirical residuals (given by the least squares or "normal" equations). This implies that the empirical residuals are not independent - once we know $n-m-1$ of them, the remaining $m+1$ can be deduced, where $n$ is the sample size. So we have a violation of this assumption. Now the order of the dependence is $O\left(\frac{m+1}{n}\right)$. Hence if you choose $n=k(m+1)$ for some number $k$, then the order is given by $O\left(\frac{1}{k}\right)$. So by choosing $k$, you are choosing how much dependence you are willing to tolerate. I choose $k$ in much the same way you do for applying the "central limit theorem" - $10-20$ is good, and we have the "stats counting" rule $30\equiv\infty$ (i.e. the statistician's counting system is $1,2,\dots,26,27,28,29,\infty$).
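As a worked illustration of the $n=k(m+1)$ rule above (the values of $m$ and $k$ here are just examples; $k$ is the analyst's tolerance choice):

```r
# With m = 12 predictors and tolerance choice k = 15, the suggested
# minimum sample size is n = k * (m + 1).
m <- 12
k <- 15
n <- k * (m + 1)  # 195 observations
```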
| null | CC BY-SA 3.0 | null | 2011-06-18T13:02:34.840 | 2011-06-18T13:02:34.840 | null | null | 2392 | null |
12071 | 2 | null | 12023 | 0 | null | There are a bundle of distinct sampling issues, which are intertwined with the validity of the rating procedure:
- The first relates to reliability. You need to be sure the raters are self-consistent across trials ("intra-rater reliability") and the (self-consistent) raters consistent with one another ("inter-rater reliability") in their ratings. Otherwise, you will not be able to draw any inferences about whether their ratings are replicable. Since individual ratings are noisy, the greater the number of trials you perform with each individual rater, the more confidence you'll have that he or she is reliable. However, even if each rater performs on only one trial (ranks A, B, & C based on one taste each), you'll be able to assess inter-rater reliability--w/ more precision as the number of raters increases. You won't be able to say much w/ confidence in that case, though, about how reliable any one of your raters is (or how much he or she differs from the others), given how noisy the individual ratings are likely to be-- but you might not care about that (you will if you want to find a team of competent raters to use repeatedly; won't if you are doing a marketing test, etc.). These are all measurement error issues.
- You also have sampling error issues to consider. Are you trying to estimate how members of the general population are likely to rate the quality of the wines? How a collection of "experts" would rate the wines? How your "friends" -- people, say, you might invite to your next party -- would feel? Whatever the answer, you need to draw a sufficient number of raters from the relevant population.
There are formulae for assessing these matters & heuristics that go along with them. Applying them will likely require judgment given what the goal of the rating task is (e.g., to market a new product generally, to estimate how some subgroup will respond, to come up w/ "expert ratings" etc.). Likely someone else will be able to reel them off-- I can't. I can tell you, however, that these are standard issues in marketing research & if you look for a text on Rasch measurement, you'll be able to see how people in that field tend to think about this (including how they feel about use of a Likert item for this purpose).
| null | CC BY-SA 3.0 | null | 2011-06-18T14:29:18.473 | 2011-06-18T14:29:18.473 | null | null | 11954 | null |
12072 | 1 | 12075 | null | 0 | 3830 | Assume $X$, $Y$ are independent zero mean random variables. Define $Z_1=X+Y$ and $Z_2=X-Y$. Then, their mean values are the same.
How does one check that $Z_1$ and $Z_2$ are not the same random variables?
And how to describe their variances?
| What's the difference between add and subtract of two random variables, if zero mean? | CC BY-SA 3.0 | null | 2011-06-18T14:40:23.797 | 2011-06-20T06:51:30.040 | 2011-06-20T06:51:30.040 | 2116 | 4898 | [
"self-study",
"random-variable"
] |
12073 | 2 | null | 12069 | 2 | null | Because the distributions of the random variables are not given, we must assign them. Now one is free to do this assignment however they want, so long as it conforms to the information you specify. Now we have three "pieces" of information so far:
- all random variables are positive
- all expectations of the random variables exist and are finite
- the results of an experiment (some "data", denoted by $D$)
One general principle for assigning probability distributions is maximum entropy. Being an economist, you may not have heard of this. Arnold Zellner's work on Bayesian Method of Moments (BMOM) gives some economic applications of this principle, and a bit of an explanation. Edwin Jaynes is the "king" of this method. Searching these two authors should give good explanations of these methods.
Now in order to apply the principle, we require a reference measure $m_{X}(x)$. The most frequently used one is the uniform distribution $m_{X}(x)\propto 1$, and it usually works well. We then constrain our new distribution $p_{X}(x)$ to have mean $\mu_{X}$ while having maximum entropy. The solution is the exponential distribution:
$$p_{X}(x|\mu_{X}I)=\frac{1}{\mu_{X}}\exp\left(-\frac{x}{\mu_{X}}\right)$$
We will have a similar distribution for $W,Y,Z$. The notation $I$ indicates that this is the information (assumptions) that we are using. But we do not know the actual value of $\mu_{X}$, only that it is relevant to the problem. So we need to multiply by a prior and integrate it out of the resulting equations. If you have information about the population means, this is where you specify it; otherwise the non-informative prior is given by the Jeffreys prior
$$p(\mu_{X}|I)=\frac{1}{log\left(\frac{U}{L}\right)\mu_{X}}\;\;\;\;\;\;\; L<\mu_{X}<U$$
The bounds $L$ and $U$ are a safety device for now. If we can take the limits $L\to 0,U\to\infty$ we will, but at the end of the calculation, not at the start.
If this is treated as a parameter estimation problem, then we will seek as our final result, the probability:
$$Pr\left(\frac{\mu_{X}}{\mu_{Y}}>\frac{\mu_{W}}{\mu_{Z}}|DI\right)$$
With the direction of the inequality going the way that is supported by the data. You can interpret this as the amount of support against the hypothesis. But in order to get this we first need the joint posterior:
$$p(\mu_{X}\mu_{Y}\mu_{W}\mu_{Z}|DI)\propto p(\mu_{X}\mu_{Y}\mu_{W}\mu_{Z}|I)P(D|\mu_{X}\mu_{Y}\mu_{W}\mu_{Z}I)$$
All quantities have been assigned, so we just plug them in:
$$p(\mu_{X}\mu_{Y}\mu_{W}\mu_{Z}|I)=\frac{1}{\left[log\left(\frac{U}{L}\right)\right]^{4}\mu_{X}\mu_{W}\mu_{Y}\mu_{Z}}$$
$$P(D|\mu_{X}\mu_{Y}\mu_{W}\mu_{Z}I)=\frac{1}{\mu_{X}^{n_{x}}\mu_{W}^{n_{w}}\mu_{Y}^{n_{y}}\mu_{Z}^{n_{z}}}\exp\left(-\frac{n_{x}\overline{x}}{\mu_{X}}-\frac{n_{w}\overline{w}}{\mu_{W}}-\frac{n_{y}\overline{y}}{\mu_{Y}}-\frac{n_{z}\overline{z}}{\mu_{Z}}\right)$$
Where $n_{i}$ is the sample size in the $i$th group, and the bar indicates an average over that group. Now you may not recognise them, but this posterior is proportional to the product of 4 independent inverse gamma distributions, with $(\mu_{X}|DI)\sim IGa(\mu_{X}|n_{x},n_{x}\overline{x})$ and similarly for $\mu_{W},\mu_{Y},\mu_{Z}$. Because the inverse gamma distribution is proper on the positive real axis, we can take the limits $L\to 0,U\to\infty$ without harm. Now you could try to evaluate the integral required for $Pr\left(\frac{\mu_{X}}{\mu_{Y}}>\frac{\mu_{W}}{\mu_{Z}}|DI\right)$, but I would instead use Monte Carlo to evaluate this probability, because generating inverse gamma random variables is very cheap, and getting the range of integration correct seems to be difficult (I wasn't sure how to do it). Note that because you are Monte Carlo sampling directly from the posterior, this should be efficient (unlike Monte Carlo sampling from a non-informative prior).
The easiest way to program this is to use an inverse gamma cdf function, $ginv()$, and a uniform random number generator, $rand()$, then take your Monte Carlo sample as $\frac{1}{ginv(rand())}$ for each variable. Let me know if you need more explanation.
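For example, a short R sketch of this sampler; the sample sizes and means are made-up placeholders, and `rgamma` plays the role of the $ginv$/$rand$ step, since if $\mu\sim IGa(n,n\overline{x})$ then $1/\mu\sim Ga(\text{shape}=n,\text{rate}=n\overline{x})$:

```r
# Monte Carlo estimate of Pr(mu_X/mu_Y > mu_W/mu_Z | D) under the
# inverse-gamma posteriors derived above. Sample sizes and means
# below are made-up placeholders, not real data.
set.seed(1)
N <- 1e5
rmu <- function(n, xbar) 1 / rgamma(N, shape = n, rate = n * xbar)
muX <- rmu(8, 2.1); muY <- rmu(8, 1.4)
muW <- rmu(8, 3.0); muZ <- rmu(8, 2.2)
mean(muX / muY > muW / muZ)  # estimated posterior probability
```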
| null | CC BY-SA 3.0 | null | 2011-06-18T15:20:30.513 | 2011-06-18T15:20:30.513 | null | null | 2392 | null |
12074 | 2 | null | 12053 | 42 | null | Why must you test for normality?
The standard assumption in linear regression is that the theoretical residuals are independent and normally distributed. The observed residuals are an estimate of the theoretical residuals, but are not independent (there are transforms on the residuals that remove some of the dependence, but still give only an approximation of the true residuals). So a test on the observed residuals does not guarantee that the theoretical residuals match.
If the theoretical residuals are not exactly normally distributed, but the sample size is large enough then the Central Limit Theorem says that the usual inference (tests and confidence intervals, but not necessarily prediction intervals) based on the assumption of normality will still be approximately correct.
Also note that tests of normality are rule-out tests: they can tell you that the data are unlikely to have come from a normal distribution. But if the test is not significant, that does not mean the data came from a normal distribution; it could also mean that you just don't have enough power to see the difference. Larger sample sizes give more power to detect non-normality, but larger samples and the CLT also mean that the non-normality matters least. So for small sample sizes the assumption of normality is important but the tests are meaningless, while for large sample sizes the tests may be more accurate, but the question of exact normality becomes meaningless.
So combining all the above, what is more important than a test of exact normality is an understanding of the science behind the data to see if the population is close enough to normal. Graphs like qqplots can be good diagnostics, but understanding of the science is needed as well. If there is concern that there is too much skewness or potential for outliers, then non-parametric methods are available that do not require the normality assumption.
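As a concrete illustration of the graphical diagnostics mentioned above (using R's built-in `cars` data as a stand-in for your own regression):

```r
# Graphical and formal checks of regression residuals.
fit <- lm(dist ~ speed, data = cars)  # stand-in model
r <- residuals(fit)
qqnorm(r); qqline(r)  # qq-plot against the normal distribution
shapiro.test(r)       # formal test; interpret with the caveats above
```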
| null | CC BY-SA 4.0 | null | 2011-06-18T16:16:31.460 | 2018-10-04T10:31:12.057 | 2018-10-04T10:31:12.057 | 22047 | 4505 | null |
12075 | 2 | null | 12072 | 8 | null | The difference between $Z_1$ and $Z_2$ is $2Y$. Whenever $Y \not = 0$, they will be different. If $Y$ has a symmetric distribution about $0$ then $Z_1$ and $Z_2$ will have the same distribution as each other.
The variance of the sum of two independent random variables is the sum of their variances, as is the variance of the difference of two independent random variables, so
$$\sigma_{Z_1}^2=\sigma_{Z_2}^2=\sigma_{X}^2+\sigma_{Y}^2 $$
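A quick simulation check of this identity (the distributions and variances are arbitrary choices):

```r
# Verify Var(X+Y) = Var(X-Y) = Var(X) + Var(Y) for independent X, Y.
set.seed(1)
X <- rnorm(1e5, mean = 0, sd = 2)
Y <- rnorm(1e5, mean = 0, sd = 3)
var(X + Y)  # close to 4 + 9 = 13
var(X - Y)  # also close to 13
```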
| null | CC BY-SA 3.0 | null | 2011-06-18T16:24:12.943 | 2011-06-18T16:24:12.943 | null | null | 2958 | null |
12076 | 2 | null | 12069 | 3 | null | You say that the variables are probably normally distributed, so you could test this by simulation (parametric bootstrap). Choose a test statistic like the ratio of the ratios of the sample means. Now generate data under the null hypothesis that the 2 ratios are equal, you will need to make some additional assumptions, i.e. what the variances are, but you can vary these assumptions to see what effect they have.
Now repeate the simulation a bunch of times calculating the test statistic each time. Your P-value is the proportion of simulated test statistics that are more extreeme than the observed test statistic.
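A rough R sketch of this procedure; every sample size, mean, and variance below is a made-up assumption that you would replace (and vary) yourself:

```r
# Parametric bootstrap for H0: E(X)/E(Y) = E(W)/E(Z).
set.seed(1)
nx <- ny <- nw <- nz <- 8
stat <- function(x, y, w, z) (mean(x) / mean(y)) / (mean(w) / mean(z))
# placeholder "observed" data
x <- rnorm(nx, 5, 1); y <- rnorm(ny, 4, 1)
w <- rnorm(nw, 6, 1); z <- rnorm(nz, 5, 1)
obs <- stat(x, y, w, z)
# simulate under the null: both ratios equal (here 5/4, an assumption)
sim <- replicate(10000,
  stat(rnorm(nx, 5, 1), rnorm(ny, 4, 1),
       rnorm(nw, 5, 1), rnorm(nz, 4, 1)))
pval <- mean(abs(log(sim)) >= abs(log(obs)))  # two-sided, on log scale
```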
| null | CC BY-SA 3.0 | null | 2011-06-18T16:31:04.360 | 2011-06-18T16:31:04.360 | null | null | 4505 | null |
12077 | 1 | 12131 | null | 1 | 135 | I want to estimate $n+m$ parameters with the equation:
$$\hat\theta=\left[\frac{1}{N}\sum_{t=1}^N \varphi(t)\varphi^T(t)\right]^{-1}\left[\frac{1}{N}\sum_{t=1}^N\varphi(t)y(t)\right]$$
where $\varphi$ is a vector of regressors and $\theta$ the vector of parameters.
Then my book derives an expression for the estimation error:
$$\hat\theta-\theta_0 =\left[\frac{1}{N}\sum_{t=1}^N\varphi(t)\varphi^T(t)\right]^{-1} \left[\frac{1}{N}\sum_{t=1}^N\varphi(t)y(t)-\left\{\frac{1}{N}\sum_{t=1}^N\varphi(t)\varphi^T(t)\right\}\theta_0\right]$$
This makes no sense to me! Why the term in curly braces? And why is it inside the second factor?
Please help.
| Least square estimation error, calculus question | CC BY-SA 3.0 | null | 2011-06-18T17:05:34.880 | 2011-06-20T14:18:23.077 | 2011-06-18T17:10:54.900 | null | 5080 | [
"self-study",
"predictive-models"
] |
12078 | 1 | 12091 | null | 1 | 187 | I work in the loss forecasting area and would like to know a little bit more about the theory and implementation of age-period-cohort models. Several papers pop up in a Google search, but I need more comprehensive material relating to identification/estimation. Thank you.
| Reference for age-period-cohort models | CC BY-SA 3.0 | null | 2011-06-18T17:53:07.037 | 2012-02-23T10:40:39.530 | 2012-02-23T10:40:39.530 | null | 5081 | [
"references"
] |
12080 | 1 | 12092 | null | 5 | 192 | I am planning to perform an online behavioral survey across a nationwide sample, and I expect several thousand responses. I expect to have not that many questions (perhaps 3 pages, 8 qs each), require skip logic, and would like it to redirect back to a page of my choice. I also need it to be hosted, as local webhosting is not an appealing option for me. I do not need reports or fancy user interface.
Has anyone been involved in a survey of this scale, and do you have a platform you recommend?
- I am aware of REDCap but my institution does not have access to it.
- SurveyMonkey's Gold plan appears to meet my needs, but I would like to review alternatives
- AFAIK Google Docs Forms doesn't have skip logic
- LimeSurvey was recommended in a related answer
Thanks for any help.
| Can you recommend an online survey platform for 5k+ participants? | CC BY-SA 3.0 | null | 2011-06-18T20:16:29.623 | 2021-01-30T18:38:08.867 | 2021-01-30T18:38:08.867 | 11887 | 1138 | [
"survey",
"internet"
] |
12081 | 1 | 12082 | null | 9 | 2550 | I have some data which is clearly truncated on the left. I wish to fit it with a density estimate that handles the truncation in some way instead of smoothing over it.
What known methods (as usual, in R) can address this?
Sample code:
```
set.seed(1341)
x <- c(runif(30, 0, 0.01), rnorm(100,3))
hist(x, br = 10, freq = F)
lines(density(x), col = 3, lwd = 3)
```

Thanks :)
| Density estimation with a truncated distribution? | CC BY-SA 3.0 | 0 | 2011-06-18T21:14:54.670 | 2015-04-27T05:38:00.550 | 2015-04-27T05:38:00.550 | 9964 | 253 | [
"r",
"density-function",
"histogram",
"kernel-smoothing"
] |
12082 | 2 | null | 12081 | 6 | null | The logspline package for R has the oldlogspline function which will estimate densities using a mixture of observed and censored data.
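For instance, a minimal sketch using the data from the linked question. Note this uses `logspline()` with a lower bound rather than the censored-data interface of `oldlogspline()`, which is an assumption about what fits the questioner's case:

```r
# Density estimate with a known lower bound at 0.
library(logspline)
set.seed(1341)
x <- c(runif(30, 0, 0.01), rnorm(100, 3))
x <- x[x >= 0]                 # logspline requires all data >= lbound
fit <- logspline(x, lbound = 0)
hist(x, breaks = 10, freq = FALSE)
xs <- seq(0, max(x), length.out = 200)
lines(xs, dlogspline(xs, fit), col = 2, lwd = 2)
```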
| null | CC BY-SA 3.0 | null | 2011-06-18T21:52:27.577 | 2011-06-18T21:52:27.577 | null | null | 4505 | null |
12083 | 2 | null | 10984 | 1 | null | Simply construct a Transfer Function Model and include 6 dummies representing days of the week, then bring in 51 dummies representing weeks of the year, then incorporate 15-7 national holiday/event indicators. Then estimate an OLS model and identify any unusual data, such as a missed Monday even though he had been very reliable on Mondays, given that it was not an event day. Validate that the runner has not decided to take certain days of the week off starting at some point in history. This, plus a ton of other things (like identifying that he never runs on certain days of the month), gets incorporated into predictions that we make for millions of series every day.
| null | CC BY-SA 3.0 | null | 2011-06-19T01:46:55.237 | 2011-06-19T01:46:55.237 | null | null | 3382 | null |
12085 | 2 | null | 11856 | 3 | null | The rough answer to the question is that a 95% confidence interval allows you to be 95% confident that the true parameter value lies within the interval. However, that rough answer is both incomplete and inaccurate.
The incompleteness lies in the fact that it is not clear that "95% confident" means anything concrete, or if it does, then that concrete meaning would not be universally agreed upon by even a small sample of statisticians. The meaning of confidence depends on what method was used to obtain the interval and on what model of inference is being used (which I hope will become clearer below).
The inaccuracy lies in the fact that many confidence intervals are not designed to tell you anything about the location of the true parameter value for the particular experimental case that yielded the confidence interval! That will be surprising to many, but it follows directly from the Neyman-Pearson philosophy that is clearly stated in this quote from their 1933 paper "On the Problem of the Most Efficient Tests of Statistical Hypotheses":
>
We are inclined to think that as far as a particular hypothesis is concerned, no test based upon the theory of probability can by itself provide any valuable evidence of the truth or falsehood of that hypothesis. But we may look at the purpose of tests from another view-point. Without hoping to know whether each separate hypothesis is true or false, we may search for rules to govern our behaviour with regard to them, in following which we insure that, in the long run of experience, we shall not be too often wrong.
Intervals that are based on the 'inversion' of N-P hypothesis tests will therefore inherit from that test the nature of having known long-run error properties without allowing inference about the properties of the experiment that yielded them! My understanding is that this protects against inductive inference, which Neyman apparently considered to be an abomination.
Neyman explicitly lays claim to the term ‘confidence interval’ and to the origin of the theory of confidence intervals in his 1941 Biometrika paper “Fiducial argument and the theory of confidence intervals”. In a sense, then, anything that is properly a confidence interval plays by his rules and so the meaning of an individual interval can only be expressed in terms of the long run rate at which intervals calculated by that method contain (cover) the relevant true parameter value.
We now need to fork the discussion. One strand follows the notion of ‘coverage’, and the other follows non-Neymanian intervals that are like confidence intervals. I will defer the former so that I can complete this post before it becomes too long.
There are many different approaches that yield intervals that could be called non-Neymanian confidence intervals. The first of these is Fisher’s fiducial intervals. (The word ‘fiducial’ may scare many and elicit derisive smirks from others, but I will leave that aside...) For some types of data (e.g. normal with unknown population variance) the intervals calculated by Fisher’s method are numerically identical to the intervals that would be calculated by Neyman’s method. However, they invite interpretations that are diametrically opposed. Neymanian intervals reflect only long run coverage properties of the method, whereas Fisher’s intervals are intended to support inductive inference concerning the true parameter values for the particular experiment that was performed.
The fact that one set of interval bounds can come from methods based on either of two philosophically distinct paradigms leads to a really confusing situation--the results can be interpreted in two contradictory ways. From the fiducial argument there is a 95% likelihood that a particular 95% fiducial interval will contain the true parameter value. From Neyman’s method we know only that 95% of intervals calculated in that manner will contain the true parameter value, and have to say confusing things about the probability of the interval containing the true parameter value being unknown but either 1 or 0.
To a large extent, Neyman’s approach has held sway over Fisher’s. That is most unfortunate, in my opinion, because it does not lead to a natural interpretation of the intervals. (Re-read the quote above from Neyman and Pearson and see if it matches your natural interpretation of experimental results. Most likely it does not.)
If an interval can be correctly interpreted in terms of global error rates but also correctly in local inferential terms, I don’t see a good reason to bar interval users from the more natural interpretation afforded by the latter. Thus my suggestion is that the proper interpretation of a confidence interval is BOTH of the following:
- Neymanian: This 95% interval was constructed by a method that yields intervals that cover the true parameter value on 95% of occasions in the long run (...of our statistical experience).
- Fisherian: This 95% interval has a 95% probability of covering the true parameter value.
(Bayesian and likelihood methods will also yield intervals with desirable frequentist properties. Such intervals invite slightly different interpretations that will both probably feel more natural than the Neymanian.)
| null | CC BY-SA 3.0 | null | 2011-06-19T04:06:18.343 | 2011-06-19T04:06:18.343 | null | null | 1679 | null |
12086 | 2 | null | 12081 | 5 | null | The [density](http://stat.ethz.ch/R-manual/R-patched/library/stats/html/density.html) function also has a `from` parameter to indicate the left-most side "of the grid at which the density is to be estimated". Continuing from the above example:
```
lines(density(x, from = 0), col = 4, lwd = 3)
```
However, as you can see, this is essentially the same density estimate as the one without the `from` parameter above; it just starts at 0, that's all.
| null | CC BY-SA 3.0 | null | 2011-06-19T07:44:24.397 | 2011-06-19T07:44:24.397 | null | null | 3929 | null |
12087 | 1 | null | null | 3 | 25521 | I have two groups (G1, n=10; G2, n = 10) each representing a separate condition.
Participants in each group answered 20 questions and each question is a dichotomous variable coded 0 and 1 (VDD).
I want to compare group 1 with group 2.
I am not sure whether it is valid, for every participant in both groups, to average their answers across the questions (since the variables are dichotomous). That would give me the probability of getting one answer rather than the other, I guess, but I don't know if I am allowed to do that.
| How to compare two groups on a set of dichotomous variables? | CC BY-SA 3.0 | null | 2011-06-19T08:50:54.310 | 2023-04-11T11:14:51.387 | 2011-08-20T05:06:25.573 | 183 | null | [
"categorical-data",
"mean"
] |
12088 | 2 | null | 33 | 2 | null | I [build](http://cran.r-project.org/web/packages/seas/)/[published](http://dx.doi.org/10.1016/j.cageo.2006.11.011) an R package named `seas` for my [M.Sc. work](https://summit.sfu.ca/item/8338) a few years ago. The package is good for discretizing a time-series over years into seasonal divisions, such as months or 11-day periods. These divisions can then be applied to continuous variables (e.g., [temperature](https://web.archive.org/web/20130221070211/http://gallery.r-enthusiasts.com/graph/Seasonal_temperature_140), water levels) or discontinuous variables (e.g., [precipitation](https://web.archive.org/web/20130617052327/http://gallery.r-enthusiasts.com/graph/Seasonal_precipitation_141), groundwater recharge rates).
| null | CC BY-SA 4.0 | null | 2011-06-19T09:40:20.760 | 2022-11-24T14:17:48.967 | 2022-11-24T14:17:48.967 | 362671 | 3929 | null |
12089 | 2 | null | 12046 | 9 | null | Such a functionality (or a close one) seems to be available in the [rattle](http://cran.r-project.org/web/packages/rattle/index.html) package, as described in [RJournal 1/2 2009](http://journal.r-project.org/archive/2009-2/RJournal_2009-2_Williams.pdf) (p. 50), although I only checked it from the command-line.
For your example, it yields the following output:
```
Rule number: 3 [Kyphosis=present cover=19 (23%) prob=0.58]
Start< 8.5
Rule number: 23 [Kyphosis=present cover=7 (9%) prob=0.57]
Start>=8.5
Start< 14.5
Age>=55
Age< 111
Rule number: 22 [Kyphosis=absent cover=14 (17%) prob=0.14]
Start>=8.5
Start< 14.5
Age>=55
Age>=111
Rule number: 10 [Kyphosis=absent cover=12 (15%) prob=0.00]
Start>=8.5
Start< 14.5
Age< 55
Rule number: 4 [Kyphosis=absent cover=29 (36%) prob=0.00]
Start>=8.5
Start>=14.5
```
To get this output, I source the `rattle/R/rpart.R` source file (from the source package) in my workspace, after having removed the two calls to `Rtxt()` in the `asRules.rpart()` function (you can also replace it with `print`). Then, I just type
```
> asRules(fit)
```
| null | CC BY-SA 3.0 | null | 2011-06-19T10:28:30.630 | 2011-06-19T10:28:30.630 | null | null | 930 | null |
12090 | 1 | 12093 | null | 7 | 2114 | I want to measure the inner correlation of the occurrences of events, i.e. I want to distinguish between the two samples (drawn below) and say "in the second sample the events occur in a more conglomerate (clustered) way than in the first":

Isn't this different from burstiness (i.e. easier to compute, as it doesn't involve traffic values)?
I have drawn a possible representation of the expected result (for S2) in the image below the samples (just as it is in my mind now), but don't hesitate to suggest different proposals.
I looked around and tried different built-in functions and googled/searched CrossValidated for R and burstiness or "inner correlation", but I didn't make progress. Maybe I am looking for the wrong search terms.
If I wanted to really measure burstiness, it seemed to me (from reading papers about burstiness) that there is no single commonly accepted measure, but many different ones. If we pick one, it would be good to justify why that specific one was chosen.
This is how the time series is represented in R, currently:
(written by dput in file)
```
c(3.256861, 3.377142, 3.941173, 4.304236, 4.485358, 4.606512,
4.707296, 5.473004, 5.714746, 5.815394, 5.835405, 5.936067, 5.957008,
6.964611, 7.045158, 7.065171, 7.165824, 7.669618, 8.17324, 8.273692,
9.503988, 9.604991, 9.624853, 9.725522, 10.237766, 10.954529,
11.378399, 12.687714, 13.291919, 13.41258, 13.67527, 14.380529,
14.743638, 15.247138, 15.851832, 15.952875, 15.972497, 16.456259,
16.476052, 17.201506, 17.463708, 18.068535, 18.309645, 18.390292,
18.410299, 18.430323, 18.531736, 18.652921, 18.793662, 19.297076,
19.639692, 19.760698, 20.768096, 20.868441, 20.990499, 21.494412,
21.856368, 22.199341, 22.219143, 22.440472, 22.481118, 23.327013,
23.447678, 23.811188, 23.843, 24.113302)
```
| Inner correlation of occurrences (burstiness?) in R | CC BY-SA 3.0 | null | 2011-06-19T11:34:52.597 | 2011-06-19T14:23:58.087 | 2011-06-19T12:46:09.750 | null | 5085 | [
"r",
"time-series"
] |
12091 | 2 | null | 12078 | 1 | null | Try this one:
Carstensen, B. (2007). "Age-period-cohort models for the Lexis diagram." Stat Med 26(15): 3018-3045.
>
Analysis of rates from disease registers are often reported inadequately because of too coarse tabulation of data and because of confusion about the mechanics of the age-period-cohort model used for analysis. Rates should be considered as observations in a Lexis diagram, and tabulation a necessary reduction of data, which should be as small as possible, and age, period and cohort should be treated as continuous variables. Reporting should include the absolute level of the rates as part of the age-effects. This paper gives a guide to analysis of rates from a Lexis diagram by the age-period-cohort model. Three aspects are considered separately: (1) tabulation of cases and person-years; (2) modelling of age, period and cohort effects; and (3) parametrization and reporting of the estimated effects. It is argued that most of the confusion in the literature comes from failure to make a clear distinction between these three aspects. A set of recommendations for the practitioner is given and a package for R that implements the recommendations is introduced.
//M
| null | CC BY-SA 3.0 | null | 2011-06-19T12:32:11.890 | 2011-06-19T12:41:39.177 | 2011-06-19T12:41:39.177 | null | 1291 | null |
12092 | 2 | null | 12080 | 2 | null | Survey Monkey is likely to work out well if your question formats are not too creative and if you don't mind having the monkey logo appear on each page; some people find it takes away from the professionalism of a study.
Question Pro is a little more flexible than Survey Monkey and also a little more difficult to master.
If you have complex skips and/or creative, nonstandard formats and/or you want something for the long term, I recommend Key Survey. Colleagues of mine did a thorough search and found it to be a good program and a good value and they are happy with it after a year of use. Unlike the other 2 I mentioned, Key has not had server problems which interrupt surveys, and they don't make unannounced changes to their platform which can surprise and confound the survey designer.
| null | CC BY-SA 3.0 | null | 2011-06-19T12:42:11.367 | 2011-06-19T12:42:11.367 | null | null | 2669 | null |
12093 | 2 | null | 12090 | 10 | null | This is a widely studied problem in neurosciences, where you need to determine the "burstiness" of action potentials of a neuron. The methods, however, can be obviously applied to any series of events.
Most of them rely on the analysis of the intervals between two following events: in the case of action potentials these are generally called inter-spike interval (ISI), but we can call them inter-event intervals (IEI) to generalize.
We can define them as
$IEI = t_n - t_{n-1} \quad\quad n=2,3,4,...,N$
Where $t_n$ is the time of event $n$ and $N$ is the total number of events.
I will list some of the approaches that have been used. Mind, however, that this list is far from exhaustive.
---
The easiest visual thing to do is to start by plotting a histogram of the IEIs or, even better, a histogram of $\log_{10}(IEI)$.
In case of high "burstiness", the histogram will have a clearly bimodal distribution, with lots of short intervals between events and some longer ones (the pauses between bursts).
If you have a fairly good number of series of events, you can also use a clustering algorithm to divide them into groups (regular, slow bursting, fast bursting, etc.). This approach was taken, for instance, in this paper by Nowak et al., where several parameters of the distribution (mean, median, skewness, kurtosis, IQI, etc.) are taken as classifiers for hierarchical clustering.
[Electrophysiological Classes of Cat Primary Visual Cortical Neurons In Vivo as Revealed by Quantitative Analyses](http://www.ncbi.nlm.nih.gov/pubmed/12626627) (free article)
---
Another classic approach is known as the "Poisson surprise method" and was described in 1985 by Charles Legéndy and Michael Salcman in their paper
[Bursts and recurrences of bursts in the spike trains of spontaneously active striate cortex neurons.](http://www.ncbi.nlm.nih.gov/pubmed/3998798) (not free)
The idea of the method is that:
>
The measure used here is an evaluation of how improbable it is that the burst is a chance occurrence and is computed, for any given burst that contains n spikes in a time interval T, as
$s = -\log P$
where $P$ is the probability that, in a random (Poisson) spike train having the same average spike rate as the spike train studied, a given time interval of length $T$ contains $n$ or more spikes.
I can provide R code for this if needed
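To make the idea concrete, here is a minimal sketch of the surprise statistic (a Python illustration with a made-up helper name, not the authors' code):

```python
import math

def poisson_surprise(n, rate, T):
    """Surprise s = -log P, where P = P(Poisson(rate*T) >= n): how
    improbable it is that a random train with the same average rate
    produces n or more events in an interval of length T.
    (Natural log here; the base only rescales the statistic.)"""
    lam = rate * T
    # P(X >= n) = 1 - P(X <= n-1) for X ~ Poisson(lam)
    p_tail = 1.0 - sum(math.exp(-lam) * lam**k / math.factorial(k)
                       for k in range(n))
    return -math.log(p_tail)

# 10 events in 1 s from a train averaging 1 event/s: very surprising
s_burst = poisson_surprise(10, rate=1.0, T=1.0)
# 1 event in 1 s from the same train: hardly surprising
s_plain = poisson_surprise(1, rate=1.0, T=1.0)
```

A candidate burst is then kept when its surprise exceeds a chosen threshold.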
---
An "updated" version of the Poisson-surprise method, which was developed to solve certain issues with that method is the rank-surprise method described in 2007 by Boris Gourévitch and Jos Eggermont in their paper
[A nonparametric approach for detection of bursts in spike trains](http://www.ncbi.nlm.nih.gov/pubmed/17070926) (not free)
which uses a non parametric approach to define bursts.
>
We propose to use a more exhaustive search of the maximum of the surprise statistic using the following algorithm dubbed ESM (exhaustive surprise maximization): preliminary to the algorithm, we fix the largest ISI value acceptable in a burst (limit) and a level −log(α) of minimum significance for the surprise statistic. We then identify a first sequence of ISIs whose values are below limit. From this sequence, we perform an exhaustive search of the highest surprise statistic over all possible continuous subsequences of ISI. If the final surprise statistic is above −log(α), the associated subsequence is labeled as a burst. Another burst is then searched among the remaining continuous ISI subsequences, obeying the same criterion. The process is repeated until no remaining continuous ISI subsequence is able to provide a significant RS statistic. When the process stops, we proceed to the next sequence of ISIs whose values are below limit and so on.
The authors provide pseudo-code and Matlab code for the algorithm
---
Other approaches rely on the variability of the distribution.
In particular, one can use the coefficient of variation $C_V$, classically defined as
$C_V = \frac{\sigma_{IEI}}{\langle{IEI}\rangle}$
The higher $C_V$, the burstier the distribution of events.
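For instance (a Python/numpy sketch — the question is about R, but the computation is one line in either language — using the first few event times from the question's data):

```python
import numpy as np

# first event times (seconds) from the series in the question
events = np.array([3.256861, 3.377142, 3.941173, 4.304236, 4.485358,
                   4.606512, 4.707296, 5.473004, 5.714746, 5.815394])

iei = np.diff(events)               # inter-event intervals
cv = iei.std() / iei.mean()         # coefficient of variation

# reference point: a perfectly regular train has CV = 0
regular_iei = np.diff(np.arange(0.0, 10.0, 1.0))
cv_regular = regular_iei.std() / regular_iei.mean()
```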
---
$C_V$, however, is a fairly rough index, so a finer version of it was proposed, called $C_{V2}$, by Gary Holt and colleagues in their paper
[Comparison of discharge variability in vitro and in vivo in cat visual cortex neurons](http://www.ncbi.nlm.nih.gov/pubmed/8734581) (not free)
$C_{V2} = \frac{2\,|{IEI}_{n+1}-{IEI}_n|}{{IEI}_{n+1}+{IEI}_n}$
---
Finally, another approach, proposed by Shigeru Shinomoto and colleagues in 2003 is the local variation coefficient $L_v$ which is defined as
$L_v = \frac{1}{n-1} \sum_{i=1}^{n-1}\frac{3(T_i-T_{i+1})^2}{(T_i+T_{i+1})^2}$
in their paper
[Differences in Spiking Patterns Among Cortical Neurons](http://www.ncbi.nlm.nih.gov/pubmed/14629869)
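Here $T_i$ denotes the $i$-th inter-event interval. A quick sketch (Python/numpy; the sanity checks are the known reference values, $L_v=0$ for a perfectly regular train and $L_v\approx 1$ for a Poisson train):

```python
import numpy as np

def local_variation(iei):
    """Shinomoto's L_v over successive inter-event intervals T_i:
    0 for a perfectly regular train, ~1 for a Poisson train."""
    t1, t2 = iei[:-1], iei[1:]
    return np.mean(3.0 * (t1 - t2) ** 2 / (t1 + t2) ** 2)

regular = np.ones(100)                         # identical intervals
rng = np.random.default_rng(0)
poisson_like = rng.exponential(1.0, 100_000)   # exponential IEIs

lv_regular = local_variation(regular)
lv_poisson = local_variation(poisson_like)
```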
---
Also, two classical must-reads:
[Neuronal spike trains and stochastic point processes. I. The single spike train](http://www.ncbi.nlm.nih.gov/pubmed/4292791)
[Neuronal spike trains and stochastic point processes. II. Simultaneous spike trains](http://www.ncbi.nlm.nih.gov/pubmed/4292792) (both free, the second one probably is not too interesting for you, but it's still a good read)
| null | CC BY-SA 3.0 | null | 2011-06-19T14:23:58.087 | 2011-06-19T14:23:58.087 | 2020-06-11T14:32:37.003 | -1 | 582 | null |
12094 | 1 | 12098 | null | 1 | 4384 | X,Y are independent random variables.
X's pdf = f(x)
Y's pdf = g(y)
if Z= X+Y
what is Z's pdf?
Can it be calculated?
| How to add two random variable's pdf? | CC BY-SA 3.0 | null | 2011-06-19T14:35:39.027 | 2011-06-19T15:34:42.407 | null | null | 4898 | [
"random-variable"
] |
12095 | 2 | null | 12053 | 7 | null | The Gaussian Asuumptions refer to the residuals from the model. There are no assumptions necessary about the original data. As a case in point the distribution of daily beer sales
 .After a reasonable model captured the day-of-the-week, holiday/events effects , level shifts/time trends we get 
| null | CC BY-SA 3.0 | null | 2011-06-19T14:43:01.160 | 2011-06-19T14:43:01.160 | null | null | 3382 | null |
12096 | 2 | null | 8696 | 5 | null | Caret already does this internally for you as part of the `train()` function, see the bottom section of the [caret webpage](http://caret.r-forge.r-project.org/Classification_and_Regression_Training.html) for starters.
| null | CC BY-SA 3.0 | null | 2011-06-19T14:51:52.047 | 2011-06-19T14:51:52.047 | null | null | 334 | null |
12097 | 2 | null | 12094 | 0 | null | If $A$ is the domain for $f(x)$ then $h(z)=\int_{x\in A}f(x)g(z-x)\,dx$.
A sum would be used in the discrete case.
| null | CC BY-SA 3.0 | null | 2011-06-19T15:10:46.727 | 2011-06-19T15:10:46.727 | null | null | 4637 | null |
12098 | 2 | null | 12094 | 5 | null | Reference: [http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter7.pdf](http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter7.pdf)
If $X$ and $Y$ are two independent, continuous random variables, then you can find the distribution of $Z=X+Y$ by taking the convolution of $f(x)$ and $g(y)$:$$h(z)=(f*g)(z)=\int_{-\infty}^{\infty}f(x)g(z-x)dx$$If $X$ and $Y$ are two independent, discrete random variables, then you can find the distribution of $Z=X+Y$ by taking the discrete convolution of $X$ and $Y$:$$\mbox{P}(Z=k)=\sum_{i=-\infty}^{\infty}\mbox{P}(X=i)\cdot\mbox{P}(Y=k-i)$$
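As a concrete discrete example (a Python sketch, with two fair dice playing the roles of $X$ and $Y$):

```python
from collections import defaultdict
from fractions import Fraction

# pmfs of two independent fair dice
px = {i: Fraction(1, 6) for i in range(1, 7)}
py = {j: Fraction(1, 6) for j in range(1, 7)}

# discrete convolution: P(Z = k) = sum_i P(X = i) * P(Y = k - i)
pz = defaultdict(Fraction)
for i, p_i in px.items():
    for j, p_j in py.items():
        pz[i + j] += p_i * p_j

# pz[7] is the classical 6/36, and the pmf sums to 1
```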
| null | CC BY-SA 3.0 | null | 2011-06-19T15:15:02.297 | 2011-06-19T15:34:42.407 | 2011-06-19T15:34:42.407 | 4812 | 4812 | null |
12099 | 2 | null | 12087 | 0 | null | You have a couple of different approaches that depend upon how you think about the responses to your twenty questions.
If the responses to the question reveal different types of information about the respondents, you may want to think about each particular set of responses as a multivariate random variable.
In this case, you should first create a frequency table of groups by questions. Given the small sample sizes, you should not likely use Pearson's Chi-Square Test of Independence. You can use Fisher's exact test.
If the responses to the questions are all revealing the same type of information, then you can think of the 20 questions as repeated observations. Again, because of your sample size, while you could do a one-way ANOVA with repeated measures, you are probably safer using the Cochran test.
| null | CC BY-SA 3.0 | null | 2011-06-19T16:12:10.693 | 2011-08-20T21:11:05.260 | 2011-08-20T21:11:05.260 | 930 | 82 | null |
12100 | 2 | null | 12054 | 6 | null | The question appears to ask for a predicted value and for a prediction interval about that value.
The predicted value is obtained by means of the formula using the estimated parameters and a specified value of $x$:
$$\hat{y}(x) = \hat{a} + \hat{b} x + \hat{c} x^2 + \hat{d} x^3.$$
In general, the uncertainty in $\hat{y}(x)$ comes from three sources:
- Uncertainty in the parameter estimates $\hat{a}$, ..., $\hat{d}$ due to assumed randomness of the original data. An explicit probability model makes sense of this. In most settings, the model is additive. That is, for any datum $(x_i,y_i)$ it assumes
$$y_i = a + b x_i + c x_i^2 + d x_i^3 + \varepsilon_i$$
where $\varepsilon_i$ is a random variable with zero expectation. Furthermore, absent any information to the contrary, it is often the case that the $\varepsilon_i$ are independent of each other and have similar distributions. These assumptions allow us to estimate that common distribution from the data.
- Uncertainty in the actual value that would be observed at a given value $x$. This uncertainty is now directly evident in the preceding model: it is the contribution of $\varepsilon$.
- Uncertainty in the true value of $x$. In the original question this is explicitly assumed to be inconsequential.
Therefore we would expect the prediction interval formula to depend on three estimates: (i) the predicted value, (ii) the uncertainty in the predicted value due to the parameter uncertainty in (1) above, and (iii) the estimated variance of $\varepsilon$ (from (2) above). The parameter uncertainty plays a relatively small role for values of $x$ close to the average of the data $x_i$, but as $x$ moves away from this average, the parameter uncertainty makes an ever greater contribution.
Formulas can be found by searching this site for "prediction interval." In [one thread](https://stats.stackexchange.com/questions/9131/obtaining-a-formula-for-prediction-limits-in-a-linear-model/9144#9144), @Rob Hyndman gives a general formula that is directly applicable to this question. It assumes the parameters are estimated using least squares. For a design matrix $X$, the formula takes the form
$$\hat{y} \pm k_\alpha \hat{\sigma} \sqrt{1 + \mathbf{X}^* (\mathbf{X}'\mathbf{X})^{-1} (\mathbf{X}^*)'}.$$
($\mathbf{X}^*$ is determined by $x$ and $\mathbf{X}'\mathbf{X}$ encodes the variances and covariances of the $x_i$.)
This contains parts we can explicitly match to (i), (ii), and (iii):
(i) The interval is centered around the predicted value $\hat{y}$.
(ii) The parameter uncertainty appears in the term $\hat{\sigma} \sqrt{\mathbf{X}^* (\mathbf{X}'\mathbf{X})^{-1} (\mathbf{X}^*)'}$. The stuff inside the square root measures a squared distance between $x$ and the average of the $x_i$.
(iii) The contribution of $\varepsilon$ appears as the "$1$" (multiplied by $\hat{\sigma}$) added to the parameter uncertainty. We recognize this as the usual Pythagorean formula for the variance of a sum of independent random variables: take the square root of the sum of their squares.
As usual, the coefficient $k_\alpha$ is chosen to achieve a desired level of confidence. It depends on that level, $1-\alpha$, and on assumptions about how the $\varepsilon_i$ are distributed.
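A sketch of how these pieces assemble numerically (Python/numpy; the data and the coverage factor $k\approx 2$ are made up purely for illustration):

```python
import numpy as np

# toy data: a cubic trend plus fixed "noise" (illustration only)
x = np.arange(10.0)
y = 1 + 0.5 * x + 0.1 * x**2 - 0.02 * x**3
y = y + np.array([0.3, -0.2, 0.1, -0.4, 0.2, 0.0, -0.1, 0.3, -0.3, 0.1])

X = np.vander(x, 4, increasing=True)        # columns 1, x, x^2, x^3
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ beta
sigma2 = resid @ resid / (len(x) - 4)       # estimate of Var(eps)
XtX_inv = np.linalg.inv(X.T @ X)

def prediction_interval(x_star, k=2.0):
    xs = np.array([1.0, x_star, x_star**2, x_star**3])
    y_hat = xs @ beta
    half = k * np.sqrt(sigma2) * np.sqrt(1.0 + xs @ XtX_inv @ xs)
    return y_hat - half, y_hat + half

lo_mid, hi_mid = prediction_interval(4.5)    # near the centre of the data
lo_far, hi_far = prediction_interval(12.0)   # extrapolating beyond it
```

The interval widens as $x$ moves away from the mean of the $x_i$, reflecting the growing parameter-uncertainty term.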
---
A comment to the original question prompts me to remark that considerations of correlation among the variables $1$, $x$, $x^2$, and $x^3$ are not relevant. Such correlations do of course exist in general--it is a rare situation in which our variables are mutually orthogonal--but the correlations are already accounted for in the least squares regression machinery. It is wise to check that the correlations are not so great that they introduce numerical instability in the solutions. That is unlikely in this particular problem.
Another comment also calls for another follow-up remark to emphasize one point: the presence of the second source of uncertainty (the contribution of $\varepsilon$, which is usually material) shows that we need to do more than consider the variances and covariances of the estimates $\hat{a}$, ..., $\hat{d}$: we also need to incorporate the contribution of $\varepsilon$ in the estimated uncertainty of $\hat{y}(x)$.
| null | CC BY-SA 3.0 | null | 2011-06-19T16:23:33.510 | 2011-06-19T16:23:33.510 | 2017-04-13T12:44:24.677 | -1 | 919 | null |
12101 | 2 | null | 12059 | 1 | null | As mark999 mentioned in the comment above, I needed to normalize the data like this:
```
plot(tctype,tccount/sum(tccount),data=categ,xlab="Type",ylab="",geom = "bar")+ scale_y_continuous(formatter="percent")
```
| null | CC BY-SA 3.0 | null | 2011-06-19T16:47:02.330 | 2011-06-19T16:47:02.330 | null | null | 1762 | null |
12102 | 2 | null | 11645 | 9 | null | The easiest way, IMO, is to build the design matrix yourself, as `glmfit` accepts either a matrix of raw (observed) values or a design matrix. Coding an interaction term isn't that much difficult once you wrote the full model. Let's say we have two predictors, $x$ (continuous) and $g$ (categorical, with three unordered levels, say $g={1,2,3}$). Using Wilkinson's notation, we would write this model as `y ~ x + g + x:g`, neglecting the left-hand side (for a binomial outcome, we would use a logit link function). We only need two dummy vectors to code the `g` levels (as present/absent for a particular observation), so we will have 5 regression coefficients, plus an intercept term. This can be summarized as
$$\beta_0 + \beta_1\cdot x +\beta_2\cdot\mathbb{I}_{g=2} +\beta_3\cdot\mathbb{I}_{g=3} + \beta_4\cdot x\times\mathbb{I}_{g=2} + \beta_5\cdot x\times\mathbb{I}_{g=3},$$
where $\mathbb{I}$ stands for an indicator matrix coding the level of $g$.
In Matlab, using the online example, I would do as follows:
```
x = [2100 2300 2500 2700 2900 3100 3300 3500 3700 3900 4100 4300]';
g = [1 1 1 1 2 2 2 2 3 3 3 3]';
gcat = dummyvar(g);
gcat = gcat(:,2:3); % remove the first column
X = [x gcat x.*gcat(:,1) x.*gcat(:,2)];
n = [48 42 31 34 31 21 23 23 21 16 17 21]';
y = [1 2 0 3 8 8 14 17 19 15 17 21]';
[b, dev, stats] = glmfit(X, [y n], 'binomial', 'link', 'probit');
```
I didn't include a column of ones for the intercept as it is included by default. The design matrix looks like
```
2100 0 0 0 0
2300 0 0 0 0
2500 0 0 0 0
2700 0 0 0 0
2900 1 0 2900 0
3100 1 0 3100 0
3300 1 0 3300 0
3500 1 0 3500 0
3700 0 1 0 3700
3900 0 1 0 3900
4100 0 1 0 4100
4300 0 1 0 4300
```
and you can see that the interaction terms are just coded as the product of `x` with the corresponding column of `g` (g=2 and g=3, since we don't need the first level).
The results are given below, as coefficients, standard errors, statistic and p-value (from `stats` structure):
```
int. -3.8929 2.0251 -1.9223 0.0546
x 0.0009 0.0008 1.0663 0.2863
g2 -3.2125 2.7622 -1.1630 0.2448
g3 -5.7745 7.5542 -0.7644 0.4446
x:g2 0.0013 0.0010 1.3122 0.1894
x:g3 0.0021 0.0021 0.9882 0.3230
```
Now, testing the interaction can be done by computing the difference in deviance from the full model above and a reduced model (omitting the interaction term, that is the last two columns of the design matrix). This can be done manually, or using the `lratiotest` function which provides Likelihood ratio hypothesis test. The deviance for the full model is 4.3122 (`dev`), while for the model without interaction it is 6.4200 (I used `glmfit(X(:,1:3), [y n], 'binomial', 'link', 'probit');`), and the associated LR test has two degrees of freedom (the difference in the number of parameters between the two models). As the scaled deviance is just two times the log-likelihood for GLMs, we can use
```
[H, pValue, Ratio, CriticalValue] = lratiotest(4.3122/2, 6.4200/2, 2)
```
where the statistic is distributed as a $\chi^2$ with 2 df (the critical value is then 5.9915, see `chi2inv(0.95, 2)`). The output indicates a non-significant result: we cannot conclude that an interaction exists between `x` and `g` in the observed sample.
I guess you can wrap up the above steps in a convenient function of your choice. (Note that the LR test might be done by hand in very few commands!)
---
I checked those results against R output, which is given next.
Here is the R code:
```
x <- c(2100,2300,2500,2700,2900,3100,3300,3500,3700,3900,4100,4300)
g <- gl(3, 4)
n <- c(48,42,31,34,31,21,23,23,21,16,17,21)
y <- c(1,2,0,3,8,8,14,17,19,15,17,21)
f <- cbind(y, n-y) ~ x*g
model.matrix(f) # will be model.frame() for glm()
m1 <- glm(f, family=binomial("probit"))
summary(m1)
```
Here are the results, for the coefficients in the full model,
```
Call:
glm(formula = f, family = binomial("probit"))
Deviance Residuals:
Min 1Q Median 3Q Max
-1.7124 -0.1192 0.1494 0.3036 0.5585
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -3.892859 2.025096 -1.922 0.0546 .
x 0.000884 0.000829 1.066 0.2863
g2 -3.212494 2.762155 -1.163 0.2448
g3 -5.774400 7.553615 -0.764 0.4446
x:g2 0.001335 0.001017 1.312 0.1894
x:g3 0.002061 0.002086 0.988 0.3230
```
For the comparison of the two nested models, I used the following commands:
```
m0 <- update(m1, . ~ . -x:g)
anova(m1,m0)
```
which yields the following "deviance table":
```
Analysis of Deviance Table
Model 1: cbind(y, n - y) ~ x + g
Model 2: cbind(y, n - y) ~ x * g
Resid. Df Resid. Dev Df Deviance
1 8 6.4200
2 6 4.3122 2 2.1078
```
| null | CC BY-SA 3.0 | null | 2011-06-19T17:40:12.613 | 2011-06-19T17:40:12.613 | null | null | 930 | null |
12103 | 1 | 12121 | null | 0 | 201 | Why $z=f(x)$ does not imply $E[z]=f(E[x])$ when f is not linear?
I can give an example, but I couldn't derive the general form.
Let $z = {x^2}$
Let $g(x)$ be the pdf of $x$. Then:
$ E\left[ z \right] = E\left[ {{x^2}} \right] = \int\limits_{ - \infty }^\infty {\left( {{x^2}} \right)g\left( x \right)dx} $
$f\left( {E\left[ x \right]} \right) = {\left( {\int\limits_{ - \infty }^\infty {\left( x \right)g\left( x \right)dx} } \right)^2}$
$ \Rightarrow E\left[ z \right] \ne f\left( {E\left[ x \right]} \right)$
| Why z=f(x) does not imply E[z]=f(E[x]) when f is not linear? | CC BY-SA 3.0 | null | 2011-06-19T17:44:50.197 | 2011-06-20T09:30:31.577 | 2011-06-20T06:27:22.157 | 2116 | 4898 | [
"self-study",
"expected-value"
] |
12104 | 1 | 12183 | null | 2 | 239 | My textbook gives an example, that normal distribution family $\{N(0,\sigma^2):\sigma\in R^+\}$ is not [complete](http://en.wikipedia.org/wiki/Completeness_%28statistics%29), but a complete statistic, $T_n=\sum_{i=1}^n X_i^2$, can still be constructed from samples $(X_1,\cdots,X_n)$.
So under what (sufficient / necessary) conditions can we draw complete statistics from an incomplete model?
| How to draw a complete statistic from an incomplete statistical model? | CC BY-SA 3.0 | null | 2011-06-19T18:31:37.773 | 2011-06-21T19:45:10.017 | 2011-06-20T16:43:49.383 | 4864 | 4864 | [
"mathematical-statistics"
] |
12105 | 1 | 12111 | null | 3 | 121 | I have data which consist of three longitudinal series of financial data. The hypothesis is that two of the series are "caused" (in a loose sense) by the third. It is fine to investigate this as two separate hypotheses. All the series are about the relationship between two specific countries. There are data for about 20 quarters on one of the series, and about 80 months on the other two series.
A little exploratory work reveals that none of the series are stationary, and that there is no apparent seasonality.
So, how best to analyse these hypotheses? Can I simply try regressions with different lags, or should I do a more formal time series analysis, or should I do something else?
Peter
| Best analysis method for three interrelated longitudinal series | CC BY-SA 3.0 | null | 2011-06-19T19:01:29.630 | 2011-06-20T01:52:40.640 | null | null | 686 | [
"time-series",
"panel-data"
] |
12106 | 2 | null | 12087 | 0 | null | You could sum the responses for each individual. Then you could do a simple chi-square analysis with a 2x2 table: Group by VDD. But that's only if you have no other variables to consider. You could also do a nonlinear mixed model, with person being a random effect and group a fixed effect; this would let you add other variables to the model.
| null | CC BY-SA 3.0 | null | 2011-06-19T19:27:51.693 | 2011-06-19T19:27:51.693 | null | null | 686 | null |
12107 | 1 | 12108 | null | 3 | 5838 | I'm reading an article, [The commonality of neural networks for verbal and visual short-term memory](http://www.ncbi.nlm.nih.gov/pubmed/19925207) (Majerus et al., J Cogn Neurosci 2010 22(11): 2570), about brain imaging in which the results are analysed with multiple analyses.
One of them is a null conjunction analysis.
I've tried googling a bit but I didn't come up with something useful.
I did find some results on 'conjunction analysis', but I'm not sure if that's the exact same thing.
Can anyone explain to me what this analysis is (Goals, method...)?
| What is a null conjunction analysis in an fMRI study? | CC BY-SA 3.0 | null | 2011-06-19T19:48:20.773 | 2011-06-19T21:44:05.837 | 2011-06-19T21:35:55.297 | 930 | 3140 | [
"hypothesis-testing",
"neuroimaging"
] |
12108 | 2 | null | 12107 | 2 | null | The original paradigm originates from the work of Price and Friston (1997, 1999), and it has been criticized in a more recent paper by Tom Nichols et al. (2005), but see Friston et al. (2005) for a reply.
The idea behind conjunction analysis is to determine whether two tasks activate the same region(s) of the brain. Quoting [Towards Evidence of Absence: Conjunction Analyses in fMRI](http://scienceblogs.com/developingintelligence/2008/09/towards_evidence_of_absence_co.php), contrary to the subtraction approach ((a) sum all of the activation maps (AM) from the baseline condition, (b) sum all of the AMs from the stimulation condition, (c) rescale them by converting these summated AMs to average images, (d) subtract the baseline AM from the stimulation AM in order to reveal the activation locations (voxels)), the idea is that
- each voxel should be significantly activated by the two tasks;
- each voxel should not be significantly modulated by an interaction effect between tasks;
- the estimated relationships between each voxel and each task are not significantly different.
The rest of the blog post is worth reading, IMO.
## References
- Price, C.J. and Friston, K.J. (1997). Cognitive conjunction: a new approach to brain activation experiments. Neuroimage, 5, 261-70.
- Friston, K.J., Holmes, A.P., Price, C.J., Büchel, C., and Worsley, K.J. (1999). Multisubject fMRI Studies and Conjunction Analyses. NeuroImage, 10(4), 385-396.
- Nichols, T., Brett, M., Andersson, J., Wager, T., Poline, J.-B. (2005). Valid conjunction inference with the minimum statistic. Neuroimage, 15;25(3): 653-60.
- Friston, K.J., Penny, W.D., and Glaser, D.E. (2005). Conjunction revisited. NeuroImage, 25, 661-667.
| null | CC BY-SA 3.0 | null | 2011-06-19T21:31:46.390 | 2011-06-19T21:44:05.837 | 2011-06-19T21:44:05.837 | 930 | 930 | null |
12109 | 2 | null | 421 | 4 | null | ["Biometry: The Principles and Practices of Statistics in Biological Research" by Robert R. Sokal and F. James Rohlf](http://rads.stackoverflow.com/amzn/click/0716724111)
["Biostatistical Analysis" by Jerrold H. Zar](http://rads.stackoverflow.com/amzn/click/0131008463)
["Primer of Biostatistics" by Stanton Glantz](http://rads.stackoverflow.com/amzn/click/0071435093)
| null | CC BY-SA 3.0 | null | 2011-06-19T23:56:44.540 | 2011-06-19T23:56:44.540 | null | null | 5003 | null |
12110 | 2 | null | 9724 | 1 | null | In my case we ran an experiment 5 times in triplicates (to calculate SEM and mean) and selected the most attractive results
| null | CC BY-SA 3.0 | null | 2011-06-20T00:09:14.037 | 2011-06-20T00:09:14.037 | null | null | 5003 | null |
12111 | 2 | null | 12105 | 1 | null | In a perfect world we could form a Vector Arima Model which would have two endogenous and three exogenous variables. The two equations would also include any necessary deterministic inputs such as Level Shifts , Local Time Trends , Seasonal Pulses and Pulses as necessary. Since you have relaxed to predict/analyse the two endogenous series simultaneously we can proceed with two Transfer Functions. These two Transfer Functions will be formed to fully utilize any contemporaneous or lag structure in each of the two potential causal series. Furthermore omitted stochastic structure will be proxied by including an ARIMA component. Additionally any and all omitted deterministic structure ( e.g. a law change ) will be proxied by employing Intervention Detection schemes to suggest Level Shifts/Local Time Trends/Seasonal Pulses ( e.g a June effect )/Pulses. Care would be taken to ensure that the parameters of said model did not vary over time or that non-constant error variance biased the results. A Transfer Function is often referred to as an ARMAX Nodel, consider  The series Y being predicted by series B and an error term E.
| null | CC BY-SA 3.0 | null | 2011-06-20T01:52:40.640 | 2011-06-20T01:52:40.640 | null | null | 3382 | null |
12112 | 1 | 12114 | null | 23 | 24799 | I read that in Bayes rule, the denominator $\Pr(\textrm{data})$ of
$$\Pr(\text{parameters} \mid \text{data}) = \frac{\Pr(\textrm{data} \mid \textrm{parameters}) \Pr(\text{parameters})}{\Pr(\text{data})}$$
is called a normalizing constant. What exactly is it? What is its purpose? Why does it look like $\Pr(\textrm{data})$? Why doesn't it depend on the parameters?
| Normalizing constant in Bayes theorem | CC BY-SA 3.0 | null | 2011-06-20T03:38:02.277 | 2022-04-16T17:02:37.793 | 2018-03-07T18:04:42.687 | 7290 | 5057 | [
"probability",
"bayesian"
] |
12114 | 2 | null | 12112 | 21 | null | The denominator, $\Pr(\textrm{data})$, is obtained by integrating out the parameters from the join probability, $\Pr(\textrm{data}, \textrm{parameters})$. This is the marginal probability of the data and, of course, it does not depend on the parameters since these have been integrated out.
Now, since:
- $\Pr(\textrm{data})$ does not depend on the parameters for which one wants to make inference;
- $\Pr(\textrm{data})$ is generally difficult to calculate in a closed-form;
one often uses the following adaptation of Bayes' formula:
$\Pr(\textrm{parameters} \mid \textrm{data}) \propto \Pr(\textrm{data} \mid \textrm{parameters}) \Pr(\textrm{parameters})$
Basically, $\Pr(\textrm{data})$ is nothing but a "normalising constant", i.e., a constant that makes the posterior density integrate to one.
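A tiny numerical illustration (Python/numpy sketch on a made-up discrete grid — a coin's bias $\theta$ with a uniform prior and 7 heads in 10 flips):

```python
import numpy as np

# discrete grid of candidate parameter values (coin bias theta)
theta = np.linspace(0.01, 0.99, 99)
prior = np.ones_like(theta) / len(theta)      # uniform prior

# likelihood of observing 7 heads in 10 flips, up to a constant
k, n = 7, 10
likelihood = theta**k * (1 - theta)**(n - k)

unnormalized = likelihood * prior             # Pr(data|params) Pr(params)
p_data = unnormalized.sum()                   # the normalizing constant
posterior = unnormalized / p_data             # now sums to 1
```

Dividing by `p_data` — which plays the role of $\Pr(\textrm{data})$ on this grid — is exactly what makes the posterior a proper probability distribution.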
| null | CC BY-SA 3.0 | null | 2011-06-20T04:28:24.427 | 2018-03-07T17:59:12.160 | 2018-03-07T17:59:12.160 | null | 3019 | null |
12115 | 1 | null | null | 0 | 403 | The definition for the $k$-th lag auto correlation is $Cov(y_t,y_{t-k})/Var(y_t)$.
My question is: why shouldn't it be $Cov(y_t,y_{t-k})/[Var(y_t)\cdot Var(y_{t-k})]^{0.5}$?
In other words, how is it different from the correlation coefficient between $y_t$ and $y_{t-k}$?
| Time series autocorrelation | CC BY-SA 3.0 | null | 2011-06-20T06:10:24.520 | 2011-06-20T17:17:47.060 | 2011-06-20T17:17:47.060 | 3454 | 1887 | [
"time-series",
"autocorrelation"
] |
12116 | 2 | null | 12115 | 3 | null | Auto-correlation is defined for stationary processes. The stationary process has constant mean and constant variance, hence $Var(y_t)=Var(y_{t-k})$ for all $t$ and $k$. With this in mind the definitions coincide.
| null | CC BY-SA 3.0 | null | 2011-06-20T06:46:06.407 | 2011-06-20T06:46:06.407 | null | null | 2116 | null |
12117 | 2 | null | 12068 | 3 | null | See here: [http://en.wikipedia.org/wiki/Support_vector_machine#Multiclass_SVM](http://en.wikipedia.org/wiki/Support_vector_machine#Multiclass_SVM)
I am not familiar with libSVM, but this is the way I would do it (given $c$ classes):
one-vs-all: Train $c$ classifiers, each classifier $i$ on the two classes $c_i$ and (all classes except $c_i$). Afterwards sum-normalize all single-class scores, i.e. $finalscore(c_i)=\frac{score(c_i)}{\sum_j score(c_j)}$.
one-vs-one: Train $(c^2 - c)/2$ classifiers $m_{(i,j)}$, one for each pair $(c_i,c_j)$. Let $m_{(i,j)}(c_l)$ be the score for class $c_l$ of model $m_{(i,j)}$, which means that $m_{(i,j)}(c_l)\geq 0$ for $l \in \{i,j\}$ and $0$ otherwise. Afterwards calculate the average score for each class, i.e. $finalscore(c_i)=\frac{\sum_{(l,k)}m_{(l,k)}(c_i)}{c-1}$, and sum-normalize the final scores (as in one-vs-all).
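For illustration, the one-vs-all sum-normalization step might look like this (Python sketch with made-up, nonnegative scores):

```python
# raw scores from c = 3 one-vs-all classifiers (made-up numbers,
# assumed nonnegative so that sum-normalization is meaningful)
scores = {"c1": 0.8, "c2": 1.6, "c3": 0.6}

total = sum(scores.values())
final = {c: s / total for c, s in scores.items()}

# final scores sum to 1 and keep the ranking of the raw scores
best = max(final, key=final.get)
```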
Final remark: This paper (found on libsvm-page) might help, too: [A comparison of methods for multi-class support vector machines](http://www.csie.ntu.edu.tw/~cjlin/papers/multisvm.pdf)
| null | CC BY-SA 3.0 | null | 2011-06-20T06:47:32.433 | 2011-06-20T06:47:32.433 | null | null | 264 | null |
12118 | 1 | 12127 | null | 8 | 2036 | Following the design and data described in [this question](https://stats.stackexchange.com/questions/11887/is-this-design-a-one-way-repeated-measures-anova-or-not), I did a simple one-way within-subjects repeated-measures (RM) ANOVA and found some significant p-values. I then applied non-orthogonal post-hoc Tukey's HSD tests, and when I got significant results I applied Holm-Bonferroni (1979) correction. Whenever some p-values survived the FWER correction, I calculated 95% CIs and mean for the associated pairwise comparisons.
My question is: If I don't observe a significant result at any of the above steps, do I have to carry out a power analysis for the RM ANOVA, apply [Tukey's HSD test](http://en.wikipedia.org/wiki/Tukey%27s_range_test) or [Holm-Bonferroni](http://en.wikipedia.org/wiki/Holm%E2%80%93Bonferroni_method) adjustments, or do I simply report results from the RM ANOVA without doing the power analysis?
The problem is that I'm starting to immerse in biostatistics only after my experiments, and unfortunately I didn't run a power analysis beforehand.
| If a statistic doesn't reveal a significance, do I have to calculate power for it? | CC BY-SA 4.0 | null | 2011-06-20T07:06:07.463 | 2023-01-10T15:26:26.447 | 2023-01-10T15:26:26.447 | 362671 | 5003 | [
"anova",
"repeated-measures",
"multiple-comparisons",
"statistical-power"
] |
12119 | 1 | 12142 | null | 2 | 1117 | I want to estimate the parameters of a function of general form $y = a \cdot x^b$. I applied a log-log transformation to obtain a linear function of the form $\log y = \log a + b \times \log x$. I have fitted the linear model in MATLAB.
MATLAB computes the goodness of fit in terms of the sum of squared errors (SSE) and the (adjusted) $R^2$. I want to report these numbers, but they are probably meaningless given that in fact $\log a$, and not $a$, has been estimated. How do I fix this?
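For concreteness, here is a small Python/NumPy sketch of the fit I mean (synthetic data and made-up parameter values, not my actual MATLAB code), including one way to evaluate SSE and $R^2$ on the original scale after back-transforming:

```python
import numpy as np

# Synthetic data from y = a * x^b with multiplicative noise (a = 2, b = 1.5
# are made-up values for illustration only)
rng = np.random.default_rng(0)
x = np.linspace(1, 10, 50)
y = 2.0 * x**1.5 * np.exp(rng.normal(0, 0.05, x.size))

# Fit the log-log linear model: log y = log a + b * log x
b, log_a = np.polyfit(np.log(x), np.log(y), 1)
a = np.exp(log_a)

# Back-transform and evaluate the fit on the ORIGINAL scale
y_hat = a * x**b
sse = np.sum((y - y_hat) ** 2)
r2 = 1 - sse / np.sum((y - y.mean()) ** 2)
```

My question is whether the log-scale statistics MATLAB reports are meaningful, or whether I should report original-scale quantities computed like the above.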
| How to compute goodness of fit after applying logarithmic transformation? | CC BY-SA 3.0 | null | 2011-06-20T09:03:59.923 | 2011-06-20T21:50:59.837 | 2011-06-20T10:15:19.310 | 930 | 5090 | [
"matlab",
"goodness-of-fit",
"curve-fitting"
] |
12120 | 1 | null | null | 1 | 320 | I have the following function $Y = \alpha\ln(x)$, $\alpha$ is a constant. Question:
(a) What is the expression for the derivative of Y with respect to x?
(b) what is the elasticity of Y wrt. x?
| Derivation of an elasticity from a simple function $Y = \alpha\ln(x)$ | CC BY-SA 3.0 | null | 2011-06-20T09:22:59.563 | 2012-06-08T01:55:59.490 | 2012-06-08T01:55:59.490 | 4856 | 3762 | [
"self-study",
"mathematical-statistics"
] |
12121 | 2 | null | 12103 | 3 | null | You are basically looking for the [Jensen's inequality](http://en.wikipedia.org/wiki/Jensen%27s_inequality). Proofs are in the article.
| null | CC BY-SA 3.0 | null | 2011-06-20T09:30:31.577 | 2011-06-20T09:30:31.577 | null | null | 1765 | null |
12122 | 2 | null | 12120 | 5 | null | a) The derivative of $\ln(x)$ is $\frac{1}{x}$ therefore $\frac{\partial Y}{\partial x} = \frac{\alpha}{x}$
b) The elasticity of a function is calculated as $\frac{\partial Y}{\partial x} \frac{x}{Y} = \frac{\alpha x}{Yx} = \frac{\alpha}{Y}$
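A quick numerical sanity check (Python, with arbitrary values $\alpha = 3$ and $x = 5$):

```python
import math

alpha, x = 3.0, 5.0
Y = alpha * math.log(x)

# Central finite-difference approximation of dY/dx
h = 1e-6
dY_dx = (alpha * math.log(x + h) - alpha * math.log(x - h)) / (2 * h)

# (a) the derivative should equal alpha / x
# (b) the elasticity dY/dx * x/Y should equal alpha / Y = 1 / ln(x)
elasticity = dY_dx * x / Y
```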
I hope that is what you're looking for.
| null | CC BY-SA 3.0 | null | 2011-06-20T09:49:39.307 | 2011-06-20T15:55:10.487 | 2011-06-20T15:55:10.487 | 1765 | 1765 | null |
12123 | 2 | null | 12068 | 1 | null | If you are using only linear SVMs, you can try liblinear. It's by the same research group and supports multi-class linear SVM using the '-s 4' option.
[http://www.csie.ntu.edu.tw/~cjlin/liblinear/](http://www.csie.ntu.edu.tw/~cjlin/liblinear/)
Off-the-shelf multiclass classifiers for linear SVMs and LDA (Latent Dirichlet Allocation) are easily available.
As mentioned on [http://en.wikipedia.org/wiki/Multiclass_classification](http://en.wikipedia.org/wiki/Multiclass_classification), there are two main approaches for multi class problems for general classifiers: one-vs-all and all-vs-all.
one-vs-all: Train c classifiers, one for each class: that class versus the rest. Apply all c classifiers to a test point, and output the class with the highest score. (Winner Takes All scoring)
all-vs-all: Train a classifier for each pair of classes. Apply each classifier to a test point, and choose the class with the highest average score.
Other variations for the final scoring are also possible.
| null | CC BY-SA 3.0 | null | 2011-06-20T10:08:50.060 | 2011-06-20T10:08:50.060 | null | null | 2072 | null |
12124 | 1 | null | null | 1 | 6402 | I'm trying to interpret a significant three-way interaction. Basically, I've used hierarchical regression to analyse my data, and I have come up with a significant three-way interaction.
My DV is continuous. My 3 IVs are continuous, categorical (2 levels), and another categorical (3 levels). My sample size is 194.
I know that I can do up graphs to eyeball the interactions, but I need a statistical method in order to figure out whether or not a slope is significant. I'm aware of [Jeremy Dawson's template](http://www.jeremydawson.co.uk/slopes.htm) to figure out significant slope differences, but they only work for 3 continuous variables. Is there a method I can use to do this?
I've also had a read through the UCLA's SPSS guide to [interpreting three-way interactions](http://www.ats.ucla.edu/stat/spss/faq/threeway_hand.htm). Would this be the way to go?
Thanks in advance for any help you can provide.
| Dissecting three-way interactions | CC BY-SA 3.0 | null | 2011-06-20T10:09:58.720 | 2012-03-27T13:59:42.490 | 2011-06-20T10:35:45.807 | 930 | 3998 | [
"regression",
"interaction"
] |
12126 | 2 | null | 12124 | 3 | null | I am a little confused by your question, since you say that you have already found a significant 3 way interaction and then say you want to find whether slope differences are significant, but I think you want to see which of the levels are different in terms of their slopes, is that right?
I don't know SPSS, but in SAS you can request particular tests of different hypotheses. In SAS you can do this with EFFECT statements. You can also do this inside a LSMEANS statement.
But I would shy away from these statements; first, they usually have low power (unless all your variables are perfectly measured and perfectly reliable). Second, significance just isn't that significant. Effect size is more important. Third, the graphs say more (especially in interaction interpretation) than any p-value could.
To quote my favorite professor in grad school "When an article is full of significance tests, the authors are p-ing all over the research".
| null | CC BY-SA 3.0 | null | 2011-06-20T10:33:12.957 | 2011-06-20T10:33:12.957 | null | null | 686 | null |
12127 | 2 | null | 12118 | 16 | null | The hardline view on post-hoc power calculation is: don't do it as it's pointless. Russ Lenth from the University of Iowa has an article on this topic [here](https://stat.uiowa.edu/sites/stat.uiowa.edu/files/techrep/tr378.pdf) (He also has an amusingly facetious Java applet for post-hoc power on his [website](http://www.stat.uiowa.edu/%7Erlenth/Power/index.html)).
| null | CC BY-SA 4.0 | null | 2011-06-20T12:02:54.423 | 2023-01-10T15:22:16.090 | 2023-01-10T15:22:16.090 | 28500 | 266 | null |
12128 | 1 | 12132 | null | 67 | 96593 | In other contexts, orthogonal means "at right angles" or "perpendicular".
What does orthogonal mean in a statistical context?
Thanks for any clarifications.
| What does orthogonal mean in the context of statistics? | CC BY-SA 3.0 | null | 2011-06-20T12:38:51.657 | 2023-01-15T03:18:40.330 | null | null | 561 | [
"descriptive-statistics"
] |
12129 | 2 | null | 12128 | 3 | null | It's most likely they mean 'unrelated' if they say 'orthogonal'; if two factors are orthogonal (e.g. in factor analysis), they are unrelated, their correlation is zero.
| null | CC BY-SA 3.0 | null | 2011-06-20T13:56:50.330 | 2011-06-20T13:56:50.330 | null | null | 3140 | null |
12130 | 2 | null | 12068 | 3 | null | If I understand your question, you can pass the `-b` flag with option `1` when building the model like this:
```
../svm-train -b 1 data_file
```
And then when creating the prediction vector, you do the same:
```
../svm-predict -b 1 test_file data_file.model output_file
```
Here is an output_file example that has 3 classes:
```
labels 0 -1 1
0 0.635655 0.18753 0.176816
```
| null | CC BY-SA 3.0 | null | 2011-06-20T14:14:25.713 | 2011-06-20T14:19:56.680 | 2011-06-20T14:19:56.680 | 3306 | 3306 | null |
12131 | 2 | null | 12077 | 1 | null | Let $A=\frac{1}{N}\sum_{t=1}^N \varphi(t)\varphi^T(t)$ and $B=\frac{1}{N}\sum_{t=1}^N\varphi(t)y(t)$
Then by distributivity $A^{-1}[B-A\theta_0]=A^{-1}B-A^{-1}A\theta_0=\hat{\theta}-\theta_0$.
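A quick numerical check of this identity (Python/NumPy, with arbitrary simulated data):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 100, 3
phi = rng.normal(size=(N, d))      # row t is phi(t)^T
y = rng.normal(size=N)
theta0 = rng.normal(size=d)

A = phi.T @ phi / N                # (1/N) sum_t phi(t) phi(t)^T
B = phi.T @ y / N                  # (1/N) sum_t phi(t) y(t)

theta_hat = np.linalg.solve(A, B)                 # A^{-1} B
lhs = np.linalg.solve(A, B - A @ theta0)          # A^{-1} [B - A theta0]
# lhs equals theta_hat - theta0, as claimed
```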
| null | CC BY-SA 3.0 | null | 2011-06-20T14:18:23.077 | 2011-06-20T14:18:23.077 | null | null | 3454 | null |
12132 | 2 | null | 12128 | -19 | null | It means they [the random variables X, Y] are 'independent' of each other. Independent random variables are often considered to be at 'right angles' to each other, where by 'right angles' is meant that the inner product of the two is 0 (an equivalent condition from linear algebra).
For example on the X-Y plane the X and Y axes are said to be orthogonal because if a given point's x value changes, say going from (2,3) to (5,3), its y value remains the same (3), and vice versa. Hence the two variables are 'independent'.
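For instance (Python/NumPy, with made-up values): two mean-centered variables whose inner product is zero also have zero correlation:

```python
import numpy as np

x = np.array([1.0, -1.0, 1.0, -1.0])    # already mean-centered
y = np.array([1.0, 1.0, -1.0, -1.0])    # already mean-centered

inner = np.dot(x, y)                     # 0: the vectors are orthogonal
corr = np.corrcoef(x, y)[0, 1]           # 0: hence uncorrelated
```

(Note that zero correlation does not in general imply full statistical independence.)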
See also Wikipedia's entries for [Independence](http://en.wikipedia.org/wiki/Independence_(probability_theory)) and [Orthogonality](http://en.wikipedia.org/wiki/Orthogonality)
| null | CC BY-SA 3.0 | null | 2011-06-20T14:29:43.350 | 2014-11-04T09:58:55.187 | 2014-11-04T09:58:55.187 | 14640 | 5093 | null |
12133 | 2 | null | 12128 | 21 | null | @Mien already provided an answer, and, as pointed out by @whuber, orthogonal means uncorrelated. However, I really wish people would provide some references. You might consider the following links helpful since they explain the concept of correlation from a geometric perspective.
- The Geometry of Vectors (see p. 7)
- Linearly Independent, Orthogonal, and Uncorrelated Variables
- Graphical representation of two-dimensional correlation in vector space (may not be free to you)
| null | CC BY-SA 4.0 | null | 2011-06-20T14:30:33.273 | 2023-01-15T03:18:40.330 | 2023-01-15T03:18:40.330 | 362671 | 307 | null |
12134 | 1 | null | null | 1 | 140 | I would like to estimate the distribution of a very large population of known size but unknown mean and variance. I cannot assume anything about the shape of the underlying distribution (although I am relatively certain that it is not normal). However, I am certain that the values in the population are non-negative, non-zero integers. I believe that the distribution is naturally lumped in some way discretely rather than being continuously distributed over the entire range. I cannot sample the entire population but I would like to estimate the probability density function. I would also like some level of assurance of the correctness of the estimated distribution. What is the best way to go about this?
| Estimating the distribution of a very large population of known size and unknown variance | CC BY-SA 3.0 | 0 | 2011-06-20T16:58:38.517 | 2011-06-20T18:44:04.457 | 2011-06-20T17:19:06.420 | null | 5095 | [
"distributions",
"computational-statistics"
] |
12135 | 2 | null | 12112 | 2 | null | When applying Bayes' rule, we usually wish to infer the "parameters" and the "data" is already given. Thus, $\Pr(\textrm{data})$ is a constant and we can assume that it is just a normalizing factor.
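A toy illustration of this (Python, with made-up numbers for two hypotheses and one observed datum):

```python
# Two hypotheses with a uniform prior; likelihoods are made-up values.
prior = {"H1": 0.5, "H2": 0.5}
likelihood = {"H1": 0.8, "H2": 0.2}      # Pr(data | hypothesis)

# The posterior is proportional to prior * likelihood; Pr(data) is just
# the constant that makes the posterior sum to one.
unnorm = {h: prior[h] * likelihood[h] for h in prior}
pr_data = sum(unnorm.values())           # the "normalizing factor"
posterior = {h: v / pr_data for h, v in unnorm.items()}
```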
| null | CC BY-SA 3.0 | null | 2011-06-20T18:04:29.430 | 2018-03-07T18:04:29.347 | 2018-03-07T18:04:29.347 | null | 2072 | null |