Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9924 | 2 | null | 9917 | 3 | null | Rather than "Fix a wall at time $T$ and select the best candidate after $T$", I think you may be using "Fix a wall at time $T$ and select the first candidate to arrive at or after $T$ who is better than all those arriving before time $T$". This is a continuous version of the [secretary problem](http://en.wikipedia.org/wiki/Secretary_problem).
$E_{t}[1_{\{t>T\}}\frac{T}{t}]$ can be interpreted in two parts: $1_{\{t>T\}}$ is an indicator that the best candidate arrives at or after time $T$, i.e. you have not already rejected the best overall candidate; $\frac{T}{t}$ is the probability that the best candidate to arrive strictly before time $t$ arrives before time $T$, i.e. that you have not selected a candidate who is not the best overall. The expectation of the indicator of an event is the probability of that event.
The interval $[0,1]$ is simply the time during which you see the candidates. You can scale this to other units, so if you are looking at time $[0,\alpha]$ then your $T=\frac{1}{e}$ will be scaled to $\frac{\alpha}{e}$.
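A quick Monte Carlo sketch of this rule, with made-up settings ($n=50$ candidates, wall at $T=1/e$), shows the success probability landing near $1/e \approx 0.368$:

```python
import math
import random

def best_choice_rate(n=50, T=1/math.e, trials=20000, seed=1):
    """Fraction of trials in which the 'wall at T' rule picks the best candidate."""
    random.seed(seed)
    wins = 0
    for _ in range(trials):
        times = sorted(random.random() for _ in range(n))  # uniform arrival times
        quality = list(range(n))
        random.shuffle(quality)                            # quality of the i-th arrival
        best_before = max((q for t, q in zip(times, quality) if t < T), default=-1)
        for t, q in zip(times, quality):
            # accept the first arrival at or after T beating everything seen before T
            if t >= T and q > best_before:
                wins += (q == n - 1)                       # was it the overall best?
                break
    return wins / trials

rate = best_choice_rate()
```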
| null | CC BY-SA 3.0 | null | 2011-04-24T13:40:16.647 | 2011-04-24T13:40:16.647 | null | null | 2958 | null |
9925 | 2 | null | 9913 | 6 | null | whuber has made the point in the comment.
If $\log_e(Y) = B_0 + B_1\log_e(X) + U$ and $U$ is independent of $X$ then taking the partial derivative with respect to $X$ gives $\frac{\partial Y}{\partial X}\cdot\frac{1}{Y} = B_1\frac{1}{X}$, i.e. $B_1 = \frac{\partial Y}{\partial X}\cdot\frac{X}{Y}$.
$E_{y,x} = \lim_{\Delta X \rightarrow 0} \frac { \Delta Y} { y} / \frac { \Delta X} { x}$, which is the same thing. Take absolute values if you want to avoid negative elasticities.
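A quick numerical check (with made-up coefficients) that the log-log slope equals the point elasticity $\frac{\partial Y}{\partial X}\cdot\frac{X}{Y}$ at any point:

```python
import math

B0, B1 = 0.5, 1.7                        # made-up coefficients
f = lambda X: math.exp(B0) * X ** B1     # log Y = B0 + B1 log X, with U = 0

x, h = 2.0, 1e-6
dY_dX = (f(x + h) - f(x - h)) / (2 * h)  # central-difference derivative
elasticity = dY_dX * x / f(x)            # equals B1 up to numerical error
```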
| null | CC BY-SA 3.0 | null | 2011-04-24T14:05:19.113 | 2011-04-24T14:05:19.113 | null | null | 2958 | null |
9926 | 1 | 9975 | null | 13 | 534 | Is there any use for the quantity
$$
\int f(x)^2 dx
$$
in statistics or information theory?
| Is there any use for the quantity $\int f(x)^2 dx$ in statistics or information theory? | CC BY-SA 3.0 | null | 2011-04-24T15:05:10.177 | 2011-04-29T03:05:15.267 | 2011-04-29T03:05:15.267 | 2970 | 3567 | [
"probability",
"entropy",
"information-theory"
] |
9927 | 2 | null | 9918 | 7 | null | The way I read your terminology, what you want is first to assess internal consistency within each group of variables, and then to assess the correlations among the scale scores which constitute the average of each group of variables. The first can be done using Cronbach's alpha, and the second using Pearson correlation. This assumes you have reasonably normal distributions and reasonably linear relationships.
A more involved method, and not necessarily a required one, would be to conduct an exploratory factor analysis. You would try to establish which variables should be grouped together and then again to what degree those factors would be correlated. If you try this method, make sure you use oblique rotation to allow those correlations to show up. Whether you use principal components extraction or principal axis extraction would depend, respectively, on whether your variables are objective, error-free measurements or subjective ones such as survey items that contain a certain amount of error.
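As a sketch of the first step, Cronbach's alpha can be computed directly from its definition $\alpha = \frac{k}{k-1}\left(1 - \sum_i \sigma^2_i / \sigma^2_{\text{total}}\right)$; the toy data below are made up for illustration:

```python
import statistics

def cronbach_alpha(items):
    """items: one list of scores per item, all lists of equal length."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]           # per-respondent sum score
    item_var = sum(statistics.pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / statistics.pvariance(totals))

# two perfectly consistent items -> alpha = 1
alpha_perfect = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]])

# adding a noisier third item lowers alpha
alpha_noisy = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [2, 1, 4, 2]])
```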
| null | CC BY-SA 3.0 | null | 2011-04-24T15:48:08.573 | 2013-01-22T18:52:18.873 | 2013-01-22T18:52:18.873 | 2669 | 2669 | null |
9928 | 1 | 9936 | null | 3 | 602 | What "fat-tailed distributions" $p(x)$, symmetric about zero, have the property
$$\newcommand{\e}{\mathbb{E}}\newcommand{\rd}{\mathrm{d}}
\e e^X = \int_{-\infty}^{\infty} e^x p(x) \rd x < \infty \> ?
$$
Context
I'm attempting to price financial options without using the Black–Scholes formula. It is usually easier to work with the log-price, and often the log-price is assumed to be normally distributed; $X$ above denotes the log-price, with density $p$.
Empirical observations (e.g., the "volatility smile") suggest that the log-price isn't normal: the normal distribution decreases too rapidly away from 0. Thus, we need a fat-tailed distribution.
On the other hand, the value of a call option increases exponentially as the log-price increases linearly, so $p(x)$ must decay fast enough that $\e e^X < \infty$. In other words, the distribution must have heavier tails than the normal distribution, but still light enough that $\e e^X$ converges.
I tried the Cauchy and Student's $t$ distributions, but $\e e^X$ diverges for both, regardless of parameters.
I also realize I can create arbitrary distributions meeting my conditions (though I'm not exactly sure how), but I'm looking for a well-known (parametrized family of) distribution.
Even more details (for the masochist):
[https://github.com/barrycarter/bcapps/blob/master/bc-imp-vol.m](https://github.com/barrycarter/bcapps/blob/master/bc-imp-vol.m)
| Symmetric fat-tailed distributions where $\mathbb{E} e^X < \infty$ | CC BY-SA 3.0 | null | 2011-04-24T18:07:51.337 | 2011-04-27T16:44:25.453 | 2011-04-27T16:44:25.453 | null | null | [
"distributions"
] |
9930 | 1 | null | null | 18 | 7915 | This is probably demonstrating a fundamental lack of understanding of how partial correlations work.
I have 3 variables, x,y,z. When I control for z, the correlation between x and y increases over the correlation between x and y when z was not controlled for.
Does this make sense? I tend to think that when one controls for the effect of a third variable, the correlation should decrease.
Thank you for your help!
| Does it make sense for a partial correlation to be larger than a zero-order correlation? | CC BY-SA 3.0 | null | 2011-04-24T20:08:52.457 | 2021-12-10T11:29:35.833 | 2011-04-29T00:49:44.813 | 3911 | 4307 | [
"correlation"
] |
9931 | 1 | 9955 | null | 6 | 16779 | I have a problem where I need to calculate linear regression as samples come in. Is there a formula that I can use to get the exponentially weighted moving linear regression? Not sure if that's what you would call it though.
| Exponentially weighted moving linear regression | CC BY-SA 3.0 | null | 2011-04-24T18:56:26.757 | 2021-08-21T08:26:22.517 | 2011-04-24T20:24:05.063 | 4306 | 4306 | [
"regression"
] |
9932 | 2 | null | 9930 | 2 | null | I think you need to know about moderator and mediator variables. The classic paper is Baron and Kenny [cited 21,659 times]
A moderator variable:

> "In general terms, a moderator is a qualitative (e.g., sex, race, class) or quantitative (e.g., level of reward) variable that affects the direction and/or strength of the relation between an independent or predictor variable and a dependent or criterion variable. Specifically within a correlational analysis framework, a moderator is a third variable that affects the zero-order correlation between two other variables. ... In the more familiar analysis of variance (ANOVA) terms, a basic moderator effect can be represented as an interaction between a focal independent variable and a factor that specifies the appropriate conditions for its operation." p. 1174

A mediator variable:

> "In general, a given variable may be said to function as a mediator to the extent that it accounts for the relation between the predictor and the criterion. Mediators explain how external physical events take on internal psychological significance. Whereas moderator variables specify when certain effects will hold, mediators speak to how or why such effects occur." p. 1176
| null | CC BY-SA 3.0 | null | 2011-04-24T20:53:18.513 | 2011-04-25T05:19:38.197 | 2011-04-25T05:19:38.197 | 183 | 3597 | null |
9933 | 2 | null | 9931 | 7 | null | Sure, just add a `weights=` argument to `lm()` (in case of [R](http://www.r-project.org)):
```
R> x <- 1:10 ## mean of this is 5.5
R> lm(x ~ 1) ## regression on constant computes mean
Call:
lm(formula = x ~ 1)
Coefficients:
(Intercept)
5.5
R> lm(x ~ 1, weights=0.9^(seq(10,1,by=-1)))
Call:
lm(formula = x ~ 1, weights = 0.9^(seq(10, 1, by = -1)))
Coefficients:
(Intercept)
6.35
R>
```
Here I give 'more recent' (i.e., later) values more weight, and the mean shifts from 5.5 to 6.35. The key is the $\lambda ^ \tau$ exponential weight computed on the fly; you can change the weight factor to any value you choose, and depending on how you order your data you can also have the exponent run the other way.
You can do the same with regression models involving whichever regressors you have.
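The same idea can be run recursively as samples come in (which is what the question asks for): keep exponentially decayed sums and recover the weighted least-squares line from them at any time. Here is a sketch in Python with a made-up decay factor, cross-checked against the direct weighted fit:

```python
class EWRegression:
    """Online simple linear regression y = a + b*x with exponential forgetting."""
    def __init__(self, decay=0.9):
        self.d = decay
        self.w = self.sx = self.sy = self.sxx = self.sxy = 0.0

    def update(self, x, y):
        # decay the old sufficient statistics, then add the new point
        self.w   = self.d * self.w   + 1.0
        self.sx  = self.d * self.sx  + x
        self.sy  = self.d * self.sy  + y
        self.sxx = self.d * self.sxx + x * x
        self.sxy = self.d * self.sxy + x * y

    def coef(self):
        b = (self.w * self.sxy - self.sx * self.sy) / (self.w * self.sxx - self.sx ** 2)
        return (self.sy - b * self.sx) / self.w, b

def direct_fit(xs, ys, decay=0.9):
    """Batch weighted least squares with weights decay**(n-1-i), for cross-checking."""
    n = len(xs)
    w = [decay ** (n - 1 - i) for i in range(n)]
    W  = sum(w)
    Sx = sum(wi * x for wi, x in zip(w, xs))
    Sy = sum(wi * y for wi, y in zip(w, ys))
    Sxx = sum(wi * x * x for wi, x in zip(w, xs))
    Sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    b = (W * Sxy - Sx * Sy) / (W * Sxx - Sx ** 2)
    return (Sy - b * Sx) / W, b

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]
model = EWRegression(0.9)
for x, y in zip(xs, ys):
    model.update(x, y)
a_rec, b_rec = model.coef()
a_dir, b_dir = direct_fit(xs, ys, 0.9)
```

The recursive and batch answers agree exactly, so nothing needs to be refitted from scratch as new samples arrive.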
| null | CC BY-SA 3.0 | null | 2011-04-24T21:27:08.933 | 2011-04-24T21:27:08.933 | null | null | 334 | null |
9934 | 2 | null | 9918 | 2 | null | I would suggest using as a replacement for the notion of correlation, which is defined only for pair-wise, the notion of mutual information and integration in Gaussian models.
In Gaussian models, integration of a group of variables $G_1$ is defined as the entropy of the group:
$I_1 \propto log(|C_1|)$
where $C_1$ is the correlation matrix of the group of variables $G_1$. It is easy to see that if $G_1$ is comprised only of 2 variables, its integration is $log ( 1 - \rho^2)$, which directly relates to the pairwise correlation coefficient of the variables $\rho$.
To compute the interaction between two groups of variables, you can use their mutual information, which is the sum of the group entropies minus the joint entropy:
$MU_{12} = I_{1} + I_{2} - I_{12}$
I found [a reference](http://www.imt.liu.se/~magnus/cca/tutorial/node16.html) on these notions after a quick google that might be helpful.
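A small numeric sketch of these quantities, using a made-up $4\times 4$ correlation matrix (for Gaussians the entropy involves $\frac{1}{2}\log|C|$, so the $\frac{1}{2}$ is included explicitly):

```python
import math

def det(m):
    """Determinant by Laplace expansion along the first row (fine for small matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def integration(C):
    """Entropy-based 'integration' of a group of Gaussian variables: (1/2) log|C|."""
    return 0.5 * math.log(det(C))

def mutual_information(C12, C1, C2):
    return integration(C1) + integration(C2) - integration(C12)

a, b = 0.6, 0.3                      # within-group and between-group correlations
C1 = [[1, a], [a, 1]]                # correlation matrix of each group of two variables
joint = [[1, a, b, b],
         [a, 1, b, b],
         [b, b, 1, a],
         [b, b, a, 1]]
mi = mutual_information(joint, C1, C1)

# with b = 0 the groups are independent and the mutual information vanishes
indep = [[1, a, 0, 0],
         [a, 1, 0, 0],
         [0, 0, 1, a],
         [0, 0, a, 1]]
mi_zero = mutual_information(indep, C1, C1)
```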
| null | CC BY-SA 3.0 | null | 2011-04-24T22:34:58.023 | 2011-04-24T22:34:58.023 | null | null | 1265 | null |
9935 | 2 | null | 9895 | 1 | null | I think this is is not a good question: first having $N_e$ on both sides of the model, and second I suspect it is ill conditioned.
Here are my explorations. Let's use R to try to minimise the sum of squares of the difference between the right hand side and $N_e$ by setting up two functions with `param[1]=b`, `param[2]=c`, `param[3]=d` and `param[4]=T`
```
RHS <- function(par) {
Data$No * (1 - exp( (par[3] + par[1]*Data$No) * (par[4]*Data$Ne - 72)
/ (1 + par[2]*Data$No) ) )
}
sumsq <- function(par) {
sum( ( Data$Ne - RHS(par) )^2 )
}
```
We now need to start from some initial estimate, and this one seems to work quite well
```
param <- c(2,3,4,11)
```
We can then use `optim`. We could set up the constraints, either with `optim` or with `constrOptim`, but with luck we can find a solution which meets the constraints anyway. We may need to run `optim` several times until the estimates of the parameters stop changing. For example
```
> o <- optim(param, sumsq)
> (param <- o$par)
[1] -0.200 3.825 4.825 11.825
> o <- optim(param, sumsq)
> (param <- o$par)
[1] 6.081155e-05 1.821490e+00 -2.820120e-02 9.624836e+01
> o <- optim(param, sumsq)
> (param <- o$par)
[1] 5.278305e-05 3.244559e+00 -2.819326e-02 1.631129e+02
> o <- optim(param, sumsq)
> (param <- o$par)
[1] 5.278179e-05 3.244649e+00 -2.819326e-02 1.631168e+02
> o <- optim(param, sumsq)
> (param <- o$par)
[1] 5.278179e-05 3.244649e+00 -2.819326e-02 1.631168e+02
```
and looking at the details we can see the sum of squares of the differences has been reduced to under 30, which is not bad given there are 89 data points
```
> o
$par
[1] 5.278179e-05 3.244649e+00 -2.819326e-02 1.631168e+02
$value
[1] 29.79642
$counts
function gradient
195 NA
$convergence
[1] 0
$message
NULL
```
Using these parameters we can plot the right hand side of the model against the left hand side with a line to show where $y=x$
```
plot(RHS(param) ~ Data$Ne)
abline(0,1)
```

and that does not look too bad, though perhaps it should curve a bit lower for high $N_e$. The biggest differences come with the points `(No=30,Ne=20)` and `(No=100,Ne=33)`.
The problem comes when starting with other initial values for the parameters in the optimisation: some fail completely, some fail the constraints, and most worryingly some settle down on good but different values. For example starting with
```
param <- c(2,3,4,9)
```
the repeated `optim` calls settle down on the parameters
```
[1] 8.291305e-06 9.521925e-01 -4.292615e-03 3.153295e+02
```
which are very different to the values found earlier. The sum of squares of the differences is still under 34 and the graph looks almost identical, but if the parameters are supposed to have physical meaning then this is not a good way to estimate them; $T$ has almost doubled, from about 163 to 315.
So the parameters produced cannot be trusted.
| null | CC BY-SA 3.0 | null | 2011-04-25T00:02:07.613 | 2011-04-25T00:02:07.613 | null | null | 2958 | null |
9936 | 2 | null | 9928 | 1 | null | The variance gamma process is a useful way to go. It is an extension of the standard brownian motion process, $Z(t)$
$$Y(t;\mu,\sigma,\nu)=\mu \Gamma(t;1,\nu) + \sigma Z(\Gamma[t;1,\nu])$$
Where $\Gamma(t;1,\nu)$ is a gamma process, with independent gamma distributed increments. So $\Gamma(t+s;1,\nu)-\Gamma(t;1,\nu)$ has a gamma distribution with mean $s$ and variance $s\nu$. (you could replace the $1$ by another parameter $\mu$ if desired, to have mean $s\mu$, as is done [here](http://www.math.nyu.edu/research/carrp/papers/pdf/VGEFRpub.pdf)). The intuition is that "time" is measured in some sort of trading volume rather than calendar time. So it incorporates a sense of the market "getting hot" and "getting cold".
This process has the "fat tail" property that you seek, and is commonly used as an option pricing model. It can be used by averaging the Black–Scholes option price with respect to the gamma distribution for the time.
The increments of this distribution $Y(t)-Y(s)$ follow a Laplace distribution.
More generally one speaks of what are called Lévy processes, which have characteristic function $\varphi_{t}(\theta)=E[\exp(i\theta Y_{t})]$:
$$\varphi_{t}(\theta)=\exp\left(it\mu\theta - \frac{t}{2}\sigma^{2}\theta^{2}+t\int_{|x|<\epsilon}[e^{i\theta x}-1-i\theta x]d\Pi(x)+t\int_{|x|>\epsilon}[e^{i\theta x}-1]d\Pi(x)\right)$$
Where $d\Pi(x)$ is called the Levy measure, and governs discontinuities in the process (or the "jumps"). The measure must satisfy:
$$\int_{|x|<\epsilon}x^{2}d\Pi(x)<\infty$$
$$\int_{|x|>\epsilon}d\Pi(x)<\infty$$
Thus it can have a singularity at zero as long as it is not "too big". One can show that for the variance gamma process the Lévy measure is given by:
$$d\Pi(x)=\frac{dx}{\nu|x|}\exp(-\frac{1}{\nu}|x|)$$
UPDATE
In regards to Cardinal's well-directed comments (I have a tendency to "wander off" when I'm doing some sort of maths): the only criterion your "fat-tailed" distribution must satisfy is that the moment generating function $m_{X}(t)=E[\exp(tX)]$ exists when evaluated at $t=1$. Equivalently, we require the characteristic function to exist when evaluated at $\theta=-i$. Because a Lévy process is defined by its CF, this is always finite when the two integral conditions above are satisfied. The expectation is then given by the above formula.
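To make the Laplace remark concrete: the Laplace$(0,b)$ density is symmetric with heavier-than-normal tails, and its MGF $m_X(t)=1/(1-b^2t^2)$ is finite at $t=1$ whenever $b<1$. A quick Monte Carlo check with a made-up $b=0.4$:

```python
import math
import random

b = 0.4                            # scale; need b < 1 for E[exp(X)] to be finite
closed_form = 1 / (1 - b * b)      # Laplace(0, b) MGF evaluated at t = 1

random.seed(42)
def laplace_sample():
    # difference of two independent Exp(1) draws is a standard Laplace variate
    return b * (random.expovariate(1.0) - random.expovariate(1.0))

n = 100_000
mc_estimate = sum(math.exp(laplace_sample()) for _ in range(n)) / n
```

The Monte Carlo average stays close to the closed form; for a Cauchy or Student's $t$ the same average would blow up as $n$ grows.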
| null | CC BY-SA 3.0 | null | 2011-04-25T00:24:40.020 | 2011-04-26T12:14:57.870 | 2011-04-26T12:14:57.870 | 2392 | 2392 | null |
9937 | 1 | 9938 | null | 11 | 43465 | In a data frame, I would like to get the column's index by name. For example:
```
x <- data.frame(foo=c('a','b','c'),bar=c(4,5,6),quux=c(4,5,6))
```
I want to know the column index for "bar".
I came up with the following but it seems inelegant. Is there a more straightforward builtin that I am missing?
```
seq(1,length(names(x)))[names(x) == "bar"]
[1] 2
```
| Finding the column index by its name in R | CC BY-SA 3.0 | null | 2011-04-25T00:44:09.760 | 2011-04-25T14:45:57.530 | 2011-04-25T14:45:57.530 | null | 1138 | [
"r"
] |
9938 | 2 | null | 9937 | 22 | null | probably this is the simplest way:
```
which(names(x)=="bar")
```
| null | CC BY-SA 3.0 | null | 2011-04-25T00:59:02.707 | 2011-04-25T00:59:02.707 | null | null | 2280 | null |
9939 | 2 | null | 9930 | 7 | null | Looking at the wikipedia page we have the partial correlation between $X$ and $Y$ given $Z$ is given by:
$$\rho_{XY|Z}=\frac{\rho_{XY}-\rho_{XZ}\rho_{YZ}}{\sqrt{1-\rho_{XZ}^{2}}\sqrt{1-\rho_{YZ}^{2}}}>\rho_{XY}$$
So we simply require
$$\rho_{XY}>\frac{\rho_{XZ}\rho_{YZ}}{1-\sqrt{1-\rho_{XZ}^{2}}\sqrt{1-\rho_{YZ}^{2}}}$$
The right hand side has a global minimum when $\rho_{XZ}=-\rho_{YZ}$. This global minimum is $-1$. I think this should explain what's going on. If the correlation between $Z$ and $Y$ is the opposite sign to the correlation between $Z$ and $X$ (but same magnitude), then the partial correlation between $X$ and $Y$ given $Z$ will always be greater than or equal to the correlation between $X$ and $Y$. In some sense the "plus" and "minus" conditional correlation tend to cancel out in the unconditional correlation.
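Plugging some hypothetical numbers into the formula shows both regimes:

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation of X and Y given Z."""
    return (r_xy - r_xz * r_yz) / (math.sqrt(1 - r_xz**2) * math.sqrt(1 - r_yz**2))

# opposite-signed correlations with Z: the partial correlation exceeds the marginal
inflated = partial_corr(0.0, 0.5, -0.5)   # 0.25 / 0.75 = 1/3 > 0

# same-signed correlations with Z: the partial correlation shrinks
shrunk = partial_corr(0.5, 0.6, 0.6)      # 0.14 / 0.64 = 0.21875 < 0.5
```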
UPDATE
I did some mucking around with R, and here is some code to generate a few plots.
```
partial.plot <- function(r){
r.xz<- as.vector(rep(-99:99/100,199))
r.yz<- sort(r.xz)
r.xy.z <- (r-r.xz*r.yz)/sqrt(1-r.xz^2)/sqrt(1-r.yz^2)
tmp2 <- ifelse(abs(r.xy.z)<1,ifelse(abs(r.xy.z)<abs(r),2,1),0)
r.all <-cbind(r.xz,r.yz,r.xy.z,tmp2)
mycol <- tmp2
mycol[mycol==0] <- "red"
mycol[mycol==1] <- "blue"
mycol[mycol==2] <- "green"
plot(r.xz,r.yz,type="n")
text(r.all[,1],r.all[,2],labels=r.all[,4],col=mycol)
}
```
You submit `partial.plot(0.5)` to see what a marginal correlation of 0.5 corresponds to in partial correlation. The plot is colour coded so that the red area represents the "impossible" partial correlations, the blue area is where $|\rho|<|\rho_{XY|Z}|<1$, and the green area is where $1>|\rho|>|\rho_{XY|Z}|$. Below is an example for $\rho_{XY}=r=0.5$

| null | CC BY-SA 3.0 | null | 2011-04-25T01:01:49.443 | 2011-04-29T01:13:22.797 | 2011-04-29T01:13:22.797 | 3911 | 2392 | null |
9940 | 1 | null | null | 2 | 142 | I have $n$ standard normal and independent random variables $X_i$ (In reality I have a large known number of them, but let's just say I have $n$). In my experiment I want to on average get exactly 3 random variables $X_i$ under a threshold $c$. To get that, I can compute $c$ having that property easily, because the average number of $X_i$ that are under a threshold $c$ is $n \Phi(c)$ where $\Phi$ is the cdf of the standard normal distribution.
So I choose $c = \Phi^{-1}(3/n)$ in this case. (Which is a negative number for large $n$.)
But unfortunately I already know the value of two other standard normal random variables $Y$ and $Z$ which may depend on each other and on any number of the other $X_i$.
So my question is: if I know that $Y$ and $Z$ are under the threshold $c=\Phi^{-1}(3/n)$, is it still true that on average at most a constant number of the other random variables $X_i$ are under the threshold $c$? That is, knowing that $Y$ and $Z$ are under the threshold shouldn't suddenly push many of the other random variables under it too.
I am almost certain that they can't, but I don't know how to prove it. Any hints are welcome, as are pointers to books where this might be covered.
| Few random variables cannot influence $n$ independent others too much? | CC BY-SA 3.0 | null | 2011-04-25T05:40:10.820 | 2011-04-25T20:32:48.797 | 2011-04-25T18:23:15.007 | 4312 | 4312 | [
"probability",
"normal-distribution",
"random-variable"
] |
9941 | 2 | null | 9918 | 5 | null |
- The standard tools in your situation, at least in psychology, would be exploratory and confirmatory factor analysis to assess the convergence of the inter-item correlation matrix with some proposed model of the relationship between factors and items. The way that you have phrased your question suggests that you might not be familiar with this literature.
For example, here are my notes on scale construction and factor analysis, and here is a tutorial in R on factor analysis from Quick-R.
Thus, while it's worth answering your specific question, I think that your broader aims will be better served by examining factor analytic approaches to evaluating multi-item, multi-factor scales.
- Another standard strategy would be to calculate total scores for each group of variables (what I would call a "scale") and correlate the scales.
- Many reliability analysis tools will report average inter-item correlation.
- If you created the 50 by 50 matrix of correlations between items, you could write a function in R that averaged subsets based on combinations of groups of variables. You might not get what you want if you have a mixture of positive and negative items, as the negative correlations might cancel out the positive correlations.
| null | CC BY-SA 3.0 | null | 2011-04-25T05:43:44.557 | 2011-04-25T05:43:44.557 | null | null | 183 | null |
9942 | 1 | 9943 | null | 4 | 455 | In the one-dimensional case, if $X$ is $\mathcal{N}(\mu,\sigma^2)$, then $Y =\alpha X + \beta $ is $\mathcal{N}(\alpha \mu + \beta,\alpha^2\sigma^2)$ . We can prove this using the cumulative distribution function of of $Y$
$F_Y(a) = P\{Y \leq a\} = P\{\alpha X + \beta \leq a\} = P\{X \leq (a-\beta)/\alpha\}$.
Substituting the density of $X$ and making a change of variable gives us
$F_Y(a) = \int_{-\infty}^{a} \frac{1}{\sqrt{2\pi}(\alpha\sigma)} \exp \{ \frac{-(v-(\alpha \mu + \beta))^2}{2(\alpha\sigma)^2}\} dv $
Hence
$f_Y(v) = \frac{1}{\sqrt{2\pi}(\alpha\sigma)} \exp \{ \frac{-(v-(\alpha \mu + \beta))^2}{2(\alpha\sigma)^2}\} $
Thus $Y$ is $\mathcal{N}(\alpha \mu + \beta, \alpha^2 \sigma^2)$.
In the multivariate case, if $X$ is $\mathcal{N}(\mu,\Sigma)$ and $Y=\alpha X + \beta$, is $Y \sim \mathcal{N}(\alpha \mu + \beta,\alpha^2\Sigma)$? If so, how do we prove it?
| Does $Y=\alpha X + \beta$ hold for multivariate gaussian density? | CC BY-SA 3.0 | null | 2011-04-25T06:08:43.413 | 2011-04-29T00:54:00.753 | 2011-04-29T00:54:00.753 | 3911 | 4290 | [
"multivariate-analysis"
] |
9943 | 2 | null | 9942 | 7 | null | The method of characteristic functions (CF) will work here. So we have the CF for $X$ as
$$\varphi_{X}(t)=\exp\left(it^{T}\mu_{X}-\frac{1}{2}t^{T}\Sigma_{X}t\right)$$
Now we make the substitution $Y=\alpha X + \beta$ in the CF and we get:
$$\varphi_{Y}(t)=E\left[\exp(it^{T}Y)\right]=E\left[\exp(it^{T}\alpha X +it^{T}\beta)\right]=\exp(it^{T}\beta)\varphi_{X}(\alpha t)$$
Then substitute in the CF expression for $X$.
$$\varphi_{Y}(t)=\exp(it^{T}\beta)\exp\left(i(\alpha t)^{T}\mu_{X}-\frac{1}{2}(\alpha t)^{T}\Sigma_{X}(\alpha t)\right)$$
$$=\exp\left(it^{T}[\alpha\mu_{X}+\beta]-\frac{1}{2}t^{T}[\alpha^{2}\Sigma_{X}]t\right)$$
But this is the characteristic function of a new normal distribution with mean vector $\alpha\mu_{X}+\beta$ and covariance matrix $\alpha^{2}\Sigma_{X}$. As characteristic functions are uniquely defined from a distribution function and vice versa, you have your proof.
To generalise to the case where $\alpha$ is an appropriately defined $c\times p$ matrix ($p$ being the dimension of $X$), we simply replace the covariance matrix $\alpha^{2}\Sigma_{X}$ with the $c\times c$ covariance matrix $\alpha\Sigma_{X}\alpha^{T}$. Note that $\beta$ must be a $c\times 1$ vector for the mean vector to make sense; the mean is unchanged at $\alpha\mu_{X}+\beta$.
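A Monte Carlo sanity check of the scalar-$\alpha$ case, with made-up $\mu$, $\Sigma$, $\alpha$ and $\beta$, sampling $X$ via a hand-rolled $2\times 2$ Cholesky factor:

```python
import math
import random

random.seed(0)

mu = [1.0, -2.0]
Sigma = [[2.0, 0.6],
         [0.6, 1.0]]
alpha, beta = 3.0, [0.5, 4.0]

# Cholesky factor of the 2x2 Sigma, done by hand: Sigma = L L^T
l11 = math.sqrt(Sigma[0][0])
l21 = Sigma[1][0] / l11
l22 = math.sqrt(Sigma[1][1] - l21 ** 2)

n = 100_000
ys = []
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1 = mu[0] + l11 * z1                 # X ~ N(mu, Sigma)
    x2 = mu[1] + l21 * z1 + l22 * z2
    ys.append((alpha * x1 + beta[0], alpha * x2 + beta[1]))

m1 = sum(y[0] for y in ys) / n
m2 = sum(y[1] for y in ys) / n
c11 = sum((y[0] - m1) ** 2 for y in ys) / n
c12 = sum((y[0] - m1) * (y[1] - m2) for y in ys) / n
# theory: mean = alpha*mu + beta = (3.5, -2), covariance = alpha^2 * Sigma
```

The sample mean and covariance of $Y$ land on $\alpha\mu+\beta$ and $\alpha^2\Sigma$ up to Monte Carlo error.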
| null | CC BY-SA 3.0 | null | 2011-04-25T07:00:19.810 | 2011-04-26T11:59:13.590 | 2011-04-26T11:59:13.590 | 2970 | 2392 | null |
9944 | 2 | null | 9918 | 16 | null | What @rolando suggested looks like a good start, if not the whole response (IMO). Let me continue with the correlational approach, following the Classical Test Theory (CTT) framework. Here, as noted by @Jeromy, a summary measure for your group of characteristics might be considered as the totalled (or sum) score of all items (a characteristic, in your words) belonging to what I will now refer to as a scale. Under CTT, this allows us to formalize individual "trait" propensity or liability as one's location on a continuous scale reflecting an underlying construct (a latent trait), although here it is merely an ordinal scale (but this another debate in the psychometrics literature).
What you described has to do with what is know as convergent (to what extent items belonging to the same scale do correlate one with each other) and discriminant (items belonging to different scales should not correlate to a great extent) validity in psychometrics. Classical techniques include multi-trait multi-method (MTMM) analysis (Campbell & Fiske, 1959). An illustration of how it works is shown below (three methods or instruments, three constructs or traits):

In this MTMM matrix, the diagonal elements might be Cronbach's alpha or test-retest intraclass correlation; these are indicators of the reliability of each measurement scale. The validity of the hypothesized (shared) constructs is assessed by the correlation of scale scores when different instruments are used to assess the same trait; if these instruments were developed independently, high correlation ($> 0.7$) would support the idea that the traits are defined in a consistent and objective manner. The remaining cells in this MTMM matrix summarize relations between traits within method, and between traits across methods, and are indicative of the way unique constructs are measured with different scales and of the relations between each trait in a given scale. Assuming independent traits, we generally don't expect these to be high (a recommended threshold is $<.3$), but more formal tests of hypotheses (on correlation point estimates) can be carried out. A subtlety is that we use the so-called "rest correlation", that is, we compute the correlation between an item (or trait) and its scale (or method) after removing the contribution of this item to the sum score of the scale (correction for overlap).
Even if this method was initially developed to assess the convergent and discriminant validity of a certain number of traits as studied by different measurement instruments, it can be applied to a single multi-scale instrument. The traits then become the items, and the methods are just the different scales. A generalization of this method to a single instrument is also known as multitrait scaling. Items correlating as expected (i.e., with their own scale rather than a different scale) are counted as scaling successes. We generally assume, however, that the different scales are not correlated, that is, that they target different hypothetical constructs. Averaging the within- and between-scale correlations then provides a quick way of summarizing the internal structure of your instrument. Another convenient way of doing so is to apply a cluster analysis to the matrix of pairwise correlations and see how your variables hang together.
Of note, in both cases, the usual caveats of working with correlation measures apply, that is you cannot account for measurement error, you need a large sample, instruments or tests are assumed to be "parallel" (tau-equivalence, uncorrelated errors, equal error variances).
The second part addressed by @rolando is also interesting: If there's no theoretical or substantive indication that the already established grouping of items makes sense, then you'll have to find a way to highlight the structure of your data with e.g., exploratory factor analysis. But even if you trust those "characteristics within a group", you can check that this is a valid assumption. Now, you might be using confirmatory factor analysis model to check that the pattern of items loadings (correlation of an item with its own scale) behaves as expected.
Instead of traditional factor analytic methods, you can also take a look at items clustering (Revelle, 1979) which relies on a Cronbach's alpha-based split-rule to group together items into homogeneous scales.
A final word: If you are using R, there are two very nice packages that will ease the aforementioned steps:
- psych, provides you with everything you need for getting started with psychometrics methods, including factor analysis (fa, fa.parallel, principal), items clustering (ICLUST and related methods), Cronbach's alpha (alpha); there's a nice overview available on William Revelle's website, especially An introduction to psychometric theory with applications in R.
- psy, also includes scree plot (via PCA + simulated datasets) visualization (scree.plot) and MTMM (mtmm).
References
- Campbell, D.T. and Fiske, D.W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56: 81–105.
- Hays, R.D. and Fayers, P. (2005). Evaluating multi-item scales. In Assessing quality of life in clinical trials, (Fayers, P. and Hays, R., Eds.), pp. 41-53. Oxford.
- Revelle, W. (1979). Hierarchical Cluster Analysis and the Internal Structure of Tests. Multivariate Behavioral Research, 14: 57-74.
| null | CC BY-SA 3.0 | null | 2011-04-25T08:25:35.240 | 2011-04-25T08:25:35.240 | null | null | 930 | null |
9946 | 2 | null | 9911 | 1 | null | This smells like archetypal analysis -- extracting some underlying prototypical objects. However, the vanilla AA will give you linear combination as PCA; thus I would suggest making something similar by first making some k-means-like clustering of the events and then selecting those which are closest to the centroids.
| null | CC BY-SA 3.0 | null | 2011-04-25T08:54:17.140 | 2011-04-25T08:54:17.140 | null | null | null | null |
9947 | 1 | 9963 | null | 7 | 702 | For a Dataset $D$, we have gold standard centroids say $c_1, c_2, \cdots, c_n$. Now if we run k-means algorithm on $D$ with input $n$, we get k-means centroid $k_1, k_2, \cdots, k_n$.
I just wanted to know: is there any algorithm/heuristic to match the centroids $k_i$ to the $c_j$, where $i, j= 1, \cdots, n$ (a one-to-one mapping between the $k$'s and $c$'s)?
I tried calculating the pairwise distances between $k_p$ and $c_j,\; j= 1, \cdots, n$, and matching $k_p$ to the $c_r$ at minimum distance. But then two different centroids $k_p$ and $k_q$ can both be assigned to the same $c_r$, which is not what we want.
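For what it's worth, this is a minimum-cost one-to-one assignment problem rather than greedy nearest-neighbour matching. For small $n$ an exhaustive sketch works (the centroids below are made up); for larger $n$ the Hungarian algorithm solves the same problem in $O(n^3)$ (e.g. `scipy.optimize.linear_sum_assignment`):

```python
import itertools
import math

def match_centroids(kmeans, gold):
    """perm[i] = index of the gold centroid assigned to kmeans[i], minimising
    total distance over all one-to-one assignments (exhaustive; small n only)."""
    n = len(kmeans)
    dist = [[math.dist(k, c) for c in gold] for k in kmeans]
    return list(min(itertools.permutations(range(n)),
                    key=lambda p: sum(dist[i][p[i]] for i in range(n))))

gold = [(0.0, 0.0), (5.0, 5.0), (0.0, 5.0)]     # c_j (made-up)
kmeans = [(5.1, 4.9), (0.2, 0.1), (0.1, 5.2)]   # k_i (made-up)
matching = match_centroids(kmeans, gold)
```

Because it is a permutation, no two $k$'s can land on the same $c$.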
| Centroid matching problem | CC BY-SA 3.0 | null | 2011-04-25T09:04:15.967 | 2011-04-26T04:29:33.870 | 2011-04-25T09:22:45.317 | null | 4290 | [
"clustering",
"algorithms"
] |
9948 | 1 | null | null | 4 | 561 | I need to generate cross variograms of images using moving windows. For that I use the following equation:
$$
\gamma_{jk}(h)=\frac{1}{2n(h)}\sum_{i=1}^{n(h)}\Big\{\big[dn_j(x_i)-dn_j(x_i+h)\big]\cdot\big[dn_k(x_i)-dn_k(x_i+h)\big]\Big\}
$$
The first part stands for one band(j) and next part of band k. To illustrate with sample matrices,
```
j = 1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
k = 17 18 19 20
21 22 23 24
25 26 27 28
29 30 31 32
```
In actual case I am using 7 X 7 windows for large satellite images.
I also had to generate variograms from images for this work. For variogram generation only one band of data needs to be considered; for that case I used `nlfilter` for the moving window and created a function to select and calculate values.
But for cross variograms, I am not able to decide what function to use. In this case the calculations go like this:
>
(1 - 2)(17 - 18) + (2 - 3)(18 - 19)
and so on.
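For reference, here is a direct sketch of the formula for a single lag $h$ (written in Python for illustration, not MATLAB's `nlfilter`; bands are lists of rows, and a 7 x 7 moving window would just apply this to each window in turn):

```python
def cross_variogram(band_j, band_k, dy, dx):
    """Sample cross variogram of two bands at lag h = (dy, dx)."""
    rows, cols = len(band_j), len(band_j[0])
    total, pairs = 0.0, 0
    for y in range(rows):
        for x in range(cols):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < rows and 0 <= x2 < cols:
                total += ((band_j[y][x] - band_j[y2][x2])
                          * (band_k[y][x] - band_k[y2][x2]))
                pairs += 1
    return total / (2 * pairs)

j = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
k = [[17, 18, 19, 20], [21, 22, 23, 24], [25, 26, 27, 28], [29, 30, 31, 32]]
g_horizontal = cross_variogram(j, k, 0, 1)  # lag of one pixel to the right
g_vertical = cross_variogram(j, k, 1, 0)    # lag of one pixel down
```

With the sample matrices, every horizontal pair contributes $(-1)(-1)=1$ over 12 pairs, giving $\gamma = 12/(2\cdot 12) = 0.5$.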
| Cross variogram with a moving window | CC BY-SA 3.0 | null | 2011-04-25T09:12:07.153 | 2011-04-25T18:03:19.457 | 2011-04-25T17:13:52.510 | 4313 | 4313 | [
"variance",
"matlab",
"image-processing"
] |
9949 | 2 | null | 9937 | 12 | null | just to add another possibility:
You can usually use `grep` and it's decedents (i.e., grepl, to do these kind of jobs in a more sophisiticated way using regular expressions.
On your example your could get the column index with:
`grep("^bar$", colnames(x))` or `grep("^bar$", names(x))`
The `^` and `$` are meta characters for the beginning and end of a string, respectively.
Check `?grep` and especially `?regex` for more info (e.g., you can match only partial names, or have the return value be the string itself or a logical vector, ...).
For me, `grep` is more R-ish.
Strongly related is the recent package by Hadley Wickham: [stringr](http://cran.r-project.org/web/packages/stringr/index.html), a package for "modern, consistent string processing" including grep-like functions. He recently published [a paper on it in the R Journal.](http://journal.r-project.org/archive/2010-2/RJournal_2010-2_Wickham.pdf)
[See also my answer on stackoverflow on an identical issue.](https://stackoverflow.com/questions/4427234/get-column-index-from-label-in-a-data-frame/4427459#4427459)
| null | CC BY-SA 3.0 | null | 2011-04-25T10:02:14.867 | 2011-04-25T10:02:14.867 | 2017-05-23T12:39:26.203 | -1 | 442 | null |
9950 | 2 | null | 9931 | 1 | null | If you form the Transfer Function Model y(t)=W(B)*X(t)+[THETA(B)/PHI(B)]*a(t), the operator [THETA(B)/PHI(B)] is the "smoothing component". For example, if PHI(B)=1.0 and THETA(B)=1-.5B, this would imply a set of weights of .5, .25, .125, ... . In this way you could optimize the "weighted moving linear regression" rather than assuming its form.
| null | CC BY-SA 3.0 | null | 2011-04-25T10:49:36.277 | 2011-04-25T10:49:36.277 | null | null | 3382 | null |
9951 | 1 | null | null | 17 | 34590 | I was wrestling with stationarity in my head for a while... Is this how you think about it? Any comments or further thoughts will be appreciated.
>
Stationary process is the one which
generates time-series values such that
distribution mean and variance is kept
constant. Strictly speaking, this is
known as weak form of stationarity or
covariance/mean stationarity.
Weak form of stationarity is when the
time-series has constant mean and
variance throughout the time.
Let's put it simply: practitioners say
that the stationary time-series is the
one with no trend - fluctuates around
the constant mean and has constant
variance.
Covariance between different lags is
constant, it doesn't depend on
absolute location in time-series. For
example, the covariance between t and
t-1 (first order lag) should always be
the same (for the period from
1960-1970 same as for the period from
1965-1975 or any other period).
In non-stationary processes there is
no long-run mean to which the series
reverts; so we say that non-stationary
time series do not mean revert. In
that case, the variance depends on
absolute position in time-series and
variance goes to infinity as time goes
on. Technically speaking,
auto-correlations do not decay with
time, but in small samples they do
disappear - although slowly.
In stationary processes, shocks are
temporary and dissipate (lose energy)
over time. After a while, they do not
contribute to the new time-series
values. For example, something which
happened a long time ago (long enough),
such as World War II, had an impact,
but if the time-series today is the
same as if World War II never
happened, we would say that shock lost
its energy or dissipated. Stationarity
is especially important as many
classical econometric theories are
derived under the assumptions of
stationarity.
A strong form of stationarity is when
the distribution of a time-series is
exactly the same through time. In other
words, the distribution of original
time-series is exactly the same as the lagged
time-series (by any number of lags) or
even sub-segments of the time-series.
For example, strong form also suggests
that the distribution should be the
same even for a sub-segments
1950-1960, 1960-1970 or even
overlapping periods such as 1950-1960
and 1950-1980. This form of
stationarity is called strong because
it doesn't assume any distribution. It
only says the probability distribution
should be the same. In the case of
weak stationarity, we defined
distribution by its mean and variance.
We could do this simplification
because implicitly we assumed normal
distribution, and normal distribution
is fully defined by its mean and
variance or standard deviation. This
is nothing but saying that probability
measure of the sequence (within
time-series) is the same as that for
lagged/shifted sequence of values
within same time-series.
| Intuitive explanation of stationarity | CC BY-SA 3.0 | null | 2011-04-25T12:18:37.257 | 2019-01-16T15:35:36.007 | 2019-01-16T15:35:36.007 | 11887 | 333 | [
"time-series",
"stationarity",
"intuition"
] |
9952 | 2 | null | 9947 | 0 | null | Sounds like you might want to consider using/writing an energy function. More here: [http://en.wikipedia.org/wiki/Optimization_%28mathematics%29#Multi-objective_optimization](http://en.wikipedia.org/wiki/Optimization_%28mathematics%29#Multi-objective_optimization)
I suppose if your number of k centroids is "small" you can run a distance function for all (c, k) pairings and select the set which minimizes total distance as the 'best' solution.
Hope that helps -
Perry
| null | CC BY-SA 3.0 | null | 2011-04-25T12:43:32.667 | 2011-04-25T12:43:32.667 | null | null | 4316 | null |
9953 | 2 | null | 9852 | 9 | null | As whuber stated this actually is a case of nested models, and hence one can apply a [likelihood-ratio test](http://en.wikipedia.org/wiki/Likelihood-ratio_test). Because it is still not exactly clear what models you are specifying I will just rewrite them in this example;
So model 1 can be:
$Y = a_1 + B_{11}(X) + B_{12}(W) + B_{13}(Z) + e_1$
And model 2 can be (I ignore the division by 2, but this action has no consequence for your question):
$Y = a_2 + B_{21}(X) + B_{22}(W+Z) + e_2$
Which can be rewritten as:
$Y = a_2 + B_{21}(X) + B_{22}(W) + B_{22}(Z)+ e_2$
And hence model 2 is a specific case of model 1 in which $B_{12}$ and $B_{13}$ are equal. One can use the likelihood-ratio test between these two models to assign a p-value to the fit of model 1 compared to model 2. There are good reasons in practice to do this, especially if the correlation between W and Z is quite large ([multicollinearity](https://stats.stackexchange.com/q/1149/1036)). As I stated previously, whether you divide by two does not matter for testing the fit of the models, although if it is easier to interpret $\frac{W+Z}{2}$ than $W+Z$, by all means use the average of the two variables.
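As a minimal numerical sketch of this likelihood-ratio test (written in Python with NumPy purely for illustration; the simulated data and coefficient values are made up), one can fit both models by least squares and form the statistic $n\log(\mathrm{RSS}_2/\mathrm{RSS}_1)$, which is asymptotically $\chi^2$ with 1 degree of freedom under the restriction $B_{12}=B_{13}$:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n = 200
X, W, Z = rng.normal(size=(3, n))
# simulate under the restriction B12 = B13 (= 0.3)
Y = 1.0 + 0.5 * X + 0.3 * W + 0.3 * Z + rng.normal(scale=0.5, size=n)

def rss(design, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    return float(resid @ resid)

ones = np.ones(n)
rss1 = rss(np.column_stack([ones, X, W, Z]), Y)    # model 1 (unrestricted)
rss2 = rss(np.column_stack([ones, X, W + Z]), Y)   # model 2 (B12 = B13)

lr = n * math.log(rss2 / rss1)          # LR statistic, asymptotically chi^2(1)
p_value = math.erfc(math.sqrt(lr / 2))  # chi^2(1) survival function
print(lr, p_value)
```

Because model 2's design columns lie in the span of model 1's, `rss2` can never be smaller than `rss1`, so the statistic is always nonnegative.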
Model fit statistics (such as Mallow's CP already mentioned by bill_080, and other examples are [AIC](http://en.wikipedia.org/wiki/Akaike_information_criterion) and [BIC](http://en.wikipedia.org/wiki/Bayesian_information_criterion)), are frequently used to assess non-nested models. Those statistics do not follow known distributions (unlike the likelihood ratio, which follows a [Chi-square](http://en.wikipedia.org/wiki/Chi-square_distribution) distribution), and hence the differences in those statistics between models cannot be given a p-value.
| null | CC BY-SA 3.0 | null | 2011-04-25T12:57:28.647 | 2011-04-25T12:57:28.647 | 2017-04-13T12:44:51.060 | -1 | 1036 | null |
9954 | 2 | null | 9948 | 4 | null | I prefer a slight change of notation due to the many $n$'s appearing in the original. Let $\alpha$ and $\beta$ designate the images. Let $i$ and $j$ each designate pairs of indexes into the image rows and columns. (Indexing goes from $1$ to $m$ for rows and $1$ to $n$ for columns.) Let $h$ designate a relative index pair (so that its two entries are integers, either of which can be negative), also known as an offset. Then, by definition, the value of the experimental cross-variogram of these images at an offset $h$ is
$$\gamma_{\alpha,\beta}(h)=\frac{1}{2n(h)}\sum_{i}\left(\alpha[i+h]-\alpha[i]\right)\left(\beta[i+h] - \beta[i]\right).$$
The sum ranges over all indexes $i$ for which both $i$ and $i+h$ are valid indexes into both images; $n(h)$ is the number of such indexes (easily computed in the same way by taking a similar sum of $1$'s).
By expanding the summand algebraically the calculation is reduced to the problem of obtaining
$$\sum_{i}\alpha[i+h]\beta[i]$$
for various $h$, both positive and negative, ranging from $(1-m,1-n)$ through $(m-1,n-1)$.
Let us say that the reversal of an image negates the indexes; that is, the value of the reversal of $\alpha$ at the pixel $h$ is the value of $\alpha$ at $(m+1,n+1)-h$.
Such a sum can be seen as the reversal of the convolution of the reversal of $\alpha$ with $\beta$. It is best computed using [discrete Fourier transforms](http://en.wikipedia.org/wiki/Discrete_Fourier_transform#Circular_convolution_theorem_and_cross-correlation_theorem) after first padding each image to the right and down with zeros. The padding must extend to the range of the largest $h$ for which $\gamma$ needs to be computed. Convolutions with Fourier transforms are obtained by taking the inverse Fourier transform (itself a scalar multiple of the FT) of the product of the FTs.
Direct computation of the variogram via its definition for a pair of $m$ by $n$ images requires up to $m n$ products and sums for each value of $h$. Typically $O(m n)$ values of $h$ are needed. The direct algorithm therefore has $O(m^2 n^2)$ computational cost, which is ridiculously large for moderate (megapixel) images. The discrete Fourier transform costs at most $O(2m 2n \log(2m 2n))$ (assuming the maximum range of offsets $h$) and has to be applied only a constant number of times (3). The reversals and paddings cost $O(2m 2n)$. Thus the total cost is still only $O(12 m n \log(4 m n))$, a huge improvement.
---
As a simple example, take $\alpha$ and $\beta$ to be the matrices
```
1 2
3 4
```
and
```
5 6
7 8
```
After padding with zeros to the right and down (by two columns and two rows) and reversing $\alpha$, multiplying these two 4 by 4 matrices componentwise, and taking the inverse Fourier transform, we get
```
8 0 14 23
0 0 0 0
18 0 20 39
30 0 38 70
```
Rotating this right by 2 columns and 2 rows and reversing gives
```
0 0 0 0
0 8 23 14
0 30 70 38
0 18 39 20
```
If you think of the new row and column indexes ranging $-2, -1, 0, 1$, this new matrix is exactly $\sum_{i}\alpha[i+h]\beta[i]$ (indexed by $h$). For example, the $h = (0,1)$ entry is 38 and indeed
$$\alpha[1,2]\beta[1,1] + \alpha[2,2]\beta[2,1] = 2 \times 5 + 4 \times 7 = 38.$$
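The worked example can be reproduced with zero-padded FFTs (sketched here in Python/NumPy for illustration, though the question concerns MATLAB, where `fft2`/`ifft2` behave analogously). This sketch uses the conjugate form of the correlation theorem rather than the explicit reversal described above; the two are equivalent:

```python
import numpy as np

def all_offset_sums(a, b):
    """Compute sum_i a[i+h] * b[i] for every offset h at once via padded FFTs."""
    m, n = a.shape
    s = (2 * m - 1, 2 * n - 1)  # pad enough to avoid circular wrap-around
    c = np.fft.ifft2(np.fft.fft2(a, s) * np.conj(np.fft.fft2(b, s)))
    # entry c[h] (negative offsets wrap around modulo s) is sum_i a[i+h]*b[i]
    return np.rint(np.real(c)).astype(int)

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
c = all_offset_sums(a, b)
print(c[0, 0])   # h = (0, 0):  1*5 + 2*6 + 3*7 + 4*8 = 70
print(c[0, 1])   # h = (0, 1):  2*5 + 4*7 = 38, matching the text
print(c[0, -1])  # h = (0, -1): 1*6 + 3*8 = 30
```

The printed values match the entries 70, 38, and 30 of the offset matrix computed above.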
| null | CC BY-SA 3.0 | null | 2011-04-25T14:50:39.467 | 2011-04-25T18:03:19.457 | 2011-04-25T18:03:19.457 | 919 | 919 | null |
9955 | 2 | null | 9931 | 6 | null | Sounds like what you want to do is a two-stage model. First transform your data into exponentially smoothed form using a specified smoothing factor, and then input the transformed data into your linear regression formula.
[http://www.jstor.org/pss/2627674](http://www.jstor.org/pss/2627674)
[http://en.wikipedia.org/wiki/Exponential_smoothing](http://en.wikipedia.org/wiki/Exponential_smoothing)
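A minimal sketch of this two-stage idea (in Python for illustration; the series, trend, and smoothing factor are all made up): transform the data by exponential smoothing, then run an ordinary linear regression on the smoothed values.

```python
import numpy as np

def exp_smooth(x, alpha):
    """Simple exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1}."""
    s = np.empty(len(x))
    s[0] = x[0]
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
    return s

t = np.arange(50, dtype=float)
y = 2.0 + 0.5 * t                       # a noiseless trend, to keep the check simple
smoothed = exp_smooth(y, alpha=0.3)
slope, intercept = np.polyfit(t, smoothed, 1)
print(slope)  # close to the true slope 0.5 (the smoother lags but keeps the trend)
```

With noisy data the second stage would be the regression of actual interest; here the noiseless trend just makes it easy to see that smoothing preserves the slope.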
| null | CC BY-SA 3.0 | null | 2011-04-25T15:18:03.097 | 2011-04-25T15:18:03.097 | null | null | 3489 | null |
9956 | 2 | null | 8696 | 6 | null | Try
```
library(foreach)  # provides %do%
computeFunction <- function(onWhat, what, ...) { foreach(i = onWhat) %do% what(i, ...) }
```
| null | CC BY-SA 3.0 | null | 2011-04-25T15:34:00.080 | 2011-04-25T15:34:00.080 | null | null | null | null |
9957 | 2 | null | 9931 | 4 | null | If you are looking for an equation of the form
$$y=\alpha_n + \beta_n x$$
after $n$ pieces of data have come in, and you are using an exponential factor $k \ge 1$ then you could use
$$\beta_n = \frac{\left(\sum_{i=1}^n k^i\right) \left(\sum_{i=1}^n k^i X_i Y_i\right) - \left(\sum_{i=1}^n k^i X_i\right) \left(\sum_{i=1}^n k^i Y_i\right) }{ \left(\sum_{i=1}^n k^i\right) \left(\sum_{i=1}^n k^i X_i^2\right) - \left(\sum_{i=1}^n k^i X_i \right)^2}$$
and
$$\alpha_n = \frac{\left(\sum_{i=1}^n k^i Y_i\right) - \beta_n \left(\sum_{i=1}^n k^i X_i\right)}{\sum_{i=1}^n k^i} .$$
If rounding or speed become issues, this can be recast in other forms. It may also be worth knowing that for $k>1$ you have $\sum_{i=1}^n k^i = \frac{k(k^n - 1)}{k-1}$.
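The formulas above transcribe directly into code; here is a sketch in Python/NumPy (the sample data are made up). Because weighted least squares reproduces an exact linear relationship for any choice of weights, data lying exactly on a line return the true coefficients regardless of $k$, which makes a handy check:

```python
import numpy as np

def exp_weighted_fit(x, y, k):
    """Exponentially weighted least-squares line y = alpha + beta * x.

    Observation i (1-based) gets weight k**i, so with k > 1 the most
    recent points dominate.  Direct transcription of the sums above.
    """
    i = np.arange(1, len(x) + 1)
    w = k ** i
    sw = w.sum()
    swx, swy = (w * x).sum(), (w * y).sum()
    swxy, swxx = (w * x * y).sum(), (w * x * x).sum()
    beta = (sw * swxy - swx * swy) / (sw * swxx - swx ** 2)
    alpha = (swy - beta * swx) / sw
    return alpha, beta

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 + 3.0 * x                       # data exactly on a line
alpha_n, beta_n = exp_weighted_fit(x, y, k=1.5)
print(alpha_n, beta_n)                  # recovers intercept 2 and slope 3
```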
| null | CC BY-SA 3.0 | null | 2011-04-25T16:20:15.547 | 2011-04-25T16:20:15.547 | null | null | 2958 | null |
9958 | 2 | null | 9880 | 4 | null | For models of speeded decision tasks, check out the diffusion model and the linear ballistic accumulator; Donkin et al (2011, [pdf](http://mypage.iu.edu/~cdonkin/pubs/pbr11.pdf)) provide a good overview of these models and their different behaviours. There is R code out there for both these models. You might also do a literature search using the keyword "[Decision Field Theory](http://en.wikipedia.org/wiki/Decision_field_theory)", which seems to be a specific instantiation of the principles of diffusion models for high-level decisions like consumer choices, etc (in contrast, the diffusion model proper and linear ballistic accumulator are more typically used for simpler perceptual discrimination tasks). Finally, possibly related are [Neural Field models](http://www.scholarpedia.org/article/Neural_fields).
For models of semantics, check out [BEAGLE](http://www.indiana.edu/~clcl/BEAGLE/). For a related model of memory encoding/retrieval, check out Mewhort & Johns (2005, [pdf](http://www.queensu.ca/psychology/hiplab/Publications/Recentpublications/Mewhort_RP_14.pdf))'s Iterative Resonance Model.
| null | CC BY-SA 3.0 | null | 2011-04-25T16:46:01.893 | 2011-04-25T16:55:53.633 | 2011-04-25T16:55:53.633 | 364 | 364 | null |
9959 | 1 | null | null | 8 | 3718 | In [An Empirical Comparison of Supervised Learning Algorithms](http://www.cs.cornell.edu/~caruana/ctp/ct.papers/caruana.icml06.pdf) (ICML 2006) the authors (Rich Caruana and Alexandru Niculescu-Mizil) evaluated several classification algorithms (SVMs, ANN, KNN, Random Forests, Decision Trees, etc.), and reported that calibrated boosted trees ranked as the best learning algorithm overall across eight different metrics (F-score, ROC Area, average precision, cross-entropy, etc.).
I would like to test calibrated boosted decision trees in one of my projects, and was wondering if anybody could suggest a good R package or MATLAB library for this.
I am relatively new to R, although I have large experience with MATLAB and Python. I have read about R's gbm, tree, and rpart but I am not sure if these packages implement calibrated boosted decision trees or if there are others that implement them.
Thanks
| Calibrated boosted decision trees in R or MATLAB | CC BY-SA 3.0 | null | 2011-04-25T16:46:53.890 | 2011-04-26T13:01:49.497 | 2011-04-26T13:01:49.497 | 2798 | 2798 | [
"r",
"classification",
"matlab"
] |
9960 | 2 | null | 9959 | 3 | null | About R, I would vote for the [gbm](http://cran.r-project.org/web/packages/gbm/index.html) package; there's a vignette that provides a good overview: [Generalized Boosted Models: A guide to the gbm package](http://cran.r-project.org/web/packages/gbm/vignettes/gbm.pdf). If you are looking for an unified interface to ML algorithms, I recommend the [caret](http://caret.r-forge.r-project.org/Classification_and_Regression_Training.html) package which has built-in facilities for data preprocessing, resampling, and comparative assessment of model performance. Other packages for boosted trees are reported under Table 1 of one of its accompanying vignettes, [Model tuning, prediction and performance functions](http://cran.r-project.org/web/packages/caret/vignettes/caretTrain.pdf). There is also an example of parameters tuning for boosted trees in the [JSS paper](http://www.jstatsoft.org/v28/i05/paper), pp. 10-11.
Note: I didn't check, but you can also look into [Weka](http://www.cs.waikato.ac.nz/ml/weka/) (there's an R interface, [RWeka](http://cran.r-project.org/web/packages/RWeka/index.html)).
| null | CC BY-SA 3.0 | null | 2011-04-25T17:06:02.400 | 2011-04-25T19:01:26.750 | 2011-04-25T19:01:26.750 | 930 | 930 | null |
9961 | 1 | 9973 | null | 8 | 3312 | I am new to evolutionary algorithms. I have studied the Covariance Matrix Adaptation Evolution Strategy. I am not good at statistics, so could you please explain to me in simple language (I mean, without too many equations):
- What is CMA-ES?
- How does it work?
- Why is it superior to other strategies?
| What is covariance matrix adaptation evolution strategy? | CC BY-SA 3.0 | null | 2011-04-25T17:17:12.213 | 2017-04-02T11:58:46.580 | null | null | 4319 | [
"covariance-matrix"
] |
9962 | 1 | 9974 | null | 3 | 7709 | I have a multi-class dataset like the following (a,b,c,d are features and e is the class (it can be 0,1 and 2)).
```
a b c d e
1 1 1 2 2 1
2 1 2 4 2 0
3 1 2 4 2 0
4 2 2 2 2 0
5 2 1 2 2 2
```
I am trying to use mlogit package in order to see which column is more important but I am having a difficulty to understand how to use it.
What I am doing is:
```
dataset$e<-as.factor(dataset$e)
mldata<-mlogit.data(dataset, choice="e")
> mldata[1:5,]
a b c d e chid alt
1.0 1 1 2 2 FALSE 1 0
1.1 1 1 2 2 TRUE 1 1
1.2 1 1 2 2 FALSE 1 2
2.0 1 2 4 2 TRUE 2 0
2.1 1 2 4 2 FALSE 2 1
```
Now, in order to see the coefficients, I am constructing the model like that:
>
mlogit.model<- mlogit(e~1|a+b+c+d, data = mldata, reflevel="1")
mlogit.model
```
Call:
mlogit(formula = e ~ 1 | a + b + c + d, data = mldata, reflevel = "1", method = "nr", print.level = 0)
Coefficients:
alt0 alt2 alt0:a alt2:a alt0:b alt2:b alt0:c
-211.0953 -89.7558 27.9911 33.1440 37.7503 7.3585 8.5072
alt2:c alt0:d alt2:d
3.0950 37.1340 5.5584
```
But now I don't understand what are alt0, alt2? What are the real coefficients of a,b,c and d?
| Multiclass logistic regression with mlogit in R | CC BY-SA 3.0 | null | 2011-04-25T17:52:52.057 | 2011-04-26T00:15:37.927 | 2011-04-25T19:21:31.530 | null | 4320 | [
"r",
"logistic"
] |
9963 | 2 | null | 9947 | 4 | null | Because K-means minimizes variances, a good criterion is to minimize the sum of squared distances between the pairs of points.
This is an [integral (0/1) linear program](http://en.wikipedia.org/wiki/Linear_programming#Integer_unknowns). Specifically, the pairing can be specified by a matrix $\Lambda = (\lambda_{ij})$ where $\lambda_{ij} = 1$ if $c_i$ is paired with $k_j$ and $\lambda_{ij}=0$ otherwise. We seek to minimize
$$\sum_{i,j}\lambda_{ij}|c_i - k_j|^2$$
subject to the constraints (which enforce the one-to-one pairing)
$$\sum_{j}\lambda_{ij}=1$$
$$\sum_{i}\lambda_{ij}=1$$
$$\lambda_{ij} \in\{0,1\}.$$
Provided the centroids do not number more than a few hundred, this is quickly solved. (The matrices involved in setting up the problem will quickly exhaust RAM with more than a few hundred centroids, because they scale as $O(n^3)$, and then you might have to be a little fussier with the programming.) For instance, Mathematica 8's `LinearProgramming` function takes no measurable time with fewer than $n=20$ centroids, escalating to about 5 seconds with 400 centroids.

By means of line segments to show the pairings, this figure depicts an optimal solution with $n=20$ bivariate normal centroids $c_i$ and independent bivariate normal K-means solutions $k_i$.
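For a handful of centroids the optimum can also be found by brute force over permutations, which avoids setting up the LP at all. A sketch in Python/NumPy (the centroid coordinates are made up); for larger problems a dedicated assignment solver such as the Hungarian algorithm, or the LP above, is the way to go:

```python
import itertools
import numpy as np

def best_pairing(c, k):
    """Brute-force the one-to-one pairing minimizing total squared distance.

    Fine for a handful of centroids; the search is over all n! permutations.
    """
    n = len(c)
    cost = ((c[:, None, :] - k[None, :, :]) ** 2).sum(axis=2)  # cost[i, j] = |c_i - k_j|^2
    best = min(itertools.permutations(range(n)),
               key=lambda perm: sum(cost[i, perm[i]] for i in range(n)))
    return list(best)

c = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
k = np.array([[9.5, 0.1], [0.2, 9.8], [0.1, -0.3]])  # shuffled, perturbed copies of c
print(best_pairing(c, k))  # pairs c_1-k_3, c_2-k_1, c_3-k_2 (0-indexed: [2, 0, 1])
```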
| null | CC BY-SA 3.0 | null | 2011-04-25T17:53:54.317 | 2011-04-25T17:59:55.590 | 2011-04-25T17:59:55.590 | 919 | 919 | null |
9964 | 2 | null | 9911 | 0 | null | I think you may want to reshape your data, not reduce it. This will let you change the structure of your data set so that you can use all of your observations. You don't mention which statistical package you're using, but R, stata, and MATLAB all have a nice out-of-the-box reshape command you can use.
Side thought: you may need to adjust for clustered errors in the reshaped data, since it doesn't sound like your observations are completely independent.
| null | CC BY-SA 3.0 | null | 2011-04-25T18:15:06.373 | 2011-04-25T18:15:06.373 | null | null | 4110 | null |
9965 | 1 | null | null | 1 | 108 | This is probably a simple question.
I'm studying events which have N outcomes, of which exactly one is correct. N is very large, more than a billion (and is known). There are many possible events, some of which are tested multiple times.
I would like to test the following model: a given event is tested correctly with probability 1 - p. Otherwise, the test result is chosen uniformly at random from the N outcomes (hence, almost certainly incorrect). I want to (1) see if this model is plausible and (2) if so, find p. p should be small, just a few percent.
The data I have omits events where two or more tests have the same value. (Because N is large, if two tests have the same value that is likely to be the correct result.) So what I have is, for each k, the number of different events which have had k different tests, no two of which match. Most of the events have had k = 1 tests but some have 2, 3, or even more tests. In most cases one result is correct and the rest are incorrect; in some cases all are incorrect.
How can I test my hypothesis?
| Testing a model with truncated data | CC BY-SA 3.0 | null | 2011-04-25T18:52:46.150 | 2011-04-25T18:52:46.150 | null | null | 1378 | [
"hypothesis-testing",
"binomial-distribution",
"censoring"
] |
9966 | 1 | null | null | 1 | 452 | I have a dataset with between 10,000 and 100,000 feature values. The number of datapoints is between 1,000 and 10,000. I want to perform a LASSO on this dataset but can't really find any good software to do so. Does anyone have any suggestions?
| Software for LASSO for high dimensional dataset | CC BY-SA 3.0 | null | 2011-04-25T20:01:37.703 | 2011-04-26T06:40:19.617 | null | null | 4322 | [
"software",
"lasso"
] |
9967 | 2 | null | 9961 | 3 | null | It's an optimization algorithm: it tries to find the minimum of a function. It is said to be amongst the best optimization algorithms for non-convex problems in high dimensions (above 5 or 10 parameters to optimize).
The term Covariance in the name is a bit misleading to the statistics community: no statistics are involved in the algorithm. As a result, this site might not be the best place to ask for help on it.
I can't give much more information on it, but the [wikipedia](http://en.wikipedia.org/wiki/CMA-ES) page is quite clear.
| null | CC BY-SA 3.0 | null | 2011-04-25T20:08:08.723 | 2011-04-25T20:08:08.723 | null | null | 1265 | null |
9969 | 2 | null | 9940 | 1 | null | In the asymptotic sense seemingly suggested by the phasing of the question, it's not true, but the analysis might be revealing.
We don't even need $Z$.
Let $p$ be the chance of a standard normal variable being $c$ or less; that is, $p = \Phi(c)$. Then the chance that at least $k$ or more of the $X_i$ are less than or equal to $c$ is given by a Binomial distribution
$$\sum_{i=0}^{n-k} \binom{n}{i} p^{n-i}(1-p)^i\text{.}$$
Because this sum runs from $p^n \lt p$ to $1 \gt p$, there exists a $k$ between $1$ and $n-1$ where the sum is as large as possible but still less than $p$.
For future reference, note that as $n$ grows large, $k$ is approximately equal to $p n$. This is a consequence of the Central Limit Theorem (for Binomial variates), because the sum is approximately equal to $\Phi((n-k - (1-p) n) / \sqrt{n p (1-p)})$. If eventually $k$ were less than $p n$, say $k \lt (p - \epsilon)n$ for $\epsilon \gt 0$, then the sum would approximate $\Phi(\epsilon \sqrt{n} / \sqrt{p(1-p)})$, which approaches $1$ as $n$ increases, but the sum is constructed to stay below $p$. Similarly, if $k \gt (p + \epsilon)n$, the sum would go to zero, again contradicting the construction of $k$ (to be as large as possible).
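The claim that $k \approx pn$ is easy to verify numerically. A small Python check (the values of $n$ and $p$ are chosen arbitrarily) locates the smallest $k$ whose tail probability drops below $p$, i.e. the $k$ that makes the sum as large as possible while staying under $p$:

```python
import math

def tail_ge(n, k, p):
    """P(Binomial(n, p) >= k): the sum from the construction above,
    with i counting the X_i that exceed c."""
    return sum(math.comb(n, i) * p ** (n - i) * (1 - p) ** i
               for i in range(n - k + 1))

p = 0.8   # p = Phi(c) for some threshold c; value chosen arbitrarily
n = 200
# smallest k whose tail probability falls below p
k = next(k for k in range(1, n) if tail_ge(n, k, p) < p)
print(k, k / n)  # k/n sits close to p, as the CLT argument predicts
```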
Define $q$ (which depends implicitly on $c$ and $n$) to be the value of the sum for such a $k$. Let $b = \Phi^{-1}(q) \le c$.
Conditional on at least $k$ of the $X_i$ not exceeding $c$, let $Y$ have a truncated standard normal distribution ranging from $-\infty$ to $b$. This happens with probability $q$. Otherwise, let $Y$ have a truncated standard normal distribution ranging from $b$ to $+\infty$. This gives $Y$ a standard normal distribution but it depends on the $X_i$.
If $Y \le c$, the chance that $Y \le b$ equals $q/p$. With sufficiently large $n$, an easy estimate shows this value is close to $1$. Given that $Y \le b$, we know at least $k$ of the $X_i$ are below $c$, by construction of $Y$. Therefore the expected number of such $X_i$ is at least $p n$ (asymptotically in $n$). This quantity grows without bound, it is not limited by a constant (independent of $n$).
You can work this analysis in reverse: if the expected number of $X_i$ below the threshold $c$, conditional on $Y \le c$, is much larger than $p n$, then the probability that $Y \le c$ would have to be greater than $\Phi(c)$, implying $Y$ does not have a standard Normal distribution. In this sense the preceding construction is a worst case: asymptotically, it achieves the largest possible expected number of $X_i$ below $c$ consistent with the assumptions on $X_i$ and $Y$.
| null | CC BY-SA 3.0 | null | 2011-04-25T20:32:48.797 | 2011-04-25T20:32:48.797 | null | null | 919 | null |
9971 | 1 | null | null | 4 | 1298 |
### Context:
I am trying to analyze an experiment on plant community response to two treatments. Here’s a simplified description of the experiment, there are a few extra complications in reality.
Treatments were applied to small patches of ground arranged in blocks with a mix of naturally occurring plant species present in each patch. I have measured biomass of the plants in each patch in years one and two as the response variables. Analyzing total biomass seems simple enough with a repeated measures GLM or linear mixed model and block a random effect.
However I am also interested in breaking down total biomass into four response groups of interest: perennial grasses, perennial forbs, annual grasses and annual forbs (forbs are plants that aren't grasses).
### Question:
- Because these groups sum up to total biomass, can I go ahead and analyze these responses with four separate repeated measures ANOVAs, or will I be guilty of re-analyzing the same experiment?
It’s been suggested to me that I do a MANOVA followed by “protected ANOVAs” but this doesn’t make sense to me because I am not interested in the multivariate response; rather I am interested in testing whether these specific variables have been affected by the treatments. I also don’t know how to do a repeated measures MANOVA correctly and it seems unduly complicated. Any general advice would be appreciated; I’ll try and design a more easily analyzable experiment next time.
| Is MANOVA the correct way to handle multiple response variables that are additive? | CC BY-SA 3.0 | null | 2011-04-25T21:38:23.780 | 2011-04-26T05:13:34.833 | 2011-04-26T03:21:51.807 | 183 | 4326 | [
"multivariate-analysis",
"repeated-measures",
"manova"
] |
9972 | 1 | null | null | 1 | 2442 | I'm compiling a survey that will have several questions which would lend to the creation of an index of a main dependent variable (level of engagement in sucession planning). The questions will involve topics like:
- PROCESS: linking strategic planning to succession planning, identifying critical positions that need to be filled, identifying competencies, identifying high-potential employees, coaching and mentoring, providing leadership development;
- ROLE IDENTIFICATION: identifying roles (several options); and
- RESULTS: zero people ready for each position, 1 or 2 people ready for each position, or a pool of people ready for each position.
I'm not sure how to start. Any suggestions?
| Getting started with creating an index based on multiple survey items | CC BY-SA 3.0 | null | 2011-04-25T21:54:51.503 | 2011-04-26T14:11:36.723 | 2011-04-26T05:18:32.050 | 183 | 4327 | [
"survey",
"scales"
] |
9973 | 2 | null | 9961 | 8 | null | Per the [wikipedia page](http://en.wikipedia.org/wiki/CMA-ES) linked above to answer (1) this is another form of gradient descent (which if you need more information with lots of pictures there are many articles available if you google it -- sorry apparently new posters only get 2 urls so I'm having to have to tell you to search instead of pointing you to specific ones) so it is used for optimization of an objective function. Usually this means I want to find the maximum or minimum value for a function I'm interested in.
Not to split hairs with the other poster, but how statistical this algorithm is is a question of semantics. It is proposed to converge (for our purposes, that is to say this method works) based on a maximum-likelihood argument, and covariance (variance in higher dimensions -- the spread of the sample in the space) is used as part of the update rule to determine how best to proceed in the next iteration. Also, new points are sampled from a multivariate normal distribution, which is a probability model. Similarly, as the wikipedia article points out, this method is similar to, but not identical to, principal component analysis. So from my perspective it seems pretty statistical, but again that is perhaps subjective.
I suppose the second wiki link and parts of the above address (2), but perhaps not adequately, so if you need more information please feel free to follow up. The short of it is that if there is an optimum, this procedure searches for it in the problem space and will eventually find it under the model assumptions. You need to test those assumptions, because deviation from them will likely have major consequences for how well this method works on your problem.
I do however agree with the other poster that you may find more if not better answers on a site like [Meta Optimize](http://metaoptimize.com/qa/) which is a community like this one specific to machine learning, but many Statisticians do work on learning problems so I would expect others will weigh in as well.
As for (3), it is not necessarily a better method. It is a method for a different set of assumptions than other methods (such as a feedforward neural network with gradient descent, for example); specifically, this method is for non-linear or non-convex problems. Again, you need to test whether the problem you are working on is appropriate for this method before proceeding to use it. Otherwise you may get a less than optimal solution or, worse yet, garbage in, garbage out. If your problem is non-linear or non-convex then go for it; otherwise you may need to look into a different method. The "Performance in Practice" section of the wikipedia article may help you determine whether this is an appropriate method for the problem you are working on.
You alluded to some mathphobia when not wanting to see too many expressions. So you may not know some properties of the data/problem you're working with. Are you willing to share some more specific details or are you just asking in general?
Hopefully this helps. Please let me know if there are any follow ups. Good luck.
| null | CC BY-SA 3.0 | null | 2011-04-25T23:22:45.657 | 2011-04-25T23:22:45.657 | null | null | 4325 | null |
9974 | 2 | null | 9962 | 3 | null | Multinomial logit assumes that you have a categorical dependent variable. In your case, there are three categories, denoted 0, 1, and 2. You've set 1 as the reference category, which means that mlogit is going to use 1 as the baseline category -- everything else is compared to 1.
The thing to keep in mind is that in the mlogit framework, each additional category is compared against the reference group. Each has its own log-linear model, with k+1 parameters, where k is the number of predictors in your model, and the 1 accounts for the intercept term. Since you have 2 additional categories and 4 predictors, you're going to be estimating 2*(4+1) = 10 parameters.
So, alt0 is the intercept term for the 0-against-1 comparison, and alt0:a, alt0:b, alt0:c, and alt0:d describe the marginal change in likelihood between categories 0 and 1. The alt2 parameters describe the category 2-against-1 comparison model.
See [Train's excellent and (last I checked) free pdf book](http://elsa.berkeley.edu/books/train1201.pdf) on discrete choice models for more background.
| null | CC BY-SA 3.0 | null | 2011-04-26T00:15:37.927 | 2011-04-26T00:15:37.927 | null | null | 4110 | null |
9975 | 2 | null | 9926 | 24 | null | Letting $f$ denote a probability density function (with respect to either Lebesgue or counting measure), the quantity $\newcommand{\rd}{\mathrm{d}}$
$$
H_\alpha(f) = -\frac{1}{\alpha-1} \log(\textstyle\int f^\alpha \rd \mu)
$$
is known as the [Renyi entropy](http://en.wikipedia.org/wiki/R%C3%A9nyi_entropy) of order $\alpha \geq 0$. It is a generalization of Shannon entropy that retains many of the same properties. For the case $\alpha = 1$, we interpret $H_1(f)$ as $\lim_{\alpha \to 1} H_{\alpha}(f)$, and this corresponds to the standard Shannon entropy $H(f)$.
Renyi introduced this in his paper
>
A. Renyi, On measures of information and entropy, Proc. 4th Berkeley Symp. on Math., Stat. and Prob. (1960), pp. 547–561.
which is well worth reading, not only for the ideas but for the exemplary exposition style.
The case $\alpha = 2$ is one of the more common choices for $\alpha$ and this special case is (also) often referred to as the Renyi entropy. Here we see that
$$\newcommand{\e}{\mathbb{E}}
H_2(f) = - \log( \textstyle\int f^2 \rd \mu ) = -\log( \e f(X) )
$$
for a random variable distributed with density $f$.
Note that $- \log(x)$ is a convex function and so, by Jensen's inequality we have
$$
H_2(f) = -\log( \e f(X) ) \leq \e( -\log f(X) ) = - \e \log f(X) = H(f)
$$
where the right-hand side denotes the Shannon entropy. Hence the Renyi entropy provides a lower bound for the Shannon entropy and, in many cases, is easier to calculate.
Another natural instance in which the Renyi entropy arises is when considering a discrete random variable $X$ and an independent copy $X^\star$. In some scenarios we want to know the probability that $X = X^\star$, which by an elementary calculation is
$$\renewcommand{\Pr}{\mathbb{P}}
\Pr(X = X^\star) = \sum_{i=1}^\infty \Pr(X = x_i, X^\star = x_i) = \sum_{i=1}^\infty \Pr(X = x_i) \Pr(X^\star = x_i) = e^{-H_2(f)} .
$$
Here $f$ denotes the density with respect to counting measure on the set of values $\Omega = \{x_i: i \in \mathbb{N}\}$.
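As a quick numerical check of the collision identity $\Pr(X = X^\star) = e^{-H_2(f)}$ and the Jensen bound $H_2(f) \leq H(f)$ (a Python sketch; the discrete distribution below is an arbitrary made-up example):

```python
import numpy as np

# An arbitrary discrete distribution f on a finite set of values
p = np.array([0.5, 0.25, 0.125, 0.125])

# Renyi entropy of order 2: H_2(f) = -log(sum_i p_i^2)
H2 = -np.log(np.sum(p ** 2))

# Shannon entropy: H(f) = -sum_i p_i log p_i
H = -np.sum(p * np.log(p))

# Collision probability of two independent copies: P(X = X*) = sum_i p_i^2
collision = np.sum(p ** 2)

print(H2, H, collision)  # H_2 <= H, and collision equals exp(-H_2)
```

Both properties hold exactly here: the collision probability equals $e^{-H_2}$, and the Renyi entropy sits below the Shannon entropy.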
The (general) Renyi entropy is also apparently related to free energy of a system in thermal equilibrium, though I'm not personally up on that. A (very) recent paper on the subject is
>
J. C. Baez, Renyi entropy and free energy, arXiv [quant-ph] 1101.2098 (Feb. 2011).
| null | CC BY-SA 3.0 | null | 2011-04-26T01:36:18.780 | 2011-04-26T01:36:18.780 | null | null | 2970 | null |
9976 | 2 | null | 2914 | 5 | null | I would suggest that this is a problem with how the results are reported. Not to "beat the Bayesian drum" but approaching model uncertainty from a Bayesian perspective as an inference problem would greatly help here. And it doesn't have to be a big change either. If the report simply contained the probability that the model is true this would be very helpful. This is an easy quantity to approximate using BIC. Call the BIC for the mth model $BIC_{m}$. Then the probability that mth model is the "true" model, given that $M$ models were fit (and that one of the models is true) is given by:
$$P(\text{model m is true}|\text{one of the M models is true})\approx\frac{w_{m}\exp\left(-\frac{1}{2}BIC_{m}\right)}{\sum_{j=1}^{M}w_{j}\exp\left(-\frac{1}{2}BIC_{j}\right)}$$
$$=\frac{1}{1+\sum_{j\neq m}^{M}\frac{w_{j}}{w_{m}}\exp\left(-\frac{1}{2}(BIC_{j}-BIC_{m})\right)}$$
Where $w_{j}$ is proportional to the prior probability for the jth model. Note that this includes a "penalty" for trying too many models - and the penalty depends on how well the other models fit the data. Usually you will set $w_{j}=1$; however, you may have some "theoretical" models within your class that you would expect to be better prior to seeing any data.
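A minimal sketch of this calculation (Python/numpy; the BIC values and equal prior weights are made up for illustration — subtracting the smallest BIC before exponentiating avoids numerical underflow and does not change the normalised result):

```python
import numpy as np

def model_probs(bic, w=None):
    """Approximate posterior model probabilities from BIC values.

    Each model gets weight w_j * exp(-BIC_j / 2), normalised to sum to 1.
    """
    bic = np.asarray(bic, dtype=float)
    w = np.ones_like(bic) if w is None else np.asarray(w, dtype=float)
    rel = np.exp(-0.5 * (bic - bic.min()))  # shift by min(BIC) for stability
    probs = w * rel
    return probs / probs.sum()

# Three candidate models with made-up BICs; equal prior weights
print(model_probs([102.3, 100.0, 110.7]))
```

The model with the smallest BIC gets the largest probability, but a close competitor (here, within a few BIC units) still retains non-trivial probability mass.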
Now if somebody else doesn't report all the BIC's from all the models, then I would attempt to infer the above quantity from what you have been given. Suppose you are given the BIC from the model - note that BIC is calculable from the mean square error of the regression model, so you can always get BIC for the reported model. Now if we take the basic premise that the final model was chosen from the smallest BIC then we have $BIC_{final}<BIC_{j}$. Now, suppose you were told that "forward" or "forward stepwise" model selection was used, starting from the intercept using $p$ potential variables. If the final model is of dimension $d$, then the procedure must have tried at least
$$M\geq 1+p+(p-1)+\dots+(p-d+1)=1+\frac{p(p-1)-(p-d)(p-d-1)}{2}$$
different models (exact for forward selection). If backward selection was used, then we know that at least
$$M\geq 1+p+(p-1)+\dots+(d+1)=1+\frac{p(p-1)-d(d-1)}{2}$$
models were tried (the +1 comes from the null model or the full model). Now we could try and be more specific, but these are "minimal" parameters which a standard model selection must satisfy. We could specify a probability model for the number of models tried $M$ and the sizes of the $BIC_{j}$ - but simply plugging in some values may be useful here anyway. For example, suppose that all the BICs were $\lambda$ bigger than that of the chosen model, so that $BIC_{m}=BIC_{j}-\lambda$; then the probability becomes:
$$\frac{1}{1+(M-1)\exp\left(-\frac{\lambda}{2}\right)}$$
So what this means is that unless $\lambda$ is large or $M$ is small, the probability will be small also. From an "over-fitting" perspective, this would occur when the BIC for the bigger model is not much bigger than the BIC for the smaller model - a non-negligible term appears in the denominator. Plugging in the backward selection formula for $M$ we get:
$$\frac{1}{1+\frac{p(p-1)-d(d-1)}{2}\exp\left(-\frac{\lambda}{2}\right)}$$
Now suppose we invert the problem: say $p=50$ and backward selection gave $d=20$ variables; what would $\lambda$ have to be to make the probability of the model greater than some value $P_{0}$? We have
$$\lambda > -2 \log\left(\frac{2(1-P_{0})}{P_{0}[p(p-1)-d(d-1)]}\right)$$
Setting $P_{0}=0.9$ we get $\lambda > 18.28$ - so the BIC of the winning model has to win by a lot for the model to be certain.
| null | CC BY-SA 3.0 | null | 2011-04-26T01:38:14.393 | 2011-04-26T01:38:14.393 | null | null | 2392 | null |
9977 | 1 | null | null | 6 | 417 | I'm trying to estimate the design effect of a series of relatively small sample-size surveys ($n\sim 70$) with multiple responses. Design effects roughly correspond to how much larger the actual sampling variance is than would be expected from naive random sampling. The simplest way to parametrize this is for the Effective Sample Size to be equal to SampleSize/D, where D is an unknown parameter.
Throwing it naively into BUGS is impossible due to the inability to handle multinomial models with unknown sample size. I tried to do a multinormal approximation with a structured covariance matrix, but I can't figure out/find a closed form for the precision matrix of a multinomial variable.
Does anyone have any ideas about the best way to proceed? This seems like it should be a common problem.
| Modeling multinomial problems with unknown sample size in BUGS | CC BY-SA 3.0 | null | 2011-04-26T03:05:11.570 | 2011-12-30T16:35:45.870 | 2011-04-26T12:00:01.143 | 3911 | 996 | [
"bayesian",
"sampling",
"markov-chain-montecarlo",
"sample-size",
"bugs"
] |
9978 | 2 | null | 9947 | 4 | null | The problem you're trying to solve is a [min-cost matching problem](http://en.wikipedia.org/wiki/Hungarian_algorithm), specifically the problem of minimizing the functional
$F(\pi) = \sum_i \|c_i - k_{\pi(i)}\|^2 $
where $\pi$ ranges over all permutations in $S_n$.
This can be solved by the Hungarian algorithm (which is a primal-dual method in disguise) and takes $O(n^3)$ time.
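For what it's worth, one rarely needs to implement the Hungarian algorithm by hand; SciPy exposes min-cost matching as `scipy.optimize.linear_sum_assignment` (a sketch with made-up 1-D points standing in for the $c_i$ and $k_j$):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Made-up 1-D point sets c and k to be matched one-to-one
c = np.array([0.0, 1.0, 4.0])
k = np.array([3.9, 0.2, 1.1])

# Cost matrix: squared distance between every pair (c_i, k_j)
cost = (c[:, None] - k[None, :]) ** 2

# Hungarian algorithm: minimise sum_i ||c_i - k_pi(i)||^2 over permutations pi
row, col = linear_sum_assignment(cost)
print(col, cost[row, col].sum())
```

The returned `col` is the optimal permutation $\pi$, pairing each $c_i$ with its $k_{\pi(i)}$.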
| null | CC BY-SA 3.0 | null | 2011-04-26T04:29:33.870 | 2011-04-26T04:29:33.870 | null | null | 139 | null |
9979 | 2 | null | 9971 | 2 | null | I can see the merits in running four separate repeated measures ANOVAs. If your theoretical question concerns the four individual variables, then running the ANOVAs separately is more aligned with your theoretical question.
I guess the main issue is the parsimony of your approach and controlling your Type I error rate. Here are a few possibilities that you could explore:
- You could apply a Bonferroni correction to your $\alpha$ level (e.g., $\alpha=.05 /4=.0125$) so that your family-wise $\alpha$ is kept at an acceptable level.
- You could distinguish analyses into confirmatory and more exploratory analyses. For example, you could frame the effect of time on overall biomass as confirmatory and interpret the effect of time on individual aspects of biomass as exploratory analyses.
- You might even include type of biomass as another independent variable in your ANOVA. Then you could see whether there was a time by "type of biomass" interaction. This might be a more parsimonious approach because it, in some senses, starts with the assumption that any effect of time on type of biomass is constant across types of biomass.
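To see why the first option keeps the family-wise rate near $\alpha = .05$: with $m$ independent tests each run at level $\alpha/m$, the chance of at least one false positive is $1-(1-\alpha/m)^m$ (a quick numerical check, which assumes the four ANOVAs are independent — with correlated outcomes Bonferroni is conservative):

```python
# Family-wise error rate for m independent tests at per-test level alpha/m:
# P(at least one false positive) = 1 - (1 - alpha/m)^m
m = 4
alpha_per_test = 0.05 / m          # Bonferroni-adjusted level, .0125
fwer = 1 - (1 - alpha_per_test) ** m
print(round(fwer, 4))              # just under the nominal .05
```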
| null | CC BY-SA 3.0 | null | 2011-04-26T05:13:34.833 | 2011-04-26T05:13:34.833 | null | null | 183 | null |
9980 | 2 | null | 9966 | 2 | null | Check the article by Wu, Chen, Hastie, Sobel and Lange - Genome-wide association analysis by lasso penalized logistic regression - 2009. They mention a 'swindle' that is not hard to implement, and then you can simply work with glmnet (there is a new version out recently which promises a performance improvement, but I haven't had a chance to check it out).
| null | CC BY-SA 3.0 | null | 2011-04-26T06:40:19.617 | 2011-04-26T06:40:19.617 | null | null | 4257 | null |
9981 | 1 | 9986 | null | 5 | 2538 | In a technique that uses CUSUM for change-point detection in this [paper](http://www.cs.utexas.edu/~mahimkar/MERCURY_sigcomm10.pdf), the first step is given below:
>
Let $x_1, x_2,..., x_n$ be the $n$
samples in an event-series. The
samples are ranked in increasing order
and the rank $r_i$ for each sample is
calculated. In case of ties, we assign
average rank to each sample. The
cumulative sums are computed as: $S_i = S_{i-1} + (r_i - \bar{r})$.
If the ranks are randomly distributed, then
there is no change-point. However, if
there is indeed a change-point in the
event-series, then higher ranks should
dominate in either the earlier or
later part of the event-series.
I am not quite sure I understand the meaning of rank in this context. For instance, if n=10 and my data points are: 5, 2, 4, 1, 9, 2, 9, 2, 10, 1 can someone please clarify what is really being done here?
- Sort in increasing order: 1, 1, 2, 2, 2, 4, 5, 9, 9, 10
- Assign Average Ranks to break ties: What does this step mean?
- Assign Final Ranks: What does this step mean?
| What is the meaning of rank in the context of change-detection? | CC BY-SA 3.0 | null | 2011-04-26T06:54:27.007 | 2011-04-26T17:52:45.217 | 2011-04-26T17:52:45.217 | 1390 | 2164 | [
"statistical-significance",
"nonparametric",
"change-point"
] |
9982 | 2 | null | 9739 | 0 | null | A nicely documented python library for spatial analysis that has some clustering is [pySAL](http://pysal.org/1.1/library/region/index.html).
Another python library in the development stage that is focused on spatial clustering is [clusterPy](http://www.rise-group.org/risem/clusterpy/clusterPy-pysrc.html) [(pdf slide presentation)](http://dl.dropbox.com/u/408103/ClusterPy-Slides.pdf).
With a more limited choice of clustering algorithms but with nice mapping interface is the GUI software [GeoGrouper](http://geogrouper.appspot.com/).
| null | CC BY-SA 3.0 | null | 2011-04-26T07:13:33.340 | 2011-04-26T07:13:33.340 | null | null | 4329 | null |
9983 | 2 | null | 277 | 23 | null | Non-spatial model
My House Value is a function of my home Gardening Investment.
SAR model
My House Value is a function of the House Values of my neighbours.
CAR model
My House Value is a function of the Gardening Investment of my neighbours.
| null | CC BY-SA 3.0 | null | 2011-04-26T07:24:16.177 | 2011-04-26T07:24:16.177 | null | null | 4329 | null |
9984 | 2 | null | 9928 | 3 | null | The definition of fat-tail in [wikipedia](http://en.wikipedia.org/wiki/Fat_tail) is that
$$p(x)\sim x^{-(\alpha+1)}$$
as $x\to\infty$ for some $\alpha>0$. Now
$$\frac{e^x}{x^{\alpha+1}}\to\infty,$$
as $x\to\infty$, so $Ee^X$ cannot exist for such types of distributions. So you need to make precise what you have in mind when you say fat-tailed.
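A quick numerical illustration (Python/SciPy, with the made-up choice $\alpha = 2$ and density $p(x) = \alpha x^{-(\alpha+1)}$ on $[1,\infty)$): the partial integrals of $e^x p(x)$ blow up as the upper limit grows, which is exactly why $Ee^X$ fails to exist.

```python
import math
from scipy.integrate import quad

alpha = 2.0

def p(x):
    # Pareto-type density on [1, inf): p(x) = alpha * x^{-(alpha + 1)}
    return alpha * x ** -(alpha + 1)

# Partial integrals of e^x p(x) over [1, T]; they keep growing with T,
# so the integral over [1, inf) -- i.e. E e^X -- is infinite
partials = [quad(lambda x: math.exp(x) * p(x), 1, T)[0] for T in (5, 10, 20)]
print(partials)  # rapidly increasing, no finite limit
```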
| null | CC BY-SA 3.0 | null | 2011-04-26T07:53:57.130 | 2011-04-26T07:53:57.130 | null | null | 2116 | null |
9985 | 2 | null | 9809 | 3 | null | Let's build it!
You mentioned:
1 moment generating function
2 law of iterated expectations
3 change of measure
Adding:
4 Decompose random variable as a sum. Usually the sum of indicators of something.
5 Build a recurrence relation for E(X) (or a set of linear equations). Useful in Markov chains.
6 Stopping time theorem for martingales: $E(X_{T})=E(X_{1})$
6b Wald identity. $E(S_{T})=E(X_{1})E(T)$
7 Kolmogorov forward/backward equation
8 [Crofton](http://web.mit.edu/urban_or_book/www/book/chapter3/3.5.html)'s method
9 General idea: Symmetry. Especially for "$n$ points are chosen uniformly on the $[0;1]$" problems.
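As a tiny illustration of item 4 (decomposing into a sum of indicators): the expected number of fixed points of a uniformly random permutation is $E\sum_i 1_{\{\pi(i)=i\}} = \sum_i P(\pi(i)=i) = n \cdot \tfrac{1}{n} = 1$ for every $n$. A brute-force Python check for small $n$ (the helper name is mine):

```python
from itertools import permutations

def avg_fixed_points(n):
    # Average number of fixed points over all n! permutations of {0, ..., n-1}
    perms = list(permutations(range(n)))
    total = sum(sum(p[i] == i for i in range(n)) for p in perms)
    return total / len(perms)

# By linearity of expectation this is exactly 1 for every n
print([avg_fixed_points(n) for n in (2, 3, 4, 5)])  # [1.0, 1.0, 1.0, 1.0]
```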
| null | CC BY-SA 3.0 | null | 2011-04-26T10:07:42.067 | 2011-04-26T10:29:53.403 | 2011-04-26T10:29:53.403 | 2043 | 2043 | null |
9986 | 2 | null | 9981 | 6 | null | Given your data:
```
cp <- c(5, 2, 4, 1, 9, 2, 9, 2, 10, 1)
```
then the ranks, with ties being given average of the ranks, are:
```
> rank(cp)
[1] 7.0 4.0 6.0 1.5 8.5 4.0 8.5 4.0 10.0 1.5
```
What is being done here? If you sort the data in increasing order, then we have a `1` in both rank order positions 1 and 2. We could assign rank 1 to both `1`s, or rank 2, or as stated above, the average of the rank orders, (1 + 2) / 2 = 1.5. This is why the two `1`s have been given rank of 1.5 in the above output from R.
Now look at the next values in the rank order, the `2`s. The `2`'s are in rank order positions 3, 4, and 5, therefore they all get rank 4 from (3+4+5) / 3 = 4, as this is the average of the tied ranks for these values.
If we initiate $S_0 = 0$, i.e. the zeroth cumulative sum is 0, we compute the $i$th cumulative sum ($S_i$) as the previous cumulative sum ($S_{i-1}$) plus the difference between the rank of the $i$th data point ($r_i$) and the average over all ranks $\bar{r}$.
For the above data, the average rank is:
```
> rcp <- rank(cp)
> mean(rcp)
[1] 5.5
```
The values $r_i - \bar{r}$ for this set of data are:
```
> rcp - mean(rcp)
[1] 1.5 -1.5 0.5 -4.0 3.0 -1.5 3.0 -1.5 4.5 -4.0
```
and the cumulative sums are:
```
> cumsum(rcp - mean(rcp))
[1] 1.5 0.0 0.5 -3.5 -0.5 -2.0 1.0 -0.5 4.0 0.0
```
| null | CC BY-SA 3.0 | null | 2011-04-26T10:42:52.393 | 2011-04-26T10:42:52.393 | null | null | 1390 | null |
9987 | 1 | null | null | 7 | 6531 | I know that "deviations in the data are the devil", and when the distribution is highly skewed, it is better to use the median as the average rather than the mean, but how does one decide these hard limits?
For example:
- CASE 1:
Assume X = 10,20,30,40,50,60,70
In this case, I think that it is better to use mean and that it will give very accurate results.
- CASE 2:
Assume X = 10,20,30,40,50,60,70,7000
In this case, I think that it is better to use median instead of using the mean.
- CASE 3:
Assume X = 10,20,30,400,500,600,700
In this case, I think it is better to use the IQR (interquartile range)
But I'm stuck on how to decide these hard limits, i.e., which to use in which condition, in general.
I've found a tool working on this principle, which takes a context-less sample distribution as input and determines whether the mean is close to, moderately different from, or against the null hypothesis.
References:
- http://home.ubalt.edu/ntsbarsh/Business-stat/otherapplets/MeanTest.htm (For Mean Test)
- http://home.ubalt.edu/ntsbarsh/Business-stat/otherapplets/MediansTest.htm (For Median Test)
What I'm really looking for is a good answer which states how to derive these conclusions.
| When does the amount of skew or prevalence of outliers make the median preferable to the mean? | CC BY-SA 3.0 | null | 2011-04-26T11:35:49.713 | 2017-11-10T23:21:52.083 | 2017-11-10T23:21:52.083 | 128677 | 4331 | [
"mean",
"median"
] |
9988 | 1 | 10019 | null | 4 | 2828 | I received this question by email from a Neuroscience PhD student.
>
I would greatly appreciate if you
could please let me know whether
Factor Analysis could load positively
and inversely correlated variables
onto the same latent factor, whereas
Cluster Analysis can only cluster into
the same factor either positively or
inversely correlated variables (e.g.,
they would be segregated into 2
factors).
| Can cluster analysis cluster variables that both positively and negatively correlate with each other? | CC BY-SA 3.0 | null | 2011-04-26T11:37:30.257 | 2011-04-27T05:01:49.230 | null | null | 183 | [
"clustering",
"factor-analysis"
] |
9989 | 2 | null | 9987 | 2 | null | You can read about measures of central tendency here: [http://en.wikipedia.org/wiki/Central_tendency](http://en.wikipedia.org/wiki/Central_tendency) .
Generally, you analyse a sample in order to tell something about a (much larger) population. Often you know more about the population than merely the data in your sample, usually something that motivated you to take a sample in the first place. If you know that the population has normal distribution then the sample mean will be the best estimator of the expected value even if the sample does not look normal (using small sample sizes like the above you can't really characterise the distribution anyway). You can reliably estimate the mean if you have a lot of data even if the distribution is not normal (see [T-test for non normal when N>50?](https://stats.stackexchange.com/questions/9573/t-test-for-non-normal-when-n50)).
In cases of distributions that cannot be described parametrically, the median and the IQR may tell you much more. The IQR is a dispersion measure, as opposed to the mean and median, which are location measures. You can read about dispersion parameters here: [http://en.wikipedia.org/wiki/Statistical_dispersion](http://en.wikipedia.org/wiki/Statistical_dispersion) .
A further aspect to consider is that some of your data may be outliers (see [Rigorous definition of an outlier?](https://stats.stackexchange.com/questions/7155/rigorous-definition-of-an-outlier)).
| null | CC BY-SA 3.0 | null | 2011-04-26T11:49:10.857 | 2011-04-26T13:02:21.527 | 2017-04-13T12:44:20.840 | -1 | 3911 | null |
9990 | 1 | 12934 | null | 17 | 4169 | I'm decently familiar with mixed effects models (MEM), but a colleague recently asked me how it compares to latent growth models (LGM). I did a bit of googling, and it seems that LGM is a variant of structural equation modelling that is applied to circumstances where repeated measures are obtained within each level of at least one random effect, thus making Time a fixed effect in the model. Otherwise, MEM and LGM seem pretty similar (eg. they both permit exploration of different covariance structures, etc).
Am I correct that LGM is conceptually a special case of MEM, or are there differences between the two approaches with respect to their assumptions or capacity to evaluate different types of theories?
| What are the differences between "Mixed Effects Modelling" and "Latent Growth Modelling"? | CC BY-SA 3.0 | null | 2011-04-26T12:33:47.493 | 2020-01-22T12:59:08.357 | 2020-01-22T12:59:08.357 | 11887 | 364 | [
"mixed-model",
"panel-data",
"growth-model"
] |
9991 | 2 | null | 9987 | 1 | null | Be careful with medians: they are biased estimators and the degree of bias can change depending on the skew of the distribution and the sample size (see [Miller, 1988](http://www.ncbi.nlm.nih.gov/pubmed/2971778)). This means that if you are comparing two conditions that have either different skew or different sample sizes, you may find a difference that is in fact attributable to bias rather than a real difference, or you may fail to find a difference when there is a real one when the direction of the difference in bias is opposite to the direction of the real difference between the conditions.
| null | CC BY-SA 3.0 | null | 2011-04-26T12:58:56.007 | 2011-04-26T12:58:56.007 | null | null | 364 | null |
9993 | 2 | null | 9987 | 2 | null | There are no hard and fast rules. They convey different information and have different properties. You select the statistic that best conveys what you want to convey. Or better yet, select statistics that best describe the data. Keep this same thing in mind when you're selecting a measure of central tendency to analyze.
(snipped a bunch of stuff repeating Mike Lawrence's answer)
Note that Mike Lawrence is referring to something that's surprising for a lot of people. In the behavioural sciences there's a lot of folk wisdom that you use medians with small sample sizes. But in actual fact that's exactly the wrong thing to do because the median quickly becomes more biased than the mean with small samples.
| null | CC BY-SA 3.0 | null | 2011-04-26T13:17:12.987 | 2011-04-26T13:17:12.987 | null | null | 601 | null |
9994 | 2 | null | 9987 | 7 | null |
### Framing the question
- You are asking an applied and subjective question, and thus, any answer needs to be infused with applied and subjective considerations.
- From a purely statistical perspective, the mean and median both provide different information about the central tendency of a sample of data. Thus, neither is correct or incorrect by definition.
- From an applied perspective, we often want to say something meaningful about the central tendency of a sample, where central tendency maps onto some subjective notion of "typical".
### General thoughts
- When summarising what is typical in a sample, observations that are many standard deviations away from the mean (perhaps 3 or 4 SD) will have a large influence on the mean, but not the median. Such observations may lead the mean to deviate from what we think of as the "typical" value of the sample. This helps to explain the popularity of the median when it comes to reporting house prices and income, where a single island in the pacific or billionaire could dramatically influence the mean, but not the median. Such distributions can often include extreme outliers, and the distribution is positively skewed. In contrast, the median is robust.
- The median can be problematic when the data takes on a limited number of values.
For example, the median of a 5-point Likert item lacks the nuance possessed by the mean: means of 2.8, 3.0, and 3.3 might all have a median of 3.
- In general, the mean has the benefit of using more of the information from the data.
- When skewed distributions exist, it is also possible to transform the distribution and report the mean of the transformed distribution.
- When a distribution includes outliers, it is possible to use a trimmed mean, or remove the outliers, or adjust the value of the outlier to a less extreme value (e.g., 2 SD from the mean).
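For example, on the CASE 2 data from the question, the mean, median, and a trimmed mean behave quite differently (a sketch using `scipy.stats.trim_mean`; the trimming fraction is an arbitrary choice that drops one observation from each tail here):

```python
import numpy as np
from scipy import stats

# Sample with one extreme observation (CASE 2 in the question)
x = np.array([10, 20, 30, 40, 50, 60, 70, 7000])

print(np.mean(x))                  # 910.0 -- pulled far above the "typical" value
print(np.median(x))                # 45.0  -- robust to the outlier
print(stats.trim_mean(x, 0.125))   # 45.0  -- mean after trimming 12.5% per tail
```

The trimmed mean keeps more of the data's information than the median while still discarding the extreme value.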
| null | CC BY-SA 3.0 | null | 2011-04-26T14:02:40.367 | 2011-04-26T14:02:40.367 | null | null | 183 | null |
9995 | 2 | null | 9972 | 1 | null | Check out some of the following:
- the literature on scale construction.
- discussions of formative and reflective indicators (here's a discussion).
It sounds like your scale might be formative, in that it is driven by theoretical definition of a construct rather than some natural correlation between the items and facets that you mention.
- whuber's comments on this earlier question on an index of soil quality.
| null | CC BY-SA 3.0 | null | 2011-04-26T14:11:36.723 | 2011-04-26T14:11:36.723 | 2017-04-13T12:44:44.530 | -1 | 183 | null |
9997 | 1 | 9999 | null | 8 | 2159 | I've been doing some Machine Learning, and have been using k-fold cross-validation to assess the generalisation performance of the algorithm. I've tried k-fold cross-validation with k = 5 and k = 200 and get very different results for Support Vector Machine classification.
```
k SVM accuracy
-----------------
5 75%
200 94%
```
This seems like a huge difference in accuracy caused by changing the number of splits we're doing for the k-fold cross-validation. Is there any reason for this? I can't seem to find any references on studies that have been done investigating the effects of using different k values. Obviously, which k value I decide to use in my report gives completely different impressions of the quality of my classifier!
| Information on how value of k in k-fold cross-validation affects resulting accuracies | CC BY-SA 3.0 | null | 2011-04-26T15:25:45.430 | 2011-04-26T16:54:13.690 | null | null | 261 | [
"machine-learning",
"cross-validation",
"svm"
] |
9998 | 2 | null | 9987 | 1 | null | "deviations in the data are the devil" is just not true, I think - well, I don't agree with it at least. I'd say it's more like "chilli" than the "devil" - as much as you can reasonably handle is good, but it can get nasty if there is too much.
The most general procedure I know of to "choose a statistic" to report your data is a combination of two things
- Bayesian inference (describing what is known)
- Decision theory (taking actions under uncertainty)
However, both of these methods are only partially "algorithmic", so to speak. You have to supply the inputs, though. Perhaps the most important part of this stage is that you have to ask a question that your procedure is going to answer. Naturally, different questions get different answers. As the saying goes, "I have just derived a very elegant and beautiful answer. All I have to do now is figure out the question." A common problem I have seen with many statistical procedures is that there is not always a clear statement of the class of problems for which they are the best procedure to use.
Bayesian inference requires you to specify your prior information in a mathematical framework. This involves
- Specifying the hypothesis space - what possibilities am I going to consider?
- Assigning probabilities to each part of the space
- Using the rules of probability theory to manipulate the assigned probabilities
this is basically an open-ended problem (you can always analyse a given English statement more deeply, to extract more or different information from it). Decision theory also requires you to specify a loss function - and there are basically no rules or principles by which to do this, at least as far as I know (computational simplicity is a key driver).
One useful question to ask yourself though is "what information about the sample do I convey by presenting this statistic?" or "how much of the complete data set can I recover from using just this set of statistics?"
One way you could use Bayesian statistics to help you here is to propose a hypothesis:
$$\begin{array}{l l}
H_{mean}:\text{The mean is the best statistic}
\\H_{med}:\text{The median is the best statistic}
\\H_{IQR}:\text{The IQR is the best statistic}
\end{array}
$$
Now these are not "mathematically well posed" hypotheses, but if we use them anyway we can see what parts of the maths are required to make them well posed. The first part is the prior probabilities: without any data, how likely is each hypothesis? The usual answer is equal probabilities (but not always - you may have some theoretical reason to support one hypothesis being more likely - the CLT is perhaps one for $H_{mean}$ being higher than the others).
So we use Bayes theorem to update each probability ($I$=prior information, $D$= data set):
$$P(H_{i}|D,I)=P(H_{i}|I)\frac{P(D|H_{i},I)}{P(D|I)}\implies \frac{P(H_{i}|D,I)}{P(H_{j}|D,I)}=\frac{P(H_{i}|I)}{P(H_{j}|I)}\frac{P(D|H_{i},I)}{P(D|H_{j},I)}$$
So if the prior probabilities are equal, then the relative probabilities are given by the likelihood ratio. So you also need to specify a probability distribution for what type of data sets you would be likely to see if the mean was the best statistic, etc. Note that each hypothesis doesn't actually state what the specific value of the mean, median, or IQR actually is. Therefore, the probability cannot depend on the exact value of the mean. Hence in the likelihoods these must have been "integrated out" using the sum and product rules
$$P(D|H_{i},I)=\int P(\theta_{i}|H_{i},I)P(D|\theta_{i},H_{i},I)d\theta_{i}$$
So you have the prior $P(\theta_{i}|H_{i},I)$ which can be interpreted for i=mean as "given that the mean is the best statistic, and prior to seeing the data, what values of the mean are we likely to see?" and the likelihood $P(D|\theta_{i},H_{i},I)$ can be similarly interpreted as "given the mean is best, and equal to $\theta_{mean}$ how likely is the data that was observed?". This may help you come up with some kinds of features that your distribution should have.
This describes the inference - now it is time to apply decision theory. This is particularly simple because your decision doesn't influence the state of nature - the statistic won't change if you do or don't use it. So we can describe the decisions ($A$ for "action" because $D$ is already taken):
$$\begin{array}{l l}
A_{mean}:\text{The mean is the reported statistic}
\\A_{med}:\text{The median is the reported statistic}
\\A_{IQR}:\text{The IQR is the reported statistic}
\end{array}
$$
And now you need to specify a loss matrix $L_{ij}$ which relates the action/decision $A_{i}$ to the state of nature $H_{j}$ - what is the loss if I report the mean, but the median is actually the best statistic? In most cases the diagonal elements will be zero - taking the correct action means no loss. You may also have that all non-diagonal elements are equal - how you are wrong doesn't matter, only whether or not you are wrong.
You then proceed by calculating the average loss for each action, weighted by their probabilities:
$$L_{i}=\sum_{j}L_{ij}P(H_{j}|D,I)$$
And you then choose the action with the smallest average loss.
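The last two steps can be sketched numerically (Python/numpy; the posterior probabilities and the 0-1 loss matrix are made-up illustrations):

```python
import numpy as np

# Posterior probabilities P(H_j | D, I) for (H_mean, H_med, H_IQR) -- made up
post = np.array([0.5, 0.3, 0.2])

# Loss matrix L[i, j]: loss of taking action A_i when H_j is true.
# Zero on the diagonal (correct action), equal off-diagonal losses here.
L = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

expected_loss = L @ post           # L_i = sum_j L[i, j] P(H_j | D, I)
best = np.argmin(expected_loss)    # report the statistic with smallest average loss
print(expected_loss, best)
```

With this 0-1 loss the rule reduces to reporting the statistic belonging to the most probable hypothesis; an asymmetric loss matrix could change that choice.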
| null | CC BY-SA 3.0 | null | 2011-04-26T16:39:30.940 | 2011-04-26T16:39:30.940 | null | null | 2392 | null |
9999 | 2 | null | 9997 | 6 | null | Not much of a "proof", but when k is small, you are removing a much larger chunk of your data, so your model has a much smaller amount of data to "learn from". For k=5 you are removing 20% of the data each time, whereas for k=200 you are only removing 0.5%. Your model has a much better chance of picking up all the relevant "structure" in the training part when k is large. When k is small, there is a larger chance that the "left out" part will contain structure which is absent from the "left in" bit - a bit like an "un-representative" sub-sample.
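A rough way to see the mechanics is to run k-fold CV by hand on toy data (a Python/numpy sketch with a deliberately simple nearest-class-mean classifier; with such a stable classifier the gap between k values is small, but flexible models like SVMs can show much larger gaps):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class 1-D data: class means at -1 and +1 (made up for illustration)
n = 200
X = np.concatenate([rng.normal(-1, 1, n // 2), rng.normal(1, 1, n // 2)])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])
idx = rng.permutation(n)
X, y = X[idx], y[idx]

def kfold_accuracy(X, y, k):
    """Accuracy of a nearest-class-mean classifier under k-fold CV."""
    folds = np.array_split(np.arange(len(X)), k)
    correct = 0
    for f in folds:
        train = np.setdiff1d(np.arange(len(X)), f)  # everything not in the fold
        m0 = X[train][y[train] == 0].mean()
        m1 = X[train][y[train] == 1].mean()
        pred = (np.abs(X[f] - m1) < np.abs(X[f] - m0)).astype(float)
        correct += (pred == y[f]).sum()
    return correct / len(X)

# With k = 5 each model trains on 80% of the data;
# with k = 200 (leave-one-out here), on all but one point
print(kfold_accuracy(X, y, 5), kfold_accuracy(X, y, 200))
```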
| null | CC BY-SA 3.0 | null | 2011-04-26T16:54:13.690 | 2011-04-26T16:54:13.690 | null | null | 2392 | null |
10001 | 1 | 10028 | null | 47 | 18651 | I have noticed that there are a few implementations of random forest such as ALGLIB, Waffles and some R packages like `randomForest`. Can anybody tell me whether these libraries are highly optimized? Are they basically equivalent to the random forests as detailed in [The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/) or have a lot of extra tricks been added?
I hope this question is specific enough. As an illustration of the type of answer I am looking for, if somebody asked me whether the linear algebra package BLAS was highly optimized, I would say it was extremely highly optimized and mostly not worth trying to improve upon except in very specialized applications.
| Optimized implementations of the Random Forest algorithm | CC BY-SA 3.0 | null | 2011-04-26T18:39:04.007 | 2021-04-14T12:10:45.183 | 2014-05-17T16:58:30.963 | 27403 | 847 | [
"random-forest",
"algorithms",
"model-evaluation"
] |
10002 | 1 | 10023 | null | 18 | 940 | Given a predicted variable (P), a random effect (R) and a fixed effect (F), one could fit two* mixed effects models ([lme4](http://cran.r-project.org/web/packages/lme4/) syntax):
```
m1 = lmer( P ~ (1|R) + F )
m2 = lmer( P ~ (1+F|R) + F)
```
As I understand it, the second model is the one that permits the fixed effect to vary across levels of the random effect.
In my research I typically employ mixed effects models to analyze data from experiments conducted across multiple human participants. I model participant as a random effect and experimental manipulations as fixed effects. I think it makes sense a priori to let the degree to which the fixed effects affect performance in the experiment vary across participants. However, I have trouble imagining circumstances under which I should not permit the fixed effects to vary across levels of a random effect, so my question is:
When should one not permit a fixed effect to vary across levels of a random effect?
| When should I *not* permit a fixed effect to vary across levels of a random effect in a mixed effects model? | CC BY-SA 3.0 | null | 2011-04-26T19:43:37.553 | 2021-11-11T00:27:23.733 | 2020-07-17T19:08:08.463 | 7486 | 364 | [
"r",
"regression",
"mixed-model",
"lme4-nlme",
"random-effects-model"
] |
10003 | 1 | 11139 | null | 11 | 334 |
## Background
I read about [StatProb.com](http://statprob.com) from a comment on [Andrew Gelman's Blog](http://www.stat.columbia.edu/%7Ecook/movabletype/archives/2011/03/why_edit_wikipe.html#comment-2187431).
According to the website, StatProb is:
>
StatProb: The Encyclopedia Sponsored
by Statistics and Probability
Societies combines the advantages of
traditional wikis (rapid and
up-to-date publication, user-generated
development, hyperlinking, and a saved
history) with traditional publishing
(quality assurance, review, credit to
authors, and a structured information
display). All contributions have been
approved by an editorial board
determined by leading statistical
societies; the editorial board members
are listed on the About page.
I am not a statistician, but I use statistics, and this site appears to offer me an opportunity to publish material that, while potentially useful to others, would likely go unpublished unless I were to include it as an appendix or post it on a website. The option is appealing because the review process would boost my own confidence in the methods that I use and give them some credibility in the public sphere.
Despite the support of major statistics and probability societies, the site has not taken off. Indeed, [one blogger asked 'R.I.P. StatProb?'](http://xianblog.wordpress.com/tag/jsm-2010/) and the frequency of contributions has been declining with time.
## Question:
Is it worth the effort to publish through StatProb.com?
## Update:
As of today (2012-02-01), the most recent contribution was [2011-05-04](http://statprob.com/?op=enlist&mode=created); the most recent edit was 2011-06. So it is looking less appealing today than when the question was originally asked.
| Is it worthwhile to publish at the refereed wiki StatProb.com? | CC BY-SA 3.0 | null | 2011-04-26T20:09:18.910 | 2019-09-24T19:40:02.243 | 2020-06-11T14:32:37.003 | -1 | 2750 | [
"probability",
"references",
"methodology"
] |
10004 | 2 | null | 10001 | 7 | null | The [ELSII](http://www-stat.stanford.edu/~tibs/ElemStatLearn/) used [randomForest](http://cran.r-project.org/web/packages/randomForest/index.html) (see e.g., footnote 3 p.591), which is an R implementation of the Breiman and Cutler's [Fortran code](http://stat-www.berkeley.edu/users/breiman/RandomForests) from Salford. Andy Liaw's code is in C.
There's another implementation of RFs proposed in the [party](http://cran.r-project.org/web/packages/party/index.html) package (in C), which relies on R/Lapack, which has some dependencies on BLAS (see `/include/R_ext/Lapack.h` in your base R directory).
As far as bagging is concerned, it should not be too hard to parallelize it, but I'll let more specialized users answer on this aspect.
| null | CC BY-SA 3.0 | null | 2011-04-26T20:12:31.790 | 2011-04-26T20:12:31.790 | null | null | 930 | null |
10005 | 1 | null | null | 0 | 24408 | I have a table I want to convert into a graph (bar-graph or line-graph)
The first column has fixed values. Twenty different values are simulated for these fixed values and kept in the next columns. I want to plot a graph of the fixed column against all the different twenty simulated columns.
How do I go about it?
## Edit
>
table name:bygrace
```
V1 V2 V3 V4 V5
100 16 11 -6 1
120 -17 -12 7 -2
140 18 13 -8 3
150 -19 -14 9 -4
210 20 15 -10 -5
```
Actually, my table looks like the one above.
The first column V1 represents premiums charged by an insurance company, say last year (for 5 policyholders/policies).
After critical analysis of the portfolio, the company decides to increase or decrease the premium amount for some of the customers.
However, the company does not know exactly by how much it should increase or decrease the premiums for fear that it would lose customers.
For this reason, four different scenarios/simulations are considered which are represented by the next four columns(V2 through V5) respectively.
Now the task is to plot the premium amounts in V1 against these four different scenarios (bar-graph or line-graph; I think a bar-graph will be better).
And my question is: can this be done on one graph/at once? If yes, how should I go about it?
Or do I have to plot the premium against each column separately?
In fact, I have spent the whole of yesterday and today on this but I have not been able to get the desired result.
Someone gave me an answer to try. And I am going to do that because it has given me an idea, and I am very thankful to them!
Many thanks to everyone for their help
| How to convert a table into a graph in R | CC BY-SA 3.0 | null | 2011-04-26T20:15:47.633 | 2013-03-29T02:42:39.043 | 2020-06-11T14:32:37.003 | -1 | 4340 | [
"r",
"data-visualization"
] |
10006 | 1 | null | null | 22 | 2452 | Suppose $X\sim \operatorname{InvWishart}(\nu, \Sigma_0)$. I'm interested in the marginal distribution of the diagonal elements $\operatorname{diag}(X) = (x_{11}, \dots, x_{pp})$. There are a few simple results on the distribution of submatrices of $X$ (at least some listed at Wikipedia). From this I can figure that the marginal distribution of any single element on the diagonal is inverse Gamma. But I've been unable to deduce the joint distribution.
I thought maybe it could be derived by composition, like:
$$p(x_{11} | x_{ii}, i\gt 1)p(x_{22}|x_{ii}, i>2)\dots p(x_{(p-1)(p-1)}|x_{pp})p(x_{pp}),$$
but I never got anywhere with it and further suspect that I'm missing something simple; it seems like this "ought" to be known but I haven't been able to find/show it.
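In the absence of a closed form, a quick Monte Carlo sketch can at least probe the joint behaviour of the diagonal empirically (the scale matrix below is an arbitrary illustration; `rWishart` is in base R's stats package):

```
# If W ~ Wishart(nu, Sigma0^-1), then W^-1 ~ InvWishart(nu, Sigma0),
# so we can sample X and inspect the dependence among its diagonal elements.
set.seed(1)
nu <- 10; p <- 3
Sigma0 <- diag(p) + 0.5  # illustrative scale matrix
draws <- replicate(5000, diag(solve(rWishart(1, nu, solve(Sigma0))[, , 1])))
cor(t(draws))  # the diagonal elements are clearly not independent
```

This does not give the joint density, of course, but it makes clear that the inverse-Gamma marginals alone do not determine it.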
| Marginal distribution of the diagonal of an inverse Wishart distributed matrix | CC BY-SA 3.0 | null | 2011-04-26T20:30:43.627 | 2019-10-02T17:32:12.547 | 2016-12-04T10:15:55.690 | 113090 | 26 | [
"distributions",
"probability",
"density-function"
] |
10007 | 2 | null | 6776 | 3 | null | It's a mixture model set up you've got. So to start, put the mixture identifying variable in - you don't have it yet. It's an indicator variable saying whether a case comes from one regression (say Z=0) or the other (say Z=1). Probably it will enter the full model in the form of an interaction with a slope and/or intercept to allow these to change depending on which regression generates the point (although other more complex arrangements are possible). Formulate that model carefully to ensure the mixture dependencies are what you want - there are a lot of possibilities.
Now, if Z were observed you'd know how to fit the complete model and get betas from it, because there would be nothing unobserved on the right hand side. But assuming you see only the data and the covariates, you don't observe it. However, you have assumed a complete model for how data is generated for each value of Z. So (E-step) use that to get a posterior distribution over the possible values of Z for each data point, using the model with its parameters as they stand and some prior assumption about the distribution of Z (or you could estimate that too). Recall that the posterior probability of Z=1 is just the expectation of Z. Now (M-step) use that expected Z as if it were a real observation of Z to refit the whole model. The complete data likelihood will, in normal circumstances, not go down.
Alternate this process until the likelihood of the data under the model stops rising, retrieve the final set of betas, hope you're not stuck in a local maximum, and declare that you've estimated them.
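As a hedged illustration of this alternation (assuming the `flexmix` package, which implements exactly this kind of EM for mixtures of regressions; the data here are simulated):

```
library(flexmix)
set.seed(2)
x <- runif(200)
z <- rbinom(200, 1, 0.5)  # latent regime indicator, unobserved in practice
y <- ifelse(z == 1, 2 + 3 * x, -1 + 0.5 * x) + rnorm(200, sd = 0.3)
fit <- flexmix(y ~ x, k = 2)  # EM over the posterior of Z
parameters(fit)               # per-component intercepts and slopes
```

Component labels may be switched relative to the simulation; that is a generic feature of mixture fitting.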
| null | CC BY-SA 3.0 | null | 2011-04-26T20:47:11.920 | 2011-04-26T20:47:11.920 | null | null | 1739 | null |
10008 | 1 | null | null | 1 | 1352 | I have a scenario where I have a User which likes 10 different sports and there is another user which likes 20 different sports. I need to find the correlation between them. What kind of correlations can be used in such a scenario. Any kind of guide would be helpful. I tried with Pearson correlation but was not helpful. I would like to program using MATLAB. Thanks in advance.
| Need to find correlation between two entities | CC BY-SA 3.0 | null | 2011-04-26T20:48:43.323 | 2011-04-27T13:34:20.870 | 2011-04-27T13:34:20.870 | null | 4341 | [
"correlation",
"matlab"
] |
10009 | 2 | null | 10005 | 1 | null | Try this:
```
#Generate 100 x values. Then generate 20 random walk y values for each x value
x <- seq(1, 100, 1)
y <- matrix(NA, nrow=100, ncol=20)  #preallocate; filled in the loop below
for (i in 1:20) {
y[, i] <- cumsum(rnorm(100))
}
#Build the table
df <- data.frame(x=x, y=y)
head(df)
#Plot the table
matplot(df[, 1], df[, 2:21], type="l", main="Twenty Random Walks", xlab="x", ylab="y")
grid()
```

| null | CC BY-SA 3.0 | null | 2011-04-26T21:56:07.137 | 2011-04-26T22:01:25.727 | 2011-04-26T22:01:25.727 | 2775 | 2775 | null |
10010 | 1 | null | null | 4 | 191 | I am currently trying to do the following in R:
I have thousands of measured spectra (x, y; see below). Each spectrum has one or two peaks. I also have sets of "training" spectra obtained in more controlled conditions, and I would like to know which of my training spectra is the closest match to a given measured spectrum.
I was thinking that some sort of pattern recognition would be useful, but I know too little to make an informed choice, as this is a bit outside of my usual work area.
- What is the most promising way/function in R to do this kind of pattern recognition I want?
- In case pattern recognition (like PCA) is not the most promising way, what other options are there?
I am looking for sample bits of code or literature dealing with this kind of data analysis.

EDIT
The peak position will most probably always be the same; however, the laser used to record the spectra is temperature controlled and slight variations are possible. The intensity will change depending on experimental conditions.
The two peaks should be treated as independent peaks.
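Not knowing the exact data format, here is one minimal baseline worth comparing against before anything fancier - nearest-neighbour matching on a common x grid (the function name and the toy spectra are hypothetical):

```
best.match <- function(measured, training) {
  # squared Euclidean distance from the measured spectrum to each training spectrum
  d <- apply(training, 1, function(s) sum((s - measured)^2))
  which.min(d)  # index of the closest training spectrum
}
x <- seq(0, 10, length.out = 200)
training <- rbind(dnorm(x, 3), dnorm(x, 5), dnorm(x, 7))  # one spectrum per row
measured <- dnorm(x, 5.1) + rnorm(200, sd = 0.01)
best.match(measured, training)  # picks 2, the spectrum peaked nearest 5.1
```

If intensities vary between runs, normalising each spectrum first, or using `1 - cor(s, measured)` as the distance, may be more robust.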
| Classifying spectra | CC BY-SA 3.0 | null | 2011-04-26T22:27:58.763 | 2012-05-10T05:23:07.890 | 2012-05-09T10:31:02.793 | 4479 | 4342 | [
"classification"
] |
10011 | 1 | null | null | 7 | 1268 | I have a set of samples in which I assume there are 2 definite subsets in it. I plotted their values in a histogram and found that there are two distinct modes as shown in the figure below.
My question is: how do I differentiate the two groups? I.e., how do I choose a value that separates the two subsets?

| How to differentiate two subgroups from a histogram? | CC BY-SA 3.0 | null | 2011-04-26T22:36:32.657 | 2013-05-13T04:58:16.037 | 2013-05-13T04:58:16.037 | 805 | 2725 | [
"normal-distribution",
"mixture-distribution",
"histogram",
"unsupervised-learning"
] |
10012 | 2 | null | 10008 | 1 | null | This is like [inter-rater reliability](http://en.wikipedia.org/wiki/Inter-rater_reliability). The code [here](http://www.mathworks.com/matlabcentral/fileexchange/15365) should do it.
| null | CC BY-SA 3.0 | null | 2011-04-27T00:13:44.980 | 2011-04-27T00:13:44.980 | null | null | 3874 | null |
10013 | 1 | null | null | 5 | 2310 | I found the following posts interesting and I was wondering if any of you guys know of good academic papers that describe methods/relationships of exogenous variables in VECM models. If so could you kindly point them out to me as I am very interested in learning. Thank you.
[Finding coefficients for VECM + exogenous variables](https://stats.stackexchange.com/questions/4030/finding-coefficients-for-vecm-exogenous-variables)
[Lagged Exogenous Variables in VECM with R](https://stats.stackexchange.com/questions/6487/lagged-exogenous-variables-in-vecm-with-r)
| Exogenous variables in VECM | CC BY-SA 3.0 | null | 2011-04-27T00:16:30.820 | 2021-11-21T01:53:36.777 | 2021-11-21T01:53:36.777 | 11887 | 4338 | [
"time-series",
"exogeneity"
] |
10015 | 2 | null | 8642 | 1 | null | I have two notes and one suggestion.
The first note is that testing theory is typically done by setting an acceptable level at which you would reject a true hypothesis (Type I error), and then minimizing the risk of accepting a false hypothesis (Type II error). There are two reasons for this: first, all your tests use this assumption, and second, in almost all cases you can't minimize both errors simultaneously.
My second note is that the Wilcoxon Test hypothesis is actually $H_0: F_0 = F_1, H_1: F_0 \ne F_1$, where $F_i$ are CDFs, the relationship of this test to the mean is a property of the class of CDFs you are considering and the conditions you are considering them under.
Under the data discussed I think bootstrapping would probably be appropriate if you think the sample is representative of the population of interest. Other possible choices include deriving an empirical likelihood ratio test, or resampling t-tests and checking robustness.
| null | CC BY-SA 4.0 | null | 2011-04-27T01:51:25.523 | 2023-05-09T17:07:10.133 | 2023-05-09T17:07:10.133 | 31853 | 2339 | null |
10016 | 2 | null | 10011 | 0 | null | If you are willing to assume the populations have the same variance you could use essentially LDA without the normality assumption (a.k.a. Fisher's Method or Fisher's Discriminant Function).
Without this assumption you could try an EM algorithm, which is indirectly what Matt suggested, since this would be a mixture model approach.
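A sketch of the mixture-model idea, assuming the `mixtools` package is available (the sample below is simulated to stand in for your data); one natural threshold is the point where the two weighted component densities cross:

```
library(mixtools)
set.seed(5)
x <- c(rnorm(150, 35, 2), rnorm(50, 42, 2))  # stand-in for your bimodal sample
fit <- normalmixEM(x, k = 2)  # EM fit of a two-component normal mixture
f <- function(t) fit$lambda[1] * dnorm(t, fit$mu[1], fit$sigma[1]) -
                 fit$lambda[2] * dnorm(t, fit$mu[2], fit$sigma[2])
# the weighted densities usually change order somewhere between the two means
threshold <- uniroot(f, interval = sort(fit$mu))$root
```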
| null | CC BY-SA 3.0 | null | 2011-04-27T02:14:35.040 | 2011-04-27T02:14:35.040 | null | null | 2339 | null |
10017 | 1 | 60262 | null | 44 | 71963 | I am trying to understand standard error "clustering" and how to execute in R (it is trivial in Stata). In R I have been unsuccessful using either `plm` or writing my own function. I'll use the `diamonds` data from the `ggplot2` package.
I can do fixed effects with either dummy variables
```
> library(plyr)
> library(ggplot2)
> library(lmtest)
> library(sandwich)
> # with dummies to create fixed effects
> fe.lsdv <- lm(price ~ carat + factor(cut) + 0, data = diamonds)
> ct.lsdv <- coeftest(fe.lsdv, vcov. = vcovHC)
> ct.lsdv
t test of coefficients:
Estimate Std. Error t value Pr(>|t|)
carat 7871.082 24.892 316.207 < 2.2e-16 ***
factor(cut)Fair -3875.470 51.190 -75.707 < 2.2e-16 ***
factor(cut)Good -2755.138 26.570 -103.692 < 2.2e-16 ***
factor(cut)Very Good -2365.334 20.548 -115.111 < 2.2e-16 ***
factor(cut)Premium -2436.393 21.172 -115.075 < 2.2e-16 ***
factor(cut)Ideal -2074.546 16.092 -128.920 < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
or by de-meaning both left- and right-hand sides (no time invariant regressors here) and correcting degrees of freedom.
```
> # by demeaning with degrees of freedom correction
> diamonds <- ddply(diamonds, .(cut), transform, price.dm = price - mean(price), carat.dm = carat .... [TRUNCATED]
> fe.dm <- lm(price.dm ~ carat.dm + 0, data = diamonds)
> ct.dm <- coeftest(fe.dm, vcov. = vcovHC, df = nrow(diamonds) - 1 - 5)
> ct.dm
t test of coefficients:
Estimate Std. Error t value Pr(>|t|)
carat.dm 7871.082 24.888 316.26 < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
I can't replicate these results with `plm`, because I don't have a "time" index (i.e., this isn't really a panel, just clusters that could have a common bias in their error terms).
```
> plm.temp <- plm(price ~ carat, data = diamonds, index = "cut")
duplicate couples (time-id)
Error in pdim.default(index[[1]], index[[2]]) :
```
I also tried to code my own covariance matrix with clustered standard errors using Stata's explanation of their `cluster` option ([explained here](http://www.stata.com/support/faqs/stat/cluster.html)), which is to solve $$\hat V_{cluster} = (X'X)^{-1} \left( \sum_{j=1}^{n_c} u_j'u_j \right) (X'X)^{-1}$$ where $u_j = \sum_{cluster~j} e_i * x_i$, $n_c$ is the number of clusters, $e_i$ is the residual for the $i^{th}$ observation and $x_i$ is the row vector of predictors, including the constant (this also appears as equation (7.22) in Wooldridge's Cross Section and Panel Data). But the following code gives very large covariance matrices. Are these values so large because of the small number of clusters I have? Given that I can't get `plm` to do clusters on one factor, I'm not sure how to benchmark my code.
```
> # with cluster robust se
> lm.temp <- lm(price ~ carat + factor(cut) + 0, data = diamonds)
>
> # using the model that Stata uses
> stata.clustering <- function(x, clu, res) {
+ x <- as.matrix(x)
+ clu <- as.vector(clu)
+ res <- as.vector(res)
+ fac <- unique(clu)
+ num.fac <- length(fac)
+ num.reg <- ncol(x)
+ u <- matrix(NA, nrow = num.fac, ncol = num.reg)
+ meat <- matrix(NA, nrow = num.reg, ncol = num.reg)
+
+ # outer terms (X'X)^-1
+ outer <- solve(t(x) %*% x)
+
+ # inner term sum_j u_j'u_j where u_j = sum_i e_i * x_i
+ for (i in seq(num.fac)) {
+ index.loop <- clu == fac[i]
+ res.loop <- res[index.loop]
+ x.loop <- x[clu == fac[i], ]
+ u[i, ] <- as.vector(colSums(res.loop * x.loop))
+ }
+ inner <- t(u) %*% u
+
+ #
+ V <- outer %*% inner %*% outer
+ return(V)
+ }
> x.temp <- data.frame(const = 1, diamonds[, "carat"])
> summary(lm.temp)
Call:
lm(formula = price ~ carat + factor(cut) + 0, data = diamonds)
Residuals:
Min 1Q Median 3Q Max
-17540.7 -791.6 -37.6 522.1 12721.4
Coefficients:
Estimate Std. Error t value Pr(>|t|)
carat 7871.08 13.98 563.0 <2e-16 ***
factor(cut)Fair -3875.47 40.41 -95.9 <2e-16 ***
factor(cut)Good -2755.14 24.63 -111.9 <2e-16 ***
factor(cut)Very Good -2365.33 17.78 -133.0 <2e-16 ***
factor(cut)Premium -2436.39 17.92 -136.0 <2e-16 ***
factor(cut)Ideal -2074.55 14.23 -145.8 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1511 on 53934 degrees of freedom
Multiple R-squared: 0.9272, Adjusted R-squared: 0.9272
F-statistic: 1.145e+05 on 6 and 53934 DF, p-value: < 2.2e-16
> stata.clustering(x = x.temp, clu = diamonds$cut, res = lm.temp$residuals)
const diamonds....carat..
const 11352.64 -14227.44
diamonds....carat.. -14227.44 17830.22
```
Can this be done in R? It is a fairly common technique in econometrics (there's a brief tutorial in [this lecture](http://sekhon.berkeley.edu/causalinf/sp2010/section/week7.pdf)), but I can't figure it out in R. Thanks!
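For what it's worth, more recent versions of the `sandwich` package expose a cluster-robust estimator directly via `vcovCL` (this assumes a version in which that function exists), which would give a benchmark for hand-rolled code:

```
library(ggplot2)  # for the diamonds data
library(sandwich)
library(lmtest)
m <- lm(price ~ carat + factor(cut) + 0, data = diamonds)
coeftest(m, vcov. = vcovCL(m, cluster = diamonds$cut))  # clustered on cut
```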
| Standard error clustering in R (either manually or in plm) | CC BY-SA 3.0 | null | 2011-04-27T02:34:11.373 | 2023-04-12T09:48:21.233 | 2016-02-20T16:36:10.930 | 36515 | 1445 | [
"r",
"panel-data",
"standard-error",
"fixed-effects-model",
"clustered-standard-errors"
] |
10019 | 2 | null | 9988 | 4 | null |
### General interpretation of the question:
The question is a bit confusing, but I interpret it as follows.
- Factor Analysis: When a survey has multiple items and some are positively worded (e.g., "I am the life of the party") and others are negatively worded (e.g., "I avoid social interaction"), factor analysis often assigns such items to the same factor.
For example, extraversion items load positively on one factor and introversion items load negatively on the same factor.
- Cluster Analysis: It is possible to cluster by variables.
In R you can use dist to generate a distance matrix and then send it to hclust to perform a hierarchical cluster analysis.
In SPSS the hierarchical cluster analysis procedure allows you to cluster by variables. The procedure uses the proximities command to generate the distance matrix.
### Variables and clusters:
When running a cluster analysis on variables, you need to think about what formula you will use to generate your distance matrix.
- Common ways of calculating distances between variables (i.e., the items in your survey) include the squared euclidean distance or one minus the correlation.
In both the cases, the distance between negative and positively worded items on the same dimension will appear distant. Thus, when you run a cluster analysis, such items will not cluster together.
- However, if you want such items to cluster together, you can choose a measure of distance between items that sees negatively correlated items as similar. A few ideas include:
  - Reverse the negatively worded items before entering them into the cluster analysis.
  - Use 1 minus the absolute correlation as the measure of distance.
In general, cluster analysis is an algorithmic and atheoretical way of examining groupings in your variables. You have a lot of freedom in how you define distances and how you aggregate based on those distances. The important consideration is that you align your conceptual definition of distance with your operational algorithm for measuring distance.
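As a small sketch of the second idea in R, run here on the built-in `attitude` data as a stand-in for survey items:

```
# 1 - |correlation| as the distance: negatively and positively worded items
# measuring the same dimension end up close together
d <- as.dist(1 - abs(cor(attitude, use = "pairwise.complete.obs")))
plot(hclust(d, method = "average"))
```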
| null | CC BY-SA 3.0 | null | 2011-04-27T05:01:49.230 | 2011-04-27T05:01:49.230 | null | null | 183 | null |
10020 | 1 | null | null | 14 | 3402 | I was advising a research student with a particular problem, and I was keen to get the input of others on this site.
### Context:
The researcher had three types of predictor variables. Each type contained a different number of predictor variables. Each predictor was a continuous variable:
- Social: S1, S2, S3, S4 (i.e., four predictors)
- Cognitive: C1, C2 (i.e., two predictors)
- Behavioural: B1, B2, B3 (i.e., three predictors)
The outcome variable was also continuous.
The sample included around 60 participants.
The researcher wanted to comment about which type of predictors were more important in explaining the outcome variable. This was related to broader theoretical concerns about the relative importance of these types of predictors.
### Questions
- What is a good way to assess the relative importance of one set of predictors relative to another set?
- What is a good strategy for dealing with the fact that there are different numbers of predictors in each set?
- What caveats in interpretation might you suggest?
Any references to examples or discussion of techniques would also be most welcome.
| Comparing importance of different sets of predictors | CC BY-SA 3.0 | null | 2011-04-27T05:35:46.223 | 2020-04-02T16:58:41.300 | 2011-04-27T08:57:44.160 | 183 | 183 | [
"regression",
"predictor",
"importance"
] |
10021 | 1 | 10025 | null | 2 | 324 | Given a Gaussian distribution $N(\mu_1,\sigma_1^2)$, i would like to choose another mean $\mu_2$ which is $2\sigma_1$ away from $\mu_1$.
In this case our new mean $\mu_2=\mu_1\pm 2\sigma_1$.
How do we calculate the new mean ($\mu_2$) in the multivariate case?
I mean to say: when the multivariate Gaussian distribution is $N(\mu_1,\Sigma_1)$ and $\Sigma_1$ is a symmetric positive definite matrix, i.e.
$\left[ \begin{array}{cc} \sigma_x^2 & \sigma_{xy} \\ \sigma_{yx} & \sigma_y^2 \end{array} \right]$.
| New mean calculation in multivariate gaussian | CC BY-SA 3.0 | null | 2011-04-27T10:07:39.523 | 2011-06-26T13:42:29.627 | 2011-06-26T13:42:29.627 | null | 4290 | [
"multivariate-analysis"
] |
10022 | 2 | null | 10020 | 7 | null | Importance
First thing to do is to operationalise 'importance of predictors'. I shall assume that it means something like 'sensitivity of mean outcome to changes in predictor values'. Since your predictors are grouped, sensitivity of the mean outcome to groups of predictors is more interesting than a variable-by-variable analysis. I leave it open whether sensitivity is understood causally. That issue is picked up later.
Three versions of importance
Lots of variance explained: I'm guessing that psychologists' first port of call is probably a variance decomposition leading to a measure of how much outcome variance is explained by the variance-covariance structure in each group of predictors. Not being an experimentalist I can't suggest much here, except to note that the whole 'variance explained' concept is a bit ungrounded for my taste, even without the 'which sum of which squares' issue. Others are welcome to disagree and develop it further.
Large standardized coefficients: SPSS offers the (misnamed) beta to measure impact in a way that is comparable across variable. There are several reasons not to use this, discussed in Fox's regression textbook, [here](http://www.jerrydallal.com/LHSP/importnt.htm), and elsewhere. All apply here. It also ignores group structure.
On the other hand, I imagine that one could standardise predictors in groups and use covariance information to judge the effect of a one standard deviation movement in all of them. Personally the motto: "if a thing's not worth doing, it's not worth doing well" damps my interest in doing so.
Large marginal effects: The other approach is to stay on the scale of the measurements and calculate marginal effects between carefully chosen sample points.
Because you are interested in groups it is useful to choose points to vary groups of variables rather than single ones, e.g. manipulating both cognitive variables at once. (Lots of opportunity for cool plots here). Basic paper [here](http://gking.harvard.edu/gking/files/making.pdf). The `effects` package in R will do this nicely.
There are two caveats here:
- If you do that you will want to watch out that you are not choosing two cognitive variables that while individually plausible, e.g. medians, are jointly far from any subject observation.
- Some variables are not even theoretically manipulable, so the interpretation of marginal effects as causal is more delicate, though still useful.
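A hedged sketch of manipulating a group of predictors at once with the `effects` package (the data and effect sizes below are invented purely for illustration):

```
library(effects)
set.seed(3)
dat <- data.frame(S1 = rnorm(60), C1 = rnorm(60), C2 = rnorm(60))
dat$outcome <- 1 + 0.5 * dat$S1 + dat$C1 * dat$C2 + rnorm(60)
m <- lm(outcome ~ S1 + C1 * C2, data = dat)
plot(effect("C1:C2", m))  # mean outcome over a grid of C1, C2 values
```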
Different numbers of predictors
Issues arise due to the grouped variables covariance structure, which we normally try not to worry about but for this task should.
In particular, when calculating marginal effects (or standardized coefficients for that matter) on groups rather than single variables, the curse of dimensionality will, for larger groups, make it easier for comparisons to stray into regions where there are no cases. More predictors in a group lead to a more sparsely populated space, so any importance measure will depend more on model assumptions and less on observations (but will not tell you that...). But these are really the same issues as in the model fitting phase. Certainly the same ones as would arise in a model-based causal impact assessment.
| null | CC BY-SA 3.0 | null | 2011-04-27T10:48:27.033 | 2011-04-27T10:48:27.033 | null | null | 1739 | null |
10023 | 2 | null | 10002 | 13 | null | I am not an expert in mixed effect modelling, but the question is much easier to answer if it is rephrased in hierarchical regression modelling context. So our observations have two indexes $P_{ij}$ and $F_{ij}$ with index $i$ representing class and $j$ members of the class. The hierarchical models let us fit linear regression, where coefficients vary across classes:
$$Y_{ij}=\beta_{0i}+\beta_{1i}F_{ij}$$
This is our first level regression. The second level regression is done on the first regression coefficients:
\begin{align*}
\beta_{0i}&=\gamma_{00}+u_{0i}\\\\
\beta_{1i}&=\gamma_{01}+u_{1i}
\end{align*}
when we substitute this in first level regression we get
\begin{align*}
Y_{ij}&=(\gamma_{00}+u_{0i})+(\gamma_{01}+u_{1i})F_{ij}\\\\
&=\gamma_{00}+u_{0i}+u_{1i}F_{ij}+\gamma_{01}F_{ij}
\end{align*}
Here $\gamma$ are fixed effects and $u$ are random effects. Mixed models estimate $\gamma$ and variances of $u$.
The model I've written down corresponds to `lmer` syntax
```
P ~ (1+F|R) + F
```
Now if we put $\beta_{1i}=\gamma_{01}$ without the random term we get
\begin{align*}
Y_{ij}=\gamma_{00}+u_{0i}+\gamma_{01}F_{ij}
\end{align*}
which corresponds to `lmer` syntax
```
P ~ (1|R) + F
```
So the question now becomes when can we exclude error term from the second level regression? The canonical answer is that when we are sure that the regressors (here we do not have any, but we can include them, they naturally are constant within classes) in the second level regression fully explain the variance of coefficients across classes.
So in this particular case, if the coefficient of $F_{ij}$ does not vary, or alternatively the variance of $u_{1i}$ is very small, we should entertain the idea that we are probably better off with the first model.
Note: I've only given an algebraic explanation, but I think with it in mind it is much easier to think of a particular applied example.
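The practical check suggested above can be sketched with `lme4`'s built-in `sleepstudy` data, where `Reaction` plays the role of P, `Days` of F and `Subject` of R:

```
library(lme4)
m1 <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)
m2 <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)
anova(m1, m2)  # likelihood-ratio comparison of the two specifications
VarCorr(m2)    # a tiny slope variance would favour the simpler m1
```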
| null | CC BY-SA 4.0 | null | 2011-04-27T11:12:22.073 | 2021-11-11T00:27:23.733 | 2021-11-11T00:27:23.733 | 42597 | 2116 | null |
10024 | 1 | 10041 | null | 10 | 2538 | According to [wiki](http://en.wikipedia.org/wiki/Cluster_analysis#Partitional_clustering) the most widely used convergence criterion is "assigment hasn't changed". I was wondering whether cycling can occur if we use such convergence criterion? I'd be pleased if anyone pointed a reference to an article that gives an example of cycling or proves that this is impossible.
| Cycling in k-means algorithm | CC BY-SA 3.0 | null | 2011-04-27T11:13:57.890 | 2013-03-16T23:57:51.583 | null | null | 1643 | [
"clustering",
"algorithms",
"k-means"
] |
10025 | 2 | null | 10021 | 2 | null | In the bivariate case you can substitute the two points ($\mu_2=\mu_1\pm 2\sigma_1$) with an isodensity ellipse: [http://www.stat.psu.edu/online/courses/stat505/05_multnorm/06_multnorm_revist.html](http://www.stat.psu.edu/online/courses/stat505/05_multnorm/06_multnorm_revist.html) .
Your $2\sigma_1$ criterion seems a bit arbitrary, but it includes 95.44997% of the random variable. So you may want to use the 95.44997% isodensity ellipse in the bivariate case, too. The principal axes of this rotated ellipse are the eigenvectors of the covariance matrix, see [http://web.as.uky.edu/statistics/users/viele/sta601s08/multinorm.pdf](http://web.as.uky.edu/statistics/users/viele/sta601s08/multinorm.pdf) . You can generalise this to more than 2 dimensions.
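A small sketch of drawing that ellipse from the eigendecomposition (the mean and covariance values are illustrative):

```
mu <- c(0, 0)
Sigma <- matrix(c(2, 0.8, 0.8, 1), 2)
e <- eigen(Sigma)
r <- sqrt(qchisq(pnorm(2) - pnorm(-2), df = 2))  # radius for the 95.45% level
theta <- seq(0, 2 * pi, length.out = 200)
circle <- rbind(cos(theta), sin(theta))
ell <- t(mu + r * e$vectors %*% (sqrt(e$values) * circle))
plot(ell, type = "l", asp = 1)  # the principal axes are the eigenvectors
```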
| null | CC BY-SA 3.0 | null | 2011-04-27T11:35:54.217 | 2011-04-27T11:35:54.217 | null | null | 3911 | null |
10026 | 2 | null | 10020 | 8 | null | Suppose that the first set of predictors requires $a$ degrees of freedom ($a \geq 4$ allowing for nonlinear terms), the second set requires $b$, and the third requires $c$ ($c \geq 3$) allowing for nonlinear terms. Compute the likelihood ratio $\chi^2$ test for the combined partial effects of each set, yielding $L_{1}, L_{2}, L_{3}$. The expected value of a $\chi^2$ random variable with $d$ degrees of freedom is $d$, so subtract $d$ to level the playing field. I.e., compute $L_{1}-a, L_{2}-b, L_{3}-c$. If using F-tests, multiple F by its numerator d.f. to get the $\chi^2$ scale.
| null | CC BY-SA 4.0 | null | 2011-04-27T11:43:16.597 | 2020-04-02T16:58:41.300 | 2020-04-02T16:58:41.300 | 21054 | 4253 | null |
10027 | 2 | null | 10011 | 3 | null | I assume you are talking about Neonatal Behavioral Assessment Scale values in Hereditary Renal Adysplasia.
I often see in medical research that physicians want cut-offs and simple threshold-based interpretations of their research results, based merely on the distribution of the measurements. Practice and applications however usually need a high positive predictive value or a high negative predictive value, so the characteristics of the future population tested have to be considered. My point of view is that even if for now you just want to "differentiate two groups", you will probably want to apply this somehow in the future, and thus you probably want to find the optimal threshold, optimising costs, risks and benefits (survival, quality of life etc.) in a practical setting. So I suggest that you think these over in your application.
| null | CC BY-SA 3.0 | null | 2011-04-27T11:59:18.457 | 2011-04-27T11:59:18.457 | null | null | 3911 | null |
10028 | 2 | null | 10001 | 33 | null | (Updated 6 IX 2015 with suggestions from comments, also made CW)
There are two new, nice packages available for R which are pretty well optimised for certain conditions:
- ranger -- C++, R package, optimised for $p>>n$ problems, parallel, special treatment of GWAS data.
- Arborist -- C++, R and Python bindings, optimised for large-$n$ problems, apparently plans for GPGPU.
Other RF implementations:
- The Original One -- standalone Fortran code, not parallel, pretty hard to use.
- randomForest -- C, R package, probably the most popular, not parallel, actually quite fast when compared on a single-core speed basis, especially for small data.
- randomForestSRC -- C, R package, clone of randomForest supporting parallel processing and survival problems.
- party -- C, R package, quite slow, but designed as a plane for experimenting with RF.
- bigrf -- C++/R, R package, built to work on big data within the bigmemory framework; quite far from being complete.
- scikit learn Ensemble forest -- Python, part of scikit-learn framework, parallel, implements many variants of RF.
- milk's RF -- Python, part of milk framework.
- so-called WEKA rf -- Java/WEKA, parallel.
- ALGLIB
- rt-rank -- abandoned?
The [Ranger paper](http://arxiv.org/abs/1508.04409) has some speed/memory comparisons, but there is no thorough benchmark.
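As a minimal sketch (assuming the `ranger` package is installed), `ranger` can serve as a near drop-in replacement for `randomForest`:

```r
# Fit a random forest on the built-in iris data with ranger
library(ranger)
fit <- ranger(Species ~ ., data = iris, num.trees = 500, importance = "impurity")
fit$prediction.error          # out-of-bag error estimate
head(predict(fit, iris)$predictions)
```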
| null | CC BY-SA 4.0 | null | 2011-04-27T12:02:47.953 | 2021-04-14T12:10:45.183 | 2021-04-14T12:10:45.183 | -1 | null | null |
10029 | 2 | null | 10008 | 1 | null | If your goal is to measure similarity between individual users or groups of users, you may use the similarity or distance measures employed in cluster analysis, biclustering, or multidimensional scaling. In situations where you need such a measure, those techniques themselves may be useful too.
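For instance, with a hypothetical user-by-item rating matrix, base R already provides both distance and correlation-based similarity between users:

```r
# Hypothetical rating matrix: rows are users, columns are items
ratings <- rbind(u1 = c(5, 3, 0, 1),
                 u2 = c(4, 0, 0, 1),
                 u3 = c(1, 1, 0, 5))

dist(ratings)                        # Euclidean distances between users
cor(t(ratings), method = "pearson")  # Pearson correlation as a similarity
```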
| null | CC BY-SA 3.0 | null | 2011-04-27T12:11:40.880 | 2011-04-27T12:11:40.880 | null | null | 3911 | null |
10030 | 1 | null | null | 4 | 2993 | On the Internet there is an example of the K-S test being applied to the distribution of the number of bird varieties observed over five different hour-long periods.
The observed distribution was:
```
a=c(0,1,1,9,4)
```
The expected distribution (if there is no difference between the five hours) could be:
```
b=c(3,3,3,3,3)
```
After I found the two cumulative distributions, I calculated D = 0.4667 by hand (the same value as reported in the internet example).
But if I try to use R, I find a different value of D:
```
> a=c(0,1,1,9,4)
> b=c(3,3,3,3,3)
> ks.test(a,b)
Two-sample Kolmogorov-Smirnov test
data: a and b
D = 0.6, p-value = 0.3291
alternative hypothesis: two-sided .......
```
- What is leading to the difference between my manual calculation and the result that R gives?
| Difference between K-S manual test and K-S test with R? | CC BY-SA 3.0 | null | 2011-04-27T12:34:12.893 | 2011-05-02T13:55:08.293 | 2011-05-02T13:55:08.293 | 183 | 4345 | [
"r",
"kolmogorov-smirnov-test"
] |
10031 | 1 | 10034 | null | 3 | 267 | I have a multinomial logistic regression model.
One of the output categories is not observed in the data set that I'm using.
### Example:
- 4 different diagnoses (response variable) in the population, but in the sample, Type 3 never occurred
- 5 hormone level measurements (predictors)
### Question
- What books/papers discuss the mathematics of handling this situation in logistic regression?
| How to handle categorical dependent variable using logistic regression when one of the categories never occurs in the sample | CC BY-SA 3.0 | null | 2011-04-27T13:19:43.057 | 2013-11-15T13:03:53.140 | 2011-04-28T15:18:55.860 | 183 | 3280 | [
"logistic"
] |
10032 | 2 | null | 10030 | 6 | null | You are testing a different thing.
While you think `c(0,1,1,9,4)` means you are looking at 0 values of one, 1 value of two, 1 value of three, 9 values of four, and 4 values of five, R thinks you are looking at one value of 0, two values of 1, one value of 9, and one value of 4.
To get D = 0.4667..., try the rather verbose
```
ks.test( c(2,3,4,4,4,4,4,4,4,4,4,5,5,5,5),
c(1,1,1,2,2,2,3,3,3,4,4,4,5,5,5) )
```
giving
```
Two-sample Kolmogorov-Smirnov test
D = 0.4667, p-value = 0.07626
alternative hypothesis: two-sided
```
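Rather than typing out the expanded samples by hand, you can generate them from the original count vectors with `rep()`, which repeats each value (here the category labels 1 to 5) the given number of times:

```r
# Expand the count vectors into samples of individual observations
a <- c(0, 1, 1, 9, 4)   # observed counts per category
b <- c(3, 3, 3, 3, 3)   # expected counts per category

ks.test(rep(1:5, times = a), rep(1:5, times = b))
# rep(1:5, times = a) gives 2 3 4 4 4 4 4 4 4 4 4 5 5 5 5
```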
| null | CC BY-SA 3.0 | null | 2011-04-27T14:08:24.743 | 2011-04-27T14:08:24.743 | null | null | 2958 | null |