Id stringlengths 1 6 | PostTypeId stringclasses 7 values | AcceptedAnswerId stringlengths 1 6 ⌀ | ParentId stringlengths 1 6 ⌀ | Score stringlengths 1 4 | ViewCount stringlengths 1 7 ⌀ | Body stringlengths 0 38.7k | Title stringlengths 15 150 ⌀ | ContentLicense stringclasses 3 values | FavoriteCount stringclasses 3 values | CreationDate stringlengths 23 23 | LastActivityDate stringlengths 23 23 | LastEditDate stringlengths 23 23 ⌀ | LastEditorUserId stringlengths 1 6 ⌀ | OwnerUserId stringlengths 1 6 ⌀ | Tags list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
11362 | 2 | null | 11359 | 4 | null | Taking the square root is sometimes advocated to make a non-normal variable appear like a normal variable in regression problems. The logarithm is another common possible transformation.
| null | CC BY-SA 3.0 | null | 2011-05-30T09:58:02.853 | 2011-05-30T09:58:02.853 | null | null | 3019 | null |
11363 | 2 | null | 11359 | 10 | null | The square-root transformation is just a special case of the Box-Cox power transformation (a nice overview by Pengfei Li, which could be useful reading, is found [here](http://www.stat.uconn.edu/~studentjournal/index_files/pengfi_s05.pdf)), with $\lambda = 0.5$ and omitting some centering.
> The aim of the Box-Cox transformations is to ensure the usual assumptions for the linear model hold; that is, $y\sim N(X\beta, \sigma^2 I_n)$.
However, this a priori fixed value could be (and probably is) not optimal. In R you may consider the `powerTransform` function from the `car` package, which helps estimate an optimal value of the Box-Cox parameter for each of the variables participating in a linear regression, or for any other data you work with (see `example(powerTransform)` for further details).
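The same maximum-likelihood choice of $\lambda$ can be illustrated outside R as well; here is a minimal Python sketch using `scipy.stats.boxcox` (the data are simulated, so this only shows the mechanics):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Log-normal data: for these the optimal Box-Cox exponent should sit near 0,
# i.e. the estimated transformation is close to a plain log transform.
y = rng.lognormal(mean=0.0, sigma=1.0, size=2000)

y_transformed, lam = stats.boxcox(y)  # lambda chosen by maximum likelihood
print(f"estimated lambda: {lam:.3f}")
```

`stats.boxcox` requires strictly positive data, just like the Box-Cox family itself.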
| null | CC BY-SA 3.0 | null | 2011-05-30T10:42:59.890 | 2011-05-30T10:54:55.480 | 2011-05-30T10:54:55.480 | 2645 | 2645 | null |
11365 | 2 | null | 6656 | 1 | null | The "expected entropy" at a particular time step given a particular starting point is a well-defined quantity, and you could certainly study it. There is no particular reason to favor the "median" entropy without knowing anything else about the system. You should do experiments with as diverse a set of starting configurations as possible to get a better understanding of your system.
| null | CC BY-SA 3.0 | null | 2011-05-30T13:29:35.353 | 2011-05-30T13:29:35.353 | null | null | 3567 | null |
11366 | 2 | null | 11359 | 5 | null | When the variable follows a Poisson distribution, the results of the square root transform will be much closer to Gaussian.
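This is the classic variance-stabilising property of the square root for counts: by the delta method, $\operatorname{Var}(\sqrt{X})\approx 1/4$ whatever the Poisson mean. A quick simulation sketch in Python:

```python
import numpy as np

rng = np.random.default_rng(42)

# The SD of sqrt(X) stays near 1/2 across Poisson means,
# while the SD of X itself grows like the square root of the mean.
for mean in (5, 20, 100):
    x = rng.poisson(mean, size=100_000)
    print(f"mean={mean:3d}  sd(X)={x.std():6.2f}  sd(sqrt(X))={np.sqrt(x).std():.3f}")
```

The approximation is rougher for small means, which is why the transform works best when counts are not too close to zero.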
| null | CC BY-SA 3.0 | null | 2011-05-30T14:00:01.927 | 2011-05-30T14:00:01.927 | null | null | 25 | null |
11368 | 1 | 14756 | null | 27 | 6778 | Suppose I have the following model
$$y_i=f(x_i,\theta)+\varepsilon_i$$
where $y_i\in \mathbb{R}^K$ , $x_i$ is a vector of explanatory variables, $\theta$ is the parameters of non-linear function $f$ and $\varepsilon_i\sim N(0,\Sigma)$, where $\Sigma$ naturally is $K\times K$ matrix.
The goal is, as usual, to estimate $\theta$ and $\Sigma$. The obvious choice is the maximum likelihood method. The log-likelihood for this model (assuming we have a sample $(y_i,x_i),\ i=1,\dots,n$) looks like
$$l(\theta,\Sigma)=-\frac{nK}{2}\log(2\pi)-\frac{n}{2}\log\det\Sigma-\frac{1}{2}\sum_{i=1}^n(y_i-f(x_i,\theta))'\Sigma^{-1}(y_i-f(x_i,\theta))$$
Now this seems simple, the log-likelihood is specified, put in data, and use some algorithm for non-linear optimisation. The problem is how to ensure that $\Sigma$ is positive definite. Using for example `optim` in R (or any other non-linear optimisation algorithm) will not guarantee me that $\Sigma$ is positive definite.
So the question is: how do I ensure that $\Sigma$ stays positive definite? I see two possible solutions:
- Reparametrise $\Sigma$ as $RR'$ where $R$ is upper-triangular or symmetric matrix. Then $\Sigma$ will always be positive-definite and $R$ can be unconstrained.
- Use profile likelihood. Derive the formulas for $\hat\theta(\Sigma)$ and $\hat{\Sigma}(\theta)$. Start with some $\theta_0$ and iterate $\hat{\Sigma}_j=\hat\Sigma(\hat\theta_{j-1})$, $\hat{\theta}_j=\hat\theta(\hat\Sigma_{j-1})$ until convergence.
Is there some other way? And as for these two approaches: will they work, and are they standard? This seems like a pretty standard problem, but a quick search did not give me any pointers. I know that Bayesian estimation would also be possible, but for the moment I would rather not engage in it.
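To make the first option concrete, here is a minimal Python sketch of the reparametrisation $\Sigma = RR'$ with an unconstrained parameter vector (the layout below, a lower triangle with an exponentiated diagonal, is one common choice, not the only one):

```python
import numpy as np

def params_to_sigma(params, k):
    """Map k*(k+1)/2 unconstrained numbers to a positive-definite k x k matrix.

    The diagonal of the Cholesky factor R is exponentiated so it stays
    strictly positive, hence Sigma = R R' is positive definite for any input.
    """
    R = np.zeros((k, k))
    R[np.tril_indices(k)] = params
    R[np.diag_indices(k)] = np.exp(np.diag(R))
    return R @ R.T

# Any unconstrained vector maps to a valid covariance matrix:
sigma = params_to_sigma(np.array([0.3, -1.2, 0.7, 2.0, 0.1, -0.5]), k=3)
print(np.linalg.eigvalsh(sigma))  # all eigenvalues strictly positive
```

An unconstrained optimiser (e.g. `scipy.optimize.minimize`) can then search over `params` directly; for the second option, the profile estimate $\hat\Sigma(\theta)=\frac{1}{n}\sum_i r_i r_i'$ with $r_i = y_i - f(x_i,\theta)$ plays the analogous role.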
| How to ensure properties of covariance matrix when fitting multivariate normal model using maximum likelihood? | CC BY-SA 3.0 | null | 2011-05-30T14:18:25.663 | 2017-09-12T14:34:31.000 | 2011-05-31T06:30:58.367 | 2116 | 2116 | [
"maximum-likelihood",
"optimization",
"covariance"
] |
11370 | 1 | 11371 | null | 2 | 630 | I have data from an experiment that tests how the order of two study methods (visual or auditory) affects word recall. For the analysis, a multi-factor ANOVA with a repeated measure is appropriate, but I am not sure if I am structuring my data correctly.
This is the command I'm using:
`aov(recalled_items ~ task*order + Error(subject/task))`
Here is an example of the data structure:
```
Subject Task Order Recalled Items
A Visual First 13
A Auditory Second 22
B Visual First 14
B Auditory Second 28
C Visual Second 10
C Auditory First 15
D Visual Second 14
D Auditory First 29
```
- Does R know to compare Visual 1 and Visual 2 recall values and Auditory 1 and Auditory 2 recall values?
I am worried that because of the way I structured my data R is just comparing Visual 1 and Auditory 1 and as a result I am getting no effect.
| Testing a 2 by 2 mixed ANOVA in R | CC BY-SA 3.0 | 0 | 2011-05-30T14:36:08.930 | 2011-06-02T15:46:52.483 | 2011-05-30T15:11:25.247 | 307 | 4804 | [
"r",
"anova",
"dataset"
] |
11371 | 2 | null | 11370 | 6 | null | Your method won't work because it's going to treat order as a within-subjects factor. Try this...
```
Subject Task Order Recalled Items
A Visual vFirst 13
A Auditory vFirst 22
D Visual aFirst 14
D Auditory aFirst 28
```
Or something like that.
| null | CC BY-SA 3.0 | null | 2011-05-30T15:12:41.203 | 2011-05-30T15:12:41.203 | null | null | 601 | null |
11372 | 1 | null | null | 10 | 7661 | I have to run a factor analysis on a dataset made up of dichotomous variables (0 = yes, 1 = no) and I don't know if I'm on the right track.
Using `tetrachoric()` I create a correlation matrix, on which I run `fa(data,factors=1)`.
The result is quite close to the results I get when using [MixFactor](https://typo3.univie.ac.at/index.php?id=78913), but it's not the same.
- Is this ok or would you recommend another procedure?
- Why does `fa()` work while `factanal()` produces an error? (Error in solve.default(cv): system is computationally singular: reciprocal condition number = 4.22612e-18)
| Recommended procedure for factor analysis on dichotomous data with R | CC BY-SA 3.0 | null | 2011-05-30T15:49:10.737 | 2021-03-24T12:47:04.527 | 2012-03-20T21:45:41.057 | 930 | 4805 | [
"r",
"factor-analysis",
"psychometrics",
"binary-data"
] |
11373 | 2 | null | 11359 | 18 | null | In general, parametric regression / GLM assume that the relationship between the $Y$ variable and each $X$ variable is linear, that the residuals once you've fitted the model follow a normal distribution and that the size of the residuals stays about the same all the way along your fitted line(s). When your data don't conform to these assumptions, transformations can help.
It should be intuitive that if $Y$ is proportional to $X^2$ then square-rooting $Y$ linearises this relationship, leading to a model that better fits the assumptions and that explains more variance (has higher $R^2$). Square rooting $Y$ also helps when you have the problem that the size of your residuals progressively increases as your values of $X$ increase (i.e. the scatter of data points around the fitted line gets more marked as you move along it). Think of the shape of a square root function: it increases steeply at first but then saturates. So applying a square root transform inflates smaller numbers but stabilises bigger ones. So you can think of it as pushing small residuals at low $X$ values away from the fitted line and squishing large residuals at high $X$ values towards the line. (This is mental shorthand not proper maths!)
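The linearisation point in the first sentence can be seen in a few lines; this is just an illustrative Python simulation, not part of the original answer:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, 500)
y = x**2 * rng.lognormal(0, 0.1, 500)  # y roughly proportional to x squared

# The correlation with x tightens once y is square-rooted
print(np.corrcoef(x, y)[0, 1], np.corrcoef(x, np.sqrt(y))[0, 1])
```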
As Dmitrij and ocram say, this is just one possible transformation which will help in certain circumstances, and tools like the Box-Cox formula can help you pick the most useful one. I would advise getting into the habit of always looking at plots of residuals against fitted values (and also a normal probability plot or histogram of residuals) when you fit a model. You'll often find you end up being able to see from these what sort of transformation will help.
| null | CC BY-SA 3.0 | null | 2011-05-30T16:01:00.770 | 2011-05-30T19:20:54.643 | 2011-05-30T19:20:54.643 | 2116 | 266 | null |
11374 | 1 | 11379 | null | 3 | 949 | I'm a statistics newbie and I would like to make sense of the p-value in ANOVA, hopefully with a visual presentation. All the online visual tools I've found so far only present the MSB, MSE and F values visually (such as [this one](http://www.psych.utah.edu/aoce/tools/Anova/anovatool.html)). I'm a very visual person and I would like to see how it works before I learn more about it.
Do you have any suggestions for me? It could be a Matlab program as well, something where I play with the distributions and see the p value change.
| Visual representations of the p-value in ANOVA to assist intuitive understanding | CC BY-SA 3.0 | null | 2011-05-30T16:12:20.290 | 2013-09-02T09:14:18.117 | 2013-09-02T09:14:18.117 | 27581 | 4806 | [
"hypothesis-testing",
"anova",
"data-visualization"
] |
11375 | 1 | null | null | 2 | 1366 | I have a question about the adjusted $R^2$ of a specific regression model.
I am doing a project on January effect and I have a model from some journal using
$$R_i = a_0 + a_1D_{\mathrm{Jan}} + \varepsilon_i$$
where
- $R_i$ is daily return of portfolio/index,
- $a_0$ is non-January daily returns,
- $a_1$ is January returns over non January returns,
- $D_{\mathrm{Jan}}$ is dummy variable (1 for Jan, 0 otherwise).
With this model I tried to test whether the January return is significantly greater than the non-January return, especially in small-capitalization stocks.
So I have returns for size-sorted portfolios 1 to 4, where Portfolio 1 (P1) consists of the smallest-cap stocks and P4 consists of the largest-cap stocks.
What I did was run the regression in Excel and SPSS, using all daily returns of each portfolio from January to December over 10 years as the dependent variable and the dummy (1 for January, 0 otherwise) as the independent variable, and I got a negative adjusted $R^2$ for all portfolios (P1-P4), mostly about -0.05. The $R^2$ itself was also very small, at about 0.0001.
The journal article I am basing my model on used monthly returns as $R_i$, but I modified it to daily returns. I still get a negative adjusted $R^2$ when I use monthly returns.
Can anyone please help me point out what is wrong with the model? Did I use the wrong input for the given model? If so, what should the correct input be? Or, if the model is wrong, what model should I use?
Here are the results of my test, the p-value of the dummy variable and the adj $R^2$.
- P1: p-value 0.54, adj $R^2$ = -0.0002
- P2: p-value 0.36, adj $R^2$ = 0.0004
- P3: p-value 0.68, adj $R^2$ = -0.0003
- P4: p-value 0.14, adj $R^2$ = 0.0005
I understand that the variables are all insignificant, but I am confused about how to interpret the $R^2$.
| Why is a regression model of portfolio return giving smaller adjusted R-square (i.e., negative) than expected? | CC BY-SA 3.0 | null | 2011-05-30T17:50:09.890 | 2013-01-09T19:37:39.950 | 2013-01-09T17:45:22.463 | 17230 | 4808 | [
"regression",
"self-study",
"spss",
"model-selection",
"r-squared"
] |
11377 | 2 | null | 11375 | 5 | null | You've included an interaction term without including both of the main effects that are the components of that interaction. According to standard practice, you need a term for January returns. Exceptions to this rule are rare though they have been discussed on this site recently at [Including the interaction but not the main effects in a model](https://stats.stackexchange.com/questions/11009/including-the-interaction-but-not-the-main-effects-in-a-model)
Beyond that (which may no longer apply after edits to the question), many people obtain a negative adjusted $R^2$ when trying to predict something as difficult as stock returns. The $R^2$ itself is so tiny that once the model is penalized for its number of predictors ($k$), the resulting adjusted $R^2$ quite often goes negative. This is especially true if the sample size is small, since $n$, along with $k$, determines the size of the adjustment: adjusted $R^2 = 1 - \frac{(1 - R^2)(n - 1)}{n - k - 1}$.
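Plugging in numbers of the same order as those in the question (an illustrative sketch, not the questioner's actual data):

```python
def adjusted_r2(r2, n, k):
    """Adjusted R-squared for a model with k predictors fit on n observations."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Roughly 10 years of daily returns, one dummy predictor, tiny raw R-squared:
print(adjusted_r2(r2=0.0001, n=2500, k=1))  # comes out slightly negative
```

With a raw $R^2$ this small, the penalty term alone is enough to push the adjusted value just below zero, exactly the pattern in the question.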
| null | CC BY-SA 3.0 | null | 2011-05-30T21:17:59.833 | 2013-01-09T19:37:39.950 | 2017-04-13T12:44:21.613 | -1 | 2669 | null |
11378 | 1 | null | null | 4 | 574 | I am attempting to perform a two-group confirmatory factor analysis (CFA) of one continuous factor on six ordinal predictors in `OpenMx` (`R`) using (robust) Weighted Least Squares (WLS) estimation. While I am completely new to OpenMx (I have only used `sem` and `lavaan`), I think I have most things worked out now: I have a way of estimating polychoric correlations/covariances, a way of estimating the latent means in one group, and a good idea of how to implement this model in `OpenMx`.
I just have one problem, I need a weight matrix in WLS estimation. This should be an asymptotic covariance matrix of the elements of the observed correlations/covariances.
What is this exactly and how can I compute such a matrix?
| How to compute the weight matrix for WLS estimation of a multi-group ordinal CFA model | CC BY-SA 3.0 | null | 2011-05-30T21:43:35.950 | 2011-05-30T21:56:35.773 | 2011-05-30T21:56:35.773 | 3094 | 3094 | [
"r",
"factor-analysis",
"computational-statistics"
] |
11379 | 2 | null | 11374 | 3 | null | Here is a toy example for simulating a one-way ANOVA in R.
First, I just defined a general function that expects an effect size (`es`), which is simply the ratio MSB/MSW (between/within mean squares), a value for the MSB, and the number of groups, which may or may not be of equal sizes:
```
sim.exp <- function(es=0.25, msb=10, groups=5, n=NULL, verbose=FALSE) {
msw <- msb/es
N <- ifelse(is.null(n), sample(10:40, groups), groups*n)
means <- rnorm(n=groups, mean=0, sd=sqrt(msb))
my.df <- data.frame(grp=gl(groups, 1, N),
y=rnorm(N, means, sqrt(msw)))
aov.res <- aov(y ~ grp, my.df)
if (verbose) print(summary(aov.res))
ave <- with(my.df, tapply(y, grp, function(x) c(mean(x), sd(x))))
invisible(list(ave=ave, p.value=summary(aov.res)[[1]][1,5]))
}
```
This function returns the p-value associated with the F-test, as well as the sample means and SDs. We can use it as follows:
```
> sim.exp(verbose=TRUE)
Df Sum Sq Mean Sq F value Pr(>F)
grp 4 32.71 8.176 0.1875 0.9418
Residuals 18 784.93 43.607
> sim.exp(es=2, verbose=TRUE)
Df Sum Sq Mean Sq F value Pr(>F)
grp 4 555.66 138.915 33.567 1.653e-09 ***
Residuals 24 99.32 4.138
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> sim.exp(es=.5, n=30, groups=3, verbose=TRUE)
Df Sum Sq Mean Sq F value Pr(>F)
grp 2 639.12 319.56 16.498 8.42e-07 ***
Residuals 87 1685.13 19.37
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
Then I created a grid of values for `es` and `msb`; that is, I want to check whether varying these parameters has an effect on the estimated p-value.
```
my.design <- expand.grid(es=seq(.2, 2.4, by=.2), msb=seq(2, 10, by=2))
n.sim <- nrow(my.design)
```
Finally, let's use it. First, with a single replicate of each condition:
```
for (i in 1:n.sim)
my.design$p[i] <- sim.exp(my.design[i,1], my.design[i,2], n=20)$p.value
```
As can be seen, as the effect size increases we are more likely to reject the null (averaging over MSB):
```
> with(my.design, aggregate(p, list(es=es), mean))
es x
1 0.2 1.178042e-01
2 0.4 1.315028e-02
3 0.6 5.765548e-02
4 0.8 5.742882e-02
5 1.0 8.940993e-05
6 1.2 9.199611e-09
7 1.4 9.115640e-06
8 1.6 8.100427e-10
9 1.8 2.656848e-07
10 2.0 3.577391e-05
11 2.2 5.477981e-14
12 2.4 1.219156e-04
```
The results are shown below, although for clarity I took the log of the p-value. The horizontal dashed line shows the 5% limit for type I risk.

OK, it's somewhat noisy. So let's try averaging p-values over 500 replicates in each condition:
```
for (i in 1:n.sim)
my.design$p[i] <- mean(unlist(replicate(500,
sim.exp(my.design[i,1], my.design[i,2], n=20))[2,]))
```
and the results are:

We can play with `es` only as follows:
```
k <- 10000
es1 <- sample(seq(.1, 5, by=.1), k, rep=T)
pp <- numeric(k)
for (i in 1:k)
pp[i] <- sim.exp(groups=3, es=es1[i])$p.value
plot(es1, -log10(pp), pch=19, col="#FF737350", cex=.6, xlab="Effect size (MSB=10)")
xx <- seq(.1, 5, by=.1)
lines(xx, predict(loess(-log10(pp) ~ es1), data.frame(es1=xx)),
col="green", lwd=2)
```

Many other experiments are possible, and the code could probably be improved too.
| null | CC BY-SA 3.0 | null | 2011-05-30T21:56:25.460 | 2011-05-30T21:56:25.460 | null | null | 930 | null |
11381 | 1 | 11382 | null | 10 | 7193 | I've calculated the proportion of chicks fledged out of the number of eggs hatched in each year using `prop.test()` in R. I see that it gives me the proportion fledged, but also the 95% confidence interval, which is what I'm after.
Having read the excellent information from another question on this site [here](https://stats.stackexchange.com/questions/4713/binomial-confidence-interval-estimation-why-is-it-not-symmetric), I understand why I don't have symmetry in my 95% CIs!
- However, how should I report this in a paper?
I've seen people report values as 38% (±0.2%), with an indication that the value in brackets is 95% CI. Obviously this won't work for asymmetrical CIs. Must I report the upper and lower values in these cases?
| How to report asymmetrical confidence intervals of a proportion? | CC BY-SA 3.0 | null | 2011-05-31T00:43:26.973 | 2011-10-06T00:52:53.107 | 2017-04-13T12:44:33.310 | -1 | 4238 | [
"r",
"confidence-interval",
"binomial-distribution"
] |
11382 | 2 | null | 11381 | 9 | null | You should report the lower and upper limits, and also the method used to calculate the interval.
It turns out that there is no 'right' way to calculate confidence intervals for proportions, but instead many competing methods, each with advantages and disadvantages. The lack of a universally correct method stands in contrast to many statistical things that you might put numbers to, like means and standard deviations. For your interval to be fully specified you have to say how you calculated it.
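As one concrete example, the Wilson score interval (one of those competing methods) is asymmetric around the point estimate whenever $\hat p \neq 0.5$. A small Python sketch with made-up numbers:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    phat = successes / n
    denom = 1 + z**2 / n
    centre = phat + z**2 / (2 * n)
    half_width = z * math.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2))
    return (centre - half_width) / denom, (centre + half_width) / denom

lower, upper = wilson_interval(38, 100)
print(f"38% fledged, 95% CI {lower:.1%} to {upper:.1%} (Wilson score)")
```

Reported this way, the reader gets both limits and the method name, which fully specifies the interval.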
| null | CC BY-SA 3.0 | null | 2011-05-31T01:18:08.367 | 2011-05-31T01:18:08.367 | null | null | 1679 | null |
11383 | 2 | null | 11374 | 2 | null | There could be a good reason why only the MSB, MSE, and F values are shown. These are what is "actually important" in the analysis, so to speak. A p-value is just the sampling probability, under the null hypothesis, of an F statistic (the ratio MSB/MSE) at least as large as the one observed.
You don't need a p-value if you understand how to interpret these quantities. The p-value doesn't provide you with "extra" information, and it is very easy to misinterpret (as, say, the probability of the hypothesis, the probability of an erroneous conclusion, the probability of a Type I error, etc.). It can be quite fun to have a "p-value bash" every now and then :)
It is much more useful and informative to look at effect sizes. For if they are all pretty close, to within say plus or minus one standard error, then you already know, without any need for hypothesis testing, that the data provide support in favour of the hypothesis of equal means $H_0:\;\mu_0=\mu_1=\dots=\mu_k$. You only need a hypothesis test, in the formal setting, when it isn't "obvious" whether to accept or reject, say if one or two out of 20 means were between 2 and 3 standard errors away from the rest. This is when the hypothesis test will help. Conversely, if you see 10 out of the 20 groups over 3 standard errors apart, then you don't need the test: you know the null is not supported by the data.
| null | CC BY-SA 3.0 | null | 2011-05-31T03:53:48.550 | 2011-05-31T03:53:48.550 | null | null | 2392 | null |
11384 | 1 | 11404 | null | 13 | 2952 | I want to apply a PCA on a dataset, which consists of mixed type variables (continuous and binary). To illustrate the procedure, I paste a minimal reproducible example in R below.
```
# Generate synthetic dataset
set.seed(12345)
n <- 100
x1 <- rnorm(n)
x2 <- runif(n, -2, 2)
x3 <- x1 + x2 + rnorm(n)
x4 <- rbinom(n, 1, 0.5)
x5 <- rbinom(n, 1, 0.6)
data <- data.frame(x1, x2, x3, x4, x5)
# Correlation matrix with appropriate coefficients
# Pearson product-moment: 2 continuous variables
# Point-biserial: 1 continuous and 1 binary variable
# Phi: 2 binary variables
# For testing purposes use hetcor function
library(polycor)
C <- as.matrix(hetcor(data=data))
# Run PCA
pca <- princomp(covmat=C)
L <- loadings(pca)
```
Now, I wonder how to calculate component scores (i.e., the raw variables weighted by the component loadings). When the dataset consists of continuous variables only, component scores are simply obtained by multiplying the (scaled) raw data by the eigenvectors stored in the loading matrix (`L` in the example above). Any pointers would be greatly appreciated.
| PCA and component scores based on a mix of continuous and binary variables | CC BY-SA 3.0 | null | 2011-05-31T07:02:42.717 | 2011-05-31T18:12:02.890 | 2011-05-31T07:45:14.223 | 183 | 609 | [
"r",
"pca"
] |
11385 | 1 | 11386 | null | 8 | 20310 | I am trying to test whether my regression has an issue of heteroscedasticity. After running a regression, I can clearly see that the residual plot has a pattern. After taking a log of the dependent variable the pattern is much, much reduced. The White's test on the original formula returns a p-value of 0.0004 before the transformation (the model with strong pattern in residuals), and a p-value of 0.08 after the log transformation.
I can see from the plot that the second model has less heteroscedasticity, but how do I interpret the results of White's test? Does the first value mean that we can reject homoscedasticity at the 99.96% confidence level, while in the second model we can reject it at, say, 95% confidence?
| Linear regression, heteroscedasticity, White's test interpretation? | CC BY-SA 3.0 | null | 2011-05-31T07:05:36.300 | 2015-02-15T18:51:15.097 | 2011-10-17T09:25:39.843 | 2116 | 4814 | [
"regression",
"econometrics",
"heteroscedasticity"
] |
11386 | 2 | null | 11385 | 6 | null | The [original White paper](http://www.jstor.org/stable/1912934) where the test statistic was proposed is an enlightening read. This excerpt I think is of interest here:
> ...the null hypothesis maintains not only that the errors are homoskedastic, but also that they are independent of the regressors, and that the model is correctly specified... Failure of any of these conditions can lead to a statistically significant test statistic.
Assuming that the model is correctly specified, your results indicate that in the non-transformed case there is a clear presence of heteroskedasticity, while in the log case there is no heteroskedasticity at the 5% significance level, but there is at the 10% level. This means that in the log case further tests should be made, since the test only "barely" accepts the null hypothesis of no heteroskedasticity. For me personally this would be an indication that maybe the model specification is not correct and other heteroskedasticity tests should be made. Incidentally, White gives an overview of alternative tests in his article: Godfrey, Goldfeld-Quandt, etc.
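To see the mechanics, White's test is an auxiliary regression of the squared residuals on the regressors, their squares and cross-products, with $nR^2$ referred to a $\chi^2$ distribution. A minimal single-regressor sketch in Python (illustrative only; on real data you would normally use a library implementation):

```python
import numpy as np
from scipy import stats

def white_test_1d(x, resid):
    """White's heteroskedasticity test for a single regressor.

    Regress squared residuals on [1, x, x^2]; under homoskedasticity
    n * R^2 is asymptotically chi-squared with 2 degrees of freedom.
    """
    n = len(x)
    Z = np.column_stack([np.ones(n), x, x**2])
    e2 = resid**2
    beta, *_ = np.linalg.lstsq(Z, e2, rcond=None)
    fitted = Z @ beta
    r2 = 1 - np.sum((e2 - fitted)**2) / np.sum((e2 - e2.mean())**2)
    lm = n * r2
    return lm, stats.chi2.sf(lm, df=2)

# Simulated regression with error variance growing in x
rng = np.random.default_rng(7)
x = rng.uniform(1, 5, 500)
y = 2 + 3 * x + rng.normal(0, x)          # heteroskedastic errors
resid = y - np.polyval(np.polyfit(x, y, 1), x)
lm_stat, p_value = white_test_1d(x, resid)
print(f"LM = {lm_stat:.1f}, p = {p_value:.2g}")  # small p: reject homoskedasticity
```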
| null | CC BY-SA 3.0 | null | 2011-05-31T07:26:43.670 | 2011-05-31T07:26:43.670 | null | null | 2116 | null |
11387 | 1 | null | null | 3 | 344 | Let's say I have sequences of symbols which can take five values: A, B, C, X, Y. The average sequence length is around 7.
It is important that the symbols A, B, and C have greater importance than X and Y, which may be considered as 'anything other than A, B, or C'.
I need to classify these data into two classes: positive and negative. The positive class is composed of sequences that are generally well aligned, like
```
X X A B Y C
A B C X X
A Y A X B C X X
```
Note that positive examples generally have the symbols A, B and C in that order.
Negative examples look messier, like
```
B X A X X X X C
C A Y Y X X B
```
My first thought was that entropy was the key to this problem. I checked various papers but nothing was really satisfying. So my question is:
Which features would you use for classification?
| Classification of sequences of symbols | CC BY-SA 3.0 | null | 2011-05-31T08:55:38.270 | 2011-05-31T15:16:16.697 | 2011-05-31T09:10:44.403 | 2116 | 2505 | [
"classification"
] |
11388 | 2 | null | 11387 | 2 | null | Rather than hand-crafted features, I would use recurrent neural networks. They are very good with symbolic sequences: for example, they are able to recognize context-sensitive languages.
Check out [Biologically Plausible Speech Recognition with LSTM Neural Nets (Graves, Schmidhuber)](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.10.5339&rep=rep1&type=pdf) for an explanation of how to use RNNs for classification. See [Generating Text with Recurrent Neural Networks (Sutskever, Martens, Hinton)](http://www.google.de/url?sa=t&source=web&cd=1&ved=0CCIQFjAA&url=http://www.icml-2011.org/papers/524_icmlpaper.pdf&ei=b6_kTcXmLMPPsga8wrSABg&usg=AFQjCNHGCGk9fYM9owhkFX6VahYzLadLAw&sig2=isZS8pi_fMoRm99Q4tzGmw) for an impressive symbolic application of RNNs.
| null | CC BY-SA 3.0 | null | 2011-05-31T09:06:53.763 | 2011-05-31T09:06:53.763 | null | null | 2860 | null |
11389 | 1 | null | null | 8 | 664 | I am working on a new webpage for my part-time job as a methodological/statistical consultant for (psychology) students at my university. On this website I would like to place several links to online resources that clients can consult themselves.
So I am looking for links to websites that offer a lot of statistical information, preferably written in a way that is easy to comprehend. Most students use SPSS, but information on other programs is welcome too.
So far I have:
- www.crossvalidated.com
- www.statmethods.net
| Internet statistics resources suitable for psychology students doing research | CC BY-SA 3.0 | null | 2011-05-31T10:13:58.237 | 2011-05-31T14:22:50.303 | 2011-05-31T12:11:46.573 | 183 | 3094 | [
"spss",
"psychology",
"internet"
] |
11390 | 2 | null | 11387 | 0 | null | Could we do this programmatically? Meaning, writing a piece of code that does what you are saying, e.g. giving "A X X B Y C" more importance than "B X X X A C"?
I agree that this code would be a bit complex, though.
| null | CC BY-SA 3.0 | null | 2011-05-31T10:36:03.260 | 2011-05-31T10:36:03.260 | null | null | 1763 | null |
11391 | 2 | null | 11389 | 4 | null | Copy N' Paste from my Google Reader: [http://jeromyanglim.blogspot.com/](http://jeromyanglim.blogspot.com/)
| null | CC BY-SA 3.0 | null | 2011-05-31T10:44:46.417 | 2011-05-31T10:44:46.417 | null | null | 609 | null |
11392 | 2 | null | 11389 | 2 | null | The [UCLA](http://www.ats.ucla.edu/stat/) server has a lot of ressources for statistical computing, including annotated output from various statistical packages.
| null | CC BY-SA 3.0 | null | 2011-05-31T11:05:52.783 | 2011-05-31T11:05:52.783 | null | null | 930 | null |
11393 | 2 | null | 11389 | 3 | null |
- There's a correct answer here!
  http://faculty.chass.ncsu.edu/garson/PA765/statnote.htm
- Also good:
  http://statcomp.ats.ucla.edu/
  http://dss.princeton.edu/online_help/
  http://www.psych.cornell.edu/darlington/
I know you didn't ask, probably because you already know the answer, but the absolute best statistics texts (for multivariate analysis) for psychologists (and for most other social scientists, although they don't all realize it) are:
- Cohen, J., Cohen, P., West, S.G. & Aiken, L.S. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, (L. Erlbaum Associates, Mahwah, N.J., 2003).
- Judd, C.M., McClelland, G.H. & Ryan, C.S. Data analysis : a model comparison approach, (Routledge/Taylor and Francis, New York, NY, 2008).
| null | CC BY-SA 3.0 | null | 2011-05-31T11:07:04.313 | 2011-05-31T11:29:37.487 | 2011-05-31T11:29:37.487 | 183 | 11954 | null |
11394 | 2 | null | 11389 | 10 | null | In general, encouraging research students to use Google and sites like [Cross Validated](https://stats.stackexchange.com/) to ask and answer their own questions is important.
### Specific Sites
- Andy Field is famous for making statistics more palatable for psychology students. He provides many online resources generally with a focus on SPSS.
- UCLA Statistics Consulting has many useful resources.
- @chl has many good statistics resources with a psychology flavour, such as this one on psychometrics and R
- G. David Garson provides extensive notes on most techniques with a focus on SPSS generally suitable to a psychology research audience.
- David Kenny has lots of resources particularly on SEM, mediation, moderation, and dyadic data analysis.
- Encyclopedia - Psychology and Statistics has an extensive set of links to resources
- mathpsych on Reddit is a small but interesting Reddit community.
### A little self-promotion
One of my main aims over the last few years has been to develop resources designed to assist psychology students perform the data analysis for their research. Thus, I hope you'll forgive the self-promotion. The following links may be relevant:
- Sitemap of the blog. Most of the blog is devoted to saying what I find myself saying to psychology research students in consultation settings. Thus, there's a fair bit of SPSS content in addition to my own preference for R.
- Advice on completing data analysis for a thesis in psychology
- General teaching resources with an SPSS manual and some multivariate course notes
- General thoughts on encouraging students to use sites like Cross Validated
### R in Psychology
I also have a post on [getting started with R](http://jeromyanglim.blogspot.com/2009/06/learning-r-for-researchers-in.html).
The following quotes the section of that post listing specific resources for researchers in psychology.
- Task Views particularly relevant to psychology: Psychometric Models and Methods, Social Sciences, Multivariate
- R Notes for Experimental Psychology
- William Revelle's Psychology R Site; also see the psych package and the online book and workshop resources
- Jonathan Baron and Yelin Li's R for Psychology Experiments
- Drew Conway suggests a list of must-have R packages for the social scientist
- SEM in R
- Mailing list for Psychology and R
- Edinburgh Psychology R-users
- Jason Locklin's notes on standard experimental analyses in psychology
- My posts with the R tag
| null | CC BY-SA 3.0 | null | 2011-05-31T11:07:15.030 | 2011-05-31T14:22:50.303 | 2017-04-13T12:44:52.660 | -1 | 183 | null |
11396 | 1 | 11403 | null | 1 | 2342 | I'm trying to implement the log-normal distribution in my Java program, because a log-normal distribution class doesn't exist in the Apache Commons Math library.
I have no problem re-writing the density and cumulative probability functions by extending the abstract classes of Apache Commons Math, like this:
```
public double cumulativeProbability(double mu, double sigma, double x) {
    if (sigma <= 0.0) {
        throw new IllegalArgumentException("sigma <= 0");
    }
    if (x <= 0.0) {
        return 0.0;
    }
    return this.cumulativeProbability((Math.log(x) - mu) / sigma);
}

public double cumulativeProbability(double x) {
    double var = 0.0;
    NormalDistributionImpl normalDist = new NormalDistributionImpl();
    try {
        var = normalDist.cumulativeProbability(x);
    } catch (MathException e) {}
    return var;
}

public double density(double x) {
    return density(mu, sigma, x);
}

public static double density(double mu, double sigma, double x) {
    if (sigma <= 0)
        throw new IllegalArgumentException("sigma <= 0");
    if (x <= 0)
        return 0;
    double diff = Math.log(x) - mu;
    return Math.exp(-diff * diff / (2 * sigma * sigma))
            / (Math.sqrt(2 * Math.PI) * sigma * x);
}
```
Apache Commons Math requires implementations of a list of utility functions to compute the inverse cumulative probability, and I don't know which upper/lower bound and initial domain my log-normal distribution can take ...
I suppose it's something like x between [0 ; +infinity], but I'm not sure.
Thanks a lot for your help
---
p = Desired probability for the critical value.
Access the domain value lower bound, based on p, used to bracket a CDF root. This method is used by inverseCumulativeProbability(double) to find critical values.
```
public double getDomainLowerBound(double p){
return ?;
}
```
Access the domain value upper bound, based on p, used to bracket a CDF root. This method is used by inverseCumulativeProbability(double) to find critical values.
```
public double getDomainUpperBound(double p){
return ?;
}
```
Access the initial domain value, based on p, used to bracket a CDF root. This
method is used by inverseCumulativeProbability(double) to find critical values.
```
public double getInitialDomain(double p){
return ?;
}
```
Access the lower bound of the support.
Returns:
lower bound of the support (might be Double.NEGATIVE_INFINITY)
```
public double getSupportLowerBound(){
return ?;
}
```
Access the upper bound of the support.
Returns:
upper bound of the support (might be Double.POSITIVE_INFINITY)
```
public double getSupportUpperBound(){
return ?;
}
```
Use this method to get information about whether the lower bound of the support is inclusive or not.
Returns:
whether the lower bound of the support is inclusive or not
```
public boolean isSupportLowerBoundInclusive(){
    return ?;
}
```
Use this method to get information about whether the upper bound of the support is inclusive or not.
Returns:
whether the upper bound of the support is inclusive or not
```
public boolean isSupportUpperBoundInclusive(){
    return ?;
}
```
| Upper/lower bound and initial domain for lognormal distribution | CC BY-SA 3.0 | null | 2011-05-31T12:55:39.020 | 2011-05-31T16:08:04.300 | null | null | 4693 | [
"distributions",
"lognormal-distribution",
"bounds"
] |
11397 | 2 | null | 11353 | 2 | null | I would strongly recommend you use some form of [regularization](http://en.wikipedia.org/wiki/Regularization_%28mathematics%29). The package '[glmnet](http://cran.r-project.org/web/packages/glmnet/index.html)' in R is very good and will do variable selection and regularization for the linear model using the [elastic net](http://www.stanford.edu/~hastie/TALKS/enet_talk.pdf).
Let me know if you'd like to see some examples of how to use glmnet.
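As a language-agnostic aside (glmnet itself is R-only), the reason a penalty helps is easy to see in miniature: ridge regression solves $(X'X + \lambda I)\beta = X'y$, shrinking coefficients toward zero, and the elastic net combines this L2 penalty with an L1 penalty. A pure-Python sketch for a hypothetical two-feature design, not glmnet's actual algorithm:

```python
def ridge_2x2(X, y, lam):
    # Solve (X'X + lam*I) beta = X'y for a 2-feature design matrix,
    # using the closed-form inverse of a 2x2 matrix.
    # X: list of [x1, x2] rows; y: list of responses; lam: penalty >= 0.
    a = sum(r[0] * r[0] for r in X) + lam
    b = sum(r[0] * r[1] for r in X)
    d = sum(r[1] * r[1] for r in X) + lam
    g0 = sum(r[0] * yi for r, yi in zip(X, y))
    g1 = sum(r[1] * yi for r, yi in zip(X, y))
    det = a * d - b * b
    return [(d * g0 - b * g1) / det, (a * g1 - b * g0) / det]

# With an orthonormal design, ridge simply shrinks the OLS coefficients
# by a factor of 1/(1+lam):
X = [[1.0, 0.0], [0.0, 1.0]]
y = [1.0, 2.0]
print(ridge_2x2(X, y, 0.0))  # OLS solution: [1.0, 2.0]
print(ridge_2x2(X, y, 1.0))  # shrunk: [0.5, 1.0]
```

Larger `lam` means more shrinkage, which is what buys you lower variance at the cost of a little bias.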
| null | CC BY-SA 3.0 | null | 2011-05-31T13:35:40.367 | 2011-05-31T13:35:40.367 | null | null | 2817 | null |
11398 | 1 | 11401 | null | 5 | 120 | In case you want to compare the average income of a group of male employees against the average income of a group of female employees, the observations are clearly independent.
Now, I have a network of a certain number of nodes. These nodes are linked by edges, and I can characterize each node by the number of links it has to other nodes (this is $k$, called the degree).
I can also characterize the nodes by their average nearest neighbour degree. That is, the sum of the degrees of all nodes to which a given node is linked (this is $k_{nn}$; $k_{nn}$ of node $i$ = $\sum k_j$ for every node $j$ that is linked to $i$).
When I create a scatter plot of these nodes ($k$ vs $k_{nn}$) I can clearly distinguish two groups of nodes by a certain threshold value for $k$ and $k_{nn}$.
My nodes also have a color. Now I want to test if a certain color in these two groups is overrepresented.
I can do that using the Wilcoxon rank test, because the color is an independent observation. Fine.
But is the color really an independent observation?
Implicitly the association to a group is not only based on the node's own property, but also on the properties of the other nodes (because of $k_{nn}$).
So can I really use the Wilcoxon rank test here?
Actually, my question is:
Does the Wilcoxon rank test require only that the observations be independent?
Or does it also require that the assignment to groups be based on independent observations?
| Should the group decisions be independent in Wilcoxon rank sum test? | CC BY-SA 3.0 | null | 2011-05-31T15:02:15.027 | 2017-04-03T15:48:56.920 | 2017-04-03T15:48:56.920 | 101426 | 4819 | [
"nonparametric",
"independence",
"wilcoxon-mann-whitney-test"
] |
11399 | 2 | null | 11387 | 2 | null | It sounds like this question is asking for a way to quantify the sense of "generally well aligned" strings. Of course there are many ways to do this, but the examples and the description suggest that any solution meet two criteria:
1. The X's and Y's should play no role in the result.
2. The strings in which the A, B, and C's appear in order are the most "well aligned."
This suggests basing the classification on an [edit distance](http://en.wikipedia.org/wiki/Edit_distance) among the {A,B,C} substrings or on a [partial ordering of multiset permutations](http://www.emis.de/journals/NSJOM/Papers/37_2/NSJOM_37_2_073_092.pdf). To provide more focused advice, we would need more information about the purpose of the intended importance ordering and about how these strings are generated.
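To make the edit-distance idea concrete, here is a minimal Python sketch (my own illustration, not code from the question): strip the irrelevant X/Y symbols per criterion 1, then score each string by its Levenshtein distance from the ideal ordering "ABC".

```python
def strip_fillers(s):
    # Criterion 1: X's and Y's play no role, so drop them first.
    return "".join(c for c in s if c in "ABC")

def levenshtein(s, t):
    # Standard dynamic-programming edit distance (insert/delete/substitute).
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (cs != ct)))  # substitution
        prev = cur
    return prev[-1]

# Rank strings by how far their {A,B,C} subsequence is from the ideal "ABC":
print(levenshtein(strip_fillers("XAXXBYC"), "ABC"))  # 0: perfectly ordered
print(levenshtein(strip_fillers("XCXXBYA"), "ABC"))  # 2: reversed
```

Smaller distances correspond to "better aligned" strings under criterion 2.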
| null | CC BY-SA 3.0 | null | 2011-05-31T15:05:15.183 | 2011-05-31T15:05:15.183 | null | null | 919 | null |
11400 | 2 | null | 11387 | 2 | null | Kernel methods (such as the support vector machine) are likely to be quite good for this kind of problem as you can use kernel functions that operate directly on strings of symbols of variable length. Examples include the [spectrum kernel](http://psb.stanford.edu/psb-online/proceedings/psb02/leslie.pdf) (which projects the strings into an implicit feature space where each dimension records the number of ocurrences of all possible substrings of a given length - or less) and the [mismatch kernel](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.78.9441&rep=rep1&type=pdf), which is similar, but the counting of sub-strings allows a certain amount of mis-matches. There is also the [sequence alignment](http://bioinformatics.oxfordjournals.org/content/19/15/1964.short) kernel, which might be of interest.
| null | CC BY-SA 3.0 | null | 2011-05-31T15:16:16.697 | 2011-05-31T15:16:16.697 | null | null | 887 | null |
11401 | 2 | null | 11398 | 5 | null | Regardless of the graph's coloring, associated with each of its nodes is a pair $(k, k_{nn})$. You have used these pairs to classify the nodes into two groups. The coloring separately classifies the nodes by color. This is the situation of a $c$ by $2$ contingency table with fixed margins. To assess whether color is associated with node group, use an appropriate test of association: [Fisher's Exact Test](http://en.wikipedia.org/wiki/Fisher%27s_exact_test) if the counts are not too large; otherwise a chi-squared test.
| null | CC BY-SA 3.0 | null | 2011-05-31T15:20:53.820 | 2011-05-31T15:20:53.820 | null | null | 919 | null |
11402 | 1 | 11409 | null | 17 | 6718 | My nonparametric text, [Practical Nonparametric Statistics](http://www.wiley.com/WileyCDA/WileyTitle/productCd-0471160687.html), often gives clean formulas for expectations, variances, test statistics, and the like, but includes the caveat that this only works if we ignore ties. When calculating the Mann-Whitney U Statistic, it is encouraged that you throw out tied pairs when comparing which is bigger.
I get that ties don't really tell us much about which population is bigger (if that's what we're interested in) since neither group is bigger than the other, but it doesn't seem like that would matter when developing asymptotic distributions.
Why then is it such a quandary dealing with ties in some nonparametric procedures? Is there a way of extracting any useful information from ties, rather than simply throwing them away?
EDIT: In regards to @whuber's comment, I checked my sources again, and some procedures use an average of ranks instead of dropping the tied values completely. While this seems more sensible in reference to retaining information, it also seems to me that it lacks rigor. The spirit of the question still stands, however.
| Why are ties so difficult in nonparametric statistics? | CC BY-SA 3.0 | null | 2011-05-31T15:59:34.210 | 2019-06-29T08:07:17.043 | 2020-06-11T14:32:37.003 | -1 | 1118 | [
"nonparametric",
"ties"
] |
11403 | 2 | null | 11396 | 3 | null | The support for the log normal distribution is the open interval from 0 to infinity. This should give you enough information to implement all methods that have 'Bound' in their name.
I understand that the methods with 'Domain' in their name are used to provide bounds and estimates for the inverse CDF that are easy to compute; for this you can easily follow @whuber's suggestion and return the $\exp$ of the corresponding values for the normal distribution. It's instructive to see what the implementors use for that: if $p \not= \frac{1}{2}$, then the infinite interval on the appropriate side of $\mu$ provides the bounds and the initial estimate is $\mu \pm \sigma$. So you could also directly return the $\exp$ of these values.
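Java aside, the exp-of-normal mapping is easy to sanity-check numerically. A Python sketch of the same idea (using the standard library's NormalDist; parameter values are arbitrary):

```python
from math import exp, log
from statistics import NormalDist

def lognormal_inverse_cdf(p, mu=0.0, sigma=1.0):
    # If Z ~ N(mu, sigma^2) then exp(Z) is log-normal, so the quantile
    # is simply exp of the normal quantile -- the mapping suggested above.
    return exp(mu + sigma * NormalDist().inv_cdf(p))

def lognormal_cdf(x, mu=0.0, sigma=1.0):
    # Support is the open interval (0, +infinity); CDF is 0 at or below 0.
    return NormalDist().cdf((log(x) - mu) / sigma) if x > 0 else 0.0

# Round trip: CDF(inverse_CDF(p)) == p everywhere on the support.
for p in (0.1, 0.5, 0.9):
    print(p, lognormal_cdf(lognormal_inverse_cdf(p, 1.0, 0.5), 1.0, 0.5))
```

The same round-trip check, ported to Java against the Commons Math normal distribution, would confirm the bracketing values before wiring them into `inverseCumulativeProbability`.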
| null | CC BY-SA 3.0 | null | 2011-05-31T16:08:04.300 | 2011-05-31T16:08:04.300 | null | null | 2898 | null |
11404 | 2 | null | 11384 | 9 | null | I think Insanodag is right. I quote Jolliffe's Principal Component Analysis:
>
When PCA is used as a descriptive
technique, there is no reason for the
variables in the analysis to be of any
particular type. [...] the basic
objective of PCA - to summarize most
of the 'variation' that is present in
the original set of $p$ variables
using smaller number of derived
varaibles - can be achieved regardless
of the nature of the original
variables.
Multiplying the data matrix by the loadings matrix will give the desired result. However, I've had some problems with the `princomp()` function, so I used `prcomp()` instead.
One of the return values of the function `prcomp()` is `x`, which is activated using `retx=TRUE`. This x is the multiplication of the data matrix by the loadings matrix as stated in the R Documentation:
```
rotation: the matrix of variable
loadings (i.e., a matrix whose columns
contain the eigenvectors). The function ‘princomp’ returns
this in the element ‘loadings’.
x: if ‘retx’ is true the value of the rotated data (the centred
(and scaled if requested) data multiplied by the ‘rotation’
matrix) is returned. Hence, ‘cov(x)’ is the diagonal matrix
‘diag(sdev^2)’. For the formula method, ‘napredict()’ is
applied to handle the treatment of values omitted by the
‘na.action’.
```
Let me know if this was useful, or if it needs further corrections.
--
I.T. Jolliffe. Principal Component Analysis. Springer. Second Edition. 2002. pp 339-343.
| null | CC BY-SA 3.0 | null | 2011-05-31T17:39:23.990 | 2011-05-31T18:12:02.890 | 2011-05-31T18:12:02.890 | 2902 | 2902 | null |
11405 | 1 | 13407 | null | 10 | 8149 | Does anybody know where to find good applications and examples (besides the manual and the book Applied Econometrics with R) using the tobit model with the AER package?
### Edit
I'm searching for a command to compute the marginal effects for y (not for the latent variable y*). It seems to be $\Phi(x\beta/\sigma)\beta$, where $\Phi$ is the standard normal cumulative distribution function. But how can I compute those effects with R?
| Tobit model with R | CC BY-SA 3.0 | null | 2011-05-31T17:49:46.537 | 2015-03-14T17:58:47.667 | 2020-06-11T14:32:37.003 | -1 | 4496 | [
"r",
"tobit-regression"
] |
11406 | 1 | 11407 | null | 13 | 197579 | I am very new to R and to any packages in R. I looked at the ggplot2 documentation but could not find this. I want a box plot of variable `boxthis` with respect to two factors `f1` and `f2`. That is, suppose both `f1` and `f2` are factor variables, each of them takes two values, and `boxthis` is a continuous variable. I want to get 4 boxplots on a graph, each corresponding to one combination of the possible values that `f1` and `f2` can take. I think using the basic functionality in R, this can be done by
```
> boxplot(boxthis ~ f1 * f2 , data = datasetname)
```
Thanks in advance for any help.
| Boxplot with respect to two factors using ggplot2 in R | CC BY-SA 3.0 | null | 2011-05-31T18:53:09.813 | 2015-12-19T01:59:43.137 | 2011-06-01T03:44:48.610 | 919 | 4820 | [
"r",
"boxplot",
"ggplot2"
] |
11407 | 2 | null | 11406 | 23 | null | I can think of two ways to accomplish this:
1. Create all combinations of `f1` and `f2` outside of the `ggplot`-function
```
library(ggplot2)
df <- data.frame(f1=factor(rbinom(100, 1, 0.45), label=c("m","w")),
f2=factor(rbinom(100, 1, 0.45), label=c("young","old")),
boxthis=rnorm(100))
df$f1f2 <- interaction(df$f1, df$f2)
ggplot(aes(y = boxthis, x = f1f2), data = df) + geom_boxplot()
```

2. use colour/fill/etc.
```
ggplot(aes(y = boxthis, x = f2, fill = f1), data = df) + geom_boxplot()
```

| null | CC BY-SA 3.0 | null | 2011-05-31T19:23:14.877 | 2011-05-31T19:23:14.877 | null | null | 307 | null |
11408 | 2 | null | 396 | 3 | null | Don't use dynamite plots:
[http://pablomarin-garcia.blogspot.com/2010/02/why-dynamite-plots-are-bad.html](http://pablomarin-garcia.blogspot.com/2010/02/why-dynamite-plots-are-bad.html). Use violin plots or something similar (the boxplot family) instead.
| null | CC BY-SA 3.0 | null | 2011-05-31T19:35:02.157 | 2011-05-31T19:57:14.673 | 2011-05-31T19:57:14.673 | 2343 | 2343 | null |
11409 | 2 | null | 11402 | 16 | null | Most of the work on non-parametrics was originally done assuming that there was an underlying continuous distribution in which ties would be impossible (if measured accurately enough). The theory can then be based on the distributions of order statistics (which are a lot simpler without ties) or other formulas. In some cases the statistic works out to be approximately normal which makes things really easy. When ties are introduced either because the data was rounded or is naturally discrete, then the standard assumptions do not hold. The approximation may still be good enough in some cases, but not in others, so often the easiest thing to do is just give a warning that these formulas don't work with ties.
There are tools for some of the standard non-parametric tests that have worked out the exact distribution when ties are present. The exactRankTests package for R is one example.
One simple way to deal with ties is to use randomization tests like permutation tests or bootstrapping. These don't worry about asymptotic distributions, but use the data as it is, ties and all (note that with a lot of ties, even these techniques may have low power).
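To illustrate why randomization sidesteps the problem, here is a minimal permutation test in Python (a generic mean-difference test on made-up tied data, not any particular package's implementation): the observed values, ties included, are simply reshuffled between groups.

```python
import random
from statistics import mean

def perm_test_mean_diff(x, y, n_perm=10000, seed=0):
    # Two-sided permutation test for a difference in means.
    # Ties pose no special problem: the data are used exactly as observed.
    rng = random.Random(seed)
    pooled = x + y
    observed = abs(mean(x) - mean(y))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(mean(pooled[:len(x)]) - mean(pooled[len(x):])) >= observed:
            hits += 1
    return hits / n_perm

# Heavily tied data, e.g. measurements rounded to whole numbers:
x = [1, 1, 2, 2, 2, 3]
y = [2, 3, 3, 3, 4, 4]
print(perm_test_mean_diff(x, y))
```

No asymptotic distribution is invoked anywhere, so no tie correction is needed; the cost is computation, and, as noted, power can still suffer when ties are extreme.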
There was an article a few years back (I thought in the American Statistician, but I am not finding it) that discussed the ideas of ties and some of the things that you can do with them. One point is that it depends on what question you are asking, what to do with ties can be very different in a superiority test vs. a non-inferiority test.
| null | CC BY-SA 3.0 | null | 2011-05-31T20:00:46.263 | 2011-05-31T20:00:46.263 | null | null | 4505 | null |
11411 | 1 | null | null | 1 | 146 | >
Possible Duplicate:
Cross-correlation significance in R
How can I test the significance of the correlation coefficient? I have two time series and I want to test whether they are cross-correlated. Should I prewhiten the two series before computing the CCF, or is there an easier way?
| How can I test the significance of the correlation coefficient? | CC BY-SA 3.0 | null | 2011-05-31T20:57:49.140 | 2011-05-31T21:03:51.863 | 2017-04-13T12:44:44.530 | -1 | 4823 | [
"time-series",
"cross-correlation"
] |
11412 | 1 | 14568 | null | 12 | 7054 | I've used a wide array of tests for my thesis data, from parametric ANOVAs and t-tests to non-parametric Kruskal-Wallis tests and Mann-Whitneys, as well as rank-transformed 2-way ANOVAs, and GzLMs with binary, poisson and proportional data. Now I need to report everything as I write all of this up in my results.
I've already asked [here](https://stats.stackexchange.com/questions/11381/how-to-report-asymmetrical-confidence-intervals-of-a-proportion) how to report asymmetrical confidence intervals for proportion data. I know that standard deviation, standard error or confidence intervals are appropriate for means, which is what I'd report if all my tests were nicely parametric. However, for my non-parametric tests, should I be reporting medians and not means? If so, what error would I report with it?
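For concreteness, the kind of interval I could compute for a median is a percentile-bootstrap one; here is a Python sketch with made-up data (is reporting something like this appropriate?):

```python
import random
from statistics import median

def bootstrap_median_ci(data, level=0.95, n_boot=2000, seed=0):
    # Percentile bootstrap CI for the median: resample with replacement,
    # recompute the median each time, and take the central quantiles.
    rng = random.Random(seed)
    meds = sorted(median(rng.choices(data, k=len(data))) for _ in range(n_boot))
    lo = meds[int((1 - level) / 2 * n_boot)]
    hi = meds[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi

data = [3.1, 4.7, 5.0, 5.2, 5.9, 6.4, 7.8, 9.0, 9.3, 11.2]  # hypothetical
print(bootstrap_median_ci(data))
```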
Associated with this is how best to present non-parametric test results graphically. Since I largely have continuous or interval data within categories, I'm generally using bar graphs, with the top of the bar being the mean and error bars showing 95% CI. For NP tests, can I still use bar graphs, but have the top of the bar represent the median?
Thanks for your suggestions!
| Error to report with median and graphical representations? | CC BY-SA 3.0 | null | 2011-05-31T22:03:47.097 | 2011-08-21T00:34:00.843 | 2017-04-13T12:44:48.343 | -1 | 4238 | [
"data-visualization",
"median",
"error"
] |
11413 | 1 | 38681 | null | 11 | 4023 | In plain English:
I have a multiple regression or ANOVA model but the response variable for each individual is a curvilinear function of time.
- How can I tell which of the right-hand-side variables are responsible for significant differences in the shapes or vertical offsets of the curves?
- Is this a time-series problem, a repeated-measures problem, or something else entirely?
- What are the best-practices for analyzing such data (preferably in R, but I'm open to using other software)?
In more precise terms:
Let's say I have a model $y_{ijk} = \beta_0 + \beta_1 x_i + \beta_2 x_j + \beta_3 x_i x_j + \epsilon_k$ but $y_{ijk}$ is actually a series of data-points collected from the same individual $k$ at many time-points $t$, which were recorded as a numeric variable. Plotting the data shows that for each individual $y_{ijkt}$ is a quadratic or cyclical function of time whose vertical offset, shape, or frequency (in the cyclical case) might significantly depend on the covariates. The covariates do not change over time-- i.e., an individual has a constant body weight or treatment group for the duration of the data collection period.
So far I have tried the following `R` approaches:
- Manova
Anova(lm(YT~A*B,mydata),idata=data.frame(TIME=factor(c(1:10))),idesign=~TIME);
...where YT is a matrix whose columns are the time points, 10 of them in this example, but far more in the real data.
Problem: this treats time as a factor, but the time-points don't exactly match for each individual. Furthermore, there are many of them relative to the sample size so the model gets saturated. It seems like the shape of the response variable over time is ignored.
- Mixed-model (as in Pinheiro and Bates, Mixed Effect Models in S and S-Plus)
lme(fixed=Y~ A*B*TIME + sin(2*pi*TIME) + cos(2*pi*TIME), data=mydata,
    random=~(TIME + sin(2*pi*TIME) + cos(2*pi*TIME))|ID, method='ML')
...where ID is a factor that groups data by individual. In this example the response is cyclical over time, but there could instead be quadratic terms or other functions of time.
Problem: I'm not certain whether each time term is necessary (especially for quadratic terms) and which ones are affected by which covariates.
Is stepAIC() a good method for selecting them?
If it does remove a time-dependent term, will it also remove it from the random argument?
What if I also use an autocorrelation function (such as corExp()) that takes a formula in the correlation argument: should I make that formula for corExp() the same as the one in random, or just ~1|ID?
The nlme package is rarely mentioned in the context of time series outside Pinheiro and Bates... is it not considered well suited to this problem?
- Fitting a quadratic or trigonometric model to each individual, and then using each coefficient as a response variable for multiple regression or ANOVA.
Problem: Multiple comparison correction necessary. Can't think of any other problems which makes me suspicious that I'm overlooking something.
- As previously suggested on this site (What is the term for a time series regression having more than one predictor?), there are ARIMAX and transfer function / dynamic regression models.
Problem: ARMA-based models assume discrete times, don't they? As for dynamic regression, I heard about it for the first time today, but before I delve into yet another new method that might not pan out after all, I thought it would be prudent to ask people who have done this before for advice.
| Longitudinal data: time series, repeated measures, or something else? | CC BY-SA 3.0 | null | 2011-06-01T00:47:09.967 | 2017-01-19T14:33:45.403 | 2017-04-13T12:44:56.303 | -1 | 4829 | [
"regression",
"time-series",
"mixed-model",
"repeated-measures",
"panel-data"
] |
11414 | 1 | null | null | 54 | 1841 | I know, this may sound like it is off-topic, but hear me out.
At Stack Overflow and here we get votes on posts, this is all stored in a tabular form.
E.g.:
post id voter id vote type datetime
------- -------- --------- --------
10 1 2 2000-1-1 10:00:01
11 3 3 2000-1-1 10:00:01
10 5 2 2000-1-1 10:00:01
... and so on. Vote type 2 is an upvote, vote type 3 is a downvote. You can query an anonymized version of this data at [http://data.stackexchange.com](http://data.stackexchange.com)
There is a perception that if a post reaches the score of -1 or lower it is more likely to be upvoted. This may be simply confirmation bias or it may be rooted in fact.
How would we analyze this data to confirm or deny this hypothesis? How would we measure the effect of this bias?
| Do we have a problem of "pity upvotes"? | CC BY-SA 3.0 | null | 2011-06-01T01:57:42.547 | 2011-06-04T19:07:31.727 | 2011-06-03T14:54:01.880 | 223 | 1163 | [
"time-series",
"hypothesis-testing",
"data-mining",
"markov-process",
"censoring"
] |
11415 | 2 | null | 11414 | 36 | null | You could use a multistate model or Markov chain (the msm package in R is one way to fit these). You could then look to see if the transition probability from -1 to 0 is greater than from 0 to 1, 1 to 2, etc. You can also look at the average time at -1 compared to the others to see if it is shorter.
| null | CC BY-SA 3.0 | null | 2011-06-01T03:12:38.823 | 2011-06-01T03:12:38.823 | null | null | 4505 | null |
11416 | 2 | null | 11331 | 2 | null | [Noether's Test for Cyclic Trend](https://projecteuclid.org/journals/annals-of-mathematical-statistics/volume-21/issue-2/Asymptotic-Properties-of-the-Wald-Wolfowitz-Test-of-Randomness/10.1214/aoms/1177729841.full) may help. You'll find this routine implemented in the [IMSL Statistical libraries](https://web.archive.org/web/20110909041110/http://www.roguewave.com/portals/0/products/imsl-numerical-libraries/c-library/docs/7.0/Cstat.pdf) which are accessible from a variety of programming languages.
The book Modeling Hydrologic Change: Statistical Methods by Richard H. McCuen, covers Noether's Test as well as multiple others that you'll likely find useful. The book is available as a [pdf file here](http://www.ce.metu.edu.tr/%7Ece530/Modeling%20Hydrologic%20Change_Statistical%20Methods.pdf) (warning, it is big).
| null | CC BY-SA 4.0 | null | 2011-06-01T03:18:46.037 | 2022-08-30T19:38:44.387 | 2022-08-30T19:38:44.387 | 79696 | 1080 | null |
11417 | 1 | null | null | 3 | 232 | I am working through a problem on CI for a sample mean and I cannot get the answer listed in the text I have. I am wondering if I am missing something or whether the text may be incorrect.
We are given a sample of 10 scores:
```
45,38,52,48,25,39,51,46,55,46
```
I get a mean of 44.5 and SD of 8.68 which is the same as the solutions listed. I also get a standard error of the mean of 2.75 which is the same as the solutions.
However, I cannot get the final answer. We are asked to calculate a 95% CI for the mean. When I calculate the margin of error (standard error of the mean * t critical) I get `2.75*2.262=6.2` (I chose df = 9). Thus the CI I get is `38.3 - 50.7`.
However, the text says it is `28.35-60.65`. Am I missing something? I would be really grateful for any comments. Thanks.
| Confidence interval for sample mean (possible error in text) | CC BY-SA 3.0 | null | 2011-06-01T03:43:28.240 | 2014-06-29T05:14:59.217 | 2011-06-01T06:13:32.743 | 4498 | 4498 | [
"confidence-interval"
] |
11418 | 1 | null | null | 10 | 6816 | This question started as "[Clustering spatial data in R](https://stats.stackexchange.com/questions/9739/clustering-spatial-data-in-r)" and has now moved to a DBSCAN question.
As the responses to the first question suggested, I searched for information about DBSCAN and read some documentation about it. New questions have arisen.
DBSCAN requires some parameters, one of them being "distance". As my data are three-dimensional (longitude, latitude and temperature), which "distance" should I use? Which dimension is related to that distance? I suppose it should be temperature. How do I find such a minimum distance with R?
Another parameter is the minimum number of points needed to form a cluster. Is there any method to find that number? Unfortunately I haven't found one.
Searching through Google I could not find an R example of using DBSCAN on a dataset similar to mine. Do you know any website with such examples? Then I could read them and try to adapt them to my case.
The last question is that my first R attempt with DBSCAN (without a proper answer to the prior questions) resulted in a memory problem: R says it cannot allocate the vector. I start with a 4 km spaced grid with 779191 points, which ends up as approximately 300000 rows x 3 columns (latitude, longitude and temperature) after removing invalid SST points. Any hint to address this memory problem? Does it depend on my computer or on DBSCAN itself?
Thanks for your patience in reading a long and probably boring message, and for your help.
| Density-based spatial clustering of applications with noise (DBSCAN) clustering in R | CC BY-SA 3.0 | null | 2011-06-01T07:59:59.237 | 2011-12-06T09:20:48.557 | 2017-04-13T12:44:41.980 | -1 | 4147 | [
"r",
"clustering",
"spatial"
] |
11419 | 1 | null | null | 1 | 2510 | Is anybody familiar with the `tobit()` command from the AER package? I'm searching for a command to compute the marginal effects for y (not for the latent variable y*). It seems to be $\Phi(x\beta/\sigma)\beta$, where $\Phi$ is the standard normal cumulative distribution function. But how can I compute those effects with R?
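To make the formula concrete, this is how I currently evaluate $\Phi(x\beta/\sigma)\beta$ by hand, sketched in Python with made-up estimates (in R these would come from the fitted tobit model):

```python
from statistics import NormalDist

def tobit_marginal_effects(x, beta, sigma):
    # Marginal effects on observed y: Phi(x'beta / sigma) * beta,
    # where Phi is the standard normal CDF. x and beta are aligned lists.
    xb = sum(xi * bi for xi, bi in zip(x, beta))
    scale = NormalDist().cdf(xb / sigma)
    return [scale * bi for bi in beta]

# Hypothetical estimates:
beta = [0.5, -1.2]   # coefficients
x = [1.0, 0.0]       # covariate values, e.g. evaluated at the means
print(tobit_marginal_effects(x, beta, sigma=1.0))
```

Is there a ready-made R command that does this, or do I have to code it up from `coef()` output like the above?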
| How to compute marginal effects in a Tobit model using R? | CC BY-SA 3.0 | 0 | 2011-06-01T08:09:38.367 | 2011-06-01T08:21:22.667 | 2011-06-01T08:21:22.667 | null | 4496 | [
"r",
"regression",
"tobit-regression"
] |
11420 | 1 | 11426 | null | 4 | 2163 | I prepared a model which had very good accuracy (80.5%) on my out-of-sample data. However, when I ran that model on a population which is some 6 months old, the accuracy went down to an abysmal 33%. I am talking here about the percentage of events detected (say, defaulters). So right now my model is only detecting 33 out of 100 defaulters in the out-of-time dataset.
Please suggest what the possible reasons behind this could be. How do I go about improving it? It would be tough to defend this accuracy before the client. God forbid, if required to defend this accuracy before the client, how could it be justified?
| Low accuracy in out of time validation | CC BY-SA 3.0 | null | 2011-06-01T09:00:07.850 | 2011-06-01T13:48:04.823 | 2011-06-01T10:07:48.977 | 2116 | 1763 | [
"logistic"
] |
11421 | 1 | 11423 | null | 12 | 24320 | As far as I know variance is calculated as
$$\text{variance} = \frac{(x-\text{mean})^2}{n}$$
while
$$\text{Empirical Variance} = \frac{(x-\text{mean})^2}{n(n-1)} $$
Is it correct? Or is there some other definition? Kindly explain with an example or any reference for reading on this topic.
| What is the difference between empirical variance and variance? | CC BY-SA 3.0 | null | 2011-06-01T09:24:42.800 | 2011-06-01T14:15:23.697 | 2011-06-01T10:05:36.543 | 2116 | 4802 | [
"machine-learning",
"variance",
"cart"
] |
11423 | 2 | null | 11421 | 19 | null | In your expression for the [variance](http://en.wikipedia.org/wiki/Variance), you need to take a sum (or integral) across the population
$$\text{variance} = \frac{\sum_i(x_i-\text{mean})^2}{n}$$
If your data is a sample from the population then this expression will give you a biased estimate of the population variance. An unbiased estimate would be as follows (note the change in the denominator from your expression), often called the sample variance
$$\text{Sample variance} = \frac{\sum_i(x_i-\text{mean})^2}{n-1} $$
If on the other hand you were trying to estimate the variance of the sample mean, then you vould have a smaller number, closer to your expression. The square root of this is called the [standard error of the mean](http://en.wikipedia.org/wiki/Standard_error_%28statistics%29#Standard_error_of_the_mean) and a reasonable estimate is
$$\text{Standard error} = \sqrt{\frac{\sum_i (x_i-\text{mean})^2}{n(n-1)}} $$
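A quick numerical check of the three denominators, in Python (a classic toy dataset, verified against the standard library's `statistics` module):

```python
from statistics import pvariance, variance
from math import sqrt

x = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(x)
m = sum(x) / n

pop_var = sum((xi - m) ** 2 for xi in x) / n          # divide by n
samp_var = sum((xi - m) ** 2 for xi in x) / (n - 1)   # divide by n - 1
se_mean = sqrt(samp_var / n)                          # SE of the mean

print(pop_var, pvariance(x))   # both 4.0 for this example
print(samp_var, variance(x))
print(se_mean)
```

Note that `statistics.pvariance` uses the $n$ denominator and `statistics.variance` the unbiased $n-1$ denominator, matching the two formulas above.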
| null | CC BY-SA 3.0 | null | 2011-06-01T09:46:10.353 | 2011-06-01T14:15:23.697 | 2011-06-01T14:15:23.697 | 2958 | 2958 | null |
11424 | 2 | null | 9121 | 5 | null | My personal view on this is that
- For descriptive purpose, we usually want to show the within-group (i.e., individual) variations (barplot + SD, or better boxplot).
- Within the inferential context of the ANOVA, we might rather want to show the SE, 95% CIs, or LSD intervals, for example. Showing 95% CIs has the merit of visually conveying the precision of the estimates, and they are easier to interpret, IMO. In this context, what we really want to show is how good is our estimate of the mean, not so much individual fluctuations on a single sample. Note that the question arises then as to whether we display pooled (when the homoscedasticity assumption holds) or group-specific SEs. We can combine any of the above estimates, of course. E.g., for a one-way ANOVA, we can show 95% CIs associated to each group mean on a barplot and show the overall mean $\pm 1 SE$ next to it. The figure below illustrates this idea by showing the SE for an interaction effect, centered on the overall mean (without the 95% CI):

Finally, the following paper offers an interesting discussion of the use of error bars when presenting experimental results (and gauging significant difference from non overlapping error bars):
>
Cumming, G., Fidler, F., and Vaux,
D.L. (2007). Error bars in
experimental biology. J Cell Biol,
177(1): 7-11.
| null | CC BY-SA 3.0 | null | 2011-06-01T10:10:45.487 | 2011-06-01T10:10:45.487 | null | null | 930 | null |
11425 | 2 | null | 11420 | 4 | null | I'm assuming that you expect that the other data set should have similar characteristics to your original data set. I only consider myself a beginner in this area, but it sounds likely that you "over-fitted" your model to the sample data. This means fitting to random noise in the data, as though it is a real effect that would be observed on a new data set. A likely cause is having too many parameters relative to the sample size.
You haven't provided much detail, but possible solutions may be reducing the number of parameters in your model, and/or the use of a shrinkage method (which I know very little about). The bootstrap may be useful to validate your model (`validate` in the `rms` package in R).
These things are covered in the book "Regression Modeling Strategies" by Frank Harrell.
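A tiny simulation (my own toy example, not tied to your data) shows the gap that over-fitting produces: a model that memorizes its sample looks perfect in-sample and falls apart out-of-sample.

```python
import random

rng = random.Random(42)

# Random 1-D features with purely random labels: there is nothing to learn.
train = [(rng.random(), rng.randint(0, 1)) for _ in range(50)]
test = [(rng.random(), rng.randint(0, 1)) for _ in range(50)]

def predict_1nn(x):
    # 1-nearest-neighbour: memorizes the training set completely.
    return min(train, key=lambda p: abs(p[0] - x))[1]

train_acc = sum(predict_1nn(x) == y for x, y in train) / len(train)
test_acc = sum(predict_1nn(x) == y for x, y in test) / len(test)
print(train_acc, test_acc)  # 1.0 in-sample vs roughly 0.5 out-of-sample
```

Cross-validation or the bootstrap would have exposed this optimism before the model ever reached a client.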
| null | CC BY-SA 3.0 | null | 2011-06-01T10:23:59.630 | 2011-06-01T10:43:21.777 | 2011-06-01T10:43:21.777 | 3835 | 3835 | null |
11426 | 2 | null | 11420 | 4 | null | The most obvious culprit for your problem is probably a [spurious relationship](http://en.wikipedia.org/wiki/Spurious_relationship). You identified relationships which were deemed significant for a certain period of time, but they are not significant for all periods of time. The [Lucas critique](http://en.wikipedia.org/wiki/Lucas_critique) might apply too.
When dealing with models of economic activity it is always prudent to define the time boundaries of the model (i.e. for which time period the model is applicable) or include time in the model. It seems you are building your model on snapshots at specific time-periods, try using panel-data framework, this will help you to see which covariates remain significant for all time periods.
| null | CC BY-SA 3.0 | null | 2011-06-01T10:24:27.843 | 2011-06-01T10:24:27.843 | null | null | 2116 | null |
11427 | 2 | null | 11413 | 5 | null | As Jeromy Anglim said, it would help to know the number of time points you have for each individual; as you said "many" I would venture that [functional analysis](http://ego.psych.mcgill.ca/misc/fda/) might be a viable alternative. You might want to check the R package [fda](http://cran.at.r-project.org/package=fda)
and look at the [book by Ramsay and Silverman](http://rads.stackoverflow.com/amzn/click/1441923004).
| null | CC BY-SA 3.0 | null | 2011-06-01T10:55:44.523 | 2011-06-01T10:55:44.523 | null | null | 892 | null |
11428 | 1 | 11776 | null | 3 | 181 | I want to evaluate the implications of increasing fine prices. I will have a few different scenarios ranging from business as usual, minor increase, proportional increase, categorical increase, to extreme-increases. Each scenario will have different levels of monetary increase depending on the fine details and subprograms. Also, I'd like to do comparative analysis of implementing a strike base system (x times you get a fine, you're out) versus a cooperative non-extreme user lead system.
Are there any models that predict the effects of such increases, especially considering that people are likely to either decrease or increase their frequency of fines after a new system is in place?
| Statistical models and methods to evaluate and forecast "fine price" increases | CC BY-SA 3.0 | null | 2011-06-01T13:09:57.743 | 2011-06-09T20:38:13.227 | 2011-06-01T15:21:13.607 | null | 59 | [
"forecasting"
] |
11429 | 1 | null | null | 3 | 555 | Does anyone know of such software? I tried to find such procedures in Gretl, but there you can use either the hsk procedure for heteroskedasticity correction or the ar1 procedure for serial correlation correction. I need a GLS procedure that will deal with both. Thanks
| Software enabling GLS estimation with both heteroskedasticity and serial correlation correction | CC BY-SA 3.0 | null | 2011-06-01T13:37:03.077 | 2011-06-02T11:00:26.363 | null | null | 4837 | [
"regression"
] |
11430 | 2 | null | 11420 | 1 | null | Could you provide information about the nature of the data (cross-section, time-series, panel...)? In any case, it seems to me that one possible problem is a time trend that you are not taking into account. Another possibility is simply that the data are not stationary, i.e., the past does not resemble the future at all, and as time goes by your accuracy will decrease.
Finally, you have to think in relative terms. With no model, it seems that you'd predict only 3.79% of defaulters. Assuming you predict on a random basis, your accuracy would be very low (probably near zero). On the other hand, with your model you'd predict with an accuracy of 30%, which is much better. For instance, with 1000 people, without a model you would say: I expect only 30 of them to default, but they are all equally likely to default. With your model, you would say: persons 1 through 30 will default (and only one third of them will actually default). This is much better!
However, look not only at the true-positive rate of your model, but also at the false-positive and false-negative rates. A ROC curve may be a good way to look at this and to calibrate your model to increase whatever the client wants (true-positive rate, true-negative rate).
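To make the true/false-positive distinction concrete, here is a minimal Python sketch (the confusion-matrix counts are invented); in practice you would sweep the classification threshold to trace out the ROC curve:

```python
def rates(tp, fp, fn, tn):
    """True-positive rate (sensitivity) and false-positive rate (1 - specificity)."""
    tpr = tp / (tp + fn)   # fraction of actual defaulters that were flagged
    fpr = fp / (fp + tn)   # fraction of non-defaulters wrongly flagged
    return tpr, fpr

# hypothetical: 30 people flagged as defaulters, 10 of whom actually default
tpr, fpr = rates(tp=10, fp=20, fn=28, tn=942)
print(tpr, fpr)
```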
| null | CC BY-SA 3.0 | null | 2011-06-01T13:48:04.823 | 2011-06-01T13:48:04.823 | null | null | 3058 | null |
11431 | 2 | null | 11429 | 7 | null | Function `gls()` in package nlme for [R](http://www.r-project.org) can handle both situations. The serial correlation is specified via argument `correlation` and heteroskedasticity is specified via the `weights` argument.
Both can take a number of pre-specified functions also provided by the package which can estimate any parameters required. For example, the serial correlation can be specified using `corAR1()`, `corARMA()`, and `corCAR1()` for AR(1), ARMA and continuous time AR(1) serial correlations.
| null | CC BY-SA 3.0 | null | 2011-06-01T13:55:35.220 | 2011-06-01T13:55:35.220 | null | null | 1390 | null |
11432 | 2 | null | 11429 | 0 | null | AUTOBOX is a program that I am familiar with, having written both of these modules for AFS, a company that I am still working with. The distinguishing feature of AUTOBOX is that it can automatically determine the weights required for GLS while automatically identifying the ARIMA structure. They have a 30-day free demo at [http://www.autobox.com/30day.exe](http://www.autobox.com/30day.exe). If you have any questions about the output you can deal with them directly.
| null | CC BY-SA 3.0 | null | 2011-06-01T15:14:50.707 | 2011-06-01T15:23:49.930 | 2011-06-01T15:23:49.930 | 3382 | 3382 | null |
11433 | 1 | 11434 | null | 2 | 393 | I know that there are various posts regarding variable selection but I am asking something particular. With respect to the question that I posted today in the following link:
[Low accuracy in out of time validation](https://stats.stackexchange.com/questions/11420/low-accuracy-in-out-of-time-validation)
If you had a look at the above link, you will have seen that my problem is a low detection rate on out-of-time data (i.e., low true positives), even though I had very good accuracy out of sample (80.5%). Please comment on the thoughts I lay out below. Since I need a model that has reasonably good accuracy with past as well as future data, would the following things be of any use to me?
- Trying and selecting those variables which are shock resistant to time variation in data (not really sure whether there are such variables but trying to think intuitively that model is as good as its data and variables)-- what would this variable look like?
- I had done profiling of both sample and out of time validation data; should I consider dropping the variables which have high variation or difference in distribution or statistics (in case of continuous variables). Agree, it might decrease my model accuracy from 80 to 70 (may be) but, I guess, it would help me in keeping only those variables which are more shock proof to the seismic waves of time -- please suggest.
All in all, I want suggestions on which variables to keep so as to maintain my initial accuracy.
I don't mind an initial detection accuracy of 65% and a final out-of-time accuracy of 50%, but a drop from 80 to 35 is a worry.
| Variable selection for increasing accuracy | CC BY-SA 3.0 | null | 2011-06-01T15:57:19.287 | 2011-06-01T16:14:46.487 | 2017-04-13T12:44:24.677 | -1 | 1763 | [
"machine-learning",
"feature-selection"
] |
11434 | 2 | null | 11433 | 1 | null | Whatever variable selection technique you use, be sure to cross-validate it to keep from overfitting. It seems that you may have overfit your initial model, so this step is very important.
One idea would be to use the elastic net or lasso for regularization and variable selection. You can use the package [glmnet](http://cran.r-project.org/web/packages/glmnet/index.html) in [R](http://www.r-project.org/) to run a logistic regression using the elastic net for regularization and variable selection. It's pretty straightforward and could improve your results.
| null | CC BY-SA 3.0 | null | 2011-06-01T16:14:46.487 | 2011-06-01T16:14:46.487 | null | null | 2817 | null |
11435 | 1 | null | null | 3 | 133 | Suppose $x_{1}, x_{2} \dots x_{N}$ are gaussian RVs with variance $S$ and mean $1$. What is the density function of
$$\frac{ |\sum_{n=1}^{N}x_{n}|^{2}}{\sum_{n=1}^{N}|x_{n}|^{2}}\text{?}$$
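Lacking a closed form, one can at least simulate the statistic; here is a Python sketch with assumed values of $N$ and $S$ (note that by the Cauchy–Schwarz inequality the ratio always lies in $[0, N]$):

```python
import math
import random

def statistic(xs):
    """|sum x|^2 / sum |x|^2 for a sample of real Gaussian RVs."""
    return sum(xs) ** 2 / sum(x * x for x in xs)

random.seed(0)
N, S = 8, 2.0   # assumed sample size and variance (the mean is 1 as stated)
draws = [statistic([random.gauss(1.0, math.sqrt(S)) for _ in range(N)])
         for _ in range(5000)]
print(min(draws), max(draws))   # always within [0, N]
```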
| Density function question | CC BY-SA 3.0 | null | 2011-06-01T16:39:02.153 | 2021-07-13T00:19:57.347 | 2021-07-13T00:19:57.347 | 11887 | 99 | [
"normal-distribution",
"density-function"
] |
11436 | 1 | null | null | 15 | 1203 | Does anyone use the $L_1$ or $L_.5$ metrics for clustering, rather than $L_2$ ?
Aggarwal et al.,
[On the surprising behavior of distance metrics in high dimensional space](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.23.7409&rep=rep1&type=pdf)
said (in 2001) that
>
$L_1$ is consistently more preferable
then the Euclidean distance metric
$L_2$ for high dimensional data mining
applications
and claimed that $L_.5$ or $L_.1$ can be better yet.
Reasons for using $L_1$ or $L_.5$ could be theoretical or experimental,
e.g. sensitivity to outliers / Kabán's papers,
or programs run on real or synthetic data (reproducible please).
An example or a picture would help my layman's intuition.
This question is a follow-up to Bob Durrant's answer to
[When-is-nearest-neighbor-meaningful-today](https://stats.stackexchange.com/questions/6314/when-is-nearest-neighbor-meaningful-today).
As he says, the choice of $p$ will be both data and application dependent;
nonetheless, reports of real experience would be useful.
---
Notes added Tuesday 7 June:
I stumbled across
"Statistical data analysis based on the L1-norm and related methods",
Dodge ed., 2002, 454p, isbn 3764369205 — dozens of conference papers.
Can anyone analyze distance concentration for i.i.d. exponential features ?
One reason for exponentials is that $|exp - exp| \sim exp$;
another (non-expert) is that it's the max-entropy distribution $\ge$ 0;
a third is that some real data sets, in particular SIFTs,
look roughly exponential.
| $L_1$ or $L_.5$ metrics for clustering? | CC BY-SA 3.0 | null | 2011-06-01T16:42:07.290 | 2022-09-03T20:02:06.363 | 2017-04-13T12:44:39.283 | -1 | 557 | [
"clustering",
"distance-functions",
"rule-of-thumb"
] |
11437 | 1 | 11442 | null | 8 | 5383 | I am interested in fitting a Bayesian Two Factor ANOVA in BUGS or by utilizing some R package. Unfortunately I am having a hard time finding resources on this topic. Any suggestions? Even an article describing the approach would be helpful.
| Bayesian two-factor ANOVA | CC BY-SA 3.0 | null | 2011-06-01T17:14:14.053 | 2011-12-27T08:44:05.910 | 2011-06-02T08:22:37.617 | null | 2310 | [
"r",
"bayesian",
"anova",
"bugs"
] |
11438 | 1 | 20504 | null | 4 | 2326 | I have a model $$Y=\beta_0 + \beta_1 x_1 + \beta_2x_2 +\epsilon$$
I would like the minimum variance unbiased estimate of $\gamma=\beta_1 + \beta_2$. Assuming the Gauss Markov conditions hold, but $x_1$ and $x_2$ are correlated, is there a more efficient way to estimate $\gamma$ than running OLS and adding the estimates of $\beta_1$ and
$\beta_2$?
Given that var($\hat{\gamma}$)=var($\hat{\beta_1}$)+var($\hat{\beta_2}$)+2cov($\hat{\beta_1},\hat{\beta_2}$), I'm specifically wondering if there is a GLS estimator that reduces cov($\hat{\beta_1},\hat{\beta_2}$) faster than it increases the variances of the individual estimates.
| Is there a GLS estimator that has lower variance than OLS for sum of parameters in linear model under Gauss-Markov conditions? | CC BY-SA 3.0 | null | 2011-06-01T17:34:48.367 | 2012-01-02T22:34:30.367 | 2011-06-02T11:40:00.413 | 3700 | 3700 | [
"least-squares",
"multiple-regression",
"generalized-least-squares"
] |
11439 | 2 | null | 726 | 14 | null | preamble: There is even a class of user now days who sees the significance stars rather like the gold stars my grandson sometimes gets on his homework:
>
Three solid gold (significance) stars
on the main effects will do very
nicely, thank you, and if there are a
few little stars here and there on the
interactions, so much the better!
W.N. Venables
[Exegeses on Linear Models](http://www.stats.ox.ac.uk/pub/MASS3/Exegeses.pdf)
| null | CC BY-SA 3.0 | null | 2011-06-01T17:43:40.663 | 2012-03-28T23:07:00.480 | 2012-03-28T23:07:00.480 | 1381 | 1381 | null |
11440 | 1 | 11447 | null | 2 | 1820 | Does anybody know why it is not possible to compute standardized residuals when estimating a Tobit model?
Here is a short example of what I am talking about:
```
> require(AER) # for tobit estimation commands
> require(MASS) # for computing standardized residuals
> numberofdrugs <- rpois(84, 5)+1
> healthvalue <- rpois(84,5)
> tob <- tobit(healthvalue ~ numberofdrugs)
> residuals(tob) # works fine
> stdres(tob) # doesn't work fine
```
Where is the problem?
| Standardized residuals of a Tobit model in R | CC BY-SA 3.0 | null | 2011-06-01T17:44:35.677 | 2011-06-02T08:21:56.230 | 2011-06-02T08:21:56.230 | null | 4496 | [
"r",
"residuals",
"tobit-regression"
] |
11441 | 2 | null | 11438 | 3 | null | The short answer is no. By the Gauss-Markov theorem, if you restrict attention to linear unbiased estimators, then OLS is the minimum variance estimator.
However, if you relax the unbiasedness condition, you can get better expected mean square error by using some kind of regularization (e.g. ridge regression).
| null | CC BY-SA 3.0 | null | 2011-06-01T17:49:19.557 | 2011-06-01T17:49:19.557 | null | null | 3834 | null |
11442 | 2 | null | 11437 | 6 | null | Simon Jackman has some working code for fitting ANOVA and regression models with [JAGS](http://www-ice.iarc.fr/~martyn/software/jags/) (which is pretty like BUGS), e.g. [two-way ANOVA via JAGS](http://jackman.stanford.edu/classes/BASS/madrid/madridWednesday.R) (R code) or maybe among his handouts on [bayesian analysis for the social sciences](http://jackman.stanford.edu/classes/BASS/index.php).
A lot of WinBUGS code, including one- and two-way ANOVA, seem to be available on the companion website for [Bayesian Modeling Using WinBUGS: An introduction](http://stat-athens.aueb.gr/~jbn/winbugs_book/).
| null | CC BY-SA 3.0 | null | 2011-06-01T18:12:49.360 | 2011-06-01T18:12:49.360 | null | null | 930 | null |
11443 | 1 | 11461 | null | 1 | 847 | Why is the negbin distribution required when the analyzed count data is bounded? I don't really understand the following:
>
"The Poisson distribution can form the
basis for some analyses of count data
and in this case Poisson regression
may be used. This is a special case of
the class of generalized linear models
which also contains specific forms of
model capable of using the binomial
distribution (binomial regression,
logistic regression) or the negative
binomial distribution where the
assumptions of the Poisson model are
violated, in particular when the range
of count values is limited or when
overdispersion is present." ---
Wikipedia
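The overdispersion point in the quote can be illustrated directly: a gamma-mixed Poisson (which is a negative binomial) has variance exceeding its mean, something a plain Poisson cannot capture. A Python sketch (the mixture parameters are invented):

```python
import math
import random

def poisson(lam):
    """Knuth's Poisson sampler (fine for small lambda)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

random.seed(0)
n, mu = 20000, 5.0
pois = [poisson(mu) for _ in range(n)]
# gamma-Poisson mixture = negative binomial: same mean, larger variance
negbin = [poisson(random.gammavariate(2.0, mu / 2.0)) for _ in range(n)]
print(var(pois), var(negbin))   # variance near the mean for Poisson, much larger for the mixture
```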
| Negative binomial distribution for bounded data | CC BY-SA 3.0 | null | 2011-06-01T18:48:16.050 | 2011-07-02T08:14:24.763 | 2011-06-02T08:08:59.333 | null | 4496 | [
"regression",
"count-data",
"negative-binomial-distribution"
] |
11444 | 1 | 11586 | null | 3 | 601 | Suppose you are trying to estimate the joint density $p(x,y)$ based on observed $(X,Y)$. However, you know that the marginal density $p(x)$ is uniform. How can you use this information to improve your density estimate?
| Constrained kernel density estimation | CC BY-SA 3.0 | null | 2011-06-01T18:56:43.163 | 2015-04-27T05:38:33.130 | 2015-04-27T05:38:33.130 | 9964 | 3567 | [
"estimation",
"density-function",
"smoothing",
"kernel-smoothing"
] |
11445 | 2 | null | 11414 | 13 | null | Summary of my answer. I like the Markov chain modeling but it misses the "temporal" aspect. On the other hand, focusing on the temporal aspect (e.g. average time at $-1$) misses the "transition" aspect. I would go with the following general modelling (which, with suitable assumptions, leads to a Markov process). There is also a lot of "censored" statistics behind this problem (certainly a classical problem of software reliability?). The last equation of my answer gives the maximum likelihood estimator of the voting intensities (up with "+" and down with "-") for a given vote state. As we can see from the equation, it is intermediate between the case where you only estimate transition probabilities and the case where you only measure the time spent in a given state. Hope this helps.
General Modelling (to restate the question and assumptions).
Let $(VD_i)_{i\geq 1}$ and $(S_{i})_{i\geq 1}$ be random variables modelling respectively the voting dates and the associated vote sign (+1 for upvote, -1 for downvote). The voting process is simply
$$Y_{t}=Y^+_t-Y^-_t$$ where
$$Y^+_t=\sum_{i=0}^{\infty}1_{VD_i\leq t,S_i=1} \;\text{ and } \;Y^-_t=\sum_{i=0}^{\infty}1_{VD_i\leq t,S_i=-1}$$
The important quantity here is the intensity of an $\epsilon$-jump
$$\lambda^{\epsilon}_t=\lim_{dt\rightarrow 0} \frac{1}{dt} P(Y^{\epsilon}_{t+dt}-Y^{\epsilon}_t=1|\mathcal{F}_t) $$
where $\epsilon$ can be $-$ or $+$ and $\mathcal{F}_t$ is a suitable filtration; in the general case, without other knowledge, it would be:
$$\mathcal{F}_t=\sigma \left (Y^+_t,Y^-_t,VD_1,\dots,VD_{Y^+_t+Y^-_t},S_{1},\dots,S_{Y^+_t+Y^-_t} \right )$$.
but along the lines of your question, I think you implicitly assume that
$$ P \left ( Y^{\epsilon}_{t+dt}-Y^{\epsilon}_t=1 | \mathcal{F}_t \right )= P \left (Y^{\epsilon}_{t+dt}-Y^{\epsilon}_t=1| Y_t \right ) $$
This means that for $\epsilon=+,-$ there exists a deterministic sequence $(\mu^{\epsilon}_i)_{i\in \mathbb{Z}}$ such that $\lambda^{\epsilon}_t=\mu^{\epsilon}_{Y_t}$.
Within this formalism, your question can be restated as: "is it likely that $ \mu^{+}_{-1} -\mu^{+}_{0}>0$?" (or at least, is the difference larger than a given threshold).
Under this assumption, it is easy to show that $Y_t$ is a homogeneous Markov process on $\mathbb{Z}$ with generator $Q$ given by
$$\forall i,j \in \mathbb{Z}\;\;\; Q_{i,i+1}=\mu^{+}_{i}\;\; Q_{i,i-1}=\mu^{-}_{i}\;\; Q_{ii}=-(\mu^{+}_{i}+\mu^{-}_{i}) \;\; Q_{ij}=0 \text{ if } |i-j|>1$$
Answering the question (by proposing a maximum likelihood estimator for the statistical problem)
From this reformulation, solving the problem is done by estimating $(\mu^{+}_i)$ and building a test upon its values. Let us fix $i$ and drop the index without loss of generality. Estimation of $\mu^+$ (and $\mu^-$) can be done upon the observation of
$(T^{1},\eta^1),\dots,(T^{p},\eta^p)$, where $T^j$ is the length of the $j^{th}$ of the $p$ periods spent in state $i$ (i.e. successive times with $Y_t=i$) and $\eta^j$ is $+1$ if the question was upvoted, $-1$ if it was downvoted, and $0$ if it was the last observed state.
If you set aside the case of the last observed state, the couples mentioned are iid from a distribution that depends on $\mu_i^+$ and $\mu_i^-$: they are distributed as $(\min(Exp(\mu_i^+),Exp(\mu_i^-)),\eta)$ (where $Exp$ denotes an exponentially distributed random variable and $\eta$ is $+1$ or $-1$ depending on which one realizes the minimum).
Then, you can use the following simple lemma (the proof is straightforward):
Lemma. If $X_+\leadsto Exp(\mu_+)$ and $X_{-} \leadsto Exp(\mu_{-})$ are independent, then $T=\min(X_+,X_-)\leadsto Exp(\mu_++\mu_-)$ and $P(X_+<X_-)=\frac{\mu_+}{\mu_++\mu_-}$.
This implies that the density $f(t,\epsilon)$ of $(T,\eta)$ is given by:
$$ f(t,\epsilon)=g_{\mu_++\mu_-}(t)\,\frac{1(\epsilon=+1)\,\mu_++1(\epsilon=-1)\,\mu_-}{\mu_++\mu_-}$$
where $g_a$ for $a>0$ is the density function of an exponential random variable with parameter $a$. From this expression, it is easy to derive the maximum likelihood estimator of $\mu_+$ and $\mu_-$:
$$(\hat{\mu}_+,\hat{\mu}_-)=\arg\min_{\mu_+,\mu_-}\;(\mu_++\mu_-)\sum_{i=1}^p T^i- p_+ \ln \left (\mu_+\right ) - p_-\ln\left (\mu_-\right )$$
where $p_-=|\{i:\eta^i=-1\}|$ and $p_+=|\{i:\eta^i=+1\}|$. Solving the first-order conditions gives the closed form $\hat{\mu}_+=p_+/\sum_i T^i$ and $\hat{\mu}_-=p_-/\sum_i T^i$.
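Under the iid exponential model above, the MLEs take the closed form $\hat{\mu}_\epsilon = p_\epsilon/\sum_i T^i$; a Python simulation sketch (assumed true intensities, no censoring) recovers them:

```python
import random

random.seed(1)
mu_up, mu_down = 2.0, 0.5                # assumed true intensities
T, p_up, p_down = 0.0, 0, 0
for _ in range(20000):
    t_up = random.expovariate(mu_up)     # waiting time to the next upvote
    t_down = random.expovariate(mu_down) # waiting time to the next downvote
    T += min(t_up, t_down)               # time actually spent in the state
    if t_up < t_down:
        p_up += 1
    else:
        p_down += 1

mu_up_hat, mu_down_hat = p_up / T, p_down / T   # closed-form MLEs
print(mu_up_hat, mu_down_hat)                    # close to 2.0 and 0.5
```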
Comments for more advanced approaches
If you want to take into account cases where $i$ is the last observed state (certainly smarter, because when you go through $-1$ it is often your last score...) you have to modify the reasoning a little. The corresponding censoring is relatively classical...
Possible other approaches may include:
- Having an intensity that decreases with time
- Having an intensity that decreases with the time elapsed since the last vote (I prefer this one. In this case there are classical ways of modelling how the intensity decreases...)
- You may want to assume that $\mu_i^+$ is a smooth function of $i$
- .... you can propose other ideas !
| null | CC BY-SA 3.0 | null | 2011-06-01T18:59:04.133 | 2011-06-04T19:07:31.727 | 2011-06-04T19:07:31.727 | 223 | 223 | null |
11446 | 2 | null | 11414 | 13 | null | Conduct an experiment. Randomly downvote half of the new posts at a particular time every day.
| null | CC BY-SA 3.0 | null | 2011-06-01T19:01:42.683 | 2011-06-01T19:01:42.683 | null | null | 3567 | null |
11447 | 2 | null | 11440 | 6 | null | The problem is in assuming that `stdres()` will work for a tobit regression. The reason `resid()` works for `tob` is that the type of object returned by `tobit()` inherits from class "survreg":
```
R> class(tob)
[1] "tobit" "survreg"
```
and there is a special `residuals()` method for objects of that class:
```
R> methods(residuals)
[1] residuals.breakpointsfull* residuals.coxph*
[3] residuals.coxph.null* residuals.coxph.penal*
[5] residuals.default residuals.glm
[7] residuals.HoltWinters* residuals.isoreg*
[9] residuals.lm residuals.loglm*
[11] residuals.nls* residuals.smooth.spline*
[13] residuals.survreg* residuals.survreg.penal*
[15] residuals.tukeyline*
Non-visible functions are asterisked
```
("survreg" is there as number 13.)
`stdres()` is (or at least appears to me to be) limited to the linear model case, and one could argue that a tobit model is not a linear regression. From `?stdres` we have:
```
Arguments:
object: any object representing a linear model.
```
That this is the case is seen more clearly when one looks at `lmwork()`, the function that does the work in `stdres()`: on its second line, `lm.influence()` is called, and that is where `stdres()` fails (output from debugging `lmwork()` and calling `stdres(tob)`):
```
Browse[2]> n
debug: resid <- object$residuals
Browse[2]>
debug: hat <- lm.influence(object, do.coef = FALSE)$hat
Browse[2]>
Error in if (model$rank == 0) { : argument is of length zero
```
I'm not familiar with survival models or tobit regression. Read `?residuals.survreg` and see if that is doing the right thing regarding standardising the residuals (there is something in there about using sigma). Otherwise you might have to cook this up yourself or contact the author of the survival package for suggestions on how to compute the standardised residuals you want.
| null | CC BY-SA 3.0 | null | 2011-06-01T19:27:44.660 | 2011-06-01T23:01:10.137 | 2011-06-01T23:01:10.137 | 1390 | 1390 | null |
11448 | 1 | 11452 | null | 5 | 1754 | I have a series of scores in a signal detection task. For each block of scores (i.e. a set of scores from one participant on one day) I have calculated a d' score which I am using as an indicator of performance.
These d' scores rise over time, which is an interesting result. Is there any way I can calculate whether the change is statistically significant?
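For reference, d' for a single block is the difference between the z-transformed hit and false-alarm rates. A minimal Python sketch (the counts are hypothetical, and a common 1/(2N) nudge keeps extreme rates off 0 and 1):

```python
from statistics import NormalDist

def dprime(hits, misses, fas, crs):
    """d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    n_sig, n_noise = hits + misses, fas + crs
    hr = min(max(hits / n_sig, 0.5 / n_sig), 1 - 0.5 / n_sig)
    far = min(max(fas / n_noise, 0.5 / n_noise), 1 - 0.5 / n_noise)
    return z(hr) - z(far)

print(dprime(hits=40, misses=10, fas=10, crs=40))   # about 1.68
```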
| Is there a way to determine the significance of a change in a d' score? | CC BY-SA 3.0 | null | 2011-06-01T20:34:29.613 | 2011-06-02T12:39:14.340 | null | null | 1950 | [
"statistical-significance",
"signal-detection"
] |
11450 | 1 | 11451 | null | 10 | 1983 | I've got R running on amazon EC2, using a modified version of the [bioconductor AMI](http://www.bioconductor.org/help/bioconductor-cloud-ami/). Currently, I am using putty to ssh into my server, starting R from the command line, and then copying and pasting my script from notepad++ into my putty session.
The thing is, I hate cut and pasting. It feels stone-age and I occasionally get weird buffering issues that screw up my code. I can't use [RStudio](http://www.rstudio.org/), because it doesn't support [multicore](http://cran.r-project.org/web/packages/multicore/index.html), which I heavily depend on.
What's the more elegant way to do this?
/Edit: Thanks for all the great suggestions. For now, I've switched over to using foreach with the doRedis backend, which works great on my Mac, my PC, and on amazon through RStudio. This switch was pretty easy once I learned how to write a [function that emulates "lapply"](https://stats.stackexchange.com/questions/8696/parallelizing-the-caret-package-using-dosmp) using "foreach." (Also, doRedis is awesome!)
| Best way to interact with an R session running in the cloud | CC BY-SA 3.0 | null | 2011-06-01T21:33:48.467 | 2015-10-15T17:07:48.210 | 2017-04-13T12:44:20.943 | -1 | 2817 | [
"r"
] |
11451 | 2 | null | 11450 | 12 | null | I can think of a few ways. I've done this quite a bit and here are the ways I found most useful:
- Emacs Daemon mode. ssh into the EC2 instance with the -X switch so it forwards X windows back to your remote machine. Using daemon mode will ensure that you don't lose state if your connection times out or drops
- Instead of using the multicore package, use a different parallel backend with the foreach package. That way you can use RStudio, which is fantastic. Foreach is great because you can test your code in non-parallel, then switch to parallel mode by simply changing your backend (1 or 2 lines of code). I recommend the doRedis backend. You're in the cloud, might as well fire up multiple machines!
| null | CC BY-SA 3.0 | null | 2011-06-01T21:50:43.183 | 2011-06-01T21:50:43.183 | null | null | 29 | null |
11452 | 2 | null | 11448 | 4 | null | I've come to decide that the best approach to the analysis of signal detection data (frankly, any data with dichotomous stimuli & responses) collected from multiple participants is to use a generalized mixed model, treating participant as a random effect and predicting response as a function of truth and whatever other explanatory variables are in the experiment. Effects that involve the variable specifying the truth reflect effects on discriminability, while effects not involving that variable reflect effects on response bias.
For example, say you present a list of items for a participant to remember, then later present them with a second list containing some items they were asked to remember as well as some new items, asking the participant to label each as "old" or "new". You use words of different concreteness, and want to determine whether word concreteness affects discrimination ability. Thus, you would have data as:
```
participantID word concreteness truth response
1 brick 10 old old
1 happy 2 old new
1 river 8 new new
1 peace 1 old old
...
```
You could fit the model (first converting "response" to 0/1) using the `lmer` function from the [lme4 package](http://cran.r-project.org/web/packages/lme4/index.html) in R:
```
my_data$response_num = as.numeric(factor(my_data$response))-1
my_mix = lmer(
formula = response_num ~ (1|participant) + truth*concreteness
, family = binomial
, data = my_data
)
print(my_mix)
```
Or, if you want likelihood ratios ([and you should!](http://www.distans.hkr.se/joto/psych/p.pdf)), you can use `ezMixed` function from the [ez package](http://cran.r-project.org/web/packages/ez/index.html) in R:
```
my_mix = ezMixed(
data = my_data
, dv = .(response_num)
, random = .(participant)
, fixed = .(truth, concreteness)
, family = binomial
)
print(my_mix$summary)
```
In both approaches (`ezMixed` is simply a wrapper around `lmer`, but with additional computation of likelihood ratios), the intercept reflects any overall bias in labelling words as new/old. The main effect of truth reflects the discriminability of new/old words. The main effect of concreteness reflects any effect of concreteness on response bias. Finally, the truth:concreteness interaction reflects any effect of concreteness on discriminability of new/old words.
A couple points about this case specifically. Since this example deals with lexical stimuli, it may be reasonable to model words as a random effect as well (add `+ (1|word)` to the `lmer` formula, or add `word` to the list of random effects in the call to `ezMixed`). Additionally, the model above fits a linear function to the effects involving concreteness. If you want to account for non-linearity, you might employ generalized additive mixed models (implemented in the [gamm4](http://cran.r-project.org/web/packages/gamm4/index.html) package). Unfortunately `ezMixed` currently only handles non-linearity by permitting polynomials up to a user-specified degree, which I feel is less useful than GAMM. Adding GAMM to `ezMixed` is on my [to do list](https://github.com/mike-lawrence/ez/issues/10)...
| null | CC BY-SA 3.0 | null | 2011-06-01T22:57:51.170 | 2011-06-02T12:39:14.340 | 2011-06-02T12:39:14.340 | 364 | 364 | null |
11453 | 2 | null | 11450 | 3 | null | I don't know how Amazon EC2 works, so maybe my simple solutions don't work. But I normally use scp or sftp (through WinSCP if I'm on Windows) or git.
| null | CC BY-SA 3.0 | null | 2011-06-01T23:28:58.400 | 2011-06-01T23:28:58.400 | null | null | 3874 | null |
11454 | 1 | 11492 | null | 3 | 978 | Suppose I have two (possibly biased) coins. I've run an experiment where I flipped each coin `N` times, and the coins landed heads with proportions `p_1 < p_2`.
Now I want to do a power analysis to figure out how many flips I need to run in a second experiment, in order to have an 80% chance of detecting a difference at least as great as what I've seen (i.e., detecting that the new `p_2 - p_1` is greater than or equal to the old `p_2 - p_1` -- I do want to test that the second coin is more likely to land heads than the first coin, not just that the head probabilities are different) with significance 0.01. How do I do this? I'm unsure about both the theory and the particular method in R I would use.
I was thinking that the following R call would do the trick
```
library(pwr)
pwr.t.test(d = (p_2 - p_1) / sqrt(p_1 * (1 - p_1)), sig.level = 0.01, power = 0.8, alternative = "greater")
```
But I'm not sure my `d = (p_2 - p_1) / sqrt(p_1 * (1 - p_1))` is correct (in particular, I'm not sure if my denominator is correct, or what I should put there instead).
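One alternative to shoehorning this into `pwr.t.test` is the standard normal-approximation sample-size formula for a one-sided two-sample comparison of proportions; here is a Python sketch of the formula I believe applies (no continuity correction, and not necessarily what `pwr` computes internally):

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.01, power=0.80):
    """Flips per coin for a one-sided test of p2 > p1 (normal approximation)."""
    z = NormalDist().inv_cdf
    pbar = (p1 + p2) / 2
    num = (z(1 - alpha) * sqrt(2 * pbar * (1 - pbar))
           + z(power) * sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
    return ceil((num / (p2 - p1)) ** 2)

print(n_per_group(0.40, 0.50))   # flips needed per coin for this assumed effect
```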
| Conducting a power analysis on difference between two proportions | CC BY-SA 3.0 | null | 2011-06-01T23:37:15.583 | 2011-06-02T16:27:52.900 | null | null | 1106 | [
"statistical-power"
] |
11455 | 1 | 11467 | null | 1 | 169 | I am conducting a linear mixed effects model analysis to evaluate the efficacy of an intervention. I have a dummy variable 'condition' which I have coded '2' for the control group and '1' for the intervention group. Is that okay, or do I need to use a specific coding (e.g., 0 = no intervention, 1 = intervention)?
| Does it matter what values you assign to represent two groups in a dummy variable? | CC BY-SA 3.0 | null | 2011-06-02T02:45:01.610 | 2011-06-02T05:58:57.107 | 2011-06-02T03:14:13.727 | 919 | 4269 | [
"mixed-model",
"ordinal-data"
] |
11456 | 2 | null | 10529 | 1 | null | I didn't understand what they are doing. Here is what I understood:
You have a response $y_{i}$ and a covariate $x_{it}$, where $i = 1, 2, \dots, n$ indexes the individual and $t=1, 2, 3, 4$ is the time dimension. Here, $y$ doesn't vary over time, is that right?
If so, it seems that they calculated the sd of $x$ for each $i$, so they got $n$ sds. Then they looked at cor(y, sd(x)), and since it was low, they concluded that it would be ok to use the mean value of $x$.
This doesn't sound good to me. First, with only 4 observations, the sample standard deviation will have only 3 degrees of freedom. Second, the fact that the variability within each individual doesn't correlate with $y$ is expected, since $y$ doesn't vary. But it may be the first observation, for example, that matters, not the mean.
| null | CC BY-SA 3.0 | null | 2011-06-02T02:46:27.130 | 2011-06-02T02:46:27.130 | null | null | 3058 | null |
11457 | 1 | null | null | 4 | 4651 | Is it possible to do stepwise (direction = both) model selection in a nested binary logistic regression in R? I would also appreciate it if you could show me how to get:
- the Hosmer-Lemeshow statistic,
- Odds ratio of the predictors,
- Prediction success of the model.
I used the lme4 package in R. This is the script I used to get the general model with all the independent variables:
```
nest.reg <- glmer(decision ~ age + education + children + (1|town), family = binomial, data = fish)
```
where:
- fish -- dataframe
- decision -- 1 or 0, whether the respondent exit or stay, respectively.
- age, education and children -- independent variables.
- town -- random effect (where our respondents are nested)
Now my problem is how to get the best model. I know how to do stepwise model selection, but only for linear regression (`step(lm(decision ~ age + education + children, data = fish), direction = "both")`). But this cannot be used for binary logistic regression, right? Also, when I add `(1|town)` to the formula to account for the effects of town, I get an error.
By the way... I'm very much thankful to Manoel Galdino [who provided me with the script on how to run nested logistic regression](https://stackoverflow.com/questions/5906272/step-by-step-procedure-on-how-to-run-nested-logistic-regression-in-r).
Thank you very much for your help.
| Stepwise model selection, Hosmer-Lemeshow statistics and prediction success of model in nested logistic regression in R | CC BY-SA 3.0 | null | 2011-06-02T02:57:31.857 | 2011-06-02T12:41:01.310 | 2017-05-23T12:39:27.620 | -1 | 4848 | [
"r",
"logistic",
"multilevel-analysis"
] |
11458 | 1 | 11506 | null | 3 | 2362 | I have set $x = \{1,2,3,4,5\}$ and set $y = \{2,3,4,5,6\}$. Let's say the correlation of $x$ and $y$ is $0.7$. If I then form the set $z = \{1,2,3,4,5,2,3,4,5,6\}$ and compute the autocorrelation at lag $5$, should I not get the same $0.7$? I have been doing this but I keep getting different results. I'm wondering if I'm doing something wrong, or whether they should actually give different results.
| Autocorrelation vs correlation calculation | CC BY-SA 3.0 | null | 2011-06-02T03:07:09.990 | 2011-06-03T04:05:10.890 | 2011-06-02T12:22:54.703 | 4403 | 4403 | [
"correlation",
"autocorrelation"
] |
11459 | 1 | 11468 | null | 7 | 7398 | I am confused about the mixed advice regarding controlling for baseline differences.
Would you always control for a baseline between groups difference on a particular variable or only if the variable correlates with the DV?
I am using SPSS and conducting Mixed Model analyses to evaluate an intervention.
Jeromy, I tried to answer by adding a comment, but I can't seem to make it work. To answer:
@jeromy-anglim: Well it is a group randomised design (schools randomised into intervention or waitlist control). However, participants from waitlist control schools (control condition) were sent an invitation to take part in a questionnaire study, whereas participants from the intervention schools were invited to take part in a parenting intervention. Hence, a slightly more distressed sample (despite efforts to avoid this by offering child participants a $30 voucher). In my case there are more boys in the intervention sample, and the intervention sample is on average 3 months younger. No other baseline differences for adolescent report. But baseline differences on almost all parent reported outcome variables (with the intervention group being more distressed).
| Do you include a covariate because of baseline group difference or if correlated with DV or both? | CC BY-SA 3.0 | null | 2011-06-02T03:40:38.623 | 2012-02-10T12:32:20.267 | 2011-06-02T07:52:14.237 | 4269 | 4269 | [
"mixed-model",
"repeated-measures"
] |
11460 | 2 | null | 396 | 6 | null | If plotting in color, consider that colorblind people may have trouble distinguishing elements by color alone. So:
- Use line styles to distinguish lines.
- Use extra weight in elements, make linewidth at least 2 pt, etc.
- Use different markers as well as colors to distinguish points.
- Use labels and annotations, referring to position and style also.
- When referring to plot elements in text, describe them by color, relative position and style: "the red, upper, dash-dot curve"
- Use a colorblind friendly palette. See this, this. I have a simple python implementation of the palette in the last reference at code.google.com, look for python-cudtools
| null | CC BY-SA 4.0 | null | 2011-06-02T03:51:31.690 | 2022-11-20T10:10:33.567 | 2022-11-20T10:10:33.567 | 362671 | 4847 | null |
11461 | 2 | null | 11443 | 1 | null | In a Poisson distribution, the variance is equal to the mean.
The negative binomial distribution has a variance that is greater than the mean by some factor -- hence it's "overdispersed" relative to the Poisson.
In marketing theory (see Ehrenberg's Repeat Buying), purchases by a given individual have a Poisson distribution, with individual lambdas. But since your lambda and my lambda are different values, the overall variance is higher. In a negative binomial, the lambdas are assumed to follow a gamma distribution.
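That gamma-mixture story is easy to check with a quick simulation (a sketch with arbitrary gamma parameters, not tied to any particular dataset): drawing each individual's lambda from a gamma distribution and then drawing Poisson counts yields negative binomial counts, with variance exceeding the mean.

```r
set.seed(42)
n <- 1e5
# Each "individual" gets their own purchase rate from a gamma distribution
lambda <- rgamma(n, shape = 2, rate = 1)  # E[lambda] = 2
y <- rpois(n, lambda)                     # gamma-mixed Poisson = negative binomial

mean(y)  # close to 2
var(y)   # close to 4 = mean + mean^2/shape, i.e. greater than the mean
```

A plain Poisson with a single lambda would give var(y) equal to mean(y); the heterogeneity in lambdas is exactly what produces the overdispersion.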
| null | CC BY-SA 3.0 | null | 2011-06-02T03:54:27.217 | 2011-06-02T03:54:27.217 | null | null | 3919 | null |
11462 | 1 | null | null | 6 | 6868 | I'm trying to find the best model based on AIC using stepwise (`direction = both`) model selection in R, using stepAIC in the MASS package.
This is the script I used:
```
stepAIC (glmer(decision ~ as.factor(Age) + as.factor(Educ) + as.factor(Child), family=binomial, data=RShifting), direction="both")
```
however I got this error result:
```
Error in lmerFactorList(formula, fr, 0L, 0L) :
No random effects terms specified in formula
```
I tried to add `(1|town)` to the formula, since town is the random effect (where the respondents are nested), and ran this script:
```
stepAIC (glmer(decision ~ as.factor(Age) + as.factor(Educ) + as.factor(Child) + (1|town), family=binomial, data=RShifting), direction="both")
```
The result is this:
```
Error in x$terms : $ operator not defined for this S4 class
```
I hope you could help me figure out how to solve this problem. Thanks a lot.
| Incorporating random effects in the logistic regression formula in R | CC BY-SA 3.0 | null | 2011-06-02T04:23:51.223 | 2011-06-02T09:38:42.860 | 2011-06-02T09:38:42.860 | 2116 | 4848 | [
"r",
"logistic",
"stepwise-regression"
] |
11463 | 2 | null | 11457 | 1 | null | I'm here again =)!
It happens that right now I'm fitting a nested logistic regression and I have to choose the better model as well. Actually, I don't know how to do stepwise selection with lme4; nonetheless, I'm not sure it is advisable to use AIC to choose the best model (better fit). Take a look at [this link](http://www.mrc-bsu.cam.ac.uk/bugs/winbugs/dicpage.shtml#q6); it discusses very briefly when AIC may be appropriate with hierarchical (nested) models. Just to provide some background on the link, it is a FAQ about DIC, which is like AIC and is an output of BUGS.
All in all, I don't know how to answer your question, but I can provide some guidance on how I choose my own models. I look at two things: ROC curves and fake-data simulation. ROC curves are a good graphical way to compare the fit of models in terms of true positive rates and false positive rates. Depending on what is more sensitive in your case, you can choose a model with a higher true positive rate or a higher true negative rate.
If you use Zelig package, I think you can fit models with lme4 and plot ROC curves. The [documentation](http://gking.harvard.edu/sites/scholar.iq.harvard.edu/files/zelig.pdf) has some nice examples of this.
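If it helps to see the ROC idea without any package, here is a minimal hand-rolled sketch (the fitted probabilities and outcomes are simulated, not from any real model):

```r
set.seed(1)
p <- runif(500)          # hypothetical fitted probabilities from some model
y <- rbinom(500, 1, p)   # hypothetical observed 0/1 outcomes

# Sweep a classification threshold and record true/false positive rates
thr <- seq(0, 1, by = 0.01)
tpr <- sapply(thr, function(t) mean(p[y == 1] >= t))
fpr <- sapply(thr, function(t) mean(p[y == 0] >= t))
plot(fpr, tpr, type = "l", xlab = "False positive rate",
     ylab = "True positive rate")  # the ROC curve
```

For two competing models you would overlay their curves and prefer the one closer to the top-left corner.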
Another way (my preferred one) is to simulate data with the parameters estimated from the model. Then you check whether the simulated data resemble the real data. This is a visual comparison, but it can be very effective. If you need a reference on this procedure, Gelman and Hill's book is a standard textbook on this approach. For instance, assume you fitted the model:
logit(P(y=1)) = 10 + 2*x
Then you create a new variable, y.fake, which is
for (i in 1:n) y.fake[i] <- rbinom(1, 1, plogis(10 + 2*x[i]))
Then you check if y.fake resembles the true y. You can compute, for instance, the number of cases where y.fake equals the true y. You can run more than one simulation to take into account sampling variability, and you can take into account the variability of the estimated coefficients as well.
Hope this helps.
| null | CC BY-SA 3.0 | null | 2011-06-02T04:56:40.010 | 2011-06-02T04:56:40.010 | null | null | 3058 | null |
11464 | 2 | null | 11450 | 3 | null | I'd use rsync to push the scripts and data files to the server, then "nohup Rscript myscript.R > output.out &" to run things and when finished, rsync to pull the results.
| null | CC BY-SA 3.0 | null | 2011-06-02T05:03:33.267 | 2011-06-02T05:03:33.267 | null | null | 4849 | null |
11466 | 2 | null | 8148 | 3 | null | Spatial cluster analysis uses geographically referenced observations and is a subset of cluster analysis that is not limited to exploratory analysis.
Example 1
It can be used to make fair election districts.
Example 2
Local spatial autocorrelation measures are used in the [AMOEBA method](http://code.google.com/p/clusterpy/source/browse/trunk/clusterpy/core/toolboxes/cluster/amoeba.py) of clustering. Aldstadt and Getis use the resulting clusters to create a spatial weights matrix that can be specified in [spatial regressions](http://en.wikipedia.org/wiki/Spatial_econometrics) to test a hypothesis.
See Aldstadt, Jared and Arthur Getis (2006) “Using AMOEBA to create a spatial weights matrix and identify spatial clusters.” Geographical Analysis 38(4) 327-343
Example 3
Cluster analysis based on randomly growing regions given a set of criteria could be used as a probabilistic method to indicate unfairness in the design of institutional zones such as school attendance zones or election districts.
| null | CC BY-SA 3.0 | null | 2011-06-02T05:53:39.117 | 2011-06-02T06:01:50.210 | 2011-06-02T06:01:50.210 | 4329 | 4329 | null |
11467 | 2 | null | 11455 | 3 | null | Suppose your model is the following:
$$Y_i=X_i\beta+D_i\alpha$$
where $X$ is the other variables and $D$ is your dummy variable. Then for the control group your model is
$$Y_i=X_i\beta+2\alpha$$
and for intervention group
$$Y_i=X_i\beta+\alpha.$$
If you recode with 0 and 1 then respectively you get
$$Y_i=X_i\beta$$
for control group and
$$Y_i=X_i\beta+\alpha$$
for intervention group.
In the latter case $\alpha$ has a clearer interpretation. It measures the additional effect on $Y_i$ for intervention group. The interpretation in the first case is a bit trickier. That is the only difference.
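A quick simulated check (arbitrary made-up data, just to illustrate the point) shows the two codings give identical fits, with only the interpretation of the coefficients changing:

```r
set.seed(1)
n   <- 100
x   <- rnorm(n)
d12 <- sample(1:2, n, replace = TRUE)          # 1 = intervention, 2 = control
y   <- 1 + 0.5 * x + ifelse(d12 == 1, 0.8, 0) + rnorm(n)

d01 <- ifelse(d12 == 1, 1, 0)                  # recode: 1 = intervention, 0 = control
fit12 <- lm(y ~ x + d12)
fit01 <- lm(y ~ x + d01)

all.equal(fitted(fit12), fitted(fit01))        # identical fitted values
coef(fit12)["d12"]                             # equals -coef(fit01)["d01"]
```

The fit is the same either way; with 0/1 coding the dummy's coefficient directly reads as the intervention effect, while with 1/2 coding its sign flips and the intercept absorbs the offset.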
| null | CC BY-SA 3.0 | null | 2011-06-02T05:58:57.107 | 2011-06-02T05:58:57.107 | null | null | 2116 | null |
11468 | 2 | null | 11459 | 5 | null | You might want to read the following article
- Pocock, S.J. and Assmann, S.E. and Enos, L.E. and Kasten, L.E. (2002).
Subgroup analysis, covariate adjustment and baseline comparisons in clinical trial reporting: current practice and problems. FREE PDF
- Check out the discussion of answers to this question on best practice in analysing pre-post intervention designs
A few thoughts, although I confess I'm not an expert on this:
- If you had perfect randomisation, then the significance test without covariates would be fair, although there might be power benefits to including covariates. The more strongly the covariate is related to the DV, in general, the more it will reduce your error variance (which can increase statistical power, assuming a true effect exists). This, in combination with baseline group differences on the covariates, will lead to greater differences between covariate-adjusted and unadjusted estimates of the intervention effect.
- It sounds like you have some mild departures from randomisation, in that randomisation happened at a group-level, and there may have been some differences in the uptake of the experiment.
I'd be particularly interested to know whether there are reasons to expect the groups to differ in their means on the DV at baseline.
The degree to which departures from randomisation in the protocol are a problem is related to the degree to which it leads to systematically different groups.
- In most applications that I've seen, pre-test measurements are likely to capture most of the potential effects of any baseline covariates.
- I think the big issue is that if there are substantial baseline differences on the dependent variable, then it can be difficult to assess the effect of the intervention.
| null | CC BY-SA 3.0 | null | 2011-06-02T06:12:26.110 | 2011-06-03T05:22:22.907 | 2017-04-13T12:44:24.947 | -1 | 183 | null |
11469 | 2 | null | 11448 | 1 | null | The appropriate analysis would depend on whether you have multiple participants or just one participant and the number of blocks that you have.
In general, with more blocks, you can more precisely characterise the functional relationship between practice and performance.
### Small number of blocks (e.g., 3 to perhaps 10)
- Linear and quadratic contrasts as part of a repeated measures ANOVA would provide a basic test of the improvement in d-prime with practice. Or you could implement the same model within a mixed-model framework.
Another simple option would be to compare the first one or two blocks with the last one or two blocks, either using a repeated measures t-test or using a contrast with appropriate weights as part of the repeated measures ANOVA.
### Many blocks (e.g., perhaps 15 or 30 or more)
- You could start to characterise the change in d-prime with practice more precisely perhaps with some non-linear functions.
Practice effects are usually more rapid at the start of practice and monotonically decelerate and approach an asymptote.
Thus, non-linear regression per participant or non-linear multilevel modelling for a more integrated approach represent two major options.
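For the many-blocks case, here is one hedged sketch of the per-participant non-linear regression option — the d-prime data are simulated with made-up parameters, and the three-parameter exponential (asymptote minus a decaying gain) is just one common choice of practice function:

```r
set.seed(1)
block  <- 1:30
# Simulated d-prime rising toward an asymptote of 2.5 with practice
dprime <- 2.5 - 1.8 * exp(-0.15 * block) + rnorm(30, sd = 0.1)

# asymptote - gain * exp(-rate * block): negatively accelerated learning curve
fit <- nls(dprime ~ a - b * exp(-c * block),
           start = list(a = 2, b = 1.5, c = 0.1))
coef(fit)  # estimates should land near a = 2.5, b = 1.8, c = 0.15
```

In a multilevel version the a, b, and c parameters would get participant-level random effects instead of being fit separately per person.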
### Distribution of residuals for d-prime
- If you had concerns about the distribution of residuals of d-prime, then you could consider a transformation; I'm not sure what's standard practice in signal detection research, but my guess is that d-primes might be relatively normal without transformation.
| null | CC BY-SA 3.0 | null | 2011-06-02T06:23:07.207 | 2011-06-02T08:24:09.440 | 2011-06-02T08:24:09.440 | 183 | 183 | null |
11470 | 2 | null | 11462 | 5 | null | Short answer is you can't - well, not without recoding a version of `stepAIC()` that knows how to handle S4 objects. `stepAIC()` knows nothing about `lmer()` and `glmer()` models, and there is no equivalent code in lme4 that will allow you to do this sort of stepping.
I also think your whole process needs carefully rethinking - why should there be the one best model? AIC could be used to identify several candidate models that do similar jobs and average those models, rather than trying to find the best model for your sample of data.
Selection via AIC is effectively doing multiple testing - but how should you correct the AIC to take into account the fact that you are doing all this testing? How do you interpret the precision of the coefficients for the final model you might select?
A final point: don't do all the `as.factor()` calls in the model formula, as it just makes the whole thing a mess, takes up a lot of space, and doesn't aid understanding of the model you fitted. Get the data into the correct format first, then fit the model, e.g.:
```
RShifting <- transform(RShifting,
Age = as.factor(Age),
Educ = as.factor(Educ),
Child = as.factor(Child))
```
then
```
glmer(decision ~ Age + Educ + Child + (1|town), family=binomial,
data=RShifting)
```
Apart from making things far more readable, it separates the tasks of data processing from the data analysis steps.
| null | CC BY-SA 3.0 | null | 2011-06-02T08:40:01.790 | 2011-06-02T08:40:01.790 | null | null | 1390 | null |
11471 | 2 | null | 11450 | 13 | null | The most convenient way is just to install VNC server and some light environment like XFCE and make yourself a virtual session that you can use from wherever you want (it persists disconnects), i.e. something like this:

Additional goodies are that you can use your local clipboard in the virtual desktop and see R plots way faster than via X11 forwarding or copying image files.
It takes some effort to set everything up right (X init, SSH tunnel), but the internet is full of tutorials on how to do that.
| null | CC BY-SA 3.0 | null | 2011-06-02T08:45:49.460 | 2011-06-02T08:45:49.460 | null | null | null | null |