Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
12710 | 1 | null | null | 2 | 117 | I have [case fatality rates](http://en.wikipedia.org/wiki/Case_fatality_rate) (deaths per 100 cases) for 2 different states receiving different treatments for 17 years.
What is the best statistical method to compare them? Relative risk, odds ratio, plain time series analysis, ...?
The data is like this:
```
Year St.1 Cases St.1 Deaths St.1 CFR St.2 Cases St.2 Deaths St.2 CFR
1994 1836 383 20.86 583 121 20.75
1995 1246 257 20.63 1126 227 20.16
1996 1450 263 18.14 896 179 19.98
1997 2953 407 13.78 351 76 21.65
1998 1161 149 12.83 1061 195 18.55
1999 2924 434 14.84 1371 275 20.06
2000 1729 169 9.77 1170 253 21.62
2001 1888 275 14.57 1005 199 19.80
2002 919 178 19.37 604 133 22.02
2003 865 142 16.42 1124 237 21.09
2004 1543 131 8.49 1030 228 22.14
2005 2887 336 11.64 6061 1500 24.75
2006 1484 108 7.28 2320 528 22.76
2007 1592 75 4.71 3024 645 21.33
2008 1920 53 2.76 3012 537 17.83
2009 1477 40 2.71 3073 556 18.09
2010 1534 26 1.69 3540 494 13.95
```
| Comparing fatality rates | CC BY-SA 3.0 | 0 | 2011-07-06T13:19:34.447 | 2011-07-11T18:52:19.143 | 2011-07-06T14:56:50.537 | null | 2956 | [
"time-series",
"hypothesis-testing",
"epidemiology"
] |
12711 | 2 | null | 12709 | 2 | null | I think the correct formula should be
`model1 <- lmer(dependent ~ independent1 + indepdendent2 + independent3 + independent4 + (1|independent5), REML=TRUE)`
The `(1|independent5)` term specifies the grouping factor. You can change the `1` to a factor that stores the order of your records.
Hope that helps.
| null | CC BY-SA 3.0 | null | 2011-07-06T13:23:19.467 | 2011-07-06T13:23:19.467 | null | null | 5280 | null |
12712 | 1 | null | null | 4 | 1137 | I want to apply a GAMM with R to this time series but I am not sure how to handle the station P18, as shown in the figure below.

If I shrink the dataset to the point where P18 ends (i.e. the left side of the dashed line), I would lose many reversing trends in P07, P14, P22, P20, and so on.
The question is thus: can I use the whole dataset, including P18? I am not sure what the GAMM does with such a gap. What would happen if there were more gaps?
any idea is welcome!
| How to handle gaps in a time series when doing GAMM? | CC BY-SA 3.0 | null | 2011-07-06T13:56:37.567 | 2015-07-07T17:36:37.413 | 2011-07-06T15:59:14.573 | 1390 | 5280 | [
"time-series",
"nonlinear-regression",
"mixed-model"
] |
12714 | 1 | null | null | 1 | 214 | I am using a mixed repeated measures with an ARMA(1,1) variance covariance structure. I have a basic understanding of why I am using this model etc but I need some help on articulating a few things:
- How does this model correct for the variance in the time between each time point? Or does it ignore the problem? If so, why is that acceptable here?
- What happens to variables that are correlated? When the model assigns variance to one variable or the other, how does this analysis do it?
- Why this over plain repeated measures? Less error? More power? How much is imputed?
I have formulated some thought/answers, but I am curious to hear from others who might actually have used this model more than me.
Thanks so much
| General questions about a mixed repeated measures model | CC BY-SA 3.0 | null | 2011-07-06T14:24:40.847 | 2011-07-06T14:50:19.937 | 2011-07-06T14:50:19.937 | null | 5304 | [
"mixed-model"
] |
12715 | 1 | 12716 | null | 4 | 3095 | I am always struggling with normality testing for quantitative predictors (no factors) and transforming them to normality.
- If I am running a GLMM and my predictors are really non-normal, should I transform them as well to try to make them normally distributed?
- I know that this is important for the response variable but what should be done with predictors?
P.S.: I really could not find a similar question.
| Should quantitative predictors be transformed to be normally distributed? | CC BY-SA 3.0 | null | 2011-07-06T15:10:50.310 | 2012-07-18T14:24:04.467 | 2012-07-18T14:24:04.467 | 7290 | 5280 | [
"regression",
"data-transformation",
"normality-assumption",
"predictor",
"glmm"
] |
12716 | 2 | null | 12715 | 17 | null | There is nothing in the theory behind regression models that requires any distribution for X other than having a minimum number of observations in each range of X for which you want to learn something. The only problem you usually run into is overly influential observations due to a heavy right tail of the distribution of X. To deal with that I often fit something like a restricted cubic spline in the cube root or square root of X. In the R rms package this would look like `y ~ rcs(x^(1/3)) + ...` other variables or `rcs(sqrt(x),5) + ...` (5=5 knots using default knot placement). That way you only assume a smooth relationship but you limit the influence of large values, while allowing for zeros (though not negative values).
| null | CC BY-SA 3.0 | null | 2011-07-06T15:26:07.927 | 2011-07-06T15:38:26.320 | 2011-07-06T15:38:26.320 | 1390 | 4253 | null |
12717 | 2 | null | 363 | 3 | null | Lots of good books already suggested. But here is another: Gerd Gigerenzer's "Reckoning With Risk" because understanding how statistics affect decisions is more important than getting all the theory right. In fact number one sin of statisticians is failing to communicate clearly. His book talks about the consequences of poor communication and how to avoid it.
| null | CC BY-SA 3.0 | null | 2011-07-06T15:37:34.777 | 2011-07-06T15:37:34.777 | null | null | 5305 | null |
12718 | 2 | null | 12709 | 6 | null | Number of groups differ in Stata and R
Regarding your "99 vs. 100 groups problem": Are you really sure that your R and Stata dataset are identical? In Stata run `summarize`, in R run `summary(yourDataFrame)` and compare the results.
Fitting varying intercept/slope models in Stata and R
@Jens has already pointed out how to write Stata's xtmixed model in R. A comment on the difference between `1 + (1|independent5)` and `(1|independent5)`: there is no difference. `1 + (...)` means that an intercept is included in the model, but this is the default anyway.
Gelman offers a nice overview of how to fit models in R and Stata, see [this page](http://www.stat.columbia.edu/~gelman/arm/other.packages/) and then [The (current version of) the relevant pages in the book (see Section C.4)](http://www.stat.columbia.edu/~gelman/arm/other.packages/software.pdf)
Please find below some sample code showing how to estimate simple MLMs in Stata and R. The data and some information can be found here: [http://dss.princeton.edu/training/Multilevel101.pdf](http://dss.princeton.edu/training/Multilevel101.pdf) (the dataset itself: [http://dss.princeton.edu/training/schools.dta](http://dss.princeton.edu/training/schools.dta))
Stata:
```
clear
use c:/tmp/schools.dta
* varying intercept model
xtmixed y x1 || school:, reml
* varying intercept and slope (x1) model
xtmixed y x1 || school:x1, reml
```
R:
```
library(foreign)
library(lme4)
dfr <- read.dta(file="c:/tmp/schools.dta")
head(dfr)
## varying intercept model
lmer(y ~ x1 + (1|school), data=dfr)
## varying intercept and slope (x1) model
lmer(y ~ x1 + (x1|school), data=dfr)
```
R's lmer does not report p-values
I am not a statistician and I cannot comment on lmer's behavior. You might want to read this [post](https://stat.ethz.ch/pipermail/r-sig-mixed-models/2009q3/002912.html) by D Bates. You also might be interested in this blog post: [Linear mixed-effects regression p-values in R: A likelihood ratio test function](http://blog.lib.umn.edu/moor0554/canoemoore/2010/09/lmer_p-values_lrt.html)
Hopefully @Ben Bolker will comment on this...
Extracting information from R's *mer-class-objects
To extract
- the fixed effects: `fixef(myLmerObject)`
- the random effects: `VarCorr(myLmerObject)` (see `str(VarCorr(myLmerObject))` to better understand the internal structure, e.g. `VarCorr(myLmerObject)[[1]][1]`)
| null | CC BY-SA 3.0 | null | 2011-07-06T16:01:52.827 | 2011-07-06T16:01:52.827 | null | null | 307 | null |
12719 | 2 | null | 12712 | 3 | null | If the covariates (not including Julian day) are sufficient to model the response in and of themselves, then the problem of missing data in the one site is irrelevant. For example, if the model were
```
mod <- gam(porpoise ~ s(temperature) + s(salinity) + s(dist.to.land) +
           s(site, bs = "re"), data = foo)
```
which is an AM with a random-effect term for site, specified via the `s(site, bs = "re")` spline, then we are saying that, irrespective of site, there is a functional, additive relationship between the porpoise activity and the 3 covariates. The random effect for site just says that the mean activity per site is allowed to differ, i.e. some sites can have more activity than others in general. You can get the same with `gamm()`, but that involves a bit more heavy lifting; if your covariates don't fully model the temporal dependence in the data, though, you'll need `gamm()` and a correlation structure for the residuals.
In that case, the lack of data after a certain date in one or more sites is irrelevant because you are not using time itself to model the response, it is the combination of the covariates that is being modelled.
By the way, the generalised in GAM and GAMM relates to the generalisation of the model to non-Gaussian errors. It is the additive bit, the A in the acronyms, that gives the non-linear bit. You are making things difficult for yourself by expressing activity as a % variable because those usually need to be modelled using things like beta regression which are not GLMs and thus are not GAMs. It might be better to work with actual events, counts in other words, and not a %.
| null | CC BY-SA 3.0 | null | 2011-07-06T16:13:22.627 | 2015-07-07T17:36:37.413 | 2015-07-07T17:36:37.413 | 1390 | 1390 | null |
12720 | 2 | null | 12612 | 3 | null | One approach would be to model the $X$'s with a generalized linear model (GLM). Here, you would formulate $p_i$, the probability of success on the $i$'th trial as a (logistic linear) function of the recent observation history. So you're essentially fitting an autoregressive GLM where the noise is Bernoulli and the link function is logit. The setup is:
$p_i = f(b + a_1 X_{i-1} + a_2 X_{i-2} + \cdots + a_k X_{i-k})$, where
$f(x) = \frac{1}{1+\exp(-x)}$ is the logistic function, and
$X_i \sim \mathrm{Bernoulli}(p_i)$
The parameters of the model are $\{b, a_1, \ldots a_k\}$, which can be estimated by logistic regression. (All you have to do is set up your design matrix using the relevant portion of observation history at each trial, and pass that into a logistic regression estimation function; the log-likelihood is concave so there's a unique global maximum for the parameters). If the outcomes are indeed independent then the $a_i$'s will be set to zero; positive $a_i$'s mean that subsequent $p_i$'s increase whenever a success is observed.
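As a rough illustration of this setup (a minimal pure-Python sketch with made-up simulation parameters, not tied to any particular estimation package), one can build the lagged design matrix and fit the logistic model by plain gradient ascent:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lagged_design(x, k):
    """Row i holds [1, x[i-1], ..., x[i-k]]: an intercept plus the recent history."""
    return [[1.0] + [float(x[i - j]) for j in range(1, k + 1)]
            for i in range(k, len(x))]

def fit_logistic(X, y, steps=1500, lr=0.2):
    """Plain batch gradient ascent on the Bernoulli log-likelihood."""
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
            for j, xj in enumerate(xi):
                grad[j] += (yi - p) * xj
        w = [wj + lr * g / n for wj, g in zip(w, grad)]
    return w

random.seed(0)
# simulate a chain in which a success raises the next success probability
x = [1]
for _ in range(500):
    p_next = sigmoid(-1.0 + 2.0 * x[-1])
    x.append(1 if random.random() < p_next else 0)

k = 1
X, y = lagged_design(x, k), x[k:]
b, a1 = fit_logistic(X, y)  # a1 should come out positive here
```

In practice one would of course use an off-the-shelf logistic regression routine; the point is only the shape of the design matrix.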
The model doesn't provide a simple expression for the probability over the sum of the $X_i$'s, but this is easy to compute by simulation (particle filtering or MCMC) since the model has simple Markovian structure.
This kind of model has been used with great success to model temporal dependencies between "spikes" of neurons in the brain, and there is an extensive literature on autoregressive point process models. See, e.g., [Truccolo et al 2005](http://jn.physiology.org/content/93/2/1074.full.pdf) (although this paper uses a Poisson instead of a Bernoulli likelihood, but the mapping from one to the other is straightforward).
| null | CC BY-SA 3.0 | null | 2011-07-06T16:20:18.677 | 2011-07-07T05:22:47.330 | 2011-07-07T05:22:47.330 | 5289 | 5289 | null |
12721 | 1 | null | null | 8 | 333 | I have three product categories, $A,B,C$. Each category has two products, $0,1$. I provide a number of different kinds of choice situations, 1) the test subject is presented a single category and made to choose a product, 2) the test subject is presented with two categories and made to choose a product from two categories, and 3) the the test subject is presented with all three categories and made to choose a product from each. I believe that product choices depend on a number of measured covariates of the individual products, the product categories presented, and the choice in the other category (if such a choice is possible).
For example, let's say that we had a product category of vinegar, with two brands. The first brand is an expensive, balsamic vinegar. The second brand is an inexpensive, store brand, apple vinegar. Now, let's say we have two other product categories: salad greens and kitchen gloves, each containing an expensive, high quality brand and a cheap, generic brand. Even if a consumer chooses the expensive vinegar when asked to choose only from the vinegar category or from the vinegar and salad category, we might still expect that he would select the inexpensive vinegar if asked to choose products from the vinegar and kitchen glove categories. We might also expect that a person who chose the inexpensive vinegar, when asked to choose from the vinegar and salad green categories, will also choose the inexpensive salad greens.
This situation is similar to the "shopping basket" problems reviewed by P.B. Seetharaman, et. al. in "[Models of Multi-Category Choice Behavior](http://apps.olin.wustl.edu/workingpapers/pdf/2005-08-016.pdf)". However, the models I have seen consider the incidence of a product category as a function of the consumer, often as a stage model.
How would we estimate the coefficients of the measured covariates in the case where the chooser does not choose the categories they must select from?
| Multicategory choice model with given categories | CC BY-SA 3.0 | null | 2011-07-06T17:25:13.640 | 2011-07-20T16:50:37.730 | 2011-07-20T16:50:37.730 | 82 | 82 | [
"multivariate-analysis",
"conjoint-analysis"
] |
12722 | 2 | null | 12704 | 2 | null | Your data describe how long it took before some event takes place, but with some overhead such that the event will never take place before some time $t$. Look into a shifted negative binomial distribution.
| null | CC BY-SA 3.0 | null | 2011-07-06T18:08:42.837 | 2011-07-06T18:08:42.837 | null | null | 82 | null |
12724 | 1 | null | null | 4 | 325 | I'm trying to train a single layer of an autoencoder using minFunc, and while the cost function appears to decrease, when enabled, the DerivativeCheck fails. The code I'm using is as close to textbook values as possible, though extremely simplified.
The loss function I'm using is the squared-error:
$ J(W; x) = \frac{1}{2}||a^{l} - x||^2 $
with $a^{l}$ equal to $\sigma(W^{T}x)$, where $\sigma$ is the sigmoid function. The gradient should therefore be:
$ \delta = (a^{l} - x) \circ a^{l} \circ (1 - a^{l}) $, where $\circ$ denotes the elementwise product, and
$ \nabla_{W} = \delta(a^{l-1})^T $
Note, that to simplify things, I've left off the bias altogether. While this will cause poor performance, it shouldn't affect the gradient check, as I'm only looking at the weight matrix. Additionally, I've tied the encoder and decoder matrices, so there is effectively a single weight matrix.
The code I'm using for the loss function is (edit: I've vectorized the loop I had and cleaned the code up a little):
```
% loss function passed to minFunc
function [ loss, grad ] = calcLoss(theta, X, nHidden)
[nInstances, nVars] = size(X);
% we get the variables as a single vector, so we need to roll it into a weight matrix
W = reshape(theta(1:nVars*nHidden), nVars, nHidden);
Wp = W; % tied weight matrix
% encode each example (nInstances)
hidden = sigmoid(X*W);
% decode each sample (nInstances)
output = sigmoid(hidden*Wp);
% loss function: sum(-0.5.*(x - output).^2)
% derivative of loss: -(x - output)*f'(o)
% if f is sigmoid, then f'(o) = output.*(1-output)
diff = X - output;
error = -diff .* output .* (1 - output);
dW = hidden*error';
loss = 0.5*sum(diff(:).^2, 2) ./ nInstances;
% need to unroll gradient matrix back into a single vector
grad = dW(:) ./ nInstances;
end
```
Below is the code I use to run the optimizer (for a single time, as the runtime is fairly long with all training samples):
```
examples = 5000;
fprintf('loading data..\n');
images = readMNIST('train-images-idx3-ubyte', examples) / 255.0;
data = images(:, :, 1:examples);
% each row is a different training sample
X = reshape(data, examples, 784);
% initialize weight matrix with random values
% W: (R^{784} -> R^{10}), W': (R^{10} -> R^{784})
numHidden = 10; % NOTE: this is extremely small to speed up DerivativeCheck
numVisible = 784;
low = -4*sqrt(6./(numHidden + numVisible));
high = 4*sqrt(6./(numHidden + numVisible));
W = low + (high-low)*rand(numVisible, numHidden);
% run optimization
options = {};
options.Display = 'iter';
options.GradObj = 'on';
options.MaxIter = 10;
mfopts.MaxFunEvals = ceil(options.MaxIter * 2.5);
options.DerivativeCheck = 'on';
options.Method = 'lbfgs';
[ x, f, exitFlag, output] = minFunc(@calcLoss, W(:), options, X, numHidden);
```
The results I get with the DerivativeCheck on are generally less than 1, but greater than 0.1. I've tried similar code using batch gradient descent, and get slightly better results (some are < 0.0001, but certainly not all).
I'm not sure if I made either a mistake with my math or code. Any help would be greatly appreciated!
update
I discovered a small typo in my code (which doesn't appear in the code above) causing exceptionally bad performance. Unfortunately, I'm still getting less-than-good results. For example, a comparison between the two gradients:
```
calculate check
0.0379 0.0383
0.0413 0.0409
0.0339 0.0342
0.0281 0.0282
0.0322 0.0320
```
with differences of up to 0.04, which I'm assuming is still failing.
| DerivativeCheck fails with minFunc | CC BY-SA 3.0 | null | 2011-07-06T18:28:31.233 | 2013-04-03T18:55:19.307 | 2011-07-08T20:22:01.767 | 5268 | 5268 | [
"matlab",
"optimization",
"neural-networks"
] |
12725 | 1 | 12728 | null | 2 | 1435 | I have a dataset that includes an open-ended response that was coded into categories by a vendor. The vendor created 50 multi-punch categories stored as 50 true/false variables.
By putting this in a multiple response set and cross-tabbing it with itself I can see that most responses seem to be in a small number of categories - the crosstab is strongly diagonal.
For the purposes of an analysis I would like to perform, the categories can only be single-punch. Short of sending this back to the vendor and having them re-categorize over 6,000 responses, I'd like to solve this programmatically if the number of affected responses is small enough. I'd like to write a syntax file that can count the number of respondents categorized into each number of categories - say, 5000 assigned to one category, 1000 assigned to two categories, 500 assigned to three, and so on... If the number assigned to multiple categories is small enough, I'd like to just throw out the ones that were assigned to multiple categories.
Is there an effective way to do this in SPSS syntax? Writing the logic for each possible pair / triplet / quadruplet of multi-response data would be impossible given that I have 50 categories. Any ideas?
| Detect multi-punch responses in SPSS syntax? | CC BY-SA 3.0 | null | 2011-07-06T18:42:19.520 | 2011-07-07T03:54:37.453 | 2011-07-06T19:43:55.313 | 3331 | 3331 | [
"spss",
"data-transformation"
] |
12726 | 1 | null | null | 9 | 4047 | I would like to plot 2D confidence regions (at 1-sigma, 2-sigma) for a model that I've fit to data. I've used PyMC to generate 50k MCMC posterior samples for my model with 6 parameters.
I know the process to create confidence regions is something similar to:
1.) create a histogram of the samples in the 2D space
2.) identify iso-density contours
3.) from a selected start point (e.g., the mean), integrate outwards perpendicular to the iso-density contours until the desired fraction of sample points is contained in the region.
Is there a convenient function in the numpy/scipy/pymc/pylab/etc world that will create the 2D confidence region plot? Alternatively, where can I find a coded algorithm, or stand-alone tool, that will compute the contours for later plotting?
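For reference, steps 1–2 and the thresholding part of step 3 can be sketched without any plotting library (an illustrative pure-Python version; in practice `numpy.histogram2d` plus matplotlib's `contour` with these levels would be the convenient route):

```python
import random

def hpd_levels(xs, ys, bins=40, fracs=(0.683, 0.954)):
    """Histogram the samples, then find the bin-count thresholds whose
    super-level sets contain the requested fractions of the samples."""
    xmin, xmax = min(xs), max(xs)
    ymin, ymax = min(ys), max(ys)
    counts = {}
    for x, y in zip(xs, ys):
        i = min(int((x - xmin) / (xmax - xmin) * bins), bins - 1)
        j = min(int((y - ymin) / (ymax - ymin) * bins), bins - 1)
        counts[(i, j)] = counts.get((i, j), 0) + 1
    total = len(xs)
    levels = []
    for frac in fracs:
        cum = 0
        # walk bins from densest to sparsest until the mass is covered
        for c in sorted(counts.values(), reverse=True):
            cum += c
            if cum >= frac * total:
                levels.append(c)
                break
    return levels  # pass these as contour levels on the binned counts

random.seed(1)
xs = [random.gauss(0, 1) for _ in range(20000)]
ys = [random.gauss(0, 1) for _ in range(20000)]
level68, level95 = hpd_levels(xs, ys)
# the 95% region uses a lower density threshold than the 68% region
```

These are highest-density regions in the binned approximation; smoothing the histogram (or using a kernel density estimate) gives cleaner contours.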
| Calculating 2D Confidence Regions from MCMC Samples | CC BY-SA 3.0 | null | 2011-07-06T18:53:08.730 | 2020-06-05T17:30:53.330 | null | null | 5307 | [
"confidence-interval",
"markov-chain-montecarlo",
"python"
] |
12728 | 2 | null | 12725 | 2 | null | It sounds like a simple sum command would work to identify if a response had multiple positive responses to the same category.
So say `X1_1,X1_2,...X1_50` are the dummy variables for survey item `X1`. The command
```
compute X1_sum = SUM(X1_1 to X1_50).
freq var = X1_sum.
```
This results in a variable, `X1_sum`, that totals the items for each case; the frequency table then shows the number who were assigned 1 category, 2 categories, etc. The only caveat is that for the statement `X1_1 to X1_50` to work, the variables need to be in order in the dataset.
You could make your own macro function to do this for all of the items, which could probably be reduced in complexity and number of parameters passed based on how your variables are coded. An example could be;
```
define !sum_response (name = !TOKENS(1)
/ begin = !TOKENS(1)
/ end = !TOKENS(1) ).
compute !name = SUM(!begin to !end).
execute.
freq var = !name.
!enddefine.
!sum_response name = X1_sum begin = X1_1 end = X1_50.
```
This is not a great example, as the produced function takes as much writing as the original compute command. But it could be simplified further if you have other consistent naming conventions (and I can give examples if needed). For instance, if all your responses have a suffix as above, you could reduce the macro to passing one statement (the `X1` in my example). If all the responses are coded in a similar progressive manner (e.g. `X1, X2, X3...`), you could write code to loop through all of the variables without calling the macro for each set (you could do this even if the names aren't consistent, but the code would be much more verbose and perhaps would not save any time over just writing the separate calls).
I can't give any more advice without being more explicit about what you want to accomplish (such as how to handle cases that have multiple responses). I cringe about the suggestion of throwing out cases, but without knowing more I could not give any useful advice. This should get you started though, and you can either update this question or ask a new question about how to handle the multiple responses if you can't figure it out on your own.
| null | CC BY-SA 3.0 | null | 2011-07-06T19:13:43.373 | 2011-07-07T03:54:37.453 | 2011-07-07T03:54:37.453 | 1036 | 1036 | null |
12732 | 2 | null | 12687 | 1 | null | Can you be more specific about the types of data you are looking at? This will in part determine what type of algorithm will converge the fastest.
I'm also not sure how to compare methods like boosting and DL, as boosting is really just a collection of methods. What other algorithms are you using with the boosting?
In general, DL techniques can be described as layers of encoder/decoders. Unsupervised pre-training works by first pre-training each layer by encoding the signal, decoding the signal, then measuring the reconstruction error. Tuning can then be used to get better performance (e.g. if you use denoising stacked-autoencoders you can use back-propagation).
One good starting point for DL theory is:
[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.73.795&rep=rep1&type=pdf](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.73.795&rep=rep1&type=pdf)
as well as these:
[http://portal.acm.org/citation.cfm?id=1756025](http://portal.acm.org/citation.cfm?id=1756025)
(sorry, had to delete last link due to SPAM filtration system)
I didn't include any information on RBMs, but they are closely related (though personally a little more difficult to understand at first).
| null | CC BY-SA 3.0 | null | 2011-07-06T20:16:52.213 | 2011-07-06T20:16:52.213 | null | null | 5268 | null |
12734 | 1 | 12740 | null | 2 | 283 | I posted this on maths, but seems it would be better here :S
[https://math.stackexchange.com/questions/49941/calculate-the-rate-of-change](https://math.stackexchange.com/questions/49941/calculate-the-rate-of-change)
Basically
I am trying to calculate the change frequency for a set of data. Each bit of data has the date-time it was created. I would like to say for a specific set of data the change frequency is hourly, daily, weekly, monthly or yearly.
So far I have tried getting the list of dates and taking the min/max, from which it is easy to calculate an average that can be converted into a human-readable label such as hourly, daily, etc.
How would I take into account the age of the last new bit of data? E.g., say there were 50 dates, each roughly an hour after the previous one. This is hourly. But if the last one was 2 weeks ago, it's not quite hourly.
In this example I am not sure myself what the frequency of the list would be (hourly, daily, weekly, monthly or yearly), so I'm looking for a bit of direction. Maybe someone here has done this before and has a good model, or knows a bit more than me :)
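For what it's worth, one possible heuristic along these lines (purely a sketch; the bucket thresholds below are arbitrary choices, not a standard method) is to take the median gap between events, but let the time since the last event override it when the series goes quiet:

```python
from datetime import datetime, timedelta

# hypothetical cutoffs: a label applies if the relevant gap fits within it
BUCKETS = [("hourly", 3600), ("daily", 86400), ("weekly", 604800),
           ("monthly", 2629800), ("yearly", 31557600)]

def label_gap(seconds):
    for name, limit in BUCKETS:
        if seconds <= limit:
            return name
    return "rarely"

def change_frequency(dates, now):
    """Use the median gap (robust to one long pause), but fall back to
    the time since the last event when that is much larger."""
    gaps = sorted((b - a).total_seconds() for a, b in zip(dates, dates[1:]))
    median_gap = gaps[len(gaps) // 2]
    staleness = (now - dates[-1]).total_seconds()
    return label_gap(max(median_gap, staleness))

start = datetime(2011, 7, 1)
dates = [start + timedelta(hours=i) for i in range(50)]
print(change_frequency(dates, dates[-1] + timedelta(hours=1)))  # hourly
print(change_frequency(dates, dates[-1] + timedelta(weeks=2)))  # monthly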
Thanks
| calculate the rate of change | CC BY-SA 3.0 | null | 2011-07-06T20:35:14.760 | 2011-07-06T21:55:23.923 | 2017-04-13T12:19:38.800 | -1 | 5308 | [
"distributions",
"estimation",
"data-mining",
"partitioning"
] |
12736 | 1 | null | null | 1 | 788 | I'm just facing an issue of finding an easy tutorial or guide to the `funnel plot`.
Are there simple resources you recommend for this?
Thanks.
| Easy resources for funnel plot | CC BY-SA 3.0 | null | 2011-07-06T21:09:34.277 | 2011-07-07T18:42:44.950 | 2011-07-07T18:42:44.950 | 449 | 5907 | [
"meta-analysis",
"funnel-plot"
] |
12737 | 2 | null | 12651 | 6 | null | Any model selection procedure will affect the standard errors and this is hardly ever accounted for. For example, prediction intervals are computed conditionally on the estimated model and the parameter estimation and model selection are usually ignored.
It should be possible to bootstrap the whole procedure in order to estimate the effect of the model selection process. But remember that time series bootstrapping is trickier than normal bootstrapping because you have to preserve the serial correlation. The block bootstrap is one possible approach although it loses some serial correlation due to the block structure.
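A minimal sketch of the moving-block variant (one of several block-bootstrap schemes; the block length here is an arbitrary tuning choice):

```python
import random

def block_bootstrap(series, block_len, rng=random):
    """Moving-block bootstrap: glue together randomly chosen
    overlapping blocks until the original length is reached."""
    n = len(series)
    starts = list(range(n - block_len + 1))
    out = []
    while len(out) < n:
        s = rng.choice(starts)
        out.extend(series[s:s + block_len])
    return out[:n]

random.seed(4)
series = list(range(100))
resampled = block_bootstrap(series, block_len=10)
print(len(resampled))  # 100
```

The whole selection-plus-estimation procedure would then be rerun on each resampled series, so that the model-selection step's variability enters the interval.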
| null | CC BY-SA 3.0 | null | 2011-07-06T21:13:12.353 | 2011-07-06T21:13:12.353 | null | null | 159 | null |
12738 | 2 | null | 12736 | 5 | null | Do you mean interpreting [this](http://en.wikipedia.org/wiki/Funnel_plot)? Here are a couple of other descriptions:
1.[Interpreting Funnel Plots](http://www.cochrane-net.org/openlearning/html/mod15-3.htm)
2.["Funnel plots in Meta Analysis" by Sterne & Harbord](http://www.stata-journal.com/sjpdf.html?articlenum=st0061)
3.["Funnel plots for detecting bias in meta-analysis: guidelines on choice of axis" by Sterne & Egger](http://www.ncbi.nlm.nih.gov/pubmed/11576817)
But the [wikipedia link](http://en.wikipedia.org/wiki/Funnel_plot) is probably the best place to start. (It also contains a few other refs).
| null | CC BY-SA 3.0 | null | 2011-07-06T21:18:23.503 | 2011-07-06T21:18:23.503 | null | null | 1499 | null |
12739 | 1 | 12746 | null | 8 | 13413 | A/B testing:
[http://20bits.com/articles/statistical-analysis-and-ab-testing/](http://20bits.com/articles/statistical-analysis-and-ab-testing/)
[http://elem.com/~btilly/effective-ab-testing/](http://elem.com/~btilly/effective-ab-testing/)
I'm not too familiar with A/B testing, but I was wondering if there were any specific packages/libraries in R or Python that can be used to perform A/B testing.
| A/B testing in Python or R | CC BY-SA 3.0 | null | 2011-07-06T21:20:49.710 | 2015-02-21T02:58:00.723 | 2011-07-07T06:41:22.470 | null | 3310 | [
"r",
"python",
"ab-test"
] |
12740 | 2 | null | 12734 | 2 | null | As your situation sounds a bit vague, I'd simply convert the data/times into seconds and plot the arrival times. Then, if I'm not mistaken, what you're interested in is an approximate derivative of these times. If you'd like an off-the-shelf answer using R, I recommend looking at the documentation [here](http://rss.acs.unt.edu/Rdoc/library/base/html/as.POSIXlt.html).
Date/time plotting is described [here](http://stat.ethz.ch/R-manual/R-devel/library/graphics/html/axis.POSIXct.html). With a simple R plot example:
```
## 100 random dates in a 10-week period
random.dates <- as.Date("2001/1/1") + 70*sort(stats::runif(100))
plot(random.dates, 1:100)
# or for a better axis labelling
plot(random.dates, 1:100, xaxt="n")
axis.Date(1, at=seq(as.Date("2001/1/1"), max(random.dates)+6, "weeks"))
axis.Date(1, at=seq(as.Date("2001/1/1"), max(random.dates)+6, "days"),
labels = FALSE, tcl = -0.2)
```
| null | CC BY-SA 3.0 | null | 2011-07-06T21:55:23.923 | 2011-07-06T21:55:23.923 | null | null | 1499 | null |
12741 | 1 | null | null | 0 | 112 | I'd like to calculate the conditional probability in the following case:
I was told that a box contains a BLUE ball. This is my evidence; my prior probability of a BLUE ball being drawn is 0.3, and this message can come from a friend (0.6) or an enemy (0.4).
I also know the probabilities of:
a friend observing BLUE : 0.16
a friend observing RED: 0.09
an enemy observing BLUE: 0.05
an enemy observing RED: 0.17 given that the ball is actually BLUE.
I would like to know the conditional probability of the ball actually being BLUE given this information.
| calculating conditional probability-bayes rule | CC BY-SA 3.0 | 0 | 2011-07-06T22:26:41.513 | 2011-07-06T23:32:58.507 | 2011-07-06T23:32:58.507 | null | null | [
"probability",
"bayesian"
] |
12742 | 1 | 15555 | null | 6 | 2475 | So, I'm working on a problem where I have a sample distribution with a reported, mean and an asymmetric confidence interval from bootstrapped sampling of the initial population (this is from a meta-analysis - so, I also have the # of bootstrap replicates and the original sample size). Given this information, I would like to draw a random variable (I'm then going use it to construct my own bootstrapped sample using other similar variables - basically, a meta-analysis of meta-analyses). I'm sure this is simple, but I'm completely blanking on how one would do this - is there an appropriate distribution to draw from or way to back-engineer reported results to create one? R code would be great (if there is a package to do this), or just a see-this-reference, as I'm drawing a blank.
N.B. I think my larger question here was, if you have the quantiles for a population, but no other distributional knowledge or data, is it possible to create a method to draw random numbers based on that information. But perhaps the answer there is no.
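Regarding the N.B.: one crude possibility is to treat the reported quantiles as points on a piecewise-linear CDF and sample by inverting it. This is only an approximation (it truncates the tails at the outermost quantiles and assumes linearity between them); the quantile values below are made up for illustration:

```python
import random

def sampler_from_quantiles(probs, values):
    """Piecewise-linear inverse CDF through known (prob, value) pairs.
    Draws are clamped to the outermost quantiles supplied."""
    def draw():
        u = random.random()
        if u <= probs[0]:
            return values[0]
        if u >= probs[-1]:
            return values[-1]
        for (p0, v0), (p1, v1) in zip(zip(probs, values),
                                      zip(probs[1:], values[1:])):
            if p0 <= u <= p1:
                # linear interpolation within the bracketing quantiles
                return v0 + (v1 - v0) * (u - p0) / (p1 - p0)
    return draw

random.seed(2)
# e.g. an asymmetric interval: 2.5%, 50%, 97.5% quantiles
draw = sampler_from_quantiles([0.025, 0.5, 0.975], [1.0, 4.0, 12.0])
samples = sorted(draw() for _ in range(10000))
print(samples[5000])  # sample median should land near 4.0
```

With only three quantiles this is very rough; more reported quantiles, or a fitted parametric family (e.g. a skewed distribution matched to the interval), would do better.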
| Sampling random numbers from a distribution with asymmetric confidence intervals generated by a bootstrapped estimate | CC BY-SA 3.0 | null | 2011-07-06T22:34:03.300 | 2013-01-21T21:02:23.607 | 2011-09-14T22:51:12.960 | 101 | 101 | [
"confidence-interval",
"random-variable",
"bootstrap",
"random-generation"
] |
12743 | 1 | null | null | 0 | 1321 | I would like to know how to estimate a population average model of a hierarchical logistic regression using `R` package `geepack`.
The `Stata` code is:
```
xtlogit dep ind1 ind2 ind3, i(ind4) pa
```
I would like to reproduce this in `R` using `geepack` or any other method.
| Estimating population average models in lmer or geepack | CC BY-SA 3.0 | null | 2011-07-06T23:03:44.740 | 2016-10-18T21:13:22.690 | 2016-10-18T21:13:22.690 | 7290 | null | [
"r",
"stata",
"generalized-estimating-equations",
"lme4-nlme"
] |
12744 | 2 | null | 1337 | 93 | null | From the [CMU protest at G20](http://www.flickr.com/photos/30686429@N07/sets/72157622330082619/):
[](http://www.flickr.com/photos/30686429@N07/sets/72157622330082619/)
There are [other pictures](http://www.flickr.com/photos/30686429@N07/sets/72157622330082619/) from the protest as well.
| null | CC BY-SA 3.0 | null | 2011-07-06T23:08:50.570 | 2011-07-06T23:08:50.570 | null | null | 1106 | null |
12745 | 2 | null | 1337 | 250 | null | A guy is flying in a hot air balloon and he's lost. So he lowers himself over a field and shouts to a guy on the ground:
"Can you tell me where I am, and which way I'm headed?"
"Sure! You're at 43 degrees, 12 minutes, 21.2 seconds north; 123 degrees, 8 minutes, 12.8 seconds west. You're at 212 meters above sea level. Right now, you're hovering, but on your way in here you were at a speed of 1.83 meters per second at 1.929 radians"
"Thanks! By the way, are you a statistician?"
"I am! But how did you know?"
"Everything you've told me is completely accurate; you gave me more detail than I needed, and you told me in such a way that it's no use to me at all!"
"Dang! By the way, are you a principal investigator?"
"Geeze! How'd you know that????"
"You don't know where you are, you don't know where you're going. You got where you are by blowing hot air, you start asking questions after you get into trouble, and you're in exactly the same spot you were a few minutes ago, but now, somehow, it's my fault!"
| null | CC BY-SA 3.0 | null | 2011-07-06T23:18:06.740 | 2011-07-06T23:18:06.740 | null | null | 686 | null |
12746 | 2 | null | 12739 | 12 | null | Sure, for both Python and R, there are a few interesting and usable packages/libraries.
First, for Python, I highly recommend reading this [StackOverflow Answer](https://stackoverflow.com/questions/752919/any-thoughts-on-a-b-testing-in-django-based-project) directed to a question about A/B Testing in Python/Django. It's a one-page Master's thesis on the subject.
[Akoha](http://www.slideshare.net/erikwright/djangolean-akohas-opensource-ab-experimentation-framework-montreal-python-9) is a fairly recent package (a little more than one year old) directed at A/B testing in Django. I haven't used this package, but it is apparently the most widely used Django package of this type (based on number of downloads). It is available on [bitbucket](https://bitbucket.org/akoha/django-lean/wiki/Home).
[Django-AB](http://www.djangopackages.com/packages/p/django-ab/) is the other Django package I am aware of and the only one I have used.
As you would expect of packages that support a web framework, each provides a micro-framework to set up, configure, conduct, and record the results of A/B tests. As you would expect, they both work by dynamically switching the (Django) template (skeleton HTML page) referenced in the views.py file.
For R, I highly recommend the [agricolae](http://tarwi.lamolina.edu.pe/~fmendiburu/) package, authored and maintained by a university in Peru and available on CRAN. (See also agridat, which is comprised of very useful datasets from completed A/B and multi-variate tests.)
As far as I know, and I have referred to the agricolae documentation quite a few times, web applications or web sites are never mentioned as the test/analytical subject. From the package name, you can tell that the domain is agriculture, but the analogy with testing on the Web is nearly perfect.
This package nicely complements the two Django packages because agricolae is directed to the beginning (test design and establishing success/termination criterion) and end (analysis of the results) of the AB Test workflow.
| null | CC BY-SA 3.0 | null | 2011-07-06T23:23:36.423 | 2011-07-06T23:32:06.387 | 2017-05-23T12:39:26.150 | -1 | 438 | null |
12747 | 2 | null | 12743 | 4 | null | I am by no means an expert in this field but as far as I know you cannot estimate a population average model with R's lme4 package. If I am right then "population average models" use a "generalized estimating equation" (GEE) approach, see, e.g., [To GEE or Not to GEE: Comparing Population Average and Mixed Models for Estimating the Associations Between Neighborhood Risk Factors and Health](http://www.ncbi.nlm.nih.gov/pubmed/20220526).
You might want to search for R packages that can fit generalized estimating equations (GEE), for example the package [geepack](http://www.jstatsoft.org/v15/i02/paper) or [gee](http://cran.r-project.org/web/packages/gee/index.html).
| null | CC BY-SA 3.0 | null | 2011-07-06T23:59:56.503 | 2011-08-06T19:04:09.220 | 2011-08-06T19:04:09.220 | 930 | 307 | null |
12748 | 1 | null | null | 2 | 242 | I am looking to study a particular type of error on a cognitive test to evaluate for potential clinical implications. As there is no existing research on this variable, I would like to run a pilot evaluation on a relatively large pool of subject data from our clinic (feasibly this would be about 200 subjects) to see if there are any trends to guide hypothesis testing in future studies (e.g., are patients from a particular clinical population more likely to make this type of error than those from another population). The data is such that a patient could potentially make 0-15 of these errors, although it's likely that there will be a strong tendency toward the 0-5 range.
Essentially, I am looking to identify subjects from within this pool who are making a relatively higher rate of these errors. Is there a recognized format for doing this type of pilot work? Barring that, what would your recommendations be?
This is obviously very exploratory, so I feel like I would have some latitude in defining what constitutes an 'outlier' for the purposes of this study, but any references or suggestions would be much appreciated.
| Looking for help identifying outliers in a pilot study to guide future hypothesis testing | CC BY-SA 3.0 | null | 2011-07-07T02:14:59.487 | 2011-07-08T12:20:22.737 | 2011-07-08T11:18:20.963 | null | 5311 | [
"outliers"
] |
12750 | 1 | 12782 | null | 2 | 135 | So... let's say I have data that look something like this...

(As I look at it now, in the actual data the red line is about 20% shorter than the black at the high end... but you get the idea.)
I've made a mixed-effects model (lmer) where there's an effect of the x-predictor and also an effect of the two colours. I'm thinking that if I have centred the x-axis in my model, then comparisons between the colours are perfectly fine. Should I be concerned that someone may argue that only the overlapping parts on the x-axis should be allowed in the comparison across colours? The lines aren't perfectly parallel, but the interaction is as near 0 as one can get.
| Inferences about non-overlapping lines | CC BY-SA 3.0 | null | 2011-07-07T02:49:58.210 | 2011-07-07T20:49:59.437 | 2011-07-07T13:45:00.953 | null | 601 | [
"r",
"regression"
] |
12751 | 2 | null | 12748 | 2 | null | I'd encourage you not to think primarily in binary terms (outliers vs. non-outliers) but rather to look more comprehensively for predictors that are associated with this variable. It's an interval-level variable, which means you might be able to test associations using ANOVA or regression. Chances are you'll want to transform the scores first, since they are so skewed; taking the square root might help.
If you conduct an ANOVA or regression and find promising predictors, you can at that point return if you want to the narrower task of identifying or predicting outliers: you'll have a basis for saying what sort of person is likely to have the highest number of errors.
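As a quick illustration of the square-root transform suggested above, here is a self-contained Python check on simulated, hypothetical error counts (not real clinic data), showing that the transform reduces the right skew:

```python
import math
import random

def skewness(xs):
    """Sample skewness (moment estimator)."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s3 = sum((x - m) ** 3 for x in xs) / n
    return s3 / s2 ** 1.5

random.seed(42)
# Skewed counts in 0..15, concentrated in the 0-5 range as described
counts = [min(15, int(random.expovariate(1 / 2.0))) for _ in range(500)]

raw_skew = skewness(counts)
sqrt_skew = skewness([math.sqrt(c) for c in counts])
print(raw_skew, sqrt_skew)  # the transformed scores are less skewed
```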
| null | CC BY-SA 3.0 | null | 2011-07-07T03:04:58.060 | 2011-07-07T03:04:58.060 | null | null | 2669 | null |
12753 | 1 | null | null | 1 | 7749 | I am looking for a variable selection technique in R to reduce the number of my regression predictors, where I can force the method to keep a specific variable within the model. Here is a toy example from the R help of ?step; in it, the variable "Examination" will be removed:
```
summary(lm1 <- lm(Fertility ~ ., data = swiss));
slm1 <- step(lm1);
summary(slm1);
```
| Forcing variable selection to keep certain predictor in R | CC BY-SA 3.0 | null | 2011-07-07T05:03:36.973 | 2011-07-07T06:37:09.080 | 2011-07-07T06:37:09.080 | null | 5029 | [
"stepwise-regression"
] |
12754 | 1 | null | null | 8 | 7411 | There's not much more I can add to the question. Googling has mostly turned up research papers on springerlink and other sites I don't have access to.
Given a neural network model with $\tanh(x)$ as the output non-linearity, what is the appropriate matching loss function to use?
-Brian
| Matching loss function for tanh units in a neural net | CC BY-SA 3.0 | null | 2011-07-07T05:07:47.810 | 2011-07-07T14:48:46.523 | null | null | 3982 | [
"neural-networks",
"loss-functions"
] |
12755 | 2 | null | 12754 | 3 | null | The loss function is chosen according to the noise process assumed to contaminate the data, not the output layer activation function. The purpose of the output layer activation function is to apply whatever constraints ought to apply on the output of the model. There is a correspondence between loss function and activation function that can simplify the implementation of the model, but that is pretty much the only real benefit (cf. link functions in Generalised Linear Models), as neural net people generally don't go in much for analysis of parameters etc. Note the tanh function is a scaled and translated version of the logistic sigmoidal function, so a modified logistic loss with recoded targets might be a good match from that perspective.
| null | CC BY-SA 3.0 | null | 2011-07-07T06:52:44.090 | 2011-07-07T06:52:44.090 | null | null | 887 | null |
12756 | 1 | 12757 | null | 12 | 4042 | Last night I started a complex calculation with gamm() and it took me...
>
user system elapsed
9259.76 326.05 9622.64 (s)
...meaning it took 160 minutes, or 2.67 hours, for that calculation. The problem is that I have to do around 50 or even 100 more of these! So I was wondering if there is any way to speed up these calculations. I compared the 32-bit and 64-bit (4 GB) versions of R 2.12.2 by calculating a less complex gamm().
>
32-bit solution
User System elapsed
41.87 0.01 42.01
64-bit solution
User System elapsed
40.06 2.82 43.05
but it took even longer using the 64-bit version!
My question now:
>
Would it help to simply buy more RAM, for example 8 GB of DDR3, or would that be a waste of money? Or would the compiler package in R 2.13.0 be able to handle this properly? I do not think that Rcpp can handle gamm() functions, or am I wrong?
any comments welcome!
The gamm() model call for the 160-minute process was:
```
g1 <- gamm(CountPP10M ~ s(tempsurf,bs="cr")
+ s(salsurf,bs="cr")
+ s(speedsurf,bs="cr")
+ s(Usurf,bs="cr")
+ s(Vsurf,bs="cr")
+ s(Wsurf,bs="cr")
+ s(water_depth,bs="cr")
+ s(distance.to.bridge,bs="cr")
+ s(dist_land2,bs="cr")
+ s(Dist_sventa,bs="cr"),
data=data,
random=list(ID_Station=~1),
family=poisson,
method="REML",
control=lmc)
```
| How to make R's gamm work faster? | CC BY-SA 3.0 | 0 | 2011-07-07T07:17:32.013 | 2011-07-07T10:01:43.750 | 2020-06-11T14:32:37.003 | -1 | 5280 | [
"r",
"mixed-model",
"computational-statistics"
] |
12757 | 2 | null | 12756 | 8 | null | You are not going to be able to achieve substantial speed-up here, as most of the computation is done inside compiled C code.
If you are fitting correlation structures in `gamm()`, then you can either simplify the correlation structure you want to fit (i.e. don't use `corARMA(p=1, ...)` when `corAR1(...)` would suffice), or nest the correlations within years if you have many observations per year, rather than over the whole time interval.
If you aren't fitting correlation structures, `gam()` can fit simple random effects, and if you need more complex random effects, consider the gamm4 package, which is by the same author as mgcv but uses the lme4 package (`lmer()`) instead of the slower/older nlme package (`lme()`).
You could try simpler bases for the smooth terms; `bs = "cr"` rather than the default thin-plate spline bases.
If all else fails, and you are just facing big-data issues, the best you can do is exploit multiple cores (manually split the job into ncores chunks and run them in BATCH mode overnight, or use one of the parallel processing packages for R) and run models over the weekend. If you do this, make sure you wrap your `gamm()` calls in `try()` so that the whole job doesn't stop because you have a convergence problem part way through the run.
| null | CC BY-SA 3.0 | null | 2011-07-07T08:08:28.590 | 2011-07-07T08:08:28.590 | null | null | 1390 | null |
12758 | 2 | null | 12704 | 0 | null | In case somebody is interested, Skew normal distribution, it fits best
Read more at [http://en.wikipedia.org/wiki/Skew_normal_distribution](http://en.wikipedia.org/wiki/Skew_normal_distribution)
| null | CC BY-SA 3.0 | null | 2011-07-07T08:18:39.090 | 2011-07-07T08:18:39.090 | null | null | 5300 | null |
12759 | 1 | 15229 | null | 5 | 2136 | I am currently trying to estimate the density of a joint distribution of K single dimensional RVs. I have at my disposal a set of N sample points, each of which represents an outcome of the K RVs.
Some specifics about my problem:
- the RVs are independent
- the RVs need not belong to same family of distributions
- the RVs can either all be discrete or all be continuous, but not both
- I know the upper and lower bounds for each RV; in the discrete case, I know the values that the RV can take on.
Right now, I am estimating the density of each RV separately using the [ksdensity function](http://www.mathworks.com/help/toolbox/stats/ksdensity.html) in MATLAB. The independence assumption then allows me to produce a joint density using the product of the individual densities. I am hoping to improve the precision of my estimate by either using another method (that I can code up in MATLAB) or by playing around with the options in ksdensity (such as the kernel type, support, or width of the density window).
I am specifically hoping that people can shed light on:
- What method to use for the discrete case vs. the continuous case. In the discrete case, is it worth specifying the bounds and the values? In the continuous case,
- Whether it ever makes sense to forget the independence assumption and estimate the joint distribution as a joint distribution
- Whether anyone knows about some simple reading material on the issue.
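To sketch what this looks like outside MATLAB, here is a plain-Python analogue of the ksdensity-then-multiply approach the question describes (Gaussian kernels with hand-picked bandwidths; the data and bandwidths are purely illustrative):

```python
import math
import random

def kde_1d(data, bandwidth):
    """Return a 1-D Gaussian kernel density estimate as a function."""
    n = len(data)
    norm = n * bandwidth * math.sqrt(2 * math.pi)
    def pdf(x):
        return sum(math.exp(-0.5 * ((x - d) / bandwidth) ** 2)
                   for d in data) / norm
    return pdf

random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(400)]
y = [random.gauss(5.0, 2.0) for _ in range(400)]

fx = kde_1d(x, 0.4)
fy = kde_1d(y, 0.8)

def joint(px, py):
    # independence assumption: joint density = product of marginals
    return fx(px) * fy(py)

print(joint(0.0, 5.0), joint(3.0, 12.0))
```

Note that the density near the centre of the data mass is much higher than far from it, as expected; the whole precision question then reduces to choosing the per-dimension bandwidths well.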
| Best practices for density estimation of discrete & continuous random variables | CC BY-SA 3.0 | null | 2011-07-07T08:29:53.267 | 2011-09-05T18:23:13.393 | 2011-07-07T08:32:46.930 | null | 3572 | [
"estimation",
"matlab",
"density-function"
] |
12760 | 2 | null | 1337 | 11 | null | A circus strongman, a physicist and a statistician are stranded on a desert island. They have fruit and fish, and it rains a lot, so they aren't starving, but they are still happy when they find a cache of canned fruit.
The physicist says to the strongman - "If you climb that tree, and throw the cans against a rock, the force will burst the cans open. We'll lose some, but it's better than nothing"
The strongman says "No, I can open the cans with my teeth. It'll hurt, but I should be able to do it"
They turn to the statistician who says "First, assume we have a can opener".
(I originally heard this with an economist instead of statistician, but I think both work)
| null | CC BY-SA 3.0 | null | 2011-07-07T09:58:49.960 | 2011-07-07T09:58:49.960 | null | null | 686 | null |
12761 | 2 | null | 12756 | 0 | null | If `gamm()` is in R code rather than C it might be worth using the byte-code compiler that is new in R 2.13. There is a new core package called `compiler` and you can compile a function using the `cmpfun()` function.
More details can be found here:
[http://www.r-bloggers.com/the-new-r-compiler-package-in-r-2-13-0-some-first-experiments/](http://www.r-bloggers.com/the-new-r-compiler-package-in-r-2-13-0-some-first-experiments/)
| null | CC BY-SA 3.0 | null | 2011-07-07T10:01:43.750 | 2011-07-07T10:01:43.750 | null | null | 1150 | null |
12762 | 1 | 12764 | null | 10 | 3010 | What is a good measure of spread for a multivariate normal distribution?
I was thinking about using an average of the component standard deviations; perhaps the trace of the covariance matrix divided by the number of dimensions, or a version of that. Is that any good?
Thanks
| Measure of spread of a multivariate normal distribution | CC BY-SA 3.0 | null | 2011-07-07T10:21:26.000 | 2011-07-15T20:53:35.010 | 2011-07-15T20:53:35.010 | null | 3586 | [
"normal-distribution",
"multivariate-analysis"
] |
12763 | 1 | null | null | 11 | 3015 | I need to fit a generalized Gaussian distribution to a 7-dim cloud of points containing quite a significant number of outliers with high leverage. Do you know any good R package for this job?
| Robust multivariate Gaussian fit in R | CC BY-SA 3.0 | null | 2011-07-07T11:35:55.323 | 2018-06-11T14:41:53.730 | 2018-06-11T14:41:53.730 | 11887 | null | [
"r",
"distributions",
"normal-distribution",
"robust"
] |
12764 | 2 | null | 12762 | 14 | null | What about the determinant of the sample variance-covariance matrix: a measure of the squared volume enclosed by the matrix within the space of the measurement vector's dimensions. Also, an often-used scale-invariant version of that measure is the determinant of the sample correlation matrix: the volume of the space occupied within the dimensions of the measurement vector.
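A quick numeric illustration of the two determinants in Python (toy data, invented here): the determinant of the covariance matrix (the "generalized variance") depends on the measurement units, while the determinant of the correlation matrix does not.

```python
import math

def cov(a, b):
    """Sample covariance of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (n - 1)

def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 1.9, 3.2, 3.8, 5.1]

S = [[cov(x, x), cov(x, y)], [cov(x, y), cov(y, y)]]
gen_var = det2(S)  # generalized variance (det of covariance matrix)

def corr_det(a, b):
    r = cov(a, b) / math.sqrt(cov(a, a) * cov(b, b))
    return det2([[1.0, r], [r, 1.0]])

x_cm = [10 * v for v in x]  # same data in different units
S_cm = [[cov(x_cm, x_cm), cov(x_cm, y)], [cov(x_cm, y), cov(y, y)]]

print(det2(S_cm) / gen_var)               # ~100: unit-dependent
print(corr_det(x, y), corr_det(x_cm, y))  # identical: scale-invariant
```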
| null | CC BY-SA 3.0 | null | 2011-07-07T11:41:18.650 | 2011-07-07T13:54:21.117 | 2011-07-07T13:54:21.117 | 919 | 3805 | null |
12765 | 1 | 12775 | null | 9 | 7353 | I have been asked to analyse some data from a clinical trial looking at two methods of measuring blood pressure. I have data from 50 subjects, each with between 2 and 57 measures using each method.
I'm wondering how best to proceed.
Obviously I need a solution that will account for the fact that the measure of blood pressure is paired (the two methods are measured contemporaneously) and also a time-varying covariate (with a varying number of observations per patient), as well as account for intra- and inter-patient variability.
I was thinking of somehow shoe-horning this into repeated-measures ANOVA, but I'm thinking it might need to be a mixed-model approach.
I'd appreciate any helpful advice you could offer.
I'm a complete R newbie but very excited to develop skills, and I have moderate experience in Stata so I could always fall back on that.
| Paired, repeated-measures ANOVA or a mixed model? | CC BY-SA 3.0 | null | 2011-07-07T14:00:56.323 | 2011-07-07T18:59:10.257 | 2011-07-07T16:00:30.820 | null | 5317 | [
"r",
"anova",
"mixed-model",
"stata"
] |
12766 | 2 | null | 12765 | 1 | null | If you are looking for RM-ANOVA with mixed model by using R. You might want to check this out
[http://blog.gribblelab.org/2009/03/09/repeated-measures-anova-using-r/](http://blog.gribblelab.org/2009/03/09/repeated-measures-anova-using-r/)
There are great examples to demonstrate how to use mixed model to accomplish the RM-ANOVA.
Based on my experience, SAS is a better tool to deal with the mixed model. If you are using SAS, you could check the SAS help "Proc Mixed" for RM-ANOVA.
| null | CC BY-SA 3.0 | null | 2011-07-07T14:45:55.883 | 2011-07-07T14:45:55.883 | null | null | 4559 | null |
12767 | 2 | null | 12754 | 4 | null | I think I've derived something that'll work:
$$-\frac{1}{2}\left((1-x_0)\log|1-\tanh(x)| + (1+x_0)\log|1+\tanh(x)|\right)$$
The derivative of this quantity with respect to $x$ is $\tanh(x) - x_0$, which is precisely what I need.
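The claimed derivative can be verified numerically; a quick Python finite-difference check at a few arbitrary test points:

```python
import math

def loss(x, x0):
    """The matching loss derived above, with t = tanh(x)."""
    t = math.tanh(x)
    return -0.5 * ((1 - x0) * math.log(abs(1 - t))
                   + (1 + x0) * math.log(abs(1 + t)))

def numeric_grad(x, x0, h=1e-6):
    """Central finite-difference approximation of d(loss)/dx."""
    return (loss(x + h, x0) - loss(x - h, x0)) / (2 * h)

for x, x0 in [(0.7, 0.2), (-1.3, 0.9), (0.1, -0.5)]:
    print(numeric_grad(x, x0), math.tanh(x) - x0)  # pairs should agree
```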
| null | CC BY-SA 3.0 | null | 2011-07-07T14:48:46.523 | 2011-07-07T14:48:46.523 | null | null | 3982 | null |
12768 | 1 | 12770 | null | 8 | 17713 | I would like to fit a 3-level hierarchical regression in lmer; however, I don't know how to specify the grouping factor above the second level.
The model would be:
```
lmer(dependent ~ independent1 + independent2 + (1|group1)....
```
And I would like to specify another group nested within `group1`.
I've tried `(1|group1/group2)` but this gives an error message and group1:group2 is an interaction.
I've also tried `(1|group1) + (1|group2)` separately, but I'm not sure if this is correct.
thanks
| Three-level hierarchical regression using lmer | CC BY-SA 3.0 | null | 2011-07-07T15:55:36.610 | 2011-07-10T19:00:04.940 | 2011-07-10T19:00:04.940 | 307 | null | [
"r",
"multilevel-analysis",
"lme4-nlme"
] |
12769 | 1 | null | null | 4 | 19149 | I run several simulation studies with R using a MacBook Pro, which has 8 GB of RAM. Unfortunately, some of the simulations cannot be done due to limited memory. My question is: how much RAM is needed for large simulations? (For example, each data set has 10,000 subjects and I need to generate 1,000 data sets.) Is 64 GB of RAM big enough for large R simulations?
| How much RAM is needed for simulation studies using R? | CC BY-SA 3.0 | null | 2011-07-07T16:43:40.993 | 2014-08-26T11:21:18.060 | 2011-09-03T11:21:05.390 | null | 4559 | [
"r",
"computational-statistics"
] |
12770 | 2 | null | 12768 | 13 | null | Not enough reputation to comment, so I'll post this as an answer.
There are a number of questions like this already around; you might want to look at this [message](http://www.opensubscriber.com/message/r-help@stat.math.ethz.ch/5174171.html).
However, `(1|group1/group2)` should work with all but very old versions of lme4, so if that gives you an error, there is probably something wrong with the way you set up your data. Note that once your data are correctly set up, `(1|group1/group2)` and `(1|group1) + (1|group2)` should give the same results (provided the levels of group2 are coded uniquely across the levels of group1).
| null | CC BY-SA 3.0 | null | 2011-07-07T16:58:43.217 | 2011-07-07T16:58:43.217 | null | null | 5020 | null |
12771 | 2 | null | 6896 | 0 | null | One problem with using the angles as a proxy for shape is that small perturbations in the angles can lead to large perturbations in the shape. Further, different angle configurations could result in the same (or similar) shape.
| null | CC BY-SA 3.0 | null | 2011-07-07T17:02:45.687 | 2011-07-07T17:02:45.687 | null | null | 139 | null |
12772 | 1 | null | null | 5 | 14198 | I want to build a model to predict the outcomes of experiments.
My predictive model gives out scores in the range of 1 to 100.
I want to test if my predictive scores can be used to classify experimental outcomes into "good" or "bad" groups.
Experimentally, we did 1000 experiments. Using my predictive model, I have 1000 scores.
To test whether my predictive model is statistically acceptable, what should I do? I have done ROC and sensitivity tests for these 1000 × 2 data.
ROC curves were plotted for all 1000 experimental data points and predictive scores. Looking at the AUC value for the plot (sensitivity vs. 1-specificity), AUC = 0.64.
Let's say my predictive score has a cutoff value of 5, i.e. scores < 5 are likely to have a "good" experimental outcome and scores > 5 are likely to have a "bad" experimental outcome. I calculate the enrichment of my predictive model, i.e. the number of real "good" results / the number of results with predictive score < 5.
Did I do anything wrong here?
What else should I do to check the predictive power of a model?
| How to test the predictive power of a model? | CC BY-SA 3.0 | null | 2011-07-07T17:09:38.223 | 2011-07-07T19:56:02.260 | null | null | 5126 | [
"predictive-models"
] |
12773 | 2 | null | 12759 | 2 | null | You might want to try using a [Copula](http://en.wikipedia.org/wiki/Copula_%28statistics%29). There are some free versions of this on the web, the most promising of which appears to be [Andrew Patton's Copula toolbox](http://econ.duke.edu/~ap172/code.html) (I have not used this, mind you, but it looks about right).
| null | CC BY-SA 3.0 | null | 2011-07-07T17:15:15.130 | 2011-07-07T17:15:15.130 | null | null | 795 | null |
12774 | 2 | null | 12068 | 2 | null | If there is the possibility that something may belong to more than one class, another approach is to train N classifiers with the '-b 1' flag (to enable probability estimates). You will then get, for each data point, a confidence level from each classifier. However, there's still the question of what threshold to use. If you want to get around the problem of picking the 'best' threshold, you can use 11-point Mean Average Precision. This measures the AP at threshold values [0.0, 0.1, 0.2, ..., 1.0] (thus the 11 points).
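The 11-point measure can be computed by hand. A small illustrative Python sketch with toy scores and labels (not libsvm output); the usual formulation takes the maximum interpolated precision at 11 evenly spaced recall levels:

```python
def eleven_point_ap(scores, labels):
    """11-point interpolated average precision (toy implementation)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(labels)
    tp = 0
    precisions, recalls = [], []
    for rank, i in enumerate(order, start=1):
        tp += labels[i]
        precisions.append(tp / rank)
        recalls.append(tp / total_pos)
    levels = [t / 10 for t in range(11)]
    interp = [max((p for p, r in zip(precisions, recalls) if r >= lvl),
                  default=0.0)
              for lvl in levels]
    return sum(interp) / len(interp)

# toy data: descending scores with ground-truth labels
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]
labels = [1, 0, 1, 1, 0, 1]
ap = eleven_point_ap(scores, labels)
print(ap)
```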
| null | CC BY-SA 3.0 | null | 2011-07-07T18:24:41.640 | 2011-07-07T18:24:41.640 | null | null | 5268 | null |
12775 | 2 | null | 12765 | 12 | null | I don't think you can easily do what you want with RM-ANOVA, since the number of repetitions is not the same for all subjects. Running mixed-effects models is very easy in R. In fact, by investing a little time to learn the fundamentals and the commands, it will open up a lot of possibilities to you. I also find mixed modeling much simpler to use and more flexible, and I almost never need to do RM-ANOVA directly. Finally, consider that with mixed modeling you can also account for the covariance structure of the residuals (RM-ANOVA simply assumes a diagonal structure), which can be important for many applications.
There are two main packages for linear mixed modeling in R: `nlme` and `lme4`. The `lme4` package is the more modern one, which is great for large datasets and also for cases where you deal with clustered data. `nlme` is the older package and is mostly deprecated in favor of `lme4`. However, for repeated-measures designs it is still better than `lme4`, since only `nlme` allows you to model the covariance structure of the residuals. The basic syntax of `nlme` is very simple. For example:
`fit.1 <- lme(dv ~ x + t, random=~1|subject, cor=corCompSymm())`
Here I'm modeling the relationship between a dependent variable `dv` and a factor `x` and time-related covariate `t`. `Subject` is a random effect and I have used a compound symmetry structure for the covariance of the residuals. Now you can easily get the infamous p-values by:
`anova(fit.1)`
Finally, I can suggest you to read more about nlme using its definitive reference guide, [Mixed Effects Models in S and S-Plus](http://rads.stackoverflow.com/amzn/click/0387989579). Another good reference for beginners is [Linear Mixed Models - a Practical Guide Using Statistical Software](http://www-personal.umich.edu/~bwest/almmussp.html) which compiles lots of examples of different applications of mixed modeling with code in R, SAS, SPSS, etc.
| null | CC BY-SA 3.0 | null | 2011-07-07T18:25:29.303 | 2011-07-07T18:59:10.257 | 2011-07-07T18:59:10.257 | 2020 | 2020 | null |
12776 | 2 | null | 12772 | 6 | null | AUC is a good start. You can also calculate what percent of observations were correctly classified, and you can make a [confusion matrix](http://en.wikipedia.org/wiki/Confusion_matrix).
However, the best single thing you can do is calculate these values using a "test" dataset, whose observations were not used to train the model. This is the only true test of a predictive model.
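The confusion matrix and percent-correct need no special library; a small Python sketch on made-up labels and predictions:

```python
# Toy labels and thresholded predictions (hypothetical data)
actual    = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
predicted = [1, 0, 0, 0, 1, 1, 1, 0, 0, 1]

pairs = list(zip(actual, predicted))
tp = sum(1 for a, p in pairs if a == 1 and p == 1)  # true positives
fn = sum(1 for a, p in pairs if a == 1 and p == 0)  # false negatives
fp = sum(1 for a, p in pairs if a == 0 and p == 1)  # false positives
tn = sum(1 for a, p in pairs if a == 0 and p == 0)  # true negatives

accuracy = (tp + tn) / len(actual)
print([[tp, fn], [fp, tn]], accuracy)
```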
| null | CC BY-SA 3.0 | null | 2011-07-07T18:55:28.587 | 2011-07-07T18:55:28.587 | null | null | 2817 | null |
12777 | 1 | null | null | 5 | 653 | INTRODUCTION: I'm a bioinformatician. In my analysis which I perform on all human genes (about 20 000) I search for a particular short sequence motif to check how many times this motif occurs in each gene.
Genes are 'written' in a linear sequence of four letters (A, T, G, C), for example CGTAGGGGGTTTAC... This four-letter alphabet of the genetic code is like the secret language of each cell; it's how DNA actually stores information.
I suspect that frequent repetitions of a particular short motif sequence (AGTGGAC) in some genes are crucial in a specific biochemical process in the cell. Since the motif itself is very short, it is difficult with computational tools to distinguish true functional examples in genes from those that look similar by chance. To avoid this problem, I took the sequences of all genes, concatenated them into a single string, and shuffled it. The length of each of the original genes was stored. Then, for each of the original sequence lengths, a random sequence was constructed by repeatedly picking A, T, G, or C at random from the concatenated sequence and transferring it to the random sequence. In this way, the resulting set of randomized sequences has the same length distribution, as well as the same overall A, T, G, C composition. Then I search for the motif in these randomized sequences. I performed this procedure 1000 times and averaged the results:
- 15000 genes that do not contain a given motif
- 5000 genes that contain 1 motif
- 3000 genes that contain 2 motifs
- 1000 genes that contain 3 motifs
- ...
- 1 gene that contain 6 motifs
So even after 1000 randomizations of the true genetic code, there aren't any genes which have more than 6 motifs. But in the true genetic code, there are a few genes which contain more than 20 occurrences of the motif, which suggests that these repetitions might be functional and that it's unlikely to find them in such abundance by pure chance.
PROBLEM: I would like to know the probability of finding a gene with, let's say, 20 occurrences of the motif in my distribution. So I want to know the probability of finding such a gene by chance. I would like to implement this in Python, but I don't know how.
Can I do such an analysis in Python?
Any help would be appreciated.
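Yes. If the motif counts in the randomized genomes are roughly Poisson (an assumption worth checking against your randomized counts), the chance of 20 or more occurrences in a gene can be computed in plain Python, no SciPy required. The rate 0.6 below is hypothetical; estimate it as the mean motif count per gene across your 1000 randomizations:

```python
import math

lam = 0.6  # HYPOTHETICAL mean motif count per randomized gene

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam), summing the upper tail directly
    (this avoids the cancellation you get from computing 1 - CDF)."""
    term = math.exp(-lam) * lam ** k / math.factorial(k)
    total = 0.0
    for i in range(k, k + 120):
        total += term
        term *= lam / (i + 1)
    return total

p20 = poisson_sf(20, lam)
print(p20)  # astronomically small: 20 hits are very unlikely by chance
```

If SciPy is available, `scipy.stats.poisson.sf(19, lam)` gives the same quantity; you could also just report the empirical fraction of randomized genes reaching 20 motifs (here, zero in 1000 runs, so roughly p < 1/1000 without the Poisson assumption).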
| Fitting distributions, goodness of fit, p-value. Is it possible to do this with Scipy (Python)? | CC BY-SA 3.0 | null | 2011-07-07T19:48:31.707 | 2011-10-18T18:49:28.877 | 2011-10-18T18:49:28.877 | 1080 | 5318 | [
"distributions",
"python",
"p-value"
] |
12778 | 2 | null | 12772 | 7 | null | ROC, sensitivity, specificity, and cutoffs have gotten in the way, unfortunately. Assuming there is nothing between "good" and "bad" and that the success of the experiment was not based on an underlying continuum that should have instead formed the dependent variable, a probability model such as logistic regression would seem to be called for. You may need to do resampling to get an unbiased appraisal of the model's likely future performance. Note that even though a receiver operating characteristic curve is seldom appropriate, its area (also called c-index or concordance probability from the Wilcoxon-Mann-Whitney test) is a good summary measure of pure predictive discrimination. On the other hand, percent classified correctly is an improper scoring rule that, if optimized, will result in a bogus model.
Predicted probabilities are your friend, and they are also self-contained error rates at the point where someone forces you to make a binary decision, if they do.
| null | CC BY-SA 3.0 | null | 2011-07-07T19:56:02.260 | 2011-07-07T19:56:02.260 | null | null | 4253 | null |
12779 | 1 | 12786 | null | 2 | 1008 | I have 5 more or less correlated variables measuring conceptually the same thing: size. For example: height, weight, shoe size, jacket size, and age. I just want to summarize the information from all variables into one and use that single (synthetic) variable in further analysis.
What are the options for doing that?
One of the requirements is to be able to measure and "tweak" the weight of each component of the synthetic variable based on expert judgement. Let's say the requirement may be to increase the weight of shoe size from 11.34% to 20%, or to reduce the weight of another variable because of a known measurement error or some other valid reason.
Any ideas?
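One straightforward option is a weighted sum of standardized (z-scored) variables: unlike PCA loadings, the expert weights are explicit and directly tweakable. A toy Python sketch with invented numbers (only three of the five variables, for brevity):

```python
import math

def zscores(values):
    """Standardize a variable to mean 0, sd 1 (sample sd)."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return [(v - mean) / sd for v in values]

# Invented measurements for five subjects
height = [150, 160, 170, 180, 190]
weight = [50, 60, 70, 80, 90]
shoe = [36, 38, 40, 42, 44]

# Expert-chosen weights; tweak these and re-run (they should sum to 1)
w = {"height": 0.40, "weight": 0.40, "shoe": 0.20}

zh, zw, zs = zscores(height), zscores(weight), zscores(shoe)
size_index = [w["height"] * a + w["weight"] * b + w["shoe"] * c
              for a, b, c in zip(zh, zw, zs)]
print(size_index)
```

Standardizing first keeps the weights interpretable as percentages of influence regardless of each variable's units.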
| How to create one synthetic variable from 5 measured variables | CC BY-SA 3.0 | null | 2011-07-07T20:11:04.703 | 2011-07-07T23:59:28.617 | 2011-07-07T21:25:21.357 | 333 | 333 | [
"dimensionality-reduction"
] |
12781 | 1 | null | null | 3 | 7444 | I am having trouble figuring out how to group data using the inter-quartile ranges calculated with a box-and-whiskers plot as well as looking at Tukey's hinges. I understand the IQR and Tukey's hinges are not the same thing and that there are different interpretations of Tukey's hinges. Basically, I did a calculation using SPSS and the output provided me with the weighted quartiles and, below that, Tukey's hinges. My question is about whether or not you can use the hinges or the weighted quartile values to group the data. For example, let's say this is your set of data (this is not the data set I'm working with, but just a simple example to illustrate my question):
1
2
2.5
2.5
2.5
3
3
4
5
5
6
7
7.5
7.5
8
9
9
10
Let's then say your IQR and Tukey Hinges are 25% = 2.5, 50% = 5, 75% = 7.5 (This is just random, this may not be the actual case for the data but I am using these values just to explain my question).
Now let's say you want to divide the data into 4 groups using the IQR and/or Tukey's Hinges. Which group do the values that fall on a hinge or "division point" between the groups go into? When I was reading up on Tukey's Hinges, Tukey stated that the hinge is a point of division in the data, but that is still vague when trying to group data. So, would the groups be like this:
Group A
1
2
2.5
2.5
2.5
Group B
3
3
4
5
5
Group C
6
7
7.5
7.5
Group D
8
9
9
10
Or can you exclude the "hinge" values and group the data like this?
Group A
1
2
Group B
3
3
4
Group C
6
7
Group D
8
9
9
10
I am having a tough time finding good explanations and research papers I could use to back up why I choose one methodology over another. This project is for an internship, none of the data I've written out in this e-mail is being used in my analysis. I am simply confused about dividing groups based on Tukey's Hinges. I would appreciate your thoughts. Thanks!
| Tukey's Hinges: Grouping Data | CC BY-SA 3.0 | null | 2011-07-07T20:15:14.373 | 2011-07-08T00:01:35.663 | 2011-07-07T20:20:47.270 | 930 | 5320 | [
"distributions",
"descriptive-statistics",
"exploratory-data-analysis"
] |
12782 | 2 | null | 12750 | 2 | null | I think that if you did show a difference between the two groups one might generate an argument against the analysis that the true function might be non-linear and you're simply fitting different pieces of it. However, since you don't find a difference, I can't think of any scenario where the conclusion of no difference is compromised by lack of complete overlap on x.
| null | CC BY-SA 3.0 | null | 2011-07-07T20:49:59.437 | 2011-07-07T20:49:59.437 | null | null | 364 | null |
12783 | 1 | 12793 | null | 4 | 246 | In Ackley and Hinton's paper ["A Learning Algorithm for Boltzmann Machines"](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.154.1370&rep=rep1&type=pdf), they write that
>
A hidden unit would be needed, for
example, if the environment demanded
that the states of three visible units
should have even parity-a regularity
that cannot be enforced by pairwise
interactions alone.
Could someone explain how a hidden unit enforces this parity constraint? I'm having a hard time seeing what the structure and weights of the network would be. (In general, I see intuitively why hidden units add power, but I don't have a rigorous understanding.)
| How do hidden units in a Boltzmann Machine enforce this parity constraint? | CC BY-SA 3.0 | null | 2011-07-07T21:40:44.737 | 2011-07-08T04:51:54.833 | null | null | 1106 | [
"neural-networks"
] |
12784 | 1 | null | null | 1 | 437 | This is more of a general question on machine learning, but I have two datasets of ~75k rows of patients and 100 columns (call them `training` and `testing`)--each column is either a numeric predictor corresponding to how many times they've visited a specific physician, or a factor (two examples are Gender and 10-year age bins). There are also two outcome arrays `training.obs` and `testing.obs` that correspond to how many days each patient has spent in the hospital.
Let's say I wanted to train a linear model `model <- lm(training.obs ~ ., data=training)`, and cross-validate it with the testing set by
```
testing.pred <- predict(model, testing)
```
and take the RMSE of `testing.pred` and `testing.obs`.
The two datasets are greatly different, in terms of factor ratios (i.e. a larger proportion of males in the training set), and distribution of numeric predictors. Would it make sense to match these two datasets by sampling `training`, or will I lose model accuracy because I'm excluding rows? How would I go about doing this?
| Matching training and testing datasets | CC BY-SA 3.0 | null | 2011-07-07T22:03:32.933 | 2011-07-07T22:03:32.933 | null | null | 5180 | [
"cross-validation",
"matching"
] |
12785 | 2 | null | 12781 | 1 | null | You are going to have problems dividing a set of data into four equal parts if the number of pieces of data is not a multiple of $4$.
One approach might be to duplicate some of the data. So if you have $4n$ data points, just divide into four sets of $n$ points by rank. If you have $4n-1$ points, duplicate the median, including it in both the second and third sets, so again you have four sets of $n$ points by rank. If you have $4n-2$ points, duplicate the first and third quartile points, including the first quartile in both the first and second sets and including the third quartile in the third and fourth sets, so again you have four sets of $n$ points by rank. And if you have $4n-3$ points, duplicate the median and the first and third quartile points, including them in the relevant sets and again you have four sets of $n$ points by rank. There are other approaches.
In your example with $18$ data points, that would give four equally sized subsets of
- 1st group: $1, 2, 2.5, 2.5, 2.5$
- 2nd group: $2.5, 3, 3, 4, 5$
- 3rd group: $5, 6, 7, 7.5, 7.5$
- 4th group: $7.5, 8, 9, 9, 10$
Quantiles are difficult to define uniquely. Wikipedia gives [10 estimate types](http://en.wikipedia.org/wiki/Quantile#Estimating_the_quantiles_of_a_population) while Eric Langford gives [15 methods](http://www.amstat.org/publications/jse/v14n3/langford.html) in the Journal of Statistics Education.
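For what it's worth, R implements nine of these competing definitions through the `type` argument of `quantile()`, and `cut()` turns the resulting break points into one concrete grouping rule — a sketch, not a recommendation:

```
x <- c(1, 2, 2.5, 2.5, 2.5, 3, 3, 4, 5, 5, 6, 7, 7.5, 7.5, 8, 9, 9, 10)

# Different quantile definitions can give different quartiles for the same data
sapply(c(1, 2, 6, 7), function(t)
  quantile(x, probs = c(0.25, 0.50, 0.75), type = t))

# One rank-based four-way split: values landing exactly on a break go to the
# lower group, because cut() uses right-closed intervals by default
split(x, cut(x, breaks = quantile(x, probs = seq(0, 1, 0.25)),
             include.lowest = TRUE))
```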
| null | CC BY-SA 3.0 | null | 2011-07-07T23:55:10.237 | 2011-07-08T00:01:35.663 | 2011-07-08T00:01:35.663 | 2958 | 2958 | null |
12786 | 2 | null | 12779 | 3 | null | Have you looked at [PCA](http://en.wikipedia.org/wiki/Principal_component_analysis)? One approach would be to pass your data through PCA and do your analysis on the first principal component. This doesn't let you differentially weight the different original variables however.
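A minimal sketch of that approach in R, where `dat` is assumed to hold the five measured variables as columns:

```
pc <- prcomp(dat, scale. = TRUE)  # standardize so no variable dominates by scale alone

size <- pc$x[, 1]     # first principal component scores: the single synthetic variable
pc$rotation[, 1]      # its loadings, i.e. the data-driven weight of each variable
summary(pc)           # proportion of variance each component captures
```

The loadings make the implied weighting visible, but they come from the data rather than from expert judgement.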
| null | CC BY-SA 3.0 | null | 2011-07-07T23:59:28.617 | 2011-07-07T23:59:28.617 | null | null | 364 | null |
12787 | 1 | 12792 | null | 6 | 400 | Quite simply, I have some probability distribution p(x); how can I measure whether one empirical density (set of delta masses) is a better approximation than another? I know that KL-divergence is a well accepted measure between two continuous densities, but it's not clear how to apply that to a set of samples.
| How do I determine how well a dataset approximates a distribution? | CC BY-SA 3.0 | null | 2011-07-08T00:14:28.153 | 2011-07-08T04:35:23.600 | 2011-07-08T04:06:57.703 | 919 | 5321 | [
"distributions",
"sampling",
"kullback-leibler"
] |
12789 | 1 | 12798 | null | 8 | 3452 | I thought that the loadings in factor analysis were the correlations between the observed variables and the latent factors. However, when I do factor analysis in R using the psych package, this does not seem to be the case:
```
library(psych)
set.seed(1)
X <- matrix(rnorm(200), ncol=10)
fa1 <- fa(X, nfactors=3, rotate="none", scores=TRUE)
cor(X, fa1$scores) #correlations between original variables and factor scores
MR2 MR1 MR3
[1,] 0.465509161 0.87299813 0.03241641
[2,] -0.010609644 -0.32714571 0.64968725
[3,] -0.219685860 0.47331827 -0.39132195
[4,] -0.815516983 0.22669390 0.42273446
[5,] -0.075178935 -0.40431701 -0.69661843
[6,] -0.204917832 0.07472006 0.05508017
[7,] 0.240675941 0.13027263 0.23238220
[8,] 0.756677687 -0.05621205 0.23746738
[9,] 0.004384459 0.12095273 0.55100943
[10,] 0.640507568 -0.67810600 0.18597947
fa1$loadings[1:10, 1:3]
MR2 MR1 MR3
[1,] 0.433925641 0.82218385 0.02717957
[2,] -0.009889808 -0.30810366 0.54473104
[3,] -0.204780777 0.44576800 -0.32810435
[4,] -0.760186392 0.21349881 0.35444221
[5,] -0.070078250 -0.38078308 -0.58408054
[6,] -0.191014719 0.07037085 0.04618204
[7,] 0.224346738 0.12268990 0.19484113
[8,] 0.705339180 -0.05294013 0.19910480
[9,] 0.004086985 0.11391248 0.46199451
[10,] 0.597050885 -0.63863574 0.15593470
cor(fa1$scores) # Check that factor scores are uncorrelated
MR2 MR1 MR3
MR2 1.000000e+00 4.266996e-16 -1.299606e-16
MR1 4.266996e-16 1.000000e+00 1.961151e-16
MR3 -1.299606e-16 1.961151e-16 1.000000e+00
```
The loadings and correlations are similar, but I expected them to be the same. I tried looking at the source code for `fa` but had trouble understanding it. Could someone please tell me how the loadings differ from the correlations?
Update: For each factor, the correlations with the observed variables are constant multiples of the loadings:
```
cor(X, fa1$scores)/fa1$loadings[1:10, 1:3]
MR2 MR1 MR3
[1,] 1.072786 1.061804 1.192675
[2,] 1.072786 1.061804 1.192675
[3,] 1.072786 1.061804 1.192675
[4,] 1.072786 1.061804 1.192675
[5,] 1.072786 1.061804 1.192675
[6,] 1.072786 1.061804 1.192675
[7,] 1.072786 1.061804 1.192675
[8,] 1.072786 1.061804 1.192675
[9,] 1.072786 1.061804 1.192675
[10,] 1.072786 1.061804 1.192675
```
| Difference between loadings and correlations between observed variables and factor saved scores in factor analysis | CC BY-SA 3.0 | null | 2011-07-08T02:30:05.750 | 2020-05-17T11:23:33.640 | 2011-07-08T07:22:12.363 | 3835 | 3835 | [
"r",
"factor-analysis"
] |
12790 | 1 | null | null | 2 | 1120 |
### Context
I have 25 men and 25 women as participants, and they did exactly the same thing: Each of them
heard an attractive dialogue and they had to choose between a photo of a
woman in red and a woman in green. Then each of them heard an unattractive
dialogue and they chose again between the red and the green shirt.
My hypothesis is that men, in contrast to women, are attracted to women in red. So, I've got gender (2 levels: 0=male, 1=female), attraction (1=yes, 0=no) and
trials/colour (0=green, 1=red).
I am interested in showing that the colour (red) predicts attraction as far as men
are concerned and that there are gender differences! Men are much more
attracted to it.
### Questions
- Should I use a test which combine all of them or a test that combine first
the attraction and colour for men, then the attraction and colour for women
and then the gender differences?
I've been told to run a Pearson's correlation (one for men and one for women),
but I think that demands interval data. I was also told to use the chi-square
test, but that isn't for participants who participated in the same
experimental conditions.
- What about repeated measures logistic regression?
If so, can you give me some advice how to process my data? Any suggestions?
P.S. I really appreciate everyone who answered my previous posts
[here](https://stats.stackexchange.com/questions/12587/should-i-use-repeated-measures-anova-or-which-other-spss-test-should-i-use) and [here](https://stats.stackexchange.com/questions/12453/what-test-do-i-use-in-order-to-analyze-a-within-participants-repeated-measure-exp), they helped me to go one step further!
| What's the appropriate spss test? | CC BY-SA 3.0 | null | 2011-07-08T02:45:35.777 | 2011-07-08T14:51:00.000 | 2017-04-13T12:44:20.840 | -1 | 5218 | [
"hypothesis-testing",
"spss"
] |
12791 | 2 | null | 1337 | 34 | null | 67% of statistics are made up.
| null | CC BY-SA 3.0 | null | 2011-07-08T03:50:26.057 | 2011-07-08T03:50:26.057 | null | null | 3929 | null |
12792 | 2 | null | 12787 | 7 | null | For visualization purposes, try a [Q-Q plot](http://en.wikipedia.org/wiki/Q-Q_plot), which is a plot of the quantiles of your data against the quantiles of the expected distribution.
If you want a statistical test, the [Kolmogorov-Smirnov](http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test) statistic provides a non-parametric test for whether the data come from $p(x)$, using the maximum difference in the empirical and analytic cdf.
Of course, you could also evaluate the log-probability of your data under the two distributions: $L_1 = \sum_i \log p_1(X_i)$ vs. $L_2 = \sum_i \log p_2(X_i)$, and take whichever is larger. This is equivalent to maximum likelihood density comparison. (However, this may not be valid if $p_1$ and $p_2$ are distributions fit to your data, especially if they have different numbers of fitted parameters; in that case you want to do "model comparison", and there are a variety of tools for this— AIC, BIC, Bayes Factors, Likelihood-ratio test, Cross-validation, etc.)
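A rough R sketch of these ideas, assuming for illustration that the candidate densities are fully specified normals:

```
set.seed(1)
x <- rnorm(100)                        # the empirical sample (set of delta masses)

qqnorm(x); qqline(x)                   # Q-Q plot against a normal reference

ks.test(x, "pnorm", mean = 0, sd = 1)  # Kolmogorov-Smirnov test against p(x)

# Log-likelihood comparison of two fully specified candidate densities
L1 <- sum(dnorm(x, mean = 0,   sd = 1, log = TRUE))
L2 <- sum(dnorm(x, mean = 0.5, sd = 1, log = TRUE))
c(L1 = L1, L2 = L2)                    # larger = better-supported density
```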
| null | CC BY-SA 3.0 | null | 2011-07-08T04:25:21.173 | 2011-07-08T04:35:23.600 | 2011-07-08T04:35:23.600 | 5289 | 5289 | null |
12793 | 2 | null | 12783 | 2 | null | There are four allowed states of the visible units that have even parity:
[000] [110] [101] [011].
A simple (if costly) way to enforce this constraint would be to have 4 hidden units which correspond to these four allowed states. The first unit has weights [---] to the visible units, the second-fourth have weights [++-], [+-+], and [-++], respectively. If a hidden unit comes on, it enforces (via its weights) that the network adopt the corresponding state. (You could use strong inhibitory weights between the hidden units to ensure that only a single hidden unit is on at a time).
| null | CC BY-SA 3.0 | null | 2011-07-08T04:51:54.833 | 2011-07-08T04:51:54.833 | null | null | 5289 | null |
12794 | 2 | null | 12790 | 2 | null | If we can assume that the men and women were selected randomly, your design is a special case of completely randomized design (CRD) with 2 treatments. As you mention, repeated-measures, I must remark that if gender is assumed to be a block, then it models correlation (and repeated measures is a way of imposing correlation). If you treat gender as a block, then the design will be a randomized complete block design (RCB) with 1 treatment. You can still test the effects of gender, but the tests will be conservative.
In summary:
- Response: attraction (0 or 1)
- Treatment 1: gender (male or female)
- Treatment 2: color (green or red)
If the analysis is CRD, then you model gender and color as fixed effects and attraction as response.
If the analysis is RCB, then you model gender as a block (random effect), color as fixed effect, and attraction as response.
Irrespective of RCB or CRD, your responses are binomial (Bernoulli, 0 or 1). Therefore the solution boils down to fitting a [GLMM](http://en.wikipedia.org/wiki/Generalized_linear_mixed_model) with a logit-link.
As you are interested in the relationship between men and color, you might want to include the interaction of color and gender in the model. This will be the case for CRD. Once you have the model fit, you can use the following contrast to see if men are more attracted to red than women are:
$$H_0: g_mc_r-g_fc_r=0$$
$$H_a: g_mc_r-g_fc_r\neq0$$
Unfortunately, ANOVA (or GLMM) can't give you a one-sided test, that is, $g_mc_r-g_fc_r>0$ (although I have seen people do it).
I am aware that SPSS can fit GLMM, but not sure how. I will let someone else give the exact SPSS instructions.
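For readers who would rather work outside SPSS, a hedged R sketch of the two analyses — the data frame `d` and its column names are assumptions, not part of the original question:

```
library(lme4)

# CRD: gender and colour as fixed effects, Bernoulli response with logit link;
# the gender:colour interaction carries the contrast of interest
m_crd <- glm(attraction ~ gender * colour, family = binomial, data = d)

# RCB-style: the blocking/grouping factor as a random effect (subject is the
# more natural grouping here, since each participant responds twice)
m_rcb <- glmer(attraction ~ colour + (1 | subject), family = binomial, data = d)

summary(m_crd)
```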
| null | CC BY-SA 3.0 | null | 2011-07-08T05:15:15.463 | 2011-07-08T14:51:00.000 | 2011-07-08T14:51:00.000 | 1036 | 1307 | null |
12795 | 2 | null | 12762 | 1 | null | Another (closely related) quantity is the entropy of the distribution: for a multivariate Gaussian this is the log of the determinant of the covariance matrix, or
$\frac{1}{2} \log |(2\pi e)\Lambda|$
where $\Lambda$ is the covariance matrix. The advantage of this choice is that it can be compared to the "spread" of points under other (e.g., non-Gaussian) distributions.
(If we want to get technical, this is the differential entropy of a Gaussian).
| null | CC BY-SA 3.0 | null | 2011-07-08T05:16:01.600 | 2011-07-08T05:16:01.600 | null | null | 5289 | null |
12796 | 1 | null | null | 10 | 15175 | (Step 1) Using my predictive model, I predicted 1000 scores for my sample dataset.
(Step 2) I then calculate the random score using the same method for a randomized dataset, and fit the distribution of the random scores.
(Step 3) For each of my predictive score (1000 scores, in step 1), I calculated the p-values of getting a score larger than my predictive score for my sample sample dataset. Thus, 1000 p-values for my sample dataset are obtained.
(Step 4) As the real classification is known, by looking at the enrichment of true positives, I found that filtering the sample dataset by p-values < 0.05 gives the best true positive enrichment, which represents about 150 data points from my sample dataset.
I then want to test the predictive power of my model by computing the AUC of the ROC (sensitivity vs 1 - specificity) plot.
However, I am facing a problem now: should I include all 1000 data points in the ROC plot to get the AUC, or should I only include those 150 data points (p < 0.05) in my AUC analysis?
When I say p < 0.05, I mean the p-value of obtaining a score higher than my predictive score at random. Generally speaking, does it mean that 50% of my data are obtained by chance?
---
## Edit
Thanks for the comments from @AlefSin, @steffen and @Frank Harrell.
To make this easier to discuss, I have prepared a sample dataset (x) as follows:
- my model's predicted score (assume it is normally distributed with mean=1, sd=1)
- a random set (assume it also has mean=1, sd=1)
- the probability for each predicted score
- the class prediction
These are listed in the four columns below.
---
```
x <- data.frame (predict_score=c(rnorm(50,m=1, sd=1)))
x$random <- rnorm(50, m=1, sd=1)
x$probability <- pnorm(x$predict_score, m=mean(x$random),sd=sd(x$random))
x$class <- c(1,1,1,1,2,1,2,1,2,2,1,1,1,1,2,1,2,1,2,2,1,1,1,1,2,1,2,1,2,2,1,1,2,1,2,1,2,2,1,1,1,1,1,1,2,2,2,2,1,1)
```
I then computed the AUC for all data points as follows:
```
library(caTools)
colAUC(x$predict_score, x$class, plotROC=T, alg=c("Wilcoxon","ROC"))
```
result:
```
[,1]
1 vs. 2 0.6
```
Let's say the enrichment of true positives is higher when I filter the dataset by p < 0.5 (your runs may be different from mine, as rnorm gives different results every time); I then did the AUC for a subset of the data as follows:
```
b <- subset(x, x$probability < 0.5)
colAUC(b$predict_score, b$class, plotROC=T, alg=c("Wilcoxon","ROC"))
```
result:
```
[,1]
1 vs. 2 0.7401961
```
My question is: when I do an AUC analysis, must I do the analysis with the whole dataset, or should I filter the dataset first, based on the enrichment of true positives or whatever criteria, before computing the AUC?
| How to test the statistical significance of AUC? | CC BY-SA 3.0 | null | 2011-07-08T05:17:25.933 | 2014-04-02T06:06:10.177 | 2011-07-11T09:57:01.360 | 264 | 5137 | [
"p-value",
"roc"
] |
12797 | 1 | 12809 | null | 12 | 8531 | I have a regression done on two groups of the sample based on a moderating variable (say gender). I'm doing a simple test for the moderating effect by checking whether the significance of the regression is lost in one set while it remains in the other.
Q1: The above method is valid, isn't it?
Q2: The level of confidence of my research is set at 95%. For one group, the regression is significant at .000. For the other, it is significant at 0.038
So, I believe I have to accept both regressions as significant and that there's no moderating effect. By accepting that the regression is significant at the 0.05 level while it is not at 0.01, am I committing a Type I error (accepting a false claim)?
| How to test whether a regression coefficient is moderated by a grouping variable? | CC BY-SA 3.0 | null | 2011-07-08T05:38:02.483 | 2016-02-11T09:53:53.280 | 2011-07-08T06:04:57.233 | 183 | 5325 | [
"regression",
"type-i-and-ii-errors",
"interaction"
] |
12798 | 2 | null | 12789 | 6 | null | I don't know R very well, so I can't track your code. But factor scores (unless the factors are simply principal components) [are always approximate](https://stats.stackexchange.com/q/127483/3277): exact scores cannot be computed because the uniqueness value for each case and variable is eternally unobservable. Thus, observed correlations between computed factor scores and the variables only approximate true correlations between factors and variables, the loadings.
| null | CC BY-SA 3.0 | null | 2011-07-08T07:13:43.410 | 2015-07-29T02:38:45.303 | 2017-04-13T12:44:21.613 | -1 | 3277 | null |
12799 | 2 | null | 12790 | 0 | null | The main criterion for deciding which test or further handling is appropriate is the level of measurement. As you've pointed out, your data are nominal. Every level has its allowed operations, e.g. nominal: $=, \ne$; ordinal: $\lt, \le, \gt, \ge$; and so on.
Hence the only way to describe nominal data is by showing equality to a category. I would suggest that you simply count how many males versus females are attracted to it, in contrast to how many are not.
| null | CC BY-SA 3.0 | null | 2011-07-08T07:36:50.457 | 2011-07-08T07:36:50.457 | null | null | 5042 | null |
12800 | 2 | null | 12790 | 1 | null | Generalized Mixed is available in the latest version of SPSS. But Olga might prefer easier ways. So, as I understand, there are 3 variables in the data: gender (values male vs female), attractive dialogue (values red vs green), unattractive dialogue (values red vs green). To check the hypothesis that males endorse red more often than green when hearing a dialogue, one just performs a gender X colour chi-square test (probably 2 times, once for the attractive dialogue, once for the unattractive). Because the table is 2x2 it's equivalent to Pearson r. To check the hypothesis that the attractive dialogue evokes more red responses than the unattractive one, McNemar's test is suitable. The tests are found in the SPSS Crosstabs procedure.
| null | CC BY-SA 3.0 | null | 2011-07-08T08:32:35.073 | 2011-07-08T08:32:35.073 | null | null | 3277 | null |
12801 | 2 | null | 12748 | 1 | null | From my understanding of your problem, the data seem skewed and univariate, and the aim explanatory. The first step is to plot skewness-adjusted box-plots. I know of an $\verb+R+$ implementation in package $\verb+robustbase+$ (look for a function called $\verb+adjbox()+$). The associated white paper is very readable too.
Source:
M. Hubert & E. Vandervieren, An adjusted boxplot for skewed distributions, Computational Statistics & Data Analysis, Volume 52, Issue 12, 15 August 2008, Pages 5186-5201. Ungated version: [ftp://ftp.win.ua.ac.be/pub/preprints/04/AdjBox04.pdf](ftp://ftp.win.ua.ac.be/pub/preprints/04/AdjBox04.pdf)
EDIT In principle, these adjusted boxplots assume that the data is continuous. If that's not the case, then this particular algorithm will spot too many outliers. But this problem is an algorithmic, not a statistical one: you can solve it by simply jittering your data a bit to remove ties (i.e. adding noise with small variance to every observations).
| null | CC BY-SA 3.0 | null | 2011-07-08T08:47:42.457 | 2011-07-08T12:20:22.737 | 2011-07-08T12:20:22.737 | 603 | 603 | null |
12802 | 2 | null | 9214 | 3 | null | Background: I've experience in implementing LSA models.
From my experience, there's no real way to predict it. The best way I've found is to generate a number of models based on different parameters and test them with a known task. So if you wanted LSA to categorise documents, you would get a set of docs belonging to different categories (see the Reuters-21578 or the Brown corpus, both of which are widely available), prepare test documents from those categories, then submit each to each model and see which is the most accurate.
I've also found that the content of the documents affects the outcome, not just the size of the corpus. I won't tell you the specifics, but shorter documents tend not to contribute so well to a model's accuracy.
Sorry I can't be of more help in this. I could be wrong about this though - try Google Scholar to see if someone's researched this already and found anything useful.
| null | CC BY-SA 3.0 | null | 2011-07-08T09:49:18.187 | 2011-07-08T09:49:18.187 | null | null | 5327 | null |
12805 | 1 | 12846 | null | 10 | 2521 | I'm quite enamoured with likelihood ratios as a means of quantifying relative evidence in scientific endeavours. However, in practice I find that the raw likelihood ratio can get unprintably large, so I've taken to log-transforming them, which has the nice side-benefit of representing evidence for/against the denominator in a symmetric fashion (i.e. the absolute value of the log likelihood ratio represents the strength of evidence and the sign indicates which model, the numerator or denominator, is the supported model). Now, what choice of logarithm base? Most likelihood metrics use log-base-e, but this strikes me as a not very intuition-friendly base. For a while I used log-base-10, which apparently was dubbed the "[ban](http://en.wikipedia.org/wiki/Ban_%28information%29)" scale by Alan Turing and has the nice property that one can easily discern relative orders of magnitude of evidence. It recently occurred to me that it might be useful also to employ log-base-2, in which case I thought it might be appropriate to use the term "bit" to refer to the resulting values. For example, a raw likelihood ratio of 16 would transform to 4 bits of evidence for the denominator relative to the numerator. However, I wonder if this use of the term "bit" violates its conventional information theoretic sense. Any thoughts?
| Is it appropriate to use the term "bits" to discuss a log-base-2 likelihood ratio? | CC BY-SA 3.0 | null | 2011-07-08T13:38:41.293 | 2014-12-28T05:43:48.117 | 2011-07-08T13:51:54.637 | 919 | 364 | [
"terminology",
"likelihood-ratio",
"information-theory"
] |
12806 | 1 | null | null | 4 | 724 | I would like to know if it is useful (or maybe dangerous) to reduce the number of attributes (by selecting the most informative ones among thousands) before seeking latent variables (from an exploratory perspective).
A subsidiary question: in the same case, would it be beneficial to select the most important features for each category of features (these can be compressed using an entity-attribute-value model, which is not really suitable for data mining) before detecting the latent variables?
| Feature selection and latent variables | CC BY-SA 3.0 | 0 | 2011-07-08T13:41:58.733 | 2011-07-08T17:16:18.483 | 2011-07-08T14:18:24.740 | null | 5330 | [
"feature-selection",
"latent-variable"
] |
12807 | 2 | null | 12790 | 1 | null | Just run a contingency table, and then a logistic regression.
contingency table: color x gender and color vs dialogue.
Then, logistic regression
color = a + b*gender + c*dialogue + d*dialogue*gender
the interaction term will test your hypothesis that the main effect of gender isn't affected by dialogue. However, be aware that interpreting interaction terms in logistic regression is tricky:
Ai, Chunrong / Norton, Edward (2003): Interaction terms in logit and probit models, Economic Letters 80, p. 123–129
Last, but not least, I don't think you have a nested model, but a non-nested model. In any case, since the groups are small, it won't help you to model it as a multilevel model. So, just run a simple logistic regression.
| null | CC BY-SA 3.0 | null | 2011-07-08T14:06:54.127 | 2011-07-08T14:06:54.127 | null | null | 3058 | null |
12809 | 2 | null | 12797 | 15 | null | Your method does not appear to address the question, assuming that a "moderating effect" is a change in one or more regression coefficients between the two groups. Significance tests in regression assess whether the coefficients are nonzero. Comparing p-values in two regressions tells you little (if anything) about differences in those coefficients between the two samples.
Instead, introduce gender as a dummy variable and interact it with all the coefficients of interest. Then test for significance of the associated coefficients.
For example, in the simplest case (of one independent variable) your data can be expressed as a list of $(x_i, y_i, g_i)$ tuples where $g_i$ are the genders, coded as $0$ and $1$. The model for gender $0$ is
$$y_i = \alpha_0 + \beta_0 x_i + \varepsilon_i$$
(where $i$ indexes the data for which $g_i = 0$) and the model for gender $1$ is
$$y_i = \alpha_1 + \beta_1 x_i + \varepsilon_i$$
(where $i$ indexes the data for which $g_i = 1$). The parameters are $\alpha_0$, $\alpha_1$, $\beta_0$, and $\beta_1$. The errors are the $\varepsilon_i$. Let's assume they are independent and identically distributed with zero means. A combined model to test for a difference in slopes (the $\beta$'s) can be written as
$$y_i = \alpha + \beta_0 x_i + (\beta_1 - \beta_0) (x_i g_i) + \varepsilon_i$$
(where $i$ ranges over all the data) because when you set $g_i=0$ the last term drops out, giving the first model with $\alpha = \alpha_0$, and when you set $g_i=1$ the two multiples of $x_i$ combine to give $\beta_1$, yielding the second model with $\alpha = \alpha_1$. Therefore, you can test whether the slopes are the same (the "moderating effect") by fitting the model
$$y_i = \alpha + \beta x_i + \gamma (x_i g_i) + \varepsilon_i$$
and testing whether the estimated moderating effect size, $\hat{\gamma}$, is zero. If you're not sure the intercepts will be the same, include a fourth term:
$$y_i = \alpha + \delta g_i + \beta x_i + \gamma (x_i g_i) + \varepsilon_i.$$
You don't necessarily have to test whether $\hat{\delta}$ is zero, if that is not of any interest: it's included to allow separate linear fits to the two genders without forcing them to have the same intercept.
The main limitation of this approach is the assumption that the variances of the errors $\varepsilon_i$ are the same for both genders. If not, you need to incorporate that possibility and that requires a little more work with the software to fit the model and deeper thought about how to test the significance of the coefficients.
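In R this reduces to fitting the interaction model; the names `y`, `x`, `g`, and `d` below are placeholders:

```
# g is the 0/1 gender dummy; the x:g coefficient estimates the slope difference
fit <- lm(y ~ x + g + x:g, data = d)   # equivalently lm(y ~ x * g, data = d)
summary(fit)                           # the t-test on the x:g row tests the moderating effect

# The same test as a model comparison (F-test)
anova(lm(y ~ x + g, data = d), fit)
```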
| null | CC BY-SA 3.0 | null | 2011-07-08T15:07:58.817 | 2011-07-08T15:07:58.817 | null | null | 919 | null |
12810 | 1 | 12820 | null | 13 | 29901 | It seems universal that demographic statistics are given in terms of 100,000 population per year. For instance, suicide rates, homicide rates, disability-adjusted life year, the list goes on. Why?
If we were talking about chemistry, parts per million (ppm) is common. Why is the act of counting people looked at fundamentally differently? The number of 100,000 has no basis in the SI system, and as far as I can tell, it has no empirical basis at all, except a weak relation to a percentage. A count per 100,000 could be construed as a milli-percent, m%. I thought that might get some groans.
Is this a historical artifact? Or is there any argument to defend the unit?
| Why do demographers give rates per 100,000 people? | CC BY-SA 3.0 | 0 | 2011-07-08T15:32:13.147 | 2021-03-06T17:47:51.187 | 2019-08-04T07:40:09.363 | 11887 | 5331 | [
"demography",
"units"
] |
12811 | 2 | null | 12810 | 3 | null | Generally we are trying to convey information to actual people, so using a number that is meaningful to people is useful. 100,000 people is the size of a small to medium city which is easy to think about.
| null | CC BY-SA 3.0 | null | 2011-07-08T15:57:57.197 | 2011-07-08T15:57:57.197 | null | null | 4505 | null |
12812 | 1 | null | null | 2 | 2393 | This is a very basic question, so please bear with me.
I've been learning about AB Testing, which is largely used in internet marketing to examine the effectiveness of certain aspects of ads, websites, etc.
Here are a couple of links for people who want to know more about AB Testing:
[http://visualwebsiteoptimizer.com/split-testing-blog/what-you-really-need-to-know-about-mathematics-of-ab-split-testing/](http://visualwebsiteoptimizer.com/split-testing-blog/what-you-really-need-to-know-about-mathematics-of-ab-split-testing/)
[http://20bits.com/articles/statistical-analysis-and-ab-testing/](http://20bits.com/articles/statistical-analysis-and-ab-testing/)
[http://elem.com/~btilly/effective-ab-testing/](http://elem.com/~btilly/effective-ab-testing/)
Let's say that I have a website that registers users for a forum. I want to know if Headline 1 or Headline 2 is more effective at getting visitors on the web site to register for the forum.
So I have the following data.
```
dat = data.frame(Headline=c("Headline 1", "Headline 2"),
Visitors=c("1000", "1300"),
Clicks=c("500", "600"),
Conversions=c("100", "150"))
```
And here are the click through rates and conversion rates for each of the headlines.
```
ctr1 = (500/1000)*100 # for headline 1
ctr2 = (600/1300)*100 # for headline 2
ctr1; ctr2
conv1 = (100/1000)*100 # for headline 1
conv2 = (150/1300)*100 # for headline 2
conv1; conv2
```
According to the sites above, I'm really interested in determining the confidence intervals for the conversion rates for each headline. While 95% confidence would be ideal, I'm really open to anything 80% and up, so I need to calculate confidence intervals where I am 80%/85%/90%/95% confident that the conversion rate for a headline is within a certain range.
I'm really not sure how to go about this. Are there specific tests and/or functions in R that will provide me with the appropriate information? (confint, chi-square, G-test, etc.?)
Thanks for your patience and help.
EDIT:
So I tried the following, and I'm not sure if I'm doing it properly or making the right conclusions. Furthermore, there has to be a more efficient way to perform this task in R.
For a given conversion rate (p) and number of trials (n):
```
# Wald (normal-approximation) 95% CI for headline 1
p1 = 0.1
n1 = 1000
se1 = sqrt( p1 * (1-p1) / n1 )
se1
se1 * 1.96
(p1 + 1.96*se1) * 100
(p1 - 1.96*se1) * 100

# Wald (normal-approximation) 95% CI for headline 2
# (p2 is rounded; the exact rate is 150/1300 = 0.1154)
p2 = 0.11
n2 = 1300
se2 = sqrt( p2 * (1-p2) / n2 )
se2
se2 * 1.96
(p2 + 1.96*se2) * 100
(p2 - 1.96*se2) * 100

# resulting intervals (in percent):
# (8.1, 11.8)   headline 1
# (9.2, 12.7)   headline 2

# these confidence intervals for the two headlines overlap,
# so by this (conservative) criterion the variation (headline 2)
# doesn't show a clear improvement over the control headline
```
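For reference, the hand calculation above is the Wald (normal-approximation) interval for a proportion, and it can be wrapped in a small function instead of being repeated per headline. A minimal Python sketch (the helper name `wald_ci` is just illustrative; in R, `prop.test` gives a related interval):

```python
import math

# Wald (normal-approximation) confidence interval for a proportion.
# `wald_ci` is an illustrative helper name, not a library function.
def wald_ci(p, n, z=1.96):
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

lo1, hi1 = wald_ci(100 / 1000, 1000)   # headline 1 conversions
lo2, hi2 = wald_ci(150 / 1300, 1300)   # headline 2 conversions
```

Changing `z` (roughly 1.28 for 80%, 1.44 for 85%, 1.64 for 90%) gives the other confidence levels mentioned above.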
Thanks again.
| Finding confidence intervals for the click-through-rate of a website | CC BY-SA 3.0 | null | 2011-07-08T16:46:08.827 | 2011-07-08T19:44:13.037 | 2011-07-08T19:44:13.037 | 3310 | 3310 | [
"r",
"confidence-interval",
"ab-test"
] |
12813 | 2 | null | 12806 | 1 | null | Briefly, the answer is maybe (it is hard for me to tell what you are going to do with your data after variable selection has been performed). It is known that for least squares or PLS regression, [feature selection should be performed](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.60.9706) prior to construction of the final model. The reason for this is that the mean squared error of prediction for these models has a term which depends on $(p/n)^2$, where $p$ is the number of 'attributes' and $n$ is the number of observations (or samples). The number of latent factors used in PLS is evidently immaterial in this result. In my view, this somewhat undermines the raison d'être of PLS, but I still find it useful.
| null | CC BY-SA 3.0 | null | 2011-07-08T17:02:53.857 | 2011-07-08T17:02:53.857 | null | null | 795 | null |
12814 | 1 | null | null | 2 | 429 | I'm trying to run a 3 level hierarchical regression in lmer in R.
The model is specified:
```
model <- lmer(dependent ~ ind1 + ... + (1|level3/level2), data=data)
```
However, when I enter this code, R starts processing it for 20 minutes and doesn't respond. Am I doing something incorrectly?
| Problem with three-level hierarchical regression using lmer | CC BY-SA 3.0 | null | 2011-07-08T17:11:46.760 | 2011-07-08T17:27:09.630 | 2011-07-08T17:27:09.630 | null | null | [
"r",
"lme4-nlme"
] |
12815 | 2 | null | 12806 | 0 | null | How dangerous this activity is depends upon the properties of your features and the tests that you use to reduce the feature number. It may be the case that you will not find the "one-test-fits-all" method for reducing features, in which case it's important to understand and state the assumptions you're making when performing the test.
It is hard from your statement to judge what the purpose of the latent variables is in your analysis. However, depending upon your data, feature selection is useful because too much data may serve to dampen your true signal and add noise to classification.
| null | CC BY-SA 3.0 | null | 2011-07-08T17:16:18.483 | 2011-07-08T17:16:18.483 | null | null | 4673 | null |
12816 | 1 | 12840 | null | 0 | 263 | I have a variable with some missing values
```
a <- rnorm(100);
a[sample(1:100,10)] <- NA;
a;
```
How can I fill missing values with previous non missing value?
For example, if I have the sequence
a <- c(3, 2, 1, 6, 3, NA, 23, 23, NA)
the first NA should be replaced by the previous non-NA value (3), the second NA should be replaced with 23, etc.
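What is described here is usually called "last observation carried forward" (LOCF); in R, the `zoo` package's `na.locf` does it directly. A minimal language-agnostic sketch of the idea in Python:

```python
# Last observation carried forward (LOCF): each missing value is replaced
# by the most recent non-missing value before it.
def locf(values):
    filled, last = [], None
    for v in values:
        if v is not None:
            last = v
        filled.append(last)
    return filled

print(locf([3, 2, 1, 6, 3, None, 23, 23, None]))
# → [3, 2, 1, 6, 3, 3, 23, 23, 23]
```

Note that a leading missing value has nothing to carry forward and stays missing in this sketch.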
Thanks
| Data imputation question | CC BY-SA 3.0 | null | 2011-07-08T17:28:45.993 | 2011-07-09T17:51:01.607 | 2011-07-08T21:08:26.647 | 333 | 333 | [
"missing-data"
] |
12818 | 2 | null | 12816 | 0 | null | I would outright remove any features that have far too many missing values to impute and use KNN to impute missing values for the remaining ones.
| null | CC BY-SA 3.0 | null | 2011-07-08T18:03:06.963 | 2011-07-08T18:03:06.963 | null | null | 4673 | null |
12819 | 1 | 17206 | null | 13 | 36182 | I am using singular value decomposition on a matrix and obtaining the U, S and Vt matrices. At this point, I am trying to choose a threshold for the number of dimensions to retain. It was suggested that I look at a scree plot, but I am wondering how to go about plotting it in numpy. Currently, I am doing the following using the numpy and scipy libraries in Python:
```
U, S, Vt = svd(A)
```
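For context, a scree plot just shows each component's singular value, or the share of variance it explains, against the component index. A minimal sketch (the matrix here is a random stand-in for `A`, and the matplotlib calls are left commented out):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8))       # stand-in for the real matrix A

# singular values only, returned in descending order
S = np.linalg.svd(A, compute_uv=False)

# proportion of total variance carried by each component
explained = S**2 / np.sum(S**2)

# the scree plot itself:
# import matplotlib.pyplot as plt
# plt.plot(np.arange(1, len(S) + 1), explained, "o-")
# plt.xlabel("component"); plt.ylabel("proportion of variance")
# plt.show()
```

A common threshold rule is then to keep components up to the "elbow" of this curve, or enough of them to reach a chosen cumulative share of `explained`.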
Any suggestions?
| How to draw a scree plot in python? | CC BY-SA 3.0 | null | 2011-07-08T19:19:10.200 | 2019-01-28T10:18:30.677 | 2011-10-18T17:52:32.890 | null | 2164 | [
"data-visualization",
"python",
"svd"
] |
12820 | 2 | null | 12810 | 14 | null | A little research shows first that demographers (and others, such as epidemiologists, who report rates of events in human populations) do not "universally" use 100,000 as the denominator. Indeed, Googling "demography 100000" or related searches seems to turn up as many documents using 1000 for the denominator as 100,000. An example is the Population Reference Bureau's [Glossary of Demographic Terms](http://www.prb.org/pdf04/glossary.pdf), which consistently uses 1000.
Looking around in the writings of early epidemiologists and demographers shows that the early ones (such as John Graunt and William Petty, contributors to the early [London Bills of Mortality](http://books.google.com/books?id=3wYAAAAAMAAJ&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v=onepage&q=100000&f=false), 1662) did not even normalize their statistics: they reported raw counts within particular administrative units (such as the city of London) during given time periods (such as one year or seven years).
The seminal epidemiologist [John Snow](http://en.wikipedia.org/wiki/John_Snow_%28physician%29) (1853) produced tables normalized to 100,000 but discussed rates per 10,000. This suggests that the denominator in the tables was chosen according to the number of significant figures available and adjusted to make all entries integral.
Such conventions were common in mathematical tables going at least as far back as [John Napier's book of logarithms](http://books.google.com/books?id=Zlu4AAAAIAAJ&printsec=frontcover&dq=John%20Napier%27s%20logarithm&hl=en&ei=q1UXTuqPDsKRgQfNt9AU&sa=X&oi=book_result&ct=result&resnum=2&ved=0CDQQ6AEwAQ#v=onepage&q&f=false) (c. 1600), which expressed its values per 10,000,000 to achieve seven digit precision for values in the range $[0,1]$. (Decimal notation was apparently so recent that he felt obliged to explain his notation in the book!) Thus one would expect that typically denominators have been selected to reflect the precision with which data are reported and to avoid decimals.
A modern example of consistent use of rescaling by powers of ten to achieve manageable integral values in datasets is provided by [John Tukey](http://en.wikipedia.org/wiki/John_Tukey)'s classic text, EDA (1977). He emphasizes that data analysts should feel free to rescale (and, more generally, nonlinearly re-express) data to make them more suitable for analysis and easier to manage.
I therefore doubt speculations, however natural and appealing they may be, that a denominator of 100,000 historically originated with any particular human scale such as a "small to medium city" (which before the 20th century would have had fewer than 10,000 people anyway and far fewer than 100,000).
| null | CC BY-SA 3.0 | null | 2011-07-08T19:24:06.747 | 2011-07-08T19:24:06.747 | null | null | 919 | null |
12821 | 2 | null | 8243 | 1 | null | A bit late, but if you want to use Python you have several choices:
- You can employ Automatic Differentiation, which essentially uses the chain rule and a look-up table to differentiate for you. Three packages I know of that do this are:
a. OpenOpt's FuncDesigner (http://openopt.org/FuncDesigner)
b. Theano, which additionally optimizes your code (including compiling to the GPU). However, a major caveat is that in order to do its magic, it hides a lot from you (personally not my cup of tea).
c. ScientificPython (one of its many modules)
- You can do symbolic differentiation with SymPy (which does a large number of other things as well).
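For option 2, a minimal SymPy sketch (differentiating an arbitrary example function):

```python
import sympy as sp

x = sp.symbols("x")
f = sp.sin(x) * sp.exp(x)

# d/dx [sin(x) * exp(x)] = exp(x)*sin(x) + exp(x)*cos(x)
df = sp.diff(f, x)
```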
With at least 1.a, 1.c, and 2 you can get your answers and then use the answers in whatever your favorite language happens to be.
| null | CC BY-SA 3.0 | null | 2011-07-08T20:37:16.110 | 2011-07-08T20:37:16.110 | null | null | 5268 | null |
12822 | 1 | null | null | 9 | 329 | I plot something to make a point to myself or someone else. Usually, a question starts this process, and often the person asking hopes for a particular answer.
How can I learn interesting things about the data in a less biased way?
Right now I'm roughly following this method:
- Summary statistics.
- Stripchart.
- Scatter plot.
- Maybe repeat with an interesting subset of data.
But that doesn't seem methodical or scientific enough.
Are there guidelines or procedures to follow that reveal things about the data I wouldn't think to ask? How do I know when I have done an adequate analysis?
| Guidelines for discovering new knowledge in data | CC BY-SA 3.0 | null | 2011-07-08T22:37:15.973 | 2016-08-22T05:12:55.787 | 2016-08-21T15:33:22.747 | 22468 | 5335 | [
"data-visualization",
"exploratory-data-analysis",
"knowledge-discovery"
] |
12823 | 1 | 12849 | null | 6 | 2079 | Is it possible to perform a statistical test to determine if one classifier is better than the other using only the confusion matrices of these classifiers?
What about the average accuracies from k-fold cross validation?
I have a number of confusion matrices and average accuracies for classifiers obtained through k-fold cross-validation (done using RapidMiner). The data sets for these classifiers are all the same, though the splitting into folds was done independently for each. What I'd like to do is, given two of these classifiers, A and B, test whether A is statistically better than B using only the confusion matrices and/or the average accuracies of classifiers A and B.
All the statistical tests I've found so far require knowing the number of samples that A classified correctly when B did not, and vice versa (McNemar's test, for example). I can generate this data if necessary, but I'd like to avoid it if reasonably possible.
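For reference, the extra bookkeeping mentioned above is small: McNemar's test needs only the two discordant counts, b (samples A got right and B got wrong) and c (the reverse). A minimal sketch of the continuity-corrected version:

```python
from scipy.stats import chi2

# Continuity-corrected McNemar statistic from the two discordant counts:
# b = samples classifier A got right and B got wrong, c = the reverse.
def mcnemar(b, c):
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)   # statistic and p-value (chi-square, 1 df)

stat, p = mcnemar(b=10, c=5)
```

Note that the confusion matrices alone cannot recover b and c, which is exactly why the paired tests ask for the per-sample agreement data.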
| Statistically comparing classifiers using only confusion matrix (or average accuracies) | CC BY-SA 3.0 | null | 2011-07-09T02:42:53.030 | 2011-07-09T23:34:54.450 | 2011-07-09T23:08:53.740 | 5336 | 5336 | [
"machine-learning",
"statistical-significance",
"model-selection",
"cross-validation"
] |
12824 | 1 | null | null | 0 | 482 | I am a thesis student. I conducted an experiment in three different schools. I took a class from each school and assumed that together they form one group. I took one group and used it as a control group. I used a paired t-test.
The reader at my defense objected that I cannot assume that the students from each school form one group. He preferred a covariance test.
Please send me your reply asap based on scientific evidence.
Regards,
Teacher and Student
| Can I use paired t-test in this case? | CC BY-SA 3.0 | null | 2011-07-09T07:19:52.213 | 2011-07-10T01:23:16.527 | null | null | 5338 | [
"covariance"
] |
12825 | 1 | 12827 | null | 5 | 617 | To make clear what I want to ask, I want to begin with the negative binomial distribution. The first two moments are $E(y)=\mu$ and $Var(y)=\mu+\mu^2/k$. When we solve the score function of the log-likelihood with an iterative algorithm, we get an estimator $\widehat{\mu}$ (via the estimator $\widehat{\beta}$) and an estimator for the shape parameter $\widehat{k}$. So there are two parameters to be estimated.
When I look at the quasi-Poisson approach, we have mean $E(y)=\mu$ and variance $Var(y)=\phi\cdot\mu$. This approach is not solvable by ordinary ML, since we do not know the probability function of $y$. Thus we use quasi-ML methods.
My question is: why don't we construct a distribution that has the two moments of the quasi-Poisson model and use classic ML estimation? It should be solvable with classic ML methods, just as estimation with the negative binomial distribution is (where we likewise have two estimators, for $\mu$ and $k$, compared to $\mu$ and $\phi$).
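(For reference, the estimating equation solved by the quasi-ML approach under the assumption $Var(y)=\phi\mu$ is

$$\sum_{i=1}^{n}\frac{\partial\mu_i}{\partial\beta}\,\frac{y_i-\mu_i}{\phi\,\mu_i}=0,$$

in which $\phi$ cancels, so $\widehat{\beta}$ coincides with the Poisson ML estimate; $\phi$ is then estimated separately, e.g. from the Pearson statistic.)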
I'm curious about your answers!
| Why do we use quasi-ML methods when estimating a quasi-Poisson regression? | CC BY-SA 3.0 | null | 2011-07-09T08:36:31.520 | 2021-10-04T23:56:50.933 | 2017-11-20T23:29:13.227 | 28666 | 4496 | [
"generalized-linear-model",
"negative-binomial-distribution",
"poisson-regression",
"quasi-likelihood"
] |