Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10142 | 1 | null | null | 3 | 662 | How can the following tables be computed:
- Chi-Square table
- Student t-table
I'm looking for a formula or procedure used to make these tables.
### For example:
If I have a given 'x' value, such as df (degrees of freedom), and some confidence percentage 'y', I should be able to plug these x and y values into that formula or procedure and get results close to those in the standard tables.
I'm pretty sure that there is some logic behind these tables, but I could not find any concrete pointers on how to derive them. My guess that such a logic exists was confirmed when I found an online tool doing this: [http://faculty.vassar.edu/lowry/csqsamp.html](http://faculty.vassar.edu/lowry/csqsamp.html)
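For what it's worth, here is a quick sketch (my addition, using SciPy; the df and confidence values are illustrative, not from the question) showing that the table entries are just values of the inverse CDF:

```python
# Standard table entries are quantiles (inverse CDF values) of the
# corresponding distribution; SciPy exposes these as .ppf().
from scipy import stats

# Chi-square critical value for df = 1 at the 95% level:
chi2_crit = stats.chi2.ppf(0.95, df=1)   # ~3.841, as in the printed tables

# Two-sided Student t critical value for df = 10 at the 95% level:
t_crit = stats.t.ppf(0.975, df=10)       # ~2.228

print(chi2_crit, t_crit)
```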
| Formula or procedure for computing standard statistical tables such as z table, Student's t-table, or chi-square table | CC BY-SA 3.0 | null | 2011-04-29T10:59:15.100 | 2015-11-28T00:08:23.500 | 2015-11-28T00:08:23.500 | 805 | 4331 | [
"normal-distribution",
"chi-squared-test",
"t-distribution"
] |
10143 | 2 | null | 10142 | 3 | null | On e.g. [wikipedia](http://en.wikipedia.org/wiki/Normal_distribution) you can find the formula for the cdf/pdf of these distributions. Enter the values of the parameters and you're done.
If you want the reverse (I think you do, from your question), the simplest 'general' way of getting it done (not all cdfs have an analytic inverse) is to use a univariate solver.
Maybe you could just use R, which has all these functions built in...
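To make the 'univariate solver' suggestion concrete, here is a small sketch (my addition, using SciPy's bracketing root finder) that inverts a cdf numerically and checks it against the analytic quantile function:

```python
from scipy import stats
from scipy.optimize import brentq

def inverse_cdf(p, cdf, lo=-50.0, hi=50.0):
    # Find x in [lo, hi] such that cdf(x) = p, by root finding.
    return brentq(lambda x: cdf(x) - p, lo, hi)

z = inverse_cdf(0.975, stats.norm.cdf)   # ~1.96 for the standard normal
```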
| null | CC BY-SA 3.0 | null | 2011-04-29T11:11:55.250 | 2011-04-29T11:11:55.250 | null | null | 4257 | null |
10145 | 1 | null | null | 3 | 355 | I am writing my master's thesis in finance on the topic of voluntary disclosure of financial targets in annual reports of manufacturing firms.
### Context
- I have created a dependent variable that is an index of level of disclosure.
- At the moment I am gathering firm characteristics as independent variables from around 10 years (unbalanced panel data).
### Questions
- Should the model be probit of logit?
- Should the model be fixed or random?
- How should it be implemented in Stata?
| Predicting index from multiple predictors using panel data over 10 years: logit or probit? Fixed or random? | CC BY-SA 3.0 | null | 2011-04-29T12:03:17.433 | 2011-04-30T22:24:09.737 | 2011-04-30T03:26:20.447 | 183 | 4394 | [
"logistic",
"stata",
"random-effects-model",
"fixed-effects-model"
] |
10146 | 1 | null | null | 8 | 8592 | I have been doing multinomial logistic regression analysis using SPSS 19.
I have encountered the following problem when I run the analysis procedure:
>
"Unexpected singularities in the Hessian matrix are encountered. This indicates that either some predictor variables should be excluded or some categories should be merged."
A little background about my data: I have four categorical predictors with two levels each, 1 or 2. The response variable in my model is a three-level categorical variable. I used the last level as the reference category. I tried to compare the coefficients of the intercept with those of the four predictors in the two logits so as to find which level of the response variable may cause this problem. The big differences in coefficients between the intercept and three of the predictors suggest that it might be the reference category that has the problem. However, I could not combine the levels of the response variable (which I'm not allowed to do for my research).
I have also tried to exclude the predictors one by one, but still got the same problem.
Could anyone please tell me what I should do to solve this problem?
| Unexpected singularities in the Hessian matrix error in multinomial logistic regression | CC BY-SA 3.0 | null | 2011-04-29T14:34:35.197 | 2017-06-04T03:04:44.390 | 2017-06-04T03:04:44.390 | 22311 | 4380 | [
"logistic",
"spss",
"multinomial-distribution",
"separation"
] |
10149 | 1 | 10151 | null | 3 | 1127 | Is there a function in R that given a vector of numbers, returns another vector with the standard units corresponding to each value?
where...
standard unit: how many SDs a value is + or - from the mean
Example:
```
x <- c(1,3,4,5,7) # note: mean=4, sd=2
su(x)
[1] -1.5 -0.5 0.0 0.5 1.5
```
Is this func already included in a package?
Thanks.
| Converting a vector to a vector of standard units in R | CC BY-SA 3.0 | null | 2011-04-29T17:43:14.533 | 2011-04-29T19:23:38.993 | 2011-04-29T19:23:38.993 | null | 4400 | [
"r"
] |
10150 | 1 | null | null | 3 | 580 | If I have a list of values throughout time, say a list of values for every minute throughout an hour of monitoring something, can I somehow 'predict' or estimate what the value would probably be in the future, say another 20-30 minute after my last known value? And if so, how can this be done?
I figure maybe I could do it with just a 'best fit' to a quadratic and look at the value for x=90 (60 known values + 30 values into the future), but I feel like that probably isn't going to be the most accurate method. Does anyone have any suggestions for a better way to do this? Just a note: I have very little statistical experience, so please bear with me if this question is known to be easy (or impossible).
Thanks!
| Predicting a future data value with regression | CC BY-SA 3.0 | null | 2011-04-29T17:45:23.660 | 2011-05-07T11:45:48.963 | 2011-04-29T22:27:13.157 | 3911 | 4399 | [
"time-series",
"forecasting"
] |
10151 | 2 | null | 10149 | 4 | null | Try the scale function.
```
?scale
```
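Note (my addition): `scale` divides by the sample standard deviation (n − 1 denominator), while the expected output in the question implicitly uses the population SD (which is exactly 2 for that vector). A quick NumPy sketch of both conventions, for comparison:

```python
import numpy as np

x = np.array([1, 3, 4, 5, 7], dtype=float)

z_pop = (x - x.mean()) / x.std(ddof=0)  # population SD = 2, matches the question
z_smp = (x - x.mean()) / x.std(ddof=1)  # sample SD, R's scale() convention
```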
| null | CC BY-SA 3.0 | null | 2011-04-29T18:05:02.800 | 2011-04-29T18:05:02.800 | null | null | 26 | null |
10152 | 1 | 10153 | null | 6 | 476 | I learned about the zero-inflated negative binomial distribution a few months ago when I was trying to do regression on some discrete data. I have a different data set now, and it seems to be very similar except that the value `1` seems to be over-represented (as opposed to `0`). Is there such a thing as a one-inflated negative binomial distribution? How could I model these data?
| One-inflated negative binomial? | CC BY-SA 3.0 | null | 2011-04-29T18:06:24.857 | 2011-04-29T18:15:14.767 | null | null | 1973 | [
"distributions",
"negative-binomial-distribution"
] |
10153 | 2 | null | 10152 | 5 | null | Sure. You can write
$\Pr(X=x) = p\,f(x) + (1-p)\,1(x=1)$
where $f$ is the NB pmf, $1()$ is just an indicator function and $p$ is some probability. Of course in general the $x=1$ could be $x=k$ for any integer $k$, and $f$ could be any pmf.
In principle it shouldn't be harder to fit than a ZI model, but an off-the-shelf model-fitting solution may not exist.
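A minimal sketch of that pmf (my addition; the parameter values are illustrative, not from the question):

```python
import numpy as np
from scipy import stats

def one_inflated_nbinom_pmf(x, p, n, q):
    # Mixture: with probability p draw from NB(n, q), otherwise emit the value 1.
    x = np.asarray(x)
    return p * stats.nbinom.pmf(x, n, q) + (1 - p) * (x == 1)

pmf = one_inflated_nbinom_pmf(np.arange(500), p=0.7, n=5, q=0.5)
```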
| null | CC BY-SA 3.0 | null | 2011-04-29T18:15:14.767 | 2011-04-29T18:15:14.767 | null | null | 26 | null |
10154 | 1 | 10158 | null | 1 | 132 | First of all I have to mention that I'm not a statistician at all; I'm just a simple programmer with some curiosities ... and worst of all, I don't know where to start.
Let's assume the following working scenario:
A big company, an internet service provider (ISP) with unlimited bandwidth, chooses to change how the users will use and pay for the internet services: each user has to predict how much bandwidth they will consume on the next day, for each hour. If the user predicts that she will consume 0 MB from 0:00 to 18:00, she will pay nothing. If the user predicts that she will watch an HD movie from 18:00 to 20:00, she will consume 10 GB and will pay just for that amount of data.
If the user consumes more than she predicted, she will pay more for that amount of data (just for the difference). Predicted amounts of data are cheapest; if users consume more than was predicted, they will have to pay penalties.
The thing is that the users can build networks/groups with their friends in order to optimize their costs. For example, if a user is not using his predicted amount of data, another friend from their network can use it for free. If they wish, the users of a group can see (every 30 minutes) whether their friends are consuming traffic or not.
The idea is that each user has to predict how much internet they will consume, based on their habits and schedules. The users will pay at the end of each month for the traffic that they consumed.
Now, the quest is to find the most appropriate way to represent the traffic prognoses for each user, group, and for the ISP.
- I'll use line charts to represent how much a user has predicted and how much she consumed on each day, for each hour.
- I'll use bar charts to represent the difference between what was predicted and what was consumed in each hour.
The problem is to represent those data in a graphical way for each month, for each user, group, and the ISP:
- Can you give me some samples or resources which can help me find the best way of representing my data and costs?
- Do you have an idea of what statistical approach to use in order to illustrate the prognoses, consumption, and errors?
| Representing traffic use forecasts graphically | CC BY-SA 4.0 | null | 2011-04-29T19:14:16.423 | 2019-03-21T07:11:42.320 | 2019-03-21T07:11:42.320 | 11887 | 3856 | [
"data-visualization"
] |
10155 | 1 | 10174 | null | 2 | 1641 | When using a binomial family, logit link for GLM (or GEE in my case), I notice that my model estimates diverge when my response variables (which are continuous probabilities with range 0 to 1) include 0 or 1 (or 0 <= y <= 1) as observed values, but the models with response variables that don't include 0 or 1 (or 0 < y < 1) are able to converge just fine.
Question:
- Why does this happen?
When running a logistic regression model (with 0 < y < 1) the model runs fine, as does the model when the response variable is dichotomous 0/1.
I suspect the following: say I have observations 0 < y <= 1. In this case, the algorithm sees my ones but not any zeros, and then craps out saying "some groups have fewer than x observations," the aforementioned group being the ones that are supposed to have zeros.
Secondary question:
- If I exclude observations that are 0 or 1 in order to fit my models, am I biasing my results?
Here's an example: my response variable is graduation rate expressed as a percentage. For the logistic regression models, there are apparently schools that have 100% graduation rate (seen as a 1 in my dataset). Would it be a valid strategy to drop these schools from the model, and what are the implications in interpretation? Is this akin to dropping outliers willy-nilly?
| Estimates diverging using continuous probabilities in logistic regression | CC BY-SA 3.0 | null | 2011-04-29T21:05:25.247 | 2011-04-30T10:54:03.897 | null | null | 3309 | [
"logistic",
"stata"
] |
10156 | 2 | null | 10150 | 1 | null | Yes, this problem occurs in a variety of time-series prediction scenarios.
But I would suggest avoiding the quadratic fit if you are choosing it only because you guess it might fit.
Start with linear (multivariate) regression, then try combinations of non-linear terms. I have also seen certain problems of this sort, like retweet prediction and stock price prediction, give nearly accurate results with a SOFNN (self-organizing fuzzy neural network). Try a scholar.google search; you will hit upon previous good work on this.
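For instance, a minimal linear-trend sketch (my addition, with made-up data) that fits 60 observed minutes and extrapolates to minute 90:

```python
import numpy as np

t = np.arange(60)                    # 60 observed minutes
y = 0.5 * t + 3 + np.sin(t / 5.0)    # made-up readings: linear trend plus wiggle

coef = np.polyfit(t, y, 1)           # least-squares straight-line fit
forecast = np.polyval(coef, 90)      # extrapolate 30 minutes ahead
```

With real data you would want to compare such a fit against held-out points before trusting the extrapolation.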
| null | CC BY-SA 3.0 | null | 2011-04-29T21:34:52.957 | 2011-04-29T21:34:52.957 | null | null | 3994 | null |
10157 | 2 | null | 10141 | 1 | null | What is wrong with using [plm](http://www.jstatsoft.org/v27/i02/paper) or [lme4](http://lme4.r-forge.r-project.org/slides/2009-07-07-Rennes/3Longitudinal-4.pdf) ([another link](http://lme4.r-forge.r-project.org/book/Ch4.pdf))? Particularly the `glmer` function?
| null | CC BY-SA 3.0 | null | 2011-04-29T21:35:09.960 | 2011-05-30T07:39:10.500 | 2011-05-30T07:39:10.500 | 2116 | 1893 | null |
10158 | 2 | null | 10154 | 3 | null | I would use [hanging bars](http://4.bp.blogspot.com/_V8g1rNtmHuM/SSMfczhjWoI/AAAAAAAAASk/mmf4jfkBHfY/s320/plot3.jpg), where actual consumptions are the bars, and errors are the differences between the lower ends of the bars and the horizontal axis. For aggregate data (multiple users in one group) the bars can be substituted with [stacked bars](http://www.java2s.com/Code/JavaImages/JFreeChartStackedBarChartDemo4.PNG).
| null | CC BY-SA 3.0 | null | 2011-04-29T22:24:25.127 | 2011-04-29T22:24:25.127 | null | null | 3911 | null |
10159 | 1 | 10161 | null | 66 | 204215 | I'm going to start out by saying this is a homework problem straight out of the book. I have spent a couple hours looking up how to find expected values, and have determined I understand nothing.
>
Let $X$ have the CDF $F(x) = 1 - x^{-\alpha}, x\ge1$.
Find $E(X)$ for those values of $\alpha$ for which $E(X)$ exists.
I have no idea how to even start this. How can I determine which values of $\alpha$ exist? I also don't know what to do with the CDF (I'm assuming this means Cumulative Distribution Function). There are formulas for finding the expected value when you have a frequency function or density function. Wikipedia says the CDF of $X$ can be defined in terms of the probability density function $f$ as follows:
$F(x) = \int_{-\infty}^x f(t)\,dt$
This is as far as I got. Where do I go from here?
EDIT: I meant to put $x\ge1$.
| Find expected value using CDF | CC BY-SA 4.0 | null | 2011-04-29T22:30:31.797 | 2022-03-22T18:15:47.100 | 2019-08-31T04:55:06.390 | 44269 | 4401 | [
"self-study",
"expected-value"
] |
10160 | 1 | 10163 | null | 3 | 9779 | The ultimate goal is to show users, at a glance, if their data is normally distributed.
The first attempt is a kludge that plots the data in a frequency graph. Then, the observed mean and standard deviation are used to build a "normal curve" graph. The frequency chart is laid over the normal-curve chart and put next to some key statistics. The frequency chart also colors positive bins green and negative bins red.
It looks like this:

I understand the fallacy of this approach, but for now it's practical. What is a better way to approach this issue?
| What are some ways to graphically display non-normal distributions in Excel? | CC BY-SA 3.0 | null | 2011-04-29T23:11:29.087 | 2011-04-30T01:42:18.423 | null | null | 933 | [
"excel",
"descriptive-statistics",
"curve-fitting"
] |
10161 | 2 | null | 10159 | 28 | null | Edited for the comment from probabilityislogic
Note that $F(1)=0$ in this case so the distribution has probability $0$ of being less than $1$, so $x \ge 1$, and you will also need $\alpha > 0$ for an increasing cdf.
If you have the cdf, then you want its derivative (the density), which for a continuous distribution like this is
$$f(x) = \frac{dF(x)}{dx}$$
and in reverse $F(x) = \int_{1}^x f(t)\,dt$ for $x \ge 1$.
Then to find the expectation you need to find
$$E[X] = \int_{1}^{\infty} x f(x)\,dx$$
providing that this exists. I will leave the calculus to you.
| null | CC BY-SA 3.0 | null | 2011-04-29T23:21:56.283 | 2011-04-30T09:54:18.863 | 2011-04-30T09:54:18.863 | 2958 | 2958 | null |
10162 | 1 | 10196 | null | 91 | 87655 | I'm new to machine learning, and I have been trying to figure out how to apply neural network to time series forecasting. I have found resource related to my query, but I seem to still be a bit lost. I think a basic explanation without too much detail would help.
Let's say I have some price values for each month over a few years, and I want to predict new price values. I could get a list of prices for the last few months, and then try to find similar trends in the past using K-Nearest-Neighbor. I could then use the rate of change or some other property of the past trends to try to predict new prices. How I can apply a neural network to this same problem is what I am trying to find out.
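Whichever model ends up doing the prediction (k-NN or a neural network), the usual first step is the same: convert the series into (window of past values → next value) training pairs and treat it as ordinary supervised learning. A sketch of that step (my own addition):

```python
import numpy as np

def make_windows(series, lag):
    # Each row of X is `lag` consecutive values; y holds the value that follows.
    series = np.asarray(series, dtype=float)
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    return X, y

X, y = make_windows([10, 11, 13, 12, 14, 15], lag=3)
# X[0] is [10, 11, 13] and y[0] is 12, and so on
```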
| How to apply Neural Network to time series forecasting? | CC BY-SA 3.0 | null | 2011-04-30T00:11:19.003 | 2018-07-15T10:10:11.693 | 2011-04-30T01:47:52.990 | 4403 | 4403 | [
"time-series",
"forecasting",
"neural-networks"
] |
10163 | 2 | null | 10160 | 6 | null | A normal probability plot is an excellent way to compare an empirical distribution to a normal distribution. Its merits are that it clearly displays the nature of any deviations from normality: ideally, the points lie along the diagonal; vertical deviations from the diagonal depict deviations from normality. Its disadvantages are that many people do not know how to read it, so beware!
To create a normal probability plot in Excel, rank the data (with the RANK function) and convert them to a normal score via
```
NORMSINV((rank-1/2)/count)
```
where 'count' is the amount of data and 'rank' references a cell with the rank, as shown in the illustrations.
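The same normal scores can be computed outside Excel to check the spreadsheet; for example, a small sketch of the `NORMSINV((rank-1/2)/count)` computation in Python with SciPy (my addition, with illustrative data):

```python
import numpy as np
from scipy import stats

x = np.array([3.2, 1.8, 2.5, 4.1, 2.9])          # illustrative data
ranks = stats.rankdata(x)                         # ascending ranks, 1 = smallest
scores = stats.norm.ppf((ranks - 0.5) / len(x))
# Plot sorted x against sorted scores; normal data should fall near a line.
```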

The formulas in this spreadsheet are

| null | CC BY-SA 3.0 | null | 2011-04-30T01:42:18.423 | 2011-04-30T01:42:18.423 | null | null | 919 | null |
10164 | 2 | null | 577 | 25 | null | In my experience, BIC results in serious underfitting and AIC typically performs well, when the goal is to maximize predictive discrimination.
| null | CC BY-SA 3.0 | null | 2011-04-30T02:01:01.827 | 2011-04-30T02:01:01.827 | null | null | 4253 | null |
10165 | 2 | null | 577 | 17 | null | An informative and accessible "derivation" of AIC and BIC by Brian Ripley can be found here:
[http://www.stats.ox.ac.uk/~ripley/Nelder80.pdf](http://www.stats.ox.ac.uk/~ripley/Nelder80.pdf)
Ripley provides some remarks on the assumptions behind the mathematical results. Contrary to what some of the other answers indicate, Ripley emphasizes that AIC is based on assuming that the model is true. If the model is not true, a general computation will reveal that the "number of parameters" has to be replaced by a more complicated quantity. Some references are given in Ripley's slides. Note, however, that for linear regression (strictly speaking, with a known variance) this generally more complicated quantity simplifies to the number of parameters.
| null | CC BY-SA 3.0 | null | 2011-04-30T05:49:44.777 | 2011-05-03T13:37:26.293 | 2011-05-03T13:37:26.293 | 930 | 4376 | null |
10166 | 2 | null | 8570 | 5 | null | Nocedal and Wright's book
[http://users.eecs.northwestern.edu/~nocedal/book/](http://users.eecs.northwestern.edu/~nocedal/book/)
is a good reference for optimization in general, and many things in their book are of interest to a statistician. There is also a whole chapter on non-linear least squares.
| null | CC BY-SA 3.0 | null | 2011-04-30T06:01:46.580 | 2011-04-30T06:01:46.580 | null | null | 4376 | null |
10167 | 1 | null | null | 8 | 1414 |
### Context:
Within the context of structural equation modelling, I have non-normality according to the Mardia test but univariate indices of skewness and kurtosis are less than 2.0.
### Questions:
- Should parameter estimates (coefficient estimates) be evaluated using bootstrapping (1000 replicates) with bias-corrected methods?
- In place of the traditional chi-square test, should the Bollen-Stine bootstrapped version be used?
| Bootstrapped parameter and fit estimates with non-normality for structural equation models | CC BY-SA 3.0 | null | 2011-04-30T06:10:14.923 | 2015-07-16T21:49:01.800 | 2013-08-03T20:48:46.447 | 17230 | 4406 | [
"bootstrap",
"normality-assumption",
"structural-equation-modeling"
] |
10168 | 2 | null | 10159 | 10 | null | I think you actually mean $x\geq 1$, otherwise the CDF is vacuous, as $F(1)=1-1^{-\alpha}=1-1=0$.
What you "know" about CDFs is that they eventually approach zero as the argument $x$ decreases without bound and eventually approach one as $x \to \infty$. They are also non-decreasing, so this means $0\leq F(y)\leq F(x)\leq 1$ for all $y\leq x$.
So if we plug in the CDF we get:
$$0\leq 1-x^{-\alpha}\leq 1\implies 1\geq \frac{1}{x^{\alpha}}\geq 0\implies x^{\alpha}\geq 1 > 0\implies x\geq 1 \>.$$
From this we conclude that the support for $x$ is $x\geq 1$. Now we also require $\lim_{x\to\infty} F(x)=1$, which implies that $\alpha>0$.
To work out what values the expectation exists, we require:
$$\newcommand{\rd}{\mathrm{d}}E(X)=\int_{1}^{\infty}x\frac{\rd F(x)}{\rd x}\rd x=\alpha\int_{1}^{\infty}x^{-\alpha} \rd x$$
And this last expression shows that for $E(X)$ to exist, we must have $-\alpha<-1$, which in turn implies $\alpha>1$. This can easily be extended to determine the values of $\alpha$ for which the $r$'th raw moment $E(X^{r})$ exists.
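As a numerical sanity check (my addition, not part of the original answer): the integral above evaluates to $\alpha/(\alpha-1)$ for $\alpha>1$, so for $\alpha=2$ it should equal $2$:

```python
from scipy.integrate import quad

alpha = 2.0
# E[X] = integral of x * f(x) with f(x) = alpha * x**(-alpha - 1) on [1, inf)
val, abserr = quad(lambda x: x * alpha * x ** (-alpha - 1), 1, float("inf"))
# val should be alpha / (alpha - 1) = 2
```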
| null | CC BY-SA 3.0 | null | 2011-04-30T07:59:08.367 | 2011-04-30T13:59:33.003 | 2011-04-30T13:59:33.003 | 2970 | 2392 | null |
10169 | 1 | 10173 | null | 5 | 933 | I have a three-way contingency table in which the marginal totals for two sides are fixed and for the third are random. I'm wondering how to perform a chi-square test for homogeneity for such a three-way contingency table.
Example

Assume both Trt and Gender marginal totals are fixed.
Thanks
| $\chi^2$ test of homogeneity for three-way contingency table | CC BY-SA 3.0 | null | 2011-04-30T08:07:27.833 | 2011-06-26T01:11:47.643 | 2011-05-04T09:18:55.823 | 3903 | 3903 | [
"statistical-significance",
"chi-squared-test",
"categorical-data"
] |
10170 | 2 | null | 10059 | 6 | null | The closest package that I can think of is [birch](http://cran.r-project.org/src/contrib/Archive/birch/), but it is not available on CRAN anymore so you have to get the source and install it yourself (`R CMD install birch_1.1-3.tar.gz` works fine for me, OS X 10.6 with R version 2.13.0 (2011-04-13)). It implements the original algorithm described in
>
Zhang, T. and Ramakrishnan, R. and Livny, M. (1997). BIRCH: A New Data Clustering Algorithm and Its Applications. Data Mining and Knowledge Discovery, 1, 141-182.
which relies on a cluster feature tree, as does SPSS TwoStep (I cannot check, though). There's a possibility of using the k-means algorithm to perform clustering on a `birch` object (`kmeans.birch()`), that is, partitioning the subclusters into k groups such that the sum of squares from all the points in each subcluster to the assigned cluster centers is minimized.
| null | CC BY-SA 3.0 | null | 2011-04-30T08:12:03.110 | 2011-04-30T08:12:03.110 | null | null | 930 | null |
10171 | 1 | 10771 | null | 9 | 2333 | I am confused about permutation analysis for feature selection in a logistic regression context.
Could you provide a clear explanation of the random permutation test and how it applies to feature selection? Possibly with an exact algorithm and examples.
Finally, how does it compare to other shrinkage methods like the Lasso or LAR?
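For concreteness, the generic permutation scheme (permute the labels, recompute the statistic, compare with the observed value) can be sketched as follows; this is my own simplified illustration, in which a plain difference in means stands in for a logistic-regression statistic:

```python
import numpy as np

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    # Permutation test of association between feature x and binary label y.
    # Statistic: difference in the mean of x between the two classes.
    rng = np.random.default_rng(seed)
    obs = x[y == 1].mean() - x[y == 0].mean()
    hits = 0
    for _ in range(n_perm):
        yp = rng.permutation(y)
        stat = x[yp == 1].mean() - x[yp == 0].mean()
        if abs(stat) >= abs(obs):
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction avoids p = 0

# A feature strongly associated with the label should give a small p-value:
y = np.repeat([0, 1], 50)
x = y * 3.0 + np.random.default_rng(1).normal(size=100)
p_small = permutation_pvalue(x, y)
```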
| Random permutation test for feature selection | CC BY-SA 3.0 | null | 2011-04-30T08:52:26.243 | 2011-06-12T17:57:26.913 | null | null | 3896 | [
"regression",
"logistic",
"feature-selection",
"permutation-test",
"regularization"
] |
10172 | 2 | null | 9920 | 3 | null | What you describe about Tukey's nonadditivity test sounds good to me. In effect, it allows one to test for an item-by-rater interaction. Some words of caution, though:
- Tukey's nonadditivity test effectively allows one to test for a linear-by-linear product of two factor main effects.
- The possibility of deriving a total score is irrelevant here, as this particular Tukey's test can be applied in any randomized block design, as described in the Stata FAQ, for example.
- It applies in situations where you have a single observation per cell, that is, each rater rates each item only once (no replicates).
You might recall that the interaction term is confounded with the error term when there are no replicates in an ANOVA design; in inter-rater studies, this means we have only one rating for each rater x item cell. Tukey's test in this case provides a 1-DF test for assessing any deviation from additivity, which is a common assumption when interpreting a main effect in two-factor models. Here is a [tutorial](http://www.plantsciences.ucdavis.edu/agr205/Supplemental%20Handouts/Tukey_NonAdd_Handout_PDF.pdf) describing how it works.
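For readers who want to compute it directly, here is a sketch of the classical 1-DF statistic (my own implementation of the standard formula; please verify against the tutorial above before relying on it):

```python
import numpy as np

def tukey_nonadditivity_F(Y):
    # Tukey's 1-DF nonadditivity statistic for an items x raters matrix
    # with one observation per cell:
    #   SS_nonadd = (sum_ij a_i b_j y_ij)^2 / (sum_i a_i^2 * sum_j b_j^2)
    # where a_i, b_j are row/column effects (deviations from the grand mean).
    I, J = Y.shape
    grand = Y.mean()
    a = Y.mean(axis=1) - grand
    b = Y.mean(axis=0) - grand
    resid = Y - grand - a[:, None] - b[None, :]
    ss_resid = (resid ** 2).sum()
    ss_nonadd = (a[:, None] * b[None, :] * Y).sum() ** 2 \
        / ((a ** 2).sum() * (b ** 2).sum())
    df_rem = (I - 1) * (J - 1) - 1
    return ss_nonadd / ((ss_resid - ss_nonadd) / df_rem)
```

The statistic is compared with an F distribution on 1 and (I − 1)(J − 1) − 1 degrees of freedom.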
I must admit I never used it when computing ICC, and I spent some time trying to reproduce Dave Garson's results with R. This led me to the following papers, which showed that Tukey's nonadditivity test might not be the "best" test to use, as it will fail to recover a true interaction effect (e.g., where some raters exhibit an opposite rating behavior compared to the rest of the raters) when there's no main effect of the target of the ratings (e.g., marks given to items):
- Lahey, M.A., Downey, R.G., and Saal, F.E. (1983). Intraclass Correlations: There's More There Than Meets the Eye. Psychological Bulletin, 93(3), 586-595.
- Johnson, D.E. and Graybill, F.A. (1972). An analysis of a two-way model with interaction and no replication. Journal of the American Statistical Association, 67, 862-868.
- Hegemann, V. and Johnson, D.E. (1976). The power of two tests for nonadditivity. Journal of the American Statistical Association, 71(356), 945-948.
(I'm very sorry but I couldn't find ungated PDF version of those papers. The first one is really a must-read one.)
About your particular design: you considered raters as fixed effects (hence the use of Shrout and Fleiss's type 3 ICC, i.e. the mixed-model approach). In this case,
Lahey et al. (1) stated that you face a situation of nonorthogonal interaction components (i.e., the interaction is not independent of the other effects) and a biased estimate of the rating effect -- but this is for the case where you have a single observation per cell (ICC(3,1)). With multiple ratings per item, estimating ICC(3,k) requires the "assumption of nonsignificance of the interaction. In this case, the ANOVA effects are neither theoretically nor mathematically independent, and without adequate justification, the assumption of no interaction is very tenuous."
In other words, such an interaction test aims at offering you diagnostic information.
My opinion is that you can go on with your ICC, but be sure to check that (a) there's a significant effect for the target of ratings (otherwise, it would mean the reliability of measurements is low), and (b) no rater systematically deviates from the others' ratings (this can be done graphically, or based on the residuals of your ANOVA model).
---
More technical details are given below.
The alternative test that is proposed is called the characteristic root test of the interaction (2,3). Consider a multiplicative interaction model of the form (here, as an effect model, that is we use parameters that summarize deviations from the grand mean):
$$\mu_{ij}=\mu + \tau_i + \beta_j + \lambda\alpha_i\gamma_j + \varepsilon_{ij}$$
with $\tau$ ($i=1,\dots,t$) the effect due to targets/items, $\beta$ ($j=1,\dots,b$) the effect of raters, $\alpha\gamma$ the interaction targets x raters, and the usual assumptions for the distribution of errors and parameters constraints. We can compute the largest characteristic root of $Z'Z$ or $ZZ'$, where $Z=z_{ij}=y_{ij}-y_{i\cdot}-y_{\cdot j}+y_{\cdot\cdot}$ is the $t \times b$ matrix of residuals from an additive model.
The test then relies on the idea of using $\lambda_1/\text{RSS}$ as a test statistic ($H_0:\, \lambda=0$) where $\lambda_1$ is the largest nonzero characteristic root of $ZZ'$ (or $Z'Z$), and RSS equals the residual sum of squares from an additive model (2).
| null | CC BY-SA 4.0 | null | 2011-04-30T09:23:06.453 | 2020-11-16T23:00:43.997 | 2020-11-16T23:00:43.997 | 930 | 930 | null |
10173 | 2 | null | 10169 | 5 | null | I would approach this as a hypothesis test (but then I am a Bayesian, and we always tend to do things a bit differently). I take it that by homogeneous you mean homogeneous in the "way" which did not have its totals fixed.
If we index the cell counts as $n_{ijk}$ for $i=1,\dots,I$ for the first way, and $j=1,\dots,J$ for the second way, and $k=1,\dots,K$ for the third way. This gives a total of $IJK$ cells in your contingency table. The "saturated model" is to assume that each cell is a multinomial distributed variable, with an individual long run frequency, so
$$p(n_{111},\dots,n_{IJK}|\theta_{111},\dots,\theta_{IJK})=n_{\bullet\bullet\bullet}!\prod_{i=1}^{I}\prod_{j=1}^{J}\prod_{k=1}^{K}\frac{\theta_{ijk}^{n_{ijk}}}{n_{ijk}!}\propto\prod_{i=1}^{I}\prod_{j=1}^{J}\prod_{k=1}^{K}\theta_{ijk}^{n_{ijk}}$$
If a row total had been fixed, then we would get:
$$p(n_{111},\dots,n_{IJK}|\theta_{111},\dots,\theta_{IJK})=\prod_{i=1}^{I}n_{i\bullet\bullet}!\prod_{j=1}^{J}\prod_{k=1}^{K}\frac{\theta_{ijk}^{n_{ijk}}}{n_{ijk}!}\propto\prod_{i=1}^{I}\prod_{j=1}^{J}\prod_{k=1}^{K}\theta_{ijk}^{n_{ijk}}$$
If a row and column totals had been fixed, then we would get:
$$p(n_{111},\dots,n_{IJK}|\theta_{111},\dots,\theta_{IJK})=\prod_{i=1}^{I}n_{i\bullet\bullet}!\prod_{j=1}^{J}n_{\bullet j\bullet}!\prod_{k=1}^{K}\frac{\theta_{ijk}^{n_{ijk}}}{n_{ijk}!}\propto\prod_{i=1}^{I}\prod_{j=1}^{J}\prod_{k=1}^{K}\theta_{ijk}^{n_{ijk}}$$
This shows that whether the row totals are fixed or random, is not relevant to inference about the $\theta_{ijk}$ parameters - as this will simply change the factorial in the denominator. This is called the "likelihood principle". However it is relevant if you are going to be predicting a new set of counts sampled in the same way that you sampled your data. This is usually how the standard statistician will think though (in terms of "if I did this experiment again...").
But for a hypothesis which is not sharp, such as two parameters being equal, but not restricted to be equal to anything the "prior predictive" distribution is what matters (at least from a Bayesian perspective) - and this certainly depends on the sample design! This is quite interesting for the "likelihood principle" is a bit subtle - your interest must be in the particular values of the parameters and not whether two parameters are equal. It is quite easy to see that the prior predictive distribution will depend on the sample design.
My answer to [this question](https://stats.stackexchange.com/questions/9685/how-to-carry-out-multiple-post-hoc-chi-square-tests-on-a-2-x-3-table/9721#9721) should be easily adapted to your case. I can provide more details if you are uncertain how to adapt it to your case.
You can also use a poisson GLM with a log-link function to do this test, and it amounts to testing that a certain interaction term in the glm is zero, so you can use the Pearson residuals from the output of the glm to do your chi-square test - you can also use the deviance table.
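As a quick illustration of the classical route (my addition; the counts below are made up, not the table from the question), SciPy's `chi2_contingency` accepts a multi-way array and tests mutual independence of the three factors:

```python
import numpy as np
from scipy.stats import chi2_contingency

# hypothetical 2 (gender) x 3 (treatment) x 2 (response) counts
table = np.array([[[10, 40], [8, 45], [12, 50]],
                  [[5, 47], [6, 48], [4, 45]]])
chi2, p, dof, expected = chi2_contingency(table)
```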
UPDATE
Having denied the likelihood principle here is actually incorrect on my part, for making inference on the $\theta_{ijk}$ parameters. This is easily seen by first computing odds ratios and then renormalising - the part related to the sampling design will cancel out. What an ass I made of myself, talking about "subtle" and all that! To show this more explicitly, I will write out $A\equiv A(n_{111},\dots,n_{IJK})$ as the constant which depends on how the experiment was done and which totals were fixed and which were random (independent of the $\theta_{ijk}$ parameters). I will show the final answer does not depend on $A$. So we we have:
$$p(n_{111},\dots,n_{IJK}|\theta_{111},\dots,\theta_{IJK})=A\prod_{i=1}^{I}\prod_{j=1}^{J}\prod_{k=1}^{K}\theta_{ijk}^{n_{ijk}}$$
For the specific problem I will let $i$ denote gender, $j$ denote treatment, and $k$ denote the response, so we have $I=2,J=3,K=2$. Now I am going to assume that these are exhaustive classes. This can be re-phrased as restricting the domain of generality to those units which could be classified into this table - and not as "something else" which isn't in the table (e.g. a fourth treatment). This places some restrictions on the values of $\theta_{ijk}$, namely that they sum to 1:
$$\sum_{i=1}^{2}\sum_{j=1}^{3}\sum_{k=1}^{2}\theta_{ijk}=1$$
And we have a hypothesis to consider:
$$H_{0}:\text{Response does not differentiate between treatments and gender}$$
Or in mathematical terms
$$H_{0}:\theta_{i_{1}j_{1}k}=\theta_{i_{2}j_{2}k}\;\;\;\forall i_{1},j_{1},i_{2},j_{2},k$$
Now this basically means that there is only one parameter in the table $\theta$, and we have an integral to do (letting $D\equiv n_{111},\dots,n_{IJK}$ and $I$ denote the prior information):
$$p(H_{0}|D,I)=\frac{p(H_{0}|I)p(D|H_{0},I)}{p(D|I)}=\frac{p(H_{0}|I)}{p(D|I)}\int_{0}^{1}p(\theta|H_{0},I)p(D|\theta,H_{0},I)d\theta$$
$$=\frac{p(H_{0}|I)}{p(D|I)}A\int_{0}^{1}p(\theta|H_{0},I)\prod_{i=1}^{I}\prod_{j=1}^{J}\theta^{n_{ij1}}(1-\theta)^{n_{ij2}}\,d\theta$$
$$=\frac{p(H_{0}|I)}{p(D|I)}A\int_{0}^{1}p(\theta|H_{0},I)\theta^{n_{\bullet\bullet 1}}(1-\theta)^{n_{\bullet\bullet 2}}d\theta$$
Now nothing in the hypothesis says what the actual rate is, so $p(\theta|H_{0},I)=p(\theta|I)$ - this must depend on the prior information. A conservative choice is the uniform one; this will be adopted for all priors, and we have:
$$p(H_{0}|D,I)=\frac{p(H_{0}|I)}{p(D|I)}A \frac{\Gamma(n_{\bullet\bullet 1}+1)\Gamma(n_{\bullet\bullet 2}+1)}{\Gamma(n_{\bullet\bullet\bullet}+2)}
=\frac{p(H_{0}|I)}{p(D|I)}A \frac{\Gamma(46)\Gamma(276)}{\Gamma(322)}$$
And it looks like $A$ is relevant, but we have yet to calculate $P(D|I)$. In order to do this, we must think about what the alternatives to $H_0$ are. These are fairly obvious: that the response differs by gender but not by treatment; by treatment but not by gender; or by both gender and treatment:
$$H_{1T}:\theta_{ij_{1}k}=\theta_{ij_{2}k}\;\;\;\forall i,j_{1},j_{2},k$$
$$H_{1G}:\theta_{i_{1}jk}=\theta_{i_{2}jk}\;\;\;\forall i_{1},i_{2},j,k$$
$$H_{2}:\theta_{i_{1}j_{1}k}\neq\theta_{i_{2}j_{2}k}\;\;\;\forall i_{1},j_{1},i_{2},j_{2},k$$
These could be further broken down into statements about the individual cells, but this is not necessary. As before we will set uniform priors for the unrestricted parameters, and for $H_{1T}$ we get an integral
$$\int_{\sum_{ik}\theta_{ik}=1}\prod_{i,k=1}^{2}\theta_{ik}^{n_{i\bullet k}}\prod_{i,k=1}^{2}d\theta_{ik}=\frac{\Gamma(n_{1\bullet 1}+1)\Gamma(n_{1\bullet 2}+1)\Gamma(n_{2\bullet 1}+1)\Gamma(n_{2\bullet 2}+1)}{\Gamma(n_{\bullet\bullet\bullet}+4)}$$
Now if you are like me, you will be able to see the pattern. The probabilities of each are written below:
$$p(H_{0}|D,I)=\frac{p(H_{0}|I)}{p(D|I)}A \frac{\Gamma(n_{\bullet\bullet 1}+1)\Gamma(n_{\bullet\bullet 2}+1)}{\Gamma(n_{\bullet\bullet\bullet}+2)}$$
$$p(H_{1T}|D,I)=\frac{p(H_{1T}|I)}{p(D|I)}A \frac{\prod_{i,k}\Gamma(n_{i\bullet k}+1)}{\Gamma(n_{\bullet\bullet\bullet}+4)}$$
$$p(H_{1G}|D,I)=\frac{p(H_{1G}|I)}{p(D|I)}A \frac{\prod_{j,k}\Gamma(n_{\bullet jk}+1)}{\Gamma(n_{\bullet\bullet\bullet}+6)}$$
$$p(H_{2}|D,I)=\frac{p(H_{2}|I)}{p(D|I)}A \frac{\prod_{i,j,k}\Gamma(n_{ijk}+1)}{\Gamma(n_{\bullet\bullet\bullet}+12)}$$
Note that all of these probabilities contain the factor $\frac{A}{p(D|I)}$, so if we were to take odds ratios it would simply cancel out. That is, neither $A$ nor $P(D|I)$ can change the relative relationships between the four hypotheses - thus both are irrelevant for answering this question. It does not matter whether you had fixed designs or random ones - it only matters that the likelihood function has the form I wrote at the start. So I will give my results in odds-ratio form, with $H_0$ as the denominator in the odds. If we further assume that all hypotheses are equally likely a priori, the prior odds drop out of the odds ratios as well. To go from odds ratios to probabilities is easy:
$$P(H_0|D,I)=\left(\sum_{h} \frac{P(H_h|D,I)}{P(H_0|D,I)}\right)^{-1}$$
And the results are given below; they show that you would have to be crazy to reject $H_{0}$ in favour of the alternatives (the posterior probability of $H_0$ begins with about 60 nines before its first non-9 decimal digit). I also did a quick $\chi^{2}$ test, which gave $\chi^{2}=5.49$ on $5$ degrees of freedom, for a p-value of 0.36 (note that this statistic is inflated because some cells have counts below 10, so the fit is even better than the $\chi^{2}$ test indicates).
$$
\begin{array}{c|c|c}
\text{Hypothesis} & \text{prior odds vs }H_{0} & \text{posterior odds vs }H_{0} \\ \hline
H_{0} & 1 & 1 \\
H_{1T} & 1 & 5\times 10^{-61} \\
H_{1G} & 1 & 9\times 10^{-147} \\
H_{2} & 1 & 4\times 10^{-214} \\
\end{array}
$$
| null | CC BY-SA 3.0 | null | 2011-04-30T09:41:13.747 | 2011-06-26T01:11:47.643 | 2017-04-13T12:44:27.570 | -1 | 2392 | null |
10174 | 2 | null | 10155 | 2 | null | It shouldn't happen if you do the Taylor series expansion well - I'd suggest starting it at different initial values. A good choice is to set the intercept equal to the logit of the total proportion in your sample, and all other betas to zero. So you have $p_{i}=\frac{y_{i}}{n_{i}}$ as the observed proportions for each unit. Just set
$$\beta_0=\operatorname{logit}\left(\frac{\overline{y}}{\overline{n}}\right)$$
and all other betas equal to zero as your starting values. This should stop the 0s and 1s giving you problems.
Another way to stabilise your results is the good old (+1) and (+2) rule, which is similar to ridging in ordinary regression. To do this you regress
$$\tilde{p}_{i}=\frac{y_{i}+1}{n_{i}+2}$$
On X directly using ols regression (i.e. no iterations). This is shown to be a generalised MLE in [this paper](http://www.samsi.info/sites/default/files/tr2007-08.pdf)
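A quick numeric sketch of both suggestions, using made-up grouped binomial data (the counts and covariate below are hypothetical, and the one-predictor OLS is done with the closed-form slope/intercept):

```python
import math

# Hypothetical grouped binomial data: y successes out of n trials per unit.
y = [0, 2, 5, 9, 10]
n = [10, 10, 10, 10, 10]
x = [0.0, 1.0, 2.0, 3.0, 4.0]

# Stabilised starting values for the IRLS iterations: intercept at the
# logit of the overall proportion, every other coefficient at zero.
p_bar = sum(y) / sum(n)
beta_start = [math.log(p_bar / (1 - p_bar)), 0.0]

# The (+1)/(+2) rule: shrink each proportion away from the 0/1 boundary,
# then run a single pass of OLS (no iterations at all).
p_tilde = [(yi + 1) / (ni + 2) for yi, ni in zip(y, n)]
x_bar = sum(x) / len(x)
p_bar_t = sum(p_tilde) / len(p_tilde)
slope = sum((xi - x_bar) * (pi - p_bar_t) for xi, pi in zip(x, p_tilde)) \
        / sum((xi - x_bar) ** 2 for xi in x)
intercept = p_bar_t - slope * x_bar
```

Note that the shrunken proportions never hit 0 or 1 exactly, which is what keeps the fit stable.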
| null | CC BY-SA 3.0 | null | 2011-04-30T10:54:03.897 | 2011-04-30T10:54:03.897 | null | null | 2392 | null |
10175 | 2 | null | 10142 | 1 | null | I would look into a few special functions, which continually "pop-up" in statistics - often in disguised forms:
- the confluent hypergeometric function $_{1}F_{1}(a;b;z)$. Heaps of functions are special cases of this one, such as erf, the incomplete gamma function, and the Bessel functions.
- the incomplete gamma function (and regularised incomplete gamma function)
- the incomplete beta function (and regularised incomplete beta function)
- Gaussian hypergeometric function $_{2}F_{1}(a;b;c;z)$
The incomplete gamma and incomplete beta functions (the second and third items above) are the CDFs of the chi-square and t distributions for suitable parameter values. To get quantiles I would invert these by using the table backwards (interpolating), rather than directly looking for an inverse function. It is likely that the error in interpolation (for sufficiently close values) is less than the error in numerically evaluating a complicated inverse function.
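As an illustration, the chi-square case needs nothing more than the regularised lower incomplete gamma function (computed here from its power series) plus a univariate solver; the t-table works the same way with the regularised incomplete beta function. This is a sketch, not production-quality numerics:

```python
import math

def gammainc_lower_reg(s, x, tol=1e-12):
    """Regularised lower incomplete gamma P(s, x), via its power series."""
    if x <= 0:
        return 0.0
    term = 1.0 / s
    total = term
    k = 1
    while term > tol * total:
        term *= x / (s + k)
        total += term
        k += 1
    return total * math.exp(-x + s * math.log(x) - math.lgamma(s))

def chi2_cdf(x, df):
    # The chi-square CDF is P(df/2, x/2).
    return gammainc_lower_reg(df / 2.0, x / 2.0)

def chi2_quantile(p, df):
    """Invert the chi-square CDF by bisection; the bracket [0, 1000]
    is fine for small-to-moderate df."""
    lo, hi = 0.0, 1000.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if chi2_cdf(mid, df) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For example, `chi2_quantile(0.95, 1)` should land very close to the tabulated critical value 3.841.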
| null | CC BY-SA 3.0 | null | 2011-04-30T11:18:54.993 | 2011-04-30T11:18:54.993 | null | null | 2392 | null |
10176 | 2 | null | 8570 | 3 | null | Optimization, by Kenneth Lange (Springer, 2004), [reviewed](http://pubs.amstat.org/doi/abs/10.1198/jasa.2005.s47) in JASA by Russell Steele. It's a good textbook to pair with Gentle's Matrix Algebra for an introductory course on matrix calculus and optimization, like the one by [Jan de Leeuw](https://public.me.com/jdeleeuw) (courses/202B).
| null | CC BY-SA 3.0 | null | 2011-04-30T12:02:50.277 | 2011-04-30T12:02:50.277 | null | null | 930 | null |
10177 | 1 | 10178 | null | 3 | 1286 | Is there a function that does smart date parsing in R?
I know the `strptime`/`as.POSIXct`/`as.POSIXlt` commands, but they require a date format string, or they throw the error "character string is not in a standard unambiguous format". This happens even when I pass a string with more than enough information to be parseable, like "Fri Apr 29 16:43:20 GMT 2011".
It would be really nice to not bother with reverse-engineering the format every time I import dates from a new source. Is there code that already does this?
Thanks!
| Smart date parsing in R? | CC BY-SA 3.0 | null | 2011-04-30T13:56:28.013 | 2011-04-30T15:52:11.647 | 2011-04-30T15:52:11.647 | null | 4110 | [
"r"
] |
10178 | 2 | null | 10177 | 5 | null | Try "lubridate" package, from CRAN, install in the usual way. Might help!
| null | CC BY-SA 3.0 | null | 2011-04-30T15:09:15.863 | 2011-04-30T15:09:15.863 | null | null | 1549 | null |
10179 | 2 | null | 10177 | 1 | null | I don't know of any, and I'm not sure how it would work. Does it use the first entry, figure a template, then use that for the rest of the entries? Does it parse each entry individually, so that there is no template and each entry can be different?
In the latter case, I'd worry a bit that it would be too flexible. Maybe I'm not thinking clearly, but the reverse-engineering and fixed-template method would seem to catch more errors and would help me to be more aware of my data.
(I was about to recommend lubridate, which doesn't do what you want, but does make many date tasks easier, and while I was typing, Spacedman beat me to the punch.)
| null | CC BY-SA 3.0 | null | 2011-04-30T15:12:02.103 | 2011-04-30T15:12:02.103 | null | null | 1764 | null |
10180 | 1 | null | null | 3 | 684 | I work quite often with Google's Website Optimizer, essentially it allows you to make small changes to a site to determine if they have an effect on the ratio of clicks to a web page to sales. The stats look like this:
```
Combination 1: 500 clicks 10 sales
Combination 2: 498 clicks 11 sales
Combination 3: 503 clicks 15 sales
```
Now obviously that part makes sense to me, but then it gives a value called "Probability of being better" which is based (among other factors) on the sample size (see also [this question](https://stats.stackexchange.com/questions/9735/how-does-a-frequentist-calculate-the-chance-that-group-a-beats-group-b-regarding)). So if a particular experiment has less data it might be rated as having a 13% chance of being better, whereas with a larger dataset it might be rated as having a 90% chance.
Obviously a smaller sample size is more prone to swings in variance, but I'm curious to know whether there is a formula to determine when enough data has been collected so that a high probability of being better can be calculated (which seems to be equivalent to a small overlap of the corresponding confidence intervals, and hence a high probability of rejecting the null hypothesis).
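My rough mental model of how such a "probability of being better" could be computed (surely not Google's exact method; this is just my assumption of the idea) is Monte Carlo sampling from independent Beta posteriors over each combination's conversion rate; rerunning this with the counts scaled up shows how the probabilities sharpen as data accumulates:

```python
import random

clicks = {"combo 1": 500, "combo 2": 498, "combo 3": 503}
sales = {"combo 1": 10, "combo 2": 11, "combo 3": 15}

def prob_being_better(clicks, sales, draws=20000, seed=0):
    """Monte-Carlo estimate of P(each combination has the highest
    conversion rate), using independent Beta(sales+1, clicks-sales+1)
    posteriors (i.e. uniform priors). Purely illustrative."""
    rng = random.Random(seed)
    wins = dict.fromkeys(clicks, 0)
    for _ in range(draws):
        rates = {k: rng.betavariate(sales[k] + 1, clicks[k] - sales[k] + 1)
                 for k in clicks}
        wins[max(rates, key=rates.get)] += 1
    return {k: w / draws for k, w in wins.items()}

p = prob_being_better(clicks, sales)
```

With these counts combination 3 wins most of the draws but far from all of them, which matches the intuition that more data is needed before its advantage becomes near certain.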
| How to calculate sample size for a one-sided test on a rxs contigency table | CC BY-SA 3.0 | null | 2011-04-30T15:50:33.363 | 2011-05-03T11:18:21.667 | 2017-04-13T12:44:52.660 | -1 | 4412 | [
"probability",
"hypothesis-testing",
"ab-test"
] |
10182 | 1 | 11732 | null | 12 | 22032 | I'm a little confused regarding the intraclass correlation coefficient and one-way ANOVA. As I understand it, both tell you how similar observations within a group are, relative to observations in other groups.
Could someone explain this a little better, and perhaps explain the situation(s) in which each method is more advantageous?
| Intraclass correlation coefficient vs. F-test (one-way ANOVA)? | CC BY-SA 3.0 | null | 2011-04-30T19:02:45.603 | 2019-02-17T01:36:16.990 | 2017-04-20T20:17:33.313 | 11887 | 4301 | [
"anova",
"psychometrics",
"reliability",
"intraclass-correlation"
] |
10184 | 2 | null | 10145 | 3 | null | Just on the first of your questions: probit or logit?
In practice it usually makes little difference. You will get different parameter estimates from the two methods, but that is because the parameters mean different things. When you then use these to model, the differences will then largely disappear. The logistic density distribution is slightly leptokurtic (a sharper peak and fatter tails) compared with the normal distribution, and being comfortable with odds I personally find it slightly easier to explain logit methods, but you need not worry too much whichever you choose.
You can find some discussion in [this lecture](http://www.iasri.res.in/ebook/EBADAT/6-Other%20Useful%20Techniques/5-Logit%20and%20Probit%20Analysis%20Lecture.pdf).
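To see how small the difference is, one can compare the logistic CDF, rescaled by the usual factor of about 1.7, with the standard normal CDF; the maximum vertical gap is about one percentage point (a quick sketch, not part of the cited lecture):

```python
import math

def logistic_cdf(x):
    return 1.0 / (1.0 + math.exp(-x))

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Rescaling the logistic argument by ~1.70 makes it track the standard
# normal CDF closely, which is why logit and probit fits rarely disagree
# in practice (the coefficients just live on different scales).
grid = [i / 100.0 for i in range(-400, 401)]
max_gap = max(abs(logistic_cdf(1.70 * z) - normal_cdf(z)) for z in grid)
```

The gap only matters in the far tails, where the logistic's fatter tails show up.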
| null | CC BY-SA 3.0 | null | 2011-04-30T22:24:09.737 | 2011-04-30T22:24:09.737 | null | null | 2958 | null |
10185 | 1 | null | null | 5 | 6067 | I'm working on a not-too-fancy Bayesian model in R and JAGS. The goal is to isolate coder errors in a content analysis task. Code and output are given below.
The larger question is how to go about debugging JAGS. (I assume that the same advice would hold for BUGS as well.) What am I supposed to make of an error like "Invalid initial values" when there are nearly a dozen different initial values?
Here's my R code:
```
library(rjags)
library(R2jags)
#Load the data
toy_data <- read.csv("toy_data.csv")
#Data dimensions (K must be defined before the rescaling step below)
N <- nrow( toy_data ) #Number of document codings
D <- max(toy_data$docid) #Number of documents
I <- max(toy_data$coderid) #Number of coders
K <- dim(toy_data)[2]-2 #Number of attributes
#Rescale data
rescaled_data <- toy_data[,c(3:(2+K))]
for( k in c(1:K) ){
col <- rescaled_data[,c(k)]
rescaled_data[,c(k)] <- (col-min(col))/(max(col)-min(col))
}
codes <- as.matrix(rescaled_data)
doc_ids <- toy_data$docid
coder_ids <- toy_data$coderid
#Package info for jags
jags.data <- list( "doc_ids", "coder_ids", "codes", "N", "D", "I", "K" )
jags.params <- c("z","mu","sigma","sigma_i","sigma_k","mu_dk","alpha_k","beta_k","alpha_i","beta_i")
jags.inits <- list(
"z" <- matrix(rnorm(N*K),nrow=N,ncol=K),
"mu" <- runif(1)*10,
"sigma" <- rgamma(1,10),
"sigma_i" <- rgamma(I,10),
"sigma_k" <- rgamma(K,10),
"mu_dk" <- as.matrix(rnorm(D*K),nrow=D,ncol=K),
"alpha_k" <- rgamma(1,10),
"alpha_i" <- rgamma(1,10),
"alpha_i" <- rgamma(1,10),
"beta_i" <- rgamma(1,10)
)
#Fit the model
jagsfit <- jags(
model.file="coder_model.txt",
data=jags.data,
inits=jags.inits,
jags.params,
n.iter=5000,
)
```
Here's the JAGS model:
```
model {
for( n in 1:N ){ #Loop over codings
for( k in 1:K ){ #Loop over attributes
#d <- doc_ids[n] #Get document index
#i <- coder_ids[n] #Get code index
codes[n,k] ~ dbern(p[n,k])
logit(p[n,k]) <- z[n,k]
z[n,k] ~ dnorm( mu_dk[doc_ids[n],k], sigma_k[k]*(1+sigma_i[coder_ids[n]]) )
}
}
for( d in 1:D ){
for( k in 1:K ){
mu_dk[d,k] ~ dnorm( mu, sigma )
}
}
for( k in 1:K ){ #Loop over attributes
sigma_k[k] ~ dgamma( alpha_k, beta_k )
}
for( i in 1:I ){ #Loop over coders
sigma_i[i] ~ dgamma( alpha_i, beta_i )
}
#Noninformative priors over alphas and betas
mu ~ dnorm( 0, 10 )
sigma ~ dgamma(10,8)
alpha_k ~ dgamma(10,8)
beta_k ~ dgamma(10,8)
alpha_i ~ dgamma(10,8)
beta_i ~ dgamma(10,8)
}
```
Here's the data:
```
"docid","coderid","Answer.1","Answer.2"
1,1,3,3
1,2,4,1
1,3,7,2
2,1,3,3
2,2,4,4
2,4,3,1
3,1,3,3
3,2,4,3
3,3,3,4
4,4,5,1
4,5,6,2
4,2,4,3
5,2,5,4
5,3,3,1
5,4,7,2
6,1,3,3
6,5,4,1
6,2,5,2
```
And here's the R output:
```
Compiling model graph
Resolving undeclared variables
Allocating nodes
Graph Size: 352
Error in jags.model(model.file, data = data, inits = inits, n.chains = n.chains, :
Invalid initial values
```
| Debugging JAGS and BUGS | CC BY-SA 3.0 | null | 2011-04-30T23:14:00.433 | 2011-06-14T07:47:41.560 | null | null | 4110 | [
"r",
"bugs",
"jags"
] |
10186 | 1 | 10197 | null | 4 | 700 | The setup is that I'm trying to understand how a computer program works, so I'm capturing some numbers every time a function is called. For example, I might be capturing the number of branches taken and the number of branches incorrectly predicted. During the course of running the program, a particular function might get called twenty or thirty thousand times. I have a fair amount of control over how many times a given function is called.
My initial plan was to calculate a mean and standard deviation using those 20-30k data points as my sample. However, my (computer science) professor suggested that I needed to rerun the experiment several times in order to calculate a standard deviation. So I would run my script five or six times, calculating a mean each time. Then I would use those five or six values to calculate a standard deviation. That doesn't make a lot of sense to me - it seems to me that if I want to understand how a given function is behaving, I should treat the data from each function call as a data point, and that the professor's method more or less throws away a lot of data.
However, I'm thinking that I may be making an unwarranted assumption that one run of the program is like another. In this case, I guess that running and presenting both sets of numbers would be good, as I would capture how the functions behave at each call, and also see whether behavior differs across runs.
So getting around to the question, is my initial impulse to use each function call as my data set correct/better, or should I calculate the data both ways and present both numbers?
| Repeating an experiment - more valuable than sample size? | CC BY-SA 3.0 | null | 2011-04-30T23:40:13.477 | 2011-05-01T09:10:52.967 | null | null | 527 | [
"self-study",
"experiment-design"
] |
10188 | 2 | null | 10186 | 5 | null | I assume that you analyse a stochastic algorithm. Repeated runs of the program may have different sources of variability than repeated function evaluations within a single run.
An example: the program may initialise a random number generator with the same seed in every run, which will make the results of repeated runs identical, but the function evaluations pseudo-random.
| null | CC BY-SA 3.0 | null | 2011-05-01T00:28:02.547 | 2011-05-01T00:28:02.547 | null | null | 3911 | null |
10189 | 2 | null | 10185 | 1 | null | You could try running the same model in WinBugs or OpenBugs. They generally give slightly more detailed error messages, so you might get something more useful in this specific case, too.
| null | CC BY-SA 3.0 | null | 2011-05-01T00:32:48.660 | 2011-05-01T00:32:48.660 | null | null | 3911 | null |
10190 | 2 | null | 10186 | 6 | null | GaBorgulya said it quite well (+1), but I'd like to present a more conceptual perspective too.
If GaBorgulya's assumptions are correct, you can treat the random seed as a blocking factor. Imagine that each run of the program is a person and that for each person you measure blood pressure over time.
You could increase your sample size by taking more measurements on each person (like looking at all those thousands of data points) or by looking at more persons (like running the program more times). The former will tell you about the error within persons, and the latter will tell you about error between persons.
Because you are running a computer program and your program can be repeated if the seed is saved, there is no error within the "person" (the particular run of the program); whatever output you get is the true, complete output for that particular seed. You would like to know how much noise is created by the random seed.
On the other hand, your ability to adjust how many times a particular function is called blurs my definitions of error within and error between. It may help to discuss in a bit more detail what those functions are doing.
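A tiny simulation of the blood-pressure analogy (all numbers here are invented for illustration): each "run" draws a run-level offset, playing the role of the seed, plus call-level noise, and the two variance components are then estimated separately.

```python
import random

def run_program(seed, n_calls=1000):
    """One 'run': a run-level offset (the seed's effect) shifts every
    call-level measurement within that run."""
    rng = random.Random(seed)
    run_offset = rng.gauss(0.0, 2.0)  # between-run ("between persons") noise
    return [10.0 + run_offset + rng.gauss(0.0, 1.0) for _ in range(n_calls)]

runs = [run_program(seed) for seed in range(6)]  # six repeated runs
run_means = [sum(r) / len(r) for r in runs]

# Between-run variance: spread of the per-run means.
grand_mean = sum(run_means) / len(run_means)
between = sum((m - grand_mean) ** 2 for m in run_means) / (len(run_means) - 1)

# Within-run variance: spread of the calls around their own run's mean.
within = sum(
    sum((x - m) ** 2 for x in r) / (len(r) - 1)
    for r, m in zip(runs, run_means)
) / len(runs)
```

Thousands of calls within one run pin down `within` very precisely, but no amount of them tells you anything about `between` - for that you need more runs.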
| null | CC BY-SA 3.0 | null | 2011-05-01T01:58:23.390 | 2011-05-01T01:58:23.390 | null | null | 3874 | null |
10191 | 1 | 10201 | null | 5 | 1496 | It is said that if the plots of the hypothetical responses are not parallel, but crossed, there is interaction. Suppose we have two factors. Is it possible that the plots cross but we do not have interaction? That is more reasonable when the plots are close to each other.
I noticed the converse is true. We may have interaction even if neither the curves of factor A on factor B nor factor B on factor A intersect. In a book I read this happens when the interaction is removable, which means that there is another cognate independent variable.
| Counterexample for interaction and parallel curves? | CC BY-SA 3.0 | null | 2011-05-01T04:02:02.753 | 2011-05-01T10:25:13.650 | 2011-05-01T08:05:59.720 | 930 | 3454 | [
"anova",
"interaction"
] |
10192 | 1 | 10215 | null | 8 | 1019 | Wikipedia says that the name of the concept comes from physics, but I cannot find any similarity between these two concepts.
| How to understand moments for a random variable? | CC BY-SA 3.0 | null | 2011-05-01T04:36:57.863 | 2020-03-20T02:24:21.957 | 2020-03-20T02:24:21.957 | 11887 | 4416 | [
"random-variable",
"moments"
] |
10193 | 1 | 10203 | null | 4 | 231 | I have a set of sessions and urls that have been accessed in each of these sessions and frequencies with which they have been accessed. I've put them in a matrix-like representation.
Imagine I have the following "Pageview matrix":
```
COLUMN HEADINGS
books placement resources br aca
```
Each row represents a session.
Here is an example of the records:
```
4 5 0 2 2
1 2 1 7 3
1 3 6 1 6
```
saved in a `.txt` file
Can I give this as an input to a k-means program and obtain clusters based on the highest frequency of occurrence? How do I use it?
If not k-means, what other cluster method can I use?
| Clustering elements by access counts in sessions | CC BY-SA 3.0 | null | 2011-05-01T05:24:17.993 | 2011-05-01T11:16:55.293 | 2011-05-01T11:16:55.293 | null | 4402 | [
"clustering"
] |
10194 | 2 | null | 10193 | 5 | null | Let me try to answer your questions in parts:
1) You can do k-means cluster analysis using the dataset. But how you use the result of the cluster analysis will depend on the problem you are trying to solve with it. I have used cluster analysis on clickstream data, though my dataset was a bit different from yours: I took variables (columns) like pageviews, time on page, bounce rate, etc., with URLs as the row variable. The idea was to segment the URLs into different clusters and then try to find the meaningful attributes of those clusters.
2) There are mainly 2 types of cluster analysis: hierarchical and partition clustering. K-means falls under partition clustering. The [book](http://rads.stackoverflow.com/amzn/click/0387781889) gives details of the different kinds of clustering techniques available.
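To make the first point concrete, here is a bare-bones Lloyd's-algorithm sketch using the pageview matrix from the question (three sessions is of course far too few for real clustering; this only shows the mechanics, not a production clusterer):

```python
import random

# Pageview matrix from the question: one row per session, one column per
# URL (books, placement, resources, br, aca).
sessions = [
    [4, 5, 0, 2, 2],
    [1, 2, 1, 7, 3],
    [1, 3, 6, 1, 6],
]

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: assign to nearest centroid, recompute."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
            groups[nearest].append(p)
        centroids = [
            [sum(col) / len(g) for col in zip(*g)] if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids, groups

centroids, clusters = kmeans(sessions, k=2)
```

In practice you would normalise each row (e.g. to proportions of the session's total pageviews) before clustering, so that heavy browsers do not dominate the distances.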
| null | CC BY-SA 3.0 | null | 2011-05-01T06:28:12.233 | 2011-05-01T06:28:12.233 | null | null | 4278 | null |
10195 | 2 | null | 10192 | 2 | null | Moments give information about the shape of a statistical distribution. We also judge one dataset against another based on their moments (e.g. the difference between the means (first moments) of the two datasets).
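The physics analogy can be made concrete: if the probabilities are treated as point masses on a line, the first moment is the centre of mass and the second central moment is the moment of inertia about it. A toy discrete distribution (the numbers here are made up):

```python
# Treat the probability mass function as point masses on a line.
xs = [1.0, 2.0, 3.0, 4.0]  # positions (values of the random variable)
ps = [0.1, 0.2, 0.3, 0.4]  # masses (probabilities, summing to 1)

mean = sum(p * x for p, x in zip(ps, xs))                # centre of mass
var = sum(p * (x - mean) ** 2 for p, x in zip(ps, xs))   # moment of inertia
```

The formulas are literally those used in mechanics, with probability playing the role of mass.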
| null | CC BY-SA 3.0 | null | 2011-05-01T06:36:16.847 | 2011-05-01T06:36:16.847 | null | null | 4278 | null |
10196 | 2 | null | 10162 | 110 | null | Here is a simple recipe that may help you get started writing code and testing ideas...
Let's assume you have monthly data recorded over several years, so you have 36 values. Let's also assume that you only care about predicting one month (value) in advance.
- Exploratory data analysis: Apply some of the traditional time series analysis methods to estimate the lag dependence in the data (e.g. auto-correlation and partial auto-correlation plots, transformations, differencing).
Let's say that you find a given month's value is correlated with the past three months' data but not much so beyond that.
- Partition your data into training and validation sets: Take the first 24 points as your training values and the remaining points as the validation set.
- Create the neural network layout: You'll take the past three month's values as inputs and you want to predict the next month's value. So, you need a neural network with an input layer containing three nodes and an output layer containing one node. You should probably have a hidden layer with at least a couple of nodes. Unfortunately, picking the number of hidden layers, and their respective number of nodes, is not something for which there are clear guidelines. I'd start small, like 3:2:1.
- Create the training patterns: Each training pattern will be four values, with the first three corresponding to the input nodes and the last one defining what the correct value is for the output node. For example, if your training data are values $$x_1,x_2,\dots,x_{24}$$ then $$\text{pattern 1}: x_1,x_2,x_3,x_4$$ $$\text{pattern 2}: x_2,x_3,x_4,x_5$$ $$\dots$$ $$\text{pattern 21}: x_{21},x_{22},x_{23},x_{24}$$
- Train the neural network on these patterns
- Test the network on the validation set (months 25-36): Here you will pass in the three values the neural network needs for the input layer and see what the output node gets set to. So, to see how well the trained neural network can predict month 32's value you'll pass in values for months 29, 30, and 31
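The pattern-creation and train/validation steps above can be sketched in a few lines; the series below is invented, and the network training itself is left to whatever library you use:

```python
# Hypothetical series of 36 monthly values (trend plus a seasonal bump).
series = [100 + 0.5 * t + (t % 12) for t in range(36)]

LAG = 3  # a given month depends on the previous three months

def make_patterns(values, lag=LAG):
    """Sliding-window patterns: `lag` input values plus one target value."""
    return [(values[i:i + lag], values[i + lag])
            for i in range(len(values) - lag)]

# First 24 points for training; keep LAG points of context so that the
# first validation target is month 25.
train, validation = series[:24], series[24 - LAG:]
train_patterns = make_patterns(train)    # 21 patterns, as in the recipe
val_patterns = make_patterns(validation) # targets are months 25..36
```

Each `(inputs, target)` pair is exactly one of the patterns in the list above; to predict month 32 you would feed in the inputs for months 29, 30, and 31.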
This recipe is obviously high level and you may scratch your head at first when trying to map your context into different software libraries/programs. But, hopefully this sketches out the main point: you need to create training patterns that reasonably contain the correlation structure of the series you are trying to forecast. And whether you do the forecasting with a neural network or an ARIMA model, the exploratory work to determine what that structure is is often the most time consuming and difficult part.
In my experience, neural networks can provide great classification and forecasting functionality but setting them up can be time consuming. In the example above, you may find that 21 training patterns is not enough; different input data transformations lead to better/worse forecasts; varying the number of hidden layers and hidden layer nodes greatly affects forecasts; etc.
I highly recommend looking at the [neural_forecasting](http://www.neural-forecasting-competition.com/index.htm) website, which contains tons of information on neural network forecasting competitions. The [Motivations](http://www.neural-forecasting-competition.com/motivation.htm) page is especially useful.
| null | CC BY-SA 3.0 | null | 2011-05-01T06:36:43.740 | 2011-05-01T06:36:43.740 | null | null | 1080 | null |
10197 | 2 | null | 10186 | 7 | null | For every experiment, one has explicit or implicit assumptions about all the boundary conditions that should not influence the experiment's outcome. When you run your program, maybe time of day should have no effect, it should not matter who presses the keyboard key to start the program, same for CPU architecture etc. You assume these things are unimportant because you have a well-established theory about what goes on when a computer runs a program.
However, some of these assumptions could be wrong because the theory is incomplete, only partly valid, or because you missed a connection between an unimportant boundary condition and a known influence (think of the [email only within 500 miles anecdote](http://www.ibiblio.org/harris/500milemail.html), also check its FAQ). Repeating your experiment allows you to vary at least some of the boundary conditions and verify empirically that they indeed have no effect.
Quite often, replications reveal that a specific combination of circumstances that were not considered important can influence an experiment a great deal; see e.g. this [New Yorker: the truth wears off](http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer) article.
| null | CC BY-SA 3.0 | null | 2011-05-01T09:10:52.967 | 2011-05-01T09:10:52.967 | null | null | 1909 | null |
10198 | 2 | null | 10191 | 2 | null | Yes, if the true (hypothetical) responses are not parallel there is interaction. Not parallel, however, does not necessarily mean that the segments cross. When you investigate interaction the sampling error may lead to different results in the sample than in the population, so it's useful to calculate confidence intervals or credibility intervals for the extent of the possible interaction. The extent of the interaction depends on the scales of the variables, in special cases (removable interaction) there is a transformation when the effects are additive and there is no interaction.
| null | CC BY-SA 3.0 | null | 2011-05-01T09:15:10.617 | 2011-05-01T09:20:14.450 | 2011-05-01T09:20:14.450 | 3911 | 3911 | null |
10199 | 2 | null | 10191 | 3 | null | To me it seems like you (and many books probably) are confusing the empirical level with the theoretical level: The [null hypothesis of an interaction effect](https://stats.stackexchange.com/questions/5617/what-is-the-null-hypothesis-for-interaction-in-a-two-way-anova/5622#5622) in a two-way ANOVA is defined on the theoretical level using the cell expected values $\mu_{jk}$ (and not response values): there is an interaction if (and only if) the lines connecting the $\mu_{jk}$ in a diagram are exactly parallel. Note that "not parallel" is not the same as "lines cross".
On the empirical side, we do not have the $\mu_{jk}$, but can only plot their estimates, the cell means $M_{jk}$. Even if the null hypothesis is true, their connecting lines will almost never be exactly parallel due to measurement error. Conversely, even if the alternative hypothesis is true, they could be almost parallel for the same reason. A measure for the degree to which deviation from parallelity of the $M_{jk}$ indicates interaction is the ANOVA's corresponding F-value.
| null | CC BY-SA 3.0 | null | 2011-05-01T09:28:25.543 | 2011-05-01T09:28:25.543 | 2017-04-13T12:44:24.677 | -1 | 1909 | null |
10200 | 1 | null | null | 8 | 251 | I remember hearing an argument that if a certain population of people has a mean IQ of 110 rather than the typical 100, it will have far more people of IQ 150 than a similarly-sized group from the general population.
For example, if both groups' IQs are normally distributed and have the same standard deviation of 15, we expect about 0.2% of people from the high-IQ group to be over 150, and 0.02% from the general population. There are about ten times as many "genius-level" people from the high-IQ group. This is despite there being only three times as many people of IQ 120 or more in the high-IQ group, and only twice as many people of IQ 110 or more.
Therefore, when we meet someone with IQ over 150, we can strongly suspect they're from the high-IQ group, even though on average that group does not have a huge advantage.
Is there a special name for this effect?
Similarly, if two populations have the same mean, but population A has a higher variance than population B, there will be more data points above a certain high threshold from population A. (I have heard this argument given to explain the high proportion of men as opposed to women in the highest tiers of mathematical achievement. It was claimed that men and women had the same mean ability, but men had higher variance.) Does this effect also have a name?
I apologize for the somewhat controversial nature of the examples. I'm only interested in the names for the effects, not in IQ and mathematical ability in different sorts of groups. I simply cited these examples because that's the context under which I heard these phenomena described.
| Is there a name for the high sensitivity of frequency of extreme data points to the mean of a normal distribution? | CC BY-SA 3.0 | null | 2011-05-01T09:38:50.040 | 2016-10-17T22:49:52.380 | null | null | 2665 | [
"normal-distribution",
"outliers",
"terminology"
] |
10201 | 2 | null | 10191 | 2 | null | This depends on what is meant by "interaction". If the data have no noise - the plot is literally just two parallel lines, then there is certainly no interaction, we know this deductively, without any need for statistics. Secondly if the lines are not parallel, then we know deductively that there is interaction. So there is no counter example if there is no noise.
But if there is noise (or error), then there is basically more than one possible place that the "noiseless" or "true" lines could be. It is also possible for the true lines to be parallel, yet if the noise is big enough and you get an "unlucky" sample of noise, the noisy lines will cross. Just how unlucky depends on how "non-parallel" the two "true lines" are and how many units have been sampled. Consider the OLS case, where the lines are generated by:
$$y_{i}=x_{i}^{T}\beta_{true}+n_{i}$$
Where $\beta_{true}$ is a 4-D vector with the intercept for group 1, the offset for group 2, the slope for group 1 and the offset to the slope for group 2
Now you fit an OLS to the observed data, and you get
$$\beta_{OLS}=(X^{T}X)^{-1}X^{T}Y=(X^{T}X)^{-1}X^{T}(X\beta_{true}+n)=\beta_{true}+(X^{T}X)^{-1}X^{T}n$$
So by a careful choice of the noise we can make the OLS estimates basically anything. To avoid inverting a $4\times 4$ matrix, I will specialise to the case where both intercepts are equal to zero, and we have
$$y_{ij}=\beta_{1}x_{ij}+\beta_{2}x_{i2}I(j=2)$$
And then
$$(X^{T}X)^{-1}=\frac{1}{\left(\sum_{i}x_{i1}^{2}\right)\left(\sum_{i}x_{i2}^{2}\right)}\begin{pmatrix} \sum_{i}x_{i1}^{2}+\sum_{i}x_{i2}^{2} & -\sum_{i}x_{i2}^{2} \\ -\sum_{i}x_{i2}^{2} & \sum_{i}x_{i2}^{2}
\end{pmatrix}$$
$$=\frac{1}{\sum_{i}x_{i1}^{2}}\begin{pmatrix} 1 & -1 \\ -1 & 1\end{pmatrix}
+\frac{1}{\sum_{i}x_{i2}^{2}}\begin{pmatrix} 1 & 0 \\ 0 & 0\end{pmatrix}$$
Now for $X^{T}n$ we have:
$$X^{T}n=\sum_{i}x_{i2}n_{i2}\begin{pmatrix} 1\\1\end{pmatrix}
+\sum_{i}x_{i1}n_{i1}\begin{pmatrix} 1\\0\end{pmatrix}$$
And so the total error from the regression is:
$$\frac{\sum_{i}x_{i2}n_{i2}+\sum_{i}x_{i1}n_{i1}}{\sum_{i}x_{i2}^{2}}\begin{pmatrix} 1\\0\end{pmatrix}
+\frac{\sum_{i}x_{i1}n_{i1}}{\sum_{i}x_{i1}^{2}}\begin{pmatrix} 1\\-1\end{pmatrix}$$
Now if the true slopes are parallel, so that $\beta_{2,true}=0$, then the OLS estimates will be:
$$\hat{\beta}_{1}=\beta_{1,true}+\frac{\sum_{i}x_{i2}n_{i2}+\sum_{i}x_{i1}n_{i1}}{\sum_{i}x_{i2}^{2}}+\frac{\sum_{i}x_{i1}n_{i1}}{\sum_{i}x_{i1}^{2}}$$
$$\hat{\beta}_{2}=-\frac{\sum_{i}x_{i1}n_{i1}}{\sum_{i}x_{i1}^{2}}$$
Now this shows that the OLS estimate can indeed lead to erroneous interactions: just choose the "true" noise such that it is highly correlated with $x_{i1}$; essentially, you need to violate one of the assumptions of OLS, namely homoscedasticity of the noise. So if you generate data according to:
$$y_{i1}=x_{i1}(\beta_{1,true}+n_{i1})$$
$$y_{i2}=x_{i2}\beta_{1,true}+n_{i2}$$
If you then fit an OLS model of $y$ on $x$ with an interaction, you will find a significant result, even though the true betas are the same. The plots will cross because of the fanning in the first group.
One example data set (true beta is 2 and noise was generated from standard normal). You get a t-statistic above 10 for the interaction effect:
$$\begin{array}{c|c|c}
group & y & x \\
1 & 1.282817715 & 1 \\
1 & 2.026032115 & 2 \\
1 & 5.9786882 & 3 \\
1 & 22.1588319 & 7 \\
2 & 16.28587668 & 9 \\
2 & 15.12007527 & 6 \\
2 & 9.566273403 & 5 \\
\end{array}$$
| null | CC BY-SA 3.0 | null | 2011-05-01T09:55:03.493 | 2011-05-01T10:25:13.650 | 2011-05-01T10:25:13.650 | 2392 | 2392 | null |
10202 | 2 | null | 10200 | 1 | null | This is not an answer to the question, but may be of interest. The question gave three ratios of number of people in the mean=110 and mean=100 populations having IQ above a given threshold (ratio(IQ=150)≈10, ratio(120)≈3, ratio(110)≈2). The R code below plots the ratio as a function of IQ.
```
IQ = seq(0, 200, length.out=100)
c100 = pnorm(IQ, mean=100, sd=15)
c110 = pnorm(IQ, mean=110, sd=15)
ratio = (1 - c110) / (1 - c100)
plot(ratio ~ IQ); abline(h=c(0, 10, 20, 30, 40, 50, 60))
```

| null | CC BY-SA 3.0 | null | 2011-05-01T10:21:02.827 | 2011-05-01T10:21:02.827 | null | null | 3911 | null |
10203 | 2 | null | 10193 | 2 | null | So, taking your comment into consideration, you want to create clusters that group together entries which were frequently co-accessed?
If so, well, you need to decide how to measure this co-access, i.e. transform it into a dissimilarity, and this is a fairly nontrivial task.
A simple measure is to count, for each pair of entries, the sessions in which they were both accessed and divide by the count of sessions in which either of them was accessed. The resulting matrix will be a similarity matrix, so you can for instance subtract each cell from one and feed the result to the clustering algorithm of your choice.
Of course this measure does not take the access counts within a session into account, so you'll probably need something more complex; one idea (a simple extension of the trivial one) may be to sum, over the sessions in which a particular pair co-occurs, the smaller of the two access counts, and divide the whole by the total number of accesses to both entries.
However, you should try to design this measure yourself, taking the specifics of this particular problem into account.
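As a rough sketch of the count-based idea, assuming a hypothetical session-by-entry matrix of access counts (both the data and the exact normalisation here are illustrative assumptions):

```python
import numpy as np

# rows are sessions, columns are entries; A[s, e] = access count of entry e in session s
A = np.array([[3, 1, 0],
              [2, 2, 1],
              [0, 4, 4],
              [1, 0, 2]], dtype=float)

n_entries = A.shape[1]
sim = np.zeros((n_entries, n_entries))
for i in range(n_entries):
    for j in range(n_entries):
        # sum of the smaller of the two counts across sessions,
        # normalised by the total accesses to both entries
        co = np.minimum(A[:, i], A[:, j]).sum()
        sim[i, j] = co / (A[:, i].sum() + A[:, j].sum())

dissim = 1.0 - sim   # this can be fed to a clustering algorithm
```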
| null | CC BY-SA 3.0 | null | 2011-05-01T11:15:14.063 | 2011-05-01T11:15:14.063 | null | null | null | null |
10204 | 1 | null | null | 3 | 22674 | I have measured the frequency of a certain behavior in 15 individuals.
I would like to create two groups based on the amount of this behaviour that was observed (i.e., a group exhibiting high levels of the behaviour and a group exhibiting low levels of the behaviour).
I want to see whether this new binary variable predicts a dependent variable that I have measured.
| How to split a numeric variable into a binary low-high variable | CC BY-SA 3.0 | null | 2011-05-01T11:20:48.103 | 2023-03-30T06:59:33.480 | 2011-05-02T07:54:45.727 | 183 | 4420 | [
"distributions",
"data-transformation"
] |
10206 | 1 | 10212 | null | 6 | 999 | Cumbersome technical assumptions (e.g., mixing properties) are used in the literature to prove Central Limit Theorems for dependent sequences. I sketched a proof that does not require any of these technical assumptions. Can you help me figure out what is wrong with this proof? The proof is at: [http://www.statlect.com/central_limit_theorem_for_correlated_sequences.htm](http://www.statlect.com/central_limit_theorem_for_correlated_sequences.htm). Thanks in advance to all those who will be so generous and patient to read it.
| Conditions for Central Limit Theorem for dependent sequences | CC BY-SA 3.0 | null | 2011-05-01T13:06:32.093 | 2011-05-02T02:42:42.530 | 2011-05-02T02:42:42.530 | 2970 | 4422 | [
"probability",
"stochastic-processes",
"central-limit-theorem",
"stationarity"
] |
10207 | 1 | null | null | 3 | 4457 | How to find the number of runs in the following
- aaaabbabbbaabba
- bbaaaaaabbbbaaaaaaa
| How to find the number of runs? | CC BY-SA 3.0 | null | 2011-05-01T13:45:33.557 | 2011-05-01T17:17:39.257 | 2011-05-01T17:17:39.257 | 919 | 4423 | [
"self-study",
"algorithms"
] |
10208 | 2 | null | 10207 | 2 | null | Quoting [http://en.wikipedia.org/wiki/Wald-Wolfowitz_runs_test](http://en.wikipedia.org/wiki/Wald-Wolfowitz_runs_test):
>
A "run" of a sequence is a maximal non-empty segment of the sequence consisting of adjacent equal elements. For example, the sequence "++++−−−+++−−++++++−−−−" consists of six runs, three of which consist of +'s and the others of −'s.
In your case
```
1) aaaabbabbbaabba    2) bbaaaaaabbbbaaaaaaa
   1   2 34  5 6 7       1 2     3   4
```
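For what it's worth, the same counting can be done programmatically; a small Python sketch using `itertools.groupby`:

```python
from itertools import groupby

def count_runs(s):
    # each maximal block of adjacent equal characters is one run
    return sum(1 for _key, _block in groupby(s))

print(count_runs("aaaabbabbbaabba"))      # 7
print(count_runs("bbaaaaaabbbbaaaaaaa"))  # 4
```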
| null | CC BY-SA 3.0 | null | 2011-05-01T14:08:24.853 | 2011-05-01T14:08:24.853 | null | null | 3911 | null |
10209 | 2 | null | 10207 | 4 | null | If you want do this in R, check out the `rle` function.
For example:
```
> ext <- function(x) {strsplit(x, "")[[1]]}
> x <- ext("aaaabbabbbaabba")
> y <- ext("bbaaaaaabbbbaaaaaaa")
> length(rle(x)$lengths)
[1] 7
> length(rle(y)$lengths)
[1] 4
```
See also this related question on [Stack Overflow on counting runs in R](https://stackoverflow.com/questions/1502910/how-can-i-count-runs-in-r)
| null | CC BY-SA 3.0 | null | 2011-05-01T14:35:51.453 | 2011-05-01T14:44:20.403 | 2017-05-23T12:39:26.150 | -1 | 183 | null |
10210 | 1 | null | null | 7 | 1457 | Let's say you want to cluster some objects, say documents, or sentences, or images.
On the technical side, you first represent these objects somehow so that you can calculate distances between them, and then you feed those representations to some clustering algorithm.
Externally, however, you just want to group similar (in some sense -- and that's where things become pretty vague for me) objects together. For example, in case of sentences we want for clusters to contain sentences about similar topic/concept; we feel that sentences "oh look at this pic of a cute lolcat" and "facebook revealed new shiny feature tonight" should be in different clusters.
What are the usual approaches for measuring this "external" quality of clustering? I.e. we want to measure how well our clustering procedure groups initial objects (sentences, images); we're not interested in internal measures (like average cluster radius or cluster sparseness), since those measures deal with objects' representations, not with real objects. Meaning, the chosen representation may be awful, and even if the internal measures are great, externally we'll end up with clusters that are complete junk from our vague, subjective, "some sense"-ish point of view.
P.S. Having limited knowledge of the clustering domain, I suspect I may be asking about a really obvious thing, or my terminology may sound strange to clustering experts. If so, please advise me on what I should read on the subject.
P.P.S. Just in case, I asked the very same question on Quora: [http://www.quora.com/How-to-evaluate-external-quality-of-clustering](http://www.quora.com/How-to-evaluate-external-quality-of-clustering)
| How to evaluate "external" quality of clustering? | CC BY-SA 3.0 | null | 2011-05-01T16:08:12.030 | 2014-03-05T15:25:35.493 | 2011-05-01T17:16:20.670 | 930 | 4425 | [
"clustering",
"data-mining"
] |
10211 | 1 | 12065 | null | 14 | 3565 | I am running a structural equation model (SEM) in Amos 18. I was looking for 100 participants for my experiment (used loosely), which was deemed to be probably not enough to conduct successful SEM. I've been told repeatedly that SEM (along with EFA, CFA) is a "large sample" statistical procedure. Long story short, I didn't make it to 100 participants (what a surprise!), and only have 42 after excluding two problematic data points. Out of interest, I tried the model anyway, and to my surprise, it seemed to fit very well! CFI >.95, RMSEA < .09, SRMR <.08.
The model is not simple, in fact, I would say it is relatively complex. I have two latent variables, one with two observed and the other with 5 observed variables. I also have four additional observed variables in the model. There are numerous relationships between the variables, indirect and direct, with some variables being endogenous to four others, as an example.
I am somewhat new to SEM; however, two individuals that I know who are quite familiar with SEM tell me that as long as the fit indices are good, the effects are interpretable (as long as they are significant) and there is nothing significantly "wrong" with the model. I know some fit indices are biased for or against small samples in terms of suggesting good fit, but the three that I mentioned earlier seem fine, and I believe not similarly biased. To test for indirect effects I am using bootstrapping (2000 samples or so), 90 percent bias-corrected confidence intervals, Monte Carlo. An additional note is that I am running three different SEMs for three different conditions.
I have two questions that I would like some of you to consider and please reply to if you have something to contribute:
- Are there any significant weaknesses to my model that are not demonstrated by the fit indices? The small sample will be highlighted as a weakness of the study, but I am left wondering if there is some huge statistical problem that I am completely oblivious to. I plan on getting another 10-20 participants in the future, but this will still leave me with a relatively small sample for such analyses.
- Are there any problems with my use of bootstrapping given my small sample, or the context in which I am using it?
I hope these questions are not too "basic" for this forum. I have read a number of chapters on SEM and related matters, but I find people are very dispersed in terms of opinions in this area!
Cheers
| Complications of having a very small sample in a structural equation model | CC BY-SA 3.0 | null | 2011-05-01T17:28:38.533 | 2013-04-01T20:06:29.500 | 2011-05-01T20:17:22.737 | null | 3262 | [
"modeling",
"sample-size",
"bootstrap",
"structural-equation-modeling"
] |
10212 | 2 | null | 10206 | 7 | null | Additional conditions are needed. (A near-proof of this fact is that many incredibly smart individuals have been thinking deeply about these issues for over 100 years. It is highly unlikely that something like this would have escaped all of them.)
First of all, note that the formula for $V$ that you give is part of the conclusion of the associated central limit theorem. See, for example, Theorem 7.6 on pages 416–417 of R. Durrett, Probability: Theory and Examples, 3rd. ed., which based on your link, you appear to have access to.
At any rate, here is a simple counterexample to your claim.
>
Let $X_0$ equal $+1$ with probability $1/2$ and $-1$ with probability $1/2$. Define $X_n = (-1)^n X_0$. Then $\{X_n\}$ is a stationary ergodic process with mean 0 and variance 1, but the Central Limit Theorem fails.
The properties of stationarity and ergodicity should be pretty easy to see as we can construct this process by defining a function over the states of a two-state Markov chain with stationary probability measure $\pi(x) = 1/2$ for $x \in \{0,1\}$.
Observe that this process yields a sequence of the form $-X_0, X_0, -X_0, \ldots$ and so
- Even without appealing to any notions about ergodicity, it is easy to see that $\newcommand{\e}{\mathbb{E}}\bar{X}_n \to \e X_0 = 0$ almost surely, and,
- $\newcommand{\Var}{\mathbb{V}\mathrm{ar}}\Var(S_n) = 0$ if $n$ is even and $1$ if $n$ is odd.
This already is enough to conclude that there is no way that any rescaling of $S_n$ can make it converge in distribution to a normal random variable. In fact, for every function $f$ such that $f(n) \to \infty$, $S_n / f(n) \to 0$ almost surely no matter how slowly $f$ diverges.
Note also that this example should make it clear that the formula for $V$ is a conclusion of the theorem. Indeed, for the example above,
$$
V_n = 1 + 2 \sum_{i = 1}^n \e X_0 X_i = \left\{
\begin{array}{rl}
-1, & n \text{ odd}, \\
1, & n \text{ even},
\end{array}
\right.
$$
which, of course, (a) makes no sense as a variance, (b) does not have a limit, and (c) is not asymptotically equivalent to $\Var(S_n)$. (NB: I use a slightly different form for $V_n$ than you do where mine matches that given in Durrett.)
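A quick numerical check of the alternating variance of $S_n$ (a Python sketch; the number of simulated copies of $X_0$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.choice([-1.0, 1.0], size=100_000)  # many independent copies of X_0

# X_n = (-1)^n X_0, so the partial sums S_n telescope
s_even = sum((-1.0) ** k * x0 for k in range(1, 5))  # n = 4
s_odd = sum((-1.0) ** k * x0 for k in range(1, 6))   # n = 5

print(s_even.var(), s_odd.var())  # exactly 0, and approximately 1
```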
| null | CC BY-SA 3.0 | null | 2011-05-01T18:31:30.110 | 2011-05-01T18:48:08.857 | 2011-05-01T18:48:08.857 | 2970 | 2970 | null |
10213 | 1 | 10216 | null | 99 | 63483 | I'm doing some reading on topic modeling (with Latent Dirichlet Allocation) which makes use of Gibbs sampling. As a newbie in statistics (well, I know things like binomials, multinomials, priors, etc.), I find it difficult to grasp how Gibbs sampling works. Can someone please explain it in simple English and/or using simple examples? (If you are not familiar with topic modeling, any examples will do.)
| Can someone explain Gibbs sampling in very simple words? | CC BY-SA 4.0 | null | 2011-05-01T19:37:56.640 | 2019-06-20T01:59:21.610 | 2018-07-20T09:32:40.537 | 128677 | 4429 | [
"modeling",
"sampling",
"conditional-probability",
"gibbs"
] |
10214 | 2 | null | 10133 | 1 | null | I am a novice data miner as well, but may I suggest that exploratory data analysis is always a good first step? I would see if items can be assigned some sort of 'priority value' which can serve to predict how early they appear in the cart, as such a result may allow you to use simpler models. Something as simple as a linear regression on (#order in cart/#number of items in cart) for all carts possessing item X will give you an idea of whether this is possible. Suppose you find that a certain proportion of items always appear early, or later, and some seem to be completely random: this would guide you in your later model-building.
| null | CC BY-SA 3.0 | null | 2011-05-01T20:20:58.663 | 2011-05-01T20:20:58.663 | null | null | 3567 | null |
10215 | 2 | null | 10192 | 6 | null | If you have a linear rod, the center of gravity is the first moment (the expected value), and the moment of rotational inertia about the center of gravity is the variance. (A rod with centrally located mass will have less inertia than a rod with heavy concentrations of mass at the tips.)
| null | CC BY-SA 3.0 | null | 2011-05-01T20:24:10.717 | 2011-05-01T21:49:03.560 | 2011-05-01T21:49:03.560 | 3567 | 3567 | null |
10216 | 2 | null | 10213 | 189 | null | You are a dungeonmaster hosting Dungeons & Dragons and a player casts the 'Spell of Eldritch Chaotic Weather' (SECW). You've never heard of this spell before, but it turns out it is quite involved. The player hands you a dense book and says, 'the effect of this spell is that one of the events in this book occurs.' The book contains a whopping 1000 different effects, and what's more, the events have different 'relative probabilities.' The book tells you that the most likely event is 'fireball'; all the probabilities of the other events are described relative to the probability of 'fireball'; for example: on page 155, it says that 'duck storm' is half as likely as 'fireball.'
How are you, the Dungeon Master, to sample a random event from this book? Here's how you can do it:
The accept-reject algorithm:
1) Roll a d1000 to decide a 'candidate' event.
2) Suppose the candidate event is 44% as likely as the most likely event, 'fireball'. Then accept the candidate with probability 44%. (Roll a d100, and accept if the roll is 44 or lower. Otherwise, go back to step 1 until you accept an event.)
3) The accepted event is your random sample.
The accept-reject algorithm is guaranteed to sample from the distribution with the specified relative probabilities.
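A minimal sketch of this loop in Python; the two relative probabilities are made up, with 'duck storm' half as likely as 'fireball':

```python
import random

def accept_reject(rel_probs):
    """Sample an index, given probabilities relative to the most likely event."""
    m = max(rel_probs)
    while True:
        i = random.randrange(len(rel_probs))    # step 1: uniform candidate
        if random.random() < rel_probs[i] / m:  # step 2: accept w.p. p_i / p_max
            return i                            # step 3: accepted candidate

random.seed(0)
counts = [0, 0]
for _ in range(20000):
    counts[accept_reject([1.0, 0.5])] += 1      # ['fireball', 'duck storm']
print(counts[0] / counts[1])  # close to 2, as the relative probabilities dictate
```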
After much dice rolling you finally end up accepting a candidate: 'summon frog'. You breathe a sigh of relief as you now you can get back to the (routine in comparison) business of handling the battle between the troll-orcs and dragon-elves.
However, not to be outdone, another player decides to cast 'Level 2 arcane cyber-effect storm.' For this spell, two different random effects occur: a randomly generated attack, and a randomly generated character buff. The manual for this spell is so huge that it can only fit on a CD. The player boots it up and shows you a page. Your jaw drops: the entry for each attack is about as large as the manual for the previous spell, because it lists a relative probability for each possible accompanying buff:
>
'Cupric Blade'
The most likely buff accompanying this attack is 'Hotelling aura'
'Jackal Vision' is 33% as likely to accompany this attack as 'Hotelling aura'
'Toaster Ears' is 20% as likely to accompany this attack as 'Hotelling aura'
...
Similarly, the probability of a particular attack spell occurring depends on the probability of the buff occurring.
It would be justified to wonder if a proper probability distribution can even be defined given this information. Well, it turns out that if there is one, it is uniquely specified by the conditional probabilities given in the manual. But how to sample from it?
Luckily for you, the CD comes with an automated Gibbs sampler, because you would have to spend an eternity doing the following by hand.
Gibbs sampler algorithm
1) Choose an attack spell randomly
2) Use the accept-reject algorithm to choose the buff conditional on the attack
3) Forget the attack spell you chose in step 1.
Choose a new attack spell using the accept-reject algorithm conditional on the buff in step 2
4) Go to step 2, repeat forever (though usually 10000 iterations will be enough)
5) Whatever your algorithm has at the last iteration, is your sample.
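In code, the loop might look like the following Python sketch, with a made-up $2\times 2$ attack/buff weight table standing in for the manual; the inner conditional draws reuse the accept-reject idea:

```python
import random

random.seed(1)
# hypothetical unnormalised joint weights: rows = attacks, columns = buffs
joint = [[4.0, 1.0],
         [1.0, 4.0]]

def cond_sample(weights):
    # accept-reject draw from unnormalised conditional weights
    m = max(weights)
    while True:
        i = random.randrange(len(weights))
        if random.random() < weights[i] / m:
            return i

attack = random.randrange(2)  # step 1: arbitrary starting attack
counts = [[0, 0], [0, 0]]
for _ in range(20000):
    buff = cond_sample(joint[attack])                   # step 2: buff | attack
    attack = cond_sample([row[buff] for row in joint])  # step 3: attack | buff
    counts[attack][buff] += 1                           # record the current state
print(counts)  # diagonal cells come out roughly 4x the off-diagonal ones
```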
You see, in general, MCMC samplers are only asymptotically guaranteed to generate samples from a distribution with the specified conditional probabilities. But in many cases, MCMC samplers are the only practical solution available.
| null | CC BY-SA 4.0 | null | 2011-05-01T20:52:40.543 | 2019-06-20T01:59:21.610 | 2019-06-20T01:59:21.610 | 44269 | 3567 | null |
10217 | 2 | null | 10204 | 6 | null | Based on the post and the comments to date: If you want to create two groups based on a single variable, you are faced with an arbitrary choice. You can say that below x is "low" and at or above x is "high" but there is not going to be any statistical procedure (certainly not a significance test) that can make that determination for you. In this situation some people would draw a histogram and look for what seems like a "natural" dividing point, which might simply mean one that would be convincing or defensible to one's particular audience. Alternatively, one might divide so as to create two equal-sized groups. There is no right or wrong answer. But I question the need for dichotomization, for I suspect that whatever methods you plan to apply with two groups could be replaced by other methods at least as informative that preserve the original variable. For example, rather than dichotomizing and running a T-test using a dependent variable, why not correlate your independent and dependent variables, or create a scatterplot to show their relationship.
| null | CC BY-SA 3.0 | null | 2011-05-01T23:17:09.337 | 2011-05-01T23:17:09.337 | null | null | 2669 | null |
10218 | 2 | null | 10110 | 3 | null | [This video](http://www.youtube.com/watch?v=C7JQ7Rpwn2k) (especially the part starting at 23:20) describes the same problem you have with double integration, which amplifies low frequency noise to unbearable levels quickly. They solve the problem by sensor fusion, effectively using other sensors (like magnetic field sensors and gyroscopes) simultaneously to infer a more robust estimate of the acceleration coming from gravity alone and the acceleration coming from the movement of the sensor.
To help you with the drift from the double integration you could also try a particle filter to estimate the true position of the accelerometer over time.
There is an interesting [Tech Talk](http://www.youtube.com/watch?v=b6gPXKfJA5g) about a more robust version of this idea.
Perhaps you could also use characteristic points in your time series as a kind of position anchor, e.g. if you can infer with some confidence the times when the pivot is lowest (or highest) and just assume a fixed height over ground for these times. Then, instead of an initial value problem resulting in onesided double integration, you would have a boundary value problem, where you can additionally integrate backwards from the next anchor position. This reduces the time where errors can grow down to half of a period.
| null | CC BY-SA 3.0 | null | 2011-05-01T23:19:42.463 | 2011-05-01T23:30:01.290 | 2011-05-01T23:30:01.290 | 4360 | 4360 | null |
10219 | 2 | null | 421 | 5 | null | As a first introduction to the topic I liked [Data Analysis: A Bayesian Tutorial](http://rads.stackoverflow.com/amzn/click/0198568320).
For a deep and philosophical discussion of the underlying ideas of quantitative scientific reasoning, I recommend [Probability Theory: The Logic of Science](http://rads.stackoverflow.com/amzn/click/0521592712). This book does not serve as a good introduction, though. It's only recommended for people who want to know why Bayesian statistics is the way it is and/or are interested in a historical review of Bayesian statistics.
| null | CC BY-SA 3.0 | null | 2011-05-02T03:02:45.817 | 2011-05-02T04:24:27.573 | 2011-05-02T04:24:27.573 | 4360 | 4360 | null |
10220 | 1 | 10222 | null | 7 | 11452 | This may sound like a noob question, but I'm unable to find any 'good' resources/examples on the same. The basic question is this: most variables, depending on the problem, will follow certain types of distributions. Normal/Gaussian may not be the most appropriate one for capturing certain types of phenomena.
Although I'm quite familiar with various distributions from a mathematical viewpoint, I'm unable to understand some of them conceptually. E.g., the uniform distribution is when the occurrence of an event is equally likely over time, and the normal is when the occurrences 'tend' to be centered around the mean more often (like the number of defects in samples or the heights of citizens in a country, etc.); similarly for the triangular. I understand these easy ones, so to speak.
What type of distributions have you commonly encountered when using monte-carlo simulations? Examples would be helpful along with the rationale for choosing that distribution. Basically looking for a reference/pointer that would help me lay it out as a list for reference and understanding. I'd prefer a non-mathematical explanation since it'll be used for discussing with non-mathematical stakeholders to whom the monte-carlo simulations would be shown
- <"Distribution name"> : <"Most appropriate use">
I've heard of the power law but don't really know/understand what it is and how it could be used.
| Most suitable distributions for modeling Monte Carlo Simulations | CC BY-SA 3.0 | null | 2011-05-02T03:26:20.900 | 2013-11-17T12:38:09.913 | null | null | 4426 | [
"distributions",
"random-variable",
"monte-carlo"
] |
10221 | 2 | null | 10204 | 7 | null | Assuming you have a single predictor variable that represents frequency of behaviour, I would make the following points
### Should you split a numeric variable into high-low groups
I quote the following from one of my [blog posts on creating clusters](http://jeromyanglim.blogspot.com/2009/09/cluster-analysis-and-single-dominant.html), where I use the term "median split" as a prototypical example of converting a numeric variable into a binary high-low variable.
>
Many researchers have heard the advice to not form median splits (see Howell for a discussion), or other kinds of binary splits for that matter. The same arguments also tend to apply with other forms of abrupt grouping into a small number of factors.
Some arguments FOR running median splits are: 1) it allows you to do an ANOVA or t-test and compare group means; 2) group differences are easier to communicate to a lay audience; 3) it reflects the important distinction in the underlying continuous variable.
Some arguments AGAINST running median splits are: 1) you can always find an equivalent analysis that respects the continuous nature of the variable (e.g., regression); 2) when creating median splits, you lose a lot of information; 3) the cut-off tends to be relatively arbitrary and it varies between samples; 4) the resulting model based on a median split does not reflect the underlying nature of the variable; 5) in most cases a binary split will have less statistical power; 6) if the purpose is to communicate to a scientific audience, respecting the continuous nature of the variable is a necessary complexity.
From the above you can see that there are generally more reasons in favour of maintaining the continuous version of the variable. The two occasions where splits are tolerable are where it makes it easy to communicate findings to a lay audience and where the underlying effect of interest occurs in a stepwise fashion. In the case of the latter, the presence of a stepwise effect can be tested empirically; a quick look at a scatter plot should give some sense if there is a point where the effect changes dramatically. Likewise decisions based on test scores are often based on pass-fail kinds of categories, and there is often a concrete desire to draw inferences about these specific groups.
Also, check out [page 128 of Making Friends with Your Data](http://web.archive.org/web/20120412171759/http://www.unt.edu/rss/class/mike/5030/articles/makefriends.pdf) for further discussion.
In summary, my advice would be to run a correlation or a regression predicting your outcome variable from the continuous version of your predictor. You may or may not want to perform an order preserving transformation of your predictor depending on its distribution.
### Creating two groups based on numeric variable
Putting aside the issues raised above, if you decide that you still want to split your predictor variable into high-low groups, the following are some options
- Use statistical properties of your sample
  - Median split
  - Above or below the mean
  - Take bottom 25% and top 25% and throw out the middle
  - Take bottom third and top third and throw out the middle third
- Use accepted or externally validated cut-offs
  - e.g., medical diagnoses are often based on certain cut-offs on a continuous scale
- Use your own understanding of the phenomena to define a cut-off
- Examine a histogram or density plot and look for a natural split in the data (as mentioned by @rolando2)
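As a hypothetical illustration of the sample-based options above (Python sketch with made-up frequency data):

```python
import numpy as np

freq = np.array([3, 7, 2, 9, 5, 6, 1, 8, 4, 10, 2, 7, 5, 3, 6])  # made-up counts

median_split = freq >= np.median(freq)  # high/low around the median
mean_split = freq >= freq.mean()        # high/low around the mean
q25, q75 = np.quantile(freq, [0.25, 0.75])
tails_only = np.where(freq <= q25, "low",
             np.where(freq >= q75, "high", "drop"))  # keep only the extremes
print(median_split.sum(), q25, q75)
```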
| null | CC BY-SA 4.0 | null | 2011-05-02T04:47:47.903 | 2023-03-30T06:59:33.480 | 2023-03-30T06:59:33.480 | 805 | 183 | null |
10222 | 2 | null | 10220 | 4 | null | The books "Continuous Univariate Distributions", Vol. 1 and Vol. 2, by Johnson, Kotz and Balakrishnan (and there is a multivariate book too, I believe) are classical references, rich in mathematical properties as well as examples of the uses of the different distributions they treat.
If you want details on a specific class of distributions, Wikipedia is always a good place to start, see
[http://en.wikipedia.org/wiki/Power_law](http://en.wikipedia.org/wiki/Power_law)
The requested list is probably not easy to compile - the "most appropriate use" may be highly dependent upon context, but again the Wikipedia list of distributions
[http://en.wikipedia.org/wiki/List_of_probability_distributions](http://en.wikipedia.org/wiki/List_of_probability_distributions)
could be a place to start to find distributions appropriate for your project.
| null | CC BY-SA 3.0 | null | 2011-05-02T05:28:45.687 | 2011-05-02T05:28:45.687 | null | null | 4376 | null |
10223 | 1 | null | null | 3 | 859 | Does anyone know an approach to performing model selection in Weka through cross validation for regression problems?
As far as I can tell, the cross validation is implemented in Weka just to assess the performance of the classifier. I guess that calling Weka API from Java might solve the problem, but is there a GUI-based approach?
| Model selection in Weka through cross validation for regression problems | CC BY-SA 3.0 | null | 2011-05-02T06:06:46.337 | 2011-05-02T10:34:32.220 | 2011-05-02T10:34:32.220 | 183 | 976 | [
"regression",
"model-selection",
"cross-validation",
"weka"
] |
10224 | 1 | null | null | 3 | 1341 | I'm running experiments that record the time my algorithm takes to solve a set of problem instances on a particular benchmark. Each problem has an associated difficulty in the range [1, n]. Ideally these should be evenly distributed across the difficulty spectrum, but this is not the case: the problem sample I have is skewed toward the easier end of the spectrum.
To account for this, I have grouped the problems by difficulty, e.g. [1-10], [11-20], ... , [n-9, n]. Each interval usually contains at least 10 problems (usually more; 50+ is not uncommon) and I take the average time required to solve all problems in each interval. This gives me a clearer picture of how my algorithm performs on both easy and hard problems, with the caveat that the data is somewhat less reliable for the harder end of the spectrum.
First question: is this okay or are there some gotchas I haven't accounted for?
Next: For comparative purposes, I need to summarise performance on each benchmark as a single number. I am loath to simply take the average across all problems as this figure is skewed by too many easy problems. Which brings me to...
Second question: can I take an average of all interval averages instead?
| Taking the mean of a data set with a skewed distribution | CC BY-SA 3.0 | null | 2011-05-02T06:22:30.293 | 2011-05-02T15:28:22.937 | 2011-05-02T07:04:07.107 | 4431 | 4431 | [
"distributions",
"mean"
] |
10225 | 1 | 10227 | null | 14 | 68108 | If I have a matrix `M` of 15 columns, what is R syntax to extract a matrix `M1` consisting of 1,7,9,11,13 and 15 columns?
| Extracting multiple columns from a matrix in R | CC BY-SA 3.0 | null | 2011-05-02T06:53:09.437 | 2011-05-02T07:07:58.430 | 2011-05-02T07:06:05.500 | 183 | 4432 | [
"r",
"matrix"
] |
10226 | 2 | null | 6890 | 5 | null | Assuming you also have the raw data, you can use function heatmap(). It can take one or two dendrograms as input, if you want to avoid calculating the distances and clustering the objects again.
Let's first simulate some data:
```
set.seed(1)
dat<-matrix(ncol=4, nrow=10, data=rnorm(40))
```
Then cluster the rows and columns:
```
rd<-dist(dat)
rc<-hclust(rd)
cd<-dist(t(dat))
cc<-hclust(cd)
```
After this we have
1) the raw data (dat)
2) a distance matrix (rd) and a dendrogram (rc) for rows of the raw data matrix
3) a distance matrix (cd) and a dendrogram (cc) for columns of the raw data
Distance matrices are not actually needed for the further steps, but the raw data on which the clustering was performed, and the resulting dendrogram(s) are.
With the raw data these dendrograms can be used as input to the function heatmap(). If both row and column dendrograms are needed, use:
```
heatmap(dat, Rowv=as.dendrogram(rc), Colv=as.dendrogram(cc))
```
If only row or column dendrogram is needed, use NA as an input for either Rowv or Colv parameter in heatmap():
```
# Dendrogram for rows only
heatmap(dat, Rowv=as.dendrogram(rc), Colv=NA)
# Dendrogram for columns only
heatmap(dat, Rowv=NA, Colv=as.dendrogram(cc))
```
| null | CC BY-SA 3.0 | null | 2011-05-02T06:53:51.023 | 2011-05-02T09:05:12.570 | 2011-05-02T09:05:12.570 | 4433 | 4433 | null |
10227 | 2 | null | 10225 | 19 | null | Like this: `M[,c(1,7,9,11,13,15)]`
| null | CC BY-SA 3.0 | null | 2011-05-02T06:55:24.757 | 2011-05-02T07:07:58.430 | 2011-05-02T07:07:58.430 | 183 | 4257 | null |
10228 | 1 | 10229 | null | 7 | 13735 | I have a matrix `M` of float values, how to shuffle `M` line-wise?
| How to shuffle matrix data in R? | CC BY-SA 3.0 | null | 2011-05-02T07:50:49.067 | 2011-05-02T07:55:36.740 | null | null | 4432 | [
"r",
"matrix"
] |
10229 | 2 | null | 10228 | 6 | null | something like:
```
nr <- nrow(M)        # number of rows
M[sample.int(nr), ]  # randomly permute the rows
```
| null | CC BY-SA 3.0 | null | 2011-05-02T07:55:36.740 | 2011-05-02T07:55:36.740 | null | null | 4257 | null |
10230 | 1 | null | null | 2 | 2478 | I use kmeans for clustering a set of data. However, I have to specify the number of clusters. The problem is that sometimes I need 2 and other times I need 3 clusters.
- Is there a clustering algorithm that could incorporate that feature in it?
| Automating determination of number of clusters from a kmeans cluster analysis | CC BY-SA 3.0 | null | 2011-05-02T07:58:16.427 | 2012-11-15T00:32:17.267 | 2011-05-02T08:22:37.333 | 183 | 2721 | [
"clustering",
"data-mining",
"k-means"
] |
10231 | 2 | null | 10230 | 2 | null | Simplest solution: do both and then check which gives best results...
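The "do both and compare" idea can be made automatic with a standard criterion such as the mean silhouette width. Here is a minimal sketch (my own illustration in Python/NumPy, not part of the original answer; the simulated blobs and the deterministic farthest-point initialisation are assumptions):

```python
# Run a small k-means for k = 2 and k = 3 and keep the k with the larger
# mean silhouette width. Data are three simulated, well-separated blobs.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(40, 2)) for c in (0.0, 3.0, 6.0)])

def kmeans(X, k, iters=25):
    # deterministic farthest-point initialisation, then Lloyd's algorithm
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

def mean_silhouette(X, labels):
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)   # pairwise distances
    s = []
    for i, li in enumerate(labels):
        same = labels == li
        a = D[i, same].sum() / max(same.sum() - 1, 1)     # mean intra-cluster distance
        b = min(D[i, labels == lj].mean() for lj in set(labels) if lj != li)
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

scores = {k: mean_silhouette(X, kmeans(X, k)) for k in (2, 3)}
best_k = max(scores, key=scores.get)
```

For separated clusters the silhouette score is highest at the true number of groups, so `best_k` picks 3 here without the user specifying it in advance.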
| null | CC BY-SA 3.0 | null | 2011-05-02T08:00:12.460 | 2011-05-02T08:00:12.460 | null | null | 4257 | null |
10232 | 2 | null | 10220 | 3 | null | I personally worry about the philosophy of deciding on a distribution with certain properties first and then defining the parameters for the distribution.
My advice is normally to get the best data that you can first and then to process that data to find the best distribution and parameters to fit it. If you decide a distribution first to suit the input data then you are deciding on the way the data should behave before you have actually checked that it behaves how you have assumed in reality. A simple example would be a data set where you assume that all of the variations are due to measurement errors, in which case a normal distribution would probably be your chosen distribution. What you can often find though is that the data is not normally distributed and there is something more complex underlying the data you have.
One of the biggest problems in the modern use of Monte-Carlo analysis is the use of assumed distributions based on the best guesses of the users rather than actual data. If you don't actually have the data on which to estimate the input distributions then I would argue that there are better ways of looking at the problem than just going for Monte-Carlo analysis in the first instance.
| null | CC BY-SA 3.0 | null | 2011-05-02T08:40:58.573 | 2011-05-02T08:40:58.573 | null | null | 210 | null |
10233 | 2 | null | 6763 | 1 | null |
- As already stated (by @mpiktas), in order to do PCA, you need to transpose your data so that chemical batches are rows and "measurements" are columns.
You can then run a PCA on the data and plot the 80 chemical batches on axes derived from the first two components.
Here's an example on Quick-R of doing this in R.
- Also a small supplementary suggestion, you might want to have a look at Chernoff faces.
They present a face where each of your eight variables would represent a feature on the face. The size or shape of the feature indicates something about the variable. Flowing data has a tutorial in R with images.
| null | CC BY-SA 3.0 | null | 2011-05-02T10:00:17.367 | 2011-05-02T13:52:06.720 | 2011-05-02T13:52:06.720 | 183 | 183 | null |
10234 | 1 | 10235 | null | 38 | 14541 | What is the difference in meaning between the notation $P(z;d,w)$ and $P(z|d,w)$ which are commonly used in many books and papers?
| What is the difference between the vertical bar and semi-colon notations? | CC BY-SA 4.0 | null | 2011-05-02T10:16:09.873 | 2023-02-26T15:33:06.783 | 2023-02-26T15:33:06.783 | 296197 | 4290 | [
"probability",
"notation"
] |
10235 | 2 | null | 10234 | 16 | null | I believe the origin of this is the likelihood paradigm (though I have not checked the actual historical correctness of the below, it is a reasonable way of understanding how it came to be).
Let's say in a regression setting, you would have a distribution:
$$
p(Y | x, \beta)
$$
Which means: the distribution of $Y$ if you know (conditional on) the $x$ and $\beta$ values.
If you want to estimate the betas, you want to maximize the likelihood:
$$
L(\beta; y,x) = p(Y | x, \beta)
$$
Essentially, you are now looking at the expression $p(Y | x, \beta)$ as a function of the betas, but apart from that, there is no difference (for mathematically correct expressions that you can properly derive, this is a necessity --- although in practice no one bothers).
Then, in Bayesian settings, the difference between parameters and other variables soon fades, so people started to use both notations interchangeably.
So, in essence: there is no actual difference: they both indicate the conditional distribution of the thing on the left, conditional on the thing(s) on the right.
| null | CC BY-SA 4.0 | null | 2011-05-02T10:45:59.180 | 2020-09-01T13:37:57.513 | 2020-09-01T13:37:57.513 | 276503 | 4257 | null |
10236 | 1 | null | null | 15 | 9150 | I have a dataset in which the event rate is very low (40,000 out of $12\cdot10^5$).
I am applying logistic regression on this. I have had a discussion with someone where it came out that logistic regression would not give a good confusion matrix on such low-event-rate data. But because of the business problem and the way it has been defined, I can't increase the number of events from 40,000 to any larger number, though I agree that I can delete some of the nonevent population.
Please tell me your views on this, specifically:
- Does the accuracy of logistic regression depend on the event rate, or is there a minimum event rate that is recommended?
- Is there any special technique for low-event-rate data?
- Would deleting part of my nonevent population be good for the accuracy of my model?
I am new to statistical modeling so forgive my ignorance and please address any associated issues that I could think about.
Thanks,
| Applying logistic regression with low event rate | CC BY-SA 3.0 | null | 2011-05-02T11:19:01.220 | 2011-06-21T19:30:43.663 | 2011-05-02T11:44:44.540 | 3911 | 1763 | [
"logistic"
] |
10237 | 2 | null | 10224 | 2 | null | You have a hierarchy of measurements, the first level of multiple time measurements on problem number $i$ ($1\leq i \leq n$), the second level is multiple problems of the same difficulty group.
Level 1. The measured times follow a distribution. This distribution may be normal (if the run time is influenced by a large number of more or less independent factors), exponential (if the algorithm waits for a random event to occur), or something complicated (e.g. multimodal, where run time strongly depends on initial decisions). The average is useful in case of the normal and exponential distributions, but may not be useful in the complicated cases without a large number of runs on the same problem. To determine the distribution of run times it may be useful to (a) pick a couple of problems and measure the run time with a large number of repetitions, and (b) think through the mechanism and the details of the algorithm. You may find that a few repetitions are generally enough, or that many repetitions are needed and perhaps the median is a better statistic than the mean.
Level 2. The difficulties within a difficulty group are not the same, but you think that they are similar. The differences in run times between groups may be small or large. If the times within difficulty groups are close to each other compared to the differences between adjacent difficulty groups it may not be very important to find a perfect summary measure to characterise a difficulty group, mean will probably do. If, however, the differences between difficulty groups are small you probably want to have the best summary measure of the difficulty of groups. In this case again, the distribution of problem times within a difficulty group decides which method to use.
I generally advise against using “a single number”, because expressing the level of uncertainty is usually almost as important as finding the most likely values.
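To make the mean-versus-median point concrete, here is a small simulation (my own illustration, not part of the answer) using a right-skewed lognormal model for run times:

```python
# Simulated right-skewed "run times": lognormal(mu=0, sigma=1) has true
# median 1 but true mean e^{1/2} ~ 1.65, so the mean alone overstates the
# typical run. The distribution is an assumption for illustration only.
import random

random.seed(1)
times = sorted(random.lognormvariate(0, 1) for _ in range(10_000))
mean = sum(times) / len(times)
median = times[len(times) // 2]
# mean lands near 1.65, median near 1.0: for skewed data, report both
```

The larger the skew, the further the two summaries drift apart, which is exactly why a single number can mislead.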
| null | CC BY-SA 3.0 | null | 2011-05-02T11:26:14.357 | 2011-05-02T11:26:14.357 | null | null | 3911 | null |
10238 | 1 | null | null | 3 | 1304 | This follows on from the previous question on [differences between K-S manual test and K-S test with R](https://stats.stackexchange.com/questions/10030/difference-between-k-s-manual-test-and-k-s-test-with-r).
My frequency sample was
```
a <- c(0, 1, 1, 4, 9)
```
Then the observed sample is
```
obs=c(2,3,4,4,4,4,5,5,5,5,5,5,5,5,5)
```
The expected sample is then
```
exp=c(1,1,1,2,2,2,3,3,3,4,4,4,5,5,5)
```
I hope you agree.
First, I use `ks.test`, like another time:
```
ks.test(obs,exp)
data: obs and exp
D = 0.4667, p-value = 0.07626
```
Then, I use the ks.test the other way:
The expected distribution can be the uniform. Do you agree?
And then:
```
ks.test(obs, "punif", 0,5)
data: obs
D = 0.6667, p-value = 3.239e-06
```
### Question
- Why do the two approaches give different results?
| Even more with the Kolmogorov-Smirnov test with R software | CC BY-SA 3.0 | null | 2011-05-02T13:43:46.493 | 2011-05-02T16:17:59.077 | 2017-04-13T12:44:33.310 | -1 | 4345 | [
"r",
"multinomial-distribution",
"kolmogorov-smirnov-test"
] |
10239 | 2 | null | 10236 | 2 | null | There is a better alternative to deleting nonevents for temporal or spatial data: you can aggregate your data across time/space, and model the counts as Poisson. For example, if your event is "volcanic eruption happens on day X", then not many days will have a volcanic eruption. However, if you group together the days into weeks or months, e.g. "number of volcanic eruptions on month X", then you will have reduced the number of events, and more of the events will have nonzero values.
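The aggregation step described above can be sketched with nothing but the standard library (my own illustration; the dates and events are invented):

```python
# Collapse a sparse daily 0/1 event record ("eruption happened on day X")
# into monthly counts suitable for Poisson modelling.
from collections import Counter
from datetime import date, timedelta

start = date(2010, 1, 1)
days = [start + timedelta(d) for d in range(365)]
events = {date(2010, 3, 5), date(2010, 3, 20), date(2010, 9, 1)}  # sparse events

monthly = Counter((d.year, d.month) for d in days if d in events)
# most months have count 0; March 2010 has 2 events, September 2010 has 1
```

After aggregation, far fewer observations are all-zero, and the monthly counts can be modelled directly as Poisson outcomes.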
| null | CC BY-SA 3.0 | null | 2011-05-02T14:00:51.480 | 2011-05-02T14:00:51.480 | null | null | 3567 | null |
10240 | 2 | null | 10234 | 12 | null | Although it hasn't always been this way, these days $P(z; d, w)$ is generally used when $d,w$ are not random variables (which isn't to say that they're known, necessarily). $P(z | d, w)$ indicates conditioning on values of $d,w$. Conditioning is an operation on random variables and as such using this notation when $d, w$ aren't random variables is confusing (and tragically common).
As @Nick Sabbe points out $p(y|X, \Theta)$ is a common notation for the sampling distribution of observed data $y$. Some frequentists will use this notation but insist that $\Theta$ isn't a random variable, which is an abuse IMO. But they have no monopoly there; I've seen Bayesians do it too, tacking fixed hyperparameters on at the end of the conditionals.
| null | CC BY-SA 3.0 | null | 2011-05-02T15:03:17.627 | 2011-05-02T15:03:17.627 | null | null | 26 | null |
10241 | 1 | 10244 | null | 3 | 3303 | I have two variables for some districts in the UK:
- Number of crimes per habitant
- Median yearly income
Here's what it looks like:

I would like to determine if there is a dependence between the two. I ran a Pearson's correlation test, which gave me the following result:
```
Pearson's product-moment correlation
data: income and nbCrimesPerHab
t = 1.3689, df = 315, p-value = 0.172
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
-0.03353993 0.18548945
sample estimates:
cor
0.0769025
```
As I understand it, the two are almost uncorrelated, but I am not sure how to interpret the p-value.
What could I do next? I am interested only in the crimes data. How could I determine their distribution? What more tools could I use?
EDIT:
As suggested in the comments, here's the same plot but this time with the mean

It doesn't seem to change much, but the correlation is a bit higher (and with a better p-value):
```
Pearson's product-moment correlation
data: income and nbCrimesPerHab
t = 2.0986, df = 319, p-value = 0.03664
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.007318403 0.223310038
sample estimates:
cor
0.1166938
```
A scatter plot shows the structure better, as suggested:

| These two variables are almost uncorrelated. What else can I say? | CC BY-SA 3.0 | null | 2011-05-02T15:06:14.833 | 2011-05-03T19:34:53.553 | 2011-05-03T08:15:19.843 | 3699 | 3699 | [
"correlation",
"multivariate-analysis",
"independence"
] |
10242 | 1 | null | null | 2 | 204 | For example, I have a set of numbers (say 0 to 10) that are presented to 100 subjects.
Each subject is asked whether the number is a small or a large number.
The results are that 100 people think zero is a small number, 70 people think one is a small number, etc.
Now I use a certain distribution, say, exponential to estimate the parameter of the distribution based on these data.
I assume people give subjective opinions and that most of them think 0 is a small number so it may be assumed that the degree of truth of zero being a small number is 1.
Finally, by multiplying the exponential probability density function (PDF) by a constant, I can scale the PDF so that its peak reaches one, making it a membership function.
Does this sound right?
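The scaling step I have in mind can be sketched as follows (the rate value is an assumed placeholder; in practice it would be estimated from the survey responses):

```python
# An exponential pdf f(x) = lam*exp(-lam*x) peaks at x = 0 with height lam,
# so dividing by lam yields a membership function whose peak equals 1.
import math

lam = 0.5  # assumed rate; in practice estimated from the survey data

def pdf(x):
    return lam * math.exp(-lam * x)

def membership(x):
    return pdf(x) / lam   # = exp(-lam * x); equals 1 at the mode x = 0
```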
| Modeling membership function given some survey data or empirical distribution | CC BY-SA 3.0 | null | 2011-05-02T15:06:17.487 | 2017-01-30T11:56:32.800 | 2017-01-30T11:56:32.800 | 28666 | 4440 | [
"density-function",
"fuzzy"
] |
10243 | 2 | null | 10224 | 1 | null | I would take a look at the median score and see if it is a better representation of what you want. If not you can do two other things. 1) Use the mean and standard deviation. Yes, I know that is 2 numbers, but it would give you a better representation of the distribution. 2) Use the mean and standard deviation to come up with a composite score. e.g. you might want to divide the mean by the standard deviation, or come up with some other linear combination of these two factors depending upon what you feel is more important. If you are "loath" to take the average, for example, you might want to give more weight to the standard deviation, e.g. 60% of the total score. But even so, two numbers are better, and I would plot the scores to see if they made sense.
| null | CC BY-SA 3.0 | null | 2011-05-02T15:22:38.353 | 2011-05-02T15:28:22.937 | 2011-05-02T15:28:22.937 | 3489 | 3489 | null |
10244 | 2 | null | 10241 | 3 | null | I think you need to consider two issues when interpreting correlations.
1) the p-value tells you the probability of observing an effect at least this large due to chance alone. In your case, the probability that a correlation of this magnitude would occur solely by chance is 17.2%. This is what it is. Statistical convention typically dictates that we only accept as significant those tests in which this probability is < 5% (i.e., p-value <= 0.05) but I think it is better to consider the definition of the p-value as opposed to viewing 0.05 as a rigid wall.
2) the second thing to consider is the correlation coefficient, which measures the strength of the linear association between two random variables. It ranges from -1 to 1, with values near 0 indicating little linear association; its square gives the proportion of the variation in one variable that corresponds to variation in the other.
Based on your comment about being interested in the crimes data mainly, I think you may be more interested in regression since you are likely thinking about income as a fixed variable and are mostly interested in how crimes are dependent on income. Your correlation coefficient is rather low (close to 0) so regardless of the p-value not much of the variation in crimes corresponds to the variation in income.
Without knowing more about the questions of your study it is hard to suggest exactly where to go next. The distribution of the crimes data can be visualized with a number of tools including boxplots, frequency histograms and scatter plots.
Edit (based on comments)
Since you are interested in exploring other predictor variables you may want to consider stepwise multiple regression. This analysis allows you to build a multiple regression model from a suite of explanatory variables by only including those that improve the model by certain predetermined criteria. Multiple regression will also allow you to assess the effect of one variable on crime after taking into account the effect of a previous variable.
Given your list of proposed variables (in the comment to @Leo) be sure to test for correlation among your explanatory variables (e.g., income and age) because the explanatory variables of a multiple regression (or even a series of independent regressions) should not be substantially correlated. If there is a lot of correlation among the predictor variables then you cannot effectively partition the influence of each predictor on the dependent variable.
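A quick way to screen candidate predictors for such collinearity is a correlation matrix. A small sketch (my own, with entirely invented data standing in for the district variables):

```python
# Check pairwise correlation among candidate predictors before a multiple
# regression; income/age/education values here are simulated for illustration.
import numpy as np

rng = np.random.default_rng(42)
income = rng.normal(30_000, 5_000, 200)
age = 0.001 * income + rng.normal(40, 5, 200)   # deliberately related to income
education = rng.normal(12, 2, 200)              # unrelated noise

R = np.corrcoef([income, age, education])
# R[0, 1] is the income-age correlation; entries far from 0 flag collinearity
```

Predictor pairs with large off-diagonal entries should not be entered into the same model uncritically.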
| null | CC BY-SA 3.0 | null | 2011-05-02T15:35:25.667 | 2011-05-03T19:34:53.553 | 2011-05-03T19:34:53.553 | 4048 | 4048 | null |
10245 | 2 | null | 10241 | 2 | null | So you don't want to add any other variables in your model (average age, level of education, ethnicity, etc.)?
It may be useful to visualize these two variables on a geographical map. You may see some patterns from there (including correlation, which may be slightly "shifted in space").
| null | CC BY-SA 3.0 | null | 2011-05-02T15:40:32.120 | 2011-05-02T15:40:32.120 | null | null | 4337 | null |
10246 | 2 | null | 10180 | 5 | null | What you are looking for is called "Determination of minimum sample size" for a particular test which is an application of statistical [power analysis](http://en.wikipedia.org/wiki/Statistical_power).
In your special case one analyses an r×s [contingency table](http://en.wikipedia.org/wiki/Contingency_table). However, I can only provide some details for the 2x2 case, which can be extended to the multiple-test case using e.g. the so-called [Bonferroni correction](http://de.wikipedia.org/wiki/Bonferroni-Methode) (details below). One test which can be performed here is a so-called chi2-test, e.g. [Fisher's exact](http://en.wikipedia.org/wiki/Fishers_exact_test) test.
Let's say:
- conversion_rate =$\frac{sales}{clicks}$
- $p_i$ conversion-rate of Combination i
- $n_i$ sample size for Combination i (aka the number of clicks)
Now what you want to calculate is: What are the minimum required sample sizes $n_i$ and $n_j$ so that my preferred statistical test with significance level $\alpha$ detects the difference $p_i-p_j$ with probability $1-\beta$, where ...
- $\alpha$ denotes the probability that one rejects the Null-Hypothesis although it is true (i.e. calls a difference significant which is not)
- $\beta$ denotes the probability that one does not reject the Null-Hypothesis although it is false (i.e. fails to identify a significant difference).
One formula for the case of Fisher's exact test is from [Casagrande et al.](http://www.jstor.org/pss/2530613) which is (according to my reference):
$n_i:=n_j:=\frac{A(1+\sqrt{1+4\delta/A})^2}{4\delta^2}$
where
$A=\left(u_{1-\alpha}\sqrt{2\frac{p_i+p_j}{2}(1-\frac{p_i+p_j}{2})}-u_{\beta}\sqrt{p_i(1-p_i)+p_j(1-p_j)}\right)^2$ and $\delta=p_i-p_j$
where
$u_{\alpha}$ is the quantile of the standard normal distribution for the probability $\alpha$
As you have seen above, $n_i$ is set equivalent to $n_j$ here. But since you are performing an ABC-Test, this should not be a problem because the sample sizes for all combinations are roughly the same.
The Bonferroni correction:
Since you have three combinations, you have to perform at least 3 tests (1 against 2, 2 against 3, 1 against 3), so the $\alpha$ value you should use here is:
$\alpha_{corrected}$=your-desired-alpha/3 (same for $\beta$).
Now let's perform an exemplary calculation:
Let's say you want to detect at least a difference reflecting a 10% increase (assuming that a lesser difference, although significant, would not be of interest (e.g. because of cost effectiveness)).
So we got:
- $p_1=\frac{10}{500}$
- $p_2=\frac{10}{500}*1.1$ (which is roughly equivalent to the "true" $p_2=\frac{11}{498}$)
- $p_3=\frac{15}{503}>p_2*1.1$
- let's say $\alpha=0.05$ => $\alpha_{corrected}=0.05/3\approx 0.0167$
- let's say $\beta=0.2$ => $\beta_{corrected}=0.2/3\approx 0.0667$
Hence:
- $n_{p_1vsp_2}=136382.6$
- $n_{p_1vsp_3}=6425.552$
- $n_{p_2vsp_3}=10608.72$
You see that the main influence on the outcome is the difference between the $p$s, which is squared in the formula (see $\delta$ above). Greater differences can be shown faster (i.e. with a smaller sample). So if you want e.g. to show in an AB-Test that Combination3 is better than Combination1 (assuming that the measured conversion-rates are the actual true ones), this can be done with a sample size of 4481.86 per combination (calculated without any $\alpha$- or $\beta$-correction), i.e. about a week, if you generate $\frac{4481.86*2}{7}\approx 1281$ clicks per day.
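The Casagrande-style formula can be transcribed directly; the sketch below is my own transcription (using Python's standard-library normal quantile, not the software the answer used). Note that $\delta = p_i - p_j$ enters the $4\delta/A$ term with its sign, and with the formula taken literally this reproduces the $n_{p_1 vs p_3}$ and $n_{p_2 vs p_3}$ values from the example:

```python
# Sample size per group for the 2x2 comparison, following the Casagrande et al.
# approximation quoted above. Requires |4*delta| < A and delta != 0.
from statistics import NormalDist

def sample_size(p_i, p_j, alpha, beta):
    u = NormalDist().inv_cdf          # standard normal quantile u_q
    pbar = (p_i + p_j) / 2
    delta = p_i - p_j                 # signed, as in the formula
    A = (u(1 - alpha) * (2 * pbar * (1 - pbar)) ** 0.5
         - u(beta) * (p_i * (1 - p_i) + p_j * (1 - p_j)) ** 0.5) ** 2
    return A * (1 + (1 + 4 * delta / A) ** 0.5) ** 2 / (4 * delta ** 2)

# Worked example above, with Bonferroni-corrected alpha and beta
p1, p2, p3 = 10 / 500, 1.1 * 10 / 500, 15 / 503
n13 = sample_size(p1, p3, 0.05 / 3, 0.2 / 3)   # agrees with the quoted n value
n23 = sample_size(p2, p3, 0.05 / 3, 0.2 / 3)   # agrees with the quoted n value
```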
Final Note: This so-called "probability of being better" is presumably calculated using a Bayesian approach (I started a discussion about that [here](https://stats.stackexchange.com/questions/9735/how-does-a-frequentist-calculate-the-chance-that-group-a-beats-group-b-regarding)). I would not make a decision based on that number unless it is close enough to 0 or 1 (e.g. above 0.95). One can also calculate the sample size Bayesian style, but I am not done with it yet (I am also struggling with the interpretation of GWO results).
| null | CC BY-SA 3.0 | null | 2011-05-02T16:01:55.797 | 2011-05-03T09:40:26.993 | 2017-04-13T12:44:29.013 | -1 | 264 | null |
10247 | 2 | null | 10238 | 6 | null | The first is a two-sample test; the second is a one-sample test against a continuous distribution. Neither is used correctly:
- The two-sample test views both sets of data as being data, but your "expected sample" is not data, it's a theoretical reference. It is not subject to any variation. The two-sample test thinks that it can vary. That's why the p-value is so large.
- The reference distribution used in the one-sample test is a continuous uniform distribution between 0 and 5. However, these data look discrete: from the way they are given, it appears they can attain only the values 1, 2, ..., 5. Because the one-sample test doesn't know this, its p-value is probably too small.
At least this lets us infer that the correct p-value should lie somewhere between 0.076 and 3.2e-06. Because that doesn't settle the question, let's analyze further.
To get a sense of whether the data (0, 1, 1, 4, 9) differ significantly from the discrete uniform frequencies (3, 3, 3, 3, 3), view the latter as describing a five-sided die. What are the chances that in 0+1+...+9 = 15 tosses of this die that at least one value would appear 9 or more times? The events (1 appears 9 or more times), (2 appears 9 or more times), ..., (5 appears 9 or more times) are mutually exclusive--no two of them can hold at once--so their probabilities add. Because the die is uniform each of these five events has the same probability. We can compute the chance that a 5 comes up 9 or more times by viewing it like tosses of a biased coin: a 5 has a 1/5 chance; a non-5 has a 4/5 chance. The chance of 9 or more 5's therefore equals
$$\binom{15}{9}(1/5)^9(4/5)^6 + \binom{15}{10}(1/5)^{10}(4/5)^5 + \cdots + \binom{15}{15}(1/5)^{15}(1/4)^0.$$
This value is approximately 0.000785. Multiplying by 5 gives .00392 = 0.39%, still quite small. Thus this set of frequencies is unlikely to have arisen through a single experiment in which each of the values has an equal chance of arising.
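The arithmetic can be checked numerically (a small verification sketch of my own, not part of the original answer):

```python
# P(a given face of a fair 5-sided die appears 9 or more times in 15 rolls),
# then the union over the 5 mutually exclusive "face appears >= 9 times" events.
from math import comb

p_tail = sum(comb(15, k) * 0.2 ** k * 0.8 ** (15 - k) for k in range(9, 16))
p_any_face = 5 * p_tail
# p_tail ~ 0.000785, p_any_face ~ 0.00392, matching the values above
```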
| null | CC BY-SA 3.0 | null | 2011-05-02T16:17:40.357 | 2011-05-02T16:17:40.357 | null | null | 919 | null |
10248 | 2 | null | 10241 | 1 | null | Pearson correlation assumes linear relationships. It is possible that you have a curvilinear relationship going on.That can often be the case when income is involved. A second issue is that you have defined yearly income in terms of the median rather than the mean. That may truly be hiding some relationships that you need to see. I suggest starting with a plot of the raw values prior to performing a correlation test to see if it reasonably meets the assumptions.
| null | CC BY-SA 3.0 | null | 2011-05-02T17:55:00.947 | 2011-05-02T17:55:00.947 | null | null | 3489 | null |