Columns: Id (string), PostTypeId (string, 7 classes), AcceptedAnswerId (string), ParentId (string), Score (string), ViewCount (string), Body (string, up to 38.7k chars), Title (string, 15–150 chars), ContentLicense (string, 3 classes), FavoriteCount (string, 3 classes), CreationDate (string), LastActivityDate (string), LastEditDate (string), LastEditorUserId (string), OwnerUserId (string), Tags (list)
8917
2
null
8883
0
null
The reason that Wikipedia says $\sum_{i=1}^{N}X_{i}\sim\Gamma(N,\theta)$ is actually because we assume a Poisson process. That is, the waiting time for the next event is exponentially distributed, $X_{i}\sim EXP(\theta)$. You can prove with the moment generating function that, given an IID sample $X_{i}\sim EXP(\theta)$ for $i=1,\dots,N$, then $\sum_{i=1}^{N}X_{i}\sim\Gamma(N,\theta)$. Briefly put, if you assumed the $\sum_{i=1}^{N}X_{i}\sim\Gamma(N,\theta)$ model for your data, then you can also assume $X_{i}\sim EXP(\theta)$. (Meaning that's what they assumed originally...) One last thing: when you write $NX_{i}|\sum_{i=1}^{N}X_{i}\sim\Gamma(N,\frac{1}{N}\sum_{i=1}^{N}X_{i})$, that is not correct. Instead, $NX_{i}|\sum_{i=1}^{N}X_{i}\sim UNIFORM(0,\sum_{i=1}^{N}X_{i})$. To more or less convince yourself, think that given the value of $\sum_{i=1}^{N}X_{i}=M$, $NX_{i}$ cannot possibly be greater than $M$. I hope this helps! EDIT: I'm wrong about the uniform. See comments.
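As a quick numerical check of the MGF argument (a Python sketch, not part of the original answer; the rate parametrization is assumed, so each $X_i$ has mean $1/\theta$ and $\Gamma(N,\theta)$ has mean $N/\theta$ and variance $N/\theta^2$):

```python
import random

random.seed(42)

N = 5        # number of IID exponential waiting times
rate = 2.0   # theta, the rate parameter: each X_i ~ EXP(theta) with mean 1/theta
reps = 20000

# Simulate sum_{i=1}^N X_i many times; Gamma(N, theta) has mean N/theta
# and variance N/theta^2.
sums = [sum(random.expovariate(rate) for _ in range(N)) for _ in range(reps)]
sim_mean = sum(sums) / reps
sim_var = sum((s - sim_mean) ** 2 for s in sums) / reps

expected_mean = N / rate       # 2.5
expected_var = N / rate ** 2   # 1.25
```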
null
CC BY-SA 2.5
null
2011-03-29T13:19:54.503
2011-03-30T04:22:56.787
2011-03-30T04:22:56.787
3834
3834
null
8918
1
8927
null
4
3099
I am curious: is there a way to optimize a regression according to a specific statistic? Let's say I am interested in the model with the best possible AIC statistic (or MSE, or whatever measure I am interested in) - could I somehow direct the regression to give me the top X models by that criterion? (Of course I would not ignore the other measures, but would it be possible to ask for this?) What software supports this, or would you write your own code (in, say, R)? Also, in general, when multiple regression results are displayed (let's say an all-possible or best-subset regression is done), is there a ranking, and if so, according to what measure/criteria? I'm not saying that this is the best way to evaluate models, but perhaps it would be a way to explore candidate models? (This is really not my main question though.) Thanks, I'm learning a lot reading this site.
Is there a way to optimize regression according to a specific criterion?
CC BY-SA 2.5
null
2011-03-29T13:22:43.783
2011-04-22T02:56:08.773
2011-03-29T16:04:10.763
919
10633
[ "regression", "model-selection", "descriptive-statistics" ]
8919
2
null
8911
3
null
To some extent (less strictly statistical) [NetLogo](http://ccl.northwestern.edu/netlogo/).
null
CC BY-SA 2.5
null
2011-03-29T13:28:06.453
2011-03-29T13:28:06.453
null
null
null
null
8920
2
null
8891
7
null
(You may skip to the section after the line for a shorter answer.) To begin with, you are absolutely right that it firstly depends on the purpose of your analysis: forecasting of the average price (at the macro level) or of a particular price (at the micro level), or causal analysis of consumer preferences (district, size, age, number of bedrooms, gas, commute to work, level of noise, etc.). This verbal specification will secondly guide you to an appropriate choice of model and, finally, to the requirements for your data. From what you have written, I assume that you are dealing with real estate pricing models. Some quick googling shows there are many ways to specify such a model. A quite good starting reference could be Simon P. Leblond's [article](https://papyrus.bib.umontreal.ca:8443/jspui/bitstream/1866/257/1/a1.1g979.pdf) Comparing predictive accuracy of real estate pricing models: an applied study for the city of Montreal. From a practical point of view you have to choose between additive and multiplicative regression models. The latter have several advantages over additive models: - parameter estimates (except the intercept term, a junk parameter in any regression) are not affected by changes in scale - parameters on log-transformed variables have a nice elasticity interpretation, which ... - naturally allows for diminishing-returns-to-scale restrictions (in real estate this could be a crucial restriction) - if one studies average prices, the weighted geometric mean is a more robust average than the arithmetic mean (this will not matter much at the micro level though) - you may set the price to zero if, for instance, an apartment has no bedrooms at all (this is hard to do with additive models) One more important thing before you proceed: think of each observation as a unique data point that was jointly set on the market by a decision maker on the basis of utility-maximizing behaviour. 
"Jointly" meaning here that you can't separate the variables from each other (for instance, the value of an apartment without a bedroom is zero for most consumers): a consumer may or may not like the whole bundle of attributes together, and after that his or her budget (money in the pocket) is all that matters. Therefore standardization is useful for analysing the relative importance of explanatory variables, but be careful judging which variables are not significant (all factors may be important). Heterogeneity of preferences and budgets (buyers are different households) across observations also shows why regression at the micro level (without averaging or the like) could be misleading. Finally, you have cross-sectional (static) data. If you try to predict prices for years other than the year of your observations, a static picture works poorly across periods (say you build the model on 2009 data; it will not be very useful for retrospectively predicting prices for, say, 2007, or for 2011). At least try to correct the outcomes for the change in the average price in the year of interest. --- Regarding your particular questions (what I personally do for my projects, or at least try to do): - List all the variables you have and their measurement units - Check and re-check the data for input errors - Impute the points with missing values (you may also simply exclude those observations if you have a large dataset with not so many missing values) - Make measurement units the same across similar variables (sq. meters, currency units, etc.) 
- Think of a simple data frame structure at once (you need to communicate with R conveniently) - Bring only raw data into R, and do all log, difference and ratio transformations in R directly (logarithms are important for multiplicative models, some arguments for which are in the prelude above; ratios are also nice because you may want to eliminate the scale (size) effect at once and emphasize the differences caused by other factors) - Leave dummies as they are, but always drop one level of each qualitative attribute into the intercept term (otherwise you create a pure multicollinearity problem in your model) - For your purposes you may apply ordinary least squares (OLS), though in pricing models I would also consider tobit or Heckman models, which do need special treatment (one of my early, maybe-not-so-successful posts on pricing was about this) - OLS is straightforward, and the usual residual analysis (found in econometrics textbooks) applies. If some assumptions are violated you may go for generalized methods, instrumental variables, ridge regression, or cures for autocorrelated residuals, but... what you really need to know is: are the parameter estimates theoretically reasonable (values, signs, etc.)? - Just a nice round number of points... any additions from the community are welcome.
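To illustrate the elasticity point for multiplicative models (a hedged sketch with invented numbers, in Python rather than the thread's R): in a log-log regression $\log y = a + b\log x$, the slope $b$ is the elasticity, and on noiseless power-law data plain OLS on the log scale recovers it exactly.

```python
import math

# Invented power-law price data: price = 3 * size^2,
# so the size elasticity of price is exactly 2.
sizes = [10.0, 20.0, 40.0, 80.0, 160.0]
prices = [3.0 * s ** 2 for s in sizes]

# Simple one-variable OLS on the log-log scale.
lx = [math.log(s) for s in sizes]
ly = [math.log(p) for p in prices]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
slope = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
        sum((x - mx) ** 2 for x in lx)
intercept = my - slope * mx

elasticity = slope                  # a 1% increase in size -> ~2% increase in price
scale_const = math.exp(intercept)   # recovers the multiplicative constant 3
```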
null
CC BY-SA 2.5
null
2011-03-29T13:38:45.607
2011-03-29T13:38:45.607
2017-04-13T12:44:20.730
-1
2645
null
8921
2
null
8883
2
null
If you have a Poisson process, then each $X_i$ has an exponential distribution (and their sum a Gamma distribution). You can then use Bayesian methods to look at possible values of $\theta$. For example, the [conjugate prior for an exponential distribution](http://en.wikipedia.org/wiki/Exponential_distribution#Bayesian_inference) is also a Gamma distribution, which you may find helpful for incorporating your prior information and observed data to get a credible interval for $\theta$.
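A small numeric sketch of that conjugate update (Python, with invented numbers): with a Gamma(a, b) prior on the rate $\theta$ (shape a, rate b) and $n$ exponential observations, the posterior is Gamma(a + n, b + sum of the data).

```python
# Gamma(a, b) prior on the exponential rate theta (shape a, rate b).
a_prior, b_prior = 2.0, 1.0

# Invented observed waiting times, assumed X_i ~ EXP(theta).
data = [0.5, 1.2, 0.3, 0.8]

# Conjugacy: posterior is Gamma(a + n, b + sum(data)).
a_post = a_prior + len(data)
b_post = b_prior + sum(data)

posterior_mean = a_post / b_post  # point estimate of theta
```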
null
CC BY-SA 2.5
null
2011-03-29T14:01:17.720
2011-03-29T14:01:17.720
null
null
2958
null
8922
2
null
8891
1
null
For preprocessing I always like to include outlier detection and removal of bad data. If your variables are on different scales, normalizing (standardizing) the data is a good idea. As far as technique goes, it always pays to graph and plot all of your variables against each other, as well as against the predicted variable. That will tell you a lot about which assumptions you can make about the data, such as linearity, equality of variances and normality, and can help you choose a technique.
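The standardization mentioned above is just centering and rescaling; a minimal Python sketch (the advice itself is tool-agnostic):

```python
import statistics

def standardize(xs):
    """Return z-scores: subtract the mean, divide by the sample standard deviation."""
    m = statistics.mean(xs)
    s = statistics.stdev(xs)
    return [(x - m) / s for x in xs]

z = standardize([1.0, 2.0, 3.0])  # mean 2, sample sd 1
```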
null
CC BY-SA 2.5
null
2011-03-29T14:11:32.650
2011-03-29T14:11:32.650
null
null
3489
null
8923
1
8929
null
7
1219
I have meteorological records for a point; these include temperature and solar irradiation. I want to plot them against another variable, which we shall call Rating, and see how the data are distributed. This is what I'm doing: ``` d1 <- ggplot(data = mydata, aes(Temperature, Rating, fill = ..density..)) + stat_binhex(na.rm = TRUE) + opts(aspect.ratio = 1)#, legend.position = "none") d2 <- ggplot(data = mydata, aes(Solar.Irrad, Rating, fill = ..density..)) + stat_binhex(na.rm = TRUE) + opts(aspect.ratio = 1)#, legend.position = "none") ``` I get both things on the same window by using grid.arrange from package gridExtra: ``` grid.arrange(d1,d2, nrow=1) ``` ![Two graphs together, different z scale](https://i.stack.imgur.com/dPHEk.png) This produces the image shown. Now, my problem is that I would really like both graphs to share their z scale: the legend should be the same and the color scheme should be homogeneous across the graphs. Is this possible? I'm totally lost here; does anyone know of a way to do this?
Plotting multiple binhex with the same z levels
CC BY-SA 2.5
null
2011-03-29T14:47:42.687
2011-03-29T15:57:12.923
2011-03-29T15:57:12.923
696
1504
[ "r", "ggplot2" ]
8924
1
8931
null
12
8851
I would greatly appreciate your advice on the following problem: I've got a large continuous dataset with lots of zeros (~95%) and I need to find the best way to test whether certain subsets of it are "interesting", i.e. don't seem to be drawn from the same distribution as the rest. Zero inflation comes from the fact that each data point is based on a count measurement with both true and sampling zeros, but the result is continuous as it takes into account some other parameters weighted by the count (and so if the count is zero, the result is also zero). What would be the best way to do this? I have a feeling that Wilcoxon and even brute-force permutation tests are inadequate as they get skewed by these zeros. Focussing on non-zero measurements also removes true zeros that are extremely important. Zero-inflated models for count data are well-developed, but unsuitable for my case. I considered fitting a Tweedie distribution to the data and then fitting a glm on response=f(subset_label). Theoretically, this seems feasible, but I'm wondering whether (a) this is overkill and (b) it would still implicitly assume that all zeros are sampling zeros, i.e. would be biased in the same way (at best) as a permutation? Intuitively, it seems I need some kind of hierarchical design that combines a binomial statistic based on the proportion of zeros and, say, a Wilcoxon statistic computed on non-zero values (or, better still, non-zero values supplemented with a fraction of zeros based on some prior). Sounds like a Bayesian network... Hopefully I'm not the first one having this problem, so I would be very grateful if you could point me to suitable existing techniques... Many thanks!
Hypothesis testing on zero-inflated continuous data
CC BY-SA 2.5
null
2011-03-29T15:06:06.213
2013-03-25T21:27:16.527
2011-03-29T15:41:14.403
6649
6649
[ "hypothesis-testing" ]
8925
2
null
8911
1
null
You can even do Monte Carlo Simulation in Excel. It's not a perfect tool, but you probably already have it and know how to use it. Depending on the scope of your problem, it might be easier to use Excel than to learn something new. If you are going to learn something new, R is a great choice. What are you trying to simulate?
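For a flavour of what such a simulation looks like outside Excel, here is a minimal Monte Carlo sketch in Python (all quantities invented for illustration): a profit figure with uncertain demand and unit cost, summarized by its simulated mean and an approximate 90% interval.

```python
import random

random.seed(1)

def simulate_profit():
    # Invented model: price fixed, demand and unit cost uncertain.
    price = 10.0
    demand = random.gauss(1000.0, 100.0)   # units sold
    unit_cost = random.uniform(4.0, 6.0)   # cost per unit
    return (price - unit_cost) * demand

reps = 20000
profits = sorted(simulate_profit() for _ in range(reps))
mean_profit = sum(profits) / reps
# Empirical 5th and 95th percentiles -> ~90% interval.
lo, hi = profits[int(0.05 * reps)], profits[int(0.95 * reps)]
```

With these inputs the expected profit is (10 − E[cost]) × E[demand] = 5 × 1000 = 5000, which the simulated mean should land near.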
null
CC BY-SA 2.5
null
2011-03-29T15:14:59.303
2011-03-29T15:14:59.303
null
null
2817
null
8926
2
null
8908
2
null
I would add random forests next to trees, or perhaps under a new category of ensemble methods.
null
CC BY-SA 2.5
null
2011-03-29T15:21:19.173
2011-03-29T15:21:19.173
null
null
2817
null
8927
2
null
8918
5
null
Statistical appropriateness aside, R provides some nice functions that allow for this type of analysis. You can take a look at the `leaps()` function within the [leaps package](http://cran.r-project.org/web/packages/leaps/index.html). The `leaps()` function returns the top `n` models and some fit statistics for a given set of parameters. `stepAIC()` within `MASS` is another handy function for this type of analysis. There's a decent tutorial on the [statmethods](http://www.statmethods.net/stats/regression.html) site describing these and some other techniques.
null
CC BY-SA 2.5
null
2011-03-29T15:32:45.283
2011-03-29T15:32:45.283
null
null
696
null
8928
2
null
8749
16
null
There is a package for "[R](http://www.r-project.org/)" called "[caret](http://cran.r-project.org/web/packages/caret/index.html)," which stands for "classification and regression training." I think it would be a good place for you to start, as it will easily allow you to apply a dozen or so different learning algorithms to your data, and then cross-validate them to estimate how accurate they each are. Here is an example that you can modify with your own data/other methods: ``` install.packages('caret',dependencies = c('Depends','Suggests')) library(caret) set.seed(999) Precursor1 <- runif(25) Precursor2 <- runif(25) Target <- sample(c('T','F'),25,replace=TRUE) MyData <- data.frame(Precursor1,Precursor2,Target) str(MyData) #Try Logistic regression model_Logistic <- train(Target~Precursor1+Precursor2,data=MyData,method='glm') #Try Neural Network model_NN <- train(Target~Precursor1+Precursor2,data=MyData,method='nnet',trace=FALSE) #Try Naive Bayes model_NB <- train(Target~Precursor1+Precursor2,data=MyData,method='nb') #Try Random Forest model_RF <- train(Target~Precursor1+Precursor2,data=MyData,method='rf') #Try Support Vector Machine model_SVM<- train(Target~Precursor1+Precursor2,data=MyData,method='svmLinear') #Try Nearest Neighbors model_KNN<- train(Target~Precursor1+Precursor2,data=MyData,method='knn') #Compare the accuracy of each model cat('Logistic:',max(model_Logistic$results$Accuracy)) cat('Neural:',max(model_NN$results$Accuracy)) cat('Bayes:',max(model_NB$results$Accuracy)) cat('Random Forest:',max(model_RF$results$Accuracy)) cat('Support Vector Machine:',max(model_SVM$results$Accuracy)) cat('Nearest Neighbors:',max(model_KNN$results$Accuracy)) #Look at other available methods ?train ``` Another idea would be to break your data into a training set and a test set, and then compare how each model performs on the test set. If you like, I can show you how to do that.
null
CC BY-SA 2.5
null
2011-03-29T15:52:01.003
2011-03-29T15:52:01.003
null
null
2817
null
8929
2
null
8923
8
null
Have you thought about using the faceting capabilities within ggplot2 directly? You can allow for free scales as a parameter in the call to `facet_wrap()`. Here's an example for you: ``` library(ggplot2) library(hexbin) library(reshape2) #provides melt(); older ggplot2 versions loaded this automatically #Sample data.frame. dat <- data.frame( rating = sample(500:2500, 1000, TRUE) , Solar.Radiation = sample(0:1200, 1000, TRUE) , Ambient.Temperature = sample(-10:25, 1000, TRUE) ) #Melt data.frame for plotting dat.m <- melt(dat, id.vars = "rating") #Plotting ggplot(dat.m, aes(value, rating, fill = ..density..)) + stat_binhex(na.rm = TRUE) + opts(aspect.ratio = 1) + facet_wrap(facets = ~ variable, scales = "free_x") ```
null
CC BY-SA 2.5
null
2011-03-29T15:54:50.117
2011-03-29T15:54:50.117
null
null
696
null
8930
1
null
null
14
12030
As I've heard the AdaBoost classifier repeatedly mentioned at work, I wanted to get a better feel for how it works and when one might want to use it. I've gone ahead and read a number of papers and tutorials on it which I found on Google, but there are aspects of the classifier which I'm still having trouble understanding: - Most tutorials I've seen speak of AdaBoost as finding the best weighted combination of many classifiers. This makes sense to me. What does not make sense are implementations (e.g. MALLET) where AdaBoost seems to only accept one weak learner. How does this make any sense? If there's only one classifier provided to AdaBoost, shouldn't it just return that same classifier with a weight of 1? How does it produce new classifiers from the first classifier? - When would one actually want to use AdaBoost? I've read that it's supposed to be one of the best out-of-the-box classifiers, but when I try boosting a MaxEnt classifier that I was getting f-scores of 70%+ with, AdaBoost murders it and gives me f-scores of something like 15%, with very high recall and very low precision instead. So now I'm confused. When would I ever want to use AdaBoost? I'm looking for more of an intuitive rather than strictly statistical answer, if possible.
When would one want to use AdaBoost?
CC BY-SA 2.5
null
2011-03-29T16:14:24.407
2018-06-05T08:45:16.083
2018-06-05T08:45:16.083
128677
3948
[ "machine-learning", "boosting", "adaboost" ]
8931
2
null
8924
9
null
@msp, I think you are looking at a two-stage model in that attachment (I did not have time to read it), but zero-inflated continuous data is the type I work with a lot. To fit a parametric model to this data (to allow hypothesis tests) you can fit a two-stage model, but then you have two models (Y is the target and X are the covariates): P(Y=0|X) and P(Y|X;Y>0). You have to use simulation to "bring" these together. Gelman's [book](http://rads.stackoverflow.com/amzn/click/052168689X) (and the arm package in R) shows this process for this exact model (using logistic regression and ordinary linear regression with a log link). The other option I have seen and like better is to fit a zero-inflated gamma regression, which is the same as above (but with a gamma error distribution instead of Gaussian), and you can bring the parts together for hypothesis tests on P(Y|X). I don't know how to do this in R, but you can in SAS NLMIXED. See this [post](http://listserv.uga.edu/cgi-bin/wa?A2=ind0805A&L=sas-l&P=R20779); it works well.
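A rough Python sketch of the two-stage idea on simulated data (not the SAS NLMIXED fit; all numbers invented): estimate P(Y>0) and E[Y | Y>0] separately, then combine them, since E[Y] = P(Y>0) · E[Y | Y>0].

```python
import random

random.seed(7)

# Simulate zero-inflated continuous data: with prob 0.6 a zero,
# otherwise a Gamma(shape=2, scale=3) draw (mean 6).
def draw():
    if random.random() < 0.6:
        return 0.0
    return random.gammavariate(2.0, 3.0)

y = [draw() for _ in range(50000)]

# Stage 1: probability of a positive outcome (truth: 0.4).
p_pos = sum(v > 0 for v in y) / len(y)

# Stage 2: mean of the positives only (truth: 6).
positives = [v for v in y if v > 0]
mean_pos = sum(positives) / len(positives)

# "Bringing them together": the plug-in combination estimates the
# overall mean E[Y] = P(Y>0) * E[Y|Y>0] = 0.4 * 6 = 2.4.
combined_mean = p_pos * mean_pos
```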
null
CC BY-SA 2.5
null
2011-03-29T17:17:43.460
2011-03-29T17:17:43.460
null
null
2040
null
8932
2
null
8918
2
null
Try wle.cp from package wle: [http://cran.r-project.org/web/packages/wle/index.html](http://cran.r-project.org/web/packages/wle/index.html) It's based on Mallow's Cp: [Mallow's Cp](http://en.wikipedia.org/wiki/Mallows%27_Cp) Here's the example given in the reference manual. ``` library(wle) x.data <- c(runif(60,20,80),runif(5,73,78)) e.data <- rnorm(65,0,0.6) y.data <- 8*log(x.data+1)+e.data y.data[61:65] <- y.data[61:65]-4 z.data <- c(rep(0,60),rep(1,5)) plot(x.data,y.data,xlab="X",ylab="Y") xx.data <- cbind(x.data,x.data^2,x.data^3,log(x.data+1)) colnames(xx.data) <- c("X","X^2","X^3","log(X+1)") result <- wle.cp(y.data~xx.data,boot=10,group=10,num.sol=2) summary(result) plot(result,num.max=15) result <- wle.cp(y.data~xx.data+z.data,boot=10,group=10,num.sol=2) summary(result) plot(result,num.max=15) ``` The output from the last summary(result) statement is: ``` Call: wle.cp(formula = y.data ~ xx.data + z.data, boot = 10, group = 10, num.sol = 2) Weighted Mallows Cp: (Intercept) xx.dataX xx.dataX^2 xx.dataX^3 xx.datalog(X+1) z.data wcp [1,] 0 0 0 0 1 1 1.570 [2,] 1 0 0 0 1 1 2.372 [3,] 0 1 0 0 1 1 2.510 [4,] 0 0 1 0 1 1 2.564 [5,] 0 0 0 1 1 1 2.570 [6,] 1 1 1 1 0 1 4.088 [7,] 0 1 1 1 1 1 4.289 [8,] 1 0 1 1 1 1 4.530 [9,] 1 1 0 1 1 1 4.710 [10,] 1 1 1 0 1 1 4.888 [11,] 1 1 1 1 1 1 6.000 Printed the first 11 best models ``` The top model (row [1,] above), which uses log(X+1) from xx.data (see the above 1 under xx.datalog(X+1)) and z.data (see the above 1 under z.data) has the lowest Mallow's Cp value (wcp = 1.57). The final plot(result,num.max=15) statement provides the following graph where any green model under the black line follows Mallow's criteria. The blue model in the lower left area is the "best Mallow's Cp model" (see the above list). ![enter image description here](https://i.stack.imgur.com/M1jRI.jpg)
null
CC BY-SA 3.0
null
2011-03-29T17:43:06.107
2011-04-22T02:56:08.773
2011-04-22T02:56:08.773
2775
2775
null
8933
1
8940
null
5
4997
I have some samples from eight people who all gave the same answer to a question. Now, obviously the sample's mean is the answer all people gave, and the standard dev is 0. Excel throws a `#NUM!` error when I call the function ``` CONFIDENCE.T(0.05, K33, COUNTA(B33:I33)) ``` where `K33` is the standard dev (0). What would be the correct interpretation of this? Can I even calculate a confidence interval? For clarification: People are asked to give their opinion on a scale of (1, 2, 3, 4, 5), which is ordinal. Nevertheless, one always calculates the arithmetic mean of all judgements (according to [ITU-T P.800](http://www.itu.int/rec/dologin_pub.asp?lang=e&id=T-REC-P.800-199608-I!!PDF-E&type=items), see also: [Wikipedia](http://en.wikipedia.org/wiki/Mean_opinion_score)), so that's why I also want to get a confidence interval.
Excel's confidence interval function throws #NUM! when standard deviation is 0
CC BY-SA 2.5
null
2011-03-29T18:10:41.033
2011-03-30T03:25:30.340
2011-03-29T18:31:48.777
1205
1205
[ "confidence-interval", "standard-deviation", "excel" ]
8934
2
null
8933
1
null
If eight samples from a distribution are exactly the same, it is probably not a normal distribution, or you are rounding at a higher order of magnitude than that of the standard deviation. Or are you calculating means on a numerically coded ordinal scale?
null
CC BY-SA 2.5
null
2011-03-29T18:17:12.657
2011-03-29T18:17:12.657
null
null
3911
null
8936
2
null
8924
1
null
You could treat the exact number of zeros as unknown, but constrained between 0 and the observed number of zeros. This can surely be handled using a Bayesian formulation of the model. Maybe a multiple imputation method can also be tweaked to appropriately vary the weights (between 0 and 1) of the zero observations…
null
CC BY-SA 2.5
null
2011-03-29T18:40:05.443
2011-03-29T18:40:05.443
null
null
3911
null
8937
2
null
3634
0
null
Pierre Legendre has some Code on his homepage: [nest.anova.perm.R (D. Borcard and P. Legendre): Nested anova with permutation tests (main factor and one nested factor, balanced design).](http://www.bio.umontreal.ca/legendre/indexEn.html)
null
CC BY-SA 2.5
null
2011-03-29T19:00:57.517
2011-03-29T19:00:57.517
null
null
1050
null
8938
2
null
8930
11
null
AdaBoost can use multiple instances of the same classifier with different parameters. Thus, previously linear classifiers can be combined into a nonlinear classifier. Or, as the AdaBoost people like to put it, multiple weak learners can make one strong learner. A nice picture can be found [here](http://doc.prsdstudio.com/2.1/kb/16.html), at the bottom. Basically, it goes as with any other learning algorithm: on some datasets it works, on some it doesn't. There surely are datasets out there where it excels. And maybe you haven't chosen the right weak learner yet. Did you try logistic regression? Did you visualize how the decision boundaries evolve as learners are added? Maybe you can tell what is going wrong.
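To make the "one weak learner type, many parameterizations" point concrete, here is a minimal AdaBoost sketch in Python (decision stumps on an invented 1-D dataset; this is an illustration of the algorithm, not the MALLET implementation): no single threshold stump can separate the middle interval, but three reweighted stumps together classify it perfectly.

```python
import math

# Invented 1-D data: positives in the middle, negatives on both sides.
X = list(range(10))
y = [-1, -1, -1, 1, 1, 1, 1, -1, -1, -1]

def stump_predict(thr, polarity, x):
    # polarity +1: predict +1 iff x >= thr; polarity -1: predict +1 iff x < thr.
    return polarity if x >= thr else -polarity

def best_stump(weights):
    """Exhaustively pick the threshold/polarity with smallest weighted error."""
    thresholds = [x - 0.5 for x in X] + [max(X) + 0.5]
    best = None
    for thr in thresholds:
        for polarity in (1, -1):
            err = sum(w for w, x, t in zip(weights, X, y)
                      if stump_predict(thr, polarity, x) != t)
            if best is None or err < best[0]:
                best = (err, thr, polarity)
    return best

# AdaBoost: reweight the data, refit the same stump family each round.
weights = [1.0 / len(X)] * len(X)
ensemble = []   # list of (alpha, thr, polarity)
for _ in range(3):
    err, thr, polarity = best_stump(weights)
    alpha = 0.5 * math.log((1 - err) / max(err, 1e-10))
    ensemble.append((alpha, thr, polarity))
    # Upweight the points this stump got wrong, downweight the rest.
    weights = [w * math.exp(-alpha * t * stump_predict(thr, polarity, x))
               for w, x, t in zip(weights, X, y)]
    z = sum(weights)
    weights = [w / z for w in weights]

def predict(x):
    score = sum(a * stump_predict(thr, pol, x) for a, thr, pol in ensemble)
    return 1 if score > 0 else -1

train_accuracy = sum(predict(x) == t for x, t in zip(X, y)) / len(X)
```

Each round fits the *same* family of weak learners to a reweighted version of the data, which is how one weak learner type still yields many distinct classifiers.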
null
CC BY-SA 2.5
null
2011-03-29T19:06:49.200
2011-03-29T19:06:49.200
null
null
2860
null
8939
2
null
8933
2
null
Let's assume all your 8 subjects chose to answer 3 on the (1, 2, 3, 4, 5) scale. Let's assume that their opinions were continuous in their minds, and they rounded it to the closest values of the scale. This means that the original opinion of each subject was in the range $[2.5, 3.5)$. ``` > mean(replicate(1e5, diff(range(rnorm(8))))) [1] 2.841661 > mean(replicate(1e5, diff(range(rnorm(8))))) [1] 2.847447 > 1 / 2.845 [1] 0.3514938 ``` The above simulation shows that if you take 8 samples from a normal distribution of sd 0.35 they will cover an interval of the approximate width of 1. Thus in your population the sd is likely to be 0.35 or less. Rounding to one of 1, 2, 3, 4, 5 is not precise enough to measure the sd in this case.
null
CC BY-SA 2.5
null
2011-03-29T19:15:00.720
2011-03-29T19:20:45.847
2011-03-29T19:20:45.847
3911
3911
null
8940
2
null
8933
6
null
This behavior is questionable but documented. The help for "confidence" states: > If standard_dev ≤ 0, CONFIDENCE returns the #NUM! error value. ... If we assume alpha equals 0.05, we need to calculate the area under the standard normal curve that equals (1 - alpha), or 95 percent. This value is ± 1.96. The confidence interval is therefore: $$\bar{x} \pm 1.96\left(\frac{\sigma}{\sqrt{n}}\right).$$ (Yes, this is badly phrased, but that's a direct quote.) To overcome these (somewhat artificial) limitations, compute the confidence limits yourself (according to this formula) as ``` =AVERAGE(X) + NORMSINV(1-0.05/2) * STDEV(X)/SQRT(COUNT(X)) =AVERAGE(X) - NORMSINV(1-0.05/2) * STDEV(X)/SQRT(COUNT(X)) ``` where 'X' names a range containing your data (such as B33:I33) and '0.05' is $\alpha$ (the complement of the desired confidence), just as before. In your case, because STDEV(X) is 0, both limits will equal the mean. This is legitimate, although it has its own problems (because it almost surely fails to cover the true mean).
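The same computation can be cross-checked outside the spreadsheet; a Python sketch with invented data, where `NormalDist().inv_cdf` plays the role of NORMSINV:

```python
import math
from statistics import NormalDist, mean, stdev

def normal_ci(xs, alpha=0.05):
    """x-bar +/- z_{1-alpha/2} * s/sqrt(n), the same formula as above."""
    z = NormalDist().inv_cdf(1 - alpha / 2)      # ~1.96 for alpha = 0.05
    half_width = z * stdev(xs) / math.sqrt(len(xs))
    m = mean(xs)
    return m - half_width, m + half_width

lo, hi = normal_ci([2, 4, 4, 4, 5, 5, 7, 9])
# With zero spread the interval collapses to the mean, as discussed:
lo0, hi0 = normal_ci([3, 3, 3, 3, 3, 3, 3, 3])
```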
null
CC BY-SA 2.5
null
2011-03-29T19:15:03.043
2011-03-29T19:15:03.043
null
null
919
null
8943
5
null
null
0
null
Overview The binomial distribution gives the frequencies of "successes" in a fixed number of independent "trials". It is a discrete distribution parametrized by $p$, the probability of "success" in a trial. For $k$ "successes" in $n$ "trials" ($k \leq n$), the form of the probability mass function is: $$P(k,n;p) = {n \choose k} p^k (1-p)^{n-k}$$ For a binomially distributed variable $X$, the expected value and variance are given by: $$\mathrm{E}[X] = np $$ $$\mathrm{Var}[X] = np(1-p) $$ A common example demonstrating the use of this distribution is finding the probability of a given number of heads or tails in a certain number of coin flips.
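For instance, the coin-flip example can be computed directly from the mass function (a Python sketch):

```python
from math import comb

def binom_pmf(k, n, p):
    """P(k successes in n trials), the mass function above."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Probability of exactly 2 heads in 4 fair coin flips: C(4,2)/2^4 = 6/16.
p_two_heads = binom_pmf(2, 4, 0.5)

# Expected value np and variance np(1-p) for n = 4, p = 0.5.
mean_, var_ = 4 * 0.5, 4 * 0.5 * 0.5
```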
null
CC BY-SA 3.0
null
2011-03-29T20:52:24.643
2013-08-31T15:33:52.063
2013-08-31T15:33:52.063
27581
919
null
8944
4
null
null
0
null
The binomial distribution gives the frequencies of "successes" in a fixed number of independent "trials". Use this tag for questions about data that might be binomially distributed or for questions about the theory of this distribution.
null
CC BY-SA 3.0
null
2011-03-29T20:52:24.643
2013-08-31T15:25:31.137
2013-08-31T15:25:31.137
27581
919
null
8945
5
null
null
0
null
Nonlinear regression concerns models that are inherently nonlinear: that is, they cannot be expressed as a linear combination of parameters $\beta$. It is practically the same thing to say that a nonlinear model cannot be put into the form $Y = X\beta + \epsilon$ after a preliminary mathematical re-expression of $X$, $Y$, or both. For example, $Y = \log(X)\beta + \epsilon$ and $Y = \exp(X\beta + \epsilon)$ are both linear whereas $Y = \exp(X\beta) + \epsilon$ and $Y = \log(X + \beta) + \epsilon$ are nonlinear. (As usual, $Y$ is a dependent variable (or vector thereof), $X$ is a vector of independent variables, $\beta$ is a set of parameters to be estimated, and $\epsilon$ is random "error" with zero mean.)
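As a tiny illustration of why such models need iterative (or, here, brute-force) fitting rather than one linear-algebra step, a Python sketch estimates $\beta$ in $Y = \exp(X\beta) + \epsilon$ by direct search over a grid; the data are invented and noiseless, so the true value is recovered.

```python
import math

# Invented noiseless data from y = exp(0.5 * x).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [math.exp(0.5 * x) for x in xs]

def sse(beta):
    """Sum of squared errors for the nonlinear model y = exp(beta * x)."""
    return sum((y - math.exp(beta * x)) ** 2 for x, y in zip(xs, ys))

# No closed-form normal equations exist for beta here;
# minimize the SSE by grid search over [0, 1].
grid = [i / 1000.0 for i in range(1001)]
beta_hat = min(grid, key=sse)
```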
null
CC BY-SA 3.0
null
2011-03-29T21:04:21.467
2017-01-08T14:00:42.063
2017-01-08T14:00:42.063
73527
919
null
8946
4
null
null
0
null
Use this tag only for regression models in which the response is a nonlinear function of the parameters. Do not use this tag for nonlinear data transformation.
null
CC BY-SA 4.0
null
2011-03-29T21:04:21.467
2019-02-05T09:48:09.413
2019-02-05T09:48:09.413
28666
919
null
8947
1
null
null
1
150
Let's say I have a matrix where the underlying data is from the entire population. Let's say I also have a matrix where the underlying data has been sampled from that population. How would I analyze how well the sample represents the population? My gut instinct is to convert it into a row-col listing: ``` row col popvalue samplevalue 1 1 34.5 33.2 [...] i j 54.4 51.2 ``` and then run ANOVA between the two. Does this sound right, or is there a better way to measure this?
How do I compare a matrix of population values to a matrix of sample values?
CC BY-SA 2.5
null
2011-03-29T22:44:34.297
2011-03-30T01:32:16.820
2011-03-29T22:46:50.257
919
1965
[ "sampling", "matrix" ]
8948
2
null
8947
1
null
This is difficult to answer because the within-cell variance is unknown. The variances of the popvalue and samplevalue will give you only between-cell variance. With two-way ANOVA, you could model the average difference between popvalue and samplevalue by row and by column. This doesn't tell you that much. As @whuber mentioned, you may be interested in a different parameter. Moreover, you may be interested in cell-level variation rather than just row- or column-level variation. Another thing to consider: if you convert it to that sort of row-column listing and treat each cell as independent, you'll lose information about spatial correlations. If these are aggregated data, I think you'll want to run some sort of chi-square test on the un-aggregated data. But more information about your situation would help.
null
CC BY-SA 2.5
null
2011-03-30T01:19:31.500
2011-03-30T01:19:31.500
null
null
3874
null
8949
2
null
8947
1
null
There are many different kinds of matrices that can be obtained from data. Off the top of my head: - a covariance matrix: you have several variables measured on each data point, and you want to see how they are correlated - a misclassification matrix: you have a categorical output or response variable, and another variable that attempts to estimate or predict the response - a grid of spatial measurements: adjacent rows/columns are in some sense closer to each other than to non-adjacent rows/columns - a contingency table of counts or cell averages The methods you'd use to analyse these matrices differ. I'd guess that your matrix is the last kind, judging from the ANOVA mention, but it would be good to have more information.
null
CC BY-SA 2.5
null
2011-03-30T01:32:16.820
2011-03-30T01:32:16.820
null
null
1569
null
8950
2
null
8924
2
null
A similar approach to the Fletcher paper is used in marketing testing, where we can arbitrarily separate the effects of interventions (such as advertising) into (a) a change in the number buying the brand (i.e. the proportion of zeroes) and (b) a change in the frequency of buying the brand (sales given that any sales occur at all). This is a solid approach and conceptually meaningful in the marketing context and in the ecological context Fletcher discusses. In fact, this can be extended to (c) a change in the size of each purchase.
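In symbols, the decomposition is sales per capita = P(buy) × E[purchases | buyer] × E[size | purchase]; a toy Python check with invented numbers:

```python
# Invented marketing numbers for one period.
p_buy = 0.20             # (a) share of people buying the brand at all
freq_given_buy = 3.0     # (b) purchases per buyer
size_per_purchase = 5.0  # (c) units per purchase

sales_per_capita = p_buy * freq_given_buy * size_per_purchase  # 3.0 units

# An intervention that only recruits new buyers (a) leaves (b) and (c) fixed:
sales_after = 0.25 * freq_given_buy * size_per_purchase        # 3.75 units
```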
null
CC BY-SA 2.5
null
2011-03-30T03:01:12.940
2011-03-30T03:01:12.940
null
null
3919
null
8951
2
null
8918
1
null
At the simple end of the spectrum: Minitab will do "best subsets regression", which will find the best one predictor model, the best two predictor model, the best 3 predictor model, the best 4 predictor model, etc. The criterion is r-squared.
null
CC BY-SA 2.5
null
2011-03-30T03:10:44.310
2011-03-30T03:10:44.310
null
null
3919
null
8952
2
null
8933
0
null
Let's suppose that you have a number of instances in which the average rating is 3. Each of these will have a variance; if the raters all answered "3" then that variance will be zero. In such cases, why not use the average of the variances across the instances in which the average rating is 3 (including your 0 value)? This will give you a real number and a reasonable confidence interval. I would use the median rather than the mean to "average" the variances, since it is less subject to extremes (although extremes would be unlikely on a fixed 5-point scale). Of course, you might decide that any average rating in some range (such as 2.5 to 3.499) counts as "3" in order to give you more values to average. This procedure is simple and intuitive. I like whuber's approach as well, but then somebody is going to ask you "why 95%? why not some other %?". You are less likely to get this question if you take a simple average.
null
CC BY-SA 2.5
null
2011-03-30T03:25:30.340
2011-03-30T03:25:30.340
null
null
3919
null
8953
2
null
8884
0
null
mpiktas gave a very good technical answer. There's a nice simulation of this here: [http://onlinestatbook.com/stat_sim/sampling_dist/index.html](http://onlinestatbook.com/stat_sim/sampling_dist/index.html) You can manipulate the distribution at the top of this demo to show a combination of different distributions (such as bimodal), and the distribution of sample means will still be normal.
null
CC BY-SA 2.5
null
2011-03-30T03:34:38.853
2011-03-30T03:34:38.853
null
null
3919
null
8954
2
null
258
1
null
[K-means](http://en.wikipedia.org/wiki/K-means_clustering) clustering for unsupervised learning.
null
CC BY-SA 2.5
null
2011-03-30T04:20:40.470
2011-03-30T04:20:40.470
null
null
3270
null
8955
1
8964
null
9
5246
I am looking for any help, advice or tips in how to explain heterogeneity / heteroscedasticity to biologists in my department. In particular I want to explain why it's important to look for it and deal with it if it exists. I was looking for opinions on the following questions. - Does heterogeneity influence the reliability of random effect estimates? I am pretty sure it does, but I couldn't find a paper. - How serious a problem is heterogeneity? I have found conflicting views on this: while some say that model standard errors etc. will be unreliable, I have also read that it is only a problem if the heterogeneity is severe. How severe is severe? - Advice on modelling heterogeneity. Currently, I focus largely on the nlme package in R and the use of variance covariates; this is pretty simple and most people here use R, so providing scripts is useful. I am also using the MCMCglmm package as well, but other suggestions are welcome, particularly for non-normal data. - Any other suggestions are welcome.
Advice on explaining heterogeneity / heteroscedasticty
CC BY-SA 3.0
null
2011-03-30T09:12:59.377
2013-07-12T20:15:41.853
2013-07-12T20:15:41.853
7290
3136
[ "regression", "mixed-model", "references", "residuals", "heteroscedasticity" ]
8956
1
null
null
10
54116
I want to run correlations on a number of measurements where Likert scales were used. Looking at the scatterplots, it appears the assumptions of linearity and homoscedasticity may have been violated. - Given that there appears to be some debate around ordinal-level ratings approximating interval-level scaling, should I play it safe and use Spearman's rho rather than Pearson's r? - Is there a reference that I can cite if I go with Spearman's rho?
Spearman's or Pearson's correlation with Likert scales where linearity and homoscedasticity may be violated
CC BY-SA 2.5
null
2011-03-30T09:28:01.670
2017-03-25T00:03:07.783
2011-03-30T15:31:35.083
919
null
[ "correlation", "scales", "heteroscedasticity", "likert" ]
8957
2
null
8956
2
null
You should almost certainly go for Spearman's rho or Kendall's tau. Often, if the data are non-normal but variances are equal, you can go for Pearson's r, as it doesn't make a huge amount of difference. If the variances are significantly different, then you need a non-parametric method. You could probably cite almost any introductory statistics textbook to support your use of Spearman's rho. Update: if the assumption of linearity is violated, then you should not be using the Pearson correlation coefficient on your data, as it assumes a linear relationship. Spearman's rho is acceptable without linearity and is meant for more general monotonic relationships between the variables. If you want to use Pearson's correlation coefficient, you could look at log-transforming your data, as this might deal with the non-linearity.
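As a quick illustration of the difference (simulated data, base R only): Spearman's rho works on ranks, so a monotonic but non-linear relationship that pulls Pearson's r down leaves it largely untouched.

```r
set.seed(1)
x <- 1:50
y <- exp(x / 10) + rnorm(50, sd = 2)  # monotonic but strongly non-linear

cor(x, y, method = "pearson")   # attenuated by the non-linearity
cor(x, y, method = "spearman")  # close to 1, since the ranks mostly agree
cor(x, y, method = "kendall")   # also rank-based
```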
null
CC BY-SA 3.0
null
2011-03-30T09:56:10.943
2017-03-25T00:03:07.783
2017-03-25T00:03:07.783
8413
656
null
8958
2
null
8956
13
null
### Previous answers on this site: Related questions have been asked a few times on this site. Check out - whether to treat likert scales as ordinal or interval - choosing between pearson and spearman correlation - Spearman or Pearson with non-normal data ### Scales versus items: From my experience, there is a difference between running analyses on a likert item as opposed to a likert scale. A likert scale is the sum of multiple items. After summing multiple items, likert scales obtain more possible values, the resulting scale is less lumpy. Such scales often have a sufficient number of points that many researchers are prepared to treat them as continuous. Of course, some would argue that this is a bit cavalier, and much has been written in psychometrics about how best to measure psychological and related constructs. ### Standard practice in social sciences: From my casual observations from reading journal articles in psychology, the majority of bivariate relationships between multiple-item likert scales are analysed using Pearson's correlation coefficient. Here, I'm thinking about scales like personality, intelligence, attitudes, well-being, and so forth. If you have scales like this, it is worth considering that your results will be compared to previous results where Pearson may have been the dominant choice. ### Compare methods: It is an interesting exercise to compare Pearson's with Spearman's (and perhaps even Kendall's tau). However, you are still left with the decision of which statistic to use, and this ultimately depends on what definition you have of bivariate association. ### Heteroscedasticity A correlation coefficient is an accurate summary of the linear relationship between two variables even in the absence of Homoscedasticity (or perhaps we should say bivariate normality given that neither variable is a dependent variable). ### Nonlinearity If there is a non-linear relationship between your two variables, this is interesting. 
However, both variables could still be treated as continuous variables, and thus, you could still use Pearson's. For example, age often has an inverted-U relationship with other variables such as income, yet age is still a continuous variable. I suggest that you produce a scatter plot and fit some smoothed fits (such as a spline or LOESS) to explore any non-linear relationships. If the relationship is truly non-linear then linear correlation is not the best choice for describing such a relationship. You might then want to explore polynomial or nonlinear regression.
null
CC BY-SA 2.5
null
2011-03-30T10:41:17.973
2011-03-30T13:19:37.973
2017-04-13T12:44:37.420
-1
183
null
8959
1
null
null
2
37894
How can I change the labels of the vertical y axis in a boxplot, e.g. from numbers to text? For example, I would like to replace {-2, -1, 0, 1, 2} with {0hour, 1hours, 2hours, ...}.
How to customize axis labels in a boxplot?
CC BY-SA 2.5
null
2011-03-30T10:43:28.360
2011-03-30T12:16:21.123
2011-03-30T12:16:21.123
930
null
[ "r", "boxplot" ]
8960
1
8965
null
10
29607
I have a histogram of wind speed data which is often represented using a weibull distribution. I would like to calculate the weibull shape and scale factors which give the best fit to the histogram. I need a numerical solution (as opposed to [graphic solutions](http://www.weibull.com/LifeDataWeb/probability_plotting.htm)) because the goal is to determine the weibull form programmatically. Edit: Samples are collected every 10 minutes, the wind speed is averaged over the 10 minutes. Samples also include the maximum and minimum wind speed recorded during each interval which are ignored at present but I would like to incorporate later. Bin width is 0.5 m/s ![Histogram for 1 month of data](https://i.stack.imgur.com/C0WCk.png)
How can I determine weibull parameters from data?
CC BY-SA 2.5
null
2011-03-30T10:47:17.780
2015-12-09T20:17:17.730
2011-03-30T15:29:18.080
919
3579
[ "distributions", "histogram", "java" ]
8962
2
null
258
4
null
[Gaussian Process classifier](http://www.gaussianprocess.org/) - it gives probabilistic predictions (which is useful when your operational relative class frequencies differ from those in your training set, or equivalently your false-positive/false-negative costs are unknown or variable). It also provides an indication of the uncertainty in model predictions due to the uncertainty in "estimating the model" from a finite dataset. The co-variance function is equivalent to the kernel function in an SVM, so it can also operate directly on non-vectorial data (e.g. strings or graphs etc). The mathematical framework is also neat (but don't use the Laplace approximation). Automated model selection via maximising marginal likelihood. Essentially combines good features of logistic regression and SVM.
null
CC BY-SA 2.5
null
2011-03-30T11:35:21.860
2011-03-30T11:35:21.860
null
null
887
null
8963
2
null
8955
6
null
One option is to use a simulation. So set up a model where you specifically specify the heterogeneity, say as $var(\alpha_i)=\overline{X}_i^2\sigma^2_u$. Then generate your data from this model, taking random intercepts as a simple example. $$\alpha_i=\overline{X}_i u_i\;\;\;\;\;\; u_i\sim N(0,\sigma^2_u)$$ $$Y_{ij}=\alpha_{i}+\beta X_{ij} + e_{ij}\;\;\;\;\;\; e_{ij}\sim N(0,\sigma^2_e)$$ (hope this notation makes sense). I believe playing around with a set-up such as this will help you answer question 2). So you would fit this model using a random intercept, when in fact it should be a random slope (which gives you a partial answer to question 3 - random intercepts can account for "fanning" to a degree - this is "level 2 fanning"). The idea of the above is to try as hard as you can to break your modeling method - try extreme conditions consistent with what you know about the data, and see what happens. If you are struggling to find these conditions, then don't worry. I did a quick check on heteroscedasticity for OLS, and it doesn't seem to affect the estimated betas too much. To me it just seems like heteroscedasticity will in some places be giving an under-estimate of the likely error, and in other places it will give an over-estimate of the likely error (in predictive terms). See below: awaiting plot of data here, user currently frustrated with computers And one thing which I always find amusing is this "non-normality of the data" that people worry about. The data does not need to be normally distributed, but the error term does. If this were not true, then GLMs would not work - GLMs use a normal approximation to the likelihood function to estimate the parameters, as do GLMMs. So I'd say if estimating fixed effect parameters is the main goal then there's not much to worry about, but you may get better results for prediction by taking heteroscedasticity into account.
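A minimal R sketch of this kind of simulation, following the model above (all parameter values are purely illustrative):

```r
set.seed(1)
n.groups <- 30  # number of level-2 units
n.per <- 10     # observations per unit
beta <- 2; sigma.u <- 0.5; sigma.e <- 1

X <- matrix(runif(n.groups * n.per, 1, 5), nrow = n.groups)
Xbar <- rowMeans(X)

# var(alpha_i) = Xbar_i^2 * sigma.u^2, as specified above
alpha <- Xbar * rnorm(n.groups, 0, sigma.u)

Y <- alpha[row(X)] + beta * X + rnorm(length(X), 0, sigma.e)

# long format: column-major order, so group ids cycle 1..n.groups
dat <- data.frame(y = as.vector(Y), x = as.vector(X),
                  g = factor(rep(1:n.groups, times = n.per)))
# now fit, e.g., a plain random-intercept model with nlme or lme4 and
# compare it against a fit that models the level-2 variance explicitly
```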
null
CC BY-SA 2.5
null
2011-03-30T11:43:19.377
2011-03-30T11:43:19.377
null
null
2392
null
8964
2
null
8955
6
null
[Allometry](http://en.wikipedia.org/wiki/Allometry) would be a good place to start that will be familiar to biologists. Logarithmic transformations are often used in allometry because the data have a power-law form, but also because the noise process is heteroskedastic (as the variability is proportional to size). For an example where this has caused a severe problem, see ["Allometric equations for predicting body mass of dinosaurs"](https://doi.org/10.1111/j.1469-7998.2009.00594.x), where the conclusion that dinosaurs were only half the size previously thought was incorrect because an invalid assumption of homoscedasticity was made (see the correspondence for details).
null
CC BY-SA 2.5
null
2011-03-30T11:51:54.740
2011-03-30T11:51:54.740
null
null
887
null
8965
2
null
8960
11
null
Maximum Likelihood Estimation of Weibull parameters may be a good idea in your case. A form of the Weibull distribution looks like this: $$(\gamma / \theta) (x)^{\gamma-1}\exp(-x^{\gamma}/\theta)$$ Where $\theta, \gamma > 0$ are parameters. Given observations $X_1, \ldots, X_n$, the log-likelihood function is $$L(\theta, \gamma)=\displaystyle \sum_{i=1}^{n}\log f(X_i| \theta, \gamma)$$ One "programming based" solution would be to optimize this function using constrained optimization. Solving for the optimum solution: $$\frac {\partial \log L} {\partial \gamma} = \frac{n}{\gamma} + \sum_1^n \log x_i - \frac{1}{\theta}\sum_1^nx_i^{\gamma}\log x_i = 0 $$ $$\frac {\partial \log L} {\partial \theta} = -\frac{n}{\theta} + \frac{1}{\theta^2}\sum_1^nx_i^{\gamma}=0$$ On eliminating $\theta$ we get: $$\Bigg[ \frac {\sum_1^n x_i^{\gamma} \log x_i}{\sum_1^n x_i^{\gamma}} - \frac {1}{\gamma}\Bigg]=\frac{1}{n}\sum_1^n \log x_i$$ Now this can be solved for the ML estimate $\hat \gamma$. This can be accomplished with the aid of standard iterative root-finding procedures, such as Newton-Raphson or other numerical methods. Now $\theta$ can be found in terms of $\hat \gamma$ as: $$\hat \theta = \frac {\sum_1^n x_i^{\hat \gamma}}{n}$$
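In R, the profile equation for $\hat \gamma$ can be handed to a one-dimensional root finder instead of hand-coding Newton-Raphson. This is only a sketch (the data are simulated, and the search interval may need widening for your data):

```r
set.seed(1)
x <- rweibull(500, shape = 2, scale = 1.5)  # note: theta here = scale^shape

# Left-hand side minus right-hand side of the profile equation above
profile.eq <- function(g) {
  sum(x^g * log(x)) / sum(x^g) - 1 / g - mean(log(x))
}

gamma.hat <- uniroot(profile.eq, interval = c(0.1, 10))$root
theta.hat <- mean(x^gamma.hat)  # theta-hat = sum(x_i^gamma-hat) / n
c(gamma = gamma.hat, theta = theta.hat)
```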
null
CC BY-SA 2.5
null
2011-03-30T12:08:23.683
2011-03-30T13:02:54.993
2011-03-30T13:02:54.993
2116
1307
null
8966
2
null
8959
6
null
Here's a reproducible example, that you might adapt to fit with what you want to achieve with your data. ``` opar <- par(las=1) df <- data.frame(y=rnorm(100), x=gl(2, 50, labels=letters[1:2])) with(df, plot(y ~ x, axes=FALSE)) axis(1, at=1:2, labels=levels(df$x)) axis(2, at=seq(-3, 3, by=1), labels=paste(seq(-3, 3, by=1), "hr", sep="")) box() par(opar) ```
null
CC BY-SA 2.5
null
2011-03-30T12:10:39.237
2011-03-30T12:10:39.237
null
null
930
null
8967
2
null
8959
2
null
``` data(cars) with(cars, boxplot(dist ~ speed)) ``` As `speed` was numerical the boxplot has numerical values on the horizontal axis. Let's create a `character` variable: ``` hours = paste(cars$speed, "hours", sep="") with(cars, boxplot(dist ~ hours)) ``` Now the horizontal axis has text labels.
null
CC BY-SA 2.5
null
2011-03-30T12:11:34.760
2011-03-30T12:11:34.760
null
null
3911
null
8968
1
null
null
1
359
I'm trying to solve the following issue : Let's say that I have three normal random variables a,b,c, non correlated. Let's say that I only have two observations of these, M and N where : ``` M = a + b N = a + c ``` And another variable I'm trying to explain : ``` R = a + b + c + err ``` I can do a simple regression on this data, `lm(R ~ M + N + 0)`, I get the same loading for both `M` and `N`, equal to `0.66`. My final model is `1.33a + 0.66b + 0.66c`. I know that this model is the best linear model here, but what could I use to improve it. Are there methods to properly get `a, b, c` from `M`and `N` and do the regression on them ? I've tried PCA but I only get two vectors, not three. Basically, I'd need a method that would project `M` and `N` into the three interesting components here. EDIT I realise I haven't been clear here. $M$, $N$, $R$ are three vectors, with a high number of observations. If they had the relations above with three (unknown) factors $a$, $b$, $c$, then a simple regression would give me : $\hat{R} = 0.66 M + 0.66 N = 1.33 a + 0.66 b + 0.66 c$. I am wondering if there is an alternative technique for modelling $R$, that would give a closer result to the actual $R$. Something that would somehow reconstruct the common term $a$ from $M$ and $N$ automatically, and take that into account to finally get : $\hat{R} = 1 a + 1 * (M - a) + 1 * (N - a)$ I hope I'm a bit more clear in my explanations ...
How can I improve a simple regression here?
CC BY-SA 3.0
null
2011-03-30T13:32:31.957
2011-05-13T10:45:54.703
2011-04-13T10:40:26.083
3791
3791
[ "regression", "pca", "factor-analysis", "linear-model" ]
8969
2
null
8960
13
null
Use fitdistrplus: [Need help identifying a distribution by its histogram](https://stats.stackexchange.com/questions/8662/need-help-identifying-a-distribution-by-its-histogram/8674#8674) Here's an example of how the Weibull Distribution is fit: ``` library(fitdistrplus) #Generate fake data shape <- 1.9 x <- rweibull(n=1000, shape=shape, scale=1) #Fit x data with fitdist fit.w <- fitdist(x, "weibull") summary(fit.w) plot(fit.w) Fitting of the distribution ' weibull ' by maximum likelihood Parameters : estimate Std. Error shape 1.8720133 0.04596699 scale 0.9976703 0.01776794 Loglikelihood: -636.1181 AIC: 1276.236 BIC: 1286.052 Correlation matrix: shape scale shape 1.0000000 0.3166085 scale 0.3166085 1.0000000 ``` ![enter image description here](https://i.stack.imgur.com/oH6nP.jpg)
null
CC BY-SA 2.5
null
2011-03-30T14:09:37.057
2011-03-30T14:38:19.420
2017-04-13T12:44:45.640
-1
2775
null
8972
1
27646
null
9
1188
Consider an experiment with multiple human participants, each measured multiple times in two conditions. A mixed effects model can be formulated (using [lme4](http://cran.r-project.org/web/packages/lme4/index.html) syntax) as: ``` fit = lmer( formula = measure ~ (1|participant) + condition ) ``` Now, say I want to generate bootstrapped confidence intervals for the predictions of this model. I think I've come up with a simple and computationally efficient method, and I'm sure I'm not the first to think of it, but I'm having trouble finding any prior publications describing this approach. Here it is: - Fit the model (as above), call this the "original model" - Obtain predictions from the original model, call these the "original predictions" - Obtain residuals from the original model associated with each response from each participant - Resample the residuals, sampling participants with replacement - Fit a linear mixed effects model with gaussian error to the residuals, call this the "interim model" - Compute predictions from the interim model for each condition (these predictions will be very close to zero), call these the "interim predictions" - Add the interim predictions to the original predictions, call the result the "resample predictions" - Repeat steps 4 through 7 many times, generating a distribution of resample predictions for each condition from which one can compute CIs. I've seen ["residual bootstrapping"](http://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29#Resampling_residuals) procedures in the context of simple regression (i.e. not a mixed model) where residuals are sampled as the unit of resampling and then added to the predictions of the original model before fitting a new model on each iteration of the bootstrap, but this seems rather different from the approach I describe where residuals are never resampled, people are, and only after the interim model is obtained do the original model predictions come into play. 
This last feature has a really nice side-benefit in that no matter the complexity of the original model, the interim model can always be fit as a gaussian linear mixed model, which can be substantially faster in some cases. For example, I recently had binomial data and 3 predictor variables, one of which I suspected would cause strongly non-linear effects, so I had to employ [Generalized Additive Mixed Modelling](http://cran.r-project.org/web/packages/gamm4/index.html) using a binomial link function. Fitting the original model in this case took over an hour, whereas fitting the gaussian LMM on each iteration took mere seconds. I really don't want to claim priority on this if it's already a known procedure, so I'd be very grateful if anyone can provide information on where this might have been described before. (Also, if there are any glaring problems with this approach, do let me know!)
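For concreteness, a rough R sketch of steps 1-8 above (untested; it assumes a recent lme4 with `predict(..., re.form = NA)`, a long-format data frame `dat`, and a prediction frame `conds` with one row per condition - all of those names are placeholders):

```r
library(lme4)

fit0 <- lmer(measure ~ (1 | participant) + condition, data = dat)  # step 1
pred0 <- predict(fit0, newdata = conds, re.form = NA)              # step 2
dat$res <- residuals(fit0)                                         # step 3

boot.pred <- replicate(1000, {
  ids <- sample(unique(dat$participant), replace = TRUE)           # step 4
  res.dat <- do.call(rbind, lapply(seq_along(ids), function(i) {
    d <- dat[dat$participant == ids[i], ]
    d$participant <- i  # resampled copies get distinct ids
    d
  }))
  interim <- lmer(res ~ (1 | participant) + condition,
                  data = res.dat)                                  # step 5
  pred0 + predict(interim, newdata = conds, re.form = NA)          # steps 6-7
})

# step 8: percentile CIs, one per condition
apply(boot.pred, 1, quantile, probs = c(0.025, 0.975))
```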
Is there a name for this type of bootstrapping?
CC BY-SA 2.5
null
2011-03-30T15:27:52.997
2012-05-03T15:44:56.057
null
null
364
[ "mixed-model", "bootstrap" ]
8974
1
8976
null
32
12666
There seems to be in increasing discussion about pie charts. The main arguments against it seem to be: - Area is perceived with less power than length. - Pie charts have very low data-point-to-pixel ratio However, I think they can be somehow useful when portraying proportions. I agree to use a table in most cases but when you are writing a business report and you've just included hundreds of tables why not having a pie chart? I'm curious about what the community thinks about this topic. Further references are welcome. I include a couple of links: - http://www.juiceanalytics.com/writing/the-problem-with-pie-charts/ - http://www.usf.uni-osnabrueck.de/~breiter/tools/piechart/warning.en.html --- In order to conclude this question I decided to build an example of pie-chart vs waffle-chart. ![enter image description here](https://i.stack.imgur.com/oMPL4.jpg)
Problems with pie charts
CC BY-SA 3.0
null
2011-03-30T16:24:11.467
2022-09-29T17:21:25.803
2017-04-07T14:31:59.690
11887
2902
[ "data-visualization", "many-categories", "pie-chart" ]
8975
2
null
8974
1
null
I think you've answered your own question for the 2nd bullet point. If you want to take up valuable real estate, so be it! However, the first bullet is more important. With a bar chart the observer needs to estimate relative proportion based upon only 1 axis. With a pie chart, judgments along at least 2 axes are involved. And one axis is curved. I think that pie charts are used most effectively when you have many categories in the pie, with a legend, and it is not all that important to judge proportion.
null
CC BY-SA 2.5
null
2011-03-30T16:50:56.217
2011-03-30T16:50:56.217
null
null
3489
null
8976
2
null
8974
25
null
I wouldn't say there's an increasing interest or debate about the use of pie charts. They are just found everywhere on the web and in so-called "predictive analytic" solutions. I guess you know Tufte's work (he also discussed the use of [multiple pie charts](http://www.edwardtufte.com/bboard/q-and-a-fetch-msg?msg_id=00018S)), but more funny is the fact that the second chapter of Wilkinson's Grammar of Graphics starts with "How to make a pie chart?". You're probably also aware that Cleveland's [dotplot](http://www.b-eye-network.com/view/2468), or even a barchart, will convey much more precise information. The problem seems to really stem from the way our visual system is able to deal with spatial information. It is even quoted in the R software; from the on-line help for `pie`, > Cleveland (1985), page 264: “Data that can be shown by pie charts always can be shown by a dot chart. This means that judgements of position along a common scale can be made instead of the less accurate angle judgements.” This statement is based on the empirical investigations of Cleveland and McGill as well as investigations by perceptual psychologists. Cleveland, W. S. (1985) The elements of graphing data. Wadsworth: Monterey, CA, USA. There are variations of pie charts (e.g., donut-like charts) that all raise the same problems: We are not good at evaluating angle and area. Even the ones used in "corrgram", as described in Friendly, [Corrgrams: Exploratory displays for correlation matrices](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.19.1268&rep=rep1&type=pdf), American Statistician (2002) 56:316, are hard to read, IMHO. 
At some point, however, I wondered whether they might still be useful, for example (1) displaying two classes is fine but increasing the number of categories generally worsens the reading (especially with strong imbalance between %), (2) relative judgments are better than absolute ones, that is, displaying two pie charts side by side should favor a better appreciation of the results than a simple estimate from, say, a pie chart mixing all results (e.g. a two-way cross-classification table). Incidentally, I asked a similar question to Hadley Wickham who kindly pointed me to the following articles: - Spence, I. (2005). No Humble Pie: The Origins and Usage of a Statistical Chart. Journal of Educational and Behavioral Statistics, 30(4), 353–368. - Heer, J. and Bostock, M. (2010). Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visualization Design. CHI 2010, April 10–15, 2010, Atlanta, Georgia, USA. In sum, I think they are just good for grossly depicting the distribution of 2 to 3 classes (I use them, from time to time, to show the distribution of males and females in a sample on top of a histogram of ages), but they must be accompanied by relative frequencies or counts to be really informative. A table would still do a better job since you can add margins, and go beyond 2-way classifications. Finally, there are alternative displays that are built upon the idea of the pie chart. I can think of the square pie or [waffle chart](http://eagereyes.org/communication/Engaging-readers-with-square-pie-waffle-charts.html), described by Robert Kosara in [Understanding Pie Charts](http://eagereyes.org/techniques/pie-charts).
null
CC BY-SA 2.5
null
2011-03-30T18:17:53.490
2011-03-30T18:47:15.337
2011-03-30T18:47:15.337
930
930
null
8977
2
null
8160
1
null
I am sorry, as it may not be a straight answer to your question, but if you are using this "total score" as a predictor of something, why don't you try regression and evaluate the results with the AUC of the ROC? Or, the other way, maybe use Neural Networks / Random Forests / Support Vector Machines on them to predict the given outcome? Regards Luke
null
CC BY-SA 2.5
null
2011-03-30T18:54:57.467
2011-03-30T18:54:57.467
null
null
3870
null
8978
2
null
8974
29
null
My personal problem with pie charts is while they may be useful to show differences like this: ![enter image description here](https://i.stack.imgur.com/r5mw1.png) way too many people use it to show that: ![enter image description here](https://i.stack.imgur.com/dD9sk.png)
null
CC BY-SA 3.0
null
2011-03-30T19:05:42.137
2013-07-01T00:55:02.403
2013-07-01T00:55:02.403
22047
null
null
8979
2
null
8774
6
null
As a follow-up to my comment, if `independence.test` refers to `coin::independence_test`, then you can reproduce a Cochrane and Armitage trend test, as it is used in GWAS analysis, as follows: ``` > library(SNPassoc) > library(coin) > data(SNPs) > datSNP <- setupSNP(SNPs,6:40,sep="") > ( tab <- xtabs(~ casco + snp10001, data=datSNP) ) snp10001 casco T/T C/T C/C 0 24 21 2 1 68 32 10 > independence_test(casco~snp10001, data=datSNP, teststat="quad", scores=list(snp10001=c(0,1,2))) Asymptotic General Independence Test data: casco by snp10001 (T/T < C/T < C/C) chi-squared = 0.2846, df = 1, p-value = 0.5937 ``` This is a conditional version of the CATT. About scoring of the ordinal variable (here, the frequency of the minor allele denoted by the letter `C`), you can play with the `scores=` arguments of `independence_test()` in order to reflect the model you want to test (the above result is for a log-additive model). There are five different genetic models that are generally considered in GWAS, and they reflect how genotypes might be collapsed: codominant (T/T (92) C/T (53) C/C (12), yielding the usual $\chi^2(2)$ association test), dominant (T/T (92) vs. C/T-C/C (65)), recessive (T/T-C/T (145) vs. C/C (12)), overdominant (T/T-C/C (104) vs. C/T (53)) and log-additive (0 (92) < 1 (53) < 2 (12)). Note that genotype recoding is readily available in inheritance functions from the [SNPassoc](http://cran.r-project.org/web/packages/SNPassoc/index.html) package. The "scores" should reflect these collapsing schemes. Following Agresti ([CDA](http://www.stat.ufl.edu/~aa/cda/cda.html), 2002, p. 182), CATT is computed as $n\cdot r^2$, where $r$ stands for the linear correlation between the numerical scores and the binary outcome (case/control), that is ``` z.catt <- sum(tab)*cor(datSNP$casco, as.numeric(datSNP$snp10001))^2 1 - pchisq(z.catt, df = 1) # p=0.5925 ``` There also exist various built-in CATT functions in R/Bioconductor ecosystem for GWAS, e.g. - CATT() from Rassoc, e.g. 
with(datSNP, CATT(table(casco, snp10001), 0.5)) # p=0.5925 (additive/multiplicative) - in snpMatrix, they are reported as 1-df $\chi^2$-tests when you call single.snp.tests() (see the vignette); please note that the default mode of inheritance is the codominant/additive effect. Finally, here are two references that discuss the choice of scoring scheme depending on the genetic model under consideration, and some issues with power/robustness - Zheng, G, Freidlin, B, Li, Z and Gastwirth, JL (2003). Choice of scores in trend tests for case-control studies of candidate-gene associations. Biometrical Journal, 45: 335-348. - Freidlin, B, Zheng, G, Li, Z, and Gastwirth, JL (2002). Trend Tests for Case-Control Studies of Genetic Markers: Power, Sample Size and Robustness. Human Heredity, 53: 146-152. See also the [GeneticsDesign](http://www.bioconductor.org/packages/2.3/bioc/html/GeneticsDesign.html) (bioc) package for power calculation with linear trend tests.
null
CC BY-SA 3.0
null
2011-03-30T19:06:57.707
2011-05-04T16:55:49.133
2011-05-04T16:55:49.133
930
930
null
8980
1
null
null
1
12560
I went to a stats course refresher last week and the instructor talked about data distribution and sampling distribution. I am just practicing with the exercises shown in class. Based on my dataset below, For the data in Week2, what can I explain about the ‘data distribution’. and the ‘sampling distribution’ of the sample mean? I would appreciate any explanation of these two terms. Thanks ``` fishy <- structure(list(week1 = c(2.011, 1.994, 10.332, 7.056, 4.926, 6.12, 2.039, 5.948, 7.731, 8.41, 11.055, 7.157, 5.855, 25.243, 36.553, 76.281, 38.902, 64.689, 42.934, 80.373, 115.858, 145.981,84.735, 163.084, 190.472, 295.79, 254.71, 273.446, 582.495), week2 = c(535.013, 513.534, 824.442, 1130.764, 1396.367, 1122.016, 1263.061, 1449.587, 1588.527, 680.99, 1861.677, 1432.656, 2921.025, 3595.931, 2071.98, 1666.726, 1594.989, 1522.255, 2496.464, 2169.722, 1870.255, 1039.203, 54.847, 0.266, 60.603, 601.822, 244.916, 124.749, 74.059)), .Names = c("week1", "week2"), class = "data.frame", row.names = c(NA, -29L)) boxplot(fishy,week1~week2) ```
Difference between sampling distribution and data distribution
CC BY-SA 2.5
null
2011-03-30T19:09:55.817
2018-04-08T23:08:33.613
2011-03-31T07:37:27.087
null
3965
[ "distributions", "self-study" ]
8981
3
null
null
0
null
"Statistics" can refer variously to the (wide) field of statistical theory and statistical analysis; to constructing functions of data as used in formal procedures; to collections of data; and to summaries of data. Because this site is about statistics and statistical analysis, it is rare that tagging a question with "statistics" will be informative. Use of this tag will signal that your question is extremely general and broad.
null
CC BY-SA 2.5
null
2011-03-30T19:23:14.790
2011-03-30T19:23:14.790
2011-03-30T19:23:14.790
919
-1
null
8982
3
null
null
0
null
This generic tag is only rarely suitable; use it with caution. Consider selecting more specific, descriptive tags.
null
CC BY-SA 2.5
null
2011-03-30T19:23:14.790
2011-03-30T19:23:14.790
2011-03-30T19:23:14.790
919
-1
null
8983
2
null
8980
4
null
Data distribution is the distribution of the observations in your data (for example: the scores of students taking a statistics course). Sampling distribution of the sample mean: Let's imagine you sample the data from the population n times (randomly, each sample has N observations), and for each sample you compute the mean. So you have n means of n samples. Then you have the distribution of the sample mean. Hope it's easy to understand.
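A quick way to see both distributions side by side in R (the skewed exponential population is just an example):

```r
set.seed(1)
pop <- rexp(1e5, rate = 1)  # "data distribution": heavily skewed

# Draw 2000 samples of size 30 and keep each sample's mean
sample.means <- replicate(2000, mean(sample(pop, size = 30)))

par(mfrow = c(1, 2))
hist(pop, main = "Data distribution")
hist(sample.means, main = "Sampling distribution of the mean")  # near-normal
```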
null
CC BY-SA 2.5
null
2011-03-30T21:17:39.063
2011-03-30T21:17:39.063
null
null
null
null
8984
1
8988
null
7
145
We have a large number of samples whose concentration we measure twice, averaging the two values. Typically, the coefficient of variation (cv) for each sample is < 5%, but for a few samples the cv is high. We assume that in these cases something went wrong with one or both concentration measurements. We can afford one more concentration measurement for the samples with high cv's. My question is, how to use the three measurements to achieve the "best" estimate of the true concentration? Average all three measurements? Pick the two with the lowest cv? Or...? Many thanks for any insights or pointers to literature.
Remeasuring "bad" values
CC BY-SA 2.5
null
2011-03-30T21:43:38.600
2011-03-31T01:29:33.023
null
null
3968
[ "measurement-error" ]
8985
2
null
8909
1
null
In my experience, #1 is the better option. If you store the data in any flatfile setup (as you're suggesting) and don't put the rows as your time variable, it becomes that much harder to import into selected programs. For example, I work primarily in Fortran/C, with secondary applications occasionally done in R or MATLAB. To be compatible with all of these, I use ASCII flatfiles to store most of my data, with fixed-column width, fixed-precision reporting. Any time I have to work with something that isn't set up in this way, it always ends up being a hassle, regardless of how sexy or novel the method for storing the data was. Having empty columns isn't actually a problem, so long as you figure out a method for flagging them properly. Leaving them blank isn't actually the best option, as this can read (i.e. in Fortran) as a 0, which for most applications is decidedly different from an empty value. If you think your client will want to use any sort of programming language-based analysis, you'll want to try to come up with a consistent way to store / flag missing values. For example, if all of your data samples are positive real numbers, then storing a -99.99 is a good way to flag an entry as missing. Conclusion: figure out what your client is likely to require. If you really don't know, then go with #1, because it is the most general and the easiest to read in for multiple programs and programming languages. Remember to store the dimension information at the top of the file if you're using ASCII flatfiles, or in a defined data block if you're using binary files.
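As an aside, most analysis tools can translate such a sentinel value back into a proper missing value on import; for example, in R (the file name is hypothetical):

```r
# Any -99.99 in the flat file becomes NA in the resulting data frame
dat <- read.table("measurements.txt", header = TRUE, na.strings = "-99.99")
```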
null
CC BY-SA 2.5
null
2011-03-30T23:25:37.247
2011-03-30T23:25:37.247
null
null
781
null
8986
2
null
8732
1
null
You may want to examine the GAM package in R, as it can be adapted to do some (or all) of what you are looking for. The original paper ([Hastie & Tibshirani, 1986](https://projecteuclid.org/journals/statistical-science/volume-1/issue-3/Generalized-Additive-Models/10.1214/ss/1177013604.full)) is available via OpenAccess if you're up for reading it. Essentially, you model a single dependent variable as being an additive combination of 'smooth' predictors. One of the typical uses is to have time series and lags thereof as your predictors, smooth these inputs, then apply GAM. This method has been used extensively to estimate daily mortality as a function of smoothed environmental time series, especially pollutants. It's not OpenAccess, but ([Dominici et al., 2000](https://www.jstor.org/stable/2680517)) is a superb reference, and ([Statistical Methods for Environmental Epidemiology with R](https://books.google.ca/books?id=QbDBxSSXIjsC&pg=PA145&lpg=PA145&dq=springer%20books%20dominici%20peng&source=bl&ots=rJ158hQsYw&sig=we0w2B8naxLeV_bTyaCJKGdS7BE&hl=en&ei=cb6TTdSJCOjB0QGB1-DMBw&sa=X&oi=book_result&ct=result#v=onepage&q&f=false)) is an excellent book on how to use R to do this type of analysis.
null
CC BY-SA 4.0
null
2011-03-30T23:37:04.383
2022-08-24T18:36:34.137
2022-08-24T18:36:34.137
79696
781
null
8987
1
8991
null
0
566
Following a previous [question](https://stats.stackexchange.com/questions/8899/several-questions-about-conditional-probability), lets say we now have 3 variables: $L$, $B$, $S$: ``` S / \ L B ``` So $L$ depends on $S$ $B$ depends on $S$ $P(S) =$ 0.5 $P( \lnot S) =$ 0.5 $L$ that depends on $S$: $P(L|S) =$ 0.10 $P( \lnot L|S) =$ 0.90 $P(L| \lnot S) =$ 0.01 $P( \lnot L| \lnot S) =$ 0.99 $B$ that depends on $S$: $P(B|S) =$ 0.60 $P( \lnot B|S) =$ 0.40 $P(B| \lnot S) =$ 0.30 $P( \lnot B| \lnot S) =$ 0.70 I have gotten $P(L) =$ 0.055 $P(\lnot L) =$ 0.945 $P(B) =$ 0.45 $P(\lnot B) =$ 0.55 As the previos question to get $P(S)$ after I observe that $P(L)=1$ $P(S∣L)= P(L∣S)P(S) / P(L) =$ (0.10)(0.5) / (0.055) = 0.9091 so $P(S)=$ 0.9091 Similarly when we have $P(B)=$1 $P(S∣B)= P(B∣S)P(S) / P(B) =$ (0.40)(0.5) / (0.45) = 0.3636 so $P(S)=$ 0.3636 But What do you do when you observe both events: $P(L)=$1 and $P(B)=$1 How do you modify the above formula to get $P(S)$?
3 variables and conditional probability
CC BY-SA 2.5
null
2011-03-30T23:49:42.127
2011-03-31T09:51:45.473
2017-04-13T12:44:44.530
-1
3681
[ "conditional-probability" ]
8988
2
null
8984
5
null
If the probability of a bad measurement is small, then the probability of having two bad measurements out of three will be very small, so discarding the outlying one of the three will usually leave you with two valid measurements. I would, however, record all the values measured, even other measurements on the same subject/sample. With a data collection of valid and bad measurements you could study the distributions of bad and valid measurements. You might also find that bad values depend on the true value (and thus carry information), and that both bad and valid values may depend on other measurements of the same subject/sample. Once you know the (conditional) distributions of bad and valid measurements, and the proportion of bad measurements, then for each specific measurement you will be able to calculate the probability that it is bad (comes from another distribution), and to calculate best estimates and establish confidence intervals. I believe that your protocol (keep the two if CV is low, otherwise use the lowest-CV pair of three) may be good to start with, but I would revise it after collecting enough data to know more about bad measurements. However, whether a protocol is acceptable also depends on the probability of a bad measurement, how bad a bad measurement is, and how critical a bad estimate is in its application. I assume that you are talking about CV because you analysed some of your existing data and found the CV to be stable. This suggests that measurement error is proportional to the value, and thus that the measurement error SD is constant on the logarithmic scale. If so, taking the geometric mean (not the arithmetic mean) may be more accurate.
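A minimal Python sketch of the "keep the lowest-CV pair of three, then take the geometric mean" estimate (the sample values are invented for illustration):

```python
import math
from itertools import combinations

def cv(values):
    """Coefficient of variation: sample SD divided by the mean."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / (len(values) - 1)
    return math.sqrt(var) / m

def best_estimate(measurements):
    """Geometric mean of the pair (out of three) with the lowest CV.

    This mirrors the protocol discussed above; whether it is the right
    rule should be revisited once data on bad measurements accumulate.
    """
    a, b = min(combinations(measurements, 2), key=cv)
    return math.sqrt(a * b)  # geometric mean of the retained pair

# Two concordant values (100, 102) and one suspect value (150):
est = best_estimate([100.0, 102.0, 150.0])
# est == sqrt(100 * 102), about 100.99
```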
null
CC BY-SA 2.5
null
2011-03-31T00:10:04.933
2011-03-31T01:29:33.023
2011-03-31T01:29:33.023
3911
3911
null
8989
2
null
8909
2
null
Option #2 is much more flexible than #1, particularly if you plan on using Excel pivot tables and/or R packages such as Hadley Wickham's excellent [reshape](http://had.co.nz/reshape/) package. I would store the data so that each row contains measured (event-level and contender-level) variables and any variables necessary to uniquely identify an instance of the measured variables (contender ID, event ID, measurement occasion ID [e.g., half-second increments]) for a single measurement occasion within an event. This allows for the most flexible reshaping of data into any other format desired, a process Wickham describes as melting and casting. You can export the data into a comma-separated value (CSV) file, which of course can be read into Excel and most other statistical software. If you have long-format data in Excel, aggregating, summarizing, and tabulating data is also easy using [Pivot Tables](https://office.microsoft.com/en-us/excel-help/design-the-layout-and-format-of-a-pivottable-report-HP010168032.aspx). This enables you to create different views of the data that might be of interest to your client, such that as the data are updated you can update these useful views as well. IMO, the most robust solution for very large amounts of structured data is one you didn't mention: store them in a relational database (using, e.g., MS Access or open-source databases such as PostgreSQL) and use Structured Query Language (SQL) to perform the above operations. Here, your data would be broken up into separate tables containing information that is unique to events (e.g., event ID, event type, etc.), contenders (e.g., contender ID, contender name, etc.), unique event-contender combinations (since a single contender might participate in more than one event, and each event certainly has more than one contender), and the measured data in half-second intervals. 
This avoids storing redundant data and allows you to enforce the integrity of the data that you articulated in your question as data are added, deleted, or updated. There are methods for calling SQL queries from Excel, R, Matlab, and other statistical programs to extract just the information your client wants. A useful introductory text on relational database theory and application is "Inside Relational Databases" by Whitehorn and Marklyn.
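As a toy illustration of that normalized layout, here is a sketch using Python's built-in sqlite3 module; all table, column, and sample names are invented:

```python
import sqlite3

# Toy version of the normalized schema described above; every name
# here (tables, columns, 'race', 'Alice') is illustrative only.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE event     (event_id INTEGER PRIMARY KEY, event_type TEXT);
    CREATE TABLE contender (contender_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE measurement (
        event_id     INTEGER REFERENCES event(event_id),
        contender_id INTEGER REFERENCES contender(contender_id),
        t_half_sec   INTEGER,   -- measurement occasion, in half-seconds
        value        REAL,
        PRIMARY KEY (event_id, contender_id, t_half_sec)
    );
""")
con.execute("INSERT INTO event VALUES (1, 'race')")
con.execute("INSERT INTO contender VALUES (7, 'Alice')")
con.executemany("INSERT INTO measurement VALUES (1, 7, ?, ?)",
                [(0, 10.0), (1, 10.5), (2, 11.0)])

# One view a client might ask for: each contender's mean value per event.
row = con.execute("""
    SELECT c.name, AVG(m.value)
    FROM measurement m JOIN contender c USING (contender_id)
    WHERE m.event_id = 1
    GROUP BY c.name
""").fetchone()
# row == ('Alice', 10.5)
```

Note how the composite primary key enforces the "one reading per contender per half-second per event" integrity rule automatically.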
null
CC BY-SA 2.5
null
2011-03-31T00:26:24.340
2011-03-31T00:26:24.340
null
null
3964
null
8991
2
null
8987
2
null
At first we don't know the outcomes of $L$ and $B$: $$P_{prior} = \begin{matrix} P_{prior}(S \land L \land B) & P_{prior}(\lnot S \land L \land B) \\ P_{prior}(S \land L \land \lnot B) & P_{prior}(\lnot S \land L \land \lnot B) \\ P_{prior}(S \land \lnot L \land B) & P_{prior}(\lnot S \land \lnot L \land B) \\ P_{prior}(S \land \lnot L \land \lnot B) & P_{prior}(\lnot S \land \lnot L \land \lnot B) \end{matrix}$$ $$ = \begin{matrix} 0.5 \cdot 0.1 \cdot 0.6 & 0.5 \cdot 0.01 \cdot 0.3 \\ 0.5 \cdot 0.1 \cdot 0.4 & 0.5 \cdot 0.01 \cdot 0.7 \\ 0.5 \cdot 0.9 \cdot 0.6 & 0.5 \cdot 0.99 \cdot 0.3 \\ 0.5 \cdot 0.9 \cdot 0.4 & 0.5 \cdot 0.99 \cdot 0.7 \\ \end{matrix}$$ We then observe L and B. $$P_{posterior} = \begin{matrix} 0.5 \cdot 0.1 \cdot 0.6 & 0.5 \cdot 0.01 \cdot 0.3 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ \end{matrix} \cdot \frac{1}{0.5 \cdot 0.1 \cdot 0.6 + 0.5 \cdot 0.01 \cdot 0.3}$$ $$ = \begin{matrix} 20/21 & 1/21 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ \end{matrix}$$ $$P_{posterior}(S) = \frac{P_{prior}(S \land L \land B)}{P_{prior}(S \land L \land B) + P_{prior}(\lnot S \land L \land B)} = 20/21 \approx 0.95$$
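A few lines of Python reproduce the arithmetic, with the question's probabilities hard-coded:

```python
# Posterior P(S | L, B), using the probabilities from the question.
# L and B are conditionally independent given S in this network.
p_s = 0.5
p_l_given_s, p_l_given_not_s = 0.10, 0.01
p_b_given_s, p_b_given_not_s = 0.60, 0.30

joint_s     = p_s * p_l_given_s * p_b_given_s                # 0.5*0.1*0.6
joint_not_s = (1 - p_s) * p_l_given_not_s * p_b_given_not_s  # 0.5*0.01*0.3

posterior_s = joint_s / (joint_s + joint_not_s)
# posterior_s == 20/21, about 0.952
```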
null
CC BY-SA 2.5
null
2011-03-31T01:19:43.640
2011-03-31T09:51:45.473
2011-03-31T09:51:45.473
3911
3911
null
8992
2
null
8974
1
null
I can think of almost no case in which a pie chart is better than a bar chart or stacked bar if you want to convey information. I do have a theory or two on how pie charts got to be so popular. My first thought is related to PC commercials. Early PCs had text screens (24 x 80 characters), often green like old mainframe CRTs. To show off the new graphics screens that had a Red-Green-Blue pixel basis, a pie chart was ideal. A text screen could do a bar chart after a fashion, but couldn't do a remotely credible pie chart. Pie charts looked a lot more serious than showing a Mario Brothers screen, regardless of how the PC would actually be used. Thus, it seemed like every PC commercial in the late 1980s and early 1990s showed a pie chart on the monitor. A second theory is that a bar chart or stacked bar is better if you want to convey information. But what if you don't? Then a pie chart works -- and charts with 3-D effects work even better.
null
CC BY-SA 2.5
null
2011-03-31T01:40:58.007
2011-03-31T01:40:58.007
null
null
3919
null
8995
2
null
97
35
null
I'm going to argue from an applied perspective that the mean is often the best choice for summarising the central tendency of a Likert item. Specifically, I'm thinking of contexts such as student satisfaction surveys, market research scales, employee opinion surveys, personality test items, and many social science survey items. In such contexts, consumers of research often want answers to questions like: - Which statements have more or less agreement relative to others? - Which groups agreed more or less with a given statement? - Over time, has agreement gone up or down? For these purposes, the mean has several benefits: ### 1. Mean is easy to calculate: - It is easy to see the relationship between the raw data and the mean. - It is pragmatically easy to calculate. Thus, the mean can be easily embedded into reporting systems. - It also facilitates comparability across contexts, and settings. ### 2. Mean is relatively well understood and intuitive: - The mean is often used to report central tendency of Likert items. Thus, consumers of research are more likely to understand the mean (and thus trust it, and act on it). - Some researchers prefer the, arguably, even more intuitive option of reporting the percentage of the sample answering 4 or 5. I.e., it has the relatively intuitive interpretation of "percentage agreement". In essence, this is just an alternative form of the mean, with 0, 0, 0, 1, 1 coding. - Also, over time, consumers of research build up frames of reference. For example, when you're comparing your teaching performance from year to year, or across subjects, you build up a nuanced sense of what a mean of 3.7, 3.9, or 4.1 indicates. ### 3. The mean is a single number: - A single number is particularly valuable, when you want to make claims like "students were more satisfied with Subject X than Subject Y." - I also find, empirically, that a single number is actually the main information of interest in a Likert item. 
The standard deviation tends to be related to the extent to which the mean is close to the central score (e.g., 3.0). Of course, empirically, this may not apply in your context. For example, I read somewhere that when YouTube ratings had the star system, there were a large number of either the lowest or the highest rating. For this reason, it is important to inspect category frequencies. ### 4. It doesn't make much difference - Although I have not formally tested it, I would hypothesise that for the purpose of comparing central tendency ratings across items, or groups of participants, or over time, any reasonable choice of scaling for generating the mean would yield similar conclusions.
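For instance, the two summaries mentioned above differ only in the coding (the responses below are made up):

```python
# Ten invented 1..5 Likert responses.
responses = [5, 4, 4, 3, 2, 5, 4, 1, 3, 4]

# Ordinary mean of the raw 1..5 codes:
mean_score = sum(responses) / len(responses)                       # 3.5

# "Percentage agreement": the mean under the 0,0,0,1,1 recoding,
# i.e. the share of respondents answering 4 or 5.
pct_agree = sum(1 for r in responses if r >= 4) / len(responses)   # 0.6
```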
null
CC BY-SA 2.5
null
2011-03-31T04:27:05.490
2011-03-31T04:27:05.490
null
null
183
null
8997
1
8998
null
5
3603
This is in reference to the [Girsanov theorem](http://en.wikipedia.org/wiki/Girsanov_theorem) however question is general. If $X$ is a standard normal variable $N(0,1)$, why is expectation of $e^{-\mu X - \mu^2/2}$ equal to 1? Shouldn't it be $e^{-\mu^2/2}$?
Expected value of certain exponential transformation of standard normal variable
CC BY-SA 2.5
null
2011-03-31T05:23:01.543
2012-10-20T21:16:29.413
2011-03-31T08:31:28.927
449
862
[ "probability", "expected-value" ]
8998
2
null
8997
9
null
If we let $Y = -\mu X$, then $Y$ is distributed as $N(0, \mu^2)$, and $e^Y$ is a lognormal random variable with parameters $0, \mu^2$. The expected value of a lognormal with parameters $a, b^2$ is $e^{a + b^2/2}$, so $E(e^{-\mu X}) = E(e^Y) = e^{\mu^2/2}$. The given result follows immediately.
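A quick stdlib-only Monte Carlo check of this identity (the sample size and the value of $\mu$ are arbitrary choices):

```python
import math
import random

# Check that E[exp(-mu*X)] = exp(mu^2/2) for X ~ N(0,1), and hence
# that E[exp(-mu*X - mu^2/2)] = 1; mu and n are arbitrary choices.
random.seed(0)
mu = 0.7
n = 200_000
mean_exp = sum(math.exp(-mu * random.gauss(0.0, 1.0)) for _ in range(n)) / n

theory = math.exp(mu * mu / 2)
# mean_exp should be close to theory, so mean_exp * exp(-mu^2/2) is close to 1
```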
null
CC BY-SA 2.5
null
2011-03-31T05:38:17.697
2011-03-31T05:43:18.937
2011-03-31T05:43:18.937
1569
1569
null
8999
2
null
8997
0
null
The answer above is right, but to make it even more obvious: imagine $Y = -\mu X - \frac{1}{2}\mu^2$. Then it follows that $Y\sim N(-\frac{1}{2}\mu^2,\mu^2)$ as above, and thus the expectation of the lognormal is $e^{-\frac{1}{2}\mu^2+\frac{1}{2}\mu^2} = e^0 = 1$.
null
CC BY-SA 2.5
null
2011-03-31T06:06:51.270
2011-03-31T06:06:51.270
null
null
null
null
9000
2
null
8036
2
null
As a first step you could take a sample from the documents. This can be random sampling, but if you know that certain characteristics of the documents are particularly relevant you could use stratified sampling. The second step can be feature extraction. Define characteristics of the documents that may help predict the `accept` or `reject` labels. Give clear definitions; you can use numerical-scale, ordinal and nominal (including binary) variables. Determine the features and labels in the sample. The third step is developing a prediction model. To predict the binary response there are many kinds of methods available. If you have some experience of how certain features affect acceptance you may want to build it into a model and use regression techniques, e.g. logistic regression, probit regression, and model selection (e.g. stepwise variable selection). If you can't or don't want to build pre-existing knowledge into the automatic assessment you can use machine learning techniques like random forests, logistic regression trees, or support vector machines. The fourth step is validating your model on a second sample. One way to do this is to randomly partition your original sample into a modelling and a testing subset before modelling. When you know how well your automatic classification system performs you will be able to judge which documents need human review.
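The first (sampling) step might look like this in Python; the strata, sampling fraction, and document representation are all placeholders:

```python
import random
from collections import defaultdict

def stratified_sample(documents, stratum_of, frac, seed=0):
    """Draw roughly `frac` of the documents from each stratum.

    `stratum_of` maps a document to its stratum label, so relevant
    characteristics stay represented in the labelled sample.
    """
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for doc in documents:
        by_stratum[stratum_of(doc)].append(doc)
    sample = []
    for docs in by_stratum.values():
        k = max(1, round(frac * len(docs)))
        sample.extend(rng.sample(docs, k))
    return sample

# 100 placeholder documents split into two strata by a length class:
docs = [("long" if i % 2 == 0 else "short", i) for i in range(100)]
s = stratified_sample(docs, stratum_of=lambda d: d[0], frac=0.2)
# 50 docs per stratum -> 10 sampled from each; len(s) == 20
```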
null
CC BY-SA 2.5
null
2011-03-31T09:48:42.233
2011-03-31T12:47:51.767
2011-03-31T12:47:51.767
3911
3911
null
9001
1
9007
null
51
29717
Are there well known formulas for the order statistics of certain random distributions? Particularly the first and last order statistics of a normal random variable, but a more general answer would also be appreciated. Edit: To clarify, I am looking for approximating formulas that can be more-or-less explicitly evaluated, not the exact integral expression. For example, I have seen the following two approximations for the first order statistic (ie the minimum) of a normal rv: $e_{1:n} \geq \mu - \frac{n-1}{\sqrt{2n-1}}\sigma$ and $e_{1:n} \approx \mu + \Phi^{-1} \left( \frac{1}{n+1} \right)\sigma$ The first of these, for $n=200$, gives approximately $e_{1:200} \geq \mu - 10\sigma$ which seems like a wildly loose bound. The second gives $e_{1:200} \approx \mu - 2.58\sigma$ whereas a quick Monte Carlo gives $e_{1:200} \approx \mu - 2.75\sigma$, so it's not a bad approximation but not great either, and more importantly I don't have any intuition about where it comes from. Any help?
Approximate order statistics for normal random variables
CC BY-SA 2.5
null
2011-03-31T10:14:39.883
2017-10-03T18:25:14.253
2017-10-03T18:25:14.253
11887
2425
[ "distributions", "normal-distribution", "approximation", "order-statistics" ]
9003
1
null
null
4
2966
I have about 10 variables about products. I need to rank-order the products by quality. Now, quality is a very subjective thing, so I got all 300 products ranked by domain experts. So now I have 10 variables, and a ranking assigned by an expert. About 1% of products should be excellent, about 10% should be very good, 30% should be good, 30% should be average and the rest is below average. What sort of modeling technique would you use to make use of this data and create a system which will automatically be able to classify products into those categories and mimic the "expert judgement"?
Rank ordering and/or classification problem
CC BY-SA 4.0
null
2011-03-31T10:31:29.800
2019-02-21T07:27:30.187
2019-02-21T07:27:30.187
11887
333
[ "classification", "ranking" ]
9004
2
null
9003
3
null
The dependent variable is ordinal (excellent > very good > good > average > below average). You could try ordinal logistic regression, with some variable selection method. After choosing the model you could tweak the thresholds to influence proportions falling into various categories.
null
CC BY-SA 2.5
null
2011-03-31T10:46:33.353
2011-03-31T10:46:33.353
null
null
3911
null
9005
2
null
8974
18
null
Pie charts, like pie, may be delicious but they are not nutritious. In addition to points made already, one is that rotating a pie chart changes perception of the size of the angles, as does changing the color. If a pie chart has only a few categories, make a table. If it has a LOT of categories, then the slices will be too thin to see (much less to label accurately). I wrote about this on [my blog](http://www.statisticalanalysisconsulting.com/graphics-for-univariate-data-pie-is-delicious-but-not-nutritious/). [A link via the wayback machine](https://web.archive.org/web/20190207124546/http://statisticalanalysisconsulting.com/graphics-for-univariate-data-pie-is-delicious-but-not-nutritious/).
null
CC BY-SA 4.0
null
2011-03-31T10:50:11.990
2022-09-29T17:21:25.803
2022-09-29T17:21:25.803
11887
686
null
9006
2
null
9003
0
null
Another approach would be to take the 1..300 ranks assigned to the products by the experts (temporarily neglecting the excellent/very good/... categorisation). You could try [ACE](http://cran.r-project.org/web/packages/acepack/acepack.pdf) (a linear model with transformations of the dependent variable and predictors); you may find that simple transformations of the predictors are satisfactory. Then you can set thresholds for the classification.
null
CC BY-SA 2.5
null
2011-03-31T12:39:27.733
2011-03-31T12:39:27.733
null
null
3911
null
9007
2
null
9001
39
null
The classic reference is Royston (1982)[1] which has algorithms going beyond explicit formulas. It also quotes a well-known formula by Blom (1958): $E(r:n) \approx \mu + \Phi^{-1}(\frac{r-\alpha}{n-2\alpha+1})\sigma$ with $\alpha=0.375$. This formula gives a multiplier of -2.73 for $n=200, r=1$. [1]: [Algorithm AS 177: Expected Normal Order Statistics (Exact and Approximate)](http://www.jstor.org/stable/2347982) J. P. Royston. Journal of the Royal Statistical Society. Series C (Applied Statistics) Vol. 31, No. 2 (1982), pp. 161-165
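For a self-contained check of Blom's formula, here is a Python sketch that inverts $\Phi$ by bisection via `math.erf`, so no special libraries are needed:

```python
import math

def phi_inv(p):
    """Inverse standard normal CDF by bisection; Phi(x) = (1 + erf(x/sqrt 2))/2."""
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def blom(r, n, mu=0.0, sigma=1.0, alpha=0.375):
    """Blom's approximation E(r:n) = mu + Phi^{-1}((r-a)/(n-2a+1)) * sigma."""
    return mu + phi_inv((r - alpha) / (n - 2 * alpha + 1)) * sigma

m = blom(1, 200)   # expected minimum of 200 standard normals
# m is about -2.73, the multiplier quoted above
```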
null
CC BY-SA 2.5
null
2011-03-31T12:52:27.970
2011-03-31T17:54:50.003
2011-03-31T17:54:50.003
449
279
null
9008
2
null
9001
11
null
Depending on what you want to do, this answer may or may not help - I got the following exact formula from [Maple's Statistics package](http://www.maplesoft.com/support/help/Maple/view.aspx?path=Statistics). ``` with(Statistics): X := OrderStatistic(Normal(0, 1), 1, n): m := Mean(X): m; ``` $$\int _{-\infty }^{\infty }\!1/2\,{\frac {{\it \_t0}\,n!\,\sqrt {2}{ {\rm e}^{-1/2\,{{\it \_t0}}^{2}}} \left( 1/2-1/2\, {{\rm erf}\left(1/2\,{\it \_t0}\,\sqrt {2}\right)} \right) ^{-1+n}}{ \left( -1+n \right) !\,\sqrt {\pi }}}{d{\it \_t0}}$$ By itself this isn't very useful (and it could probably be derived fairly easily by hand, since it's the minimum of $n$ random variables), but it does allow for quick and very accurate approximation for given values of $n $ - much more accurate than Monte Carlo: ``` evalf(eval(m, n = 200)); evalf[25](eval(m, n = 200)); ``` gives -2.746042447 and -2.746042447451154492412344, respectively. (Full disclosure - I maintain this package.)
null
CC BY-SA 2.5
null
2011-03-31T13:01:19.303
2011-03-31T13:01:19.303
null
null
2898
null
9009
1
null
null
10
461
I'm using the [quantreg](http://cran.r-project.org/web/packages/quantreg/index.html) package to make a regression model using the 99th percentile of my values in a data set. Based on advice from a previous stackoverflow [question](https://stackoverflow.com/questions/4594370/advice-on-calculating-a-function-to-describe-upper-bound-of-data) I asked, I used the following code structure. ``` mod <- rq(y ~ log(x), data=df, tau=.99) pDF <- data.frame(x = seq(1,10000, length=1000) ) pDF <- within(pDF, y <- predict(mod, newdata = pDF) ) ``` which I show plotted on top of my data. I've plotted this using ggplot2, with an alpha value for the points. I think that the tail of my distribution is not being considered sufficiently in my analysis. Perhaps this is due to the fact that there are individual points, that are being ignored by the percentile type measurement. One of the comments suggested that > The package vignette includes sections on nonlinear quantile regression and also models with smoothing splines etc. Based on my previous question I assumed a logarithmic relationship, but I'm not sure if that is correct. I thought I could extract all the points at the 99th percentile interval and then examine them separately, but I'm not sure how to do that, or if that is a good approach. I would appreciate any advice on how to improve identifying this relationship. ![enter image description here](https://i.stack.imgur.com/Jwtlh.jpg)
Advice on identifying curve shape using quantreg
CC BY-SA 2.5
null
2011-03-31T13:53:50.733
2014-07-01T01:44:33.253
2017-05-23T12:39:27.620
-1
2635
[ "regression", "logarithm" ]
9010
2
null
9001
28
null
$$\newcommand{\Pr}{\mathrm{Pr}}\newcommand{\Beta}{\mathrm{Beta}}\newcommand{\Var}{\mathrm{Var}}$$The distribution of the ith order statistic of any continuous random variable with a PDF is given by the "beta-F" compound distribution. The intuitive way to think about this distribution is to consider the ith order statistic in a sample of $N$. Now in order for the value of the ith order statistic of a random variable $X$ to be equal to $x$ we need 3 conditions: - $i-1$ values below $x$, this has probability $F_{X}(x)$ for each observation, where $F_X(x)=\Pr(X<x)$ is the CDF of the random variable X. - $N-i$ values above $x$, this has probability $1-F_{X}(x)$ - 1 value inside an infinitesimal interval containing $x$, this has probability $f_{X}(x)dx$ where $f_{X}(x)dx=dF_{X}(x)=\Pr(x<X<x+dx)$ is the PDF of the random variable $X$ There are ${N \choose 1}{N-1 \choose i-1}$ ways to make this choice, so we have: $$f_{i}(x_{i})=\frac{N!}{(i-1)!(N-i)!}f_{X}(x_{i})\left[1-F_{X}(x_{i})\right]^{N-i}\left[F_{X}(x_{i})\right]^{i-1}$$ EDIT: in my original post, I made a very poor attempt at going further from this point, and the comments below reflect this. 
I have sought to rectify this below. If we take the mean value of this pdf we get: $$E(X_{i})=\int_{-\infty}^{\infty} x_{i}f_{i}(x_{i})dx_{i}$$ And in this integral, we make the following change of variable $p_{i}=F_{X}(x_{i})$ (taking @henry's hint), and the integral becomes: $$E(X_{i})=\int_{0}^{1} F_{X}^{-1}(p_{i})\Beta(p_{i}|i,N-i+1)dp_{i}=E_{\Beta(p_{i}|i,N-i+1)}\left[F_{X}^{-1}(p_{i})\right]$$ So this is the expected value of the inverse CDF, which can be well approximated using the delta method to give: $$E_{\Beta(p_{i}|i,N-i+1)}\left[F_{X}^{-1}(p_{i})\right]\approx F_{X}^{-1}\left[E_{\Beta(p_{i}|i,N-i+1)}(p_{i})\right]=F_{X}^{-1}\left[\frac{i}{N+1}\right]$$ To make a better approximation, we can expand to 2nd order (prime denoting differentiation), noting that the second derivative of an inverse is: $$\frac{\partial^{2}}{\partial a^{2}}F_{X}^{-1}(a)=-\frac{F_{X}^{''}(F_{X}^{-1}(a))}{\left[F_{X}^{'}(F_{X}^{-1}(a))\right]^{3}}=-\frac{f_{X}^{'}(F_{X}^{-1}(a))}{\left[f_{X}(F_{X}^{-1}(a))\right]^{3}}$$ Let $\nu_{i}=F_{X}^{-1}\left[\frac{i}{N+1}\right]$. 
Then we have: $$E_{\Beta(p_{i}|i,N-i+1)}\left[F_{X}^{-1}(p_{i})\right]\approx \nu_{i}-\frac{\Var_{\Beta(p_{i}|i,N-i+1)}\left[p_{i}\right]}{2}\frac{f_{X}^{'}(\nu_{i})}{\left[f_{X}(\nu_{i})\right]^{3}}$$ $$=\nu_{i}-\frac{\left(\frac{i}{N+1}\right)\left(1-\frac{i}{N+1}\right)}{2(N+2)}\frac{f_{X}^{'}(\nu_{i})}{\left[f_{X}(\nu_{i})\right]^{3}}$$ Now, specialising to the normal case we have $$f_{X}(x)=\frac{1}{\sigma}\phi(\frac{x-\mu}{\sigma})\rightarrow f_{X}^{'}(x)=-\frac{x-\mu}{\sigma^{3}}\phi(\frac{x-\mu}{\sigma})=-\frac{x-\mu}{\sigma^{2}}f_{X}(x)$$ $$F_{X}(x)=\Phi(\frac{x-\mu}{\sigma})\implies F_{X}^{-1}(x)=\mu+\sigma\Phi^{-1}(x)$$ Note that $f_{X}(\nu_{i})=\frac{1}{\sigma}\phi\left[\Phi^{-1}\left(\frac{i}{N+1}\right)\right]$ And the expectation approximately becomes: $$E[x_{i}]\approx \mu+\sigma\Phi^{-1}\left(\frac{i}{N+1}\right)+\frac{\left(\frac{i}{N+1}\right)\left(1-\frac{i}{N+1}\right)}{2(N+2)}\frac{\sigma\Phi^{-1}\left(\frac{i}{N+1}\right)}{\left[\phi\left[\Phi^{-1}\left(\frac{i}{N+1}\right)\right]\right]^{2}}$$ And finally: $$E[x_{i}]\approx \mu+\sigma\Phi^{-1}\left(\frac{i}{N+1}\right)\left[1+\frac{\left(\frac{i}{N+1}\right)\left(1-\frac{i}{N+1}\right)}{2(N+2)\left[\phi\left[\Phi^{-1}\left(\frac{i}{N+1}\right)\right]\right]^{2}}\right]$$ Although, as @whuber has noted, this will not be accurate in the tails. In fact I think it may be worse, because of the skewness of a beta with different parameters.
null
CC BY-SA 3.0
null
2011-03-31T14:22:39.473
2015-05-15T22:48:08.450
2015-05-15T22:48:08.450
77271
2392
null
9012
2
null
6791
1
null
Consider transforming the responses from both data sets into z-scores. There is going to be an ad hoc quality to any sort of rescaling, but at least this way you avoid mechanically treating any particular set of intervals on one item as equivalent to any particular set on the other. I'd definitely go this route if I were using the items as predictors or outcome variables in any sort of analysis of variance. If you were doing anything with composite scales -- ones that aggregate Likert measures -- you'd likely do essentially what I've proposed: either you'd convert the item responses to z-scores before summing or taking their mean to form the composite scale, or you'd form a scale with factor analysis or another technique that uses the covariance matrix of the items to determine the affinity of the responses to them.
null
CC BY-SA 2.5
null
2011-03-31T17:12:31.247
2011-03-31T17:12:31.247
null
null
11954
null
9013
1
9015
null
3
860
Let's say I have a file A1.txt and a file A2.txt. I have written the statements ``` filename in1 'A1.txt'; filename in2 'A2.txt'; ``` Now, I want to re-do this using B1 and B2 (and eventually C1, C2, D1, D2), and just rename the variable in one place. So, I want a statement like ``` %let prefix = 'B'; ``` and then I want to put that prefix in the filename statements. ``` filename in1 '&prefix1.txt'; filename in2 '&prefix2.txt'; ``` I'm not doing this in a data statement. I can almost get it: `filename in1 &prefix'.txt'` tries to read `B'.txt`, but that extra quote is pesky. There's something vague in my memory banks about using an _ but I can't pull it out. Thanks.
String replace in SAS?
CC BY-SA 2.5
null
2011-03-31T17:37:12.297
2011-03-31T18:12:50.597
2011-03-31T17:43:47.707
null
62
[ "sas" ]
9014
1
null
null
4
637
In [Girsanov](http://en.wikipedia.org/wiki/Girsanov_theorem) theorem, the change of probability measure variable $Z_t = \frac{dQ}{dP}|_{\mathcal{F}_t}$, why does it need to be a martingale with respect to measure $P$ for the change of measure $\frac{dQ}{dP}$ to exist? I am having trouble understanding this. Anyone familiar with this?
Why local martingale property is important in Girsanov theorem?
CC BY-SA 2.5
null
2011-03-31T18:00:00.810
2011-04-12T11:45:39.793
2011-04-01T08:19:03.897
2116
862
[ "probability", "stochastic-processes" ]
9015
2
null
9013
4
null
You are looking for a period: `&prefix.1.txt`, where the period after `&prefix` tells SAS that the name of the macro variable is finished. If you don't have the "1", then you need two periods: `&prefix..txt`. (Note also that macro variables only resolve inside double quotes, so use `filename in1 "&prefix.1.txt";` rather than single quotes, which pass the text through literally.)
null
CC BY-SA 2.5
null
2011-03-31T18:12:50.597
2011-03-31T18:12:50.597
null
null
279
null
9016
1
9019
null
20
20099
Is there any way to determine the optimal cluster number or should I just try different values and check the error rates to decide on the best value?
How to define number of clusters in K-means clustering?
CC BY-SA 3.0
null
2011-03-31T18:29:46.377
2013-12-10T02:35:49.203
2012-12-05T14:07:15.160
1036
3270
[ "clustering", "unsupervised-learning" ]
9017
2
null
6498
33
null
I will try and respond to the gentle urging of whuber to simply “respond to the question” and stay on topic. We are given 144 monthly readings of a series called “The Airline Series”. Box and Jenkins were widely criticized for providing a forecast that was wildly on the high side due to the “explosive nature” of a reverse logged transformation. ![enter image description here](https://i.stack.imgur.com/S8hrL.jpg) Visually we get the impression that the variance of the original series increases with the level of the series, suggesting a need for a transformation. However, we know that one of the requirements for a useful model is that the variance of the “model errors” needs to be homogeneous. No assumptions are necessary about the variance of the original series. They are identical if the model is simply a constant, i.e. y(t)=u. As [https://stats.stackexchange.com/users/2392/probabilityislogic](https://stats.stackexchange.com/users/2392/probabilityislogic) stated so clearly in his response to [Advice on explaining heterogeneity / heteroscedasticty](https://stats.stackexchange.com/questions/8955/advice-on-explaining-heterogeneity-heteroscedasticty), “one thing which I always find amusing is this "non-normality of the data" that people worry about. The data does not need to be normally distributed, but the error term does”. Early work in time series often erroneously jumped to conclusions about unwarranted transformations. We will discover here that the remedial transformation for this data is to simply add three indicator dummy series to the ARIMA model, reflecting an adjustment for three unusual data points. Following is the plot of the autocorrelation function, suggesting a strong autocorrelation at lag 12 (.76) and at lag 1 (.948). Autocorrelations are simply regression coefficients in a model where y is the dependent variable being predicted by a lag of y. ![enter image description here](https://i.stack.imgur.com/IrN2D.jpg)
![enter image description here](https://i.stack.imgur.com/sz3PB.jpg) The analysis above suggests that one model the first differences of the series and study that “residual series”, which is identical to the first differences, for its properties. ![enter image description here](https://i.stack.imgur.com/4L07U.jpg) This analysis reconfirms the idea that a strong seasonal pattern exists in the data, which could be remedied or modeled by a model containing two differencing operators. ![enter image description here](https://i.stack.imgur.com/TFse2.jpg) ![enter image description here](https://i.stack.imgur.com/E1s8T.jpg) This simple double differencing yields a set of residuals, a.k.a. an adjusted series or, loosely speaking, a transformed series, that evidences non-constant variance; but the reason for the non-constant variance is the non-constant mean of the residuals. Here is a plot of the doubly differenced series, suggesting three anomalies at the end of the series. The autocorrelation of this series falsely indicates that “all is well” and that there might not be a need for any MA(1) adjustment. Care should be taken, as the suggestion of anomalies in the data means the ACF is biased downwards. This is known as the “Alice in Wonderland Effect”, i.e. accepting the null hypothesis of no evident structure when that structure is being masked by a violation of one of the assumptions. ![enter image description here](https://i.stack.imgur.com/DFIro.jpg) ![enter image description here](https://i.stack.imgur.com/30Tbp.jpg) We visually detect three unusual points (117, 135, 136) ![enter image description here](https://i.stack.imgur.com/kGZXe.jpg) This step of detecting the outliers is called Intervention Detection and can be easily, or not so easily, programmed following the work of Tsay. 
![enter image description here](https://i.stack.imgur.com/0ON8N.jpg)![enter image description here](https://i.stack.imgur.com/6EQck.jpg) If we add three indicators to the model, we get ![enter image description here](https://i.stack.imgur.com/qKwHK.jpg) We can then estimate ![enter image description here](https://i.stack.imgur.com/keOHB.jpg) and receive a plot of the residuals and the acf ![enter image description here](https://i.stack.imgur.com/lEdHy.jpg) ![enter image description here](https://i.stack.imgur.com/vaCmR.jpg) This acf suggests that we potentially add two moving average coefficients to the model. Thus the next estimated model might be ![enter image description here](https://i.stack.imgur.com/5mEC7.jpg) yielding ![enter image description here](https://i.stack.imgur.com/A5tDw.jpg) ![enter image description here](https://i.stack.imgur.com/YCBes.jpg) ![enter image description here](https://i.stack.imgur.com/GB70k.jpg) ![enter image description here](https://i.stack.imgur.com/7RT3E.jpg) ![enter image description here](https://i.stack.imgur.com/Qt7X3.jpg) One could then delete the non-significant constant and get a refined model: ![enter image description here](https://i.stack.imgur.com/u0hrp.jpg) We note that no power transformations were needed whatsoever to obtain a set of residuals with constant variance. Note that the forecasts are non-explosive. ![enter image description here](https://i.stack.imgur.com/xvO67.jpg) ![enter image description here](https://i.stack.imgur.com/0tKpt.jpg) In terms of a simple weighted sum, we have 13 weights; 3 non-zero and equal to (1.0.1,0.,-1.0) ![enter image description here](https://i.stack.imgur.com/rMynA.jpg) ![enter image description here](https://i.stack.imgur.com/LFLMO.jpg) This material was presented in a way that was non-automatic and consequently required user interaction in terms of making modeling decisions.
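To illustrate the differencing step described above, here is a small sketch (mine, not from the answer; the synthetic series is made up) showing that the double difference (1-B)(1-B^12) annihilates a linear trend plus a period-12 seasonal pattern, which is why the model starts from these two operators:

```python
# Sketch: applying the regular difference (1-B) followed by the seasonal
# difference (1-B^12) to a trend-plus-seasonal series leaves (exact) zeros.
# The trend slope and seasonal pattern below are arbitrary illustration values.

def difference(series, lag=1):
    return [series[t] - series[t - lag] for t in range(lag, len(series))]

seasonal = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]        # arbitrary period-12 pattern
y = [0.5 * t + seasonal[t % 12] for t in range(48)]    # linear trend + seasonality
doubly = difference(difference(y, lag=1), lag=12)
print(doubly)  # a list of exact zeros
```

On real data such as the airline series, the doubly differenced residuals are of course not zero; the point is only that these operators remove the deterministic trend and seasonal components.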
null
CC BY-SA 2.5
null
2011-03-31T19:03:44.280
2011-03-31T22:03:58.057
2017-04-13T12:44:49.953
-1
3382
null
9019
2
null
9016
9
null
The method I use is the CCC (Cubic Clustering Criterion). I look for the CCC to increase to a maximum as I increment the number of clusters by 1, and then observe when the CCC starts to decrease. At that point I take the number of clusters at the (local) maximum. This would be similar to using a scree plot to pick the number of principal components. --- SAS Technical Report A-108, Cubic Clustering Criterion ([pdf](http://support.sas.com/documentation/onlinedoc/v82/techreport_a108.pdf)) $n$ = number of observations $n_k$ = number in cluster $k$ $p$ = number of variables $q$ = number of clusters $X$ = $n\times p$ data matrix $M$ = $q\times p$ matrix of cluster means $Z$ = cluster indicator ($z_{ik}=1$ if obs. $i$ in cluster $k$, 0 otherwise) Assume each variable has mean 0: $Z'Z = \text{diag}(n_1, \cdots, n_q)$, $M = (Z'Z)^{-1}Z'X$ $SS$(total) matrix = $T = X'X$ $SS$(between clusters) matrix = $B = M' Z'Z M$ $SS$(within clusters) matrix = $W = T-B$ $R^2 = 1 - \frac{\text{trace}(W)}{\text{trace}(T)}$ (trace = sum of diagonal elements) Stack the columns of $X$ into one long column and regress on the [Kronecker product](http://en.wikipedia.org/wiki/Kronecker_product) of $Z$ with the $p\times p$ identity matrix; computing $R^2$ for this regression gives the same $R^2$. The CCC idea is to compare the $R^2$ you get for a given set of clusters with the $R^2$ you would get by clustering a uniformly distributed set of points in $p$-dimensional space.
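As a concrete illustration of the $R^2 = 1 - \text{trace}(W)/\text{trace}(T)$ term that the CCC is built on, here is a small sketch (mine, not from the report; the toy points and labels are made up):

```python
# Sketch: the R^2 used by the CCC, computed directly from the within- and
# total sum-of-squares traces for a given cluster assignment.

def clustering_r2(points, labels):
    n = len(points)
    p = len(points[0])
    # center each variable at mean 0, as the report assumes
    grand = [sum(pt[j] for pt in points) / n for j in range(p)]
    centered = [[pt[j] - grand[j] for j in range(p)] for pt in points]
    # trace(T): total sum of squares
    trace_t = sum(x * x for row in centered for x in row)
    # trace(W): within-cluster sum of squares around each cluster mean
    trace_w = 0.0
    for k in set(labels):
        members = [centered[i] for i in range(n) if labels[i] == k]
        mean_k = [sum(m[j] for m in members) / len(members) for j in range(p)]
        trace_w += sum((m[j] - mean_k[j]) ** 2 for m in members for j in range(p))
    return 1.0 - trace_w / trace_t

pts = [(0.0, 0.1), (0.2, -0.1), (5.0, 5.1), (5.2, 4.9)]  # two tight clusters
labs = [0, 0, 1, 1]
print(round(clustering_r2(pts, labs), 4))
```

The CCC then compares this observed $R^2$ against the $R^2$ expected under a uniform reference distribution, which is the part handled by the formulas in the SAS report.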
null
CC BY-SA 3.0
null
2011-03-31T19:09:04.953
2012-07-05T16:01:10.203
2012-07-05T16:01:10.203
7290
3489
null
9020
1
9042
null
3
875
In SPSS, if I use the hierarchical clustering procedure, I have the ability to cluster both variables and cases using a variety of methods and distance measures. For this task, I would like to use R to cluster my variables. For context, my data come from a survey and the respondents were able to select multiple items from a block of options. In my datafile the data are coded as 0,1. I am trying to learn R so any help you can provide will be greatly appreciated.
Best option to cluster variables (not cases) in R
CC BY-SA 4.0
null
2011-03-31T21:05:49.463
2019-12-19T20:03:37.397
2019-12-19T20:03:37.397
92235
569
[ "r", "clustering" ]
9021
2
null
4101
3
null
Each time series should be evaluated separately, with the ultimate idea of collecting, i.e. grouping, similar series into groups or sections having similar/common structure. Since time series data can be intervened with by unknown deterministic structure at unspecified points in time, one is advised to do Intervention Detection to find where the intervention actually had an effect. If you know a law went into effect at a particular point in time (de jure), this may in fact (de facto) not be the date when the intervention actually happened. Systems can respond in advance of a known effect date or even after the date due to non-compliance or non-response. Specifying the date of the intervention can lead to Model Specification Bias. I suggest that you google "Intervention Detection" or "Outlier Detection". A good book on this would be by Prof. Wei of Temple University, published by Addison-Wesley. I believe the title is "Time Series Analysis". One further comment: an Intervention Variable might appear as a Pulse, a Level/Step Shift, a Seasonal Pulse or a Local Time Trend. In response to expanding the discussion about Local Time Trends: if you have a series that exhibits 1,2,3,4,5,7,9,11,13,15,16,17,18,19... there has been a change in trend at period 5 and at 10. For me a main question in time series is the detection of level shifts, e.g. 1,2,3,4,5,8,9,10,... or another example of a level shift 1,1,1,1,2,2,2,2, and/or the detection of time trend breaks. Just as a Pulse is a difference of a Step, a Step is a difference of a Trend. We have extended the theory of Intervention Detection to the 4th dimension, i.e. Trend Point Change. In terms of openness, I have been able to implement such Intervention Detection schemes in conjunction with both ARIMA and Transfer Function Models. I am one of the senior time-series statisticians who have collaborated in the development of AUTOBOX, which incorporates these features. I am unaware of anyone else who has programmed this exciting innovation. 
Perhaps someone else can comment on an R package that might do this, but I don't think one exists.
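The relationship "a Pulse is a difference of a Step, a Step is a difference of a Trend" can be checked directly with a one-line differencing function, using the example series from the answer above (the code itself is my illustration, not from the answer):

```python
# Sketch: first differencing turns a level shift (step) into a pulse,
# and a trend break into a step.

def diff(series):
    return [b - a for a, b in zip(series, series[1:])]

step = [1, 1, 1, 1, 2, 2, 2, 2]          # level shift at period 5
print(diff(step))                         # -> [0, 0, 0, 1, 0, 0, 0]  (a pulse)

trend_break = [1, 2, 3, 4, 5, 7, 9, 11]  # slope changes from 1 to 2 at period 5
print(diff(trend_break))                  # -> [1, 1, 1, 1, 2, 2, 2]  (a step)
```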
null
CC BY-SA 2.5
null
2011-03-31T21:15:33.277
2011-04-01T01:07:15.667
2011-04-01T01:07:15.667
3382
3382
null
9022
1
9024
null
7
166
What steps could be taken to check for bivariate Gaussianity without using regression based check? Can we somehow employ the use of definition of [variogram](http://en.wikipedia.org/wiki/Variogram) measure for assessing spatial variability?
How to check for bivariate Gaussianity without the use of regression?
CC BY-SA 2.5
null
2011-03-31T21:48:27.543
2011-12-20T08:43:54.607
2011-03-31T21:58:28.240
null
null
[ "multivariate-analysis", "normal-distribution", "variance", "spatial" ]
9023
1
9025
null
-2
22254
![enter image description here](https://i.stack.imgur.com/GTB1c.png) This is a Minitab printout. I want to find the value of A5, or S. I think S is supposed to be the sample standard deviation, but I don't know how to calculate it. Any tips on how I should go about calculating it?
How do I deduce the SD from regression and ANOVA tables?
CC BY-SA 2.5
null
2011-03-31T21:48:29.263
2011-03-31T22:38:48.313
2011-03-31T22:35:29.073
919
null
[ "estimation", "self-study" ]
9024
2
null
9022
4
null
I have recently come across this method, which is displayed in Johnson and Wichern. Let the data points that you want to test for bivariate normality be designated as $\{ x_{j} \}$. Next, compute the sample covariance matrix and designate it as $S$. For each observed point calculate $d_{j}^{2} = (x_{j} - \bar{x})^{T} S^{-1} (x_{j} - \bar{x})$. Order the values of the $d_{j}^{2}$ from low to high. The last mathematical step is to plot the pairs $\left(q_{c,p}\left(\frac{j - \frac{1}{2}}{n}\right), d_{(j)}^{2}\right)$, where $q_{c,p}\left(\frac{j - \frac{1}{2}}{n}\right)$ is the $100(j - \frac{1}{2})/n$ quantile of the chi-squared distribution with $p$ degrees of freedom. The plot should be a straight line if the data have a bivariate normal distribution.
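For the bivariate case ($p = 2$) the chi-squared quantile has the closed form $q(u) = -2\ln(1-u)$, so the whole check can be sketched without any special libraries. This is my illustration of the procedure above, not code from Johnson and Wichern, and the sample points are made up:

```python
# Sketch: chi-square plot check for bivariate normality.
# Compute ordered squared Mahalanobis distances and pair them with
# chi-squared(2) quantiles; a roughly straight line supports normality.
import math

def mahalanobis_sq(data):
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    # sample covariance matrix entries (divisor n - 1)
    sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in data) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)
    det = sxx * syy - sxy * sxy
    # d_j^2 = (x_j - xbar)' S^{-1} (x_j - xbar), with the 2x2 inverse written out
    return sorted(
        ((x - mx) ** 2 * syy - 2 * (x - mx) * (y - my) * sxy + (y - my) ** 2 * sxx) / det
        for x, y in data
    )

def chi2_2_quantile(u):
    return -2.0 * math.log(1.0 - u)   # inverse CDF of chi-squared with 2 df

data = [(0.1, 0.2), (1.0, 0.8), (-0.5, -0.3), (0.4, 1.1), (-1.2, -0.9), (0.7, 0.1)]
d2 = mahalanobis_sq(data)
n = len(data)
pairs = [(chi2_2_quantile((j - 0.5) / n), d2[j - 1]) for j in range(1, n + 1)]
for q, d in pairs:
    print(round(q, 3), round(d, 3))
```

For general $p$ one would use a chi-squared inverse CDF from a statistics library instead of the closed form.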
null
CC BY-SA 3.0
null
2011-03-31T22:05:58.560
2011-12-20T08:43:54.607
2011-12-20T08:43:54.607
null
3805
null
9025
2
null
9023
1
null
I got it! $S$ is the square root of the residual SS divided by $(n-2)$, i.e. the residual standard error of the regression. Cheers!
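A quick numerical check of this formula (the SSE and $n$ below are made-up illustration values, not from the Minitab printout; for $k$ predictors the divisor generalizes to $n - k - 1$):

```python
# Sketch: S = sqrt(SSE / (n - 2)) for simple regression,
# or sqrt(SSE / (n - k - 1)) with k predictors.
import math

def regression_s(sse, n, k=1):
    return math.sqrt(sse / (n - k - 1))

print(round(regression_s(sse=27.5, n=12), 4))
```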
null
CC BY-SA 2.5
null
2011-03-31T22:38:48.313
2011-03-31T22:38:48.313
null
null
null
null
9027
1
9028
null
10
4550
I have two logistic regression models in R made with `glm()`. They both use the same variables, but were made using different subsets of a matrix. Is there an easy way to get an average model which gives the means of the coefficients and then use this with the `predict()` function? [sorry if this type of question should be posted on a programming site; let me know and I'll post it there] Thanks
Is there an easy way to combine two glm models in R?
CC BY-SA 2.5
null
2011-04-01T00:05:27.990
2011-12-05T16:01:31.130
2011-04-01T10:38:48.073
930
1991
[ "r", "generalized-linear-model" ]
9028
2
null
9027
2
null
Do you want to take the average of the predicted probabilities, or the average of the coefficients? They will give different results, because a logistic regression involves a nonlinear transform of the linear predictor. A function to do either would be something like this. Set `avg` to `"prob"` to get the former, or something else for the latter. ``` # Average two fitted logistic regressions over new data `dat`, either on # the probability scale (avg = "prob") or on the log-odds scale. pred_comb <- function(mod1, mod2, dat, avg="prob", ...) { xb1 <- predict(mod1, dat, type="link", ...) # linear predictor from model 1 xb2 <- predict(mod2, dat, type="link", ...) # linear predictor from model 2 if(avg == "prob") (plogis(xb1) + plogis(xb2))/2 # mean of the two probabilities else plogis((xb1 + xb2)/2) # probability at the mean log-odds } ``` Note that since both models use the same variables, averaging the linear predictors is the same as averaging the coefficients.
null
CC BY-SA 2.5
null
2011-04-01T00:44:23.557
2011-04-01T00:50:06.837
2011-04-01T00:50:06.837
1569
1569
null
9029
1
9057
null
9
2598
I'm working with a data set with 2-3 response variables and 7 predictor variables. All the variables are categorical. If there were just one response variable, I think a multinomial logit would be the right model, but there are 2 or 3. So my question is - is there a multivariate version of the multinomial logit? I've looked at several books on categorical data, but haven't seen anything like this (mainly using Agresti 2002). I have about 2000 observations, though I'll probably need to split it up into 2 or 3 data subsets to really see what's going on. One thing I was thinking about is converting it to counts and use a model for count data. I could also combine the 2-3 response vars into 1 categorical with a lot of categories, but I think that will lower the chances of anything showing up for any of the categories. I could also do 2-3 separate models, one for each variable, which is obviously not as good. I might also be able to get rid of some of the predictors (I think 3 of the 7 have the most explanatory power). I'm not opposed to using machine learning methods, I've found some interesting stuff already with decision trees. thanks, -paul
Is there a version of multivariate multinomial logit?
CC BY-SA 2.5
null
2011-04-01T05:06:28.050
2014-10-18T17:16:56.540
2011-04-01T06:02:59.600
2116
3984
[ "multivariate-analysis", "categorical-data", "multinomial-distribution" ]
9030
1
9060
null
6
108
I am looking for a document or research articles classifying physical or chemical measurements (or perhaps better means of measurement) according to the reference statistical distribution and properties they have. Example of questions studied would be - What kind of weight balance equipment, technology and procedure have the most gaussian behavior? In what order scale range? - What is the typical distribution of error in a spectroscopic ray analyzer? - What is the typical distribution of an amperemeter, a voltmeter, etc.? - What are the noise distributions of the various diode technologies, etc.? I am particularly interested by hints of "exotical" distributions in some common apparatus.
Is there a classification of physical measurements according to their statistical distribution?
CC BY-SA 2.5
null
2011-04-01T07:38:22.440
2022-06-20T12:23:38.553
2011-04-01T10:44:32.360
930
3840
[ "distributions", "repeated-measures", "measurement", "white-noise" ]
9032
2
null
9014
1
null
After reading up on the Girsanov theorem and martingale theory, I can offer the following observations. First, if we have a filtration $\mathcal{F}_t$ and two probability measures $P$ and $Q$ for which the Radon-Nikodym derivative $\frac{dQ}{dP}$ exists, then for each $\mathcal{F}_t$ there exists a Radon-Nikodym derivative $D_t$ with respect to $\mathcal{F}_t$, and $D_t$ is a uniformly integrable martingale with respect to $\mathcal{F}_t$ and $P$. Now if we have a measure $P$ and a martingale $Z_t$ with filtration $\mathcal{F}_t$, we can define the set function $Q=Z_t\cdot P$ on $\cup \mathcal{F}_t$. It will define a unique probability measure $Q$ on $\sigma(\cup \mathcal{F}_t)$ if $Z_t$ has additional properties, $EZ_t\equiv 1$ being one of them. Going into more detail would require reposting a book on this topic, which is not feasible. I read [this one](http://books.google.com/books?id=1ml95FLM5koC&printsec=frontcover&dq=revuz%20yor&hl=fr&ei=bJOVTY-GCcmbOqW8-acH&sa=X&oi=book_result&ct=result&resnum=1&ved=0CC0Q6AEwAA#v=onepage&q&f=false); Chapter VIII is a good read for clarifying things. Naturally other books can be found. Note that this is really a comment, not an answer. I suggest trying to ask at math.SE, with details of what exactly you did not understand, since the question in its current format can be answered in many different ways; there is a lot going on here.
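To make the first observation concrete: the restricted derivative $D_t$ can be written as a conditional expectation, and its martingale property then follows from the tower rule (a standard identity, stated here for reference rather than taken from the text above):

```latex
D_t = \left.\frac{dQ}{dP}\right|_{\mathcal{F}_t}
    = E_P\!\left[\frac{dQ}{dP}\,\middle|\,\mathcal{F}_t\right],
\qquad
E_P\!\left[D_t \mid \mathcal{F}_s\right]
    = E_P\!\left[\,E_P\!\left[\tfrac{dQ}{dP}\,\middle|\,\mathcal{F}_t\right]\,\middle|\,\mathcal{F}_s\right]
    = D_s, \quad s \le t.
```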
null
CC BY-SA 3.0
null
2011-04-01T09:02:56.170
2011-04-12T11:45:39.793
2011-04-12T11:45:39.793
2116
2116
null