Dataset columns (one value per line in each row below, in this order):

- Id: string, 1–6 chars
- PostTypeId: string, 7 classes
- AcceptedAnswerId: string, 1–6 chars
- ParentId: string, 1–6 chars
- Score: string, 1–4 chars
- ViewCount: string, 1–7 chars
- Body: string, 0–38.7k chars
- Title: string, 15–150 chars
- ContentLicense: string, 3 classes
- FavoriteCount: string, 3 classes
- CreationDate: string, 23 chars
- LastActivityDate: string, 23 chars
- LastEditDate: string, 23 chars
- LastEditorUserId: string, 1–6 chars
- OwnerUserId: string, 1–6 chars
- Tags: list
10468
2
null
10450
0
null
Here's an older answer to a similar question on SO. It has some code that you could try/modify: [Similar Question](https://stackoverflow.com/questions/1040324/how-to-generate-pseudo-random-positive-definite-matrix-with-constraints-on-the-of) Some other links: [Forecasting Covariance Matrices](http://www.oxford-man.ox.ac.uk/documents/papers/2011OMI08_Sheppard.pdf) [Various Matrix Techniques](http://www.kevinsheppard.com/images/4/47/Chapter8.pdf) [Matrix Shrinkage Technique](http://arxiv.org/PS_cache/arxiv/pdf/1009/1009.5331v1.pdf)
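For readers who just want the basic construction behind the linked SO question, here is a minimal sketch (my own Python/NumPy illustration, not code from any of the links): any matrix of the form $AA^\top$ is symmetric positive semi-definite, and adding a small multiple of the identity makes it strictly positive definite.

```python
import numpy as np

def random_pd_matrix(n, jitter=1e-6, seed=0):
    """Generate a random symmetric positive-definite n x n matrix.

    A @ A.T is always symmetric positive semi-definite; the small
    'jitter' ridge pushes every eigenvalue strictly above zero.
    """
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    return a @ a.T + jitter * np.eye(n)

s = random_pd_matrix(5)
# eigvalsh returns eigenvalues of a symmetric matrix; all must be > 0
assert np.all(np.linalg.eigvalsh(s) > 0)
```

Imposing constraints on the off-diagonal entries (the subject of the SO question) takes more work; the links above cover that.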
null
CC BY-SA 3.0
null
2011-05-07T16:01:26.687
2011-05-11T15:03:29.787
2017-05-23T12:39:26.143
-1
2775
null
10469
2
null
10459
4
null
There are several movie versions of [Flatland](http://en.wikipedia.org/wiki/Flatland). And there's The Great $\pi$/e Debate.
null
CC BY-SA 3.0
null
2011-05-07T16:13:17.890
2011-05-07T19:40:38.563
2011-05-07T19:40:38.563
264
3874
null
10470
2
null
10459
10
null
[N Is a Number: A Portrait of Paul Erdős](http://zalafilms.com/films/nisanumber.html)
null
CC BY-SA 3.0
null
2011-05-07T16:26:44.237
2011-05-07T16:26:44.237
null
null
22
null
10471
2
null
10459
4
null
[The Social Network](http://en.wikipedia.org/wiki/The_Social_Network#Plot) begins with a one-night hackathon in which Mark Zuckerberg uses the [Elo rating system algorithm](http://en.wikipedia.org/wiki/Elo_rating_system) to

> ... create a website that rates the attractiveness of female students when compared to each other. ... in a few hours, using an algorithm for ranking chess players supplied by his best friend, Eduardo Saverin, he creates a website called "FaceMash," where students can choose which of two girls presented at a time is more attractive.

However, much of the rest of the movie is devoted to episodes of hacking, corporate politics, lawsuits, escapades, Zuckerberg's interpersonal problems, etc. But I found it quite fascinating overall. A great geek movie.
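For the curious, the chess-ranking algorithm mentioned is easy to sketch. This is a generic Elo update (my own illustration with a conventional K-factor of 32, not anything from the film):

```python
def elo_update(r_a, r_b, a_wins, k=32):
    """One Elo update: compute A's expected score, then adjust both ratings.

    The update is zero-sum: whatever A gains, B loses.
    """
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    score_a = 1.0 if a_wins else 0.0
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# Two equally rated players: the winner gains exactly k/2 = 16 points.
a, b = elo_update(1400, 1400, True)
assert (a, b) == (1416.0, 1384.0)
```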
null
CC BY-SA 3.0
null
2011-05-07T17:20:21.900
2011-05-07T17:20:21.900
null
null
4508
null
10472
2
null
10459
10
null
[Proof](http://www.imdb.com/title/tt0377107/) was pretty good.
null
CC BY-SA 3.0
null
2011-05-07T17:43:25.480
2011-05-07T17:43:25.480
null
null
3748
null
10473
2
null
10366
1
null
If your ultimate goal is to embed your plot in a LaTeX document, you might consider using the [gnuplottex](ftp://ftp.dante.de/tex-archive/help/Catalogue/entries/gnuplottex.html) package (as an alternative to [pgfplots](http://ctan.org/pkg/pgfplots), which is an awesome package). The idea is rather simple: you write your code chunk directly in your tex document (like you would do with Sweave), et voilà! Here is an example (grabbed from the gnuplot demos):

```
\documentclass{standalone}
\usepackage{gnuplottex}
\begin{document}
\begin{gnuplot}[scale=0.95, terminal=epslatex]
set style fill transparent pattern 4 bo
set style function filledcurves y1=0
set clip two
Gauss(x,mu,sigma) = 1./(sigma*sqrt(2*pi)) * exp( -(x-mu)**2 / (2*sigma**2) )
d1(x) = Gauss(x, 0.5, 0.5)
d2(x) = Gauss(x, 2., 1.)
d3(x) = Gauss(x, -1., 2.)
set xrange [-5:5]
set yrange [0:1]
set xlabel "$\\bar x$ values"
unset colorbox
plot d1(x) fs solid 1.0 lc rgb "forest-green", \
     d2(x) lc rgb "gold", d3(x) lc rgb "red"
\end{gnuplot}
\end{document}
```

You'll need to compile with the `-shell-escape` option to `pdflatex`. ![enter image description here](https://i.stack.imgur.com/zhQyn.png)
null
CC BY-SA 3.0
null
2011-05-07T18:40:09.827
2011-05-07T18:40:09.827
null
null
930
null
10474
2
null
10459
11
null
Not a movie, but a TV series: [Numb3rs](http://en.wikipedia.org/wiki/Numb3rs)
null
CC BY-SA 3.0
null
2011-05-07T18:40:18.220
2011-05-07T18:40:18.220
null
null
264
null
10475
2
null
7045
4
null
I would suggest trying the [glmnet package](http://cran.r-project.org/web/packages/glmnet/index.html) for feature selection. glmnet uses the [elastic net](http://citeseer.ist.psu.edu/viewdoc/download;jsessionid=6DD6392682DB8F43219A2C81171B64CF?doi=10.1.1.124.4696&rep=rep1&type=pdf) for regularization and feature selection, and is a pretty solid method for choosing a subset of variables for a glm. If you want to automatically choose transformations for your dependent variable, I suggest checking out the [preProcess](http://www.oga-lab.net/RGM2/func.php?rd_id=caret%3apreProcess) function in the [caret](http://cran.r-project.org/web/packages/caret/index.html) package. preProcess can help you choose sensible transformations for both your dependent and independent variables. So, preProcess will select transformations for you, such as log(y). Then glmnet will select a sensible subset of your variables to build a model (i.e. not all of your independent variables may be important for your model). You can use the cv.glmnet function to fine-tune the hyper-parameters. I hope I've answered all your questions!
null
CC BY-SA 3.0
null
2011-05-07T18:54:02.790
2011-05-07T18:54:02.790
null
null
2817
null
10476
2
null
10459
8
null
[21](http://www.imdb.com/title/tt0478087/) - based on the book Bringing Down the House (about the MIT Blackjack team). Near the beginning they discuss the Monty Hall problem. However, after that there isn't much actual math/probability.
null
CC BY-SA 3.0
null
2011-05-07T18:56:38.453
2011-05-07T23:56:16.110
2011-05-07T23:56:16.110
4360
2310
null
10477
2
null
10459
5
null
I have not seen this yet, but it seems somewhat geeky: [Fermat's Room](http://www.imdb.com/title/tt1016301/)
null
CC BY-SA 3.0
null
2011-05-07T19:25:34.977
2011-05-07T19:25:34.977
null
null
795
null
10478
1
277016
null
15
5009
Are there any analytical results or experimental papers regarding the optimal choice of the coefficient of the $\ell_1$ penalty term? By optimal, I mean a parameter that maximizes the probability of selecting the best model, or that minimizes the expected loss. I am asking because often it is impractical to choose the parameter by cross-validation or bootstrap, either because of a large number of instances of the problem or because of the size of the problem at hand. The only positive result I am aware of is Candes and Plan, [Near-ideal model selection by $\ell_1$ minimization](http://www-stat.stanford.edu/~candes/papers/LassoPredict.pdf).
Optimal penalty selection for lasso
CC BY-SA 3.0
null
2011-05-07T22:37:35.410
2022-08-27T18:48:57.300
2011-05-07T23:57:03.037
null
30
[ "model-selection", "lasso", "regularization" ]
10479
2
null
9715
1
null
You could also try running a multinomial logit using the glmnet package. I'm not sure how to force it to keep all variables, but I'm sure it's possible.
null
CC BY-SA 3.0
null
2011-05-07T22:51:34.940
2011-05-07T22:51:34.940
null
null
2817
null
10480
1
10485
null
5
3796
This is a homework problem out of the book. It says:

> If $U$ is a uniform random variable on $[0,1]$, what is the distribution of the random variable $X = [nU]$, where $[t]$ denotes the greatest integer less than or equal to $t$?

There is a second part that says:

> Do this for $n = 10$. True or false, and explain: "Random digit" is a good name for the random variable $X$.

I don't even know where to begin. All the book says about the uniform distribution is that the uniform density is $f(x) = 1$, $0 \le x \le 1$. What does $t$ represent, and how do I start this problem?
Uniform random variable distribution
CC BY-SA 3.0
null
2011-05-07T23:24:21.390
2011-05-08T04:51:12.043
null
null
4401
[ "distributions", "self-study", "uniform-distribution" ]
10481
2
null
10480
3
null
$t$ is just a placeholder name for a variable; the actual focus in that statement is on the square brackets, which denote the [floor](http://en.wikipedia.org/wiki/Floor_and_ceiling_functions) function. I would start by plotting the function that maps $U$ to $X$, that is, $X(u)=[nu]$, over the range of values $U$ can take. What does the set of possible function values (i.e., the [image](http://en.wikipedia.org/wiki/Image_%28mathematics%29) of the function) look like? How large are their preimages relative to one another? That should give you a clue how to derive the distribution function of $X$.
null
CC BY-SA 3.0
null
2011-05-07T23:38:16.203
2011-05-08T04:51:12.043
2011-05-08T04:51:12.043
2116
4360
null
10482
1
null
null
3
510
I was impressed by the Reputation histogram on the SE sites allowing one to zoom in to any time interval. How was it created?
How to create histogram with "zoom-in" feature
CC BY-SA 3.0
null
2011-05-07T23:00:27.467
2011-05-07T23:53:52.087
2011-05-07T23:53:52.087
null
null
[ "data-visualization" ]
10483
2
null
10482
5
null
This is a bar plot rather than a histogram... Anyway, judging from the page source, it is made with [Highcharts JS](http://www.highcharts.com/).
null
CC BY-SA 3.0
null
2011-05-07T23:51:42.400
2011-05-07T23:51:42.400
null
null
null
null
10484
1
10489
null
10
675
I have some data in R, stored in a list. Think

```
d <- c(1,2,3,4)
```

although this is not my data. If I then enter the command

```
plot(density(d, kernel="gaussian", width=1))
```

then I get the kernel probability density estimate, where the kernel is standard normal. If I replace 1 with other numbers, of course the picture changes.

What I would like to do is create a video or animation in which each frame is such a plot, but the bandwidth of the kernel varies from frame to frame, thereby showing the effect of changing the bandwidth. How can I do this? (My apologies if this is not the right place to ask questions about R.)
Animating the effect of changing kernel width in R
CC BY-SA 3.0
null
2011-05-08T00:25:47.643
2015-04-23T05:58:23.283
2015-04-23T05:58:23.283
9964
98
[ "r", "kernel-smoothing" ]
10485
2
null
10480
5
null
$[t]$ is the floor function, and $t$ just represents a generic argument. So, for example, $[0.5]=0$, $[0.9]=0$, $[1.01]=1$, $[1]=1$, $[23.567]=23$, and so on. You simply ignore what's written after the decimal point (note: this is not the same thing as rounding, for $[0.9]=0$ whereas rounding would give $1$).

With non-smooth functions such as the floor function, the safest way to go is to use the cumulative distribution function, or CDF. For the uniform distribution this is given by: $$F_{U}(y)=\Pr(U<y)=\int_{0}^{y}f_{U}(t)\,dt=\int_{0}^{y}dt=y \qquad (0 \le y \le 1)$$

Now the good thing about CDFs is that you can simply substitute the functional relation in, but only once you have inverted the floor function. This inversion is not one-to-one, so a standard change of variables using Jacobians doesn't apply. For example, suppose $X=0$. Then we know that $[nU]=0$, which means that $nU<1$, which implies that $U<n^{-1}$. We can work out this probability directly from the CDF: $$\Pr(X=0)=\Pr(U<n^{-1})=F_{U}(n^{-1})=n^{-1}$$

The reason we can do this is that the two propositions "$X=0$" and "$U<n^{-1}$" are equivalent: one occurs if and only if the other occurs. So they must have the same "truth value" and hence also the same probability.

This is not too hard to continue. Suppose $X=1$; then we must have $nU<2$ (or else $X>1$), and we must also have $nU>1$ (or else $X=0$, as we have just seen). So the condition equivalent to $X=1$ in terms of $U$ is $1<nU<2$.

I'll stop my answer here so you can work out the general form of the probability mass function of $X$ ($\Pr(X=z)$ for a general argument $z$). One small hint: note that $\Pr(a<U<b)=\Pr(U<b)-\Pr(U<a)=b-a$ for a uniform distribution. I can post the full answer if you wish, but you may not learn as well as if you do it yourself.
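A quick way to convince yourself of the result (my own addition, not part of the hint above): simulate $X=[nU]$ and check that the empirical frequencies are flat, i.e. $\Pr(X=k)=1/n$ for $k=0,\dots,n-1$, which is exactly why "random digit" fits when $n=10$.

```python
import random

def floor_of_nu(n, trials=100_000, seed=1):
    """Empirical distribution of X = floor(n*U), U ~ Uniform[0, 1)."""
    rng = random.Random(seed)
    counts = [0] * n
    for _ in range(trials):
        # int() truncates toward zero, which equals floor() for n*U >= 0.
        counts[int(n * rng.random())] += 1
    return [c / trials for c in counts]

probs = floor_of_nu(10)
# Each digit 0..9 should occur with probability close to 1/10.
assert all(abs(p - 0.1) < 0.01 for p in probs)
```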
null
CC BY-SA 3.0
null
2011-05-08T00:45:37.190
2011-05-08T03:32:23.783
2011-05-08T03:32:23.783
2970
2392
null
10486
1
null
null
4
1803
Suppose I want to predict Amazon or Netflix demand, using demand data over the past year. For example, I might want to forecast the number of sales in the Electronics category on Amazon, or the number of times someone wants to rent Titanic on Netflix. My dataset consists of daily demand per item over the past couple of months, along with item metadata (tags and categories), split by things like customer demographics (age group, gender, location, browser, job -- some of these might be unknown). To be concrete, let's suppose I want to forecast the number of times someone wants to rent a Comedy on Netflix, and I want to make this forecast at various levels (e.g., overall, by the state the customer lives in, by male/female, etc.). How would I go about this? My naive first thought is to form a time series at each level I care about (e.g., form a time series of comedy demand by all the males living in Florida), and build some kind of time series model on top of this (I guess an ARIMA model...?). But this seems wrong for a bunch of reasons (not only would I be building a ton of different models for all the different possible levels, but each level would be ignoring a lot of data from closely related levels). Any suggestions? Surprisingly, I couldn't find any papers related to this problem when Googling, but I might just be using the wrong search terms. (I learned a smidgen of time series analysis a couple years ago, but I was incredibly bad at it.) Also, I'm interested in both methods (what algorithms to use) and particular statistical libraries that might be useful (e.g., R packages or Python libraries).
Forecasting Amazon or Netflix demand
CC BY-SA 3.0
null
2011-05-08T00:51:51.530
2011-05-09T15:48:15.743
2011-05-08T02:27:15.560
2116
1106
[ "time-series", "forecasting" ]
10487
2
null
10486
3
null
When you have a number of endogenous series that are possibly cross-related, plus a number of exogenous series, this is referred to as a vector ARIMA (VARIMA) model, a superset of a VAR model and an ARIMA model. We have produced daily forecasts for a family type and daily forecasts for its "children" (subset categories) by incorporating the parent as a possible predictor variable when modeling each of the children, which speaks to your concern about "ignoring data". VARIMA models are not really practical because each series might have different dependencies on level shifts and local time trends. My suggestion is that you form "reasonable families", model both the parent and the children using ARMAX models, and then use a reconciliation strategy.
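As a toy illustration of the reconciliation idea (my own sketch, not the answerer's actual procedure): the simplest strategy proportionally rescales the independently modeled child forecasts so they sum to the parent's forecast, preserving the children's relative shares.

```python
def reconcile_proportionally(parent_forecast, child_forecasts):
    """Scale independently produced child forecasts so their total
    matches the parent forecast, keeping relative proportions fixed.

    child_forecasts: dict mapping child name -> raw forecast.
    """
    total = sum(child_forecasts.values())
    if total <= 0:
        raise ValueError("child forecasts must sum to a positive number")
    factor = parent_forecast / total
    return {name: f * factor for name, f in child_forecasts.items()}

# Children over-forecast 200 total, but the parent model says 180.
children = {"comedy": 120.0, "drama": 60.0, "horror": 20.0}
reconciled = reconcile_proportionally(180.0, children)
assert abs(sum(reconciled.values()) - 180.0) < 1e-9
```

More sophisticated reconciliation weights the children by the reliability of their individual models, but the scaling idea is the same.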
null
CC BY-SA 3.0
null
2011-05-08T01:17:36.197
2011-05-08T02:40:32.763
2011-05-08T02:40:32.763
3382
3382
null
10488
2
null
10484
7
null
One way to go is to use the excellent [animation](http://animation.yihui.name/animation%3astart) package by Yihui Xie. I uploaded a very simple example to my public dropbox account: [densityplot](http://dl.dropbox.com/u/6973449/RtmpeKvc2B/index.html) (I will remove this example in 3 days). Is this what you are looking for? The animation was created using the following R code:

```
library(animation)

density.ani <- function(){
  i <- 1
  d <- c(1,2,3,4)
  while (i <= ani.options("nmax")) {
    plot(density(d, kernel="gaussian", bw = i), ylim = c(0, 0.25))
    ani.pause()
    i <- i + 1
  }
}

saveHTML({
  par(mar = c(5, 4, 1, 0.5))
  density.ani()
}, nmax = 30, title = "Changing kernel width")
```
null
CC BY-SA 3.0
null
2011-05-08T02:16:55.657
2011-05-08T10:44:42.037
2011-05-08T10:44:42.037
307
307
null
10489
2
null
10484
11
null
It depends a little bit on what your end goal is.

**Quick and dirty hack for real-time demonstrations**

Using `Sys.sleep(seconds)` in a loop, where `seconds` indicates the number of seconds between frames, is a viable option. You'll need to set the `xlim` and `ylim` parameters in your call to `plot` to make things behave as expected. Here's some simple demonstration code.

```
# Just a quick test of Sys.sleep() animation
x <- seq(0, 2*pi, by=0.01)
y <- sin(x)
n <- 5
pause <- 0.5
ybnds <- quantile(n*y, probs=c(0,1))

x11()

# Draw successively taller sinewaves with a gradually changing color
for( i in 1:n ) {
  plot(x, i*y, type="l", lwd=2, ylim=ybnds, col=topo.colors(2*n)[i])
  Sys.sleep(pause)
}
```

This works pretty well, especially using X-Windows as the windowing system. I've found that Mac's `quartz()` does not play nice, unfortunately.

**Animated GIFs**

If you need something that can be redistributed, posted on a webpage, etc., look at the `write.gif` function in the [caTools](http://cran.r-project.org/web/packages/caTools/index.html) package. Displaying help on `write.gif` gives several nice examples, including a couple of animations, one with a quite nice example using the Mandelbrot set. See also [here](http://en.wikibooks.org/wiki/R_Programming/Graphics#Animated_plots) and [here](http://www.oga-lab.net/RGM2/func.php?rd_id=caTools%3aGIF).

**More fine-tuned control and fancier animations**

There is an [animation](http://cran.r-project.org/web/packages/animation/index.html) package that looks pretty capable. I haven't used it myself, though, so I can't give any real recommendations either way. I have seen a few good examples of output from this package and they look pretty nice. Perhaps one of the "highlights" is the ability to embed an animation in a PDF.
null
CC BY-SA 3.0
null
2011-05-08T02:17:18.763
2011-05-08T02:17:18.763
null
null
2970
null
10490
2
null
6702
2
null
To answer my own question, I've moved over to using the glmnet package to fit my multinomial logits, which has the added advantage of using the lasso or elastic net to regularize my independent variables. glmnet seems to be a much more "finished" package than mlogit, complete with a `predict` function.
null
CC BY-SA 3.0
null
2011-05-08T02:23:47.900
2011-05-08T02:23:47.900
null
null
2817
null
10491
2
null
73
4
null
If you are doing any kind of predictive modeling, [caret](http://cran.r-project.org/web/packages/caret/index.html) is a godsend. Especially combined with the [multicore](http://cran.r-project.org/web/packages/multicore/index.html) package, some pretty amazing things are possible.
null
CC BY-SA 3.0
null
2011-05-08T02:26:21.970
2011-05-08T02:26:21.970
null
null
2817
null
10492
2
null
3392
6
null
Excel is no good for statistics, but it can be wonderful for exploratory data analysis. [Take a look at this video](http://www.r-bloggers.com/getting-into-shape-for-the-sport-of-data-science-screencast-of-talk-by-jeremy-howard-at-melbourne-r-users/) for some particularly interesting techniques. Excel's ability to conditionally color your data and add in-cell bar charts can give great insight into the structure of your raw data.
null
CC BY-SA 3.0
null
2011-05-08T02:32:10.377
2011-05-08T02:32:10.377
null
null
2817
null
10493
2
null
10486
4
null
If you do a good enough job modeling the important predictor variables, you probably will not need to worry as much about the time-series aspects (you should probably still test for serial correlation and adjust for it if needed). Most of the time-series-style association you will see can easily be modeled by things like day of the week, holiday/vacation indicators, and time since the DVD release or since some form of advertising or event that spurs rentals of a particular movie.
null
CC BY-SA 3.0
null
2011-05-08T02:49:16.147
2011-05-09T15:48:15.743
2011-05-09T15:48:15.743
4505
4505
null
10494
2
null
3392
7
null
Another good reference source for why you might not want to use Excel is [Spreadsheet addiction](http://www.burns-stat.com/pages/Tutor/spreadsheet_addiction.html). If you find yourself in a situation where you really need to use Excel (some academic departments insist), then I would suggest using the [RExcel plugin](http://rcom.univie.ac.at/). This lets you interface through Excel but uses R as the computational engine. You don't need to know R to use it (you can use drop-down menus and dialogs), but you can do a lot more if you do. Since R is doing the computations, they are a lot more trustworthy than Excel's, and you get much better graphics, including boxplots and other plots missing from Excel. It even works with automatic cell updating in Excel (though that can make things really slow if you have a lot of complex analyses to recompute every time). It does not fix all the problems from the Spreadsheet Addiction page, but it is a huge improvement over using straight Excel.
null
CC BY-SA 3.0
null
2011-05-08T03:01:16.803
2011-05-08T03:01:16.803
null
null
4505
null
10495
2
null
10484
4
null
Here is another approach:

```
library(TeachingDemos)

d <- c(1,2,3,4)

tmpfun <- function(width=1, kernel='gaussian'){
  plot(density(d, width=width, kernel=kernel))
}

tmplst <- list(
  width=list('slider', init=1, from=.5, to=5, resolution=.1),
  kernel=list('radiobuttons', init='gaussian',
              values=c('gaussian', "epanechnikov", "rectangular",
                       "triangular", "biweight", "cosine", "optcosine")))

tkexamp( tmpfun, tmplst, plotloc='left' )
```
null
CC BY-SA 3.0
null
2011-05-08T03:22:27.187
2011-05-08T03:22:27.187
null
null
4505
null
10496
2
null
10484
5
null
Just for the sake of completeness, if you need this for a class demonstration, I would also mention the `manipulate` package, which comes with [RStudio](http://www.rstudio.org/). Note that this package depends on the RStudio interface, so it won't work outside of it. `manipulate` is quite cool because it allows you to quickly create sliders to manipulate any element in the plot, making easy, real-time demonstrations in class possible.

```
manipulate(
  plot(density(1:10, bw)),
  bw = slider(0, 10, step = 0.1, initial = 1))
```

Other examples [here](http://www.rstudio.org/docs/advanced/manipulate)
null
CC BY-SA 3.0
null
2011-05-08T05:01:33.230
2011-05-08T05:01:33.230
null
null
582
null
10497
1
10548
null
7
1974
I became interested in doing this in C# for my own amusement after reading the following paper: [http://www.cs.washington.edu/homes/brun/pubs/pubs/Kiddon11.pdf](http://www.cs.washington.edu/homes/brun/pubs/pubs/Kiddon11.pdf). I also took a look at [http://www.cs.rpi.edu/academics/courses/fall03/ai/misc/naive-example.pdf](http://www.cs.rpi.edu/academics/courses/fall03/ai/misc/naive-example.pdf) as a concrete example for my implementation. I have a working implementation now, but I wanted to make sure that I was approaching it properly. I just want a solid naive Bayes classifier (unigram).

**Problem statement and setup**

I am using two sets of data: a list of sentences that ARE "that's what she said" and a list of sentences that don't make sense with a "that's what she said" suffix. Next I parse through all the words in all of the sentences and keep a tally of each word and how many times it was found in each of the two sets, so I might end up with data that looks like this:

```
Word    PositiveCount    NegativeCount
wet     23               4
hard    30               5
haiti   0                20
to      60               77
```

Then I iterate over all of the words and calculate individual $\Pr(\text{Positive}|\text{<word>})$ and $\Pr(\text{Negative}|\text{<word>})$ using the following formula, which I found in the example paper above:

```
P(Positive|wet) = (23 + p * m) / ((23 + 4) + m)
P(Negative|wet) = (4  + p * m) / ((23 + 4) + m)
```

where $m$ is the equivalent sample size and $p$ is the a priori estimate. Then, to check whether an unknown sentence is a TWSS, I iterate over each word in the sentence, multiply their positive probabilities together, and multiply all that by $p$; I do the same for the negative side. If the positive number is larger, I say the sentence is a "that's what she said".

**Questions**

- Currently I am using $p = 0.5$ for both positive and negative. I feel like I could be doing something better. Is this what the Bayesian vs. frequentist thing is about? How would I go about getting better numbers for $p$?
- Also, I am using $m$-estimates for $\Pr(\text{Yes/No}|\text{<word>})$. Should I be doing it this way, and what should $m$ be? What effect does making $m$ larger or smaller have?

Super minor question: suggestions on where to get sample data from would be a bonus.
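To make the setup concrete, here is a small sketch of the classifier described above (my own Python illustration; the word counts are the question's toy numbers, and using log-probabilities avoids floating-point underflow when sentences get long):

```python
import math

def m_estimate(count, total, p=0.5, m=2.0):
    """m-estimate of P(class | word): (count + p*m) / (total + m)."""
    return (count + p * m) / (total + m)

def classify(words, counts, p_positive=0.5, m=2.0):
    """Naive Bayes decision; True means 'that's what she said'.

    counts maps word -> (positive_count, negative_count). Summing
    log-probabilities is equivalent to multiplying probabilities but
    numerically stable for long sentences.
    """
    log_pos = math.log(p_positive)
    log_neg = math.log(1.0 - p_positive)
    for w in words:
        pos, neg = counts.get(w, (0, 0))
        total = pos + neg
        log_pos += math.log(m_estimate(pos, total, p_positive, m))
        log_neg += math.log(m_estimate(neg, total, 1.0 - p_positive, m))
    return log_pos > log_neg

counts = {"wet": (23, 4), "hard": (30, 5), "haiti": (0, 20), "to": (60, 77)}
assert classify(["wet", "hard"], counts) is True
assert classify(["haiti"], counts) is False
```

Note the $m$-estimate also handles unseen words gracefully: with zero counts it falls back to the prior $p$ instead of a hard zero.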
Naive Bayes classification for "That's what she said" problem
CC BY-SA 3.0
null
2011-05-08T05:08:07.753
2011-05-09T19:59:17.390
2011-05-08T20:02:33.823
4513
4513
[ "machine-learning", "naive-bayes" ]
10498
2
null
8807
1
null
I can recommend two interesting papers that are available online:

1. [Streamed Learning: One-Pass SVMs, by Piyush Rai, Hal Daumé III, Suresh Venkatasubramanian](https://www.ijcai.org/Proceedings/09/Papers/204.pdf)
2. [Streaming k-means approximation, by Nir Ailon](http://www1.cs.columbia.edu/%7Erjaiswal/ajmNIPS09.pdf)

I hope they help clarify your ideas a little.
null
CC BY-SA 4.0
null
2011-05-08T05:51:21.903
2022-07-12T14:24:45.640
2022-07-12T14:24:45.640
-1
1808
null
10499
2
null
10478
5
null
I take it that you are mostly interested in regression, as in the cited paper, and not in other applications of the $\ell_1$-penalty (graphical lasso, say). I then believe that some answers can be found in the paper [On the “degrees of freedom” of the lasso](https://projecteuclid.org/journals/annals-of-statistics/volume-35/issue-5/On-the-degrees-of-freedom-of-the-lasso/10.1214/009053607000000127.full) by Zou et al. Briefly, it gives an analytic formula for the effective degrees of freedom, which for the squared error loss allows you to replace CV by an analytic $C_p$-type statistic, say. Another place to look is [The Dantzig selector: Statistical estimation when p is much larger than n](https://projecteuclid.org/journals/annals-of-statistics/volume-35/issue-6/The-Dantzig-selector--Statistical-estimation-when-p-is-much/10.1214/009053606000001523.full) and the discussion papers in the same issue of the Annals of Statistics. My understanding is that they solve a problem closely related to lasso regression, but with a fixed choice of penalty coefficient. Please take a look at those discussion papers too. If you are not interested in prediction but in model selection, I am not aware of similar results; prediction-optimal models often end up with too many selected variables in regression. In the paper [Stability selection](http://onlinelibrary.wiley.com/doi/10.1111/j.1467-9868.2010.00740.x/abstract;jsessionid=C4D77523A208F52E5BB4344781600CB3.d03t04), Meinshausen and Bühlmann present a subsampling technique more useful for model selection, but it may be too computationally demanding for your needs.
null
CC BY-SA 4.0
null
2011-05-08T07:10:23.307
2022-08-27T18:48:57.300
2022-08-27T18:48:57.300
79696
4376
null
10500
2
null
10459
9
null
[The Cube](http://en.wikipedia.org/wiki/Cube_%28film%29)
null
CC BY-SA 3.0
null
2011-05-08T07:16:23.570
2011-05-08T07:16:23.570
null
null
4376
null
10501
1
226972
null
21
24332
It is easy to find a package calculating area under ROC, but is there a package that calculates the area under precision-recall curve?
Calculating AUPR in R
CC BY-SA 3.0
null
2011-05-08T07:32:35.350
2019-07-17T15:54:06.683
null
null
null
[ "r", "precision-recall" ]
10502
2
null
10501
2
null
A little googling turns up one Bioconductor package, [qpgraph](http://www.bioconductor.org/packages/2.4/bioc/html/qpgraph.html) (`qpPrecisionRecall`), and one CRAN package, [minet](http://cran.r-project.org/web/packages/minet/) (`auc.pr`). I have no experience with them, though. Both were devised to deal with biological networks.
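If you'd rather avoid a package dependency, the area under the precision-recall curve can also be computed directly from labels and scores. Here is a sketch (my own, using the common average-precision-style step summation, which is not necessarily what either package implements):

```python
def aupr(labels, scores):
    """Area under the precision-recall curve via step-wise summation:
    at each positive encountered (scanning by decreasing score), recall
    rises by 1/n_pos, weighted by the precision at that point.

    labels: 0/1 class labels; scores: higher means more likely positive.
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n_pos = sum(labels)
    tp = fp = 0
    area = 0.0
    for i in order:
        if labels[i] == 1:
            tp += 1
            area += (tp / (tp + fp)) / n_pos
        else:
            fp += 1
    return area

# A perfect ranking puts every positive ahead of every negative.
assert aupr([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]) == 1.0
```

Note that unlike the ROC case, interpolating the PR curve linearly between points is known to be misleading, which is why the step summation above is the usual choice.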
null
CC BY-SA 3.0
null
2011-05-08T08:17:15.270
2011-05-08T08:17:15.270
null
null
930
null
10503
1
null
null
4
198
I have a set of data (positive real numbers < 1) in five categories. My aim is to show that, across a range of examples (the data set), the data in the last category are bigger than in the other categories, though the last category isn't necessarily more important than the others. I thought the best approach would be to average each category and then plot the averages, which works. But I feel this is too simple. Would it also make sense to look at the standard deviation of each category, to show that there isn't too much spread and that taking the average is valid? Also, I was going to use a scatter plot; is this OK? What would you recommend?
Set of data and averaging/standard deviation
CC BY-SA 3.0
null
2011-05-08T08:44:12.670
2011-05-08T10:03:27.937
2011-05-08T10:03:27.937
4515
4515
[ "standard-deviation", "simulation", "mean" ]
10504
2
null
73
2
null
lattice, car, MASS, foreign, party.
null
CC BY-SA 3.0
null
2011-05-08T11:53:51.660
2011-05-08T11:53:51.660
null
null
686
null
10505
2
null
10497
1
null
An interesting source of suggestions for this problem might be the [That's What She Said Quora thread](http://www.quora.com/How-would-you-programmatically-parse-a-sentence-and-decide-whether-to-answer-thats-what-she-said). Specifically, the first comment identifies that it might make sense to use Twitter streams as a source of statements to which there has been a response of "That's What She Said". Another thing you could do would be to make a simple UI which would take input from twitter, and then ask a human annotator to decide whether "That's What She Said" is an appropriate response. The UW paper mentions their data sources; I assume (though you didn't mention it specifically) that you're already using this data? (It looks like you can slightly increase the size of the training set from that paper, but not a lot.) Another potential source of data to look at might be to analyze television show dialogue for positive examples. (The "[A Computer That Knows When To Say “That’s What She Said](http://blogs.forbes.com/alexknapp/2011/04/30/a-computer-that-knows-when-to-say-thats-what-she-said/)" Forbes Article mentions The Office, for example.) Using Hulu's Captions search ([search for That's What she said](http://www.hulu.com/labs/caption-search?lang=en&query=that%27s%20what%20she%20said)) identifies 179 examples, though it looks like in the first 10, only 2 are positive training examples, so that may be a somewhat noisy source of data. Officequotes.net appears to have a larger source of potential data, though again, cleaning it up and using it may take some work.
null
CC BY-SA 3.0
null
2011-05-08T12:13:10.863
2011-05-09T13:02:03.733
2011-05-09T13:02:03.733
1065
1065
null
10506
2
null
10382
2
null
You need to check how others have built indexes with similar questions. My guess is that Inglehart and Norris, in [Rising Tide](http://www.hks.harvard.edu/fs/pnorris/Books/Rising%20tide.htm), have built their Gender Equality/Empowerment index in a way that you can emulate (the construction of the index escapes me but I remember it's in the Technical Appendix).
null
CC BY-SA 3.0
null
2011-05-08T12:17:28.707
2011-05-08T12:17:28.707
null
null
3582
null
10507
2
null
10420
0
null
The question is too vague as such, but any answer will depend on your object of study ([example](http://www.iq.harvard.edu/blog/sss/archives/2006/02/bayesian_vs_fre.shtml)). You will find tons on the topic over at [Andrew Gelman's blog](http://www.stat.columbia.edu/~cook/movabletype/archives/2011/04/bayesian_statis_1.html), with an April's Fool somewhere in the middle if I recall correctly.
null
CC BY-SA 3.0
null
2011-05-08T12:31:33.807
2011-05-08T12:31:33.807
null
null
3582
null
10508
2
null
10497
1
null
Here is a website that uses a classifier to determine the gender of the author of a text: http://bookblog.net/gender/genie.php

There are a number of articles on the subject at that site, and its creator has written a number of articles on the subject as well. There are several good methods that can be used for the classification:

- Bayesian logistic regression
- Linear discriminant analysis
- Quadratic discriminant analysis
- Bayes classifiers

Have fun
null
CC BY-SA 3.0
null
2011-05-08T12:58:42.243
2011-05-08T12:58:42.243
null
null
3805
null
10510
1
null
null
67
5466
In the last few years I've read a number of papers arguing against the use of null hypothesis significance testing in science, but didn't think to keep a persistent list. A colleague recently asked me for such a list, so I thought I'd ask everyone here to help build it. To start things off, here's what I have so far:

- Johansson (2011), "Hail the impossible: p-values, evidence, and likelihood"
- Haller & Krauss (2002), "Misinterpretation of significance: A problem students share with their teachers"
- Wagenmakers (2007), "A practical solution to the pervasive problems of p values"
- Rodgers (2010), "The epistemology of mathematical and statistical modeling: A quiet methodological revolution"
- Dixon (1998), "Why scientists value p-values"
- Glover & Dixon (2004), "Likelihood ratios: A simple and flexible statistic for empirical psychologists"
References containing arguments against null hypothesis significance testing?
CC BY-SA 4.0
null
2011-05-08T16:09:04.040
2022-08-27T12:03:06.130
2022-03-26T10:13:03.113
79696
364
[ "hypothesis-testing", "statistical-significance", "references", "p-value" ]
10511
2
null
10510
44
null
Chris Fraley has taught [a whole course on the history of the debate](http://www.uic.edu/classes/psych/psych548/fraley/) (the link seems to be broken, even though it's still on his official site; here is [a copy in Internet Archive](https://web.archive.org/web/20150430200302/http://www.uic.edu/classes/psych/psych548/fraley)). His summary/conclusion is [here](http://www.uic.edu/classes/psych/psych548/fraley/NHSTsummary.htm) (again, [archived copy](https://web.archive.org/web/20150810175229/http://www.uic.edu/classes/psych/psych548/fraley/NHSTsummary.htm)). According to Fraley's homepage, the last time he taught this course was in 2003. He prefaces this list with an "Instructor's bias": > Although my goal is to facilitate lively, deep, and fair discussions on the issues at hand, I believe that it is necessary to make my bias explicit from the outset. Paul Meehl once stated that "Sir Ronald [Fisher] has befuddled us, mesmerized us, and led us down the primrose path. I believe that the almost universal reliance on merely refuting the null hypothesis as the standard method for corroborating substantive theories in the soft areas is a terrible mistake, is basically unsound, poor scientific strategy, and one of the worst things that ever happened in the history of psychology." I echo Meehl's sentiment. One of my goals in this seminar is to make it clear why I believe this to be the case. Furthermore, I expect you, by the time you have completed this seminar, to be able to articulate and defend your stance on the NHST debate, regardless of what that stance is. I'll copy in the reading list in case the course page ever disappears: > Week 1. Introduction: What is a Null Hypothesis Significance Test? Facts, Myths, and the State of Our Science Lyken, D. L. (1991). What’s wrong with psychology? In D. Cicchetti & W.M. Grove (eds.), Thinking Clearly about Psychology, vol. 1: Matters of Public Interest, Essays in honor of Paul E. Meehl (pp. 3 – 39). 
Minneapolis, MN: University of Minnesota Press. Week 2. Early Criticisms of NHST Meehl, P. E. (1967). Theory-testing in psychology and physics: A methodological paradox. Philosophy of Science, 34, 103-115. Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806-834. Rozeboom, W. W. (1960). The fallacy of the null hypothesis significance test. Psychological Bulletin, 57, 416-428. Bakan, D. (1966). The test of significance in psychological research. Psychological Bulletin, 66, 423-437. [optional] Week 3. Contemporary Criticisms of NHST Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49, 997-1003. Gigerenzer, G. (1993). The superego, the ego, and the id in statistical reasoning. In G. Keren & C. Lewis (Eds.), A handbook for data analysis in the behavioral sciences: Methodological issues (pp. 311-339). Hillsdale, NJ: Lawrence Erlbaum Associates. Schmidt, F. L. & Hunter, J. E. (1997). Eight common but false objections to the discontinuation of significance testing in the analysis of research data. In Lisa A. Harlow, Stanley A. Mulaik, and James H. Steiger (Eds.) What if there were no significance tests? (pp. 37-64). Mahwah, NJ: Lawrence Erlbaum Associates. Oakes, M. (1986). Statistical inference: A commentary for the social and behavioral sciences. New York: Wiley. (Chapter 2 [A Critique of Significance Tests]) [optional] Week 4. Rebuttal: Advocates of NHST Come to Its Defense Frick, R. W. (1996). The appropriate use of null hypothesis testing. Psychological Methods, 1, 379-390. Hagen, R. L. (1997). In praise of the null hypothesis statistical test. American Psychologist, 52, 15-24. Wilkinson, L., & the Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594-604. Wainer, H. (1999). One cheer for null hypothesis significance testing. 
Psychological Methods, 6, 212-213. Mulaik, S. A., Raju, N. S., & Harshman, R. A. (1997). There is a time and place for significance testing. In Lisa A. Harlow, Stanley A. Mulaik, and James H. Steiger , Eds. What if there were no significance tests? (pp. 65-116). Mahwah, NJ: Lawrence Erlbaum Associates. [optional] Week 5. Rebuttal: Advocates of NHST Come to Its Defense Abelson, R. P. (1997). On the surprising longevity of flogged horses: Why there is a case for the significance test. Psychological Science, 8, 12-15. Krueger, J. (2001). Null hypothesis significance testing: On the survival of a flawed method. American Psychologist, 56, 16-26. Scarr, S. (1997). Rules of evidence: A larger context for the statistical debate. Psychological Science, 8, 16-17. Greenwald, A. G., Gonzalez, R., Harris, R. J., & Guthrie, D. (1996). Effect sizes and p values: What should be reported and what should be replicated? Psychophysiology, 33, 175-183. Nickerson, R. S. (2000). Null hypothesis significance testing: A review of an old and continuing controversy. Psychological Methods, 5, 241-301. [optional] Harris, R. J. (1997). Significance tests have their place. Psychological Science, 8, 8-11. [optional] Week 6. Effect Size Rosenthal, R. (1984). Meta-analytic procedures for social research. Beverly Hills, CA: Sage. [Ch. 2, Defining Research Results] Chow, S. L. (1988). Significance test or effect size? Psychological Bulletin, 103, 105-110. Abelson, R. P. (1985). A variance explanation paradox: When a little is a lot. Psychological Bulletin, 97, 129-133. [optional] Week 7. Statistical Power Hallahan, M., & Rosenthal, R. (1996). Statistical power: Concepts, procedures, and applications. Behaviour Research and Therapy, 34, 489-499. Sedlmeier, P., & Gigerenzer, G. (1989). Do studies of statistical power have an effect on the power of studies? Psychological Bulletin, 105, 309-316. Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. 
Journal of Abnormal and Social Psychology, 65, 145-153. [optional] Maddock, J. E., Rossi, J. S. (2001). Statistical power of articles published in three health-psychology related journals. Health Psychology, 20, 76-78. [optional] Thomas, L. & Juanes, F. (1996). The importance of statistical power analysis: An example from Animal Behaviour. Animal Behaviour, 52, 856-859. [optional] Rossi, J. S. (1990). Statistical power of psychological research: What have we gained in 20 years? Journal of Consulting and Clinical Psychology, 58, 646-656. [optional] Tukey, J. W. (1969). Analyzing data: Sanctification or detective work? American Psychologist, 24, 83-91. [optional] Week 8. Confidence Intervals and Significance testing Gardner, M. J., & D. G. Altman. 1986. Confidence intervals rather than P values: Estimation rather than hypothesis testing. British Medical Journal, 292, 746-750. Cumming, G., & Finch, S. (2001). A primer on understanding, use, and calculation of confidence intervals that are based on central and noncentral distributions. Educational and Psychological Measurement, 61, 532-574. Loftus, G. R., & Masson, M.E.J. (1994). Using confidence intervals in within-subject designs. Psychonomic Bulletin and Review, 1, 476-490. Week 9 [note: we are skipping this section]. Theoretical Modeling: Developing Formal Models of Natural Phenomena Haefner, J. W. (1996). Modeling biological systems: Principles and applications. New York: International Thomson Publishing. (Chapters 1 [Models of Systems] & 2 [The Modeling Process]) Loehlin, J. C. (1992). Latent variable models: An introduction to factor, path, and structural analysis. Hillsdale, NJ: Lawrence Erlbaum Associates. (Chapter 1 [Path models in factor, path and structural analysis], p. 1-18] Grant, D. A. (1962). Testing the null hypothesis and the strategy of investigating theoretical models. Psychological Review, 69, 54-61. [optional] Binder, A. (1963). 
Further considerations on testing the null hypothesis and the strategy and tactics of investigating theoretical models. Psychological Review, 70, 107-115. [optional] Edwards, W. (1965). Tactical note on the relations between scientific and statistical hypotheses. Psychological Bulletin, 63, 400-402. [optional] Week 10. What is the Meaning of Probability? Controversy Concerning Relative Frequency and Subjective Probability Salsburg, D. (2001). The lady tasting tea: How statistics revolutionized science in the twentieth century. New York: W. H. Freeman. (Chapters 10, 11, & 12) Oakes, M. (1986). Statistical inference: A commentary for the social and behavioral sciences. New York: Wiley. (Chapters 4, 5, & 6) Pruzek, R. M. (1997). An introduction to Bayesian inference and its applications. In Lisa A. Harlow, Stanley A. Mulaik, and James H. Steiger , Eds. What if there were no significance tests? (pp. 287-318). Mahwah, NJ: Lawrence Erlbaum Associates. Rindskoph, D. M. (1997). Testing "small," not null, hypothesis: Classical and Bayesian Approaches. In Lisa A. Harlow, Stanley A. Mulaik, and James H. Steiger (Eds). What if there were no significance tests? (pp. 319-332). Mahwah, NJ: Lawrence Erlbaum Associates. Edwards, W., Lindman, H., Savage, L. J. (1963). Bayesian statistical inference for psychological research. Psychological Review, 70, 193-242. [optional] Week 11. Theory Appraisal: Philosophy of Science and the Testing and Amending of Theories Meehl, P. E. (1990). Appraising and amending theories: The strategy of Lakatosian defense and two principles that warrant it. Psychological Inquiry, 1, 108-141. Roberts, S. & Pashler, H. (2000). How persuasive is a good fit? A comment on theory testing. Psychological Review, 107, 358-367. Week 12. Theory Appraisal: Philosophy of Science and the Testing and Amending of Theories Urbach, P. (1974). Progress and degeneration in the "IQ debate" (I). British Journal of Philosophy of Science, 25, 99-125. Serlin, R. C. & Lapsley, D. K. 
(1985). Rationality in psychological research: The good-enough principle. American Psychologist, 40, 73-83. Dar, R. (1987). Another look at Meehl, Lakatos, and the scientific practices of psychologists. American Psychologist, 42, 145-151. Gholson, B. & Barker, P. (1985). Kuhn, Lakatos, & Laudan: Applications in the history of physics and psychology. American Psychologist, 40, 755-769. [optional] Faust, D., & Meehl, P. E. (1992). Using scientific methods to resolve questions in the history and philosophy of science: Some illustrations. Behavior Therapy, 23, 195-211. [optional] Urbach, P. (1974). Progress and degeneration in the "IQ debate" (II). British Journal of Philosophy of Science, 25, 235-259. [optional] Salmon, W. C. (1973, May). Confirmation. Scientific American, 228, 75-83. [optional] Meehl, P. E. (1993). Philosophy of science: Help or hindrance? Psychological Reports, 72, 707-733. [optional] Manicas. P. T., & Secord, P. F. (1983). Implications for psychology of the new philosophy of science. American Psychologist, 38, 399-413. [optional] Week 13. Has the NHST Tradition Undermined a Non-Biased, Cumulative Knowledge Base in Psychology? Cooper, H., DeNeve, K., & Charlton, K. (1997). Finding the missing science: The fate of studies submitted for review by a human subjects committee. Psychological Methods, 2, 447-452. Schmidt, F. L. (1996). Statistical significance testing and cumulative knowledge in psychology: Implications for training of researchers. Psychological Methods, 1, 115-129. Greenwald, A. G. (1975). Consequences of prejudice against the null hypothesis. Psychological Bulletin, 82, 1-20. Berger, J. O. & Berry, D. A. (1988). Statistical analysis and illusion of objectivity. American Scientist, 76, 159-165. Week 14. Replication and Scientific Integrity Smith, N. C. (1970). Replication studies: A neglected aspect of psychological research. American Psychologist, 25, 970-975. Sohn, D. (1998). 
Statistical significance and replicability: Why the former does not presage the latter. Theory and Psychology, 8, 291-311. Meehl, P. E. (1990). Why summaries of research on psychological theories are often uninterpretable. Psychological Reports, 66, 195-244. Platt, J. R. (1964). Strong Inference. Science, 146, 347-353. Feynman, R. L. (1997). Surely you’re joking, Mr. Feynman! New York: W. W. Norton. (Chapter: Cargo-cult science). Rorer, L. G. (1991). Some myths of science in psychology. In D. Cicchetti & W.M. Grove (eds.), Thinking Clearly about Psychology, vol. 1: Matters of Public Interest, Essays in honor of Paul E. Meehl (pp. 61 – 87). Minneapolis, MN: University of Minnesota Press. [optional] Lindsay, R. M. & Ehrenberg, A. S. C. (1993). The design of replicated studies. The American Statistician, 47, 217-228. [optional] Week 15. Quantitative Thinking: Why We Need Mathematics (and not NHST per se) in Psychological Science Aiken, L. S., West, S. G., Sechrest, L., & Reno, R. R. (1990). Graduate training in statistics, methodology, and measurement in psychology: A survey of Ph.D. programs in North America. American Psychologist, 45, 721-734. Meehl, P. E. (1998, May). The power of quantitative thinking. Invited address as recipient of the James McKeen Cattell Award at the annual meeting of the American Psychological Society, Washington, DC.
null
CC BY-SA 3.0
null
2011-05-08T16:52:17.817
2016-12-06T13:56:36.520
2016-12-06T13:56:36.520
28666
3748
null
10513
2
null
3719
1
null
Pearson correlations can be converted into Fisher z scores. Those z scores may be averaged, and the mean z converted back to a composite correlation.
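A quick sketch of that recipe (shown in Python for illustration; in R the same is `tanh(mean(atanh(r)))`):

```python
import math

def pooled_correlation(rs):
    """Average Pearson correlations via Fisher's z transform.

    Each r is mapped to z = atanh(r), the z scores are averaged,
    and the mean z is mapped back to the r scale with tanh.
    """
    zs = [math.atanh(r) for r in rs]
    return math.tanh(sum(zs) / len(zs))
```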
null
CC BY-SA 3.0
null
2011-05-08T17:46:41.447
2011-05-08T17:46:41.447
null
null
4518
null
10514
2
null
10363
-4
null
R square of 97.2:

```
Estimation/Diagnostic Checking for Variable Y
  Y    X1 AAS    X2 BB    X3 BBS    X4 CC

Number of Residuals (R)      = n                  25
Number of Degrees of Freedom = n-m                20
Residual Mean                = Sum R / n          -.141873E-05
Sum of Squares               = Sum R2             .775723E+07
Variance                     = SOS/(n)            310289.
Adjusted Variance            = SOS/(n-m)          387861.
Standard Deviation RMSE      = SQRT(Adj Var)      622.785
Standard Error of the Mean   = Standard Dev/(n-m) 139.259
Mean / its Standard Error    = Mean/SEM           -.101877E-07
Mean Absolute Deviation      = Sum(ABS(R))/n      455.684
AIC Value (Uses var)         = nln +2m            326.131
SBC Value (Uses var)         = nln +m*lnn         332.226
BIC Value (Uses var)         = see Wei p153       340.388
R Square                     =                    .972211
Durbin-Watson Statistic      = [-A(T-1)]**2/A2    1.76580
```

```
MODEL COMPONENT               LAG  COEFF      STANDARD  P      T
#                            (BOP)            ERROR     VALUE  VALUE
 1 CONSTANT                        -.381E+04  466.      .0000  -8.18
INPUT SERIES X1  AAS  AA SQUARED
 2 Omega (input) -Factor # 1   0    .983      .410E-01  .0000  23.98
INPUT SERIES X2  BB   BB AS GIVEN
 3 Omega (input) -Factor # 2   0    108.      14.9      .0000   7.27
INPUT SERIES X3  BBS  BB SQUARED
 4 Omega (input) -Factor # 3   0    -.577     .147      .0008  -3.93
INPUT SERIES X4  CC   CC AS GIVEN
 5 Omega (input) -Factor # 4   0    49.9      4.67      .0000  10.67
```

![Residual plot](https://i.stack.imgur.com/W3Cnx.jpg)
null
CC BY-SA 3.0
null
2011-05-08T17:51:27.797
2011-05-08T17:51:27.797
null
null
3382
null
10515
2
null
10420
1
null
In my experience, they are both useful in different situations. When you can confidently say that the data come from a specified probability model, parametric statistics will usually give you more information. However, they can also lead to significantly biased conclusions if the wrong model is used. Non-parametric statistics, on the other hand, require fewer assumptions about the data, and consequently will prove better in situations where the true distribution is unknown or cannot be easily approximated by a probability distribution. All in all, I prefer making as few assumptions as possible, so I tend to prefer non-parametric approaches.
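As a concrete example of the assumption-light approach, here is a sketch (in Python, for illustration) of the exact sign test, which assumes nothing about the data beyond independent observations from a continuous distribution:

```python
from math import comb

def sign_test(data, median0=0.0):
    """Two-sided exact sign test of H0: population median == median0.

    Counts observations above median0 (ties are dropped) and computes
    the exact two-sided binomial p-value under p = 1/2.
    """
    diffs = [x - median0 for x in data if x != median0]
    n = len(diffs)
    k = sum(d > 0 for d in diffs)
    k_extreme = max(k, n - k)
    # P(X >= k_extreme) for X ~ Binomial(n, 1/2), doubled and capped at 1
    tail = sum(comb(n, i) for i in range(k_extreme, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

Its parametric counterpart, the one-sample t-test, would typically have more power when the normality assumption actually holds.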
null
CC BY-SA 3.0
null
2011-05-08T17:52:31.057
2011-05-08T17:52:31.057
null
null
656
null
10517
1
null
null
6
6073
Given a sample data set of floating-point numbers, how do we determine its probability distribution and justify that choice? And how can we then generate random numbers from the same distribution?
Identify probability distributions
CC BY-SA 3.0
null
2011-05-08T18:56:08.370
2016-03-23T19:33:41.060
null
null
4520
[ "distributions", "probability", "modeling", "dataset", "simulation" ]
10518
2
null
10510
12
null
These are excellent references. I have a handout that may be useful at [http://hbiostat.org/bayes](http://hbiostat.org/bayes)
null
CC BY-SA 4.0
null
2011-05-08T19:03:21.840
2022-08-27T12:03:06.130
2022-08-27T12:03:06.130
4253
4253
null
10519
1
10524
null
4
4259
I have many time series that I'd like to compare to see whether there are relationships between the variables (I have several dependent variables and many more independent variables). How might I go about doing this? (I'm working in R, just FYI.) I haven't really found many examples seeking to compare and explore the relationships between many variables. In particular, I'd like to see if the variation in my independent time series drives variation in the dependent time series, and I'm so new to statistics (and R) that I'm really not sure how to approach this problem. Here is some sample data (I know there are 2 missing values, and I may choose to treat Y2 as an independent variable since I think Y2 and Y1 may be correlated; the column labeled "covariate" is years since last flood, because I think this might be important too): ``` year Y1 Y2 X1 X2 X3 X4 X5 covariate 1 40 92 0 0 20.6 91 503 3 2 54 65 0 0 21.7 33 175 4 3 59 75 1 1 22.2 34 94 5 4 68 53 8 9 22.2 24 86 6 5 5 20 20.6 5 185 7 6 76 65 8 13 22.2 32 119 8 7 76 55 16 18 23.3 0 153 9 8 82 58 18 2 24.4 19 0 1 9 60 57 28 24 23.33 0 223 2 10 58 46 18 3 22.78 0 184 3 11 49 48 2 1 23.33 0 110 4 12 28 76 0 3 22.78 0 213 5 13 56 61 0 1 22.78 12 123 6 14 105 53 56 24 23.33 0 122 7 15 99 43 28 13 24.44 0 154 8 16 119 47 46 35 23.33 0 182 9 ``` Any guidance would be much appreciated, as I'm not really sure how to proceed here.
Comparing multiple time series in R
CC BY-SA 3.0
null
2011-05-08T19:31:29.433
2011-05-08T22:37:39.683
2011-05-08T20:43:07.527
null
4521
[ "r", "time-series", "multiple-comparisons" ]
10520
2
null
10363
24
null
$R^2$ alone is not a good measure of goodness of fit, but let's not get into that here except to observe that parsimony is valued in modeling. To that end, note that standard techniques of [exploratory data analysis](https://en.wikipedia.org/wiki/Exploratory_data_analysis) (EDA) and regression (but not stepwise or other automated procedures) suggest using a linear model in the form $$\sqrt{f} = a + b*c + a*b*c + \text{constant} + \text{error}$$ Using OLS, this does achieve an $R^2$ above 0.99. Heartened by such a result, one is tempted to square both sides and regress $f$ on $a$, $b*c$, $a*b*c$, and all their squares and products. This immediately produces a model $$f = a^2 + b*c + \text{constant} + \text{error}$$ with a root MSE of under 34 and an adjusted $R^2$ of 0.9999. The estimated coefficients of 1.0112 and 0.988 suggest the data may be artificially generated with the formula $$f = a^2 + b*c + 50$$ plus a little normally distributed error of SD approximately equal to 50. ### Edit In response to @knorv's hints, I continued the analysis. To do so I used the techniques that had been successful so far, beginning with inspecting scatterplot matrices of the residuals against the original variables. Sure enough, there was a clear indication of correlation between $a$ and the residuals (even though OLS regression of $f$ against $a$, $a^2$, and $b*c$ did not indicate $a$ was "significant"). Continuing in this vein I explored all correlations between the quadratic terms $a^2, \ldots, e^2, a*b, a*c, \ldots, d*e$ and the new residuals and found a tiny but highly significant relationship with $b^2$. "Highly significant" means that all this snooping involved looking at some 20 different variables, so my criterion for significance on this fishing expedition was approximately 0.05/20 = 0.0025: anything less stringent could easily be an artifact of the probing for fits. 
This has something of the flavor of a physical model in that we expect, and therefore search for, relationships with "interesting" and "simple" coefficients. So, for instance, seeing that the estimated coefficient of $b^2$ was -0.0092 (between -0.005 and -0.013 with 95% confidence), I elected to use -1/100 for it. If this were some other dataset, such as observations of a social or political system, I would make no such changes but just use the OLS estimates as-is. Anyway, an improved fit is given by $$f = a + a^2 + b*c - b^2/100 + 30.5 + \text{error}$$ with mean residual $0$, standard deviation 26.8, all residuals between -50 and +43, and no evidence of non-normality (although with such a small dataset the errors could even be uniformly distributed and one couldn't really tell the difference). The reduction in residual standard deviation from around 50 to around 25 would often be expressed as "explaining 75% of the residual variance." --- I make no claim that this is the formula used to generate the data. The residuals are large enough to allow some fairly large changes in a few of the coefficients. For instance, 95% CIs for the coefficients of $a$, $b^2$, and the constant are [-0.4, 2.7], [-0.013, -0.003], and [-7, 61] respectively. The point is that if any random error has actually been introduced in the data-generation procedure (and that is true of all real-world data), that would preclude definitive identification of the coefficients (and even of all the variables that might be involved). That's not a limitation of statistical methods: it's just a mathematical fact. BTW, using robust regression I can fit the model $$f = 1.0103 a^2 + 0.99493 b*c - 0.007 b^2 + 46.78 + \text{error}$$ with residual SD of 27.4 and all residuals between -51 and +47: essentially as good as the previous fit but with one less variable. It is more parsimonious in that sense, but less parsimonious in the sense that I haven't rounded the coefficients to "nice" values. 
Nevertheless, this is the form I would usually favor in a regression analysis absent any rigorous theories about what kinds of values the coefficients ought to have and which variables ought to be included. It is likely that additional strong relationships are lurking here, but they would have to be fairly complicated. Incidentally, taking data whose original SD is 3410 and reducing their variation to residuals with an SD of 27 is a 99.99384% reduction in variance (the $R^2$ of this new fit). One would continue looking for additional effects only if the residual SD is too large for the intended purpose. In the absence of any purpose besides second-guessing the OP, it's time to stop.
null
CC BY-SA 3.0
null
2011-05-08T19:40:32.340
2013-10-05T15:00:21.693
2020-06-11T14:32:37.003
-1
919
null
10522
2
null
10382
10
null
People often use the phrase "Likert scale" erroneously, not realizing that it originally described the coherent method Rensis Likert developed to do just what you're describing. Key steps are - Seeing how well each item correlates with "the whole"--the average of all other items - Checking variability, since an item with low variability is unlikely to contribute much information to such a scale - Using steps 1 and 2, as well as tests of Cronbach's alpha (and sometimes factor analysis, as @fmark has said), to narrow down the list of items to be kept - Averaging the items ultimately selected to compute the scale score for each person. An excellent and very accessible guide to this process can be found in [Paul Spector's Summated Rating Scale Construction](https://rads.stackoverflow.com/amzn/click/0803943415), a little green Sage book that's available for about $18 new.
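Step 3's internal-consistency check can be illustrated with a short sketch of Cronbach's alpha (shown in Python for illustration; in R one would typically use a package such as psych). The formula is the standard one, k/(k-1) * (1 - sum of item variances / variance of the total score):

```python
from statistics import variance

def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents.

    Returns Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / var(total score)).
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's summed score
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))
```

Perfectly consistent items give alpha = 1, while items that do not hang together can drive alpha toward (or below) zero, flagging candidates for removal.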
null
CC BY-SA 4.0
null
2011-05-08T20:36:37.933
2018-05-08T20:17:37.590
2018-05-08T20:17:37.590
2669
2669
null
10523
1
10525
null
2
597
I've been trying to replicate the results in this online calculator: [http://www.raosoft.com/samplesize.html](http://www.raosoft.com/samplesize.html) However, it seems that I am missing something. Exactly how do I solve the margin of error when all the other variables (sample size, confidence level, distribution and population*) are known? I've tried to use the formulas on the page but I cannot arrive at the same result. Also using the formulas on bottom of [http://www.resolutions.co.nz/sample_sizes.htm](http://www.resolutions.co.nz/sample_sizes.htm) gives me somewhat different results. What am I missing? *) = a small population so it cannot be assumed infinite.
Margin-of-error calculation in survey
CC BY-SA 3.0
null
2011-05-08T20:47:43.803
2011-05-08T21:27:25.250
2011-05-08T20:56:19.740
3401
3401
[ "confidence-interval", "standard-error" ]
10524
2
null
10519
3
null
here goes ... These are the steps required to form your analysis. Simply reproduce them in R or whatever tools you have available. The reason the following exercise is daunting is that the statistical problem you are asking about is "daunting", and one needs to "up-armor" their solution skills/procedures. - Pre-whiten each of your 7 candidate regressors in order to identify an initial Transfer Function model of the form ```
MODEL COMPONENT               LAG  COEFF   STANDARD  P       T
#                            (BOP)         ERROR     VALUE   VALUE
 1 CONSTANT                        -258.   330.      0.4568  -0.78
INPUT SERIES X1  Y2
 2 Omega (input) -Factor # 1   0    0.635  0.583     0.3079   1.09
INPUT SERIES X2  X1
 3 Omega (input) -Factor # 2   1    0.544  0.563     0.3623   0.97
INPUT SERIES X3  X2
 4 Omega (input) -Factor # 3   0    1.60   0.485     0.0110   3.29
INPUT SERIES X4  X3
 5 Omega (input) -Factor # 4   0    11.7   14.8      0.4493   0.80
INPUT SERIES X5  X4
 6 Omega (input) -Factor # 5   0    0.491  0.699     0.5020   0.70
INPUT SERIES X6  X5
 7 Omega (input) -Factor # 6   0   -0.151  0.164     0.3852  -0.92
INPUT SERIES X7  X6
 8 Omega (input) -Factor # 7   0    2.40   1.57      0.1639   1.53
``` Then do Intervention Detection to extract any anomalies in the data, finding three: ```
: NEWLY IDENTIFIED VARIABLE X8  I~P00010 10 PULSE
: NEWLY IDENTIFIED VARIABLE X9  I~P00014 14 PULSE
: NEWLY IDENTIFIED VARIABLE X10 I~P00002  2 PULSE
``` This leads to an augmented model: ```
MODEL COMPONENT               LAG  COEFF   STANDARD  P       T
#                            (BOP)         ERROR     VALUE   VALUE
 1 CONSTANT                        -33.9   191.      0.8662  -0.18
INPUT SERIES X1  Y2
 2 Omega (input) -Factor # 1   0    1.08   0.324     0.0207   3.34
INPUT SERIES X2  X1
 3 Omega (input) -Factor # 2   1    0.867  0.325     0.0446   2.67
INPUT SERIES X3  X2
 4 Omega (input) -Factor # 3   0    2.06   0.248     0.0004   8.29
INPUT SERIES X4  X3
 5 Omega (input) -Factor # 4   0    1.68   8.47      0.8503   0.20
INPUT SERIES X5  X4
 6 Omega (input) -Factor # 5   0   -0.239  0.455     0.6218  -0.53
INPUT SERIES X6  X5
 7 Omega (input) -Factor # 6   0   -0.363  0.102     0.0161  -3.56
INPUT SERIES X7  X6
 8 Omega (input) -Factor # 7   0    3.21   0.712     0.0064   4.50
INPUT SERIES X8  I~P00010 10 PULSE
 9 Omega (input) -Factor # 8   0    30.4   6.52      0.0055   4.66
INPUT SERIES X9  I~P00014 14 PULSE
10 Omega (input) -Factor # 9   0    14.7   6.42      0.0709   2.29
INPUT SERIES X10 I~P00002  2 PULSE
11 Omega (input) -Factor # 10  0    39.6   8.22      0.0048   4.81
``` which is over-specified, thus we must step down and obtain: ```
MODEL COMPONENT               LAG  COEFF   STANDARD   P       T
#                            (BOP)         ERROR      VALUE   VALUE
 1 CONSTANT                        -2.32   7.25       0.7584  -0.32
INPUT SERIES X1  Y2
 2 Omega (input) -Factor # 1   0    1.10   0.872E-01  0.0000  12.58
INPUT SERIES X2  X1
 3 Omega (input) -Factor # 2   1    1.04   0.103      0.0000  10.10
INPUT SERIES X3  X2
 4 Omega (input) -Factor # 3   0    2.04   0.199      0.0000  10.22
INPUT SERIES X4  X5
 5 Omega (input) -Factor # 4   0   -0.335  0.301E-01  0.0000  -11.13
INPUT SERIES X5  X6
 6 Omega (input) -Factor # 5   0    2.84   0.663      0.0037   4.28
INPUT SERIES X6  I~P00010 10 PULSE
 7 Omega (input) -Factor # 6   0    27.8   6.48       0.0036   4.29
INPUT SERIES X7  I~P00014 14 PULSE
 8 Omega (input) -Factor # 7   0    21.3   6.17       0.0107   3.45
INPUT SERIES X8  I~P00002  2 PULSE
 9 Omega (input) -Factor # 8   0    32.3   5.99       0.0010   5.39
``` which now suggests additional structure in the X's, via cross-correlative tests between the current model residuals and the residuals from the pre-whitened X's, that had previously remained unidentified: ```
MODEL COMPONENT               LAG  COEFF       STANDARD   P       T
#                            (BOP)             ERROR      VALUE   VALUE
 1 CONSTANT                        -25.8       6.12       0.0083  -4.22
INPUT SERIES X1  Y2
 2 Omega (input) -Factor # 1   0    1.11       0.598E-01  0.0000  18.50
 3                             1   -0.211      0.516E-01  0.0095  -4.09
INPUT SERIES X2  X1
 4 Omega (input) -Factor # 2   1    1.05       0.692E-01  0.0000  15.23
INPUT SERIES X3  X2
 5 Omega (input) -Factor # 3   0    2.28       0.143      0.0000  15.99
INPUT SERIES X4  X5
 6 Omega (input) -Factor # 4   0   -0.311      0.193E-01  0.0000  -16.17
 7                             1   -0.693E-01  0.103E-01  0.0011  -6.72
INPUT SERIES X5  X6
 8 Omega (input) -Factor # 5   0    2.19       0.466      0.0054   4.70
INPUT SERIES X6  I~P00010 10 PULSE
 9 Omega (input) -Factor # 6   0    19.8       4.31       0.0059   4.59
INPUT SERIES X7  I~P00014 14 PULSE
10 Omega (input) -Factor # 7   0    18.6       4.21       0.0068   4.43
``` culminating in the final model: ```
MODEL STATISTICS AND EQUATION FOR THE CURRENT EQUATION (DETAILS FOLLOW).
Estimation/Diagnostic Checking for Variable Y  Y1
  X1 Y2   X2 X1   X3 X2   X4 X5   X5 X6
: NEWLY IDENTIFIED VARIABLE X6 I~P00010 10 PULSE
: NEWLY IDENTIFIED VARIABLE X7 I~P00014 14 PULSE

Number of Residuals (R)      = n                  15
Number of Degrees of Freedom = n-m                 7
Residual Mean                = Sum R / n          -0.204655E-04
Sum of Squares               = Sum R**2           171.679
Variance                     = SOS/(n)            10.7299
Adjusted Variance            = SOS/(n-m)          12.2628
Standard Deviation RMSE      = SQRT(Adj Var)      3.50183
Standard Error of the Mean   = Standard Dev/(n-m) 0.935902
Mean / its Standard Error    = Mean/SEM           -0.218671E-04
Mean Absolute Deviation      = Sum(ABS(R))/n      2.50518
AIC Value (Uses var)         = nln +2m            37.5956
SBC Value (Uses var)         = nln +m*lnn         38.3036
BIC Value (Uses var)         = see Wei p153       64.1437
R Square                     =                    0.986111
Durbin-Watson Statistic      = [-A(T-1)]**2/A**2  2.85557

D-W STATISTIC IS INCONCLUSIVE.
THE DURBIN-WATSON STATISTIC IS VALID ONLY FOR MODELS THAT HAVE A WHITE NOISE
ERROR TERM AND NO LAGS OF THE Y SERIES. OTHERWISE IT IS INVALID.
IN THIS CASE THE TEST IS VALID.

AUTOMATICALLY REVISING MODEL

MODEL COMPONENT               LAG  COEFF       STANDARD   P       T
#                            (BOP)             ERROR      VALUE   VALUE
 1 CONSTANT                        -25.8       6.12       0.0029  -4.22
INPUT SERIES X1  Y2
 2 Omega (input) -Factor # 1   0    1.11       0.598E-01  0.0000  18.50
 3                             1   -0.211      0.516E-01  0.0035  -4.09
INPUT SERIES X2  X1
 4 Omega (input) -Factor # 2   1    1.05       0.692E-01  0.0000  15.23
INPUT SERIES X3  X2
 5 Omega (input) -Factor # 3   0    2.28       0.143      0.0000  15.99
INPUT SERIES X4  X5
 6 Omega (input) -Factor # 4   0   -0.311      0.193E-01  0.0000  -16.17
 7                             1   -0.693E-01  0.103E-01  0.0001  -6.72
INPUT SERIES X5  X6
 8 Omega (input) -Factor # 5   0    2.19       0.466      0.0016   4.70
INPUT SERIES X6  I~P00010 10 PULSE
 9 Omega (input) -Factor # 6   0    19.8       4.31       0.0018   4.59
INPUT SERIES X7  I~P00014 14 PULSE
10 Omega (input) -Factor # 7   0    18.6       4.21       0.0022   4.43

Y(T) = -25.800
 +[X1(T)][(+ 1.1058 + 0.211B** 1)]
 +[X2(T)][(+ 1.0543B** 1)]
 +[X3(T)][(+ 2.2848)]
 +[X4(T)][(- 0.311 + 0.0693B** 1)]
 +[X5(T)][(+ 2.1900)]
 +[X6(T)][(+ 19.8039)]
 +[X7(T)][(+ 18.6231)]
```
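The cross-correlative checks used at each stage above can be sketched as follows (an illustrative Python fragment, not the software that produced the output shown):

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def cross_correlation(x, y, lag):
    """Correlation between x at time t and y at time t + lag (lag >= 0)."""
    if lag == 0:
        return pearson(x, y)
    return pearson(x[:-lag], y[lag:])
```

Scanning such correlations over a range of lags, after pre-whitening both series, is what suggests which lagged inputs to add to the transfer function.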
null
CC BY-SA 3.0
null
2011-05-08T20:53:30.243
2011-05-08T20:53:30.243
null
null
3382
null
10525
2
null
10523
5
null
It is possible to reproduce the page's Javascript formula, for example in R (with some minor adjustments, notably treating the confidence figure as two-tailed, but leaving it with the slightly confusing calculations using percentages). ```
MarginOfError <- function(sample, confidence, response, population) {
  # two-tailed critical value for the given confidence level (in percent)
  pcn <- qnorm((100 + confidence) / 200)
  # binomial variance term, working in percentages as the page does
  d1 <- pcn * pcn * response * (100 - response)
  # finite-population correction
  d2 <- d1 * (population - sample) / (sample * (population - 1))
  ifelse(d2 > 0, sqrt(d2), 0)
}
``` For example ``` > MarginOfError(100, 95, 50, 20000) [1] 9.775534 ``` corresponding to the 9.8% given on that page for a sample size of 100. The other Javascript formulae can similarly be reproduced.
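For anyone working in Python rather than R, the same calculation can be reproduced with the standard library's NormalDist; this is my own translation of the page's formula, not code from the page itself:

```python
from statistics import NormalDist

def margin_of_error(sample, confidence, response, population):
    """Margin of error in percentage points, with finite-population correction.

    `confidence` and `response` are percentages, matching the page's conventions.
    """
    # two-tailed critical value for the given confidence level
    pcn = NormalDist().inv_cdf((100 + confidence) / 200)
    d1 = pcn * pcn * response * (100 - response)
    d2 = d1 * (population - sample) / (sample * (population - 1))
    return d2 ** 0.5 if d2 > 0 else 0.0
```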
null
CC BY-SA 3.0
null
2011-05-08T21:27:25.250
2011-05-08T21:27:25.250
null
null
2958
null
10526
1
10600
null
7
1750
Is there a way to utilize Canonical Correlation Analysis when your data are time series and repeated measures (i.e. your experimental units are not independent)? How might one approach the analysis of two sets of variables when the question is what relationships, if any, are there between one set of variables and the other. I was thinking canonical correlation analysis might help me do this, but my variables are count data (not normally distributed) taken over several consecutive years at the same location. In sum, one set of variables is the abundances of various species and the other set is the abundances of a variety of potential food resources. Perhaps it's best to look at one dependent variable at a time instead of having several dependent variables. Any advice for a statistics novice?
Canonical correlation analysis and time series analysis
CC BY-SA 3.0
null
2011-05-08T21:47:17.597
2011-05-10T15:57:42.317
2011-05-09T08:34:03.113
null
4521
[ "time-series", "correlation", "multivariate-analysis" ]
10527
2
null
10363
3
null
Broadly speaking, there's no free lunch in machine learning:

> In particular, if algorithm A outperforms algorithm B on some cost functions, then loosely speaking there must exist exactly as many other functions where B outperforms A

/edit: also, a radial SVM with C = 4 and sigma = 0.206 easily yields an R2 of .99. Extracting the actual equation used to derive this dataset is left as an exercise to the class. Code is in R.

```
setwd("~/wherever")
library('caret')
Data <- read.csv("CV.csv", header=TRUE)
FL <- as.formula("FF ~ AA+BB+CC+DD+EE")
model <- train(FL, data=Data, method='svmRadial',
               tuneGrid=expand.grid(.C=4, .sigma=0.206))
R2(predict(model, Data), Data$FF)
```
null
CC BY-SA 3.0
null
2011-05-08T21:53:41.973
2011-05-09T01:47:04.977
2011-05-09T01:47:04.977
2817
2817
null
10528
2
null
173
2
null
In response to your direct question "How can I test if there's a real change in the process? And if I can identify a decline, how could I use that trend and whatever seasonality there might be to estimate the number of cases we might see in the upcoming months?"

Develop a Transfer Function Model (ARMAX) that readily explains period-to-period dependency, including any seasonal ARIMA structure. Incorporate any identifiable Level Shifts, Seasonal Pulses, Local Time Trends and Pulses that may have been suggested by empirical/analytical methods like Intervention Detection. If this robust model includes a factor/series matching up with "declines", then your prayers have been answered.

In the alternative, simply add a hypothesized structure. For example, to test for a time trend change at point T1, construct two dummies: X1 = 1,2,3,...,T and X2 = 0,0,0,0,0,0,0,1,2,3,4,5,..., where the zeroes end at period T1-1. The hypothesis of a significant trend change at time period T1 is then assessed using the "t value" for X2.

Edited 9/22/11: Often, disease data like this has monthly effects, since weather/temperature is often an unspecified causal. In the absence of the true causal series, ARIMA models use memory or seasonal dummies as a surrogate. Additionally, series like this can have level shifts and/or local time trends reflecting structural change over time. Exploiting the autoregressive structure in the data, rather than imposing artifacts like time, time squared, time cubed, etc., has been found to be quite useful and less presumptive and ad hoc. Care should also be taken to identify "unusual values", as they can often suggest additional causal variables and, at a minimum, lead to robust estimates of the other model parameters. Finally, we have found that the variability/parameters may vary over time, so these model refinements may be in order.
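To make the two-trend-dummy construction concrete, here is a small numeric sketch (all numbers made up, noise-free for clarity, and the least-squares solve written out in plain Python purely for illustration):

```python
# Sketch: test for a trend change at period T1 using two trend dummies.
# X1 counts time from the start; X2 counts time only from T1 onward.
T, T1 = 12, 8
x1 = list(range(1, T + 1))                  # 1, 2, ..., T
x2 = [max(0, t - (T1 - 1)) for t in x1]     # zeroes end at period T1-1

# Noise-free series: intercept 10, base slope 2, slope change of +3 at T1
y = [10 + 2 * t + 3 * xt2 for t, xt2 in zip(x1, x2)]

# Solve the 3x3 normal equations (X'X) b = X'y by Gauss-Jordan elimination
X = [[1.0, float(a), float(b)] for a, b in zip(x1, x2)]
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
A = [row[:] + [v] for row, v in zip(XtX, Xty)]
for i in range(3):
    p = A[i][i]
    A[i] = [v / p for v in A[i]]
    for k in range(3):
        if k != i:
            A[k] = [vk - A[k][i] * vi for vk, vi in zip(A[k], A[i])]
coef = [A[i][3] for i in range(3)]
print(coef)  # intercept ~10, base trend ~2, trend change ~3
```

With noise added, you would divide the estimate of the X2 coefficient by its standard error to obtain the t value referred to above.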
null
CC BY-SA 3.0
null
2011-05-08T22:19:18.250
2011-09-22T12:29:10.747
2011-09-22T12:29:10.747
3382
3382
null
10529
1
null
null
5
505
I am reviewing a study in which standard deviations (SDs) of an X variable, calculated for each individual in the study (measures on each individual were replicated 4 times), are used as predictors of a Y variable. They do not observe a significant correlation between Y and the SDs of X and they conclude by saying that it is not necessary to consider the variability among replicates of the same individual and, therefore, they can work with average values​​. Does that make sense?
Linear regression using standard deviations as regressors?
CC BY-SA 3.0
null
2011-05-08T22:23:20.810
2011-07-02T03:14:09.787
2011-05-09T00:18:46.427
919
221
[ "regression", "standard-deviation" ]
10531
1
10612
null
6
932
I'm wondering whether it is possible to fit a structural equation model for experimental design data. Problem: suppose a researcher observed four responses $Y_1$, $Y_2$, $Y_3$, and $Y_4$ along with three covariates $X_1$, $X_2$, and $X_3$ from an experiment involving $ab$ treatment combinations from a fixed factor $A$ with $a$ levels and a random factor $B$ with $b$ levels. Based on past experience, it is assumed the four responses are correlated and $Y_1$ is also influenced by the other three ($Y_2$, $Y_3$, and $Y_4$). Is it possible to model causality among the responses $Y_1$, $Y_2$, $Y_3$, and $Y_4$, as well as to assess the effects of factors $A$, $B$ and their interaction $AB$ on the responses $Y_1$, $Y_2$, $Y_3$, and $Y_4$? Thanks
Structural equation modeling for experimental design data
CC BY-SA 3.0
null
2011-05-08T23:04:11.983
2011-05-10T18:22:00.337
2011-05-09T14:01:51.513
3903
3903
[ "mixed-model", "experiment-design", "structural-equation-modeling" ]
10532
1
10572
null
6
1918
Can someone tell a reference and/or book that explain how to use R for simulation of experimental design data?
Reference or book on simulation of experimental design data in R
CC BY-SA 4.0
null
2011-05-08T23:13:44.127
2019-10-30T10:43:00.390
2019-10-30T10:43:00.390
11887
3903
[ "r", "references", "experiment-design", "simulation" ]
10534
1
null
null
7
3185
I have key-value pairs of text. The values can be multiple words (n-grams). For example, ``` A abcd A efgh B abcd C wxyz C mnop ``` I want to calculate [Pointwise Mutual Information](http://en.wikipedia.org/wiki/Pointwise_mutual_information) for the pairs. Is there a function in R to do this? Otherwise, how can I go about it? Thanks
Pointwise mutual information for text using R
CC BY-SA 3.0
null
2011-05-09T02:40:41.403
2011-10-03T09:55:17.947
2011-05-09T08:32:45.080
null
3111
[ "r", "text-mining", "mutual-information" ]
10535
1
null
null
4
1450
I am having some difficulty understanding Adaboost. How should the 1st threshold/classifier/weak learner be chosen? It seems that there are two conditions which must be satisfied:

- Choose the classifier with the lowest error
- $e(t)<0.5$, otherwise stop

But if condition 1 is satisfied, doesn't it imply that condition 2 will also be satisfied automatically?
How to choose the 1st threshold/classifier/ weak learner in Adaboost?
CC BY-SA 3.0
null
2011-05-09T02:58:40.720
2012-02-06T12:09:33.973
2011-05-09T08:32:11.510
null
4527
[ "machine-learning", "boosting" ]
10536
2
null
10478
0
null
This does not answer your question, but: in a large data setting, it may be fine to tune the regularizer using a single train/test split, instead of doing it 10 or so times in cross-validation (or more for bootstrap). The size and representativeness of the sample chosen for the devset determines the accuracy of the estimation of the optimal regularizer. In my experience the held-out loss is relatively flat over a substantial regularizer range. I'm sure this fact may not hold for other problems.
null
CC BY-SA 3.0
null
2011-05-09T03:45:30.610
2011-05-09T03:45:30.610
null
null
3799
null
10537
1
null
null
6
764
I'm trying to fit some data using a hierarchical normal model $y_i \sim N(\theta_i,\sigma^2)$ $\theta_i \sim N(\mu, \sigma_\theta^2)$ $(\mu,\sigma^2,\sigma_\theta^2) \sim diffuse$ I fit this model and I'm getting posteriors for $\sigma_\theta^2$ and $\sigma^2$ that are nearly identical. Is this an identifiability issue or a coincidence? There's no other information in the data to use to determine where the variability is coming from. Is there any way to still use this type of model, or is it just not useful without more data?
Possible identifiability issue in hierarchical model
CC BY-SA 3.0
null
2011-05-09T04:15:58.563
2011-06-30T01:13:53.170
2011-06-30T01:13:53.170
1499
4528
[ "bayesian", "multilevel-analysis", "identifiability" ]
10538
2
null
10537
5
null
Your notation is a little strange (what do you mean by "diffuse"?), but I suspect that your prior on $\sigma^2_\theta$ is leading to an improper or nearly improper posterior, for one thing. See [here](http://stat.columbia.edu/~gelman/research/published/taumain.pdf) for a detailed exposition of just this model and appropriate prior specification. In short, yes, this model can be very useful and there probably ought to be some information about the variance parameters even in relatively small samples - but you need to be careful in how you specify and fit it. Edit: When I wrote this answer I apparently hadn't read the OP properly (see my comment to @probabilityislogic's answer). Anyway as this model is written the parameters $\sigma, \sigma_\theta$ aren't separately identifiable as @probabilityislogic points out. I suspect that if you looked at the posterior distribution of $\sigma^2 + \sigma_\theta^2$ it would be doing something much more reasonable, and if you looked at the joint posterior of $\sigma, \sigma_\theta$ there would be a strong negative correlation. You should go back to the original problem and try to reformulate this model - either it's not posed correctly in the OP or you're hosed, I think.
null
CC BY-SA 3.0
null
2011-05-09T04:25:35.350
2011-05-22T04:32:25.327
2011-05-22T04:32:25.327
26
26
null
10539
1
null
null
25
15025
migrated from [math.stackexchange](https://math.stackexchange.com/questions/37732/compute-approximate-percentiles-for-a-stream-of-integers-using-moments). I'm processing a long stream of integers and am considering tracking a few moments in order to be able to approximately compute various percentiles for the stream without storing much data. What's the simplest way to compute percentiles from a few moments? Is there a better approach that involves only storing a small amount of data?
Compute approximate quantiles for a stream of integers using moments?
CC BY-SA 3.0
null
2011-05-09T05:22:38.153
2017-02-27T09:39:27.020
2017-04-13T12:19:38.800
-1
4530
[ "algorithms", "mathematical-statistics", "moments" ]
10540
1
null
null
47
102361
I'm trying to use a silhouette plot to determine the number of clusters in my dataset. Given the dataset Train, I used the following Matlab code:

```
Train_data = full(Train);
Result = [];
for num_of_cluster = 1:20
    centroid = kmeans(Train_data, num_of_cluster, 'distance', 'sqeuclid');
    s = silhouette(Train_data, centroid, 'sqeuclid');
    Result = [Result; num_of_cluster mean(s)];
end
plot(Result(:,1), Result(:,2), 'r*-.');
```

The resulting plot is given below, with the number of clusters on the x-axis and the mean silhouette value on the y-axis. How do I interpret this graph? How do I determine the number of clusters from it?

![enter image description here](https://i.stack.imgur.com/KSqC5.jpg)
How to interpret mean of Silhouette plot?
CC BY-SA 3.0
null
2011-05-09T06:05:22.237
2019-07-29T16:06:15.417
2011-05-09T06:43:41.717
930
4290
[ "data-visualization", "clustering", "matlab" ]
10542
2
null
9198
5
null
That's an answer for question 2. - STL: http://www.wessa.net/download/stl.pdf - X-12-ARIMA (and much more): http://www.census.gov/srd/www/sapaper/sapaper.html
null
CC BY-SA 3.0
null
2011-05-09T08:05:36.423
2017-06-14T21:18:41.800
2017-06-14T21:18:41.800
64672
1709
null
10543
1
10568
null
4
2061
I've got a problem choosing the right model. I have a model with various variables (covariates and dummy variables). I was trying to find the best size for this model, so I first started by comparing different models with AIC. It turned out that the minimum AIC was reached when all variables were allowed to stay in the model (with the whole bunch interacting with all dummies). When I compute the summary of that model, however, none of the effects is significant and the standard errors are very high. I was a bit confused when comparing the "best" (by AIC) model with a smaller model without any interactions: the smaller model had small standard errors and nice p-values, but its AIC is higher than that of the big model. What might be the problem? Overspecification? I really need help with this, because I have absolutely no idea how to handle it! Thanks a lot
How to interpret decreasing AIC but higher standard errors in model selection?
CC BY-SA 3.0
null
2011-05-09T08:14:28.497
2011-05-09T19:07:23.040
2011-05-09T08:19:36.347
930
4496
[ "model-selection", "standard-error", "aic" ]
10544
1
10545
null
7
2242
Let's say that N randomly chosen persons were asked a question where the answer falls into one of X categories. For example, 500 persons were asked which of the top 5 political parties they support the most. Each person can give only one answer. How do I determine whether the leading party which, for example, got 33 % is truly larger than the 2nd-largest party that got 29 % (at a certain confidence level)? Calculations like these solve my question when there are only two choices, but what about my case with several? [http://www.stat.wmich.edu/s216/book/node85.html](http://www.stat.wmich.edu/s216/book/node85.html)
Difference in means in multiple-choice poll
CC BY-SA 3.0
null
2011-05-09T08:38:08.217
2011-05-11T01:09:17.860
null
null
3401
[ "statistical-significance", "mean", "polling" ]
10545
2
null
10544
5
null
Find out if there's any difference at all through [Pearson's Chi Square](http://en.wikipedia.org/wiki/Pearson%27s_chi-square_test). If this turns out significant, then do a (set of) post-hoc test(s), e.g. [Tukey's HSD](http://en.wikipedia.org/wiki/Tukey%27s_range_test).
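As a rough illustration of the first step, here is the Pearson chi-square statistic computed by hand (the counts are hypothetical, for 500 respondents, tested here against a uniform null; the expected shares need not be equal in general):

```python
# Sketch: Pearson chi-square statistic for observed poll counts
# against equal expected shares (made-up numbers, 500 respondents).
observed = [165, 145, 90, 60, 40]        # e.g. 33%, 29%, ... of 500
n, k = sum(observed), len(observed)
expected = [n / k] * k                   # uniform null: 100 per party here
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi2)  # 115.5 here, far beyond the 5% critical value of ~9.49 for k-1 = 4 df
```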
null
CC BY-SA 3.0
null
2011-05-09T08:55:17.227
2011-05-09T08:55:17.227
null
null
4257
null
10546
1
null
null
8
3766
Let $T_1, T_2, \dots$ be iid sequence of exponential random variables with parameter $\lambda$. The sum $S_n = T_1 + T_2 + \dots + T_n$ is a Gamma distribution. Now as I understand the Poisson distribution is defined by $N_t$ as follows: $$N_t = \max\{k: S_k \le t\}$$ How do I formally show that $N_t$ is a Poisson random variable? Any suggestions appreciated. I tried to work out a number of proofs but cannot get to the final equation. References [http://en.wikipedia.org/wiki/Exponential_distribution](http://en.wikipedia.org/wiki/Exponential_distribution) [http://en.wikipedia.org/wiki/Gamma_distribution](http://en.wikipedia.org/wiki/Gamma_distribution) [http://en.wikipedia.org/wiki/Poisson_distribution](http://en.wikipedia.org/wiki/Poisson_distribution)
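For intuition (a simulation sketch, not a proof): counting how many of the partial sums $S_k$ fall at or below $t$ does behave like a Poisson variable with mean $\lambda t$. A quick check in Python:

```python
import random

random.seed(1)
lam, t, runs = 2.0, 3.0, 20000
counts = []
for _ in range(runs):
    s, n = 0.0, 0
    while True:
        s += random.expovariate(lam)   # next inter-arrival time T_i
        if s > t:
            break
        n += 1                          # N_t = max{k : S_k <= t}
    counts.append(n)

mean_n = sum(counts) / runs
print(mean_n)  # should be close to lam * t = 6, as should the variance
```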
How to derive Poisson distribution from gamma distribution?
CC BY-SA 3.0
null
2011-05-09T09:05:18.130
2011-05-09T21:13:10.660
2011-05-09T12:17:35.457
null
862
[ "distributions", "probability", "poisson-distribution", "exponential-distribution", "gamma-distribution" ]
10548
2
null
10497
5
null
- You are not using Naive Bayes; you are actually using something I'd call a "Multiplicative Decision Stump" classifier ;). You can do that, but I'd recommend in this case calculating the micro- or macro-average across all words in the sentence (instead of multiplying them). E.g. macro-average: $p(Positive|sentence)=\frac{1}{n}\sum_{word\in sentence}p(Positive|word)$
- I'd set $p$ to $\frac{Positive}{Positive+Negative}$ for calculating $p(Positive|.)$ and $\frac{Negative}{Positive+Negative}$ for calculating $p(Negative|.)$ respectively
- $m$ is the weight of the prior, while the word count is the weight of the occurring word. The lower $m$, the more importance the probability calculated from word frequency gets, and vice versa. Say, for example, $m=8$ and word-count = 2 (for a certain word); then the resulting score will consist of 80% prior information and 20% non-prior information. Hence I'd always compare $m$ to the word count of important words (i.e. those whose yes/no probability differs strongly from the prior). Unfortunately, there is no "golden hammer" value for this variable, so I suggest playing around with it a little.
- If you want to try out Naive Bayes, I suggest the section "Document classification" in the Wikipedia article on Naive Bayes
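A tiny sketch of the first three points (the word counts are made up, and the per-word smoothing shown is one common form of the m-estimate, $(pos + m \cdot prior)/(pos + neg + m)$, used here as an assumption):

```python
# Made-up (positive, negative) counts per word, purely for illustration
counts = {"great": (9, 1), "movie": (6, 4), "boring": (1, 9)}
prior, m = 0.5, 8.0   # m weighs the prior against the observed counts

def p_positive(word):
    pos, neg = counts.get(word, (0, 0))
    return (pos + m * prior) / (pos + neg + m)

def sentence_score(words):
    # Macro-average of the per-word probabilities
    return sum(p_positive(w) for w in words) / len(words)

print(sentence_score(["great", "movie"]))   # > 0.5 -> leans positive
print(sentence_score(["boring", "movie"]))  # < 0.5 -> leans negative
```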
null
CC BY-SA 3.0
null
2011-05-09T11:32:38.727
2011-05-09T19:59:17.390
2011-05-09T19:59:17.390
264
264
null
10549
2
null
10369
1
null
The distribution of $Z_{i_{j}}$ is not difficult, and it is given by the Beta-F compound distribution: $$p_{Z_{i_{j}}}(z)dz=\frac{n!}{(j-1)!(n-j)!} \frac{1}{\sigma_{z}}\phi(\frac{z}{\sigma_{z}})\left[\Phi(\frac{z}{\sigma_{z}})\right]^{j-1}\left[1-\Phi(\frac{z}{\sigma_{z}})\right]^{n-j}dz$$ Where $\phi(x)$ is a standard normal PDF, and $\Phi(x)$ is a standard normal CDF, and $\sigma_{z}^{2}=\sigma_{y}^{2}+\sigma_{x}^{2}$. Now if you are given that $Y_{i_{j}}=y$, then $X_{i_{j}}$ is a 1-to-1 function of $Z_{i_{j}}$, namely $X_{i_{j}}=Z_{i_{j}}-y$. So I would think that this should be a simple application of the jacobian rule. $$p_{X_{i_{j}}|Y_{i_{j}}}(x|y)=\frac{n!}{(j-1)!(n-j)!} \frac{1}{\sigma_{z}}\phi(\frac{x+y}{\sigma_{z}})\left[\Phi(\frac{x+y}{\sigma_{z}})\right]^{j-1}\left[1-\Phi(\frac{x+y}{\sigma_{z}})\right]^{n-j}dx$$ This seems too easy, but I think it is correct. Happy to be shown wrong.
null
CC BY-SA 3.0
null
2011-05-09T12:27:32.203
2011-05-09T12:27:32.203
null
null
2392
null
10550
2
null
10532
4
null
I have the feeling that the pretty recent "[Introduction to Scientific Programming and Scientific Simulation Using R](http://rads.stackoverflow.com/amzn/click/1420068725)" by Owen Jones, Robert Maillardet, and Andrew Robinson (2009) could be what you are looking for. There is also a very positive review of it [in the Journal of Statistical Software](http://www.jstatsoft.org/v36/b04/paper). Although this book is not specifically targeted at simulating experimental data, it will probably get you in the direction you want to go.
null
CC BY-SA 3.0
null
2011-05-09T12:32:00.453
2011-05-09T12:32:00.453
null
null
442
null
10551
1
10552
null
8
9180
I am having trouble determining what kernel I should use in a non-linear SVM without testing in advance. I want to know if there are any other ways to determine the best kernel without tests? How does it relate to the data?
How do I choose what SVM kernels to use?
CC BY-SA 3.0
null
2011-05-09T13:01:52.487
2015-09-19T06:45:51.300
2015-09-19T06:45:51.300
87311
4531
[ "svm", "nonlinear-regression", "kernel-trick" ]
10552
2
null
10551
16
null
Do your analysis with several different kernels. Make sure you cross-validate. Choose the kernel that performs the best during cross-validation and fit it to your whole dataset.

/edit: Here is some example code in R, for a classification SVM:

```
#Use a support vector machine to predict iris species
library(caret)
library(caTools)

#Choose x and y
x <- iris[,c("Sepal.Length","Sepal.Width","Petal.Length","Petal.Width")]
y <- iris$Species

#Pre-compute CV folds so we can use the same ones for all models
CV_Folds <- createMultiFolds(y, k = 10, times = 5)

#Fit a Linear SVM
L_model <- train(x, y, method="svmLinear", tuneLength=5,
                 trControl=trainControl(method='repeatedCV', index=CV_Folds))

#Fit a Poly SVM
P_model <- train(x, y, method="svmPoly", tuneLength=5,
                 trControl=trainControl(method='repeatedCV', index=CV_Folds))

#Fit a Radial SVM
R_model <- train(x, y, method="svmRadial", tuneLength=5,
                 trControl=trainControl(method='repeatedCV', index=CV_Folds))

#Compare the 3 models:
resamps <- resamples(list(Linear = L_model, Poly = P_model, Radial = R_model))
summary(resamps)
bwplot(resamps, metric = "Accuracy")
densityplot(resamps, metric = "Accuracy")

#Test a model's predictive accuracy using area under the ROC curve.
#Ideally, this should be done with a SEPARATE test set.
pSpecies <- predict(L_model, x, type='prob')
colAUC(pSpecies, y, plot=TRUE)
```
null
CC BY-SA 3.0
null
2011-05-09T14:21:32.587
2011-05-09T18:16:21.190
2011-05-09T18:16:21.190
2817
2817
null
10553
1
null
null
11
1843
In G Power 3, ANOVA repeated measures, within-between interaction: only the total sample size is reported, assuming equal sample sizes for the two groups. My questions are:

- How would it work if the sample sizes are slightly different, for example N1/N2 = 1.16?
- I have to input the correlation between repeated measures. Is this the correlation between repeats after merging the data from the two groups?
- Information about the noncentrality parameter would be helpful.
How to use G Power 3 to calculate statistical power in mixed design ANOVA with unequal group sample sizes
CC BY-SA 3.0
null
2011-05-09T14:47:47.793
2011-06-08T15:44:08.497
2011-06-08T15:44:08.497
183
4453
[ "repeated-measures", "sample-size", "statistical-power" ]
10554
1
10566
null
7
870
I'm working with dataset of individual households that I aggregate into 'areas' using several different spatial configurations, from smaller to bigger. These areas are then characterized by four variables (two categorical, two continuous). I'd like to see what effects these different aggregations have on the dataset. Particularly, I'd like to estimate what the differences in homogeneity are when I move from one spatial resolution to another. What would be the best way to approach this problem? Is there any measure I could use for this purpose?
Measuring homogeneity across different spatial aggregations of data
CC BY-SA 3.0
null
2011-05-09T15:00:34.937
2015-08-14T11:24:19.583
2015-08-14T11:24:19.583
11887
22
[ "heteroscedasticity", "spatial", "aggregation", "relative-distribution" ]
10555
2
null
10359
3
null
You can use Markov chains. You will have to specify a density $p(x_t|x_{t-1})$. Of course you will have to be able to sample from that conditional density. Then just sample...
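As a minimal sketch, choosing (purely for illustration) the conditional $p(x_t|x_{t-1})$ to be Normal$(\phi x_{t-1}, 1)$, i.e. a Gaussian AR(1):

```python
import random

random.seed(42)
phi, n = 0.8, 5000
# Start from an arbitrary initial draw, then sample forward from the conditional
x = [random.gauss(0.0, 1.0)]
for _ in range(n - 1):
    x.append(random.gauss(phi * x[-1], 1.0))  # draw from p(x_t | x_{t-1})

# Lag-1 sample autocorrelation should be near phi
mean = sum(x) / n
num = sum((a - mean) * (b - mean) for a, b in zip(x[:-1], x[1:]))
den = sum((a - mean) ** 2 for a in x)
print(num / den)  # roughly 0.8
```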
null
CC BY-SA 3.0
null
2011-05-09T15:45:40.490
2011-05-09T15:45:40.490
null
null
2860
null
10556
2
null
10532
4
null
Here is an example of some code that I wrote for this purpose. The experimental design is: there are four levels of nitrogen and six replicates at each level. These data could be tested using a one-way ANOVA, but as the levels are continuous, I tested the fit of different curves.

```
set.seed(1)
library(nlme)

### Below is a set of practice data
## 1. four levels of Nitrogen: 0, 1, 4, 10
N <- c(rep(0,6), rep(1,6), rep(4,6), rep(10,6))

## 2. variance
s <- 2

## 3. Data simulated to provide examples of the
##    various hypothesized responses of Y to N
## 3.1 asymptotic increase: Y = 10*N/(1+N) + 10
asym <- c(rnorm(6,10,s), rnorm(6,15,s), rnorm(6,18,s), rnorm(6,19,s))
## 3.2 Y = 0*N + 10, the null model
m0 <- c(rnorm(24,10,s))
## 3.3 Y = 0.2*N + 10, a shallow slope
m1 <- c(rnorm(6,10,s), rnorm(6,10.2,s), rnorm(6,10.8,s), rnorm(6,12,s))
## 3.4 Y = 1*N + 10, a steeper slope
m4 <- c(rnorm(6,10,s), rnorm(6,14,s), rnorm(6,26,s), rnorm(6,50,s))
## 3.5 Y = 4*log10(N) + 10, a log-linear response
lm4 <- c(rnorm(6,10,s), rnorm(6,12.4,s), rnorm(6,15.6,s), rnorm(6,18.2,s))
## 3.6 'Hump' with max at N = 1 g m-2 yr
hump <- c(rnorm(6,10,s), rnorm(6,20,s), rnorm(6,9,s), rnorm(6,8,s))

## A function to plot the data and compare the fit of five models:
fn.BIC.lmnls <- function(x, y, shape){
  plot(x, y, main = shape)   # show the data, labelled with its shape
  foo.null  <- lm(y ~ 1)
  foo.poly1 <- lm(y ~ x)
  foo.poly2 <- lm(y ~ x + I(x^2))
  foo.poly3 <- lm(y ~ x + I(x^2) + I(x^3))
  foo.mm    <- nls(y ~ (a*x)/(b + x), start = list(a = 1, b = 1))
  bic <- BIC(foo.null, foo.poly1, foo.poly2, foo.poly3, foo.mm)
  print(bic)
  return(bic)
}

### now, plot data and print BIC values for each of the data sets
par(mfrow = c(3,2))
fn.BIC.lmnls(N, m0,   "Y = 0*N + 10")
fn.BIC.lmnls(N, m1,   "Y = 0.2*N + 10")
fn.BIC.lmnls(N, m4,   "Y = 1*N + 10")
fn.BIC.lmnls(N, lm4,  "Y = 4*log10(N) + 10")
fn.BIC.lmnls(N, asym, "Y = 10 + 10*N/(1+N)")  # Y = 20*N/(5+N)
fn.BIC.lmnls(N, hump, "Y = [10,20,9,8]")
```
null
CC BY-SA 3.0
null
2011-05-09T15:46:54.257
2011-05-09T21:57:12.547
2011-05-09T21:57:12.547
1381
1381
null
10557
1
null
null
9
54920
```
names(mydat)[c(name)] <- c("newname")
```

From this, I know that the column/variable name "name" of the data frame `mydat` is replaced with "newname". My question is: if I want to do this in a loop, so that I end up with something like newname1 newname2 newname3 newname4 and so on, how do I do it? This is what I did and it did not work:

```
for(i in 1:4){
  names(mydat)[c(name)] <- c("newname"i)
}
```

Is there a way to code this? Many thanks to all who could be of help. Owusu Isaac
How to change column names in data frame in R?
CC BY-SA 3.0
null
2011-05-09T16:18:23.433
2014-02-09T06:32:29.587
2011-05-11T15:33:27.243
183
4340
[ "r" ]
10558
2
null
9198
4
null
If you are willing to learn to understand the diagnostics, X12-ARIMA provides a boatload of diagnostics that range from (ASCII) graphs to rule-of-thumb indicators. Learning and understanding the diagnostics is something of an education in time series and seasonal adjustment. On the other hand, X12-ARIMA software is a one-trick pony, while using stl in R would allow you to do other things and to switch to other methods (decompose, dlm's, etc) if you wish. On the other-other hand, X12-Arima makes it easier to include exogenous variables and to indicate outliers, etc.
null
CC BY-SA 3.0
null
2011-05-09T16:23:21.067
2011-05-09T16:23:21.067
null
null
1764
null
10559
2
null
10517
10
null
The short answer is that you can't. The longer answer is that you really need to think about what you are trying to accomplish and what question(s) you are trying to answer. Tests on distributions are not designed to prove a particular distribution, but to disprove (they are not perfect for that, you still have type I and type II errors). But, often establishing an exact distribution is less important than finding a reasonable approximation. With round-off error and machine precision, you can not tell the difference between whether the data came from a normal distribution or another distribution that is only slightly different from the normal without an infinite amount of data (and still maybe not then due to the round-off). But treating such data as normal is probably still reasonable. The CLT tells us that we can often model using the normal distribution even when the data is clearly not from a normal distribution (provided we are modeling the behavior of the sample mean, not the population). What is more important than the statistical tests and proofs is knowledge of the science that generated the data. Is a particular distribution (and the assumptions that go with it) reasonable from the science? I prefer a visual test rather than the exact test for looking at distributions, generate data from the hypothesized distribution and create several plots, one with the original data, the rest with the generated data, then see if you can pick out which is different (the vis.test function in the TeachingDemos package for R does this). If you can't tell which is different then the hypothesized distribution is probably "close enough". Even if you can tell the difference you may decide that the differences are not that important. 
If you want to generate new data from a distribution similar to your existing data then you can take bootstrap samples, or bootstrap samples plus some random noise (this is sampling from the kernel density estimate), or you can do a logspline fit and generate from that distribution (see the logspline package for R as one tool for this).
null
CC BY-SA 3.0
null
2011-05-09T16:26:18.643
2011-05-09T16:26:18.643
null
null
4505
null
10560
2
null
10557
10
null
The most obvious solution would be to change the code in your for loop to the following:

```
names(mydat)[c(name)] <- paste("newname", i, sep="")
```

But you need to clarify what your variable `name` is. At the moment this loop will do 4 renames of a single column. In general, if the names which you want to change are in a vector, this is a standard subsetting procedure:

```
names(mydat)[names(mydat) %in% names_to_be_changed] <- name_changes
```
null
CC BY-SA 3.0
null
2011-05-09T16:39:29.073
2011-05-10T06:05:35.363
2011-05-10T06:05:35.363
2116
2116
null
10561
2
null
10557
6
null
Try using `sprintf` or `paste`, like this:

```
names(mydat) <- sprintf("name%d", 1:10)
```

Also, note that `names(mydat)[c(name)]` is more or less nonsense; `c(name)` is equivalent to writing just `name` and means "get the value of the variable called `name`"; the bracket will at least extract elements of `names(mydat)`, but only if the `name` variable holds a numeric or boolean index. If you want to replace columns called `name` with `name1, name2, ..., nameN`, use something like this:

```
names(mydat)[names(mydat)=="name"] <- sprintf("name%d", 1:sum(names(mydat)=="name"))
```

EDIT: Well, if you just want to remove duplicated column names, there is an even easier way; R has a `make.names` function which fixes this problem; it can be used like this:

```
names(mydat) <- make.names(names(mydat), unique=TRUE)
```

Even shorter, the same can be obtained just by writing:

```
data.frame(mydat) -> mydat  # The magic is in check.names, but it is TRUE by default
```
null
CC BY-SA 3.0
null
2011-05-09T16:40:24.673
2011-05-10T09:02:25.403
2011-05-10T09:02:25.403
null
null
null
10562
1
14073
null
7
1328
I have a large distance matrix $3400\times 3400$. I need to cluster them hierarchically and then cut the tree into clusters (like a partitional approach). Which algorithm is most sensitive to finding natural clusters in the data based on the distance matrix? How can I evaluate the result? I am planning on using average silhouette coefficient of the tree at various levels to identify the 'natural' clusters from the tree. Thanks
Which hierarchical clustering algorithm?
CC BY-SA 3.0
null
2011-05-09T17:16:08.777
2011-08-10T05:20:23.550
2011-05-11T09:54:17.467
930
4534
[ "clustering" ]
10563
2
null
10517
6
null
The only way to "prove" that data comes from a certain distribution (without an infinite number of samples) is to know precisely how that data is generated. For example, if you know that the data came from the magnitude of a circular bivariate normal random variable, it has a Rician distribution. Or if the data came from the time between events in a Poisson process, then it has an exponential distribution. Lacking a precise definition of the generating process, there are a number of empirical measures you can use to determine the underlying distribution. First, look at the data itself: Is it discrete or continuous? Is it supported on (-inf,inf), [0,inf),(0,1), or another interval? This knowledge can be used to narrow down the possible univariate parametric distributions that could fit your data. Examples include the Gaussian distribution, Cauchy, Exponential, Gamma, Generalized Extreme Value, Rician, Wrapped Cauchy, Von Mises, Binomial, and Beta. Once you have determined the support of the distribution, test the potential univariate distributions with an information criterion - such as Akaike information criterion (AIC) or Bayesian information criterion (BIC). These balance the number of parameters in a given distribution with the likelihood of the data fitting a given distribution. Visually check the best-scoring distribution(s) to see if they appear to fit the data. An alternative is to construct a kernel density estimate of the data. This is basically a sophisticated version of creating a histogram of the data, where a small Gaussian (or other) distribution is placed at each data point, and the estimated distribution is constructed from the sum of these. For more information, see [Kernel Density Estimation](http://en.wikipedia.org/wiki/Kernel_density_estimation). This has the advantage of being able to fit arbitrary distributions in the data, but sampling from this distribution has a large computational cost, especially with large data sets. 
Another option is to construct a Gaussian Mixture Model (GMM) from the data, where a small number of Gaussian distributions are used to approximate the underlying distribution. For more information, see [Mixture Models](http://en.wikipedia.org/wiki/Mixture_model). The method appropriate for your application depends on the application itself. If you can determine the distribution from the generating process, great: estimate the parameters and you're done! If not, the next-best scenario is finding a univariate parametric distribution that accurately describes the data. Lacking that, mixture models, KDEs, or other methods can be used to approximate the distribution.
null
CC BY-SA 3.0
null
2011-05-09T17:17:21.243
2011-05-11T13:06:21.467
2011-05-11T13:06:21.467
3595
3595
null
10564
1
null
null
5
3699
I ran a multivariate logistic regression with `glm` in `R` with some continuous and some categorical variables. Only the continuous variable $A$ showed a p-value of < 0.05 and a confidence interval which did not straddle 1. Running a Wilcoxon test (actually a Mann-Whitney test, because the samples are not paired) with $A$ divided into the two outcome groups returns a p-value of 0.15. This indicates that there is no difference between the means of $A$ in the two groups. How do I reconcile these two results? The logistic regression result indicates that $A$ is a predictor of the outcome, but the Wilcoxon/Mann-Whitney test indicates that there is no difference between the two groups.
Logistic regression and Wilcoxon test
CC BY-SA 3.0
null
2011-05-09T17:23:47.537
2017-04-03T15:49:25.863
2017-04-03T15:49:25.863
101426
2824
[ "logistic", "wilcoxon-mann-whitney-test" ]
10565
2
null
10564
3
null
The tests make different assumptions, and so do not give exactly the same result. The bigger problem is the (incorrect) assumption that failure to reject the null "indicates that there is no difference". It does not. It just means that you don't have enough evidence to reject the null of no difference.
null
CC BY-SA 3.0
null
2011-05-09T17:39:12.920
2011-05-09T17:39:12.920
null
null
4506
null
10566
2
null
10554
7
null
There are many ways you can characterize homogeneity, so there could be many answers to your question. One of the most intuitive ways I have seen it displayed is in a book chapter, "Spatial Analysis of Regional Income Inequality" by Sergio Rey in the book Spatially Integrated Social Science ([PDF](http://129.3.20.41/eps/urb/papers/0110/0110002.pdf)). The approach Rey takes in that chapter is to visualize the change in a metric called [Theil's Index](http://en.wikipedia.org/wiki/Theil_index). This is particularly intuitive because the Theil index can be broken down into the "between" unit variation and the "within" unit variation. Subsequently Rey examines the change in the components of Theil's index between different census aggregations across time. (As a note, I find Rey's notation of the Theil index far easier to follow than the Wikipedia page.) This metric is only applicable to continuous variables, so a different approach would be necessary for the categorical variables. An extensive listing of commonly used indices to measure racial segregation is provided in this paper ([Massey and Denton, 1988](http://dx.doi.org/10.2307/2579183)). All of those metrics can be used with categorical variables. Ones I have come across in Criminology/Sociology are the [index of qualitative variation](http://en.wikipedia.org/wiki/Qualitative_variation) and [diversity indices](http://en.wikipedia.org/wiki/Diversity_index).
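As a sketch of the between/within decomposition mentioned above, here is Theil's T index computed directly in R; the income vector and grouping are simulated purely for illustration:

```r
# Theil's T index: mean of (y/mu) * log(y/mu)
theil <- function(y) mean((y / mean(y)) * log(y / mean(y)))

set.seed(1)
y <- rlnorm(300, meanlog = 10)             # simulated incomes
g <- sample(c("A", "B", "C"), 300, TRUE)   # simulated regions

mu   <- mean(y)
s_g  <- tapply(y, g, sum) / sum(y)         # each group's share of total income
mu_g <- tapply(y, g, mean)

between <- sum(s_g * log(mu_g / mu))       # inequality between regions
within  <- sum(s_g * tapply(y, g, theil))  # income-share-weighted within-region inequality

c(total = theil(y), decomposed = between + within)  # the two should agree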
null
CC BY-SA 3.0
null
2011-05-09T17:45:34.507
2011-05-10T12:12:00.107
2011-05-10T12:12:00.107
1036
1036
null
10567
1
10573
null
8
3659
I am currently looking at a cheminformatics problem concerning the relationship between chemical structure and reactivity, e.g. how the angle at which two molecules approach each other affects the rate of the subsequent reaction. Obviously, the angle can only vary between 0° and 360°. This is a "quick check" question from a cautious non-statistician. I understand that in regression analysis, the dependent variable must be continuous and unbounded. I just wondered - do continuous predictor variables in regression need to be unbounded too? Instinctively I assume not.
Can we use bounded continuous variables as predictors in regression and logistic regression?
CC BY-SA 3.0
null
2011-05-09T17:47:58.140
2013-09-05T09:45:23.387
2013-09-05T09:45:23.387
17230
4054
[ "regression", "logistic" ]
10568
2
null
10543
4
null
The AIC and standard error measure different things, and if you are trying to minimize standard error, a cross-validation approach may be better to use. Another alternative is the [Bayesian information criterion](http://en.wikipedia.org/wiki/Bayesian_information_criterion) (BIC), which is more parsimonious than the AIC. Also, here's a good article comparing the relations between various evaluation metrics for supervised machine learning: [Data mining in metric space: an empirical analysis of supervised learning performance criteria](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.60.6684&rep=rep1&type=pdf).
null
CC BY-SA 3.0
null
2011-05-09T17:52:57.150
2011-05-09T18:05:26.120
2011-05-09T18:05:26.120
930
3595
null
10570
2
null
10564
5
null
Did you just fit one big glm model and then look at the individual p-values? Remember that those p-values measure the effect of each variable above and beyond all the other variables in the model. It is possible that more of your covariates are really contributing, but there is redundant information, so they don't show significance. It could be the combination of A along with another covariate or two that shows the real difference, while A by itself is not meaningful. Also look at the effect sizes and standard errors for all terms. There is a paradox associated with the Wald tests/estimates (the individual p-values in the standard summary): a very important variable can appear non-significant because its standard error is overestimated. Likelihood ratio tests are much better because of this. You can assess both of the above by fitting a reduced model using only A, then using the anova function to compare the two models; if that comparison is significant, it indicates that there is something important in your model beyond the A variable.
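A minimal sketch of the nested-model comparison suggested above; the data frame, outcome, and covariate names (`B`, `C`) are hypothetical stand-ins for your actual variables:

```r
# Hypothetical data: a binary outcome plus predictors A, B, C
set.seed(1)
dat <- data.frame(A = rnorm(200), B = rnorm(200), C = rnorm(200))
dat$outcome <- rbinom(200, 1, plogis(0.8 * dat$A + 0.5 * dat$B))

full    <- glm(outcome ~ A + B + C, data = dat, family = binomial)
reduced <- glm(outcome ~ A,         data = dat, family = binomial)

# Likelihood ratio test: is there signal beyond A?
anova(reduced, full, test = "Chisq")
```

The `test = "Chisq"` option gives the likelihood ratio p-value for the extra terms, which avoids the Wald-test paradox described above.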
null
CC BY-SA 3.0
null
2011-05-09T18:03:19.990
2011-05-09T18:03:19.990
null
null
4505
null
10572
2
null
10532
4
null
Statistical Models in S, by Chambers and Hastie (Chapman and Hall, 1991; or the so-called White Book), and to a lesser extent Modern Applied Statistics with S, by Venables and Ripley (Springer, 2002, 4th ed.), include some material about DoE and the analysis of common designs in S and R. Vikneswaran wrote [An R companion to "Experimental Design"](http://cran.r-project.org/doc/contrib/Vikneswaran-ED_companion.pdf), although it is not very complete (IMHO), but there are a lot of other textbooks in the [Contributed](http://cran.r-project.org/other-docs.html) section on CRAN that might help you get started. Apart from textbooks, the CRAN Task View on [Design of Experiments (DoE) & Analysis of Experimental Data](http://cran.r-project.org/web/views/ExperimentalDesign.html) has some good packages that ease the creation and analysis of various experimental designs; I can think of [dae](http://cran.r-project.org/web/packages/dae/index.html), [agricolae](http://cran.r-project.org/web/packages/agricolae/index.html), or [AlgDesign](http://cran.r-project.org/web/packages/AlgDesign/index.html) (which comes with a nice [vignette](http://cran.r-project.org/web/packages/AlgDesign/vignettes/AlgDesign.pdf)), to name a few.
null
CC BY-SA 3.0
null
2011-05-09T18:28:15.963
2011-05-09T18:28:15.963
null
null
930
null
10573
2
null
10567
13
null
The condition that dependent variables must be "continuous and unbounded" is unusual: there is no mathematical or statistical requirement for either. In most regression models we posit that the dependent variable be a linear combination of the independent variables plus an independent random error term of zero mean, approximately and within the ranges attained by, or potentially attained by, the independent variables. For instance, it would be fine to regress the length of the Mississippi River on time for the period 1700 - 1850, but not to project the regression back, say, a million years or forward 700 years:

> In the space of one hundred and seventy-six years the Lower Mississippi has shortened itself two hundred and forty-two miles. That is an average of a trifle over one mile and a third per year. Therefore, any calm person, who is not blind or idiotic, can see that in the Old Oolitic Silurian Period, just a million years ago next November, the Lower Mississippi River was upwards of one million three hundred thousand miles long, and stuck out over the Gulf of Mexico like a fishing-rod. And by the same token any person can see that seven hundred and forty-two years from now the Lower Mississippi will be only a mile and three-quarters long, and Cairo and New Orleans will have joined their streets together, and be plodding comfortably along under a single mayor and a mutual board of aldermen. There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact. (Mark Twain, Life on the Mississippi.)

In the present case it sounds like the angle is an independent variable, not the dependent one, so this question does not even arise. The problem that does arise is that the angle seems to be defined only modulo 360 degrees (actually mod 180). Actually, the angle is really a latitude and varies from 0 to 180 (or -90 to 90) without "wrapping around" at all.
Really, then, all that matters is how best to express this angle: does the reaction rate vary linearly with the angle or does it vary perhaps with its sine or cosine? Or maybe its tangent, which is unbounded? But that matter is addressed with appropriate exploratory analysis, perhaps by some stereochemical considerations, and standard procedures to fit and check models. Therefore this angular variable neither enjoys nor suffers from any special quality that would distinguish it from other independent variables.
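One way to carry out the exploratory comparison described above in R is to enter the angle through trigonometric transforms and compare fits; the data here are simulated purely for illustration:

```r
set.seed(1)
theta <- runif(100, 0, 2 * pi)                         # approach angle in radians
rate  <- 2 + 1.5 * cos(theta) + rnorm(100, sd = 0.3)   # simulated reaction rate

fit_lin  <- lm(rate ~ theta)                   # angle entered linearly
fit_trig <- lm(rate ~ sin(theta) + cos(theta)) # angle entered via sine and cosine

AIC(fit_lin, fit_trig)  # for this simulated data, the trigonometric form fits better
```

Plotting the residuals of each fit against `theta` is the natural follow-up check on which expression of the angle is appropriate.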
null
CC BY-SA 3.0
null
2011-05-09T18:33:42.420
2011-05-09T18:33:42.420
null
null
919
null
10574
1
null
null
8
3998
I have two independent Poisson random variables, $X_1$ and $X_2$, with $X_1 \sim \text{Pois}(\lambda_1)$ and $X_2 \sim \text{Pois}(\lambda_2)$. I want to test $H_0:\, \lambda_1 = \lambda_2$ versus the alternative $H_1:\, \lambda_1 \neq \lambda_2$. I have already derived the maximum likelihood estimates under the null and alternative hypotheses (models), and based on those I calculated the likelihood ratio test (LRT) statistic (R code given below). Now I am interested in calculating the power of the test based on:

- Fixed alpha (type I error) = 0.05.
- Different sample sizes (n), say n = 5, 10, 20, 50, 100.
- Different combinations of $\lambda_1$ and $\lambda_2$, which will change the LRT statistic (computed as `LRTstat` below).

Here is my R code:

```
lambda1 <- 3; lambda2 <- 4                      # example rates
X1 <- rpois(1, lambda1); X2 <- rpois(1, lambda2)
Xbar <- (X1 + X2) / 2                           # pooled MLE under the null
LLRNum   <- dpois(X1, X1) * dpois(X2, X2)       # likelihood at the unrestricted MLEs
LLRDenom <- dpois(X1, Xbar) * dpois(X2, Xbar)   # likelihood at the pooled MLE
LRTstat  <- 2 * log(LLRNum / LLRDenom)
```

From here, how could I proceed with the power calculation (preferably in R)?
Power calculation for likelihood ratio test
CC BY-SA 3.0
null
2011-05-09T18:39:54.350
2011-05-09T20:18:48.010
2011-05-09T18:44:41.843
930
4098
[ "poisson-distribution", "statistical-power", "likelihood-ratio" ]
10576
2
null
10574
9
null
You can do this using simulation. Write a function that does your test and accepts the lambdas and sample size(s) as arguments (you have a good start above). Now, for a given set of lambdas and sample size(s), run the function a bunch of times (the replicate function in R is great for that). The power is then just the proportion of times that you reject the null hypothesis; you can use the mean function to compute that proportion and prop.test to give a confidence interval on the power. Here is some example code:

```
tmpfunc1 <- function(l1, l2 = l1, n1 = 10, n2 = n1) {
  x1 <- rpois(n1, l1)
  x2 <- rpois(n2, l2)
  m1 <- mean(x1)
  m2 <- mean(x2)
  m  <- mean(c(x1, x2))
  # log likelihood ratio: separate-rate fit minus pooled-rate fit
  ll <- sum(dpois(x1, m1, log = TRUE)) + sum(dpois(x2, m2, log = TRUE)) -
        sum(dpois(x1, m,  log = TRUE)) - sum(dpois(x2, m,  log = TRUE))
  pchisq(2 * ll, 1, lower.tail = FALSE)
}

# verify the type I error rate under the null, n = 10
out1 <- replicate(10000, tmpfunc1(3))
mean(out1 <= 0.05)
hist(out1)
prop.test(sum(out1 <= 0.05), 10000)$conf.int

# power for l1 = 3, l2 = 3.5, n1 = n2 = 10
out2 <- replicate(10000, tmpfunc1(3, 3.5))
mean(out2 <= 0.05)
hist(out2)

# power for l1 = 3, l2 = 3.5, n1 = n2 = 50
out3 <- replicate(10000, tmpfunc1(3, 3.5, n1 = 50))
mean(out3 <= 0.05)
hist(out3)
```

My results (yours will differ with a different seed, but should be similar) showed a type I error rate (alpha) of 0.0496 (95% CI 0.0455-0.0541), which is close to 0.05; more precision can be obtained by increasing the 10000 in the replicate command. The powers I computed were 9.86% and 28.6%. The histograms are not strictly necessary, but I like seeing the patterns.
null
CC BY-SA 3.0
null
2011-05-09T19:07:42.993
2011-05-09T20:18:48.010
2011-05-09T20:18:48.010
4505
4505
null
10577
2
null
10546
6
null
I'm sure that Durrett's proof is nice. A straightforward solution to the question asked is as follows. For $n \geq 1$ $$ \begin{array}{rcl} P(N_t = n) & = & \int_0^t P(S_{n+1} > t \mid S_n = s) P(S_n \in ds) \\ & = & \int_0^t P(T_{n+1} > t-s) P(S_n \in ds) \\ & = & \int_0^t e^{-\lambda(t-s)} \frac{\lambda^n s^{n-1} e^{-\lambda s}}{(n-1)!} \mathrm{d} s \\ & = & e^{-\lambda t} \frac{\lambda^n }{(n-1)!} \int_0^t s^{n-1} \mathrm{d} s \\ & = & e^{-\lambda t} \frac{(\lambda t)^n}{n!} \end{array} $$ For $n = 0$ we have $P(N_t = 0) = P(T_1 > t) = e^{-\lambda t}$. This does not prove that $(N_t)_{t \geq 0}$ is a Poisson process, which is harder, but it does show that the marginal distribution of $N_t$ is Poisson with mean $\lambda t$.
null
CC BY-SA 3.0
null
2011-05-09T19:53:46.250
2011-05-09T21:13:10.660
2011-05-09T21:13:10.660
4376
4376
null
10578
1
null
null
72
23141
I am not comfortable with Fisher information - what it measures and how it is helpful. Also, its relationship with the Cramer-Rao bound is not apparent to me. Can someone please give an intuitive explanation of these concepts?
Intuitive explanation of Fisher Information and Cramer-Rao bound
CC BY-SA 3.0
null
2011-05-09T20:43:10.830
2019-11-14T18:08:19.120
2018-07-15T22:22:03.910
11887
4101
[ "estimation", "intuition", "fisher-information" ]