Id: string (length 1–6)
PostTypeId: string (7 classes)
AcceptedAnswerId: string (length 1–6)
ParentId: string (length 1–6)
Score: string (length 1–4)
ViewCount: string (length 1–7)
Body: string (length 0–38.7k)
Title: string (length 15–150)
ContentLicense: string (3 classes)
FavoriteCount: string (3 classes)
CreationDate: string (length 23)
LastActivityDate: string (length 23)
LastEditDate: string (length 23)
LastEditorUserId: string (length 1–6)
OwnerUserId: string (length 1–6)
Tags: list
10797
2
null
10789
12
null
The issues regarding $\sigma$-algebras are mathematical subtleties that do not really explain why or whether we need a background space. Indeed, I would say that there is no compelling evidence that the background space is a necessity. For any probabilistic setup $(E, \mathbb{E}, \mu)$ where $E$ is the sample space, $\mathbb{E}$ the $\sigma$-algebra and $\mu$ a probability measure, the interest is in $\mu$, and there is no abstract reason that we want $\mu$ to be the image measure of a measurable map $X : (\Omega, \mathbb{B}) \to (E, \mathbb{E})$. However, the use of an abstract background space gives a mathematical convenience that makes many results appear more natural and intuitive. The objective is always to say something about $\mu$, the distribution of $X$, but it may be easier and more clearly expressed in terms of $X$. An example is given by the central limit theorem. If $X_1, \ldots, X_n$ are i.i.d. real valued with mean $\xi$ and variance $\sigma^2$, the CLT says that $$P\left(\frac{\sqrt{n}}{\sigma} \left(\frac{1}{n}\sum_{i=1}^n X_i - \xi\right) \leq x \right) \to \Phi(x)$$ where $\Phi$ is the distribution function for the standard normal distribution. If the distribution of $X_i$ is $\mu$, the corresponding result in terms of the measure reads $$\rho_{\sqrt{n}/\sigma} \circ \tau_{\xi} \circ \rho_{1/n}(\mu^{*n})((-\infty, x]) \to \Phi(x)$$ Some explanation of the terminology is needed. By $\mu^{*n}$ we mean the $n$-fold convolution of $\mu$ (the distribution of the sum). The functions $\rho_c$ are the linear functions $\rho_c(x) = cx$ and $\tau_{\xi}$ is the translation $\tau_{\xi}(x) = x - \xi$. One could probably get used to the second formulation, but it does a good job of hiding what it is all about. What seems to be the issue is that the arithmetic transformations involved in the CLT are quite clearly expressed in terms of random variables, but they do not translate so well in terms of the measures.
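The random-variable formulation also makes the theorem easy to check empirically. A minimal simulation sketch in Python (NumPy assumed; the choice of $\mu$ as the Exp(1) distribution, and all constants, are mine for illustration):

```python
import numpy as np
from math import erf, sqrt

def Phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1 + erf(x / sqrt(2)))

rng = np.random.default_rng(0)
n, reps = 1000, 2000
xi, sigma = 1.0, 1.0  # mean and sd of the Exp(1) distribution

# draw `reps` samples of size n from mu = Exp(1) and standardize the mean
samples = rng.exponential(scale=1.0, size=(reps, n))
z = np.sqrt(n) / sigma * (samples.mean(axis=1) - xi)

x = 0.5
p_hat = np.mean(z <= x)  # empirical P(sqrt(n)/sigma (mean - xi) <= x)
print(p_hat, Phi(x))     # the two should be close for large n
```

Writing the same ten lines in terms of convolutions and pushforward measures would be far less transparent, which is the point made above.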
null
CC BY-SA 3.0
null
2011-05-14T07:07:41.470
2011-05-14T17:19:49.897
2011-05-14T17:19:49.897
4376
4376
null
10798
1
10799
null
9
1836
Direct Question: Are there any measures of auto-correlation for a sequence of observations of an (unordered) categorical variable? Background: I'm using MCMC to sample from a categorical variable and I'd like a measure of how well the sampling method I've developed is mixing across the posterior distribution. I'm familiar with acf plots and auto-correlation for continuous variables, but I've been stuck looking at the transition probability matrix for this categorical variable... Any thoughts?
Measures of autocorrelation in categorical values of a Markov Chain?
CC BY-SA 3.0
null
2011-05-14T07:58:36.470
2011-08-10T14:15:42.663
2011-05-14T08:04:11.110
1499
1499
[ "categorical-data", "markov-chain-montecarlo", "autocorrelation" ]
10799
2
null
10798
6
null
You can always choose one or several real-valued functions of the categorical variable and look at the auto-correlation of the resulting sequence(s). You can, for instance, consider indicators of some subsets of the values. However, if I understood your question correctly, your sequence is obtained by an MCMC algorithm on the discrete space. In that case, it may be more interesting to look directly at the convergence rate for the Markov chain. Chapter 6 in [this book](http://books.google.com/books?id=KF0LgxRCgQsC&printsec=frontcover&dq=bremaud+markov+chains&hl=en&ei=nDjOTcj5KZS-sAPavPizCw&sa=X&oi=book_result&ct=result&resnum=1&ved=0CEUQ6AEwAA#v=onepage&q=bremaud%20markov%20chains&f=false) by Brémaud treats this in detail. The size of the second-largest absolute value of the eigenvalues determines the convergence rate of the matrix of transition probabilities and thus the mixing of the process.
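Both suggestions are easy to sketch numerically. A hypothetical Python illustration (the two-state transition matrix and all constants are invented; with a real MCMC run you would plug in your sampled states):

```python
import numpy as np

# Indicator trick: autocorrelation of 1{state == 0} for a simulated chain
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])  # hypothetical transition matrix
rng = np.random.default_rng(1)
n = 5000
states = np.empty(n, dtype=int)
states[0] = 0
for t in range(1, n):
    states[t] = rng.choice(2, p=P[states[t - 1]])

ind = (states == 0).astype(float)  # indicator of one category
ind = ind - ind.mean()
acf1 = np.dot(ind[:-1], ind[1:]) / np.dot(ind, ind)  # lag-1 autocorrelation

# Convergence rate: second-largest absolute eigenvalue of P
eig = np.sort(np.abs(np.linalg.eigvals(P)))
rate = eig[-2]
print(acf1, rate)
```

For this symmetric two-state chain the two numbers roughly coincide (both near 0.8), which is exactly the connection between indicator autocorrelation and the spectral gap being pointed at.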
null
CC BY-SA 3.0
null
2011-05-14T08:26:49.843
2011-05-16T05:34:30.250
2011-05-16T05:34:30.250
4376
4376
null
10800
1
10804
null
8
2998
I'm just having a look through Hadley's very excellent book about his ggplot2 R package. He has some code to remove a linear trend in the diamonds dataset, like so: ``` d <- subset(diamonds, carat < 2.5 & rbinom(nrow(diamonds), 1, 0.2) == 1) d$lcarat <- log10(d$carat) d$lprice <- log10(d$price) detrend <- lm(lprice ~ lcarat, data = d) d$lprice2 <- resid(detrend) qplot(lcarat, lprice, data = d) qplot(lcarat, lprice2, data = d) ``` Produces these graphs Unadjusted... ![enter image description here](https://i.stack.imgur.com/OV174.png) Detrended... ![enter image description here](https://i.stack.imgur.com/JWxiC.png) I'd like to see what the actual values of lprice would be without the effect of lcarat. Plotting residuals vs lcarat shows the right shape, but the points are shifted toward y = 0 (look at the range of the y-axis units). To get what I want, does it make sense to simply plot residuals + mean(lprice)? i.e. shift the previous graph up by mean(lprice). ``` qplot(lcarat, lprice2 + mean(lprice), data = d) ``` ![enter image description here](https://i.stack.imgur.com/hrrDr.png) Does it make sense to do this? Is there a name for what I'm trying to do?
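The shift in question is just a re-centring: because OLS residuals from a model with an intercept sum to zero, adding `mean(lprice)` restores the original mean without changing the shape of the point cloud. A small Python check with made-up data standing in for the diamonds variables:

```python
import numpy as np

rng = np.random.default_rng(2)
lcarat = rng.uniform(-1, 0.4, size=200)                # stand-in for log10(carat)
lprice = 3.0 + 1.7 * lcarat + rng.normal(0, 0.1, 200)  # stand-in for log10(price)

# OLS fit of lprice ~ lcarat (same role as lm(lprice ~ lcarat))
X = np.column_stack([np.ones_like(lcarat), lcarat])
beta, *_ = np.linalg.lstsq(X, lprice, rcond=None)
resid = lprice - X @ beta

print(resid.mean())                   # ~0: residuals are centred at zero
shifted = resid + lprice.mean()
print(shifted.mean(), lprice.mean())  # identical means after the shift
```

So plotting `resid + mean(lprice)` gives the detrended values back on the original scale of the response.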
How do I display detrended data from a linear regression?
CC BY-SA 3.0
null
2011-05-14T08:53:36.253
2011-08-24T00:34:05.280
2011-05-14T23:10:52.190
827
827
[ "data-visualization" ]
10801
2
null
10795
11
null
The covariance matrix represents relations between all variables, while the inverse covariance matrix shows the relations of elements with their neighbors (as Wikipedia says, partial/pairwise relations). I borrow the following example from [here](http://videolectures.net/gpip06_mackay_gpb/) at 24:10. Imagine 5 masses connected together, wobbling around on 6 springs. The covariance matrix would contain correlations among all masses; if one goes right, others may also go right. But the inverse covariance matrix shows the relations of those masses that are connected by the same springs (neighbors); it contains many zeros, and it is not necessarily positive.
null
CC BY-SA 4.0
null
2011-05-14T09:07:25.183
2021-04-14T15:29:45.470
2021-04-14T15:29:45.470
102655
4581
null
10803
2
null
6883
1
null
Simply, without using any toolbox, use the "polyfit" and then the "polyval" function. Here is an example: x is your input data, and y your response (target) value. You can fit an n-degree polynomial by: ``` p = polyfit(x,y,n) ``` and evaluate the fitted polynomial to get the output of interest yd, based on the fitted coefficients: ``` yd = polyval(p,xd); ``` Then plot the data: ``` plot(x,y) ``` If you check these functions' help documentation, you will get more comprehensive information. Also, [this](http://www.uni-bonn.de/~mbrei/Data/regressionMatlab.pdf) document talks about different types of regression and their implementation in Matlab.
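For readers without MATLAB, NumPy provides functions with the same names and conventions; a sketch on invented, noiseless cubic data:

```python
import numpy as np

x = np.linspace(0, 2, 50)
y = 1.0 - 2.0 * x + 0.5 * x**3  # noiseless cubic, for illustration only

p = np.polyfit(x, y, 3)   # analogue of MATLAB's polyfit(x, y, n)
yd = np.polyval(p, x)     # analogue of polyval(p, xd)

print(np.round(p, 6))     # recovers 0.5, 0, -2, 1 up to rounding
```

Like MATLAB's `polyfit`, NumPy returns the coefficients from the highest degree down.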
null
CC BY-SA 3.0
null
2011-05-14T11:34:41.720
2011-05-14T11:34:41.720
null
null
4581
null
10804
2
null
10800
2
null
As for me, it is terribly confusing, especially when you can do a much simpler thing -- calculate `price/carat` to get the price of one carat, which would be way easier to interpret.
null
CC BY-SA 3.0
null
2011-05-14T12:49:02.853
2011-05-14T16:13:40.960
2011-05-14T16:13:40.960
null
null
null
10805
1
null
null
4
589
What is an appropriate econometric model to describe count data with an upper bound?
Regression model for count data with an upper bound
CC BY-SA 3.0
null
2011-05-14T14:05:14.403
2015-09-29T20:14:36.727
2015-07-06T23:06:35.227
22228
4496
[ "regression", "poisson-distribution", "count-data", "bounds" ]
10806
1
null
null
10
808
In practical application, I have often witnessed the following practice. One observes a pair $(x_t, y_t)$ over time. Under the assumption that they are linearly related, we regress one against the other using geometric weights rather than uniform ones, i.e., the OLS minimizes $$\sum_{t=0}^\infty k^{t} (y_{T-t}- a x_{T-t}-b)^2$$ for some $k\in (0,1)$. This is very intuitive: we give less weight to observations far in the past. Compared to a "boxcar" weighting scheme, it also has the advantage of producing estimates that change smoothly over time, because observations do not fall abruptly off the observation window. However, I wonder if there is a probabilistic model underlying the relationship between $x_t$ and $y_t$ that justifies this choice.
Justification for using geometric weights in linear regression
CC BY-SA 3.0
null
2011-05-14T15:38:30.307
2011-05-16T08:33:33.177
2011-05-15T02:21:41.113
30
30
[ "regression", "least-squares" ]
10807
1
10808
null
5
16377
In R, what is the difference between: ``` if(x>2 & x<3) ... ``` and ``` if(x>2 && x<3) ... ``` Similarly: ``` if(x<2 | x>3) ... ``` and ``` if(x<2 || x>3) ... ```
Using and/or operators in R
CC BY-SA 3.0
null
2011-05-14T19:05:37.203
2011-05-15T16:59:09.517
2011-05-15T16:59:09.517
null
2310
[ "r" ]
10808
2
null
10807
10
null
See `?"&"`: the single version does elementwise comparison, for when you are doing logical operations on two vectors of the same length (e.g. if, in your example, `x <- c(1.5, 3.5)`). The double version works just like C++'s or Java's `&&`: it only looks at the first element of each vector (this is sometimes an unexpected downside), but it typically performs better, because it evaluates from left to right and, as soon as one of the values is `FALSE`, it knows not to look at the rest. So if you know that `x` holds a single value (e.g. `x <- 6`), you're better off using `&&`; otherwise, always use `&`.
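The elementwise-versus-short-circuit distinction is not unique to R. As a point of comparison only, here is the analogous behaviour in Python with NumPy, where `&` is elementwise on arrays and `and` collapses to a single, short-circuited truth value:

```python
import numpy as np

x = np.array([1.5, 3.5])

# Elementwise, like R's single &: one Boolean per element
elementwise = (x > 2) & (x < 3)
print(elementwise)     # [False False]

# Scalar short circuit, like R's &&: works on single truth values only
def boom():
    raise RuntimeError("never evaluated")

ok = False and boom()  # short-circuits: boom() is never called
print(ok)              # False
```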
null
CC BY-SA 3.0
null
2011-05-14T19:25:01.863
2011-05-14T19:25:01.863
null
null
4257
null
10809
2
null
10806
7
null
"Linearly related" usually means $$y_t = a x_t + b + \varepsilon_t$$ for constant $a$, $b$ and iid random errors $\varepsilon_t$, $t=0,1,\ldots,T$. One reason one would make an exponentially weighted OLS estimate is the suspicion that $a$ and $b$ might themselves be (slowly) varying with time, too. Thus we really think the correct model is $$y_t = \alpha(t) x_t + \beta(t) + \varepsilon_t$$ for unknown functions $\alpha(t)$ and $\beta(t)$ which vary slowly (if at all) over time and we're interested in estimating their current values, $a = \alpha_T$ and $b = \beta_T$. Let's assume these functions are smooth, so we can apply Taylor's Theorem. This asserts that $$\alpha(t) = \alpha(T) + \alpha'(t_{\alpha,t})(t-T)$$ for some $t_{\alpha,t}, 0 \le t_{\alpha,t} \lt T$, and similarly for $\beta(t)$. We think of $a$ and $b$ as being the most recent values, $\alpha_T$ and $\beta_T$, respectively. Use this to re-express the residuals: $$y_t - (a x_t + b) = \alpha'(t_{\alpha,t})(t-T)x_t + \beta'(t_{\beta,t})(t-T) + \varepsilon_t\text{.}$$ Now a lot of hand-waving needs to occur. We will consider the entire right hand side to be random. Its variance is that of $\varepsilon_t$ plus $x_t^2(t-T)^2$ times the variance of $\alpha'(t_{\alpha,t})$ plus $(t-T)^2$ times the variance of $\beta'(t_{\beta,t})$. Those two variances are completely unknown, but (abracadabra) let's think of them as resulting from some kind of (stochastic) process in which possibly systematic (not random, but still unknown) "errors" or "variations" are accumulated from one time to the other. This would suggest an exponential change in those variances over time. Now just simplify the explicit (but essentially useless) expression for the right hand side, and absorb the quadratic terms $(t-T)^2$ into the exponential (since we're waving our hands so wildly about anyway), to obtain $$y_t - (a x_t + b) = \delta_t$$ with the variance of $\delta_t$ equal to $\exp(\kappa(t-T))$ for some constant $\kappa$. 
Ignoring possible temporal correlations among the $\delta_t$ and assuming they have Normal distributions gives a negative log likelihood for the data proportional to $$\sum_{t=0}^{T} k^{-t} (y_{T-t}- a x_{T-t}-b)^2$$ (plus an irrelevant constant depending only on $k$) with $k = \exp(-\kappa)$. The exponentially weighted OLS procedure therefore maximizes the likelihood, assuming we know the value of $k$ (kind of like a profile likelihood procedure). Although this entire derivation clearly is fanciful, it does show how, and approximately to what degree, the exponential weighting attempts to cope with possible changes in the linear parameters over time. It relates the parameter $k$ to the temporal rate of change of those parameters.
null
CC BY-SA 3.0
null
2011-05-14T19:26:33.187
2011-05-15T14:10:12.053
2011-05-15T14:10:12.053
919
919
null
10810
1
null
null
9
914
Who created sampling distributions? I have looked everywhere and I'm writing a paper, but, so far, all I've been able to come up with is the theory and the definition. Please help me find the who, what, when, and where.
Who first developed the idea of "sampling distributions"?
CC BY-SA 3.0
null
2011-05-14T21:23:24.490
2011-10-04T23:35:14.370
2011-10-04T23:35:14.370
183
4615
[ "distributions", "sampling", "history" ]
10811
2
null
10810
8
null
If you refer to the term "sampling distribution," information is hard to find. But the concept is crucial to Jakob Bernoulli's (Switzerland) recognition that there is a distinction between that distribution and the population distribution itself, leading to his formulation (and proof) of a [Law of Large Numbers](http://en.wikipedia.org/wiki/Law_of_large_numbers). (The Wikipedia article attributes the first statement of such a law to [Cardano](http://en.wikipedia.org/wiki/Gerolamo_Cardano) (Italy), who--among many other things--was an avid gambler and mathematician in the first half of the 16th century.) Bernoulli's seminal work was published posthumously in 1713 as his [Ars Conjectandi](http://en.wikipedia.org/wiki/Ars_Conjectandi) but likely was developed in the late 1600s in response to [pioneering work by Pascal and Fermat](http://en.wikipedia.org/wiki/Problem_of_points#Pascal_and_Fermat) (France) as recounted by Christiaan Huygens (Netherlands) in an influential (and amazingly brief) 1657 textbook, [De Ratiociniis in Ludo Aleae](http://en.wikipedia.org/wiki/Christiaan_Huygens#Probability_theory). For more background you can read a [brief accounting of some of this history](http://onlinelibrary.wiley.com/doi/10.1111/j.1539-6924.2009.01271.x/full) I wrote in the form of a review of a Keith Devlin book, [The Unfinished Game](http://literati.net/Devlin/DevlinReviews.htm).
null
CC BY-SA 3.0
null
2011-05-14T22:14:06.040
2011-05-14T22:14:06.040
null
null
919
null
10812
2
null
10089
1
null
You could use [Kruskal–Wallis test](http://en.wikipedia.org/wiki/Kruskal%E2%80%93Wallis_one-way_analysis_of_variance) to perform a test for $$ H_0: m_1 = m_2 = m_3 \> , $$ where $m_i$ is the median of the $i$th group. If the $p$-value is $< 0.05$, you could conduct a post hoc test using Bonferroni correction. Take a look at [this website](http://yatani.jp/HCIstats/KruskalWallis). It might be helpful.
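A sketch of that recipe in Python with SciPy (the three groups are invented for illustration, and the post hoc step here uses pairwise Mann-Whitney tests with a Bonferroni adjustment as one common choice):

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Three hypothetical groups with clearly different locations
g1 = [1.1, 1.4, 1.2, 1.3, 1.5, 1.0, 1.2]
g2 = [2.1, 2.3, 2.0, 2.4, 2.2, 2.5, 2.1]
g3 = [3.2, 3.0, 3.4, 3.1, 3.3, 3.5, 3.2]
groups = [g1, g2, g3]

stat, p = kruskal(*groups)
print(p)  # small here: reject H0 of equal medians

if p < 0.05:
    # Post hoc: pairwise Mann-Whitney with Bonferroni correction
    pairs = list(combinations(range(len(groups)), 2))
    for i, j in pairs:
        _, p_ij = mannwhitneyu(groups[i], groups[j])
        print(i, j, min(1.0, p_ij * len(pairs)))  # Bonferroni-adjusted p
```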
null
CC BY-SA 3.0
null
2011-05-14T22:46:19.800
2011-05-15T00:18:32.437
2011-05-15T00:18:32.437
2970
4559
null
10813
2
null
10806
1
null
I think that you actually mean $k^{t}$ as your weight, or that $k>1$. If $0<k<1$ and we take $k^{-t}$ as the weight, then $k^{-\infty}=\infty$, so this actually weights the present observation the least. For example, if we take $k=0.5$ then $k^{0}=1,\;k^{-1}=2,\;k^{-2}=4,\dots,k^{-20}\approx 10^{6}$, and so on. This is just stating something that you know about how the variance changes with each observation (it gets bigger as you move further backward in time from time $T$): $$(y_{T-t}|x_{T-t},a,b,k,s) \sim Normal(ax_{T-t}+b,s^{2}k^{-t})$$ Denoting $Y\equiv\{y_{T},y_{T-1},\dots,y_{1}\}$ and $X\equiv\{x_{T},x_{T-1},\dots,x_{1}\}$ we have a joint log-likelihood of: $$\log\left[p(Y|X,a,b,k,s)\right]=-\frac{1}{2}\sum_{t=0}^{T-1}\left(\log(2\pi s^{2} k^{-t})+\frac{(y_{T-t}-ax_{T-t}-b)^{2}}{s^{2}k^{-t}}\right)$$ So in order to get the maximum likelihood estimates of $a$ and $b$ you have the following objective function: $$\sum_{t=0}^{T-1}k^{t}(y_{T-t}-ax_{T-t}-b)^{2}$$ which is the one you seek.
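Numerically, maximizing this likelihood for fixed $k$ is just a weighted least-squares fit with weights $k^{t}$. A Python sketch on invented data, checking `np.polyfit` against the weighted normal equations:

```python
import numpy as np

rng = np.random.default_rng(3)
T, k = 200, 0.95
t = np.arange(T)  # t = 0 is the most recent observation
x = rng.normal(size=T)
y = 2.0 * x + 1.0 + rng.normal(size=T)

w = k ** t  # geometric weights, largest at t = 0
# np.polyfit minimizes sum((w_i * (y_i - p(x_i)))^2), so pass sqrt weights
a, b = np.polyfit(x, y, 1, w=np.sqrt(w))

# Same fit via the weighted normal equations
X = np.column_stack([x, np.ones_like(x)])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
print(a, b)  # agrees with beta
```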
null
CC BY-SA 3.0
null
2011-05-15T00:38:21.330
2011-05-16T08:33:33.177
2011-05-16T08:33:33.177
2392
2392
null
10814
2
null
10768
11
null
I like the other answers, but nobody has mentioned the following yet. The event $\{U \leq t,\ V\leq t \}$ occurs if and only if $\{\mathrm{max}(U,V)\leq t\}$, so if $U$ and $V$ are independent and $W = \mathrm{max}(U,V)$, then $F_{W}(t) = F_{U}(t)F_{V}(t)$, so for $\alpha$ a positive integer (say, $\alpha = n$) take $X = \mathrm{max}(Z_{1},...Z_{n})$ where the $Z$'s are i.i.d. For $\alpha = 1/n$ we can switcheroo to get $F_{Z} = F_{X}^n$, so $X$ would be that random variable such that the max of $n$ independent copies has the same distribution as $Z$ (and this would not be one of our familiar friends, in general). The case of $\alpha$ a positive rational number (say, $\alpha = m/n$) follows from the previous since $$ \left(F_{Z}\right)^{m/n} = \left(F_{Z}^{1/n}\right)^{m}. $$ For $\alpha$ irrational, choose a sequence of positive rationals $a_{k}$ converging to $\alpha$; then the sequence $X_{k}$ (where we can use our above tricks for each $k$) will converge in distribution to the $X$ desired. This might not be the characterization you are looking for, but it at least gives some idea of how to think about $F_{Z}^{\alpha}$ for $\alpha$ suitably nice. On the other hand, I'm not really sure how much nicer it can really get: you already have the CDF, so the chain rule gives you the PDF, and you can calculate moments till the sun sets...? It's true that most $Z$'s won't have an $X$ that's familiar for $\alpha = \sqrt{2}$, but if I wanted to play around with an example to look for something interesting I might try $Z$ uniformly distributed on the unit interval with $F(z) = z$, $0<z<1$. --- EDIT: I wrote some comments in @JMS answer, and there was a question about my arithmetic, so I'll write out what I meant in the hopes that it's more clear. 
@cardinal correctly in the comment to @JMS answer wrote that the problem simplifies to $$ g^{-1}(y) = \Phi^{-1}(\Phi^{\alpha}(y)), $$ or more generally when $Z$ is not necessarily $N(0,1)$, we have $$ x = g^{-1}(y) = F^{-1}(F^{\alpha}(y)). $$ My point was that when $F$ has a nice inverse function we can just solve for the function $y = g(x)$ with basic algebra. I wrote in the comment that $g$ should be $$ y = g(x) = F^{-1}(F^{1/\alpha}(x)). $$ Let's take a special case, plug things in, and see how it works. Let $X$ have an Exp(1) distribution, with CDF $$ F(x) = (1 - \mathrm{e}^{-x}),\ x > 0, $$ and inverse CDF $$ F^{-1}(y) = -\ln(1 - y). $$ It is easy to plug everything in to find $g$; after we're done we get $$ y = g(x) = -\ln \left( 1 - (1 - \mathrm{e}^{-x})^{1/\alpha} \right) $$ So, in summary, my claim is that if $X \sim \mathrm{Exp}(1)$ and if we define $$ Y = -\ln \left( 1 - (1 - \mathrm{e}^{-X})^{1/\alpha} \right), $$ then $Y$ will have a CDF which looks like $$ F_{Y}(y) = \left( 1 - \mathrm{e}^{-y} \right)^{\alpha}. $$ We can prove this directly (look at $P(Y \leq y)$ and use algebra to get the expression, in the next to the last step we need the Probability Integral Transform). Just in the (often repeated) case that I'm crazy, I ran some simulations to double-check that it works, ... and it does. See below. To make the code easier I used two facts: $$ \mbox{If $X \sim F$ then $U = F(X) \sim \mathrm{Unif}(0,1)$.} $$ $$ \mbox{If $U \sim \mathrm{Unif}(0,1)$ then $U^{1/\alpha} \sim \mathrm{Beta}(\alpha,1)$.} $$ The plot of the simulation results follows. ![ECDF and F to the alpha](https://i.stack.imgur.com/K6XFB.png) The R code used to generate the plot (minus labels) is ``` n <- 10000; alpha <- 0.7 z <- rbeta(n, shape1 = alpha, shape2 = 1) y <- -log(1 - z) plot(ecdf(y)) f <- function(x) (pexp(x, rate = 1))^alpha curve(f, add = TRUE, lty = 2, lwd = 2) ``` The fit looks pretty good, I think? Maybe I'm not crazy (this time)?
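The exponential example can also be verified without any simulation: composing the claimed transformation with the Exp(1) CDF should reproduce $(1 - \mathrm{e}^{-y})^{\alpha}$ exactly. A short numerical confirmation in Python (the grid of $y$ values and the $\alpha$ are arbitrary):

```python
import numpy as np

alpha = 0.7

def g(x):
    # Y = g(X) with X ~ Exp(1), as in the derivation above
    return -np.log(1 - (1 - np.exp(-x)) ** (1 / alpha))

def g_inv(y):
    # g^{-1}(y) = F^{-1}(F^alpha(y)) with F the Exp(1) CDF
    return -np.log(1 - (1 - np.exp(-y)) ** alpha)

y = np.linspace(0.1, 5, 50)
# P(Y <= y) = P(X <= g^{-1}(y)) = F(g^{-1}(y)) should equal F(y)^alpha
lhs = 1 - np.exp(-g_inv(y))
rhs = (1 - np.exp(-y)) ** alpha
print(np.max(np.abs(lhs - rhs)))            # essentially zero
print(np.max(np.abs(g(g_inv(y)) - y)))      # round trip through g is the identity
```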
null
CC BY-SA 3.0
null
2011-05-15T00:52:55.247
2011-05-20T17:59:24.970
2011-05-20T17:59:24.970
null
null
null
10815
2
null
9685
0
null
I believe you could use the "simultaneous confidence intervals" approach for doing multiple comparisons. The reference is Agresti et al. (2008), "Simultaneous confidence intervals for comparing binomial parameters," *Biometrics* 64, 1270-1275. You can find the corresponding R code at [http://www.stat.ufl.edu/~aa/cda/software.html](http://www.stat.ufl.edu/~aa/cda/software.html)
null
CC BY-SA 3.0
null
2011-05-15T01:12:35.080
2011-05-15T01:12:35.080
null
null
4559
null
10816
1
null
null
6
1778
On page 9 in [http://jenni.uchicago.edu/Oxford2005/four_param_all_2005-08-07_csh.pdf](http://jenni.uchicago.edu/Oxford2005/four_param_all_2005-08-07_csh.pdf), ATE - the average treatment effect - is the expected gain from participation in a program for a random individual. For example, evaluate the impact of going to college on wages. Due to selection bias, the estimation steps are: - Probit, to model the probability of an individual going to college. Two different inverse Mills ratios are calculated. - For those who did go to college, do OLS of wage on explanatory variables (like gender, etc.), and do the same for those who did not go to college. In each OLS regression include the appropriate inverse Mills ratio obtained from step 1 as an additional explanatory variable. - The ATE is the average of the difference in predicted values using parameter estimates for the college and non-college groups. My questions are: - On step 3, is there no need to use the parameter estimates on the inverse Mills ratios in the prediction? I just drop these coefficients when calculating the ATE. - Do I need to keep the variables in the OLS the same across the college and non-college groups? If I fit the OLS for the college and non-college groups separately, different variables are going to be significant in explaining variation in income. So, when I calculate the ATE, some parameter estimates will be zero. - I decided to split the independent variables into two sets, one for the probit and another for the OLS. In the OLS, if I use the inverse Mills ratios together with the variables used in the probit, there is high multicollinearity. Even if unbiased estimates are obtained in the presence of multicollinearity, I am worried about the prediction and wide confidence intervals due to inflated standard errors.
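For step 1, the two inverse Mills ratios are simple functions of the probit index; a hedged Python sketch using the standard formulas (the index values $z = x'\hat\gamma$ are made up, and the function names are just my labels):

```python
import numpy as np
from scipy.stats import norm

def mills_participants(z):
    # lambda_1 = phi(z) / Phi(z), used for the participant (college) group
    return norm.pdf(z) / norm.cdf(z)

def mills_nonparticipants(z):
    # lambda_0 = -phi(z) / (1 - Phi(z)), used for the non-participant group
    return -norm.pdf(z) / (1 - norm.cdf(z))

z = np.array([-1.0, 0.0, 1.0])   # hypothetical probit linear predictors
print(mills_participants(z))     # positive and decreasing in z
print(mills_nonparticipants(z))  # negative
```

Each ratio is then appended as an extra regressor in the corresponding group's OLS, per step 2.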
Heckman sample selection
CC BY-SA 3.0
null
2011-05-15T05:31:38.173
2017-11-24T18:01:18.490
2017-11-24T18:01:18.490
128677
4617
[ "regression", "econometrics", "probit", "heckman" ]
10817
1
null
null
4
315
Has anyone tried modeling the number of phone calls using OLS? The dataset is the number of calls per month for each customer's account. The dependent variable is the number of calls, or the average number of calls, and the explanatory variables are customer-specific variables including number of purchases, total spend and so on.... How do you deal with zero callers? Only a small proportion of customers actually call, 5%. I am attempting to build a predictive model, so I want to keep zero callers in the model. I do not believe that the number of calls is a bounded or censored random variable. I thought a zero number of calls is a true zero, and there is a need to account for it. Do I need to use Tobit for the estimation here?
Modeling number of phone calls with OLS
CC BY-SA 3.0
null
2011-05-15T05:58:33.960
2011-05-16T05:51:19.287
2011-05-16T05:51:19.287
2116
4617
[ "regression", "least-squares" ]
10818
2
null
10817
3
null
Just from knowing that there are many zeros in the data, this would suggest to me that you use a zero-inflated Poisson model - [a generalised linear model](http://en.wikipedia.org/wiki/Generalized_linear_model).
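For intuition, the zero-inflated Poisson mixes a point mass at zero with an ordinary Poisson count. A minimal sketch of its pmf in Python (the parameter values are invented; `pi` is the share of structural zeros):

```python
from math import exp, factorial

def zip_pmf(k, lam, pi):
    """P(K = k) for a zero-inflated Poisson: with probability pi a
    structural zero, otherwise a Poisson(lam) count."""
    poisson = exp(-lam) * lam**k / factorial(k)
    return pi * (k == 0) + (1 - pi) * poisson

lam, pi = 2.0, 0.95  # e.g. most customers never call; callers ~ Poisson(2)
probs = [zip_pmf(k, lam, pi) for k in range(50)]
print(probs[0])      # inflated zero probability, well above exp(-lam)
print(sum(probs))    # sums to 1 (up to truncation of the tail)
mean = sum(k * p for k, p in enumerate(probs))
print(mean)          # E[K] = (1 - pi) * lam
```

Fitting `pi` and `lam` (each possibly depending on covariates) is what the zero-inflated GLM machinery does.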
null
CC BY-SA 3.0
null
2011-05-15T10:35:51.817
2011-05-15T10:35:51.817
null
null
656
null
10819
2
null
10711
4
null
This is a really, really interesting question (indeed I'm somewhat in love with the idea of modelling the stack exchange sites in general). On the issue of well-roundedness, one way to assess this would be through the tags that particular users tend to answer, and their distribution across sites. Examples may make this clearer. I am a member on TeX, StackOverflow, CrossValidated and AskUbuntu. Now, I really only contribute here and to StackOverflow, and only about R on StackOverflow. So, to define well-roundedness I would look at (a) the number of tags which two sites have in common (to define similarity across sites) and (b) the extent to which a user answers questions on sites which have little or no tags in common. If, for instance, someone contributes to Python tags on StackOverflow and to cooking, that person is more well-rounded than someone who is answering statistical software questions (for instance) on StackOverflow and stats questions here. I hope this is somewhat helpful.
null
CC BY-SA 3.0
null
2011-05-15T11:44:02.263
2011-05-15T11:44:02.263
null
null
656
null
10820
2
null
10817
1
null
Since this is time series data, you would be well advised to include some form of "time variable" in the model. This could be accomplished by including seasonal dummies and/or seasonal autoregressive structure. You might also have one or more level shifts and/or one or more trends in the data, or changes in parameters or error variance over time that need to be incorporated. Incorporating predictor variables would be important, making sure that the correct contemporaneous and lag effects are treated. Additionally, you might want to detect anomalies/pulses so that your model parameters are robust to them, via intervention detection. In general this is referred to as a Box-Jenkins model with causal variables (ARMAX or transfer functions). You might want to Google "regression vs box-jenkins" to find out more about the whys and wherefores of incorporating time series structure into your model. Be careful about some web content that incorrectly positions Box-Jenkins as being non-causative. The univariate (single-series) approach is called ARIMA modelling; this approach is only suggested when you don't want to include predictor variables. Lots of incorrect textbooks and web sites don't make this clear, as they assert things like "Box-Jenkins ignores information that might be contained in a structural regression model." For example, Google "difference between box-jenkins and regression" and you will get other hits on this topic; the first hit leads to a typical misrepresentation of what Box-Jenkins models are. The claim "Box-Jenkins ignores information that might be contained in a structural regression model" is a half-truth, as what is more correct is to say "Box-Jenkins without causal variables etc."
null
CC BY-SA 3.0
null
2011-05-15T12:26:30.877
2011-05-15T14:41:24.497
2011-05-15T14:41:24.497
3382
3382
null
10822
1
10826
null
3
105
I am not able to find an existing name for the comparison function I wrote. I have two lists of values which I want to compare. At first I used Jaccard, but then I recognized I need to eliminate the length of the second list, as it does not seem to have anything to say in my use-case. So I changed Jaccard to: $$ sim(L1,L2) = \frac{|L1 \cap L2|}{|L1|} $$ One example might be: ``` L1 = {name, test, somevalue} L2 = {name, foo, bar, baz, some, random, other, values, which, would, flood, the, jaccard, comparison, as, they, are, too, much} ``` $$ sim(L1,L2) = \frac{|L1 \cap L2|}{|L1|} = \frac{|\{name\}|}{|\{name, test, somevalue\}|} = \frac{1}{3} $$ To make a long story short, I only want to calculate what percentage of the values in L1 are in L2. Does anybody know if there is a name for such a function, so I can give it a good name within my source code?
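Whatever name one settles on, the function is a one-liner on sets; a Python sketch reproducing the worked example above:

```python
def sim(l1, l2):
    """Fraction of the values in l1 that also appear in l2."""
    s1 = set(l1)
    return len(s1 & set(l2)) / len(s1)

L1 = {"name", "test", "somevalue"}
L2 = {"name", "foo", "bar", "baz", "some", "random", "other", "values",
      "which", "would", "flood", "the", "jaccard", "comparison",
      "as", "they", "are", "too", "much"}
print(sim(L1, L2))  # 1/3, matching the worked example
```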
Is there a name for the function | L1 sect L2 | / | L1 |
CC BY-SA 3.0
null
2011-05-15T13:48:01.293
2011-05-15T15:40:04.573
null
null
4350
[ "distance-functions" ]
10823
2
null
10407
0
null
I agree with some comments, in that the Poisson approximation sounds nice here (not a 'crude' approximation). It should be asymptotically exact, and it seems the most reasonable thing to do, as an exact analytic solution seems difficult. As an intermediate alternative (if you really need it), I suggest a first-order correction to the Poisson approximation, in the following way (I've done something similar some time ago, and it worked). As suggested by a comment, your model is (not approximately but exactly) Poisson if we condition on the sum. That is: let $X_t$ ($t$ is a parameter here) be a vector of $n$ independent Poisson variables, the first one with $\lambda = 2t/(n+1)$, the others with $\lambda = t/(n+1)$. Let $s=\sum x$, so $E(s)=t$. It is clear that $X_t$ is not equivalent to the other model (because our model is restricted to $s=t$), but it is a good approximation. Further, the distribution of $X_t \mid s$ is equivalent to our model. Indeed, we can write $ \displaystyle P(X_t) = \sum_s P(X_t | s) P(s)$ This can also be written for the event in consideration (that $x_1$ is the maximum). We know how to compute the LHS, and $P(s)$, but we are interested in the other term. Our first-order Poisson approximation comes from assuming that $P(s)$ concentrates about the mean so that it can be assimilated to a delta, and then $ P(X_t) \approx P(X_t | s=t) $ To refine the approximation, we can see the above as a convolution of two functions: our unknown $P(X_t | s)$, which we assume smooth around $s=t$, and a quasi-delta function, say a Gaussian with small variance. Now, we have our first-order approximation (for continuous variables): $h(x) = g(x) * N(x_0,\sigma^2)$ (convolution) $h(x_0) \approx g(x_0) + g''(x_0)\sigma^2/2$ $g(x_0) \approx h(x_0) - h''(x_0)\sigma^2/2$ Applying this to the previous equation can lead to a refined approximation to our desired probability.
null
CC BY-SA 3.0
null
2011-05-15T14:13:09.697
2011-05-15T14:13:09.697
null
null
2546
null
10824
2
null
10774
4
null
1) What is your spatial explanatory variable? It looks like the x*y plane would be a poor model for the spatial effect. ![plot of treatments and responses](https://i.stack.imgur.com/KDVrG.png) ``` i <- c(1,3,5,7,8,11,14,15,16,17,18,22,23,25,28,30,31,32,35,36,39,39,41,42) l <- rep(NA,42); l[i] <- level r <- rep(NA,42); r[i] <- response image(t(matrix(-l,6))); title("treatment") image(t(matrix(-r,6))); title("response") ``` 2) Seeing as how the blocks are 1 mile apart and you expect to see effects for a mere 30 meters, I would say it's entirely appropriate to analyze them separately.
null
CC BY-SA 3.0
null
2011-05-15T14:33:44.393
2011-05-15T20:09:18.353
2011-05-15T20:09:18.353
2456
2456
null
10825
2
null
10407
0
null
Just a word of explanation: part out of curiosity, part for lack of a better, more theoretical method, I approached the problem in a completely empirical/inductive way. I'm aware that there is the risk of getting stuck in a dead end without gaining much insight, but I thought I'd just present what I've got so far anyway, in case it is useful to someone. Starting by computing the exact probabilities for $n,t\in\{1,...,8\}$ we get ![Table of the first few probabilities for low n,t](https://i.stack.imgur.com/FczWE.png) Due to the underlying multinomial distribution, multiplying the entries in the table by $(n+1)^t$ leaves us with a purely integer table: ![Table of integerified probabilities](https://i.stack.imgur.com/Nn20U.png) Now we find that there is a polynomial in $n$ for every column which acts as the sequence function for that column: ![Sequence functions for different t's](https://i.stack.imgur.com/RMLVI.png) Dividing the sequence functions by $(n+1)^t$ gives us sequence functions for the original probabilities for the first $t$'s. These rational polynomials can be simplified by decomposing them into partial fractions and substituting $x$ for $1/(n+1)$, leaving us with: ![Sequence functions in x=1/(n+1)](https://i.stack.imgur.com/RxbR5.png) or as a coefficient table ![sequence function coefficients](https://i.stack.imgur.com/epKgZ.png) Starting with the $x^2$ column, there are sequence functions for these coefficients again: ![x^k coefficients sequence functions](https://i.stack.imgur.com/FtVpS.png) That's how far I got. There are definitely exploitable patterns here that allow sequence functions to occur, but I'm not sure if there is a nice closed-form solution for these sequence functions.
null
CC BY-SA 3.0
null
2011-05-15T15:05:05.490
2011-05-15T15:05:05.490
null
null
4360
null
10826
2
null
10822
2
null
From where I stand it looks like the conditional relative frequency of $L2$ given $L1$.
null
CC BY-SA 3.0
null
2011-05-15T15:40:04.573
2011-05-15T15:40:04.573
null
null
4376
null
10827
1
null
null
10
4178
I'm investigating the interplay between two variables ($x_1$ and $x_2$). There is a great deal of linear correlation between these variables, with $r>0.9$. From the nature of the problem I cannot say anything about causation (whether $x_1$ causes $x_2$ or the other way around). I would like to study deviations from the regression line in order to detect outliers. To do this I can either build a linear regression of $x_1$ as a function of $x_2$, or the other way around. Can my choice of variable order influence my results?
Does the variable order matter in linear regression?
CC BY-SA 3.0
null
2011-05-15T15:48:13.567
2011-05-16T15:13:08.160
null
null
4622
[ "regression", "outliers", "linear-model" ]
10828
2
null
10827
3
null
It surely can (actually, it even matters with regard to the assumptions on your data - you only make assumptions about the distribution of the outcome given the covariate). In this light, you might look up a term like "inverse prediction variance". Either way, linear regression says nothing about causation! At best, you can say something about causation through careful design.
null
CC BY-SA 3.0
null
2011-05-15T15:53:39.430
2011-05-15T15:53:39.430
null
null
4257
null
10829
2
null
10827
3
null
To make the case symmetrical, one may regress the difference between the two variables ($\Delta x$) vs their average value.
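As an illustrative sketch of this symmetric setup (Python, with made-up numbers rather than the asker's data): fit difference = a + b · average by ordinary least squares; large residuals then flag points where the two variables disagree.

```python
# Hedged sketch: regress the difference of the two variables on their average,
# the symmetric alternative to regressing x1 on x2 (or vice versa).
def ols(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx, slope  # (intercept, slope)

x1 = [1.0, 2.1, 2.9, 4.2, 5.1]  # hypothetical measurements
x2 = [1.1, 1.9, 3.1, 3.9, 5.0]
avg = [(a + b) / 2 for a, b in zip(x1, x2)]
diff = [a - b for a, b in zip(x1, x2)]
intercept, slope = ols(avg, diff)
# Large residuals from this fit mark candidate outliers
resid = [d - (intercept + slope * m) for m, d in zip(avg, diff)]
```

This is essentially the Bland–Altman view of the data: neither variable is privileged as the "response".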
null
CC BY-SA 3.0
null
2011-05-15T16:53:06.540
2011-05-15T17:01:17.343
2011-05-15T17:01:17.343
1496
1496
null
10830
2
null
10407
5
null
Partition the outcomes by the frequency of occurrences $x$ of the "double outcome", $0 \le x \le t$. Conditional on this number, the distribution of the remaining $t-x$ outcomes is multinomial across $n-1$ equiprobable bins. Let $p(t-x, n-1, x)$ be the chance that no bin out of $n-1$ equally likely ones receives more than $x$ outcomes. The sought-for probability therefore equals $$\sum_{x=0}^{t} \binom{t}{x}\left(\frac{2}{n+1}\right)^x \left(\frac{n-1}{n+1}\right)^{t-x} p(t-x,n-1,x).$$ In [Exact Tail Probabilities and Percentiles of the Multinomial Maximum](http://www.stat.purdue.edu/~dasgupta/mult.pdf), Anirban DasGupta points out (after correcting typographical errors) that $p(n,K,x)K^n/n!$ equals the coefficient of $\lambda^n$ in the expansion of $\left(\sum_{j=0}^{x}\lambda^j/j!\right)^K$ (using his notation). For the values of $t$ and $n$ involved here, this coefficient can be computed in at most a few seconds (making sure to discard all $O(\lambda^{n+1})$ terms while performing the successive convolutions needed to obtain the $K^{\text{th}}$ power). (I checked the timing and corrected the typos by reproducing DasGupta's Table 4, which displays the complementary probabilities $1 - p(n,K,x)$, and extending it to values where $n$ and $K$ are both in the hundreds.) Quoting a theorem of Kolchin et al., DasGupta provides an approximation for the computationally intensive case where $t$ is substantially larger than $n$. Between the exact computation and the approximation, it looks like all possibilities are covered.
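A minimal sketch of the coefficient computation described above (written here in Python with exact rational arithmetic; truncating every convolution at degree $n$ keeps the successive multiplications cheap, as the answer notes):

```python
from fractions import Fraction
from math import factorial

def p_max_le(n, K, x):
    # Exact P(max cell count <= x) for n draws into K equiprobable cells:
    # p * K**n / n! equals the coefficient of lambda**n in
    # (sum_{j=0}^{x} lambda**j / j!) ** K, computed by truncated convolution.
    base = [Fraction(1, factorial(j)) for j in range(min(x, n) + 1)]
    poly = [Fraction(1)]
    for _ in range(K):
        new = [Fraction(0)] * (n + 1)
        for i, c in enumerate(poly):
            for j, b in enumerate(base):
                if i + j <= n:
                    new[i + j] += c * b
        poly = new
    return poly[n] * factorial(n) / K**n

p_max_le(3, 2, 2)  # 3/4: with 3 draws in 2 cells, only (3,0) and (0,3) exceed 2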
null
CC BY-SA 3.0
null
2011-05-15T19:07:36.967
2011-05-15T19:07:36.967
null
null
919
null
10831
2
null
10827
0
null
Your x1 and x2 variables are collinear. In the presence of multicollinearity, your parameter estimates are still unbiased, but their variance is large, i.e., your inference on the significance of the parameter estimates is not valid, and your prediction will have large confidence intervals. Interpretation of the parameter estimates is also difficult. In the linear regression framework, the parameter estimate on x1 is the change in Y for a unit change in x1 given every other exogenous variable in the model is held constant. In your case, x1 and x2 are highly correlated, and you cannot hold x2 constant when x1 is changing.
null
CC BY-SA 3.0
null
2011-05-15T19:50:27.713
2011-05-15T19:50:27.713
null
null
4617
null
10832
1
10834
null
6
4657
I've written a small hierarchical clustering algorithm (for better or for worse). I'd like a quick way of visualizing it. Any tooling ideas?
Anyone know of a simple dendrogram visualizer?
CC BY-SA 3.0
null
2011-05-15T19:51:12.777
2011-12-16T06:06:24.933
2011-05-15T20:18:15.013
null
4623
[ "data-visualization", "dendrogram" ]
10834
2
null
10832
5
null
[TreeView](http://taxonomy.zoology.gla.ac.uk/rod/treeview.html) -- it is not a statistical tool, but it is very light and I have a great sentiment for it; and it is easy to produce output in [Newick format](http://en.wikipedia.org/wiki/Newick_format), which TV eats without problems. A more powerful solution is to use R, but there you would have to invest some time in converting to the `dendrogram` object (basically a list-of-lists).
null
CC BY-SA 3.0
null
2011-05-15T20:17:01.317
2011-05-15T20:17:01.317
null
null
null
null
10835
2
null
10554
3
null
Homogeneity Definition: To start, let's define homogeneity as the degree to which households grouped in the same area are like one another for some attribute. ## MAUP Approach Paraphrasing the stated problem: We are uncertain how homogeneity changes as we decrease the spatial resolution of the design of how we group households into areas. For this problem @AndyW's answer is solid. In the field of geography, your problem can be classed within the modifiable areal unit problem ([MAUP](http://en.wikipedia.org/wiki/Modifiable_areal_unit_problem)). You can search the index for 'MAUP' at [this site](http://www.spatialanalysisonline.com/output/). ## Alternative Clustering Approach An alternative problem: Given that we want to maximise areal homogeneity when aggregating households, we are uncertain of the optimal spatial configuration of how we should group the households. With the [p-regions clustering algorithm](http://pysal.org/users/tutorials/region.html), you can visually explore different structures of homogeneity within your data by creating different maps of household areas by playing with these 2 parameters: - changing the different attributes for maximising area homogeneity - changing the number of households required to build an areal group
null
CC BY-SA 3.0
null
2011-05-15T20:20:23.437
2011-05-15T20:31:46.160
2011-05-15T20:31:46.160
4329
4329
null
10838
1
null
null
32
221823
I wonder if there is a simple way to produce a list of variables using a for loop and assign values to them. ``` for(i in 1:3) { noquote(paste("a",i,sep=""))=i } ``` In the above code, I try to create `a1`, `a2`, `a3`, which are assigned the values 1, 2, 3. However, R gives an error message. Thanks for your help.
Produce a list of variable name in a for loop, then assign values to them
CC BY-SA 3.0
null
2011-05-16T00:17:52.007
2013-08-15T13:26:48.480
2011-05-16T04:18:00.890
159
4625
[ "r" ]
10839
2
null
10838
46
null
You are looking for `assign()`. ``` for(i in 1:3){ assign(paste("a", i, sep = ""), i) } ``` gives ``` > ls() [1] "a1" "a2" "a3" ``` and ``` > a1 [1] 1 > a2 [1] 2 > a3 [1] 3 ``` Update I agree that using loops is (very often) bad R coding style (see discussion above). Using `list2env()` (thanks to @mbq for mentioning it), this is another solution to @Han Lin Shang's question: ``` x <- as.list(rnorm(10000)) names(x) <- paste("a", 1:length(x), sep = "") list2env(x , envir = .GlobalEnv) ```
null
CC BY-SA 3.0
null
2011-05-16T00:51:17.090
2011-05-16T11:53:29.887
2011-05-16T11:53:29.887
307
307
null
10840
2
null
206
1
null
In the case of a database, we always store the data in discrete form even when the nature of the data is continuous. Why should I emphasize the nature of the data? Because the distribution of the data is what should guide our analysis. If the nature of the data is continuous, I suggest you analyze it with continuous methods. Take an example of continuous versus discrete: MP3. Even though the nature of "sound" is analog, it is stored in a digital format. We should still always analyze it in an analog way.
null
CC BY-SA 3.0
null
2011-05-16T02:19:18.120
2011-05-16T02:19:18.120
null
null
4588
null
10841
1
10843
null
7
462
If you're modelling a proportion response against numerous predictors that are also proportions, is it necessary to transform the response if the standard OLS model is seemingly well behaved? By well behaved I mean: - None of the fitted values are outside the range [0,1] (In fact they are fairly accurate) - Residuals look good I believe arcsine transform is typically used in this scenario to make the data look normal, but what if this is not needed? Also, say the data wasn't normal, would a transform still be necessary if one were modelling the proportions with the Random Forest technique? Cheers
Is it necessary to perform a transformation on proportion data if it's reasonably well behaved?
CC BY-SA 3.0
null
2011-05-16T02:23:47.057
2011-05-16T04:33:27.030
null
null
845
[ "modeling", "data-transformation", "proportion" ]
10842
1
10844
null
5
4982
A simple question. If $Y=\frac{1}{X}$ and I know $f_X(x)$, is it true that $E(Y) = E(1/X) = \int_{-\infty}^\infty \frac{1}{x}f_X(x) dx$?
Expected value of a transformation
CC BY-SA 3.0
null
2011-05-16T04:32:57.500
2011-05-16T15:28:28.480
2011-05-16T09:37:28.880
null
4627
[ "data-transformation", "expected-value" ]
10843
2
null
10841
4
null
It depends. If your goal is prediction, then you may not need to do any gymnastics to get a more theoretically sound model if the one in hand does well. But of course you should always be aware that a model that fits present data well may not perform well on new data. You can try to get a feel for that using cross-validation, although you simply might not have important aspects of the distribution represented in your sample. If you want to make inferences using some of the parameters in the model then that model should be motivated by the problem at hand. Anyway, a first step is to just look at the response. Is it roughly bell-shaped? Did you try the arcsine transform? Does the transformed distribution look (much) different? If the distribution of the proportions is fairly tight and located somewhere in the middle the transformation might not do much. And then, of course, does the transformation make a difference in the regression?
null
CC BY-SA 3.0
null
2011-05-16T04:33:27.030
2011-05-16T04:33:27.030
null
null
26
null
10844
2
null
10842
6
null
Yes. In general if $X\sim f(x)$ then for a function $g(x)$ you have $E(g(X)) = \int g(x)f(x)dx$. You can verify this for simple cases by deriving the distribution of the transformed variable. The completely general result takes some more advanced math which you can probably safely avoid :)
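A quick numerical sanity check of this rule (a Python sketch using simple midpoint-rule integration on a case with a known answer): for $X \sim \mathrm{Uniform}(1,2)$ (density $f = 1$) and $g(x) = 1/x$, we have $E[1/X] = \int_1^2 \frac{1}{x}\,dx = \ln 2$.

```python
import math

# Numerical check of E[g(X)] = integral of g(x) f(x) dx via the midpoint rule.
def expect(g, f, lo, hi, n=100_000):
    h = (hi - lo) / n
    return h * sum(g(lo + (i + 0.5) * h) * f(lo + (i + 0.5) * h) for i in range(n))

# X ~ Uniform(1, 2), g(x) = 1/x, so E[1/X] should equal ln 2 ≈ 0.6931
val = expect(lambda x: 1.0 / x, lambda x: 1.0, 1.0, 2.0)
```

The same helper works for any density and transformation where the integral converges, which is the caveat to keep in mind for $1/X$ when $X$ puts mass near zero.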
null
CC BY-SA 3.0
null
2011-05-16T04:37:24.950
2011-05-16T04:37:24.950
null
null
26
null
10845
2
null
10838
23
null
If the values are in vector, the loop is not necessary: ``` vals <- rnorm(3) n <- length(vals) lhs <- paste("a", 1:n, sep="") rhs <- paste("vals[",1:n,"]", sep="") eq <- paste(paste(lhs, rhs, sep="<-"), collapse=";") eval(parse(text=eq)) ``` As a side note, this is the reason why I love R.
null
CC BY-SA 3.0
null
2011-05-16T05:46:43.297
2013-08-15T13:26:48.480
2013-08-15T13:26:48.480
7290
2116
null
10846
2
null
10816
5
null
- The answer is yes, you do not need to use the parameters of the inverse Mills ratios. But you must include them in the regression nevertheless, or your other parameters will be biased. - According to the article, yes. Although if different variables are statistically significant in the different regressions there is no problem; just assume that the coefficients for the non-significant regressors are zero. - Splitting is perfectly reasonable. Since you are fitting two models, one for the decision whether to go to college or not and another for log-earnings, it is perfectly reasonable to assume that different variables will be important. This deserves further investigation, though; high multicollinearity when using the same variables in the probit and OLS regressions is not a standard feature of the Heckman model as far as I know.
null
CC BY-SA 3.0
null
2011-05-16T06:11:15.550
2011-05-16T06:11:15.550
null
null
2116
null
10847
1
10848
null
8
920
What kind of function is: $f_X(x) = 2 \lambda \pi x e^{-\lambda \pi x ^2}$ Is this a common distribution? I am trying to find a confidence interval of $\lambda$ using the estimator $\hat{\lambda}=\frac{n}{\pi \sum^n_{i=1} X^2_i}$ and I am struggling to prove if this estimator has Asymptotic Normality. Thanks
What kind of distribution is $f_X(x) = 2 \lambda \pi x e^{-\lambda \pi x ^2}$?
CC BY-SA 3.0
null
2011-05-16T07:22:55.747
2012-12-11T16:09:46.907
2011-05-17T03:53:29.227
2116
4627
[ "distributions", "confidence-interval", "estimation", "self-study" ]
10848
2
null
10847
10
null
It is the square root of an [exponential distribution](http://en.wikipedia.org/wiki/Exponential_distribution) with rate $\pi\lambda$. This means that if $Y\sim\exp(\pi\lambda)$, then $\sqrt{Y}\sim f_X$. Since your estimate is the [maximum likelihood estimate](http://en.wikipedia.org/wiki/Maximum_likelihood) it should be asymptotically normal. This follows immediately from the properties of maximum likelihood estimates. In this particular case: $$\sqrt{n}(\hat\lambda-\lambda)\to N(0,\lambda^2)$$ since $$E\frac{\partial^2}{\partial \lambda^2}\log f_X(X)=-\frac{1}{\lambda^2}.$$
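A small simulation sketch of this (Python rather than anything from the answer; it samples $Y\sim\exp(\pi\lambda)$ by inversion, takes square roots to get draws from $f_X$, and checks that the MLE recovers $\lambda$):

```python
import math
import random

# Sketch: simulate X with density f_X(x) = 2*lambda*pi*x*exp(-lambda*pi*x^2)
# as X = sqrt(Y) with Y ~ Exp(rate = pi*lambda), then compute the MLE
# lambda_hat = n / (pi * sum(X_i^2)).
random.seed(1)
lam, n = 2.0, 20_000
rate = math.pi * lam
xs = [math.sqrt(-math.log(1.0 - random.random()) / rate) for _ in range(n)]
lam_hat = n / (math.pi * sum(x * x for x in xs))
# lam_hat should be close to lam = 2, with standard error roughly lam/sqrt(n)
```

Repeating this many times and standardizing $\sqrt{n}(\hat\lambda-\lambda)/\lambda$ would show the approach to $N(0,1)$ stated above.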
null
CC BY-SA 3.0
null
2011-05-16T08:17:54.307
2011-05-16T08:24:30.760
2011-05-16T08:24:30.760
2116
2116
null
10849
2
null
10847
10
null
Why do you care about asymptotics when the exact answer is just as simple (and exact)? I am assuming that you want asymptotic normality so that you can use the $\mathrm{Est}\pm z_{\alpha}\mathrm{StdErr}$ type of confidence interval If you make the probability transformation $Y_{i}=X_{i}^{2}$ then you have an exponential sampling distribution (as @mpiktas has mentioned): $$\newcommand{\Gamma}{\mathrm{Gamma}} \newcommand{\MLE}{\mathrm{MLE}} \newcommand{\Pr}{\mathrm{Pr}} f_{Y_{i}}(y_{i})=f_{X_{i}}(\sqrt{y_{i}})|\frac{\partial\sqrt{y_{i}}}{\partial y_{i}}|=2 \lambda \pi \sqrt{y_{i}} \exp(-\lambda \pi \sqrt{y_{i}} ^2)\frac{1}{2\sqrt{y_{i}}}=\lambda\pi\exp(-\lambda\pi y_{i})$$ So the joint log-likelihood in terms of $D\equiv\{y_{1},\dots,y_{N}\}$ becomes: $$\log[f(D|\lambda)]=N\log(\pi)+N\log(\lambda)-\lambda\pi\sum_{i=1}^{N}y_{i}$$ Now the only way the data enters the analysis is through the total $T_{N}=\sum_{i=1}^{N}y_{i}$ (and the sample size $N$). Now it is an elementary sampling theory calculation to show that $T_{N}\sim \Gamma(N,\pi\lambda)$, and further that $\pi N^{-1}T_{N}\sim \Gamma(N,N\lambda)$. We can further make this a "pivotal" quantity by taking $\lambda$ out of the equations (via the same way that I just put $N$ into them). And we have: $$\lambda\pi N^{-1}T_{N}=\frac{\lambda}{\hat{\lambda}_{\MLE}}\sim \Gamma(N,N)$$ Note that thus we now have a distribution which involves the MLE and whose sampling distribution is independent of the parameter $\lambda$. Now your MLE is equal to $\frac{1}{\pi N^{-1}T_{N}}$ And so writing quantities $L_{\alpha}$ and $U_{\alpha}$ such that the following holds: $$\Pr(L_{\alpha} < G < U_{\alpha})=1-\alpha\;\;\;\;\;\;\;G\sim \Gamma(N,N)$$ And we then have: $$\Pr(L_{\alpha} < \frac{\lambda}{\hat{\lambda}_{\MLE}} < U_{\alpha})=\Pr(L_{\alpha}\hat{\lambda}_{\MLE} > \lambda > U_{\alpha}\hat{\lambda}_{\MLE})=1-\alpha$$ And you have an exact $1-\alpha$ confidence interval for $\lambda$. 
NOTE: The Gamma distribution I am using is the "precision" style, so that a $\Gamma(N,N)$ density looks like: $$f_{\Gamma(N,N)}(g)=\frac{N^{N}}{\Gamma(N)}g^{N-1}\exp(-Ng)$$
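A simulation sketch (Python) of the pivotal claim above: $\lambda/\hat{\lambda}_{\mathrm{MLE}} = \lambda\pi T_N/N$ should follow a $\mathrm{Gamma}(N,N)$ distribution whatever $\lambda$ is, so its mean should be about 1 and its variance about $1/N$:

```python
import math
import random

# Check by simulation that lambda / lambda_hat = lambda*pi*T_N / N behaves
# like Gamma(N, N): mean 1, variance 1/N, independent of lambda.
random.seed(2)
lam, N, reps = 3.0, 10, 20_000
pivots = []
for _ in range(reps):
    # T_N = sum of N draws Y_i ~ Exp(rate = pi*lam), sampled by inversion
    T = sum(-math.log(1.0 - random.random()) / (math.pi * lam) for _ in range(N))
    pivots.append(lam * math.pi * T / N)
mean = sum(pivots) / reps
var = sum((p - mean) ** 2 for p in pivots) / reps
# mean should be near 1 and var near 1/N = 0.1
```

The quantiles $L_\alpha$ and $U_\alpha$ of this pivot then give the exact interval $(L_\alpha\hat{\lambda}_{\mathrm{MLE}},\, U_\alpha\hat{\lambda}_{\mathrm{MLE}})$ described in the answer.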
null
CC BY-SA 3.0
null
2011-05-16T10:22:52.400
2012-12-11T16:09:46.907
2012-12-11T16:09:46.907
17230
2392
null
10850
1
10853
null
6
114
My current dataset has three conditions, and we've measured the activity levels of 10,000 genes in each condition, replicated 8 times. Using 10,000 linear models, we determine for each pair of conditions (i.e. for each of three contrasts) the number of genes with significantly different activity levels. This is [standard procedure](http://bioinf.wehi.edu.au/limma/) for this kind of microarray data. We find: - 2000 genes have significantly different activity levels between A and B - 1500 genes have significantly different activity levels between A and C - 100 genes have significantly different activity levels between B and C This suggests that conditions B and C are more similar to each other than either is to A. PCA suggests the same result. Is there any way for us to quantify the extent to which "conditions B and C are more similar to each other than to A" (i.e. to put a p-value on it)? Thanks for your help, and apologies if this question is trivial. Kind regards,
Comparing numbers of p-values from many linear models
CC BY-SA 3.0
null
2011-05-16T11:28:46.337
2011-05-16T12:12:57.250
2011-05-16T11:51:00.613
930
3773
[ "genetics", "microarray" ]
10851
1
null
null
2
124
I am trying to devise a simple algorithm to create a Bayesian prior from measurements obtained from time series data. Firstly, I presuppose that the data can take on one of five possible "shapes" or "patterns" and then measure the Euclidean distance of each time series data point from its equivalent point in time in each of the five possible patterns, summed over a specified period of time for each pattern. This will give five distinct numbers with the lowest one indicating which pattern the data is most likely to resemble due to the fact that this distance metric is the smallest "error measurement." However, this is not a sufficient measure in and of itself hence my desire to use it as a prior in a naive Bayesian classifier. My problem is: how to use these numbers to form a prior? I am thinking of using some form of ratio i.e. the numbers are $1, 2, 3, 4, 5$ for a total of $15$, therefore: $$15/1=15, 15/2=7.5, 15/3=5, 15/4=3.75, 15/5=3,$$ and so the prior for $1$ would be: $$15/(15+7.5+5+3.75+3),$$ the prior for 2 would be: $$7.5/(15+7.5+5+3.75+3),$$ etc. Is there any justification for using an approach such as this, or is there perhaps a better approach?
Algorithm to create Bayesian priors from measurements
CC BY-SA 3.0
null
2011-05-16T11:42:56.027
2017-08-01T10:41:07.663
2017-08-01T10:41:07.663
11887
226
[ "bayesian", "modeling", "prior" ]
10852
1
10864
null
3
182
I have annual returns and standard deviations for two funds, $r_{a}$, $r_{b}$, $\mathrm{SD}_{a}$ and $\mathrm{SD}_{b}$ but I do not have individual data, just the annual data. The annual correlation between the prices of the funds is 0.7. If I had the individual data I could use $\mathrm{Cov}(A,B) =\sum_i (\bar{r}_{a} - r_{i a}) (\bar{r}_{b} - r_{i b}) \>,$ but now I am lost without the data. Are there some approximations to get the half annual data from the annual data or some formula to do it directly?
Half annual covariance $\mathrm{Cov}(X,Y)_{\text{6 months}}$ from annual covariance?
CC BY-SA 3.0
null
2011-05-16T12:02:12.167
2011-05-16T15:59:16.300
2011-05-16T14:47:44.590
2970
2914
[ "covariance" ]
10853
2
null
10850
3
null
Prior to answering your question: does the distribution of the gene effects justify using a linear model (e.g., are they distributed more or less normally)? Now to your question: I might offer to go a different way about it. It sounds like what you are asking for is to measure the correlation (i.e., similarity of behavior) between the different conditions. A simple way to do that is to take the mean (or maybe the trimmed mean or median) of the 8 replications; then you'd have 10K triplets you can use for creating a correlation matrix (between the 3 conditions). The second step would then be to answer the question of whether one correlation (say, between A and B) is significantly higher than the other two correlations (say, between A and C, and between B and C). Here you can use the following nice [online tool](http://faculty.vassar.edu/lowry/rdiff.html), or you can code it yourself in R using the information given [here](http://davidmlane.com/hyperstat/B8712.html). Cheers, Tal
null
CC BY-SA 3.0
null
2011-05-16T12:12:57.250
2011-05-16T12:12:57.250
null
null
253
null
10855
1
null
null
5
414
I have this long-running experiment. Each time I run it I get a new goodness value, since the algorithm has random variables in it. So I need to report the mean and the std of some n runs. What should n be? I need to be able to defend n based on some statistical ideas. Some kind of scientific reference (a book, a paper) would be wonderful, too. I provide more details as you asked, thanks for the answers: In computer vision, an important challenge is to recognize objects from images. Different computer algorithms are developed for this purpose. To see how good a new algorithm is, one sometimes constructs a test and a training set of images, say 1000 images for each, trains the algorithm with the training images, and produces a success rate using the test set. If, out of the 1000 objects in the test images, 800 are recognized by the algorithm, the success rate is said to be 80 percent. Now, my algorithm analyses, say, 1000 RANDOM points in the image, and using that analysis, tries to recognize the objects in the image. Each time I run the algorithm, I get a different success rate, since the 1000 points are produced RANDOMLY. So I think it's best to report some kind of summary statistics (e.g. the mean and standard deviation) of the success rate. Also, one sometimes needs to say, "well, in addition to my algorithm, I tried these, say, 10 algorithms on the same dataset, and this table shows that mine is the best in this and this way..." Some of these algorithms may need to run more than once, too. So one can really have a long experiment. So, as I said before, at least how many times should I run the long-running experiment? Thanks.
Number of times to run a lengthy experiment
CC BY-SA 3.0
null
2011-05-16T12:59:12.657
2011-05-18T03:09:31.003
2011-05-17T12:27:13.450
4629
4629
[ "experiment-design" ]
10856
1
10857
null
24
14633
... and why? Assuming $X_1$, $X_2$ are independent random variables with means $\mu_1,\mu_2$ and variances $\sigma^2_1,\sigma^2_2$ respectively. My basic statistics book tells me that the distribution of $X_1-X_2$ has the following properties: - $E(X_1-X_2)=\mu_1-\mu_2$ - $Var(X_1-X_2)=\sigma^2_1 +\sigma^2_2$ Now let's say $X_1$, $X_2$ are t-distributions with $n_1-1$ and $n_2-1$ degrees of freedom. What is the distribution of $X_1-X_2$? This question has been edited: the original question was "What are the degrees of freedom of the difference of two t-distributions?". mpiktas has already pointed out that this makes no sense, since $X_1-X_2$ is not t-distributed, no matter how approximately normal $X_1,X_2$ (i.e. high df) may be.
What is the distribution of the difference of two-t-distributions
CC BY-SA 3.0
null
2011-05-16T13:52:03.117
2022-01-25T23:30:49.427
2015-11-28T00:05:10.073
805
264
[ "distributions", "degrees-of-freedom", "t-distribution" ]
10857
2
null
10856
16
null
The sum (or difference) of two independent t-distributed random variables is not t-distributed. Hence you cannot talk about the degrees of freedom of this distribution, since the resulting distribution does not have degrees of freedom in the sense that a t-distribution has.
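A quick simulation illustration (a Python sketch, building t draws as $Z/\sqrt{\chi^2_\nu/\nu}$): the mean and variance rules from the question hold for the difference of two independent $t(5)$ variables, even though the resulting distribution is not itself a t-distribution.

```python
import math
import random

# For the difference of two independent t(5) draws, E = 0 and
# Var = 5/3 + 5/3 = 10/3; the shape, however, is not that of any t.
random.seed(3)

def t_draw(df):
    # t(df) = standard normal over sqrt(chi-squared(df)/df)
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

n = 50_000
diffs = [t_draw(5) - t_draw(5) for _ in range(n)]
m = sum(diffs) / n
v = sum((d - m) ** 2 for d in diffs) / n
# m should be near 0 and v near 10/3
```

Comparing the simulated quantiles of `diffs` against any t-distribution scaled to variance 10/3 would show the mismatch in the tails.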
null
CC BY-SA 3.0
null
2011-05-16T13:58:53.843
2011-05-16T13:58:53.843
null
null
2116
null
10858
1
10860
null
7
6335
This is more of a general statistics question, though if it matters I'm writing PHP code. Let's say I'm trying to compute the average value of a toy that is commonly bought and sold on the secondary market, and I have a set of price values culled both from auctions and from user-entered "price paid" data. The data points that represent auctions are pretty reliable, but I also get the occasional "garage sale" type of data point, where someone may have paid a buck to buy something from Aunt Polly at a garage sale. The problem is that the `$1` type of data points aren't really valuable to me, as they don't really indicate value--Aunt Polly didn't know any better, and didn't care. Similarly, I may occasionally get a data point coming from a jokester entering `$9000` for a toy that is really only worth `$9`. So, when computing value, what's the best way to factor these types of anomalies out of otherwise useful data? I've read about outliers, and something about generally ignoring anything that is more than 2.5 standard deviations outside the rest of the data, but I'm looking for the full recipe, here. Thanks so much!
Computing average value ignoring outliers
CC BY-SA 3.0
null
2011-05-16T14:01:19.187
2011-07-08T14:55:05.550
2011-05-17T17:02:35.693
null
4631
[ "standard-deviation", "outliers" ]
10859
2
null
10858
2
null
I originally posted this on SO before it was deleted: [https://stats.stackexchange.com/](https://stats.stackexchange.com/) will probably help you better with this, and give a more comprehensive answer. I'm not a mathematician, but I suspect there are multiple ways to solve this issue. As a programmer this is how I would tackle the problem; I'm not skilled enough to tell you if this is sound, but for simple data it should be acceptable. Depending on the type of data, it might be acceptable to have cut-off amounts. You will probably want a rolling average (often used in stock markets) that takes the average price over the last n months; this helps negate the impact of inflation. Then have a `$n` cutoff or a percentage-based cutoff, that is, any value that deviates +-20% or +-`$n` from the rolling average will be ignored. This would work quite well for relatively stable markets; if your entity exists in a volatile market that fluctuates wildly then you probably want to find a different approach. You also need to seriously consider cutting data off: you mention granny's yard sale, which is arguably a legitimate cut-off, but you need to accept that you will probably be losing legitimate data points as well that could have a significant effect on your results. But again, there will be multiple ways to achieve this.
null
CC BY-SA 3.0
null
2011-05-16T14:13:37.913
2011-05-16T14:26:03.333
2017-04-13T12:44:39.283
-1
4632
null
10860
2
null
10858
10
null
In boxplots, values that are more than 1.5 times the IQR (interquartile range, difference between quartile 1 and 3) away from (as in: in the direction away from the median) the quartiles are typically considered outliers. I cannot say whether this is an appropriate measure for your data, though...
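For concreteness, a small sketch of this rule (Python; note that quartile conventions vary slightly between implementations, so borderline points may be flagged differently by, say, R's `boxplot.stats`):

```python
import statistics

# The 1.5*IQR boxplot rule: flag values more than 1.5*IQR beyond the quartiles.
def iqr_outliers(data):
    q1, _, q3 = statistics.quantiles(data, n=4, method="inclusive")
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in data if x < lo or x > hi]

# Hypothetical toy prices: Aunt Polly's $1 bargain and a $9000 joke entry
prices = [1, 8, 9, 9, 10, 10, 10, 11, 11, 12, 9000]
flagged = iqr_outliers(prices)  # [1, 9000]
```

One could then average the unflagged values, which is the "trimmed mean ignoring outliers" the question asks for.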
null
CC BY-SA 3.0
null
2011-05-16T14:22:43.363
2011-05-16T14:22:43.363
null
null
4257
null
10861
2
null
10827
3
null
Standard regression minimizes the vertical distance between the points and the line, so switching the 2 variables will now minimize the horizontal distance (given the same scatterplot). Another option (which goes by several names) is to minimize the perpendicular distance; this can be done using principal components. Here is some R code that shows the differences: ``` library(MASS) tmp <- mvrnorm(100, c(0,0), rbind( c(1,.9),c(.9,1)) ) plot(tmp, asp=1) fit1 <- lm(tmp[,1] ~ tmp[,2]) # horizontal residuals segments( tmp[,1], tmp[,2], fitted(fit1),tmp[,2], col='blue' ) o <- order(tmp[,2]) lines( fitted(fit1)[o], tmp[o,2], col='blue' ) fit2 <- lm(tmp[,2] ~ tmp[,1]) # vertical residuals segments( tmp[,1], tmp[,2], tmp[,1], fitted(fit2), col='green' ) o <- order(tmp[,1]) lines( tmp[o,1], fitted(fit2)[o], col='green' ) fit3 <- prcomp(tmp) b <- -fit3$rotation[1,2]/fit3$rotation[2,2] a <- fit3$center[2] - b*fit3$center[1] abline(a,b, col='red') segments(tmp[,1], tmp[,2], tmp[,1]-fit3$x[,2]*fit3$rotation[1,2], tmp[,2]-fit3$x[,2]*fit3$rotation[2,2], col='red') legend('bottomright', legend=c('Horizontal','Vertical','Perpendicular'), lty=1, col=c('blue','green','red')) ``` To look for outliers you can just plot the results of the principal components analysis. You may also want to look at: > Bland and Altman (1986), Statistical Methods for Assessing Agreement Between Two Methods of Clinical Measurement. Lancet, pp 307-310
null
CC BY-SA 3.0
null
2011-05-16T15:13:08.160
2011-05-16T15:13:08.160
null
null
4505
null
10862
2
null
10842
1
null
Another approach, if you are happy with a numerical estimate (as opposed to the theoretical exact value), is to generate a bunch of data from the distribution, apply the transformation, then take the mean of the transformed data as the estimate of the expected value. This avoids integration, which can be nice in ugly cases, but does not give the theory, relationship, or exact value.
null
CC BY-SA 3.0
null
2011-05-16T15:28:28.480
2011-05-16T15:28:28.480
null
null
4505
null
10864
2
null
10852
2
null
The [Wikipedia entry for covariance](http://en.wikipedia.org/wiki/Covariance) is good for learning the rules for manipulating and calculating covariance. The [Wikipedia entry on variance](http://en.wikipedia.org/wiki/Variance) is good for that special case. In your particular case, you can solve your problem fairly easily by using the rule (in the properties section of the covariance article) $$ \operatorname{Cov}(aX+bY, cW+dV) = ac\,\operatorname{Cov}(X,W)+ad\,\operatorname{Cov}(X,V)+bc\,\operatorname{Cov}(Y,W)+bd\,\operatorname{Cov}(Y,V)\ $$ If you think of annual returns as the sum of two semi-annual returns, and assume that the semi-annual returns are not correlated across periods and that the covariances for the two periods are equal, then it should turn out that you can simply divide the annual covariance in half.
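A simulation sketch of that argument (Python, with hypothetical unit-variance semiannual returns and cross-fund correlation 0.7 as in the question, uncorrelated across the two half-years):

```python
import math
import random

# If each annual return is the sum of two semiannual returns, uncorrelated
# across periods, with the same cross-fund covariance c in each period, then
# bilinearity gives Cov(annual_A, annual_B) = 2c.
random.seed(4)
rho, n = 0.7, 100_000

def semi_pair():
    z1, z2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    return z1, rho * z1 + math.sqrt(1.0 - rho * rho) * z2  # Cov = rho

a, b = [], []
for _ in range(n):
    s1, t1 = semi_pair()
    s2, t2 = semi_pair()  # second half-year, independent of the first
    a.append(s1 + s2)
    b.append(t1 + t2)

ma, mb = sum(a) / n, sum(b) / n
cov_annual = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
# cov_annual lands near 2*rho = 1.4; halving it recovers the semiannual covariance
```

The assumptions (no cross-period correlation, equal per-period covariance) do the real work here; if they fail, the halving rule fails with them.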
null
CC BY-SA 3.0
null
2011-05-16T15:59:16.300
2011-05-16T15:59:16.300
null
null
1146
null
10865
2
null
10774
10
null
1) You can model spatial correlation with the `nlme` library; there are several possible models you might choose. See pages 260-266 of Pinheiro/Bates. A good first step is to make a variogram to see how the correlation depends on distance. ``` library(nlme) m0 <- gls(response ~ level, data = layout) plot(Variogram(m0, form=~x+y)) ``` Here the sample semivariogram increases with distance indicating that the observations are indeed spatially correlated. One option for the correlation structure is a spherical structure; that could be modeled in the following way. ``` m1 <- update(m0, corr=corSpher(c(15, 0.25), form=~x+y, nugget=TRUE)) ``` This model does seem to fit better than the model with no correlation structure, though it's entirely possible it too could be improved on with one of the other possible correlation structures. ``` > anova(m0, m1) Model df AIC BIC logLik Test L.Ratio p-value m0 1 3 46.5297 49.80283 -20.26485 m1 2 5 43.3244 48.77961 -16.66220 1 vs 2 7.205301 0.0273 ``` 2) You could also try including `x` and `y` directly in the model; this could be appropriate if the pattern of correlation depends on more than just distance. In your case (looking at sesqu's pictures) it seems that for this block anyway, you may have a diagonal pattern. Here I'm updating the original model instead of m0 because I'm only changing the fixed effects, so the models should both be fit using maximum likelihood. ``` > model2 <- update(model, .~.+x*y) > anova(model, model2) Analysis of Variance Table Model 1: response ~ level Model 2: response ~ level + x + y + x:y Res.Df RSS Df Sum of Sq F Pr(>F) 1 22 5.3809 2 19 2.7268 3 2.6541 6.1646 0.004168 ** ``` To compare all three models, you'd need to fit them all with `gls` and the maximum likelihood method instead of the default method of REML. 
``` > m0b <- update(m0, method="ML") > m1b <- update(m1, method="ML") > m2b <- update(m0b, .~x*y) > anova(m0b, m1b, m2b, test=FALSE) Model df AIC BIC logLik m0b 1 3 38.22422 41.75838 -16.112112 m1b 2 5 35.88922 41.77949 -12.944610 m2b 3 5 29.09821 34.98847 -9.549103 ``` Remember that especially with your knowledge of the study, you might be able to come up with a model that is better than any of these. That is, model `m2b` shouldn't necessarily be considered to be the best yet. Note: These calculations were performed after changing the x-value of plot 37 to 0.
null
CC BY-SA 3.0
null
2011-05-16T16:09:29.857
2011-05-16T19:56:57.893
2011-05-16T19:56:57.893
3601
3601
null
10867
1
null
null
4
1342
I have 3 experiments, where some quantity was measured 3 times. Thus, 3 biological replicates, 3 technical replicates within each biological replicate, 9 measurements in total. I need to answer the following questions: - For each biological replicate: are my 3 measurements consistent? To do this, I'm using Dixon's Q test. - For the average measurements derived from each biological replicate: are these 3 averages consistent? So, how should I go about the second question? Is it a good idea to use Dixon's Q test here too? What other tests can I use to address the above questions? I guess ANOVA is not suitable because the samples are too small, right?
Tests for consistent measurements and outliers
CC BY-SA 3.0
null
2011-05-16T17:36:32.373
2011-05-16T20:12:36.277
null
null
4337
[ "repeated-measures", "outliers", "reproducible-research" ]
10868
2
null
10858
0
null
Perhaps a robust estimator like [RANSAC](http://en.wikipedia.org/wiki/RANSAC) could be used here.
null
CC BY-SA 3.0
null
2011-05-16T18:31:08.690
2011-05-16T18:31:08.690
null
null
4360
null
10869
2
null
10858
0
null
Simplistic approaches, as suggested here, often fail due to their lack of generality. In general, a series may have multiple trends and/or multiple levels, so to detect anomalies one has to "control" for these effects. Additionally, there may be a seasonal effect that started only in the last k periods and is not present in the first n-k values. Now let's get to the meat of the problem. Assume that there are no mean shifts, no trend changes and no seasonal pulse structure in the data. The data may still be autocorrelated, causing the simple standard deviation to be over- or under-estimated depending upon the nature of the autocorrelation. The possible existence of pulses, seasonal pulses, level shifts and/or local time trends obfuscates the identification of the "exceptions". Using a "bad standard deviation" to try to identify anomalies is flawed because it is an out-of-model test, as compared to an "in-model test", which is ultimately what is used to conclude about the statistical significance of the anomalies. You might Google "how to do statistical intervention detection" to help you find sources/software to do this. Hope this helps.
null
CC BY-SA 3.0
null
2011-05-16T18:58:16.157
2011-05-16T20:39:41.933
2011-05-16T20:39:41.933
3382
3382
null
10870
2
null
10858
5
null
You could consider using a [trimmed mean](http://en.wikipedia.org/wiki/Truncated_mean). This would involve discarding, say, the highest 10% of values and the lowest 10% of values, regardless of whether you consider them to be bad.
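In Python, for example, this is a one-liner with SciPy (a sketch; the 10% cut and the data below are just illustrative):

```python
import numpy as np
from scipy import stats

values = np.array([1.0, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 100.0])

# Discard the top and bottom 10% of the sorted values, then average the rest;
# the extreme 100.0 is dropped automatically, no outlier judgement required.
robust_mean = stats.trim_mean(values, proportiontocut=0.1)  # 3.75
plain_mean = values.mean()                                  # 13.1
```

Note how a single wild value drags the plain mean far from the bulk of the data while leaving the trimmed mean untouched.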
null
CC BY-SA 3.0
null
2011-05-16T19:10:27.770
2011-05-16T19:10:27.770
null
null
3835
null
10871
1
10879
null
10
394
I am interested to know what level of statistics kids are learning in different countries around the world. Could you please suggest data/links that shed light on what is happening in this regard? I'll start. Israel: The students who are taking advanced math study, more or less: mean, sd, histogram, normal distribution, very basic probability.
Children's statistical education in different countries?
CC BY-SA 3.0
null
2011-05-16T19:35:27.120
2017-02-13T16:29:58.350
2017-02-13T16:29:58.350
22468
253
[ "dataset", "teaching" ]
10872
2
null
10867
1
null
It seems highly unlikely that there would be a test that, based upon three observations, decides whether one of them is an outlier! The fact that 'the other two' are closer together could just as well be the anomaly. At best, you could use Dixon's Q test to find out whether the largest/smallest value of your 9 observations is an outlier. Note that even the almighty [Wikipedia](http://en.wikipedia.org/wiki/Dixon%27s_Q_test) advises using it only once within a dataset. Either way, the terminology of '3 measurements being consistent' is confusing: in statistics, consistency normally refers to the large-sample behaviour of estimators...
null
CC BY-SA 3.0
null
2011-05-16T20:12:36.277
2011-05-16T20:12:36.277
null
null
4257
null
10873
2
null
10795
40
null
There are basically two things to be said. The first is that if you look at the density for the multivariate normal distribution (with mean 0 here) it is proportional to $$\exp\left(-\frac{1}{2}x^T P x\right)$$ where $P = \Sigma^{-1}$ is the inverse of the covariance matrix, also called the precision. This matrix is positive definite and defines via $$(x,y) \mapsto x^T P y$$ an inner product on $\mathbb{R}^p$. The resulting geometry, which gives specific meaning to the concept of orthogonality and defines a norm related to the normal distribution, is important, and to understand, for instance, the geometric content of [LDA](http://en.wikipedia.org/wiki/Linear_discriminant_analysis) you need to view things in the light of the geometry given by $P$. The other thing to be said is that the partial correlations can be read off directly from $P$, see [here](http://en.wikipedia.org/wiki/Partial_correlation#Using_matrix_inversion). The same Wikipedia page gives that the partial correlations, and thus the entries of $P$, have a geometrical interpretation in terms of the cosine of an angle. What is, perhaps, more important in the context of partial correlations is that the partial correlation between $X_i$ and $X_j$ is 0 if and only if entry $i,j$ in $P$ is zero. For the normal distribution the variables $X_i$ and $X_j$ are then conditionally independent given all other variables. This is what Steffen's book, that I referred to in the comment above, is all about. Conditional independence and graphical models. It has a fairly complete treatment of the normal distribution, but it may not be that easy to follow.
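The relationship between the entries of $P$ and the partial correlations is easy to check numerically; a small sketch in Python (the covariance matrix below is just an illustrative example):

```python
import numpy as np

# An illustrative positive-definite covariance matrix
Sigma = np.array([[2.0, 0.8, 0.5],
                  [0.8, 1.5, 0.3],
                  [0.5, 0.3, 1.0]])

P = np.linalg.inv(Sigma)  # the precision matrix

def partial_corr(P, i, j):
    # Partial correlation of X_i and X_j given the remaining variables,
    # read off directly from the precision matrix
    return -P[i, j] / np.sqrt(P[i, i] * P[j, j])

rho_01 = partial_corr(P, 0, 1)
```

In particular, a zero entry of $P$ gives a zero partial correlation, which for the normal distribution means conditional independence.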
null
CC BY-SA 3.0
null
2011-05-16T20:24:14.100
2011-05-16T20:24:14.100
null
null
4376
null
10874
1
null
null
8
472
What are important/notable publishing houses for books in statistics? When I come across a book published by O'Reilly or Springer, I imagine its quality will be high. What other notable publishing houses are out there (for statistics books)? Any recommendation on a way to find out? (I'd imagine we could check if we had a database dump of Amazon, but I don't imagine something like that is available anywhere...)
High quality publishing house for books in the field of statistics
CC BY-SA 3.0
null
2011-05-16T20:41:15.513
2011-10-30T03:14:35.967
2011-10-30T03:14:35.967
5256
253
[ "references" ]
10875
2
null
10871
5
null
Good question. For my answer, I'll talk about Ireland. In the Senior Cycle (16-18 years), students study very basic statistics: mean, histograms, standard deviation. Basic probability is covered (completely separately). Calculus goes up to the level of integration by parts. Matrices (only 2×2) are an option on the Higher level paper, as is more statistics. That being said, fewer than 20% of the school population take the higher course, so the other 80% do basic statistics, some differentiation and very basic probability.
null
CC BY-SA 3.0
null
2011-05-16T20:51:58.830
2011-05-16T20:51:58.830
null
null
656
null
10879
2
null
10871
9
null
Statistics education in the US is in flux, in no small part because we now expect even grade school students (ages 5-12) to become proficient not only with fundamental concepts of statistical thinking, but also with techniques of data summary and presentation that many of their teachers do not even know! For an authoritative overview of efforts being made at both the K-12 and college levels, see the [GAISE reports](http://www.amstat.org/education/gaise/) on the ASA Website. At a high level, these documents expect that all students graduating from U.S. high schools (age 18) will: > formulate questions that can be addressed with data and collect, organize, and display relevant data to answer them; select and use appropriate statistical methods to analyze data; develop and evaluate inferences and predictions that are based on data; and understand and apply basic concepts of probability. Notable, in my opinion, is an insistence that by virtue of "variability in data," there is an important "difference between statistics and mathematics." The aim is to "develop statistical thinking" in students as opposed to teaching techniques or algorithms alone. For a college level approach, a good resource is [CAUSEweb](http://www.causeweb.org/) (Consortium for the Advancement of Undergraduate Statistics Education).
null
CC BY-SA 3.0
null
2011-05-16T22:32:27.220
2011-05-16T22:32:27.220
null
null
919
null
10880
1
null
null
3
155
I hope this is a stats question :) There's a game (RftG) with cards, each of which has various attributes (generally bonuses of some kind) and a price. I'm wondering if there's some kind of technique for determining the average value of each attribute, and therefore calculating the expected cost of each card as a whole, and therefore finding "undervalued" and "overpriced" cards. Now, say that each attribute is a letter (A, B, C...); an obvious and simple technique would be to look for minimal pairs: ABC: $5, ABCD: $7 (so D is worth $2); E: $1, ED: $2 (so D is worth $1). Therefore D has an average value of $1.50. But that seems far too simple and limited. For starters, not every attribute will turn up in a minimal pair. Can anyone point me in the right direction of the kind of technique that would work here? I'm afraid I don't have a stats background. Even some relevant stats terms would help. If interested, there's a card list here: [http://boardgamegeek.com/file/download/4wl2eqaevk/RftG_%2B_exp1_%2B_exp2_%2B_Card_Reference_v1.0.xls](http://boardgamegeek.com/file/download/4wl2eqaevk/RftG_%2B_exp1_%2B_exp2_%2B_Card_Reference_v1.0.xls)
Calculating the value of attributes of cards in a card game
CC BY-SA 3.0
null
2011-05-16T23:24:57.510
2011-06-16T01:59:02.513
null
null
4636
[ "bayesian", "games", "nonparametric", "multiple-comparisons" ]
10881
2
null
10880
1
null
Regression analysis could be used for this sort of thing, but it wouldn't be perfect. For one thing, regression analysis is meant for random data, and the prices fixed to playing cards are definitely not random. There's not really a standard method here. One reason is that certain abilities become more valuable in different contexts, and a combination of abilities can be more powerful than the same two abilities on separate cards.
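To make the regression idea concrete, here is a sketch in Python using the minimal-pair example from the question (the attribute letters are the question's hypothetical ones, and the purely additive model is a deliberate simplification):

```python
import numpy as np

# One row per card, one column per attribute (A, B, C, D, E);
# cards: ABC = $5, ABCD = $7, E = $1, ED = $2
X = np.array([[1, 1, 1, 0, 0],
              [1, 1, 1, 1, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 1]], dtype=float)
y = np.array([5.0, 7.0, 1.0, 2.0])

# Least-squares estimate of a per-attribute dollar value (no interactions)
v, *_ = np.linalg.lstsq(X, y, rcond=None)

predicted = X @ v
residuals = y - predicted
# positive residual: the card costs more than its attributes predict (overpriced);
# negative residual: it costs less (undervalued)
```

Here the fitted value of D comes out at $1.50, exactly the average of the two conflicting minimal-pair estimates ($2 and $1), and the residuals show which cards are over/under priced relative to the additive model.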
null
CC BY-SA 3.0
null
2011-05-17T00:25:11.637
2011-05-17T01:03:57.033
2011-05-17T01:03:57.033
4637
4637
null
10882
1
null
null
3
856
I'm not a statistician/mathematician, so please be gentle with me. This is a cross-post [from Stack Overflow](https://stackoverflow.com/questions/6021170/formula-to-discard-items-by-votes-lower-bound-of-wilson-score-confidence-interva). I'm working with a site that lets users create 'sections'. Each section has multiple items which are voted on. The best ones are shown on the main page. I found that a simplistic approach to ranking would be a bad idea. `positive_votes - negative_votes` will rank the recent ones with fewer votes (100%, two votes) higher than the older ones that have way more votes but a lower percentage (93%, 300 votes). The average isn't a solution either. I found [one article](http://www.evanmiller.org/how-not-to-sort-by-average-rating.html) that explains these concepts, why they are a bad idea and how to fix it. So I'm using the lower bound of the Wilson score confidence interval for a Bernoulli parameter and it seems to be working just fine. However, I'd like to discard items that rank far too badly in that particular section. I think I require two things: - The minimum votes required to discard an item in each section - The score threshold that decides whether an item will be discarded or not It has to consider that while one section might have hundreds of items with thousands of votes, another might have fewer than 10 items with 50 or 60 votes; so while the minimum votes required for the popular one might be 100, it might be too high for the less popular ones. --- In the original question, [somebody suggested](https://stackoverflow.com/questions/6021170/formula-to-discard-items-by-votes-lower-bound-of-wilson-score-confidence-interva/6021687#6021687) using the same formula. However, it seems that the Ruby implementation is missing some parts: the `alpha/2` is not present anywhere. Also, the original formula has a +/- sign, while the implementation has just a minus sign. Thanks!
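For reference, here is my current understanding of the formula in Python (my own sketch, not the linked Ruby code): the $z$ below is $z_{\alpha/2}$ (1.96 for a 95% interval), which is where the missing `alpha/2` went, and the lower bound is the branch of the $\pm$ with the minus sign:

```python
from math import sqrt

def wilson_lower(pos, n, z=1.96):
    """Lower bound of the Wilson score interval for pos successes out of n."""
    if n == 0:
        return 0.0
    phat = pos / n
    denom = 1 + z * z / n
    centre = phat + z * z / (2 * n)
    spread = z * sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
    return (centre - spread) / denom  # '+' here would give the upper bound

low_few = wilson_lower(2, 2)       # two votes, 100% positive: ~0.342
low_many = wilson_lower(240, 300)  # 300 votes, 80% positive: ~0.751
```

So the older, heavily-voted item does outrank the new 100% one, which is the behaviour I want.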
Formula to discard items by votes (Lower bound of Wilson score confidence interval)
CC BY-SA 3.0
null
2011-05-17T01:32:50.910
2011-05-20T17:39:28.577
2017-05-23T12:39:26.143
-1
4638
[ "confidence-interval" ]
10883
2
null
10394
-3
null
I guess Gaussian efficiency is something related to computational cost. The efficiency of Gaussian adaptation relies on the theory of information due to Claude E. Shannon. When an event occurs with probability P, the information −log(P) may be achieved. For instance, if the mean fitness is P, the information gained for each individual selected for survival will be −log(P) on average, and the work/time needed to get the information is proportional to 1/P. Thus, if efficiency, E, is defined as information divided by the work/time needed to get it, we have: E = −P log(P). This function attains its maximum when P = 1/e ≈ 0.37. The same result has been obtained by Gaines with a different method. I may simply conclude that the higher the Gaussian efficiency is, the fewer resources (RAM) are needed for computing something like a robust scale estimator of a large sample. Since CPUs are much faster than the rest of the computer, we prefer to run a trial/error algorithm several times rather than doing it at once with, say, 128 GB of RAM. When the Gaussian efficiency is high, the job will be done in a shorter time.
null
CC BY-SA 3.0
null
2011-05-17T01:53:56.790
2011-05-17T01:53:56.790
null
null
4286
null
10884
1
null
null
8
34906
I received this elementary question by email: > In a regression equation am I correct in thinking that if the beta value is positive the dependent variable has increased in response to greater use of the independent variable, and if negative the dependent variable has decreased in response to an increase in the independent variable - similar to the way you read correlations?
Interpretation of positive and negative beta weights in regression equation
CC BY-SA 3.0
null
2011-05-17T05:26:40.100
2021-06-24T05:16:22.860
null
null
183
[ "regression", "causality" ]
10885
2
null
10884
7
null
In explaining the meaning of a regression coefficient, I have found the following explanation very useful. Suppose we have the regression $$Y=a+bX$$ Say $X$ changes by $\Delta X$ and $Y$ changes by $\Delta Y$. Since we have a linear relationship, $$Y+\Delta Y= a+ b(X+\Delta X)$$ Since $Y=a+bX$ we get that $$\Delta Y = b \Delta X.$$ Having this, it is easy to see that if $b$ is positive, then a positive change in $X$ will result in a positive change in $Y$. If $b$ is negative, then a positive change in $X$ will result in a negative change in $Y$. Note: I treated this question as a pedagogical one, i.e. I provide a simple explanation. Note 2: As pointed out by @whuber, this explanation carries the important assumption that the relationship holds for all possible values of $X$ and $Y$. In reality this is a very restrictive assumption; on the other hand, the explanation is valid for small values of $\Delta X$, since Taylor's theorem says that relationships which can be expressed as differentiable functions (and this is a reasonable assumption to make) are locally linear.
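A tiny numerical illustration of the same point (a sketch; the data are made up to be exactly linear):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 3.0 - 2.0 * x  # a = 3, b = -2

b, a = np.polyfit(x, y, 1)  # returns slope first, then intercept

# A one-unit increase in x changes the prediction by exactly b
dy = (a + b * 2.0) - (a + b * 1.0)
```

With $b = -2$, each unit increase in $X$ lowers the predicted $Y$ by 2, matching $\Delta Y = b\,\Delta X$.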
null
CC BY-SA 3.0
null
2011-05-17T05:36:19.520
2011-05-17T06:03:25.247
2011-05-17T06:03:25.247
2116
2116
null
10886
1
null
null
5
1662
I'm wondering how to proceed to perform Canonical Correspondence Analysis and Multiple Correspondence Analysis in R on the following three-way contingency table ``` Eco_region3 Eco_region4 Species Freq A A1 S1 10 A A1 S2 12 A A1 S3 8 A A2 S1 10 A A2 S2 6 A A2 S3 11 A A3 S1 2 A A3 S2 9 A A3 S3 13 B B1 S1 13 B B1 S2 15 B B1 S3 7 B B2 S1 9 B B2 S2 8 B B2 S3 13 B B3 S1 15 B B3 S2 12 B B3 S3 13 C C1 S1 12 C C1 S2 18 C C1 S3 20 C C2 S1 12 C C2 S2 0 C C2 S3 11 C C3 S1 18 C C3 S2 10 C C3 S3 16 ``` Eco_region4 is nested within Eco_region3. Thanks
Correspondence analysis for a three-way contingency table
CC BY-SA 3.0
null
2011-05-17T06:24:48.690
2013-11-16T07:49:56.723
2011-05-17T17:10:47.323
3903
3903
[ "r", "contingency-tables", "correspondence-analysis" ]
10887
2
null
10884
7
null
As @gung notes, there are varying conventions regarding the meaning of ($\beta$, i.e., "beta"). In the broader statistical literature, beta is often used to represent unstandardised coefficients. However, in psychology (and perhaps other areas), there is often a distinction between b for unstandardised and beta for standardised coefficients. This answer assumes that the context indicates that beta is representing standardised coefficients: - Beta weights: As @whuber mentioned, "beta weights" are by convention standardised regression coefficients (see wikipedia on standardised coefficient). In this context, $b$ is often used for unstandardised coefficients and $\beta$ is often used for standardised coefficients. - Basic interpretation: A beta weight for a given predictor variable is the predicted difference in the outcome variable in standard units for a one standard deviation increase on the given predictor variable holding all other predictors constant. - General resource on multiple regression: The question is elementary and implies that you should read some general material on multiple regression (here is an elementary description by Andy Field). - Causality: Be careful of language like "the dependent variable has increased in response to greater use of the independent variable". Such language has causal connotations. Beta weights by themselves are not enough to justify a causal interpretation. You would require additional evidence to justify a causal interpretation.
null
CC BY-SA 3.0
null
2011-05-17T06:33:30.290
2013-07-07T04:23:19.687
2013-07-07T04:23:19.687
183
183
null
10889
1
null
null
3
59
Say I want to classify my data into two categories. I am pretty sure that my data has been generated by two mixtures of Gaussians -- one has a bimodal and one a trimodal form. I then train the generating models $p(x|c_1)$ and $p(x|c_2)$ and combine them via Bayes' theorem into a classifier of the form $p(c_i|x)$. Since both models are obtained by an optimization process, I don't have any guarantee that they fit their classes equally well. Actually, it might be that one of the models assigns too much probability mass to points from the other class, solely because I let the optimization process run too long, chose the wrong model, or whatever. As you can see, I chose the mixture-of-Gaussians example only for the sake of the optimization. It might as well be hidden Markov models or even wrong modelling assumptions. Does this have a name? Are there principled ways to overcome this?
Generative modelling: what if the generating models have very different "quality of fit"
CC BY-SA 3.0
null
2011-05-17T08:42:53.303
2011-05-17T08:42:53.303
null
null
2860
[ "classification", "generative-models" ]
10890
1
11133
null
14
33811
What is the difference between [endogeneity](http://en.wikipedia.org/wiki/Endogeneity_%28economics%29) and unobserved heterogeneity? I know that endogeneity arises, for example, from omitted variables. But as far as I understand, unobserved heterogeneity causes the same problem. So where exactly does the difference between these two notions lie?
Endogeneity versus unobserved heterogeneity
CC BY-SA 3.0
null
2011-05-17T09:38:43.160
2018-04-22T17:18:49.153
2011-05-17T10:16:50.513
2116
4496
[ "regression", "assumptions" ]
10891
2
null
10271
-1
null
After a friend of mine pointed me in the direction of clustering algorithms, I stumbled across [DBSCAN](http://en.wikipedia.org/wiki/DBSCAN), which builds clusters in n-dimensional space according to two predefined parameters. The basic idea is density-based clustering, i.e. dense regions form clusters. Outliers are returned separately by the algorithm. So, when applied to my 1-dimensional histogram, DBSCAN is able to tell me whether my anomaly scores feature any outliers. Note: in DBSCAN, an outlier is just a point which does not belong to any cluster. During normal operations, I expect the algorithm to yield only a single cluster (and no outliers). After some experimenting, I found that the parameter $\epsilon \approx 0.1$ works well. This means that points have to exhibit a distance of at least 0.1 from the "normal" cluster in order to be seen as outliers. After being able to identify outliers, finding the threshold boils down to simple rules such as: - If the set exhibits outliers, set the threshold between the "normal" and "outlier" cluster so that the margin to both is maximized. - If the set does not exhibit any outliers, set the threshold one standard deviation away from the rightmost point. Anyway, thanks for all the helpful replies!
null
CC BY-SA 3.0
null
2011-05-17T14:09:26.937
2011-05-18T08:32:24.623
2011-05-18T08:32:24.623
4446
4446
null
10892
1
null
null
7
1110
I'm trying to run R's multidimensional scaling algorithm, `cmdscale`, on roughly 2,200 variables, i.e. a 2,200x2,200 distance matrix. It's taking forever (about a day so far). I don't know much about the algorithm used under the hood. What's its big-O notation? Is a more efficient, even if only approximate, algorithm available that's easy to set up and use?
Big-O Scaling of R's cmdscale()
CC BY-SA 3.0
null
2011-05-17T14:16:14.383
2011-05-17T18:24:11.260
null
null
1347
[ "r", "algorithms", "multidimensional-scaling" ]
10893
1
null
null
2
329
I'm developing an application in which users can create 'sections' (à la subreddit in reddit), in which items/posts can be created and voted on with a thumbs-up/down system. [A great article](http://www.evanmiller.org/how-not-to-sort-by-average-rating.html) guided me on how to sort these votes so that an item with a 100% positive response but few votes won't get ranked over one with hundreds of votes and an acceptance of 80%. The article describes it pretty well. However, I'd like to discard the lowest-ranked items, and this is where it gets tricky: - How could I know the minimum number of votes needed in order to discard an item? - What is the score threshold required to discard the item? As I said, there are sections, and each one has items (which are the ones voted on). The formula has to take into consideration the fact that one section may have 100 items with thousands of votes and another might have 3 or 4 items with 20 votes, so a minimum of 40 votes required might be optimal for the first case but totally out of bounds for the second. (I was tempted to post this to MathOverflow, but I'm not really sure, since this also involves some programming) Thanks!
Formula to discard items by votes (Lower bound of Wilson score confidence interval)
CC BY-SA 3.0
0
2011-05-16T17:39:58.170
2011-06-02T22:10:16.550
2011-06-02T22:10:16.550
null
4638
[ "confidence-interval" ]
10894
2
null
10882
4
null
Perhaps you could extend the idea of using the lower bound of the confidence interval for sorting: you could throw away items that have a low upper bound. The items with only a few votes will have pretty high upper bounds; the lowest upper bounds will correspond to the lowest "quality" items.
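As a sketch, assuming the same 95% Wilson interval used for the sorting (the 0.3 cutoff below is arbitrary and would need tuning per section):

```python
from math import sqrt

def wilson_upper(pos, n, z=1.96):
    """Upper bound of the Wilson score interval for pos successes out of n."""
    if n == 0:
        return 1.0
    phat = pos / n
    denom = 1 + z * z / n
    centre = phat + z * z / (2 * n)
    spread = z * sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
    return (centre + spread) / denom

def discard(pos, n, threshold=0.3):
    # Few votes -> wide interval -> high upper bound -> the item survives;
    # only items that are demonstrably bad fall below the threshold.
    return wilson_upper(pos, n) < threshold

keep_new = discard(0, 3)     # False: too few votes to condemn the item
drop_bad = discard(5, 100)   # True: 5% positive over 100 votes
```

This sidesteps the "minimum number of votes" question entirely: the interval width already encodes how much evidence there is.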
null
CC BY-SA 3.0
null
2011-05-16T18:27:42.933
2011-05-16T18:27:42.933
null
null
279
null
10897
1
null
null
6
398
I would like to compute the probability distribution for the length of the fragments which I would obtain by fragmenting a linear rod of length $L$ in the following way: - I choose at random (uniformly) $n$ breakpoints - I cut the rod at those breakpoints, creating $(n+1)$ fragments. Now, while it is easy to see that the probability that a stretch of length $x$ does not contain any breakpoint goes like a negative exponential, I don't know how to throw in the information about the length of the rod.
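Here is a quick Monte Carlo sketch (Python) that I can check candidate answers against; my conjecture is that each fragment length has survival function $(1 - x/L)^n$, so the mean fragment length is $L/(n+1)$:

```python
import numpy as np

rng = np.random.default_rng(0)
L, n, trials = 1.0, 4, 20000

cuts = rng.uniform(0.0, L, size=(trials, n))
cuts.sort(axis=1)

# Fragment lengths: gaps between consecutive breakpoints, end pieces included
edges = np.concatenate(
    [np.zeros((trials, 1)), cuts, np.full((trials, 1), L)], axis=1)
frags = np.diff(edges, axis=1)  # shape (trials, n + 1)

mean_frag = frags.mean()          # should be close to L / (n + 1) = 0.2
emp = (frags[:, 0] > 0.2).mean()  # empirical P(first fragment > 0.2)
theo = (1 - 0.2 / L) ** n         # conjectured value, 0.8^4 = 0.4096
```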
Probability distribution of fragment lengths
CC BY-SA 3.0
null
2011-05-17T16:06:19.143
2011-05-18T16:41:17.197
2011-05-17T16:38:47.773
2970
4642
[ "probability" ]
10899
2
null
10897
0
null
Let $\{X_i\}$ be the locations of the cuts. I'd approach this problem by finding the order statistics $\{Y_i\}$ so that $Y_1$ would be the location of the leftmost cut. Then I'd calculate the probability distributions of the differences between the variables $Y_i-Y_{i-1}$. Don't forget to also calculate $Y_1-0$ and $L-Y_n$. Can anyone think of a better way?
null
CC BY-SA 3.0
null
2011-05-17T16:17:19.443
2011-05-17T16:17:19.443
null
null
4637
null
10900
1
null
null
4
3195
I have to generate random numbers for my algorithm based on probability distributions. I want a distribution which has heavy tails and is unskewed, and which can produce numbers far away from the location parameter. There should be a parameter to control the tail heaviness (e.g., as in the Lévy distribution, where $\alpha$ determines tail heaviness). I have identified the t-distribution (for smaller degrees of freedom) and the Laplace distribution as two possibilities. - Are there any reasons to prefer t or Laplace for my purpose? - Apart from the t-distribution and the Laplace distribution, is there any distribution other than the Cauchy or Lévy that would be useful for my purpose?
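For context, here is how I am currently comparing candidates (a Python/SciPy sketch); the extreme-quantile comparison is what I mean by tail heaviness, and the t's `df` is the knob I would like the chosen family to have:

```python
from scipy import stats

# Student's t: df controls tail heaviness
# (small df -> very heavy tails, df -> infinity recovers the normal)
q_t2 = stats.t.ppf(0.999, df=2)    # ~22.3
q_t30 = stats.t.ppf(0.999, df=30)  # ~3.4
q_lap = stats.laplace.ppf(0.999)   # ~6.21, exponential tails, no such knob
q_norm = stats.norm.ppf(0.999)     # ~3.09

# Drawing samples is equally direct:
sample = stats.t.rvs(df=2, size=1000, random_state=42)
```

So the t seems preferable to the Laplace purely because of the tunable `df`, but I'd like to know whether there are other reasons.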
Long tailed distributions for generating random numbers with parameters to control tail heaviness
CC BY-SA 3.0
null
2011-05-17T17:13:30.487
2012-07-13T07:12:05.503
2011-06-08T16:11:41.350
183
4319
[ "distributions", "randomness" ]
10901
2
null
10892
4
null
You should be able to test this on your system via Monte Carlo. For example: ```
try.cmdscale <- function(n) {
  # cmdscale expects a symmetric distance matrix
  x <- as.matrix(dist(matrix(rnorm(n * 2), ncol = 2)))
  system.time(cmdscale(x))
}

# repeat multiple times and take the median of user.self + sys.self
multi.cmdscale <- function(n, ntrial = 11) {
  all.times <- replicate(ntrial, try.cmdscale(n))
  median(all.times[1, ] + all.times[2, ])
}

nvals <- c(8, 16, 32, 64, 128, 256, 512, 1024)
tvals <- sapply(nvals, multi.cmdscale)

# log = "xy" gives a log-log plot, on which a power law is a straight line
plot(nvals, tvals, log = "xy")
``` Fitting a straight line to the log-log points, e.g. with `lm(log(tvals) ~ log(nvals))`, gives a slope that estimates the exponent in the big-O. (Not so proficient in R yet, but learning.) ![scaling of time versus sample size](https://i.stack.imgur.com/UbBtf.jpg)
null
CC BY-SA 3.0
null
2011-05-17T17:33:50.177
2011-05-17T17:33:50.177
null
null
795
null
10902
2
null
10855
4
null
If we assume an underlying success probability $\theta$ for a specific training set and number of points used for image analysis, we can expect a binomial distribution for the number of successful object recognitions $k$ out of $n$ runs: $P(k|\theta,n)=\left(n\atop k\right)\theta^k (1-\theta)^{n-k}$ What we're actually interested in, is the value and uncertainty of $\theta$ for given $n,k$. We get a probability distribution for $\theta$ by applying [Bayes Theorem](http://en.wikipedia.org/wiki/Bayes%27_theorem): $p(\theta|k,n) \propto p(k|\theta,n)*p(\theta|n)$ The first term on the right side is the likelihood, for which we can use the formula from above. The second term is the [prior distribution](http://en.wikipedia.org/wiki/Prior_probability) for $\theta$. If we want to be neutral about $\theta$ before any experiment is done we can choose the [Jeffreys prior](http://en.wikipedia.org/wiki/Jeffreys_prior) $p(\theta|n) \propto \theta^{-1/2}(1-\theta)^{-1/2}.$ Putting our likelihood and prior distributions together and normalizing the resulting distribution we end up with $p(\theta|k,n) = \frac{\Gamma(n+1)}{\Gamma(k+\frac{1}{2})\Gamma(n-k+\frac{1}{2})} \theta^{k-1/2}(1-\theta)^{n-k-1/2},$ which is the [Beta distribution](http://en.wikipedia.org/wiki/Beta_distribution) with parameters $k+\frac{1}{2}$ and $n-k+\frac{1}{2}$. If we e.g. have $n=5$ runs and $k\in\{1,3\}$ successes, the distribution for probability of different underlying $\theta$ values looks like this: ![true theta pdf](https://i.stack.imgur.com/cCy6R.png) From this probability distribution we can get all the information we need about $\theta$ in order to decide to do further runs or not, e.g. we can compute [credible intervals](http://en.wikipedia.org/wiki/Credible_interval) for the true value of $\theta$. 
By computing expected values for $\theta$ and $\theta^2$ we can get formulas for the mean $\theta$ and the variance of $\theta$: $E\left[\theta\right]=\int_{\theta=0}^1 \theta \; p(\theta|k,n) = \frac{2k+1}{2n+2}$ $E\left[\theta^2\right]=\int_{\theta=0}^1 \theta^2 p(\theta|k,n) = \frac{(2k+1)(2k+3)}{4(n+1)(n+2)}$ $V\left[\theta\right]=E\left[\theta^2\right] - E\left[\theta\right]^2=\frac{4k(n-k)+2n+1}{4(n+1)^2(n+2)}=\frac{(k+\frac{1}{2})(n-k+\frac{1}{2})}{(n+1)^2(n+2)}$ Now we could run experiments and compute the mean and variance of the expected $\theta$ and stop when the variance is smaller than a predefined value that reflects our desired certainty for $\theta$. For example, let's say we want the standard deviation of our $\theta$-distribution be smaller than $\sigma$, then we should do runs until the condition $\frac{(k+\frac{1}{2})(n-k+\frac{1}{2})}{(n+1)^2(n+2)} < \sigma^2$ is satisfied, with a resulting estimate for ${\tilde \theta}=\frac{2k+1}{2n+2}$. If e.g. $\sigma=0.05$ we have to do runs until we end up in the orange region in this plot: ![](https://i.stack.imgur.com/lrL88.png) We can see that we need a lot more runs to be as certain about $\theta$ in case of $\theta \approx 0.5$ (about a hundred in this example) than if we have a $\theta$ near zero or one (where perhaps 30 runs could be sufficient). A nice introductory textbook which contains examples like this is [Data Analysis: A Bayesian Tutorial](http://amzn.com/0198568320). It also contains a small chapter on experiment design.
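The closed forms above can be cross-checked against SciPy's Beta distribution; a small sketch for the $n=5$, $k=3$ case from the plot:

```python
from scipy import stats

n, k = 5, 3
post = stats.beta(k + 0.5, n - k + 0.5)  # Jeffreys posterior Beta(k+1/2, n-k+1/2)

mean_formula = (2 * k + 1) / (2 * n + 2)
var_formula = (k + 0.5) * (n - k + 0.5) / ((n + 1) ** 2 * (n + 2))

# The stopping rule: keep collecting runs while the posterior sd exceeds sigma
sigma = 0.05
keep_running = post.std() > sigma  # True here: 5 runs are nowhere near enough
```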
null
CC BY-SA 3.0
null
2011-05-17T17:50:45.363
2011-05-18T03:09:31.003
2011-05-18T03:09:31.003
4360
4360
null
10903
2
null
10892
3
null
Ironically, the MDS actually wasn't my problem. The preprocessing I was doing was the issue. I'm used to coding in lower-level languages and forgot how slow looping is in R. I rewrote the preprocessing code using vector ops and the MDS actually only takes a few seconds.
null
CC BY-SA 3.0
null
2011-05-17T18:24:11.260
2011-05-17T18:24:11.260
null
null
1347
null
10904
1
10922
null
15
1780
What are the pros and cons of learning about a distribution's properties algorithmically (via computer simulations) versus mathematically? It seems like computer simulations can be an alternative learning method, especially for those new students who do not feel strong in calculus. Also it seems that coding simulations can offer an earlier and more intuitive grasp of the concept of a distribution.
What are the pros and cons of learning about a distribution algorithmically (simulations) versus mathematically?
CC BY-SA 3.0
null
2011-05-17T18:24:27.733
2017-02-20T10:35:22.387
2017-02-20T10:35:22.387
9964
4329
[ "distributions", "algorithms", "teaching" ]
10905
1
10909
null
10
44835
I have searched for this online for hours, but none of the posts I found is what I am looking for. What I want is very easy to do in the SAS PROC MIXED procedure, but I am not sure how to do it with lme and/or lmer. Assume I have a model, $y = \mu + \alpha + \beta +\alpha\beta + e$, where $\alpha$ is fixed but $\beta$ and $\alpha\beta$ are random. My R code is ``` f1 = lme(y ~ factor(a), data = mydata, random = list(factor(b) = ~ 1, factor(a):factor(b) = ~ 1)) ``` which fails with Error: unexpected `=` in: ``` f1 = lme(y ~ factor(a), data = mydata, random = list(factor(a) = ``` Could someone please tell me how to specify these random effects in lme? Many thanks in advance.
How to specify random effects in lme?
CC BY-SA 3.0
null
2011-05-17T18:39:01.290
2011-05-17T20:15:56.673
2011-05-17T18:40:30.710
2116
4559
[ "r", "mixed-model", "cross-validation", "biostatistics" ]
10906
2
null
10905
2
null
It would help a lot if you provided a data.frame. Now it is not clear what is a grouping factor. I judge that it is $\beta$. Then in `lme` notation your model should be written as follows: ``` lme(y~a,random=~a|b, data=mydata) ```
null
CC BY-SA 3.0
null
2011-05-17T19:01:07.637
2011-05-17T19:01:07.637
null
null
2116
null
10907
1
10944
null
5
2632
I have data with standard errors, included below for clarity: ``` X Y Error in Y 0.0105574 -28.831027 0.04422 0.0070382 -27.800385 0.04225 0.0052787 -27.314088 0.04209 0.0042229 -27.054207 0.04185 0.0035191 -27.000188 0.04143 0.0030164 -26.891275 0.04108 ``` I have obtained parameters a and b of the expression y = a*x*x + b from a weighted least squares regression on this data (fit in gnuplot). The regression reported an "Asymptotic Standard Error" for each of these parameters. I believe this error was calculated from the deviations of the fitted points from the actual points ([Equation 34/35 here](http://mathworld.wolfram.com/LeastSquaresFitting.html)) and is used to assess the quality of the fit. However, this is not the error I'm interested in. I want to determine the value of my fitted function at X=0.0, with a standard error like those of my other values. The output of the regression was: ``` Final set of parameters Asymptotic Standard Error a = -19389.1 +/- 752 (3.878%) b = -26.7951 +/- 0.03915 (0.1461%) ``` So, to be quite specific: how might I determine the standard error at the point (X,Y)=(0.0, -26.7951) using my fitted function? I expect the error in this calculated point to be much larger than the errors reported in the Y column of my table above. I can see that gnuplot is not the right tool for this, as it only uses the standard errors in my input to weight the data points. What I need is to propagate the error in my original data points onto the regression line. This seems like a pretty basic exercise; sorry for my statistics ignorance. Thanks!
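For illustration, one standard way to get the uncertainty of the fitted value at a point is through the covariance matrix of the weighted least squares estimates: for a fitted value $c^T\hat\beta$ with $c = (x_0^2, 1)$, the variance is $c^T (X^T W X)^{-1} c$. Here is a sketch in Python/numpy using the question's data, assuming weights $w_i = 1/\sigma_i^2$; note that gnuplot's "asymptotic standard error" additionally scales by the reduced chi-square, so the numbers need not match its output exactly.

```python
import numpy as np

# Data from the question: x, y, and the standard error of each y.
x = np.array([0.0105574, 0.0070382, 0.0052787, 0.0042229, 0.0035191, 0.0030164])
y = np.array([-28.831027, -27.800385, -27.314088, -27.054207, -27.000188, -26.891275])
sy = np.array([0.04422, 0.04225, 0.04209, 0.04185, 0.04143, 0.04108])

# Weighted least squares for y = a*x^2 + b with weights 1/sigma_i^2.
X = np.column_stack([x**2, np.ones_like(x)])  # design matrix: columns [x^2, 1]
W = np.diag(1.0 / sy**2)
cov = np.linalg.inv(X.T @ W @ X)              # covariance matrix of (a, b)
a, b = cov @ (X.T @ W @ y)                    # WLS estimates

# Standard error of the fitted value at x0: sqrt(c @ cov @ c), c = [x0^2, 1].
x0 = 0.0
c = np.array([x0**2, 1.0])
se_pred = np.sqrt(c @ cov @ c)

print(a, b, se_pred)  # at x0 = 0 this reduces to sqrt(Var(b))
```

Because the fitted curve at X=0 is just the intercept b, the prediction error there is simply the standard error of b; at other x values the Var(a) and Cov(a, b) terms contribute as well.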
Obtaining standard error on a data point obtained from linear regression
CC BY-SA 3.0
0
2011-05-17T19:14:12.547
2017-11-12T21:25:20.757
2017-11-12T21:25:20.757
11887
4643
[ "regression", "least-squares", "error" ]
10909
2
null
10905
13
null
Try this; it's the standard way to specify a split plot. The `/` notation means that method is nested within day. ``` lme(level~method, random=~1|day/method, data=d) ```
null
CC BY-SA 3.0
null
2011-05-17T20:15:56.673
2011-05-17T20:15:56.673
null
null
3601
null
10910
1
null
null
1
21770
I have a 37-question instrument in which each item belongs to one of four scales; the items for the scales are interspersed. How can I compute a scale score for each of the four scales? I need to be able to analyse each subscale independently and then compare scale responses on an independent variable.
How do I create subscales based on sets of items in SPSS
CC BY-SA 3.0
null
2011-05-17T20:50:11.410
2015-07-07T15:15:10.360
2011-05-18T04:59:26.770
183
4644
[ "spss", "scales" ]
10912
2
null
10910
1
null
This code divides the sum of scores (or of weighted scores) by the number of valid values a given case has for that set of variables. In this example, if a case has values only for verbal and quant, then the score will be simply the sum of those divided by 2. ``` Compute SCORE = sum (.05* jobs, .15* exper, verbal, quant, intvw, college, gpa)/ nvalid (jobs, exper, verbal, quant, intvw, college, gpa). Execute. ``` The benefit of using this over simpler syntax is that cases will not be left out of the SCORE even if they are missing one or more values on the component variables.
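The same missing-value logic can be sketched outside SPSS (a hypothetical illustration in Python, with `None` standing in for SPSS missing values): sum the valid weighted components and divide by the count of valid values, so a case missing some items still gets a score from what is present.

```python
def score(values, weights=None):
    """Mimic SPSS's sum(...) / nvalid(...): average of the valid
    (non-None) weighted components, so cases missing one or more
    component variables are not dropped from the score."""
    if weights is None:
        weights = [1.0] * len(values)
    valid = [w * v for w, v in zip(weights, values) if v is not None]
    if not valid:
        return None  # no valid components at all
    return sum(valid) / len(valid)

# A case with values only for the last two components:
print(score([None, None, 55, 45]))  # (55 + 45) / 2 = 50.0
```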
null
CC BY-SA 3.0
null
2011-05-17T21:07:35.910
2011-05-17T21:07:35.910
null
null
2669
null
10913
2
null
10832
0
null
While it's not a tool per se, ASCII art is actually a fairly safe option, and not as hard as it seems at first. It's not pretty, but it gets the point across.
null
CC BY-SA 3.0
null
2011-05-17T21:13:53.457
2011-05-17T21:13:53.457
null
null
4623
null