| Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10249 | 1 | null | null | 0 | 281 | I was reading OkTrends and [came across this](http://blog.okcupid.com/index.php/page/3/):
>
In fact, 32% of successful couples
agreed on all of them—which is 3.7×
the rate of simple coincidence.
So, my question is: what is simple coincidence? How is it calculated?
I can't find a Wikipedia article about simple coincidence, and nowhere can I find a numerical definition. So, I'm confused about how the author got "3.7x" the rate of "simple" coincidence.
| What is simple coincidence? | CC BY-SA 3.0 | null | 2011-05-02T18:04:51.363 | 2013-09-04T21:17:02.627 | null | null | 4441 | [
"probability"
] |
10250 | 1 | null | null | 10 | 4083 | I have data on the percent of organic matter in lake sediments from 0 cm (i.e., the sediment-water interface) down to 9 cm for approximately 25 lakes. In each lake, 2 cores were taken from each location, so I have 2 replicate measures of organic matter percentage at each sediment depth for each lake.
I am interested in comparing how lakes differ in the relationship between percent organic matter and sediment depth (i.e., slope). In some lakes the relationship between percent organic matter and sediment depth appears linear but in other cases the relationship is more complex (see examples below).
My initial thoughts were to fit linear relationships where appropriate either to the whole curve or to a subset of the curve if it was "mainly" linear and only compare those lakes where a significant linear relationship was found. However I am unhappy with this approach in that it requires eliminating data for no other reason than they do not fit the linear model and it ignores potentially interesting information about the relationship between percent organic matter and sediment depth.
What would be a good way to summarize and compare the curves from different lakes?
Thank you
Example curves: In all cases the y-axis is the percent organic matter in the sediment and the x-axis is the sediment depth, where 0 = the sediment-water interface.
A nice linear example:

2 non-linear examples:


An example with no obvious relationship:

| How to summarize and compare non-linear relationships? | CC BY-SA 3.0 | null | 2011-05-02T22:05:57.513 | 2012-06-25T06:33:28.883 | null | null | 4048 | [
"regression",
"nonlinear-regression"
] |
10251 | 1 | 10256 | null | 62 | 29861 | Principal component analysis can use matrix decomposition, but that is just a tool to get there.
How would you find the principal components without the use of matrix algebra?
What is the objective function (goal), and what are the constraints?
| What is the objective function of PCA? | CC BY-SA 3.0 | null | 2011-05-02T23:10:16.580 | 2015-02-07T01:06:13.063 | 2015-02-07T01:06:13.063 | 28666 | 74 | [
"pca"
] |
10253 | 2 | null | 10250 | 2 | null | Check out [Generalized Additive Models](http://en.wikipedia.org/wiki/Generalized_additive_model), which permit fitting non-linear functions without a priori specification of the non-linear form. I'm not sure how one would go about comparing the subsequent fits however. Another similar (in that I believe they both employ cubic splines) approach is achieved by [Functional Data Analysis](http://www.psych.mcgill.ca/misc/fda/), where I understand there are methods for characterizing differences between fitted functions.
| null | CC BY-SA 3.0 | null | 2011-05-02T23:20:45.080 | 2011-05-03T02:21:12.163 | 2011-05-03T02:21:12.163 | 364 | 364 | null |
10254 | 2 | null | 10220 | 1 | null | If you know the domain of the random variable and maybe have knowledge of some other properties like the mean, variance, etc. but want to be ignorant in a fair way about all other aspects of the distribution, you can find a distribution by applying the [principle of maximum entropy](http://en.wikipedia.org/wiki/Principle_of_maximum_entropy). Or put simply
>
Given a collection of facts,
choose a model which is consistent with all the facts,
but otherwise as uniform as possible.
(Adam Berger)
Even if you don't want to derive these distributions yourself, I think it's still good to know that such a general principle exists and may be used.
For many common cases people have already done this and you can just look up the solution, depending on your domain and given statistics: [Wikipedia](http://en.wikipedia.org/wiki/Maximum_entropy_probability_distribution) lists some of them.
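To make this concrete, here is a minimal NumPy sketch (my own illustration; the support and target mean are made up) that finds the maximum-entropy distribution on a finite support subject to a mean constraint. The solution has the exponential-family form $p_i \propto \exp(\lambda x_i)$, and the multiplier $\lambda$ can be found by bisection:

```python
import numpy as np

def maxent_with_mean(support, target_mean, tol=1e-10):
    """Maximum-entropy distribution on a finite support with a given mean.

    The solution has the exponential-family form p_i ∝ exp(lam * x_i);
    the multiplier lam is found by bisection on the implied mean.
    """
    x = np.asarray(support, dtype=float)

    def mean_for(lam):
        w = np.exp(lam * x)
        return (w / w.sum()) @ x

    lo, hi = -50.0, 50.0       # mean_for is increasing in lam
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = np.exp(lam * x)
    return w / w.sum()

p = maxent_with_mean(range(6), target_mean=1.5)   # skewed toward small values
```

If the target mean equals the unconstrained mean of the support, $\lambda = 0$ and the result is the uniform distribution, exactly as the principle promises.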
| null | CC BY-SA 3.0 | null | 2011-05-03T00:58:11.377 | 2011-05-03T01:11:31.460 | 2011-05-03T01:11:31.460 | 4360 | 4360 | null |
10255 | 2 | null | 10251 | 5 | null | See NIPALS ([wiki](http://en.wikipedia.org/wiki/Non-linear_iterative_partial_least_squares)) for one algorithm which doesn't explicitly use a matrix decomposition. I suppose that's what you mean when you say that you want to avoid matrix algebra since you really can't avoid matrix algebra here :)
| null | CC BY-SA 3.0 | null | 2011-05-03T01:50:30.833 | 2011-05-03T01:50:30.833 | null | null | 26 | null |
10256 | 2 | null | 10251 | 58 | null | Without trying to give a full primer on PCA, from an optimization standpoint, the primary objective function is the [Rayleigh quotient](http://en.wikipedia.org/wiki/Rayleigh_quotient). The matrix that figures in the quotient is (some multiple of) the sample covariance matrix
$$\newcommand{\m}[1]{\mathbf{#1}}\newcommand{\x}{\m{x}}\newcommand{\S}{\m{S}}\newcommand{\u}{\m{u}}\newcommand{\reals}{\mathbb{R}}\newcommand{\Q}{\m{Q}}\newcommand{\L}{\boldsymbol{\Lambda}}
\S = \frac{1}{n} \sum_{i=1}^n \x_i \x_i^T = \m{X}^T \m{X} / n
$$
where each $\x_i$ is a vector of $p$ features and $\m{X}$ is the matrix such that the $i$th row is $\x_i^T$.
PCA seeks to solve a sequence of optimization problems. The first in the sequence is the unconstrained problem
$$
\begin{array}{ll}
\text{maximize} & \frac{\u^T \S \u}{\u^T\u} \;, \u \in \reals^p \> .
\end{array}
$$
Since $\u^T \u = \|\u\|_2^2 = \|\u\| \|\u\|$, the above unconstrained problem is equivalent to the constrained problem
$$
\begin{array}{ll}
\text{maximize} & \u^T \S \u \\
\text{subject to} & \u^T \u = 1 \>.
\end{array}
$$
Here is where the matrix algebra comes in. Since $\S$ is a symmetric positive semidefinite matrix (by construction!) it has an eigenvalue decomposition of the form
$$
\S = \Q \L \Q^T \>,
$$
where $\Q$ is an orthogonal matrix (so $\Q \Q^T = \m{I}$) and $\L$ is a diagonal matrix with nonnegative entries $\lambda_i$ such that $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_p \geq 0$.
Hence, $\u^T \S \u = \u^T \Q \L \Q^T \u = \m{w}^T \L \m{w} = \sum_{i=1}^p \lambda_i w_i^2$. Since $\u$ is constrained in the problem to have a norm of one, then so is $\m{w}$ since $\|\m{w}\|_2 = \|\Q^T \u\|_2 = \|\u\|_2 = 1$, by virtue of $\Q$ being orthogonal.
But, if we want to maximize the quantity $\sum_{i=1}^p \lambda_i w_i^2$ under the constraints that $\sum_{i=1}^p w_i^2 = 1$, then the best we can do is to set $\m{w} = \m{e}_1$, that is, $w_1 = 1$ and $w_i = 0$ for $i > 1$.
Now, backing out the corresponding $\u$, which is what we sought in the first place, we get that
$$
\u^\star = \Q \m{e}_1 = \m{q}_1
$$
where $\m{q}_1$ denotes the first column of $\Q$, i.e., the eigenvector corresponding to the largest eigenvalue of $\S$. The value of the objective function is then also easily seen to be $\lambda_1$.
---
The remaining principal component vectors are then found by solving the sequence (indexed by $i$) of optimization problems
$$
\begin{array}{ll}
\text{maximize} & \u_i^T \S \u_i \\
\text{subject to} & \u_i^T \u_i = 1 \\
& \u_i^T \u_j = 0 \quad \forall 1 \leq j < i\>.
\end{array}
$$
So, the problem is the same, except that we add the additional constraint that the solution must be orthogonal to all of the previous solutions in the sequence. It is not difficult to extend the argument above inductively to show that the solution of the $i$th problem is, indeed, $\m{q}_i$, the $i$th eigenvector of $\S$.
---
The PCA solution is also often expressed in terms of the [singular value decomposition](http://en.wikipedia.org/wiki/Singular_value_decomposition) of $\m{X}$. To see why, let $\m{X} = \m{U} \m{D} \m{V}^T$. Then $n \S = \m{X}^T \m{X} = \m{V} \m{D}^2 \m{V}^T$ and so $\m{V} = \m{Q}$ (strictly speaking, up to sign flips) and $\L = \m{D}^2 / n$.
The principal components are found by projecting $\m{X}$ onto the principal component vectors. From the SVD formulation just given, it's easy to see that
$$
\m{X} \m{Q} = \m{X} \m{V} = \m{U} \m{D} \m{V}^T \m{V} = \m{U} \m{D} \> .
$$
The simplicity of representation of both the principal component vectors and the principal components themselves in terms of the SVD of the matrix of features is one reason the SVD features so prominently in some treatments of PCA.
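A quick numerical check of the identities above, on random made-up data (just an illustration, not part of the derivation):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 300, 4
X = rng.normal(size=(n, p)) @ rng.normal(size=(p, p))  # correlated features
X = X - X.mean(axis=0)

S = X.T @ X / n                       # sample covariance, 1/n convention
evals, Q = np.linalg.eigh(S)          # eigh returns ascending order
evals, Q = evals[::-1], Q[:, ::-1]    # sort descending to match the text

U, D, Vt = np.linalg.svd(X, full_matrices=False)

# Lambda = D^2 / n, and V equals Q up to sign flips:
assert np.allclose(evals, D**2 / n)
assert np.allclose(np.abs(Vt), np.abs(Q.T))

# No random unit vector beats q1 on the Rayleigh quotient u'Su:
u = rng.normal(size=(1000, p))
u /= np.linalg.norm(u, axis=1, keepdims=True)
assert np.all(np.einsum('ij,jk,ik->i', u, S, u) <= evals[0] + 1e-12)
```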
| null | CC BY-SA 3.0 | null | 2011-05-03T02:27:02.357 | 2011-05-04T07:49:25.030 | 2011-05-04T07:49:25.030 | 4376 | 2970 | null |
10257 | 2 | null | 421 | 5 | null | So many wonderful recommendations! It's not quite what you asked for, but [How to Lie with Statistics](http://en.wikipedia.org/wiki/How_to_Lie_with_Statistics) is short and quite wonderful. It doesn't directly teach the things you want, but it does help point out violation of assumptions and other flaws.
| null | CC BY-SA 3.0 | null | 2011-05-03T02:40:27.687 | 2011-05-03T02:40:27.687 | null | null | 3874 | null |
10258 | 2 | null | 10049 | 0 | null | Based on the output you shared, the maximum number of branches from a node is set at 2. It's possible that raising that limit would give you more options for branches, especially if SAS can take continuous variables and break them up into categories. It's data-dredgy, but that's the game we're in, and as long as you cross-validate you're on solid moral ground :-)
| null | CC BY-SA 3.0 | null | 2011-05-03T03:14:21.113 | 2011-05-03T03:14:21.113 | null | null | 2669 | null |
10259 | 2 | null | 10049 | 0 | null | If you are using tree-based methods, you can play around with the splitting criterion. For example, at each step, choose the split that gives the highest weighted accuracy (the average of the two classes' accuracies).
This can be used as the basis for a random forest too, which should give you a good classifier.
I once used a similar process to boost precision while sacrificing recall. It worked very well (better than thresholding the scores from the classification algorithm which were very noisy anyway).
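A toy sketch of that splitting criterion for a single feature (the data are invented; real tree software does this per node over all features):

```python
import numpy as np

def best_balanced_split(x, y):
    """Threshold on feature x maximizing balanced accuracy for binary y.

    Balanced accuracy is the mean of the two per-class accuracies, so a
    rare class is not drowned out the way it is with raw accuracy.
    """
    best = (0.0, None)
    for t in np.unique(x):
        pred = x > t                       # predict class 1 above the threshold
        acc1 = pred[y == 1].mean() if (y == 1).any() else 0.0
        acc0 = (~pred[y == 0]).mean() if (y == 0).any() else 0.0
        score = 0.5 * (acc0 + acc1)
        if score > best[0]:
            best = (score, t)
    return best

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([0, 0, 0, 0, 0, 0, 1, 1])   # imbalanced classes
score, thr = best_balanced_split(x, y)
```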
| null | CC BY-SA 3.0 | null | 2011-05-03T03:56:31.520 | 2011-05-03T03:56:31.520 | null | null | 2067 | null |
10260 | 2 | null | 10251 | 30 | null | The solution presented by cardinal focuses on the sample covariance matrix. Another starting point is the reconstruction error of the data by a q-dimensional hyperplane. If the p-dimensional data points are $x_1, \ldots, x_n$ the objective is to solve
$$\min_{\mu, \lambda_1,\ldots, \lambda_n, \mathbf{V}_q} \sum_{i=1}^n ||x_i - \mu - \mathbf{V}_q \lambda_i||^2$$
for a $p \times q$ matrix $\mathbf{V}_q$ with orthonormal columns and $\lambda_i \in \mathbb{R}^q$. This gives the best rank q-reconstruction as measured by the euclidean norm, and the columns of the $\mathbf{V}_q$ solution are the first q principal component vectors.
For fixed $\mathbf{V}_q$ the solution for $\mu$ and $\lambda_i$ (this is regression) are
$$\mu = \overline{x} = \frac{1}{n}\sum_{i=1}^n x_i \qquad \lambda_i = \mathbf{V}_q^T(x_i - \overline{x})$$
For ease of notation lets assume that $x_i$ have been centered in the following computations. We then have to minimize
$$\sum_{i=1}^n ||x_i - \mathbf{V}_q\mathbf{V}_q^T x_i||^2$$
over $\mathbf{V}_q$ with orthonormal columns. Note that $P = \mathbf{V}_q\mathbf{V}_q^T$ is the projection onto the q-dimensional column space. Hence the problem is equivalent to minimizing
$$\sum_{i=1}^n ||x_i - P x_i||^2 = \sum_{i=1}^n ||x_i||^2 - \sum_{i=1}^n||Px_i||^2$$
over rank q projections $P$. That is, we need to maximize
$$\sum_{i=1}^n||Px_i||^2 = \sum_{i=1}^n x_i^TPx_i = \text{tr}(P \sum_{i=1}^n x_i x_i^T) = n \text{tr}(P \mathbf{S})$$
over rank q projections $P$, where $\mathbf{S}$ is the sample covariance matrix. Now $$\text{tr}(P\mathbf{S}) = \text{tr}(\mathbf{V}_q^T\mathbf{S}\mathbf{V}_q) = \sum_{i=1}^q u_i^T \mathbf{S} u_i$$
where $u_1, \ldots, u_q$ are the $q$ (orthonormal) columns in $\mathbf{V}_q$, and the arguments presented in @cardinal's answer show that the maximum is obtained by taking the $u_i$'s to be $q$ eigenvectors for $\mathbf{S}$ with the $q$ largest eigenvalues.
The reconstruction error suggests a number of useful generalizations, for instance sparse principal components or reconstructions by low-dimensional manifolds instead of hyperplanes. For details, see Section 14.5 in [The Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/).
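One can also verify numerically that the eigenvector solution really minimizes the reconstruction error; here is a small made-up example comparing the top-q eigenvectors against random orthonormal competitors:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 200, 6, 2
X = rng.normal(size=(n, p)) @ rng.normal(size=(p, p))
X = X - X.mean(axis=0)                  # centered, as in the text

def recon_error(X, Vq):
    P = Vq @ Vq.T                       # projection onto col(Vq)
    return np.sum((X - X @ P) ** 2)

S = X.T @ X / n
evals, Q = np.linalg.eigh(S)            # ascending eigenvalues
Vq_pca = Q[:, -q:]                      # eigenvectors of the q largest eigenvalues
err_pca = recon_error(X, Vq_pca)

# The minimal error equals n times the sum of the p - q smallest eigenvalues.
assert np.isclose(err_pca, n * evals[:p - q].sum())

# No random orthonormal basis does better.
for _ in range(200):
    Vq_rand = np.linalg.qr(rng.normal(size=(p, q)))[0]
    assert err_pca <= recon_error(X, Vq_rand) + 1e-8
```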
| null | CC BY-SA 3.0 | null | 2011-05-03T04:20:44.130 | 2011-05-04T06:25:29.590 | 2011-05-04T06:25:29.590 | 4376 | 4376 | null |
10261 | 1 | 10263 | null | 3 | 1347 | While I'm going through the derivation of the E step in the EM algorithm for pLSA, I came across the following derivation [at this page](http://www.hongliangjie.com/2010/01/04/notes-on-probabilistic-latent-semantic-analysis-plsa/). Could anyone explain to me how the following step is derived?
$\sum_z q(z) \log \frac{P(X|z,\theta)P(z|\theta)}{q(z)} = \sum_z q(z) \log \frac{P(z|X,\theta)P(X|\theta)}{q(z)}$
| Derivation of E step in EM algorithm | CC BY-SA 3.0 | null | 2011-05-03T06:23:35.117 | 2014-04-02T04:29:02.523 | 2011-05-03T12:57:33.020 | 930 | 4290 | [
"expectation-maximization",
"latent-semantic-analysis"
] |
10262 | 2 | null | 6705 | 4 | null | Moran's I statistic is used to explore a specific type of spatial clustering: whether high values are located in proximity to other high values and whether low values are located in proximity to other low values.
The trick then is first to get a sense of what you mean by proximity, and second to formulate this mathematically. This idea of proximity will depend on what type of observations (attributes) you are working with and what type of questions you have in mind.
For example, for human beings proximity could mean the distance needed to have a chat. So, if you wanted to know whether high-income people like to chat with other high-income people at your cocktail party, you could formulate proximity using binary weights, where a weight of 1 is assigned when 2 people are within 3 feet of each other. To see whether house prices are spatially correlated, you could define proximity as 2 houses being neighbors, being on the same block, being within sight of one another, etc.
Basically, you need a hypothesis of proximity that is based on some of your prior common sense ideas or expert knowledge of why 2 objects that are close to one another are more associated than 2 objects that are far from one another.
Moran's I can then be seen as a test of your hypothesis of how your notion of proximity structures high values next to one another on the landscape.
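To make the statistic concrete, here is a small NumPy sketch (the sites and weights are invented) computing Moran's I with a binary proximity matrix:

```python
import numpy as np

def morans_i(x, W):
    """Moran's I for values x and a symmetric, zero-diagonal weight matrix W."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()                     # deviations from the mean
    n = len(x)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# Toy example: 5 sites on a line, "proximity" = immediate neighbors.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1

clustered = np.array([1, 1, 1, 9, 9])    # similar values sit next to each other
alternating = np.array([1, 9, 1, 9, 1])  # dissimilar values sit next to each other
print(morans_i(clustered, W))    # positive: spatial clustering
print(morans_i(alternating, W))  # negative: spatial dispersion
```

Positive values indicate that high (or low) values cluster near similar values under your chosen proximity hypothesis; negative values indicate the opposite.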
| null | CC BY-SA 3.0 | null | 2011-05-03T06:40:58.367 | 2011-05-03T08:46:51.997 | 2011-05-03T08:46:51.997 | 4329 | 4329 | null |
10263 | 2 | null | 10261 | 4 | null | It looks like [Bayes' formula](http://en.wikipedia.org/wiki/Bayes%27_theorem) :
$\Pr[A \mid B] = \frac{\Pr[B \mid A] \Pr[A]}{\Pr[B]}$
Here, it gives:
$\Pr[X \mid z, \theta] = \frac{\Pr[z \mid X, \theta] \Pr[X \mid \theta]}{\Pr[z \mid \theta]}$
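The step in the question then follows because the two numerators are just two factorizations of the same joint distribution of $X$ and $z$ (conditioning on $\theta$ is carried along unchanged). A quick numeric sanity check with a made-up joint table, $\theta$ suppressed:

```python
import numpy as np

rng = np.random.default_rng(3)
joint = rng.random((4, 3))          # unnormalized joint of X (rows) and z (columns)
joint /= joint.sum()                # now joint[i, k] = P(X = x_i, z = k)

p_x = joint.sum(axis=1)             # P(X)
p_z = joint.sum(axis=0)             # P(z)
p_x_given_z = joint / p_z           # P(X | z): columns sum to 1
p_z_given_x = joint / p_x[:, None]  # P(z | X): rows sum to 1

# Both factorizations recover the same joint, which is all the step uses:
assert np.allclose(p_x_given_z * p_z, p_z_given_x * p_x[:, None])
```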
| null | CC BY-SA 3.0 | null | 2011-05-03T06:47:38.257 | 2011-05-03T06:47:38.257 | 2020-06-11T14:32:37.003 | -1 | 3019 | null |
10264 | 2 | null | 3268 | 2 | null | An alternative to multidimensional scaling is making a map of each group's position relative to the others as a SOM (self-organizing map). Just as a geographic map of the United States has Kansas in the middle, the groups positioned near the middle of your SOM map would be the groups most connected to other groups.
Here is a [python SOM module](http://code.google.com/p/pysom-thesis/)
| null | CC BY-SA 3.0 | null | 2011-05-03T06:52:24.267 | 2011-05-03T08:26:00.337 | 2011-05-03T08:26:00.337 | 4329 | 4329 | null |
10265 | 1 | 10278 | null | 5 | 974 | In the paper of [Probabilistic Latent Semantic Analysis](http://www.cs.brown.edu/~th/papers/Hofmann-UAI99.pdf) by Hofmann, the author fits the model for document $\times$ word matrix through EM Algorithm in section 3. I was able to follow the derivation and meaning of the model derived in it.
However, in a later section the author mentions tempered EM for improving generalization. Could anyone explain, or point me to a reference where I can understand, the actual meaning of the tempered EM algorithm?
| What is a "tempered EM algorithm"? | CC BY-SA 3.0 | null | 2011-05-03T09:12:35.067 | 2019-10-04T09:30:43.383 | 2011-05-03T11:06:56.247 | null | 4290 | [
"expectation-maximization",
"latent-semantic-analysis"
] |
10267 | 1 | null | null | 6 | 5216 | I am building a Box-Jenkins model in Excel using solver. The model is AR(2).
The data that I have contains trend and seasonality both.
I know how to remove seasonality using seasonal indexes and add it back to the forecast.
But, how do I handle trend? If I remove trend from the data, how should I add it back to the forecast?
Also, is the Excel solver the best way to find the AR parameters?
| Predicting forecasts for next 12 months using Box-Jenkins | CC BY-SA 3.0 | null | 2011-05-03T12:08:37.473 | 2016-05-23T10:12:22.290 | 2016-05-23T10:12:22.290 | 1352 | 4445 | [
"time-series",
"forecasting",
"arima",
"box-jenkins"
] |
10268 | 1 | null | null | 3 | 449 | I've got a question concerning the estimation of a tobit model with the [AER](http://cran.r-project.org/web/packages/AER/index.html) package in R. I observed t-distributed residuals, so the assumption of normally distributed errors is violated. Fortunately it's possible to choose "t" as the distribution when fitting with `tobit()`: `tobit(<formula>, dist="t")`. The AIC increases when I switch the distribution from t back to normal, so using the t-distribution seems to be a good idea.
My problem is: is it valid to swap out the normal distribution that way? Is it still a tobit model, and is there anything I have to watch out for when changing the distribution in a tobit model?
I appreciate any thought!
| Tobit model with t-distribution | CC BY-SA 3.0 | null | 2011-05-03T10:58:28.040 | 2023-03-31T00:06:30.737 | 2011-05-03T18:42:17.707 | 71 | null | [
"r",
"regression",
"tobit-regression"
] |
10269 | 2 | null | 10267 | 1 | null | Your approach initially adjusts for the impact of seasonality in a deterministic manner. This may or may not be appropriate, as the impact of seasonality may be auto-projective in form. The best way to answer this question is to evaluate alternative final models for adequacy in terms of separating the observed data into signal and noise. A number of possible pitfalls await you. One of them: does the series have one or more trends and/or one or more level shifts? Another: does the series have a constant set of monthly indicators, or have some months had a statistically significant change in their effects? In terms of a seasonal ARIMA model, this question translates to: have the model parameters changed over time? My experience with Excel Solver has not been very positive.
| null | CC BY-SA 3.0 | null | 2011-05-03T12:28:47.897 | 2011-05-03T12:53:10.690 | 2011-05-03T12:53:10.690 | 3382 | 3382 | null |
10270 | 2 | null | 10049 | 4 | null | The problem is more with the choice of the accuracy scoring rule. Make sure that the ultimate goal is classification as opposed to prediction. The proportion classified correctly is a discontinuous improper scoring rule. An improper scoring rule is one that is optimized by a bogus model. With an improper scoring rule such things as addition of a highly important predictor making the model less accurate can happen. The use of log likelihood (or deviance) or the Brier quadratic scoring rule will help. The concordance index C (which happens to equal the ROC area, making ROCs appear more useful than they really are) is a useful measure of predictive discrimination once the model is finalized.
| null | CC BY-SA 3.0 | null | 2011-05-03T13:25:29.633 | 2011-05-03T13:25:29.633 | null | null | 4253 | null |
10271 | 1 | null | null | 14 | 12439 | I am working with a time series of anomaly scores (the background is anomaly detection in computer networks). Every minute, I get an anomaly score $x_t \in [0, 5]$ which tells me how "unexpected" or abnormal the current state of the network is. The higher the score, the more abnormal the current state. Scores close to 5 are theoretically possible but occur almost never.
Now I want to come up with an algorithm or a formula which automatically determines a threshold for this anomaly time series. As soon as an anomaly score exceeds this threshold, an alarm is triggered.
The frequency distribution below is an example for an anomaly time series over 1 day. However, it is not safe to assume that every anomaly time series is going to look like that. In this special example, an anomaly threshold such as the .99-quantile would make sense since the few scores on the very right can be regarded as anomalies.

And the same frequency distribution as time series (it only ranges from 0 to 1 since there are no higher anomaly scores in the time series):

Unfortunately, the frequency distribution might have shapes, where the .99-quantile is not useful. An example is below. The right tail is very low, so if the .99-quantile is used as threshold, this might result in many false positives. This frequency distribution does not seem to contain anomalies so the threshold should lie outside the distribution at around 0.25.

Summing up, the difference between these two examples is that the first one seems to exhibit anomalies whereas the second one does not.
From my naive point of view, the algorithm should consider these two cases:
- If the frequency distribution has a large right tail (i.e. a couple of abnormal scores), then the .99-quantile can be a good threshold.
- If the frequency distribution has a very short right tail (i.e. no abnormal scores), then the threshold should lie outside the distribution.
/edit: There is also no ground truth, i.e. no labeled data sets available. So the algorithm is "blind" to the nature of the anomaly scores.
Now I am not sure how these observations can be expressed in terms of an algorithm or a formula. Does anyone have a suggestion how this problem could be solved? I hope that my explanations are sufficient since my statistical background is very limited.
Thanks for your help!
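To illustrate the false-positive concern with made-up scores: a fixed .99-quantile rule flags roughly 1% of the points regardless of whether genuine anomalies are present:

```python
import numpy as np

rng = np.random.default_rng(4)

# One day of made-up minute scores plus three genuine anomalies...
with_anoms = np.concatenate([rng.beta(2, 20, size=1440), [0.85, 0.9, 0.95]])
# ...and the same kind of bulk with no anomalies at all.
without_anoms = rng.beta(2, 20, size=1440)

for scores in (with_anoms, without_anoms):
    thr = np.quantile(scores, 0.99)
    print(thr, (scores > thr).sum())   # always flags about 1% of the points
```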
| Automatic threshold determination for anomaly detection | CC BY-SA 3.0 | null | 2011-05-03T13:35:45.367 | 2011-05-19T18:56:20.710 | 2011-05-05T08:59:25.660 | 4446 | 4446 | [
"time-series",
"outliers",
"threshold"
] |
10272 | 5 | null | null | 0 | null | For two or more dependent variables, use [multivariate-regression](/questions/tagged/multivariate-regression).
Linear regression models a variable (the "dependent variable") as varying randomly with respect to a linear combination of other variables (the "independent variables"). Multiple regression includes two or more non-constant independent variables (three or more variables in total). This adds complications not present with only one independent variable, including [complex forms of correlation](http://en.wikipedia.org/wiki/Multiple_correlation) and [interaction effects](http://en.wikipedia.org/wiki/Interaction_%28statistics%29).
Use this `multiple-regression` tag instead of the more generic [regression](http://stats.stackexchange.com/questions/tagged/regression) tag when your question focuses on an issue specifically related to including two or more independent variables in a regression model.
Multiple regression concerns the so-called "general linear model," not to be confused with the [generalized linear model](http://stats.stackexchange.com/questions/tagged/generalized-linear-model) despite the close similarity of their names.
| null | CC BY-SA 4.0 | null | 2011-05-03T13:39:20.200 | 2022-07-25T13:54:29.357 | 2022-07-25T13:54:29.357 | 121522 | 919 | null |
10273 | 4 | null | null | 0 | null | Regression that includes two or more non-constant independent variables. | null | CC BY-SA 4.0 | null | 2011-05-03T13:39:20.200 | 2022-07-25T13:55:51.503 | 2022-07-25T13:55:51.503 | 919 | 919 | null |
10276 | 1 | 10281 | null | 3 | 1775 | I have a response variable measured at three time points per individual (week 0, 18, and 36). I am interested in differences in the change of the response over the 36 weeks within some categorical variable X.
I see two ways of modeling this.
- One way ANOVA with response = week_36_score - week_0_score (this seems like the simplest option)
- Repeated Measures ANCOVA with response = week_18_score and week_36_score. Covariate = Week_0_score.
### Question
- Which would you prefer (if any, maybe there is a better choice)?
I know there have been a lot of articles on this, and they seem to say each approach has its own strengths and weaknesses. I believe the second model would have more power. I am not worried about a ceiling effect here.
| Best model for change in scores over three time points | CC BY-SA 3.0 | null | 2011-05-03T14:06:30.987 | 2011-05-04T01:56:24.750 | 2011-05-03T15:38:06.650 | 183 | 2310 | [
"anova",
"modeling",
"repeated-measures",
"panel-data"
] |
10277 | 1 | 10282 | null | 9 | 507 | This is a rather general question (i.e. not necessarily specific to statistics), but I have noticed a trend in the machine learning and statistical literature where authors prefer to follow the following approach:
Approach 1: Obtain a solution to a practical problem by formulating a cost function for which it is possible (e.g. from a computational standpoint) to find a globally optimal solution (e.g. by formulating a convex cost function).
rather than:
Approach 2: Obtain a solution to the same problem by formulating a cost function for which we may not be able to obtain a globally optimal solution (e.g. we can only get a locally optimal solution for it).
Note that rigorously speaking the two problems are different; the assumption is that we can find the globally optimal solution for the first one, but not for the second one.
Other considerations aside (i.e. speed, ease of implementation, etc.), I am looking for:
- An explanation of this trend (e.g. mathematical or historical arguments)
- Benefits (practical and/or theoretical) for following Approach 1 instead of 2 when solving a practical problem.
| Advantages of approaching a problem by formulating a cost function that is globally optimizable | CC BY-SA 3.0 | null | 2011-05-03T14:46:00.920 | 2016-08-19T23:05:55.560 | 2016-08-19T22:47:54.237 | 22468 | 2798 | [
"optimization",
"function"
] |
10278 | 2 | null | 10265 | 3 | null | I found an answer via Google in a [UTexas paper](http://www.ma.utexas.edu/users/zmccoy/report.pdf). As I suspected from the name, it introduces a temperature that decreases over iterations, à la simulated annealing, slightly changing the E step of the algorithm.
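From what I can tell, the tempered E-step just raises the per-component posterior weights to a power $\beta \le 1$ before normalizing. A hedged NumPy sketch for a toy 1-D Gaussian mixture (my own illustration, not Hofmann's code):

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def tempered_e_step(x, pis, mus, sigmas, beta):
    """Tempered E-step for a toy 1-D Gaussian mixture.

    Responsibilities are proportional to (pi_k * p(x | k)) ** beta;
    beta = 1 gives the ordinary E-step, beta < 1 flattens the posterior.
    """
    lik = pis * normal_pdf(x[:, None], mus, sigmas)   # shape (n, K)
    r = lik ** beta
    return r / r.sum(axis=1, keepdims=True)

x = np.array([-2.0, 0.0, 2.0])
pis = np.array([0.5, 0.5])
mus = np.array([-1.0, 1.0])
sigmas = np.array([1.0, 1.0])

r_standard = tempered_e_step(x, pis, mus, sigmas, beta=1.0)
r_tempered = tempered_e_step(x, pis, mus, sigmas, beta=0.2)
# Tempering pulls responsibilities toward the uniform posterior.
```

With $\beta = 1$ this is the ordinary E-step; as $\beta \to 0$ the responsibilities approach uniform, which is what makes the tempered version less prone to committing too early and overfitting.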
| null | CC BY-SA 3.0 | null | 2011-05-03T15:03:25.993 | 2011-05-03T15:03:25.993 | null | null | 1764 | null |
10279 | 1 | null | null | 0 | 96 | I am analyzing a large dataset (n > 100) of incident rates, with the aim of forming a normal distribution. Then I will know whether a future incident rate (x%) is close to the historical mean or not, and can score/rate it accordingly with an already-created formula.
The data is positively-skewed, as most data points cluster around or near zero percent. I HAVE to transform this data into a normal distribution, correct? What is the preferred method when dealing with percentages (these will always be between 0 and 100%)? Are there alternative non-normalizing methods I can use to reach my desired output?
Anyway, let's say I've transformed the data and it follows a normal distribution. Now I can find the mean and std dev, then plot these in Excel using z-scores. Then I should be able to determine if incident rate x% is in the top 10%, top 20% of values, and score it with my formula accordingly.
Any problems with this method?
| Analyzing historical incident rates and rating future performance | CC BY-SA 3.0 | null | 2011-05-03T15:23:07.630 | 2011-05-03T17:17:19.583 | 2011-05-03T17:17:19.583 | null | 4450 | [
"normal-distribution",
"data-transformation",
"skewness"
] |
10280 | 2 | null | 10279 | 4 | null | You don't need to transform to a normal distribution to see if a particular value is the top tenth or top fifth of observations. All you need to do is sort your observations (and count them).
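For example, with made-up historical rates, the empirical percentile rank needs nothing but a comparison and a count:

```python
import numpy as np

# Toy history of incident rates (%), invented for illustration.
rates = np.array([0.0, 0.1, 0.2, 0.5, 1.2, 2.0, 3.5, 4.0, 8.0, 15.0])
new_rate = 3.5

# Fraction of historical rates at or below the new value: no normality needed.
pct_rank = np.mean(rates <= new_rate)
top_10 = new_rate >= np.quantile(rates, 0.90)   # is it in the top 10%?
```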
| null | CC BY-SA 3.0 | null | 2011-05-03T16:12:25.170 | 2011-05-03T16:12:25.170 | null | null | 2958 | null |
10281 | 2 | null | 10276 | 2 | null | In general, I would go with a repeated measures design.
There is nothing technically wrong with the first option. However, you are essentially throwing away 1/3 of your data (and 1/2 of your non-baseline data!), which may result in a loss of power. Additionally, since you have a measurement in between baseline and 36 weeks, you cannot conclude anything about the shapes of the response profiles (i.e. test for quadratic/cubic effects over time).
With that being said, for your repeated measures design, I would define a new variable called response_change. I am assuming that you are interested in testing if X has some effect on changes in the responses. At week 18, this variable would take value week_18_score - week_0_score. At week 36, this would take value week_36_score - week_0_score. Using response_change is more suited to this hypothesis.
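For instance, with made-up wide-format scores, response_change is just the difference from baseline at each follow-up, stacked into long format for the repeated-measures model:

```python
import numpy as np

# Made-up wide-format data: one row per subject.
week0  = np.array([10.0, 12.0, 9.0, 11.0])
week18 = np.array([13.0, 12.5, 11.0, 14.0])
week36 = np.array([16.0, 13.0, 14.0, 18.0])

# response_change as defined above: change from baseline at each follow-up.
change_18 = week18 - week0
change_36 = week36 - week0

# Long format (subject, week, response_change) for a repeated-measures model.
subject = np.tile(np.arange(4), 2)
week = np.repeat([18, 36], 4)
response_change = np.concatenate([change_18, change_36])
```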
| null | CC BY-SA 3.0 | null | 2011-05-03T16:12:50.613 | 2011-05-03T16:12:50.613 | null | null | 2144 | null |
10282 | 2 | null | 10277 | 3 | null | My belief is that the goal should be to optimize the function you are interested in. If that happens to be the number of misclassifications - and not a binomial likelihood, say - then you should try minimizing the number of misclassifications. However, for the number of practical reasons mentioned (speed, implementation, instability, etc.), this may not be so easy and it may even be impossible. In that case, we choose to approximate the solution.
I know of basically two approximation strategies; either we come up with algorithms that attempt to directly approximate the solution of the original problem, or we reformulate the original problem as a more directly solvable problem (e.g. convex relaxations).
A mathematical argument for preferring one approach over the other is whether we can understand a) the properties of the solution actually computed and b) how well the solution approximates the solution of the problem we are actually interested in.
I know of many results in statistics where we can prove properties of a solution to an optimization problem. To me it seems more difficult to analyze the solution of an algorithm where you don't have a mathematical formulation of what it computes (e.g. that it solves a given optimization problem). I certainly won't claim that you can't, but it seems to be a theoretical benefit if you can give a clear mathematical formulation of what you compute.
It is unclear to me whether such mathematical arguments give any practical benefits to Approach 1 over Approach 2. There are certainly people out there [who are not afraid of a non-convex loss function](http://videolectures.net/eml07_lecun_wia/).
| null | CC BY-SA 3.0 | null | 2011-05-03T17:45:30.897 | 2011-05-04T16:12:42.777 | 2011-05-04T16:12:42.777 | 4376 | 4376 | null |
10283 | 2 | null | 10271 | 1 | null | Do you have any 'labeled' examples of what constitutes an anomaly? i.e. values associated with a network failure, or something like that?
One idea you might consider applying is a ROC curve, which is useful for picking thresholds that meet a specific criterion, like maximizing true positives or minimizing false negatives.
Of course, to use a ROC curve, you need to label your data in some way.
| null | CC BY-SA 3.0 | null | 2011-05-03T18:18:53.510 | 2011-05-03T18:18:53.510 | null | null | 2817 | null |
10284 | 2 | null | 10267 | 5 | null | If you are at all familiar with [R](http://www.r-project.org/) (if you're building time series models, you should be), check out the [forecast](http://cran.r-project.org/web/packages/forecast/index.html) package. It's designed to choose parameters for ARIMA as well as exponential smoothing models, and uses a solid methodology to do so. It will probably get you a lot farther than what you are building in Excel, especially because it will also allow you to explore exponential smoothing models. The two functions you are interested in are 'auto.arima' and 'ets'.
/Edit: auto.arima can also be used to fit ARMAX models, which (if properly specified) can solve many of the problems identified by IrishStat.
| null | CC BY-SA 3.0 | null | 2011-05-03T18:23:46.203 | 2011-05-04T15:17:58.293 | 2011-05-04T15:17:58.293 | 2817 | 2817 | null |
10285 | 1 | null | null | 5 | 25168 |
### Context
I ran an experiment with `3 x 2` design with three levels of within subjects factor (repeated measures) and two levels to the between subjects factors.
I am interested in examining the changes from baseline and the interaction effect.
### Question
- How do I compute the required sample size for a 3 x 2 design in order to achieve adequate statistical power?
| Sample size required for mixed design ANOVA to achieve adequate statistical power | CC BY-SA 3.0 | null | 2011-05-03T18:31:11.137 | 2011-05-04T16:12:13.730 | 2011-05-04T04:03:35.047 | 183 | 4453 | [
"anova",
"repeated-measures",
"statistical-power"
] |
10289 | 1 | null | null | 171 | 289532 | At work we were discussing this as my boss has never heard of normalization. In Linear Algebra, Normalization seems to refer to the dividing of a vector by its length. And in statistics, Standardization seems to refer to the subtraction of a mean then dividing by its SD. But they seem interchangeable with other possibilities as well.
When creating some kind of universal score, that makes up $2$ different metrics, which have different means and different SD's, would you Normalize, Standardize, or something else? One person told me it's just a matter of taking each metric and dividing them by their SD, individually. Then summing the two. And that will result in a universal score that can be used to judge both metrics.
For instance, say you had the number of people who take the subway to work (in NYC) and the number of people who drove to work (in NYC).
$$\text{Train} \longrightarrow x$$
$$\text{Car} \longrightarrow y$$
If you wanted to create a universal score to quickly report traffic fluctuations, you can't just add $\text{mean}(x)$ and $\text{mean}(y)$ because there will be a LOT more people who ride the train. There's 8 million people living in NYC, plus tourists. That's millions of people taking the train every day versus hundreds of thousands of people in cars. So they need to be transformed to a similar scale in order to be compared.
If $\text{mean}(x) = 8,000,000$
and $\text{mean}(y) = 800,000$
Would you normalize $x$ & $y$ then sum? Would you standardize $x$ & $y$ then sum? Or would you divide each by their respective SD then sum? In order to get to a number that when fluctuates, represents total traffic fluctuations.
Any article or chapters of books for reference would be much appreciated. THANKS!
Also here's another example of what I'm trying to do.
Imagine you're a college dean, and you're discussing admission requirements. You may want students with at least a certain GPA and a certain test score. It'd be nice if they were both on the same scale because then you could just add the two together and say, "anyone with at least a 7.0 can get admitted." That way, if a prospective student has a 4.0 GPA, they could get as low as a 3.0 test score and still get admitted. Inversely, if someone had a 3.0 GPA, they could still get admitted with a 4.0 test score.
But it's not like that. The ACT is on a 36-point scale and most GPAs are on a 4.0 scale (some are 4.3, yes annoying). Since I can't just add an ACT and GPA to get some kind of universal score, how can I transform them so they can be added, thus creating a universal admission score? And then as a Dean, I could just automatically accept anyone with a score above a certain threshold. Or even automatically accept everyone whose score is within the top 95%.... those sorts of things.
Would that be normalization? standardization? or just dividing each by their SD then summing?
| What's the difference between Normalization and Standardization? | CC BY-SA 3.0 | null | 2011-05-03T20:26:45.730 | 2021-10-09T18:06:13.350 | 2017-11-15T08:39:52.533 | 101426 | 4455 | [
"descriptive-statistics",
"normalization",
"standardization"
] |
10290 | 2 | null | 10121 | 1 | null | If you are trying to generate random correlation matrices, consider sampling from the Wishart distribution. This following question provides information the Wishart distribution as well as advice on how to sample:
[How to efficiently generate random positive-semidefinite correlation matrices?](https://stats.stackexchange.com/questions/2746/how-to-efficiently-generate-positive-semi-definite-correlation-matrices)
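A rough sketch of this idea in NumPy (sampling a Wishart matrix directly from its definition as the Gram matrix of Gaussian draws, then rescaling to unit diagonal — the dimension and degrees of freedom below are arbitrary):

```
import numpy as np

rng = np.random.default_rng(0)

def random_correlation_matrix(p, df):
    # Sample S ~ Wishart(df, I_p) from its definition (Gram matrix of
    # df standard-normal rows), then rescale to unit diagonal.
    X = rng.standard_normal((df, p))
    S = X.T @ X
    d = np.sqrt(np.diag(S))
    return S / np.outer(d, d)

R = random_correlation_matrix(4, df=10)
print(np.allclose(np.diag(R), 1.0), np.allclose(R, R.T))  # True True
```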
| null | CC BY-SA 3.0 | null | 2011-05-03T20:40:19.043 | 2011-05-03T20:40:19.043 | 2017-04-13T12:44:26.710 | -1 | 2773 | null |
10291 | 2 | null | 10289 | 50 | null | In the business world, "normalization" typically means that the values are rescaled ("normalized") to the range 0.0 to 1.0. "Standardization" typically means that the values are "standardized" to measure how many standard deviations each value is from its mean. However, not everyone would agree with that. It's best to explain your definitions before you use them.
In any case, your transformation needs to provide something useful.
In your train/car example, do you gain anything out of knowing how many standard deviations from their mean, each value lies? If you plot those "standardized" measures against each other as an x-y plot, you might see a correlation (see the first graph on the right):
[http://en.wikipedia.org/wiki/Correlation_and_dependence](http://en.wikipedia.org/wiki/Correlation_and_dependence)
If so, does that mean anything to you?
As far as your second example goes, if you want to "equate" a GPA from one scale to another scale, what do these scales have in common? In other words, how could you transform those minimums to be equivalent, and the maximums to be equivalent?
Here's an example of "normalization":
[Normalization Link](http://en.wikipedia.org/wiki/Normalization_%28image_processing%29)
Once you get your GPA and ACT scores in an interchangeable form, does it make sense to weigh the ACT and GPA scores differently? If so, what weighting means something to you?
Edit 1 (05/03/2011) ==========================================
First, I would check out the links suggested by whuber above. The bottom line is, in both of your two-variable problems, you are going to have to come up with an "equivalence" of one variable versus the other. And, a way to differentiate one variable from the other. In other words, even if you can simplify this to a simple linear relationship, you'll need "weights" to differentiate one variable from the other.
Here's an example of a two variable problem:
[Multi-Attribute Utilities](http://www.doc.ic.ac.uk/~frk/frank/da/6.%20multiple%20utility.pdf)
From the last page, if you can say that standardized train traffic `U1(x)` versus standardized car traffic `U2(y)` is "additively independent", then you might be able to get away with a simple equation such as:
```
U(x, y) = k1*U1(x) + (1 - k1)*U2(y)
```
Where k1=0.5 means you're indifferent to standardized car/train traffic. A higher k1 would mean train traffic `U1(x)` is more important.
However, if these two variables are not "additively independent", then you'll have to use a more complicated equation. One possibility is shown on page 1:
```
U(x, y) = k1*U1(x) + k2*U2(y) + (1-k1-k2)*U1(x)*U2(y)
```
In either case, you'll have to come up with a utility `U(x, y)` that makes sense.
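To make the additively independent case concrete, here is a toy calculation (all counts, means, SDs and the weight `k1` are invented):

```
# All numbers below are made up, just to show the mechanics.
def standardize(x, mean, sd):
    return (x - mean) / sd

u1 = standardize(8_400_000, mean=8_000_000, sd=500_000)  # train traffic: 0.8
u2 = standardize(740_000, mean=800_000, sd=120_000)      # car traffic: -0.5

k1 = 0.6  # assumed: train traffic weighted slightly more than car traffic
u = k1 * u1 + (1 - k1) * u2
print(round(u, 2))  # 0.28
```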
The same general weighting/comparison concepts hold for your GPA/ACT problem. Even if they are "normalized" rather than "standardized".
One last issue. I know you're not going to like this, but the definition of the term "additively independent" is on page 4 of the following link. I looked for a less geeky definition, but I couldn't find one. You might look around to find something better.
[Additively Independent](http://www.cs.toronto.edu/~fbacchus/Papers/BGUAI95.pdf)
Quoting the link:
```
Intuitively, the agent prefers being both healthy and wealthy
more than might be suggested by considering the two attributes
separately. It thus displays a preference for probability
distributions in which health and wealth are positively
correlated.
```
As suggested at the top of this response, if you plot standardized train traffic versus standardized car traffic on an x-y plot, you might see a correlation. If so, then you're stuck with the above non-linear utility equation or something similar.
| null | CC BY-SA 3.0 | null | 2011-05-03T21:02:01.230 | 2011-05-04T03:51:50.550 | 2011-05-04T03:51:50.550 | 2775 | 2775 | null |
10292 | 2 | null | 10289 | 7 | null | The answer is simple, but you're not going to like it: it depends. If you value 1 standard deviation from both scores equally, then standardization is the way to go (note: in fact, you're [studentizing](http://en.wikipedia.org/wiki/Studentized_range), because you're dividing by an estimate of the SD of the population).
If not, it is likely that standardization will be a good first step, after which you can give more weight to one of the scores by multiplying it by a well-chosen factor.
| null | CC BY-SA 3.0 | null | 2011-05-03T21:08:58.627 | 2011-05-03T21:08:58.627 | null | null | 4257 | null |
10293 | 1 | null | null | 3 | 961 | I have some data that a downstream system needs an optimized function of boolean logic for. Essentially I have data similar to:
```
User cat1 cat2 cat3 cat4
1 0 0 0 1
2 1 0 0 1
3 0 1 1 0
```
I must optimize cat4 as a function like this: `(cat1 or cat2 or cat3)`. "Or" is the only operation that I can use. Does anyone know a particular technique / strategy for optimizing a union of categories in this way? (Note, I will have many categories.)
| Optimize a boolean function | CC BY-SA 3.0 | null | 2011-05-03T21:47:17.023 | 2017-04-08T18:21:13.197 | 2017-04-08T18:21:13.197 | 11887 | 739 | [
"machine-learning",
"optimization",
"many-categories"
] |
10294 | 2 | null | 10276 | 4 | null | There are difficulties in computing change. This doesn't work on ordinal responses, and for continuous responses it makes a strong assumption of proper choice of transformations for the variables. I recommend adjusting for baseline and modeling the 2nd and 3rd measurements as longitudinal measurements.
Repeated measures ANOVA is becoming obsolete in favor of generalized least squares or mixed effects models.
| null | CC BY-SA 3.0 | null | 2011-05-03T22:08:01.730 | 2011-05-04T01:56:24.750 | 2011-05-04T01:56:24.750 | 4253 | 4253 | null |
10295 | 1 | null | null | 22 | 391 | I am trying to put together a data-mining package for StackExchange sites and in particular, I am stuck in trying to determine the "most interesting" questions. I would like to use the question score, but remove the bias due to the number of views, but I don't know how to approach this rigorously.
In the ideal world, I could sort the questions by calculating $\frac{v}{n}$, where $v$ is the votes total and $n$ is the number of views. After all it would measure the percentage of people that upvote the question, minus the percentage of people that downvote the question.
Unfortunately, the voting pattern is much more complicated. Votes tend to "plateau" to a certain level and this has the effect of drastically underestimating wildly popular questions. In practice, a question with 1 view and 1 upvote would certainly score and be sorted higher than any other question with 10,000 views, but fewer than 10,000 votes.
I am currently using $\frac{v}{\log{n}+1}$ as an empirical formula, but I would like to be precise. How can I approach this problem with mathematical rigorousness?
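To illustrate the difference between the two formulas on made-up numbers:

```
import math

def score_linear(votes, views):
    return votes / views

def score_log(votes, views):
    return votes / (math.log(views) + 1)

# Made-up questions: (votes, views).
unpopular, popular = (1, 1), (50, 10_000)

# v/n ranks the 1-view question far above the popular one...
print(score_linear(*unpopular) > score_linear(*popular))  # True
# ...while v/(log n + 1) ranks the popular question higher.
print(score_log(*popular) > score_log(*unpopular))        # True
```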
In order to address some of the comments, I'll try to restate the problem in a better way:
Let's say I have a question with $v_0$ votes total and $n_0$ views. I would like to be able to estimate what votes total $v_1$ is most likely when the views reach $n_1$.
In this way I could simply choose a nominal value for $n_1$ and order all the questions according to the expected $v_1$ total.
---
I've created two queries on the SO datadump to show better the effect I am talking about:
[Average Views by Score](http://data.stackexchange.com/stackoverflow/q/99605/)
Result:

[Average Score by Views (100-views buckets)](http://data.stackexchange.com/stackoverflow/q/99606/)
Result:

---
[The two formulas compared](http://data.stackexchange.com/stackoverflow/q/99608/)
Results, not sure if straighter is better: ($\frac{v}{n}$ in blue, $\frac{v}{log{n}+1}$ in red)

| "Interestingness" function for StackExchange questions | CC BY-SA 3.0 | null | 2011-05-03T21:53:26.910 | 2011-05-05T01:16:53.857 | 2011-05-05T00:13:04.400 | 4456 | 4456 | [
"data-mining",
"predictive-models"
] |
10297 | 2 | null | 498 | 1 | null | I have had the exact same problem . . . in fact I'm having right now! It seems to matter whether or not I include the blue-colored labels from the output window. Try copying only the text in black (the table fillin's) and see if that does the trick. It worked for me just now, and then when I go back and try again to copy the whole of it, it allows me to copy the blue label text too! It seems to have woken SAS up to realize my desire to fill the clipboard. I hate SAS . . . but I refuse to let it defeat me!
| null | CC BY-SA 3.0 | null | 2011-05-03T22:59:01.330 | 2011-05-03T22:59:01.330 | null | null | 4459 | null |
10298 | 2 | null | 10289 | 122 | null | Normalization rescales the values into a range of [0,1]. This might be useful in some cases where all parameters need to have the same positive scale. However, it is sensitive to outliers: a single extreme value sets the bounds and compresses the rest of the data into a narrow sub-range.
$$ X_{changed} = \frac{X - X_{min}}{X_{max}-X_{min}} $$
Standardization rescales data to have a mean ($\mu$) of 0 and standard deviation ($\sigma$) of 1 (unit variance).
$$ X_{changed} = \frac{X - \mu}{\sigma} $$
For most applications standardization is recommended.
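A quick illustration of both transformations on a toy sample:

```
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

# Min-max normalization into [0, 1].
lo, hi = min(data), max(data)
normalized = [(x - lo) / (hi - lo) for x in data]

# Standardization: zero mean, unit (population) standard deviation.
mu = sum(data) / len(data)
sigma = (sum((x - mu) ** 2 for x in data) / len(data)) ** 0.5
standardized = [(x - mu) / sigma for x in data]

print(normalized[0], normalized[-1])  # 0.0 1.0
print(mu, sigma)                      # 5.0 2.0
```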
| null | CC BY-SA 3.0 | null | 2011-05-04T00:05:54.767 | 2016-12-31T18:27:47.567 | 2016-12-31T18:27:47.567 | 73527 | 2202 | null |
10299 | 2 | null | 498 | 1 | null | I have the problem sometimes. It seems that as long as I do not highlight the whole output window, but instead highlight all of it except the last line and leave a little extra space at the end of the line, it always works.
If you are facing the problem try copying just the middle section and see if it works, if so then this can probably fix it.
Hopefully that made sense.
I just saw Dason's post and this sounds a lot like my solution.
| null | CC BY-SA 3.0 | null | 2011-05-04T02:39:22.600 | 2011-05-04T02:39:22.600 | null | null | 2310 | null |
10300 | 2 | null | 7200 | 13 | null | I suppose I'm too late to be the hero, but I wanted to comment on cardinal's post, and this comment became too big for its intended box.
For this answer, I'm assuming $x >0$; appropriate reflection formulae can be used for negative $x$.
I'm more used to dealing with the error function $\mathrm{erf}(x)$ myself, but I'll try to recast what I know in terms of Mills's ratio $R(x)$ (as defined in cardinal's answer).
There are in fact alternative ways for computing the (complementary) error function apart from using Chebyshev approximations. Since the use of a Chebyshev approximation requires the storage of not a few coefficients, these methods might have an edge if array structures are a bit costly in your computing environment (you could inline the coefficients, but the resulting code would probably look like a baroque mess).
For "small" $|x|$, Abramowitz and Stegun give a nicely behaved series (at least better behaved than the usual Maclaurin series):
$$R(x)=\sqrt{\frac{\pi}{2}}\exp\left(\frac{x^2}{2}\right)-x\sum_{j=0}^\infty\frac{2^j j!}{(2j+1)!}x^{2j}$$
(adapted from [formula 7.1.6](http://people.math.sfu.ca/~cbm/aands/page_297.htm))
Note that the coefficients of $x^{2j}$ in the series $c_j=\frac{2^j j!}{(2j+1)!}$ can be computed by starting with $c_0=1$ and then using the recursion formula $c_{j+1}=\frac{c_j}{2j+3}$. This is convenient when implementing the series as a summation loop.
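A sketch of this series in code (using the coefficient recursion in a summation loop, and checking against the closed form $R(x)=\sqrt{\pi/2}\,e^{x^2/2}\,\mathrm{erfc}(x/\sqrt{2})$):

```
import math

def mills_series(x, terms=60):
    # Sum c_j * x^(2j) with c_0 = 1 and c_{j+1} = c_j / (2j + 3),
    # then apply R(x) = sqrt(pi/2) * exp(x^2/2) - x * sum.
    s, c, x2j = 0.0, 1.0, 1.0
    for j in range(terms):
        s += c * x2j
        c /= 2 * j + 3
        x2j *= x * x
    return math.sqrt(math.pi / 2) * math.exp(x * x / 2) - x * s

def mills_exact(x):
    # Closed form via the complementary error function.
    return math.sqrt(math.pi / 2) * math.exp(x * x / 2) * math.erfc(x / math.sqrt(2))

print(abs(mills_series(1.0) - mills_exact(1.0)) < 1e-10)  # True
```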
---
cardinal gave the Laplacian continued fraction as a way to bound Mills's ratio for large $|x|$; what is not as well-known is that the continued fraction is also useful for numerical evaluation.
[Lentz](http://dx.doi.org/10.1364/AO.15.000668), [Thompson and Barnett](http://dx.doi.org/10.1016/0021-9991%2886%2990046-X) derived an algorithm for numerically evaluating a continued fraction as an infinite product, which is more efficient than the usual approach of computing a continued fraction "backwards". Instead of displaying the general algorithm, I'll show how it specializes to the computation of Mills's ratio:
$\displaystyle Y_0=x,\,C_0=Y_0,\,D_0=0$
$\text{repeat for }j=1,2,\dots$
$$D_j=\frac1{x+jD_{j-1}}$$
$$C_j=x+\frac{j}{C_{j-1}}$$
$$H_j=C_j D_j$$
$$Y_j=H_j Y_{j-1}$$
$\text{until }|H_j-1| < \text{tol}$
$\displaystyle R(x)=\frac1{Y_j}$
where $\text{tol}$ determines the accuracy.
The CF is useful where the previously mentioned series starts to converge slowly; you will have to experiment with determining the appropriate "break point" to switch from the series to the CF in your computing environment. There is also the alternative of using an asymptotic series instead of the Laplacian CF, but my experience is that the Laplacian CF is good enough for most applications.
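In code, the specialization above might look like this (the tolerance and the test point are arbitrary):

```
import math

def mills_cf(x, tol=1e-14, max_iter=200):
    # Lentz/Thompson-Barnett forward evaluation of the Laplacian
    # continued fraction; Y converges to 1/R(x).
    Y, C, D = x, x, 0.0
    for j in range(1, max_iter + 1):
        D = 1.0 / (x + j * D)
        C = x + j / C
        H = C * D
        Y *= H
        if abs(H - 1.0) < tol:
            break
    return 1.0 / Y

def mills_exact(x):
    return math.sqrt(math.pi / 2) * math.exp(x * x / 2) * math.erfc(x / math.sqrt(2))

print(abs(mills_cf(3.0) - mills_exact(3.0)) < 1e-10)  # True
```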
---
Finally, if you don't need to compute the (complementary) error function very accurately (i.e., to only a few significant digits), there are [compact](http://sites.google.com/site/winitzki/sergei-winitzkis-files/erf-approx.pdf) [approximations](http://sites.google.com/site/winitzki/top-index/Winitzki_2003_Uniform_approximations_for_transcendental_functions_LNCS_2667.pdf) due to Serge Winitzki. Here is one of them:
$$R(x)\approx \frac{\sqrt{2\pi}+x(\pi-2)}{2+x\sqrt{2\pi}+x^2(\pi-2)}$$
This approximation has a maximum relative error of $1.84\times 10^{-2}$ and becomes more accurate as $x$ increases.
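A quick numerical check of this approximation (the grid of $x$ values is arbitrary; the exact value is computed from erfc):

```
import math

def mills_winitzki(x):
    a = math.pi - 2
    return (math.sqrt(2 * math.pi) + a * x) / (2 + math.sqrt(2 * math.pi) * x + a * x * x)

def mills_exact(x):
    return math.sqrt(math.pi / 2) * math.exp(x * x / 2) * math.erfc(x / math.sqrt(2))

# Worst relative error over a coarse grid; consistent with the stated
# maximum of about 1.84e-2 (attained near x ~ 2).
worst = max(abs(mills_winitzki(x) - mills_exact(x)) / mills_exact(x)
            for x in [i / 10 for i in range(51)])
print(worst < 0.02)  # True
```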
| null | CC BY-SA 3.0 | null | 2011-05-04T03:37:21.730 | 2011-05-04T13:33:45.540 | 2011-05-04T13:33:45.540 | 830 | 830 | null |
10301 | 2 | null | 10293 | 1 | null | I think you mean something like: which categories should you take into account, and union with the OR operator, to get a good chance of predicting the last variable?
On your training set, try to develop a model based on minimum binary integer programming (mBIP; as proposed here: [http://www.sce.carleton.ca/faculty/chinneck/po/Chapter13.pdf](http://www.sce.carleton.ca/faculty/chinneck/po/Chapter13.pdf)). This should get you a good starting point for your prediction. Minimizing should be right for your disjunctive combination optimization.
Later on, you can process your test set and for each of the items either re-calculate the mBIP optimal set or try to inject it into the algorithm; I have no idea at the moment how efficient that can be.
Hope it helps though.
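If the number of categories is small, you could even brute-force it before reaching for integer programming; here is a sketch on the toy data from the question (I'm assuming the objective is to minimize misclassifications of cat4 by the OR of the chosen columns):

```
from itertools import combinations

# Toy data from the question: columns 0-2 are cat1-cat3, column 3 is cat4.
rows = [
    (0, 0, 0, 1),
    (1, 0, 0, 1),
    (0, 1, 1, 0),
]

def errors(chosen):
    # Misclassifications when cat4 is predicted by the OR of chosen columns.
    return sum(any(r[i] for i in chosen) != bool(r[3]) for r in rows)

# Brute force over all non-empty subsets; fine for a handful of categories,
# but with many categories the integer-programming route is the way to go.
best = min((s for k in range(1, 4) for s in combinations(range(3), k)),
           key=errors)
print(best, errors(best))  # (0,) 1
```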
| null | CC BY-SA 3.0 | null | 2011-05-04T04:30:36.277 | 2011-05-04T04:30:36.277 | null | null | 1158 | null |
10302 | 1 | 10340 | null | 57 | 44387 | I came across the term perplexity, which refers to the log-averaged inverse probability on unseen data. The Wikipedia [article](http://en.wikipedia.org/wiki/Perplexity) on perplexity does not give an intuitive meaning for it.
This perplexity measure was used in [pLSA](http://www.cs.brown.edu/~th/papers/Hofmann-UAI99.pdf) paper.
Can anyone explain the need and intuitive meaning of perplexity measure?
| What is perplexity? | CC BY-SA 3.0 | null | 2011-05-04T06:04:26.560 | 2021-12-10T23:40:29.340 | 2021-12-10T23:40:29.340 | 11887 | 4290 | [
"intuition",
"information-theory",
"measurement",
"perplexity"
] |
10303 | 1 | 10304 | null | 4 | 108 | Using OLS, I've estimated the following equation:
$y_i = \alpha_0 + \alpha_1 X_i + \epsilon_i$
I know that theoretically, the following should be true:
$y_i = a + (1-e^{-\lambda 60}) X_i$
Is there any way, having an estimate of $\alpha_1$ I can translate it to an estimate of $\lambda$?
As a follow up, if this is not possible without some difficulty, If I knew the distribution of $\alpha_1$ was a normal distribution with some mean and variance, is there a way to describe what the form of the distribution of $\lambda$ would be? I feel like it would be a log-linear distribution, but I'm not sure what the mean/variance would be.
| Using an OLS coefficient to estimate a non-linear coefficient | CC BY-SA 3.0 | null | 2011-05-04T06:45:56.653 | 2011-05-04T07:20:52.663 | null | null | 726 | [
"distributions",
"estimation",
"normal-distribution",
"log-linear"
] |
10304 | 2 | null | 10303 | 5 | null | Judging from your equations there is no reason for the OLS estimate of $\alpha_1$ not to be consistent and asymptotically normal. So you can use plug-in estimate for $\lambda$:
$$\hat{\lambda}=-\frac{1}{60}\log(1-\hat{\alpha}_1)$$
Using [delta method](http://en.wikipedia.org/wiki/Delta_method) it would be possible to show that this estimate is also consistent and asymptotically normal. The only caveat is that the estimate of $\alpha_1$ can lie outside the interval $[0,1]$, but this would indicate that your postulated model is incorrect.
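For example (the estimate and standard error below are invented numbers, just to show the plug-in and delta-method arithmetic):

```
import math

# Invented OLS results, purely to show the arithmetic.
alpha1_hat, alpha1_se = 0.45, 0.05

# Plug-in estimate of lambda.
lambda_hat = -math.log(1 - alpha1_hat) / 60

# d(lambda)/d(alpha1) = 1 / (60 * (1 - alpha1)), so the delta method gives:
lambda_se = alpha1_se / (60 * (1 - alpha1_hat))

print(round(lambda_hat, 5), round(lambda_se, 5))  # 0.00996 0.00152
```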
| null | CC BY-SA 3.0 | null | 2011-05-04T07:20:52.663 | 2011-05-04T07:20:52.663 | null | null | 2116 | null |
10305 | 1 | null | null | 4 | 1329 | When does/can one use the likelihood ratio significance test instead of Fisher's exact test or its Pearson $\chi^2$ approximation for comparing two binomial datasets?
Given two binomial datasets (distributions), I'm seeing the LR test being used to compare one distribution against the global (combined) distribution. Usually I apply Fisher's test for comparing one dataset against the other. I realize that LR testing is of the Neyman-Pearson school, which assumes a fully specified alternative model as well as null model. E.g., in the [LR test Wikipedia page example](http://en.wikipedia.org/wiki/Likelihood-ratio_test#Examples), it's being used to compare two binomial datasets (# heads/tails for two coins).
Why not use the $\chi^2$ test to compare the two samples against each other? What are the conceptual differences in these two approaches? When do I use which? And when is it appropriate to compare one sample against not its complement but the global dataset?
| Likelihood ratio test vs. $\chi^2$/Z-test for comparing binomial datasets | CC BY-SA 3.0 | null | 2011-05-04T07:38:37.180 | 2017-11-06T12:21:09.443 | 2017-11-06T12:21:09.443 | 101426 | 1720 | [
"hypothesis-testing",
"likelihood-ratio",
"fishers-exact-test"
] |
10306 | 2 | null | 10267 | 3 | null | A time series is usually [decomposed](http://en.wikipedia.org/wiki/Decomposition_of_time_series) into 3 parts: trend, seasonality and irregular. (The link gives 4 parts, but cyclical and seasonality are usually lumped together.) Strictly speaking, ARIMA-type models are only used for the irregular part, and by design these models do not incorporate any trend (I am assuming that the trend is some function which varies in time). So if you simply want to estimate an AR(2) model, no software will estimate the trend for you, since if it did, it would not be fitting an AR(2) model.
To forecast the trend you will first need to create some sort of model and test it, and only after you are confident that your model truly estimates the trend can you use it to forecast the trend. Without such a model any forecasting is impossible. Sadly, the majority of time series textbooks do not stress this when talking about forecasting.
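As a minimal illustration of modelling the trend separately (a least-squares line fitted to a made-up series; an ARIMA model would then be fit to the residuals, not to the original series):

```
# Made-up series with an upward trend.
y = [2.1, 3.9, 6.2, 7.8, 10.1, 12.0]
t = list(range(len(y)))

n = len(y)
t_bar, y_bar = sum(t) / n, sum(y) / n
slope = sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, y)) / \
        sum((ti - t_bar) ** 2 for ti in t)
intercept = y_bar - slope * t_bar

# The residuals are the "irregular" part that an ARIMA model would describe.
residuals = [yi - (intercept + slope * ti) for ti, yi in zip(t, y)]

# A trend forecast two steps past the end of the sample.
forecast_t8 = intercept + slope * 8
print(round(slope, 3), round(forecast_t8, 3))  # 1.991 17.97
```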
| null | CC BY-SA 3.0 | null | 2011-05-04T08:02:11.270 | 2011-05-04T08:02:11.270 | null | null | 2116 | null |
10307 | 2 | null | 10285 | 8 | null |
- You need to decide what is acceptable statistical power for a given significance test. The rule of thumb of 80% power being reasonable is often bandied about.
However, I think it is more sensible to see sample size selection as an optimisation problem, where statistical power is but one consideration, and the cost of collecting additional data is considered (see here for further discussion of my thoughts).
- Mixed ANOVA involves multiple potential significance tests. At the very least there are two main effects and an interaction. Potentially, there are also various comparisons. Each significance test can have different statistical power, and thus, literally, it does not make sense to speak of "the" statistical power of a mixed design as if it was a singular property. When determining the minimum sample size, you may want to think about which of the possible significance tests on the mixed ANOVA are important. If all of them are important, then you may want to consider the sample size required by the least powerful significance test.
- G*Power 3 permits power analysis for all three types of significance tests in a mixed ANOVA (between-subjects; within-subjects; and interaction).
Download this free software and go to the tests - means - Repeated Measures ... menu.
The software permits a priori (i.e., calculate sample size for given effect size, alpha, power, and design) or post hoc (i.e., calculate power for given sample size, effect size, alpha, and design) power analysis.
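As a sanity check on any power calculation, you can always simulate. This sketch does it for the simplest possible case — a two-group z-test with known variance, not the full mixed ANOVA — and compares simulated power with the analytic formula (the effect size, group size and alpha are made up):

```
import math
import random

random.seed(1)

d, n, z_crit = 0.5, 64, 1.959963985  # effect size, per-group n, z for alpha = .05

def phi(z):  # standard normal CDF
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Analytic power of a two-sided, two-group z-test (known sigma = 1).
analytic_power = phi(d * math.sqrt(n / 2) - z_crit)

# Simulated power: fraction of simulated experiments that reject H0.
sims, hits = 5_000, 0
for _ in range(sims):
    m1 = sum(random.gauss(0, 1) for _ in range(n)) / n
    m2 = sum(random.gauss(d, 1) for _ in range(n)) / n
    z = (m2 - m1) / math.sqrt(2 / n)
    hits += abs(z) > z_crit

print(round(analytic_power, 3))  # about 0.807
```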
| null | CC BY-SA 3.0 | null | 2011-05-04T08:25:15.447 | 2011-05-04T16:12:13.730 | 2011-05-04T16:12:13.730 | 183 | 183 | null |
10308 | 1 | null | null | 9 | 2492 | I would like to explore the different ways one can detrend a time series without look ahead bias.
I wanted to use the Hodrick-Prescott filter, which seems like quite a good frequency filter, but it is based on an optimization method, and I understand that it may give strange and volatile results at the border.
Wavelet smoothing on a rolling window would be another option, but again border effects can be huge (the data is copied by symmetry, which is horrible for the precision of the technique at the edge).
Any idea or comments?
PS: The subject has already been discussed here, I know. But I would like to dig a bit more on a more precise question.
| How to remove trend with no look ahead bias? | CC BY-SA 3.0 | null | 2011-05-04T08:30:37.473 | 2013-10-09T06:01:19.347 | 2011-05-04T11:02:09.977 | 1709 | 1709 | [
"time-series",
"econometrics",
"trend"
] |
10309 | 1 | null | null | 6 | 9142 | I am carrying out a statistical analysis for my research. I am using a Friedman's test with post hoc analysis. At present I am using the function `friedman.test.with.post.hoc` available for R software. This function reports a "maxT" value instead of the chi-square value. Can someone explain what maxT is and how it relates to the chi-square statistic?
Thank you.
| Friedman's test and post-hoc analysis | CC BY-SA 3.0 | null | 2011-05-04T09:07:08.330 | 2011-05-13T09:55:18.950 | 2011-05-04T09:46:04.787 | 930 | 4461 | [
"hypothesis-testing",
"anova",
"nonparametric",
"repeated-measures",
"permutation-test"
] |
10310 | 2 | null | 10308 | 0 | null | De-trending requires pre-specifying how many values you require before declaring that a new trend has started. Given this specification, say n values, one then has to be concerned with distinguishing between level shifts (i.e. intercept changes) and time-trend changes. If you assume that there are no level shifts, then simply search over different points in time and select those points which have been found to be statistically significant. For example, if you have the series 1,2,3,4,5,7,9,11 ... this would suggest two points in time where the trend "changed": period 1 and period 5. Alternatively, if you have a series like 0,0,0,0,0,1,2,3,4,5 ... there is only 1 point in time where a significant trend is evidenced, i.e. period 5. Outliers and ARIMA structure in a time series can lead to distortions in the identification of trend-point changes and may need to be incorporated prior to trend-point detection. A recent paper on tree-ring data [http://www.autobox.com/pdfs/forestdisturbance.pdf](http://www.autobox.com/pdfs/forestdisturbance.pdf) discusses this issue.
| null | CC BY-SA 3.0 | null | 2011-05-04T09:12:27.980 | 2011-05-04T09:12:27.980 | null | null | 3382 | null |
10311 | 1 | 10312 | null | 7 | 730 | I analyze a set of multivariate measurements. It is known that several pairs of independent variables show high linear correlation. The graph below shows a scatterplot of one such pair (X and Y, upper pane), as well as the residuals as a function of Y (lower left pane) and the histogram of these residuals (lower right pane)

As one can see, there is a strange peak in the residual histogram. Many of the remaining linearly-dependent variable pairs from the same data set have a similar peak. I have double-checked, and I'm sure there are no duplicate records in the data set. What might be the reason for such behaviour?
PS Please don't ask me to elaborate on the problem domain, I'm not allowed to.
| Weird residuals in linear regression | CC BY-SA 3.0 | null | 2011-05-04T11:03:30.273 | 2012-07-25T15:10:14.870 | 2012-07-25T15:10:14.870 | 3748 | 1496 | [
"regression",
"dataset",
"outliers",
"residuals"
] |
10312 | 2 | null | 10311 | 9 | null | What is the value of the residual that shows such a high count? It does not appear to be zero (slightly to the right of 0), so maybe 1? In any case, there may be something about that value that may provide you with some insight about the underlying mechanism. For example, if X and Y are measurements taken by observers, some of them may have a tendency to follow a certain pattern (i.e., Joe thinks: "everybody knows that Y is always 1 point higher than X" and "observes" accordingly), leading to such results. Without domain-level knowledge though, it is difficult to guess what is going on here.
| null | CC BY-SA 3.0 | null | 2011-05-04T12:03:26.133 | 2011-05-04T12:03:26.133 | null | null | 1934 | null |
10313 | 2 | null | 10305 | 3 | null | Generally speaking, the likelihood ratio and the ordinary Pearson $\chi^2$ tests are more accurate than Fisher's "exact" test. But for your situation you need an extremely heavy multiplicity adjustment thrown in, no matter which statistical test is used. Decision trees such as the one you are building require amazingly large datasets for their structure to validate. In a quick look at the CN2 link you provided I could not tell if the algorithms incorporated shrinkage (penalization; regularization). If not, watch for over-interpretation.
| null | CC BY-SA 3.0 | null | 2011-05-04T12:42:45.710 | 2011-05-04T12:42:45.710 | null | null | 4253 | null |
10314 | 2 | null | 9040 | 4 | null | Check out Gephi; this software has some very good layout algorithms to handle the spaghetti problem: [http://gephi.org/features/](http://gephi.org/features/)
Especially, try the ForceAtlas layout: [http://forum.gephi.org/viewtopic.php?f=26&t=926](http://forum.gephi.org/viewtopic.php?f=26&t=926)
The software lets you control the parameters in real time, and you can move the nodes manually.
(disclamer: I'm part of this community)
| null | CC BY-SA 3.0 | null | 2011-05-04T13:50:39.353 | 2011-05-04T13:50:39.353 | null | null | 4443 | null |
10315 | 2 | null | 9435 | 1 | null | Gephi, an open source network visualization software, can do that: [http://gephi.org](http://gephi.org)
See [this recent discussion](http://forum.gephi.org/viewtopic.php?f=29&t=1016) on what a user was able to do with a bipartite graph, which is what you have. It is also called a bipartite network, or a two-mode network in Social Network Analysis.
Though you'll need some tricks, because Gephi doesn't have a layout to handle that directly. Feel free to ask the user how he got that result.
(disclaimer: I'm part of this community)
| null | CC BY-SA 3.0 | null | 2011-05-04T13:57:44.040 | 2011-05-04T13:57:44.040 | null | null | 4443 | null |
10316 | 1 | null | null | 13 | 43904 | I'm working on a multiple logistic regression in R using `glm`. The predictor variables are continuous and categorical. An extract of the summary of the model shows the following:
```
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 2.451e+00 2.439e+00 1.005 0.3150
Age 5.747e-02 3.466e-02 1.658 0.0973 .
BMI -7.750e-02 7.090e-02 -1.093 0.2743
...
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
Confidence intervals:
```
2.5 % 97.5 %
(Intercept) 0.10969506 1.863217e+03
Age 0.99565783 1.142627e+00
BMI 0.80089276 1.064256e+00
...
```
Odd ratios:
```
Estimate Std. Error z value Pr(>|z|)
(Intercept) 1.159642e+01 11.464683 2.7310435 1.370327
Age 1.059155e+00 1.035269 5.2491658 1.102195
BMI 9.254228e-01 1.073477 0.3351730 1.315670
...
```
The first output shows that $Age$ is significant. However, the confidence interval for $Age$ includes the value 1 and the odds ratio for $Age$ is very close to 1. What does the significant p-value from the first output mean? Is $Age$ a predictor of the outcome or not?
| Interpreting logistic regression output in R | CC BY-SA 3.0 | null | 2011-05-04T14:49:20.663 | 2018-02-07T22:11:40.747 | 2013-10-24T13:32:53.260 | 28740 | 2824 | [
"r",
"logistic",
"interpretation",
"p-value"
] |
10317 | 2 | null | 10316 | 8 | null | There are a host of questions here on the site that will help with the interpretation of the model's output (here are three different examples: [1](https://stats.stackexchange.com/q/3628/1036), [2](https://stats.stackexchange.com/q/8106/1036), [3](https://stats.stackexchange.com/q/6740/1036); I am sure there are more if you dig through the archive). There is also a [tutorial on the UCLA](https://stats.idre.ucla.edu/other/mult-pkg/faq/general/faq-how-do-i-interpret-odds-ratios-in-logistic-regression/) stats website on how to interpret the coefficients for logistic regression.
Although the odds-ratio for the age coefficient is close to one, that does not necessarily mean the effect is small (whether an effect is small or large is frequently as much a normative question as an empirical one). One would need to know the typical variation in age between observations to form a more informed opinion.
| null | CC BY-SA 3.0 | null | 2011-05-04T15:15:26.640 | 2018-02-07T22:11:40.747 | 2018-02-07T22:11:40.747 | 194258 | 1036 | null |
10318 | 2 | null | 10308 | 3 | null | There is no way to get rid of the end effects.
Like any interpolation technique, the HP method depends on data before and after the current location to provide a filtered point/line for that location. As you approach either end of the data series and drop below the required number of future (or past) points, you either don't provide the filtered line or the characteristics of the filtered line must change.
It is dangerous to blindly extend the line and assume that it has the same properties at the ends of the series as it does in the middle. The bottom line is that the HP filter has no predictive power.
| null | CC BY-SA 3.0 | null | 2011-05-04T15:42:02.307 | 2011-05-04T15:42:02.307 | null | null | 2775 | null |
10320 | 1 | null | null | 5 | 154 | Context
I have a set of data that was collected from several inertial measurement units (orientation and acceleration data). I want to determine to what extent an inference method degrades when the data becomes noisy (or, should I say "noisier").
Questions
How do I determine what type of noise to use? Is there a way in which I can find this out from the data itself?
What range of noise levels would be appropriate for testing the inference method's robustness? My guess is that it would depend on the answer to the question: "how noisy would you expect the environment/system to get"? Again, given only the collected data, what is the best approach in determining these levels?
| Determining noise type and level of noise | CC BY-SA 3.0 | null | 2011-05-04T15:56:25.277 | 2011-05-04T15:56:25.277 | null | null | 3052 | [
"regression",
"white-noise"
] |
10321 | 2 | null | 10058 | 3 | null | Following Nick Sabbe's answer, here is the simplest GLMM solution I can come up with:
```
dej$location <- factor(rep(1:25,2))
library(lme4)
glmer(count ~ type1 + type2*species +
perc.for.100m + perc.dry.100m + perc.wet.100m +
(1|location), family = poisson, data = dej)
```
It would be a good idea to check for overdispersion too.
For your picture, I would try
```
library(ggplot2)
ggplot(dej,aes(x=type2,y=count))+stat_sum(aes(size=..n..))+
facet_grid(.~species)
```
mainly for the advantage of `stat_sum`, which will easily show where you have a lot of overplotting (more simply you could try jittering)
| null | CC BY-SA 3.0 | null | 2011-05-04T16:50:48.563 | 2011-05-04T16:50:48.563 | null | null | 2126 | null |
10322 | 1 | 10327 | null | 4 | 338 | My girlfriend is an Actuarial Analyst at a large insurance company in the Netherlands and because we'll soon have our two year anniversary, I thought of gifts for her.
On [Proof: Math is beautiful](http://proofmathisbeautiful.tumblr.com/post/5104877044/baseln-statistical-distribution-plushies-am) I discovered these [Distribution pluffies](http://www.etsy.com/listing/71739287/collection-of-10-distribution-plushies).
So here's my question: What distribution is of the most relevance in the field of an econometrician?
The available pluffies are:
- Standard Normal Distribution
- t Distribution
- Chi-Square Distribution
- Log Normal Distribution
- Continuous Uniform Distribution
- Weibull Distribution
- Cauchy Distribution
- Poisson Distribution
- Gumbel Distribution
- Erlang Distribution
Any help much appreciated.
EDIT: Thanks a lot for all the suggestions despite this being just off-topic! I'll get her the t Distribution pluffy.
| What distribution pluffy to buy for an aspiring econometrician? | CC BY-SA 3.0 | null | 2011-05-04T18:25:56.640 | 2013-05-06T21:09:32.367 | 2013-05-06T21:09:32.367 | 25315 | 4468 | [
"distributions"
] |
10323 | 2 | null | 10322 | 0 | null | From the list I would pick the standard normal. After all, regression is the main tool of the econometrician, and usually an econometrician can rely only on asymptotic results, hence the standard normal rules them all :)
Having said that, I would not like to get a standard normal distribution pluffy (I am not a girl, but can be considered an econometrician), since the standard normal is so widely used that it is a bit mundane. So you could choose the log-normal, since econometricians use log-log regressions extensively, or the Cauchy as a reminder that not all distributions have finite moments.
| null | CC BY-SA 3.0 | null | 2011-05-04T18:39:13.447 | 2011-05-04T18:39:13.447 | null | null | 2116 | null |
10324 | 2 | null | 10322 | 6 | null | You're in big trouble if you're asking us for gift advice.
| null | CC BY-SA 3.0 | null | 2011-05-04T19:10:31.520 | 2011-05-04T19:10:31.520 | null | null | 2775 | null |
10325 | 1 | null | null | 7 | 1700 | In the book “Programming Collective Intelligence” Segaran explains the Fisher method for categorizing text as an alternative to Naive Bayes classifier. The Fisher method uses inverse-chi-square-distribution, which I do not really understand.
I watched this video found on stats.stackexchange about chi-square-distribution to understand at least the “forward” function: [http://www.youtube.com/watch?v=dXB3cUGnaxQ](http://www.youtube.com/watch?v=dXB3cUGnaxQ)
Segaran explains in his book that they use the inverse chi-square to somehow get a probability "that a random set of probabilities would return such a high number". By a high number he means that an item fitting a specific category has many features with high probabilities in that category. Somehow he also seems to take into account that "if the probabilities were independent and random, the result of this calculation would fit a chi-squared distribution". But as he mentioned before, the words are not independent (which is also a false assumption in Naive Bayes). So how does this work?
And if I understand it right, the inverse chi-square function somehow checks whether many of my words have a high probability of being in the text, and only if all words have such a high probability does it return a high overall probability?
I’m sort of confused.
PS: The whole paragraph:
“Fisher showed that if the probabilities were independent and random, the result of this calculation would fit a chi-squared distribution. You would expect an item that doesn’t belong in a particular category to contain words of varying feature probabilities for that category (which would appear somewhat random), and an item that does belong in that category to have many features with high probabilities. By feeding the result of the Fisher calculation to the inverse chi-square function, you get the probability that a random set of probabilities would return such a high number.”
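As far as I can tell, the calculation boils down to Fisher's method for combining p-values. Here is a minimal R sketch of my current understanding (the feature probabilities are made up, and I am not sure this matches Segaran's code exactly):

```r
p <- c(0.05, 0.10, 0.30)   # hypothetical per-feature probabilities for one category
fisher_stat <- -2 * sum(log(p))
# the "inverse chi-square" step: upper tail of a chi-squared distribution
# with 2k degrees of freedom, where k is the number of features
pchisq(fisher_stat, df = 2 * length(p), lower.tail = FALSE)
```

Is this roughly what the book is doing?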
| What does inverse-chi-square in Fisher method (classifying) exactly do? | CC BY-SA 3.0 | 0 | 2011-05-04T19:24:03.320 | 2021-02-06T17:05:42.443 | 2021-02-06T17:05:42.443 | 11887 | 4350 | [
"text-mining",
"chi-squared-distribution",
"combining-p-values",
"inverse-gamma-distribution"
] |
10326 | 2 | null | 10322 | 2 | null | Insurance is all about skewed distributions with long tails: think amount of loss. These also typically have only positive values. The log-normal distribution looks most like one of those. Another good option is the Gumbel distribution, which comes up in extreme value theory.
| null | CC BY-SA 3.0 | null | 2011-05-04T19:31:35.703 | 2011-05-04T19:31:35.703 | null | null | 279 | null |
10327 | 2 | null | 10322 | 13 | null | You gotta get her one with some Kurtosis. Maybe the t-distribution. And be sure and write a loving note along the lines of, "Baby, when I think of fat tails, I think of you. Your kurtosis makes you non-normal."
My wife digs it when I get sappy like that. I have the scars to prove it.
| null | CC BY-SA 3.0 | null | 2011-05-04T20:10:51.810 | 2011-05-04T20:10:51.810 | null | null | 29 | null |
10328 | 1 | 10336 | null | 7 | 8117 | I know that I, and others, sometimes get confused by the hypergeometric distribution (HD) as it pertains to overlapping lists. This is because the HD is usually described with the "balls in an urn" metaphor and not using "overlapping lists."
What is the proper way to calculate the p-value, according to the hypergeometric distribution, for the overlap of $B$ and $C$ in the lists below, ideally using the `phyper` function in R, where
- $A$ contains all of the genes in the genome
- $B$ is one subset of genes in the genome
- $C$ is another subset of genes in the genome?
| Using R's phyper to get the probability of list overlap | CC BY-SA 3.0 | null | 2011-05-04T15:13:26.520 | 2011-05-05T07:29:10.630 | 2011-05-05T07:29:10.630 | null | 3561 | [
"r"
] |
10329 | 1 | 10333 | null | 8 | 495 | What functionality should exist in a [CAS](http://en.wikipedia.org/wiki/Computer_algebra_system) that was specifically geared toward Statistics?
Symbolic algebra systems like Mathematica and Maple are often used for calculus, logic, and physics problems but are rarely used for statistics. Why is this?
What statistical constructs could be added to a symbolic algebra system to improve its use in this field? What are some specific code samples that many people would like to be able to do.
Please think about the following three users: research statistician, non-statistics researcher using statistics in another field (such as biology), statistics student.
I'll be working on [SymPy's](http://sympy.org/) statistics code over the next few months and would like to solicit input for desired functionality. The things I use are not necessarily what the broader community uses.
| Symbolic computer algebra for statistics | CC BY-SA 3.0 | null | 2011-05-04T20:32:27.960 | 2019-01-19T23:04:45.170 | 2019-01-19T23:04:45.170 | 99274 | 3830 | [
"python",
"computational-statistics",
"mathematica",
"maple"
] |
10330 | 2 | null | 10322 | 2 | null | Aren't [econometricians](http://en.wikipedia.org/wiki/What%27s_that_got_to_do_with_the...?) concerned with the [price of t (distributions) in China](http://supertart.com/priceofteainchina/index.php)? It has the large (on occasion, infinite) kurtosis recommended by @JD Long, too.
| null | CC BY-SA 3.0 | null | 2011-05-04T20:52:28.667 | 2011-05-04T20:52:28.667 | null | null | 919 | null |
10331 | 1 | 10332 | null | 5 | 1210 | I'm working on some practice test problems, and one of them says to design a rejection sampling algorithm to produce draws from a unit exponential using draws from a Gamma(2,1).
I don't understand how this is possible, because I am under the impression that the "envelope function" g(x) needs to be scalable in such a manner that for some constant $M$, $Mg(x)\geq f(x)\; \forall x$.
I can't see any way to do this, as the Gamma(2,1) is going to have little mass around 0, while the exponential function has most of its mass around 0. What kind of transformation do I need to do to the Gamma function to allow it to function as an envelope?
Using R, I tried flipping it to make it an inverse gamma, but that won't adequately capture the probability mass close to 0 and a K-S test confirmed that the points I generated did not arise from a unit exponential.
edit: I will include my code in which I tried to use an inverse-gamma(2,1) as an envelope:
```
x <- c()
for(i in 1:100000)
{
g <- runif(1, 0, 1)
h <- rigamma(1, 2, 1)
M <- densigamma(h, 2, 1)
crit <- dexp(h, 1)/M
if(g < crit)
x[i] = h
}
```
| How to use rejection sampling to generate draws from Unit Exponential | CC BY-SA 3.0 | null | 2011-05-04T21:17:49.260 | 2011-05-05T21:09:49.177 | 2011-05-05T21:09:49.177 | 8 | 2984 | [
"self-study",
"monte-carlo",
"simulation"
] |
10332 | 2 | null | 10331 | 5 | null | Try a location shift on the Gamma(2,1)
EDIT:
[Illustration](http://www.wolframalpha.com/input/?i=3%2a%28x%2b1%29%20%2a%20exp%28-%28x%2b1%29%29%20vs%20exp%28-x%29%20from%200%20to%206)
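For example, something along these lines should work (a sketch, with $M = 3$ as in the illustration; proposals below zero are rejected automatically because `dexp` returns 0 there):

```r
set.seed(1)
n <- 100000
h <- rgamma(n, shape = 2, rate = 1) - 1   # location-shifted Gamma(2,1) proposals
M <- 3                                    # envelope constant from the illustration
u <- runif(n)
accept <- u < dexp(h, rate = 1) / (M * dgamma(h + 1, shape = 2, rate = 1))
x <- h[accept]
ks.test(x, "pexp", 1)                     # should no longer reject the unit exponential
```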
| null | CC BY-SA 3.0 | null | 2011-05-04T23:17:36.520 | 2011-05-05T17:07:45.637 | 2011-05-05T17:07:45.637 | 3567 | 3567 | null |
10333 | 2 | null | 10329 | 9 | null | Support for matrix algebra. The vast majority of practiced statistics is multivariate and involves matrices, and often simplifying matrix forms requires special rules that aren't easily translated from a univariate case, so good matrix support would be really helpful.
| null | CC BY-SA 3.0 | null | 2011-05-04T23:29:40.427 | 2011-05-04T23:29:40.427 | null | null | 2839 | null |
10334 | 2 | null | 10295 | 1 | null | This is my theory. I think there are two kinds of questions: those that remain mostly within SE (which usually have fewer views), and those that are viewed by outsiders because it was linked from somewhere else (usually have more views).
For the questions that remain mostly within SE, votes are a good measure of interesting questions. This is the point of votes.
When a question is linked to outside the site the votes stop meaning as much. Some linking sites may have very few SE members, others may have more. The variance of the number of votes for these questions is probably high (as evidenced by your score vs view plot, where the right side of the curve blooms out). These questions will have more views, and views MAY be a better indicator of interesting questions. Or questions that a larger community happened to find more interesting. There are many variables in this situation, and I think it would be worth trying to find more information to differentiate these cases. Does SE publicize referral information?
| null | CC BY-SA 3.0 | null | 2011-05-05T01:09:16.273 | 2011-05-05T01:09:16.273 | null | null | 2965 | null |
10335 | 2 | null | 10295 | 3 | null | One might define an interesting question as one that has received comparatively many votes given the number of views. To this end, you can create a baseline curve that reflects the expected number of votes given the views. Curves that attracted a lot more votes than the baseline were considered particularly interesting.
To construct the baseline, you may want to calculate the median number of votes per 100-view bin. In addition, you could calculate the median absolute deviation (MAD) as a robust measure for the standard deviation per bin. Then, "interestingness" can be calculated as
```
interestingness(votes,views) = (votes-baselineVotes(views))/baselineMAD(views)
```
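Assuming `votes` and `views` are parallel numeric vectors, a rough R implementation could look like this (you may want to guard against bins where the MAD is zero):

```r
bin <- floor(views / 100)                       # 100-view bins
baselineVotes <- ave(votes, bin, FUN = median)  # median votes per bin
baselineMAD <- ave(votes, bin, FUN = mad)       # robust spread per bin
interestingness <- (votes - baselineVotes) / baselineMAD
```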
| null | CC BY-SA 3.0 | null | 2011-05-05T01:16:53.857 | 2011-05-05T01:16:53.857 | null | null | 198 | null |
10336 | 2 | null | 10328 | 10 | null | Trying to translate this into a statistical question, it seems you have a population with $a$ members and you take two random samples without replacement sized $b$ and $c$, and you want the distribution of $X$, the number appearing in both samples.
As an illustration, suppose $a=5$, $b=2$ and $c=3$. There are 100 ways of taking the samples, of which 10 have none in common, 60 have one in common and 30 have two in common. In the language of black and white balls in an urn, the urn has $b=2$ white balls and $a-b=3$ black balls, and we take $c=3$ balls out to inspect how many white balls come out. In R we can effectively get these values with
```
> totalpop <- 5
> sample1 <- 2
> sample2 <- 3
> dhyper(0:2, sample1, totalpop-sample1, sample2)
[1] 0.1 0.6 0.3
> phyper(-1:2, sample1, totalpop-sample1, sample2)
[1] 0.0 0.1 0.7 1.0
```
which confirms the earlier calculations.
If you want to test a number `overlap`, then the probability of getting that number or smaller from this model is
```
phyper(overlap, sampleb, totala - sampleb, samplec)
```
and of getting that number or larger is
```
1 - phyper(overlap - 1, sampleb, totala - sampleb, samplec)
```
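To make the gene-list wording concrete, with purely hypothetical sizes (`totala` genes in the genome, lists of `sampleb` and `samplec` genes, and an observed overlap of 25 genes), the enrichment p-value would be

```r
totala <- 20000   # genes in the genome (hypothetical numbers)
sampleb <- 300    # genes in list B
samplec <- 500    # genes in list C
overlap <- 25     # genes appearing in both B and C
1 - phyper(overlap - 1, sampleb, totala - sampleb, samplec)  # P(X >= overlap)
```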
| null | CC BY-SA 3.0 | null | 2011-05-05T01:29:25.800 | 2011-05-05T01:29:25.800 | null | null | 2958 | null |
10337 | 1 | 22173 | null | 8 | 34425 | What would be $\operatorname{Var}(X^2)$, if $\operatorname{Var}(X)=\sigma^2$?
| $\operatorname{Var}(X^2)$, if $\operatorname{Var}(X)=\sigma^2$ | CC BY-SA 4.0 | null | 2011-05-05T03:42:06.087 | 2018-12-18T23:35:56.927 | 2018-12-18T23:35:56.927 | 5176 | 3903 | [
"mathematical-statistics",
"variance"
] |
10338 | 1 | null | null | 3 | 1980 | I want to perform bootstrapping for calculation of efficiency score from data envelopment analysis (DEA) using R.
- Are there any examples of data and results for this type of analysis in R to enable me to check my results?
- Are there any online or other resources that might assist my task?
| Bootstrapping data envelopment analysis efficiency score using R | CC BY-SA 3.0 | null | 2011-05-05T05:47:30.180 | 2012-10-09T09:41:04.573 | 2011-05-05T08:03:28.140 | 183 | 4472 | [
"r",
"bootstrap",
"efficiency"
] |
10339 | 2 | null | 10337 | 13 | null | As a simple example of the responses of @user2168 and @mpiktas:
The variance of the set of values 1, 2, 3 is 0.67, while the variance of their squares is 10.89. On the other hand, the variance of 2, 3, 4 is also 0.67, but the variance of the squares is 24.22.
These are just variances for finite sets of data, but the idea extends to distributions.
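The numbers above can be checked with a couple of lines of R, using the population variance (i.e. dividing by n):

```r
pvar <- function(x) mean((x - mean(x))^2)  # population variance
pvar(1:3)       # 0.667
pvar((1:3)^2)   # 10.89
pvar(2:4)       # 0.667
pvar((2:4)^2)   # 24.22
```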
| null | CC BY-SA 3.0 | null | 2011-05-05T06:35:12.517 | 2011-05-05T06:35:12.517 | null | null | 4257 | null |
10340 | 2 | null | 10302 | 25 | null | You have looked at the [Wikipedia article on perplexity](http://en.wikipedia.org/wiki/Perplexity). It gives the perplexity of a discrete distribution as
$$2^{-\sum_x p(x)\log_2 p(x)}$$
which could also be written as
$$\exp\left({\sum_x p(x)\log_e \frac{1}{p(x)}}\right)$$
i.e. as a weighted geometric average of the inverses of the probabilities. For a continuous distribution, the sum would turn into a integral.
The article also gives a way of estimating perplexity for a model using $N$ pieces of test data
$$2^{-\sum_{i=1}^N \frac{1}{N} \log_2 q(x_i)}$$
which could also be written
$$\exp\left(\frac{{\sum_{i=1}^N \log_e \left(\dfrac{1}{q(x_i)}\right)}}{N}\right) \text{ or } \sqrt[N]{\prod_{i=1}^N \frac{1}{q(x_i)}}$$
or in a variety of other ways, and this should make it even clearer where "log-average inverse probability" comes from.
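As a quick numerical check in R, using a made-up three-outcome distribution, the two forms agree:

```r
p <- c(0.5, 0.25, 0.25)
2^(-sum(p * log2(p)))     # perplexity via the entropy formula
exp(sum(p * log(1 / p)))  # weighted geometric average of inverse probabilities
# both give 2^1.5, about 2.83
```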
| null | CC BY-SA 3.0 | null | 2011-05-05T07:07:56.453 | 2011-05-05T07:07:56.453 | null | null | 2958 | null |
10342 | 1 | null | null | 4 | 3131 | Given a panel of countries over time, a fixed effects estimator makes sense to control for country-specific effects. My intuition tells me that if the dependent variable is correlated with lags of the independent variables, then bias will be introduced into the estimator. However, I'm having difficulty rigorously understanding why this would be the case. Additionally, is there any easy way to tell if the bias would be towards or away from zero?
| Panel Data: In a fixed effects model, does auto-correlation introduce bias? | CC BY-SA 3.0 | null | 2011-05-05T08:06:46.827 | 2011-05-05T08:06:46.827 | null | null | 726 | [
"autocorrelation",
"panel-data",
"fixed-effects-model"
] |
10343 | 1 | 10351 | null | 8 | 2393 | I am totally confused: On the one hand you can read all kinds of explanations why you have to divide by n-1 to get an unbiased estimator for the (unknown) population variance (degrees of freedom, not defined for sample size 1 etc.) - see e.g. [here](http://en.wikipedia.org/wiki/Bessel%27s_correction) or [here](https://stats.stackexchange.com/questions/3931/intuitive-explanation-for-dividing-in-n-1-when-calculating-sd).
On the other hand when it comes to variance estimation of a supposed normal distribution all of this doesn't seem to be true anymore. There it is said that the maximum likelihood estimator for variance includes only a division by n - see e.g. [here](http://en.wikipedia.org/wiki/Normal_distribution#Estimation_of_parameters).
Now, can anyone please enlighten me why it is true here but not there? I mean, normality is what most models boil down to (not least due to the [CLT](http://en.wikipedia.org/wiki/Central_limit_theorem)). So is "division by n" the best choice for estimating the true population variance after all?
| When estimating variance, why do unbiased estimators divide by n-1 yet maximum likelihood estimates divide by n? | CC BY-SA 4.0 | null | 2011-05-05T08:11:02.280 | 2019-03-02T22:56:03.470 | 2018-09-12T08:27:00.863 | 11887 | 230 | [
"normal-distribution",
"variance",
"unbiased-estimator"
] |
10344 | 2 | null | 10343 | 7 | null | The MLE is indeed found through division by n. However, MLEs are not guaranteed to be unbiased, so there is no contradiction in the fact that the unbiased estimator (dividing by n-1) is used.
In practice, for reasonable sample sizes, it should not make a big difference anyway.
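A quick R illustration of the two estimators (R's built-in `var` divides by n-1):

```r
set.seed(42)
n <- 50
x <- rnorm(n)
var(x)                  # unbiased estimator, divides by n - 1
mean((x - mean(x))^2)   # MLE, divides by n
var(x) * (n - 1) / n    # the same MLE, obtained by rescaling
```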
| null | CC BY-SA 3.0 | null | 2011-05-05T08:18:56.053 | 2011-05-05T08:18:56.053 | null | null | 4257 | null |
10345 | 2 | null | 423 | 54 | null | 
Found this one in the [comments on Andrew Gelman's blog](http://www.stat.columbia.edu/~cook/movabletype/archives/2011/04/worst_statistic.html#comment-2512168).
| null | CC BY-SA 3.0 | null | 2011-05-05T09:27:32.327 | 2011-05-05T09:27:32.327 | null | null | 442 | null |
10346 | 1 | 10348 | null | 4 | 102 | I am running a logistic regression with customer event data with multiple predictors.
However, one variable is extremely important, alone predicting 60% of the customers for the event. When this main predictor is included in the model, other predictors add very little to prediction over and above this main predictor.
This main predictor is not a post event variable. The variable has full business support to be in the model.
- Given this, is it still okay to retain this main predictor variable in the model?
- Does this suggest that there is anything wrong with the model?
| Is it problematic if one predictor in a set accounts for almost all the prediction? | CC BY-SA 3.0 | null | 2011-05-05T09:34:46.440 | 2011-05-05T10:32:03.557 | 2011-05-05T10:31:15.537 | 183 | 1763 | [
"logistic",
"modeling"
] |
10347 | 1 | null | null | 6 | 8846 | I have made a heatmap based upon a regular data matrix in R, the package I use is `pheatmap`. Regular clustering of my samples is performed by the `distfun` function within the package.
Now I want to attach a precomputed distance matrix (generated by Unifrac) to my previously generated matrix/heatmap. Is this possible?
| Making a heatmap with a precomputed distance matrix and data matrix in R | CC BY-SA 3.0 | null | 2011-05-05T09:39:26.173 | 2019-01-15T23:26:36.430 | 2011-05-05T11:18:43.750 | 930 | 4473 | [
"r",
"data-visualization"
] |
10348 | 2 | null | 10346 | 2 | null | I understand your gut feeling. But depending on the type of response and predictor, this does not have to be unusual (example: response = "weight", predictor "height" plus others with presumably less meaning like "state", "favorite movie", etc.).
However, you should check that only information that was actually available at the time was used to create the predictor.
Here is an example:
Suppose you want to predict an event of the next day. You have raw data from day 1 to 10. Now you create one predictor by calculating a statistic over all 10 days and other predictors using only "local" information (i.e. directly extracted from a single event on a single day, i.e. "event start time"). Now you build a model from day 1 to 5 using this predictors to predict the events on day 6. The model is flawed because it contains information from day 6-10, information which is never available when building the model for real (i.e. not simulation) usage.
If this check has been successful (I guess that is what you mean by "not a post-event variable"), then it shouldn't be a problem. In one of my projects, something similar occurred: checking the meaning of the variables indeed revealed that the detected relationship (i.e. variable importance) was trivial, but true ;).
| null | CC BY-SA 3.0 | null | 2011-05-05T10:32:03.557 | 2011-05-05T10:32:03.557 | null | null | 264 | null |
10349 | 2 | null | 10347 | 4 | null | Ok, so you can just look at the code by typing the name of the function at the R prompt, or use `edit(pheatmap)` to see it in your default editor.
Around line 14 and 23, you'll see that another function is called for computing the distance matrices (for rows and columns), given a distance function (R `dist`) and a method (compatible with `hclust` for hierarchical clustering in R). What does this function do? Use `getAnywhere("cluster_mat")` to print it on screen, and you soon notice that it does nothing more than returning an `hclust` object, that is your dendrogram computed from the specified distance and linkage options.
So, if you already have your distance matrix, change line 14 (rows) or 23 (columns) so that it reads, e.g.
```
tree_row = hclust(my.dist.mat, method="complete")
```
where `my.dist.mat` is your own distance matrix, and `complete` is one of the many methods available in `hclust` (see `help(hclust)`). Here, it is important to use `fix(pheatmap)` and not `edit(pheatmap)`; otherwise, the edited function will not be callable in the correct environment/namespace.
This is a quick and dirty hack that I would not recommend with a larger package. It seems to work for me at least, in that I can use a custom distance matrix with complete linkage for the rows.
In sum, assuming your distance matrix is stored in a variable named `dd`,
```
library(pheatmap)
fix(pheatmap)
# 1. change the function as you see fit
# 2. save and go back to R
# 3. if your custom distance matrix was simply read as a matrix, make sure
# it is read as a distance matrix
my.dist.mat <- dd # or as.dist(dd)
```
Then, you can call `pheatmap` as you did, but now it will use the results of `hclust` applied to `my.dist.mat` with `complete` linkage. Please note that you just have to ensure that `cluster_rows=TRUE` (which is the default). Now, you may be able to change
- the linkage method
- choose between rows or columns
by editing the package function appropriately.
| null | CC BY-SA 3.0 | null | 2011-05-05T11:02:11.647 | 2011-05-05T11:02:11.647 | null | null | 930 | null |
10350 | 1 | 10352 | null | 6 | 673 | I am working on insurance data in which a customer has a field named `customer_no_dependent` (the customer's number of dependents). It is coming out to be a significant variable (it has $p<0.0001$).
This variable has almost 20% missing values. For imputation, I thought to determine proxy indicators for the number of dependents. I tried age (thinking that an older person could have more dependents). I also correlated it with the premium amount, reasoning that a person with more dependents could have less disposable income, so paying a low premium could mean more dependents. I do understand that a demographic variable can't be fully derived from such logic.
Now, if somebody goes into detail, he can show my imputation to be far from perfect. What should I do in such a situation? Would deleting those 20% be a correct solution? 20% of my data would be close to 2 lakh (200,000) rows, which is a large amount of information.
I know, this question can have many possible answers. I would be grateful for any pointers how to proceed.
| Advice on missing value imputation | CC BY-SA 3.0 | null | 2011-05-05T11:20:13.343 | 2012-12-19T20:10:02.963 | 2011-05-05T12:10:32.370 | 2116 | 1763 | [
"data-imputation"
] |
10351 | 2 | null | 10343 | 6 | null | The answer to your question is contained within your question.
When choosing an estimator for a parameter, you should ask yourself,
what property would you like your estimator to have:
- Robustness
- Unbiasedness
- Have the distributional properties of a MLE
- Consistency
- Asymptotically normal
- You know the population mean, but the variance is unknown
If your estimator is the one that divides by (n-1), then you have an unbiased estimator of the variance. If your estimator is the one that divides by n, then you have the MLE. Of course, when n is large, dividing by either (n-1) or n will give you approximately the same result, and the estimator will be approximately unbiased and have the properties of all MLE estimators.
| null | CC BY-SA 3.0 | null | 2011-05-05T12:15:11.133 | 2011-05-05T12:15:11.133 | null | null | 3805 | null |
10352 | 2 | null | 10350 | 5 | null | First of all: it is not clear from your explanation whether or not you have done multiple imputation. If not, please do so: single imputation can be worse than simple complete-case analysis, and both can lead to severely biased results.
Next, if I understand correctly, your problem is that you don't know which variables to use as covariates for your imputation model. If your number of possible covariates (I assume these are the other covariates in your model of interest) is limited, you could opt for the nonparametric kind of imputation that is offered by MICE (in R) and similar algorithms.
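As a rough sketch of what that could look like with the `mice` package (the variable names below are guesses based on your description, not your actual columns):

```r
library(mice)
# dat: data frame with customer_no_dependent ~20% missing
imp <- mice(dat, m = 5, seed = 1)  # 5 multiply-imputed data sets
fit <- with(imp, glm(event ~ customer_no_dependent + age + premium,
                     family = binomial))
summary(pool(fit))                 # estimates pooled via Rubin's rules
```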
Another option is to use shrinkage (LASSO or similar) in a model predicting customer_no_dependent: this should give you a set of likely predictors. Be aware, though, that this step induces even more uncertainty (you reuse the data yet again), and you should trust your confidence intervals and p-values somewhat less. The effect should be negligible if your association is truly as strong as you indicate.
If you do use a parametric, common-sense-driven imputation mechanism (like regressing on 'credible' predictors): simply make note of this fact, and mention that the obtained results are conditional on this extra set of assumptions.
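For concreteness, here is a minimal sketch of what proper (stochastic) multiple imputation with Rubin-style pooling looks like — a toy illustration in Python, not the MICE algorithm itself; the regression model, the 30% missingness rate and m = 20 are all arbitrary choices:

```python
import random

random.seed(1)

# Toy data: y strongly predicts x; about 30% of x values are missing at random.
n = 200
data = []
for _ in range(n):
    y = random.gauss(0, 1)
    x = 2.0 * y + random.gauss(0, 0.5)
    data.append((y, None if random.random() < 0.3 else x))

obs = [(y, x) for y, x in data if x is not None]

# Fit the imputation model x ~ y on the complete cases.
my = sum(y for y, _ in obs) / len(obs)
mx = sum(x for _, x in obs) / len(obs)
beta = (sum((y - my) * (x - mx) for y, x in obs)
        / sum((y - my) ** 2 for y, _ in obs))
alpha = mx - beta * my
resid_sd = (sum((x - alpha - beta * y) ** 2 for y, x in obs)
            / (len(obs) - 2)) ** 0.5

# Proper (stochastic) imputation, m times: prediction + random residual draw,
# so the imputations reflect the uncertainty instead of a single best guess.
m = 20
means = []
for _ in range(m):
    completed = [x if x is not None else alpha + beta * y + random.gauss(0, resid_sd)
                 for y, x in data]
    means.append(sum(completed) / n)

pooled = sum(means) / m  # Rubin-style pooling: average the m estimates
between = sum((v - pooled) ** 2 for v in means) / (m - 1)  # between-imputation variance
print(pooled, between)
```

The between-imputation variance is what single imputation throws away, which is why single imputation understates uncertainty.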
| null | CC BY-SA 3.0 | null | 2011-05-05T12:24:24.350 | 2011-05-05T12:24:24.350 | null | null | 4257 | null |
10353 | 1 | null | null | 8 | 2124 |
### Context:
My question concerns a typical design in my area – a researcher takes a group of subjects (say 10) and then applies three different conditions to them to measure the change in a response variable, e.g. vertical jump height performed after drinking a glucose drink, coloured plain water, and fruit juice (say). Every subject has every treatment, but in a random order with enough time between for effects to ‘wash out’.
### Analysis:
Kuehl (2000) (Kuehl, R. O. (2000). Design of Experiments: Statistical Principles of Research Design and Analysis, 2nd ed., Duxbury Press, CA, p. 497) states:
>
When each of the treatments is administered in a random order to each subject... then subjects are random blocks in a randomised complete block design.
and then shows the corresponding analysis.
In this case, the subject is a random effect, but a nuisance or blocking factor, and although our statistical model will test the significance of the block factor, we aren't really interested in its significance. However, many researchers (and reviewers!) think that such a design should be analysed as a repeated measures design with a Mauchly test for the Huynh-Feldt condition (with the treatment as the repeated measure). This seems more appropriate when a time factor is being analysed – for example, when observations are taken at 0, 10, 30 and 60 minutes. In that case the covariance between pairs of time points might reasonably be expected to change, particularly when unequal time intervals are used. [In fact, I use SAS to model different covariance structures in that situation (e.g. autoregressive) and use the AIC to choose the best structure, though this approach is not well received by many reviewers.]
I understood that when the subject is a block factor, and the different treatments are administered in a random order that differs between subjects, the correlation pattern between observations differs from subject to subject, so compound symmetry can be assumed.
### Question:
- How should repeated measures ANOVAs with 3 or more conditions presented in random order be analysed?
- Is it reasonable to assume compound symmetry?
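For concreteness, this is how I understand the randomised complete block analysis computationally — a toy simulation in Python with 10 subjects and 3 drinks (the treatment and subject effect sizes are arbitrary):

```python
import random

random.seed(3)
subjects, treatments = 10, 3
treat_effect = [0.0, 2.0, 4.0]                                # arbitrary drink effects (cm)
block_effect = [random.gauss(0, 5) for _ in range(subjects)]  # subject ability

# y[s][t]: jump height of subject s under treatment t
y = [[40 + block_effect[s] + treat_effect[t] + random.gauss(0, 1)
      for t in range(treatments)] for s in range(subjects)]

grand = sum(sum(row) for row in y) / (subjects * treatments)
treat_means = [sum(y[s][t] for s in range(subjects)) / subjects
               for t in range(treatments)]
block_means = [sum(y[s]) / treatments for s in range(subjects)]

# RCBD decomposition: total = treatment + block + residual
ss_treat = subjects * sum((mt - grand) ** 2 for mt in treat_means)
ss_block = treatments * sum((mb - grand) ** 2 for mb in block_means)
ss_error = sum((y[s][t] - treat_means[t] - block_means[s] + grand) ** 2
               for s in range(subjects) for t in range(treatments))

df_treat, df_error = treatments - 1, (treatments - 1) * (subjects - 1)
F = (ss_treat / df_treat) / (ss_error / df_error)
print(F)  # compare against an F(2, 18) reference distribution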
| How to analyse repeated measure ANOVA with three or more conditions presented in randomised order? | CC BY-SA 3.0 | null | 2011-05-05T12:26:43.317 | 2014-12-06T05:40:54.650 | 2013-05-03T13:18:51.167 | 6029 | 4474 | [
"hypothesis-testing",
"anova",
"repeated-measures"
] |
10354 | 2 | null | 9653 | -1 | null | Another consequence of a small sample is the increase of type 2 error.
Nunnally demonstrated in the paper "The place of statistics in psychology", 1960, that small samples generally fail to reject a point null hypothesis. These hypothesis are hypothesis having some parameters equals zero, and are known to be false in the considered experience.
On the opposite, too large samples increase the type 1 error because the p-value depends on the size of the sample, but the alpha level of significance is fixed. A test on such a sample will always reject the null hypothesis. Read "The insignificance of statistical significance testing" by Johnson and Douglas (1999) to have an overview of the issue.
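To illustrate Nunnally's point numerically — a sketch with arbitrary choices (a true effect of 0.1 standard deviations and a known-variance z-test):

```python
import math
import random

random.seed(2)

def reject_rate(n, mu=0.1, trials=500, z_crit=1.96):
    """Share of N(mu, 1) samples in which a one-sample z-test rejects
    H0: mean = 0.  With mu = 0.1 the point null is false, but barely."""
    rejections = 0
    for _ in range(trials):
        xbar = sum(random.gauss(mu, 1) for _ in range(n)) / n
        if abs(xbar * math.sqrt(n)) > z_crit:
            rejections += 1
    return rejections / trials

print(reject_rate(10))    # small n: the (false) point null is rarely rejected
print(reject_rate(5000))  # large n: it is almost always rejected
```

The same false null goes from nearly invisible to nearly certain rejection purely through sample size.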
This is not a direct answer to the question but these considerations are complementary.
| null | CC BY-SA 3.0 | null | 2011-05-05T12:28:58.153 | 2011-05-05T12:28:58.153 | null | null | 4443 | null |
10356 | 1 | null | null | 10 | 7830 | I am building a propensity model using logistic regression for a utility client.
My concern is that out of the total sample my 'bad' accounts are just 5%, and the rest are all good.
I am predicting 'bad'.
- Will the result be biased?
- What is optimal 'bad to good proportion' to build a good model?
| Is a logistic regression biased when the outcome variable is split 5% - 95%? | CC BY-SA 3.0 | null | 2011-05-05T14:03:29.927 | 2017-08-30T17:11:38.003 | 2011-05-06T14:07:42.000 | 495 | 4478 | [
"logistic",
"modeling"
] |
10357 | 2 | null | 10356 | 1 | null | In theory, you will be able to discriminate better if the proportions of "good" and "bad" are roughly similar in size. You might be able to move towards this by stratified sampling, oversampling bad cases and then reweighting to return to the true proportions later.
This carries some risks. In particular, your model is likely to be labelling individuals as "potentially bad" - presumably those who may not pay their utility bills when due. It is important that the impact of errors in doing this is properly recognised: in particular, how many "good" customers will be labelled "potentially bad" by the model. You are also less likely to get the reweighting wrong if you have not distorted your model by stratified sampling.
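To illustrate the reweighting step (my own addition — this is the "prior correction" described by King & Zeng for rare-events logistic regression, not something specific to this problem): if you train on an oversampled 50/50 set but the true rate of "bad" is 5%, you can shift the fitted logistic intercept back to the population base rate:

```python
import math

def corrected_intercept(b0, tau, ybar):
    """Shift the fitted intercept from the oversampled rate (ybar)
    back to the true population rate (tau)."""
    return b0 - math.log(((1 - tau) / tau) * (ybar / (1 - ybar)))

tau = 0.05   # true share of "bad" accounts in the population
ybar = 0.50  # share of "bad" in the oversampled training set
print(corrected_intercept(0.0, tau, ybar))  # -log(19), about -2.944
```

The slope coefficients are unaffected by this kind of outcome-based sampling; only the intercept needs the adjustment.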
| null | CC BY-SA 3.0 | null | 2011-05-05T14:42:39.460 | 2011-05-05T14:42:39.460 | null | null | 2958 | null |
10358 | 2 | null | 10271 | 2 | null | You might find [this paper](http://www.stat.duke.edu/~mw/Smith+West1983.pdf) of interest. See also more detailed presentation of similar models in [West & Harrison](http://rads.stackoverflow.com/amzn/click/0387947256). There are other examples of this sort of monitoring as well, many which are more recent, but this isn't exactly my wheelhouse :). Undoubtedly there are suitable implementations of these models, but I don't know what they might be offhand...
The basic idea is that you have a switching model where some observations/sequence of observations are attributed to abnormal network states while the rest are considered normal. A mixture like this could account for the long right tail in your first plot. A dynamic model could also alert you to abnormal jumps like at 8:00 and 4:00 in real-time by assigning high probability to new observations belonging to a problem state. It could also be easily extended to include things like predictors, periodic components (perhaps your score rises/falls a bit with activity) and that sort of thing.
Edit: I should also add, this kind of model is "unsupervised" in the sense that anomalies are caught either by showing a large mean shift or increase in variance. As you gather data you can improve the model with more informative prior distributions. But perhaps once you have enough data (and hard-won training examples by dealing with network problems!) you could devise some simple monitoring rules (thresholds, etc)
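As a sketch of the kind of simple monitoring rule I mean (my own illustration, not taken from the cited papers), here is a rolling z-score threshold in Python:

```python
from collections import deque
import math

def zscore_monitor(stream, window=50, threshold=3.0):
    """Flag observations that sit far outside the recent rolling window
    (a crude stand-in for the model-based monitoring described above)."""
    buf = deque(maxlen=window)
    alerts = []
    for i, x in enumerate(stream):
        if len(buf) == buf.maxlen:
            mean = sum(buf) / len(buf)
            sd = math.sqrt(sum((v - mean) ** 2 for v in buf) / (len(buf) - 1))
            if sd > 0 and abs(x - mean) / sd > threshold:
                alerts.append(i)
        buf.append(x)
    return alerts

# A steady score with one abnormal jump at index 100:
scores = ([0.5 if i % 2 == 0 else -0.5 for i in range(100)] + [10.0]
          + [0.5 if i % 2 == 0 else -0.5 for i in range(20)])
print(zscore_monitor(scores))
```

Unlike the switching model, this can't distinguish a mean shift from a variance increase, but it is easy to deploy while gathering the data needed for something better.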
| null | CC BY-SA 3.0 | null | 2011-05-05T14:44:07.957 | 2011-05-05T14:49:44.440 | 2011-05-05T14:49:44.440 | 26 | 26 | null |
10359 | 1 | 10555 | null | 6 | 1190 | How can I generate dependent time series from a given marginal distribution? I want to be able to adjust the level of dependence, to influence the predictability of the series, which will be given as input to a Monte Carlo simulation. The dependence parameter can be the correlation, the mutual information, or something along those lines.
You may assume the distribution is Bernoulli for discussion purposes. MATLAB code is gratefully accepted.
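To make the requirement concrete, here is a sketch in Python of one construction I have in mind (happy to port it to MATLAB): a two-state Markov chain keeps the Bernoulli(p) marginal while a single parameter rho sets the lag-1 autocorrelation:

```python
import random

def dependent_bernoulli(n, p=0.5, rho=0.5, seed=0):
    """Two-state Markov chain whose marginal is Bernoulli(p) and whose
    lag-1 autocorrelation is rho (0 <= rho < 1)."""
    rng = random.Random(seed)
    x = [1 if rng.random() < p else 0]
    for _ in range(n - 1):
        # P(X_t = 1 | X_{t-1}): chosen so the stationary marginal stays p
        p_one = p + rho * (1 - p) if x[-1] == 1 else p * (1 - rho)
        x.append(1 if rng.random() < p_one else 0)
    return x

xs = dependent_bernoulli(20000, p=0.3, rho=0.5, seed=1)
print(sum(xs) / len(xs))  # close to p = 0.3
```

Setting rho = 0 gives i.i.d. draws; rho near 1 gives long runs and hence a highly predictable series, which is exactly the knob I want for the Monte Carlo input.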
| Generating dependent time series from a given distribution? | CC BY-SA 3.0 | null | 2011-05-05T14:48:53.297 | 2011-05-09T15:45:40.490 | 2011-05-06T14:42:03.297 | 4479 | 4479 | [
"time-series",
"monte-carlo",
"simulation",
"non-independent",
"prediction-interval"
] |