idx | question | answer
48,501
|
How to do regression when there is a mix of numerical and non-numerical predictor variables?
|
First off, it depends on what your dependent variable (Y) is. If it is numerical, then most multiple regression models would be sufficient. If it (Y) is categorical, then you need logistic regression or a similar categorical regression model.
As for how to handle the independent variables: the numerical ones will fit neatly into almost any regression model, while the categorical ones will need to be "factored". I use R. In R, you specify that variable "k" in a data frame is categorical by running
Data.Object$k <- factor(Data.Object$k)
In other languages/software packages you do it differently, but definitely make sure the categorical data is being treated as such, regardless of which software you use. No, there is no magic algorithm or software that lets you wave a magic wand and factors all of these for you. This is a common problem, and if you think 40 variables is bad, consider the problem with 100.
As for the "best" regression to run when thrown a data set like yours? Well, it depends on what you/your bosses are looking for.
The tricky part for you is interpreting the labels into something meaningful. Say you have a variable "What is your political party?" where 1 is "republican", 2 is "democrat" and 3 is "independent". You won't get a dummy variable for each level: you'll get one for "democrat" and one for "independent", and both equaling zero means "not democrat and not independent". In this case, the regression coefficient of "democrat" shows the change in Y if the individual is a democrat, and the coefficient for "independent" shows the change in Y if the person is independent.
Most importantly, the base case for all other coefficients is a republican, so any interpretations you make for the other variables should adjust for that base case. There ARE algorithms to make that adjustment easier, but I don't know any off the top of my head.
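The same factoring step can be sketched outside R with pandas dummy coding (the data frame and column names here are hypothetical; note that pandas drops the alphabetically first level, so in this sketch "democrat", not "republican", becomes the base case):

```python
import pandas as pd

# Hypothetical data: one numeric and one categorical predictor
df = pd.DataFrame({
    "age":   [25, 32, 47, 51],
    "party": ["republican", "democrat", "independent", "democrat"],
})

# get_dummies builds one 0/1 column per level; drop_first=True drops the
# alphabetically first level ("democrat"), which becomes the base case.
X = pd.get_dummies(df, columns=["party"], drop_first=True)
print(X.columns.tolist())
# → ['age', 'party_independent', 'party_republican']
```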
|
48,502
|
How to do regression when there is a mix of numerical and non-numerical predictor variables?
|
Almost all regression algorithms can handle both numerical and categorical variables. For categorical variables, different "codings" can be used.
The simplest example is binary coding. For example, for gender, you can use $0$ to represent male and $1$ to represent female. If the variable has more than $2$ values, one-hot coding can be used.
Details can be found at
http://www.ats.ucla.edu/stat/r/library/contrast_coding.htm
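A minimal sketch of one-hot coding in plain Python (the data here are made up): each level of the categorical variable gets its own 0/1 indicator column.

```python
# One-hot code a categorical variable: one indicator column per level.
def one_hot(values):
    levels = sorted(set(values))  # fix a column order for the levels
    return [[1 if v == level else 0 for level in levels] for v in values]

colors = ["red", "green", "blue", "green"]
print(one_hot(colors))   # columns ordered ['blue', 'green', 'red']
# → [[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 0]]
```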
|
48,503
|
Training performance jumps up after epoch, Dev performance jumps down
|
Yes, this reduction is normal.
When training any kind of machine learning algorithm, if you continue training, your algorithm overfits the training set and starts learning the details of the noise in the training set instead of the generalizable information. When it does this, your algorithm often loses its generality and gets worse on other similar sets, specifically your dev set.
I have seen jumps like this myself when fitting an algorithm to my training set. It finds a pattern in the noise of the training set, but this pattern does not generalize to the dev set.
There are a few ways to reduce overfitting:
Cross-validation - Hold out a subset of your training data (most use ~10%) and use it to compare your models and find a good stopping point. This prevents you from potentially overfitting your dev set as well.
Regularization - Add a penalty to your loss function for each NN weight, so that the network focuses on the most important connections.
Dropout - Randomly drop various connections as you train each epoch. This forces the NN to reduce its reliance on particular connection webs and makes each node more robust on its own, since it cannot rely on other nodes that could be dropped out of any epoch.
Early stopping - Stop training at the point when the dev error starts increasing consistently. That might be the point at which the algorithm has learned what it can, and it is no longer beneficial to train with the training data that you have.
I hope this helps!
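The stopping rule in the last point can be sketched framework-agnostically (`train_epoch` and `eval_dev` are hypothetical callbacks you would supply from your own training loop):

```python
# Early stopping: track the best dev loss seen so far and stop after
# `patience` consecutive epochs without improvement.
def train_with_early_stopping(train_epoch, eval_dev, max_epochs=100, patience=3):
    best_loss, best_epoch, bad_epochs = float("inf"), -1, 0
    for epoch in range(max_epochs):
        train_epoch()
        dev_loss = eval_dev()
        if dev_loss < best_loss:                 # dev set still improving
            best_loss, best_epoch, bad_epochs = dev_loss, epoch, 0
        else:                                    # dev loss got worse
            bad_epochs += 1
            if bad_epochs >= patience:           # stop: likely overfitting
                break
    return best_epoch, best_loss

# Toy run with a scripted dev-loss curve that turns upward after epoch 2:
dev_losses = iter([1.0, 0.8, 0.7, 0.75, 0.9, 0.95])
print(train_with_early_stopping(lambda: None, lambda: next(dev_losses)))
# → (2, 0.7)
```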
|
48,504
|
Training performance jumps up after epoch, Dev performance jumps down
|
I believe this can be an artefact of how you calculate your metrics. If you are using a moving average to compute a metric and resetting the moving average at epoch boundaries, then it is reasonable to expect a jump at the epoch boundaries.
For example, if your accuracy goes from 0 to 50 over the first epoch, your moving average at the end of that epoch will be less than 50; with a window covering the whole epoch, reported at the batch level, it will be about 25. But at the start of the next epoch the average restarts from around 50, hence the jump.
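The artefact is easy to reproduce with toy numbers (nothing here comes from a real training run):

```python
# Running mean of an improving metric, reset at each epoch boundary.
def running_means(values):
    means, total = [], 0.0
    for i, v in enumerate(values, start=1):
        total += v
        means.append(total / i)
    return means

epoch1 = [0, 10, 20, 30, 40, 50]   # accuracy climbs 0 -> 50 during epoch 1
epoch2 = [50, 52, 54]              # epoch 2 starts where epoch 1 left off

print(running_means(epoch1)[-1])   # reported at the end of epoch 1
# → 25.0
print(running_means(epoch2)[0])    # first report after the reset
# → 50.0
```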
|
48,505
|
Number of Bernoulli trials to first success, with changing $p$
|
We all come in contact with a special case of this distribution when going Bernoulli $\rightarrow$ Geometric distribution.
The Bernoulli distribution is the realization of an event $X_k=1$ with $\Pr(X_k=1)=\theta$. The Geometric distribution $Y$ is the realization of $y-1$ non-events followed by $1$ event, from iid trials, i.e.
$$\Pr(Y=y)=\Pr(X_1=\cdots=X_{y-1}=0\cap X_y=1)$$
This should be enough of a hint.
edit2: after @Glen_b's comment
In general, any positive random variable $Y$, and in particular the discrete ones (Poisson, Geometric, Rayleigh, Weibull), can be seen as a sequence of independent (but not identically distributed) Bernoulli trials: a sequence of non-events followed by an event. Set $\Pr(X_t=1)=\theta_t$ and see that
$$\Pr(Y=y)=\theta_y\prod_{t=1}^{y-1}(1-\theta_t)$$
Sidenote
To factor any positive distribution $T\in[0,\infty)$ we can write its survival function as $S(t)=1-F(t)=e^{-\Lambda(t)}$ where
\begin{align}
\Lambda(0)&=0\\
\Lambda(\infty)&=\infty\\
\frac{d}{dt}\Lambda(t)&=\lambda(t)\geq 0\\
\end{align}
And note that the probability of event within $s$ steps from $t$ is
\begin{align}
F(t,s)&=\Pr(T\leq t+s\mid T>t) \\
&=\frac{\Pr(T\in[t,t+s))}{\Pr(T>t)}\\
&=\frac{S(t)-S(t+s)}{S(t)} \\
&= 1-e^{-(\Lambda(t+s)-\Lambda(t))}=1-e^{-R(t,s)}
\end{align}
This is called the Conditional Excess Cumulative Distribution Function. In particular, using a step length of $1$ we have $R(t,1)=d(t)$ and we may write
\begin{align}
\theta_t&=1-e^{-d(t)}\\
\Pr(T\in[y,y+1))&=\theta_y\prod_{t=1}^{y-1}(1-\theta_t)= e^{-\Lambda(y)}-e^{-\Lambda(y+1)}
\end{align}
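A quick Monte-Carlo sketch of the pmf $\Pr(Y=y)=\theta_y\prod_{t=1}^{y-1}(1-\theta_t)$, with an arbitrary made-up sequence $\theta_t$ (not from the question):

```python
import random

# First-success time Y for independent Bernoulli trials with changing p.
random.seed(0)
theta = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]  # theta_t, t=1..10

def first_success():
    for t, p in enumerate(theta, start=1):
        if random.random() < p:
            return t                 # theta_10 = 1 guarantees termination

def pmf(y):
    """theta_y * prod_{t=1}^{y-1} (1 - theta_t)"""
    prob = theta[y - 1]
    for t in range(y - 1):
        prob *= 1 - theta[t]
    return prob

n = 100_000
counts = {}
for _ in range(n):
    y = first_success()
    counts[y] = counts.get(y, 0) + 1

for y in (1, 2, 3):   # analytic pmf vs empirical frequency
    print(y, round(pmf(y), 4), round(counts[y] / n, 4))
```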
|
48,506
|
A sequence of random variables, how to understand it in the convergence theory?
|
As I understand it, you should have some random variables $X_n$ which depend on $n$.
For example, if you take a look at this post:
Convergence in distribution, probability, and 2nd mean
you'll find that as $n \rightarrow \infty$, $X_n$ converges in probability. Depending on the random variables, you get different types of convergence.
For your example you can take $Y_n = \frac{1}{n}\sum_{k=1}^{n}X_k$, and it should converge to $0.5$. You could have 10 heads in a row, but as $n \rightarrow \infty$, $Y_n \rightarrow 0.5$.
I hope I answered your question.
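A sketch of the coin-flip example (fair coin, so the true mean is $0.5$; the sample mean $Y_n$ drifts toward it as $n$ grows):

```python
import random

# Y_n = (1/n) * sum of X_k for fair-coin flips X_k in {0, 1}.
random.seed(1)

def sample_mean(n):
    return sum(random.randint(0, 1) for _ in range(n)) / n

for n in (10, 1_000, 100_000):
    print(n, sample_mean(n))
```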
|
48,507
|
A sequence of random variables, how to understand it in the convergence theory?
|
The following content is just copy-pasted from: Sequence of Random Variables.
Here, we would like to discuss what we precisely mean by a sequence of random variables. Remember that, in any probability model, we have a sample space $S$ and a probability measure $P$. For simplicity, suppose that our sample space consists of a finite number of elements, i.e.,
$$
S=\left\{s_{1}, s_{2}, \cdots, s_{k}\right\}
$$
Then, a random variable $X$ is a mapping that assigns a real number to any of the possible outcomes $s_{i}$, $i=1,2,\cdots,k$. Thus, we may write
$$
X\left(s_{i}\right)=x_{i}, \quad \text { for } i=1,2, \cdots, k
$$
When we have a sequence of random variables $X_{1}, X_{2}, X_{3}, \cdots$, it is also useful to remember that we have an underlying sample space $S$. In particular, each $X_{n}$ is a function from $S$ to real numbers. Thus, we may write
$$
X_{n}\left(s_{i}\right)=x_{n i}, \quad \text { for } i=1,2, \cdots, k
$$
In sum, a sequence of random variables is in fact a sequence of functions $X_{n}: S \rightarrow \mathbb{R}$.
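A toy sketch of this definition, with a hypothetical finite sample space and an arbitrary choice of $X_n$:

```python
# A sequence of random variables as a sequence of functions X_n : S -> R.
S = [0, 1, 2, 3]        # a toy finite sample space {s_1, ..., s_k}

def X(n, s):
    """X_n(s): the n-th random variable evaluated at outcome s."""
    return s / n        # an arbitrary choice; X_n(s) -> 0 for every fixed s

print([X(1, s) for s in S])     # → [0.0, 1.0, 2.0, 3.0]
print([X(100, s) for s in S])   # → [0.0, 0.01, 0.02, 0.03]
```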
|
48,508
|
How did Likert calculate sigma values in his original 1932 paper?
|
Thorndike's Table 22 displays the expected value of a doubly-truncated normal distribution, which can be seen as a conditional expectation given that the variate is in an interval specified by quantiles:
$$\mathbb{E}(Z \mid z_p<Z<z_{p+q}) = \frac{\phi(z_p)-\phi(z_{p+q})}{q}$$
where $z_p$ is the lower $p$th quantile of $Z\sim N(0,1)$, $\phi$ is the PDF of $Z$, and $0\le p<p+q\le 1$ (with the conventions $z_0=-\infty$, $z_1=\infty$, and $\phi(\pm\infty)=0$).
R code for Likert's data:
# E(p, q): mean of a standard normal restricted to the band between
# its p-th and (p+q)-th quantiles
E <- function(p, q) {(dnorm(qnorm(p)) - dnorm(qnorm(p + q))) / q}
P <- c(0.13, 0.43, 0.21, 0.13, 0.10)  # Likert's observed response proportions
p <- 0
for (q in P) {
  cat(p, q, E(p, q), "\n")
  p <- p + q
}
Output:
0 0.13 -1.62727
0.13 0.43 -0.4252946
0.56 0.21 0.4322558
0.77 0.13 0.9857673
0.9 0.1 1.754983
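For reference, a Python rendering of the same computation using only the standard library's NormalDist; it should reproduce the table above:

```python
from statistics import NormalDist

Z = NormalDist()  # standard normal

def E(p, q):
    """Mean of Z restricted to the band between its p-th and (p+q)-th quantiles."""
    lo = Z.pdf(Z.inv_cdf(p)) if p > 0 else 0.0          # phi(-inf) = 0
    hi = Z.pdf(Z.inv_cdf(p + q)) if p + q < 1 else 0.0  # phi(+inf) = 0
    return (lo - hi) / q

P = [0.13, 0.43, 0.21, 0.13, 0.10]
p = 0.0
for q in P:
    print(round(p, 2), q, round(E(p, q), 4))
    p += q
```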
Online sources: Likert, Thorndike
|
48,509
|
Sums-of-Squares (total, between, within): how to compute them from a Distance Matrix?
|
Instructions for how to compute the sums of squares SSt, SSb, and SSw out of a matrix of (squared) euclidean distances between cases (data points), without having the cases x variables dataset at hand. You don't need to know the centroids' coordinates (the group means): they pass invisibly "in the background", as the laws of euclidean geometry allow.
Let $\bf D$ be the N x N matrix of squared euclidean distances between the cases, and $G$ is the N x 1 column of group labels (k groups). Create binary dummy variables, aka N x k design matrix: $\mathbf G=design(G)$.
[I'll accompany the formulas with example data shared with this answer. The code is SPSS Matrix session syntax: almost pseudocode, easy to understand.]
This is the raw data X (p=2 variables, columns),
with N=6 cases: n(1)=3, n(2)=2, n(3)=1
V1 V2 Group
2.06 7.73 1
.67 5.27 1
6.62 9.36 1
3.16 5.23 2
7.66 1.27 2
5.59 9.83 3
------------------------------------
comp X= {2.06, 7.73;
.67, 5.27;
6.62, 9.36;
3.16, 5.23;
7.66, 1.27;
5.59, 9.83}.
comp g= {1;1;1;2;2;3}.
!seuclid(X%D). /*This function to compute squared euclidean distances
is taken from my web-page; it is technically more convenient
here than calling the regular SPSS command to do it
print D.
D
.0000 7.9837 23.4505 7.4600 73.0916 16.8709
7.9837 .0000 52.1306 6.2017 64.8601 45.0000
23.4505 52.1306 .0000 29.0285 66.5297 1.2818
7.4600 6.2017 29.0285 .0000 35.9316 27.0649
73.0916 64.8601 66.5297 35.9316 .0000 77.5585
16.8709 45.0000 1.2818 27.0649 77.5585 .0000
comp G= design(g).
print G.
G
1 0 0
1 0 0
1 0 0
0 1 0
0 1 0
0 0 1
comp Nt= nrow(G).
comp n= csum(G).
print Nt. /*This is total N
print n. /*Group frequencies
Nt
6
n
3 2 1
Quick method. Use this if you want just the above three scalars. As mentioned here or here, the sum of squared deviations from the centroid is equal to the sum of pairwise squared Euclidean distances divided by the number of points. From this it follows:
Total sum-of-squares (of deviations from grand centroid): $SS_t= \frac{\sum \bf D}{2N}$, where $\sum$ is the sum in the entire matrix.
Pooled within-group sum-of-squares (of deviations from group centroids): $SS_w= \sum \frac{diag(\bf G'DG)}{2\bf n'}$, where $\bf n$ is the k-length row vector of within-group frequencies, i.e. column sums in $\bf G$. Without the summation $\sum$ you have the k-length column vector: $SS_w$ in each group.
Between-group sum-of-squares is, of course, $SS_b=SS_t-SS_w$.
comp SSt= msum(D)/(2*Nt).
print SSt.
SSt
89.07401667
comp SSw= diag(t(G)*D*G)/(2*t(n)).
print SSw. /*By groups
SSw
27.85493333
17.96580000
.00000000
comp SSw= csum(SSw).
print SSw. /*And summed (pooled SSw)
SSw
45.82073333
comp SSb= SSt-SSw.
print SSb.
SSb
43.25328333
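A NumPy sketch of this quick method on the same toy data (recomputing $\bf D$ from X here rather than taking it as given):

```python
import numpy as np

# Quick method: SSt, SSw, SSb from the squared-distance matrix D and the
# group-membership design matrix G (same 6-case, 3-group toy data as above).
X = np.array([[2.06, 7.73], [0.67, 5.27], [6.62, 9.36],
              [3.16, 5.23], [7.66, 1.27], [5.59, 9.83]])
g = np.array([0, 0, 0, 1, 1, 2])      # group labels
G = np.eye(3)[g]                      # N x k design (dummy) matrix

diff = X[:, None, :] - X[None, :, :]
D = (diff ** 2).sum(axis=2)           # squared euclidean distances
N = len(X)
n = G.sum(axis=0)                     # group sizes

SSt = D.sum() / (2 * N)
SSw_by_group = np.diag(G.T @ D @ G) / (2 * n)   # SSw within each group
SSw = SSw_by_group.sum()
SSb = SSt - SSw

print(round(SSt, 5), round(SSw, 5), round(SSb, 5))
# → 89.07402 45.82073 43.25328
```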
Slower method. Use this if you will also need to know some multivariate properties of the data, such as the eigenvalues of the principal directions spanned by the data cloud (needed, for example, in multidimensional scaling).
Convert $\bf D$ into its double-centered matrix $\bf S$, explained here and here. As noted in the 2nd link, one of its properties is that $trace(\mathbf S)=trace(\mathbf {T})=SS_t$, and (first nonzero) eigenvalues of $\bf S$ are the same as of $\bf T$ - the scatter matrix of the original (or implied, hypothetical) cases x variables data.
comp rmean= (rsum(D)/ncol(D))*make(1,ncol(D),1).
comp S= (rmean+t(rmean)-D-msum(D)/ncol(D)**2)/2.
print S.
S
6.63045 6.58188 -1.46444 .96961 -14.15579 1.43828
6.58188 14.51701 -11.86120 5.54205 -6.09675 -8.68299
-1.46444 -11.86120 13.89118 -6.18427 -7.24447 12.86320
.96961 5.54205 -6.18427 2.76878 2.49338 -5.58955
-14.15579 -6.09675 -7.24447 2.49338 38.14958 -13.14595
1.43828 -8.68299 12.86320 -5.58955 -13.14595 13.11701
comp SSt= trace(S).
print SSt.
SSt
89.07401667
Of course, you can likewise apply the double centration to each group's distance submatrix; the traces of those $\bf S$s will be each group's SS-within, and summing them yields the pooled $SS_w$.
If you need to compute matrix $\bf C$ of squared euclidean distances between group centroids, compute these ingredients:
$\bf E= G'DG$;
$\mathbf Q= \frac{diag(\mathbf E)\,\mathbf n^2}{2}$ (where $\bf n^2$ squares the row vector $\bf n$ elementwise and $diag(\mathbf E)$ is a column, so their product is k x k);
$\bf F=n'n$.
Then $\bf C= (E-\frac{Q+Q'}{F})/F$.
comp E= t(G)*D*G.
print E.
E
167.1296 247.1716 63.1527
247.1716 71.8632 104.6234
63.1527 104.6234 .0000
comp Q= (diag(E)*n&**2)/2.
print Q.
Q
752.0832 334.2592 83.5648
323.3844 143.7264 35.9316
.0000 .0000 .0000
comp F= sscp(n).
print F.
F
9.0000 6.0000 3.0000
6.0000 4.0000 2.0000
3.0000 2.0000 1.0000
comp C= (E-(Q+t(Q))/F)/F.
print C.
C
.0000 22.9274 11.7659
22.9274 .0000 43.3288
11.7659 43.3288 .0000
Of course, $SS_b$ - if you still don't know it - can obviously be obtained from it (with the within-group frequencies as the weights).
Bonus instructions. How to compute SSt, SSb, SSw when you do have the original N cases x p variables data X. Many ways are possible; one of the most efficient (fast) matrix ways with data of typical size is as follows.
Matrix of group means (centroids), k groups x p variables: $\bf M= \frac{G'X}{n'[1]}$, where $[1]$ is the p-length row of ones; $\bf n$ is a row defined earlier; $\bf G$ also see above; $\bf X$ is the data with columns (variables) centered about their grand means.
Total scatter matrix $\bf T=X'X$, and $SS_t= trace(\mathbf T)$.
Between-group scatter matrix $\bf B=(GM)'(GM)$, and $SS_b= trace(\mathbf B)$.
Pooled within-group scatter matrix $\bf W=T-B$, and $SS_w= trace(\mathbf W)= SS_t-SS_b$.
X with columns centered
-2.2333 1.2817
-3.6233 -1.1783
2.3267 2.9117
-1.1333 -1.2183
3.3667 -5.1783
1.2967 3.3817
comp M= (t(G)*X)/(t(n)*make(1,ncol(X),1)).
print M.
M
-1.1767 1.0050
1.1167 -3.1983
1.2967 3.3817
comp Tot= sscp(X). /*T scatter matrix
print Tot.
print trace(Tot).
Tot
37.8299 -3.4865
-3.4865 51.2441
TRACE(Tot)
89.07401667
comp GM= G*M.
comp B= sscp(GM). /*B scatter matrix
print B.
print trace(B).
B
8.3289 -6.3057
-6.3057 34.9244
TRACE(B)
43.25328333
comp W= Tot-B. /*W scatter matrix
print W.
print trace(W).
W
29.5011 2.8192
2.8192 16.3197
TRACE(W)
45.82073333
(If you do not center $\bf X$ initially, $\bf W=T-B$ still holds and gives the same $\bf W$ as before; however, the $\bf M$, $\bf T$, and $\bf B$ matrices will differ from those above.)
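And a NumPy sketch of the bonus scatter-matrix route on the same toy data:

```python
import numpy as np

# SSt, SSb, SSw from the raw cases x variables data X via scatter matrices.
X = np.array([[2.06, 7.73], [0.67, 5.27], [6.62, 9.36],
              [3.16, 5.23], [7.66, 1.27], [5.59, 9.83]])
g = np.array([0, 0, 0, 1, 1, 2])
G = np.eye(3)[g]                       # N x k design matrix
n = G.sum(axis=0)                      # group sizes

Xc = X - X.mean(axis=0)                # center columns about grand means
M = (G.T @ Xc) / n[:, None]            # k x p matrix of group centroids

T = Xc.T @ Xc                          # total scatter matrix
B = (G @ M).T @ (G @ M)                # between-group scatter matrix
W = T - B                              # pooled within-group scatter matrix

print(round(np.trace(T), 5), round(np.trace(B), 5), round(np.trace(W), 5))
# → 89.07402 43.25328 45.82073
```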
|
Sums-of-Squares (total, between, within): how to compute them from a Distance Matrix?
|
Instruction how you can compute sums of squares SSt, SSb, SSw out of matrix of distances (euclidean) between cases (data points) without having at hand the cases x variables dataset. You don't need to
|
Sums-of-Squares (total, between, within): how to compute them from a Distance Matrix?
Instruction how you can compute sums of squares SSt, SSb, SSw out of matrix of distances (euclidean) between cases (data points) without having at hand the cases x variables dataset. You don't need to know the centroids' coordinates (the group means) - they pass invisibly "on the background": euclidean geometry laws allow so.
Let $\bf D$ be the N x N matrix of squared euclidean distances between the cases, and $G$ is the N x 1 column of group labels (k groups). Create binary dummy variables, aka N x k design matrix: $\mathbf G=design(G)$.
[I'll accompany the formulas with example data shared with this answer. The code is SPSS Matrix session syntax, almost a pseudocode-easy to understand.]
This is the raw data X (p=2 variables, columns),
with N=6 cases: n(1)=3, n(2)=2, n(3)=1
V1 V2 Group
2.06 7.73 1
.67 5.27 1
6.62 9.36 1
3.16 5.23 2
7.66 1.27 2
5.59 9.83 3
------------------------------------
comp X= {2.06, 7.73;
.67, 5.27;
6.62, 9.36;
3.16, 5.23;
7.66, 1.27;
5.59, 9.83}.
comp g= {1;1;1;2;2;3}.
!seuclid(X%D). /*This function to compute squared euclidean distances
is taken from my web-page, it is techically more convenient
here than to call regular SPSS command to do it
print D.
D
.0000 7.9837 23.4505 7.4600 73.0916 16.8709
7.9837 .0000 52.1306 6.2017 64.8601 45.0000
23.4505 52.1306 .0000 29.0285 66.5297 1.2818
7.4600 6.2017 29.0285 .0000 35.9316 27.0649
73.0916 64.8601 66.5297 35.9316 .0000 77.5585
16.8709 45.0000 1.2818 27.0649 77.5585 .0000
comp G= design(g).
print G.
G
1 0 0
1 0 0
1 0 0
0 1 0
0 1 0
0 0 1
comp Nt= nrow(G).
comp n= csum(G).
print Nt. /*This is total N
print n. /*Group frequencies
Nt
6
n
3 2 1
Quick method. Use if you want just the above three scalars. As mentioned in here or here, the sum of squared deviations from centroid is equal to the sum of pairwise squared Euclidean distances divided by the number of points. Then follows:
Total sum-of-squares (of deviations from grand centroid): $SS_t= \frac{\sum \bf D}{2N}$, where $\sum$ is the sum in the entire matrix.
Pooled within-group sum-of-squares (of deviations from group centroids): $SS_w= \sum \frac{diag(\bf G'DG)}{2\bf n'}$, where $\bf n$ is the k-length row vector of within-group frequencies, i.e. column sums in $\bf G$. Without the summation $\sum$ you have the k-length column vector: $SS_w$ in each group.
Between-group sum-of-squares is, of course, $SS_b=SS_t-SS_w$.
comp SSt= msum(D)/(2*Nt).
print SSt.
SSt
89.07401667
comp SSw= diag(t(G)*D*G)/(2*t(n)).
print SSw. /*By groups
SSw
27.85493333
17.96580000
.00000000
comp SSw= csum(SSw).
print SSw. /*And summed (pooled SSw)
SSw
45.82073333
comp SSb= SSt-SSw.
print SSb.
SSb
43.25328333
Slower method. Use if you will need also to know some multivariate properties of the data, such as eigenvalues of the principal directions spanned by the data cloud (needed, for example, in multidimensional scaling).
Convert $\bf D$ into its double-centered matrix $\bf S$, explained here and here. As noted in the 2nd link, one of its properties is that $trace(\mathbf S)=trace(\mathbf {T})=SS_t$, and (first nonzero) eigenvalues of $\bf S$ are the same as of $\bf T$ - the scatter matrix of the original (or implied, hypothetical) cases x variables data.
comp rmean= (rsum(D)/ncol(D))*make(1,ncol(D),1).
comp S= (rmean+t(rmean)-D-msum(D)/ncol(D)**2)/2.
print S.
S
6.63045 6.58188 -1.46444 .96961 -14.15579 1.43828
6.58188 14.51701 -11.86120 5.54205 -6.09675 -8.68299
-1.46444 -11.86120 13.89118 -6.18427 -7.24447 12.86320
.96961 5.54205 -6.18427 2.76878 2.49338 -5.58955
-14.15579 -6.09675 -7.24447 2.49338 38.14958 -13.14595
1.43828 -8.68299 12.86320 -5.58955 -13.14595 13.11701
comp SSt= trace(S).
print SSt.
SSt
89.07401667
Of course, you can do likewise the double centration also on each group distance submatrix; the traces of the $\bf S$s will be each group's SSwithin, which summation yields pooled $SS_w$.
If you need to compute matrix $\bf C$ of squared euclidean distances between group centroids, compute these ingredients:
$\bf E= G'DG$;
$\mathbf Q= \frac{diag(\mathbf E)\,\mathbf n^{\wedge2}}{2}$, where $\mathbf n^{\wedge2}$ is $\mathbf n$ squared elementwise ($diag(\mathbf E)$ is a column and $\mathbf n$ a row, so their product is a k x k outer product);
$\bf F=n'n$.
Then $\bf C= (E-\frac{Q+Q'}{F})/F$.
comp E= t(G)*D*G.
print E.
E
167.1296 247.1716 63.1527
247.1716 71.8632 104.6234
63.1527 104.6234 .0000
comp Q= (diag(E)*n&**2)/2.
print Q.
Q
752.0832 334.2592 83.5648
323.3844 143.7264 35.9316
.0000 .0000 .0000
comp F= sscp(n).
print F.
F
9.0000 6.0000 3.0000
6.0000 4.0000 2.0000
3.0000 2.0000 1.0000
comp C= (E-(Q+t(Q))/F)/F.
print C.
C
.0000 22.9274 11.7659
22.9274 .0000 43.3288
11.7659 43.3288 .0000
Of course, $SS_b$ - if you still don't know it - can be obviously obtained from it (within group frequency is the weight).
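The same between-centroid computation can be sketched in pure Python, again assuming the D matrix and grouping from the running example:

```python
# Squared euclidean distances between group centroids, from D alone:
# E = G'DG;  Q[g][h] = diag(E)[g] * n[h]**2 / 2;  F = n'n;  C = (E-(Q+Q')/F)/F
D = [
    [ 0.0000,  7.9837, 23.4505,  7.4600, 73.0916, 16.8709],
    [ 7.9837,  0.0000, 52.1306,  6.2017, 64.8601, 45.0000],
    [23.4505, 52.1306,  0.0000, 29.0285, 66.5297,  1.2818],
    [ 7.4600,  6.2017, 29.0285,  0.0000, 35.9316, 27.0649],
    [73.0916, 64.8601, 66.5297, 35.9316,  0.0000, 77.5585],
    [16.8709, 45.0000,  1.2818, 27.0649, 77.5585,  0.0000],
]
group = [0, 0, 0, 1, 1, 2]
k = 3
idx = [[i for i in range(len(D)) if group[i] == g] for g in range(k)]
n = [len(m) for m in idx]

E = [[sum(D[i][j] for i in idx[g] for j in idx[h]) for h in range(k)]
     for g in range(k)]
Q = [[E[g][g] * n[h] ** 2 / 2 for h in range(k)] for g in range(k)]
F = [[n[g] * n[h] for h in range(k)] for g in range(k)]
C = [[(E[g][h] - (Q[g][h] + Q[h][g]) / F[g][h]) / F[g][h] for h in range(k)]
     for g in range(k)]
```

The off-diagonal entries of C match the SPSS printout above, and the diagonal is zero by construction.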
Bonus instructions. How to compute SSt, SSb, SSw when you do have the original N cases x p variables data X. Many ways are possible. One of the most efficient (fast) matrix ways with data of typical size is as follows.
Matrix of group means (centroids), k groups x p variables: $\bf M= \frac{G'X}{n'[1]}$, where $[1]$ is the p-length row of ones; $\bf n$ is a row defined earlier; $\bf G$ also see above; $\bf X$ is the data with columns (variables) centered about their grand means.
Total scatter matrix $\bf T=X'X$, and $SS_t= trace(\mathbf T)$.
Between-group scatter matrix $\bf B=(GM)'(GM)$, and $SS_b= trace(\mathbf B)$.
Pooled within-group scatter matrix $\bf W=T-B$, and $SS_w= trace(\mathbf W)= SS_t-SS_b$.
X with columns centered
-2.2333 1.2817
-3.6233 -1.1783
2.3267 2.9117
-1.1333 -1.2183
3.3667 -5.1783
1.2967 3.3817
comp M= (t(G)*X)/(t(n)*make(1,ncol(X),1)).
print M.
M
-1.1767 1.0050
1.1167 -3.1983
1.2967 3.3817
comp Tot= sscp(X). /*T scatter matrix
print Tot.
print trace(Tot).
Tot
37.8299 -3.4865
-3.4865 51.2441
TRACE(Tot)
89.07401667
comp GM= G*M.
comp B= sscp(GM). /*B scatter matrix
print B.
print trace(B).
B
8.3289 -6.3057
-6.3057 34.9244
TRACE(B)
43.25328333
comp W= Tot-B. /*W scatter matrix
print W.
print trace(W).
W
29.5011 2.8192
2.8192 16.3197
TRACE(W)
45.82073333
(If you do not center $\bf X$ initially, $\bf W=T-B$ persists, and gives the same $\bf W$ as before, however $\bf M$, $\bf T$, and $\bf B$ matrices will be different from what before.)
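For completeness, a pure-Python sketch of the raw-data route, assuming the centered X printed above (only the traces are computed, since those are the three sums of squares):

```python
# trace(T), trace(B), trace(W) from the centered cases x variables data X
X = [
    [-2.2333,  1.2817],
    [-3.6233, -1.1783],
    [ 2.3267,  2.9117],
    [-1.1333, -1.2183],
    [ 3.3667, -5.1783],
    [ 1.2967,  3.3817],
]
group = [0, 0, 0, 1, 1, 2]
k, p = 3, 2

# group centroids M (k x p)
M = []
for g in range(k):
    rows = [X[i] for i in range(len(X)) if group[i] == g]
    M.append([sum(r[j] for r in rows) / len(rows) for j in range(p)])

n = [group.count(g) for g in range(k)]
trace_T = sum(x * x for row in X for x in row)                    # trace(X'X)
trace_B = sum(n[g] * M[g][j] ** 2 for g in range(k) for j in range(p))
trace_W = trace_T - trace_B
```

The traces agree with SSt, SSb, SSw obtained from the distance matrix, up to the rounding in the printed X.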
|
48,510
|
Sums-of-Squares (total, between, within): how to compute them from a Distance Matrix?
|
Sum of squares is closely tied to Euclidean distance. Hamming (on bits) is a special case, as it is the same as Euclidean (on bits), but you cannot conclude from an arbitrary distance matrix what the SSQ etc. are.
Recall how the sum-of-squares are usually defined, compared to Euclidean distance:
$$
SSQ(A,B) = \sum_{a\in A} \sum_{b\in B} \sum_i (a_i-b_i)^2
\\
= \sum_{a\in A} \sum_{b\in B} d^2_{\text{Euclidean}}(a, b)
$$
So if your distance matrix stores Euclidean distances (or squared Euclidean), then you can use that second line to compute SSQ.
If you have a different distance function, the result will usually be wrong (unless you use a different definition of "sum of squares" than the usual one; I believe you could use the second line, but this may cause trouble in other situations such as k-means).
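A tiny numeric illustration of both points, with made-up coordinates: the pairwise squared Euclidean distances recover SSQ exactly, while a non-Euclidean metric (Manhattan, squared) does not:

```python
# SSQ(A, B) from coordinates vs. from pairwise distances
A = [(0.0, 0.0), (1.0, 0.0)]
B = [(1.0, 1.0)]

# definitional SSQ: sum over pairs of sums of squared coordinate differences
ssq = sum(sum((ai - bi) ** 2 for ai, bi in zip(a, b)) for a in A for b in B)

# the same quantity via squared Euclidean distances...
d2_euclid = [sum((ai - bi) ** 2 for ai, bi in zip(a, b)) for a in A for b in B]
# ...and via squared Manhattan distances, which gives a different number
d2_manhattan = [sum(abs(ai - bi) for ai, bi in zip(a, b)) ** 2
                for a in A for b in B]
```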
|
48,511
|
Negative Binomial MGF converges to Poisson MGF
|
You make a mistake by ignoring $p^r$: If you consider your MGF $$M_X(t) = \frac{p^r}{[1-e^t(1-p)]^r}\,,$$
then $$\log\{M_X(t)\} = r\log(p)-r\log\{1-e^t(1-p)\}$$ and using the asymptotic equivalences
\begin{align*}
r\log(p)-r\log\{1-e^t(1-p)\}
&= r\log(1-[1-p])-r\log\{1-e^t(1-p)\}\\
&\approx -r[1-p]+re^t(1-p)\\
&\approx \lambda[-1+e^t]
\end{align*}
which shows that the limiting value of the MGF is
$$ \exp\{\lambda[e^t-1]\}$$ as requested in this exercise.
Note: There are two versions of the MGF for a Negative Binomial $(r,p)$ distribution,
one for the number of trials and one for the number of failures. The
current version is the MGF for the number of failures, which starts at
zero like the Poisson distribution.
|
48,512
|
Negative Binomial MGF converges to Poisson MGF
|
$$\eqalign{ M_{x}(t)&=\left[\frac{p}{1-e^{t}(1-p)}\right]^r \\
&=\left[\frac{1-(1-p)}{1-e^{t}(1-p)}\right]^r\\
&= \left[\frac{1-\frac{r(1-p)}{r}}{1-\frac{e^{t}r(1-p)}{r}}\right]^r\\
&\to\frac{e^{-\lambda}}{e^{-e^{t}\lambda}}\\
&=e^{-\lambda}e^{e^{t}\lambda}=e^{\lambda(e^{t}-1)}
}$$
by letting $r\to\infty$ with $r(1-p)=\lambda$ held fixed, so that numerator and denominator each have the form $(1+a/r)^r\to e^a$, and we have the desired result
|
48,513
|
Negative Binomial MGF converges to Poisson MGF
|
The answer above is correct that you ignored the numerator $p^r$, but I found the illustration a little confusing, so here is a standard way to solve the problem:
$\begin{aligned} \lim_{r\to \infty} M_{NB}(t) & = \lim_{r\to \infty} (\frac{p}{1-(1-p)e^t})^r \\ &= \lim_{r\to \infty} (\frac{1-(1-p)}{1-(1-p)e^t})^r \\ &= \lim_{r\to \infty} \frac{(1 + \frac{1}{r}(-\lambda))^r }{( 1+ \frac{1}{r}e^t(-\lambda))^r } \quad \quad r(1-p) = \lambda \Rightarrow 1-p = \frac{\lambda}{r} \\ &= \frac{e^{-\lambda}}{e^{-\lambda e^t}} \\ &= e^{\lambda(e^t-1)} \end{aligned}$
which is the $MGF$ of Poisson Distribution
Note: Lemma 2.3.14 in the Statistical Inference by Casella and Berger book states a useful limit where when we have $\lim_{n\to\infty} a_n = a$
$\lim_{n\to\infty} (1 + \frac{a_n}{n})^n = e^a$
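The limit can also be checked numerically, a sketch assuming an arbitrary fixed $\lambda$ and $t$: hold $\lambda = r(1-p)$ constant, let $r$ grow, and watch the NB MGF approach the Poisson MGF.

```python
import math

lam, t = 2.0, 0.5
poisson_mgf = math.exp(lam * (math.exp(t) - 1.0))

def nb_mgf(r):
    # NB MGF (number-of-failures version), with p chosen so r*(1-p) = lam
    p = 1.0 - lam / r
    return (p / (1.0 - (1.0 - p) * math.exp(t))) ** r

gap_small_r = abs(nb_mgf(10) - poisson_mgf)
gap_large_r = abs(nb_mgf(1e6) - poisson_mgf)
```

For $r=10$ the gap is still large; by $r=10^6$ the two MGFs agree to several decimal places.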
|
48,514
|
Convergence of the Independent Metropolis-Hastings algorithm
|
A more relevant paper about the convergence of Metropolis-Hastings algorithms is the one by Mengersen and Tweedie (1996) since it is both quite readable and general. In this paper, two major results can be singled out:
The independent Metropolis-Hastings algorithm with target $p$ and proposal $q$ leads to a uniformly ergodic Markov chain when $p/q$ is bounded;
In the case of a target with a non-compact support, the random walk Metropolis-Hastings algorithm cannot produce a uniformly ergodic Markov chain. There exist some conditions under which the Markov chain is geometrically ergodic.
If you want a deeper entry on convergence properties of Metropolis-Hastings algorithms, the series of papers written by Gareth Roberts (Warwick) and Jeff Rosenthal (Toronto) contain a wealth of results.
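The first result can be made concrete with a minimal independent Metropolis-Hastings sampler; the target and proposal below are illustrative choices (an unnormalized standard normal restricted to $[-5,5]$, with a uniform proposal on the same interval), picked so that $p/q$ is bounded:

```python
import math, random

random.seed(0)

def target(x):
    # unnormalized N(0,1) density restricted to [-5, 5]
    return math.exp(-0.5 * x * x) if -5.0 <= x <= 5.0 else 0.0

def independent_mh(n_iter):
    # proposal q = Uniform(-5, 5); sup p/q is finite, so by the
    # Mengersen-Tweedie criterion the chain is uniformly ergodic
    x, samples = 0.0, []
    for _ in range(n_iter):
        y = random.uniform(-5.0, 5.0)
        # the constant proposal density cancels in the acceptance ratio
        if random.random() < target(y) / target(x):
            x = y
        samples.append(x)
    return samples

s = independent_mh(20000)
m = sum(s) / len(s)
v = sum((xi - m) ** 2 for xi in s) / len(s)
```

With $p/q$ bounded the acceptance ratio never degenerates, which is the practical face of uniform ergodicity; the chain's mean and variance settle near 0 and 1.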
|
48,515
|
"Error: no valid set of coefficients has been found: please supply starting values" when trying to get confidence intervals in R
|
I can't solve this problem (yet), but I can diagnose some of what's going wrong.
This is a summary of your data:
with(dd,table(Treatment,Count))
Count
Treatment 0 1 2 4
1 13 0 0 0
2 4 8 1 0
3 13 0 0 0
4 5 6 1 1
You can see that treatments 1 and 3 have all of the values equal to zero. When we fit a Poisson GLM to this, we fit the parameters on the log scale - that is, the intercept is the log-density of the first treatment, and the other parameters are differences between the log-density of the other treatments and the first. If we look at the coefficient table:
printCoefmat(coef(summary(Tetab.pglm)))
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.0303e+01 4.3105e+03 -0.0047 0.9962
Treatment2 2.0040e+01 4.3105e+03 0.0046 0.9963
Treatment3 9.9225e-09 6.0960e+03 0.0000 1.0000
Treatment4 2.0223e+01 4.3105e+03 0.0047 0.9963
We see that all the parameters are large (+20/-20) except for treatment 3 which is basically zero; the standard errors are huge; and the p-values are basically 1. The phenomenon of ridiculous standard errors is called the Hauck-Donner effect, and it occurs in this kind of extreme situation.
The zero-inflation stuff seems totally unnecessary here and will make an already difficult situation a bit harder.
In this case the maximum-likelihood estimate of the log-density for treatments 1 and 3 is actually $-\infty$, which is going to make life harder. It is in principle possible to compute a finite upper confidence interval for treatment 1 and lower confidence intervals for the differences between treatment 1 and (2,4), but it's going to be numerically ugly.
Probably (?) the best solution is some kind of bias-reduced or penalized estimate, which pushes the solution away from $-\infty$. I thought arm::bayesglm() would do this, but I still ran into trouble:
b1 <- bayesglm(Count ~ Treatment, data= dd, family=poisson)
gives reasonable answers, but confint(b1) still fails ...
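To see why the MLE degenerates, a small pure-Python check (using the 13 zero counts of treatment 1): with all counts zero the Poisson log-likelihood is $-n\mu$, monotone increasing as $\mu\to 0$, so the MLE of $\log\mu$ is $-\infty$.

```python
import math

counts = [0] * 13   # treatment 1 (and 3): all zeros

def poisson_loglik(mu):
    # iid Poisson(mu) log-likelihood: sum(y*log(mu) - mu - log(y!))
    return sum(y * math.log(mu) - mu - math.lgamma(y + 1) for y in counts)

# the log-likelihood keeps increasing as mu shrinks toward zero
lls = [poisson_loglik(mu) for mu in (1.0, 0.1, 0.001, 1e-8)]
```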
|
48,516
|
How can I make the correct time-series analysis for my data?
|
You would likely be looking to calculate the residual cross-correlation function, i.e. the cross-correlation function computed on the residuals of models fitted separately to the two time series (fitted using ARMA models, auto-regressive moving average models). The calculation will yield a plot that shows whether there is a lead-lag (potentially causal) relationship between the two inputs, X and Y, and at which lags it is significant.
In order to fit the ARMA models properly, there are some assumptions made. The model should be stationary, so you may need to use a differenced ARMA model in fitting (see ARIMA). There are also the usual assumptions in fitting an ARMA model, such as normality of data, iid, constant variance. You should check these in your data before proceeding; there are some transformations you can do to your data to help make them more normal if required (such as Box-Cox).
In terms of a textbook, I took a course with this professor and found the book to be fairly helpful. The link is no longer working but you can likely find another copy somewhere.
http://www.systems.uwaterloo.ca/Faculty/Hipel/Time%20Series%20Book.htm
Hope that helps. I am afraid I don't work in SPSS so I can't help you there (I work in R myself). If you can find a way to fit ARMA models in SPSS and get the residuals, you should be able to use the cross-correlation function on the residuals for your statistic.
Best of luck!
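A rough sketch of the prewhiten-then-cross-correlate recipe (simulated data and a deliberately simple AR(1) fit, not SPSS or the full ARMA machinery): y is built to respond to x at lag 2, and the residual CCF peaks there.

```python
import random

random.seed(1)

# simulate x as AR(1) and y as a lag-2 response to x plus noise
n, phi = 3000, 0.7
x = [0.0] * n
for t in range(1, n):
    x[t] = phi * x[t - 1] + random.gauss(0, 1)
y = [0.0, 0.0] + [0.8 * x[t - 2] + random.gauss(0, 0.5) for t in range(2, n)]

def mean(v):
    return sum(v) / len(v)

def acov(v, w, lag):
    # cross-covariance between v[t] and w[t + lag]
    mv, mw = mean(v), mean(w)
    return sum((v[t] - mv) * (w[t + lag] - mw)
               for t in range(len(v) - lag)) / len(v)

# fit AR(1) to x via the lag-1 autocorrelation (Yule-Walker), then
# prewhiten BOTH series with the same filter before cross-correlating
phi_hat = acov(x, x, 1) / acov(x, x, 0)
a = [x[t] - phi_hat * x[t - 1] for t in range(1, n)]
b = [y[t] - phi_hat * y[t - 1] for t in range(1, n)]

def ccf(lag):
    return acov(a, b, lag) / (acov(a, a, 0) * acov(b, b, 0)) ** 0.5

peak = max(range(6), key=lambda L: abs(ccf(L)))
```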
|
48,517
|
Correctly expressing improvement in AUC?
|
You can equivalently use the following expressions:
50% relative improvement
25% absolute improvement, or a 25 percentage point improvement.
Speaking of correct terms, AUROC (if that is your metric) should be used instead of AUC, as the latter term is ambiguous.
|
48,518
|
Correctly expressing improvement in AUC?
|
I just posted a somewhat related question here: Compare and quantify relative improvement in ROC AUC scores? that has this as a component to it.
I'm not sure, though given the ROC AUC represents the probability that a classifier scores a positive case higher than a negative case, perhaps comparing the magnitude of the odds ratios would be an appropriate method for comparison.
So rather than 0.75 / 0.50, this becomes (3/1) / (1/1), so the improvement is 3x, i.e. the new model has 3x the odds of scoring a positive case more highly than a negative case compared to the old model.
(To the point above, 0.50 represents random chance, so perhaps these should be rescaled to between 0.5 and 1 which changes the interpretation though. Also in the case when you're comparing against a model with ROC AUC of 0.5, any improvement would represent infinite improvement. Comparing the odds ratios under this rescaled version also means that an improvement from 0.51 to 0.59 would represent the same amount of improvement as 0.91 to 0.99, however the former may feel more likely to arise randomly -- so would probably want to also consider some notion of confidence when comparing ROC AUCs.)
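The odds comparison is simple arithmetic; a hypothetical helper to make it concrete:

```python
def auc_odds(auc):
    # odds that the classifier ranks a random positive above a random negative
    return auc / (1.0 - auc)

# 0.75 -> odds 3:1, 0.50 -> odds 1:1, so a 3x improvement in these odds
improvement = auc_odds(0.75) / auc_odds(0.50)
```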
|
48,519
|
Correctly expressing improvement in AUC?
|
To calculate your relative improvement assumed to be $x$, you should use $x=(0.75-0.5)/0.5$.
I believe the relative improvement is more intuitive in the AUC case. Also, AUC is commonly used in the literature, so both AUC and AUROC are okay.
|
48,520
|
Basic question on link function in GLM
|
All of the basic regression-type models that people use are for the mean. With OLS regression, where the response is assumed conditionally normal, your predicted values, $\hat y_i$, are the conditional means (cf., here). So in a GLiM context more broadly, where the response is distributed as something else like a Bernoulli, we also want to predict the mean.
A more general question, setting aside generalized linear models, is why we might want to predict means at all. First, the mean is the expected value. Moreover, for distributions in the exponential family (which means they are applicable for the GLiM) the mean is one of the parameters of the distribution. If you know that the distribution is something or other, and you know the mean, you basically know everything there is to know, or at least much of it. (Some distributions have additional parameters that you would still want to estimate, for example, for the normal you would want to know the variance as well.)
You don't have to want to know the mean, however. You might want to know the value of some quantile, for example the 37th percentile. You can model that with quantile regression. Ordinal logistic regression and the Cox proportional hazards model don't assume distributional forms and aren't estimating conditional means in a direct sense.
|
48,521
|
How can we convert values proportional to probabilities to Bernoulli probabilities?
|
Since $p(1)=p$ and $p(0)=1-p$ are both proportional to a known expression* (the unscaled probabilities, $u(i)=c\,p(i)$, with the same unknown constant of proportionality, $c$) and you know the $p(i)$ values must add to $1$, then $u(0)+u(1)=c$.
Which is to say $p(i) = \frac{u(i)}{u(0)+u(1)},\: i=0,1$.
(This notion is widely used in Bayesian statistics with many kinds of discrete variables.)
Note that $u(0)=1$ (always, since the power is $s_i=0$), so $p(0) = 1/(1+u(1))$ and $p(1)=u(1)/(1+u(1))$
*(these $p$'s are $q(s_i|\alpha_0)$ in the paper, for $s_i=0$ and $1$ respectively)
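In code, with a hypothetical unscaled value $u(1)=3$ (and $u(0)=1$, as noted above):

```python
# normalize two unscaled Bernoulli probabilities by their sum
u0, u1 = 1.0, 3.0        # u0 is always 1 here, since the exponent s_i is 0
p0 = u0 / (u0 + u1)
p1 = u1 / (u0 + u1)
```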
|
48,522
|
Why is regularization interpreted as a gaussian prior on my weights?
|
Since we're using MAP we are trying to maximize the probability of the parameters given the data.
$$
P(W|x,y) = \frac {P(x,y|W) P(W)} {P(x,y)}
$$
$P(x,y)$ can be ignored since it's fixed for our data. So we are trying to maximize $log(P(x,y|W)) + log(P(W))$. Let's look at $P(W)$.
If each element of $W$ is drawn independently from a unit Gaussian, the probability of the matrix is
$$
P(W) = \prod_{ij} \frac {1} {\sqrt{2 \pi}} \exp\Big( -\frac {w_{ij}^2} 2 \Big)
$$
so $log(P(W))$ is
$$
-\frac 1 2 \sum_{ij}{w_{ij}}^2
$$
plus some constant terms. The log-posterior (up to constants) is then
$$
log(P(x,y|W)) - \frac 1 2 \sum_{ij} {{w_{ij}}^2}
$$
Since we are thinking in terms of losses, we negate this and minimize. Now the first term is the cross-entropy loss and the second term is $\frac 1 2 R(W)$.
If you have a (reasonable in some sense that I don't know how to define) more or less arbitrary penalty function, you can obtain a density from it by exponentiating its negative and then normalizing by the integral over its domain (provided that integral is finite). In this case it's just easy to see that the result is a Gaussian.
This does not mean that the elements of $W$ actually are sampled from a Gaussian. What it means is that you believe that's what $W$ looks like before you have any evidence to the contrary. In other words the prior on the elements of $W$ is a Gaussian.
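A quick numerical check of the derivation above (using NumPy; the weight values are arbitrary): the negative log-density of an i.i.d. standard-normal prior differs from the penalty $\frac 1 2 \sum_{ij} w_{ij}^2$ only by a constant that does not depend on $W$:

```python
import numpy as np

def log_gaussian_prior(w):
    # log P(W) for i.i.d. standard-normal entries w_ij
    return np.sum(-0.5 * w**2 - 0.5 * np.log(2 * np.pi))

def l2(w):
    # the penalty 1/2 * sum_ij w_ij^2
    return 0.5 * np.sum(w**2)

rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=(2, 3, 4))   # two arbitrary weight matrices

# -log P(W) and the L2 penalty differ by a constant independent of W
c1 = -log_gaussian_prior(w1) - l2(w1)
c2 = -log_gaussian_prior(w2) - l2(w2)
assert np.isclose(c1, c2)
assert np.isclose(c1, w1.size * 0.5 * np.log(2 * np.pi))
```

So minimizing the penalized loss and maximizing the log-posterior differ only by that additive constant, which does not affect the minimizer.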
|
48,523
|
Improving Chebyshev-type bound for discrete uniform distribution
|
If you know that the random variable is bounded then you know a lot about the random variable, and the Chebyshev and Markov inequalities will not be tight in general.
A better inequality is Hoeffding's inequality (consider the general case and not the binomial version). It implicitly takes into account that all higher moments exist (since the RV is bounded with finite support), but it is nice because it needs only the bounds, not any further distributional information, and it says that the sample mean converges to the true mean exponentially fast.
For example, if your RV is bounded on $[-1,1]$ (note that it doesn't have to be discrete), then you can say the following for $\epsilon>0$:
$$ \mathbb{P}(|\mu - \bar{\mu}| \geq \epsilon) \leq 2 \exp\bigg(-\frac{n\epsilon^2}{2}\bigg) $$
I don't believe you could do much better by trying to factor in the discreteness, because all the probability mass could sit at the endpoints, maximizing the variance (in other words, the worst-case scenarios are probably the discrete cases).
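A small simulation sketch (assuming NumPy; the sample size and $\epsilon$ are arbitrary choices) comparing the empirical tail probability with the Hoeffding bound for variables bounded on $[-1,1]$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, eps = 200, 20_000, 0.2

# X_i uniform on {-1, +1}: bounded on [-1, 1], with mean mu = 0
x = rng.choice([-1.0, 1.0], size=(trials, n))
xbar = x.mean(axis=1)

empirical_tail = np.mean(np.abs(xbar) >= eps)
hoeffding_bound = 2 * np.exp(-n * eps**2 / 2)

# The bound holds (and here is quite loose, as such bounds tend to be)
assert empirical_tail <= hoeffding_bound
```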
|
48,524
|
Expectation of gradients
|
$$ E_Q\left[\frac{\nabla_\phi Q_\phi(h|x)}{Q_\phi(h|x)}\right] = \int\frac{\nabla_\phi Q_\phi(h|x)}{Q_\phi(h|x)} Q_\phi(h|x)\,dh = \int{\nabla_\phi Q_\phi(h|x)}\,dh$$
Assuming that you can exchange the integral and the gradient operators (deep waters)
$$ = \nabla_\phi\int{ Q_\phi(h|x)}\,dh = \nabla_\phi E_Q[1] = \nabla_\phi 1 = 0$$
Since the distribution $Q$ is chosen by the user, you can say that this result is satisfied by those $Q$ that allow exchanging the integral and the gradient.
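A Monte Carlo sketch of the identity (assuming, for illustration, that $Q_\phi$ is a Gaussian with mean $\phi$, so the score is $\nabla_\phi \log Q_\phi(h|x) = h - \phi$; the parameter value and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 1.5                              # the parameter phi (arbitrary value)
h = rng.normal(mu, 1.0, size=1_000_000)

# For Q_phi = N(mu, 1):  grad_mu Q_phi / Q_phi = grad_mu log Q_phi = h - mu
score = h - mu
est = score.mean()                    # Monte Carlo estimate of E_Q[score]
assert abs(est) < 0.01                # close to the exact value, 0
```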
|
48,525
|
What is the proper way to use rfImpute? (Imputation by Random Forest in R)
|
I'm not entirely sure if this is an answer to your question, but maybe you'll find it useful.
Maybe the author of the randomForest package would disagree with me, but I feel like the rfImpute() function is mostly used by, or called from, other imputation packages in their algorithms to impute many variables. If you only have one variable with missing data, then using this function as a stand-alone may work. However, I think it is the case for most people that they have many variables with missing data in a dataset that they'd like to impute. Enter the packages missForest and mice.
If you use the R package missForest, you can impute your entire dataset (many variables of different types may be missing) with one command missForest(). If I recall correctly, this function draws on the rfImpute() function from the randomForest package. For some reason (maybe others can elaborate), when you use the missForest() function, the other variables that are used to predict a single variable can also have missingness. So I think using this function and package are a nice idea if you are hoping to only get one dataset out, after all variables have been imputed.
The downside to using missForest() is that you only get one dataset, which does not allow you to take into account the uncertainty of your estimates (in your follow-on analytical models). So your analytical models will have incorrect confidence intervals if you just base the analysis on that one imputed dataset. If that doesn't matter to you, then I highly recommend this package and function, because it is very easy to use and specify your imputation model.
However, if you do need to get appropriate confidence intervals and pooled estimates in your analytical models, then you should probably use multivariate imputation by chained equations (MICE) approaches to imputation. For this, you can use the mice package. There is recent functionality within this package that allows you to specify which variables you'd like to impute with a random forest algorithm, and which you would like to use the usual methods (e.g. pmm). When specifying your imputation model with the mice() function, under methods you would do something like meth <- c("rfcat", "rfcont").
missForest has a nice vignette you can look up in R.
Here is a nice resource for how to set up your imputation models using mice:
http://www.stefvanbuuren.nl/publications/MICE%20in%20R%20-%20Draft.pdf
|
48,526
|
Combining ARIMA model with regression
|
Yes. You can either use an ARIMAX model, or a regression with ARIMA errors. Rob Hyndman explains the difference in his blog post "The ARIMAX model muddle". In R, you can use the forecast package to fit regressions with ARIMA errors, or the TSA package to fit an ARIMAX model.
|
48,527
|
Finding pdf of transformed variable for uniform distribution
|
One way to check whether you are right, or the webpage is right is to see if the pdfs integrate to 1. $X$ is defined between 0 and 1, so $Y = X^3$ is also defined between 0 and 1.
$$\int_0^1 \dfrac{1}{3} \dfrac{1}{y^2}dy = \dfrac{1}{3}\left[-\dfrac{1}{y} \right]^1_0 = \text{does not converge}. $$
So clearly the webpage is wrong.
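To double-check by simulation (NumPy; the probe points are arbitrary): the correct pdf $\frac13 y^{-2/3}$ corresponds to the CDF $F_Y(y)=y^{1/3}$, which matches the empirical CDF of simulated $X^3$, while the webpage's pdf cannot be normalized:

```python
import numpy as np

# Y = X^3 with X ~ Uniform(0, 1); CDF: F_Y(y) = P(X^3 <= y) = y**(1/3),
# so the pdf is f_Y(y) = (1/3) y**(-2/3), whose antiderivative y**(1/3)
# integrates to 1 - 0 = 1 over (0, 1).
rng = np.random.default_rng(0)
y_samples = rng.uniform(size=1_000_000) ** 3

# Empirical CDF matches y**(1/3) at a few probe points
for y in (0.001, 0.1, 0.5, 0.9):
    assert abs(np.mean(y_samples <= y) - y ** (1/3)) < 0.005

# The webpage's pdf (1/3) y**(-2) would need the antiderivative -1/(3y),
# which blows up near 0 -- it cannot integrate to 1 on (0, 1).
```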
|
48,528
|
Stacking: Do more base classifiers always improve accuracy?
|
As with any classifier, adding new input features can improve classification accuracy when the new features contain new information about the labels. This performance improvement isn't guaranteed because classifiers are imperfect and may not be able to exploit the information. If the new features share information with existing features, the new features may or may not help. For example, multiple noisy copies (or invertibly transformed versions) of a signal can help to 'average out' the noise. But, if there's no noise or the noise is correlated across signals, then the multiple copies may just be redundant. If the new features don't contain information about the labels, they'll do nothing in the best case and hurt performance in the worst case. In all cases, the curse of dimensionality must be considered. The presence of more features can hurt many classifiers and can increase opportunities for overfitting. It's possible that this effect could overshadow the benefit of new features if that benefit is small.
The situation is similar when adding new base classifiers to a stacking setup, because the base classifiers' outputs are features for the final classifier. All the same arguments from above hold here. In this case, these 'second level' features are likely correlated because the base classifiers are all trying to predict the same thing. But, they do it suboptimally. The hope is that they behave in different ways, so that the final classifier can combine the noisy predictions into a better final prediction. Loosely, then, adding new base classifiers has the best chance of helping when they do a good job and behave differently than existing base classifiers, but this isn't guaranteed. If the new classifiers perform at chance they can't help, and will probably hurt. The final classifier can overfit, and providing it with more base classifiers may increase its ability to do so.
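A toy sketch of the 'averaging out' intuition (all numbers invented; a plain average stands in for the learned final classifier): two base predictors that see the same signal with independent noise combine into a better prediction than either alone:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
s = rng.normal(size=n)                 # true (unobserved) score
y = s > 0                              # true binary label

# Two base classifiers: the same signal with independent noise
p1 = s + rng.normal(size=n)
p2 = s + rng.normal(size=n)

# A trivial stand-in for the final classifier: average the base outputs
stacked = (p1 + p2) / 2

acc = lambda p: np.mean((p > 0) == y)
assert acc(stacked) > acc(p1) and acc(stacked) > acc(p2)
```

If the two noise terms were perfectly correlated instead of independent, the average would gain nothing, matching the redundancy caveat above.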
|
48,529
|
Generalised Boosted Models (GBM) Assumptions
|
After contacting the author of the paper directly, I can answer the question myself. Assumptions:
1) Independence of observations
2) Assumptions related to the interaction depth. If set to $1$, a strictly additive model is assumed. As we increase the interaction depth, this assumption is relaxed.
|
48,530
|
Why do normal-pseudo residuals measure the deviation from the median?
|
Background
In this paper, $Y$ is a random variable with continuous distribution function $F(y)=\Pr(Y \le y)$. One way to measure how extreme a small value of $Y$ may be is to report the "probability of observing an equal or more extremely (small) value under the model [$F$]": in other words, when $F(y)$ is close to $0$, $y$ is an extremely low value for $Y$.
Some people, in whom reasoning about Normal distributions (determined by the Standard Normal distribution function $\Phi$) is deeply ingrained, prefer to re-express $F(y)$ in terms of the number of standard deviations ("Z score") $z$ for which $\Phi(z) = F(y)$. If we assume that $F$ strictly increases, this can be solved to yield
$$Z(y) = \Phi^{-1}(F(y)),$$
producing a new random variable $Z(Y)$ with a standard Normal distribution.
Explanation
$Z(y)=0$ if and only if $$1/2 = \Phi(0) = \Phi(Z(y)) = F(y).$$
That is the definition of the median of $F$: a value $y$ for which $F(y)$ is $50\%$.
If a distribution $F$ has a mean $\mu_F$, it is not necessarily equal to its median. When, for instance, the mean of $F$ exceeds its median, then $Z(\mu_F)$ must be greater than $0$. Consequently, $Z$ when thought of relative to $0$, which is the center of a Normal distribution according to any definition whatsoever, truly reflects deviations relative to the median of $F$, not its mean (and not any other particular central location of $F$).
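A small numerical illustration (Python standard library only; the Exponential(1) distribution is just one example of an $F$ whose mean exceeds its median):

```python
from statistics import NormalDist
import math

Phi_inv = NormalDist().inv_cdf       # standard-normal quantile function

# Take F to be the Exponential(1) CDF: F(y) = 1 - exp(-y)
F = lambda y: 1 - math.exp(-y)
Z = lambda y: Phi_inv(F(y))          # Z(y) = Phi^{-1}(F(y))

median = math.log(2)                 # median of Exponential(1)
assert abs(Z(median)) < 1e-9         # Z = 0 exactly at the median

mean = 1.0                           # the mean exceeds the median here...
assert Z(mean) > 0                   # ...so Z(mu_F) > 0, as stated above
```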
An application
In United States case law on discrimination, courts have been exposed to enough statistical experts to have heard about standard deviations and z-scores. Some case law has resulted in standards (to serve as evidence of discrimination) that are expressed in terms of "numbers of standard deviations;" that is, in terms of Z-scores. When the statistic of interest (such as a measure of discriminatory impact) does not have a normal distribution, some experts like to convert p-values into "numbers of standard deviations." (They hope the courts will thereby understand the p-values better.) These could be interpreted as the pseudo-residuals discussed in this paper.
|
48,531
|
Why must kernel functions be scalar products [duplicate]
|
AFAIK the kernel trick applies only when the data appear solely in the form of scalar products like $x_1'x_2$. For many problems we need the dual representation in such a form so that the kernel trick can be applied.
If the kernel can be written as a scalar product in some feature space, $k(x_1, x_2)=\phi(x_1)'\phi(x_2)$, then by applying the kernel trick we are actually solving the same problem, just in another feature space.
But if the kernel cannot be written as a scalar product, we can't be sure that plugging it in would still solve the problem we wanted to solve in the first place.
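A concrete check (NumPy) for one kernel that can be written as a scalar product: the homogeneous degree-2 polynomial kernel $k(x_1,x_2)=(x_1'x_2)^2$ on $\mathbb{R}^2$, with the explicit feature map $\phi(x)=(x_1^2,\ \sqrt{2}\,x_1 x_2,\ x_2^2)$:

```python
import numpy as np

def k(x, z):
    # homogeneous degree-2 polynomial kernel: k(x, z) = (x'z)^2
    return np.dot(x, z) ** 2

def phi(x):
    # explicit feature map on R^2 whose scalar product reproduces k
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

rng = np.random.default_rng(0)
x, z = rng.normal(size=(2, 2))
assert np.isclose(k(x, z), np.dot(phi(x), phi(z)))
```

Expanding $(x_1 z_1 + x_2 z_2)^2$ term by term shows why this particular $\phi$ works.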
|
48,532
|
What are some of the best approaches for variable selection in Poisson regression?
|
You can use the lasso or elastic net regularisation. Both are available in glmnet, if you are an R user, with a poisson dependent variable, using the family=poisson option. Hopefully you have sufficient observations to be able to split the dataset and do cross validation.
Stepwise selection methods are generally best avoided, particularly with 2000 variables.
log(y) isn't a good idea if the data are poisson since you will be taking logs of zero. Of course you could use log(y+1) but since glmnet supports the poisson distribution this doesn't seem necessary unless there are computational limitations.
|
48,533
|
Distribution of Quotient of 2 dependent random variables
|
Although the brute force method I mentioned in the comments may work, there is an easier way which does not rely on Basu's theorem, and it also avoids integration of the joint density of transformed random variables. I leave some details out because the question is of homework-style.
Write
$$
Z_n = \frac{\frac{1}{n}U_n}{\frac{1}{n}V_n}.
$$
1) Find the distribution of the numerator random variable using a density transform and consider how it depends on $n$.
2) Note that all moments of $X_i$ exist and are finite. In particular, the variance of $X_i^2$ is finite. Thus, you can use Chebyshev's inequality to prove that $V_n/n \to 1$ in probability.
3) Prove that, in general, if for some sequence of random variables $W_n$ it holds that $W_n \overset{p}{\to} 1$, then $Y/W_n \overset{d}{\to} Y$. Note that you do not need independence for this.
|
48,534
|
What happens if I flip targets and predictions in cross-entropy?
|
I'll rewrite the cross entropy in terms of distributions $q$ and $z$, so we can save your notation $t$ and $p$ for explicitly talking about classifiers. The cross entropy is:
$$H(q, z) = -\sum_{x} q(x) \log z(x)$$
The information theoretic meaning of the cross entropy is this: Say $q$ is a distribution that generates some data. We want to encode the data using a set of symbols (e.g. to transmit it over a channel, or store it). We'd prefer to use short codes, because they require fewer resources to transmit/store. There's a fundamental relationship between code length and entropy. It turns out that, for the optimal code, the average number of bits per symbol is given by the entropy of the distribution. No encoding can be shorter than this without destroying information. The intuition is that we should assign short codes to high probability events (because they'll occur more frequently) and longer codes to low probability events. Looking at the definition of entropy:
$$H(q) = -\sum_x q(x) \log q(x)$$
This is the expected value of $-\log q(x)$, which is the optimal code length for event $x$ (given in bits if the log is base 2, or nats if it's the natural log).
But, say we don't know $q$, and instead we have some 'proxy' distribution $z$. We can design an optimal code for $z$, even though the data are generated by $q$. In that case, how many bits/nats per symbol will we use on average to encode the data? The answer to this question is given by the cross entropy $H(q, z)$. Looking at the definition of the cross entropy, we can see that it's the expected value of $-\log z(x)$ with respect to distribution $q$. Here, $-\log z(x)$ is the optimal code length for event $x$, using the code based on $z$. The expectation is taken over $q$ because that's what generated the data. As our proxy distribution $z$ becomes closer to the data-generating distribution $q$, our code will be more efficient, and the cross entropy will be lower. Its minimum possible value is $H(q)$, attained when $z = q$.
Bringing things back to classification, we have $t$ as the true/observed distribution of class labels, and $p$ as the classifier's estimated distribution over class labels. Plugging these concepts into the description of cross entropy, it makes sense to use $H(t, p)$ because $t$ is the data-generating distribution and $p$ is the 'proxy distribution'.
Here's another argument. When using hard class labels, people treat distribution $t$ as the empirical distribution, which assigns probability $1$ to the true/observed class label $i$, and $0$ to all others. In that case, the cross entropy reduces to: $-\log p(i)$. Summing over all data points, this is just the negative log likelihood. In that case, minimizing the cross entropy loss is equivalent to maximum likelihood estimation.
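To make the asymmetry concrete, here is a small Python sketch (the distributions $q$ and $z$ are made up for illustration), showing that $H(q, z) \ge H(q)$ and that $H(q, z) \ne H(z, q)$ in general:

```python
import numpy as np

def cross_entropy(q, z):
    """H(q, z) = -sum_x q(x) log z(x), in nats."""
    return -np.sum(q * np.log(z))

q = np.array([0.7, 0.2, 0.1])   # data-generating distribution
z = np.array([0.4, 0.4, 0.2])   # proxy distribution

print(cross_entropy(q, q))      # entropy H(q): the lower bound
print(cross_entropy(q, z))      # H(q, z) >= H(q)
print(cross_entropy(z, q))      # generally different: cross entropy is asymmetric
```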
|
48,535
|
Expectation with indicator function
|
The original random variable $X_{t+1}$ is normally distributed. Call its distribution $P_{X_{t+1}}$
Define a function
$$g(v) = v \cdot 1_{\{v > z_t\}}$$
where $1_{\{\cdot\}}$ is the indicator function. This can also be written as:
$$g(v) = \left \{
\begin{array}{cl}
v & v > z_t \\
0 & \text{Otherwise}
\end{array}
\right .$$
Another way to phrase your question is: what is the expected value of $g(X_{t+1})$? We can write this as:
$$E[g(X_{t+1})] = \int_{-\infty}^{\infty} g(v) P_{X_{t+1}}(v) dv$$
We know that $g(v) = 0$ for $v \le z_t$, and $g(v) = v$ for $v > z_t$. So, we can split the integral across two intervals:
$$E[g(X_{t+1})] = \int_{-\infty}^{z_t} 0 \cdot P_{X_{t+1}}(v) dv
+ \int_{z_t}^{\infty} v \cdot P_{X_{t+1}}(v) dv$$
The first term is clearly zero, so we're left with:
$$E[g(X_{t+1})] = \int_{z_t}^{\infty} v \cdot P_{X_{t+1}}(v) dv$$
$X_{t+1}$ is normally distributed, so we can substitute the $N(\mu, \sigma^2)$ density for $P_{X_{t+1}}$:
$$E[g(X_{t+1})] = \int_{z_t}^{\infty} \frac{v}{\sigma \sqrt{2 \pi}} \exp \left [ {-\frac{(v-\mu)^2}{2 \sigma^2}} \right ] dv$$
Evaluating the integral gives the final answer:
$$
E[g(X_{t+1})] =
\frac{\mu}{2} \left [
1 - \text{erf} \left (
\frac{z_t - \mu}{\sigma \sqrt{2}}
\right )
\right ]
+ {
\frac{\sigma}{\sqrt{2 \pi}}
\exp \left [
-\frac{(z_t - \mu)^2}{2 \sigma^2}
\right ]
}
$$
where $\text{erf}(\cdot)$ is the error function.
You can check that this is correct by simulation. Draw many samples from $N(\mu, \sigma^2)$, set values less than $z_t$ to zero, then take the sample mean.
Edit (as suggested by user12):
In the case where $X_{t+1}$ has mean zero, plug $\mu = 0$ into the last equation above, to obtain:
$$
E[g(X_{t+1})] =
\frac{\sigma}{\sqrt{2 \pi}}
\exp \left [
-\frac{z_t^2}{2 \sigma^2}
\right ]
$$
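Following the simulation suggestion above, here is a Python sketch of that check (the parameter values are arbitrary illustrations):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, z_t = 0.5, 2.0, 1.0   # illustrative parameter values

# Closed form for E[X 1{X > z_t}] when X ~ N(mu, sigma^2), as derived above
closed_form = (mu / 2 * (1 - math.erf((z_t - mu) / (sigma * math.sqrt(2))))
               + sigma / math.sqrt(2 * math.pi)
               * math.exp(-(z_t - mu) ** 2 / (2 * sigma ** 2)))

# Simulation: draw many samples, zero out values <= z_t, take the mean
x = rng.normal(mu, sigma, size=1_000_000)
simulated = np.where(x > z_t, x, 0.0).mean()

print(closed_form, simulated)    # the two should agree to a couple of decimals
```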
|
48,536
|
Scikit-learn's SGDClassifier code question
|
As you said, with $L_2$ regularization, the update is:
\begin{equation}
w \leftarrow w - \eta_t\lambda w - \eta_t y_t x_t 1_{\{...\}}
\end{equation}
The first two terms correspond to a simple scaling, applied with the line:
w.scale(max(0, 1.0 - ((1.0 - l1_ratio) * eta * alpha)))
where the weights in w are not actually updated: for speed, only a scaling coefficient is updated.
The last term of the update is done with the line:
w.add(x_data_ptr, x_ind_ptr, xnnz, update)
The key point is that if the update is sparse, we don't want to touch all the weights. The second step applies only the non-zero updates, but the first step would affect all the weights. To avoid that, only a scaling coefficient is updated.
You might want to check the code of the weight vector.
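To illustrate the lazy-scaling trick in isolation, here is a toy re-implementation in Python. To be clear, this is not scikit-learn's actual code (which lives in a Cython WeightVector class); it only sketches the idea:

```python
import numpy as np

class ScaledVector:
    """Toy lazily-scaled weight vector: the true weights are scale * data.
    Scaling all weights costs O(1); a sparse update costs O(nnz)."""

    def __init__(self, n):
        self.data = np.zeros(n)
        self.scale = 1.0

    def scale_by(self, c):
        # w <- c * w, without touching any entry of data
        self.scale *= c

    def add(self, idx, vals, c=1.0):
        # w <- w + c * sparse_vector; divide by scale so that
        # scale * data reflects the true updated weights
        self.data[idx] += c * np.asarray(vals) / self.scale

    def to_dense(self):
        return self.scale * self.data

w = ScaledVector(5)
w.add([0, 2], [1.0, 2.0])     # sparse gradient step
w.scale_by(0.9)               # L2 shrinkage touches only the scalar
w.add([2], [1.0])             # another sparse step after shrinkage
print(w.to_dense())           # matches the eagerly-updated result
```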
|
48,537
|
Scikit-learn's SGDClassifier code question
|
While I can't give you a complete analysis, I'm pretty sure these differences are due to the implementation targeting sparsity in the data.
Have a look at the Bottou paper you linked, especially part 5.1! There is a special treatment mentioned for sparse input, and the update rule is split into several lines.
Further evidence:
the code line w.add(x_data_ptr, x_ind_ptr, xnnz, update) you refer to: xnnz usually means the number of nonzeros (as in nonzeros of a sparse matrix), like for example here
sklearn's SGDClassifier is sparse-matrix ready (see here)
personal experience tells me that sparsity information is used!
|
48,538
|
Correlation of order statistics from Uniform parent
|
You appear to understand the steps involved. I am not sure if this is an assignment or exercise, so it may not be appropriate to show workings anyway, but am happy to sketch out the approach ...
Given: $X \sim \text{Uniform}(0,1)$ with pdf $f(x)$:
Then, the joint pdf of the $r^{\text{th}}$ and $s^{\text{th}}$ order statistics is say $g(x_r, x_s)$:
where I am using the OrderStat function from the mathStatica package for Mathematica to automate the calculation.
Given the joint pdf of the $r^{\text{th}}$ and $s^{\text{th}}$ order statistics, you seek their correlation:
All done. This should hopefully help in both forming the appropriate integrals, and checking your working.
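If it helps with checking your working without mathStatica, here is a simulation sketch in Python; the closed form used for comparison, $\sqrt{r(n+1-s)/(s(n+1-r))}$ for $r \le s$, is the known result for the Uniform(0,1) parent:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, s = 5, 2, 4                      # sample size and the two ranks (r <= s)

# Simulate many samples, sort each one, and pull out the r-th and s-th
# order statistics to estimate their correlation
samples = np.sort(rng.uniform(size=(200_000, n)), axis=1)
x_r, x_s = samples[:, r - 1], samples[:, s - 1]
sim_corr = np.corrcoef(x_r, x_s)[0, 1]

# Closed form for the Uniform(0,1) parent
exact = np.sqrt(r * (n + 1 - s) / (s * (n + 1 - r)))

print(sim_corr, exact)                 # should agree to a couple of decimals
```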
|
48,539
|
Testing for publication bias in meta-analysis when effect is raw mean
|
The "standard" Egger regression test uses $\sqrt{v_i} = SE_i$ (i.e., the standard error of the outcomes) as the predictor. This is sometimes not advisable, especially when the standard error is a function of the outcome measure itself. For example, suppose you are not meta-analyzing means, but raw (Pearson product-moment) correlation coefficients. The large-sample variance of a correlation coefficient $r_i$ can be estimated with $v_i = (1-r_i^2)^2 / (n_i-1)$. Note that $v_i$ (and hence the SE) is a function of $r_i$, so by construction, there is a relationship between the outcomes and the SEs. In that case, it may be better to use $n_i$ or some other function of $n_i$ (e.g., $1/\sqrt{n_i}$) as the predictor.
For means, there is no inherent relationship between $v_i = SD^2/n_i$ and $\bar{x}$, so this issue does not apply. Therefore, I think it would be fine to stick to the default approach and use $\sqrt{v_i} = SE_i$.
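For illustration, here is a sketch of the classical Egger-style regression (regress the standardized effect $y_i/SE_i$ on the precision $1/SE_i$ and inspect the intercept) on simulated data. Note this is the textbook variant; your meta-analysis software may use a slightly different (weighted) parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical meta-analytic data: k study means with known standard errors,
# simulated *without* publication bias (true mean 2.0)
k = 40
se = rng.uniform(0.1, 0.5, size=k)     # SE_i = sqrt(v_i)
y = 2.0 + rng.normal(0.0, se)          # observed study means

# Classical Egger test: regress y_i / SE_i on 1 / SE_i;
# the *intercept* measures funnel-plot asymmetry
z = y / se
prec = 1.0 / se
X = np.column_stack([np.ones(k), prec])
beta, *_ = np.linalg.lstsq(X, z, rcond=None)
intercept, slope = beta
print(intercept, slope)   # intercept near 0 here; slope estimates the mean
```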
|
48,540
|
monte carlo simulation using exponential distributions
|
The whole point of simulation is to show you that such variation is realistic. In fact, there appears to be nothing wrong with your results--except that you haven't yet done enough simulations to appreciate what they are telling you. In this case, the results are especially erratic because the starting population is so small.
Let's run your scenario 500 times up to a population of 300 (rather than a half dozen times to a population of 3):
It looks more stable when you start with a larger population:
Just for fun, here's a similar simulation for a population that grows from one individual to its carrying capacity:
I used R for these simulations and plots, because it does one very interesting thing. Since you know in advance that the population will progress in whole steps from the initial population to the final one, you can easily generate the sequence of population values in advance of the simulation. Thus, all that remains is to generate a set of exponentially distributed variates with rates determined by that sequence and accumulate them to simulate the birth times. R performs that with a single command (the line that creates simulation below). It takes about one second. Everything else is just parameter specification and plotting.
(I can get away with such a simple algorithm because I am running these simulations out to a given population target rather than out to a given time endpoint. Obviously the model is the same; all that differs is how I control the length of the simulation.)
rate <- 1
pop.0 <- 1
time.0 <- 0
k <- 0 # Carrying capacity (use 0 or negative when not applicable)
n.final <- 300 # Must not exceed the capacity!
#
# Pre-calculation: populations and the associated rates.
#
n <- pop.0:(n.final-1)
if (k <= 0) r <- rate * n else r <- rate * n * (1 - n/k)
#
# The simulation.
# Each iteration is stored as a column of the result.
#
simulation <- replicate(500, cumsum(c(time.0, rexp(length(n), r))))
#
# Plot the results:
# Set it up, show the overlaid growth curves, then plot a reference curve.
#
plot(range(simulation), c(pop.0, n.final), type="n", ylab="Population", xlab="Time")
apply(simulation, 2, function(x) lines(x, c(n, n.final), col="#00000020"))
if (k <= 0) {curve(pop.0 * exp((x - time.0)*rate), add=TRUE, col="Red", lwd=2)} else
curve(k*(1 - 1/(1+(pop.0/(k-pop.0))*exp(rate*(x-time.0)))), add=TRUE, col="Red", lwd=2)
|
48,541
|
How can I compute a log odds ratio for a within-subject design that could be meta-analyzed with log odds ratios from between-subject designs?
|
I'll focus in my answer purely on the question on how to compute a (log) OR based on a within-subjects design that is comparable to that from a between-subjects design.
Suppose you have a within-subjects design with these data:
condition2
decision1 decision2 total
condition1 decision1 s t a
decision2 u v b
total c d n
Note that this is the 'paired-subjects' 2x2 table based on n subjects. This table can be rearranged into a 'between-subjects' 2x2 table:
decision1 decision2 total
condition1 a b n
condition2 c d n
Then you can compute what is often called the "marginal OR" with the usual equation for computing an odds ratio with:
$$OR = \frac{ad}{bc}$$
And for meta-analytic purposes, we usually work with the log(OR), so just take the log of that. This value is then comparable to that obtained from a between-subjects design.
However, note that the same n subjects are used to compute the cell entries under condition1 and condition2, so the data are not independent. This needs to be taken into consideration when computing the sampling variance of the marginal log odds ratio. Based on Becker and Balagtas (1993) (see also: Elbourne et al., 2002, and Stedman et al., 2011), we can compute (or to be precise: estimate) the sampling variance of the marginal log(OR) with:
$$Var(log[OR]) = \frac{1}{a} + \frac{1}{b} + \frac{1}{c} + \frac{1}{d} - \frac{2\Delta}{n},$$
where
$$\Delta = n^2 \left(\frac{ns - ac}{abcd}\right).$$
(Recall that $s$ is the upper-left cell count from the paired-subjects table.)
References
Becker, M. P., & Balagtas, C. C. (1993). Marginal modeling of binary cross-over data. Biometrics, 49(4), 997-1009.
Elbourne, D. R., Altman, D. G., Higgins, J. P. T., Curtin, F., Worthington, H. V., & Vail, A. (2002). Meta-analyses involving cross-over trials: Methodological issues. International Journal of Epidemiology, 31(1), 140-149.
Stedman, M. R., Curtin, F., Elbourne, D. R., Kesselheim, A. S., & Brookhart, M. A. (2011). Meta-analyses involving cross-over trials: Methodological issues. International Journal of Epidemiology, 40(6), 1732-1734.
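A minimal Python sketch of these computations, using made-up cell counts for the paired-subjects table:

```python
import math

# Hypothetical paired-subjects 2x2 counts:
#               cond2: dec1   cond2: dec2
# cond1: dec1       s = 40        t = 10     -> a = 50
# cond1: dec2       u = 20        v = 30     -> b = 50
s, t, u, v = 40, 10, 20, 30
a, b = s + t, u + v          # condition1 margins
c, d = s + u, t + v          # condition2 margins
n = a + b                    # number of subjects

# Marginal log odds ratio from the rearranged between-subjects table
log_or = math.log((a * d) / (b * c))

# Becker & Balagtas (1993) sampling variance of the marginal log(OR)
delta = n ** 2 * (n * s - a * c) / (a * b * c * d)
var_log_or = 1 / a + 1 / b + 1 / c + 1 / d - 2 * delta / n

print(log_or, var_log_or)
```

Note that when the paired responses are positively dependent ($ns > ac$), the correction term shrinks the variance relative to treating the two conditions as independent samples.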
|
48,542
|
How can I compute a log odds ratio for a within-subject design that could be meta-analyzed with log odds ratios from between-subject designs?
|
You have the individual participant data, so you have two choices: (a) a one-step approach, which since you use R can be implemented in lme4, or (b) a two-step approach where you reduce each study to a single summary statistic and then meta-analyse the summaries using, as you suggest, metafor. If you take the one-step approach, you need to specify study and subject as random effects. Each subject in your between-subjects designs then forms his/her own cluster with a single observation, whereas the clusters from the within designs have two observations per cluster. The one-step design of course lends itself to the inclusion of person-level covariates if you have them.
|
48,543
|
Linear Regression for a discrete count dependent variable?
|
By "GLM", I assume you mean the General Linear Model, which generalizes multiple regression and ANOVA. For what it's worth, many people (and I) just call that 'multiple regression'; I typically reserve "GLM" for generalized linear model. You are right that the general linear model requires normality. For the sake of clarity, only the errors / residuals need to be normal, neither X nor Y itself actually do. (To understand this better, it may help to read my answer here: What if residuals are normally distributed, but y is not?)
On the other hand, there isn't necessarily any problem for the generalized linear model to handle count data, so long as the distribution used falls within the exponential family. The prototypical GLM for count data is Poisson regression. That may be a good option for your data. However, note that the Poisson is actually fairly restrictive: the number of zeros needs to be 'just right', and the variance of the conditional distribution of the counts needs to be equal to the conditional mean. Those constraints are not often met. As a result, a number of other options exist: quasi-Poisson regression (cf., here and/or here), negative binomial regression (cf., here), and zero-inflated and hurdle models (cf., here). If you aren't very familiar with all this, that may be a bit to navigate. Another option, if you are more familiar with it, is to use ordinal logistic regression. All you need there is to be able to say that, e.g., 1 trip is more than 0 trips. Many of the types of models mentioned in this paragraph are demonstrated at the excellent UCLA statistics help site.
Regarding your question about how this scales up to situations where there are more response possibilities, but that still cannot be negative (like income), the issue is complicated. The truth is that many variables can only take positive values, but are treated as normal(ish) and modeled with linear regression anyway. The prototypical example of regression modeling is adult height (going back to Galton), but heights cannot be negative. The actual question isn't whether the errors are perfectly normal (they never will be). The actual question is rather: is it good enough? And there the answer might well be 'yes'.
A common problem with non-negative data is that the variance scales with the mean. In this case, people will often use a transformation, or use a robust, heteroscedasticity-consistent 'sandwich' estimator for the standard errors. (For an overview of strategies used in that kind of situation, see my answer here: Alternatives to one-way ANOVA for heteroskedastic data.)
There are distributions that are specific to this kind of situation, and that are compatible with the GLM / are members of the exponential family, namely the Gamma distribution. Gamma regression could well be used to model income, and I believe is occasionally, but in truth, I think other approaches are used more commonly.
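The Poisson's equidispersion constraint mentioned above (conditional variance equal to the conditional mean) can be verified directly from the pmf. A quick sketch in Python, using $\lambda = 3$ as an arbitrary example:

```python
import math

def poisson_pmf(k, lam):
    # P(K = k) for a Poisson(lam) random variable
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam = 3.0
support = range(60)  # truncated support; the tail mass beyond 60 is negligible for lam = 3
mean = sum(k * poisson_pmf(k, lam) for k in support)
var = sum((k - mean) ** 2 * poisson_pmf(k, lam) for k in support)
# both come out equal to lam; this is exactly the restriction that
# quasi-Poisson and negative binomial models relax
```

When your counts show variance noticeably above the mean, that is the signal to reach for one of the alternatives listed above.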
|
48,544
|
Understanding Kernel Functions for SVMs
|
By solving the optimization problem of SVM in its dual form, it turns out that the dependency of the problem on the training data $\{x_i\}_{i=1}^n$ is only through their inner products. That is, you only need $\{x_i^\top x_j\}_{i, j=1}^n$ i.e., inner products of all pairs of points you have. So to train an SVM, you only need to give it the labels $Y=(y_1, \ldots, y_n)$ and a kernel matrix $K$ where $K_{ij} = x_i^\top x_j.$
Now to map each data point $x_i$ to a high-dimensional space, you apply $\phi(x)$. So the kernel matrix becomes
$$K_{ij} = \langle \phi(x_i), \phi(x_j)\rangle$$
where $\langle ,\rangle$ is just a formal notation for an inner product in a general inner product space. It can be seen that as long as we can define an inner product in the high-dimensional space, we can train SVM. We do not even need to compute $\phi(x)$ itself. We only need to compute the inner product $\langle \phi(x_i), \phi(x_j)\rangle$. This is where we set
$$K_{ij} = k(x_i, x_j)$$
for some kernel $k$ of your choice. It is known (by the Moore–Aronszajn theorem) that if $k$ is positive definite, then it corresponds to some inner product space, i.e., there exists a corresponding feature map $\phi(\cdot)$ such that $k(x_i, x_j) = \langle \phi(x_i), \phi(x_j) \rangle$.
To answer your question, the kernel $k(x,y)$ does not specify a projection of $x$. It is $\phi(\cdot)$ (which is usually implicit) associated with $k$ that specifies the projection. As an example, the feature map $\phi$ of an RBF kernel $k(x,y) = \exp(-\gamma \|x-y\|_2^2)$ is infinite-dimensional.
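The correspondence $k(x_i, x_j) = \langle \phi(x_i), \phi(x_j)\rangle$ can be checked by hand for a kernel whose feature map is finite-dimensional. A sketch in Python using the degree-2 homogeneous polynomial kernel $k(x,y) = (x^\top y)^2$ in 2-D (chosen for illustration because, unlike the RBF kernel, its $\phi$ can be written out explicitly):

```python
import math

def phi(x):
    # explicit feature map for k(x, y) = (x . y)^2 in two dimensions
    x1, x2 = x
    return (x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2)

def k_poly(x, y):
    # the kernel evaluated without ever forming phi: the "kernel trick"
    return (x[0] * y[0] + x[1] * y[1]) ** 2

x, y = (1.0, 2.0), (3.0, -1.0)
inner_in_feature_space = sum(a * b for a, b in zip(phi(x), phi(y)))
kernel_value = k_poly(x, y)
# the two quantities agree, so the SVM only ever needs k, never phi itself
```

The same identity holds for the RBF kernel, except that $\phi$ then lives in an infinite-dimensional space and can never be written out like this.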
|
48,545
|
Understanding Kernel Functions for SVMs
|
First, the radial basis function (RBF) is given by $k\colon\mathcal{X}\times\mathcal{X}\to\Bbb{R}$, with
$$
k(x,y)=\exp(-\gamma\lVert x-y \rVert^2),
$$
where $\gamma$ is a positive parameter.
What is really useful in SVMs is the so-called "kernel trick". Briefly, you don't need to explicitly know the mapping from the original space to the high-dimensional space (this is sometimes even impossible). What you really need to know is how to apply this mapping to the inner products of the form $x_i\cdot x_j$ that are present in the SVM formulation. So, if the mapping function is $\phi$, then the inner product $x_i\cdot x_j$ is transformed into an inner product of the form $\phi(x_i)\cdot\phi(x_j)$, which is given by the kernel function evaluated on these points, i.e.,
$$
\phi(x_i)\cdot\phi(x_j)=k(x_i,x_j).
$$
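A minimal sketch in Python of the RBF kernel above, computing the Gram matrix of inner products $\phi(x_i)\cdot\phi(x_j)$ that the SVM actually consumes ($\gamma = 0.5$ and the three points are arbitrary choices for illustration):

```python
import math

def rbf(x, y, gamma=0.5):
    # k(x, y) = exp(-gamma * ||x - y||^2), gamma > 0
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

X = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
K = [[rbf(xi, xj) for xj in X] for xi in X]
# K is symmetric with a unit diagonal: every point satisfies k(x, x) = 1,
# i.e. ||phi(x)||^2 = 1 in the implicit feature space
```

This matrix $K$, together with the labels, is all the dual SVM optimisation ever sees.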
|
48,546
|
Why is it that xgb.cv performs well but xgb.train does not
|
I just lost a couple of days on perhaps the same issue. TL;DR: are you sure your watchlist has the same number and order of columns as your ce.dmatrix?
In the current implementation of xgb.cv, any watchlist argument that gets passed in is going to get ignored. xgb.cv ends up calling xgb.cv.mknfold, which forcibly sets the watchlist for each of the folds as below:
for (k in 1:nfold) {
dtest <- slice(dall, folds[[k]])
didx <- c()
for (i in 1:nfold) {
if (i != k) {
didx <- append(didx, folds[[i]])
}
}
dtrain <- slice(dall, didx)
bst <- xgb.Booster(param, list(dtrain, dtest))
watchlist <- list(train=dtrain, test=dtest)
ret[[k]] <- list(dtrain=dtrain, booster=bst, watchlist=watchlist, index=folds[[k]])
}
This makes sense since, as others have said, passing in a watchlist to xgb.cv doesn't make a ton of sense. So the "test" shown in your cv output is not the same data set as the "test" shown in your xgb.train output.
xgb.train calls xgb.iter.eval in order to evaluate the test statistics of the in-sample and watchlist data. xgb.iter.eval does the actual computation like this:
msg <- paste("[", iter, "]", sep="")
for (j in 1:length(watchlist)) {
w <- watchlist[j]
if (length(names(w)) == 0) {
stop("xgb.eval: name tag must be presented for every elements in watchlist")
}
preds <- predict(booster, w[[1]])
ret <- feval(preds, w[[1]])
msg <- paste(msg, "\t", names(w), "-", ret$metric, ":", ret$value, sep="")
}
So it calls predict() using the booster handle. Since this is the same booster handle class that gets returned from a call to xgb.train, this is equivalent to you calling predict() with your finished model.
Somewhere in the bowels of the C++ implementation of Booster, it appears that predict() does not verify that the column names of the data you pass in match the column names of the data your model was built off of. It doesn't even check that there are the correct number of columns. You can see this easily for yourself by examining the output of the following calls:
head(predict(bst, newdata=ce.dmatrix))
#predict using only the first 10 columns, missing values default to 0
head(predict(bst, newdata=ce.dmatrix[,1:10]))
#predict using the wrong columns, because we ignore column names
head(predict(bst, newdata=ce.dmatrix[,sample(ncol(ce.dmatrix))]))
So if your watchlist "test" set is defined incorrectly, you will see exactly the kind of odd behavior you are seeing. You can check if they are identical by doing something along the lines of this:
colnames(ce.dmatrix)[!(colnames(ce.dmatrix) %in% colnames(watchlist[[1]]))]
colnames(watchlist[[1]])[!(colnames(watchlist[[1]]) %in% colnames(ce.dmatrix))]
In my case, I was cleaning my test and training data separately, and because some factor levels showed up in training but not in test, my test data had the wrong number of columns and columns in incorrect places.
Hope that helps.
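The column-name check at the end generalises beyond R. A hypothetical helper in Python (the function name and data are made up for illustration) that reports the symmetric difference between two column lists before you ever call predict:

```python
def column_mismatch(train_cols, test_cols):
    """Return (missing_in_test, extra_in_test); two empty lists mean the schemas agree."""
    train_set, test_set = set(train_cols), set(test_cols)
    return sorted(train_set - test_set), sorted(test_set - train_set)

# e.g. a factor level produced column "d" in test but not in training,
# while training column "b" never appeared in test
missing, extra = column_mismatch(["a", "b", "c"], ["a", "c", "d"])
```

Note that matching names alone is not sufficient: the column order must also agree, since predict() appears to match by position, not by name.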
|
48,547
|
Why is it that xgb.cv performs well but xgb.train does not
|
The documentation is a bit nebulous to me, but the whole point of cross-validation is to select the best hyperparameters and avoid overfitting. So xgb.cv uses cross-validation to tune the parameters before testing, thereby avoiding overfitting.
|
48,548
|
Why is it that xgb.cv performs well but xgb.train does not
|
Why would you need a watchlist in the CV method? The respective CV folds ARE the watchlist! I don't know the R command, but in Python verbose_eval=True returns the proper output that you are looking for. My guess is that since CV is only used for hyperparameter tuning and doesn't return a model by itself, the usage of the watchlist parameter somehow interferes with the proper triggering of early.stop.round.
P.S.: Your eta parameter is very low. I've never used an eta value lower than 0.01...
|
48,549
|
Nesting terminology in mixed models
|
I would describe the final model as simply having
a random intercept for every observed combination of A and B.
Although it looks like an interaction because you're just using R's interaction syntax, you're really just redefining a grouping variable in a slightly less structured way than in the previous specifications, ensuring that the random effect can now vary more freely than in the previous arrangements.
For expository purposes you may wish to give 'every observed combination of A and B' a name that makes sense to readers in terms of the study you are analyzing.
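In case it helps intuition, here is what "every observed combination of A and B" means as a grouping factor. A sketch in Python mimicking R's interaction(A, B) labelling (the toy data are made up):

```python
# toy data: five observations with two crossed factors
A = ["a1", "a1", "a2", "a2", "a2"]
B = ["b1", "b2", "b1", "b1", "b2"]

# one grouping label per observation, analogous to R's interaction(A, B)
groups = [f"{a}:{b}" for a, b in zip(A, B)]

# the observed combinations, each of which gets its own random intercept
levels = sorted(set(groups))
```

Each distinct label in `levels` becomes one level of the grouping factor, with no nesting structure imposed between A and B.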
|
48,550
|
Targets of 0.1/0.9 instead of 0/1 in neural networks and other classification algorithms
|
Coding the targets as $\{0.1, 0.9\}$ is an example of label smoothing. It's one of the tricks highlighted in this review and it appears to originate in "Rethinking the Inception Architecture for Computer Vision" by Christian Szegedy et al. It's used as a regularization strategy to discourage a neural network from giving over-confident predictions in the outcome, because the predicted probabilities cannot be improved by becoming arbitrarily close to 1 for the true class; instead, the lowest loss value occurs at 0.9 for the true class.
The standard approach is to not use label smoothing. When we estimate a logistic regression (a 0-hidden layer neural network with sigmoid activations on the last layer), we never do this; the problem is left "as is" for Newton's Method to solve. Provided that the usual conditions are satisfied (design matrix is full-rank, perfect separation is not present), we have no problem estimating the regression coefficients. Likewise, neural networks trained on binary problems proceed in the same way! Nonlinearities and multiple layers make the optimization more challenging, but the core concept is the same.
However, neural networks are more flexible models, and are able to find more complex relationships compared to logistic regression. This means that they are also more prone to giving uncalibrated outcomes (the predicted probabilities do not strongly correspond to the true probabilities), a trend that is remarked upon in "Your classifier is secretly an energy-based model, and you should treat it like one" by Will Grathwohl, et al. (which also has citations to other papers making similar observations).
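The claim that the lowest loss occurs at 0.9 for the true class can be checked numerically. A sketch in Python minimising binary cross-entropy over a grid of predicted probabilities:

```python
import math

def bce(target, p):
    # binary cross-entropy for a single example
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))

target = 0.9  # label-smoothed positive class
grid = [i / 1000 for i in range(1, 1000)]
best_p = min(grid, key=lambda p: bce(target, p))
# the loss bottoms out at p = 0.9 rather than p -> 1, so the network's
# confidence on the true class is capped by the smoothing
```

Pushing the prediction past the smoothed target actually increases the loss, which is the whole regularising effect.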
|
48,551
|
Targets of 0.1/0.9 instead of 0/1 in neural networks and other classification algorithms
|
I suspect the reason they suggested this approach was that the magnitudes of the gradient descent steps in back-propagation are proportional to the derivative of the activation function. If you use a logistic activation function then the derivative is numerically zero before the output reaches 1 or 0. As a result the optimization can never absolutely converge, as the loss function has a very long trough in weight-space with a very shallow slope along the bottom. Using these modified targets backpropagation is able to converge to a more definite minimum. I used this technique a bit back in the very early 90s‡, and it doesn't really give much of a benefit in practice, especially if the modification to the targets is as large as that!
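The flat-trough effect is easy to see numerically: the logistic derivative $\sigma'(z)=\sigma(z)(1-\sigma(z))$ collapses toward zero as the unit saturates. A quick sketch in Python:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dsigmoid(z):
    # derivative of the logistic function: largest at z = 0,
    # vanishing as the output nears 0 or 1
    s = sigmoid(z)
    return s * (1.0 - s)

# gradient magnitude at increasingly saturated pre-activations
slopes = [dsigmoid(z) for z in (0.0, 2.0, 10.0)]
```

At a pre-activation of 10 the slope is already below 1e-4, so weight updates near the 0/1 targets become vanishingly small.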
A more recent usage of this trick can be found in Platt Scaling, which is used to get estimates of the probability of class membership from support vector machines†. In Platt Scaling, a logistic regression model is fitted to the (leave-one-out) scores outputted by the SVM, but with modified targets (as @Sycorax explains +1) as a regularisation method to avoid over-fitting. However Platt adopts a much less heuristic (and less drastic) approach, and sets the targets to be
$y_+ = \frac{N_+ + 1}{N_+ + 2} \qquad \mathrm{and} \qquad y_- = \frac{1}{N_- + 2}$
which is effectively a Bayesian regularisation based on the Laplace correction.
† Really - don't do this, if you want probabilities (and for most practical applications, you will), use a proper probabilistic classifier, such as kernel logistic regression or Gaussian process classifiers (if you like to be Bayesian).
‡ The PDP books were where I first learned about neural networks! (my copy is from 1989 ;o)
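For concreteness, Platt's paper gives the targets as $y_+ = (N_+ + 1)/(N_+ + 2)$ and $y_- = 1/(N_- + 2)$; computing them directly in Python (the class counts here are made up for illustration):

```python
def platt_targets(n_pos, n_neg):
    # regularised targets from Platt (1999); a Laplace-style add-one correction
    y_pos = (n_pos + 1) / (n_pos + 2)
    y_neg = 1 / (n_neg + 2)
    return y_pos, y_neg

# with 98 positive and 2 negative training examples:
y_pos, y_neg = platt_targets(98, 2)
# the minority class gets a noticeably softened target (0.25 rather than 0),
# while the well-populated class sits close to 1 (0.99)
```

Note the sensible limiting behaviour: with no data at all, both targets collapse to 0.5, exactly as a Laplace correction should.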
|
48,552
|
Minimum "recommended" sample size for boxplots? Boxplots for different sample sizes
|
[I thought I had written an answer to the first question but I can't locate one.]
With 5 or fewer observations, you might as well just plot the actual points.
It doesn't matter when comparing across samples that the samples aren't the same size, but if you have much larger samples in some groups you should see more points outside the ends of the whiskers on those.
what is the best way to compare between methods with different sample size
You might compare quantile plots, perhaps, or (as Nick Cox has suggested in at least one of his answers, which I also can't locate right now -- edit: see here) you might combine such a plot with a boxplot by plotting the quantile plot under the boxplot.
Nick shows an example of a quantile plot here
|
48,553
|
Covariance of two time series driven by a restricted VAR(1) model
|
One direction to go in might be something like:
$$
X_n = \rho_x X_{n-1} + \epsilon_n
$$
and
$$
Y_n = \rho_y Y_{n-1} + \rho_{xy}X_n +v_n
$$
Substitute for $X_n$ in the second equation:
$$Y_n = \rho_y Y_{n-1} + \rho_{xy} \rho_x X_{n-1} + \rho_{xy} \epsilon_n +v_n$$
Let $Z_n = \left[ \begin{array}{c} X_n \\ Y_n \end{array} \right]$. As suggested by Richard, we can write the above equations as a single VAR(1) process $Z_n = A Z_{n-1} + B U_n$.
$$ \underbrace{\left[ \begin{array}{c} X_n \\ Y_n \end{array} \right] }_{Z_n}
= \underbrace{\left[ \begin{array}{cc} \rho_x & 0 \\ \rho_{xy} \rho_x&\rho_y \end{array} \right]}_{A} \underbrace{\left[ \begin{array}{c} X_{n-1} \\ Y_{n-1} \end{array} \right]}_{Z_{n-1}} + \underbrace{\left[ \begin{array}{cc} 1 & 0 \\ \rho_{xy} & 1 \end{array} \right]}_B \underbrace{\left[ \begin{array}{c} \epsilon_n \\ v_n \end{array} \right]}_{U_n}
$$
The process is mean zero, so we can write the covariance as:
$$ E\left[ Z_n Z'_n \right] = E\left[ \left( AZ_{n-1} + B U_{n}\right) \left( AZ_{n-1} + B U_{n}\right) ' \right] $$
$$ = A E\left[ Z_{n-1} Z_{n-1}'\right] A' + BB'$$
If the process is stationary, $E\left[ Z_n Z'_n \right] = E\left[ Z_{n-1} Z'_{n-1} \right] = \Sigma $. Under some technical conditions $\Sigma$ is the solution to:
$$ \Sigma = A \Sigma A' + BB'$$
This is a linear system of equations. Perhaps more conveniently for solving numerically with standard software, you can write $\Sigma$ as a vector. The resulting system is:
$$ vec(\Sigma) = (I - A \otimes A)^{-1} vec(BB')$$
Where $\otimes$ is the Kronecker product and vec is the vec operator. Basically, $\Sigma$ is the solution to a system of 4 equations.
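A quick numerical sketch of this (the parameter values for $\rho_x$, $\rho_y$, $\rho_{xy}$ below are made up): solve for $vec(\Sigma)$ with the Kronecker formula, then check that $\Sigma$ satisfies the discrete Lyapunov equation.

```python
import numpy as np

# Assumed example parameters for a stable process (not from the question)
rho_x, rho_y, rho_xy = 0.5, 0.3, 0.8

A = np.array([[rho_x, 0.0],
              [rho_xy * rho_x, rho_y]])
B = np.array([[1.0, 0.0],
              [rho_xy, 1.0]])
Q = B @ B.T

# Solve vec(Sigma) = (I - A kron A)^{-1} vec(B B'), with column-stacking vec
I4 = np.eye(4)
vec_sigma = np.linalg.solve(I4 - np.kron(A, A), Q.reshape(-1, order="F"))
Sigma = vec_sigma.reshape(2, 2, order="F")

# Sigma should satisfy Sigma = A Sigma A' + B B'
assert np.allclose(Sigma, A @ Sigma @ A.T + Q)
```

Note that `order="F"` is what makes `reshape` match the column-stacking $vec$ operator used in the identity $vec(A\Sigma A') = (A\otimes A)\,vec(\Sigma)$.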
|
48,554
|
Covariance of two time series driven by a restricted VAR(1) model
|
After taking Richard's hint into account, you can appeal to the general result for the autocovariance function of a stable $VAR(1)$ process
$$y_t=c+\Phi y_{t-1}+\epsilon_t,$$
with $\Omega$ the variance-covariance matrix of $\epsilon_t$:
$$
\Gamma_j=\sum_{i=0}^\infty\Phi^{j+i}\Omega(\Phi^{i})^\top
$$
This follows from writing the $VAR(1)$ as a vector moving average
$$
y_t=\mu+\sum_{j=0}^\infty\Phi^j\epsilon_{t-j},\quad\text{where}\quad \mu=(I-\Phi)^{-1}c,
$$
and hence
\begin{eqnarray*}
\Gamma_j &=& E(y_t - \mu)(y_{t-j}-\mu)^\top\\
&=& E\left(\sum_{i=0}^{\infty}\Phi^i\epsilon_{t-i}\right)\left(\sum_{k=0}^{\infty}\Phi^k\epsilon_{t-k-j}\right)^\top\\
&=& E\left(\sum_{i=j}^{\infty}\Phi^i\epsilon_{t-i}\right)\left(\sum_{k=0}^{\infty}\Phi^k\epsilon_{t-k-j}\right)^\top\\
&=& E\left(\sum_{i=j}^{\infty}\Phi^i\epsilon_{t-i}\right)\left(\sum_{k=0}^{\infty}\epsilon_{t-k-j}{\Phi^k} ^\top\right)\\
&=& \sum_{i=j}^{\infty}\sum_{k=0}^{\infty}\Phi^iE\left(\epsilon_{t-i}\epsilon_{t-k-j}^\top\right){\Phi^k}^\top\\
&=& \Phi^j\Omega(\Phi^0)^\top + \Phi^{j+1}\Omega\Phi^\top + \Phi^{j+2}\Omega(\Phi^2)^\top + \ldots\\
&=& \sum_{i=0}^{\infty}\Phi^{j+i} \Omega {\Phi^i}^\top
\end{eqnarray*}
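For a numerical sanity check, the infinite sum can be truncated ($\Phi$ and $\Omega$ below are made-up examples with spectral radius well below 1). A by-product of the formula is the familiar VAR(1) relation $\Gamma_1 = \Phi\,\Gamma_0$:

```python
import numpy as np

# Assumed example parameters for a stable VAR(1)
Phi = np.array([[0.5, 0.1],
                [0.2, 0.3]])
Omega = np.array([[1.0, 0.2],
                  [0.2, 0.5]])

def gamma(j, terms=200):
    """Truncated Gamma_j = sum_i Phi^{j+i} Omega (Phi^i)^T."""
    g = np.zeros((2, 2))
    Pi = np.eye(2)  # holds Phi^i
    for i in range(terms):
        g += np.linalg.matrix_power(Phi, j + i) @ Omega @ Pi.T
        Pi = Pi @ Phi
    return g

G0, G1 = gamma(0), gamma(1)
# Pulling one Phi out of the sum gives Gamma_1 = Phi Gamma_0
assert np.allclose(G1, Phi @ G0)
```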
|
48,555
|
Bias of Tibshirani's Lasso estimator
|
Suppose that we have a rank deficient least squares problem
$\min \| X\beta -y \|_{2}$
and let $\alpha$ be a nonzero vector in the null space of $X$. That is,
$X\alpha=0$.
The lasso estimator can be formulated either as a constrained least squares problem or as an unconstrained problem whose objective includes the least squares term and the one-norm regularization. I'll use the constrained formulation:
$\min \| X \beta - y \|_{2}^{2} $
subject to
$\| \beta \|_{1} \leq t $
where $t$ is a fixed regularization parameter. Let $\beta_{L}$ be the Lasso estimator obtained by solving this constrained optimization problem. Let $\beta_{2}$ be the minimum 2-norm solution to the least squares problem and suppose that $\| \beta_{2} \|_{1} \leq t$.
Now consider what happens if the true $\beta$ that we're trying to estimate is of the form
$\beta=\beta_{2}+ s \alpha$,
where $s$ is a very large scalar. Since the $s \alpha$ term has no effect on the least squares objective, but greatly increases $\| \beta \|_{1}$, we could have a true $\beta$ that is arbitrarily far from the Lasso estimator and has just as good a least squares objective value as $\beta_{2}$.
If you use a Bayesian approach you can avoid this issue by specifying a prior that effectively rules out such solutions.
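A tiny numerical illustration of the non-identifiability argument ($X$, $y$, and $\alpha$ below are invented for the sketch): adding a null-space direction leaves the least squares fit unchanged while blowing up the one-norm.

```python
import numpy as np

# Invented rank-deficient design: identical columns, so alpha = (1, -1)
# lies in the null space of X.
X = np.array([[1.0, 1.0],
              [2.0, 2.0]])
alpha = np.array([1.0, -1.0])
assert np.allclose(X @ alpha, 0.0)

y = np.array([1.0, 2.0])
beta2 = np.linalg.pinv(X) @ y        # minimum 2-norm least-squares solution
s = 1e6
beta_far = beta2 + s * alpha         # identical fit, huge one-norm

assert np.allclose(X @ beta2, X @ beta_far)   # same least squares objective
assert np.abs(beta_far).sum() > np.abs(beta2).sum()
```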
|
48,556
|
Standard reference for K-means [duplicate]
|
According to Wikipedia, the term k-means was first introduced in the reference you refer to. The usual reference in the computer vision community for the algorithm that solves the k-means problem is:
Lloyd, Stuart P. "Least squares quantization in PCM." Information Theory, IEEE Transactions on 28.2 (1982): 129-137.
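Lloyd's algorithm itself is short; here is a minimal NumPy sketch (with invented blob data and a deliberately simple deterministic initialization — real implementations use smarter seeding such as k-means++):

```python
import numpy as np

# Minimal sketch of Lloyd's algorithm: alternate nearest-centroid
# assignment and centroid re-estimation until the centers stop moving.
def lloyd_kmeans(X, k, n_iter=100):
    centers = X[[0, -1]].copy() if k == 2 else X[:k].copy()  # naive init
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # distance of every point to every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # each center becomes the mean of its assigned points
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# Two well-separated invented blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, size=(50, 2)),
               rng.normal(5.0, 0.3, size=(50, 2))])
centers, labels = lloyd_kmeans(X, k=2)
```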
|
48,557
|
Distribution of differences in beta-distribution
|
If you look at $(X_{(n-1)},X_{(n)})$, the two largest observations, their joint density is given by
$$g_n(x,y)=\dfrac{n!}{(n-2)!0!0!}F(x)^{n-2}[F(y)-F(x)]^0[1-F(y)]^0f(x)f(y)\mathbb{I}_{x\le y}$$
(where the exponents $0$ account for the special case $j=n-1$, $k=n$ in the generic formula).
From there the derivation of the distribution of the difference $Z_{n}=(X_{(n)}-X_{(n-1)})$ is a mere convolution formula, i.e. a special case of a change of variables:
$$\begin{align}f_{Z_n}(z)&=\int_\mathcal{X} g_n(x,x+z)\, \text{d}x\ \mathbb{I}_{(0,\infty)}(z)\\
&=n(n-1)\,\int_\mathcal{X} F(x)^{n-2}f(x)f(x+z)\, \text{d}x\ \mathbb{I}_{(0,\infty)}(z)
\end{align}$$
In the special case of a Beta $\mathfrak{B}(a,b)$ distribution, $F=F(\cdot;a,b)$ has no analytic expression in general, so you get
$$f_{Z_n}(z)=n(n-1)B(a,b)^{-2}\,\int_0^{1-z} F(x;a,b)^{n-2}x^{a-1}(1-x)^{b-1}(x+z)^{a-1}(1-x-z)^{b-1}\, \text{d}x\ \mathbb{I}_{(0,1)}(z)$$
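Since the density only admits this integral form, it can still be checked numerically; the sketch below (with assumed example values $a=2$, $b=3$, $n=5$) compares $P(Z_n\le z_0)$ from numerical integration with a Monte Carlo estimate.

```python
import numpy as np
from scipy import stats, integrate

# Assumed example values (not from the question)
a, b, n = 2.0, 3.0, 5
dist = stats.beta(a, b)

def f_Z(z):
    # f_Z(z) = n(n-1) * integral_0^{1-z} F(x)^{n-2} f(x) f(x+z) dx
    integrand = lambda x: (n * (n - 1) * dist.cdf(x) ** (n - 2)
                           * dist.pdf(x) * dist.pdf(x + z))
    return integrate.quad(integrand, 0.0, 1.0 - z)[0]

z0 = 0.2
p_exact = integrate.quad(f_Z, 0.0, z0)[0]   # P(Z <= z0) by numeric integration

rng = np.random.default_rng(0)
samples = np.sort(dist.rvs(size=(200_000, n), random_state=rng), axis=1)
gaps = samples[:, -1] - samples[:, -2]      # X_(n) - X_(n-1)
p_mc = np.mean(gaps <= z0)
```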
|
48,558
|
Conditional Expectation of sum of uniform random variables?
|
It is perhaps easier to see what is going on with a diagram
So your calculation will be $$\dfrac{\displaystyle \int_{x=0.3}^{1}\int_{y=1.3-x}^{1} x \,dy \, dx}{\displaystyle \int_{x=0.3}^{1}\int_{y=1.3-x}^{1} 1 \,dy \, dx} = \dfrac{23}{30} \approx 0.76666667$$
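A quick Monte Carlo check of the ratio-of-integrals calculation:

```python
import numpy as np

# Monte Carlo estimate of E[X | X + Y > 1.3] for X, Y iid Uniform(0, 1)
rng = np.random.default_rng(0)
x = rng.uniform(size=1_000_000)
y = rng.uniform(size=1_000_000)
keep = x + y > 1.3            # condition on the event
estimate = x[keep].mean()     # should be close to 23/30 ~ 0.7667
```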
|
48,559
|
Choosing IRLS over gradient descent in logistic regression
|
Yes, IRLS could be faster, as I said in my answer to your previous question. For example, if the log-likelihood is nearly quadratic (which it will usually be if you are able to start fairly close to the maximum and the sample size isn't very small), then it may converge in only a couple of steps. Note, in fact, that on p. 240 Bishop says
Although such an approach might intuitively seem reasonable, in fact it turns out to be a poor
algorithm, for reasons discussed in Bishop and Nabney (2008)
Notice the "LS" part. IRLS proceeds by performing weighted least squares, but the observation weights are updated at each step (re-weighted).
The particular version of IRLS Bishop presents might possibly be Newton-Raphson, but this is not necessarily the case in general (it could be Fisher scoring, for example, which is related to but slightly different from actual Newton-Raphson). But yes, a single step of IRLS on an (already correctly weighted) regression problem suffices.
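As a sketch of what each re-weighted step looks like for logistic regression, here is a minimal NumPy implementation with simulated data (for the canonical logit link, Fisher scoring and Newton-Raphson coincide):

```python
import numpy as np

# Minimal IRLS sketch for logistic regression: each iteration is a
# weighted least-squares solve with updated weights and working response.
def irls_logistic(X, y, n_iter=25, tol=1e-10):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))
        w = p * (1.0 - p)                 # working weights
        z = eta + (y - p) / w             # working response
        # weighted least squares step: solve (X' W X) beta = X' W z
        beta_new = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Simulated data with known coefficients (invented for the sketch)
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(5000), rng.normal(size=5000)])
true_beta = np.array([0.5, -1.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))

beta_hat = irls_logistic(X, y)            # typically converges in a few steps
```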
|
48,560
|
Does Random Forest ever compare the splitting of one node to the slitting of a **different** node?
|
Random forests are based on greedy induction of decision trees, which means that the best attribute to split on and best cut-off is computed independently for each internal node in the tree. Thus, nodes are not directly compared.
The article you cited does not specify that a split might be induced on a smaller number of points $N$ if an attribute has missing values. Indeed, in their experimental evaluation the adult dataset has missing values. It makes sense to penalize splits induced on attributes with missing values, and they do that by defining a new splitting criterion based on confidence intervals.
For example, let's say that you have the following data set:
A B Class
1 ? +
3 4 -
4 6 +
These 3 points can be split according to feature $A$ at the cut-offs 1 or 3, or according to feature $B$ at the cut-off 4. If we split according to feature $B$, we take just 2 points into account when computing the splitting criterion. Deciding where to send the first point, whose value of $B$ is missing, is a totally different story: C4.5 uses weights and CART uses surrogate splits.
Actually, the topic of bias in splitting criteria has received attention in the past as well; the blog post you cited does not cite any previous work. We worked on a similar approach which penalizes missing values based on statistical significance rather than confidence intervals: here. However, we focused in particular on categorical data sets, which is another possible application domain. The positive side of using statistical significance is that this approach can be applied to the original Gini gain; therefore it also works for multi-class classification and not just for binary classes.
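To make the toy example concrete, here is a sketch of the Gini gain computed while simply ignoring missing values (the cut-off values and the $+/-$ encoding below are chosen for illustration). Note how the split on $B$, evaluated on fewer points, gets the larger gain — exactly the kind of bias one may want to penalize.

```python
import numpy as np

def gini(labels):
    # Gini impurity of a label array
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def gini_gain(values, labels, cutoff):
    # Gini gain of splitting at `cutoff`, ignoring missing (NaN) values
    mask = ~np.isnan(values)
    v, lab = values[mask], labels[mask]
    left, right = lab[v <= cutoff], lab[v > cutoff]
    n = len(lab)
    return (gini(lab)
            - (len(left) / n) * gini(left)
            - (len(right) / n) * gini(right))

# The 3-point toy data set: '+' -> 1, '-' -> 0; B has one missing value
A = np.array([1.0, 3.0, 4.0])
B = np.array([np.nan, 4.0, 6.0])
y = np.array([1, 0, 1])

gain_A = gini_gain(A, y, cutoff=1.0)   # computed on all 3 points
gain_B = gini_gain(B, y, cutoff=4.0)   # computed on only 2 points
```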
|
48,561
|
The distribution of $\bf{x}$ given an underdetermined system $A{\bf x}={\bf b}\sim N(0,\sigma^2 I)$
|
Consider a simple case where $A = \left[\begin{array}{cc}1& 1\end{array} \right] $ and $\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$.
Let $x_1$, $x_2$, and $b$ be random variables. We know from $A {\bf x} = {b}$ that $x_1 + x_2 = b$. We also know that $b$ is a normally distributed random variable.
Must $x_1$ and $x_2$ be distributed multivariate normal? No.
Could $x_1$ and $x_2$ be multivariate normal? Yes.
In contrast, if $A \mathbf{x}$ is normal for any 1 by 2 matrix A, then $x_1$ and $x_2$ would be distributed multivariate normal (by definition of multivariate normality), but if $A$ is some specific matrix, all bets are off.
Example:
Let $b$ be a normally distributed random variable. Then define:
$$x_1 = \begin{cases} b &\text{if } b \leq 0 \\ 0 &\text{if } b > 0\end{cases}$$
$$x_2 = \begin{cases} 0 &\text{if } b \leq 0 \\ b &\text{if } b > 0\end{cases}$$
${x_1} + {x_2}$ is normally distributed but neither $x_1$ nor $x_2$ are normal.
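A simulation of this counterexample (writing $x_1=\min(b,0)$ and $x_2=\max(b,0)$): the sum recovers $b$ exactly, while $x_1$ alone has a point mass at 0 and strong negative skew, so it cannot be normal.

```python
import numpy as np
from scipy import stats

# Simulate the counterexample: b ~ N(0, 1), x1 = min(b, 0), x2 = max(b, 0)
rng = np.random.default_rng(0)
b = rng.normal(size=100_000)
x1 = np.minimum(b, 0.0)
x2 = np.maximum(b, 0.0)

# The sum is exactly b, hence exactly normal...
assert np.allclose(x1 + x2, b)
# ...but x1 has a point mass at 0 (about half its mass) and heavy left skew
frac_at_zero = np.mean(x1 == 0.0)
skew_x1 = stats.skew(x1)
```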
|
48,562
|
What types of functions can be implemented in a layer of a Neural Networks?
|
There are plenty of functions that are hard to optimise. In fact, when you look at the successful implementations, they are mainly image/signal processing using sigmoid or ReLU activation functions. (Maybe you want to clarify what your target function is, i.e. whether you mean the function you want to approximate, or you are including the activation functions, i.e. minimising error for a fixed network structure.)
So a classic target function that NNs cannot approximate by gradient descent is the spiral (i.e. the input is an (x, y) coordinate and the output is whether the input lies on the spiral band). If you Google cascade correlation you will find pictures...
|
48,563
|
What types of functions can be implemented in a layer of a Neural Networks?
|
My question is: are there specific types of functions/layers that are
believed to be hard/impossible to train in neural nets and using back
propagation? Is it likely to get much better results using neural nets
if we were to use more sophisticated optimization methods?
With backpropagation alone you can't compute the argmax, argmin and similar positional functions, for example. But people got really creative on this front too.
These days people can backpropagate through distributions (VAEs), physical simulations (ray tracing, for example), fixed-point problems (deep equilibrium models) and differential equation solvers (Neural ODEs).
Follow up comments: I am asking this because before neural nets became
epidemic the most important step in designing new models was to make
sure that training the model leads to a simple optimization problem
(e.g. LP, QP, convex, bi-convex, etc.); even if doing gradient descent
on more complex objective functions was possible. You wouldn't just do
gradient descent unless you had a customized optimization procedure
(with careful initialization and what not) to do training, otherwise
you would very likely get stuck in a bad local minima. But I see
people becoming as creative as they can get with neural nets and throw
in any function in the training objective as long as they can compute
the gradient so that they can do back propagation.
I think the key innovation here was stochastic gradient descent.
Since we cannot guarantee convexity, people moved on from it and, with SGD, found that local optima are often just as good in many cases, and that mini-batch learning helps the optimizer escape saddle points and "shallow" optima.
We have now several methods to optimize networks with good probability of convergence to flat local minima.
Piecewise differentiability is still necessary, though, for most functions.
|
48,564
|
How to compute partial log-likelihood function in Cox proportional hazards model?
|
This is technically a programming question with an easy programming answer. If you simply want the partial likelihood, why not fool R into giving it to you? Simply initialize beta and allow no iterations, then extract the loglik value from the coxph object. (see ?coxph.object).
For example:
## artificial data
library(survival)
n <- 100
t <- rexp(n)
c <- rbinom(n, 1, .2) ## event indicator (independent process)
x <- rbinom(n, 1, exp(-t)) ## some arbitrary relationship btn x and t
betamax <- coxph(Surv(t, c) ~ x)
beta1 <- coxph(Surv(t, c) ~ x, init = c(1), control=list('iter.max'=0))
With example output:
> betamax$loglik
[1] -68.62548 -65.99652
> beta1$loglik
[1] -66.10908 -66.10908
You can even define a wrapper:
loglik <- function(beta, formula) {
  coxph(formula, init=beta, control=list('iter.max'=0))$loglik[2]
}
betas <- seq(0, 2, by=0.01)
logliks <- sapply(betas, loglik, Surv(t, c) ~ x)
plot(betas, logliks)
abline(v=betamax$coefficients)
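For reference, the quantity coxph is maximizing (under the Breslow convention, ignoring ties) is $\ell(\beta)=\sum_{i:\delta_i=1}\big[x_i\beta - \log\sum_{j\in R(t_i)} e^{x_j\beta}\big]$, where $R(t_i)$ is the risk set at time $t_i$. A minimal pure-Python version (a sketch with a hypothetical helper name, single covariate only) that you could use to cross-check the R trick:

```python
import math

def cox_partial_loglik(beta, times, events, x):
    """Breslow partial log-likelihood for a single covariate, ignoring ties."""
    ll = 0.0
    for i, (ti, di) in enumerate(zip(times, events)):
        if di == 1:  # only uncensored observations contribute a term
            # risk set: everyone still under observation at time ti
            risk = sum(math.exp(beta * xj)
                       for tj, xj in zip(times, x) if tj >= ti)
            ll += beta * x[i] - math.log(risk)
    return ll

# tiny hand-checkable example: at beta = 0 each event term is -log(risk-set size)
print(cox_partial_loglik(0.0, [1, 2, 3], [1, 1, 0], [0, 1, 0]))
```

With two events and risk sets of sizes 3 and 2, the value at beta = 0 is -log(3) - log(2) = -log(6).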
|
How to compute partial log-likelihood function in Cox proportional hazards model?
|
This is technically a programming question with an easy programming answer. If you simply want the partial likelihood, why not fool R into giving it to you? Simply initialize beta and allow no iterati
|
How to compute partial log-likelihood function in Cox proportional hazards model?
This is technically a programming question with an easy programming answer. If you simply want the partial likelihood, why not fool R into giving it to you? Simply initialize beta and allow no iterations, then extract the loglik value from the coxph object. (see ?coxph.object).
For example:
## artificial data
library(survival)
n <- 100
t <- rexp(n)
c <- rbinom(n, 1, .2) ## censoring indicator (independent process)
x <- rbinom(n, 1, exp(-t)) ## some arbitrary relationship btn x and t
betamax <- coxph(Surv(t, c) ~ x)
beta1 <- coxph(Surv(t, c) ~ x, init = c(1), control=list('iter.max'=0))
With example output:
> betamax$loglik
[1] -68.62548 -65.99652
> beta1$loglik
[1] -66.10908 -66.10908
You can even define a wrapper:
loglik <- function(beta, formula) {
  coxph(formula, init=beta, control=list('iter.max'=0))$loglik[2]
}
betas <- seq(0, 2, by=0.01)
logliks <- sapply(betas, loglik, Surv(t, c) ~ x)
plot(betas, logliks)
abline(v=betamax$coefficients)
|
How to compute partial log-likelihood function in Cox proportional hazards model?
This is technically a programming question with an easy programming answer. If you simply want the partial likelihood, why not fool R into giving it to you? Simply initialize beta and allow no iterati
|
48,565
|
How does a Stacked AutoEncoder increases performance of a Convolutional Neural Network in image classification tasks
|
For image classification tasks, how can a stacked auto-encoder help a traditional Convolutional Neural Network?
As mentioned in the paper, we can use the pre-trained weights to initialize CNN layers; although that essentially doesn't add anything to the CNN, it normally helps set a good starting point for training (especially when there's an insufficient amount of labeled data).
any pre-trained step before first convolution operation like Dimensionally Reduction or AutoEncoder output can be used as input image instead of real image data in CNN
Because of CNNs' local connectivity, if the topology of the data is lost after dimensionality reduction, then CNNs would no longer be appropriate.
For example, suppose our data are images. If we see each pixel as a dimension and use PCA to do dimensionality reduction, then the new representation of an image will be a vector that no longer preserves the original 2D topology (and the correlation between adjacent pixels). So in this case it cannot be used directly with 2D CNNs (there are ways to recover the topology, though).
Using the AutoEncoder output should work well with CNNs, as it can be seen as adding an additional layer (with fixed parameters) between the CNN and the input.
how much it affects the performance of Convolution Neural Network in context of image classification tasks
I happened to have done a related project at college, where I tried to label each part of an image as road, sky, or other. Although the results are far from satisfactory, they might give some idea of how those pre-processing techniques affect the performance.
(1) image of a clear road (2) outcome of a simple two-layer CNN
(3) CNN with first layer initialized by pre-trained CAE (4) CNN with ZCA whitening
The CNNs are trained using SGD with fixed learning rates. Tested on the KITTI road category data set, the error rate of method (2) is around 14%, and the error rates of method (3) and (4) are around 12%.
Please correct me where I'm wrong. :)
|
How does a Stacked AutoEncoder increases performance of a Convolutional Neural Network in image clas
|
For image classification tasks, how can a stacked auto-encoder help a traditional Convolutional Neural Network?
As mentioned in the paper, we can use the pre-trained weights to initialize CNN layers,
|
How does a Stacked AutoEncoder increases performance of a Convolutional Neural Network in image classification tasks
For image classification tasks, how can a stacked auto-encoder help a traditional Convolutional Neural Network?
As mentioned in the paper, we can use the pre-trained weights to initialize CNN layers; although that essentially doesn't add anything to the CNN, it normally helps set a good starting point for training (especially when there's an insufficient amount of labeled data).
any pre-trained step before first convolution operation like Dimensionally Reduction or AutoEncoder output can be used as input image instead of real image data in CNN
Because of CNNs' local connectivity, if the topology of the data is lost after dimensionality reduction, then CNNs would no longer be appropriate.
For example, suppose our data are images. If we see each pixel as a dimension and use PCA to do dimensionality reduction, then the new representation of an image will be a vector that no longer preserves the original 2D topology (and the correlation between adjacent pixels). So in this case it cannot be used directly with 2D CNNs (there are ways to recover the topology, though).
Using the AutoEncoder output should work well with CNNs, as it can be seen as adding an additional layer (with fixed parameters) between the CNN and the input.
how much it affects the performance of Convolution Neural Network in context of image classification tasks
I happened to have done a related project at college, where I tried to label each part of an image as road, sky, or other. Although the results are far from satisfactory, they might give some idea of how those pre-processing techniques affect the performance.
(1) image of a clear road (2) outcome of a simple two-layer CNN
(3) CNN with first layer initialized by pre-trained CAE (4) CNN with ZCA whitening
The CNNs are trained using SGD with fixed learning rates. Tested on the KITTI road category data set, the error rate of method (2) is around 14%, and the error rates of method (3) and (4) are around 12%.
Please correct me where I'm wrong. :)
|
How does a Stacked AutoEncoder increases performance of a Convolutional Neural Network in image clas
For image classification tasks, how can a stacked auto-encoder help a traditional Convolutional Neural Network?
As mentioned in the paper, we can use the pre-trained weights to initialize CNN layers,
|
48,566
|
How does a Stacked AutoEncoder increases performance of a Convolutional Neural Network in image classification tasks
|
As dontloo mentions, an autoencoder can be used to initialize weights for a CNN and thus act as an additional layer before the CNN. In particular, the hierarchical nature of a stacked autoencoder allows us to encode different types of (progressively more complex) features in each hidden layer, similarly to CNNs.
Stanford's UFLDL has a great explanation of this:
The first layer of a stacked autoencoder tends to learn first-order
features in the raw input (such as edges in an image). The second
layer of a stacked autoencoder tends to learn second-order features
corresponding to patterns in the appearance of first-order features
(e.g., in terms of what edges tend to occur together--for example, to
form contour or corner detectors). Higher layers of the stacked
autoencoder tend to learn even higher-order features.
|
How does a Stacked AutoEncoder increases performance of a Convolutional Neural Network in image clas
|
As dontloo mentions, an autoencoder can be used to initialize weights for a CNN and thus act as an additional layer before the CNN. In particular, the hierarchical nature of a stacked autoencoder
|
How does a Stacked AutoEncoder increases performance of a Convolutional Neural Network in image classification tasks
As dontloo mentions, an autoencoder can be used to initialize weights for a CNN and thus act as an additional layer before the CNN. In particular, the hierarchical nature of a stacked autoencoder allows us to encode different types of (progressively more complex) features in each hidden layer, similarly to CNNs.
Stanford's UFLDL has a great explanation of this:
The first layer of a stacked autoencoder tends to learn first-order
features in the raw input (such as edges in an image). The second
layer of a stacked autoencoder tends to learn second-order features
corresponding to patterns in the appearance of first-order features
(e.g., in terms of what edges tend to occur together--for example, to
form contour or corner detectors). Higher layers of the stacked
autoencoder tend to learn even higher-order features.
|
How does a Stacked AutoEncoder increases performance of a Convolutional Neural Network in image clas
As dontloo mentions, an autoencoder can be used to initialize weights for a CNN and thus act as an additional layer before the CNN. In particular, the hierarchical nature of a stacked autoencoder
|
48,567
|
Non-normality of residuals in a negative binomial GLMM
|
Reginald, you have probably moved on by now, but for people who stumble across this post I would like to note that the DHARMa package (available from CRAN, see here), which I have created, solves this problem, i.e. it will allow you to test whether the residuals are compatible with the assumptions of the negative binomial (or any other distribution, for that matter).
From the package description:
The DHARMa package uses a simulation-based approach to create readily
interpretable scaled residuals from fitted generalized linear mixed
models. Currently supported are all 'merMod' classes from 'lme4'
('lmerMod', 'glmerMod'), 'glm' (including 'negbin' from 'MASS', but
excluding quasi-distributions) and 'lm' model classes. Alternatively,
externally created simulations, e.g. posterior predictive simulations
from Bayesian software such as 'JAGS', 'STAN', or 'BUGS' can be
processed as well. The resulting residuals are standardized to values
between 0 and 1 and can be interpreted as intuitively as residuals
from a linear regression. The package also provides a number of plot
and test functions for typical model misspecification problems, such as
over/underdispersion, zero-inflation, and spatial / temporal
autocorrelation.
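The core idea, simulation-based (PIT-style) scaled residuals, is easy to sketch outside R: for each observation, simulate many draws from the fitted model and record the fraction of simulated values below the observed one; if the model is right, these residuals are roughly uniform on [0, 1]. A deliberately simplified Python illustration (a normal model with known parameters, not the package's actual algorithm):

```python
import random
import statistics

random.seed(42)

# the "true" and fitted model coincide here, so residuals should look uniform
mu, sigma = 3.0, 1.5
observed = [random.gauss(mu, sigma) for _ in range(200)]

def scaled_residual(y, n_sim=500):
    # empirical CDF of the fitted model, evaluated at the observation
    sims = [random.gauss(mu, sigma) for _ in range(n_sim)]
    return sum(s < y for s in sims) / n_sim

residuals = [scaled_residual(y) for y in observed]
print(statistics.mean(residuals))  # close to 0.5 under a correctly specified model
```

Deviations from uniformity (residuals piling up near 0 or 1) would then signal misspecification such as over/underdispersion, which is what the package's plot and test functions look for.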
|
Non-normality of residuals in a negative binomial GLMM
|
Reginald, you have probably moved on by now, but for people that stumble across this post I would like to note that the DHARMa package (available from CRAN, see here) that I have created solves this p
|
Non-normality of residuals in a negative binomial GLMM
Reginald, you have probably moved on by now, but for people who stumble across this post I would like to note that the DHARMa package (available from CRAN, see here), which I have created, solves this problem, i.e. it will allow you to test whether the residuals are compatible with the assumptions of the negative binomial (or any other distribution, for that matter).
From the package description:
The DHARMa package uses a simulation-based approach to create readily
interpretable scaled residuals from fitted generalized linear mixed
models. Currently supported are all 'merMod' classes from 'lme4'
('lmerMod', 'glmerMod'), 'glm' (including 'negbin' from 'MASS', but
excluding quasi-distributions) and 'lm' model classes. Alternatively,
externally created simulations, e.g. posterior predictive simulations
from Bayesian software such as 'JAGS', 'STAN', or 'BUGS' can be
processed as well. The resulting residuals are standardized to values
between 0 and 1 and can be interpreted as intuitively as residuals
from a linear regression. The package also provides a number of plot
and test functions for typical model misspecification problems, such as
over/underdispersion, zero-inflation, and spatial / temporal
autocorrelation.
|
Non-normality of residuals in a negative binomial GLMM
Reginald, you have probably moved on by now, but for people that stumble across this post I would like to note that the DHARMa package (available from CRAN, see here) that I have created solves this p
|
48,568
|
Expectation of a chi-squared distribution
|
The PDFs are
$$f_U(u) = C(1)u^{-1/2} e^{-u/2}$$
and
$$f_V(v) = C(n)v^{n/2-1}e^{-v/2}$$
where
$$C(k) = \frac{1}{2^{k/2}\Gamma(\frac{k}{2})}$$
are the normalizing constants. Use polar-like coordinates $u=(r\cos(\theta))^2$ and $v=(r\sin(\theta))^2$ to evaluate the expectation, after first computing
$$\eqalign{du\wedge dv &= (2 r \cos(\theta)^2 dr - 2r^2 \sin(\theta)\cos(\theta)d\theta)\wedge (2 r \sin(\theta)^2 dr + 2 r^2 \sin(\theta)\cos(\theta)d\theta) \\
&= 4r^3\sin(\theta)\cos(\theta) dr\wedge d\theta}$$
and
$$u+v = r^2(\cos(\theta)^2 + \sin(\theta)^2) = r^2,$$
so that (provided $n+2p \gt 1$) it splits into a Beta integral involving $\theta$ and a Gamma integral involving $r^2$ and a great deal of cancellation occurs:
$$\eqalign{\mathbb{E}\left(\frac{U^p}{U+V}\right) &= C(1)C(n)\int_0^\infty\int_0^\infty \frac{u^p}{u+v} u^{-1/2} v^{n/2-1} e^{-(u+v)/2}\, du\, dv,\\
&= 4C(1)C(n)\color{blue}{\int_0^{\pi/2}\sin(\theta)^{n-1}\cos(\theta)^{2p} d\theta}\color{red}{ \int_0^\infty r^{2p+n-2} e^{-r^2/2} dr} \\
&= 2^2 \frac{1}{2^{1/2}\Gamma(1/2)} \frac{1}{2^{n/2}\Gamma(n/2)} \color{blue}{\frac{\Gamma(n/2)\Gamma(p+1/2)}{2\Gamma(p+n/2-1/2)}}\; \color{red}{2^{p+n/2-3/2} \Gamma(p+n/2-1/2)} \\
&= \frac{2^p \Gamma(p+1/2)}{\sqrt{\pi}(n+2p-1)}.
}$$
Otherwise, if $n + 2p \le 1$, the integral diverges as $r\to 0$.
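A quick Monte Carlo sanity check of the closed form is straightforward (a pure-Python sketch; it uses the fact that $\chi^2_k$ is Gamma with shape $k/2$ and scale 2):

```python
import math
import random

random.seed(1)

def expect_ratio(p, n, reps=100_000):
    # Monte Carlo estimate of E[U^p / (U + V)] with U ~ chi2(1), V ~ chi2(n)
    total = 0.0
    for _ in range(reps):
        u = random.gammavariate(0.5, 2.0)      # chi-squared with 1 df
        v = random.gammavariate(n / 2.0, 2.0)  # chi-squared with n df
        total += u**p / (u + v)
    return total / reps

def closed_form(p, n):
    return 2**p * math.gamma(p + 0.5) / (math.sqrt(math.pi) * (n + 2 * p - 1))

print(expect_ratio(1, 4), closed_form(1, 4))  # both close to 1/5
```

The case p = 1, n = 4 is a useful spot check: U/(U+V) ~ Beta(1/2, 2), whose mean is 1/(n+1) = 1/5, agreeing with the formula.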
|
Expectation of a chi-squared distribution
|
The PDFs are
$$f_U(u) = C(1)u^{-1/2} e^{-u/2}$$
and
$$f_V(v) = C(n)v^{n/2-1}e^{-v/2}$$
where
$$C(k) = \frac{1}{2^{k/2}\Gamma(\frac{k}{2})}$$
are the normalizing constants. Use polar-like coordinates
|
Expectation of a chi-squared distribution
The PDFs are
$$f_U(u) = C(1)u^{-1/2} e^{-u/2}$$
and
$$f_V(v) = C(n)v^{n/2-1}e^{-v/2}$$
where
$$C(k) = \frac{1}{2^{k/2}\Gamma(\frac{k}{2})}$$
are the normalizing constants. Use polar-like coordinates $u=(r\cos(\theta))^2$ and $v=(r\sin(\theta))^2$ to evaluate the expectation, after first computing
$$\eqalign{du\wedge dv &= (2 r \cos(\theta)^2 dr - 2r^2 \sin(\theta)\cos(\theta)d\theta)\wedge (2 r \sin(\theta)^2 dr + 2 r^2 \sin(\theta)\cos(\theta)d\theta) \\
&= 4r^3\sin(\theta)\cos(\theta) dr\wedge d\theta}$$
and
$$u+v = r^2(\cos(\theta)^2 + \sin(\theta)^2) = r^2,$$
so that (provided $n+2p \gt 1$) it splits into a Beta integral involving $\theta$ and a Gamma integral involving $r^2$ and a great deal of cancellation occurs:
$$\eqalign{\mathbb{E}\left(\frac{U^p}{U+V}\right) &= C(1)C(n)\int_0^\infty\int_0^\infty \frac{u^p}{u+v} u^{-1/2} v^{n/2-1} e^{-(u+v)/2}\, du\, dv,\\
&= 4C(1)C(n)\color{blue}{\int_0^{\pi/2}\sin(\theta)^{n-1}\cos(\theta)^{2p} d\theta}\color{red}{ \int_0^\infty r^{2p+n-2} e^{-r^2/2} dr} \\
&= 2^2 \frac{1}{2^{1/2}\Gamma(1/2)} \frac{1}{2^{n/2}\Gamma(n/2)} \color{blue}{\frac{\Gamma(n/2)\Gamma(p+1/2)}{2\Gamma(p+n/2-1/2)}}\; \color{red}{2^{p+n/2-3/2} \Gamma(p+n/2-1/2)} \\
&= \frac{2^p \Gamma(p+1/2)}{\sqrt{\pi}(n+2p-1)}.
}$$
Otherwise, if $n + 2p \le 1$, the integral diverges as $r\to 0$.
|
Expectation of a chi-squared distribution
The PDFs are
$$f_U(u) = C(1)u^{-1/2} e^{-u/2}$$
and
$$f_V(v) = C(n)v^{n/2-1}e^{-v/2}$$
where
$$C(k) = \frac{1}{2^{k/2}\Gamma(\frac{k}{2})}$$
are the normalizing constants. Use polar-like coordinates
|
48,569
|
How to show mean and standard deviation of Normal distribution?
|
Since only the terms involving $\mu$ are relevant, I will be dropping multiplicative terms not involving it without warning.
\begin{align*}
[\mu | x_1,\ldots,x_n,\sigma^2] &\propto [x_1,\ldots,x_n|\mu,\sigma^2] \times [\mu|\sigma^2]\\
&\propto
\prod_i \exp(-\frac{(x_i-\mu)^2}{2\sigma^2}) \times \exp(-\frac{(\mu-\beta)^2}{2\sigma^2/n_0})\\
&= \exp(-\frac{\sum_i(x_i-\mu)^2 + n_0(\mu-\beta)^2}{2\sigma^2})\\
& =\exp(-\frac{\sum_i (x_i^2 -2\mu x_i + \mu^2) +n_0(\mu^2-2\mu\beta+\beta^2)}{2\sigma^2})\\
&\propto \exp(-\frac{\mu^2(n+n_0) - 2\mu(\sum_i x_i + n_0\beta)}{2\sigma^2})\\
& \propto\exp(-\frac{(n+n_0)(\mu - \frac{\sum_i x_i + n_0\beta}{n+n_0})^2}{2\sigma^2})\\
& = \exp(-\frac{(\mu - \frac{n\bar{x}+ n_0\beta}{n+n_0})^2}{2\sigma^2/(n+n_0)})
\end{align*}
The last term is recognizable as the pdf of the $N(\frac{n\bar{x}+ n_0\beta}{n+n_0}, \frac{\sigma^2}{n+n_0})$ distribution. Note that the chi-squared distribution was not needed, because the sample variance $S^2$, which needs this distribution, was not used anywhere.
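Numerically, the posterior mean is just a precision-weighted compromise between the prior mean $\beta$ and the sample mean $\bar{x}$; a small Python check of the closed form (all numbers below are hypothetical):

```python
def posterior(xbar, n, beta, n0, sigma2):
    # posterior for mu with known sigma^2 and prior mu ~ N(beta, sigma^2 / n0)
    mean = (n * xbar + n0 * beta) / (n + n0)
    var = sigma2 / (n + n0)
    return mean, var

m, v = posterior(xbar=10.0, n=20, beta=0.0, n0=5, sigma2=4.0)
print(m, v)  # mean 8.0, variance 0.16
```

Note the limiting behavior: as n0 goes to 0 the posterior mean reduces to the sample mean, and as n grows the prior's influence vanishes, exactly as the weights n and n0 suggest.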
|
How to show mean and standard deviation of Normal distribution?
|
Since only the terms involving $\mu$ are relevant, I will be dropping multiplicative terms not involving it without warning.
\begin{align*}
[\mu | x_1,\ldots,x_n,\sigma^2] &\propto [x_1,\ldots,x_n|\m
|
How to show mean and standard deviation of Normal distribution?
Since only the terms involving $\mu$ are relevant, I will be dropping multiplicative terms not involving it without warning.
\begin{align*}
[\mu | x_1,\ldots,x_n,\sigma^2] &\propto [x_1,\ldots,x_n|\mu,\sigma^2] \times [\mu|\sigma^2]\\
&\propto
\prod_i \exp(-\frac{(x_i-\mu)^2}{2\sigma^2}) \times \exp(-\frac{(\mu-\beta)^2}{2\sigma^2/n_0})\\
&= \exp(-\frac{\sum_i(x_i-\mu)^2 + n_0(\mu-\beta)^2}{2\sigma^2})\\
& =\exp(-\frac{\sum_i (x_i^2 -2\mu x_i + \mu^2) +n_0(\mu^2-2\mu\beta+\beta^2)}{2\sigma^2})\\
&\propto \exp(-\frac{\mu^2(n+n_0) - 2\mu(\sum_i x_i + n_0\beta)}{2\sigma^2})\\
& \propto\exp(-\frac{(n+n_0)(\mu - \frac{\sum_i x_i + n_0\beta}{n+n_0})^2}{2\sigma^2})\\
& = \exp(-\frac{(\mu - \frac{n\bar{x}+ n_0\beta}{n+n_0})^2}{2\sigma^2/(n+n_0)})
\end{align*}
The last term is recognizable as the pdf of the $N(\frac{n\bar{x}+ n_0\beta}{n+n_0}, \frac{\sigma^2}{n+n_0})$ distribution. Note that the chi-squared distribution was not needed, because the sample variance $S^2$, which needs this distribution, was not used anywhere.
|
How to show mean and standard deviation of Normal distribution?
Since only the terms involving $\mu$ are relevant, I will be dropping multiplicative terms not involving it without warning.
\begin{align*}
[\mu | x_1,\ldots,x_n,\sigma^2] &\propto [x_1,\ldots,x_n|\m
|
48,570
|
How to decide on Theil-Sen outliers?
|
I assume that you are familiar with the notion of breakdown point of an estimator. A similar concept exists for outlier identification rules (see [3]).
(a) The breakdown point of the Theil-Sen estimator at 2D data is $1-\frac{1}{\sqrt{2}}$.
(b) Furthermore, the 2D Theil-Sen estimator is residual admissible (meaning that it only depends on the data through the vector of residuals of the fit).
Denote $\pmb e=\{e_i\}_{i=1}^n$ the $n$-vector of fitted residuals. Because of (a) and (b), flagging as outliers the observations with residual larger than the $q_h=1-(1-\frac{1}{\sqrt{2}})$ quantile of $|\pmb e|=\{|e_i|\}_{i=1}^n$ will ensure that your outlier identification rule has the same breakdown point as the estimator it is based on. Using a higher value of $q_h$ will decrease the breakdown point (of your outlier identification rule) while using a lower value of $q_h$ will not increase it (the breakdown point of your outlier identification rule is bounded from below by the breakdown point of the fit you use to identify the outliers).
Increasing $q_h$ will reduce the risk of misclassifying non-outlying observations as outliers (setting $q_h=1$ will make sure that no non-outlying observations are misclassified as outliers). But you could obtain a much better result by using one-step reweighting [1]. With one-step reweighting you can set the asymptotic risk of misclassifying non-outlying observations as outliers to any small value $\epsilon$ without affecting the breakdown point of your outlier identification rule (though you will increase the minimum distance outliers have to be from the bulk of the data to be identifiable; this distance has no bearing on the notion of breakdown point of an outlier identification rule). As a cost, you will need to add assumptions about the distribution of the vector of residuals.
In any case there is a range of admissible trade-offs between the two types of risk (misclassifying non-outlying data points as outliers and misclassifying outliers as non-outlying data points) for any value of $q_h$ above a threshold $q_h^*$ corresponding to the breakdown point of the initial estimator. For outlier identification rules based on the Theil-Sen estimator, $q_h^*=1-(1-\frac{1}{\sqrt{2}})$.
Using the Theil-Sen estimator to find outliers is sub-optimal from a statistical point of view. You will get better trade-off terms, as well as the choice of more robustness to outliers, by using a more modern method such as FastLTS. FastLTS also includes a re-weighting step, but is based on more robust initial estimates than the Theil-Sen fit (so that the $q_h^*$ of FastLTS can be as high as $\approx 0.5$). See [2] for a recent review.
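To make the rule concrete, here is a small pure-Python sketch (the data are illustrative, not from the question): fit the Theil-Sen line as the median of pairwise slopes, take absolute residuals, and flag points above the $q_h = 1/\sqrt{2}$ quantile.

```python
import math
from statistics import median

# toy data: y = 2x exactly, plus one gross outlier at the last point
xs = list(range(11))
ys = [2 * x for x in xs[:-1]] + [50.0]

# Theil-Sen: slope is the median of all pairwise slopes,
# intercept the median of y - b*x
slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
          for i in range(len(xs)) for j in range(i + 1, len(xs))]
b = median(slopes)
a = median(y - b * x for x, y in zip(xs, ys))

abs_res = [abs(y - (a + b * x)) for x, y in zip(xs, ys)]
q_h = 1 / math.sqrt(2)  # quantile that keeps the estimator's breakdown point
threshold = sorted(abs_res)[int(q_h * len(abs_res))]
flagged = [i for i, e in enumerate(abs_res) if e > threshold]
print(b, a, flagged)  # the gross outlier (index 10) is among the flagged points
```

On this clean toy example the fit recovers slope 2 and intercept 0 despite the contaminated point, and only that point exceeds the threshold.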
[1] P. Cizek (2010). Reweighted Least Trimmed Squares: An Alternative to One-Step Estimators.
[2] M. Hubert, P. J. Rousseeuw, and S. Van Aelst (2008). High-Breakdown Robust Multivariate Methods.
[3] C. Becker and U. Gather (1999).
The Masking Breakdown Point of Multivariate Outlier Identification Rules.
|
How to decide on Theil-Sen outliers?
|
I assume that you are familiar with the notion of breakdown point of an estimator. A similar concept exists for outlier identification rules (see [3]).
(a) The breakdown point of the Theil-Sen estim
|
How to decide on Theil-Sen outliers?
I assume that you are familiar with the notion of breakdown point of an estimator. A similar concept exists for outlier identification rules (see [3]).
(a) The breakdown point of the Theil-Sen estimator at 2D data is $1-\frac{1}{\sqrt{2}}$.
(b) Furthermore, the 2D Theil-Sen estimator is residual admissible (meaning that it only depends on the data through the vector of residuals of the fit).
Denote $\pmb e=\{e_i\}_{i=1}^n$ the $n$-vector of fitted residuals. Because of (a) and (b), flagging as outliers the observations with residual larger than the $q_h=1-(1-\frac{1}{\sqrt{2}})$ quantile of $|\pmb e|=\{|e_i|\}_{i=1}^n$ will ensure that your outlier identification rule has the same breakdown point as the estimator it is based on. Using a higher value of $q_h$ will decrease the breakdown point (of your outlier identification rule) while using a lower value of $q_h$ will not increase it (the breakdown point of your outlier identification rule is bounded from below by the breakdown point of the fit you use to identify the outliers).
Increasing $q_h$ will reduce the risk of misclassifying non-outlying observations as outliers (setting $q_h=1$ will make sure that no non-outlying observations are misclassified as outliers). But you could obtain a much better result by using one-step reweighting [1]. With one-step reweighting you can set the asymptotic risk of misclassifying non-outlying observations as outliers to any small value $\epsilon$ without affecting the breakdown point of your outlier identification rule (though you will increase the minimum distance outliers have to be from the bulk of the data to be identifiable; this distance has no bearing on the notion of breakdown point of an outlier identification rule). As a cost, you will need to add assumptions about the distribution of the vector of residuals.
In any case there is a range of admissible trade-offs between the two types of risk (misclassifying non-outlying data points as outliers and misclassifying outliers as non-outlying data points) for any value of $q_h$ above a threshold $q_h^*$ corresponding to the breakdown point of the initial estimator. For outlier identification rules based on the Theil-Sen estimator, $q_h^*=1-(1-\frac{1}{\sqrt{2}})$.
Using the Theil-Sen estimator to find outliers is sub-optimal from a statistical point of view. You will get better trade-off terms, as well as the choice of more robustness to outliers, by using a more modern method such as FastLTS. FastLTS also includes a re-weighting step, but is based on more robust initial estimates than the Theil-Sen fit (so that the $q_h^*$ of FastLTS can be as high as $\approx 0.5$). See [2] for a recent review.
[1] P. Cizek (2010). Reweighted Least Trimmed Squares: An Alternative to One-Step Estimators.
[2] M. Hubert, P. J. Rousseeuw, and S. Van Aelst (2008). High-Breakdown Robust Multivariate Methods.
[3] C. Becker and U. Gather (1999).
The Masking Breakdown Point of Multivariate Outlier Identification Rules.
|
How to decide on Theil-Sen outliers?
I assume that you are familiar with the notion of breakdown point of an estimator. A similar concept exists for outlier identification rules (see [3]).
(a) The breakdown point of the Theil-Sen estim
|
48,571
|
Quantreg : Unbalanced residuals
|
You are conducting a median regression (tau=0.5) on asymmetrically distributed data. Here is a simpler example to show what is going on.
Suppose your asymmetric data are lognormal:
set.seed(1)
xx <- rlnorm(100,0,1)
Then what you are doing amounts to finding the median of your data.
median(xx)
[1] 1.121518
Now, the median minimizes the sum of absolute errors. It will not minimize the sum of "raw" errors:
sum(xx-median(xx))
[1] 52.74494
If you want a value that minimizes the sum of "raw" errors, you need to take the mean:
mean(xx)
[1] 1.648967
sum(xx-mean(xx))
[1] -9.992007e-15
So: if it is important to you that your fit yields zero average error, you will need to run an ordinary OLS regression. Which will, of course, be sensitive to outliers. (You incidentally found that the conditional mean is equal to the conditional 47% quantile. But that of course won't minimize absolute deviations.)
There is no way to have both minimal absolute deviation and balanced residuals if your distribution is asymmetric. You can of course find a tradeoff between median and OLS regression, perhaps by taking an average, or by regularizing in some other way (lasso, ridge regression, elastic net).
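The underlying fact is easy to verify directly: among constants $c$, the median minimizes $\sum|x_i-c|$ while the mean minimizes $\sum(x_i-c)^2$, and only the mean zeroes out the average error. A quick check on skewed data (a Python sketch mirroring the R example above):

```python
import random
import statistics

random.seed(1)
xx = [random.lognormvariate(0, 1) for _ in range(100)]  # right-skewed data
med, mu = statistics.median(xx), statistics.mean(xx)

def sae(c):  # sum of absolute errors around a constant c
    return sum(abs(x - c) for x in xx)

def sse(c):  # sum of squared errors around a constant c
    return sum((x - c) ** 2 for x in xx)

print(sae(med) <= sae(mu))           # True: the median minimizes absolute error
print(sse(mu) <= sse(med))           # True: the mean minimizes squared error
print(abs(sum(x - mu for x in xx)))  # ~0: residuals balance only around the mean
```

Since the two minimizers differ whenever the distribution is asymmetric, zero average error and minimal absolute deviation really are incompatible goals here.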
|
Quantreg : Unbalanced residuals
|
You are conducting a median regression (tau=0.5) on asymmetrically distributed data. Here is a simpler example to show what is going on.
Suppose your asymmetric data are lognormal:
set.seed(1)
xx <- r
|
Quantreg : Unbalanced residuals
You are conducting a median regression (tau=0.5) on asymmetrically distributed data. Here is a simpler example to show what is going on.
Suppose your asymmetric data are lognormal:
set.seed(1)
xx <- rlnorm(100,0,1)
Then what you are doing amounts to finding the median of your data.
median(xx)
[1] 1.121518
Now, the median minimizes the sum of absolute errors. It will not minimize the sum of "raw" errors:
sum(xx-median(xx))
[1] 52.74494
If you want a value that minimizes the sum of "raw" errors, you need to take the mean:
mean(xx)
[1] 1.648967
sum(xx-mean(xx))
[1] -9.992007e-15
So: if it is important to you that your fit yields zero average error, you will need to run an ordinary OLS regression. Which will, of course, be sensitive to outliers. (You incidentally found that the conditional mean is equal to the conditional 47% quantile. But that of course won't minimize absolute deviations.)
There is no way to have both minimal absolute deviation and balanced residuals if your distribution is asymmetric. You can of course find a tradeoff between median and OLS regression, perhaps by taking an average, or by regularizing in some other way (lasso, ridge regression, elastic net).
|
Quantreg : Unbalanced residuals
You are conducting a median regression (tau=0.5) on asymmetrically distributed data. Here is a simpler example to show what is going on.
Suppose your asymmetric data are lognormal:
set.seed(1)
xx <- r
|
48,572
|
ICC in a multi level model with two random effects
|
There are several ways to calculate and interpret the ICC for a mixed model. A useful thread is here. To calculate, it is the amount of variance from certain factor(s) divided by the total variance. That would be akin to your B and D calculations. This can be interpreted as the correlation of two randomly drawn units from the same grouping or as the amount of variance explained by those groupings, similar to an $R^2$.
This calculation (B): $$\frac{\sigma^{2}_{item}}{\sigma^{2}_{subj}+\sigma^{2}_{item}+\sigma^{2}_{res}}$$ could be interpreted as the correlation of scores for any item, regardless of the subject using the item.
Similarly, this calculation (D): $$\frac{\sigma^{2}_{subj}}{\sigma^{2}_{subj}+\sigma^{2}_{item}+\sigma^{2}_{res}}$$ could be interpreted as the correlation of scores from any subject, regardless of what item they are working on.
A combined calculation, such as this: $$\frac{\sigma^{2}_{subj}+\sigma^{2}_{item}}{\sigma^{2}_{subj}+\sigma^{2}_{item}+\sigma^{2}_{res}}$$ could be interpreted as the correlation between scores of the same person using the same item (in your data, this is 0.654).
Turning to the values you listed (0.488 and 0.165, respectively), a naive interpretation is that the items you're using are modestly correlated regardless of who is using them, while the subject scores are mostly uncorrelated across all the items.
Finally, regarding your two additional calculations (A and C), I'm not aware of a useful interpretation of those values, however I have an open question on the topic.
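For concreteness, the three quantities above are one-liners once you have the variance components (the numbers below are hypothetical, not your estimates):

```python
def iccs(var_subj, var_item, var_res):
    # ICCs from the variance components of a crossed random-effects model
    total = var_subj + var_item + var_res
    return {
        "item": var_item / total,                    # calculation (B)
        "subject": var_subj / total,                 # calculation (D)
        "combined": (var_subj + var_item) / total,   # same subject, same item
    }

out = iccs(var_subj=1.0, var_item=3.0, var_res=2.0)
print(out)  # item 0.5, subject ~0.167, combined ~0.667
```

Note that the combined ICC is simply the sum of the item and subject ICCs, since they share the same denominator.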
|
ICC in a multi level model with two random effects
|
There are several ways to calculate and interpret the ICC for a mixed model. A useful thread is here. To calculate, it is the amount of variance from certain factor(s) divided by the total variance.
|
ICC in a multi level model with two random effects
There are several ways to calculate and interpret the ICC for a mixed model. A useful thread is here. To calculate, it is the amount of variance from certain factor(s) divided by the total variance. That would be akin to your B and D calculations. This can be interpreted as the correlation of two randomly drawn units from the same grouping or as the amount of variance explained by those groupings, similar to an $R^2$.
This calculation (B): $$\frac{\sigma^{2}_{item}}{\sigma^{2}_{subj}+\sigma^{2}_{item}+\sigma^{2}_{res}}$$ could be interpreted as the correlation of scores for any item, regardless of the subject using the item.
Similarly, this calculation (D): $$\frac{\sigma^{2}_{subj}}{\sigma^{2}_{subj}+\sigma^{2}_{item}+\sigma^{2}_{res}}$$ could be interpreted as the correlation of scores from any subject, regardless of what item they are working on.
A combined calculation, such as this: $$\frac{\sigma^{2}_{subj}+\sigma^{2}_{item}}{\sigma^{2}_{subj}+\sigma^{2}_{item}+\sigma^{2}_{res}}$$ could be interpreted as the correlation between scores of the same person using the same item (in your data, this is 0.654).
Turning to the values you listed (0.488 and 0.165, respectively), a naive interpretation is that the items you're using are modestly correlated regardless of who is using them, while the subject scores are mostly uncorrelated across all the items.
Finally, regarding your two additional calculations (A and C), I'm not aware of a useful interpretation of those values, however I have an open question on the topic.
|
ICC in a multi level model with two random effects
There are several ways to calculate and interpret the ICC for a mixed model. A useful thread is here. To calculate, it is the amount of variance from certain factor(s) divided by the total variance.
|
48,573
|
interpreting confidence intervals in t.test
|
.2532 is always going to be in the confidence interval since this is how the interval was constructed. The formula is
$(\bar{x_1} - \bar{x_2}) \pm t^*_{df} \times SE(\bar{x_1} - \bar{x_2})$
Furthermore, $\bar{x_1} - \bar{x_2}$ is called your observed result (or sample difference in means), and as you can see this is where the interval is centered, so it will always be contained in the confidence interval.
The hypotheses you are testing are
$H_0: \mu_1 - \mu_2 = 0$ vs. $H_1: \mu_1 - \mu_2 \neq 0$
Therefore, when checking whether or not the null hypothesis is rejected based on the confidence interval, you look to see if the null-hypothesized value is in the interval. In this case that value is 0; since 0 is not in the interval, we reject the null hypothesis at the .05 significance level. This is consistent with the decision made from the p-value.
Hopefully this helps but my response is very much an overview and there is a lot more going on conceptually here that I did not go into.
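The formula can be checked numerically. The summary statistics below are made up for illustration (they are not the questioner's data), and the critical value is the textbook $t^*$ for 40 degrees of freedom:

```python
# Hypothetical summary statistics (illustrative, not the questioner's data):
mean_diff = 0.2532   # observed difference in sample means, x̄1 - x̄2
se_diff = 0.10       # standard error of the difference
t_crit = 2.021       # t*_{df} for 95% confidence with df = 40 (from a t table)

lo = mean_diff - t_crit * se_diff
hi = mean_diff + t_crit * se_diff
# The interval is centered on mean_diff, so the observed difference is
# always inside it; rejecting H0 is about whether 0 falls outside it.
```

Here `lo` is about 0.05 and `hi` about 0.46, so 0 lies outside the interval and the null would be rejected, mirroring the logic above.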
|
48,574
|
interpreting confidence intervals in t.test
|
Why do smart people have such trouble explaining simple things.
In layman's terms, p-value = 0.03236 means that if samples A and B really came from the same population, a difference this large would show up only about 3% of the time.
You're 95% confident the difference between sample A and B is between 0.02 and 0.48. That interval is necessarily centered on the observed difference of 0.25; what stands out is how wide it is, which points to a small sample or high variability, and it means the p-value above should be taken with caution if the t-test's assumptions are violated.
A tighter 95% CI around a difference of means of 0.25 would be something like 0.20 to 0.30, where you could say the true difference is probably between 0.20 and 0.30. Your test puts it anywhere from 0.02 to 0.48, which is a huge range. Run t.test on some fake data from a normal distribution and the result will make more sense.
|
48,575
|
Books for statistical computing course?
|
Numerical Analysis for Statisticians, by Kenneth Lange, is a wonderful book for this purpose. It provides most of the necessary background in calculus and some algebra to conduct rigorous numerical analyses of statistical problems. This includes expansions, eigen-analysis, optimisation, integration, approximation theory, and simulation, in less than 600 pages.
Note: I reviewed this book for CHANCE a few years ago, if you want to read the whole review.
|
48,576
|
What is the difference between sample space and population?
|
The population is the set of all units a random process can pick. The sample space S is the set of all possible outcomes of a random variable.
For example, the population can be the complete population of the US. Then your random process picks a person, John Smith.
If your random variable asks the color of hair of a person, then $S = \{\text{black}, \text{brown}, \text{blonde}, ...\}$. If your variable asks the age, $S = [0, 130)$. If your variable asks the number of letters in the last name, then $S = \mathbb{N}$.
In some examples they are the same: if you ask for the number of points shown by a die, then the population is {1,2,3,4,5,6} and the sample space is also {1,2,3,4,5,6}.
In the case of one random variable this concept is a bit tedious. It becomes very clear and important when you have multiple variables: then one realization, John Smith, can answer all of these questions, $X_1, \dots, X_n$.
|
48,577
|
What is the difference between sample space and population?
|
In both Probability and Statistics, we refer to the (probability-theoretic) experiment of drawing a SAMPLE,
size n, from a (statistical) POPULATION. The SAMPLE SPACE for this experiment is therefore the set of all n-element samples. If n = 1, then the sample space is the same as the population.
Now, I understand that both of these last two concepts have been considered equivalent to the notion of "universe". It can be said that we begin with the population as our universe, and then modify the universe so that it consists of n-element samples (subsets/combinations) of the population. The difference between Probability and Statistics is that the population is fully known in the study of probability theory, while statistics is about using the chosen sample to make an inference about the population.
|
48,578
|
Variance of the Poisson Binomial Distribution
|
Think of the case where $n=2$. If $p_1 = p_2 = 0.5$, this maximizes the variance of X. If $p_1 = 0$ and $p_2 = 1$, then $X=1$ and there is no variance.
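Both $n=2$ cases can be checked with the Poisson binomial variance formula $\operatorname{Var}(X)=\sum_i p_i(1-p_i)$:

```python
def pbinom_var(ps):
    # Variance of a Poisson binomial: a sum of independent Bernoulli(p_i)
    # variables, so the variances p_i * (1 - p_i) simply add.
    return sum(p * (1 - p) for p in ps)

v_max = pbinom_var([0.5, 0.5])  # p1 = p2 = 0.5 maximizes the variance: 0.5
v_min = pbinom_var([0.0, 1.0])  # X = 1 with certainty: variance 0
```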
|
48,579
|
Full rank assumption in the linear regression model explanation:
|
There is an error. It should be $\beta_4'=\beta_4-a$. If we substitute these new $\beta$s into the regression equation we get:
\begin{align}
C &= \beta_1 + \beta_2'X_{non-labor-income}+\beta_3'X_{salary}+\beta_4'X_{total-income}+\varepsilon \\
& = \beta_1 + (\beta_2 +a)X_{non-labor-income}+(\beta_3+a)X_{salary}+(\beta_4-a)X_{total-income}+\varepsilon\\
& = \beta_1 + \beta_2X_{non-labor-income}+\beta_3X_{salary}+\beta_4X_{total-income}+\varepsilon \\
& + a(X_{non-labor-income}+X_{salary}-X_{total-income})
\end{align}
and the last term is zero. So we substituted new coefficients, but the regression did not change, which means there is a multitude of coefficient values that give the same result, whereas a basic regression assumption is that the coefficients are uniquely defined.
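This invariance is easy to verify numerically. The income figures below are made up; the only thing that matters is that the third regressor is the sum of the first two:

```python
# Made-up rows of (non-labor income, salary); total income is their sum.
rows = [(1.0, 2.0), (2.0, 1.5), (3.0, 4.0), (0.5, 2.5)]
X = [(nl, sal, nl + sal) for nl, sal in rows]

def predict(beta, x):
    return sum(b * xi for b, xi in zip(beta, x))

beta = (0.3, 0.7, 1.1)                                  # arbitrary coefficients
a = 5.0
beta_shifted = (beta[0] + a, beta[1] + a, beta[2] - a)  # the (b2+a, b3+a, b4-a) shift

# Every row gets identical fitted values: the coefficients are not unique.
same = all(abs(predict(beta, x) - predict(beta_shifted, x)) < 1e-9 for x in X)
```

The shift contributes $a(x_{nl} + x_{sal} - x_{total}) = 0$ to every fitted value, which is exactly the zero term in the algebra above.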
|
48,580
|
Full rank assumption in the linear regression model explanation:
|
Old post, but since I've been wondering about the same question, here's my take on it. I came up with the following numerical example to help me understand why the design matrix $\textbf{X}$ must be full rank.
First, with a linear model we want to solve (leaving out the error term $\boldsymbol{\epsilon}$):
$$\textbf{y} = \textbf{X} \boldsymbol{\beta}$$
where $\textbf{y}$ is a vector of $n$ observations and $\textbf{X}$ is a matrix of known covariates with $n$ rows and $p$ columns. We want to solve for the vector of $p$ coefficients $\boldsymbol{\beta}$.
Matrix $\textbf{X}$ is a linear transformation that maps the coefficient space $\mathbb{R}^p$ of $\boldsymbol{\beta}$ to the observation space $\mathbb{R}^n$ of $\textbf{y}$. If $\textbf{X}$ is not full rank, this mapping is not one-to-one, i.e. there is more than one solution for $\boldsymbol{\beta}$.
For example, with:
$$ \textbf{y} =
\begin{bmatrix}
8 \\
19 \\
27 \\
35 \\
\end{bmatrix}
\textbf{X} =
\begin{bmatrix}
1 & 2 \\
2 & 5 \\
3 & 7 \\
4 & 9 \\
\end{bmatrix}
$$
$\boldsymbol{X}$ is full rank and the model has a unique solution for $\boldsymbol{\beta} = \boldsymbol{(X^TX)^{-1}X^Ty} = [2 \ \ 3]$.
R code to reproduce this:
y <- c(8, 19, 27, 35)
X <- cbind(c(1, 2, 3, 4), c(2, 5, 7, 9))
qr(X)$rank == ncol(X) # Check X is full rank: TRUE
solve((t(X) %*% X)) %*% t(X) %*% y # Returns [2, 3]
# ---
# Alternatively, use the linear regression machinery:
lm(y ~ 0 + X[,1] + X[,2])
# Coefficients:
# X[, 1] X[, 2]
# 2 3
Now, consider:
$$ \textbf{y} =
\begin{bmatrix}
8 \\
16 \\
24 \\
32 \\
\end{bmatrix}
\textbf{X} =
\begin{bmatrix}
1 & 2 \\
2 & 4 \\
3 & 6 \\
4 & 8 \\
\end{bmatrix}
$$
$\boldsymbol{X}$ is not full rank (column two is twice column one, i.e. it is a linear combination of column one), so there are multiple solutions for $\boldsymbol{\beta}$: for example $\boldsymbol{\beta} = [2 \ \ 3]$, $[4 \ \ 2]$, or $[6 \ \ 1]$. R code:
y <- c(8, 16, 24, 32)
X <- cbind(c(1, 2, 3, 4), c(2, 4, 6, 8))
qr(X)$rank == ncol(X) # FALSE: not full rank
X %*% c(2, 3) == y # All these solutions return TRUE
X %*% c(4, 2) == y
X %*% c(6, 1) == y
# Try to solve:
solve((t(X) %*% X)) %*% t(X) %*% y
Error in solve.default((t(X) %*% X)) :
Lapack routine dgesv: system is exactly singular: U[2,2] = 0
|
48,581
|
Does the sample size influence the number of PCs needed to explain a fixed percentage of variance?
|
You are probably in the $n\ll p$ situation. As @Glen_b wrote in the comment above, such a behaviour is expected.
Illustration and intuition
Let $p=1000$. Let the population be a multivariate normal with mean zero and some arbitrary covariance matrix $\boldsymbol\Sigma$. Let us sample $n$ points from this population and compute the sample covariance matrix $\mathbf S_n$. Here I plotted the spectrum (sorted eigenvalues) of $\boldsymbol\Sigma$ (black line) and the spectra of $\mathbf S_n$ for various values of $n$:
The sum of all eigenvalues (trace of $\mathbf S_n$) remains approximately constant because it is equal to the sum of variances of each of the $p$ variables, and those can be reasonably well estimated already with small $n$: $$\operatorname{tr}(\mathbf S_n)\approx \operatorname{tr}(\boldsymbol\Sigma).$$ But if $n=100$, then certainly only $100$ eigenvalues can be non-zero, so the same trace has to be "spread" over only $100$ values, meaning that the leading eigenvalues will be much larger than the population ones. As $n$ grows, the leading eigenvalues will decrease and the tail will grow. Notice that even once $n>p$, the bias still remains and only with $n=10000$ (ten times the dimensionality) the spectrum starts to look like the population spectrum.
The dots mark the number of components that explain $90\%$ of the variance. The more horizontal the spectrum looks, the larger this number, so by now it should be clear that it will increase with increasing $n$.
For clarity, here is the same example with $\boldsymbol\Sigma = \mathbf I$. The same effect can still be clearly observed:
Theory
If $\mathbf x \sim \mathcal N (0, \boldsymbol \Sigma_{p\times p})$, then $$(n-1) \mathbf S_n \sim \mathcal W_p(\boldsymbol \Sigma, n-1),$$ where $\mathcal W_p$ is the Wishart distribution. The Wishart distribution is well studied, so I expect there are results on the sampling properties of the eigenvalues of Wishart-distributed matrices. I am not familiar with this field, so I cannot say much more, but this would be a starting point for further exploration.
References
I think you might want to cite Jolliffe, 2002, Principal Component Analysis, section 3.6 "Probability Distributions for Sample Principal Components". Here is what he writes there, page 48 ($l_i$ denote eigenvalues of $\mathbf S_n$ and $\lambda_i$ denote eigenvalues of $\boldsymbol \Sigma$):
One specific point [...] is that $E(l_1) > \lambda_1$ and $E(l_p) < \lambda_p$. In general the larger eigenvalues tend to be overestimated and the smaller ones underestimated.
He also adds that
If a distribution other than the multivariate normal is assumed, distributional results for PCs will typically become less tractable.
The references given around are Jackson, 1991, A User’s Guide to Principal Components and Srivastava and Khatri, 1979, An Introduction to Multivariate Statistics. I am not familiar with these books.
Note that Jolliffe does not explicitly comment on the fact that you need more PCs to explain a certain percentage of variance. But perhaps you can write something like this:
For smaller $n$, fewer PCs are needed to explain the same amount of variance, because when $n\ll p$ the leading eigenvalues tend to be overestimated and the trailing eigenvalues underestimated (Jolliffe 2002, Section 3.6).
Matlab code to produce these figures
clear all
p = 1000;
ns = [100 200 500 1000 5000 10000];
W = randn(p,p);
Sigma = transpose(W)*W;
%// alternatively: Sigma = eye(p);
spectrum_population = sort(eig(Sigma), 'descend');
figure('Position', [100 100 1000 400])
hold on
col = lines(length(ns));
for i = 1:length(ns)
X = randn(ns(i),p);
X = X * chol(Sigma);
spectra(i,:) = sort(eig(cov(X)), 'descend');
h(i) = plot(spectra(i,:), 'Color', col(i,:));
ind = find(cumsum(spectra(i,:)) > 0.9*sum(spectra(i,:)), 1);
plot(ind, spectra(i,ind), '.', 'MarkerSize', 20, 'Color', col(i,:))
leg{i} = ['n = ' num2str(ns(i))];
end
h(length(ns)+1) = plot(spectrum_population, 'k', 'LineWidth', 2);
ind = find(cumsum(spectrum_population) > 0.9*sum(spectrum_population), 1);
plot(ind, spectrum_population(ind), 'k.', 'MarkerSize', 20)
leg{length(ns)+1} = 'Population';
legend(h, leg)
legend boxoff
axis([0 p 0 max(spectra(:))])
|
48,582
|
What is the meaning of the correlation in glm
|
You are actually mentioning the answer to your question in the question's body. The coefficients you see are estimates, which means the coefficients themselves are random variables that follow a distribution; what you see is one realization of each. The calculated correlation is the correlation between those random variables (the estimators), not between the single fitted numbers, which, as you mention, would not make sense.
This is why we do the t-test (hypothesis testing) for each of the coefficients and check how significant each one is.
To prove my point consider a super simple model:
a <- rnorm(100)
b <- rnorm(100)
df <- data.frame(a,b)
> summary(glm(a~b, data=df), corr=TRUE)
Call:
glm(formula = a ~ b, data = df)
Deviance Residuals:
Min 1Q Median 3Q Max
-2.48721 -0.64103 0.00034 0.66420 2.50019
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.095567 0.103944 -0.919 0.360
b 0.007084 0.107731 0.066 0.948
(Dispersion parameter for gaussian family taken to be 1.075244)
Null deviance: 105.38 on 99 degrees of freedom
Residual deviance: 105.37 on 98 degrees of freedom
AIC: 295.02
Number of Fisher Scoring iterations: 2
Correlation of Coefficients:
(Intercept)
b 0.07
As you can see in the summary output, you have 4 columns for the coefficients: the estimate, the standard error, the t value and the p-value. The t statistic (estimate / standard error) follows a t-distribution and has an associated p-value.
So since both $\beta_0$ (the intercept) and $\beta_1$ are random variables, a correlation between them can be calculated.
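The same number can be computed by hand from the covariance of the estimator, $\operatorname{Cov}(\hat\beta)=\sigma^2(X^TX)^{-1}$, since $\sigma^2$ cancels when forming a correlation. This sketch uses its own simulated predictor, not the R draws above:

```python
import math
import random

random.seed(1)
b = [random.gauss(0, 1) for _ in range(100)]  # simulated predictor column

n, Sb, Sbb = len(b), sum(b), sum(x * x for x in b)
# For X = [1, b], (X'X) = [[n, Sb], [Sb, Sbb]], so (X'X)^{-1} is
# proportional to [[Sbb, -Sb], [-Sb, n]]; sigma^2 and the determinant
# both cancel when forming the correlation of (intercept, slope).
corr = -Sb / math.sqrt(Sbb * n)
# corr is the number that summary(..., corr=TRUE) reports for (Intercept), b.
```

With a roughly mean-zero predictor, `corr` comes out small, matching the 0.07 reported above in spirit (the exact value depends on the random draw).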
|
48,583
|
What is the meaning of the correlation in glm
|
The correlation of estimates is a scaling parameter so that the standard deviation of the estimated probabilities are constant regardless of linear transformations of the predictor variable. They have no substantive meaning. They only set the metric of the covariance.
|
48,584
|
Method for a hypothesis testing non normal distribution number of retweets
|
You could use the Mann–Whitney U-test:
In statistics, the Mann–Whitney U test (also called the
Mann–Whitney–Wilcoxon (MWW), Wilcoxon rank-sum test (WRS), or
Wilcoxon–Mann–Whitney test) is a nonparametric test of the null
hypothesis that two samples come from the same population against an
alternative hypothesis, especially that a particular population tends
to have larger values than the other.
It can be applied on unknown distributions contrary to t-test which
has to be applied only on normal distributions, and it is nearly as
efficient as the t-test on normal distributions.
In Python, this test is available as scipy.stats.mannwhitneyu. Similarly to a t-test, you get the value of the U statistic and a p-value.
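A minimal usage sketch (the Poisson samples standing in for retweet counts are made up for illustration):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Two made-up samples of retweet counts (skewed, non-normal).
rng = np.random.default_rng(42)
group_a = rng.poisson(lam=3, size=200)  # e.g. tweets without a hashtag
group_b = rng.poisson(lam=5, size=200)  # e.g. tweets with a hashtag

# Two-sided test of the null that both samples come from the same population.
stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(stat, p_value)  # small p-value: the groups clearly differ here
```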
Hope it helps.
|
48,585
|
Nonrandomized better performance
|
The importance of randomized experiments is that we can get causal inference from them: by randomly assigning treatments to subjects, we know that the outcome should be approximately independent of any other covariates, other than treatment. Thus, the only systematic difference between treatment and control groups should be the treatment level.
So the key thing about this: randomized experiments allow us to remove unforeseen covariate effects (most importantly, confounding variables). On the other hand, if we know some of the important covariate effects a priori, we can get more power by evenly balancing them across the groups (or adjusting for them in our model).
Consider a treatment where we know that gender affects the outcome. If we don't account for gender in our sampling model, it should still be approximately balanced between the treatment and control arms. On the other hand, we could require that it is perfectly balanced across the different groups, which would reduce our standard error slightly. Even better, we could apply something like a block design and only compare males to males and females to females. But you should still randomize your treatment, just in a more structured way, i.e. 25 random males get treatment, 25 random males get control, etc.
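That structured randomization can be sketched in a few lines of Python (a toy illustration; the function and label names are our own, not from any particular library):

```python
import random

def block_randomize(subjects, seed=0):
    """subjects: iterable of (subject_id, block_label) pairs.

    Within each block (e.g. gender), shuffle the members and assign
    half to treatment and half to control."""
    rng = random.Random(seed)
    blocks = {}
    for subject_id, block in subjects:
        blocks.setdefault(block, []).append(subject_id)
    assignment = {}
    for members in blocks.values():
        rng.shuffle(members)          # randomization happens within the block
        half = len(members) // 2
        for sid in members[:half]:
            assignment[sid] = "treatment"
        for sid in members[half:]:
            assignment[sid] = "control"
    return assignment

# 50 males and 50 females: each arm ends up with exactly 25 of each gender.
subjects = [(i, "male") for i in range(50)] + [(50 + i, "female") for i in range(50)]
groups = block_randomize(subjects)
```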
In summary, the importance of randomized experiments is that the results are robust to confounding from unknown influential factors. However, power can be increased by accounting for known influential factors in your experimental design. In such a case, you will still want to randomize your treatments to account for unforeseen influential factors. In this light, the Wikipedia entry and Jayne are not disagreeing with each other.
|
48,586
|
Nonrandomized better performance
|
I think (this interpretation of) Jaynes' statement is false.
In theoretical computer science, for example, a lot of effort goes into derandomizing randomized algorithms. One (not very CS-y) example is the Miller–Rabin primality test. If you'd like to dig deeper into this, the computer science theory Stack Exchange is the place to go.
In computational physics and science, (Markov chain) Monte Carlo is often the only way to explore high dimensional probability distributions, where the cost of using a uniform mesh is exponential in the dimension (integration on the unit cube $[0,1]^d$ with a 0.1 mesh costs $\mathcal{O}( 10^{d} )$ evaluations).
Finally, in statistics - lets say you want to really know if smoking causes cancer. The simplest solution is to take two groups of people. Force one group to smoke and one to not smoke. Let them live (die?) and see which group outlives the other and by how much. This is a simple randomized experiment. But you can't do it in real life, so you have to use smarter statistical methods.
|
48,587
|
Nonrandomized better performance
|
An example that I'm familiar with: Sometimes experimenters want to design an experiment to minimize the maximum response variance over the design space (G-optimal designs). Early work used genetic algorithms to find such designs. Later work used meta-models and a more traditional optimization approach to find designs.
Returning to Jaynes' integration example, between Monte Carlo and fixed quadrature/cubature there are quasi-Monte Carlo techniques. Some of these are deterministic and some have scrambling applied to make them stochastic. This approach uses low-discrepancy sequences that are also space filling to give faster convergence than MC, and it is about as easy to apply to high dimensional problems as MC.
Speaking to the randomized experiment example in particular, I'll play Devil's advocate. There are situations where the size of an experiment is limited, knowledge exists about types of effects which cannot be included in the model, and a design is sought to minimize the bias of these effects on the parameter estimates. This can result in a completely deterministic design. I don't think I'd call this a victory of thought over randomization as much as a prudent approach to a situation that really just needs a larger experiment. Yet, it is a situation where thought yields a deterministic approach that is preferable to a random approach.
|
48,588
|
How can I estimate the sliding window standard deviation of a stream?
|
You might be able to adapt a technique dating from the dark ages when people calculated standard deviations with hand-operated calculators, so they kept running tallies of both the sums of the observations and of the sums of the squares of the observations. Quoting from the Wikipedia page on mean squared error
the "corrected sample variance" [is]:'
$$S^2_{n-1} =
\frac{1}{n-1}\sum_{i=1}^n\left(X_i-\overline{X}\,\right)^2
=\frac{1}{n-1}\left(\sum_{i=1}^n X_i^2-n\overline{X}^2\right)$$
With this formula you get the sample variance directly from the running sum and the running sum of squares.
So if you have an appropriate way to keep a smoothed sum, and it's also appropriate for a smoothed sum of squares, then your problem might be solved.
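For instance, keeping exponentially smoothed versions of both running tallies yields a sliding estimate of the standard deviation. This is a sketch of that idea, not a prescribed method: the class name and the smoothing factor alpha are our own choices, and the exponential weighting approximates a sliding window rather than implementing one exactly.

```python
class SmoothedStd:
    """Track an exponentially weighted mean and mean-of-squares,
    and derive a running standard deviation from them."""

    def __init__(self, alpha=0.05):
        self.alpha = alpha    # weight given to the newest observation
        self.mean = None      # smoothed E[X]
        self.mean_sq = None   # smoothed E[X^2]

    def update(self, x):
        if self.mean is None:  # first observation seeds both tallies
            self.mean, self.mean_sq = x, x * x
        else:
            a = self.alpha
            self.mean = (1 - a) * self.mean + a * x
            self.mean_sq = (1 - a) * self.mean_sq + a * x * x
        # Var(X) = E[X^2] - E[X]^2, clipped at 0 against rounding error.
        return max(self.mean_sq - self.mean ** 2, 0.0) ** 0.5

tracker = SmoothedStd(alpha=0.1)
for value in [5.0, 5.0, 5.0, 5.0]:
    sd = tracker.update(value)
# A constant stream has zero spread, so sd stays 0.
```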
|
48,589
|
How can I estimate the sliding window standard deviation of a stream?
|
I implemented the frugal streaming algorithm and made a small enhancement that improved the convergence and reduced the error. I am well satisfied with the results for quantiles: less than 5% error, 95% of the time. Using this to compute the first and third quartiles and other code that estimates the max and min, I then estimated the standard deviation using formula 13 in "Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range" by Wan, Wang and Tong in BMC Medical Research Methodology (2014).
Here is the C# code I wrote for quantile computations. I use a 3rd party library called FastRandom. You can replace FastRandom with the C# Random class and keep the same method calls without any problems.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Algorithms
{
/// <summary>
/// Maintain a running estimate of a quantile over a stream with very small memory requirements
/// using the algorithm frugal_2u found in:
/// http://arxiv.org/pdf/1407.1121v1.pdf
/// "Frugal Streaming for Estimating Quantiles: One (or two) memory suffices" by Ma, Muthukrishnan and Sandler (2014).
///
/// One can, for instance, track the median value of a stream of data, or the 68th percentile, or the third decile.
/// This estimate follows recent values of data; it is not an estimate over all time.
/// Thus if the quantile you are measuring changes, this will adapt and track the new value.
///
/// Caveat: The published algorithm uses integers. While this implementation uses doubles, the quantile values cannot
/// be resolved any finer than one, the minimum step size. To resolve to finer values would require small
/// changes to this algorithm and much testing to decide how to balance convergence speed with accuracy.
///
/// Usage:
///
/// // Let's track the median, which has quantile = 0.5.
/// var seed = 100; // Educated guess for the median.
/// var estimator = new FrugalQuantile(seed, 0.5, FrugalQuantile.LinearStepAdjuster);
/// IEnumerable data = ... your data ...;
/// foreach (var item in data) {
/// var newEstimate = estimator.Add(item);
/// // Do something with estimate...
/// }
///
/// Author: Paul A. Chernoch
/// </summary>
public class FrugalQuantile
{
#region Standard functions you can use for StepAdjuster.
/// <summary>
/// Best step adjuster found so far because it converges fast without overshooting.
/// Every time the step grows by an amount that increases by one:
/// 1, 2, 4, 7, 11, 16, 22, 29...
/// </summary>
public static Func<double, double> LinearStepAdjuster = oldStep => oldStep + 1;
/// <summary>
/// Step adjuster used in the published paper, which is good, but not as good as LinearStepAdjuster.
/// Every time the step increases by one:
/// 1, 2, 3, 4, 5, 6...
/// </summary>
public static Func<double, double> ConstantStepAdjuster = oldStep => 1;
#endregion
#region Input parameters
/// <summary>
/// Quantile whose estimate will be maintained.
/// If 0.5, the median will be estimated.
/// If 0.75, the third quartile will be estimated.
/// If 0.2, the second decile will be estimated.
/// etc...
/// </summary>
public double Quantile { get; set; }
/// <summary>
/// Function to dynamically adjust the step size based on the previous step size.
///
/// NOTE: Best function found so far:
/// StepAdjuster = step => step + 1;
/// </summary>
public Func<double, double> StepAdjuster { get; set; }
#endregion
#region Output parameters
/// <summary>
/// The running estimate of the value found at the given quantile.
///
/// This is the value returned by the most recent call to Add.
/// </summary>
public double Estimate { get; set; }
#endregion
#region Internal state
/// <summary>
/// Amount to add to or subtract from the current estimate, depending on whether our estimate is too low or too high.
///
/// As the algorithm proceeds, this is adjusted up and down to improve convergence.
/// </summary>
private double Step { get; set; }
/// <summary>
/// Tracks whether the previous adjustment was to increase the Estimate or decrease it.
///
/// If +1, the Estimate increased.
/// If -1, the Estimate decreased.
/// This should always have the value +1 or -1.
/// </summary>
private SByte Sign { get; set; }
/// <summary>
/// Random number generator.
///
/// Note: One could refactor to use the C# Random class instead. I prefer FastRandom.
/// </summary>
private FastRandom Rand { get; set; }
#endregion
#region Constructors
/// <summary>
/// Create a FrugalQuantile to track a running estimate of a quantile value.
/// </summary>
/// <param name="seed">Initial estimate for the quantile.
/// A good initial estimate permits more rapid convergence.</param>
/// <param name="quantile">Quantile to estimate, in the exclusive range [0,1].
/// The default is 0.5, the median.
/// </param>
/// <param name="stepAdjuster">Function that can update the step size to improve the rate of convergence.
/// Its parameter is the previous step size.
/// The default lambda for this parameter is good, but there are better functions, like this one:
/// stepAdjuster = step => step + 1
/// Researching the function best for your data is recommended.
/// </param>
public FrugalQuantile(double seed, double quantile = 0.5, Func<double,double> stepAdjuster = null)
{
if (quantile <= 0 || quantile >= 1)
throw new ArgumentOutOfRangeException("quantile", "Must be between zero and one, exclusive.");
Quantile = quantile;
Estimate = seed;
Step = 1;
Sign = 1;
// Default lambda for StepAdjuster shown below always returns a step change of 1.
// This default is per the published algorithm but testing shows a different function works much better:
// StepAdjuster = oldStep => oldStep + 1 (aka LinearStepAdjuster).
StepAdjuster = stepAdjuster ?? ConstantStepAdjuster;
Rand = new FastRandom();
}
#endregion
/// <summary>
/// Update the quantile Estimate to reflect the latest value arriving from the stream and return that estimate.
/// </summary>
/// <param name="item">Data Item arriving from the stream.
/// Note: This algorithm was designed for use on non-negative integers. Its accuracy or suitability
/// for negative values is not guaranteed.
/// </param>
/// <returns>The new Estimate.</returns>
public double Add(double item)
{
// This is implemented to resemble as close as possible the pseudo code for function frugal_2u
// on this page:
// http://research.neustar.biz/2013/09/16/sketch-of-the-day-frugal-streaming/
var m = Estimate;
var q = Quantile;
var f = StepAdjuster;
var random = Rand.NextDouble();
if (item > m && random > 1 - q) {
// Increment the step size if and only if the estimate keeps moving in
// the same direction. Step size is incremented by the result of applying
// the specified step function to the previous step size.
Step += (Sign > 0 ? 1 : -1) * f(Step);
// Increment the estimate by step size if step is positive. Otherwise,
// increment the step size by one.
m += Step > 0 ? Step : 1;
// Mark that the estimate increased this step
Sign = 1;
// If the estimate overshot the item in the stream, pull the estimate back
// and re-adjust the step size.
if (m > item) {
Step += (item - m);
m = item;
}
}
else if (item < m && random > q) {
// If the item is less than the estimate, follow all of the same steps as
// above, with signs reversed.
Step += (Sign < 0 ? 1 : -1) * f(Step);
m -= Step > 0 ? Step : 1;
Sign = -1;
if (m < item) {
Step += (m - item);
m = item;
}
}
// Damp down the step size to avoid oscillation.
if ((m - item) * Sign < 0 && Step > 1)
Step = 1;
Estimate = m;
return Estimate;
}
}
}
|
48,590
|
Which iterative algorithm lmer uses for REML estimation?
|
Neither of the above. The paper describing the algorithms used in lmer [Bates et al. J. Statistical Software (2015) 1-48, available via vignette("lmer",package="lme4")] specifies that the likelihood conditional on top-level variance-covariance parameters is computed by penalized least squares (section 3.6); the algorithm then uses a derivative-free nonlinear optimizer (often Powell's BOBYQA algorithm) to minimize the negative log-likelihood over the space of variance-covariance parameters.
|
48,591
|
Problems generating a sample from a custom distribution with log
|
We can generate random variates from this distribution by inverting it.
This means solving $1 - F(x) = q$ (which lies between $0$ and $1$, implying $a\gt 0$) for $x$. Notice that $\log^b(x)$ will be undefined or negative (which won't work) unless $x \gt 1$. Leave aside the case $b=0$ for the moment. The solutions are slightly different depending on the sign of $b$. When $b \lt 0$, let $$x=\exp(y)$$ and solve for $y \gt 0$:
$$q^{1/b} = (c \log^b(x) x^{-a})^{1/b} = c^{1/b} y\exp(-\frac{a}{b} y),$$
whence
$$-\frac{b}{a} W\left(-\frac{a}{b} \left(\frac{q}{c}\right)^{1/b}\right)= y.$$
$W$ is the primary branch of the Lambert $W$ function: when $w = u \exp(u)$ for $u \ge -1$, $W(w)=u$. The solid lines in the figure show this branch.
When $b\gt 0$, let $$x = \exp(-y)$$ and solve as before, yielding
$$\frac{b}{a} W\left(-\frac{a}{b} \left(\frac{q}{c}\right)^{1/b}\right)= y.$$
Because we are interested in the behavior as $x\to \infty$, which corresponds to $y\to -\infty$, the relevant branch is the one shown with the dashed lines in the figure. Notice that the argument of $W$ must lie in the interval $[-1/e, 0]$. This will happen only when $c$ is sufficiently large. Since $0 \le q \le 1$, this implies
$$c \ge \left(\frac{e a}{b}\right)^b.$$
Values along either branch of $W$ are readily computed using Newton-Raphson. Depending on the values of $a,b,$ and $c$ chosen, between one and a dozen iterations will be needed.
Finally, when $b=0$ the logarithmic term is $1$ and we can readily solve
$$x = \left(\frac{c}{q}\right)^{1/a} = \exp\left(-\frac{1}{a}\log\left(\frac{q}{c}\right)\right).$$
(In some sense the limiting value of $(q/c)^{1/b}/b$ gives the natural logarithm, as we would hope.)
In either case, to generate variates from this distribution, stipulate $a$, $b$, and $c$, then generate uniform variates in the range $[0,1]$ and substitute their values for $q$ in the appropriate formula.
Here are examples with $a=5$ and $b=\pm 2$ to illustrate. $10,000$ independent variates were drawn and summarized with histograms of $y$ and $x$. For negative $b$ (top), I chose a value of $c$ that gives a picture that is not terribly skewed. For positive $b$ (bottom), the most extreme possible value of $c$ was chosen. Shown for comparison are solid curves graphing the derivative of the distribution function $F$. The match is excellent in both cases.
Negative $b$
Positive $b$
Here is working code to compute $W$ in R, with an example showing its use. It is vectorized to perform Newton-Raphson steps in parallel for a large number of values of the argument, which is ideal for efficient generation of random variates.
(Mathematica, which generated the figures here, implements $W$ as ProductLog. The negative branch used here is the one with index $-1$ in Mathematica's numbering. It returns the same values in the examples given here, which are computed to at least 12 significant figures.)
W <- function(q, tol.x2=1e-24, tol.W2=1e-24, iter.max=15, verbose=FALSE) {
#
# Define the function and its derivative.
#
W.inverse <- function(z) z * exp(z)
W.inverse.prime <- function(z) exp(z) + W.inverse(z)
#
# Functions to take one Newton-Raphson step.
#
NR <- function(x, f, f.prime) x - f(x) / f.prime(x)
step <- function(x, q) NR(x, function(y) W.inverse(y) - q, W.inverse.prime)
#
# Pick a good starting value. Use the principal branch for positive
# arguments and its continuation (to large negative values) for
# negative arguments.
#
x.0 <- ifelse(q < 0, log(-q), log(q + 1))
#
# True zeros must be handled specially.
#
i.zero <- q == 0
q.names <- q
q[i.zero] <- NA
#
# Newton-Raphson iteration.
#
w.1 <- W.inverse(x.0)
i <- 0
if (verbose) x <- x.0
if (any(!i.zero, na.rm=TRUE)) {
while (i < iter.max) {
i <- i + 1
x.1 <- step(x.0, q)
if (verbose) x <- rbind(x, x.1)
if (mean((x.0/x.1 - 1)^2, na.rm=TRUE) <= tol.x2) break
w.1 <- W.inverse(x.1)
if (mean(((w.1 - q)/(w.1 + q))^2, na.rm=TRUE) <= tol.W2) break
x.0 <- x.1
}
}
x.0[i.zero] <- 0
w.1[i.zero] <- 0
rv <- list(W=x.0, # Values of Lambert W
W.inverse=w.1, # W * exp(W)
Iterations=i,
Code=ifelse(i < iter.max, 0, -1), # 0 for success
Tolerances=c(x2=tol.x2, W2=tol.W2))
names(rv$W) <- q.names # $
if (verbose) {
rownames(x) <- 1:nrow(x) - 1
rv$Steps <- x # $
}
return (rv)
}
#
# Test on both positive and negative arguments
#
q <- rbind(Positive = 10^seq(-3, 3, length.out=7),
Negative = -exp(seq(-(1+1e-15), -600, length.out=7)))
for (i in 1:nrow(q)) {
rv <- W(q[i, ], verbose=TRUE)
cat(rv$Iterations, " steps:", rv$W, "\n")
#print(rv$Steps, digits=13) # Shows the steps
#print(rbind(q[i, ], rv$W.inverse), digits=12) # Checks correctness
}
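As an independent sanity check on the inversion formula for the $b \lt 0$ branch, one can use SciPy's built-in Lambert $W$ instead of the Newton-Raphson code above (the parameter values here are arbitrary):

```python
import numpy as np
from scipy.special import lambertw

a, b, c = 5.0, -2.0, 1.0
x_true = 2.0                                  # any x > 1
q = c * np.log(x_true)**b * x_true**(-a)      # survival value 1 - F(x_true)

# Invert: y = -(b/a) * W(-(a/b) * (q/c)^(1/b)), then x = exp(y).
arg = -(a / b) * (q / c)**(1.0 / b)
y = -(b / a) * lambertw(arg, k=0).real        # principal branch for b < 0
x_rec = np.exp(y)                             # recovers x_true
```

For the $b \gt 0$ case the secondary branch, `lambertw(arg, k=-1)`, would be used instead (it corresponds to the branch indexed $-1$ in Mathematica's numbering mentioned above).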
|
Problems generating a sample from a custom distribution with log
|
We can generate random variates from this distribution by inverting it.
This means solving $1 - F(x) = q$ (which lies between $0$ and $1$, implying $a\gt 0$) for $x$. Notice that $\log^b(x)$ will be
|
Problems generating a sample from a custom distribution with log
We can generate random variates from this distribution by inverting it.
This means solving $1 - F(x) = q$ (which lies between $0$ and $1$, implying $a\gt 0$) for $x$. Notice that $\log^b(x)$ will be undefined or negative (which won't work) unless $x \gt 1$. Leave aside the case $b=0$ for the moment. The solutions are slightly different depending on the sign of $b$. When $b \lt 0$, let $$x=\exp(y)$$ and solve for $y \gt 0$:
$$q^{1/b} = (c \log^b(x) x^{-a})^{1/b} = c^{1/b} y\exp(-\frac{a}{b} y),$$
whence
$$-\frac{b}{a} W\left(-\frac{a}{b} \left(\frac{q}{c}\right)^{1/b}\right)= y.$$
$W$ is the primary branch of the Lambert $W$ function: when $w = u \exp(u)$ for $u \ge -1$, $W(w)=u$. The solid lines in the figure show this branch.
When $b\gt 0$, let $$x = \exp(-y)$$ and solve as before, yielding
$$\frac{b}{a} W\left(-\frac{a}{b} \left(\frac{q}{c}\right)^{1/b}\right)= y.$$
Because we are interested in the behavior as $x\to \infty$, which corresponds to $y\to -\infty$, the relevant branch is the one shown with the dashed lines in the figure. Notice that the argument of $W$ must lie in the interval $[-1/e, 0]$. This will happen only when $c$ is sufficiently large. Since $0 \le q \le 1$, this implies
$$c \ge \left(\frac{e a}{b}\right)^b.$$
Values along either branch of $W$ are readily computed using Newton-Raphson. Depending on the values of $a,b,$ and $c$ chosen, between one and a dozen iterations will be needed.
Finally, when $b=0$ the logarithmic term is $1$ and we can readily solve
$$x = \left(\frac{c}{q}\right)^{1/a} = \exp\left(-\frac{1}{a}\log\left(\frac{q}{c}\right)\right).$$
(In some sense the limiting value of $(q/c)^{1/b}/b$ gives the natural logarithm, as we would hope.)
In either case, to generate variates from this distribution, stipulate $a$, $b$, and $c$, then generate uniform variates in the range $[0,1]$ and substitute their values for $q$ in the appropriate formula.
Here are examples with $a=5$ and $b=\pm 2$ to illustrate. $10,000$ independent variates were drawn and summarized with histograms of $y$ and $x$. For negative $b$ (top), I chose a value of $c$ that gives a picture that is not terribly skewed. For positive $b$ (bottom), the most extreme possible value of $c$ was chosen. Shown for comparison are solid curves graphing the derivative of the distribution function $F$. The match is excellent in both cases.
Negative $b$
Positive $b$
Here is working code to compute $W$ in R, with an example showing its use. It is vectorized to perform Newton-Raphson steps in parallel for a large number of values of the argument, which is ideal for efficient generation of random variates.
(Mathematica, which generated the figures here, implements $W$ as ProductLog. The negative branch used here is the one with index $-1$ in Mathematica's numbering. It returns the same values in the examples given here, which are computed to at least 12 significant figures.)
W <- function(q, tol.x2=1e-24, tol.W2=1e-24, iter.max=15, verbose=FALSE) {
#
# Define the function and its derivative.
#
W.inverse <- function(z) z * exp(z)
W.inverse.prime <- function(z) exp(z) + W.inverse(z)
#
# Functions to take one Newton-Raphson step.
#
NR <- function(x, f, f.prime) x - f(x) / f.prime(x)
step <- function(x, q) NR(x, function(y) W.inverse(y) - q, W.inverse.prime)
#
# Pick a good starting value. Use the principal branch for positive
# arguments and its continuation (to large negative values) for
# negative arguments.
#
x.0 <- ifelse(q < 0, log(-q), log(q + 1))
#
# True zeros must be handled specially.
#
i.zero <- q == 0
q.names <- q
q[i.zero] <- NA
#
# Newton-Raphson iteration.
#
w.1 <- W.inverse(x.0)
i <- 0
if (verbose) x <- x.0
if (any(!i.zero, na.rm=TRUE)) {
while (i < iter.max) {
i <- i + 1
x.1 <- step(x.0, q)
if (verbose) x <- rbind(x, x.1)
if (mean((x.0/x.1 - 1)^2, na.rm=TRUE) <= tol.x2) break
w.1 <- W.inverse(x.1)
if (mean(((w.1 - q)/(w.1 + q))^2, na.rm=TRUE) <= tol.W2) break
x.0 <- x.1
}
}
x.0[i.zero] <- 0
w.1[i.zero] <- 0
rv <- list(W=x.0, # Values of Lambert W
W.inverse=w.1, # W * exp(W)
Iterations=i,
Code=ifelse(i < iter.max, 0, -1), # 0 for success
Tolerances=c(x2=tol.x2, W2=tol.W2))
names(rv$W) <- q.names # $
if (verbose) {
rownames(x) <- 1:nrow(x) - 1
rv$Steps <- x # $
}
return (rv)
}
#
# Test on both positive and negative arguments
#
q <- rbind(Positive = 10^seq(-3, 3, length.out=7),
Negative = -exp(seq(-(1+1e-15), -600, length.out=7)))
for (i in 1:nrow(q)) {
rv <- W(q[i, ], verbose=TRUE)
cat(rv$Iterations, " steps:", rv$W, "\n")
#print(rv$Steps, digits=13) # Shows the steps
#print(rbind(q[i, ], rv$W.inverse), digits=12) # Checks correctness
}
|
Problems generating a sample from a custom distribution with log
We can generate random variates from this distribution by inverting it.
This means solving $1 - F(x) = q$ (which lies between $0$ and $1$, implying $a\gt 0$) for $x$. Notice that $\log^b(x)$ will be
|
48,592
|
Limitation of LDA (latent dirichlet allocation)
|
Common LDA limitations:
Fixed K (the number of topics is fixed and must be known ahead of time)
Uncorrelated topics (Dirichlet topic distribution cannot capture correlations)
Non-hierarchical (in data-limited regimes hierarchical models allow sharing of data)
Static (no evolution of topics over time)
Bag of words (assumes words are exchangeable, sentence structure is not modeled)
Unsupervised (sometimes weak supervision is desirable, e.g. in sentiment analysis)
A number of these limitations have been addressed in papers that followed the original LDA work. Despite its limitations, LDA is central to topic modeling and has really revolutionized the field.
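The "uncorrelated topics" limitation can be seen numerically: the covariance of a Dirichlet has strictly negative off-diagonal entries, so topic proportions drawn from it can never be positively correlated (this is what motivated the correlated topic model). A small NumPy check, with arbitrary concentration parameters:

```python
import numpy as np

rng = np.random.default_rng(42)
alpha = np.array([0.5, 0.8, 1.2])           # hypothetical concentration parameters
theta = rng.dirichlet(alpha, size=100_000)  # simulated per-document topic weights

# Empirical covariance of the topic proportions.  Every off-diagonal entry is
# negative: Cov(theta_i, theta_j) = -alpha_i alpha_j / (a0^2 (a0 + 1)) for i != j,
# so a Dirichlet cannot make two topics positively correlated.
C = np.cov(theta.T)
a0 = alpha.sum()
C_theory = (np.diag(alpha) * a0 - np.outer(alpha, alpha)) / (a0**2 * (a0 + 1))
```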
|
Limitation of LDA (latent dirichlet allocation)
|
Common LDA limitations:
Fixed K (the number of topics is fixed and must be known ahead of time)
Uncorrelated topics (Dirichlet topic distribution cannot capture correlations)
Non-hierarchical (in dat
|
Limitation of LDA (latent dirichlet allocation)
Common LDA limitations:
Fixed K (the number of topics is fixed and must be known ahead of time)
Uncorrelated topics (Dirichlet topic distribution cannot capture correlations)
Non-hierarchical (in data-limited regimes hierarchical models allow sharing of data)
Static (no evolution of topics over time)
Bag of words (assumes words are exchangeable, sentence structure is not modeled)
Unsupervised (sometimes weak supervision is desirable, e.g. in sentiment analysis)
A number of these limitations have been addressed in papers that followed the original LDA work. Despite its limitations, LDA is central to topic modeling and has really revolutionized the field.
|
Limitation of LDA (latent dirichlet allocation)
Common LDA limitations:
Fixed K (the number of topics is fixed and must be known ahead of time)
Uncorrelated topics (Dirichlet topic distribution cannot capture correlations)
Non-hierarchical (in dat
|
48,593
|
Is the minimum of the Doksum ratio always unbounded?
|
It is geometrically obvious that shifting $G$ changes nothing and that rescaling will rescale the denominator by the same factor--it's merely a matter of keeping track of units of measurement. The following is a rigorous algebraic demonstration. Your conclusion follows immediately.
Let $a\gt 0$ and $b$ be real numbers and define $G_{(a,b)}$ to be the $a$-scaled, $b$-shifted version of $G$:
$$G_{(a,b)}(x) = G(ax + b).$$
Let $0 \le \alpha \le 1$. Compute that
$$(G_{(a,b)})^{-1}(\alpha) = \frac{G^{-1}(\alpha) - b}{a}$$
and (via the Chain Rule)
$$\frac{d}{dx}G_{(a,b)}(x) = a \left(\frac{d}{dx} G\right)(ax + b).$$
Thus
$$\left(\frac{d}{dx}G_{(a,b)}\right)\left(\left(G_{(a,b)}\right)^{-1}(\alpha)\right) = a \left(\frac{d}{dx}G\right)\left(G^{-1}(\alpha)\right).$$
This shows that shifting does not change the denominator of the Doksum ratio and scaling multiplies the denominator by $a$. As $a\to 0^{+}$, the ratio will grow without bound provided its numerator $F^\prime(x)$ exists and is positive.
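A quick numerical illustration, taking $G=\Phi$ (the standard normal CDF) so that $G_{(a,b)}(x)=\Phi(ax+b)$: the closed form $a\,\varphi(\Phi^{-1}(\alpha))$ for the denominator agrees with a finite-difference derivative (the values of $a$, $b$, and $\alpha$ are arbitrary).

```python
import numpy as np
from scipy.stats import norm

a, b, alpha = 0.25, 1.7, 0.3                 # arbitrary scale, shift, level

G_ab = lambda x: norm.cdf(a * x + b)         # the scaled, shifted CDF
x_alpha = (norm.ppf(alpha) - b) / a          # (G_{(a,b)})^{-1}(alpha)

# Closed form from the chain rule: a * G'(G^{-1}(alpha))
deriv_formula = a * norm.pdf(norm.ppf(alpha))

# Finite-difference check of d/dx G_{(a,b)} at x_alpha
h = 1e-6
deriv_numeric = (G_ab(x_alpha + h) - G_ab(x_alpha - h)) / (2 * h)
```

Shrinking $a$ toward $0$ shrinks `deriv_formula` proportionally, which is exactly how the Doksum ratio grows without bound.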
|
Is the minimum of the Doksum ratio always unbounded?
|
It is geometrically obvious that shifting $G$ changes nothing and that rescaling will rescale the denominator by the same factor--it's merely a matter of keeping track of units of measurement. The fo
|
Is the minimum of the Doksum ratio always unbounded?
It is geometrically obvious that shifting $G$ changes nothing and that rescaling will rescale the denominator by the same factor--it's merely a matter of keeping track of units of measurement. The following is a rigorous algebraic demonstration. Your conclusion follows immediately.
Let $a\gt 0$ and $b$ be real numbers and define $G_{(a,b)}$ to be the $a$-scaled, $b$-shifted version of $G$:
$$G_{(a,b)}(x) = G(ax + b).$$
Let $0 \le \alpha \le 1$. Compute that
$$(G_{(a,b)})^{-1}(\alpha) = \frac{G^{-1}(\alpha) - b}{a}$$
and (via the Chain Rule)
$$\frac{d}{dx}G_{(a,b)}(x) = a \left(\frac{d}{dx} G\right)(ax + b).$$
Thus
$$\left(\frac{d}{dx}G_{(a,b)}\right)\left(\left(G_{(a,b)}\right)^{-1}(\alpha)\right) = a \left(\frac{d}{dx}G\right)\left(G^{-1}(\alpha)\right).$$
This shows that shifting does not change the denominator of the Doksum ratio and scaling multiplies the denominator by $a$. As $a\to 0^{+}$, the ratio will grow without bound provided its numerator $F^\prime(x)$ exists and is positive.
|
Is the minimum of the Doksum ratio always unbounded?
It is geometrically obvious that shifting $G$ changes nothing and that rescaling will rescale the denominator by the same factor--it's merely a matter of keeping track of units of measurement. The fo
|
48,594
|
Sum of squared Negative Binomial probability masses
|
Take two independent Poisson random variables $X$ and $X'$, with means $\lambda$ and $\lambda'$.
The formula answering your question in the Poisson case is a particular case of the identity $$\Pr(X = X' \mid \lambda, \lambda') = \exp(-\lambda)\exp(-\lambda')I_0(2\sqrt{\lambda}{\sqrt{\lambda'}}).$$
> lambda <- 1; lambdaa <- 2
> sum(dpois(0:100,lambda)*dpois(0:100,lambdaa))
[1] 0.2117121
> gsl::bessel_I0(2*sqrt(lambda*lambdaa)) * exp(-lambda-lambdaa)
[1] 0.2117121
Your problem is the same as calculating $\int\int\Pr(X= X' \mid \lambda, \lambda') f_{a,b}(\lambda)f_{a,b}(\lambda')d\lambda d\lambda'$ where $f_{a,b}$ is the $\Gamma(a,b)$ pdf, setting $r=a$ and $p=\frac{b}{1+b}$ with your notations, because of the link between the negative binomial distribution and the Poisson-Gamma distribution.
Let's start by $$\int \exp(-\lambda)I_0(2\sqrt{\lambda}{\sqrt{\lambda'}})f_{a,b}(\lambda)d\lambda = \frac{b^a}{\Gamma(a)}\int \lambda^{a-1}\exp\bigl(-(b+1)\lambda\bigr)I_0(2\sqrt{\lambda}{\sqrt{\lambda'}})d\lambda.$$
According to Mathematica this is equal to
$$
{\left(\frac{b}{1+b}\right)}^a {}_1\!F_1\left(a, 1, \frac{\lambda'}{b+1}\right)
$$
where ${}_1\!F_1$ is the Kummer hypergeometric function.
Now we can even get something for
$$
\begin{multline}
\int \exp(-\lambda'){}_1\!F_1\left(a, 1, \frac{\lambda'}{b+1}\right)f_{a',b'}(\lambda')d\lambda' \\
= \frac{{b'}^{a'}}{\Gamma(a')}\int {\lambda'}^{a'-1}\exp\bigl(-(b'+1)\lambda'\bigr) {{}_1\!F_1}\left(a, 1, \frac{\lambda'}{b+1}\right)d\lambda'.
\end{multline}
$$
Indeed, Mathematica gives
$$
{\left(\frac{b'}{1+b'}\right)}^{a'}
{}_2\!F_1\left(a, a', 1, \frac{1}{(b+1)(b'+1)}\right)
$$
where ${}_2\!F_1$ is the Gauss hypergeometric function.
The final result is beautiful:
$$
{\left(\frac{b}{1+b}\right)}^{a}{\left(\frac{b'}{1+b'}\right)}^{a'}{}_2\!F_1\left(a, a', 1, \frac{1}{(b+1)(b'+1)}\right),
$$
and even a bit more beautiful with your notations:
$$
p^{a}{p'}^{a'}{}_2\!F_1\left(a, a', 1, (1-p)(1-p')\right)
$$
Check:
> a <- 2; A <- 3; b <- 5; B <- 8
> (b/(1+b))^a*(B/(1+B))^A*gsl::hyperg_2F1(a,A,1,1/(b+1)/(B+1))
[1] 0.5450618
> sum(dnbinom(0:100, a, b/(1+b))*dnbinom(0:100, A, B/(1+B)))
[1] 0.5450618
In the special case you are interested in, the sum of squares is
$$
p^{2a}{}_2\!F_1\left(a, a, 1, {(1-p)}^2\right),
$$
and the second-order Renyi entropy is
$$
-2a \log p - \log {}_2\!F_1\left(a, a, 1, {(1-p)}^2\right).
$$
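The same check can be reproduced in Python (SciPy's `nbinom.pmf(k, size, prob)` matches R's `dnbinom`, and `hyp2f1` is the Gauss hypergeometric function; the sum is truncated at $k=200$, far beyond where the masses are negligible):

```python
import numpy as np
from scipy.stats import nbinom
from scipy.special import hyp2f1

a, A = 2, 3
p, P = 5 / 6, 8 / 9                  # p = b/(1+b) with b = 5, p' with b' = 8

k = np.arange(200)
lhs = np.sum(nbinom.pmf(k, a, p) * nbinom.pmf(k, A, P))
rhs = p**a * P**A * hyp2f1(a, A, 1, (1 - p) * (1 - P))

# Special case of interest: sum of squared masses, p^{2a} * 2F1(a, a, 1, (1-p)^2)
sq = np.sum(nbinom.pmf(k, a, p)**2)
sq_formula = p**(2 * a) * hyp2f1(a, a, 1, (1 - p)**2)
```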
|
Sum of squared Negative Binomial probability masses
|
Take two independent Poisson random variables $X$ and $X'$, with means $\lambda$ and $\lambda'$.
The formula answering your question in the Poisson case is a particular case of the identity $$\Pr(X
|
Sum of squared Negative Binomial probability masses
Take two independent Poisson random variables $X$ and $X'$, with means $\lambda$ and $\lambda'$.
The formula answering your question in the Poisson case is a particular case of the identity $$\Pr(X = X' \mid \lambda, \lambda') = \exp(-\lambda)\exp(-\lambda')I_0(2\sqrt{\lambda}{\sqrt{\lambda'}}).$$
> lambda <- 1; lambdaa <- 2
> sum(dpois(0:100,lambda)*dpois(0:100,lambdaa))
[1] 0.2117121
> gsl::bessel_I0(2*sqrt(lambda*lambdaa)) * exp(-lambda-lambdaa)
[1] 0.2117121
Your problem is the same as calculating $\int\int\Pr(X= X' \mid \lambda, \lambda') f_{a,b}(\lambda)f_{a,b}(\lambda')d\lambda d\lambda'$ where $f_{a,b}$ is the $\Gamma(a,b)$ pdf, setting $r=a$ and $p=\frac{b}{1+b}$ with your notations, because of the link between the negative binomial distribution and the Poisson-Gamma distribution.
Let's start by $$\int \exp(-\lambda)I_0(2\sqrt{\lambda}{\sqrt{\lambda'}})f_{a,b}(\lambda)d\lambda = \frac{b^a}{\Gamma(a)}\int \lambda^{a-1}\exp\bigl(-(b+1)\lambda\bigr)I_0(2\sqrt{\lambda}{\sqrt{\lambda'}})d\lambda.$$
According to Mathematica this is equal to
$$
{\left(\frac{b}{1+b}\right)}^a {}_1\!F_1\left(a, 1, \frac{\lambda'}{b+1}\right)
$$
where ${}_1\!F_1$ is the Kummer hypergeometric function.
Now we can even get something for
$$
\begin{multline}
\int \exp(-\lambda'){}_1\!F_1\left(a, 1, \frac{\lambda'}{b+1}\right)f_{a',b'}(\lambda')d\lambda' \\
= \frac{{b'}^{a'}}{\Gamma(a')}\int {\lambda'}^{a'-1}\exp\bigl(-(b'+1)\lambda'\bigr) {{}_1\!F_1}\left(a, 1, \frac{\lambda'}{b+1}\right)d\lambda'.
\end{multline}
$$
Indeed, Mathematica gives
$$
{\left(\frac{b'}{1+b'}\right)}^{a'}
{}_2\!F_1\left(a, a', 1, \frac{1}{(b+1)(b'+1)}\right)
$$
where ${}_2\!F_1$ is the Gauss hypergeometric function.
The final result is beautiful:
$$
{\left(\frac{b}{1+b}\right)}^{a}{\left(\frac{b'}{1+b'}\right)}^{a'}{}_2\!F_1\left(a, a', 1, \frac{1}{(b+1)(b'+1)}\right),
$$
and even a bit more beautiful with your notations:
$$
p^{a}{p'}^{a'}{}_2\!F_1\left(a, a', 1, (1-p)(1-p')\right)
$$
Check:
> a <- 2; A <- 3; b <- 5; B <- 8
> (b/(1+b))^a*(B/(1+B))^A*gsl::hyperg_2F1(a,A,1,1/(b+1)/(B+1))
[1] 0.5450618
> sum(dnbinom(0:100, a, b/(1+b))*dnbinom(0:100, A, B/(1+B)))
[1] 0.5450618
In the special case you are interested in, the sum of squares is
$$
p^{2a}{}_2\!F_1\left(a, a, 1, {(1-p)}^2\right),
$$
and the second-order Renyi entropy is
$$
-2a \log p - \log {}_2\!F_1\left(a, a, 1, {(1-p)}^2\right).
$$
|
Sum of squared Negative Binomial probability masses
Take two independent Poisson random variables $X$ and $X'$, with means $\lambda$ and $\lambda'$.
The formula answering your question in the Poisson case is a particular case of the identity $$\Pr(X
|
48,595
|
Softmax regression: Intuition about why distribution of $y$ is in terms of $e^{\theta^Tx}$ as opposed to just $\theta^Tx$
|
You need exponentiation to get rid of negative values: raising a positive base (here $e$) to any power always yields a positive result. For a large negative exponent the result is merely small, while for a positive one it is large and grows exponentially.
By using softmax you will never get a negative probability nor a probability higher than 1, and you will never divide by zero when calculating it
$$\frac{ e^{\Theta_{i}^T x }}{ \sum_{j} e^{\Theta_{j}^T x } }$$
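A minimal, numerically stable implementation of the formula above (subtracting the maximum before exponentiating leaves the ratio unchanged but avoids overflow):

```python
import numpy as np

def softmax(z):
    """Map unbounded scores z to strictly positive probabilities summing to 1."""
    z = z - np.max(z)     # stability shift; cancels in the ratio
    e = np.exp(z)
    return e / e.sum()

p = softmax(np.array([-5.0, 0.0, 5.0]))
# Every entry lies in (0, 1), the entries sum to 1, and the largest score gets
# the largest probability -- none of which holds for plain z / z.sum().
```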
|
Softmax regression: Intuition about why distribution of $y$ is in terms of $e^{\theta^Tx}$ as oppose
|
You need power to get rid of negative values. When you raise positive number to the power - you will always get positive value. For negative power - the result is just small and for positive - it's bi
|
Softmax regression: Intuition about why distribution of $y$ is in terms of $e^{\theta^Tx}$ as opposed to just $\theta^Tx$
You need exponentiation to get rid of negative values: raising a positive base (here $e$) to any power always yields a positive result. For a large negative exponent the result is merely small, while for a positive one it is large and grows exponentially.
By using softmax you will never get a negative probability nor a probability higher than 1, and you will never divide by zero when calculating it
$$\frac{ e^{\Theta_{i}^T x }}{ \sum_{j} e^{\Theta_{j}^T x } }$$
|
Softmax regression: Intuition about why distribution of $y$ is in terms of $e^{\theta^Tx}$ as oppose
You need power to get rid of negative values. When you raise positive number to the power - you will always get positive value. For negative power - the result is just small and for positive - it's bi
|
48,596
|
Softmax regression: Intuition about why distribution of $y$ is in terms of $e^{\theta^Tx}$ as opposed to just $\theta^Tx$
|
There is an intuitive definition. I tried to explain the softmax in this answer. To put it simply, you are interpreting the unbounded $\theta_i^Tx$ as log-odds, and the softmax converts them to probabilities in $[0,1]$. Your formula has no such interpretation.
|
Softmax regression: Intuition about why distribution of $y$ is in terms of $e^{\theta^Tx}$ as oppose
|
There is an intuitive definition. I tried to explain the softmax in this answer. To put it simply, you are interpreting the unbounded $\theta_i^Tx$ as log-odds, and the softmax converts them to prob
|
Softmax regression: Intuition about why distribution of $y$ is in terms of $e^{\theta^Tx}$ as opposed to just $\theta^Tx$
There is an intuitive definition. I tried to explain the softmax in this answer. To put it simply, you are interpreting the unbounded $\theta_i^Tx$ as log-odds, and the softmax converts them to probabilities in $[0,1]$. Your formula has no such interpretation.
|
Softmax regression: Intuition about why distribution of $y$ is in terms of $e^{\theta^Tx}$ as oppose
There is an intuitive definition. I tried to explain the softmax in this answer. To put it simply, you are interpreting the unbounded $\theta_i^Tx$ as log-odds, and the softmax converts them to prob
|
48,597
|
Understanding a characterization of minimal sufficient statistics
|
The symbols $\boldsymbol{x,y}$ refer to data. Associated with each possible value $v$ of the statistic $S$ is a collection of possible data values $S^{-1}(v)$ for which $S$ has the value $v$. Since $T$ has the same value (call it $w$) on every such dataset, we may define $H(v) = w$.
The case $S(\boldsymbol{x}) \ne S(\boldsymbol{y})$ is irrelevant to the theorem.
Let's interpret the theorem. Say that two datasets $\boldsymbol{x,y}$ are equivalent if the relative likelihood function
$$\theta \to \frac{L(\theta,\boldsymbol{x})}{L(\theta,\boldsymbol{y})}$$
is constant. This means that any analysis based on comparing likelihoods (for different values of $\theta$) will not make any distinction between any two equivalent datasets. The theorem informs us that a minimal sufficient statistic will not ever distinguish between two equivalent datasets (that is, it must have the same value on each).
The proof of the theorem proceeds by noting that any two datasets having the same value of $S$ must be equivalent (provided that $S$ is sufficient) and therefore $T$ will have the same value on those datasets.
We might picture this by supposing that this equivalence relation among datasets partitions $\Omega$ into separate, non-overlapping components, each being a collection of equivalent datasets. Sufficient statistics have different values on different components: this guarantees that they can discriminate among inequivalent datasets. However, their values within any given component might vary (thereby discriminating among some equivalent datasets, too). Any minimal sufficient statistic, though, will be constant on each component: it will not discriminate between two equivalent datasets.
The following is a formal mathematical demonstration that $T$ is a function of $S$.
Let the set of all possible such data be $\Omega$. A statistic, such as $S$ or $T$, assigns some kind of mathematical object to each dataset $\boldsymbol{x}\in\Omega$--such as a number or vector--that we can calculate with. The details don't matter, so suppose $S$ assigns objects in a set $V$ and $T$ assigns objects in a set $W$. If the function $H$ exists, then it is a map from $V$ to $W$ (depending on $S$, by the way):
$$\begin{array}{rrcll}
\; &&\Omega \\
\; &^S\swarrow & &\searrow^T \\
\; V & &\xrightarrow{H} &&W \\
\end{array}$$
Given $T$ and $S$ as in the question, what we know is that
For all $\boldsymbol{x,y}\in\Omega$, $S(\boldsymbol{x}) = S(\boldsymbol{y})$ implies $T(\boldsymbol{x}) = T(\boldsymbol{y})$.
From this we would like to deduce the existence of a function $H:V\to W$ such that $T = H\circ S$: that is, $T(\boldsymbol{x}) = H(S(\boldsymbol{x}))$ for all $\boldsymbol{x}\in\Omega$.
$H$ can be found with an explicit construction. One way begins with a function $H^{*}$ defined on $V$ whose values are subsets of $W$ defined as
$$H^{*}(v) = T(S^{-1}(v)) = \{T(\boldsymbol{x})\,|\, S(\boldsymbol{x}) = v\}.$$
I claim that all elements of $H^{*}(v)$ are equal, no matter what $v\in V$ might be. To prove the claim let $u, w\in H^{*}(v)$. By definition, this means there are $\boldsymbol{x,y}\in\Omega$ such that $u=T(\boldsymbol{x})$ and $w=T(\boldsymbol{y})$ and $S(\boldsymbol{x}) = S(\boldsymbol{y}) = v$. The latter equality implies $u = T(\boldsymbol{x}) = T(\boldsymbol{y}) = w$, proving the claim.
This claim enables us to define $H(v)$ whenever $H^{*}(v)$ is nonempty: it's the unique element of $H^{*}(v)$. To complete the definition of $H$, pick an arbitrary $w_0\in W$ and set $H(v) = w_0$ when $H^{*}(v)$ is empty. Formally,
$$H(v) = \left\{ \begin{array}{ll}
w & \text{if } H^{*}(v) = \{w\}\\
w_0 & \text{if } H^{*}(v) = \emptyset.
\end{array} \right. $$
|
Understanding a characterization of minimal sufficient statistics
|
The symbols $\boldsymbol{x,y}$ refer to data. Associated with each possible value $v$ of the statistic $S$ is a collection of possible data values $S^{-1}(v)$ for which $S$ has the value $v$. Since
|
Understanding a characterization of minimal sufficient statistics
The symbols $\boldsymbol{x,y}$ refer to data. Associated with each possible value $v$ of the statistic $S$ is a collection of possible data values $S^{-1}(v)$ for which $S$ has the value $v$. Since $T$ has the same value (call it $w$) on every such dataset, we may define $H(v) = w$.
The case $S(\boldsymbol{x}) \ne S(\boldsymbol{y})$ is irrelevant to the theorem.
Let's interpret the theorem. Say that two datasets $\boldsymbol{x,y}$ are equivalent if the relative likelihood function
$$\theta \to \frac{L(\theta,\boldsymbol{x})}{L(\theta,\boldsymbol{y})}$$
is constant. This means that any analysis based on comparing likelihoods (for different values of $\theta$) will not make any distinction between any two equivalent datasets. The theorem informs us that a minimal sufficient statistic will not ever distinguish between two equivalent datasets (that is, it must have the same value on each).
The proof of the theorem proceeds by noting that any two datasets having the same value of $S$ must be equivalent (provided that $S$ is sufficient) and therefore $T$ will have the same value on those datasets.
We might picture this by supposing that this equivalence relation among datasets partitions $\Omega$ into separate, non-overlapping components, each being a collection of equivalent datasets. Sufficient statistics have different values on different components: this guarantees that they can discriminate among inequivalent datasets. However, their values within any given component might vary (thereby discriminating among some equivalent datasets, too). Any minimal sufficient statistic, though, will be constant on each component: it will not discriminate between two equivalent datasets.
The following is a formal mathematical demonstration that $T$ is a function of $S$.
Let the set of all possible such data be $\Omega$. A statistic, such as $S$ or $T$, assigns some kind of mathematical object to each dataset $\boldsymbol{x}\in\Omega$--such as a number or vector--that we can calculate with. The details don't matter, so suppose $S$ assigns objects in a set $V$ and $T$ assigns objects in a set $W$. If the function $H$ exists, then it is a map from $V$ to $W$ (depending on $S$, by the way):
$$\begin{array}{rrcll}
\; &&\Omega \\
\; &^S\swarrow & &\searrow^T \\
\; V & &\xrightarrow{H} &&W \\
\end{array}$$
Given $T$ and $S$ as in the question, what we know is that
For all $\boldsymbol{x,y}\in\Omega$, $S(\boldsymbol{x}) = S(\boldsymbol{y})$ implies $T(\boldsymbol{x}) = T(\boldsymbol{y})$.
From this we would like to deduce the existence of a function $H:V\to W$ such that $T = H\circ S$: that is, $T(\boldsymbol{x}) = H(S(\boldsymbol{x}))$ for all $\boldsymbol{x}\in\Omega$.
$H$ can be found with an explicit construction. One way begins with a function $H^{*}$ defined on $V$ whose values are subsets of $W$ defined as
$$H^{*}(v) = T(S^{-1}(v)) = \{T(\boldsymbol{x})\,|\, S(\boldsymbol{x}) = v\}.$$
I claim that all elements of $H^{*}(v)$ are equal, no matter what $v\in V$ might be. To prove the claim let $u, w\in H^{*}(v)$. By definition, this means there are $\boldsymbol{x,y}\in\Omega$ such that $u=T(\boldsymbol{x})$ and $w=T(\boldsymbol{y})$ and $S(\boldsymbol{x}) = S(\boldsymbol{y}) = v$. The latter equality implies $u = T(\boldsymbol{x}) = T(\boldsymbol{y}) = w$, proving the claim.
This claim enables us to define $H(v)$ whenever $H^{*}(v)$ is nonempty: it's the unique element of $H^{*}(v)$. To complete the definition of $H$, pick an arbitrary $w_0\in W$ and set $H(v) = w_0$ when $H^{*}(v)$ is empty. Formally,
$$H(v) = \left\{ \begin{array}{ll}
w & \text{if } H^{*}(v) = \{w\}\\
w_0 & \text{if } H^{*}(v) = \emptyset.
\end{array} \right. $$
|
Understanding a characterization of minimal sufficient statistics
The symbols $\boldsymbol{x,y}$ refer to data. Associated with each possible value $v$ of the statistic $S$ is a collection of possible data values $S^{-1}(v)$ for which $S$ has the value $v$. Since
|
48,598
|
Bayesian neural networks: very multimodal posterior?
|
Regarding the question of how the non-identifiability can be addressed, I recommend having a look at Improving the Identifiability of Neural Networks for Bayesian Inference, which "eliminates" the (discrete) combinatorial non-identifiability problem by ordering the nodes (as one of the comments suspected). The paper also addresses a continuous non-identifiability problem (related to the rescaling invariance of ReLUs) and tries to solve it, too. Very similar problems are encountered in Bayesian mixture models and can likewise be "solved"; cf. the excellent tutorial Identifying Bayesian Mixture Models.
Unfortunately, it appears that even after accounting for the above, the risk of multiple modes remains, as discussed in Why are Bayesian Neural Networks multi-modal?.
I can also recommend reading section 3.7 of the paper “Issues in Bayesian Analysis of Neural Network Models”, which discusses mechanisms leading to multi-modal behaviour. Besides the ones already mentioned, it also discusses a problem the authors refer to as "node duplication".
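The combinatorial ("label switching") part of the problem is easy to demonstrate: relabelling the hidden units of a network leaves its output function unchanged, so every posterior mode has a copy for each permutation of the units. A minimal sketch, assuming a one-hidden-layer tanh network (an illustrative setup, not taken from the paper):

```python
import numpy as np

# Permuting hidden units -- i.e. permuting the rows of W1 and the
# entries of b1 and w2 consistently -- leaves the network function
# unchanged, hence the combinatorial non-identifiability.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # 3 inputs -> 4 hidden units
w2, b2 = rng.normal(size=4), rng.normal()              # 4 hidden -> scalar output

def f(x, W1, b1, w2, b2):
    return np.tanh(x @ W1.T + b1) @ w2 + b2

perm = rng.permutation(4)
x = rng.normal(size=(10, 3))
assert np.allclose(f(x, W1, b1, w2, b2),
                   f(x, W1[perm], b1[perm], w2[perm], b2))
```

With 4 hidden units there are already $4! = 24$ equivalent parameter settings, which is what node ordering is meant to collapse.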
|
48,599
|
Bayesian neural networks: very multimodal posterior?
|
In Keras the posterior distribution is specified explicitly. If multimodality of the posterior is expected, it has to be specified.
Here is one example, http://ezcodesample.com/Multimodal.html, showing how to change a unimodal posterior into a multimodal one.
That means Keras or TensorFlow give you the best parameters for the distribution that you specify in your model, not for the one that actually holds. It is your responsibility to specify it correctly.
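The point can be illustrated without Keras at all: if the family you specify is unimodal, the fitted parameters wash out the modes no matter how good the samples are. A sketch in plain NumPy (the split-at-zero component assignment is a toy stand-in for proper mixture fitting):

```python
import numpy as np

# Samples from a bimodal "posterior", fitted with (i) a single Gaussian
# and (ii) a two-component mixture. Plain NumPy illustration, not
# Keras/TensorFlow API.
rng = np.random.default_rng(1)
samples = np.concatenate([rng.normal(-3, 0.5, 5000),
                          rng.normal(+3, 0.5, 5000)])

mu, sigma = samples.mean(), samples.std()   # unimodal family: one Gaussian
left = samples[samples < 0].mean()          # mixture family: one mean per mode
right = samples[samples >= 0].mean()        # (toy split at zero)

assert abs(mu) < 0.2 and sigma > 2.5                  # modes washed into one wide bump
assert abs(left + 3) < 0.2 and abs(right - 3) < 0.2   # modes preserved
```

The single Gaussian lands between the modes with a huge variance; the mixture recovers them, which is why the specified family matters.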
|
48,600
|
Is it possible for test error to be lower than training error
|
Totally possible, though it probably means that you aren't training quite as much as you could be. Typically when you look at test/train error over time you get a graph like this:
[figure omitted: training and test error over training time]
The test/train stages can be (very broadly) categorized as follows:
At first the test and train errors are noisy, but they are strongly correlated. This means you haven't really fit the problem yet.
As training goes on, both start to decrease, but the training error decreases more quickly than the test error. This means you're approaching a good level of fit.
Eventually the test-set error starts to increase while the training-set error continues to decrease. This means you have officially started to overfit.
There are a lot of ways of dealing with overfitting if that becomes a problem, but your goal in picking an algorithm and training it should be to reach the lowest test error, which typically happens somewhere in the second stage.
If your test error is lower than your training error, you are likely still very far left on that graph. There are three main options for resolving the problem:
Use an algorithm better suited to small datasets (hard to tell without knowing your problem, but Naive Bayes is usually a good small-data choice)
Change your model's settings to fit the training set more strongly (e.g. increasing the learning rate)
Get more data
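The stages described above can be reproduced with a toy experiment: fit polynomials of increasing degree to noisy data and compare training and test error. This is an illustrative setup of my own, not tied to any particular learner:

```python
import warnings
import numpy as np

warnings.simplefilter("ignore")  # high-degree polyfit may emit RankWarning

# Training error keeps falling as model capacity grows, while test
# error stops tracking it once the model starts fitting noise.
rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(-1, 1, n)
    return x, np.sin(3 * x) + rng.normal(0, 0.3, n)

xtr, ytr = make_data(30)     # small training set
xte, yte = make_data(200)    # held-out test set

def errors(degree):
    coef = np.polyfit(xtr, ytr, degree)
    train = np.mean((np.polyval(coef, xtr) - ytr) ** 2)
    test = np.mean((np.polyval(coef, xte) - yte) ** 2)
    return train, test

errs = {d: errors(d) for d in (1, 5, 20)}
assert errs[20][0] < errs[1][0]   # train error falls with capacity
assert errs[20][1] > errs[20][0]  # overfit model fits better than it generalizes
```

Degree 1 corresponds to the far-left regime the answer describes, and degree 20 to the overfitting stage where test and train error diverge.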
|