How to model the probability of a truth claim given an arrangement of eyewitness accounts supporting specific instances of that claim?
In my opinion, the most important thing to consider is whether there are ways in which eyewitness accounts may fail to capture the full range of possible outcomes of the event (or of its absence), i.e., whether they are statistically biased in any way, and if so, how. Of course, "eyewitness" can be replaced with any non-human objective measurement obtained through a measurement device or procedure, and the question remains whether the measurement is biased, distorted, or noisy, and if so, in what way.

A simple example to illustrate the problem is survivorship bias, i.e., a censored dataset. Let's assume, for argument's sake, continuing the OP's example, that there indeed are aliens and that they abduct humans. But furthermore, let's assume that their abductions are very common, and that the vast majority of abducted humans are never returned to Earth. To make matters worse, the aliens can analyze human relationships, and they make an effort to abduct humans with very few relationships and relatives, so as not to draw attention to the missing.

In this hypothetical example, there will be zero eyewitness accounts from humans abducted and not returned to Earth, even though by assumption this is the majority event. The very few abducted humans who survive to report the experience are minuscule in number, obviously, compared with the humans living on Earth who have not been abducted (as far as they're aware...).

So, what would we be able to infer from existing eyewitness accounts if we ignore survivorship bias? Probably that the existence of alien abductions is questionable. However, if we do take into account the censored dataset caused by survivorship bias, we would attach non-negligible probability to alien abductions. Of course, this holds only if we accept this preposterous hypothetical example and attach non-zero prior probability to its being true...

The classic example of survivorship bias, deciding where to reinforce aircraft based on the locations of bullet holes in the planes that returned, is most illuminating: https://en.wikipedia.org/wiki/Survivorship_bias#In_the_military
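The effect of such censoring can be sketched with a toy simulation (every rate below is invented purely for illustration): a common event is observed only through the tiny fraction of cases that "survive" to be reported, so the naive rate estimated from reports is orders of magnitude below the truth, while an estimate that models the censoring recovers it.

```python
import random

random.seed(0)

N = 1_000_000       # hypothetical population size
p_event = 0.30      # true rate of the (mostly unobserved) event -- assumed
p_report = 0.001    # chance an affected individual survives to report it

affected = sum(random.random() < p_event for _ in range(N))
reports = sum(random.random() < p_report for _ in range(affected))

naive_rate = reports / N                   # what raw eyewitness counts suggest
corrected_rate = reports / (N * p_report)  # accounting for the censoring

print(naive_rate, corrected_rate)
```

Note that the correction only works because the reporting probability is known here; hiding exactly that quantity is what makes survivorship bias so treacherous in practice.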
PCA as a Cure for the Curse of Dimensionality
In a way, PCA does not use the outcome you are trying to model/predict; i.e., it is an unsupervised technique. From that perspective, its parameters are not parameters that get trained in your supervised model. Of course, using PCA for dimensionality reduction is not in any way guaranteed to preserve "the signal" for the outcome of interest that may be in the data (see e.g. this previous question for a discussion). That is, it may well be preferable to select the most important variables based on subject-matter expertise, if there is a decent amount of prior knowledge. There are of course also other techniques/alternatives to PCA (e.g. various variants of PCA, UMAP, t-SNE, training a denoising autoencoder on the features, and so on). However, a lot may also depend on your goals. Are you trying to interpret the model coefficients (if so, PCA does make that harder)? Are you trying to create a prediction model that is meant to achieve a certain level of performance (if so, interpretability under PCA may be less of a concern, but working with too little data may be even more of a problem)? Or are you trying to do something else?
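The unsupervised nature of the projection can be made concrete with a minimal NumPy sketch (synthetic data; PCA is done by hand via the SVD rather than any particular library): the components are computed from the centered features alone, and the outcome only enters in the regression step that follows.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))                    # 200 samples, 30 features
y = X[:, 0] - 2.0 * X[:, 1] + rng.normal(size=200)

# PCA via SVD of the centered feature matrix -- y is never touched here.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T                                 # scores on the top 3 components

# Only now does the supervised step begin, on the reduced features.
design = np.column_stack([np.ones(len(Z)), Z])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
```

Whether those 3 components happen to carry the signal for this particular y is, as noted above, not guaranteed by the construction.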
PCA as a Cure for the Curse of Dimensionality
"But then I have computed 30×30 elements for my eigenvector matrix and 3 parameters for my model; I have fitted 900+3 parameters to the data."

The possible solutions for the parameters relating to the features are strongly limited: you are effectively fitting only 3 parameters, because the potential solutions $\hat{Y} = \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_{30} X_{30}$ are confined to a 3-dimensional subspace.
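This constraint is easy to verify numerically (a sketch with simulated data; the "top 3 components" split mirrors the setup quoted above): mapping the 3 fitted PC coefficients back to the 30 original features always lands in the 3-dimensional column space spanned by the top 3 eigenvectors, no matter what the outcome is.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 30))
y = rng.normal(size=500)

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
V3 = Vt[:3].T                   # 30 x 3 matrix of the top 3 eigenvectors
Z = Xc @ V3                     # the 3 principal-component scores

gamma, *_ = np.linalg.lstsq(Z, y, rcond=None)   # only 3 fitted parameters
beta = V3 @ gamma               # implied coefficients on the 30 originals

# beta never leaves span(V3): projecting onto that 3-d subspace returns it.
in_subspace = np.allclose(V3 @ (V3.T @ beta), beta)
print(in_subspace)
```

The 900 eigenvector entries are determined by X alone, so they do not add fitted-to-y degrees of freedom; only the 3 entries of `gamma` do.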
Sampling Normal variables with linear constraints and given variances - Fraser (1951)
Suppose $(Y_1, \cdots, Y_{n-1}) \sim N_{n-1}\left(\boldsymbol{0}_{n-1}, \boldsymbol{T}\right)$. For simplicity, let $v_i=1$ for all $i$ and let $b_i^{\ast}$ denote the $i$th column of $\boldsymbol{B}$ excluding the last element $b_{ni}$. That reference states that the elements of $\boldsymbol{T} = (\tau_{rs})$ are found by solving the set of $n$ equations \begin{eqnarray*} \sum_{r=1}^{n-1}\sum_{s=1}^{n-1} b_{ri}\tau_{rs}b_{si} = 1 \quad \mbox{for} \quad i=1,\cdots,n. \end{eqnarray*} Note that this can be expressed as the equivalent quadratic form $\left(b_i^{\ast}\right)^{\prime}\boldsymbol{T}b_i^{\ast}=1.$ Now we wish to solve for the elements of $\boldsymbol{T}$. Since the quadratic form is a scalar, it is equal to its trace. We use the facts that $\mbox{tr}(AB)=vec^{\prime}(A^{\prime})vec(B)$, that $\mbox{tr}(AB)=\mbox{tr}(BA)$, and that for a symmetric $n \times n$ matrix $A$, $D_n vech(A)=vec(A)$, where $vec(\cdot)$ denotes the vectorization of a matrix, $vech(\cdot)$ the half-vectorization, and $D_n$ the duplication matrix of order $n$. Since $vech(\boldsymbol{T})$ contains all of the unique elements of $\boldsymbol{T}$, we can create a linear system of equations using the equivalent formulation: \begin{eqnarray*} \left(b_i^{\ast}\right)^{\prime}\boldsymbol{T}b_i^{\ast} &=& \mbox{tr} \left(\left(b_i^{\ast}\right)^{\prime}\boldsymbol{T}b_i^{\ast}\right) \\ &=& \mbox{tr} \left(b_i^{\ast}\left(b_i^{\ast}\right)^{\prime}\boldsymbol{T}\right) \\ &=& vec^{\prime}\left(b_i^{\ast}\left(b_i^{\ast}\right)^{\prime}\right)vec\left(\boldsymbol{T}\right) \\ &=& vec^{\prime}\left(b_i^{\ast}\left(b_i^{\ast}\right)^{\prime}\right) \boldsymbol{D}_{n-1} vech\left(\boldsymbol{T}\right). \end{eqnarray*} Next let \begin{eqnarray*} \boldsymbol{W} = \begin{pmatrix} vec^{\prime}\left(b_1^{\ast}\left(b_1^{\ast}\right)^{\prime}\right) \boldsymbol{D}_{n-1} \\ \vdots \\ vec^{\prime}\left(b_n^{\ast}\left(b_n^{\ast}\right)^{\prime}\right) \boldsymbol{D}_{n-1} \end{pmatrix}. \end{eqnarray*} Hence the original set of $n$ equations can be written as \begin{eqnarray*} \boldsymbol{W} vech\left(\boldsymbol{T}\right) = \boldsymbol{1}_n, \end{eqnarray*} and for any generalized inverse of $\boldsymbol{W}^{\prime}\boldsymbol{W}$, we have \begin{eqnarray*} vech\left(\boldsymbol{T}\right) = \left(\boldsymbol{W}^{\prime}\boldsymbol{W}\right)^{-} \boldsymbol{W}^{\prime}\boldsymbol{1}_n. \end{eqnarray*} In fact, for this problem, one may reduce the above to find that $\boldsymbol{T} = \frac{n}{n-1} \boldsymbol{I}_{n-1}$. Using the ${\tt matrixcalc}$ package in ${\tt R}$ to obtain the duplication matrix, the correct code should be

```r
d <- 4
B <- matrix(NA, nrow = d, ncol = d)
B[d, ] <- rep(1/sqrt(d), d)
for (j in 1:(d-1)) {
  for (i in 1:d) {
    if (i < j)  B[j, i] <- 0
    if (i == j) B[j, i] <- ((d - j)/(d - j + 1))^(1/2)
    if (i > j)  B[j, i] <- -1/(((d - j + 1)*(d - j))^(1/2))
  }
}

set.seed(1234)
library(matrixcalc)
library(Matrix)
library(MASS)

# Build W row by row and solve for vech(T) via a generalized inverse
D <- duplication.matrix(d - 1)
W <- matrix(0, d, d*(d-1)/2)
for (i in 1:d) {
  W[i, ] <- t(as.vector(B[-d, i] %*% t(B[-d, i]))) %*% D
}
vech.tau <- as.vector(ginv(t(W) %*% W) %*% t(W) %*% rep(1, d))

# Rebuild T (called tau here) from its half-vectorization
tau <- matrix(0, d-1, d-1)
l <- 1
for (i in 1:(d-1)) {
  for (j in i:(d-1)) {
    tau[i, j] <- vech.tau[l]
    l <- l + 1
  }
}
tau <- forceSymmetric(tau)

# Sample Y ~ N(0, T), pad with a zero so y conforms with the d x d matrix B,
# then check the constraints: rows of x sum to zero, unit variances
aux <- matrix(rnorm((d-1)*100000, 0, 1), 100000, d-1)
y <- cbind(aux %*% chol(tau), 0)
x <- y %*% B
mean(rowSums(x))
# [1] -6.543378e-19
apply(x, 2, var)
# [1] 0.9989172 0.9898065 1.0061580 1.0053505
```
Is there a real example in which a correlation finally leads to the discovery of a non-trivial causal relationship?
Lung cancer was not even recognised medically until the 18th century, and as recently as 1900 only about 140 cases were known in the published medical literature. ... Tobacco was apparently not even suspected as a cause of lung tumours until the final decade of the 19th century. ... Scholars started noting the parallel rise in cigarette consumption and lung cancer, and by the 1930s had begun to investigate this relationship using the methods of case-control epidemiology. Proctor, 2012. "The history of the discovery of the cigarette–lung cancer link: evidentiary traditions, corporate denial, global toll". Tobacco Control.
Prove that $t_{n-1, \alpha/2}$ is strictly decreasing in $n$
This looks like a good opportunity to discuss important relationships among the Student $t$ distributions. The analysis needed to demonstrate them is elementary, requiring only the basic concepts of differential Calculus, and with the right strategy it can be reduced to a simple algebraic calculation.

There is a classic set of plots comparing the density functions (right panel of the figure) and the distribution functions (left panel) of Student $t$ variables as their parameter $\nu$ is varied. Although these cannot show the full extents of the graphs, which go from $-\infty$ to $\infty,$ the curves do suggest that when $\nu^\prime \gt \nu \gt 0:$

1. All these densities are symmetric around $0;$
2. The density for $\nu$ is lower near $0$ than the density for $\nu^\prime;$
3. The density for $\nu$ is higher asymptotically than the density for $\nu^\prime$ (that is, the distribution with small parameter $\nu$ has heavier tails); and
4. There is just one positive number (depending on $\nu$ and $\nu^\prime$) where the density functions for $\nu^\prime$ and $\nu$ cross.

All but the last claim are straightforward (and obvious) to prove, so let's get to the heart of the matter: a study of how two Student $t$ densities relate to one another for positive arguments $x.$ (Claim (1) justifies the focus on positive values.) The worry is that two different Student $t$ densities might wiggle around each other several times as their argument $x$ grows large, alternating between which has the larger density. Intuitively that shouldn't be the case, but how to prove it? The following analysis is motivated by a focus on simplifying away the obstacles. What would these obstacles be?
Consider the expression for the distribution function, $$F(t;\nu) = \int_{-\infty}^t f(x,\nu)\,\mathrm{d}x = \frac{\Gamma\left(\frac{\nu+1}{2}\right)}{\sqrt{\nu\pi}\,\Gamma\left(\frac{\nu}{2}\right)} \int_{-\infty}^t \left(1 + \frac{x^2}{\nu}\right)^{-(\nu+1)/2}\,\mathrm{d}x.$$ This is required to define the critical point $t_\nu(\alpha/2)$ as given in the question. From left to right we are confronted in turn with the apparent need to analyze (1) a ratio of Gamma functions, (2) an integral, (3) a fractional power, and (4) a reciprocal quadratic function. The strategy of comparing two distributions, starting with the obvious equalities $F(0,\nu)=1/2=F(0,\nu^\prime)$ and $\lim_{t\to \infty} F(t,\nu) = 1 = \lim_{t\to\infty}F(t,\nu^\prime),$ can eliminate the first obstacle. Comparing the density functions avoids dealing directly with the integral. To deal with the powers, let's compare the densities by taking the logarithm of their ratios. The log is positive when the numerator exceeds the denominator and negative otherwise. Although this introduces the logarithm as a new complication, by differentiating it we are reduced to a manageable rational function. To this end, for $x\ge 0$ define $$h(x,\nu^\prime,\nu) = \log\left(\frac{f(x,\nu^\prime)}{f(x,\nu)}\right) = \log(f(x,\nu^\prime)) - \log(f(x,\nu)).$$ See the left panel of the next figure for a plot of $h.$ This is a typical shape of the graph, no matter what $\nu^\prime \gt \nu$ might be. Claim (2) is that $h(0,\nu^\prime,\nu)\gt 0$ while claim (3) is that $h(x,\nu^\prime,\nu)\lt 0$ for all sufficiently large $x.$ Our concern is what happens at intermediate values $x$ where $h$ drops from its maximum down to negative values. 
The right panel plots the derivative of $h.$ I will prove that the derivative has exactly one zero at $x=1.$ Because the simplification strategy is a good one, this is an easy calculation based on computing the logarithmic derivative of $f(x,\nu):$ $$\frac{\mathrm{d}\log f(x,\nu)}{\mathrm{d}x} = -\frac{(\nu+1)x}{\nu + x^2}.$$ Consequently $$\frac{\mathrm{d}h(x,\nu^\prime,\nu)}{\mathrm{d}x} = \frac{(\nu+1)x}{\nu + x^2} - \frac{(\nu^\prime+1)x}{\nu^\prime + x^2} = \frac{(\nu^\prime-\nu)}{(\nu+x^2)(\nu^\prime+x^2)}\,x\,(1-x^2).$$ Since $\nu,$ $\nu^\prime,$ and $x^2$ are all positive, this is a continuous function for all $x\ge 0.$ It can cross zero, then, only where $x(1-x^2)=0.$ The only solution for $0\lt x \lt \infty$ is $x=1,$ as claimed. Let's unroll the implications. $h$ increases from $x=0$ (where it is positive) to $x=1$ and thereafter decreases, eventually becoming negative (which is a restatement of Claim (3)). Therefore Claim (4) holds: any two different Student $t$ densities cross at exactly one positive number (potentially depending on those two densities, of course). Consequently the distribution function $t \to F(t,\nu^\prime)$ rises more steeply from its value of $1/2$ at $t=0$ compared to $t\to F(t,\nu)$ and can never cross that graph: the two graphs eventually converge as $t\to \infty,$ where they both squeeze up to a height of $1.$ Thus, for any $0\lt p \lt 1,$ the middle portion $p$ of the probability distribution with $\nu^\prime$ degrees of freedom is strictly contained within the middle portion $p$ of the distribution with $\nu\lt \nu^\prime$ degrees of freedom. The last conclusion is equivalent to saying $t_\nu((1-p)/2)$ is a strictly decreasing function of $\nu$, QED.
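The conclusion is easy to check numerically (a sanity check only, not part of the proof; it assumes SciPy is available for the Student $t$ quantile function):

```python
from scipy.stats import t

alpha = 0.05
# Upper alpha/2 critical values for 1 through 100 degrees of freedom.
crit = [t.ppf(1 - alpha / 2, df) for df in range(1, 101)]

# Strict monotone decrease in the degrees of freedom, as proved above.
decreasing = all(a > b for a, b in zip(crit, crit[1:]))
print(decreasing)
```

The sequence starts at the familiar 12.71 for one degree of freedom and descends toward the Normal critical value 1.96 as $\nu \to \infty$.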
A challenging question of ANN
As long as we are talking only about additive neurons (i.e., all inputs to the neuron are summed together before being passed to the activation function), "unipolar" and "bipolar" can be used interchangeably. We can always transform a "unipolar" output into a "bipolar" one by multiplying by 2 and subtracting 1: $$ o_{bipolar} = 2 \cdot o_{unipolar} - 1 $$ To implement this in the network, we just need to double the weights and subtract the sum of the incoming weights from the bias: $$ w_{ij}' = 2 \cdot w_{ij} $$ $$ bias_j' = bias_{j} - \sum_{i} w_{ij}, $$ where the sum runs over the neurons feeding their output as input to the $j$-th neuron. Indeed, $\sum_i w_{ij}' o_i + bias_j' = \sum_i w_{ij}(2 o_i - 1) + bias_j$, so every pre-activation, and hence every output, is unchanged. So the part "if we use bipolar" can be safely ignored. Now, as Thomas points out in his comment, the first layer of the networks (D) and (E) simply maps the continuous $(x, y)$-space onto $\{0, 1\}^2$ (or, alternatively, $\{-1, 1\}^2$, if you use "bipolar" neurons). With the given arrangement of the classes this becomes the classical XOR problem, and you need two further layers to solve it.
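The weight/bias transformation can be checked with a toy sketch (one randomly weighted additive neuron; the step activation is just an illustrative choice, since the pre-activations are identical any activation behaves the same): the neuron's output is the same whether it sees bipolar inputs with the original parameters or unipolar inputs with the transformed ones.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n_in = 3
w = rng.normal(size=n_in)   # incoming weights of one additive neuron
b = rng.normal()            # its bias, as designed for bipolar inputs

def step(z):                # a simple threshold activation
    return int(z >= 0)

# Transformed parameters so the same neuron accepts unipolar inputs.
w2 = 2.0 * w
b2 = b - w.sum()

equivalent = all(
    step(w @ (2 * np.array(o) - 1) + b) == step(w2 @ np.array(o) + b2)
    for o in itertools.product([0, 1], repeat=n_in)
)
print(equivalent)
```

The two pre-activations agree exactly for every binary input pattern, which is why the unipolar-versus-bipolar distinction carries no representational power here.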
A challenging question of ANN
If a neuron had three outputs, say [-1,0,1], then it could draw three areas with linear boundaries as shown here for the first layer, and the solution would be (E). The second layer simply picks the south and north regions as one category, and the west and east regions as another. A neuron with two outputs, whether it's [0,1] or [-1,1] or any other pair of values, can only criss-cross. So the solution can only be (D). Sideways If you abstract yourself from the actual question, then it's clear that the variables are "wrong" :) This is asking for feature engineering (another buzzword!) --- a shift and a rotation by 45 degrees would work beautifully. First you de-mean the data, then create new variables: S = x+y and V = x-y. Then your classification becomes simply a one-bit problem: L is (S*V < 0). No, this is not the solution of the problem, because it still requires four regions, and with binary neurons you still need (D) in this problem. I just thought it's an interesting twist to consider
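The 45-degree trick can be checked numerically (a sketch; assuming the data are already de-meaned): since $S \cdot V = (x+y)(x-y) = x^2 - y^2$, the sign of the product recovers exactly the north/south-vs-east/west labelling.

```python
import random

random.seed(0)

# North/south vs east/west labelling of de-meaned points:
# the class is determined by whether |y| > |x|.
def true_label(x, y):
    return abs(y) > abs(x)

# After the 45-degree feature transform S = x + y, V = x - y,
# the same label is a single bit: S*V < 0, because S*V = x^2 - y^2.
def rotated_label(x, y):
    s, v = x + y, x - y
    return s * v < 0

for _ in range(1000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    assert true_label(x, y) == rotated_label(x, y)
```

The transformed problem is linearly separable in the single feature S*V, which is the sense in which the rotated variables are "right" even though a binary-neuron network still needs the architecture in (D).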
36,209
Tail probability bounds on $P(|Z| > t)$ tend to be useless for small $t>0$. Why is that?
The first thing to note here is that the upper bounds on the probabilities go above one in some cases, and obviously you can truncate these to make them one. That the Chebychev bound and the Mills bound go above one is really just a matter of convention --- i.e., the expression does not bother to differentiate between the useless bound at unity and useless bounds above unity. Setting aside that complication, one way to look at this is to try to find the nastiest distribution in each case --- i.e., the one that achieves the stated bound. For example, suppose we just consider the Chebychev inequality, which applies to any distribution. The uselessness of the bound in this case for small $t$ is the fault of the nefarious Bernoulli distribution! For simplicity, consider the shifted version of this distribution, with probability mass values: $$\mathbb{P}(Z=-1) = \mathbb{P}(Z=1) = \frac{1}{2}.$$ For this distribution we have: $$\mathbb{P}(|Z| > t) = \begin{cases} 1 & & \text{if } t < 1, \\[6pt] 0 & & \text{if } t \geqslant 1. \\[6pt] \end{cases}$$ The Chebychev inequality must accommodate this distribution, so it cannot say anything useful at all in the case where $t<1$. In this case, all that can be said is that the tail probability is no greater than one! You can proceed likewise for the other inequalities, trying to find a distribution that achieves the stated bound (or a sequence of distributions that approach it).
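A small sketch of this argument in code (the names are mine): for the shifted Bernoulli (Rademacher) distribution above, which has mean zero and unit variance, the Chebychev bound $1/t^2$ is at or above one for every $t \leq 1$, which is exactly where the true tail equals one.

```python
# Chebychev bound vs. the actual tail of the "nefarious" shifted Bernoulli
# (Rademacher) distribution: P(Z = -1) = P(Z = 1) = 1/2, so E[Z] = 0, Var(Z) = 1.
def chebyshev_bound(t):
    return 1 / t**2          # P(|Z| > t) <= Var(Z)/t^2 for a unit-variance Z

def rademacher_tail(t):
    return 1.0 if t < 1 else 0.0

# The bound must accommodate the Rademacher tail everywhere,
# and for t <= 1 it can say nothing better than "at most one".
for t in [0.1, 0.5, 0.9, 1.5, 3.0]:
    assert rademacher_tail(t) <= min(chebyshev_bound(t), 1)
for t in [0.1, 0.5, 0.9]:
    assert chebyshev_bound(t) >= 1
```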
36,210
Tail probability bounds on $P(|Z| > t)$ tend to be useless for small $t>0$. Why is that?
The distribution of $\bar X$ near the true mean will depend on the distribution of $X$ near the true mean, so you won't be able to bound it just by making assumptions about moments or other expectations. Also, there's more theoretical interest in tail bounds, so more work has gone into refining them. Results for $\bar X$ near the mean tend not to be explicit bounds. Edgeworth and Cornish-Fisher expansions are examples: valid for values near the mean, but error only up to some order in $n$. For example, Johnson gives an approximation for confidence intervals on $\bar X$ based on Cornish-Fisher expansions that works well for quite a wide range of distributions, but doesn't give an explicit bound. Saddlepoint expansions are another example: very accurate, but error only bounded up to an unknown constant.
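As a rough stdlib-only illustration of the Cornish-Fisher idea (a sketch, not Johnson's actual procedure; the sample size and distribution are my choices): for the mean of $n$ Exp(1) draws, the first-order skewness correction $z + (z^2 - 1)\gamma/6$ moves the normal-approximation quantile noticeably closer to a Monte Carlo estimate of the true quantile, but provides no explicit error bound.

```python
import random
import statistics

random.seed(1)

# Sample mean of n Exp(1) draws: mu = 1, sd = 1/sqrt(n), skewness = 2/sqrt(n)
n = 50
mu, sigma, gamma = 1.0, 1 / n**0.5, 2 / n**0.5

z = 1.959964  # standard normal 97.5% quantile

# First-order Cornish-Fisher correction to the normal quantile
w = z + (z**2 - 1) * gamma / 6
q_normal = mu + sigma * z   # plain normal approximation
q_cf = mu + sigma * w       # skew-corrected approximation

# Monte Carlo estimate of the true 97.5% quantile of the sample mean
means = sorted(statistics.fmean(random.expovariate(1) for _ in range(n))
               for _ in range(50_000))
q_mc = means[int(0.975 * len(means))]
```

Here the corrected quantile lands much closer to the Monte Carlo value than the plain normal quantile does, which is the sense in which such expansions are "valid near the mean" without being bounds.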
36,211
GLM interpretation
Since you have only one individual with gender "Other", it would make a bit more sense to include this with males and have "female" and "not female", although it won't change the interpretation much. Assuming that "Y" for glasses is coded as 1 and "N" is coded as 0, there is clear evidence from these data that males are less likely to wear glasses than females. In particular, the log-odds of wearing glasses is 1.3 lower for males than for females. There is no point in interpreting the output for genderOther. There is also some evidence of a negative association between reading books and wearing glasses. In particular, each additional book read is associated with 0.3 lower log-odds of wearing glasses.
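To put the coefficients on a more interpretable scale (a sketch; only the two slopes come from the output discussed above, and the intercept value is hypothetical), exponentiating a log-odds difference gives an odds ratio, and the logistic function converts a linear predictor to a probability.

```python
import math

# Coefficients matching the interpretation above
beta_male = -1.3   # log-odds difference, males vs. females
beta_books = -0.3  # log-odds change per additional book read

# Odds ratios are the exponentiated coefficients
or_male = math.exp(beta_male)    # males have roughly 27% the odds of wearing glasses
or_books = math.exp(beta_books)  # odds multiply by roughly 0.74 per extra book

# Converting log-odds to a probability: a female (hypothetical intercept 0.2)
# who read 3 books
intercept = 0.2
logit = intercept + 3 * beta_books
prob = 1 / (1 + math.exp(-logit))
```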
36,212
PCA and variable contributions to first n dimensions
We have a dedicated thread for that very specific purpose: Using principal component analysis (PCA) for feature selection. Just a few points regarding the interpretation of those visual displays, and some reflections on the question at hand:

- This graphical output is a visual aid to see which variables contribute the most to the definition of the principal components. If you have a "PCA" object constructed using FactoMineR::PCA, then variable contribution values are stored in the $var$contrib slot of your object. The contribution is a scaled version of the squared correlation between variables and component axes (or the cosine, from a geometrical point of view) --- this is used to assess the quality of the representation of the variables on the principal component, and it is computed as $\text{cos}(\text{variable}, \text{axis})^2 \times 100$ / total $\text{cos}^2$ of the component.
- It might not always be relevant to select a subset of variables based on their contribution to each principal component. Sometimes a single variable can drive the component (this is sometimes known as a size effect, and it might simply result from a single variable capturing most of the variance along the first principal axis --- this would result in a very high loading for that variable, and very low loadings for the remaining ones); other times the signal is driven by few variables in higher dimensions (e.g., past the 10th component); finally, a variable might have a high weight on one component, yet also a weight that is above your threshold (10%) on another component: does that mean it is more "important" than those variables that only load on (or drive) a single component?
- It will be hard to cope with highly correlated variables, yet one principled approach to feature selection is to get rid of collinearity (sometimes simply as a side effect of the algorithm itself) by selecting only one variable among a cluster of highly correlated variables.
- Beware that any arbitrary cutoff (10% for variable contribution, or 80% for the total explained variance) should be motivated by pragmatic or computational arguments.

To sum up, this approach to selecting variables might work, when used in a single-pass algorithm or as a recursive procedure, but it really depends on the dataset. If the objective is to perform feature selection on a multivariate dataset with a primary outcome, why not use techniques dedicated to this task (the Lasso operator, Random Forests, Gradient Boosting Machines, and the like), since they generally rely on an objective loss function and provide a more interpretable measure of variable importance?
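For readers without FactoMineR, the contribution values can be reproduced from scratch (a sketch with simulated data; this mirrors the $\text{cos}^2$-based formula above, and for standardized data it reduces to the squared eigenvector coordinates times 100).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X[:, 1] += 0.8 * X[:, 0]          # introduce some correlation

# Standardize, then diagonalize the correlation matrix
Z = (X - X.mean(axis=0)) / X.std(axis=0)
R = Z.T @ Z / len(Z)
eigval, eigvec = np.linalg.eigh(R)
order = np.argsort(eigval)[::-1]          # eigh returns ascending order
eigval, eigvec = eigval[order], eigvec[:, order]

# Variable coordinates (correlations between variables and components),
# their squared cosines, and the percentage contributions per component
coord = eigvec * np.sqrt(eigval)
cos2 = coord**2
contrib = 100 * cos2 / cos2.sum(axis=0)   # equals 100 * eigvec**2
```

Each column of `contrib` sums to 100, so a variable's value is directly comparable to a cutoff such as the 10% threshold discussed above.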
36,213
Generating a min{p, 0.5} coin from a p-coin - Bernoulli factory type problem
Note: This answer is wrong/incomplete and will need an update according to the comments

~~You can approach the function but not get equal~~

Suppose we are given a coin with arbitrary (unknown) head probability $p$, I am wondering if there is an easy-to-implement algorithm for generating a $\min\{p, 0.5\}$ coin for any $p\in [0,1]$.

The function $f(p) = \min\{p, 0.5\} \geq \min\{p, 1-p\}$, so the famous paper by Keane and O’Brien guarantees such an algorithm exists.

~~We should be more nuanced. There is *no* such algorithm that allows you to generate a $\min\{p, 0.5\}$ coin. What Keane and O'Brien state is that you can *approach* the desired function as close as you like.~~

Intuitive example

We can generalize the method from Luis Mendo to get a procedure which allows you to generate a Bernoulli variable with probability $f(p)$ given a Bernoulli variable with probability $p$. The steps are as follows:

1. Use $m$ coin flips to estimate $\hat p$, the bias of the coin.
2. Based on the bias of the coin, use $n$ unbiased coin flips (for which you can use John von Neumann's algorithm) to generate a coin with approximately $\hat{f}(\hat{p})$ probability. In the computational example below, $\hat{f}(\hat{p})$ is equal to one of the quantiles of a binomial variable with size $n$ and probability $0.5$.

By increasing $m$ you can make $\hat p$ get closer to the true value $p$. By increasing $n$ you can make $\hat{f}(\hat p)$ get closer to ${f}(\hat p)$. In the end we can make $\hat{f}(\hat p)$ as close to ${f}(p)$ as we like by increasing $m$ and $n$.

The code below demonstrates this construction for the function $f(p) = \frac{1}{2} + \frac{1}{2} \sin(4\pi p)$ with a simulation.
We see that with $m = n = 1000$ you get to the following approximation:

```r
### estimate p by tossing m coins and use the average number of 1's
estimate_p <- function(m, p) {
  rbinom(1, m, p) / m
}

### create a coin toss with probability p based on n tosses of a fair coin
toss_p <- function(n, p, k = 10^3) {
  cutoff <- qbinom(p, n, 0.5)
  #sum(rbinom(k, n, 0.5) <= cutoff)/k
  pbinom(cutoff, n, 0.5)
}

### some function that we want to convert the coin to
conv_p <- function(p) {
  0.5 + 0.5 * sin(p * pi * 4)
}

ps <- seq(0.001, 0.999, 0.001)
plot(ps, conv_p(ps), type = "l", xlab = "p coin", ylab = "p process")

### estimate f(p) with coin tosses of coin p
set.seed(1)
m <- 1000          ### number of coin tosses to estimate p
n <- 1000          ### number of fair coin tosses to approach f(p)
p_trials <- seq(0.01, 0.99, 0.01)
k_trials <- 1:100
for (pt in p_trials) {
  for (k in k_trials) {
    p_est <- estimate_p(m, conv_p(pt))
    p_out <- toss_p(n, p_est)
    points(pt, p_out, pch = 21, col = rgb(0, 0, 0, 0.1),
           bg = rgb(0, 0, 0, 0.1), cex = 0.3)
  }
}
```

Trick for a more efficient example

The above example is simple and helps to see intuitively how an algorithm can approach the function as close as we like. However, we could try to see how to make a faster method. The trick is that we can tabulate the results like

| toss | probability |
|------|-------------|
| HH   | p²          |
| HT   | p(1−p)      |
| TH   | (1−p)p      |
| TT   | (1−p)²      |

and then select a few of the tosses that create a desirable ratio. For instance, in the case of John von Neumann's trick (generating a fair coin out of a biased coin) we could decide

- 'heads' if we observe HT with the unfair coin
- 'tails' if we observe TH with the unfair coin
- 'toss again' if we did not observe either.

(Yes, this approach throws away results, which is not efficient. There are a lot of ways to improve, and complicate, the strategies to solve these kinds of problems.)

Then the ratio of heads to tails is $\frac{p(1-p)}{(1-p)p} = 1$, so you get a fair coin independent of $p$.
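For completeness, von Neumann's trick is short enough to run directly (a sketch in Python rather than R; the bias $p=0.8$ is an arbitrary choice).

```python
import random

random.seed(42)

def biased_flip(p=0.8):
    return 1 if random.random() < p else 0

# John von Neumann's trick: flip the biased coin in pairs;
# HT -> heads, TH -> tails, otherwise discard the pair and retry.
def fair_flip(p=0.8):
    while True:
        a, b = biased_flip(p), biased_flip(p)
        if a != b:
            return a   # P(a=1, b=0) = p(1-p) = P(a=0, b=1)

flips = [fair_flip() for _ in range(20_000)]
mean = sum(flips) / len(flips)   # close to 0.5 regardless of p
```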
Generalizing the trick

What we will try to do now is find the best combination of events for tails and heads given $n$ coin flips to get a ratio like

$$\frac{a_1 p^n + a_2 p^{n-1}(1-p) + \dots + a_n (1-p)^n }{b_1 p^n + b_2 p^{n-1}(1-p) + \dots + b_n (1-p)^n}$$

that approximates our desired function the best. (The coefficients $a_i$ and $b_i$ will need to be integers and are bounded by binomial coefficients.)

The R code below computes the optimum for 3 coin tosses (I am computing just 3 tosses because the least-squares problem with integer constraints is for the moment computed in a dumb way). We will be selecting 'heads' for HHT and HTT and 'tails' for HTH and TTT, which will give an odds ratio of

$$\frac{p^2(1-p)+p(1-p)^2}{p^2(1-p)+(1-p)^3}=\frac{p}{1-2p+2p^2}$$

```r
### function in numerator and denominator
simulated <- function(x, par) {
  k <- length(par) - 1
  xp <- x^(k:0) * (1 - x)^(0:k)
  return(sum(par * xp))
}
simulated <- Vectorize(simulated, vectorize.args = "x")

### function to compare the estimate of f(p) with the desired f(p)
fn <- function(x) {
  l <- length(x)          ### number of pars
  a <- x[1:(l/2)]
  b <- x[(l/2 + 1):l]
  ### integrate the squared difference on each half
  int1 <- integrate(f = function(x) {
            (simulated(x, a) / (simulated(x, a) + simulated(x, b)) - x)^2
          }, lower = 0.001, upper = 0.5, stop.on.error = FALSE)
  int2 <- integrate(f = function(x) {
            (simulated(x, a) / (simulated(x, a) + simulated(x, b)) - 0.5)^2
          }, lower = 0.5, upper = 0.999, stop.on.error = FALSE)
  if (int1$message != "OK") { int1$value <- 10^6 }
  if (int2$message != "OK") { int2$value <- 10^6 }
  return(int1$value + int2$value)
}

### compute the best option for 3 tosses by just trying every option
RSS <- 10^6
solution <- c(0, 0, 0, 0, 0, 0, 0, 0)
for (a1 in 0:1) {
for (a2 in 0:3) {
for (a3 in 0:3) {
for (a4 in 0:1) {
for (b1 in 0:(1 - a1)) {
for (b2 in 0:(3 - a2)) {
for (b3 in 0:(3 - a3)) {
for (b4 in 0:(1 - a4)) {
  if ((a1 + a2 + a3 + a4 + b1 + b2 + b3 + b4) > 0) {
    test_RSS <- fn(c(a1, a2, a3, a4, b1, b2, b3, b4))
    if (test_RSS < RSS) {
      RSS <- test_RSS
      solution <- c(a1, a2, a3, a4, b1, b2, b3, b4)
    }
  }
}}}}}}}}

l <- length(solution)
a <- solution[1:(l/2)]
b <- solution[(l/2 + 1):l]
xs <- seq(0.001, 0.999, 0.001)
plot(-10, -10, xlim = c(0, 1), ylim = c(0, 1),
     xlab = "p coin", ylab = "p simulated")
lines(xs, simulated(xs, a) / (simulated(xs, a) + simulated(xs, b)))
lines(c(0, 0.5, 1), c(0, 0.5, 0.5), col = 2)
```
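The claimed odds ratio for the 3-toss rule can be verified with a quick numerical check (a sketch; the function names are mine).

```python
import math

# Three-flip rule from the text: 'heads' on HHT or HTT, 'tails' on HTH or TTT.
def heads_prob(p):
    return p**2 * (1 - p) + p * (1 - p)**2   # P(HHT) + P(HTT)

def tails_prob(p):
    return p**2 * (1 - p) + (1 - p)**3       # P(HTH) + P(TTT)

# The odds ratio heads/tails should equal p / (1 - 2p + 2p^2),
# since heads = p(1-p) and tails = (1-p)(p^2 + (1-p)^2) after factoring.
for p in [0.1, 0.25, 0.5, 0.7, 0.9]:
    assert math.isclose(heads_prob(p) / tails_prob(p),
                        p / (1 - 2*p + 2*p**2))
```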
36,214
Generating a min{p, 0.5} coin from a p-coin - Bernoulli factory type problem
Since the function min(λ, c) meets the Keane–O'Brien theorem, this implies that there are polynomials that converge from above and below to the function. This is discussed, for example, in Thomas and Blanchet (2012) and in Łatuszyński et al. (2009/2011). However, neither paper shows how to automate the task of finding polynomials required for the method to work, so that finding such a sequence for an arbitrary function (that satisfies the Keane–O'Brien theorem) remains far from trivial (and the question interests me to some extent, too). But fortunately, there is an alternative way to simulate min(λ, 1/2) without having to build a sequence of polynomials explicitly. This algorithm I found is given below, and I have a page describing the derivation of this algorithm. With probability 1/2, flip the input coin and return the result. (Random walk.) Generate unbiased random bits until more zeros than ones are generated this way for the first time. Then set m to (n −1)/2+1, where n is the number of bits generated this way. (Build a degree-m*2 polynomial equivalent to (4*λ*(1− λ ))m/2.) Let z be (4m/2)/choose(m*2, m). Define a polynomial of degree m*2 whose (m*2)+1 Bernstein coefficients are all zero except the mth coefficient (starting at 0), whose value is z. Elevate the degree of this polynomial enough times so that all its coefficients are 1 or less (degree elevation increases the polynomial's degree without changing its shape or position; see the derivation in the appendix). Let d be the new polynomial's degree. (Simulate the polynomial, whose degree is d (Goyal and Sigman 2012).) Flip the input coin d times and set h to the number of ones generated this way. Let a be the hth Bernstein coefficient (starting at 0) of the new polynomial. With probability a, return 1. Otherwise, return 0. I suspected that the required degree d would be floor(m*2/3)+1. 
With help from the MathOverflow community, steps 3 and 4 of the algorithm can be described more efficiently as follows: (3.) Let r be floor(m*2/3)+1, and let d be m*2+r. (4.) (Simulate the polynomial, whose degree is d.) Flip the input coin d times and set h to the number of ones generated this way. Let a be (1/2) * 2m*2*choose(r, h_−_m)/choose(d, h) (the polynomial's hth Bernstein coefficient starting at 0; the first term is 1/2 because the polynomial being simulated has the value 1/2 at the point 1/2). With probability a, return 1. Otherwise, return 0. (Here, choose(n, k) is a binomial coefficient.) In addition, there is an approximate way to sample min(λ, c) and most other continuous functions f that map (0, 1) to (0, 1). Specifically, it's trivial to simulate an individual polynomial with Bernstein coefficients in [0, 1], even if the polynomial has high degree and follows the desired function closely (Goyal and Sigman 2012): Flip the input coin n times (where n is the polynomial's degree), and let j be the number of ones. With probability a[j], that is, the j-th control point, starting at 0, for the polynomial's corresponding Bézier curve, return 1. Otherwise, return 0. To use this algorithm, simply calculate a[j] = f(j/n), where n is the desired degree of the polynomial (such as 100). Each a[j] is one of the Bernstein coefficients of a polynomial that closely approximates the function; the higher n is, the better the approximation. EDIT: Let me clarify two things: Generating fair bits from a biased coin, and simulation vs. Estimation. First, generating fair bits. You can generate unbiased bits either by generating them separately from the coin, or by using biased coin tosses and applying a randomness extraction procedure to turn them into unbiased bits. 
Ways to do so include not just the von Neumann algorithm itself, but also randomness extractors that assume no knowledge of the coin's bias, including Yuval Peres's (1992) iterated von Neumann extractor as well as an "extractor tree" by Zhou and Bruck (2012). See also my note on randomness extraction.

Second, the difference between simulating and estimating probabilities. Essentially, "simulation" means generating the same distribution, and "estimation" means generating the same expected value (Glynn 2016). However, a Bernoulli factory for simulating f(p) also acts as an unbiased estimator for f(p) (Łatuszyński et al. 2009/2011). But a function that doesn't meet the Keane–O'Brien theorem, such as min(2p, 1 − (2p)), can't be simulated by any algorithm without further knowledge of p, because the estimate would not be unbiased (Łatuszyński et al. 2009/2011). (However, it is possible to simulate min(2p, 1 − (2p), 1 − ε) this way.) See also my note.

REFERENCES:

Goyal, V. and Sigman, K., 2012. "On simulating a class of Bernstein polynomials." ACM Transactions on Modeling and Computer Simulation (TOMACS) 22(2), pp. 1-5.
Łatuszyński, K., Kosmidis, I., Papaspiliopoulos, O., Roberts, G.O., "Simulating events of unknown probabilities via reverse time martingales", arXiv:0907.4018v2 [stat.CO], 2009/2011.
Thomas, A.C., Blanchet, J., "A Practical Implementation of the Bernoulli Factory", arXiv:1106.2508v3 [stat.AP], 2012.
Glynn, P.W., "Exact simulation vs exact estimation", Proceedings of the 2016 Winter Simulation Conference, 2016.
Zhou, H. and Bruck, J., "Streaming algorithms for optimal generation of random bits", arXiv:1209.0730 [cs.IT], 2012.
Peres, Y., "Iterating von Neumann's procedure for extracting random bits", Annals of Statistics 20(1), 1992, pp. 590-597.
36,215
Generating a min{p, 0.5} coin from a p-coin - Bernoulli factory type problem
I don't know how to generate that function exactly, but there is a simple way to generate increasingly good approximations of it by consuming an increasingly large number of inputs. Namely, you observe $N$ inputs, where $N$ is an odd integer. Then:

If the majority of those inputs is $0$ you "decide" that $p<1/2$. So you take an additional input and output that.
If the majority of the $N$ inputs is $1$ you "decide" that $p>1/2$. So you output a Bernoulli auxiliary random variable with parameter $1/2$ (if needed, you can generate that variable from additional inputs using the well-known von Neumann procedure).

Here is some Matlab code that plots the resulting $f_N(p)$ as a function of $p$ for a given $N$. It's easy to prove that $f_N(p)\rightarrow f(p)$ as $N \rightarrow \infty$.

    N = 101;
    p_axis = 0:.01:1;
    y = 1/2 + (p_axis-1/2).*binocdf((N-1)/2, N, p_axis);
    plot(p_axis, y)

As a check, here's an experiment to estimate $f_{101}(p)$ using $10^5$ realizations for each value of $p$. The markers represent the proportion of $1$ in the output for each set of $10^5$ realizations.

    N = 101;
    R = 1e5;
    p_test = 0:.05:1;
    result = NaN(size(p_test));
    for k = 1:numel(p_test)
        p = p_test(k);
        t = mean(rand(N,R)<=p, 1)<=1/2;   % true where the majority of the N inputs is 0
        out_k = NaN(1,R);
        out_k(t) = rand(1,sum(t))<=p;     % decided p<1/2: output an additional input
        out_k(~t) = rand(1,sum(~t))<=1/2; % decided p>1/2: output a fair bit
        result(k) = mean(out_k);
    end
    plot(p_test, result, 'o')
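For readers without Matlab, the same scheme can be sketched in Python (names are mine); it implements one output bit of the majority-vote approximation and the closed form $f_N(p) = 1/2 + (p - 1/2)\,P(\mathrm{Bin}(N,p) \le (N-1)/2)$ used in the plot:

```python
import random
from math import comb

def f_approx(flip, N=101):
    """One output bit of the majority-vote approximation to min(p, 1/2)."""
    ones = sum(flip() for _ in range(N))
    if ones < N / 2:              # majority of the N inputs is 0: decide p < 1/2,
        return flip()             # ...so output one additional input
    return random.getrandbits(1)  # majority is 1: decide p > 1/2, output a fair bit

def f_closed_form(p, N=101):
    """f_N(p) = 1/2 + (p - 1/2) * binocdf((N-1)/2, N, p)."""
    cdf = sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range((N - 1) // 2 + 1))
    return 0.5 + (p - 0.5) * cdf
```

Averaging many calls to `f_approx` for a fixed coin should track `f_closed_form(p)` to within Monte Carlo error.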
36,216
How to conduct a multilevel model/regression for panel data in Python?
Ordinary linear regression alone is not well suited to a multilevel model, because observations within the same group are correlated. A mixed effects model is a good way to fit most multilevel models. In python you can use mixedlm in statsmodels. For example:

    In [1]: import statsmodels.api as sm
    In [2]: import statsmodels.formula.api as smf
    In [3]: data = sm.datasets.get_rdataset("dietox", "geepack").data
    In [4]: md = smf.mixedlm("Weight ~ Time", data, groups=data["Pig"])
    In [5]: mdf = md.fit()
    In [6]: print(mdf.summary())
              Mixed Linear Model Regression Results
    ========================================================
    Model:            MixedLM Dependent Variable: Weight
    No. Observations: 861     Method:             REML
    No. Groups:       72      Scale:              11.3669
    Min. group size:  11      Log-Likelihood:     -2404.7753
    Max. group size:  12      Converged:          Yes
    Mean group size:  12.0
    --------------------------------------------------------
               Coef.  Std.Err.    z     P>|z| [0.025 0.975]
    --------------------------------------------------------
    Intercept 15.724     0.788  19.952  0.000 14.179 17.268
    Time       6.943     0.033 207.939  0.000  6.877  7.008
    Group Var 40.394     2.149
    ========================================================
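If you would rather not depend on the online example dataset, the same model can be exercised on a simulated panel. This is a sketch under my own assumptions (variable names and data-generating values are invented): each panel unit gets its own random intercept, and the mixed model recovers the fixed effects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_units, n_periods = 50, 10

# Simulated panel: y = 2 + 3*x + unit-level random intercept + noise
unit = np.repeat(np.arange(n_units), n_periods)
x = rng.normal(size=n_units * n_periods)
u = rng.normal(scale=1.5, size=n_units)[unit]   # random intercept per unit
y = 2.0 + 3.0 * x + u + rng.normal(size=n_units * n_periods)

df = pd.DataFrame({"y": y, "x": x, "unit": unit})
md = smf.mixedlm("y ~ x", df, groups=df["unit"])
mdf = md.fit()
print(mdf.fe_params)  # fixed effects: near 2 (intercept) and 3 (slope)
```

For random slopes as well as intercepts, `mixedlm` also accepts a `re_formula` argument (for example, `re_formula="~x"`).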
36,217
Are the differences between sampling clusters and sampling strata, conceptual, methodological, neither or both?
Most U.S. health surveys (NHIS and its kiddo MEPS, NHANES, NSDUH) are stratified cluster surveys. The common representation of the public use data sets is a two-stage design with ~50 strata at the first stage of sampling (at which clusters are sampled), usually with two clusters per stratum, and people sampled at the second stage within clusters. This is a sixth-grade reading level explanation of the science, if you like.

Why, and how, are these surveys stratified? Well, the health professionals know that people in different settings have different health care needs and health care outcomes. Urban is different from suburban, which is different from rural, so the level of urbanization / population density is a stratifying variable for these surveys.

Why, and how, are these surveys clustered? Well, cluster samples are either a measure of desperation (there is no way to reach the population in other ways) or simply a way to save on costs (in face-to-face surveys, you want to pay interviewers to talk with people rather than to sit in the car / on the train / walk from one interview to the next... so the interviewers should have 5-15 minutes of travel rather than 2 hours of travel between appointments). In large scale U.S. health surveys, you have bits of both: there is no central listing of all people in the country (although one can lay their hands on the list of all addresses, sort of). In international surveys like the Demographic and Health Surveys, there may not be enough government data to set up data collection the way it is done in the U.S.; the best you may have to deal with is an administrative division into provinces, districts, and cities/towns/villages within the latter, with at best rough estimates of population sizes. So you end up sampling those districts, and those settlements within districts, and then send enumerators to count dwellings and then sample from the lists thus created.
There are of course other situations where cluster samples make perfect sense, namely when the populations are naturally organized in a hierarchical way, like school districts / schools / classes-teachers / students. Clusters are defined by the social processes, not by the statistician's pen. In many of these hierarchical population surveys, there is also interest in data at each level of the hierarchy, and in multilevel modeling of the mediation of student-level variable effects by teacher- or principal-level variables.

Out of the questions posed by the OP, I can only answer this one (the others are qualitative research questions, not quantitative research ones): What circumstances would lead a study designer to say "You know what? We need an additional variable to cluster sample/stratify on." You can only stratify on a variable that is available on the sampling frame (sampling frame = the list of entities that you take a sample from; this would be a list of districts in the example of the DHS surveys, or the list of all 80,000 Census tracts in the case of the United States for the large scale health surveys; this could also be an implicit list, like the way random phone numbers are generated in random digit dialing, which is what is done for BRFSS). As for which variable to cluster on, it is either the natural hierarchy, or a cost-precision tradeoff: if your interviewers have a smaller area to cover, the population is likely to be somewhat more homogeneous, so you don't learn as much from the same number of observations.

P.S. The distinction between clusters and strata is something a lot of people struggle with. You are not alone.

P.P.S. Contrary to what you may have heard, including in some of the posted answers, in the U.S. you cannot stratify by a person's race/ethnicity, sex/gender, or age, at least not in general population surveys. If you have a list of hospital patients with these fields, then of course you can.
But there is no general sampling frame (short of maybe the Census Bureau Master Address File) that would list a person's name, address, and these demographic characteristics. (The Nordic countries, however, have population registers where this information can be found; the conversations between Swedes and Americans at professional conferences sometimes go in parallel universes with little traction.) What does happen is that when you stratify by geography, and minorities are heavily segregated, you can select areas that are 90%+ Black/African American or 80%+ Hispanic, and that way you have a good way to predict how many people in those groups your sample will have at the end of the day.
36,218
Are the differences between sampling clusters and sampling strata, conceptual, methodological, neither or both?
Stratified sampling is most efficient (in terms of the variance of the estimate) when you have homogeneity WITHIN strata and heterogeneity BETWEEN strata. Think US states if your variable of interest were some social issue: Texans are very similar to each other but wildly different from New Yorkers (who are again similar to each other). If this is the case, then stratified sampling can be more efficient than simple random sampling, since you require fewer samples to achieve a fully represented sample of your population.

In the case of a rare population (e.g., sexual minorities), if that population acts homogeneously with respect to the variable of interest and heterogeneously from members that do not belong to that rare population, then whether or not members of this group end up in your sample can cause a large variance in your estimate. Stratifying on this group ensures that members of this group are in the sample, thus achieving less sampling variance for the same sample size. Consider the case of estimating business revenue in a town with many small businesses and one Wal-Mart. Whether Wal-Mart is included in your sample will cause huge variations in your estimate. Stratifying based on something such as number of employees, and perhaps placing Wal-Mart in its own stratum where the sampling percentage is 100% (this is a take-all stratum), will decrease the variance of your estimate.

Conceptually, stratified sampling is all about decreasing the variance of your estimate. It allows either the same variance as SRS with fewer samples, or less variance for the same number of samples. What would preclude a variable from being used to stratify? If it had no effect on the variance of your estimate; that is, if it did not further increase the homogeneity within strata. For example, stratifying on eye colour if your variable of interest was student performance. It may not hurt your strata, but it will increase the complexity of your survey design needlessly.
Cluster sampling is most efficient (again, efficiency in terms of variance) when you have heterogeneity WITHIN clusters and homogeneity BETWEEN clusters. Think schools in a particular state when the variable of interest is student height. Cluster sampling intends each cluster to be, essentially, a mini version of your population. The main benefits are practical. For example, you don't require a complete frame: if you want to sample students but don't have the students' contact information, you can sample schools instead and have them give the survey to all of their students. It also saves on the cost of actually administering the survey. If your survey must be completed in person, then it can be expensive to drive around and survey persons chosen randomly using SRS. If you sample clusters chosen with geographic proximity in mind, this becomes less expensive and can actually let you survey more people (which can lead to less variance than SRS). Clusters are chosen less for their ability to reduce the variance of your estimate and more for their ability to aid in survey administration and to reduce costs. That being said, beyond practical reasons, it is possible for cluster sampling to have less variance than SRS with the same sample size if the intra-class correlation is negative.
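The variance argument for the take-all stratum can be seen in a small simulation. This sketch uses invented numbers: a town of 200 small businesses plus one giant retailer. Simple random sampling sometimes catches the giant and sometimes doesn't, inflating the variance of the estimated total; a design that puts the giant in a take-all stratum removes that source of variance entirely.

```python
import random
import statistics

random.seed(42)

# Invented town: 200 small businesses plus one giant (revenue units arbitrary)
small = [random.uniform(50, 150) for _ in range(200)]
giant = 10_000.0
population = small + [giant]
true_total = sum(population)

def srs_total(n=21):
    """Expand a simple random sample of n businesses to an estimated total."""
    sample = random.sample(population, n)
    return len(population) * statistics.mean(sample)

def stratified_total(n_small=20):
    """Take-all stratum for the giant, plus an SRS of the small stratum."""
    sample = random.sample(small, n_small)
    return giant + len(small) * statistics.mean(sample)

srs_draws = [srs_total() for _ in range(2000)]
strat_draws = [stratified_total() for _ in range(2000)]
print(round(statistics.stdev(srs_draws)), round(statistics.stdev(strat_draws)))
```

Both estimators are unbiased for the true total, but the stratified design's standard deviation is dramatically smaller at the same sample size.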
36,219
Are the differences between sampling clusters and sampling strata, conceptual, methodological, neither or both?
Here's how the terms are usually used in survey research. Stratified sampling is when you take the entire sample frame and preemptively divide it into a number of "buckets" based on some criteria you already know. So if you are sampling people in the US and you already know their race you might divide the sample into white, black, Hispanic and other. These buckets are the "strata." Then instead of taking one big random sample from the entire population you take a random sample from each bucket. There are various benefits of doing this but the biggest is that, if you want, you can take a BIGGER % random sample from smaller buckets to ensure you have enough respondents from that group in your final sample. So if I drew a sample of 500 from each bucket I'm going to have way more Blacks, Hispanics and "others" in my sample than I would if I just drew a random sample from the whole population, which might be important if I want to make sure I have enough N for those subgroups. Of course I'll then need to calculate design weights to adjust for the bias I've intentionally introduced in my sample. But this is easy since I know exactly what sort of bias I've introduced. Clusters, by contrast, are part of a "two stage" sampling design, where first you draw a random sample of clusters, and then you draw a random sample of observations within the sampled cluster. So if I wanted to study hospital patients I might start by first making a sample frame of all hospitals in the US. Then I would draw a random sample of hospitals. Then, within the hospitals I've sampled I draw a random sample of patients to study. From a statistical perspective the key difference is that in stratified sampling you just draw ONE random sample, and everyone in the frame has a non-zero probability of selection. Of course people in some strata might have a higher probability of selection than others, but that's where the design weights come in. 
In cluster sampling, you draw two random samples – one sample of clusters and another sample of people (in the sampled clusters). And in that second stage of sampling lots of people (those who are in non-sampled clusters) have a zero % chance of selection. This is when you might want to consider HLM/multilevel modeling to account for the fact that observations are nested within clusters that are themselves just a sample of the total population. Addition: One conceptual motivation for cluster sampling is that it's often the only feasible way to get the sample you want. There is no one "list" of all hospital patients (or elementary school students) in a country that you can use to draw a random sample of. But there is a list of hospitals (or schools) you can use as a sample frame, and for each hospital chosen there is a list of patients within that hospital. So often it's the only feasible way of proceeding.
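The design-weight arithmetic described above can be sketched in a few lines of Python. The strata sizes, means, and sample sizes here are invented purely for illustration: a large stratum A is undersampled relative to a small stratum B, and the weight $N_h/n_h$ undoes the intentional bias:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: a large stratum A and a small stratum B with a
# very different mean (numbers invented for illustration)
N_A, N_B = 9000, 1000
pop_A = rng.normal(0.0, 1.0, N_A)
pop_B = rng.normal(10.0, 1.0, N_B)
pop_mean = np.concatenate([pop_A, pop_B]).mean()

# Oversample the small bucket: 100 respondents from each stratum
n = 100
samp_A = rng.choice(pop_A, n, replace=False)
samp_B = rng.choice(pop_B, n, replace=False)

# Design weight = N_h / n_h: how many population members each respondent stands for
w_A, w_B = N_A / n, N_B / n
weighted_mean = (w_A * samp_A.sum() + w_B * samp_B.sum()) / (w_A * n + w_B * n)
naive_mean = np.concatenate([samp_A, samp_B]).mean()  # ignores the design: biased

print(pop_mean, weighted_mean, naive_mean)
```

The unweighted mean sits roughly halfway between the two strata means, while the weighted estimate recovers the population mean.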
36,220
Are the differences between sampling clusters and sampling strata, conceptual, methodological, neither or both?
As I understand it, cluster sampling is best when the population is homogeneous, the differences between the means of the clusters are small, and the variance within a cluster is large. The aim is to use the cluster as a proxy for the population as a whole. The benefit is practical. For example, it is easier to pick one or two schools and sample the students from those schools, rather than sample one or two students from many, many schools. So you might select a small number of schools through simple random sampling and then go to those schools and use simple random sampling to select students from them. This of course requires that the schools be basically the same as each other, and that each school have a wide enough selection of students to be representative of the whole population. On the other hand, stratified sampling is best when the population is heterogeneous, there are large differences between the means of the strata, and the variance within a stratum is small. The aim is to make sure you do not miss out on the differences within your population. Leave it to random chance and simple random sampling and you might not sample small but important groups; for example, rural schools might be underrepresented. So you make sure each stratum is represented in the sample by creating a scheme that captures the stratification of the population. For example, you know your final sample will have to be 95% urban schools and 5% rural schools. Then draw simple random samples within those strata until you have the desired proportions to make up your final sample. If there is indeed wide variation within a population, stratified sampling should lead to more precise estimates compared to simple random sampling.
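The precision gain from stratifying a heterogeneous population can be checked with a small simulation; the 95%/5% urban/rural split is taken from the example above, and the score-like numbers are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: two strata with very different means but low
# within-stratum variance (95% "urban", 5% "rural" schools)
urban = rng.normal(50.0, 2.0, 9500)
rural = rng.normal(20.0, 2.0, 500)
pop = np.concatenate([urban, rural])

def srs_mean():
    # simple random sample of 100 from the whole population
    return rng.choice(pop, 100, replace=False).mean()

def stratified_mean():
    # proportional allocation: 95 urban + 5 rural, matching the strata shares
    s = np.concatenate([rng.choice(urban, 95, replace=False),
                        rng.choice(rural, 5, replace=False)])
    return s.mean()

srs = np.array([srs_mean() for _ in range(1000)])
strat = np.array([stratified_mean() for _ in range(1000)])
print(srs.std(), strat.std())  # the stratified estimator varies much less
```

Both estimators are unbiased here, but the stratified one removes the between-strata component of the sampling variance.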
36,221
Are the differences between sampling clusters and sampling strata, conceptual, methodological, neither or both?
Other answers have given good and clear examples. I'd like to try a different wording. Suppose you are going to sample a city's population to estimate its average income. Some things that will "stratify" your population: income level (high, medium, low); type of job (skilled labor, unskilled labor, etc.); education level (none, high school, bachelor, master, autodidact, skill from experience, etc.). These things "stratify" the population because people with different income levels, types of job, or education levels will have different amounts of income, while people within the same income level, type of job, or education level will have more or less the same. In contrast, some things will not "stratify" your population but rather "cluster" it: neighborhood or city block. If you can assume that the neighborhoods in the city are not really different from one another, you can treat neighborhood as a "cluster" rather than a "stratum", since you don't believe different neighborhoods will have really different incomes. In sampling methodology, strata are designed to make sure you include all the different parts of the population in your sample, i.e. you have all strata represented. In contrast, clusters are designed so that rather than picking samples from the ENTIRE population at random (which in real-life situations is expensive and more difficult), you can just pick a cluster at random and say "this cluster represents the population at a smaller scale". To see why cluster sampling is easier and cheaper than sampling entirely at random, consider sampling a city's population. Sampling directly from the city residents list will leave you dealing with sampled people who are really far away from one another. This makes the sampling harder and more expensive.
If you do cluster sampling instead, that is, you randomly choose neighborhoods/blocks and THEN sample from the residents lists of those neighborhoods, the people sampled will be easier to access because they're closer together. If all the neighborhoods of the city are not that different from one another, you can safely say that the clusters you chose still represent the entire city.
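The two-stage procedure (sample clusters, then sample within the chosen clusters) can be sketched as follows; the neighborhood counts and income numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical city: 50 similar neighborhoods of 200 residents each
neighborhoods = [rng.normal(40.0, 10.0, 200) for _ in range(50)]
pop_mean = np.concatenate(neighborhoods).mean()

# Stage 1: sample 5 neighborhoods at random (no city-wide resident list needed)
chosen = rng.choice(50, 5, replace=False)
# Stage 2: sample 40 residents from each chosen neighborhood's list
sample = np.concatenate([rng.choice(neighborhoods[k], 40, replace=False)
                         for k in chosen])
print(sample.mean())  # close to pop_mean, because the clusters are alike
```

Note that residents of the 45 unchosen neighborhoods had zero probability of selection in stage 2, which is exactly the feature that distinguishes clusters from strata.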
36,222
What does it mean to have a "gaussian prior?"
A prior is a belief you have about some quantity, typically a set of parameters, before looking at the data. Once the data are brought in, the updated belief is called the posterior. In ridge regression, a Gaussian prior on the regression coefficients means the coefficients are assumed to be distributed according to a Gaussian/Normal distribution. Of course, one needs to assume a mean and covariance structure as well.
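A small numeric sketch of this connection, assuming a zero-mean Gaussian prior so that the MAP estimate is the familiar ridge solution $(X^tX + \lambda I)^{-1}X^ty$ (the data are simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.5, size=50)

def ridge(X, y, lam):
    # MAP estimate when beta ~ N(0, (sigma^2 / lam) I): (X'X + lam I)^(-1) X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

ols = np.linalg.lstsq(X, y, rcond=None)[0]
print(ridge(X, y, 0.0))    # lam = 0 is a flat prior: identical to OLS
print(ridge(X, y, 100.0))  # a tight zero-mean prior shrinks the coefficients
```

Increasing $\lambda$ corresponds to a smaller prior variance, i.e. a stronger belief that the coefficients sit near the prior mean of zero.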
36,223
Multivariate Wasserstein metric for $n$-dimensions
Wasserstein in 1D is a special case of optimal transport. Both the R wasserstein1d and Python scipy.stats.wasserstein_distance are intended solely for the 1D special case. The algorithm behind both functions ranks discrete data according to their c.d.f.'s so that the distances and amounts to move are multiplied together for corresponding points between $u$ and $v$ nearest to one another. More on the 1D special case can be found in Remark 2.28 of Peyre and Cuturi's Computational Optimal Transport. The 1D special case is much easier than implementing linear programming, which is the approach that must be followed for higher-dimensional couplings. Linear programming for optimal transport is hardly any harder computationally than the ranking algorithm of 1D Wasserstein, however, being fairly efficient and low-overhead itself. wasserstein1d and scipy.stats.wasserstein_distance do not conduct linear programming. What you're asking about might not really have anything to do with higher dimensions, though, because you first said "two vectors a and b are of unequal length". If the source and target distributions are of unequal length, this is not really a problem of higher dimensions (since, after all, there are just "two vectors a and b"), but a problem of unbalanced distributions (i.e. "unequal length"), which is in itself another special case of optimal transport that can introduce difficulties into the Wasserstein optimization. Some work-arounds for dealing with unbalanced optimal transport have already been developed, of course. If it really is higher-dimensional, multivariate transportation that you're after (not necessarily unbalanced OT), you shouldn't pursue your attempted code any further, since you are apparently trying to extend the 1D special case of Wasserstein, and that 1D special case cannot be extended to a multivariate setting. Look into linear programming instead. 
The pot package in Python, for starters, is well-known, whose documentation addresses the 1D special case, 2D, unbalanced OT, discrete-to-continuous and more.
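A minimal sketch of the linear-programming route using only scipy; the point clouds are arbitrary, and since these particular ones are 1D, the LP answer should agree with scipy.stats.wasserstein_distance:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import wasserstein_distance

def ot_lp(x, y):
    """Wasserstein-1 between uniform point clouds x (n,d) and y (m,d) via LP."""
    n, m = len(x), len(y)
    C = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2)  # pairwise cost matrix
    A_eq = []
    for i in range(n):                      # row sums: mass leaving each x_i
        row = np.zeros((n, m)); row[i, :] = 1.0
        A_eq.append(row.ravel())
    for j in range(m):                      # column sums: mass arriving at y_j
        col = np.zeros((n, m)); col[:, j] = 1.0
        A_eq.append(col.ravel())
    b_eq = np.concatenate([np.full(n, 1.0 / n), np.full(m, 1.0 / m)])
    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.fun

# sanity check against the 1D special case
x = np.array([[0.0], [1.0], [3.0]])
y = np.array([[5.0], [6.0], [8.0]])
print(ot_lp(x, y))
print(wasserstein_distance(x.ravel(), y.ravel()))  # same value in 1D
```

The same `ot_lp` runs unchanged on $d$-dimensional inputs, which is precisely what the ranking-based 1D functions cannot do.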
36,224
What is limiting about a linear model?
I will cite an educational reference to indicate the possible drawbacks. To quote for the case of a Simple Linear Regression Model: Objective: model the expected value of a continuous variable, $Y$, as a linear function of the continuous predictor, $X$: $E(Y_i) = \beta_0 + \beta_1 x_i$. Model structure: $Y_i = \beta_0 + \beta_1 x_i + \epsilon_i$. Model assumptions: $Y$ is normally distributed, errors are normally distributed, $\epsilon_i \sim N(0, \sigma^2)$, and independent. In the corresponding case of Generalized Linear Models (GLMs) the assumptions cited include, to quote from the same reference: The data $Y_1, Y_2, ..., Y_n$ are independently distributed, i.e., cases are independent. The dependent variable $Y_i$ does NOT need to be normally distributed, but it typically assumes a distribution from an exponential family (e.g. binomial, Poisson, multinomial, normal, ...). A GLM does NOT assume a linear relationship between the dependent variable and the independent variables, but it does assume a linear relationship between the transformed response, in terms of the link function, and the explanatory variables; e.g., for binary logistic regression $\text{logit}(\pi) = \beta_0 + \beta x$. Independent (explanatory) variables can even be power terms or some other nonlinear transformations of the original independent variables. The homogeneity of variance does NOT need to be satisfied. In fact, it is not even possible in many cases given the model structure, and overdispersion (when the observed variance is larger than what the model assumes) may be present. Errors need to be independent but NOT normally distributed. It uses maximum likelihood estimation (MLE) rather than ordinary least squares (OLS) to estimate the parameters, and thus relies on large-sample approximations. So, the differences from Simple Linear Regression essentially relate to a normality assumption for $Y$ and the error terms, which GLMs do NOT require, though they generally operate within the exponential family of distributions. 
Also, homogeneity of variance is only in place for Simple Linear Regression, while a GLM can specify an appropriate variance-covariance structure. Lastly, GLMs generally employ a numerically more complex maximum likelihood estimation routine which is not required for ordinary regression. To answer the particular question, "but what would be a drawback/incapability of just a linear model like this?": the answer is the correct specification of the error structure, and even of the diagonal matrix relating to the variances, when some explanatory variables involve powers.
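As a concrete sketch of the MLE point, here is a Poisson GLM with log link fitted by iteratively reweighted least squares, a standard GLM fitting routine; the data are simulated, and the coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ beta_true))  # counts: not normal, variance = mean

# Fit a Poisson GLM with log link by iteratively reweighted least squares (MLE)
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)            # inverse link
    W = mu                           # working weights (Var(Y_i) = mu_i)
    z = X @ beta + (y - mu) / mu     # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

print(beta)  # close to beta_true, with no normality/homoscedasticity assumed
```

Note that neither normality nor constant variance is invoked anywhere: the link function and the mean-variance relationship of the Poisson family do the work that the normal-error assumption does in ordinary regression.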
36,225
What is limiting about a linear model?
$$Y=\beta_0+\beta_1x_1+\beta_2 x_2^2+\beta_3 e^{5x_3}+\cdots+\epsilon$$ are linear models. Visually, I would expect such flexibility would let me model any sort of shape between the response and the predictors if I plot my data. I haven't yet learnt more advanced models, but what would be a drawback/incapability of just a linear model like this? Yes, you can model any sort of shape. But the flexibility of the model, as a function of the parameters $\beta_i$, is limited. The model parameters only occur in the linear part. So you can't for instance fit this model $$Y=\beta_0+\beta_1x_1+\beta_2 x_2^{\beta_4} +\beta_3 e^{\beta_5 x_3}+\cdots+\epsilon$$ You can change your model 'shape' $\beta_2 x_2^2+\beta_3 e^{5x_3}$ by changing those coefficients $2$ and $5$, but they are not free model parameters that can be changed in the fitting procedure. (I realise you wouldn't be able to use linear regression on $Y=\beta_0 + \beta_1 x^{\beta_2}+\epsilon$, for instance, but I'm having trouble visualising/understanding how that would be preventive/inflexible in modelling) This is a bit of a loaded question. There is not really anything to understand visually. You can make any shape of curve with a linear regression. But multiple shapes will not be available within a single model. For instance you can have the shapes: $$Y=\beta_0 + \beta_1 x^2+\epsilon$$ or $$Y=\beta_0 + \beta_1 x^3+\epsilon$$ or using whatever other fixed coefficient. But only with a more general non-linear model can you capture all those possible shapes at once. $$Y=\beta_0 + \beta_1 x^{\beta_2}+\epsilon$$ This is for instance useful when the coefficient $\beta_2$ is an unknown parameter that you wish to determine using inference.
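A quick numerical illustration of that last point, with invented data and exponent: a linear fit must fix the exponent in advance, while nonlinear least squares can estimate it:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = np.linspace(1.0, 5.0, 80)
y = 1.0 + 2.0 * x**2.5 + rng.normal(scale=0.5, size=x.size)  # true exponent 2.5

# Linear model: the exponent is fixed in advance; only the betas are estimated
X2 = np.column_stack([np.ones_like(x), x**2.0])   # guesses the shape x^2
b_lin = np.linalg.lstsq(X2, y, rcond=None)[0]

# Nonlinear model: the exponent itself is a free parameter of the fit
def model(x, b0, b1, b2):
    return b0 + b1 * x**b2

popt, _ = curve_fit(model, x, y, p0=[1.0, 2.0, 2.0])
print(popt)  # the fitted exponent lands near 2.5
```

No single linear design matrix could have returned the exponent itself; one would have to try $x^2$, $x^{2.5}$, $x^3$, ... as separate models.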
36,226
What is limiting about a linear model?
Just an example: step functions cannot be represented by linear regressions. A factory at the seaside has a wall to protect it from the waves. Waves smaller than 5 meters stay behind the wall and do no harm. Waves above 5 meters lead to water coming into the cooler, short-circuiting it, and there is a loss worth 10 million dollars. Model the loss as a function of wave height. It is the simplest problem imaginable for a decision tree regression, but not at all a good match for a linear model (even logistic regression claims perfect separation...).
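This is easy to check with scikit-learn; the 5-meter threshold and 10-million loss are taken from the example above, the grid of wave heights is made up:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# Loss (in millions) is a step function of wave height: 0 below 5 m, 10 above
waves = np.linspace(0.0, 10.0, 101).reshape(-1, 1)
loss = np.where(waves.ravel() > 5.0, 10.0, 0.0)

tree = DecisionTreeRegressor(max_depth=1).fit(waves, loss)
lin = LinearRegression().fit(waves, loss)

print(tree.score(waves, loss))  # a single split near 5 m fits perfectly
print(lin.score(waves, loss))   # a straight line cannot reproduce a step
```

A depth-1 tree is exactly one threshold, so it matches the step perfectly, while the best straight line leaves a substantial residual on both sides of the jump.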
36,227
What is limiting about a linear model?
There's little limiting about a linear model per se. In fact, there is Cybenko's universal approximation theorem for neural networks! This has the output of a single layer network as a linear function of some constructed predictors. The problem lies in finding the right set of predictors, generalization out of sample and so forth. In practice these are hard problems.
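A minimal sketch of that idea: fix a random hidden layer, and the output weights are just a linear least-squares fit in the constructed predictors. The target function, layer size, and weight scale below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200)[:, None]
y = np.sin(3 * x).ravel()  # arbitrary target function

# One hidden layer with random, frozen weights: constructed predictors tanh(xW + b)
W = rng.normal(scale=3.0, size=(1, 200))
b = rng.uniform(-np.pi, np.pi, 200)
H = np.tanh(x @ W + b)

# The output layer is a plain linear model in these predictors
coef = np.linalg.lstsq(H, y, rcond=None)[0]
err = np.max(np.abs(H @ coef - y))
print(err)  # small: the linear-in-features model tracks sin(3x) closely
```

Of course, this sidesteps the hard part the answer points at: here the predictors were chosen blindly at random, and nothing guarantees they generalize beyond the training grid.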
36,228
Kernels in SVM primal form
First, some terminology clarification, which is important for further understanding: In your second formula, applying $\phi(\mathbf{x}^{(i)})$ is not using the kernel trick! The kernel trick is computing $K(\mathbf{x}^{(i)}, \mathbf{x}^{(j)})$ without computing $\phi(\mathbf{x}^{(i)})$ or $\phi(\mathbf{x}^{(j)})$, and even without the need to know their form explicitly. With that in mind, to answer your questions: Recall that, for SVMs, $\mathbf{w}$ is defined as a linear combination of the data points: $$ \mathbf{w} = \sum_{j=1}^m \alpha_j \phi(\mathbf{x}^{(j)}) $$ This is (the?) essence of Support Vector Machines. Since they attempt to minimise $\mathbf{w}^t \cdot \mathbf{w}$, many $\alpha_j$'s will be zero, meaning that the corresponding $\mathbf{x}^{(j)}$'s do not affect the boundary. Those which do, whose corresponding $\alpha_j$'s are non-zero, are the support vectors. With this definition of $\mathbf{w}$ and applying the kernel trick, we come to: $$ \mathbf{w}^t \cdot \phi(\mathbf{x}^{(i)}) = \sum_{j=1}^m \alpha_j \phi(\mathbf{x}^{(j)})^t \cdot \phi(\mathbf{x}^{(i)}) = \sum_{j=1}^m \alpha_j K(\mathbf{x}^{(i)}, \mathbf{x}^{(j)}) $$ or, in vector notation: $$ \mathbf{w}^t \cdot \phi(\mathbf{x}^{(i)}) = \alpha^t \cdot \mathbf{f}^{(i)} $$ where we define: $$ \mathbf{f}^{(i)} = [ ~ K(\mathbf{x}^{(i)}, \mathbf{x}^{(1)}), K(\mathbf{x}^{(i)}, \mathbf{x}^{(2)}), ..., K(\mathbf{x}^{(i)}, \mathbf{x}^{(m)}) ~ ]^t $$ This is almost the Ng notation. Recall that we also need to optimise for $b$, and Ng, for a more compact notation, puts $b$ as the first component of $\theta$ and must therefore prepend a one to the vector $\mathbf{f}^{(i)}$. 
He is actually saying: $$ b + \mathbf{w}^t \cdot \phi(\mathbf{x}^{(i)}) = \theta^t \cdot \mathbf{f}^{(i)} $$ where $$ \mathbf{f}^{(i)} = [ ~ 1, K(\mathbf{x}^{(i)}, \mathbf{x}^{(1)}), K(\mathbf{x}^{(i)}, \mathbf{x}^{(2)}), ..., K(\mathbf{x}^{(i)}, \mathbf{x}^{(m)}) ~ ]^t $$ and $$ \theta = [ ~ b, \alpha^{(1)}, \alpha^{(2)}, ..., \alpha^{(m)} ~ ]^t $$ The rest of his notation is just defining $cost_k$ as an affine function of the above dot product (to get the "$1 - $" term), and accommodating the fact that his class labels are not $(-1, 1)$ (which are often used in the machine learning community), but $(0, 1)$ (how they are typically used in statistics, as in logistic regression). As for the vector dimensionality, that's again explained by the kernel trick. SVMs never need to compute $\phi(\mathbf{x}^{(i)})$, because these terms never appear alone. They only appear as parts of dot products, which are computed by the kernel function (see my second formula above). The dimensionality of $\mathbf{f}^{(i)}$ has absolutely nothing to do with the dimensionality of $\phi$. $\mathbf{f}^{(i)}$ is simply a vector of all dot products (or kernel function evaluations) between $\mathbf{x}^{(i)}$ and every $\mathbf{x}^{(j)}$ (I'm ignoring $b$ here, which is the ($m+1$)th dimension). Correct me if I'm wrong, but I believe there is some misunderstanding in your second question. As I've shown above, there is a dot product in the primal form, and you can substitute the kernel function for it. The purpose of SMO (and other decomposition algorithms) is to make computation feasible for large amounts of data. Standard gradient descent algorithms would require $O(m^2)$ memory for storing all possible kernel values. Decomposition algorithms, specifically designed for SVMs, work on smaller subsets of data.
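A small sketch of the construction (made-up data, Gaussian kernel) showing that $\mathbf{f}^{(i)}$ has $m+1$ entries regardless of the dimensionality of $\phi$, which for the RBF kernel is infinite:

```python
import numpy as np

def rbf_kernel(u, v, gamma=1.0):
    # K(u, v) = exp(-gamma * ||u - v||^2); phi itself is never computed.
    return np.exp(-gamma * np.sum((u - v) ** 2))

# Made-up training set: m = 4 points in 2 dimensions.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

def f_vector(x_i):
    # Ng's f^{(i)}: a leading 1 (so b can sit inside theta), then the
    # kernel evaluations K(x_i, x_j) against every training point.
    return np.concatenate(([1.0], [rbf_kernel(x_i, x_j) for x_j in X]))

f0 = f_vector(X[0])
print(f0.shape)  # (m + 1,) = (5,), independent of the dimension of phi
print(f0[1])     # K(x0, x0) = 1 for the RBF kernel
```
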
What is the difference between intervention and conditional distribution?
It is true that $C$ is not a function of $E;$ moreover, as the causal diagram $C\to E$ clearly says, we are thinking of $C$ as a cause of $E.$ However, in a causal diagram, you can use the $\newcommand{\doop}{\operatorname{do}} \doop$ operator on any variable you like; doing so deletes all arrows going into that node. This is "graphical surgery", so to speak. If we are intervening on $E,$ we must delete all arrows going into $E,$ except from the exogenous variables $N_C,N_E,$ etc.; this produces the graph $C\; E,$ with no arrows at all. We simultaneously modify the structural equations so as to set $E=2.$ That is, the structural equations become \begin{align*} C&=N_C\\ E&=2. \end{align*} Hence, $C$ is distributed according to $N_C$ when we intervene on $E.$ So much for $P(C|\doop(E)).$ What about $P(C|E)?$ This is going to be a similar calculation as before, except that we don't do graph surgery. Graph surgery equals intervention equals the $\doop$ operator. The only relationship we have between $C$ and $E$ is the equation $E=4C+N_E.$ When doing mere conditionals (without the $\doop$ operator), it is perfectly permissible to use any of the structural model equations. But they will not be modified this time, because we're not intervening. Hence, you get \begin{align*} 2&=4C+N_E\\ \frac{2-N_E}{4}&=C\\ \frac12-\frac{N_E}{4}&=C. \end{align*} In summary: the $\doop$ operator forces graph surgery and structural equation model alteration. Regular conditional probabilities do not.
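A Monte Carlo sketch makes the contrast concrete. I assume standard normal noise terms $N_C, N_E$ here for concreteness, which the structural equations themselves leave unspecified:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
N_C = rng.normal(size=n)
N_E = rng.normal(size=n)
C = N_C          # structural equation C = N_C
E = 4 * C + N_E  # structural equation E = 4C + N_E

# Conditioning: restrict attention to samples where E is approximately 2.
near_2 = np.abs(E - 2) < 0.05
print(C[near_2].mean())  # about 8/17: observing E is informative about C

# Intervention do(E = 2): graph surgery replaces the E equation by E = 2,
# leaving C = N_C untouched, so C keeps its original distribution.
print(N_C.mean())        # about 0: setting E tells us nothing about C
```
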
Explanation of Joint Probability and Independence
Informally, independence means that knowing the value of one random variable gives you no extra information about the other. But if $0 \lt X+Y \lt 1$, then knowing $X=\frac34$ tells you $Y < \frac14$. Meanwhile, knowing $X=\frac13$ tells you $Y$ can take values up to $\frac23$. So the value of $X$ is affecting the distribution of possible values of $Y$, and thus they are not independent. The indicator function has this effect, because it cannot be separated into an $X$ part and a $Y$ part.
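This can be checked by simulation: sample uniformly from the triangle $0 < x$, $0 < y$, $x + y < 1$ and look at the range of $Y$ for small and large $X$:

```python
import numpy as np

# Rejection-sample uniform points on the triangle x > 0, y > 0, x + y < 1.
rng = np.random.default_rng(0)
pts = rng.uniform(size=(200_000, 2))
pts = pts[pts.sum(axis=1) < 1]
x, y = pts[:, 0], pts[:, 1]

# The admissible range of Y shrinks as X grows: that is dependence.
print(y[x > 0.7].max())  # necessarily below 0.3
print(y[x < 0.1].max())  # can get close to 1
```
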
Explanation of Joint Probability and Independence
The necessary (not sufficient) condition for independence is that $f(x,y)$ can be factored into something like $g(x)h(y)$. For that to happen, $I(x,y)$ would have to factor like $I_A(x)I_B(y)$, but the author says that there is no way to do it, basically because of the constraint $0<x+y<1$. Assume $I(x,y)=I_A(x)I_B(y)$, where $A=(0,1)$, $B=(0,1)$ (it's $(0,1)$ because there is density for $x,y$ everywhere in $(0,1)$). Then $I_A(x)$ and $I_B(y)$ would both be non-zero for, e.g., $x=0.3, y=0.8$, yet $I(x,y)$ is zero there because $x+y>1$, which is a contradiction.
Explanation of Joint Probability and Independence
A sufficient test for detecting non-independence of random variables is the eyeball test (described briefly in this answer of mine on stats.SE and in more detail in an answer on math.SE) which says that if the support of the joint density is not a rectangle with sides parallel to the coordinate axes, then the random variables are dependent. Here, the support of $f_{X,Y}(x,y)$ is a triangle and so we can assert that the random variables are dependent without the need for laboriously calculating $f_X(x)$ and $f_Y(y)$ and then checking whether $f_{X,Y}(x,y)$ equals the product $f_X(x)f_Y(y)$ for all real numbers $x$ and $y$ as the definition of independence says (or should say). That $f_{X,Y}(x,y)$ can be expressed as $g(x)h(y)$ for some real numbers $x$ and $y$ (as it does in this instance) is not enough to claim independence.
Explanation of Joint Probability and Independence
I don't know if this makes it any easier to understand, but consider a function that's equal to $x^2y^3$ when $x<y$ and $0$ otherwise. Now consider the function $x^2y^3 (|y-x|+y-x)/(2y-2x)$. If you plot out the second function, you'll find that it's the same as the first function. It's in more complicated form, but it's in closed form rather than having a conditional definition. While the first function appears to factor, the second one doesn't. If you want to factor out the $y^3$, you still have the $(|y-x|+y-x)/(2y-2x)$ to deal with.
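A quick numerical check (away from the line $y = x$, where the closed form divides by zero) confirms the two definitions agree:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 1000)
y = rng.uniform(0, 1, 1000)
mask = np.abs(y - x) > 1e-9  # avoid the singularity at y = x

piecewise = np.where(x < y, x**2 * y**3, 0.0)
closed = x**2 * y**3 * (np.abs(y - x) + y - x) / (2 * y - 2 * x)

# The factor (|y-x| + y - x)/(2y - 2x) is 1 when y > x and 0 when y < x.
print(np.max(np.abs(piecewise[mask] - closed[mask])))
```
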
The difference between DID and fixed effect model
Let's begin with an understanding of the standard fixed effects estimator before extending our intuition to make sense of how difference-in-differences (DD) estimation may offer any improvements. Assume you have repeated observations of individuals across time. For example, let’s say we want to estimate the following model: $$ y_{it} = X’_{it}\beta + \alpha_{i} + u_{it}, $$ where $\alpha_{i}$ represents a fixed parameter. We can define this fixed effect as the individual heterogeneity that is different across individuals but stable over time. Some of these time-invariant variables may be observed and known to a researcher (e.g., sex, race, ethnicity, etc.); some may be unobserved yet still known to be a source of individual heterogeneity (e.g., innate ability, stable personality characteristics, temperament, etc.); and, well, some of the other stable factors may be unobserved and unbeknownst to a researcher. In a fixed effects specification, demeaning removes (i.e., ‘sweeps out’) the fixed effect, $\alpha_{i}$. The average of a time-invariant variable is the time-invariant variable, and so demeaning 'wipes out' (subtracts out) the stable characteristics of individuals that differ across individuals but are stable over time. Who is in control of a change in treatment/exposure status? It is the changes individuals experience in life that motivate us to use a fixed effects approach. However, these decisions are typically under the control of the individual. People change jobs; they get married; they earn more money; they change their political affiliation; they move; they have children; they become unionized; they join the military; they drop out of school; they get arrested. In practice, we wish to understand how this change in people's lives (treatment/exposure) affects the change in another variable (outcome). For example, does more education reduce infant mortality? Does one's union status affect wages? 
But, when changes in treatment/exposure status are under the control of the individual units we observe over time, then concerns about unobserved factors that are correlated with changes in treatment/exposure status remain. Note, the foregoing equation could also be viewed as having two sources of error: $\alpha_{i}$ and $u_{it}$. The idiosyncratic, time-varying factors embedded in $u_{it}$ typically motivate researchers to acquire a control group. Think about the multitude of unobserved time-varying factors that might influence individuals’ decisions across time. Oftentimes, the individual is in control of these decisions, not the researcher. Limitation of fixed effects? Fixed effects identifies effects for individuals who do change. But, why do some people change, and not others? This leads to one of the major drawbacks of fixed effects: it cannot investigate the effects of a within-unit change in the independent variable on the within-unit change in some outcome variable for individuals who do not experience a change. Simply put, a fixed effects model only uses within-unit variation. The model identifies effects within units, and it is constant within the unit. This is a special kind of control, as we controlled for the stable characteristics that stably made you, you. The counterfactual in a fixed effects specification is the treated/exposed individual. That is, individuals act as controls for themselves. Again, the model does not address changes over time. One method to overcome time-varying confounding is to collect data on individuals or entities (e.g., firms, counties, states, etc.) not exposed to the treatment/exposure of interest. This allows you to partition units into a treatment or control condition. Now you can observe treated and untreated groups as they move through time. The external control group is the counterfactual for what would have occurred to a treated/exposed group in the absence of treatment exposure. Enter the DD model. 
Under a DD specification, we are measuring the before-and-after change in the outcome of the treatment group relative to the before-and-after change in the outcome of the control group. It is important to note a subtle distinction here. In DD settings, the change in treatment exposure is typically determined outside of the unit of observation. For example, a policy/law may be introduced at the county/state level and affect a particular group of individuals/entities within that state. Oftentimes, these policies/laws don't go into effect everywhere. Thus, these 'non-adopters' can serve as a suitable counterfactual. This is one of the attractive features of DD models; you can exploit this source of variation. "It is said that the DID (difference-in-difference) is a special case of the fixed-effect model." Correct. Texts will often refer to DD as a “special case” of fixed effects. Both fixed effects and DD models include “fixed effects” for individuals or higher-level entities (e.g., firms, counties, states, etc.) that control for factors—both observed and unobserved—that are constant over time within those individuals or higher-level entities. Again, DD methods require at least some units to be unexposed to the treatment/policy/intervention. And, only information at the group level is required for identification of your treatment effect. Here is the canonical DD setup with two groups and two periods: $$ y_{ist} = \alpha + \gamma T_{s} + \lambda d_{t} + \delta(T_{s} \cdot d_{t}) + \epsilon_{ist}, $$ where we may observe individual/entity $i$, in state $s$, at time period $t$. This is an example where data is ‘aggregated up’ to a higher level, where some states introduce a new law/policy and others do not. You could estimate this equation with dummies for all groups (states), but the dummies (i.e., “fixed effects”) will absorb the treatment variable. This becomes clearer when you have different states introducing laws/policies at different times. 
The generalization of the foregoing equation would include dummies for each state and each time period but is otherwise unchanged. For example, $$ y_{ist} = \gamma_{s} + \lambda_{t} + \delta D_{st} + \epsilon_{ist}, $$ where the new treatment dummy $D_{st}$ is the same as before $(T_{s} \cdot d_{t})$. Note, $\gamma_{s}$ denotes state fixed effects. The inclusion of dummy variables for all states is algebraically equivalent to estimation in deviations from means. Due to the inclusion of fixed effects at this higher-level of aggregation, DD methods do allow for some selection on the basis of time-invariant unobserved characteristics. I hope this gave you a better understanding of why DD is a special case of fixed effects. As for establishing causality, fixed effects doesn’t always cut it. It’s up to you to show that the policy/treatment change is plausibly unconfounded.
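A minimal simulated 2x2 example (made-up data and coefficients) shows that the DD estimate is just the difference of before/after differences across groups:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
treated = rng.integers(0, 2, n)  # T_s: ever-treated group indicator
post = rng.integers(0, 2, n)     # d_t: post-period indicator
delta = 2.0                      # true treatment effect (my choice)

# Outcome with group effect, period effect, and the interaction of interest.
y = (1.0 + 0.5 * treated + 0.3 * post
     + delta * treated * post + rng.normal(size=n))

def cell_mean(t, p):
    return y[(treated == t) & (post == p)].mean()

dd = (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))
print(dd)  # close to delta: group and period effects difference out
```
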
The difference between DID and fixed effect model
Let's begin with an understanding of the standard fixed effects estimator before extending our intuition to make sense of how difference-in-differences (DD) estimation may offer any improvements. Assu
The difference between DID and fixed effect model Let's begin with an understanding of the standard fixed effects estimator before extending our intuition to make sense of how difference-in-differences (DD) estimation may offer any improvements. Assume you have repeated observations of individuals across time. For example, let’s say we want to estimate the following model: $$ y_{it} = X’_{it}\beta + \alpha_{i} + u_{it}, $$ where $\alpha_{i}$ represents a fixed parameter. We can define this fixed effect as the individual heterogeneity that is different across individuals but stable over time. Some of these time-invariant variables may be observed and known to a researcher (e.g., sex, race, ethnicity, etc.); some may be unobserved yet still known to be a source of individual heterogeneity (e.g., innate ability, stable personality characteristics, temperament, etc.); and, well, some of the other stable factors may be unobserved and unbeknownst to a researcher. In a fixed effects specification, demeaning removes (i.e., ‘sweeps out’) the fixed effect, $\alpha_{i}$. The average of a time-invariant variable is the time-invariant variable, and so demeaning 'wipes out' (subtracts out) the stable characteristics of individuals that differ across individuals but are stable over time. Who is in control of a change in treatment/exposure status? It is the changes individuals experience in life that motivate us to use a fixed effects approach. However, these decisions are typically under the control of the individual. People change jobs; they get married; they earn more money; they change their political affiliation; they move; they have children; they become unionized; they join the military; they drop out of school; they get arrested. In practice, we wish to understand how this change in people's lives (treatment/exposure) affects the change in another variable (outcome). For example, does more education reduce infant mortality? Does one's union status affect wages? 
But, when changes in treatment/exposure status are under the control of the individual units we observe over time, then concerns about unobserved factors that are correlated with changes in treatment/exposure status remain. Note, the foregoing equation could also be viewed as having two sources of error: $\alpha_{i}$ and $u_{it}$. The idiosyncratic, time-varying factors embedded in $u_{it}$ typically motivates researchers to acquire a control group. Think about the multitude of unobserved time-varying factors that might influence individuals’ decisions across time. Often times, the individual is in control of these decisions, not the researcher. Limitation of fixed effects? Fixed effects identifies effects for individuals who do change. But, why do some people change, and not others? This leads to one of the major drawbacks of fixed effects: it cannot investigate the effects of a within-unit change in the independent variable on the within-unit change in some outcome variable for individuals who do not experience a change. Simply put, a fixed effects model only uses within-unit variation. The model identifies effects within units, and it is constant within the unit. This is a special kind of control, as we controlled for the stable characteristics that stably made you, you. The counterfactual in a fixed effects specification is the treated/exposed individual. That is, individuals act as controls for themselves. Again, the model does not address changes over time. One method to overcome time-varying confounding is to collect data on individuals or entities (e.g., firms, counties, states, etc.) not exposed to the treatment/exposure of interest. This allows you to partition units into a treatment or control condition. Now you can observe treated and untreated groups as they move through time. The external control group is the counterfactual for what would have occurred to a treated/exposed group in the absence of treatment exposure. Enter the DD model. 
Under a DD specification, we are measuring the before-and-after change in the outcome of the treatment group relative to the before-and-after change in the outcome of the control group. It is important to note a subtle distinction here. In DD settings, the change in treatment exposure is typically determined outside of the unit of observation. For example, a policy/law may be introduced at the county/state level and affect a particular group of individuals/entities within that state. Often times, these policies/laws don't go into effect everywhere. Thus, these 'non-adopters' can serve as a suitable counterfactual. This is one of the attractive features of DD models; you can exploit this source of variation. It is said that the DID (difference-in-difference) is a special case of the fixed-effect model Correct. Texts will often refer to DD as a “special case” of fixed effects. Both fixed effects and DD models include “fixed effects” for individuals or higher-level entities (e.g., firms, counties, states, etc.) that control for factors—both observed and unobserved—that are constant over time within those individuals or higher-level entities. Again, DD methods require at least some units to be unexposed to the treatment/policy/intervention. And, only information at the group level is required for identification of your treatment effect. Here is the canonical DD setup with two groups and two periods: $$ y_{ist} = \alpha + \gamma T_{s} + \lambda d_{t} + \delta(T_{s} \cdot d_{t}) + \epsilon_{ist}, $$ where we may observe individual/entity $i$, in state $s$, at time period $t$. This is an example where data is ‘aggregated up’ to a higher-level, where some states introduce a new law/policy and others do not. You could estimate this equation with dummies for all groups (states), but the dummies (i.e., “fixed effects”) will absorb the treatment variable. This becomes clearer when you have different states introducing laws/policies at different times. 
The generalization of the foregoing equation would include dummies for each state and each time period but is otherwise unchanged. For example, $$ y_{ist} = \gamma_{s} + \lambda_{t} + \delta D_{st} + \epsilon_{ist}, $$ where the new treatment dummy $D_{st}$ is the same as before $(T_{s} \cdot d_{t})$. Note, $\gamma_{s}$ denotes state fixed effects. The inclusion of dummy variables for all states is algebraically equivalent to estimation in deviations from means. Due to the inclusion of fixed effects at this higher-level of aggregation, DD methods do allow for some selection on the basis of time-invariant unobserved characteristics. I hope this gave you a better understanding of why DD is a special case of fixed effects. As for establishing causality, fixed effects doesn’t always cut it. It’s up to you to show that the policy/treatment change is plausibly unconfounded.
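To make the mechanics concrete, here is a minimal pure-Python sketch of the two-group, two-period DD estimator. All coefficient values and the sample size are invented for illustration: the four group-by-period cells are simulated from the canonical equation, and the estimate is the before/after change of the treated minus the before/after change of the controls.

```python
import random

random.seed(0)

# invented parameters for the canonical model:
# y = alpha + gamma*T + lambda*d + delta*(T*d) + noise
ALPHA, GAMMA, LAM, DELTA = 1.0, 0.5, 0.3, 2.0

def cell(n, T, d):
    """Simulate n outcomes for group T (treated?) in period d (post?)."""
    return [ALPHA + GAMMA * T + LAM * d + DELTA * T * d + random.gauss(0, 0.1)
            for _ in range(n)]

mean = lambda ys: sum(ys) / len(ys)
g = {(T, d): mean(cell(5000, T, d)) for T in (0, 1) for d in (0, 1)}

# before/after change of the treated minus before/after change of the controls
dd = (g[(1, 1)] - g[(1, 0)]) - (g[(0, 1)] - g[(0, 0)])
print(dd)  # close to the true delta of 2.0
```

Note how the group constants $\alpha$ and $\gamma T_s$ and the common time trend $\lambda d_t$ are differenced away, leaving only $\delta$.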
the approximation power of Gaussian mixture models?
Goodfellow et al. 2016, p. 65 states: A Gaussian mixture model is a universal approximator of densities, in the sense that any smooth density can be approximated with any specific nonzero amount of error by a Gaussian mixture model with enough components. M. Carreira ascribes this property to kernel density estimation with reference to Scott 1992 and another source that I could not find. Given the connection between KDE and GMMs this is understandable. The user Xi'an provided an explanation for the above statement in this answer. While this answers the question, it has to be noted that although a GMM can theoretically approximate any smooth density, it shouldn't be used as a general-purpose model. Fitting a mixture of many components can quickly become more computationally expensive than using a better-suited parametric model. Examples of this could be distributions with very thin peaks, which one needs to approximate with very thin bandwidths, as well as distributions with long tails, which will be difficult to get right with either very wide Gaussians or many small ones. In these and probably many other cases it is preferable to use a better-suited, if less general, model.
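The KDE connection mentioned above can be made concrete: an equal-weight mixture with one narrow Gaussian per sample point is exactly a Gaussian-kernel density estimate, and with many components it tracks a smooth target closely. A pure-Python sketch follows; the logistic target, bandwidth, and sample size are arbitrary choices for illustration.

```python
import math
import random

random.seed(1)

# target: standard logistic density (smooth everywhere)
f = lambda x: math.exp(-x) / (1 + math.exp(-x)) ** 2

# draw samples by inverse-CDF, then build an equal-weight Gaussian mixture
# with one narrow component per sample -- i.e. a Gaussian-kernel KDE
n, h = 20000, 0.2
samples = [math.log(u / (1 - u)) for u in (random.random() for _ in range(n))]

def mixture(x):
    norm = 1 / (h * math.sqrt(2 * math.pi))
    return sum(norm * math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples) / n

# the 20000-component mixture is close to the target at every test point
errs = [abs(mixture(x) - f(x)) for x in (-2, -1, 0, 1, 2)]
print(max(errs))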
the approximation power of Gaussian mixture models?
Dalal and Hall's "Approximating Priors by Mixtures of Natural Conjugate Priors" discusses estimating arbitrary densities with mixtures of normals (or, for that matter, mixtures of other conjugate prior densities). This paper and papers which cite it discuss the details of how to approximate densities arbitrarily well with GMMs and other mixtures. This can be a useful way to approximate priors, since a mixture of conjugate priors is also conjugate. Dalal, S. R., and W. J. Hall. "Approximating Priors by Mixtures of Natural Conjugate Priors." Journal of the Royal Statistical Society. Series B (Methodological), vol. 45, no. 2, 1983, pp. 278–286. JSTOR, www.jstor.org/stable/2345533. Accessed 7 Dec. 2020.
p-values change after mean centering with interaction terms. How to test for significance?
But I do not understand what it means by "correct test for significance". Can someone explain what he's referring to? If I were you I would post a comment to that answer by @EdM, otherwise, unless they actually see this question and answers themself, we can only make an informed guess. Having said that, what I think is meant by that statement is that the model must include both the main effect and the interaction in order to make correct inferences. There could be some rare cases where it is not necessary to include the main effect, but as a good general rule, you should. Now, looking at the output from your two models, the first thing I notice is: the condition number is large, 2.17e+03. This might indicate that there are strong multicollinearity or other numerical problems and also note that this warning is absent from the centered model. One consequence of multicollinearity is that it can inflate standard errors, which increases p values. Your model contains an interaction which is a product of two other variables. Depending on the scale it might be the case that there is a high correlation between the interaction and the variables themselves and this could cause inflated p values. Centering variables often reduces correlation between them when nonlinear terms (such as an interaction) are included. Without access to the data itself it is hard to say if this is what is actually happening, but it's my best informed guess. Your first port of call should be a correlation matrix between all the predictors and this will give you a big hint if this is actually the cause. However, further inspection of the output reveals that the R squared for both models is 1. This indicates that there is a problem somewhere. Without access to the data it is very difficult to see where that might be. 
As to the reason why the estimates and p values for the main effects change after centering, first, note that in a model without an interaction term, mean-centering the variables will change only the intercept term. The coefficients and their standard errors for the other variables will be unchanged. However, in the presence of an interaction, the main effects no longer have the same interpretation. They are interpreted as the change in the outcome variable for a 1 unit change of the variable in question, when the other main effect that it is interacted with is at zero (or in the case of a categorical variable, its reference level). This implies that, after centering the variables, the estimates and their standard errors for the main effects that are involved in an interaction will change (and hence the p values too), because zero now has a different meaning after centering, but the estimate and the standard error for the interaction itself will remain unchanged. In other words the tests are different. Looking at the output, this is exactly what has happened. Edit: To provide better understanding: To understand the last point more fully we can write out the equations for two simple models, one without centering, and one with centering, with two predictors, $x_1$ and $x_2$ along with their interaction. Firstly, the original (uncentered) model is: $$\mathbb{E}[Y] = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1x_2$$ Denote the centered variables by $z_1$ and $z_2$, such that $$ \begin{align} z_1 &= x_1 - \mu_1 \text{ and} \\ z_2 &= x_2 - \mu_2 \end{align} $$ where $\mu_1$ and $\mu_2$ are the means of $x_1$ and $x_2$ respectively. 
We can now write the model with centering in terms of the centered variables and the means of the uncentered variables: $$\mathbb{E}[Y] = \beta_0 + \beta_1 (z_1 + \mu_1) + \beta_2 (z_2 + \mu_2) + \beta_3 (z_1 + \mu_1) (z_2 + \mu_2)$$ Expanding: $$\mathbb{E}[Y] = \beta_0 + \beta_1 z_1 + \beta_1 \mu_1 + \beta_2 z_2 + \beta_2\mu_2 + \beta_3 z_1 z_2 +\beta_3 z_1 \mu_2 +\beta_3 z_2 \mu_1 + \beta_3 \mu_1 \mu_2 $$ Now, note that $\beta_1 \mu_1$, $\beta_2\mu_2$ and $\beta_3 \mu_1 \mu_2$ are all constant so these can be subsumed into a new intercept, $\gamma_0$, giving: $$\mathbb{E}[Y] = \gamma_0 + \beta_1 z_1 + \beta_2 z_2 + \beta_3 z_1 z_2 +\beta_3 z_1 \mu_2 +\beta_3 z_2 \mu_1 $$ Rearranging this by factorizing by $z_1$, $z_2$ and $z_1 z_2$ we arrive at: $$\mathbb{E}[Y] = \gamma_0 + z_1 (\beta_1 + \beta_3 \mu_2 ) + z_2 (\beta_2 + \beta_3 \mu_1) + z_1 z_2 \beta_3 $$ So, this is the simplified form of the regression model using the centered variables. We can immediately note that: the intercept will be different from the uncentered model, since it is now equal to $ \gamma_0 = \beta_0 + \beta_1 \mu_1 +\beta_2\mu_2 +\beta_3 \mu_1 \mu_2$ the test for $z_1$ is comparing $\beta_1 + \beta_3 \mu_2$ to zero, or equivalently the equality of $\beta_1$ and $-\beta_3 \mu_2$, which will only be the same as the test for $\beta_1$ in the uncentered model if $\mu_2$ is zero, which obviously it isn't otherwise you wouldn't be centering $x_2$ in the first place. similarly, the test for $z_2$ is comparing $\beta_2 + \beta_3 \mu_1$ to zero, which will only be the same as the test for $\beta_2$ in the uncentered model if $\mu_1$ is zero. The test for $z_1 z_2$ is comparing $\beta_3$ to zero, which is the same as in the uncentered model. Again, inspecting the output of both models, this is exactly what is happening. 
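The re-parameterization above can be checked numerically in a few lines of Python (all coefficient and mean values invented): the centered form built from $\gamma_0$, $\beta_1+\beta_3\mu_2$ and $\beta_2+\beta_3\mu_1$ reproduces the uncentered model exactly.

```python
# pick arbitrary coefficients and means (all invented for the check)
b0, b1, b2, b3 = 1.0, 2.0, -0.5, 1.5
m1, m2 = 3.0, -2.0

uncentered = lambda x1, x2: b0 + b1*x1 + b2*x2 + b3*x1*x2

# reparameterized coefficients from the derivation above
g0 = b0 + b1*m1 + b2*m2 + b3*m1*m2
c1 = b1 + b3*m2          # coefficient on z1 = x1 - m1
c2 = b2 + b3*m1          # coefficient on z2 = x2 - m2
centered = lambda x1, x2: (g0 + c1*(x1 - m1) + c2*(x2 - m2)
                           + b3*(x1 - m1)*(x2 - m2))

# the two parameterizations give identical fitted values everywhere
pts = [(-1.0, 4.0), (0.0, 0.0), (2.5, -3.0)]
print(all(abs(uncentered(a, b) - centered(a, b)) < 1e-9 for a, b in pts))  # True
```

Since the fitted values are identical, only the interpretation (and hence the test) of each main-effect coefficient changes.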
To sum up, although the two models are the same, i.e., the centered model is just a re-parameterization of the uncentered model, the p values for the tests of the estimated coefficients for the main effects of the centered variables that are involved in the interaction, and the intercept, will be different, because they are testing different things. The p values for the tests of the estimated coefficients of the main effect which is not involved in an interaction, along with that for the interaction, will be unchanged. These are general results. In addition to this, in your particular data there could also be issues due to multicollinearity, and the fact that R-squared is reported as 1 is also suspicious.
p-values change after mean centering with interaction terms. How to test for significance?
The reported p-values for the coefficient for z will differ between the uncentered and x-centered models. That might seem troubling at first, but that's OK. The correct test for significance of a predictor involved in an interaction must involve both its individual coefficient and its interaction coefficient, and the result of that test is unchanged by centering. But I do not understand what it means by "correct test for significance". Can someone explain what he's referring to? In these two questions and their answers... Why and how does adding an interaction term affects the confidence interval of a main effect? Standardization of variables and collinearity ...you can read some more about the effect of transforming the variables and the effect on coefficients. What you are effectively doing is some sort of transformation of the coefficients $$y = \underbrace{(\beta_0+\beta_1 \bar{x}_1+\beta_2 \bar{x}_2 + \beta_3 \bar{x}_3 +\beta_4 \bar{x}_2 \bar{x}_3)}_{\beta_0^\prime} \, + \, \underbrace{(\beta_1)}_{\beta_1^\prime} x_1 \, + \, \underbrace{( \beta_2 + \beta_4 \bar {x}_3)}_{\beta_2^\prime} x_2 \, + \, \underbrace{(\beta_3 + \beta_4 \bar {x}_2)}_{\beta_3^\prime} x_3 \, + \, \underbrace{(\beta_4)}_{\beta_4^\prime} x_2 x_3$$ This is changing the sample distribution of the coefficients. In the image from the two questions (which relates to a transformation in a linear model, where the same principle already applies), you can see intuitively what this does to the error of the coefficients. One can see the sample distribution of the coefficients as a joint multivariate normal distribution. A confidence region of the joint distribution of the coefficients can be shown as some n-dimensional spheroid (in the image n=2) and this translation/centering is transforming the spheroid (some sort of shear transform). That image makes clear that the individual z-scores and p-values do not make much sense when the errors in the coefficients are correlated. 
The joint distribution of the coefficients may be very narrow. The area of the confidence region, using the joint distribution, does not change with the translations/transformations, but the marginal distributions may change a lot. So when you (linearly) transform the variables then tests like the ANOVA test (F-test) or likelihood ratio test (chi-square distribution) do not change (the predicted values $\hat{y}$ remain the same), and these are the 'correct' tests for finding out whether the model improves by including an extra term. But the marginal distributions of the coefficients (and related z-tests or t-tests) are changing.
Is the PCA estimator used in regression root-n-consistent?
It will not be consistent if $Y$ is explained by any of the discarded components. Consider the case with $p=2$ and $d = 1$ where the first principal component is $(1,0)$ and the second is $(0,1)$, where the first explains $99\%$ of the variance, with $(\beta_1, \beta_2) = (0, 100)$. There is no way $\beta_2$ can be estimated if all observations are mostly projected onto the $x$-axis. The following script simulates this scenario with $\sigma^2 = 0$ in the linear part of the model.

set.seed(123)
beta_true = c(0, 100)
beta_est = matrix(0, ncol = 2, nrow = 1000)
sqe = rep(0, 1000)
for (k in 1:1000) {
  x = as.matrix(cbind(rnorm(1000, 0, sd = sqrt(0.99)),
                      rnorm(1000, 0, sd = sqrt(0.01))))
  Y = x %*% matrix(beta_true, ncol = 1)
  q = princomp(x)
  # fit linear model with the projection on the first component only
  b1 = lm(Y ~ q$scores[, 1] - 1)$coefficients
  beta = b1 * q$loadings[, 1]
  beta_est[k, ] = beta
  sqe[k] = sum((beta_true - beta)^2)
}
boxplot(beta_est, main = 'Estimation of beta')
msqe = mean(sqe)  # 9999.896

This boxplot shows estimates of $\hat \beta_2 \approx 0$, which is way off for a sample size of $n = 1000$.
What is the relationship between Harrell's C and the AUC?
In the case of a binary outcome and a continuous predictor, the AUC of the ROC or c-index is simply a function of how well the ordered values of the continuous predictor correlate to the corresponding event status. In a Cox model or other time-to-event method, those persons with a higher predictor value (hazard ratio) should have a shorter time to event. In addition, censoring and timing of censoring affect which pairs of data are usable in the final accounting of the censored c-index, while no such requirement is imposed on the simple c-index (except for ties). To more directly answer your question, the censored c-index has no obligatory correlation to the c-index. The censored c-index's requirement for accurate time ordering is simply not measured in the simple c-index. The effect of censoring and time means that not all values are used in a censored c-index. For these reasons, the two measures of discrimination differ and are not expected to be the same. As proof of this concept, I created 1000 simulations of 20 subjects with random follow-up, event status and prognostic index values demonstrating no correlation between the two:
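To make the two definitions concrete, here is a small pure-Python sketch computing both indices on the same toy inputs (the data are invented; ties and finer pair-handling rules are ignored). The simple c-index compares risk scores across event/non-event pairs only, while the censored (Harrell's) c-index uses only pairs whose earlier follow-up time is an event.

```python
# toy data: risk score, follow-up time, event indicator (1 = event, 0 = censored)
risk  = [2.0, 1.5, 1.0, 0.5]
time  = [3.0, 1.0, 2.0, 4.0]
event = [1,   1,   0,   0]

def simple_c(risk, event):
    # binary AUC: P(risk of an event case > risk of a non-event case)
    pairs = [(ri, rj) for ri, ei in zip(risk, event) if ei
                      for rj, ej in zip(risk, event) if not ej]
    return sum(ri > rj for ri, rj in pairs) / len(pairs)

def censored_c(risk, time, event):
    # Harrell's C: a pair is usable only if the earlier time is an event;
    # concordant when the earlier failure carries the higher risk score
    used = conc = 0
    for i in range(len(risk)):
        for j in range(len(risk)):
            if time[i] < time[j] and event[i]:
                used += 1
                conc += risk[i] > risk[j]
    return conc / used

print(simple_c(risk, event), censored_c(risk, time, event))  # 1.0 0.75
```

Here the risk scores separate events from non-events perfectly (simple c-index of 1.0), yet the censored c-index is only 0.75 because one event pair is ordered incorrectly in time.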
What is the relationship between Harrell's C and the AUC?
AUROC is the same as the concordance probability (Harrell's C) for a binary outcome. If the outcome is not binary, or it is censored, then in general whatever measure you compute for this outcome would not be called AUC/ROC.
Uniform distribution and ordered statistics
As detailed in (the simulation Bible) Devroye's Non-uniform Variate Generation (1985), Chapter V, Section 2, the vector $(S_1,\ldots,S_{n})$ is jointly uniform: Theorem 2.1 $\quad$ If $U_{(1)}\le\cdots\le U_{(n)}$ are the Uniform $\mathcal U(0,1)$ order statistics of an $n$-sample, and the$$S_i=U_{(i)}-U_{(i-1)}\qquad(1\le i\le n+1)$$ where by convention $U_{(0)}=0$ and $U_{(n+1)}=1$, are the uniform spacings, then $(S_1,\ldots,S_{n})$ is uniformly distributed over the simplex $$\mathcal A_{n-1}=\left\{(x_1,\ldots,x_{n});\ 0\le x_i\,,\ \sum_{i=1}^{n} x_i\le 1\right\}$$ [The proof follows from the order statistics $U_{(1)}\le\cdots\le U_{(n)}$ being distributed as$$n!\,\prod_{i=1}^n \mathbb{I}_{(0,1)}(u_{(i)})\times\mathbb{I}_{u_{(1)}\le\cdots\le u_{(n)}}$$$n!$ being the number of permutations of $\{1,\ldots,n\}$, and from the change of variables from the $U_{(i)}$'s to the $S_i$'s being of Jacobian determinant equal to one] and Theorem 2.2 $\quad$ The vector $$(S_1,\ldots,S_{n+1})$$ is distributed as $$(\varepsilon_0,\ldots,\varepsilon_n)\Big/\sum_{i=0}^n\varepsilon_i$$ where the $\varepsilon_i$'s are iid $\mathcal E(1)$. The above is also the constructive definition of a Dirichlet $$\mathcal D\overbrace{(1,\ldots,1)}^\text{$n+1$ terms}$$ distribution, and the consequence is that each $S_i$ is marginally distributed as a Beta $\mathcal B(1,n)$ random variable. Not marginally uniform then (if jointly so).
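The Beta $\mathcal B(1,n)$ marginal can be sanity-checked by simulation: the first spacing should have mean $1/(n+1)$ and tail probability $P(S_1>t)=(1-t)^n$. A pure-Python Monte Carlo sketch (the values of $n$, the replication count and the test point $t$ are arbitrary):

```python
import random

random.seed(0)

n, reps, t = 4, 20000, 0.3
mean_s1 = exceed = 0.0
for _ in range(reps):
    u = sorted(random.random() for _ in range(n))
    s1 = u[0]                  # first spacing: U_(1) - U_(0), with U_(0) = 0
    mean_s1 += s1 / reps
    exceed += (s1 > t) / reps

# Beta(1, n) predictions: E[S_1] = 1/(n+1) and P(S_1 > t) = (1 - t)^n
print(mean_s1, exceed, (1 - t) ** n)
```

Both simulated quantities land close to the Beta $\mathcal B(1,n)$ predictions, and well away from the uniform values $1/2$ and $1-t$.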
How does one compare two nested quasibinomial GLMs?
Yes, a deviance test is still valid. Some more details: Since the general theory is not specific to binomial models, I will start out with some general theory, but use binomial examples (and R). GLMs are based on the exponential dispersion model $$ f(y_i;\theta_i,\phi)= \exp\left\{ w_i [y_i \theta_i -\gamma(\theta_i)]/\phi +\tau(y_i,\phi/w_i)\right\} $$ where $y_i$ is the observation, $\theta_i$ a parameter which depends on a linear predictor $\eta_i=x^T\beta$, $\phi$ a scale parameter and $w_i$ a prior weight. To understand the notation, think about a normal theory model, which glm's generalize. There $\phi$ is the variance $\sigma^2$, and if $y_i$ is the mean of a group of $w_i$ independent observations with the same covariables, then the variance is $\phi/w_i$. The last term $\tau(y_i,\phi/w_i)$ is often of little interest since it does not depend on the interest parameters $\theta_i$ (or $\beta$), so we will treat it cavalierly. Now for the binomial case. If we have an observation $y_i^* \sim \mathcal{Binom}(w_i,p_i)$ then we will treat $y_i=y_i^*/w_i$ as the observation, so that the expectation of $y_i$ is $p_i$ and its variance $\frac{p_i(1-p_i)}{w_i}$. The binomial pmf can then be written as $$ f(y_i;\theta_i,\phi)=\exp\left\{ w_i[y_i\theta_i-\log(1+e^{\theta_i})]/\phi + \log\binom{w_i/\phi}{y_i w_i/\phi} \right\} $$ where $\phi=1$ and $\theta_i=\log\frac{p_i}{1-p_i}$. We can identify $\gamma(\theta_i)=\log(1+e^{\theta_i})$ and $\tau(y_i,\phi/w_i) = \log\binom{w_i/\phi}{y_i w_i/\phi}$. This form is chosen such that we can get the quasi-model just by allowing $\phi>0$ to vary freely. The quasi-likelihood we then get from this model is constructed to function as a likelihood for the $\theta_i$ (or $\beta$) parameters; it will not work as a likelihood for $\phi$. 
This means that the quasi-likelihood function shares enough of the properties of a true likelihood function that the usual likelihood asymptotic theory still goes through, see also Idea and intuition behind quasi maximum likelihood estimation (QMLE). Since it does not have these properties as a function of $\phi$, inference about $\phi$ must be treated outside that framework. Specifically, there is no reason to hope that maximizing the quasi-likelihood in $\phi$ will give good results. Now, the analysis of deviance. We define the saturated model S by giving each observation its own parameter, so setting $\hat{\mu}_i=\gamma'(\hat{\theta}_i)=y_i$. Then by assuming for the moment that $\phi=1$ we get $$ D_M=2\sum_i \left\{ w_i[( y_i \theta(y_i)-\gamma(\theta(y_i)))-( y_i\hat{\theta}_i-\gamma(\hat{\theta}_i) ) ]\right\} $$ which is twice the log-likelihood ratio for testing the reduced model M within the saturated model S. Note that this does not depend on the function $\tau$ at all. For the case of normal-theory models, this is the residual sum of squares (RSS), which is not a function of the scale parameter $\phi=\sigma^2$ either. $D_M/\phi$ is the scaled deviance while $D_M$ often is called the residual deviance, since in normal models it corresponds to the RSS. In normal models we have $D_M/\phi \sim \chi^2_{n-p}$, so an unbiased estimator of the variance parameter $\phi$ in this case is $\hat{\phi}=D_M/(n-p)$, and this might hold as an approximation also in other cases, but often better is $$ \tilde{\phi}=\frac1{n-p}\sum_i \frac{(y_i-\hat{\mu}_i)^2}{V(\hat{\mu}_i)/w_i} $$ where $V$ is the variance function, in the binomial case $V(\mu)=\mu(1-\mu)$. This Pearson-based estimate is generally considered better, and is the scale estimate used by R. 
If we are interested in a submodel $M_0 \subset M$, with $q < p$ regression parameters, then the likelihood ratio test is $$ \frac{D_{M_0}-D_M}{\phi} \stackrel{\text{approx}}{\sim} \chi^2_{p-q} $$ and with estimated scale we might use $$ \frac{D_{M_0}-D_M}{\hat{\phi}(p-q)} \stackrel{\text{approx}}{\sim} \mathcal{F}_{p-q,n-p} $$ in analogy with the normal theory. So, let us look at a simulated example. set.seed(7*11*13) # My public seed n <- 200 k <- 5 N <- n*k intercept <- rnorm(n, 0, 1) x <- rnorm(n, 1, 1.5) beta <- 0.1 expit <- function(x) 1/(1+exp(-x)) eta <- intercept + beta*x p <- expit(eta) Y <- rbinom(n, k, p) This creates overdispersion by simulating a random intercept for each of the $n=200$ groups of size $k=5$. Then we will estimate a simple model two ways, by using a binomial likelihood, and then a quasibinomial likelihood: mod0 <- glm( cbind(Y, k-Y) ~ x, family=binomial) modq <- glm( cbind(Y, k-Y) ~ x, family=quasibinomial) Then the model summaries: summary(mod0) Call: glm(formula = cbind(Y, k - Y) ~ x, family = binomial) Deviance Residuals: Min 1Q Median 3Q Max -3.053 -1.180 -0.103 1.180 2.836 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -0.02787 0.07632 -0.365 0.71496 x 0.12941 0.04170 3.103 0.00192 ** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 441.41 on 199 degrees of freedom Residual deviance: 431.62 on 198 degrees of freedom AIC: 749.1 Number of Fisher Scoring iterations: 3 > summary(modq) Call: glm(formula = cbind(Y, k - Y) ~ x, family = quasibinomial) Deviance Residuals: Min 1Q Median 3Q Max -3.053 -1.180 -0.103 1.180 2.836 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -0.02787 0.10117 -0.275 0.7832 x 0.12941 0.05529 2.341 0.0202 * --- Signif. 
codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for quasibinomial family taken to be 1.757479) Null deviance: 441.41 on 199 degrees of freedom Residual deviance: 431.62 on 198 degrees of freedom AIC: NA Number of Fisher Scoring iterations: 3 Compare the two summaries. They are very similar; the differences are in the coefficient standard errors, the printed scale parameter estimate, and the missing AIC in the modq summary. Check that you can calculate, "by hand", the standard errors for the quasimodel modq from the standard errors for mod0 and the estimated scale. The printed deviances, and deviance residuals, are identical. This is because the residual deviance is defined by taking $\phi=1$ in both cases. The null deviance is the residual deviance for the null model, the model with only an intercept. The scaled deviance is not printed, but can be calculated from the output. The analysis of deviance is calculated by the anova() function. Here we will see differences. First the model based on a binomial likelihood: anova(mod0, test="Chisq") Analysis of Deviance Table Model: binomial, link: logit Response: cbind(Y, k - Y) Terms added sequentially (first to last) Df Deviance Resid. Df Resid. Dev Pr(>Chi) NULL 199 441.41 x 1 9.7883 198 431.62 0.001756 ** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 so here the regression seems significant. Then for the quasi-model: anova(modq, test="F") Analysis of Deviance Table Model: quasibinomial, link: logit Response: cbind(Y, k - Y) Terms added sequentially (first to last) Df Deviance Resid. Df Resid. Dev F Pr(>F) NULL 199 441.41 x 1 9.7883 198 431.62 5.5695 0.01925 * --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 What is printed as F here is (in this case) the scaled deviance difference (since $p-q=1$). (I will come back to your second question)
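As a check on the printed tables (my own back-of-the-envelope arithmetic, independent of R), both the F statistic and the quasibinomial standard error can be reproduced from the numbers shown above:

```python
# Reproducing anova(modq, test="F") and summary(modq) "by hand" from the
# numbers printed in the R output above.
dev_drop = 9.7883     # deviance difference printed by anova()
phi_hat = 1.757479    # Pearson-based dispersion estimate from summary(modq)
p_minus_q = 1         # one extra regression parameter

# F = (D_{M0} - D_M) / (phi_hat * (p - q)); with p - q = 1 this is just
# the scaled deviance difference.
F = dev_drop / (phi_hat * p_minus_q)

# The quasibinomial std error is the binomial one inflated by sqrt(phi_hat).
se_binom = 0.04170    # std error of x from summary(mod0)
se_quasi = se_binom * phi_hat ** 0.5

print(F, se_quasi)    # ~5.5695 and ~0.0553, matching the R output
```

This confirms the relationship between the two summaries that the answer asks the reader to verify.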
Causes of bimodal distributions when bootstrapping a meta-analysis model
Thanks for providing the data and code. I re-fitted the model you are working with and the second variance component (for which cor_mat is specified) drifts off to a really large value, which is strange. However, profiling this variance component (with profile(rmamv_model, sigma2=2)) indicates no problems, so I don't think this is a convergence issue. Instead, I think the problem arises because the model does not include an estimate-level random effect (which basically every meta-analytic model should include). So, I would suggest to fit: dt$id <- 1:nrow(dt) res <- rma.mv(y ~ f2:f1 - 1, V = var_y, random = list(~ 1|r1, ~ 1|r2, ~ 1|id), R = list(r2 = cor_mat), data = dt, method = "REML") The results look much more reasonable. I suspect this might also solve the problem with the bimodal bootstrap distribution of that last coefficient.
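As a language-agnostic illustration of why estimate-level heterogeneity matters (my own sketch with made-up numbers, not a substitute for the rma.mv fit), here is the classical DerSimonian-Laird estimate of the between-estimate variance $\tau^2$, which is the kind of quantity the extra ~ 1|id term absorbs:

```python
# DerSimonian-Laird estimate of between-estimate heterogeneity tau^2.
# y: observed effect sizes, v: their sampling variances (toy numbers).
def dl_tau2(y, v):
    k = len(y)
    w = [1.0 / vi for vi in v]                     # inverse-variance weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    Q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))   # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (Q - (k - 1)) / c)             # truncate at zero

y = [0.2, 0.5, 0.9, -0.1, 0.7]
v = [0.04, 0.05, 0.04, 0.06, 0.05]
print(dl_tau2(y, v))   # positive: estimate-level heterogeneity is present
```

When the estimates vary more than their sampling variances allow, $\tau^2 > 0$; a model without such a component has to push that excess variability into other variance terms, which is one way they can drift off to implausibly large values.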
Causes of bimodal distributions when bootstrapping a meta-analysis model
Without having access to a reproducible example it is extremely difficult to give a definite answer about this bootstrapping behaviour. Assuming that there are indeed no outliers, I suspect that we observe a mild case of Stein's phenomenon, especially as a mixed-effects methodology suggests there is some clustering in our data. Having said the above, I would suggest going ahead and looking at some of the runs with "unusual" values of the f2f2_3:f1f1_2 interaction, where there are very different values, and investigating the marginal distribution of these two random subsamples. For example, in some cases f2f2_3:f1f1_2 is well under $1$ while the estimated model suggests a value close to $2.4$. Are the marginal distributions similar? Is there a case of insufficient overlap? Maybe a "simple" bootstrap is inappropriate and we need to stratify the sample at hand with respect to some of the factors.
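The stratification idea at the end can be sketched like this (my own illustration with toy data; the level names are hypothetical): resampling within each factor level keeps every bootstrap sample's composition identical to the original, which is what the answer suggests may be needed:

```python
import random

random.seed(0)

def stratified_bootstrap(rows, stratum_of):
    """Resample with replacement within each stratum, preserving counts."""
    strata = {}
    for row in rows:
        strata.setdefault(stratum_of(row), []).append(row)
    sample = []
    for members in strata.values():
        sample.extend(random.choices(members, k=len(members)))
    return sample

# Toy data: (value, factor level) pairs, mimicking stratification on f1.
rows = [(0.1, "f1_1"), (0.4, "f1_1"), (0.9, "f1_2"), (1.2, "f1_2"), (1.1, "f1_2")]
boot = stratified_bootstrap(rows, stratum_of=lambda r: r[1])
print(sorted(r[1] for r in boot))  # same level counts as the original
```

A plain bootstrap can, by chance, heavily under-represent one factor level in some replicates, which is one mechanism for a bimodal bootstrap distribution; stratifying rules that out.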
Errors and residuals in linear regression
I think there is a lot of confusion in this question (caused of course by authors that describe what they think linear regression means very badly, imprecisely, or even wrongly). First of all we are given some data $(x_i, y_i)_{i=1,...,N}, x_i \in \mathbb{R}^d, y_i \in \mathbb{R}$ and we want to "make sense of it in the form of a linear equation" (see below for a precise formulation). Now it may be the case that this model does not match the data at all; for example, if $d=1$ then it could be that $y_i = \sin(x_i)$ or so... Nevertheless one could use linear regression in order to write down a (shitty!) model for that, but what you are looking for is the following version of linear regression: Assume (some things about the data), then the linear regression model is the bestest model ever. Now we are going to make this precise. First of all we assume that there is a probability space $\Omega$ and random variables $X_i : \Omega \to \mathbb{R}^d$ and $Y_i : \Omega \to \mathbb{R}$ and $\epsilon_i : \Omega \to \mathbb{R}$ and we assume that there are (as you call them) 'true' $\beta \in \mathbb{R}^d, b \in \mathbb{R}$ such that $$Y_i = \beta X_i + b + \epsilon_i$$ (as functions from $\Omega$ to $\mathbb{R}$) and we assume that there is a 'true' $\omega_0 \in \Omega$ such that $$x_i = X_i(\omega_0)$$ and $$y_i = Y_i(\omega_0)$$ and that the $\epsilon_i$ are independent from the $X_i$ and the $\epsilon_i$ are iid $\mathcal{N}(0, \sigma^2)$ distributed. These assumptions mean that the data we are confronted with really comes from these random variables and that they satisfy some relations. Then we can execute an algorithm in order to find approximations $\hat{\beta}, \hat{b}$ of $\beta, b$ such that when we are confronted with a new, unseen $x$, the equation $$\hat{y} = \hat{\beta} x + \hat{b}$$ will give the best (with respect to some measure, namely on average) approximation for the 'true' $y$ that belongs to that $x$. 
1) Now we have to ask: What do these people mean when they write $\delta_i$, $\epsilon_i$, ...? Often they hardly know what the term 'random variable' really means from a mathematical point of view, or they know it and just ignore it; hence, they use it for any symbol in their mind that is somewhat related to some kind of error. I guess that they mean $$\text{their}~ \delta_i = \hat{y}_i - y_i$$ i.e. given the current parameters $\hat{\beta}, \hat{b}$, what is the error to the $i$-th true training answer? This is a very concrete real number, not a random variable, and this (well, the sum of the squares of them) is what you minimize in linear regression. When they write that they "minimize" something involving $\epsilon_i$ then we do not know what they mean: these are random variables that we cannot even change!!! How should this be minimized? Hence, I think that you are confused for the right reason: Whatever they write in the context of approximating $\beta, b$, they almost always mean $\hat{y}_i - y_i$. 2) I have not seen such a book yet... I think it stems from the following: either the author does not know something about mathematics and precision (or does not give a sh*t about it) or he/she does not want to exhaust the audience with these (absolutely important) details... However, there are some questions in this direction here on se, see here or here and so forth... (shamelessly referring to questions and answers of myself here but probably you can find many more). 3) What do you mean by residuals? Are you referring to the random variables $\epsilon_i$ or are you referring to $\hat{\beta}X_i + \hat{b} - Y_i$? I highly doubt that the latter are normally distributed or so, because this depends on the distribution of the $X_i$, and these can have any distribution as long as they are in line with the corresponding $Y_i$! 4) Lack/ignorance of mathematical knowledge, or they actually want to describe something else, I guess... 
For example: One can analyze confidence intervals (i.e. we want to leave the perspective of one single line and for a fresh new unseen $x$ we want to give lower and upper bounds $y_l, y_u$ such that with abc% probability, $y_l \leq y \leq y_u$). Then uncertainty needs to come into play again.
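To make the distinction between the random $\epsilon_i$ and the concrete numbers $\hat{y}_i - y_i$ tangible, here is a small simulation sketch of my own (plain Python, toy parameters): we fix one $\omega_0$ by simulating once, fit by least squares, and observe that the residuals are concrete numbers, distinct from the realized errors, and that with an intercept they sum to exactly zero:

```python
import random

random.seed(1)

n = 200
beta_true, b_true = 2.0, -1.0
x = [random.gauss(0.0, 1.0) for _ in range(n)]
eps = [random.gauss(0.0, 0.5) for _ in range(n)]              # realized errors
y = [beta_true * xi + b_true + ei for xi, ei in zip(x, eps)]  # one omega_0

# Ordinary least squares for y = beta*x + b, via the closed-form formulas.
xbar = sum(x) / n
ybar = sum(y) / n
beta_hat = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
         / sum((xi - xbar) ** 2 for xi in x)
b_hat = ybar - beta_hat * xbar

resid = [beta_hat * xi + b_hat - yi for xi, yi in zip(x, y)]  # residuals
# With an intercept, residuals sum to exactly zero; the errors eps do not.
print(sum(resid), sum(eps), beta_hat)
```

The residuals are a deterministic function of the one dataset we saw; the $\epsilon_i$ live on $\Omega$ and cannot be "minimized" at all.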
Finding UMVUE of $\theta e^{-\theta}$ where $X_i\sim\text{Pois}(\theta)$
The Poisson distribution is a one-parameter exponential family distribution, with natural sufficient statistic given by the sample total $T(\mathbf{x}) = \sum_{i=1}^n x_i$. The canonical form is: $$p(\mathbf{x}|\theta) = \exp \Big( \ln (\theta) T(\mathbf{x}) - n\theta \Big) \cdot h(\mathbf{x}) \quad \quad \quad h(\mathbf{x}) = \prod_{i=1}^n \frac{1}{x_i!} $$ From this form it is easy to establish that $T$ is a complete sufficient statistic for the parameter $\theta$. So the Lehmann–Scheffé theorem means that for any $g(\theta)$ there is only one unbiased estimator of this quantity that is a function of $T$, and this is the UMVUE of $g(\theta)$. One way to find this estimator (the method you are using) is via the Rao-Blackwell theorem --- start with an arbitrary unbiased estimator of $g(\theta)$ and then condition on the complete sufficient statistic to get the unique unbiased estimator that is a function of $T$. Using Rao-Blackwell to find the UMVUE: In your case you want to find the UMVUE of: $$g(\theta) \equiv \theta \exp (-\theta).$$ Using the initial estimator $\hat{g}_*(\mathbf{X}) \equiv \mathbb{I}(X_1=1)$ you can confirm that, $$\mathbb{E}(\hat{g}_*(\mathbf{X})) = \mathbb{E}(\mathbb{I}(X_1=1)) = \mathbb{P}(X_1=1) = \theta \exp(-\theta) = g(\theta),$$ so this is indeed an unbiased estimator. 
Hence, the unique UMVUE obtained from the Rao-Blackwell technique is: $$\begin{equation} \begin{aligned} \hat{g}(\mathbf{X}) &\equiv \mathbb{E}(\mathbb{I}(X_1=1) | T(\mathbf{X}) = t) \\[6pt] &= \mathbb{P}(X_1=1 | T(\mathbf{X}) = t) \\[6pt] &= \mathbb{P} \Big( X_1=1 \Big| \sum_{i=1}^n X_i = t \Big) \\[6pt] &= \frac{\mathbb{P} \Big( X_1=1 \Big) \mathbb{P} \Big( \sum_{i=2}^n X_i = t-1 \Big)}{\mathbb{P} \Big( \sum_{i=1}^n X_i = t \Big)} \\[6pt] &= \frac{\text{Pois}(1| \theta) \cdot \text{Pois}(t-1| (n-1)\theta)}{\text{Pois}(t| n\theta)} \\[6pt] &= \frac{t!}{(t-1)!} \cdot \frac{ \theta \exp(-\theta) \cdot ((n-1) \theta)^{t-1} \exp(-(n-1)\theta)}{(n \theta)^t \exp(-n\theta)} \\[6pt] &= t \cdot \frac{ (n-1)^{t-1}}{n^t} \\[6pt] &= \frac{t}{n} \Big( 1- \frac{1}{n} \Big)^{t-1} \\[6pt] \end{aligned} \end{equation}$$ Your answer has a slight error where you have conflated the sample mean and the sample total, but most of your working is correct. As $n \rightarrow \infty$ we have $(1-\tfrac{1}{n})^n \rightarrow \exp(-1)$ and $t/n \rightarrow \theta$, so taking these asymptotic results together we can also confirm consistency of the estimator: $$\hat{g}(\mathbf{X}) = \frac{t}{n} \Big[ \Big( 1- \frac{1}{n} \Big)^n \Big] ^{\frac{t}{n} - \frac{1}{n}} \rightarrow \theta [ \exp (-1) ]^\theta = \theta \exp (-\theta) = g(\theta).$$ This latter demonstration is heuristic, but it gives a nice check on the working. It is interesting here that you get an estimator that is a finite approximation to the exponential function of interest.
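As a quick numerical sanity check (not part of the derivation; the choices $\theta = 5$ and $n = 20$ are arbitrary, and the Poisson sampler is Knuth's classic algorithm), the unbiasedness of $\hat{g}(\mathbf{X}) = \frac{t}{n}(1-\frac{1}{n})^{t-1}$ can be verified in a few lines of Python:

```python
import math
import random

def rpois(lam, rng):
    # Knuth's algorithm for Poisson sampling
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(2018)
theta, n, m = 5.0, 20, 20000
true_val = theta * math.exp(-theta)  # g(theta) = theta * exp(-theta)

est = []
for _ in range(m):
    t = sum(rpois(theta, rng) for _ in range(n))      # sample total T
    est.append((t / n) * (1 - 1 / n) ** (t - 1))      # the UMVUE

print(sum(est) / len(est), true_val)  # sample mean of the estimator vs g(theta)
```

The Monte Carlo average of the estimator should agree with $g(\theta)$ up to simulation noise, consistent with the exact unbiasedness shown above.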
36,248
Finding UMVUE of $\theta e^{-\theta}$ where $X_i\sim\text{Pois}(\theta)$
Here is a simulation in R that I did using the average of $n = 20$ observations, where $\lambda = 5.$ The parameter par is $P(X = 1) = \lambda e^{-\lambda}.$ The estimate par.fcn, which tries to estimate $P(X = 1)$ merely as a plug-in function of the average, is biased. My version of your UMVUE for $P(X=1)$, written as a function of the average a (instead of the total), seems to work OK.

set.seed(2018); m = 10^5; n = 20; lam = 5; par = dpois(1, lam)
x = rpois(m*n, lam); MAT = matrix(x, nrow = m)  # each row a sample of size 20
a = rowMeans(MAT)

lam.umvue = a; lam; mean(lam.umvue); sd(lam.umvue)
[1] 5           # exact lambda
[1] 5.000788    # mean est of lambda
[1] 0.4989791   # aprx SD of est

par.fcn = exp(-lam.umvue)*lam.umvue; par; mean(par.fcn); sd(par.fcn)
[1] 0.03368973  # exact P(X=1)
[1] 0.03620296  # slightly biased
[1] 0.01444379

sqrt(mean((par.fcn - par)^2))
[1] 0.01466074  # aprx root mean square error (rmse) of par.fcn

par.umvue = a*(1-1/n)^(n*a - 1); par; mean(par.umvue); sd(par.umvue)
[1] 0.03368973  # exact P(X=1)
[1] 0.03365454  # mean est of P(X=1), seems unbiased
[1] 0.01388531

sqrt(mean((par.umvue - par)^2))
[1] 0.01388528  # aprx rmse of umvue of P(X=1); smaller than rmse of par.fcn
36,249
Simple explanation of dynamic linear models
I also have to speak regularly to people who do not have a technical background, and here is how I would approach it: First, unless your audience knows about the normal distribution, I would not even mention DLM; I would just talk about state space models. I would still give them a DLM set of equations as an example (linear is easy to understand), but I have found that it is very easy to talk to people without a technical background about the "observed" and the "state" equation. I would then illustrate it with a simple example (that I take from the "Dynamic Linear Models with R" book by Petris, Petrone and Campagnoli, 2009). Here is what I would say (roughly) to an audience to explain to them what the main point of DLM is:

Speaker: "Suppose you are interested in measuring the level of the river Nile, e.g. because you want to have an idea during which period of the year certain ships (with different sizes) can sail through it, or because you are just interested in how the long-term water level changes through time. Every year, you go to a certain spot along the river and you take a measurement. Now, it could happen that on that day it was raining, or even that it was raining throughout the whole month, or that you did not measure precisely because your equipment was not too good, right? So the main premise is that you measure the water level with an additional, uncontrollable and random imprecision. To make things a bit more specific:

$$\text{Observed Nile Water Level}_t = \text{True Nile Water Level}_t + \text{Measurement Error}_t$$

We see that every year we measure the water level, it is a function of some true level and a measurement error that is always there (but has a random nature) and cannot be avoided. (Here I find the example with the rain on the day that you measure very good for illustrating where the error term can come from.) That's all well and good, but it also makes sense to assume that the true Nile water level changes through time, right? Maybe people build dams and stop some of the inflow from the smaller rivers, or something like that. Well, then it makes sense to also incorporate the following equation, right?

$$\text{True Nile Water Level}_t = \text{True Nile Water Level}_{t-1} + \text{Additive Error}_t$$

The true, unobserved level of today depends on the level from last year and some other part that we put in, which is random and expresses our inability to estimate things perfectly."

This is roughly the way that I have explained it to an audience that is not technical (but they had a finance background, so I was using "underlying state of the economy" as an example). This is also the random walk + noise model, and it is the simplest DLM I can think of (if they don't know what a regression is, forget about talking to them about random slopes and so on). Obviously you can still scale the example up, if you think they have at least some exposure to statistical models, and discuss random slopes etc.

Here is the code for the filtered values of the Nile River level (I took it from the book, you can find it here), and if you cannot find the book, you can access the corresponding article for free from JStatSoft here:

library(dlm)
plot(Nile, type = 'o', col = c("darkgrey"), xlab = "", ylab = "Level")
mod1 <- dlmModPoly(order = 1, dV = 15100, dW = 755)
NileFilt1 <- dlmFilter(Nile, mod1)
lines(dropFirst(NileFilt1$m), lty = "longdash")
mod2 <- dlmModPoly(order = 1, dV = 15100, dW = 7550)
NileFilt2 <- dlmFilter(Nile, mod2)
lines(dropFirst(NileFilt2$m), lty = "dotdash")
leg <- c("data", paste("filtered, W/V =", format(c(W(mod1) / V(mod1), W(mod2) / V(mod2)))))
legend("bottomright", legend = leg, col = c("darkgrey", "black", "black"), lty = c("solid", "longdash", "dotdash"), pch = c(1, NA, NA), bty = "n")

The example shows the fit with different signal-to-noise ratios W/V - the higher the signal-to-noise ratio, the more closely the filtered line follows the data. I think it is instructive to see that, but you can skip it and just show the fitted line. 
If your audience can take it, talk to them about forecasting, filtering and smoothing with the Kalman Filter (but if they are not technical, skip it). And obviously you can fit other models to that data. Hope this helps, let us know what you think and what you presented to them at the end! EDIT: I actually just now saw that this thread was necroed from 4 months ago...even if the OP is way past needing this, I hope it would be useful to someone in the future.
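If you ever need to show the filtering idea without R's dlm package, the random walk + noise (local level) model can be Kalman-filtered in a few lines. Here is a sketch in Python; the variances are borrowed from the Nile example above, everything else (series length, starting level) is invented:

```python
# Kalman filter for the local level model:
#   y_t  = mu_t + v_t,      v_t ~ N(0, V)   (observation equation)
#   mu_t = mu_{t-1} + w_t,  w_t ~ N(0, W)   (state equation)
import random

rng = random.Random(1)
V, W = 15100.0, 755.0          # observation and state variances (Nile example)
true_mu, ys = 1000.0, []
for _ in range(100):           # simulate a random walk + noise series
    true_mu += rng.gauss(0, W ** 0.5)
    ys.append(true_mu + rng.gauss(0, V ** 0.5))

m, C = ys[0], V                # initial state mean and variance
filtered = []
for y in ys:
    R = C + W                  # predict: state variance grows by W
    K = R / (R + V)            # Kalman gain: how much to trust the new point
    m = m + K * (y - m)        # update state mean toward the observation
    C = (1 - K) * R            # updated state variance
    filtered.append(m)
```

The gain K is exactly the signal-to-noise knob from the plot above: the larger W is relative to V, the closer K is to 1 and the more the filtered line follows the data.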
36,250
Simple explanation of dynamic linear models
I recommend that you go through a few examples. The most common question is "what does the state variable represent?" The answer to that depends on the model, but most DLMs can be thought of as a regression with a time-varying coefficient. In this context, those time-varying coefficients are usually your states. If you regress on an intercept, they sometimes call that model a local level model. If you regress on past values of the process, sometimes they call that a time-varying autoregression. You can also regress on harmonics or polynomials in time. All of these have in common that they're basically regression, but you put dynamics on the coefficients.
36,251
Simple explanation of dynamic linear models
Best I can do is a tad lengthy, but perhaps easier to follow (1st try): For a (static) linear regression, the usual format is y = mx + b, like the equation of a line (b is a constant, m is the slope, x is your predictor, y is your response). The magic of regression is that it amps this equation up to the matrix level (so now we have a hyperplane in an n-dimensional space, not the usual 2-dimensional line residing in the x-y plane like above), so now more like Y = MX + b where Y and X (and M) are matrices, and regression determines our M and our b through a bunch of matrix algebra, likelihood estimation, etc. We can do this because we know the data exists in multiple observations like Y(i) = MX(i) + b for all values of i, up to our number of observations. But for time series, sequential data, this i is really a time step t, so now more like Y(t) = MX(t) + b, BUT now we can do a trick. Instead of assuming M is a matrix of static slopes to be determined by the data, what if we assume M is not a set of fixed values across all the observations over time, but instead a changing, dynamic, updateable set of slopes that are related to the prior time step, t-1, for all steps of t? This stands to reason because in a time series, the last value y(t-1) is generally related to the next value y(t) (this is autoregression and can be tested for). There is no reason to believe M doesn't change over time too, so why not check it out? To do this we insert a few more 'internal parameters' into our regression setup that allow for changing conditions at different (but sequential) t, to take advantage of this y(t) to y(t-1) relationship. Dynamic regression allows our M, our slopes (i.e. our regression parameters, i.e. the effect sizes of our predictors) to change over time, and may give us better abilities and insight into what is going on (rather than treating every parameter in M as 'static' at all t, we get parameters that evolve and change across the flow of t). Sometimes it helps; other times, other 'non-stationary' analyses help (like ARIMA and straight autoregressive AR(1) models, etc.).
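The trick described above is easy to simulate. Here is a small Python sketch (all numbers invented for illustration) in which the slope m itself follows a random walk, so the data come from y(t) = m(t)·x(t) + b with m(t) drifting, while a static regression would be forced to pick one slope for the whole series:

```python
import random

rng = random.Random(42)
b = 1.0                 # fixed intercept (assumed)
m = 2.0                 # slope starts at 2 but will drift over time
slopes, xs, ys = [], [], []
for t in range(200):
    m += rng.gauss(0, 0.05)            # 'state equation': slope is a random walk
    x = rng.uniform(-1, 1)
    y = m * x + b + rng.gauss(0, 0.1)  # 'observed equation'
    slopes.append(m); xs.append(x); ys.append(y)

# A static regression must pick ONE slope for all 200 points, even though
# the true slope wandered over the interval below:
print(min(slopes), max(slopes))
```

Fitting a dynamic model to data like this recovers the drifting slope path rather than a single compromise value, which is the "better abilities and insight" the answer refers to.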
36,252
Proper way of estimating the covariance error ellipse in 2D
I believe I've found the reason for the discrepancy between these two methods. Both seem to be correct; they just estimate different statistical concepts. The first method describes an error ellipse, characterized by some number of standard deviations $k$: its semi-axes are $k\sqrt{\lambda_i}$, where $\lambda_i$ are the eigenvalues of the covariance matrix. The second method describes a confidence ellipse, characterized by some probability value $p$: its semi-axes are $\sqrt{\chi^2_2(p)}\,\sqrt{\lambda_i}$. In two dimensions these do not coincide; for example, the $k=1$ ellipse contains only $1 - e^{-1/2} \approx 39\%$ of the probability mass, not 68% as in the one-dimensional case. The difference between these two is explained in this old paper (Algorithms For Confidence Circles and Ellipses, Wayne E. Hoover, 1984; NOAA Technical Report NOS 107 C&GS 3). This question (its most upvoted answer, actually) is also related to this issue.
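To make the distinction concrete, here is a Python sketch (the covariance matrix is chosen arbitrarily) comparing the two scalings; for 2 degrees of freedom the $\chi^2$ quantile has the closed form $-2\ln(1-p)$, so no statistics library is needed:

```python
import math

# Arbitrary 2x2 sample covariance matrix [[a, c], [c, b]]
a, b, c = 4.0, 2.0, 1.0

# Eigenvalues of a symmetric 2x2 matrix in closed form
half_tr = (a + b) / 2
disc = math.sqrt(((a - b) / 2) ** 2 + c ** 2)
lam1, lam2 = half_tr + disc, half_tr - disc   # lam1 >= lam2 > 0

# Error ellipse: semi-axes for k standard deviations
k = 1
err_axes = (k * math.sqrt(lam1), k * math.sqrt(lam2))

# Confidence ellipse: semi-axes for probability content p
p = 0.95
scale = math.sqrt(-2 * math.log(1 - p))       # sqrt of the chi^2_2 quantile
conf_axes = (scale * math.sqrt(lam1), scale * math.sqrt(lam2))

print(err_axes, conf_axes)
```

Both ellipses share the same orientation (the eigenvectors); only the radial scaling differs, which is exactly the discrepancy between the two methods.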
36,253
Can confidence interval of positive values be negative?
The first question has a simple answer: yes. I interpret your question to mean, "can a strictly positive sample (where all data points are positive) have a 68% confidence interval for the normal distribution with a negative lower bound?" Proof by construction: Let $X = [1, 1, 7]^T$. The mean is $3$, the sample standard deviation is $3.46$, so the 68% confidence interval is $(-0.464, 6.46)$. $\square$ One consequence of a confidence interval that includes zero is that we are unable to reject the hypothesis that the true population has a normal distribution with mean zero. Confidence intervals are defined relative to a particular distribution: defining a confidence interval as mean +/- sample standard deviation implicitly assumes a normal, or at least symmetric, distribution. If we choose a distribution that is itself non-negative, say $\chi^2$ or Poisson, then the confidence interval will never go below zero. It will, however, be asymmetric.

Plotting the confidence interval on a log-log plot is less clear cut. I can see how the problem arises: a log-log plot is completely appropriate for strictly positive data, but there is no single obvious way to plot the confidence interval. Here are some options, in roughly increasing order of difficulty:

- The simplest approach is simply to extend it to negative infinity - in other words, just extend it to the bottom of your chart and cut it off.
- If you use a boxplot for the CI, the "box" is drawn between the Q1 and Q3 quartiles and therefore the "box" part will always be strictly contained in the range of the original data, while only the "whisker" part will extend down past the bottom of the chart.
- A slightly more sophisticated version of the boxplot is the violin plot. The violin shape will go off the bottom of the chart, but it will be visually clear how much is getting cut off.
- If you calculate your confidence interval after the log transform, you may get a CI that is more meaningful for your data and stays on the plot. 
- If you fit a different distribution, say Poisson for count data, then you can calculate the CI from that, and the CI will not go below zero because the fitted distribution itself cannot.
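The proof by construction, plus the log-transform alternative, in a short Python sketch:

```python
import math
import statistics

x = [1, 1, 7]                       # the strictly positive sample from the proof
m = statistics.mean(x)              # 3
s = statistics.stdev(x)             # sqrt(12), about 3.46

# "68%" interval as mean +/- one sample sd: the lower bound is negative
ci = (m - s, m + s)

# Same idea computed on the log scale, then back-transformed:
logs = [math.log(v) for v in x]
lm, ls = statistics.mean(logs), statistics.stdev(logs)
log_ci = (math.exp(lm - ls), math.exp(lm + ls))  # strictly positive, asymmetric

print(ci, log_ci)
```

The back-transformed interval stays strictly positive (and asymmetric), so it can be drawn on a log-log plot without being cut off.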
36,254
Why Kullback-Leibler in Stochastic Neighbor Embedding
Dimensionality reduction techniques are often motivated by finding new representations of the data to discover hidden variables or to discover structure. SNE takes a different approach (compared to PCA, for example) by preserving local structures, which is done by taking advantage of the KL-divergence's asymmetric properties.

Conditional probabilities as inverse distance. Looking at Eq (1), notice that the conditional probability can be interpreted as an "inverse distance", because close points (low distance) are assigned high probabilities, and far points (high distance) are assigned low probabilities. (Note: the name "inverse distance" is obviously not true in a stricter mathematical sense, because effectively a larger set of numbers $ \mathbb{R} $ is mapped to a smaller set of numbers $ [0,1] $.)

Taking advantage of asymmetry in KL. Two scenarios behave differently than they would under a symmetric cost function in Equation (2):

$ p_{i|j} >> q_{i|j} $: points that are close in high-dimensional space and far in low-dimensional space are penalised heavily. This is important, because it promotes the preservation of local structures.

$ q_{i|j} >> p_{i|j} $: points that are far in high-dimensional space and close in low-dimensional space are penalised less heavily. This is okay for us.

Thus, the asymmetric property of the KL-divergence, together with the definition of the conditional probability, constitutes the key idea of this dimensionality reduction technique. Below, you can see that this is exactly why the other distances fail to be a good substitute.

So then, what is the problem with the other distance metrics?

The Jensen-Shannon Divergence is effectively the symmetrisation of the KL-Divergence:
$$ JSD(P_i||Q_i) = \frac{1}{2}KL(P_i||Q_i) + \frac{1}{2} KL(Q_i || P_i) .$$
This loses exactly the property of preserving local structures, so it is not a good substitute.

The Wasserstein distance can intuitively be seen as the rearranging of a histogram from one state to another. The rearrangements cost the same both ways, so the Wasserstein metric is also symmetric, and does not have this desirable property.

The Kolmogorov-Smirnov distance is nonparametric, which would imply that we don't assume a probability distribution, yet the distributional structure is explicitly defined in Eq (1).
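To make the asymmetry concrete, here is a small numeric sketch of a single KL term; the probability values are purely illustrative, not taken from any real embedding:

```python
import math

# One term of KL(P_i || Q_i): p * log(p / q).
def kl_term(p, q):
    return p * math.log(p / q)

# Neighbours in high-D (p large) mapped far apart in low-D (q small):
# the cost is large, so local structure is strongly protected.
penalty_near_mapped_far = kl_term(0.5, 0.01)   # ~ 1.96

# Distant points in high-D (p small) mapped close together in low-D (q large):
# the cost is tiny, so this mistake is tolerated.
penalty_far_mapped_near = kl_term(0.01, 0.5)   # ~ -0.04
```

Swapping the roles of $p$ and $q$ changes the penalty by two orders of magnitude, which is exactly the asymmetry SNE exploits.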
36,255
Why Kullback-Leibler in Stochastic Neighbor Embedding
Stochastic Neighbor Embedding under f-divergences https://arxiv.org/pdf/1811.01247.pdf This paper tries five different f-divergence functions: KL, RKL, JS, CH (Chi-Square), and HL (Hellinger). The paper also goes over which divergence emphasizes what in terms of precision and recall.
36,256
Feature Importance of a feature in lightgbm is high but reduces evaluation score
If you look in the lightgbm docs for the feature_importance function, you will see that it has a parameter importance_type. The two valid values for this parameter are split (the default) and gain. It is not necessarily the case that split and gain produce the same feature importances. There is also a newer library for feature importance, shap. Go through this article written by the co-author of that library; he explains this in a much better way.
36,257
Feature Importance of a feature in lightgbm is high but reduces evaluation score
You should use verbose_eval and early_stopping_rounds to track the actual performance of the model during training. For example, verbose_eval = 10 will print out the performance of the model at every 10 iterations. It is possible either that the feature harms your model or that the model is overfitted. In overfitting situations there usually emerges a dominating feature, as in your case. By tracking the performance visually during training with verbose_eval and ensuring that the model is not overfitting with early_stopping_rounds, you can analyze the features by removing them one at a time to see how the performance is directly affected. Do not remove multiple features at once: the model can lose a strong pattern from those and you may misinterpret the result.
36,258
Can the 'bin size' in a histogram be thought of as a regularity constraint?
Yes, this is a reasonable way to think about it (assuming the histogram is normalized to obtain a proper pdf). Bin width constrains the smoothness of the density estimate (speaking loosely, since histograms are discontinuous functions). It controls the extent to which finer structure can be modeled, and also the extent to which random fluctuations in the data affect the estimate. It plays a similar role as the kernel width in kernel density estimation, and hyperparameters that control leaf size in decision trees. To be a little more specific, bin width is a hyperparameter that controls the bias variance tradeoff. Reducing bin width decreases bias because it allows a finer representation--histograms with narrower bins form a richer class of functions that can better approximate the true/underlying distribution. But, it increases variance because fewer data points are available for estimating the height of each bin--histograms with narrower bins are more sensitive to random fluctuations in the data, and will vary more over datasets drawn from the same underlying distribution. A good bin width balances these opposing effects to give a density estimate that better matches the underlying distribution. For more detail see: Scott (1979). On optimal and data-based histograms. Shalizi (2009). Estimating Distributions and Densities [course notes]
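The bias-variance effect of bin width can be checked numerically. This is an illustrative sketch with a standard normal as a hypothetical underlying distribution; the sample size, bin counts, and repetition count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(-4, 4, 201)
true_pdf = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)

def hist_density(data, bins):
    """Evaluate a normalized histogram estimate on the grid."""
    heights, edges = np.histogram(data, bins=bins, range=(-4, 4), density=True)
    idx = np.clip(np.searchsorted(edges, grid, side="right") - 1, 0, len(heights) - 1)
    return heights[idx]

def bias2_and_var(bins, n_rep=200, n=200):
    """Average squared bias and variance over repeated datasets."""
    ests = np.array([hist_density(rng.standard_normal(n), bins) for _ in range(n_rep)])
    bias2 = ((ests.mean(axis=0) - true_pdf) ** 2).mean()
    var = ests.var(axis=0).mean()
    return bias2, var

bias2_wide, var_wide = bias2_and_var(bins=5)      # wide bins: high bias, low variance
bias2_narrow, var_narrow = bias2_and_var(bins=80) # narrow bins: low bias, high variance
```

With these settings the narrow-bin estimator shows markedly higher variance and lower bias than the wide-bin one, matching the tradeoff described above.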
36,259
Can the 'bin size' in a histogram be thought of as a regularity constraint?
Kernel density estimators are oftentimes rationalised as a "continuous" version of a histogram. Many books on nonparametric kernel estimation also discuss histograms. See, e.g., chapter 2 in Racine, Jeffrey S. "Nonparametric econometrics: A primer." Foundations and Trends® in Econometrics 3.1 (2008): 1-88.
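The analogy can be sketched in code: the histogram's bin width and the kernel estimator's bandwidth play the same smoothing role. This example uses scikit-learn's KernelDensity; the data, bin count, and bandwidth are illustrative choices:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(1)
data = rng.standard_normal(500)[:, None]
grid = np.linspace(-3, 3, 61)[:, None]

# Histogram: piecewise-constant density estimate, controlled by bin width.
hist_heights, edges = np.histogram(data[:, 0], bins=20, density=True)

# KDE: smooth density estimate, where bandwidth plays the role of bin width.
kde = KernelDensity(kernel='gaussian', bandwidth=0.3).fit(data)
kde_density = np.exp(kde.score_samples(grid))
```

Shrinking the bandwidth makes the KDE spikier in the same way that shrinking bin width makes the histogram more jagged.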
36,260
Can the 'bin size' in a histogram be thought of as a regularity constraint?
It is reasonable, because what you're doing by putting samples in bins is approximating the data. In my experience, depending on your goal and the data available, those bins can vary drastically and have a big impact on how the data is handled further. For some cases you might not need a lot of bins, or maybe you lack data, and you can still see the general curve. On the other side, if the approximation is too strong you can miss out on some details, like local minima and maxima or the overall structure. For example, you can take a multi-modal function (figure omitted) and compare its histogram with 100 bins and with 8 bins (figures omitted): there's a clear difference in structural complexity. If we're talking about the density function, of course you should choose the second option for a smoother curve without such extreme values as in the first image. Usually I prefer to use the Freedman–Diaconis rule as a rule of thumb to choose the default number of bins and then tune it considering the task.
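The Freedman–Diaconis rule mentioned above is simple to compute by hand, and numpy exposes the same rule directly; the sample data here is an arbitrary illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.standard_normal(1000)

# Freedman–Diaconis: bin width = 2 * IQR / n^(1/3)
q75, q25 = np.percentile(data, [75, 25])
width = 2 * (q75 - q25) / len(data) ** (1 / 3)
n_bins = int(np.ceil((data.max() - data.min()) / width))

# numpy implements the same rule via bins='fd'
edges = np.histogram_bin_edges(data, bins='fd')
```

Starting from the rule's suggestion and then tuning by eye usually beats picking a bin count from scratch.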
36,261
How to find nearest neighbors using cosine similarity for all items from a large embeddings matrix?
Actually, we can use cosine similarity in knn via sklearn. The source code is here. This works for me:

model = NearestNeighbors(n_neighbors=n_neighbor, metric='cosine',
                         algorithm='brute', n_jobs=-1)
model.fit(user_item_matrix_sparse)
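A self-contained version of the snippet above, with hypothetical random embeddings standing in for the user-item matrix:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
item_embeddings = rng.normal(size=(1000, 64))   # hypothetical embedding matrix

nn = NearestNeighbors(n_neighbors=5, metric='cosine', algorithm='brute', n_jobs=-1)
nn.fit(item_embeddings)

# Neighbours of the first item; cosine *distance* = 1 - cosine similarity,
# so each item's nearest neighbour is itself at distance ~0.
distances, indices = nn.kneighbors(item_embeddings[:1])
```

Note that sklearn works with cosine distance, so smaller values mean more similar items.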
36,262
How to find nearest neighbors using cosine similarity for all items from a large embeddings matrix?
One weak point is sorting, and a second is creating a new collection. So you could instead keep a separate list of only the top-K items as you go; this way, instead of sorting everything and allocating a new portion of memory, you make it more robust. You can use a keyed insort for this (https://stackoverflow.com/questions/27672494/how-to-use-bisect-insort-left-with-a-key); since Python 3.10, bisect.insort accepts a key directly:

from bisect import insort

top_k = []  # list of (index, similarity), most similar first
for i, row in df.items():  # iteritems() is deprecated in recent pandas
    sim = cos_sim(v1, row)
    insort(top_k, (i, sim), key=lambda t: -t[1])
    if len(top_k) > k:
        top_k.pop()  # drop the current worst

# top_k is the result
nearest_items = [i for i, _ in top_k]

*keep in mind that I wrote it without testing so it may have some tiny bugs
36,263
How to find nearest neighbors using cosine similarity for all items from a large embeddings matrix?
A Nearest Neighbours model is fairly fast to build, because the algorithm uses the triangle inequality. Sadly, neither Scikit-Learn's ball tree nor its KD-tree supports cosine distance, so with metric='cosine' the search falls back to a brute-force scan, which is less efficient for high-dimensional data.

from sklearn.neighbors import NearestNeighbors

embeddings = get_embeddings(words)
tree = NearestNeighbors(n_neighbors=30, algorithm='brute', metric='cosine')
tree.fit(embeddings)
36,264
How to determine what type of layers do I need for my Deep learning model?
I have some good news and some bad news for you. The good news is that there are many problems for which we know which architecture works best, because of previous research. The bad news is that since today we don't have a good theory of generalization for Deep Networks, we lack theoretical guidance about how to select an architecture for a new problem (however, read here for some insights). Thus, in general the most honest answer is that "it's just a matter of understanding and experience". On the other hand, for some specific fields we can give more canned suggestions: Computer Vision We know that the Convnet family of architectures works very well for image classification: LeNet, Alexnet, VGGNet, ResNets, etc. You can train a beefed-up version of LeNet on a non-GPU laptop, and that's a great way to start learning about them. I suggest you start from this Keras implementation https://github.com/fchollet/keras/blob/master/examples/mnist_cnn.py and try to improve it a bit, by reproducing in Keras the section Implementing a CNN in the TensorFlow layers API of this Jupyter notebook. By the way, I can't recommend the second edition of Sebastian Raschka's book highly enough - it's a great way to gain practical knowledge about Machine Learning and Deep Learning. Instead of wasting time reading multiple tutorials on the Internet, get the book - you'll get a more solid understanding of the subject, also because quite a few of the most cited blog posts on Convolutional Neural Networks are basically summaries of the first edition of the book. If you want to train architectures which will perform well on realistic, big data sets (such as CIFAR-100 or ImageNet), you need to have access to a GPU cluster. Natural Language Processing Here we know that RNNs work well. Actually, the "simple" RNN architecture known as LSTM delivers much better results than most people would commonly expect, as shown in this paper: On the State of the Art of Evaluation in Neural Language Models. 
The paper also highlights a big limit of modern Deep Learning research: a lot of papers don't care enough about repeatability and reproducibility of results, and some results that are presented as the new state of the art are instead just due to uncontrolled experimental variation. Again, Raschka's book can be quite useful to start learning about RNNs, together with the corresponding Jupyter notebook. The general case If you want to tackle a known problem, but maybe with a new data set (for example, you want to perform image classification of car parts because you work for a car manufacturer), you need to use model selection techniques, such as for example cross-validation. You build different networks (different number of layers, different activation functions, etc.) and choose the one with the smallest cross-validation error. Then, you retrain it on the full data set, and you use it for prediction. However, since the number of alternatives can be prohibitive, you can use some automated machine learning frameworks which help you explore the space of possible networks, such as for example: auto-sklearn tpot If you need to work on big data sets, these tools won't work (they are based on scikit-learn, so there's no support for GPUs, currently). You may have a look at this paper, Large-Scale Evolution of Image Classifiers: like the other one I linked, this one takes proper care to ensure repeatability of results. If you want to attack a new problem (say, Neural Program Synthesis) for which we still have no idea of which architectures work best, probably your best bet is to attend NIPS and ICML (or stalk the right sections of arXiv), in the hope that someone has already tackled your problem.
36,265
How to determine what type of layers do I need for my Deep learning model?
If you want to use Deep Learning, you must know what it is good at in the current state of the art, and which problems are still challenging. Essentially, the classes of problems it handles well are (no, my list is definitely not exhaustive): a. image recognition and classification b. natural language processing: translation c. audio: speech recognition And problems that are still a challenge: a. logic processing / understanding and proving b. source code processing: automated programming, bug fixing, bug finding etc. c. problem diagnosis (e.g., engine or mechanical problem diagnosis). My knowledge is limited; e.g., the last item above may well be very much advanced right now, as it has a long history starting from the use of expert systems for problem diagnosis (and had many success stories). So, as with (c) above, you have to frame your problem in a form that fits one of the well-known domains or problem classes, before even identifying the algorithm. Broadly, algorithms can be classified as follows: https://www.quora.com/Machine-learning-is-a-broad-discipline-Where-can-I-find-a-mind-map-knowledge-tree-of-all-the-areas-and-methods-and-their-relations
36,266
Difference between simulated annealing and multiple greedy
The method you describe is called random restart hill climbing (or sometimes shotgun hill climbing), and it is a different algorithm from simulated annealing. Yes, generally as the number of iterations $k$ increases both methods will eventually give a location $w_i$ which reaches a global optimum $w^*$. This is for the simple reason that both incorporate random search. That is, a random restart (hill climbing) or random move (simulated annealing) can turn out to coincide with a global optimum. Nevertheless, here are two important differences: random restart hill climbing always moves to a random location $w_i$ after some fixed number of iterations $k$. In simulated annealing, moving to a random location depends on the temperature $T$. random restart hill climbing will move to the best location in the neighbourhood in the climbing phase. In simulated annealing, the location is selected randomly; you always move if it's better than your current location, but with some probability related to $T$ you may move even if it's worse. Simulated annealing is a somewhat more complicated algorithm, and depends on the temperature schedule which determines $T$ at iteration $k$. If the temperature $T$ is set to a very small constant value then simulated annealing becomes like stochastic hill climbing. If $T$ is set to a very large constant value, then simulated annealing becomes like random search. The way you select the temperature schedule determines how you navigate between these two different types of behaviour. tldr: these are different algorithms, but they use similar ideas to incorporate random sampling into search.
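The acceptance rule and temperature schedule described above can be sketched in a few lines. The objective function, step size, and geometric cooling rate here are arbitrary illustrative choices:

```python
import math
import random

def simulated_annealing(f, x0, steps=5000, t0=1.0, cooling=0.999, seed=0):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = x + rng.gauss(0, 0.5)          # random move in the neighbourhood
        fc = f(cand)
        # Always accept improvements; accept worse moves with prob exp(-delta/T).
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = x, fx
        t *= cooling                          # geometric temperature schedule
    return best, fbest

# Double-well objective: a local minimum near x = +1 and a deeper one near x = -1.
f = lambda x: x**4 - 2 * x**2 + 0.3 * x
best, fbest = simulated_annealing(f, x0=2.0)
```

With a high initial $T$ the walker readily crosses the barrier between the wells; as $T$ decays it behaves more and more like plain hill climbing inside whichever well it ends up in.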
Can I treat the mean of a set of z-scores as a z-score?
Maybe someone else can explain the math behind it, but consider this quick demonstration: I generate five vectors, each 100 numbers long. Each of these vectors is on a different scale, so I standardize them (i.e., create z-scored variables). That is, the mean is zero and the standard deviation is 1 for each of these five latent construct variables:

set.seed(1839)
## create five different z-score variables that represent latent constructs
data <- data.frame(
  latent_construct_1 = scale(rnorm(100, 10, 4)),
  latent_construct_2 = scale(rnorm(100, 3, 18)),
  latent_construct_3 = scale(rnorm(100, -5, 7)),
  latent_construct_4 = scale(rnorm(100, 0, 8)),
  latent_construct_5 = scale(rnorm(100, 20, 20))
)

Let's check to make sure they are actually z-scores:

> sapply(data, mean)
latent_construct_1 latent_construct_2 latent_construct_3 latent_construct_4 latent_construct_5
     -2.203951e-16       1.634435e-17       1.400464e-17      -1.449145e-17       7.852226e-17
> sapply(data, sd)
latent_construct_1 latent_construct_2 latent_construct_3 latent_construct_4 latent_construct_5
                 1                  1                  1                  1                  1

So, now let's say we average all five of these together:

## make a mean of all of these latent constructs
data$mean_latent_construct <- rowMeans(data)

Is this new variable a z-score? We can check to see if the mean is zero and the standard deviation is one:

> ## is the mean zero?
> mean(data$mean_latent_construct)
[1] -2.436148e-17
> ## is the standard deviation one?
> sd(data$mean_latent_construct)
[1] 0.4599126

The variable is not a z-score, because the standard deviation is not one. However, we could now z-score this mean variable. Let's do that and compare the distributions:

## z-score the mean latent construct
data$mean_latent_construct_z <- scale(data$mean_latent_construct)

## compare distributions
library(tidyverse)
data <- data %>%
  select(mean_latent_construct, mean_latent_construct_z) %>%
  gather(variable, value)
ggplot(data, aes(x = value, fill = variable)) +
  geom_density(alpha = .7) +
  theme_light()

The z-scored aggregate variable of z-scores looks a lot different from the aggregate variable of z-scores. In short: No, a mean of z-scored variables is not a z-score itself.
Can I treat the mean of a set of z-scores as a z-score?
Nope. The central limit theorem should provide some insight. Or you can appeal to the variance of a sum. If $X_1, X_2, \ldots, X_p$ comprise your $p$ independent z-scores to average together (each with mean 0 and variance 1), then the mean has variance: $$\mbox{var} (\bar{X}) = \frac{1}{p^2} \sum_{i=1}^p \mbox{var}(X_i) = 1/p$$ The mean could be rescaled, however: since a sum of normals is normal, the rescaled mean $\sqrt{p}\,\bar{X}$ would meet the criteria of a z-score.
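A quick numerical check of this $1/p$ variance (a Python sketch; the sample size, seed, and helper names are arbitrary choices of mine): averaging $p = 5$ independent z-scored variables gives a standard deviation close to $1/\sqrt{5} \approx 0.447$.

```python
import random
import statistics

def standardize(xs):
    """z-score a list: subtract the sample mean, divide by the sample sd."""
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [(x - m) / s for x in xs]

rng = random.Random(1839)
n, p = 2000, 5

# p independent z-scored variables, each of length n
cols = [standardize([rng.gauss(0, 1) for _ in range(n)]) for _ in range(p)]

# row-wise mean of the p z-scored columns
row_means = [sum(col[i] for col in cols) / p for i in range(n)]

sd_of_mean = statistics.stdev(row_means)  # theory says about 1/sqrt(5) = 0.447
```

Multiplying `row_means` by $\sqrt{p}$ would restore unit standard deviation, as noted above.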
Are these estimators of $P(X<Y)$ asymptotically normally distributed?
The statistic $T$ is an example of a general $U$-statistic, introduced by Hoeffding in his 1948 paper "A class of statistics with asymptotically normal distribution". Moreover, it is among the most famous of that class, namely the Mann-Whitney U test. It has lower mean squared error than $S$, for it equals $E\left(S\mid X_{\left(1\right)},X_{\left(2\right)},\ldots,Y_{\left(1\right)},Y_{\left(2\right)},\ldots\right)$, and since the order statistics are sufficient, the Rao-Blackwell theorem can be applied. Furthermore, it is asymptotically normally distributed, a fact that follows from a general theorem on $U$-statistics; see the chapters on $U$-statistics in e.g. Lehmann's Elements of Large Sample Theory or Serfling's Approximation Theorems of Mathematical Statistics. Its limiting distribution, with a small sample correction, can be found on Wikipedia, and it is implemented in R through the function $\mathtt{wilcox.test}$ (with the option $\mathtt{paired=FALSE}$). Note that the definition of the Mann-Whitney U test is ambiguous. Sometimes it is defined as the $T$ above, but sometimes it counts "victories for $x_i$" as positive and "victories for $y_i$" as negative. This is the approach taken in e.g. the R function $\mathtt{wilcox.test}$.
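For concreteness, here is a small pure-Python sketch (the sample sizes, distributions, and function name are mine) computing $T$ as the proportion of pairs with $x_i > y_j$; note that implementations like R's $\mathtt{wilcox.test}$ report an unnormalized pair count rather than this proportion.

```python
import random

def prop_x_greater(xs, ys):
    """T = (1 / (n m)) * sum over all pairs of 1{x_i > y_j},
    the Mann-Whitney estimate of P(X > Y)."""
    wins = sum(1 for x in xs for y in ys if x > y)
    return wins / (len(xs) * len(ys))

rng = random.Random(42)
xs = [rng.gauss(0.5, 1) for _ in range(200)]  # X ~ N(0.5, 1)
ys = [rng.gauss(0.0, 1) for _ in range(200)]  # Y ~ N(0, 1)

t_hat = prop_x_greater(xs, ys)
# theory for this setup: P(X > Y) = Phi(0.5 / sqrt(2)), roughly 0.64
```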
Are these estimators of $P(X<Y)$ asymptotically normally distributed?
Here is a fairly straightforward and intuitive argument for both estimators being asymptotically normal. Call $$T(X|y_1)=\frac{1}{n} \sum_{i=1}^{n} I(x_i>y_1)$$ As with $S$, the central limit theorem implies this is asymptotically normal. Furthermore, take $$T = \frac{1}{n}\sum_{i=1}^{n} T(X|y_i)$$ Since each $T(X|y_i)$ is asymptotically normal, and any linear combination of normal variables is normal, $T$ must also be asymptotically normal.
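The averaging identity this argument rests on can be checked numerically; in this sketch (pure Python, with an arbitrary seed and sample sizes of my choosing) the mean of the conditional statistics $T(X|y_j)$ equals the direct pairwise proportion.

```python
import random

def t_given_y(xs, y):
    """T(X | y) = (1/n) * sum_i 1{x_i > y}."""
    return sum(1 for x in xs if x > y) / len(xs)

rng = random.Random(7)
xs = [rng.gauss(0, 1) for _ in range(500)]
ys = [rng.gauss(0, 1) for _ in range(500)]

# T as the average of the conditional statistics T(X | y_j)
t_parts = [t_given_y(xs, y) for y in ys]
t_stat = sum(t_parts) / len(t_parts)

# the same number computed as a direct proportion over all pairs
t_direct = sum(1 for x in xs for y in ys if x > y) / (len(xs) * len(ys))
```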
Why do some statistics symbols have a "squared", e.g. Variance $\sigma^2$, "R squared" $R^2$ or heritability $H^2$
While we might analyze the many different formulae that we find in statistics, and see that second moments have a special place...

...maybe a more special place in statistics than in physics (which also occasionally uses square terms for simplicity, for instance the 'radius of gyration' $r_g^2$; and a term like 'moment of inertia' is not an entirely simplified term either, containing its origin, the moment, just as the statistical terms contain their origin, the square. On top of this, physicists like simplicity, such as $\hslash = \frac{h}{2 \pi}$, while statisticians, well)...

Yet the reasons for these uses of square terms (e.g. $\left(\frac{x}{\sigma} \right)^2$, which easily becomes seen as containing a "constant" $\sigma^2$ instead of $\sigma$ when you take it out of the brackets) may be more easily found in historical reasons.

$\mathbf{h^2}$ and $\mathbf{R^2}$

Via the answer of Nick Cox on the earlier CV question "Who is the creator or inventor of coefficient of determination (R-squared)?" we see that history had a great influence on this term. And this is not just for $R^2$; the term $h^2$ was "invented" by the same person. Just see an article search on Google: https://scholar.google.com/scholar?q="degree+of+determination"&as_ylo=1918&as_yhi=1924

You see that Sewall Wright did a great deal of the first descriptions of the concept of 'degree of determination'. He expressed both $R^2$ and $h^2$ in terms of the square of something else: 1) coefficients of correlation $R$, and 2) heredity or an equivalent correlation coefficient $h$ (see an earlier source than the one mentioned by Nick Cox: Wright 1920).

In an article like Mordecai Ezekiel's 1929 "Meaning and Significance of Correlation Coefficients" you see that for a considerable time people were using all kinds of expressions with the correlation coefficient (in that specific article: $r^2$, $r$, $\sqrt{1-r^2}$, $1-\sqrt{1-r^2}$) aside from $r^2$, which made the explicit notation of $r^2$ important (physics does not provide this freedom of choice, where we need to consider what kind of moment, first, second, third, or a function thereof, or something else like the median, is best to describe a certain distribution or situation).

In the wonderful overview from Wright 1934, "The method of path coefficients", he suggests: "The squared path coefficient may accordingly be called a coefficient of determination. Such coefficients were used before the term path coefficient was applied to the square root," although people continued using the squared definition. Probably this 'method of path coefficients' was not much liked, because who is teaching/learning this nowadays, and what other statistics guru has been using these definitions? In this 1934 overview, you also find a reference to a 1918 article in which Wright uses squares of correlation coefficients but not yet a term related to 'determination'.

$\mathbf{\sigma^2}$

This term is very often not used as such. Instead it is used without the square on the left-hand side of the equation $\sigma = \sqrt{E\left[(X-\mu)^2\right]}$, or replaced by the term 'variance'. A typical expression is $Var(X)$. Another existing expression is $\mu_2$ (widely used in older texts). The subscript denotes the order of the moment. So $\mu_1=\mu$ (or better $\mu^\prime_1=\mu$) is the first raw moment, the mean; the subscript 2 means the second moment (the variance, in the case of the central second moment); the subscript 3 means the third moment; and so on. (A problem with the symbol $\mu_2$ is that it is unclear around which point the moment is defined, e.g. central or raw, even if $\mu^\prime$ vs $\mu$ exists to differentiate between raw and central. The symbol $\mu$ for the mean actually has the same problem, although it has become so standard that the ambiguity is not relevant in most cases.)

Well, the large text under this item explains a bit why $\sigma^2$ may have just been easier for many scientists and statisticians. Still, like $h^2$ and $R^2$, there is a historical origin. Interesting reads:

- Pearson 1894, "Contributions to the Mathematical Theory of Evolution", in which, at some point, the standard deviation is actually written as $\sigma = \sqrt{\mu_2}$.
- Airy 1861 (who uses a letter $c$ in place of $\sigma$ and the description 'error of mean square', but also compares with different, non-squared concepts: 'mean error' and 'probable error').
- Fisher, who in 1920 examines the difference between $\sigma_1$ and $\sigma_2$, the unknown $\sigma$ estimated by either the first central moment 'mean error' or the second central moment 'mean squared error'.

According to Wikipedia (Oct 19, 2017), Fisher first used the term 'variance': "It is therefore desirable in analyzing the causes of variability to deal with the square of the standard deviation as the measure of variability. We shall term this quantity the Variance."

If you read the article you see that he often puts variance on the left-hand side of the equation and denotes it with a letter $V$. The use of a letter $V$ is actually still common nowadays in works on mathematical statistics. In this article he often uses $\sigma^2$, but that is for simplicity. Imagine Fermat's theorem written with a term like $c = \sqrt[n]{a^n+b^n}$ instead of $c^n = a^n+b^n$. In this way, simplicity in equations, the use of $\sigma^2$ becomes strengthened. Note that replacing $\sigma^2$ by $V$ is not always useful. Sometimes one wants to indicate that the calculation is about $\sigma^2$. For instance, equation 1 in the 1918 article, $\sigma^2 = \sum a^2$, is clearer than $V = \sum a^2$, if the $\sigma$ that it is about is written explicitly in the equation.

Earlier than Fisher, there is mention of 'variability': in 1916 James Johstone ("The Mathematical Theory of Organic Variability") describes a concept of variability in relation to the Gaussian distribution. In relation to 'deviation squared' or 'squared deviation' you will find several earlier sources. One interesting reference among early uses of 'squared deviation' is Francis Ysidro Edgeworth (1917), who speaks, in a footnote, of 'fluctuation' in place of $\sigma^2$.
Why do some statistics symbols have a "squared", e.g. Variance $\sigma^2$, "R squared" $R^2$ or heritability $H^2$
Narrow sense heritability is denoted $h^2$ because people (not sure who, but see Felsenstein, 2016, Ch. IX, problem 7) first introduced the symbol $h$ for the correlation between the additive genetic effect $x$ and the phenotype $z=x+e$ (with $x$ and $e$ uncorrelated), \begin{align} h=\mbox{corr}(x,z) &=\frac{\mbox{Cov}(x,z)}{\sqrt{\mbox{Var}(x)\mbox{Var}(z)}} \\&=\frac{\mbox{Cov}(x,x+e)}{\sqrt{\mbox{Var}(x)\mbox{Var}(z)}} \\&=\frac{\mbox{Var}(x)}{\sqrt{\mbox{Var}(x)\mbox{Var}(z)}} \\&=\sqrt{\frac{\mbox{Var}(x)}{\mbox{Var}(z)}} \end{align} If the additive component $x$ and the phenotype $z$ are jointly binormal, then the slope of the regression of the additive genetic component, or breeding value, $x$ on the phenotype $z$ (the heritability determining the response to selection appearing in the breeders' equation) becomes $$ \beta_{x|z}=\frac{\mbox{Cov}(x,z)}{\mbox{Var}(z)}=\frac{\mbox{Var}(x)}{\mbox{Var}(z)}=h^2. $$
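A quick simulation consistent with this derivation (a Python sketch; the variance values $V_a = 2$, $V_e = 3$, the seed, and the variable names are my own choices): with $x$ and $e$ independent, both $\mbox{corr}(x,z)^2$ and the regression slope of $x$ on $z$ come out near $V_a/(V_a+V_e) = 0.4$.

```python
import random
import statistics

rng = random.Random(2016)
n = 20000
va, ve = 2.0, 3.0  # additive and environmental variances (made up)

x = [rng.gauss(0, va ** 0.5) for _ in range(n)]   # breeding values
z = [xi + rng.gauss(0, ve ** 0.5) for xi in x]    # phenotypes z = x + e

def cov(a, b):
    """Sample covariance."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

h = cov(x, z) / (statistics.variance(x) * statistics.variance(z)) ** 0.5  # corr(x, z)
slope = cov(x, z) / statistics.variance(z)  # regression of x on z
# theory: h**2 and slope both approximate va / (va + ve) = 0.4
```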
Applying duality and KKT conditions to LASSO problem
1) You're going the wrong direction by invoking duality directly. To get from $\text{arg min}_{\beta: \|\beta\|_1 \leq t} \|y - X\beta\|_2^2$ to $\text{arg min}_{\beta} \|y - X\beta\|_2^2 + \lambda\|\beta\|_1$ you just need to invoke Lagrange multipliers. (See, e.g., Section 5.1 of [1].) Lagrange multipliers are often discussed in the context of duality when teaching them, but in practice you can just switch directly from one form to the other without considering the dual problem. If you are interested in the dual problem of the lasso, it's worked out on slides 12 and 13 of [2].

2) What you have probably seen is the KKT stationarity condition for the lasso: $$\hat{\beta} \in \text{arg min}_\beta \frac{1}{2}\|y - X\beta\|_2^2 + \lambda \|\beta\|_1 \Longleftrightarrow -X^T(y - X\hat{\beta}) + \lambda s = 0 \text{ for some } s \in \partial \|\hat{\beta}\|_1$$ where $\partial \|\beta\|_1$ is called the subdifferential of the $\ell_1$ norm. (This is essentially just the standard "derivative equals zero at the minimum" condition from calculus, but adjusted for non-differentiability.) We know the subdifferential of $|\beta_i|$ is $\{\text{sign}(\beta_i)\}$ if $\beta_i \neq 0$, so this equation gives an exact closed-form solution for the lasso if we know the support and signs of the solution. Namely, $$\hat{\beta}_{\hat{S}} = (X_{\hat{S}}^TX_{\hat{S}})^{-1}(X_{\hat{S}}^Ty - \lambda \, \text{sign}(\hat{\beta}_{\hat{S}}))$$ (Aside: this solution makes the "shrinkage" effect of the lasso, as compared to OLS, very clear.) Of course, the hard part of solving the lasso is finding the support and signs of the solution, so this is not terribly helpful in practice. It is, however, a very useful theoretical construct and can be used to prove lots of nice properties of the lasso; most importantly, it lets us use the "primal-dual witness" technique to establish conditions under which the lasso recovers the "true" set of variables. See Section 11.4 of [3].

[1] S. Boyd and L. Vandenberghe. Convex Optimization. Available at https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf
[2] http://www.stat.cmu.edu/~ryantibs/convexopt-F15/lectures/13-dual-corres.pdf
[3] T. Hastie, R. Tibshirani, M. Wainwright. Statistical Learning with Sparsity: The Lasso and Generalizations. Available at https://web.stanford.edu/~hastie/StatLearnSparsity_files/SLS_corrected_1.4.16.pdf
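In the special case of a single predictor, the closed form above reduces to soft-thresholding, which makes the KKT stationarity condition easy to verify numerically. Here is a Python sketch with made-up data (all names and numbers are illustrative):

```python
def soft_threshold(a, lam):
    """S(a, lam) = sign(a) * max(|a| - lam, 0)."""
    if a > lam:
        return a - lam
    if a < -lam:
        return a + lam
    return 0.0

# One-predictor lasso, min_b 0.5 * ||y - x*b||^2 + lam * |b|,
# has the closed form b_hat = S(x'y, lam) / x'x.
x = [1.0, 2.0, -1.0, 0.5]
y = [2.0, 3.9, -2.1, 1.2]
lam = 0.8

xty = sum(xi * yi for xi, yi in zip(x, y))   # x'y = 12.5
xtx = sum(xi * xi for xi in x)               # x'x = 6.25
b_hat = soft_threshold(xty, lam) / xtx

# KKT stationarity: -x'(y - x*b_hat) + lam*sign(b_hat) should be 0
grad = -sum(xi * (yi - xi * b_hat) for xi, yi in zip(x, y))
sign = (b_hat > 0) - (b_hat < 0)
kkt = grad + lam * sign
```

The residual gradient comes out to exactly $-\lambda \cdot \text{sign}(\hat{b})$, so `kkt` is zero, matching the stationarity condition.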
Applying duality and KKT conditions to LASSO problem
1) You're going the wrong direction by invoking duality directly. To get from $\text{arg min}_{\beta: \|\beta\|_1 \leq t} \|y - X\beta\|_2^2$ to $\text{arg min}_{\beta} \|y - X\beta\|_2^2 + \lambda
Applying duality and KKT conditions to LASSO problem 1) You're going the wrong direction by invoking duality directly. To get from $\text{arg min}_{\beta: \|\beta\|_1 \leq t} \|y - X\beta\|_2^2$ to $\text{arg min}_{\beta} \|y - X\beta\|_2^2 + \lambda\|\beta\|_1$ you just need to invoke Lagrange multipliers. (See, e.g. Section 5.1 of [1]) LMs are often discussed in the context of duality when teaching them, but in practice you can just switch directly from one to the other without considering the dual problem. If you are interested in the dual problem of the lasso, it's worked out on Slides 12 and 13 of [2] 2) What you have probably seen is the KKT Stationarity condition for the Lasso: $\text{arg min}\frac{1}{2}\|y - X\beta\|_2^2 + \lambda \|\beta\|_1 \Longleftrightarrow -X^T(y - X\hat{\beta}) + \lambda s = 0 \text{ for some } s \in \partial \|\hat{\beta}\|_1$ where $\partial \|\beta\|_1$ is called the subdifferential of the $\ell_1$ norm. (This is essentially just the standard "derivative equals zero at minimum" condition from calculus, but adjusted for non-differentiability.) We know the subdifferential of $|\beta_i| = \text{sign}(\beta_i)$ if $\beta_i \neq 0$ so this equation gives an exact closed form solution for the lasso if we know the support and sign of the solution. Namely, $\hat{\beta}_{\hat{S}} = (X_{\hat{S}}^TX_{\hat{S}})^{-1}(X_{\hat{S}}^Ty - \lambda * \text{sign}(\hat{\beta}_{\hat{S}}))$ (Aside: this solution makes the "shrinkage" effect of the lasso (as compared to OLS) very clear.) Of course, the hard part of solving the lasso is finding the support and signs of the solution, so this is not terribly helpful in practice. It is, however, a very useful theoretical construct and can be used to prove lots of nice properties of the lasso; most importantly, it lets us use the "primal-dual witness" technique to establish conditions under which the lasso recovers the "true" set of variables. See Section 11.4 of [3]. [1] S. Boyd and L. Vandenberghe. 
Convex Optimization. Available at https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf [2] http://www.stat.cmu.edu/~ryantibs/convexopt-F15/lectures/13-dual-corres.pdf [3] T. Hastie, R. Tibshirani, M. Wainwright. Statistical Learning with Sparsity: The Lasso and Generalizations. Available at https://web.stanford.edu/~hastie/StatLearnSparsity_files/SLS_corrected_1.4.16.pdf
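As a sanity check on the stationarity condition above (my own sketch, not from the references): in the single-predictor case the support and sign are easy to determine, the closed form reduces to soft thresholding, and the result can be verified against a brute-force grid search. The data, the penalty level, and all variable names here are illustrative choices.

```python
import random

# Verify the KKT closed-form lasso solution in the single-predictor case.
# Objective: (1/2) * ||y - x*b||^2 + lam * |b|.
random.seed(0)
n, lam = 200, 5.0
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.5 * xi + random.gauss(0, 1) for xi in x]

xty = sum(xi * yi for xi, yi in zip(x, y))
xtx = sum(xi * xi for xi in x)

# KKT stationarity: -x^T(y - x*b) + lam*s = 0, with s = sign(b) if b != 0.
# If |x^T y| > lam the solution is nonzero with sign(b) = sign(x^T y):
#   b_hat = (x^T y - lam * sign(x^T y)) / (x^T x)   (soft thresholding)
if abs(xty) > lam:
    s = 1.0 if xty > 0 else -1.0
    b_kkt = (xty - lam * s) / xtx
else:
    b_kkt = 0.0  # the subgradient condition holds with some |s| <= 1

# Cross-check against a brute-force grid search over b in [-2, 2].
def obj(b):
    rss = sum((yi - xi * b) ** 2 for xi, yi in zip(x, y))
    return 0.5 * rss + lam * abs(b)

b_grid = min((k / 1000.0 for k in range(-2000, 2001)), key=obj)
print(b_kkt, b_grid)  # the two should agree to grid resolution
```

The grid search is only there to confirm that the closed form really is the minimizer; in higher dimensions the same formula holds restricted to the (unknown) support, which is exactly why it is a theoretical rather than computational tool.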
36,274
Disagreement between p-values and confidence intervals
Caveat: This answer assumes that the question is about interpreting bootstrapped p-values and CIs. A comparison between a traditional p-value (not bootstrapped) and a bootstrapped CI would be a different issue. With a traditional (not bootstrapped) t-test, the 95%CI and the p-value's position relative to the .05 cutoff for significance will always tell you the same thing. That's because they're both based on the same information: the t-distribution for your degrees of freedom and the mean and standard error observed in your sample (or difference between means and standard error, in the case of a two-sample t-test). If your CI doesn't overlap with 0, then your p-value will necessarily be < .05 --- unless, of course, there's a bug in the software or a user error in implementation or interpretation of the test. With a bootstrapped t-test, the CI and p value are both calculated directly from the empirical distribution generated by the bootstrapping: the p value is simply what percent of bootstrapped group differences are more extreme than the original observed difference; the 95%CI is the middle 95% of bootstrapped group differences. It is not impossible for the p-value and the CI to disagree about significance in a bootstrapped test. Do you accept or reject the null hypothesis? In the context of a bootstrapped test, the p-value (as compared to the CI) more directly reflects the spirit of the hypothesis test, so it makes the most sense to rely on that value to decide whether or not to reject the null at your desired alpha (generally .05). So in your case, where the p value is less than .05 but the 95%CI contains zero, I recommend rejecting the null hypothesis. All of this skips over the big ideas about how important "significance" really should be and whether or not null hypothesis significance testing is actually that useful of a tool. 
Briefly, I always recommend complementing any significance testing analysis with estimation of effect sizes (for a two-sample t-test, the best effect size estimate will probably be Cohen's d), which can provide some additional context to help you understand your results. Related helpful post: What is the meaning of a confidence interval taken from bootstrapped resamples?
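To make the two calculations concrete, here is a minimal stdlib-Python sketch of a two-sample bootstrap (toy data, not the OP's; the p-value uses pooled resampling to approximate the null distribution, which is one common variant of the procedure described above):

```python
import random, statistics

# Two-sample bootstrap: the percentile CI and the p-value are read off
# two related resampling distributions, and they summarize the evidence
# differently, so they can disagree near the 0.05 boundary.
random.seed(1)
a = [random.gauss(0.4, 1) for _ in range(30)]
b = [random.gauss(0.0, 1) for _ in range(30)]
obs = statistics.mean(a) - statistics.mean(b)

B = 5000
diffs = []                 # bootstrap distribution of the observed difference
pooled = a + b             # resampling pool under H0 (no group difference)
null_diffs = []
for _ in range(B):
    ra = random.choices(a, k=len(a))
    rb = random.choices(b, k=len(b))
    diffs.append(statistics.mean(ra) - statistics.mean(rb))
    na = random.choices(pooled, k=len(a))
    nb = random.choices(pooled, k=len(b))
    null_diffs.append(statistics.mean(na) - statistics.mean(nb))

diffs.sort()
ci = (diffs[int(0.025 * B)], diffs[int(0.975 * B)])      # middle 95%
p = sum(abs(d) >= abs(obs) for d in null_diffs) / B      # two-sided p
print("observed diff:", round(obs, 3), "95% CI:", ci, "p:", round(p, 4))
```

Note that the CI comes from resampling the groups separately while the p-value comes from resampling under the null; that asymmetry is one reason the two can disagree in finite samples.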
36,275
Disagreement between p-values and confidence intervals
If the p-value of the null hypothesis test is smaller than 0.05, then zero should not be contained in the 95% confidence interval of the parameter that you are assuming to be zero under the null hypothesis. These are the same thing, so either there is a bug or you are not testing the same hypothesis. EDIT: as the other answers and the comment below correctly indicate, this is not the full story. However, I still think that if one test indicates the groups have different means (p < 0.005) and the other does not reject (p > 0.05), the tests are probably really checking different things. While theoretically this difference could be due to asymptotics (bootstraps are approximations on a finite sample; other tests are approximations based on normality assumptions), that difference is surprisingly large. I argue it is alarmingly large, and without figuring out what is going on there, you should not yet draw conclusions. That is also exactly what you are doing, by the way, by posting the question here. Maybe you can share the numbers and make this interesting question a bit more concrete.
36,276
If $\mathbb{E}|X_n|=O(a_n)$, how large is $Y_n = X_n\ln\left(\frac{1}{X_n}\right)$?
I believe much might be revealed by contemplating sequences of random variables like the following: $$X_n = \left\{\eqalign{\frac{1}{n} & \text{ with probability } 1 - \frac{1}{f(n)^2}e^{-g(n)}\\ f(n)e^{g(n)} & \text{ with probability } \frac{1}{f(n)^2}e^{-g(n)}.}\right.$$ Later we will identify suitable functions $f$ and $g$ after analyzing the roles they play in the asymptotic expectations. For now let's just assume $f(n)$ is positive, that both $f$ and $g$ diverge as $n$ grows large, and that $g(n) \ge n$ for all $n\gt 0$. By definition of expectation, $$\eqalign{ \mathbb{E}(X_n) &= \frac{1}{n}\left(1-\frac{1}{f(n)^2}e^{-g(n)}\right) + f(n)e^{g(n)} \left(\frac{1}{f(n)^2}e^{-g(n)}\right) \\&= \frac{1}{f(n)} + \frac{1}{n} - \frac{1}{nf(n)^2}e^{-g(n)}.}$$ Evidently $$\mathbb{E}(X_n) = O\left(n^{-1} + f(n)^{-1}\right),$$ permitting us to take $a_n = n^{-1} + f(n)^{-1}$, which converges to $0$ as required. (Because it does so, and $x \log(1/x)\to 0$ as $x\to 0$, notice that $a_n\log(1/a_n)\to 0$.) Nevertheless the calculation of $\mathbb{E}(Y_n)$ includes a term $$f(n)e^{g(n)}\log\left(\frac{1}{f(n)e^{g(n)}}\right) \times \frac{1}{f(n)^2}e^{-g(n)}=-\frac{\log(f(n))}{f(n)} - \frac{g(n)}{f(n)}.\tag{1}$$ The other term, equal to $$\frac{1}{n}\log\left(\frac{1}{1/n}\right) \times \left(1 - \frac{1}{f(n)^2}e^{-g(n)}\right) = \frac{\log{n}}{n}\left(1-\frac{1}{f(n)^2}e^{-g(n)}\right),\tag{2}$$ remains bounded (and converges to zero). Let's suppose $f$ diverges more slowly than $g$; that is, pick $f$ for which $g(n)/f(n)$ diverges. The sum of $(1)$ and $(2)$ asymptotically is $$\mathbb{E}(Y_n) = O\left(\frac{g(n)}{f(n)}\right) \to \infty.$$ There do exist such $f$ and $g$ satisfying all the conditions placed on them (positive, divergent, with $g(n)/f(n)$ divergent too): for instance, $g(n)=nh(n)$ (with $h(n) \ge 1$) and $f(n) = n^\epsilon$ works for any $0 \lt \epsilon \lt 1$. Consequently, $\mathbb{E}(Y_n)=O(h(n)n^{1-\epsilon})$ for all $\epsilon\gt 0$ and for all functions $h$ bounded below by $1$. 
This shows there is no limit at all on the rate at which $\mathbb{E}(Y_n)$ can diverge.
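A quick numeric check of the construction (my own illustration, not part of the answer), taking $f(n) = \sqrt{n}$ and $g(n) = n$, i.e. $h(n) = 1$ and $\epsilon = 1/2$: the algebraic cancellations above let us evaluate both expectations without ever forming the astronomically large $e^{g(n)}$.

```python
import math

# Two-point distribution: X_n = 1/n w.p. 1 - p_big, and f*e^g w.p. p_big,
# where p_big = exp(-g) / f^2. With f = sqrt(n), g = n:
#   E(X_n) -> 0 while |E(Y_n)| grows like g(n)/f(n) = sqrt(n).
def moments(n):
    f, g = math.sqrt(n), float(n)
    p_big = math.exp(-g) / f**2           # tiny probability of the huge atom
    # The huge atom contributes f*e^g * p_big = 1/f to E(X_n),
    # so e^g cancels and never has to be computed:
    ex = 1 / f + 1 / n - p_big / n
    # E(Y_n) = E(X_n * log(1/X_n)): the atom at 1/n gives (log n)/n * (1 - p_big);
    # the huge atom gives -(log f + g)/f after the same cancellation.
    ey = math.log(n) / n * (1 - p_big) - (math.log(f) + g) / f
    return ex, ey

for n in [10, 100, 1000, 10000]:
    ex, ey = moments(n)
    print(n, round(ex, 5), round(ey, 3))
```

The printed table shows `ex` shrinking toward zero while the magnitude of `ey` grows roughly like $\sqrt{n}$, exactly the divergence the answer establishes.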
36,277
If $\mathbb{E}|X_n|=O(a_n)$, how large is $Y_n = X_n\ln\left(\frac{1}{X_n}\right)$?
Since the $X_n$ are positive random variables, we do not need the absolute value. We have $$\{\mathbb{E}X_n\}=O(a_n) \implies \limsup_{n\to \infty}\frac{\mathbb{E}X_n}{a_n} < K \in \mathbb R_{++}.$$ Then, since also $$a_n \to 0 \implies \mathbb{E}X_n \to 0 \implies X_n \to 0,\;\;\; n\to \infty$$ (in probability, by Markov's inequality, since they are positive r.v.'s), the sequence of $X$'s converges to the constant zero. But then $$Y_n = -X_n \ln X_n \implies \lim_{n \to \infty} Y_n =0 $$ ...or maybe plims. Am I missing something?
36,278
Is testing model assumptions considered p-hacking/fishing?
It's not quite the same thing, in the sense that testing whether assumptions were violated was originally intended to make sure an appropriate analysis was done, but as it turns out, it does have some of the same consequences (see e.g. this question). It is a milder form than the more extreme variants of p-hacking that are specifically targeted at somehow getting the p-value for the effect of interest below 0.05 -- unless you start combining multiple problematic practices (e.g. checking for normality, checking for homoscedasticity, checking covariates that "should" be in the model, checking for linearity of covariates, checking interactions, etc.). I am not sure whether anyone has looked into how much that invalidates the final analysis. Of course, the other issue is that testing for normality is often not meaningful (see e.g. this discussion). For small sample sizes you will not reliably pick up even massive deviations that truly violate your assumptions, while for large sample sizes e.g. the t-test becomes quite robust to deviations, yet the normality test will start to detect tiny deviations that do not matter. It is much better to (whenever possible) specify an appropriate model based on previous data or subject matter knowledge. When that is not possible, it may be best to use methods that are more robust to violations of distributional assumptions, or that make fewer (or no) such assumptions.
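The size/power point can be illustrated with a quick simulation (my own sketch: a Jarque-Bera-style normality statistic with its asymptotic chi-square(2) approximation, whose survival function is exp(-JB/2); the "deviation" is a mildly skewed lognormal, sigma = 0.1, skewness about 0.3, which is my arbitrary choice):

```python
import math, random

# The same mild deviation from normality is essentially invisible to a
# normality test at n = 20 but flagged almost every time at n = 3000.
def jb_pvalue(xs):
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2 - 3.0                  # excess kurtosis
    jb = n / 6.0 * (skew ** 2 + kurt ** 2 / 4.0)
    return math.exp(-jb / 2.0)                 # chi^2(2) survival function

def reject_rate(n, reps):
    hits = 0
    for _ in range(reps):
        xs = [math.exp(0.1 * random.gauss(0, 1)) for _ in range(n)]
        hits += jb_pvalue(xs) < 0.05
    return hits / reps

random.seed(2)
rate_small = reject_rate(20, 300)
rate_large = reject_rate(3000, 100)
print("n=20:  ", rate_small)   # rarely detects the deviation
print("n=3000:", rate_large)   # detects it essentially every time
```

Whether that detected deviation at n = 3000 actually matters for the downstream t-test is exactly the problem the answer raises.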
36,279
Is testing model assumptions considered p-hacking/fishing?
I do not believe that checking the assumptions of any model qualifies as p-hacking/fishing. In the first article, the author is talking about analysts who are repeatedly performing analyses on a data set and only reporting the best result. In other words, they are purposely portraying a biased picture of what is happening in the data. Testing the assumptions of regression or any model is mandatory. What is not mandatory is repeatedly re-sampling from the data in order to obtain the best possible outcome. Assuming researchers have a large enough sample to pull from, they will sometimes re-sample over and over again... perform hypothesis tests over and over again... until they achieve the result they want. Hence p-hacking. They're hacking the p-value by looking for the desired result and won't quit until they find it (fishing). So even if out of 100 hypothesis tests they achieve only 1 significant result, they'll report the p-value belonging to that particular test and omit all the others. Does this make sense? When checking model assumptions, you're making sure that the model is appropriate for the data that you have. With p-hacking/fishing, you are endlessly searching the data/manipulating the study in order to achieve your desired outcome. As for multiple comparisons: if you keep running a model through the mud, endlessly trying to find a way to invalidate it (or validate it), then eventually you will find a way. This is fishing. If you want to validate a model, then you'll find a way. If you want to invalidate it, then you'll find a way. The key is to have an open mind and find out the truth - not just see what you want to see.
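A quick simulation (mine, not the author's) of why the report-only-the-best behaviour described above is a problem: run k independent tests on pure noise and keep only the smallest p-value. With k = 20 tests at alpha = .05, the chance of at least one "significant" result is 1 - 0.95^20, roughly 64%.

```python
import math, random

# Each "test" is a one-sample z-test of mean 0 on n = 30 standard normal
# draws, so every null hypothesis is true by construction.
def z_test_pvalue(xs):
    n = len(xs)
    z = sum(xs) / math.sqrt(n)                  # ~ N(0,1) under H0
    return math.erfc(abs(z) / math.sqrt(2))     # two-sided normal p-value

random.seed(3)
k, reps, hits = 20, 500, 0
for _ in range(reps):
    pvals = [z_test_pvalue([random.gauss(0, 1) for _ in range(30)])
             for _ in range(k)]
    hits += min(pvals) < 0.05                   # "fishing": keep only the best
rate = hits / reps
print("false-positive rate after fishing:", rate)
print("theoretical rate:", 1 - 0.95 ** k)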
36,280
How to report a linear mixed effects model for those who are unfamiliar and skeptical?
I partly take side with the reviewer on this one. You are interested in the effect of your parameter of interest — given the rest of the model. It is hard to interpret the results and to check the validity of the model if you only report a single parameter of interest. I would provide: the formula of your model beta estimates for all fixed effects corresponding SEs and CIs corresponding test statistics (z, t, Chi^2, change in AIC/BIC, whatever you used) with df's/n's corresponding p values SDs for your random effects and their correlations (if necessary as separate table) The space constraints in most classical journals will make it necessary to put this information into an online supplement. Examples for reporting mixed models can be found here.
36,281
Expressing the LASSO regression constraint via the penalty parameter
The answer to your question follows from consideration of Lagrangian duality. This is worked out in the post which I consider to be a duplicate in my comment to the OP's post. In what follows, I work out what I find to be a more insightful derivation. When we're solving a lasso, really, we're trying to jointly minimize $\frac{1}{2n} \|y - X \beta\|_2^2 = RSS$ and $\|\beta\|_1$. That is, we seek $\arg\min_\beta (\frac{1}{2n} \|y - X \beta\|_2^2, \|\beta\|_1)$. This doesn't seem well defined at the moment, since we know there's some tension between these two objectives. This is what optimization folks call multicriterion optimization. Let's visualize this problem by plotting $\left(\frac{1}{2n} \|y - X \beta\|_2^2, \|\beta\|_1 \right)$ for many $\beta$'s. (Note, here $p=5$, $n=100$, $X$ was randomly initialized, and the true coefficient $\beta^*$ has roughly a quarter of its entries equal to zero.) Here, $F = \|\beta\|_1$ and $G = \frac{1}{2n} \|y - X \beta\|_2^2$. That is, the vertical axis measures the lack of fit, and the horizontal axis measures the size of the coefficient. Note that I cut off the top of the image for the sake of clarity. The points at the bottom left of the plot are the ones we're interested in. Those correspond to the values of $\beta$ that both have small $\ell_1$ norm and have small error. In fact, for those points at the bottom left, there are no $\beta$ which have the same fit and smaller size, or the same size with better fit. To choose between these points, called Pareto optimal points, we need to determine the relative importance of the fit and the size, our two objectives. This should remind us of the tuning parameters $\lambda$ or $C$ in the unconstrained or constrained lasso, respectively. Below we plot in green some lasso solutions, computed from glmnet, superimposed on the above graph. Notice that the lasso found exactly the Pareto optimal points. This is very surprising, though! 
How did a multidimensional objective get optimized by a one dimensional objective? The process is called scalarization: we take weights $\mu_1, \mu_2 \geq 0$ and form the problem $$\arg\min_{\beta \in \mathbb{R}^p} \mu_1 \left( \frac{1}{2n} \|y-X\beta\|_2^2 \right) + \mu_2 \|\beta\|_1.$$ When both objectives are convex, which they are here, this scalarized problem finds all Pareto optimal points. Assuming $\mu_1 \neq 0$, which is assuming that both objectives are being considered, and writing $\lambda = \frac{\mu_2}{\mu_1}$, we have that this is just $\hat{\beta}^\textrm{unc} = \arg\min_{\beta \in \mathbb{R}^p} \frac{1}{2n} \|y-X\beta\|_2^2 + \lambda \|\beta\|_1,$ the lasso in its usual form. By Lagrangian duality, we know that there exists some $C$ so that we can instead solve the equivalent problem $\hat{\beta}^\textrm{con} = \arg\min_{\beta : \|\beta\|_1 \leq C} \frac{1}{2n} \|y-X\beta\|_2^2,$ where $\hat{\beta}^\textrm{con} = \hat{\beta}^\textrm{unc}$. Now that we understand better what we're trying to solve and have a good visualization, let's focus on finding a relationship between the tuning parameters $\lambda$ and $C$. For a given value of $C$, the constrained lasso estimate $\hat{\beta}^\textrm{con}$ will be one of those green points in the plot above. The way $\hat{\beta}^\textrm{con}$ can be found is by fixing ourselves at $\|\beta\|_1 = \mathrm{min}\{C, \|\hat{\beta}_\mathrm{LS}\|_1\}$ (for $\hat{\beta}_\mathrm{LS}$ the least squares coefficient) and moving down until we get the lowest possible measure of lack of fit. That is, $$C = \|\hat{\beta}^\textrm{unc}\|_1.$$ As we saw above, $\lambda$ corresponds to a scalarization of our vector objective and hence is equal to the negative of the slope at this point: $$\lambda = -\frac{\partial \frac{1}{2n} \|y - X \beta\|_2^2}{\partial \|\beta\|_1} \mid_{\beta = \hat{\beta}^\textrm{con}}$$ (Note, this formula appears to be only correct up to constants. 
The correct $\lambda$ can quickly be found from the first order conditions, but I'd like to find a way to motivate it directly from this framework.) This corresponds (via the chain rule) to the first answer in the post that I linked as a possible duplicate.
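The $C = \|\hat{\beta}^\textrm{unc}\|_1$ correspondence is easy to check numerically in the single-predictor case (my own sketch on toy data; the penalty level and names are illustrative). Solve the penalized form in closed form via soft thresholding, set $C$ to the resulting $\ell_1$ norm, then confirm that the constrained form over $\{|b| \le C\}$ recovers the same estimate:

```python
import random

# Single-predictor lasso: penalized objective (1/2)||y - x*b||^2 + lam*|b|.
random.seed(4)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
y = [1.0 * xi + random.gauss(0, 1) for xi in x]
xty = sum(xi * yi for xi, yi in zip(x, y))
xtx = sum(xi * xi for xi in x)

def rss(b):  # the (1/2n)-scaled lack-of-fit; scaling doesn't change argmins
    return sum((yi - xi * b) ** 2 for xi, yi in zip(x, y)) / (2 * n)

lam = 20.0
# Penalized solution: soft-threshold x^T y at lam.
s = 1.0 if xty > 0 else -1.0
b_pen = s * max(abs(xty) - lam, 0.0) / xtx

# Constrained solution with C = ||b_pen||_1: grid-minimize RSS over |b| <= C.
C = abs(b_pen)
grid = [C * (k / 1000.0) * sgn for k in range(1001) for sgn in (1.0, -1.0)]
b_con = min(grid, key=rss)
print(round(b_pen, 4), round(b_con, 4))  # the two forms agree
```

Because the unconstrained least-squares fit lies outside the constraint set here, the constrained minimizer sits on the boundary $|b| = C$, which is exactly the penalized solution, the correspondence the derivation above describes.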
Expressing the LASSO regression constraint via the penalty parameter
The answer to your question follows from consideration of Lagrangian duality. This is worked in the post which I consider to be a duplicate in my comment to OP's post. In what follows, I work out what
Expressing the LASSO regression constraint via the penalty parameter The answer to your question follows from consideration of Lagrangian duality. This is worked in the post which I consider to be a duplicate in my comment to OP's post. In what follows, I work out what I find to be a more insightful derivation. When we're solving a lasso, really, we're trying trying to jointly minimize $\frac{1}{2n} \|y - X \beta\|_2^2 = RSS$ and $\|\beta\|_1$. That is, we seek $\arg\min_\beta (\frac{1}{2n} \|y - X \beta\|_2^2, \|\beta\|_1)$. This doesn't seem well defined at the moment, since we know there's some tension between these two objectives. This is what optimization folks call multicriterion optimization. Let's visualize this problem by plotting $\left(\frac{1}{2n} \|y - X \beta\|_2^2, \|\beta\|_1 \right)$ for many $\beta$'s. (Note, here $p=5$, $n=100$, $X$ was randomly initialized, and the true coefficient $\beta^*$ has a roughly a quarter of it's entries equal to zero.) Here, $F = \|\beta\|_1$ and $G = \frac{1}{2n} \|y - X \beta\|_2^2$. That is, the vertical axis measures the lack of fit, and the horizontal axis measures the size of the coefficient. Note that I cut off the top of the image for the sake of clarity. The points at the bottom left of the plot are the ones we're interested in. Those correspond to the values of $\beta$ that both have small $\ell_1$ norm and have small error. In fact, for those points at the bottom left, there are no $\beta$ which have the same fit and smaller size or the same size with better fit. To choose between these points, called pareto optimal points, we need to determine the relative importance of the fit and size, our two objectives. This should remind us of the tuning parameters $\lambda$ or $C$ in the unconstrained or constrained lasso, respectively. Below we plot in green some lasso solutions, computed from glmnet, imposed on the above graph. Notice that lasso found exactly the pareto optimal points. 
This is very surprising, though! How did a multidimensional objective get optimized by a one dimensional objective? The process is called scalarization: we take weights $\mu_1, \mu_2 \geq 0$ and form the problem $$\arg\min_{\beta \in \mathbb{R}^p} \mu_1 \left( \frac{1}{2n} \|y-X\beta\|_2^2 \right) + \mu_2 \|\beta\|_1.$$ When both objectives are convex, which they are here, this scalarized problem finds all pareto optimal points. Assuming $\mu_1 \neq 0$, which is assuming that both objectives are being considered, and writing $\lambda = \frac{\mu_2}{\mu_1}$, we have that this is just $\hat{\beta}^\textrm{unc} = \arg\min_{\beta \in \mathbb{R}^p} \frac{1}{2n} \|y-X\beta\|_2^2 + \lambda \|\beta\|_1,$ the lasso, in it's usual form. By lagrangian duality, we know that there exists from $C$ so that we can instead solve the equivalent problem $\hat{\beta}^\textrm{con} = \arg\min_{\beta : \|\beta\|_1 \leq C} \frac{1}{2n} \|y-X\beta\|_2^2,$ where $\hat{\beta}^\textrm{con} = \hat{\beta}^\textrm{unc}$. Now that we understand better what we're trying to solve and have a good visualization, let's now focus on finding a relationship between the tuning parameters $\lambda$ and $C$. For a given value of $C$, the constrained lasso estimate $\hat{\beta}^\textrm{con.}$ will be one of those green points in the plot above. The way $\hat{\beta}^\textrm{con.}$ can be found is by fixing ourselves at $\|\beta\|_1 = \mathrm{min}\{C, \|\hat{\beta}_\mathrm{LS}\|_1\}$ (for $\hat{\beta}_\mathrm{LS}$ the least squares coefficient) and moving down until we get the lowest possible measure of lack of fit. That is, $$C = \|\hat{\beta}^\textrm{unc}\|_1.$$ As we saw above, $\lambda$ corresponds to a scalarization of our vector objective and hence is equal to the slope at this point: $$\lambda = -\frac{\partial \frac{1}{2n} \|y - X \beta\|_2^2}{\partial \|\beta\|_1} \mid_{\beta = \hat{\beta}^\textrm{con}}$$ (Note, this formula appears to be only correct up to constants. 
The correct $\lambda$ can quickly be found from the first order conditions, but I'd like to find a way to motivate it directly from this framework.) This corresponds (via the chain rule) to the first answer in the post that I linked as a possible duplicate.
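The mapping $\lambda \mapsto C$ can be seen empirically (a small sketch I'm adding, not part of the original post): fit the unconstrained lasso at several penalty levels and read off the implied constraint level $C = \|\hat{\beta}^\textrm{unc}\|_1$; a larger $\lambda$ corresponds to a tighter constraint. scikit-learn's `Lasso` calls $\lambda$ `alpha`; the data below are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.0, 0.5, 0.0, 0.0])  # some zero entries
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# For each penalty level lambda (sklearn's `alpha`), the implied
# constraint level is C = ||beta_hat||_1 of the unconstrained solution.
alphas = [0.01, 0.1, 1.0]
C_values = [np.abs(Lasso(alpha=a).fit(X, y).coef_).sum() for a in alphas]
# Larger lambda  <=>  smaller C (a tighter constraint).
```

The monotone decrease of `C_values` in `alphas` is exactly the duality relationship described above: each $\lambda$ selects one Pareto optimal point, whose $\ell_1$ norm is the matching $C$.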
What is .hat in regression output
Those would be the diagonal elements of the hat matrix, which describe the leverage each point has on its fitted value. If one fits $\vec{Y} = \mathbf{X} \vec{\beta} + \vec{\epsilon}$ then $\mathbf{H} = \mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T$. In this example: $$\begin{pmatrix}Y_1\\ \vdots\\ Y_{32}\end{pmatrix} = \begin{pmatrix} 1 & 2.620\\ \vdots\\ 1 & 2.780 \end{pmatrix} \cdot \begin{pmatrix} \beta_0\\ \beta_1 \end{pmatrix} + \begin{pmatrix}\epsilon_1\\ \vdots\\ \epsilon_{32}\end{pmatrix}$$ Then calculating this $\mathbf{H}$ matrix results in:

library(MASS)
wt <- mtcars[, 6]                        # predictor: car weight
X  <- cbind(1, wt)                       # design matrix with intercept column
H  <- X %*% ginv(t(X) %*% X) %*% t(X)    # hat matrix
diag(H)                                  # the .hat values

Here $\mathbf{H}$ is a $32\times 32$ matrix whose diagonal contains these hat values. Hat matrix on Wikipedia Fun fact: It is called the hat matrix since it puts the hat on $\vec{Y}$: $$ \hat{\vec{Y}} = \mathbf{H}\vec{Y} $$
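The same computation can be sketched in Python/NumPy (with simulated weights standing in for mtcars$wt, since that dataset is R-specific), which also lets us check the standard properties of $\mathbf{H}$: it is symmetric, idempotent, and its trace equals the number of columns of $\mathbf{X}$.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 32
wt = rng.uniform(1.5, 5.5, size=n)     # stand-in for mtcars$wt
X = np.column_stack([np.ones(n), wt])  # design matrix with intercept column

# Hat matrix H = X (X'X)^{-1} X'
H = X @ np.linalg.solve(X.T @ X, X.T)
leverages = np.diag(H)                 # the .hat values

# Putting the hat on Y: fitted values are H @ Y for any response Y.
```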
Compressed Sensing relationship to L1 Regularization
There is essentially no difference. It's just statistician's terminology vs electrical engineer's terminology. Compressed sensing (more precisely, basis pursuit denoising [1]) is this problem: $\text{arg min}_x \frac{1}{2}\|Ax - b\|_2^2 + \lambda \|x\|_1$ while the Lasso [2] is this problem $\text{arg min}_{\beta} \frac{1}{2}\|y - X\beta\|_2^2 + \lambda \|\beta\|_1$ Inasmuch as there is a difference, it's that in Compressed Sensing applications, you (the engineer) get to choose $A$ to be "nicely behaved" while, for the Lasso, you (the statistician) don't get to choose $X$ and have to deal with whatever the data are (and they are rarely "nice"...). Consequently, much of the subsequent Compressed Sensing literature has focused on choosing $A$ to be as "efficient" as possible, while much of the subsequent statistical literature has focused on improvements to the lasso that still work with $X$ that "break" the lasso. [1] S.S. Chen, D.L. Donoho, M.A. Saunders. "Atomic Decomposition by Basis Pursuit." SIAM Journal on Scientific Computing 20(1), p.33-61, 1998. https://doi.org/10.1137/S1064827596304010 [2] R. Tibshirani "Regression Shrinkage and Selection via the lasso." Journal of the Royal Statistical Society: Series B 58(1), p.267–88, 1996. JSTOR 2346178.
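A toy recovery experiment (my own sketch, with arbitrarily chosen dimensions and sparsity) shows the engineer's side of the story: with a "nicely behaved" Gaussian $A$, the lasso recovers a sparse signal from far fewer measurements than dimensions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
m, p, k = 80, 200, 5                           # measurements, dimension, sparsity
A = rng.standard_normal((m, p)) / np.sqrt(m)   # "nicely behaved" Gaussian sensing matrix
x_true = np.zeros(p)
x_true[:k] = 1.0                               # a k-sparse signal
b = A @ x_true                                 # noiseless measurements (m << p)

# Basis pursuit denoising = lasso with a small penalty:
x_hat = Lasso(alpha=1e-3, fit_intercept=False, max_iter=100_000).fit(A, b).coef_
```

Despite $m = 80 < p = 200$, the $\ell_1$ penalty picks out the sparse solution; an unpenalized least-squares fit would be hopelessly underdetermined here.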
CLT with random variables that are not integrable
Among the many ways to solve this one, constructing the sequence by perturbing a standard Normal variable seems like the simplest and most elegant. At the end I comment on the connection with the Central Limit Theorem. Characteristic Functions Allow me a digression before I present a solution. The inspiration for the technique that will be used comes from the idea that there is more than one way to describe the distribution of any random variable $X$. Commonest, and most direct, is its distribution function $F_X(x)=\Pr(X\le x)$. An indirect but extremely useful alternative is its characteristic function $$\psi_X(t) = E\left[e^{itX}\right] = E\left[\cos(t X)\right] + i\, E\left[\sin(t X)\right].$$ Because $|e^{itX}|=1$ for all $t$, $\psi_F$ is defined for any distribution $F$ (and its values for all $t$ cannot exceed $1$ in size). Moreover, $X$ and $Y$ have the same distribution if and only if they have the same characteristic function. Even better is Lévy's Continuity Theorem: A sequence $X_n$ converges in distribution to a random variable $X$ if and only if for every $t$ the sequence $\psi_{X_n}(t)$ converges to a value $\psi(t)$ and the function $\psi$ is continuous at $0$. (All characteristic functions are continuous at $0$.) In that case, $\psi$ is the characteristic function of $X$. Another of the lovely properties enjoyed by characteristic functions is their relationship with linear combinations: when $X$ and $Y$ are independent random variables (on the same probability space) and $\alpha$ and $\beta$ are real numbers, $$\psi_{\alpha X+\beta Y}(t) = \psi_X(\alpha t)\psi_Y(\beta t).\tag{1}$$ This makes characteristic functions (cfs) a suitable tool for studying perturbations of random variables $X$ achieved by adding tiny amounts of other random variables $Y$ to them: that is, random variables of the form $X+\beta Y$ for $|\beta|$ small.
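Property $(1)$ is easy to check by Monte Carlo (a quick sketch I'm adding, not part of the original answer), using empirical characteristic functions of two independent standard normals; since $a^2 + b^2 = 1$ below, both sides should also match the exact $N(0,1)$ cf $e^{-t^2/2}$.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
X = rng.standard_normal(N)
Y = rng.standard_normal(N)
a, b, t = 0.6, 0.8, 1.3

def ecf(Z, t):
    """Empirical characteristic function: the sample mean of e^{itZ}."""
    return np.mean(np.exp(1j * t * Z))

# Property (1): psi_{aX+bY}(t) = psi_X(at) * psi_Y(bt) for independent X, Y.
lhs = ecf(a * X + b * Y, t)
rhs = ecf(X, a * t) * ecf(Y, b * t)
exact = np.exp(-t**2 / 2)  # cf of N(0,1), since a^2 + b^2 = 1
```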
Solution Construction of a sequence Let's construct a solution by starting with a standard Normal variable $Z$ and forming an independent sequence $Z_1, Z_2, \ldots, Z_n, \ldots$ with the same distribution as $Z$. This obviously has the limiting property we want: the means are all standard Normal, so in the limit the mean is standard Normal. Its cf is $$\psi_Z(t) = e^{-t^2/2}.\tag{2}$$ For the perturbations, pick some random variable $Y$ with infinite expectation. It will be convenient for $Y$ to have a cf that's easy to work with. I would like to suggest the Lévy Distribution (aka Stable Distribution with $\alpha=1/2,\ \beta=1$ or Inverse Gamma$(1/2,1/2)$ distribution) for which $$\psi_Y(t) = e^{-\sqrt{|t|}\,(1 - i \operatorname{sgn}(t))}.$$ (For $t\gt 0$, $\operatorname{sgn}(t)=1$; for $t \lt 0,$ $\operatorname{sgn}(t)=-1$.) This distribution is supported on $(0,\infty)$ and has no finite moments. To this sequence of standard normal variables $(Z_n)$ let's add ever-smaller positive multiples of $Y$. (Positivity is unnecessary but it makes working with the $\operatorname{sgn}$ function easier.) Let the sequence of multiples be $p_1,p_2,p_3,\ldots,$ to be determined. Thus, the sequence of random variables is defined to be $$X_n=Z_n + p_n Y_n$$ where $(Y_n)$ is an iid sequence of random variables with the same distribution as $Y$. Intuition What we need to worry about is whether the perturbations are so bad that they ruin the convergence to a standard Normal distribution. To those with experience with such heavy-tailed distributions, this is a real concern: there will always be some positive probability that the little bit of $Y_n$ added into $Z_n$ will occasionally introduce such a whopping big outlier that it overwhelms the partial sum $S_n$. The entire reason for using characteristic functions is to demonstrate this will not happen in the long run, provided we reduce the amount of perturbation (the $p_n$) sufficiently rapidly. 
Formal calculations First, $X_n$ has infinite expectation because $$E[X_n] = E[Z_n + p_n Y_n] = E[Z] + p_n E[Y] = p_n E[Y]$$ must be infinite since $E[Y]$ is infinite. Thus this sequence $(X_n)$ satisfies all the requirements of the problem. Let's turn to the analysis of the partial means. Repeated application of $(1)$ to the partial mean $$S_n = \frac{X_1 + X_2 + \cdots + X_n}{\sqrt{n}}$$ gives $$\eqalign{ \psi_{S_n}(t) &= \left[e^{-(t/\sqrt{n})^2/2}\color{Blue}{\psi_Y(p_1 t/\sqrt{n})}\right] \cdots \left[e^{-(t/\sqrt{n})^2/2}\color{Blue}{\psi_Y(p_n t/\sqrt{n})}\right] \\ &= \left[e^{-(t/\sqrt{n})^2/2} \cdots e^{-(t/\sqrt{n})^2/2}\right] \left[\color{Blue}{\psi_Y(p_1t/\sqrt{n}) \cdots \psi_Y(p_nt/\sqrt{n})}\right] \\ &= e^{-t^2/(2n) - t^2/(2n) - \cdots - t^2/(2n)}\quad \color{Blue}{e^{\sqrt{|p_1t/\sqrt{n}|}(-1+i\operatorname{sgn}(p_1t/\sqrt{n}))} \cdots e^{\sqrt{|p_nt/\sqrt{n}|}(-1+i\operatorname{sgn}(p_nt/\sqrt{n}))} }.\tag{3} }$$ Collecting the black powers of $e$ gives the power $-t^2/2$ while collecting the blue powers (coming from the perturbations) gives $$\sum_{i=1}^n \color{blue}{\sqrt{|p_it/\sqrt{n}|}(-1+i\operatorname{sgn}(p_it/\sqrt{n}))} = \sqrt{|t|}(-1+i\operatorname{sgn}(t))\frac{\sum_{i=1}^n \sqrt{p_i}}{n^{1/4}}\tag{4}$$ because $n$ and all the $p_i$ are positive. Since $|-1 + i\operatorname{sgn}(t)| \le \sqrt{2}$, for any fixed $t$ the value of $(4)$ goes to zero as $n$ increases provided $\sum_{i=1}^n\sqrt{p_i} = o(n^{1/4}).$ One way to make this happen is to make the sum of the $\sqrt{p_i}$ converge: take $p_i = 2^{-2i}$, for instance. Then $$\frac{1}{n^{1/4}} \sum_{i=1}^n \sqrt{p_i} \le \frac{1}{n^{1/4}} (1/2+1/4+\cdots+1/2^n+\cdots) = \frac{1}{n^{1/4}}\to 0.$$ Consequently, because the exponential is continuous at $0$, the blue terms in $(3)$ converge to $e^0=1$: they do not affect the limit. We conclude $(\psi_{S_n})$ converges to $\psi_Z$.
Because this is the cf of the standard Normal distribution, Lévy's Continuity Theorem implies $S_n$ converges to a standard Normal distribution, QED. Comments The ideas displayed here can be generalized. We don't need the $Z_n$ to be standard Normal; it suffices (by the usual Central Limit Theorem) that they are iid with zero mean and unit variance. It looks like we have established an extension of the CLT: the distributions of means of a sequence of independent random variables, even those with infinite expectations and variances, can (when suitably standardized) converge to a standard Normal distribution, provided the "infinite part" of the random variables shrinks sufficiently quickly.
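The whole construction can also be checked by simulation (a rough numerical sketch I'm adding; note that convergence is slow, since the perturbation term in $(4)$ decays only like $n^{-1/4}$, so at moderate $n$ the distribution of $S_n$ is only approximately standard Normal). Lévy$(0,1)$ variates are generated via the identity $Y = 1/Z'^2$ for $Z'$ standard normal.

```python
import numpy as np

rng = np.random.default_rng(7)
n_terms, n_sims = 400, 4000

Z = rng.standard_normal((n_sims, n_terms))
# Levy(0,1) variates via the identity Y = 1/Z'^2, Z' standard normal:
Y = 1.0 / rng.standard_normal((n_sims, n_terms)) ** 2
p = 4.0 ** -np.arange(1, n_terms + 1)         # p_i = 2^{-2i}
X = Z + p * Y                                 # each X_i has infinite expectation

S = X.sum(axis=1) / np.sqrt(n_terms)          # the partial means S_n
# S should be approximately standard Normal despite the infinite means.
```

The heavy right tail of the Lévy perturbation still shows up as occasional large outliers at finite $n$, which is exactly the "intuition" paragraph's concern; only the bulk of the distribution is close to Normal.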
Importance sampling: unbiased estimator of the normalizing constant
1- Why does the author take $\mathfrak{Z}=\int \varphi(x)\,\text{d}x$? Since $p$ is a density, its integral is equal to $1$. If $\mathfrak{Z}$ is the normalising constant of $\varphi$, it has to satisfy $$\int p(x)\text{d}x=\int \frac{\varphi(x)}{\mathfrak{Z}}\text{d}x=1$$ 2- I'm not able to prove mathematically why $\mathbb{E}[\hat{\mathfrak{Z}}]=\mathfrak{Z}$ Recall that $w(x)=\varphi(x)/q(x)$. Then $$\mathbb{E}[w(X)]=\int \frac{\varphi(x)}{q(x)}q(x)\text{d}x= \int \varphi(x)\text{d}x=\mathfrak{Z}$$ 3- How can one prove that $\hat{I_N}$ is biased for finite values of $N$? The ratio of two unbiased estimators is biased, since $$\mathbb{E}[1/h(X)]\ge1/\mathbb{E}[h(X)]$$ by Jensen's inequality. Note: There exist unbiased estimators of the inverse $\mathfrak{Z}^{-1}$, including the notorious harmonic mean estimator.
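A quick numerical illustration (my own sketch, with an arbitrary choice of target and proposal): take $\varphi(x) = e^{-x^2/2}$, whose true normalizing constant is $\mathfrak{Z} = \sqrt{2\pi}$, and a $N(0, 2^2)$ proposal $q$; the mean of the importance weights estimates $\mathfrak{Z}$ without bias.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def phi(x):
    """Unnormalized target density: phi(x) = exp(-x^2/2), so Z = sqrt(2*pi)."""
    return np.exp(-x**2 / 2)

# Proposal q = N(0, 2^2), chosen heavier-tailed than the target.
N = 100_000
x = rng.normal(0.0, 2.0, size=N)
w = phi(x) / norm.pdf(x, loc=0.0, scale=2.0)  # importance weights w(x) = phi(x)/q(x)

Z_hat = w.mean()                              # unbiased estimator of Z
Z_true = np.sqrt(2 * np.pi)
```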
What is the mean and variance of the median of a set of i.i.d normal random variables?
The median is the central order statistic when the number of observations is odd. If $n$ is even then the median is either an order statistic, or the mean of 2 order statistics (or something else), depending on which definition of median you use. So the exact distribution of the median can be worked out from the distribution of order statistics. For odd $n$, where all the $x$'s are iid from a pdf $f$ with cumulative distribution function $F$, the distribution of the median is: $n\binom{n-1}{(n-1)/2} F(x)^{\frac{n-1}2} f(x) (1-F(x))^{\frac{n-1}2}$ (this is the general order-statistic density with $k=(n+1)/2$). You can google "distribution of order statistics" to get more details and a derivation. For the normal we don't have a closed form solution for $F(x)$, but there are computational tools that can help evaluate the above (see the distr package for R for one possibility). If your main goal is just an estimate of the variance of the median, then a simpler approach is just to simulate a bunch of datasets and compute the variance of their medians (and the variance of their means for comparison). The Wikipedia article on "Median" also has information that may be of interest.
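The density can be checked numerically (a sketch I'm adding; the code includes the leading factor $n$ from the general order-statistic formula $\frac{n!}{(k-1)!(n-k)!}F^{k-1}f(1-F)^{n-k}$ with $k=(n+1)/2$): it should integrate to 1, have mean 0 by symmetry, and for moderate odd $n$ its variance should sit near the asymptotic value $\pi\sigma^2/(2n)$.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm
from scipy.special import comb

n = 11                       # odd sample size
k = (n - 1) // 2

def median_pdf(x):
    """Density of the sample median of n iid N(0,1) draws, n odd."""
    F = norm.cdf(x)
    return n * comb(n - 1, k) * F**k * norm.pdf(x) * (1 - F)**k

total, _ = integrate.quad(median_pdf, -np.inf, np.inf)
mean, _ = integrate.quad(lambda x: x * median_pdf(x), -np.inf, np.inf)
var, _ = integrate.quad(lambda x: x**2 * median_pdf(x), -np.inf, np.inf)
var_asym = np.pi / (2 * n)   # large-n approximation pi*sigma^2/(2n)
```

For small odd $n$ the exact variance is somewhat below the asymptotic value; the gap closes as $n$ grows.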
Meaning of proper prior
A prior distribution that integrates to 1 is a proper prior, by contrast with an improper prior, which doesn't. For example, consider estimation of the mean $\mu$ in a normal distribution with the following two prior distributions: $\qquad f(\mu) = N(\mu_0,\tau^2)\,,\: -\infty<\mu<\infty$ $\qquad f(\mu) \propto c\,,\qquad\qquad -\infty<\mu<\infty.$ The first is a proper density. The second is not - no choice of $c$ can yield a density that integrates to $1$. Nevertheless, both lead to proper posterior distributions. See the following posts, which throw additional light on the use of improper priors and some closely related issues: Flat, conjugate, and hyper- priors. What are they? What is an "uninformative prior"? Can we ever have one with truly no information?
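A small numerical illustration (my own sketch, with made-up data): under the flat improper prior the unnormalized posterior for $\mu$ is just the likelihood, which is integrable in $\mu$, so the posterior is proper; with known $\sigma$ it is exactly $N(\bar{y}, \sigma^2/n)$.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

# Made-up data, modeled as N(mu, sigma^2) with sigma known:
y = np.array([1.2, 0.7, 2.1, 1.5, 0.9])
sigma = 1.0

def unnorm_post(mu):
    """Posterior under the flat (improper) prior f(mu) ∝ c: the likelihood."""
    return np.prod(norm.pdf(y, mu, sigma))

# The likelihood integrates to a finite constant, so the posterior is proper.
# ([-10, 10] comfortably covers the posterior mass for this data.)
Z, _ = integrate.quad(unnorm_post, -10, 10)
mean, _ = integrate.quad(lambda m: m * unnorm_post(m) / Z, -10, 10)
var, _ = integrate.quad(lambda m: (m - mean) ** 2 * unnorm_post(m) / Z, -10, 10)
# Theory: posterior is N(ybar, sigma^2 / n).
```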
How does Python Scikit Learn handle linear separation problem in logistic regression?
Yes, sklearn.linear_model.LogisticRegression uses penalized (by default, L2-regularized) logistic regression, which "solves" the problem of perfect separation. If you set C (the inverse of the regularization strength) to something too large, you might still end up with bad results, though.
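A minimal sketch (my own example) of what this looks like on perfectly separated data: the unpenalized MLE would diverge, but the default penalty keeps the coefficient finite, while a huge C lets it grow much larger.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A perfectly separable 1-D dataset: the unpenalized MLE does not exist
# (the likelihood keeps increasing as the coefficient grows).
X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([0, 0, 0, 1, 1, 1])

# Default penalty (C=1.0) keeps the coefficient finite and moderate.
small = LogisticRegression(C=1.0).fit(X, y)
# Huge C ~ almost no penalty: the coefficient is allowed to grow much larger,
# stopping only when the solver's tolerance is reached.
large = LogisticRegression(C=1e6, max_iter=10_000).fit(X, y)
```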
What distinction is there between statistical inference and causal inference?
Causal inference is the process of ascribing causal relationships to associations between variables. Statistical inference is the process of using statistical methods to characterize the association between variables. Causality is at the root of scientific explanation, which is considered to be causal explanation. However, establishing causal relationships is extremely difficult, in spite of substantial advancements made during the past decades. Statistical inference works like a black box and generates the best possible characterization of the relationships between variables. Statistical inference provides estimates of the associations between variables, but of course association does not imply causation, so there is little that statistical inference alone can provide to establish causation. That is not to say that statistical tools cannot be used to establish causal relationships, but for that purpose a number of rules must be taken into account. These rules are generally known as covering laws; statistical inference is the method used in the statistical-relevance model designed to establish scientific explanations. As scientific explanations are causal explanations, a delicate relationship exists between statistical inference and causal inference. For a review of these concepts see Judea Pearl's "Causal inference in statistics: An overview" (http://ftp.cs.ucla.edu/pub/stat_ser/r350.pdf).
What distinction is there between statistical inference and causal inference?
"Causal inference" means reasoning about causation, whereas "statistical inference" means reasoning with statistics (it's more or less synonymous with the word "statistics" itself). So, causal inference is a subset of statistical inference, except that you can do some causal reasoning without statistics per se (e.g., if event A happened before event B, then B cannot have caused A). The inverse definitely doesn't hold because many statistical methods have nothing to do with causation, and can be fruitfully applied in situations where the data permits no causal inferences.
36,291
What distinction is there between statistical inference and causal inference?
Causal inference uses techniques like matching before fitting statistical models. In other words, causal inference puts more emphasis on research design, while statistical inference puts more emphasis on the mathematical/computational part.
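As a toy illustration of the matching idea (the single-covariate setup, the data-generating process, and all numbers below are invented for the example), here is a sketch of 1-nearest-neighbour matching on a confounder before comparing outcomes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: covariate x affects both treatment assignment and outcome y.
n = 500
x = rng.normal(size=n)
treated = rng.random(n) < 1 / (1 + np.exp(-x))         # confounded assignment
y = 2.0 * treated + x + rng.normal(scale=0.5, size=n)  # true effect = 2

# Naive difference in means is biased by the confounder x.
naive = y[treated].mean() - y[~treated].mean()

# 1-nearest-neighbour matching on x: pair each treated unit with the
# closest control, then average the within-pair outcome differences.
xt, yt = x[treated], y[treated]
xc, yc = x[~treated], y[~treated]
idx = np.abs(xt[:, None] - xc[None, :]).argmin(axis=1)
matched = (yt - yc[idx]).mean()

print(f"naive: {naive:.2f}, matched: {matched:.2f}")
```

The matched estimate lands near the true effect of 2 while the naive contrast is pulled upward by the confounder, which is the research-design point: the adjustment happens before any model is fit.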
36,292
Characteristic Function of a Compound Poisson Process
What I was missing was the exponential series: $$ e^x=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+\cdots $$ I also made a mistake when I separated the uniform in the expectation. Fixing these problems: $$ \begin{align} \mathbb{E}(e^{iuY_1})&=\sum_nP(N=n)\mathbb{E}(e^{iuY_1}\mid N=n)\\ &=\sum_nP(N=n)\prod_{j=1}^n\mathbb{E}(e^{iu\mathbb{1}_{\{U_j\leq 1\}}X_j})\quad\text{(by independence)}\\ &=\sum_n P(N=n)\left(\mathbb{E}(e^{iu\mathbb{1}_{\{U_1\leq 1\}}X_1})\right)^n\quad\text{(by i.i.d.)}\\ &=\sum_{n=0}^\infty\frac{(\lambda T)^n e^{-(\lambda T)}}{n!}\left(\mathbb{E}(e^{iu\mathbb{1}_{\{U_1\leq 1\}}X_1})\right)^n\quad\text{(by Poisson)}\\ &=e^{-(\lambda T)}\cdot e^{(\lambda T)\mathbb{E}(e^{iu\mathbb{1}_{\{U_1\leq 1\}}X_1})}\quad\text{(by the exponential series)} \end{align} $$ We can calculate the expectation by conditioning on the uniform: $$ \mathbb{E}(e^{iu\mathbb{1}_{\{U_1\leq 1\}}X_1})=\frac{T-1}{T}+\frac{1}{T}\int e^{iux}f(x)dx $$ Substituting and doing some algebra we get the answer: $$ \begin{align} \mathbb{E}(e^{iuY_1})&=e^{\lambda \int (e^{iux}-1)f(x)dx} \end{align} $$
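The final formula is easy to sanity-check by Monte Carlo. This sketch takes the jump density $f$ to be standard normal, in which case $\int (e^{iux}-1)f(x)dx = e^{-u^2/2}-1$, and writes the Poisson mean of the jump count simply as `lam` (the $\lambda T$ product with $t=1$ in the derivation above):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, u, n_sim = 3.0, 0.7, 200_000

# N jumps per simulation, N ~ Poisson(lam); jump sizes i.i.d. N(0,1).
N = rng.poisson(lam, size=n_sim)
jumps = rng.normal(size=N.sum())

# Sum the jumps belonging to each simulation via cumulative sums.
csum = np.concatenate([[0.0], np.cumsum(jumps)])
ends = np.cumsum(N)
Y = csum[ends] - csum[ends - N]

empirical = np.exp(1j * u * Y).mean()

# For f = N(0,1): integral of (e^{iux} - 1) f(x) dx = exp(-u^2/2) - 1.
theoretical = np.exp(lam * (np.exp(-u**2 / 2) - 1))
print(abs(empirical - theoretical))
```

The empirical characteristic function agrees with the closed form up to Monte Carlo error of order $1/\sqrt{n_\text{sim}}$.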
36,293
Characteristic Function of a Compound Poisson Process
Here's another approach that uses a common trick with characteristic functions to avoid having to work out the sums / integrals. I'll set $\lambda = 1$ without loss of generality, it simplifies notation and can be put back in in obvious ways in what follows. This means all the "$\lambda$"s below are not related to the $\lambda$ in the problem statement, until the very end where I include it again. First, note that the definition of $Y_t$ involves the sum of $X_i$ corresponding to $U_i \leq t$. This can be thought of as summing "observed" $X_i$, where an $X_i$ is "observed" with a probability $p = t/T$ that is the same across all $i$. The number of "observed" $X_i$, label it $n$, is therefore distributed Poisson$(pT)$, which is the same as Poisson$(t)$. The proof of this is straightforward. The number of observed $X_i$, label it $n$, conditional upon $N$ is clearly distributed Binomial$(N, t/T)$. Now, let's look at the characteristic function (ch.f.) of the Binomial distribution: $\phi_{n|N}(i\theta) = (1-p+p\text{e}^{i\theta})^N$ We will want to integrate out $N$ w.r.t. the Poisson distribution to get the ch.f. of $n$. The simple way to do this is to note that: $\phi_{n|N}(i\theta) = \exp(N*\log(1-p+p\text{e}^{i\theta}))$ Writing out the integration (summation) gives us: $\phi_n(i\theta) = \sum_N \exp(N*\log(1-p+p\text{e}^{i\theta})) p(N|\lambda)$ Looking at this, we can see this will have the same form as the ch.f. of a Poisson distribution ($\exp(\lambda(\text{e}^{i\theta}-1))$), just with $\log(1-p+p\text{e}^{i\theta})$ substituted in wherever $i\theta$ appears in the ch.f. Making this substitution gives us: $\phi_n(i\theta) = \exp(\lambda \text{e}^{\log(1-p+p\text{e}^{i\theta})} - \lambda)$ which quickly reduces to: $\phi_n(i\theta) = \exp(\lambda(1-p+p\text{e}^{i\theta}) - \lambda)$ which can be rearranged to: $\phi_n(i\theta) = \exp(p\lambda(\text{e}^{i\theta}-1))$ which is the ch.f. 
Substituting $t/T$ for $p$ and $T$ for $\lambda$ gives us the result. On to step 2. Now we have the ch.f. of the number of elements in the sum $n$. Let's define $\phi_Y(i\theta)$ as the ch.f. of $Y_t$, $\phi_\Sigma(i\theta)$ as the ch.f. of the sum of $n$ $X_i$ and $\phi_X(i\theta)$ as the ch.f. of a single $X_i$. Since the elements are i.i.d., we know that, conditional upon $n$, $\phi_\Sigma(i\theta) = \phi_X^n(i\theta) = \exp\{n \log \phi_X(i\theta)\}$ We can apply exactly the same approach as above to integrate out $n$: $\phi_Y(i\theta) = \sum_n \exp\{n \log \phi_X(i\theta)\} p(n | t)$ where we know that $p(n|t)$ is a Poisson distribution. This will be the ch.f. of a Poisson$(t)$ distribution with $\log \phi_X(i\theta)$ substituted for $i\theta$: $\phi_Y(i\theta) = \exp\{t(\phi_X(i\theta)-1)\}$ Adding the $\lambda$ from the original problem statement gives the answer: $\phi_Y(i\theta) = \exp\{t\lambda(\phi_X(i\theta)-1)\}$
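The thinning step (a binomially thinned Poisson count is again Poisson) is easy to check by simulation; this sketch, with illustrative parameter values, compares the empirical ch.f. of $n$ with the ch.f. of Poisson$(t)$:

```python
import numpy as np

rng = np.random.default_rng(2)
T, t, n_sim = 10.0, 3.0, 500_000
p = t / T

# N points in total, each kept ("observed") independently with prob p = t/T.
N = rng.poisson(T, size=n_sim)
n = rng.binomial(N, p)

# Empirical ch.f. of n vs the ch.f. of Poisson(pT) = Poisson(t).
theta = 0.9
empirical = np.exp(1j * theta * n).mean()
theoretical = np.exp(t * (np.exp(1j * theta) - 1))
print(abs(empirical - theoretical))
```

The mean of `n` also comes out as $pT = t$, consistent with the Poisson$(t)$ claim.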
36,294
Is there a multi-Gaussian version of the Mahalanobis distance ?
After a little research, I found what I was looking for, a paper called "Deriving cluster analytic distance functions from gaussian mixture models" which proposes an extension of the Mahalanobis distance in the context of multi-modal data (a GMM representation) using Fisher Kernel method and other techniques.
36,295
Is there a multi-Gaussian version of the Mahalanobis distance ?
I know this question is from several years ago, but I wanted to point out that it is possible (given some data) to estimate the Kullback-Leibler divergence between two GMMs. Depending on your goal, this may prove a very elegant approach. See: https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=0ahUKEwja2cjTu7DaAhWng1QKHRfwCYgQFgg_MAE&url=https%3A%2F%2Flabrosa.ee.columbia.edu%2F~dpwe%2Fpubs%2FJenECJ07-gmmdist.pdf&usg=AOvVaw3x4mQMf0wrdHZMMP_nwu_v and: https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwja2cjTu7DaAhWng1QKHRfwCYgQFgguMAA&url=https%3A%2F%2Fpdfs.semanticscholar.org%2F4f8d%2Feabc58014eae708c3e6ee27114535325067b.pdf&usg=AOvVaw0W11eUEeCobIk3zNa5TQzy
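A minimal sketch of the plain Monte Carlo estimator, $\mathrm{KL}(p\|q) \approx \frac{1}{n}\sum_i [\log p(x_i) - \log q(x_i)]$ with $x_i \sim p$, for two hypothetical one-dimensional mixtures (all parameters invented for the example; the linked papers discuss this and faster approximations):

```python
import numpy as np

rng = np.random.default_rng(3)

def gmm_logpdf(x, weights, means, stds):
    """Log-density of a 1-D Gaussian mixture at points x (log-sum-exp)."""
    x = np.asarray(x)[:, None]
    comp = (np.log(weights)
            - 0.5 * np.log(2 * np.pi) - np.log(stds)
            - 0.5 * ((x - means) / stds) ** 2)
    m = comp.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(comp - m).sum(axis=1, keepdims=True))).ravel()

def gmm_sample(n, weights, means, stds, rng):
    k = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[k], stds[k])

# Two mixtures p and q with invented parameters.
wp, mp, sp = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([0.5, 0.5])
wq, mq, sq = np.array([0.3, 0.7]), np.array([-1.0, 1.5]), np.array([0.6, 0.6])

# KL(p||q) estimated from samples drawn from p.
x = gmm_sample(200_000, wp, mp, sp, rng)
kl = (gmm_logpdf(x, wp, mp, sp) - gmm_logpdf(x, wq, mq, sq)).mean()
print(kl)
```

The estimate is non-negative up to Monte Carlo error, and the same two functions work for any pair of 1-D mixtures.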
36,296
Gaussian processes with finite sampling area
I think your two questions nail the issue down. It sounds like you can use GPs for some part of the problem but you might need to do more. To explain the issues I see, I will first translate my understanding of your problem into more mathematical language: The problem You are interested in some physical quantity $f(x)$ ("spectra"?) where $x$ is a point in some domain of the plane (your photo). $f$ is scalar i.e. a single number for each point of the plane. You can't observe $f$ directly, you can only observe some spatial average of it $F$ at some points $s_k$ of a grid. I.e. you observe $$ F(s_k) = \int_{D_k} f(x)dx.$$ The $D_k$ are the various overlapping disks in your photo. You did not mention it but maybe there is also some measurement noise in your observations, then you would need to add a noise term $\epsilon$ on the RHS. What about GPs? It is absolutely OK to fit a GP to your observations and you will get a valid GP approximation or interpolation of $F$. The GP really does not care that your $F$ is made from overlapping disks, it will note and reflect just the right amount of correlation for values sufficiently close to each other. The problem is of course that this will produce a GP for $F$, not one for $f$. And $F$ will not be a (good/reasonable) approximation of $f$ unless $f$ is more or less constant on the $D_k$. How to recover $f$? There are different ways to recover $f$ from $F$. What is doable or maybe even "best" depends on your specific requirements and the details of the problem. Since you know the mean function $m_F$ of $F$ explicitly you might try some form of numeric deconvolution. A more GP spirited way is to make the assumption that $f$ is a GP with mean function $m$ and covariance function $K$. Mathematical theory tells you then that $F$ is a GP as well with mean function $$m_F(s) = \int_{D_s}m(x)dx$$ and covariance $$ K_F(s_1,s_2) = \int_{D_{s_1}}\int_{D_{s_2}} K(x_1,x_2)dx_1dx_2.$$
The representer theorem for the mean of a GP tells you then that $m_F(s) = \sum_k \alpha_k K_F(s_k,s)$ and you can conclude by comparing the coefficients that $$ m(s) = \sum_k \alpha_k \int_{D_k} K(x,s) dx. $$ You can also derive the predictive distribution at a point $s^*$ by noting that $f(s^*)$ and the observations of $F$ have a joint normal distribution and you can condition on the observations of $F$. The formulas get complicated, but they are straightforward to derive (see Equations (8) and (9) of this paper). The problem with this is on the practical side: You either need to find the kernel $K$ from your choice of $K_F$, which is probably difficult, or you start with a $K$ such that (i) you can calculate $K_F$ AND (ii) $K_F$ works reasonably well for your observations AND (iii) $K$ makes sense as a model for your astronomical data.
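A rough numerical sketch of this second approach in one dimension, with intervals standing in for the disks, an RBF kernel chosen purely for illustration, crude quadrature for the integrals, and fake observations (not the paper's implementation):

```python
import numpy as np

def k_rbf(a, b, ell=0.15):
    """Squared-exponential kernel on scalar inputs (illustrative choice)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Five overlapping observation windows [c-h, c+h] standing in for the disks.
centers = np.linspace(0.1, 0.9, 5)
h = 0.15

# Crude quadrature nodes/weights inside each window.
q = np.linspace(-h, h, 21)
w = np.full(q.size, 2 * h / q.size)
pts = (centers[:, None] + q[None, :]).ravel()   # all quadrature nodes

# K_F[i,j] = double integral of K over windows i and j.
K = k_rbf(pts, pts).reshape(5, 21, 5, 21)
KF = (K * w[None, :, None, None] * w).sum(axis=(1, 3))

# Cross-covariance between f at test inputs and the window integrals.
xs = np.linspace(0.0, 1.0, 50)
Kx = (k_rbf(xs, pts).reshape(50, 5, 21) * w).sum(axis=2)

# Fake observations F (in reality: the instrument's disk-averaged values).
F = np.sin(2 * np.pi * centers) * 2 * h
alpha = np.linalg.solve(KF + 1e-8 * np.eye(5), F)
f_post = Kx @ alpha                              # posterior mean of f
```

By construction the recovered $f$ is consistent with the observations: integrating the posterior mean back over window $i$ gives $K_F[i,:]\,\alpha \approx F_i$ (up to the jitter), which is exactly the representer structure described above.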
36,297
Gaussian processes with finite sampling area
There is a topic in geostatistics called Exact Downscaling. The main goal here is to estimate a property at a smaller scale than the observations. These observations may or may not overlap (it does not really matter). Please take a look at this paper: http://www.ccgalberta.com/ccgresources/report07/2005-101-exact_reproduction.pdf In this paper, they show a method to downscale the observations using geostatistical techniques. They show that by correctly calculating the cross-covariances between different data scales (point vs block) the kriging estimate is still valid, such that the average of the estimated values at the smaller scale is equal to the larger-scale input data. Basically, in order to calculate the estimated values at any scale, you just need to calculate the covariance function between the input data and target scales, and the cross-correlations, correctly. In a Gaussian process, the assumption is that estimation is being done at the same scale as the input observations. So these are the steps:
1. Calculate the experimental variogram from your data.
2. Fit a variogram model to your experimental variogram. You may need to account for directional anisotropy here. This is the covariance function that in a GP is estimated by the maximum likelihood method.
3. Calculate all the covariances and cross-covariances between the input data and the target scale. There are numerical recipes for this step. The idea is that by discretizing the blocks into finite points, you can calculate the average covariance. Overlapping data should be taken into account here.
4. Perform kriging and calculate the estimated values.
GP is a closely related topic to geostatistics. However, geostatistics is not limited to Gaussian processes. There are many other methods to estimate or simulate a random process.
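Step 1 of the recipe can be sketched as follows. The field and all parameters are invented for illustration, and the lag binning is the crudest possible (isotropic, no directional anisotropy): the experimental semivariogram is $\gamma(h) = \tfrac{1}{2}\,\mathbb{E}[(Z(x+h)-Z(x))^2]$ estimated by averaging squared increments within distance bins.

```python
import numpy as np

rng = np.random.default_rng(4)

# Scattered 2-D samples of a smooth field plus a small nugget (toy data).
xy = rng.random((300, 2))
z = np.sin(3 * xy[:, 0]) + np.cos(3 * xy[:, 1]) + 0.1 * rng.normal(size=300)

# All pairwise lag distances and half squared increments.
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
g = 0.5 * (z[:, None] - z[None, :]) ** 2
iu = np.triu_indices(300, k=1)      # each pair once, no self-pairs
d, g = d[iu], g[iu]

# Bin by lag distance: experimental semivariogram gamma(h).
edges = np.linspace(0, 0.7, 8)
which = np.digitize(d, edges) - 1
gamma = np.array([g[which == b].mean() for b in range(len(edges) - 1)])
print(gamma)
```

For a spatially continuous field the binned values rise from a small nugget at short lags toward a sill, which is the shape a variogram model would then be fitted to in step 2.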
36,298
Is PCA a non-linear transform?
I think the confusion is due to what exactly is meant here to be linear or non-linear. Using the notation of your quote, operation $w(X)$ maps a data matrix $X$ into a projector $P_k$ on the first $k$ principal axes of $X$. Let us be completely clear about the notation here; for simplicity let us fix $k=1$ and assume that $X$ is centered. Then $X\in\mathbb R^{n\times p}$ and $P\in \mathbb P^p \subset \mathbb R^{p\times p}$, where by $\mathbb P^p$ I mean the space of all matrices of the form $P=\mathbf{uu}^\top$ with $\mathbf u\in \mathbb R^p$ and $\|\mathbf u\|=1$. Now: Operation $w:\mathbb R^{n\times p} \to \mathbb P^p$ is non-linear. Operation $P:\mathbb R^p \to \mathbb R$ is linear. The quote talks about the $w(\cdot)$ function; it transforms a data matrix into a projection operator. It is non-linear. Your script investigates the $P(\cdot)$ function; it transforms a high-dimensional vector into a low-dimensional PCA projection, given a fixed dataset. It is linear. So $w(\cdot)$ is a non-linear mapping into linear projections. No contradiction.
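The distinction is easy to check numerically: for a fixed dataset the projection is linear in the vector being projected, while the fitted axis $w(X)$ is not a linear function of the data (for instance $w(2X) = w(X)$, not $2\,w(X)$). A small sketch, with `first_axis` standing in for $w(\cdot)$ with $k=1$:

```python
import numpy as np

rng = np.random.default_rng(5)

def first_axis(X):
    """Leading principal axis u of the data matrix X, i.e. w(X) up to sign."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    u = Vt[0]
    return u * np.sign(u[0])   # fix the sign so axes are comparable

X = rng.normal(size=(200, 3)) @ np.diag([3.0, 1.0, 0.3])
u = first_axis(X)

# 1) P(x) = u.x is linear in x for the FIXED fitted axis u.
a, b = rng.normal(size=3), rng.normal(size=3)
assert np.isclose(u @ (2 * a + 5 * b), 2 * (u @ a) + 5 * (u @ b))

# 2) w(.) is NOT linear in the data: scaling X leaves the axis unchanged,
#    so w(2X) = w(X) rather than 2 * w(X).
assert np.allclose(first_axis(2 * X), u)
```

Both assertions pass: the first shows $P(\cdot)$ is linear, the second exhibits a failure of homogeneity for $w(\cdot)$, so $w$ cannot be linear.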
36,299
Neuron saturation occurs only in last layer or all layers?
It seems to me the author didn't mean that it is the only reason for learning slowdown. Sigmoid activation functions in hidden layers are indeed likely to cause vanishing gradients, but for a sigmoid in the output layer, the slowdown can be avoided by using the cross-entropy loss. I think the discussion of the output layer and saturation in that chapter is aimed at answering the question: when should we use the cross-entropy instead of the quadratic cost? The answer is that a sigmoid output goes well with the cross-entropy loss, and a linear output goes well with the quadratic loss.
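The output-layer claim is easy to see numerically: for a saturated sigmoid output neuron, the quadratic cost's gradient w.r.t. the weighted input carries a $\sigma'(z)$ factor, while with cross-entropy that factor cancels (a sketch with made-up numbers):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Saturated output neuron: large weighted input z, badly wrong target y.
z, y = 8.0, 0.0
a = sigmoid(z)                     # approx. 1, far from the target

# Quadratic cost C = (a - y)^2 / 2:  dC/dz = (a - y) * sigma'(z) -> vanishes.
grad_quadratic = (a - y) * a * (1 - a)

# Cross-entropy cost: dC/dz = a - y  -> the sigma'(z) factor cancels.
grad_xent = a - y

print(grad_quadratic, grad_xent)
```

Even though the neuron is maximally wrong, the quadratic gradient is tiny (of order $\sigma'(8) \approx 3\times10^{-4}$) while the cross-entropy gradient stays close to 1, so learning does not slow down at the output layer.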
36,300
Neuron saturation occurs only in last layer or all layers?
From what I understand, using the cross-entropy cost function instead of the quadratic cost function will only help you avoid the vanishing gradient caused by the $\sigma'$ term in the output layer. If we look at the backprop equations, we see that the $\sigma'$ terms are multiplied into the gradient computations for every layer except the output layer, irrespective of the cost function.