idx
int64
1
56k
question
stringlengths
15
155
answer
stringlengths
2
29.2k
question_cut
stringlengths
15
100
answer_cut
stringlengths
2
200
conversation
stringlengths
47
29.3k
conversation_cut
stringlengths
47
301
32,801
Control Function Approach and Bootstrap
Cameron and Trivedi - Microeconometrics using Stata discuss different bootstrap techniques and the show Stata code files, for example, for Heckman’s two-step estimator. Regarding question 2. : The bootstrap is indeed applied to both the first-stage and second-stage equation. You can also bootstrap only the second stage but then you have to make further assumptions about the distribution of your predicted variables (parametric bootstrap). Said so, it is much simpler to do the two-stage bootstrap. Regarding question 1. : You can find code examples (in Stata) for different examples here (2SLS) oder here (Heckman) Here is also a small overview which is free and discusses some of the topics you can also find in the Cameron and Trivedi book. I have to say, I think the topic is often confusing, in particular if you have several first-stages, I have also a question open here, yet without answers. Update: Sorry, I forgot to comment on the case of panel data. I would use cluster robust standard error in each stage of the two-stage bootstrap in this case. PS: Stata has a quite elaborated help file on bootstrapping, you should also check that.
Control Function Approach and Bootstrap
Cameron and Trivedi - Microeconometrics using Stata discuss different bootstrap techniques and the show Stata code files, for example, for Heckman’s two-step estimator. Regarding question 2. : The bo
Control Function Approach and Bootstrap Cameron and Trivedi - Microeconometrics using Stata discuss different bootstrap techniques and the show Stata code files, for example, for Heckman’s two-step estimator. Regarding question 2. : The bootstrap is indeed applied to both the first-stage and second-stage equation. You can also bootstrap only the second stage but then you have to make further assumptions about the distribution of your predicted variables (parametric bootstrap). Said so, it is much simpler to do the two-stage bootstrap. Regarding question 1. : You can find code examples (in Stata) for different examples here (2SLS) oder here (Heckman) Here is also a small overview which is free and discusses some of the topics you can also find in the Cameron and Trivedi book. I have to say, I think the topic is often confusing, in particular if you have several first-stages, I have also a question open here, yet without answers. Update: Sorry, I forgot to comment on the case of panel data. I would use cluster robust standard error in each stage of the two-stage bootstrap in this case. PS: Stata has a quite elaborated help file on bootstrapping, you should also check that.
Control Function Approach and Bootstrap Cameron and Trivedi - Microeconometrics using Stata discuss different bootstrap techniques and the show Stata code files, for example, for Heckman’s two-step estimator. Regarding question 2. : The bo
32,802
Why is the marginal likelihood difficult/intractable to estimate?
Here is an answer by example. Suppose you have the following hierarchical model $$ Y_{ig} \stackrel{ind}{\sim} N(\theta_g,1) \quad \theta_g \stackrel{ind}{\sim} N(\mu,\tau^2) \quad \mu|\tau^2 \sim N(m,\tau^2/k) \quad \tau^2 \sim IG(a,b) $$ for groups $g=1,\ldots,G$ and observations within a group $i=1,\ldots,n_g$ and known values $m,k,a,$ and $b$. With $$y = (y_{1,1},\ldots,y_{n_1,1},y_{1,2},\ldots,y_{n_2,2},\ldots,y_{1,G},\ldots,y_{n_G,G}),$$ the marginal likelihood is $$ p(y) = \int \cdots \int \prod_{g=1}^G \left[\prod_{i=1}^{n_g} N(y_{ig};\theta_g,1) \right] N(\theta_g; \mu,\tau^2) d\theta_1 \cdots d\theta_G d\mu d\tau^2.$$ The dimension of the integral is $G+2$ and if $G$ is large, then this a high dimensional integral. Most numerical integration techniques will need an extreme number of samples or iterations to obtain a reasonable approximation to this integral. This integral happens to have a marginal likelihood in closed form, so you can evaluate how well a numeric integration technique can estimate the marginal likelihood. To understand why calculating the marginal likelihood is difficult, you could start simple, e.g. having a single observation, having a single group, having $\mu$ and $\sigma^2$ be known, etc. You can slowly make the problem more and more difficult and see how the numerical integration techniques fare relative to the truth. You will notice that they get worse and worse, i.e. they will need more and more samples or iterations to obtain the same accuracy, as the dimension of the problem, i.e. $G$, increases. Finally, let $Y_{ig} \stackrel{ind}{\sim} Po(e^{\theta_g})$ and now you have a marginal likelihood with no closed form. Based on your experience when you knew the truth, how much are you going to believe a numerical estimate when you don't know the truth? I'm guessing you aren't going to have much confidence in the numeric estimate.
Why is the marginal likelihood difficult/intractable to estimate?
Here is an answer by example. Suppose you have the following hierarchical model $$ Y_{ig} \stackrel{ind}{\sim} N(\theta_g,1) \quad \theta_g \stackrel{ind}{\sim} N(\mu,\tau^2) \quad \mu|\tau^2 \sim N(m
Why is the marginal likelihood difficult/intractable to estimate? Here is an answer by example. Suppose you have the following hierarchical model $$ Y_{ig} \stackrel{ind}{\sim} N(\theta_g,1) \quad \theta_g \stackrel{ind}{\sim} N(\mu,\tau^2) \quad \mu|\tau^2 \sim N(m,\tau^2/k) \quad \tau^2 \sim IG(a,b) $$ for groups $g=1,\ldots,G$ and observations within a group $i=1,\ldots,n_g$ and known values $m,k,a,$ and $b$. With $$y = (y_{1,1},\ldots,y_{n_1,1},y_{1,2},\ldots,y_{n_2,2},\ldots,y_{1,G},\ldots,y_{n_G,G}),$$ the marginal likelihood is $$ p(y) = \int \cdots \int \prod_{g=1}^G \left[\prod_{i=1}^{n_g} N(y_{ig};\theta_g,1) \right] N(\theta_g; \mu,\tau^2) d\theta_1 \cdots d\theta_G d\mu d\tau^2.$$ The dimension of the integral is $G+2$ and if $G$ is large, then this a high dimensional integral. Most numerical integration techniques will need an extreme number of samples or iterations to obtain a reasonable approximation to this integral. This integral happens to have a marginal likelihood in closed form, so you can evaluate how well a numeric integration technique can estimate the marginal likelihood. To understand why calculating the marginal likelihood is difficult, you could start simple, e.g. having a single observation, having a single group, having $\mu$ and $\sigma^2$ be known, etc. You can slowly make the problem more and more difficult and see how the numerical integration techniques fare relative to the truth. You will notice that they get worse and worse, i.e. they will need more and more samples or iterations to obtain the same accuracy, as the dimension of the problem, i.e. $G$, increases. Finally, let $Y_{ig} \stackrel{ind}{\sim} Po(e^{\theta_g})$ and now you have a marginal likelihood with no closed form. Based on your experience when you knew the truth, how much are you going to believe a numerical estimate when you don't know the truth? I'm guessing you aren't going to have much confidence in the numeric estimate.
Why is the marginal likelihood difficult/intractable to estimate? Here is an answer by example. Suppose you have the following hierarchical model $$ Y_{ig} \stackrel{ind}{\sim} N(\theta_g,1) \quad \theta_g \stackrel{ind}{\sim} N(\mu,\tau^2) \quad \mu|\tau^2 \sim N(m
32,803
How does the boot package in R handle collecting bootstrap samples if strata are not specified but the function separates the dataset by strata?
Your understanding 1) and 2) are correct; the strata option makes bootstrap samples within each strata independently, whereas the non-strata option makes bootstrap samples of all the data meaning that bootstrap samples will not contain the same number of samples from within each strata. As to why they are similar; it is just a matter of sample size. When running the above example (seed 42 and 10,000 bootstrap repli), I obtain Standard error of stratified bootstrap estimator: 0.8502739 Standard error of estimator: 0.8691592 After increasing the sample size to 10 (not 30), I obtain (with seed 42) Standard error of stratified bootstrap estimator: 1.032356 Standard error of estimator: 1.131476
How does the boot package in R handle collecting bootstrap samples if strata are not specified but t
Your understanding 1) and 2) are correct; the strata option makes bootstrap samples within each strata independently, whereas the non-strata option makes bootstrap samples of all the data meaning that
How does the boot package in R handle collecting bootstrap samples if strata are not specified but the function separates the dataset by strata? Your understanding 1) and 2) are correct; the strata option makes bootstrap samples within each strata independently, whereas the non-strata option makes bootstrap samples of all the data meaning that bootstrap samples will not contain the same number of samples from within each strata. As to why they are similar; it is just a matter of sample size. When running the above example (seed 42 and 10,000 bootstrap repli), I obtain Standard error of stratified bootstrap estimator: 0.8502739 Standard error of estimator: 0.8691592 After increasing the sample size to 10 (not 30), I obtain (with seed 42) Standard error of stratified bootstrap estimator: 1.032356 Standard error of estimator: 1.131476
How does the boot package in R handle collecting bootstrap samples if strata are not specified but t Your understanding 1) and 2) are correct; the strata option makes bootstrap samples within each strata independently, whereas the non-strata option makes bootstrap samples of all the data meaning that
32,804
How does the boot package in R handle collecting bootstrap samples if strata are not specified but the function separates the dataset by strata?
This is how I interpreted the correct answer from svendvn above in case anybody wishes to see the code: require(boot) # relative yield takes a matrix or dataframe and finds the ratio # of the means: treatmentMean/controlMean. # data structure: # first column is strata, control = 1 and treatment = 2 # second column is response, or the data to be bootstrapped rel.yield <- function(D,i) { trt <- D[i,1] resp <- D[i,2] mean(resp[trt==2]) / mean(resp[trt==1]) } # some data that has a true rel.yield of 10 set.seed(42) nTrt <- 15 sub.pop <- matrix(data = c(rep(1,nTrt),rep(2,nTrt),rnorm(nTrt,2,1),rnorm(nTrt,20,1)), nrow = nTrt*2, ncol = 2, dimnames = list((1:(nTrt*2)),c('trt','resp'))) (estRelYeild <- mean(sub.pop[sub.pop[,1]==2,2])/mean(sub.pop[sub.pop[,1]==1,2])) # with strata specified (b <- boot(sub.pop, rel.yield, R = 1E4, strata = sub.pop[,1])) #without strata specified (c <- boot(sub.pop, rel.yield, R = 1E4)) # After decreasing the sample size to 10 (not 30), I obtain (with seed 42) set.seed(42) nTrt <- 5 sub.pop <- matrix(data = c(rep(1,nTrt),rep(2,nTrt),rnorm(nTrt,2,1),rnorm(nTrt,20,1)), nrow = nTrt*2, ncol = 2, dimnames = list((1:(nTrt*2)),c('trt','resp'))) (estRelYeild <- mean(sub.pop[sub.pop[,1]==2,2])/mean(sub.pop[sub.pop[,1]==1,2])) # with strata specified (b <- boot(sub.pop, rel.yield, R = 1E4, strata = sub.pop[,1])) #without strata specified (c <- boot(sub.pop, rel.yield, R = 1E4))
How does the boot package in R handle collecting bootstrap samples if strata are not specified but t
This is how I interpreted the correct answer from svendvn above in case anybody wishes to see the code: require(boot) # relative yield takes a matrix or dataframe and finds the ratio # of the means: t
How does the boot package in R handle collecting bootstrap samples if strata are not specified but the function separates the dataset by strata? This is how I interpreted the correct answer from svendvn above in case anybody wishes to see the code: require(boot) # relative yield takes a matrix or dataframe and finds the ratio # of the means: treatmentMean/controlMean. # data structure: # first column is strata, control = 1 and treatment = 2 # second column is response, or the data to be bootstrapped rel.yield <- function(D,i) { trt <- D[i,1] resp <- D[i,2] mean(resp[trt==2]) / mean(resp[trt==1]) } # some data that has a true rel.yield of 10 set.seed(42) nTrt <- 15 sub.pop <- matrix(data = c(rep(1,nTrt),rep(2,nTrt),rnorm(nTrt,2,1),rnorm(nTrt,20,1)), nrow = nTrt*2, ncol = 2, dimnames = list((1:(nTrt*2)),c('trt','resp'))) (estRelYeild <- mean(sub.pop[sub.pop[,1]==2,2])/mean(sub.pop[sub.pop[,1]==1,2])) # with strata specified (b <- boot(sub.pop, rel.yield, R = 1E4, strata = sub.pop[,1])) #without strata specified (c <- boot(sub.pop, rel.yield, R = 1E4)) # After decreasing the sample size to 10 (not 30), I obtain (with seed 42) set.seed(42) nTrt <- 5 sub.pop <- matrix(data = c(rep(1,nTrt),rep(2,nTrt),rnorm(nTrt,2,1),rnorm(nTrt,20,1)), nrow = nTrt*2, ncol = 2, dimnames = list((1:(nTrt*2)),c('trt','resp'))) (estRelYeild <- mean(sub.pop[sub.pop[,1]==2,2])/mean(sub.pop[sub.pop[,1]==1,2])) # with strata specified (b <- boot(sub.pop, rel.yield, R = 1E4, strata = sub.pop[,1])) #without strata specified (c <- boot(sub.pop, rel.yield, R = 1E4))
How does the boot package in R handle collecting bootstrap samples if strata are not specified but t This is how I interpreted the correct answer from svendvn above in case anybody wishes to see the code: require(boot) # relative yield takes a matrix or dataframe and finds the ratio # of the means: t
32,805
PLS (partial least squares) weights, loadings, and scores interpretations
UPDATE: Read on this a bit more for a project I'm working on, and I have some links to share that may be helpful. The "weights" in a PLS model are used to translate E_a (the deflated X matrices) to a column in the scores matrix t_a. Deflation occurs after each step of the algorithm by subtracting the variance accounted for by the new component. Loadings on the other hand, translate T to X. This is a fantastic reference and goes into much more detail: https://learnche.org/pid/latent-variable-modelling/projection-to-latent-structures/how-the-pls-model-is-calculated I also read through the plsr vignette several times. It's R but the concepts should translate: https://cran.r-project.org/web/packages/pls/vignettes/pls-manual.pdf ORIGINAL ANSWER: http://www.eigenvector.com/Docs/Wise_pls_properties.pdf According to this resource, the weights are required to "maintain orthogonal scores." There are some nice visualizations starting on slide 35.
PLS (partial least squares) weights, loadings, and scores interpretations
UPDATE: Read on this a bit more for a project I'm working on, and I have some links to share that may be helpful. The "weights" in a PLS model are used to translate E_a (the deflated X matrices) to a
PLS (partial least squares) weights, loadings, and scores interpretations UPDATE: Read on this a bit more for a project I'm working on, and I have some links to share that may be helpful. The "weights" in a PLS model are used to translate E_a (the deflated X matrices) to a column in the scores matrix t_a. Deflation occurs after each step of the algorithm by subtracting the variance accounted for by the new component. Loadings on the other hand, translate T to X. This is a fantastic reference and goes into much more detail: https://learnche.org/pid/latent-variable-modelling/projection-to-latent-structures/how-the-pls-model-is-calculated I also read through the plsr vignette several times. It's R but the concepts should translate: https://cran.r-project.org/web/packages/pls/vignettes/pls-manual.pdf ORIGINAL ANSWER: http://www.eigenvector.com/Docs/Wise_pls_properties.pdf According to this resource, the weights are required to "maintain orthogonal scores." There are some nice visualizations starting on slide 35.
PLS (partial least squares) weights, loadings, and scores interpretations UPDATE: Read on this a bit more for a project I'm working on, and I have some links to share that may be helpful. The "weights" in a PLS model are used to translate E_a (the deflated X matrices) to a
32,806
Regression on the unit disk starting from "uniformly spaced" samples
I think you are on the right track in thinking about something like Zernike polynomials. As noted in the answer by jwimberly, these are an example of a system of orthogonal basis functions on a disk. I am not familiar with Zernike polynomials, but many other families of orthogonal functions (including Bessel functions) arise naturally in classical mathematical physics as eigenfunctions for certain partial differential equations (at the time of this writing, the animation at the top of that link even shows an example of a vibrating drum head). Two questions come to my mind. First, if all you are after is the radial profile ($\theta$ averaged), then how much constraint on the spatial pattern do you need? Second, what types of variability occur in the spatio-temporal data? In terms of the first question, there are two concerns that come to mind. Due to the polar coordinates, the support-area for each sensor has a trend with $r$. The second concern would be the possibility of aliasing, essentially a mis-alignment of your sensors relative to the phase of the pattern (to use a Fourier/Bessel analogy). Note that aliasing will likely be the primary uncertainty in constraining the peak temperatures (i.e. $T_{95}$). In terms of this second question, data variability could actually help with any aliasing issues, essentially allowing any mis-alignment to average out over the different measurements. (Assuming no systematic bias ... but that would be a problem for any method, without e.g. a physical model to give more information). So one possibility would be to define your spatial orthogonal functions purely at the sensor locations. These "Empirical Orthogonal Functions" could be computed via PCA on your spatiotemporal data matrix. (Possibly you could use some weighting to account for the variable sensor support areas, but given the uniform polar grid and target of radial averages, this may not be required.) Note that if there is any physical modeling data available for "expected" variations in the temperature, available on a dense spatiotemporal computational grid, then the same PCA procedure could be applied to that data to derive orthogonal functions. (This would typically called "Proper Orthogonal Decomposition" in engineering, where it is used for model reduction, e.g. an expensive computational fluid dynamics model can be distilled for use in further design activities.) A final comment, if you were to weight the sensor data by support area (i.e. polar cell size), this would be a type of diagonal covariance, in framework of GLS. (That would apply to your prediction problem more, although weighted PCA would be closely related.) I hope this helps! Update: Your new diagram of the sensor distribution changes things considerably in my view. If you want to estimate temperatures over the disk interior, you will need a much more informative prior than simply "set of orthogonal functions on the unit disk". There is just too little information in the sensor data. If you indeed want to estimate the spatial temperature variation over the disk, the only reasonable way I can see would be to treat the problem as one of data assimilation. Here you would need to at least constrain the parametric form of the spatial distribution based on some physics-based considerations (these could be from simulations, or could be from related data in systems with similar dynamics). I do not know your particular application, but if it is something like this, then I would imagine there is an extensive engineering literature that you could draw upon to choose appropriate prior constraints. (For that sort of detailed domain knowledge, this is probably not the best StackExchange site to ask on.)
Regression on the unit disk starting from "uniformly spaced" samples
I think you are on the right track in thinking about something like Zernike polynomials. As noted in the answer by jwimberly, these are an example of a system of orthogonal basis functions on a disk.
Regression on the unit disk starting from "uniformly spaced" samples I think you are on the right track in thinking about something like Zernike polynomials. As noted in the answer by jwimberly, these are an example of a system of orthogonal basis functions on a disk. I am not familiar with Zernike polynomials, but many other families of orthogonal functions (including Bessel functions) arise naturally in classical mathematical physics as eigenfunctions for certain partial differential equations (at the time of this writing, the animation at the top of that link even shows an example of a vibrating drum head). Two questions come to my mind. First, if all you are after is the radial profile ($\theta$ averaged), then how much constraint on the spatial pattern do you need? Second, what types of variability occur in the spatio-temporal data? In terms of the first question, there are two concerns that come to mind. Due to the polar coordinates, the support-area for each sensor has a trend with $r$. The second concern would be the possibility of aliasing, essentially a mis-alignment of your sensors relative to the phase of the pattern (to use a Fourier/Bessel analogy). Note that aliasing will likely be the primary uncertainty in constraining the peak temperatures (i.e. $T_{95}$). In terms of this second question, data variability could actually help with any aliasing issues, essentially allowing any mis-alignment to average out over the different measurements. (Assuming no systematic bias ... but that would be a problem for any method, without e.g. a physical model to give more information). So one possibility would be to define your spatial orthogonal functions purely at the sensor locations. These "Empirical Orthogonal Functions" could be computed via PCA on your spatiotemporal data matrix. (Possibly you could use some weighting to account for the variable sensor support areas, but given the uniform polar grid and target of radial averages, this may not be required.) Note that if there is any physical modeling data available for "expected" variations in the temperature, available on a dense spatiotemporal computational grid, then the same PCA procedure could be applied to that data to derive orthogonal functions. (This would typically called "Proper Orthogonal Decomposition" in engineering, where it is used for model reduction, e.g. an expensive computational fluid dynamics model can be distilled for use in further design activities.) A final comment, if you were to weight the sensor data by support area (i.e. polar cell size), this would be a type of diagonal covariance, in framework of GLS. (That would apply to your prediction problem more, although weighted PCA would be closely related.) I hope this helps! Update: Your new diagram of the sensor distribution changes things considerably in my view. If you want to estimate temperatures over the disk interior, you will need a much more informative prior than simply "set of orthogonal functions on the unit disk". There is just too little information in the sensor data. If you indeed want to estimate the spatial temperature variation over the disk, the only reasonable way I can see would be to treat the problem as one of data assimilation. Here you would need to at least constrain the parametric form of the spatial distribution based on some physics-based considerations (these could be from simulations, or could be from related data in systems with similar dynamics). I do not know your particular application, but if it is something like this, then I would imagine there is an extensive engineering literature that you could draw upon to choose appropriate prior constraints. (For that sort of detailed domain knowledge, this is probably not the best StackExchange site to ask on.)
Regression on the unit disk starting from "uniformly spaced" samples I think you are on the right track in thinking about something like Zernike polynomials. As noted in the answer by jwimberly, these are an example of a system of orthogonal basis functions on a disk.
32,807
Regression on the unit disk starting from "uniformly spaced" samples
The Zernlike polynomials don't sound like a bad choice, since they already have $r$ and $\theta$ dependence and orthogonality cooked in. However, since you're studying temperature, an arguably more appropriate and better known choice would be the Bessel functions. These come up in the study of heat flow in cylindrical objects / coordinate systems, and so there's a chance that they are physically more appropriate. The n-th Bessel function would give the radial dependence associated with a corresponding trigonometric function for the polar dependence; you can find the details in many physics and PDE textbooks.
Regression on the unit disk starting from "uniformly spaced" samples
The Zernlike polynomials don't sound like a bad choice, since they already have $r$ and $\theta$ dependence and orthogonality cooked in. However, since you're studying temperature, an arguably more ap
Regression on the unit disk starting from "uniformly spaced" samples The Zernlike polynomials don't sound like a bad choice, since they already have $r$ and $\theta$ dependence and orthogonality cooked in. However, since you're studying temperature, an arguably more appropriate and better known choice would be the Bessel functions. These come up in the study of heat flow in cylindrical objects / coordinate systems, and so there's a chance that they are physically more appropriate. The n-th Bessel function would give the radial dependence associated with a corresponding trigonometric function for the polar dependence; you can find the details in many physics and PDE textbooks.
Regression on the unit disk starting from "uniformly spaced" samples The Zernlike polynomials don't sound like a bad choice, since they already have $r$ and $\theta$ dependence and orthogonality cooked in. However, since you're studying temperature, an arguably more ap
32,808
Distribution of quadratic form of normals
Here is an attempt: Consider $Z=X-Y$ such that $X \sim \chi^2(\alpha)$ and $Y \sim \chi^2(\beta)$ with $\alpha \geq \beta$ $$ \mathcal{M}_X(t) = \left(1-2 \, t\right)^{-\alpha/2} $$ $$ \mathcal{M}_Y(t) = \left(1-2 \, t\right)^{-\beta/2} $$ $$ \mathcal{M}_Z(t) = M_X(t)M_Y(-t) = \left(1-2 \, t\right)^{-\alpha/2}\left(1+2 \, t\right)^{-\beta/2} = (1-4t^2)^{-\beta/2} $$ $$ \mathcal{M}_Z(t) = (1-2t)^{-n/2}(1+2t)^{-1/2} = (1-4t^2)^{-1/2} (1-2t)^{-(n-1)/2} $$ I am not sure if it can be reduced to a fathomable MGF.
Distribution of quadratic form of normals
Here is an attempt: Consider $Z=X-Y$ such that $X \sim \chi^2(\alpha)$ and $Y \sim \chi^2(\beta)$ with $\alpha \geq \beta$ $$ \mathcal{M}_X(t) = \left(1-2 \, t\right)^{-\alpha/2} $$ $$ \mathcal
Distribution of quadratic form of normals Here is an attempt: Consider $Z=X-Y$ such that $X \sim \chi^2(\alpha)$ and $Y \sim \chi^2(\beta)$ with $\alpha \geq \beta$ $$ \mathcal{M}_X(t) = \left(1-2 \, t\right)^{-\alpha/2} $$ $$ \mathcal{M}_Y(t) = \left(1-2 \, t\right)^{-\beta/2} $$ $$ \mathcal{M}_Z(t) = M_X(t)M_Y(-t) = \left(1-2 \, t\right)^{-\alpha/2}\left(1+2 \, t\right)^{-\beta/2} = (1-4t^2)^{-\beta/2} $$ $$ \mathcal{M}_Z(t) = (1-2t)^{-n/2}(1+2t)^{-1/2} = (1-4t^2)^{-1/2} (1-2t)^{-(n-1)/2} $$ I am not sure if it can be reduced to a fathomable MGF.
Distribution of quadratic form of normals Here is an attempt: Consider $Z=X-Y$ such that $X \sim \chi^2(\alpha)$ and $Y \sim \chi^2(\beta)$ with $\alpha \geq \beta$ $$ \mathcal{M}_X(t) = \left(1-2 \, t\right)^{-\alpha/2} $$ $$ \mathcal
32,809
What breaks the comparibility of models with respect to the AIC?
If you have two models $M_1$ and $M_2$ for a sample $(y_1,\dots,y_n)$, then, as long as the models are sensible, you can employ AIC to compare them. Of course, this does not mean that AIC will select the model that is closest to the truth, among the competitors, since AIC is based on asymptotic results. In an extreme scenario, suppose that you want to compare two models, one with 1 single parameter, and another one with 100 parameters, and the sample size is $101$. Then, it is expected to observe a very low precision in the estimation of the model with 100 parameters, while in the model with 1 parameter it is likely that the parameter is accurately estimated. This is one of the arguments against using AIC for comparing models for which the likelihood estimators have very different convergence rates. This may happen even in models with the same number of parameters. Yes, you can use AIC to compare two models where you transformed the response variable in one of them as long as the model still makes sense. However, this is not always the case. If you have a linear model $$y_i = x_i^T\beta + e_i,$$ where $e_i\sim N(0,\sigma)$, this implies that the variable $y_i$ can take any real value. Consequently, a log transformation makes no sense from a theoretical perspective, even if the sample only contains positive values. This is known as stepwise AIC variable selection. Already implemented in the R command stepAIC(). Again, as long as it makes sense to model the data with that sort of models. Some interesting discussion on the use of AIC can be found here: AIC MYTHS AND MISUNDERSTANDINGS
What breaks the comparibility of models with respect to the AIC?
If you have two models $M_1$ and $M_2$ for a sample $(y_1,\dots,y_n)$, then, as long as the models are sensible, you can employ AIC to compare them. Of course, this does not mean that AIC will select
What breaks the comparibility of models with respect to the AIC? If you have two models $M_1$ and $M_2$ for a sample $(y_1,\dots,y_n)$, then, as long as the models are sensible, you can employ AIC to compare them. Of course, this does not mean that AIC will select the model that is closest to the truth, among the competitors, since AIC is based on asymptotic results. In an extreme scenario, suppose that you want to compare two models, one with 1 single parameter, and another one with 100 parameters, and the sample size is $101$. Then, it is expected to observe a very low precision in the estimation of the model with 100 parameters, while in the model with 1 parameter it is likely that the parameter is accurately estimated. This is one of the arguments against using AIC for comparing models for which the likelihood estimators have very different convergence rates. This may happen even in models with the same number of parameters. Yes, you can use AIC to compare two models where you transformed the response variable in one of them as long as the model still makes sense. However, this is not always the case. If you have a linear model $$y_i = x_i^T\beta + e_i,$$ where $e_i\sim N(0,\sigma)$, this implies that the variable $y_i$ can take any real value. Consequently, a log transformation makes no sense from a theoretical perspective, even if the sample only contains positive values. This is known as stepwise AIC variable selection. Already implemented in the R command stepAIC(). Again, as long as it makes sense to model the data with that sort of models. Some interesting discussion on the use of AIC can be found here: AIC MYTHS AND MISUNDERSTANDINGS
What breaks the comparibility of models with respect to the AIC? If you have two models $M_1$ and $M_2$ for a sample $(y_1,\dots,y_n)$, then, as long as the models are sensible, you can employ AIC to compare them. Of course, this does not mean that AIC will select
32,810
Determine best ARIMA model with AICc and RMSE
The AIC should be calculated from residuals using models that control for intervention administration, otherwise the intervention effects are taken to be Gaussian noise, underestimating the actual model's autoregressive effect and thus miscalculates the model parameters which leads directly to an incorrect error sum of squares and ultimately an incorrect AIC. Most SE responders do not point out this assumption when they promote simple descriptive statistics such as AIC and RMSE. The quick answer is you should use neither unless you are addressing the question of identifying and remedying the effects of unspecified deterministic/exogenous structure See @AdamO's insightful response to this question Interrupted Time Series Analysis - ARIMAX for High Frequency Biological Data? "The correlogram should be calculated from residuals using a model that controls for intervention administration, otherwise the intervention effects are taken to be Gaussian noise, underestimating the actual autoregressive effect."
Determine best ARIMA model with AICc and RMSE
The AIC should be calculated from residuals using models that control for intervention administration, otherwise the intervention effects are taken to be Gaussian noise, underestimating the actual mod
Determine best ARIMA model with AICc and RMSE The AIC should be calculated from residuals using models that control for intervention administration, otherwise the intervention effects are taken to be Gaussian noise, underestimating the actual model's autoregressive effect and thus miscalculates the model parameters which leads directly to an incorrect error sum of squares and ultimately an incorrect AIC. Most SE responders do not point out this assumption when they promote simple descriptive statistics such as AIC and RMSE. The quick answer is you should use neither unless you are addressing the question of identifying and remedying the effects of unspecified deterministic/exogenous structure See @AdamO's insightful response to this question Interrupted Time Series Analysis - ARIMAX for High Frequency Biological Data? "The correlogram should be calculated from residuals using a model that controls for intervention administration, otherwise the intervention effects are taken to be Gaussian noise, underestimating the actual autoregressive effect."
Determine best ARIMA model with AICc and RMSE The AIC should be calculated from residuals using models that control for intervention administration, otherwise the intervention effects are taken to be Gaussian noise, underestimating the actual mod
32,811
Determine best ARIMA model with AICc and RMSE
AIC and RMSE are inter-related but they represent different objectives in choosing the best model. RMSE/MAPE are measures of error and disregards the "complexity" of the model. Optimizing for RMSE/MAPE can give you accurate results, but could lead to overly complex model that captures too much noise in the data, otherwise known as overfitting. This is where AIC/AICc and their relative BIC comes in. They take the error term and add a penalty related to the number of predictors used in the model, such that more complex models are less favorable and allows you to strike a balance between a complex but accurate model, vs a simpler but still reasonably accurate model. It ultimately comes down to the purpose of your model. If having the most accurate prediction matters then you might simply look at RMSE/MAPE, but if you need a model that is more interpretable/explainable then you might want to consider AICc which better balances complexity and accuracy.
Determine best ARIMA model with AICc and RMSE
AIC and RMSE are inter-related but they represent different objectives in choosing the best model. RMSE/MAPE are measures of error and disregards the "complexity" of the model. Optimizing for RMSE/MAP
Determine best ARIMA model with AICc and RMSE AIC and RMSE are inter-related but they represent different objectives in choosing the best model. RMSE/MAPE are measures of error and disregards the "complexity" of the model. Optimizing for RMSE/MAPE can give you accurate results, but could lead to overly complex model that captures too much noise in the data, otherwise known as overfitting. This is where AIC/AICc and their relative BIC comes in. They take the error term and add a penalty related to the number of predictors used in the model, such that more complex models are less favorable and allows you to strike a balance between a complex but accurate model, vs a simpler but still reasonably accurate model. It ultimately comes down to the purpose of your model. If having the most accurate prediction matters then you might simply look at RMSE/MAPE, but if you need a model that is more interpretable/explainable then you might want to consider AICc which better balances complexity and accuracy.
Determine best ARIMA model with AICc and RMSE AIC and RMSE are inter-related but they represent different objectives in choosing the best model. RMSE/MAPE are measures of error and disregards the "complexity" of the model. Optimizing for RMSE/MAP
32,812
Determine best ARIMA model with AICc and RMSE
I have used auto-arima function to have parameters of the best model (p, d, q), I would like to have also the RMSE value for each order (p, d, q). Could you please help me to find the RMSE value corresponding to each AIC value. I would like to illustrate the overfitting engendered by the model with the best RMSE. Thanks in advance.
Determine best ARIMA model with AICc and RMSE
I have used auto-arima function to have parameters of the best model (p, d, q), I would like to have also the RMSE value for each order (p, d, q). Could you please help me to find the RMSE value corre
Determine best ARIMA model with AICc and RMSE I have used auto-arima function to have parameters of the best model (p, d, q), I would like to have also the RMSE value for each order (p, d, q). Could you please help me to find the RMSE value corresponding to each AIC value. I would like to illustrate the overfitting engendered by the model with the best RMSE. Thanks in advance.
Determine best ARIMA model with AICc and RMSE I have used auto-arima function to have parameters of the best model (p, d, q), I would like to have also the RMSE value for each order (p, d, q). Could you please help me to find the RMSE value corre
32,813
Why is optimisation solved with gradient descent rather than with an analytical solution? [duplicate]
The system of equations you get setting the derivatives equal to zero cannot generally be solved analytically. For instance suppose I want choose $10>x>0$ to minimize $xln(x)-\sqrt{x}$ (which does attain a minimum in $10>x>0$) then our first order condition is $ln(x)+1-\frac{1}{2\sqrt{x}}=0$ which does not admit a closed form solution. Of course, we can try and find a solution using numerical methods, but that is precisely what gradient descent does!
Why is optimisation solved with gradient descent rather than with an analytical solution? [duplicate
The system of equations you get setting the derivatives equal to zero cannot generally be solved analytically. For instance suppose I want choose $10>x>0$ to minimize $xln(x)-\sqrt{x}$ (which does att
Why is optimisation solved with gradient descent rather than with an analytical solution? [duplicate] The system of equations you get setting the derivatives equal to zero cannot generally be solved analytically. For instance suppose I want choose $10>x>0$ to minimize $xln(x)-\sqrt{x}$ (which does attain a minimum in $10>x>0$) then our first order condition is $ln(x)+1-\frac{1}{2\sqrt{x}}=0$ which does not admit a closed form solution. Of course, we can try and find a solution using numerical methods, but that is precisely what gradient descent does!
Why is optimisation solved with gradient descent rather than with an analytical solution? [duplicate The system of equations you get setting the derivatives equal to zero cannot generally be solved analytically. For instance suppose I want choose $10>x>0$ to minimize $xln(x)-\sqrt{x}$ (which does att
32,814
Variance of a Cumulative Distribution Function of Normal Distribution
Expanding on the comment by Dilip Sarwate, using this list of integrals of Gaussian functions gives \begin{align*} E \left[ \Phi\left(\frac{X+c}{d}\right)\right] & = \int \frac{1}{\sigma} \phi\left(\frac{x-\mu}{\sigma}\right) \, \Phi\left(\frac{x+c}{d}\right) \, dx \\[8pt] & = \int \phi\left(x\right) \, \Phi\left(\frac{\sigma x+ \mu + c}{d}\right) \, dx\\[8pt] & = \Phi \left( \frac{\mu+c}{\sqrt{\sigma^2 + d^2}}\right) \end{align*} and \begin{align*} E \left[ \Phi\left(\frac{X+c}{d}\right)^2\right] & = \int \frac{1}{\sigma} \phi\left(\frac{x-\mu}{\sigma}\right) \, \Phi\left(\frac{x+c}{d}\right)^2 \, dx \\[8pt] & = \int \phi\left(x\right) \, \Phi\left(\frac{\sigma x+ \mu + c}{d}\right)^2 \, dx\\[8pt] & = \Phi \left( \frac{\mu+c}{\sqrt{\sigma^2 + d^2}}\right) - 2T \left( \frac{\mu+c}{\sqrt{\sigma^2 + d^2}}, \frac{d}{\sqrt{2\sigma^2+d^2}}\right) \end{align*} where $T$ is Owen's T function. From these expressions, $\text{Var}\left( \Phi\left(\frac{X+c}{d}\right) \right)$ follows.
Variance of a Cumulative Distribution Function of Normal Distribution
Expanding on the comment by Dilip Sarwate, using this list of integrals of Gaussian functions gives \begin{align*} E \left[ \Phi\left(\frac{X+c}{d}\right)\right] & = \int \frac{1}{\sigma} \phi\left(\f
Variance of a Cumulative Distribution Function of Normal Distribution Expanding on the comment by Dilip Sarwate, using this list of integrals of Gaussian functions gives \begin{align*} E \left[ \Phi\left(\frac{X+c}{d}\right)\right] & = \int \frac{1}{\sigma} \phi\left(\frac{x-\mu}{\sigma}\right) \, \Phi\left(\frac{x+c}{d}\right) \, dx \\[8pt] & = \int \phi\left(x\right) \, \Phi\left(\frac{\sigma x+ \mu + c}{d}\right) \, dx\\[8pt] & = \Phi \left( \frac{\mu+c}{\sqrt{\sigma^2 + d^2}}\right) \end{align*} and \begin{align*} E \left[ \Phi\left(\frac{X+c}{d}\right)^2\right] & = \int \frac{1}{\sigma} \phi\left(\frac{x-\mu}{\sigma}\right) \, \Phi\left(\frac{x+c}{d}\right)^2 \, dx \\[8pt] & = \int \phi\left(x\right) \, \Phi\left(\frac{\sigma x+ \mu + c}{d}\right)^2 \, dx\\[8pt] & = \Phi \left( \frac{\mu+c}{\sqrt{\sigma^2 + d^2}}\right) - 2T \left( \frac{\mu+c}{\sqrt{\sigma^2 + d^2}}, \frac{d}{\sqrt{2\sigma^2+d^2}}\right) \end{align*} where $T$ is Owen's T function. From these expressions, $\text{Var}\left( \Phi\left(\frac{X+c}{d}\right) \right)$ follows.
Variance of a Cumulative Distribution Function of Normal Distribution Expanding on the comment by Dilip Sarwate, using this list of integrals of Gaussian functions gives \begin{align*} E \left[ \Phi\left(\frac{X+c}{d}\right)\right] & = \int \frac{1}{\sigma} \phi\left(\f
32,815
What is zero mean and unit variance in terms of image data?
This is a very good question and you need to understand this to gain more understanding into deep learning. Basically, you have raw images, lets take one image. This image has 3 channels and in each channel pixel values range from 0 to 255. Our goal here is to squash the range of values for all the pixels in the three channels to a very small range. This is where Preprocessing comes in. But dont think preprocessing only involves the mean and std devtn techniques there are many other like PCA, whitening etc. 1) Using mean: By computing the mean of say, the first red pixel values across all the training images will get you the average red color value that is present across all the training images at the first position. Similarly you find this for all the red channel values, green channel values. Finally you get an average image from all the training images. Now if you subtract this mean image from all the training images you obviously transform the pixel values of the images, the image is no longer interpretable to the human eye, the pixal values now lie in range from positive to negative where the mean lies at zero. 2) Now if you again divide these by std deviation you essentially squash the pixel value range before to a small range. BUT WHY ALL THIS? I will say from my experience that doing this preprocessing on the images and then giving these transformed images to the classifier model will run faster and better. That's why. As you are into deep learning, look into batch normalization after you understand this normalization concept
What is zero mean and unit variance in terms of image data?
This is a very good question and you need to understand this to gain more understanding into deep learning. Basically, you have raw images, lets take one image. This image has 3 channels and in each c
What is zero mean and unit variance in terms of image data? This is a very good question and you need to understand this to gain more understanding into deep learning. Basically, you have raw images, lets take one image. This image has 3 channels and in each channel pixel values range from 0 to 255. Our goal here is to squash the range of values for all the pixels in the three channels to a very small range. This is where Preprocessing comes in. But dont think preprocessing only involves the mean and std devtn techniques there are many other like PCA, whitening etc. 1) Using mean: By computing the mean of say, the first red pixel values across all the training images will get you the average red color value that is present across all the training images at the first position. Similarly you find this for all the red channel values, green channel values. Finally you get an average image from all the training images. Now if you subtract this mean image from all the training images you obviously transform the pixel values of the images, the image is no longer interpretable to the human eye, the pixal values now lie in range from positive to negative where the mean lies at zero. 2) Now if you again divide these by std deviation you essentially squash the pixel value range before to a small range. BUT WHY ALL THIS? I will say from my experience that doing this preprocessing on the images and then giving these transformed images to the classifier model will run faster and better. That's why. As you are into deep learning, look into batch normalization after you understand this normalization concept
What is zero mean and unit variance in terms of image data? This is a very good question and you need to understand this to gain more understanding into deep learning. Basically, you have raw images, lets take one image. This image has 3 channels and in each c
32,816
What is the intuition for dropout used in convolutional neural networks?
As described in the paper introducing it, dropout proceeds like so: During training, randomly remove units from the network. Update parameters as normal, leaving dropped-out units unchanged. The only difference is that for each training case in a mini-batch, we sample a thinned network by dropping out units. Forward and backpropagation for that training case are done only on this thinned network. [...] Any training case which does not use a parameter contributes a gradient of zero for that parameter. At test time, account for this by rescaling: If a unit is retained with probability $p$ during training, the outgoing weights of that unit are multiplied by $p$ at test time as shown in Figure 2. This ensures that for any hidden unit the expected output (under the distribution used to drop units at training time) is the same as the actual output at test time. The intuition is that we'd like to find the Bayes optimal classifier, but doing that for a large model is prohibitive; per the paper, using a full network trained via dropout is a simple approximation that proves useful in practice. (See the paper for results on a variety of applications. One application includes a convolutional architecture.)
What is the intuition for dropout used in convolutional neural networks?
As described in the paper introducing it, dropout proceeds like so: During training, randomly remove units from the network. Update parameters as normal, leaving dropped-out units unchanged. The onl
What is the intuition for dropout used in convolutional neural networks? As described in the paper introducing it, dropout proceeds like so: During training, randomly remove units from the network. Update parameters as normal, leaving dropped-out units unchanged. The only difference is that for each training case in a mini-batch, we sample a thinned network by dropping out units. Forward and backpropagation for that training case are done only on this thinned network. [...] Any training case which does not use a parameter contributes a gradient of zero for that parameter. At test time, account for this by rescaling: If a unit is retained with probability $p$ during training, the outgoing weights of that unit are multiplied by $p$ at test time as shown in Figure 2. This ensures that for any hidden unit the expected output (under the distribution used to drop units at training time) is the same as the actual output at test time. The intuition is that we'd like to find the Bayes optimal classifier, but doing that for a large model is prohibitive; per the paper, using a full network trained via dropout is a simple approximation that proves useful in practice. (See the paper for results on a variety of applications. One application includes a convolutional architecture.)
What is the intuition for dropout used in convolutional neural networks? As described in the paper introducing it, dropout proceeds like so: During training, randomly remove units from the network. Update parameters as normal, leaving dropped-out units unchanged. The onl
32,817
What is the intuition for dropout used in convolutional neural networks?
When you find that your model is overfitting i.e. doing well on a cross validation during training but suffers in an independent test set, then you add dropout layers to reduce dependence on training set. https://www.quora.com/How-does-the-dropout-method-work-in-deep-learning/answer/Arindam-Paul-3
What is the intuition for dropout used in convolutional neural networks?
When you find that your model is overfitting i.e. doing well on a cross validation during training but suffers in an independent test set, then you add dropout layers to reduce dependence on training
What is the intuition for dropout used in convolutional neural networks? When you find that your model is overfitting i.e. doing well on a cross validation during training but suffers in an independent test set, then you add dropout layers to reduce dependence on training set. https://www.quora.com/How-does-the-dropout-method-work-in-deep-learning/answer/Arindam-Paul-3
What is the intuition for dropout used in convolutional neural networks? When you find that your model is overfitting i.e. doing well on a cross validation during training but suffers in an independent test set, then you add dropout layers to reduce dependence on training
32,818
What does it mean to integrate over a random measure?
Denote by $\mathcal{M}$ a measurable space of probability measures, containing the realisations of the Dirichlet process. The random probability measure $G$ is a measurable function $$ G : \omega \mapsto G_\omega \in \mathcal{M} $$ and the integral with respect to $G$ is the random variable $$ \int f(\,\cdot\,|\, \psi) dG(\psi) : \omega \mapsto \int f(\,\cdot\,|\,\psi) dG_\omega(\psi). $$ Thus $\int f(\,\cdot\,|\, \psi) dG(\psi)$ is itself a random p.d.f. (if $f(\cdot| \psi)$ is a p.d.f.). The idea is that $\psi_i$ follows some unknown distribution $G$. In some cases, you may have reasons to believe that $\psi_i$ is normally distributed and then put a prior on the mean and variance. In other cases, you don't want to make such parametric assumptions. In your model, for instance, the prior on $G$ is a Dirichlet process. Is the base measure in Dirichlet process a c.d.f. or is it a p.d.f.? The base measure is any probability measure, usually taken to have full support. In some cases, it can be represented by a probability density function. This is not very important.
What does it mean to integrate over a random measure?
Denote by $\mathcal{M}$ a measurable space of probability measures, containing the realisations of the Dirichlet process. The random probability measure $G$ is a measurable function $$ G : \omega \map
What does it mean to integrate over a random measure? Denote by $\mathcal{M}$ a measurable space of probability measures, containing the realisations of the Dirichlet process. The random probability measure $G$ is a measurable function $$ G : \omega \mapsto G_\omega \in \mathcal{M} $$ and the integral with respect to $G$ is the random variable $$ \int f(\,\cdot\,|\, \psi) dG(\psi) : \omega \mapsto \int f(\,\cdot\,|\,\psi) dG_\omega(\psi). $$ Thus $\int f(\,\cdot\,|\, \psi) dG(\psi)$ is itself a random p.d.f. (if $f(\cdot| \psi)$ is a p.d.f.). The idea is that $\psi_i$ follows some unknown distribution $G$. In some cases, you may have reasons to believe that $\psi_i$ is normally distributed and then put a prior on the mean and variance. In other cases, you don't want to make such parametric assumptions. In your model, for instance, the prior on $G$ is a Dirichlet process. Is the base measure in Dirichlet process a c.d.f. or is it a p.d.f.? The base measure is any probability measure, usually taken to have full support. In some cases, it can be represented by a probability density function. This is not very important.
What does it mean to integrate over a random measure? Denote by $\mathcal{M}$ a measurable space of probability measures, containing the realisations of the Dirichlet process. The random probability measure $G$ is a measurable function $$ G : \omega \map
32,819
Predicting the confidence of a neural network
Perhaps I am misunderstanding the question, but for classification it seems to me the standard way is to have an output neuron for each of the N classes. Then the N vector of [0, 1] output values represent the probability of the input belonging to each class, and so can be interpreted as the "confidence" you want to obtain.
Predicting the confidence of a neural network
Perhaps I am misunderstanding the question, but for classification it seems to me the standard way is to have an output neuron for each of the N classes. Then the N vector of [0, 1] output values rep
Predicting the confidence of a neural network Perhaps I am misunderstanding the question, but for classification it seems to me the standard way is to have an output neuron for each of the N classes. Then the N vector of [0, 1] output values represent the probability of the input belonging to each class, and so can be interpreted as the "confidence" you want to obtain.
Predicting the confidence of a neural network Perhaps I am misunderstanding the question, but for classification it seems to me the standard way is to have an output neuron for each of the N classes. Then the N vector of [0, 1] output values rep
32,820
Predicting the confidence of a neural network
For folks who are interested in NN prediction confidence estimation, you may wish to take a look at Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning (Gal et al., 2016). Briefly, it demonstrates how the variance of a network's predictions with dropout over a population of runs in which dropout is performed can be used to estimate prediction confidence. This approach can be employed for networks designed for classification or for regression.
Predicting the confidence of a neural network
For folks who are interested in NN prediction confidence estimation, you may wish to take a look at Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning (Gal et al., 20
Predicting the confidence of a neural network For folks who are interested in NN prediction confidence estimation, you may wish to take a look at Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning (Gal et al., 2016). Briefly, it demonstrates how the variance of a network's predictions with dropout over a population of runs in which dropout is performed can be used to estimate prediction confidence. This approach can be employed for networks designed for classification or for regression.
Predicting the confidence of a neural network For folks who are interested in NN prediction confidence estimation, you may wish to take a look at Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning (Gal et al., 20
32,821
Is it reasonable to include a random slope term in an lmer model without the corresponding fixed effect?
I believe this question to be very similar to the often wondered "must one always include an intercept term in a linear regression", for which the agreed upon answer is "yes, unless you have an extremely good reason not to". I tried to think through what would happen without the fixed effect term before running any experiment. Let's write your two models out in detail. The first, with the fixed effect slope, is $$ y \sim N(\mu_{\alpha} +\alpha_{[i]} + (\mu_{\beta} + \beta_{[i]}) x, \sigma) $$ $$ \alpha \sim N(0, \sigma_{\alpha}) $$ $$ \beta \sim N(0, \sigma_{\beta}) $$ where $x$ is the number of days, and we have a random intercept $\alpha_{[i]}$, and a random slope $\beta_{[i]}$ for each subject. In the other case, where there is no fixed slope, the model is $$ y \sim N(\mu_{\alpha} + \alpha_{[i]} + \beta_{[i]} x, \sigma) $$ $$ \alpha \sim N(0, \sigma_{\alpha}) $$ $$ \beta \sim N(0, \sigma_{\beta}) $$ The difference is that in the second model, we a priori assume that the mean of the random slopes is zero. This means, we expect the slopes associated to the various subjects to distribute evenly around a slope of $0$ (for example, half should be negative and half positive). Now, in the model on your data this does not seem to be true. In your second plot the estimated slopes within each subject are all positive. It looks like this model is invalid for your data. The inclusion of the fixed slope includes the mean of the subject-wise slopes as a degree of freedom, and in this plot you see the random slopes cluster evenly around zero, as you would like. As for inference from the parameters in your model, I believe this misstatement of the model will cause the following parameter estimates to be bias The subject-wise slopes will be biased towards zero, because the assumption of mean zero in the likelihood will pull them towards zero. The estimated standard deviation of the random slopes will be too large, because inflating this parameter lets the slopes cluster around their true, non-zero mean without being penalized so severely. Here I'll create some simulated data where the true subject-wise mean slope is non-zero library("lme4") library("arm") set.seed(154) N_classes = 50 N_obs <- 10000 random_intercepts <- structure( rnorm(N_classes), names = as.character(1:N_classes) ) random_slopes <- structure( rnorm(N_classes, mean = 1), names = as.character(1:N_classes) ) classes <- sample(as.character(1:N_classes), size = N_obs, replace = TRUE) x <- runif(N_obs) y <- random_intercepts[classes] + random_slopes[classes] * x + rnorm(N_obs) df <- data.frame(class = factor(classes), x = x, y = y) The first model estiamtes all true parameters well > M <- lmer(y ~ x + (x | class), data = df) > display(M) lmer(formula = y ~ x + (x | class), data = df) coef.est coef.se (Intercept) 0.01 0.15 x 1.02 0.15 Error terms: Groups Name Std.Dev. Corr class (Intercept) 1.03 x 1.01 0.19 Residual 1.00 Look's like here all the parameters are estimated well, including the standard deviation of the random slopes. Here's the model without the fixed slope > N <- lmer(y ~ (x | class), data = df) > display(N) lmer(formula = y ~ (x | class), data = df) coef.est coef.se -0.14 0.15 Error terms: Groups Name Std.Dev. Corr class (Intercept) 1.04 x 1.43 0.24 Residual 1.00 The estimate of the random slope standard deviation is 1.43, confirming my intuition that it would be biased to be too large. The mean of the subject-wise slopes in the model M comes out well > mean(fixef(M)["x"] + ranef(M)$class$x) [1] 1.015418 It doesn't seem like my intuition was quite correct on the other model > mean(ranef(N)$class$x) [1] 0.9858566 It looks like the model took fitting the data a bit more seriously than making sure the normality of random slope assumption was totally met. Altogether, it looks like the inflation of the random slope standard deviation is the most serious issue.
Is it reasonable to include a random slope term in an lmer model without the corresponding fixed eff
I believe this question to be very similar to the often wondered "must one always include an intercept term in a linear regression", for which the agreed upon answer is "yes, unless you have an extrem
Is it reasonable to include a random slope term in an lmer model without the corresponding fixed effect? I believe this question to be very similar to the often wondered "must one always include an intercept term in a linear regression", for which the agreed upon answer is "yes, unless you have an extremely good reason not to". I tried to think through what would happen without the fixed effect term before running any experiment. Let's write your two models out in detail. The first, with the fixed effect slope, is $$ y \sim N(\mu_{\alpha} +\alpha_{[i]} + (\mu_{\beta} + \beta_{[i]}) x, \sigma) $$ $$ \alpha \sim N(0, \sigma_{\alpha}) $$ $$ \beta \sim N(0, \sigma_{\beta}) $$ where $x$ is the number of days, and we have a random intercept $\alpha_{[i]}$, and a random slope $\beta_{[i]}$ for each subject. In the other case, where there is no fixed slope, the model is $$ y \sim N(\mu_{\alpha} + \alpha_{[i]} + \beta_{[i]} x, \sigma) $$ $$ \alpha \sim N(0, \sigma_{\alpha}) $$ $$ \beta \sim N(0, \sigma_{\beta}) $$ The difference is that in the second model, we a priori assume that the mean of the random slopes is zero. This means, we expect the slopes associated to the various subjects to distribute evenly around a slope of $0$ (for example, half should be negative and half positive). Now, in the model on your data this does not seem to be true. In your second plot the estimated slopes within each subject are all positive. It looks like this model is invalid for your data. The inclusion of the fixed slope includes the mean of the subject-wise slopes as a degree of freedom, and in this plot you see the random slopes cluster evenly around zero, as you would like. As for inference from the parameters in your model, I believe this misstatement of the model will cause the following parameter estimates to be bias The subject-wise slopes will be biased towards zero, because the assumption of mean zero in the likelihood will pull them towards zero. The estimated standard deviation of the random slopes will be too large, because inflating this parameter lets the slopes cluster around their true, non-zero mean without being penalized so severely. Here I'll create some simulated data where the true subject-wise mean slope is non-zero library("lme4") library("arm") set.seed(154) N_classes = 50 N_obs <- 10000 random_intercepts <- structure( rnorm(N_classes), names = as.character(1:N_classes) ) random_slopes <- structure( rnorm(N_classes, mean = 1), names = as.character(1:N_classes) ) classes <- sample(as.character(1:N_classes), size = N_obs, replace = TRUE) x <- runif(N_obs) y <- random_intercepts[classes] + random_slopes[classes] * x + rnorm(N_obs) df <- data.frame(class = factor(classes), x = x, y = y) The first model estiamtes all true parameters well > M <- lmer(y ~ x + (x | class), data = df) > display(M) lmer(formula = y ~ x + (x | class), data = df) coef.est coef.se (Intercept) 0.01 0.15 x 1.02 0.15 Error terms: Groups Name Std.Dev. Corr class (Intercept) 1.03 x 1.01 0.19 Residual 1.00 Look's like here all the parameters are estimated well, including the standard deviation of the random slopes. Here's the model without the fixed slope > N <- lmer(y ~ (x | class), data = df) > display(N) lmer(formula = y ~ (x | class), data = df) coef.est coef.se -0.14 0.15 Error terms: Groups Name Std.Dev. Corr class (Intercept) 1.04 x 1.43 0.24 Residual 1.00 The estimate of the random slope standard deviation is 1.43, confirming my intuition that it would be biased to be too large. The mean of the subject-wise slopes in the model M comes out well > mean(fixef(M)["x"] + ranef(M)$class$x) [1] 1.015418 It doesn't seem like my intuition was quite correct on the other model > mean(ranef(N)$class$x) [1] 0.9858566 It looks like the model took fitting the data a bit more seriously than making sure the normality of random slope assumption was totally met. Altogether, it looks like the inflation of the random slope standard deviation is the most serious issue.
Is it reasonable to include a random slope term in an lmer model without the corresponding fixed eff I believe this question to be very similar to the often wondered "must one always include an intercept term in a linear regression", for which the agreed upon answer is "yes, unless you have an extrem
32,822
Deriving gradient of a single layer neural network w.r.t its inputs, what is the operator in the chain rule?
I believe that the key to answering this question is to point out that the element-wise multiplication is actually shorthand and therefore when you derive the equations you never actually use it. The actual operation is not an element-wise multiplication but instead a standard matrix multiplication of a gradient with a Jacobian, always. In the case of the nonlinearity, the Jacobian of the vector output of the non-linearity with respect to the vector input of the non-linearity happens to be a diagonal matrix. It's therefore true that the gradient multiplied by this matrix is equivalent to the gradient of the output of the nonlinearity with respect to the loss element-wise multiplied by a vector containing all the partial derivatives of the nonlinearity with respect to the input of the nonlinearity, but this follows from the Jacobian being diagonal. You must pass through the Jacobian step to get to the element-wise multiplication, which might explain your confusion. In math, we have some nonlinearity $s$, a loss $L$, and an input to the nonlinearity $x \in \mathbb{R}^{n \times 1}$ (this could be any tensor). The output of the nonlinearity has the same dimension $s(x) \in \mathbb{R}^{n \times 1}$---as @Logan says, the activation function are defined as element-wise. We want $$\nabla_{x}L=\left({\dfrac{\partial s(x)}{\partial x}}\right)^T\nabla_{s(x)}L$$ Where $\dfrac{\partial s(x)}{\partial x}$ is the Jacobian of $s$. Expanding this Jacobian, we get \begin{bmatrix} \dfrac{\partial{s(x_{1})}}{\partial{x_1}} & \dots & \dfrac{\partial{s(x_{1})}}{\partial{x_{n}}} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial{s(x_{n})}}{x_{1}} & \dots & \dfrac{\partial{s(x_{n})}}{\partial{x_{n}}} \end{bmatrix} We see that it is everywhere zero except for the diagonal. We can make a vector of all its diagonal elements $$Diag\left(\dfrac{\partial s(x)}{\partial x}\right)$$ And then use the element-wise operator. $$\nabla_{x}L =\left({\dfrac{\partial s(x)}{\partial x}}\right)^T\nabla_{s(x)}L =Diag\left(\dfrac{\partial s(x)}{\partial x}\right) \circ \nabla_{s(x)}L$$
Deriving gradient of a single layer neural network w.r.t its inputs, what is the operator in the cha
I believe that the key to answering this question is to point out that the element-wise multiplication is actually shorthand and therefore when you derive the equations you never actually use it. The
Deriving gradient of a single layer neural network w.r.t its inputs, what is the operator in the chain rule? I believe that the key to answering this question is to point out that the element-wise multiplication is actually shorthand and therefore when you derive the equations you never actually use it. The actual operation is not an element-wise multiplication but instead a standard matrix multiplication of a gradient with a Jacobian, always. In the case of the nonlinearity, the Jacobian of the vector output of the non-linearity with respect to the vector input of the non-linearity happens to be a diagonal matrix. It's therefore true that the gradient multiplied by this matrix is equivalent to the gradient of the output of the nonlinearity with respect to the loss element-wise multiplied by a vector containing all the partial derivatives of the nonlinearity with respect to the input of the nonlinearity, but this follows from the Jacobian being diagonal. You must pass through the Jacobian step to get to the element-wise multiplication, which might explain your confusion. In math, we have some nonlinearity $s$, a loss $L$, and an input to the nonlinearity $x \in \mathbb{R}^{n \times 1}$ (this could be any tensor). The output of the nonlinearity has the same dimension $s(x) \in \mathbb{R}^{n \times 1}$---as @Logan says, the activation function are defined as element-wise. We want $$\nabla_{x}L=\left({\dfrac{\partial s(x)}{\partial x}}\right)^T\nabla_{s(x)}L$$ Where $\dfrac{\partial s(x)}{\partial x}$ is the Jacobian of $s$. Expanding this Jacobian, we get \begin{bmatrix} \dfrac{\partial{s(x_{1})}}{\partial{x_1}} & \dots & \dfrac{\partial{s(x_{1})}}{\partial{x_{n}}} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial{s(x_{n})}}{x_{1}} & \dots & \dfrac{\partial{s(x_{n})}}{\partial{x_{n}}} \end{bmatrix} We see that it is everywhere zero except for the diagonal. We can make a vector of all its diagonal elements $$Diag\left(\dfrac{\partial s(x)}{\partial x}\right)$$ And then use the element-wise operator. $$\nabla_{x}L =\left({\dfrac{\partial s(x)}{\partial x}}\right)^T\nabla_{s(x)}L =Diag\left(\dfrac{\partial s(x)}{\partial x}\right) \circ \nabla_{s(x)}L$$
Deriving gradient of a single layer neural network w.r.t its inputs, what is the operator in the cha I believe that the key to answering this question is to point out that the element-wise multiplication is actually shorthand and therefore when you derive the equations you never actually use it. The
32,823
Deriving gradient of a single layer neural network w.r.t its inputs, what is the operator in the chain rule?
Whenever backproportionate to an activation function, the operations become element-wise. Specifically, using your example, $\delta_2 =(\hat{y}-y)W_2^T$ is a backpropagation derivative and $a' = h \circ (1 -h)$ is an activation derivative, and their product are elementwise product, $\delta_2 \circ a'$. This because activation functions are defined as element-wise operations in neural network. See the cs224d lecture slides page 30, it might also help.
Deriving gradient of a single layer neural network w.r.t its inputs, what is the operator in the cha
Whenever backproportionate to an activation function, the operations become element-wise. Specifically, using your example, $\delta_2 =(\hat{y}-y)W_2^T$ is a backpropagation derivative and $a' = h \ci
Deriving gradient of a single layer neural network w.r.t its inputs, what is the operator in the chain rule? Whenever backproportionate to an activation function, the operations become element-wise. Specifically, using your example, $\delta_2 =(\hat{y}-y)W_2^T$ is a backpropagation derivative and $a' = h \circ (1 -h)$ is an activation derivative, and their product are elementwise product, $\delta_2 \circ a'$. This because activation functions are defined as element-wise operations in neural network. See the cs224d lecture slides page 30, it might also help.
Deriving gradient of a single layer neural network w.r.t its inputs, what is the operator in the cha Whenever backproportionate to an activation function, the operations become element-wise. Specifically, using your example, $\delta_2 =(\hat{y}-y)W_2^T$ is a backpropagation derivative and $a' = h \ci
32,824
Rate at which a Gaussian random variable is the maximum in a set of independent Gaussian random variables [duplicate]
$\newcommand{\N}{\mathcal N}\newcommand{\tr}{\mathrm{tr}}$Restating the question: let $X \sim \N(\mu, \Sigma)$, where $\Sigma = \mathrm{diag}(\sigma_1^2, \dots, \sigma_n^2))$. What is $\Pr(\forall i \ne k, X_k \ge X_i)$? Now, this happens if and only if the $(n-1)$-dimensional random vector $Y$, where we drop the $k$th component and subtract each other dimension from $X_k$, is componentwise positive. Define the $(n-1) \times n$ matrix $A$ by taking the negative of $(n-1)$ identity and inserting a column of all $1$s as the $k$th column, so that $Y = A X$: if $k = n$, this is $A = \begin{bmatrix}-I & 1\end{bmatrix}$. Then $Y \sim \N(A \mu, A \Sigma A^T)$. $A \mu$ is easy, and $A \Sigma A^T$ ends up dropping the $k$th row and column from $\Sigma$ and then adding $\sigma_k^2$ to each entry: $\Sigma' + \sigma_k^2 1 1^T$, where $\Sigma'$ drops $k$ from $\Sigma$. Now, the probability in question is known naturally enough as a "Gaussian orthant probability". In general, these are difficult to get in closed form (here are a few; there are a bunch of algorithms out there to approximate them). But we have a special form of covariance matrix here (a particularly simple rank-1 plus diagonal), which may yield a reasonable solution. Below is an effort towards that, but spoiler warning: I don't get to a closed form. The probability in question is: \begin{align} \Pr\left( Y > 0 \right) = \int_{y \in (0, \infty)^{n-1}} \left( 2 \pi \right)^{-\frac{n-1}{2}} \lvert A \Sigma A^T \rvert^{-\frac12} \exp\left( -\frac12 (y - A \mu)^T (A \Sigma A^T)^{-1} (y - A \mu) \right) \mathrm{d}y .\end{align} To avoid those pesky $A \mu$ terms, define $Z = Y - A \mu$, $\mathcal Z = \{ z : \forall i, z_i > (- A \mu)_i \}$. Then we care about \begin{align} \Pr\left( Z > - A \mu \right) = \int_{z \in \mathcal{Z}} \left( 2 \pi \right)^{-\frac{n-1}{2}} \lvert A \Sigma A^T \rvert^{-\frac12} \exp\left( -\frac12 z^T (A \Sigma A^T)^{-1} z \right) \mathrm{d}z .\end{align} Applying the matrix determinant lemma: \begin{align} \lvert A \Sigma A^T \rvert &= \left\lvert \Sigma' + \sigma_k^2 1 1^T \right\rvert \\&= (1 + \sigma_k^2 1^T \Sigma^{-1} 1) \lvert \Sigma^{-1} \rvert \\&= \left( 1 + \sum_{i \ne k} \frac{\sigma_k^2}{\sigma_i^2} \right) \prod_{i \ne k} \frac{1}{\sigma_i^2} ,\end{align} so at least the normalization constant is easy enough. To tackle the exponent, apply Sherman-Morrison: \begin{align} \left( A \Sigma A^T \right)^{-1} &= \left( \Sigma' + \sigma_k^2 1 1^T \right)^{-1} \\&= \Sigma'^{-1} - \frac{\sigma_k^2 \Sigma'^{-1} 1 1^T \Sigma'^{-1}}{1 + \sigma_k^2 1^T \Sigma'^{-1} 1} \\&= \Sigma'^{-1} - \frac{1}{\frac{1}{\sigma_k^2} + \sum_{i \ne k} \frac{1}{\sigma_i^2}} \left[ \frac{1}{\sigma_i^2 \sigma_j^2} \right]_{ij} \\&= \Sigma'^{-1} - \frac{1}{\tr(\Sigma^{-1})} \left[ \frac{1}{\sigma_i^2 \sigma_j^2} \right]_{ij} \\ z^T (A \Sigma A^T)^{-1} z &= \sum_i \frac{z_i^2}{\sigma_i^2} - \frac{1}{\tr(\Sigma^{-1})} \sum_{ij} \frac{z_i z_j}{\sigma_i^2 \sigma_j^2} \end{align} and then the integral (after pulling out constants) is \begin{align} \int_{z \in \mathcal{Z}} &\exp\left( - \tfrac12 z^T (A \Sigma A^T)^{-1} z \right) \mathrm{d}z \\&= \int_{z \in \mathcal{Z}} \prod_i \exp\left( - \frac{z_i^2}{2 \sigma_i^2} \right) \prod_{ij} \exp\left( \frac{1}{2 \tr(\Sigma^{-1})} \frac{z_i z_j}{\sigma_i^2 \sigma_j^2} \right) \mathrm{d}z .\end{align} This integral seems amenable to something smarter than just blind numerical integration, but it's late now....
Rate at which a Gaussian random variable is the maximum in a set of independent Gaussian random vari
$\newcommand{\N}{\mathcal N}\newcommand{\tr}{\mathrm{tr}}$Restating the question: let $X \sim \N(\mu, \Sigma)$, where $\Sigma = \mathrm{diag}(\sigma_1^2, \dots, \sigma_n^2))$. What is $\Pr(\forall i \
Rate at which a Gaussian random variable is the maximum in a set of independent Gaussian random variables [duplicate] $\newcommand{\N}{\mathcal N}\newcommand{\tr}{\mathrm{tr}}$Restating the question: let $X \sim \N(\mu, \Sigma)$, where $\Sigma = \mathrm{diag}(\sigma_1^2, \dots, \sigma_n^2))$. What is $\Pr(\forall i \ne k, X_k \ge X_i)$? Now, this happens if and only if the $(n-1)$-dimensional random vector $Y$, where we drop the $k$th component and subtract each other dimension from $X_k$, is componentwise positive. Define the $(n-1) \times n$ matrix $A$ by taking the negative of $(n-1)$ identity and inserting a column of all $1$s as the $k$th column, so that $Y = A X$: if $k = n$, this is $A = \begin{bmatrix}-I & 1\end{bmatrix}$. Then $Y \sim \N(A \mu, A \Sigma A^T)$. $A \mu$ is easy, and $A \Sigma A^T$ ends up dropping the $k$th row and column from $\Sigma$ and then adding $\sigma_k^2$ to each entry: $\Sigma' + \sigma_k^2 1 1^T$, where $\Sigma'$ drops $k$ from $\Sigma$. Now, the probability in question is known naturally enough as a "Gaussian orthant probability". In general, these are difficult to get in closed form (here are a few; there are a bunch of algorithms out there to approximate them). But we have a special form of covariance matrix here (a particularly simple rank-1 plus diagonal), which may yield a reasonable solution. Below is an effort towards that, but spoiler warning: I don't get to a closed form. The probability in question is: \begin{align} \Pr\left( Y > 0 \right) = \int_{y \in (0, \infty)^{n-1}} \left( 2 \pi \right)^{-\frac{n-1}{2}} \lvert A \Sigma A^T \rvert^{-\frac12} \exp\left( -\frac12 (y - A \mu)^T (A \Sigma A^T)^{-1} (y - A \mu) \right) \mathrm{d}y .\end{align} To avoid those pesky $A \mu$ terms, define $Z = Y - A \mu$, $\mathcal Z = \{ z : \forall i, z_i > (- A \mu)_i \}$. Then we care about \begin{align} \Pr\left( Z > - A \mu \right) = \int_{z \in \mathcal{Z}} \left( 2 \pi \right)^{-\frac{n-1}{2}} \lvert A \Sigma A^T \rvert^{-\frac12} \exp\left( -\frac12 z^T (A \Sigma A^T)^{-1} z \right) \mathrm{d}z .\end{align} Applying the matrix determinant lemma: \begin{align} \lvert A \Sigma A^T \rvert &= \left\lvert \Sigma' + \sigma_k^2 1 1^T \right\rvert \\&= (1 + \sigma_k^2 1^T \Sigma^{-1} 1) \lvert \Sigma^{-1} \rvert \\&= \left( 1 + \sum_{i \ne k} \frac{\sigma_k^2}{\sigma_i^2} \right) \prod_{i \ne k} \frac{1}{\sigma_i^2} ,\end{align} so at least the normalization constant is easy enough. To tackle the exponent, apply Sherman-Morrison: \begin{align} \left( A \Sigma A^T \right)^{-1} &= \left( \Sigma' + \sigma_k^2 1 1^T \right)^{-1} \\&= \Sigma'^{-1} - \frac{\sigma_k^2 \Sigma'^{-1} 1 1^T \Sigma'^{-1}}{1 + \sigma_k^2 1^T \Sigma'^{-1} 1} \\&= \Sigma'^{-1} - \frac{1}{\frac{1}{\sigma_k^2} + \sum_{i \ne k} \frac{1}{\sigma_i^2}} \left[ \frac{1}{\sigma_i^2 \sigma_j^2} \right]_{ij} \\&= \Sigma'^{-1} - \frac{1}{\tr(\Sigma^{-1})} \left[ \frac{1}{\sigma_i^2 \sigma_j^2} \right]_{ij} \\ z^T (A \Sigma A^T)^{-1} z &= \sum_i \frac{z_i^2}{\sigma_i^2} - \frac{1}{\tr(\Sigma^{-1})} \sum_{ij} \frac{z_i z_j}{\sigma_i^2 \sigma_j^2} \end{align} and then the integral (after pulling out constants) is \begin{align} \int_{z \in \mathcal{Z}} &\exp\left( - \tfrac12 z^T (A \Sigma A^T)^{-1} z \right) \mathrm{d}z \\&= \int_{z \in \mathcal{Z}} \prod_i \exp\left( - \frac{z_i^2}{2 \sigma_i^2} \right) \prod_{ij} \exp\left( \frac{1}{2 \tr(\Sigma^{-1})} \frac{z_i z_j}{\sigma_i^2 \sigma_j^2} \right) \mathrm{d}z .\end{align} This integral seems amenable to something smarter than just blind numerical integration, but it's late now....
Rate at which a Gaussian random variable is the maximum in a set of independent Gaussian random vari $\newcommand{\N}{\mathcal N}\newcommand{\tr}{\mathrm{tr}}$Restating the question: let $X \sim \N(\mu, \Sigma)$, where $\Sigma = \mathrm{diag}(\sigma_1^2, \dots, \sigma_n^2))$. What is $\Pr(\forall i \
32,825
Rate at which a Gaussian random variable is the maximum in a set of independent Gaussian random variables [duplicate]
I will use $P(X_k \geq X_i\;\;\forall i \neq k)$ to denote the probability that $X_k\geq X_i\;\;\forall i \neq k$ Let $X_1,...,X_n$ be an independent (but not identically) Gaussian distributed sample of random variables, i.e. $X_i \sim N(\mu_i,\sigma_i)$. Also let $Z_{k,i}=X_k - X_i$. as @Dougal pointed out, the random variables $Z_{k,1},...,Z_{k,n}$ (with the omission of $Z_{k,k}$) are NOT independent distributed. Rather they follow a multivariate Gaussian where the $$ cov(Z_{k,i},Z_{k,j})=\begin{cases} \sigma_k^2 \;\; \mathrm{for}\;\; i \neq j \\ \sigma_k^2+\sigma_i^2 \;\; \mathrm{for}\;\; i = j \end{cases} $$ The marginal distributions are given by $$Z_{k,i} \sim N\bigg(\mu_k-\mu_i,\sqrt{\sigma_k^2+\sigma_i^2}\bigg)\;\;\forall i \neq k$$ Thus, $$ P(X_k \geq X_i\;\;\forall i \neq k) = P(Z_{k,i} \geq 0\;\;\forall i \neq k)=$$ $$ \int_0^{\infty} \int_0^{\infty} . . . \int_0^{\infty} \Phi((Z_{k,1},...,Z_{k,n})_{-k}|\mu_k,\Sigma_k)\prod_{\stackrel{i=1}{i \neq k}}^ndz_k $$ $\Phi$ being the multivariate Gaussian pdf, with $\mu_k$ and $\Sigma_k$ as the mean vector and covariance matrix formed from the above marginals and covariance. The above multivariate distribution can be really tough to integrate over. Most statistical software packages have functions for it though. In R, for example, you may want to check out the mnormt or mvnorm libraries. However, I cannot speak to how accurate these methods are for very high dimensional problems.
Rate at which a Gaussian random variable is the maximum in a set of independent Gaussian random vari
I will use $P(X_k \geq X_i\;\;\forall i \neq k)$ to denote the probability that $X_k\geq X_i\;\;\forall i \neq k$ Let $X_1,...,X_n$ be an independent (but not identically) Gaussian distributed sample
Rate at which a Gaussian random variable is the maximum in a set of independent Gaussian random variables [duplicate] I will use $P(X_k \geq X_i\;\;\forall i \neq k)$ to denote the probability that $X_k\geq X_i\;\;\forall i \neq k$ Let $X_1,...,X_n$ be an independent (but not identically) Gaussian distributed sample of random variables, i.e. $X_i \sim N(\mu_i,\sigma_i)$. Also let $Z_{k,i}=X_k - X_i$. as @Dougal pointed out, the random variables $Z_{k,1},...,Z_{k,n}$ (with the omission of $Z_{k,k}$) are NOT independent distributed. Rather they follow a multivariate Gaussian where the $$ cov(Z_{k,i},Z_{k,j})=\begin{cases} \sigma_k^2 \;\; \mathrm{for}\;\; i \neq j \\ \sigma_k^2+\sigma_i^2 \;\; \mathrm{for}\;\; i = j \end{cases} $$ The marginal distributions are given by $$Z_{k,i} \sim N\bigg(\mu_k-\mu_i,\sqrt{\sigma_k^2+\sigma_i^2}\bigg)\;\;\forall i \neq k$$ Thus, $$ P(X_k \geq X_i\;\;\forall i \neq k) = P(Z_{k,i} \geq 0\;\;\forall i \neq k)=$$ $$ \int_0^{\infty} \int_0^{\infty} . . . \int_0^{\infty} \Phi((Z_{k,1},...,Z_{k,n})_{-k}|\mu_k,\Sigma_k)\prod_{\stackrel{i=1}{i \neq k}}^ndz_k $$ $\Phi$ being the multivariate Gaussian pdf, with $\mu_k$ and $\Sigma_k$ as the mean vector and covariance matrix formed from the above marginals and covariance. The above multivariate distribution can be really tough to integrate over. Most statistical software packages have functions for it though. In R, for example, you may want to check out the mnormt or mvnorm libraries. However, I cannot speak to how accurate these methods are for very high dimensional problems.
Rate at which a Gaussian random variable is the maximum in a set of independent Gaussian random vari I will use $P(X_k \geq X_i\;\;\forall i \neq k)$ to denote the probability that $X_k\geq X_i\;\;\forall i \neq k$ Let $X_1,...,X_n$ be an independent (but not identically) Gaussian distributed sample
32,826
Can any sort of conclusion be made about the cointegration of $B, A$ given the cointegration test statistic of $A, B$?
For two time series $X_t$ and $Y_t$ to be cointegrated two conditions are met: $X_t$ and $Y_t$ must be $I(1)$ processes, i.e. $\Delta X_t$ and $\Delta Y_t$ must be stationary processes (in a weak sense, i.e. covariance stationary). There exists a set of coefficients $\alpha,\beta\in \mathbb{R}$ such that the time series $Z_t=\alpha X_t+\beta Y_t$ is a stationary process. The vector $(\alpha,\beta)$ is called cointegrating vector. Since stationarity is invariant to shift and scale it immediately follows that coefficients $\alpha$ and $\beta$ are not uniquely defined, namely they are unique up to multiplicative constant. Cointegration tests come in two varieties: Tests on residuals of regression of $Y_t$ on $X_t$. Tests on matrix rank in a vector-error correction representation of $(Y_t,X_t)$. Both varieties rely on certain theoretical results, namely: OLS of $Y_t$ on $X_t$ gives a consistent estimate of cointegration vector Granger representation theorem. The OP question is about the first variety of tests. In these tests we have a choice: estimate regression $Y_t=a_1+b_1 X_t+u_t$ or $X_t=a_2+b_2 Y_t+v_t$ on $Y_t$. Naturally these two regressions will give two different cointegrating vectors: $(-\hat b_1, 1) $ and $(1, -\hat b_2)$. But due to above mentioned theoretical result the probability limits of $-\hat b_1$ and $-1/\hat b_2$ must be the same, since the cointegrating vector is unique up to a constant. Due to algebraic properties of OLS the residual series $\hat u_t$ and $\hat v_t$ are not identical, although from theoretical perspective they both should be equal to $\frac{1}{\beta}Z_t$ and $\frac{1}{\alpha}Z_t$ respectively, i.e. they should be identical to multiplicative constant. If the series $X_t$ and $Y_t$ are cointegrated then $Z_t$ is a stationary series, so since $\hat u_t$ and $\hat v_t$ approximate $Z_t$ we can test whether they are stationary. That is how the first variety of cointegration tests are performed. Naturally since the $\hat u_t$ and $\hat v_t$ are different any tests on them will differ too. But from theoretical point of view any difference is simply a finite sample bias, which should disappear asymptotically. If the difference between the stationarity tests on series $\hat u_t$ and $\hat v_t$ is statistically significant, this is an indication that the series are not cointegrated, or assumptions of stationarity tests are not met. If we take ADF test as a stationarity test for residuals I think it would be possible to derive asymptotic distribution of difference between the ADF statistics on $\hat u_t$ and $\hat v_t$. Whether it would have any practical value I do not know. So to summarize the answers to the three questions are the following: See above. No. The asymptotic distribution of difference of the tests would depend on the test. Your methodology is fine. If time series are cointegrated, both statistics should indicate so. In case of no cointegration, either both statistics will reject stationarity, or one of them will. In both cases you should reject the null hypothesis of cointegration. As in testing for unit root you should safeguard against time trends, change points and all the other things that make unit root testing quite challenging procedure.
Can any sort of conclusion be made about the cointegration of $B, A$ given the cointegration test st
For two time series $X_t$ and $Y_t$ to be cointegrated two conditions are met: $X_t$ and $Y_t$ must be $I(1)$ processes, i.e. $\Delta X_t$ and $\Delta Y_t$ must be stationary processes (in a weak sen
Can any sort of conclusion be made about the cointegration of $B, A$ given the cointegration test statistic of $A, B$? For two time series $X_t$ and $Y_t$ to be cointegrated two conditions are met: $X_t$ and $Y_t$ must be $I(1)$ processes, i.e. $\Delta X_t$ and $\Delta Y_t$ must be stationary processes (in a weak sense, i.e. covariance stationary). There exists a set of coefficients $\alpha,\beta\in \mathbb{R}$ such that the time series $Z_t=\alpha X_t+\beta Y_t$ is a stationary process. The vector $(\alpha,\beta)$ is called cointegrating vector. Since stationarity is invariant to shift and scale it immediately follows that coefficients $\alpha$ and $\beta$ are not uniquely defined, namely they are unique up to multiplicative constant. Cointegration tests come in two varieties: Tests on residuals of regression of $Y_t$ on $X_t$. Tests on matrix rank in a vector-error correction representation of $(Y_t,X_t)$. Both varieties rely on certain theoretical results, namely: OLS of $Y_t$ on $X_t$ gives a consistent estimate of cointegration vector Granger representation theorem. The OP question is about the first variety of tests. In these tests we have a choice: estimate regression $Y_t=a_1+b_1 X_t+u_t$ or $X_t=a_2+b_2 Y_t+v_t$ on $Y_t$. Naturally these two regressions will give two different cointegrating vectors: $(-\hat b_1, 1) $ and $(1, -\hat b_2)$. But due to above mentioned theoretical result the probability limits of $-\hat b_1$ and $-1/\hat b_2$ must be the same, since the cointegrating vector is unique up to a constant. Due to algebraic properties of OLS the residual series $\hat u_t$ and $\hat v_t$ are not identical, although from theoretical perspective they both should be equal to $\frac{1}{\beta}Z_t$ and $\frac{1}{\alpha}Z_t$ respectively, i.e. they should be identical to multiplicative constant. If the series $X_t$ and $Y_t$ are cointegrated then $Z_t$ is a stationary series, so since $\hat u_t$ and $\hat v_t$ approximate $Z_t$ we can test whether they are stationary. That is how the first variety of cointegration tests are performed. Naturally since the $\hat u_t$ and $\hat v_t$ are different any tests on them will differ too. But from theoretical point of view any difference is simply a finite sample bias, which should disappear asymptotically. If the difference between the stationarity tests on series $\hat u_t$ and $\hat v_t$ is statistically significant, this is an indication that the series are not cointegrated, or assumptions of stationarity tests are not met. If we take ADF test as a stationarity test for residuals I think it would be possible to derive asymptotic distribution of difference between the ADF statistics on $\hat u_t$ and $\hat v_t$. Whether it would have any practical value I do not know. So to summarize the answers to the three questions are the following: See above. No. The asymptotic distribution of difference of the tests would depend on the test. Your methodology is fine. If time series are cointegrated, both statistics should indicate so. In case of no cointegration, either both statistics will reject stationarity, or one of them will. In both cases you should reject the null hypothesis of cointegration. As in testing for unit root you should safeguard against time trends, change points and all the other things that make unit root testing quite challenging procedure.
Can any sort of conclusion be made about the cointegration of $B, A$ given the cointegration test st For two time series $X_t$ and $Y_t$ to be cointegrated two conditions are met: $X_t$ and $Y_t$ must be $I(1)$ processes, i.e. $\Delta X_t$ and $\Delta Y_t$ must be stationary processes (in a weak sen
32,827
Can any sort of conclusion be made about the cointegration of $B, A$ given the cointegration test statistic of $A, B$?
So the most popular answer of statistics is apparently correct for this question: "it depends". A good guess can be made about the similarity of cointegration test statistics of unique orderings of input variables, given that the time series vectors have low and similar variances. This is implied from the calculation of the cointegration test statistic: when the variances of the input time series vectors are low and similar, the cointegration coefficients will be similar (which is to say, approximately scalar multiples of each other), resulting in the residual series being approximately scalar multiples of each other. Similar residual series implies similar cointegration test statistics. However, when the variances are large or dissimilar, there is no implied guarantee that the residual series will be even approximately scalar multiples of each other, which in turn makes the cointegration test statistics variable. Formally: Consider the simple regression model, used to find the cointegration coefficient for bivariate cases. Regressing x on y: $$ \hat{\beta}_{xy} = {Cov[x,y] \over \sigma_x^2 } $$ Regressing y on x: $$ \hat{\beta}_{yx} = {Cov[y,x] \over \sigma_y^2 } $$ Clearly $Cov[x,y] = Cov[y,x]$. But, generally, $ \sigma^2_x \neq \sigma^2_y $. Thus, $ \hat{\beta}_{xy} $ is not a scalar multiple of $ \hat{\beta}_{yx} $. So the linear combinations (AKA residual series) that are used to test for a unit root to determine likelihood of cointegration are not scalar multiples of one another: $$ x_t - \gamma^1 y_t = \epsilon_t^1 $$ $$ y_t - \gamma^2 x_t = \epsilon_t^2 $$ Note that, therefore, $ \gamma = \hat{\beta} $, so generally $ \gamma^1 \neq a*\gamma^2 $ for some scalar $a$. This shows two facts about cointegration: The variable order in testing for cointegration matters because of the variance of the individual time series vectors. This affects the relationship between the cointegration coefficients of the various variable orientations because of how the cointegration coefficient is calculated. The residual series may or may not be "similar" to one another: the similarity depends on the variances of the individual time series vectors. These facts imply that the residual series formed by unique variable orderings are not only different, but they are probably not scalar multiples of one another. So which ordering to choose? It depends on the application. Why do some residual series as generated from the same data series but different orderings appear similar while others appear so different? It is because of the variance of the individual time series vectors. When the time series vectors have similar variance (as is certainly possible when comparing similar time series data), the residual series may seem like $-1 * \alpha$ multiples of one another, with $\alpha$ being some scalar value. This is the case when the variance of the time series vectors are both low and similar, resulting in similar error terms in the linear combinations. So, finally, if the time series vectors that are being tested for cointegration have low and similar variances, then one can correctly suppose that the cointegration test statistic will be of a similar confidence level. In general, it is probably best to test both orientations, or at least consider the variances of the time series vectors, unless there is a prevailing reason to favor one orientation.
Can any sort of conclusion be made about the cointegration of $B, A$ given the cointegration test st
So the most popular answer of statistics is apparently correct for this question: "it depends". A good guess can be made about the similarity of cointegration test statistics of unique orderings of i
Can any sort of conclusion be made about the cointegration of $B, A$ given the cointegration test statistic of $A, B$? So the most popular answer of statistics is apparently correct for this question: "it depends". A good guess can be made about the similarity of cointegration test statistics of unique orderings of input variables, given that the time series vectors have low and similar variances. This is implied from the calculation of the cointegration test statistic: when the variances of the input time series vectors are low and similar, the cointegration coefficients will be similar (which is to say, approximately scalar multiples of each other), resulting in the residual series being approximately scalar multiples of each other. Similar residual series implies similar cointegration test statistics. However, when the variances are large or dissimilar, there is no implied guarantee that the residual series will be even approximately scalar multiples of each other, which in turn makes the cointegration test statistics variable. Formally: Consider the simple regression model, used to find the cointegration coefficient for bivariate cases. Regressing x on y: $$ \hat{\beta}_{xy} = {Cov[x,y] \over \sigma_x^2 } $$ Regressing y on x: $$ \hat{\beta}_{yx} = {Cov[y,x] \over \sigma_y^2 } $$ Clearly $Cov[x,y] = Cov[y,x]$. But, generally, $ \sigma^2_x \neq \sigma^2_y $. Thus, $ \hat{\beta}_{xy} $ is not a scalar multiple of $ \hat{\beta}_{yx} $. So the linear combinations (AKA residual series) that are used to test for a unit root to determine likelihood of cointegration are not scalar multiples of one another: $$ x_t - \gamma^1 y_t = \epsilon_t^1 $$ $$ y_t - \gamma^2 x_t = \epsilon_t^2 $$ Note that, therefore, $ \gamma = \hat{\beta} $, so generally $ \gamma^1 \neq a*\gamma^2 $ for some scalar $a$. This shows two facts about cointegration: The variable order in testing for cointegration matters because of the variance of the individual time series vectors. This affects the relationship between the cointegration coefficients of the various variable orientations because of how the cointegration coefficient is calculated. The residual series may or may not be "similar" to one another: the similarity depends on the variances of the individual time series vectors. These facts imply that the residual series formed by unique variable orderings are not only different, but they are probably not scalar multiples of one another. So which ordering to choose? It depends on the application. Why do some residual series as generated from the same data series but different orderings appear similar while others appear so different? It is because of the variance of the individual time series vectors. When the time series vectors have similar variance (as is certainly possible when comparing similar time series data), the residual series may seem like $-1 * \alpha$ multiples of one another, with $\alpha$ being some scalar value. This is the case when the variance of the time series vectors are both low and similar, resulting in similar error terms in the linear combinations. So, finally, if the time series vectors that are being tested for cointegration have low and similar variances, then one can correctly suppose that the cointegration test statistic will be of a similar confidence level. In general, it is probably best to test both orientations, or at least consider the variances of the time series vectors, unless there is a prevailing reason to favor one orientation.
Can any sort of conclusion be made about the cointegration of $B, A$ given the cointegration test st So the most popular answer of statistics is apparently correct for this question: "it depends". A good guess can be made about the similarity of cointegration test statistics of unique orderings of i
32,828
Lasso and statistical signficance of selected variables
There are at least two things to consider here. First, it's important to realize that the p-values in a regression make quite a few assumptions in order to be valid. Most important for your case, they assume you followed a procedure like this: I collected data, and decided what model to fit without looking at the data I collected. Then I fit my pre-determined model, which I assume fits the data well, without really checking and making any changes. Under these assumptions, the p-values are meaningful. If you make changes to your model based on the data you collected, variable selection using the LASSO for example, the p-values estimated from a linear model are not meaningful. This part of the question may be addressed by user2530062's answer to this question, given that p-values are actually of interest to you. Secondly, there is the question of what question you are attempting to answer. The p-values address a very specific question: Under the assumption that this model is correct for the data I am collecting, and that the true value of this parameter I am interested in estimating is in reality zero, what is the probability that I would observe an equally or more extreme value of the estimated parameter when I fit my model to a sample of data collected from this process. If that's the question that you are interested in answer, then carefully constructing your model so that the p-value is valid is how to go about it. But I suspect this may not be the question you are actually interested in answering. Maybe your question is more like this: What is the probability that including this parameter in the model improves the predictive accuracy of my model for this process? A p-value does not give you any real information on that question, or the infinity of other questions that p-values were not designed to address. Instead, you should design a procedure to measure exactly the thing you are interested in. In the above example, a rigorous procedure using the bootstrap to estimate the probability that including the parameter in the model improved predictive accuracy, along with cross validation to estimate the regularization parameter, would do you well.
Lasso and statistical signficance of selected variables
There are at least two things to consider here. First, it's important to realize that the p-values in a regression make quite a few assumptions in order to be valid. Most important for your case, the
Lasso and statistical signficance of selected variables There are at least two things to consider here. First, it's important to realize that the p-values in a regression make quite a few assumptions in order to be valid. Most important for your case, they assume you followed a procedure like this: I collected data, and decided what model to fit without looking at the data I collected. Then I fit my pre-determined model, which I assume fits the data well, without really checking and making any changes. Under these assumptions, the p-values are meaningful. If you make changes to your model based on the data you collected, variable selection using the LASSO for example, the p-values estimated from a linear model are not meaningful. This part of the question may be addressed by user2530062's answer to this question, given that p-values are actually of interest to you. Secondly, there is the question of what question you are attempting to answer. The p-values address a very specific question: Under the assumption that this model is correct for the data I am collecting, and that the true value of this parameter I am interested in estimating is in reality zero, what is the probability that I would observe an equally or more extreme value of the estimated parameter when I fit my model to a sample of data collected from this process. If that's the question that you are interested in answer, then carefully constructing your model so that the p-value is valid is how to go about it. But I suspect this may not be the question you are actually interested in answering. Maybe your question is more like this: What is the probability that including this parameter in the model improves the predictive accuracy of my model for this process? A p-value does not give you any real information on that question, or the infinity of other questions that p-values were not designed to address. Instead, you should design a procedure to measure exactly the thing you are interested in. In the above example, a rigorous procedure using the bootstrap to estimate the probability that including the parameter in the model improved predictive accuracy, along with cross validation to estimate the regularization parameter, would do you well.
Lasso and statistical signficance of selected variables There are at least two things to consider here. First, it's important to realize that the p-values in a regression make quite a few assumptions in order to be valid. Most important for your case, the
32,829
Lasso and statistical signficance of selected variables
This paper tries to provide approach to calculate p-values in elasticnet. I have been struggling to find time to implement it, as it appears to be experimental and not included in any official R package. http://statweb.stanford.edu/~tibs/ftp/covtest.pdf It does not answer the theoretical part of your question, but may bring you closer to an answer if you calculate p-values for elasticnet.
Lasso and statistical signficance of selected variables
This paper tries to provide approach to calculate p-values in elasticnet. I have been struggling to find time to implement it, as it appears to be experimental and not included in any official R packa
Lasso and statistical signficance of selected variables This paper tries to provide approach to calculate p-values in elasticnet. I have been struggling to find time to implement it, as it appears to be experimental and not included in any official R package. http://statweb.stanford.edu/~tibs/ftp/covtest.pdf It does not answer the theoretical part of your question, but may bring you closer to an answer if you calculate p-values for elasticnet.
Lasso and statistical signficance of selected variables This paper tries to provide approach to calculate p-values in elasticnet. I have been struggling to find time to implement it, as it appears to be experimental and not included in any official R packa
32,830
What methods exist for tuning graph kernel SVM hyperparameters?
Disclaimer: I'm not very familiar with graph kernels, so this answer might be based on wrong assumptions. I agree that omitting vertices while computing the kernel matrix is suboptimal. That said, I'm not sure that cross-validation is necessarily problematic. Is your learning context transduction or induction? Overall, I am not convinced that computing the kernel matrix for a given $\beta$ based on all data (i.e., both train and test) necessarily creates an information leak. If computing the kernel based on all data turns out to be okay, you can then train models in a typical cv-setup, using the relevant blocks of the (precomputed) full kernel matrix for training/testing. This approach would enable you to jointly optimize $\beta$ and $C$, for example via libraries like Optunity, where $\beta$ is used to compute the kernel based on all data and $C$ is used to train models on the training folds exclusively.
What methods exist for tuning graph kernel SVM hyperparameters?
Disclaimer: I'm not very familiar with graph kernels, so this answer might be based on wrong assumptions. I agree that omitting vertices while computing the kernel matrix is suboptimal. That said, I'm
What methods exist for tuning graph kernel SVM hyperparameters? Disclaimer: I'm not very familiar with graph kernels, so this answer might be based on wrong assumptions. I agree that omitting vertices while computing the kernel matrix is suboptimal. That said, I'm not sure that cross-validation is necessarily problematic. Is your learning context transduction or induction? Overall, I am not convinced that computing the kernel matrix for a given $\beta$ based on all data (i.e., both train and test) necessarily creates an information leak. If computing the kernel based on all data turns out to be okay, you can then train models in a typical cv-setup, using the relevant blocks of the (precomputed) full kernel matrix for training/testing. This approach would enable you to jointly optimize $\beta$ and $C$, for example via libraries like Optunity, where $\beta$ is used to compute the kernel based on all data and $C$ is used to train models on the training folds exclusively.
What methods exist for tuning graph kernel SVM hyperparameters? Disclaimer: I'm not very familiar with graph kernels, so this answer might be based on wrong assumptions. I agree that omitting vertices while computing the kernel matrix is suboptimal. That said, I'm
32,831
Bland-Altman (Tukey Mean-Difference) plot for differing scales
The problem with using correlations as a measure of agreement is that what they are really assessing is the ordering of the $X_i$ and $Y_i$ values, and their relative spacing, but not that the numbers themselves agree (cf., see my answer here: Does Spearman's $r=0.38$ indicate agreement?). On the other hand, if the numbers are incommensurate, it makes no sense to try to determine if they agree—it can't mean anything whether they do or don't. As a result, a Bland-Altman plot can't be of any value here. However, a correlation might offer some (albeit little) value. From an exploratory point of view, I would start with a regular, old scatterplot. I might also do a simple linear regression and test for curvature in the relationship. It can often be the case that different measures are differentially sensitive at different ranges. For example, they might do equally well at measuring what you want in the middle of their range, but one does a better job of measuring lower values (whereas the other just starts to output the same low number, perhaps a limit of detection), and vice-versa for higher values. What I have in mind is that that the relationships aren't linear. Consider this stylized figure of the relationship between energy and the temperature of water: Then imagine having temperature and something else, perhaps volume (ice begins to expand at lower temperatures), both as measures of energy. Once / if you were satisfied that the relationship were linear, your ability to measure the amount of agreement would be limited to Pearson's product-moment correlation; Bland-Altman plots just won't work here.
Bland-Altman (Tukey Mean-Difference) plot for differing scales
The problem with using correlations as a measure of agreement is that what they are really assessing is the ordering of the $X_i$ and $Y_i$ values, and their relative spacing, but not that the numbers
Bland-Altman (Tukey Mean-Difference) plot for differing scales The problem with using correlations as a measure of agreement is that what they are really assessing is the ordering of the $X_i$ and $Y_i$ values, and their relative spacing, but not that the numbers themselves agree (cf., see my answer here: Does Spearman's $r=0.38$ indicate agreement?). On the other hand, if the numbers are incommensurate, it makes no sense to try to determine if they agree—it can't mean anything whether they do or don't. As a result, a Bland-Altman plot can't be of any value here. However, a correlation might offer some (albeit little) value. From an exploratory point of view, I would start with a regular, old scatterplot. I might also do a simple linear regression and test for curvature in the relationship. It can often be the case that different measures are differentially sensitive at different ranges. For example, they might do equally well at measuring what you want in the middle of their range, but one does a better job of measuring lower values (whereas the other just starts to output the same low number, perhaps a limit of detection), and vice-versa for higher values. What I have in mind is that that the relationships aren't linear. Consider this stylized figure of the relationship between energy and the temperature of water: Then imagine having temperature and something else, perhaps volume (ice begins to expand at lower temperatures), both as measures of energy. Once / if you were satisfied that the relationship were linear, your ability to measure the amount of agreement would be limited to Pearson's product-moment correlation; Bland-Altman plots just won't work here.
Bland-Altman (Tukey Mean-Difference) plot for differing scales The problem with using correlations as a measure of agreement is that what they are really assessing is the ordering of the $X_i$ and $Y_i$ values, and their relative spacing, but not that the numbers
32,832
Bland-Altman (Tukey Mean-Difference) plot for differing scales
Assuming you cannot convert both measures to a common set of units, and both measures are continuous and are roughly normally distributed, convert both to standardized scores (e.g., $z = \frac{x- \mu}{\sigma}$). Added in response to @Nick: Bland-Altman plots plot the difference between two measures against the average of the two measures, so to be meaningful the two measures need to be measured on the same scale. Converting two measures that are on different scales to dimensionless standardised scores allows you to do the necessary calculations. Added in response to @Nick (2): Not sure what you are saying. Here is a workable example: # Load packages library(dplyr) library(BlandAltmanLeh) # Using the same conditions @Ashe used ## Set seed set.seed(2063) ## Generate data x <- seq(1, 40) y <- 2 * x + rnorm(n = length(x), mean = 0, sd = 10) ## Put x and y into a dataframe df <- data_frame(x = x, y = y) %>% ## Add two new columns containing standarized values of x and y mutate(x_std = (x - mean(x)) / sd(x), y_std = (y - mean(y)) / sd(y)) ## Bland-Altman plots of: ### i) raw x and y values raw <- bland.altman.plot(group1 = df$x, group2 = df$y, main = 'Raw values', xlab = 'Average of x and y', ylab = 'Difference between x and y') ### ii) standardized x and y values std <- bland.altman.plot(group1 = df$x_std, group2 = df$y_std, main = 'Standardized values', xlab = 'Average of x and y', ylab = 'Difference between x and y') It achieves the same (in shape at least) result as the lm approach used by @Ashe, which is what you would expect since both methods are 'rescaling' the values.
Bland-Altman (Tukey Mean-Difference) plot for differing scales
Assuming you cannot convert both measures to a common set of units, and both measures are continuous and are roughly normally distributed, convert both to standardized scores (e.g., $z = \frac{x- \mu}
Bland-Altman (Tukey Mean-Difference) plot for differing scales Assuming you cannot convert both measures to a common set of units, and both measures are continuous and are roughly normally distributed, convert both to standardized scores (e.g., $z = \frac{x- \mu}{\sigma}$). Added in response to @Nick: Bland-Altman plots plot the difference between two measures against the average of the two measures, so to be meaningful the two measures need to be measured on the same scale. Converting two measures that are on different scales to dimensionless standardised scores allows you to do the necessary calculations. Added in response to @Nick (2): Not sure what you are saying. Here is a workable example: # Load packages library(dplyr) library(BlandAltmanLeh) # Using the same conditions @Ashe used ## Set seed set.seed(2063) ## Generate data x <- seq(1, 40) y <- 2 * x + rnorm(n = length(x), mean = 0, sd = 10) ## Put x and y into a dataframe df <- data_frame(x = x, y = y) %>% ## Add two new columns containing standarized values of x and y mutate(x_std = (x - mean(x)) / sd(x), y_std = (y - mean(y)) / sd(y)) ## Bland-Altman plots of: ### i) raw x and y values raw <- bland.altman.plot(group1 = df$x, group2 = df$y, main = 'Raw values', xlab = 'Average of x and y', ylab = 'Difference between x and y') ### ii) standardized x and y values std <- bland.altman.plot(group1 = df$x_std, group2 = df$y_std, main = 'Standardized values', xlab = 'Average of x and y', ylab = 'Difference between x and y') It achieves the same (in shape at least) result as the lm approach used by @Ashe, which is what you would expect since both methods are 'rescaling' the values.
Bland-Altman (Tukey Mean-Difference) plot for differing scales Assuming you cannot convert both measures to a common set of units, and both measures are continuous and are roughly normally distributed, convert both to standardized scores (e.g., $z = \frac{x- \mu}
32,833
Bland-Altman (Tukey Mean-Difference) plot for differing scales
I came up with a possible solution, so I will attempt to answer my own question. I would like some critical feedback from the community though. I know the two phenomenon are related, so I make the assumption that I can calibrate one scale to the other scale. I will then compare agreement between the predicted values from one method to the experimental values of the other method. This method still cannot find bias in means (as @Jeremy pointed out, this isn't meaningful in this context), but it still might allow a comparison of the 95% limits. Some code (in R) to compare: library(ggplot2) set.seed(2063) #Dr. Cochrane bland <- function(x, y, titl=''){ gg.data <- data.frame(x=x, y=y, avg=(x+y)/2, diff=(x-y)) g <- ggplot(gg.data, aes(x=avg, y=diff)) + geom_point(size=4) + theme_bw() g <- g + theme(text=element_text(size=24), axis.text=element_text(colour='black')) g <- g + labs(x='Average', y='Difference') + ggtitle(titl) g <- g + geom_hline(yintercept=mean(gg.data$diff), colour='chocolate', size=1) g <- g + geom_hline(yintercept=mean(gg.data$diff) + 1.96*sd(gg.data$diff), colour='dodgerblue3', size=1, linetype='dashed') g <- g + geom_hline(yintercept=mean(gg.data$diff) - 1.96*sd(gg.data$diff), colour='dodgerblue3', size=1, linetype='dashed') plot(g) } #Make some data x <- seq(1,40) y <- 2*x + rnorm(n=length(x), mean=0, sd=10) qplot(x,y) lm.data <- data.frame(x=x, y=y) lm(data=lm.data, y~x) #Bland-Altman of raw data bland(x,y,'Raw Data') #Bland-Altman of calibrated data orig.df <- data.frame(x=x) y.p <- predict(lm(data=lm.data, y~x), newdata=orig.df) bland(y.p,y, 'Calib Data') qplot(y.p,y) If I try to directly compare $x$ and $y$, as expected, I get what would be very poor agreement: However, if I "calibrate" the $x$ values to the $y$ scale using a linear model, agreement appears much better: Some key thoughts: I don't have to use a linear model. Any model that calibrates one scale to another would do nicely. This is functionally equivalent to plotting the model residuals against the mean of $y$ and the $\hat{y}$ value. This is my biggest concern. I want to compare agreement between methods, but I could be simply evaluating the quality of the model. My current thinking is that these two are equivalent. Given #2, by comparing the residuals of the model as a measure of agreement, the value of my comparison rests strongly on the assumption that the model used to calibrate is correct. To bring it all together, if I have selected a reasonable model (#1) to calibrate one scale to another (#3), then I can reasonably compare the residuals of that model (#2) as a measure of agreement. In the 2nd example graph above, I would interpret this as 95% of all deviations are within ~20 points on the $y$ scale. I can then evaluate if these limits are reasonable for the two methods I'm trying to study. As I said above, criticisms are welcome.
Bland-Altman (Tukey Mean-Difference) plot for differing scales
I came up with a possible solution, so I will attempt to answer my own question. I would like some critical feedback from the community though. I know the two phenomenon are related, so I make the as
Bland-Altman (Tukey Mean-Difference) plot for differing scales I came up with a possible solution, so I will attempt to answer my own question. I would like some critical feedback from the community though. I know the two phenomenon are related, so I make the assumption that I can calibrate one scale to the other scale. I will then compare agreement between the predicted values from one method to the experimental values of the other method. This method still cannot find bias in means (as @Jeremy pointed out, this isn't meaningful in this context), but it still might allow a comparison of the 95% limits. Some code (in R) to compare: library(ggplot2) set.seed(2063) #Dr. Cochrane bland <- function(x, y, titl=''){ gg.data <- data.frame(x=x, y=y, avg=(x+y)/2, diff=(x-y)) g <- ggplot(gg.data, aes(x=avg, y=diff)) + geom_point(size=4) + theme_bw() g <- g + theme(text=element_text(size=24), axis.text=element_text(colour='black')) g <- g + labs(x='Average', y='Difference') + ggtitle(titl) g <- g + geom_hline(yintercept=mean(gg.data$diff), colour='chocolate', size=1) g <- g + geom_hline(yintercept=mean(gg.data$diff) + 1.96*sd(gg.data$diff), colour='dodgerblue3', size=1, linetype='dashed') g <- g + geom_hline(yintercept=mean(gg.data$diff) - 1.96*sd(gg.data$diff), colour='dodgerblue3', size=1, linetype='dashed') plot(g) } #Make some data x <- seq(1,40) y <- 2*x + rnorm(n=length(x), mean=0, sd=10) qplot(x,y) lm.data <- data.frame(x=x, y=y) lm(data=lm.data, y~x) #Bland-Altman of raw data bland(x,y,'Raw Data') #Bland-Altman of calibrated data orig.df <- data.frame(x=x) y.p <- predict(lm(data=lm.data, y~x), newdata=orig.df) bland(y.p,y, 'Calib Data') qplot(y.p,y) If I try to directly compare $x$ and $y$, as expected, I get what would be very poor agreement: However, if I "calibrate" the $x$ values to the $y$ scale using a linear model, agreement appears much better: Some key thoughts: I don't have to use a linear model. Any model that calibrates one scale to another would do nicely. This is functionally equivalent to plotting the model residuals against the mean of $y$ and the $\hat{y}$ value. This is my biggest concern. I want to compare agreement between methods, but I could be simply evaluating the quality of the model. My current thinking is that these two are equivalent. Given #2, by comparing the residuals of the model as a measure of agreement, the value of my comparison rests strongly on the assumption that the model used to calibrate is correct. To bring it all together, if I have selected a reasonable model (#1) to calibrate one scale to another (#3), then I can reasonably compare the residuals of that model (#2) as a measure of agreement. In the 2nd example graph above, I would interpret this as 95% of all deviations are within ~20 points on the $y$ scale. I can then evaluate if these limits are reasonable for the two methods I'm trying to study. As I said above, criticisms are welcome.
Bland-Altman (Tukey Mean-Difference) plot for differing scales I came up with a possible solution, so I will attempt to answer my own question. I would like some critical feedback from the community though. I know the two phenomenon are related, so I make the as
32,834
MLE of a multivariate Hawkes process
There is a small mistake in the derivation. In line 5 (in the inserted figure) one needs $T = t_{1,F} = t_{2,G}$ for the identity to be correct, and this is generally not the case. The terms in the final sums should be $e^{-\beta_{i,1}(T - t_{1,f})} - 1$ and $e^{-\beta_{i,2}(T - t_{2,g})} - 1$, respectively. Otherwise the derivation looks correct. A slightly simpler derivation can take line 3 as a starting point. Then interchange the sums and integration with the resulting inner integral being from $t_{j,k}$ to $T$. It might be worth noting that for the Hawkes process considered here, it is possible to compute $\lambda_i^*(t_{i,j})$ recursively, which implies that the computational complexity of the log-likelihood can be made linear in the number of jumps (instead of quadratic as the double sum over the jumps suggests). I doubt that there are inconsistent versions of the likelihood in the literature, but there may, of course, be mistakes in some of the references. Another (likely) possibility is that the notation or the assumptions differ, or that the representations are, indeed, equivalent, but written in different ways. One possibility is that the baseline intensity $\lambda_i$ is omitted, so that the $\lambda_i T$ term disappears.
MLE of a multivariate Hawkes process
There is a small mistake in the derivation. In line 5 (in the inserted figure) one needs $T = t_{1,F} = t_{2,G}$ for the identity to be correct, and this is generally not the case. The terms in the fi
MLE of a multivariate Hawkes process There is a small mistake in the derivation. In line 5 (in the inserted figure) one needs $T = t_{1,F} = t_{2,G}$ for the identity to be correct, and this is generally not the case. The terms in the final sums should be $e^{-\beta_{i,1}(T - t_{1,f})} - 1$ and $e^{-\beta_{i,2}(T - t_{2,g})} - 1$, respectively. Otherwise the derivation looks correct. A slightly simpler derivation can take line 3 as a starting point. Then interchange the sums and integration with the resulting inner integral being from $t_{j,k}$ to $T$. It might be worth noting that for the Hawkes process considered here, it is possible to compute $\lambda_i^*(t_{i,j})$ recursively, which implies that the computational complexity of the log-likelihood can be made linear in the number of jumps (instead of quadratic as the double sum over the jumps suggests). I doubt that there are inconsistent versions of the likelihood in the literature, but there may, of course, be mistakes in some of the references. Another (likely) possibility is that the notation or the assumptions differ, or that the representations are, indeed, equivalent, but written in different ways. One possibility is that the baseline intensity $\lambda_i$ is omitted, so that the $\lambda_i T$ term disappears.
MLE of a multivariate Hawkes process There is a small mistake in the derivation. In line 5 (in the inserted figure) one needs $T = t_{1,F} = t_{2,G}$ for the identity to be correct, and this is generally not the case. The terms in the fi
32,835
Using empirical priors in PyMC
If you already have a prior $p(\theta)$ and a likelihood $p(x|\theta)$, then you can easily find the posterior $p(\theta|x)$ by multiplying these and normalizing: $$p(\theta|x)=\frac{p(\theta)p(x|\theta)}{p(x)}\propto p(\theta)p(x|\theta)$$ https://en.wikipedia.org/wiki/Posterior_probability The following code demonstrates estimating a posterior represented as a histogram, so it can be used as the next prior: import numpy as np import matplotlib.pyplot as plt from scipy.stats import gaussian_kde # using Beta distribution instead on Normal to get finite support support_size=30 old_data=np.concatenate([np.random.beta(70,20,1000), np.random.beta(10,40,1000), np.random.beta(80,80,1000)])*support_size new_data=np.concatenate([np.random.beta(20,10,1000), np.random.beta(10,20,1000)])*support_size # convert samples to histograms support=np.arange(support_size) old_hist=np.histogram(old_data,bins=support,normed=True)[0] new_hist=np.histogram(new_data,bins=support,normed=True)[0] # obtain smooth estimators from samples soft_old=gaussian_kde(old_data,bw_method=0.1) soft_new=gaussian_kde(new_data,bw_method=0.1) # posterior histogram (to be used as a prior for the next batch) post_hist=old_hist*new_hist post_hist/=post_hist.sum() # smooth posterior def posterior(x): return soft_old(x)*soft_new(x)/np.sum(soft_old(x)*soft_new(x))*x.size/support_size x=np.linspace(0,support_size,100) plt.bar(support[:-1],old_hist,alpha=0.5,label='p(z)',color='b') plt.bar(support[:-1],new_hist,alpha=0.5,label='p(x|z)',color='g') plt.plot(x,soft_old(x),label='p(z) smoothed',lw=2) plt.plot(x,soft_new(x),label='p(x|z) smoothed',lw=2) plt.legend(loc='best',fontsize='small') plt.show() plt.bar(support[:-1],post_hist,alpha=0.5,label='p(z|x)',color='r') plt.plot(x,soft_old(x),label='p(z) smoothed',lw=2) plt.plot(x,soft_new(x),label='p(x|z) smoothed',lw=2) plt.plot(x,posterior(x),label='p(z|x) smoothed',lw=2) plt.legend(loc='best',fontsize='small') plt.show() If, however, you want to combine your empirical prior with some MCMC models, I suggest you take a look at PyMC's Potential, one of its main applications is "soft data". Please update your question if you need an answer targeted towards that.
Using empirical priors in PyMC
If you already have a prior $p(\theta)$ and a likelihood $p(x|\theta)$, then you can easily find the posterior $p(\theta|x)$ by multiplying these and normalizing: $$p(\theta|x)=\frac{p(\theta)p(x|\the
Using empirical priors in PyMC If you already have a prior $p(\theta)$ and a likelihood $p(x|\theta)$, then you can easily find the posterior $p(\theta|x)$ by multiplying these and normalizing: $$p(\theta|x)=\frac{p(\theta)p(x|\theta)}{p(x)}\propto p(\theta)p(x|\theta)$$ https://en.wikipedia.org/wiki/Posterior_probability The following code demonstrates estimating a posterior represented as a histogram, so it can be used as the next prior: import numpy as np import matplotlib.pyplot as plt from scipy.stats import gaussian_kde # using Beta distribution instead on Normal to get finite support support_size=30 old_data=np.concatenate([np.random.beta(70,20,1000), np.random.beta(10,40,1000), np.random.beta(80,80,1000)])*support_size new_data=np.concatenate([np.random.beta(20,10,1000), np.random.beta(10,20,1000)])*support_size # convert samples to histograms support=np.arange(support_size) old_hist=np.histogram(old_data,bins=support,normed=True)[0] new_hist=np.histogram(new_data,bins=support,normed=True)[0] # obtain smooth estimators from samples soft_old=gaussian_kde(old_data,bw_method=0.1) soft_new=gaussian_kde(new_data,bw_method=0.1) # posterior histogram (to be used as a prior for the next batch) post_hist=old_hist*new_hist post_hist/=post_hist.sum() # smooth posterior def posterior(x): return soft_old(x)*soft_new(x)/np.sum(soft_old(x)*soft_new(x))*x.size/support_size x=np.linspace(0,support_size,100) plt.bar(support[:-1],old_hist,alpha=0.5,label='p(z)',color='b') plt.bar(support[:-1],new_hist,alpha=0.5,label='p(x|z)',color='g') plt.plot(x,soft_old(x),label='p(z) smoothed',lw=2) plt.plot(x,soft_new(x),label='p(x|z) smoothed',lw=2) plt.legend(loc='best',fontsize='small') plt.show() plt.bar(support[:-1],post_hist,alpha=0.5,label='p(z|x)',color='r') plt.plot(x,soft_old(x),label='p(z) smoothed',lw=2) plt.plot(x,soft_new(x),label='p(x|z) smoothed',lw=2) plt.plot(x,posterior(x),label='p(z|x) smoothed',lw=2) plt.legend(loc='best',fontsize='small') plt.show() If, however, you want to combine your empirical prior with some MCMC models, I suggest you take a look at PyMC's Potential, one of its main applications is "soft data". Please update your question if you need an answer targeted towards that.
Using empirical priors in PyMC If you already have a prior $p(\theta)$ and a likelihood $p(x|\theta)$, then you can easily find the posterior $p(\theta|x)$ by multiplying these and normalizing: $$p(\theta|x)=\frac{p(\theta)p(x|\the
32,836
What is the equivalent for cdfs of MCMC for pdfs?
This is an attempt which I didn't completely work through, but too long for the comments section. It might be useful to put it here as another basic alternative for very low $k$. It does not require explicit differentiation + MCMC (but it does perform numerical differentiation, without MCMC). Algorithm For small $\varepsilon > 0$: Draw $u_1 \sim C_1 \equiv C(U_1 = u_1,U_2 = 1,\ldots, U_k = 1)$. This can be easily done by drawing $\eta \sim \text{Uniform}[0,1]$ and computing $C_1^{-1}(\eta)$ (which, if anything, can be easily done numerically). This is a draw from the marginal pdf $u_1 \sim \kappa(u_1)$. For $j = 2\ldots k$ Define $$D_j^{(\varepsilon)}(u_j|u_1,\ldots,u_{j-1}) \equiv \Pr\left( u_1 - \frac{\varepsilon}{2} \le U_1 \le u_1 + \frac{\varepsilon}{2} \land \dots \land u_{j-1} - \frac{\varepsilon}{2} \le U_{j-1} \le u_{j-1} + \frac{\varepsilon}{2} \land U_{j} \le u_j \land U_{j+1} \le 1 \dots \land U_{k} \le 1\right),$$ which can be computed as a difference of $C$ evaluated at various points (which in the naive way needs $O(2^{j-1})$ evaluations of $C$ for every evaluation of $D_j^{(\varepsilon)}$). $D_j^{(\varepsilon)}$ is the $\varepsilon$-approximate marginal conditional of $u_j$ given $u_1, \ldots, u_{j-1}$. Draw $u_j \sim D_j^{(\varepsilon)}(u_j|u_1,\ldots,u_{j-1})$ as per point 1, which again should be easy to do with numerical inversion. Discussion This algorithm should generate i.i.d. samples from an $\varepsilon$-approximation of $C(u_1,\ldots,u_k)$, where $\varepsilon$ merely depends on numerical precision. There are practical technicalities to refine the approximation and make it numerically stable. The obvious problem is that computational complexity scales as $O(2^{k})$, so, to put it generously, this is not very general in terms of $k$ (but the example you linked had $k = 3$, so perhaps this method is not completely useless -- I am not familiar with the typical scenario in which you would have access to the cdf). On the other hand, for very low-dimensional distributions it could work, and the cost is compensated by the fact that, unlike the other generic solution of "differentiating + MCMC", there is no need to compute derivatives, samples are i.i.d. and there is no tuning (aside the choice of $\varepsilon$, which should just be something slightly above machine precision). And perhaps there are ways to make this better than the naive approach. As I mentioned, this is off the top of my head so there might be other issues.
What is the equivalent for cdfs of MCMC for pdfs?
This is an attempt which I didn't completely work through, but too long for the comments section. It might be useful to put it here as another basic alternative for very low $k$. It does not require e
What is the equivalent for cdfs of MCMC for pdfs? This is an attempt which I didn't completely work through, but too long for the comments section. It might be useful to put it here as another basic alternative for very low $k$. It does not require explicit differentiation + MCMC (but it does perform numerical differentiation, without MCMC). Algorithm For small $\varepsilon > 0$: Draw $u_1 \sim C_1 \equiv C(U_1 = u_1,U_2 = 1,\ldots, U_k = 1)$. This can be easily done by drawing $\eta \sim \text{Uniform}[0,1]$ and computing $C_1^{-1}(\eta)$ (which, if anything, can be easily done numerically). This is a draw from the marginal pdf $u_1 \sim \kappa(u_1)$. For $j = 2\ldots k$ Define $$D_j^{(\varepsilon)}(u_j|u_1,\ldots,u_{j-1}) \equiv \Pr\left( u_1 - \frac{\varepsilon}{2} \le U_1 \le u_1 + \frac{\varepsilon}{2} \land \dots \land u_{j-1} - \frac{\varepsilon}{2} \le U_{j-1} \le u_{j-1} + \frac{\varepsilon}{2} \land U_{j} \le u_j \land U_{j+1} \le 1 \dots \land U_{k} \le 1\right),$$ which can be computed as a difference of $C$ evaluated at various points (which in the naive way needs $O(2^{j-1})$ evaluations of $C$ for every evaluation of $D_j^{(\varepsilon)}$). $D_j^{(\varepsilon)}$ is the $\varepsilon$-approximate marginal conditional of $u_j$ given $u_1, \ldots, u_{j-1}$. Draw $u_j \sim D_j^{(\varepsilon)}(u_j|u_1,\ldots,u_{j-1})$ as per point 1, which again should be easy to do with numerical inversion. Discussion This algorithm should generate i.i.d. samples from an $\varepsilon$-approximation of $C(u_1,\ldots,u_k)$, where $\varepsilon$ merely depends on numerical precision. There are practical technicalities to refine the approximation and make it numerically stable. The obvious problem is that computational complexity scales as $O(2^{k})$, so, to put it generously, this is not very general in terms of $k$ (but the example you linked had $k = 3$, so perhaps this method is not completely useless -- I am not familiar with the typical scenario in which you would have access to the cdf). On the other hand, for very low-dimensional distributions it could work, and the cost is compensated by the fact that, unlike the other generic solution of "differentiating + MCMC", there is no need to compute derivatives, samples are i.i.d. and there is no tuning (aside the choice of $\varepsilon$, which should just be something slightly above machine precision). And perhaps there are ways to make this better than the naive approach. As I mentioned, this is off the top of my head so there might be other issues.
What is the equivalent for cdfs of MCMC for pdfs? This is an attempt which I didn't completely work through, but too long for the comments section. It might be useful to put it here as another basic alternative for very low $k$. It does not require e
32,837
What is the best measure for unbalanced multi-class classification problem?
My apologies, just saw how old the question was -- why was it on the top of the list? Answer (which is as good as it gets with limited information): Of what kind is the data? You should probably never use detection accuracy or certainly not when your classifier outputs a score or probability. How do you classify? The underlying loss function of your classification algorithm is usually a good measure to start with when it comes to evaluation performance. I would not lean towards 1~vs~all analytic approaches, such as the precision recall curve(s). It won't get you very far -- you would have to test each class against all others and then combine these results somehow. Harmonic mean, a-priori likelihood given the class to be tested, ... ? It is unclear what these measures will actually tell you. If you have probabilistic output , the negative log likelihood is a good place to start with. If you already have 70% accuracy for class 1, which means 70% of your dataset are class 1, then you might be in the situation that your classifier gives up on some smaller classes and rather tries to satisfy a possible regularization term. But this is all really dependent on your classification scheme. If you want a clearer answer, you need to tell us the whole story. ;)
What is the best measure for unbalanced multi-class classification problem?
My apologies, just saw how old the question was -- why was it on the top of the list? Answer (which is as good as it gets with limited information): Of what kind is the data? You should probably neve
What is the best measure for unbalanced multi-class classification problem? My apologies, just saw how old the question was -- why was it on the top of the list? Answer (which is as good as it gets with limited information): Of what kind is the data? You should probably never use detection accuracy or certainly not when your classifier outputs a score or probability. How do you classify? The underlying loss function of your classification algorithm is usually a good measure to start with when it comes to evaluation performance. I would not lean towards 1~vs~all analytic approaches, such as the precision recall curve(s). It won't get you very far -- you would have to test each class against all others and then combine these results somehow. Harmonic mean, a-priori likelihood given the class to be tested, ... ? It is unclear what these measures will actually tell you. If you have probabilistic output , the negative log likelihood is a good place to start with. If you already have 70% accuracy for class 1, which means 70% of your dataset are class 1, then you might be in the situation that your classifier gives up on some smaller classes and rather tries to satisfy a possible regularization term. But this is all really dependent on your classification scheme. If you want a clearer answer, you need to tell us the whole story. ;)
What is the best measure for unbalanced multi-class classification problem? My apologies, just saw how old the question was -- why was it on the top of the list? Answer (which is as good as it gets with limited information): Of what kind is the data? You should probably neve
32,838
What is the best measure for unbalanced multi-class classification problem?
Try the F1-score, which balances precision and recall. Precision can be calculated by the number of true positives divided by total positives, and recall by the number of true positives divided by the total number of elements that actually belong to the positive class. These are weighted by a harmonic mean.
What is the best measure for unbalanced multi-class classification problem?
Try the F1-score, which balances precision and recall. Precision can be calculated by the number of true positives divided by total positives, and recall by the number of true positives divided by the
What is the best measure for unbalanced multi-class classification problem? Try the F1-score, which balances precision and recall. Precision can be calculated by the number of true positives divided by total positives, and recall by the number of true positives divided by the total number of elements that actually belong to the positive class. These are weighted by a harmonic mean.
What is the best measure for unbalanced multi-class classification problem? Try the F1-score, which balances precision and recall. Precision can be calculated by the number of true positives divided by total positives, and recall by the number of true positives divided by the
32,839
Does it make sense to generate prediction intervals for the estimates of a logistic regression?
In the log-odds space, every outcome is either $-\infty$ or $\infty$, so a prediction interval still makes no sense. You can achieve your goal by giving a confidence interval for the probability.
Does it make sense to generate prediction intervals for the estimates of a logistic regression?
In the log-odds space, every outcome is either $-\infty$ or $\infty$, so a prediction interval still makes no sense. You can achieve your goal by giving a confidence interval for the probability.
Does it make sense to generate prediction intervals for the estimates of a logistic regression? In the log-odds space, every outcome is either $-\infty$ or $\infty$, so a prediction interval still makes no sense. You can achieve your goal by giving a confidence interval for the probability.
Does it make sense to generate prediction intervals for the estimates of a logistic regression? In the log-odds space, every outcome is either $-\infty$ or $\infty$, so a prediction interval still makes no sense. You can achieve your goal by giving a confidence interval for the probability.
32,840
Does it make sense to generate prediction intervals for the estimates of a logistic regression?
That's a little bit confusing. If you are doing logistic regression, you predict binary class memberships, e.g., 0 and 1. The class label is basically the outcome of a thresholding function that is $sigmoid(z) \ge 0.5 \rightarrow 1 $, and 0 otherwise. where $z = w^Tx$ (w=weights, x=sample), and $sigmoid$ is the inverse-logit function $\frac{1}{1 + e^{-z}}$ or equivalently to save this one step of computation you can directly compute: $z \ge 0 \rightarrow 1 $, and 0 otherwise. The conditional probability $p = sigmoid(z)$ is basically your "confidence". I think what you want is the confidence of this probability? You could do this by calculating the standard error of your prediction on the linear scale $w^Tx$, and then calculate the upper and lower bounds of your, e.g., 95%, confidence interval via $[ pred. value +/- 1.96\times std. err ]$. After you obtained the upper and lower bounds, you can use the sigmoid function to transform those onto the logit scale.
Does it make sense to generate prediction intervals for the estimates of a logistic regression?
That's a little bit confusing. If you are doing logistic regression, you predict binary class memberships, e.g., 0 and 1. The class label is basically the outcome of a thresholding function that is $
Does it make sense to generate prediction intervals for the estimates of a logistic regression? That's a little bit confusing. If you are doing logistic regression, you predict binary class memberships, e.g., 0 and 1. The class label is basically the outcome of a thresholding function that is $sigmoid(z) \ge 0.5 \rightarrow 1 $, and 0 otherwise. where $z = w^Tx$ (w=weights, x=sample), and $sigmoid$ is the inverse-logit function $\frac{1}{1 + e^{-z}}$ or equivalently to save this one step of computation you can directly compute: $z \ge 0 \rightarrow 1 $, and 0 otherwise. The conditional probability $p = sigmoid(z)$ is basically your "confidence". I think what you want is the confidence of this probability? You could do this by calculating the standard error of your prediction on the linear scale $w^Tx$, and then calculate the upper and lower bounds of your, e.g., 95%, confidence interval via $[ pred. value +/- 1.96\times std. err ]$. After you obtained the upper and lower bounds, you can use the sigmoid function to transform those onto the logit scale.
Does it make sense to generate prediction intervals for the estimates of a logistic regression? That's a little bit confusing. If you are doing logistic regression, you predict binary class memberships, e.g., 0 and 1. The class label is basically the outcome of a thresholding function that is $
32,841
Confidence interval of the mean response from nonlinear model
I haven't investigated why the two CIs are unequal because I think both are wrong. The common problem is that the estimated parameters are likely correlated, perhaps heavily so, but neither procedure appears to account for that. (In the realistic examples shown below, the correlation ranges from -99.3% to -99.8%.) To find confidence bands, start with the model in the alternative form $$y = \exp(\alpha + \beta \log(x)) + \varepsilon.$$ The error term is $\varepsilon$ and $(\alpha,\beta)$ is the model parameter to be estimated. As in the question I will use nonlinear least squares to fit the model. (Taking logarithms of both sides will be vain, because they cannot simplify the right hand side due to the additive error term. Indeed, as the examples below indicate, it is possible--and perfectly OK--for observed values of $y$ to be negative.) One output of the fitting procedure will be the variance-covariance matrix of the parameter estimates, $$\mathbb{V} = \pmatrix{\sigma^2_\alpha & \rho \sigma_\alpha\sigma_\beta \\ \rho \sigma_\alpha\sigma_\beta & \sigma^2_\beta}.$$ The square roots of the diagonal elements, $\sigma_\alpha$ and $\sigma_\beta,$ are the standard errors of the estimated parameters $\hat\alpha$ and $\hat\beta,$ respectively. $\rho$ estimates the correlation coefficient of those estimates. The predicted value at any (positive) number $x_0$ is $$\hat{y}(x_0) = \exp(\hat\alpha + \hat\beta\log(x_0)) = e^\hat\alpha x_0^\hat\beta = \frac{\hat A}{\hat\beta} x_0^\hat\beta$$ where $\hat A=\hat\beta e^\hat\alpha,$ showing this really is the intended model as formulated in the question. Taking logarithms (which now is possible because there are no error terms in the equation) gives $$\log(\hat y(x_0)) = \hat\alpha + \hat\beta \log(x_0) = (1, \log(x_0))\ (\hat\alpha,\hat\beta)^\prime.$$ Thus the standard error of the logarithm of the estimated response is $$SE(x_0) = SE(\log(\hat y(x_0))) = \sqrt{(1, \log(x_0))\mathbb{V}(1, \log(x_0))^\prime}.$$ Being able to formulate the SE in this way was the whole point of the initial reparameterization of the model: the parameters $\alpha$ and $\beta$ (or, rather, their estimates) enter linearly into the calculation of the standard error of $\hat y.$ There is no need to expand Taylor series or "propagate error." To construct a $100(1-a)\%$ confidence interval for $\log(y(x_0)),$ do as usual and set the endpoints at $$\log(\hat y(x_0)) \pm Z_{a/2}\, SE(x_0)$$ where $\Phi(Z_{a/2}) = a/2$ is the $a/2$ percentage point of the standard Normal distribution. Because this procedure aims to enclose the true response with $100(1-a)\%$ probability, this coverage property is preserved upon taking antilogarithms. That is, A $100(1-a)\%$ confidence interval for $y(x_0)$ has endpoints $$\begin{aligned} &\exp\left(\log(\hat y(x_0)) \pm Z_{a/2}\, SE(x_0)\right)\\ &= \left[\frac{\hat y(x_0)}{\exp\left(|Z_{a/2}| SE(x_0)\right)},\ \hat y(x_0)\exp\left(|Z_{a/2}| SE(x_0)\right)\right]. \end{aligned}$$ By doing this for a sequence of values of $x_0$ you can construct confidence bands for the regression. If all is well (that is, all model assumptions are accurate and there are enough data to assure the sampling distribution of $(\hat\alpha,\hat\beta)$ is approximately Normal), you can hope that $100(1-a)\%$ of these bands envelop the graph of the true response function $x\to \exp(\alpha + \beta\log(x)).$ To illustrate, I generated $20$ datasets having properties similar to those in the problem: the true $\alpha$ and $\beta$ are close to the estimates reported in the question and the error variance $\operatorname{Var}(\varepsilon)$ was set to make the sum of squares of residuals close to that reported in the question (near 90,000). I used the foregoing technique to fit this model to each dataset and then for each one plotted (a) the data, (b) the $90\%$ confidence band and, for reference, (c) the graph of the true response function. The latter is colored red wherever it lies beyond the confidence band. The test of this approach is that about $10\%,$ or two of the $20,$ panels ought to show red portions: and that's exactly what happened (in iterations 10 and 16). For details, consult this R code that generated and plotted the simulations. # # Describe the model and the data. # x <- seq(10, 100, length.out=51) alpha <- log(7.5) beta <- 0.6 sigma <- sqrt(90000 / length(x)) # Error SD a <- 0.10 # Test level nrow <- 4 # Rows for the simulation ncol <- 5 # Columns for the simulation x.0 <- seq(min(x)*0.5, max(x)*1.1, length.out=101) # Prediction points f <- function(x, theta) exp(theta[1] + theta[2]*log(x)) X <- data.frame(x=x, y.0=f(x, c(alpha, beta))) # # Create the datasets. # set.seed(17) data.lst <- lapply(seq_len(nrow*ncol), function(i) { X$y <- X$y.0 + rnorm(nrow(X), 0, sigma) X$Iteration <- i X }) # # Fit the model to the datasets. # Z <- qnorm(a/2) # For computing a 100(1-a)% confidence band results.lst <- lapply(seq_along(data.lst), function(i) { # # Fit the data. # X <- data.lst[[i]] fit <- nls(y ~ f(x, c(alpha, beta)), data=X, start=c(alpha=0, beta=0)) print(fit) # (Optional) # # Compute the SEs for log(y). # V <- vcov(fit) se2 <- sapply(log(x.0), function(xi) { u <- c(1, xi) u %*% V %*% u }) se <- sqrt(se2) # # Compute the CIs. # y <- log(f(x.0, coefficients(fit))) # The estimated log responses at the prediction points data.frame(Iteration = i, x = x.0, y = exp(y), y.lower = exp(y + Z * se), y.upper = exp(y - Z * se)) }) # # Plot the results. # X <- do.call(rbind, data.lst) Y <- do.call(rbind, results.lst) Y$y.0 <- f(Y$x, c(alpha, beta)) # Reference curve library(ggplot2) ggplot(Y, aes(x)) + geom_ribbon(aes(ymin=y.lower, ymax=y.upper), fill="Gray") + geom_line(aes(y=y.0, color=(y.lower <= y.0 & y.0 <= y.upper)), size=1, show.legend=FALSE) + geom_point(aes(y=y), data=X, alpha=1/4) + facet_wrap(~ Iteration, nrow=nrow) + ylab("y") + ggtitle(paste0("Simulated datasets and ", 100*a, "% bands"), "True model shown (in color) for reference")
Confidence interval of the mean response from nonlinear model
I haven't investigated why the two CIs are unequal because I think both are wrong. The common problem is that the estimated parameters are likely correlated, perhaps heavily so, but neither procedure
Confidence interval of the mean response from nonlinear model I haven't investigated why the two CIs are unequal because I think both are wrong. The common problem is that the estimated parameters are likely correlated, perhaps heavily so, but neither procedure appears to account for that. (In the realistic examples shown below, the correlation ranges from -99.3% to -99.8%.) To find confidence bands, start with the model in the alternative form $$y = \exp(\alpha + \beta \log(x)) + \varepsilon.$$ The error term is $\varepsilon$ and $(\alpha,\beta)$ is the model parameter to be estimated. As in the question I will use nonlinear least squares to fit the model. (Taking logarithms of both sides will be vain, because they cannot simplify the right hand side due to the additive error term. Indeed, as the examples below indicate, it is possible--and perfectly OK--for observed values of $y$ to be negative.) One output of the fitting procedure will be the variance-covariance matrix of the parameter estimates, $$\mathbb{V} = \pmatrix{\sigma^2_\alpha & \rho \sigma_\alpha\sigma_\beta \\ \rho \sigma_\alpha\sigma_\beta & \sigma^2_\beta}.$$ The square roots of the diagonal elements, $\sigma_\alpha$ and $\sigma_\beta,$ are the standard errors of the estimated parameters $\hat\alpha$ and $\hat\beta,$ respectively. $\rho$ estimates the correlation coefficient of those estimates. The predicted value at any (positive) number $x_0$ is $$\hat{y}(x_0) = \exp(\hat\alpha + \hat\beta\log(x_0)) = e^\hat\alpha x_0^\hat\beta = \frac{\hat A}{\hat\beta} x_0^\hat\beta$$ where $\hat A=\hat\beta e^\hat\alpha,$ showing this really is the intended model as formulated in the question. Taking logarithms (which now is possible because there are no error terms in the equation) gives $$\log(\hat y(x_0)) = \hat\alpha + \hat\beta \log(x_0) = (1, \log(x_0))\ (\hat\alpha,\hat\beta)^\prime.$$ Thus the standard error of the logarithm of the estimated response is $$SE(x_0) = SE(\log(\hat y(x_0))) = \sqrt{(1, \log(x_0))\mathbb{V}(1, \log(x_0))^\prime}.$$ Being able to formulate the SE in this way was the whole point of the initial reparameterization of the model: the parameters $\alpha$ and $\beta$ (or, rather, their estimates) enter linearly into the calculation of the standard error of $\hat y.$ There is no need to expand Taylor series or "propagate error." To construct a $100(1-a)\%$ confidence interval for $\log(y(x_0)),$ do as usual and set the endpoints at $$\log(\hat y(x_0)) \pm Z_{a/2}\, SE(x_0)$$ where $\Phi(Z_{a/2}) = a/2$ is the $a/2$ percentage point of the standard Normal distribution. Because this procedure aims to enclose the true response with $100(1-a)\%$ probability, this coverage property is preserved upon taking antilogarithms. That is, A $100(1-a)\%$ confidence interval for $y(x_0)$ has endpoints $$\begin{aligned} &\exp\left(\log(\hat y(x_0)) \pm Z_{a/2}\, SE(x_0)\right)\\ &= \left[\frac{\hat y(x_0)}{\exp\left(|Z_{a/2}| SE(x_0)\right)},\ \hat y(x_0)\exp\left(|Z_{a/2}| SE(x_0)\right)\right]. \end{aligned}$$ By doing this for a sequence of values of $x_0$ you can construct confidence bands for the regression. If all is well (that is, all model assumptions are accurate and there are enough data to assure the sampling distribution of $(\hat\alpha,\hat\beta)$ is approximately Normal), you can hope that $100(1-a)\%$ of these bands envelop the graph of the true response function $x\to \exp(\alpha + \beta\log(x)).$ To illustrate, I generated $20$ datasets having properties similar to those in the problem: the true $\alpha$ and $\beta$ are close to the estimates reported in the question and the error variance $\operatorname{Var}(\varepsilon)$ was set to make the sum of squares of residuals close to that reported in the question (near 90,000). I used the foregoing technique to fit this model to each dataset and then for each one plotted (a) the data, (b) the $90\%$ confidence band and, for reference, (c) the graph of the true response function. The latter is colored red wherever it lies beyond the confidence band. The test of this approach is that about $10\%,$ or two of the $20,$ panels ought to show red portions: and that's exactly what happened (in iterations 10 and 16). For details, consult this R code that generated and plotted the simulations. # # Describe the model and the data. # x <- seq(10, 100, length.out=51) alpha <- log(7.5) beta <- 0.6 sigma <- sqrt(90000 / length(x)) # Error SD a <- 0.10 # Test level nrow <- 4 # Rows for the simulation ncol <- 5 # Columns for the simulation x.0 <- seq(min(x)*0.5, max(x)*1.1, length.out=101) # Prediction points f <- function(x, theta) exp(theta[1] + theta[2]*log(x)) X <- data.frame(x=x, y.0=f(x, c(alpha, beta))) # # Create the datasets. # set.seed(17) data.lst <- lapply(seq_len(nrow*ncol), function(i) { X$y <- X$y.0 + rnorm(nrow(X), 0, sigma) X$Iteration <- i X }) # # Fit the model to the datasets. # Z <- qnorm(a/2) # For computing a 100(1-a)% confidence band results.lst <- lapply(seq_along(data.lst), function(i) { # # Fit the data. # X <- data.lst[[i]] fit <- nls(y ~ f(x, c(alpha, beta)), data=X, start=c(alpha=0, beta=0)) print(fit) # (Optional) # # Compute the SEs for log(y). # V <- vcov(fit) se2 <- sapply(log(x.0), function(xi) { u <- c(1, xi) u %*% V %*% u }) se <- sqrt(se2) # # Compute the CIs. # y <- log(f(x.0, coefficients(fit))) # The estimated log responses at the prediction points data.frame(Iteration = i, x = x.0, y = exp(y), y.lower = exp(y + Z * se), y.upper = exp(y - Z * se)) }) # # Plot the results. # X <- do.call(rbind, data.lst) Y <- do.call(rbind, results.lst) Y$y.0 <- f(Y$x, c(alpha, beta)) # Reference curve library(ggplot2) ggplot(Y, aes(x)) + geom_ribbon(aes(ymin=y.lower, ymax=y.upper), fill="Gray") + geom_line(aes(y=y.0, color=(y.lower <= y.0 & y.0 <= y.upper)), size=1, show.legend=FALSE) + geom_point(aes(y=y), data=X, alpha=1/4) + facet_wrap(~ Iteration, nrow=nrow) + ylab("y") + ggtitle(paste0("Simulated datasets and ", 100*a, "% bands"), "True model shown (in color) for reference")
Confidence interval of the mean response from nonlinear model I haven't investigated why the two CIs are unequal because I think both are wrong. The common problem is that the estimated parameters are likely correlated, perhaps heavily so, but neither procedure
32,842
Use of information theory in applied data science
So the first part of question: Do data scientists need to know information theory? I thought the answer is no until very recently. The reason I changed my mind is one crucial component: noise. Many machine learning models (both stochastic or not) use noise as part of their encoding and transformation process and in many of these models, you need to infer the probability which the noise affected after decoding the transformed output of the model. I think that this is a core part of information theory. Not only that, in deep learning, KL divergence is a very important measure used that also comes from Information Theory. Second part of the question: I think the best source is David MacKay's Information Theory, Inference and Learning Algorithms. He starts with Information Theory and takes those ideas into both inference and even neural networks. The Pdf is free on Dave's website and the lectures are online which are great
Use of information theory in applied data science
So the first part of question: Do data scientists need to know information theory? I thought the answer is no until very recently. The reason I changed my mind is one crucial component: noise. Many m
Use of information theory in applied data science So the first part of question: Do data scientists need to know information theory? I thought the answer is no until very recently. The reason I changed my mind is one crucial component: noise. Many machine learning models (both stochastic or not) use noise as part of their encoding and transformation process and in many of these models, you need to infer the probability which the noise affected after decoding the transformed output of the model. I think that this is a core part of information theory. Not only that, in deep learning, KL divergence is a very important measure used that also comes from Information Theory. Second part of the question: I think the best source is David MacKay's Information Theory, Inference and Learning Algorithms. He starts with Information Theory and takes those ideas into both inference and even neural networks. The Pdf is free on Dave's website and the lectures are online which are great
Use of information theory in applied data science So the first part of question: Do data scientists need to know information theory? I thought the answer is no until very recently. The reason I changed my mind is one crucial component: noise. Many m
32,843
What would be an example of when L2 is a good loss function for computing a posterior loss?
L2 is "easy." It's what you get by default if you do standard matrix methods like linear regression, SVD, etc. Until we had computers, L2 was the only game in town for a lot of problems, which is why everyone uses ANOVA, t-tests, etc. It's also easier to get an exact answer using L2 loss with many fancier methods like Gaussian processes than it is to get an exact answer using other loss functions. Relatedly, you can get the L2 loss exactly using a 2nd-order Taylor approximation, which isn't the case for most loss functions (e.g. cross-entropy, ). This makes optimization easy with 2nd-order methods like Newton's method. Lots of methods for dealing with other loss functions still use methods for L2 loss under-the-hood for the same reason (e.g. iteratively reweighted least squares, integrated nested Laplace approximations). L2 is closely related to Gaussian distributions, and the Central Limit Theorem makes Gaussian distributions common. If your data-generating process is (conditionally) Gaussian, then L2 is the most efficient estimator. L2 loss decomposes nicely, because of the law of total variance. That makes certain graphical models with latent variables especially easy to fit. L2 penalizes terrible predictions disproportionately. This can be good or bad, but it's often pretty reasonable. An hour-long wait might be four times as bad as a 30-minute wait, on average, if it causes lots of people to miss their appointments.
What would be an example of when L2 is a good loss function for computing a posterior loss?
L2 is "easy." It's what you get by default if you do standard matrix methods like linear regression, SVD, etc. Until we had computers, L2 was the only game in town for a lot of problems, which is why
What would be an example of when L2 is a good loss function for computing a posterior loss? L2 is "easy." It's what you get by default if you do standard matrix methods like linear regression, SVD, etc. Until we had computers, L2 was the only game in town for a lot of problems, which is why everyone uses ANOVA, t-tests, etc. It's also easier to get an exact answer using L2 loss with many fancier methods like Gaussian processes than it is to get an exact answer using other loss functions. Relatedly, you can get the L2 loss exactly using a 2nd-order Taylor approximation, which isn't the case for most loss functions (e.g. cross-entropy, ). This makes optimization easy with 2nd-order methods like Newton's method. Lots of methods for dealing with other loss functions still use methods for L2 loss under-the-hood for the same reason (e.g. iteratively reweighted least squares, integrated nested Laplace approximations). L2 is closely related to Gaussian distributions, and the Central Limit Theorem makes Gaussian distributions common. If your data-generating process is (conditionally) Gaussian, then L2 is the most efficient estimator. L2 loss decomposes nicely, because of the law of total variance. That makes certain graphical models with latent variables especially easy to fit. L2 penalizes terrible predictions disproportionately. This can be good or bad, but it's often pretty reasonable. An hour-long wait might be four times as bad as a 30-minute wait, on average, if it causes lots of people to miss their appointments.
What would be an example of when L2 is a good loss function for computing a posterior loss? L2 is "easy." It's what you get by default if you do standard matrix methods like linear regression, SVD, etc. Until we had computers, L2 was the only game in town for a lot of problems, which is why
32,844
Which model to use for survival analysis when there are different times of entry into the sample?
I would have modelled this as a plain Cox model, or perhaps a Cox Frailty model. You do not need to worry about the timing of entry into your study when using Cox regression (unless there is time bias, which I do not notice in your description). You don't need the extended Cox model with start and stop intervals, just calculate the observation time (EndTime - StartTime) and enter it into Surv(observation_time, event). You should account for the repeated measurements on the same individual by either using: (i) mixed effects cox models, (ii) the cluster function in coxph.
Which model to use for survival analysis when there are different times of entry into the sample?
I would have modelled this as a plain Cox model, or perhaps a Cox Frailty model. You do not need to worry about the timing of entry into your study when using Cox regression (unless there is time bia
Which model to use for survival analysis when there are different times of entry into the sample? I would have modelled this as a plain Cox model, or perhaps a Cox Frailty model. You do not need to worry about the timing of entry into your study when using Cox regression (unless there is time bias, which I do not notice in your description). You don't need the extended Cox model with start and stop intervals, just calculate the observation time (EndTime - StartTime) and enter it into Surv(observation_time, event). You should account for the repeated measurements on the same individual by either using: (i) mixed effects cox models, (ii) the cluster function in coxph.
Which model to use for survival analysis when there are different times of entry into the sample? I would have modelled this as a plain Cox model, or perhaps a Cox Frailty model. You do not need to worry about the timing of entry into your study when using Cox regression (unless there is time bia
32,845
Which model to use for survival analysis when there are different times of entry into the sample?
The Cox Proportional Hazards model in R allows for late entry into the sample. One can enter the parameters as follows: cox.model <- coxph(Surv(startTime, endTime, event) ~ X + frailty(ID), data)
Which model to use for survival analysis when there are different times of entry into the sample?
The Cox Proportional Hazards model in R allows for late entry into the sample. One can enter the parameters as follows: cox.model <- coxph(Surv(startTime, endTime, event) ~ X + frailty(ID), data)
Which model to use for survival analysis when there are different times of entry into the sample? The Cox Proportional Hazards model in R allows for late entry into the sample. One can enter the parameters as follows: cox.model <- coxph(Surv(startTime, endTime, event) ~ X + frailty(ID), data)
Which model to use for survival analysis when there are different times of entry into the sample? The Cox Proportional Hazards model in R allows for late entry into the sample. One can enter the parameters as follows: cox.model <- coxph(Surv(startTime, endTime, event) ~ X + frailty(ID), data)
32,846
What is PCA doing with autocorrelated data?
Let me convert my earlier comment to an answer. Do you imagine rows in your data matrix to be the variables or the samples? I will assume they are samples: i.e. you have $n=32$ different time series (samples). Then, if all $n=32$ rows are identical, but only circularly shifted by $1$ position each, then the $n\times n$ Gram matrix of your data consisting of dot products between all pairs of rows will have Toeplitz structure: high values close to the diagonal and gradually decreasing to zero values away from it. Toeplitz matrices have consecutive Fourier modes as their eigenvectors (and eigenvectors of the Gram matrix are principal components, up to the scaling), so yes to your Q1: it is no surprise that you get sinusoidal waves of increasing frequencies as PCs. No idea if it can be useful (Q2). In my experience, it tends to appear as an annoying artifact. I.e. people have some data, get something resembling Fourier modes out of PCA and start wondering what they could mean, whereas they are simply due to some time shifts in the original time series.
What is PCA doing with autocorrelated data?
Let me convert my earlier comment to an answer. Do you imagine rows in your data matrix to be the variables or the samples? I will assume they are samples: i.e. you have $n=32$ different time series (
What is PCA doing with autocorrelated data? Let me convert my earlier comment to an answer. Do you imagine rows in your data matrix to be the variables or the samples? I will assume they are samples: i.e. you have $n=32$ different time series (samples). Then, if all $n=32$ rows are identical, but only circularly shifted by $1$ position each, then the $n\times n$ Gram matrix of your data consisting of dot products between all pairs of rows will have Toeplitz structure: high values close to the diagonal and gradually decreasing to zero values away from it. Toeplitz matrices have consecutive Fourier modes as their eigenvectors (and eigenvectors of the Gram matrix are principal components, up to the scaling), so yes to your Q1: it is no surprise that you get sinusoidal waves of increasing frequencies as PCs. No idea if it can be useful (Q2). In my experience, it tends to appear as an annoying artifact. I.e. people have some data, get something resembling Fourier modes out of PCA and start wondering what they could mean, whereas they are simply due to some time shifts in the original time series.
What is PCA doing with autocorrelated data? Let me convert my earlier comment to an answer. Do you imagine rows in your data matrix to be the variables or the samples? I will assume they are samples: i.e. you have $n=32$ different time series (
32,847
Is the negative binomial not expressible as in the exponential family if there are 2 unknowns?
If you look at the density of the Negative Binomial distribution against the counting measure over the set of integers, \begin{align*}p(x|N,p)&={x+N-1\choose{N-1}}p^N(1-p)^x\\ &= \frac{(x+N-1)!}{x!(N-1)!}p^N(1-p)^x\\ &= \frac{(x+N-1)\cdots(x+1)}{(N-1)!}\exp\left\{N\log(p)+x\log(1-p) \right\}\\ &= \frac{\exp\left\{N\log(p)\right\}}{(N-1)!}\exp\left\{N\log(p)+x\log(1-p) \right\}(x+N-1)\cdots(x+1)\end{align*} the part $(x+N-1)\cdots(x+1)$ in this density cannot be expressed as $\exp\left\{ A(N)^\text{T}B(x)\right\}$.
Is the negative binomial not expressible as in the exponential family if there are 2 unknowns?
If you look at the density of the Negative Binomial distribution against the counting measure over the set of integers, \begin{align*}p(x|N,p)&={x+N-1\choose{N-1}}p^N(1-p)^x\\ &= \frac{(x+N-1)!}{x!(N-
Is the negative binomial not expressible as in the exponential family if there are 2 unknowns? If you look at the density of the Negative Binomial distribution against the counting measure over the set of integers, \begin{align*}p(x|N,p)&={x+N-1\choose{N-1}}p^N(1-p)^x\\ &= \frac{(x+N-1)!}{x!(N-1)!}p^N(1-p)^x\\ &= \frac{(x+N-1)\cdots(x+1)}{(N-1)!}\exp\left\{N\log(p)+x\log(1-p) \right\}\\ &= \frac{\exp\left\{N\log(p)\right\}}{(N-1)!}\exp\left\{N\log(p)+x\log(1-p) \right\}(x+N-1)\cdots(x+1)\end{align*} the part $(x+N-1)\cdots(x+1)$ in this density cannot be expressed as $\exp\left\{ A(N)^\text{T}B(x)\right\}$.
Is the negative binomial not expressible as in the exponential family if there are 2 unknowns? If you look at the density of the Negative Binomial distribution against the counting measure over the set of integers, \begin{align*}p(x|N,p)&={x+N-1\choose{N-1}}p^N(1-p)^x\\ &= \frac{(x+N-1)!}{x!(N-
32,848
Deriving the posterior density for a lognormal likelihood and Jeffreys's prior
Note that - regarded as a function in $\mu$ - what you have is proportional to a normal density. So step 1 is to complete the square in $\mu$ that's in the exponent, pull out the front of the integral any superfluous constants, and then multiply the term in the integral by the constant required to make it integrate to 1. Then divide out in front of the integral by the same constant (so you don't change the value of the overall expression). Since you have a density in the integral, replace the term in the integral by 1. You're left with a function of $\sigma$ (one that has notionally replaced $\mu$ with something akin to an estimate of it). Now see the density for an inverse gamma here: $$f(x; \alpha, \beta)= \frac{\beta^\alpha}{\Gamma(\alpha)}x^{-\alpha - 1}\exp\left(-\frac{\beta}{x}\right)$$ (in this case, using a shape-scale parameterization). Assuming you have the prior correct (I haven't checked that) -- you seek a posterior density for $\sigma^2$. Note that your function from after the integration can be written in the form $c\cdot(\sigma^2)^{-\text{something}}\cdot\exp(-\text{something-else}/\sigma^2)$. So you have an expression proportional to an inverse gamma density in $\sigma^2$. (Since it must be a density, supply the required constant needed to make it integrate to 1.)
Deriving the posterior density for a lognormal likelihood and Jeffreys's prior
Note that - regarded as a function in $\mu$ - what you have is proportional to a normal density. So step 1 is to complete the square in $\mu$ that's in the exponent, pull out the front of the integral
Deriving the posterior density for a lognormal likelihood and Jeffreys's prior Note that - regarded as a function in $\mu$ - what you have is proportional to a normal density. So step 1 is to complete the square in $\mu$ that's in the exponent, pull out the front of the integral any superfluous constants, and then multiply the term in the integral by the constant required to make it integrate to 1. Then divide out in front of the integral by the same constant (so you don't change the value of the overall expression). Since you have a density in the integral, replace the term in the integral by 1. You're left with a function of $\sigma$ (one that has notionally replaced $\mu$ with something akin to an estimate of it). Now see the density for an inverse gamma here: $$f(x; \alpha, \beta)= \frac{\beta^\alpha}{\Gamma(\alpha)}x^{-\alpha - 1}\exp\left(-\frac{\beta}{x}\right)$$ (in this case, using a shape-scale parameterization). Assuming you have the prior correct (I haven't checked that) -- you seek a posterior density for $\sigma^2$. Note that your function from after the integration can be written in the form $c\cdot(\sigma^2)^{-\text{something}}\cdot\exp(-\text{something-else}/\sigma^2)$. So you have an expression proportional to an inverse gamma density in $\sigma^2$. (Since it must be a density, supply the required constant needed to make it integrate to 1.)
Deriving the posterior density for a lognormal likelihood and Jeffreys's prior Note that - regarded as a function in $\mu$ - what you have is proportional to a normal density. So step 1 is to complete the square in $\mu$ that's in the exponent, pull out the front of the integral
32,849
Is rstan or my grid approximation incorrect: deciding between conflicting quantile estimates in Bayesian inference
Cliffs: Rstan seems to be (closer to) correct based on an approach that integrates $\theta$ out analytically and evaluates $P(N)P(y\mid N)$ in a rather big grid. To get the posterior of $N$, it is actually possible to integrate $\theta$ out analytically: \begin{equation} P(y \mid N) = P(y_1 \mid N)\times P(y_2 \mid N,y_1)\times P(y_3 \mid N,y_1,y_2) \times \ldots P(y_K \mid N,y_1,\ldots,y_{K-1}) \end{equation} where $K$ is the length of $y$. Now, since $\theta$ has Beta prior (here $Beta(1,1)$) and Beta is conjugate to binomial, $\theta \mid N, y_1, \ldots, y_k$ also follows a Beta distribution. Therefore, the distribution of $y_{k+1} \mid N, y_1, \ldots, y_k$ is Beta-binomial for which a closed form expression of the probabilities exists in terms of the Gamma function. Therefore, we may evaluate $P(y\mid N)$ by computing the relevant parameters of the Beta-binomials and multiplying Beta-binomial probabilities. The following MATLAB code uses this approach to compute $P(N)P(y|N)$ for $N=72,\ldots,500000$ and normalizes to get the posterior. %The data y = [53 57 66 67 72]; %Initialize maxN = 500000; logp = zeros(1,maxN); %log prior + log likelihood logp(1:71) = -inf; for N = 72:maxN %Prior logp(N) = -log(N); %y1 has uniform distribution logp(N) = logp(N) - log(N+1); a = 1; b = 1; %Rest of the measurements for j = 2:length(y); %Update beta parameters a = a + y(j-1); b = b + N - y(j-1); %Log predictive probability of y_j (see Wikipedia article) logp(N) = logp(N) + gammaln(N+1) - gammaln(y(j) + 1) - ... gammaln(N - y(j) + 1) + gammaln(y(j) + a) + ... gammaln(N - y(j) + b) - gammaln(N + a + b) ... + gammaln(a+b) - gammaln(a) - gammaln(b); end end %Get the posterior of N pmf = exp(logp - max(logp)); pmf = pmf/sum(pmf); cdf = cumsum(pmf); %Evaluate quantiles of interest disp(cdf(5000)) %0.9763 for percentile = [0.025 0.25 0.5 0.75 0.975] disp(find(cdf>=percentile,1,'first')) end The cdf at $N=100000$ is $0.9990$, so I guess maxN=500000 is enough, but one might want to investigate sensitivity to increasing the maximum $N$. The cdf at $N=5000$ is only $0.9763$ so your original grid indeed misses quite significant tail probability mass compared to the goal of finding the $0.975$ quantile. The quantiles I get are \begin{equation} \begin{array}{lllll} \textrm{Quantile} & 0.025 & 0.25 & 0.5 & 0.75 & 0.975 \\ N & 95 & 149 & 235 & 478 & 4750 \end{array} \end{equation} Disclaimer: I did not test the code much, there may be errors (and obviously there could be numeric problems with this approach, too). However, the obtained quantiles are quite close to the Rstan results, so I am rather confident.
Is rstan or my grid approximation incorrect: deciding between conflicting quantile estimates in Baye
Cliffs: Rstan seems to be (closer to) correct based on an approach that integrates $\theta$ out analytically and evaluates $P(N)P(y\mid N)$ in a rather big grid. To get the posterior of $N$, it is act
Is rstan or my grid approximation incorrect: deciding between conflicting quantile estimates in Bayesian inference Cliffs: Rstan seems to be (closer to) correct based on an approach that integrates $\theta$ out analytically and evaluates $P(N)P(y\mid N)$ in a rather big grid. To get the posterior of $N$, it is actually possible to integrate $\theta$ out analytically: \begin{equation} P(y \mid N) = P(y_1 \mid N)\times P(y_2 \mid N,y_1)\times P(y_3 \mid N,y_1,y_2) \times \ldots P(y_K \mid N,y_1,\ldots,y_{K-1}) \end{equation} where $K$ is the length of $y$. Now, since $\theta$ has Beta prior (here $Beta(1,1)$) and Beta is conjugate to binomial, $\theta \mid N, y_1, \ldots, y_k$ also follows a Beta distribution. Therefore, the distribution of $y_{k+1} \mid N, y_1, \ldots, y_k$ is Beta-binomial for which a closed form expression of the probabilities exists in terms of the Gamma function. Therefore, we may evaluate $P(y\mid N)$ by computing the relevant parameters of the Beta-binomials and multiplying Beta-binomial probabilities. The following MATLAB code uses this approach to compute $P(N)P(y|N)$ for $N=72,\ldots,500000$ and normalizes to get the posterior. %The data y = [53 57 66 67 72]; %Initialize maxN = 500000; logp = zeros(1,maxN); %log prior + log likelihood logp(1:71) = -inf; for N = 72:maxN %Prior logp(N) = -log(N); %y1 has uniform distribution logp(N) = logp(N) - log(N+1); a = 1; b = 1; %Rest of the measurements for j = 2:length(y); %Update beta parameters a = a + y(j-1); b = b + N - y(j-1); %Log predictive probability of y_j (see Wikipedia article) logp(N) = logp(N) + gammaln(N+1) - gammaln(y(j) + 1) - ... gammaln(N - y(j) + 1) + gammaln(y(j) + a) + ... gammaln(N - y(j) + b) - gammaln(N + a + b) ... + gammaln(a+b) - gammaln(a) - gammaln(b); end end %Get the posterior of N pmf = exp(logp - max(logp)); pmf = pmf/sum(pmf); cdf = cumsum(pmf); %Evaluate quantiles of interest disp(cdf(5000)) %0.9763 for percentile = [0.025 0.25 0.5 0.75 0.975] disp(find(cdf>=percentile,1,'first')) end The cdf at $N=100000$ is $0.9990$, so I guess maxN=500000 is enough, but one might want to investigate sensitivity to increasing the maximum $N$. The cdf at $N=5000$ is only $0.9763$ so your original grid indeed misses quite significant tail probability mass compared to the goal of finding the $0.975$ quantile. The quantiles I get are \begin{equation} \begin{array}{lllll} \textrm{Quantile} & 0.025 & 0.25 & 0.5 & 0.75 & 0.975 \\ N & 95 & 149 & 235 & 478 & 4750 \end{array} \end{equation} Disclaimer: I did not test the code much, there may be errors (and obviously there could be numeric problems with this approach, too). However, the obtained quantiles are quite close to the Rstan results, so I am rather confident.
Is rstan or my grid approximation incorrect: deciding between conflicting quantile estimates in Baye Cliffs: Rstan seems to be (closer to) correct based on an approach that integrates $\theta$ out analytically and evaluates $P(N)P(y\mid N)$ in a rather big grid. To get the posterior of $N$, it is act
32,850
Comparing residuals between OLS and non-OLS regressions
(Converting my comment into an answer.) I think you cannot compare the fits that come from different loss functions, because they are answers to different questions. Once you decide that a given loss function is the appropriate one for your situation, the fit follows from that decision. You cannot fold it back to validate the choice of loss function without this becoming circular. If you have some other criterion that both loss functions can be understood to be encompassed by, you could use that, but you need to have defined that in advance.
Comparing residuals between OLS and non-OLS regressions
(Converting my comment into an answer.) I think you cannot compare the fits that come from different loss functions, because they are answers to different questions. Once you decide that a given loss
Comparing residuals between OLS and non-OLS regressions (Converting my comment into an answer.) I think you cannot compare the fits that come from different loss functions, because they are answers to different questions. Once you decide that a given loss function is the appropriate one for your situation, the fit follows from that decision. You cannot fold it back to validate the choice of loss function without this becoming circular. If you have some other criterion that both loss functions can be understood to be encompassed by, you could use that, but you need to have defined that in advance.
Comparing residuals between OLS and non-OLS regressions (Converting my comment into an answer.) I think you cannot compare the fits that come from different loss functions, because they are answers to different questions. Once you decide that a given loss
32,851
Softmax regression or $K$ binary logistic regression
The softmax function gives a proper probability for each of the possible classes: $$ P(y=j|x,\{w_k\}_{k=1...K}) = \frac{e^{x^\top w_j}}{\sum_{k=1}^K e^{x^\top w_k}} $$ This is nice if you want to interpret your classification problem in a probabilistic setting. Benefits of using the probabilistic formulation include being able to place priors on the parameters and obtaining a posterior distribution over classes. That said, maybe you can imagine a really good classifier that isn't of this form. Perhaps it is of a form that is generally difficult to express (e.g. SVM -- here for multi-class details). If some such complicated classifier works well for you on a given task, perhaps you don't want to use the [potentially weaker] softmax classifier. In such a setting, there may not be a clear all-way output, so you have to settle for repeated one-vs-others classification schemes. One more counterpoint...you could also augment the expressive power of the softmax-style approach by changing the input to the exponential. For example, it would be straightforward to replace each linear component $x^\top w_j$ with a quadratic expression $x^\top w_j + x^\top A_j x$. Other such augmentations are conceivable.
Softmax regression or $K$ binary logistic regression
The softmax function gives a proper probability for each of the possible classes: $$ P(y=j|x,\{w_k\}_{k=1...K}) = \frac{e^{x^\top w_j}}{\sum_{k=1}^K e^{x^\top w_k}} $$ This is nice if you want to inte
Softmax regression or $K$ binary logistic regression The softmax function gives a proper probability for each of the possible classes: $$ P(y=j|x,\{w_k\}_{k=1...K}) = \frac{e^{x^\top w_j}}{\sum_{k=1}^K e^{x^\top w_k}} $$ This is nice if you want to interpret your classification problem in a probabilistic setting. Benefits of using the probabilistic formulation include being able to place priors on the parameters and obtaining a posterior distribution over classes. That said, maybe you can imagine a really good classifier that isn't of this form. Perhaps it is of a form that is generally difficult to express (e.g. SVM -- here for multi-class details). If some such complicated classifier works well for you on a given task, perhaps you don't want to use the [potentially weaker] softmax classifier. In such a setting, there may not be a clear all-way output, so you have to settle for repeated one-vs-others classification schemes. One more counterpoint...you could also augment the expressive power of the softmax-style approach by changing the input to the exponential. For example, it would be straightforward to replace each linear component $x^\top w_j$ with a quadratic expression $x^\top w_j + x^\top A_j x$. Other such augmentations are conceivable.
Softmax regression or $K$ binary logistic regression The softmax function gives a proper probability for each of the possible classes: $$ P(y=j|x,\{w_k\}_{k=1...K}) = \frac{e^{x^\top w_j}}{\sum_{k=1}^K e^{x^\top w_k}} $$ This is nice if you want to inte
32,852
Sample size calculation for paired ordinal data
The standard non-parametric test for paired ordinal data is the Wilcoxon, which is sort of an augmented sign test. I don't know of a formula for power analysis for the Wilcoxon, but you can certainly get power analyses for the sign test (there are various resources listed in my question here: Free or downloadable resources for sample size calculations). Note that (as @Glen_b notes in the comment below), this would assume that there are no ties. If you expect there will be some proportion of ties, the power analysis for the sign test would give you the requisite $N$ excluding the ties, so you would inflate that estimate by multiplying it by the reciprocal of the proportion of untied data you expect to have (e.g., if you thought you might have $20\%$ tied data, and the test required $N=100$, then you'd multiply $100$ by $1/.8$ to get $125$). Unless you need the minimum $N$ to achieve a specified power, that should work for you. For example, when running power calculations for more complicated analyses, we often use a simpler calculation and then say something like 'our $N$ was calculated to achieve a minimum of 80% power on the sign test, because the Wilcoxon can be expected to be at least as powerful as the sign test, our power should meet or excede 80%'. On the other hand, if you have a strong sense of what the distributions will be like, you can always simulate. Although written in the context of logistic regression, there is a lot of basic information about using simulations for power analyses in my answer here: Simulation of logistic regression power analysis designed experiments.
Sample size calculation for paired ordinal data
The standard non-parametric test for paired ordinal data is the Wilcoxon, which is sort of an augmented sign test. I don't know of a formula for power analysis for the Wilcoxon, but you can certainly
Sample size calculation for paired ordinal data The standard non-parametric test for paired ordinal data is the Wilcoxon, which is sort of an augmented sign test. I don't know of a formula for power analysis for the Wilcoxon, but you can certainly get power analyses for the sign test (there are various resources listed in my question here: Free or downloadable resources for sample size calculations). Note that (as @Glen_b notes in the comment below), this would assume that there are no ties. If you expect there will be some proportion of ties, the power analysis for the sign test would give you the requisite $N$ excluding the ties, so you would inflate that estimate by multiplying it by the reciprocal of the proportion of untied data you expect to have (e.g., if you thought you might have $20\%$ tied data, and the test required $N=100$, then you'd multiply $100$ by $1/.8$ to get $125$). Unless you need the minimum $N$ to achieve a specified power, that should work for you. For example, when running power calculations for more complicated analyses, we often use a simpler calculation and then say something like 'our $N$ was calculated to achieve a minimum of 80% power on the sign test, because the Wilcoxon can be expected to be at least as powerful as the sign test, our power should meet or excede 80%'. On the other hand, if you have a strong sense of what the distributions will be like, you can always simulate. Although written in the context of logistic regression, there is a lot of basic information about using simulations for power analyses in my answer here: Simulation of logistic regression power analysis designed experiments.
Sample size calculation for paired ordinal data The standard non-parametric test for paired ordinal data is the Wilcoxon, which is sort of an augmented sign test. I don't know of a formula for power analysis for the Wilcoxon, but you can certainly
32,853
In a Poisson process measured with some efficiency, is the measured count still Poisson?
A quick non-technical argument might use Jackson networks. In your case total external arrivals is rate $R$, and there are no internal transitions (observed particles don't switch to the unobserved queue). The splitting proportion between the observed and unobserved nodes $p_{0i}$ is $P$, so the $\lambda_{obs}=RP$ If you're looking for first principles, call $O(t)$ the observed counting process, and $N(t)\sim PP(r)$ the total counting process. Where each arrival in $N(t)$ gets logged in $O(t)$ with probability $p$. So that if for some $s$ we have $N(s)=n$ then $O(s)$ has a binomial($n,p$) distribution. This approach uses probability generating functions: $E[z^{O(t)}|N(t)=n]=\sum_{j=0}^{n}z^{j} {n \choose j}p^j(1-p)^{n-j}=(1-p+pz)^{n}$ Last equality by the binomial theorem. Then, unconditionally, since $N(t)\sim Poisson(rt)$: $E[z^{O(t)}]=E[E[z^{O(t)}|N(t)=n]]=\sum_{n=0}^{\infty}(1-p+pz)^{n}\frac{rt^n}{n!}e^{-rt}=e^{-rt}e^{rt(1-p+pz)}=e^{rpt(z-1)}$ Which is the probability generating function of a Poisson($rpt$) random variable.
In a Poisson process measured with some efficiency, is the measured count still Poisson?
A quick non-technical argument might use Jackson networks. In your case total external arrivals is rate $R$, and there are no internal transitions (observed particles don't switch to the unobserved qu
In a Poisson process measured with some efficiency, is the measured count still Poisson? A quick non-technical argument might use Jackson networks. In your case total external arrivals is rate $R$, and there are no internal transitions (observed particles don't switch to the unobserved queue). The splitting proportion between the observed and unobserved nodes $p_{0i}$ is $P$, so the $\lambda_{obs}=RP$ If you're looking for first principles, call $O(t)$ the observed counting process, and $N(t)\sim PP(r)$ the total counting process. Where each arrival in $N(t)$ gets logged in $O(t)$ with probability $p$. So that if for some $s$ we have $N(s)=n$ then $O(s)$ has a binomial($n,p$) distribution. This approach uses probability generating functions: $E[z^{O(t)}|N(t)=n]=\sum_{j=0}^{n}z^{j} {n \choose j}p^j(1-p)^{n-j}=(1-p+pz)^{n}$ Last equality by the binomial theorem. Then, unconditionally, since $N(t)\sim Poisson(rt)$: $E[z^{O(t)}]=E[E[z^{O(t)}|N(t)=n]]=\sum_{n=0}^{\infty}(1-p+pz)^{n}\frac{rt^n}{n!}e^{-rt}=e^{-rt}e^{rt(1-p+pz)}=e^{rpt(z-1)}$ Which is the probability generating function of a Poisson($rpt$) random variable.
In a Poisson process measured with some efficiency, is the measured count still Poisson? A quick non-technical argument might use Jackson networks. In your case total external arrivals is rate $R$, and there are no internal transitions (observed particles don't switch to the unobserved qu
32,854
Interpreting a binned residual plot in logistic regression
Either I am misinterpreting your plot or there is some problem. The fact that you have negative residuals for near 0 expected values implies that your model is predicting negative value. This should not be possible for logistic regression models which only predict in the (0, 1) interval, unless you are using the log-odds output of the model in which case residual error should be undefined. Since logistic regression is a classification method, it is more useful to look at the confusion matrix first. You should also specify if the graph is based on the train data or a separate test set.
Interpreting a binned residual plot in logistic regression
Either I am misinterpreting your plot or there is some problem. The fact that you have negative residuals for near 0 expected values implies that your model is predicting negative value. This should n
Interpreting a binned residual plot in logistic regression Either I am misinterpreting your plot or there is some problem. The fact that you have negative residuals for near 0 expected values implies that your model is predicting negative value. This should not be possible for logistic regression models which only predict in the (0, 1) interval, unless you are using the log-odds output of the model in which case residual error should be undefined. Since logistic regression is a classification method, it is more useful to look at the confusion matrix first. You should also specify if the graph is based on the train data or a separate test set.
Interpreting a binned residual plot in logistic regression Either I am misinterpreting your plot or there is some problem. The fact that you have negative residuals for near 0 expected values implies that your model is predicting negative value. This should n
32,855
Interpreting a binned residual plot in logistic regression
For others in similar situations, I might suggest looking at the simulation-based residuals for GLMs (and GAMs, and GLMMs) available in the DHARMa package for R. These to me look more theoretically justified than the binned residual plots, and come with easily implemented functions. More here: https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html
Interpreting a binned residual plot in logistic regression
For others in similar situations, I might suggest looking at the simulation-based residuals for GLMs (and GAMs, and GLMMs) available in the DHARMa package for R. These to me look more theoretically ju
Interpreting a binned residual plot in logistic regression For others in similar situations, I might suggest looking at the simulation-based residuals for GLMs (and GAMs, and GLMMs) available in the DHARMa package for R. These to me look more theoretically justified than the binned residual plots, and come with easily implemented functions. More here: https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html
Interpreting a binned residual plot in logistic regression For others in similar situations, I might suggest looking at the simulation-based residuals for GLMs (and GAMs, and GLMMs) available in the DHARMa package for R. These to me look more theoretically ju
32,856
When optimizing a logistic regression model, sometimes more data makes things go *faster*. Any idea why?
With less amounts of data, spurious correlation between regression inputs is often high, since you only have so much data. When regression variables are correlated, the likelihood surface is relatively flat, and it becomes harder for an optimizer, especially one that doesn't use the full Hessian (e.g. Newton Raphson), to find the minimum. There are some nice graphs here and more explanation, with how various algorithms perform against data with different amounts of correlation, here: http://fa.bianp.net/blog/2013/numerical-optimizers-for-logistic-regression/
When optimizing a logistic regression model, sometimes more data makes things go *faster*. Any idea
With less amounts of data, spurious correlation between regression inputs is often high, since you only have so much data. When regression variables are correlated, the likelihood surface is relative
When optimizing a logistic regression model, sometimes more data makes things go *faster*. Any idea why? With less amounts of data, spurious correlation between regression inputs is often high, since you only have so much data. When regression variables are correlated, the likelihood surface is relatively flat, and it becomes harder for an optimizer, especially one that doesn't use the full Hessian (e.g. Newton Raphson), to find the minimum. There are some nice graphs here and more explanation, with how various algorithms perform against data with different amounts of correlation, here: http://fa.bianp.net/blog/2013/numerical-optimizers-for-logistic-regression/
When optimizing a logistic regression model, sometimes more data makes things go *faster*. Any idea With less amounts of data, spurious correlation between regression inputs is often high, since you only have so much data. When regression variables are correlated, the likelihood surface is relative
32,857
Convergence in Distribution\CLT
I provide a solution based on properties of characteristic functions, which are defined as follows $$\psi_X(t)=E\exp{(itX)}.$$ We know that distribution is uniquelly defined by characteristic function, so I will prove that $$\psi_{(Y-EY)/\sqrt{Var(Y)}}\rightarrow \psi_{N(0,1)}(t), \text{ when } \theta \rightarrow \infty,$$ and from that follows the desired convergence. For that I will need to calculate mean and variance of $Y$, for which I use law of total expectations/variance - http://en.wikipedia.org/wiki/Law_of_total_expectation. $$EY=E\{E(Y|N)\}=E\{2N\}=2\theta$$ $$Var(Y)=E\{Var(Y|N)\}+Var\{E(Y|N)\}=E\{4N\}+Var(2N)=4\theta+4Var(N)=8\theta$$ I used that the mean and variance of Poisson distribution are $EN=Var(N)=\theta$ and mean and variance of $\chi^2_{2n}$ are $E(Y|N=n)=2n$ and $Var(Y|N=n)=4n$. Now comes the calculus with characteristic functions. At first I rewrite the definition of $Y$ as $$Y=\sum_{n=1}^{\infty}Z_{2n}I_{[N=n]}, \text{ where } Z_{2n}\sim \chi^2_{2n}$$ Now I use theorem which states $$\psi_Y(t)=\sum_{n=1}^{\infty}\psi_{Z_{2n}(t)}P(N=n)$$ The characteristic function of $\chi^2_{2n}$ is $\psi_{Z_{2n}(t)}=(1-2it)^{-n}$, which is taken from here: http://en.wikipedia.org/wiki/Characteristic_function_(probability_theory) So now we calculate the characteristic function for $Y$ using Taylor expansion for $\exp(x)$ $$\psi_Y(t)=\sum_{n=1}^{\infty}\psi_{Z_{2n}(t)}P(N=n)=\sum_{n=1}^{\infty}(1-2it)^{-n}\frac{\theta^n}{n!}\exp{(-\theta)}=\sum_{n=1}^{\infty}\left(\frac{\theta}{(1-2it)}\right)^n\frac{1}{n!}\exp{(-\theta)}=\exp(\frac{\theta}{1-2it})\exp(-\theta)=\exp(\frac{2it\theta}{1-2it})$$ At the end we use the properties of characteristic functions $$\psi_{(Y-EY)/\sqrt{Var(Y)}}(t)=\exp(-i\frac{EY}{\sqrt{VarY}})\psi_Y(t/\sqrt{VarY})= \\\exp(-\frac{t^2}{2})\exp{(-1+2i\frac{t}{\sqrt{8\theta}})}\rightarrow \exp(-\frac{t^2}{2})=\psi_{N(0,1)}(t), \text{ when } \theta \rightarrow \infty$$ I jumped over the calculus because it is too lengthy by now...
Convergence in Distribution\CLT
I provide a solution based on properties of characteristic functions, which are defined as follows $$\psi_X(t)=E\exp{(itX)}.$$ We know that distribution is uniquelly defined by characteristic functio
Convergence in Distribution\CLT I provide a solution based on properties of characteristic functions, which are defined as follows $$\psi_X(t)=E\exp{(itX)}.$$ We know that distribution is uniquelly defined by characteristic function, so I will prove that $$\psi_{(Y-EY)/\sqrt{Var(Y)}}\rightarrow \psi_{N(0,1)}(t), \text{ when } \theta \rightarrow \infty,$$ and from that follows the desired convergence. For that I will need to calculate mean and variance of $Y$, for which I use law of total expectations/variance - http://en.wikipedia.org/wiki/Law_of_total_expectation. $$EY=E\{E(Y|N)\}=E\{2N\}=2\theta$$ $$Var(Y)=E\{Var(Y|N)\}+Var\{E(Y|N)\}=E\{4N\}+Var(2N)=4\theta+4Var(N)=8\theta$$ I used that the mean and variance of Poisson distribution are $EN=Var(N)=\theta$ and mean and variance of $\chi^2_{2n}$ are $E(Y|N=n)=2n$ and $Var(Y|N=n)=4n$. Now comes the calculus with characteristic functions. At first I rewrite the definition of $Y$ as $$Y=\sum_{n=1}^{\infty}Z_{2n}I_{[N=n]}, \text{ where } Z_{2n}\sim \chi^2_{2n}$$ Now I use theorem which states $$\psi_Y(t)=\sum_{n=1}^{\infty}\psi_{Z_{2n}(t)}P(N=n)$$ The characteristic function of $\chi^2_{2n}$ is $\psi_{Z_{2n}(t)}=(1-2it)^{-n}$, which is taken from here: http://en.wikipedia.org/wiki/Characteristic_function_(probability_theory) So now we calculate the characteristic function for $Y$ using Taylor expansion for $\exp(x)$ $$\psi_Y(t)=\sum_{n=1}^{\infty}\psi_{Z_{2n}(t)}P(N=n)=\sum_{n=1}^{\infty}(1-2it)^{-n}\frac{\theta^n}{n!}\exp{(-\theta)}=\sum_{n=1}^{\infty}\left(\frac{\theta}{(1-2it)}\right)^n\frac{1}{n!}\exp{(-\theta)}=\exp(\frac{\theta}{1-2it})\exp(-\theta)=\exp(\frac{2it\theta}{1-2it})$$ At the end we use the properties of characteristic functions $$\psi_{(Y-EY)/\sqrt{Var(Y)}}(t)=\exp(-i\frac{EY}{\sqrt{VarY}})\psi_Y(t/\sqrt{VarY})= \\\exp(-\frac{t^2}{2})\exp{(-1+2i\frac{t}{\sqrt{8\theta}})}\rightarrow \exp(-\frac{t^2}{2})=\psi_{N(0,1)}(t), \text{ when } \theta \rightarrow \infty$$ I jumped over the calculus because it is too lengthy by now...
Convergence in Distribution\CLT I provide a solution based on properties of characteristic functions, which are defined as follows $$\psi_X(t)=E\exp{(itX)}.$$ We know that distribution is uniquelly defined by characteristic functio
32,858
Convergence in Distribution\CLT
This can be shown via the relationship to the noncentral chisquared distribution. There is a good wikipedia article on that which I will reference freely! https://en.wikipedia.org/wiki/Noncentral_chi-squared_distribution You have given that $Y|N=n$ is distributed chisquare with $2 n $ degrees of freedom, for $n=0,1,\dots, \infty$. Here $N$ has the Poisson distribution with expectation $\theta$. Then we have that the density function of $Y$ (unconditionally) can be written, using the law of total probability, as $$ f_Y(y; 0, \theta) = \sum_{i=0}^\infty \frac{e^{-\theta} \theta^i}{i!} f_{{\chi^2}_{2i}}(y) $$ Which is almost the density of a non-central chisquared variable, except the degree of freedom parameter is $k=0$, which is really undefined. (this is given in the definition section of wikipedia article). So to get something well-defined, we replace the above formula with $$ f_Y(y; k,\theta) = \sum_{i=0}^\infty \frac{e^{-\theta} \theta^i}{i!} f_{{\chi^2}_{2i+k}}(y) $$ which is the density of a noncentral chisquared variable with $k$ degrees of freedom and non-centrality parameter $2\theta$. So, in our analysis, we must remember to take the limit when $k \rightarrow 0$ after taking the limit $\theta \rightarrow \infty$. This is unproblematic, because in the limit of $\theta \rightarrow \infty$ the probability of $N=0$ goes to zero, so the point mass at zero disappears (chisquared variable with zero degrees of freedom must be interpreted as a pointmass at zero, so, have no density function). Now, for each fixed $k$, use the result in wiki , section related distributions, normal approximations, which gives the sought-for standard normal limit for each $k$. Then, take the limit when $k$ goes to zero, which gives the result.
Convergence in Distribution\CLT
This can be shown via the relationship to the noncentral chisquared distribution. There is a good wikipedia article on that which I will reference freely! https://en.wikipedia.org/wiki/Noncentral_chi
Convergence in Distribution\CLT This can be shown via the relationship to the noncentral chisquared distribution. There is a good wikipedia article on that which I will reference freely! https://en.wikipedia.org/wiki/Noncentral_chi-squared_distribution You have given that $Y|N=n$ is distributed chisquare with $2 n $ degrees of freedom, for $n=0,1,\dots, \infty$. Here $N$ has the Poisson distribution with expectation $\theta$. Then we have that the density function of $Y$ (unconditionally) can be written, using the law of total probability, as $$ f_Y(y; 0, \theta) = \sum_{i=0}^\infty \frac{e^{-\theta} \theta^i}{i!} f_{{\chi^2}_{2i}}(y) $$ Which is almost the density of a non-central chisquared variable, except the degree of freedom parameter is $k=0$, which is really undefined. (this is given in the definition section of wikipedia article). So to get something well-defined, we replace the above formula with $$ f_Y(y; k,\theta) = \sum_{i=0}^\infty \frac{e^{-\theta} \theta^i}{i!} f_{{\chi^2}_{2i+k}}(y) $$ which is the density of a noncentral chisquared variable with $k$ degrees of freedom and non-centrality parameter $2\theta$. So, in our analysis, we must remember to take the limit when $k \rightarrow 0$ after taking the limit $\theta \rightarrow \infty$. This is unproblematic, because in the limit of $\theta \rightarrow \infty$ the probability of $N=0$ goes to zero, so the point mass at zero disappears (chisquared variable with zero degrees of freedom must be interpreted as a pointmass at zero, so, have no density function). Now, for each fixed $k$, use the result in wiki , section related distributions, normal approximations, which gives the sought-for standard normal limit for each $k$. Then, take the limit when $k$ goes to zero, which gives the result.
Convergence in Distribution\CLT This can be shown via the relationship to the noncentral chisquared distribution. There is a good wikipedia article on that which I will reference freely! https://en.wikipedia.org/wiki/Noncentral_chi
32,859
How many samples are needed to estimate a p-dimensional covariance matrix?
As @whuber has commented (+1 to his comment) you need "$p$ data points in $p$ dimensions" under the assumption those points are independent. Nevertheless as you correctly recognise this will depended on the underlying distribution of your data as well as your sample size. That is because in finite samples sizes you have finite sample effects. That means that, in very approximate manner, your random sample exhibits properties (usually regularities in the form of collinearity) that should not be there. This is not something new: Finite or small sample corrections are something that is done ubiquitously in Statistics, for example the very popular Akaike Information Criterion (AIC) has a very easy to compute version that is corrected for finite samples: the AICc (that is unfortunately underused - AICc corrects for deviations from normalities, not collinearities). So how bad things might be? [A quick note: A singular covariance matrix is essentially one that is not positive definite (PD). You can check if a matrix is PD by checking if it has a Cholesky decomposition. That is much faster than using eigendecomposition or SVD.] Let's say is looking at a random sample $S_1$ such that $s_1 \sim U[0,1]$ and another sample $S_2$ such that $s_2 \sim T_{\nu=1}$ ($\nu$ being the degrees of freedom). How would thing go with a relatively small (<200) sample? Well... not splendid! Let's simulate something like this (in MATLAB): rng(1234) N = 200; M = 200; FailsU = zeros(N,M); FailsT = zeros(N,M); for i = 1:M for j = 1:N; K = cov(rand(i)); [L,p] = chol(K,'lower'); if(p) FailsU(j,i) = 1; end K = cov(random('t',1,[i,i])); [L,p] = chol(K,'lower'); if(p) FailsT(j,i) = 1; end end end About 50% of the time you will get a non-PD matrix. That's definitely not good. What about if we had $p+1$ samples though? % Same initializations as above for i = 1:M for j = 1:N; K = cov(rand(i+1,i)); [L,p] = chol(K,'lower'); if(p) Fails1(j,i) = 1; end K = cov(random('t',1,[i+1,i])); [L,p] = chol(K,'lower'); if(p) FailsT1(j,i) = 1; end end end Yeap we are fine! Why all this trouble though? Let see the $p \times p$ version first: When you are saying that a matrix is non-invertible or it has zero eigenvalues you are saying it is rank deficient. That is because the rank of a covariance matrix can be thought as the number of non-zero eigenvalues. In addition, when you compute the product $S_x^T S_x$ you can get at most a matrix of rank $p$ because the rank of a product of matrices is at most equal to the rank of any matrix in the product. That means that the covariance matrix of your $p$-dimensional sample has at most rank $p$. Therefore for any number of samples $N$, the rank of $S_x$ is at most $p$. From this we can assume that even small deviations from complete randomness would make our covariance matrix rank deficient (as seen in the first simulation). OK, fine why does $p+1$ seems to work so nicely? Now let's see the $p+1 \times p$ version: Think of how you estimate a sample covariance matrix: While in a quick manner we can write : $K = \frac{1}{N-1} S_x^T S_X$ (because we assumed $S_x$ to have mean 0, we should properly write things as: $K = \frac{1}{N-1} \Sigma_{i=1}^N (S_{x(i)} - \hat{\mu})(S_{x(i)} - \hat{\mu})^T$ where $\hat{\mu}$ is the sample mean. But what about the rank of this $(S_{x(i)} - \hat{\mu})(S_{x(i)} - \hat{\mu})^T$ matrix? Well... that is 1! Not only that but exactly because we subtracted $\hat{\mu}$ we reduced the original rank of the matrix $S_x$ to begin with! So as we add up points ($N$ gets larger) we have more chances to have a full-rank matrix. For the current case, exactly the covariance in our sample is completely diagonal, just adding another point ensure that it was extremely unlikely (not guaranteed) to have a rank deficiency.
How many samples are needed to estimate a p-dimensional covariance matrix?
As @whuber has commented (+1 to his comment) you need "$p$ data points in $p$ dimensions" under the assumption those points are independent. Nevertheless as you correctly recognise this will depended
How many samples are needed to estimate a p-dimensional covariance matrix? As @whuber has commented (+1 to his comment) you need "$p$ data points in $p$ dimensions" under the assumption those points are independent. Nevertheless as you correctly recognise this will depended on the underlying distribution of your data as well as your sample size. That is because in finite samples sizes you have finite sample effects. That means that, in very approximate manner, your random sample exhibits properties (usually regularities in the form of collinearity) that should not be there. This is not something new: Finite or small sample corrections are something that is done ubiquitously in Statistics, for example the very popular Akaike Information Criterion (AIC) has a very easy to compute version that is corrected for finite samples: the AICc (that is unfortunately underused - AICc corrects for deviations from normalities, not collinearities). So how bad things might be? [A quick note: A singular covariance matrix is essentially one that is not positive definite (PD). You can check if a matrix is PD by checking if it has a Cholesky decomposition. That is much faster than using eigendecomposition or SVD.] Let's say is looking at a random sample $S_1$ such that $s_1 \sim U[0,1]$ and another sample $S_2$ such that $s_2 \sim T_{\nu=1}$ ($\nu$ being the degrees of freedom). How would thing go with a relatively small (<200) sample? Well... not splendid! Let's simulate something like this (in MATLAB): rng(1234) N = 200; M = 200; FailsU = zeros(N,M); FailsT = zeros(N,M); for i = 1:M for j = 1:N; K = cov(rand(i)); [L,p] = chol(K,'lower'); if(p) FailsU(j,i) = 1; end K = cov(random('t',1,[i,i])); [L,p] = chol(K,'lower'); if(p) FailsT(j,i) = 1; end end end About 50% of the time you will get a non-PD matrix. That's definitely not good. What about if we had $p+1$ samples though? % Same initializations as above for i = 1:M for j = 1:N; K = cov(rand(i+1,i)); [L,p] = chol(K,'lower'); if(p) Fails1(j,i) = 1; end K = cov(random('t',1,[i+1,i])); [L,p] = chol(K,'lower'); if(p) FailsT1(j,i) = 1; end end end Yeap we are fine! Why all this trouble though? Let see the $p \times p$ version first: When you are saying that a matrix is non-invertible or it has zero eigenvalues you are saying it is rank deficient. That is because the rank of a covariance matrix can be thought as the number of non-zero eigenvalues. In addition, when you compute the product $S_x^T S_x$ you can get at most a matrix of rank $p$ because the rank of a product of matrices is at most equal to the rank of any matrix in the product. That means that the covariance matrix of your $p$-dimensional sample has at most rank $p$. Therefore for any number of samples $N$, the rank of $S_x$ is at most $p$. From this we can assume that even small deviations from complete randomness would make our covariance matrix rank deficient (as seen in the first simulation). OK, fine why does $p+1$ seems to work so nicely? Now let's see the $p+1 \times p$ version: Think of how you estimate a sample covariance matrix: While in a quick manner we can write : $K = \frac{1}{N-1} S_x^T S_X$ (because we assumed $S_x$ to have mean 0, we should properly write things as: $K = \frac{1}{N-1} \Sigma_{i=1}^N (S_{x(i)} - \hat{\mu})(S_{x(i)} - \hat{\mu})^T$ where $\hat{\mu}$ is the sample mean. But what about the rank of this $(S_{x(i)} - \hat{\mu})(S_{x(i)} - \hat{\mu})^T$ matrix? Well... that is 1! Not only that but exactly because we subtracted $\hat{\mu}$ we reduced the original rank of the matrix $S_x$ to begin with! So as we add up points ($N$ gets larger) we have more chances to have a full-rank matrix. For the current case, exactly the covariance in our sample is completely diagonal, just adding another point ensure that it was extremely unlikely (not guaranteed) to have a rank deficiency.
How many samples are needed to estimate a p-dimensional covariance matrix? As @whuber has commented (+1 to his comment) you need "$p$ data points in $p$ dimensions" under the assumption those points are independent. Nevertheless as you correctly recognise this will depended
32,860
How many samples are needed to estimate a p-dimensional covariance matrix?
By construction, a covariance matrix of dimension $n$, computed with $n$ data points is singular. Consider the following $2 \times2$ example. Data: $x_1, x_2$ and $y_1, y_2$. Let $x_1 - \bar{x} = -(x_2 - \bar{x}) = a$ $y_1 - \bar{y} = -(y_2 - \bar{y}) = b$ The covariance matrix (ignoring bias/ML issues) is: $$ \pmatrix{2 a^2 & 2ab \\ 2 ab & 2b^2} $$ which is singular.
How many samples are needed to estimate a p-dimensional covariance matrix?
By construction, a covariance matrix of dimension $n$, computed with $n$ data points is singular. Consider the following $2 \times2$ example. Data: $x_1, x_2$ and $y_1, y_2$. Let $x_1 - \bar{x} = -(x_
How many samples are needed to estimate a p-dimensional covariance matrix? By construction, a covariance matrix of dimension $n$, computed with $n$ data points is singular. Consider the following $2 \times2$ example. Data: $x_1, x_2$ and $y_1, y_2$. Let $x_1 - \bar{x} = -(x_2 - \bar{x}) = a$ $y_1 - \bar{y} = -(y_2 - \bar{y}) = b$ The covariance matrix (ignoring bias/ML issues) is: $$ \pmatrix{2 a^2 & 2ab \\ 2 ab & 2b^2} $$ which is singular.
How many samples are needed to estimate a p-dimensional covariance matrix? By construction, a covariance matrix of dimension $n$, computed with $n$ data points is singular. Consider the following $2 \times2$ example. Data: $x_1, x_2$ and $y_1, y_2$. Let $x_1 - \bar{x} = -(x_
32,861
How many samples are needed to estimate a p-dimensional covariance matrix?
The short answer to the question is: it depends on the distribution of the data. Also, it depends on your definition of estimation: should they be close in the l_2 norm or do we use other metrics? As specific case, in the case of the operator norm, we can ask how many samples do I need if the data is a p-dimensional Gaussian with variace $\Sigma$. the answer is fortunately $O(p)$. You might need more samples if the data has other distributions. Best source of information that I was able to find online this paper, along with the presentation of the same author here.
How many samples are needed to estimate a p-dimensional covariance matrix?
The short answer to the question is: it depends on the distribution of the data. Also, it depends on your definition of estimation: should they be close in the l_2 norm or do we use other metrics? As
How many samples are needed to estimate a p-dimensional covariance matrix? The short answer to the question is: it depends on the distribution of the data. Also, it depends on your definition of estimation: should they be close in the l_2 norm or do we use other metrics? As specific case, in the case of the operator norm, we can ask how many samples do I need if the data is a p-dimensional Gaussian with variace $\Sigma$. the answer is fortunately $O(p)$. You might need more samples if the data has other distributions. Best source of information that I was able to find online this paper, along with the presentation of the same author here.
How many samples are needed to estimate a p-dimensional covariance matrix? The short answer to the question is: it depends on the distribution of the data. Also, it depends on your definition of estimation: should they be close in the l_2 norm or do we use other metrics? As
32,862
Example of CLT when moments do not exist
Here's an answer based on @cardinal's comment: Let the sample space be that of paths of the stochastic processes $(X_i)_{i=0}^{\infty}$ and $(Y_i)_{i=0}^{\infty}$, where we let $Y_i=X_i \mathbb{1}_{\{X_i\leq 1\}}$. The Lindeberg condition (conforming with Wikipedia's notation) is satisfied, for: $$\frac{1}{s_n^2} \sum_{i=0}^n \mathbb E (Y_i^2 \mathbb{1}_{\{|Y_i|> \epsilon s_n^2\}})\leq \frac{1}{s_n^2} \sum_{i=0}^n P(|Y_i|> \epsilon s_n^2)\to0,$$ for any $\epsilon$ as $s_n^2\to \infty$ whenever $n\to \infty.$ We also have that $P(X_i\neq Y_i, i.o.) = 0$ by Borel-Cantelli since $P(X_i \neq Y_i)=2^{-i}$ so that $\sum_{i=0}^{\infty} P(X_i \neq Y_i) = 2<\infty$. Stated differently, $X_i$ and $Y_i$ differ only finitely often almost surely. Define $S_{X,n}=\sum_{i=0}^{n} X_i$ and equivalently for $S_{Y,n}$. Pick a sample path of $(X_i)_{i=1}^{\infty}$ such that $X_i > 1$ only for finitely many $i$. Index these terms by $\mathcal{J}$. Require also from this path that the $X_j,j\in \mathcal{J}$ are finite. For such a path, $$\frac{S_{\mathcal{J}}}{\sqrt{n}} \to 0,\text{ as }n\to \infty$$ where $S_{\mathcal{J}}:=\sum_{j\in \mathcal{J}}X_j$. Moreover, for large enough $n$, $$S_{X,n}-S_{Y,n}=S_{\mathcal{J}}.$$ Using the Borel-Cantelli result together with the fact that $X_i$ is almost surely finite, we see that the probability of a sample path obeying our requirements is one. In other words, the differing terms go to zero almost surely. We thus have by Slutsky's theorem that for large enough $n$, $$\frac{1}{\sqrt{n}}S_{X,n}=\frac{S_{Y,n}+S_{\mathcal{J}}}{\sqrt{n}}\overset{d}{\to}\xi+0,$$ where $\xi\sim N(0,1)$.
Example of CLT when moments do not exist
Here's an answer based on @cardinal's comment: Let the sample space be that of paths of the stochastic processes $(X_i)_{i=0}^{\infty}$ and $(Y_i)_{i=0}^{\infty}$, where we let $Y_i=X_i \mathbb{1}_{\
Example of CLT when moments do not exist Here's an answer based on @cardinal's comment: Let the sample space be that of paths of the stochastic processes $(X_i)_{i=0}^{\infty}$ and $(Y_i)_{i=0}^{\infty}$, where we let $Y_i=X_i \mathbb{1}_{\{X_i\leq 1\}}$. The Lindeberg condition (conforming with Wikipedia's notation) is satisfied, for: $$\frac{1}{s_n^2} \sum_{i=0}^n \mathbb E (Y_i^2 \mathbb{1}_{\{|Y_i|> \epsilon s_n^2\}})\leq \frac{1}{s_n^2} \sum_{i=0}^n P(|Y_i|> \epsilon s_n^2)\to0,$$ for any $\epsilon$ as $s_n^2\to \infty$ whenever $n\to \infty.$ We also have that $P(X_i\neq Y_i, i.o.) = 0$ by Borel-Cantelli since $P(X_i \neq Y_i)=2^{-i}$ so that $\sum_{i=0}^{\infty} P(X_i \neq Y_i) = 2<\infty$. Stated differently, $X_i$ and $Y_i$ differ only finitely often almost surely. Define $S_{X,n}=\sum_{i=0}^{n} X_i$ and equivalently for $S_{Y,n}$. Pick a sample path of $(X_i)_{i=1}^{\infty}$ such that $X_i > 1$ only for finitely many $i$. Index these terms by $\mathcal{J}$. Require also from this path that the $X_j,j\in \mathcal{J}$ are finite. For such a path, $$\frac{S_{\mathcal{J}}}{\sqrt{n}} \to 0,\text{ as }n\to \infty$$ where $S_{\mathcal{J}}:=\sum_{j\in \mathcal{J}}X_j$. Moreover, for large enough $n$, $$S_{X,n}-S_{Y,n}=S_{\mathcal{J}}.$$ Using the Borel-Cantelli result together with the fact that $X_i$ is almost surely finite, we see that the probability of a sample path obeying our requirements is one. In other words, the differing terms go to zero almost surely. We thus have by Slutsky's theorem that for large enough $n$, $$\frac{1}{\sqrt{n}}S_{X,n}=\frac{S_{Y,n}+S_{\mathcal{J}}}{\sqrt{n}}\overset{d}{\to}\xi+0,$$ where $\xi\sim N(0,1)$.
Example of CLT when moments do not exist Here's an answer based on @cardinal's comment: Let the sample space be that of paths of the stochastic processes $(X_i)_{i=0}^{\infty}$ and $(Y_i)_{i=0}^{\infty}$, where we let $Y_i=X_i \mathbb{1}_{\
32,863
Correlation coefficients for ordered data: Kendall's Tau vs Polychoric vs Spearman's rho
Partially answered in comments: As your Wikipedia link states, the polychoric correlation assumes that the manifest ordinal variables come from categorizing latent normal variables; Kendall's tau & Spearman's correlation do not assume this. Other than that, the differences are covered in Kendall Tau or Spearman's rho? If there is anything left that isn't already covered, please edit to clarify. – gung ( Does it mean that Polychoric is less suitable in general case? – drobnbobn ) It means polychoric is appropriate when the manifest ordinal variables came from categorizing latent normal variables & not otherwise. (In practice, it's more like when you are willing to assume this & not otherwise, since you will rarely know & can't really check the assumption.) OTOH, it probably doesn't make much difference in most cases, for an analogy, see my answer here: Difference between logit and probit models. – gung
Correlation coefficients for ordered data: Kendall's Tau vs Polychoric vs Spearman's rho
Partially answered in comments: As your Wikipedia link states, the polychoric correlation assumes that the manifest ordinal variables come from categorizing latent normal variables; Kendall's tau & S
Correlation coefficients for ordered data: Kendall's Tau vs Polychoric vs Spearman's rho Partially answered in comments: As your Wikipedia link states, the polychoric correlation assumes that the manifest ordinal variables come from categorizing latent normal variables; Kendall's tau & Spearman's correlation do not assume this. Other than that, the differences are covered in Kendall Tau or Spearman's rho? If there is anything left that isn't already covered, please edit to clarify. – gung ( Does it mean that Polychoric is less suitable in general case? – drobnbobn ) It means polychoric is appropriate when the manifest ordinal variables came from categorizing latent normal variables & not otherwise. (In practice, it's more like when you are willing to assume this & not otherwise, since you will rarely know & can't really check the assumption.) OTOH, it probably doesn't make much difference in most cases, for an analogy, see my answer here: Difference between logit and probit models. – gung
Correlation coefficients for ordered data: Kendall's Tau vs Polychoric vs Spearman's rho Partially answered in comments: As your Wikipedia link states, the polychoric correlation assumes that the manifest ordinal variables come from categorizing latent normal variables; Kendall's tau & S
32,864
When making inferences about group means, are credible Intervals sensitive to within-subject variance while confidence intervals are not?
In the answers there (if I understood correctly) I learned that within-subject variance does not effect inferences made about group means and it is ok to simply take the averages of averages to calculate group mean, then calculate within-group variance and use that to perform significance tests. Let me develop this idea here. The model for the individual observations is $$y_{ijk}= \mu_i + \alpha_{ij} + \epsilon_{ijk}$$, where : $y_{ijk}$ is the $k$-th measurement of individual $j$ of group $i$ $\alpha_{ij} \sim_{\text{iid}} {\cal N}(0, \sigma^2_b)$ is the random effect for individual $j$ of group $i$ $\epsilon_{ijk} \sim_{\text{iid}} {\cal N}(0, \sigma^2_w)$ is the within-error In my answer to your first question, I have suggested you to note that one obtains a classical (fixed effects) Gaussian linear model for the subjects means $\bar y_{ij\bullet}$. Indeed you can easily check that $$\bar y_{ij\bullet} = \mu_i + \delta_{ij}$$ with $$\delta_{ij} = \alpha_{ij} + \frac{1}{K}\sum_k \epsilon_{ijk} \sim_{\text{iid}} {\cal N}(0, \sigma^2) \quad \text{where } \quad \boxed{\sigma^2=\sigma^2_b+\frac{\sigma^2_w}{K}},$$ assuming $K$ repeated measurements for each individual. This is nothing but the one-way ANOVA model with a fixed factor. And then I claimed that in order to draw inference about the $\mu_i$ you can simply consider the simple classical linear model whose observations are the subjects means $\bar y_{ij\bullet}$. Update 12/04/2014: Some examples of this idea are now written on my blog: Reducing a model to get confidence intervals. I'm under the impression that this always work when we average the data over the levels of a random effect. In the answers there (if I understood correctly) I learned that within-subject variance does not effect inferences made about group means and it is ok to simply take the averages of averages to calculate group mean, then calculate within-group variance and use that to perform significance tests. I would like to use a method where the larger the within subject variance the less sure I am about the group means or understand why it does not make sense to desire that. As you see from the boxed formula, the within-variance $\sigma^2_w$ plays a role in the model for the observed group means.
When making inferences about group means, are credible Intervals sensitive to within-subject varianc
In the answers there (if I understood correctly) I learned that within-subject variance does not effect inferences made about group means and it is ok to simply take the averages of averages to
When making inferences about group means, are credible Intervals sensitive to within-subject variance while confidence intervals are not? In the answers there (if I understood correctly) I learned that within-subject variance does not effect inferences made about group means and it is ok to simply take the averages of averages to calculate group mean, then calculate within-group variance and use that to perform significance tests. Let me develop this idea here. The model for the individual observations is $$y_{ijk}= \mu_i + \alpha_{ij} + \epsilon_{ijk}$$, where : $y_{ijk}$ is the $k$-th measurement of individual $j$ of group $i$ $\alpha_{ij} \sim_{\text{iid}} {\cal N}(0, \sigma^2_b)$ is the random effect for individual $j$ of group $i$ $\epsilon_{ijk} \sim_{\text{iid}} {\cal N}(0, \sigma^2_w)$ is the within-error In my answer to your first question, I have suggested you to note that one obtains a classical (fixed effects) Gaussian linear model for the subjects means $\bar y_{ij\bullet}$. Indeed you can easily check that $$\bar y_{ij\bullet} = \mu_i + \delta_{ij}$$ with $$\delta_{ij} = \alpha_{ij} + \frac{1}{K}\sum_k \epsilon_{ijk} \sim_{\text{iid}} {\cal N}(0, \sigma^2) \quad \text{where } \quad \boxed{\sigma^2=\sigma^2_b+\frac{\sigma^2_w}{K}},$$ assuming $K$ repeated measurements for each individual. This is nothing but the one-way ANOVA model with a fixed factor. And then I claimed that in order to draw inference about the $\mu_i$ you can simply consider the simple classical linear model whose observations are the subjects means $\bar y_{ij\bullet}$. Update 12/04/2014: Some examples of this idea are now written on my blog: Reducing a model to get confidence intervals. I'm under the impression that this always work when we average the data over the levels of a random effect. In the answers there (if I understood correctly) I learned that within-subject variance does not effect inferences made about group means and it is ok to simply take the averages of averages to calculate group mean, then calculate within-group variance and use that to perform significance tests. I would like to use a method where the larger the within subject variance the less sure I am about the group means or understand why it does not make sense to desire that. As you see from the boxed formula, the within-variance $\sigma^2_w$ plays a role in the model for the observed group means.
When making inferences about group means, are credible Intervals sensitive to within-subject varianc In the answers there (if I understood correctly) I learned that within-subject variance does not effect inferences made about group means and it is ok to simply take the averages of averages to
32,865
Simulating data to fit a mediation model
This is quite straightforward. The reason you have no relationship between $x$ and $y$ using your approach is because of the code: y <- 2.5 + 0 * x + .4 * med + rnorm(100, sd = 1) If you want some relationship between $x$ and $y$ even when ${\rm med}$ is included (that is, you want partial mediation), you would simply use a non-zero value for $b_{32}$ instead. For example, you could substitute the following code for the above: y <- 2.5 + 3 * x + .4 * med + rnorm(100, sd = 1) Thus, $b_{32}$ has been changed from $0$ to $3$. (Of course some other, specific value would probably be more relevant, depending on your situation, I just picked $3$ off the top of my head.) Edit: With respect to the marginal $x\rightarrow y$ relationship being non-significant, that is just a function of statistical power. Since the causal force of $x$ is passed entirely through ${\rm med}$ in your original setup, you have lower power than you might otherwise. Nonetheless, the effect is still real in some sense. When I ran your original code (after having set the seed using 90 as a value that I again just picked off the top of my head), I did get a significant effect: set.seed(90) x <- rep(c(-.5, .5), 50) med <- 4 + .7 * x + rnorm(100, sd = 1) # Check the relationship between x and med mod <- lm(med ~ x) summary(mod) y <- 2.5 + 0 * x + .4 * med + rnorm(100, sd = 1) # Check the relationships between x, med, and y mod <- lm(y ~ x + med) summary(mod) # Check the relationship between x and y -- not present mod <- lm(y ~ x) summary(mod) ... Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 3.8491 0.1151 33.431 <2e-16 *** x 0.5315 0.2303 2.308 0.0231 * --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 ... To get more power, you can increase the $N$ you are using, or use smaller error values (i.e., use sd= values less than the default 1 in the rnorm() calls).
Simulating data to fit a mediation model
This is quite straightforward. The reason you have no relationship between $x$ and $y$ using your approach is because of the code: y <- 2.5 + 0 * x + .4 * med + rnorm(100, sd = 1) If you want some
Simulating data to fit a mediation model This is quite straightforward. The reason you have no relationship between $x$ and $y$ using your approach is because of the code: y <- 2.5 + 0 * x + .4 * med + rnorm(100, sd = 1) If you want some relationship between $x$ and $y$ even when ${\rm med}$ is included (that is, you want partial mediation), you would simply use a non-zero value for $b_{32}$ instead. For example, you could substitute the following code for the above: y <- 2.5 + 3 * x + .4 * med + rnorm(100, sd = 1) Thus, $b_{32}$ has been changed from $0$ to $3$. (Of course some other, specific value would probably be more relevant, depending on your situation, I just picked $3$ off the top of my head.) Edit: With respect to the marginal $x\rightarrow y$ relationship being non-significant, that is just a function of statistical power. Since the causal force of $x$ is passed entirely through ${\rm med}$ in your original setup, you have lower power than you might otherwise. Nonetheless, the effect is still real in some sense. When I ran your original code (after having set the seed using 90 as a value that I again just picked off the top of my head), I did get a significant effect: set.seed(90) x <- rep(c(-.5, .5), 50) med <- 4 + .7 * x + rnorm(100, sd = 1) # Check the relationship between x and med mod <- lm(med ~ x) summary(mod) y <- 2.5 + 0 * x + .4 * med + rnorm(100, sd = 1) # Check the relationships between x, med, and y mod <- lm(y ~ x + med) summary(mod) # Check the relationship between x and y -- not present mod <- lm(y ~ x) summary(mod) ... Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 3.8491 0.1151 33.431 <2e-16 *** x 0.5315 0.2303 2.308 0.0231 * --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 ... To get more power, you can increase the $N$ you are using, or use smaller error values (i.e., use sd= values less than the default 1 in the rnorm() calls).
Simulating data to fit a mediation model This is quite straightforward. The reason you have no relationship between $x$ and $y$ using your approach is because of the code: y <- 2.5 + 0 * x + .4 * med + rnorm(100, sd = 1) If you want some
32,866
Simulating data to fit a mediation model
Here is a paper on how to model simple mediation in Caron & Valois (2018): There R code is x <- rnorm(n) em <- sqrt(1-a^2) m <- a*x + em*rnorm(n) ey2 <- sqrt(ey) y <- cp*x + b*m + ey2*rnorm(n) data <- as.data.frame(cbind(x, m, y)) You just have to specify $n$ (the sample size), $a$, $b$ and $c'$ (direct effect). The advantage here is that you will model standardized coefficients so you'll know their effect sizes. They also included code to unstandardize, carry the Baron & Kenny, Sobel and Bca bootstrap. References Caron, P.-O., & Valois, P. (2018). A computational description of simple mediation analysis. The Quantitative Methods for Psychology, 14, 147-158. doi:10.20982/tqmp.14.2.p147
Simulating data to fit a mediation model
Here is a paper on how to model simple mediation in Caron & Valois (2018): There R code is x <- rnorm(n) em <- sqrt(1-a^2) m <- a*x + em*rnorm(n) ey2 <- sqrt(ey) y <- cp*x + b*m + ey2*rnor
Simulating data to fit a mediation model Here is a paper on how to model simple mediation in Caron & Valois (2018): There R code is x <- rnorm(n) em <- sqrt(1-a^2) m <- a*x + em*rnorm(n) ey2 <- sqrt(ey) y <- cp*x + b*m + ey2*rnorm(n) data <- as.data.frame(cbind(x, m, y)) You just have to specify $n$ (the sample size), $a$, $b$ and $c'$ (direct effect). The advantage here is that you will model standardized coefficients so you'll know their effect sizes. They also included code to unstandardize, carry the Baron & Kenny, Sobel and Bca bootstrap. References Caron, P.-O., & Valois, P. (2018). A computational description of simple mediation analysis. The Quantitative Methods for Psychology, 14, 147-158. doi:10.20982/tqmp.14.2.p147
Simulating data to fit a mediation model Here is a paper on how to model simple mediation in Caron & Valois (2018): There R code is x <- rnorm(n) em <- sqrt(1-a^2) m <- a*x + em*rnorm(n) ey2 <- sqrt(ey) y <- cp*x + b*m + ey2*rnor
32,867
What are some Bayesian alternatives to the Kolmogorov-Smirnov test?
Given that the two-sample Kolmogorov-Smirnov test is a nonparametric procedure, you have to dig into the area of Bayesian nonparametrics to find Bayesian analogous tools. Here is a paper where you can start: http://arxiv.org/abs/0910.5060 I warn you that nonparametric Bayesian theory is not easy to digest.
What are some Bayesian alternatives to the Kolmogorov-Smirnov test?
Given that the two-sample Kolmogorov-Smirnov test is a nonparametric procedure, you have to dig into the area of Bayesian nonparametrics to find Bayesian analogous tools. Here is a paper where you can
What are some Bayesian alternatives to the Kolmogorov-Smirnov test? Given that the two-sample Kolmogorov-Smirnov test is a nonparametric procedure, you have to dig into the area of Bayesian nonparametrics to find Bayesian analogous tools. Here is a paper where you can start: http://arxiv.org/abs/0910.5060 I warn you that nonparametric Bayesian theory is not easy to digest.
What are some Bayesian alternatives to the Kolmogorov-Smirnov test? Given that the two-sample Kolmogorov-Smirnov test is a nonparametric procedure, you have to dig into the area of Bayesian nonparametrics to find Bayesian analogous tools. Here is a paper where you can
32,868
How to combine multiple imputed datasets?
You can't average the data. Since the variables will be same across the imputed data, you have to append each imputed data. For example, if you have 6 variables with 1000 observations and your imputation frequency is 5 , then you will have the final data of 6 variables with 5000 observations. You use the rbind function to append the data in R. For example, if you have five imputed data (assuming that you have already these data in hand), your final data will be obtained as finaldata <- rbind(data1,data2,data3,data4,data5) For details, see here. After imputation: The regression coefficient from each imputed data will be usually different; so the coefficient is obtained as average of coefficients of all imputed data. But, there is additional rule for standard error. See here for details.
How to combine multiple imputed datasets?
You can't average the data. Since the variables will be same across the imputed data, you have to append each imputed data. For example, if you have 6 variables with 1000 observations and your imputat
How to combine multiple imputed datasets? You can't average the data. Since the variables will be same across the imputed data, you have to append each imputed data. For example, if you have 6 variables with 1000 observations and your imputation frequency is 5 , then you will have the final data of 6 variables with 5000 observations. You use the rbind function to append the data in R. For example, if you have five imputed data (assuming that you have already these data in hand), your final data will be obtained as finaldata <- rbind(data1,data2,data3,data4,data5) For details, see here. After imputation: The regression coefficient from each imputed data will be usually different; so the coefficient is obtained as average of coefficients of all imputed data. But, there is additional rule for standard error. See here for details.
How to combine multiple imputed datasets? You can't average the data. Since the variables will be same across the imputed data, you have to append each imputed data. For example, if you have 6 variables with 1000 observations and your imputat
32,869
How to combine multiple imputed datasets?
Multiple imputation models for missing data are rarely employed in practice as simulation studies suggest that the chances of the true underlying parameters lying within the cover intervals are not always accurately depicted. I would strongly recommend a testing of the process based on simulated data (with parameters known precisely), based on real data in the area of investigation. A simulation study reference https://www.google.com/url?sa=t&source=web&rct=j&ei=Ua4BVJgD5MiwBMKggKgP&url=http://www.ssc.upenn.edu/~allison/MultInt99.pdf&cd=13&ved=0CCEQFjACOAo&usg=AFQjCNF1Rg6SbFPwLv5n3jYIVNA_iTMPCg&sig2=d2VORWbqTNygdM6Z51TZEg I suspect employing say five simple/naive models for the missing data may be better in producing less bias and cover intervals that accurately include the true underlying parameters. Rather than pooling of the parmeter estimates, one may do better by employing Bayesian techniques (see work with imputation models in this light at https://www.google.com/url?sa=t&source=web&rct=j&ei=mqcAVP7RA5HoggSop4LoDw&url=http://gking.harvard.edu/files/gking/files/measure.pdf&cd=5&ved=0CCUQFjAE&usg=AFQjCNFCZQwfWJDrrjzu4_5syV44vGOncA&sig2=XZUM14OMq_A01FyN4r61Zw ). Yes, not much of a ringing endorsement of standard missing data imputation models and to quote a source, for example, http://m.circoutcomes.ahajournals.org/content/3/1/98.short?rss=1&ssource=mfr :"We describe some background of missing data analysis and criticize ad hoc methods that are prone to serious problems. We then focus on multiple imputation, in which missing cases are first filled in by several sets of plausible values to create multiple completed datasets,..." where I would insert "(?)" after plausible as naive models, for one, are not generally best described as producing plausible predictions. However, models incorporating the dependent variable y, itself, as an independent variable (so called calibration regression) may better meet this characterization.
How to combine multiple imputed datasets?
Multiple imputation models for missing data are rarely employed in practice as simulation studies suggest that the chances of the true underlying parameters lying within the cover intervals are not al
How to combine multiple imputed datasets? Multiple imputation models for missing data are rarely employed in practice as simulation studies suggest that the chances of the true underlying parameters lying within the cover intervals are not always accurately depicted. I would strongly recommend a testing of the process based on simulated data (with parameters known precisely), based on real data in the area of investigation. A simulation study reference https://www.google.com/url?sa=t&source=web&rct=j&ei=Ua4BVJgD5MiwBMKggKgP&url=http://www.ssc.upenn.edu/~allison/MultInt99.pdf&cd=13&ved=0CCEQFjACOAo&usg=AFQjCNF1Rg6SbFPwLv5n3jYIVNA_iTMPCg&sig2=d2VORWbqTNygdM6Z51TZEg I suspect employing say five simple/naive models for the missing data may be better in producing less bias and cover intervals that accurately include the true underlying parameters. Rather than pooling of the parmeter estimates, one may do better by employing Bayesian techniques (see work with imputation models in this light at https://www.google.com/url?sa=t&source=web&rct=j&ei=mqcAVP7RA5HoggSop4LoDw&url=http://gking.harvard.edu/files/gking/files/measure.pdf&cd=5&ved=0CCUQFjAE&usg=AFQjCNFCZQwfWJDrrjzu4_5syV44vGOncA&sig2=XZUM14OMq_A01FyN4r61Zw ). Yes, not much of a ringing endorsement of standard missing data imputation models and to quote a source, for example, http://m.circoutcomes.ahajournals.org/content/3/1/98.short?rss=1&ssource=mfr :"We describe some background of missing data analysis and criticize ad hoc methods that are prone to serious problems. We then focus on multiple imputation, in which missing cases are first filled in by several sets of plausible values to create multiple completed datasets,..." where I would insert "(?)" after plausible as naive models, for one, are not generally best described as producing plausible predictions. However, models incorporating the dependent variable y, itself, as an independent variable (so called calibration regression) may better meet this characterization.
How to combine multiple imputed datasets? Multiple imputation models for missing data are rarely employed in practice as simulation studies suggest that the chances of the true underlying parameters lying within the cover intervals are not al
32,870
Fourier/trigonometric interpolation
I am no expert on Fourier transforms, but... Epstein's total sample range was 24 months with a monthly sample rate: 1/12 years. Your sample range is 835 weeks. If your goal is to estimate the average for one year with data from ~16 years based on daily data you need a sample rate of 1/365 years. So substitute 52 for 12, but first standardize units and expand your 835 weeks to 835*7 = 5845 days. However, if you only have weekly data points I suggest a sample rate of 52 with a bit depth of 16 or 17 for peak analysis, alternatively 32 or 33 for even/odd comparison. So the default input options include: 1) to use the weekly means (or the median absolute deviation, MAD, or something to that extent) or 2) to use the daily values, which provide a higher resolution. Liebman et al. chose the cut-off point jmax = 2. Hence, Fig 3. contains fewer partials and is thus more symmetrical at the top of the sine compared to Fig 2. (A single partial at the base frequency would result in a pure sine wave.) If Epstein would have selected a higher resolution (e.g. jmax = 12) the transform would presumably only yield minor fluctuations with the additional components, or perhaps he lacked the computational power. Through visual inspection of your data you appear to have 16-17 peaks. I would suggest you set jmax or the "bit depth" to either 6, 11, 16 or 17 (see figure) and compare the outputs. The higher the peaks, the more they contribute to the original complex waveform. So assuming a 17-band resolution or bit depth the 17th partial contributes minimally to the original waveform pattern compared to the 6th peak. However, with a 34 band-resolution you would detect a difference between even and odd peaks as suggested by the fairly constant valleys. The bit depth depends on your research question, whether you are interested in the peaks only or in both peaks and valleys, but also how exactly you wish to approximate the original series. The Fourier analysis reduces your data points. If you were to inverse the function at a certain bit depth using a Fourier transform you could probably cross-check if the new mean estimates correspond to your original means. So, to answer your fourth question: the regression parameters you mentioned depend on the sensitivity and resolution that you require. If you do not wish for an exact fit, then by all means simply input the weekly means in the transform. However, beware that lower bit depth also reduces the data. For example, note how Epstein's harmonic overlay on Lieberman and colleagues' analysis misses the mid-point of the step function, with a skewed curve slightly to the right (i.e. temp. est. too high), in December in Figure 3. Liebman and Colleagues' Parameters: Bit Depth: 2 Epstein's Parameters: Sample Rate: 12 [every month] Sample Range: 24 months Bit Depth: 6 Your Parameters: Sample Rate: 365 [every day] Sample Range: 5845 days Exact Bit Depth Approach Exact fit based on visual inspection. (If you have the power, just see what happens compared to lower bit-depths.) Full Spectrum (peaks): 17 Full Spectrum (even/odd): 34 Variable Bit Depth Approach This is probably what you wish to do: Compare Peaks Only: 6, 11, 16, 17 Compare Even/Odd: 12, 22, 32, 34 Resynthesize and compare means This approach would yield something similar to the comparison of Figures in Epstein if you inverse the transformation again, i.e. synthesise the partials into an approximation of the original time series. You could also compare the discrete points of the resynthesized curves to the mean values, perhaps even test for significant differences to indicate the sensitivity of your bit depth choice. UPDATE 1: Bit Depth A bit - short for binary digit - is either 0 or 1. The bits 010101 would describe a square wave. The bit depth is 1 bit. To describe a saw wave you would need more bits: 0123210. The more complex a wave gets the more bits you need: This is a somewhat simplified explanation, but the more complex a time series is, the more bits are required to model it. Actually, "1" is a sine wave component and not a square wave (a square wave is more like 3 2 1 0 - see attached figure). 0 bits would be a flat line. Information gets lost with reduction of bit depth. For example, CD-quality audio is usually 16 bit, but land-line phone quality audio is often around 8 bits. Please read this image from left to right, focusing on the graphs: You have actually just completed a power spectrum analysis (although at high resolution in your figure). Your next goal would be to figure out: How many components do I need in the power spectrum in order to accurately capture the means of the time series? UPDATE 2 To Filter or not to Filter I am not entirely sure how you would introduce the constraint in the regression as I am only familiar with interval constraints, but perhaps DSP is your solution. This is what I figured so far: Step 1. Break down the series into sinus components through Fourier function on the complete data set (in days) Step 2. Recreate the time series through an inverse Fourier transform, with the additional mean-constraint coupled to the original data: the deviations of the interpolations from the original means should cancel out each other (Harzallah, 1995). My best guess is that you would have to introduce autoregression if I understand Harzallah (1995, Fig 2) correctly. So that would probably correspond to an infinite response filter (IIR)? IIR http://paulbourke.net/miscellaneous/ar/ In summary: Derive means from Raw data Fourier Transform Raw data Fourier Inverse Transform transformed Data. Filter the result using IIR Perhaps you could use an IIR filter without going through the Fourier analysis? The only advantage of the Fourier analysis as I see it is to isolate and determine which patterns are influential and how often they do reoccur (i.e. oscillate). You could then decide to filter out the ones that contribute less, for example using a narrow notch filter at the least contributing peak (or filter based on your own criteria). For starters, you could filter out the less contributing odd valleys that appear more like noise in the "signal". Noise is characterized by very few cases and no pattern. A comb filter at odd frequency components could reduce the noise - unless you find a pattern there. Here's some arbitrary binning—for explanatory purposes only: Oops - There's an R Function for that!? When searching for an IIR-filter I happen to discovered the R functions interpolate in the Signal package. Forget everything I said up to this point. The interpolations should work like Harzallah's: http://cran.r-project.org/web/packages/signal/signal.pdf Play around with the functions. Should do the trick. UPDATE 3 interp1 not interp case.interp1 <- interp1(x=(ts.frame$no.influ.cases[!is.na(ts.frame$no.influ.case)]),y=ts.frame$yearday[!is.na(ts.frame$no.influ.case)],xi=mean(WEEKLYMEANSTABLE),method = c("cubic")) Set xi to the original weekly means.
Fourier/trigonometric interpolation
I am no expert on Fourier transforms, but... Epstein's total sample range was 24 months with a monthly sample rate: 1/12 years. Your sample range is 835 weeks. If your goal is to estimate the average
Fourier/trigonometric interpolation I am no expert on Fourier transforms, but... Epstein's total sample range was 24 months with a monthly sample rate: 1/12 years. Your sample range is 835 weeks. If your goal is to estimate the average for one year with data from ~16 years based on daily data you need a sample rate of 1/365 years. So substitute 52 for 12, but first standardize units and expand your 835 weeks to 835*7 = 5845 days. However, if you only have weekly data points I suggest a sample rate of 52 with a bit depth of 16 or 17 for peak analysis, alternatively 32 or 33 for even/odd comparison. So the default input options include: 1) to use the weekly means (or the median absolute deviation, MAD, or something to that extent) or 2) to use the daily values, which provide a higher resolution. Liebman et al. chose the cut-off point jmax = 2. Hence, Fig 3. contains fewer partials and is thus more symmetrical at the top of the sine compared to Fig 2. (A single partial at the base frequency would result in a pure sine wave.) If Epstein would have selected a higher resolution (e.g. jmax = 12) the transform would presumably only yield minor fluctuations with the additional components, or perhaps he lacked the computational power. Through visual inspection of your data you appear to have 16-17 peaks. I would suggest you set jmax or the "bit depth" to either 6, 11, 16 or 17 (see figure) and compare the outputs. The higher the peaks, the more they contribute to the original complex waveform. So assuming a 17-band resolution or bit depth the 17th partial contributes minimally to the original waveform pattern compared to the 6th peak. However, with a 34 band-resolution you would detect a difference between even and odd peaks as suggested by the fairly constant valleys. The bit depth depends on your research question, whether you are interested in the peaks only or in both peaks and valleys, but also how exactly you wish to approximate the original series. The Fourier analysis reduces your data points. If you were to inverse the function at a certain bit depth using a Fourier transform you could probably cross-check if the new mean estimates correspond to your original means. So, to answer your fourth question: the regression parameters you mentioned depend on the sensitivity and resolution that you require. If you do not wish for an exact fit, then by all means simply input the weekly means in the transform. However, beware that lower bit depth also reduces the data. For example, note how Epstein's harmonic overlay on Lieberman and colleagues' analysis misses the mid-point of the step function, with a skewed curve slightly to the right (i.e. temp. est. too high), in December in Figure 3. Liebman and Colleagues' Parameters: Bit Depth: 2 Epstein's Parameters: Sample Rate: 12 [every month] Sample Range: 24 months Bit Depth: 6 Your Parameters: Sample Rate: 365 [every day] Sample Range: 5845 days Exact Bit Depth Approach Exact fit based on visual inspection. (If you have the power, just see what happens compared to lower bit-depths.) Full Spectrum (peaks): 17 Full Spectrum (even/odd): 34 Variable Bit Depth Approach This is probably what you wish to do: Compare Peaks Only: 6, 11, 16, 17 Compare Even/Odd: 12, 22, 32, 34 Resynthesize and compare means This approach would yield something similar to the comparison of Figures in Epstein if you inverse the transformation again, i.e. synthesise the partials into an approximation of the original time series. You could also compare the discrete points of the resynthesized curves to the mean values, perhaps even test for significant differences to indicate the sensitivity of your bit depth choice. UPDATE 1: Bit Depth A bit - short for binary digit - is either 0 or 1. The bits 010101 would describe a square wave. The bit depth is 1 bit. To describe a saw wave you would need more bits: 0123210. The more complex a wave gets the more bits you need: This is a somewhat simplified explanation, but the more complex a time series is, the more bits are required to model it. Actually, "1" is a sine wave component and not a square wave (a square wave is more like 3 2 1 0 - see attached figure). 0 bits would be a flat line. Information gets lost with reduction of bit depth. For example, CD-quality audio is usually 16 bit, but land-line phone quality audio is often around 8 bits. Please read this image from left to right, focusing on the graphs: You have actually just completed a power spectrum analysis (although at high resolution in your figure). Your next goal would be to figure out: How many components do I need in the power spectrum in order to accurately capture the means of the time series? UPDATE 2 To Filter or not to Filter I am not entirely sure how you would introduce the constraint in the regression as I am only familiar with interval constraints, but perhaps DSP is your solution. This is what I figured so far: Step 1. Break down the series into sinus components through Fourier function on the complete data set (in days) Step 2. Recreate the time series through an inverse Fourier transform, with the additional mean-constraint coupled to the original data: the deviations of the interpolations from the original means should cancel out each other (Harzallah, 1995). My best guess is that you would have to introduce autoregression if I understand Harzallah (1995, Fig 2) correctly. So that would probably correspond to an infinite response filter (IIR)? IIR http://paulbourke.net/miscellaneous/ar/ In summary: Derive means from Raw data Fourier Transform Raw data Fourier Inverse Transform transformed Data. Filter the result using IIR Perhaps you could use an IIR filter without going through the Fourier analysis? The only advantage of the Fourier analysis as I see it is to isolate and determine which patterns are influential and how often they do reoccur (i.e. oscillate). You could then decide to filter out the ones that contribute less, for example using a narrow notch filter at the least contributing peak (or filter based on your own criteria). For starters, you could filter out the less contributing odd valleys that appear more like noise in the "signal". Noise is characterized by very few cases and no pattern. A comb filter at odd frequency components could reduce the noise - unless you find a pattern there. Here's some arbitrary binning—for explanatory purposes only: Oops - There's an R Function for that!? When searching for an IIR-filter I happen to discovered the R functions interpolate in the Signal package. Forget everything I said up to this point. The interpolations should work like Harzallah's: http://cran.r-project.org/web/packages/signal/signal.pdf Play around with the functions. Should do the trick. UPDATE 3 interp1 not interp case.interp1 <- interp1(x=(ts.frame$no.influ.cases[!is.na(ts.frame$no.influ.case)]),y=ts.frame$yearday[!is.na(ts.frame$no.influ.case)],xi=mean(WEEKLYMEANSTABLE),method = c("cubic")) Set xi to the original weekly means.
Fourier/trigonometric interpolation I am no expert on Fourier transforms, but... Epstein's total sample range was 24 months with a monthly sample rate: 1/12 years. Your sample range is 835 weeks. If your goal is to estimate the average
32,871
Large scale Cox regression with R (Big Data)
I run cox regression on a 7'000'000 observation dataset using R and this is not a problem. Indeed, on bivariate models I get the estimates in 52 seconds. I suggest that it is -as often with R- a problem related to the RAM available. You may need at least 12GB to run the model smoothly.
Large scale Cox regression with R (Big Data)
I run cox regression on a 7'000'000 observation dataset using R and this is not a problem. Indeed, on bivariate models I get the estimates in 52 seconds. I suggest that it is -as often with R- a probl
Large scale Cox regression with R (Big Data) I run cox regression on a 7'000'000 observation dataset using R and this is not a problem. Indeed, on bivariate models I get the estimates in 52 seconds. I suggest that it is -as often with R- a problem related to the RAM available. You may need at least 12GB to run the model smoothly.
Large scale Cox regression with R (Big Data) I run cox regression on a 7'000'000 observation dataset using R and this is not a problem. Indeed, on bivariate models I get the estimates in 52 seconds. I suggest that it is -as often with R- a probl
32,872
Large scale Cox regression with R (Big Data)
I went directly to the hardcore fit function (agreg.fit), which under the hood is called for the computations: n <- nrow(test) y <- as.matrix(test[, 1:3]) attr(y, "type") <- "right" x <- matrix(1:11, n, 11, byrow=TRUE) colnames(x) <- paste("level", 1:11, sep="") x <- x[, -2] == test$testfactor mode(x) = "numeric" fit2 <- agreg.fit(x, y, strata=NULL, control=coxph.control(), method="efron", init=rep(0, 10), rownames=1:n) However, the time elapsed when doubling the sample size gets quadratic as you mentioned. Also decreasing the epsilon in coxph.control does not help.
Large scale Cox regression with R (Big Data)
I went directly to the hardcore fit function (agreg.fit), which under the hood is called for the computations: n <- nrow(test) y <- as.matrix(test[, 1:3]) attr(y, "type") <- "right" x <- matrix(1:11,
Large scale Cox regression with R (Big Data) I went directly to the hardcore fit function (agreg.fit), which under the hood is called for the computations: n <- nrow(test) y <- as.matrix(test[, 1:3]) attr(y, "type") <- "right" x <- matrix(1:11, n, 11, byrow=TRUE) colnames(x) <- paste("level", 1:11, sep="") x <- x[, -2] == test$testfactor mode(x) = "numeric" fit2 <- agreg.fit(x, y, strata=NULL, control=coxph.control(), method="efron", init=rep(0, 10), rownames=1:n) However, the time elapsed when doubling the sample size gets quadratic as you mentioned. Also decreasing the epsilon in coxph.control does not help.
Large scale Cox regression with R (Big Data) I went directly to the hardcore fit function (agreg.fit), which under the hood is called for the computations: n <- nrow(test) y <- as.matrix(test[, 1:3]) attr(y, "type") <- "right" x <- matrix(1:11,
32,873
Survival analysis for event prediction
Is the survival analysis suited for my purposes? The only thing that makes this seem less applicable for survival analysis is: ... $TT$ might be missing if there was no event or set to time the follow-up ended. You will need to know the last period the individual was observed to be alive at for most models. Otherwise it should be straightforward and applicable to use survival analysis. E.g. Cox proportional hazard with survival::coxph in R or a parametric models with survival::survreg. The mean survival time, that can be computed for each record, seems a nice risk index - the lower the higher the risk is. Yes, you can use the mean survival times or just the linear predictor for the two former mentioned (classes of) models. How can I evaluate the performance of my model? The $c$ index seems like a sensible choice to me as "natural" generalization of the AUC. Note that is implemented in R with e.g. Hmisc::rcorr.cens.
Survival analysis for event prediction
Is the survival analysis suited for my purposes? The only thing that makes this seem less applicable for survival analysis is: ... $TT$ might be missing if there was no event or set to time the foll
Survival analysis for event prediction Is the survival analysis suited for my purposes? The only thing that makes this seem less applicable for survival analysis is: ... $TT$ might be missing if there was no event or set to time the follow-up ended. You will need to know the last period the individual was observed to be alive at for most models. Otherwise it should be straightforward and applicable to use survival analysis. E.g. Cox proportional hazard with survival::coxph in R or a parametric models with survival::survreg. The mean survival time, that can be computed for each record, seems a nice risk index - the lower the higher the risk is. Yes, you can use the mean survival times or just the linear predictor for the two former mentioned (classes of) models. How can I evaluate the performance of my model? The $c$ index seems like a sensible choice to me as "natural" generalization of the AUC. Note that is implemented in R with e.g. Hmisc::rcorr.cens.
Survival analysis for event prediction Is the survival analysis suited for my purposes? The only thing that makes this seem less applicable for survival analysis is: ... $TT$ might be missing if there was no event or set to time the foll
32,874
How to show operations on two random variables (each Bernoulli) are dependent but not correlated?
Here you can do a separation of the cases because there are very few. Here we go: $X = 0; Y = 0 \Rightarrow |X - Y| = 0;(X+Y) = 0;(X+Y)|X-Y| = 0$ $X = 1; Y = 0 \Rightarrow |X - Y| = 1;(X+Y) = 1;(X+Y)|X-Y| = 1$ $X = 0; Y = 1 \Rightarrow |X - Y| = 1;(X+Y) = 1;(X+Y)|X-Y| = 1$ $X = 1; Y = 1 \Rightarrow |X - Y| = 0;(X+Y) = 2;(X+Y)|X-Y| = 0$ By giving a $1/4$ weight to each of these cases, you should find that the expected value of $E\left((X+Y)|X-Y|\right)$ is $1/2$ and not $0$. But the covariance is $$E\left((X+Y)|X-Y|\right) - E(X+Y)E(|X-Y|) = 1/2 - 1 \cdot 1/2 = 0.$$
How to show operations on two random variables (each Bernoulli) are dependent but not correlated?
Here you can do a separation of the cases because there are very few. Here we go: $X = 0; Y = 0 \Rightarrow |X - Y| = 0;(X+Y) = 0;(X+Y)|X-Y| = 0$ $X = 1; Y = 0 \Rightarrow |X - Y| = 1;(X+Y) = 1;(X+Y)|
How to show operations on two random variables (each Bernoulli) are dependent but not correlated? Here you can do a separation of the cases because there are very few. Here we go: $X = 0; Y = 0 \Rightarrow |X - Y| = 0;(X+Y) = 0;(X+Y)|X-Y| = 0$ $X = 1; Y = 0 \Rightarrow |X - Y| = 1;(X+Y) = 1;(X+Y)|X-Y| = 1$ $X = 0; Y = 1 \Rightarrow |X - Y| = 1;(X+Y) = 1;(X+Y)|X-Y| = 1$ $X = 1; Y = 1 \Rightarrow |X - Y| = 0;(X+Y) = 2;(X+Y)|X-Y| = 0$ By giving a $1/4$ weight to each of these cases, you should find that the expected value of $E\left((X+Y)|X-Y|\right)$ is $1/2$ and not $0$. But the covariance is $$E\left((X+Y)|X-Y|\right) - E(X+Y)E(|X-Y|) = 1/2 - 1 \cdot 1/2 = 0.$$
How to show operations on two random variables (each Bernoulli) are dependent but not correlated? Here you can do a separation of the cases because there are very few. Here we go: $X = 0; Y = 0 \Rightarrow |X - Y| = 0;(X+Y) = 0;(X+Y)|X-Y| = 0$ $X = 1; Y = 0 \Rightarrow |X - Y| = 1;(X+Y) = 1;(X+Y)|
32,875
rpart and the printcp function
You may multiply the 'Root node error' by the 'rel error'. If you do so, $0.2997 \times 0.18726 \approx 0.056$ which is the error you obtained when doing cross-validation.
rpart and the printcp function
You may multiply the 'Root node error' by the 'rel error'. If you do so, $0.2997 \times 0.18726 \approx 0.056$ which is the error you obtained when doing cross-validation.
rpart and the printcp function You may multiply the 'Root node error' by the 'rel error'. If you do so, $0.2997 \times 0.18726 \approx 0.056$ which is the error you obtained when doing cross-validation.
rpart and the printcp function You may multiply the 'Root node error' by the 'rel error'. If you do so, $0.2997 \times 0.18726 \approx 0.056$ which is the error you obtained when doing cross-validation.
32,876
How to compare observed vs. expected events?
You mention that you get different results if you multiply all values by $1342$. This is not a problem. You should get very different results. If you flip a coin and it comes up heads, this doesn't say very much. If you flip a coin $1342$ times and you get heads every time, you have much more information suggesting that the coin is not fair. Usually you want to use alternatives to a $\chi^2$ test when the expected number of occurrences is so low (say, under $5$) in a large percentage of your categories (say, at least $20\%$). One possibility is Fisher's exact test, which is implemented in R. You can view the $\chi^2$ test as an approximation to Fisher's exact test, and the approximation is only good when more of the expected counts are large.
How to compare observed vs. expected events?
You mention that you get different results if you multiply all values by $1342$. This is not a problem. You should get very different results. If you flip a coin and it comes up heads, this doesn't sa
How to compare observed vs. expected events? You mention that you get different results if you multiply all values by $1342$. This is not a problem. You should get very different results. If you flip a coin and it comes up heads, this doesn't say very much. If you flip a coin $1342$ times and you get heads every time, you have much more information suggesting that the coin is not fair. Usually you want to use alternatives to a $\chi^2$ test when the expected number of occurrences is so low (say, under $5$) in a large percentage of your categories (say, at least $20\%$). One possibility is Fisher's exact test, which is implemented in R. You can view the $\chi^2$ test as an approximation to Fisher's exact test, and the approximation is only good when more of the expected counts are large.
How to compare observed vs. expected events? You mention that you get different results if you multiply all values by $1342$. This is not a problem. You should get very different results. If you flip a coin and it comes up heads, this doesn't sa
32,877
Using econometrics, how do I solve out the endogeneity problem?
As many have already answered, one of the easiest way to correct for endogeneity is a instrumental variable (IV) using a 2-stage least square regression (2SLS). Another method, is using a Heckman correction. For details, on the Heckman correction see his paper "Sample Selection Dias as a Specification Error" Econometrica Vol. 47, No. 1 (1979). However, instead of reading the whole paper, whatever software package you are using will probably have it already builded in. The paper is posted at the following url: http://vanpelt.sonoma.edu/users/c/cuellar/econ411/Heckman.pdf However, not sure how long the link will stay open. Hope this helps! Good Luck!
Using econometrics, how do I solve out the endogeneity problem?
As many have already answered, one of the easiest way to correct for endogeneity is a instrumental variable (IV) using a 2-stage least square regression (2SLS). Another method, is using a Heckman corr
Using econometrics, how do I solve out the endogeneity problem? As many have already answered, one of the easiest way to correct for endogeneity is a instrumental variable (IV) using a 2-stage least square regression (2SLS). Another method, is using a Heckman correction. For details, on the Heckman correction see his paper "Sample Selection Dias as a Specification Error" Econometrica Vol. 47, No. 1 (1979). However, instead of reading the whole paper, whatever software package you are using will probably have it already builded in. The paper is posted at the following url: http://vanpelt.sonoma.edu/users/c/cuellar/econ411/Heckman.pdf However, not sure how long the link will stay open. Hope this helps! Good Luck!
Using econometrics, how do I solve out the endogeneity problem? As many have already answered, one of the easiest way to correct for endogeneity is a instrumental variable (IV) using a 2-stage least square regression (2SLS). Another method, is using a Heckman corr
32,878
Using econometrics, how do I solve out the endogeneity problem?
Since this is a question with a clear break, ie. when the legislation came into place, it may be helpful to look at matching estimators and regression discontinuity designs. These are often very good alternatives (or compliments) to the classic IV estimator. I would even argue they are often better than the classic IV since, as has already been mentioned, a good IV is very difficult to find.
Using econometrics, how do I solve out the endogeneity problem?
Since this is a question with a clear break, ie. when the legislation came into place, it may be helpful to look at matching estimators and regression discontinuity designs. These are often very good
Using econometrics, how do I solve out the endogeneity problem? Since this is a question with a clear break, ie. when the legislation came into place, it may be helpful to look at matching estimators and regression discontinuity designs. These are often very good alternatives (or compliments) to the classic IV estimator. I would even argue they are often better than the classic IV since, as has already been mentioned, a good IV is very difficult to find.
Using econometrics, how do I solve out the endogeneity problem? Since this is a question with a clear break, ie. when the legislation came into place, it may be helpful to look at matching estimators and regression discontinuity designs. These are often very good
32,879
Does a stationary process implies a normal distribution of the data?
There was a discussion about this on dsp.SE some months ago. My answer there might help resolve some of the issues that the OP has. Added in response to @Alexis's complaint: In part, what I said there is as follows: All the random variables in the process have identical CDFs: $F_{X(t_1)}(x) = F_{X(t_2)}(x)$ for all $t_1, t_2$. Any two random variables separated by some specified amount of time have the same joint CDF as any other pair of random variables separated by the same amount of time. For example, the random variables $X(t_1)$ and $X(t_1 + \tau)$ are separated by $\tau$ seconds, as are the random variables $X(t_2)$ and $X(t_2 + \tau)$, and thus $F_{X(t_1), X(t_1 + \tau)}(x,y) = F_{X(t_2), X(t_2 + \tau)}(x,y)$ Any three random variables $X(t_1)$, $X(t_1 + \tau_1)$, $X(t_1 + \tau_1 + \tau_2)$ spaced $\tau_1$ and $\tau_2$ apart have the same joint CDF as $X(t_2)$, $X(t_2 + \tau_1)$, $X(t_2 + \tau_1 + \tau_2)$ which as also spaced $\tau_1$ and $\tau_2$ apart. Equivalently, the joint CDF of $X(t_1), X(t_2), X(t_3)$ is the same as the joint CDF of $X(t_1+\tau), X(t_2+\tau), X(t_3+\tau)$ and similarly for all multidimensional CDFs. Effectively, the probabilistic descriptions of the random process do not depend on what we choose to call the origin on the time axis: shifting all time instants $t_1, t_2, \ldots, t_n$ by some fixed amount $\tau$ to $t_1 + \tau, t_2 + \tau, \ldots, t_n + \tau$ gives the same probabilistic description of the random variables. This property is called strict-sense stationarity and a random process that enjoys this property is called a strictly stationary random process or, more simply, a stationary random process. Be aware that in some of the statistics literature (especially the parts related to econometrics and time-series analysis), stationary processes are defined somewhat differently; in fact as what are described later in this answer as wide-sense stationary processes. Note that strict stationarity by itself does not require any particular form of CDF. For example, it does not say that all the variables are Gaussian. Later in the same answer, I say about wide-sense-stationary (or WSS) random processes a.k.a. weakly stationary stochastic (or WSS) processes.... Note that the definition says nothing about the CDFs of the random variables comprising the process; it is entirely a constraint on the first-order and second-order moments of the random variables. Thus once again, there is nothing in the definition of stationary processes (in the time series folks' meaning of the word) that requires the random variables to be normally distributed,
Does a stationary process implies a normal distribution of the data?
There was a discussion about this on dsp.SE some months ago. My answer there might help resolve some of the issues that the OP has. Added in response to @Alexis's complaint: In part, what I said ther
Does a stationary process implies a normal distribution of the data? There was a discussion about this on dsp.SE some months ago. My answer there might help resolve some of the issues that the OP has. Added in response to @Alexis's complaint: In part, what I said there is as follows: All the random variables in the process have identical CDFs: $F_{X(t_1)}(x) = F_{X(t_2)}(x)$ for all $t_1, t_2$. Any two random variables separated by some specified amount of time have the same joint CDF as any other pair of random variables separated by the same amount of time. For example, the random variables $X(t_1)$ and $X(t_1 + \tau)$ are separated by $\tau$ seconds, as are the random variables $X(t_2)$ and $X(t_2 + \tau)$, and thus $F_{X(t_1), X(t_1 + \tau)}(x,y) = F_{X(t_2), X(t_2 + \tau)}(x,y)$ Any three random variables $X(t_1)$, $X(t_1 + \tau_1)$, $X(t_1 + \tau_1 + \tau_2)$ spaced $\tau_1$ and $\tau_2$ apart have the same joint CDF as $X(t_2)$, $X(t_2 + \tau_1)$, $X(t_2 + \tau_1 + \tau_2)$ which as also spaced $\tau_1$ and $\tau_2$ apart. Equivalently, the joint CDF of $X(t_1), X(t_2), X(t_3)$ is the same as the joint CDF of $X(t_1+\tau), X(t_2+\tau), X(t_3+\tau)$ and similarly for all multidimensional CDFs. Effectively, the probabilistic descriptions of the random process do not depend on what we choose to call the origin on the time axis: shifting all time instants $t_1, t_2, \ldots, t_n$ by some fixed amount $\tau$ to $t_1 + \tau, t_2 + \tau, \ldots, t_n + \tau$ gives the same probabilistic description of the random variables. This property is called strict-sense stationarity and a random process that enjoys this property is called a strictly stationary random process or, more simply, a stationary random process. Be aware that in some of the statistics literature (especially the parts related to econometrics and time-series analysis), stationary processes are defined somewhat differently; in fact as what are described later in this answer as wide-sense stationary processes. Note that strict stationarity by itself does not require any particular form of CDF. For example, it does not say that all the variables are Gaussian. Later in the same answer, I say about wide-sense-stationary (or WSS) random processes a.k.a. weakly stationary stochastic (or WSS) processes.... Note that the definition says nothing about the CDFs of the random variables comprising the process; it is entirely a constraint on the first-order and second-order moments of the random variables. Thus once again, there is nothing in the definition of stationary processes (in the time series folks' meaning of the word) that requires the random variables to be normally distributed,
Does a stationary process implies a normal distribution of the data? There was a discussion about this on dsp.SE some months ago. My answer there might help resolve some of the issues that the OP has. Added in response to @Alexis's complaint: In part, what I said ther
32,880
Does a stationary process implies a normal distribution of the data?
Consider the time-series process $\{ X_t | t \in \mathbb{Z} \}$ with IID binary outcomes with marginal probabilities: $$\mathbb{P}(X_t = 0) = \mathbb{P}(X_t = 1) = \tfrac{1}{2} \quad \quad \quad \text{for all } t \in \mathbb{Z}.$$ Is this process (weakly or strongly) stationary? Is the data from this process normally distributed?
Does a stationary process implies a normal distribution of the data?
Consider the time-series process $\{ X_t | t \in \mathbb{Z} \}$ with IID binary outcomes with marginal probabilities: $$\mathbb{P}(X_t = 0) = \mathbb{P}(X_t = 1) = \tfrac{1}{2} \quad \quad \quad \text
Does a stationary process implies a normal distribution of the data? Consider the time-series process $\{ X_t | t \in \mathbb{Z} \}$ with IID binary outcomes with marginal probabilities: $$\mathbb{P}(X_t = 0) = \mathbb{P}(X_t = 1) = \tfrac{1}{2} \quad \quad \quad \text{for all } t \in \mathbb{Z}.$$ Is this process (weakly or strongly) stationary? Is the data from this process normally distributed?
Does a stationary process implies a normal distribution of the data? Consider the time-series process $\{ X_t | t \in \mathbb{Z} \}$ with IID binary outcomes with marginal probabilities: $$\mathbb{P}(X_t = 0) = \mathbb{P}(X_t = 1) = \tfrac{1}{2} \quad \quad \quad \text
32,881
Does a stationary process implies a normal distribution of the data?
NO For a counterexample, consider drawing $iid$ observations from a non-normal distribution, such as $exp(1)$. The distribution does not change over time (drawing from rexp in R or np.random.exponential in Python is the same whether you do it today or tomorrow), so the process is stationary, but the distribution is not normal.
Does a stationary process implies a normal distribution of the data?
NO For a counterexample, consider drawing $iid$ observations from a non-normal distribution, such as $exp(1)$. The distribution does not change over time (drawing from rexp in R or np.random.exponenti
Does a stationary process implies a normal distribution of the data? NO For a counterexample, consider drawing $iid$ observations from a non-normal distribution, such as $exp(1)$. The distribution does not change over time (drawing from rexp in R or np.random.exponential in Python is the same whether you do it today or tomorrow), so the process is stationary, but the distribution is not normal.
Does a stationary process implies a normal distribution of the data? NO For a counterexample, consider drawing $iid$ observations from a non-normal distribution, such as $exp(1)$. The distribution does not change over time (drawing from rexp in R or np.random.exponenti
32,882
When to choose PCA vs. LSA/LSI
One difference I noted was that PCA can only give you either the term-term or Document-Document similarity (depending on how you multiplied the coreference matrix $AA^*$ or $A^*A$) but SVD/LSA can deliver both since you have eigenvectors of both $AA^*$ and $A^*A$. Actually I don't see a reason to use PCA ever over SVD.
When to choose PCA vs. LSA/LSI
One difference I noted was that PCA can only give you either the term-term or Document-Document similarity (depending on how you multiplied the coreference matrix $AA^*$ or $A^*A$) but SVD/LSA can del
When to choose PCA vs. LSA/LSI One difference I noted was that PCA can only give you either the term-term or Document-Document similarity (depending on how you multiplied the coreference matrix $AA^*$ or $A^*A$) but SVD/LSA can deliver both since you have eigenvectors of both $AA^*$ and $A^*A$. Actually I don't see a reason to use PCA ever over SVD.
When to choose PCA vs. LSA/LSI One difference I noted was that PCA can only give you either the term-term or Document-Document similarity (depending on how you multiplied the coreference matrix $AA^*$ or $A^*A$) but SVD/LSA can del
32,883
Is VAR a MANOVA with auto regression?
Strictly speaking VAR has no 'explanatory' variables - everything is assumed to be endogenous. In VAR, a time series of multivariate dependent variables is assumed to be predictable on the basis of its joint past, back a certain number of time steps (the 'lag'). VARX, in contrast, is what a VAR model looks like when it also has a time series of explanatory variables. The X series that run parallel to the multivariate Y is typically just assumed to be exogenous. Like a VARX model, MANOVA has multivariate dependent variable and also explanatory variables that are assumed to exogenous. However, there is no time series structure assumed between Y variables and therefore no lagged terms in the model. MANOVA need not always be applied to experimental data, though it often is, and that makes the exogeneity assumption for X plausible. It is, underneath, simply a linear regression model with a multivariate dependent variable. Likewise, VAR is, underneath, a system of multivariate regressions predicting the present of one part of the dependent variable on the basis of its past and the pasts of the other parts of the dependent variable. This leads to a second difference in practice. Often VAR models assume a diagonal covariance for the dependent variable, which means that model decomposes into a separately estimable sequence of linear regressions, one for each part of the dependent variable. MANOVA is typically applied when there is contemporaneous correlation between elements of the dependent variable that are not explainable by exogenous factors or the past. Lütkepohl (2005) is a standard (updated) work VAR and related time series models.
Is VAR a MANOVA with auto regression?
Strictly speaking VAR has no 'explanatory' variables - everything is assumed to be endogenous. In VAR, a time series of multivariate dependent variables is assumed to be predictable on the basis of it
Is VAR a MANOVA with auto regression? Strictly speaking VAR has no 'explanatory' variables - everything is assumed to be endogenous. In VAR, a time series of multivariate dependent variables is assumed to be predictable on the basis of its joint past, back a certain number of time steps (the 'lag'). VARX, in contrast, is what a VAR model looks like when it also has a time series of explanatory variables. The X series that run parallel to the multivariate Y is typically just assumed to be exogenous. Like a VARX model, MANOVA has multivariate dependent variable and also explanatory variables that are assumed to exogenous. However, there is no time series structure assumed between Y variables and therefore no lagged terms in the model. MANOVA need not always be applied to experimental data, though it often is, and that makes the exogeneity assumption for X plausible. It is, underneath, simply a linear regression model with a multivariate dependent variable. Likewise, VAR is, underneath, a system of multivariate regressions predicting the present of one part of the dependent variable on the basis of its past and the pasts of the other parts of the dependent variable. This leads to a second difference in practice. Often VAR models assume a diagonal covariance for the dependent variable, which means that model decomposes into a separately estimable sequence of linear regressions, one for each part of the dependent variable. MANOVA is typically applied when there is contemporaneous correlation between elements of the dependent variable that are not explainable by exogenous factors or the past. Lütkepohl (2005) is a standard (updated) work VAR and related time series models.
Is VAR a MANOVA with auto regression? Strictly speaking VAR has no 'explanatory' variables - everything is assumed to be endogenous. In VAR, a time series of multivariate dependent variables is assumed to be predictable on the basis of it
32,884
Is VAR a MANOVA with auto regression?
I like to think about the difference this way: VAR is a system of regressions with lagged dependent variables and some other independent variables observed over time (observational data). MANOVA is an advanced version of ANOVA, where there are more than one response is being measured (experimental data). The response or the dependent variable for both is not univariate. It is a vector of dependent variables.
Is VAR a MANOVA with auto regression?
I like to think about the difference this way: VAR is a system of regressions with lagged dependent variables and some other independent variables observed over time (observational data). MANOVA is an
Is VAR a MANOVA with auto regression? I like to think about the difference this way: VAR is a system of regressions with lagged dependent variables and some other independent variables observed over time (observational data). MANOVA is an advanced version of ANOVA, where there are more than one response is being measured (experimental data). The response or the dependent variable for both is not univariate. It is a vector of dependent variables.
Is VAR a MANOVA with auto regression? I like to think about the difference this way: VAR is a system of regressions with lagged dependent variables and some other independent variables observed over time (observational data). MANOVA is an
32,885
In which setting would you expect model found by LARS to differ most from the model found by exhaustive search?
Here is the description of the LARS algorithm: http://www-stat.stanford.edu/~tibs/lasso/simple.html It kind of ignores the correlation between the regressors so I would venture to guess that it might miss out on the fit in case of multicollinearity.
In which setting would you expect model found by LARS to differ most from the model found by exhaust
Here is the description of the LARS algorithm: http://www-stat.stanford.edu/~tibs/lasso/simple.html It kind of ignores the correlation between the regressors so I would venture to guess that it might
In which setting would you expect model found by LARS to differ most from the model found by exhaustive search? Here is the description of the LARS algorithm: http://www-stat.stanford.edu/~tibs/lasso/simple.html It kind of ignores the correlation between the regressors so I would venture to guess that it might miss out on the fit in case of multicollinearity.
In which setting would you expect model found by LARS to differ most from the model found by exhaust Here is the description of the LARS algorithm: http://www-stat.stanford.edu/~tibs/lasso/simple.html It kind of ignores the correlation between the regressors so I would venture to guess that it might
32,886
In which setting would you expect model found by LARS to differ most from the model found by exhaustive search?
The more features you have, in relation to the number of samples, the more over-fitting you are likely to get with the exaustive search method than with LARS. The penalty term used in LARS imposes a nested structure of increasingly complex models, indexed by a single regularisation parameter, so the "degrees of freedom" of feature selection with LARS is fairly low. For exaustive search, there is effectvely one (binary) degree of freedom per feature, which means that exaustive search is better able to exploit the random variability in the feature selection criterion due to the random sampling of the data. As a result, the exaustive search model is likely to be severely ofer-fitted to the feature selection criterion, as the "hypothesis class" is larger.
In which setting would you expect model found by LARS to differ most from the model found by exhaust
The more features you have, in relation to the number of samples, the more over-fitting you are likely to get with the exaustive search method than with LARS. The penalty term used in LARS imposes a
In which setting would you expect model found by LARS to differ most from the model found by exhaustive search? The more features you have, in relation to the number of samples, the more over-fitting you are likely to get with the exaustive search method than with LARS. The penalty term used in LARS imposes a nested structure of increasingly complex models, indexed by a single regularisation parameter, so the "degrees of freedom" of feature selection with LARS is fairly low. For exaustive search, there is effectvely one (binary) degree of freedom per feature, which means that exaustive search is better able to exploit the random variability in the feature selection criterion due to the random sampling of the data. As a result, the exaustive search model is likely to be severely ofer-fitted to the feature selection criterion, as the "hypothesis class" is larger.
In which setting would you expect model found by LARS to differ most from the model found by exhaust The more features you have, in relation to the number of samples, the more over-fitting you are likely to get with the exaustive search method than with LARS. The penalty term used in LARS imposes a
32,887
How to model the sum of Bernoulli random variables for dependent data?
One approach would be to model the $X$'s with a generalized linear model (GLM). Here, you would formulate $p_i$, the probability of success on the $i$'th trial as a (logistic linear) function of the recent observation history. So you're essentially fitting an autoregressive GLM where the noise is Bernoulli and the link function is logit. The setup is: $p_i = f(b + a_1 X_{i-1} + a_2 X_{i-2} + \ldots a_k X_{i-k})$, where $f(x) = \frac{1}{1+\exp(x)}$, and $X_i \sim Bernoulli(p_i)$ The parameters of the model are $\{b, a_1, \ldots a_k\}$, which can be estimated by logistic regression. (All you have to do is set up your design matrix using the relevant portion of observation history at each trial, and pass that into a logistic regression estimation function; the log-likelihood is concave so there's a unique global maximum for the parameters). If the outcomes are indeed independent then the $a_i$'s will be set to zero; positive $a_i$'s mean that subsequent $p_i$'s increase whenever a success is observed. The model doesn't provide a simple expression for the probability over the sum of the $X_i$'s, but this is easy to compute by simulation (particle filtering or MCMC) since the model has simple Markovian structure. This kind of model has been used with great success to model temporal dependencies between "spikes" of neurons in the brain, and there is an extensive literature on autoregressive point process models. See, e.g., Truccolo et al 2005 (although this paper uses a Poisson instead of a Bernoulli likelihood, but the mapping from one to the other is straightforward).
How to model the sum of Bernoulli random variables for dependent data?
One approach would be to model the $X$'s with a generalized linear model (GLM). Here, you would formulate $p_i$, the probability of success on the $i$'th trial as a (logistic linear) function of the
How to model the sum of Bernoulli random variables for dependent data? One approach would be to model the $X$'s with a generalized linear model (GLM). Here, you would formulate $p_i$, the probability of success on the $i$'th trial as a (logistic linear) function of the recent observation history. So you're essentially fitting an autoregressive GLM where the noise is Bernoulli and the link function is logit. The setup is: $p_i = f(b + a_1 X_{i-1} + a_2 X_{i-2} + \ldots a_k X_{i-k})$, where $f(x) = \frac{1}{1+\exp(x)}$, and $X_i \sim Bernoulli(p_i)$ The parameters of the model are $\{b, a_1, \ldots a_k\}$, which can be estimated by logistic regression. (All you have to do is set up your design matrix using the relevant portion of observation history at each trial, and pass that into a logistic regression estimation function; the log-likelihood is concave so there's a unique global maximum for the parameters). If the outcomes are indeed independent then the $a_i$'s will be set to zero; positive $a_i$'s mean that subsequent $p_i$'s increase whenever a success is observed. The model doesn't provide a simple expression for the probability over the sum of the $X_i$'s, but this is easy to compute by simulation (particle filtering or MCMC) since the model has simple Markovian structure. This kind of model has been used with great success to model temporal dependencies between "spikes" of neurons in the brain, and there is an extensive literature on autoregressive point process models. See, e.g., Truccolo et al 2005 (although this paper uses a Poisson instead of a Bernoulli likelihood, but the mapping from one to the other is straightforward).
How to model the sum of Bernoulli random variables for dependent data? One approach would be to model the $X$'s with a generalized linear model (GLM). Here, you would formulate $p_i$, the probability of success on the $i$'th trial as a (logistic linear) function of the
32,888
How to model the sum of Bernoulli random variables for dependent data?
If the dependence is due to clumping, a compound Poisson model could be the solution as a model of $S_j$. A somewhat random reference is this one by Barbour and Chryssaphinou. In a completely different direction, since you indicate that $N$ is 20, and thus relatively small, could be to build a graphical model of the $X_{ij}$'s, but I don't know if your setup and data make it possible. As @chl comments, it will be useful if you describe what the $X_{i,j}$'s are. If the $X_{i,j}$'s represent sequential measurements, e.g. over time, and the dependence is related to this, a third possibility - and to some extend a compromise between the two suggestions above - is to use a hidden Markov model of the $X_{i,j}$'s.
How to model the sum of Bernoulli random variables for dependent data?
If the dependence is due to clumping, a compound Poisson model could be the solution as a model of $S_j$. A somewhat random reference is this one by Barbour and Chryssaphinou. In a completely differe
How to model the sum of Bernoulli random variables for dependent data? If the dependence is due to clumping, a compound Poisson model could be the solution as a model of $S_j$. A somewhat random reference is this one by Barbour and Chryssaphinou. In a completely different direction, since you indicate that $N$ is 20, and thus relatively small, could be to build a graphical model of the $X_{ij}$'s, but I don't know if your setup and data make it possible. As @chl comments, it will be useful if you describe what the $X_{i,j}$'s are. If the $X_{i,j}$'s represent sequential measurements, e.g. over time, and the dependence is related to this, a third possibility - and to some extend a compromise between the two suggestions above - is to use a hidden Markov model of the $X_{i,j}$'s.
How to model the sum of Bernoulli random variables for dependent data? If the dependence is due to clumping, a compound Poisson model could be the solution as a model of $S_j$. A somewhat random reference is this one by Barbour and Chryssaphinou. In a completely differe
32,889
Computing mathematical expectation of the correlation coefficient or $R^2$ in linear regression
Re-arrange the problem in terms of new variables, so that $1\leq z_1<z_2<\dots<z_n\leq U$. Then we have $(x_i,y_i)=(x_i,z_{x_i})$, as @whuber pointed out in the comments. Thus you are effectively regressing $z_j$ on $j$, and $r_{xy}=r_{xz}$. Thus if we can work out the marginal distribution for $z_j$, and show that it is basically linear in $j$ the problem is done, and we will have $r_{xy}\sim 1$. We first need the joint distribution for $z_1,\dots,z_n$. This is quite simple, after you have the solution, but I found it not straight forward before I did the maths. Just a brief lesson in doing maths paying off - so I will present the maths first, then the easy answer. Now, the original joint distribution is $p(y_1,\dots,y_n)\propto 1$. Changing variables simply relabel things for discrete probabilities, and so the probability is still constant. However, the labelling is not 1-to-1, thus we can't simply write $p(z_1,\dots,z_n)=\frac{(U-n)!}{U!}$. Instead, we have $$\begin{array}\\p(z_1,\dots,z_n)=\frac{1}{C} & 1\leq z_1<z_2<\dots<z_n\leq U\end{array}$$ And we can find $C$ by normalisation $$C=\sum_{z_n=n}^{U}\sum_{z_{n-1}=n-1}^{z_n-1}\dots\sum_{z_2=2}^{z_3-1}\sum_{z_1=1}^{z_2-1}(1)=\sum_{z_n=n}^{U}\sum_{z_{n-1}=n-1}^{z_n-1}\dots\sum_{z_2=2}^{z_3-1}(z_2-1)$$ $$=\sum_{z_n=n}^{U}\sum_{z_{n-1}=n-1}^{z_n-1}\dots\sum_{z_3=2}^{z_4-1}\frac{(z_3-1)(z_3-2)}{2}=\sum_{z_n=n}^{U}\dots\sum_{z_4=4}^{z_5-1}\frac{(z_4-1)(z_4-2)(z_4-3)}{(2)(3)}$$ $$=\sum_{z_n=n}^{U}\sum_{z_{n-1}=n-1}^{z_n-1}\dots\sum_{z_{j}=j}^{z_{j+1}-1}{z_j-1 \choose j-1}={U \choose n}$$ Which shows the relabeling ratio is equal to $\frac{(U-n)!}{U!}{U \choose n}=\frac{1}{n!}$ - for each $(z_1,\dots,z_n)$ there is $n!$ $(y_1,\dots,y_n)$ values. Makes sense because any permutation of the lables on $y_i$ leads to the same set of ranked $z_i$ values. Now, the marginal distribution $z_1$, we repeat above but with the sum over $z_1$ dropped, and a different range of summation for the remainder, namely, the minimums change from $(2,\dots,n)$ to $(z_1+1,\dots,z_1+n-1)$, and we get: $$p(z_1)=\sum_{z_n=z_1+n-1}^{U}\;\;\sum_{z_{n-1}=z_1+n-2}^{z_n-1}\dots\sum_{z_2=z_1+1}^{z_3-1}p(z_1,z_2,\dots,z_n)=\frac{{U-z_1 \choose n-1}}{{U \choose n}}$$ With support $z_1\in\{1,2,\dots,U+1-n\}$. This form, combined with a bit of intuition shows that the marginal distribution of any $z_j$ can be reasoned out by: choosing $j-1$ values below $z_j$, which can be done in ${z_j-1\choose j-1}$ ways (if $z_j\geq j$); choosing the value $z_j$, which can be done 1 way; and choosing $n-j$ values above $z_j$ which can be done in ${U-z_j\choose n-j}$ ways (if $z_j\leq U+j-n$) This method of reasoning will effortly generalise to joint distributions, such as $p(z_j,z_k)$ (which can be used to calculate the expected value of the sample covariance if you want). Hence we have: $$\begin{array}{c c}\\p(z_j)=\frac{{z_j-1\choose j-1}{U-z_j\choose n-j}}{{U \choose n}} & j\leq z_j\leq U+j-n \\p(z_j,z_k)=\frac{{z_j-1\choose j-1}{z_k-z_j-1 \choose k-j-1}{U-z_k\choose n-k}}{{U \choose n}} & j\leq z_j\leq z_k+j-k\leq U+j-n \end{array}$$ Now the marginal is the pdf of a negative hypergeometric distribution with parameters $k=j,r=n,N=U$ (in terms of the paper's notation). Now this is clear not linear exactly in $j$, but the marginal expectation for $z_j$ is $$E(z_j)=j\frac{U+1}{n+1}$$ This is indeed linear in $j$, and you would expect beta coefficient of $\frac{U+1}{n+1}$ from the regression, and intercept of zero. UPDATE I stopped my answer a bit short before. Have now completed hopefully a more complete answer Letting $\overline{j}=\frac{n+1}{2}$, and $\overline{z}=\frac{1}{n}\sum_{j=1}^{n}z_j$, the expected square of the sample covariance between $j$ and $z_j$ is given by: $$E[s_{xz}^2]=E\left[\frac{1}{n}\sum_{j=1}^{n}(j-\overline{j})(z_j-\overline{z})\right]^2$$ $$=\frac{1}{n^2}\left[\sum_{j=1}^{n}(j-\overline{j})^2E(z_j^2)+2\sum_{k=2}^{n}\sum_{j=1}^{k-1}(j-\overline{j})(k-\overline{j})E(z_jz_k)\right]$$ So we need $E(z_j^2)=V(z_j)+E(z_j)^2=Aj^2+Bj$, where $A=\frac{(U+1)(U+2)}{(n+1)(n+2)}$ and $B=\frac{(U+1)(U-n)}{(n+1)(n+2)}$ (using the formula in the pdf file). So the first sum becomes $$\sum_{j=1}^{n}(j-\overline{j})^2E(z_j^2)=\sum_{j=1}^{n}(j^2-2j\overline{j}+\overline{j}^2)(Aj^2+Bj)$$ $$=\frac{n(n-1)(U+1)}{120}\bigg( U(2n+1)+(3n-1)\bigg)$$ We also need $E(z_jz_k)=E[z_j(z_k-z_j)]+E(z_j^2)$. $$E[z_j(z_k-z_j)]=\sum_{z_k=k}^{U+k-n}\sum_{z_j=j}^{z_k+j-k}z_j(z_k-z_j) p(z_j,z_k)$$ $$=j(k-j)\sum_{z_k=k}^{U+k-n}\sum_{z_j=j}^{z_k+j-k}\frac{{z_j\choose j}{z_k-z_j \choose k-j}{U-z_k\choose n-k}}{{U \choose n}}=j(k-j)\sum_{z_k=k}^{U+k-n}\frac{{z_k+1 \choose k+1}{U+1-(z_k+1)\choose n-k}}{{U \choose n}}$$ $$=j(k-j)\frac{{U+1\choose n+1}}{{U \choose n}}=j(k-j)\frac{U+1}{n+1}$$ $$\implies E(z_jz_k)=jk\frac{U+1}{n+1}+j^2\frac{(U+1)(U-n)}{(n+1)(n+2)}+j\frac{(U+1)(U-n)}{(n+1)(n+2)}$$ And the second sum is: $$2\sum_{k=2}^{n}\sum_{j=1}^{k-1}(j-\overline{j})(k-\overline{j})E(z_jz_k)$$ $$=\frac{n(U+1)(n-1)}{720(n+2)}\bigg(6(U-n)(n^3-2n^2-9n-2) + (n+2)(5 n^3- 24 n^2- 35 n +6)\bigg)$$ And so after some rather tedious manipulations, you get for the expected value of the squared covariance of: $$E[s_{xz}^2]=\frac{(n-1)(n-2)U(U+1)}{120}-\frac{(U+1)(n-1)(n^3+2n^2+11n+22)}{720(n+2)}$$ Now if we have $U>>n$, then the first term dominates as it is $O(U^2n^2)$, whereas the second term is $O(Un^3)$. We can show that the dominant term is well approximated by $E[s_{x}^2s_{z}^2]$, and we have another theoretical reason why the pearson correlation is very close to $1$ (beyond the fact that $E(z_j)\propto j$). Now the expected sample variance of $j$ is just the sample variance, which is $s_x^2=\frac{1}{n}\sum_{j=1}^{n}(j-\overline{j})^2=\frac{(n+1)(n-1)}{12}$. The expected sample variance for $z_j$ is given by: $$E[s_z^2]=E\left[\frac{1}{n}\sum_{j=1}^{n}(z_j-\overline{z})^2\right]=\frac{1}{n}\sum_{j=1}^{n}E(z_j^2)-\left[\frac{1}{n}\sum_{j=1}^{n}E(z_j)\right]^2$$ $$=\frac{A(n+1)(2n+1)}{6}+\frac{B(n+1)}{2}-\frac{(U+1)^2}{4}$$ $$=\frac{(U+1)(U-1)}{12}$$ Combining everything together, and noting that $E[s_x^2s_z^2]=s_x^2E[s_z^2]$, we have: $$E[s_x^2s_z^2]=\frac{(n+1)(n-1)(U+1)(U-1)}{144}\approx \frac{(n-1)(n-2)U(U+1)}{120}\approx E[s_{xz}^2]$$ Which is approximately the same thing as $E[r_{xz}^2]\approx 1$
Computing mathematical expectation of the correlation coefficient or $R^2$ in linear regression
Re-arrange the problem in terms of new variables, so that $1\leq z_1<z_2<\dots<z_n\leq U$. Then we have $(x_i,y_i)=(x_i,z_{x_i})$, as @whuber pointed out in the comments. Thus you are effectively re
Computing mathematical expectation of the correlation coefficient or $R^2$ in linear regression Re-arrange the problem in terms of new variables, so that $1\leq z_1<z_2<\dots<z_n\leq U$. Then we have $(x_i,y_i)=(x_i,z_{x_i})$, as @whuber pointed out in the comments. Thus you are effectively regressing $z_j$ on $j$, and $r_{xy}=r_{xz}$. Thus if we can work out the marginal distribution for $z_j$, and show that it is basically linear in $j$ the problem is done, and we will have $r_{xy}\sim 1$. We first need the joint distribution for $z_1,\dots,z_n$. This is quite simple, after you have the solution, but I found it not straight forward before I did the maths. Just a brief lesson in doing maths paying off - so I will present the maths first, then the easy answer. Now, the original joint distribution is $p(y_1,\dots,y_n)\propto 1$. Changing variables simply relabel things for discrete probabilities, and so the probability is still constant. However, the labelling is not 1-to-1, thus we can't simply write $p(z_1,\dots,z_n)=\frac{(U-n)!}{U!}$. Instead, we have $$\begin{array}\\p(z_1,\dots,z_n)=\frac{1}{C} & 1\leq z_1<z_2<\dots<z_n\leq U\end{array}$$ And we can find $C$ by normalisation $$C=\sum_{z_n=n}^{U}\sum_{z_{n-1}=n-1}^{z_n-1}\dots\sum_{z_2=2}^{z_3-1}\sum_{z_1=1}^{z_2-1}(1)=\sum_{z_n=n}^{U}\sum_{z_{n-1}=n-1}^{z_n-1}\dots\sum_{z_2=2}^{z_3-1}(z_2-1)$$ $$=\sum_{z_n=n}^{U}\sum_{z_{n-1}=n-1}^{z_n-1}\dots\sum_{z_3=2}^{z_4-1}\frac{(z_3-1)(z_3-2)}{2}=\sum_{z_n=n}^{U}\dots\sum_{z_4=4}^{z_5-1}\frac{(z_4-1)(z_4-2)(z_4-3)}{(2)(3)}$$ $$=\sum_{z_n=n}^{U}\sum_{z_{n-1}=n-1}^{z_n-1}\dots\sum_{z_{j}=j}^{z_{j+1}-1}{z_j-1 \choose j-1}={U \choose n}$$ Which shows the relabeling ratio is equal to $\frac{(U-n)!}{U!}{U \choose n}=\frac{1}{n!}$ - for each $(z_1,\dots,z_n)$ there is $n!$ $(y_1,\dots,y_n)$ values. Makes sense because any permutation of the lables on $y_i$ leads to the same set of ranked $z_i$ values. Now, the marginal distribution $z_1$, we repeat above but with the sum over $z_1$ dropped, and a different range of summation for the remainder, namely, the minimums change from $(2,\dots,n)$ to $(z_1+1,\dots,z_1+n-1)$, and we get: $$p(z_1)=\sum_{z_n=z_1+n-1}^{U}\;\;\sum_{z_{n-1}=z_1+n-2}^{z_n-1}\dots\sum_{z_2=z_1+1}^{z_3-1}p(z_1,z_2,\dots,z_n)=\frac{{U-z_1 \choose n-1}}{{U \choose n}}$$ With support $z_1\in\{1,2,\dots,U+1-n\}$. This form, combined with a bit of intuition shows that the marginal distribution of any $z_j$ can be reasoned out by: choosing $j-1$ values below $z_j$, which can be done in ${z_j-1\choose j-1}$ ways (if $z_j\geq j$); choosing the value $z_j$, which can be done 1 way; and choosing $n-j$ values above $z_j$ which can be done in ${U-z_j\choose n-j}$ ways (if $z_j\leq U+j-n$) This method of reasoning will effortly generalise to joint distributions, such as $p(z_j,z_k)$ (which can be used to calculate the expected value of the sample covariance if you want). Hence we have: $$\begin{array}{c c}\\p(z_j)=\frac{{z_j-1\choose j-1}{U-z_j\choose n-j}}{{U \choose n}} & j\leq z_j\leq U+j-n \\p(z_j,z_k)=\frac{{z_j-1\choose j-1}{z_k-z_j-1 \choose k-j-1}{U-z_k\choose n-k}}{{U \choose n}} & j\leq z_j\leq z_k+j-k\leq U+j-n \end{array}$$ Now the marginal is the pdf of a negative hypergeometric distribution with parameters $k=j,r=n,N=U$ (in terms of the paper's notation). Now this is clear not linear exactly in $j$, but the marginal expectation for $z_j$ is $$E(z_j)=j\frac{U+1}{n+1}$$ This is indeed linear in $j$, and you would expect beta coefficient of $\frac{U+1}{n+1}$ from the regression, and intercept of zero. UPDATE I stopped my answer a bit short before. Have now completed hopefully a more complete answer Letting $\overline{j}=\frac{n+1}{2}$, and $\overline{z}=\frac{1}{n}\sum_{j=1}^{n}z_j$, the expected square of the sample covariance between $j$ and $z_j$ is given by: $$E[s_{xz}^2]=E\left[\frac{1}{n}\sum_{j=1}^{n}(j-\overline{j})(z_j-\overline{z})\right]^2$$ $$=\frac{1}{n^2}\left[\sum_{j=1}^{n}(j-\overline{j})^2E(z_j^2)+2\sum_{k=2}^{n}\sum_{j=1}^{k-1}(j-\overline{j})(k-\overline{j})E(z_jz_k)\right]$$ So we need $E(z_j^2)=V(z_j)+E(z_j)^2=Aj^2+Bj$, where $A=\frac{(U+1)(U+2)}{(n+1)(n+2)}$ and $B=\frac{(U+1)(U-n)}{(n+1)(n+2)}$ (using the formula in the pdf file). So the first sum becomes $$\sum_{j=1}^{n}(j-\overline{j})^2E(z_j^2)=\sum_{j=1}^{n}(j^2-2j\overline{j}+\overline{j}^2)(Aj^2+Bj)$$ $$=\frac{n(n-1)(U+1)}{120}\bigg( U(2n+1)+(3n-1)\bigg)$$ We also need $E(z_jz_k)=E[z_j(z_k-z_j)]+E(z_j^2)$. $$E[z_j(z_k-z_j)]=\sum_{z_k=k}^{U+k-n}\sum_{z_j=j}^{z_k+j-k}z_j(z_k-z_j) p(z_j,z_k)$$ $$=j(k-j)\sum_{z_k=k}^{U+k-n}\sum_{z_j=j}^{z_k+j-k}\frac{{z_j\choose j}{z_k-z_j \choose k-j}{U-z_k\choose n-k}}{{U \choose n}}=j(k-j)\sum_{z_k=k}^{U+k-n}\frac{{z_k+1 \choose k+1}{U+1-(z_k+1)\choose n-k}}{{U \choose n}}$$ $$=j(k-j)\frac{{U+1\choose n+1}}{{U \choose n}}=j(k-j)\frac{U+1}{n+1}$$ $$\implies E(z_jz_k)=jk\frac{U+1}{n+1}+j^2\frac{(U+1)(U-n)}{(n+1)(n+2)}+j\frac{(U+1)(U-n)}{(n+1)(n+2)}$$ And the second sum is: $$2\sum_{k=2}^{n}\sum_{j=1}^{k-1}(j-\overline{j})(k-\overline{j})E(z_jz_k)$$ $$=\frac{n(U+1)(n-1)}{720(n+2)}\bigg(6(U-n)(n^3-2n^2-9n-2) + (n+2)(5 n^3- 24 n^2- 35 n +6)\bigg)$$ And so after some rather tedious manipulations, you get for the expected value of the squared covariance of: $$E[s_{xz}^2]=\frac{(n-1)(n-2)U(U+1)}{120}-\frac{(U+1)(n-1)(n^3+2n^2+11n+22)}{720(n+2)}$$ Now if we have $U>>n$, then the first term dominates as it is $O(U^2n^2)$, whereas the second term is $O(Un^3)$. We can show that the dominant term is well approximated by $E[s_{x}^2s_{z}^2]$, and we have another theoretical reason why the pearson correlation is very close to $1$ (beyond the fact that $E(z_j)\propto j$). Now the expected sample variance of $j$ is just the sample variance, which is $s_x^2=\frac{1}{n}\sum_{j=1}^{n}(j-\overline{j})^2=\frac{(n+1)(n-1)}{12}$. The expected sample variance for $z_j$ is given by: $$E[s_z^2]=E\left[\frac{1}{n}\sum_{j=1}^{n}(z_j-\overline{z})^2\right]=\frac{1}{n}\sum_{j=1}^{n}E(z_j^2)-\left[\frac{1}{n}\sum_{j=1}^{n}E(z_j)\right]^2$$ $$=\frac{A(n+1)(2n+1)}{6}+\frac{B(n+1)}{2}-\frac{(U+1)^2}{4}$$ $$=\frac{(U+1)(U-1)}{12}$$ Combining everything together, and noting that $E[s_x^2s_z^2]=s_x^2E[s_z^2]$, we have: $$E[s_x^2s_z^2]=\frac{(n+1)(n-1)(U+1)(U-1)}{144}\approx \frac{(n-1)(n-2)U(U+1)}{120}\approx E[s_{xz}^2]$$ Which is approximately the same thing as $E[r_{xz}^2]\approx 1$
Computing mathematical expectation of the correlation coefficient or $R^2$ in linear regression Re-arrange the problem in terms of new variables, so that $1\leq z_1<z_2<\dots<z_n\leq U$. Then we have $(x_i,y_i)=(x_i,z_{x_i})$, as @whuber pointed out in the comments. Thus you are effectively re
32,890
Computing mathematical expectation of the correlation coefficient or $R^2$ in linear regression
If you only want to show $r^2_{xy}$ must be close to 1, and compute a lower bound for it, it's straightforward, because that means for given $U$ and $n$ you only need to maximize the variance of the residuals. This can be done in exactly four symmetric ways. The two extremes (lowest and highest possible correlations) are illustrated for $U=20, n=9$. For large values of $U$ and appropriate values of $n$, $r^2_{xy}$ can actually get close to 0. For example, with $n=100$ and very large values of $U \gg n$, $r^2_{xy} \sim 0.03$ in the worst case.
Computing mathematical expectation of the correlation coefficient or $R^2$ in linear regression
If you only want to show $r^2_{xy}$ must be close to 1, and compute a lower bound for it, it's straightforward, because that means for given $U$ and $n$ you only need to maximize the variance of the r
Computing mathematical expectation of the correlation coefficient or $R^2$ in linear regression If you only want to show $r^2_{xy}$ must be close to 1, and compute a lower bound for it, it's straightforward, because that means for given $U$ and $n$ you only need to maximize the variance of the residuals. This can be done in exactly four symmetric ways. The two extremes (lowest and highest possible correlations) are illustrated for $U=20, n=9$. For large values of $U$ and appropriate values of $n$, $r^2_{xy}$ can actually get close to 0. For example, with $n=100$ and very large values of $U \gg n$, $r^2_{xy} \sim 0.03$ in the worst case.
Computing mathematical expectation of the correlation coefficient or $R^2$ in linear regression If you only want to show $r^2_{xy}$ must be close to 1, and compute a lower bound for it, it's straightforward, because that means for given $U$ and $n$ you only need to maximize the variance of the r
32,891
Advantages of approaching a problem by formulating a cost function that is globally optimizable
My believe is that the goal should be to optimize the function you are interested in. If that happens to be the number of misclassifications - and not a binomial likelihood, say - then you should try minimizing the number of misclassifications. However, for the number of practical reasons mentioned (speed, implementation, instability etc), this may not be so easy and it may even be impossible. In that case, we choose to approximate the solution. I know of basically two approximation strategies; either we come up with algorithms that attempt to directly approximate the solution of the original problem, or we reformulate the original problem as a more directly solvable problem (e.g. convex relaxations). A mathematical argument for preferring one approach over the other is whether we can understand a) the properties of the solution actually computed and b) how well the solution approximates the solution of the problem we are actually interested in. I know of many results in statistics where we can prove properties of a solution to a optimization problem. To me it seems more difficult to analyze the solution of an algorithm, where you don't have a mathematical formulation of what it computes (e.g. that it solves a given optimization problem). I certainly won't claim that you can't, but it seems to be a theoretical benefit, if you can give a clear mathematical formulation of what you compute. It is unclear to me, if such mathematical arguments give any practical benefits to Approach 1 over Approach 2. There are certainly somebody out there, who are not afraid of a non-convex loss function.
Advantages of approaching a problem by formulating a cost function that is globally optimizable
My believe is that the goal should be to optimize the function you are interested in. If that happens to be the number of misclassifications - and not a binomial likelihood, say - then you should try
Advantages of approaching a problem by formulating a cost function that is globally optimizable My believe is that the goal should be to optimize the function you are interested in. If that happens to be the number of misclassifications - and not a binomial likelihood, say - then you should try minimizing the number of misclassifications. However, for the number of practical reasons mentioned (speed, implementation, instability etc), this may not be so easy and it may even be impossible. In that case, we choose to approximate the solution. I know of basically two approximation strategies; either we come up with algorithms that attempt to directly approximate the solution of the original problem, or we reformulate the original problem as a more directly solvable problem (e.g. convex relaxations). A mathematical argument for preferring one approach over the other is whether we can understand a) the properties of the solution actually computed and b) how well the solution approximates the solution of the problem we are actually interested in. I know of many results in statistics where we can prove properties of a solution to a optimization problem. To me it seems more difficult to analyze the solution of an algorithm, where you don't have a mathematical formulation of what it computes (e.g. that it solves a given optimization problem). I certainly won't claim that you can't, but it seems to be a theoretical benefit, if you can give a clear mathematical formulation of what you compute. It is unclear to me, if such mathematical arguments give any practical benefits to Approach 1 over Approach 2. There are certainly somebody out there, who are not afraid of a non-convex loss function.
Advantages of approaching a problem by formulating a cost function that is globally optimizable My believe is that the goal should be to optimize the function you are interested in. If that happens to be the number of misclassifications - and not a binomial likelihood, say - then you should try
32,892
Advantages of approaching a problem by formulating a cost function that is globally optimizable
@NRH provided an answer to this question (over 5 years ago), so I'll just offer an Approach 3, which combines Approaches 1 and 2. Approach 3: Formulate and solve to global optimality a convex, or in any event, globally optimizable (not necessarily convex), problem which is "close" to the problem you really want to solve. Use the globally optimal solution from step 1 as the starting (initial) solution to a non-convex optimization problem you really want to solve (or more want to solve than the problem solved in step 1). Hope that your starting solution is in the "region of attraction" to the global optimum relative to the solution method employed to solve the non-convex optimization problem you really want to solve.
Advantages of approaching a problem by formulating a cost function that is globally optimizable
@NRH provided an answer to this question (over 5 years ago), so I'll just offer an Approach 3, which combines Approaches 1 and 2. Approach 3: Formulate and solve to global optimality a convex, or in
Advantages of approaching a problem by formulating a cost function that is globally optimizable @NRH provided an answer to this question (over 5 years ago), so I'll just offer an Approach 3, which combines Approaches 1 and 2. Approach 3: Formulate and solve to global optimality a convex, or in any event, globally optimizable (not necessarily convex), problem which is "close" to the problem you really want to solve. Use the globally optimal solution from step 1 as the starting (initial) solution to a non-convex optimization problem you really want to solve (or more want to solve than the problem solved in step 1). Hope that your starting solution is in the "region of attraction" to the global optimum relative to the solution method employed to solve the non-convex optimization problem you really want to solve.
Advantages of approaching a problem by formulating a cost function that is globally optimizable @NRH provided an answer to this question (over 5 years ago), so I'll just offer an Approach 3, which combines Approaches 1 and 2. Approach 3: Formulate and solve to global optimality a convex, or in
32,893
Can confidence intervals be added?
There are multiple methods to calculate the binomial confidence intervals, and in case of your small (~30) sample size this may mean notable differences. Can we say anything about the number of heads we would expect to see if we flipped all three? Assuming that the normal approximation was used and was appropriate to get the three confidence intervals we can go on with the normal approximation: expected number of heads $E=\frac{0+5}{2} + \frac{1+9}{2} + \frac{40+70}{2}$, $CI = E ± ½ \sqrt{(5-0)^2 + (9-1)^2 + (70-40)^2}$, see sum of normally distributed random variables Or if we chose one coin randomly and flipped it? If randomly meant choosing one of these 3 coins with equal probabilities than we expect ⅓ times the expected value and a CI of $\frac{1}{\sqrt{3}}$ times the width of the result of the previous question. What if we can also use the info that led us to those CI's? I that info could mean all or some of your assumptions before setting up the experiment, the experimental conditions and the observations. In that case you could formulate the full conditional probability, tell a bit more about the coins, tackle the small sample size better, particularly giving more accurate CIs for the previous questions.
Can confidence intervals be added?
There are multiple methods to calculate the binomial confidence intervals, and in case of your small (~30) sample size this may mean notable differences. Can we say anything about the number of heads
Can confidence intervals be added? There are multiple methods to calculate the binomial confidence intervals, and in case of your small (~30) sample size this may mean notable differences. Can we say anything about the number of heads we would expect to see if we flipped all three? Assuming that the normal approximation was used and was appropriate to get the three confidence intervals we can go on with the normal approximation: expected number of heads $E=\frac{0+5}{2} + \frac{1+9}{2} + \frac{40+70}{2}$, $CI = E ± ½ \sqrt{(5-0)^2 + (9-1)^2 + (70-40)^2}$, see sum of normally distributed random variables Or if we chose one coin randomly and flipped it? If randomly meant choosing one of these 3 coins with equal probabilities than we expect ⅓ times the expected value and a CI of $\frac{1}{\sqrt{3}}$ times the width of the result of the previous question. What if we can also use the info that led us to those CI's? I that info could mean all or some of your assumptions before setting up the experiment, the experimental conditions and the observations. In that case you could formulate the full conditional probability, tell a bit more about the coins, tackle the small sample size better, particularly giving more accurate CIs for the previous questions.
Can confidence intervals be added? There are multiple methods to calculate the binomial confidence intervals, and in case of your small (~30) sample size this may mean notable differences. Can we say anything about the number of heads
32,894
Can confidence intervals be added?
This is a problem which can be relatively easily solved using a Bayesian approach. This is one of the "re-using" the original data answers, rather than one which uses the CIs. Each coin has some long run frequency of heads $\theta_i$ $(i=1,\dots,50)$. Now you have some partial information about these frequencies, that you suspect them to be below 5%. Not exactly clear what is meant by this: do you want to test this? or do you have information which says that it should be less than 5%? And do you think its 5% because of the data, or did you think this before seeing the data? Given that you have a reasonably large amount of data, this kind of information probably won't make much difference - so I will simply ignore it. Now there has been no logical connection stated between each coin - so I will not pre-suppose that one exists. This means in words that knowing the results of one coin doesn't tell you anything about any other coin. So we start by supposing that the only thing that was known about each coin is that it is possible for each to give heads, and possible for it to give tails (i.e. we assume that if we flipped the coin until we saw both a head and a tail we would not sample forever). This results in a uniform prior for each: $$p(\theta_{i}|I)=1$$ (where $I$ simply denotes the "prior information" or assumptions contained in the problem). Now you will throw each coin $n_{i}=30$ times, and conditional on $\theta_{i}$ the number of heads observed $x_{i}$ will have a binomial sampling distribution $$(x_{i}|n_{i},\theta_{i},I)\sim Binomial(n_{i},\theta_{i})$$ The posterior distribution for each $\theta_{i}$ will have a beta distribution: $$(\theta_{i}|x_{i},n_{i},I)\sim Beta(x_{i}+1,n_{i}-x_{i}+1)\implies p(\theta_{i}|x_{i},n_{i},I)= \frac{\theta_{i}^{x_{i}}(1-\theta_{i})^{n_{i}-x_{i}}}{B(x_{i}+1,n_{i}-x_{i}+1)}$$ Where $B(a,b)$ is the beta function. Note that each posterior has mode (most likely value) equal to the observed proportion $\frac{x_{i}}{n_{i}}$. Now this is the posterior for each of the $50$ coins is always proper, well behaved, and exact even when the observed fraction is zero (no approximations have been made). Denote the number of future trials as $m_{i}$ and the unknown number of heads in these trials by $y_{i}$ So if you knew which coin you picked, then you have a posterior predictive for the next result: $$p(y_{i}|m_{i},x_{i},n_{i},\text{ith coin},I)=\int_{0}^{1}p(y_{i}|m_{i},\theta_{i},I)p(\theta_{i}|x_{i},n_{i},I)d\theta_{i}$$ $$={m_{i} \choose y_{i}}\int_{0}^{1}\theta_{i}^{y_{i}}(1-\theta_{i})^{m_{i}-y_{i}}\frac{\theta_{i}^{x_{i}}(1-\theta_{i})^{n_{i}-x_{i}}}{B(x_{i}+1,n_{i}-x_{i}+1)}d\theta_{i}$$ $$={m_{i} \choose y_{i}}\frac{B(x_{i}+y_{i}+1,m_{i}+n_{i}-x_{i}-y_{i}+1)}{B(x_{i}+1,n_{i}-x_{i}+1)}=\frac{{x_{i}+y_{i} \choose y_{i}}{m_{i}+n_{i}-x_{i}-y_{i} \choose m_{i}-y_{i}}}{{m_{i}+n_{i}+1 \choose m_{i}}}$$ Which looks like a hypergeometric like distribution, but not quite, as the "random variable" $y_{i}$ appears in different places to what you would see in a hypergeometric. This is called a beta-binomial compound distribution. Note that it is "parameter free" - it only depends on what is unknown but of interest $y_{i}$ and what is known $m_{i},x_{i},n_{i}$ - no "pluggin in" of any estimate is required - hence this is an exact inference. What if we chose one coin randomly and flipped it $m$ times? But if you now don't know which coin is going to be picked, then the answer cannot depend on $i$. To do this we simply average out, or marginalise out the $i$. Assuming that you have no reason to suspect any one coin will be preferred in the choice (what is really meant by "randomly" I think), then each is equally likely, and we get: $$p(y|m,D,I)=\frac{1}{50}\sum_{i=1}^{50}\frac{{x_{i}+y \choose y}{m+n_{i}-x_{i}-y \choose m-y}}{{m+n_{i}+1 \choose m}}$$ Where $D\equiv x_{1},\dots,x_{50},n_{1},\dots,n_{50}$ ("the data") and the $i$ subscript has been dropped from $m,y$ to indicate that the result doesn't depend on $i$ (i.e. doesn't depend on which coin is flipped). Now this is an exact answer to your problem. You can calculate a 95% interval for $y$ using this probability distribution, setting $m=30$ (since this is the sample size you used). This should give you what you are after - in that the frequentist CI is usually phrased in a predictive fashion "if we sampled again what would we likely the true value be covered by the CI 95% of the time" - a CI talks about the future.
Can confidence intervals be added?
This is a problem which can be relatively easily solved using a Bayesian approach. This is one of the "re-using" the original data answers, rather than one which uses the CIs. Each coin has some long
Can confidence intervals be added? This is a problem which can be relatively easily solved using a Bayesian approach. This is one of the "re-using" the original data answers, rather than one which uses the CIs. Each coin has some long run frequency of heads $\theta_i$ $(i=1,\dots,50)$. Now you have some partial information about these frequencies, that you suspect them to be below 5%. Not exactly clear what is meant by this: do you want to test this? or do you have information which says that it should be less than 5%? And do you think its 5% because of the data, or did you think this before seeing the data? Given that you have a reasonably large amount of data, this kind of information probably won't make much difference - so I will simply ignore it. Now there has been no logical connection stated between each coin - so I will not pre-suppose that one exists. This means in words that knowing the results of one coin doesn't tell you anything about any other coin. So we start by supposing that the only thing that was known about each coin is that it is possible for each to give heads, and possible for it to give tails (i.e. we assume that if we flipped the coin until we saw both a head and a tail we would not sample forever). This results in a uniform prior for each: $$p(\theta_{i}|I)=1$$ (where $I$ simply denotes the "prior information" or assumptions contained in the problem). Now you will throw each coin $n_{i}=30$ times, and conditional on $\theta_{i}$ the number of heads observed $x_{i}$ will have a binomial sampling distribution $$(x_{i}|n_{i},\theta_{i},I)\sim Binomial(n_{i},\theta_{i})$$ The posterior distribution for each $\theta_{i}$ will have a beta distribution: $$(\theta_{i}|x_{i},n_{i},I)\sim Beta(x_{i}+1,n_{i}-x_{i}+1)\implies p(\theta_{i}|x_{i},n_{i},I)= \frac{\theta_{i}^{x_{i}}(1-\theta_{i})^{n_{i}-x_{i}}}{B(x_{i}+1,n_{i}-x_{i}+1)}$$ Where $B(a,b)$ is the beta function. Note that each posterior has mode (most likely value) equal to the observed proportion $\frac{x_{i}}{n_{i}}$. Now this is the posterior for each of the $50$ coins is always proper, well behaved, and exact even when the observed fraction is zero (no approximations have been made). Denote the number of future trials as $m_{i}$ and the unknown number of heads in these trials by $y_{i}$ So if you knew which coin you picked, then you have a posterior predictive for the next result: $$p(y_{i}|m_{i},x_{i},n_{i},\text{ith coin},I)=\int_{0}^{1}p(y_{i}|m_{i},\theta_{i},I)p(\theta_{i}|x_{i},n_{i},I)d\theta_{i}$$ $$={m_{i} \choose y_{i}}\int_{0}^{1}\theta_{i}^{y_{i}}(1-\theta_{i})^{m_{i}-y_{i}}\frac{\theta_{i}^{x_{i}}(1-\theta_{i})^{n_{i}-x_{i}}}{B(x_{i}+1,n_{i}-x_{i}+1)}d\theta_{i}$$ $$={m_{i} \choose y_{i}}\frac{B(x_{i}+y_{i}+1,m_{i}+n_{i}-x_{i}-y_{i}+1)}{B(x_{i}+1,n_{i}-x_{i}+1)}=\frac{{x_{i}+y_{i} \choose y_{i}}{m_{i}+n_{i}-x_{i}-y_{i} \choose m_{i}-y_{i}}}{{m_{i}+n_{i}+1 \choose m_{i}}}$$ Which looks like a hypergeometric like distribution, but not quite, as the "random variable" $y_{i}$ appears in different places to what you would see in a hypergeometric. This is called a beta-binomial compound distribution. Note that it is "parameter free" - it only depends on what is unknown but of interest $y_{i}$ and what is known $m_{i},x_{i},n_{i}$ - no "pluggin in" of any estimate is required - hence this is an exact inference. What if we chose one coin randomly and flipped it $m$ times? But if you now don't know which coin is going to be picked, then the answer cannot depend on $i$. To do this we simply average out, or marginalise out the $i$. Assuming that you have no reason to suspect any one coin will be preferred in the choice (what is really meant by "randomly" I think), then each is equally likely, and we get: $$p(y|m,D,I)=\frac{1}{50}\sum_{i=1}^{50}\frac{{x_{i}+y \choose y}{m+n_{i}-x_{i}-y \choose m-y}}{{m+n_{i}+1 \choose m}}$$ Where $D\equiv x_{1},\dots,x_{50},n_{1},\dots,n_{50}$ ("the data") and the $i$ subscript has been dropped from $m,y$ to indicate that the result doesn't depend on $i$ (i.e. doesn't depend on which coin is flipped). Now this is an exact answer to your problem. You can calculate a 95% interval for $y$ using this probability distribution, setting $m=30$ (since this is the sample size you used). This should give you what you are after - in that the frequentist CI is usually phrased in a predictive fashion "if we sampled again what would we likely the true value be covered by the CI 95% of the time" - a CI talks about the future.
Can confidence intervals be added? This is a problem which can be relatively easily solved using a Bayesian approach. This is one of the "re-using" the original data answers, rather than one which uses the CIs. Each coin has some long
32,895
How can I correct for measurement error in the dependent variable in a logit regression?
This situation is often referred to as misclassification error. This paper my help you correctly estimating $\beta$. EDIT: I found relevant-looking papers using http://www.google.com/search?q=misclassification+of+dependent+variable+logistic.
How can I correct for measurement error in the dependent variable in a logit regression?
This situation is often referred to as misclassification error. This paper my help you correctly estimating $\beta$. EDIT: I found relevant-looking papers using http://www.google.com/search?q=misclass
How can I correct for measurement error in the dependent variable in a logit regression? This situation is often referred to as misclassification error. This paper my help you correctly estimating $\beta$. EDIT: I found relevant-looking papers using http://www.google.com/search?q=misclassification+of+dependent+variable+logistic.
How can I correct for measurement error in the dependent variable in a logit regression? This situation is often referred to as misclassification error. This paper my help you correctly estimating $\beta$. EDIT: I found relevant-looking papers using http://www.google.com/search?q=misclass
32,896
How can I correct for measurement error in the dependent variable in a logit regression?
You can either estimate a parametric model of the error using MLE, or you can use a semi-paramteric approach based on something like the maximal rank correlation (MRC) estimator. Computationally, MRC is prohibitive for large samples, so it looks like MLE is the right approach for me. Thanks to GaBorgulya for some good, prompt direction, especially on the term "misclassification error." Here are some good sources on the topic: The basic model, exactly as described in the original problem Ungated version of the same A more complicated, but more general model A nice overview
How can I correct for measurement error in the dependent variable in a logit regression?
You can either estimate a parametric model of the error using MLE, or you can use a semi-paramteric approach based on something like the maximal rank correlation (MRC) estimator. Computationally, MRC
How can I correct for measurement error in the dependent variable in a logit regression? You can either estimate a parametric model of the error using MLE, or you can use a semi-paramteric approach based on something like the maximal rank correlation (MRC) estimator. Computationally, MRC is prohibitive for large samples, so it looks like MLE is the right approach for me. Thanks to GaBorgulya for some good, prompt direction, especially on the term "misclassification error." Here are some good sources on the topic: The basic model, exactly as described in the original problem Ungated version of the same A more complicated, but more general model A nice overview
How can I correct for measurement error in the dependent variable in a logit regression? You can either estimate a parametric model of the error using MLE, or you can use a semi-paramteric approach based on something like the maximal rank correlation (MRC) estimator. Computationally, MRC
32,897
Understanding productivity or expenses over time without falling victim to stochastic interruptions
I would start with robust time series filters (i.e. time varying medians) because these are more simple and intuitive. Basically, the robust time filter is to time series smoothers what the median is to the mean; a summary measures (in this case a time varying one) that is not sensitive to 'wired' observations so long as they do not represent the majority of the data. For a summary see here. If you need more sophisticated smoothers (i.e. non linear ones), you could do with robust Kalman filtering (although this requieres a slightly higher level of mathematical sophistication) This document contains the following example ( a code to run under R, the open source stat software): library(robfilter) data(Nile) nile <- as.numeric(Nile) obj <- wrm.filter(nile, width=11) plot(obj) . The last documents contains a large number of references to papers and books. Other types of filters are implemented in the package, but the repeated median is a very simple one.
Understanding productivity or expenses over time without falling victim to stochastic interruptions
I would start with robust time series filters (i.e. time varying medians) because these are more simple and intuitive. Basically, the robust time filter is to time series smoothers what the median is
Understanding productivity or expenses over time without falling victim to stochastic interruptions I would start with robust time series filters (i.e. time varying medians) because these are more simple and intuitive. Basically, the robust time filter is to time series smoothers what the median is to the mean; a summary measures (in this case a time varying one) that is not sensitive to 'wired' observations so long as they do not represent the majority of the data. For a summary see here. If you need more sophisticated smoothers (i.e. non linear ones), you could do with robust Kalman filtering (although this requieres a slightly higher level of mathematical sophistication) This document contains the following example ( a code to run under R, the open source stat software): library(robfilter) data(Nile) nile <- as.numeric(Nile) obj <- wrm.filter(nile, width=11) plot(obj) . The last documents contains a large number of references to papers and books. Other types of filters are implemented in the package, but the repeated median is a very simple one.
Understanding productivity or expenses over time without falling victim to stochastic interruptions I would start with robust time series filters (i.e. time varying medians) because these are more simple and intuitive. Basically, the robust time filter is to time series smoothers what the median is
32,898
Understanding productivity or expenses over time without falling victim to stochastic interruptions
A simple solution that does not require the acquisition of specialized knowledge is to use control charts. They're ridiculously easy to create and make it easy to tell special cause variation (such as when you are out of town) from common cause variation (such as when you have an actual low-productivity month), which seems to be the kind of information you want. They also preserve the data. Since you say you'll use the charts for many different purposes, I advise against performing any transformations in the data. Here is a gentle introduction. If you decide that you like control charts, you may want to dive deeper into the subject. The benefits to your business will be huge. Control charts are reputed to have been a major contributor to the post-war Japanese economic boom. There is even an R package.
Understanding productivity or expenses over time without falling victim to stochastic interruptions
A simple solution that does not require the acquisition of specialized knowledge is to use control charts. They're ridiculously easy to create and make it easy to tell special cause variation (such as
Understanding productivity or expenses over time without falling victim to stochastic interruptions A simple solution that does not require the acquisition of specialized knowledge is to use control charts. They're ridiculously easy to create and make it easy to tell special cause variation (such as when you are out of town) from common cause variation (such as when you have an actual low-productivity month), which seems to be the kind of information you want. They also preserve the data. Since you say you'll use the charts for many different purposes, I advise against performing any transformations in the data. Here is a gentle introduction. If you decide that you like control charts, you may want to dive deeper into the subject. The benefits to your business will be huge. Control charts are reputed to have been a major contributor to the post-war Japanese economic boom. There is even an R package.
Understanding productivity or expenses over time without falling victim to stochastic interruptions A simple solution that does not require the acquisition of specialized knowledge is to use control charts. They're ridiculously easy to create and make it easy to tell special cause variation (such as
32,899
Understanding productivity or expenses over time without falling victim to stochastic interruptions
I have heard of 'time-based boxcar' functions which might solve your problem. A time-based boxcar sum of 'window size' $\Delta t$ is defined at time $t$ to be the sum of all values between $t - \Delta t$ and $t$. This will be subject to discontinuities which you may or may not want. If you want older values to be downweighted, you can employ a simple or exponential moving average within your time based window. edit: I interpret the question as follows: suppose some events occur at times $t_i$ with magnitudes $x_i$. (for example, $x_i$ might be the amount of a bill paid.) Find some function $f(t)$ which estimates the sum of the magnitudes of the $x_i$ for times "near" $t$. For one of the examples posed by the OP, $f(t)$ would represent "how much one was paying for electricity" around time $t$. Similar to this problem is that of estimating the "average" value around time $t$. For example: regression, interpolation (not usually applied to noisy data), and filtering. You could spend a lifetime studying just one of these three problems. A seemingly unrelated problem, statistical in nature, is Density Estimation. Here the goal is, given observations of magnitudes $y_i$ generated by some process, to estimate, roughly, the probability of that process generating an event of magnitude $y$. One approach to density estimation is via a kernel function. My suggestion is to abuse the kernel approach for this problem. Let $w(t)$ be a function such that $w(t) \ge 0$ for all $t$, $w(0) = 1$ (ordinary kernels do not all share this property), and $w'(t) \le 0$. Let $h$ be the bandwidth, which controls how much influence each data point has. Given data $t_i, x_i$, define the sum estimate by $$f(t) = \sum_{i=1}^n x_i w(|t - t_i|/h).$$ Some possible values of the function $w(t)$ are as follows: a uniform (or 'boxcar') kernel: $w(t) = 1$ for $t \le 1$ and $0$ otherwise; a triangular kernel: $w(t) = \max{(0,1-t)}$; a quadratic kernel: $w(t) = \max{(0,1-t^2)}$; a tricube kernel: $w(t) = \max{(0,(1-t^2)^3)}$; a Gaussian kernel: $w(t) = \exp{(-t^2 / 2)}$; I call these kernels, but they are off by a constant factor here and there; see also a comprehensive list of kernels. Some example code in Matlab: %%kernels ker0 = @(t)(max(0,ceil(1-t))); %uniform ker1 = @(t)(max(0,1-t)); %triangular ker2 = @(t)(max(0,1-t.^2)); %quadratic ker3 = @(t)(max(0,(1-t.^2).^3)); %tricube ker4 = @(t)(exp(-0.5 * t.^2)); %Gaussian %%compute f(t) given x_i,t_i,kernel,h ff = @(x_i,t_i,t,kerf,h)(sum(x_i .* kerf(abs(t - t_i) / h))); %%some sample data: irregular electric bills sdata = [ datenum(2009,12,30),141.73;... datenum(2010,01,25),100.45;... datenum(2010,02,23),98.34;... datenum(2010,03,30),83.92;... datenum(2010,05,01),56.21;... %late this month; datenum(2010,05,22),47.33;... datenum(2010,06,14),62.84;... datenum(2010,07,30),83.34;... datenum(2010,09,10),93.34;... %really late this month datenum(2010,09,22),78.34;... datenum(2010,10,22),93.25;... datenum(2010,11,14),83.39;... %early this month; datenum(2010,12,30),133.82]; %%some irregular observation times at which to sample the filtered version; t_obs = sort(datenum(2009,12,01) + 400 * rand(1,400)); t_i = sdata(:,1);x_i = sdata(:,2); %%compute f(t) for each of the kernel functions; h = 60; %bandwidth of 60 days; fx0 = arrayfun(@(t)(ff(x_i,t_i,t,ker0,h)),t_obs); fx1 = arrayfun(@(t)(ff(x_i,t_i,t,ker1,h)),t_obs); fx2 = arrayfun(@(t)(ff(x_i,t_i,t,ker2,h)),t_obs); fx3 = arrayfun(@(t)(ff(x_i,t_i,t,ker3,h)),t_obs); fx4 = arrayfun(@(t)(ff(x_i,t_i,t,ker4,0.5*h)),t_obs); %!!use smaller bandwidth %%plot them; lhand = plot(t_i,x_i,'--rs',t_obs,fx0,'m-+',t_obs,fx1,'b-+',t_obs,fx2,'k-+',... t_obs,fx3,'g-+',t_obs,fx4,'c-+'); set(lhand(1),'MarkerSize',12); set(lhand(2:end),'MarkerSize',4); datetick(gca()); legend(lhand,{'data','uniform','triangular','quadratic','tricube','gaussian'}); The plot shows the use of a few kernels on some sample electric bill data. Note that the uniform kernel is subject to the 'stochastic shocks' which the OP is trying to avoid. The tricube and Gaussian kernels give much smoother approximations. If this approach is acceptable, one only has to choose the kernel and the bandwidth (in general that is a hard problem, but given some domain knowledge, and some code-test-recode loops, it should not be too difficult.)
Understanding productivity or expenses over time without falling victim to stochastic interruptions
I have heard of 'time-based boxcar' functions which might solve your problem. A time-based boxcar sum of 'window size' $\Delta t$ is defined at time $t$ to be the sum of all values between $t - \Delta
Understanding productivity or expenses over time without falling victim to stochastic interruptions I have heard of 'time-based boxcar' functions which might solve your problem. A time-based boxcar sum of 'window size' $\Delta t$ is defined at time $t$ to be the sum of all values between $t - \Delta t$ and $t$. This will be subject to discontinuities which you may or may not want. If you want older values to be downweighted, you can employ a simple or exponential moving average within your time based window. edit: I interpret the question as follows: suppose some events occur at times $t_i$ with magnitudes $x_i$. (for example, $x_i$ might be the amount of a bill paid.) Find some function $f(t)$ which estimates the sum of the magnitudes of the $x_i$ for times "near" $t$. For one of the examples posed by the OP, $f(t)$ would represent "how much one was paying for electricity" around time $t$. Similar to this problem is that of estimating the "average" value around time $t$. For example: regression, interpolation (not usually applied to noisy data), and filtering. You could spend a lifetime studying just one of these three problems. A seemingly unrelated problem, statistical in nature, is Density Estimation. Here the goal is, given observations of magnitudes $y_i$ generated by some process, to estimate, roughly, the probability of that process generating an event of magnitude $y$. One approach to density estimation is via a kernel function. My suggestion is to abuse the kernel approach for this problem. Let $w(t)$ be a function such that $w(t) \ge 0$ for all $t$, $w(0) = 1$ (ordinary kernels do not all share this property), and $w'(t) \le 0$. Let $h$ be the bandwidth, which controls how much influence each data point has. Given data $t_i, x_i$, define the sum estimate by $$f(t) = \sum_{i=1}^n x_i w(|t - t_i|/h).$$ Some possible values of the function $w(t)$ are as follows: a uniform (or 'boxcar') kernel: $w(t) = 1$ for $t \le 1$ and $0$ otherwise; a triangular kernel: $w(t) = \max{(0,1-t)}$; a quadratic kernel: $w(t) = \max{(0,1-t^2)}$; a tricube kernel: $w(t) = \max{(0,(1-t^2)^3)}$; a Gaussian kernel: $w(t) = \exp{(-t^2 / 2)}$; I call these kernels, but they are off by a constant factor here and there; see also a comprehensive list of kernels. Some example code in Matlab: %%kernels ker0 = @(t)(max(0,ceil(1-t))); %uniform ker1 = @(t)(max(0,1-t)); %triangular ker2 = @(t)(max(0,1-t.^2)); %quadratic ker3 = @(t)(max(0,(1-t.^2).^3)); %tricube ker4 = @(t)(exp(-0.5 * t.^2)); %Gaussian %%compute f(t) given x_i,t_i,kernel,h ff = @(x_i,t_i,t,kerf,h)(sum(x_i .* kerf(abs(t - t_i) / h))); %%some sample data: irregular electric bills sdata = [ datenum(2009,12,30),141.73;... datenum(2010,01,25),100.45;... datenum(2010,02,23),98.34;... datenum(2010,03,30),83.92;... datenum(2010,05,01),56.21;... %late this month; datenum(2010,05,22),47.33;... datenum(2010,06,14),62.84;... datenum(2010,07,30),83.34;... datenum(2010,09,10),93.34;... %really late this month datenum(2010,09,22),78.34;... datenum(2010,10,22),93.25;... datenum(2010,11,14),83.39;... %early this month; datenum(2010,12,30),133.82]; %%some irregular observation times at which to sample the filtered version; t_obs = sort(datenum(2009,12,01) + 400 * rand(1,400)); t_i = sdata(:,1);x_i = sdata(:,2); %%compute f(t) for each of the kernel functions; h = 60; %bandwidth of 60 days; fx0 = arrayfun(@(t)(ff(x_i,t_i,t,ker0,h)),t_obs); fx1 = arrayfun(@(t)(ff(x_i,t_i,t,ker1,h)),t_obs); fx2 = arrayfun(@(t)(ff(x_i,t_i,t,ker2,h)),t_obs); fx3 = arrayfun(@(t)(ff(x_i,t_i,t,ker3,h)),t_obs); fx4 = arrayfun(@(t)(ff(x_i,t_i,t,ker4,0.5*h)),t_obs); %!!use smaller bandwidth %%plot them; lhand = plot(t_i,x_i,'--rs',t_obs,fx0,'m-+',t_obs,fx1,'b-+',t_obs,fx2,'k-+',... t_obs,fx3,'g-+',t_obs,fx4,'c-+'); set(lhand(1),'MarkerSize',12); set(lhand(2:end),'MarkerSize',4); datetick(gca()); legend(lhand,{'data','uniform','triangular','quadratic','tricube','gaussian'}); The plot shows the use of a few kernels on some sample electric bill data. Note that the uniform kernel is subject to the 'stochastic shocks' which the OP is trying to avoid. The tricube and Gaussian kernels give much smoother approximations. If this approach is acceptable, one only has to choose the kernel and the bandwidth (in general that is a hard problem, but given some domain knowledge, and some code-test-recode loops, it should not be too difficult.)
Understanding productivity or expenses over time without falling victim to stochastic interruptions I have heard of 'time-based boxcar' functions which might solve your problem. A time-based boxcar sum of 'window size' $\Delta t$ is defined at time $t$ to be the sum of all values between $t - \Delta
32,900
Understanding productivity or expenses over time without falling victim to stochastic interruptions
Buzzwords: interpolation, resampling, smoothing. Your problem is similar to one encountered frequently in demography: people might have census counts broken down into age intervals, for example, and such intervals are not always of constant width. You want to interpolate the distribution by age. What this shares with your problem, aside from the variable width (= variable time intervals), is that the data tend to be non-negative. In addition, many such datasets can have noise, but it has a particular form of negative correlation: a count that appears in one bin will not appear in neighboring bins, but might have been assigned to the wrong bin. For example, older people may tend to round their ages to the nearest five years. They are not overlooked but they might contribute to the wrong age group. By and large, though, the data are complete and reliable. In terms of this analogy we're talking about a full census; in your datasets you have actual electric bills, actual enrollments, and so on. So it's just a question of apportioning the data reasonably to a set of intervals useful for further analysis (such as equally spaced times for time series analysis): that's where interpolation and resampling are involved. There are many interpolation techniques. The commonest in demography were developed for simple calculation and are based on polynomial splines. Many share a trick worth knowing, regardless of how you plan to process your data: don't attempt to interpolate the raw data; instead, interpolate their cumulative sum. The latter will be monotonically increasing due to the non-negativity of the original values, and therefore will tend to be relatively smooth. This is why polynomial splines can work at all. Another advantage of this approach is that although the fit may deviate from the data points (slightly, one hopes), overall it correctly reproduces the totals, so that nothing is net lost or gained. Of course, after fitting the cumulative values (as a function of time or age), you take first differences to estimate totals within any bin you like. The simplest example of this approach is a linear spline: just connect successive points on the plot of cumulative $x$ vs. cumulative $t$ by line segments. Estimate the counts in any time interval $[t_0, t_1]$ by reading off the values $x_0$ and $x_1$ of the splined curve at $t_0$ and $t_1$ respectively and using $x_1 - x_0$. Better splines (cubic in some areas; quintic in many demographic apps) sometimes improve the estimates. This is equivalent to your intuition of weighting the data and gives it a nice graphical interpretation.
Understanding productivity or expenses over time without falling victim to stochastic interruptions
Buzzwords: interpolation, resampling, smoothing. Your problem is similar to one encountered frequently in demography: people might have census counts broken down into age intervals, for example, and s
Understanding productivity or expenses over time without falling victim to stochastic interruptions Buzzwords: interpolation, resampling, smoothing. Your problem is similar to one encountered frequently in demography: people might have census counts broken down into age intervals, for example, and such intervals are not always of constant width. You want to interpolate the distribution by age. What this shares with your problem, aside from the variable width (= variable time intervals), is that the data tend to be non-negative. In addition, many such datasets can have noise, but it has a particular form of negative correlation: a count that appears in one bin will not appear in neighboring bins, but might have been assigned to the wrong bin. For example, older people may tend to round their ages to the nearest five years. They are not overlooked but they might contribute to the wrong age group. By and large, though, the data are complete and reliable. In terms of this analogy we're talking about a full census; in your datasets you have actual electric bills, actual enrollments, and so on. So it's just a question of apportioning the data reasonably to a set of intervals useful for further analysis (such as equally spaced times for time series analysis): that's where interpolation and resampling are involved. There are many interpolation techniques. The commonest in demography were developed for simple calculation and are based on polynomial splines. Many share a trick worth knowing, regardless of how you plan to process your data: don't attempt to interpolate the raw data; instead, interpolate their cumulative sum. The latter will be monotonically increasing due to the non-negativity of the original values, and therefore will tend to be relatively smooth. This is why polynomial splines can work at all. Another advantage of this approach is that although the fit may deviate from the data points (slightly, one hopes), overall it correctly reproduces the totals, so that nothing is net lost or gained. Of course, after fitting the cumulative values (as a function of time or age), you take first differences to estimate totals within any bin you like. The simplest example of this approach is a linear spline: just connect successive points on the plot of cumulative $x$ vs. cumulative $t$ by line segments. Estimate the counts in any time interval $[t_0, t_1]$ by reading off the values $x_0$ and $x_1$ of the splined curve at $t_0$ and $t_1$ respectively and using $x_1 - x_0$. Better splines (cubic in some areas; quintic in many demographic apps) sometimes improve the estimates. This is equivalent to your intuition of weighting the data and gives it a nice graphical interpretation.
Understanding productivity or expenses over time without falling victim to stochastic interruptions Buzzwords: interpolation, resampling, smoothing. Your problem is similar to one encountered frequently in demography: people might have census counts broken down into age intervals, for example, and s